[07:41] <dholbach> good morning
[12:51] <dholbach> davidcalle, mhall119 : can we discuss cutting out some of the pip-cache branch history later on?
[12:51] <davidcalle> dholbach: +1
[12:52] <dholbach> davidcalle, re structureboard et al: yes, very likely my mistake (it was a fresh checkout of trunk)
[12:52] <davidcalle> mhall119: since you masterfully debugged d.s.u.c routing issues, maybe you have an idea why static/* is not accessible (401)?
[12:56] <dholbach> does "make collectstatic.debug" say anything interesting or is that totally unrelated?
[13:05] <davidcalle> dholbach: it says I'm not running the devportal version I thought I was testing, because it's hitting an old bug I had already fixed, thanks! /me bumps the version and re-deploys :)
[13:14]  * dholbach crosses fingers
[13:14] <dholbach> I wonder if we shouldn't deploy all of the good fixes we have accumulated in branches already O:-)
[13:24] <davidcalle> dholbach: probably, yeah
[14:37] <mhall119> davidcalle: you can try running "make swift-perms" on devportal-app/0, but sometimes the access stuff needs to be fixed by webops
[14:39] <mhall119> davidcalle: also, it seems we have no data in our database, are you able to load the production datadump yourself, or do we have to file an RT for that?
[15:17] <mhall119> davidcalle: I've checked the swift perms from wendigo and they all seem fine, so once again this is something that needs webops
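(For context: the swift-perms check described above would presumably boil down to inspecting and, if needed, opening up the container's read ACL with the standard python-swiftclient CLI; the container name "static" here is a guess.)
        swift stat static                          # "Read ACL:" should allow public reads
        swift post -r '.r:*,.rlistings' static     # grant anonymous read + listing if it doesn't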
[15:20] <davidcalle> mhall119: no need for webops for the db dump. I'm reading #webops now, thanks for acting on it, I was in a meeting
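(Loading the production dump without webops would presumably look something like the following on the database unit; the database name and dump file are hypothetical, and the right command depends on whether it is a custom-format or plain-SQL dump.)
        pg_restore --clean --no-owner -d devportal production.dump   # custom-format dump
        psql devportal < production.sql                               # plain-SQL dump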
[16:14] <mhall119> davidcalle: our apache configs aren't having their variables replaced properly, which is a problem we had fixed in the past
[16:14] <mhall119>      <Location /static/>
[16:14] <mhall119>         ProxyPass balancer://swift-cache/v1/OS_SWIFT_AUTH/
[16:14] <mhall119>         ProxyPassReverse balancer://swift-cache/v1/OS_SWIFT_AUTH/
[16:14] <mhall119>         RequestHeader set Host OS_SWIFT_HOST
[16:14] <mhall119>         Header set Cache-Control "public,max-age=31536000"
[16:14] <mhall119>      </Location>
[16:14] <mhall119> OS_SWIFT_AUTH should have been replaced with something
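(For reference, once the templating works the variables should be substituted with real values, so the rendered block would presumably look something like this; the tenant path and host below are hypothetical placeholders.)
     <Location /static/>
        ProxyPass balancer://swift-cache/v1/AUTH_devportal/
        ProxyPassReverse balancer://swift-cache/v1/AUTH_devportal/
        RequestHeader set Host swift.internal
        Header set Cache-Control "public,max-age=31536000"
     </Location>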
[16:15] <mhall119> davidcalle: what directory's mojo spec did you use for the deployment?
[16:16] <davidcalle> mhall119: I've cleaned them I think, there is only a mojo-ue-devportal left, whose local changes are simply a bump of revnos in staging/collect
[16:17] <mhall119> davidcalle: are you on wendigo now?
[16:18] <davidcalle> mhall119: no, but I can be
[16:20] <mhall119> davidcalle: check revno 501, I've just committed the fix, can you try re-deploying so that it gets applied to the apache instances?
[16:21] <mhall119> somehow this got lost when I was merging in all of mthaddon's changes
[16:22] <mhall119> or it was never merged into upstream in the first place, which is also possible, because getting spec changes off of wendigo isn't straightforward
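(Before re-deploying, the fix in revno 501 could be inspected in the spec branch with plain bzr; the branch directory is hypothetical.)
        cd mojo-ue-devportal
        bzr log -v -r 501    # show the commit message and the files it touched
        bzr diff -c 501      # show the actual change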
[16:22] <davidcalle> mhall119: alright, scratching and deploying
[16:42] <davidcalle> mhall119: deployment still in progress, but the site is already up and FIXED :)
[16:42] <mhall119> \o/
[16:43] <mhall119> davidcalle: I'll send an MP to the upstream branch with that fix, can you work on testing the database migration today?
[16:43]  * davidcalle quickly creates a homepage before nagios runs
[16:48] <davidcalle> mhall119: I have time to put the db in place, but I'll probably need to run just after
[16:49] <davidcalle> Oh wait, the deployment isn't even finished, so no
[16:51] <davidcalle> mhall119: looks like the assets upload timed out, not uncommon, but I need to restart it (not from scratch)
[16:55] <mhall119> davidcalle: collectstatic? It will keep running after juju times out, just let it go
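(To confirm collectstatic really is still running on the unit after juju gives up waiting, something like this would presumably do; the unit name comes from earlier in the log, and the pgrep invocation is an assumption.)
        juju ssh devportal-app/0 'pgrep -af collectstatic'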
[17:18] <davidcalle> mhall119: I've put the dump in place and started a deployment over it. Deployment output is in m-ue-d/ue/m-ue-d/nohup.log if you feel like having a look. I'll have one tonight and see if our migration steps need to be fixed.
[17:18]  * davidcalle runs
[17:20] <mhall119> davidcalle: thanks
[17:28] <mhall119> davidcalle: it looks like there were database errors on the deployment, but I suspect that's due to it having an older database with the newer code
[17:29] <mhall119> everything is failing with a 500 error now too because of that
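(If the 500s really are schema drift from loading an older dump under newer code, the usual Django remedy would presumably be to run the outstanding migrations on the app unit; the manage.py path is hypothetical, and whether this is stock Django migrations or South depends on the project's vintage.)
        python manage.py migrate --noinput    # bring the old dump's schema up to the new code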