=== JoseeAntonioR is now known as jose
=== tsimonq2alt is now known as tsimonq2-
=== tsimonq2- is now known as tsimonq2
[07:41] good morning
=== cjohnston_ is now known as cjohnston
[12:51] davidcalle, mhall119: can we discuss cutting out some of the pip-cache branch history later on?
[12:51] dholbach: +1
[12:52] davidcalle, re structureboard et al: yes, very likely my mistake (it was a fresh checkout of trunk)
[12:52] mhall119: since you masterfully debugged d.s.u.c routing issues, maybe you have an idea why static/* is not accessible (401)?
[12:56] does "make collectstatic.debug" say anything interesting, or is that totally unrelated?
[13:05] dholbach: it says that I'm not running the devportal version I think I'm testing, because it's hitting an old bug I've already fixed, thanks! /me bumps the version and re-deploys :)
[13:14] * dholbach crosses fingers
[13:14] I wonder if we shouldn't deploy all of the good fixes we have accumulated in branches already O:-)
[13:24] dholbach: probably, yeah
[14:37] davidcalle: you can try running "make swift-perms" on devportal-app/0, but sometimes the access stuff needs to be fixed by webops
[14:39] davidcalle: also, it seems we have no data in our database, are you able to load the production data dump yourself, or do we have to file an RT for that?
[15:17] davidcalle: I've checked the swift perms from wendigo and they all seem fine, so once again this is something that needs webops
[15:20] mhall119: no need for webops for the db dump. I'm reading #webops now, thanks for acting on it, I was in a meeting
=== JoseeAntonioR is now known as jose
[16:14] davidcalle: our apache configs aren't having their variables replaced properly, which is a problem we had fixed in the past
[16:14] ProxyPass balancer://swift-cache/v1/OS_SWIFT_AUTH/
[16:14] ProxyPassReverse balancer://swift-cache/v1/OS_SWIFT_AUTH/
[16:14] RequestHeader set Host OS_SWIFT_HOST
[16:14] Header set Cache-Control "public,max-age=31536000"
[16:14] OS_SWIFT_AUTH should have been replaced with something
[16:15] davidcalle: what directory's mojo spec did you use for the deployment?
[16:16] mhall119: I've cleaned them, I think; there is only a mojo-ue-devportal left, whose local changes are simply a bump of revnos in staging/collect
[16:17] davidcalle: are you on wendigo now?
[16:18] mhall119: no, but I can be
[16:20] davidcalle: check revno 501, I've just committed the fix, can you try re-deploying so that it gets applied to the apache instances?
[16:21] somehow this got lost when I was merging in all of mthaddon's changes
[16:22] or it was never merged into upstream in the first place, which is also possible, because getting spec changes off of wendigo isn't straightforward
[16:22] mhall119: alright, scratching and deploying
[16:42] mhall119: deployment still in progress, but the site is already up and FIXED :)
[16:42] \o/
[16:43] davidcalle: I'll send an MP to the upstream branch with that fix, can you work on testing the database migration today?
[16:43] * davidcalle quickly creates a homepage before nagios runs
[16:48] mhall119: I have time to put the db in place, but I'll probably need to run just after
[16:49] Oh wait, the deployment isn't even finished, so no
[16:51] mhall119: looks like the assets upload timed out, not uncommon, but I need to restart it (not from scratch)
[16:55] davidcalle: collectstatic? It will keep running after juju times out, just let it go
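The 16:14 to 16:22 exchange above turns on Apache template variables (OS_SWIFT_AUTH, OS_SWIFT_HOST) being left unsubstituted in the rendered config. As a minimal sketch of what that substitution amounts to, not the mojo spec's actual mechanism, here is a plain token replacement in Python; the token names come from the config snippet in the log, the values are made up:

    # Sketch only: illustrates the placeholder substitution the deployment
    # must perform on its Apache template. Not the actual mojo spec code.
    TEMPLATE = """\
    ProxyPass balancer://swift-cache/v1/OS_SWIFT_AUTH/
    ProxyPassReverse balancer://swift-cache/v1/OS_SWIFT_AUTH/
    RequestHeader set Host OS_SWIFT_HOST
    Header set Cache-Control "public,max-age=31536000"
    """

    def render(template: str, context: dict) -> str:
        """Replace each bare placeholder token with its deployment-time value."""
        for token, value in context.items():
            template = template.replace(token, value)
        return template

    print(render(TEMPLATE, {
        "OS_SWIFT_AUTH": "AUTH_1234abcd",   # hypothetical Swift account value
        "OS_SWIFT_HOST": "swift.internal",  # hypothetical Swift endpoint
    }))

When the substitution step is skipped, the literal tokens end up in the live config, which is exactly the "OS_SWIFT_AUTH should have been replaced with something" symptom reported above.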
[17:18] mhall119: I've put the dump in place and started a deployment over it. Deployment output is in m-ue-d/ue/m-ue-d/nohup.log if you feel like having a look. I'll have a look tonight and see if our migration steps need to be fixed.
[17:18] * davidcalle runs
[17:20] davidcalle: thanks
[17:28] davidcalle: it looks like there were database errors on the deployment, but I suspect that's due to it having an older database with the newer code
[17:29] everything is failing with a 500 error now too because of that
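The database errors and 500s at 17:28 and 17:29 fit a production dump that predates the deployed code's schema. As a minimal sketch of how such a mismatch can be surfaced, assuming the devportal is a standard Django project using the built-in migrations framework (this is illustrative, not the project's actual tooling):

    # Sketch: list migrations present in the code but not applied to the
    # database, using Django's public migration APIs.
    import django
    from django.db import connection
    from django.db.migrations.executor import MigrationExecutor

    def unapplied_migrations():
        """Return (app_label, name) pairs for migrations not yet applied."""
        django.setup()  # requires DJANGO_SETTINGS_MODULE to point at the project
        executor = MigrationExecutor(connection)
        targets = executor.loader.graph.leaf_nodes()
        # migration_plan() yields (Migration, backwards) pairs still to run
        return [(m.app_label, m.name) for m, _ in executor.migration_plan(targets)]

    if __name__ == "__main__":
        pending = unapplied_migrations()
        if pending:
            print("Schema is behind the code; pending migrations:", pending)

A non-empty result here would confirm the suspicion in the log: the restored dump needs the newer code's migrations run over it before the site can serve requests again.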