[00:33] <wgrant> Wow, congrats lifeless.
[00:33] <poolie> yes, indeed
[00:33] <poolie> hi wgrant
[00:34] <wgrant> Morning poolie.
[00:34] <poolie> i think robert's new role will have benefits for you in particular
[00:35] <wgrant> Oh?
[00:35] <poolie> being another step towards openness and ease of contribution
[00:35] <poolie> "think or hope" i suppose
[00:40] <poolie> leonardr: did you read jml's lep about api access to authentication tokens?
[00:41] <lifeless> thumper: thanks
[00:42] <lifeless> wgrant: thanks; poolie: thanks
[00:33] <poolie> i miss you lifeless!
[00:45] <lifeless> already? I'm not gone till Monday :)
[00:45] <poolie> well, i did briefly
[00:45] <lifeless> and even then, I think its appropriate to think of it as expanding - like my waistline - the bzr team is definitely able to call on me to discuss stuff :)
[00:45] <poolie> :)
[00:46] <lifeless> oh, or did you mean as an office-mate
[00:46] <poolie> yes, that's what i meant
[00:46] <lifeless> righto ! - 5 hours sleep last night, but boy, better sleep than I had been getting
[00:46] <lifeless> have seen allergy doctor this morning
[00:47] <lifeless> so; finally; no medical stuff to do for a month; no frenzy; a feeling of peace surrounds me :)
[00:48] <wgrant> lifeless: I'm sure we can invoke an architecture frenzy for you :P
[00:59] <lifeless> hmm, are we in testfix mode I wonder
[00:59] <lifeless>   restricted_families = archive_arch_set.getRestrictedfamilies(self)
[00:59] <lifeless> ForbiddenAttribute: ('getRestrictedfamilies', <lp.soyuz.model.archivearch.ArchiveArchSet object at 0x143bc190>)
[01:00] <lifeless> seems unrelated to oops infrastructure changes
[01:01] <lifeless> ah yes.
[01:51] <thumper> lifeless: a feeling of peace except for the frantic travelling coming?
[01:53] <lifeless> thumper: compared to the last three months, its quiet
[02:01] <ajmitch> lifeless: great, so I can ask you all my LP questions now? :)
[02:02] <lifeless> yes
[02:02] <lifeless> Not that that would seem to be any different
[02:03] <ajmitch> heh
[02:38] <wgrant> Can I coerce someone into ec2landing https://code.edge.launchpad.net/~wgrant/launchpad/faster-and-more-general-getBuildQueueSizes/+merge/28476 ?
[02:40] <lifeless> wgrant: does that fix henninge's patch ?
[02:41] <lifeless> rev 11093 seems to break everything, or perhaps its just my branch it breaks.
[02:41] <lifeless> if not, then no, I can't be coerced.
[02:41] <wgrant> You mean jelmer's?
[02:41] <lifeless> ah yeah
[02:41] <lifeless> the one reviewed by
[02:41] <wgrant> That is a bit confusing.
[02:42] <wgrant> I wonder why PQM doesn't use --author.
[02:42] <wgrant> lifeless: Which is the test that breaks?
[02:43] <lifeless> wgrant: pqm doesn't use --author because its not the author.
[02:43] <lifeless> log -n0 shows the people totally accurately.
[02:43] <lifeless> alias it on.
[02:43] <wgrant> lifeless: Doesn't help for commit notifications.
[02:43] <lifeless> wgrant: we should fix that, then.
[02:44] <wgrant> Is there a testfix in progress?
[02:44] <lifeless> bzr has the real data; this is a presentation issue IMO: fudging it by feeding PQM an approximation you want to see on the outside just leads to harder to work with data.
[02:45] <lifeless> wgrant: I'm not sure how to look that up yet.
[02:45] <lifeless> wgrant: and today is a bzr day :)
[02:45] <wgrant> lifeless: Well, if you tell me which test fails, I will fix it. It's just an s/f/F/.
[02:46] <lifeless> sure
[02:47] <lifeless> you have mail
[02:48] <wgrant> Thanks.
[02:48] <lifeless> de nada
[02:54] <wgrant> Huh.
[02:54] <wgrant> Did that get *CPed* without being EC2'd?
[02:54] <wgrant> Crazy.
[02:59] <lifeless> da wtf
[02:59] <poolie> lifeless: i don't know the state of the committed/released distinction
[03:00] <poolie> there are various contradictory bugs
[03:00] <poolie> i don't think it's useful
[03:00] <poolie> in that it makes noise while simplifying reality too much to be useful
[03:00] <poolie> but maybe that's just me
[03:00] <lifeless> I really don't like the noise when a release happens
[03:00] <lifeless> The 'it is available to the user to use now' aspect is good though.
[03:00] <lifeless> perhaps doing daily releases would address it ?
[03:01] <lifeless> (e.g. edge :))
[03:01] <poolie> that's one element of my dissatisfaction
[03:01] <poolie> 'fix released' does not aiui reliably mean you will see the fix, even on edge
[03:02] <poolie> doing daily releases would be good
[03:02] <poolie> it wouldn't reduce the amount of noise
[03:03] <lifeless> whats one of the bug numbers you saw
[03:03] <poolie> istm that a lean view of the process is: it's waiting, it's in progress, it's done
[03:03] <lifeless> I'm wondering if I'm missing a subscription somewhere
[03:03] <poolie> bug 592792
[03:03] <lifeless> yeah, I need to tweak my subscriptions
[03:03] <lifeless> \o/ more mail.
[03:04] <wgrant> poolie: You're saying that's not fixed on edge?
[03:04] <lifeless> poolie: perhaps there are two clients here - the feature requestor and the dev asking 'do I have more to do' ?
[03:04] <poolie> wgrant: that was a bug that sent me mail, i'm not saying it is or isn't fixed on edge
[03:05] <poolie> lifeless: there are
[03:05] <poolie> the feature requester wants to know "is it worth me testing whether it's really fixed"
[03:05] <poolie> or "bothering to try that again" etc
[03:05] <lifeless> if we try we can probably make both deeply unhappy
[03:05] <poolie> heh
[03:05] <lifeless> oops, thats the wrong way around :)
[03:06] <poolie> istm that a state change is not enough
[03:06] <poolie> without saying "this is live on edge now" or "this will be in bzr2.2b4"
[03:07] <lifeless> I need a volunteer at the epic, with a 2 minute 'lightning talk' screen
[03:09] <thumper> lifeless: in a certain light, isn't the point of --author to say who the author of the code is?
[03:09] <thumper> lifeless: a merge isn't really code
[03:09] <thumper> so pqm could be the committer, and set the author
[03:10] <thumper> which is what I think many people think like
[03:10] <lifeless> thumper: if you squint real real hard, but both Aaron and I have argued that this is working around a bug rather than fixing the root cause.
[03:10] <thumper> :)
[03:10] <lifeless> thumper: if we were sending in *patches* it would be trivially correct to use --author
[03:11]  * thumper cranks up the muzak
[03:11] <lifeless> code in an elevator?
[03:12] <thumper> matrix soundtrack
[03:12] <lifeless> nice
[03:57] <wgrant> thumper: So, how do I get this testfix merged?
[03:57]  * thumper looks up
[03:57] <thumper> wgrant: eh?
[03:57] <thumper> what's broken?
[03:57] <foxxtrot> Is there an easy way to run all the windmill tests in a given JS file? I know you can bin/test PATH/TO/PYTHON/FILE.py but that doesn't work for a JS file
[03:57] <wgrant> prod_lp, at least, but devel should be too.
[03:57] <lifeless> wgrant: oh right
[03:57] <lifeless> wgrant: whats the branch, I'll ec2land it
[03:58] <wgrant> lifeless: Is that going to work with [testfix]?
[03:58] <lifeless> I don't know the process yet :)
[03:58] <lifeless> thumper: how does one land a testfix fixing branch
[03:58] <thumper> pqm-submit
[03:58] <wgrant> https://code.edge.launchpad.net/~wgrant/launchpad/testfix-getRestrictedfamilies is the branch. Trivial -- there were just some references in db-devel that were merged in while the branch was landing.
[03:58] <lifeless> thumper: directly, not ec2land ?
[03:59] <thumper> for a test fix, normally yes, but...
[03:59] <thumper> it depends on the extent of the fix
[03:59] <thumper> since buildbot is broken anyway
[03:59] <thumper> adding an extra ec2 run doesn't add anything
[04:00] <wgrant> All the archive and PPA tests pass fine.
[04:00] <wgrant> Which is, I believe, all that failed in lifeless' run.
[04:00] <wgrant> Although I haven't worked out how to get a list from the gzipped thing yet.
[04:01] <lifeless> wgrant: gunzip -c | subunit-filter | subunit-ls
[04:01] <lifeless> wgrant: or
[04:01] <lifeless> gunzip -c | testr load; testr failing
[04:02] <wgrant> Fancy.
[04:02] <lifeless> nah
[04:02] <lifeless> its all about the onions
[04:02] <lifeless> if you want fancy
[04:02] <lifeless> gunzip -c | tribunal -
[04:02] <lifeless> now thats sexy
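A sketch of the pipelines lifeless lists above, using a stand-in gzipped file and `grep` in place of `subunit-filter`/`subunit-ls`/`testr` (which require the `python-subunit` and `testrepository` packages); the filename `results.subunit.gz` is hypothetical:

```shell
# Fake a gzipped subunit stream standing in for an ec2 results file.
printf 'test: foo\nfailure: foo\n' | gzip > /tmp/results.subunit.gz

# gunzip -c decompresses to stdout without removing the .gz file,
# so any consumer can be piped onto it. The real commands would be:
#   gunzip -c results.subunit.gz | subunit-filter | subunit-ls
#   gunzip -c results.subunit.gz | testr load; testr failing
gunzip -c /tmp/results.subunit.gz | grep '^failure:'
```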
[04:03] <thumper> I'd just like the failing tests back in the email body :(
[04:03] <lifeless> thumper: they are
[04:03] <wgrant> thumper: +1
[04:03] <lifeless> aren't they ?
[04:03] <thumper> no
[04:03] <lifeless> oh right, I mailed bac and mars when jml forwarded me mail
[04:03] <lifeless> its very shallow. Lets do it at the epic.
[04:04]  * thumper nods
[04:04] <lifeless> thumper: can you pqm-submit wgrants branch? the change is a three-liner, one char per line ;)
[04:04] <lifeless> thumper: (given you're all setup for it)
[04:04] <lifeless> I've got ec2land setup, not pqm-submit yet.
[04:04] <lifeless> Hmm, I should make hydrazine talk lp targets to
[04:05] <lifeless> too
[04:05] <thumper> I only have devel and db-devel set up
[04:05] <thumper> I should be able to do it without too much problem though
[04:05] <lifeless> it should land on devel fine.
[04:05] <lifeless> in devel if I do a missing on his branch I see just one commit
[04:05] <wgrant> I'm not sure if this will work on production-devel, since I can't see it.
[04:05] <thumper> it seems the error isn't on devel is it?
[04:05] <wgrant> But it should be fine.
[04:05] <wgrant> devel is broken.
[04:05] <lifeless> devel is naffed too
[04:05] <thumper> ah
[04:05] <wgrant> I'm not sure if there's a failure yet, but it is broken.
[04:06] <thumper> the devel failure was a weird twisted one
[04:06] <thumper> I'll land wgrants
[04:06] <wgrant> thumper: Thanks
[04:06] <wgrant> lifeless: How do I run tests with testrepository?
[04:07] <wgrant> quickstart doesn't say.
[04:08] <lifeless> testr run
[04:08] <wgrant> Ah, I presumed I'd have to pipe 'testr failing' in or something.
[04:09] <lifeless> wgrant: you can (testr load - in the quickstart)
[04:09] <lifeless> but you can also run tests directly if there is a .testr.conf
[04:09] <lifeless> please file a bug though
[04:09] <lifeless> that should be polished more
[04:09] <thumper> :(
[04:09] <thumper> bzr crashed
[04:10] <thumper> bzr pqm-submit -m "[testfix][r=thumper] getRestrictedfamilies camel case fixup." --public-location=lp:~wgrant/launchpad/testfix-getRestrictedfamilies --ignore-local --submit-branch=../devel
[04:10] <thumper> that is what I tried
[04:10] <thumper> and it crashed
[04:10] <wgrant> What did it complain about?
[04:10] <Muscovy> Does anyone know anything about: createlang: language installation failed: ERROR:  could not access file "$libdir/plpython": No such file or directory ? It happens in an updated default install of 10.04 server.
[04:10] <thumper> NoPQMSubmissionAddress
[04:11] <thumper> basically
[04:11] <Muscovy> It happens during "make schema".
[04:11] <wgrant> Muscovy: Make sure you have postgresql-plpython-8.3 or postgresql-plpython-8.4 installed, depending on which version of PostgreSQL you're using.
[04:11] <Muscovy> Thanks, I'll try that.
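wgrant's suggested fix, spelled out. The `$libdir/plpython` error means the server-side PL/Python module is missing; on Ubuntu the package name tracks the installed PostgreSQL major version, so pick the one that matches (a setup sketch, not tested against a 10.04 box):

```shell
# createlang failing with: could not access file "$libdir/plpython"
# => install the PL/Python language package for your PostgreSQL version.
sudo apt-get install postgresql-plpython-8.4   # or postgresql-plpython-8.3

# then retry the step that ran createlang:
make schema
```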
[04:12] <thumper> lifeless: any idea why my pqm-submit line doesn't work?
[04:13] <lifeless> thumper: does your ../devel have a submit_thingy setting ?
[04:13] <thumper> wgrant: I'll pull your branch into my testfix
[04:13] <thumper> lifeless: yes
[04:13] <lifeless> bzr info on it should say
[04:13] <lifeless> thumper:  then no
[04:13] <wgrant> thumper: Sounds reasonable.
[04:13] <lifeless> I'd like to delete pqm-submit soon
[04:13] <wgrant> But it needs to go to p-d too.
[04:13] <lifeless> wgrant: the magic of auto merge should do that, no ?
[04:13] <wgrant> lifeless: I really hope that production-devel isn't auto-merging anything.
[04:13] <lifeless> devel being the top of the input tree
[04:13] <lifeless> oh right
[04:14] <lifeless> I meant after it percolates to db-devel
[04:14] <thumper> lifeless: but not production devel
[04:14] <lifeless> wgrant: mmm, I disagree on that hope; but there are some necessary conditions to make it safe that aren't true today.
[04:15] <wgrant> lifeless: Well, OK, pre-MergeWorkflowDraft.
[04:16] <wgrant> Is that still happening, what with our new overlord?
[04:17] <thumper> wgrant: branch in the pipe
[04:17] <lifeless> dunno, you'll need to ask him
[04:17] <wgrant> lifeless: Is it still happening?
[04:17] <lifeless> wgrant: :P I was referring to jml, he of the hacking-like-an-evil-overlord talk
[04:18] <wgrant> Heh, indeed.
[04:18] <lifeless> wgrant: not that he can answer the question, just for fun.
[04:18] <lifeless> anyhow
[04:18] <lifeless> I think that reducing developer friction is important
[04:19] <lifeless> that process change should do that, and if there is an available hacker doing it, great.
[04:19] <lifeless> I would like to look a little deeper at how concept -> production all works
[04:20] <lifeless> possibly up the dial on risk mitigation and down on risk avoidance
[04:20] <lifeless> then push rate-of-change up
[04:21] <lifeless> I hope thats not uselessly vague. I haven't dug into the precise details of the process in some time - before buildbot - so Monday I'll be *very* busy indeed :)
[04:22] <wgrant> Heh.
[04:23] <lifeless> concretely, I think we court risk by doing big rollouts
[04:23] <lifeless> this is observation over years
[04:24] <lifeless> so we put a lot of effort into making sure nothing can go wrong in the rollout : but because its always a big change, something always does.
[04:24] <lifeless> FSVO always
[04:24] <lifeless> 30 man-days of changes, more or less.
[04:24] <lifeless> smaller rollouts would have less risk.
[04:25] <lifeless> and if the risk handling stuff is non-linear, then smaller rollouts may be better than just the ratio of sizes-included.
[04:25] <wgrant> Yep.
[04:25] <wgrant> But this requires the ability to handle a crisis effectively at any time.
[04:25] <lifeless> it may be easier to figure out what *can* go wrong looking at a smaller rollout, and so be more prepared for what-ifs.
[04:26] <wgrant> Which is certainly not the case now.
[04:26] <lifeless> wgrant: not quite true. It requires the ability to handle a crisis starting within some time window T of the rollout
[04:27] <lifeless> you could, for instance, have an automerge-wait-for-losa-to-hit-a-red-button situation
[04:27] <lifeless> and every losa at start of shift could assess the risk, and hit the button.
[04:27] <lifeless> if the relevant people are around-and-will-be-for-say-2-hours
[04:27] <wgrant> True.
[04:27]  * lifeless handwaves furiously
[04:28] <lifeless> if you're interested in helping shape something like that, say so
[04:28] <lifeless> I can chat about it with folk at the epic and see if we can give you a set of constraints and requirements
[04:31] <lifeless> I mean to say: this is something that anyone interested should be able to work on.
[04:32] <lifeless> and while I'm interested, I'm unlikely to have personal directed time on it immediately. But I'd love to see things become easier for everyone :)
[04:58] <thumper> wgrant: I'm applying your fix to the prod-devel branch too
[04:58] <wgrant> thumper: Thanks.
[05:07] <spm> fwiw, recognising it is handwaving, we don't look after just LP. we have 3 or 4 (depending on how you count it) major systems. so the implicit assumption that we can treat LP as special/above the others is ... hrm... unwise is harsher than I want, but you get the idea. So 'start of shift, make a choice' stuff would not be great I'm thinking
[05:11] <wgrant> spm: Your name says you only have two services, so any of the others are figments of your imagination. QED.
[05:13]  * spm ponders if suspending wgrants account in retribution would 1. be overkill 2. pointless as he'd hack his way back in anyway 3. seen to be an excessive response 4.. there is no 4.
[05:14] <wgrant> Heh.
[05:17] <spm> lifeless: fwiw#2, yesterdays rollout was incredibly smooth. there were internal comments that a crisis needed to be manufactured so tom'd believe we were actually working. Having said that. 1hr 15 in the actual rollout portion of really quite intense very procedural Do X, then Y; isn't a walk in the sunshine either. :-)
[05:18] <wgrant> Why does it take so long?
[05:18] <wgrant> DB upgrades?
[05:19] <spm> that's part of it, but no means all.
[05:20] <spm> a basic breakdown: 10-15 mins to breakout to R/O and be able to do the DB updates. This includes shutting down the services that currently can't be kept up. code*, soyuz* etc
[05:20] <spm> DB updates themselves, which can vary widely. 20-30 mins'd be the norm
[05:21] <spm> re-enable the DB to be live again; restart all the app servers; verify; restart all the shutdowns.
[05:22] <wgrant> Hm, OK.
[05:23] <spm> And also ignores about 60 minutes of moderately intense prep to get to that stage :-)
[05:24] <wgrant> Prebuilding the new trees on all machines and such?
[05:25] <spm> crontabs, nagios, irc topics, etc yup
[05:25] <spm> verifying the cowboys that are live are known cowboys and are included
[05:42] <lifeless> spm: I know you have massive widespread responsibilities
[05:42] <lifeless> spm: for start of shift, read 'appropriate time' : I find dwelling too much on the minutiae distracts from the concept.
[05:42] <spm> heh
[05:43] <lifeless> spm: I'd _love_ it if everything that makes a rollout slow (e.g. shutting down services to go ro) had bugs. And I knew the bug numbers.
[05:43] <lifeless> spm: do you know if that is the case?
[05:43] <spm> not sure tbh. some parts we have raised in the past. whether they became bugs, probably not.
[05:44] <lifeless> I'm a data monster, really ;)
[05:44] <spm> I hadn't noticed? This is very much news to me!
[05:44] <spm> damn. left my sarcasm meter on. it just exploded.
[05:44] <lifeless> :)
[05:46] <spm> I guess what I'm not saying above - ideally I (personally!) don't want us to be yay/nay a rollout except in *exceptional* circumstances. like edge. it is assumed it will work; if it doesn't we handle said failure gracefully, and that's the point we'd get involved - but the failure is not critical. Not a respond now event. More a respond soonish.
[05:48] <lifeless> that makes sense to me
[05:48] <lifeless> the nuance here, is that I was proposing you assess the surrounding support for dealing with stuff - controlling the timing, not the do/not do.
[05:48] <spm> or perhaps: it's your code, your system (tho recognising that is an artificial distinction with ownership there...); if you want X, we'll make it so, but pls retain 'ownership' of the responsibility. kinda thingy.
[05:48] <lifeless> because you are the response team. You know if you're insanely busy, or just flat out busy.
[05:48] <spm> ahh I see
[05:49] <lifeless> we together provide launchpad.net - its lp-devs + lp-managers, not lp-devs alone or lp-managers alone
[05:49] <lifeless> (which you know)
[05:49] <lifeless> :P
[05:49] <spm> I try to forget....
[05:51] <lifeless> back to the example - if there was a buildd change, you might want lamont available
[05:51] <lifeless> so you'd say 'delayed to $his tz'
[05:52] <lifeless> but most changes are relatively shallow and would just be 'doit'
[05:52] <spm> I'd, again very much personally!!!, be very keen if lp prod rollouts were much like edge is now. just a matter of course and all done purty much automagically. The key reason being a somewhat selfish one - is that makes our life much easier. if the rollout is so smooth that simplish scripting can do it; then problems are also equally trivial to solve
[05:52] <thumper> lifeless: hey, as your new role as TA, you can give out rc's for production-devel
[05:52] <thumper> lifeless: want to give me an RC for wgrant's fix?
[05:52] <lifeless> thumper: What are the implications here :) - its day -2 :P
[05:53] <thumper> not much in this case, I'm just dotting the i's
[05:53] <spm> actually - if there's a buildd change, I don't believe we have too many options but to wait for him. tbh, not sure of the fine detail there.
[05:53] <lifeless> spm: so you see the point :)
[05:53] <spm> oh yes.
[05:53] <thumper> lifeless: I could equally go release-critical=thumper, but I thought you might like your name there :)
[05:53] <lifeless> spm: FTR, I'm a huge fan of automation. I was surprised by Tom's apparent disinterest in the recent thread about detecting stale trees or whatever it was.
[05:54] <lifeless> thumper: use yours ;)
[05:54] <thumper> :)
[05:54] <thumper> ok
[05:54] <lifeless> thumper: When I'm doooog tired after that terrible hotel, iz not a good time to do new things ;)
[05:54] <spm> which thread was that, don't recall seeing it myself?
[05:54] <thumper> heh
[08:13] <adeuring> good morning
[08:46] <wgrant> Yay, buildbot loves me.
[08:46] <spm> nah, that was me forcing a build :-)
[08:47] <spm> thumper: wgrant: edge1: canonical.database.revision.InvalidDatabaseRevision: patch-2207-56-0.sql has been applied to the database but does not exist in this source code tree. You probably want to run 'make schema'. <== is that you guys causing that?
[08:48] <wgrant> spm: stable is out of date.
[08:48] <spm> oh bah.
[08:48] <wgrant> Since it broke soon after db-stable was merged.
[08:48] <wgrant> buildbot is happy now, though.
[08:48] <wgrant> So it should pull soon.
[08:49] <spm> we can hop. I've reverted the edge update; I guess we'll find out this timish tomorrow if edge is happy again
[08:49] <spm> hope too
[08:51] <wgrant> spm: Heh, it's just pulled now.
[11:16] <jtv> wgrant, do you know of any recipe builds that should work on either dogfood or staging so we can Q/A your change to launchpad-buildd?
[11:17] <wgrant> jtv: I don't know if staging works yet, but I may be able to get something working on dogfood.
[11:17] <wgrant> ... except that I forget how it interacts with codehosting.
[11:17] <wgrant> Does it use staging or production codehosting?
[11:18] <jtv> It just doesn't have any.  So we may have to fake database records or something.
[11:18] <jtv> Actually, if it's just for reading from a branch, it uses production codehosting.
[11:19] <wgrant> Right.
[11:19] <wgrant> Is buildd-manager running?
[11:19]  * jtv checks
[11:20] <wgrant> Oh good, I can't log in on DF.
[11:20] <jtv> how jolly
[11:20] <wgrant> (OOPS-1650DF10)
[11:21] <jtv> On the bright side, buildd-manager is indeed running
[11:26] <wgrant> bigjools: Any idea why DF won't let me log in?
[11:47] <bigjools> wgrant: checking
[11:47] <wgrant> bigjools: Thanks.
[11:48] <bigjools> "HTTP Response status from identity URL host is not 200. Got status 404"
[11:48] <bigjools> awesome
[11:48] <wgrant> No provider set up?
[11:48] <bigjools> some config must have changed somewhere
[11:48] <bigjools> no idea what that might be
[11:59] <wgrant> Can someone please land https://code.edge.launchpad.net/~wgrant/launchpad/faster-and-more-general-getBuildQueueSizes/+merge/28476 ?
[11:59] <deryck> Morning, all.
[12:01] <bigjools> wgrant: nice, I'll land it
[12:02] <wgrant> bigjools: Thanks.
[12:23] <bigjools> wgrant: try again
[12:25] <wgrant> bigjools: Failure continues.
[12:25] <bigjools> gnargh
[12:25] <wgrant> OOPS 20
[12:26] <bigjools> same problem
[12:26] <bigjools> the oops report doesn't do anything useful like print the url it's trying to use
[12:26] <bigjools> that would be too useful
[12:26] <wgrant> Heh.
[12:26] <bigjools> sigh
[12:27] <wgrant> What's the OpenID host it's configured to use?
[12:27] <bigjools> fuck nose
[12:27] <bigjools> the config is so convoluted it's hard to work out
[12:27] <bigjools>   Module canonical.launchpad.webapp.login, line 183, in render
[12:27] <bigjools>     allvhosts.configs[openid_vhost].rooturl)
[12:27] <bigjools> but rooturl doesn't exist
[12:29] <wgrant> bigjools: You need launchpad/openid_provider_vhost set.
[12:29] <bigjools> it is
[12:29] <wgrant> To one of your vhosts.
[12:29] <wgrant> And that vhost has its hostname set?
[12:29] <bigjools> yes
[12:30] <wgrant> Yay...
[12:30] <bigjools> 404 means it's talking to something at least
[12:30] <wgrant> Something, yes.
[12:30] <bigjools> but what....
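For context, a hedged sketch of the lazr.config sections wgrant is walking bigjools through: `launchpad/openid_provider_vhost` must name a `[vhost.*]` section whose `hostname` points at a working OpenID provider. Section and key names are as cited in the chat; the values are illustrative assumptions, not dogfood's actual config:

```ini
# Illustrative only -- the provider vhost key names a vhost section,
# and that section's hostname is what the 404 is coming back from.
[launchpad]
openid_provider_vhost: openid

[vhost.openid]
hostname: login.dogfood.launchpad.net
```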
[12:37] <bigjools> wgrant: please try again, I added some more info to the exception that goes in the oops
[12:37] <wgrant> bigjools: Done. 22.
[12:38] <bigjools> GAR
[12:39] <wgrant> Oh?
[12:39] <bigjools> I picked the wrong egg to hack
[12:39] <wgrant> Heh.
[12:41] <bigjools> try again...
[12:46] <bigjools> nm
[12:52] <wgrant> Any luck?
[12:52] <bigjools> no
[12:53] <bigjools> it's trying to access https://login.dogfood.launchpad.net/ which 404s
[12:53] <wgrant> Right.
[12:53] <wgrant> You could just tell it to use login.launchpad.net...
[12:53] <bigjools> I *could*
[12:54] <wgrant> Unless you want c-i-p fun...
[12:54] <bigjools> trying one last thing before I give up
[12:54] <bigjools> bah no fair, I have to travel on Sunday when the British GP is on
[12:54] <lifeless> wgrant: cip?
[12:55] <wgrant> lifeless: canonical-identity-provider.
[12:55] <wgrant> Crazy Django thing.
[12:55] <lifeless> yup
[12:55] <lifeless> I remembers
[12:56]  * bigjools shovels more coal into dogfood
[12:59] <lifeless> night
[12:59] <wgrant> Night.
[13:00] <bigjools> grar
[13:03] <bigjools> still no dice
[13:05] <wgrant> bigjools: What're you trying to do?
[13:06] <bigjools> point df at staging's openid
[13:07] <wgrant> Hm. It's not working? Firewalled, maybe?
[13:07] <bigjools> changing the config made zero difference
[13:07] <wgrant> There are three OpenID vhosts -- you changed the right one(s)?
[13:08] <wgrant> I've managed to get the wrong one before.
[13:08] <bigjools> there's only one in the DF config
[13:08] <bigjools> FINALY
[13:08] <bigjools> and finally too
[13:09] <wgrant> Yay, it works. Thanks.
[13:10] <bigjools> now, food
[13:10] <wgrant> Is buildd-manager alive?
[13:11] <wgrant> It apparently was before.
[13:11] <wgrant> But it's not dispatching a recipe build now.
[13:11] <wgrant> Ah, there it is.
[13:17] <wgrant> rubidium really is unbelievably slow.
[13:25] <wgrant> jtv: Around?
[13:26] <jtv> wgrant: Around.
[13:27] <wgrant> https://code.dogfood.launchpad.net/~wgrant/+recipe/ivle-test/+build/145 just worked. The i386 build seems to be working fine too.
[13:27] <wgrant> jtv: ^^
[13:27] <jtv> wgrant: \o/
[13:27] <jtv> I'll tell lamont that we can roll out
[13:28] <wgrant> Great.
[20:46] <lifeless> morning
[20:53] <mtaylor> morning lifeless
[20:55] <lifeless> hi mtaylor
[20:56] <lifeless> 'sup ?
[20:57] <mtaylor> lifeless: hanging out in dallas. enjoying sitting by the pool hacking
[20:58] <lifeless> nice
[21:07] <lifeless> mtaylor: hacking on blueprints yet ?
[21:08] <mtaylor> lifeless: not yet
[21:09] <mtaylor> lifeless: currently, actually waiting on some launchpad things to get renamed ... but I'm guessing the losas are busy
[21:11] <lifeless> looks like it
[21:14] <mbarnett> yeah, sorry.  I am a bit behind
[21:14] <mbarnett> should be able to get to it this evening, or first thing in the morning central US time.
[21:18] <lifeless> gary_poster: got a second ?
[21:18] <gary_poster> lifeless, on calls till EoD but can reply for little questions
[21:18] <lifeless> sure
[21:18] <lifeless> its just a 'best way to do' question
[21:19] <lifeless> I have this lsprof patch to extract the code into a dedicated module using an event
[21:19] <lifeless> it needs either a tweaked, or a whole new, zope.app.publication
[21:19] <lifeless> the tweak is adding the event; the whole new would be landing the 2 upstream patches and upgrading to latest upstream
[21:20] <gary_poster> I see
[21:20] <lifeless> or I could add the event in the lp tree and reference it there
[21:21] <lifeless> whats the current, usual, practice when fixing something the long term way requires an upstream change? Do we submit the upstream change and then do /whatever/, or do we fork the egg temporarily ?
[21:23] <gary_poster> will answer within half hour or so (sorry)
[21:23] <gary_poster> (want to ask questions :-) )
[21:23] <lifeless> thats fine
[21:23] <lifeless> given the ec2land latency it won't make much diff to when it lands.
[21:24] <lifeless> In 40 minutes I'm heading out for 2 hours - pre-flight-chores
[21:24] <lifeless> but its not urgent regardless.
[23:38] <gary_poster> lifeless: sorry, calls kept going
[23:39] <gary_poster> lifeless: my preference order:
[23:39] <gary_poster> 1) submit upstream and use upstream (sometimes impractical)
[23:39] <gary_poster> 2) submit upstream and change locally in LP with one of our many existing subclasses
[23:42] <gary_poster> 3) submit upstream and use upstream older egg with change (e.g., we're using 3.4.1 and upstream is at 3.5.0: we can try releasing 3.4.2 in addition to getting patch in trunk
[23:42] <gary_poster> )
[23:43] <gary_poster> 10) (i.e., we really don't wanna do this but sometimes we have to): make our own branch and make our own egg
[23:43] <gary_poster> with (10) and possibly with (2) a LP bug is appropriate
[23:44] <gary_poster> Done, and going