[00:57] <wgrant> lifeless: How old is staging's DB?
[01:00] <wgrant> (I need to get a rough idea of the publications demolished by bug #653382)
[01:00] <_mup_> Bug #653382: BinaryPackagePublishingHistory._getOtherPublications fails to restrict the distroseries context <Soyuz:In Progress by wgrant> <https://launchpad.net/bugs/653382>
[01:09] <lifeless> wgrant: I'm not sure
[01:11] <wgrant> It appears to be from the 16th.
[01:13] <wgrant> lifeless: Could you please run http://paste.ubuntu.com/505395/?
[01:13] <wgrant> (hm, it has a shiny new theme)
[01:23] <lifeless> wgrant: terrible query
[01:23] <lifeless> wgrant: not exists is probably better than count(*) == 0;
[01:24] <wgrant> lifeless: Fair point.
[01:25] <lifeless> also its probably flattenable into a single query rather than correlated subquery
[01:25] <lifeless> which will tend to be much faster
[01:25] <wgrant> How could I flatten that?
[01:26] <lifeless> using group by?
[01:26] <wgrant> Oh, true.
[01:27] <lifeless> or possibly
[01:27] <lifeless> just use a left outer where bpph.id is NULL
[01:27] <lifeless> but I'd have to actually think to suggest more
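[Editor's note: a minimal sketch of the three query shapes being discussed. The tables here are hypothetical stand-ins for the real BinaryPackagePublishingHistory schema, and sqlite3 stands in for PostgreSQL; only the relative structure of the queries is the point, not Launchpad's actual data model.]

```python
import sqlite3

# Hypothetical two-table schema: rows in superseded may be pointed at
# by rows in supersedes via supersedes.superseded_id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE superseded (id INTEGER PRIMARY KEY);
    CREATE TABLE supersedes (id INTEGER PRIMARY KEY,
                             superseded_id INTEGER REFERENCES superseded(id));
    INSERT INTO superseded (id) VALUES (1), (2), (3);
    INSERT INTO supersedes (id, superseded_id) VALUES (10, 1), (11, 1);
""")

# 1. Correlated COUNT(*) = 0 subquery: the original, slow form.
count_form = """
    SELECT ed.id FROM superseded ed
    WHERE (SELECT COUNT(*) FROM supersedes es
           WHERE es.superseded_id = ed.id) = 0
"""

# 2. NOT EXISTS: the planner can stop at the first matching row
#    instead of counting them all.
not_exists_form = """
    SELECT ed.id FROM superseded ed
    WHERE NOT EXISTS (SELECT 1 FROM supersedes es
                      WHERE es.superseded_id = ed.id)
"""

# 3. Flattened anti-join: LEFT OUTER JOIN, keep only rows with no match.
anti_join_form = """
    SELECT ed.id FROM superseded ed
    LEFT JOIN supersedes es ON es.superseded_id = ed.id
    WHERE es.id IS NULL
"""

for sql in (count_form, not_exists_form, anti_join_form):
    rows = sorted(r[0] for r in conn.execute(sql))
    print(rows)  # rows 2 and 3 have nothing superseding them: [2, 3]
```

All three forms return the same set; the difference the chat is chasing is how much work the planner does to produce it.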
[01:32] <lifeless> wgrant: its going to be a while, the query is expressed poorly I think.
[01:33] <lifeless> wgrant: by a while, I mean its running and has been for 10 minutes
[01:36] <lifeless> I'll be back in a bit
[01:44] <nigelb> 25
[01:45] <wgrant> lifeless: Yeah, 3am SQL isn't optimal, as it turns out.
[01:47] <wgrant> http://paste.ubuntu.com/505410/ removes the subquery, and gets rid of one of the BPPH scans.
[01:47] <wgrant> And is generally a whole lot less hostile.
[01:50] <wgrant> But it will still probably take a while, given what it's doing.
[01:51] <wgrant> I could reduce the dataset to publications superseded since the bug was introduced, but I know there have been similar bugs in the past, so I'd like to see that there is no older broken data too.
[01:57] <wgrant> http://paste.ubuntu.com/505415/
[02:10] <lifeless>  123744
[02:10] <lifeless> (1 row)
[02:10] <lifeless> Time: 768829.442 ms
[02:10] <lifeless> thats using 'not exists (select binarypackagepublishinghistory.id ...
[02:10] <wgrant> Hm, that's a few.
[02:11] <lifeless> 505415 is running now
[02:11] <wgrant> Thanks.
[02:13] <wgrant> Er, 505415 doesn't have a COUNT, so it's going to give a crapload of output.
[02:16] <lifeless> when it finishes :P
[02:16] <lifeless> I'll hit END
[02:18] <lifeless> wgrant: the 505415 query is odd
[02:18] <wgrant> Howso?
[02:18] <lifeless> wgrant: if you want 'no other_bpph' then use join, not left join
[02:18] <lifeless> and drop the having COUNT
[02:19] <wgrant> Er. Won't a JOIN against something that doesn't exist result in... nothing?
[02:19] <lifeless> right, so those rows are skipped.
[02:19] <lifeless> think sets
[02:19] <lifeless> not procedure
[02:21] <lifeless> 3662 rows
[02:22] <wgrant> Hmm. I wonder what the other 120000 are.
[02:23] <lifeless> mwhudson: 'test' passed ec2, of course testfix then fcked us.
[02:23] <mwhudson> lifeless: \o/, at least a little bit
[02:23] <lifeless> yeah
[02:24] <lifeless> will bomb them in at EOD
[02:25] <wgrant> lifeless: How would you rewrite that query to not left join? I want the IDs, so joining against an empty set is not going to help me much.
[02:26] <lifeless> I'd start by writing out in english what I want
[02:26] <lifeless> I'm not clear what you want
[02:27] <lifeless> the two versions I've seen are doing different things. (thus the different results)
[02:28] <wgrant> I want to find superseded binary publications where the dominant build has no publications in that context.
[02:28] <wgrant> (context == (archive, distroarchseries, pocket))
[02:29] <wgrant> Er, (archive, distroseries, pocket) for this particular case.
[02:29] <lifeless> so 'superseded but nothing superseding it' ?
[02:29] <wgrant> Right.
[02:30] <wgrant> We record the build that supersedes each publication.
[02:30] <lifeless> so
[02:30] <lifeless> to first address the left join/join lemma
[02:30] <lifeless> if I have two tables
[02:31] <lifeless> SUPERSEDED SUPERSEDES
[02:31] <lifeless> (imaginary)
[01:31] <lifeless> with superseded.id == supersedes.superseded_id
[02:32] <lifeless> then ED JOIN ES group by ED.id having count(ES.id) == 0
[02:32] <lifeless> is equivalent to
[02:32] <lifeless> ED LEFT JOIN ES group by ed.id
[02:32] <lifeless> bah
[02:32] <lifeless> got the LEFT in the wrong example
[02:32] <lifeless> then ED LEFT JOIN ES group by ED.id having count(ES.id) == 0
[02:33] <lifeless> is equivalent to
[02:33] <lifeless> ED JOIN ES group by ed.id
[02:33] <wgrant> Won't ED JOIN ES return the opposite of what we want?
[02:33] <lifeless> oh right
[02:33] <wgrant> It's the same as HAVING COUNT(ES.id) > 0
[02:34] <lifeless> left join where ed.id is NULL
[02:34] <wgrant> I guess that might work.
[02:35] <lifeless> now, one question is whether ES is better correlated or not; if we can make it very fast to satisfy for a given ED
[02:35] <lifeless> wgrant: I meanT Left join where es.ID IS null
[02:36] <wgrant> Right, that makes more sense than JOIN.
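[Editor's note: a runnable sketch of the point just settled, using the imaginary ED/ES tables from the lemma above (sqlite3 standing in for PostgreSQL). An inner JOIN drops unmatched rows before GROUP BY, so HAVING COUNT(es.id) = 0 can never fire; the LEFT JOIN keeps them, and either the HAVING form or the WHERE es.id IS NULL form finds them.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ed (id INTEGER PRIMARY KEY);   -- SUPERSEDED (imaginary)
    CREATE TABLE es (id INTEGER PRIMARY KEY,    -- SUPERSEDES (imaginary)
                     superseded_id INTEGER);
    INSERT INTO ed (id) VALUES (1), (2);
    INSERT INTO es (id, superseded_id) VALUES (10, 1);
""")

# Inner JOIN: row 2 has no ES match, so it is gone before GROUP BY;
# this query is the opposite of what is wanted and returns nothing.
inner = conn.execute("""
    SELECT ed.id FROM ed JOIN es ON es.superseded_id = ed.id
    GROUP BY ed.id HAVING COUNT(es.id) = 0
""").fetchall()

# LEFT JOIN keeps row 2 with NULL es columns, so COUNT(es.id) is 0 for it.
left_having = conn.execute("""
    SELECT ed.id FROM ed LEFT JOIN es ON es.superseded_id = ed.id
    GROUP BY ed.id HAVING COUNT(es.id) = 0
""").fetchall()

# The simpler anti-join spelling lifeless lands on.
left_null = conn.execute("""
    SELECT ed.id FROM ed LEFT JOIN es ON es.superseded_id = ed.id
    WHERE es.id IS NULL
""").fetchall()

print(inner)        # []
print(left_having)  # [(2,)]
print(left_null)    # [(2,)]
```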
[02:37]  * rockstar slaps buildout
[02:37] <lifeless> now one issue here
[02:37] <lifeless> there is what, 100 times as many ED as ES
[02:37] <lifeless> so it might be better to start with dominant builds
[02:38] <lifeless> and for each dominant build return: content, publication, (a) superseded build
[02:39] <lifeless> which should look at 1% of the data we're considering today, no ?
[02:39] <lifeless> awol for a bit
[02:40] <lifeless> bbiab
[02:43] <wgrant> lifeless: Where did you pull the 100 figure from?
[02:44] <wgrant> I guess starting at BPB might be a little smaller, but we would still need to select every single superseded BPPH.
[02:44] <wgrant> ES's intersection with ED is massive.
[02:44] <lifeless> wgrant: well, gimme a query to get stats
[02:45] <lifeless> wgrant: we'll see what my ass looks like
[02:45] <wgrant> It's not clear what the stats should be.
[02:45] <wgrant> It's also not clear why we're trying to optimise a query that will hopefully only be run once :P
[02:56] <michaelh1> yao, could you put LP: 653316 on your list please?
[02:56]  * michaelh1 types on the wrong channel
[03:21] <rockstar> wallyworld__, how's things?
[03:21] <wallyworld__> rockstar: pretty quiet. everyone's away
[03:22] <wallyworld__> i'm having "fun" with YUI
[03:23] <rockstar> wallyworld__, me too!
[03:23] <wallyworld__> :-)
[03:23] <rockstar> wallyworld__, it's quite possible my fun will be breaking yours.  :)
[03:23] <rockstar> wallyworld__, are you working on the thing you and I talked about last week?
[03:24] <wallyworld__> yeah, got the endpoint all set up. just having fun figuring out the YUI object model so I can get the info I need from client side and send it to the server
[03:24] <wallyworld__> rockstar: i just got a call from my son who is sick at school. i have to duck out briefly to pick him up
[03:26] <rockstar> wallyworld__, okay.  I just wanted to make sure you had someone around with thumper out this week.
[04:28] <lifeless> wgrant: several reasons:
[04:28] <lifeless>  - if the data model doesn't support this sort of query, we're going to be doing it more than once, eventually.
[04:29] <lifeless>  - e.g. in the garbo. And 10 minute queries are flat bad [in the absence of dedicated data mining clusters]
[04:29] <lifeless>  - You're not going to want to fix up the prod data? We can't sensibly run a 10 minute query on lpprod
[04:34] <wgrant> True.
[04:38] <lifeless> hmm
[04:51] <wgrant> lifeless: How long did 505415 take on staging?
[05:21] <lifeless> 600941.564 ms
[05:23] <lifeless> statik: don't cringe at the ui, but - https://edge.launchpad.net/+feature-rules
[05:24] <lifeless> statik: thats our 'takes effect immediately' config system, up and live
[05:24] <lifeless> statik: we can also use it to drive A/B tests and similar experiments
[05:24]  * jtv peeks as well
[05:25] <wgrant> lifeless: Hmm. Not fantastic. But it'd probably be faster on a production slave, and we do already have some regular multi-minute queries (on the master, no less).
[05:25] <lifeless> wgrant: please file bugs for all those queries that you know of.
[05:25] <lifeless> wgrant: any query over 5 seconds is a bug.
[05:25] <wgrant> (although I know how to optimise those queries down to a few seconds, I haven't got around to doing it yet)
[05:25] <wgrant> They're in the publisher.
[05:25] <lifeless> wgrant: any *write* query over 1 second is a bug.
[05:25] <lifeless> wgrant: doesn't matter.
[05:26] <lifeless> wgrant: I should say, any write transaction, from the first update/insert/delete through to commit, taking more than 1 second, is a bug.
[05:26] <wgrant> Right.
[05:26] <statik> awesome
[05:26] <wgrant> The publisher is itself pretty much a bug. We all know that.
[05:26] <statik> hey jtv! how about a phone call?
[05:26] <lifeless> wgrant: I like having bugs if *someone* knows there is a problem.
[05:26] <jtv> statik!  Not sleeping yet?
[05:26]  * jtv scrambles around for headsets
[05:27] <statik> jtv: nosir, i hung around specially for you
[05:27] <jtv> :)
[05:27]  * jtv takes leisurely sip of coffee
[05:28] <lifeless> wgrant: so, please file them?
[05:28] <lifeless> wgrant: if you get pushback, I'll go to bat over their existence.
[05:35] <wgrant> I suppose I might be able to use that same optimisation in the repair query.
[05:36] <wgrant> Mm, no, not really.
[05:36] <wgrant> It makes too many assumptions that are now broken.
[06:35] <lifeless> stub: https://lpbuildbot.canonical.com/builders/prod_lp/builds/120/steps/shell_7/logs/summary
[06:36] <lifeless> stub: thats a prod-lp run, the same passed ec2 earlier
[06:36] <lifeless> stub: I'm wondering if its a python/twisted version issue?
[06:38] <stub> lifeless: No such file lib/lp/services/scripts/tests/cronscripts.ini
[06:38] <stub> lifeless: That is a relative path, so a current working directory issue
[06:39] <lifeless> *blink*
[06:39] <stub> No idea how it got through ec2 though if that is really the case
[06:39] <lifeless> stub: I landed the CP using ec2land
[06:39] <lifeless> stub: its 50 or so revs that have all passed qc
[06:40] <lifeless> https://code.edge.launchpad.net/~lifeless/launchpad/cp/+merge/37345
[06:41] <stub> lifeless: maybe a isolation/test ordering issue with an earlier test changing the cwd?
[06:41] <lifeless> stub: could be; maybe BaseLayer should check that cwd hasn't been fucked with
[06:42] <lifeless> in bzrlib we ensure its not messed up in the base class, *and* we don't use chdir except in very controlled circumstances (because its not nice as a library to use it :P)
[06:42] <stub> It does already :-/
[06:42] <lifeless> ><
[06:43] <lifeless> mwhudson: wow, something chomped on the encoding in that mail
[06:43] <stub> Does that file still exist in the branch that failed?
[06:44] <lifeless> its production-devel
[06:44] <lifeless> checking
[06:45] <lifeless> stub: yes
[06:46] <lifeless> bzr ls lib/lp/services/scripts/tests
[06:46] <lifeless> lib/lp/services/scripts/tests/__init__.py
[06:46] <lifeless> lib/lp/services/scripts/tests/cronscripts.ini
[06:46] <lifeless> ...
[06:47] <stub> Can add the cwd to the error if that file isn't found to check my initial theory (although I still can't see how it would be possible, it is my best guess).
[06:47] <lifeless> it could be a dict order thing between python versions
[06:47] <lifeless> causing the layers to be grouped in a different order.
[06:48] <lifeless> stub: alternatively, where are we at in terms of getting to python 2.6 everywhere? [db server upgrades]
[06:49] <lifeless> forwarded you the ec2run results for the branch
[06:49] <stub> We have done hackberry, we have chokecherry, wildcherry, poha and plantain to go. As far as Python 2.6 everywhere, the important one is wildcherry
[06:49] <lifeless> whats the eta on wildcherry?
[06:49] <lifeless> I've never heard of poha and plantain :(
[06:50] <stub> I was planning to do it after chokecherry, but there is nothing stopping us doing it next now I think about it.
[06:50] <stub> poha and plantain are the SSO database servers.
[06:50] <lifeless> ah kk
[06:57] <lifeless> stub: rs=me if you wanted to land something to debug this, but I think upgraded bb to python2.6 for prod_lp is probably best, if we can get wildcherry done.
[06:57] <lifeless> stub: its past EOD for me, so I'll only be sporadically around now.
[07:04] <lifeless> (you'd need to send it straight to production-devel)
[07:34] <jtv> lifeless: you wouldn't happen to know where the branch scanner does its get_r[ow]_server() and start_server(), would you?
[07:34] <jtv> and hi, btw :)
[07:34]  * jtv now sees the eod note
[08:45] <noodles775> Hi jtv1, is there one particular view in translations that displays multiple selectable items for form processing which you'd recommend looking at? I want to compare with the way we do it for copy/delete packages in case there are better examples.
[08:45] <noodles775> Specifically, creating a vocabulary based on the current view's search terms...
[08:46] <danilos> noodles775, we've got some stuff like that, but none of that is, as you put it, "recommended to look at" :)
[08:46] <noodles775> heh, OK. Thanks danilos.
[08:47] <danilos> noodles775, looking at import queue stuff is of that type (lp.translations.model.translationimportqueueentry and on from there)
[08:47]  * noodles775 looks
[08:48] <adeuring> good morning
[08:48] <jtv1> noodles775: whatever you do, don't look at our languages list.  :)
[08:48] <jtv> hi adeuring
[08:49] <adeuring> hi jtv!
[08:50] <noodles775> danilos, jtv: OK, it looks like you guys similarly use a VocabularyFactory when its dependent on the context object, ah, but you pass the view into __init__, that's what I was looking for. Thanks!
[08:51] <noodles775> Morning adeuring, bigjools
[08:51] <adeuring> hi noodles775!
[08:51] <jtv> danilos: uh-oh, that sounds like he may imitate something we did.  Should we stop him?
[08:51] <bigjools> morning all
[08:52] <jtv> morning bigjools!
[08:52] <danilos> jtv, heh, in general yes, but let's run an experiment and see where it takes him :)
[08:52] <danilos> bigjools, morning
[08:52] <noodles775> jtv: heh, afaics, it's the most zope-like way to do it (using a vocabulary factory which is evaluated when the view is initiated etc.)
[08:52] <jtv> danilos: good point… this way he gets the blame or we get the credit
[08:52] <noodles775> lol
[08:52] <jtv> Take.
[08:53] <jtv> Did I say "get" the credit?
[08:54] <jtv> mwhudson: do you know if getting, starting, and stopping servers as returned by get_ro_server() is something we can do very frequently?
[09:00] <gmb> hurrah for code that fixes things for vague and (to me) confusing reasons.
[09:01] <lifeless> jtv: hi, 'sup ?
[09:01] <jtv> lifeless: hi, I _think_ I found the answer more or less the second you answered.
[09:01] <lifeless> hah, kk
[09:02] <jtv> I was trying to figure out whether a transport or server (I'm not sure of the terminology) for lp-internal:/// branch URLs was running when the branch scanner triggers a particular kind of work.
[09:02] <bigjools> lifeless: hi
[09:02] <lifeless> bigjools: evening!
[09:02] <bigjools> lifeless: did you fix prod_lp by any chance?
[09:03] <lifeless> the builder yes; the build decided to fail on some of stubs new stuff; looks like a test that is doing chdir or something
[09:03] <lifeless> we *think* upgrading the builder to py 2.6 will fix it (because the cp passed ec2)
[09:03] <bigjools> lifeless: I've been trying to get a CP done for a week now, this is crazy.  I am going to do a cowboy instead.
[09:03] <lifeless> bigjools: have you landed the revision ?
[09:03] <bigjools> yes
[09:03] <lifeless> then it should be deployable
[09:04] <bigjools> from prod-devel?
[09:04] <lifeless> we can deploy from any branch
[09:04] <bigjools> ok
[09:04] <lifeless> if the rev is specified
[09:04] <bigjools> lemme find it
[09:04] <lifeless> but, I thought your stuff hit prod-stable last week
[09:04] <lifeless> rev 9777 in prod-stable
[09:05] <lifeless> isn't that your thing?
[09:05] <bigjools> no
[09:05] <bigjools> 9779 in devel
[09:05] <lifeless> oh, 9778 and 9779
[09:06] <bigjools> yeah, 9779 had the oops isolation fix in it to see if that helped
[09:06] <bigjools> that fix is critical for the platform team
[09:06] <lifeless> got a link to a failure from last week?
[09:06] <lifeless> did you land your cp's via ec2? [If so that implies real seriously annoying fuckage in the hardy builder]
[09:07] <bigjools> lifeless: https://lpbuildbot.canonical.com/builders/prod_lp/builds/119/steps/shell_7/logs/summary
[09:07] <bigjools> no, withoyt
[09:07] <bigjools> without
[09:07] <lifeless> ahhh
[09:07] <bigjools> I don't use ec2
[09:07] <lifeless> I -hate- the openid glue on buildbot
[09:08] <mrevell> Morning
[09:09] <lifeless> that does look like the oops issue doesn't it
[09:09] <bigjools> aye
[09:09] <lifeless> bigjools: so my branch included your stuff and passed ec2
[09:09]  * bigjools brb
[09:09] <lifeless> bigjools: but i'd -really- like it if we consistently use ec2 when landing stuff on production-stable
[09:09] <lifeless> bigjools: anyhow
[09:10] <lifeless> bigjools: I'm +1 on a cowboy of your revisions (e.g. deploy from prod-devel rev 9779)
[09:10] <lifeless> bigjools: stub: is working on wildcherry now I think.
[09:10] <lifeless> I'll send mail then off again.
[09:10] <wgrant> bigjools: Feel like some DB surgery?
[09:10] <stub> eh?
[09:11] <lifeless> stub: well, 'you' meaning you're coordinating, no ?
[09:13] <bigjools> lifeless: I've never used ec2, and I'm not about to start :)
[09:14] <bigjools> wgrant: wassup?
[09:14] <lifeless> bigjools: that sounds unhelpful and negative
[09:14] <bigjools> lifeless: what?
[09:14] <lifeless> bigjools: but its after 9pm, so I'm going to ignore it and go unstrap more boxes
[09:14] <bigjools> that's a particularly unhelpful comment yourself
[09:14] <lifeless> blah it is
[09:14] <lifeless> sorry
[09:14] <wgrant> bigjools: See https://edge.launchpad.net/ubuntu/lucid/i386/python-imaging-doc -- ignoring the new Proposed publication, spot the issue.
[09:15] <lifeless> so, ec2test gives us important guarantees for bottleneck branches
[09:15] <bigjools> lifeless: I run the test suite locally, always have done.
[09:15] <lifeless> bigjools: so one of the things that ec2 does is run in a consistent clean state each time; do you arrange that locally as well?
[09:16] <bigjools> lifeless: yes
[09:16] <lifeless> bigjools: I'm fine with folk running the test suite locally, it just seems particularly hard to make local match 'what buildbot does'
[09:16] <lifeless> hell its hard to even make buildbot match buildbot at the moment.
[09:16] <bigjools> lifeless: if nobody ran it locally (there's 2 of us) then you'd not see local issues either
[09:17] <lifeless> bigjools: I think you should ignore my previous snarky comment; am very tired, been battling a sinus infection all week, terrible beds at the hotel in sydney, and *yawn*
[09:17] <lifeless> bigjools: -sorry-
[09:17] <bigjools> wgrant: groan
[09:17] <bigjools> lifeless: no worries, it's not good to work when tired eh? :)
[09:17] <wgrant> bigjools: Yes. My fault, due to unobvious Storm quirks plus shitty old tests which missed it. Bug #653382.
[09:18] <_mup_> Bug #653382: BinaryPackagePublishingHistory._getOtherPublications fails to restrict the distroseries context <Soyuz:In Progress by wgrant> <https://launchpad.net/bugs/653382>
[09:18] <wgrant> Easy enough to fix, fortunately.
[09:18] <lifeless> bigjools: I wasn't ;) I wandered past and saw a ping... then you pinged as well ;)
[09:18] <bigjools> ah yes I saw that
[09:18] <bigjools> lifeless: ok :)  thanks for the +1 on the cowboy anyway
[09:19] <bigjools> wgrant: how many packages are affected?
[09:20] <wgrant> bigjools: lifeless ran a query for me on staging (which has data from 3ish weeks ago), and there were around 3000 publications. I'm not sure if the query was perfect, nor how many of those were in the primary archive.
[09:21] <wgrant> For those in the primary archive we just need to set them back to Published, and unset datesuperseded and supersededby, since they'll all be in Release.
[09:21] <wgrant> And there should be just about no PPA publications affected, since the use cases there are different.
[09:22] <bigjools> this is .... not good
[09:22] <wgrant> ... yes.
[09:22] <bigjools> can you show me the query please?
[09:22] <wgrant> http://paste.ubuntu.com/505415/
[09:23] <wgrant> (the code rolled out on 2010-08-11 or so)
[09:23] <wgrant> It finds publications superseded by a build that's not published in the context.
[09:24] <bigjools> what about deletions?
[09:24] <bigjools> that'll pick up valid deletions
[09:24] <wgrant> It won't.
[09:24] <wgrant> Deletion doesn't set supersededby.
[09:24] <bigjools> aha
[09:25] <bigjools> umm has death row reaped these?
[09:25] <wgrant> When I discovered this on Saturday, I initially thought that mass expiration last week would have killed lots of binaries that were hit by this. But it turns out that it wouldn't have.
[09:25] <wgrant> No.
[09:26] <wgrant> Because the dominator doesn't run over frozen pockets.
[09:26] <wgrant> So they're Superseded, but have not been scheduled for removal.
[09:26] <bigjools> that's lucky :)
[09:26] <wgrant> If they were deathrow candidates, we would have noticed almost immediately that something was wrong.
[09:26] <bigjools> what about maverick though?
[09:27] <wgrant> Release pocket changes can only happen in maverick.
[09:27] <wgrant> So it doesn't matter that they leak into maverick through this bug, because they were performed in maverick anyway.
[09:28] <wgrant> So only frozen release pockets (and possibly some PPAs, but that's probably tiny) are affected.
[09:28] <bigjools> ah maverick only has a release pocket at the moment
[09:28] <wgrant> The query still respected pockets.
[09:28] <wgrant> Maverick has -proposed too.
[09:29] <bigjools> really? so why didn't the dominator kill maverick's packages then?
[09:29] <wgrant> It did!
[09:29] <wgrant> But they were meant to be killed.
[09:29] <bigjools> true
[09:30] <wgrant> All of the problematic dominations were meant to remove things from maverick, since that's the only place they can happen.
[09:31] <bigjools> let's get the code fix in before doing this then
[09:31] <wgrant> I think this is about the fourth time this code has broken :/
[09:31] <wgrant> Certainly, yes.
[09:31] <wgrant> Was just letting you know the details.
[09:32] <bigjools> wgrant: thanks
[09:32] <bigjools> the dominator code scares me :)
[09:32] <wgrant> It is a bit like that.
[09:38] <wgrant> This was only noticed when a lucid-proposed upload landed in NEW, which strongly suggests that nothing bad made it further than the DB.
[09:40] <jml> hello
[09:51] <bigjools> morning jml
[09:53] <jml> bigjools: good morning.
[09:58] <jml> What merriment and hijinx are in store for me today, I wonder.
[09:58]  * wgrant lures jml back to engineering.
[09:59] <jml> I suspect I've got more hiring than engineering to be doing.
[09:59] <wgrant> There seems to have been a bit of that lately.
[10:02] <jml> it never seems to stop
[10:14] <wgrant> bigjools: Thanks.
[10:15] <bigjools> np
[10:17] <bigjools> wgrant: I'm trying to figure out if we can make your query faster
[10:17] <wgrant> bigjools: So was I.
[10:18] <bigjools> maybe a subselect?
[10:18] <wgrant> It's possible we could use an optimisation like the one in Dominator.judgeAndDominate/
[10:18] <wgrant> What sort of subselect?
[10:18] <wgrant> My initial version had a NOT EXISTS subselect, and was even slower.
[10:18] <bigjools> urgh
[10:19] <bigjools> maybe stub has some time to help :)
[10:20] <wgrant> bigjools: Does the query look otherwise correct?
[10:20] <bigjools> wgrant: it's hard for me to tell, to be frank
[10:20] <wgrant> Heh, yes.
[10:20] <bigjools> the left join is confusing me
[10:20] <bigjools> and it doesn't help that my sql is shite
[10:21] <wgrant> The subselect was a bit more obvious.
[10:22] <wgrant> But I'm taking each publication and left joining any publications of the supersededby build in the original publication's context.
[10:22] <wgrant> Then finding those that have no publications.
[11:18] <jtv1> danilos: just to let you know, the bzr plugin does what we expect (even more so than other API stuff): uses production xmlrpc on production, edge xmlprc on edge, and staging xmlrpc on staging.
[11:19] <danilos> jtv1, ok, cool, thanks for checking
[11:23] <jml> bigjools: has demand for PPA builds dropped or is jelmer's improvement really as awesome as the graphs make it look?
[11:23] <bigjools> jml: the latter
[11:23] <wgrant> It could easily have increased throughput 300%.
[11:23] <bigjools> bear in mind that we used to spend up to around 20 minutes blocking on uploads per scan...
[11:24] <bigjools> it's the most awesome change on soyuz I've seen in 3 years
[11:24] <jml> wgrant: I've only really been looking at queue length. it's much shorter, which presumably means that people are getting what they want much faster.
[11:25] <jml> bigjools: yeah. it's amazing.
[11:25] <wgrant> It will be interesting to see how it copes in a couple of days when all the builders disappear.
[11:25] <jml> bigjools: also shows that sabdfl was probably right to refuse more hardware
[11:25] <bigjools> jml: once we get this change done that we're working on too, then it'll be more awesomererererer than anything
[11:25] <bigjools> jml: totally
[11:25] <bigjools> we always knew utilisation was crap
[11:26] <jml> although, as wgrant says, we'll find out if we need more *dedicated* hardware fairly soon.
[11:26] <bigjools> yip
[11:28] <wgrant> So the graphs all look healthy? My occasional /builders checks have had the queues pretty much constantly empty.
[11:28] <jml> wgrant: if bigjools doesn't mind, I'll post a public screenshot.
[11:28] <bigjools> jml: I was going to blog about it
[11:28] <bigjools> meant to do it Friday and forgot
[11:29] <jml> bigjools: I reckon it's definitely blog-worthy.
[11:29] <wgrant> I have seen a couple of reports of builds stuck in Uploading, though.
[11:29] <bigjools> jml: if you want to paste something for wgrant that's ok
[11:29] <bigjools> wgrant: yeah, known issue, it's when they fail to upload
[11:29] <wgrant> Ah, k.
[11:30] <wgrant> Apart from that, it seems to have gone incredibly smoothly.
[11:30] <bigjools> it had 2 months of dogfooding :)
[11:32] <jml> wgrant: http://people.canonical.com/~jml/Active-Builders.png
[11:33] <jml> wgrant: we changed the way the green bit was calculated on the 30th
[11:33] <wgrant> jml: It was originally the total?
[11:33] <jml> yeah.
[11:33] <bigjools> the graphing app is a little restrictive
[11:34] <wgrant> That certainly looks blog-worthy.
[11:34] <jml> bigjools: I spent a few minutes doing some research the other day. Looks like the only decent web-based graphing app is hosted in a google data center.
[11:35] <bigjools> heh
[11:36] <wgrant> Although the graph would be more impressive if the old data was corrected, so the reduction in utilisation was more obvious.
[11:37] <wgrant> Hmm. I suppose this also means that we'll be able to do rebuilds without destroying the world for two weeks?
[11:37] <bigjools> the important bit is the queue length
[11:37] <wgrant> It is.
[11:37] <bigjools> we want as much utilisation as possible
[11:37] <bigjools> rebuilds will certainly be better!
[11:37] <bigjools> yay for codebounce being down
[11:38] <jml> bigjools: that seems a non sequitur
[11:38]  * bigjools has secateurs and is not afraid to use them
[11:38] <jml> bigjools: if you wanted as much utilization as possible, then the important thing would be the red bit – not seeing any green.
[11:39] <bigjools> jml: yes - however, the current graph will never do that since the builders are not marked as building until the files are dispatched
[11:39] <bigjools> that'll change when we release the stuff we did
[11:40] <wgrant> How well does your new thing survive restarts?
[11:40] <jml> bigjools: right, but that means that the queue length is *not* the important part.
[11:41] <bigjools> jml: it was, but as we get better at utilisation, I agree
[11:42] <jml> bigjools: well, I happen to only care about increased utilization insofar as it reduces the wait time for our users.
[11:42] <wgrant> I can't say I trust tuolumne's data, though... how is the maximum number of active builders not integral?
[11:42] <wgrant> jml: But wait time is better estimated by queue size, as bigjools says.
[11:42] <bigjools> jml: yes - but remember the graph works off the data in the DB - so although we are making great utilisation of the builders now, the graph is incorrect.
[11:44] <jml> wgrant: yeah, I know. that's what I'm getting at: wait time is the important thing; queue length is a good proxy measure; statements about utilization do not follow from this
[11:45] <jml> e.g. we can easily increase utilization by reducing the number of builders
[11:47] <wgrant> jml: Oh, right. Misread, sorry.
[12:01] <deryck> Morning, all.
[12:04] <jml> brb. network issues.
[12:05] <stub> Anyone still on Lucid? I'm no longer able to build Launchpad on my Lucid desktop and am still sorting Maverick issues on my laptop.
[12:06] <wgrant> What does it whinge about?
[12:06] <stub>     ImportError: cannot import name SAFE_INSTRUCTIONS
[12:06] <wgrant> update-sourcecode
[12:06] <wgrant> Your bzr-builder is out of date.
[12:08] <stub> Ok. Not sure how I got that out of sync.
[12:08] <wgrant> What's the Maverick trouble? Not Launchpad-specific?
[12:09] <stub> No, other stuff. Now I've tracked down and reported the continual Firefox crashes I should be ok but haven't tried for a few days.
[12:16] <bigjools> http://blog.launchpad.net/cool-new-stuff/more-build-farm-improvements
[12:17] <bigjools> jml: the graph description is wrong :)
[12:17] <wgrant> bigjools: Is that a challenge to the community to make the build farm collapse under the load of more daily builds?
[12:18] <jml> bigjools: was it Shakespeare who once said "Then fix it, dear Henry"
[12:18] <bigjools> ummm :)
[12:34] <stub> So its trying to import from bzrlib.plugins.builder.recipe, and my egg doesn't have plugins/builder, and I've rebuilt it too.
[12:34] <wgrant> stub: Egg? You mean tree?
[12:35] <wgrant> Oh.
[12:35] <wgrant> There should be a sourcecode/bzr-builder symlink.
[12:35] <wgrant> Is there not?
[12:35] <bigjools> jml: did you get a chance to look at my b-m changes?
[12:35] <stub> wgrant: Its a directory. Is it a symlink in your tree?
[12:36] <wgrant> stub: It is a symlink.
[12:36] <stub> Ahh... probably old cruft stopping update-sourcecode from doing its thang
[12:37] <stub> Dunno how bzr-builder and bzr-hg got that way - it isn't *that* old.
[12:37] <jml> bigjools: no, I didn't. I'll take a look now after I finish reading through sinzui's privacy mails
[12:38] <bigjools> jml: ok cheers. I want to try and keep coding while I have momentum.
[12:38] <jml> also, I'd just like to note for the record that even though I am incredibly bad at doing push-ups, I am trying.
[12:38] <jml> off to grab a bite. back soon.
[12:39] <bigjools> good plan
[13:09] <wgrant> bigjools: Did you get anywhere with making my query suck less?
[13:11] <jml> bigjools: it's still builderslave-resume?
[13:12] <wgrant> I looked at the size of the diff and ran away.
[13:15] <jml> yeah. it's a big, scary branch.
[13:16] <wgrant> The size makes it scary, the changes make it scary, and the part of the system that it touches makes it scary.
[13:17] <jml> yes.
[13:17] <jml> Using saved parent location: bzr://stinkpad.local/builderslave-resume/
[13:17] <jml> that's not going to work
[13:19] <wgrant> jml: Ooh, we're getting a web designery person?
[13:19] <jml> wgrant: yes.
[13:20] <wgrant> Excellent news.
[13:20] <jml> visual design + CSS/JS/HTML
[13:20] <wgrant> Perfect.
[13:20] <jml> http://webapps.ubuntu.com/employment/canonical_LPSWD/
[13:20] <jml> still interviewing.
[13:45] <bigjools> wgrant: bi
[13:45] <bigjools> sigh
[13:45] <bigjools> wgrant: no
[13:45] <bigjools> jml: yes
[13:49] <mars> bigjools, ping, did your cherrypick from Friday get through prod_lp?
[13:49] <bigjools> mars: no
[13:50] <mars> bigjools, :(
[13:50] <bigjools> quite :/
[13:50] <bigjools> see lifeless's email to the list
[13:50] <mars> bigjools, anything I can do to help?  Is prod_lp even viable right now?
[13:52] <mars> yes, I saw that.  But I don't think there is anything for me to do then besides trying to pull the py2.6 code out myself
[13:52] <bigjools> mars: I've not really been following it closely, I've asked for a cowboy until it's fixed
[13:52] <mars> makes sense
[13:57] <mars> man, that failure in prod_lp is scary
[14:06]  * jml finally gets around to filing a bzr bug from the worcestershire sprint
[14:15] <bigjools> saucy
[14:17] <jml> hruh
[14:18] <jcsackett> danilos: ping.
[14:18] <danilos> jcsackett, hi
[14:19] <wgrant> bigjools: So, you're landing my domination branch?
[14:19] <bigjools> yes
[14:19] <jcsackett> danilos: just saw your comment on the bug 652256 (about configuring translations from front page via action menu).
[14:19] <_mup_> Bug #652256: Cannot configure translations from translations front page <bridging-the-gap> <Launchpad Translations:In Progress by jcsackett> <https://launchpad.net/bugs/652256>
[14:19] <wgrant> Great.
[14:19] <wgrant> I shall depart for the evening, then.
[14:19] <jml> stupid crappy buggy java crap.
[14:20] <bigjools> s/crappy buggy//
[14:20] <danilos> jcsackett, right, it seems we are working towards a similar thing from a different direction :)
[14:20] <danilos> jcsackett, granted, we've been a bit stuck on this
[14:20] <noodles775> Night wgrant
[14:20] <jml> actually, it's threads.
[14:20] <jcsackett> danilos: really, i think the focus of the bug i'm on is the configuration of the using launchpad enum; i'm happy to avoid treading on toes re: permissions.
[14:22] <jcsackett> danilos: when you refer to action menus, you mean the sidebar menus we see on some applications, like answers (e.g. the menu for gdp: https://answers.edge.launchpad.net/gdp)
[14:22] <danilos> jcsackett, right, I am not sure what the scope is, and it'd probably be best to discuss this in a call which includes Curtis as well
[14:22] <jcsackett> danilos: agreed; just getting my information in a line right now as i figure out my day. :-)
[14:22] <danilos> jcsackett, yeah for "action menus"
[14:22] <jcsackett> danilos: so the intent is to not use those? i was under the impression those were the current preferred way.
[14:23] <danilos> jcsackett, heh, I don't know, when we did the initial 3.0 design ~15 ago we said that we are going to avoid them, which is why those translation pages have none
[14:24] <danilos> jcsackett, we later agreed to have them "where we must", but we already had a sufficiently good solution for translations pages so we didn't have to use them
[14:24] <jcsackett> danilos: okay. curtis and i have a meeting in 5 (which he will hopefully be online for). i'll bring this up with him and we'll try and get back to you in 35 or so. that work?
[14:25] <danilos> jcsackett, sure, though I should be on another call at about that time
[14:25] <jcsackett> danilos: ok. what timezone are you (or when do you EOD) so i can make sure we get back to you in a timely fashion?
[14:25] <danilos> jcsackett, heh, I am UTC+2, and I'll probably stick around for another 1h30mins
[14:26] <jcsackett> danilos: dig. thanks!
[14:26] <danilos> somebody at the door, brb
[14:29] <jml> bigjools: you could separate out the change to logger. (it needs at least a comment and maybe a test and then it could land)
[14:30] <bigjools> jml: yeah
[14:35] <jml> bigjools: doing this to handleTimeout makes it a bit simpler: http://pastebin.ubuntu.com/505740/
[14:44] <deryck> so are we in testfix until the python 2.6 issues sinzui referred to in reply to a couple broken build emails are sorted?
[14:46] <deryck> hmmm, ok, so the lucid builders are green.
[14:46] <bigjools> jml: yes thanks - it was a bit of old factoring I'd not got around to yet, thanks for the reminder :)
[15:01] <jml> bigjools: so much deleted code :)
[15:01] <bigjools> jml: \o/
[15:01] <bigjools> jml: it's basically all the recording slave stuff
[15:01] <bigjools> the b-m is looking quite simple again
[15:05] <deryck> I need a team lead to sign off on my script to disable bug expiry option and notify users we'll be re-enabling this.
[15:06] <deryck> maybe bigjools or sinzui could help here? ^^
[15:06] <jml> bigjools: I've sent you an email with the remainder of my thoughts for now.
[15:22] <deryck> jml, could I get your sign off on my re-enable bug expiry notification script?  You looked once before for email clarity.
[15:22] <deryck> Now just need sign off for running on staging then lpnet.
[15:23] <jml> deryck: where can I find it?
[15:23] <deryck> jml, http://pastebin.ubuntu.com/505754/
[15:24] <deryck> jml, or lp:~deryck/+junk/lpjunk
[15:37] <jml> deryck: the script looks sound to me. I'm a bit nervous about the queries, since the last time I approved something like this, I was highly mistaken.
[15:39] <deryck> jml, nervous that they'll take forever?
[15:39] <jml> deryck: nervous that they'll spam everyone / do the wrong thing
[15:39] <jml> deryck: I guess the debug options are a good check on that.
[15:39] <deryck> jml, that's why I have the verbose/report options.  I'm going to check the data before sending email.
[15:40] <deryck> jml, so I can claim your approval for staging at least?  And ping back after I have the data in hand?
[15:40] <jml> deryck: deffo
[15:40] <deryck> cool, thanks!
[15:45] <jml> c.l.webapp.testing.verifyObject is ridiculous.
[16:15] <cr3> noodles775: hi there, did you write lp_dynamic_dom_updater.js?
[16:16] <noodles775> cr3: yeah, a long time ago.
[16:17] <noodles775> cr3: what are you wanting to do? (I'm not sure how useful that module is generally)
[16:17] <cr3> noodles775: question for you: under what circumstances will the actual_interval be halved? I don't quite see how this could ever be true on line 319: if (new_actual_interval >= config_interval) {
[16:17]  * noodles775 looks
[16:21] <cr3> noodles775: as far as I can see, that will only ever be true in the event the actual_interval is doubled
[16:21] <noodles775> cr3: If there are a few xhr requests that take a long time, line 308 will double the interval used between requests each time. ..
[16:21] <noodles775> :)
[16:21] <cr3> noodles775: so, perhaps my misunderstanding is that I was expecting the dynamic updater to also potentially decrease request interval
[16:22] <cr3> noodles775: however, I now see the interval will never be lower than the config interval, only potentially higher
[16:22] <noodles775> cr3: yes, it will, but looking at that code, it will never be l....
[16:22] <noodles775> Yes.
[16:22] <cr3> noodles775: is that the intended behavior?
[16:22] <noodles775> cr3: Yes - why would you want it to go below the interval which you configured?
[16:23] <cr3> noodles775: because more is better :) but I can appreciate the motivation, thanks for the explanation though
[16:25] <noodles775> cr3: np. Maybe you've got a different use-case. When I wrote that, it was after we'd had lots of people with the same page open hitting the server every 5 seconds (or whatever we had configured). We were only interested in reducing load on the server if it's not responding as expected, not increasing the frequency which we'd already decided was sufficient for page updates.
[16:25] <cr3> noodles775: yeah, sounds perfectly reasonable and I'll actually change my use case to that :)
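The backoff behaviour noodles775 describes above can be sketched roughly as follows. This is a hypothetical Python reconstruction of the logic discussed, not the actual lp_dynamic_dom_updater.js code; the class and attribute names are illustrative.

```python
class PollBackoff:
    """Polling interval that backs off when the server is slow.

    The interval doubles after a slow response and halves after a fast
    one, but is clamped so it never drops below the configured baseline.
    This matches the behaviour discussed above: load on the server is
    only ever reduced, never increased beyond what was configured.
    """

    def __init__(self, config_interval, slow_threshold):
        self.config_interval = config_interval
        self.slow_threshold = slow_threshold
        self.actual_interval = config_interval

    def record_response(self, elapsed):
        """Adjust the polling interval based on one response time."""
        if elapsed > self.slow_threshold:
            # Server is struggling: poll half as often.
            self.actual_interval *= 2
        else:
            # Server is responsive: creep back down, but only while the
            # halved value stays at or above the configured interval --
            # the check cr3 was asking about on line 319.
            new_actual_interval = self.actual_interval / 2
            if new_actual_interval >= self.config_interval:
                self.actual_interval = new_actual_interval
        return self.actual_interval
```

Note that, as cr3 observed, the `>=` check means halving only ever unwinds earlier doublings: starting from the configured interval, the first halving attempt produces a value below the baseline and is rejected.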
[16:37] <bigjools> jml: thanks, making some changes as you suggest, they all look good.
[18:16] <jml> g'night all.
[18:16] <jml> will be back a bit later to talk to the kiwis
[18:53] <lifeless> moin
[19:06] <lifeless> abentley: https://devpad.canonical.com/~lpqateam/qa_reports/deployment-stable.html hi; could you perhaps QA bug 613958
[19:06] <_mup_> Bug #613958: upload failure emails should include the upload log <qa-needstesting> <recipe> <Launchpad Bazaar Integration:Fix Committed by abentley> <https://launchpad.net/bugs/613958>
[19:06] <lifeless> ?
[19:07] <abentley> lifeless: maybe.  I keep hitting failure modes like staging restores and buildfarm weirdness.
[19:07] <lifeless> ugh
[19:08] <lifeless> abentley: whats needed to be confident in deploying it ?
[19:08] <lifeless> (given that qa-ok really means 'ok to release with it')
[19:09] <abentley> lifeless: I don't know.  I only think about qa-ok meaning that "behaves as intended".
[19:09] <lifeless> gary_poster: is https://code.edge.launchpad.net/~stub/launchpad/production/+merge/37482 on the way to pqm or does it need someone to do that? I'm happy to do the legwork...
[19:10] <abentley> lifeless: I don't think there's a test for deployability that I could do that would be much simpler than testing that it behaves as expected.
[19:10] <lifeless> kk
[19:10] <lifeless> anything I can do to help us check that?
[19:11] <abentley> lifeless: I don't think so.
[19:13] <lifeless> ok; I'd like to do more continuous-ish deployment drops this week, so if there is anything I can do to help please let me know.
[19:36] <gary_poster> lifeless: sorry, was talking to mars about status.  stub's branch was rejected when stub submitted it.  I don't know why it was rejected, so I just retried it, and PQM is just now getting to it.  (https://pqm.launchpad.net/).
[19:36] <gary_poster> mars and I think that it will not address at least one of the five errors, the one from ec2 test.  mars is going to get a 2.5 version running and will report back our status on those five failures as soon as he knows.
[19:36] <gary_poster> I'm going to switch to launchpad-ops for some questions to losas about this.
[19:39] <lifeless> kk
[19:39] <lifeless> fooding, brb
[20:00] <sinzui> Has anyone gotten spam from jscrambler? I think they harvested names. I cannot find any suspects in our two-week-old staging db
[20:12] <gary_poster> mars, stub's branch made it through pqm.  any news on test failures?
[20:12] <mars> pulling prod-devel onto the laptop now
[20:12] <gary_poster> ok
[20:33] <gary_poster> abentley: you pointed foundations at https://bugs.edge.launchpad.net/launchpad-code/+bug/644788 but I don't know what aspect of foundations is involved.  Does the librarian serve that file?  Or is the lack of an oops id the foundations problem you identified?  Depending on what is going on, that might or might not be unexpected.
[20:33] <_mup_> Bug #644788: error fetching large diff "There was a problem fetching the contents of this file." <Launchpad Bazaar Integration:Triaged> <Launchpad Foundations:New> <https://launchpad.net/bugs/644788>
[20:43] <abentley> gary-brb: can't it be both?
[20:59] <gary_poster> abentley: both what?
[21:00] <abentley> gary_poster: the librarian is failing to serve the file, and failing to give an oops.
[21:03] <lifeless> flacoste: hi; I have a call in my calendar now, with you.
[21:03] <gary_poster> abentley: ah, ok.  I'll update the bug with that then.  does that have anything to do with the librarian token stuff Robert did--I mean, might it be addressed by this?
[21:04] <gary_poster> lifeless, I didn't understand your concern about lucid_lp_production--do you agree that I can make an RT about this and ask flacoste to give it a high priority?
[21:04] <gary_poster> We may want to get rid of it, but we're not getting rid of it this instance
[21:04] <gary_poster> instant I mean
[21:04] <lifeless> gary_poster: of course, RT away.
[21:05] <gary_poster> cool thanks
[21:06] <lifeless> gary_poster: I 100% agree it needs fixing ;)
[21:07] <gary_poster> heh ok
[21:09] <flacoste> hi lifeless
[21:09] <lifeless> flacoste: hiya
[21:09] <flacoste> lifeless: luckily, it's also on my calendar! let's skype
[21:11] <mars> gary_poster, stub's branch did not fix the ec2 errors
[21:11] <gary_poster> mars, any of them?
[21:11] <mars> it did fix at least one of the others
[21:11] <mars> going through now
[21:11] <gary_poster> oh I see
[21:11] <gary_poster> ok
[21:15] <lifeless> flacoste: http://wiki.postgresql.org/wiki/Postgres-XC
[21:27] <lifeless> flacoste: https://edge.launchpad.net/+feature-rules
[21:29] <mars> gary_poster, I can reproduce the prod_lp ec2 test failure locally, am working on a fix now
[21:30] <gary_poster> mars, great.  Everything else is addressed?
[21:30] <mars> (locally, as in maverick + py2.6)
[21:30] <mars> yes, everything else is fixed by stub's branch
[21:30] <gary_poster> awesome.  Oh, you mean that ec2 test tests fail in maverick/py 2.6, mars?
[21:31] <mars> yep - weird
[21:31] <gary_poster> hm
[21:31] <gary_poster> very
[21:31] <mars> on devel: $ bin/test devscripts.ec2test.tests.test_remote -t test_email_body_is_format_result
[21:31] <gary_poster> OK, I'll make a reply to stub's reply.  Thank you.
[21:33] <abentley> gary_poster: We proxy the librarian because we need to control access in case it's a private merge proposal.
[21:33] <gary_poster> abentley, so lifeless' token work would address this, right?
[21:35] <jml> lifeless: ping
[21:35] <abentley> gary_poster: doesn't lifeless' token work address all cases where we proxy?
[21:35] <lifeless> jml: otp
[21:35] <jml> lifeless: ok. let me know when you're off & available for a call.
[21:36] <gary_poster> I believe so, yes, abentley, but I was hoping for confirmation from you that you thought that approach would work for the code team's use case.
[21:38] <abentley> gary_poster: We're using the standard approach for serving files at a URL.  I don't know enough details about lifeless token approach to know whether it would work for us.
[21:38] <gary_poster> fair enough.  thanks abentley
[21:40] <lifeless> gary_poster: the code teams use case involves parsing the librarian content in the appserver, for now.
[21:40] <lifeless> gary_poster: if its MP's
[21:41] <gary_poster> lifeless: oh. :-(
[21:41] <lifeless> I've raised this with tim as a possible thing to evolve in the future.
[21:42] <abentley> gary_poster: however, that isn't the use case shown in the bug.
[21:42] <gary_poster> lifeless: I'll record this in the bug comment then.  It sounds like Tim, you, and somebody on Foundations should figure out a way to make the librarian able to do everything over in the librarian
[21:42] <gary_poster> abentley: oh, sigh.  trying to do too many things at once
[21:43] <abentley> lifeless: the use case in the bug is just retrieving the raw bytes of the diff.
[21:44] <lifeless> abentley: ah, I see
[21:44] <abentley> lifeless: using this url: https://code.edge.launchpad.net/~jameinel/launchpad/lp-service/+merge/35877/+preview-diff/+files/preview.diff
[21:44] <lifeless> gary_poster: abentley: this is a dupe of adeurings bug I think
[21:44]  * gary_poster would be happy to hear that
[21:44] <lifeless> this particular case will be helped by the token stuff
[21:45] <gary_poster> cool
[22:04] <wallyworld> morning
[22:05] <wallyworld> rockstar, abentley - ready for standup when you are
[22:05] <rockstar> wallyworld, great.
[22:09] <rockstar> abentley, skype?
[22:12] <gary_poster> sinzui: would love your comment/thoughts on adding a check for bare excepts in our linter, https://bugs.edge.launchpad.net/launchpad-foundations/+bug/636325
[22:12] <lifeless> jml: ready in a few
[22:12] <_mup_> Bug #636325: please add a commit ratchet for bare excepts <Launchpad Foundations:Triaged> <https://launchpad.net/bugs/636325>
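The check gary_poster is asking about could plausibly be built on the stdlib `ast` module. A minimal sketch of that kind of detector, assuming nothing about the actual Launchpad linter's structure (the function name and API here are invented for illustration):

```python
import ast


def find_bare_excepts(source, filename="<string>"):
    """Return line numbers of bare ``except:`` clauses in source.

    A bare except handler is an ast.ExceptHandler node whose ``type``
    attribute is None, i.e. it names no exception class at all.
    """
    tree = ast.parse(source, filename)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]
```

A ratchet, as the bug title suggests, would then compare the count of such lines against a recorded baseline and fail the commit only if the count grows.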
[22:13] <jml> lifeless: ok.
[22:13] <gary_poster> running away, back soon.
[22:22] <mars> alright! we have a fix for prod_lp
[22:23] <wgrant> NonZeroExitCode: Test process died with exit code 2, but no tests failed.
[22:23] <wgrant> Thanks, ec2, I love you too.
[22:27] <benji> heh
[22:28] <jam> jml: since I see you here, when is the next lp db-devel rollout? (I have a patch based on db-devel accidentally, and I want to get it cleaned up, but don't want to pull in the next cycle's changes and have it delayed that much longer)
[22:29] <lifeless> jml: 13th or so
[22:29] <lifeless> jam: ^
[22:29] <jam> lifeless: thanks
[22:29] <jam> btw, *I* find your performance emails interesting to read
[22:29] <lifeless> jam: cool, thanks ;)
[22:34] <mars> lifeless, would you be willing to review this prod_lp fix?  I need to sign off soon, possibly before gary_poster returns: https://code.edge.launchpad.net/~mars/launchpad/fix-ec2test-on-production-devel/+merge/37526
[22:35] <lifeless> seems plausible
[22:38] <mars> lifeless, thanks
[22:41] <gary_poster> mars, do you need me to shepherd this through?
[22:41] <gary_poster> and thank you
[22:41] <mars> gary_poster, please - lp-land is busted on the system I'm using :(
[22:41] <gary_poster> mars, ack, will do.
[22:42] <mars> thank you.  gary_poster, I need to sign off now.  talk to you tomorrow
[22:42] <gary_poster> bye mars
[22:49] <wgrant> Can someone please re-ec2 lp:~wgrant/launchpad/bug-629921-packages-empty-filter? The last run died spuriously.
[22:56] <wgrant> StevenK: When you're around, can I convince you to look at some PPA publisher logs for me?
[23:13] <wgrant> StevenK: unping.
[23:24] <lifeless> gary_poster: hi
[23:24] <gary_poster> lifeless: hey, will be leaving in 60 secs
[23:24] <lifeless> grah :P
[23:24] <gary_poster> sorry :-/
[23:24] <lifeless> I wanted to talk about 'definition of critical' and its relation to bugs.
[23:25] <gary_poster> fair enough
[23:25] <gary_poster> tomorrow?
[23:25] <lifeless> sure
[23:25] <gary_poster> cool.  ttyl.
[23:25] <lifeless> no panic, I just suspect we have wiki pages to edit and fix up etc
[23:25] <gary_poster> cool
[23:42] <wgrant> lifeless: Could you ec2 https://code.edge.launchpad.net/~wgrant/launchpad/bug-629921-packages-empty-filter/+merge/37339?
[23:48] <lifeless> wgrant: looks like henninge is landing it ?
[23:49] <wgrant> lifeless: ec2 got angry and exploded.
[23:49] <wgrant> NonZeroExitCode: Test process died with exit code 2, but no tests failed.
[23:49] <wgrant> The other one went through fine.
[23:49] <lifeless> sending it
[23:49] <wgrant> Thanks.
[23:57] <poolie> hi lifeless, wgrant
[23:58] <wgrant> Morning poolie.
[23:59] <lifeless> hi poolie