[00:02] <lifeless> there shouldn't be
[00:02] <lifeless> its a gpg config dir
[00:16] <poolie> gpg might make a symlink as a lock, but it looks like we're not actually running gpg in this test
[00:16] <poolie> i've been running it for a while and it has not failed
[00:32] <poolie> lifeless: how about https://code.launchpad.net/~mbp/launchpad/882324-mtime-test/+merge/80521 then
[00:36] <lifeless> commented. Three times.
[00:40] <poolie> you'd do it that way because it will automatically give a clear message?
[00:41] <poolie> are you saying that you want me to keep the 'floor()' call, or do you want to have another attempt? ;-)
[00:44] <lifeless> dropping the floor is fine
[00:44] <lifeless> yes, it will automatically give a clearer message, and it eliminates another use of a poor primitive (assertTrue)
[00:51] <poolie> great, thanks
[00:51] <poolie> good idea in fact
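[editor's note] The assertTrue-vs-assertEqual point above can be seen directly; this is a minimal sketch with invented timestamp values, not the actual merge-proposal code:

```python
import unittest


# A throwaway TestCase instance just to call the assertion methods on.
class _Demo(unittest.TestCase):
    def runTest(self):
        pass


case = _Demo()

try:
    # assertTrue collapses the comparison before the framework sees it,
    # so the failure message carries no operands.
    case.assertTrue(1319760000.0 == 1319760001.0)
except AssertionError as e:
    weak_msg = str(e)   # "False is not true"

try:
    # assertEqual receives both operands, so the failure message is
    # automatically self-explaining.
    case.assertEqual(1319760000.0, 1319760001.0)
except AssertionError as e:
    rich_msg = str(e)   # both values appear in the message
```
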
[00:53] <poolie> is there a link from the ppa to the recipe?
[00:53] <poolie> i guess it's only off the branch
[01:00] <StevenK> poolie: If an upload was created by a recipe, the package will link back to it
[01:01] <StevenK> poolie: For example, https://launchpad.net/~launchpad/+archive/ppa/+packages and then expand any of the launchpad-dependancies
[01:02] <poolie> ok thanks
[01:45] <lifeless> it would be nice to link back recipes that are autobuilding separately from successful builds
[01:48] <wgrant> lifeless: Huh?
[01:54] <lifeless> auto builds build into specific ppas
[01:54] <lifeless> if they haven't successfully built, there is no link from the ppa
[01:54] <lifeless> the package has to build for the link to show up
[01:54] <wgrant> Ah, daily builds?
[01:55] <lifeless> yes
[02:02] <lifeless> https://code.launchpad.net/~lifeless/python-oops-amqp/0.0.3/+merge/80524
[02:10] <StevenK> lifeless: If the *recipe* built, but the resulting packages didn't, the link should still be there.
[02:11] <lifeless> StevenK: indeed, I'm saying that waiting for that much success is still less clear than we could be
[02:13] <lifeless> StevenK: can has review, as you are here?
[02:13] <lifeless> or is it wgrant's turn as wictum
[02:13]  * lifeless consults the oracle
[02:13] <lifeless> thursday. Its me!
[02:13] <lifeless> -> bah
[02:14] <lifeless> StevenK: wgrant: ^
[02:14] <StevenK> Haha
[02:14]  * wgrant blames wallyworld_.
[02:14] <wallyworld_> huh?
[02:20] <wgrant> poolie: https://staging.launchpad.net/~wgrant/+archive/experimental/+recipebuild/87804
[02:21] <wgrant> bzr: ERROR: Cannot commit directly to a stacked branch in pre-2a formats. See https://bugs.launchpad.net/bzr/+bug/375013 for details.
[02:21] <_mup_> Bug #375013: Cannot commit directly to a stacked branch <commit> <launchpad> <lp-translations> <qa-ok> <rodeo2011> <stacking> <Bazaar:Fix Released by jameinel> <bzr-builder:Fix Released by jelmer> <Launchpad itself:Fix Released by jtv> < https://launchpad.net/bugs/375013 >
[02:21] <wgrant> Is that new?
[02:21] <wgrant> I haven't used this recipe in more than a year, but it used to work...
[02:22] <lifeless> looks like fallout from fixing commit to stacked branches in 2a; possibly due to a previously undocumented failure mode
[02:22] <lifeless> bah https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-2126AT10
[02:24] <wgrant> lifeless: BranchRevision timeout...
[02:32] <StevenK> wallyworld_: Can you update the kanban board? I think the cards in Review and Landing can move
[02:32]  * wallyworld_ looks
[02:34] <wallyworld_> StevenK: the one in landing, i'm fixing tests. makeBug, makeBugTask need a bit of work to make them compatible with the new disclosure rules
[02:34] <lifeless> wgrant: looks like it wasn't safe for 1.9 either according to comments inline
[02:40] <wgrant> :(
[02:41] <lifeless> e.g. that the only reason things worked on 1.9 was it being a no-change commit
[02:41] <lifeless> a change commit wouldn't be able to delta correctly in all cases in 1.9
[02:43] <lifeless> I need a review - any volunteerS?
[02:56] <lifeless> StevenK: can I beg a review off you?
[02:57] <StevenK> lifeless: I suppose.
[02:57] <lifeless> https://code.launchpad.net/~lifeless/python-oops-amqp/0.0.3/+merge/80524
[03:00] <StevenK> lifeless: r=me
[03:00] <lifeless> thanks
[03:08] <poolie> (back)
[03:09] <poolie> wgrant: pre-2a formats are kinda deprecated now
[03:09] <poolie> i don't think that failure is very conclusive either way for qa
[03:11] <jelmer> wgrant: is this about recipes using stacking and falling over when there is a pre-2a branch involved?
[03:11] <wgrant> jelmer: Well, I was trying to QA the new bzr-builder, and my test recipe happened to fall into that category.
[03:12] <jelmer> wgrant: that's not a new regression FWIW
[03:12] <wgrant> k
[03:12] <wgrant> Does someone have a less archaic recipe that they can reasonably test on staging?
[03:13] <jelmer> wgrant: python-fastimport is one of my usual test recipes
[03:13] <jelmer> I can have a look at adding that on staging, but not before dinner...
[03:14] <wgrant> Ah, you're still in the US?
[03:15] <wgrant> StevenK: Yay, p-d-r back down to 8 minutes.
[03:15] <wgrant> Still sucks, but a bit better.
[03:15] <jelmer> wgrant: yeah
[03:16] <poolie> o/ jelmer
[03:16] <poolie> wgrant:  i can try mine on bzr
[03:16] <poolie> both should be modern formats
[03:16] <wgrant> poolie: That works.
[03:23] <poolie> ok doing it now
[03:23] <wgrant> Thanks.
[03:29] <poolie> ok https://code.staging.launchpad.net/~mbp/+recipe/bzr-daily exists
[03:29] <poolie> queued for build
[03:37] <poolie> building
[03:42] <StevenK> wgrant: 8 minutes sounds pretty good to me.
[03:43] <wgrant> StevenK: It is LP good.
[03:43] <wgrant> It is not good.
[03:43] <StevenK> wgrant: Pretty good compared to the 1.5 hour transactions we were having
[03:44] <wgrant> It's considering less than 1500 items. There's no reason for it to take more than 30 seconds, and that's being generous.
[03:44] <poolie> what's p-d-r?
[03:44] <StevenK> process-death-row
[03:44] <StevenK> Deletes superseded publications
[03:45] <poolie> failed: https://staging.launchpadlibrarian.net/80692680/buildlog.txt.gz
[03:45] <wgrant> It's the pool cleaner.
[03:45] <poolie> i guess because of dpkg-deb: error: control directory has bad permissions 700 (must be >=0755 and <=0775)
[03:45] <wgrant> poolie: sudo breakage
[03:45] <wgrant> lamooooont
[03:45] <wgrant> lamont: ^^
[03:45] <poolie> which control directory is it talking about? the ./debian in the build area?
[03:46] <wgrant> probably ./DEBIAN.
[03:54] <lifeless> wgrant: that yuixhr thing is still booming
[03:57] <wgrant> lifeless: :/
[03:57] <lifeless> I found a thinko in my OopsHandler change
[03:57] <lifeless> (is not None) s/not // to fix
[03:58]  * StevenK kicks Person:+index for being stupid
[03:58] <lifeless> anyhow, that *might* be it, if yuixhr is logging warnings or something
[04:03] <lifeless> oh ugh
[04:03] <poolie> has anyone seen a test fail with "error: pid file was not removed"?
[04:03] <lifeless> so I just realised, if we want different vhosts for txlongpoll, I'll need separate config for oops-amqp
[04:04] <lifeless> poolie: nope
[04:04] <lifeless> wgrant: do you happen to know an ETA on txlongpoll being unbroken ?
[04:05] <poolie> i see muharem hit it in 2009
[04:05] <poolie> just bad luck i guess
[04:05] <StevenK> wgrant: So, where should I start digging for this +index madness?
[04:05] <lifeless> StevenK: queries ?
[04:05] <StevenK> lifeless: Hm?
[04:06] <lifeless> StevenK: what sort of madness
[04:06] <StevenK> lifeless: create_initialized_view(<person>, '+index') doesn't work
[04:08] <lifeless> StevenK: it does too
[04:08] <lifeless> see TestPersonIndexView
[04:08] <wgrant> Does that render the view?
[04:09] <lifeless> oh, rendering doesn't work?
[04:09] <wgrant> Doesn't seem to.
[04:10] <lifeless> thats odd
[04:10] <lifeless> uhm, there are additional parameters
[04:10] <poolie> well https://bugs.launchpad.net/launchpad/+bug/882370
[04:10] <_mup_> Bug #882370: spurious failure in pidfile.txt: "pid file was not removed" <Launchpad itself:Triaged> < https://launchpad.net/bugs/882370 >
[04:11] <lifeless> poolie: again, what makes this spurious? Do you know that the thing actually stopped and didn't get lost?
[04:11] <mwhudson> wgrant: didn't you work on ddebs for ppas a bit ages ago?
[04:11] <poolie> what does "actually stopped and didn't get lost" mean?
[04:11] <lifeless> poolie: we have a recurring issue with test helpers going awol and not shutting down
[04:12] <wgrant> mwhudson: I did.
[04:12] <mwhudson> wgrant: did that ever get anywhere?
[04:12] <lifeless> a pid file not being deleted is often also a symptom
[04:12] <wgrant> mwhudson: For PPAs? It's believed to be finished, with a small fix Julian made last week.
[04:12] <mwhudson> wgrant: also if i say "ddebs" and "derived distros" in the same sentence, will the result be hollow laughter?
[04:12] <mwhudson> wgrant: oh cool
[04:12] <wgrant> mwhudson: Not fully tested, and can't be at the moment, because DF is screwed.
[04:12] <wgrant> Ah.
[04:12] <wgrant> Primary archives are another matter.
[04:13] <wgrant> They have an implementation constraint that I consider to be false.
[04:13] <poolie> perhaps we have different meanings of 'spurious'
[04:13] <wgrant> A very onerous implementation constraint.
[04:13] <mwhudson> o
[04:13] <mwhudson> wgrant: i'm looking at https://blueprints.launchpad.net/linaro-ubuntu/+spec/linaro-platforms-lc4.11-ubuntu-leb-dbg-pkgs-support you see
[04:13] <poolie> these tests failed when i didn't (afaik) touch anything related to that test
[04:13] <wgrant> It is said that ddebs must live in a separate archive.
[04:13] <poolie> and they're not consistently failing
[04:13] <lifeless> poolie: that still fits the symptoms we've experienced.
[04:13] <wgrant> mwhudson: Well, the Soyuz team can probably fix that oh wait.
[04:13] <mwhudson> wgrant: ok
[04:13] <StevenK> Now now
[04:13] <poolie> are you saying this is not a bug, or are you saying it's a dupe of a known bug?
[04:14] <lifeless> poolie: that said, the test helper has timing code all over it
[04:14] <wgrant> mwhudson: So, PPA ddebs are probably a couple of months away.
[04:14] <lifeless> poolie: I was investigating to assess that, yes.
[04:14] <wgrant> mwhudson: Primary archive ddebs are an undefined period of time, plus convincing IS and Julian, away.
[04:14] <mwhudson> wgrant: thanks
[04:15] <mwhudson> wgrant: is there a LEP or anything for PPA ddebs?
[04:15] <poolie> so on the face of it, it seems spurious that this test would fail based off this change
[04:15] <poolie> imbw
[04:15] <mwhudson> or anything i can link to, for that matter
[04:15] <lifeless> poolie: oh, I don't think your change provoked it
[04:15] <poolie> i can't see any known bug about this test
[04:15] <poolie> are you saying you didn't want me to file a bug?
[04:15] <wgrant> Bug #747558
[04:15] <_mup_> Bug #747558: PPAs should create backtracable packages <escalated> <ppa> <qa-ok> <Launchpad itself:In Progress by julian-edwards> < https://launchpad.net/bugs/747558 >
[04:15] <lifeless> poolie: the question was whether it was related to the test reliability issues we've been having on and off, or a different low-frequency issue all of its own.
[04:15] <lifeless> poolie: I think its all of its own and I've commented in the bug
[04:15] <lifeless> poolie: filing the bug was the right thing to do
[04:16] <poolie> any test that has the line 'time.sleep(0.1)' is prima facie a critical bug ;)
[04:17] <lifeless> poolie: tests that break in buildbot and cause 4 to 8 hour team stalls are, and for all that it's low frequency, this may well fit that
[04:18] <poolie> yep
[04:18] <poolie> this bug and the previous one caused ~5h stalls for me
[04:18] <poolie> if it had happened in bb it could have been worse of course
[04:18] <StevenK> lifeless: So, initializing the view gives the output as ""
[04:18] <mwhudson> wgrant: i just read your explanation of how ddebs.ubuntu.com works
[04:18] <poolie> lifeless: i wonder if AWS is having some kind of underlying issue that is provoking more timing bugs
[04:18] <wgrant> StevenK: No, *rendering* the view does.
[04:18] <StevenK> lifeless: wgrant thinks that there is METAL madness involved.
[04:18] <wgrant> mwhudson: Sorry, I should have warned you.
[04:18] <wgrant> StevenK: sinzui's theory is possibly more relevant for this view.
[04:18] <poolie> eg something in the virtualization layer is making the machine clock irregular
[04:19] <mwhudson> wgrant: yeah, i may not sleep properly tonight
[04:19] <lifeless> mwhudson: :)
[04:19] <poolie> two failures in a row seems suspicious
[04:19] <lifeless> poolie: that would be epic win wouldn't it ;)
[04:19] <poolie> or, perhaps they're just bogging down sometimes
[04:20] <poolie> eg this test waits only 2s for the thing to complete
[04:20] <poolie> or possibly even less
[04:20] <poolie> it's pretty plausible you could get >2s lag on a very loaded host
[04:20] <lifeless> I need a rabbit sanity check
[04:21] <lifeless> is it *possible* for rabbit to buffer messages in an exchange and send them to a queue that is subsequently bound to it ?
[04:21] <StevenK> wgrant: My brain has melted, what was his theory?
[04:21] <wgrant> lifeless: I imagine that such a race would be possible in rare circumstances.
[04:22] <lifeless> wgrant: I have one nonreproducable failure.
[04:22] <wgrant> StevenK: XRDS
[04:22] <lifeless> wgrant: test_rosteta_branches_script_oops gets 7 oopses, including some with storm DisconnectionError and similar in them
[04:22] <lifeless> wgrant: my only theory is that having the exchange with -no- queue at all (e.g. tests in a too-low-layer) actually buffers
[04:23] <StevenK> wgrant: eXtensible Resource Descriptor Sequence? Srsly?
[04:27] <wgrant> StevenK: Correct.
[04:27] <poolie> ew doctests
[04:29] <StevenK> wgrant: So where do I start?
[04:29] <wgrant> StevenK: Work out what's returning ""
[04:30] <lifeless> http://www.rabbitmq.com/tutorials/tutorial-three-python.html says 'The messages will be lost if no queue is bound to the exchange yet'
[04:30] <lifeless> so that rules out that scenario
[04:30] <wgrant> It doesn't.
[04:30] <lifeless> wgrant: how so ? stale appserver helpers?
[04:30] <StevenK> wgrant: So create_initialized_view() ; view() should be enough in the usual case, right?
[04:30] <wgrant> 1) It's a tutorial.
[04:30] <lifeless> wgrant: on the rabbitmq site
[04:30] <wgrant> 2) It probably just means "You can't rely on messages not being lost"
[04:31] <wgrant> StevenK: Yes.
[04:31] <lifeless> I find reading what is written usually works.
[04:31] <poolie> lifeless:  i realize it's intermittent but it offends me
[04:31] <wgrant> lifeless: Not in a tutorial, and not from rabbitmq.
[04:31] <poolie> shall i just bump it up to waiting 20s?
[04:31] <lifeless> poolie: I take it you also cannot see a cause
[04:31] <poolie> i've looked for races and i don't see any
[04:32] <lifeless> sounds like a (painful) workaround
[04:32] <poolie> the fact that it's been there for >2 years but hit only intermittently suggests it's timing related
[04:32] <lifeless> agreed
[04:32] <poolie> especially as i had another perhaps timing-related thing today
[04:32] <poolie> i was wondering about making it also report if the process still exists
[04:33] <lifeless> that might help with diagnostics
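[editor's note] One way to avoid the fixed `time.sleep(0.1)` / 2s budget discussed above is a deadline poll; `wait_for` here is an invented helper, not Launchpad's actual test infrastructure:

```python
import time


def wait_for(condition, timeout=20.0, interval=0.1):
    """Poll `condition` until it returns truthy or `timeout` expires.

    On a heavily loaded host this only costs extra wall time when the
    thing being waited for is actually slow, unlike a fixed sleep.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    # One final check at the deadline before giving up.
    return bool(condition())
```

For the pidfile case this would be something like `assert wait_for(lambda: not os.path.exists(pidfile)), "pid file was not removed"`, and on failure one could also report whether the process still exists, as poolie suggests.
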
[05:01] <StevenK> wgrant: I think lib/canonical/launchpad/webapp/publisher.py, line 296 is to blame.
[05:02] <StevenK> "Oh, Person:+index is a redirect, return u''"
[05:02] <lifeless> StevenK: you may need to pass in a layer (not test layer)
[05:02] <wgrant> StevenK: pdb will tell you...
[05:02] <wgrant> But I doubt it.
[05:02] <lifeless> StevenK: this is one of the extra params IIRC. Or something.
[05:03] <StevenK> wgrant: I discovered it because of pdb
[05:03] <StevenK> I've been tracing for a while
[05:03] <StevenK> Hm, or not.
[05:03] <StevenK> return self.render()
[05:04] <wgrant> It's probably feeds or XRDS interacting badly, but step through in pdb and find out.
[05:06] <poolie> lifeless: https://code.launchpad.net/~mbp/launchpad/882370-pidfile/+merge/80536
[05:06] <poolie> :/
[05:06] <poolie> not the most satisfying series of patches
[05:06] <poolie> i'm going to run some errands
[05:13] <StevenK> wgrant: I'm in the middle of zope.tales, so it must be doing useful stuff
[05:16] <wgrant> Indeed.
[05:16] <wgrant> Which suggests that it may in fact be METAL.
[05:18] <StevenK>         if current_url != expected_url:
[05:18] <StevenK>             self.request.response.redirect(expected_url)
[05:18] <StevenK>             return ''
[05:19] <wgrant> lol
[05:19] <wgrant> I found that within 5 seconds of you.
[05:19] <StevenK> In lib/lp/services/openid/browser/openiddiscovery.py
[05:19] <wgrant> Yes.
[05:19] <wgrant> Was just switching over to paste that same thing.
[05:19] <StevenK> Haha
[05:20] <wgrant> Although I was 3 seconds later than you :(
[05:21] <StevenK> I'm on my laptop due to my desktop being busy with this evil program called dd, so I was a little slower than pasting
[05:21] <wgrant> Heh
[05:21] <wgrant> So, that would do it.
[05:21] <StevenK> Indeed.
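[editor's note] A toy model (all class and function names invented) of the behaviour being debugged: the guard quoted from openiddiscovery.py redirects and returns '' whenever the request URL differs from the canonical URL, so rendering through create_initialized_view() yields an empty string:

```python
class FakeResponse:
    def __init__(self):
        self.location = None

    def redirect(self, url):
        self.location = url


class FakeRequest:
    def __init__(self, url):
        self.url = url
        self.response = FakeResponse()


def render(request, expected_url, body='<html>person page</html>'):
    # Mirrors the pasted snippet: a URL mismatch short-circuits
    # rendering into a redirect plus an empty body.
    if request.url != expected_url:
        request.response.redirect(expected_url)
        return ''
    return body


# A test-style request with no sensible URL gets the empty render...
mismatched = FakeRequest('http://127.0.0.1/')
empty = render(mismatched, 'http://launchpad.dev/~team-name-150898')

# ...while a request at the canonical URL renders normally.
matched = FakeRequest('http://launchpad.dev/~team-name-150898')
rendered = render(matched, 'http://launchpad.dev/~team-name-150898')
```
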
[05:23] <StevenK> Current URL: http://127.0.0.1/
[05:23] <StevenK> Expected URL: http://launchpad.dev/~team-name-150898
[05:23] <StevenK> Strange
[05:24] <wgrant> Right, the fake request (LaunchpadTestRequest, probably) used by create_initialized_view doesn't have a sensible URL.
[05:24] <wgrant> Because most things are sensible enough to not require one.
[05:24] <StevenK> Right, I was just thinking that LTR is probably implicated.
[05:25] <StevenK> Can't we create a request and toss that into create_initialized_view()?
[05:26] <StevenK> I have this feeling I've seen that done. *Where* ...
[05:29] <wgrant> You could also change create_initialized_view to set the URL to canonical_url(obj, view_name)
[05:30] <StevenK> create_initialized_view takes a server_url param
[05:30] <poolie> lifeless: thanks
[05:31] <lifeless> wgrant: I'm not going to move the lp_sitecustomise line; I worry it's order-dependent because it's doing monkey patching.
[05:31] <lifeless> wgrant: or am I wrong ?
[05:31] <lifeless> wgrant: ah, I'm wrong.
[05:32] <wgrant> lifeless: I thought that as well, but it's in the sort of file where that would surely not be well-placed.
[05:35] <StevenK> RARGH
[05:35] <StevenK> Current URL: http://launchpad.dev/~team-name-333852/
[05:35] <StevenK> Expected URL: http://launchpad.dev/~team-name-333852
[05:37] <StevenK> They both call canonical_url, why does one have a slash and the other doesn't ... :-(
[05:37] <wgrant> The / will be because you asked for the +index view.
[05:37] <wgrant> I suspect the other one just asks for the object.
[05:39] <StevenK>     if not server_url:
[05:39] <StevenK>         server_url = canonical_url(context)
[05:39] <StevenK>         expected_url = canonical_url(self.context)
[05:41] <StevenK> If both contexts are the same, the URLs should be equal
[05:41] <lifeless> wgrant: I've just pushed rev 14150 if you want to do an incremental review. I've added autosync to the attachOopses() call, which will hopefully result in these naughty oopses being sucked up earlier.
[05:42] <lifeless> wgrant: I've added de-duplication of oops ids to accommodate that
[05:44] <wgrant> lifeless: I considered asking for that, but it sounds like it could be slow.
[05:44] <wgrant> Although I guess it shouldn't be.
[06:12] <lifeless> wgrant: if it passes ec2, I should have some timing data
[06:12] <lifeless> wgrant: so we can assess, and we'll know
[06:12] <wgrant> lifeless: To an extent.
[06:12] <wgrant> But yes.
[06:13] <wgrant> Wow.
[06:13] <wgrant> The security declarations for Bug are... special.
[06:14] <wgrant> Notably, there are some attributes listed that I've never heard of.
[06:14] <wgrant> And they are in no order whatsoever.
[07:11] <jtv> Any reviewers in the house?  https://code.launchpad.net/~jtv/launchpad/pre-876594/+merge/80537
[07:13] <nigelb> "me" is new.
[07:13] <nigelb> hah, ok, lifeless.
[07:15] <jtv> Thanks for figuring that out, Nigel.  :)
[07:15] <jtv> lifeless: could I trouble you for a review?  It's this one: https://code.launchpad.net/~jtv/launchpad/pre-876594/+merge/80537
[07:15] <jtv> Not very exciting stuff, I'm afraid.
[07:15] <lifeless> sure
[07:16] <jtv> You had me there for a moment.
[07:16] <nigelb> heh
[07:16] <lifeless> well, its 8pm, well past EOD
[07:16] <lifeless> even for extended-batteries-lifeless
[07:16] <lifeless> I'd forgotten to update the topic
[07:27] <lifeless> jtv: done
[07:34] <jtv> thanks lifeless
[07:37] <jtv> morning rvba
[07:37] <rvba> Hey jtv.
[07:38] <jtv> rvba: I didn't make very much progress on your query, except to find a sort of rock-bottom reason why it's slow.
[07:38] <jtv> See email.
[07:38] <rvba> Saw your email, with many more details; you reach the same conclusion I did: rows=336922, that's a huge number.
[07:39] <jtv> You may be wondering why I said it's not worth counting if the owner owns as little as 1% of the branches.
[07:40] <rvba> So I suppose the comment that lifeless put on the bug itself (that the is a way to use StormRangeFactory (tweaking involved)) is the only way to go.
[07:40] <rvba> jtv: Yes I do.
[07:40] <jtv> It's because with 1% of the branches, you could still end up touching a large portion of the table pages.
[07:40] <jtv> Worst case, you need to touch 1 branch per page.
[07:41] <jtv> If you can skip the count, great.
[07:41] <rvba> Well, using StormRangeFactory would avoid the count; I suspect the perf of the page would be dramatically improved.
[07:41] <jtv> Probably, yes.
[07:42] <jtv> You'd also be rid of the OFFSET.
[07:42] <jtv> Which could matter if we just grew a few hundred thousand branches recently.
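[editor's note] A sketch of the idea behind StormRangeFactory-style batching (schema and data invented; sqlite3 stands in for PostgreSQL): replace OFFSET, which reads and discards every skipped row, with a keyset condition on the sort column that lets the index seek straight to the batch:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE branch (id INTEGER PRIMARY KEY, name TEXT)')
conn.executemany('INSERT INTO branch VALUES (?, ?)',
                 [(i, 'b%04d' % i) for i in range(100)])


def next_batch(conn, after_id, size=10):
    # "WHERE id > last_seen ORDER BY id LIMIT size" instead of
    # "ORDER BY id LIMIT size OFFSET n": cost stays flat as you page
    # deeper, since no skipped rows are materialised.
    rows = conn.execute(
        'SELECT id FROM branch WHERE id > ? ORDER BY id LIMIT ?',
        (after_id, size)).fetchall()
    return [r[0] for r in rows]


first = next_batch(conn, -1)
second = next_batch(conn, first[-1])
```
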
[07:43] <lifeless> private branches will be getting overhauled by disclosure
[07:43] <lifeless> I suggest talking to them about the schema changes planned - the schema drives performance here in fairly significant ways
[07:45] <jtv> The 300K branches were private on dogfood but public on staging.  They're only a problem when they're public, but I assumed they'd be public in production.
[07:45] <jtv> The private-branches part isn't much of a performance problem, apart from a lot of repetitive querying.
[07:45] <rvba> All the explains I've done are on production.
[07:46] <jtv> So this isn't tied to branch privacy.
[07:46] <lifeless> jtv: I did a previous optimisation pass in june or so, because its a problem :)
[07:46] <lifeless> jtv: but if you've newer data than I, great.
[07:47] <jtv> Not for what rvba is looking at.  In other words, you probably solved the problem as far as private branches are concerned and we're left with public branches as the bottleneck.
[07:47] <rvba> The small improvement I did with counting private branches (simply caching the results) will help a little.
[07:48] <rvba> But like jtv said, the main problem is counting the public branches.
[07:48] <jtv> It got a bit faster when I omitted the "transitively_private is false" condition from the public-branch count,
[07:48] <jtv> but still not as fast as counting public and private branches separately & adding up the numbers afterwards.
[07:50] <lifeless> rvba: just a style thing - I'd hesitate to use a cached property on GenericBranchCollection
[07:50] <lifeless> rvba: I suspect it could easily lead to incorrect behaviour
[07:50] <lifeless> rvba: its really a fancy query builder; any caching should be done on the resulting query
[07:51] <rvba> lifeless: my goal was to cache the results of a subquery that is repeated more than 10 times.
[07:52] <lifeless> rvba: why is it repeated 10 times?
[07:52] <lifeless> rvba: also are you aware that passing in long IN clauses can trigger inappropriate sequential scans ?
[07:52] <lifeless> rvba: [so if you expect thousands of entries in an IN clause you may need to move to a temp table, insert, analyze, then query]
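[editor's note] The temp-table pattern lifeless describes can be sketched like this (sqlite3 stands in for PostgreSQL, where you would also ANALYZE the temp table so the planner knows its size; the schema is invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE branch (id INTEGER PRIMARY KEY, owner INTEGER)')
conn.executemany('INSERT INTO branch VALUES (?, ?)',
                 [(i, i % 10) for i in range(1000)])

# Stand-in for a many-thousand-value "IN (...)" list.
visible_ids = range(0, 1000, 2)

# Instead of interpolating the ids into the query, load them into a
# temp table and join against it.
conn.execute('CREATE TEMP TABLE visible (id INTEGER PRIMARY KEY)')
conn.executemany('INSERT INTO visible VALUES (?)',
                 [(i,) for i in visible_ids])
# On PostgreSQL: run "ANALYZE visible" here, so the planner has row
# statistics and can avoid an inappropriate sequential scan.
visible_count = conn.execute(
    'SELECT count(*) FROM branch JOIN visible ON branch.id = visible.id'
).fetchone()[0]
```
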
[07:53] <lifeless> rvba: I certainly agree with jtv's suggestion that a union may be better here
[07:53] <rvba> Right, I still need to make sure that the worst that can happen with the IN query is a few thousand.
[07:54] <rvba> I've not used a WITH query because I wanted to improve lots of queries at once.
[07:54] <lifeless> as a data point, the last time I touched this code I fixed one page and broke another
[07:54] <lifeless> qa on this will be tricky
[07:55] <lifeless> ah, ubuntu-branches, \o/
[07:55] <rvba> And I confess I'm not sure why I have so many different queries using this subquery; I must say that this code is a little bit spaghetti-like. But anyway, this was just a first attempt I made before jtv & I realised this was really not where the improvement should be done.
[07:55] <lifeless> k
[07:56] <lifeless> so the plan issue is the hashed SubPlan 1, I think
[07:56] <rvba> I think the potential problems of this IN query optimisation are probably not worth the improvement it brings … I shall see if stormrangefactory can be applied.
[07:56] <lifeless> select count(*) from branch where owner=2866082;
[07:56] <lifeless>  count
[07:56] <lifeless> --------
[07:56] <lifeless>  338016
[07:56] <lifeless> (1 row)
[07:56] <lifeless> Time: 284.695 ms
[07:57] <lifeless> 343ms with the lifecycle status
[07:57] <rvba> i've only touched the surface of it so far, but this StormRangeFactory is a very, very nice piece of work I must say.
[07:57] <lifeless> this is consistent with my previous memory
[07:57] <jtv> That's a lot faster than on the test servers.
[07:57] <lifeless> the public branch is not the issue
[07:58] <rvba> That's a typical 1s query that the page is doing:
[07:58] <lifeless> 710ms with the transitively_private check added, which is odd.
[07:58] <rvba> https://pastebin.canonical.com/54885/
[07:58] <rvba> The related explain: https://pastebin.canonical.com/54886/
[07:59] <rvba> I suspect if you add the lifecycle_status check it will take 1sec.
[08:00] <lifeless> 743ms with owner + lifecycle_status + transitively_private
[08:00] <jtv> lifeless: the slowdown from the privacy check may be something to do with access patterns within the page.  Do you know off the top of your head where a tuple's visibility-related info is stored?  Maybe it's a compact blob at the beginning of the page.
[08:00] <rvba> I said typical because like I said earlier, many similar count queries are generated by this page (https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-2124AO73)
[08:00] <lifeless> jtv: you thinking a repack or something ?
[08:01] <jtv> Wasn't, but always a nice thought.  :)
[08:01] <jtv> Oh, I think I know.
[08:01] <lifeless> jtv: I don't know that bit of the layout, no.
[08:01] <lifeless> checking private (the old column) isn't visibly faster
[08:02] <adeuring> good morning
[08:02] <rvba> Morning adeuring!
[08:02] <lifeless> 480k branches in total
[08:02] <adeuring> hi rvba!
[08:02] <lifeless> so this is getting 66% or so of pages
[08:02] <jtv> lifeless: IIRC there's a shortcut for determining that all tuples in a page are visible.  We won't make full use of that without index-only scans, but it probably still means that the liveness check has better locality than actual tuple data check.
[08:02] <jtv> hi adeuring
[08:03] <adeuring> hi jtv
[08:03] <jtv> Once you start looking at the actual data, you do need to look at the tuple's data.
[08:03] <jtv> Until then, just visibility metadata.
[08:03] <lifeless> anyhow, given we're getting 2/3rds of the data, the seq scan is appropriate
[08:04] <jtv> Absolutely.
[08:04] <jtv> Even for a few % of the data it would be appropriate.
[08:04] <jtv> Why aren't we ever discussing clustering?
[08:05] <jtv> Not that I know what to cluster _on_…
[08:05] <lifeless> well, partly because it needs downtime :)
[08:05] <lifeless> also its not maintained by vacuum
[08:05] <jtv> Why does it need downtime?
[08:05] <lifeless> and the online adding scares me ;)
[08:06] <lifeless> jtv: 'When a table is being clustered, an ACCESS EXCLUSIVE lock is acquired on it. This prevents any other database operations (both reads and writes) from operating on the table until the CLUSTER is finished. '
[08:06] <lifeless> http://www.postgresql.org/docs/current/static/sql-cluster.html
[08:06] <jtv> That's no good.  :/
[08:06] <jtv> I thought vacuum maintained clustering, if you assigned a clustering index.
[08:07] <lifeless> nope
[08:07] <lifeless> well, let me be precise
[08:07] <lifeless> I know that autovacuum does not
[08:07] <lifeless> I believe that vacuum also does not. IMBW on this one.
[08:07] <lifeless> rvba: so looking at that oops
[08:07] <lifeless> slowest query is the BMP one
[08:08] <lifeless> second slowest is is the count of branches
[08:08] <lifeless> third is the retreival of the branches batch
[08:08] <rvba> Yes, again, a count(*), and you can see the repeated IN subquery, hence my first idea to cache the result of this subquery.
[08:11] <lifeless> rvba: yes, I get that... note though that the fact you're querying against so many public branches will reduce that impact ;)
[08:11] <lifeless> rvba: which you already know
[08:12] <lifeless> rvba: so I suggest a different tack on the analysis
[08:12] <lifeless> rvba: lets ask what would be the fastest way to implement the slowest part of the page
[08:12] <lifeless> rvba: and then fix that
[08:13] <lifeless> rvba: (one fast way is to not do something at all)
[08:13] <lifeless> rvba: the branch mergeproposal count isn't in a batch
[08:13] <lifeless> rvba: and we'll cut 3.7 seconds out if we stop showing it; 1.8 if we get a 50% improvement there.
[08:14] <rvba> Note that the total sql time is ~10s so we have a long way to go ;)
[08:15] <lifeless> rvba: fixing 30% of that would be a pretty good start :)
[08:15] <rvba> I suppose that there is one count for the batch, and the rest is to display the numbers in the right hand side portlet.
[08:16] <rvba> That is a costly portlet :)
[08:16] <lifeless> working page: https://code.launchpad.net/~rvb/+branches
[08:16] <lifeless> rvba: yes, note that https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-2124AO73#longstatements top-N 5
[08:16] <rvba> Yes, that's precisely the one I'm looking at.
[08:17] <lifeless> 5. 	948.0 	1 	SQL-main-master 	
[08:17] <lifeless> SELECT BranchMergeProposal.commit_message,
[08:17] <lifeless>        BranchMergeProposal.date_created,
[08:17] <lifeless>        BranchMergeProposal.date_merged,
[08:17] <lifeless>        BranchMergeProposal.date...
[08:17] <lifeless> that seems total waste
[08:17] <lifeless> as the reviews are not shown.
[08:17] <lifeless> so you can save 1 second by fixing that bug.
[08:17] <lifeless> work not done is cheap :)
[08:17] <nigelb> I never knew there would be a day I'd be reviewing danilos' code :P
[08:18] <rvba> lifeless: Very true :)
[08:20] <lifeless> rvba: I have to wonder if showing a count of subscribed branches is needed
[08:22] <lifeless> rvba: e.g. perhaps we shouldn't show random users how many branches someone is subscribed to, nor what they are subscribed to. Perhaps we should.
[08:22] <lifeless> its consistent with what we do to bugs, so I wouldn't arbitrarily change it
[08:23] <rvba> Also, maybe the numbers are not that important, but maybe the links are for some workflows; maybe we could remove the numbers and keep the links.
[08:23] <rvba> Lots of 'maybe' :)
[08:23] <lifeless> contrast with https://bugs.launchpad.net/~rvb/
[08:24] <lifeless> no numbers there
[08:24] <rvba> Right.
[08:25] <lifeless> that is perhaps a sensible first step; get us some headroom
[08:26] <lifeless> more consistent with the bugs application
[08:26] <bigjools> morning all
[08:26] <nigelb> Morning bigjools
[08:27] <nigelb> *snicker* English cricket team.
[08:27] <rvba> lifeless: That's a good idea, and an easy fix :). Should we run this past someone first?
[08:28] <StevenK> English "cricket" team.
[08:28] <bigjools> nigelb: you mean the #1 test team?
[08:29] <lifeless> rvba: up to you :) - I'm one of the people it would be run past :P; secondly you'll need to convince a reviewer that its reasonable, and they can always suggest you seek UI discussion too
[08:29] <nigelb> bigjools: Yeah, the one that didn't win a single match :P
[08:30] <bigjools> nigelb: that'd be you guys over here then?
[08:30] <nigelb> haha
[08:30] <bigjools> ;)
[08:30] <nigelb> Apparently our respective teams perform only in their home country.
[08:30] <rvba> lifeless: ok, I think the argument will depend on the perf improvement it brings. I'll do it, have it reviewed, and then we will evaluate the benefit it brings on staging. At that point we can have the UI vs. performance argument
[08:30] <nigelb> (Not like I watched either set of matches)
[08:31] <lifeless> rvba: you can feature flag it
[08:31] <lifeless> rvba: makes the decision to flip the flag separate to the coding aspect
[08:32] <rvba> lifeless: Good idea, this way we can test it on production.
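The feature-flag approach being suggested separates landing the code from deciding to enable it. A minimal sketch of the pattern (the flag name and dict-based store are invented for illustration, not Launchpad's actual feature-flag service):

```python
# Hypothetical flag store: in a real system this would be queried per
# request and could be scoped per user, per team, or globally.
FLAGS = {"code.branch_counts.disabled": False}

def portlet_line(name, count):
    """Render one line of the branches portlet, gated on the flag."""
    if FLAGS["code.branch_counts.disabled"]:
        return name                        # link text only, no count query
    return "%s (%d)" % (name, count)       # link text with the costly count
```

Both variants ship together; flipping the flag on production is then an instantly reversible operation that needs no new deployment.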
[08:32] <bigjools> nigelb: yes, for someone who doesn't like cricket, you seem to spend a lot of time talking about it :)
[08:32] <nigelb> bigjools: I follow StevenK's principle about cricket :D
[09:14] <lifeless> allenap: you are added now
[09:15] <allenap> lifeless: Thanks, I saw that, and uploaded 0.0.6 last night.
[09:16] <lifeless> cool
[10:00] <nigelb> rvba: Congrats!
[10:04] <rvba> nigelb: thanks ;).
[10:43] <rvba> nigelb: brutal reviews…?
[10:44] <nigelb> rvba: where you nitpick every line.
[10:44] <rvba> nigelb: There are a few "fascist reviewers" out there.  But I'm not one of them ;).
[10:44] <nigelb> rvba: You should be.
[10:45] <nigelb> :)
[10:45] <rvba> I follow another way to do things, no brutality involved :)
[10:45] <nigelb> hehehe
[10:45] <rvba> It's called the Panella style.
[10:45] <nigelb> haha
[10:45] <rvba> :)
[10:46] <nigelb> I'm too used to the jv/fascist style.
[10:46] <bigjools> the best reviews are the ones that make me think about what I am actually doing
[10:47] <nigelb> A lot of times I see my code evolve from "this works" to "this works and looks pretty neat as well" during code review.
[10:59] <bigjools> :)
[10:59] <rvba> Yeah!
[10:59] <nigelb> \o/
[13:06] <rvba> Morning jcsackett.
[13:06] <jcsackett> morning, rvba. thought you were on monday shifts now?
[13:07] <rvba> jcsackett: yeah, but I'm finishing today's shift since I'll be off on Monday.
[13:07] <jcsackett> Ah, dig.
[13:08] <deryck> Morning, all.
[13:17] <jcsackett> Morning, deryck.
[13:20] <deryck> rvba or jcsackett -- I have a js branch for review if either of you care to review it.
[13:20] <rvba> deryck: sure.
[13:21] <deryck> rvba, thanks!  https://code.launchpad.net/~deryck/launchpad/buglists-orderby-widget/+merge/80563
[13:31] <flacoste> morning deryck!
[14:10] <rvba> deryck: just checking: the code in the demo is in sync with the code in the MP?
[14:14] <deryck> rvba, on call, sorry, just a sec.
[14:20] <deryck> rvba, ok, off now, sorry.  by demo do you mean the html stuff at people.canonical.com?
[14:20] <rvba> deryck: I'm asking because if I click on 'Importance' then on 'Bug heat' and then repeat the operation, I get this: http://people.canonical.com/~rvb/big_button.png. Looks like a space somewhere is not removed.
[14:20] <rvba> deryck: yes
[14:21] <deryck> rvba, yeah, so the demo version was just a proof to get the interaction down.  The widget doesn't copy it exactly, and that particular issue should be fixed in the widget.
[14:21] <rvba> deryck: Ok.
[14:21] <deryck> rvba, it's due to popping a span on the li for the arrows, and in the final version you're reviewing the span is always there.
[14:22] <rvba> deryck: Right, I saw the code was not exactly the same but if you're aware of the issue in the demo that's fine.
[14:22] <deryck> rvba, gotcha.
[14:26] <flacoste> rvba: congratulations on achieving reviewerhood!
[14:26] <rvba> Thanks flacoste!
[14:29] <deryck> rvba, thanks for the review!  I've no issues with anything you mention and will take your suggestions up before landing.
[14:29] <rvba> Cool.
[14:42] <deryck> oh rvba, I did like the suggestion for the tearDown destroy, but the one place it's not used is not needed because that's an error test.  The render never completes.
[14:43] <rvba> deryck: ah ok.
[14:46] <rvba> deryck: I also suggested that because of http://bazaar.launchpad.net/~deryck/launchpad/buglists-orderby-widget/revision/14212
[14:46] <deryck> rvba, yup, it's a good suggestion.
[15:00] <cr3> hi folks, what's the difference between documentation under doc/ in the source tree and dev.launchpad.net?
[15:01] <sinzui> jcsackett, ping
[15:02] <jcsackett> sinzui: ponf/g.
[15:02] <sinzui> jcsackett, I have a data fixing script that needs review. I want to talk to you about it since it is not a branch
[15:02] <sinzui> mumble?
[15:02] <jcsackett> sure, one moment.
[15:05] <bigjools> cr3: hi
[15:06] <bigjools> cr3: the stuff under doc/ is generated and shown at readthedocs. I'm currently looking into sorting the documentation mess out, so bear with me.
[15:07] <cr3> bigjools: what's "readthedocs"?
[15:07] <bigjools> cr3: http://launchpad.readthedocs.org/en/latest/
[15:09] <sinzui> jcsackett, http://pastebin.ubuntu.com/720766/ and http://pastebin.ubuntu.com/720768/
[15:09] <cr3> bigjools: I'm adding documentation to the launchpad-results project which tries to follow launchpad practices as closely as possible, any suggestions for me?
[15:10] <bigjools> cr3: not really :(  I've not started sorting this out yet, it's a fledgling initiative. My main aim is to get low-level API info in the Sphinx stuff so that it lives with the code, and the wiki should contain more higher-level stuff.
[15:11] <cr3> bigjools: no worries, since I won't have much documentation for a while, it shouldn't be too difficult to migrate later
[15:12] <bigjools> cr3: I'd use reST markup as well
[15:12] <bigjools> it's rather flexible
[15:12] <cr3> bigjools: I thought about that, but then I figured that keeping the text under doc/ in the same format as the doctests throughout the code might be beneficial, rather than having two formats
[15:13] <bigjools> cr3: doctests suck
[15:14] <bigjools> cr3: the stuff under doc/ is reST
[15:14] <cr3> bigjools: I know, I know, but some might be useful. I think the problem is that launchpad has been using them too much and for the wrong purpose, like unit testing in doctests, but I'm not convinced they're all bad
[15:15] <bigjools> cr3: maybe - LP's use is teh suck for sure, but they can get unwieldy very fast IME
[15:15] <cr3> bigjools: so, ultimately, the objective would be to have no doctests at all?
[15:16] <bigjools> cr3: that's the majority consensus in the LP team, yes
[15:18] <jcsackett> sinzui: r=me on the sql, looks good.
[15:19] <sinzui> thank you
[15:27] <jcsackett> sinzui: one note on the script. there are two exceptions in which you log, but since those are both errors don't you want to pass in error=true to the log function?
[15:27] <sinzui> Oh
[15:28] <sinzui> jcsackett,  no for unauthorized, yes for the WTF !!
[15:28] <jcsackett> I figured a comment like "Something went very wrong" implied error logging should be enabled. :-P
[15:28] <sinzui> jcsackett, Since I did not get any !! in the log, I did not catch that I needed to report the super error
[15:29] <sinzui> fixed
[15:29] <jcsackett> sinzui: otherwise, this looks good to me. r=jcsackett.
[15:29] <sinzui> thank you
[15:35] <flacoste> bigjools: well, there's a disagreement in the sense that some people (at least stub) argue that documentation should be tested
[15:36] <flacoste> so a kind of doctest (even if the doctest tool isn't used) could be used
[15:36] <flacoste> i think manuel was suggested
[15:36] <flacoste> as it integrates with sphinx
[15:36] <bigjools> flacoste: well he was the lone dissenter, hence my phrase "majority consensus"
[15:37] <bigjools> flacoste: right - I need to investigate all this manuel/sphinx etc
[15:42] <bigjools> abentley: hey do you have a brief moment to talk about canonistack + LP?
[15:43] <abentley> bigjools: sure.
[15:43] <bigjools> abentley: I've got it such that I have the same 9 tests failing every time, and a bunch of them are in test_lpserve
[15:43] <bigjools> because bzr is outputting:
[15:43] <bigjools> bzr: warning: unsupported locale setting
[15:43] <bigjools> If I can figure out why it does that, we've got a winner
[15:44] <abentley> bigjools: I'm not optimistic about using openstack for running tests, because they still haven't fixed my RT about instances not starting properly.
[15:45] <bigjools> right - I've kept the same one up for a week :)
[15:45] <abentley> bigjools: But I would imagine the complaint about locale setting is related to LANG.
[15:47] <bigjools> abentley: yes - I've tried setting it to en_US.UTF8 and I get the same error
[15:48] <abentley> bigjools: And it fails when you run the test by itself?
[15:48] <bigjools> abentley: tell a lie - it's working now
[15:48] <bigjools> only one test failing now!
[15:48] <abentley> bigjools: Let's hope it stays that way :-)
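bzr's "unsupported locale setting" warning comes from `locale.setlocale` failing for whatever LANG/LC_ALL name, which is why abentley's LANG guess was on target. A small reproduction of the failure mode and the usual fallback (a sketch of the general pattern, not bzr's actual code):

```python
import locale

def init_locale(preferred=""):
    """Try the requested (or environment) locale; fall back to C.

    Returns True if the preferred locale was usable, False if we had
    to fall back -- the False path is roughly where bzr emits its
    'unsupported locale setting' warning.
    """
    try:
        locale.setlocale(locale.LC_ALL, preferred)
        return True
    except locale.Error:
        # An unknown or uninstalled locale name raises locale.Error.
        locale.setlocale(locale.LC_ALL, "C")
        return False
```

On a minimal cloud image the fix is usually just installing/generating the locale that LANG names, or exporting a locale the image actually has.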
[15:49] <bigjools> abentley: http://pastebin.ubuntu.com/720814/
[15:50] <abentley> bigjools: Since it's in an assertRaises, I guess it's raising the wrong exception?
[15:50] <rvba> bigjools: I remember having this one failing too!
[15:50] <rvba> abentley: exactly
[15:52] <bigjools> which is odd
[16:05] <flacoste> deryck: yesterday on the TL call, in the feature checkpoint when i mentioned that you guys were going to look at multi-sort next
[16:06] <flacoste> lifeless pointed out that you might hit some performance problems
[16:06] <flacoste> as all our bug queries are optimized including the sort order
[16:06] <flacoste> you might encounter cases that don't perform well
[16:06] <flacoste> so it's something you'll have to be wary of
[16:21] <deryck> flacoste, ah, ok.  Good to know.  adeuring, see ^^
[16:22] <adeuring> deryck, flacoste: yeah, that's what I was thinking too...
[16:48] <deryck> abentley, ok, I have my head around your branch now.  Can we dialog about this review now?
[17:01] <elmo> is there a tool for caching Launchpad bugs for offline use?
[17:17] <abentley> deryck: I'm free when you are.
[17:19] <deryck> abentley, ok.  Let's mumble to make it quicker.
[17:19] <abentley> deryck: sure.
[17:36] <flacoste> elmo: i think rick spencer had a LP bugs GUI at some point that was doing something like that
[17:45] <jelmer> flacoste, elmo: lp:bughugger
[17:46] <flacoste> yep, that sounds like a rick_spencer project name!
[17:46] <jelmer> that's Rick's LP bugs GUI, not sure how much offline support it has though
[17:46] <flacoste> thanks jelmer
[17:46] <jelmer> flacoste: :)
[17:46] <flacoste> i know it was downloading the bugs api representation in a local cache and then was operating the gui from that
[17:57] <nigelb> flacoste: brian murray had something like that. Maybe it's the same project.
[18:04] <elmo> flacoste, jelmer: thanks - that's trying to install desktopcouch and stuff and looks dead upstream
[18:04] <elmo> I guess it'll be easy to do a dumb cache with launchpadlib
[18:05] <flacoste> desktopcouch!
[18:05] <flacoste> wow, yeah, that's dead
[18:05] <flacoste> the hug of death
[18:07] <nigelb> desktopcouch is dead?
[18:07] <nigelb> Doesn't Ubuntu use it in a lot of places?
[18:08] <nigelb> Or is it couchdb.
[18:08] <nigelb> elmo: james_w had some API demo that did some offline stuff.
[18:08] <nigelb> In Belgium.
[18:08] <james_w> I assume that elmo wants something that works
[18:09] <nigelb> lol.
[18:09] <nigelb> james_w: A common feature of your lightning talks :P
[18:11] <elmo> nigelb: desktopcouch may or may not be dead - I just don't want (any more of) it installed on my machine if I can avoid it.  my 'dead' comment referred to the fact there haven't been any code changes to bughugger in almost a year
[18:11] <sinzui> jcsackett, ping
[18:14] <nigelb> elmo: Ah, right. It's unloved.
[18:16] <elmo>  The Launchpad API is currently in beta, and may well change in ways
[18:16] <elmo>  incompatible with this library.
[18:16] <elmo> is that still true?
[18:24] <deryck> elmo, depends on which version of the api you're using.  there is a beta, a 1.0, and a devel.
[18:25] <deryck> or maybe I just inserted myself in a conversation without context. :)
[18:26] <elmo> well, that comes from the package description for python-launchpadlib
[18:26] <deryck> ah
[18:26] <deryck> I would guess launchpadlib uses the 1.0 api now.
[19:11] <jcsackett> sinzui: sorry, didn't see your highlight an hour ago.
[19:12] <sinzui> jcsackett, I reported https://bugs.launchpad.net/launchpad/+bug/882714 and attached the script that I have used.
[19:12] <sinzui> jcsackett, please review the script, I want an admin to run it once to retire it
[19:13] <jcsackett> sinzui: happily. :-)
[19:15] <jcsackett> sinzui: big script, eh? may take a bit.
[19:16] <sinzui> jcsackett, you do not need to know every project in the dict. that is what I collected in the evening reviewing projects one January
[19:16] <jcsackett> yeah, i just realized that the dicts could be skipped.
[19:17] <jcsackett> quite a bit shorter without that. :-P
[19:22] <jcsackett> sinzui: you say you've been using this script before?
[19:22] <flacoste> deryck: yes, it does, we should update the package description
[19:22] <flacoste> or the Ubuntu maintainer should
[19:22] <sinzui> jcsackett, I run it once a month
[19:22] <jcsackett> dig.
[19:23] <jcsackett> sinzui: i see nothing i can add about or to this script, looks good.
[19:23] <jcsackett> r=me, sinzui.
[19:23] <sinzui> it fixes about 1 project every other run because the user gave ~registry the project
[20:34] <deryck> abentley, I have bug listings reordering now.  it all works with very few lines changed.
[20:34] <deryck> abentley, http://pastebin.ubuntu.com/721063/
[20:34] <abentley> deryck: Awesome!
[20:35] <benji> hi jcsackett, do you have time to review this for me?  https://code.launchpad.net/~benji/launchpad/bug-877195-code/+merge/80617
[20:35] <jcsackett> benji: sure thing.
[20:35] <benji> cool
[20:35] <deryck> abentley, I still need to setup the widget correctly, to match actual sort orders we use.  And then some css and we're good.
[20:37] <abentley> deryck: Cool.  I've got the polish basically done.  It now updates the batch info "1 → 5 of 256", the actions are green if active, and the actions are greyed out if not applicable.
[20:37] <deryck> abentley, very nice.
[20:38] <jcsackett> benji: just checking, is there any reason you didn't set the pre-req branch as the pre-req in the MP?
[20:39] <benji> jcsackett: heh, I just realized that I should do that
[20:39] <jcsackett> :-)
[20:39] <jcsackett> cool, if you could resubmit with that i'll pick up that MP, so i know what i should be looking at. :-)
[20:42] <jcsackett> benji: i've got it now. thanks.
[20:42] <benji> k
[20:42] <mwhudson> so the launchpad thing that kills idle transactions... is that reusable at all?
[20:43] <lifeless> pgkillidle
[20:43] <lifeless> yes
[20:43] <lifeless> isd and u1 use it already
[20:44] <lifeless> should just be a matter of asking a losa to configure it up
[20:45] <poolie> hi all
[20:50] <benji> morning, lifeless; I left a present on your doorstep (or stub's, but I figure he's somewhat distracted right now): https://code.launchpad.net/~benji/launchpad/bug-877195-db/+merge/80618
[20:51] <lifeless> benji: you're a good cat. Miaow.
[20:51] <benji> :)
[20:52] <lifeless> benji: what are lines 54 and 55 for ?
[20:52] <lifeless> benji: or is it just a confusing diff ?
[20:53] <benji> lifeless: they are to make something unrelated to my change stop barfing.  when I run "cronscripts/run_jobs.py pofile_stats" without those lines I get exceptions about them being missing
[20:54] <lifeless> benji: if you copy lines 61 and 62 up, then bzr may anchor the diff better
[20:54] <lifeless> its a confusing diff basically.
[20:54] <mwhudson> lifeless: ta
[20:54] <benji> lifeless: will do
[20:54] <lifeless> benji: more importantly, you need to bzr add your schema change
[20:54] <benji> lifeless: ooh, indeed
[20:54] <lifeless> benji: also, this should be in two patches
[20:55] <lifeless> benji: a) schema only
[20:55] <lifeless> benji: b) the lazr.conf and any other code things
[20:55] <lifeless> benji: because we don't deploy from db-devel
[20:56] <benji> lifeless: oh, so I should move the schema-lazr.conf changes to the code branch; ok
[20:56] <lifeless> benji: yah, db-schema != config-schema
[21:00] <jcsackett> benji: r=me on the code changes as is. i gather from the above you'll have another branch with the schema changes? :-P
[21:00] <benji> jcsackett: yep; thanks!
[21:01] <jcsackett> sinzui: the post-oneiric-install script you pointed me at last week is somewhat frightening in all that it does. :-P
[21:02] <sinzui> yes, but you should consider that that script is very similar to what many deb packaging scripts do at build and install phases
[21:02] <flacoste> benji: i think your code has a tiny but (i think benign) race condition
[21:02] <flacoste> benji: it could still end up creating two POFile jobs
[21:03] <flacoste> since there is no constraint preventing two from existing at the same time
[21:03] <lifeless> flacoste: uhm hi
[21:03] <flacoste> but i think we don't care about that, right?
[21:03] <sinzui> I imagine that a tuning script framework could be written to get the most out of several lines of computers. I have a Sony that also needs special drivers configured for the keyboard to rock
[21:03] <benji> flacoste: hmm, you may be right
[21:03] <lifeless> flacoste: I've just replied to your new RT - I think we need to coordinate on that a bit more :)
[21:03] <flacoste> it would simply waste some CPU cycle
[21:04] <flacoste> lifeless: that last statement was addressed to benji, just in case you got hit by paranoia overnight
[21:04] <flacoste> :-)
[21:04] <lifeless> flacoste: :)
[21:04] <lifeless> no paranoia, I *know* they are out to get me!
[21:05] <flacoste> that's what i tell myself whenever i see an (american) football scrum!
[21:05] <flacoste> they are plotting against me!
[21:06] <flacoste> lifeless: so basically, the DB load problem isn't for the normal situation, but for the fail-over one?
[21:07] <lifeless> flacoste: theres a few interlocking things
[21:07] <lifeless> let me just pull up some reports
[21:10] <lifeless> flacoste: this might be easier in voice
[21:11] <flacoste> lifeless: then let's skype, there are a couple of things to catch up on from the LP/IS call
[21:11] <benji> flacoste: yeah, there is a race there (well, it's not technically a race, but the opportunity for the same POFile to be scheduled for an update multiple times); since updates take 20 or 30 seconds and they could be scheduled tens of times in a particular update window, I'm inclined to collapse them if you don't object
[21:11] <flacoste> benji: sure
[21:11] <flacoste> benji: you mean at processing time?
[21:11] <jcsackett> sinzui: that seems reasonable. i'm just trying to sort out what differs between macbook5,1 and macbookair4,2.
[21:12] <benji> flacoste: I figured I'd do it at job creation time, but in a simple, actually racy way that would eliminate the vast majority of duplicate jobs but still be very straightforward
[21:13] <flacoste> benji: makes sense
[21:13] <flacoste> soviet engineering
[21:13] <flacoste> the Launchpad way :-)
[21:16] <benji> flacoste: makes me think of the US SR-71 spy plane, it got so hot during flight that it was built with gaps, so when they fueled it up on the ground it would leak until it got into the air and the gaps would close
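benji's "simple, actually racy" collapse at job-creation time might be sketched like this (hypothetical names and an in-memory set standing in for the database; as discussed above, without a DB uniqueness constraint a concurrent creator can still slip through, which only wastes some CPU cycles):

```python
# Best-effort dedupe: skip creating a job when an unfinished one
# already exists for the same POFile.
_pending = set()

def schedule_update(pofile_id):
    """Schedule a stats-update job unless one is already pending."""
    if pofile_id in _pending:   # check ...
        return False            # duplicate collapsed, nothing created
    _pending.add(pofile_id)     # ... then act: this gap is the benign race
    return True

def job_done(pofile_id):
    """Called when a job completes, so a fresh one can be scheduled."""
    _pending.discard(pofile_id)
```

The design choice is deliberate: eliminating the vast majority of duplicates with a cheap check beats enforcing strict uniqueness, because a rare duplicate job is harmless.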
[21:21] <benji> lifeless: https://code.launchpad.net/~benji/launchpad/bug-877195-db/+merge/80618 looks better now
[21:56] <lifeless> benji: are you intending to address bug 781274 as part of your arc?
[21:58] <jcsackett> sinzui: i'll be a few minutes late to standup.
[22:04] <lifeless> benji: reviewed
[22:05] <benji> lifeless: thanks; I'll take a look at that bug tomorrow and see what I can do with it
[22:09] <poolie> benji: istr lots of people got cancer from dealing with the consequences of that design
[22:10] <poolie> how naive would it be to want to move some build jobs onto ec2 instances (not necc AWS ec2)
[22:10] <poolie> actually that might be too big of a question for irc
[22:11] <benji> poolie: well, that's quite a symptom for trying to make a change to software, but I like a challenge
[22:13] <lifeless> poolie: build jobs?
[22:13] <poolie> sr-71 leaking fuel
[22:14] <poolie> imbw
[22:15] <lifeless> poolie: '11:10 < poolie> how naive would it be to want to move some build jobs onto ec2 instances (not necc AWS ec2)
[22:15] <lifeless> '
[22:15] <poolie> oh, yes, build jobs
[22:15] <lifeless> what do you mean by build jobs, specifically.
[22:15] <poolie> the work done by buildds
[22:15] <poolie> to accommodate people who want more or larger builders
[22:15] <poolie> i realize there are trust issues about running some jobs on hardware we don't specifically control
[22:15] <lifeless> so, buildds are trusted machines, because their output runs as root on other peoples machines.
[22:15] <poolie> but perhaps for other jobs it would be an ok tradeoff
[22:16] <lifeless> ec2 is very expensive in lots of ways
[22:16] <lifeless> so even if we handwave the trust aspect away
[22:16] <lifeless> I doubt it would make financial sense
[22:16] <lifeless> internal cloud, perhaps.
[22:17] <poolie> well, at the moment the price is infinite :)
[22:17] <lifeless> (but that's what the builders basically are anyhow, except they have a -very- tuned image-start process: we can bring up a slave in a second or so)
[22:18] <poolie> anyhow, so the answer is basically:
[22:18] <poolie> trust, and cost?
[22:19] <lifeless> those are the nontechnical bits yes
[22:19] <jml> poolie: people have asked for that though
[22:19] <jml> poolie: openstack, notably
[22:19] <lifeless> technically, we'd need some fairly deep changes to the idea of a builder (ephemeral vs static), and obviously the implementation of protocols to deal with ec2
[22:20] <poolie> jml, right, it seems like trust and cost is something that a user can make their own decision about
[22:20] <jml> who really dislike the fact that their development process can be disrupted by the whims of Ubuntu
[22:20] <poolie> assuming it's technically feasible for them to pay the financial cost
[22:20] <jml> poolie: and also financially feasible :D
[22:20] <StevenK> jml: But it can be anyway -- if they are building against precise, library changes can impact them.
[22:20] <lifeless> The way I would address trust and cost is to allow a ppa to choose to use our cloud (existing builders) or supply their own cloud credentials
[22:21] <poolie> yep
[22:21] <jml> StevenK: but again, that's something they can choose
[22:21] <poolie> so my question was really how deep that is
[22:21] <jml> lifeless: you might also want info about that choice on the PPA page for prospective downloaders.
[22:21] <lifeless> poolie: a couple of man-weeks + review, deployment, and friction
[22:22] <lifeless> jml: yes, and/or even in the archive metadata
[22:22] <poolie> thanks
[22:22] <jml> *nod*
[22:22] <lifeless> jml: I don't think the thinking about it is complete yet
[22:22] <poolie> jml, info for prospective downloaders about where it was built?
[22:22] <lifeless> poolie: about whether it's trustable like things built on our cloud
[22:22] <lifeless> poolie: e.g. was it actually built from the source we think it was.
[22:23] <lifeless> poolie: for something out of our control the answer has to be 'unknown'
[22:23] <poolie> i suppose there is a channel there whereby running on aws gives them a chance to inject things not visible in the source
[22:23] <poolie> right
[22:23] <poolie> though, this is a little academic if you're already trusting the author to provide all the source
[22:23] <lifeless> perhaps :)
[22:23] <lifeless> see under reflections on trust
[22:24] <poolie> 'on trusting trust'
[22:24] <poolie> sure
[22:25] <lifeless> yes, that one :)
[22:25] <poolie> i'm just saying that's a bit academic compared to the way people treat ppas today
[22:25] <lifeless> e.g. if openstack point at a rackspace cloud because they trust it
[22:25] <lifeless> etc
[22:25] <lifeless> poolie: there are folk in the distro deeply concerned about the way people treat ppas today ;)
[22:26] <poolie> for instance that every ppa can upgrade any package
[22:26] <lifeless> -> because it installs as root
[22:26] <lifeless> -> and can change anything
[22:26] <poolie> sure, i sympathize with them, i just don't think the idea that AWS is malicious or subverted would be very high on the list
[22:27] <lifeless> There is a 'any unnecessary risk is too much' mentality, which to a large extent I agree with.
[22:27] <lifeless> in this area
[22:28] <poolie> it's true this is opening up some new risks
[22:28] <poolie> that would be a tradeoff against allowing these people to use the system at all, vs not building packages, or building them on entirely separate machinery
[22:28] <poolie> or not getting a good quality of service
[22:44] <lifeless> well
[22:45] <lifeless> yes, but that doesn't mean we should do it :) - the tradeoff may not be worth it.
[22:50] <poolie> yes :)
[23:20] <poolie> can i navigate back to previous builds from https://code.launchpad.net/~neon/+recipe/project-neon-kdesdk
[23:28] <wgrant> I don't believe so.
[23:36] <wgrant> Grar
[23:36] <wgrant> More string interpolation to generate HTML.
[23:36] <wgrant> Without escaping.
[23:43] <poolie> wgrant:  at least one build has passed of a package that was previously reported to be failing
[23:43] <poolie> https://launchpad.net/~bzr/+archive/daily/+recipebuild/108303
[23:44] <wgrant> That's an odd place to build it into :)
[23:45] <poolie> gar, unix copy and paste again
[23:46] <poolie> no, actually
[23:46] <poolie> i was fooled by the 'request builds' having an interesting default
[23:47] <wgrant> Yeah, the alphabetically first PPA by display name.
[23:47] <poolie> obviously
[23:48] <poolie> can i cancel it?
[23:48] <wgrant> It depwaited.
[23:49] <wgrant> Oh, the new build in ppa:mbp?
[23:49] <wgrant> I can cancel it if you want.
[23:49] <poolie> ah which does not actually mean "i'm waiting" but "i'm not waiting"
[23:49] <poolie> no, the new build into my ppa is fine
[23:49] <wgrant> For binary builds it means "i'm waiting".
[23:49] <wgrant> But not for recipe builds.
[23:57] <poolie> ah ok
[23:58] <poolie> ok, so on a later attempt, the build into ~mbp failed, but i think after all the bzr work completed
[23:58] <poolie> and just because of an actual build problem in the source
[23:59] <poolie> so, it's a good sign