[02:46] <wgrant> Ew.
[02:46] <wgrant> Lots of odd MP OOPSes...
[02:46] <wgrant> Can't see how it could be a regression, though.
[02:52] <lifeless> mp creation was timing out for the package importer yesterday
[02:52] <wgrant> It was OOPSing, not timing out.
[02:53] <lifeless> true
[02:54] <lifeless> they were getting 500's and not seeing the oops id
[02:54] <lifeless> so I was guessing
[02:54] <wgrant> Ah.
[02:55] <lifeless> bah
[02:55] <lifeless> BMP index still timing out
[02:55] <wgrant> :(
[02:55] <lifeless> https://lp-oops.canonical.com/oops.py/?oopsid=1910B1069#longstatements
[02:56] <lifeless> oh right, that wasn't deployed
[02:57] <wgrant> You mean the one that landed 64 minutes ago?
[02:57] <lifeless> yeah
[02:57] <wgrant> Yeah
[02:58] <lifeless> 1	2079.0	1	SQL-launchpad-main-slave	
[02:58] <lifeless> SELECT BranchMergeProposal.commit_
[03:00] <lifeless> bah, terrible plan
[03:07] <lifeless> bug 742916 if you're interested
[03:07] <_mup_> Bug #742916: BranchMergeProposal:+index timeouts - slow query plan <timeout> <Launchpad itself:Triaged> < https://launchpad.net/bugs/742916 >
[03:09] <lifeless> BranchRevision needs revision_date denormalised into it
[03:19] <wgrant> lifeless: No.
[03:19] <wgrant> That will double the size of our biggest table, so I think not :)
[03:23] <lifeless> it's only 34GB
[03:25] <wgrant> True. It's the indices that are really big :/
[03:25] <lifeless> wgrant: we're currently bringing 10K rows in at a time
[03:25] <lifeless> wgrant: a bigger table (and another index - shrug) - which stays mostly on disk, would be a net win.
[03:30] <lifeless> wgrant: select count(*) from revision where Revision.revision_date >= '2009-11-05 00:21:45.372993+00:00' and Revision.revision_date <= '2009-11-06 15:52:25.310401';
[03:30] <lifeless>  count
[03:30] <lifeless> -------
[03:30] <lifeless>   9722
[03:31] <lifeless> I suspect a recursive graph traversal query will be faster
[03:31] <lifeless> and we should investigate that before denormalising
[03:32] <wgrant> lifeless: I personally prefer the "don't store 50000 times more data than necessary" approach.
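The two approaches being debated above can be sketched side by side. This is a hypothetical, heavily simplified model (single-parent ancestry, SQLite in memory) of the real PostgreSQL Revision/BranchRevision tables, just to contrast the date-range scan lifeless pasted with a recursive graph-traversal query:

```python
import sqlite3

# Hypothetical, simplified schema: the real Launchpad tables are
# larger, multi-parent, and live in PostgreSQL.
con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE revision (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER,           -- single-parent simplification
    revision_date TEXT
)""")
con.executemany(
    "INSERT INTO revision VALUES (?, ?, ?)",
    [(1, None, "2009-11-04"), (2, 1, "2009-11-05"),
     (3, 2, "2009-11-06"), (4, 3, "2009-11-07")])

# Date-range approach, as in the count(*) query pasted above:
in_range = con.execute(
    "SELECT count(*) FROM revision "
    "WHERE revision_date >= ? AND revision_date <= ?",
    ("2009-11-05", "2009-11-06")).fetchone()[0]
print(in_range)  # 2

# Recursive graph traversal: walk ancestry back from a tip
# revision instead of scanning by date.
ancestors = con.execute("""
    WITH RECURSIVE ancestry(id) AS (
        SELECT id FROM revision WHERE id = ?
        UNION
        SELECT r.parent_id FROM revision r
        JOIN ancestry a ON r.id = a.id
        WHERE r.parent_id IS NOT NULL
    )
    SELECT count(*) FROM ancestry
""", (4,)).fetchone()[0]
print(ancestors)  # 4 (the tip plus its three ancestors)
```

The trade-off under discussion: the traversal follows indexed parent links rather than a date index over the whole table, which is why it might avoid denormalising revision_date.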
[03:34] <wgrant> lifeless: What do you know about bugnotification?
[03:34] <wgrant> https://lpstats.canonical.com/graphs/TableRowCountbugnotification/20100327/20110327/ is somewhat interesting.
[03:35] <wgrant> The trend from the start of Feb hasn't been seen before.
[03:36] <lifeless> https://lpstats.canonical.com/graphs/TableRowCountbugnotification/20090327/20100327/
[03:39] <wgrant> I guess the pre-release rush could just have been brought all the way back to the start of Feb rather than March.
[03:39] <wgrant> But mmm.
[03:44] <lifeless> so it's a short-lived audit trail + work queue
[03:45] <lifeless> on qastaging
[03:45] <lifeless> select min(date_emailed) from bugnotification;
[03:45] <lifeless>             min
[03:45] <lifeless> ---------------------------
[03:45] <lifeless>  2011-01-14 23:05:53.30206
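The "short-lived audit trail + work queue" pattern and the min(date_emailed) probe above can be sketched as follows. This is an illustrative model, not the real bugnotification schema; the column name date_emailed is borrowed from the query, everything else is assumed:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical work-queue table: rows are queued (date_emailed NULL),
# emailed, and eventually pruned once old enough.
con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE bugnotification (
    id INTEGER PRIMARY KEY,
    date_emailed TEXT            -- NULL while still queued
)""")
now = datetime(2011, 3, 27)
con.executemany(
    "INSERT INTO bugnotification (date_emailed) VALUES (?)",
    [((now - timedelta(days=d)).isoformat(),) for d in (1, 30, 90)]
    + [(None,)])                 # one row still awaiting email

# Prune the audit trail: drop rows emailed more than 60 days ago.
cutoff = (now - timedelta(days=60)).isoformat()
con.execute(
    "DELETE FROM bugnotification "
    "WHERE date_emailed IS NOT NULL AND date_emailed < ?", (cutoff,))
remaining = con.execute(
    "SELECT count(*) FROM bugnotification").fetchone()[0]
print(remaining)  # 3 (two recent rows + the still-queued NULL row)
```

If pruning like this only started running in mid-January, it would explain why min(date_emailed) on qastaging is 2011-01-14 while the row-count graph changed trend from the start of Feb.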
[04:36] <LPCIBot> Project windmill build #105: STILL FAILING in 1 hr 9 min: https://lpci.wedontsleep.org/job/windmill/105/
[06:34] <wallyworld> thumper: ping
[08:54] <LPCIBot> Project windmill build #106: STILL FAILING in 1 hr 11 min: https://lpci.wedontsleep.org/job/windmill/106/
[08:59] <LPCIBot> Yippie, build fixed!
[08:59] <LPCIBot> Project db-devel build #492: FIXED in 5 hr 19 min: https://lpci.wedontsleep.org/job/db-devel/492/
[11:12] <cjwatson> 7990 N T Mar 26 William Grant          Test results: tar-xz => devel: FAILURE
[11:12] <cjwatson> 7991 N   Mar 26 noreply@launchp        [Merge] lp:~cjwatson/launchpad/tar-xz into lp:launchpad
[11:13] <cjwatson> wgrant: ^- do I need to do something about that failure, or did it get resolved before merge?
[11:15] <StevenK> cjwatson: It has hit devel as of r12667, and the LP buildbot is currently building that rev -- so we'll find out in about 37 minutes.
[11:15] <StevenK> There are currently no test failures, though.
[11:19] <cjwatson> the failures look legit, investigating
[11:21] <cjwatson> unfortunately I don't appear to be able to run the uploadprocessor tests locally
[11:21] <cjwatson> I've pushed what I think is a fix to lp:~cjwatson/launchpad/tar-xz
[11:22] <wgrant> cjwatson: Ah, sorry, forgot to reply. I fixed that myself and landed it.
[11:22] <wgrant> cjwatson: Why can't you run it locally?
[11:24] <cjwatson> setting up all the necessary postgres stuff for full LP test runs impedes my ability to do Ubuntu boot speed work
[11:24] <cjwatson> ah, if you fixed it yourself, great
[11:25] <cjwatson> wgrant: you might want to check my tar-xz branch all the same - I had apparently forgotten to add the .tar.xz file to a second set of test data that hangs off the first
[11:25] <cjwatson> it seemed to pass anyway but it's more correct this way, I think
[11:31] <wgrant> cjwatson: Lots of us run LP in a VM.
[11:31] <wgrant> Because it is slightly intrusive.
[11:32] <wgrant> (Unity and r600g are really not good friends at the moment)
[13:06] <LPCIBot> Project windmill build #107: STILL FAILING in 1 hr 9 min: https://lpci.wedontsleep.org/job/windmill/107/
[14:26] <james_w> is there a bug for whatever is causing the package importer to fail with a 500 error? I was told it was a timeout error that a fix was available for
[14:38] <wgrant> james_w: It's not a timeout. I suspect that it's a bug in the importer... it's OOPSing with a mismatch between the number of reviewers and review types in the request.
[14:39] <wgrant> james_w: https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1910A1001
[14:44] <wgrant> james_w: It looks like the change landed in lp:udd three weeks ago, but perhaps it was only deployed a couple of days back?
[14:55] <cjwatson> wgrant: I don't work on LP often enough to have spent time setting that kind of thing up, TBH
[16:04] <james_w> wgrant, that's quite possible
[19:20] <lifeless> moin
[20:39] <lifeless> we should upgrade to bzr 2.3