=== Ursinha is now known as Ursinha-afk [03:15] spm: Hi. I'm trying to QA my fix for bug #592573. The differences between the right panels on https://launchpad.net/builders and https://edge.launchpad.net/builders show that there are ~146 builds in some sort of limbo. Do you have a moment to do a bit of DB poking to work out what they are and why? [03:15] <_mup_> Bug #592573: BuilderSet.getBuildQueueSizes doesn't consider non-binary builds [03:19] wgrant: hrm. curious. sure. I guess the ones you have via the bug is a good starter? [03:27] spm: SELECT id, builder, lastscore, job, job_type, processor FROM buildqueue WHERE virtualized=true; [03:27] At the moment that query *should* be empty. [03:28] But it will probably return around 61 rows [03:28] (23461 rows) ho ho ho ho [03:28] Oh, oops. Forgot a join. [03:28] heh, np [03:28] SELECT id, builder, lastscore, job, job_type, processor FROM buildqueue JOIN job ON job.id = buildqueue.job WHERE virtualized=true AND job.status = 0; [03:29] you did get the '61', just missed the 23,400 as well. so.. not too shabby? [03:29] SELECT buildqueue.id, builder, lastscore, job, job_type, processor FROM buildqueue JOIN job ON job.id = buildqueue.job WHERE virtualized=true AND job.status = 0; [03:29] yarp. 61 rows [03:30] one looks suspiciously old. very low job# compared to the rest [03:30] There should be nothing sensitive there (I excluded logtail). Can you pastebin, please? [03:31] http://paste.ubuntu.com/466237/ [03:31] Oh wow :( [03:31] hrm? in what sense? [03:31] Can you 'SELECT MAX(id) FROM buildqueue;' just to see how old those are? [03:32] 3708959 eek [03:32] Ah, so nothing new. Good. [03:32] Just from the early days of the move to the job system, I suspect. [03:33] (those are all queued builds that are being ignored, so are somehow corrupt) [03:35] ahh I see. is this something that should be cleaned up? or can be happily ignored? [03:36] We need to clean it up. I guess I'll talk to Julian about it. [03:37] oki, ta [03:37] SELECT * FROM buildpackagejob WHERE job=2691238; [03:38] I suspect it's the BuildPackageJobs that are missing. [03:38] id | job | build [03:38] --------+---------+--------- [03:38] 495749 | 2691238 | 1684155 [03:38] Damn. [03:45] spm: ola amigo. como estas [03:45] spm: or, should I say - ¿como estas? [03:46] mtaylor: alas, my spanish is limited to that picked up via overhearing Dora the Explorer. So beyond Hola and Gracios(sp?) you've lost me :-) [03:46] spm: ¿como estas? means "what's up?" - although I honestly don't speak much myself [03:47] heh [03:47] but I'm in cozumel, mx, so I'm trying to make an effort past "uno mas magarita, por favor" [03:48] spm: One last one: [03:48] SELECT buildqueue.id, builder, lastscore, buildqueue.job, job_type, processor, build FROM buildqueue JOIN job ON job.id = buildqueue.job JOIN buildpackagejob ON buildpackagejob.job = job.id WHERE virtualized=true AND job.status = 0; [03:48] Then I can examine the builds myself. [03:48] mtaylor: ha! [03:49] * mtaylor cringes that you're implementing queues in the database... then decides to shut his mouth before he's asked to fix it [03:49] wgrant: http://paste.ubuntu.com/466245/ [03:49] what do you mean *implementing*?!? implemented. :-) [03:50] *shudder* :) [03:50] mtaylor: Before my time :( [03:50] This is five years old :( [03:50] I believe plans are to move to something like rabbit, or whatever. but nfi. 
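For reference, a consolidated form of the limbo-build diagnostic query built up above. This is a sketch only: it combines the columns wgrant asked for with the job date and build columns that come up later in the log, and assumes the same schema the pastes show (buildqueue joined to job via buildqueue.job, buildpackagejob linking job to build; job.status = 0 is the "pending" filter used above).

    -- Illustrative only: pending virtualised build queue entries with their
    -- linked builds and creation dates, per the schema visible in the log.
    SELECT buildqueue.id, buildqueue.builder, buildqueue.lastscore,
           buildqueue.job, buildqueue.job_type, buildqueue.processor,
           buildpackagejob.build, job.date_created
      FROM buildqueue
      JOIN job ON job.id = buildqueue.job
      JOIN buildpackagejob ON buildpackagejob.job = job.id
     WHERE buildqueue.virtualized = TRUE
       AND job.status = 0;          -- the "pending" job status used above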
[03:50] wgrant: it's ok - in my days as a mysql consultant, I saw _many_ _MANY_ thing implemented inside of an RDBMS that didn't belong there [03:51] but my favorite things people mis-use dbs for are queues and email [03:52] free tip #1 from formerly over-paid db consultant - if your app needs some sort of message system that's similar to email ... USE EMAIL SERVERS .. don't half-way implement private broken email in database tables [03:52] spm: :) [03:52] spm: Ah, it's not a bug after all. Most of those are builds that were cancelled by manual SQL to mark them superseded. [03:52] ah [03:53] Not all, but most. I'll work out how to clean them up. [03:53] Thanks. [03:53] mtaylor: that sounds suspiciously like heresy. email for email!?!? I mean, srsly!?!? [03:54] wgrant: np [03:55] spm: I know - right? you should have seen the look on the client's face the first time I suggested that as the fix for their performance isues [03:55] issues [03:55] "um, hai! you've implemented email in php ... try learning the internet" [03:55] heh [03:56] NIH. Alive and well. [03:56] yup. also "I don't want to learn anything" [04:58] morning [05:19] * wgrant is still looking for someone to land three branches. [05:29] here's two fun lines to have next to each other: [05:29] from canonical.launchpad.layers import WebServiceLayer [05:29] from canonical.testing.layers import FunctionalLayer [05:43] mwhudson: can you help wgrant out? [05:43] pretty please? [05:46] ah right [05:46] wgrant: url me up [05:47] mwhudson: lp:~wgrant/launchpad/bug-598345-restrict-dep-contexts, lp:~wgrant/launchpad/refactor-_dominateBinary and lp:~wgrant/launchpad/really-publish-ppa-ddebs [05:48] * mwhudson grrs at not being able to give branch urls to ec2 land [05:49] Can't you? [05:49] I thought you could now. [05:49] well i didn't work for these [05:55] Does anyone happen to know if BFB works for Hardy yet? [05:59] wgrant: ok, three instances started [06:00] wgrant: did it not work for hardy at some point? [06:01] mwhudson: Oh, right, I remember now. [06:01] Its bzr was old, so it didn't work with 2a. [06:03] Thanks for sending those off. [06:23] mwhudson: Ah, no, it is really broken like this: http://launchpadlibrarian.net/52193804/buildlog.txt.gz [06:23] bzr-builder: Depends: python-debian but it is not going to be installed [06:24] wgrant: weee [06:26] * wgrant builds it the old way :( [08:03] wgrant: hi [08:39] lifeless: Hi. [08:40] mtaylor, re: Tarmac needing commit message set... The reasoning is because Tarmac doesn't know what to set the commit message to when it commits if you don't specify it. [08:40] mtaylor, I realize that this is rather awkward. Merge queues will fix that. [08:40] * rockstar keeps rolling the squeaky wheel [08:40] what advice do we normally get to people who don't receive the account confirmation mail? [08:41] check spam [08:41] SSO does hate some domains, though. [08:43] or they hate us? [08:43] is there any escape? [08:44] Well, OK, one of my email aliases that is handled by a Canonical machine does not get LP or SSO email. [08:44] it becomes a losa ping [08:44] Everything else works. [08:44] So there's something not quite right about some email setup somewhere. [08:44] wgrant: orly? have you filed an rt? [08:45] lifeless: The address isn't important to me, so I never really bothered. [08:48] lifeless: What was that ping before about? [08:48] buildds [08:48] seen the thread on deployment? [08:48] Yes. [08:48] They don't care if the connection dies. [08:48] can you confirm or deny slave behaviour ? 
[08:49] You can kill buildd-manager at any time, and it will be fine. [08:49] \o/ [08:49] elmo: ^ [08:49] wgrant: what other things can go wrong [08:49] buildd-manager and the protocol are pretty stateless. [08:49] (apart from DB build state) [08:49] sure [08:50] wgrant: is there anything that can be done to make the publisher runs able to be interrupted without causing havoc ? [08:50] lifeless: Um. [08:50] They shouldn't be tooo bad at the moment. [08:50] any manual recovery == too bad [08:50] For PPAs it would be really bad. [08:50] For primary it should just about work. [08:51] (primary has atomic dists/ update; PPAs do not) === almaisan-away is now known as al-maisan [08:54] Phase A (file publishing) is fine, since it doesn't matter if we publish something again on the next run. [08:55] B (domination) is all in one transaction, and doesn't touch the filesystem at all. [08:55] But C and D touch indices, so will leave the archive inconsistent. [08:56] Hm, actually, it's slightly worse. [08:56] If we get through A fine, commit, then get terminated, there'll be no dirty pockets to force index regeneration when everything is rerun. [08:57] 2pc needed ? [08:57] That would mean a half-hour transaction. [08:57] ugh [08:57] (alternatively we could make the publisher less goddamn slow) [08:57] or start domination later ? [08:58] So, there are two problems: [08:58] there are two sorts of people in the world .... [08:58] 1) We rely on state internal to the publisher process to work out whether we need to regenerate indices. [08:58] 2) Termination during index generation will leave the archive inconsistent for at least a few minutes. [08:59] seems like they are both addressed by making indice regeneration into a separate task [08:59] pipeline-like [08:59] Applying the atomic dists/ update system to all archives would solve #2 simply. [09:00] For #1... we may need to store dirty pockets in the DB. [09:00] also less code variation - ++ [09:03] Perhaps. [09:03] But the way it's done now is not exactly... acceptable. [09:03] GUten morgen. [09:06] wgrant: are bugs filed to fix it? [09:07] lifeless: It's cron.publish-ftpmaster. Everyone is terrified. [09:07] is it risk ? [09:09] It's evil and does some things that nobody understands any more. [09:09] Well, I guess that's just dsync. [09:13] Morning bigjools. [09:13] You may be, er, interested to know that we have somewhere around 146 inconsistent builds on production. [09:13] They have BuildQueues and Jobs, but are not actually pending. [09:15] sigh [09:15] givebacks? [09:15] Some are SUPERSEDED, others are FULLYBUILT. [09:15] did I just catch lifeless looking at cron.publish-ftpmaster? [09:15] The former can be explained by LOSAs manually cancelling builds, I suppose. [09:16] I expect so - doing it wrong. Are they package builds? [09:16] binary that is [09:16] They're all BPBs, yes. [09:16] are they rebuilds? [09:16] http://paste.ubuntu.com/466245/ are the virtualized ones. Although there are a couple of legitimate builds that snuck in there. [09:17] how was this discovered? [09:17] The ones with score == 0 are the buggy ones. [09:17] I was QAing my getBuildQueueSizes change. [09:17] Compare the build queue sizes on edge and production. [09:17] sigh [09:18] lastscore=0 indicates a retry [09:18] Ah, true. I guess it would be NULL if they'd just finished naturally. [09:19] They're fortunately all fairly old. 
[09:19] I need to clean up the rebuild anyway [09:20] There are 80 or so non-virt builds which I didn't query for, since the non-virt build queues are large. [09:20] how old? [09:21] Well, we're up to BQ 3700000 or so. [09:21] Most of them are around 3330000 [09:21] did you get any dates? [09:22] They should be in Job, but I didn't query for them, no. [09:22] I decided I'd bothered people enough. [09:23] it's so much harder to query across jobs now :( [09:24] http://paste.ubuntu.com/466245/ has the query [09:24] I have some pre-potted ones [09:24] Just throw a couple of extra columns like BPB.status and Job.date_created in, I suppose. [09:25] argh, I can't even do this on staging any more [09:25] No perms? [09:25] no, we wipe the queues when restoring [09:26] to stop staging collecting build from production [09:26] Oh, true. [09:26] So, the few builds on there from before 3300000 are all SUPERSEDED. [09:26] Then there are a couple around 3300000 [09:26] And the rest around 3330000 [09:27] I haven't checked the statuses for the last two categories, though [09:32] wgrant: there's one (!) like that on dogfood [09:33] bigjools: What's its status? [09:33] Build status, not Job status. [09:37] * bigjools rides SQL wild horses [09:38] bigjools: asking about things that will make deployment hard [09:38] bigjools: easy deployment is a really important thing for iterating on performance at faster than monthly cycles. [09:38] lifeless: do you know how it will make it hard? [09:39] bigjools: no idea, wgrant mentioned it was all [09:39] other than being a PoS bash script, I don't see the problem [09:39] bigjools: for the publisher specifically I'm told that interrupting it causes damaged PPA archives until its run again, and in some cases until the PPA has a new build and the publisher runs again. [09:40] yes it's possible, but easy to fix [09:40] bigjools: If we die during !primary index generation, the indices will be inconsistent until the next run. [09:40] And if we die before index generation, there'll be no dirty pockets, so no index regeneration will occur. [09:40] we need to write to a tmp dir like ubuntu does [09:40] That's what I suggested. [09:41] But it's not trivial to port, given how it's done now. [09:41] and remove the stupid partial commits [09:41] We can't do that until we publish everything really quickly. [09:41] bigjools: the impact of that on the deployment story is that we have to be careful about when we start deployments, shutting down the task well in advance, which adds latency and that adds perceived downtime. [09:41] I think a half-hour transaction over the whole primary archive would make stub cry a bit. [09:42] none of this is intractable, but the more steps that are needed, the more coordination, the slower the process is. [09:42] yeah, whoever wrote the publisher knew nothing about recoverability [09:43] yup [09:43] bigjools: Alternatively, we could write indices atomically like the primary archive, and store dirty pockets somewhere persistent. [09:43] Everything else should work. [09:43] wgrant: everything you do to make the system more resilient, j'adore [09:43] me likey atomic [09:44] Or we make the publisher complete in a few seconds, and do it all in one transaction. [09:44] But I can't get it much below two minutes. [09:44] can you make it pipeline / incremental ? [09:45] The thing that takes all the time (after optimisation) is serialising the indices. [09:45] does the db know the indices ? 
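On the "#1" problem discussed above (the publisher only knows which pockets are dirty in its own process memory, so a kill between commit and index generation loses that information), a minimal sketch of what persisting the dirty-pocket state in the database might look like. This is purely illustrative: the table, the column names, and the values inserted are invented here, not the actual Launchpad schema or any agreed plan.

    -- Hypothetical table; all names and values are invented for illustration.
    CREATE TABLE publisher_dirty_pocket (
        archive integer NOT NULL,
        distroseries integer NOT NULL,
        pocket integer NOT NULL,
        date_marked timestamp without time zone DEFAULT now(),
        PRIMARY KEY (archive, distroseries, pocket)
    );

    -- Phase A would record each pocket it touches before committing...
    INSERT INTO publisher_dirty_pocket (archive, distroseries, pocket)
        VALUES (1, 1, 0);

    -- ...and a later (or restarted) run clears the row only once the
    -- indices for that pocket have been regenerated.
    DELETE FROM publisher_dirty_pocket
        WHERE archive = 1 AND distroseries = 1 AND pocket = 0;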
[09:45] pipelining is possible but has ramifications [09:46] could you, just commit, then do the indices as a read-back from the db ? [09:46] (in principle) [09:48] lifeless: you mean lazy generation? [09:48] not sure I follow [09:49] I'm fixing the conflict, btw [09:50] is there any meaningful difference between IStore(self) and Store.of(self)? [09:50] IStore uses c.l.webapp.adapter.get_store [09:51] I think the second reads more easily [09:51] Does the former respect the master/slave policy? [09:51] bigjools: I mean doing the transaction commit as soon as possible [09:51] bigjools: and generating the indices from the result of the commit, not within the commit [09:52] lifeless: The problem is that if we commit before doing indices, we don't know that we need to regenerate them later. [09:52] lifeless: what wgrant said [09:52] wgrant: that seems fixable [09:52] hehe - take a look at the publisher :) [09:52] lifeless: Right, by storing dirty pockets in the DB, which fixes it all. [09:52] But is very ugly. [09:53] compared to 30 minute db transactions with uninterruptable cron scripts that prevent rollouts for 30% of the day. [09:53] not ugly at all. [09:53] Shhhhh. [09:54] I'm very keen to see things here improved. [09:54] The general principle of 'do small amounts of work, often' and 'delay till outside of transactions things that don't need to be in the transaction' are very dear to me. [09:56] wgrant, the former goes straight to the master, I think. [09:56] jml: Ah. [10:04] wgrant: does this look sane to display the affected builds? http://pastebin.ubuntu.com/466361/ [10:06] wgrant: your branches were 1 of 3, did you get the emails? [10:12] mwhudson: Yeah, saw that. Thanks. [10:12] bigjools: I would have said buildfarmjob.status NOT IN (0, 6), but looks OK. [10:13] wgrant: using NOT IN makes queries slow [10:13] bigjools: Ah, I guess so. [10:13] bigjools: Also, grab buildqueue.virtualized. [10:13] why? [10:14] Why not? [10:14] Might as well get all the categorisation information. [10:16] I only omitted it from the initial query because I was restricting to virt jobs. [10:23] * wgrant grumbles about the incomplete Registry<->Soyuz split. [10:41] mwhudson: Tests fixed. Can you please rerun, if you're still around? [10:42] I guess not. [10:43] Can someone else please re-EC2 lp:~wgrant/launchpad/refactor-_dominateBinary and lp:~wgrant/launchpad/bug-598345-restrict-dep-contexts? [10:49] wgrant: problematic queue rows blitzed, how's it looking? [10:49] bigjools: So none of them were actually pending? [10:50] some where but I left those alone ;) [10:50] sigh [10:50] some *were* [10:50] That looks OK. [10:50] The numbers match now. [10:50] \\o/ [10:50] I will be really glad when we get the model rework done, and such inconsistency becomes impossible. [10:50] yarp [10:51] Because we will be able to avoid storing the status in four or five places... [10:51] although notice that all of those were from march/april [10:51] Yeah. [10:52] wgrant: https://edge.launchpad.net/~oem-archive/+archive/budapest/+build/1880257 [10:52] can you see that? [10:53] bigjools: No. Is that the one that failed to upload this morning? [10:54] yes [10:54] I considered going hunting for the upload log, but decided the search space was slightly too big. [10:54] only it says it's built properly so it looks like it dispatched twice :/ [10:54] What was the upload error? [10:54] PM [10:54] Or is it FULLYBUILT now? [10:54] ? [10:55] oH. [10:55] Right. [10:55] it's built [10:55] So it was built four times. 
[10:55] Succeeded the first and last. [10:55] But somehow was retried after the first. [10:55] Yay. [10:55] I thought we'd weeded out all of that :( [11:05] wgrant: I suspect double clicking on UI buttons [11:05] bigjools: But... transactions. [11:05] wgrant: that already causes havok on copy packages [11:05] For copies, sure. [11:05] wgrant: they don't help if the db constraints don't catch the problem [11:05] bigjools: But a retry resets the BFJ status. [11:06] They both have to update the same row. [11:06] I have a suspicion something is double-forwarding every now and then [11:06] or something [11:06] to the same thing [11:06] see the bug I filed about getting two bugs [11:06] bigjools: I really hope that Postgres doesn't accept that. [11:06] wgrant: unless there's a constraint, it will [11:06] lifeless: really? ugh [11:08] elmo: https://bugs.edge.launchpad.net/soyuz/+bug/607397 [11:08] <_mup_> Bug #607397: buildds need to survive the buildd master being upgraded [11:08] elmo: can you please describe there the build farm issue you related to me - or perhaps its no longer an issue ? [11:09] bigjools: With SERIALIZABLE as the isolation level, a concurrent update like that is prevented. [11:09] I just checked. [11:09] I wonder what Storm uses, though. [11:10] elmo: nvm [11:10] It default to serializable, as I'd hoped. [11:12] Ahh. [11:12] But LP overrides it to READ COMMITTED, and that allows it. [11:12] Damn. [11:13] bwah [11:13] all of lp is read committed ? [11:13] I think so. Need to check Postgres logs harder. [11:13] that's.... not good [11:13] * wgrant looks harder. [11:14] At least the appserver transactions immediately set it to READ COMMITTED, or so the postgresql logs show. [11:15] bah [11:15] wgrant: you broke our builds [11:15] :P [11:15] lifeless: It was too quick to be my fault :( [11:15] Build Reason: [11:15] * wgrant blames the build system. [11:15] Build Source Stamp: [branch bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel] HEAD [11:15] Blamelist: William Grant [11:15] BUILD FAILED: failed shell_6 compile [11:15] Oh yes, I got the email. [11:15] But the test suite doesn't take an hour. [11:16] So either I broke things really badly, or the build system is broken as usual. [11:16] Or someone turned on parallelisation while I wasn't looking. [11:16] no [11:17] not done yet [11:18] bigjools: Ah, wait, there's a UNIQUE on buildpackagejob.build. So you can't queue the build twice regardless of how screwed LP's default transaction level may or may not be. [11:19] that's the kind of constraint I like [11:19] So we have more hunting to do. [11:19] grar [11:19] Anything in the logs yet, or must we go librarian diving? [11:20] hang on [11:22] Hmm. The librarian GC is rather aggressive :/ [11:22] mpt: hai [11:22] mpt: lunch is @ 1 [11:22] mpt: are you planning to starve me? [11:22] It will immediately kill anything unreferenced and with no expiry date set. [11:23] lifeless, clearly, by "1" I meant "2" [11:23] awesome [11:23] Ursinha-afk: how does the oops <-> bug stuff work ? [11:24] wgrant: are you working on 78 SELECT COUNT(*) FROM Archive, BinaryPackageBuild, BuildFarmJob, PackageBuild WHERE distro_arch_se ... tus=$INT AND Archive.purpose IN ($INT,$INT) AND Archive.id = PackageBuild.archive AND ($INT=$INT): [11:24] ? [11:24] jml: Um, canonical.uuid has been gone for more than a hundred devel revisions... [11:24] lifeless: I don't really know how to fix it. [11:25] There's nothing obviously wrong with it that I can see. 
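Going back to the isolation-level exchange above ([11:09]–[11:18]), a two-session sketch of the behaviour being described: under READ COMMITTED a second concurrent update of the same row just waits and then silently succeeds, while under SERIALIZABLE it is rejected after the first commit — which is why the existing UNIQUE constraint on buildpackagejob.build is what actually prevents double-queueing. The table and values here are arbitrary stand-ins; run each session in its own psql connection.

    -- session 1:
    BEGIN ISOLATION LEVEL READ COMMITTED;
    UPDATE buildfarmjob SET status = 1 WHERE id = 42;

    -- session 2 (concurrently):
    BEGIN ISOLATION LEVEL READ COMMITTED;
    UPDATE buildfarmjob SET status = 0 WHERE id = 42;   -- blocks on session 1's row lock

    -- session 1:
    COMMIT;
    -- session 2's UPDATE now proceeds and succeeds: the second writer quietly wins.

    -- Repeat the same sequence with BEGIN ISOLATION LEVEL SERIALIZABLE and
    -- session 2 instead fails once session 1 commits:
    --   ERROR:  could not serialize access due to concurrent update
    --
    -- With LP running READ COMMITTED, the real protection against queueing a
    -- build twice is therefore the UNIQUE on buildpackagejob.build noted at
    -- [11:18]: a second INSERT for the same build is rejected outright.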
[11:25] http://paste.ubuntu.com/465800/ is the EXPLAIN ANALYZE of the other slow query, which is just about the same. [11:26] right [11:26] so dropping the count * separate query will save 5 seconds [11:26] sorry 7.7 [11:26] But that's lazr.restful. [11:26] Not sure we can do much about that. [11:26] And the query shouldn't take long at all anyway. [11:26] sure we can [11:26] is there a bug for it ? [11:26] The slow queries? [11:26] wgrant, I'm just the messenger. [11:26] the problem [11:26] I filed one a month or two ago. [11:26] whats the number [11:27] * wgrant is hunting. [11:27] If bug search was a little better... [11:27] hush [11:27] Bug #590708 [11:27] <_mup_> Bug #590708: DistroSeries.getBuildRecords often timing out [11:28] jml: Well, since I have no recent shipit, I can do nothing. [11:29] bigjools: hi - https://bugs.edge.launchpad.net/soyuz/+bug/590708 [11:29] <_mup_> Bug #590708: DistroSeries.getBuildRecords often timing out [11:30] * wgrant just commented with the paste. [11:30] lifeless: I wonder if we should test kernel delayed copies and acceptance from +queue before taking the timeout down permanently. Those are done infrequently, take ages, and it's pretty bad if they stop working. [11:34] wgrant: can you add a test plan for testing them on staging ? [11:34] wgrant: staging is at 12 seconds. [11:34] lifeless: Um, I'm not sure if testing on staging is valid. [11:35] wgrant: why wouldn't it be ? [11:35] lifeless: It sucks performance-wise. [11:36] right [11:36] so if it works on staging, we're set for prod. [11:36] True. [11:45] bigjools: Any luck? It'd be nice to get onto it before librarian-gc deletes the evidence. [11:45] wgrant: no, best start diving [11:50] bigjools: I guess you could just look for any recent restricted 'uploader.log's... [11:51] urh [11:51] Since buildd-manager's log doesn't seem to be much help in this sort of situation. [12:00] wgrant: so, I've commented and escalated this [12:00] lifeless: Thanks. [12:01] wgrant: I think lazr restful is hurting us here and we may want to change it. [12:01] lifeless: Possibly, that probably breaks all clients in the wild. [12:01] separately fixing up the query to not do table scans - +1 [12:01] flacoste: http://people.canonical.com/~flacoste/tags-burndown-report.html is not updating ? [12:01] wgrant: on this url, they are already broken. [12:02] Morning, all. [12:02] it was timing out regularly on prod before [12:02] now its just -clear- :) [12:02] hey deryck [12:02] Can someone please re-EC2 lp:~wgrant/launchpad/refactor-_dominateBinary and lp:~wgrant/launchpad/bug-598345-restrict-dep-contexts? [12:08] wgrant: I can do it locally if nobody volunteers ec2 [12:08] btw I haz logs [12:08] Ooh, logs. [12:08] Are the logs useful? [12:08] somewhat, I'm PM you === al-maisan is now known as almaisan-away === mrevell is now known as mrevell-lunch === almaisan-away is now known as al-maisan [13:15] lifeless, hi? === mrevell-lunch is now known as mrevell [13:19] adiroiban, hi, it seems there's a conflict in +templates fix of yours now (I'm trying to land it); can you please take a look and fix it :) [13:21] * wgrant still needs someone to land those two branches. [13:21] danilos: Hi. Looking... [13:23] wgrant, got MPs for me that I can just pass into "ec2 land"? (fwiw, I had some problems with ec2 land lately, so I am not promising it will work) [13:24] danilos: Thanks. 
https://code.edge.launchpad.net/~wgrant/launchpad/refactor-_dominateBinary/+merge/29667 and https://code.edge.launchpad.net/~wgrant/launchpad/bug-598345-restrict-dep-contexts/+merge/30203 are the MPs. === Ursinha-afk is now known as Ursinha [13:34] mars, hi [13:34] Hi jml, what's up? [13:34] mars, I don't understand your recent email. [13:35] jml, the "Hurray for failing fast" one? [13:35] mars, yes. the build *did* go on to fail with indecipherable errors. [13:36] build 1066, right? [13:36] According to the waterfall, I see "pull new revisions [failed]", and "compile [failed]", and that's it [13:36] According to this it never ran the test suite [13:37] mars, it still tried to. [13:37] mars, and the error the compile fails with is: zope.configuration.xmlconfig.ZopeXMLConfigurationError: File "/srv/buildbot/slaves/launchpad/devel/build/script.zcml", line 7.4-7.35 [13:37] ZopeXMLConfigurationError: File "/srv/buildbot/slaves/launchpad/devel/build/lib/canonical/configure.zcml", line 157.4-158.42 [13:37] ZopeXMLConfigurationError: File "/srv/buildbot/slaves/launchpad/devel/build/lib/canonical/shipit/configure.zcml", line 55.4 [13:37] ImportError: No module named uuid [13:37] mars, I mean, it's definitely better than running the whole test suite. [13:38] mars, which is wonderful :) [13:38] hmm [13:39] I don't know why it moved on to compile_6. Just a sec, checking the config [13:39] but if a dependent branch had changed in a subtler way, it still would have gone on to run the whole suite [13:41] interesting [13:41] my fix did *not* land [13:42] the compile steps are supposed to halt the build by default [13:42] perhaps that is what caught it [13:42] what caught it was that it's an import error [13:42] mthaddon, ping, was there any word on landing my buildbot "fail fast" config change? [13:42] and the apidoc generation has to import everything [13:42] mars: we landed it but didn't restart the builder - want me to do that now? [13:43] oh I see, there were no steps beyond the compile one. [13:43] actually, no, there were [13:43] mthaddon, sure, there are three branches in there, but we have to do it sometime :/ [13:45] jml, yes, but you are right about the subtle changes things. If it misses pulling GPG or something, then it happily goes onward into the suite. You are right, we are lucky it happened to fail and halt on the compile step. [13:45] danilos: conflict solved and I have pushed the changes [13:46] danilos: the branch should be ready for ec2-test and landing [13:46] mars, ok cool. I'll gladly watch my two branches get delayed if your fix for that gets deployed & works. [13:46] (will I have to resubmit?) [13:46] actually, I guess it's just force another rebuild [13:46] jml, nope, it will be pulled into the next build [13:46] right [13:47] * jml moves on to his next problem [13:47] how can I run pyflakes on doctests? [13:47] I seem to remember being able to do so [13:49] or, lifeless, hi [13:49] jml, if you don't find something in the list archive, it might be on the Hacking pages [13:50] jml, check your "Sent" folder, dated 13/7/2009, "[EMACS] Another flymake trick" [13:51] Ursinha: hi! ;) [13:51] lifeless, you asked about the oops <-> bug link, I assume you're talking about the bug link in the oopses? [13:51] mars, :) thanks. 
[13:51] lifeless, it was too early in this timezone when you pinged me :) [13:51] Ursinha: yeah [13:51] Ursinha: and yes, I asked for your backscroll :) [13:52] Ursinha: I also filed a bug [13:52] mars, actually, nothing useful in there :\ [13:52] lifeless, let me see [13:53] losa ping: bazaar.lp.net seems unhappy. can i haz restart? [13:53] barry: hi [13:53] barry: whats your bug # ? [13:53] jml, Ah, sorry, thought that mail was right on target. Maybe the great Warsaw knows [13:53] barry: unhappy in what way? [13:53] mthaddon, can't connect [13:54] barry: via bzr+ssh? [13:54] barry: its happy for me [13:54] lifeless, bug #? [13:54] barry: try turning on your network [13:54] http://bazaar.launchpad.net/~ubuntu-dev/ubuntu-dev-tools/trunk/files/head:/doc/ [13:54] ? [13:54] barry: you said you had a page timing out [13:54] barry: that's codebrowse - was just restarted [13:54] barry, works for me [13:54] lifeless, ah, yes [13:54] barry: also if that page comes up [13:54] lifeless, https://edge.launchpad.net/~pythoneers/+archive/py27stack4/+packages?start=0&batch=200 [13:54] lifeless, bug 607087 ? [13:54] <_mup_> Bug #607087: enable 'search by method' [13:54] barry: wait 60 seconds and hit ctrl-R [13:54] mthaddon, thanks! works now [13:54] Ursinha: no, a new one ;) [13:55] Ursinha: I'd love that one fixed too, of course ;) [13:55] hehe [13:55] Ursinha: I filed the other one in launchpad-foundations [13:55] ah [13:55] lifeless, do you have the #: [13:55] because its about our docs [13:55] uhm [13:55] barry: Is this another of those 2700-build monsters that gets deleted a couple of hours after it finishes using days of build farm time? [13:55] sec [13:55] lifeless, I can find it here, no prob [13:55] wgrant, ;) no [13:56] barry: so is that your hacked url [13:56] wgrant, just 150-ish packages [13:56] mthaddon, btw, can we update the buildbot configs trunk with my 'fail fast' patch? Or do you want to wait for it to run successfully first? [13:56] barry: or the original [13:56] lifeless, it is hacked [13:56] barry: what url fails [13:56] lifeless, bug 607680 [13:56] <_mup_> Bug #607680: documentation needed on oops<->bugs linking [13:56] barry: Ah, good. Sanity. [13:56] Ursinha: yes! [13:56] lifeless, that url. iow. normally you get batches of 50 but i want to see all packages in one page [13:56] lifeless, what links oopses to bugs is part of the oops-tools [13:57] barry, do you have a copy of pyflakes-doctest, or now where I can find one? [13:57] jml, atm, i don't [13:57] barry, thanks anyway. [13:57] jml, yeah, sorry [13:57] Ursinha: ok, thats cool. However most devs don't have that on their disk :) [13:57] * barry -> reboots. hopefully will brb [13:58] lifeless, exactly :) afaiu it's a simple mechanism that associates the "oops signature" to a bug number [13:58] ahh [13:58] it's in old versions of the tree [13:59] that's why we have incorrect links sometimes [13:59] lifeless, what exactly are you aiming with that? [14:00] lifeless: that graph was moved to lpstats [14:00] adiroiban, wgrant: all your branches are on ec2 now [14:01] lifeless: https://lpstats.canonical.com/graphs/LPQA/ [14:01] lifeless: https://lpstats.canonical.com/graphs/LPQAByTeam/ [14:01] lifeless, I have a suggestion for bug 590708, I've added it to the bug and emailed some reasoning to the list as well [14:01] <_mup_> Bug #590708: DistroSeries.getBuildRecords often timing out [14:01] danilos: Thanks. 
[14:01] bigjools, ^ [14:02] lifeless, ignore the timing differences on the bug (that's with explain analyze which usually doubles the times) and in the email though ;) [14:02] danilos: Ooh, that's good. [14:02] wgrant, the traditional translations tricks fwiw :) [14:03] wgrant, hopefully the queries are compatible :) [14:03] or, let's say equivalent [14:07] mars: which branch is that again? [14:09] Ursinha: I want to be able to make the associations, so I want the way I should do that documented :) [14:09] flacoste: ok cool, its linked from https://dev.launchpad.net/PolicyAndProcess/ZeroOOPSPolicy [14:10] flacoste: how does oops tie into that graph ? [14:10] danilos: actually the explain in the bug case adds about 1000ms if you compare to the oopses [14:12] lifeless, well, even explain analyze runs quickly (300ms) on an optimized limit 50 query for me [14:12] lifeless, ah, that's bloody simple :) to make the association when there's no association, I mean, because we still cannot edit that other that using sql directly on oops-tools application db [14:12] *other than [14:13] lifeless, I'll make sure that's documented somewhere [14:13] lifeless, but anyway, I was just trying to point out that the times I recorded are not really correct (I've ran it a number of times with both explain analyze and without), but my conclusions should generally hold for these queries [14:13] danilos: yeah [14:13] danilos: want to make a patch ? [14:14] Ursinha: ok, so how does one do it ? [14:14] lifeless, not really, I've got a few branches in my queue already :) [14:14] lifeless, it'd be good to test queries on production DB first as well [14:14] flacoste: is zerooopspolicy still active? or died-a-quiet-death ? [14:15] mthaddon: could you try danilos query on a slave please? from the bug https://launchpad.net/bugs/590708 [14:15] <_mup_> Bug #590708: DistroSeries.getBuildRecords often timing out [14:15] lifeless: am in the middle of some ISD deployments at the moment - can it wait a little? [14:15] mthaddon: of course [14:16] thx [14:16] lifeless: it's kind of a not fully implemented policy [14:16] ok [14:16] lifeless: the intent is still there, but we lack the tools to fully back it up [14:16] lifeless, if an oops doesn't have a bug associated, add the bug number to the text box where the number usually is, click "Bug #", and it's done [14:16] but in principle folk have the goahead to shelve stuff to work on oops [14:16] Ursinha: oh, on the web ui ? [14:16] lifeless: the OOPS report don't make it easy to get the "list of things" that need fixing [14:16] lifeless, as you can see in this one, for example: https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1661CCW1 [14:17] lifeless, yes sir [14:17] ah doh! I see [14:17] :) [14:17] lifeless: yes [14:17] lifeless, it clearly needs to be documented! :) [14:17] Ursinha: nah, i'm still a little tired here, its been busy ;) [14:18] Ursinha: close with 'lifeless is blind', if you like. [14:18] flacoste: ok, so lets talk about making it easy to see what needs to be fixed. [14:18] lifeless, hehe, no, it really needs to be somewhere, so we can make easy to see what needs to be done [14:18] flacoste: As I get the policy, every oops in the oops report - say just the top 15 timeouts for now - should be a high/critical bug right ? [14:19] Ursinha: does the oops report sent to the list list the bugs when there is one ? 
[14:19] lifeless, yes, right below the oops signature/count [14:20] lifeless, as you can see in the oops reports [14:21] I guess I just repeated what you said, but nevermind :) [14:21] lifeless: there should be clear list of things to fix for each team [14:22] currently, they have to look for that list across several reports and several questions [14:22] s/questions/sections/ [14:23] flacoste, I've discussed that with diogo a few times, and we need to improve the oops signature in order to better group the oopses [14:23] done that, I guess the per team's reports will be more accurate [14:23] I also would like to know from the teams which oopses are real ones and which ones are only tainting the summaries [14:23] like the checkwatches one, for example [14:24] Ursinha: best way to do that is to book a call with each TL and go over a couple of reports with them [14:24] like bug 592345 [14:25] <_mup_> Bug #592345: Checkwatches produces a lot of OOPSes that aren't real LP failures [14:25] flacoste, I started with bugs team [14:25] but I'll have to do that with every other one, yes [14:25] Ursinha: they're all real. [14:25] In my simple simple opinion [14:26] lifeless: not really [14:26] for example a 404 isn't a real OOPS [14:26] and a lot of checkwatches failure are of that sort [14:26] flacoste: here - Ursinha, We already have a backoff mechanism in place. However, now that we're tracking BugWatchActivity we can probably stop recording OOPSes for some things. [14:26] right [14:27] flacoste: I agree a 404 isn't a real oops, but the oops system already knows about that one and could trivially skip it [14:27] flacoste: (except where we have an internal ref that makes a 404) [14:28] barry: so what url was timing out again? not the hacked one, the real one. [14:30] Ursinha: help me out with the oops reports here - [14:30] the 19th report, edge errors [14:30] === Top 15 Time Outs (total of 108 unique items) === [14:30] 876 SELECT BugTask.assignee, BugTask.bug, BugTask.bugwatch, BugTask.date_assigned, BugTask.date_close ... ON BugTask.bugwatch = "_prejoin5".id [14:31] when I open the first oops up [14:31] it has a bug # [14:31] but I didn't see the bug reference on the page [14:31] lifeless, it's wrong [14:31] hm [14:31] lifeless, i don't think the unhacked one was timing out :/ [14:31] lifeless, it has? [14:31] barry: ah. Don't hack the url [14:31] lifeless, let me just do my daily call with foundations, I'll get back to you in a moment [14:31] >:-# [14:31] https://lp-oops.canonical.com/oops.py/?oopsid=1661A1010 [14:31] :) [14:31] Ursinha: ^ [14:33] Ursinha: but it thinks it is a translations problem, yet its a bugs issue [14:33] lifeless, I'll explain [14:34] after your call :) [14:37] gmb: are you back ? [14:41] lifeless, so [14:41] lifeless, this bug is linked incorrectly because of the "oops signature" oops-tools uses to identify the uniqueness of a problem, and link it to the bug [14:42] lifeless, we're aware that the way it is now isn't good because it doesn't work for timeouts [14:42] Ursinha: can we purge the old one then ? I mean, we need a improvement in that eventually. [14:42] lifeless, it's a known issue [14:42] but right now its steering folk wrong [14:42] lifeless, sure, I can do that [14:45] sweet [14:47] Ursinha: do you think we'll be ready to switch to the new workflow this cycle ? [14:47] lifeless, that will require changes in the scripts we use today [14:47] I'm not sure [14:48] lifeless, when the new cycle starts? in three weeks? 
[14:48] 1 week I think, we're in 4 of 10.07 according to topic. [14:48] of course, topic could be lying [14:48] Ursinha: if you could have a set of bugs tagged release-features-when-they-are-done, I might try to help out on the change [14:49] This is 10.08 week 1. === Ursinha changed the topic of #launchpad-dev to: Launchpad Development Channel | Week 1 of 10.08 | PQM is open | https://dev.launchpad.net/ | Get the code: https://dev.launchpad.net/Getting | On-call review in irc://irc.freenode.net/#launchpad-reviews | Use http://paste.ubuntu.com/ for pastes [14:50] lifeless, do you mean switch to the new merge workflow at the end of this cycle? [14:50] mars: thats the question [14:50] lifeless, we still need to create the blesser, and integrate that with the merger infrastructure [14:51] and mars know about the second part better [14:51] given its non trivial, its not jfdi-able [14:51] I'd like to be able to see where I should help make this happen [14:51] yes, I can think of three separate code changes (including one entirely new script or application) [14:51] either by begging some time from team leads, or by helping out and doing, as appropriate. [14:52] its a crucial change to make our production story cleaner and better [14:53] well, it will happen - as to an alpha of the new cycle, I think getting that for 10.08 is unknown (I don't know our velocity), but I would think 10.09 for an alpha or even a beta could definitely happen [14:53] I agree with mars [14:56] lifeless, I'll keep this moving forward, and I will let you know how things are going, so you can jump in where you think it's best [14:56] if you make it visible to me [14:56] I'll help in some fashion [14:56] better deployment is a key enabler for overall velocity [14:57] and this is a necessary condition for better deployment [14:57] lifeless, it should be on the Foundations Kanban board - I'll make a lane now [14:57] wow [14:57] https://lp-oops.canonical.com/oops.py/?oopsid=1661C183 [14:58] mars: so kanban is great too - my main point is if *I* can figure out what is still todo, I can help somehow :) [14:58] 14 seconds of non-sql time [14:59] heh, ok [15:03] just to be clear, are our buildbots running Python 2.6? [15:03] (because ec2 test certainly isn't) [15:04] jml, only the lucid_db_lp one is [15:04] mars, thanks. [15:04] next question [15:04] jml, ok, so 'lucid ec2 image' goes on my ToDo list [15:04] how do I use "with" statements in doctests in Python 2.5? [15:04] benji might know? [15:05] from __future__ in a >>> line doesn't work? [15:05] * benji reads the scrollback [15:07] gary_poster, apparently not. [15:07] gary_poster: from __future__ is magic IIRC [15:07] gary_poster: it changes the parser [15:07] IIRC to do that with a non .py compile you need to pass a compile flag in [15:08] jml: I assume you tried "from __future__ import with_statement" and it didn't work. [15:08] benji, that's correct. 
[15:08] deryck: whats 'null bug task' all about [15:08] hmm, there is some support for future in doctests, let me look at the source real quick [15:09] my experimentation cycle is quite long, since I don't know how to build Launchpad locally for Python 2.5 [15:09] lifeless, null is a workaround for not being able to delete a task, since marking it invalid means you continue to receive mail [15:09] not the null project [15:09] the code path [15:09] https://lp-oops.canonical.com/oops.py/?oopsid=1661C183 [15:09] Page-id: NullBugTask:+index [15:10] time: 4388 ms [15:10] non-sql time: 14274 ms [15:10] Statement Count: 499 [15:10] erm, shouldn't NullBugTask just redirect now? [15:12] lifeless and bigjools, leonardr and I are discussing his reply to https://bugs.edge.launchpad.net/soyuz/+bug/590708 . [15:12] We are considering the backwards compatibility issues of what he described, because we feel we're the ones who are most likely to care about that, and we are responsible for it. [15:12] If we decide that leonardr's proposal is acceptable, I have the understanding that you are calling this a critical issue and that we should proceed to work on it, pushing our other tasks aside per the usual "critical" behavior. [15:12] (1) Do I understand correctly? (2) If so, feedback on his reply would be appreciated, particularly if you have concerns. [15:12] <_mup_> Bug #590708: DistroSeries.getBuildRecords often timing out [15:13] gary_poster: sounds good to me - AIUI we are supposed to treat OOPSes as "stop the line" [15:14] * bigjools reading the reply [15:14] I don't think that policy is practically acceptable for Foundations on a global basis, but that's a different conversation for a different forum [15:15] mthaddon: https://bugs.edge.launchpad.net/soyuz/+bug/590708 - another for the queue, equal basis with doing the queries from danilo on a slave [15:15] <_mup_> Bug #590708: DistroSeries.getBuildRecords often timing out [15:16] mthaddon: leonardr needs to correspond with the folk having trouble [15:16] leonardr: wgrant is one of those people [15:16] Hi. [15:16] lifeless: can you ping losa, I've passed the baton to another losa for interrupt queries [15:16] wgrant, i'd like to see the program that's getting the timeouts [15:17] so that i can see if your program will break if we apply the fix i've proposed [15:17] mthaddon: sure, sorry I should have in that case too. [15:17] losa ping [15:17] lifeless: morning [15:17] leonardr: http://qa.ubuntuwire.org/ftbfs/source/build_status.py [15:17] lifeless: or evening in your case?! [15:17] Chex: currently mid avo [15:17] Chex: as I'm in prague [15:18] lifeless: ah yes, your sprinting, cool. [15:18] Chex: if you look up a bit [15:18] there is a bug - one of several - high frequency timeouts [15:18] leonardr: It's currently timing out every time it runs. [15:18] gary: wgrant's script doesn't use len(), it just iterates over the resultset [15:19] jml: I wanted to abstract the Python selection so that it wouldn't have to always be the system's Python, but it was regarded as unnecessary. The "test with another version of Python" story is another way in which that feature would be valuable, though. Maybe it will come to life at some point. [15:19] gary_poster, *nod* [15:19] Chex: we'd like two bits of losa assistance in the short term on this - a) danilo has a faster proposed query, we'd like to validate its performance on a production slave. [15:19] gary_poster, I guess it's only really necessary during interim phases like this one. 
[15:19] right [15:19] Chex: b) we may need to get leonardr in contact with some users via api keys [15:20] Chex: a) should be cheap - can we do that please; b) ask leonardr in a minute :) [15:20] jml: any futures included in the test globs are respected [15:20] wgrant: did it ever work? [15:20] leonardr: Until the edge timeout reduction, it worked aboutu 75% of the time. [15:20] leonardr: Until 10.05, it worked 100% of the time. [15:21] wgrant: it broke with the build farm model changes? [15:21] benji, cool. how would I add a future to a test glob? [15:21] bigjools: Somewhat, yes. [15:21] gary_poster: I don't have an position-specific opinion on who should do the work; LEAN dictates team accountability (all of LP devs being a single team), not smaller granularity; but we don't seem to do that at the moment : and I'm finding my feet. [15:21] bigjools: I suspect it was sitting just under the threshold, or something has gone really wrong plan-wise. [15:21] I'll figure out an example. [15:21] benji, thank you. [15:22] ideally we should make the query very very fast [15:22] lifeless: sure, fair enough on all counts [15:22] Certainly. [15:22] we want to both make it fast, and avoid unnecessary work. [15:22] lifeless: ok, looking at the bug, and your request [15:22] the count(*) there seems to be unnecessary for many cases. [15:22] Chex: thanks [15:22] unless we make the query(ies) faster, LP will never get faster [15:23] right [15:23] lifeless: im a little confused what you would like me to do for you? [15:23] Chex: run the query in https://bugs.edge.launchpad.net/soyuz/+bug/590708/comments/8 [15:23] <_mup_> Bug #590708: DistroSeries.getBuildRecords often timing out [15:23] so I think the count(*) is a red herring [15:23] Chex: on a slave [15:23] bigjools: its not the root cause here, but its not a non-issue. [15:23] agreed [15:24] COUNT is always going to be fairly slow. [15:24] lifeless: ok, so one of the DB slaves, seems ok, and the SQL seems to be a SELECT only, so thats ok, too [15:24] And it [15:24] fixing *either* the query, or the count(*) will fix this issue. [15:24] it's unnecessary. [15:24] lifeless: any idea on the performance hit for the query? [15:24] Chex: by ok, can you paste the output please :) [15:24] Chex: its being hit by bots at least once an hour [15:24] count should be as quick as the query itself [15:24] bigjools: hell no [15:24] ? [15:25] lifeless: you mean the bots are generating that query once an hour? [15:25] Chex: something very like it, but causing table scans. [15:25] bigjools: sec, let me get the query tested first. [15:25] lifeless: understood, ok, will run on one of the Db slaves, chokecherry [15:25] Chex: thanks [15:26] bigjools: ok, so count(*) has to complete the entire thing, ignoring offsets and limits [15:27] bigjools: this is always more work except the special case when the limit of the first chunk matches the total work [15:27] lifeless: if the query has an order, that doesn't apply [15:27] bigjools: why do you say that? [15:27] it has to complete the query to order it! [15:27] Even with indices? [15:27] that depends on the query [15:27] very very much depends on the query [15:28] yes [15:28] * bigjools goes back to buildd-manager hacking [15:28] anyhow, my point is just - don't assume count(*) is effectively free: its not. 
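To make the point about COUNT(*) concrete: the batch a client actually asked for can stop once it has a page of rows, but the COUNT(*) that lazr.restful issues for total_size has to run the same filters over every matching row, so LIMIT buys it nothing. The pair below is a cut-down, illustrative version of the getBuildRecords query using the join structure shown elsewhere in the log; the real query carries extra archive and distro_arch_series filters.

    -- The page the client asked for: can stop as soon as 50 rows qualify.
    SELECT BinaryPackageBuild.id
      FROM BinaryPackageBuild
      JOIN PackageBuild ON PackageBuild.id = BinaryPackageBuild.package_build
      JOIN BuildFarmJob ON BuildFarmJob.id = PackageBuild.build_farm_job
     WHERE BuildFarmJob.status <> 1
     LIMIT 50 OFFSET 0;

    -- The total_size COUNT(*) added on top: same filters, but it must visit
    -- every qualifying row before it can answer.
    SELECT COUNT(*)
      FROM BinaryPackageBuild
      JOIN PackageBuild ON PackageBuild.id = BinaryPackageBuild.package_build
      JOIN BuildFarmJob ON BuildFarmJob.id = PackageBuild.build_farm_job
     WHERE BuildFarmJob.status <> 1;

As bigjools points out above, an ORDER BY changes the picture: without an index matching the ordering, the first query also has to materialise every row before returning its 50, which is where the index suggestion later in the log comes in.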
:) [15:28] lifeless: hell no [15:29] but if the original query is quick, and it bloody well should be, then the count should not matter in the bigger picture [15:29] lifeless: bigjools: https://pastebin.canonical.com/34829/ [15:29] I agree it would be lost in noise today [15:30] so, we may have stale statistics or something [15:30] Chex: thanks! [15:30] lifeless: "date_finished IS NOT NULL" kills that query [15:30] bigjools: is date_finished indexed ? [15:31] yes [15:31] but it does an index scan over 103k rows [15:31] lifeless: your welcome [15:31] when you use NOT IN it has no choice === mtaylor is now known as mtaylor|breakfas [15:31] Chex: thanks from me too :) [15:32] bigjools: not in - yes, I get that. I don't see 'is not null' being == to 'not in', but I may be missing a specific technicality. [15:32] my tiny brain conflated the two, my bad [15:33] hehe no worries. [15:33] jml: import __future__ [15:33] is not null can be badly affected by index statistics and index selectivity though [15:33] and then in your test setUp, add a global like... [15:34] hmmm status [15:34] test.globs['with_statement'] = __future__.with_statement [15:34] leonardr: so, I have a small separate idea for you. [15:34] lifeless, ok [15:34] leonardr: what if, when the result set is < pagination size [15:34] leonardr: lazr restful simply *does not* call len() on the result set [15:34] benji, sweet. thanks. [15:34] jml: I don't have a Python 2.5 so I can't test it, so hopefully it'll work as-is [15:34] leonardr: in this particular case we have 19 results [15:34] lifeless, makes sense [15:34] lifeless: we can improve it with an index btree(date_finished, status) [15:35] benji, well, I'll try that and resubmit via ec2 [15:35] lifeless: it's scanning over status [15:35] leonardr: I know it won't help in the greater case [15:35] lifeless: Only 19? There should be more than that... [15:35] benji, in ~3hrs I'll know if it worked. [15:35] heh [15:35] it might be quicker for one of us to install 2.5 [15:37] wgrant: privmsg [15:37] leonardr: how big a fix would that short hack I'm proposing be ? [15:37] lifeless: very easy [15:37] fix/workaround [15:37] leonardr: I propose, if you think its sensible, that we: [15:37] - do this tiny hack; get that cowboyed to prod [15:38] - do the larger one you proposed, if you think its tolerable [15:38] - add the index bigjools is proposing, after evaluating it on staging [15:39] lifeless: The 'Failed to build' query that normally times out should return 2614 results. [15:40] wgrant: thats odd :P [15:40] That's larger than the batch size. [15:40] wgrant: yes, we have to deal with bigger things [15:40] lifeless et al: a quick analysis of the oopses shows that we seem to have three users [15:40] 1. leann ogasawara [15:40] 2. someone at ubuntuwire.org [15:40] lifeless: one thing I think we need to do is to document the queries we're using and the indexes that they need. They're currently disjoint and we have no idea what needs what and what indexes are obsolete (which are a waste of processing time) [15:41] 3. someone at cranberry.canonical.com (wgrant?) [15:41] lifeless: a bit like the prejoins too [15:41] leonardr: I have access to no Canonical machines. I manage the script on ubuntuwire.org. [15:41] no, sorry, leann ogasawara is the person at cranberry.canonical.com [15:41] our third client is someone from optusnet.com.au [15:41] That's me. [15:42] bigjools: I think thats a great idea. Start doing it however, we can iterate to make it structured later. [15:42] wgrant, not internode? 
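The index bigjools suggests at [15:34] would look something like the sketch below. The target table and the index name are assumptions here (the date_finished/status pair is being discussed in the context of the build farm job query), and whether it actually helps should be confirmed with EXPLAIN ANALYZE on staging first, as proposed above.

    -- Assumed target table and arbitrary index name; verify on staging first.
    -- On a live database this would normally be CREATE INDEX CONCURRENTLY
    -- to avoid blocking writes while it builds.
    CREATE INDEX buildfarmjob__date_finished__status__idx
        ON buildfarmjob USING btree (date_finished, status);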
[15:42] jml: No. Too far for reasonable ADSL performance. [15:43] lifeless: one thing that really scares me is changing prejoins - we've simply got no idea what it will affect. [15:43] jml: And Optus' caps are about an order of magnitude larger now than they were 6 months ago. [15:43] So it's not that bad. [15:43] bigjools: we'll be a lot safer once staging's timeout limit is down to 5 seconds [15:43] wgrant: ok, so we have two users, you and leann [15:43] we can rev leann's launchpadlib pretty easily [15:43] let me go ask her === deryck is now known as deryck[lunch] [15:44] wgrant, fair enough :) [15:44] leann is being asked now in person [15:45] ah, i just asked her on irc, but ok [15:50] she was walking past the door to a meeting [15:50] she is running in the dc on the platform lp lib - hardies I suspect [15:50] but she can RT an upgrade anytime. [15:50] lifeless, have we run the queries that oopsed to see how many results they actually return? [15:50] there are 800 or so [15:51] I'd rather not do that by hand [15:51] wgrant says one in particular he does routinely returns 2.7K [15:52] isn't backwards compatibility not really an issue as len() didn't work for a long time? [15:53] leonardr: when doing 'approve' on an MP please approve the overall proposal too - so that the queue is representative of the work reviewers have left to do [15:53] sorry [15:53] * wgrant sleeps. [15:53] Thanks for looking at this. [15:55] leonardr: no probs [15:55] leonardr: its a small thing, but it helps the flow. [15:55] james_w: it's somewhat unlikely, but there was a published workaround for a long time, so i want to make sure [16:00] flacoste: is there a burndown chart of oops and timeout bugs ? [16:00] flacoste: or should I perhaps ask jml with his mad graphing skills to write one [16:01] I only do graphs with wobbly lines that go upwards and to the right. [16:02] don't worry, we can make this one do that [16:02] jml: how hard would it be for you to do this? [16:03] hmm meeting time [16:03] lifeless, about as hard as it would be for you. [16:03] maybe a little bit less if I used lpstats. [16:05] * bigjools chuckles [16:08] jml: I think you underestimate startup cost/activation energy [16:09] lifeless, maybe. [16:09] lifeless, if you email me with exactly what you want, I can give it a go. [16:10] lifeless, but if you want it today or tomorrow, you're genuinely better off finding someone else. [16:11] have mailed you [16:15] rockstar, ping [16:15] mars, distracted pong [16:16] rockstar, just wondering what the progress was on your YUI 3.1 upgrade. [16:16] mars, ah, very close. Dealing with a bug where YUI 3.1 doesn't like the lp.client. [16:17] ok [16:17] mars, I'll send it to you for review. [16:17] I'm going to get someone to test 1.0, then we can look at rippling the change upward through the lazr-js tree [16:17] rockstar, sure [16:18] mars, 1.0? [16:18] lifeless et al: ogasawara is indeed accessing the total_size (using the workaround since len() doesn't work in old wadllib) [16:19] in fact, that's all she's doing with the data [16:19] rockstar, yes, there are three lines of development right now: trunk (dev), a 1.0 release branch that will become 1.0-dev, and the 2.0-dev line [16:20] mars, is there anyone else outside of Canonical using lazr-js? [16:20] rockstar, not to my knowledge [16:20] mars, I guess I'm asking "is there much point in the overhead required for coordinating various lines of development" ? 
[16:20] I think there should be trunk, and then the project using it can maintain their own branch of that. [16:20] yes, because we have projects on 1.0, and also projects on 2.0 [16:21] mars, what projects are those? [16:21] ISD and LP are on 1.0, U1 and Landscape are on 2.0 [16:22] mars, afaik, LP is on 0.9.2, which means very little, since we're not really on a branch at all. [16:22] rockstar, I'm working to get it down to two branches: 1.0 dev, and 2.0 dev [16:22] yes, the fact we are not on a branch is a problem as well [16:22] mars, what I'm saying is "the branch we're based off has little to do with what we're actually running" [16:22] mars, landscape, for instance, maintains their own branch. [16:23] ogasawara says that when it worked, her script found a total_size of 2614. given that neither of our users will benefit from the small-batch optimization, i think i should be doing something else (if we have a consensus about what else should be done) [16:23] If we land something on the 1.0 branch, we shouldn't be risking the breakage of a bunch of other projects. [16:23] rockstar, yes, and that leads to everything being one big hairball :) [16:23] mars, how's that? [16:24] because changes like the YUI 3.0/3.1 split or the distribute debacle force other projects to maintain branches [16:24] mars, if we make changes in trunk and then the projects then pull when they're ready, then we really only have one branch to worry about as lazr-js (trunk), and two branches to worry about as LP (trunk and whatever we're running) [16:24] 4 projects with 4 branches and mainline is 5 times the maintenance work needed [16:24] mars, the YUI 3.0/3.1 debacle wouldn't have happened if sidnei had landed in trunk. [16:25] and no one else would have been able to use or patch trunk [16:25] you either have everyone maintaining a private fork, or you consolidate [16:25] I want to get everything consolidated [16:25] mars, everyone needs to maintain a fork anyway. [16:25] rockstar, why? [16:25] Hopefully it's a "pull only" fork. [16:26] mars, because if we update 1.0, we can break other projects. [16:27] Allowing the other projects to pull in changes when they are ready is (IMHO) the best option. We only have one line of development, and when they're ready to upgrade, they do. [16:27] that leads to real problems with versioning and contribution [16:27] mars, howso? [16:28] you have to write two patches (one for you, one for mainline), and you can't just pull trunk to get some new feature - there could be massive changes, meaning you have to backport and maintain yet another patch for your private fork [16:28] People should just be able to skip between releases [16:28] and releases should be documented in the changes they perform [16:29] mars, I think you ought to propose this to a mailing list somewhere, and find out what other projects are doing. I have suspicions whether or not it needs to be this complicated. [16:30] mars, having two lines of development might mean that I need to patch my private fork, 1.0, and 2.0. [16:30] rockstar, I don't see the complexity - we'll have one mainline (2.0), and one legacy line (1.0) [16:30] rockstar, then you should drop your private fork, and fix mainline [16:32] mars, I think this is better for the mailing list. I suspect that the private fork is a feature other projects are a bit attached to (Launchpad historically is) [16:38] rockstar, would you be willing to test the 1.0 branch to see if it builds on your system? I would like to make lazr-js hackable again. 
[16:39] mars, do you need it right now, or can it be in 1 hour? [16:39] rockstar, better make that ~3 hours then, I'll be taking lunch around 12:30 [16:40] err, "I'll be taking lunch in 1 hour" [16:40] mars, okay, that will work better, because I need to eat dinner soon. I'm happy to test it though. [16:40] cool [16:40] bigjools, are you still around? [16:40] yep [16:42] leonardr: lol! [16:42] leonardr: so just exposing the size only thing would help her [16:43] lifeless: yes, if we implemented the full solution she would have to change her script but we could get it to work [16:44] \o/ [16:50] bigjools, I'm having a pretty hard time changing the status of a SPRBuild... The security proxy is only slightly the problem. [16:51] bigjools, do you have methods for flipping the switches? All I see is handleStatus, and then things like "_handleStatus_OK" etc. [16:51] rockstar: what are you trying to do? [16:51] lifeless, i posted an update to the bug. i think we should shelve the 'don't run the count(*)' optimization since it's somewhat difficult and it won't solve the big problems. do you want me to work on the annotation-based solution? [16:51] (shelve for purposes of this problem, not permanently) [16:51] bigjools, make a SourcePackageRelease tied to a recipe [16:52] (it's a hole in our testing currently) [16:52] rockstar: in a test or in the code? if the latter, at what point in the pipeline? [16:53] bigjools, in a test. [16:53] ok [16:53] bigjools, it won't let me set ISourcePackageRecipeBuild.source_package_release [16:53] ok let me check === deryck[lunch] is now known as deryck [16:56] rockstar: dude, ISourcePackageRecipeBuild.source_package_release is a property so I'm not surprised :) [16:57] rockstar: set SourcePackageRelease.source_package_recipe_build [16:58] bigjools, *facepalm* I was looking at the interface thinking it'd give me all I needed... [16:58] :) [16:58] :) [16:58] rockstar: if it makes you feel any better, I've done this exact same thing myself [16:58] bigjools, it's a sign that we should delete all interfaces. [17:01] how do i do an api query for bugs containing a particular tag (and maybe other tags)? [17:01] leonardr: please [17:12] bigjools: BuildFarmJob.status <> 1 - thats an issue [17:14] losa ping [17:15] lifeless: hi [17:15] hi, we'd like to run an analyze on packagebuild after checking how big the table is [17:15] on each db server [17:16] and then check explain analyze SELECT BinaryPackageBuild.distro_arch_series, BinaryPackageBuild.id, BinaryPackageBuild.package_build, BinaryPackageBuild.source_package_release FROM Archive, BinaryPackageBuild, BuildFarmJob, PackageBuild WHERE distro_arch_series IN (109, 110, 111, 112, 113, 114) AND BinaryPackageBuild.package_build = PackageBuild.id AND PackageBuild.build_farm_job = BuildFarmJob.id AND (BuildFarmJob.status <> 1 [17:16] again [17:16] if the table is -huge- we obviously don't want to wedge things. [17:18] lifeless: erm, why do we need to do this? [17:18] also I'd like to know the range of values in BuildFarmJob.status - select distinct status from buildfarmjob; [17:19] mthaddon: because we have an API call timing out - taking 18 seconds - and the query plan suggests a mismatch between statistics and actual data.
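Going back to the recipe-build test setup bigjools describes above: a rough sketch of the idea, linking a SourcePackageRelease to a recipe build by setting SourcePackageRelease.source_package_recipe_build rather than assigning the read-only ISourcePackageRecipeBuild.source_package_release property. The factory method names and the layer are assumptions based on the usual LaunchpadObjectFactory conventions, not checked against the tree.

    from zope.security.proxy import removeSecurityProxy

    from canonical.testing.layers import DatabaseFunctionalLayer
    from lp.testing import TestCaseWithFactory


    class TestRecipeBuildRelease(TestCaseWithFactory):

        layer = DatabaseFunctionalLayer

        def test_release_points_back_at_recipe_build(self):
            # Factory method names are assumed here.
            build = self.factory.makeSourcePackageRecipeBuild()
            release = self.factory.makeSourcePackageRelease()
            # Not writable through the security proxy, so unwrap first.
            removeSecurityProxy(release).source_package_recipe_build = build
            self.assertEqual(build, release.source_package_recipe_build)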
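The check lifeless asks the LOSAs for, roughly as one might run it by hand against staging: refresh the planner statistics, then re-run EXPLAIN ANALYZE and see whether the row estimates line up with reality. A sketch using psycopg2; the DSN is a placeholder and the SELECT is abbreviated (the full text is in the paste linked below, and the Archive join condition is an assumption since the original message was cut off).

    import psycopg2

    # Placeholder DSN; the point is only the sequence of statements.
    conn = psycopg2.connect('dbname=launchpad_staging')
    # ANALYZE should be committed rather than rolled back, so stay out of
    # an explicit transaction.
    conn.autocommit = True
    cur = conn.cursor()

    query = """
        SELECT BinaryPackageBuild.distro_arch_series, BinaryPackageBuild.id,
               BinaryPackageBuild.package_build,
               BinaryPackageBuild.source_package_release
        FROM Archive, BinaryPackageBuild, BuildFarmJob, PackageBuild
        WHERE BinaryPackageBuild.distro_arch_series IN
                  (109, 110, 111, 112, 113, 114)
          AND BinaryPackageBuild.package_build = PackageBuild.id
          AND PackageBuild.build_farm_job = BuildFarmJob.id
          -- Archive join condition assumed; the original was cut off above.
          AND PackageBuild.archive = Archive.id
          -- The log suggests this exclusion should probably be an explicit
          -- IN list of the wanted states instead.
          AND BuildFarmJob.status <> 1
    """

    # Refresh statistics on the tables involved, then look at the plan.
    for table in ('packagebuild', 'archive', 'binarypackagebuild'):
        cur.execute('ANALYZE ' + table)
    cur.execute('EXPLAIN ANALYZE ' + query)
    for (line,) in cur.fetchall():
        print(line)

In the log the estimates barely move after the ANALYZE, which is what points the discussion at the shape of the query rather than stale statistics.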
[17:19] http://paste.ubuntu.com/465800/ [17:19] we've identified a few issues all at once related to this: [17:19] - the api is doubling the db load by one of its bits of magic [17:19] - the query is extremely slow itself [17:20] lifeless: I'd like to get stub's input on that (particularly why it's out of whack in the first place) ideally [17:20] - the query contains an exclude - status <>1 rather than status in (2,3,4,5,6) or whatever it should be [17:20] mthaddon: hes on a plane. [17:20] mthaddon: I tried to ring him just before :( [17:20] lifeless: sure - is this a critical issue now? [17:21] its timing out for ogasawara and wgrant every time; they aren't able to retry to fix it. [17:21] if its simply stale statistics, that would be a very easy bandaid. [17:21] we can also raise the timeout again [17:22] by which I mean, 'retries also time out' [17:24] I think increasing the timeout is a better short term fix - what should we increase it to? [17:24] lifeless: and it times out for them on both edge and lpnet? [17:24] yes [17:24] thats my [weak] understanding [17:25] edge is doing a timeout every 120 seconds [17:25] prod is a lot more unhappy, but that is primarily the bug attachment oops which gmb has been working on. [17:25] which reminds me [17:25] lifeless: so which one are we changing? edge or lpnet? and to what? [17:26] mthaddon: do you know how to generate a manual oops report for the oops since this morning on edge and lpnet? [17:26] mthaddon: that would help me answer your question [17:26] because I know what fixes are in-progress [17:27] lifeless: I don't, no :( [17:28] mthaddon: then I'd say lets raise it back to 14 seconds on edge [17:28] I know that most of the prod ones are the bug attachment script [17:29] and it is in progress [17:29] lifeless: like this? https://pastebin.canonical.com/34844/ [17:29] mthaddon: could we run an analyze on staging at least, see how long it takes, and if it improves the query ? [17:30] mthaddon: yes, that patch will raise the edge timeout. === al-maisan is now known as almaisan-away === beuno is now known as beuno-lunch [17:33] lifeless: it's hard to say if that will match production since the load on the DBs is so different though (having never been asked to do this before for LP is throwing up a minor red flag as "doing it wrong" as well) [17:33] mthaddon: I'm positive stub has done analyzes to fix statistics many times. A grep of the lp-code logs will probably find some ;) [17:33] in any case, I'm pushing out the cowboy to edge with the higher timeout now [17:33] thanks [17:34] mthaddon: I'm curious how, since its the same revno ... [17:34] lifeless: I landed the branch that allows me to specify a revno [17:34] lifeless, lpnet oopses since 00utc: https://devpad.canonical.com/~lpqateam/lpnet-oops.html#time-outs [17:34] mthaddon: \o/ [17:34] mthaddon: thats awesome [17:34] s/specify a revno/specify a custom directory name/ [17:34] Ursinha: thanks [17:35] lifeless, same for edge and staging: https://devpad.canonical.com/~lpqateam/edge-oops.html#time-outs https://devpad.canonical.com/~lpqateam/staging-oops.html#time-outs [17:41] bryceh: where is the code for it?
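Circling back to the earlier, unanswered question about querying bugs by tag through the API: searchTasks on a bug target takes a list of tags. A sketch; the project name and tags are invented, and the tags_combinator value is an assumption.

    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with('tag-query-sketch', 'edge')
    project = lp.projects['launchpad']

    # Bug tasks carrying any of the given tags.
    any_tagged = project.searchTasks(tags=['oops', 'timeout'])

    # Bug tasks carrying all of them (combinator value assumed).
    all_tagged = project.searchTasks(
        tags=['oops', 'timeout'], tags_combinator='All')

    for task in any_tagged:
        print(task.title)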
[17:41] (I wish blueprints had a comments area too for each action item) [17:41] nigelb, hang on I'm composing an email [17:41] heh :D [17:41] nigelb, damn you're quick ;-) [17:41] haha [17:47] lifeless: timeout increased on edge [17:48] lifeless: although the config change hasn't been landed, so it'll be overwritten on next rollout unless that happens === mtaylor|breakfas is now known as mtaylor [17:57] mthaddon: ok, can you do that too - or should I just land an r=mthaddon to increase it ? [17:57] Ursinha: thanks [17:58] lifeless: r=mthaddon would be great, thx [17:58] lifeless: has it fixed the issue? === beuno-lunch is now known as beuno [18:06] mthaddon: don't know [18:06] mthaddon: wgrant may have gone to sleep [18:08] mthaddon: yes its fixed [18:08] cool [18:14] at least for leann [18:15] but I think they're looking at the same think [18:15] *thing* [18:27] sinzui: bug 607879 - if you want to discuss with me, gimme a shout [18:27] <_mup_> Bug #607879: https://bugs.edge.launchpad.net/~person/+participation timeouts [18:57] losa ping [18:58] nigelb, ok finally got that email out [18:58] lifeless: hi there [18:59] Chex: hi, uhm channel confusion - query plan tweaking on staging [18:59] lifeless: ok, run that query on staging DB, then? [19:00] Chex: so an analyze of packagebuild on staging [19:00] and then [19:00] # explain analyze SELECT BinaryPackageBuild.distro_arch_series, BinaryPackageBuild.id, BinaryPackageBuild.package_build, BinaryPackageBuild.source_package_release FROM Archive, BinaryPackageBuild, BuildFarmJob, PackageBuild WHERE distro_arch_series IN (109, 110, 111, 112, 113, 114) AND BinaryPackageBuild.package_build = PackageBuild.id AND PackageBuild.build_farm_job = BuildFarmJob.id AND (BuildFarmJob.status <> 1 OR BuildFarm [19:01] on staging [19:01] lifeless, you don't mind me doing the query I suggested two times on production slave? (i.e. you still have reasons to believe it would hurt us) [19:02] lifeless, fwiw, your query above was cut-off [19:02] lifeless: ERROR: column "buildfarm" does not exist [19:02] LINE 5: ... BuildFarmJob.id AND (BuildFarmJob.status <> 1 OR BuildFarm)... [19:02] Chex: http://paste.ubuntu.com/465800/ [19:02] Chex: top line [19:02] lifeless: oops, yeah pastebin is better, thanks [19:03] lifeless: http://pastebin.ubuntu.com/466583/ [19:04] Chex: and you analyzed packagebuild first ? [19:04] lifeless: sorry, no I did not [19:05] lifeless, fwiw, I wasn't thinking of doing analyze on production DB :) [19:05] Chex: please do :) - there is a mismatch between rows=702 and rows=28253 in the middle of the explain [19:05] that jtv pointed out [19:06] danilos: this is "analyze," not "explain analyze" [19:07] jtv, right, lifeless stopped me from doing explain analyze on production slave because it would "mess up caches on production DBs" [19:07] danilos: well no, you were saying something that I interpreted to mean 'drop caches' [19:07] danilos: which is rather different from 'run twice to eliminate cold cache effects' [19:08] lifeless, well, I was saying exactly this: "losa quick ping: hi, can you please check how caches on DB server affect executing a query at https://bugs.edge.launchpad.net/soyuz/+bug/590708/comments/8 (i.e.
do it a few times on the same production slave DB)"; I'd never interpret it the way you did, but that's not up for debate :) [19:08] <_mup_> Bug #590708: DistroSeries.getBuildRecords often timing out [19:09] danilos: crossed wires happen :) [19:09] lifeless, yeah, you were doing something very similar here so I guess that's where the confusion comes from ;) [19:10] anyway, Chex, can you please try the query above twice on a single production slave to compare the results? [19:10] lifeless: http://pastebin.ubuntu.com/466585/ [19:10] lifeless: this look any better? [19:11] Nope [19:11] well, 2 seconds better [19:12] Chex: so, to check - you did 'analyze packagebuild' then the query from the pastebin I linked earlier ? [19:12] jtv: Nested Loop (cost=0.00..11792.41 rows=2783 width=8) (actual time=0.068..2155.895 rows=904092 loops=1) [19:12] jtv: thats another row expectation mismatch ? [19:12] lifeless: that is correct [19:13] the analyze packagebuild then the query you pasted. [19:13] cool [19:13] danilos: ok, sure, hang on [19:13] can you please do analyze archive and analyze binarypackagebuild too [19:13] then analyze packagebuild, then the query? [19:14] analyze archive; analyze binarypackagebuild; the query [19:14] lifeless: that looks like a mismatch, yes... but that looks like a definite but [19:14] bug [19:14] lifeless: ok. [19:14] I mean, why a million rows there? [19:15] lifeless: http://pastebin.ubuntu.com/466592/ [19:16] oh! [19:16] Ah, it's a highly unfortunate thing... those million rows are the Archive × PackageBuild records [19:17] Chex: lets do this differently. [19:17] Chex: analyze packagebuild; analyze archive; analyze binarypackagebuild; - outside of a transaction [19:17] Chex: we don't want to rollback, we want the analyzes committed [19:18] I'm not 100% sure about the impact of analyze + rollback :- [19:19] lifeless: oh, fair enough, hang on [19:19] lifeless: done [19:19] lifeless: now try your query? [19:19] now, the explain query please :) [19:21] lifeless: http://pastebin.ubuntu.com/466597/ [19:21] ok, well that fairly definitely answers that [19:21] thanks [19:21] danilos: I've finished monopolising Chex for now :) [19:22] jtv: I agree that that 900K loop finding 0 rows is an issue [19:23] It was only expecting 2.0358 iterations there I guess. [19:23] nigelb, had a chance to try it out? thoughts so far? [19:23] Sorry, 20,358 [19:24] so there are lots of bpb records [19:24] and pb records [19:24] I guess [19:24] please tell me we haven't split out a common table to join that has 1:1 mapping to the table we filter on ? [19:24] danilos: now _your_ query.. [19:25] Chex, mine should be quick ;) [19:26] bigjools: are you still around ? [19:27] danilos: http://pastebin.ubuntu.com/466604/ [19:27] danilos: note the much quicker run the 2nd time [19:31] Chex, excellent, just what I suspected :) [19:31] lifeless, jtv: the times with the above query seem much better this time, see http://pastebin.ubuntu.com/466604/ :) [19:31] danilos: why do you join PackageBuild twice? [19:32] I mean, not that I'm arguing against the speedup... :-) [19:32] jtv, because I am smart :) [19:32] that can't be it [19:32] jtv, that's the same trick we use in translations: note how packagebuild-archive join takes the most time in the original query [19:33] jtv, because it joins entire packagebuild with archive (across all rows); this forces postgres to avoid that so it's much faster :) [19:34] It almost looks as if the original query intended something like this... 
two of the join conditions occurred double, just like with yours. [19:35] its joining to workaround the split out [19:35] I think the split out should not have been done at the table level: N separate tables with a common column prefix [19:36] database tables are not classes :) [19:37] lifeless, I think both of these are much faster simply because the caches are already warm (because of your test :) [19:38] anyway, now I go away and will be able to sleep at night :) [19:38] cheers [19:38] * danilos goes === danilos is now known as daniloff === leonardr is now known as leonardr-afk [19:38] sinzui, hello [19:39] sinzui, we had a bunch of oopses like https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1661XMLP111, MailingListAPIView [19:39] sinzui, I see there's a bug for this oops, bug 531371, which is already fix released [19:39] <_mup_> Bug #531371: oops MailingListAPIView email already in use [19:40] daniloff: gnight. I think your query is very clever; it would be good for storm to do that for us [19:40] Ursinha see my email about how I hate Launchpad developers who offer crap services for free [19:41] :) [19:41] I will [19:41] Ursinha, I am sick and cannot help, the LOSAs corrupted the DB because they thought they were being nice to monte [19:41] oh, argh [19:42] sinzui: get well [19:42] I forgot you were ill [19:42] sinzui, please, get well [19:42] Ursinha, There is a question tracking how to fix the data. Someone needs to kill the private project that should never have been modified [19:43] oh, that one with the pending mailing list? I see. [19:45] thanks sinzui, sorry to ping [20:23] Ursinha: so, the grouping [20:23] Ursinha: does it just use the exception type, or the exception type + the string ? [20:26] lifeless, exception type + value [20:26] lifeless, bug 461269 [20:26] <_mup_> Bug #461269: oops reports should be grouped by oops signature not exception type and exception value [20:28] lifeless, I was about to leave to have some food [20:28] ciao [20:28] I'm just opportunistic on asking stuff [20:28] no need to hang around for me [20:28] lifeless, okay :) anything else, just ask and I'll answer when I return [20:28] kk === Ursinha is now known as Ursinha-nom [20:28] lifeless: your lp:~lifeless/launchpad/soyuz mp diff is screwed up [20:30] and lifeless, I had test failures back from my ec2 land (feedparser branch) [20:30] this is the fix I applied: http://pastebin.ubuntu.com/466624/ [20:30] do you have a better suggestion? [20:32] flacoste: thats fine with me, its not hugely beautiful, but its not ugly. [20:33] yeah, my feeling also, wondered if there was a better known idiom [20:33] its essentially mocking; we could use an official mock, but it wouldn't be any smaller. [20:34] right [20:35] what library would you recommend for mocking (unrelated to this branch, asking for another project) [20:35] do you use something in bzr?
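The warm-versus-cold cache point above, as the kind of by-hand check danilos asked the LOSAs for: run the same query twice on one replica and compare wall-clock times. A much faster second run mostly reflects warm caches rather than a better plan. Sketch only; the DSN and the stand-in query are placeholders.

    import time

    import psycopg2

    conn = psycopg2.connect('dbname=launchpad_staging')  # placeholder DSN
    cur = conn.cursor()
    query = 'SELECT count(*) FROM PackageBuild'  # stand-in for the real query

    timings = []
    for run in ('first (cold-ish)', 'second (warm)'):
        start = time.time()
        cur.execute(query)
        cur.fetchall()
        timings.append((run, time.time() - start))

    for run, seconds in timings:
        print('%s run took %.3fs' % (run, seconds))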
[20:36] we don't routinely mock [20:36] mocking has some risks [20:36] and some rewards [20:36] uhm [20:36] for your line there, even with a mocking library, I'd probably just do the lambda :) [20:37] ok [20:38] from the school of 'simplest is often clearest' [20:40] flacoste: speaking of reviews [20:40] I got the queue down to 0 [20:41] for devel anyhow [20:41] ah the soyuz branch is messed up because db-devel exists [20:43] flacoste: I've enjoyed using Gustavo Niemeyer's Mocker on another project (http://labix.org/mocker) [20:44] many other options at http://pycheesecake.org/wiki/PythonTestingToolsTaxonomy#MockTestingTools [20:44] I much prefer verified fakes to mocks [20:44] less skew [20:44] but this is not a late-at-night discussion I think; its been ... intense today [20:44] thanks benji [20:45] lifeless: I suspect Mocker can do what you want; it's quite full-featured. [20:45] benji: the point is to not mock. [20:45] benji: so no, it can't :) [20:46] what do you mean by "verified fakes"? [20:46] just that [20:46] a fake (not a mock or stub) that is verified to behave the same [20:46] as a full implementation [20:46] e.g. sqlite in-memory db's are a pretty good verified fake for disk databases. [20:47] ok, Martin Fowler's definition of "fake" [20:48] yes, I find the definition to be usefully precise [21:01] rockstar, ping [21:27] hmm [21:27] more count(*) taking ages [21:27] https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1662EA433 [21:33] ok, I need to break for awhile. Until later on then...... [21:33] flacoste: may need to think about making oops' critical rather than high... teams have lots of high already :) [21:34] lifeless: that was the idea of zerooopspolicy [21:34] flacoste: I thought it said high, not critical [21:34] yes, it says high on the wiki [21:37] hmm, ok [21:37] flacoste: If the goal is 'in front of the queue', critical would seem appropriate to me. [21:37] flacoste: but I wasn't part of the discussion for the policy, I don't want to just jump in here [21:43] https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1662EA447 is fun [21:43] anyone have ideas on what to do with *that* ? [21:44] I guess its getting the wadl ? [21:46] hmm; I did a rocketfuel-get and now my tests fail [21:46] is the devel branch broken? [21:47] shouldn't be [21:54] Ursinha-nom: when you return, if you could regenerate the edge oops-since-utc0 page, I think I've got bugs made for most of them now [22:17] rinze: please set the MP status when reviewing as well [22:25] mars, sorry, hi. === leonardr-afk is now known as leonardr [22:29] I'm -> sleep [22:29] gary_poster: so you know, edge is back up to 14 seconds, 12 seconds is past the knee and unsafe [22:30] heh, ok, thanks for update [22:30] https://lpstats.canonical.com/graphs/OopsEdgeHourly/ shows it quite graphically [22:31] prod is still unhappy - https://lpstats.canonical.com/graphs/OopsLpnetHourly/ - but the pending fixes should make a dramatic difference to that === flacoste is now known as flacoste_afk [22:31] and your pqm hack is on my kanban todo, but I've been bouncing from thing to thing all day. [22:31] on the bright side I seem to have gotten past the stomach ache part of this lurgy, so I can actually concentrate again. [22:32] and with that, I bid you all asnore. [22:35] thank you and good night [23:25] Can someone please ec2 land https://code.edge.launchpad.net/~wgrant/launchpad/refactor-_dominateBinary/+merge/29667? danilos tried to do it last night, but apparently ended up starting two instances for the *other* branch.
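The "verified fake" idea lifeless describes above, in miniature: one contract test suite runs against both the real store and the in-memory stand-in (SQLite here, as in his example), so the fake cannot quietly drift from the real behaviour. The table and file names are invented for illustration.

    import sqlite3
    import unittest


    class JobStoreContract(object):
        """Behaviour every connection factory must satisfy."""

        def test_count_starts_at_zero(self):
            conn = self.connect()
            conn.execute('CREATE TABLE IF NOT EXISTS jobs (id INTEGER)')
            (count,) = conn.execute('SELECT count(*) FROM jobs').fetchone()
            self.assertEqual(0, count)


    class InMemoryFakeTest(JobStoreContract, unittest.TestCase):
        # The fast fake used by most tests.
        def connect(self):
            return sqlite3.connect(':memory:')


    class RealStoreTest(JobStoreContract, unittest.TestCase):
        # The "real" implementation; on-disk SQLite stands in here for
        # brevity, where Launchpad's real store would be PostgreSQL.
        def connect(self):
            return sqlite3.connect('/tmp/verified-fake-sketch.db')


    if __name__ == '__main__':
        unittest.main()

The lambda-style stub flacoste applied is the lightweight end of the same spectrum; a shared contract like the one above is what keeps a longer-lived fake honest.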
[23:38] bryceh: trying out now. Do I get to confirm on the upstream tracker before it gets submitted? [23:39] nigelb, yes [23:39] nigelb, btw do you find that aspect important? I've considered eliminating that as an extraneous step if it isn't considered important [23:41] Since I'm testing now, I'd find it important. But when I'm using the tool, I'd find it extraneous [23:42] I keep getting, "Sory produce xorg in ubuntu does not exist or you're not allowed to report a bug in it" :/ [23:42] yeah try another package. 'xorg' isn't supported, but e.g. 'xserver-xorg-video-intel' is [23:43] (there isn't actually an 'xorg' package upstream, it's a non-source debian package only) [23:43] ahh [23:52] bryceh: same error with xserver-xory-video-intel [23:54] hrm [23:56] nigelb, ok try now [23:56] weird, I was sure I'd fixed that already [23:59] bryceh: wow, just WOW