[04:28] WTF is ~itiseasy-org, and why has it uploaded several hundred packages overnight?
[04:28] Like we needed more in the queue...
[06:08] wgrant: dunno
[06:21] lifeless: Could you land https://code.edge.launchpad.net/~wgrant/launchpad/replace-archiveuploader-doctests-0/+merge/30850? You approved it a week ago.
[06:27] kicking it off
[06:28] wgrant: you didn't find anyone to do it for a week?
[06:29] lifeless: No, I just forgot about it.
[06:29] :)
[06:30] Thanks.
[06:31] Do we have working feature flag support yet?
[06:31] I saw some stuff land.
[06:31] And I'd like to disable something with it, if it does indeed exist...
[06:31] db-devel only
[06:32] could probably port the interface to devel
[06:32] with some care
[06:32] No need.
[06:32] and then your thing would be disabled by it but able to be controlled on staging & next release
[06:32] There's only a few days left until the freeze.
[06:33] last freeze ever!
[06:33] Oh?
[06:33] We're moving to the new merge workflow for 10.09?
[06:33] Well, what was 10.09.
[06:33] I think all the bits are in place
[06:34] if not, then very very close
[06:34] Wow.
[06:34] That was swift.
[06:34] * wgrant whines about tree builds taking far too long.
[06:35] 4 minutes and counting...
[06:35] wgrant: using ?
[06:35] wgrant: well, the basic bits are:
[06:36] lifeless: The initial make in a new branch.
[06:36] - a daily staging [trivialish]
[06:36] - a way to disable stuff [feature flags, rough but usable]
[06:36] - new QA toolchain/automation [done, I believe]
[06:37] Right, I wasn't aware that the last bit was even started.
[06:37] So the new system will take all the branches in the queue, merge them, test them as a whole, then commit them?
[06:39] no
[06:39] that's totally different
[06:39] Oh.
[06:39] So just the new QA workflow, not the new merge workflow?
[06:39] that's an optimisation for landing stuff
[06:39] Ah.
[06:39] I see.
[06:39] https://dev.launchpad.net/LEP/ReleaseFeaturesWhenTheyAreDone
[06:40] https://dev.launchpad.net/MergeWorkflowDraft specifically
[06:40] https://dev.launchpad.net/MergeWorkflowDraft?action=AttachFile&do=view&target=merge-workflow-draft-2.jpeg is the thing we drew
[06:40] or maris drew, I should say
[06:41] Hm, that looks a lot simpler than the last draft I saw.
[06:41] if the last thing you saw was before the epic, then yes
[06:41] Right.
[06:41] We revisited the entire proposal there
[06:43] we also need a patch to make 'edge' behaviour depend on the vhost, not on the config
[06:43] I don't think anyone has done that yet, would be nice to do.
[06:45] though the meaning of edge is going to erode pretty quickly
[06:48] Yes.
[06:49] I also suppose that the automation of service restarts should make DB upgrades a whole lot quicker.
[06:49] oh yeah, that's the other key thing, but it's necessary for ratcheting up the frequency, not for switching at all
[06:50] Yep.
[11:13] wgrant: nuts
[15:47] lifeless: Um, devel is broken on 2.5?
[15:47] Ew.
[15:50] I don't see how that happens, unless people aren't ec2ing branches before landing...
[16:01] hey wgrant
[16:03] Morning jelmer.
[16:03] wgrant, it's broken how?
[16:03] jelmer: test_sourcepackagerecipe.py doesn't appear to import with_statement.
[16:04] ah
[16:04] wgrant, not everybody uses ec2, some people run make test locally
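(For context on the Python 2.5 breakage above: on 2.5 the with statement is only enabled by a __future__ import, so a module that uses with without it fails with a SyntaxError as soon as it is imported, while the same code works unchanged on 2.6. A minimal sketch of the kind of fix involved, not the actual contents of test_sourcepackagerecipe.py:)

    # Python 2.5 does not treat 'with' as a statement by default; this
    # future import must appear at the top of the module.
    from __future__ import with_statement

    def read_recipe(path):
        # Without the import above, this block is a SyntaxError on 2.5,
        # so the whole module fails to import.
        with open(path) as recipe_file:
            return recipe_file.read()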
[22:17] Can someone please EC2 https://code.edge.launchpad.net/~wgrant/launchpad/replace-archiveuploader-doctests-0/+merge/30850? It was caught in the devel breakage last night.
[22:17] So, the build farm has currently been unusable for five days.
[22:17] I am failing to see how this is not a critical service issue.
[22:18] One of Launchpad's most popular features is broken.
[22:22] wgrant: kicked it off again
[22:22] wgrant: uhm, I think it's an important issue.
[22:22] It happens time and time again.
[22:23] wgrant: builds > builders for extended periods, + starvation events?
[22:24] lifeless: The sustained reduction of the build farm is an issue.
[22:24] As well as prioritisation during extended starvation events.
[22:24] But new builds will be waiting for nearly four days.
[22:25] so, the whole scoring thing is what drives starvation
[22:25] it's implementation, not policy, and because it's complex few people can actually see clearly enough to define a policy
[22:25] Scoring doesn't drive starvation. It just means that the dailies build before the realtime packages.
[22:25] So it exacerbates the effects.
[22:26] I disagree, but this is an assessment of the impact of the design choice; we both agree that the behaviour is undesirable
[22:26] Starvation is going to occur whatever the ordering of the builds.
[22:26] It's just going to be less noticeable if realtime stuff builds first.
[22:27] by realtime do you mean 'manually uploaded single packages'?
[22:27] Yes.
[22:27] Things that aren't going to go unnoticed if they don't build for a few days.
[22:27] ie. not dailies, and not rebuilds.
[22:27] sure
[22:28] so short term, we can futz with the scoring algorithms
[22:28] builder shortages, and inefficient use of builders, exacerbate the situation
[22:28] Longer-term: where did all those builders go for two or three days?
[22:28] QA
[22:28] only a fraction of the PPA buildds are dedicated to LP
[22:28] Odd that it takes so long.
[22:28] I know, yes.
[22:29] But normally they're only gone for a day at most.
[22:29] Except when they're taken for mirrors.
[22:29] the handover isn't flawless
[22:29] so one possibility is a failed handover and they end up in limbo
[22:30] a system-wide failure in the handover would explain a batch not coming back
[22:31] anyhow, with bigjools' fix to the sequencer nearly here
[22:31] your branch is testing
[22:31] Thanks.
[22:31] we should get much more juice from the buildds, and be in a good position to talk about expanding the farm with less transient resources
[22:32] Right.
[22:32] Although buildd-manager isn't toooo bad with this few builders.
[23:26] if someone is bored: https://code.edge.launchpad.net/~james-w/launchpad/buildout-doc/+merge/31465
[23:27] wgrant: has a scheduler based on quotas for people/archives been discussed?
[23:28] james_w: Discussed? No.
[23:28] It gets mentioned occasionally.
[23:28] But nothing comes of it.
[23:29] what do we need to do to get a new scheduling algorithm?
[23:30] 1) Adjust findBuildCandidate
[23:30] is it just a case of trying something, or does there need to be a discussion about what direction to go in?
[23:30] 2) Adjust dispatch time estimation
[23:30] I don't know.
[23:30] 1) is easy.
[23:30] 2) is very difficult.
[23:30] So it's not as easy as it was two years ago :(
[23:31] Although, if the current trend continues we can just replace the estimation with 'give up'.
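(To make the findBuildCandidate point concrete: a quota-based scheduler of the kind james_w is asking about would roughly replace "dispatch the highest-scored candidate" with "dispatch the highest-scored candidate from an archive that has not exhausted its recent share of builder time". The sketch below is purely illustrative; BuildCandidate, pick_next_build, recent_usage and quota_share are hypothetical names, and this is not Launchpad's actual findBuildCandidate.)

    # Hypothetical sketch of quota-aware candidate selection; not Launchpad code.
    from collections import namedtuple

    BuildCandidate = namedtuple(
        'BuildCandidate', 'archive score is_manual_upload')

    def pick_next_build(candidates, recent_usage, quota_share):
        """Pick the next build to dispatch, or None if nothing is queued.

        recent_usage maps archive -> fraction of recent builder time it has
        consumed; quota_share is the largest fraction one archive may take
        while other archives are still waiting.
        """
        # Manual uploads first, then by descending score, mirroring the
        # 'realtime before dailies' preference discussed above.
        ordered = sorted(
            candidates, key=lambda c: (not c.is_manual_upload, -c.score))
        for candidate in ordered:
            if recent_usage.get(candidate.archive, 0.0) <= quota_share:
                return candidate
        # Every queued archive is over quota (e.g. only dailies remain):
        # fall back to plain priority order rather than idling builders.
        return ordered[0] if ordered else None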
[23:32] p.s. I have an ArchiveCollection running through ec2 to catch any issues before review, and I'm currently taking a hatchet to some soyuz tests to remove sampledata uses
[23:32] yay for a bored weekend
[23:32] Ooh, excellent.
[23:33] I've been meaning to see how many tests I can get to not use sampledata simply by tweaking STP.
[23:35] yeah, I'd never seen it before, but I'm mainly stopping things from using that
[23:35] the factory is just about as good for the uses I have seen so far
[23:36] Oh.
[23:36] STP is used pretty extensively throughout the test suite (including brand new stuff).
[23:36] yeah
[23:37] It probably should be merged into the factory, but it does lots of stuff that the factory does not.
[23:38] I'm also not a fan of the factory's monolithic approach.
[23:38] It doesn't seem to be very compatible with the ongoing attempts at modularity.
[23:42] indeed
[23:42] I've only gone through about 10 methods so far, and they made little use of the publisher, I think
[23:43] For a lot of things, makeBinaryPackagePublishingHistory will indeed suffice.
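(As a rough illustration of what replacing sampledata looks like in practice, here is a hedged sketch of a Soyuz-style test that leans on the object factory instead of sampledata or SoyuzTestPublisher. The import paths, the test layer and the asserted attribute are assumptions from memory rather than verified against the tree; only makeBinaryPackagePublishingHistory is taken from the discussion above.)

    # Hedged sketch; module paths and layer name are assumptions.
    from canonical.testing import LaunchpadZopelessLayer
    from lp.testing import TestCaseWithFactory

    class TestBinaryPublication(TestCaseWithFactory):

        layer = LaunchpadZopelessLayer

        def test_publication_is_attached_to_an_archive(self):
            # The factory creates the supporting objects (archive, build,
            # binary package release) on demand, so the test needs no
            # sampledata and no SoyuzTestPublisher setup.
            publication = self.factory.makeBinaryPackagePublishingHistory()
            self.assertNotEqual(None, publication.archive)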