[00:01] lifeless, I'm signing off for now, will push the branch once the upgrade finishes [00:02] (which would be now) [00:09] mars: cool [00:23] lifeless: I think basic auth is still enabled, just using the old passwords. [00:23] Which is probably useless and a security risk. [00:27] oh, hahaha. [00:27] I just realised another interaction/issue with lowering hard timeouts [00:27] * lifeless sighs [00:27] Oh? [00:27] lifeless: ? [00:27] a single rogue page that renders slowly can cause 3 other requests to timeout [00:28] because python + threads is epic fail. [00:29] so the lower the hard timeout, the more boundary-case slow pages will cause *other* pages to have inexplicable timeouts like - https://lp-oops.canonical.com/oops.py/?oopsid=1681EB3871 [00:30] OTOH this may just be a genuinely slow template [00:33] lifeless: Maybe you should use that as an excuse to spawn a push to minimise Python time throughout the application. [00:33] wgrant: we're doing that by lowering the timeout :P [00:33] Is the performance report public again yet? [00:34] any losa around that can't sleep? I need a kcachegrind profile from staging please. [00:34] wgrant: no, it needs log filtering applied to strip private urls. [00:34] wgrant: technically even staff can't look at it atm. [00:34] Ahh. [00:35] well, thats a slight exaggeration, but you know what I mean: there *can* be stuff turn up that normally you'd need DB access to see [00:36] which is undesirable at best [00:38] OOPS reports don't expose quite as much? [00:38] they are also staff only [00:38] and also need to have this issue addressed [00:38] with the issue addressed we could start publicising them, which I would dearly love to do [00:40] * thumper is frustrated [00:40] damn code didn't work as I expected [00:40] which one? [00:41] QAing the fix I did for processing merge directives [00:41] wgrant: publishedpackage is what you nuked yes? [00:41] wgrant: what revision did it land in? has it landed? [00:41] I fixed one bug, but now it isn't actually pulling in the directive (it seems) [00:42] how often are the staging logs synced to devpad? [00:43] 7 minutes I think [00:43] perhaps 6 [00:45] :( known 2a and MD bug check failed: Cannot add revision(s) to repository: missing text keys: [00:45] * thumper thinks [00:45] only way to confirm fix is not to use 2a [00:45] lifeless: db-devel r9628 [00:46] damn [00:46] Not on staging yet? [00:46] wgrant: I wish you had done that as two separate patches, db + other [00:47] wgrant: no, 10 timeouts from it in sundays edge report [00:47] :( [00:48] Which views? [00:48] +distrotask? [00:48] +filebug [00:49] Ah. [00:59] tell me about SPPPH [00:59] sorry, SPPH [01:00] also, ugh, pagination *hate* [01:00] lifeless: got a stutter there? [01:01] can anyone push to staging right now? [01:09] lifeless: What about SPPH? [01:10] SSPPH died a few months ago. [01:12] wgrant: timeouts [01:14] lifeless: Ah, the getBuildRecords timeout? [01:15] wgrant: yeah, it's a separate class so we can port a bit at a time. It has a different API as I think this is better than the old one. [01:15] james_w: It looks like it, yes. [01:16] STP is pretty old and was put together over a long time. [01:16] yeah, it's understandable. IMO it needs to mature now that it is a central part of Soyuz's testing [01:16] Definitely. [01:16] And bonus points for eliminating sample data while we're at it. 
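(An aside on the STP rework discussed above at [01:15]-[01:16]: the direction is tests that build their own fixtures through the object factory instead of leaning on sampledata. A minimal sketch follows; TestCaseWithFactory, makeDistroSeries and makeSourcePackagePublishingHistory are real Launchpad test helpers (the last comes up again at [06:07] below), but the test itself is illustrative, not part of james_w's branch.)

    from canonical.testing import LaunchpadZopelessLayer
    from lp.testing import TestCaseWithFactory


    class TestGetBuildRecords(TestCaseWithFactory):
        """Exercise build-record lookups without touching sampledata."""

        layer = LaunchpadZopelessLayer

        def test_fresh_series_has_no_builds(self):
            # Create exactly the objects the test needs, so assertions
            # can't become coupled to whatever sampledata happens to hold.
            distroseries = self.factory.makeDistroSeries()
            self.factory.makeSourcePackagePublishingHistory(
                distroseries=distroseries)
            # Publishing a source creates no binary builds by itself.
            self.assertEqual(0, distroseries.getBuildRecords().count())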
[01:17] yeah, that's my main aim, but as I started working with it I wanted to clean up the API [01:17] some changes are gratuitous, but others will help stop the tests being so tightly bound to the implementation, which is one of the problems that we have with sampledata, so I wouldn't want to replicate it at a different level [01:18] Indeed. [01:18] We have model code working around bad sampledata :( [01:18] So it will be good to eliminate it. [01:18] grah [01:26] hmm [01:26] this is nonoptimal [01:26] lpmain_staging=> SELECT Bug.heat FROM Bug, Bugtask, DistroSeries WHERE Bugtask.bug = Bug.id AND Bugtask.distroseries = DistroSeries.id AND DistroSeries.distribution = 3 ORDER BY Bug.heat DESC LIMIT 1; [01:26] heat [01:27] ------ [01:27] (0 rows) [01:27] Time: 235347.492 ms [01:27] ... [01:27] that's awfully long [01:27] you think? [01:27] It is staging, though. [01:27] But yeah, just slightly slow. [01:27] 14 seconds 'hot' [01:27] which matches production timing [01:29] http://pastebin.com/G8sctDVF [01:30] Hmm. [01:31] If I read it correctly its scanning every bug ever in descending heat and one by one comparing their distroseries distribution [01:32] what we'd really rather it do is expand the set of distroseries ids to be 'IN select distroseries.id from distroseries where distroseries.distribution = 3' [01:32] i don't suppose it matters here, but can there really not be an index on distroseries.distribution? [01:32] of course, also to do that magically. [01:32] maybe it's just such a small table that seqscan is quicker [01:32] mwhudson: it may be that that is needed. [01:32] It's a tiny table at the moment. [01:32] yes, but query analyser magic. [01:34] * wgrant wishes for an import fixer. [01:35] sed's handy for moving stuff, but then you have to reorder and merge lines :( [01:35] edwin has a vim script for it [01:36] I have his set of vim scripts. [01:36] Maybe I have it but just don't know about it. [01:36] * wgrant looks. [01:37] i want to write something that does all my import statement editing for me [01:37] (based on tags or something) [01:37] rope apparently has something to do this, but i never got ropemacs to work [01:37] Hm, so, yes, it has Ctrl+L which uses tags and adds the magical import. [01:37] And Ctrl+P supposedly reformats all the imports. [02:02] wgrant: if you're interested in performance stuff [02:02] DistroSeries:EntryResource:getBuildRecords [02:03] https://bugs.launchpad.net/soyuz/+bug/590708 [02:03] <_mup_> Bug #590708: DistroSeries.getBuildRecords often timing out [02:03] wgrant: ^ the soyuz task on it needs to change to a better query in the short term [02:04] wgrant: danilo has crafted one, but its not been transcribed; should be very low hanging fruit, and its the top timeout from Monday [02:05] poolie: shall we continue with that bug ? [02:24] hmm, when do we toggle from 'fix committed' to 'fix released' ? [02:31] sinzui: hi, btw Ursinha requested that we use oops and timeout tags as mutually exclusive things [02:32] lifeless, after the rollout? [02:32] Ursinha: no, a couple of weeks ago [02:32] Ursinha: you pinged me about filing bugs with oops & timeout tags, and said 'one or other' [02:32] lifeless, I answered your first question [02:32] hmm, when do we toggle from 'fix committed' to 'fix released'
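(For reference, the rewrite lifeless sketches at [01:32] above, expressed with Storm, the ORM Launchpad uses: pre-expand the distroseries ids with a subselect instead of letting the planner walk every bug in descending heat order. The class and attribute names here are assumptions read off the pasted SQL; this is the shape of the idea, not a landed patch.)

    from storm.expr import Desc, In, Select


    def hottest_bug_heat(store, distribution_id):
        """Return the highest Bug.heat among bugs targeted to any
        series of the given distribution, or None if there are none."""
        # Expand the series ids up front, rather than testing each
        # bug's series' distribution one row at a time.
        series_ids = Select(
            DistroSeries.id,
            DistroSeries.distributionID == distribution_id)
        result = store.find(
            Bug,
            BugTask.bugID == Bug.id,
            In(BugTask.distroseriesID, series_ids))
        return result.order_by(Desc(Bug.heat)).first()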
[02:33] Ursinha: ah, cool, ECONFUSED :) [02:33] hehe :) [02:33] lifeless, akc [02:33] or ack [02:35] Ursinha, When I see the release announcement, and I see the registry change is indeed on lpnet, I run the close_developer_bugs for the registry team. When I was RM, I ran it for all launchpad-project [02:36] sinzui, that's cool [02:36] lifeless, ^ [02:36] cool [02:36] I was asking because I wanted to fit in ;) [02:37] I know I sent it to the list a few months back. you can pass a list of launchpad ids to update, or ALL to DTRT [02:38] sinzui: the portlet thing [02:38] yes that thing [02:38] when I looked at an oops for the timeout [02:38] it shows a very long non-sql gap taking place [02:39] do you think that that is just the parsing and construction of the objects returned by the earlier query ? [02:40] lifeless, yes. EdwinGrubbs came to that conclusion [02:40] we only want a number too [02:40] right [02:40] I'm with you on that, just wanting to be sure we fix the core issue :) [02:41] sinzui: the thing is, that there are 2155 rows [02:41] sinzui: I find it hard to believe that creating 2155 objects takes 18 seconds of python time [02:41] sorry, 16 seconds [02:42] sinzui: I'm looking at https://lp-oops.canonical.com/oops.py/?oopsid=1681EB2995 [02:42] the last query that completed is at 960ms into the request, and has no limit - its the one you want to convert to get a count. [02:43] the distroseries is 103 based on another place in that page that has the literal [02:43] That is the one I was looking at [02:46] I think it is getting (packaging, spn, product_series, product) because in most cases, you want to see all the items...but most cases also make the request with a batch navigator [02:46] lifeless, ^ [02:47] so, that makes sense to me [02:47] lifeless, The property must use a decorated resultset too. It is iterating over *all* the items [02:48] it does, or it should ? [02:48] It should I think [02:48] makes sense to me [02:48] whats the property called ? [02:48] distroseries.packaging [02:50] lifeless, last year this property only had to content with a few hundred packages. It is time to review all the callsites and decide what the model really needs to provide [02:50] s/content/contend/ [02:50] thanks [02:50] yes, I agree [02:50] scaling - this is part of my design guidelines, and its past time I wrote them up on the wiki [02:51] I almost got rid of distroseries.packaging 6 months ago too :( [02:51] I wanted to get some actual concrete fingers-dirty things done before making real suggestions [02:51] sinzui: :) [02:53] lifeless, we are down to 4 callsites, 3 of which are in the distroseries model itself [02:54] so is the attribute the issue [02:54] or the table joins etc? [02:54] I guess I mean, which thing are you working to reduce-the-use-of-and-delete? [02:54] lifeless, email sent [02:54] I am wrong, one callsite, just the one getting the count! we can replace it without packaging_count [02:54] (about oops/bug change) [02:55] Ursinha: sweet, thanks. [02:56] * sinzui sees EdwinGrubbs move the other callsites to use _all_packagings via specialised methods [02:56] is '++profile++' the name of soom tool? [02:56] *some [02:57] It's a magic URL path segment you'll be able to add to Launchpad URLs to make it generate profiling data. [02:57] (once the relevant work lands, if it hasn't already) [03:00] lifeless: hi, yes, let's [03:00] scutwork of expenses and sysadmin is done [03:00] i wonder if paused kvms survive reboots?
[03:00] probably not [03:01] The defn does [03:01] the instance doesn't [03:01] spiv: everything but the glue has landed [03:02] Are there any examples in Launchpad where the privacy of an object is determined by an 'owning' object? [03:02] lifeless, as of last night, i put that null object into the testrequest and that had a trivial failure [03:02] so let's try again [03:03] cody-somerville: not polished ones, no. [03:04] cody-somerville: but private bug attachments will be kindof this soon [03:04] cody-somerville: and branches already are via the 'branch policy' stuff [03:04] poolie: cool; I has caffeine and I has toast cooking [03:04] Thats not really what I meant [03:04] although branch policy describes the default privacy for a new branch, privacy checks aren't actually delegated to it [03:04] * poolie tries again [03:05] have you seen any deprecation warning failures in bzr builder to do with 'debian' vs 'debian_bundle'? [03:05] cody-somerville: sinzui's team will be working on something very much like this soon [03:05] cody-somerville: P3As do that sort of thing. [03:05] or is that just out of date sourcecode or something? [03:05] All the publications and builds within the P3A inherit their privacy from it. [03:05] poolie: jelmer did something related to that recently [03:05] wgrant, Is this a feature inherent to zope or something the soyuz folks bolted on? [03:06] The security adapters just delegate to the archive. [03:06] You could write a security adapter which just by default delegates to the parent, but that's not the default. [03:06] In a normal Zope app that would work fine, since everything has a parent. [03:06] Not so much in LP. [03:06] wgrant, Do you actually write a security adapter or is it all done via zcml? [03:06] god no [03:07] cody-somerville: One just needs to throw an adapter in lib/canonical/launchpad/security.py. No ZCML required. [03:07] * ajmitch is reminded too much of acquisition at the moment [03:07] ajmitch: Hm? [03:07] cody-somerville: its a four letter word around here :P [03:07] though that's more the containing object [03:08] ajmitch: its more uncles [03:08] how do you typically run tests? [03:08] make check? [03:08] or bin/test? [03:08] poolie: The former runs everything, so you can't use it. [03:09] ? [03:09] You want to use bin/test, unless you want to run the whole test suite locally, in which case you might be crazy. [03:09] poolie: testr run -- -t PATTERN [03:10] cos.. it'll take a long time? [03:10] poolie: ~4 hours on a fast machine. [03:10] that's ok [03:10] I just 'bin/test -1cvvt [sometestspec]' [03:10] Ah. [03:10] i have a decently fast machine [03:11] i don't see a good reason to run it on ec2 and leave this idle [03:11] but also, i want to run specific tests too [03:11] If you want to run the whole thing, do use 'make check'. It wraps things in xvfb-run so you don't get Firefox windows popping up everywhere. [03:11] on a separate vm [03:12] wgrant: so does testr run :) [03:12] lifeless: Hm, so it does. [03:12] Handy. [03:13] The good thing about waking up at decent times for my timezone is the first thing I see online is my antipodean friends. The bad thing is: there is a risk that they may be talking windmill. [03:13] wgrant: so what do you think about doing that query change? [03:13] lifeless: I was hoping that stub could come up with something better. [03:13] But apparently not. [03:14] poolie: ok, so where does that leave us - are you making headway, or fighting friction? 
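(A sketch of the delegation pattern wgrant describes at [03:05]-[03:07] above, in the style of lib/canonical/launchpad/security.py: publications inherit their privacy from the owning archive. AuthorizationBase and the checkAuthenticated/checkUnauthenticated hooks are the real machinery; IArchivePublication and the ViewArchive checker it delegates to are illustrative stand-ins.)

    from canonical.launchpad.security import AuthorizationBase


    class ViewArchivePublication(AuthorizationBase):
        """Delegate visibility of a publication to its archive."""

        permission = 'launchpad.View'
        usedfor = IArchivePublication  # hypothetical interface

        def checkAuthenticated(self, user):
            # A publication is visible exactly when its archive is.
            return ViewArchive(self.obj.archive).checkAuthenticated(user)

        def checkUnauthenticated(self):
            # Anonymous users only see publications in public archives.
            return not self.obj.archive.private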
[03:15] wgrant: I'm about to start hacking on another branch; you're better equipped to do that one, but I don't want to ignore the high-oopser [03:16] wgrant: which is why I'm looking for either 'I'm on it' or 'not me' from you :) [03:16] still moving forward [03:16] lifeless: I'm on it. [03:16] wgrant: \o/ [03:18] hm apparently 2GB is not enough to run the tests [03:22] lifeless: Can you please explain analyze these two queries a few times on staging? http://paste.ubuntu.com/475721/ [03:26] http://pastebin.com/AT04PS1P [03:27] WTF [03:28] poolie, I run the whole suite in 2G on a dual core 1.8GHz [03:29] sinzui: so this is the problem line ? [03:29] lifeless: http://paste.ubuntu.com/475724/ [03:29]
[03:29] ? [03:29] before that [03:30] sinzui: its my first time (in many years) debugging from the pt down, so please excuse my fail :) [03:30] lifeless, count view/num_linked_packages >> len(self.context.packagings) [03:31] ok [03:31] and that would be found by looking at the view class [03:31] yes? [03:31] (just so I know how the dots are joined) [03:32] lifeless, the line you were looking at is intended to return the 5 latest packagings...I noted that we may need an index on datecreated too [03:32] lifeless, yes, I see the view class doing len() [03:32] wgrant: http://pastebin.com/1mdW53Yv [03:32] sinzui: ok, cool. [03:33] sinzui: I'll have a poke [03:33] lifeless: Aha, thanks. [03:33] The query planner is stupid. [03:33] wgrant: remember that staging stats can be different to prod too, but yes - thats what that bug seemed to say, to me. [03:33] wgrant: do you think you've got a better offering than danilos? [03:34] lifeless: I'm trimming a couple of other bits out of the query that don't make sense. [03:34] ah, ye old holistic approach [03:35] sinzui: this looks odd to me [03:35] sinzui: my mistake, the vm only had 1gb [03:35] and no swap [03:35] results = results.order_by(Desc(Packaging.datecreated), SourcePackageName.name)[:5] [03:36] oh foo, ignore me. [03:36] ECAFFEINE [03:37] would it make any sense to share ~/launchpad across VMs over nfs? [03:37] no [03:37] test isolation is - and I use the word advisedly - terrible in Launchpad [03:37] bzr doesn't work across nfs, remember! [03:37] * mwhudson hides [03:37] mwhudson: :P [03:37] :-P [03:38] might make sense to share your repo and use a checkout over nfs [03:38] the sourcedeps caches are pretty small and I wouldn't stress about them [03:39] well [03:39] my download-cache is twice the size of my launchpad repo [03:40] lightweight checkout time ;) [03:40] and the eggs dir about 1.5x the size [03:40] its actually a case where that works well [03:40] i guess that would save about 50% yar [03:40] the eggs dir I have to nuke regularly [03:40] i have a 1.5GB download cache [03:40] how do you clean it up? just randomly delete stuff? [03:40] that seems very large [03:41] mine is 415 megs and is a full branch i think [03:41] rm -rf eggs/*; make build (but of course be sure to delete a couple of things I never remember because the Makefile + buildout combo is ... traumatic) [03:41] oh eggs can get huge [03:42] 611MB eggs; 470MB sourcecode; 416MB download-cache (of which 217MB .bzr) === Ursinha is now known as Ursinha-afk [03:44] sinzui: whats a good test case to run to be sure I haven't broken this ? [03:44] * sinzui thinks [03:47] lifeless, I do not see a single test for num_linked_packages [03:47] sinzui: well, anything that uses 'packagings' will be fine. [03:49] lifeless, lp/registry/doc/distroseries.txt has 3 tests [03:49] thanks [03:49] http://pastebin.com/xUwxKGC1 [03:50] hah, forbidden attribute len() [03:53] ok, so with all eggs deleted, i get 'no module ZConfig' [03:53] i guess it depends on actually having some eggs present? [03:53] yes [03:53] :/ [03:53] as I said - traumatic [03:53] you need to delete the generated files from bin/ [03:53] (bzr can tell you those) [03:54] and then do python ./bootstrap [03:54] and then 'make build' [03:54] bin in my branch, or elsewhere? [03:54] in your working tree [03:54] lifeless, I do not think we need .packagings any more. No callsite wants the data returned. The only callsite wants count [03:54] sinzui: \o/ tests pass [03:54] sinzui: whats the page that shows it ?
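(The shape of the shallow fix lifeless pushes above as ~lifeless/launchpad/registry-packagings: since the only remaining callsite wants a number, count in SQL instead of len()-ing a list that materialises every Packaging row plus its joins. A sketch under assumed names; the store/Packaging plumbing shown here is illustrative, not the actual diff.)

    from storm.locals import Store


    class DistroSeriesPackagingMixin:
        """Sketch: count packagings without fetching them."""

        @property
        def packaging_count(self):
            # COUNT(*) over Packaging, rather than building 2155
            # Packaging objects (with spn/product_series/product
            # joined in) only to discard them after len().
            return Store.of(self).find(
                Packaging, Packaging.distroseries == self).count()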
[03:55] just any distroseries ? [03:56] Upstream packaging [03:56] 3 source packages are linked to registered upstream projects. 1 need linking. [03:56] \o/ [03:56] any ubuntu or debian series [03:57] sinzui: well, this was very shallow, so I'll push it up and we can see ;) [03:58] fab [03:58] sinzui: I don't know enough to know what the right/wrong fix is here [03:58] other than that this should help [03:58] ok, re-bootstrapping [04:01] sinzui: https://code.edge.launchpad.net/~lifeless/launchpad/registry-packagings/+merge/32164 [04:10] lifeless, r=me [04:12] lifeless: so if 'testr run' mentions no failures, there were no failures? [04:13] sinzui: cool [04:13] poolie: thats right [04:13] sinzui: do you think this is rc ? [04:14] what's the invocation to re-run failures? [04:14] testr run --faling [04:14] bah [04:14] testr run --failing [04:17] lifeless, yes. I think it is RC. I expect it is also a CP candidate since we really want users seeing the distroseries page [04:18] well as rc I can shoot it off now and it should be in the rollout ? [04:19] ok now http://pastebin.ubuntu.com/475731/ is strange [04:19] poolie: how so ? the fact its failing? [04:20] yes [04:20] maybe it's not my fault [04:22] how would i re-run just a doctest? -t views_txt doesn't seem to work [04:22] maybe with a dot? [04:22] teset [04:23] yes [04:23] e.g. -t doc/views.txt [04:23] or whatever [04:24] do you have any guesses about the memcached test failure? [04:25] none sorry [04:25] it looks like it uses a subclassed test or something ? [04:25] sinzui: throwing it at ec2 [04:25] subclassed view I mean [04:25] woot! [04:26] maybe memcached is sad? i'm also getting "WARNING:root:Memcache set failed"... [04:26] just thinking aloud here [04:27] poolie, I think that means the test needs to run on the LaunchpadFunctionalLayer [04:28] Oh, it is on the correct layer [04:30] poolie, Did you change something on the distroseries-needs-packaging.pt page? [04:32] sinzui: he changed base-layout [04:33] I do not think that should have any impact on the test. The test does not care about layout/markup. [04:35] poolie, when you saw "WARNING:root:Memcache" did you see the memcache layer was brought up before the test? [04:35] poolie: you can do 'subunit-ls<.testrepository/$testid' to answer that question [04:36] lifeless: would it be easy to implement the fake librarian as a test resource? I could install it from setUp/tearDown but it'd be such a chore. [04:36] jtv: we don't have testresources glued in with layers yet [04:36] jtv: so, setUp/tearDown will be easier, for now. [04:37] afk for a bit [04:37] bummer [04:37] sinzui: it did say the layer had been started up [04:38] poolie, I see the test requires LaunchpadFunctionalLayer. I have seen the warning reported when authoring a test on the wrong layer [04:38] poolie, memcache layer is brought up just before the LaunchpadFunctionalLayer... [04:39] do we need another scummy loop to make sure memcache has started accepting connections? [04:40] poolie, I think (speculate) that I have seen memcached startup/shutdown contentions between tests. I think I saw memcached not come up because it was not fully torn down in a previous test [04:40] I saw it only once, and the issue was only on my machine when I was refactoring [04:42] so maybe it's something about the specific tests i was asking for [04:42] :( [04:44] poolie, you may want to try --layer=LaunchpadFunctional to restrict tests to a certain layer [04:44] that runs only the tests in that layer?
[04:44] correct [04:44] ok [04:44] i'm now trying to just run everything [04:51] everything seems to be passing [04:52] :) [04:53] though who knows what will fail later [04:53] and i found another tribunal bug :) [04:53] it should set stdin into nonblocking mode to cope better with slowly-fed pipes [04:53] jtv: did you get the mail 'Translation template import - typewriter in pidgin-typewriter trunk' just now ? [04:54] lifeless: I don't think I did. [04:54] forwarded [04:54] its a little worrying that it exposes all the email addresses [04:54] and [04:54] that we're getting it at all :) [04:55] That was a little side job I didn't quite get around to doing at Epic. [04:55] We want to disable these success notices for Ubuntu imports at the least. [04:55] i get a lot of these [04:55] i don't know about yours in particular [04:56] lifeless: I believe the exposure of co-recipients happens in simple_sendmail [04:57] jtv: i just happened to notice that there's what seems to be translations export metadata inside 'virsh help domname' [04:57] actually i'll just file, or look for, a bug [04:57] poolie: sorry what is "virsh help domname"? [04:57] virsh is a tool for managing kvm stuff [04:58] domname is a help topic in the tool [04:58] and who is domname? [04:58] i think it's a bug in either their use of gettext or our packaging [04:58] Then that would probably be a bug in libvirt-bin [04:58] we're down to 52 timeout bugs [04:58] < 2 per developer! [04:59] (or rather, whatever source package it came from) [04:59] lifeless, you rock! :) [04:59] cody-somerville: not I [05:00] cody-somerville: I've done 2 I think [05:01] lifeless, I was just commenting on how you rock :P [05:01] cody-somerville: aw shucks, well thanks! [05:10] or 52 per lifeless [05:11] so with all of the tests running, i don't seem to see that memcached failure [05:13] poolie: cool [05:34] lifeless: with that null object added, 0 failures out of 2280 so far [05:35] awesome [05:41] want to give me an incremental review? [05:42] yes [05:42] r=lifeless [05:42] except as its not release-critical, it can't be landed for the moment >< [05:42] ok [05:42] i might take a break and let this keep grinding [05:45] kk [05:45] wgrant: around? [05:45] lifeless: Just wandered into uni. [05:45] So, yes. [05:45] hey [05:45] so [05:46] I mailed you an oops-extract [05:46] I was wondering if you could quickly eyeball me and tell me if its still relevant, or if something done in soyuz recently has 'made it all better' [05:47] My eyes are bleeding. [05:47] That... that query. [05:49] So, I'd say that's still problematic. [05:49] And probably related to the model changes a couple of months ago. [05:49] Nothing there has changed recently. [05:50] the query at the top isn't particularly bad [05:50] its death by a thousand cuts territory [05:50] It's always been a problematic page. [05:50] do you remember the template name ? [05:50] nvm I see it [05:51] more relevantly, are their unit tests of it ? [05:51] Of the view? [05:51] Very unlikely. [05:51] Only Code tends to do those, AFAIK. [05:51] s/their/there/ [05:51] wgrant: many places do fwiw [05:51] not directly, but via publication, using unit test framework though [05:52] Hm, so they do. [05:52] in fact, test_archive_packages == soyuz one [05:52] Indeed, there are a few. [05:53] But it's tiny. [05:57] also, if they drive the view without publication its an invalid test [05:59] which is higher up the stack [05:59] LPFunctionalLayer [05:59] or DBFunctionalLayer ?
[05:59] lifeless: So, the whole getBuildRecords stack makes me somewhat sad. Code duplication, manual SQL construction, etc. I've a quick and ugly fix for the slow query now, but I'll rewrite it all next cycle. [06:00] wgrant: \o/ [06:00] lifeless: LP == DB + librarian + memcached [06:00] functional = Zope [06:00] zopeless = Zopeless Zope [06:00] I think those are the main ones. [06:01] I should have called "zopeless" "zope on a rope". It would still be inaccurate, but at least it would make me laugh. [06:01] spiv: Was it ever actually Zopeless? [06:01] wgrant: it is zope-- [06:01] Kinda. [06:02] Once upon a time it avoided loading all the config and zcml and whatnot. [06:02] Right. That's what I thought it would do, when I first came upon the name. [06:02] whats the thing to get a test browser again ? [06:03] I think that was back when the paint in cave paintings was still wet. [06:03] lifeless: self.getUserBrowser [06:03] spiv: so, yesterday then ? [06:03] spiv: :P [06:05] wgrant: any tips on test helpers to populate a ppa? [06:05] wgrant: I want to trigger the repeated queries seen in https://edge.launchpad.net/~kalon33/+archive/experimental-stuff/+packages [06:05] lifeless: See SoyuzTestPublisher. james_w is in the middle of an API change, but the old one will be around for a while. [06:05] Although there might be enough stuff directly in the factory now. [06:05] Let's see. [06:07] lifeless: STP.getPubBinaries will be more useful if you want sources, binaries and builds (which is probably the case). If you just want sources, LOF.makeSourcePackagePublishingHistory would work. [06:07] thanks, I'll see if I can get a failing test up with those hints [06:08] object.canonical_url is the way to get a canonical url, right ? [06:08] canonical_url(object) [06:08] blah [06:08] * mtaylor decides to just be annoying and complain about zcml for no reason [06:09] mtaylor: I think you need to actually make a complaint for that to make any sense at all [06:10] lifeless: oh - I was just complaining about the existence of zcml in the first place [06:10] mtaylor: it solves a problem [06:11] lifeless: yes. it does. [06:11] mtaylor: you didn't have enough things to complain about! [06:12] lifeless: this is a strong, but potentially correct assertion :) [06:16] gah! python-software-properties should have been in base [06:17] mtaylor: Yes :( [06:17] add-apt-repository is handy. [06:17] it's fantastic [06:17] of course ... don't even get me started that to install add-apt-repository one has to install python-software-properties... [06:18] wgrant: self.getPubBinaries ? [06:18] lifeless: You need a SoyuzTestPublisher. [06:19] Which is in lp.soyuz.tests.test_publishing [06:19] ay [06:19] is making a new series feasible [06:19] or should I just use hoary? [06:20] lifeless: http://bazaar.launchpad.net/~wgrant/launchpad/bug-590708-getBuildsByArchIds-timeout/revision/11317 is my quick fix. It does the same thing we do in most of the rest of the Soyuz queries -- precalculate the archive IDs (making use of the distribution/purpose index), and stick them into the query, preventing the planner from fucking us over. [06:20] I'm cloning from the top of xx-archive.txt which only makes me slightly dirty [06:20] lifeless: Use hoary for now. [06:20] james_w is fixing everything to let us not use sampledata, but it's not quite there yet. [06:21] My problem with that is that almost all of the Soyuz tests use sampledata :-( [06:21] Yes :( [06:21] wgrant: +1 [06:22] wgrant: is the fakeChroots thing etc needed? 
[06:23] wgrant: I see 5 lines of boilerplate in xx-archive.txt and don't want to copy unneeded boilerplate [06:23] lifeless: I wonder if you might not get any builds if you don't add them. [06:23] But try without. [06:23] so, just construct and use ? [06:24] In order to get the right binaries, you may have to prepareBreezyAutotest and use breezy-autotest instead. [06:24] Different tests do different things for different reasons :( [06:24] wgrant: I'm totally list [06:24] lost [06:25] lifeless: STP.prepareBreezyAutotest does 'stuff'. [06:25] The 'stuff' may be necessary to get the right binaries. [06:25] I'm not quite sure. [06:25] ok, I'll just copy cprovs stuff :) [06:25] lifeless: So, I'd like to confirm that the query still performs sanely. For that I need the archive IDs. [06:26] shoot [06:26] SELECT archive.id, archive.name FROM archive JOIN distribution ON archive.distribution = distribution.id WHERE distribution.name = 'ubuntu' AND archive.purpose IN (1, 4); [06:27] 1 | primary [06:27] 534 | partner [06:27] Thanks. [06:27] you could do this with a subselect and an IN clause, just saying. [06:28] The query planner disagrees. [06:28] oh! [06:28] thats fail [06:28] though I'm a natural born sceptic here [06:28] I would suspect a correlated subquery-by-mistake or something like that [06:28] That's the whole problem with the query. [06:29] lifeless: http://paste.ubuntu.com/475767/ [06:29] analyze plz. [06:31] wgrant: cold cache this one is a bitch [06:31] Of course. [06:31] suggests we have more work to do is all [06:33] Index Scan Backward using buildfarmjob__date_finished__idx on buildfarmjob (cost=0.00..71787.87 rows=103051 width=12) (actual time=0.118..246.771 rows=778 loops=1) [06:33] is the one that seems to be painful [06:34] Still, timing-wise it's 30 times better than the current one. [06:34] And I may be able to remove the date_finished thing later. [06:34] Its purpose is limited. [06:35] how do you interpret the cost= bit? [06:35] cody-somerville: its a scale-less estimate [06:36] all the costs are on the same scale, but the scale can't be compared outside the explain [06:36] It looked at over 103k rows? [06:36] That's the expectation. [06:36] StevenK: well, the stats made it think it would [06:36] one more failure, a very specific test about base-layout [06:36] should be shallow [06:36] poolie: \o/ [06:36] First parenthesised section is what the planner thinks, second is what actually happened. [06:37] with 6168 run [06:40] wgrant: [06:40] ERROR: lp.soyuz.browser.tests.test_archive_packages.TestPPAPackages.test_ppa_packages_query_limit [06:40] File "/home/robertc/launchpad/lp-branches/working/lib/lp/soyuz/browser/tests/test_archive_packages.py", line 111, in test_ppa_packages_query_limit [06:40] test_publisher.getPubBinaries('binary1', distroseries=hoary, archive=ppa) [06:40] File "/home/robertc/launchpad/lp-branches/working/lib/lp/soyuz/tests/test_publishing.py", line 303, in getPubBinaries [06:40] build.builder = builder [06:40] Unauthorized: (, 'builder', 'launchpad.Edit') [06:40] ------------ [06:41] should I login as the owner of the ppa first ? [06:41] I hope not. [06:42] http://pastebin.com/ciEsMcd6 [06:42] the with_person_logged_in bit was not there when it errored [06:43] I didn't think .builder was settable except by celebrity? [06:44] so how are you meant to use this test 'helper' [06:44] lifeless: See how others use it, I guess.
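(wgrant's quick fix at [06:20] above, reduced to its essence: run the cheap archive lookup first, where the distribution/purpose index applies, then feed the literal ids into the expensive build query so the planner has nothing to mis-plan. The real revision is linked above; class and attribute names here follow the pasted SQL and should be treated as assumptions.)

    from storm.expr import In


    def builds_for_distribution(store, distribution):
        # Step 1: a tiny query that can use the (distribution, purpose)
        # index on Archive.
        archive_ids = list(store.find(
            Archive.id,
            Archive.distribution == distribution,
            In(Archive.purpose,
               [ArchivePurpose.PRIMARY, ArchivePurpose.PARTNER])))
        # Step 2: hand the concrete ids to the big build query, instead
        # of leaving a join for the planner to get wrong.
        return store.find(
            BinaryPackageBuild,
            In(BinaryPackageBuild.archiveID, archive_ids))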
[06:45] I suspect the other users might mostly run Zopeless :/ [06:45] deathrow doesn't even login [06:45] Zopeless doesn't need login. [06:45] ok, so -who- should be logged in to use this ? [06:45] I guess you could just use an admin. [06:45] That's what lots of tests seem to use. [06:46] how is that spelt ? [06:46] login('admin@canonical.com') [06:46] There may be a constant for that now. [06:47] adding one [06:50] I thought one had been added [06:50] there is one [06:50] of course i can't find it now [06:50] login_team('admins') also works i guess [06:51] elif person_or_team.is_team: [06:51] AttributeError: 'str' object has no attribute 'is_team' [06:51] sadness, person_logged_in -> fail on str [06:51] Then obviously it isn't an IPersonRole [06:52] I think I'm misremembering that [06:54] jtv, lifeless, bug 615708 is the one i mentioned before [06:54] <_mup_> Bug #615708: help for domname contains Launchpad translations metadata? [06:55] StevenK: do you have any idea how that^^ could happen in soyuz [06:56] Sounds like Rosetta's langpack stuff to me. [06:56] Indeed [06:56] Soyuz has just about nothing to do with that. [06:56] No, that's the po file header (which would typically be present anyway, Launchpad involvement or not) being allowed to sneak into somewhere where it's not supposed to be. [06:56] Rosetta just sends the files through it. [06:57] As I said yesterday, looks more like a bug in the source package—something in how it's built. [07:00] Maybe upstream uses gettext files without headers, and came to rely on that. Maybe they run the empty string through gettext somewhere and so get the header as a translation. [07:00] in short, its a software bug in virsh ? :) [07:01] lifeless: that was my impression yesterday, and I stick by it. [07:01] stand by it, rather. [07:01] * StevenK grumbles at initialise-from-parent.txt [07:02] StevenK: bzr rm [07:02] I did [07:02] wgrant: Code from it in a unittest fails [07:02] StevenK: Functional vs Zopeless, I bet. [07:03] wgrant: thanks, I have enough now to try hooking in HasQueryCount [07:03] Or possibly just being logged in as a different user. [07:04] wgrant: The line that is failing is hoary['i386'].getPublishedReleases(). But IDistroArchSeries doesn't have a .getPublishedReleases()? WTF? [07:04] StevenK: i-f-p.txt doesn't have ['i386']. [07:05] Also, that method name is broken. [07:05] Kill it. [07:05] - >>> hoary_i386_pmount_pubs = hoary['i386'].getReleasedPackages('pmount') [07:05] haha [07:05] hahaha [07:05] hahahahahahaha [07:05] lib/lp/soyuz/doc/initialise-from-parent.txt: >>> hoary_pmount_pubs = hoary.getPublishedReleases('pmount') [07:05] Difference: queries do not match: 12 is >= 51 [07:05] lib/lp/soyuz/doc/initialise-from-parent.txt: >>> foobuntu_pmount_pubs = foobuntu.getPublishedReleases('pmount') [07:05] to show a PPA with 2 binary publications. [07:05] StevenK: getPublishedReleases != getReleasedPackages [07:05] wgrant: A few lines under that [07:06] StevenK: You initially spoke of getPublishedReleases. But the line you quote mentions getReleasedPackages. [07:06] Right. wgrant correctly tells me that I need to learn to read. [07:06] I stand by my assertion that the method names are fucked. [07:06] This may mean that I need to go on a rampage next week and destroy them all. [07:09] and 55 for 3 packages [07:09] this is going to end badly:) [07:09] Indeed it will. 
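(The failing test quoted above, test_ppa_packages_query_limit, is built around the HasQueryCount matcher lifeless mentions at [07:03] below. Roughly, and assuming the QueryCollector helper from lp.testing, it has this shape; the cap of 12 is taken from the matcher output pasted above, which saw 51 queries for 2 binary publications and 55 for 3.)

    def test_ppa_packages_query_limit(self):
        # Publish a couple of binaries so +packages has rows to render.
        test_publisher.getPubBinaries(
            'binary1', distroseries=hoary, archive=ppa)
        collector = QueryCollector()
        collector.register()
        self.addCleanup(collector.unregister)
        # Render the page through the full publication machinery; a
        # view driven without publication would be an invalid test.
        self.getUserBrowser(canonical_url(ppa) + '/+packages')
        # Currently fails: the count grows by ~4 queries per package.
        self.assertThat(collector, HasQueryCount(LessThan(12)))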
[07:13] omg doctests [07:13] failing on whitespace differences [07:14] actually [07:14] what it probably means is that there is -one- key diff [07:14] and the whitespace stuff just telling you about all differences [07:14] perhaps not, but I found this behaviour very confusion [07:14] s/ion/ing/ [07:14] right, that is it [07:14] poolie: "Feature" [07:21] wgrant: if you're still around ? [07:21] lifeless: Of course. [07:21] In a shitty tute. [07:21] So please, distract me :) [07:22] I'd like to do roughly what I did with _all_members for +packaging [07:22] which is to make the request count constant for 2 and 3 build pages by populating related data [07:22] If you could point me at a couple of entry points in soyuz, that would be great. [07:23] +packaging is more Registry. [07:24] I don't believe in those lines [07:24] So there probably aren't really any Soyuz entrypoints, as such. [07:24] the data is generated by soyuz [07:24] So I'm not quite sure what you're asking. [07:24] hmm [07:25] so [07:26] I'll follow my noise [07:26] *nose*, from the template [07:30] * StevenK wonders if he can get the exception text with self.assertRaises() [07:34] it should do that by default [07:34] or you could grab my matchesException , mmm, I wonder if thats in testtools yet [07:41] lifeless: i wonder if we should convert doctests one by one to just use matchers or something [07:41] just in line, to make it easier [07:41] it would have made this one easier [07:41] however [07:41] my test run failed i guess due to the lack of a $display to run windmill things [07:41] i'm going to retry with make check and then i think that's it [07:45] testr run should have done the display thing for you [07:45] be sure to log in with X forwarding in your ssh [07:45] it lets it work regardless [07:46] i didn't use it because i wanted to pipe it out to tribunal [07:46] ah right - to get incremental ? [07:47] incremental progress and errors yes [07:50] stub: btw, I've tagged a couple of things dba [07:52] k [07:53] /launchpad-project/+bugs?field.tag=dba or something like that [07:53] stub: I need to chat with you to make sure your life won't become a living hell with the RFWTAD LEP [07:54] francis pointed out that db patching is a bit labour intensive at the moment, or something [07:54] ubuntu['breezy-autotest'] [07:54] And ubuntutest['breezy-autotest'] [07:54] Sampledata, YOU (&(^ING SUCK [08:06] how does this hang together [08:07] archive-packages.pt does [08:07] [08:08] oh, I see, non_selectable is a bit magic [08:10] and -fun- it has a template per row [08:11] O hai, I can haz review? [08:19] O hai, I can haz review? [08:20] Whoops. [08:20] And obviously not [08:21] heh [08:21] whats up? [08:21] lifeless: Tis simple-ish: https://code.edge.launchpad.net/~stevenk/launchpad/switch-ifp-to-unittests/+merge/32073 [08:23] Its a bit late for me to be confident in checking that [08:23] sorry [08:23] at a first glance though [08:23] you have lots of literal sample data there [08:23] So the original doctest :-( [08:23] might want to add constants to sampledata.py [08:23] Er, so did [08:26] remind me [08:26] how does storm and __init__ interact [08:27] danilos: another nice effect of your big variants cleanup is that we're one step closer to not passing pofiles around in the new "recife" methods. [08:28] \o/ registry-packagings passed ec2 [08:31] stub: ping [08:35] * lifeless wtfs class SourcePackagePublishingHistory(SQLBase, ArchivePublisherBase): [08:35] Yes, SPPH is ... 
special [08:36] StevenK: btw I really really wish you wouldn't propose things that you're only just working on [08:36] lifeless: Why not? [08:36] terrible diff gets mailed out [08:36] by which I mean useless [08:36] lifeless: The last MP I did as WIP? [08:36] yes [08:36] Does that still mail out a diff? [08:36] its not how the system is meant to be used [08:37] lifeless: If I want Julian to look at a branch, I need a WIP MP :-( [08:37] why? [08:37] (obvious answer, don't worry about Julian looking at the branch :P) [08:37] Because he wants to view a nice diff in a web page, I guess [08:37] danilos, henninge_: setCurrentTranslation doesn't really need a TranslationSide argument, so I'm removing that. Also, IPOTemplate now exposes its TranslationSide but not the traits that go with it. [08:38] I'm doing it because Julian asks me, not to annoy you [08:38] danilos, henninge_: at least, when I get this branch reviewed & landed. :) [08:38] StevenK: you're the only person doing it, its pretty weird. I know you're not trying to annoy; what do we need to change to fix this? === henninge_ is now known as henninge [08:38] lifeless: Have Julian be less lazy? [08:39] I guess? :-) [08:39] jtv: yeah, try finding a reviewer willing to approve *that*! [08:39] uh-oh [08:39] ;) [08:39] henninge: well no worries, guess who's ocr tomorrow :) [08:39] no jtv, you can *not* approve your own branches! [08:40] «damn—how did he guess!?» [08:41] is there a utility to do what cachedproperty does, but in a function ? [08:43] good morning [08:45] https://bugs.edge.launchpad.net/launchpad/+bug/615740 sigh [08:45] <_mup_> Bug #615740: test_on_merge.py doesn't handle eintry [08:47] are there ppas in the sample data? [08:48] yes [08:48] cprov has one, for example [08:49] ok, cool, I've found code that runs when the packages list is shown :) [08:50] except, not on +packages, only on the ppa page itself. wtf [08:50] lifeless: pong [08:50] stub: hey [08:51] stub: can you do a brief skype call, or is irc more convenient? topic: agile db patches [08:51] oh man, thats so annoying; my timeout fix for sinzui missed edge by 1 hour [08:52] I can do either. Typing is generally better for me. [08:52] ok, lets go with typing [08:52] so, in the new workflow that should start in ~ a week [08:52] we'll get rid of edge [08:52] and have nondb patches flowing straight through with in-line qa [08:52] you know all this ;) [08:53] to start doing db patches in a similar manner, I've been assuming it was primarily a matter of figuring out a series of non-downtime db patches and background scripts interspersed with code changes to match [08:53] e.g. db, code, db, code etc until the transition is done [08:54] not all db patches can be done like this, and so far I've been figuring that its up to the developer to choose between their development time vs doing a single patch in the monthly downtime [08:54] Unless we plan to be cowboying all our DB changes, the main issue is reworking our processes and tools to support that. [08:55] so - what things are likely to cause friction or failure [08:55] With replication, non-downtime means data migration and adding/dropping indexes. [08:55] could you expand a little, so I think I understand :) [08:55] how do i fix this: [08:55] Getting distribution for 'testtools==0.9.6dev91'. [08:55] Error: Couldn't find a distribution for 'testtools==0.9.6dev91'.
[08:56] poolie: update your download-cache and run make build [08:56] Data migrations are just python or sql scripts that get run on the master - just data changes like any other script. No real problem, except we need to ensure they get run and tested on staging automatically. [08:58] lifeless: with update-sourcecode? [08:58] Adding and dropping indexes may well have to be a manual process. We need to run CREATE INDEX CONCURRENTLY on each node, outside of a transaction, and be around to clean up if things mess up. Also need to manage long running transactions as the index build will block on long running transactions. [08:58] poolie: I use 'bzr update' in the download-cache directory [08:59] For pretty much all other DB patches, we need to go via the slony tools. Slony locks all the tables, removes all the replication triggers, applies the patches to the nodes one at a time, rebuilds all the replication triggers (which might be different now there might be new columns etc.), and unlocks. [09:00] stub: does that involve perceived downtime ? [09:00] oh for deb dependencies :) [09:01] lifeless: Yes. The lock will never be granted while the appservers are active. [09:01] locks I should say (exclusive lock on every replicated table in the replication set) [09:01] stub: ok, so that I understand - we can't e.g. add a column live [09:02] stub: but we could create a whole new table, migrate data to it, then start it replicating ? [09:02] Yup. So goal is to minimize that time. There are at least two improvements we can make to our existing tools to lower times, and that is the first step I think. [09:03] Not invoking comments.sql on production (no real need so it is waste), and making security.py more intelligent. [09:03] Yes, we can create a new table live and add it to replication. [09:03] ok [09:04] so a long term theme will be increasing the refactorings we can do live [09:04] spiv i was going to try something like https://bugs.edge.launchpad.net/launchpad-foundations/+bug/615740/comments/1 [09:04] <_mup_> Bug #615740: test_on_merge.py doesn't handle eintr [09:04] Although it usually would be create table, add to replication, populate rather than create table, populate, replicate [09:04] oh ffs launchpad [09:04] poolie: ? [09:04] whitespace folding in bug comments is not the kindest thing for python programmers [09:04] heh [09:05] masochists all [09:05] anyhow spiv http://pastebin.ubuntu.com/475807/ [09:05] stub: thanks; so - in summary - its doable for a small set of cases now, and we can start building them out as we go [09:05] * spiv moves to this channel [09:05] stub: there is expected to be a high risk of pear-shapedness, so it won't be part of the fast-qa path [09:06] stub: and thus we also need some reporting for folk that are doing staged-refactorings so they know when a particular db change has gone live and they can move to their next stage. [09:06] poolie: looks reasonable to me [09:06] stub: is that about the size of it ? [glossing over the details about what we can/can't do today] [09:07] i'll clarify that comment and interactively test it [09:07] poolie: although I wonder why make check needs a SIGWINCH handler? [09:07] spiv: make check runs the pqm merge check [09:07] spiv: which ~ 0 devs run - it has a supervisor process etc [09:07] lifeless: Pretty much. We already do indexes live as necessary, normally without too much grief. The biggest fallout from live changes is occasionally timeout OOPSES when I underestimate how long a lock will need to be held, and breaking staging (eg.
creating new DB users and forgetting to replicate that work there). [09:07] lifeless: this only makes me more perplexed [09:08] stub: thanks heaps; I'll write this up somewhere. How would you feel if I encourage people to start exercising this ? [09:08] lifeless: New tables we can do with the existing infrastructure for process and staging. And if the patch is suitable to be applied live, I can do so. I'll need to do it manually, but I don't see any risk on the db side. [09:09] spiv i can't see where it gets one from [09:09] stub: do you think the additional work will be a burden ? [09:09] stub: (work for you I mean) [09:09] We can start now, sure. I'm not sure how popular it will be - suck it and see I guess. [09:09] maybe as a side effect of psycopg2 or something? [09:09] stub: cool, thanks! [09:10] poolie: possibly something like that :/ [09:12] poolie: +1 [09:13] lifeless: to which bit? the patch? [09:13] yes [09:13] * poolie is glad he bought a new desktop cpu/mb :) [09:15] Good morning [09:15] lifeless: So do we want a short term 'I'm busy' system? If the appserver is in read-only mode, all requests pop up a 'I'm busy. This page will automatically reload in 60 seconds' screen. This could give us some agility for, say, <5 min db outages without annoying users too much (for some values of 'too much'). [09:16] stub: that sounds like an improvement on the current read-only thing regardless [09:16] why auto reload? [09:17] So people can walk away, have a coffee, and their posts have been completed when they get back with no need for retyping. [09:17] except for POSTs [09:17] :) [09:18] It would need to cope with POSTs. I'm not sure of the JS magic required. [09:19] coping with POSTs will run into browser issues [09:20] an interaction between the HTTP spec for POST, and browser security - I [09:20] stub: Hi. [09:20] 'm fairly sure it would be a path to pain [09:22] yo [09:23] If it doesn't cope with posts, then there is no improvement. GET requests already work just fine in read-only mode. [09:24] stub: Could you please EXPLAIN ANALYZE http://paste.ubuntu.com/475767/ on a production slave? [09:24] gotcha [09:24] I think we already do something similar for OpenID - not sure. [09:25] wgrant: http://paste.ubuntu.com/475813/ [09:25] stub: Eeexcellent. Thanks. [09:25] lifeless: helleau [09:25] bigjools: hiya [09:26] lifeless: a new storm seems to be in our sourcecode, which is nice, coz it's got some improvements we should be using. Thought you might like to be aware :) [09:26] bigjools: I'm trying to find the code that runs to generate the table of packages on ppa/+packages [09:26] bigjools: any tips? [09:26] I'm going to go on a rampage to remove unnecessary .count() with .any() [09:27] s/remove/replace/ [09:27] since the latter now strips ordering [09:27] * noodles775 looks [09:27] bigjools: thanks for highlighting that; as it happens I knew because I let deryck know he needed it to land his branch - his branch was how we found the bug with the storm cache that is fixed in 0.17 [09:27] and bool() on SQLObjResultSet now works [09:27] \o/ [09:27] wgrant: That might start performing bad when there are a large number of unfinished jobs [09:27] wgrant: I think we can make an index to cope if that happens [09:27] stub: There are already lots with date_finished NULL. [09:28] lifeless: yeah +packages can get slow. 
[09:28] bigjools: right, I'm pulling on it - it goes up by 4 queries per package, more or less [09:28] bigjools: but having trouble figuring out what damn methods run :) [09:30] lifeless: is it the source-package-list macro that you're looking for (in archive-macros.pt), which uses the view's batched_sources property? [09:30] noodles775: no, its down at the model layer [09:30] I navigated the templates ok with a little elbow grease [09:30] wgrant: (((status <> 1) OR (date_finished IS NOT NULL)) AND (status = 2)) [09:31] wgrant: Doesn't that just simplify to 'status=2' ? [09:31] Or am I lacking coffee again? [09:34] stub: It does, yes. [09:34] But fixing that requires a bit more of a rework. [09:34] lifeless (if any): i think flags-webapp is good now, can you try it again? [09:34] lifeless: so you've been through the ArchiveSourcePublications iterator that's used by the batched_sources property then? Hrm. ok. [09:34] There's duplicated code and string-based query construction everywhere. I plan to rewrite it. [09:35] Cool. If we can simplify that sort of thing, we can improve things with more specialized indexes if needed. [09:35] poolie: try again ? If you mean land it - no, I can't - we're frozen for non-rc till, I think, friday [09:35] noodles775: I'm currently looking for getBuildSummaryStatus [09:36] poolie: I can do a test-only run, but that should be precisely equivalent to what you just did [09:36] noodles775: I found what I needed - ArchiveSourcePublication in the adapters [09:36] Great. [09:41] ok, so it causes 2 of those per-row queries itself [09:42] and man is it indirect, how many getUtility things? [09:42] I'll unravel it a bit tomorrow. Objects are great. [09:45] lifeless: dude, you can't RC approve your own branches! [09:45] Teehee [09:45] That's cheating [09:47] heh [09:49] thumper: did you see my comment on the bug you marked qa-untestable? [09:49] yes [09:49] but I didn't see how that tested it [09:49] it didn't start [09:49] and staging was down [09:49] the problem was in the creation - as soon as you hit the button it oopsed [09:49] it didn't need to start building [09:50] I didn't think so [09:51] I thought it was the fact that it could have a date finished, but not a date started [09:51] that is what the branch fixed [09:51] hmmm actually, I might be mistaken [09:51] bigjools: can't rc-review, can rc-approve ;) [09:52] lifeless: dude, no! [09:52] not if it's your own branch [09:52] no? my bad then; where are the docs, I'll get the policy straight for future [09:53] rc analysis seems to be totally separate from code review to me [09:53] in this case it's fine I would have said it was ok, but generally it's frowned on [09:53] lifeless: but you can't rc analyse your own [09:53] thumper: *why not* [09:53] its not like code review [09:53] why can't you review your own code? [09:53] yeah, it is [09:53] no, its not [09:53] in this regard it is [09:53] *why* [09:54] judgement is skewed when looking at your own stuff [09:54] conflict of interest [09:54] that is why we have other people look at things [09:55] thats a very adversarial way of thinking about both code review and release-critical assessment [09:55] if it's a genuine RC you won't have any problems getting it in [09:55] there is no rush at this stage [09:56] huh?
I think you've just segued to a different concern [09:56] I would not think of doing a branch, getting someone to review it, and then landing it with my own RC [09:56] not really, I'm pointing out that you had plenty of time to get approval from someone else [09:57] I didn't think about time-till-release at all in doing this [09:57] the new approval process says that TLs must get approval from Francis or other TLs [09:58] what new approval process? Only one that I recall was db-query approvals wasn't it? [09:58] all out of band prod changes [09:58] which was (francis, stub, me: can approve anything, TL's can approve each other) [09:59] and you can't approve your own stuff [09:59] taking a step back, I can see you feel I did something totally wrong [09:59] and I don't want to be doing that [10:00] however I don't *understand* the assertion you are making [10:00] so I want to drill into that [10:01] it boils down to the fact that we don't like unilateral changes and rc analysing your own code is a potential recipe for problems. [10:02] so there are two things there that concern me [10:02] you're the first person I know who's RCed his own branch :) [10:03] what risks do you see in assessing a branch for rc status at this stage, and how are they affected by who wrote the branch vs who assesses it. [10:04] it's the same principle as reviewing your own code, you *will* miss problems that other people see [10:04] I object to it being the same principle [10:04] I don't think its even vaguely similar [10:04] 2 of us disagree with you so far [10:05] anyway I don't want to argue with you about this, perhaps you'd like to raise it in tomorrow's TL call [10:05] You've both asserted a cultural norm, not made any attempt to expand other than to say 'judgment is in question' [10:06] I'm going to read up the process etc [10:06] and get well across that [10:06] I'll probably send it to the list, I think the more intellects assessing this the better [10:06] I think it is a matter of culture and policy rather than us saying "in this particular respect" [10:06] I am not asserting a cultural norm, I am telling you that it's the same as reviewing your own code [10:06] bigjools: And I'm disagreeing with the assertion that it is the same. [10:07] but I agree with bigjools and the "same as reviewing your own code" [10:07] two to one, we win [10:07] * thumper leaves [10:07] lol [10:07] rc analysis is about risk assessment: chance of throwing out the release vs delaying the fix for a separate QA round [10:09] code review is about making the code better, understanding if its clear enough for another person to understand it and maintain it [10:09] lifeless: in that case, why don't we let all developers RC their own branches? [10:10] what makes you special? [10:10] because RC assessment is delegated to the rotating RM, plus jml, me & flacoste, AIUI [10:10] ah, so it's by convention [10:10] yes [10:10] or should I say "culture and policy" [10:11] which is what you were arguing against earlier [10:11] I didn't argue against it, I said I seem to have violated a cultural norm [10:11] that in itself doesn't mean its good or bad [10:11] just different [10:11] and [10:12] Would I be happy with every developer being able to make an RC assessment? Actually, I think I would.
[10:13] however, entirely separately, I must apologise for doing *any* RC approvals [10:13] according to https://dev.launchpad.net/PolicyAndProcess/ReleaseManagerRotation its not delegated to *anyone* except the RM [10:13] I'd like to understand why [10:13] (because it doesn't fit what I've observed happening here in the not-very-distant-before-I-became-TA-past) [10:14] bigjools: I shall however follow it now that I've reread it and raise it for reevaluation [10:15] lifeless: usually the RM and Francis dole out RCs, because they are best placed to see the bigger picture [10:16] bigjools: and this makes sense to me, and was why I felt comfortable both handing out one the other day (though I put a caveat saying 'If I can' - which you and the other people on the MP seemed to think was appropriate) [10:16] - and handing out one today, because I have been doing little but watching the bigger picture [10:16] lifeless: as long as it's not your own branch I don't mind, but the RM should know about it regardless [10:17] bigjools: of course - note that the one I did of mine I had you as an explicit rc-review requested [10:17] so that you would [10:17] which was good :) [10:18] so concretely, you're happy that that particular branch was rc worthy, you're happy that you were in the loop, you're unhappy that I made that assessment *and* wrote the code. [10:25] I think it would be safe to say that most if not all of us would be uncomfortable with people giving RCs on their own code. [10:26] why? Given what RC /means/ [10:27] lifeless: the whole point is to have more than one set of eyes on the problem [10:27] thumper: not all problems are equal. Perhaps a definition of RC would help. [10:27] 'worth adding to the QA queue while the trunk is frozen' [10:27] thats it. Thats /all/ that rc means. [10:29] no, that's completely incorrect [10:29] ok. [10:29] please help me understand, because thats what the process documentation says it is [10:30] RC means we need to land something that is either fixing a critical breakage if we release with it, or adding something that is business critical [10:30] I think you have conflated two concepts: why we rc-approve, and what rc-approve *implies* [10:30] we rc-approve to fix critical breakage or address a business critical concern [10:31] rc approval *implies* that we'll have more QA coming in rather than just reducing the QA so that the release can happen [10:32] reasons *why* we do something are motivation. The impact and effect of doing it are risk and rewards. [10:32] assessing whether something should be rc approved is balancing the why - the desired reward - against the risk and the expected actual reward [10:33] yes, but the motivations to do it are as I stated. The person giving RC needs to assess how risky the change is. [10:33] and now, I am done talking about this. [10:34] ok [10:34] its annoying how chromiums search box is on top of the moin search box on the wiki [10:56] gnight y'all [10:56] * thumper throws in the towel [10:58] see y'all [11:58] Morning, all. [12:57] Yay! staging is back. Let's Q/A, people! === jtv1 is now known as jtv [12:58] Finally! [13:00] Of course that doesn't mean the code isn't ancient. === matsubara-afk is now known as matsubara === Ursinha-afk is now known as Ursinha [13:14] OOPS-1683S339 [13:14] Can I have a statement log for that? [13:16] It's really quick now. I guess I could just blame a cold cache. [13:33] wgrant: Did somebody get the statement log for you for that OOPS yet? [13:34] jelmer: Not as yet.
[13:35] OOPS-1683S348 also concerns me, and is still happening. [13:36] (I'm trying to QA one of my revs) [13:38] wgrant: I'll get them to you in a sec. [13:38] jelmer: Thanks. [13:40] stub: still around? I have problems with the PG 8.4 upgrade [13:40] wazzup? [13:41] stub: I installed the 8.4 packages, did the config changes then ran utilities/launchpad-database-setup [13:41] and I get: [13:41] createuser: could not connect to database postgres: [13:41] after it's done the config change [13:41] the server is running AFAICS [13:41] There is another error message after the one you quoted (the reason it couldn't connect) [13:42] could not connect to server: No such file or directory [13:42] That makes it sound like it isn't running. You shut it down, edited the configs, restarted, yes? [13:42] /var/run/postgresql/.s.PGSQL.5432 is not there, but /var/run/postgresql/.s.PGSQL.5433 is [13:42] yep [13:43] your port instructions on the wiki page conflict with the ones in that email BTW [13:43] So look in /var/log/postgresql/postgresql-8.4-main.log - there should be an error message in there saying why it isn't running. [13:44] it's running ok [13:44] just using port 5433 from the looks of it [13:44] but the client expects 5432 [13:44] Yes, which is why we edited the config files to make 8.4 listen on 5432 [13:45] gar [13:45] missed that bit :( [13:45] ;) [13:46] https://dev.launchpad.net/DatabaseSetup needs updating then [13:46] it says to change 5432 into 5433 [13:47] stub: aha, I see what's going on [13:47] launchpad-database-setup is changing the port to 5433 [13:48] bigjools: Look closer - that section is about upgrading, so it matches my email (edit pg8.3's config to use a different port, so 8.4 can use 5432) [13:48] so I hadn't missed it :) [13:48] ah ok [13:49] launchpad-database-setup is buggering things up though [13:49] So the wiki page documents reconfiguring the old version before installing the new packages, and my email documents reconfiguring after installing the new packages. [13:49] Dunno why launchpad-database-setup would be doing that :-P [13:50] maxb, ping [13:50] jelmer: pong, but about to disappear for lunch [13:50] bigjools: Did you edit your 8.3 postgresql.conf too? [13:50] oh, was just replying to bigjools' email [13:50] maxb: ah, ok [13:50] bigjools: And bounce 8.3 per the email? [13:50] maxb: I was just wondering if you might be able to have a look at my open meta-lp-deps reviews [13:50] stub: yes, I took it down [13:51] entirely [13:51] And the config has been edited to say port=5433? [13:51] maxb: in preparation for uploading a new version of meta-lp-deps, but from what I read now that process already seems to be in motion.. [13:51] bigjools: The 8.3 config I mean [13:51] stub: no, I left it alone since I didn't intend to start it up (I wanted to purge the configs but the package dependencies foiled me) [13:52] Because it looks like launchpad-database-setup rebuilds the 8.4 cluster, and if the postgresql tools see another cluster configured to use a port, they will choose another. [13:52] yay :/ [13:52] jelmer: I'll look. AFAIK the current blocker is unresolved questions about what to do w.r.t. pocket-lint and rabbitmq on hardy [13:52] * maxb gone [13:52] maxb: pocket-lint should no longer be an issue (we've uploaded a package for hardy), not sure about rabbitmq [14:09] bigjools: Are you all settled on pg8.4 now? [14:10] maxb: I am :) [14:10] 8.3 and its configs are history [14:10] When's production moving to Lucid? [14:10] RSN [14:10] What's the ordering here? => pg 8.4, then later => Lucid?
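As an aside, the port swap stub describes above boils down to two config edits. The paths below assume Ubuntu's standard per-cluster layout; this is a sketch of the arrangement, not a transcript of what was actually run:

    # /etc/postgresql/8.3/main/postgresql.conf - shunt the old cluster aside
    port = 5433

    # /etc/postgresql/8.4/main/postgresql.conf - the new cluster takes the default
    port = 5432

    # Then restart both clusters, e.g.:
    #   sudo /etc/init.d/postgresql-8.3 restart
    #   sudo /etc/init.d/postgresql-8.4 restart
    # so that clients expecting port 5432 reach the 8.4 server.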
[14:11] jelmer: you saw my needsinfo review of https://code.edge.launchpad.net/~jelmer/meta-lp-deps/chardet-dep/+merge/31260, right? [14:11] maxb: Yep, thanks [14:12] Also, in the spirit of Robert's email, I should point out that no one has said I can review meta-lp-deps merges. Though in this case I _know_ that the process is that there is no defined process. [14:14] maxb: That's a good point. You had been doing reviews (and are the registrant of lp:meta-lp-deps) so I assumed that was the proper procedure. [14:15] https://dev.launchpad.net/LaunchpadPpa says "Exercise personal judgment on whether your change merits a merge proposal, or is sufficiently trivial to just be committed directly." -- and I wrote that [14:17] maxb: That doesn't say anything about who should do the review :-) [14:17] That seemed in keeping with the fact that I wasn't given any requirements to seek review of any kind when I was issued with ~launchpad/+archive/ppa upload permissions [14:17] It doesn't even say there should be a review, just an MP :P [14:18] maxb: Anyway, these changes are quite simple but I would prefer it if somebody else could give them a quick glance regardless. There are two MPs left. [14:19] wgrant: Well, a proposal implies that there is going to be acceptance or a declination somehow. [14:22] danilos, jtv: POTMsgSet.submitSuggestion does not sanitize translations. Is that intended? [14:22] * henninge vaguely remembers something like that. [14:23] henninge, what does "sanitize" translations mean? calling the old "sanitizeTranslations"? :) [14:23] henninge: for setCurrentTranslation it was a documented assumption, but I don't think we discussed it for this one. [14:23] danilos: No, calling the new code. [14:23] henninge, oh, right, we've split that out already, right? [14:24] henninge, jtv: I don't think it should sanitize input [14:24] danilos: yes, I did that in a hotel room in Recife ;-) [14:24] henninge, jtv: it's something that callsites should do [14:24] henninge, when it was four of us squeezing together? oh yes, I remember [14:24] danilos: ok, so in tests I should make sure to pass sanitized input. [14:24] henninge, put that into a factory method ;) [14:25] The problem IIRC was that "sanitized" was actually relative to the input medium - different de-indentation requirements for browser and import, etc. [14:25] henninge, or simply into translations test helpers [14:25] jtv, exactly [14:25] Then, there's validation. [14:25] jtv, also, we store invalid translations as suggestions as well [14:25] :) [14:25] But we don't make them current, right? [14:26] Although that can still happen if a template update adds a flag, I guess. [14:28] jtv, yeah, that's an old bug [14:28] jtv, but unrelated to this :) [14:28] just saying :) [14:28] * bigjools -> lunch [14:39] bigjools, jelmer: Does soyuz.browser.build.CompleteBuild serve any purpose nowadays? AFAICT, we don't use its self._buildqueue_record attribute at all. [14:41] abentley: I'm not sure, I'm not very familiar with that class. [14:41] noodles775: Do you happen to know perhaps? [14:46] https://code.edge.launchpad.net/~maxb/meta-lp-deps/pg8.4/+merge/32200 <-- if someone has a moment [14:47] abentley, jelmer: It's caching the buildqueue_record with the batched builds, and seems to be used in soyuz/templates/builds-list.pt:96? [14:48] maxb: Reviewing [14:48] noodles775, ah. I missed that buildqueue_record method. (Which surely ought to be a property?) [14:49] Looks like it certainly should, but zope templates apparently don't care. [14:51] noodles775, it was causing me grief because it delegates to IBinaryPackageBuild, but setupCompleteBuilds was wrapping it onto SourcePackageRecipeBuilds.
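A minimal sketch of the property change abentley and noodles775 are discussing; the constructor signature below is assumed for illustration, not copied from lp.soyuz.browser.build:

    class CompleteBuild:
        """Hypothetical reconstruction: a build wrapper that caches its
        buildqueue record up front so templates avoid extra queries."""

        def __init__(self, build, buildqueue_record):
            self._build = build
            self._buildqueue_record = buildqueue_record

        @property
        def buildqueue_record(self):
            # Exposed as a property rather than a method; zope templates
            # tolerate either, but Python callers read more naturally.
            return self._buildqueue_record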
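And going back to henninge's sanitization question: since submitSuggestion leaves sanitization to callsites, a test helper along these lines keeps tests honest. The helper name, the sanitizing rules, and the submitSuggestion signature here are assumptions, not the real Launchpad code:

    def submit_sanitized_suggestion(potmsgset, pofile, submitter, translations):
        """Hypothetical test helper: sanitize inputs before calling
        POTMsgSet.submitSuggestion, which does not do it for us."""
        def sanitize(text):
            # Stand-in for the real sanitization code, which is relative
            # to the input medium; here we just normalize line endings
            # and strip trailing whitespace.
            return '\n'.join(line.rstrip() for line in text.splitlines())
        cleaned = dict(
            (index, sanitize(text)) for index, text in translations.items())
        return potmsgset.submitSuggestion(pofile, submitter, cleaned)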
[14:52] Who is the best person to ask to push for getting the launchpad ppa / hardy / rabbitmq situation resolved? Basically it needs packages copying from the internal "cat" archive [14:52] maxb, copied *to* the cat archive? [14:52] *from* [14:52] maxb, to what? [14:53] to the public launchpad PPA - the problem is that launchpad-dependencies depends on rabbitmq, which is satisfied via the cat archive, but that makes launchpad-dependencies uninstallable unless you have access to cat [14:55] maxb, I assume you mean on older distros, because I don't use the cat archive and I have launchpad-dependencies installed. [14:55] Correct, this is only an issue on hardy [14:55] maxb, elmo is the usual guardian of that archive. mars might also be interested. [14:58] maxb, I can file an RT for you. It is rabbitmq for hardy, right? Do we still support hardy in the PPA? I thought we skipped to the next LTS? [14:59] I believe there's an email from flacoste stating that we ought to support in the PPA whatever production is running [14:59] maxb: FWIW the landscape folks have a backport of rabbitmq to hardy in their PPA. [14:59] lifeless asked mthaddon about this on launchpad-dev@, but the thread died [15:00] maxb: the request has been filed, we're just working on other stuff (based on priorities that we've agreed with the LP devs) [15:00] mthaddon is usually swamped, and rarely checks mail unless it is an RT [15:00] iow, it should get done at some magical time in the future when we have time to do it :) [15:00] mthaddon: ok. That bit of info didn't make it into the public dev list :-) [15:02] Meanwhile, we can't/shouldn't update the launchpad-dependencies version in hardy in the PPA, unless we make a hardy branch that omits rabbit [15:23] sinzui, do you know what external services/scripts use xmlrpc.launchpad.net? [15:24] salgado, I do not [15:24] flacoste, do you? (^) [15:24] salgado: bazaar at least [15:27] yeah, I thought bazaar would be one of its users. I'm trying to find all of its users to see if we'd need it for vostok, in order to make sure https://code.edge.launchpad.net/~salgado/launchpad/layer-specific-navigation/+merge/32124 is not going to be a problem for us [15:27] james_w, btw, it'd be great if you could have a look at the cover letter there and tell me what you think [15:28] salgado, maybe your best bet is to ask a losa for the logs? [15:28] indeed, that should work. thanks jml [15:30] salgado: I think that if we do want xmlrpc to support any existing clients then we likely won't need to change the navigation [15:31] salgado: would it be feasible to have a vostokxmlrpc layer if we needed? [15:32] I think so === deryck is now known as deryck[lunch] [16:18] salgado, how do I look at the apidocs? [16:18] jml, apidocs.launchpad.dev [16:18] salgado, and that starts from "make run"? [16:19] yep [16:19] is there a way to do a static build of them? [16:19] jml, yes, gary has one at https://devpad.canonical.com/~gary/apidoc.launchpad.dev/++apidoc++/static.html [16:20] salgado, do you know how to make one of those? [16:22] hmm. it looks much better from the live site. [16:22] jml, I think gary pointed something like wget at apidocs.lp.dev/static.html to generate that [16:22] heh, although clicking "Book" makes an oops. [16:23] salgado, ok, thanks.
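For the record, a static apidoc snapshot of the kind gary made could plausibly be produced with something like the following; the exact flags he used are not recorded here, so treat this as a guess:

    wget --recursive --level=1 --page-requisites --convert-links \
        http://apidocs.launchpad.dev/static.html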
[16:25] do I need to do anything special to do LFA.read() in the testsuite? [16:25] james_w, maybe be in the librarian layer? [16:25] (I honestly have no clue) [16:25] It appears to be a problem with something about http_proxy, which is then masked into an error "raise LookupError, aliasID" [16:25] jml: yep, already hit that one [16:26] james_w, perhaps you could find out what PGPSignatureNotPreserved in test_uploadprocessor does and copy that [16:28] jml: it doesn't seem to do anything special. I suspect an environment issue. [16:29] james_w, good luck with that. sorry I couldn't help more. [16:29] * jml goes to purchase some consolation chocolate === salgado is now known as salgado-lunch [16:40] jelmer: Don't forget to 'bzr mark-uploaded' and push [16:42] an up-to-date launchpad branch is getting a deprecation warning from bzrplugins/builder ("please use 'debian' instead of 'debian_bundle'"). anyone know what's up? === Ursinha is now known as Ursinha-afk [16:50] leonardr: I'll guess that that's been triggered by upgrading to the newer python-debian in the launchpad ppa [16:51] hmm, no, it's nothing to do with http_proxy it seems. It seems like the librarian just doesn't have the file. Maybe there is a commit missing somewhere or something [16:51] james_w: you need to commit [16:51] or it doesn't write the file [16:52] * bigjools thinks commits are evil in tests :( [16:55] thanks bigjools === deryck[lunch] is now known as deryck === Ursinha-afk is now known as Ursinha [17:09] salgado, when i integrate the version of lazr.restful with your change into launchpad i get some test failures. can you take a look? === beuno is now known as beuno-lunch === salgado-lunch is now known as salgado [17:21] leonardr, sure, send them my way === bigjools changed the topic of #launchpad-dev to: Launchpad Development Channel | Week 4 of 10.08, Release Manager: bigjools | PQM is release-critical | firefighting: - | buildbot/pqm has been switched to watching the *lucid* builders | https://dev.launchpad.net/ | Get the code: https://dev.launchpad.net/Getting | On-call review in irc://irc.freenode.net/#launchpad-reviews | Use http://paste.ubuntu.com/ for pastes [17:24] salgado, http://paste.ubuntu.com/476022/ [17:24] any clues? [17:26] * salgado checks [17:27] leonardr, is that the only failure you got? [17:27] salgado: there were a bunch of failures afterward but they were all caused by spec not being defined [17:28] leonardr, right, so only this test failed, then? [17:28] salgado: there was also a failure in cache.txt but i don't think that was your fault [17:32] leonardr, apparently that's a bug which is only exposed when we export an entry with no fields/operations. I should be able to get it fixed quickly [17:32] salgado: lazr.restful bug? ok [17:32] yes [17:37] leonardr, got http://paste.ubuntu.com/476025/ when running buildout on a new lazr.restful branch. any idea why? [17:38] salgado: yeah, benji changed buildout to disallow 'picked' versions of the dependencies. they have to be explicitly named in versions.cfg [17:39] i must have had that one installed globally or something, so i missed it [17:39] add distributed = 0.6.14 to versions.cfg in the appropriate place and rerun === beuno-lunch is now known as beuno [18:26] leonardr, I'm seeing a failure on webservice-declarations.txt on trunk. is that known?
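To pin down the librarian gotcha james_w hit above: file content only reaches the librarian when the transaction commits, so a test needs an explicit commit before reading. A sketch, assuming the factory helper and layer names of this era (double-check against your tree):

    import transaction

    from canonical.testing import LaunchpadFunctionalLayer
    from lp.testing import TestCaseWithFactory


    class TestLibrarianRead(TestCaseWithFactory):

        layer = LaunchpadFunctionalLayer

        def test_read_needs_commit(self):
            lfa = self.factory.makeLibraryFileAlias(content='hello')
            # Without this commit the librarian never writes the file,
            # and reading it fails with the LookupError seen above.
            transaction.commit()
            lfa.open()
            self.assertEqual('hello', lfa.read())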
[18:28] leonardr, and http://paste.ubuntu.com/476043/ has a fix for the problem you encountered when running LP's test suite === matsubara is now known as matsubara-lunch [18:40] rockstar, ping, Tarmac question for you [18:41] mars, pong. Also, #tarmac if I'm not around. [18:41] oh, cool [18:41] Er, if I'm ever not around. [18:43] salgado: give me the test failure? [19:19] leonardr, http://paste.ubuntu.com/476065/ [19:20] leonardr, does the fix look ok to you? would you like me to create a merge-proposal for it? [19:21] o/` Mammal, mammal. Their names are called, they raise a paw, the bat, the cat, dolphin and dog, koala bear and hog! o/` [19:22] salgado, i think start should really be 0, so i don't know what's up [19:23] leonardr, doesn't it fail for you on a trunk branch? [19:23] i don't think so, let me try [19:24] salgado: no, i get no failures [19:25] you are at revno 137? [19:26] yes, 137. using python2.6 [19:29] weird === matsubara-lunch is now known as matsubara [19:32] * salgado starts a new trunk branch from scratch, just in case [19:32] someone might be interested in reviewing https://code.launchpad.net/~jml/launchpad/readme/+merge/32238 [19:32] (not that I can land it) [19:33] leonardr, it doesn't fail on this branch I started from scratch [19:34] s/branch/tree [19:34] flacoste, you might actually be interested ^^ [19:39] jml: reviewed [19:39] jml: not that i have any comments apart from "awesome!" [19:39] flacoste, thanks. I'll land that first thing Tuesday :) [19:39] jml: is that today or next week? [19:39] whenever PQM opens. [19:40] * rockstar heads to lunch [20:43] thanks jml [20:55] matsubara: I'm here === Ursinha is now known as Ursinha-lunch === Ursinha-lunch is now known as Ursinha-afk [22:12] Is there a way to tell what user I'm logged in as in the tests? I'm pretty sure I have the right permissions to get to these attributes, but for some reason the test is saying no. [22:13] getCurrentInteraction or something [22:13] it's the 'interaction' that defines 'who is logged in' [22:14] look in lp.testing._login [22:14] lifeless, great, thanks. [22:15] there's get_current_principal somewhere in canonical.launchpad.webapp [22:16] rockstar: there's also getUtility(ILaunchBag).user [22:16] if the two disagree, confusion can certainly result === matsubara is now known as matsubara-afk [22:16] mwhudson, yeah, I had your suggestion, and it seems to not be working. [22:17] * mwhudson afk [22:17] grah, I remember what I wanted leonardr for [22:18] nope, forgotten again [22:23] james_w, any time === salgado is now known as salgado-afk [23:05] when is the next staging update due? === Ursinha-afk is now known as Ursinha [23:13] lifeless: It looks like one might be in progress now. [23:13] Although it hasn't been trying to update regularly since it returned last night... [23:14] So the break in logging may not be indicative of an in-progress update, but just that someone hasn't switched the cron job on properly. [23:20] losa ping ^ [23:22] nice, edge is looking way better than prod now [23:22] mean 0.37 stddev 0.62 [23:22] mean 0.36 stddev 0.94 [23:23] wow, Public XML-RPC is terrible [23:23] Is that over all requests? [23:23] 8.36 4.48 [23:24] mwhudson ^ [23:24] What's public XML-RPC, except for lp: lookups? [23:24] wgrant: I'm not sure [23:24] but 8 seconds of db time ain't brilliant [23:24] db avg stddev [23:24] 7.49 2.61 [23:24] Ew. [23:24] for that group
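Back on rockstar's earlier question about the logged-in user: a quick way to interrogate it in a test, per lifeless's and mwhudson's suggestions (module paths as they were in this era; double-check against your tree):

    from zope.component import getUtility

    from canonical.launchpad.webapp.interfaces import ILaunchBag
    from lp.testing import person_logged_in


    def current_user():
        # Returns the Person the interaction says is logged in, or None.
        return getUtility(ILaunchBag).user

    # In a test, run the assertions as a specific person:
    #   with person_logged_in(some_person):
    #       assert current_user() == some_person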
[23:26] translations are also up there [23:26] 1.42 1.89 [23:26] compared to the overall app [23:26] problem pages [23:26] Archive:+copy-packages [23:26] Archive:+delete-packages [23:26] Archive:+index [23:26] Archive:+packages [23:26] I knew +delete-packages would be up there, but i didn't think it was that bad. [23:26] Also, is there a reason that the OOPS summaries are private? [23:26] BazaarApplication:+index [23:26] Don't they just contain page IDs, not URLs? [23:26] wgrant: they contain urls [23:26] but we can fix that [23:26] :( [23:26] please file a bug [23:26] Against what? [23:26] oops-tools? [23:28] yeah [23:29] BranchMergeProposal:+index [23:29] BranchSet:CollectionResource [23:29] BugTask:+create-question is -terrible- [23:29] 11.03 8.97 [23:30] only two hits, but *boy* were they doozies [23:30] I don't know how it's so bad, given that it's normally used on new bugs. [23:30] no, it's not [23:30] can't be, given the hit count [23:30] Hm. [23:30] BugTask:+distrotask [23:30] BugTask:+editstatus-page [23:30] I've hopefully fixed those. [23:30] Well, that and the heat fix. [23:31] BugTrackerSet:+index wow [23:31] 14.63 5.58 [23:31] It's unbatched and going to be selecting lots of tasks, I guess. [23:31] Builder:+history [23:31] 8.59 2.42 [23:31] It probably needs some prejoining for the new schema. [23:32] Distribution:+addquestion [23:32] that's probably search performance, which is fixed [23:32] Distribution:+archivemirrors [23:32] 7.76 4.18 [23:32] 99% in 19 seconds [23:32] bet you it's on the lpnet timeout summary [23:32] Distribution:+assignments [23:33] 15.98 7.77 [23:33] Distribution:+branches 5.44 2.76 [23:33] Distribution:+bugs 3.24 2.98 [23:34] Distribution:+bugtarget-portlet-bugfilters-stats 6.79 1.64 [23:34] Distribution:+bugtarget-portlet-tags-content 6.35 2.00 [23:34] Distribution:+cdmirrors [23:35] Distribution:+filebug - again likely search and thus fixed. [23:35] Distribution:+filebug-show-similar ditto [23:35] Distribution:+patches [23:35] Distribution:+ppas [23:36] Distribution:+questions [23:36] Distribution:+search [23:36] DistributionSourcePackage:+addquestion - again, hopefully fixed [23:36] DistributionSourcePackage:+filebug-show-similar ditto [23:37] DistributionSourcePackage:+questions ditto [23:37] DistroArchSeries:+builds 3.50 4.04 [23:37] DistroArchSeries:+index 2.39 3.10 [23:38] mwhudson: hi [23:38] 10:22 < lifeless> mean 0.36 stddev 0.94 [23:38] 10:23 < lifeless> wow, Public XML-RPC is terrible [23:38] 10:23 < wgrant> Is that over all requests? [23:38] 10:23 < lifeless> 8.36 4.48 [23:38] mwhudson: ^ page performance stats [23:39] DistroSeries:+builds 6.47 4.50 [23:39] DistroSeries:+cve 19.64 0.00 [23:39] lifeless: got a link?
[23:39] https://devpad.canonical.com/~stub/ppr/lpnet/latest-daily-combined.html [23:39] lifeless: i suspect that's actually private xml-rpc stuff [23:40] getJobForMachine does a good job of contending with itself [23:40] DistroSeries:+needs-packaging 12.93 7.63 [23:40] DistroSeries:+templates 18.53 0.73 [23:40] LibraryFileAlias:StreamOrRedirectLibraryFileAliasView 12.45 9.55 [23:41] yeah, the regex for private xml-rpc looks wrong [23:41] MaloneApplication:+bugs 7.09 3.19 [23:41] Person:+editproposedmembers 12.22 6.93 [23:42] it's a little worrying how fast I'm getting at eyeballing 99% on this thing [23:42] Person:+patches 17.61 0.00 [23:43] Product:+filebug-show-similar search again [23:43] ProductSet:+project-cloud 6.22 4.31 [23:43] ProjectGroup:+milestones 4.87 5.94 [23:43] project-cloud _really_ should be in memcache [23:43] or something [23:43] Question:+confirm 4.87 6.75 [23:44] Question:+edit 6.66 6.87 [23:44] mwhudson: we need the _miss_ time to be subsecond [23:44] mwhudson: after that we can talk caching [23:44] if we need to memoise it to make it subsecond we should do that, but it should be persistent, no? [23:44] lifeless: maybe generated daily and stuffed somewhere that the website can access, then? [23:45] doing it 'in-line' in the appserver is fairly bonkers [23:45] mwhudson: agreed [23:45] Question:+makebug 4.61 7.73 - probably search [23:45] <_mup_> Bug #4: Importing finished po doesn't change progressbar [23:45] RootObject:+recently-changed-branches 6.68 3.05 [23:45] RootObject:+recently-registered-branches 5.08 2.30 [23:46] TranslationGroup:+index 6.82 5.10 [23:46] -fin- [23:46] if we fix those to lower their 99% point, we can lower the timeout some more [23:51] lifeless: the branch ones shouldn't be too hard [23:51] lifeless: I've not looked at them [23:52] thumper: sure, I'm looking forward to that now that we've got edge OOPSes really under control
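On the project-cloud idea, the shape lifeless and mwhudson are converging on is roughly: compute the cloud offline (or on first miss), park the result in memcache with a long expiry, and have the appserver only ever read it. A sketch using python-memcached, with the key name and expiry invented for illustration:

    import memcache

    mc = memcache.Client(['127.0.0.1:11211'])

    PROJECT_CLOUD_KEY = 'project-cloud'  # hypothetical cache key


    def get_project_cloud(compute_cloud):
        """Return the cached cloud data, regenerating at most daily."""
        data = mc.get(PROJECT_CLOUD_KEY)
        if data is None:
            # Miss: ideally this runs in a cron job rather than in the
            # appserver, so the in-line cost is never paid on a user request.
            data = compute_cloud()
            mc.set(PROJECT_CLOUD_KEY, data, time=86400)
        return data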