[00:20] Purple buildbot
[00:21] Exception?
[00:21] Yeah
[00:21] subunit corruption
[00:25] wgrant, wallyworld_: http://instagram.com/p/RYnio1s6gf/
[00:26] yeah, saw that. i don't like instagram
[00:26] i can't see what all the fuss is about
[00:26] why ruin perfectly good photos with crappy filters
[00:27] wallyworld_: Yes, but you don't like anything that's social. And I'm getting off your lawn now.
[00:27] how is ruining a good photo being social?
[00:27] We may have a dogfood DB in about 20 minutes, finally...
[00:27] Facebook bought out Instagram, which I thought was your issue.
[00:27] However, I was talking about the content, not the platform hosting the photo. :-)
[00:28] The issue with Instagram is that it's the worst idea ever
[00:28] *and* it's "social"
[00:28] Note to self: Do not share hurricane photos with workmates.
[00:29] StevenK: sorry, i saw instagram and my radar went off
[00:29] StevenK: did you see the one of the crane?
[00:29] wallyworld_: Nope
[00:30] StevenK: it was on this morning's news - a huge apartment building, still under construction - the crane is folded in half and dangling in the wind
[00:30] http://www.smh.com.au/photogallery/environment/weather/hurricane-sandy-strikes-us-coast-20121030-28g5z.html?selectedImage=3
[00:30] That one?
[00:30] yeah
[00:31] ... holy ...
[00:31] the latest pictures have it swinging in the "breeze"
[00:31] those apartments are selling for $90M
[01:50] Hi everyone , I get this channel from here https://help.launchpad.net/
[01:51] currently I have a question , I want to get people list of launchpadlib
[01:51] but I only get 50 persons
[01:51] so I do not know how to to
[02:07] StevenK: DF is back, and timing out merrily
[02:44] wgrant: what's the difference between spr.upload_archive and spph.archive? are these related?
[02:45] Loosely
[02:45] spph.archive == Archive it is published in, spr.upload_archive == Archive it was first published in, so they can be different.
[02:46] right, ok. thanks
[02:46] s/first published in/first uploaded to or seen in/
[02:46] It wasn't necessarily ever published there.
[02:46] roo bad spph.archive is not denormalised ontop spr
[02:46] too
[02:46] wallyworld: That doesn't make sense
[02:46] It can't be.
[02:46] Since an SPR can be published in an unbounded number of different archives
[02:46] Or even different series in the same archive
[02:46] ok
[02:54] when I use the method PersonSet.find()
[02:54] it told me AttributeError: type object 'PersonSet' has no attribute 'find'
[02:58] wallyworld: http://www.smh.com.au/photogallery/environment/weather/hurricane-sandy-strikes-us-coast-20121030-28g5z.html?selectedImage=1
[02:59] wgrant: these 2 queries should produce same result, but 2nd one faster. can you try on df for me when you have a moment? https://pastebin.canonical.com/77405/
[02:59] StevenK: that's a lot of water
[03:00] enginespot: find() is exported, so perhaps you can paste your code for us to look at
[03:00] wallyworld: http://www.smh.com.au/photogallery/environment/weather/hurricane-sandy-strikes-us-coast-20121030-28g5z.html?selectedImage=10 is pretty impressive too
[03:00] yes indeed
[03:05] my code like the follows:
[03:05] people=PersonSet.find()
[03:05] # for project in pad.projects:
[03:05] # print project.name
[03:05] people = pad.people
[03:05] i = 0
[03:05] j=0
[03:05] for person in people:
[03:05] i+=1
[03:05] ppas = person.ppas
[03:05] # r.set(person.name, 1)
[03:06] for ppa in ppas:
[03:06] j += 1
[03:06] r.set(j, ppa)
[03:06] I want to get a person list
[03:07] or launchpadlib can supply to me a pointer , I can read a person one by one
[03:09] enginespot: You've been told by two different people that what you want isn't possible. Asking different people isn't going to change the answer.
[03:09] yes , I find a different interface from launchpadlib
[03:27] StevenK: what was the strategy going to be to stop the mixed visibility error oopses? what we do now to show seems reasonable? i can't recall what the end game was to be
[03:28] I've can't recall off the top of my head, maybe wgrant can.
[03:31] It's complicated™
[03:31] Ignore for now
[03:51] wgrant: i made a mistake on the sql above, here's a fixed version https://pastebin.canonical.com/77410/
[03:54] wallyworld: Ah, sorry, distracted by secure boot destroying the world
[03:54] Will run in a sec
[03:54] no hurry
[03:54] i'm not sure if we need the filter on the spph subquery
[04:00] wallyworld: So removing the persistent cache fixed the ec2 issue?
[04:01] wgrant: yeah, thanks. my mind was telling me it wasn't persistent when clearly it was
[04:01] Great
[04:01] if you want to look at my soyuz mp, it's up.
[04:02] I've glanced at it and opted to try to finish my current stuff before diving into it, unless you're in a hurry
[04:03] wallyworld: Is this the +ppa-packages query?
[04:03] oh, no not at all. just mentioning it in case you hadn't noticed
[04:03] yes
[04:03] Right, so you've changed the semantics slightly
[04:03] But probably not in a way that anyone cares about
[04:03] i can't see why it was joining to spph when it didn't need to
[04:03] And it's now much faster
[04:03] It did technically need to, for the old definition of the package
[04:03] the semantics should be the same
[04:03] s/package/page/
[04:04] ... oh, I see
[04:04] You're right
[04:04] since there's a clause that says spr.uploaded_archive = archive.id
[04:04] So it joined Archive against SPR.upload_archive anyway?
[04:04] yes, so it seems
[04:04] Right, so that SPPH join is completely unused?
[04:04] as far as i can tell
[04:04] but it's there to eliminate spr that haven't been published?
[04:05] Ah, yes, of course
[04:05] so i did it as a subquery
[04:05] so the semantics should be the same
[04:05] Right, I see the subquery now
[04:05] Missed it because of the indentation
[04:05] Um, well
[04:05] It's slightly different
[04:05] it was quicker?
[04:05] Heh
[04:05] heh
[04:05] heh
[04:05] The subquery one is still running...
[04:06] oh :-(
[04:06] i would have thought postgres would have optimised that
[04:07] I'm not sure why it didn't here.
[04:07] But rewording it as an EXISTS is a bit cleaner and works fine
[04:08] i did it as an exists originally, but thought it too verbose
[04:08] It's about 30% faster for me
[04:08] ok, will change it
[04:08] using AND EXISTS (SELECT 1 FROM sourcepackagepublishinghistory spph WHERE spph.sourcepackagerelease = sourcepackagerelease.id AND spph.archive = sourcepackagerelease.upload_archive)
[04:08] but is it faster than the original join?
[04:08] Although the spph.archive constraint is new; it's not in the original SPPH-joining query
[04:08] Right
[04:08] 30% faster than the original query for me
[04:08] how much faster?
[04:08] Trying with pathological cases now
[04:08] ok
[04:08] 35ms vs 24ms
[04:09] the oops should it taking a lot longer
[04:09] showed
[04:09] Well, I'm sort of the optimal case for the query
[04:09] try with fta
[04:12] 14s -> 8s with fta
[04:13] to slow :)
[04:14] well, it's still almost 50% improvement :-)
[04:14] and thats on DF
[04:15] prod won't be significantly faster for this
[04:15] might be ok on prod
[04:15] i'm not sure hoe else to make it faster
[04:16] Well
[04:16] wallyworld: yes its a lot better, but prods cpu isn't /that/ much better than dogfoods; once you're in cache...
[04:16] The general approach for this sort of problem is to not be stupid in the first place
[04:16] Sadly we have history
[04:16] So we can't really decide to not be stupid in the first place
[04:17] But, in general, trying to calculate this sort of stuff out of the live tables is a terrible idea.
[04:17] agreed
[04:17] We could perhaps redesign the page
[04:17] Such that it returns recent PPA publications created by the user
[04:18] Why do you need to chain through SPR anyway?
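(Editor's sketch: the EXISTS-based "has this been published?" check quoted at [04:08], demonstrated against a toy two-table schema. Table and column names follow the conversation, but only the columns it mentions are modelled; the real Launchpad tables are much wider, so this is illustrative only.)

```python
import sqlite3

# Toy schema with only the columns relevant to the discussion above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sourcepackagerelease (
    id INTEGER PRIMARY KEY, name TEXT, upload_archive INTEGER);
CREATE TABLE sourcepackagepublishinghistory (
    id INTEGER PRIMARY KEY, sourcepackagerelease INTEGER, archive INTEGER);
INSERT INTO sourcepackagerelease VALUES
    (1, 'published-spr', 10),  -- published in its upload archive
    (2, 'rejected-spr', 10);   -- uploaded, but never published
INSERT INTO sourcepackagepublishinghistory VALUES (1, 1, 10);
""")

# The publication check written as an EXISTS, as in the chat; the
# rejected SPR has no SPPH row, so it is filtered out.
rows = conn.execute("""
    SELECT spr.name FROM sourcepackagerelease spr
    WHERE spr.upload_archive = 10
      AND EXISTS (SELECT 1 FROM sourcepackagepublishinghistory spph
                  WHERE spph.sourcepackagerelease = spr.id
                    AND spph.archive = spr.upload_archive)
""").fetchall()
print(rows)  # → [('published-spr',)]
```

As noted at [04:08], the `spph.archive = spr.upload_archive` constraint is a slight tightening relative to the original SPPH join.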
[04:18] i'm not sure of the requirements or history of what's required
[04:18] StevenK: not sure, that's how it was
[04:18] It's not chaining through SPR
[04:18] It's entirely based on SPR
[04:18] Ew
[04:19] It doesn't care about publications, beyond the "has this been published?" check to get around rejections
[04:21] with the CPU issue, that assumes the query will be in cache, but i would have thought this page would nearly always be loaded cold
[04:21] The involved tables total only 10GB or so
[04:21] And some of that is probably (hopefully) TOASTed
[04:21] I wonder if SRF and batching will help
[04:22] No
[04:23] SRF helps only if we can calculate an ordered batch substantially more quickly than we can calculate the entire set
[04:23] Which requires sensible schema design and indexing
[04:23] i may as well land this small change till we figure out what to do next
[04:24] since it helps as is
[04:25] No
[04:25] It probably doesn't help significantly, and it may regress other cases
[04:25] It's liable to turn the subquery into a 4-6s seqscan if the wind blows the wrong way
[04:26] what would cause that to happen?
[04:26] If it decides that there will be too many SPRs to efficiently use an index lookup
[04:27] but don't we batch so that we only ask for 75 or so at once?
[04:28] We only ask for 75 at once, sure
[04:28] i guess it still needs to know the number
[04:28] But it's an ordered set
[04:28] And the order is sufficiently complex and layered that it can't be indexed
[04:28] yeah, sadly
[04:28] So it has to calculate the entire set
[04:28] Sort it
[04:28] Then take the first 75
[04:29] are you sure it could become a seqscan?
[04:29] Yes, I've seen it happen here with fta once
[04:29] and the join avoids that?
[04:30] Somehow, yeah
[04:30] The join has to be applied after the conditions
[04:30] So for the join it doesn't have to optimally determine the condition order to work out numbers
[04:31] above you said for fta it went from 14->8
[04:31] and he is a bit of a corner case
[04:31] And then it went to 16 or so a couple of times, as it chose a different plan for reasons which are unclear
[04:32] ah, bollocks, ok
[04:32] i could use a CTE perhaps
[04:32] to narrow down the spr records
[04:32] before checking for publiscation
[04:45] wgrant: is this any faster? https://pastebin.canonical.com/77411/ (when you have a moment)
[04:46] I suspect not, but let's see
[04:47] still going...
[04:47] :-(
[04:49] wallyworld, wgrant: https://code.launchpad.net/~stevenk/launchpad/sensible-superseded-by/+merge/132009
[04:49] maybe i need to add an array column to spr - "archives_published_in"
[04:49] that would eliminate the join
[04:50] That wouldn't help significantly.
[04:50] It would eliminate the seqscan, but leave the big performance issue
[04:51] any other ideas?
[04:51] I'm trying to use window functions to do it
[04:51] that don't involve a schema redesign
[04:52] StevenK: typing a blueprint name seems unfortunate.
[04:52] maybe a picker?
[04:52] wallyworld: So you missed that part of the call when Curtis said don't write a picker?
[04:53] must have
[04:53] sometimes words cut out
[04:53] or whole sentences with curtis speaks
[04:53] Hm
[04:53] Curtis did say that :)
[04:53] i would hate to type a blueprint name
[04:54] Copy and paste, hit continue
[04:54] Move on
[04:54] guess so. seems very primative
[04:56] wallyworld: Better or worse than a dropdown with 6200 items?
[04:56] that's why a search based solution like a picker is best
[04:57] not sure why it was rejected
[04:58] Oh good god https://twitter.com/raywert/status/263102070989680640/photo/1/large
[04:58] StevenK: in validate, fetch the spec and put it in the data map. then do not fetch it again in the submit
[04:59] wallyworld: I wasn't sure if I could do that.
[04:59] StevenK: yeah, the data dict is just passed arounf during the submit
[05:00] run the tests just to be sure your implementation is all good
[05:00] I've not run all the blueprint tests, but xx-superseding{,-within-projects}.txt pass
[05:01] that should be enough to send to ec2 with
[05:01] Pfft, I was going to play buildbot bingo
[05:02] I already lost once today, though
[05:02] StevenK: also
[05:02] 192 + if result is None:
[05:02] 193 + return result
[05:02] rs.one() should be enough
[05:02] i think?
[05:03] I thought that would die horribly?
[05:03] let me check the code
[05:03] yes, rs.one() works
[05:04] it returns None if rs is empty
[05:04] It returns None, the sole value, or raises an exception
[05:04] For 0, 1 and >1 results respectively
[05:05] Yeah, I've changed it and checked it in iharness
[05:06] StevenK: r=me, gotta pick up kid from school
[05:11] blah
[05:11] PARTITION BY seems to always want to traverse the whole set
[05:16] I guess it's not smart enough to realise that the inner and outer sorts match
[05:17] eg. SELECT id, purpose, row_number() OVER (PARTITION BY purpose ORDER BY id) FROM archive ORDER BY id LIMIT 5; should just be able to walk down the id index until the fifth row
[05:17] But it actually grabs the full table due to the PARTITION :/
[05:19] Ah, it wants them sorted by purpose, id
[05:19] Perhaps it doesn't want to have to remember the latest row_number for each purpose
[05:20] So there's no efficient way to do the DISTINCT ON on the server
[05:21] The quickest solution is probably to ask for a reasonable number and do the distinct in Python :/
[05:39] wgrant: so removing the distinct on from the query will make it fast?
[05:39] wallyworld: Well, with an index on (creator, dateuploaded DESC, id)
[05:39] You can then do a direct indexed query
[05:40] If you ignore the DISTINCT ON bit
[05:40] The page tries to only show the latest SPR for each (distroseries, archive, sourcepackagename)
[05:40] i guess i could iterate the result set till i have the requisite number of records
[05:40] And that is the bit that is slow to implement in SQL
[05:40] Bleh, sinzui didn't use rollback
[05:41] s/in SQL/in Postgres
[05:41] We can do that using DISTINCT ON or a rank() with PARTITION BY, but both of those require that it be sorted by the partition/distinct first
[05:41] s/in SQL/in SQL in postgres/, but yes
[05:41] so i assume without checking yet that we don't have the index
[05:42] We don't yet, indeed
[05:42] would adding it make the distinct faster?
[05:42] No
[05:42] ok, we can add it live too then i think
[05:42] Indeed
[05:43] * StevenK tries to work out how to QA r16207
[05:43] oh well, thanks for experimenting with it
[05:43] too bad it doesn't want to play nice
[05:43] StevenK: librarian, codehosting, regret that you didn't wait until we had builders
[05:43] basically
[05:44] wgrant: So push and pull a branch, but I'm not sure how to fiddle with the librarian
[05:44] click link to librarian file
[05:44] see if link to librarian file works
[05:45] upload file to l[ibrarian
[05:45] see if librarian file works
[05:55] Bleh, can't push to lazr.restful on qas due to bzr: ERROR: Server sent an unexpected error: ('error', 'NotBranchError', 'Not a branch: "chroot-67793680:///+branch-id/43364/".') which I guess is stacking
[05:55] +junk is your friend
[05:57] Excellent, all four look good
[06:19] Hi everyone
[06:19] how to get ppa from a project
[06:28] enginespot: Projects don't have PPAs. People do.
[06:49] wgrant: do you think that without the distincts, it's best to do the subquery for the spph check or stick to a big join?
[06:51] wallyworld: You'll need to do the subquery
[06:51] cool, thanks. that's what i've done
[06:51] just wanted to check
[06:52] i also need an index on maintainer
[06:52] since another query filters on that
=== yofel_ is now known as yofel
=== Ursinha-afk is now known as Ursinha
[08:23] rick_h_, you actually on irc?
[08:23] deryck: rgr
[08:24] rick_h_, there's a hacking room all the way at the back of the hall. right side of the hall as you go back.
[08:24] rick_h_, in case you need a place between sessions.
[08:24] deryck: cool thanks. hacking out in the main area from yesterday atm
[08:24] rick_h_, ok, might come say hello when I get coffee and chat through a couple things.
[08:46] good morning
[08:49] morning adeuring
[08:49] morning, adeuring
[08:49] hi ric, deryck
=== almaisan-away is now known as al-maisan
=== benji is now known as Guest93766
[10:53] dpm: O HAI
[10:53] dpm: Can you look at production in terms of raring translations?
[10:53] StevenK, hey people at UDS say hi, you were on the projector
[10:53] Haha
[10:53] ajmitch says hi, and go to sleep :)
[10:53] Pft, it's only 10pm
[10:53] He can go to sleep.
[10:54] lol
[11:00] StevenK, raring translations look good to me in production - only the contributors column has gone down in terms of number of contributors for each language comparing q to r - any ideas why that could have happened?
[11:02] None, I'm afraid
[11:02] StevenK: I'm hardly going to go & sleep at noon
[11:02] ajmitch: Pft. You fail at jetlag, then.
[11:09] StevenK, so the contributor numbers is the only issue I see, otherwise translations look good. Could you investigate what caused the decrease in number of contributors?
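(Editor's sketch of the "ask for a reasonable number and do the distinct in Python" idea from [05:21], combined with the page behaviour described at [05:40]: given rows already ordered newest-first by the indexed query, keep only the first row per (distroseries, archive, sourcepackagename). Row tuples and values below are hypothetical.)

```python
def latest_per_key(rows, key, limit):
    """First-seen dedup over an already-ordered sequence: a client-side
    stand-in for Postgres's DISTINCT ON, stopping once `limit` distinct
    rows have been collected (e.g. one batch of 75)."""
    seen = set()
    out = []
    for row in rows:
        k = key(row)
        if k not in seen:
            seen.add(k)
            out.append(row)
            if len(out) == limit:
                break
    return out

# Hypothetical rows: (distroseries, archive, sourcepackagename, dateuploaded),
# assumed already sorted newest-first by the query.
rows = [
    ("raring", "ppa1", "foo", "2012-10-30"),
    ("raring", "ppa1", "foo", "2012-10-29"),  # older duplicate, dropped
    ("raring", "ppa1", "bar", "2012-10-28"),
]
print(latest_per_key(rows, key=lambda r: r[:3], limit=75))
```

Unlike the server-side `DISTINCT ON`/`rank() OVER (PARTITION BY ...)` forms discussed at [05:41], this never needs the set re-sorted by the partition key.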
=== benji___ is now known as benji
=== Ursinha is now known as Ursinha-afk
=== al-maisan is now known as almaisan-away
=== almaisan-away is now known as al-maisan
=== Ursinha-afk is now known as Ursinha
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
=== matsubara is now known as matsubara-lunch
[14:58] sinzui, are you here?
[14:58] I am
[15:01] Hi! Long time no see.
[15:01] sinzui: Laura mentioned that you'd been busy opening translations, and were having some trouble with Blender's uploads.
[15:01] (As two separate issues)
[15:01] blender was broken before we started opening
[15:02] The two things I wanted to ask are:
[15:02] (1) Need any urgent help with the blender issue? I may even be able to meet up physically with them in the coming days or weeks.
[15:02] (2) Maybe we should try skipping the whole translations-copying step for S.
[15:04] please help with blender, the users do not understand lp -- they are not configuring translations as I suggest and they do not believe that open source communities will hate them if they setup a separate bug tracker
[15:04] translations for raring are in qa now. We might enable them in 12 hours
[15:05] jtv, this is the first opening where there there no errors in the db or scripts for translations...
[15:05] a non-event that we don't know how to document
[15:14] sinzui: that is fantastic — documentation should just show date done, steps taken & time spent. The way we always hoped it would become.
[15:14] Can you give me a quick summary of what's wrong with the blender setup?
[15:15] 1 they did not setup the right series.
[15:15] 2. they did not setup the branch form the series
[15:15] 3. yesterday, they had still not set the sync to import templates
[15:15] * jtv looks at blender's series
[15:16] yes, we both can do it, but I think the suer should do it show that he knows ho to change it
[15:16] I agree.
[15:16] and he has set pots and pos I see
[15:16] Ouch — a separate “translations” series
=== al-maisan is now known as almaisan-away
[15:16] but still note mplates
[15:17] still no templates
[15:17] but I don't know when this change was made
[15:18] sinzui: this is https://launchpad.net/blender ? I see a template.
[15:20] jtv https://translations.launchpad.net/blender/2.6x/+templates has no templates, which is what he was trying to do
[15:20] Ah!
[15:20] oh, the tree is scons an deep
[15:20] is this intl tools?
[15:20] But there's nothing in the upload queue waiting for review.
[15:21] I don't see any pots or pos
[15:24] I'm not finding it either.
[15:25] Maybe they want to get Launchpad to extract the pots. But are we still running that?
[15:26] And it doesn't look as if strings are even marked for translation…
[15:35] jtv: http://wiki.blender.org/index.php/Dev:Doc/How_to/Translate_Blender
[15:36] Ah, I was getting the branch
[15:42] sinzui: not having much luck running their tools… but it doesn't sound as if there's very much I can do to help. :/
[15:45] yuck
[15:46] I think this is a case where we advise them to stick with the import of translation branch, and work closer with the upstream community
[15:50] Wait... these aren't the upstream people?
=== matsubara-lunch is now known as matsubara
=== Ursinha is now known as Ursinha-afk
=== Ursinha-afk is now known as Ursinha
[17:33] jcsackett, would you have time to review https://code.launchpad.net/~sinzui/launchpad/hide-question-comment/+merge/132177
[18:04] sinzui: sure.
[18:11] sinzui: r=me. thanks for the fix. :-)
[19:45] jcsackett, how goes bug 164530
[19:45] <_mup_> Bug #164530: User translations showing broken links <404> < https://launchpad.net/bugs/164530 >
[19:46] sinzui: it died in ec2 when i sent it out to land on thursday. i sent it out to ec2 again earlier today when i caught up with email and realized it had never landed and i had no ec2 results.
[19:46] :(
[19:47] sinzui: it's been out for 3h45m. should get a (hopefully) ok result soon, so barring pqm issues it will be qa-able before i EoD.
[19:47] * jcsackett knocks on all available wood surfaces
[19:49] jcsackett, baring that, I would run all lp.translations tests and submit if they all pass
[19:49] jcsackett, what bugs are you looking at now?
[19:50] sinzui: bug 798954
[19:50] <_mup_> Bug #798954: InvalidProductName: Invalid name for product: bookmark:galapagos. < https://launchpad.net/bugs/798954 >
[19:50] we discussed it the other day, i'm trying to replicate the error now.
[19:51] hmm
[19:52] jcsackett, Do we have any modern oopses
[19:52] jcsackett, https://oops.canonical.com/oops/?oopsid=OOPS-1df2ed046d05000215e8d42933e44934
[19:53] sinzui: what method are you using to search the OOPS DB? you seem *much* faster at it than me.
[19:54] jcsackett, I was already 2F logged in. I went to the lp production page, opened the latest report and search for InvalidProductName, got nothing, then url hacked to the day earlier, and repeated the search
[19:55] sinzui: ah.
[19:55] chromium also remembers the search so I only type it once
[19:56] I think I get these often when I forget how to push to lp://qastaging/ , but I don't see the oopses
[20:00] it's easy to reproduce manually on dev; i just had to spend some time getting codehosting working in my lxc.
[20:04] Maybe I should never switch to lxc
[20:05] this oops might be more interesting. https://oops.canonical.com/oops/?oopsid=OOPS-a3ef3a2868f310958e03998c18758fdb I don't think the user ever figured our how to push a branch
[20:17] sinzui: does look like they were having problems.
=== Ursinha is now known as Ursinha-afk
[22:24] wgrant, https://bugs.launchpad.net/launchpad/+bug/408585
[22:24] <_mup_> Bug #408585: choosing blueprint for branch is broken < https://launchpad.net/bugs/408585 >
=== StevenK changed the topic of #launchpad-dev to: http://dev.launchpad.net/ | On call reviewer: - | Firefighting: - | Critical bugs: ~200
[22:37] wgrant: http://pastebin.ubuntu.com/1319322/
[23:03] wallyworld, sinzui: You two coming back?
[23:04] I am not. got eat
[23:04] back where?
[23:04] sorry, i thought cal lhad finished
[23:04] The server kicked everyone but wgrant off
=== Ursinha-afk is now known as Ursinha
=== Ursinha is now known as Ursinha-afk