[03:01] StevenK: I return
[03:01] and therefore you are?
[03:02] wgrant: Just as I'm about to go to lunch, nice work. :-P
[03:02] wgrant: I thought ? and params=(list,) were allergic?
[03:04] StevenK: Ah, indeed. Though you can easily Stormify the outer bit of that and just use SQL for the inner bit, or even add FOR UPDATE/SHARE support to Select in about 4 lines of code.
[03:06] Or I could ignore it and make you twitch
[03:46] wgrant: http://pastebin.ubuntu.com/5685858/ btw
[03:52] StevenK: Does that work?
[03:53] Oh, I guess it does.
[03:53] Because all the SSs that we touch will have BSFs at the start.
[03:53] Exactly. And possibly none of them will at the end.
[03:54] StevenK: However, does that query at the start perform adequately?
[03:55] It would be more reliable to filter on SS ID in the outer query
[03:55] Rather than relying on the planner to guess that the inner query results in extremely good selectivity.
[03:55] Requires a JOIN in the outer query, ids == BSF not SS
[03:56] Or obtain the IDs first, since you need them anyway
[03:56] I had that, you said I should get them from the FOR SHARE query ... :-)
[03:56] Bleh, do I need execute_zcml_for_scripts to do IStore(BranchMergeProposal) ?
[03:57] You need the component architecture, yes.
[03:57] * StevenK stabs Zope
=== tasdomas_afk is now known as tasdomas
[04:36] StevenK: http://pastebin.ubuntu.com/5685908/
[04:37] Haha
[04:37] I see you took the easy way of using FOR SHARE too
[04:38] wgrant: Your filter_expr is too late
[04:38] We've already deleted some BSFs by then
[04:38] StevenK: Why is that a problem?
[04:38] It's a NOT EXISTS at that point
[04:39] Ah
[04:42] It only locks the first row that it finds, since a NOT EXISTS obviously fails as soon as it finds a row, but that's fine for our needs.
[04:47] wgrant: My MP script of doom is almost done too
[04:47] Then we'll see if PreviewDiffPruner likes a large dataset
[04:51] :)
[05:08] 2 PDs for MP 863
[05:08] 4 PDs for MP 861
[05:08] Come on!
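The "? and params=(list,) were allergic" remark above refers to a general DB-API quirk: most drivers will not bind a Python list to a single placeholder, so an IN clause needs one placeholder per element. This is a minimal, hypothetical helper (not Launchpad code) sketching the usual workaround:

```python
def in_clause(column, values):
    """Build "column IN (?, ?, ...)" plus a flattened parameter list.

    DB-API drivers generally refuse to adapt a Python list to a single
    "?" placeholder, so we emit one placeholder per element instead.
    """
    if not values:
        # An empty IN list is invalid SQL; match nothing instead.
        return "1 = 0", []
    placeholders = ", ".join(["?"] * len(values))
    return "%s IN (%s)" % (column, placeholders), list(values)


clause, params = in_clause("BugSummary.id", [1, 2, 3])
# clause is "BugSummary.id IN (?, ?, ?)"; params is [1, 2, 3]
```

The resulting clause string can then be embedded in the raw-SQL inner query while the outer query stays in Storm, as suggested above.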
[05:08] 290
[05:09] Oh, I think it's commint
[05:09] *commiting
[05:17] StevenK: Going OK?
[05:19] I think it's still commiting
[05:19] That's very unlikely
[05:19] Committing is cheap
[05:20] SELECT * FROM pg_stat_activity;
[05:21] in transaction
[05:21] What's the query start time?
[05:21] 2013-05-21 04:09:56.441673+00 | 2013-05-21 04:09:56.448991+00 | 2013-05-21 05:21:26.137095+00
[05:22] That's session | transaction | query
[05:22] ?
[05:22] backend_start | xact_start | query_start
[05:22] So yes
[05:23] You're not running $script with LP_DEBUG_SQL?
[05:23] Nope
[05:23] If this is local, enable statement logging in postgres
[05:23] DF
[05:23] If it's on DF, god help you.
[05:23] Ah
[05:23] It's adding an average of 3 PDs for every MP
[05:24] (But only one actual diff, so the rest are just rows in PD)
[05:24] I just caught it doing an INSERT
[05:24] MP 128897
[05:24] It's very slow
[05:24] Where's the code?
[05:25] codebase-current/foo, it's incredible naive
[05:25] incredibly
[05:25] Awwww
[05:26] It just threw MemoryError
[05:26] Yeah
[05:26] It used all RAM
[05:26] I saw it at 75%
[05:26] Commit in batches
[05:27] I would if you'd close it
[05:27] Done
[05:28] wgrant: Look sensible?
[05:32] StevenK: I'd do more like 2000, but doesn't really matter, so go ahead :)
[05:33] StevenK: It might still fail due to StupidCache, but it's worth a try
[05:34] wgrant: I'd bump it to 2000, but you like leaving vim running
[05:34] StevenK: E!
[05:36] Hah
[05:36] 3 PDs for MP 160882
[05:36] 4 PDs for MP 160881
[05:36] 5 PDs for MP 160880
[05:36] Commit
[06:06] Ah, look at that
[06:07] Hm?
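The MemoryError above is the classic symptom of building one giant transaction (and, in an ORM, one giant object cache) without ever committing. A generic sketch of the "commit in batches" fix wgrant suggests, with hypothetical `process` and `commit` callables standing in for the real script's work:

```python
def run_in_batches(items, process, commit, batch_size=2000):
    """Process items, committing every batch_size to bound memory use.

    Committing periodically lets the ORM drop its object cache and keeps
    the open transaction (and its locks) from growing without limit.
    """
    pending = 0
    for item in items:
        process(item)
        pending += 1
        if pending >= batch_size:
            commit()
            pending = 0
    if pending:
        commit()  # flush the final partial batch


# 5000 items at batch_size=2000 -> two full commits plus one final one.
commits = []
run_in_batches(range(5000), process=lambda i: None,
               commit=lambda: commits.append(1), batch_size=2000)
```

The batch size of 2000 here just echoes the number floated in the chat; the right value depends on row size and cache behaviour.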
[06:07] It finished
[06:08] And I think we have OMGLOTS of previewdiffs to kill
[06:10] SELECT count(*) FROM previewdiff WHERE EXTRACT(year FROM date_created) = 2001; => 570852
[06:11] :)
[06:12] 2013-05-21 06:12:14 DEBUG2 [PreviewDiffPruner] Iteration 14 (size 10000.0): 1.226 seconds
[06:13] 2013-05-21 06:13:06 DEBUG2 [PreviewDiffPruner] Iteration 64 (size 10000.0): 0.002 seconds
[06:13] 2013-05-21 06:13:06 DEBUG2 [PreviewDiffPruner] Done. 584116 items in 65 iterations, 68.688814 seconds, average size 8986.402205 (8503.80299128/s)
[06:13] wgrant: So, I will put up a branch tomorrow with 44-4 that adds the two indices on pd and id.
[06:14] StevenK: Sounds good
[06:14] And PDP is already hellishly fast, so that's handy
[06:14] It took 30 minutes to add them and 60 seconds to delete them
[06:14] The deletion is optimised SQL
[06:14] The insertion was awful Python
[06:15] s/awful/naive/ :-)
[06:15] No, the script wasn't bad, but the LP infrastructure is
[06:22] wgrant: News at 11 :-P
[06:22] Haha
[06:22] DiffPruner punted the lone diff I added too
[06:22] :)
[06:23] 2013-05-21 06:14:10 DEBUG2 [DiffPruner] Iteration 0 (size 1.0): 132.183 seconds
[06:23] I'm guessing DF wasn't so happy
[06:23] Oh, it might have been transaction blocked
[06:23] Given that time is extremely close to the entire runtime
[06:24] More likely that it includes the time to calculate the condemned IDs
[06:24] StevenK: Are these in -daily?
[06:24] They both are, yes
[06:24] Good
[06:25] I wasn't sure how heavyweight they would be, and we don't buy anything with them in hourly
[06:29] wgrant: If you're happy to accept r=sinzui on that MP, the changes you suggested have been pushed for a while.
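The "Iteration 14 (size 10000.0)" lines above come from garbo-style tunable loops, which grow or shrink each batch so an iteration takes roughly a target time. The real garbo algorithm isn't shown in this log; the following is a simplified sketch of the idea, with made-up parameter names and clamping values:

```python
def next_chunk_size(current, elapsed, goal=2.0, minimum=1.0, maximum=10000.0):
    """Scale the next batch so it should take about `goal` seconds.

    If the last batch finished quickly, grow the next one; if it was
    slow, shrink it. Always clamp to [minimum, maximum] so a single
    bad timing estimate cannot make the loop run away.
    """
    if elapsed <= 0:
        return maximum  # effectively free: go as big as allowed
    return max(minimum, min(maximum, current * goal / elapsed))
```

With these numbers, a pruner that hits its maximum chunk size and still finishes each iteration in about a second, like PreviewDiffPruner above, simply stays pinned at the maximum.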
[06:45] I was hoping to relayer lp.bugs.m.bsf and .ss, but they're too intertangled, so I guess that will have to do
[06:46] StevenK: r=me
[06:47] StevenK: For QA, I'd recommend running a harness with LP_DEBUG_SQL and trying a deletion
[06:47] The queries are designed to be very fast, but we should check that I didn't miss something
=== liam_ is now known as Guest39447
[09:00] jtv: you alive....
[09:12] czajkowski: barely... this particular jet lag always gets me in a weird way, where for a while I think I'm off the hook and then it just comes back.
[09:12] jtv: tomorrow could you look at https://answers.launchpad.net/launchpad/+question/229157 please....
[09:12] OK
[09:13] thanks
[10:17] StevenK: any chance you can get at any oopses generated from this error? https://answers.launchpad.net/launchpad/+question/229157
[10:20] wgrant: ^
[10:20] Ah
[10:20] I have tracebacks about that
[10:20] I thought it'd be the plurals, but I see nothing wrong with them.
[10:21] Nor does the empty Last-Translator cause it.
[10:21] No, it's something to do with the query logging, IIRC
[10:21] Yeah
[10:21] The statement issued by _getPOTMsgSetBy causes the statement logger to choke when trying to expand parameters
[10:22] It first started happening yesterday
[10:22] Only on empathy and webtrees
[10:24] Haven't had a chance to investigate yet
[10:38] wgrant: sorry for the silence, pre-imp call. Do we have a bug for that problem?
[10:41] jtv: Not yet
[10:41] wgrant: it sounds as if there isn't much I can do for this question... would it be OK if I handed it off to you?
[10:42] jtv: I'd love to say no, but sadly I cannot.
[10:42] :)
[10:42] "To the first part or the second part?" :)
[10:43] If there's a way I can usefully take a portion of this problem off your hands, I'd be happy to.
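The `_getPOTMsgSetBy` failure above is a debug logger crashing while expanding query parameters for display. A statement logger should never let rendering break the query it is observing; this is a hypothetical defensive sketch (not the actual LP_DEBUG_SQL implementation) of that principle:

```python
def render_statement(statement, params):
    """Render a query with its parameters inlined, for logging only.

    Parameter expansion can itself raise (odd encodings, mismatched
    placeholder counts, unreprable values), so fall back to the raw
    statement rather than letting a logging failure take down the
    real query.
    """
    try:
        return statement % tuple(repr(p) for p in params)
    except Exception:
        return "%s -- %d params (unrenderable)" % (statement, len(params))
```

The key design choice is the blanket `except Exception`: for diagnostics, a degraded log line is strictly better than an exception propagating out of the tracer into application code, which is exactly the failure mode described in the question linked above.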
[10:48] * jtv has to relocate
[10:48] jtv: You can't take it off my hands, so I can't say it's not OK to hand it off to me, as much as I'd love to have it not be my problem :P
[10:49] wgrant: then we both feel entirely the same. That is one bright spot in the situation. :)
[10:49] Heh
[10:50] * jtv runs
=== gary_poster is now known as gary_poster|away
=== gary_poster|away is now known as gary_poster
=== tasdomas is now known as tasdomas_afk
=== tasdomas_afk is now known as tasdomas
=== wedgwood_away is now known as wedgwood
=== deryck is now known as deryck[afk]
=== BradCrittenden is now known as bac
=== stub1 is now known as stub
=== tasdomas is now known as tasdomas_afk
=== deryck[afk] is now known as deryck
=== gary_poster is now known as gary_poster|away
=== gary_poster|away is now known as gary_poster
[18:41] hmm
=== BradCrittenden is now known as bac
=== wedgwood is now known as wedgwood_away
=== wedgwood_away is now known as wedgwood
[22:52] Laney: what have you broken...
=== wedgwood is now known as wedgwood_away