=== cinerama_ is now known as cinerama [03:33] morning folks [03:50] hahhahahahahaahahahahahahahahaha [03:50] ha [03:54] wgrant: yo, soyuz question [03:54] https://lp-oops.canonical.com/oops.py/?oopsid=1904K1614 [03:54] queries 10 and 12 [03:57] lifeless: What about them? [03:57] why does the file size count not have the status enum check [03:58] lifeless: It uses dateremoved instead. [03:58] Which is similar, but not quite the same. [03:58] dateremoved lags a bit. [03:58] But it represents the actual size on disk. [03:58] uhm [03:58] would it be incorrect to put the status enum in ? [03:58] It depends on your definition of correctness. [03:59] It would make it no longer reflect the archive disk usage. [04:00] lifeless: Erm, 6to4 doesn't do that. [04:00] Maybe you mean NAT64. [04:00] Which is in no way widely supported. [04:01] And then you probably need DNS64 too. [04:01] Pure IPv6 is not a very friendly environment at the moment. [04:01] ah [04:01] there used to be a 6 -> 4 thingy [04:01] Still, World IPv6 Day soon. [04:01] 6-7 years ago [04:01] Yeah. [04:01] That was deprecated. [04:02] And replaced by NAT64. [04:02] But neither has ever been widely supported. [04:02] Doable, but not common. [04:02] I feel for the guy [04:02] but its totally not a bug in lp's code [04:02] I just love how The Plan for IPv6 is "Oh, everyone will move. At some point." [04:02] StevenK: What more plan can there be? [04:02] People have to stop being hideously lazy. [04:02] And just do it. [04:02] I just like to imagine IPv6 causing DNS servers the world over to scream “AAAA!” [04:02] It's not that hard. [04:03] spiv: Boo. Hiss. [04:03] wgrant: No, it just costs time and lots of money. [04:04] StevenK: And they've had more than a decade to do it. [04:04] And telecommunication companies prefer to *receive* lots of money and spend very little. [04:04] wgrant: thats -barely- 2 equipment refreshes of core switch gear [04:04] lifeless: Sure. 
But most companies are not even thinking about it yet. [04:05] wgrant: and its not been /stable/ for all that long [04:05] anyhow [04:05] archive:+repository-size [04:05] not ip6 [04:05] kgo [04:06] At least aarnet's tunnel broker is reasonable. [04:06] lifeless: What about it? [04:08] wgrant: thats the oops I'm looking at [04:08] https://bugs.launchpad.net/launchpad/+bug/739070 [04:08] <_mup_> Bug #739070: Archive:+repository-size timeout < https://launchpad.net/bugs/739070 > [04:08] 9+ seconds to query file sizes [04:08] Yes. [04:09] lifeless: Would the status filter improve it significantly? [04:09] I'm not 100% sure [04:09] exploring the space atm [04:09] is it 2 for binaries as well? === Ursinha is now known as Ursinha-afk [04:10] Yes. [04:10] PackagePublishingStatus is universal. [04:10] ~/launchpad/lp-branches/bfj-different-urls$ bzr push [04:10] Using saved push location: lp:~wgrant/launchpad/bfj-different-urls [04:10] bzr: ERROR: Server sent an unexpected error: ('error', '') [04:10] I don't think that's meant to happen. [04:10] wgrant: would 2+3 be equivalent ? [04:10] lifeless: No. [04:11] lifeless: What's 3? Superseded? [04:11] (it's working this time) [04:11] wgrant: yes [04:11] 4 is deleted [04:12] 1 is pending [04:12] 5 Obsolete. [04:12] Right. [04:12] I never remember the 3/4 order. [04:12] So, 3/4/5 are all potentially removed. [04:12] In the beginning there was Pending/Published/PendingRemoval/Removed. [04:12] anyhow [04:12] Then ArchiveRemovalRedesign happened, and the removedness was moved into dateremoved instead. [04:12] with (2) its a 50ms query hot [04:12] With status indicating the reason for removal. [04:12] Hmm. [04:13] With dateremoved how is it hot? 
[04:13] 73ms [04:13] :P [04:22] look at the plan [04:22] "securebinarypackagepublishinghistory__archive__status__idx" btree (archive, status) [04:22] is the index it is using [04:23] so it reads every row ever in that archive [04:26] wgrant: ^ [04:28] wgrant: ping [04:28] create index bpph__dateremoved__for__size__idx on binarypackagepublishinghistory using btree (dateremoved) WHERE dateremoved IS NULL; [04:28] can you create that on dogfood [04:28] and get me an explain analyze for the query in https://bugs.launchpad.net/launchpad/+bug/739070/comments/1 ? [04:28] <_mup_> Bug #739070: Archive:+repository-size timeout < https://launchpad.net/bugs/739070 > [04:31] lifeless: Ah, didn't see the plans in the bug. [04:31] Let's see. [04:33] lifeless: The index is building. [04:34] I suspect we'll want (archive) where dateremoved is null [04:34] or [04:35] we should make dateremoved go away [04:35] either folding it into the status enums [04:35] or dropping the librarian reference or whatever [04:43] lifeless: -> Index Scan using temp_bpph_archive_dateremoved_idx on binarypackagepublishinghistory (cost=0.00..1112.40 rows=391 width=4) (actual time=56.562..107.930 rows=915 loops=1) [04:43] Index Cond: ((archive = 14516) AND (dateremoved IS NULL)) [04:43] So the bpph scan is cheap with an index [04:44] 34ms hot. [04:44] Is that dateremoved check a status check in disguise? [04:44] 10s cold, but DF. [04:44] jtv: Yes. [04:44] lifeless: There is a stay of execution for [bs]pph, we'd need to think very carefully about it. [04:44] StevenK: We don't need to remove it [04:45] StevenK: but we do need to model it. Modelling it as a separate field may be worse than not. [04:45] wgrant: the query was 10s cold ? did you get the full explain analyze? [04:46] We could add another DBItem to PackagePublishingStatus -- PURGED or so [04:46] lifeless: I had it, but then I made the mistake of pressing PgUp, which DF does not enjoy at all. [04:46] StevenK: Doesn't quite work. 
[04:46] StevenK: PPS has three separate end states. [04:46] (whether this is sane or not is in question) [04:47] lifeless: But cold data from DF is entirely useless. [04:47] lifeless: The numbers, at least. [04:47] The plan is OK. [04:47] wgrant: can I see? [04:47] wgrant: cold data still tells me where time is going, and thats useful. [04:47] wgrant: so I dispute entirely useless [04:47] wgrant: oh, I see, lost. [04:47] kk [04:47] Yeah. [04:48] Get one from qastaging, I guess. [04:48] select count(*) from branchrevision; then try again ? [04:48] It doesn't tell you where the time goes, really. [04:48] It tells you that DF has like 4MB of RAM. [04:48] 4MB. 'lol' [04:48] 640k surely === Ursinha-afk is now known as Ursinha [04:52] ugh, timeline only works if you run upstream builds of chromium? wtf [04:53] lifeless: Hm? [04:53] Works for me. [04:53] "Pfffttt, Speed Tracer is not working. [04:53] Please double check a couple of things: [04:53] You must start Chrome with the flag: --enable-extension-timeline-api [04:53] You must be running the Chrome Dev channel. [04:53] For more details, see our getting started docs." [04:53] Oh. Not an upstream build. [04:53] I have enable-extension-timeline-api in CHROMIUM_FLAGS in /etc/chromium-browser/default [04:53] Just not a stable build. [04:54] webdevs are stable users too [04:54] An error message that includes "Pfffttt," Kwality [04:54] eg. ppa:chromium-daily/beta or so. [04:54] I question the sanity of this [04:54] lifeless: *That* makes you question the sanity of Chromium? [04:54] I have yet to see any *compelling* reason to switch to Chromium, TBH. [04:54] StevenK: I use Chromium. [04:55] It's not bad. [04:55] However, I question Google's motives. [04:55] Firefox 4 is OK, except not with fglrx. [04:55] wgrant: so can has plan? [04:55] lifeless: The hot one? [04:56] the cold one after querying branchrevision a lot [04:56] It's been running that select for 8 minutes, still hot as ever. 
[04:56] argh hhaa [04:56] ok [04:56] Yes. [04:56] hot will be fine [04:56] http://paste.ubuntu.com/602605/ [04:57] funny, 3.6 was broke with nvidia, 4.0 is broke with fglrx [04:57] Our test suite is slow. [05:00] wgrant: https://bugs.launchpad.net/launchpad/+bug/739070/comments/4 [05:00] <_mup_> Bug #739070: Archive:+repository-size timeout < https://launchpad.net/bugs/739070 > [05:01] lifeless: 19ms [05:01] this is 1/2 the estimated cost [05:01] which is daft but there you go [05:01] http://paste.ubuntu.com/602606/ [05:02] why are we bringing back the rows ? [05:03] Which rows? [05:03] these rows [05:03] why aren't we summing [05:03] NFI [05:03] But it's only a single column. [05:03] So it's not that bad. [05:03] (yes, it should be fixed) [05:03] Oh. This query is not a single column. [05:04] Ahh, this is query 11, not query 10. [05:06] wgrant: https://bugs.launchpad.net/launchpad/+bug/739070/comments/5 [05:06] <_mup_> Bug #739070: Archive:+repository-size timeout < https://launchpad.net/bugs/739070 > [05:06] wgrant: actually I'm in https://lp-oops.canonical.com/oops.py/?oopsid=1948BB197 [05:07] query 21 [05:07] 8929ms SQL-launchpad-main-master [05:07] SELECT DISTINCT LibraryFileContent ... [05:07] lifeless: Still 19ms. [05:08] spm: can you run https://bugs.launchpad.net/launchpad/+bug/739070/comments/5 on the readonlymode replica - *one* run, I need the explain analyze (looking for cold cache effects) [05:08] <_mup_> Bug #739070: Archive:+repository-size timeout < https://launchpad.net/bugs/739070 > [05:08] oki [05:23] why does packagefile exist? [05:24] https://bugs.launchpad.net/launchpad/+bug/739070/comments/6 [05:24] <_mup_> Bug #739070: Archive:+repository-size timeout < https://launchpad.net/bugs/739070 > [05:25] wgrant: ^ [05:25] StevenK: ^ [05:25] lifeless: packagefile? You mean BinaryPackageFile? [05:25] yes [05:25] There are two [05:25] and I assume sourcepackagefile [05:26] SourcePackageReleaseFile is essential. 
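The partial index discussed above (indexing only the rows where dateremoved IS NULL, plus the archive column, so the live-size query stops scanning every row ever published in the archive) can be sketched with SQLite, which also supports partial indexes. The schema and data here are invented for illustration; only the index idea is from the chat:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE binarypackagepublishinghistory (
        id INTEGER PRIMARY KEY,
        archive INTEGER,
        dateremoved TEXT  -- NULL means the file is still on disk
    )
""")
# An archive where almost every row was removed long ago: only 1 in 100
# publications is still live.
conn.executemany(
    "INSERT INTO binarypackagepublishinghistory VALUES (?, ?, ?)",
    [(i, 14516, None if i % 100 == 0 else "2010-01-01") for i in range(10000)],
)

# The partial index from the chat, with the suggested archive column:
# it contains only the not-yet-removed rows, so it stays small.
conn.execute("""
    CREATE INDEX bpph__dateremoved__for__size__idx
    ON binarypackagepublishinghistory (archive)
    WHERE dateremoved IS NULL
""")

query = """
    SELECT count(*) FROM binarypackagepublishinghistory
    WHERE archive = 14516 AND dateremoved IS NULL
"""
live = conn.execute(query).fetchone()[0]
# The last column of EXPLAIN QUERY PLAN output is the plan detail text.
plan = " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + query))
```

The plan should report a search using bpph__dateremoved__for__size__idx rather than a scan of every row in the table, which is the effect being sought on PostgreSQL.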
[05:26] It's SourcePackageReleaseFile [05:26] (Usually called SPRF) [05:26] BinaryPackageFile is for potential expansion to multi-file binaries. [05:26] (yes, seriously, one has Release and one does not) [05:26] ok [05:26] I can see the expansion factor [05:26] but [05:27] if we made bpph be an owner of LFC (wrapping the LFA fields in it, and glued into the gc process) we'd save 2.2 seconds of overhead. [05:28] aaaaaaaaaaaaaaa [05:28] aaaaaaaaaaaaaaaaaa [05:28] So, an SPR can have any number of SPRFs. [05:28] And there are often lots of BPPHs for a BPR. [05:28] sure [05:28] but the whole point of LFA is to reference-count-own LFC [05:29] ignore SPRF for now, I see why the 1:M is needed [05:29] we could roll up some stuff there too, but separate problem. [05:29] No point solving BPPH without SPPH. [05:29] wgrant: 3:1 bpph -> spph rows [05:29] or more [05:30] A constant factor of three is not very compelling. [05:30] wgrant: solving a problem is solving a problem [05:30] so the question is, can we solve [05:30] or [05:30] do we need to precalculate [05:43] * StevenK murders gina.txt [05:52] stub: hi [05:54] Speaking of stub, where did his wonderful "prejoin" decorator go? [05:54] jtv: deleted [05:54] in favour of what? [05:54] DecoratedResultSet [05:55] stub: is 2.8 seconds 'about right' for reading in 1000 cold rows from libraryfilecontent? [05:56] With cold rows, how would you know? [05:56] yo [05:56] jtv: https://bugs.launchpad.net/launchpad/+bug/739070/comments/11 [05:56] <_mup_> Bug #739070: Archive:+repository-size timeout < https://launchpad.net/bugs/739070 > [05:56] jtv: science :> [05:56] lifeless: I don't think that is very good 
[05:57] stub: https://bugs.launchpad.net/launchpad/+bug/739070/comments/6 [05:57] <_mup_> Bug #739070: Archive:+repository-size timeout < https://launchpad.net/bugs/739070 > [05:57] lifeless: no I don't mean how do you know it's 2.8 seconds for 1000 rows, but how would you know whether it's "about right." [05:57] jtv: we're seeing 2.5ms per row, thats plausible with many redundant spindles [05:58] jtv: I would ask someone familiar with our db's guts :> [05:59] Regardless, it's all a question of access patterns isn't it? It could probably be 5x worse, it could probably be 500x better. [05:59] jtv: not really [06:00] jtv: if this is slower than we'd anticipate given our hardware and prior experience, then we can go digging for issues like table/index bloat, contention whatever [06:00] jtv: if this is approx as fast as we'd expect on an average day, then we're wasting time if we do that and we should instead look at how to either avoid it being cold or avoid accessing the rows [06:00] wgrant: Want to hear something awesome? [06:01] wgrant: gina.txt has been running for 20 *minutes* locally. [06:01] So that is 695 index lookups and row lookups, so pulling a minimum of 1400 blocks off the disk random access [06:01] lifeless: what I'm trying to say is that cold random reads have so many unknowns that this sample size seems inadequate. [06:01] StevenK: It normally only takes 60 seconds. I think you have a bug. [06:01] wgrant: Or I've broken it into pieces. [06:02] And since it's a doctest, I have to printf-debug [06:02] StevenK: how do you go from 695 -> 1400 blocks ? [06:02] bah [06:02] stub: ^ [06:02] stub: heuristic of 2:1 (index + table) ? [06:02] (It would be pretty nice if postgres could at least profile distribution of I/O latencies) [06:03] jtv: that would be awesome [06:03] jtv: I agree that there are many unknowns [06:03] Read a bucket in the index == pull one block from disk. Read that row from disk == pull one block from disk. 
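stub's block-per-lookup model above (one index block plus one heap block per row, every block a random read) combined with the earlier 2.8 s per 1000 cold rows observation is simple arithmetic; the 695-row figure and timings are from the chat:

```python
# Two random block reads per row looked up: one for the index bucket,
# one for the heap page (on a 14M-row table, rows rarely share pages).
rows = 695
blocks_per_row = 2
min_blocks = rows * blocks_per_row  # ~1400 random block reads, as in the chat

# Observed earlier in the log: ~2.8 s to read 1000 cold LibraryFileContent rows.
ms_per_row = 2800 / 1000                    # 2.8 ms per row
ms_per_block = ms_per_row / blocks_per_row  # ~1.4 ms per random block read
```

At roughly 1.4 ms per random block read, the cost is well under a full disk seek per block, which is why this is plausible for a storage array with many redundant spindles.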
[06:04] (The stupid thing is, we've been thinking of I/O costs in terms of FS cache misses, which is impossible to predict and may soon be an outdated concept anyway. If we could just look at it from an empirical standpoint, we might win something.) [06:04] its about 150 bytes per row min, so what 25 per page [06:04] on a 14M row set we're very likely to get one page per row [06:04] So 700*2 == 1400, == the minimum if we somehow magically only load up wanted information from the index [06:05] stub: kk, thats cool, i have the same basic model [06:05] stub: was just checking [06:05] Ok. I was assuming the size of the table would make it unlikely to get multiple rows in a page. [06:05] yeah, an assumption I agree with [06:05] the BPPH table is very spread over time [06:05] jtv: thats not my model of IO costs [06:06] Not "we" personally; I mean the world. [06:06] The index should be much more tightly packed too. [06:06] jtv: ah; so I don't think the world thinks of IO that way per se ;) - see for instance the scale free algorithms work [06:06] jtv: which models IO as tiers of progressively bigger and slower cache [06:07] Well okay, the database world. [06:07] And that model is still wrong, is my point. [06:07] lifeless: Oh... and the toast lookups for the md5 and sha1 hashes which are stored as text, so potentially another 4 lookups [06:07] jtv: they seem to have /very/ good results doing this - noting that they start with the CPU L1 cache and work out [06:08] stub: even if we're not looking at the columns (as my rephrased cheaper-estimate query doesn't) [06:08] Yes, but that's cache-oblivious algorithms isn't it? I'm talking database. [06:08] jtv: it is, but database will get those algorithms eventually I suspect [06:08] lifeless: I think that will avoid loading the data from the toast tables, yes. 
[06:08] that or mlock() the entire table [06:08] (And I'm not talking cache-oblivious database either: we can worry about that once that world finally catches on to space-filling curves) [06:09] jtv: I think its percona that have a space-filling curve index [06:09] lifeless: There have long been cache-oblivious algorithms for databases. I worked on a cache-oblivious join algorithm 2 jobs ago. [06:09] jtv: very similar to bzr's index /packing logic in fact [06:09] jtv: \o/ [06:11] stub: so what I'm trying to assess, is there some way to get this down to < 2 seconds reliably (we have to do a parallel query for the source package data) [06:11] But when it comes to buffer management and individual query optimization, my point is we can't afford to keep thinking "what if this misses in FS cache and the seek is long" because it just gets too complex: SSD, on-disk cache, bad blocks, network latency. [06:11] stub: or should we be maintaining the archive size as a cache [06:12] stub: its currently 9 seconds with the original query, 6.6 seconds in my rephrased one [06:13] StevenK, wgrant: should the "synchronize the simple updates" button on +localpackagediffs update blacklisted differences as well? [06:13] stub: and we're seeing ~ 7 of these a day where its stuttering (presumably on IO) [06:13] jtv: No. [06:14] lifeless: So your theory is this query will often be cold due to access patterns so the lfc lookups will sometimes be too slow? I'm trying to think if we have other pages accessing more than a few dozen lfc rows. [06:15] More of the ones I can think of are dealing with bulk packaging data and are slow too [06:15] stub: bug attachments show their size in the comment, for some bizarre reason [06:15] very busy bug pages could access hundreds; *but* we will paginate those eventually [06:15] so not relevant here [06:15] lifeless: That's also the only non-packaging one I can think of. 
[06:16] lifeless: Bug attachments are fine as it would be rare for a page to have more than a dozen attachments. [06:16] Some have hundreds. [06:16] wgrant: I'll take your word for it. [06:16] stub: its rare, but I can link you to the bugs :> [06:16] jtv: That is sort of the point of blacklisting. [06:16] stub: I don't think its *often* cold, but I think its cold *often enough* [06:17] specifically I don't think we do this query on this dataset anywhere else except *perhaps* on package upload (for quota checking) [06:17] wgrant: well, unless maybe we have automated syncing and then users start blacklisting dsds they feel will behave correctly without human attention. [06:17] jtv: Then we've redefined the term. [06:17] wgrant: so it sort of requires a conscious decision that blacklisting means "I don't want this change" and not "I don't want to see this change." [06:17] yeah. so caching the archive size is fine for this query. I'm wondering if there is something more generic we can do to solve slowness in similar pages. The packaging tables always seem to be an issue - large, and the users want complex reports. [06:18] stub: so in this particular case; there is a missing index that would drop the rows examined by 6000 in the packaging table [06:18] stub: I don't know if its worth adding [06:18] I don't think it will solve the problem [06:18] jtv: It effectively means "I don't care" [06:18] wgrant: fine by me; the "I'll take your word for it" still stands but in light of yesterday's conversation I thought I'd at least ask. :) [06:19] Heh. [06:19] StevenK: ^^ [06:19] StevenK: that seems to contradict wgrant's take. [06:19] Do you agree? [06:19] Oh, he's here. [06:19] Alas, yes. [06:19] StevenK: You mean "I don't care what the parent thinks"? 
[06:19] stub: ok, I'll update this bug to say 'needs cached figures' [06:19] More correctly: "I don't care about this difference" [06:20] wgrant is showing his age [06:20] "I don't care what my parents think" [06:21] "I'll hack on Launchpad even if my father thinks that Web 2.0 is wrong! I'll show them!" [06:21] StevenK: humans may not think they care, but it'd be nice to have a clear notion of whether they have a right to be disappointed if the blacklisted difference stops the package from updating. [06:21] jtv: Yes, a blacklisted difference stops a sync [06:21] Of course the button I'm building will sort of eliminate an excuse for blacklisting packages that should be updated. [06:21] * jtv updates his tests. [06:22] Keep in mind there are two blacklist states [06:22] lifeless: So that query is broken anyway, because the distinct on is lacking an order by so it isn't doing what the author thought it was doing [06:23] stub: my one ? [06:23] the one in the bug comment you linked me to [06:23] stub: my rephrased one is the one with the subselect [06:23] stub: it doesn't need the order by [06:23] Ran 1 tests with 1 failures and 0 errors in 35 minutes 35.459 seconds. [06:23] stub: look at the plan: it orders to do the distinct and unique automatically [06:23] WHEEE! [06:24] lifeless: That is fine if you happen to get the plan you are expecting. [06:25] StevenK: not too worried about the two blacklist states; I'm whitelisting, not blacklisting statuses. Are we confused yet? :) [06:25] lifeless: So we really need the order by, esp if we want this to survive future PG upgrades. [06:26] (and with luck, it will be a noop with the expected plan) [06:26] stub: I'd like to learn more about this [06:26] stub: which docs should I read? [06:26] jtv: Don't make me kick you. 
[06:27] lifeless: This is documented in the DISTINCT ON section of the SELECT page in the SQL Reference guide for PostgreSQL [06:27] Sorry - 'DISTINCT clause' section [06:27] file:///usr/share/doc/postgresql-doc-8.4/html/sql-select.html [06:30] stub: so, I read that [06:30] stub: and it doesn't conflict with what I've done [06:30] stub: because - 'Note that the "first row" of each set is unpredictable unless ORDER BY is used to ensure that the desired row appears first' [06:31] thats fine, we're selecting identical rows, so whichever row comes first is ok [06:31] stub: anyhow, I can add a sort in [06:32] stub: but we don't use that crafted query /anyway/ because I was seeing if we could make it fast without caching etc [06:32] lifeless: nm. You are correct I think. [06:33] I wonder if it changed, or if I'm thinking of a similar statement? I'm sure there was one that if you neglected the ORDER BY you can get duplicate results [06:36] lifeless: So in this case you can use the SQL standard "SELECT DISTINCT libraryfile, filesize" rather than SELECT DISTINCT ON (libraryfile) filesize, but the cost saving is statistically insignificant. [06:37] stub: interesting; we don't need libraryfile at all anyhow, just the summed sizes [06:37] lifeless: Should we be using rabbitmq instead of triggers to maintain BugSummary btw? [06:38] stub: it would remove all the races [06:38] stub: I have a branch for rabbitmq but ec2 threw it out [06:38] stub: I don't know why yet and haven't gotten back to it [06:38] (now that I think the DB patch is done, and just waiting on tests to confirm this hypothesis :) ) [06:39] :> [06:39] stub: thanks! 
[06:39] I've written up my theory in https://bugs.launchpad.net/launchpad/+bug/739070 [06:39] <_mup_> Bug #739070: Archive:+repository-size timeout retrieving many hundreds of package sizes < https://launchpad.net/bugs/739070 > [06:39] just the top of the bug summary [06:41] lifeless: I needed to change the structure a bit - we might be race free now with the simpler triggers than I thought were needed to catch some potential modifications [06:41] bug_fti || 76% || 612 MB of 798 MB [06:42] wgrant: I fixed gina! [06:42] stub: that would be cool, but the case of 'person subscribes and the bug is made public' - how will that be race free? [06:42] StevenK: Oh? [06:42] wgrant: Well, the test now takes 65 seconds, rather than 2135. [06:42] That's more normal. [06:42] What did you break? [06:43] wgrant: gina needed access to DSP [06:43] Ah. [06:43] Since she creates DSDJs [06:43] Yes. [06:43] Anyone want to review https://code.launchpad.net/~wgrant/launchpad/bfj-different-urls/+merge/59730? Fairly long, but mostly just sed applied to test. [06:43] +s [06:44] wgrant: I'll take it [06:45] stub: have we got enough bloat-changing data to point at (or exclude) bug heat yet ? [06:45] stub: or equally, backups [06:46] lifeless: yer... I think the race is still there. [06:46] stub: I thought of a way to make it detect races [06:46] lifeless: No, I haven't got the data to confirm the cause. [06:46] stub: but you'll either cry or aim a nuke at me. [06:47] stub: and I don't want to cause either event. [06:47] We can serialise it by using an advisory lock. [06:47] stub: the way would be in any transaction affecting a bug, update a row in a common table, for serialization detection [06:49] a lock sounds better :> [06:49] An advisory lock will do the same thing, and be explicit and faster, yes. [07:10] bzr di can't handle multiple revspecs? 
:-) [07:10] :-( rather [07:11] StevenK: -r x..y [07:12] lifeless: I'd like to skip one in the middle [07:12] you may want difftastic [07:13] Project windmill-devel build #2: STILL FAILING in 1 hr 5 min: https://lpci.wedontsleep.org/job/windmill-devel/2/ [07:13] lifeless: Google doesn't show anything relevant for that? [07:14] aarons plugin for lp merge proposals to show incremental diffs [07:14] difftacular? [07:14] Perhaps I'll just use filterdiff [07:15] what's the rev you want to skip? a merge? [07:15] Yes [07:15] interdiff / difftacular is what you need then [07:17] I can generate the two diffs, interdiff shows the diff between them, how does that help? [07:19] Oooh, I see. combinediff [07:19] wgrant: done [07:35] bbiab [08:25] wgrant: in case I didn't get through earlier, your review is done. [08:25] Anyone care for a simple, short cleanup review? https://code.launchpad.net/~jtv/launchpad/db-pre-747546/+merge/59737 [08:28] Bah, I was about to answer him, too [08:29] jtv: r=me [08:29] thanks [08:29] jml: hi [08:30] jml: sorry I was afk; was fooding [08:30] StevenK: and fwiw you're absolutely right about moving it. One battle at a time though. [08:31] jtv: Sure, which is why I just approved it [08:31] We're in loud agreement. [08:31] jtv: You probably could have targeted devel, too [08:32] Yes, but I already have these changes in another branch and I want to head them off at the diff. [08:33] jtv: Just saying, is all [08:33] That is understood. [08:44] jtv: Thanks. It changes the URLs used by the API, but not the API itself. [08:44] Very well. [08:46] good morning [08:54] wgrant: Hi! I modified the call sites of Archive.syncSource like you suggested ... you might want to have a look since the modification is kind of sensitive (https://code.launchpad.net/~rvb/launchpad/change-perm-sync/+merge/59341). [08:58] rvba: I think CannotCopy exceptions are exposed in the UI, so you probably want to check that their values will be sensible. [08:59] wgrant: good point. 
[08:59] Otherwise that looks great. [08:59] morning all [08:59] Morning bigjools. [08:59] bigjools: Morning! [08:59] what's broken? :) [08:59] You survived your vacation without PSN? [09:00] yeah, I got a copy of Portal 2 :) [09:00] Ah, 'tis an excellent game. [09:00] I like Valve stuff anyway [09:00] although disappointingly short. [09:06] * bigjools sees size of inbox [09:07] bigjools: And you run screaming? [09:07] A GLaDOS reference somewhere in Soyuz would be apt. [09:07] I am gently rocking backwards and forwards while hugging a small furry toy [09:08] StevenK: ha [09:09] Good morning! [09:11] jml: ready when you are [09:13] hi mrevell === almaisan-away is now known as al-maisan [09:49] lifeless: hi [09:49] hi [09:49] lifeless: sorry, I got distracted by my RSS feeds. [09:49] skype/voip either is good [09:49] lifeless: ok. what's your voip number? [09:49] jml: thats ok, its only getting vs is late :) [09:49] jml: actually, skype is better because I don't need a headset ;) [09:50] ekiga's echo detection is still poor [09:50] lifeless: ok. skype. [09:53] allenap: http://pastebin.ubuntu.com/602641/ ; thanks! [09:54] StevenK: Cheers. [10:26] wgrant: Hi! === gmb changed the topic of #launchpad-dev to: PQM in RC mode | https://dev.launchpad.net/ | On call reviewer: gmb | https://code.launchpad.net/launchpad-project/+activereviews [10:53] henninge: Hi. [10:54] wgrant: Do you happen to know if bug 776160 is a duplicate? [10:54] <_mup_> Bug #776160: mail headers inconsistent between FAILEDTOUPLOAD and FAILEDTOBUILD < https://launchpad.net/bugs/776160 > [10:54] henninge: I don't think it is. [10:54] wgrant: thanks === al-maisan is now known as almaisan-away [10:57] I have a problem writing a view test for bug 644872. 
[10:57] <_mup_> Bug #644872: UnicodeEncodeError in search_text field < https://launchpad.net/bugs/644872 > [10:58] I can trigger it in the UI by searching "português" here: https://answers.launchpad.net/ubuntu/ [10:59] wgrant, rvba: regarding the sync package permission change, did you think about how it'll fit in when we start doing it asynchronously? [10:59] bigjools: Do we have a plan for that yet? [11:00] bigjools: I'm afraid not. But how is this particular change (the permission change) related to that exactly? [11:00] wgrant: sorta. The job is done but not used. allenap is working on changing the job schema so we can query across archive/series and display something to the user to show pending jobs. Then we'll switch over the syncing first, maybe PPA copies in the future. [11:00] rvba: the check will happen in the job, not the UI. [11:01] it would be nice if it happened in the UI but I think it'll bust the query time [11:01] bigjools: This seems like it's only useful for mass copies. [11:01] the permission check is now done in the copyChecker along with the various other checks [11:02] wgrant: correct. We might decide an arbitrary number of packages, over which we use the job rather than synchronously. The next step is to design the UI. [11:02] bigjools: Great. [11:02] bigjools: As long as you're not trying to do async for normal copies. [11:02] When immediate feedback is helpful. [11:02] wgrant: define "normal" [11:03] not trying to do async for *anything* yet :) [11:03] bigjools: PPA copies, mostly. [11:03] Not let's-sync-20%-of-the-archive. [11:03] not initially but I have PPAs in mind [11:03] since some PPA copying times out still [11:04] It's very fixable. [11:04] Most of the time in most cases is now spent in the UI. [11:04] Determining the list of sources. 
[11:04] ok [11:04] But there are some very expensive build queries when there is a potential source conflict in the destination (which is rare) [11:05] And delayed copy creation is slow :( [11:07] wgrant: what do you think is a good number? 50? 100? (total source+binaries) [11:07] bigjools: NFI [11:07] We may need science. [11:08] * bigjools licks finger and holds to the wind [11:11] henninge: Wishlist doesn't exist. We are using Low instead. [11:11] (just hope Rob doesn't notice you doing that ;)) [11:12] wgrant: thanks ;) [11:12] wgrant: I'll fix the documentation [11:12] I thought it already was fixed :( [11:13] We map these buckets into: [11:13] critical : generally empty, bugs that need to jump the queue go here. [11:13] high: bugs that are likely to get attention within 6 months [11:13] low [or perhaps wishlist]: All other bugs. [11:13] that sounds old [11:14] is the critical definition current? [11:15] Yes. [11:15] But ZOP means it's not empty. [11:15] Yet. [11:15] Ah, I see. [11:15] We have 5 years of debt to repay. [11:16] ok, doc changed. [11:16] low: All other bugs. [11:16] We don't use wishlist. [11:16] Thanks. [11:19] rvba: regarding the permission changes to syncing, I am concerned that users with zero permissions can cause a load on the job runner [11:19] and in the webapp for direct syncs, for that matter [11:20] bigjools: good point, but is there a "small" permission that can be checked before doing the sync? [11:21] rvba: I'm trying to work that out now :) Ideally, we need a "has any permission at all?" check [11:21] If it's too expensive to check all permissions at the time of the request, you could just check for any ArchivePermission on the Archive. [11:21] right [11:21] in fact the upload processor does this somewhere [11:21] * bigjools digs [11:21] If you use the upload processor as a model for anything I will cry. [11:21] * bigjools slaps wgrant [11:21] But yes. [11:22] It's struct_component on checkUpload. 
[11:22] s/struct/strict/ [11:22] But it's not quite right. [11:22] Oh, no, there's that other thing too. [11:22] Argh this is such a mess. [11:22] yes [11:22] blame Ubuntu policies [11:23] Yeah, see Archive.verifyUpload [11:23] if not self.getComponentsForUploader(person): [11:23] But it's buggy because it only considers components. [11:23] le sigh [11:23] A bug exists for that, I believe. [11:24] But the any-permission-at-all check is pretty simple to query for. [11:24] yep, we can just join TP to AP [11:24] Exactly. [11:24] Really cheap. [11:24] rvba, ok? [11:24] bigjools: sounds ok. Just tell me what TP and AP are [11:25] ArchivePerm [11:25] TeamParticipation [11:25] It is a flattened version of TeamMembership. [11:25] ok [11:25] So it contains indirect memberships too. [11:25] make life so much easier when querying [11:25] And faster. [11:25] And less recursive. [11:25] And less reliable... [11:25] so if someone has any AP at all, show the button [11:25] otherwise, nada [11:26] all right. I'm on it. [11:26] wgrant: why "less reliable"? [11:26] There are occasional bugs in its maintenance. [11:27] So it gets inconsistent with TM. [11:27] I suppose it gets periodically whipped out and recalculated then. [11:28] Project windmill-db-devel build #230: STILL FAILING in 1 hr 4 min: https://lpci.wedontsleep.org/job/windmill-db-devel/230/ [11:45] allenap: are you all caught up? [11:46] jtv: In what? :) [11:46] allenap: ah [11:46] jtv: I assume you want to talk about the sync button? [11:46] Yes! [11:46] jtv: IRC, Mumble, Skype? [11:46] I've been asked to make a change to the indexes initialization code, which means dropping my ongoing work on the sync button. [11:47] IRC would be convenient for me right now (headset is charging). [11:47] jtv: Okay. [11:47] Now, the code I have should be in a pretty nice shape. [11:48] I'm trying to make useful notes. [11:48] And Steve said you'd been working on something called... SyncPackageJob? 
[11:49] jtv: SyncPackageJob has been reincarnated as PackageCopyJob, but yes. [11:49] Instead of calling IArchive.syncSource() it calls do_copy() directly, and so can copy multiple packages in a single job. [11:50] Can anybody give me a clue why this test http://paste.ubuntu.com/602716/ produces this output http://paste.ubuntu.com/602717/ ? [11:50] allenap: ahhh so copying and syncing really are more or less the same thing? [11:50] For some reason, "search_text" is an "object", not a string. [11:51] jtv: Yeah, so it seems :) syncSource() and syncSources() both end up calling do_copy(). [11:53] henninge: no idea; looks like it might be one of those uses of object() as a way of creating a guaranteed-unique value in cases where None already has a meaning. [11:53] allenap: Thanks. I'll update my comments so at least that is correct. [11:55] jtv: thanks for looking at it ;) [11:56] allenap: another point is that the button's working title is Upgrade Packages. There may well be a better name for that. I'm told cjwatson would probably know. [11:56] adeuring: Moin! Do you have an idea ^^ (My question with the two pastebins) [11:56] jtv: Cool. [11:57] * adeuring is looking [11:58] the main thing to be careful of is to draw a distinction between pulling in new versions of unmodified packages from the upstream distribution, which can be done verbatim, and pulling in new versions of packages that have been modified downstream, which requires human attention (and is, I hope, outside the scope of this button) [11:58] cjwatson: thanks. That is correct: the harder case is outside the scope of this button. [11:58] The big question right now is what should be on the button. [11:58] in Ubuntu jargon we call those "sync" and "merge" respectively [11:58] its label? [11:59] bigjools: yes [11:59] "sync all unchanged" ? [11:59] Well they are unchanged, but in the parent. 
[11:59] or sommat [11:59] henninge: after looking at an "advanced search" page, I'd suggest: s/search_text/searchtext/ [11:59] hm [12:00] cjwatson: do you differentiate in nomemclature between a single sync and the mass-sync? [12:00] nomenclature, even [12:00] adeuring: I'll try but this is the equivalent in the UI: [12:00] https://answers.launchpad.net/ubuntu/+questions?field.search_text=portugu%C3%AAs&field.actions.search=Search&field.status=OPEN [12:01] bigjools, cjwatson: I'd also like to be very clear in the distinction between the "interesting" cases and this simple case. [12:01] bigjools: no [12:01] ok [12:01] adeuring: does not help but interestingly gives the same error. [12:01] Just repeating the word "sync" all over the place and making the button labels very long doesn't quite do it for me. [12:01] henninge: weird... [12:01] adeuring: it's like it does not see the parameter. [12:01] cjwatson: when someone wants a brand new package, presumably its uploaded rather than synced? [12:01] henninge: thanks [12:01] henninge: then it may just mean that the data dict is uninitialized and there's a get(foo, object()) [12:01] bigjools: that's generally synced too [12:02] interesting [12:02] bigjools: well, if it was in Debian and we can just reuse that, anyway [12:02] bigjools: if they do it from scratch for Ubuntu, that's an upload [12:02] well, sync-source is rather esoteric [12:02] henninge: have a look at https://bugs.launchpad.net/launchpad/+bugs?advanced=1 . There is this tag: not especially, it's rather simple-minded [12:02] cjwatson: right, ta [12:02] in our terminology, "sync" is any case where we copy a package verbatim from another archive [12:02] Packages that are in the parent but not the derived series are another thing I'd very much like to stay away from, naming-wise. [12:02] yeah, makes sense [12:03] (source-only) [12:03] adeuring: I am quite sure it's "search_text" in this case. 
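The object()-as-sentinel idiom suggested above for henninge's mystery "object" value is a standard Python pattern: a guaranteed-unique default for cases where None is itself a legitimate value. A minimal sketch (names here are illustrative, not the actual form-machinery code):

```python
# The object()-sentinel idiom: a unique default for cases where None
# is itself a meaningful value the caller might store.
_MISSING = object()

def get_field(data, key, default=_MISSING):
    """Like dict.get(), but able to tell 'absent' from 'set to None'."""
    value = data.get(key, _MISSING)
    if value is _MISSING:
        if default is _MISSING:
            raise KeyError(key)
        return default
    return value

form = {"search_text": None}              # present, deliberately None
print(get_field(form, "search_text"))     # None -- the stored value
print(get_field(form, "status", "OPEN"))  # absent -> fallback
```

If such a sentinel leaks out of the layer that owns it (say, because a request parameter was never bound), the caller sees exactly what henninge saw: a bare, featureless `object` where a string was expected.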
[12:03] I'm not wedded to that terminology, but if you do use the word "sync" then it should match those semantics, IMO [12:04] yup, that's been my assumption all along [12:04] we're not touching mergers [12:04] merges [12:04] henninge: argh... your bug is about questions, not bugs, right? [12:04] * adeuring was looking in the wrong place... [12:04] adeuring: it is. Sorry. [12:04] bigjools, cjwatson: would "upgrade" be a good word to use then, for the case where it's merely an upgrade that's being pulled in? [12:05] The same button will also probably sync new packages. So I doubt it. [12:05] personally, "upgrade" sounds like something users do [12:05] and yeah, I'm not sure I see any value in a UI distinction between install/upgrade in the button name [12:06] cjwatson: oh one more thing, we're going to auto-create indexes in new series as part of the first publisher run. I think you'll still be able to do compare-archive even without disabling the publisher since the release pocket won't change and the new series is frozen [12:06] I think we should reuse the terminology from everywhere else in LP: just say copy. [12:06] bigjools: doesn't the first publisher run already create indexes? unless you're talking about some indexes I'm not aware of [12:07] I think I agree with wgrant [12:07] it's a source-only copy [12:07] cjwatson: the "careful" mode I mean [12:07] The button as specified doesn't copy new packages. [12:07] cjwatson: initialisation will automatically create the indices. [12:07] cjwatson: No need for a manual run any more. [12:07] oh, right [12:07] sure, whatever :) [12:07] heh [12:07] Click button in the web UI, it will do the rest. [12:07] cjwatson: is compare-archive really necessary anyway? [12:07] No manual i-f-p or p-d or blah. 
[12:07] bigjools: it's just paranoia [12:07] figured [12:08] and it doesn't cost you guys anything [12:08] but yeah, less manual work => good [12:08] yeah I didn't think there'd be any complaints :) [12:09] we need to do it for derivations, or they will need someone to ssh in to somewhere [12:09] jtv: hm, well, sync-source can only go away once we have some other way to copy new packages [12:09] so that sounds like a spec weakness [12:09] there is a way, it's in the spec [12:09] I wasn't aware it wasn't going to copy everything :( [12:09] bigjools: Also, how are we going to push copies through the queue? [12:09] we're not [12:10] Erm. [12:10] Erm. [12:10] well, it doesn't do that right now [12:10] Clearly. [12:10] But it needs to. [12:10] out of scope [12:10] so right now, syncs get away with going through a specialised queue (ignoring unapproved etc.) because only archive admins can run it === almaisan-away is now known as al-maisan [12:11] right [12:11] once it's opened up to everybody, not going through the queue means that any developer can break our release management around milestones and we can't stop them [12:11] so if it's out of scope, it needs to be turned off :-) [12:11] heh [12:11] That is my point, yeah. [12:11] We can't allow syncSource into a primary archive unless it goes through the que. [12:11] +ue [12:11] the other option is to restrict copying to archive admins until we add queueing [12:11] Since it would allow anyone to copy into -updates, for example. [12:11] == bad [12:14] * cjwatson nods [12:16] cjwatson: so, you'd be happy with all syncs going through the normal queue process that uploads do? (including auto-acceptance etc) [12:16] henninge: trying to reproduce your problem, I got at first another error: SyntaxError: Non-ASCII character '\xc3' in file ... After s/ê/\x2345, and after adding "with_person_logged_in", create_initialized_view worked for me. [12:17] adeuring: thanks, I'll check that after lunch.

[12:17] bigjools: totally === henninge is now known as henninge-lunch [12:17] bigjools: er, modulo mail announcements [12:18] cjwatson: great, I always wondered why sync-source never did that anyway [12:18] manually requested syncs need announcements on the -changes list; mass autosyncs need to not flood the -changes list [12:18] but as you say, it needs shell anyway [12:18] Yeah. I think mass syncs will have to be AA-driven with a no-mail flag. [12:18] AA? [12:18] oh nm [12:18] bigjools: I think it's because if you put it through the queue then there's no way to suppress announcements [12:18] IIRC [12:18] there's always a way :) [12:18] wgrant: doesn't really help derivatives ... [12:19] Also because the hack wasn't dirty enough, they needed a bit more dirt. [12:19] cjwatson: I mean in the UI. [12:19] bigjools: well, I mean when we set it up [12:19] wgrant: ok [12:19] cjwatson: We don't want normal people to copy things in without announcement, do we? [12:19] oh, actually, there was a BIG reason they went through a separate hacked queue [12:19] syncs aren't GPG-signed [12:20] wgrant: no, indeed [12:21] we can handle that now [12:21] bigjools: Oh? [12:21] Perhaps we just implement NSS :-) [12:21] at the time, though, that's the basic reason why Daniel set it up with a different queue [12:21] StevenK: That's what this is... [12:22] I can't remember *why* but there's code that handles lack of signer [12:22] bigjools: Right, and that's what the backdoor policies use. :( [12:22] not those [12:22] Huh? [12:22] What else accepts uploads with no signer? :/ [12:22] there is actual code that doesn't fall over blindly accessing the signature [12:22] buildd? [12:22] Oh. [12:22] I guess. [12:23] like I said, I can't remember why it's there [12:23] buildd doesn't go anywhere near sigs. [12:23] But it does do uploads. [12:23] Oh, you mean -C buildd? [12:23] cjwatson: which queue? [12:24] -C sync, I presume. [12:24] That is SyncUploadPolicy. [12:24] yes. 
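The distinctions being teased apart here — unsigned syncs, announcement suppression, and (in the later part of the discussion) auto-approval versus frozen-series queueing — suggest a small policy hierarchy. The following is a purely hypothetical sketch of that factoring; class and attribute names are illustrative and do not match Launchpad's actual upload-policy API:

```python
# Hypothetical restructuring of the upload-policy distinctions the
# discussion identifies: signature requirements, announcements, and
# frozen-series handling.  Names are illustrative, not Launchpad's.
class UploadPolicy:
    requires_signature = True   # "insecurity": must the .changes be signed?
    sends_announcements = True  # mail the -changes list?

    def auto_approve(self, series_frozen):
        # A frozen series should always force manual review,
        # regardless of which concrete policy is in play.
        return not series_frozen

class SyncUploadPolicy(UploadPolicy):
    requires_signature = False  # syncs carry no GPG-signed .changes

class MassSyncPolicy(SyncUploadPolicy):
    sends_announcements = False # autosyncs must not flood -changes

policy = MassSyncPolicy()
print(policy.requires_signature, policy.sends_announcements)  # False False
print(policy.auto_approve(series_frozen=True))  # False: queue for review
```

The point of the sketch is cjwatson's observation: the real axes are "insecurity" and "announcements", so they belong as declarative flags on the policy rather than as behaviour buried in one arbitrary subclass.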
[12:24] ~lp_queue/sync-queue/ for those with cocoplum access. [12:25] the processing command is LPCONFIG=ftpmaster /srv/launchpad.net/codelines/current/scripts/process-upload.py -d ubuntu -C sync $DRYRUN $NOMAILS -v . [12:25] cjwatson: Of LP devs, that's ... me [12:25] how do the packages get to the local FS? [12:26] sync-source with evil... [12:26] They are grabbed from ftp.uk.debian.org and/or the librarian [12:26] ahhhh [12:26] More evil, that is. [12:26] right, I forgot that sync-source doesn't actually do any uploading does it? [12:26] no, it creates the files in good order to be copied into a queue [12:27] which queue? [12:27] see above! [12:27] I mean, LP queue. I presume they just hit accepted? [12:28] They go through whatever SyncUploadPolicy puts them into. Which is normally DONE. [12:28] auto-accepted then [12:28] ok I'll dig later [12:28] Indeed. [12:29] AbstractUploadPolicy.autoApprove returns True. [12:29] The FROZEN handling is in InsecureUploadPolicy. Odd. [12:29] this makes things a little more complicated than I'd anticipated [12:29] right now yes - but as I say they'd need to be treated slightly more like uploads once this is open to all developers. [12:29] That seems like the sort of thing that really should be elsewhere. [12:29] the real distinctions we actually need are insecurity and announcements. [12:29] (AFAICR.) [12:29] Insecurity? [12:29] Oh. [12:29] Right. [12:29] Yes. [12:29] (no signed .changes) [12:30] Announcements are going to be amusing. [12:30] for NSS that won't be an issue since there's no changes file [12:30] Since we don't have changes files at all for Debian uploads. [12:30] We'll have to parse the changelogs. [12:30] yeah, *that*'s the issue :) [12:30] sync-source does that already. just move the code around. [12:30] sync-source must burn. [12:30] BURN. [12:30] but that code is the very bit you need. burning it out of pique is a tad silly. :) [12:30] python-debian makes it easy these days.
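The changelog-parsing step being discussed — extracting every entry newer than the previously published version to build the announcement — can be sketched with a toy stdlib-only parser. Real code would use python-debian's Changelog class and proper Debian version comparison rather than this exact-string match; the sample changelog below is invented:

```python
import re

# Minimal sketch (stdlib only) of the extraction discussed above:
# collect every changelog entry newer than the previously published
# version, covering the case where a sync skips several versions.
CHANGELOG = """\
hello (2.10-1) unstable; urgency=low

  * New upstream release.

hello (2.9-2) unstable; urgency=low

  * Fix build on armel.

hello (2.9-1) unstable; urgency=low

  * Initial upload.
"""

HEADER = re.compile(r"^(\S+) \(([^)]+)\) ", re.M)

def entries_since(changelog, previous_version):
    """Return the changelog text covering versions after previous_version."""
    for match in HEADER.finditer(changelog):
        if match.group(2) == previous_version:
            return changelog[:match.start()].rstrip("\n")
    # previous_version absent (e.g. a brand-new sync): take everything.
    return changelog.rstrip("\n")

print(entries_since(CHANGELOG, "2.9-1"))
```

The hard part flagged in the discussion is not the parsing but deciding what "the old version" means from the publishing history, which this sketch simply takes as an argument.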
[12:31] sure - reuse the business logic though [12:31] Don't need sync-source's regexps. [12:31] It does things like this: [12:31] if previous_version is None: [12:31] previous_version = "9999:9999" [12:32] I'm reluctant to reuse logic that has been near that. [12:32] /o\ [12:32] I thought I proposed a patch at one point to fix that. [12:33] There was a fix at one point for < 0 versions. [12:33] But I don't recall anything about this bit. [12:33] sigh, trapped in a local branch [12:33] http://paste.ubuntu.com/602728/ [12:33] Well, that was easy. [12:33] (since < 0 versions don't work right now) [12:33] Yeah. [12:35] cjwatson: Can we get away with not attaching a fake changes file -- just compiling the usual email contents? [12:35] probably [12:35] I don't know of anything that parses the changes files. [12:35] oh, hm [12:35] I do [12:35] Can it use the API instead? :) [12:35] it does [12:35] we have an API-based thing that we use for compiling point release notes [12:35] Oh, right. [12:35] it grabs .changes files using the API [12:35] HMm. [12:35] so when we have this in the UI, one button "mass sync" will not send announcements, the other one "sync" will. I assume that's good enough. [12:35] That's slightly upsetting. [12:36] cjwatson: Can I see that code somewhere? [12:36] http://paste.ubuntu.com/602729/ is the most recent version of it I have to hand [12:36] Hopefully it doesn't want too much. [12:36] Ah, so it just wants bugs? [12:37] right now I think so, though I think in general having access to the equivalent of changes['changes'] would be good. [12:37] Yeah. Hmm. That's difficult. [12:38] Since the definition of that is a little hazy. [12:38] it's tricky in cases where the sync skips over several versions [12:38] Right. [12:38] We need to determine the old version. [12:38] Whatever that means... [12:38] publishing history? [12:39] Sure, we have the data, but we have to work out how to interpret it. And then how to store the changelog entries. 
[12:40] Unless we just provide a method on SPPH which finds the last published version and grabs the library files and works it out. [12:40] Not quick, but meh. [12:40] Since we don't really have anywhere to store it. [12:41] bigjools: announcements> yes, sounds good enough [12:41] (whether it meets your UI guidelines is another matter, but it meets our requirements) [12:41] Come now, you've been around Launchpad for long enough to know how strict and excellent our UI guidelines are.... [12:41] cjwatson: I'll work that out later but it was to clarify the difference for me, which you did [12:42] it's certainly better than archive admins having to remember which ones need a special flag passed [12:42] Or forgetting to flush normal sync queues before someone does a no-mail run... [12:43] yeah, though the person who does such a run is supposed to check [12:43] but it's a bit error-prone [12:55] allenap: O hai -- has my diff made you blind yet? :-/ [12:55] wgrant: I think the queue stuff will be easy-ish. We can refactor the nascentupload checks so they can be re-used. If the package is not auto-accepted, we make the appropriate PU, otherwise copy as we do now. Does that sound sane? [12:55] bigjools: We also need to log the copies. [12:56] log? [12:56] Who did them, and where from. [12:56] At the moment we have no idea. [12:56] joy === henninge-lunch is now known as henninge [12:56] I had considered just adding a creator to [SB]PPH. I still think that might be reasonable. [12:56] But it sounds relevant to the queue stuff. [12:56] well we're not losing anything by not doing it if it's not already done, so it's not critical [12:57] cjwatson: Do you want to know who copied things? I presume so. [12:57] It would seem mildly insane to not. [13:01] wgrant: at the moment we record who requested the sync by pretending they're the uploader [13:02] Right, I know that, but we can't do that any more. 
[13:02] that kind of loses information a bit and it would be better to have that in a separate slot [13:02] Yeah. [13:02] yes, definitely do want to know, for audit trail [13:02] But in the current copy model there is no record (besides appserver logs) [13:02] Right. [13:02] bigjools: ^^ [13:02] well, they are effectively the uploader [13:02] and it should go out in mail announcements of manual syncs too [13:02] ok [13:02] bigjools: in the past, Debian people have justifiably got upset at what looks like an Ubuntu person taking credit for their work [13:03] cjwatson: But we also have them getting upset about the opposite :/ [13:03] so we should have both names [13:03] "I didn't upload that" [13:03] cjwatson: don't they see the different users involved? uploader vs maintainer vs creator? [13:03] uploader/creator are one field in LP. [13:03] bigjools: by overwriting uploader, we lose one of those [13:03] signer is different, but that's not used for syncs. [13:03] not exactly [13:04] and signer may be the sponsor in Debian anyway, not the person who actually did the work [13:04] wgrant: I'm much happier to defend the case where we give too much credit [13:04] sourcepackagerelease.creator [13:04] bigjools: That's Changed-By, right? [13:04] I thought it was the dsc signer? can't remember! [13:04] as for recording the Ubuntu side of it: we do need to do that because it's used as part of people's applications for Ubuntu upload rights [13:04] No. [13:05] bigjools: SPR.dscsigningkey [13:05] Huh. [13:05] I would like to maintain uploaded-by as-is [13:05] Why dscsigningkey and not changesigningkey? :/ [13:05] That's pretty bad. [13:05] bigjools: As-is? What do you mean? [13:05] if something's getting overwritten that's a separate bug [13:05] bigjools: We have no control over that. gina sets it. [13:06] I'm talking about syncs [13:06] Syncs will stop existing. [13:06] gina should not set uploader [13:06] Hm? [13:06] What do you mean by uploader? 
[13:06] of course they won't, we're syncing here [13:07] We normally use SPR.creator as uploader. [13:07] instead of sync-source [13:07] And that is Changed-By. [13:07] that's crack [13:07] bigjools: Why? [13:07] And even so, we can't use anything on SPR to record the requestor. [13:07] the uploader is whoever signed the changes file [13:07] or in this case, did the sync [13:07] bigjools: Right, that's (mostly) dscsigningkey. [13:07] um, ish [13:07] not really [13:08] At present it's all we have. [13:08] right, and it needs cleaning [13:08] Where do you propose to store the sync requester? [13:08] It can't be on SPR. [13:08] dunno [13:08] not got that far yet :) [13:09] I am initially inclined to store the uploader/requestor in some generic version of PackageUpload, which works for copies too and goes through the queue. [13:09] mebbe [13:10] why can't it be on spr? [13:10] ah I see [13:10] What happens if I sync it to two places? [13:10] yeah [13:10] so, maybe on spph [13:11] not sure about creating PUs all the time [13:11] We need to think and see if we can unify this and PackageUpload and all the checks in archiveuploader that duplicate packagecopier. [13:11] yes [13:11] Since it's currently stupid. [13:11] yes [13:11] Really duplicated and really stupid. [13:11] yes [13:11] one step at a time [13:11] StevenK: I was talking about other things this morning, sync buttons, PackageCopyJob, etc, so I haven't got to the diff yet, but I'm clear now. [13:12] Right, but some of these steps are DD-critical, so it is probably a good idea to go some way to at least identifying that they exist :) [13:15] wgrant: do you have a list/idea of which checks are duped in packagecopier and archiveuploader? [13:16] bigjools: Not really... but everything in packagecopier is either buggily duplicated or buggily omitted from archiveuploader. [13:17] It is *entirely* checks that archiveuploader needs too. [13:17] sorta, but yeah [13:18] I don't really know how we can unify them.
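The "why can't it be on SPR?" point above — one release can be synced to two places, so the requester belongs on the publication — can be made concrete with a toy model. Field names here are illustrative, not Launchpad's actual schema:

```python
from dataclasses import dataclass

# Toy model of the point made above: one SourcePackageRelease (SPR) can
# be published in several places, so the person who requested each copy
# must live on the publication (SPPH-like record), not on the shared SPR.
@dataclass(frozen=True)
class SourcePackageRelease:
    name: str
    version: str
    creator: str          # Debian Changed-By: original credit stays intact

@dataclass(frozen=True)
class Publication:
    spr: SourcePackageRelease
    archive: str
    requester: str        # who asked for this particular copy/sync

spr = SourcePackageRelease("hello", "2.10-1", "debian-dev@example.org")
pubs = [
    Publication(spr, "ubuntu/primary", "ubuntu-dev-a"),
    Publication(spr, "ppa/some-team", "ubuntu-dev-b"),
]
# Same release, two copies, two distinct audit trails:
print({p.archive: p.requester for p in pubs})
print(spr.creator)  # the Debian uploader is never overwritten
```

Keeping the requester per-publication satisfies both complaints in the discussion: Debian contributors keep credit for their work, and Ubuntu developers get the audit trail needed for upload-rights applications.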
[13:18] archiveuploader deals with one at a time [13:18] Unless we make the internal copy logic take a set of SPRs and BPRs and their overrides. [13:18] I don't either, we'll just have to look at it and work it out [13:18] And then the copier can extract those from the source archive, and archiveuploader can use the ones it's just created. [13:19] The post-refactor underlying logic mostly does that, anyway. [13:19] It just needs to be dragged up a bit further. [13:19] it needs a coding sprint [13:19] Yes [13:19] the copying stuff is not going to get fixed without that, I don't think :/ [13:20] It needs thought and experimentation and throwing at people. [13:23] * bigjools eats [13:33] adeuring: thanks, that helped! I also had to use a different layer. [13:34] henninge: out of curiosity: Do you know what type this object was you got for the string "português"? [13:35] adeuring: "object" ... [13:35] generic [13:35] nothing === al-maisan is now known as almaisan-away [13:35] indeed [13:35] ... [13:35] adeuring: I also found that it was using the widgets from zope.app.dav not zope.add.form. [13:36] s/add/app/ [13:36] interesting [13:36] I don't know if the wrong layer influenced that but I could imagine that. [13:44] clear [13:45] gmb: free to review a simple peephole cleanup branch? https://code.launchpad.net/~jtv/launchpad/pre-747546/+merge/59771 [13:46] jtv: Sure. [13:46] thanks [13:49] allenap: my branch for the upgrade button is here as a WIP... https://code.launchpad.net/~jtv/launchpad/bug-747546/+merge/59765 [13:49] Cool, thanks jtv. [13:49] A lot of that diff is actually just cleanup, which is currently being reviewed as a separate branch to lighten the diff. [13:49] jtv: r=me [13:49] thanks! [13:50] allenap, correction: the cleanup branch _has been_ reviewed. :) [13:50] It's set as a prerequisite for the main branch. [13:51] (So actually I'm not even sure why its changes show up in the diff) [13:52] jtv: Mmm, suspicious :) [14:00] Hi deryck!
;) [14:00] Hi hi [14:01] henninge, stand up? [14:02] deryck: I need a minute alone with my laptop ... :( [14:02] henninge, tmi [14:02] henninge, join us when you finish :) [14:02] sure [14:10] allenap: the big remaining issue with my branch is that the few people who actually know soyuz stuff get into long, incomprehensible discussions when you ask "what label do you want on the button?" :-) [14:11] jtv: lol! [14:12] I'm serious! [14:17] gmb, ping [14:17] deryck: Hi === almaisan-away is now known as al-maisan [14:35] deryck: This page has directions to enable the backlight buttons on your computer: https://help.ubuntu.com/community/MacBook5-1/Natty [14:35] sinzui, oh happy day! [14:35] sinzui, it worked for you? [14:35] yes [14:37] lovely. I shall disappear shortly as I relog. ;) :) [14:37] I did the same :) [14:40] oh happy not-overly-bright-or-overly-dim day! [14:40] sinzui, many thanks for the pointer! [14:40] heh [14:40] * deryck turns the brightness up and down repeatedly, just 'cause he can [14:41] deryck: I believe every feature of older macbooks are now working without hacks [14:42] sinzui, yup, that was my last issue. [14:42] Now I need to avoid buying a new computer for a few years [14:43] sinzui: in a few years you won't be able to install any software on a macbook anyway, so you'll have to buy a different computer *anyway*. [14:43] sooner or later usb ports will be deemed as too messy for the unibody. :-P [14:44] jcsackett: I doubt that. I do not use the macos except as a way to buy music from time to time. I buy from U1 and amazon UK too because my indie tastes are poorly represented in all music shops [14:45] jcsackett: But then how will fanbois hook up their iPhones and iPods! [14:45] jcsackett: I mean, oh noes! [14:46] sinzui: ah, so your next computer may be a thinkpad or something? [14:46] * jcsackett is considering his next upgrade [14:47] StevenK: ipods will be updated via a proprietary protocol that is basically tarted up bluetooth.
they will call it iSync, and it will return "childlike wonder" to wireless devices. [14:47] * jcsackett stops his cynical futurism. [14:48] Where did I put that hangman's noose ... [14:48] ? [14:48] jcsackett: iSync as a 'feature' makes me want to inflict harm [14:48] * jcsackett laughs. [14:48] i joke, but apple did just by the iCloud domain. [14:49] s/by/buy/ [14:49] StevenK: hmm, I think that tarted up protocol will be based on what the hacker community created to enable mounting the phone over the net. [14:49] allenap: Thank you for the review. No fair not +1'ing my shell output. [14:49] StevenK: Okay, okay, +1 on shell dump. [14:49] That sounds like Apple. What Free Software project can we take, bend to our will and never release as source? [14:49] allenap: :-) [14:51] StevenK: The whole diff was actually repeated twice. It was a nice surprise to find that I was done once I was only half way through. [14:52] allenap: Oh, drat. Sorry about that. In which case, there is *one* change between the two diffs, can you find it? :-D [14:52] StevenK: Yeah, there was a change to a docstring iirc. [14:53] Right [14:54] allenap: Nicely done. At least you didn't mention the fix in the review mail :-) === Ursinha-afk is now known as Ursinha [15:10] gmb: could you please review this mp: https://code.launchpad.net/~adeuring/launchpad/bug-768443/+merge/59782 ? [15:14] mrevell, ping [15:15] Hey there deryck. How are things in Alabama? [15:15] gmb: Hi! Are you free for a review? [15:15] https://code.launchpad.net/~henninge/launchpad/devel-644872-unicode-error-in-search-text/+merge/59783 [15:15] mrevell, getting better here. still lots of damage to clean up, though. [15:16] adeuring: Sure. [15:16] thanks! [15:16] henninge: I'll take yours after adeuring's [15:16] gmb: thank you ;) [15:22] adeuring: r=me [15:22] gmb: coool, thanks! [15:27] henninge: r=me [15:27] gmb: thank you ;) [15:29] gmb: Hi!
Could you please review this MP: https://code.launchpad.net/~rvb/launchpad/sync-to-updates/+merge/59653 ? [15:29] rvba: Sure [15:29] thanks. [15:33] rvba: Approved. [15:33] gmb: thanks a lot. [15:33] np === al-maisan is now known as almaisan-away [15:48] abentley: can i beg your assistance on some yui/test stuff? [15:48] jcsackett: Sure. [15:49] cool. so, i am trying to just get a stubbed out suite with two tests that fail. right now i have http://bazaar.launchpad.net/~jcsackett/launchpad/spam-button-ui/view/head:/lib/lp/answers/javascript/tests/test_question_spam.html [15:49] and its js file is http://bazaar.launchpad.net/~jcsackett/launchpad/spam-button-ui/view/head:/lib/lp/answers/javascript/tests/test_question_spam.js [15:49] the js file being tested is stubbed out here: http://bazaar.launchpad.net/~jcsackett/launchpad/spam-button-ui/view/head:/lib/lp/answers/javascript/question_spam.js [15:50] i am pretty sure i have failed in setting something up, as when i open up the html i expect to be told of two failures (for the Assert.isTrue(false)) stuff, but i get nothing. [15:50] jcsackett: For me, that's usually failure to include a JS file, or perhaps a wrong path. [15:51] Project db-devel build #512: FAILURE in 5 hr 39 min: https://lpci.wedontsleep.org/job/db-devel/512/ [15:51] abentley: i thought that, but i've double checked the paths and they all look correct. [15:52] abentley: in the html harness, i link in test_question_spam.js and ../question_spam.js; as the test_*js file is in the same dir as html and the other file is one above, that should be right, shouldn't it? [15:52] Yes, that does look right. [15:54] jcsackett: Your paths to yui etc don't look like mine at all. [15:54] jcsackett: e.g. I have ../../../../canonical/launchpad/icing/yui/yui/yui.js [15:55] abentley: ah, that could be. i didn't double check those b/c i copied them out of the wiki page. [15:55] jcsackett: I know the wiki was wrong, but I thought we'd fixed it.
[15:55] jcsackett: Anyhow, try copying them out of lp/translations/javascript/tests/test_sourcepackage_sharing_details.html [15:56] abentley: okay, thanks. [16:04] Is 404 to an empty PPA really the correct response? When you create a PPA, it should exist at the URL, even if it's empty IMO. Maybe I'm about to split blasphemy but wouldn't a 204 No Content be an improvement? [16:04] indexes are deliberately not published to save space/inodes [16:05] abentley: i have updated the paths, still nothing. :-/ [16:05] abentley: do you have any knowledge regarding hooking in namespace stuff? perhaps i've screwed up in getting the stub of a js module hooked in... [16:05] bigjools, but changing the HTTP code to a 204 would distinguish it from a "you mistyped the URL" error [16:05] jcsackett: sorry, OTP [16:05] timrc: how do we know it wasn't a mistype? [16:06] it's just a plain Apache server === matsubara is now known as matsubara-lunch [16:10] bigjools, launchpad should be smart enough to know that the ppa was created, but is empty and then return a code better reflecting that.. based on launchpad's implementation 404 is probably technically correct, but it's not intuitively correct IMO [16:10] timrc: fair enough. I think it's quite a lot of work to do for a small gain though. [16:11] perhaps :) [16:12] patches welcome :D [16:20] jcsackett: I did have to hook up a namespace for the work I did, but I just followed an example I found. [16:21] jcsackett: Are you using firebug or similar? That sort of problem will usually generate error messages. [16:21] abentley: no, firebug isn't catching anything either, which is weird. [16:21] abentley: does our test stuff work with firefox4? [16:22] jcsackett: there were some firefox4 compatibility issues. deryck, could you remind us what those were? [16:23] * deryck looks at scrollback [16:23] deryck: with javascript testing in general? [16:24] abentley, jcsackett -- ah. so yui test has no ff 4 issues.
windmill has issues with the magic comment bugs that only become enabled when you type text into them. [16:25] abentley, jcsackett -- and then getting a profile windmill can use with ff 4 is a bit tricky [16:25] but running yui test in the browser should work fine [16:25] deryck: thanks. [16:25] ok, so that eliminates that as a source of the problem. thanks, deryck. [16:25] np! [16:33] ok. i have managed to get an error. yuitest is not loaded. except that i can see the .use('test') bit. hm. [16:44] success! also, i cannot read. [16:47] jcsackett: yay! [16:48] abentley: so, i was doing use('yuitest'), not use('test'). [16:48] jcsackett: doh! [16:49] when i went to doublecheck, i was actually looking at your file opened as an example, not mine, and saw 'test'. :-P [16:49] thus, i cannot read. :-) [16:49] thanks for the help, abentley. :-) [16:49] jcsackett: you're welcome. [16:50] jml: do you have a minute to chime in on bug 772763? in particular the importance of "low" [16:50] <_mup_> Bug #772763: Unmuting a bug's notifications should restore your previous direct subscription < https://launchpad.net/bugs/772763 > [16:50] benji: not until tomorrow afternoon. sorry. [16:51] jml: no worries; I'll ping you again then [16:51] benji: I'm already late for an appointment, and am flying out to Budapest tomorrow morning. will make sure that bug doesn't slip through the cracks. [16:51] benji: ta [16:51] * jml gone === Ursinha is now known as Ursinha-afk === matsubara-lunch is now known as matsubara === Ursinha-afk is now known as Ursinha === Ursinha is now known as Ursinha-lunch === gmb changed the topic of #launchpad-dev to: PQM in RC mode | https://dev.launchpad.net/ | On call reviewer: - | https://code.launchpad.net/launchpad-project/+activereviews [17:46] Greetings! I am having a LaunchPad PPA issue and I'm not sure who to direct it to. I have many builds that are marked as 'Needs building', and they have been stuck in the queue for about 12 hours now. Any thoughts?
[17:47] JonOomph: there is currently a very large backlog of builds, the queue is very long [17:47] see https://launchpad.net/builders [17:48] we're currently missing a number of builders as they are temporarily used during the Ubuntu release [17:49] Ahhh, that makes perfect sense [17:50] bac: what's the right way to give you feedback regarding better bug notifications? [17:51] Having a few days of delay in the PPA build system makes releasing a new version of OpenShot a bit tricky, but I'm patient and can wait. =) === Ursinha-lunch is now known as Ursinha === almaisan-away is now known as al-maisan === al-maisan is now known as almaisan-away [19:50] jcsackett: do you have time to mumble? [19:50] sinzui: sure, just one moment. [20:10] sinzui: will you have some time this afternoon to do a small UI review for me? (https://code.edge.launchpad.net/~benji/launchpad/click-to-close-boxes/+merge/59818) [20:10] benji: I can start it now [20:11] sinzui: I am thanking you now. [20:41] rockstar: hi [20:41] moin [20:42] g'day lifeless [20:43] jelmer, hello sir [20:44] rockstar: What's the policy on landing changes in lp:tarmac? [20:48] rockstar: related, is it intentional that lp:tarmac is owned by an open team? [20:49] argh, it isn't.. nevermind [21:31] deryck: there's no OCR. Could you please review https://code.launchpad.net/~abentley/launchpad/allow-noop-claims/+merge/59795 ? [21:31] abentley: sure. [21:31] deryck: thanks. [21:32] sinzui: Thanks for the review. Do you want the "Hide \2715" bit in parentheses to distinguish it from the text of the message? [21:32] No, that would introduce difference. We want to be the same [21:32] (I'm also investigating using the sprite.) [21:32] k [21:34] abentley: r=me [21:34] deryck: thanks. [21:42] benji: hi [21:42] merge proposals don't do attachments [21:42] lifeless: I know that now. :) [21:42] benji: where else could one see these screenshots? [21:42] ah, I see, nvm [21:43] I edited the MP to point to...
[21:43] right [21:43] so, UI question (perhaps curtis has asked this) [21:43] we have an X icon in the top right of other things that are closable [21:44] lifeless: they do too! [21:44] abentley: oh ? [21:44] lifeless: attach a patch and we'll even colourize it! [21:45] abentley: supporting attachments is a superset of supporting attachments-that-are-patches [21:45] lifeless: right; in the review he suggested emulating the "Hide X" target on the new private bug ribbon [21:45] benji: ok cool; I can butt out [21:45] benji: separately, I declined the team-join for lp to alpha testers [21:46] lifeless: you didn't say "launchpad doesn't support attachments of particular kinds" [21:46] I think a feature flag is better and easier to clean up [21:46] abentley: my problem with that MP was that I submitted it after forgetting the attachments and it took less time to find the edit link than to figure out how to add attachments after the fact [21:46] abentley: you're splitting hairs I think; there's a very large set of more refined meanings I could have meant [21:47] abentley: mps don't support attachments *like bugs support attachments* [21:47] lifeless: i.e., just change the flag to predicate on team:launchpad instead of team:malone-alpha? if so, that sounds good to me. [21:47] benji: yeah, or add a row [21:47] benji: depending on whether you anticipate having non-lp folk participate in the alpha stage [21:47] sounds good; I'll get that done tomorrow [21:47] I would expect something like adding launchpad-beta-users as a rule when it comes out of alpha [21:48] lifeless: It looked to me like you were saying "merge proposals don't allow attachments" as an absolute.
[21:48] lifeless: I wasn't aware of launchpad-beta-users, sounds like the ticket [21:49] hmm, spelling error, one sec [21:50] benji: https://help.launchpad.net/GetInvolved/BetaTesting [21:50] launchpad-beta-testers [21:50] is the team [21:50] 2K early adopters [21:51] cool === flacoste changed the topic of #launchpad-dev to: Merges to devel are open again | https://dev.launchpad.net/ | On call reviewer: - | https://code.launchpad.net/launchpad-project/+activereviews [21:56] sinzui: hmm, the privacy ribbon's hide affordance is underlined on hover and changes the mouse cursor. I'll match that behavior. [22:11] benji: oh, btw - you have 'edge' in your browser bookmarks :> [22:19] heh, so I do; fixed now === matsubara is now known as matsubara-afk [22:27] hmm [22:27] we have lots of css warnings in the ff4 console === salgado is now known as salgado-afk === Ursinha is now known as Ursinha-afk === Ursinha-afk is now known as Ursinha [23:02] lifeless: pocket-lint also complains about CSS. There is no tool that is aware of supported and proprietary rules to provide a competent report of broken CSS. === Ursinha is now known as Ursinha-afk [23:25] sinzui: hi [23:25] hi lifeless [23:25] sinzui: I'm thinking of closing https://bugs.launchpad.net/launchpad/+bug/741092 [23:25] <_mup_> Bug #741092: Archive:+subscriptions times out with many subscribers < https://launchpad.net/bugs/741092 > [23:25] because it seems ok now [23:25] IMBW, what do you think? [23:27] I honestly do not know. When was the last time we saw an issue? [23:27] I looked back a few days, couldn't see one [23:28] it may be very sporadic though [23:29] I favour closing it, though I also like being able to say this issue has not been seen in 30 days [23:31] better matching in oops-tools would be nice === Ursinha-afk is now known as Ursinha [23:31] Any follow-on issue will not need the SQL optimisation [23:31] lifeless: actually, I still wonder if qastaging was a bad env to test on.
[23:31] so I think I'll close it [23:32] +1 I believe there are some performance issues that can only be experienced in production [23:48] losa ping [23:48] can you please change a feature flag on qastaging: [23:48] malone.bugmessage_owner default 0 on [23:48] wgrant: I cannot attend the meeting in 15 minutes. Can we talk later? I want to understand the effort needed to address bug 747558
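The flag request above follows the rule shape discussed earlier in the log: a flag name, a scope, a priority, and a value, with "predicate on team:launchpad instead of team:malone-alpha" meaning changing which scope a rule matches. The sketch below is an illustrative model of that lookup, not Launchpad's actual implementation; the scope names mirror the chat, and the `bugs.private_ribbon` flag is a hypothetical example invented here.

```javascript
// Simplified model of scoped feature-flag rules: for a given flag, the
// highest-priority rule whose scope applies to the requester wins.
// Illustrative sketch only; 'bugs.private_ribbon' is a made-up flag.
const rules = [
  // flag, scope, priority, value — shape of "malone.bugmessage_owner default 0 on"
  { flag: 'malone.bugmessage_owner', scope: 'default', priority: 0, value: 'on' },
  { flag: 'bugs.private_ribbon', scope: 'team:launchpad', priority: 10, value: 'on' },
];

function getFlag(flag, activeScopes) {
  const candidates = rules
    .filter((r) => r.flag === flag && activeScopes.includes(r.scope))
    .sort((a, b) => b.priority - a.priority);
  return candidates.length ? candidates[0].value : null;
}

// A 'default' rule applies to everyone; team rules only apply when the
// requester is in that team's scope.
console.log(getFlag('malone.bugmessage_owner', ['default']));
console.log(getFlag('bugs.private_ribbon', ['default', 'team:launchpad']));
```

Under this model, widening a flag from alpha testers to all Launchpad users is just editing one rule's scope field (or adding a second rule, as benji and lifeless discuss), which is why a flag is "easier to clean up" than branching code.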