[01:31] lifeless: Well, he was told to try it a few hundred times, IIRC. [01:31] terrible advice [01:31] Probably, yes. [01:32] they're all the same, if he's in a rush to merge we could update those translations by hand. [01:32] I believe there's a question open. [01:59] === Top 10 Time Out Counts by Page ID === [01:59] Hard / Soft Page ID [01:59] 3555 / 0 LoginToken:+accountmerge [01:59] 298 / 55 Person:+commentedbugs [01:59] 231 / 6781 Archive:+index [01:59] 88 / 307 BugTask:+index [01:59] 13 / 10 Person:+bugs [01:59] 12 / 101 ProjectGroupSet:CollectionResource:#project_groups [01:59] 11 / 260 Distribution:+bugtarget-portlet-bugfilters-stats [01:59] 11 / 29 DistroSeries:+queue [01:59] 8 / 1 BinaryPackageBuild:+retry [01:59] 7 / 40 Milestone:+index [02:53] wgrant: O hai? [03:06] StevenK: Hi. Sort of here. [03:06] Exam tomorrow, but distractions are welcome :P [03:10] wgrant: I'm looking at modelling SourcePackageAncestry -- do you think per-distribution or per-distroseries is the way to go? [03:11] Hmm. [03:11] This needs a lot of thought. [03:12] I've been thinking about it over the weekend, and I'm currently on the distribution side [03:12] I think so. [03:12] But what are we modelling? [03:13] Just the versions present for an SPN in a distribution [03:13] Including relationships? [03:15] For what purpose? [03:15] Determining the shared ancestor? [03:16] wgrant: They'll be stored in order, with the newest first, for determing shared ancestors, yes [03:16] well, in-order is a lie with postgresql [03:16] indexed perhaps [03:16] It will stored as INTEGER[], which IS ordered [03:17] Er, TEXT[] [03:17] Sorry, brainfart [03:17] an array in a single row? [03:17] that seems risky [03:17] lifeless: It does? [03:18] Oh, I see. [03:18] That's a bit special. [03:18] StevenK: yes, if you have more than one mutator [03:18] I'd like a big brainstorm with everyone before we do anything, I think. [03:18] wgrant: Special bad, special good or special special? [03:19] I've spent about 15 minutes on this branch so far, so I'm perfectly willing to throw it away and write it off as a learning exercise [03:20] Arrays for this sort of thing in a relational database smell bad. [03:20] Particularly when we model versions already (as SPRs), and have ancestry in bzr. [03:20] what would be in each cell in the array? [03:20] lifeless: A version number [03:21] I think we probably need to store ancestry per (archive, distroseries, spr), but I may be wrong. [03:22] Per spr seems like a lot of duplicate information to me [03:24] Why? [03:25] wgrant: SPR has an version number stored with it, so the 'foo 1.0-1' spr will have ancestry of '1.0-1', '0.9-1', '0.8-1'; and 'foo 1.1-1' will have '1.1-1', '1.0-1', '0.9-1', '0.8-1' [03:26] StevenK: Hm? 1.0-1 will reference 0.9-1 [03:26] 0.9-1 will reference 0.8-1 [03:27] So we have a map like this: [03:27] (primary, hardy, 1.0-1) -> 0.9-1 [03:27] (primary, hardy, 0.9-1) -> 0.8-1 [03:27] That is the model that seems intuitive to me. [03:27] We don't store version strings -- we store SPR references. [03:27] Which is also quite hard to traverse :-) [03:28] Not if we drop the distroseries. [03:28] Which we may be able to do. [03:28] StevenK: why array's ? 
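To make the two shapes being weighed in this exchange concrete, here is a rough sketch of each in PostgreSQL DDL. This is illustrative only: apart from the entities named in the conversation (distribution, archive, distroseries, SourcePackageRelease, version strings), every table and column name below is an assumption, not Launchpad's actual schema, and only one of the two designs would exist in practice.

    -- Option A (the TEXT[] idea): one row per (distribution, source package name),
    -- with the version history denormalised into an ordered array, newest first.
    CREATE TABLE SourcePackageAncestryArray (
        distribution      integer NOT NULL,   -- would reference Distribution
        sourcepackagename integer NOT NULL,   -- would reference SourcePackageName
        versions          text[]  NOT NULL,   -- e.g. '{1.1-1,1.0-1,0.9-1,0.8-1}'
        PRIMARY KEY (distribution, sourcepackagename)
    );

    -- Option B (the map wgrant describes, e.g. (primary, hardy, 1.0-1) -> 0.9-1):
    -- one row per publication, each pointing at its immediate ancestor SPR.
    CREATE TABLE SourcePackageAncestryLink (
        archive              integer NOT NULL,  -- would reference Archive
        distroseries         integer NOT NULL,  -- would reference DistroSeries
        sourcepackagerelease integer NOT NULL,  -- would reference SourcePackageRelease
        ancestor             integer,           -- previous SPR; NULL for the first upload
        PRIMARY KEY (archive, distroseries, sourcepackagerelease)
    );

Option B stores SPR references rather than version strings, which is the direction wgrant argues for below.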
[03:29] lifeless: Because they can be grabbed from the db and pushed into a set, and then it's cheap to the find common ancestors [03:29] you should probably read all of http://www.postgresql.org/docs/8.4/static/arrays.html [03:29] I already have [03:29] StevenK: red flag alert: in postgresql whenever you say 'grab it all from the db', you're almost certainly doing it wrong. [03:30] StevenK: you can pull the versions from related spr's easily too. [03:30] that said, I'm not against denormalising here - nor am I for it as such. [03:30] Have you spoken with stub ? [03:30] Not yet [03:30] [by you, I mean the folk doing this design] [03:30] However, wgrant is starting to convince me [03:32] Yay. [03:32] Youknowimright. [03:32] wgrant: So, your idea is a 'ancestry' column, which links to the previous SPR? And if it's NULL, it's the first one seen? [03:32] (™) [03:32] StevenK: We can't do it on SPR itself. [03:32] StevenK: Since SPRs are shared between distroseries. [03:32] Perhaps I should argue more, just to deflate wgrant a bit [03:32] s/distroseries/distros/ [03:33] s/distros/archives/ [03:33] something to bear in mind [03:33] the data model currently blows up on duplicate uploads [03:33] we should fix things going boom when that happens. Would that influence this work ? [03:33] This is a feature [03:34] StevenK: its very much not. [03:34] StevenK: it causes two single points of failure, manual recovery, makes person merge of ppas hard. [03:34] StevenK: Not A Feature. [03:34] I would object to any person merge with PPAs. [03:35] There's no need for that in reasonable merge use-cases. [03:35] wgrant: on principle or technicalities [03:35] Except maybe team merges, I guess. [03:35] But then rename. [03:35] And that would screw up keys and URLs. [03:35] so block it. [03:35] block it all. [03:35] wgrant: hard to do does not imply not worth doing. [03:35] lifeless: Is this is upload to cocoplum and germanium at the same time bug? [03:35] StevenK: yes [03:36] StevenK: which stops us trivially load balancing the two uploaders. [03:36] StevenK: which means we can't deploy fixes to those servers but once a month. [03:36] yeah, it sort of fucks up your PPA if you exploit that :P [03:37] lifeless: Will this work impact that? No, not I can see [03:37] I don't know how to solve that. [03:37] It could be made to impact it. [03:37] Indeed [03:37] If we stored string versions. [03:37] That is interesting indeed. [03:37] lifeless: You do enjoy banging that SPOF drum :-P [03:38] good, that he does. [03:38] I think it can be solved, by changing the upload processor on both cocobanana and germanium to be a daemon process that scans the upload directories, and always talks to the master store [03:39] StevenK: Doesn't help. [03:39] We don't have sufficient constraints to forbid duplicate uploads. [03:39] That can be solved [03:39] If our model didn't suck, UNIQUE constraints would do it. [03:39] But it is a thirteen-table join, like you say [03:39] But I don't think we can model it sanely like that. [03:39] Right. [03:40] Glancing at https://lpstats.canonical.com/graphs/CodehostingPerformance/ I wonder happened to codehosting 2 days ago? [03:40] wgrant: Maybe the answer currently is "Yes, we can do it, but it will blow, the database will suck, and you'll hate us" ? [03:40] bzr downgrade happened around then, maybe? [03:41] wgrant: Or perhaps we can expect a branch from you on Wednesday that fixes all of our modelling problems? *eg* [03:41] Oh, huh, for the SFTP issue? [03:42] spiv: I believe so. 
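On the "hard to traverse" worry raised above: with the link-per-row design, walking one package's history is a recursive query rather than a single array fetch. A minimal sketch, assuming the hypothetical SourcePackageAncestryLink table from the earlier sketch and placeholder ids; recursive CTEs exist in the PostgreSQL 8.4 the participants reference.

    -- Walk the ancestry chain for SPR 12345 in archive 1 / distroseries 2,
    -- newest first.  The ids are placeholders.
    WITH RECURSIVE chain AS (
        SELECT sourcepackagerelease, ancestor, 1 AS depth
          FROM SourcePackageAncestryLink
         WHERE archive = 1 AND distroseries = 2 AND sourcepackagerelease = 12345
        UNION ALL
        SELECT link.sourcepackagerelease, link.ancestor, chain.depth + 1
          FROM SourcePackageAncestryLink link
          JOIN chain ON link.sourcepackagerelease = chain.ancestor
                    AND link.archive = 1 AND link.distroseries = 2
    )
    SELECT sourcepackagerelease FROM chain ORDER BY depth;

Finding a shared ancestor of two versions is then the first SPR common to both chains, rather than an intersection of arrays pulled out of the database.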
There's an incident report. [03:42] wgrant: On a serious note, do you want me to look at your script on dogfood again? [03:42] StevenK: I don't start for three weeks :P [03:42] StevenK: enjoy it? no. [03:43] StevenK: enjoy the *benefits* of banging it?hell yes. [03:43] StevenK: That would be nice. But I don't know if DF will survive. [03:43] wgrant: There's no way to make the first query be, err, nicer? [03:43] I don't think so. [03:43] But maybe. [03:44] Remember, though, that the publisher takes hours on DF, but just a few minutes on prod. [03:44] So DF is special :P [03:44] Bad special. [03:45] wgrant: So kick the script off, and wait a few hours, and hide if IS tell me that mawson has exploded and set a few racks and Nafallo on fire? [03:47] Hopefully he'll be in the other DC at the time. [03:47] actually, the benefits of the the spofs being fixed. [03:48] wgrant: Remind me of the script name? [03:48] StevenK: populate-spr-changelog.py [03:49] Hah, df has moved on, and canonical.launchpad.database has died, so the script dies [03:49] * wgrant disappears for a while. [03:49] Hah. [03:50] ARGH, everything this script wants is from canonical.launchpad [03:50] * StevenK shakes his fist at wgrant [04:29] StevenK: Heh, sorry. I am outdated. [04:42] wgrant: It's been running for ~ 40 minutes, since I fixed the imports [04:43] "its [04:43] my opinion that fixing our data mapping story is the highest leverage [04:43] item that requires TA input to achieve pervasive implementation and [04:43] deployment." [04:43] * wgrant cries. [04:43] wgrant: why so sad? [04:44] lifeless: Highest leverage for achieving pervasive implementation. [04:44] wgrant: bingo? [04:44] wgrant: though you've mangled the sentence ther [04:44] StevenK: Has the query finished? [04:45] 2010-11-22 04:01:48 DEBUG3 new transaction [04:45] StevenK: Has London been evacuated? [04:45] Nope [04:45] lol [04:45] Not to my knowledge [04:45] It is tempting to try that query somewhere else. [04:45] And see if it sucks less. [04:46] Otherwise I guess I'll poke around once I have DF access eventually. [05:00] wgrant: so I'm still confused [05:01] wgrant: are you crying because it was jingoistic, or because it wasn't $othertask. [05:02] Maybe because you missed one crucial square of his bingo card? ;) [05:03] Haha [05:08] * spiv guesses the it was the "synergy" square. [05:09] wgrant: That query completed in 68 minutes [06:13] wgrant: http://pastebin.ubuntu.com/535084/ [06:13] wgrant: Short answer, I killed it [06:48] StevenK: Yay :/ [06:48] lifeless: The former. [06:49] wgrant: I've not seen an error like that, but it looks like it updated at least two SPRs :-) [06:49] StevenK: Indeed. [06:49] But I wonder if the query performs less atrociously on >mawson. :/ [06:50] wgrant: I suspect we can get a RO counting query going, if you wish [06:55] ugh. launchpad_qastaging stop == make build. yay. not. [06:55] Hah. Fun. [06:56] stop should not be doing things like rm'ing directories, and moving stuff around. No. it has just one thing it needs to do and one thing only. terminate the running process. grumble grumble grumble. [06:57] orsum. and then the make fails and stop doesn't. this is gold. [06:57] StevenK: Can you stick your fixed script up somewhere? [06:57] I seem to have lost mine. [06:57] I can only find an old version :( [06:57] http://pastebin.ubuntu.com/535088/ [06:58] Thanks. [06:58] Ugh, could the gina doctest be any slower [06:59] It's only a couple of minutes, right? :P [06:59] yes. ?? 
[06:59] What are you doing to the poor creature? [06:59] wgrant: Having it toss changelogs at the librarian and linking them into the SPRs it creates [07:00] Ah, great. [07:00] I didn't get around to that. [07:00] mumble exams mumble [07:00] hah [07:00] $ bzr di | diffstat | tail -n 1 [07:00] 2 files changed, 13 insertions(+), 6 deletions(-) [07:00] Is tiny, so far [07:00] Yep, it should be. [07:00] But the tests might be amusing. [07:02] Yes, I'm not looking forward to changing them [07:02] spm: Could you throw http://paste.ubuntu.com/535089/ at a production slave if you have time, pls? [07:02] sure [07:02] "Sure, but I have no time, so no" ? [07:03] Heh. [07:03] nah. I prefer to leave him in suspense for a few hours. the simple "sure" implies an answer that I'll do his request. but not mention of when. so it works from a bofh perspective. fwiw. [07:05] wgrant: https://pastebin.canonical.com/40005/ argh. ffs. wrong paste. one sec. [07:05] Haha [07:06] wgrant: http://paste.ubuntu.com/535090/ [07:06] spm: That works. Yes, I've run the query, but you have to wait a few weeks for the results. [07:06] spm: Thanks. [07:06] I even knew to use the ubuntu one. but auto defaulted to the can pastebin. loltastic at self. [07:06] StevenK: How long does that same query take on DF? [07:08] wgrant: http://pastebin.ubuntu.com/535092/ [07:09] lolwut [07:09] It's *quicker* [07:09] ... yes. [07:09] It must have had stuff cached? [07:09] * wgrant stabs WADL in the mouth. [07:09] That's quite possible, since I re-ran it to actually get the paste. Duh. [07:10] Total runtime: 458.468 ms [07:10] And the universe makes sense again [07:10] That's more like it. [07:10] Now to get a longer one. [07:11] However this relies on my local appserver starting, which it seems to be reluctant to do. [07:12] Your machine is hinting that you have study to do [07:12] Silence. [07:12] mv lib/canonical/launchpad/apidoc.tmp lib/canonical/launchpad/apidoc [07:12] mv: cannot move `lib/canonical/launchpad/apidoc.tmp' to `lib/canonical/launchpad/apidoc/apidoc.tmp': Directory not empty [07:12] make: *** [lib/canonical/launchpad/apidoc/index.html] Error 1 [07:12] wgrant: ^^ that error? [07:12] wgrant: how many exams left? [07:12] cause that's nicely broken qastaging [07:12] Oh, fricken apidoc [07:12] Ahh mr lifeless ^^ [07:12] spm: .... [07:12] spm: I got that too. [07:12] spm: We have that error about once a month [07:13] spm: That's why I was stabbing WADL in the mouth. [07:13] On devel, at least [07:13] lifeless: Just the one. [07:13] 9am. [07:13] wgrant: For which subject? [07:13] spm: if I may be so bold. [07:13] spm: file a bug [07:13] StevenK: Theory of Computation [07:13] lifeless: was just about to; but was wondering if a known/quickish fix. no worries. [07:13] spm: note in the bug that a) we need to find the landing that broke things and mark it as bad somehow [07:14] nod [07:14] spm: secondly, would you be able to run 'make clean' [07:14] spm: Just blow away that dir. [07:14] Or do that. [07:14] spm: then make' [07:14] Hmmm [07:14] wgrant: nice stuff :) [07:14] Gina's changelog parser is different from the archives. Fun. [07:14] launchpad-rev-11956 would be a guess [07:14] wgrant: turing equivalence, NDFA etc ? [07:15] lifeless: Yup. [07:15] one of my fav subjects at uni [07:15] no. would have been before that. [07:15] One of the few subjects in which I've actually learnt something :/ [07:16] StevenK: Is http://paste.ubuntu.com/535094/ nice and slow on DF? 
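For reference, a "Total runtime: ... ms" line such as the one StevenK pastes above is the closing line of EXPLAIN ANALYZE output, which executes the statement and times it server-side; psql's \timing, by contrast, reports client round-trip time as "Time: ... ms" (the format seen later in the log). A generic example; the statement is just a stand-in:

    -- EXPLAIN ANALYZE runs the query and reports per-node timings plus a
    -- "Total runtime" line at the end.
    EXPLAIN ANALYZE
    SELECT COUNT(*) FROM SourcePackageRelease;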
[07:16] wgrant: its good that you're learning stuff ;) [07:16] wgrant: it gets harder the older you get :) [07:16] wgrant: Not excessively, http://pastebin.ubuntu.com/535095/ [07:17] lifeless: Oh, yes, the learning is a good thing. The restricted number of subjects in which it has happened isn't. [07:17] StevenK: :( [07:18] StevenK: Could you run the script with LP_DEBUG_SQL=1 and see what's slow? [07:18] wgrant: you need to do subjects like I did in CS. ie "How to be an ego ridden dork, who manages to piss off she who will one day be his wife, by having a working program in computer graphics: @3am in the labs; and doing more than was asked for in the assignment" [07:18] spm: Heh. [07:19] amusingly, 20 years later. she still hasn't forgiven me for that. :-D [07:20] wgrant: http://pastebin.ubuntu.com/535096/ [07:20] That query will spin for ~ an hour if the last run is anything to go by [07:21] spm: was she competing for marks, or stood up for a date? [07:22] StevenK: /^troseries/ does not a valid query make. [07:22] heh, more like if you do more stuff than required, that tends to force others marks down on assignments. so she was shitty at having to do even more work. given her code wasn't running at that point in time. :-) [07:23] wgrant: Er, lemme fix that [07:24] spm: I submitted a 12 page Perl assignment for Systems Administration Programming, and got 95. The lecturer was cranky at me that she couldn't scale the marks, because if she did, the next best assignment would have gotten 60 [07:25] wgrant: http://pastebin.ubuntu.com/535097/ [07:26] StevenK: \o/ [07:26] StevenK: Could you replace the two %s with 0 and see how long it takes? [07:27] StevenK: 12 pages of perl... shoulda marked you down [07:27] I wonder if the overly verbose Storm-generated GROUP BY is being expensive. [07:27] (I reduced it to a GROUP BY over SourcePackageRelease.id before) [07:28] lifeless: That only made me chuckle ... what got me laughing so hard was when she reached for Programming Perl and waved at me saying "Do you know how many times I had to refer to this to mark your assignment?!" [07:29] * wgrant is glad that LP will be an escape from Perl... except for sbuild. [07:30] thats a rather crackful query [07:30] wgrant: Its still going, [07:30] StevenK: Aha. [07:30] StevenK: Excellent, thanks. [07:30] I suspect it will last the full 60-odd minutes [07:30] lifeless: Oh? [07:30] StevenK: Kill it pls (with fire, if you see fit) [07:31] wgrant: And the script? [07:31] StevenK: Yeah. [07:31] Oh, I love it when scripts don't listen to Ctrl-C, and instead just echo ^C [07:32] yay. qas is back. [07:32] lifeless: What is crackful about it? [07:32] Besides the insane groupby? [07:32] Pity killing the script didn't kill the query [07:33] whats the 'HAVING COUNT(LibraryFileAlias) = %s' for [07:33] If I kill a postgres child, will it get cranky? [07:33] yes [07:33] lifeless: s/%s/0/ in your head [07:33] lifeless: The %s is 0. [07:33] wgrant: be grateful you're not going from pascal, C and modula-2 @uni to Assembler and Cobol @ $job1. :-) [07:33] lifeless: It's finding SPRs without expired files. [07:33] spm: Ahahah. [07:33] And the query actually performs rather well. [07:33] I'm surprised it accepts it at all [07:34] Why? [07:34] count(TableName) [07:34] spm: Out of interest, do you have access to mawson? [07:34] StevenK: not really. I think we can login and that's about it. 
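The query being picked apart here has roughly the following shape. This is a reconstruction from the fragments quoted in the conversation, not the script's exact SQL: the LEFT JOIN only matches expired files (LibraryFileAlias.content IS NULL), so requiring a zero count keeps exactly those SPRs with no expired files.

    -- Reconstructed shape only: SPRs none of whose files have expired.
    SELECT SourcePackageRelease.id
      FROM SourcePackageRelease
      JOIN SourcePackageReleaseFile
        ON SourcePackageReleaseFile.sourcepackagerelease = SourcePackageRelease.id
      LEFT JOIN LibraryFileAlias
        ON LibraryFileAlias.id = SourcePackageReleaseFile.libraryfile
       AND LibraryFileAlias.content IS NULL          -- i.e. the file has expired
     GROUP BY SourcePackageRelease.id
    HAVING COUNT(LibraryFileAlias.id) = 0;           -- no expired files matched

The script's actual version groups over every SourcePackageRelease column (Storm generates that) and counts the whole-row LibraryFileAlias, which is what the rest of the discussion untangles.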
[07:34] Damn it, I want to kill that query [07:34] Oh well, the load is only 2 [07:34] wgrant: so, 'LEFT JOIN LibraryFileAlias ON LibraryFileAlias.id = SourcePackageReleaseFile.libraryfile AND LibraryFileAlias.content IS NULL' [07:35] StevenK: there is a pg helper to kill backends cleanly [07:35] StevenK: or read the docs. IIRC TERM is ok [07:35] pg_cancel_backend, or something like that. [07:35] I doubt I have access to the postgres user, sadly [07:36] StevenK: if you were considering killing it, I assume you had some access :) [07:36] I wouldn't say no to access, I just like cleaning up after myself [07:36] StevenK: You must surely have a postgres superuser? [07:36] wgrant: + a HAVING constraint which implies NOT NULL [07:36] wgrant: so, it doesn't need to be a LEFT JOIN [07:37] wgrant: Well, yes, there is one, I just can't actually a shell as it [07:37] StevenK: I lie. we have more access than I recalled. [07:37] spm: Win. Can you switch to postgres? [07:37] I can.... :-P [07:37] StevenK: Don't you need a postgres superuser to run tests? [07:37] Dogfood doesn't run tests? [07:37] lifeless: How does the HAVING imply not null? [07:38] lifeless: HAVING COUNT(LibraryFileAlias) = 0 [07:38] spm: kill -TERM 27507 [07:38] wgrant: count(something that is NULL) -> NULL [07:38] NULL != 0 [07:38] wgrant: no? [07:38] StevenK: that would be an unwise way of doing things; but gist of the request is understood. one sec. [07:40] lifeless: I don't think so... [07:40] StevenK: https://wiki.canonical.com/InformationInfrastructure/OSA/SafelyKillBackendProcess for the record [07:40] lifeless: http://paste.ubuntu.com/535099/ [07:41] StevenK: done [07:42] So I guess I need to pre-select the IDs the turn them into real objects later :( [07:43] spm: Danke [07:44] wgrant: 'count(expression)anybigint number of input rows for which the value of expression is not null' [07:45] lifeless: eparse [07:48] wgrant: so an SPR without expired files [07:48] wgrant: is two queries [07:48] sprs without files [07:49] (exclude those) [07:49] and sprs with files that are not expired - that have content) [07:49] sorry, union them [07:49] (tired) [07:50] hi lifeless [07:50] hi poolie [07:50] welcome back(?) [07:50] heh, haven't been away ;) [07:50] poolie: just in orlando, still working [07:50] has anyone tried/considered running launchpad in lxc? [07:51] well, physically away [07:51] or are you still there? [07:52] wgrant: that query wants 0 LFA's which will turn up if no LFA's are found, and we force no-files and files with content to null via the complicated join clause [07:52] poolie: no, nz now. [07:53] lifeless: Any SPR in prod without SPRFs is a bug. [07:53] wgrant: I think I'd write this as an EXISTS then [07:54] HAVING is forcing it to evaluate all results which you've helpfully inverted to make a negative. [07:54] but we don't care about the count [07:54] we care that there is one or more LFA with content. [07:55] True. [07:57] wgrant: I suspect it will perform a lot better - but you may need a DISTINCT in the EXISTS select clause [07:58] lifeless: The performance of the query is not problematic at the moment. [07:58] wgrant: an hour isn't problematic?! [07:58] lifeless: Except for the GROUP BY. [07:58] lifeless: That goes away magically when I write the query manually to group by SPR.id, rather than SPR.*. [07:59] aggregates are hard :P [07:59] You'd think pg would be smart enough to notice that there's a PK in there. 
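For the record, the clean way to do what spm did above without kill(1) is PostgreSQL's built-in cancel function, run as a superuser. A sketch; on the 8.4-era servers discussed here the pg_stat_activity columns are procpid and current_query (later releases renamed them to pid and query), and 27507 is just the pid from the log.

    -- Find the offending backend, then cancel its current query cleanly.
    SELECT procpid, usename, current_query
      FROM pg_stat_activity
     WHERE current_query LIKE '%SourcePackageRelease%';

    SELECT pg_cancel_backend(27507);   -- the procpid found above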
[07:59] poolie: no idea [08:00] it may be lighter wait than full virtualization, but still strong [08:00] ensemble is building around it [08:00] ok, afk === almaisan-away is now known as al-maisan [08:08] jtv: woo! [08:08] poolie: woo? [08:08] woo whom? [08:08] i think that's the first time i've ever had a proposal just land the first time [08:09] well, i guess there was a fizzle in the mp [08:13] StevenK: Still around? [08:13] wgrant: Hm? [08:13] Could you try http://pastebin.ubuntu.com/535106/? [08:16] wgrant: Running [08:17] oh ugh [08:17] StevenK: Less obscenely slowly? [08:17] launchpad.net/linaro is a package [08:17] project I mean [08:17] lifeless: This could be amusing. [08:17] wgrant: what could be? [08:18] lifeless: Converting Linaro to a distro. [08:18] fortunately, NMP. [08:18] :P [08:18] 'hi duncan, please reregister. Thanks.' [08:18] You nasty strategists and architects... [08:18] we sit around all day playing buzzword bingo [08:19] wgrant: How long are expecting the new query to take? :-) [08:19] StevenK: Not long :( [08:19] wgrant: try an exists, you know you want to [08:19] lifeless: I'm not sure if non-admins can actually register a distro [08:20] They can't. [08:20] Therefore, 'hi duncan, please reregister' is completly pointless [08:20] lifeless: But I like avoiding subselects :( [08:20] they aren't always wrong [08:20] I know. [08:21] wgrant: 4 minutes, and counting [08:21] But I like avoiding subselects. [08:21] StevenK: Bah. [08:21] 8.4 seems to want DISTINCT a lot [08:21] Which query? LIMIT 1? [08:21] wgrant: I'm sure you'll try soon enough [08:21] wgrant: ENOLPSQLwhatsit [08:22] StevenK: :( [08:22] lifeless: We'll see. [08:22] wgrant: Kill it and run it with the DEBUG? [08:22] StevenK: Please. [08:23] Hm. [08:23] I guess it's probably doing the limit in the wrong place. [08:23] Sigh, SQL. [08:23] lifeless: I think that is a side effect of the short circuiting not happening as often (at all?) in subqueries [08:23] stub: yeah, I think that too [08:23] stub: possibly a correctness fix in some corner case. I haven't looked in the changelog to see though [08:24] (which might be trading performance for correctness, although the cases I can think of that need the short circuiting disabled should be detectable) [08:25] I actually recall removing some distincts before, as they used to slow some queries down [08:25] wgrant: http://pastebin.ubuntu.com/535109/ [08:25] stub: \o/ [08:25] (but others used to benefit from its addition too) [08:25] At least it is more consistent now ;) [08:27] wgrant: so we can use a regular JOIN on sourcepackagereleasefile right? if not playing silly buggers with having [08:28] lifeless: I want to find those SPRs that have no files with LFA.content IS NULL. [08:28] lifeless: How can I word that in a non-LEFT JOIN? [08:29] Unless I do an exists. [08:30] wgrant: Time: 6.897 ms [08:30] wgrant: the current group by *does not do that* [08:31] wgrant: the content is NULL clause matches the join, and then the hacing COUNT excludes them [08:32] or I may be misphrasing. Anyhow, one sec, I'll put that helpful english description into this [08:32] lifeless: The HAVING doesn't work without the GROUP BY... [08:33] poolie: only thing I had to do about your MP was set a commit message—so that all the metadata except the --no-qa option is in there and test-and-land becomes a very simple command line. Congratulations, and thanks again for fixing this! 
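To the "How can I word that in a non-LEFT JOIN?" question above: the EXISTS spelling being hinted at would look roughly like this. A sketch only; lifeless posts his own NOT IN and NOT EXISTS variants, with timings, further down the log.

    -- SPRs with no expired files, without the LEFT JOIN / GROUP BY / HAVING dance.
    SELECT SourcePackageRelease.id
      FROM SourcePackageRelease
     WHERE NOT EXISTS (
            SELECT 1
              FROM SourcePackageReleaseFile, LibraryFileAlias
             WHERE SourcePackageReleaseFile.sourcepackagerelease = SourcePackageRelease.id
               AND LibraryFileAlias.id = SourcePackageReleaseFile.libraryfile
               AND LibraryFileAlias.content IS NULL);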
[08:33] StevenK: http://pastebin.ubuntu.com/535111/ [08:33] wgrant: so thats sprs where all their files are not null [08:34] 15 seconds on a first attempt [08:35] 11 seconds on hot cache [08:35] wgrant: Killed and re-running [08:35] lifeless: Which? [08:35] StevenK: Thanks. [08:35] Ooooh, that's MUCH quicker [08:35] Like actually not slow? [08:35] wgrant: http://pastebin.com/q6XXG2H0 [08:36] I'm getting confused by the DEBUG [08:36] I think Foxtel has bought out every bit of Australian ad space. [08:36] I have not seen a non-Foxtel ad in weeks. [08:36] adblock FRW [08:36] *FTW [08:37] 2010-11-22 08:37:08 DEBUG Iteration 0 (size 1.0): 1.043 seconds [08:37] 2010-11-22 08:37:08 DEBUG3 new transaction [08:37] I used to use it... but I go to so few adful sites that I didn't bother installing it this time. [08:37] If this is the pastbin http://pastebin.ubuntu.com/535106/ , I'm confused and the comment on the query doesn't match the query at all (comment is about finding sprf, query is finding sprs) [08:37] StevenK: Is it expanding the set size? [08:38] wgrant: http://pastebin.ubuntu.com/535113/ [08:38] stub: The comments used to be interspersed with the relevant bits of the query. Then I rewrote it and moved them all to the top for now. [08:38] stub: But http://pastebin.ubuntu.com/535111/ is the latest version, which might make more sense. [08:39] wgrant: the comment and code are confused ;) [08:40] lifeless: Oh? [08:40] well it says 'Find any SPRFs that have expired (LFA.content IS NULL' [08:40] but you sau I want to find those SPRs that have no files with LFA.content IS NULL [08:40] I don't know what Count(LibraryFileAlias) actually does, but I suspect nothing useful. [08:40] lifeless: Have you read the version with the comments not all at the top? [08:41] http://pastebin.ubuntu.com/535111/ [08:41] stub: Howso? It works fine. [08:41] wgrant: its unusual, sets of alarums [08:41] wgrant: This looks like it runs well, kill it? [08:41] StevenK: Yep, thanks. [08:42] wgrant: Bill is in the mail [08:42] wgrant: I'm very confused about whether you want 'expired sprfs' or 'no expired sprfs' [08:42] wgrant: I can get you expired ones in 6ms. [08:42] lifeless: I want those SPRs without expired SPRFs. [08:42] wgrant: why those in particular ? [08:43] wgrant: and why the specific case of sprs without *any* expired sprfs ? [08:43] I see. It is a confusing spelling of COUNT(LibraryFileAlias.id), which excludes NULLs unlike COUNT(*) would [08:43] lifeless: because I can't extract an SPR without all of its files. [08:46] stub: It wasn't a deliberate choice, I don't think. Does it make a difference? [08:47] COUNT(*) or Count(LibraryFileAlias.*) would match every row. Count(LibraryFileAlias.id) counts rows with a non-null id [08:49] stub: Hm, but COUNT(LibraryFileAlias) happily returns 0... [08:50] In SQL or Storm? [08:50] Both. [08:51] size is 2811, but 11244 were read from the file [08:51] * StevenK grumbles at gina [08:52] wgrant: Oh, that script was running fast enough with 5 that I suspect it could be bumped to 20 or so easily [08:54] StevenK: Yay. Thanks. [08:55] http://paste.ubuntu.com/535119/ is the clearest SQL and likely fastest [08:55] But if it works [08:56] morning [08:56] 'AND SourcePackageRelease.id >= 1) AS Whatever' Haha [08:57] good morning [08:58] stub: I tend to avoid EXCEPT when it's easy. But if you say so :) [08:58] Its hard to say without benchmarking, and I haven't done that :) [08:59] In fact, that limit might have to go inside... 
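stub's point about COUNT can be seen with a throwaway example (toy temp tables, nothing to do with Launchpad's schema): COUNT(*) counts every row the join produces, including NULL-extended rows from a LEFT JOIN, while COUNT(some_column) skips rows where that column is NULL, which is why the zero-count trick works.

    CREATE TEMP TABLE spr  (id integer PRIMARY KEY);
    CREATE TEMP TABLE sprf (spr integer, content integer);  -- content IS NULL == "expired"
    INSERT INTO spr  VALUES (1), (2);
    INSERT INTO sprf VALUES (1, NULL);   -- spr 1 has an expired file; spr 2 has none

    SELECT spr.id,
           COUNT(*)        AS joined_rows,   -- 1 for both: even spr 2's NULL-extended row counts
           COUNT(sprf.spr) AS expired_files  -- 1 for spr 1, 0 for spr 2: NULLs are skipped
      FROM spr
      LEFT JOIN sprf ON sprf.spr = spr.id AND sprf.content IS NULL
     GROUP BY spr.id
     ORDER BY spr.id;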
[09:00] Anyway, this script works quickly now, and only has to be run once... [09:01] Crap, I think I made gina spin [09:02] stub: Is postgres not smart enough to reduce a 'GROUP BY SourcePackageRelease.id, SourcePackageRelease.foo, SourcePackageRelease.bar' to 'GROUP BY SourcePackageRelease.id'? [09:02] I thought GROUP BY were never twiddled with [09:02] SQL isn't [09:03] http://pastebin.com/bsCfW3Ya [09:03] stub: But surely it can see that it's just grouping across real fields, one of which is a PK. [09:03] Yes, but the SQL standard wants them listed IIRC [09:04] PG could extend the spec here I guess, but hasn't [09:06] stub: I know that SQL requires that they be listed. But PG also seems to execute them. [09:06] When it can tell that it's optimisable down to SourcePackageRelease.id. [09:09] Hi [09:10] SELECT SourcePackageRelease.* [09:10] FROM SourcePackageRelease [09:10] WHERE SourcePackageRelease.id >= 1 and not [09:10] SourcePackageRelease.id in (SELECT SourcePackageReleaseFile.sourcepackagerelease [09:10] FROM SourcePackageReleaseFile, LibraryFileAlias [09:10] WHERE SourcePackageReleaseFile.sourcepackagerelease = SourcePackageRelease.id and SourcePackageReleaseFile.libraryfile = LibraryFileAlias.id [09:10] AND LibraryFileAlias.content IS NULL [09:10] ) order by id desc limit 1; [09:10] stub: wgrant: ^ [09:10] 20ms [09:12] lifeless: Ah, that's not bad. [09:12] 420.120 ms to get 55 rows [09:13] Would a NOT EXISTS be saner, though? [09:13] seemed to perform worse before [09:13] :( [09:13] but it wasn't quite identical [09:13] lets see [09:15] SELECT SourcePackageRelease.* [09:15] FROM SourcePackageRelease [09:15] WHERE SourcePackageRelease.id >= 1 and not [09:15] EXISTS (SELECT 1 [09:15] FROM SourcePackageReleaseFile, LibraryFileAlias [09:15] WHERE SourcePackageReleaseFile.sourcepackagerelease = SourcePackageRelease.id and SourcePackageReleaseFile.libraryfile = LibraryFileAlias.id [09:15] AND LibraryFileAlias.content IS NULL [09:15] ) order by id desc limit 1; [09:15] 11 seconds [09:15] *yawn* [09:15] gnight [09:16] Night. [09:16] Thanks. [09:21] wgrant: http://paste.ubuntu.com/535123/ is 20ms to retrieve approx 55 rows, and for this use we don't care about the approx. [09:21] stub: Is it worth it? [09:22] I have no idea, but you guys seemed to be optimizing it :) [09:22] We'll be making lots more queries. [09:22] I don't want to optimise it, lifeless does :P [09:22] Is this a run once? [09:22] Yes. [09:22] Don't care then :) [09:22] Yay. [09:23] As long as it runs in days rather than weeks. [09:23] I just wanted it down from an hour on DF... which was achieved by fixing the GROUP BY. [09:24] 68 minutes down to about 30 seconds or so [09:35] bigjools: Have the recent log parser changes actually been tried? [09:36] I recall a couple of emails to the LOSAs but I never saw a response (although that may have just gone to canonical-launchpad and not me) [09:37] morning [09:37] this cold is finally getting to be a hassle [09:38] wgrant: otp, but yes and it DOSed germanium [09:38] bigjools: Still!? [09:39] :( [09:39] still [09:39] They must have pulled some nice tricks with the librarian one the first time :( [10:00] another incident? [10:01] wgrant: the librarian logs are nowhere near the size of the Apache ones for PPAs [10:01] bigjools: Ah. [10:01] Sad. [10:03] bigjools: Even with the max set? [10:03] And it no longer reading all the lines in first? [10:04] wgrant: I don't know the details, I am yet to re-create on DF. [10:04] mostly because I've not tried to re-create, mind ... 
[10:04] I guess I can do that in a couple of weeks. [10:04] I've got the production logs on it ready to go [10:04] Ah, great! [10:04] So it's not entirely hopeless. [10:04] but I need to fix the 2 thousand soyuz oopses per day first [10:05] mrevell: hi [10:05] Hell jml [10:05] That I am :) [10:05] bigjools: PFOE? [10:06] heh [10:06] wgrant: yes [10:06] wgrant: also stub's change to make log.error generate an OOPS is making me sad [10:06] bigjools: Yeah, it would :( [10:07] * bigjools gets caffeine while waiting for rf-get [10:08] You don't want the behaviour, or you don't like what it is telling you? [10:09] stub: Soyuz is ancient and broken :P [10:12] stub: the code that logs an error for upload problems doesn't differentiate between what should or should not be an oops (it can be a package problem or a code problem) [10:12] so I need to fix the code [10:13] onwards and upwards! [10:14] wgrant: had a look at the distributionmirror_prober recently? [10:14] jml: We offloaded that to Registry :D [10:14] NFI why. [10:14] But it's damn convenient. [10:15] it's always been registry [10:15] jml: Why? [10:15] wgrant: when you said broken and ancient, I was reminded of it (or at least its tests) [10:15] jml: Ah, yes. [10:15] lol [10:15] It's possibly even more broken and ancient than most of the rest. [10:15] Apart from gina. [10:16] Although gina was refurbished. [10:16] It still sucks. [10:16] I know :-( [10:16] I keep wanting to rip out all of kiko's XXXs [10:16] And some of the older bits of archivepublisher are fairly nauseating. [10:16] But one day it will be fixed. [10:16] Maybe. [10:16] about to try to land the branch that changes all of the twisted tests bar the prober ones to use testtools [10:16] Hah. [10:16] That bad? [10:16] they basically need to be rewritten. [10:16] it's not that hard [10:17] One could say that about 90% of the Soyuz tests... [10:17] but it shouldn't block getting this large branch integrated [10:17] wgrant: well, these are actually just generally incorrect [10:17] jml: ... oh. [10:17] (*always* return the Deferred!) === jelmer is now known as Guest24864 [10:18] james_w: What happened to those massive anti-sampledata branches of yours? [10:26] jml: see, now I understand about always returning the Deferred :) [10:27] bigjools: yep :) [10:28] I really need to learn Twisted properly. [10:28] wgrant: I would like that [10:29] Since I don't really completely understand the intricacies of the new buildd-manager. And that could be problematic. [10:30] wgrant: it's not too hard once you get into it [10:30] Indeed, I probably just need to do it. [10:30] but there is a bit of a learning cliff [10:30] Can't be as bad as Zope's. [10:30] And I picked that up OK. [10:31] I tried to document all the gotchas as much as possible, I probably missed a load though [10:35] wgrant: Twisted is less abstract than Zope and has fewer grand ideas, imo. [10:36] Great. [10:36] wgrant: also, it has the happy advantage of having primarily XML-hating developers [10:36] Hah. [10:36] ZCML's not that bad. [10:37] the syntax isn't that bad, I guess [10:37] I prefer Grok's approach, yes, but it's not that bad. [10:37] the action-at-a-distance thing is a bit of a pain. [11:06] ZCML's not that bad. [11:06] who are you and what did you do with wgrant? [11:09] bigjools: There are far worse things! [11:10] like limb loss? [11:10] having heard that Twisted devs hate xml, I want to become one! [11:10] For one. [11:11] A lost limb? An XML? Or a Twisted dev? 
=== Guest24864 is now known as jelmer [11:46] wgrant: it's that time to remove jaunty's old binaries from the librarian, can you sanity check this for me, it's the same as the one for intrepid. http://pastebin.ubuntu.com/535163/ [11:50] bigjools: Replace the "- interval '1 week'" with "+ interval '1 week'" and you have a deal. [11:51] wgrant: well we don't need to add any interval really, the GC has a built in one, which is why we took it *off* in the last query as time/space was tight. [11:51] bigjools: At which stage is it built in? Does it set the expiry time in the future? [11:52] no, it adds a week to the timestamp in the query it uses [11:52] IIRC [11:53] I'm extremely uncomfortable OKing a query that I know to be buggy (probably insignificantly, but still) when it's not an emergency, and I have an exam in 10 hours. [11:53] heh [11:53] go to bed [11:53] what was buggy last time? [11:53] We didn't check dateremoved. [11:54] in the EXCEPT clause? [11:55] Rght. [11:55] If we don't exclude unremoved files, p-d-r will cry. [11:55] right [11:55] * bigjools fixors [11:56] How critical are we for space? [11:56] 300G [11:56] It means nothing to me... [11:56] heh [11:56] not crtitical but in the time-to-think-about it stage [11:56] Yep. [11:57] I'll go through the query properly after my exam, then we can run it without the expiry mangling tomorrow night, then the morning after I will realise another flaw in it, and we will have no blown away the files yet :) [11:58] http://pastebin.ubuntu.com/535167/ [11:58] :) [11:58] We've fucked it up somewhat the last three times. Let's not do it again :) [11:58] the p-d-r thing was the only problem last time [11:58] Er. [11:58] ffs [11:58] Yeah, looks good. [11:58] why does absolutely *every* site that wants my money demand that I create a bleeding account [11:59] so you can be labelled and tracked [11:59] jml: Everyone should accept OpenID. [11:59] *nudge nudge* [11:59] these guys *do* accept openid [11:59] Hah. [11:59] but they also need me to make up a password and so forth [11:59] As many do :( [12:00] Sounds like Launchpad! [12:00] Anyway, I should probably sleep. [12:01] Morning, all. [12:01] deryck: good morning [12:02] bigjools: Heh, that query's still bad. [12:02] I will fix tomorrow. [12:02] Sigh. [12:02] sigh [12:02] This is hard. [12:02] yes [12:02] what's up? [12:02] AND bpph.status in (2, 5) [12:02] AND bpph.dateremoved is NULL [12:03] I think the first one probably wants to be removed. [12:03] Since the second will clearly be true in all those cases. [12:03] true [12:03] but [12:04] competing p-d-r processes. Grah [12:04] Hm? [12:04] gror my head [12:04] basically we need to run this periodically [12:04] for all obsolete series [12:05] or, fix p-d-r not to care [12:05] That and obsolete-series.py itself, yeah. [12:05] Hm? [12:05] Why does p-d-r care? [12:05] the problem last time ... [12:05] Ah, yes. [12:05] Or we could forcibly delete all the PPA pubs from obsolete series. [12:05] no [12:05] :( [12:05] some people are using obsolete series [12:06] they are crackpots, I know [12:06] Those people are wrong. [12:08] Anyway, I will recheck that query tomorrow while I am still able, since that will resolve the immediate issue. But I would later like to know these wondrous use-cases for obsolete series, since I'm fairly sure they're insane. [12:08] Night. 
[12:09] It would make obsolete series handling much easier :( [12:12] nn wgrant, good luck with the exam === al-maisan is now known as almaisan-away [12:21] crappy crapzor mccrapperton [13:13] do we have an existing mechanism to use the librarian as a fixture rather than as a layer? [13:18] I guess not, since testSetUp and testTearDown both do something in the librarian layer. [13:18] which means getting the log of the librarian during tests is going to be a pain === mrevell is now known as mrevell-lunch === almaisan-away is now known as al-maisan === matsubara is now known as matsubara-lunch === mrevell-lunch is now known as mrevell === jaycee is now known as jcsackett === salgado is now known as salgado-lunch [14:47] jml ping [14:47] sinzui: hi. otp [14:48] * sinzui will talk later === matsubara-lunch is now known as matsubara [15:11] jtv: is your current branch browser code branch pushed? [15:11] henninge: yes. [15:11] Currently going through review for the migration script with Aaron. [15:11] jtv: Does it touch CurrentTranslationMessageView? [15:12] Yes. [15:12] Had to. :( [15:12] that's ok [15:13] jtv: there are two branches (pre and normal). I guess the one depends on the other? [15:13] Yes. [15:14] The "pre" branch is waiting for review (though I'm keeping our reviewer busy; he found what looks like a serious gotcha in the script) [15:14] If that gets through unscathed, then the regular branch is also ready for review. [15:18] Meanwhile, I guess I'll be adding tests to the migration script and fixing the bug the review turned up. [15:32] deryck: is it bad that I found it hard to read anything after the line "disable windmill tests" in your last email? :) [15:34] bigjools, I guess not ;) but yeah, maybe so :-) === al-maisan is now known as almaisan-away [15:54] abentley, you have another bzr pipeline convert in matsubara :) [15:57] * mars tea :) mouse battery recharge :( === almaisan-away is now known as al-maisan [15:57] wrong macro [15:59] mars: Cool. [16:05] sinzui: ping [16:08] hi jml [16:39] moin [16:40] morning, lifeless [16:40] hi deryck [16:42] sinzui: 173 oops bugs, 59 timeout bugs === jelmer is now known as Guest34932 [16:47] good morning lifeless. We don't normally see you around at this hour [16:48] mars: I'm not normally jetlagged either :) [16:48] rough. That's twice in four weeks, isn't it? [16:48] bigjools: hi [16:48] bigjools: is https://bugs.launchpad.net/soyuz/+bug/654372 good to go ? [16:48] <_mup_> Bug #654372: Source domination is inefficient [16:49] lifeless: testing it *right now* as it happens :) [16:49] \o/ [16:49] I've been doing QA all freaking day === Guest34932 is now known as jelmer [16:51] unfortunately dogfood is slower than anything I can think of a funny idiom for right now [16:51] than a laser with a fricken shark on its handle? [16:52] I can think of plenty of really offensive comparisons [16:55] bdmurray: hi [16:56] https://bugs.launchpad.net/malone/+bug/594247 - is that gtg on qastaging ? [16:56] <_mup_> Bug #594247: searchTasks with structural_subscriber times out regularly [16:56] lifeless: gtg? [16:56] good to go [16:57] lifeless: yes, let me update the tag === salgado-lunch is now known as salgado === deryck is now known as deryck[lunch] === lifeless changed the topic of #launchpad-dev to: Performance Tuesday! 
| Launchpad Development Channel | Week 4 of 10.11 | PQM open for 10.12 | firefighting: importd system failing to import | https:/​/​dev.launchpad.net/​ | Get the code: https:/​/​dev.launchpad.net/​Getting === lifeless changed the topic of #launchpad-dev to: Performance Tuesday! | Launchpad Development Channel | Week 4 of 10.11 | PQM open for 10.12 | firefighting: - | https:/​/​dev.launchpad.net/​ | Get the code: https:/​/​dev.launchpad.net/​Getting === benji is now known as benji-lunch [17:37] lifeless: good morning [17:39] jml: oh hai [17:46] good night all [17:54] jml: I landed a couple of self reviewed branches since the optional review experiment started. [17:54] jml: could you please review those for me, per the 'robert will review everything except his stuff, jml will review that' bootstrap phase? [17:55] lifeless: sure. can you maybe email me the revnos or mps or something? [17:55] lifeless: otherwise I'll make a note to figure those out myself. [17:56] * jml partly afk assembling whiteboard [17:57] make a note. [17:57] If I determine them I will mail you [18:00] jtv: when moving things to Ubuntu, if you change the projet to null, we won't get more email. [18:01] lifeless: btw, I'm getting exactly one failure w/ my testtools branch, it's a 500 response from the librarian. has your recent fixture/layer work made it possible for me to get the librarian log attached to the error? [18:02] (I can't seem to reproduce locally with a subset of the tests) [18:02] there is glue for that in this branch - https://code.launchpad.net/~lifeless/launchpad/librarian/+merge/39013 [18:03] its not landed yet but you might find merging it, running the test with it, to help you [18:03] lifeless: ok, thanks. [18:03] I'll give that a try. [18:04] if it blows up you might find grabbing the addDetail change to be easy enough to pull out, but I think the branch is in tolerable shape. [18:04] it's also making the fixture a member of the layer, I think. not such a big deal. [18:07] the reason its not landed is it depends on teh databasefixture layer which is broken in 'make check' (but not bin/test) atm [18:07] s/layer/branch/ [18:08] meh. conflicts. [18:08] :< [18:08] too sick/tired to deal with that now. [18:09] I guess this branch lands Wednesday at the earliest. [18:10] go rest [18:10] get well [18:10] nothing good comes of working sick === deryck[lunch] is now known as deryck [18:23] fun stuff - http://www.facebook.com/note.php?note_id=76191543919 === didrocks1 is now known as didrocks === benji-lunch is now known as benji [18:28] deryck: hi [18:28] hi lifeless [18:28] deryck: something for consideration in your js landing stuff [18:28] lifeless, sure. shoot. Was just replying to your email. [18:28] I think its easy to end up with measurement bias in assessing the benefits of even terrible tests [18:29] the problem is in gathering data when there is a filter (the tests) on half the range [18:30] lifeless, ok, I don't know that I fully follow what you mean there. :-) [18:30] ok [18:31] so, we're mentally saying 'gee, I think that only sometimes catches failures, so we only sometimes have defects it would catch' [18:31] but the actual set of failures it catches is 22 times larger than the local-test-failures-one-encounters [18:32] (we have ~ 22 active coding developers) [18:32] the tests, terrible as they are, are filtering out failures from all developers, but individually we only perceive the failures we've made ourselves. 
[18:32] no, I don't think 'gee, I think that only sometimes catches failures, so we only sometimes have defects it would catch' [18:32] I wouldn't say it like that [18:33] deryck: ok [18:33] I do think Windmill catches some failures. I don't think that means we only sometimes have defects it would catch. [18:33] I'm not sure what that means for your argument :-) [18:33] I'm not really arguing [18:34] throwing an issue I struggle with when evaluating the benefit of tests - any tests - into the right for discussion [18:34] since its part of the risk analysis. [18:34] s/right/ring/ [18:35] ah, right [18:36] so how do we determine the value of these tests? That's the heart of your analysis, right? [18:36] its extremely hard to reason about life without any given set of tests, because there are so many places that the filtering effect (filtering out defects) that they offer occurs at. [18:36] deryck: I think its an input into assessing the value of the tests, yes. [18:39] lifeless, so you're saying that the tests filter defects in ways that we can't know, because we only the past defect the tests did catch. But we don't know the things they catch they we were never aware of. [18:39] that's hard to type [18:39] it's a bit meta-physical to me. :-) [18:41] yeah [18:41] more simply [18:41] you never see my mistakes that tests caught. [18:41] *we* never see each others mistakes that tests caught. [18:42] ah, ok [18:42] I know mine, but not yours. [18:43] but when we remove the tests [18:43] I agree with that. But that also assumes you write js or windmill. And well, there ain't that many of us doing it for lp. ;) [18:43] you will feel many more, perhaps even all, of the mistakes. [18:43] deryck: ack, as I say - this is a general modelling thing [18:43] lifeless, right, and I completely agree with you in the abstract. === matsubara is now known as matsubara-afk [18:44] deryck: for clarity, I'm ok with what you're proposing, tentatively. [18:44] I don't think I'll get beyond tentative ;) [18:45] lifeless, fair enough. I don't think I can make you feel any better about it either :-) [18:45] lifeless, is there anything I need to do because it's "tentative" or can I move ahead, assuming no one else objects. [18:46] oh, I'm nonblocking as always [18:46] if I wanted to stop it I would say so very clearly. [18:46] I'd be happiest if we just fix windmill [18:46] but you're doing the work - you get to decide on the right path. [18:47] if I was doing it, i think I'd fix windmill [18:47] fair enough, thanks! [18:47] I'm replying in email on why I don't think we should do that. or why I choose not to, rather. [18:48] thanks [18:50] flacoste: when did windmill become unmaintained ? [18:51] statik: we on today? [18:51] I think I overreached with this whiteboard [18:52] lifeless: sure thing, just wrapping up with IBM [18:53] lifeless, by unmaintained, do you mean, when did mikeal leave the project? [18:53] mars: I don't know what I mean. [18:53] lifeless, or do you mean umaintained in LP - no active work [18:53] mars: see flacostes mail [18:54] lifeless, oops, my bad - /me goes to read context first [18:56] I believe unmaintained == mikeal leaving. [18:56] windmill is not under active development anymore, as I understand it. [18:56] is there any reason to believe that the 512K bug wasn't inherited from Selenium? [18:57] I don't know. rockstar worked on that bug before and could say, I think. [18:57] deryck, well, there are developers, but the project velocity may have slowed a fair bit. 
(I haven't been keeping close tabs though) [18:58] yeah, the project web site has a new face lift. So someone cares about it still. === al-maisan is now known as almaisan-away [18:58] lifeless, I wasn't aware that Windmill inherited anything from Selenium. [18:59] rockstar: wasn't windmill a fork of selenium, way back when ? [18:59] lifeless, no, not at all. [19:00] rockstar: I've been misinformed then. Someone should vote that comment down :) [19:01] lifeless, IIRC: Selenium was a complex piece of tech, the windmill devs thought they could have done some parts better - windmill tried a different way [19:02] I don't see much difference myself between selenium 1 and windmill. Just that windmill is Python. [19:02] lifeless, now Selenium 2 apparently has taken the same approach that windmill did. [19:02] mars, yeah, and the windmill devs hated selenium. [19:02] or maybe it's selenium 2, I'm thinking of ;) [19:02] deryck, it was under the hood. SSL handling, cross-domain requests, etc. This is messy tech [19:02] Mikeal said himself "and, for the most part, the things we hated about Selenium are basically fixed now." [19:02] yeah [19:03] I actually have less issue with windmill vs. selenium than the kind of testing we chose to do with windmill. [19:03] deryck, everyone but the Selenium 2 folks are staying away from Selenium 2. [19:03] deryck, yeah, me too. [19:03] I think using selenium is better since it's more widely used and because it forces us to think differently about how we test, but I don't think selenium is a magic bullet. [19:04] statik: ok, skype me when ready? [19:04] rockstar, "everyone but the Selenium 2 folks are staying away from Selenium 2" - care to elaborate? [19:10] deryck: they did made a release last week though [19:10] windmill 1.4 [19:12] lifeless: you don't seem to be online in skype [19:12] fixed [19:15] flacoste, yes, but no one's really actively working on it. It's just patches that people send in now. That release has been on deck for a while now. [19:16] mars, I don't have many other details, other than Selenium 2's java server stuff is apparently really whack, I guess. [19:16] mars, this is all speculation, as is everything in our javascript testing story. [19:17] mars, one thing that is clear is that we need to be careful how we invest in any ship, because if we commit too strongly to something, we'll be sinking far too fast. [19:17] Er, we may find ourselves on a sinking ship. [19:18] Yep. Always a risk (I've been down this road more than once) [19:19] Alas, there is no formula to predict which budding OSS project in a given arena will be best for your needs 2 years from now. [19:39] deryck: ,/bin/lp-windmill -m firebug jsdir=lib/canonical/launchpad/windmill/jstests http://launchpad.dev:8087/ [19:57] flacoste: hi [20:05] I'm confused. [20:05] The lazr-js upgrade triples our JS size? [20:05] Isn't that a bug? [20:05] lifeless: finishing off with deryck [20:05] flacoste: ah, my bad. [20:05] flacoste: sorry ;) [20:06] wgrant: I think so. [20:06] yes, it's a bug. [20:07] If we need more than a megabyte of JS on every page, I think we have a serious problem :/ [20:08] I agree it's serious. hence, my effort to solve it. ;) [20:08] Great. [20:08] morning [20:08] we don't ever need 1.3 Mb of js for any page. But we're concat'ing of all yui + lp js + lazr-js for every page. [20:09] and yui is the real beast [20:09] morning, thumper [20:09] Ahh, I see. [20:09] yui is designed with batteries included like Python, thinking you'll dynamically load the modules you'll need. 
[20:10] and we've never used it like that [20:10] Does U1? [20:11] It seems like U1 has solved this sort of thing. [20:11] But I can't really see.. [20:11] u1 has a smaller base yui, I believe. but rockstar is working on this for them. he could say for sure [20:12] Ah. [20:39] hi thumper [20:39] thumper, I think I remember reading somewhere that you had plans to work on blueprints in the near future. have you started yet? [20:40] salgado: not really [20:40] salgado: I've done a little yak shaving [20:40] salgado: but not started in earnest [20:40] thumper, did you have any plans to expose them over the API? [20:40] yes [20:42] thumper, then it looks like I have good news for you. project managers in Linaro need that ASAP, so I'm taking james_w's branch branch which started exposing blueprints over the API and try to nail down the remaining issues to get it landed [20:43] salgado: sounds good [20:43] lp:~james-w/launchpad/safe-blueprints-model is the branch, btw. === salgado is now known as salgado-afk [20:53] wgrant, U1 doesn't ship lazr-js, which drastically reduces its footprint. [20:53] wgrant, also, until recently, we were loading each file separately in U1 (bad). [21:24] salgado-afk: awesome [21:25] salgado-afk: one small request [21:25] salgado-afk: please don't put the blueprints in api/1.0 [21:28] rockstar: hey [21:28] rockstar: remember at uds you had a features issue in a template [21:28] rockstar: I would love to know what the problem was [21:32] lifeless, uds was an eternity ago. [21:32] :) [21:32] lifeless, I remember it had something to do with template macros not having the same scope or something. We were find that it was creating a new WSGI request for each template macro, even if it wasn't actually a net request itself. [21:33] I don't remember how I fixed it though. [21:33] ah well [21:33] thanks [21:33] lifeless, are you having an issue now? [21:33] no [21:33] I wanted to: [21:33] - think about root cause fixes [21:33] - have the answer handy if someone else encounters it [21:50] Project devel build (237): FAILURE in 4 hr 5 min: https://hudson.wedontsleep.org/job/devel/237/ === jkakar__ is now known as jkakar [22:39] meep [22:39] n [22:39] http://news.cnet.com/8301-30685_3-20023535-264.html [23:01] gmb: isn't it late for you ?