[00:23] <wallyworld_> thumper: just talked to amazon. they are sending out a replacement. they are charging me but will refund the money when i send back the broken one
[00:23] <thumper> wallyworld_: awesome news
[00:24] <wallyworld_> thumper:  yeah. they asked several times if it had been dropped or pressure put on the screen. but it lives in the amazon hard covered case....
[00:34] <wgrant> So much QA :/
[00:36] <wgrant> lifeless: Can we restore qastaging?
[00:37] <lifeless> +1
[00:39] <wgrant> jelmer: Hi.
[00:39] <wgrant> jelmer: http://staging.launchpadlibrarian.net/73288606/vcs-imports-pydoctor-trunk.log -- I don't think cscvs likes the autoupgrade stuff.
[00:47] <lifeless> I will be back later; today is in 2 parts :)
[00:59] <jelmer> wgrant: darn
[01:00] <jelmer> wgrant: should be a trivial change to clean it up after upgrade
[01:03] <wgrant> jelmer: Rather.
[01:12] <wgrant> jelmer: Trivial enough that it's worth fixing rather than rolling it back?
[01:12] <jelmer> wgrant, https://code.launchpad.net/~jelmer/launchpad/auto-upgrade-qafix/+merge/64771
[01:13] <wgrant> jelmer: Perfect, thanks.
[01:13] <wgrant> I think we shall skip ec2 this time.
[01:14] <StevenK> Hm. No wallyworld
[01:15] <StevenK> I guess he did see the State of Origin, then ...
[01:15] <jelmer> wgrant: are you landing it, or should I?
[01:15] <wgrant> jelmer: I've approved it, and can land it if you want.
[01:15] <wgrant> I realise it's late.
[01:16] <jelmer> wgrant: if you can, that would be great
[01:16]  * jelmer gets some sleep
[01:17] <wgrant> jelmer: Thanks for the fix.
[01:17] <wgrant> Night!
[01:17] <poolie> flacoste: ping?
[02:58] <92AADA0EO> StevenK: hi. did you ping me?
[02:58] <92AADA0EO> stupid nick :-(
[02:59] <StevenK> 92AADA0EO: Yes. To gloat.
[03:00] <92AADA0EO> ns
[03:01] <wallyworld_> StevenK: well, i bet you didn't even watch the game?
[03:01] <StevenK> Right.
[03:02] <wallyworld_> so nothing to gloat about :-P
[03:02] <StevenK> But QLD lost, so that makes me happy.
[03:02] <wallyworld_> but you don't care you said
[03:03] <LPCIBot> Project devel build #809: STILL FAILING in 5 hr 51 min: https://lpci.wedontsleep.org/job/devel/809/
[03:06] <LPCIBot> Project windmill-devel build #232: STILL FAILING in 1 hr 7 min: https://lpci.wedontsleep.org/job/windmill-devel/232/
[04:48] <wgrant> wallyworld_: Hi.
[04:48] <wallyworld_> hello
[04:49] <wgrant> Have you seen bug #797820?
[04:49] <_mup_> Bug #797820: searching for the last name makes still hard to find the person <disclosure> <exploratory-testing> <person-picker> <Launchpad itself:Triaged> < https://launchpad.net/bugs/797820 >
[04:49] <wgrant> The name-prefix matches drown everything else out :/
[04:49] <wallyworld_> no, not yet
[04:49] <wgrant> Even after I reduced the numbers.
[04:49] <wallyworld_> let me have a quick read
[04:49] <wgrant> I think we may need to take them right down :(
[04:51] <wallyworld_> wgrant: so that bug report implies there are lots of names starting with "lange"?
[04:51] <wgrant> wallyworld_: It doesn't say that, but when you try it it's clear.
[04:51]  * wallyworld_ tries it
[04:52] <wgrant> https://pastebin.canonical.com/48548/ are the results of that search.
[04:52] <wgrant> The second column is the rank.
[04:53] <wgrant> jml's FTI match is 0.6, but the karma scales it up.
[04:53] <wgrant> But not high enough.
[04:53] <wgrant> (0.6 is about right for a single name match on a two-part name)
[04:53] <wallyworld_> wgrant: i tried it on lp.net and yes there are lots of usernames starting with "lange". so i think we just de-prioritise name prefix matches
[04:53] <wallyworld_> adjust the scaling factor
[04:54] <wgrant> Adjust the scaling factor, or reduce the hardcoded ranks?
[04:54] <wallyworld_> reduce the hard coded ranks
[04:54] <wallyworld_> for name prefix
[04:54] <wgrant> name prefix = 5, displayname prefix = 4, IRC nick match = 3, email prefix = 2
[04:54] <wgrant> That's where we are now.
[04:55] <wallyworld_> where is exact name match in that list?
[04:55] <wgrant> Exact name match is always 100
[04:55] <wgrant> There is nobody with username 'lange', so it doesn't show up here.
[04:56] <wgrant> (yeah, having it hardcoded to 100 is bad, but it will work unless someone has 10^20 karma, in which case we are probably screwed anyway)
[04:56] <wallyworld_> i don't like 5,4,3,2. i think ranks need to be between 0 and 100, with a scaling factor applied to fti to put it in the 0-100 range
[04:56] <wgrant> Why 0-100 rather than 0-1?
[04:57] <wallyworld_> so that we can make minor adjustments using whole number arithmetic
[04:57] <wgrant> I wonder what happens if we just reduce them to a perfect FTI match... 1.
[04:58] <wgrant> That would probably be good for full email address local part matches.
[04:58] <wgrant> But not general prefix matches.
[04:58] <wallyworld_> i think we need to discriminate between them though, not just make all = 1
[04:58] <wallyworld_> hence the 0-100 range
[04:58] <wgrant> Probably, but how?
[04:59] <wallyworld_> conceptually it's easier to think about adjustments in that range
[04:59] <wgrant> Not for me, but OK :)
[04:59] <wallyworld_> i mean whole number arithmetic is easier than decimals between 0..1
[04:59] <wallyworld_> so we scale the fti result by 100
[04:59] <wgrant> Hardly :)
[05:00] <wgrant> But this is not really crucial to the problem.
[05:00] <wgrant> We don't know where to put these non-FTI matches.
[05:00] <wgrant> That is the important bit.
[05:00] <wallyworld_> sure, but now we have a 0-100 range to play in, we can experiment
[05:01] <wallyworld_> so start by reducing the weight of name prefix matches
[05:01] <wallyworld_> but then again, if we scale fti by 100 maybe no adjustment needed
[05:02] <wallyworld_> to the name prefix weighting if it's say 50
[05:02] <wallyworld_> since 0.6 * 100 = 60 > 50
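The scheme being floated here can be sketched in a few lines of Python. This is only an illustration of the arithmetic under discussion (scale the FTI rank by 100, move the hardcoded non-FTI ranks onto the same 0-100 scale); the function and constant names are invented for the sketch and are not Launchpad's actual code, and the specific rank values are the hypothetical ones mentioned in the conversation.

```python
# Sketch of the proposed 0-100 ranking scale. FTI ranks come back from
# PostgreSQL as floats in 0..1; the non-FTI match ranks are hardcoded.
# All names and values here are illustrative, not the real implementation.

FTI_SCALE = 100

# Hypothetical re-scaled versions of the current hardcoded ranks
# (5, 4, 3, 2 today, compared against raw FTI scores in 0..1).
RANK_NAME_PREFIX = 50
RANK_DISPLAYNAME_PREFIX = 40
RANK_IRC_NICK = 35
RANK_EMAIL_PREFIX = 30
RANK_EXACT_NAME = 100  # exact name match always wins

def effective_rank(fti_rank=None, fixed_rank=None):
    """Return a comparable score on the 0-100 scale."""
    if fixed_rank is not None:
        return fixed_rank
    return fti_rank * FTI_SCALE

# jml's 0.6 FTI match would now outrank a name-prefix match at 50,
# which is the "0.6 * 100 = 60 > 50" point made above:
assert effective_rank(fti_rank=0.6) > effective_rank(fixed_rank=RANK_NAME_PREFIX)
```

The point of the whole-number scale is purely ergonomic: nudging a rank from 50 to 45 is easier to reason about than nudging 0.5 to 0.45 against float FTI scores.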
[05:02] <wgrant> But is that what we want?
[05:03] <wallyworld_> well, i think an exact match on lastname should be > name prefix match
[05:03] <wallyworld_> too bad we can't weight based on how closely the name prefix matches
[05:03] <wgrant> Indeed.
[05:03] <wallyworld_> that would help a lot
[05:04] <wallyworld_> but in any case, if someone types the full lastname and it matches, then surely that should place those results near the top?
[05:04] <wallyworld_> above any name prefix matches
[05:04] <wgrant> What if I'm not typing a full lastname, but a name prefix?
[05:05] <wallyworld_> then you see that result below the display name matches :-)
[05:05] <wallyworld_> i think it will be more common for people to type last name
[05:05] <wallyworld_> if they search on lp id (name), then surely they will type the whole id?
[05:06] <wallyworld_> i would :-)
[05:07] <wgrant> Possibly.
[05:07] <wgrant> Perhaps we should remove the name prefix matching and see how it goes.
[05:07] <wgrant> Not sure about displayname.
[05:07] <wgrant> There are still 9 displayname match results above jml.
[05:07] <wallyworld_> i just think we need to reduce the name prefix weight
[05:08] <wgrant> But I wonder if FTI deals with them OK.
[05:08] <wgrant> Yes, but reduce it to where?
[05:08] <wallyworld_> wgrant: so imho, i think we should: scale fti by 100 and adjust the 5,4,3,2 to 50, 40, 35 or whatever
[05:08] <wgrant> There are tonnes of FTI matches.
[05:08] <wallyworld_> tonnes yes, but all with different rankings
[05:08] <wgrant> Lots at the same rankings.
[05:09] <wallyworld_> so if we scale correctly, they will fall somewhere in the range also occupied by those other matches
[05:09] <wgrant> The visible non-jml FTI matches are 0.75990885..., and there are dozens of them.
[05:09] <StevenK> What FTI match does jml get for 'lange'?
[05:10] <wgrant> 0.6
[05:10] <wgrant> 0.607927
[05:11] <wallyworld_> wonder why a full lastname match is lower than those other results
[05:14] <wgrant> wallyworld_: lange stems to lang.
[05:14] <wgrant> alexander-lang's displayname is Alexander Lang
[05:14] <wgrant> So he gets a double lang match.
[05:14] <LPCIBot> Project db-devel build #639: STILL FAILING in 5 hr 27 min: https://lpci.wedontsleep.org/job/db-devel/639/
[05:14] <wgrant> launchpad_dogfood=> SELECT name, fti, rank(fti, ftq('lange')) FROM person WHERE name IN ('jml', 'alexander-lang');
[05:14] <wgrant>       name      |                       fti                        |   rank
[05:14] <wgrant> ----------------+--------------------------------------------------+----------
[05:14] <wgrant>  jml            | 'jml':1A 'jonathan':2A 'lang':3A                 | 0.607927
[05:15] <wgrant>  alexander-lang | 'alexand':2A,4A 'alexander-lang':1A 'lang':3A,5A | 0.759909
[05:15] <wallyworld_> why does it chop off the last e?
[05:15] <StevenK> So, in this case, I don't think jml has to be *first*, I think he has to be in the top 10
[05:15] <wgrant> Stemming does not work well on names :)
[05:15] <wgrant> StevenK: He should be first.
[05:15] <wgrant> StevenK: He's the only one related to the project.
[05:17] <StevenK> That's a point
[05:17] <wallyworld_> wgrant: so perhaps we can do in memory affiliation sorting once the db query has been done
[05:18] <wallyworld_> use the affiliation to fine tune the result
[05:18] <wgrant> We can do that in the DB just as easily.
[05:18] <wgrant> We have no choice.
[05:18] <wgrant> We have to do it there.
[05:18] <wgrant> Or we have to pull back EVERYTHING from the DB.
[05:20] <wallyworld_> i didn't think the affiliation was easily queryable
[05:21] <wallyworld_> and i mean we use affiliation to fine tune the results
[05:21] <wgrant> "fine tune" is a rather broad term.
[05:21] <wallyworld_> so we still limit the db query to 100
[05:22] <wallyworld_> and then "fine tune" the final order of those 100 records, for whatever meaning of "fine tune"
[05:22] <wgrant> What if there are 500 0.7 FTI matches?
[05:22] <wgrant> We pull back the top 100 sorted by displayname.
[05:22] <wgrant> And bump the affiliated people.
[05:22] <wallyworld_> well we limit the results to 100 now don't we?
[05:22] <wgrant> But the person I wanted was named Zack.
[05:22] <wgrant> So he's number 499.
[05:22] <wgrant> We do. But we do affiliation in the query.
[05:22] <wgrant> All ordering is done in the query.
[05:22] <wgrant> And we take the top 100.
[05:23] <wallyworld_> you mean karma = affiliation?
[05:23] <wallyworld_> i haven't seen the latest query
[05:23] <wgrant> For the purposes of search, yes.
[05:23] <wgrant> ORDER BY rank * COALESCE( (SELECT LOG(karmavalue) FROM KarmaCache WHERE person = Person.id AND product = 10294 AND category IS NULL AND karmavalue > 10), 1) DESC, Person.displayname, Person.name LIMIT 100
[05:23] <wallyworld_> i think that's perhaps a false assumption that karma == affiliation
[05:23] <wgrant> Yes, but it was quick and it's pretty effective.
[05:24] <wgrant> And it handles unofficial relationships well.
[05:24] <wgrant> And it's logarithmic, so infrequent contributors aren't penalised too much.
[05:24] <wgrant> And if you're searching for someone to assign them to a bugtask, that's probably been done before, so they have karma.
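The ORDER BY expression quoted above can be rendered in Python to show how the karma multiplier interacts with the FTI rank. A minimal sketch, assuming PostgreSQL's `LOG()` semantics (base 10) and the `COALESCE(..., 1)` fallback when a person has no qualifying karma row; `search_score` is an invented name, not a Launchpad function.

```python
import math

# Rough Python rendering of:
#   rank * COALESCE((SELECT LOG(karmavalue) FROM KarmaCache
#                    WHERE ... AND karmavalue > 10), 1)
# PostgreSQL's LOG() is base 10, and the subquery only matches rows
# with karmavalue > 10, so the multiplier is always >= 1: karma can
# only boost a result, never penalise it.

def search_score(fti_rank, karmavalue=None):
    if karmavalue is not None and karmavalue > 10:
        multiplier = math.log10(karmavalue)
    else:
        multiplier = 1.0
    return fti_rank * multiplier

# jml's 0.6 FTI match with (say) 10000 karma on the project beats
# a 0.76 match from someone with no karma on it:
assert search_score(0.607927, karmavalue=10000) > search_score(0.759909)
```

The logarithm is why infrequent contributors aren't penalised too much, as noted above: going from 100 karma to 10000 karma only doubles the multiplier (2 vs 4).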
[05:24] <wallyworld_> ok. so we have affiliation in the query. we "just" need to tweak the ranking factors :-)
[05:25] <wallyworld_> but it will be hard
[05:25] <wallyworld_> for every search we "fix", we will "break" another
[05:25] <wgrant> Yes.
[05:26] <wgrant> I question the utility of the prefix matches.
[05:26] <wallyworld_> so i think so long as the intended person appears on the first page every time, that will suffice
[05:26] <wgrant> But perhaps there is a reason.
[05:26] <wgrant> Yes.
[05:26] <wgrant> The correct person will rarely be first.
[05:26] <wgrant> Because exact name matches always win.
[05:26] <wgrant> and I don't think that's negotiable.
[05:26] <wallyworld_> yes
[05:27] <wallyworld_> if we are unsure about prefix matches we just need to devalue them
[05:27] <wallyworld_> but not exclude them totally
[05:27] <wgrant> Devaluation is pretty much exclusion, unless there are no FTI matches at all.
[05:27] <wgrant> Mm.
[05:27] <wgrant> Although I guess relevant prefix matches will be bumped up a bit.
[05:28] <wallyworld_> surely bad fti matches will be < 0.3 or whatever
[05:28] <wallyworld_> so prefix matches can be made to be higher than 0.5 or whatever
[05:28] <wallyworld_> so that only close fti matches 'win"
[05:28] <wgrant> "close"
[05:28] <wallyworld_> :-)
[05:28] <wgrant> We have to choose arbitrary constants :/
[05:29] <wallyworld_> yep
[05:29] <wallyworld_> unless we can run the data through some statistical analysis
[05:29] <wgrant> s/statistical/magical/
[05:30] <wallyworld_> well, if we had expected search terms and results, and a lot of those, then we could do something. but we don't
[05:30] <wgrant> Even if we did, there will always be other cases.
[05:30] <wallyworld_> outliers. sure. but that's unavoidable
[05:31] <wgrant> What if the cases we test are the outliers?
[05:31] <wgrant> :)
[05:31] <wallyworld_> up to us to make sure they're not
[05:32] <wallyworld_> i think there's wiggle room. desired person on first page seems reasonably doable hopefully. or at the very least, on page 2.
[05:32] <wallyworld_> i'd like to be on page 3 personally :-)
[05:37] <lifeless> wallyworld_: *blind*
[05:39] <wallyworld_> lifeless: maybe we should do a naked geeks calendar for charity :-) i'll be Mr December.
[05:39] <StevenK> And none of us will enjoy Christmas again
[05:39] <lifeless> -ever-
[05:40]  * StevenK high fives lifeless 
[05:42]  * wallyworld_ lols
[05:56] <wgrant> jtv: Around yet?
[05:56] <jtv> far too round, thank you
[05:56] <jtv> Had to go make some travel preps etc.
[05:56] <wgrant> jtv: Ah, hi! Bug #797151 needs QA, when you have time.
[05:56] <_mup_> Bug #797151: Display DSD copy errors differently from comments <derivation> <qa-needstesting> <Launchpad itself:Fix Committed by jtv> < https://launchpad.net/bugs/797151 >
[05:56] <jtv> thanks for bringing it to my attention.  Won't take long.
[05:57] <wgrant> mwhudson: How much effort is setting up importds for qastaging?
[05:57] <wgrant> mwhudson: Is it just a matter of configuring the importd with a qastaging tree too?
[06:01] <mwhudson> wgrant: um, probably
[06:01] <mwhudson> the process is documented, would you believe
[06:01] <wgrant> I don't appreciate your lies.
[06:02] <mwhudson> wgrant: https://wiki.canonical.com/InformationInfrastructure/OSA/LPHowTo/SetUpCodeImportSlave
[06:02] <wgrant> I don't appreciate lies with evidence, either.
[06:02] <wgrant> But OK.
[06:02] <mwhudson> most of those steps will have been done already for the staging importds, i don't know what modifications will be needed for qastaging
[06:03] <wgrant> mwhudson: There's no master setup needed? It just does xmlrpc?
[06:03] <mwhudson> wgrant: yeah
[06:03] <wgrant> Great, thanks.
[06:03] <mwhudson> the production importds connect to the database for oops-prune
[06:04] <wgrant> Well, at the moment some of them don't.
[06:04] <wgrant> And they cronspam about it :)
[06:04] <spm> pear? it should now.
[06:04] <wgrant> Indeed! It stopped a week ago.
[06:04] <wgrant> This is good.
[06:05] <spm> yah. just needed some rules to allow those connections thru and we were gold.
[06:10] <jtv> Anyone mind if I sync some packages on staging's oneiric?  wgrant, StevenK?
[06:10] <lifeless> wgrant: do me a favour?
[06:10] <jtv> Oh darn, can't do it on staging — not privileged
[06:10] <lifeless> wgrant: file bugs for any db access you run across that isn't from an appserver, db-stats or security/py/upgrade.py/fti.py.
[06:11] <StevenK> lifeless: The publisher? :-)
[06:11] <wgrant> lifeless: Um, you know we have dozens of scripts, right? :)
[06:11] <wgrant> Like StevenK said.
[06:11] <StevenK> loganberry
[06:12] <StevenK> process-accepted, queue, change-overrides, process-death-row ...
[06:12] <wgrant> jtv: StevenK or I can do it there, but I don't think the flag is enabled.
[06:12] <jtv> ahhh
[06:12] <jtv> nm I'll stick with df
[06:14] <jtv> wgrant, StevenK: mind if I restart the app server on df?
[06:14] <StevenK> Go ahead
[06:14] <wgrant> Doit.
[06:15] <jtv> thx
[06:15] <jtv> the +localpackagediffs is currently oopsing on df because it picks up my tal change on the fly, but needs a restart for the code change
[06:18] <jtv> wgrant: it's out of your way.
[06:19] <huwshimi> wallyworld_: Can't believe I missed the game last night
[06:20] <jtv> wgrant: did you Q/A my update to the queue.py script?  Julian said it was broken.
[06:21] <wgrant> jtv: It seemed to work for me.
[06:21] <wgrant> jtv: I initially thought the code was wrong, but it seems OK.
[06:22] <jtv> Any idea whether you made it print out any package uploads with PCJs attached?
[06:22] <jtv> That's the part that was in question.
[06:22] <wgrant> No, but they don't exist yet, so I don't care :)
[06:23] <jtv> They don't exist?
[06:23] <lifeless> wgrant: yes, I just mean whenever you think of one.
[06:24] <lifeless> wgrant: particularly ones where we're close to nuking all db access
[06:24] <wgrant> jtv: PCJ PUs don't exist in any real environment yet, and they are not close to done.
[06:24] <jtv> wgrant: in the current code we create PUs with PCJs by creating a fresh PPCJ, and .run()ing it.
[06:25] <jtv> wgrant: that's no reason to deploy a branch that may or may not have spectacularly failed Q/A!
[06:25] <wgrant> jtv: It's not going to affect production.
[06:25] <wgrant> So it has not failed QA.
[06:25] <wgrant> Production does not yet use PCJs.
[06:25] <jtv> Until these jobs exist, and suddenly it may break.
[06:25] <jtv> The whole point is to notice that _before_ it hits production.
[06:26] <wgrant> There is a fuckload more QA to do of the whole thing before PCJs are going anywhere near production.
[06:26] <wgrant> Once it's done.
[06:26] <wgrant> And working.
[06:26] <jtv> Yes, but this could be an easy thing to miss at that point.
[06:27] <wgrant> It's one of the more obvious things to test.
[06:27] <jtv> And it can be tested now.
[06:27] <wgrant> It can be.
[06:28] <jtv> Anyway, I think I'm conflating qa-tagging with q/a
[06:28] <wgrant> Probably.
[06:29] <jtv> It'd be really nice to have that process cleaned up.  You're right that it can be safely deployed; I've still got the ingrained habit of qa-needstesting meaning what it did.
[06:30] <wgrant> It needs fixing, yes.
[06:30] <jtv> deploy-needstesting / deploy-ok / deploy-bad
[07:16] <LPCIBot> Project parallel-test build #42: STILL FAILING in 1 hr 3 min: https://lpci.wedontsleep.org/job/parallel-test/42/
[07:16] <StevenK> stub: O hai!
[07:19] <stub> yo
[07:19] <StevenK> stub: Mind looking at https://code.launchpad.net/~stevenk/launchpad/db-add-bprc/+merge/64783 for a db review when you have a sec?
[07:20] <stub> k
[07:23] <stub> StevenK: So we don't stuff the files in the Librarian because they already exist somewhere on disk, and we can find that from the filename and the bpr?
[07:24] <stub> oic. The filename is the data, and we generate a file containing all the filenames
[07:25] <StevenK> Right
[07:25] <stub> The tablename makes me think 'This row represents the contents of a BinaryPackageRelease', which would be a binary blob or maybe the TOC of a tarball.
[07:25] <StevenK> It will be many rows
[07:26] <StevenK> Which is why the primary key is both columns
[07:26] <wgrant> This is the TOC of a tarball.
[07:27] <stub> Which answers my next question on why this is a separate table too (not that it would be a bad idea even if it was 1:1, just interested in reasoning)
[07:27] <lifeless> StevenK: stub: denorm the filename
[07:28] <StevenK> Hm?
[07:28] <lifeless> theres only ~2M unique filenames in Ubuntu
[07:28] <StevenK> Oh
[07:28] <lifeless> sorry
[07:28] <lifeless> I mean 'normalise the filename'
[07:28] <StevenK> I like that idea even less
[07:28] <stub> normalize
[07:29] <wgrant> It will save lots of disk.
[07:29] <stub> Hmm... it's always a tough call. It might save a little space, but will be slower and harder to query.
[07:29] <wgrant> Probably a good idea.
[07:29] <wgrant> But...
[07:29] <stub> How much we looking at?
[07:29] <wgrant> Makes it hard to search.
[07:29] <wgrant> This table will be huge.
[07:29] <StevenK> stub: MANY MANY LOTS
[07:29] <lifeless> well
[07:29] <wgrant> Probably >100m rows.
[07:29] <lifeless> it changes the search dynamic
[07:29] <lifeless> sometimes its better
[07:29] <lifeless> sometimes its worse.
[07:29] <wgrant> No way we can do it before we have more disk.
[07:29] <StevenK> At this point, I only want the table there. I will look at populating it later
[07:30] <lifeless> conflictchecker has this table
[07:30] <lifeless> done normalised style
[07:30] <stub> StevenK: Will these files already be in the librarian?
[07:30] <lifeless> it can find conflicting filenames -extremely- fast
[07:30] <lifeless> stub: no, they are the unpacked contents
[07:31] <StevenK> stub: No, they won't.
[07:31] <lifeless> StevenK: so, rather than liking or not liking the table
[07:31] <StevenK> stub: It will be a list of the contents of the files in the librarian
[07:31] <lifeless> StevenK: lets analyze. What are the queries you want to do on it?
[07:32] <StevenK> lifeless: In pseudo-SQL: SELECT filename FROM bprc WHERE binarypackagerelease IN (lots);
[07:32] <lifeless> StevenK: so for that, we'll be more efficient with two tables.
[07:32] <StevenK> With some possible bulk loading
[07:32] <lifeless> StevenK: because 8 bytes are << 50 bytes.
[07:33] <lifeless> StevenK: the whole 2-4M unique filenames can page into memory; this can't happen with the table you have today because the 100's of M of rows will be huge.
[07:34] <stub> Although we don't care if 100's of millions of filenames fit - we only care if the index fits.
[07:34] <StevenK> A id, filename table, and BPRC links between them?
[07:34]  * stub tries to recall if each node in the btree contains the text
[07:35] <lifeless> stub: agreed, but the PK index will be large (because it has to have every filename hundreds or thousands of times)
[07:35] <lifeless> stub: yes
[07:35] <lifeless> bah
[07:35] <lifeless> StevenK: yes
[07:35] <StevenK> I doubt it is thousands
[07:35] <lifeless> debian + ubuntu + ppa builds
[07:35] <lifeless> dailies of some packages, with a couple of series
[07:36] <StevenK> That's still only 500 to 600
[07:36] <stub> Just stuff it in Cassandra and stop bothering me before my coffee has kicked in.
[07:36] <lifeless> StevenK: I'm sure we will have hundreds; we may have thousands for some.
[07:36] <lifeless> StevenK: 600 * 2M = 1.2B :) I don't think 600 is the common case though.
[07:36] <lifeless> stub: :)
[07:37] <lifeless> StevenK: the point is, you aren't building a search schema, you are building an export schema.
[07:38] <StevenK> lifeless: If it's two tables, it's much harder to populate, too.
[07:38] <lifeless> not at all
[07:38] <stub> For some values of 'much'...
[07:38] <StevenK> It can no longer be done in one query
[07:39] <stub> StevenK: So technically, it could, but we don't want to go there
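The two-table design being argued for above can be sketched concretely. This is an illustrative sketch only, using SQLite for brevity: the table and column names are placeholders (the real naming is still being debated below), and the two-step "intern then link" population is exactly why StevenK notes it can no longer be done in one query.

```python
import sqlite3

# Normalised layout: one table interning each unique path, and a link
# table from binarypackagerelease to path. Names here are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE binarypackagepath (
        id INTEGER PRIMARY KEY,
        path TEXT NOT NULL UNIQUE
    );
    CREATE TABLE binarypackagereleasecontents (
        binarypackagerelease INTEGER NOT NULL,
        binarypackagepath INTEGER NOT NULL
            REFERENCES binarypackagepath(id),
        PRIMARY KEY (binarypackagerelease, binarypackagepath)
    );
""")

def add_file(bpr_id, path):
    # Intern the path (no-op if already present), then link the release.
    conn.execute(
        "INSERT OR IGNORE INTO binarypackagepath (path) VALUES (?)", (path,))
    (path_id,) = conn.execute(
        "SELECT id FROM binarypackagepath WHERE path = ?", (path,)).fetchone()
    conn.execute(
        "INSERT INTO binarypackagereleasecontents VALUES (?, ?)",
        (bpr_id, path_id))

add_file(1, "/usr/bin/yelp")
add_file(2, "/usr/bin/yelp")  # same path stored only once

# StevenK's pseudo-SQL query, joined through the interning table:
rows = conn.execute("""
    SELECT bpp.path
    FROM binarypackagereleasecontents bprc
    JOIN binarypackagepath bpp ON bpp.id = bprc.binarypackagepath
    WHERE bprc.binarypackagerelease IN (1, 2)
""").fetchall()
```

The trade-off is as stated in the discussion: the link table rows shrink to a pair of integers, at the cost of a join (and a two-step insert) on every access.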
[07:42] <stub> I suspect lifeless is right in that we want a separate table. It's 100,000,000 integers + 2,000,000 strings vs 100,000,000 strings. I could do real calculations if we knew the average filename length.
[07:43] <wgrant> I also concur with lifeless.
[07:43] <wgrant> FWIW
[07:44] <StevenK> 38 characters, picking on yelp
[07:44] <stub> average or max?
[07:44] <lifeless> StevenK: average length?
[07:45] <wgrant> I suspect they're going to be well over 4 characters :)
[07:45] <StevenK> That is the average
[07:45] <stub> wow.
[07:45] <lifeless> if you want, I can dig up my conflict checker credentials and tell you the histogram of sizes for all ubuntu uploads
[07:45] <stub> So yeah, we would break even around an average length of 3 I think...
[07:45] <wgrant> stub: These are absolute FS paths.
[07:45] <StevenK> So it's 3, we have a problem
[07:45] <stub> Does that mess up the 2,000,000 unique guestimate?
[07:46] <wgrant> No.
[07:46] <wgrant> But it's at least 3.2 million.
[07:46] <StevenK> Suggestions for a filename table? If I use BinaryPackageReleaseFilename, wgrant will hurt me.
[07:46] <wgrant> But it's roughly around that order of magnitude.
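The back-of-envelope calculation stub alludes to can be done with the numbers from the conversation: roughly 100M rows, around 3.2M unique paths, and a measured average path length of 38 characters. Per-tuple and index overheads are ignored, so these are order-of-magnitude figures only, not real PostgreSQL sizing.

```python
# Space trade-off sketch for normalised vs denormalised path storage,
# using the figures quoted in the discussion. Overheads ignored.

ROWS = 100_000_000       # (bpr, path) link rows, wgrant's ">100m" estimate
UNIQUE_PATHS = 3_200_000 # wgrant's "at least 3.2 million"
AVG_LEN = 38             # StevenK's measured average, picking on yelp
ID_BYTES = 4             # a 4-byte integer reference; breakeven is roughly
                         # where AVG_LEN falls to this size, hence "around 3"

denormalised = ROWS * AVG_LEN                          # path repeated per row
normalised = ROWS * ID_BYTES + UNIQUE_PATHS * AVG_LEN  # ids + interned paths

assert normalised < denormalised
```

With these inputs the denormalised table carries about 3.8 GB of repeated text, against roughly 0.5 GB of integer references plus ~120 MB of unique paths, which is why an average length of 38 makes the normalised design a clear win.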
[07:46] <wgrant> I like Contents.
[07:46] <wgrant> But I won't murder over Filename :)
[07:47] <lifeless> StevenK: so the filename table is just filenames
[07:47] <wgrant> Oh.
[07:47] <wgrant> Right, that table.
[07:47]  * stub checks up on toast tables
[07:47] <wgrant> Well, paths, not filenames.
[07:47] <lifeless> sure
[07:48] <lifeless> so anyhow, it's an optimisation table not a semantic table really today; BinaryPaths ? PackagePaths? whatever
[07:48] <LPCIBot> Project windmill-db-devel build #395: STILL FAILING in 1 hr 4 min: https://lpci.wedontsleep.org/job/windmill-db-devel/395/
[07:49] <lifeless> stub: funky test failure
[07:49] <StevenK> lifeless, stub: http://pastebin.ubuntu.com/627812/
[07:49] <wgrant> InternedString? :)
[07:50] <StevenK> Hm?
[07:50] <lifeless> wgrant: diaf ? :>
[07:50] <wgrant> lifeless: :(
[07:50] <wgrant> I was only partially joking.
[07:50] <lifeless> I know. Uhm.... needs thought.
[07:51] <stub> BinaryPackageReleaseContentsFilename.... bleh
[07:51] <lifeless> and I have a virus or someshite
[07:51] <StevenK> stub: BPRCF makes me sad
[07:51] <wgrant> Should I be concerned that I immediately wondered if you were running Windows, and didn't consider it might be a biological entity?
[07:51] <StevenK> I've gone with BinaryPackagePaths
[07:51] <stub> It makes me sad too. In hindsight, there would be common acronyms we use like SPR, BPR, SPN
[07:52] <wgrant> But Code destroyed SPR.
[07:52] <StevenK> This gives us BPP and BPRC
[07:52] <StevenK> They overloaded it
[07:58] <lifeless> StevenK: how about PRC ?
[07:58]  * lifeless just wants to get country codes into the codebase
[07:59] <StevenK> Package is ambiguous.
[08:05] <lifeless> yes, but it would be such a cool acronym
[08:07] <StevenK> "Meh" :-P
[08:07] <StevenK> lifeless, stub: Changes pushed, diff updated
[08:32] <lifeless> stub: I don't see those tests
[08:32] <stub> lifeless: eh?
[08:32] <lifeless> you had 3 failures
[08:32] <lifeless> only one of the tests matches a -t parameter to bin/test
[08:37] <lifeless> stub: anyhow, lib/canonical/launchpad/doc/unicode_csv.txt passes for me
[08:37] <stub> Hmmm... and I thought it was weird just because the tests were totally unrelated...
[08:37] <lifeless> stub: does it pass locally for you ?
[08:37] <lifeless> ah, I've got the test_encryptor one to run
[08:38] <lifeless> ok, all three pass
[08:38] <LPCIBot> Project devel build #810: STILL FAILING in 5 hr 35 min: https://lpci.wedontsleep.org/job/devel/810/
[08:39] <lifeless> it could be a dependency skew, or a locale thing
[08:40] <stub> StevenK: bprc.path.path :-(
[08:40] <stub> lifeless: yes, unicode_csv.txt passes locally
[08:41] <StevenK> unicode_csv.txt also fails in Jenkins
[08:41] <stub> Ahh...
[08:41] <lifeless> did the test environment change recently or something?
[08:41] <lifeless> like, did someone make the VMs run with a UTF8 locale?
[08:42] <wgrant> Hmm, it only just failed now.
[08:42] <wgrant> "Failing for the past 1 build"
[08:42] <StevenK> stub: Yes, bprc.path.path makes me sad
[08:42] <lifeless> stub got a failure 9 hours back from ec2
[08:42] <stub> and 4 hours before that too!
[08:42] <wgrant> Hm. Jenkins lies.
[08:42] <wgrant> It failed in 809 too.
[08:43] <stub> Anyone else seeing issues with ec2 test/land though?
[08:43] <stub> Maybe I'm the only one trying to use it for db-devel?
[08:43] <lifeless> stub: so, this is clearly unreleated.
[08:44] <lifeless> stub: lp-land time.
[08:44] <wgrant> I guess it's probably related to sinzui's YUITestLayer thing.
[08:44] <wgrant> But I'm not sure when it first appeared...
[08:45] <wgrant> A-ha.
[08:45] <wgrant> Yes.
[08:45] <wgrant> Those three tests first failed in the build with the new YUITestLayer.
[08:51] <jtv> lifeless, can I trouble you for a review?  It's nice & short: https://code.launchpad.net/~jtv/launchpad/bug-394645/+merge/64787
[08:51] <lifeless> wgrant: do you have a feeling for Time to fix?
[08:51] <wgrant> lifeless: NFI
[08:51] <wgrant> I can't even start the test suite now.
[08:51] <lifeless> wgrant: roll it back then, if you would
[08:51] <wgrant> gi.RepositoryError: Requiring namespace 'Gtk' version '2.0', but '3.0' is already loaded
[08:52] <lifeless> win?
[08:52] <lifeless> wgrant: are you on oneiric or natty ?
[08:52] <wgrant> Natty.
[08:52] <lifeless> or lucid?
[08:52]  * lifeless hugs his dev vm
[08:52] <adeuring> good morning
[08:53]  * stub wonders if we should use a view to hide bprc.path.path as an implementation detail
[08:55] <jtv> hi adeuring!
[08:55] <adeuring> hi jtv!
[08:55] <jtv> and hi dpm as well :)
[08:57] <dpm> hey jtv :)
[08:58] <jml> good morning
[08:58] <jtv> good morning jml
[09:02] <jml> I thought huwshimi would be around.
[09:04] <jtv> jml: your wish, mylord..?
[09:04] <jml> :P
[09:12] <jml> huwshimi: http://www.timeanddate.com/worldclock/meetingtime.html?day=16&month=6&year=2011&p1=165&p2=240&p3=-1&p4=-1
[09:17] <jml> huwshimi: https://dev.launchpad.net/Projects/Disclosure
[09:28] <mrevell> Hello!
[09:41] <jml> mrevell: hi
[09:41] <mrevell> Hello jml
[09:42] <lifeless> jml: bug 797697 is an example of bugs not quite fitting our critical rules, which I think should jump queue ... what do you think ?
[09:42] <_mup_> Bug #797697: Creating a bug in a private project via email does not make the bug private <Launchpad itself:Triaged> < https://launchpad.net/bugs/797697 >
[09:44] <jml> lifeless: looking.
[09:45] <jml> lifeless: I think it's a bug that doesn't quite fit our critical rules that should jump queue
[09:46] <jml> lifeless: as a practical matter, it might be addressed by the Teal squad sooner than a maintenance squad.
[09:48] <rvba> wgrant: bigjools Do you have a moment to talk about the duplication of binary packages in my multi parent initialization branch?
[09:49] <wgrant> Hi.
[09:49] <rvba> wgrant: Hi William!
[09:50] <bigjools> wgrant: we'd like you to expand on the comment you made in the MP that is still unaddressed
[09:50] <bigjools> if only I could find the MP
[09:50] <rvba> https://code.launchpad.net/~rvb/launchpad/init-series-bug-789091-devel/+merge/63676
[09:50] <rvba> bigjools: ^
[09:50] <bigjools> thanks
[09:51] <bigjools> "It seems to me that there's no binary de-duplication at all"
[09:52] <jml> bigjools: btw, did you see bryce's email to launchpad-dev@ about Derived distros worksheet?
[09:52] <jml> jelmer: hello
[09:52] <bigjools> jml: I have now
[09:53] <jelmer> jml: 'morning
[09:55] <jml> jelmer: how do you wish to communicate, mortal?
[09:55] <lifeless> wgrant: how do I put something in teals queue?
[09:55] <wgrant> bigjools, rvba: no_duplicates prevents source conflicts. It doesn't prevent binary conflicts, AFAICT.
[09:55] <wgrant> lifeless: Poke sinzui, I guess.
[09:56] <wgrant> lifeless: You could probably throw something on the board.
[09:56] <rvba> wgrant: correct.
[09:56] <wgrant> But that's not really happened before.
[09:56] <jelmer> jml: by some magic coincidence, mumble or skype both appear to work for me at the moment. I'm already on mumble
[09:56] <wgrant> rvba: That's Really Bad™
[09:56] <bigjools> wgrant: oh, where different sources generate the same binary
[09:56] <wgrant> bigjools: Or even the same source, right?
[09:56] <jml> jelmer: mumble hangs for me. skype?
[09:57] <lifeless> wgrant: bug 798081
[09:57] <_mup_> Bug #798081: unicode tests failing since yuitest landing <build-infrastructure> <Launchpad itself:Triaged by sinzui> < https://launchpad.net/bugs/798081 >
[09:57] <bigjools> wgrant: same source?
[09:57] <wgrant> bigjools: The dedup code is in _clone_source_packages.
[09:57] <wgrant> That does not provide input to _clone_binary_packages, AFAICT.
[09:57] <jelmer> jml: Sure - I'm jvernooij
[09:58] <bigjools> ah ISWYM
[09:59] <wgrant> What, "that there's no binary de-duplication at all"? :)
[09:59] <bigjools> no, I was wondering about the "from the same source" comment
[10:00] <bigjools> having not looked at that code for a long time
[10:02] <rvba> I suppose I could add the same kind of trick that I used in _clone_source_packages to _clone_binary_packages to avoid duplicates ...
[10:02] <rvba> Since the copies are done in sequence AFAIUI
[10:03] <rvba> Are the db constraints enough to ensure all is well?
[10:04] <wgrant> rvba: No :(
[10:04] <wgrant> rvba: They would be huge slow multi-table triggers.
[10:05] <lifeless> rvba: db constraints never are - because you'd trigger an exception in the db
[10:05] <lifeless> rvba: (at best)
[10:06] <jml> jelmer: http://launchpad.leankitkanban.com/Boards/Show/12720553
[10:06] <lifeless> sinzui: I've assigned a bug to you - its a yuitestlayer regression :(
[10:06] <lifeless> sinzui: making ec2 & jenkins fail
[10:07] <jml> jelmer: https://dev.launchpad.net/PolicyAndProcess/FeatureDevelopment
[10:07] <rvba> lifeless: wgrant: I think you misunderstood me: what I meant was: can I just take a look at the constraints of the binary package table and enforce those in the code?
[10:08] <stub> lifeless: Any objections to a bzr branch to maintain database patch numbers, or would a wiki page be better?
[10:08] <jml> jelmer: https://dev.launchpad.net/Projects
[10:08] <lifeless> stub: We could just use trunk :)
[10:08] <stub> lp:~launchpad/+junk/dbpatches ?
[10:08] <rvba> I was not even thinking about letting the db do the job :)
[10:08] <lifeless> stub: but I have no objections to $whatever-you-want-to-do
[10:09] <lifeless> stub: there is a number on the policy wiki page
[10:09] <stub> lifeless: Which trunk? devel or db-devel?
[10:09] <lifeless> stub: its a bit fiddly to edit
[10:10] <lifeless> stub: {yes} (db- for -0, devel for -!0)
[10:10] <stub> lifeless: Yes. I want something better that is shared, and can't be arsed retrieving my google docs password.
[10:10] <wgrant> rvba: No :(
[10:10] <stub> lifeless: no, we are not splitting our documentation on what we plan to land into two.
[10:11] <rvba> wgrant: hum ... after taking a look at the table there is no constraint ...
[10:11] <wgrant> rvba: There are far more complex constraints that cannot be feasibly enforced in the DB.
[10:11] <wgrant> rvba: Right, because it's cross-table.
[10:12] <rvba> wgrant: Right ... so, what do you reckon should be enforced on the data during the copy? What kind of "uniqueness"?
[10:12] <wgrant> rvba: A very good question, and the reason I didn't provide a more concrete answer in the review.
[10:13] <wgrant> rvba: We should ensure that there are no duplicates within the copy. If it's not a new archive, we also need to ensure there are no file conflicts with anything in the archive. And we should also ensure that there are no orphaned binaries.
[10:14] <rvba> wgrant: ok, a distinct should take care of "no duplicates within the copy"
[10:15] <wgrant> rvba: Probably, yes.
[10:15] <rvba> wgrant: what do you mean exactly by "no file conflicts"?
[10:15] <wgrant> rvba: I can't have two different foo_1.2-3_i386.deb files in the same archive.
[10:16] <rvba> wgrant: no orphaned binaries: I'll see what can be done ... but unless I'm mistaken bigjools sayed this is Bad But Not Too Bad™ ... at least for now.
[10:16] <rvba> arf s/sayed/said/
[10:16] <wgrant> rvba: We really want to avoid initialising an archive into a bad state, I think.
[10:16] <wgrant> It's not fatal, unlike the other two.
[10:16] <wgrant> But it's not good.
[10:17] <rvba> wgrant: Understood ... but I'm not sure it's easy to do with a simple sql query
[10:17] <rvba> wgrant: and the initialisation already takes ages ...
[10:17] <bigjools> distinct on the (name, sha1) I think
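A minimal sketch of the "distinct on the (name, sha1)" idea, assuming the joined rows are available as plain dicts; in the real code this would more likely be a DISTINCT in the cloning SQL, and the field names here are illustrative:

```python
# Keep one publication per (binary package name, file sha1) pair.
# Rows are stand-ins for the joined BPPH results, not the real LP schema.
def dedupe_binaries(rows):
    seen = set()
    result = []
    for row in rows:
        key = (row["name"], row["sha1"])
        if key not in seen:
            seen.add(key)
            result.append(row)
    return result
```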
[10:17] <LPCIBot> Project windmill-db-devel build #396: STILL FAILING in 1 hr 16 min: https://lpci.wedontsleep.org/job/windmill-db-devel/396/
[10:17] <bigjools> ah yes StevenK, around?
[10:19] <lifeless> stub: anyhow, a +junk branch sounds fine, but I don't know if teams are permitted +junk
[10:19] <stub> lifeless: It worked...
[10:19] <lifeless> stub: cool
[10:19] <lifeless> stub: can you subscribe me to the branch ?
[10:19] <jelmer> jml: One more quick question - do I keep the current LEP as is (since it's already approved), or can I remove the bits from there that I've moved to the new LEP?
[10:19] <lifeless> stub: also I had an idea
[10:19] <jml> jelmer: remove the bits. leave it as RTC
[10:19] <lifeless> stub: can we remove the INSERT INTO LaunchpadRevision ... line from patches ?
[10:20] <lifeless> stub: and have the applying wrapper supply it ?
[10:20] <jelmer> jml: thanks
[10:20] <jml> lifeless: teams are permitted junk.
[10:21] <adeuring> bigjools: ping
[10:23] <rvba> wgrant: bigjools I think I've enough to get going with this duplication thing ... thanks.
[10:23] <bigjools> adeuring: hello
[10:23] <bigjools> rvba: great
[10:23] <adeuring> bigjools: do you have time to talk about bug 793630?
[10:23] <stub> lifeless: Yes, but there were reasons I had it in there explicitly.
[10:23] <_mup_> Bug #793630: better cronscript setup: remove hard-coded paths from cronscripts/publishing/cron.base-ppa <Launchpad itself:Triaged> < https://launchpad.net/bugs/793630 >
[10:24] <lifeless> stub: do they still apply? Is the trade off still worth it?
[10:24] <stub> lifeless: I like having it encoded in the content for when things get passed around or applied through means other than update.py
[10:25] <stub> lifeless: Also, it is a bit of a checksum against typos and the version number is important.
[10:25] <stub> c/version/patch
[10:25] <lifeless> stub: it being important is why I'm wondering about removing the duplication
[10:26] <StevenK> bigjools?
[10:26] <stub> Removing duplication means scope for error when it needs to be manually entered, such as when I'm applying a db patch live at 4am while drunk.
[10:27] <stub> c/when/if :-)
[10:27] <lifeless> stub: there seem to be a -raft- of issues in that proposition :0
[10:28] <stub> So I don't think the duplication solves anything, but I think it helps a little.
[10:29] <stub> And I don't think the overhead is bad and ensures any patch renumbering is done carefully.
[10:30] <lifeless> stub: does upgrade.py cross-check the patch number?
[10:30] <stub> yes
[10:30] <lifeless> kk
[10:31] <stub> It was deliberately done this way, but I have no idea if my original reasoning matches my current reasoning.
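A hypothetical sketch of the cross-check stub describes: the patch file carries its own number in the INSERT line, and the applying wrapper verifies it against the filename before applying. The filename pattern and regexes are illustrative, not the exact update.py logic:

```python
import re

def check_patch_number(filename, sql_text):
    """Return True iff the number embedded in the SQL matches the filename."""
    # e.g. filename "patch-2208-75-0.sql"
    m = re.search(r"patch-(\d+)-(\d+)-(\d+)\.sql$", filename)
    if m is None:
        return False
    expected = tuple(int(g) for g in m.groups())
    # e.g. "INSERT INTO LaunchpadRevision VALUES (2208, 75, 0);"
    m = re.search(
        r"INSERT INTO LaunchpadRevision VALUES \((\d+), (\d+), (\d+)\)",
        sql_text)
    return m is not None and tuple(int(g) for g in m.groups()) == expected
```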
[10:31] <adeuring> bigjools: ?
[10:31] <bigjools> sorry had a call
[10:32] <rvba> bigjools: I see a sha1 field only in libraryfilecontent ... and I'm not really sure how to find those from binarypackagepublishinghistory ... I suppose making sure (binarypackagename, version) of the related binarypackagerelease is unique is another way to avoid this kind of duplicate ...?
[10:32] <bigjools> StevenK: I wanted to chat briefly about where the packagecloner could be going slow, I think you identified it as the slow bit in IDS right?
[10:32] <bigjools> adeuring: let me take a quick look
[10:33] <StevenK> bigjools: Yes, it's the package cloner. But I have no idea about its internals
[10:33] <StevenK> bigjools: Perhaps it's the .createMissingBuilds() calls
[10:33] <bigjools> StevenK: ok.  it's a job for a PG expert then :)
[10:34] <bigjools> StevenK: no, on DF I observed a VERY long INSERT
[10:34] <wgrant> StevenK: But IDS shouldn't be using createMissingBuilds
[10:34] <wgrant> DF SPPH (I think) inserts often take a long time.
[10:34] <wgrant> You can see it in p-u too.
[10:34] <wgrant> Creating PENDING publication record.
[10:34] <wgrant> [... wait for a couple of minutes]
[10:34] <StevenK> The PackageCloner calls .createMissingBuilds()
[10:34] <wgrant> But it's possible cMB.
[10:34] <bigjools> it was BPPH
[10:35] <bigjools> rvba: one sec, the join is easy
[10:36] <bigjools> what's the normal way of scripts finding the root of the code tree?
[10:36] <StevenK> canonical.config.root ? Or something?
[10:36] <bigjools> in a shell script
[10:37] <bigjools> hmmm it comes from the crontab I think
[10:37] <StevenK> pwd?
[10:37] <bigjools> adeuring: so, in this case I think we need to shift the definition of LPCURRENT to something that comes from the crontab
[10:37] <rvba> bigjools: yeah, but if we can avoid joining too many tables it's all for the best so my question is: is the uniqueness of binarypackagerelease(binarypackagename, version) sufficient?
[10:38] <bigjools> rvba: no it's not
[10:38] <adeuring> bigjools: yeah, that's what I wanted to ask: the regular config stuff would result in a hen/egg problem, wouldn't it?
[10:38] <bigjools> rvba: BPPH -> BPR -> BPF -> LFA
[10:38] <rvba> bigjools: ok, thanks!
[10:38] <bigjools> adeuring: LPCONFIG is already set in the crontab
[10:39] <bigjools> adeuring: unless you can get config.root out in bash?
[10:39] <bigjools> rvba: I hope those abbreviations make sense :)
[10:39] <rvba> bigjools: I think it's going to be ok ;)
[10:40] <adeuring> bigjools: we could switch to a python script.
[10:40] <StevenK> stub: You said both columns in BPRC should be NOT NULL -- they're both in the primary key, so it shouldn't be needed?
[10:40] <bigjools> adeuring: yes, we already did that for the ubuntu wrapper
[10:41] <bigjools> adeuring: the base-ppa is sourced from less cronscripts/publishing/cron.publish-
[10:41] <stub> StevenK: Yes, but I like being explicit. We don't want to change the primary key constraint in the future and lose the implicit NOT NULL
[10:41] <bigjools> ppa
[10:41] <StevenK> stub: Fair enough, I shall add it.
[10:41] <bigjools> adeuring: let me try that again!  it's sourced from cronscripts/publishing/cron.publish-ppa
[10:42] <adeuring> right
[10:42] <adeuring> and that's done in two scripts.
[10:42] <bigjools> adeuring: which is a trivial bash script that calls three python scripts in turn
[10:42] <stub> (Not sure if it matters here - just consider it lint)
[10:42] <adeuring> bigjools: right, but there is no scriptactivity logging, at least for one of the scripts
[10:42] <bigjools> adeuring: the cron.daily-ppa does some cleanup
[10:42] <adeuring> so another reason to change to a python script
[10:43] <bigjools> adeuring: there is scriptactivity
[10:43] <bigjools> from the 3 python scripts
[10:43] <bigjools> that's what matters
[10:43] <adeuring> in cron.daily-ppa?
[10:43] <bigjools> not that one
[10:43] <stub> lifeless: So 0mq looks really cool and reading the docs well worth while, even if it becomes clear it doesn't meet some of our existing use cases without development work.
[10:43] <bigjools> that one's not critical
[10:43] <adeuring> ok
[10:44] <stub> lifeless: The authors also share some of my opinions on HA setups, and argue some points better than I do.
[10:44] <adeuring> but still: If the wrong value of LPCURRENT was a problem, why not define it in the regular configuration?
[10:45] <stub> lifeless: But if we want a queue with persistence that survives restarts, we would need to implement it in 0mq (they provide designs and most of an implementation in the docs), stick with rabbit, or go with kestrel or activemq or something.
[10:47] <adeuring> bigjools: can we talk on mumble about it?
[10:48] <lifeless> stub: yes, thats true
[10:48] <lifeless> stub: gavin seems to have some traction on rabbit
[10:50] <stub> lifeless: Cool. If he has no luck next week, I'll implement a trivial messaging system with PG that we can swap out for something better later.
[10:50] <lifeless> stub: hmm, given our initial use cases don't need persistence, I'd suggest holding off
[10:51] <lifeless> stub: Its up to you, but I think there are other things more pressing for us
[10:54] <stub> lifeless: I mean if we can't get rabbit stable in the test environment, I can throw together a simple high level API backed by PG that will work and allow us to do basic messaging while we work out the low level details.
[10:55] <stub> It won't advertise itself as being reliable or persistent, because we may well swap the implementation for something unreliable and ephemeral.
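The "simple high level API backed by PG" stub describes could look something like the sketch below: a send/receive pair over a single table. SQLite stands in for PostgreSQL so the example is self-contained; table and method names are assumptions:

```python
import sqlite3

class SimpleQueue:
    """Minimal, deliberately unreliable message queue over a SQL table."""

    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS message ("
            " id INTEGER PRIMARY KEY, queue TEXT, body TEXT)")

    def send(self, queue, body):
        self.conn.execute(
            "INSERT INTO message (queue, body) VALUES (?, ?)", (queue, body))

    def receive(self, queue):
        # Pop the oldest message, or return None when the queue is empty.
        row = self.conn.execute(
            "SELECT id, body FROM message WHERE queue = ? "
            "ORDER BY id LIMIT 1", (queue,)).fetchone()
        if row is None:
            return None
        self.conn.execute("DELETE FROM message WHERE id = ?", (row[0],))
        return row[1]
```

As stub says, the point is the API boundary: callers get basic messaging now, and the backing store can be swapped for rabbit (or something ephemeral) later.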
[10:55] <lifeless> stub: sure, thats what I thought you meant
[10:56] <stub> 0mq does look ideal for centralised logging though
[10:56] <stub> And centralised OOPS collecting
[10:57] <stub> Lets us kill OOPS prefix too
[10:58] <lifeless> we can kill the prefix by changing the naming to a hash of the oops file
[10:58] <stub> Or we can keep them short and have the id allocated by the OOPS collector
[10:58] <lifeless> we need the name allocated on the server if we want to show the id to the user even if the collector is down
[10:59] <stub> We can have multiple collectors
[10:59] <lifeless> stub: yes, but that won't help when the collector is 'down' due to network issues
[11:00] <allenap> Or use base64.urlsafe_b64encode(uuid.uuid1().bytes).rstrip("=") for the OOPS number.
[11:00] <stub> turtles all the way down
[11:00] <lifeless> stub: hah yes.
[11:00] <stub> If the network is having issues bad enough to not submit an OOPS, I'd argue the OOPS is just noise.
[11:00] <lifeless> stub: I guess my basic thinking here is that its better to have fire-and-forget during error handling
[11:01] <lifeless> stub: actually I care a great deal about oopses at such times; short-transient-network glitches are hell to diagnose
[11:01] <lifeless> stub: lots of data is a big win
[11:01] <allenap> stub: I'm 90% confident that I've found the source of the Rabbit fixture problems we were having.
[11:01] <stub> Ok. So we could fallback to using a UUID if the OOPS server can't reply immediately.
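A sketch of that compromise: prefer a short server-allocated OOPS id, and fall back to allenap's UUID-based client-generated id when the collector can't reply. `ask_collector` and the example id format are hypothetical:

```python
import base64
import uuid

def make_oops_id(ask_collector):
    """Return a short collector-allocated id, or a UUID-based fallback."""
    try:
        return ask_collector()  # hypothetical call, e.g. returns "OOPS-1912X123"
    except Exception:
        # Fire-and-forget fallback: globally unique, needs no server round-trip.
        return base64.urlsafe_b64encode(uuid.uuid1().bytes).rstrip(b"=").decode()
```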
[11:01] <lifeless> stub: wouldn't it be simpler to just have one way of doing it that is reliable-by-approach  ?
[11:01] <stub> allenap: Yay :)
[11:02] <stub> lifeless: I like short OOPS numbers. The ones we have are already too long. They are human readable codes.
[11:02] <lifeless> stub: Why does human readable matter?
[11:03] <stub> Because we see them all the time in bug reports, we paste them into IRC sessions, we email them, we might even end up memorizing some of them.
[11:03] <lifeless> all of those are robustly copy-pastable
[11:03] <bigjools> allenap: yay!
[11:03] <lifeless> except memorising & speaking
[11:04] <bigjools> lifeless: I really want us to sort out a MQ strategy because I intend to do the long poll stuff in Dublin
[11:04] <stub> And having to cut and paste is suckier than having the option of cutting and pasting, and a 40 character id is going to be less readable than a 6 character id when embedded into a bug report.
[11:04] <lifeless> stub: when was the last time you spoke an OOPS ID ?
[11:05] <lifeless> bigjools: yes, I know :(.
[11:05] <bigjools> lifeless: I am assuming it's going to be Rabbit
[11:05] <stub> lifeless: Can't recall, but I'm sure I've had them read out in skype conversations.
[11:05] <stub> Maybe as I used to do standups with the QA team
[11:06] <lifeless> stub: a prefix match will support reading-out easily - just use the first 6 digits
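A tiny sketch of the prefix match lifeless suggests: a spoken or typed prefix resolves to the full id as long as it is unambiguous among the stored ids:

```python
def resolve_prefix(oops_ids, prefix):
    """Return the unique id starting with `prefix`, or None if ambiguous/absent."""
    matches = [i for i in oops_ids if i.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None
```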
[11:06] <lifeless> bigjools: well, as I said in prague, lets unblock and use it.
[11:07] <lifeless> bigjools: just not [for now] for persistent data; your use case doesn't need that.
[11:07] <stub> Which is still suckier than just being able to read the whole code.
[11:07] <lifeless> stub: all things are tradeoffs
[11:07] <stub> We can do client generated oops codes, but they are less usable than server allocated codes.
[11:08] <bigjools> lifeless: exactly
[11:08] <stub> And usability matters as they are the interface between humans and our qa databases.
[11:09] <stub> I don't think we want to throw away that usability for pathological situations, given there will always be a more pathological situation we will never be able to cope with
[11:09] <lifeless> stub: I agree that usability matters; I am not at all convinced that modest length is an issue
[11:09] <stub> (oops... disk full... )
[11:10] <lifeless> wgrant: can you put a link to the damaged feed in https://bugs.launchpad.net/launchpad/+bug/78942 ?
[11:10] <bigjools> allenap: so what is the rabbit issue you found?
[11:12] <bigjools> lifeless: why does the rabbit fixture double-fork?
[11:12] <allenap> bigjools: See my reply to "rabbit, where art thou?" in launchpad-dev. I think it's a race between rabbitmqctl and the rabbit server to create the cookie file.
[11:12] <lifeless> bigjools: thats how U1 do it
[11:12] <bigjools> lifeless: why does U1 make the rabbit fixture double-fork? :)
[11:12] <lifeless> bigjools: I made the minimal changes on the assumption their code had reasons to be the way it was.
[11:12] <bigjools> it seems - odd
[11:12] <wgrant> lifeless: Their discontinuation was announced three weeks ago.
[11:12] <bigjools> and there's no comments
[11:12] <lifeless> bigjools: no tests and no docs make answering that hard.
[11:13] <bigjools> quite
[11:13] <bigjools> having seen some other Python projects recently I really appreciate our coding standards
[11:13] <lifeless> stub: I've a few bugs that contributed to downtime post bugsummary; would you look at them for me?
[11:13] <wgrant> I questioned the double-fork when I semi-reviewed it. But decided it was better to get the code in and iterate.
[11:13] <lifeless> stub: and how would you like me to put them in your queue.
[11:14] <bigjools> allenap: ah ok, I am currently ploughing through 25 unread messages in that thread
[11:14] <allenap> wgrant: I have a branch that removes the double-fork and uses subprocess.
[11:14] <wgrant> allenap: Excellent.
[11:15] <bigjools> yay!
[11:15] <wgrant> allenap: Does it remove any of the other crapness?
[11:15] <wgrant> This crapness may include bad style, and not working at all.
[11:15] <bigjools> you'd think that U1 would get this race too
[11:15] <allenap> wgrant: I hope so... :)
[11:16] <allenap> wgrant: I'll get it landed, today hopefully, and I promise not to mind if you improve on that :)
[11:17] <wgrant> allenap: I fear the improvement will be to disable the test again, but I can hope I'm wrong :)
[11:17] <allenap> wgrant: Ah, I think I've figured out the cause of the problems.
[11:17] <wgrant> I saw that.
[11:21] <lifeless> bigjools: they probably do occasionally
[11:25] <lifeless> wgrant: which were discontinued?
[11:27] <wgrant> lifeless: http://feeds.ubuntu-nl.org/MaverickChanges
[11:29] <lifeless> ok, so wontfix ?
[11:29] <wgrant> No.
[11:29] <wgrant> Not necessarily.
[11:29] <wgrant> Well.
[11:29] <wgrant> The redirects are still broken.
[11:29] <wgrant> I don't know of any remaining non-antique sources of those URLs.
[11:29] <lifeless> but nothing outputs that url
[11:29] <lifeless> right
[11:29] <lifeless> I'm going to close it; if someone finds an active source we can reevaluate
[11:29] <wgrant> Sounds good.
[11:32] <lifeless> stub: please ping when you are back
[11:42] <stub> lifeless: back
[11:43] <lifeless> catchup call ?
[11:43] <stub> lifeless: Give 'em a tag (dba is fine), or assign them to me.
[11:43] <lifeless> stub: i've tagged em ops
[11:44] <stub> lifeless: Sure, but needs to be quick. I need to go buy some UK power adapters.
[11:44] <lifeless> stub: there are 3; the fourth ops a maintenance squad will get
[11:44] <lifeless> stub: kk, skype?
[11:44]  * stub boots his laptop
[11:46] <lifeless> clearly not 5 second boot :)
[11:51] <bigjools> wgrant: can I borrow your brain and eyeballs for a bit please
[11:51] <wgrant> bigjools: Probably.
[11:51] <bigjools> http://pastebin.ubuntu.com/627895/
[11:51] <wgrant> aaaaa
[11:51] <bigjools> I wrote that test for the SPR.getBuildByArch problem and it passes unexpectedly
[11:52] <bigjools> I've been staring at it too long to see why
[11:53] <wgrant> bigjools: You're sure that derived_series is in a new distribution?
[11:53] <wgrant> I mean, it looks like it should be, but still.
[11:53] <wgrant> Oh.
[11:53] <wgrant> The binary is published.
[11:53] <wgrant> That would do it.
[11:54] <bigjools> sorry, why?  I am feeling particularly slow today
[11:55] <wgrant> bigjools: IIRC getBuildByArch looks for anything published first.
[11:55] <wgrant> Otherwise copies would duplicate builds.
[11:55] <wgrant> Let me check.
[11:56] <wgrant> Yes.
[11:56] <wgrant> It queries for any binary publications, and returns the single build that produced them.
[11:57] <wgrant> Only afterwards does it do the whole-archive match on (spr, archtag)
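An illustrative model of the lookup order wgrant describes for getBuildByArch: published binaries first (so copies reuse the original build), and only then a whole-archive match on (source, architecture tag). The dict-based records are stand-ins, not the real LP model:

```python
def get_build_by_arch(spr, das, archive, publications, builds):
    # Step 1: a binary of this source published in this DAS/archive points
    # straight back at the build that produced it.
    for pub in publications:
        if (pub["archive"], pub["das"], pub["source"]) == (archive, das, spr):
            return pub["build"]
    # Step 2: otherwise fall back to matching builds across the whole
    # archive on (source, architecture tag).
    for build in builds:
        if (build["archive"], build["source"], build["arch_tag"]) == \
                (archive, spr, das["arch_tag"]):
            return build["id"]
    return None
```

This is why bigjools' test passed "unexpectedly": copying binaries creates publications in the passed DAS, so step 1 finds the parent's build before step 2 is ever reached.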
[11:59] <bigjools> meh
[11:59] <bigjools> thanks.  I am blind
[12:01] <bigjools> wgrant: hmmm wait though
[12:02] <bigjools> it's looking for publications in the passed DAS
[12:02] <wgrant> :(
[12:02] <wgrant>  do_copy(
[12:02] <wgrant> +            [parent_source_pub], derived_archive, dsp.derived_series,
[12:02] <wgrant> +            PackagePublishingPocket.RELEASE, include_binaries=True,
[12:02] <wgrant> +            check_permissions=False)
[12:02] <wgrant> You copied binaries.
[12:02] <wgrant> There are pubs :)
[12:03] <bigjools> yes the intention was to copy binaries
[12:03] <bigjools> to get the build from the parent
[12:03] <wgrant> Yes.
[12:03] <wgrant> 21:02:01 < bigjools> it's looking for publications in the passed DAS
[12:03] <wgrant> Is that a surprise?
[12:03] <bigjools> gah
[12:04] <bigjools> so none of this may be a problem at all
[12:04] <wgrant> Well.
[12:04] <wgrant> Maybe.
[12:04] <bigjools> wtf would someone query for a build, passing in a context where it's not published
[12:05] <wgrant> Well.
[12:05] <bigjools> well :)
[12:05] <wgrant> This works to get a failed build from the source of a copy.
[12:05] <wgrant> I'm not sure it's ever used for that.
[12:06] <bigjools> I think I am going to invalidate this bug because the scenario I was concerned about is not a problem
[12:06] <bigjools> I might keep my test though :)
[12:07] <wgrant> A good plan.
[12:08]  * bigjools sighs at the headset volume control STILL controlling the speak volume
[12:08] <bigjools> speaker*
[12:17] <LPCIBot> Project windmill-devel build #233: STILL FAILING in 1 hr 5 min: https://lpci.wedontsleep.org/job/windmill-devel/233/
[12:29]  * bigjools grumbles at patch 75
[12:33] <wgrant> bigjools: Is that the bugsummary expansion?
[12:34] <bigjools> yeah
[12:34] <bigjools> df is going to be busy for a while
[12:50] <LPCIBot> Project db-devel build #640: STILL FAILING in 6 hr 20 min: https://lpci.wedontsleep.org/job/db-devel/640/
[12:58] <wgrant> jelmer: Your fix QAed fine. Thanks for the quick work.
[12:59] <LPCIBot> Project windmill-devel build #234: STILL FAILING in 41 min: https://lpci.wedontsleep.org/job/windmill-devel/234/
[12:59] <jelmer> wgrant: awesome - thanks for pinging me about it
[13:00]  * bigjools holds on to desk while overflying Chinook shakes the office to bits
[13:00] <jelmer> wgrant: is there any way to easily notice that a new revision has been deployed on staging?
[13:00] <wgrant> jelmer: If the rev needs QA its bugs will be tagged qa-needstesting once it's on (qa)staging.
[13:01] <wgrant> jelmer: But rollback= revs don't need QA, so that didn't happen here.
[13:01] <jelmer> wgrant: I knew about qastaging, but do deployments to staging happen at the same time?
[13:01] <wgrant> And this can only be QAed on staging, because qastaging has no importds yet :(
[13:01] <wgrant> No.
[13:01] <wgrant> It's separate.
[13:01] <wgrant> But it normally happens within a couple of hours of db-stable tip changing.
[13:02] <jelmer> wgrant: That's useful to know - thanks
[13:06] <jtv> Has anybody else been seeing failures on the unicode tests in PQM that don't happen locally?
[13:07] <wgrant> jtv: Yes.
[13:07] <wgrant> There are three. Ignore them for now. sinzui's YUITestLayer changes somehow break them.
[13:07] <jtv> Thanks!
[13:08] <wgrant> (but not on buildbot, and not locally)
[13:09] <lifeless> hmm, bugsummary 2 isn't on staging yet ;(
[13:13] <wgrant> renamed: lib/lp/bugs/stories/bugs/xx-bug-personal-subscriptions.txt => lib/lp/bugs/stories/bugs/xx-bug-personal-subscriptions.txt.THIS
[13:13] <wgrant> danilos: ^^
[13:13] <wgrant> r13243
[13:13] <danilos> wgrant, :/
[13:13] <danilos> wgrant, I'll land a fix for that (it's supposed to be removed)
[13:13] <wgrant> Also, 12k-line unflagged changes on fragile code upset me.
[13:13] <wgrant> But OK.
[13:14] <danilos> wgrant, what do you mean with "unflagged"?
[13:14] <StevenK> Non-feature flagged
[13:14] <danilos> ah
[13:14] <wgrant> So there is no escape.
[13:14] <danilos> well, I am not disagreeing with that, but 4k of those lines are removals, and another 3k are the actual feature flag removal
[13:15] <wgrant> Branches are cool :)
[13:16] <danilos> wgrant, yeah, it was all spread out in like 8-9 branches, but for easier "escape", we did it as one branch
[13:16] <danilos> we landed it as one branch
[13:19] <danilos> wgrant, btw, are you reverting that revision or should I?
[13:19] <wgrant> danilos: I'd prefer not to fix that move.
[13:20] <wgrant> It is late here.
[13:20] <danilos> wgrant, it seems there's a bunch of PackageCopyJob failures, not sure what they are about
[13:20] <danilos> wgrant, right, just checking, thanks
[13:20] <wgrant> Oh.
[13:20] <wgrant> What?
[13:20] <wgrant> PackageCopyJob failures? Where?
[13:20] <wgrant> Huh.
[13:20] <danilos> wgrant, buildbot run that's just ending for my rev: https://lpbuildbot.canonical.com/builders/lucid_lp/builds/1048/steps/shell_6/logs/summary
[13:20] <danilos> tests, not production :)
[13:21] <danilos> wtf, "NameError: global name 'ISourcePackageNameSet' is not defined"
[13:21] <wgrant> This is why we don't land 12000 line revisions.
[13:21] <wgrant> They are unreviewable.
[13:21] <wgrant> And un-QAable.
[13:22] <wgrant> And... they do that.
[13:22] <wgrant> :)
[13:22] <danilos> wgrant, well, these same tests fail for me in devel
[13:22] <danilos> not sure how it got through ec2 :/
[13:22] <spiv> Land 13000 line revisions instead.  13k is a lucky number!
[13:22] <danilos> spiv, good point, thanks
[13:23] <wgrant> danilos: Yes, there was a merge issue somewhere. :(
[13:23] <wgrant> They are less bad if you review every diff before you land them. Or if people are watching diffs as they land. Which is a bit impossible when they are so huge :/
[13:23] <danilos> wgrant, yeah, I'll have to revert and look into it
[13:24] <wgrant> Thanks.
[13:24] <wgrant> Also, one of the MPs that was merged was rejected, which is somewhat concerning.
[13:24] <wgrant> I'm not sure you merged what you thought you did.
[13:25] <danilos> wgrant, was it? which one was that?
[13:25] <wgrant> danilos: You can't land the FF removal first?
[13:25] <wgrant> https://code.launchpad.net/~benji/launchpad/bug-784575-message/+merge/62157
[13:25] <wgrant> Something like that landed on db-devel last month.
[13:25] <danilos> wgrant, well, that seems to have been included in a different branch from Benji then
[13:27] <danilos> I am guessing the problem with packagecopyjob merge is that someone reverted something using patch instead of bzr merge
[13:28] <wgrant> danilos: Reversions are just patches.
[13:28] <wgrant> danilos: Since they are cherrypicks, which aren't tracked in the DAG.
[13:28] <danilos> wgrant, doesn't bzr track them (I've had bzr nicely recognize a reverted *revision* before and not worry about it, so I thought it was smarter than that)
[13:28] <wgrant> danilos: No.
[13:28] <danilos> wgrant, that's surprising
[13:29] <wgrant> A little.
[13:29] <wgrant> But it normally works fine.
[13:31] <spiv> Well, a commit of a revert is a "rich patch" if you like, in that it copes with renames and deletes a little better than a raw diff/patch.
[13:31] <wgrant> True.
[13:32] <spiv> But there's no record kept in the revision graph to indicate the fact that a particular revision has been undone.  But as you say mostly things work fine without explicitly tracking that.
[13:40] <LPCIBot> Project windmill-devel build #235: STILL FAILING in 40 min: https://lpci.wedontsleep.org/job/windmill-devel/235/
[13:40] <danilos> spiv, oh, and I thought "bzr merge" (and especially "-r" option) did more than that
[13:42] <spiv> danilos: usually it does
[13:42] <danilos> spiv, but specifically not for "-rREVNO..REVNO-1"?
[13:42] <spiv> danilos: but giving it a reversed revision range is a cherrypick
[13:42] <danilos> spiv, right, understood
[13:43] <spiv> danilos: at a low-level each revision has a field to record a sequence of one or more parent revisions
[13:43] <spiv> danilos: and that's all the data kept for the revision graph
[13:43] <jtv> allenap: got time for a quick review?
[13:43] <spiv> So there's no way to represent "oh actually I took revisions 3 and 4 back out" in that structure.
[13:43] <jtv> It's something I'm hoping to polish off very soon.
[13:45] <spiv> (and similarly no way to represent a cherrypick in general.  Just a statement that revision X includes Y and Z, and so transitively all the ancestry of Y and Z)
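A minimal model of the structure spiv describes: each revision records only its parent ids, and that is all the graph data there is. There is no field to say "this commit undoes revision X", which is why a revert or cherrypick leaves no trace in the graph:

```python
class Revision:
    def __init__(self, rev_id, parent_ids, message=""):
        self.rev_id = rev_id
        self.parent_ids = list(parent_ids)  # one entry per merge parent
        self.message = message

def ancestry(revisions, rev_id):
    """All revisions transitively reachable via parent pointers."""
    seen, todo = set(), [rev_id]
    while todo:
        r = todo.pop()
        if r in seen:
            continue
        seen.add(r)
        todo.extend(revisions[r].parent_ids)
    return seen
```

Note that a reverted revision stays in the ancestry forever: the revert commit is just another node whose parent happens to be the revision it undoes.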
[13:45] <danilos> spiv, right, that's a shame, but if it mostly works fine, I guess it's no big deal
[13:45] <spiv> Or if you like, if 'bzr qlog' can't plot that relationship, bzr can't record it ;)
[13:47] <danilos> spiv, I generally attributed "smart" behaviour to it recording more data, but I guess it's intrinsic to the merge algorithm (i.e. if I land a branch which includes a "reverted" change, that change generally is not re-applied on the landing; so, now I know to be a bit more wary of these things :)
[13:52] <spiv> Right, because after all if someone commits a revert you probably want that change preserved :)
[13:58] <danilos> hum, I keep getting the test failures in my devel copy for PackageCopyJob tests, anyone else can confirm/dispute that?
[14:01] <wgrant> danilos: Even after you've reverted it?
[14:02] <danilos> wgrant, well, I definitely had it earlier, but not with the revert now (I had it before I refreshed my devel to include r13243)
[14:02] <jtv> allenap: I've got a review I'm hoping to polish off very quickly before I leave… got time for it?  It's not big: https://code.launchpad.net/~jtv/launchpad/bug-798179/+merge/64813
[14:03] <allenap> jtv: Yes indeed.
[14:03] <jtv> Great!
[14:03] <wgrant> danilos: 13243 was clearly the cause. It removed the import.
[14:04] <danilos> wgrant, right, but I had it even without it, so it seems some of my devel merges have gotten that in (I, naturally, haven't touched the file)
[14:04] <danilos> though, I was exclusively merging stable
[14:05] <jtv> so... you ran into that one as well?
[14:05] <danilos> jtv, so I am not the only one?
[14:05] <jtv> Nope.  Missing ISourcePackageNameSet?
[14:05] <jtv> In packagecopyjob.py?
[14:06] <danilos> jtv, or are you perhaps hitting it because of my recent landing earlier today?
[14:06] <jtv> No idea… I thought I'd caused it but couldn't find where I'd done it.
[14:06] <allenap> jtv: Why do you need to call view.setupQueueList() explicitly in the test?
[14:06] <danilos> jtv, hum, my branch landed on r13243, can you please check if you have that in your revision history?
[14:07] <jtv> allenap: that initializes the batch navigator… is the call redundant?
[14:07] <jtv> danilos: I do have the revision
[14:07] <jtv> otherwise I wouldn't have known we were hitting the same thing.  :)
[14:07] <danilos> jtv, heh, well, what I am asking is if you had that problem prior to that revision as well :)
[14:08] <jtv> Oh.  I don't _think_ I did; I've been branching a lot these past few days and only hit it today.
[14:08] <allenap> jtv: Maybe not. I would assume that happens when you call the view. I mean, it *has* to happen IRL, outside of a test case, but perhaps there's something missing that means it's needed in the test.
[14:08] <danilos> jtv, that revision has obviously caused it, but I got it into my branches somehow without ever touching the file (so due to some weird merging, I guess)
[14:08] <jtv> allenap: why don't I just try it?  :)
[14:09] <allenap> Okay :)
[14:09] <jtv> danilos: as if the revision that added it maybe got lost?
[14:09] <danilos> yeah
[14:10] <jtv> Frightening.
[14:10] <jtv> allenap: you're right — renders fine without.
[14:10] <jtv> Pushing change.
[14:11] <allenap> jtv: Cool. r=me.
[14:11] <jtv> thanks!
[14:13] <jtv> good morning jcsackett
[14:13] <jcsackett> good morning jtv.
[14:21] <danilos> it gets even better, I checked the branches I landed and *none* of them have changes in packagecopyjob.py file that my branch supposedly modifies
[14:22] <wgrant> danilos: No warnings of criss-cross merges as you merge them?
[14:23] <danilos> wgrant, no, but some of them have been live for a few weeks already and they've been merged by other people
[14:23] <danilos> well, had devel merged in by other people
[14:24] <LPCIBot> Project windmill-devel build #236: STILL FAILING in 44 min: https://lpci.wedontsleep.org/job/windmill-devel/236/
[14:28] <danilos> spiv, so, now that I reverted one change, if I want to land it and keep the revision graph, is there a way to do that? (considering it's a 12k line change, I'd like to keep actual revision log in the history)
[14:38] <jtv> allenap, jcsackett: I have one more candidate for review, but I'm afraid I can't stay around to answer questions.  It's a large-scale removal of an obsolete method: https://code.launchpad.net/~jtv/launchpad/post-394645/+merge/64829
[14:39] <allenap> jtv: I'll take it.
[14:39] <jtv> wonderful, thanks
[14:39] <jtv> allenap: I'm afraid I have no idea what to do about the pre-existing lint that I didn't remove.
[14:39] <jtv> I did land 7-8K of lint cleanup for those tests earlier though.
[14:40] <jtv> 7-8K *lines of diff*, that is.
[14:40] <allenap> jtv: No worries, I'll ignore lint.
[14:40] <jtv> Hmmm.  :)
[14:40] <allenap> Blimey.
[14:40] <jtv> Yes, I've really been pushing it.
[14:41] <jtv> Lazily continuing to let the tech debt pile up on soyuz didn't seem like the right thing to do somehow.  :-)
[14:41] <spiv> danilos: I'm not sure what you mean exactly by keep the revision graph
[14:42] <allenap> jtv: Agreed :)
[14:42] <danilos> spiv, well, "bzr log -n" should show the actual revisions
[14:42] <jtv> allenap: Anyway, I'll be off then.  Thanks again.
[14:42] <spiv> Revisions are only added, not taken away
[14:42] <allenap> Cheerio jtv.
[14:42] <spiv> (Unless you forcibly uncommit or push/pull --overwrite with an older revision)
[14:43] <spiv> So if you add a commit, you are strictly adding to history, not removing any of it.
[14:43] <danilos> spiv, well, if I try to "stupidly" remerge my branch into trunk (which now has a reverted change), my changes are ignored; basically, these revisions are already in the graph for the previous commit, so I guess bzr can't internally link to it either?
[14:44] <spiv> Right, those revisions are already present.
[14:44] <danilos> spiv, and there is no way to indicate that the revision I am committing includes changes from that revision? so basically, it'll be a simple "patch" again?
[14:45] <spiv> If/when you want to revert the revert, you can do just that: bzr merge -r REVERT_REV..REVERT_REV-1
[14:45] <danilos> spiv, right, that's what I am doing, and losing all the subrevision history that the original commit had
[14:46] <spiv> What does "it" in "it'll be a simple..." refer to?  The revision you are about to commit, or some other revision?
[14:46] <danilos> spiv, revision I am about to commit
[14:47] <danilos> spiv, ideally, it'd be a revision listing previously reverted revision as the parent, at least imho :)
[14:47] <spiv> In general: if "bzr st" shows "pending merges:", then the tips of those merges will be included as parents in the new commit.
[14:47] <danilos> spiv, (just wondering if that'd be at all possible and whether it makes sense, not a big deal)
[14:49] <spiv> It would be technically possible to have a revision where one of the extra parents is in the ancestry of the left-hand (a.k.a. mainline) parent.
[14:49] <spiv> I'm not sure if you can easily convince the CLI to create that situation, though.
[14:51] <spiv> Basically, the short answer is: just say what you're reverting or unreverting in the commit message :)
[14:51] <spiv> That's probably going to be good enough :)
[14:52] <danilos> spiv, heh, right enough, so in theory it should be possible to create a cyclic revision graph in bzr? :))
[14:52] <danilos> spiv, anyway, sure thing, I'll be detailed enough in the commit message
[14:53] <spiv> No, I don't see how that allows cycles?
[14:54] <spiv> A cycle would require a situation where you somehow make an older commit point to the new commit
[14:54] <spiv> Which I'm pretty sure is impossible with the CLI :)
[14:55] <spiv> You could of course construct all sorts of invalid data by hand.
[14:56] <spiv> (Or by abusing sufficiently low-level parts of bzrlib)
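spiv's acyclicity argument can be made concrete with a toy sketch (plain Python, not bzrlib): model the revision graph as a dict of revision id to parent ids. Because a new commit can only name already-existing revisions as parents, the normal CLI can never produce a back-edge; only hand-crafted invalid data can.

```python
def has_cycle(graph):
    """Detect a cycle in a revision graph.

    graph maps revision id -> list of parent revision ids, the same
    shape bzr's revision DAG has (a toy model, not bzrlib's API).
    """
    seen = set()    # revisions fully processed
    stack = set()   # revisions on the current DFS path

    def visit(rev):
        if rev in stack:
            return True   # back-edge: rev would be its own ancestor
        if rev in seen:
            return False
        seen.add(rev)
        stack.add(rev)
        if any(visit(parent) for parent in graph.get(rev, [])):
            return True
        stack.discard(rev)
        return False

    return any(visit(rev) for rev in graph)
```

Normal history, where parents are always pre-existing revisions, never trips this; only forged data of the kind spiv mentions does.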
[15:04] <LPCIBot> Project windmill-devel build #237: STILL FAILING in 39 min: https://lpci.wedontsleep.org/job/windmill-devel/237/
[15:17] <timrc> jcsackett, hey I re-submitted my merge proposal[1] -- the original change I made broke Storm expressions using Archive.private, so I had to address that
[15:17] <timrc> [1] https://code.launchpad.net/~timrchavez/launchpad/set_ppa_private_from_api_724740-4/+merge/64840
[15:18] <jcsackett> timrc: cool. i will look at it in just a moment.
[15:20] <timrc> jcsackett, cool, I spot checked some of the tests that were failing yesterday afternoon and am now running the full test suite right now
[15:23] <rvba> allenap: jcsackett(hi!) Could you have a look at https://code.launchpad.net/~rvb/launchpad/initseries-no-arch-bug-795396/+merge/64843 ?
[15:24] <allenap> I'm doing a big review for jtv, going to be a while longer.
[15:24] <allenap> But I can do it after that unless jcsackett has got to it :)
[15:25] <rvba> Great! (it's a tiny MP)
[15:25] <bigjools> timrc: ah you found your bug then?
[15:26] <timrc> bigjools, yah, abentley helped me out there... turns out using Python getter/setters breaks Storm expressions that use the attribute (e.g. Archive.private) -- I'm not entirely sure why, to be honest
[15:26] <bigjools> ha!
[15:27] <bigjools> yeah it makes sense
[15:27] <bigjools> I didn't spot that.... doh
[15:27] <bigjools> good job that one unrelated test caught that
[15:27] <abentley> timrc: It's because Archive.private is a reference to the BoolCol class, but if you change it into a property, it's not.
[15:28] <timrc> well there were a few failures and one error and that was the common thread
[15:28] <timrc> abentley, ah okay, yeah that makes sense
[15:29] <timrc> abentley, which is why using Archive._private satisfies Storm
[15:29] <abentley> timrc: right.
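The gotcha abentley describes can be shown without Storm itself. In this minimal stand-in sketch (BoolCol here is a dummy, not Storm's class; the real one compiles to SQL rather than returning a string), a plain class attribute participates in class-level query expressions, while a property accessed on the class is just the property object:

```python
class BoolCol:
    """Dummy column; Storm's real columns build SQL expressions instead."""
    def __eq__(self, other):
        return "SQL: private = %r" % other   # stand-in for a Storm expression

class ArchiveWithCol:
    private = BoolCol()       # plain class attribute: usable in queries

class ArchiveWithProp:
    _private = BoolCol()      # the real column, renamed

    @property
    def private(self):        # getter only works on instances
        return self._private
```

`ArchiveWithCol.private == True` builds an expression, but `ArchiveWithProp.private` evaluated on the class is the property object itself, so query code has to use the underlying `_private` column, which is why the fix switched Storm expressions to `Archive._private`.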
[15:35] <jcsackett> timrc: the only changes between this and the last version are the updates from using .private to ._private, yeah?
[15:39] <timrc> jcsackett, yeah
[15:39] <adeuring> allenap: fancy a review with a tiny diff? https://code.launchpad.net/~adeuring/launchpad/bug-793630/+merge/64847
[15:39] <timrc> jcsackett, I believe I caught all the instances where the change was needed, testing should confirm that (should)
[15:40] <jcsackett> timrc: dig.
[15:40] <jcsackett> timrc: this looks good; i've approved the mp.
[15:40] <jcsackett> you said the full suite is running now?
[15:40] <timrc> jcsackett, great, thanks... yes full suite running
[15:40] <jcsackett> timrc: sounds good.
[15:40] <timrc> jcsackett, will we have to re-test to land?
[15:41] <jcsackett> timrc: generally i'm loath to land without ec2 landing. how long have you had the suite running?
[15:41] <timrc> jcsackett, not very long, 20-30 minutes
[15:42] <jcsackett> timrc: if you want to kill the current test run, i can send your branch out via ec2 to land now.
[15:42] <timrc> jcsackett, okay, sounds good
[15:42] <jcsackett> timrc: cool. i'll send it out momentarily then. :-)
[15:42] <allenap> adeuring: Looks good.
[15:42] <adeuring> allenap: thanks!
[15:43] <cody-somerville> timrc, terminating ec2 test
[15:52] <timrc> jcsackett, doh, you'll be mad at me... the merge proposal did not include the "note" explaining the lint's failure to recognize the correctness of the way I implemented the getter/setter... what shall we do? http://paste.ubuntu.com/628006/
[15:54] <jcsackett> timrc: if you want to push that change up, it's fine. i've had to do a quick update and link your branch to the bug it's fixing, so i'm just now at issuing the land command.
[15:54] <jcsackett> push up the change with the comment, and then i'll send it out.
[15:56] <timrc> ok, doing so now
[15:56] <LPCIBot> Project windmill-devel build #238: STILL FAILING in 40 min: https://lpci.wedontsleep.org/job/windmill-devel/238/
[16:03] <timrc> jcsackett, okay, think everything is there now -- thanks for your patience!
[16:09] <jcsackett> timrc: looks good. sending it off now.
[16:09] <jcsackett> should it fail, you'll get the traditional email telling you things are broken. :-)
[16:09] <timrc> jcsackett, sounds good, thanks
[16:15] <jcsackett> rvba: i'm looking at your branch now.
[16:16] <rvba> jcsackett: thanks!
[16:27] <flacoste> allenap, jcsackett: https://code.launchpad.net/~flacoste/launchpad/bug-797917/+merge/64852
[16:27] <flacoste> should be a quick review
[16:27] <flacoste> pretty please :-)
[16:27] <allenap> flacoste: Okay, you ask so nicely :)
[16:28] <flacoste> thanks allenap
[16:34] <allenap> flacoste: r=me
[16:39] <flacoste> allenap: thanks!
[16:40] <flacoste> any vim users who integrated make lint output? (so that I can use cn to go to all of them?
[17:03] <jml> I have to head off.
[17:03] <jml> see you all tomorrow
[17:08] <maxb> flacoste: Hello. What is the landing status of ~flacoste/launchpad/bug-797088 ? Thanks
[17:09] <flacoste> maxb: running through EC2
[17:10] <flacoste> maxb: EC2 didn't love me yesterday, all my attempts failed with a stalled bzr
[17:10] <LPCIBot> Project devel build #811: STILL FAILING in 5 hr 46 min: https://lpci.wedontsleep.org/job/devel/811/
[17:10] <maxb> Ah, I did notice some conversation about that
[17:10] <jcsackett> rvba: comments on your diff; i think you can cut down the query count you boosted.
[17:10] <rvba> jcsackett: great, thanks. I'll have a look.
[17:11] <flacoste> maxb: with luck it will be deployed before tomorrow, otherwise, it will probably go to Sunday (antipodean Monday)
[17:11] <flacoste> maxb: i could fix the Debian issue immediately though
[17:11] <flacoste> as this only requires changing the maintainer team
[17:11] <maxb> I think most of the problems are SRUs wanting to create their natty-updates branches
[17:12] <flacoste> that requires my branch
[17:12] <maxb> Right - so no need to hurry on fixing just debian
[17:22] <timrc> I forgot to pay Amazon $10 for 2 or so years and now they are mad at me
[17:22] <timrc> need to take it out to dinner and buy it flowers, I think
[17:23] <timrc> s/forgot/neglected/
[17:23] <sinzui> flacoste, ping
[17:28] <rvba> jcsackett: thanks for your suggestion, I've fixed the branch.
[17:30] <jcsackett> rvba: r=me.
[17:31] <jcsackett> and i concur that ordinarily is_empty is better. :-)
[17:31] <rvba> thanks!
[17:47] <sinzui> hands up who knows how to set/force the default encoding in python. My use of sys.setdefaultencoding('ascii') does not seem to work.
[18:18] <LPCIBot> Project db-devel build #641: STILL FAILING in 5 hr 27 min: https://lpci.wedontsleep.org/job/db-devel/641/
[18:28] <flacoste> hi sinzui
[18:29] <sinzui> flacoste, I was struggling with an import issue. importing gtk sets python's default encoding to utf-8. barry has been helping me.
[18:30] <sinzui> flacoste, I am going to trying reload(sys), the only known way to correct the corruption
[18:30] <sinzui> my next choice is to make lucid run modern gtk
[18:33] <flacoste> is this related to the UnicodeEncodeError test failures?
[18:39] <sinzui> flacoste, yes it is. I updated https://bugs.launchpad.net/launchpad/+bug/798081 to report my findings
[18:39] <_mup_> Bug #798081: unicode tests failing since yuitest landing <build-infrastructure> <Launchpad itself:Triaged by sinzui> < https://launchpad.net/bugs/798081 >
[18:40] <flacoste> sinzui: that would explain the error i got back from ec2test
[18:40] <flacoste> sinzui: but it doesn't seem to be a problem for buildbot though?
[18:40] <sinzui> flacoste, It won't be, because my rt is not closed
[18:41] <flacoste> did somebody take care of the buildbot failure that happened earlier?
[18:41] <flacoste> i see a testfix and a remerge from danilo
[18:41] <flacoste> is that it?
[18:41] <flacoste> sinzui: ah, ok
[18:41] <flacoste> sinzui: so it's only a problem for ec2test
[18:41] <flacoste> which means i can safely submit my branch
[18:41] <sinzui> I can remove my ami so that the import is not used
[18:44] <sinzui> flacoste, you certainly can submit
[18:50] <deryck> sinzui, hi.  I get seg fault running YUI layer tests.  About 6-7 tests into the run.
[18:50] <deryck> sinzui, anyone else reported this?
[18:53] <bac> hi flacoste, i've got some questions about the fix for bug 776437
[18:53] <_mup_> Bug #776437: Enable ARM builders for PPA via API <api> <escalated> <not-pie-critical> <oem-services> <ppa> <Launchpad itself:In Progress by bac> < https://launchpad.net/bugs/776437 >
[19:03] <sinzui> deryck, I have never seen this
[19:04] <deryck> sinzui, hmm, I wonder what's happening for me then.  I updated developer deps.  Anything else required?
[19:04] <sinzui> no
[19:04] <sinzui> but maybe I borked the deps
[19:05] <sinzui> deryck, are you on maverick?
[19:05] <sinzui> sorry natty?
[19:05] <deryck> sinzui, natty, yes
[19:07] <flacoste> bac: shoot, irc or voice?
[19:07] <bac> skype?
[19:07] <sinzui> deryck, does dpkg-query -s python-html5-browser libwebkitgtk-1.0-0 show:
[19:08] <sinzui> version: 0.0.3-0~15-6 for html5browser and
[19:08] <sinzui> version: 1.4.1-0ubuntu1 for webkitgtk
[19:10] <deryck> sinzui, hmmm, no, I get 1.3.13-0ubuntu2 for webkitgtk.
[19:10] <cr3> is rabbitmq used for anything other than testing in launchpad?
[19:10] <deryck> sinzui, but the html5browser is right.
[19:11] <sinzui> deryck, that looks fine. I updated to oneiric so I am on a higher rev of webkit
[19:11] <deryck> ah, ok
[19:11] <sinzui> deryck, my tree is a day old
[19:12] <sinzui> deryck, which is the last test you see run?
[19:14] <sinzui> deryck, test_subscription.html is test 7 in my list
[19:16] <deryck> sinzui, it's always the 7th test, regardless of which test is run 7th.  test_filebug_dupfinder.html or test_subscription.html usually.
[19:16] <deryck> sinzui, they don't seem to be run in the same order to me.
[19:16] <deryck> hmmm, yeah, a few runs here now confirm the order isn't the same, and the 7th test always dies.
[19:16]  * sinzui runs tests with current tree
[19:17] <sinzui> deryck, can you pastebin the fault?
[19:17] <deryck> I didn't update today. this is update from yesterday.  wifi is very spotty here at velocity.
[19:17] <deryck> sinzui, I'll try.  if pastebin will load.
[19:18] <sinzui> I see test_lp_names timesout
[19:19] <sinzui> so does test_bug_subscription_portlet
[19:19] <sinzui> I expected the latter since it is a massive module
[19:22] <deryck> sinzui, here's a couple runs:  http://pastebin.ubuntu.com/628105/
[19:22] <sinzui> deryck, test_lp_names.js is in the old format. It never signals that the test is complete
[19:24] <sinzui> and  test_bug_subscription_portlet was reverted to the old format I think
[19:24] <sinzui> So the two failures I see are bad tests
[19:24] <sinzui> deryck, your failure is quite cryptic
[19:24] <deryck> ah, ok.
[19:24] <deryck> they run fine by themselves.
[19:25] <deryck> but that should be easy enough to fix up, if they're old format.
[19:25] <sinzui> deryck, your tests always die in /bugs I see
[19:25] <sinzui> deryck, actually the test dies in the first bugs test each time
[19:26] <sinzui> so maybe there is a teardown issue in the previous test suite (/app) or in bugs itself
[19:27] <deryck> ah, interesting.  that makes sense.
[19:27] <sinzui> deryck, can you confirm this package is installed dpkg-query -s gir1.2-webkit-1.0
[19:28] <sinzui> I am also working on a fix for lucid's import for gtk from pygtk. I want to be certain you are getting the gir version of Gtk
[19:28] <deryck> sinzui, yup, I have that installed.
[19:29] <sinzui> deryck, I am going to fix the lucid ec2 image first, then look at your segfault.
[19:30] <abentley> sinzui: Also, when html5browser is imported, it tries to initialize gtk, and if that fails, it raises a RuntimeError, which breaks some of the TwistedJobRunner tests.  I'm moving the import to YUIUnitTestCase so we don't start gtk when we don't need it.
[19:31] <deryck> sinzui, ok, thanks.  not meaning to create work for you.  just trying to work out what's up.
[19:32] <sinzui> deryck, this isn't a distraction. I really want this test framework to rock
[19:38] <sinzui> abentley, thanks
[19:39] <sinzui> I was pondering the same to solve the encoding breakage in gtk. This is not an issue with modern code that imports from gir.
[19:39] <sinzui> maybe I should spend another day trying to make lucid use gir properly
[19:42] <abentley> sinzui: we solved it the old-fashioned way in bzr-gtk.
[19:43] <abentley> sinzui: reload(sys); sys.setdefaultencoding()
[19:44] <sinzui> abentley, that is what I added to lucid's import of gtk
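For context on the trick abentley quotes: in Python 2, site.py deletes sys.setdefaultencoding at startup, and reload(sys) puts it back; Python 3 dropped the function entirely, so the Python 2-only lines are shown commented out in this sketch:

```python
import sys

# Inspect the process-wide default encoding: 'ascii' on Python 2 unless
# something (like importing gtk) has changed it; always 'utf-8' on Python 3.
print(sys.getdefaultencoding())

# Python 2 only -- restore and call the setter that site.py deleted:
# reload(sys)
# sys.setdefaultencoding('ascii')
```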
[19:45] <sinzui> abentley, gtk is only used in two Browser() methods. would moving the import into those methods make everyone happy?
[19:45] <abentley> sinzui: That would make me happy, but I suspect it would not fix the unicode issues.
[19:46] <sinzui> abentley, the unicode fix is separate
[19:46] <sinzui> the unicode issue is only lucid
[19:46] <sinzui> I think the issue you bring up affects natty/oneiric
[19:47] <abentley> sinzui: It only affects systems where html5browser is installed, at least.
[19:47] <sinzui> abentley, bzr-gtk does not work with gtk3.
[19:48] <sinzui> I will learn more about how bzr-gtk solved the issue soon since I am creating a gtk3 branch
[20:01] <sinzui> abentley, I do not see anything special about importing gtk in bzr-gtk
[20:01] <sinzui> abentley, what twisted thing can I run in Lp to see the issue?
[20:01] <abentley> sinzui: bin/test -t test_timeout_short
[20:02] <sinzui> thanks
[20:02] <abentley> sinzui: the bzr-gtk code is wrapped around the test suite in /__init__.py
[20:04] <sinzui> hmm, natty does not use pygtk
[20:04] <sinzui> oh, I love the segfault though
[20:32] <sinzui> abentley, moving the import of html5browser into the one testcase that uses it is much easier than changing html5browser. importing anything from gir seems to cause the problem
[20:32] <LPCIBot> Project windmill-devel build #239: STILL FAILING in 1 hr 5 min: https://lpci.wedontsleep.org/job/windmill-devel/239/
[20:33] <lifeless> morning
[20:33] <abentley> sinzui: I agree.  Hate modules with side effects on import.
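The fix being agreed on, moving a side-effectful import from module level into the one code path that needs it, looks roughly like this (a sketch only: colorsys stands in for the gtk-backed html5browser, and the class is illustrative):

```python
import sys

class Browser:
    """Stand-in for html5browser.Browser, whose real import starts gtk."""

    def run(self, page):
        # Deferred import: the heavy module only loads when a Browser is
        # actually used, so merely importing this module (as the
        # TwistedJobRunner tests do) triggers no import-time side effects.
        import colorsys  # stand-in for html5browser/gtk
        return "rendered %s" % page
```

Code that never constructs a Browser never pays the import cost or suffers its side effects, which is exactly why moving the import into the test case fixes the unrelated test breakage.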
[20:33] <abentley> lifeless: morning.
[20:34] <lifeless> sinzui: perhaps we should rollback the yui layer until these issues are fixed on a branch ?
[20:36] <lifeless> sinzui: ah, catching up with mail; I see it's not run but is an import side effect
[20:39] <lifeless> flacoste: hi
[20:39] <flacoste> hi lifeless
[20:41] <lifeless> flacoste: reviews: reducing inventory, OCR drive-to-zero thread: I'd like your opinion on us adding 'drive nearly-there non-paid-launchpad-hacker patches to zero by helping' to the CHR collection-of-duties; and 'please put things which are "tweaks" into state approved, and things which need rework into "WIP"'
[20:41] <lifeless> (the last two items being for reviewers)
[20:42] <flacoste> lifeless: -1 to adding to CHR
[20:42] <lifeless> flacoste: oh, and OCR to officially drive the 'needs-review' collection to zero
[20:42] <flacoste> OCR focus makes sense though
[20:42] <flacoste> i can reply to the thread
[20:43] <lifeless> flacoste: so the reason I suggest the rotating squads get the helping aspect is because OCR is perhaps too short a turnaround
[20:43] <flacoste> i see
[20:43] <lifeless> flacoste: 5 hour test run + followup : there will be lots of person-helping-churn if we ask OCR to do it.
[20:43] <flacoste> indeed
[20:43] <lifeless> flacoste: I would *like* to ask OCR to do it, but not -yet-
[20:44] <flacoste> i'm still -0 on adding maintenance rotation responsibility
[20:44] <flacoste> we should ditch some if we want to add some
[20:44] <lifeless> flacoste: so instead asking a squad thats on for a week to do it would at least allow for continuity.
[20:44] <lifeless> flacoste: this is something we kindof have as a 'we should be doing' but its not structured well [because of the churn issue]
[20:46] <lifeless> flacoste: so I'd argue we are reducing churn on an existing todo item, not adding a whole new item.
[20:46] <flacoste> maybe, but i'm not sure that's how it would be perceived
[20:46] <flacoste> we are adding a line item to their responsibility list
[20:46] <flacoste> now, the todo item is diffuse across the team
[20:47] <lifeless> (and doesn't happen)
[20:51] <lifeless> flacoste: ok, so concretely - we have 4 social things I think need to happen if we want to reach a 'merge proposals are reviewed quickly' state of nirvana; there are technical things we can do to change those social things but thats going to take longer to bring about.
[20:52] <lifeless> flacoste:  we need to make both approved and needs review lists in the lp-project/+activereviews page get driven to zero without it being frustrating for the folk doing the driving (2 items)
[20:52] <lifeless> flacoste: we need to move things that don't need another review when the next OCR starts to either WIP or approved or rejected (1 item)
[20:53] <lifeless> flacoste: ok so 3 things :).
[20:55] <lifeless> flacoste: how do we move forward with this ?
[20:55] <lifeless> (by which I mean resolve the bugs in the current process one way or another)
[20:55] <flacoste> the weekly reviewer meeting was the forum where these were resolved previously
[20:56] <flacoste> we could call one to discuss this
[20:56] <flacoste> bac: ^^^
[20:57]  * bac reads
[21:00] <sinzui> lifeless, I have a fix for the YUITestLayer
[21:00] <lifeless> sinzui: great
[21:00] <sinzui> lifeless, I was hoping to make everything perfect, but that is not one branch :(
[21:01] <sinzui> lifeless,  the issue is only in the latest image. I will delete my lucid image in a few minutes, and make another one.
[21:01] <lifeless> great
[21:09] <bac> lifeless, flacoste: we can have a reviewers meeting next week to discuss this.  i'm -1 on it becoming another CHR task.  point #3 is trivially done, just ensuring we remind people to do it.
[21:09] <bac> i like the patch pilot idea but would like to see OCRs try to tackle it at the beginning of their day.  set a goal of trying to get one of the branches landed and see if we can drive it to zero
[21:10] <flacoste> bac: we could also do this live in Dublin
[21:10] <lifeless> bac: That seems like it ignores the friction around landing things
[21:11] <bac> lifeless:  perhaps we ask the OCRs on maintenance to claim at least one during their shift and see it through over the next couple of days.
[21:12] <lifeless> bac: we could; that is in tension with 'do project work' though
[21:12] <lifeless> bac: and we have a lot of experience that says that project work really crowds out, well, just about everything else.
[21:12] <lifeless> bac: I'd like us to learn from that experience
[21:12] <bac> flacoste: if we can keep the discussion focused doing it in dublin would make sense.  i don't want another ramble-ass discussion on reviewing.
[21:13] <flacoste> bac: agreed, we can have a 1h reviewers meeting with an agenda
[21:13] <lifeless> flacoste: bac: I won't be there in Dublin; I will just raise these angles on the list thread and suggest we keep discussing it there.
[21:13] <flacoste> ok
[21:14] <bac> lifeless: ok.  thanks for bringing it up.
[21:14] <lifeless> bac: its my job :)
[21:19] <LPCIBot> Project parallel-test build #43: STILL FAILING in 1 hr 8 min: https://lpci.wedontsleep.org/job/parallel-test/43/
[21:38] <LPCIBot> Project windmill-db-devel build #397: STILL FAILING in 1 hr 5 min: https://lpci.wedontsleep.org/job/windmill-db-devel/397/
[21:40] <allenap> sinzui: I've recently started getting http://pastebin.ubuntu.com/628166/ when I try to use bin/ec2 land. It references your ec2 image (516). Any ideas?
[21:40] <sinzui> allenap, I deleted that image 30 minutes ago
[21:41] <allenap> sinzui: Interesting! bin/ec2 images still reports that it's there.
[21:41] <sinzui> allenap, That image has a version of html5browser that was changing the default encoding
[21:42] <sinzui> allenap, okay, I deregistered the name
[21:42] <sinzui> I think it is fixed now
[21:42] <allenap> sinzui: Yes, bin/ec2 images is happy. Thanks :)
[21:44]  * flacoste concurs
[22:25] <sinzui> jcsackett, mumble?
[22:29] <jelmer> mwhudson, hi
[22:29] <mwhudson> jelmer: hey
[22:29] <mwhudson> did i just leave the channel?  my machine was in a strange state when i got in to the office this morning...
[22:29] <jelmer> you quit and rejoined twice, once because of "Remote host closed the connection", second time "Changing host"
[22:34] <LPCIBot> Project devel build #812: STILL FAILING in 5 hr 24 min: https://lpci.wedontsleep.org/job/devel/812/
[22:35] <jelmer> mwhudson: I made some progress on the svn import thing today, hopefully will have a fix up tomorrow
[22:35] <jelmer> mwhudson, I was also looking at some of the workermonitor code
[22:35] <mwhudson> ok so i guess x was running but just not displaying anything then :)
[22:35] <mwhudson> jelmer: woo
[22:35] <mwhudson> jelmer: does the problem have a simple description?
[22:36] <jelmer> mwhudson, for the tags code?
[22:36] <mwhudson> yeah
[22:37] <jelmer> mwhudson: bzr-svn has a simple way to discover tags - it just looks at a portion of the log and keeps track of when tags were created and when they were added
[22:37] <jelmer> mwhudson: usually it only does this for the fraction of the log related to the piece of history you're importing. So, about 5000 revisions
[22:38] <mwhudson> ah, but if you haven't imported tags before, it chews over everything?
[22:38] <jelmer> It now looks at the entire history to discover tags, probably because of the way we look at the tags before actually fetching anything now
[22:40] <jelmer> the tag discovery also uses way more memory than it needs to, which never was much of an issue but we can very well notice it now.
[22:43] <jelmer> mwhudson, some of the values in CodeImportResultStatus don't appear to be used. Do you know if they ever were?
[22:44] <mwhudson> jelmer: no, i think that was a big dose of big design up front
[22:45] <mwhudson> in some ways it would still be nice to distinguish if e.g. it's the pull from escudero that's failing
[22:45] <mwhudson> but it's probably not worth the effort
[22:48] <jelmer> mwhudson: I'll leave them in, I was just curious
[22:48] <jelmer> mwhudson, I'm adding FAILURE_UNSUPPORTED_FEATURE and FAILURE_INVALID
[22:48] <mwhudson> if you're looking for things to rip out, please remove CodeImport.updateFromData
[22:49] <mwhudson> that was definitely a mistake :)
[22:49] <mwhudson> jelmer: ah right, signalled via different exit codes?
[22:49] <jelmer> mwhudson, yep, exactly
[22:49] <mwhudson> cool
[22:49] <jelmer> and determined by the exception classes
[22:50] <mwhudson> what will failure_invalid mean?  connection refused, authentication required, that sort of thing?
[22:52] <jelmer> mwhudson, yeah - ConnectionError, NotBranchError, PermissionDenied
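A hedged sketch of the mechanism jelmer outlines: the worker maps exception classes to distinct exit codes, which the monitor can translate into CodeImportResultStatus values. All names and codes below are illustrative, not Launchpad's actual ones, and Python's built-in ConnectionError/PermissionError stand in for bzrlib's ConnectionError/NotBranchError/PermissionDenied:

```python
# Illustrative exit codes; the real worker/monitor protocol differs.
SUCCESS = 0
FAILURE = 1
FAILURE_INVALID = 2
FAILURE_UNSUPPORTED_FEATURE = 3

class UnsupportedFeature(Exception):
    """Raised when the foreign branch uses a feature we cannot import."""

# Ordered (exception classes, exit code) pairs the worker checks.
EXIT_CODE_MAP = [
    (UnsupportedFeature, FAILURE_UNSUPPORTED_FEATURE),
    ((ConnectionError, PermissionError), FAILURE_INVALID),
]

def run_worker(job):
    """Run one import job, translating exceptions into an exit code."""
    try:
        job()
    except Exception as err:
        for exc_types, code in EXIT_CODE_MAP:
            if isinstance(err, exc_types):
                return code
        return FAILURE   # generic failure for anything unrecognised
    return SUCCESS
```

The monitor then only needs the integer exit status to record the right result, without re-parsing the worker's log.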
[22:52] <mwhudson> jelmer: something i've talked about before is that the review_status kinda represents two things: sort of an intention and a result
[22:52] <mwhudson> i.e. reviewed == we think this should be imported
[22:53] <jelmer> mwhudson: yeah
[22:53] <mwhudson> failed == we can't
[22:53] <mwhudson> so really, FAILURE_UNSUPPORTED_FEATURE in particular should result in some kind of reviewed/failed combo
[22:53] <jelmer> yep
[22:53] <jelmer> Hopefully that's the next step
[22:54] <LPCIBot> Project windmill-devel build #240: STILL FAILING in 1 hr 8 min: https://lpci.wedontsleep.org/job/windmill-devel/240/
[22:55] <sinzui> deryck, I would like to know if you still get segfaults when the changes I landed in devel.
[22:55] <deryck> sinzui, sure.  are they in devel now?
[22:55] <sinzui> yes
[22:56] <jelmer> mwhudson: I would also really like to get rid of notion of what VCS you're importing from (and whether it's foreign or a mirror), that seems harder to fix though.
[22:56] <deryck> sinzui, ok, let me see if I can get an update here.
[22:56] <mwhudson> jelmer: yeah
[22:56] <lifeless> jelmer: welcome back to lp :)
[22:57] <mwhudson> jelmer: really, the fact that Branch has columns to do with mirroring is wrong
[22:57] <mwhudson> (but very understandable, given history)
[22:57] <lifeless> IIRC it didn't use to; it got refactored to have them. IMBW
[22:58] <jelmer> lifeless: :)
[23:00] <mwhudson> before my time then
[23:02] <deryck> sinzui, so I get further now.  12 tests in.  re-running to see if it's consistent.
[23:04] <maxb> If someone has a moment, could you check whether devel r13249 is likely to become deployable before the weekend?
[23:05] <sinzui> deryck, Since you get a segfault, is it still at the start of a new testcase, such as bugs?
[23:06] <deryck> sinzui, no, it's a few tests into bugs now.  and 12 does seem to be the consistent number, if 4 runs can be trusted.
[23:07] <sinzui> deryck, what of layer=BugsYUI
[23:07]  * deryck tries....
[23:07] <deryck> sinzui, that doesn't run any tests.  I thought you went with a single YUI layer anyway?
[23:08] <lifeless> sinzui: its probably fork() related
[23:08] <sinzui> oh yes, that was the testcase name :)
[23:08] <lifeless> sinzui: gui libraries and fork tend to have a hate-hate relationship
[23:09] <sinzui> what do we fork for?
[23:09] <sinzui> Why do I not see it?
[23:09] <lifeless> zope.testrunner forks whenever a layer that cannot be torn down finishes
[23:10] <lifeless> the CA layer cannot be torn down
[23:10] <lifeless> maxb: probability 0
[23:10] <lifeless> maxb: its not through BB yet
[23:10] <sinzui> maxb, I think it could happen; I do not think we will have more than 5 things that need qa
[23:11] <sinzui> lifeless, is there no hope that matsubara-afk or Ursinha could do a release?
[23:11] <lifeless> Revision 13240 can not be deployed: needstesting, Revision 13243 can not be deployed: needstesting, Revision 13244 can not be deployed: needstesting, Revision 13248 can not be deployed: needstesting
[23:12] <lifeless> sinzui: doing a deploy on friday afternoon means we have no safety net
[23:12] <sinzui> understood
[23:12] <lifeless> sinzui: we'd be deploying 30 revisions - a lot of work.
[23:12] <maxb> Right, thanks, that was what I was wondering - there's enough QA already pending before it that I shouldn't expect anything
[23:12] <lifeless> sinzui: which increases the likelihood of needing the safety net
[23:13] <lifeless> sinzui: if we can get it done in the next couple of hours it should be fine - that would leave nearly 24 hours of losa coverage before EOW
[23:13] <deryck> sinzui, so I ran the bugs tests with -- ./bin/test -cvv --layer=YUI -m bugs -- and still get the seg fault.
[23:14] <sinzui> I think qaing bug 772754 is a lot of work
[23:14] <_mup_> Bug #772754: After better-bug-notification changes, list of bug subscribers is confusing <qa-needstesting> <story-better-bug-notification> <Launchpad itself:Fix Committed by gary> < https://launchpad.net/bugs/772754 >
[23:14] <lifeless> sinzui: yes
[23:15] <sinzui> deryck, thanks. I think I need to add something brutal to the layer. deryck I suspect the issue may be related to timeouts. Those tests are really slow
[23:15] <deryck> sinzui, yeah, I was wondering that, too.  My new branch speeds them up, but maybe not enough.
[23:16] <sinzui> deryck, The test case does handle the 30 second timeout, but something else might be deciding that the test is taking too long to run
[23:16] <deryck> all that dom redrawing is super slow.
[23:16] <sinzui> deryck, all tests passed for me in 43 seconds 30 minutes ago
[23:16] <sinzui> But I can see that the subscription tests are monsters
[23:17] <deryck> yeah, they're pretty fast for me, too.  I was just trying to make them even faster.  cleaning up some of that waits stuff.
[23:17] <sinzui> rock
[23:17] <deryck> the subscriptions tests are slow because of all the dom manipulation.
[23:17] <deryck> that is slow and ugly for users too.
[23:18] <sinzui> There are a lot of testcases in those modules too.
[23:18] <deryck> yeah, agreed.  they are long tests.
[23:20] <deryck> ok, back soon.  break time and changing session rooms.
[23:23] <jelmer> Hmm, are those OOPSes for code imports still actually generated (and thus Critical bug worthy) ?
[23:25] <mwhudson> bah, my login to devpad has been disabled again