[00:00] <lifeless> I've added (and it's landing now) an earlier cache of revision trees, but that doesn't help here because it's not a revision tree
[00:01] <lifeless> and as the tree is modified it seems to me it will be doing the wrong thing - this could be exacerbating up-thread/down-thread/update conflicts as well, if I understand it correctly.
[00:05] <abentley> lifeless: I don't understand jam's change, but it looks wrong in two ways.
[00:06] <lifeless> ok; I'll file a bug about this, as it's not 'just a locking issue' then.
[00:06] <abentley> lifeless: You shouldn't call set_base_revision if you don't want to set the base tree, and you shouldn't set base_rev_id to the revision-id of a WorkingTree.
[00:07] <abentley> lifeless: For working trees, Merger does know about a basis revision, e.g. Merger.this_basis and Merger.other_basis.  There's no corresponding variable for base trees, but that's what he would want to set.
[00:09] <abentley> lifeless: The smallest change that would get this working would be to set Merger.base_rev_id directly instead of calling set_base_revision.
[00:10] <abentley> lifeless: Confirmed, that passes.
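The fix abentley confirms above, setting Merger.base_rev_id directly rather than calling set_base_revision, can be sketched with a toy class (hypothetical, not bzrlib's real implementation, which carries far more state such as this_basis and other_basis):

```python
class Merger:
    """Toy model of the distinction discussed above."""
    def __init__(self, repo):
        self.repo = repo            # maps revision-id -> tree snapshot
        self.base_rev_id = None
        self.base_tree = None

    def set_base_revision(self, rev_id):
        # Sets BOTH the id and the base tree, which is wrong when the
        # id names working-tree state the repository can't serve.
        self.base_rev_id = rev_id
        self.base_tree = self.repo[rev_id]

repo = {"rev-1": {"README": "v1"}}
m = Merger(repo)
m.base_rev_id = "wt-current"   # the smallest fix: set the id directly
assert m.base_tree is None     # base tree left to be resolved later
```

Setting the attribute leaves base-tree resolution to later code that knows how to handle a working tree; calling the setter would eagerly cache the wrong tree.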
[00:10] <lifeless> abentley: thank you. It could have fallout elsewhere though?
[00:11] <abentley> lifeless: It's conceivable.  Most likely in a test case where we had coded it to expect the wrong behaviour.
[00:11] <lifeless> ok. I was doing this locking stuff as a warmup, so I'll put this bug in for now, and come back to it later.
[00:11] <lifeless> do you have a diff of that change you just made?
[00:14] <abentley> lifeless: http://pastebin.ubuntu.com/259556/
[00:22] <lifeless> thanks
[00:59] <lifeless> sorry about bug noise on 305006
[00:59] <lifeless> trying to get lp to do what I need ><
[01:22] <lifeless> igc: when you get here
[01:22] <lifeless> igc: my faster commit was broken; it should have been faster; fixed and landing now
[01:23] <lifeless> igc: so you may want to retest it speed wise :)
[02:04] <igc> morning
[02:04] <igc> lifeless: shall do
[02:05] <josh_k> anyone want to point me in the right direction for using bzr to merge two heretofore unrelated files?
[02:07] <josh_k> hm
[02:07] <josh_k> now that i say that out loud, I don't see how it would reasonably be possible
[02:07] <josh_k> hand-merge, here we come...
[02:08] <lifeless> igc: it's landing at the moment
[02:08] <igc> lifeless: excellent. well done!
[02:08] <lifeless> http://pqm.bazaar-vcs.org/
[02:09] <igc> lifeless: as soon as I wake up this morning, I'll give it a spin :-)
[02:09] <lifeless> kk
[02:09] <lifeless> it wasn't using specified_files in iter_changes
[02:09] <lifeless> which is why 'partial commit' was only as fast as full commit
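The optimisation lifeless describes (passing the selected files into the change iteration so partial commit scans only those paths) can be illustrated with a toy change iterator. bzrlib's real Tree.iter_changes has a much richer signature, so everything below is a simplified sketch:

```python
def iter_changes(basis, working, specific_files=None):
    """Toy change iterator: yields (path, old, new) for changed paths.

    Passing specific_files makes the scan proportional to the requested
    subset instead of the whole tree, which is the whole point of the
    partial-commit speedup.
    """
    if specific_files is not None:
        paths = specific_files
    else:
        paths = set(basis) | set(working)
    for path in sorted(paths):
        old, new = basis.get(path), working.get(path)
        if old != new:
            yield (path, old, new)

basis = {"README": "v1", "src/a.py": "v1"}
working = {"README": "v2", "src/a.py": "v2"}
# 'commit README' should only examine README:
partial = list(iter_changes(basis, working, specific_files={"README"}))
```

Without the subset filter, every path in the union of both trees is examined, so 'commit README' costs as much as a full 'commit'.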
[02:11] <igc> poolie: my focus today will be on reviewing critical/high patches and ...
[02:12] <igc> poolie: announcing explorer 0.7 + fastimport 0.9
[02:12] <igc> hopefully james_w or someone can get those uploaded to karmic in time
[02:17] <poolie> igc, that sounds good
[02:17] <poolie> mine is to merge the 2.0b1 branches and get that out
[02:30] <lifeless> igc: it's landed
[02:34] <igc> lifeless: sweet
[02:35] <igc> lifeless: so IIUIC, shelve/unshelve now work correctly on Windows?
[02:35] <lifeless> igc: yes
[02:35] <igc> lifeless: landed?
[02:35] <lifeless> or rather
[02:35] <lifeless> unit tests say they do
[02:35] <lifeless> yes, landed.
[02:35] <igc> lifeless: wow, you rock!
[02:36] <lifeless> also pushing locally
[02:36] <lifeless> there are now no tests representing things users can do that should fail due to locking interactions on Windows
[02:36] <lifeless> there are a few that we use to test race conditions
[02:36] <igc> lifeless: you may want to email the bzr-windows list calling for testing on this
[02:36] <lifeless> I've a branch up for review that guards those tests on windows more aggressively
[02:36] <lifeless> igc: I'm not on the windows list - could you send such a mail?
[02:37] <igc> lifeless: sure
[02:37] <igc> lifeless: that's mega cool because I'll now get some interest in someone writing qshelve/qunshelve
[02:37] <lifeless> we can of course still collide between separate processes
[02:37] <lifeless> but same process stuff should be rock solid
[02:38] <igc> lifeless: as long as they weren't working on Windows, there was limited interest
[02:40] <igc> lifeless: pulling the 2.0 branch doesn't show your commit changes, nor the shelve-related changes
[02:40] <igc> lifeless: are they only in trunk or is lp mirroring to blame?
[02:40] <lifeless> only trunk
[02:40] <lifeless> landing a big batch to 2.0 later
[02:41] <igc> ok
[02:42] <poolie> ok time for brunch, or whatever it is :)
[02:44] <AfC> poolie: it's not a weekend, so I think yes, you need to have brunch before noon. Weekends you're ok until 13:30 :)
[03:00] <lifeless> igc: my testing shows - 1.3 seconds for commit w/out iter_changes, 0.25s for commit w/iter_changes 0.36s to commit the full tree
[03:00] <lifeless> igc: (all with a single file changed)
[03:00] <igc> lifeless: which project?
[03:00] <lifeless> samba in 2a
[03:00] <lifeless> the first two times are 'commit  README', the last one is 'commit'
[03:13] <lifeless> spiv: ping
[03:13] <lifeless> IncompatibleRepositories: CHKInventoryRepository('chroot-97810960:///stack-on/.bzr/repository/')
[03:13] <lifeless> is not compatible with
[03:13] <lifeless> KnitPackRepository('chroot-97810960:///to/.bzr/repository/')
[03:13] <lifeless> different rich-root support
[03:13] <lifeless> thats what I'm generating at the moment
[03:13] <lifeless> spiv: this isn't ideal, but AFAICT it's roughly what we do elsewhere
[03:14] <lifeless> and it's at least in the category of 'class of thing we already have to fix'.
[03:14] <lifeless> spiv: seem ok to run with this?
[03:20] <spiv> lifeless: ugh
[03:21] <spiv> What's the benefit we'd trade this against?
[03:21] <lifeless> well at the moment it's an UnknownSmartServerError
[03:21] <lifeless> the client hasn't ever opened 'stack-on', so it can't translate it without going to the network.
[03:22] <lifeless> and 'to' doesn't exist because the server is aborting a 'create foo for me' operation.
[03:22] <lifeless> so in terms of benefits; it will error about 600ms faster
[03:23] <lifeless> possibly my unit tests are not representative of what launchpad itself will do
[03:23] <spiv> I'm not really fussed about making that error faster :)
[03:23] <spiv> I think we need to fix the server to return relpaths fairly soon.
[03:23] <lifeless> spiv: note that I'm going to be catching this in cmd_push *anyway*
[03:24] <lifeless> this is 'bzr push a 1.9 branch somewhere with a stacking policy that points at a 2a branch'
[03:24] <lifeless> a precondition on that is that I can catch the error; so my motivation here isn't 'make this error nice', it's 'make it typed rather than unknown'
[03:25] <spiv> IncompatibleRepositories can occur on other operations, although I agree that's the most common.
[03:25] <lifeless> they can, and at the moment - well, see above - unknown smart server error.
[03:25] <spiv> What does the error serialisation look like?  I'm wondering if it will be reasonably easy to replace the useless in-proc URLs with relpaths later without disrupting old clients.
[03:26] <lifeless> I argue that what I've done so far is an unqualified improvement, even if it's not an ideal end-state.
[03:26] <lifeless> ('Incompa..', str(err.source), str(err.target), str(err.detail))
[03:26] <spiv> Yeah, I agree it sounds like an improvement.
[03:26] <lifeless> but as the client doesn't process at all it would be easy to change later
[03:27] <spiv> The crummy URLs are essentially the same issue as in the lock contention message, I guess.
[03:28] <lifeless> yes
[03:28] <spiv> I'm satisfied that this is an improvement.  I'm a little worried it'll be an impediment to doing something better later.
[03:28] <lifeless> there are two cases
[03:29] <lifeless> either we need more fields, or we need to change the serialised value of fields
[03:29] <lifeless> the client doesn't care about how many fields
[03:29] <lifeless> so we can add some (including adding different representations of existing fields)
[03:29] <spiv> Ok, if you leave that wiggle room in the client that sounds ok.
[03:29] <lifeless> and the client doesn't parse the fields, so we can change fields if we want- knowing that the client will just show the strings it receives
[03:30] <spiv> i.e. effectively reserving some fields for future use.
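The forward-compatibility argument above (the client shows the error fields verbatim and never parses them, so a newer server can append fields without disruption) might look like this in miniature. The tuple layout follows lifeless's paste; the relpath fields are hypothetical future additions:

```python
def describe_incompatible(fields):
    """Client-side rendering of the serialised error tuple:
    ('IncompatibleRepositories', source, target, details).

    The client only displays the first three payload fields, so any
    extra fields a newer server appends are silently ignored -- the
    wiggle room spiv asks for.
    """
    name = fields[0]
    source, target, detail = (list(fields[1:4]) + [""] * 3)[:3]
    return "%s: %s is not compatible with %s (%s)" % (
        name, source, target, detail)

old = ("IncompatibleRepositories",
       "chroot-97810960:///stack-on/.bzr/repository/",
       "chroot-97810960:///to/.bzr/repository/",
       "different rich-root support")
# hypothetical future server appending relpath fields:
new = old + ("relpath=/stack-on", "relpath=/to")
assert describe_incompatible(old) == describe_incompatible(new)
```

Because old clients render identical output for both tuples, the server can later swap the useless in-process URLs for relpaths in appended fields without breaking anyone.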
[03:32] <lifeless> its no worse than
[03:32] <lifeless> Using default stacking branch /~bzr/bzr/trunk at lp-45211856:///~lifeless/bzr
[03:32] <lifeless> I feel you're very concerned about a special case of a pervasive problem.
[03:32] <lifeless> which is a bit odd.
[03:33] <spiv> Oh, it is pervasive.
[03:33] <spiv> That doesn't mean we should spread it even further without thought :)
[03:34] <spiv> Actually, that particular message is extra crummy.
[03:34] <spiv> It's actually emitted on stderr by the server process!
[03:34] <lifeless> the incompatible one?
[03:34] <spiv> Oh, actually, not that one.
[03:34] <lifeless> merge review request sent.
[03:35] <spiv> I'm thinking of the 'Source format doesn't support stacking' one.
[03:35] <lifeless> as we've discussed, I figure you've just volunteered to review ;)
[03:45] <lifeless> spiv:
[03:45] <lifeless> https://code.launchpad.net/~lifeless/bzr/bug-393677/+merge/10710
[03:48] <lifeless> spm: what's the URL configured in pqm's bzr config for 2.0?
[04:08] <lifeless> spm: ping
[04:08] <lifeless> poolie: what's the URL for submitting to 2.0?
[04:09] <igc> lifeless: I'll kick off that benchmark now while I grab some lunch
[04:09]  * igc food
[04:09] <lifeless> igc: it's going to be awesome
[04:11] <spm> lifeless: similar to the 1.17 & 18: bzr+ssh://bazaar.launchpad.net/~bzr-pqm/bzr/2.0
[04:11] <lifeless> spm: thanks
[04:11] <lifeless> we need to doc this better. /me adds a yak to the list
[04:20] <poolie> spiv, hi, shall we catch up?
[04:20] <poolie> lifeless: do you think https://code.edge.launchpad.net/~ian-clatworthy/bzr/faster-dirstate-saving/+merge/7537 is good to merge now? igc says you read a previous version
[04:22] <lifeless> poolie: the last one I read looked like it could cause confusion about the internal state; jam's comments were along those lines and I agreed.
[04:22] <lifeless> I don't know if it's been improved since then; and while it's a functional improvement it is a risky area of code.
[04:22] <poolie> so it needs to be reworked?
[04:23] <poolie> jml: regarding the test suite
[04:23] <poolie> i don't think running it twice from pqm is, um
[04:23] <poolie> likely to be actioned very fast
[04:23] <poolie> so, i'd rather rely on buildbot, which is apparently now working ok
[04:24] <lifeless> poolie: do you mean 'lifeless: regarding the test suite' ?
[04:24] <jml> well, it is being run twice from PQM right now
[04:24] <poolie> no :-P
[04:24] <poolie> jml: now that we have a buildbot and are getting results from it i propose to just remove the second run from make check
[04:25] <poolie> and get vila to do that on a slave somewhere
[04:25] <jml> poolie, that sounds fine
[04:25] <lifeless> poolie: I have been watching carefully since we last discussed this
[04:25] <lifeless> poolie: I haven't landed any utf8-related patches to see if it helps or not :P
[04:26] <poolie> i looked through all my recent pqm mails and didn't see any failures in ascii tests
[04:26] <poolie> but that may be unrepresentative because people tend to work in different areas
[04:26] <lifeless> poolie: right, but what fraction were ... exactly
[04:26] <poolie> anyhow i'm not going to do it now
[04:26] <lifeless> anyhow, I've voiced my opinion - to keep it - before. I don't think the gain is worth the pain if something slips through.
[04:26] <poolie> or not today
[04:27] <lifeless> but I won't try to stop the change, if I can't convince you, the arguments aren't convincing
[04:27] <poolie> so, i respect your opinion, and i'm open to changing it back, but i think we should try changing it
[04:27] <poolie> i'm sure we get more breakage across platforms or distro versions
[04:27] <poolie> so we need buildbot to work well for that coverage
[04:28] <poolie> so i'd be ok relying on it for that too
[04:28] <poolie> s//for lang=c too
[04:28] <lifeless> I get that. OTOH we've known about windows breakage for ages. It got fixed when it became possible for us to test it locally and quickly.
[04:28] <lifeless> [locking specifically]
[04:29] <lifeless> My experience with fix-later environments is that they are consistently lower quality than fix-before ones.
[04:29] <lifeless> But as you say, open to trying the change and measuring.
[04:31] <poolie> so eventually it might be nice to test them all synchronously but in parallel
[04:32] <lifeless> poolie: what url did you give pqm to land bialix's patch?
[04:32] <lifeless> on 2.0
[04:32] <poolie> i didn't
[04:32] <lifeless> spm: what's the public address configured in the pqm conf?
[04:32] <poolie> i think vila sent it
[04:32]  * lifeless hunts yaks
[04:32] <spm> lifeless: http://bazaar.launchpad.net/~bzr-pqm/bzr/2.0
[04:33] <lifeless> spm: thanks
[04:33]  * lifeless submitifies
[04:37] <poolie> how about https://code.edge.launchpad.net/~jameinel/bzr/2.0b1-402645-fragmentation/+merge/10621
[04:39] <lifeless> spm: can you just paste me the stanza please..
[04:39] <lifeless> econfused
[04:39] <lifeless> poolie: jam and I don't agree about the short term fix
[04:39] <spm> lifeless: https://pastebin.canonical.com/21499/
[04:40] <lifeless> I'm worried about unordered being pathological because of what's out there already; he's worried about preventing existing branches from getting worse
[04:40] <lifeless> it's not a strong difference
[04:41] <lifeless> but as we rather have to fix this properly (IMO) for 2.0 to be released, I don't feel a strong urge to land a bandaid: users of 1.16/1.17/1.18 are already plentiful
[04:42] <lifeless> spm: can you check if there is a pqm job in process on there? (not via the web UI... I suspect I may have tickled a bug)
[04:43] <lifeless> poolie: bug 418998 is probably the symlink dereferencing bug
[04:43] <lifeless> poolie: rather than bzr-svn
[04:44] <lifeless> the 'didn't add files' aspect will be the 'silently ignores subtrees' bug in bzr core
[04:44] <spm> lifeless: technically yes, a job is running; but it isn't a bzr one - if you ken what I'm not saying :-)
[04:44] <lifeless> I do
[04:44] <lifeless> can you check the log for requests from me
[04:44] <spm> aye
[04:44] <poolie> oh ok
[04:44] <lifeless> spm: actually.
[04:45] <lifeless> spm: cron spam!
[04:45] <spm> ha
[04:45] <lifeless> bzrlib.errors.ConnectionReset: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.
[04:45] <lifeless> Permission denied (publickey).
[04:45] <spm> (*^%)@*(#&$^)@#*(^%()*!&@%$_@#$&*())@*(#&$^@(*&^!!!!!!!!!!!!!!!!!!
[04:45] <lifeless> spm: please be fixing that file, that file you love so well.
[04:45] <lifeless> poolie: I replied asking if it was a symlink, that will tell us
[04:46] <poolie> spiv, tell me things?
[04:46] <spm> lifeless: pls to try try again
[04:46] <lifeless> bombs away
[04:48] <spiv> poolie: heading out to lunch in a sec.  I've been responding to jam's review of my groupcompress batching branch.
[04:48] <lifeless> arjenAU: hi
[04:48] <lifeless> arjenAU: when are you in sydney?
[04:49] <poolie> spiv, so how is it overall?
[04:49] <lifeless> spm: no go?!
[04:49]  * lifeless tries with http <-> http
[04:50] <arjenAU> lifeless: 7-9 Sep (7pm on the 6th)
[04:50] <lifeless> arjenAU: I'm going to try to swing past the mechanics school
[04:51] <arjenAU> lifeless: hmm?
[04:51] <lifeless> aren't you giving a talk?
[04:51] <lifeless> at SMUG?
[04:52] <spiv> poolie: he saw some possible holes in the batching/caching interaction.  I think I've fixed that up, although it's perhaps a little bit more complex now.  It's looking good; I'll submit it for re-review by jam just in case, but I expect it'll get a thumbs up.
[04:53] <lifeless> poolie: http://pqm.bazaar-vcs.org/ will hopefully land all the cumulative fixes from bzr.dev. If it fails, they are pushed to lp:~lifeless/bzr/2.0 so that people can tweak.
[04:53] <lifeless> I don't expect failures - but there are > 10 patches in the set so it may find something to bail on
[04:55] <lifeless> arjenAU: http://www.pythian.com/news/3728/sydney-mysql-user-group-meetup-9-in-depth-mysql-with-arjen-lentz specifically ;)
[04:55] <arjenAU> lifeless: oh right! sure
[04:56] <lifeless> but if you want to get together and chat bzr/lp stuff before/after/some-other-timeslot that would be cool too
[04:59] <jam> lifeless: so the code as it stands now only makes things worse and never better. Why wouldn't switching to 'unordered' be a reasonable short term fix?
[05:00] <lifeless> jam: I'm concerned that it will also make things worse, just differently, as it will apply to all fetches too. And it's a chance for more bug reports while we're doing the longer-term fix.
[05:00] <spm> lifeless: well I guess one positive - that file is back. So I suspect we know where it's coming from...
[05:01] <lifeless> spm: wheres that then ...
[05:01] <jam> lifeless: so.. if the source has been packed (ever) then at least we preserve that
[05:01] <lifeless> my using lp: as the source for the merge, I'd suspect
[05:01] <lifeless> jam: I'm mainly coming from paranoia :)
[05:01] <jam> it doesn't affect Packer
[05:02] <lifeless> I need food
[05:02] <lifeless> back shortly - jam, land it if you're quite sure we won't have fallout while we develop the longer fix.
[05:02] <spm> lifeless: that was my guess, yes.
[05:02] <lifeless> spm: so, what happened when we tried an empty file?
[05:02] <jam> lifeless: given that we've been fetching pack => pack 'unordered' for a while....
[05:03] <spm> lifeless: kaboom
[05:03] <lifeless> actually, I don't want to know. I want to debug it with you tomorrow and file copious bugs
[05:03] <spm> heh
[05:03] <lifeless> we can't reliably switch to lp until that's fixed.
[05:03] <spm> right
[05:05] <lifeless> poolie: EOD(); suspend();
[05:06] <poolie> ok
[05:07] <poolie> i'm going to get as many of these things merged as we can then do a beta
[05:07] <poolie> unless something comes up
[05:07] <arjenAU> lifeless: we'd be done at 5pm, Barrack St in the city.
[05:07] <lifeless> sounds good
[05:07] <lifeless> I'll tee it up with you next week
[05:08] <lifeless> poolie: http://pqm.bazaar-vcs.org/ is good so far
[05:10] <lifeless> jml: after work today can you do me a small favour. subunit progress review-or-rubber-stamp
[05:10] <jml> lifeless, sure thing
[05:45] <thumper> abentley: pipeline ping
[05:50] <poolie> spiv, back?
[05:54] <spiv> poolie: yeah
[05:54] <poolie> i guess rather than nagging i want to know
[05:54] <poolie> when i should do 2.0b
[05:54] <poolie> so can you tell me either when you have nothing more to do with it, or alternatively
[05:55] <poolie> if there's anything i should be doing, or if it's pear-shaped
[05:55] <lifeless> poolie: if 2.0b means 'we think we're done', in a few days
[05:55] <lifeless> poolie: if it means 'get a snapshot of where we are at' - well anytime really.
[05:56] <lifeless> my backport of the last weeks progress looks like it will land
[05:56] <lifeless> in 30-40 minutes
[05:56] <poolie> it means we think there's no more critical bugs
[05:57] <lifeless> ok, well I don't think we're going to be there today
[05:57] <spiv> So I'm currently working on the fix for bug 402657, by doing the last tweaks from jam's review.  I'm going to ask him to re-review it, though, because he's touched this more than I have and I'd like to be cautious with this.
[05:57] <spiv> Which means I expect it to land tomorrow morning.
[05:57] <jam> well, if you submit it right now
[05:57] <jam> I might get to it :)
[05:58] <poolie> that would be much better
[05:59] <spiv> Hmm, I'll see how quickly I can take care of the last little bits :)
[06:02] <poolie> thanks
[06:20] <lifeless> poolie:  '(mbp) fix IndexError printing CannotBindAddress'
[06:20] <lifeless> that's in my branch that's landing
[06:21] <lifeless> might want to ask spm to cancel your merge
[06:21] <poolie> hm
[06:21] <poolie> why is it in your branch?
[06:21] <lifeless> because I did what I said I would do earlier
[06:21] <lifeless> which is to cherrypick all the 2.0 candidates from trunk
[06:22] <poolie> ah, ok
[06:22] <poolie> i understood you meant you'd do one merge with all of your changes
[06:22] <poolie> so spm would you please cancel my pending pqm job?
[06:22] <lifeless> have a look at http://pqm.bazaar-vcs.org/ at my commit message :P
[06:28] <spm> poolie: cancelled
[06:28] <poolie> oh thanks
[06:31] <jam> spiv: I'm going to bed now. So better to do a good job than a fast one.
[06:31] <jam> I'll try to review first thing when I wake up
[06:31] <lifeless> sleep well jam
[06:31] <jam> night all
[06:31] <jam> thx lifeless
[06:33] <lifeless> spm: please tell me you didn't cancel the running job
[06:33] <lifeless> spm: (and mean it!)
[06:33] <spiv> jam: g'night
[06:33] <spiv> d'oh, missed him.
[06:34] <spm> lifeless: I didn't cancel the running job. ??
[06:34] <spiv> Just pushing it up now, anyway.
[06:34] <lifeless> whew
[06:34] <spm> lifeless: rm patch.1251263587 ==> star-merge http://bazaar.launchpad.net/~mbp/bzr/prepare-2.0 http://bazaar.launchpad.net/~bzr-pqm/bzr/2.0
[06:34] <lifeless> ah found it
[06:34] <lifeless> poolie: it's all landed
[06:37] <poolie> spiv, so what now?
[06:43] <spiv> poolie: Well, the review reply with a fairly small incremental diff (+68, -45) has been sent and I think addresses jam's concerns.
[06:44] <poolie> so..
[06:45] <spiv> Gar, wireless dropout.
[06:45] <spiv> (on wired for the moment)
[06:53] <lifeless> poolie: btw, record_entry_contents has exhaustive permutation tests.
[06:54] <poolie> do they pick up the case where it shouldn't record a new text?
[06:54] <lifeless> yes
[06:54] <lifeless> they pin it down very very precisely.
[06:54] <lifeless> they assumed that the interface they were building on returned accurate data.
[06:55] <lifeless> grep for unchanged in bzrlib/tests/per_repository/test_commit_builder.py
[06:57] <vila> ow, almost crossed jam...
[06:57] <vila> hi all :)
[07:01] <igc> hi vila!
[07:02] <lifeless> poolie: It's my contention that only explicit tests for the problem would have exposed it - be they integration tests or unit tests of the interface.
[07:03] <lifeless> poolie: I don't think it is a lack of tests on the record_entry_contents interface though; given accurate data it will behave well and is tested to do that.
[07:03] <lifeless> given bogus data I think it's extremely likely to misbehave - but that's why we have tests for the behaviour of tree :)
[07:03] <poolie> https://bugs.edge.launchpad.net/bzr/+bug/418931 spiv
[07:11] <lifeless> poolie: I guess I'm trying to say that you're very unlikely to be able to create a test where, given valid data, the api does the wrong thing
[07:11] <lifeless> poolie: because I spent a bunch of time trying to make the interface tests as defensive as possible
[07:31] <igc> poolie: reviewing that cf patch now and have some questions ...
[07:32] <poolie> ok
[07:32] <igc> poolie: in repository.py, why the new exception wrt nested tree
[07:32] <igc> ?
[07:32] <poolie> more context?
[07:32] <igc> did that fall out of tests?
[07:32] <igc> lines 98-100
[07:33] <igc> see https://code.edge.launchpad.net/~mbp/bzr/415508-content-filtering/+merge/10640
[07:33] <poolie> it came out of testing it
[07:33] <poolie> it should never be hit, but trapping it here gives a more meaningful message
[07:33] <igc> ok
[07:34] <igc> poolie: also the docstring in workingtree4.py ...
[07:34] <igc> code doesn't seem to match?
[07:34] <igc> well match precisely
[07:34] <poolie> that it holds the hash and size?
[07:35] <poolie> i thought it did :/
[07:36] <lifeless> igc: how did the perf test go?
[07:36] <lifeless> did it rock?
[07:36] <lifeless> igc: also, all the stuff for windows users is in the 2.0 branch now.
[07:36] <igc> lifeless: the one over lunch sucked because the lp mirror wasn't up to date :-(
[07:36] <igc> lifeless: I kicked off another test 30 minutes ago
[07:36] <lifeless> igc: ah, shame. eta?
[07:37] <igc> lifeless: *this* time with your code :-)
[07:37] <poolie> lifeless: so, possibly with the test_commit_builder tests, i just want to add a new one asserting that it copes ok if the size and hash are None
[07:37] <poolie> and if the size is none but the hash is known
[07:38] <poolie> and maybe for both those cases, with the case that the file actually either has or hasn't changed
[07:38] <lifeless> poolie: So unknown I'd expect to be covered (I'd have to check though). But unknown size was not conceived of in the original interface definition.
[07:38] <lifeless> poolie: so I definitely think adding tests to cover the change to the interface is appropriate.
[07:39] <igc> lifeless: full test is still running but the number you want just popped on my screen 10 seconds ago ...
[07:39] <igc> 1.2 seconds
[07:39] <igc> down from 73+ seconds
[07:39] <lifeless> igc: and full commit is?
[07:39] <igc> 1.96 seconds
[07:39] <lifeless> booyah!
[07:39] <igc> :-)
[07:39] <poolie> i guess it returns an indication of whether something was recorded
[07:39] <lifeless> igc: that 0.7 seconds is 'status on the rest of the tree'
[07:39] <igc> right
[07:40] <lifeless> poolie: yes, we also check that the inventory gets the right last-modified value for that entry.
[07:40] <lifeless> poolie: remembering that this is an awkward interface we are deleting, so don't go overboard :)
[07:40] <lifeless> poolie: 'rm the interface' is not-all-that-far-away.
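The extra cases poolie proposes (recording must cope when the cached hash and/or size are None) could be pinned down by something like this toy recorder. The function and its signature are invented for illustration and are not bzrlib's record_entry_contents:

```python
import hashlib

def record_text(entry, content, cached_hash=None, cached_size=None):
    """Record a file text only if it changed, tolerating an unknown
    (None) cached hash and/or size.  Returns (recorded, sha, size)."""
    # Fall back to computing when the stat cache gives us nothing.
    sha = cached_hash or hashlib.sha1(content).hexdigest()
    size = cached_size if cached_size is not None else len(content)
    recorded = sha != entry.get("sha")
    if recorded:
        entry.update(sha=sha, size=size)
    return recorded, sha, size

entry = {"sha": hashlib.sha1(b"old").hexdigest(), "size": 3}
# unchanged content, no cached hash or size: must NOT record a new text
unchanged, _, _ = record_text(entry, b"old")
# changed content, hash known but size unknown: must record, computing size
changed, _, size = record_text(entry, b"newer",
                               cached_hash=hashlib.sha1(b"newer").hexdigest())
```

The first call exercises the "unknown hash, unchanged file" case from the bug; the second exercises "known hash, unknown size", which lifeless notes was never conceived of in the original interface definition.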
[07:41] <lifeless> igc: If you recall, I was expressing concern that that overhead would be substantial vis-a-vis layering on iter_changes rather than changing iter_changes.
[07:41] <poolie> so i've sent that thing for merge
[07:41] <lifeless> igc: you can see why now, I hope.
[07:41] <poolie> there's a bug open about adding more tests but i'm not very motivated to do it now
[07:42] <lifeless> I think that's fine; I'd be inclined to close the bug wontfix
[07:42] <lifeless> diminishing returns
[07:42] <poolie> if i'm going to do more here it would be primarily
[07:42] <poolie> adding integration tests
[07:43] <poolie> or trying to refactor it into a clearer layer that does filtering
[07:44] <poolie> igc, tell me about bug 374735?
[07:45] <igc> poolie: one sec
[07:47] <igc> poolie: so the issue is ...
[07:48] <igc> could the Upgrade Guide recommend a less complex upgrade recipe for branches stacked on LP?
[07:48] <igc> poolie: iiuic, thumper thinks no
[07:49] <poolie> so, um
[07:49] <igc> poolie: I'm happy to document whatever we agree on
[07:49] <poolie> i suppose what i want to know now is: what are the implications for the 2.0 release? should i be doing something to unblock it, or should we consider that it might block 2.0?
[07:49] <igc> poolie: maybe we need a conference call with thumper, lifeless, you and I to resolve it
[07:50] <igc> poolie: I think the current recipe and doc is acceptable
[07:50] <igc> poolie: I agree it *could* be better one day but I don't think it's a blocker
[07:51] <igc> poolie: my personal opinion is that making shared repository upgrades easier is more important
[07:52] <igc> by upgrading branches in that repo implicitly
[07:52] <igc> my patch for that got some love last Friday but hasn't moved since
[07:52] <poolie> right
[07:52] <poolie> so that's only weakly connected to the issue of upgrading stacked branches?
[07:52] <poolie> like technically unconnected, just in the same area?
[07:53] <igc> my patch is currently unconnected from stacked branches
[07:53] <igc> an earlier cut tried doing them in a certain order but ...
[07:53] <igc> I backed that out because it didn't apply ...
[07:54] <igc> stacked branches aren't supported in a shared repo at all
[07:54] <poolie> i'm thinking about narrowing down the 2.0-targeted bugs to just things that we must or nearly must fix
[07:54] <poolie> just for easier thinking
[07:54] <igc> right
[07:54] <jml> the best kind of thinking!
[07:54] <igc> I think the Upgrade Guide is fine for 2.0
[07:55] <igc> after all, it actually works
[07:56] <igc> I wish the Data Migration Guide (fastimport) had reached that lofty state
[07:56] <lifeless> poolie: jml: spiv: I wonder if making scenario application do a shallow copy would be safe. It looks like it might be substantially faster.
[07:56] <lifeless> we'd probably want to change some static parameters to factories.
[07:56] <poolie> shallow copy of what?
[07:56] <igc> poolie: btw, just approved that content filtering patch
[07:56] <poolie> thanks
[07:56] <poolie> i already sent it :)
[07:56] <spiv> lifeless: of the test case, vs. a deep copy?
[07:57] <lifeless> yes
[07:57] <igc> poolie: ok, so approving it makes me feel better, if not adding any value otherwise :-)
[07:57] <spiv> lifeless: I imagine it would be faster... I wonder if perhaps there should be an option for deep vs. shallow?
[07:58] <lifeless> apply_scenarios is ~ 100% of the time of 'test_test_suite'
[07:58] <spiv> I don't have a very strong opinion on the safety of it.  I think using copy/deepcopy is a bit of a rough approximation for the semantic that we want.
[07:58] <lifeless> test.clone() would be better.
[07:58] <spiv> Right.
[07:58] <poolie> well, add a .copy() method and then it can be modified
[07:58]  * lifeless adds another yak.
[07:58] <poolie> exactly
[07:58]  * jml is agreeing so far
[07:58] <jml> shallow will work, I think.
[07:58] <spiv> Well, there's __deepcopy__ already ;)
[07:58] <poolie> did you ever have yak butter tea? i once did in montreal.
[07:59] <poolie> kiwis might like it :)
[07:59]  * lifeless makes it shallow
[08:00] <lifeless> 50% faster
[08:00] <lifeless> and another second off loading the regular test suite.
[08:03] <lifeless> 3.3 seconds being reported by selftest selftest now.
[08:03] <lifeless> almost down to tdd speeds again
[08:03] <poolie> nice :)
[08:04] <poolie> want to do my selftest --debug-failure feature request too? :)
[08:04] <lifeless> poolie: yes, but I won't
[08:04] <lifeless> I want to drive this thread into the ground
[08:05] <lifeless> and I haven't yet figured out how to fork() myself
[08:05] <lifeless> _check_safety_net
[08:05] <lifeless> this is kindof slow.
[08:07] <poolie> hm, i don't know what to do now
[08:08] <lifeless> I have been wondering if thats what you were doing:) [figuring out what to do]
[08:08] <lifeless> poolie: if you don't have anything specifically for 2.0, you could pick one of [brain food, manage someone, something useful even if not 2.0 specific]
[08:09] <poolie> i need to merge 415508 to 2.0 and then i guess look at john's bug 402645 fix
[08:09] <poolie> and then do a beta release
[08:10] <vila> lifeless: _check_safety_net's *intent* is to ensure we don't fall back to a shared repo; it's more problematic when one sets TMPDIR under $HOME than the default under /tmp,
[08:11] <vila> that being said, the *implementation* can be changed as you see fit
[08:11] <lifeless> vila: yes, but opening a working tree 20K times is a lot of work.
[08:11] <lifeless> vila: I'd rather make sure we error when someone reads from that dir
[08:11] <lifeless> vila: by using the branch open hook. For instance.
[08:11] <vila> lifeless: yup, that will work too, we discussed it with poolie long ago, but I never found the time to fix it
[08:12] <vila> maybe just chmod'ing may be enough (but may not work on windows)
[08:13] <vila> *but* I'm a bit scared by a too tricky impl. that some tests may defeat
[08:13] <lifeless> vila: so, a few thoughts
[08:13] <lifeless> a test could go above it anyway.
[08:13] <vila> anyway, a good test for that is to create a standalone branch and then do TMPDIR='.' :-)
[08:13] <lifeless> so we can't be perfect.
[08:14] <lifeless> As we can't be perfect, we get to choose just where we put the line.
[08:14] <lifeless> I suspect that a python callback * 20K will be much cheaper than open_wt + get_last_revision * 20K
[08:14] <lifeless> the only tests that could bypass that are those that actually run bzr externally.
[08:14] <vila> a few thoughts too: I'm pretty sure some tests already escape the current jail (some cxxxpolicy tests actually use the local branch settings)
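lifeless's cost estimate above (a Python callback per test being far cheaper than opening a working tree and reading its last revision) can be sanity-checked with a crude timeit sketch; the single file read below is only a stand-in for `WorkingTree.open()` + `get_last_revision()`, which do far more work, so the real gap is wider than what this measures:

```python
import os
import tempfile
import timeit

# Stand-in for the proposed safety-net callback: pure Python, no I/O.
def callback(path):
    return path.startswith('/forbidden')

# Stand-in for the current check.  A real WorkingTree.open() plus
# get_last_revision() does far more than a single read, so this
# understates the true cost.
fd, probe = tempfile.mkstemp()
os.write(fd, b'dummy-revision-id\n')
os.close(fd)

def disk_check():
    with open(probe, 'rb') as f:
        return f.read()

N = 20000  # roughly one check per test in the full suite
t_cb = timeit.timeit(lambda: callback('/tmp/work'), number=N)
t_io = timeit.timeit(disk_check, number=N)
print('callback: %.3fs  disk: %.3fs  ratio: %.1fx' % (t_cb, t_io, t_io / t_cb))
os.unlink(probe)
```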
[08:14] <lifeless> so roughly:
[08:15] <lifeless>  - make a subclass (e.g. ExternalBase :P) be the only test class with run_bzr_subprocess()
[08:15] <lifeless> give that subclass the current safety net check
[08:15] <vila> External must die
[08:15] <vila> ExternalBase must die
[08:15] <lifeless> actually, I think it should live, for this and only this.
[08:15] <vila> It's the one in blackbox right ?
[08:16] <vila> be aware that it isn't used by all blackbox tests then
[08:16] <lifeless> and remove the current safety net creation and check in the base class with the replacement of a bzrdir/branch open hook to catch library attempts to use that area.
[08:16] <lifeless> vila: when i delete the method from the base class, they will show up pretty fast.
[08:17] <vila> oh, by running selftest -s bb you mean ?
[08:17] <lifeless> broadly, yes
[08:17] <vila> also, some tests escape the safety net by accessing only the config files if I remember correctly
[08:18] <lifeless> yes, thats why we change HOME
[08:18] <lifeless> I plan to add hooks to file() and open() to catch them
[08:18] <vila> I'm pretty sure some tests still access the local branch if TMPDIR is set
[08:18] <poolie> lifeless: that still makes me a bit unhappy
[08:18] <poolie> i'd really rather you use a readonly directory
[08:19] <lifeless> poolie: I want to get away from disk IO completely for tests that think they are only testing functions.
[08:19] <poolie> i agree too
[08:19] <vila> I'm sorry I don't have more precise info there, it's all from experiments I made long ago and could never finish, the point is, we have some minor isolation problems that I saw long ago with the TMPDIR trick,
[08:19] <lifeless> poolie: I don't see making a directory, readonly or otherwise, as being compatible with that goal :(
[08:19] <lifeless> vila: I hope that I'll flush these out eventually.
[08:19] <vila> *but* they never show up again with --parallel=fork, so may be some have been already addressed
[08:20] <poolie> why?
[08:20] <vila> and the long term plan is to catch them by running them one by one when I'm able to work on coverage again
[08:20] <lifeless> poolie: because it requires mkdir(), either per-test, or a global variable and per-test-run
[08:20] <lifeless> the former is expensive, and the latter is also aesthetically displeasing
[08:20] <vila> lifeless: indeed, that's why I try to inherit from TestCase as much as I can :)
[08:21] <vila> poolie: run selftest -v -s xxx (limited to 20 tests or so) and you quickly get the feeling
[08:22] <poolie> which feeling?
[08:22] <vila> that TestCase tests are faster than TestCaseInTempDir
[08:22] <poolie> but doesn't even the base class get a tmpdir?
[08:23] <poolie> hm, ok
[08:23] <poolie> i guess it may stop us reading anything
[08:23] <poolie> i kind of suspect it will break things but it's worth a try
[08:24] <vila> poolie: no, that starts with  TestCaseWithMemoryTransport
[08:25] <poolie> did any of you have an opinion about john's groupcompress fragmentation patch https://code.edge.launchpad.net/~jameinel/bzr/2.0b1-402645-fragmentation/+merge/10621
[08:28] <vila> poolie: we had a long discussion with jam yesterday regarding gc fragmentation, did you talk to him since then ?
[08:28] <vila> or did you read the logs ?
[08:28] <poolie> i did see it but i haven't read them exhaustively
[08:28] <poolie> so i'd rather if you could summarize?
[08:29] <vila> hmm
[08:30] <poolie>  /should i stay or should i go now?/
[08:30] <vila> 1) gc groups can't be fetched as a whole, we need to split them
[08:30] <poolie> or more specifically is it a good idea to merge
[08:30] <vila> I think it's good to merge in the short term
[08:31] <vila> wow, you wanted a really short summary :)
[08:32] <poolie> yeah
[08:34] <vila> ... but the commit message of the mp summarizes the situation perfectly: the best we can do in the short term, but there are far better alternatives for the long term
[08:34] <vila> the discussion was about these alternatives
[08:36] <vila> grr, damn lp:mad, jam's mp is really a one-liner !
[08:36] <vila> poolie: ow, you approved it, ok
[08:40] <poolie> 'ow'?
[08:42] <vila> poolie: no need for me to review it then, but no worries :)
[08:44] <lifeless> sorry about the formating
[08:44] <lifeless> {'TimeStandard': 0.001, 'TimeWithMemory': 11.399999999999878, 'TimeWithTransport': 13.820999999999875, 'TimeBzr': 2.1129999999999911, 'TimeWithDir': 14.200999999999917}
[08:44] <vila> s/9//
[08:45] <lifeless> thats 1000 tests, time summed
[08:45] <lifeless> Standard is unittest.TestCase
[08:45] <poolie> so close, so far away
[08:45] <lifeless> you can see that tests.TestCase is about 2000 times slower than unittest.TestCase, and ~1/5th to 1/7th the cost of the richer test cases
[08:45] <vila> lifeless: s/Time/bzrlib.test.Test/ ?
[08:46] <vila> ok
[08:46] <lifeless> yes, TimeWithMemory = a noop TestCaseWithMemoryTransport test
[08:46] <vila> fully matches my mental model
[08:46] <vila> but what is TimeBzr ?
[08:46] <lifeless> http://paste.ubuntu.com/259705/
[08:47] <lifeless> +class TimeBzr(tests.TestCase):
[08:47] <lifeless> +
[08:47] <lifeless> +    def test_time(self):
[08:47] <lifeless> +        pass
[08:47] <vila> oh, ok
[08:47] <lifeless> bzr's 'regular' TestCase
[08:47] <vila> haa, costs a bit, well, cleanEnv I presume
[08:47] <lifeless> mkdir()
[08:47] <lifeless> and rmdir()
[08:48] <lifeless> I'm pretty sure TestCase does the chdir and sets HOME
[08:48] <vila> lifeless: for tests.TestCase ?
[08:50] <lifeless> yes
[08:50] <vila> I don't think so, since home is under the TEST_ROOT shared by WithMemory
[08:50] <vila> startLog instead
[08:51] <vila> which seems wrong as many tests won't need it
[08:51] <vila> I mean, it shouldn't be there or fail if we try to log anything
[08:52] <vila> that whole test/log interactions needs an overhaul anyway :-D
[08:52] <vila> see the XXX in _get_log()
[08:53] <vila> :-D
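The per-class overheads lifeless pasted can be reproduced with a plain-unittest harness along these lines; the class names echo his paste, but the timing plumbing here is a sketch, not the code he actually ran:

```python
import os
import shutil
import tempfile
import time
import unittest

class TimeStandard(unittest.TestCase):
    """No-op test on a bare unittest.TestCase."""
    def test_time(self):
        pass

class TimeWithDir(unittest.TestCase):
    """No-op test paying a mkdir()/rmdir() on every run, as a
    temp-dir-per-test base class does."""
    def setUp(self):
        self._dir = tempfile.mkdtemp()
    def tearDown(self):
        shutil.rmtree(self._dir)
    def test_time(self):
        pass

def total_time(case, runs=1000):
    # Sum the wall-clock time of `runs` executions of the class's tests.
    loader = unittest.TestLoader()
    with open(os.devnull, 'w') as sink:
        runner = unittest.TextTestRunner(stream=sink, verbosity=0)
        start = time.time()
        for _ in range(runs):
            runner.run(loader.loadTestsFromTestCase(case))
        return time.time() - start

times = {cls.__name__: total_time(cls) for cls in (TimeStandard, TimeWithDir)}
print({name: round(t, 3) for name, t in times.items()})
```

The WithDir variant pays for 1000 mkdir/rmdir pairs on top of the shared runner overhead, which is the "former is expensive" cost lifeless objects to.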
[08:56] <vila> http://instantrimshot.com/
[08:56] <vila> one more regression caught early by the test farm
[08:57] <vila> FAIL: bzrlib.tests.blackbox.test_filesystem_cicp.TestMisc.test_status
[08:57] <poolie> nice one
[08:57] <vila> AssertionError: not equal:
[08:57] <vila> a = 'added:\n  CamelCaseParent/\n  CamelCaseParent/CamelCase\n  lowercaseparent/\n  lowercaseparent/lowercase\n'
[08:57] <vila> b = 'added:\n  CamelCaseParent/CamelCase\n  lowercaseparent/lowercase\n'
[08:57] <poolie> i'm kind of tired and i want to stop, but i want to make 2.0b1 today
[08:57] <vila> lifeless: I bet it's related to one of your latest changes but you can't run on case insensitive fs ?
[08:57] <poolie> vila, do you think you could make a source tarball when pqm finishes grinding?
[08:58] <poolie> oh also i was going to mention bug 409717
[08:58] <vila> poolie: certainly
[08:58] <poolie> and maybe ping james_w when you do?
[08:59] <vila> poolie: it seems like lifeless will disagree/is working on 409717
[08:59] <poolie> lifeless: bug 403360 is the report that the tests didn't clean up
[08:59] <poolie> i haven't looked into it
[08:59] <poolie> i haven't done anything about 409717
[08:59] <vila> poolie: i.e. make a tarball from the 2.0 branch, upload it to lp and tell james_w about it ?
[09:00] <vila> poolie: and let you do the announce email
[09:00] <poolie> i wouldn't say we should certainly remove ExternalBase, but there is some misalignment
[09:00] <poolie> vila, well, follow the process in the 'releasing' doc
[09:00] <poolie> including bumping the version number etc
[09:00] <poolie> and then send mail to bazaar@
[09:00] <poolie> it doesn't take that long but it does block on waiting for pqm to finish these two branches
[09:01] <poolie> assuming they all pass
[09:02] <poolie> vila re ExternalBase, I think that just being able to run external commands doesn't need to be in a subclass
[09:02] <poolie> saying "I need a temp directory" possibly does
[09:03] <vila> I agree that cleanup is strongly needed
[09:03] <lifeless> vila: I don't have a case insensitive machine handy; would a loopback vfat fs on linux work to test that ?
[09:04] <garyvdm> Hi vila, lifeless
[09:04] <vila> lifeless: in principle, but I'm pretty sure we use sys.platform to infer case sensisititotovity (sp? :)
[09:04] <poolie> lifeless: wine would probably work :)
[09:04] <lifeless> poolie: good point
[09:04] <lifeless> vila: I'll see about playing around tomorrow in brain time to look at that
[09:04] <garyvdm> vila: is it possible to download builds that your buildbot creates?
[09:05] <lifeless> vila: unless you can track it down:)
[09:05] <vila> lifeless: and I wasn't throwing stones, rather I wanted you to say: "Oh yes, definitely, st output doesn't include some parents anymore" :-D
[09:05] <vila> lifeless: I'm tracking right away once Xchat stops ringing that is...
[09:07] <poolie> ok
[09:07] <poolie> that's enough for me then
[09:07] <vila> garyvdm: not yet, but that's planned, I have a web page opened describing the necessary plumbing in buildbot, I need to solve some minor naming issues and more importantly get some free time to do that
[09:07] <poolie> good night
[09:07] <lifeless> vila: oh no; uhm, not sure.
[09:07] <lifeless> vila: I thought you meant it was a testing cleanup
[09:07] <garyvdm> vila: ok - thanks
[09:08] <vila> lifeless: ok, no worries, I'll investigate
[09:08] <lifeless> if you want to binary search, start with the iter_changes patch
[09:08] <vila> My point was: the buildbot test farm caught a *single* test failing ! Now, *that's* defect localisation :-D
[09:09] <vila> garyvdm: to explain a bit more: you can't download from a slave, but a slave can upload to the master and from there we may find a way to allow you to download
[09:10] <poolie> vila, the two branches, in case they fail
[09:10] <poolie> are
[09:10] <poolie> http://bazaar.launchpad.net/~spiv/bzr/gc-batching-2.0
[09:10] <poolie>  http://bazaar.launchpad.net/~mbp/bzr/prepare-2.0
[09:10] <vila> poolie: ok
[09:19]  * igc dinner
[09:53] <vila> lifeless: ok, the test output was actual/expected instead of expected/actual so the question now is: "Did you make a change that makes status *also* display the parents when adding children ?"
[09:53] <vila> lifeless: again, no worries if you can't answer, just a general feeling
[09:54] <lifeless> yes
[09:54] <lifeless> if the parent is changed
[09:57] <Spabby> hi folks, my local tree appears to be borked, whenever I try and do anything  I get the error "bzr: ERROR: exceptions.AssertionError: name u'signapps_central_user.sqlite.THIS'  already in parent"
[09:57] <vila> lifeless: oh right, 4649 NEWS entry is pretty clear :) In that specific case, the operation is a first 'bzr add' so I presume creating the parent counts as modifying it :)
[10:12] <james_w> poolie, vila: I would also need to know compatible versions of plugins
[10:12] <james_w> I feel like I should just upload 1.18 today rather than running around like a headless chicken trying to beat the deadline for 2.0beta
[10:13] <vila> well, I think we use betas to *know* which plugins are compatible :)
[10:14] <james_w> true :-)
[10:15] <james_w> I won't make many friends if I upload something that breaks other things on the final day before FF
[10:15] <vila> Firefox ? :-D
[10:16] <vila> james_w: well, I'm not aware of any API changes at least, so I don't expect a lot of plugin breaks either
[10:17] <james_w> uploading is not the way to find out though
[10:17] <vila> and if *you* don't upload, nobody (or nearly nobody) can find out, right ?
[10:18] <lifeless> vila: it does
[10:18] <vila> lifeless: context ? 8-}
[10:18] <lifeless> 18:57 < vila> lifeless: oh right, 4649 NEWS entry is pretty clear :) In that specific case, the operation is a first 'bzr add' so I presume creating the parent counts as modifying it :)
[10:18] <vila> ha :)
[10:19] <vila> lifeless: test running
[10:21] <vila> lifeless: so that's a canonical example of the benefits of the test farm: nobody can run all tests in all configs (I know we can improve things, but yet, the principle remains), the test farm caught a failure (splendid one, single failure), the NEWS entry for the offending revision gives an explanation related to the failure
[10:22]  * vila takes picture, put on wall
[10:22] <lifeless> vila: I think build farms are wonderful
[10:22] <vila> lifeless: kudos to you for that by the way :)
[10:23] <vila> lifeless: Oh I know I'm preaching to the choir, I just wanted to share the pleasure :)
[10:23] <lifeless> :)
[10:28] <fullermd> Crazy people, those bzr folks.  They get excited when tests _fail_.
[10:30] <vila> ouch, traceback while trying to pull the 2.0 branch :-/
[10:31] <vila> what's the flag to disable apport ?
[10:34] <vila> hmm, weird, 'bzr pull' failed in a bound branch but 'unbind, pull, push, bind' worked :-/
[10:35] <lifeless> vila: read only transport?
[10:35] <lifeless> vila: iz bug with url normalisation
[10:36] <vila> no, toomanyconcurrentrequests, the branch is pretty new (yesterday) so url norm is ruled out I think
[10:37] <james_w> vila: ideally someone (possibly me) would *try* the plugins and make a best effort to check what was compatible, but that's what I mean by running around on the last day
[10:37] <james_w> now, lots of us run bzr.dev + plugins, which is a good start
[10:37] <vila> hmm, interesting, I have another one with almost the same setup, the difference being one has bazaar-vcs.org as parent while the other has lp:bzr/2.0 as parent,
[10:38] <vila> so it may be that :parent and :bound being on the same server triggers the bug, no time to check yet though
[10:38] <vila> james_w: ha ok, yes, I have a patch pending review that will allow me to include whatever set of plugins we decide in the test farm....
[10:39] <james_w> nice
[10:39] <vila> so far, it's running with --no-plugins (ashes on my head, but let the one who never sinned throw me the first stone :-)
[10:42] <james_w> I'll get 1.18 in first, and if there is a beta tarball today then we can see if there is time to squeeze it in as well
[10:45]  * lifeless hands vila a stone to throw at himself
[10:46] <vila> lifeless: nice try :)
[11:15] <bialix> lifeless: still here?
[11:15] <lifeless> yes
[11:15] <bialix> do you think it's possible (for me) to cherrypick your shelve fix back to 1.18?
[11:15] <lifeless> yes
[11:15] <bialix> I'll try
[11:16] <lifeless> and the one for push ../newbranch
[11:16] <lifeless> both are easy
[11:16]  * bialix not ready for --2a as default
[11:16] <bialix> ok, I'll try to cherrypick both
[11:16] <bialix> (dreaming: will be nice to have them as 1.18.1)
[11:17] <lifeless> bialix: go for it
[11:17] <lifeless> I totally support you doing that :)
[11:17] <bialix> 1.18.1?
[11:17] <lifeless> yes
[11:17] <bialix> ok
[11:17] <bialix> I'll publish it shortly
[11:18] <lifeless> if you have them in a branch of your own etc and it works
[11:18] <lifeless> we can easily merge it to the 1.18 branch on launchpad - you should have access to do that yourself via pqm
[11:18] <bialix> yes, this is my plan
[11:18] <lifeless> worst case it's a windows only release; but I think it's important for windows users to have these fixes.
[11:19] <bialix> yes
[11:19] <lifeless> which is why I did them now that the problem was accessible to me :)
[11:20] <bialix> you're using vila's farm or use windows machine?
[11:21] <bialix> sorry
[11:21] <bialix> lifeless: are you using kerguelen/vila buildbots or working on fixing this on native windows machine?
[11:23] <vila> bialix: I think the devil is using wine :)
[11:24] <bialix> lifeless: I should cherrypick from bzr.dev revno.4635 and 4650 only?
[11:25]  * bialix hopes so
[11:26] <bialix> 4635 applied cleanly
[11:26] <bialix> 4650 has a conflict (in NEWS -- surprise, eh?)
[11:31]  * bialix found bug in replay :-/
[11:32] <bialix> apparently I was wrong. There is a lot of conflicts while cherrypicking 4635
[11:37] <bialix> lifeless: I'm unable to cherrypick 4635. Is it possible part of these fixes was already merged to the 1.18 branch?
[11:38] <bialix> the same for 4650
[11:38] <bialix> it seems my bzr is totally broken
[11:40] <bialix> some plugin... with --no-plugins I've got sane results
[11:41] <spiv> poolie, vila: pqm tells me gc-batching-2.0 landed on 2.0 ok :)
[11:43] <vila> spiv: yup, marked fixed released too :)
[11:46] <bialix> oh, there are several non-trivial conflicts in tests, so I'm not sure how to cherrypick this cleanly
[12:38] <lifeless> bialix: I don't know if 1.18 had the thisFailsStrict.. stuff
[12:39] <lifeless> bialix: if it *doesn't* you can probably discard most of the test changes
[12:39] <lifeless> bialix: or mail me/the list the conflict and I'll help you out tomorrow
[12:39] <lifeless> bialix: vila: Neither; jam landed a patch to allow emulation of windows locks in the test suite.
[12:40] <vila> wow, I didn't see that 8-/
[12:40] <lifeless> which meant that a) I knew precisely which tests were failing, and b) I could use my standard productive environment to debug and fix
[12:40] <vila> or may be I did but didn't realize the implications ...
[12:40] <lifeless> so I was able to slide it in.
[12:40] <lifeless> in brain-food time
[12:41] <vila> brain-food ? Means short time here I presume but what is the general sense ?
[12:41] <lifeless> well, food is something you need and consume
[12:41] <lifeless> and for the brain
[12:42] <lifeless> just bits and pieces, like first thing in the morning to warm up and get into the zone
[12:44] <vila> I see, strangely nothing like that exists in french
[12:45] <lifeless> its not a standard english idiom
[12:45] <lifeless> its a robertism
[12:45] <vila> :)
[13:03] <beuno> james_w, hi
[13:03] <james_w> hi beuno
[13:03] <beuno> did you manage to work out all the bzr-gtk problems?
[13:04] <james_w> bzr-gtk problems?
[13:04] <vila> beuno: bzr-gtk problems ?
[13:04] <beuno> james_w, weren't you having problems with bzr-gtk to upload all the latest bzr stuff to karmic?
[13:05]  * beuno is secretly interested in loggerhead
[13:05] <james_w> beuno: bzr-svn
[13:05] <james_w> I thought you might be
[13:05] <james_w> I'm working on it right now
[13:05] <james_w> I used bzr-svn 0.6.4
[13:05]  * vila relaxes :)
[13:05] <beuno> right, that other jelmer thing  :)
[13:06] <james_w> there was a report of it working, and any changes in trunk post that release didn't look to be compatibility fixes
[13:06] <beuno> james_w, so we're good for feature freeze?
[13:06] <james_w> should be
[13:06] <beuno> awesome
[13:07] <james_w> if people stop asking me things :-p
[13:08] <vila> james_w: What do you want me to stop asking you today ?
[13:08] <beuno> :)
[13:08] <james_w> beuno: was the loggerhead release done from a release branch or trunk?
[13:08] <vila> james_w: by the way, 2.0 tarball available
[13:08] <james_w> vila: great, thanks, I'll add it to the back of the queue
[13:09] <vila> james_w: sure, it's not as if I asked anything right ?
[13:09] <vila> :-D
[13:09] <james_w> heh
[13:09] <vila> courtesy url https://edge.launchpad.net/bzr/2.0/2.0rc1
[13:09] <beuno> james_w, I pushed a release branch
[13:09] <beuno> and tagged trunk
[13:09] <beuno> https://code.edge.launchpad.net/~loggerhead-team/loggerhead/1.17
[13:10] <james_w> nice
[13:10] <lifeless> james_w: we still have patches we consider critical for 2.0
[13:10] <lifeless> just FYI
[13:10] <james_w> sure
[13:10] <james_w> but I was asked to get a beta in to karmic
[13:11] <lifeless> yup yup
[13:11] <bialix> vila: ping
[13:12] <lifeless> just wearing my cautious hat; I doubt bzr-* that are version locked will play nice, for instance.
[13:12] <lifeless> james_w: but while we're speaking of packaging, bzr-loom has some fixes in trunk not packaged yet, I think.
[13:13] <vila> lifeless: I thought about that too, but generally they check the API and we didn't change that
[13:13] <bialix> vila: if/when I cherrypick lifeless patches back to 1.18, may I ask you to run tests on win32 buildbot for me?
[13:13] <james_w> "fixes" -> "talk to me next week"
[13:13] <james_w> or even tomorrow
[13:14] <lifeless> james_w: sure; its packaged as snapshots I think, FWIW
[13:14] <vila> bialix: no, I suspended tests on kerguelen because 1) they didn't finish so we still don't know how many failures we have, 2) they run under cygwin which isn't our main target, 3) they were a nuisance for jam to build the windows installer
[13:15] <vila> bialix: that being said, I still plan to install a local windows for my own needs and run the tests there until we migrate from kerguelen
[13:15] <bialix> may I know more details about 1, 2 and 3?
[13:15] <bialix> 2 is fixable I guess?
[13:15] <vila> 1) blows up on CantCreateThread
[13:15] <vila> 2) Needs a local setup to understand the issues
[13:15] <bialix> 3) does jam build installers every night?
[13:16] <vila> 2) is an issue related to buildbot
[13:16] <bialix> buildbot won't work on native windows?
[13:16] <vila> We *still* build the installers, we just don't run the tests :-/
[13:17] <bialix> lifeless: I'm planning to cherrypick your patches without tests changes where I see conflicts
[13:17] <vila> it works but the way the *service* is installed doesn't fit...
[13:17] <bialix> :-/ :-/ :-\
[13:17] <bialix> no means no
[13:18] <vila> bialix: ?
[13:20] <bialix> [15:14]	<vila>	bialix: no, I suspended tests on kerguelen...
[13:20] <bialix> no means no
[13:20] <vila> haa, yes :)
[13:20] <bialix> I can live without tests
[13:22] <vila> well, I didn't like doing that, but 1) kerguelen has limited memory and the tests were eating more than half of it, 2) 9 hours everyday to confirm that we still don't know how many tests fail was a bit too high a price :-/
[13:22] <bialix> yep
[13:23] <bialix> 9 hours is a bit too much
[13:26] <alex-weej> hi
[13:26] <alex-weej> i really like the way i can use "git checkout feature-x" to change my working directory to a new branch and continue testing my web app
[13:27] <alex-weej> is there any way i can do the same with bzr?
[13:27] <alex-weej> or will i have to manage some symlinks with my own scripts?
[13:28] <bialix> no, you only need a shared repo + lightweight checkout in bzr and then bzr switch between branches
[13:28] <bialix> I guess there even was a mini-tutorial on such a setup
[13:29] <alex-weej> there was nothing on the git user guide
[13:29] <luks> I guess they don't mention much about bzr there :)
[13:29] <alex-weej> btw is a shared repo one that is created by "bzr init-repo"?
[13:29] <alex-weej> luks: i meant http://bazaar-vcs.org/BzrForGITUsers
[13:29] <alex-weej> :P
[13:30] <bialix> alex-weej: this page is out of date (last changed 2007/09/28)
[13:30] <alex-weej> aw
[13:30] <luks> http://doc.bazaar-vcs.org/latest/en/user-guide/index.html#switching-a-lightweight-checkout
[13:31] <alex-weej> thank you
[13:32] <alex-weej> luks: so is this a good workflow in your opinion? i don't mind changing my workflow with bzr if there are better ways
[13:32] <luks> I use it for every larger project in bzr
[13:32] <luks> sometimes I have more than one working tree
[13:32] <luks> but almost never as many as branches
[13:36] <Ddorda> when i try to pull a project i get an error message. what could be the reason? http://ddorda.pastebin.com/d7d9fbc38
[13:38] <luks> that your SSH key doesn't match the key on launchpad
[13:38] <james_w> beuno: loggerhead uploaded
[13:38] <beuno> james_w, THANK YOU
[13:38] <james_w> no, thank you
[13:38] <Ddorda> luks: how can i fix that?
[13:40] <bialix> Ddorda: go to your personal page on LP and click on the edit pen around your SSH keys; add your current public key there
[13:40] <luks> another possibility is that your username is wrong
[13:41] <luks> so try bzr launchpad-login first
[13:41] <vila> poolie: I think I'm done with releasing 2.0rc1, please check :-)
[13:41] <luks> wow, 2.0rc1 already and 1.18 is not even released?
[13:42] <bialix> 1.18 was secret release
[13:42] <bialix> https://launchpad.net/bzr/+download
[13:43] <luks> heh
[13:43] <vila> luks, bialix : They share the same RM :)
[13:43] <bialix> they?
[13:43] <luks> so you are now at 0 days release cycle?
[13:43] <vila> 1.18 and 2.0
[13:43] <luks> that's super-fast development
[13:44] <vila> luks: 1.18 is about to be released, 2.0 is only at the rc stage
[13:44] <bialix> vila: but announce for 1.18rc1 was never made
[13:44] <vila> we need a 2.0beta into karmic
[13:44] <vila> 1.18rc1 was announced, I just copied the mail to announce 2.0rc1...
[13:45] <vila> 1.18 is yet to be announced
[13:45] <bialix> 1.18rc1 -- I never seen announce
[13:45] <luks> ah, couldn't you get 2.0 in without this "cheating"?
[13:45] <vila> that's because we've changed the process a bit so that installers can be ready when the announce is official
[13:45] <vila> luks: that's not cheating, that's the way it was planned
[13:46] <vila> karmic is not released either see ?
[13:46] <luks> vila: it just feels weird
[13:46] <vila> luks: do you use Ubuntu ?
[13:46] <luks> yes
[13:46] <vila> which version ?
[13:46] <luks> 8.04
[13:46] <bialix> luks: new evil plan is to release once every 6 months, so 1.17 was released in June, so 1.18 will be around new year :-D
[13:47] <luks> that's cool
[13:47] <vila> luks: yet, many people are running 9.04 and some are already running 9.10...
[13:47] <james_w> bzr-gtk uploaded
[13:47] <vila> james_w: yes by me, yesterday
[13:47] <james_w> no, by me today, to karmic
[13:47] <james_w> :-)
[13:48] <vila> james_w: yes by me, yesterday including karmic, sync ?
[13:48] <luks> vila: I know, but seeing two releases in the same day is pretty rare
[13:48] <james_w> I think that's all in order to get us a consistent stack for 1.18
[13:48] <vila> james_w: forget me, I was talking about ppas
[13:48] <james_w> ah
[13:49] <vila> james_w: hence my 'builddeb rocks' yesterday,
[13:49] <james_w> ah, cool :-)
[13:51] <vila> james_w: if you find I made a mistake, please, please, report, I added jaunty and karmic, stopped producing for gutsy and feisty without any real discussion, so any feedback will be warmly welcomed (no urgency though, you have enough to do :)
[13:51] <james_w> I think that's a good plan
[13:51] <james_w> they are EOL anyway
[13:51] <james_w> not worth the effort
[13:59] <vila> james_w: I'm unclear about deleting the remaining packages for edgy, feisty and gutsy in https://edge.launchpad.net/~bzr/+archive/ppa
[13:59] <vila> I can't imagine who will need them, but ignorance happens every day :)
[14:01] <james_w> vila: there's little cost to having them there, but then there is little benefit, so I don't really have an opinion
[14:02] <vila> james_w: ok, I slightly prefer having a clean plate so nobody wonders why that's the only package for these distros...
[14:03] <james_w> yeah
[14:18]  * vila enjoys the silence brought by DOWNgrading his graphics card to a fan-less one ...
[14:19] <vila> Now, only --parallel=fork starts the fans :-D
[14:21]  * pygi sets some fire near vila's house...
[14:26] <vila> pygi: hehe, I still have the laptop.... which just became the noisiest around here when the fan speed is above 4000rpm...
[14:27] <pygi> vila, ^_^
[15:30] <emmajane> beuno, ping?
[15:30] <beuno> emmajane, good morning
[15:30] <emmajane> beuno, morning! :)
[15:30] <emmajane> PM?
[15:33] <jam> vila: morning
[15:33] <jam> I just submitted: https://bugs.edge.launchpad.net/launchpad-code/+bug/419252
[15:33] <jam> I was wondering if you could confirm the email headers I'm seeing.
[15:33] <vila> morning jam !
[15:34] <vila> I replied via the web
[15:34] <vila> but my threading is perfect :)
[15:35] <jam> do you see different mail headers?
[15:35] <jam> subject lines
[15:46] <vila> jam: wow, rectification: *my* threading is broken too of course, see comment on bug
[15:47] <jam> ah, so it is probably reply-to-comment versus add a review sort of thing?
[15:47] <jam> vila: thanks for helping track it down
[15:47] <jam> I've seen it before, and it is quite annoying
[15:48] <vila> it's highly misleading indeed, I have no idea where the subject is coming from for *my* mail !
[15:50] <vila> jam: Oh, I see we make the same jokes in addition to making the same reviews :)
[15:56] <jam> vila: so why did we go with rc1 rather than b1 ?
[15:56] <vila> jam: my mistake ?
[15:58] <vila> jam: I followed releasing.txt and missed a part ? Missed a reference ?
[15:59] <vila> cycle.txt mentions 2.0rc1, rats, I used 2.0rc1 released instead of available :-/
[15:59] <vila> well, I still haven't been a true RM, have I ? :)
[16:00] <vila> but I think that 2.0beta1 will be *released* later anyway so I wasn't wrong on the naming, or was I ?
[16:00] <vila> jam: ^
[16:01] <jam> vila: it would be 2.0beta1-rc1 in that case
[16:02] <jam> but we are switching to the 'new' layout, I thought
[16:02] <jam> where we don't do rc's for beta releases
[16:02] <vila> then I goofed :-/
[16:03] <jam> vila: question about mainline parent ghosts and merge_sort
[16:03] <jam> if you have ghost => A => B
[16:03] <jam> should A be revno 1 ?
[16:04] <jam> (that is the best I could come up with...)
[16:04] <vila> strictly speaking you know it *can't* be one since there is a ghost...
[16:05] <vila> so it should be 2
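vila's answer can be illustrated with a toy left-hand-history walk, assuming (as he argues) that a ghost ancestor still occupies a revno slot, so its child cannot be revno 1; the parent-map encoding below is made up for the example and is not bzrlib's Graph API:

```python
# Parent map: revid -> list of parent revids.  'ghost' is referenced as a
# parent but has no entry of its own: that is what makes it a ghost.
parent_map = {
    'A': ['ghost'],
    'B': ['A'],
}

def mainline_revnos(tip, parent_map):
    """Number the left-hand history of `tip`.

    Walk first parents back from the tip.  A ghost terminates the walk
    but, on vila's reading, still reserves at least revno 1, so the
    ghost's child starts at 2.
    """
    history = []
    rev = tip
    while rev in parent_map:
        history.append(rev)
        parents = parent_map[rev]
        rev = parents[0] if parents else None
    # rev is None if we reached a genuine start of history; otherwise it
    # names a ghost we could not walk through.
    base = 0 if rev is None else 1
    history.reverse()
    return {r: base + i + 1 for i, r in enumerate(history)}

print(mainline_revnos('B', parent_map))  # -> {'A': 2, 'B': 3}
```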
[16:06] <vila> jam: I'm unclear about deleting the remaining bzr-gtk packages for edgy, feisty and gutsy in https://edge.launchpad.net/~bzr/+archive/ppa
[16:06] <vila> what do you think ?
[16:07] <jam> vila: I wouldn't delete packages
[16:08] <vila> jam: ok, sold, I didn't but was unsure, james_w agrees with you, 3/0, they stay, my only problem is that it's a bit unclean as we don't have corresponding bzr packages to go with
[16:09] <jam> vila: then obviously someone else doesn't agree with not deleting packages
[16:09] <jam> :)
[16:10] <vila> err, no we all agree to keep the packages
[16:10] <vila> or are you just joking ? :)
[16:11] <jam> well, I'm guessing that poolie deleted the old packages
[16:11] <jam> so he obviously would disagree (at least a little)
[16:11] <jam> so.. I wouldn't delete them *yet*
[16:11] <jam> but we should think about it
[16:11] <vila> ha !
[16:12] <vila> ok, get it :)
[16:12] <vila> Now that you mention it, I seem to recall poolie talking about deleting them, so we'll see if he read the highlights in the IRC logs :)
[16:15] <jam> vila: what is the branch in lp?
[16:15] <vila> lp:bzr/2.0 ?
[16:15] <jam> k
[16:15]  * jam wonders how we'll do it with on-going beta branches, etc.
[16:15] <jam> and the final stable branches
[16:16] <vila> I'd say betas will come out one at a time from lp:bzr/2.1 and lp:bzr/3.0
[16:17] <jam> vila: that isn't how we do it today... but perhaps
[16:17] <jam> I kind of like having a separate branch per official release
[16:17] <vila> we don't have a lp:bzr/1.18rc1 AFAIK
[16:17] <jam> but I guess we aren't planning on doing point releases from any of the betas
[16:17] <jam> vila: rc != beta
[16:18] <vila> jam: look at cycle.txt maybe ? It sounds like a single branch is enough for each series
[16:18] <vila> I'm talking about the scheme in the Schedule section
[16:32] <hno> A little question related to the 2.0rc1 release. Will there be a bzrtools release matching this shortly, or do the 1.18.0 bzrtools release work with 2.0rc1 as well?
[16:34] <hno> and related to this, is bzr 1.18 released or not? It's uploaded in launchpad, but the bazaar-vcs webpages and this channel topic seem to say it's not released..
[16:37] <jam> hno: we have a new policy of not making the official announcement until all plugins/installers/etc have been built
[16:39] <hno> jam: that's regarding 1.18 right?
[16:40] <jam> hno: yeah. I'm pretty sure it is 'all released' but last night not all the packages had been made
[16:41] <hno> jam: So it should be fine if I go ahead and get it packaged for Fedora I guess.
[16:41] <jam> vila: also, you seem to have sent the announcement to 'bzr-announce' which I thought we were waiting for packages to do
[16:41] <jam> hno: the tarball is the 'official' release, yes
[16:42] <jam> Consider it "gone gold" versus "at the publishers"
[16:42] <vila> releasing.txt :
[16:42] <vila>    For release candidates, this is sent to the ``bazaar-announce`` and
[16:42] <vila>    ``bazaar`` lists.  For final releases, it should also be cc'd to
[16:42] <vila>    ``info-gnu@gnu.org``, ``python-announce-list@python.org``,
[16:42] <vila>    ``bug-directory@gnu.org``.  In both cases, it is good to set
[16:42] <vila>    ``Reply-To: bazaar@lists.canonical.com``, so that people who reply to
[16:42] <vila>    the announcement don't spam other lists.
[16:42] <jam> hno: and bzrtools 1.18 will explicitly not work with 2.0. abentley has code in bzrtools that checks the version string and refuses to run if they don't match
[16:43] <jam> vila: 1) this wasn't supposed to be an rc
[16:43] <vila> rats, forgot the reply-to :-(
[16:43] <jam> 2) I think we are changing that policy
[16:43] <jam> at the minimum, though, we aren't sending "announce" until we have packages
[16:44] <igc> vila: new docs built, packaged and uploaded for 1.18 and 2.0rc1
[16:44] <igc> vila: the current bot is called igc til we get an RT request done :-(
[16:45] <vila> igc: bot ?
[16:45] <igc> well 'cron job'
[16:45] <vila> haaaa
[16:45] <hno> jam: So the 2.0rc1 isn't an rc?
[16:45] <igc> anyhow, time for bed
[16:46] <igc> night all
[16:46] <vila> sleep well igc, I'm almost EODing myself
[16:46] <jam> hno: so.... in *my* opinion, rc1 means we expect 0 changes before turning this code into -final
[16:46] <jam> and I don't think the current release is at that point
[16:46] <vila> when jam had finished listing all my mistakes in that 2.0rc1 release :)
[16:46] <jam> Maybe Martin decided otherwise overnight and told that to vila without telling me
[16:47] <vila> jam: nope, martin told me: I'm tired but I want that release to be out today, can you help
[16:47] <jam> vila: so... I can't package the windows installer until the plugins have updated to 2.0-X, which means we shouldn't 'announce' the full release until after everything has been brought up to date
[16:47] <jam> so we should make a bazaar@ posting that the code is frozen and available
[16:47] <vila> yes, we should release 1.18 first
[16:47] <jam> and get people to catch up, build installers, etc.
[16:47] <vila> I understand we needed 2.0 to enter karmic asap
[16:48] <vila> so let's forget about 2.0 until the *official* RM for both 1.18 and 2.0 clears my mistakes :)
[16:48] <hno> The window for Fedora-12 is also relatively short (weeks).
[16:48] <vila> hno: we're talking days or hours here, the RM is in AU TZ
[16:49]  * vila chuckles AU RM TZ, how about obscuring via acronyms...
[16:49] <hno> wasn't obscure to me.
[16:50] <vila> hno: I was thinking about an outsider trying to catch up the thread... poor guy :-/
[16:50] <jam> vila: and he is in EST (AEST) which doesn't help
[16:50] <jam> EST == New York time, *and* Australian Eastern (sydney) time.
[16:50] <fullermd> It's not a REAL timezone unless it has a 15 minute offset...
[16:50] <jam> fullermd: I'm pretty sure AU has one of those, too
[16:51] <jam> AIUI, they have 9 different timezones in the summer, and 5 in the winter
[16:51] <vila> jam: who is in EST ?
[16:51] <fullermd> Yah.  AU and somewhere in SE Asia I think.
[16:51] <jam> vila: The official abbreviation for the time zone in Sydney is "EST"
[16:51] <hno> SE is here... don't think we have an Asia region in SE.
[16:51] <jam> though for sanity purposes, Martin calls it AEST
[16:52] <jam> I think it might be Eastern Standard versus Eastern Seaboard or something silly like that.
[16:53] <jam> vila: Google just claims Sydney == UTC+10: http://www.google.com/search?hl=en&client=firefox-a&rls=org.mozilla%3Aen-US%3Aofficial&hs=lY0&q=time+zone+sydney&aq=f&oq=&aqi=g7
[16:53] <jam> however:
[16:53] <jam> http://www.timeanddate.com/worldclock/city.html?n=240 says "EST"
[16:53] <jam> vs
[16:53] <jam> http://www.timeanddate.com/library/abbreviations/timezones/na/est.html
[16:53] <jam> which also says EST and is New York (ish)
[16:54] <Tak> imo the only reasonable abbreviation for timezones is UTC+-N
[16:55] <hno> Tak: +/-NNNN in such case...
[16:55] <jam> Tak: given that there are 15min offset timezones...
[16:56] <Tak> I meant to indicate N as a number, rather than as a character ;-)
[16:56] <vila> UTC+4h15
[16:57] <jam> vila: UTC+4.25
[16:57] <vila> jam: tsk tsk, too easy
[16:57] <hno> jam: UTC+0425 usually...
[16:57] <jam> hno: 0415 actually
[16:57] <jam> it is base 60 not 100
[16:58] <hno> right.
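The base-60 point in the exchange above (UTC+4.25 decimal hours is written "+0415", not "+0425") can be sketched like this; the helper name and examples are illustrative, not from the log:

```python
def utc_offset(hours):
    """Format a decimal-hour UTC offset as the conventional +-HHMM string."""
    sign = '+' if hours >= 0 else '-'
    total_minutes = round(abs(hours) * 60)   # base 60: 0.25 h -> 15 min
    h, m = divmod(total_minutes, 60)
    return '%s%02d%02d' % (sign, h, m)

print(utc_offset(4.25))   # the UTC+4.25 example above -> '+0415'
print(utc_offset(10))     # Sydney (AEST) -> '+1000'
```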
[16:58] <hno> whoever came up with base 60 for time... most of us have 10 fingers..
[16:58] <jam> vila: UTC+4.4h ?
[16:58] <jam> 2*2*3*5 = 60
[16:59] <jam> it is the smallest number that is evenly divisible by 2,3,4,5,6
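jam's divisibility claim checks out; a quick sketch (the `lcm` helper is written here for illustration, since `math.lcm` only appeared much later):

```python
from math import gcd
from functools import reduce

def lcm(*nums):
    """Least common multiple of the given integers."""
    return reduce(lambda a, b: a * b // gcd(a, b), nums)

# 60 really is the smallest number evenly divisible by 2 through 6:
assert lcm(2, 3, 4, 5, 6) == 60
assert all(60 % d == 0 for d in (2, 3, 4, 5, 6))
# and base 12 (jam's preference) already covers 2, 3, 4 and 6:
assert lcm(2, 3, 4, 6) == 12
print("checks pass")
```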
[16:59] <lfaraone> Hey, a git import in launchpad failed with this traceback: http://launchpadlibrarian.net/30852208/rainbow-olpc-mstone-trunk-log.txt
[16:59] <lfaraone> Is this a bzr bug?
[16:59] <jam> I personally prefer base 12, but its never caught on
[17:00] <jam> lfaraone: most likely a bug in bzr-git, but I don't really know for sure
[17:00] <lfaraone> jam: mkay.
[17:00] <vila> I recently read a funny explanation for base 60: you count 12 via 3 points on each finger (can't find the English word for French 'phalange': phalanx) and then 5 fingers on the other hand
[17:01] <hno> bzr 1.18 will hit Fedora-11 testing tomorrow I hope.. still building.
[17:01] <jam> vila: interesting.
[17:01] <vila> of course bases 12 and 60 are far more useful than 10 because you can divide by 2/3/4/5/6 (well not 5 for 12 but you get the idea)
[17:01] <jam> I've usually counted to 10 on one hand
[17:01] <jam> using the thumb as a pointer for >5
[17:01] <jam> which makes it pretty easy to count to 100 on two hands
[17:02] <jam> I've never really successfully counted in binary to get to 1024
[17:02] <jam> fingers just don't bend that way easily :)
[17:02] <jam> vila: yeah, it is a shame we weren't born with 12 fingers :)
[17:02] <jam> It actually was the one thing that is somewhat nice about the US inches to feet.
[17:03] <jam> It is very easy to cut 1 foot into 3rds
[17:03] <jam> Which I've done far more often than cutting into 5ths
[17:03] <jam> 1/2 and 1/5ths is all metric is really good at, without getting repeating decimals :)
[17:04] <vila> there were other articles about the romans counting up to hundreds of thousands with various finger gestures
[17:05] <vila> and some others about base 20 being popular using fingers and toes...
[17:05] <jam> vila: I can count in the hundreds of thousands. just not with 1 digit precision :)
[17:06] <vila> jam: they had a way to express such numbers with all digits in several gestures, but they didn't use positional notation so they had special gestures for higher numbers
[17:07] <vila> what surprised me was that I thought they couldn't go higher than M, where in fact, even in the written form, they could
[17:08] <vila> oh, and another funny one was about the chinese having special ideograms for bank accounts to fight counterfeiting
[17:08] <jam> vila: https://code.edge.launchpad.net/~jameinel/bzr/2.0rc2-419241-merge-sort/+merge/10755 if you get a chance
[17:09] <vila> not today, I'm crashing :-)
[17:09] <jam> vila: hmm.. I wonder about finger positions for roman numerals
[17:09] <jam> vila: critical segfault in the  2.0 release
[17:09] <jam> 'bzr log lp:~jameinel/bzr/2.0b1-stable-groupcompress-order -n0 -r -20..-1' segfaults
[17:09] <vila> you ran 'make' ?
[17:10] <jam> vila: I wrote the code
[17:10] <jam> I know the segfault
[17:10] <jam> fixed in attached
[17:10] <jam> though there is still a critical regression, at least it doesn't segfault
[17:13] <vila> jam: couldn't you be more daggy than 2.0 ?
[17:13] <hno> Ok. I won't package 2.0rc1 today
[17:14] <jam> vila: for what purpose? The code is only in 2.0 or trunk
[17:14] <jam> wasn't in 1.18
[17:14] <vila> ok, wasn't sure
[17:15] <vila> jam: the only worrying line is the deleted parents = self._original_graph[node_name]
[17:15] <vila> jam: reviewing online I miss the context
[17:16] <jam> vila: parents is passed into the function
[17:16] <jam> it was a bit bogus to re-read it from the dict
[17:16] <vila> ok
[17:16] <jam> I've been meaning to clean that up for a while
[17:16] <vila> approved then
[17:17] <jam> argh...
[17:17] <jam> robert's fix for "bzr selftest -s bt" isn't in 2.0
[17:18] <vila> I'm sure we can backport that one :)
[17:20] <jam> I just sent in the MP so that we don't forget about it
[17:20] <vila> james_w: I love that feature :)
[17:21] <vila> err, sorry james_w that was for jam :-/
[17:21] <vila> stupid xchat
[17:32] <vila> jam: hehe, not all things went wrong finally, my message to @announce is waiting approval :)
[17:32] <vila> jam: was waiting, cancelled now :)
[17:33]  * vila really EODing
[17:35] <jam> vila: ok. I'll have another patch for the rest of the regression, but that can be handled by Martin et al
[17:46] <mthaddon> anyone able to tell me how do to figure out how many packs and indices a branch has (per https://bugs.edge.launchpad.net/launchpad-code/+bug/416990/comments/3)?
[17:47] <mthaddon> nm
[17:48] <cellofellow> I was just wondering if there was a way to put the bzr revision number into the code, in a comment particularly.
[17:48] <jam> mthaddon: ll .bzr/repository/{packs,indices}
[17:48] <jam> should be even able to do that over http
[17:48] <mthaddon> thx jam
[17:49] <jam> mthaddon: I suppose the most accurate is to check .bzr/repository/pack-names in case some packs aren't referenced
[17:49] <jam> but that depends on the branch format
[17:49] <jam> (older formats it's just 'cat', newer ones 'bzr dump-btree --raw .../.bzr/repository/pack-names')
[17:50] <jam> well repo format i guess :)
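jam's directory-listing approach can be sketched against a mocked-up layout; the file names below are invented purely for illustration (a real branch has actual pack and index files, and on newer formats you would read `.bzr/repository/pack-names` with `bzr dump-btree --raw` as noted above):

```shell
# Mock the layout of a bzr pack repository (names invented):
mkdir -p repo/.bzr/repository/packs repo/.bzr/repository/indices
touch repo/.bzr/repository/packs/a.pack repo/.bzr/repository/packs/b.pack

# Count the packs the same way you would on a real branch:
echo "packs: $(ls repo/.bzr/repository/packs | wc -l)"
```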
[18:42] <beuno> igc, spelling mistakes have not been fixed because I'm using up the designers time to do design changes
[18:42] <beuno> emmajane can pick it up after we finish with the graphical part of it, and tweak
[18:42] <emmajane> beuno, I just got back in from lunch and am reading backwards in time. :)
[18:43] <emmajane> igc, yeah. No worries about the spelling mistakes. That's easily fixed.
[18:44] <emmajane> beuno, I think the shuffling that is being suggested would be remedied if the banner had more breathing room and there was a more obvious "in" for each of the top four sections.
[18:44] <emmajane> beuno, In my response I think I'd recommended that the list of links be reduced to a single button or graphical entry point to those four topics?
[18:44] <beuno> abentley, a single button for all the links?
[18:45]  * emmajane blinks.
[18:45] <emmajane> that was for me?
[18:45] <abentley> beuno: I throw my blind support behind the Canadian.
[18:45] <emmajane> LOL
[18:46] <beuno> argh
[18:46] <beuno> :)
[18:46] <emmajane> abentley, thanks :)
[18:46] <beuno> damn
[18:46] <beuno> emmajane, yes
[18:46] <beuno> tab FAIL
[18:46] <emmajane> abentley, I owe you beer :)
[18:47] <abentley> emmajane: Sure, we can hang out with William next time you're in the T dot.
[18:47] <emmajane> abentley, that'd be most excellent!
[18:47] <beuno> mwhudson, btw, while you where away, I did an emergency release of Loggerhead
[18:47] <beuno> to get it into karmic
[18:47] <emmajane> beuno, yeah, a single button that leads to essentially a second "home" page for each of those topics.
[18:47] <beuno> emmajane, wireframe?
[18:48] <emmajane> beuno, there's *so*much* right now on the front that it doesn't direct the eye to click *somewhere*
[18:48] <beuno> it's the fastest way to communicate  :)
[18:48] <emmajane> ah
[18:48] <emmajane> ok.
[18:48] <beuno> emmajane, remember when I said that it was too much information?
[18:48]  * beuno is a sucker for "I told you so"'s
[18:48] <beuno> :)
[18:48] <emmajane> beuno, but I said the list of links could be dropped to a single icon into a page BEFORE you started with the designer.
[18:48] <emmajane> beuno, then I said it as feedback.
[18:48] <beuno> ah
[18:48] <beuno> ok
[18:49] <beuno> I take it back then
[18:49] <emmajane> beuno, you're just ignoring me so that you can be right. ;)
[18:49] <emmajane> Can I save stuff on the web version of http://www.balsamiq.com/?
[18:49] <beuno> my brain tends to do that
[18:49] <emmajane> hehehe
[18:50] <beuno> emmajane, I don't know. But if you can export the bmml file, I have a full version
[18:50]  * emmajane gives it a try.
[19:03] <emmajane> beuno, http://pastebin.ubuntu.com/259976/ see if that works?
[19:03] <emmajane> it's just "above the fold"
[19:04]  * beuno tries
[19:04] <emmajane> (if I'd had more time the boxes would have been the same size)
[19:04] <emmajane> I was just trying to mash out something to see though :)
[19:13] <emmajane> beuno, not sure if that worked?
[19:18] <jjack86> are there any known issues with using the bzr server across linux / windows?  I'm having a heck of a time getting going -- I can "bzr checkout bzr://yadda-yadda" just fine from linux to linux, but my windows machines seem to just hang with the same command
[19:20] <emmajane> jjack86, any chance the windows machine has a port closed or some other weird windows-specific restriction?
[19:21] <beuno> emmajane, it did
[19:21] <beuno> thanks
[19:21] <beuno> I'll get another revision with this change
[19:21] <jjack86> i turned off my firewall to no avail, don't know where else to look (windows 7)
[19:21] <emmajane> beuno, do you want me to follow up to igc's email as well?
[19:21] <beuno> emmajane, please
[19:22] <emmajane> beuno, will do right now.
[19:22] <emmajane> jjack86, hrm. I'm not entirely sure. I haven't used Windows in a while.
[19:22] <LarstiQ> jjack86: does networking to the server work at all?
[19:23] <jjack86> I can ssh with putty to the bzr server just fine
[19:23] <jjack86> and bzr checkout doesn't work from a windows xp machine either, fwiw
[19:25] <Tak> bzr co WorksForMe on windows (bzr 1.17)
[19:26] <emmajane> jjack86, you said it just hangs?
[19:26] <Tak> this is with lp, sftp, bzr+ssh, and bzr-svn
[19:26] <emmajane> jjack86, is there anything on the server side that shows an incoming request?
[19:26] <LarstiQ> jjack86: does the bzr.log provide any helpful info?
[19:27] <jjack86> yeah, it hangs, and when I break off the connection, the server throws some 'error connection closed' messages -- can you tell me where the log file is located by any chance?
[19:27] <LarstiQ> jjack86: bzr --version should mention that
[19:28] <jjack86> ty
[19:28] <LarstiQ> jjack86: both server and client have a log, but I meant the windows client in this case (doesn't hurt to look at the other one too)
[19:44] <emmajane> beuno, sent! hopefully that makes more sense... :)
[19:54] <awilcox> Hello.  I was wondering if anyone has ever successfully versioned /etc in a Bazaar repo.  I have tried but I'm still relatively new to Bazaar and I haven't been successful.  I've also googled around and found nothing.  I would appreciate pointers?
[19:55] <emmajane> awilcox, do you know about etckeeper?
[19:55] <emmajane> awilcox, It does exactly that. :)
[19:55] <emmajane> awilcox, http://joey.kitenet.net/code/etckeeper/
[19:57] <awilcox> emmajane: I have read about etckeeper but I would like to keep *all* my servers' (I have about 15) /etc's in one central repository.  (In fact I have made an (approved) repo called "servconf" on the central code server.)  It didn't look like etckeeper did that; it seemed to make /etc a repo itself.
[19:59] <emmajane> once you have one server using etckeeper you've got a bazaar branch that you can share with other machines...
[19:59] <awilcox> I'm also using FreeBSD while this looks very linux-specific.
[19:59] <Tak> one place I worked had all the /etc on an nfs mount
[19:59] <emmajane> awilcox, Bazaar is "just" python. anything that can run python can run Bazaar.
[19:59] <awilcox> emmajane: I meant etckeeper.
[20:00] <emmajane> awilcox, it will version any folder you tell it to.
[20:00] <awilcox> My code repo is on FreeBSD :)  I also do my development on OS X and Windows in addition to FBSD
[20:00] <emmajane> awilcox, it'll even do /usr/home if that's what you want
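One way to arrange what awilcox describes, a central "servconf" repository with a branch per host, is sketched below; every URL and path is hypothetical, and the bzr commands are echoed rather than run since they assume a configured server:

```shell
# All names here are invented; adjust for your own central code server.
host=$(hostname)
echo "cd /etc && bzr init && bzr add && bzr commit -m 'initial import'"
echo "bzr push bzr+ssh://codeserver/servconf/$host"
```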
[20:39] <lifeless> moin
[20:53] <mwhudson> beuno: i saw that, thanks a lot!
[21:16] <mtaylor> lifeless: you wake up entirely too early in the morning
[21:18] <lifeless> I do I do I do I do...
[21:20] <lifeless> jam: you'll have to explain to me why you say commit doesn't have tests about the file graph
[21:20] <jam> lifeless: it obviously didn't have a test under the particular circumstances...
[21:21] <lifeless> jam: it did the right thing given the data the api gave it
[21:22] <jam> lifeless: record_iter_changes behaved reasonably wrt path_content_summary
[21:22] <jam> commit was still wrong
[21:22] <lifeless> record_iter_changes doesn't use path_content_summary
[21:22] <jam> record_entry_contents
[21:22] <jam> sorry
[21:22] <jam> lifeless: so can you point me to the test that asserts only X texts have been added?
[21:22] <lifeless> I must be missing something here
[21:22] <jam> I'm happy to verify
[21:23] <lifeless> per_repository.test_commit_builder - grep for 'unchanged'
[21:23] <lifeless> those tests assert that the inventory entry gets the original last-modifying revision id
[21:23] <lifeless> in non merged, merge-collapsing etc corner cases
[21:24] <lifeless> they build on the assumption that the tree api isn't damaged
[21:24] <lifeless> I'm not sure that passing in a damaged path_content_summary tuple could be expected to do anything other than record_entry_contents did
[21:25] <jam> lifeless: so... perhaps I'm wrong, but _commit_check_unchanged looks to be doing a no-op commit and asserting that nothing has changed
[21:25] <jam> it doesn't seem to be an edge case
[21:25] <lifeless> test_last_modified_revision_after_converged_merge_dir_unchanged
[21:25] <lifeless> etc
[21:25] <jam> rev1 = tree.commit(); rev2 = mini_commit(), rev1.inventory[file_id] = rev2.inventory.file_id
[21:26] <lifeless> test_last_modified_revision_after_converged_merge_file_unchanged
[21:26] <lifeless> yes, no op commit is one of the corner cases
[21:26] <lifeless> remember that record_entry_contents hits every entry, so its entirely possible to have it spuriously record the entire tree
[21:26] <lifeless> if not tested
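The invariant those tests pin down can be sketched with a toy model (plain dicts, not the bzrlib API; every name here is invented):

```python
# Toy model of the invariant under discussion: a no-op commit must
# reuse the existing last-modified revision id for an unchanged file,
# not add a new node to the per-file graph.

def commit_file(file_graph, last_modified, new_rev, changed):
    """Record one file in a commit; only a changed file gets a new node."""
    if changed:
        file_graph[new_rev] = (last_modified,)  # new node, parent = old tip
        return new_rev
    return last_modified  # unchanged: entry keeps pointing at the old node

graph = {'rev1': ()}
tip = commit_file(graph, 'rev1', 'rev2', changed=False)

assert tip == 'rev1'        # inventory entry keeps the original revision id
assert 'rev2' not in graph  # no spurious graph node for a no-op commit
```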
[21:29] <Wurblzap> Hey there... I don't seem to get Bazaar's --fixes functionality to work. I added a bugtracker_xyz_url line to branch.conf, but I keep getting my commits with "--fixes=xyz:123" refused because of "Unrecognized bug". Do you have a pointer for me?
[21:30] <jam> lifeless: I'm trying to figure out if this line is a bug:
[21:30] <jam>         def _check_graph(in_tree, changed_in_tree):
[21:30] <jam>             rev3 = mini_commit(in_tree, name, 'new_' + name, False,
[21:30] <jam>                 delta_against_basis=changed_in_tree)
[21:30] <jam>             tree3, = self._get_revtrees(in_tree, [rev2])
[21:30] <jam> shouldn't that be tree3 = ...[rev3] +
[21:30] <jam> ?
[21:31] <lifeless> let me look
[21:31] <jam> also, the way you do "expected_graph" isn't quite right
[21:31] <lifeless> Wurblzap: hi, uhm.
[21:31] <jam> you pass it the tip you want it to expect
[21:32] <jam> and then make sure that things are correct
[21:32] <jam> however, if you were to add (file_id, rev3) to the graph
[21:32] <jam> we wouldn't know
[21:32] <jam> because it wouldn't be an ancestor of (file_id, rev2)
[21:32] <lifeless> Wurblzap: try putting it in bazaar.conf
[21:32]  * Wurblzap tries
[21:32] <lifeless> Wurblzap: it may be a bug in how we find the config
[21:33] <jam> (I'm not saying that the code would have been able to catch the issue. I *am* saying that it doesn't seem to actually be asserting that we didn't create a new node.)
[21:33] <lifeless> Wurblzap: did you put in the {id} bit?
[21:33] <Wurblzap> Yup
[21:33] <Wurblzap> Don't start bug hunting just yet, I'm using 1.11rc1
[21:34] <lifeless> oh ok.
[21:34] <jam> Wurblzap: I assume you got this from "bzr help bugs" ?
[21:34] <Wurblzap> No, from doc/en/user-reference/bzr_man.html
[21:34] <jam> Wurblzap: k. I think that is taken from the other anyway
[21:35] <lifeless> jam: so, these tests got quite archaic, its entirely possible there are bugs. They have caught bad graph output from commit many times for me though.
[21:35] <lifeless> jam: re: adding file_id, rev3 - we don't care if thats added do we? as long as its not referenced by the entry it won't get copied...
[21:36] <jam> lifeless: seems like a good way to introduce bugs
[21:36] <jam> like the one where we reference a text that isn't referenced by the original inventory
[21:36] <Wurblzap> Yeah, lifeless, it works if it's in bazaar.conf
[21:36] <lifeless> jam: how do you mean?
[21:37] <jam> lifeless: having a text key in the repository that is pointed to by the index doesn't seem like a good thing
[21:37] <lifeless> Wurblzap: you might like to try bzr 1.18, and if that doesn't work in branch.conf file a bug.
[21:37] <jam> it doesn't seem critical if it isn't referenced
[21:37] <jam> but it may not be referenced in X, but *is* referenced by Y
[21:37] <jam> and we just didn't think to check both
[21:37] <Wurblzap> All right. Thanks, lifeless.
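For reference, the configuration under discussion looks roughly like this (documented under `bzr help bugs`; the tracker URL is invented, and per the exchange above bzr 1.11 only seems to pick it up from bazaar.conf, not branch.conf):

```ini
[DEFAULT]
# Hypothetical tracker; {id} is replaced with the bug number at commit time.
bugtracker_xyz_url = http://bugs.example.com/show_bug.cgi?id={id}
```

With that in place, `bzr commit -m "fix the thing" --fixes xyz:123` should be accepted.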
[21:37] <jam> (say in the dirstate cache, but not in the committed revision inventory)
[21:37] <jam> so I sort of do care that it isn't written to the repo
[21:38] <jam> as then I'm more sure that it won't be referenced anywhere without that location blowing up
[21:38] <lifeless> jam: so, I think that if we care if its written we have other bugs - design ones even
[21:38] <lifeless> All we should care about is the reference that is made by the commit code - other people only get references from commit
[21:41] <jam> lifeless: I would care if every commit generated a new text for every file in my tree that was just never referenced
[21:41] <lifeless> jam: If the disk store that that environment had was a git layout, you wouldn't care
[21:41] <lifeless> because those references would be to the hashed unchanging versions
[21:41] <jam> lifeless: if it wrote new blobs I would
[21:41] <jam> and it still bloats the index severely
[21:41] <jam> which is the whole problem that caused Frits to even really notice
[21:42] <jam> 700k entries in an index for 20k changes isn't a good thing
[21:42] <lifeless> I may be unclear on something
[21:42] <lifeless> were his inventory entries bogus?
[21:42] <mthaddon> jam: can you talk me through the process of debugging the command run against one of these lp-serve processes that you describe in the bug?
[21:42] <jam> lifeless: I'm saying that bloating the indexes is bad, regardless of whether it is referenced from the inventory
[21:43] <jam> mthaddon: I'm honestly not sure how to do the debugging in your environment, but I can gi
[21:43] <jam> give some ideas
[21:43] <beuno> emmajane, the tabs on your mockup, don't need to be tabs, do they?
[21:43] <lifeless> mthaddon: you don't want to do this on a live server
[21:43] <emmajane> beuno, nope
[21:43] <lifeless> mthaddon: it will freeze the process while you do it and may cause disconnects
[21:43] <jam> lifeless: consider that we changed 'pack' to always copy all references from the index
[21:43] <mthaddon> lifeless: ah, okay
[21:43] <emmajane> beuno, I was just looking for the shape that meant what I said.
[21:43] <jam> which means that these don't go away
[21:43] <beuno> emmajane, cool. Just handed over the changes, crossing my fingers
[21:43] <mthaddon> lifeless: in that case, I think I'll leave it to one of the lp-bzr devs
[21:43] <emmajane> beuno, thanks :)
[21:44] <lifeless> mthaddon: we should simply be using these questions as things to get codehosting logging. (And I filed bugs in this space some time back)
[21:44] <mthaddon> lifeless: since they now know which branch it is, should be able to reproduce
[21:44] <beuno> emmajane, and we'll probably want to take it to the public list from now on
[21:44] <emmajane> beuno, that's fine.
[21:44] <lifeless> what I'd like to know is what format its being pulled into
[21:44] <lifeless> doing data-conversion on the fly is a likely candidate for that footprint IMO
[21:45] <lifeless> jam: well, pack with no revspec always copied all indexed items.
[21:45] <jam> why am i seeing 3 calls to get_tags_bytes() .....
[21:45] <jam> lifeless: sure.
[21:45] <lifeless> jam: fetch() however was never meant to copy all index items.
[21:45] <jam> which means... it would be carried around
[21:45] <lifeless> jam: and IMO its a pretty bad bug that it does because it means private data can be exposed
[21:45] <mthaddon> lifeless: how would you know what format it's being pulled into?
[21:45] <jam> so... to say "I don't care" is false
[21:46] <jam> I don't care a lot about a small amount of extra
[21:46] <jam> I care a lot about a *lot* of extra
[21:46] <lifeless> jam: were his inventory entries bogus?
[21:46] <jam> lifeless: bogus in what sense?
[21:46] <jam> rev1.inventory[file_id] == rev2.inventory[file_id] except for last-modified-revision
[21:47] <jam> so the inventory referenced a text key which was added to the repo
[21:47] <lifeless> jam: the tree3=...[rev2] thing isn't a bug, but its horribly unclear
[21:47] <lifeless> jam: ok, so the inventory entry was wrong
[21:47] <jam> lifeless: why do you commit rev3, but never look at it?
[21:48] <lifeless> what line are you looking at
[21:48] <jam> lifeless: 1065
[21:48] <jam> rev3 = mini_commit(...)
[21:48] <lifeless> k
[21:48] <jam> and rev3 is never mentioned again
[21:50] <jam> lifeless: in your hpss streamlining work, did you notice that get_tags_bytes is being called repeatedly and getting the same result each time?
[21:50] <lifeless> yes
[21:50] <jam> in poking around at mthaddon's stuff, I just noticed that
[21:50] <lifeless> branch lock being dropped to zero
[21:50] <lifeless> discards the cache
[21:51] <lifeless> got pulled over to the 2a crunch though before fixin
[21:51] <jam> sure
[21:51] <jam> as long as it isn't something new I'm finding
[21:51] <lifeless> can be the client too
[21:51] <jam> lifeless: bzr.dev + bzr.dev for my testing
[21:51] <lifeless> ok
[21:52] <garyvdm> Hi all
[21:52] <lifeless> so, I'm struggling to explain rev2 there
[21:52]  * garyvdm is looking for bialix
[21:52] <lifeless> I'll poke at this
[21:52] <jam> garyvdm: I don't think bialix is usually on at this time... but you might get lucky
[21:53] <lifeless> jam: its a bug;
[21:53] <lifeless> jam: but only on that line; the rest of _check_graph is correct
[21:53] <jam> interestingly, during the fetch, memory consumption is pretty steady until 375s into the fetch, when it starts growing, and it grows until the fetch completes at 560s. (from 75MB to 178MB.)
[21:53] <lifeless> as we're testing it didn't add an entry
[21:54] <jam> lifeless: right, so  I think you are trying to test that tree3.inventory[file_id].revision == rev2
[21:54] <jam> and that the graph ancestry of rev2 is correct.
[21:54] <jam> though I almost think you are trying to test that the graph ancestry should also not have a rev3 in it
[21:54] <jam> but can't test that easily, because you can't use it if it isn't there...
[21:55] <jam> I do wonder if assertFileGraph should be using iter_ancestry()...
[21:55] <jam> lifeless: anyway, back to the original question. I didn't realize that there was at least some level of this testing. Which is good to be wrong about. :)
[21:55] <jam> though they are pretty hard to read and understand tests...
[21:56] <lifeless> yes
[21:56] <lifeless> this was written when we had a merge convergence bug many years ago
[21:56] <lifeless> I was 'dammit I'm going to stomp this out'
[21:56] <jam> lifeless: the flip-flopping issues, etc?
[21:56] <lifeless> yes
[21:57] <lifeless> "failure to converge"
[21:57] <lifeless> [think Mao]
[22:01] <lifeless> jam: so, back to the commit issue
[22:01] <lifeless> I agree that adding huge numbers of unreferenced objects is undesirable; however thats not what happened here.
[22:01] <lifeless> we added huge numbers of referenced objects
[22:02] <jam> lifeless: right
[22:02] <lifeless> I don't think we should pin down at the *interface level* whether unreferenced objects are used or not.
[22:02] <jam> you said "do I care if unreferenced data gets writteN"
[22:02] <jam> I care if *lots* of unreferenced data gets written
[22:02] <lifeless> I'm totally open to writing non-interface tests about unreferenced data being written.
[22:02] <lifeless> But I don't believe its part of the contract
[22:03] <jam> lifeless: it would seem to be part of the contract for all currently-implemented formats...
[22:04] <lifeless> jam: which contract
[22:04] <lifeless> I'm talking about the interface, which bzr-svn also implements
[22:04] <jam> lifeless: all currently implemented repository formats don't add unreferenced text nodes to the index
[22:05] <lifeless> I don't think its appropriate to test *at the interface level* that that is the case
[22:05] <jam> lifeless: and bzr-svn doesn't pass per-repository tests anyway
[22:05] <jam> lifeless: And I would bet that we wouldn't want to write a new format that *does* do that
[22:05] <lifeless> I should have the freedom to drop in an implementation that does things radically differently and passes
[22:05] <jam> so it seems a bit weak to say it isn't a valid interface contract
[22:06] <jam> lifeless: I think a reasonable part of the contract is that I won't grow grossly out of the order of actual content that you are versioning
[22:06] <lifeless> jam: I'm not saying we shouldn't have [some] tests, I'm saying I don't think its part of 'the inventory entries generated by commit are X, Y and Z and have valid references' contract.
[22:06] <jam> it is sort of a hard-to-pin down statement
[22:07] <lifeless> add to that that its proving a negative
[22:07] <jam> sure
[22:07] <jam> which is part of the problem
[22:07] <lifeless> A misbehaving implementation could add newrevid-RANDOM nodes to the index
[22:07] <jam> we have a lot of aspects that we don't test...
[22:07] <lifeless> and not reference them
[22:07] <jam> lifeless: you could check that the index only has 3 nodes
[22:07] <lifeless> they could make a new index
[22:08] <jam> lifeless: so we have some holes when it comes to our testing wrt negatives...
[22:08] <jam> like not fetching 5MB when you should only fetch 10k
[22:08] <lifeless> jam: if we wanted to write a test that all our built in repositories don't write (fileid,NEWREVID) for these cases, I'd be fine with that.
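A test of the kind lifeless agrees to might look roughly like this toy version (a set of `(file_id, revision_id)` keys standing in for the text index; all names invented, not bzrlib code):

```python
# Toy "text index": a set of (file_id, revision_id) keys.  A correct
# commit records a key only for files whose text actually changed.

def record_texts(index, basis, current, new_rev):
    """Add an index key only for files whose content differs from basis."""
    for file_id, text in current.items():
        if basis.get(file_id) != text:
            index.add((file_id, new_rev))

index = {('a-id', 'rev1'), ('b-id', 'rev1')}
basis = {'a-id': 'hello', 'b-id': 'world'}
current = {'a-id': 'hello', 'b-id': 'world!'}  # only b-id changed
record_texts(index, basis, current, 'rev2')

assert ('b-id', 'rev2') in index      # the real change was recorded
assert ('a-id', 'rev2') not in index  # no (fileid, NEWREVID) for a no-op
assert len(index) == 3
```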
[22:09] <jam> I realize that the particular circumstances with content filters is fairly specific, and hard to predict ahead of time
[22:09] <lifeless> jam: btw 1.9 to 1.9 fetching, does it use the generic codepath now?
[22:09] <jam> lifeless: I don't believe so
[22:09] <jam> that would have been spiv's change, though
[22:09] <jam> not mine
[22:09] <lifeless> jam: my assertion about content filters is that it was testing coverage with the tree interface that lacked
[22:10] <jam> hmm....
[22:10] <lifeless> jam: if we'd insisted on a parameterised version of per_workingtree for content filters - which would be a time consuming change - we would have caught this easily.
[22:10] <jam> lifeless: after the transfer has completed, the process still has 230/299MB of in-use ram.
[22:10] <lifeless> and realised we need to make 'size able to be None'
[22:10] <lifeless> and then gone to the test_commit_builder tests and tested that combination
[22:10] <jam> lifeless: sure, though stacked branches suffered a lot of similar issues
[22:10] <lifeless> yup
[22:10] <jam> in that we don't have a 'per' on them
[22:10] <lifeless> hindsight in both cases
[22:11] <lifeless> I'm trying to formulate something I can be stubborn about with the next orthogonal-but-not-really feature
[22:11] <lifeless> to help us prevent this sort of surprise
[22:12] <jam> lifeless: where do you think would be good to make "Repository.get_stream" a little less opaque
[22:12] <jam> right now if I use -Dhpss
[22:13] <jam> I get about 4 lines
[22:13] <jam> 0.428  hpss call w/body: 'Repository.get_stream'
[22:13] <jam> 15.000     result:   ('ok',)
[22:13] <jam> 554.473  creating branch
[22:13] <jam> make a request
[22:13] <jam> it returns in 15s
[22:13] <jam> and 500s later I'm ready to create the branch
[22:13] <lifeless> -Dhpssdetail
[22:13] <jam> rather opaque
[22:14] <thumper> abentley: ping
[22:15] <jam> lifeless: doesn't add much. Or are you saying that if I add debug statements I should use that category?
[22:15] <lifeless> jam: hmm, it should be reporting all the items
[22:15] <jam> lifeless: there are other bits
[22:15] <jam> my point is that get_stream( ) is 500 / 510s
[22:16] <jam> and it only merits about 3 lines
[22:16] <lifeless> hpss_detail should be printing
[22:17] <lifeless>               %d byte part read', len(bytes_part))
[22:17] <jam> just that it is streaming something for the bulk of the time
[22:17] <jam> ugh
[22:17] <jam> network glitch
[22:17] <lifeless> hpss_detail should be printing
[22:17] <lifeless>               %d byte part read', len(bytes_part))
[22:17] <jam> sure I do see that
[22:17] <lifeless> I'd also consider -Dfetch
[22:17] <lifeless> if you want details on the bits inside the stream
[22:18] <lifeless> [you may need to patch stuff up for that one]
[22:18] <jam>  grep -rnI "fetch.*debug" bzrlib/
[22:18] <jam> returns nothing
[22:18] <jam> so I'm pretty sure -Dfetch isn't going to help (yet) :)
[22:19] <lifeless> it used to :P
[22:19] <lifeless> -Dstream too
[22:19] <lifeless> that should be s/stream/fetch IMO
[22:21] <jam> of course, I'm more interested in the server than client side...
[22:27] <jam> lifeless: something seems wrong with -Dstream
[22:27] <jam> 15.046  inserting substream: revisions
[22:27] <jam> 15.048  inserting substream: revisions
[22:27] <jam> 15.049  inserting substream: revisions
[22:27] <jam> 15.049  inserting substream: revisions
[22:27] <jam> 15.049  inserting substream: revisions
[22:27] <jam> 15.050  inserting substream: revisions
[22:27] <jam> 15.050  inserting substream: revisions
[22:27] <jam> 15.050  inserting substream: revisions
[22:27] <jam> 15.051  inserting substream: revisions
[22:27] <jam> 15.051  inserting substream: revisions
[22:27] <jam> 15.052  inserting substream: revisions
[22:27] <jam> 15.052  inserting substream: revisions
[22:27] <jam> 15.052  inserting substream: revisions
[22:27] <jam> 15.053  inserting substream: revisions
[22:27] <lifeless> I think that shows the point :P
[22:27] <jam> 15.053  inserting substream: revisions
[22:27] <jam> 15.053  inserting substream: revisions
[22:27] <awilcox> someone hit paste...
[22:27] <jam> sorry about the spam
[22:27] <lifeless> its getting lots of little substreams
[22:28] <lifeless> probably the batch size of knit reads
[22:29] <abentley> thumper: pong
[22:30] <jam> lifeless: http://paste.ubuntu.com/260064/ would lead me to believe that every record is being treated as a separate stream
[22:30] <thumper> abentley: I wanted to talk to you about pipelines again
[22:31] <thumper> abentley: I'm in a similar situation to last time
[22:31] <abentley> thumper: It's not a good time right now.  Making supper.
[22:31] <thumper> abentley: ok, will talk later
[22:32] <lifeless> jam: thats possible
[22:41] <beuno> jam, lifeless, while branching LP locally into a shared repo, I'm getting the CPU thrashed, as well as ~500mb of memory usage
[22:41] <beuno> 70.735  creating new compressed block on-the-fly in 0.000s 2645 bytes => 428 bytes
[22:41] <beuno> that's the last output, something like 2 minutes ago
[22:42] <lifeless> beuno: do you have the C extensions
[22:42] <beuno> lifeless, I'm using people.canonical.com
[22:42] <beuno> so I'm guessing yes?
[22:42] <lifeless> please not to be guessing
[22:42] <beuno> ok, how do I verify?
[22:43] <lifeless> which bzr [check its a system one]
[22:43] <beuno> yes
[22:43] <beuno> /usr/bin/bzr
[22:44] <lifeless> python -c 'import bzrlib._groupcompress_pyx'
[22:44] <beuno> returns nothing
[22:44] <lifeless> then you do
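(A small sketch of the check lifeless suggests: when the compiled extensions are present, importing the pyx module succeeds silently. The helper below is hypothetical, not a bzrlib function.)

```python
import importlib

def have_compiled_extension(mod_name):
    """Return True if the named extension module imports cleanly."""
    try:
        importlib.import_module(mod_name)
        return True
    except ImportError:
        return False

# e.g. have_compiled_extension("bzrlib._groupcompress_pyx")
```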
[22:44] <beuno> ok, it's still not giving me output, and using up a lot of CPU and RAM
[22:45] <jam> lifeless, spiv: So it does, indeed, seem that every knit-ft-gz is getting sent as a separate 'substream'....
[22:45] <beuno> it's currently running if you want to ssh in and fiddle  :)
[22:45] <beuno> this is bzr 1.17, and 2a format
[22:46] <jam> beuno: it is also possible that it is just taking a lot of time to create a working tree, and that it has nothing to do with the last log message
[22:46] <jam> I don't see why it would be 500MB, though
[22:46] <beuno> I've been having problems at the office with 2a as well, mainly for a big repo (1.6gb), making the PC unusable when trying to push, and resulting in a traceback (haven't filed a bug yet)
[22:46] <beuno> deepjoy,
[22:46] <beuno> jam, [#########\          ] Fetching revisions:Get stream source
[22:46] <beuno> it's stuck like that
[22:47] <beuno> aha
[22:47] <beuno> 649.908  creating new compressed block on-the-fly in 0.000s 4063168 bytes => 36 bytes
[22:47] <poolie> hello beuno, jam, lifeless, abentley
[22:47] <beuno> hiya poolie
[22:47] <lifeless> jam: thats undesirable but shouldn't cause memory or serious perf issues
[22:47] <beuno> jam, https://pastebin.canonical.com/21528/
[22:48] <jam> hi lifeless
[22:48] <jam> beuno: it is always stuck like that
[22:48] <jam> no progress with the new stream code
[22:48] <lifeless> jam: ITYM hi poolie :P we've been chatting for hours?
[22:48] <lifeless> hi poolie
[22:48] <poolie> easy mistake to make :)
[22:48] <jam> lifeless: apparently the tab key hits at weird times
[22:49] <jam> beuno: that branch isn't sharing a repository
[22:49] <jam> it is fetching between 2
[22:49] <jam> 'file:///home/beuno/db-devel/.bzr/repository/' to 'file:///home/beuno/launchpad/.bzr/repository/'
[22:50] <jam> beuno: what made you think it was inside a shared repo?
[22:50] <beuno> jam, correct, it's being branched *into* a new shared repo
[22:50] <beuno> https://pastebin.canonical.com/21529/
[22:50] <beuno> it's taking a VERY long time to do so, and a lot of CPU
[22:51] <jam> beuno: k. So you just finished transferring revisions, as near as I can tell. Which is the new "lots and lots of fragmentation" issue
[22:51] <jam> that said, 650s to get to that point is pretty crazy
[22:52] <lifeless> beuno: we know the cause :P. We're working on it - jam has landed some mitigation stuff and spiv has landed more mitigation, but we still need to fix up 'does not compress on combining packs/streaming'
[22:52] <beuno> perfect
[22:52] <jam> lifeless: I don't think we know why it took 650s before it started copying "inventory" texts
[22:52] <jam> that is 650s spent copying Revision texts
[22:52] <poolie> jam, so i wonder what happened, did vila make a beta tarball?
[22:52]  * poolie opens mail
[22:52] <jam> poolie: he made an rc1 tarball
[22:52] <poolie> rc!
[22:53] <poolie> really
[22:53] <jam> ... which I think was a mistake
[22:53] <poolie> i think so too
[22:53] <jam> he called it 2.0rc1
[22:53] <jam> we chatted about it a bit
[22:53] <jam> he also sent it to bzr-announce, though it got cancelled while it was pending
[22:53] <jam> etc
[22:53] <jam> so the release process docs seem to need a bit more polish
[22:53] <jam> since he felt he read them and was following what they said
[22:54] <jam> but he seemed to do a lot that doesn't fit what we were looking for
[22:54] <beuno> jam, it's all on rookery, if it's of any use to you. logs, repos, all of it
[22:55] <lifeless> beuno: what url did you pull from?
[22:55] <beuno> lifeless, originally, http://bazaar.launchpad.net/~launchpad-pqm/launchpad/db-devel/
[22:55] <jam> lifeless: he pulled from launchpad originally, but the current fetch is a local to local one
[22:55] <beuno> correct
[22:55] <lifeless> jam: my top candidate for the 650 seconds is the 'walking the full graph for fetching into a local repo is slow' bug
[22:56] <lifeless> but direct graph access would tend to argue against that
[22:56] <jam> lifeless: well, it still isn't super fast to query for missing revisions when the target repo already exists
[22:56] <jam> so it could be the problem
[22:56] <lifeless> beuno: new repo?
[22:57] <jam> bzr init-repo --2a .; bzr branch ../other
[22:57] <lifeless> jam: an empty repo is _very_ fast to query :>
[22:57] <beuno> lifeless, yes, completely
[22:57] <jam> lifeless: but not as fast as when you create one from scratch and it does 0 queries
[22:57] <lifeless> jam: thats true, but it won't be hitting disk
[22:57] <jam> your code to use AncesterOf rather than _search_missing_revisions
[22:57] <lifeless> something like 6 function calls per graph depth
[22:58] <lifeless> spiv's, but yes. it is better
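(The query being discussed amounts to a graph walk from the source tips, pruned at revisions the target already has. A rough self-contained sketch — names are illustrative, not the bzrlib API:)

```python
def missing_revisions(parents, tips, target_has):
    """Walk ancestry from tips, skipping anything the target already has.

    parents: dict mapping revision-id -> list of parent revision-ids.
    target_has: predicate returning True if the target repo has the rev.
    """
    missing, pending = set(), list(tips)
    while pending:
        rev = pending.pop()
        if rev in missing or target_has(rev):
            continue  # already seen, or the target has it: prune here
        missing.add(rev)
        pending.extend(parents.get(rev, ()))
    return missing
```

Against an empty target every `target_has` call is trivially False, which is why lifeless notes an empty repo is very fast to query, yet still slower than doing zero queries when the repo was created from scratch.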
[22:58] <jam> beuno: can you run 'bzr --version' again real quick?
[22:58] <jam> hmm... that doesn't put it in .bzr.log
[22:59] <jam> lifeless: it is using python2.4 if that matters...
[22:59] <jam> 0.029  looking for plugins in /usr/lib/python2.4/site-packages/bzrlib/plugins
[22:59] <beuno> sure
[22:59] <lifeless> I wouldn't expect that dramatic a change
[22:59] <beuno> Bazaar (bzr) 1.17
[22:59] <beuno>   Python interpreter: /usr/bin/python 2.4.3
[23:00] <poolie> emmajane: thanks for keeping the website moving, i'm replying now
[23:00] <jam> lifeless: so yeah, as near as I can tell it is using the compiled extensions, so that isn't strictly a reason
[23:01] <beuno> FWIW, branching /devel from HTTP into the new shared repo, blazing fast
[23:02] <beuno> but that initial operation was murder
[23:02] <jam> 1068s according to the log
[23:02] <jam> 18min
[23:02] <beuno> right, to branch locally
[23:02] <beuno> and the CPU was angry at me in the meantime
[23:03] <jam> beuno: on the plus side you saved 1421 => 1068 = 6 minutes versus branching from remote...
[23:03] <beuno> :)
[23:04] <jam> beuno: any particular reason why you felt you *needed* to do this?
[23:04] <lifeless> jml: you should subscribe yourself to 'https://code.edge.launchpad.net/~jml/testtools/trunk'
[23:04] <lifeless> beuno: I hear that launchpad.net can host branches
[23:04] <jam> anyway.... EOD for me.
[23:04] <beuno> jam, yes. I got db-devel and wanted devel as well
[23:04] <beuno> to run a script against that branch
[23:04] <jam> beuno: but why two repos
[23:04] <jam> anyway
[23:04] <lifeless> jml: and/or fix the bug that reviewers are not treated as implicitly subscribed. Its really the most frustrating thing.
[23:05] <jam> I'm off to pick up my son
[23:05] <beuno> jam, there shouldn't be a shared repo on the first one
[23:05] <jam> beuno: there still is *a* repository
[23:05] <beuno> jam, I realized that I wanted a shared repo *after* I had branched initially
[23:05] <lifeless> bzr bind lp:launchpad/db-devel
[23:05] <lifeless> bzr switch devel
[23:05] <lifeless> bzr switch db-devel
[23:05] <lifeless> no repo, should be fast, thanks :)
[23:05] <beuno> yes, that's another way of doing it
[23:06] <beuno> this week is "don't learn new tricks week"
[23:07] <jam> lifeless, spiv: so as near as I can tell, bzrlib.smart.repository._stream_to_byte_stream does indeed intend to create a separate substream for each record
[23:07] <jam> which doesn't seem to match all the gyrations that _byte_stream_to_stream does to try and get multiple records per substream
[23:08] <jam> for substream_type, substream in stream:
[23:08] <jam>   for record in substream:
[23:08] <jam>     pack_writer.bytes_record(serialised, [(substream_type,)])
[23:08] <lifeless> so,
[23:09] <lifeless> knit-* should all come down to serialised = ...get_bytes_as()
[23:09] <lifeless> and for the additional records, that returns None or '' (I don't remember which)
[23:09] <lifeless> so we'll yield one substream for the entire group of IO that knit did back to disk
[23:10] <lifeless> the revisions vf is a bit special, because it only uses full texts
[23:10] <jam> lifeless: I see exactly the same for inventories and texts
[23:10] <lifeless> jam: what precisely are you seeing
[23:11] <jam> 3.323  accepting bytes: 'B341\ntexts\n\nknit-del...'
[23:11] <jam> 3.323  substream_type texts, record_bytes: 'knit-delta-gz\nTREE_R'...
[23:11] <jam> 3.323  inserting substream: texts
[23:11] <jam> 3.323                294 byte part read
[23:11] <jam> over and over again
[23:11] <jam> anyway, 1 substream per record is... abusive but not specifically harmful
[23:11] <jam> I suppose it makes it easier for me to debug exactly when memory goes wacko
[23:11] <jam>  /away
[23:11] <lifeless> expected for revisions
[23:11] <lifeless> not for others
[23:12] <lifeless> (revs and sigs)
[23:13] <lifeless> jam: knit-delta-closure and knit-delta-closure-ref
[23:13] <lifeless> jam: in the server you should see lots of the first, and many more of the second
[23:13] <lifeless> knit-delta-closure-ref's are not serialised
[23:14] <lifeless> and it should be using LazyKnitContentFactory
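(What the receiving side tries to reconstruct is one substream per *run* of same-typed records, not one per record. A minimal sketch of that regrouping using itertools.groupby — illustrative, not the actual bzrlib `_byte_stream_to_stream` code:)

```python
from itertools import groupby

def regroup_substreams(parts):
    """Collapse consecutive (substream_type, record) parts back into
    (substream_type, [records]) runs, yielding one substream per run."""
    for stype, run in groupby(parts, key=lambda part: part[0]):
        yield stype, [record for _, record in run]
```

With this regrouping, the fourteen-plus "inserting substream: revisions" lines in jam's -Dstream paste would collapse to a single substream per contiguous run of revision records.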
[23:24] <lifeless> poolie: lunch?
[23:40] <lifeless> jam: I'm sorry the selftest bug hit 2.0, I thought I'd cherrypicked around it
[23:40] <lifeless> jam: or rather, I'm pretty certain I had
[23:51] <poolie> jeez you are getting up early
[23:54] <poolie> igc, did we get an os x mailing list? i can't remember
[23:57] <poolie> jam, lifeless, i'm going to merge 2.0 back to trunk
[23:57] <poolie> if we want to back out jam's changes, let's explicitly back them out
[23:58] <poolie> though i'd be inclined to leave them the same as 2.0 until we fix the bug if any properly