[00:44] <spiv> jelmer: thanks for fixing the escaping bug, although I had to delete my svn-cache to fix it completely
[00:47] <spiv> Ah, hmm, or perhaps not quite...
[00:49] <SamB> escaping bug?
[00:51] <spiv> SamB: %2F in svn-v4: revids rather than /
[01:00] <jelmer_> lifeless,jam,igc: is now too late to get another change to the bbc file format in?
[01:00] <lifeless> jelmer_: it's never too late
[01:01] <lifeless> jelmer_: we just can't change the format in bzr.dev without issuing a new one
[01:01] <jelmer_> ok
[01:01] <lifeless> it will come in as beta, meaning we can issue new versions easily
[01:01] <jelmer_> I'd still like to get that new revision serializer in
[01:02] <lifeless> well, get a patch up :)
[01:09] <jelmer_> Well, I asked for feedback a while ago about this
[01:09] <jelmer_> about the format to use
[01:10] <jelmer_> (See "[RFC] New revision serializer format" on the mailing list)
[01:12] <lifeless> you'll need to make sure send keeps working
[01:12] <lifeless> and possibly other things
[01:13] <lifeless> I don't have a strong opinion about revision serialisation; I would say though that it should still be typed
[01:13] <lifeless> I mean
[01:13] <lifeless> having it not know the encoding, or store badly encoded things is undesirable
[01:45] <spiv> lifeless: I think I'll head to epping on the 1:35
[01:46]  * spiv -> lunch
[01:58]  * igc lunch
[02:02] <lifeless> jam: ping
[02:07] <jelmer__> https://code.edge.launchpad.net/~jelmer/git/trunk (-:
[03:09] <lifeless> igc: ping
[03:09] <lifeless> jam has reviewed my commit change, but unless I'm blind he managed to not say approve|tweak|whatever
[03:09] <lifeless> abentley: is bb dead?
[03:10] <lzhang> I know this is from a while ago but seriously
[03:10] <lzhang> svn is way harder than bzr
[03:12] <lzhang> if you are coming from no vc experience, bzr is super easy, the only hurdle is the command line
[03:16] <spiv> jml: you may be amused to know that python trunk breaks bzr's test suite too (not to mention bzr itself).
[03:16] <spiv> jml: although for different reasons.
[03:17] <jml> spiv: right now I'm full of despair.
[03:17] <jml> spiv: and I can't say that lightened my mood :)
[03:19] <spiv> jml: fun fact #1: sys.version_info isn't a tuple anymore, and cannot be used in expressions like "%s,%s,%s,%s,%s" % sys.version_info
[03:20] <spiv> jml: #2, TestCase._exc_info has been removed.  Which is actually reasonable I think, although a DeprecationWarning might have been nice.
[03:20] <spiv> (Although being an _-prefixed method I guess their policy doesn't oblige them to do so)
[03:21] <spiv> With those fixed the test suite at least runs, although there are failures from at least test_http, I haven't dug much further...
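spiv's fun fact #1 is version-dependent, but the defensive pattern is stable: slice `sys.version_info` or use its named attributes rather than assuming it behaves as a plain 5-tuple. A minimal sketch (everything here is standard library, nothing invented):

```python
import sys

# sys.version_info became a named structseq; slicing and named
# attributes are the portable ways to consume it.
major, minor, micro = sys.version_info[:3]
release = "%d.%d.%d" % (major, minor, micro)
level = sys.version_info.releaselevel  # 'alpha', 'beta', 'candidate', 'final'
```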
[03:29] <Peng_> Do new Python releases always cause fallout like this? What happened when Python 2.5 was released? Was bzr even around back then?
[03:30] <spiv> Peng_: well, 2.7 isn't even a beta release yet, so there's still hope ;)
[03:30] <spiv> But basically, yes.
[03:31] <lifeless> so spiv provoked me to minirant
[03:31] <lifeless> http://pybites.blogspot.com/2009/03/unittest-now-with-test-skipping-finally.html
[03:31] <lifeless> the least "you" can do is join in.
[03:31]  * lifeless looks at spiv
[03:31]  * lifeless looks at jml
[03:33] <jml> not going to, sorry.
[03:33] <seb_kuzminsky> i have a shared repo with a bunch of branches in it, and each branch has a single working dir with a checkout in it
[03:34] <seb_kuzminsky> some of the branches are *also* checkouts of a CVS repo elsewhere
[03:34] <seb_kuzminsky> yes it's messy
[03:34] <seb_kuzminsky> it gets worse
[03:34]  * lifeless cries on jml's shoulder
[03:34] <bob2> Peng_: bzr was like 18 months old when 2.5 came out
[03:35] <seb_kuzminsky> the CVS repo recently branched, and now i want a new bzr branch that tracks the new CVS branch
[03:35] <seb_kuzminsky> but i want to be able to merge between my bzr branches...
[03:35] <seb_kuzminsky> if i simply check out the new cvs branch into an empty bzr branch, i can't merge between the bzr branches properly
[03:36] <seb_kuzminsky> what should i do?  just shoot myself?
[03:36] <seb_kuzminsky> is it safe to just "cp" one branch-directory to another?
[03:37] <seb_kuzminsky> ie "cp upstream-trunk upstream-2.3"
[03:37] <lifeless> it's about the same as 'bzr branch'
[03:37] <seb_kuzminsky> then in upstream-2.3 do a cvs update to the new branch, and "bzr commit" that
[03:38] <bob2> bzr-load-dirs might still exist
[03:38] <bob2> also the wiki has a list of solutions
[03:38] <seb_kuzminsky> oops i'm back
[03:38] <bob2> seb_kuzminsky: also the wiki has a list of solutions
[03:38] <seb_kuzminsky> ok bob2 i'll check that
[03:43] <seb_kuzminsky> bob2: i'm pretty much using the "converting and ignoring history" method from "TrackingUpstream"
[03:43] <seb_kuzminsky> maybe i should switch to "Converting and keeping history"
[03:44] <igc> lifeless: pong
[03:46] <seb_kuzminsky> i dont have access to the machine hosting the repo, so i can't use "bzr cvsps-import"
[03:46] <jam> igc: hey
[03:46] <jam> hey lifeless, I just "tweaked" your patch, sorry I forgot to vote earlier
[03:48] <lifeless> no worries
[03:48] <lifeless> I fixed the link test
[03:48] <lifeless> I don't think there were other changes needed? it was more informational?
[03:49] <lifeless> jam: if you want to talk about fetch we can arrange a voice call
[03:50] <igc> jam, lifeless: do we have a tool that 'looks' into a pack and gives you info about it?
[03:50] <lifeless> igc: there is an index dump command
[03:50] <lifeless> and there is repository-details
[03:50] <jam> igc: not specifically for a .pack file, there is "repository-details" that does a bit more
[03:50] <lifeless> there isn't a single-.pack tool, partly because they are not standalone enough for that to be useful in current formats
[03:50] <igc> I've added incremental packing to fast-import as lifeless recommended and it does reduce disk space usage
[03:51] <igc> but ...
[03:51] <igc> I'm seeing something weird
[03:51] <igc> emacs has 105K revisions
[03:51] <igc> after importing 100K of them, the single pack file is 100M - cool
[03:52] <jam> lifeless: I would like to see "ExistingContent" return the fs hash
[03:52] <igc> but the pack file generated for the last 5K is 500M!
[03:52] <lifeless> jam: oh right, it doesn't in the current commit code path either
[03:52] <jam> but I'm not sure if that would mess things up because you expect only real changes to come out
[03:52] <lifeless> jam: and I need to figure out why I didn't then
[03:52] <igc> and it seems that the packs grow bigger as fast-import goes
[03:53] <jam> UnsupportedOperation(_generate_inventory) seemed dodgy, but as long as it is good enough...
[03:53] <jam> igc: trees get bigger, packing makes a *large* difference
[03:53] <lifeless> jam: yeah, I'm unhappy with it, but it seems awfully close to make-work to create another exception class
[03:53] <igc> is that expected? I would have expected the pack for each 10K to be roughly the same size
[03:53] <jam> I would guess that the new revisions have non-overlapping texts
[03:53] <jam> such that it all gets inserted as fulltexts to GC
[03:53] <jam> groupcompress
[03:53] <lifeless> they may have become more branchy
[03:53] <jam> igc: 'bzr pack' and 'autopack' make a world of difference
[03:54] <jam> like 2.3GB => 30MB for bzr.dev
[03:54] <jam> 500M for 5k is a bit surprising, but not impossible
[03:54] <jam> I'm assuming that with all packing disabled everything is a fulltext
[03:55] <jam> igc: Do you (effectively) have 1 call to insert_record_stream per text that you add?
[03:55] <igc> jam: just wondering out loud whether running pack multiple times in the one process is not releasing something it needs to
[03:55] <jam> Put another way, do you call add_lines() or do you stream in texts?
[03:55] <jam> igc: we don't delta between "insert_record_stream()" calls
[03:55] <jam> even less between pack files
[03:56] <lifeless> igc: so you need to pack again basically
[03:57] <jam> (as in, we may at some point delta inbetween insert_record_stream() calls, but we would really *like* to not delta between pack files)
[03:57] <jam> igc: consider the size of the fast-import stream
[03:58] <jam> how large is it for those 5k revs
[03:58] <jam> I would expect brisbane-core without packing to be ~ that size
[03:58] <igc> jam: I'm using repo.texts.add_lines() to load the texts
[03:58] <jam> igc: add_lines() does 1 insert_record_stream() per text
[03:58] <jam> so yeah, you are getting all fully-expanded texts  in the target repo
[03:58] <lifeless> same as commit
[03:59] <jam> igc: on the good side, it should make things fast, because we don't spend any time worrying about those stinkin' deltas :)
[03:59]  * igc goes to look at the size of the fast-import stream
[03:59] <lifeless> I think if I was to write a fast import I'd do a 2-pass, pass-1 plan out everything and generate revids etc, pass-2 act like a fetch operation;
[03:59] <lifeless> then I'd probably get clever and make it look like a repo :)
[04:00] <lifeless> jam: where are we at for freezing the disk format
[04:01] <lifeless> I know it's alpha, but I'd like to start dogfooding on my laptop; don't want to do that until we're <reasonably> sure it's copacetic
[04:04] <jam> lifeless: well, there is the subtree question
[04:04] <jam> whether that is a data format bump or not
[04:04] <jam> and certainly should be answered if you want to dogfood
[04:04] <jam> 20s to do a no-trees branch in a shared repo is a bit much
[04:04] <jam> :)
[04:04] <jam> There is 1 bit I'd like to at least play with to see if we want to change
[04:05] <jam> it is maybe 4 bytes per record, so 3-4% total size,
[04:05] <lifeless> so subtrees are in discussion
[04:05] <jam> nothing huge, but something that is wasteful and easy to remove
[04:05] <lifeless> whats the thing you want to play with?
[04:05] <lifeless> I need to lsprof commit too; I am wondering if group overhead is hitting commit
[04:05] <jam> (each delta record has a "source size" indicator, which isn't useful to us)
[04:05] <jam> lifeless: I would think commit would do really well with groups, since it doesn't have to delta
[04:06] <jam> but perhaps to load inventories, etc.
[04:06] <jam> The stuff about how we want to lay out groups after "bzr pack" isn't a data format bump
[04:06] <jam> so we are fine there
[04:06] <lifeless> jam: it has to load the pages that add_inventory_by_delta hits
[04:06] <jam> If we want a new chk index would be a format bump
[04:06] <jam> but that is a lot more work
[04:07] <lifeless> right, but I'm not trying to avoid all changes
[04:07] <jam> and probably won't land in a --dev5
[04:07] <lifeless> heck, I'll have to change when we land --dev5 as it will have a new string anyhow
[04:07] <lifeless> I just don't want to dogfood while we are churning
[04:07] <jam> sure
[04:07] <jam> I just have the one real data-format thing
[04:07] <jam> but it keeps getting pushed off because it doesn't seem critical
[04:07] <jam> I should just do it
[04:07] <lifeless> what is it
[04:08] <jam> (delta records have "source size")
[04:08] <jam> 4-bytes at the start of each delta record indicating the size of the source text they were generated against.
[04:08] <jam> well ~4-bytes
[04:14] <lifeless> useful for extraction
[04:15] <jam> anyway, igc, I was hoping you would be able to bench branch for your python conversion, just to see where I'm at versus .dev
[04:15] <lifeless> if you delete the tree reference walk in branch, you can do that :)
[04:15] <jam> For my testing of Launchpad branching, it dropped from ~44min under lsprof down to 14min (12min without lsprof)
[04:15] <lifeless> jam: I really thought difference_update was only a problem if the rhs wasn't a set
[04:15] <lifeless> jam: am I wrong?
[04:15] <jam> lifeless: you are wrong
[04:15] <lifeless> ok, thanks
[04:16] <jam> it just uses the Iterator protocol
[04:16] <jam> .difference() probably cares whether it is a set or not
[04:16] <jam> .difference_update() seems pretty stupid
[04:16] <lifeless> I'm amazed
[04:16] <lifeless> I wonder if we should blacklist difference_update
[04:17] <jam> lifeless: so... I'm slightly wrong
[04:17] <jam> I just looked at the code
[04:17] <jam> if the other is a set
[04:17] <jam> it uses "set_next()"
[04:17] <jam> rather than PyIter
[04:17] <jam> but it still calls "discard_entry" for every entry in the object
[04:17] <lifeless> that's still proportional to its size
[04:17] <jam> yep
[04:17] <lifeless> it's probably because they are mutating the LHS
[04:18] <jam> it might be *slightly* faster, in that it would have the hash
[04:18] <jam> lifeless: yeah, you would have to create a temp object, build it, and then overwrite the current set with the temp data
[04:18] <lifeless> they should at least recast it as self.difference_update(self.intersection(other))
[04:19] <jam> lifeless: interestingly, that *is* genuinely faster
[04:19] <igc> jam: I'll run the benchmark now
[04:19] <jam> at least when "other" is >> larger than self
[04:19] <lifeless> is it as fast as your .difference using code?
[04:19] <jam> igc: revno 3916 should have a decent set of updates
[04:20] <jam> lifeless: I didn't test it in the real world, but timeit gave equivalent results
[04:20] <jam> though timeit seems to say that "s = set(small)" takes 9ms, but "s = set(small); s = set(small)" takes 9.5ms
[04:20] <lifeless> lol
[04:20] <jam> so I don't entirely trust it
[04:21] <lifeless> so, I think in C we'd make a list version of intersect
[04:21] <lifeless> iterate that without increasing refs
[04:21] <lifeless> s/iterate/generate/
[04:21] <lifeless> then iterate it to dec refs in self
[04:21] <lifeless> should be pretty fast
[04:22] <lifeless> and possibly only do that when len(other) > 3* len(self)
[04:22] <lifeless> I wonder if we should write
[04:22] <lifeless> in e.g. osutils
[04:22] <lifeless> difference_update(a_set, other_set)
[04:22] <lifeless> to avoid unclear code and still be fast
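The osutils-style helper lifeless is sketching might look like the following; the function name, the 3x cutoff, and its placement are assumptions drawn from the discussion, not actual bzrlib code:

```python
def difference_update(a_set, other_set):
    """Remove every element of other_set from a_set.

    Equivalent to a_set.difference_update(other_set), but when
    other_set is much larger, intersect first so only the keys
    actually present in a_set need to be discarded.  The 3x
    threshold is a guess, as in the discussion above.
    """
    if len(other_set) > 3 * len(a_set):
        a_set.difference_update(a_set.intersection(other_set))
    else:
        a_set.difference_update(other_set)
```

Either branch leaves a_set with exactly the same contents; only the amount of work differs.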
[04:22] <jam> lifeless: so is it "BB:tweak" to use a FIFOCache instead of LRUCache for btree._internal_node_cache ?
[04:23] <lifeless> yes
[04:23] <lifeless> as long as it's capped I'm happy
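For illustration, a toy capped FIFO cache in the spirit of the btree._internal_node_cache change; this is hypothetical code, not bzrlib's FIFOCache. The appeal of FIFO over LRU is that a cache hit costs nothing, since nothing is reordered:

```python
from collections import deque

class FIFOCache(dict):
    """Toy capped cache: evicts in insertion order.  Unlike an LRU
    cache, a lookup never touches bookkeeping state."""

    def __init__(self, max_size=100):
        dict.__init__(self)
        self._max_size = max_size
        self._order = deque()  # keys, oldest first

    def __setitem__(self, key, value):
        if key not in self:
            self._order.append(key)
        dict.__setitem__(self, key, value)
        while len(self) > self._max_size:
            dict.__delitem__(self, self._order.popleft())
```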
[04:27] <lifeless> damn, four test failures in commit_uses_ric that I didn't know about
[04:27] <lifeless> fixing...
[04:29] <jam> igc: revno 3916 is 14m under lsprof, down from 44min. And if we can get a better fetch order, it would drop another 3.5min
[04:29] <jam> My first tweaks are good everywhere; the later tweaks only affect really large repos
[04:30] <igc> jam: that benchmark is running now - bzr.dev 4208 vs bbc 3916
[04:31] <igc> jam: give it 10 minutes
[04:34] <jam> igc: thanks. Note that "bzr branch bzr.dev" is still 2m8s up from 1m16s.
[04:34] <jam> (but down from ~4min)
[04:35] <jam> so even fixing the obvious stuff, it is going to be a bit slower
[04:36] <jam> though maybe not... the overall time in "item_keys_introduced_by" is 52s
[04:36] <jam> which would bring us very close.
[04:37] <igc> jam: 68s (bzr.dev) vs 132s (bbc)
[04:38] <jam> igc: down from ~350s, right?
[04:38] <jam> igc: this is on python trunk?
[04:38] <jam> (just surprising, because I'm seeing similar numbers for bzr.dev on my machine)
[04:39] <igc> jam: yes, down from 357
[04:39] <igc> python 3.0 branch
[04:51] <jam> lifeless, igc: question for you guys... If a root node turns out to be a simple leaf node which is redundant with a reference from an internal node you already have, is it a big deal if we transmit it?
[04:51] <jam> the chances are pretty low
[04:51] <jam> though it happens to occur in the launchpad history
[04:52] <lifeless> you mean if someone were to e.g. delete everything leaving a much smaller tree?
[04:52] <jam> something like that
[04:52] <jam> If one tree has "aa" and "ab"
[04:52] <lifeless> I don't really care
[04:52] <jam> and then we add "bc" causing a split
[04:52] <jam> then the second tree references a leaf node with just "aa" and "ab"
[04:53] <lifeless> is this in aid of 'only read once'?
[04:53] <jam> lifeless: right
[04:53] <jam> being able to go ahead and transmit the roots
[04:53] <jam> as we read them
[04:53] <lifeless> I think its fine if we read a page to transmit it
[04:53] <jam> so we don't have to try to read them again
[04:53] <lifeless> 'read a page on the changed side'
[04:53] <jam> lifeless: right
[04:54] <jam> iter_interesting_nodes is the only code that buffers records
[04:54] <jam> and now that I have lazy streaming
[04:54] <jam> I'd like to get rid of buffering
[04:54] <jam> because it causes refcycles
[04:54] <jam> and at the moment
[04:55] <jam> the only nodes it buffers are the root nodes
[04:55] <jam> though for a slightly different reason
[04:55] <jam> but still part of the issue as I rewrite it
[04:55] <lifeless> are you going for something more similar to iter_changes?
[04:56] <jam> lifeless: I'm playing around with that
[04:56] <jam> though it gets hairy when you also try to "read once"
[04:56] <lifeless> Ideally we'd only have one function
[04:56] <jam> and "no buffer"
[04:56] <jam> as well as "read in batches"
[04:59] <jam> Though I think I can go away from a pure "never read a node I may find is unneeded" by using the fact that we generally have very flat, and very similar shaped trees, so grouping by exact prefix is generally 'good enough'
[04:59] <jam> rather than worrying about possible longer prefixes
[04:59] <jam> we'll see
[05:00]  * jam goes and finds a soft pillow
[05:00] <lifeless> gnight
[05:00] <lifeless> I'll drop by tomorrow morning if you want to chat
[05:10] <lifeless> -> town, night all
[07:07] <crisb> is there a way to make a remote branch 'append only'?  I.e. for my trunk I don't want someone to be able to push to it if they've synchronised an old version via merge, as it will mess up the revision history.
[07:18] <vila> hi all
[07:28] <spiv> crisb: yes, there's a setting you can put in the branch.conf
[07:28] <spiv> crisb: append_revisions_only = True in .bzr/branch/branch.conf IIRC.
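The branch.conf fragment spiv describes (path and option name as recalled in the discussion):

```
# .bzr/branch/branch.conf
append_revisions_only = True
```

With this set, pushes that would rewrite the branch's existing mainline history are rejected rather than silently reordering revnos.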
[07:44] <crisb> spiv: cheers, found it in the manual too. rtfm!
[07:46] <spiv> crisb: :)
[09:59] <Mez> how do I resolve the conflict
[09:59] <Mez> RK  connect-config.php => connect-config.php.OTHER@
[09:59] <Mez> I'm not even sure what the conflict is
[09:59] <Kinnison> what's the diff between the files?
[10:00] <lifeless> Mez: that's a rename and a kind change
[10:00] <lifeless> Mez: or remove + kind change perhaps
[10:00] <lifeless> one branch made it a symlink
[10:00] <lifeless> one did something different
[10:01] <Mez> ah fair enough
[10:01] <Mez> it should be a symlink to be honest
[10:01] <lifeless> bzr rm the one you don't want
[10:01] <lifeless> bzr mv the other to the right spot
[10:02] <Mez> I've just removed it
[10:03] <lifeless> ciao
[10:03]  * lifeless waves bye
[10:19] <thecookie> Hmm, is there any nice stats plugin? Like.. commit stats and such
[10:20] <spiv> thecookie: there's a basic one at https://edge.launchpad.net/bzr-stats
[10:20] <thecookie> Thanks :)
[12:43] <beuno> spiv, lifeless, bzr performance is really starting to shine for remote operations
[12:43] <beuno> you guys rock
[14:04] <Tak|work> jelmer: ping
[14:15] <jam> morning vila
[14:16] <vila> Hi jam !
[14:20] <jam> vila: I noticed you have some gc-python-only commits happening :)
[14:21] <vila> jam: yup, almost up to speed (still not 100% on the subject though :-/)
[14:21] <vila> but close
[14:34] <igc> hi jam, vila
[14:34] <igc> time for bed for me
[14:34] <jam> hey igc, surprised to still see you around :)
[14:34] <jam> and off you go
[14:34] <jam> sleep well igc
[14:34] <vila> wow igc, try to get some sleep !
[14:35] <igc> jam: I wanted to get a patch out and it just didn't want to pass all my tests thanks to a bug :-(
[14:35] <igc> solved now and patch out - hooray!
[14:35] <jam>  \o/
[14:35] <igc> jam: thanks for your work on branch speed
[14:36] <jam> igc: well, it mostly came down to not being something we were profiling, there were a few simple fixes, though there are more that are a bit complex to get right
[14:36] <jam> 'difference_update' being code that I've known to have poor scaling in the past, but sometimes forget
[14:36] <igc> jam: if you get a chance, I'd really like you to be super brave and stabilise the disk format soon
[14:37] <igc> jam: I want to convert OOo and I really would like to do that on a semi-stable format
[14:37] <igc> jam: those big conversions take some time, even with fast-import
[14:38] <igc> jam: fast-export of mysql from btree format took over 16 hours for example
[14:38] <jam> igc: fast-export, or fast-import?
[14:39] <igc> jam: export
[14:39] <jam> converting mysql took about 44hrs total on my machine
[14:39] <jam> though memory consumption was up in the 1.8GB until I restarted it
[14:39] <jam> so I think we have a bit of a leak somewhere
[14:39] <igc> fast-import of mysql isn't working yet 'cause I think fast-export has some bugs
[14:39] <jam> (it *did* go back up to 800MB or so)
[14:40] <igc> it did import 20K+ revisions in 95 minutes though (to gc-chk255-big)
[14:40] <jam> igc: you *can* just do "bzr init-repo --gc-chk255-big target; bzr branch source target"
[14:40] <igc> right
[14:40] <jam> igc: especially given that the conversion is then guaranteed compatible :)
[14:40] <igc> but I like to stress test fast-export/fast-import to get the bugs out :-)
[14:40] <jam> I don't know if you preserve revision ids and file-ids across fast-export | import
[14:40] <jam> igc: I will also say, having 1 export, and then N imports is a nice property.
[14:40] <jam> as import with CHK is much faster than export from XML
[14:41] <igc> jam: especially if the export time is the bottleneck
[14:41] <lifeless> vila: btw, parallel test branch up for review :)
[14:41] <igc> jam: no preserving of ids yet 'cause it doesn't matter for my testing
[14:42] <igc> anyhow, night all
[14:43] <vila> lifeless: Well, I know the code so it's bb:approve from me, I was wondering if I qualify as a reviewer here (I've seen similar situations where co-authored code is reviewed by a third dev)
[14:44] <jam> vila: we've done it both ways
[14:44] <jam> I usually try to post it, and wait for feedback in case a 3rd party cares
[14:44] <jam> but I'm more willing to merge it anyway
[14:46] <jam> lifeless: I just sent an email about questions with "insert_record_stream()" checking to see if the key already exists in the target.
[14:46] <jam> the problem is that we have an explicit VF test that you can send the same records multiple times
[14:46] <jam> without error
[14:46] <jam> (random_id=False)
[14:47] <jam> but branching LP is 2x faster with (random_id=True), because we don't query all the spilled indices
[14:47] <jam> (the issue is having a .cix with 340k keys as you are copying)
[15:05] <jam> vila: want to do our phone call?
[15:05] <jam> I was about to play with changing the byte stream for gc
[15:06] <vila> sure
[15:06] <jam> and I realized it is going to affect the work you are doing
[15:06] <vila> I push what I have and then call you
[15:11] <vila> jam: pushed, calling
[15:11] <jam> k
[15:14] <lifeless> jam: uhm, perhaps check for dups before linking the pack in, rather than on each key
[15:14] <lifeless> jam: there are some potential attacks if we don't check
[15:20] <Leonidas> hi, I am trying to add files to a tree but somehow I don't know how to do it. Any tips?
[15:20] <Leonidas> (I would like to work without a working tree)
[15:21] <Leonidas> My code currently looks like this: http://paste.pocoo.org/show/109863/
[15:22] <Leonidas> So I am able to build the revisions and get a repository with all log messages but unfortunately I don't know how to add the actual file contents to the repository
[15:23] <lifeless> meep, its nearly 2:30
[15:24] <lifeless> gnight
[15:24] <lifeless> Leonidas: you really should use commit builder
[15:24] <lifeless> use a MemoryTree
[15:25] <lifeless> or a PreviewTree
[15:25] <lifeless> and call commit() on it
[15:25] <lifeless> doing what you're doing I can nearly guarantee that your output branch will fail 'check'
[15:26] <Leonidas> lifeless: what is the difference between a memorytree and a previewtree?
[15:26]  * Leonidas happily uses that if it is easier.
[15:28] <lifeless> a memory tree is mutable, you lock it and edit the content
[15:28] <lifeless> a previewtree is the result of a merge, held in memory
[15:28] <lifeless> anyhow, I must go sleep
[15:28] <lifeless> perhaps ask on the list
[15:30] <Leonidas> ok, thanks
[15:30]  * Leonidas plays with MemoryTree
[16:09] <jelmer> Tak|work: pong
[16:10] <Tak|work> hello
[16:10] <Tak|work> have you tried my md-bzr branch lately?
[16:21] <jelmer> Tak|work: No, I've been meaning to give it a try
[16:24] <Tak|work> ok
[16:25] <Tak|work> I think it's to the point now where it might be worth submitting it to the MD community addin repo
[16:26] <jelmer> ah, cool
[16:26] <jelmer> I'll see if I can give it a try in a couple of hours when I get back home
[16:48] <vadi2> how do I solve a tag conflict?
[16:51] <jelmer> vadi2: remove the tag locally and pull again
[16:51] <jelmer> vadi2: or use --overwrite
[16:51] <vadi2> I want to overwrite it on the server
[16:52] <vadi2> how can I do that? I remapped the tag from one revision to another locally
[16:52] <jelmer> bzr push --overwrite
[16:52] <vadi2> says no revs to push
[16:53] <jelmer> vadi2: but did it change the tag?
[16:55] <vadi2> good question. I think it did actually
[16:55] <vadi2> thanks!
[17:24] <Leonidas> strange, when I do a memorytree.put_file_bytes_non_atomic(file_id, "something")
[17:24] <Leonidas> I get this:
[17:25] <Leonidas> bzrlib.errors.NoSuchFile: No such file: u'/whatsonair/seewhatsonair.py'
[17:25] <Leonidas> but I added both whatsonair as well as seewhatsonair.py
[17:27] <james_w> Leonidas: do you have a full backtrace?
[17:28] <Leonidas> james_w: yes, http://paste.pocoo.org/show/109875/
[17:28] <Leonidas> starts getting interesting in line 26
[17:28] <Leonidas> everything above is mercurial stuff
[17:29] <james_w> Leonidas: did you mkdir the dir in the tree?
[17:29] <Leonidas> james_w: is 'add' not enough?
[17:30] <james_w> not sure
[17:30] <Leonidas> ok, sounds like I know where my problem is, moment..
[17:30] <james_w> I haven't used MemoryTree, but normal working trees, you first create then file, and then version it with "add"
[17:33] <Leonidas> hmm, in MemoryTree it looks somehow different
[17:34] <Leonidas> how can I check whether something exists already (mkdir) or is already versioned (add)
[17:35] <Leonidas> the latter could be done by path2id, which returns None, but I don't know a nicer solution for checking whether a mkdir is necessary.
[17:36] <Leonidas> james_w: directories need to be mkdir()'ed and files add()'ed as far as I see. I can't mkdir() and add() afterwards.
[17:36] <james_w> hmm
[17:37] <james_w> that sounds odd
[17:37] <Leonidas> yep
[17:39] <beuno> jam, hi
[17:39] <beuno> you mentioned at some point you had a script to recursively pack branches?
[17:42] <jam> beuno: I'm not sure about "recursively pack", though a simple find + 'bzr pack' would do the job
[17:44] <beuno> aw, that means I have to brush up on my bash-fu  :)
[17:45] <jam> beuno: find . -path '*.bzr/repository' | sed -e "s#.bzr/repository##" | xargs -n1 bzr pack
[17:45] <jam> beuno: brushed up enough?
[17:46] <beuno> jam, :)
[17:46] <beuno> thanks
[17:46] <beuno> we should have this on a wiki page somewhere
[17:46] <beuno> recipes
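jam's find/sed/xargs one-liner, restated as a small Python recipe since beuno asked for something wiki-worthy. A sketch only: it assumes `bzr` is on PATH, and the helper names here are made up:

```python
import os
import subprocess

def find_branch_roots(root):
    """Yield every directory under root that holds a .bzr/repository
    (the same set of directories the one-liner's find matches)."""
    for dirpath, _dirnames, _filenames in os.walk(root):
        if os.path.isdir(os.path.join(dirpath, '.bzr', 'repository')):
            yield dirpath

def pack_all(root):
    """Run 'bzr pack' in each repository root found."""
    for branch_root in find_branch_roots(root):
        subprocess.check_call(['bzr', 'pack'], cwd=branch_root)
```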
[18:37] <vila> jam: gc extraction speed \o/
[18:37] <jam> vila: yeah, I've known it was good, it is nice to see "just how good" it is.
[18:37] <jam> and shouldn't you be off with Valentine?
[18:37] <jam> :)
[18:39]  * vila runs :)
[18:57] <beuno_> ah!
[18:57] <beuno_> when did we make upgrade recursive by default?
[18:57] <beuno_> it's a fantastic idea
[18:58] <beuno> man, I'm so in love with bzr today
[18:58] <fullermd> It is a fantastic idea, but I didn't know that we did.
[18:58] <fullermd> I was grumbing about it just a week or two ago...
[19:04] <mrooney> Is there a way to unversion things with a pattern? For example, I committed and a bunch of things I wanted to ignore went in, so I did bzr ignore pattern and that is fine, but it warns me of all the versioned things with that pattern
[19:04] <mrooney> How can I tell it to, say, remove those from VC in the next commit
[19:09] <mrooney> oh it looks like it has done that for me :)
[19:09] <mrooney> well it says deleted I hope it doesn't mean that
[19:10] <Peng_> Eh? upgrade was made recursive by default? Cool!
[19:11] <fullermd> mrooney: I doubt it would...   my first impulse would be find | grep | xargs bzr rm
[19:11] <Pieter> bye
[20:13] <Peng_> jelmer: ping
[20:17] <jelmer> Peng_: pong
[20:18] <Peng_> jelmer: This is off-topic, but I just wanted to point out that *!*@rhonwyn.vernstok.nl got banned from #oftc until your connection issues are worked out.
[20:22] <jelmer> Peng_: ah, thanks
[20:57] <abentley> jam: It looks like we're doing a case-insensitive match when moving files into directories on all platforms.  (builtins.py:766)  Is that right?
[20:58] <jam> abentley: from the looks of it, if you do "bzr mv foo bar BAZ" and 'baz' exists as a versioned dir, it will be used
[20:59] <jam> however, I think that "BAZ" has to hit a "os.isdir()" check earlier
[20:59] <jam> so on non cicp platforms
[20:59] <jam> the "into_existing" won't be true
[20:59] <abentley> jam: BAZ might be an unversioned dir.
[21:00] <jam> abentley: if BAZ is an unversioned dir, then the "_yield_canonical_inventory_paths" will probably fail
[21:00] <jam> not positive, though
[21:00] <jam> I suppose if you had a versioned 'baz' and an unversioned 'BAZ' 'bzr mv foo bar BAZ' might, indeed, move it into the versioned dir
[21:01] <abentley> jam: Do you agree that we should not be hitting this canonical_inventory_paths at all for case_sensitive filesystems?
[21:01] <jam> abentley: I don't think there was consensus on that, because of vfat
[21:02] <abentley> vfat is not case-sensitive, only case-preserving.  On vfat, we should hit that.
[21:02] <abentley> Or you mean that on Linux, it's actually case-sensitive?
[21:02] <jam> abentley: so, the last I discussed it was quite a while ago
[21:03] <jam> I think we talked about moving "case_sensitive" up into Tree
[21:03] <jelmer> Peng: I've fixed my connection issues
[21:03] <jam> and then something like "canonical_path" would return the first item or None
[21:03] <jelmer> Peng: Can you perhaps ask for an unban?
[21:03] <jam> that said, I'm not really sure, and wasn't the direct reviewer of the code
[21:04] <jam> so... I could agree with you, but I haven't spent a lot of time thinking about the various implications
[21:04] <abentley> jam: WT supports it already, and other Trees don't really have filesystems.
[21:04] <abentley> jam: I would be happy for canonical_path to just return the input on case-sensitive Trees, though.
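To make the vfat discussion concrete, a hedged sketch of case-insensitive path canonicalisation against the filesystem. This is illustrative only; bzrlib's _yield_canonical_inventory_paths matches against the inventory, not os.listdir, and the behaviour for unmatched components here (keep as given) is the "return the input" option abentley mentions:

```python
import os

def canonical_path(root, relpath):
    """Return relpath with each component respelled to match an
    existing directory entry, compared case-insensitively.
    Components with no match are kept as given."""
    current = root
    out = []
    for part in relpath.split('/'):
        try:
            entries = os.listdir(current)
        except OSError:
            entries = []
        match = next(
            (e for e in entries if e.lower() == part.lower()), part)
        out.append(match)
        current = os.path.join(current, match)
    return '/'.join(out)
```

On a case-sensitive filesystem this still respells (e.g. 'baz' finds 'Baz'), which is the ambiguity the thread is debating.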
[21:19] <vadi2> If a file is deleted locally, how can one get bzr to re-download it? "bzr pull" thinks it doesn't need to do anything
[21:28] <mrooney> vadi2: have you tried bzr revert file?
[21:29] <vadi2> no, the person just deleted the branch and re-checked out
[21:29] <vadi2> I'll try that next time though, thanks
[21:30] <vadi2> (though svn's behavior in this case makes more sense)
[21:30] <mrooney> if it was a checkout, pull isn't going to do anything
[21:31] <mrooney> your language is conflicting "deleted the branch" "re-checked out" so I am not sure if you were actually dealing with a branch or checkout :)
[21:32] <mrooney> or, maybe I am wrong about pull?
[21:32] <mrooney> oh, he is gone
[21:36] <lifeless> vila: ping
[22:08] <blueyed> I'm having problems with tailor, which appears to want to remove files which are not versioned (anymore). Can I filter those out somehow in bzr?
[22:08] <blueyed> The following is the log snippet, including the traceback: http://pastebin.com/m33875e52
[22:09] <blueyed> It seems like those files have been removed in the commit before, but (also?) in the next one (CVS sucks after all).
[22:15] <lifeless> you might have better results with bzr cvsps-import
[22:17] <blueyed> That had other issues IIRC. I'm currently looking into ignoring unversioned files in BzrWorkingDir._removePathnames
[22:50] <furlith> Hi all, I need to download the last version of a project using bazaar, I never used it so I quickly read the user guide to learn what I should do to get it, but it doesn't work
[22:51] <furlith> I've made this: "bzr branch [url]" but i have this error: bzr: ERROR: Not a branch: "http://trunk.ocaml-gnuplot.bzr.sourceforge.net/bzr/trunk.ocaml-gnuplot".
[22:52] <furlith> could anyone help me?
[23:03] <Peng_> furlith: Bazaar is telling the truth. That URL is not a branch. It's a redirect to SourceForge's homepage.
[23:08] <Peng_> furlith: You can see the Bazaar web viewer at http://ocaml-gnuplot.bzr.sourceforge.net/bzr/ocaml-gnuplot/ , but I have no idea where to get the branches.
[23:08] <Peng_> SourceForge--
[23:08] <blueyed> furlith: you prolly need to find the correct URL to branch from.
[23:08] <lifeless> furlith: http://sourceforge.net/scm/?type=bzr&group_id=115637
[23:08] <lifeless> Peng_: ^
[23:08] <Peng_> lifeless: Oooh, where'd you find that URL?
[23:08] <lifeless> bzr://ocaml-gnuplot.bzr.sourceforge.net/bzrroot/ocaml-gnuplot
[23:08] <lifeless> Peng_: project, then svn repo, then url hacked to bzr
[23:08] <lifeless> :P
[23:08] <Peng_> lifeless: Ah, smart.
[23:09] <blueyed> Peng_: see the "Code" tab on SF.net
[23:09] <blueyed> https://sourceforge.net/scm/?type=bzr&group_id=115637
[23:09] <Peng_> blueyed: Oh! You click on "More", which magically makes the other tabs appear.
[23:09] <Peng_> I didn't know that.
[23:09] <Peng_> I just moused-over "More", which linked to the main page, so I didn't click it.
[23:09] <blueyed> yep. strange thing that is.
[23:12] <Peng_> Interesting that SourceForge uses bzr:// for anonymous access.
[23:13] <Peng_> Wait, that's dumb. The website just links to bzr://ocaml-gnuplot.bzr.sourceforge.net/bzrroot/ocaml-gnuplot , which is the shared repo, not an actual branch.
[23:13] <Peng_> furlith: bzr branch bzr://ocaml-gnuplot.bzr.sourceforge.net/bzrroot/ocaml-gnuplot/trunk/
[23:17] <furlith> Peng_: thanks but I have an empty directory and this error: bzr: ERROR: KnitPackRepository('file:///home/maelick/plot/gnuplot/.bzr/repository/')
[23:17] <furlith> is not compatible with
[23:17] <furlith> RemoteRepository(bzr://ocaml-gnuplot.bzr.sourceforge.net/bzrroot/ocaml-gnuplot/.bzr/)
[23:17] <furlith> different rich-root support
[23:18] <Peng_> Uh.
[23:18] <Peng_> furlith: What version of bzr?
[23:18] <Peng_> furlith: Did you create a shared repo for the project? If so, run "bzr upgrade --rich-root-pack".
[23:23] <furlith> thanks it works :)
[23:24] <Peng_> Uh, I hope he didn't have anything else in that shared repo.
[23:24] <Peng_> Oops.
[23:26] <fullermd> Oh well, it'll resolve itself when we release 1.2 or 1.3 and have rich roots by default.
[23:40] <lifeless> fullermd: be nice