[12:20] <DShepherd> bzr is pretty neat..
[12:22] <DShepherd> bzr uncommit feature is what I have been wanting for svn for the past couple of days..
[12:24] <DShepherd> is it possible to check out an svn branch with bzr?
[12:25] <luks> yes, with http://bazaar-vcs.org/BzrForeignBranches/Subversion
[12:29] <DShepherd> aahh sweet
[12:34] <DShepherd> How can i undo what bzr init does?
[12:34] <radix> I guess rm .bzr -r
[12:35] <DShepherd> radix, hmm.. ok
[12:35] <radix> beware of doing that, though :)
[12:35] <DShepherd> no fancy command :-)
[12:35] <DShepherd> radix, why?
[12:35] <radix> well, .bzr is where everything is kept that bzr knows about it
[12:36] <radix> if you delete it, you're deleting all history, etc.
[12:36] <DShepherd> radix, ok thanks for the heads up.
[12:37] <DShepherd> bzr init-repo #what does that really do? and when is it a good time to do it.
[12:37] <radix> DShepherd: It creates a shared repository, which there is a document about
[12:38] <radix> DShepherd: basically it makes it so that if you have a bunch of branches with common revision data, it can be shared in one place
[12:38] <DShepherd> hmmm..
[12:38] <radix> DShepherd: this helps with disk space usage, but more importantly (IMO) network I/O
[12:38] <DShepherd> radix, ok
[12:38] <radix> DShepherd: because if you "bzr branch" a remote branch *into* your shared repository, if that branch largely has revisions that you already have, it won't have to download them again
[12:38] <radix> DShepherd: but yeah, for extension info please read the document on bazaar-vcs.org (I'm sure a search will find it, I don't remember the name)
[12:38] <DShepherd> kool
[12:39] <radix> s/extension/extensive/
[12:39] <DShepherd> ok kool.. i am going thru the document now.. but its so much nicer to talk to people :-).. thanks for the heads up though
[12:43] <DShepherd> does svn even have selective commit?? cause that's kool
[12:46] <radix> committing individual files instead of the whole tree?
[12:46] <radix> both bzr and svn have that, with "<bzr/svn> commit <file>"
[12:48] <DShepherd> radix, oh.. never used it before
[12:51] <fullermd> I'm not sure I know of ANY VCS that doesn't do that (I'm sure there are some, but I don't think any of the majors)
[01:17] <igc> morning
[01:28] <lifeless> hi igc
[01:29] <igc> morning lifeless - have a good weekend?
[01:29] <lifeless> yup
[01:29] <LaserJock> hi lifeless
[01:29] <lifeless> bucks night; I'm still tired :)
[01:29] <lifeless> hi LaserJock
[01:30] <igc> :-)
[01:32] <Verterok> Hi y'all
[01:32] <lifeless> hi
[01:37] <Verterok> lifeless: Hola!
[01:43] <Verterok> I'm going to start working to add multiple (eclipse) projects per branch to bzr-eclipse (the current implementation only supports the branch at the root of the project)
[01:43] <Verterok> my idea was to support one level up in the hierarchy, I'd like to hear your thoughts about it
[01:44] <Verterok> with one level, bzr-eclipse would support a branch with multiple eclipse projects as its child directories
[01:46] <lifeless> I'm just going to run an errand, I'll be back shortly
[02:10] <lifeless> back
[02:51] <poolie> hello
[02:54] <LaserJock> hola poolie
[03:18] <poolie> hi
[03:27] <poolie> igc, would you like a call today sometime?
[03:28] <igc> poolie: yes that would be good. How's 2:30 say?
[03:28] <igc> or whatever time suits you
[03:31] <poolie> sounds good
[03:34] <lifeless> igc: were you going to review the bytes_to_gzip patch ?
[03:34] <igc> I can yes
[03:35] <igc> still wrapping up the review of abentley's reconfigure one - when it's done, your patches are next
[04:01] <luisfelipe> Hey guys, I need help. I'm using bazaar here on a project, I'm trying to pull some changes another guy did on his repo, but both pull and merge say that there's nothing to pull
[04:02] <poolie_> ok
[04:03] <lifeless> luisfelipe: bzr missing can tell you the difference between branches in terms of commits
[04:03] <poolie> luisfelipe, if you run 'bzr log' on his repo, what do you see?
[04:04] <lifeless> luisfelipe: I suspect he has not pushed, or you are not pulling/merging from the correct URL for his branch
[04:07] <luisfelipe> the url is correct
[04:07] <luisfelipe> I tried to pull, and it said that the branches had diverged
[04:07] <luisfelipe> then I merged, and some of his changes came, but not all
[04:09] <luisfelipe> oh, I think I've got it
[04:09] <luisfelipe> sorry
[04:10] <luisfelipe> lifeless, he had not pushed, only committed his changes
[04:29] <lifeless> poolie: lol, win65
[04:31] <abentley> lifeless: No one would buy the first release of win64.  Personally, I'm holding out for win67.
[04:41] <lifeless> :)
[04:42] <Peng> Hmm. I'm getting a traceback when pulling from a knit branch into a pack branch.
[04:43] <lifeless> oh ?
[04:43] <lifeless> from bzr.dev ?
[04:45] <Peng> Yeah.
[04:45] <poolie> abentley, heh
[04:45] <Peng> Pulling a local copy of bzr.dev. I'm using your repository branch.
[04:46] <lifeless> Peng: thats the known data issue
[04:46] <lifeless> Peng: use abentley's delta fixing 'bzr check' patch on your copy of bzr.dev first.
[04:47] <Peng> abentley's delta fixing?
[04:48] <Peng> As in not bzr 0.90?
[04:48] <lifeless> Peng: yes
[04:48] <lifeless> Peng: this was all covered in the list
[04:49] <lifeless> Peng: And I know I chatted with you about it before. bzr.dev has deltas that are not referenced from the inventory
[04:49] <lifeless> this has to be fixed to use packs on them.
[04:50] <Peng> Okay.
[04:51] <Peng> (I *have* been reading the list, through Gmane's web interface. Guess I haven't gone far enough back?)
[04:51] <Peng> So...what do I need to do?
[05:00] <lifeless> easiest is to take a branch of my copy of bzr.dev using packs
[05:00] <lifeless> (you are failing on a new pull right ?, or incremental ?)
[05:01] <lifeless> bbiab, lunch
[05:01] <Peng> Incremental?
[05:02] <Peng> Easier than using your copy of bzr.dev is simply not keeping a pack copy of bzr.dev. I was just doing it for fun.
[05:02] <Peng> I already have a knit copy.
[05:03] <lifeless> ok
[05:03] <lifeless> you shouldn't have trouble with any other project in pack form
[05:15] <poolie> !!
[05:15] <poolie> i didn't know '123' in '1234' was true
[05:15] <poolie> but it is
[05:16] <poolie> but only for strings, not other sequences
[06:04] <lifeless> poolie: yay special casing
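[editor's note] The special casing poolie noticed can be pinned down in a few assertions (Python, illustrative only):

```python
# For str, `in` does a substring search; for other sequence types
# it checks element equality only, never sub-sequences.
assert '123' in '1234'          # substring match -> True
assert 123 not in [1234]        # no element equals 123
assert [1, 2] not in [1, 2, 3]  # `in` never matches sub-lists
assert 2 in [1, 2, 3]           # plain element membership
```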
[06:09] <poolie> lifeless, can you have a look for me at test_selftest_benchmark_parameter_invokes_test_suite__benchmark__
[06:09] <poolie> it does not seem to test anything like what the name says
[06:11] <poolie> i think it's a reasonable test but misnamed
[06:20] <lifeless> igc: ping
[06:20] <igc> hi lifeless
[06:20] <lifeless> poolie: are there dots in that name ?
[06:20] <poolie> not in the method name
[06:20] <keir> hello
[06:20] <poolie> it is in test_selftest.TestSelftest
[06:20] <lifeless> igc: does TestWorkingFormat4.test_id2path sound familiar ?
[06:20] <igc> yes
[06:21] <igc> is that a leading Q?
[06:21] <lifeless> poolie: annotate it; I'll bet its misnamed due to refactoring
[06:22] <poolie> apparently not
[06:22] <lifeless> heh
[06:22] <lifeless> anyway I agree
[06:22] <lifeless> igc: it is. I think that rename_one is buggy
[06:22] <lifeless> igc: because it allows a rename to a non-normalized, non-accessible path
[06:23] <igc> yes - the fix for that is in my patch. Digging ...
[06:23] <lifeless> igc: you have a test for rename_one ?
[06:23] <lifeless> whats the topic of the mail to search for
[06:23] <Vantage13> hi, i'm trying to use bzr-cvsps-import to import a cvs repository, but I always get "bzr: ERROR: Must end write groups before releasing write locks.".  Any idea what might cause that or how to get more information?
[06:24] <lifeless> Vantage13: api skew has affected the plugin
[06:24] <igc> lifeless: see the end of http://bundlebuggy.aaronbentley.com/request/%3C46DC2EB1.3010901@internode.on.net%3E
[06:24] <lifeless> Vantage13: if you use 0.18 it will work; I will try to fix that shortly if I can get a minute spare
[06:24] <igc> that's the iter_changes commit one
[06:24] <igc> lifeless: the fix was recommended by jam
[06:25] <lifeless> igc: yah, thats the test change needed but its buggy
[06:25] <igc> in which way?
[06:26] <lifeless> rename_one() is broken
[06:26] <igc> lifeless: You're very likely to be right. There are several emails ...
[06:27] <igc> between jam and myself (all on list) re this.
[06:27] <lifeless> yah, I was asking about the topic :)
[06:27] <lifeless> so I could find em
[06:27] <igc> There was talk about taking out some of the checking also. I'll dig in a moment ...
[06:28] <lifeless> --- bzrlib/tests/test_workingtree_4.py  2007-04-26 22:56:01 +0000
[06:28] <lifeless> +++ bzrlib/tests/test_workingtree_4.py  2007-09-17 04:28:02 +0000
[06:28] <lifeless> @@ -420,6 +420,8 @@
[06:28] <lifeless> 
[06:28] <lifeless>          try:
[06:28] <lifeless>              tree.rename_one('a', u'b\xb5rry')
[06:28] <lifeless> +            tree.unversion(['a-id'] )
[06:28] <lifeless> +            tree.add([u'b\xb5rry'] , ['a-id'] )
[06:28] <lifeless>              new_path = u'b\xb5rry'
[06:28] <lifeless> that patch demonstrates the problem
[06:28] <Vantage13> lifeless: thanks.  what's the command to grab 0.18?
[06:29] <lifeless> clearly it should be a no-op. But it errors. So rename_one is facilitating a buggy api
[06:29] <igc> That will fail I think
[06:29] <igc> The problem is that add does extra checking that rename doesn't
[06:29] <lifeless> igc: indeed, thats precisely the point, rename_one is allowing broken data into the dirstate.
[06:29] <igc> yup
[06:29] <igc> but whether add was being over strict was related
[06:30] <lifeless> Vantage13: have a look at http://bazaar-vcs.org/src/releases IIRC
[06:30] <igc> poolie: I'll call in 5
[06:30] <lifeless> Vantage13: our downloads page links to the tarballs, but the folder is listable; you can just 'tar xzf bzr-0.18.tar.gz && bzr-0.18/bzr' - you can run it without installing it
[06:30] <lifeless> igc: so the topic was?
[06:31] <igc> lifeless: https://lists.ubuntu.com/archives/bazaar/2007q3/030187.html
[06:31] <lifeless> gah
[06:31] <igc> Filename normalisation handling is the topic
[06:31] <lifeless> just the topic please
[06:31] <lifeless> I have no browser on this terminal ;)
[06:31] <igc> :-(
[06:31] <Vantage13> lifeless: is this 0.18 of the bzr-cvsps-import plugin or 0.18 of bazaar?  I thought bazaar was 0.90?
[06:32] <igc> lifeless: Aug 21 from me
[06:32] <lifeless> Vantage13: you need 0.18 of bzr, because 0.90 breaks the plugin
[06:33] <Vantage13> lifeless: That's an odd versioning scheme, isn't it?
[06:33] <lifeless> Vantage13: we jumped from 0.18 to 0.90 as we approach 1.0
[06:33] <lifeless> igc: I see no discussion of add or rename_one in that thread
[06:35] <Vantage13> lifeless: so is this something that will be fixed in bzr or in the plugin?
[06:36] <lifeless> Vantage13: Like I said, I will try to get around to fixing the plugin shortly; for now it will work fine if you just use an older bzr
[06:37] <Vantage13> lifeless: ok.  I'm just curious which I'll need to upgrade in the future for the long term fix
[06:37] <lifeless> the plugin ;)
[06:37] <Vantage13> lifeless: I usually use bzr from my distro and pull the plugin separately
[06:37] <Vantage13> lifeless: thanks!
[06:38] <lifeless> poolie: seems to be no bzr-cvs-ps launchpad page; or am I searching with bad terms ?
[06:38] <igc> lifeless: I'll look again. John and I have never discussed rename_one, only add. I'll just call poolie first then look as soon as I'm off the phone
[06:38] <lifeless> hmm, I'll just fix it and mail the list
[06:38] <lifeless> oh, bzr-cvsps-import got it
[06:39] <lifeless> Vantage13: I'm reporting a bug on the plugin for you
[06:40] <Vantage13> lifeless: excellent.  I'm just getting started with bzr and this was my first snag :)
[06:42] <lifeless> bug 140048
[06:42] <ubotu> Launchpad bug 140048 in bzr-cvsps-import "needs update to use write groups (incompatible with bzr >= 0.90)" [Undecided,New]  https://launchpad.net/bugs/140048
[06:50] <ubotu> New bug: #140048 in bzr-cvsps-import "needs update to use write groups (incompatible with bzr >= 0.90)" [Undecided,New]  https://launchpad.net/bugs/140048
[07:41] <lifeless> igc: ok fixed and mailed.
[07:41] <lifeless> igc: the key thing I've done different to you is to figure out the root cause :)
[07:41] <lifeless> meh, that sounds wrong.
[07:41] <igc> harsh but true :-)
[07:42] <lifeless> I mean I made the test that was putting bogus data in fail
[07:42] <igc> my issue was I wasn't sure what the correct behaviour really ought to be
[07:42] <lifeless> the names in inventory and dirstate objects should always be NFC/NFKC normalised
[07:43] <igc> good ...
[07:43] <lifeless> thats what the normalized_filename methods docstring says
[07:43] <lifeless> and add enforces
[07:43] <lifeless> the problem was that rename_one was not doing the same check as add
[07:44] <igc> so as I had suspected, my code (which was failing) was actually correct and the test was actually wrong. I'll look forward to reviewing your change then! :-)
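[editor's note] A quick illustration of the NFC/NFKC distinction under discussion: the \xb5 in the failing test name u'b\xb5rry' is U+00B5 MICRO SIGN, which NFC keeps but NFKC folds to Greek mu. This is a sketch with Python's stdlib unicodedata, not bzrlib's normalized_filename code:

```python
import unicodedata

# U+00B5 MICRO SIGN, the character in the failing test name u'b\xb5rry'.
micro = '\xb5'

# NFC leaves MICRO SIGN alone; NFKC (compatibility composition)
# folds it to U+03BC GREEK SMALL LETTER MU.
assert unicodedata.normalize('NFC', micro) == '\xb5'
assert unicodedata.normalize('NFKC', micro) == '\u03bc'

# A decomposed accent recomposes under NFC: 'e' + COMBINING ACUTE -> U+00E9.
assert unicodedata.normalize('NFC', 'e\u0301') == '\xe9'
```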
[07:45] <lifeless> review away
[07:45] <lifeless> but bytes_to_gzip first please, thats blocking other changes for me
[07:46] <lifeless> I'm thinking I can save a lot of hash update function calls
[07:47] <lifeless> by doing sha1 sum of the byte block rather than sha_strings
[07:49] <igc> interesting
[07:53] <lifeless> 5% difference
[07:53] <lifeless> (as a micro-benchmark)
[08:00] <lifeless> looks like it could be a 1.2% overall win
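[editor's note] The micro-optimisation lifeless describes (one hash call over a joined byte block instead of many per-string update calls) can be sketched with hashlib; this illustrates the call-overhead idea only, not the actual bzrlib sha_strings helper:

```python
import hashlib

chunks = [b'line %d\n' % i for i in range(1000)]

# Many small update() calls (the sha_strings-style approach):
incremental = hashlib.sha1()
for chunk in chunks:
    incremental.update(chunk)

# A single call over the whole byte block:
block = hashlib.sha1(b''.join(chunks))

# The digest is identical either way; the win is purely in
# avoiding the per-chunk Python call overhead.
assert incremental.hexdigest() == block.hexdigest()
```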
[08:01] <kgoetz> hi all. just wondering if bzr ignores or reads robots.txt files? i have a deny: * line in my websites root directory, but i still want people to be able to checkout my bzr over http
[08:02] <lifeless> kgoetz: thats fine
[08:02] <lifeless> bzr isn't a robot
[08:02] <kgoetz> lifeless: sweet. thanks.
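[editor's note] For context: robots.txt is purely advisory and only consulted by clients that opt in (crawlers), which is why bzr fetching over plain HTTP is unaffected. A sketch with Python's stdlib parser (the URL is illustrative):

```python
from urllib.robotparser import RobotFileParser

# A crawler that chooses to honour robots.txt parses it explicitly;
# an ordinary HTTP client (like bzr doing a checkout) never looks at it.
rp = RobotFileParser()
rp.parse(['User-agent: *', 'Disallow: /'])

# The rules only bind clients that ask:
assert not rp.can_fetch('SomeCrawler', 'http://example.com/branch/')
```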
[08:11] <lifeless> poolie: want a chat ?
[08:47] <vila> lifeless: open_write_stream is dead code can I remove it ?
[08:50] <lifeless> vila: dead? how so
[08:51] <vila> lifeless: kidding, just wanted you to quickly page-in  related info :) Its introduction broke webdav support
[08:51] <vila> looking at it I wonder why you didn't define it Transport
[08:52] <vila> s/it/it in/
[08:52] <vila> since it relies on other primitives
[08:52] <lifeless> huh?
[08:52] <lifeless> colour me confused
[08:53] <lifeless> it is a_transport.open_write_stream
[08:53] <vila> I saw the specific sftp implementation, but I don't understand why you didn't provide a *default* implementation in Transport
[08:53] <vila> since sftp and memory use the same exact code
[08:53] <vila> and remote too
[08:54] <vila> grr s/sftp and memory/ftp and memory/
[08:54] <lifeless> so the FTP implementations ucks
[08:54] <lifeless> *sucks*
[08:55] <lifeless> anything doing self.append_bytes() as a thunk will perform heinously
[08:55] <vila> fine for a default implementation in my book
[08:56] <lifeless> default implementations should make sense
[08:56] <lifeless> if the default is bad, its not a good default
[08:56] <lifeless> the only transport that self.append() makes sense for is Memory
[08:56] <vila> ok, that's the reason then. Fine, but webdav will use the same :-/
[08:57] <lifeless> webdav should definitely do a chunked encoded streaming upload
[08:57] <lifeless> or if the server is 1.1
[08:57] <lifeless> sorry
[08:57] <vila> lifeless: chunked encoding is on my TODO list, albeit very deep :-)
[08:58] <lifeless> if the next hop is a 1.0 in its signature then it should buffer locally
[08:58] <lifeless> we do *thousands* of little writes
[08:58] <lifeless> and the api is designed to allow that.
[08:58] <vila> hmmm, and sftp provides buffering via paramiko ?
[08:59] <lifeless> sftp streams
[08:59] <lifeless> non blocking writes
[08:59] <lifeless> I looked at doing a proper stream for the FTP one but it was going to be tricksy
[08:59] <lifeless> IIRC
[09:00] <lifeless> RemoteTransport will want to buffer too
[09:00] <lifeless> but it doesn't yet
[09:00] <lifeless> though RemoteTransport will hopefully not use the api ever as the smart server should kick in
[09:01] <vila> lifeless: ok, thanks for the "default implementations should not suck" rationale, that was what I was looking for
[09:01] <vila> I mostly understood the rest and will fix webdav with a default sucking implementation and a FIXME for buffering
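[editor's note] A minimal sketch of the buffering behind vila's FIXME: batch the "thousands of little writes" and hand them to the backend in one call. All names here (BufferedWriteStream, backend_write, the threshold) are invented for illustration and are not bzrlib API:

```python
class BufferedWriteStream:
    """Accumulate small writes; flush to the backend in large batches."""

    def __init__(self, backend_write, threshold=64 * 1024):
        self._backend_write = backend_write  # e.g. one network round-trip
        self._threshold = threshold
        self._pending = []
        self._size = 0

    def write(self, data):
        self._pending.append(data)
        self._size += len(data)
        if self._size >= self._threshold:
            self.flush()

    def flush(self):
        if self._pending:
            self._backend_write(b''.join(self._pending))
            self._pending = []
            self._size = 0

# Six 4-byte writes with a 10-byte threshold become two backend calls:
sent = []
stream = BufferedWriteStream(sent.append, threshold=10)
for _ in range(6):
    stream.write(b'abcd')
stream.flush()
assert sent == [b'abcdabcdabcd'] * 2
```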
[09:03] <vila> lifeless: last bit, did you think about some criteria to trigger the buffer flush ? Is there some coherence to ensure about concurrent read or writes by other users ?
[09:03] <lifeless> vila: its documented in the docstring
[09:04] <lifeless> vila: have a look at the LocalTransport implementation for example
[09:06] <vila> lifeless: meh, can't find anything on that subject, more precise pointer ?
[09:08] <vila> LocalTransport.open_write_stream says: See Transport.open_write_stream... which defines the policy but local doesn't seem to implement it, still in your private branch only maybe ?
[09:08] <lifeless> pydoc bzrlib.transport.Transport.open_write_stream
[09:09] <igc> lifeless: do you notice a speed difference if the compression level is set to 1 instead of -1?
[09:10] <lifeless> vila: look at the get method on local.py
[09:10] <lifeless> igc: dunno
[09:11] <lifeless> igc: the default (-1) is what hg uses FWIW
[09:11] <vila> lifeless: ok
[09:11] <igc> ok
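[editor's note] The compression-level question can be poked at directly with Python's zlib, where level -1 means the library default (currently 6) and 1 is fastest:

```python
import zlib

data = b'the quick brown fox jumps over the lazy dog\n' * 500

fast = zlib.compress(data, 1)    # level 1: fastest, looser compression
default = zlib.compress(data)    # level -1: zlib's default speed/size trade-off

# Both levels produce valid streams that round-trip losslessly;
# only speed and output size differ.
assert zlib.decompress(fast) == data
assert zlib.decompress(default) == data
assert len(fast) < len(data)     # repetitive input compresses well either way
```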
[09:26] <ubotu> New bug: #140055 in bzr "socket leak in test suite" [Medium,Triaged]  https://launchpad.net/bugs/140055
[09:31] <lifeless> ok, I'm out
[09:31] <lifeless> night
[09:34] <igc> lifeless: night - review just emailed btw
[09:39] <lifeless> thanks
[09:54] <igc> vila: got a moment?
[09:54] <vila> igc: yup
[09:54] <igc> Q re bzrdir.open_from_transport ...
[09:55] <igc> can I comment out the qualified_target = ... line?
[09:55] <igc> or ...
[09:55] <igc> should it be return get_transport(qualified_target)?
[09:58] <vila> the FIXME implies we should return get_transport(qualified_target) *but* it's hard to verify
[09:58] <vila> redirect can be on other schemes (ftp, sftp, etc)
[09:59] <igc> ok, so ...
[09:59] <vila> the problem is really when we are working with the non-default http implementation and the redirect will use the default http implementation
[10:00] <vila> i.e.: http+pycurl is the default, http+urllib: redirects to http
[10:00] <igc> code quality wise, what do you suggest given qualified_target is calculated but never used?
[10:00] <vila> we begin with a urllib transport and end with a pycurl transport
[10:01] <vila> the code can be safely commented out, one can even write a test that exhibits the problematic behavior and make it an expected failure
[10:02] <igc> I'll comment it out then as part of some cleanups to that module I'll submit
[10:02] <vila> now that ConnectedTransport has been refactored, this bug may be easier to address, but it may also vanish if we drop pycurl support
[10:02] <igc> I like your idea of an expected failure test btw
[10:02] <vila> I think that's the whole story, so in short, comment it out :)
[10:03] <vila> igc: :)
[10:03] <igc> thanks
[10:03] <vila> in spirit the FIXME comment is a lazy way to do that :)
[10:04] <vila> igc: just of out curiosity, how did you arrive there ?
[10:04] <vila> FIXME grepping ?
[10:04] <igc> vila: I was reading the bzrdir code as part of me reviewing abentley's reconfigure patch
[10:05] <igc> I figured I needed to grok it in order to review the pack stuff soon as well
[10:06] <vila> igc: hmm, bzrdir is hard to grok...
[10:07] <vila> lots of static methods, I had to read a bunch of plugins for foreign vcs to understand why.
[10:07] <igc> 44 A4 pages says to me that it ought to be a few modules in a package, not one big one :-)
[10:08] <igc> moving old formats into a plugin as lifeless has suggested will improve things though
[10:08] <igc> anyhow, time for some food
[10:08] <vila> igc: enjoy your meal
[10:08] <igc> thanks
[10:22] <lifeless> lets drop pycurl :)
[10:24] <rokahn> I'm looking for a version control system which I can adapt to store and retrieve XML diffs (as opposed to line-based diffs).  Would someone on this channel know if Bazaar is a good choice (or what might be a better choice)?  We already have software to generate the XML diffs and apply the XML patches, so we're looking for a version control system which can be easily modified to use external programs for diff & patch.
[10:37] <alfborge> I'm trying to update my local branch against the upstream branch through sftp using bzr pull and bzr merge, but it's taking ages and showing no progress.
[10:37] <alfborge> Both are running bazaar 0.90
[10:38] <alfborge> Any idea what I can do to speed it up/find out why it's slow (or not working)?
[10:42] <alfborge>   The repos is 193M
[10:43] <alfborge> (At least that's what it says when I do "du -hs ~/repos"
[10:44] <matkor> jelmer: Sorry for bothering but could you please merge tiny fix for push with overwrite from https://code.launchpad.net/~matkor/bzr-gtk/trunk-matkor ? It is pretty straightforward ... TIA
[10:48] <lifeless> alfborge: networking is still slow; 0.92 will largely fix this
[10:48] <lifeless> alfborge: be patient and it will be fine
[10:53] <alfborge> Is there a way to make bzr completely forget files?
[10:53] <alfborge> I.e. remove all history about the files.
[12:09] <quicksilver> alfborge: only madmen and dictators try to change history :)
[12:16] <Peng> Or people who accidentally added an ISO.
[12:17] <jelmer> Peng: You can remove information about history (creating ghosts)
[12:17] <Peng> Oh, really? Cool.
[12:18] <Peng> Anyway, alfborge was the one who asked.
[12:18] <Peng> I'd like to do it too, but I converted to hg a month ago for speed.
[12:18] <jelmer> it's not really exposed at the UI level yet
[12:18] <Peng> so it doesn't matter anymore.
[12:18] <Peng> jelmer: It probably shouldn't be.
[12:20] <alfborge> jelmer: Any docs on this?
[12:21] <jelmer> alfborge: there's a plugin that allows removing revisions but it's not very efficient
[12:21] <jelmer> alfborge: see remove-revisions on the plugin page
[12:22] <quicksilver> if I'd accidentally added an ISO, I'd just re-check out the version before that mistake
[12:22] <quicksilver> but that's obviously not the solution if you want to weed out a file that's been in there for 20 revisions
[12:24] <Peng> Is it possible to check out up to r10, then bundle 12-15 and apply them? Or would it error out because 11 is missing?
[12:24] <quicksilver> Peng: it would error out. but there may be a way to say "do this anyway"
[12:25] <jelmer> Peng: No, that won't work. You could apply those as patches and commit them manually but that would create new revisions.
[12:27] <jelmer> matkor: the abs in your latest commit is not correct
[12:27] <jelmer> matkor: Negatives are used to indicate that the length of mainline has decreased
[12:30] <matkor> jelmer: OK. I think we need descriptive name for that ...
[12:31] <matkor> "%d revisions removed"  "%d revisions added" ?
[12:31] <jelmer> matkor: Eventually, we should display all the information in PullResult()
[12:31] <jelmer> matkor: bzr pull in bzr itself used to print negatives as well, so I'd rather keep that behaviour until we start using PullResult
[12:32] <jelmer> poolie: Hi
[12:33] <matkor> jelmer: OK. I will prepare correct revision and let You know :)
[12:34] <jelmer> matkor: Cool, thanks! And thanks for the patch :-)
[12:35] <jelmer> matkor: phanatic and I have been discussing ways to improve the development process of bzr-gtk
[12:35] <jelmer> we'd like to start using BundleBuggy
[12:36] <jelmer> so it'll hopefully become easier to get things merged and so we can have at least some review
[12:38] <NamNguyen> bundlebuggy doesn't merge
[12:40] <jelmer> NamNguyen: No, but it means merge requests don't get dropped but are remembered
[12:43] <matkor> jelmer: Whatever suits you best is ok for me. Just let me know what and when I should do to fit the new workflow
[12:46] <matkor> jelmer: I think cut down version of fix is on: https://code.launchpad.net/~matkor/bzr-gtk/trunk-matkor
[12:47] <jelmer> matkor: it's ok to just create a new commit that reverts the abs() change, no need to keep one revision per feature
[12:48] <jelmer> matkor: Thanks, pulled
[12:51] <matkor> jelmer: OK. thank you too :)
[02:25] <ubotu> New bug: #140419 in bzr "selective commit sometimes fails with `parent_id not in inventory` error" [Undecided,New]  https://launchpad.net/bugs/140419
[02:26] <lifeless> Peng: any chance you'll come back? Have you tried the C patience matcher - its 10 times faster at diff, which is what was hurting you IIRC.
[02:27] <lifeless> good night all, I'll look in scrollback :)
[02:33] <Peng> lifeless: Don't worry, I still like Bazaar. But I'm planning to stick with Mercurial for this because converting the branch back would be a pain, Bazaar probably isn't going to beat it in performance any time soon, and I like being able to copy files.
[02:35] <Peng> lifeless: But I am curious to see how much faster Bazaar is, and I'm planning to try it out eventually.
[02:49] <jelmer> imho, working tree performance is now acceptable for large trees but network performance is still not quite there yet
[02:50] <Peng> In my case, network performance doesn't matter much.
[03:40] <ubotu> New bug: #140432 in bzr "bzr fails with 'iteration over non-sequence'" [Undecided,New]  https://launchpad.net/bugs/140432
[04:37] <Gacha> hi
[04:38] <Gacha> How can I tell bazaar to ignore a directory?
[04:38] <Gacha> when I type bzr ignore "./my/dir/"
[04:39] <Gacha> it says: bzr: ERROR: [Errno 1]  Operation not permitted
[04:41] <jdong> sounds like you have permissions problems in your branch
[04:43] <Gacha> but the  "./my/dir/" is correct?
[04:45] <dato> Gacha: yes, it is
[04:45] <Gacha> the ./ shows that it's from the beginning of the tree, right?
[04:46] <dato> yep
[04:47] <Gacha> I checked permissions, everything seems to be ok
[04:47] <Gacha> I'm using sshmount, maybe that the problem
[04:54] <dato> Gacha: that sounds very very likely
[04:54] <Gacha> ok, I will search for workaround
[04:55] <vila> lifeless: to drop pycurl we need certificate verification. python-2.6 will have it, I'm still looking for a way to backport it.
[05:49] <dato> beuno: that there is not a working tree at all
[05:49] <jelmer> beuno: If there's no working tree remotely, no message will be displayed
[05:49] <dato> beuno: but that's not launhpad specific
[05:51] <beuno> hm, ok, that won't work for me, would a "cron job updating working trees" be too much of a hack?   I'm having all kinds of permission problems using push-and-update plugin
[05:54] <jelmer> beuno: that may result in locking errors while pushing every now and then, but other than that, it should work (won't corrupt any data)
[05:55] <beuno> jelmer, what would be a better approach then?
[05:55] <beuno> (if there is one)
[05:56] <jelmer> beuno: the push-and-update plugin is the only thing I can think of
[05:57] <jelmer> also, locking collisions won't happen very often - only if the cron job is updating the working tree /and/ somebody is pushing changes
[06:00] <beuno> right, thanks jelmer, dato  :D
[06:43] <keir> hmm, the git pack + idx format is quite nice
[08:17] <radix> is there a bug with moving symlinks in bzr 0.90?
[08:17] <radix> I'm getting a "No WorkingTree exists" error
[08:24] <LarstiQ> radix: moving how exactly? There have been some symlink related bugs in the past (and I have no idea what happened since the start of August)
[08:24] <LarstiQ> hi, btw
[08:24] <radix> yo :)
[08:24] <radix> hmm, having trouble reproducing
[08:24] <radix> maybe it has something to do with the fact that the target of this symlink is outside of the branch
[08:24] <radix> and in fact it starts with a ../
[08:25] <radix> yeeeah, that seems to be the problem
[08:25] <radix> or at least, I get a similarish error in my repro environment
[08:26] <radix> http://rafb.net/p/TTFEQe32.html
[08:26] <radix> (../foo/bar doesn't exist at all)
[08:27] <LarstiQ> mkay
[08:39] <james_w> hi LarstiQ.
[08:40] <LarstiQ> heya james_w
[08:40] <james_w> LarstiQ: how are you?
[08:40] <LarstiQ> james_w: ok I think, how about you?
[08:40] <james_w> I'm fine thanks. How was your summer?
[08:40] <dato> LarstiQ: hey, long time no see, I think
[08:41] <james_w> hi dato
[08:42] <dato> hi james_w. I tried a build of bzr-builddeb to upload it yesterday, but it died with some error I can't remember.
[08:42] <LarstiQ> dato: aye, dropped off the internet at the beginning of August
[08:42] <LarstiQ> james_w: quite good, except for a horrible August
[08:43] <james_w> LarstiQ: oh dear. Is September any better for you so far?
[08:43] <james_w> dato: ah, thanks for the warning. I'll try and build now.
[08:43] <LarstiQ> james_w: much improved it is, yes
[08:44] <james_w> radix: yeah, I think that is reported already. The bug log kind of shows why it has not been fixed yet.
[08:44] <radix> can you tell me which bug it is? I tried searching but didn't find anything.
[08:48] <james_w> bug 32669
[08:49] <ubotu> Launchpad bug 32669 in bzr "Adding a symlink to another branch fails" [Medium,Confirmed]  https://launchpad.net/bugs/32669
[08:49] <james_w> has many bugs with symlinks in it
[08:49] <james_w> I think the last is your case.
[08:51] <james_w> bug 48444 is another
[08:51] <ubotu> Launchpad bug 48444 in bzr "Symlinks to repository branches don't work" [Medium,Confirmed]  https://launchpad.net/bugs/48444
[08:52] <james_w> and bug 124859 another. They all deal with when to dereference a symlink and when not to.
[08:52] <ubotu> Launchpad bug 124859 in bzr "incorrect repository detected with symlink to a branch" [Critical,Triaged]  https://launchpad.net/bugs/124859
[08:52] <james_w> LarstiQ: glad to hear it.
[08:53] <dato> james_w: btw if you could change the b-dep on python-all-dev to just python-all, that'd be cool (I'd personally stick to python only, but if you want to run the testsuite under all supported versions, that's of course fine as well)
[08:54] <james_w> dato: thanks, I'll do that. I'm not sure why I had that to begin with.
[09:04] <beuno> ok, a question for the brave, I have the following situation:   /dir1, /dir1/subdir1 and /dir1/subdir2.   I would like to have a repository in all of them, the top directory, and the subdirs too
[09:04] <beuno> what would be the best approach?
[09:04] <sri> howdy
[09:05] <beuno> because if I push to the subdirs, the main dir's repo will need committing too
[09:06] <james_w> beuno: do you mean repository or branch?
[09:06] <beuno> james_w, both I think, all of em will have working trees
[09:07] <james_w> so you are not talking about shared repositories?
[09:07] <beuno> I'm not sure  :D
[09:07] <james_w> I don't see why a push to the subdirs would require a parent commit, unless you are talking about versioning the files in the subdirs twice, once in the subdir and once in the parent.
[09:09] <beuno> james_w, well, I was sort of hoping I could pull either all of it, or individually, as needed
[09:10] <james_w> beuno: that sounds like a use case for nested trees.
[09:10] <beuno> and if that's not possible, I guess I can just add those dirs to .bzrignore, and have them branched individually
[09:10] <beuno> james_w, yes! nested trees sounds like what I want, is that already working?
[09:12] <james_w> the internal support is there if you upgrade the branches. However the UI is hidden commands, not polished, and missing functionality for some commands I believe.
[09:13] <beuno> ok, so it doesn't sound like something I want to implement company-wide
[09:14] <james_w> no, I would say that it is not ready for that.
[09:15] <beuno> james_w, great, I'll go for ignoring them then, thanks
[09:43] <james_w> phanatic: hi. In gdiff I would expect "Complete diff" to show the whole diff in the other window, is there a reason why it doesn't?
[09:44] <phanatic> james_w: must be a bug, i've noticed that as well. could you report it, so we can keep track of its status?
[09:50] <james_w> sure, launchpad I assume?
[09:52] <james_w> annotate says that it has been that way since it was added.
[10:07] <phanatic> james_w: thanks for the report
[10:07] <phanatic> it used to show the complete diff
[10:07] <james_w> phanatic: no problem. Thanks for the project.
[10:09] <james_w> phanatic: ah, looking again, it might be API change in show_diff_trees, if specific_files=[]  used to mean all files and now means none then that would probably cause it.
[10:10] <phanatic> james_w: thanks for tracking it down
[10:16] <james_w> phanatic: yeah, it looks like None might be more appropriate.
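The suspected regression — `show_diff_trees` treating `specific_files=[]` as "no files" where it once meant "all files" — can be mimicked with a tiny standalone filter. `select_files` here is a hypothetical stand-in for that behaviour, not actual bzrlib code:

```python
def select_files(all_files, specific_files=None):
    """Mimic the suspected semantics: None selects every file,
    while an empty list selects nothing (hypothetical stand-in)."""
    if specific_files is None:
        return list(all_files)          # None -> diff the whole tree
    return [f for f in all_files if f in specific_files]

files = ["setup.py", "README", "bzrlib/diff.py"]
print(select_files(files))              # -> all three files
print(select_files(files, []))          # -> [] : the "Complete diff" bug
print(select_files(files, ["README"]))  # -> ["README"]
```

Passing `None` rather than `[]` for "everything" is the fix james_w lands on below.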
[10:17] <phanatic> james_w: this may some rude, but if you have the time, could you create a patch, please?
[10:17] <phanatic> s/some/sound
[10:17] <james_w> phanatic: no problem, I was just about to offer.
[10:17] <james_w> where do you want it?
[10:17] <james_w> and do you have a testsuite?
[10:17] <lifeless> moin
[10:18] <james_w> hi lifeless
[10:18] <phanatic> james_w: we don't have a testsuite for gui bits :( attaching a bundle or branch to the bug report would be awesome
[10:18] <phanatic> morning lifeless
[10:23] <james_w> phanatic: done.
[10:28] <phanatic> james_w: thanks, merged :)
[10:29] <james_w> thanks phanatic
[10:29] <lifeless> hi phanatic
[10:30] <lifeless> LarstiQ: how is subtrees coming ?
[10:48] <james_w> jelmer: the dev branch now has your requested $UPSTREAM_VERSION.
[10:49] <jelmer> james_w: wow, that was quick (-: Thanks, I'll try it out tomorrow
[10:49] <james_w> jelmer: it's actually got a couple of tests, so it might even work.
[10:51] <james_w> jelmer: it's not flexible, it just solves your exact case, so let me know if you think it should be expanded.
[11:02] <keir> lifeless, still around?
[11:03] <keir> lifeless, i have most of the code written short of the actual graphindex wrapper; however, it's grown somewhat complicated
[11:03] <keir> lifeless, i've been studying git's pack format and index. i feel like we should move in that direction.
[11:10] <asabil> abentley: I don't get your first argument concerning bzr description?
[11:10] <asabil> bzr nick doesn't support referring to a remote branch
[11:11] <asabil> so why does bzr description need that?
[11:16] <schierbeck> phanatic: i've pushed a fix for the bug you mentioned :)
[11:16] <phanatic> schierbeck: great, i'll check it out :)
[11:18] <phanatic> schierbeck: works like a charm, i'll merge it
[11:18] <schierbeck> cool
[11:19] <schierbeck> phanatic: when's the release due?
[11:22] <phanatic> schierbeck: this week (probably the weekend)
[11:23] <lifeless> keir: so what does git do differently (other than having fixed length keys) in its index ?
[11:24] <keir> they have a fixed length 256 entry index at the top
[11:24] <keir> which works because of the fixed length keys
[11:24] <keir> then it's a sorted list of the entries. since they are also fixed length it works for direct bisection
[11:25] <keir> what i'm thinking, is why not just move to a sha1 based system?
[11:25] <keir> pack the current keys into the pack and index them by sha, as with everything else
[11:25] <keir> it would be transparent to the upper layers
[11:25] <keir> we can still make the packs toposorted (for storing graphs)
[11:26] <lifeless> so the 256-way fan at the top is equivalent to your key index index
[11:27] <keir> yes
[11:27] <keir> but since it's storing keys, there is no bisection; it's a straight jump to the right place
[11:27] <keir> in my index, you have to bisect it because of the var length keys
[11:27] <lifeless> uhm
[11:27] <keir> sorry, not keys, sha hashes
[11:27] <lifeless> you still have to bisect
[11:28] <lifeless> fixed length means you can bisect on record number rather than bytes
[11:28] <keir> no, you just take the first byte and hop. the table stores the cumulative sum of the keys
[11:28] <lifeless> thats all
[11:28] <lifeless> say there are 256000 keys
[11:28] <lifeless> sha's are homogenous
[11:28] <keir> exactly.
[11:29] <lifeless> how will it be direct rather than bisection - the 256 fan out still leaves you 1000 entries to select amongst
[11:29] <keir> lifeless, yes, you have to bisect amongst those 1000
[11:29] <abentley> asabil: All commands that can refer to branches should be able to refer to remote ones.  It's arguably a bug that "nick" cannot.
[11:29] <keir> but the first jump is direct
[11:29] <keir> instead of bisecting the table
[11:29] <lifeless> keir: its a partial radix tree is all
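The lookup keir describes — a direct hop through a 256-entry cumulative table keyed on the first byte, then bisection only within that bucket — can be sketched over sorted byte-string keys. This is a simplified model of git's pack index layout, not bzr or git code:

```python
import bisect

def build_fanout(sorted_keys):
    """fanout[b] = number of keys whose first byte is <= b,
    i.e. the cumulative counts git stores in its 256-entry table."""
    fanout = [0] * 256
    for key in sorted_keys:
        fanout[key[0]] += 1
    for b in range(1, 256):
        fanout[b] += fanout[b - 1]
    return fanout

def lookup(sorted_keys, fanout, key):
    """Direct hop via the fanout table, then bisect inside the bucket."""
    first = key[0]
    lo = fanout[first - 1] if first > 0 else 0
    hi = fanout[first]
    i = bisect.bisect_left(sorted_keys, key, lo, hi)
    return i if i < hi and sorted_keys[i] == key else None

keys = sorted([b'\x00abc', b'\x01def', b'\x01xyz', b'\xffzzz'])
fan = build_fanout(keys)
print(lookup(keys, fan, b'\x01def'))   # -> 1
```

As lifeless notes, this is just a one-level radix tree over the first byte: the fan-out narrows the bisection range by ~256x but does not eliminate it.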
[11:30] <asabil> abentley: I see, then how to fix the bug in nick ?
[11:30] <keir> they also have some other nice things, such as crc's in the packs
[11:31] <lifeless> keir: for each entry? or for the whole pack? we can trivially add that, but we have higher level sha's anyway
[11:31] <abentley> You don't have to fix nick, just "description".  If you want to fix nick, you would add a "-d" or "-f" option to it that can take branch.
[11:31] <lifeless> and crc's in each hunk at the moment
[11:31] <lifeless> so we'd be duplicating the crc processing if we did it the 'obvious' way
[11:31] <keir> lifeless, i guess they found corruption due to HW was real enough to add a 2nd table of crc32's for each object in the pack
[11:31] <keir> lifeless, i was picking the brains of #git
[11:32] <lifeless> ok, so my thoughts are...
[11:32] <lifeless> we have crcs for everything in the packs today.
[11:32] <lifeless> its layered differently to git but its there
[11:32] <lifeless> (we also have shas)
[11:32] <lifeless> so it's really a no-op to move crcs from place A to place B at this point in time.
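The two integrity checks being compared — per-object CRCs (what git's v2 index adds) versus content SHAs (already present at a higher layer in bzr's format) — are both in the Python standard library. A minimal illustration, not tied to either pack format:

```python
import hashlib
import zlib

objects = [b"first hunk of pack data", b"second hunk"]

# Per-object crc32, as git's v2 pack index stores alongside each entry:
crcs = [zlib.crc32(obj) & 0xFFFFFFFF for obj in objects]

# Content sha1, the higher-level integrity check:
shas = [hashlib.sha1(obj).hexdigest() for obj in objects]

for obj, crc, sha in zip(objects, crcs, shas):
    print(f"{crc:08x}  {sha}  {obj!r}")
```

Either check detects the hardware corruption keir mentions; carrying both per object is the duplication lifeless is pointing at.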
[11:33] <keir> ok
[11:33] <lifeless> as for the index, their index doesn't sound better to me
[11:33] <keir> what i don't like about my index, is that it's complicated because of the variable length
[11:34] <keir> now that i support short 'tags' for backpointers to the full length key (rather than the offset to the key), the code isn't so simple
[11:34] <lifeless> we're studying changing some parts of the core to have fixed length keys, but that is very deep work
[11:34] <keir> what i was thinking, is to move the fixed-length-ness to be an implementation detail which is not known about above
[11:35] <lifeless> I certainly don't understand how you would do a radix tree to find blocks of 1000 entries (or whatever) and keep topological sorting
[11:36] <keir> git toposorts the contents in the packs
[11:36] <keir> but the index is sorted by sha
[11:37] <radix> :(
[11:37] <lifeless> yes, I know
[11:37] <lifeless> we have looked at git quite closely you know :)
[11:37] <keir> true
[11:37] <lifeless> and for git this works because they operate on local disk
[11:38] <lifeless> as soon as you say 'index operations are remote' the ballgame changes
[11:38] <keir> yes, i noticed that their network perf can't be that amazing compared to what i have planned
[11:39] <keir> my concern with my current code is that building is going to be slow for really large trees
[11:39] <lifeless> ok
[11:39] <lifeless> so there are a range of possibilities
[11:40] <lifeless> possibly the tradeoffs you chose for your design aren't quite right, and the design is driving code complexity
[11:40] <keir> i'm confident i can make a fast C version though.
[11:40] <keir> yeah
[11:40] <keir> do you think you could review the current code? even though it has lots of debug prints.
[11:40] <lifeless> your suggesting moving to something like the git index is a reflection of that
[11:40] <lifeless> sure
[11:41] <keir> i generally feel that simple data structures are important
[11:41] <keir> unless there's a compelling reason to be otherwise
[11:41] <lifeless> so the git index is describable as 'bisection in 1/256 of the file'
[11:41] <keir> yes
[11:41] <lifeless> so a 256M index is 'bisection in 1M' to find a single key
[11:41] <keir> they have a 2ndary index in version 2 which is bigger
[11:42] <lifeless> and to grab (say) 50 keys over the wire to reconstruct a single text - download 50M
[11:42] <lifeless> with a secondary index - presumably extending the radix tree - you can reduce this
[11:42] <keir> with our data, it's really not clear how to efficiently do a radix tree
[11:42] <lifeless> I think simple data structures are very appealing; but they have to do the job :)
[11:43] <keir> absolutely
[11:43] <keir> ok, i'll go add some comments and push my code
[11:43] <lifeless> so LPC trees will eat up our data trivially
[11:43] <lifeless> erm, LPC Tries
[11:44] <keir> what is lpc?
[11:44] <lifeless> level path compressed
[11:44] <lifeless> no nodes where there is no split in keys, variable size nodes so every node is always very close to fully populated
[11:45] <lifeless> but they may not do the right thing network wise; I'm just noting their properties in terms of our keyspace
[11:45] <keir> yes
[11:45] <lifeless> there's a paper on citeseer if you are interested
[11:46] <lifeless> also hash tries might be relevant, if you want to dig
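The path-compression property lifeless describes — "no nodes where there is no split in keys" — can be sketched as a toy radix trie. This omits LPC's level compression and variable-size nodes entirely; it only shows edges carrying whole substrings so single-child chains collapse:

```python
class RadixNode:
    """Toy path-compressed trie node: each edge label is a whole
    substring, so a run of non-branching characters is one edge."""
    def __init__(self):
        self.edges = {}      # label -> RadixNode
        self.is_key = False

def insert(root, key):
    node = root
    while key:
        for label in list(node.edges):
            # longest common prefix of key and this edge label
            n = 0
            while n < min(len(label), len(key)) and label[n] == key[n]:
                n += 1
            if n == 0:
                continue
            if n < len(label):           # split the edge at the divergence
                old = node.edges.pop(label)
                mid = RadixNode()
                mid.edges[label[n:]] = old
                node.edges[label[:n]] = mid
                child = mid
            else:
                child = node.edges[label]
            node, key = child, key[n:]
            break
        else:                            # no matching edge: new leaf
            leaf = RadixNode()
            leaf.is_key = True
            node.edges[key] = leaf
            return
    node.is_key = True

def contains(root, key):
    node = root
    while key:
        for label, child in node.edges.items():
            if key.startswith(label):
                node, key = child, key[len(label):]
                break
        else:
            return False
    return node.is_key
```

With variable-length revision ids as keys, branching is sparse and deep, which is why keir calls this a bandaid below rather than a cure for variable-length keys.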
[11:46] <keir> i agree, that given our keyspace, it may be the right thing. but i feel like it's just a bandaid to cover that we have variable length keys
[11:46] <lifeless> well
[11:46] <lifeless> for now variable length keys are axiomatic for all indices
[11:46] <lifeless> I doubt that we will *ever* change that for the revision and signature indices
[11:46] <lifeless> we *may* change that for the text and inventory indices
[11:48] <lifeless> I'd also like to note that postgresql, sqlite etc do a fantastic job indexing variable length strings
[11:48] <keir> ok.
[11:48] <keir> actually that may be worth looking into
[11:48] <keir> but i suppose they have different constraints
[11:48] <lifeless> mysql too
[11:49] <lifeless> I was chatting with David Axmark about this sort of thing actually
[11:49] <lifeless> mysql optimisation both in the server code and in how you design the db is all about latency
[11:49] <lifeless> going to disk hurts performance hugely
[11:49] <lifeless> (because their data sets are so big, disk latency is measurable, like network for us)
[11:49] <abentley> jelmer: ping
[11:50] <jelmer> abentley: pong
[11:50] <abentley> I've looked at the BB source, and it all says it's under the GPL 2+.
[11:50] <abentley> The only thing lacking I could see was a copy of the GPL 2.
[11:50] <jelmer> abentley: I was looking for a LICENSE file of some sort
[11:51] <keir> lifeless, pull my branch bzr+ssh://mierle@bazaar.launchpad.net/~mierle/bzr/compactgraph/
[11:51] <jelmer> abentley: though I guess the chances that you are referring to some other GPLv2 are slim ;-)
[11:51] <lifeless> keir: anyhow, I will happily eyeball the code and give you some feedback.
[11:51] <keir> lifeless, i'm adding comments now but this way you can take a look
[11:51] <abentley> Well, if that's all you were looking for, it's an easy fix.
[11:52] <keir> lifeless, also i added the entertaining ability to compress the index blocks with repr() :P
[11:52] <lifeless> keir: dude, thats so wrong
[11:52] <keir> :)
[11:52] <lifeless> it will play merry hell with people debugging
[11:52] <keir> i thought it was hilarious
[11:53] <keir> mainly i wanted to make sure swapping out block compression methods worked
[11:54] <lifeless> oh I have more sample data
[11:54] <lifeless> http://people.ubuntu.com/~robertc/fbd6843a48261ccf6291451e0799d06f.tix
[11:55] <lifeless> thats a current-formatted copy of the mozilla text index
[11:55] <lifeless> as opposed to the old 0.tix that needed editing to be usable
[11:55] <keir> cool
[11:56] <keir> is there a wiki page which points to the pack instructions? i don't have the link i saved before, and it's just about useless to search for it on google. for whatever reason google is not good at searching gmane.org.
[11:57] <keir> lifeless, see compact_graph.py and test_compact_graph.py
[11:57] <lifeless> if you google for pack dogfooding bzr
[11:57] <lifeless> the first hit was one of my mails
[11:58] <lifeless> so follow that, then you can hop to the quarter index page
[11:58] <lifeless> and search for [PACKS]  on that page
[11:59] <jelmer> abentley: that'd be nice to have in
[11:59] <abentley> jelmer: see revno 206