[00:12] lifeless: What sort of thing would you like me to put in the developer guide? [00:14] And why this, rather than the many other things we're constantly adding to the codebase? [00:25] abentley: because I remembered to think about it. [00:27] Well, it seems like it would be onerous to do this for everything. [00:27] But where should it go, and what should it say? [00:28] It doesn't seem to belong under "domain classes". [00:29] Also, I've just found an optimization that gets it about as fast as the old implementation. [00:30] On knits. [00:31] cool [00:35] ok 1.0rc1 debs are up; including dapper [00:37] Oh, cool. [00:38] lifeless: Can you give me some guidance on the documentation? I'm otherwise ready to re-submit. [00:39] abentley: one sec, email frenzy [00:39] Okay. [00:40] lifeless: no bzr-svn debs ? would be nice to have them up, at least for gutsy [00:46] jelmer: hmm, I'll see how far I get; I go on leave in two days [00:46] abentley: ok, with you now. [00:47] Fire away. [00:47] so if the answer is 'there are no developer docs about merge at the moment', then you're good - don't worry. [00:48] starting something that covers the design, motivation, tricks n tips etc for people writing to the collection of apis that combine to make up 'do merge' would be nice at some point. [00:48] I don't remember writing any, so there probably aren't any. [00:49] I can't see any at a glance in doc/developers [00:49] * lifeless -> food [00:49] Okay. Thanks. === kiko-afk is now known as kiko-zzz === mneptok_ is now known as mneptok [01:38] igc: Hi. [01:39] hi abentley [01:39] Would you be able to review my criss-cross documentation? http://bundlebuggy.aaronbentley.com/download_patch/%3C47505D5B.9030702@utoronto.ca%3E [01:40] jam was a bit concerned about the style. [01:40] abentley: sure [01:41] The bit about merge --weave being slow on packs may be obsolete soon. [01:41] ok === doko_ is now known as doko [02:31] Could someone mark https://bugs.launchpad.net/bzr/+bug/172612 as fixed? The patch is in bzr.dev now. [02:31] Launchpad bug 172612 in bzr "new commit is overly verbose" [Low,Confirmed] [02:38] done [02:43] Thanks. [02:51] bbs [03:00] lifeless: have you written up anything in English describing the scenario stuff in bzrlib.tests? [03:03] * igc lunch [03:25] jml: no [03:30] jml, i know a bit about it, i can possibly describe it in documentation [03:30] in fact, i did write something in HACKING, you could look at that [03:30] poolie: I'll take a look. [03:30] I should finish my in-file tweaks. [03:30] questions welcome [03:30] might do that tomorrow === me_too is now known as turbo0O === turbo0O is now known as me_too [06:18] poolie: patch sent [06:29] how do you actually use DVC to commit? [06:31] add a log entry and then hit C-c C-c in the log buffer apparently. [06:31] seems kind of backwards. [06:34] jml: you can hit 'c' in a dvc-diff or dvc-status buffer, it will open a buffer for the commit message and there you can hit C-c C-c [06:35] vila: ahh, ok [06:35] vila: I guess seeing as I review a diff before I commit anyway, that makes sense. [06:36] jml: yeah the dvc-diff buffer is handy, since it's based on diff-mode you can also very easily revert any hunk by using C-c C-a or mark the files for selective commit === quicksil1er is now known as quicksilver === mrevell is now known as mrevell-sprint [11:55] hi, is there a document that covers running a bzr server using sshd?
[11:57] you don't need to run a bzr server there [11:57] just install bzr on the server and bzr will run it on the remote side over ssh [11:58] luks: is there a way to limit the ssh access on the server? [11:59] luks: I don't want to allow full ssh access on my machine. [12:00] um, I'm not sure [12:01] for my subversion server I use the command option in the key configuration: command="/usr/bin/svnserve -R -t -r /srv/subversion" [12:05] ooh, you are releasing 1.0 today? congratulations! [12:19] glatzor: you can use something similar for bzr too [12:20] i think it's "bzr serve --inet --allow-writes", but i'm not completely sure [12:20] --directory=/ too. [12:21] (And maybe even something else that gets cut off the edge of the screen.) [12:45] jelmer: sorry, I meant the trunk of 0.4, so "bzr-svn-0.4" branch. [12:45] glatzor: You want to allow bzr access, but not normal shell access? A restricted shell like rsh can do that. [12:48] dato: I would be interested to hear why the trunk version broke [12:48] jelmer: lemme pull and try again. what I saw a couple of days ago is that it didn't do anything after parsing the dump file. [12:59] mwhudson: Peng: so I would need a user/key for every repository [13:10] glatzor: No. [13:10] glatzor: Create as many users as you want. They just all need the proper file permissions on the repo. === kiko-zzz is now known as kiko [14:11] jelmer: there must be something weird in my user setup; as my user, svn-import does nothing, as a newly created user it works. [14:13] dato: does removing ~/.bazaar/subversion.conf help/ [14:13] ? [14:14] lemme check [14:15] yes === cprov is now known as cprov-lunch === mw|out is now known as mw === kiko is now known as kiko-fud [15:03] abentley: ping [15:25] jelmer: so, do you really recommend to use rich-root or rich-root-packs, instead of -with-subtrees ? [15:25] dato: I don't know that jelmer specifically does, but it was a compromise between him and Aaron [15:26] -subtrees is still considered experimental, because it enables nested tree detection [15:26] dato: yep, I do [15:26] right. so I have to choose between compatibility with < 1.0, using an experimental feature, or >1.0 with non-experimental stuff. [15:27] * dato ponders, as this branch is for other people to use. [15:30] oh. [15:30] you cannot `bzr upgrade` -subtrees to rich-root, but you can pull from -subtrees into a rich-root repository :-? [15:42] jam-laptop: ping, who uses bundle format v4 ? It requires seeking backwards (or forces me to wrap http.get() into a StringIO) :-/ [15:43] vila: are you working on the ability to apply remote bundles ? [15:44] jelmer: no, on the ability to remove wrapping all http requests into StringIO [15:44] vila: I want to say that v4 is the newest form, so we *all* use them.
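[Editorial sketch of the restricted-access setup discussed at [11:57]-[12:21], mirroring glatzor's svnserve key configuration: force a single command in the server-side ~/.ssh/authorized_keys so the key can only ever run the bzr smart server. The path /srv/bzr and the key material are placeholders; the entry must really be one line, wrapped here for readability.]

    # ~/.ssh/authorized_keys on the server (one line per key)
    command="bzr serve --inet --directory=/srv/bzr --allow-writes",
        no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
        ssh-rsa AAAA...key-material... user@host

Clients then use ordinary bzr+ssh:// URLs; sshd runs the forced command instead of a shell, so the account gets bzr access without general shell access.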
=== jam-laptop is now known as jam [15:44] however, I haven't paged through it thoroughly [15:44] I know Andrew was working on making them more of a producer + consumer [15:44] so he could stream chunks as part of the response [15:45] def get_bundle_reader(self, stream_input=True): [15:45] self._fileobj.seek(0) [15:45] return BundleReader(self._fileobj, stream_input) [15:45] Invalid range access in http://localhost:52409/test_bundle at 28: RangeFile: trying to seek backwards to 0 [15:45] That's the only remaining failing test :-( [15:46] vila: let me check [15:50] vila: the only code that seems to be using that is the bundle code itself [15:50] I'm guessing that BundleInfoV4 reads some of the header, etc to make sure that it really is a bundle [15:51] and then passes the stream into the BundleReader class [15:51] and the BR class was written to start from the beginning [15:51] But I notice that the first thing it does is chomp the first line off the file [15:51] So it is possible that we could just pass a "first line read" flag [15:52] vila: in general, I don't see why it needs to seek, other than maybe multiple calls to get_bundle_reader? [15:52] vila: which specific test? [15:53] TestReadBundleFromURL.test_read_bundle_from_url(HttpServer_urllib) [15:56] the strange thing is that all comments referring to stream_input say stream_input=False is currently faster but everywhere we use stream_input=True... [15:56] vila: that is because = False has to buffer the whole file in RAM [15:57] yeah I know ;-) [15:57] so we chose to be memory conservative [15:57] I also wonder how efficient IterableFile is === cprov-lunch is now known as cprov [15:58] vila: ok, so it seems we have already read the whole file by the time get_bundle_reader() is being called. [15:58] I haven't figured out why yet [15:58] but that is why the seek is needed [16:00] well, I will punt on that one and mark the test as KnownFailure so that we can discuss that during review [16:00] ok [16:01] I think it occurs because transport.get *seems* to always return seekable files while ftp always returns StringIOs and http did too [16:01] vila: we always returned seekable files [16:01] because gzip required it [16:01] gzip.GzipFile [16:01] expects to be able to tell() and seek() [16:01] I think tuned_gzip does not [16:01] seek in both directions ? [16:01] seek backwards, yes [16:01] yes [16:01] it would read a bit [16:02] and then put it back if it didn't like what it found [16:02] but I think that has been worked around in tuned_gzip [16:02] which predictively reads [16:02] but keeps a small buffer [16:02] the test suite triggers nothing related so you should be correct [16:02] vila: do you know pdb very well? [16:02] In gdb there is a way to go until you go up a frame [16:03] (something like Finish) [16:03] but I don't see that in pdb [16:03] r? [16:03] jam: can't answer that ;-) [16:03] ah, thanks radix [16:03] you mean 'r' [16:03] radix: I swear I didn't read your reply ;-) [16:03] hehe :) [16:04] up is handy too to inspect callers contexts [16:05] vila: I think we need it.... [16:05] vila: yeah, I know about up [16:05] I just needed it for all the times we nest function calls [16:05] foo(bar(baz())) [16:05] I want to step into 'foo' [16:05] but it steps me into the others first [16:05] and I don't want to "n" my way out [16:06] list [16:06] sorry [16:06] vila: so....
the bundle code needs to read at least the start of the file [16:07] so that it can get the first line [16:07] to determine the bundle format [16:07] and return an appropriate serializer [16:07] yeah, got that, but then doing a seek to the beginning of the file is *rude* [16:07] well, in my tiny bit of testing [16:07] we do: [16:07] for line in f: [16:07] ... [16:08] which we wanted to do so you could embed a bundle in the middle of an email [16:08] and still have it parsed [16:08] (so we ignore everything until we get to the end of the file) [16:08] anyway [16:08] even though "for line in f" has only yielded a single line [16:08] f.tell() [16:08] returns the end of the file [16:10] vila: anyway, I have the feeling that you will break merging from bundles if we cannot seek [16:10] so we won't be able to do "bzr merge http://bundlebuggy...." which is something that *I* do. [16:10] And I believe Aaron does as well [16:10] yeah, I will break something, sure ;-) [16:10] The question is how will we repair it :) [16:11] well, for starters, you are just trying to get rid of the StringIO() wrapper in HTTPTransport.get() [16:11] So far we *pretend* that we stream to be memory conservative *but* we do that by wrapping the url content into a StringIO.... Excuse me ? [16:11] I'm not sure that is necessary for readv() support [16:11] I agree it is distasteful [16:12] but it is probably okay in the short term [16:12] we generally don't do get() on big files [16:12] indexes, bundles, etc. [16:12] Everything else we use readv() on [16:12] Not necessary, you're right, I will rollback my last change and leave http.get() wrapping into a StringIO and be done with it, just a big red flashing FIXME ;-) [16:14] vila: thanks [16:14] If you are doing anything fancy, make a note of it [16:14] but I imagine you are just removing the StringIO wrapper [16:14] from what I can tell [16:14] for line in file: [16:14] Well, the FIXME was already there: # FIXME: some callers want an iterable... One step forward, three steps backwards :-/ [16:14] buffers at 8192 pages [16:15] I'm curious if readline() in the middle would still work correctly [16:15] I will add a better explanation [16:15] jam: in my case, it's a socket, and yes, read/readline share the same buffer [16:15] weird, the first real read is 8192 [16:15] but the second is 10240 [16:15] could be a read(-1) [16:16] and then 10240 from there [16:16] which can of file/socket ? [16:16] sure, I'm just commenting on the "open()" (file object) behavior [16:16] s/can/kind/ [16:16] and doing "for line in file" [16:16] ok [16:16] start = 0 [16:17] for line in f: [16:17] start += len(line) [16:17] print start, f.tell() [16:17] Gives interesting results [16:17] f.tell() is definitely being read in chunks of (8192, 10240, 10240, ...) [16:17] for a random text file ? [16:18] just something on disk [16:18] yes [16:18] a 238k file in this case [16:19] vila: anyway, if I was to propose a fix [16:19] it would be to change the bundle reading code [16:19] so that it reads a minimal amount of data [16:19] so it can get the header [16:19] and then it passes the header [16:19] along with the file object [16:20] to the serializer which supports that header [16:20] jam: +1 but I will not be able to do that in the coming days (new project starting in RL) [16:20] vila: I'll write up a bug on it [16:20] I search a bit in that direction but saw nothing obvious and came here for help [16:20] jam: thanks [16:23] s/search/searched/ [16:23] vila: does http://dpaste.com/26608/ sound good?
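[A runnable version of the buffering experiment jam sketches at [16:16]-[16:17]; the file path is a placeholder. It shows why the bundle reader can't just remember how far it has parsed: after one line, the underlying position is already at the end of the read-ahead buffer, hence the seek(0).]

    # "for line in f" reads ahead in large chunks, so f.tell() reports a
    # position well past the bytes the loop has actually consumed.
    consumed = 0
    f = open('some_large_text_file')    # placeholder path
    for line in f:
        consumed += len(line)
        print consumed, f.tell()        # tell() jumps in ~8192/10240-byte steps
    f.close()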
[16:23] vila: (re our conversation the other day) [16:24] s/to remote hosts/& with sftp/ [16:24] s/is useful to ensure SSL certificates are always verified/is required to verify SSL certificates/ [16:24] vila: didn't you tell me it was also needed for bzr+ssh? [16:24] dato: rats, yes, but that may well be a bug [16:25] jam: did you read the IRC logs ? It looks like smart/medium.py requires paramiko, aaron and I thought it should be able to use openssh only [16:25] vila: at one point, we could start a bzr+ssh connection without paramiko [16:25] if we can't [16:26] then it is a regression [16:26] kind of hard to test for... [16:26] jam: look at smart/medium.py I didn't test [16:27] vila: I think it is a bad comment [16:27] It used to "from bzrlib.transport import sftp" [16:27] which *does* need paramiko [16:27] 'ssh' should not [16:28] I'm a little concerned that we use "paramiko.SSHException" in the connect_sftp code [16:28] such that if we don't have paramiko, and we fail to connect [16:28] we will get an AttributeError exception [16:28] vila: but other than that, everything should work without paramiko installed [16:28] dato: as jam said ;-) [16:30] dato: so forget my first comment, but please use the second one, it's the only remaining bit that urllib does not handle today and it looks like for some people it's more a problem than a help [16:31] so there are really two kinds of users, those that want certificate verification and those that absolutely don't want them [16:31] vila: ok. re "more a problem than a help" that's what I wanted to reflect in my wording, like "only install it if you're really sure you want such verification" [16:31] vila: I just removed paramiko from my system [16:31] and "bzr log bzr+ssh:// " worked fine [16:31] great [16:31] oh, good [16:32] I'll post a simple patch to the list [16:34] hmm... it is only in connect_sftp() and our sftp subsystem *does* require paramiko to be present... === kiko-fud is now known as kiko [16:38] dato: the reason why we went with Requires for paramiko (IIRC) was that people expected sftp support out of the box [16:38] Since we now at least have bzr+ssh, I suppose we could relax it a bit [16:39] paramiko has always been in Recommends AFAICR, but tools are supposed to install recommends by default. my question the other day was whether to demote pycurl to suggests, which I'm doing right now. [16:40] dato: for Ubuntu, I'm pretty sure it was set to Required [16:40] but I could be remembering that incorrectly. [16:56] New bug: #173689 in bzr "BundleReader cannot stream" [Wishlist,Triaged] https://launchpad.net/bugs/173689 [18:01] dato: the parallel build check showed that some of bzr refuses to build in parallel, at least the docs. I think this is because rules calls 'make docs', when it should be '$(MAKE) docs'. [18:02] dato: mind if I correct that? A grep didn't show any other uses. [18:02] james_w: please do [18:03] it might actually cause build failures, or even allow a broken package to be built, if a parallel make is used, but I'm not sure how to test. [18:04] dpkg-buildpackage -j2 ? [18:09] hmm [18:09] so for a test, i want to lock a branch with a given lock id [18:09] this seems to be pretty hard [18:09] james_w: if you don't mind, please use dch -t in the future (leaves the footer alone; so it gets added when opening a new version, and when preparing a final upload) [18:10] dato: ok, thanks for telling me. [18:40] mwhudson: for any branch format, or just for a specific one?
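[A minimal sketch of the debian/rules change james_w describes at [18:01]-[18:04]; the target name is illustrative. $(MAKE) lets the sub-make join the parent's jobserver, so `dpkg-buildpackage -j2` parallelizes safely, whereas a literal `make docs` ignores the -j flags.]

    # debian/rules (fragment) -- hypothetical target name
    build-indep-stamp:
    	$(MAKE) docs
    	touch $@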
[18:41] james_w: Hi James, did you get my merge request for bzr-builddeb? [18:41] because you could just override [18:41] branch.control_files._lock [18:41] jam: er, no, not really [18:43] ? [18:43] otherwise, I guess you could override bzrlib.lockdir.rand_chars [18:43] since that is the final lock token [18:45] jelmer: yeah, the first one is fine, the second I want to think about a little. [18:45] jelmer: also is exposing the file properties planned/wanted/not wanted? [18:46] james_w: planned/wanted. It depends on support for custom file properties in bzr core though [18:47] james_w: what are your thoughts on the second one? [18:48] jam: let me try to explain what i'm trying to do [18:48] jelmer: did you see the bzr-svn selftest results? [18:49] jam: obviously, branch puller on launchpad locks branches while they are being updated [18:50] jam: if things go wrong in a particular way, these branches can be left locked [18:50] k [18:50] with you so far [18:51] so what i'm doing is have the puller use a particular lock id [18:51] and you want to make the branch puller use specially formatted nonces? [18:51] (by setting BZR_EMAIL) [18:51] so for a test, i want to lock a branch with the same id [18:51] (just realize that pack repositories don't have lock ids at that level) [18:52] then run the puller and check that the puller still runs [18:52] (it will look for and break such a lock) [18:52] jam: ah, that's interesting [18:52] jam: so how would you solve this problem? :) [18:52] pack repositories don't take out a physical lock at 'repo.lock_write()' [18:52] time [18:52] they take a physical lock only when they go to update the 'pack-names' file [18:52] jelmer: I guess I'll just wait for that, it's not massively urgent to implement this. I suppose I could guess in the meantime as well, and there is always the command line option. [18:53] moin moin [18:53] morning lifeless [18:53] mwhudson: are you trying to get a custom lock token (aka nonce) [18:53] privet [18:53] mwhudson: or are you trying to set the username inside the lock to that specified by BZR_EMAIL [18:53] jelmer: as for the second one, I think it's a fine aim, I just want to work out if it is the most natural way to do it. Your final example command line is great, so I guess it may well be. [18:53] mwhudson: I think you are trying for the latter [18:53] such that peek_lock() will give you back what you want [18:54] versus having token = repo.lock_write() give you back what you want [18:54] mwhudson: do you understand the difference? [18:55] jam: i don't think i care [18:55] bialix: hi [18:55] jelmer: evening [18:55] jam: i want to be able to say "is this branch locked? if so, did i lock it?" [18:56] then, "if all that, break the locks" [18:56] bialix: all tests should actually be succeeding, from what I understood from reports [18:56] james_w: ok [18:56] if a custom nonce is an easier way of achieving that, then maybe that's what i want to be doing [18:56] mwhudson: well the string returned by "branch.lock_write()" is only part of the whole info on disk [18:57] but lock_write() will spin rather than returning right away [18:57] (default timeout of 5 minutes, etc) [18:57] jam: right [18:57] are you trying to avoid that as well? [18:57] (you *can* set bzrlib.lockdir._DEFAULT_TIMEOUT_SECONDS if you want) [18:57] i'd like to, yes [18:58] jelmer: I have a firewall on my machine. do you want me to run tests one more time with disabled firewall? [18:58] mwhudson: so...
Branch.lock_write() will take out a physical lock right away, so it is probably best to think about starting from there. [18:58] my impression is that lockdir.py has all this configurability and flexibility and branches have lock_write() that takes no arguments [18:58] mwhudson: pretty much [18:59] and I assume you are trying to do it without too much surgery into bzrlib itself, right? [18:59] jam: well, i want to deploy this code within the next two weeks, so yes [19:00] mwhudson: pack repositories have no over-the-wire locks at all [19:01] hm, suddenly log in bzr.dev is very slow? [19:01] mwhudson: if the hpss decides to lock a pack repository itself, it will be because the hpss is performing the write. [19:01] lifeless: this is for the mirror puller [19:01] mwhudson: ok; it will take a lock during insertion of the repository for a couple of ms [19:01] dato: with 1.0rc1 or bzr.dev? [19:01] lifeless: so the hpss doesn't come into this [19:01] jam: bzr.dev [19:02] lifeless: it does sound like this is not going to be a problem for packs [19:02] bug #172975, bug #172567 [19:02] jam: logging in bzr.dev [19:02] Launchpad bug 172975 in bzr "bzr log slower with packs" [Medium,Triaged] https://launchpad.net/bugs/172975 [19:02] Launchpad bug 172567 in bzr "Per-file log is *very* slow with packs" [Undecided,Incomplete] https://launchpad.net/bugs/172567 [19:02] (or at least, not a very real one) [19:02] mwhudson: the branch object will have a longer lock held open [19:02] jam: thanks [19:02] dato: I have a workaround in 1.0rc1 [19:02] we are trying to implement the "right fix" for bzr.dev [19:02] ok [19:02] jam: ah it did get merged straight to 1.0 ? [19:02] lifeless: oh, well, we'll have the same problem then [19:02] jam: good [19:02] lifeless: I believe martin put it in the 1.0 branch [19:03] jam: I checked bzr.dev; didn't see it there. I don't know if 1.0's code has the fix or not. And he's off today. [19:03] I'll peek [19:03] jam: anyhow, I have a considerably better idea of how to get my point across :). I might write something up about it [19:04] he did a "merge updates for 1.0" [19:04] jam: did he? good. :) [19:04] but I'm not sure what that included [19:04] (and, of course, bzr log is slower now :) [19:05] lifeless: it looks like my workaround is *not* in 1.0rc1 [19:05] jam, lifeless: so now i'm more confused than i was when i first asked in here :) [19:06] (that i started work 11 hours ago may be related to this) [19:07] mwhudson: so... basically you want to be able to trap when a branch is already locked, peek to see if it was another process that you would have been controlling, and break it automatically when necessary [19:07] jam: yes [19:07] bialix: nah, it shouldn't do any network access at all [19:07] interesting, we don't even use the per-branch config username, just the global one... [19:08] bialix: what version of windows are you on? [19:08] xp sp2 [19:08] xp home actually [19:08] mwhudson: So, you want to depend on bzr's branch locking, but be willing to ignore it :) [19:08] mwhudson: don't we write the pid in the extra metadata? [19:09] when opening a branch [19:09] I would probably monkey patch .lock_write() [19:09] so that it used self.control_files.attempt_lock() [19:09] and if that fails, then you can peek, etc. [19:09] lifeless: not really sure how that helps? [19:09] mwhudson: 'when is it safe to break_lock' ? [19:10] mwhudson: so you know that all locked branches are from other pullers right? 
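[A hedged sketch of the direction jam suggests at [19:09], using the private _lock attribute the chat mentions at [18:41]: try the lock once instead of spinning for the default timeout ([18:57]), then peek at the holder's info. `branch` is assumed to be an already-opened Branch; this leans on private, era-specific bzrlib internals, not a supported API.]

    from bzrlib import errors

    lock = branch.control_files._lock   # private: the underlying LockDir
    try:
        lock.attempt_lock()             # one shot, no 5-minute retry loop
    except errors.LockContention:
        info = lock.peek()              # info about the current holder, or None
        # inspect info (hostname, pid, nonce, ...) and decide whether to break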
[19:10] mwhudson: when you find a locked branch, you can check the host name, see if it is your current host, and then check to see if the pid is still alive [19:11] jam: it has to always be the current host [19:11] if host == 'localhost' and pid is not present (as given by ps, or whatever) [19:11] then you know you have a stale lock [19:11] umm, is there some kind of semantic difference between RevisionSpec.in_branch, in_store and in_history? [19:11] jam: the public area is not directly writable by users [19:11] they are all aliases, but I guess each was meant for something different? [19:11] lifeless: well, no not necessarily, sometimes for better or worse operators end up touching branches [19:11] mwhudson: on the public side the most operators should do is delete them :) [19:12] lifeless: not going to argue with that [19:12] luks: the idea was in_branch -> in the left side history. in_store -> in the repository, in_history -> in the ancestry of the branch. or something. [19:12] lifeless: this complication wasn't my idea :) [19:12] luks: overly engineered we suspect [19:12] lifeless, luks: probably true, but I think we only implemented in_store... :) [19:13] maybe the code should just break all locks it finds [19:13] and a lot of code expects in_store ability [19:13] mwhudson: sounds aggressive :) [19:13] mwhudson: do you mutex branches against concurrent pullers? [19:13] lifeless: lost my nickserv password i'm afraid, can't pm you here [19:13] I finds 'em, and I breaks 'em [19:13] (until it expires :) [19:13] lifeless: yes [19:14] jelmer: at work I use win2k sp4 [19:18] mwhudson: if you mutex them yourself, just do: [19:18] if branch.get_physical_status(): branch.break_lock() [19:18] unconditionally. [19:19] mwhudson: "get_physical_lock_status()" [19:19] you'll need to do something to handle the prompts for y/n that the ui layer will make [19:19] which returns a True or False [19:19] break_lock is rather ui centric [19:19] ah, i wrote code to break the locks already somewhere [19:20] though i'm getting the feeling that it may not work with packs [19:20] it will [19:20] its tested to, *if* you do get a pack repo with a locked pack-names, break-lock will work, as will get_physical_lock_status [19:21] http://paste.ubuntu-nl.org/46736/ [19:22] right, what you said will probably work, i was talking about what i wrote last week :) [19:23] mwhudson: so, I think if an operator fiddles with these branches, they get what they deserve; if you add more public mirrors it will be impossible for an op to fiddle them all manually. [19:24] mwhudson: so being complex here to support operators doing the wrong thing; is nutz. [19:24] lifeless: that sounds good and simple [19:24] and like something to do tomorrow [19:24] that code will work on packs [19:25] its not long term portable [19:25] poolie is cleaning up that interface; please file a bug about needing a programmatic break-lock facility [19:25] so that he doesn't shoot you in the feet [19:29] lifeless: how will that not work with packs?
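[Fleshing out the unconditional recipe lifeless gives at [19:18]-[19:19] as a sketch. Since break_lock() confirms through the UI layer, a puller that mutexes branches itself would also need a UI factory that answers yes; the factory below is an assumption about one way to do that, not bzrlib's own code, and the branch path is a placeholder.]

    import bzrlib.ui
    from bzrlib.branch import Branch

    class AlwaysYesUI(bzrlib.ui.SilentUIFactory):
        def get_boolean(self, prompt):
            return True                  # auto-confirm "break the lock?"

    bzrlib.ui.ui_factory = AlwaysYesUI()
    branch = Branch.open('/srv/mirrors/some-branch')   # placeholder path
    if branch.get_physical_lock_status():
        branch.break_lock()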
[19:29] lifeless: ok [19:29] They also go through control_files [19:29] when they go to "lock_names()" [19:29] jam: 06:24 < lifeless> that code will work on packs [19:29] lifeless: oops, bad read on my part [19:29] jam: :) [19:30] the _lock stuff was a clue that i was breaking the rules :) [19:30] I'm not sure where I got a "not" in there [19:30] lifeless, jam: strange, i misread it the same way too [19:30] mwhudson: well, it probably won't work for SVN branches [19:30] anyway, it's REALLY time to stop for today [19:30] mwhudson: have a good evening [19:30] jam: i find it moderately hard to care [19:30] :) [19:30] bye all [19:30] see you in jan mwhudson [19:30] lifeless: oh right yes, enjoy your time off [19:31] lifeless: after something like one more working week of us both on, we'll be in much closer timezones :) [19:32] if i initialise a repo with bzr init-repo --trees whatever, and then create a branch inside of it based on, say, "cvs checkout -r something_old", what is the best way to then check out something new inside that same repo? [19:32] would it be better to branch that branch i just created, and cvs update to something newer? or would another direct checkout be the same or better? [19:33] after that point i expect to keep those branches untouched, except for occasionally cvs updating and bzr committing inside them [19:33] mw: 'the same' - no need to do any extra or specific work [19:34] mw: I'm not sure I follow completely, but if doing "another direct checkout" would involve doing a "bzr add" of the whole tree, then I would do 'bzr branch' first. [19:35] maybe i should explain better: i created a repo, and then inside of that i did a cvs checkout of firefox 1.5. [19:35] that won't be updated much anymore :) [19:35] but i'd like to maintain a "parallel" checkout of firefox 2.0, which of course continues to be updated pretty often [19:36] they share lots of files, so i'd like a) patches for one to apply relatively easily to the other, and also to conserve space [19:37] a) being more important than the unlabeled b) [19:51] mw: then I would do a "bzr branch" first [19:51] actually, since you have a CVS checkout you might try using lightweight checkouts, and switching between branches [19:52] then you can leave the CVS checkout in-place, and just update it pointing at different Bazaar branches / Firefox CVS tags [19:52] the only problem is accidentally committing something to the wrong branch, but you could always bzr uncommit, and fix it [19:52] nod [19:53] i'm not terribly worried about screwing up my pristine branches, since i can uncommit or revert easily enough [19:53] mw: so do you know how to do switch, etc, or do you want some help with that [19:56] vila: isn't there a '-Dtransport' ? [19:56] jam: i don't [19:56] mw: so I would start off with a repository with no working trees [19:56] (bzr init-repo --no-trees repo) [19:56] jam: don't think so, maybe you mean the trace+ decorator ? [19:57] and then create a branch underneath there [19:57] bzr init repo/branch-name [19:57] then you create a lightweight checkout pointing at that branch [19:57] jam: hold that thought -- i need to run out for an hour or so [19:57] bzr checkout --lightweight repo/branch-name working [19:57] mw: ok [19:58] vila: does trace+file:/// work? [19:59] I tried "bzr log trace+file:///PWD/" and it gave me the log output [19:59] but no extra data in ~/.bzr.log [19:59] jam: yes, but the collected data are available only in the transport object, what kind of data are you after ?
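[Collected into one runnable sequence, the recipe jam gives mw at [19:56]-[19:57]; the branch names are placeholders, and the final repointing step assumes the `switch` command is available (it lived in the bzrtools plugin around this era).]

    bzr init-repo --no-trees repo
    bzr init repo/firefox-1.5
    bzr checkout --lightweight repo/firefox-1.5 working
    # later, repoint the same working tree at another branch:
    cd working && bzr switch ../repo/firefox-2.0

Both branches share storage in the repository and only one working tree exists, which addresses both of mw's goals: easy cross-application of patches and conserved space.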
[20:00] vila: to help the guy on the ML who was saying: Branching bzr.dev 'hanging'... no CPU activity, no net IO, no disk IO... [20:00] coffeedude: hi there, how are you going? === cprov is now known as cprov-out [20:02] jam: my transportstats plugin may help, but it more love (the decorator approach showed its limits with readv) [20:02] s/but it/but it needs/ [20:03] lifeless: you scared him away [20:06] what's new in 1.0 ? [20:06] jam: finally tried the _max_readv_combine for pycurl since we can't get the data as it arrives [20:07] vila: you can, but you need to pass pycurl a custom write function [20:07] which would also mean it doesn't hang forever [20:07] vila: but anyway, how was it? [20:08] jam: how will passing a write function allow me to return from readv ? pycurl will not give me the return code of the request until it returns [20:09] jam: it went nicely but from here the latency is low so I can't really measure the degradation [20:09] vila: http://people.ubuntu.com/~jameinel/test/ [20:09] has the branches I gave you [20:10] that is at least in London [20:10] but that may not be far enough for you [20:10] argh, still haven't extracted that repo... [20:10] vila: Also, I believe if you set up a custom write function, you can get the headers [20:10] which means you get the "200 OK" right away. [20:11] --- people.ubuntu.com ping statistics --- [20:11] 6 packets transmitted, 6 received, 0% packet loss, time 5399ms [20:11] rtt min/avg/max/mdev = 19.597/20.679/21.470/0.613 ms [20:11] well, 20ms isn't a lot of overhead [20:11] indeed [20:11] vila: where is your branch, mine is 115.685/145.808/226.214/42.441 ms [20:12] I made my tests with http+urllib://bazaar.launchpad.net/~lifeless/bzr/repository should be the same [20:13] the one that led to bugs #165061 and children [20:13] Launchpad bug 165061 in bzr "bzr branch http:// with a pack repository takes too long" [High,Fix released] https://launchpad.net/bugs/165061 [20:13] vila: sure, my question is more about where your code is, so I can pull it here and give you some benchmarks [20:14] jam: oh! Just finishing updating NEWS and I'll mail the list with the patch, I can send it to you privately if you want [20:14] vila: if you send it to the list, I can just grab it from there, and let you know [20:15] ok [20:17] I just set it pretty arbitrarily to 25 but I'm not in a good place to judge, anyway it *feels* already better, the pb is alive ;) [20:20] jam: I seem to remember you had a trick under linux to check the peak of memory consumed by a bzr command, I'd like to look at the benefits provided by this new readv in that respect [20:21] vila: I have a "watch_mem.py" script [20:21] which dumps the /proc/PID/status [20:21] well, polls it [20:21] while a child process runs [20:21] I can send it to you [20:21] jam: please do ! thanks [20:30] vila: it should be in the mail [20:36] wow, I just ran into an old project that was in Weave format still [20:36] (it was my 'scripts' catchall project) [20:37] good thing we've kept around at least read support for everything [20:39] jam: in the worst case you just had to checkout an old bzr version...
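[jam's watch_mem.py isn't shown in the log; this is a hedged guess at the shape of such a script from his description at [20:21] (run a child command and poll /proc/PID/status). Linux-only; VmPeak is a real field in /proc/<pid>/status.]

    #!/usr/bin/env python
    # Run a command and poll /proc/<pid>/status, reporting the last VmPeak seen.
    import subprocess, sys, time

    child = subprocess.Popen(sys.argv[1:])
    peak = 'VmPeak: unknown'
    while child.poll() is None:
        try:
            for line in open('/proc/%d/status' % child.pid):
                if line.startswith('VmPeak'):
                    peak = line.strip()
        except IOError:
            break          # child exited between poll() and open()
        time.sleep(0.5)
    print peak

Usage would be along the lines of: python watch_mem.py bzr branch http://example.com/some-branch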
true [20:40] I wonder if I have any of the flat-file branches anymore [20:40] I ran into some a couple months back when running "bzr heads" [20:41] jam: not only true, but also a pretty strong argument against people whining about our multiple formats, bzr has now an impressive experience in smooth migrations from one format to the other [20:44] vila: you may be interested in http://rafb.net/p/JUvJkI89.html [20:45] jam: finishing my mail first ;) [20:51] jam: are you going to review vila's patch? [20:51] lifeless: the one he just sent? [20:51] I was giving it a look now [20:52] cool [20:52] no need for two sets of eyes [20:52] lifeless: did we ever get your comments on the reconcile code (the ones I asked for) merged? [20:52] jam: I thought so [20:52] ok [20:52] wasn't spiv proxying me for that ? [20:55] what are the chances of getting into core a -u switch to push, that works only for the smart protocol and makes it update a remote tree? I see why it could be rejected (functionality depending on transport), but IMVHO it'd be quite cool to have in core. [20:55] jam: I forgot to mention: http.readv avoids some caching by yielding directly if possible, that may be applied to sftp as well. [20:55] dato: how should it handle conflicts? [20:55] dato: how will the user resolve them? [20:56] dato: will the push fail if there are conflicts? [20:56] warn; ssh; no. [20:56] vila: some caching is good if it allows network + python concurrency [20:57] dato: and if there are *already* conflicts, will it make them worse? [20:57] plus I think most people that'd be using it would not be editing the remote tree. [20:57] can you pull in a tree with conflicts? [20:57] dato: I'm not against it per se; but I think it needs real careful consideration as to UI impact. [20:57] lifeless, what dato is proposing is actually *exactly* how we work here [20:58] beuno: do you use jam's plugin? [20:58] everybody merges locally, then pushes to the main branch [20:58] lifeless, I used to, now I just have a cron job updating all repo's every 2-3 minutes [20:58] ~90 branches at this point [20:58] beuno: ok. Merging locally is orthogonal to this though. [20:59] which is sub-optimal [20:59] well, we only push changes without conflicts [20:59] beuno: if your wt is different to your branch tip; then changes to the branch can conflict with the wt. [20:59] lifeless, nobody works on the wt, just pushes to it [20:59] it's always "clean" [20:59] beuno: sure. Its still a case for the code to have to handle. [20:59] beuno: which is why I asked the questions I asked. [20:59] :D [21:00] that's just my +1 for dato's proposal [21:00] vila: you only return if the current string fits the whole buffer, which seems like it would rarely hit [21:00] since usually you have already combined a couple ranges [21:00] dato: branches can be updated regardless of the tree's state; thats orthogonal to UI [21:00] oh wait... [21:00] you are buffering everything into a file object first [21:01] and then seeking on that [21:02] lifeless: judging by my perfmeters the concurrency is increased, the network never starves anymore [21:02] cool [21:04] jam: RangeFile reads the body of the response on demand so we should process the data as soon as it arrives which means our code doesn't wait more than needed [21:05] lifeless: er, but pull finally updates the wt. [21:05] lifeless: (sorry, I was lagged) [21:05] dato: I'm sorry, I don't get your point then. [21:06] vila: do we know that the offsets will always be in increasing order for _coalesced_offsets...
I think we do as we do sorted_offsets = sorted(offsets)... so we should be ok [21:06] jam: yes we do [21:06] lifeless: you asked what happens if you push -u into a remote tree where conflicts already exist. I'm comparing it to pulling in a local tree where conflicts already exist. [21:06] we probably should add a line in the _coalesce_offset function that we expect the (start, length) pairs to be sorted [21:06] what could happen is that the offsets are not in the same order as the coalesced offsets, and in that case only we have to cache [21:07] dato: I don't think they are equivalent, because bzr:// access does not imply ssh access. [21:08] jam: true, but I reviewed all actual uses and they all sort the offsets before calling (now, that I have said that we can as well sort inside _coalesced_offsets ;-) [21:08] lifeless: the people who edited the remote tree in the first place do have ssh access, and they'd be the ones with the responsibility to fix. [21:08] vila: hmm, not all readv are sorted [21:09] lifeless: true, I'm talking about _coalesced_offsets callers [21:09] vila: I wouldn't sort, I would just comment on the api that it expects them to be sorted [21:09] dato: maybe. Maybe a bug caused the conflict. [21:09] (It won't work anyway if they aren't) [21:09] jam: indeed, so why not sort them ? [21:09] dato: but should it be made progressively worse if they are not around? [21:09] vila: because they are already sorted [21:09] and you don't really need to pay to sort a sorted list [21:09] lol [21:10] dato: it just seems unfriendly to progressively wedge a tree that the user has no guaranteed access to get at and fix. I don't think it should allow any conflicts ever. [21:10] yeah, right, we better check they are not sorted before sorting them then [21:10] uhm [21:10] if you mean manually checking, thats slower than sorting anew, in python [21:10] bytecode-- [21:10] vila: right, much more performing :) [21:11] lifeless: joke man [21:11] lifeless: I'm pretty sure he was being sarcastic [21:11] Oh, that's easy to unwedge. You just use bzr --pretend-you're-in bzr://[...] revert [21:11] lifeless: yeah, if you want to help users not shooting themselves that'd be the thing to do. which I also think that makes sense, since personally I'd only recommend -u for remote trees that are never edited by hand. and in such cases there'll never be conflicts, so... qed/whatever. [21:11] of course, as soon as I made a joke, who's coming ? [21:11] vila: http readv of aa3ab464f867aaa3609bc8ba20f1e342.pack offsets => 14672 collapsed 6211 ... [21:12] (with my repository, which seems to be significantly less clean than roberts) [21:12] vila: It would be quicker to find the last item in the list, then append a random earlier range to it. Then you'd know it had to be sorted :p [21:12] dato: anyhow, I'm not against the idea of push-to-branch-triggering-update. I *am* against the idea of it ever adding a remote conflict. [21:12] jam: haaa, nice one [21:12] dato: so I think a mail to the list, and discussion about policy is a good next step. [21:13] hmm... then later we get: http readv of aa3ab464f867aaa3609bc8ba20f1e342.pack offsets => 14672 collapsed 20 [21:13] we've been around this much before, and the basic issue is always the radical difference between on-my-disk vs remote-disk [21:13] jam: revisions + inventories. [21:13] lifeless: ok, noted in my TODO [21:13] lifeless: sure, revisions versus inventories versus texts.
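[To make the sorted-input requirement concrete: a toy version of range coalescing in the spirit of [21:06]-[21:09]. Illustrative only -- bzrlib's _coalesce_offsets takes more parameters -- and, per the conversation's conclusion, it documents the precondition rather than re-sorting.]

    def coalesce(offsets, fudge=0):
        """Merge (start, length) pairs into fewer, larger ranges.

        Callers must pass offsets already sorted by start; ranges whose
        gap is <= fudge bytes are merged (reading a few spare bytes is
        cheaper than issuing another request).
        """
        combined = []
        for start, length in offsets:
            if combined and start <= sum(combined[-1]) + fudge:
                cstart, clength = combined[-1]
                combined[-1] = (cstart, max(clength, start + length - cstart))
            else:
                combined.append((start, length))
        return combined

    # coalesce([(0, 10), (10, 5), (20, 3)])          -> [(0, 15), (20, 3)]
    # coalesce([(0, 10), (10, 5), (20, 3)], fudge=5) -> [(0, 23)]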
just interesting that one section is very unclean [21:13] the other is relatively clean [21:13] jam: do you have other projects in there? [21:13] lifeless: yep [21:14] just like I do in real life [21:14] thats likely it [21:14] sure, I know *why* it is happening, just interesting to see it [21:14] cool [21:14] hmm. it still says it is copying inventory texts [21:15] vila: when you were doing "count of data sizes downloaded" was that using your transport stats plugin? [21:15] brb; fooding [21:15] jam: yes [21:15] also, shouldn't we not log every readv collapse unless we have -Dhttp ? [21:16] jam: what ? [21:16] vila: do a plain "bzr get" and it will show you every time it does a readv [21:16] and how much it collapsed [21:16] (it was an old bit that I wrote) [21:16] I'm thinking we should probably only mutter() that if " 'http' in debug.debug_flags" [21:17] though I certainly find it useful right now [21:17] jam: ha, ok, I thought you wanted more mutter but you want less [21:17] yeah, double negatives are bad [21:17] in fact there is only one mutter per readv, even if there are several GET requests issued [21:17] hmm... so I currently have 28 pack files, which means that we actually have *more* round trips [21:18] well, probably not for all the text data [21:18] but for the inventories [21:18] we have a lot of .rix round trips [21:18] then a more reasonable .iix round trips [21:26] hmm.... it seems vila's code goes in a slightly different pack order than bzr.dev [21:26] I wonder if he is just missing some of the newer patches [21:27] I had to merge during the patch writing when I hit is_ancestor test failing [21:27] I still wonder how I got a version from bazaar.org that fails to pass the test suite... [21:28] vila: upgrading to packs broke the is_ancestor test [21:28] is_ancestor just used the default format [21:29] packs required the test to lock first [21:30] but other than that, I don't know how you would have gotten a bzr that had the no-lock is_ancestor, but packs set to default [21:30] hmmm, and I realized it only when I ran the full test suite, could be [21:30] vila: and I do have to say, I get to see the blinkenlights [21:30] haaaaaaaaaaa :-) [21:30] that little guy, its spinning just fine [21:31] it has 60s to beat bzr.dev's time [21:32] jam: pycurl or urllib ? [21:32] urllib [21:32] it just made it [21:32] 20s faster [21:32] he he [21:32] 9m40s versus 10m [21:33] Is it a good idea to run pack reconcile now? [21:34] jam: can you try pycurl too ? [21:34] hmm.... 64MB / 10m == 6.4MB/min or 106 kB/s... I should be able to get 160 or so, but my connection might be in use somewhere [21:34] vila: sure [21:34] I suppose I can install it [21:34] I haven't to date, because I prefer urllib [21:35] really ? Wow, nice surprise ;) [21:35] yeah [21:35] ^C during download is a pretty big downer for pycurl [21:35] but also, I felt I wanted to get away from it anyway [21:35] so might as well dogfood urllib [21:35] I don't think I have it on any machines right now [21:36] * vila you can ^C pycurl now, but don't tell anyone ;) [21:36] at least I successfully did a couple of times now [21:36] vila: just because it downloads in smaller chunks at a time? [21:36] yup [21:36] or can you actually interrupt the data stream [21:37] no, just because the probability is lower to break at bad times now [21:37] well, I'm doing memory dumps as I go, too [21:37] so I'll let you know [21:38] initial reports look good [21:38] (just browsing the memory log file, maybe 20M or so less consumed.)
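[The -Dhttp gating jam proposes at [21:15]-[21:16] would look roughly like this in the readv code; relpath/offsets/coalesced are stand-ins for whatever the surrounding method has in scope, with sample values here so the fragment runs on its own.]

    from bzrlib import debug
    from bzrlib.trace import mutter

    relpath, offsets, coalesced = 'foo.pack', [(0, 10)] * 14672, [(0, 150)] * 20
    if 'http' in debug.debug_flags:
        mutter('http readv of %s: %d offsets collapsed to %d',
               relpath, len(offsets), len(coalesced))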
so peek mem before/after patch: pycurl: 173M/95M urllib: 131M/85M [21:38] nice [21:38] vila: "peak" [21:39] jam: rats [21:39] Peng: yes, my fix for it landed in bzr.dev about 20 minutes ago [21:39] I get 120MB versus 99MB VmPeak for urllib [21:40] vila: is that with robert's repository ? [21:40] 32/64 bit maybe? [21:40] yeah, still the one from bug #165061, by the way lifeless, do you intend to update it ? [21:41] vila: so pycurl still gives blinkenlights, but more intermittently [21:41] Launchpad bug 165061 in bzr "bzr branch http:// with a pack repository takes too long" [High,Fix released] https://launchpad.net/bugs/165061 [21:41] so you see nothing for a few seconds, then a bunch of updates [21:41] while urllib gives a steady stream of updates [21:41] which is pretty expected [21:41] jam: yes, tweak the 25, I was happy with it, but as I said my latency is low so I can't really be the one to judge [21:42] I think this is about right for my latency [21:42] I still prefer urllib [21:42] whats the number mean ? [21:42] but at least it doesn't hang for minutes [21:42] lifeless: number of ranges to collapse [21:42] hmm [21:42] many ranges are small [21:42] well, on top of that, we have 200 ranges per request [21:42] I would rather suggest to do it on the total byte count in the collapsed ranges [21:42] so really it is 200*## [21:43] lifeless: I agree that would probably be better [21:43] because its byte count vs latency that the real world operates on [21:43] (raw ranges => collapsed ranges => request) [21:44] right [21:44] so I'm saying raw ranges => collapsed ranges => byte count filter => requests [21:44] or something like that [21:44] because [21:44] if I ask for a 5MB single text [21:44] at the moment it is "raw ranges => count filter => collapsed ranges => requests" [21:44] ~/src/bzr/173010/bzr branch http+urllib://bzr.arbash-meinel.com/branches/bzr/0.93-dev/extra_range_collapse_165061 [21:44] should that give no output while it reads ? [21:44] is still taking at least a few minutes to show any ui life [21:45] mwhudson: can you wait a bit to let my tests run? [21:45] rofl [21:45] jam: uh, sure [21:45] before my pipe gets killed [21:45] I'm just finishing one [21:45] mwhudson: where the hell did you get that ? :-) [21:45] it should be done in ~ 5min [21:45] vila: your patch on the mailing list [21:45] hehe [21:46] lifeless: I concentrate on the readv implementation, but I noticed that indeed the pb pops up a bit late [21:47] lifeless: 3070, "Fix an ordering insertion issue with reconcile of pack ..."? [21:47] Peng: yes [21:47] at least 10 requests are issued before it shows up [21:48] Ohh.. [21:48] I think I already reconciled it a while ago. [21:48] then it should be trivial [21:48] before I forget, yes _max_readv_combine generates many coalesced offsets which results in contiguous ranges, so we waste header space here, so lifeless is right that should be reworked [21:49] It took 35 minutes to check and 1 minute to reconcile then. [21:49] vila: more than header space is the latency of issuing new requests. [21:49] vila: clocking on the wire is less expensive than going around the world [21:50] lifeless: sure, one more reason to use the header space efficiently to issue less requests [21:51] fewer [21:51] [21:51] lol [21:52] Excusez-moi madame, je ne parle pas tres bien l'anglais, et en plus je n'ai pas les accents pour le francais (ni la cedille pour le c ;) [21:52] cedille ? [21:52] c'est quoi cedille?
[21:52] [21:54] cedilla, and your french is correct :) [21:55] by the way, keep correcting me [21:55] oh, ç ? [21:55] yes [21:56] or \,c C-M-" under emacs >-) [21:56] have you set up composing in your gnome keyboard foo? [21:57] ç is left-windows+comma c, for me. [21:58] composing is not available everywhere, I make enough typos by constantly using different keyboards, I settled with US layout everywhere and manage to type less and less french outside emacs... :-) [21:58] vila: maybe it was because of mwhudson, but pycurl with your patch was 14m22s [21:58] I have three keyboards here: one Apple, one Dell, one Sun... [21:59] vila: you are right about the extra header stuff, but you also need some granularity so that you can break at appropriate points [21:59] jam: yes, blinkenlights needs more requests so more latency.... told you ! ;-) [22:00] jam: yes, trade off, *my* plan is to drop pycurl ;-) [22:00] vila: I have a US keyboard. gnome lets you set the compose key [22:01] lifeless: I understand your point about a single 5MB text... but at the moment we don't really have a way to do progress while downloading [22:01] jam: nested_progress_bar().tick() [22:01] vila: Which is your favorite keyboard? [22:02] lifeless: I know, but I prefer to stay with a simple setup that I can use under Ubuntu, Solaris, Mac OS, windows, etc (including VMs which love to break that sort of thing), add synergy in the scheme (to use the same keyboards between different X servers and....) [22:02] mwhudson: you can abuse my net connection now [22:03] jam: thanks [22:03] mwhudson: I also have: http://people.ubuntu.com/~jameinel/test/ [22:03] jam: i'm only testing the abuse you ask launchpad to give it :) [22:03] Peng: I code therefore I use software designed by people with US keyboards, so having ZXCV (undo/cut/copy/paste) on the same line is a win [22:03] up with a bunch of real branches which is a copy of my working repo [22:03] mwhudson: yeah, except LP only does that 1 time, and then at least quiets down afterwards [22:03] though I do notice in my server logs [22:03] jam: except not at the moment [22:04] that I get a whole lot of 404's from LP [22:04] https://code.edge.launchpad.net/~jameinel/bzr/extra-range-collapse-165061 [22:04] because it takes over 15 minutes for the progress bar to appear [22:04] mwhudson: why is LP failing on that branch? [22:04] do you have any idea? [22:04] jam: yes [22:04] oh, the LP branch is locked [22:04] then having {} just above [] and () all easily accessible are far better than french layout (and don't talk about windows where { is something like alt-156 or whatever) [22:05] jam: that's just a symptom [22:05] jam: the real problem is that i landed a branch that kills the puller worker if there is no progress bar activity for 15 minutes [22:05] ah [22:05] hmm.... [22:05] mwhudson: well, that branch is in knit format [22:05] because we've had problems with pullers getting stuck for days [22:06] so it is downloading most/all of inventory.knit [22:06] before it does much [22:06] jam: to some extent [22:06] jam: i'm not interested in your excuses :) [22:06] * vila calling it a day [22:06] night all [22:06] gnight [22:06] It is about 25MB at 30KB/s... [22:06] vila: night [22:07] which is 14 minutes [22:07] it's still bad for bzr to show no activity at all for 25 minutes or however long it will take [22:07] 13.9 [22:07] mwhudson: preaching. Choir. [22:07] mwhudson: oh, I'm not saying it isn't bad form for bazaar to act that way [22:07] mwhudson: hear the angels?
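[One way to keep the bar alive during a long single-file download, per lifeless's nested_progress_bar().tick() pointer at [22:01]. The progress-bar calls are bzrlib's; the chunked read loop, `response`, and `consume` are illustrative assumptions.]

    import bzrlib.ui

    pb = bzrlib.ui.ui_factory.nested_progress_bar()
    try:
        while True:
            chunk = response.read(32 * 1024)   # response: assumed file-like stream
            if not chunk:
                break
            consume(chunk)                     # placeholder for the real consumer
            pb.tick()                          # no total known; just show activity
    finally:
        pb.finished()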
vila's change will do a lot for that [22:07] lifeless: yes, they're singing "give it up for today" [22:07] mwhudson: I thought you gave it up about 3 hours ago [22:08] jam: that's why i'm testing vila's change [22:08] jam: i should have done [22:08] packs will help a lot, too, as they copy in a different fashion that gives better progress anyway [22:08] I'm just being stodgy on my public repo [22:08] and forcibly testing pack => knit stuff [22:09] mwhudson: I did notice a lot of 404 in the server logs, every time LP tries to mirror a branch and finds it is in a shared repo [22:09] which is good [22:10] jam: launchpad just does branch.pull() [22:10] well, it might be nice if it could check Branch.last_revision() and see that it has not changed without doing any more work [22:10] mwhudson: sure, I know why [22:10] jam: little chance for funny business here [22:13] jam: opening the branch finds the repo [22:13] lifeless: the way our code is written, yes, it wouldn't *have* to [22:13] I don't really mind [22:13] I just don't read the server logs [22:13] jam: I really think it should stay the same [22:14] mwhudson: Did the puller get updated, it seems to mirror 5x per day now [22:14] lifeless: I think having WT act differently based on what repo it is connected to is a layering violation... and we shouldn't have to poll everything when we only need 1 bit of info [22:14] but it isn't critical to me [22:15] jam: I agree about the layering violation, that's a compromise (everyone is unhappy), but; I think the knowledge that the stack is good is valuable. [22:15] jam: 4x is the defaul [22:15] t [22:15] I just get to see this in my logs: http://rafb.net/p/gHjqDX47.html [22:15] and has been for ages and ages, afaik [22:15] much better than operating on something for X time and *then* finding out it's corrupt. [22:15] mwhudson: ok, I just see 5 queries [22:16] I used to see 1, IIRC [22:16] I haven't looked at it in quite a while [22:16] probably since before i started :) [22:18] what I really need to do is update all of those to Branch6 format [22:18] so it doesn't have to download the full revision-history file each time [22:19] my pull against jam's repo is spinning the bar now [22:19] didn't notice when it started though [22:20] pretty sure it's quicker than without vila's patch though [22:20] Hey, it's been more than 1 minute since I started reconcile. [22:20] lifeless: one last thing, regarding obsolete-packs, what about replacing [t.remove(f) for f in t.list_dir('obsolete-packs')] by t.remove_tree('obso-packs'); t.mkdir('obso-packs') ? [22:20] vila: race conditions FTW [22:21] wow, 279 Branch5 branches... [22:21] don't you have more race conditions with a listdir ? [22:21] vila: race conditions, and permission issues [22:22] vila: you don't have anyone who will try to rename a file into a missing directory [22:22] versus having a file that was deleted for you [22:22] or missing a file that showed up [22:22] (which we don't really care about anyway) [22:23] jam: permission issues I understand but what race conditions, isn't the repo locked ? [22:23] vila: no it isn't [22:23] ha, ok [22:23] you only have to lock while updating pack-names [22:23] once you've updated that [22:24] nobody will try to access the packs you are moving [22:24] ok, thks [22:24] bye ;-) [22:25] vila: kthxbye [22:25] You really need to work on your spelling [22:25] :) [22:40] hmm...
upgrading to branch6 saved me about 30MB === n[ate]vw is now known as natevw === natevw is now known as n[ate]vw [22:50] jam: I'm wondering if any decisions have been made yet regarding handling Unicode normalizations? [22:51] no new decisions [22:51] I don't think it was discussed much recently [22:51] we've got a lot of other stuff on our plate [22:51] n[ate]vw: if you want to start up a discussion, you are welcome to [22:51] I sort of burned out on trying to make everything work [22:51] as I seemed to be the only person who cared... [22:51] well, that, and it would always break for someone [22:51] yeah, understood [22:52] I felt that penalizing Windows/Linux users because Mac likes to change filenamse [22:52] filenames, was a bit sadistic [22:52] jam: I'm still very much just a user at this point, but I have read up on Unicode a bit [22:53] I'm still kinda trying to pick between bzr/hg [22:53] n[ate]vw: did you try my patch? [22:53] It would be pretty easy to get that merged in [22:53] so at least it would let a single platform stay compatible with itself [22:53] even if branching from Unix => Mac might break stuff [22:54] (well, filenames would show up as missing, etc) [22:54] we recently merged some code which helps with the case insensitivity problems [22:54] spiv, jam, igc call in 5 [22:56] jam: I haven't tried the patch, not much of a python hacker. do I apply that to the "source" tarball d/l, or can it apply straight to the installed bzr "executable" in bin? [22:57] n[ate]vw: probably recommended to go for the source [22:57] you probably could apply it to the library in [22:57] /usr/lib/python2.4/site-packages/bzrlib/* [22:57] hm, I'm also noticing pull being much more cpu-intensive now, at least pull -v [22:57] but I would probably recommend running from source if possible [22:57] np, the source is fine [22:58] n[ate]vw: also, you can run bzr directly from source (without installing) [22:58] looks like there's a new rc out since I was testing it last [22:58] just run 'make' so you get extensions built [22:58] (if you don't, it will still run, but just slower) [22:58] morning [22:58] that's good to know [22:59] jam: I just read an interesting article by Drew Thaler on case sensitivity, and he happened to defend OS X's normalization in the comments: http://drewthaler.blogspot.com/2007/12/case-against-insensitivity.html#comment-2279674399335241896 [23:00] n[ate]vw: I understand (in theory) why normalization is the "right thing" but because nobody else does it [23:00] it just means yours is the only platform which breaks stuff [23:01] yeah, and it looks as though you guys are trying to push 1.0 out the door so I don't want to harass you about this at the moment [23:02] it'd be nice to see a good cross-platform solution eventually, but I understand it's a tough nut to crack [23:03] lifeless: conference call ? [23:05] What package do I have to install to get the "bzr" command on my system? [23:05] jam: yes, distracted by code. [23:05] I have bazaar [23:05] bzr [23:05] logankoester: ^ [23:06] igc: conf call time if you are not already in [23:06] heh [23:06] in [23:06] thanks ;) [23:16] n[ate]vw: well, the small patch I gave you might be a simple thing to get into 1.0 [23:16] code-wise it is nice and small [23:16] politics wise, a bit more involved [23:17] But as I was the one who spent an irrational amount of time trying to get it all working in the first place.
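[The normalization problem behind [22:50]-[23:01] in miniature: the same visible filename can be two different codepoint sequences, and OS X's HFS+ hands back the decomposed form. A quick unicodedata demonstration; nothing here is bzr-specific.]

    import unicodedata

    nfc = u'caf\xe9'                                 # precomposed e-acute
    nfd = unicodedata.normalize('NFD', nfc)          # 'e' + combining accent
    print nfc == nfd                                 # False: different codepoints
    print unicodedata.normalize('NFC', nfd) == nfc   # True: normalize to compare

A tree recorded with NFC names on Linux can therefore appear to have "missing" files when the working tree is listed on a Mac, unless the VCS normalizes names before comparing.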
[23:18] vila: pong [23:23] jam: I will try to test that patch tonight after work (unless work continues to be waiting on others, in which case sooner!) and post my results to the ticket [23:29] lifeless: Should I be merging my 1.0 patches into bzr.dev? [23:34] lifeless: Conflict detection is done before the merge is applied, so it would be possible to error out if push would trigger conflicts. [23:58] Reconcile finished in 20 minutes. [23:58] * Peng wanders off.