[00:47] poolie: if you're doing reviews, a review of http://bundlebuggy.aaronbentley.com/project/bzr/request/%3C20081027005419.GA20736%40steerpike.home.puzzling.org%3E would be good.
[00:47] spiv, ok
[00:47] am doing mail atm but will try to get to it
[01:58] * spiv fires some branches to the mailing list and goes to lunch
[02:10] extmerge doesn't like me. :(
[02:10] Unable to load plugin u'extmerge' from u'C:/Program Files/Bazaar/plugins'
[02:12] pdf23ds: try "bzr -Derror ...", or look in your bzr.log (see "bzr --version" for its location) to get a traceback that may explain why.
[02:12] pdf23ds: if nothing else, the traceback will be useful in a bug report against extmerge :)
[02:13] OK, I think it might have something to do with abentley's commit yesterday.
[02:13] * spiv really goes to lunch now
[02:13] AttributeError: 'dict' object has no attribute 'register_lazy'
[02:14] But thanks for the tip.
[02:14] How do I change the tree to an old rev?
[02:14] Just my luck to have a commit yesterday break me on a project that hadn't seen any commits since March.
[02:15] bzr up -r x?
[02:18] bzr revert -r x
[02:19] crazy Australians, going to lunch at 9 PM.
[02:59] Anybody know how to resume when you get conflicts with the replay command?
[02:59] rebase-continue?
[03:07] pdf23ds: does the help tell you?
[03:07] bzr replay help is empty.
[03:07] and bzr rebase help doesn't say explicitly.
[03:07] I filed a documentation bug. :)
[03:08] In the meantime I guess I'll try and see what happens.
[03:13] It sort of acts like continue might do the right thing, though -abort and -todo complain that you're not rebasing.
[03:14] Hmm. I didn't mess as much with Mercurial, but Bazaar seems much rougher around the edges than Mercurial. But I still like it more.
[03:14] And OTOH, I just ran into a Subversion bug today with the latest version.
[03:15] (Couldn't merge a path with spaces over the https protocol; works over the svn protocol. Weird.)
[03:17] meep
[03:18] please file a bug
[03:18] About replay you mean? Where should I file that?
[03:19] https://bugs.launchpad.net/bzr-rebase/ ?
[03:20] (The merge thing was an SVN bug I just got hit by today.)
[03:20] about the https file-with-spaces thing
=== spm_ is now known as spm
[03:21] yeah, that's svn.
[03:21] I had the same reaction.
[03:22] a bug on bzr-svn would be good for that; it may be something jelmer has missed
[03:23] It wasn't related to bzr, I mean.
[03:23] I work with SVN during the day; I'm converting my home projects to bzr.
[03:24] I won't be able to get my job using bzr until TortoiseBZR is mature, and until I convince my coworkers of the joys of branching.
[03:24] Hi party ppl, I seem to be having some problems with interop between bzr v1.3.1 and v1.6.1, but only on some branches.
[03:25] Most of our dev machines (and the repo) are on Hardy, with bzr 1.3.1. But on a certain big repo, when I try a checkout with v1.6.1 (on Intrepid) it first complains about the server not understanding net protocol 3, then falls back to an earlier protocol, then sits for a minute or two, then bombs with a traceback like this:
[03:25] File "/usr/bin/bzr", line 119, in 
[03:25] sys.stdout.flush()
[03:25] ValueError: I/O operation on closed file
[03:25] ...which looks like a permissions problem, but I don't think it is
[03:25] because checkouts work fine from Hardy machines
[03:26] Any suggestions where to look? I tried with -Dhpss and it didn't provide any more detail
[03:28] Hmm. Maybe I don't need replay after all. Rebase might do the job.
[03:29] jonoxer: I don't think it's interop
[03:30] jonoxer: 1.6 has some memory issues; try 1.8
[03:30] lifeless: so maybe it's a problem with this particular client
[03:30] lifeless: OK, I'll give it a shot
[03:32] Nope, rebase requires a common ancestor too.
[03:32] Revision 0 isn't considered a common ancestor, right?
[03:32] bzr branch A B -r 0 is the same as bzr init B?
[03:46] pdf23ds: nearly the same; branch will preserve the specific format whereas init will use the current default
[03:47] this only matters if either the source branch uses a very new feature, or you're interoperating with clients much older than yours
[03:53] lifeless: Well, it matters if you want to merge those branches later and find out it can't be done.
[04:01] Wow, bzr doesn't like it when my local commits are identical to upstream commits and I do a "bzr up".
[04:01] Almost everything showed up as a conflict.
[04:02] mkanat: it shouldn't conflict there, unless you added files
[04:02] mkanat: when you say "identical", do you mean the files actually are identical? As in the same checksums?
[04:02] jonoxer: No, I have local uncommitted changes in addition to the local commits.
[04:03] jonoxer: Also, the upstream files may have different permissions than my local files.
[04:03] permissions shouldn't be an issue there; it's likely the local changes
[04:03] mkanat: but there are some things update does that make conflicts worse than they should be
[04:03] Yeah, that's probably what I'm running into.
[04:03] I was just surprised; didn't know if you guys knew it behaved that way.
[04:04] It's not that bad; I have my local changes as a patch elsewhere and I can just revert and re-apply them.
[04:13] lifeless: I've just upgraded the problematic machine to bzr 1.8.1 (package from the PPA) and it still bombs out on a checkout from the same repo
[04:13] The error message is identical to the one with 1.6.1
[04:15] The repo is shown as "Shared repository with trees (format: pack-0.92)" fwiw
[04:16] The branch I'm checking out takes about 7.8G on disk, in case that's relevant
[04:19] jonoxer: hmm, perhaps we haven't landed the patch yet
[04:19] lifeless: think it's worth trying 1.3.1 on that machine?
That's what we're running on every other box, and it works fine
[04:20] jonoxer: I suspect it will work, yes
[04:21] I'm surprised bzr-loom isn't listed in http://bazaar-vcs.org/BzrPlugins
[04:22] lifeless: it seems to be working (1.3.1 on Intrepid)
[04:25] jonoxer: the memory patch may not have landed; or it may be something else
[04:25] jonoxer: please file a bug
[04:39] No problem, done (Bug #290558)
[04:39] Launchpad bug 290558 in bzr "bzr v1.6.1+ crashes when checking out large branch from v1.3.1 repo" [Undecided,New] https://launchpad.net/bugs/290558
=== ozzloy is now known as lbc
=== lbc is now known as ozzloy
=== jfroy|work is now known as jfroy
[05:42] spiv, i agree with your comments on the hooks doc patch, so i'll apply them and merge it
[05:48] poolie: great
[05:49] i still find your nickname kind of bizarre :)
[05:50] I should trawl a dictionary for more flattering four-letter words that are unlikely to already be registered on popular services...
[05:50] http://newmatilda.com/2008/01/17/attack-spivs
[05:50] excuse the distraction
[05:50] "They no longer sport pencil-line moustaches and porkpie hats, but their tell-tale signs give them away - their faux Italian pointy-toe shoes and the hairstyles sculpted into those absurd little tufts."
[05:50] indeed :)
[05:50] Oh, I don't mind being distracted.
[05:51] That's the problem ;)
[05:51] (:
[05:53] night all
[05:55] cheerio
[06:07] spiv, can you look at my reply there quickly?
[06:07] by quickly i mean 'it should not take long', not 'urgently'
[06:13] poolie: the reply to the hook docs patch? Looks good to me.
[06:13] thanks
[06:13] i'll send it in
[06:25] * spiv -> yoga
[06:25] (and hacking on the train)
[06:27] i'm going to stop in a bit
[07:07] hi all
[08:15] Hi! Is there a bzr-gtk release 0.96? I see in the mailing list: "Status: Fix Committed => Fix Released  Target: None => 0.96.0" but nothing about 0.96 on http://bazaar-vcs.org/bzr-gtk ..
[08:38] matkor: 0.96.0 is where the commits are made to *prepare* 0.96 :)
=== jfroy is now known as jfroy|work
[08:39] matkor: i.e. the trunk; a specific branch will be made once 0.96 is out, I think
[08:43] vila: Ok.. so the status change (Fix Committed => Fix Released) is misleading? Should it be left on Fix Committed?
[08:44] matkor: committed means the fix is available somewhere, released means it's merged on trunk
[08:46] ah ... thanks a lot for the explanation, vila
[08:47] you're welcome, and you're not the first one needing the explanation, which indicates something should be changed though...
[09:28] vila: Perhaps: Fix Committed -> Fix merged mainline (or Accepted to next release) -> Fix Released (in official version) could be more self-explanatory ...
[09:32] The last step means that releasing a version requires an additional step, whereas the actual usage doesn't... Most users will update only when a new release is announced and are unaware of the subtle difference. I know there is work in progress to somehow automate that step but I forgot the details
[09:57] Presumably launchpadlib will allow it to be done (if it doesn't already).
[10:11] Odd_Bloke: what was that?
[10:13] thumper: Marking bugs as Fix Released when an actual release happens.
[10:13] Odd_Bloke: ah
[10:13] thumper: A better fix might be to fix the bug statuses though. ;)
[10:13] heh
[10:14] I'd say that a fix is "in progress" until it has landed on trunk
[10:14] but hey, that's just me
[10:15] I did manage to get an hg->bzr convert last night
[10:15] just with "versioned directories" and "bzr shelve"
[10:15] Well, 'we' (i.e. bzr) do (In Progress) --submitted for review--> (Fix Committed) --merged--> (Fix Released).
[10:16] * thumper nods
[10:18] congratulations jelmer
[10:22] lifeless: ping
[10:24] james_w, thanks, but for what? :-)
[10:24] heh :-)
[10:33] jelmer: I'm guessing for your AM report...
[10:33] ah, I wasn't aware that was announced someplace
[10:37] jelmer: the debian-newmaint list.
[10:37] jelmer: you know https://nm.debian.org/nmstatus.php I guess
[10:37] Odd_Bloke, ah, thanks
[10:38] LarstiQ, I don't expect (hope?) James to check that on a daily basis :-)
[10:38] jelmer: James has resigned from DAM work afaik
[10:38] oh, james_w
[10:38] sorry :)
[10:38] jelmer: yeah, debian-newmaint
[10:39] * LarstiQ takes this as a clear sign he should do something about breakfast
[10:48] LarstiQ, :-)
[10:49] jelmer: Also on a Debian-related note, is there any documentation about how to use the pkg-bazaar repository? I haven't been able to work it out, and I have an ITP for bzr-xmloutput sitting around.
[11:01] Odd_Bloke, basically, we have one repository on bzr.debian.org per package
[11:01] and then one branch per Debian distribution (unstable, experimental)
[11:02] the branches are bzr-builddeb branches, and should ideally contain the upstream source plus the debian/ directory and any necessary changes
[11:34] jelmer: Bug 290664 is the one I promised to file a couple of days ago.
[11:34] Launchpad bug 290664 in bzr-svn ""Can't get entries of a non-directory"" [Undecided,New] https://launchpad.net/bugs/290664
[11:41] Odd_Bloke, thanks, I hope I can have a look at it during the weekend
[11:42] jelmer: I have commit access, so if there's a way I can fix the repository using that, I'd be happy to do so.
[11:46] Odd_Bloke, it should be possible to add a fix for this to bzr-svn; I just need some time to analyse what's going wrong
[11:47] jelmer: Cool. I really meant that if a repository fix would be possible sooner, then I'd like to do that. :)
=== doko_ is now known as doko
[13:03] beuno, loggerhead made it into experimental btw
[13:03] jelmer, I saw!
[13:03] I've been spying
[13:03] thanks :)
[13:03] :-)
[13:04] you're welcome
[13:04] I hope we can now help get it running on bzr.debian.org
[13:05] oh, I didn't know they needed help
[13:05] let me know if I can do anything at all
[13:07] will do
[13:07] thanks
=== Verterok|out is now known as Verterok
[13:24] what is the easiest way to search for a revision that removed a couple of lines?
[13:34] hey, can I revert a single file to a specific version using bzr revert -r 46 foo.py ?
[13:35] jelmer: abentley's latest commit to extmerge breaks it on 1.8. I had to figure that out myself after branching.
[13:51] pdf23ds, ?
[14:53] vila: ping
[14:53] jam: pong
[14:53] hey, want to chat about the inventory stuff?
[14:53] sure
[14:53] I haven't heard any background to know if robert's idea is correct
[15:28] hi asmodehn
[15:29] hey hey jam ;-
[15:29] ;)
[15:31] I'm a bit surprised you sent the .bzr.log to the list, but I suppose filenames aren't particularly private
[15:32] it does give me a little bit of insight into your game, though :)
[15:32] oh, did I ^^
[15:32] you certainly have a very *wide* tree (lots of files)
[15:33] approx 30k
[15:33] nah, I only sent it to you I think ;-)
[15:33] don't scare me like that :-p hehe
[15:34] anyway, as you can see I don't use bazaar to store the source code, but rather to distribute different versions of it in different locations
[15:34] which explains the size and the latency I am dealing with..
[15:34] asmodehn: you're right, I wasn't looking at the folder I was in
[15:35] but yes, lots of big files
[15:35] the repository with working trees is now 24GB
[15:35] so whenever I do a branch it takes some time :-p
[15:36] especially when it's remote
[15:36] I have also scripted the beginning of an automated framework on top of it, to keep different locations in sync, with a web interface, etc.
[15:36] just a beginning :-p
[15:38] also, because of the size of it my disk is pretty full now; not sure I can do a lot of tests ...
I am lacking backup space...
[15:39] well, if you want to try my "convert_to_dev2" script, you can actually just use a hardlink of the old repository
[15:39] and the only new data will be the indices
[15:39] which is a small number of megabytes
[15:39] rather than 24GB
[15:40] mmm ok.. no need to upgrade beforehand at all, right?
=== dobey_ is now known as dobey
[15:41] ha, yeah, ok, this is doing the upgrade... ok, I'll do that then and let you know...
[15:42] so I should say maybe 10MB or so, but that is pretty trivial next to the 24GB. But I would only do it on a copy
[15:42] for now
[15:43] then again
[15:43] I could just write you the reverse script
[15:43] to convert them back
[15:43] not to mention my convert_to_dev2 script leaves them lying around, IIRC
[15:43] that's good enough then, if I can put them back somehow ;-)
[15:43] I have remote backups if needed
[15:44] yeah
[15:44] the script creates new indices
[15:44] and just moves the old ones out of the way
[15:44] without deleting them
[15:44] I just verified that
[15:47] "python convert_to_dev2.py" and that's it, correct?
[15:47] * asmodehn just a bit scared :-p hehe
[15:49] jam: ok, running...
[15:49] you have to have "bzrlib" in your python install, but it sounds like you do
[15:49] I'm looking at it now
[15:49] just for your comfort
[15:49] thanks ;-)
[15:50] jam: ok, done ^^ pretty fast
[15:51] yeah, that is all you needed to do
[15:51] asmodehn: again, rewriting 20MB isn't bad versus rewriting 24GB
[15:52] ok, but I changed only the repository I am branching towards, not the repository I am branching from..
[15:52] do I need to do that too?
[15:52] yep :-)
[15:52] the one you are branching *from* is the important one
[15:52] yeah, I would have guessed so....
[15:52] mmm hang on...
[15:52] I'm also not 100% sure if you need my prefetch code
[15:53] Are you comfortable running bzr from source instead of from an install?
[15:53] (it isn't very hard)
[15:53] mmm it should be ok, but I really want to avoid messing up my repositories too much
[15:53] so I guess I'll try one thing at a time ;-)
[15:54] but yeah, I could do that...
[15:54] it doesn't change the disk formats at all
[15:54] *just* how we access it
[15:54] but start with this
[15:54] see what happens
[15:54] k
[15:54] and then we'll try that as the next step
[15:55] also, can you run a quick command for me
[15:55] sure
[15:55] find .bzr -name "*.?ix" -print0 | xargs -0 du -kshc
[15:55] I think that is right
[15:55] let me run it here
[15:56] yeah, that works
[15:56] ok, on which repo?
[15:57] the source one, right?
[15:57] the one you just upgraded
[15:57] ah ok
[15:57] you could do the source as well
[15:57] I mostly just want to see how big your indexes were
[15:57] versus how big they are in btree format
[15:57] oh ok
[15:58] I'll send the result in an email to you ;-)
[16:01] done
[16:02] ah, I have enough space on the disk of the source repo to back everything up :-) good news ;-)
[16:04] back everything up ^^ sorry, my English is bad :-p
[16:04] I'm doing that now, then I'll convert to dev2 as well so I can try the branch again...
[16:08] how do I revert a commit? is it the same way that you do it in subversion? merge the changes between new and old back into the working copy?
[16:08] bzr uncommit is your friend ;-)
[16:09] if you really want to lose your commit
[16:09] asmodehn: so from what I can see, your 31MB text index becomes 9.1MB in btree form
[16:09] and your 14MB => 4Mb
[16:09] MB
[16:09] which is good
[16:10] yep, wouldn't hurt ;-)
[16:11] if we want to add logging to the data transfer, I would need to run bzr from source and modify it, right?
[16:11] asmodehn: great, thanks, I'll take a look
[16:11] because at the moment the log doesn't display anything while the process seems to be stuck...
[16:11] dstanek: no worries ;-)
[16:12] dstanek: you could also do "bzr merge -r 10..9" if you want to revert something old
[16:12] but yeah, 'bzr uncommit' is probably what you want
[16:12] if it was recent
[16:12] (as in the last commit)
[16:13] asmodehn: so the index seems to go from almost 70MB down to 30MB, which is about what I would expect.
[16:13] Also, it does look like this repository could stand to have "bzr pack" run
[16:14] Though I'll mention that running it after the conversion means we don't have a simple way of bringing the indexes back
[16:14] it looks like uncommit can also take a -r argument - will it work if you specify a revision that is not the last?
[16:14] dstanek: it will uncommit everything back to that revision
[16:14] not just the changes for that rev
[16:14] for something old
[16:14] ah, I see
[16:14] you want "bzr merge -r REV..REV-1"
[16:14] sorry
[16:14] you want "bzr merge . -r REV..REV-1"
[16:14] yeah, I was thinking about that too... but I wonder if that might make one big pack, and therefore we might lose some details in debugging what is wrong in the data transfer
[16:14] the '.' is important
[16:15] asmodehn: it will create 1 big pack
[16:15] so we can look at that more later if you prefer
[16:15] jam: uncommit works for my immediate need
[16:16] jam: also, having one huge pack might not be good for me if there is no way to track the progress of the download...
[16:16] I prefer downloading slowly but knowing that it's working ;-)
[16:17] btw, same thing, is there an incremental pull / push?
[16:17] revision by revision?
[16:17] asmodehn: "bzr push -r X", "bzr push -r X+1"
[16:18] yes, but when I have 123 revisions to push one by one because they are big and they might fail, I don't want to type that 123 times ;)
[16:18] I was thinking of a "bzr ipush -r REV..REV" that would push revisions one by one
[16:19] and if it breaks in the middle, the last push that worked is still there...
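The push-revisions-one-at-a-time idea sketched above could look roughly like the following. This is a hypothetical helper, not a real bzr command; `push_cmd` is injectable so the stop-on-first-failure control flow can be shown (and tested) without a real repository, and the subprocess fallback is only an assumption about how one would shell out to `bzr push`.

```python
import subprocess

def push_revisions(first, last, push_cmd=None):
    """Push revisions one at a time, stopping at the first failure so
    the last successful push is preserved on the remote side.

    push_cmd(rev) should return True on success; by default (an
    assumption, for illustration) it shells out to "bzr push -r rev".
    Returns the last revision that pushed successfully, or None.
    """
    if push_cmd is None:
        push_cmd = lambda rev: subprocess.call(
            ["bzr", "push", "-r", str(rev)]) == 0
    last_ok = None
    for rev in range(first, last + 1):
        if not push_cmd(rev):
            # Stop here: everything up to last_ok is already on the remote.
            break
        last_ok = rev
    return last_ok
```

If a transfer dies at revision N, re-running with `first=N` resumes where it left off, which is the behaviour the "ipush" idea asks for.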
[16:19] but maybe that belongs more in a script around bzr than in bzr itself
[16:19] jam: ... my backup copy on the source repository is still going...
[16:20] asmodehn: for i in `seq REV..REV`; do bzr push -r $i; done
[16:20] sorry
[16:20] for i in `seq REV REV`; do echo $i; bzr push -r $i; done
[16:20] yeah, that's what I meant ;-)
[16:21] but you want to check for errors and stuff, so I guess I'll write a script for that...
[16:21] eventually
[16:21] at the moment I am pulling revisions 10 by 10 to know a bit where I am at...
[16:25] well, I'll mention that some of the data transferred is forgotten by the next copy
[16:25] so it will be slower
[16:27] yep, I started doing that when using 1.5, because for long transfers it used to error out
[16:28] where few by few it was working
[16:28] and since I upgraded to 1.8 it doesn't error as much, which is good ;-)
[16:28] but it seems to get stuck longer for some reason :-s
[16:32] 1.8 will switch to grabbing whole indexes from time to time, which might be what you are seeing
[16:32] 1.5 would only grab incrementally
[16:32] but could end up grabbing 2x the inventory under certain circumstances
[16:33] the dev2 format solves both of those issues
[16:33] cool ;) good to know
[16:33] backup copy almost finished...
[16:55] asmodehn: how's it going?
[16:55] jam: branching now...
[16:55] conversion went without problems
[16:55] you're using -Dhpss for me, right?
[16:56] yep
[16:57] but looking at the log... the copy of the index is fine...
[16:58] it looks like the copy of the pack doesn't generate a lot of transfer on the network though.. I am a bit confused...
[16:59] copying the pack should be saturating your network
[16:59] my peak is 1.2Kbps now ...
[16:59] are you sure about that?
[16:59] looking at iftop on the source network now..
[17:00] source server I mean
[17:00] it was high during the indexes
[17:00] my only other guess is that you are running into swap issues on the source
[17:00] oh^
[17:00] mmm I'll check my RAM..
[17:00] as we are trying to copy 20GB across
[17:01] nope, everything looks fine
[17:01] source:
[17:01] Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
[17:01] Mem: 3366488k total, 3221756k used, 144732k free, 1672k buffers
[17:01] Swap: 2650684k total, 702536k used, 1948148k free, 558160k cached
[17:01] dest:
[17:01] Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
[17:01] Mem: 6110456k total, 5198228k used, 912228k free, 78484k buffers
[17:01] Swap: 1951888k total, 72k used, 1951816k free, 4550312k cached
[17:02] and I am still branching -r1 from a repo on source_dev2
[17:02] \ [=============================       ] Copying content texts 3/5
[17:03] 700MB in swap seems like a lot to me
[17:03] but I don't know what load you run on that server
[17:03] well, these servers are running a bunch of other things ^^
[17:04] so you are still seeing very little data being copied, right?
[17:04] but that swap used is not increasing right now anyway
[17:04] no transfer on the network
[17:04] well, very little
[17:04] and with -Dhpss
[17:04] you might try top on the running process as well
[17:04] it is also possible that your disks are being heavily loaded ATM
[17:04] log stuck at 85.410 result: ('readv',)
[17:04] 105.947 1099613 body bytes read
[17:04] 108.023 hpss call w/readv: 'readv', '/home/autobzr/deployBZR_dev2/.bzr/repository/packs/f65de233ce31cbdace7370611f49937c.pack'
[17:04] 108.024 11675 bytes in readv request
[17:05] asmodehn: can you send the 'ls' results for the remote repository?
[17:06] I would assume that is a big file, but it isn't in the listing you sent
[17:06] (since that is the local repo)
[17:08] this pack is 1.1GB
[17:10] source:
[17:10] PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
[17:10] 12072 autobzr 20 0 1925m 1.9g 2404 S 0 58.4 0:03.46 bzr
[17:10] dest:
[17:10] PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
[17:10] 25889 autobzr 21 0 228m 159m 3956 S 0 2.7 0:06.94 bzr
[17:11] so it is consuming 1.9GB, which may not be terrible, but isn't a great sign
[17:11] yep
[17:11] mostly for when you hit the 20GB files
[17:11] It is possible that it is taken up by other info
[17:11] but I'm expecting it is the actual data that will be transmitted
[17:12] asmodehn: I think it is going to have serious problems soon. I'm seeing:
[17:12] backing_bytes = ''.join(bytes for offset, bytes in
[17:12] self._backing_transport.readv(self._relpath, offsets))
[17:13] which means... "read the requested bytes, and buffer it into a single string"
[17:13] and return that
[17:13] ah...
[17:13] aka, buffer 20GB in RAM before sending it over the wire
[17:13] that will not work :-p
[17:15] so... the specific issue is that since 1.6 we are able to request the streams for multiple files simultaneously
[17:15] in 1.5 we could only request one file at a time
[17:15] which makes things way faster in most cases
[17:15] but in your case
[17:15] it means we end up requesting 90% of the pack file in one go
[17:15] and you end up dying
[17:15] argh
[17:16] yep...
[17:16] we could probably work around this a little bit in the client
[17:16] by capping the size of a readv request
[17:16] but that also means there is no easy perfect solution, right?
[17:16] i.e., how much you cap will depend on the RAM size, the bandwidth, etc.
[17:17] yeah
[17:17] but something like 500MB is generally a safe bet
[17:17] also, could you try this over sftp?
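The capping workaround jam describes, splitting one giant readv request into bounded batches, can be sketched roughly like this. This is a hypothetical helper for illustration, not the actual bzrlib code; the 500 MB default just echoes the figure from the conversation.

```python
def cap_readv(offsets, max_bytes=500 * 1024 * 1024):
    """Split a list of (offset, length) readv requests into batches
    whose total requested size stays under max_bytes, so the server
    never has to buffer more than that in one response."""
    batch, batch_size = [], 0
    for offset, length in offsets:
        # Start a new batch when adding this request would exceed the cap.
        # (A single request larger than the cap still goes out alone.)
        if batch and batch_size + length > max_bytes:
            yield batch
            batch, batch_size = [], 0
        batch.append((offset, length))
        batch_size += length
    if batch:
        yield batch
```

Each batch would then be issued as its own readv call; the trade-off, as noted above, is that the right cap depends on server RAM and bandwidth.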
[17:17] I realize some bits will be slower
[17:17] yep
[17:17] but I think the SFTP spec requires that we transfer in 32kB chunks
[17:17] so it won't actually hit this
[17:17] oh ^^
[17:17] You don't have to
[17:17] worth a shot ;-)
[17:18] it's ok ;-)
[17:18] but it would be worth having the data point
[17:18] yep
[17:18] and I am pretty desperate atm to be honest... if it doesn't work I'll go back to 1.5 and transfer revision by revision to avoid hitting errors too often...
[17:18] I was in this code recently, and I'm pretty sure the client still spools up the whole request as well
[17:20] here we go, sftp...
[17:26] I may have a quick fix on the readv stuff, I'll let you know
[17:26] k
[17:26] sftp still copying inventory texts...
[17:27] asmodehn: I'll just mention that bzr hasn't really been designed around handling 10GB trees
[17:27] We'll try to accommodate as we can
[17:27] yeah, I would guess so ;)
[17:27] no worries
[17:27] but it isn't a base design criterion
[17:28] I am just using it because it's better than copying 4GB zip files around every time there is just a little difference...
[17:28] have you considered unison/rsync?
[17:28] i.e. an easy way to get just patches transparently
[17:28] mmm, don't know about unison...
[17:28] let me google that ;-)
[17:35] mmm, looks interesting... however, I need more than what it does, such as a proper SCM that lets me go back in revision history ...
[17:35] I feel like I need a BZR multisite :-p
[17:35] lol
[17:38] mmm, sftp doesn't load the source server like bzr+ssh did...
[17:38] and now there is still traffic on the pipe... 200 - 300 Kbps
[17:38] yeah
[17:38] again, requests are made at 32kB per hunk
[17:38] and the sftp code was updated to stream that back to the next layer as well
[17:39] while the bzr+ssh code does a lot of "finish reading the response, then return"
[17:39] k
[17:40] so what's the plan from now on..? I guess I'll use sftp from now on, until the bzr+ssh code is improved to handle big packs?
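The contrast jam draws between the two transports, bzr+ssh joining the whole readv response into one string versus sftp streaming 32kB hunks, can be illustrated with a toy pair of functions. These are simplified sketches, not the real transport code; `read_chunk` stands in for whatever actually reads bytes at an offset.

```python
def readv_buffered(read_chunk, offsets):
    """The problematic pattern: join every piece into one string,
    holding the entire response in memory before returning it."""
    return b''.join(read_chunk(off, length) for off, length in offsets)

def readv_streamed(read_chunk, offsets, chunk_size=32 * 1024):
    """The sftp-like pattern: yield fixed-size hunks as they are read,
    so peak memory stays near chunk_size no matter how big the total is."""
    for off, length in offsets:
        done = 0
        while done < length:
            n = min(chunk_size, length - done)
            yield read_chunk(off + done, n)
            done += n
```

With a 20GB pack, the first function needs roughly 20GB of RAM (the 1.9g RES seen in top above), while the second never holds more than one hunk at a time.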
[17:42] and I keep using my old Knits repo until the dev2 format is finalized, tested, and announced?
[17:44] well, I have a patch for it to be released in 1.9, which may happen by this Friday
[17:44] so you won't exactly be waiting long
[17:44] can you send me the bzr logs with the dev2 format?
[17:44] I'm just curious how much of an improvement we see
[17:44] (if any)
[17:45] sure ;-)
[17:45] just the destination though... the source is a bit loaded by other bzr processes running :-s
[17:45] ok?
[17:45] would that be enough for you?
[17:46] mmm, hang on, maybe I can get the source one too if you need it, actually; the process had been stuck since two days ago :-p
[17:47] no, just the local one is fine
[17:47] I don't think you were running -Dhpss on the server anyway
[17:48] I'm not specifically interested in the remote one either
[17:48] yeah, true, no -Dhpss..
[17:48] but but but ...
[17:48] my .bzr.log is now only 28K :-s
[17:48] .bzr.log.old
[17:48] haaa :-)
[17:49] I believe we keep 1 level of .bzr.log
[17:49] just to prevent it from consuming lots of disk space
[17:49] at somewhere around 4MB we copy it to .old
[17:49] and start fresh
[17:50] k, I am bzipping both now... I'll send them to your email
[17:50] thanks a lot for the help, by the way ;-)
[17:50] np
[17:51] I'll get 1.9 on my etch next week then, I guess ;-)
[17:51] let me know if you want me to test something ;-)
[17:52] asmodehn: well, I'll likely have some readv code that I might have you play with
[17:52] I'm still putting together a "summary of what is happening" for the mailing list
[17:52] cool ;-) thanks ;-)
[18:02] asmodehn: can you grab a branch of bzr.dev to start with?
[18:05] yep
[18:05] url?
[18:06] got it
[18:07] 'bzr branch lp:bzr' should be fine
[18:07] branching from bazaar-vsc.org/bzr/bzr.dev atm...
[18:08] same thing
[18:09] lp:bzr is a mirror of that url
[18:09] just a lot shorter to type
[18:09] :)
[18:09] hehe, cool. I've never used lp before though :-p I'm more of an SVN / BSD guy :-p
[18:09] I dev C++ too, not much python
[18:10] but I've heard it's quite simple; never had time to get my head around it though..
[18:10] yeah, I came from C++ => python
[18:11] all depends on what you are doing
[18:11] yep
[18:12] one weird thing, the last .bzr.log seems to show a different bandwidth
[18:12] like 100kB/s rather than 200kB/s
[18:12] is that real?
[18:12] huh?
[18:12] * asmodehn is trying to think of what happened 2 minutes ago...
[18:13] any clue about where in the log that changes?
[18:14] http://dpaste.com/87602/
[18:15] weird...
[18:15] though I also see:
[18:15] 43.321 result: ('ok',)
[18:15] 391.397 22190509 body bytes read
[18:15] which I believe is about 60kB/s
[18:16] so your bandwidth seems rather variable
[18:16] the locations are quite far away from each other, and a lot of other things might happen on the network, so that wouldn't surprise me
[18:18] I can't recall in which order I stopped / restarted the different bazaar processes on the machines in the last hour, to know if it affected something or not...
[18:18] I was just trying to figure out if the new format was saving you anything, but it looked slower at first glance
[18:18] but if your bandwidth can vary 2x
[18:18] I see
[18:18] I can say it is copying less data
[18:19] do you have bzr.dev yet?
[18:19] Branched 3806 revision(s).
[18:19] asmodehn: ok, try doing "bzr pull --overwrite http://bzr.arbash-meinel.com/branches/bzr/1.9-dev/remote_readv_sections"
[18:19] ok ;-)
[18:20] and then run
[18:20] make
[18:20] and then you should be able to run "path/to/bzr branch -r1 -Dhpss bzr+ssh://..." for me
[18:21] not a branch... maybe a url problem??
[18:21] oh, and use "-Dindex" if you would
[18:21] k
[18:21] ah, just a sec
[18:21] I just forgot to publish it :)
[18:21] hehe :-p
[18:21] done
[18:21] try again
[18:21] pulling...
[18:22] M NEWS
[18:22] M bzrlib/help_topics/en/hooks.txt
[18:22] M bzrlib/transport/__init__.py
[18:22] M bzrlib/transport/remote.py
[18:22] M doc/en/user-guide/hooks.txt
[18:22] All changes applied successfully.
[18:22] Now on revision 3806.
[18:24] asmodehn: sounds right
[18:24] huh? 3806 -> pull -> 3806?
[18:24] different tips
[18:24] different one then ;-)
[18:24] k
[18:24] Change the default readv() packing rules.
[18:24] ah, make... what do I need to run that?
[18:25] gmake, gcc, python-pyrex probably
[18:25] k
[18:25] I could give you the .c files if you don't want to install python-pyrex
[18:25] nah, it's ok, I can
[18:26] you don't *have* to, because bzr can fall back to pure python
[18:26] but certain ops are slower
[18:26] k
[18:26] I'll go package hunting ;-)
[18:26] brb
[18:26] though not at the scale of copying 20GB across a remote network
[18:26] asmodehn: so if it is difficult, you can skip it
[18:35] k, bunch of dev packages + make + a bunch of warnings, but it seems fine ...
[18:40] another small advantage of the patch
[18:40] if it works, you should get a little bit more progress
[18:40] well, the spinner should spin more often
[18:40] k
[18:40] I don't think it will log much more
[18:40] just the -Dindex stuff, I guess...
[18:41] well, the start was pretty fast
[18:42] I guess dev2 indexing improvements?
[18:42] probably
[18:42] now it seems stuck as usual, but my network usage is around 1.2Mbps :) I feel better
[18:42] :)
[18:42] can you get to the remote server to check the mem usage?
[18:43] It should certainly be << 1G
[18:43] yep
[18:44] PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
[18:44] 6942 autobzr 15 0 124m 120m 2404 S 0 3.7 0:00.39 bzr
[18:44] 120M sounds a lot better than 1.9G
=== mw is now known as mw|food
[18:45] yep, it does ;-)
[18:47] the data transfer rate oscillates between 900Kbps and 1.2Mbps ;-) nice nice...
[18:48] note the spinning bar doesn't seem to turn a lot in between different readvs though...
[18:49] 91.631 result: ('readv',)
[18:49] 534.986 52094902 body bytes read
[18:49] in between, the bar didn't move
[18:50] I would expect the readv to request data, then when it comes back it will spin, then it will request more, back and forth
[18:51] mmm
[18:51] the actual transfer code still buffers stuff
[18:51] I don't see the bar moving while it's transferring data through the network...
[18:51] it is just that in between we are smarter about it
[18:51] yeah, probably
[18:51] k
[18:51] well, we make smaller requests
[18:51] but anyway, much better now ;)
[18:51] can I get the .bzr.log for it?
[18:52] sure ... I'll put that in a mail ;-)
[18:52] also, I'm going to have 1 more version for you to test in a sec
[18:52] sure
[18:52] I'll stop that now then
[18:52] and send the mail, k?
[18:53] yeah
[18:53] ok, do another "pull --overwrite ..."
[18:53] (you can use --remember if you don't want to keep typing the URL)
[18:57] it's ok, I like my shell history ;-)
[18:59] building extension modules.
[18:59] python setup.py build_ext -i
[18:59] Cannot build extension "bzrlib._dirstate_helpers_c" using
[18:59] your version of pyrex "0.9.4.1". Please upgrade your pyrex
[18:59] install. For now, the non-compiled (python) version will
[18:59] be used instead.
[18:59] running build_ext
[18:59] huh?
[19:00] ah but you just changed the transport in bzr lib, so that pyrex stuff shouldnt matter I guess [19:00] you shouldn't actually have anything to do there, but whatever [19:00] asmodehn: right pyrex 0.9.4.1 has a known bug [19:01] so we don't compile the extension [19:01] because when we do it segfaults [19:01] ah ok ^^ just the version in my etch packages :-s [19:01] anyway I ll branch -r -Dhpss -Dindex again... [19:01] I really like ~line 871 in bzr.log: [19:01] 25.841 expanding: 5d3c775ea20da5c234f29fb643c5d8df.tix offsets: [13, 14, 15 ... [19:01] 25.841 not expanding large request (1424 >= 16) [19:01] with 1424 offsets in there :) [19:02] heh :p [19:02] interestingly for this specific branch [19:02] the prefetch doesn't help much [19:02] we are either just reading a tiny bit [19:02] or reading a huge amount at once anyway [19:03] oh added some debug message did you ;) [19:03] yeah [19:03] -Dindex is that part [19:03] but I don't think it is in 1.8 [19:03] well, I *know* it isn't in 1.8 [19:04] as it just landed in bzr.dev 3805 [19:04] mmm error [19:05] bzr: ERROR: exceptions.KeyError: 1566 [19:05] any idea ? [19:06] quick guess is that it doesn't like the partial readv I worked on, let me look a bit [19:06] File "/home/autobzr/bzr.dev/bzrlib/btree_index.py", line 1046, in iter_entries [19:06] node = nodes[node_index] [19:06] it is possible that a page used to be cached, so we didn't request it [19:06] but when we wanted to use it [19:06] it was gone [19:07] asmodehn: can you send that log?
[19:07] Looking at the code it should always have those keys [19:07] but I might have a small bug in the new code [19:07] yep I ll do that [19:07] key 1566 means we are requesting really far down in the index [19:08] but that still shouldn't be failing [19:08] before the last pull, the cap was at 50MB, so it probably never hit it with your indexes [19:08] but I dropped it to 5MB [19:08] and that may have triggered something [19:08] since that index is 5.8MB in size [19:11] sent ;-) [19:11] you're clogging my inbox :) [19:11] hehe sorry :-p [19:13] is that one very big? [19:14] no last one was 15Ko [19:14] KB [19:14] weird, usually your mail gets here faster [19:15] my gmail said sent 4 minutes ago [19:15] mail is not that big [19:16] starts with "Logs with second patch :" [19:16] maybe some of your client / server thinks I am a spammer :-p [19:16] well all your others came through fine [19:16] I can check my spam folder [19:17] there it is [19:17] just took a bit [19:17] ;-) [19:20] ah, I see the problem [19:20] just a sec [19:20] (it was popping a node off 2x) [19:20] so it would skip that at a boundary point [19:21] he ;) no rush ;) [19:24] is there any interest in something like gitosis for bzr? [19:24] asmodehn: ok, pull up and try the new version [19:25] synic: there is contrib/bzr_access [19:25] ok [19:25] it is probably not as complete as gitosis, but it describes how to set up multiple ssh keys to access one account, etc.
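jam's diagnosis ("it was popping a node off 2x, so it would skip that at a boundary point") describes a classic iterator bug. The function below is a hypothetical reconstruction of that bug class, not bzrlib's actual code: when a batch boundary is reached, the buggy variant consumes the next item from the iterator a second time and throws it away, silently skipping one request per boundary.

```python
def batch_requests(offsets, batch_size, buggy=False):
    """Split an iterable of requests into batches of ``batch_size``.

    Illustrative sketch of the "popped a node off twice" bug class:
    with ``buggy=True``, one extra item is consumed from the iterator
    at each batch boundary and lost.
    """
    it = iter(offsets)
    batches, cur = [], []
    for item in it:
        cur.append(item)
        if len(cur) == batch_size:
            batches.append(cur)
            cur = []
            if buggy:
                # BUG: the boundary already consumed ``item`` from the
                # iterator; pulling another element here drops it.
                next(it, None)
    if cur:
        batches.append(cur)
    return batches
```

The correct version simply lets the loop pull the next item itself; the buggy one loses every element that immediately follows a full batch, which is exactly the kind of off-by-one that only shows up once a size cap (here, the 5MB readv cap) starts forcing boundaries.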
[19:26] asmodehn: 3811 refactor for clarity [19:26] though 3810 is the important change [19:27] mmm [19:27] sorry :p [19:27] File "/home/autobzr/bzr.dev/bzrlib/transport/remote.py", line 346, in _readv [19:27] next_offset = [cur_offset_and_size.next()] [19:27] NameError: global name 'cur_offset_and_size' is not defined [19:27] stupid bugs [19:27] hehe ;) [19:28] asmodehn: this time I double checked and ran the test suite, 3812 [19:28] I really should put the new code under test [19:28] but I wanted to make sure it was worth the effort [19:28] k ;) no worries let me know when I should pull and test ;) [19:28] now [19:29] roger ;-) [19:30] branching... [19:31] asmodehn: so if I follow the logs correctly, the old repository was taking 900s to get to the point where it could start text.pack data. The last test showed you hit that at 90s [19:31] that seems.... way too good to be true [19:31] mmm [19:31] k error again... [19:32] what is the error now? [19:32] File "/home/autobzr/bzr.dev/bzrlib/btree_index.py", line 1050, in iter_entries [19:32] node = nodes[node_index] [19:32] KeyError: 1566 [19:32] 81.151 return code 4 [19:32] hm... same error [19:32] yeh... [19:32] and about the time to arrive to text.pack yes it s been really fast... [19:32] can you send that log again [19:33] I want to check something [19:35] asmodehn: bah, I know the problem [19:36] asmodehn: 3813 ready for you to test [19:37] log sent anyway ^^ [19:37] I ll pull that ;) [19:39] branching... [19:40] asmodehn: just as a point of comparison, could you try to branch the original repo again as well? [19:40] ? [19:40] just to see if the 900s => 90s is real [19:40] use my code [19:40] ah the one not dev2 correct ?
[19:41] right [19:41] I would like you to do both [19:41] so I can compare [19:41] the 900s may have been a fluke [19:41] mmm I still have the remote knit and the remote dev2 [19:41] or may be real [19:41] but the local ( destination ) is only dev2 now [19:41] asmodehn: that is fine [19:41] I'm more concerned about remote [19:41] maybe 900 is when I lost patience and stopped it because it was sending nothing ? [19:41] k then ;-) [19:42] I can compare both , and then send you the log... [19:42] just branching the same stuff from both knit and dev2 repo... [19:43] dev2 -> dev2 is now at 303 seconds... [19:43] is that enough to compare ? [19:43] should I stop that one and do the old one just after ? [19:44] the old knit repo I mean [19:44] knit -> dev2 [19:45] Odd_Bloke, ping [19:47] jam : ok I saw the line you re talking about around 90s... I ll cancel my branching now from dev2 and start to branch from knit again [19:48] jam: argh IncompatibleRepositories: different rich-root support [19:48] thats strange [19:49] one weird thing : KnitPackRepository [...]/.bzr/repository/ [19:50] RemoteRepository [...]/.bzr/ [19:51] File "/home/autobzr/bzr.dev/bzrlib/repository.py", line 2506, in _assert_same_model [19:51] "different rich-root support") [19:51] yeah, I understand the code path. I just don't know why it would think that. [19:51] just a sec [19:51] sure :) [19:51] can you get to the remote machine? [19:51] yep [19:51] what does "bzr info" say [19:52] Shared repository with trees (format: rich-root-pack) [19:52] oj [19:52] ok [19:52] local says : Shared repository with trees (format: development2) [19:53] so my upgrade hack turned it into something closer to "pack-0.92" without rich-root [19:53] which doesn't matter for testing, except for when you go between formats [19:53] no big deal [19:53] I'll get you something you can use [19:54] mmm what about reverting my local repo to the old knitpack now that we have an answer for the slowness problem ?
[19:54] I can wait a bit before using the new dev2 repo format... [19:55] sure [19:55] if you go to the repo [19:55] you can see files/directories that end in -gi [19:55] just move the existing file/dir to -bi [19:55] and then mv -gi => normal [19:55] asmodehn: does that make sense? [19:56] yep [19:56] format and indexes [19:56] and packnames [19:56] right === mw|food is now known as mw [19:57] btw is there any doc around about making the repo smaller... some kind of good practice ? [19:57] I am wondering if it gets rid of the packs that are not useful anymore, and that kind of stuff.. [19:59] ok reverting to rich-root-pack done [19:59] now branching again from knitrrp to knitrrp to see how long it takes before we hit the .pack transfer... [20:00] sure [20:00] I think you'll still need to use my code [20:00] since otherwise you'll hit the wall again [20:00] yep I m using your version of bzr [20:00] with the debug and all the stuff ;-) [20:04] is there any way to see the status of a submitted merge request for the bazaar project? [20:05] dobey: http://bundlebuggy.aaronbentley.com ? [20:05] possibly under Pending, or under Merged [20:06] jam : first readv on pack at 296 secs... [20:06] ah [20:06] Or if you are on the mailing list, there will be a link to the merge request [20:06] which is a stable URL [20:06] yeah, i'm not on the list [20:06] asmodehn: so that is 296 => about 90? 
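The manual rollback jam describes ("move the existing file/dir to -bi, and then mv -gi => normal") can be sketched as a small script. Note the `-gi`/`-bi` suffixes come from jam's one-off upgrade hack in this experiment; they are not a general bzr convention, and the function below is an illustration, not part of bzrlib:

```python
import os

def swap_suffix(repo_dir, live_suffix='-gi', backup_suffix='-bi'):
    """For every entry in ``repo_dir`` ending in ``live_suffix``, move
    the current suffix-less entry aside to ``backup_suffix`` and then
    promote the suffixed entry to the normal name.

    Sketch of the format/indexes/pack-names swap described above.
    """
    for name in sorted(os.listdir(repo_dir)):
        if not name.endswith(live_suffix) or name == live_suffix:
            continue
        base = name[:-len(live_suffix)]
        cur = os.path.join(repo_dir, base)
        if os.path.exists(cur):
            # Keep the current version as a backup, e.g. format -> format-bi
            os.rename(cur, cur + backup_suffix)
        # Promote the staged version, e.g. format-gi -> format
        os.rename(os.path.join(repo_dir, name), cur)
```

As asmodehn summarizes, the entries involved are the format marker, the indexes, and the pack-names file; running the swap in reverse (promoting `-bi`) undoes it.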
[20:06] log files please :) [20:06] I remember I saw a 92 [20:06] yep they re coming ;-) [20:07] asmodehn: oh and the output of: du -ksh .bzr/repository/indices ; du -ksh .bzr/repository/indices-bi [20:07] k [20:07] actually 'du -kshc' if you don't mind [20:07] sure ;) [20:08] so I get the impression your "latency" bound is actually [20:09] 1) Bandwidth limited for indexes [20:09] followed by [20:09] 2) latency bound while the server queues up GB of data to return [20:09] though not really network latency [20:09] more, "takes too long and uses all of swap" [20:11] yep [20:11] the uses all of swap appeared only when I upgraded from 1.5 [20:11] before it used to transfer then error [20:11] so I was transferring few revision at a time and that went fine... [20:11] but recently I think the data became too huge [20:12] note that my change isn't going to make it work if you have a single file that is too big [20:12] like a single 600MB file is likely to clog things up [20:12] yep and thats fine ;-) [20:12] but it looks more like you have *lots* of moderately sized files [20:12] now that it doesnt error anymore on a long transfer ;-) [20:13] yep [20:13] asmodehn: well, you haven't tested a transfer to completion yet, have you? [20:14] also, you say "first readv on pack" but is that after reading the text indexes? [20:14] or is that reading the inventory [20:14] there are a few different stages [20:14] judging by your times, it is after reading .tix [20:14] I just wanted to make sure [20:15] hehe true havnt tested a transfer till the end yet.. [20:15] about the stage before pack I am not sure... you got the log in your email now ;-) [20:16] asmodehn: why that I do :) [20:16] hehe :-p but I think it s after the text index yes ;-) remembering what the progress bar was telling me [20:20] jam: so I guess we re done for today are we ?
I ll keep using sftp waiting for the awesome new version and new repository format ;) [20:21] I am getting hungry hehe [20:21] I think we're done [20:21] enjoy your meal [20:21] thanks ;-) [20:22] and thanks a lot for the help ;) ill go eat and then I clean all remains of my experiments ;) [21:04] dobey: your patch is here, by the way: http://bundlebuggy.aaronbentley.com/project/bzr/request/%3C1225157843.25420.7.camel%40lunatari%3E [21:05] yeah, i found it [21:05] thanks [21:20] Howdy, I'm trying to retrieve the commit message for a revision [21:20] it looks like lf = log.LineLogFormatter(file_handle) [21:20] m = lf.message(rev) [21:20] should work, but if so, I haven't discovered what rev should be [21:24] rev from revno, rev = b.last_revision_info() doesn't work [21:25] ktenney: you need a Revision, not a revision id [21:25] repository.get_revision(rev_id) might be it [21:25] revno, rev_id = b.last_revision_info() [21:26] if you don't need the revno then just b.last_revision() [21:38] james_w: so I need to look at repository.Revision(self, _format, a_bzrdir, control_files, _revision_store, control_store, text_store).get_revision(rev_id) ? [21:39] ktenney: no, I don't think so [21:39] james_w: revision commit messages aren't fetchable from a working tree? [21:39] I haven't found any shortcut [21:40] wt.branch.repository.get_revision(rev_id) [21:42] ktenney: [21:42] t = bzrlib.workingtree.WorkingTree.open('.'); t.lock_read(); rev = t.branch.repository.get_revision(t.last_revision()) [21:43] msg = rev.message as well I think [21:43] morning lifeless [21:43] ktenney: we use factory methods a lot in bzr, because the library supports working directly with multiple VCS systems [21:43] hi james_w [21:43] james_w: how goes $stuff [21:44] lifeless: good thanks. you? [21:44] lifeless: I met with the code team and others yesterday, and we got somewhere on how the LP stuff should look.
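lifeless's one-liner for ktenney can be written out as a small function. This follows the recipe given in the chat (open the working tree, take a read lock, then `branch.repository.get_revision(...)` and read `.message`); it assumes bzrlib is importable and adds the lock release that the one-liner omits:

```python
def last_commit_message(tree_path='.'):
    """Return the commit message of the last revision of the working
    tree at ``tree_path``.  Sketch based on the recipe above; needs a
    bzr working tree and an importable bzrlib to actually run.
    """
    from bzrlib.workingtree import WorkingTree

    t = WorkingTree.open(tree_path)
    t.lock_read()
    try:
        # A revision id is just a string key; get_revision() turns it
        # into a Revision object, which carries .message.
        rev = t.branch.repository.get_revision(t.last_revision())
        return rev.message
    finally:
        t.unlock()
```

This is the "factory methods" point lifeless makes: you never construct Repository or Revision directly, you go through `WorkingTree.open()` and the accessors hanging off it.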
[21:48] james_w: lifeless: got it, thanks [21:50] james_w: great [22:16] jam, i was planning today to do more reviews [22:16] and i have a meeting [22:16] basically just that [22:16] do you ever not have a meeting ? [22:16] :) [22:17] :/ [22:17] Today I worked on 3 things. [22:17] the timezones make it more disruptive [22:17] 1) Working with vila to talk about the inventory corruption he saw, and possibly what it needs to fix it [22:18] Today I've just sent the autopack RPC patch for review (so if poolie is doing reviews... *hint*), and am in the process of landing other approved patches from this cycle. [22:18] oh, yay [22:18] well done [22:18] i'll do it first [22:18] 2) I worked with someone who has a 16GB repository [22:18] and was having a hard time doing "bzr branch" [22:18] it turns out that our ability to discuss multiple file histories [22:18] I also want to look at jam's remote readv merge/rfc (first impression is that we should approve it, and start work on a streaming readv as jam suggests) [22:19] was requesting a readv() that was about 10GB long [22:19] which would be buffered on the server [22:19] before sending [22:19] and then buffered on the client [22:19] before processing [22:19] which... doesn't work so well [22:19] and sent that to the list as spiv mentions [22:19] 3) We also played with btree indexes [22:20] which show up nicely for him [22:20] so I'm encouraged to help push a --format=1.9 in [22:20] And for future work, still trying to get the retry code working for fetch() [22:21] (end of text :) [22:22] I've a LeafNode I'm reasonably happy with [22:22] ok, nice [22:22] working on handling too big content and spilling into a InternalNode with LeafNode children [22:22] lifeless: did you see my email about that?
I think fullpath is a better route than parent_id,basename [22:22] considering sidetracking to fix ISO storage [22:22] jam: haven't seen it yet, no [22:23] lifeless: np, I sent it about 15 min ago [22:28] spiv: Is there a reason you need to special case everything, rather than just defining "autopack(self)" on Repository and using it instead of target._pack_collection.autopack()? [22:29] It at least seems simpler than special casing the InterPackToRemotePack code [22:29] jam: which special casing are you referring to? [22:29] InterPackToRemotePack._pack [22:29] Ah, I see. [22:29] it seems like you duplicate most of the def _pack code [22:30] If you uncommit the last revision, you'll see why ;) [22:30] just to get the autopack() called [22:30] It's mainly because there was a half-done check_references RPC as well that I have set aside for now. [22:30] ah, I saw that you moved that [22:30] and I wondered why [22:31] also, your "_get_target_revision_index" [22:31] Hmm, and I guess I've pushed some of the other _pack differences into RemoteRepo.autopack [22:31] blindly uses "self.target._real_repository" [22:31] without calling _ensure_real() [22:31] are we sure that is safe? [22:31] Yes, because is_compatible will have called _ensure_real [22:31] ah, ok === spm_ is now known as spm [22:32] Anyway, I need to get going, but I saw those things to at least think about [22:32] (sadly! There's no better way to get the remote format atm, though) [22:33] jam: thanks [22:34] anyway, there is a lot of duplication in the _pack code, that if you could just define .autopack() seems like it would go away [22:35] Why doesn't InterOtherToRemote take precedence over InterPackRepo ? [22:35] anyway, gotta go [22:35] have a good day [22:49] using bzrlib, how do I get the files added/removed in a commit, and the diff / diffstat? [22:50] tree.iter_changes(other)... [22:50] poolie: and now for a non bzr hacker?
[22:50] poolie: I have a branch that I want to compare mainline revisions of [22:51] b = Branch.open('thing') [22:52] b.lock_read() [22:53] tree1 = b.repository.revision_tree(revid_1) [22:53] tree2 = b.repository.revision_tree(revid_2) [22:53] changes = tree2.iter_changes(tree1) [22:53] changes=list(changes) [22:54] for c in changes: print c [22:54] * thumper goes to a python shell [22:54] show_diff_trees(tree1, tree2) [22:54] season to taste [22:55] poolie: thanks [22:59] thumper: 'bzr st -r x..y' ;) [23:06] Anyone here using bzrweb, the HTTP frontend thing? I've got: ImportError: cannot import name LocalBranch [23:08] LeoNerd: I tend to use loggerhead [23:08] Hrm... [23:08] I'll let you into a secret - I want to embed it in my existing website generation system [23:09] Which is perl. Having looked over the bzrweb code, it looks quite easy to drop that in by Inline::Python, and do a small bit of extra work [23:20] LeoNerd: i think that error means it's out of date with bzr trunk [23:21] Ya.. that seems likely [23:21] To be honest I may just take the ideas in the code here and reimplement them myself [23:22] Write some small bits of connector logic in Inline::Python and do the rest from perl [23:23] is there something about bzrweb that makes it easier to do this than with loggerhead? [23:23] Not really. [23:23] It's just bzrweb was a small simple chunk of code I could easily read and work out what it does [23:24] I'm sure I could do it all myself, it'd just mean me spending a while flicking about in the bzrlib docs to find stuff [23:24] And possibly reinventing tricks this code already has [23:26] greetings... I was just asking in launchpad: is there a (reasonable) way to have a server with fine-grained permissions to different projects inside a repository?
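poolie's recipe for thumper can be collected into one function. The calls (`Branch.open`, `repository.revision_tree`, `iter_changes`, `show_diff_trees`) are all taken from the chat; the explicit `to_file` argument and lock release are my additions, since poolie was typing from memory and omitted them. Assumes an importable bzrlib and two valid revision ids:

```python
import sys

def changes_between(branch_path, revid_1, revid_2, to_file=None):
    """List the per-file changes between two revisions of a branch and
    write a diff between them.  Sketch of the recipe above; needs a
    real bzr branch and an importable bzrlib to run.
    """
    from bzrlib.branch import Branch
    from bzrlib.diff import show_diff_trees

    b = Branch.open(branch_path)
    b.lock_read()
    try:
        tree1 = b.repository.revision_tree(revid_1)
        tree2 = b.repository.revision_tree(revid_2)
        # Each change tuple describes one file: added, removed,
        # renamed, or modified.
        changes = list(tree2.iter_changes(tree1))
        show_diff_trees(tree1, tree2, to_file or sys.stdout)
        return changes
    finally:
        b.unlock()
```

Season to taste, as poolie says; for a quick interactive check, `bzr st -r x..y` gives much the same information from the command line.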
[23:26] they pointed me to bzr_access but I see that is incomplete, only supporting [/] repo [23:26] I'm just wondering if there might be something similar to the subversion authentication available [23:27] jisakiel: why not have different repositories for each project? [23:27] hmmmm [23:28] bzr_access does not seem to support that? [23:29] I really like the launchpad model [23:30] being a personal server I don't want to give +w to everybody who is able to push via bzr (as I track different projects there) [23:30] and I'd rather not have shell accounts accessible [23:30] jisakiel: launchpad has a repository per branch [23:31] but how do they authenticate then [23:31] they just told me they use a custom ssh client [23:31] er... ssh server ^^ [23:32] jisakiel: hmm, I thought it was possible to configure bzr_access differently [23:33] as far as I understand [23:33] jisakiel: ah, there's also bzr_ssh_path_limiter in the contrib/ directory [23:33] in 1.8?? [23:33] Yes, and probably earlier too [23:34] huh, don't see it when I do apt-get source bzr [23:34] Oh, huh. [23:34] It's not in 1.8 [23:35] It is in bzr.dev [23:35] d'oh [23:35] It had originally been posted on the mailing list for quite a while before that, I guess that's why I was confused. [23:37] jisakiel: you can always have different SSH keys for different repositories. [23:37] point is [23:37] users would be using their own ssh key [23:37] just like in launchpad [23:38] and I, somehow, would be able to give them + or - W [23:38] just like editing svnauth [23:38] Argh, Loggerhead is unhappy. [23:39] I guess the only "reasonable" way is to explode the number of groups in the server [23:39] one per project [23:39] and use filesystem permissions [23:40] but *urgh* [23:41] jisakiel: there's another possibility -- if you're just interested in restricting writes (not reads), then you could use PQM to manage the branches you want to restrict write access to.
[23:41] jisakiel: it's a different workflow, so it might not be suitable for you. [23:42] jisakiel: it's how the bzr trunk is managed, for example; bzr developers keep their own branches of bzr wherever they like, but to commit to the trunk you need to send a request to the PQM bot. [23:43] heh, I guess it might be easier to extend the bzr_access script than to deploy that [23:43] ;) [23:43] jisakiel: PQM checks that the request is authorised (via PGP), does the requested merge, runs the test suite, and then only if the tests pass does it commit and publish the merge. [23:44] It solves a slightly different problem, but maybe it's useful to you. [23:44] seems too complicated but txs anyway [23:44] Fair enough. [23:44] ( and test suites are still utopia in the CS courses ;) ) [23:45] The problem with extending bzr_access is that the bzr client always asks for --directory=/ when it connects over bzr+ssh. [23:45] (Ah! ;) ) [23:46] (and because the bzr client can't know which directories it will need in advance, it can't really do anything else) [23:47] I see [23:47] yikes [23:47] :D [23:47] You could make it spawn something other than plain bzr serve, i.e. write a bzr plugin to provide a virtual filesystem with the restrictions you need. [23:48] --directory=(?P\S+)\s+ [23:48] There are already the building blocks for that in bzrlib (there's pseudo-chrooting logic and also force-readonly logic already there in bzrlib.transport.decorators) [23:49] It wouldn't be trivial to write, though. Certainly not as easy as editing a config file! [23:49] still seems like a not-trivial amount of work [23:49] that's it [23:50] jisakiel: right, that regex is always going to just see /. [23:50] * spiv nods [23:50] then get_directory? [23:50] If you can bear it, using filesystem groups sounds the most practical for you. [23:50] get_directory will always get / :( [23:51] and bzr+ssh sends / because...?
[23:51] The author of bzr_access was a bit optimistic about what the bzr client would do. [23:51] lol :D [23:51] just like me [23:51] Because the client can't know what directories it will need to look at before connecting. [23:52] Opening a branch may require opening a shared repository in an arbitrary parent directory, or resolving a branch reference at an arbitrary location, etc. [23:52] aaahm [23:53] I was just scratching my head, as bzr+ssh://host/whatever/path seems complete [23:54] e.g, "bzr log bzr+ssh://host/whatever/path" can't spawn a remote bzr serve with --directory=/whatever/path, because the actual revision data might be at /whatever. [23:54] Or even /! [23:54] but nevertheless [23:54] ah [23:54] bzr serve is, of course, restricted in doing ".." [23:55] because what would be the point in having --directory [23:55] otherwise while browsing the log one could just go "up" like I guess bzr log /whatever/path does [23:56] --directory is there so that "bzr serve" over TCP can serve just part of your filesystem, and also so that bzr_ssh_path_limiter can be written. [23:56] Right. [23:57] (which btw I can't find in bzr-dev, at least not in http://bazaar-vcs.org/bzr/bzr.dev/contrib/ ) [23:57] Hmm, someone really ought to write that hypothetical plugin I was referring to and add it to contrib/ [23:57] lol [23:58] seems overwhelming to me right now, sorry :D [23:58] (that someone is probably me) [23:59] I guess this excludes some "business" uses of bzr more-or-less-centralized workflows [23:59] Huh, it's not in bzr.dev. Oh man, I'm an idiot. [23:59] It's in my local checkout, not committed! [23:59] heh, bzr add perhaps? :P [23:59] it happens xd [23:59] jisakiel: http://rafb.net/p/P1TEcP10.html
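The core check that a bzr_ssh_path_limiter-style wrapper (or the hypothetical restricting plugin spiv mentions) needs is a safe containment test: normalise the requested path and make sure `..` tricks cannot escape the allowed root. The function and names below are illustrative, not bzrlib API:

```python
import posixpath

def is_allowed(requested, allowed_root):
    """Return True if ``requested`` resolves to a path at or below
    ``allowed_root``.  Illustrative sketch of the path check a
    restricting 'bzr serve' wrapper would need; both arguments are
    POSIX-style server paths.
    """
    root = posixpath.normpath(allowed_root)
    # Anchor the request at / and collapse any '..' components before
    # comparing, so '/repos/proj/../other' cannot slip past the check.
    path = posixpath.normpath(posixpath.join('/', requested))
    return path == root or path.startswith(root.rstrip('/') + '/')
```

This only solves the containment half of the problem; as the chat explains, the harder part is that the bzr client legitimately needs to look upward for shared repositories and branch references, which is exactly why it requests `--directory=/` in the first place.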