/srv/irclogs.ubuntu.com/2008/10/29/#bzr.txt

spivpoolie: if you're doing reviews, a review of http://bundlebuggy.aaronbentley.com/project/bzr/request/%3C20081027005419.GA20736%40steerpike.home.puzzling.org%3E would be good.00:47
pooliespiv, ok00:47
poolieam doing mail atm but will try to get to it00:47
* spiv fires some branches to the mailing list and goes to lunch01:58
pdf23dsextmerge doesn't like me. :(02:10
pdf23dsUnable to load plugin u'extmerge' from u'C:/Program Files/Bazaar/plugins'02:10
spivpdf23ds: try "bzr -Derror ...", or look in your bzr.log (see "bzr --version" for its location) to get a traceback that may explain why.02:12
spivpdf23ds: if nothing else, the traceback will be useful in a bug report against extmerge :)02:12
pdf23dsOK, I think it might have something to do with abentley's commit yesterday.02:13
* spiv really goes to lunch now02:13
pdf23dsAttributeError: 'dict' object has no attribute 'register_lazy'02:13
pdf23dsBut thanks for the tip.02:14
pdf23dsHow to change the tree to an old rev?02:14
pdf23dsJust my luck to have a commit yesterday break me on a project that hadn't seen any commits since March.02:14
pdf23dsbzr up -r x?02:15
pdf23dsbzr revert -r x02:18
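For reference, roughly the same "put the tree contents back to an old revision" operation can be done through bzrlib; a minimal sketch (the '.' path, the example revision number 46, and the omission of error handling are illustrative assumptions, not taken from the log):

from bzrlib import workingtree

# Restore the working tree's files to the state of an older revision
# without moving the branch tip (roughly what "bzr revert -r REVNO" does).
wt = workingtree.WorkingTree.open('.')
wt.lock_tree_write()
try:
    old_rev_id = wt.branch.get_rev_id(46)   # 46 is a placeholder revno
    old_tree = wt.branch.repository.revision_tree(old_rev_id)
    wt.revert(old_tree=old_tree)
finally:
    wt.unlock()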
pdf23dscrazy australians. going to lunch at 9 PM.02:19
pdf23dsAnybuddy know how to resume when you get conflicts with the replay command?02:59
pdf23dsrebase-continue?02:59
lifelesspdf23ds: does the help tell you?03:07
pdf23dsbzr replay help is empty.03:07
pdf23dsand bzr rebase help doesn't say explicitly.03:07
pdf23dsI filed a documentation bug. :)03:07
pdf23dsIn the meantime I guess I'll try and see what happens.03:08
pdf23dsIt sort of acts like continue might do the right thing, tho -abort and -todo complain that you're not rebasing.03:13
pdf23dsHmm. I didn't mess as much with Mercurial, but Bazaar seems much rougher-around-the-edges than Mercurial. But I still like it more.03:14
pdf23dsAnd OTOH, I just ran into a Subversion bug today with the latest version.03:14
pdf23ds(Couldn't merge a path with spaces over https protocol, works over svn protocol. Weird.)03:15
lifelessmeep03:17
lifelessplease file a bug03:18
pdf23dsAbout replay you mean? Where should I file that?03:18
pdf23dshttps://bugs.launchpad.net/bzr-rebase/ ?03:19
pdf23ds(The merge thing was a SVN bug I just got hit by today.)03:20
lifelessabout the https file with spaces thing03:20
=== spm_ is now known as spm
pdf23dsyeah, that's svn.03:21
pdf23dsI had the same reaction.03:21
lifelessa bug on bzr-svn would be good for that; it may be something jelmer has missed03:22
pdf23dsIt wasn't related to bzr, I mean.03:23
pdf23dsI work with SVN during the day, I'm converting my home projects to bzr.03:23
pdf23dsI won't be able to get my job using bzr until TortoiseBZR is mature, and until I convince my coworkers of the joys of branching.03:24
jonoxerHi party ppl, I seem to be having some problems with interop between bzr v1.3.1 and v1.6.1 but only on some branches.03:24
jonoxerMost of our dev machines (and the repo) are on Hardy, with bzr 1.3.1. But on a certain big repo when I try a checkout with v1.6.1 (on Intrepid) it first complains about the server not understanding net protocol 3, then falls back to an earlier protocol, then sits for a minute or two, then bombs with a traceback like this:03:25
jonoxerFile "/usr/bin/bzr", line 119, in <module>03:25
jonoxersys.stdout.flush()03:25
jonoxerValueError: I/O operation on closed file03:25
jonoxer...which looks like a permissions problem, but I don't think it is03:25
jonoxerbecause checkouts work fine from Hardy machines03:25
jonoxerAny suggestions where to look? I tried with -Dhpss and it didn't provide any more detail03:26
pdf23dsHmm. Maybe I don't need replay after all. Rebase might do the job.03:28
lifelessjonoxer: I don't think its interop03:29
lifelessjonoxer: 1.6 has some memory issues; try 1.803:30
jonoxerlifeless: so maybe it's a problem with this particular client03:30
jonoxerlifeless: OK, I'll give it a shot03:30
pdf23dsNope, rebase requires a common ancestor too.03:32
pdf23dsRevision 0 isn't considered a common ancestor, right?03:32
pdf23dsbzr branch A B -r 0 is the same as bzr init B?03:32
lifelesspdf23ds: nearly the same; branch will preserve the specific format whereas init will use the current default03:46
lifelessthis only matters if either the source branch uses a very new feature, or you're interoperating with clients much older than yours03:47
pdf23dslifeless: Well, it matters if you want to merge those branches later and find out it can't be done.03:53
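One way to see the format-preservation difference lifeless describes is to compare the repository format strings of the two results; a minimal bzrlib sketch (the two directory names are hypothetical, created beforehand with "bzr branch A from-branch-r0 -r 0" and "bzr init from-init"):

from bzrlib import bzrdir

# Print the on-disk repository format of each result; the branched copy keeps
# the source's format, while the init'd one uses the current default.
for path in ['from-branch-r0', 'from-init']:
    repo = bzrdir.BzrDir.open(path).open_repository()
    print path, repo._format.get_format_string().strip()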
mkanatWow, bzr doesn't like it when my local commits are identical to upstream commits, and I do a "bzr up".04:01
mkanatAlmost everything showed up as a conflict.04:01
lifelessmkanat: it shouldn't conflict there; unless you added files04:02
jonoxermkanat: when you say "identical", do you mean the files actually are identical? As in the same checksums?04:02
mkanatjonoxer: No, I have local uncommitted changes in addition to the local commits.04:02
mkanatjonoxer: Also, the upstream files may have different permissions than my local files.04:03
lifelesspermissions shouldn't be an issue there; its likely the local changes04:03
lifelessmkanat: but there are some things update does that make conflicts worse than they should be04:03
mkanatYeah, that's probably what I'm running into.04:03
mkanatI was just surprised, didn't know if you guys knew it behaved that way.04:03
mkanatIt's not that bad, I have my local changes as a patch elsewhere and I can just revert and re-apply them.04:04
jonoxerlifeless: I've just upgraded the problematic machine to bzr 1.8.1 (package from ppa) and it still bombs out on a checkout from the same repo04:13
jonoxerThe error message is identical to with 1.6.104:13
jonoxerThe repo is shown as "Shared repository with trees (format: pack-0.92)" fwiw04:15
jonoxerThe branch I'm checking out takes about 7.8G on disk, in case that's relevant04:16
lifelessjonoxer: hmm, perhaps we haven't landed the patch yet04:19
jonoxerlifeless: think it's worth trying 1.3.1 on that machine? That's what we're running on every other box, and it works fine04:19
lifelessjonoxer: I suspect it will work, yes04:20
RollyI'm surprised bzr-loom isn't listed in http://bazaar-vcs.org/BzrPlugins04:21
jonoxerlifeless: it seems to be working (1.3.1 on Intrepid)04:22
lifelessjonoxer: the memory patch may not have landed; or it may be something else04:25
lifelessjonoxer: please file a bug04:25
jonoxerNo problem, done (Bug #290558)04:39
ubottuLaunchpad bug 290558 in bzr "bzr v1.6.1+ crashes when checking out large branch from v1.3.1 repo" [Undecided,New] https://launchpad.net/bugs/29055804:39
=== ozzloy is now known as lbc
=== lbc is now known as ozzloy
=== jfroy|work is now known as jfroy
pooliespiv, i agree with your comments on the hooks doc patch, so i'll apply them and merge it05:42
spivpoolie: great05:48
pooliei still find your nickname kind of bizarre :)05:49
spivI should trawl a dictionary for more flattering four letter words that are unlikely to already be registered on popular services...05:50
pooliehttp://newmatilda.com/2008/01/17/attack-spivs05:50
poolieexcuse the distraction05:50
poolie"They no longer sport pencil-line moustaches and porkpie hats, but their tell-tale signs give them away - their faux Italian pointy-toe shoes and the hairstyles sculpted into those absurd little tufts. "05:50
poolieindeed :)05:50
spivOh, I don't mind being distracted.05:50
spivThat's the problem ;)05:51
poolie(:05:51
lifelessnight all05:53
pooliecheerio05:55
pooliespiv, can you look at my reply there quickly?06:07
poolieby quickly i mean 'it should not take long' not 'urgently'06:07
spivpoolie: the reply to the hook docs patch?  Looks good to me.06:13
pooliethanks06:13
pooliei'll send it in06:13
* spiv -> yoga06:25
spiv(and hacking on the train)06:25
pooliei'm going to stop in a bit06:27
vilahi all07:07
matkorHi ! Is there bzr-gtk release 0.96 ?  I see in maillist: Status: Fix Committed => Fix Released  Target: None => 0.96.0 but nothing about 0.96 on http://bazaar-vcs.org/bzr-gtk ..08:15
vilamatkor: 0.96.0 is where the commits are made to *prepare* 0.96 :)08:38
=== jfroy is now known as jfroy|work
vilamatkor: i.e. the trunk, a specific branch will be made once 0.96 is out I think08:39
matkorvila: Ok.. so status (Fix Committed => Fix Released) change is misleading ? shoudl be left on Fix Committed ?08:43
vilamatkor: committed means the fix is available somewhere, released means it's merged on trunk08:44
matkorah ... thanks a lot for explanation, vila08:46
vilayou're welcome and not the first one needing the explanation which indicates something should be changed though...08:47
matkorvila: Perhaps: Fix Committed ->  Fix merged mainline (or Accepted to next release) -> Fix Released (in official version) could be more self-explanatory ...09:28
vilaThe last step means that releasing a version requires an additional step whereas the actual usage doesn't... Most of the users will update only when a new release is announced and are unaware of the subtle difference. I know there is work in progress to somehow automate that step but I forgot the details09:32
Odd_BlokePresumably launchpadlib will allow it to be done (if it doesn't already).09:57
thumperOdd_Bloke: what was that?10:11
Odd_Blokethumper: Marking bugs as Fix Released when an actual release happens.10:13
thumperOdd_Bloke: ah10:13
Odd_Blokethumper: A better fix might be to fix the bug statuses though. ;)10:13
thumperheh10:13
thumperI'd say that a fix is "in progress" until it is landed on trunk10:14
thumperbut hey, that's just me10:14
thumperI did manage to get an hg->bzr convert last night10:15
thumperjust with "versioned directories" and "bzr shelve"10:15
Odd_BlokeWell, 'we' (i.e. bzr) do (In Progress) --submitted for review--> (Fix Committed) --merged--> (Fix Released).10:15
* thumper nods10:16
james_wcongratulations jelmer10:18
thumperlifeless: ping10:22
jelmerjames_w, thanks, but with what ? :-)10:24
james_wheh :-)10:24
Odd_Blokejelmer: I'm guessing for your AM report...10:33
jelmerah, I wasn't aware that was announced someplace10:33
Odd_Blokejelmer: debian-newmaint list.10:37
LarstiQjelmer: you know https://nm.debian.org/nmstatus.php I guess10:37
jelmerOdd_Bloke, ah, thanks10:37
jelmerLarstiQ, I don't expect (hope?) James to check that on a daily basis :-)10:38
LarstiQjelmer: James has resigned from DAM work afaik10:38
LarstiQoh, james_W10:38
LarstiQsorry :)10:38
LarstiQjelmer: yeah, debian-newmaint10:38
* LarstiQ takes this as a clear sign he should do something about breakfast10:39
jelmerLarstiQ, :-)10:48
Odd_Blokejelmer: Also on a Debian related note, is there any documentation about how to use the pkg-bazaar repository?  I haven't been able to work it out, and I have an ITP for bzr-xmloutput sitting around.10:49
jelmerOdd_Bloke, basically, we have one repository on bzr.debian.org per package11:01
jelmerand then one branch per debian distribution (unstable, experimental)11:01
jelmerthe branches are bzr builddeb branches, and should ideally contain the upstream source plus debian/ directory and any necessary changes11:02
Odd_Blokejelmer: Bug 290664 is the one I promised to file a couple of days ago.11:34
ubottuLaunchpad bug 290664 in bzr-svn ""Can't get entries of a non-directory"" [Undecided,New] https://launchpad.net/bugs/29066411:34
jelmerOdd_Bloke, thanks, I hope I can have a look at it during the weekend11:41
Odd_Blokejelmer: I have commit access, so if there's a way I can fix the repository using that I'd be happy to do so.11:42
jelmerOdd_Bloke, it should be possible to add a fix for this to bzr-svn, I just need some time to analyse what's going wrong11:46
Odd_Blokejelmer: Cool, I really meant if a repository fix would be possible sooner, then I'd like to do that. :)11:47
=== doko_ is now known as doko
jelmerbeuno, loggerhead made it into experimental btw13:03
beunojelmer, I saw!13:03
beunoI've been spying13:03
beunothanks  :)13:03
jelmer:-)13:03
jelmeryou're welcome13:04
jelmerI hope we can now help get it running on bzr.debian.org13:04
beunooh, I didn't know they needed help13:05
beunolet me know if I can do anything at all13:05
jelmerwill do13:07
jelmerthanks13:07
=== Verterok|out is now known as Verterok
davi_what is the easiest way to search for a revision that removed a couple of lines?13:24
EarthLionhey can i revert a single file to a specify version using bzr revert -r 46 foo.py ?13:34
pdf23dsjelmer: abentley's latest commit to extmerge breaks it on 1.8. I had to figure that out myself after branching.13:35
jelmerpdf23ds, ?13:51
jamvila: ping14:53
vilajam: pong14:53
jamhey, want to chat about the inventory stuff?14:53
vilasure14:53
jamI haven't heard any background to know if robert's idea is corerct14:53
jamhi asmodehn15:28
asmodehnhey hey jam ;-15:29
asmodehn;)15:29
jamI'm a bit surprised you sent the .bzr.log to the list, but I suppose filenames aren't particularly private15:31
jamit does give me a little bit of insight into your game, though :)15:32
asmodehnoh did I ^^15:32
jamyou certainly have a very *wide* tree (lots of files)15:32
jamapprox 30k15:33
asmodehnnah I only sent it to you I think ;-)15:33
asmodehndont scare me like that :-p hehe15:33
asmodehnanyway as you can see I dont use bazaar to store the source code, but rather to distribute different versions of it in different locations15:34
asmodehnwhich explains the size and the latency I am dealing with..15:34
jamasmodehn: you're right, I wasn't looking at the folder I was in15:34
asmodehnbut yes lots of big files15:35
asmodehnthe repository with working trees is now 24GB15:35
asmodehnso whenever I doa  branch it takes some time :-p15:35
asmodehnespecially when it s remotely15:36
asmodehnI also have scripted the beginning of an automated framework on top of it, to keep different location in sync, with web interface, etc.15:36
asmodehnjust a beginning :-p15:36
asmodehnalso because of the size of it my disk is pretty full now, not sure I can do a lot of tests ... I am lacking backup space...15:38
jamwell, if you want to try my "convert_to_dev2" script, you can actually just use a hardlink of the old repository15:39
jamand the only new data will be the indices15:39
jamwhich is only a few meg15:39
jamrather than 24GB15:39
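The hardlink trick jam mentions is the same idea as "cp -al": recreate the directory structure but hard-link every file, so the "copy" shares data blocks with the original and costs almost nothing beyond the new indices. A rough Python sketch of that idea (an illustration only, not jam's convert_to_dev2 script; src and dst are placeholders):

import os

def hardlink_copy(src, dst):
    # Walk src, recreating directories under dst and hard-linking each file
    # instead of copying its contents.
    for dirpath, dirnames, filenames in os.walk(src):
        rel = dirpath[len(src):].lstrip(os.sep)
        target_dir = os.path.join(dst, rel) if rel else dst
        if not os.path.isdir(target_dir):
            os.makedirs(target_dir)
        for name in filenames:
            os.link(os.path.join(dirpath, name),
                    os.path.join(target_dir, name))

# hardlink_copy('repo', 'repo-copy')   # example usage; same filesystem required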
asmodehnmmm ok.. no need to upgrade before at all right ?15:40
=== dobey_ is now known as dobey
asmodehnha yeah ok this is doing the upgrade... ok I ll do that then and let you know...15:41
jamso I should say maybe 10MB or so, but that is pretty trivial next to the 24GB. But I would only do it on a copy15:42
jamfor now15:42
jamthen again15:43
jamI could just write you the reverse script15:43
jamto convert them back15:43
jamnot to mention my convert_to_dev2 script leaves them lying around, IIRC15:43
asmodehnthats good enough then, if I can put them back somehow ;-)15:43
asmodehnI have remote backups if needed15:43
jamyeah15:44
jamthe script creates new indices15:44
jamand just moves the old one out of the way15:44
jamwithout deleting it15:44
jamI just verified that15:44
asmodehn"python convert_to_dev2.py" and thats it correct ?15:47
* asmodehn just a bit scared :-p hehe15:47
asmodehnjam : ok running...15:49
jamyou have to have "bzrlib" in your python install, but it sounds like you do15:49
jamI'm looking at it now15:49
jamjust for your comfort15:49
asmodehnthanks ;-)15:49
asmodehnjam : ok done ^^ pretty fast15:50
jamyeah, that is all you needed to do15:51
jamasmodehn: again, rewriting 20MB isn't bad versus rewriting 24GB15:51
asmodehnok but I changed only the repository I am branching towards, not the repository I am branching from..15:52
asmodehndo I need to do that too ?15:52
asmodehnyep :-)15:52
jamthe one you are branching *from* is the important one15:52
asmodehnyeah I would have guessed so....15:52
asmodehnmmm hang on...15:52
jamI'm also not 100% sure if you need my prefetch code15:52
jamAre you comfortable running bzr from source instead of from an install?15:53
jam(it isn't very hard)15:53
asmodehnmmm it should be ok, but I really want to avoid to mess up my repositories too much15:53
asmodehnso I guess I ll try one thing at a time ;-)15:53
asmodehnbut yeah I could do that...15:54
jamit doesn't change the disk formats at all15:54
jam*just* how we access it15:54
jambut start with this15:54
jamsee what happens15:54
asmodehnk15:54
jamand then we'll try that as the next step15:54
jamalso, can you run a quick command for me15:55
asmodehnsure15:55
jamfind .bzr -name "*.?ix" -print0 | xargs -0 du -kshc15:55
jamI think that is right15:55
jamlet me run it here15:55
jamyeah, that works15:56
asmodehnok on which repo ?15:56
asmodehnthe source one right ?15:57
jamthe one you just upgraded15:57
asmodehnah ok15:57
jamyou could do the source as well15:57
jamI mostly just want to see how big your indexes were15:57
jamversus how big they are in btree format15:57
asmodehnoh ok15:57
asmodehnI'll send the result in an email to you ;-)15:58
asmodehndone16:01
asmodehnah I have enough space on the disk of the source repo to back everything up :-) good news ;-)16:02
asmodehnbackup everything ^^ sorry me english bad :-p16:04
asmodehnI m doing that now then I convert to dev2 as well so I can try the branch again...16:04
dstanekhow do i revert a commit? is it the same way that you do it in subversion? merge the changes between new and old back into the working copy?16:08
asmodehnbzr uncommit is your friend ;-)16:08
asmodehnif you really want to lose your commit16:09
jamasmodehn: so from what I can see, your 31MB text index becomes 9.1MB in btree form16:09
jamand your 14MB => 4Mb16:09
jamMB16:09
jamwhich is good16:09
asmodehnyep wouldnt hurt ;-)16:10
asmodehnif we want to add log to the data transfer, I would need to run bzr from source and modify it right ?16:11
dstanekasmodehn: great thanks, i'll take a look16:11
asmodehncause at the moment the log doesnt display anything while the process seems to be stuck...16:11
asmodehndstanek : no worries ;-)16:11
jamdstanek: you could also do "bzr merge -r 10..9" if you want to revert something old16:12
jambut yeah, 'bzr uncommit' is probably what you want16:12
jamif it was recent16:12
jam(as in the last commit)16:12
jamasmodehn: so the index seems to go from almost 70MB to down to 30MB, which is about what I would expect.16:13
jamAlso, it does look like this repository could stand to have "bzr pack" run16:13
jamThough I'll mention that running it after the conversion means we don't have a simple way of bringing the indexes back16:14
dstanekit look like uncommit can also take a -r argument - will it work if you specify a revision that is not the last?16:14
jamdstanek: it will uncommit everything back to that revision16:14
jamnot just the changes for that rev16:14
jamfor something old16:14
dstanekah i see16:14
jamyou want "bzr merge -r REV..REV-1"16:14
jamsorry16:14
jamyou want "bzr merge . -r REV..REV-1"16:14
asmodehnyeah I was thinking about that too... but I wonder if that might make one big pack, and therefore we might lose some details in debugging what is wrong in the data transfer16:14
jamthe '.' is important16:14
jamasmodehn: it will create 1 big pack16:15
jamso we can look at that more later if you prefer16:15
dstanekjam: uncommit works for my immediate need16:15
asmodehnjam : also having one huge pack might not be good for me if there is no way to track the progress of the download...16:16
asmodehnI prefer downloading slowly but know that it s working ;-)16:16
asmodehnbtw, same thing, is there an incremental pull / push ?16:17
asmodehnrevision by revision ?16:17
jamasmodehn: 'bzr push -r X", "bzr push -r X+1"16:17
asmodehnyes but when I have 123 revision to push one by one because they are big and they might fail, I dont want to type that 123 times ;)16:18
asmodehnwas thinking of a "bzr ipush -r REV..REV " that will push revision one by one16:18
asmodehnand if it breaks in the middle, the last push that worked is still there...16:19
asmodehnbut maybe that belongs more in a script around bzr than bzr itself16:19
asmodehnjam : ... my backup copy on the source repository is still going...16:19
jamasmodehn: for i in `seq REV..REV`; do bzr push -r $i; done16:20
jamsorry16:20
jamfor i in `seq REV REV`; do echo $i; bzr push -r $i; done16:20
asmodehnyeah thats what I meant ;-)16:20
asmodehnbut you want to check for error and stuff, so I ll gues I ll write a script for that...16:21
asmodehneventually16:21
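A minimal sketch of the kind of wrapper script asmodehn is describing here: push one revision at a time and stop at the first failure, so everything already pushed stays intact and you know where to resume (the branch URL and the revision range in the usage line are placeholders):

import subprocess
import sys

def incremental_push(url, first_rev, last_rev):
    # Push revisions one by one via the bzr command line; bail out on the
    # first non-zero exit code.
    for rev in range(first_rev, last_rev + 1):
        print 'pushing -r %d' % rev
        ret = subprocess.call(['bzr', 'push', '-r', str(rev), url])
        if ret != 0:
            sys.exit('push of revision %d failed (exit code %d)' % (rev, ret))

# incremental_push('bzr+ssh://server/path/to/branch', 1, 123)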
asmodehnat the moment I am pulling revision 10 by 10 to know a bit where I am at...16:21
jamwell, I'll mention that some of the data transferred is forgotten by the next copy16:25
jamso it will be slower16:25
asmodehnyep I start doing that when using 1.5, because for long transfer it used to error out16:27
asmodehnwhere few by few it was working16:28
asmodehnand since I upgraded to 1.8 it doesnt error as much which is good ;-)16:28
asmodehnbut it seems to get stuck longer for some reason :-s16:28
jam1.8 will switch to grabbing whole indexes from time to time, which might be what you are seeing16:32
jam1.5 would only grab incrementally16:32
jambut could end up grabbing 2x the inventory under certain circumstances16:32
jamthe dev2 format solves both of those issues16:33
asmodehncool ;) good to know16:33
asmodehnbackup copy almost finished...16:33
jamasmodehn: how's it going?16:55
asmodehnjam : branching now...16:55
asmodehnconversion went without problems16:55
jamyou're using -Dhpss for me, right?16:55
asmodehnyep16:56
asmodehnbut looking at the log... the copy of the index is fine...16:57
asmodehnlooks like the copy of the pack doesnt generate a lot of transfer on the network though.. I am a bit confused...16:58
jamcopying the pack should be saturating your network16:59
asmodehnmy pick is 1.2Kbps now ...16:59
jamare you sure about that?16:59
asmodehnlooking at iftop onthe source network now..16:59
asmodehnsource server I mean17:00
asmodehnit was high up during indexes17:00
jammy only other guess is that you are running into swap issues on the source17:00
asmodehnoh^17:00
asmodehnmmm I ll check my RAM..17:00
jamas we are trying to copy 20GB across17:00
asmodehnnope everything looks fine17:01
asmodehnsource :17:01
asmodehnCpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st17:01
asmodehnMem:   3366488k total,  3221756k used,   144732k free,     1672k buffers17:01
asmodehnSwap:  2650684k total,   702536k used,  1948148k free,   558160k cached17:01
asmodehndest :17:01
asmodehnCpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st17:01
asmodehnMem:   6110456k total,  5198228k used,   912228k free,    78484k buffers17:01
asmodehnSwap:  1951888k total,       72k used,  1951816k free,  4550312k cached17:01
asmodehnand I am still branching -r1 from a repo on source_dev217:02
asmodehn\ [=============================                    ] Copying content texts 3/517:02
jam700MB in swap seems like a lot to me17:03
jambut I don't know what load you run on that server17:03
asmodehnwell these server are running a bunch of other things ^^17:03
jamso you are still seeing very little data being copied, right?17:04
asmodehnbut that swap used is not increasing right now anyway17:04
asmodehnno transfer ont eh network17:04
asmodehnwell very little17:04
asmodehnand with -Dhpss17:04
jamyou might try top of the running process as well17:04
jamit is also possible that your disks are being heavily loaded ATM17:04
asmodehnlog stuck at 85.410     result:   ('readv',)17:04
asmodehn105.947                1099613 body bytes read17:04
asmodehn108.023  hpss call w/readv: 'readv', '/home/autobzr/deployBZR_dev2/.bzr/repository/packs/f65de233ce31cbdace7370611f49937c.pack'17:04
asmodehn108.024                11675 bytes in readv request17:04
jamasmodehn: can you send  the 'ls' results for the remote repository?17:05
jamI would assume that is a big file, but it isn't in the listing you sent17:06
jam(since that is the local repo)17:06
asmodehnthis pack is 1.1GB17:08
asmodehnsource :17:10
asmodehn  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND17:10
asmodehn12072 autobzr   20   0 1925m 1.9g 2404 S    0 58.4   0:03.46 bzr17:10
asmodehndest:17:10
asmodehn PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND17:10
asmodehn25889 autobzr   21   0  228m 159m 3956 S    0  2.7   0:06.94 bzr17:10
jamso it is consuming 1.9GB, which may not be terrible, but isn't a great sign17:11
asmodehnyep17:11
jammostly for when you hit the 20GB files17:11
jamIt is possible that is taken up by other info17:11
jambut I'm expecting it is the actual data that will be transmitted17:11
jamasmodehn: I think it is going to have serious problems soon. I'm seeing:17:12
jam        backing_bytes = ''.join(bytes for offset, bytes in17:12
jam            self._backing_transport.readv(self._relpath, offsets))17:12
jamwhich means... "read the requested bytes, and buffer it into a single string"17:13
jamand return that17:13
asmodehnah...17:13
jamaka, buffer 20GB in RAM before sending it over the wire17:13
asmodehnthat will not work :-p17:13
jamso... the specific issue is that since 1.6 we are able to request the streams for multiple files simultaneously17:15
jamin 1.5 we could only request one file at a time17:15
jamwhich makes things way faster in most cases17:15
jambut in your case17:15
jamit means we end up requesting 90% of the pack file in one go17:15
jamand you end up dying17:15
asmodehnargh17:15
asmodehnyep...17:16
jamwe could probably work around this a little bit in the client17:16
jamby capping the size of a readv request17:16
asmodehnbut that also means there no easy perfect solution right ?17:16
asmodehnie, how much you are capping will depend on the RAM size, the bandwidth, etc.17:16
jamyeah17:17
jambut something like 500MB is generally a safe bet17:17
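The capping idea jam sketches could look roughly like this; it is only an illustration of the concept, not bzr's actual readv code. The helper splits the requested (offset, length) pairs into batches whose combined size stays under a limit, so neither end has to buffer most of a 20GB pack at once:

def batch_readv_offsets(offsets, max_bytes=500 * 1024 * 1024):
    # offsets: iterable of (offset, length) pairs, as passed to a transport
    # readv(); yield them in batches capped at max_bytes each.
    batch, batch_size = [], 0
    for offset, length in offsets:
        if batch and batch_size + length > max_bytes:
            yield batch
            batch, batch_size = [], 0
        batch.append((offset, length))
        batch_size += length
    if batch:
        yield batch

# Hypothetical usage:
# for batch in batch_readv_offsets(requested_offsets):
#     for offset, data in transport.readv(pack_path, batch):
#         process(offset, data)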
jamalso, could you try this over sftp17:17
jam?17:17
jamI realize some bits will be slower17:17
asmodehnyep17:17
jambut I think the SFTP spec requires that we transfer in 32kB chunks17:17
jamso it won't actually hit this17:17
asmodehnoh ^^17:17
jamYou don't have to17:17
asmodehnworth a shot ;-)17:17
asmodehnit s ok ;-)17:18
jambut it would be worth having the data point17:18
asmodehnyep17:18
asmodehnand I am pretty desperate atm to be honest... if it doesnt work I ll go back to 1.5 and transfer revision by revision to avoid hitting errors too often...17:18
jamI was in this code recently, and I'm pretty sure the client still spools up the whole request as well17:18
asmodehnhere we go sftp...17:20
jamI may have a quick fix on the readv stuff, I'll let you know17:26
asmodehnk17:26
asmodehnsftp still copying inventory text...17:26
jamasmodehn: I'll just mention that bzr hasn't really been designed around handling 10GB trees17:27
jamWe'll try to accommodate as we can17:27
asmodehnyeah I would guess so ;)17:27
asmodehnno worries17:27
jambut it isn't a base design criteria17:27
asmodehnI am just using it because it s better than copying 4GB zip files around every time there is just a little difference...17:28
jamhave you considered unison/rsync?17:28
asmodehnie easy way to get just patches transparently17:28
asmodehnmmm dont know about unison...17:28
asmodehnlet me google that ;-)17:28
asmodehnmmm looks interesting... however I need more than what it does though, such as a proper SCM that lets me go back in revision history ...17:35
asmodehnI feel like I need a BZR multisite :-p17:35
asmodehnlol17:35
asmodehnmmm sftp doesnt load the source server like bzr+ssh did...17:38
asmodehnand now there is still traffic on the pipe... 200 - 300 Kbps17:38
jamyeah17:38
jamagain, requests are made at 32kB per hunk17:38
jamand the sftp code was updated to stream that back to the next layer as well17:38
jamwhile the bzr+ssh code does a lot of "finish reading the response, then return"17:39
asmodehnk17:39
asmodehnso whats the plan from now .. ? I guess I ll use sftp from now on, until the bzr+ssh code is improved to handle big packs ?17:40
asmodehnand I keep using my old Knits repo until the dev2 format is finalized, tested, and announced ?17:42
jamwell, I have a patch for it to be released in 1.9, which may happen by this Friday17:44
jamso you won't exactly be waiting long17:44
jamcan you send me the bzr logs with the dev2 format?17:44
jamI'm just curious how much of an improvement we see17:44
jam(if any)17:44
asmodehnsure ;-)17:45
asmodehnjust the destination though... the source is a bit loaded by other bzr processes running :-s17:45
asmodehnok ?17:45
asmodehnwould that be enough for you ?17:45
asmodehnmmm hang on maybe I can get the source one too if you need actually, the process was stuck since two days ago :-p17:46
jamno, just the local one is fine17:47
jamI don't think you were running -Dhpss on the server anyway17:47
jamI'm not specifically interested in the remote one either17:48
asmodehnyeah true, no -Dhpss..17:48
asmodehnbut but btu ...17:48
asmodehnmy .bzr.log is now only 28K :-s17:48
jam.bzr.log.old17:48
asmodehnhaaa :-)17:48
jamI believe we keep 1 level of .bzr.log17:49
jamjust to prevent it from consuming lots of disk space17:49
jamat somewhere around 4MB we copy it to .old17:49
jamand start fresh17:49
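A toy illustration of the rotation scheme jam describes (not bzr's actual code): once the log passes the size threshold, rename it to .old, keeping exactly one previous generation.

import os

def rotate_log(path, max_bytes=4 * 1024 * 1024):
    # .bzr.log -> .bzr.log.old; the next write then starts a fresh log.
    if os.path.exists(path) and os.path.getsize(path) >= max_bytes:
        os.rename(path, path + '.old')

# rotate_log(os.path.expanduser('~/.bzr.log'))   # example path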
asmodehnk I am bzipping both now... I ll send that to your email17:50
asmodehnthanks a lot for the help by the way ;-)17:50
jamnp17:50
asmodehnI ll get 1.9 on my etch next week I guess then ;-)17:51
asmodehnlet me know if you want me to test something ;-)17:51
jamasmodehn: well, I'll likely have some readv code that I might have you play with17:52
jamI'm still putting together a "summary of what is happening" to the mailing list17:52
asmodehncool ;-) thanks ;-)17:52
jamasmodehn: can you grab a branch of bzr.dev to start with?18:02
asmodehnyep18:05
asmodehnurl ?18:05
asmodehngot it18:06
jam'bzr branch lp:bzr' should be fine18:07
asmodehnbranching from bazaar-vcs.org/bzr/bzr.dev atm...18:07
jamsame thing18:08
jamlp:bzr is a mirror of that url18:09
jamjust a lot shorter to type18:09
jam:)18:09
asmodehnhehe cool, I ve never used lp before though :-p I'm more of an SVN / BSD guy :-p18:09
asmodehndev C++ too not much python18:09
asmodehnbut i ve heard it s quite simple, never had time to get my head around it though..18:10
jamyeah, I came from C++ => python18:10
jamall depends on what you are doing18:11
asmodehnyep18:11
jamone weird thing, the last .bzr.log seems to have a different bandwidth18:12
jamlike 100kB/s rather than 200kB/s18:12
jamis that real?18:12
asmodehnhuh ?18:12
* asmodehn is trying to think of what happened 2 minutes ago...18:12
asmodehnany clue about where in the log that changes ?18:13
jamhttp://dpaste.com/87602/18:14
asmodehnweird...18:15
jamthough I also see:18:15
jam43.321     result:   ('ok',)18:15
jam391.397                22190509 body bytes read18:15
jamwhich I believe is about 60kB/s18:15
jamso your bandwidth seems rather variable18:16
asmodehnthe locations are quite far away from each other, and a lot of other thing might happen on the network, so that wouldnt surprise me18:16
asmodehnI cant recall in which order I stopped / restarted the different bazaar processes on the machines in the last hour to know if it affected something or not...18:18
jamI was just trying to figure if the new format was saving you anything, but it looked slower at first glance18:18
jambut if your bandwidth can vary 2x18:18
asmodehnI see18:18
jamI can say it is copying less data18:18
jamdo you have bzr.dev yet?18:19
asmodehnBranched 3806 revision(s).18:19
jamasmodehn: ok, try doing "bzr pull --overwrite http://bzr.arbash-meinel.com/branches/bzr/1.9-dev/remote_readv_sections"18:19
asmodehnok ;-)18:19
jamand then run18:20
jammake18:20
jamand then you should be able to run "path/to/bzr branch -r1 -Dhpss bzr+ssh://..." for me18:20
asmodehnnot a branch... maybe a URL problem ??18:21
jamoh, and use "-Dindex" if you would18:21
asmodehnk18:21
jamah just a sec18:21
jamI just forgot to publish it :)18:21
asmodehnhehe :-p18:21
jamdone18:21
jamtry again18:21
asmodehnpulling...18:21
asmodehn M  NEWS18:22
asmodehn M  bzrlib/help_topics/en/hooks.txt18:22
asmodehn M  bzrlib/transport/__init__.py18:22
asmodehn M  bzrlib/transport/remote.py18:22
asmodehn M  doc/en/user-guide/hooks.txt18:22
asmodehnAll changes applied successfully.18:22
asmodehnNow on revision 3806.18:22
jamasmodehn: sounds right18:24
asmodehnhuh ? 3806 -> pull -> 3806 ?18:24
jamdifferent tips18:24
asmodehndifferent one then ;-)18:24
asmodehnk18:24
asmodehn Change the default readv() packing rules.18:24
asmodehnah make... what do I need to run that ?18:24
jamgmake, gcc, python-pyrex probably18:25
asmodehnk18:25
jamI could give you the .c files if you don't want to install python-pyrex18:25
asmodehnnah it s ok I can18:25
jamyou don't *have* to, because bzr can fall back to pure python18:26
jambut certain ops are slower18:26
asmodehnk18:26
asmodehnI ll go package hunting ;-)18:26
asmodehnbrb18:26
jamthough not at the scale of copying 20GB across a remote network18:26
jamasmodehn: so if it is difficult, you can skip it18:26
asmodehnk bunch of dev packages + make + bunch of warnings, but it seems fine ...18:35
jamanother small advantage of the patch18:40
jamif it works, you should get a little bit more progress18:40
jamwell, the spinner should spin more often18:40
asmodehnk18:40
jamI don't think it will log much more18:40
asmodehnjust the Dindex I guess...18:40
asmodehnwell the start was pretty fast18:41
asmodehnI guess dev2 indexing improvements ?18:42
jamprobably18:42
asmodehnnow it seems stuck as usual, but my network usage is around 1.2Mbps :) I feel better18:42
jam:)18:42
jamcan you get to the remote server to check the mem usage?18:42
jamIt should certainly be << 1G18:43
asmodehnyep18:43
asmodehn  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND18:44
asmodehn 6942 autobzr   15   0  124m 120m 2404 S    0  3.7   0:00.39 bzr18:44
jam120M sounds a lot better than 1.9G18:44
=== mw is now known as mw|food
asmodehnyep it does ;-)18:45
asmodehnthe data transfer rate oscillates between 900Kbps and 1.2Mbps ;-) nice nice...18:47
asmodehnnote the spinning bar doesnt seem to turn a lot in between different readv though...18:48
asmodehn91.631     result:   ('readv',)18:49
asmodehn534.986                52094902 body bytes read18:49
asmodehnin between the bar didnt move18:49
jamI would expect the readv to request data, then when it comes back it will spin, then it will request more, back and forth18:50
asmodehnmmm18:51
jamthe actual transfer code still buffers stuff18:51
asmodehnI dont see the bar moving while it s transferring data through network...18:51
jamit is just that inbetween we are smarter about it18:51
asmodehnyeah probably18:51
asmodehnk18:51
jamwell, we make smaller requests18:51
asmodehnbut anyway much better now ;)18:51
jamcan I get the .bzr.log for it?18:51
asmodehnsure ... I ll  put that in a mail ;-)18:52
jamalso, I'm going to have 1 more version for you to test in a sec18:52
asmodehnsure18:52
asmodehnI ll stop that now then18:52
asmodehnand send the mail k ?18:52
jamyeah18:53
jamok, do another "pull --overwrite ..."18:53
jam(you can use --remember if you don't want to keep typing the URL)18:53
asmodehnit s ok I like my shell history ;-)18:57
asmodehnbuilding extension modules.18:59
asmodehnpython setup.py build_ext -i18:59
asmodehnCannot build extension "bzrlib._dirstate_helpers_c" using18:59
asmodehnyour version of pyrex "0.9.4.1". Please upgrade your pyrex18:59
asmodehninstall. For now, the non-compiled (python) version will18:59
asmodehnbe used instead.18:59
asmodehnrunning build_ext18:59
asmodehnhuh ?18:59
asmodehnah but you just changed the transport in bzr lib, so that pyrex stuff shouldnt matter I guess19:00
jamyou shouldn't actually have anything to do there, but whatever19:00
jamasmodehn: right pyrex 0.9.4.1 has a known bug19:00
jamso we don't compile the extension19:01
jambecause when we do it segfaults19:01
asmodehn ah ok ^^ just the version in my etch packages :-s19:01
asmodehnanyway I ll branch -r -Dhpss -Dindex again...19:01
jamI really like ~line 871 in bzr.log:19:01
jam25.841  expanding: 5d3c775ea20da5c234f29fb643c5d8df.tix offsets: [13, 14, 15 ...19:01
jam25.841    not expanding large request (1424 >= 16)19:01
jamwith 1424 offsets in there :)19:01
asmodehnheh :p19:02
jaminterestingly for this specific branch19:02
jamthe prefetch doesn't help much19:02
jamwe are either just reading a tiny bit19:02
jamor reading a huge amount at once anyway19:02
asmodehnoh added some debug message did you ;)19:03
jamyeah19:03
jam-Dindex is that part19:03
jambut I don't think it is in 1.819:03
jamwell, I *know* it isn't in 1.819:03
jamas it just landed in bzr.dev 380519:04
asmodehnmmm error19:04
asmodehnbzr: ERROR: exceptions.KeyError: 156619:05
asmodehnany idea ?19:05
jamquick guess is that it doesn't like the partial readv I worked on, let me look a bit19:06
asmodehn  File "/home/autobzr/bzr.dev/bzrlib/btree_index.py", line 1046, in iter_entries19:06
asmodehn    node = nodes[node_index]19:06
jamit is possible that a page used to be cached, so we didn't request it19:06
jambut when we wanted to use it19:06
jamit was gone19:06
jamasmodehn: can you send that log?19:07
jamLooking at the code it should always have those keys19:07
jambut I might have a small bug in the new code19:07
asmodehnyep I ll do that19:07
jamkey 1566 means we are requesting really far down in the index19:07
jambut that still shouldn't be failing19:08
jambefore the last pull, the cap was at 50MB, so it probably never hit it with your indexes19:08
jambut I dropped it to 5MB19:08
jamand that may have triggered something19:08
jamsince that index is 5.8MB in size19:08
asmodehnsent ;-)19:11
jamyou're clogging my inbox :)19:11
asmodehnhehe sorry :-p19:11
jamis that one very big?19:13
asmodehnno last one was 15Ko19:14
asmodehnKB19:14
jamweird, usually your mail gets here faster19:14
asmodehnmy gmail said sent 4 minutes ago19:15
asmodehnmail is not that big19:15
asmodehnstarts with "Logs with second patch :"19:16
asmodehnmaybe some of your client / server thinks I am a spammer :-p19:16
jamwell all your others came through fine19:16
jamI can check my spam folder19:16
jamthere it is19:17
jamjust took a bit19:17
asmodehn;-)19:17
jamah, I see the problem19:20
jamjust a sec19:20
jam(it was popping a node off 2x)19:20
jamso it would skip that at a boundary point19:20
asmodehnhe ;) no rush ;)19:21
synicis there any interest in something like gitosis for bzr?19:24
jamasmodehn: ok, pull up and try the new version19:24
jamsynic: there is contrib/bzr_access19:25
synicok19:25
jamit is probably not as complete as gitosis, but it describes how to set up multiple ssh keys to access one account, etc.19:25
jamasmodehn:  3811 refactor for clarity19:26
jamthough 3810 is the important change19:26
asmodehnmmm19:27
asmodehnsorry :p19:27
asmodehn  File "/home/autobzr/bzr.dev/bzrlib/transport/remote.py", line 346, in _readv19:27
asmodehn    next_offset = [cur_offset_and_size.next()]19:27
asmodehnNameError: global name 'cur_offset_and_size' is not defined19:27
jamstupid bugs19:27
asmodehnhehe ;)19:27
jamasmodehn: this time I double checked and ran the test suite, 381219:28
jamI really should put the new code under test19:28
jambut I wanted to make sure it was worth the effort19:28
asmodehnk ;) no worries let me know when I should pull and test ;)19:28
jamnow19:28
asmodehnroger ;-)19:29
asmodehnbranching...19:30
jamasmodehn: so if I follow the logs correctly, the old repository was taking 900s to get to the point where it could start text.pack data. The last test showed you hit that at 90s19:31
jamthat seems.... way too good to be true19:31
asmodehnmmm19:31
asmodehnk error again...19:31
jamwhat is the error now?19:32
asmodehn  File "/home/autobzr/bzr.dev/bzrlib/btree_index.py", line 1050, in iter_entries19:32
asmodehn    node = nodes[node_index]19:32
asmodehnKeyError: 156619:32
asmodehn81.151  return code 419:32
jamhm... same error19:32
asmodehnyeh...19:32
asmodehnand about the time to arrive to text.pack yes it s been really fast...19:32
jamcan you send that log again19:32
jamI want to check something19:33
jamasmodehn: bah, I know the problem19:35
jamasmodehn: 3813 ready for you to test19:36
asmodehnlog sent anyway ^^19:37
asmodehnI ll pull that ;)19:37
asmodehnbranching...19:39
jamasmodehn: just as a point of comparison, could you try to branch the original repo again as well?19:40
asmodehn?19:40
jamjust to see if the 900s => 90s is real19:40
jamuse my code19:40
asmodehnah the one not dev2 correct ?19:40
jamright19:41
jamI would like you to do both19:41
jamso I can compare19:41
jamthe 900s may have been a fluke19:41
asmodehnmmm I still have hte remote knit and the remote dev219:41
jamor may be real19:41
asmodehnbut the local ( destination ) is only dev2 now19:41
jamasmodehn: that is fine19:41
jamI'm more concerned about remote19:41
asmodehnmaybe 900 is when I lost patience and stopped it because it was sending nothing ?19:41
asmodehnk then ;-)19:41
asmodehnI can compare both, and then send you the log...19:42
asmodehnjust branching the same stuff from both knit and dev2 repo...19:42
asmodehndev2 -> dev2 is now at 303 seconds...19:43
asmodehnis that enough to compare ?19:43
asmodehnshould I stop that one and do the old one just after ?19:43
asmodehnthe old knit repo I mean19:44
asmodehnknit -> dev219:44
jelmerOdd_Bloke, ping19:45
asmodehnjam : ok I saw the line you re talking about around 90s... I ll cancel my branching now from dev2 and start to branch from knit again19:47
asmodehnjam: argh IncompatibleRepositories: different rich-root support19:48
jamthats strange19:48
asmodehnone weird thing : KnitPackRepository [...]/.bzr/repository/19:49
asmodehnRemoteRepository [...]/.bzr/19:50
asmodehn  File "/home/autobzr/bzr.dev/bzrlib/repository.py", line 2506, in _assert_same_model19:51
asmodehn    "different rich-root support")19:51
jamyeah, I understand the code path. I just don't know why it would think that.19:51
jamjust a sec19:51
asmodehnsure :)19:51
jamcan you get to the remote machine?19:51
asmodehnyep19:51
jamwhat does "bzr info" say19:51
asmodehnShared repository with trees (format: rich-root-pack)19:52
jamoj19:52
jamok19:52
asmodehnlocal says : Shared repository with trees (format: development2)19:52
jamso my upgrade hack turned it into something closer to "pack-0.92" without rich-root19:53
jamwhich doesn't matter for testing, except for when you go between formats19:53
jamno big deal19:53
jamI'll get you something you can use19:53
asmodehnmmm what about reverting my local repo to the old knitpack now that we have an answer for the slowness problem ?19:54
asmodehnI can wait a bit before using the new dev2 repo format...19:54
jamsure19:55
jamif you go to the repo19:55
jamyou can see files/directories that end in -gi19:55
jamjust move the existing file/dir to -bi19:55
jamand then mv -gi => normal19:55
jamasmodehn: does that make sense?19:55
asmodehnyep19:56
asmodehnformat and indexes19:56
asmodehnand packnames19:56
jamright19:56
=== mw|food is now known as mw
asmodehnbtw is there any doc around about making the repo smaller... some kind of good practice ?19:57
asmodehnI am wondering if it gets rid of the packs that are not useful anymore, and that kind of stuff..19:57
asmodehnok reverting to rich-root-pack done19:59
asmodehnnow branching again from knitrrp to knitrrp to see how long it takes before we hit the .pack transfer...19:59
jamsure20:00
jamI think you'll still need to use my code20:00
jamsince otherwise you'll hit the wall again20:00
asmodehnyep I m using your version of bzr20:00
asmodehnwith the debug and all the stuff ;-)20:00
dobeyis there any way to see the status of a submitted merge request for the bazaar project?20:04
jamdobey: http://bundlebuggy.aaronbentley.com ?20:05
jampossibly under Pending, or under Merged20:05
asmodehnjam : first readv on pack at 296 secs...20:06
dobeyah20:06
jamOr if you are on the mailing list, there will be a link to the merge request20:06
jamwhich is a stable URL20:06
dobeyyeah, i'm not on the list20:06
jamasmodehn: so that is 296 => about 90?20:06
jamlog files please :)20:06
asmodehnI remember I saw a 9220:06
asmodehnyep they re coming ;-)20:06
jamasmodehn: oh and the output of: du -ksh .bzr/repository/indices ; du -ksh .bzr/repository/indices-bi20:07
asmodehnk20:07
jamactually 'du -kshc' if you don't mind20:07
asmodehnsure ;)20:07
jamso I get the impression your "latency" bound is actually20:08
jam1) Bandwidth limited for indexes20:09
jamfollowed by20:09
jam2) latency bound while the server queues up GB of data to return20:09
jamthough not really network latency20:09
jammore, "takes too long and uses all of swap"20:09
asmodehnyep20:11
asmodehnthe "uses all of swap" appeared only when I upgraded from 1.520:11
asmodehnbefore it used to transfer then error20:11
asmodehnso I was transferring a few revisions at a time and that went fine...20:11
asmodehnbut recently I think the data became too huge20:11
jamnote that my change isn't going to make it work if you have a single file that is too big20:12
jamlike a single 600MB file is likely to clog things up20:12
asmodehnyep and thats fine ;-)20:12
jambut it looks more like you have *lots* of moderately sized files20:12
asmodehnnow that it doesnt error anymore on a long transfer ;-)20:12
asmodehnyep20:13
jamasmodehn: well, you haven't tested a transfer to completion yet, have you?20:13
jamalso, you say "first readv on pack" but is that after reading the text indexes?20:14
jamor is that reading the inventory20:14
jamthere are a few different stages20:14
jamjudging by your times, it is after reading .tix20:14
jamI just wanted to make sure20:14
asmodehnhehe true haven't tested a transfer till the end yet..20:15
asmodehnabout the stage before pack I am not sure... you got the log in your email now ;-)20:15
jamasmodehn: why that I do :)20:16
asmodehnhehe :-p but I think it s after the text index yes ;-) remembering what the progress bar was telling me20:16
asmodehnjam: so I guess we re done for today are we ? I ll keep using sftp waiting for the awesome new version and new repository format ;)20:20
asmodehnI am getting hungry hehe20:21
jamI think we're done20:21
jamenjoy your meal20:21
asmodehnthanks ;-)20:21
asmodehnand thanks a lot for the help ;) ill go eat and then I clean all remains of my experiments ;)20:22
jamdobey: your patch is here, by the way: http://bundlebuggy.aaronbentley.com/project/bzr/request/%3C1225157843.25420.7.camel%40lunatari%3E21:04
dobeyyeah, i found it21:05
dobeythanks21:05
ktenneyHowdy, I'm trying to retrieve the commit message for a revision21:20
ktenneyit looks like lf = log.LineLogFormatter(file_handle)21:20
ktenneym = lf.message(rev)21:20
ktenneyshould work, but if so, I haven't discovered what rev should be21:20
ktenneyrev from revno, rev = b.last_revision_info() doesn't work21:24
james_wktenney: you need a Revision, not a revision id21:25
james_wrepository.get_revision(rev_id) might be it21:25
james_wrevno, rev_id = b.last_revision_info()21:25
james_wif you don't need the revno then just use b.last_revision()21:26
ktenneyjames_w: so I need to look at repository.Revision(self, _format, a_bzrdir, control_files, _revision_store, control_store, text_store).get_revision(rev_id) ?21:38
james_wktenney: no, I don't think so21:39
ktenneyjames_w: revision commit messages aren't fetchable from a working tree?21:39
ktenneyI haven't found any shortcut21:39
james_wwt.branch.repository.get_revision(rev_id)21:40
lifelessktenney:21:42
lifelesst = bzrlib.workingtree.WorkingTree.open('.'); t.lock_read(); rev = t.branch.repository.get_revision(t.last_revision())21:42
james_wmsg = rev.message as well I think21:43
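Putting lifeless's one-liner and james_w's note together, a self-contained sketch (the '.' path is a placeholder and error handling is omitted):

from bzrlib import workingtree

wt = workingtree.WorkingTree.open('.')
wt.lock_read()
try:
    rev_id = wt.last_revision()                       # tip of the working tree
    rev = wt.branch.repository.get_revision(rev_id)   # a Revision object
    print rev.message                                 # the commit message
finally:
    wt.unlock()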
james_wmorning lifeless21:43
lifelessktenney: we use factory methods a lot in bzr, because the library supports working directly with multiple VCS systems21:43
lifelesshi james_w21:43
lifelessjames_w: how goes $stuff21:43
james_wlifeless: good thanks. you?21:44
james_wlifeless: I met with the code team and others yesterday, and we got somewhere on how the LP stuff should look.21:44
ktenneyjames_w: lifeless: got it, thanks21:48
lifelessjames_w: great21:50
pooliejam, i was planning today to do more reviews22:16
poolieand i have a meeting22:16
pooliebasically just that22:16
jamdo you ever not have a meeting ?22:16
jam:)22:16
poolie:/22:17
jamToday I worked on 3 things.22:17
pooliethe timezones make it more disruptive22:17
jam1) Working with vila to talk about the inventory corruption he saw, and possibly what it needs to fix it22:17
spivToday I've just sent the autopack RPC patch for review (so if poolie is doing reviews... *hint*), and am in the process of landing other approved patches from this cycle.22:18
poolieoh, yay22:18
pooliewell done22:18
pooliei'll do it first22:18
jam2) I worked with someone who has a 16GB repository22:18
jamand was having a hard time doing "bzr branch"22:18
jamit turns out that our ability to request multiple file histories at once22:18
spivI also want to look at jam's remote readv merge/rfc (first impression is that we should approve it, and start work on a streaming readv as jam suggests)22:18
jamwas requesting a readv() that was about 10GB long22:19
jamwhich would be buffered on the server22:19
jambefore sending22:19
jamand then buffered on the client22:19
jambefore processing22:19
jamwhich... doesn't work so well22:19
jamand sent that to the list as spiv mentions22:19
jam3) We also played with btree indexes22:19
jamwhich show up nicely for him22:20
jamso I'm encouraged to help push a --format=1.9 in22:20
jamAnd for future work, still trying to get the retry code working for fetch()22:20
jam(end of text :)22:21
lifelessI've a LeafNode I'm reasonably happy with22:22
poolieok, nice22:22
lifelessworking on handling too big content and spilling into a InternalNode with LeafNode children22:22
jamlifeless: did you see my email about that? I think fullpath is a better route than parent_id,basename22:22
lifelessconsidering sidetracking to fix ISO storage22:22
lifelessjam: haven't seen it yet, no22:22
jamlifeless: np, I sent it about 15 min ago22:23
jamspiv: Is there a reason you need to special case everything, rather than just defining "autopack(self)" on Repository and using it instead of target._pack_collection.autopack()?22:28
jamIt at least seems simpler than special casing the InterPackToRemotePack code22:29
spivjam: which special casing are you referring to?22:29
jamInterPackToRemotePack._pack22:29
spivAh, I see.22:29
jamit seems like you duplicate most of the def _pack code22:29
spivIf you uncommit the last revision, you'll see why ;)22:30
jamjust to get the autopack() called22:30
spivIt's mainly because there was a half-done check_references RPC as well that I have set aside for now.22:30
jamah, I saw that you moved that22:30
jamand I wondered why22:30
jamalso, your "_get_target_revision_index"22:31
spivHmm, and I guess I've pushed some of the other _pack differences into RemoteRepo.autopack22:31
jamblindly uses "self.target._real_repository"22:31
jamwithout calling _ensure_real()22:31
jamare we sure that is safe?22:31
spivYes, because is_compatible will have called _ensure_real22:31
jamah, ok22:31
=== spm_ is now known as spm
jamAnyway, I need to get going, but I saw those things to at least think about22:32
spiv(sadly!  There's no better way to get the remote format atm, though)22:32
spivjam: thanks22:33
jamanyway, there is a lot of duplication in the _pack code that seems like it would go away if you could just define .autopack()22:34
jamWhy doesn't InterOtherToRemote take precedence over InterPackRepo ?22:35
jamanyway, gotta go22:35
jamhave a good day22:35
thumperusing bzrlib, how do I get the files added/removed in a commit, and the diff / diffstat?22:49
poolietree.iter_changes(other)...22:50
thumperpoolie: and now for a non bzr hacker?22:50
thumperpoolie: I have a branch that I want to compare mainline revisions of22:50
poolieb = Branch.open('thing')22:51
poolieb.lock_read()22:52
poolietree1 = b.repository.revision_tree(revid_1)22:53
poolietree2 = b.repository.revision_tree(revid_2)22:53
pooliechanges = tree2.iter_changes(tree1)22:53
pooliechanges=list(changes)22:53
pooliefor c in changes: print c22:54
* thumper goes to a python shell22:54
poolieshow_diff_trees(tree1, tree2)22:54
poolieseason to taste22:54
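Collecting poolie's snippets into one runnable sketch (the branch location 'thing' and the mainline revision numbers 1 and 2 are placeholders):

from StringIO import StringIO
from bzrlib.branch import Branch
from bzrlib.diff import show_diff_trees

b = Branch.open('thing')
b.lock_read()
try:
    revid_1 = b.get_rev_id(1)     # two mainline revisions to compare
    revid_2 = b.get_rev_id(2)
    tree1 = b.repository.revision_tree(revid_1)
    tree2 = b.repository.revision_tree(revid_2)
    for change in tree2.iter_changes(tree1):   # per-file adds/removes/renames/modifications
        print change
    out = StringIO()
    show_diff_trees(tree1, tree2, out)         # full unified diff between the two trees
    print out.getvalue()
finally:
    b.unlock()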
thumperpoolie: thanks22:55
lifelessthumper: 'bzr st -r x..y' ;)22:59
LeoNerdAnyone here using bzrweb, the HTTP frontend thing? I've got:  ImportError: cannot import name LocalBranch23:06
lifelessLeoNerd: I tend to use loggerhead23:08
LeoNerdHrm...23:08
LeoNerdI'll let you into a secret - I want to embed it in my existing website generation system23:08
LeoNerdWhich is perl. Having looked over the bzrweb code, it looks quite easy to drop that in by Inline::Python, and do a small bit of extra work23:09
poolieLeoNerd: i think that error means it's out of date with bzr trunk23:20
LeoNerdYa.. that seems likely23:21
LeoNerdTo be honest I may just take the ideas in the code here and reimplement them myself23:21
LeoNerdWrite some small bits of connector logic in Inline::Python and do the rest from perl23:22
poolieis there something about bzrweb that makes it easier to do this than with loggerhead?23:23
LeoNerdNot really.23:23
LeoNerdIt's just bzrweb was a small simple chunk of code I could easily read and work out what it does23:23
LeoNerdI'm sure I could do it all myself, it'd just mean me spending a while flicking about in the bzrlib docs to find stuff23:24
LeoNerdAnd possibly reinventing tricks this code already has23:24
jisakielgreetings... I was just asking in launchpad: is there a (reasonable) way to have a server with fine-grained permissions to different projects inside a repository?23:26
jisakielthey pointed me to bzr_access but I see that is incomplete, only supporting [/] repo23:26
jisakielI'm just wondering if there might be something similar to the subversion authentication available23:26
pooliejisakiel: why not have different repositories for each project?23:27
jisakielhmmmm23:27
jisakielbzr_access does not seem to support that?23:28
jisakielI really like the launchpad model23:29
jisakielbeing a personal server I don't want to give +w to everybody who is able to push via bzr (as I track different projects there)23:30
jisakieland I'd rather not have shell accounts accesible23:30
pooliejisakiel: launchpad has a repository per branch23:30
jisakielbut how do they authenticate then23:31
jisakielthey just told me they use a custom ssh client23:31
jisakieler... ssh server  ^^23:31
spivjisakiel: hmm, I thought it was possible to configure bzr_access differently23:32
jisakielas far as I understand23:33
spivjisakiel: ah, there's also bzr_ssh_path_limiter in the contrib/ directory23:33
jisakielin 1.8??23:33
spivYes, and probably earlier too23:33
jisakielhuh, don't see it when I do apt-get source bzr23:34
spivOh, huh.23:34
spivIt's not in 1.823:34
spivIt is in bzr.dev23:35
jisakield'oh23:35
spivIt had originally been posted on the mailing list for quite a while before that, I guess that's why I was confused.23:35
spivjisakiel: you can always have different SSH keys for different repositories.23:37
jisakielpoint is23:37
jisakielusers would be using their own ssh key23:37
jisakieljust like in launchpad23:37
jisakieland I, somehow, would be able to give them + or - W23:38
jisakieljust like editing svnauth23:38
Peng_Argh, Loggerhead is unhappy.23:38
jisakielI guess the only "reasonable" way is to explode the number of groups in the server23:39
jisakielone per project23:39
jisakieland use filesystem permissions23:39
jisakielbut *urgh*23:40
spivjisakiel: there's another possibility -- if you're just interested in restricting writes (not reads), then you could use PQM to manage the branches you want to restrict write access to.23:41
spivjisakiel: it's a different workflow, so it might not be suitable for you.23:41
spivjisakiel: it's how the bzr trunk is managed, for example; bzr developers keep their own branches of bzr wherever they like, but to commit to the trunk you need to send a request to the PQM bot.23:42
jisakielheh, I guess it might be easier to extend the bzr_access script than to deploy that23:43
jisakiel;)23:43
spivjisakiel: PQM checks that the request is authorised (via PGP), does the requested merge, runs the test suite, and then only if the tests pass does it commit and publish the merge.23:43
spivIt solves a slightly different problem, but maybe it's useful to you.23:44
jisakielseems too complicated but txs anyway23:44
spivFair enough.23:44
jisakiel( and test suites are still utopia in the CS courses ;) )23:44
spivThe problem with extending bzr_access is that the bzr client always asks for --directory=/ when it connects over bzr+ssh.23:45
spiv(Ah! ;) )23:45
spiv(and because the bzr client can't know which directories it will need in advance, it can't really do anything else)23:46
jisakielI see23:47
jisakielyikes23:47
jisakiel:D23:47
spivYou could make it spawn something other than plain bzr serve, i.e. write a bzr plugin to provide a virtual filesystem with the restrictions you need.23:47
jisakiel--directory=(?P<dir>\S+)\s+23:48
spivThere are already the building blocks for that in bzrlib (there's pseudo-chrooting logic and also force-readonly logic already there in bzrlib.transport.decorators)23:48
spivIt wouldn't be trivial to write, though.  Certainly not as easy as editing a config file!23:49
jisakielstill seems like a non-trivial amount of work23:49
jisakielthat's it23:49
spivjisakiel: right, that regex is always going to just see /.23:50
* spiv nods23:50
jisakielthen get_directory?23:50
spivIf you can bear it, using filesystem groups sounds the most practical for you.23:50
spivget_directory will always get / :(23:50
jisakieland bzr+ssh sends / because...?23:51
spivThe author of bzr_access was a bit optimistic about what the bzr client would do.23:51
jisakiellol :D23:51
jisakieljust like me23:51
spivBecause the client can't know what directories it will need to look at before connecting.23:51
spivOpening a branch may require opening a shared repository in an arbitrary parent directory, or resolving a branch reference at an arbitrary location, etc.23:52
jisakielaaahm23:52
jisakielI was just scratching my head, as bzr+ssh://host/whatever/path seems complete23:53
spive.g, "bzr log bzr+ssh://host/whatever/path" can't spawn a remote bzr serve with --directory=/whatever/path, because the actual revision data might be at /whatever.23:54
spivOr even /!23:54
jisakielbut nevertheless23:54
jisakielah23:54
jisakielbzr serve is, of course, restricted in doing ".."23:54
jisakielbecause what would be the point in having --directory23:55
jisakielotherwise while browsing the log one could just go "up" like I guess bzr log /whatever/path does23:55
spiv--directory is there so that "bzr serve" over TCP can serve just part of your filesystem, and also so that bzr_ssh_path_limiter can be written.23:56
spivRight.23:56
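For the record, the general shape of such a limiter, as a sketch only (this is not the contrib/bzr_ssh_path_limiter script spiv pastes below, and the key-to-directory map is hypothetical): install the wrapper as the forced command= for each ssh key in authorized_keys, and have it re-exec bzr serve confined to that key's directory.

import os
import sys

# Hypothetical mapping from a name passed as the wrapper's argument in
# authorized_keys to the only directory that key may serve.
ALLOWED_DIRS = {
    'alice': '/srv/bzr/projectA',
    'bob': '/srv/bzr/projectB',
}

def main():
    who = sys.argv[1]
    directory = ALLOWED_DIRS.get(who)
    if directory is None:
        sys.exit('%s: no access' % who)
    # Only honour the client actually asking for the smart server.
    original = os.environ.get('SSH_ORIGINAL_COMMAND', '')
    if 'serve' not in original:
        sys.exit('only "bzr serve" is allowed over this key')
    os.execvp('bzr', ['bzr', 'serve', '--inet',
                      '--directory=%s' % directory, '--allow-writes'])

if __name__ == '__main__':
    main()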
jisakiel(which btw I can't find in bzr-dev, at least not in http://bazaar-vcs.org/bzr/bzr.dev/contrib/ )23:57
spivHmm, someone really ought to write that hypothetical plugin I was referring to and add it to contrib/23:57
jisakiellol23:57
jisakielseems overwhelming to me right now, sorry :D23:58
spiv(that someone is probably me)23:58
jisakielI guess this excludes some "business" uses of bzr with more-or-less-centralized workflows23:59
spivHuh, it's not in bzr.dev.  Oh man, I'm an idiot.23:59
spivIt's in my local checkout, not committed!23:59
jisakielheh, bzr add perhaps? :P23:59
jisakielit happens xd23:59
spivjisakiel: http://rafb.net/p/P1TEcP10.html23:59
