[00:00] ok, this makes performance tolerable, just fixing this depth-of-query bug [00:14] abentley: a thought, isn't 'is_ancestor()' just 'return self.heads([ancestor, descendant]) == set([descendant])' [00:14] Could be. Haven't thought of it that way. [00:15] AFAICT it should be equally efficient [00:16] That seems right to me. [00:19] yah it is [00:19] and it exposes the bug in heads, because making it the same errors on the test that is_ancestor does not access all history [00:19] :) [01:18] hi all! i have a couple very very basic questions about bzr. [01:18] (trying to decide between bzr, darcs, and mercurial) [01:18] First of all... does bzr have any support for forests like Mercurial's Forest Extension? [01:21] work is in progress to add support for nested trees (equivalent of forests as I understand it) [01:21] superset of forests, because nested trees recurse indefinately, forests are a single construct [01:22] where is LarstiQ anyhow ? :) [01:22] Second question... how well does bzr handle binary files? [01:23] (pdfs, icky-word-processor documents, images... up to about 5mb) [01:23] lifeless: he had problems with his internet connection after his holidays [01:24] lifeless: haven't spoken to him in a couple of weeks [01:26] bbiab [01:26] jauricchio: binaries are fine [01:26] jauricchio: they can be a bit slow to diff; but other than that we just preserve them as-is. [01:28] lifeless: Does bzr diff magically avoid unchanged files? [01:28] lifeless, did you see my discussion with aaron on dirstate-with-subtree? do you think now be a good moment to encourage packs-with-subtree over packs-without-subtree? [01:28] jelmer: no, I didn't. [01:29] jauricchio: what do you mean? If you mean, do unchanged binaries make 'bzr st' and 'bzr diff' slow - no they don't [01:29] Yeah that was the question. [01:30] So slowness is only incurred when the file is actually changed [01:30] that's good :) [01:30] if you touch the file we sha it [01:30] if the sha is different we will diff it [01:30] (roughly). There are some tweaks in different circumstances. [01:30] makes sense. [01:31] Workflow question. Let's say I'm a developer working on multiple features for a release of a projects. So I make a branch of our most recent version to record my changes that will go into the next release. Now let's say I've got 5 features I'm adding. I get them done, commit them all locally, but then let's say a feature gets pulled. Is there a way to identify code belonging to a feature so that it can be easily added/removed without creating a b [01:32] Vantage13: your sentence is being truncated [01:32] Vantage13: 'without creating a b [01:33] 10:32 < l [01:33] ' [01:33] without creating a branch for every feature? [01:35] or how do people usually handle this scenario? [01:35] I think you wanted to create a branch for every feature. Branches are cheap. :) [01:36] jauricchio: branches are, but working trees are not. Currently our code base is about 2GB, so 5 working trees would be 10GB of space per developer... [01:36] Hm... fair enough. (That's a *lot* of code, man...) [01:37] Darcs would call this a "spontaneous branch". Prefix your commit messages with something, and you can work with 'all patches prefixed with (matching) some text' [01:38] Vantage13: there is a 'switch' command in bzrtools [01:38] the one thing I thought might deal with this would be using a shared repo and doing a bzr switch for each feature. 
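A minimal sketch of the equivalence proposed at [00:14], that is_ancestor(ancestor, descendant) is just heads([ancestor, descendant]) == set([descendant]). It uses a toy parent-map graph rather than bzrlib's real Graph/heads() API; PARENTS, ancestry() and heads() here are invented stand-ins for illustration.

    # Toy ancestry graph: revision_id -> list of parent revision_ids.
    PARENTS = {
        'rev-1': [],
        'rev-2': ['rev-1'],
        'rev-3': ['rev-2'],
        'rev-3b': ['rev-2'],       # a branch that diverged from rev-2
    }

    def ancestry(rev):
        """All revisions reachable from rev, including rev itself."""
        seen, todo = set(), [rev]
        while todo:
            r = todo.pop()
            if r not in seen:
                seen.add(r)
                todo.extend(PARENTS[r])
        return seen

    def heads(revisions):
        """Drop any candidate that is an ancestor of another candidate."""
        revisions = set(revisions)
        result = set(revisions)
        for r in revisions:
            result -= (ancestry(r) - set([r]))
        return result

    def is_ancestor(ancestor, descendant):
        # The one-liner from [00:14]: ancestor is dominated by descendant.
        return heads([ancestor, descendant]) == set([descendant])

    assert is_ancestor('rev-1', 'rev-3')        # linear ancestry
    assert not is_ancestor('rev-3b', 'rev-3')   # diverged: heads() keeps both

The toy version materialises full ancestries for brevity; a real implementation can stop walking history as soon as the question is decided, which is the efficiency point being made in the log.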
So switch to main version, branch feature1, commit, switch back to main version, branch feature2, commit, switch back, etc. Then merge them all later... [01:39] Vantage13: this lets you transform a working tree from branch A to branch B quite cheaply [01:39] lifeless: exactly what I was thinking [01:39] lifeless: does what I described sound right? [01:41] yup [01:42] lifeless: Is there a way to 'switch' and create a branch at the same time? e.g. switch back to main after feature1, then create a branch for feature2 in the same working tree, then switch to it? (to avoid having to create a second working tree at all) [01:47] Vantage13: why would you create a second working tree? [01:48] lifeless: I wouldn't want to. I'd just want to create a branch and switch to it from my current working tree (which would be on the main branch) [01:48] lifeless: I'm wondering if I can do that without creating another working tree [01:48] Vantage13: 'bzr branch main new-branch && bzr switch new-branch' [01:48] won't create a working tree unless your repository is configured to make a working tree [01:49] so just disable creating working trees [01:49] lifeless: ah... [01:51] So, another question. Suppose I have a directory within a repo. I'd like to pull it out of the repo with all changes applicable to it, and make a new repo containing only it. With leading pathnames removed. [01:52] That is, projects is my repo. I want to take projects/foo/baz and make a new repo containing the contents of baz with all history. [01:52] With baz as the top level of the new repo [02:00] jauricchio: I think "hg transplant" could do that (or at least do the reverse, moving baz into project/foo), but I'm not sure if it preserves cset IDs. [02:00] jauricchio: I don't know of a way for bzr. [02:01] It's okay if cset IDs are lost [02:01] My use case is, I've been working on this little baby project that didn't deserve its own repo. Then I decide it does and I want to share it with the world. [02:04] Peng: answering questions here with 'hg can do X' isn't exactly helpful. [02:04] jauricchio: 'bzr split' can do that [02:05] lifeless: /me checks docs [02:05] lifeless: Heh, sorry. :P [02:05] Peng: You're more than welcome to say things like 'packs at 10% faster for me, but you're still 40% slower'. Thats extremely useful to know. [02:06] lifeless: 'split' not found in the user reference at doc.bazaar-vcs.org [02:06] jauricchio: 'bzr help split'. [02:06] not installed currently :\ [02:06] ah. Well then. We have the feature. [02:06] :) cool [02:07] and naturally we're expanding on it and improving it as time goes by. [02:07] of course [02:09] ok, thanks for your help lifeless, Peng, and jelmer. bzr looks to be what i need! [02:09] bye [02:09] Cool. [02:10] Peng: speaking of which, are you still dogfooding packs from time to time ? [02:12] jelmer: so do you mean bug 131667 ? [02:12] Launchpad bug 131667 in bzr "--dirstate-with-subtree is documented in the man page" [Medium,Fix committed] https://launchpad.net/bugs/131667 [02:12] yes [02:13] I would love someone to sit down and take on the patch [02:13] larstiq was working on it but hes disappeared [02:13] the patch to make nested-by-reference trees robust [02:13] lifeless, hi, so to check that plan: [02:13] this is what is needed to make -subtrees a default format and trigger the format bump [02:13] - make new knit repo containing bzr.dev [02:13] - reconcile that. 
[02:13] - make a new pack repo [02:14] - pull bzr.dev from the first repo [02:14] - pull packs branch [02:14] (or bzr upgrade --experimental in the reconciled repo) [02:14] but yes, thats the right list [02:19] lifeless: I keep a copy of bzr.dev and repository in pack format and pull and diff frequently, but that's the extent of my dogfooding. [02:20] ok [02:20] FWIW, I never reconciled it. [02:20] did you try the C sequence matcher thing ? [02:20] which speeds up diff, and would AIUI make your home dir versioning tolerable or even pleasant [02:22] lifeless: I switched it over to hg before I found out about that, and I've never gotten around to trying it. [02:22] Hmm. I could right now, but Firefox is hogging so much RAM it wouldn't be remotely accurate. [02:22] thats fine, don't worry about it [02:25] Mozilla should pay me for all the RAM upgrades I have to do for Firefox. [02:25] less tabs++ :) [02:25] Yeah. [02:25] Less Flash++ too. [02:25] That's what's really killing it. [02:37] poolie: how is it going ? [02:37] reconciling again [02:37] i may skip that as i'm pretty sure i did that in the same repo on friday [02:42] * Peng sighs. [02:42] switching terminals [02:43] Firefox froze right before I was going to shut it down. [02:45] :( [02:51] Hmm. I should probably just cp instead of using "bzr branch". [03:00] abentley: you're bouncing [03:08] Yes. Sorry. [03:08] Updated to gutsy, and it's giving me probs. [03:10] :( [03:12] Uh-ohs. [03:12] Branching the pack version of my homedir crashed with a KnitCorrupt error. [03:12] But I think that branch might be older than the most recent pack format. [03:13] yes, you probably have annotated knit data in it [03:13] Yeah. [03:13] Yeah, I think it might be like the very first revision it's saying is corrupt. [03:14] Or, the first file. [03:20] * Peng watches the progress bar. === kiko-fud is now known as kiko-zzz [03:24] A 700 MB .pack file. That scares me. [03:25] It was 743 MB with annotations, now it's 712. [03:29] thats cause you have a lot of data with lots of changes [03:30] Well yeah. But I'm afraid of it. [03:30] New bug: #155629 in bzr "newly-created dirstate file appears blank on curlftpfs filesystem" [Undecided,New] https://launchpad.net/bugs/155629 [03:31] One little bit out of 6 billion could be wrong. [03:32] no different to have many 10M files [03:32] lifeless: With 0.91 with Pyrex and dirstate, real time is 3m9.784s, user is 2m56.132s, sys is 0m10.781s. That might be like a 20 or 30 second improvement. [03:33] Peng: how many files are there there ? [03:33] lifeless: Uhh. A lot. [03:33] Peng: and are you committing a merge, or just a regular commit ? [03:34] Peng: and finally, had you just added new files before the commit ? [03:35] lifeless: What command tells me the total number of files? [03:35] bzr inventory | wc -l [03:36] lifeless: 4695. [03:36] lifeless: In that commit, 39 files changed. Not a merge, and no new files. [03:36] had you run bzr add? [03:37] (bzr add in 0.91 destroys the stat cache) [03:37] lifeless: Nope. [03:37] lifeless: uncommit, maybe a status or two, then commit. [03:37] thanks [03:38] thats very slow compared to my current results with packs, which is encouraging [03:38] I'm testing packs right now. [03:39] BTW, hg is like 30 seconds. ;P [03:39] yah [03:40] Also, I caught it at up to ~355 MB of RAM, and I bet it was bigger at other times. [03:40] Wow. [03:41] real 1m44.768s [03:41] user 1m37.652s [03:41] sys 0m5.437s [03:41] Packs *are* faster. [03:41] The new .pack file is 726 KB. 
[03:42] hmmm, not that much faster [03:42] I'd love it if you could repeat that [03:42] Just run it again? [03:42] with --lsprof-file foo.callgrind [03:42] then gzip the callgrind file and mail it to me [03:42] Gack. [03:43] this won't give me anything private [03:43] just the callgraph and timings [03:43] so I can see where the slowness is coming from [03:43] I know. [03:43] I said "Gack." because I had just started running it again. [03:43] ;) [03:44] just ctrl-C it [03:44] I did. [03:44] It'll be done shortly. [03:45] thanks! [03:45] real 2m0.958s [03:45] user 1m38.290s [03:45] sys 0m22.328s [03:45] Huh. [03:45] I would've expected user time to be higher due to lsprof. [03:47] * Peng sighs. [03:51] New bug: #155632 in bzr "KeyError: 77 in pycurl_errors.errorcode" [Undecided,New] https://launchpad.net/bugs/155632 [03:53] poolie: how goes it ? [03:53] lifeless: What's your email address? [03:59] robert at ubuntu [04:01] Blaah, you have more email addresses than I do. [04:01] ubuntu.com? [04:01] yah [04:01] lifeless, i think that's the same error i saw on friday [04:02] so [04:02] Wasn't it @canonical.com two months ago? [04:03] yes [04:03] its there too [04:03] you coulda use the prior address :) [04:06] I'll send it in a second. [04:09] Sent. [04:15] Peng: thanks [04:17] Peng: 67% is in get_content_maps [04:17] Peng: which is building up the copy of the the text to diff against [04:17] 10% in IO [04:17] we can't reduce that much [04:17] actually, 5% [04:18] 5% parsing the deltas from disk [04:19] ohfuckme [04:19] Peng: I can see an easy 17% win [04:19] we're memory copying stuff we don't need to, because of a general 'get many versions' function [04:20] Peng: if you look at bzrlib/knit.py [04:20] Peng: the function get_content_maps [04:20] New bug: #155637 in bzr "patch confict should be an internal error" [Low,Confirmed] https://launchpad.net/bugs/155637 [04:20] see the coyp() calls in there - there are two I think. Guard them with e.g. 'if multiple_versions:' [04:21] and at the top of the function do [04:21] multiple_versions = 1 != len(version_ids) [04:21] this should save 17% trivially. [04:24] lifeless: Cool. [04:24] Hold on. [04:25] lifeless: _get_content_maps? [04:26] yah [04:26] line 1000 in my knit.py [04:27] Exactly 1000? [04:27] Anyway, done. [04:28] lifeless: Want a new callgrind file? [04:30] nah, just run it without callgrind and see if it helps [04:30] Well, I ran it with callgrind and it helped. [04:30] Uhm. [04:30] 56 seconds vs. 1m38s. [04:31] great [04:31] Yeah. [04:31] theres another win of the same size in _apply_delta [04:31] in the same file [04:31] will be a little more tricky to fix that without leading to bugs [04:31] I'll do both today though [04:32] I helped make Bazaar faster. Cool. [04:33] yup [04:36] Well, I'd be happy to re-run it in the future if you ever want me to. [04:36] tomorrow probably ;) [04:36] and thanks [04:37] Heh, okay. [05:15] * ig1 food [05:52] lifeless, ping? [05:53] hi [05:54] so lifeless, 3 patches today (so far) - what order do you want them reviewed in if any? 
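The easy 17% win lifeless describes at [04:19]-[04:21] comes down to skipping defensive list copies when only one version's text is being extracted. Below is a hedged, self-contained illustration of that pattern, not the actual bzrlib knit.py code: get_texts, base_lines and the (start, end, new_lines) delta format are invented for the example, but the multiple_versions guard is the one suggested in the log.

    def get_texts(version_ids, base_lines, deltas):
        """Return {version_id: lines} by applying each version's delta to a base."""
        # The guard suggested at [04:21]: only pay for defensive copies when
        # the base will be reused for more than one requested version.
        multiple_versions = 1 != len(version_ids)
        results = {}
        for version_id in version_ids:
            if multiple_versions:
                lines = list(base_lines)   # keep the shared base pristine
            else:
                lines = base_lines         # single-version fast path: no copy
            # Apply hunks last-to-first so earlier offsets stay valid.
            for start, end, new_lines in reversed(deltas.get(version_id, [])):
                lines[start:end] = new_lines
            results[version_id] = lines
        return results

    base = ['a\n', 'b\n', 'c\n']
    print(get_texts(['v2'], base, {'v2': [(1, 2, ['B\n'])]}))

On a file with thousands of lines and a long delta chain, dropping those per-delta copies is what turns into the reported 1m38s to 56s improvement, since each copy touches every line object.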
[05:55] don't care about order [05:55] just about completeness [05:55] FIFO is fine [05:55] and probably best [05:55] ok - I'll start with the tests one now [05:55] oh that one can be ignored [05:55] leave it till later [05:56] that's why I asked :-) [05:56] graph.heads then === AnMaster_ is now known as AnMaster [06:36] i've turned my check-knits branch into a plugin [06:37] it's faster than a full check (because it does less) [06:37] it may be useful [06:40] ig1: how is the review going ? [06:41] getting there ... [06:41] haven't looked at the graph code before [06:41] you have an import pdb but onothing else stands out so far [06:42] s/onothing/nothing/ [06:42] do I? oh foo [06:42] in test_graph.py [06:43] also 2 lines immediately before run_heads_break_deeper ... [06:43] otherwise tests look good [07:01] is that 'merge it' [07:01] or 'I'm still thinking' ? :) [07:02] I'm still thinking - need another 20-30 minutes I think [07:13] ig1: well, next patch is up [07:14] ok - not thru yet to me - will check again soon [07:18] there it goes [07:20] Peng: 2851 [07:20] Peng: thats the revno in repository with a bugfix to remove all three list-copies. [07:22] Peng: it will also, incidentally, reduce memory pressure, but probably not much [07:58] bug 154283 [07:58] Launchpad bug 154283 in bzr "indexerror in Knit._get_components_positions pulling in pack repo" [Undecided,New] https://launchpad.net/bugs/154283 [08:12] join-branches.txt-20050309044946-f635037a889c0388^@robertc@robertcollins.net-20050919060519-f582f62146b0b458^@^@1747020^M1746789^I1747020^@ 42658896 124$ [08:12] Took the words right outta my mouth. [08:15] You have strange things in your mouth. [08:18] It's like that thing with the old lady who swallowed a fly. [08:19] lifeless: that review finally mailed - few tweaks only as best I can see [08:27] lifeless: Holy crap. With callgrind, user time went from 56 seconds last time to 19. Real time was still at 56 seconds, but Firefox is open now so everything is probably laggier. Sys time went from 23s to 21s. [08:28] lifeless: Trying again, real time is down 10s and the others are down 1 or 2. [08:36] ig1: yup, copacetic [08:36] ig1: will do tomorrow. [08:36] Peng: so is this good ? [08:37] lifeless:quikc Q [08:37] what's the idea behind the version_id param in the apply_delta API? [08:37] Peng: what is real time sitting at without callgrind ? [08:37] ig1: KnitContent objects have a version === pcapriotti_ is now known as pcapriotti [08:38] ig1: Plain ones use this for all lines; Annotated ones carry it per-line [08:38] so the parameter is to totally transform the content object from state to state [08:38] Hold on. I'm brushing my teeth. [08:39] and self_version_id only gets set for one, not the other, is therefore by design? [08:39] yes [08:39] s/self_/self._/ [08:39] ok - thanks [08:40] I'm still playing with svn-import. Does anyone know why I'd get a "server certificate verification failed" on one gutsy computer and not on another, connecting to https://docteam.ubuntu.com/repos? [08:41] check for /etc/ssl/ca-certificates.crt [08:41] if its missing run update-ca-certificates [08:41] real 0m22.511s [08:41] user 0m17.378s [08:41] sys 0m2.758s [08:41] lifeless: ^ [08:41] Peng: how does this compare to hg ? [08:41] Umm. [08:42] Very good. [08:42] well? 
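A toy model of the distinction discussed at [08:37]-[08:39]: "plain" content carries one version id for the whole object, while "annotated" content records an origin per line, which is why apply_delta needs the id of the version being built. The classes and the (start, end, new_lines) delta format below are invented for illustration and are not bzrlib's KnitContent API.

    class PlainContent(object):
        """All lines share the content object's single version id."""
        def __init__(self, version_id, lines):
            self.version_id = version_id
            self.lines = list(lines)

        def apply_delta(self, delta, new_version_id):
            # Hunks applied last-to-first so earlier offsets stay valid.
            for start, end, new_lines in reversed(delta):
                self.lines[start:end] = new_lines
            # The whole object is transformed from state to state.
            self.version_id = new_version_id

    class AnnotatedContent(object):
        """Each line remembers the version that introduced it."""
        def __init__(self, annotated_lines):
            # annotated_lines: list of (origin_version_id, line) pairs.
            self.lines = list(annotated_lines)

        def apply_delta(self, delta, new_version_id):
            # New lines are attributed to the version being constructed,
            # which is why the API needs new_version_id per call.
            for start, end, new_lines in reversed(delta):
                self.lines[start:end] = [(new_version_id, l) for l in new_lines]

    plain = PlainContent('v1', ['a\n', 'b\n'])
    plain.apply_delta([(1, 2, ['B\n'])], 'v2')

    annotated = AnnotatedContent([('v1', 'a\n'), ('v1', 'b\n')])
    annotated.apply_delta([(1, 2, ['B\n'])], 'v2')
    assert annotated.lines == [('v1', 'a\n'), ('v2', 'B\n')]

Note how only the plain flavour keeps a per-object version id, mirroring the point at [08:39] that self._version_id is set for one kind and not the other by design.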
[08:42] thanks :) [08:42] lifeless: it was present in /etc/ssl/certs/ [08:42] I ran the update anyway, no change [08:42] Last hg one I did was: [08:42] real 0m45.080s [08:42] user 0m13.716s [08:42] sys 0m2.316s [08:42] mdke: sorry yes thats the right place. Other than that I have no suggestions. [08:42] ok, thanks anyway [08:43] Peng: hmm, still need to shave off lots of user time. But I knew that [08:43] lifeless: one was executed over ssh, could that make a difference? [08:43] Bzr and hg aren't committing exactly the same data. [08:43] So it's not an entirely fair comparison. [08:43] Peng: bzr's committing less? [08:43] I don't know. [08:43] ah [08:43] so its roughly the same but not identical ? [08:43] Bzr is using two-month-old data. Since then, my IRC logs have grown larger, but I've also switched from Opera to Firefox. [08:44] Hm. [08:44] I think hg is about the same as it was two months ago. It used to take about 30 seconds, but I'm not sure if that was real or user time. [08:44] If it was real time, the extra time is just due to Firefox hogging RAM. [08:47] gnight all. [08:47] Good night. [09:02] lifeless: BTW, bzr is probably committing more changes. Bzr is doing 12 hours of changes while hg is doing 1. === Mez is now known as Mez|Away [09:10] Is it possible to pull the bzr.dev branch over the bzr protocal [09:10] All the docs point to http urls [09:18] GaryvdM, you should be able to get it from [09:18] bzr+ssh://bazaar.launchpad.net/~bzr/bzr/trunk [09:18] you need to have a launchpad account [09:18] we'll offer public bzr:// soon [09:20] Not to be a troll, but is bzr any less slow than http? [09:21] I think there are fewer round-trips, so probably. [09:22] It takes a very long time to pull changes to bzr.dev. [09:22] Of course, the one time I time it, there aren't any new changes, and it takes 3 seconds. [09:23] * Peng wanders off. [09:29] ok so it's [09:29] KnitVersionedFile(file:///home/mbp/newbzr/dest2/.bzr/repository/text%3ATestUtil.py-20050824080200-5f70140a2d938694) [09:30] bzr: ERROR: Revision {robertc@robertcollins.net-20050919044328-0205c679f3051340} not present in "". [09:34] hi all [09:34] i would like to know, whether there are any tools for viewing the bazaar content [09:34] like for subversion we have, trac, websvn etc [09:36] Morning [09:42] i would like to know, whether there are any tools for viewing the bazaar content [09:42] like for subversion we have, trac, websvn etc [09:47] indraveni, yes, look at loggerhead [09:53] indraveni: There's also a bzr plugin for Trac. [09:55] Peng: acording to http://weblogs.mozillazine.org/jst/archives/2007/02/bzr_and_different_network_prot.html bzr is faster than http [09:55] I'm going to test it now [09:55] I live in South Africa - We have slow internet connections. [10:00] bzr+ssh = 12 sec [10:00] night all [10:00] http = 8.9 [10:00] no changes in the tree on either [10:02] Peng: bzr can be faster than http; packs help with this, on knits latency sucks arse. [10:03] What I wonder is how efficient bzr's smart server is, espcially compared to hg. Hg only uses a smart server, so it must have had a lot of tweaking. With bzr, the smart server is only starting to catch on. [10:04] There is that hpss stuff. 
=== Mez|Away is now known as Mez [11:06] New bug: #155730 in bzr "reconcile doesn't adjust knit index references to otherwise-unreferenced file revisions" [Critical,Confirmed] https://launchpad.net/bugs/155730 [11:45] On http://bazaar-vcs.org/BzrDevelopment , the Release revids have not been updated for a while [11:45] How can I find out what they are for newer releases? === Mez is now known as Mez|Away [12:00] GaryvdM: I was going to guess `bzr tags`, but somewhat astonishingly there aren't tags for releases in their 'bzr.dev' branch [12:00] Sorry, those revids are only available to Members of the Brotherhood. === Mez|Away is now known as Mez [12:01] Who does the releases? === kiko-zzz is now known as kiko [12:01] Well, the releases don't come from bzr.dev. They have their own branches. [12:01] (and bzr.dev can't have tags anyway, in its current form) [12:02] hi all: does anyone know a good method for doing "bzr pull" over an unreliable connection? [12:02] all: my connection only lasts a maximum of 12 minutes at a time, but the pull operation takes longer (because it's a big tree). [12:03] all: I'm using bzr+ssh://....... [12:03] all: I'm open to using a workaround, such as downloading the tree via HTTP, in multiple chunks. [12:04] Well, pull saves what it gets, in some granularity. So as long as it takes you less'n 12 minutes to get a single "hunk" (probably the changes or a subset of the changes to one file, with knits), each pull should move forward. [12:04] I tend to think "kill your network admin in his sleep" is a more satisfying solution, though. [12:05] fullermd: I'd like to do that to my ISP. [12:05] i think jsk would quite like to take that approach [12:05] :) [12:05] * jsk has been plotting revenge for some days now [12:05] Oh, ISP's even easier. You know they're all in one office ;) [12:05] Take off and nuke the site from orbit. It's the only way to be sure. [12:05] fullermd: :) [12:05] fullermd: (bzr.dev is not tags compatible? You guys are weird) [12:06] AfC: bzr.dev is still branch5. I think some Linux distros are still shipping 0.11, so if we updated it they couldn't access it. [12:07] Oh [12:07] It's probably about time that it would be "OK" to make that switch these days. [12:07] Well, the default format has changed to dirstate-tags. [12:07] * AfC refrains from commenting on distros that have such lame upgrade policies, although he notes down a reminder to rant about it again when giving his next conference keynote. [12:07] Hasn't been any big pressure to, though. I wouldn't be too surprised if it stayed as-is until there's been a couple releases with packs, then it gets moved there. [12:08] like, uh, dapper ? [12:08] fullermd: So you suggest the following steps: [12:08] 1. run "bzr pull --remember bzr+ssh://..." [12:08] 2. [12:08] 3. [12:08] 4. do a ^C to kill the "bzr pull" [12:08] 5. rerun "bzr pull --remember bzr+ssh://..." [12:08] 6. repeat 2 onwards... [12:08] where "..." represents the full path [12:08] Well, you'd only need the --remember once. But yeah, that's the best I've got. [12:08] The subsequent commands won't need --remember [12:08] fullermd: cool. I'll give it a try. thanks :) [12:08] * jsk resumes his plotting... [12:10] fullermd: you know, that's not the sort of environment that I would have thought about, but I am curious how the packs modality will work in the face of partial transfers. [12:11] Well, with a buttload fewer round trips, there's a good chance packs will get done sufficiently quicker that you wouldn't have to many mid-flight disconnects. 
[12:11] The individual hunks to download and save would probably be bigger, so I s'pose that with a sufficiently high disconnect rate and low bandwidth, you might get stuck never making any progress with packs, where knits would inch along. But that's pretty pathological. [12:12] (about to be disconnected) [12:13] In that case, the answer is probably "keep trying to pull until the smart server gets sufficiently smart and the server upgrades" ;) [12:14] 'course, with sufficiently pathological conditions, that solution probably won't work either. [12:14] Your hunks would just be too large to move in the bandwidth/time available. [12:20] Hi guys ... just about to try and migrate from Perforce to bzr, and wanted to double check I'm doing the right thing, so may run some things by you all in the next few minutes [12:23] At the moment, I just need to share development between two of my machines, but eventually I may need to open this up to some of the company's clients, so the "Decentralized with human gatekeeper" model will come into play [12:25] To start, I'm going to pull my last stable version out of perforce and check it in as a head revision, and then the latest code in my two development branches, and check those in as branches to the head revision. I don't need to pull across all the perforce history (which is good, because I probably can't) [12:28] I was just trying to figure out if I should have a shared repository with trees or without [12:28] I guess because the desktop machine is both my fileserver and workstation, with trees makes sense... does that seem reasonable? [12:28] Yes [12:28] Well, on the one hand, it doesn't matter. You can change it around trivially later. [12:29] fullermd: thanks, that's useful to know [12:29] But yah, going with trees now would be the easy thing to do (unless the trees are so big it'll cause issues), since you can blow them away and disable them by default later. [12:29] VSpike: You might be able to convert from p4 to bzr without losing history... [12:30] Peng: I quite like the idea of a clean start - then I can lay out the repository in a more logical way than the rather haphazard one I have in perforce === AfC is now known as AfC|dinner [12:31] If I have a branch object, how do get the bzrdir object? [12:33] Ah - branch.base === jsk__ is now known as jsk [12:35] branch.bzrdir, isn't it? [12:39] mwhudson: yes [12:40] thanks [12:53] fullermd: thanks for your help earlier. I think I'll have to try a different method. My connection windows are 12 minutes long, and this doesn't seem sufficient for a "bzr pull" to get to the stage where it has committed anything to disk (the contents of .bzr do not change during this time, despite the network traffic and RAM usage increasing). [12:54] fullermd: after a full 11 minutes, the "bzr pull" is still at Pull Phase 0/2 [12:54] Bleh. [12:55] Well, that part doesn't mean too much by itself. I think 90% of pull's time is in phase 0 (that 0-based numbering still bugs me) [12:55] computer scientists... [12:55] :) [12:57] Well, it's inconsistent too, I think. The /2 means there are 2 phases, and they're called 0 and 1. So when it's doing its last byte, it's on phase 1/2. [12:57] fullermd: I do have ssh access to the server hosting the branch (at the other end of my connection). Perhaps I could create a tar of the branch, and download it in chunks. [12:58] fullermd: however, I'm not sure if this method will create a working local branch. [12:59] fullermd: +1 on your suggestion to revise the numbering... 
[12:59] After you have download the branch, you can do bzr checkout to create a working tree. [13:01] GaryvdM: that sounds like a good idea. However, I'm hoping to run a "bzr pull --remember" on the local branch, to tie it up with the remote one. [13:01] GaryvdM: I'm hoping that this operation will complete quickly. [13:01] GaryvdM: and not take as long as it would with an newly "bzr init"-ed directory. [13:02] GaryvdM: thanks for your help :) [13:02] hi [13:02] I get this [13:02] edulix@edulix-laptop:~/descargas/dark-extermination$ bzr push sftp://edulix@bazaar.launchpad.net/~edulix/dark-extermination/trunk [13:02] Permission denied (publickey). [13:02] bzr: ERROR: Unable to connect to SSH host bazaar.launchpad.net; EOF during negotiation [13:03] jsk_: Well, you could rsync too (as long as you're sure your destination doesn't have stuff that needs to be not be overwritten) [13:04] fullermd: he's pulling [13:04] jsk_: rsync would be ideal for this situation [13:04] [I used to use rsync to *push* until the bzr server came along] [13:07] jsk_: have you tried bzr pull http+urllib:// ? It may be longer but urllib implementation should reconnect automatically on transient errors === mrevell is now known as mrevell-lunch [13:14] In checkouted tree I can see parent branch: /home/users/arekdydo/src/abbon/abbon2 which is wrong, how can I change it to proper value ? [13:15] AfC|dinner: ? I know, but what's that matter? [13:16] fullermd: yeah, sorry, you're right. [13:19] Edulix: check that you published your ssh key on launchpad [13:24] unbind / bind does not set parent branch it keeps on bugus value ... [13:25] Parent branch is set/used by pull/merge/missing... maybe others I forget? [13:26] If you don't need those to default to anything in particular though, it doesn't really matter (except cosmetically) if the given value is nutty, though. [13:32] fullermd: Ah ! I can set it by bzr pull --remember [13:32] Thanks [13:34] I know this is a bit OT, but how tightly wedded is Visual Studio to its \Projects, \Websites layout? I'm wondering if I can make it play nicely with a head/project1, head/project2, etc.. layout... does anyone have experience? === mrevell-lunch is now known as mrevell === RichardL__ is now known as rml [14:07] hey guys === AfC|dinner is now known as AfC [14:37] jsk_: you're bouncing :) Did you succeed with your push-inside-a-network-shutting-down-every-12-minutes ? [14:37] pull even [14:39] sabdfl: hi, just passing around or heard about lifeless wonders ? :) [14:39] vila: i hear there are wonders in the works, what's the latest news? [14:40] Peng: can you summarize your last experiments with the pack format to sabdfl ? [14:41] oh, i saw that bit :-) [14:41] sabdfl: :) [14:41] very cool indeed! [14:43] yup, very encouraging [15:07] vila: thanks! I did succeed in the end - but only by zipping up the remote branch and downloading it in small chunks. After reconstituting the branch locally, I was able to do a "bzr pull --remember..." followed by two successive "bzr pull" operations to get the branch synced with its parent. [15:07] vila: (two successive operations were necessary as the first "bzr pull --remember" got cut off by the network going AWOL). 
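The interrupted-pull recipe sketched at [12:08] (re-run "bzr pull" until it completes, relying on pull saving whatever it has already fetched) can be automated. A minimal wrapper, assuming the local branch already exists and the pull location has been remembered; pull_until_done, the retry count and the pause are placeholders, and as discussed above it only converges if each connection window is long enough to land at least one hunk.

    import subprocess
    import time

    def pull_until_done(branch_dir, max_attempts=50, pause_seconds=30):
        """Keep re-running 'bzr pull' until it exits cleanly."""
        for attempt in range(1, max_attempts + 1):
            result = subprocess.call(['bzr', 'pull'], cwd=branch_dir)
            if result == 0:
                print('pull completed on attempt %d' % attempt)
                return True
            print('pull interrupted (exit %d), retrying...' % result)
            time.sleep(pause_seconds)
        return False

    if __name__ == '__main__':
        pull_until_done('.')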
[15:10] jsk_: just for completeness, I'd be interested if you an try a bzr pull http+urllib:// in a temp directory to test the transient error handling [15:10] s/an/can/ [15:11] jsk_: and you should write a nice letter to your isp too :) === kiko is now known as kiko-fud === cprov is now known as cprov-lunch [16:06] mdke: any luck with svn-import? === beuno_ is now known as beuno === kiko-fud is now known as kiko === bac_afk is now known as bac === cprov-lunch is now known as cprov [18:40] hi, quick question. How do I configure bzr to create a branch without a working tree (or create a remote branch in a shared repo w/o making a working tree) [18:45] Well, you could use 'bzr init-repo --no-trees'. [18:46] Vantage: there is also 'bzr remove-tree' to do it after the fact. [18:46] and doing a push to a remote location will not create a working tree there. [18:47] If you already have a shared repo and want new branches to stop creating working trees, touching .bzr/repository/no-working-trees should do it. === mrevell is now known as mrevell-dinner [19:01] Peng: thanks. Is that in a document somewhere? I tried hunting for it on the site, but couldn't find it [19:01] * Peng shrugs. [19:02] Vantage: It's in 'bzr help' if you know where to look. [20:01] moin [20:01] morning lifeless, is my clock right in saying it is 5 am there? [20:02] yes [20:02] He's got pack repos. They let him sleep faster. [20:02] * lifeless flips the bird [20:06] ok, its cooked now ;) [20:09] jam-laptop: so, what do you have on your plate? [20:09] lifeless: right now I'm just going through the review queue and trying to squash all of the Critical bugs [20:09] At least the ones with relatively simple fixes [20:09] yah [20:10] the revisionspec one would be lovely if you could do [20:10] there is a surprising amount of stuff in: http://bundlebuggy.aaronbentley.com/?selected=pending&unreviewed=n [20:10] Which has been approved but not merged [20:11] I'm back stripping commit down to do only what it needs to do [20:12] dirstate lookup is problematic [20:12] sounds good [20:12] I'm considering a basic datatype to factor out - 'sorted dict' [20:12] Any way you can completely stream rather than doing the per-file lookup? [20:13] well, I wrote a patch that layers on iter changes [20:13] even if it was only for the non-merge case [20:13] that would be a pretty big win [20:13] but it still has to probe back in because iter changes hides too much [20:13] the core problem is that commit diffs against an arbitrary parent, not parent 0 [20:14] well, that is why I said the "non-merge" case [20:14] well, I say 'problem'. The reason that iter_changes is not a good fit is that .. [20:14] Certainly I'm aware of the api divergence between multiple parents and _iter_changes [20:15] I really don't want two code paths here [20:15] I just feel it would be a recipe for bugs; and this is not a code area we want bugs in. [20:15] lifeless: I'd like to have 1 code path eventually, it is just an issue of crawling before we walk [20:15] I know we've gone around a few times of _iter_changes_for_commit [20:16] right, a native version of that may well help [20:16] but OTOH I think lsprof is lying [20:16] :) [20:16] lsprof does skew results a little bit [20:17] the cache I added shaves 1/2 second off [20:17] but if lookups really were what it though, it should be more, and it gets a solid hit rate [20:19] s/though/*t/ [20:19] &t dammit :) [20:20] have you measured its hit rate? 
[20:20] I wonder if it is less than you think [20:21] (I don't see why it really would be, but it is something to consider) [20:21] For example, if you have an average of 4 files per directory [20:21] you can only have a 4/5 hit rate [20:21] Though that is still 80% [20:21] I'm testing on good ole moz [20:21] IIRC moz has < 10 files per dir average [20:22] interesting [20:22] At least, we have 5761 directories [20:22] and 49001 files [20:23] Is Moz still using CVS, by the way? [20:23] (49001 + 5761) / 5761 = 9.5 [20:23] Lo-lan-do: primary development on Moz is still in CVS [20:23] Ouch [20:23] I think one of the "advanced branches" are trying hg [20:23] Either FF 3 or Seamonkey .next [20:23] jam-laptop: right [20:23] something like that [20:24] Lo-lan-do: There's an hg mirror of the main CVS repo. I think a couple smaller Mozilla projects are primarily using hg. [20:24] so we could set last_entry_index to 0 when the last_block_index changes [20:24] (Tamarin.) [20:24] lifeless: but don't you "last_entry_index + 1" ? [20:24] So really you want it -1 [20:24] Though the code I saw fails [20:24] if you are the first entry [20:24] or the last entry [20:24] because of IndexError [20:25] Which could actually hurt performance [20:25] because you have an exception stack [20:25] that gets created [20:25] it only fails on first [20:25] not on last [20:25] because I didn't plan on seeding it this way [20:25] I thought I saw [20:25] entry_index = last_index + 1 [20:25] ... [20:25] it is [20:26] And then later it does [20:26] if (key > block[entry-1] and key <= block[entry]) [20:26] which should fail on the last [20:26] well, at least when it goes past the end of the block [20:26] which is the first entry on the next block [20:26] which is a miss anyway [20:27] sure [20:27] you might consider just adding a simple counter [20:27] to see how many queries hit there [20:27] rather than going to the next [20:28] I did in testing it [20:28] it was something I felt the cost of an if() to keep it there was not worth [20:28] sure, I wouldn't leave it in for production [20:28] just to get a feeling for what the hit rate was [20:30] lifeless: but I fully agree that lsprof is not perfect [20:31] I've cover the start case [20:31] covered [20:31] Specifically: http://dpaste.com/23102/ [20:31] I'll send a new patch if you would like to review [20:31] just doing "bzr test-prof" [20:31] shows the "two_functions" case taking 2x as long as the one function [20:32] under lsprof [20:32] I get 38ms versus 161ms [20:32] or 4x [20:32] though the wall clock time stays about 2x [20:33] new patch sent [20:33] yes, 50K function calls with 1 param is about 15ms [20:34] I find timeit is very good at this ;) [20:34] http://dpaste.com/23105/ [20:34] hang on while I swtich to a machine with a browser [20:36] first paste is a plugin [20:36] second paste is the timing [20:37] jam-laptop: If you're looking at critical, 114615 isn't marked critical, but I kinda think it should be... [20:37] bug 114615 [20:37] Launchpad bug 114615 in bzr "Commit can fail and corrupt tree state after encountering some merge/conflicts" [High,Confirmed] https://launchpad.net/bugs/114615 [20:37] lifeless: (you can wget -qO- dpaste.com/123/text) [20:38] yeah, I wanted to follow up with that one and see if it should be critical [20:38] dato: yah [20:38] lifeless: (sorry, s/text/plain/) [20:38] dato: but its easier to move for a bit [20:38] what fits you best :) [20:39] fullermd: does your test case do something significantly different from mine? 
[20:39] (I can see that mine is still reproducible in bzr.dev, though, which might be enough to go on) [20:40] Not sure. Back when you posted yours, the bug was just (or at least, seemed to just) weird out commit once, then work fine. [20:40] My later one was where it started marking files as deleted :( [20:40] I dunno if that's just that we didn't hit that particular case before, or something in the code changed since we first looked at it. [20:42] fullermd: yeah, mine doesn't break "bzr status" like yours does [20:43] Yah. Nobody else would be nutso enough to setup situations like that one :p [20:43] hi all [20:44] (well, 'stat' isn't broken; it's showing exactly what it thinks is happening. I found the bug by finding a couple revs later that the file had been deleted by commit) [20:47] fullermd: true [20:49] In a sense, it's not really data _loss_, since you can just revert the file and get it back. But it's pretty freakin' scary to see happen to your tree... [20:53] jam-laptop: so, what do you think of my updated cache ? Good to go ? [20:55] I don't see how it is going to hit the first entry on the next pass [20:55] considering that you will have [20:55] entry = -1 + 1 [20:55] block[entry -1] [20:55] which is 0 [20:55] which will be an index error [20:55] or it will give you block[-1] [20:55] which is *really* not what you want [20:55] if (entry > 0 and block[entry - 1]) [20:56] more precisely, this is the line [20:56] + if ((entry_index > 0 and block[entry_index - 1][0] < key) and [20:56] which only looks at entry_index - 1 if entry_index is 1 or more [20:57] ok, I missed the nesting [20:57] lifeless: I would prefer: present = (block[entry_index][0] == key) [20:58] Though otherwise I think I approved the previous implementation [20:58] and you only changed 1 think [20:58] thing [20:58] yes you did [20:58] so I'm asking about the delta [20:58] well if the delta is just using -1 instead of none [20:58] but I'll add a () around present for you [20:58] bb:approve [20:58] its assign last_entry_index to -1 when we assign last_block_index [20:58] and change that one if block line [20:59] lifeless: maybe add a comment about why you are using -1? [20:59] Just a simple [20:59] # setting to -1 since we are likely to ask for the first record next [20:59] or something along those lines [21:00] since we just had the conversation [21:00] I understand what is going on [21:00] but I realize I'll probably forget in a couple months [21:00] # Reset the entry index cache to the beginning of the block. [21:01] ok ? [21:01] sounds good [21:01] I don't know if we want a comment where they are defined [21:01] discussing the basic use case [21:01] I'll do that while we're here. [21:01] (going linearly through the data) [21:01] and that this is meant to speed up searching for the *next* entry [21:03] # These two attributes provide a simple cache for lookups into the [21:03] # dirstate in-memory vectors. By probing respectively for the last [21:03] # block, and for the next entry, we save nearly 2 bisections per path [21:03] # during commit. === mrevell-dinner is now known as mrevell [21:04] lifeless: good enough [21:04] go for it [21:05] thanks [21:26] bzr revert --no-backup [21:27] wrong window :) err right keyboard but not redirected to the right PC to be exact :) [21:28] Could be worse. You could be meaning to type that _here_, and do it in the wrong window... 
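The commit-path cache settled on above ([20:24]-[21:03]) amounts to two remembered bisect positions: the block index of the last directory looked up, and the entry index last used inside it, reset to -1 whenever the block changes so the next sequential lookup lands on the block's first record without a bisect. The sketch below is an illustrative reconstruction over a simplified structure (SimpleDirState and its layout are invented), not the actual bzrlib dirstate code; the probe guard is written in the spirit of the condition quoted at [20:56].

    import bisect

    class SimpleDirState(object):
        """Toy model: a sorted list of (dirname, sorted [(name, value), ...])."""

        def __init__(self, blocks):
            self._dirblocks = blocks
            # Simple cache for lookups: last block hit, and last entry within it.
            self._last_block_index = None
            self._last_entry_index = -1

        def _find_block_index(self, dirname):
            if (self._last_block_index is not None and
                    self._dirblocks[self._last_block_index][0] == dirname):
                return self._last_block_index          # same directory as last time
            index = bisect.bisect_left(self._dirblocks, (dirname,))
            self._last_block_index = index
            # Reset the entry index cache to the beginning of the block.
            self._last_entry_index = -1
            return index

        def _find_entry_index(self, block, name):
            entry_index = self._last_entry_index + 1
            # Guarded probe: valid when the previous entry (if any) is smaller
            # and the probed entry is >= the key; otherwise fall back to bisect.
            if (0 <= entry_index < len(block) and
                    (entry_index == 0 or block[entry_index - 1][0] < name) and
                    name <= block[entry_index][0]):
                self._last_entry_index = entry_index
                return entry_index
            entry_index = bisect.bisect_left(block, (name,))
            self._last_entry_index = entry_index
            return entry_index

        def lookup(self, dirname, name):
            # Assumes the path exists, which is enough for the sketch.
            block_index = self._find_block_index(dirname)
            entries = self._dirblocks[block_index][1]
            return entries[self._find_entry_index(entries, name)][1]

    state = SimpleDirState([('', [('README', 1)]),
                            ('src', [('a.py', 2), ('b.py', 3)])])
    assert state.lookup('src', 'a.py') == 2
    assert state.lookup('src', 'b.py') == 3   # served by the +1 probe, no bisect

Walking a tree in sorted order, as commit does, makes both caches hit most of the time, which is where the "nearly 2 bisections per path" saving quoted at [21:03] comes from.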
[21:28] heh [21:29] vila: I thought you were just trying to cover up a post you didn't mean to make [21:29] jam-laptop: check your mail :) I was just reverting the part I talk about :) [21:30] ah [21:30] yeah, I responded to you [21:30] Would it be possible to put a check like that in permanently? [21:30] vila: ^^ [21:31] host empty ? I just have to diagnose the smart smoke test part, I fixed the other cases [21:31] but I didn't touch anything else [21:32] we really need to fix urlparse [21:32] lifeless: agree [21:32] nfs+trace+http+urlib:// [21:32] a simple foo://user@host is enough to make it go nuts [21:33] for any foo that is not a known scheme :) [21:36] vila: for some reason urlparse feels that most protocols are not netloc style [21:36] while *we* feel that most *are* [21:36] and it would be more reasonable to register non-netloc protocols [21:36] I suppose it would be different [21:36] if we wanted to support [21:36] file:foo [21:36] or [21:36] file:/path [21:36] rather than file:///path [21:37] we are fairly strict on what type of url we support [21:38] jam-laptop: yes [21:39] jam-laptop: file:foo is not valid. phone: is :) [21:40] well, I *think* that FF and IE will let you type file:foo in the URL and do something with it [21:40] at the minimum [21:40] file:/foo [21:40] should work [21:40] which *we* do not support [21:41] file:/foo should not work according to std66 [21:43] lifeless: what is std66, that the second time I run across that reference today ? [21:44] yeah, ok, google told me :) [21:46] lifeless: try typing file:/tmp into a browser [21:46] I'm pretty sure it will work just fine [21:47] (I just tested it. and it just renamed it to file:///tmp, but still worked) [21:47] I think you both agree :) [21:47] jam-laptop: crackheadedheuristicsforthewin [21:48] well, browser's have a long culture of DWIM [21:48] considering page reflows [21:48] and "gracefully" handling bad html [21:49] jam-laptop: your browser is telling you in a very subliminal way: "Thou Should Fix #123363" [21:49] bug #123363 [21:49] Launchpad bug 123363 in bzr "selftest pollutes /tmp" [Medium,Confirmed] https://launchpad.net/bugs/123363 [21:50] ubotu, you obviously miss the joke... [21:56] anyhow [21:56] brb [22:03] back === thumper_laptop is now known as thumper [22:24] vila: your last commit was "hhtp" again [22:24] "Make hhtp proxy aware of ..." [22:25] LOL [22:25] time to go to sleep then :P [22:25] vila: sleep well [22:25] need me to sing a lullaby? [22:25] nice vila [22:25] I'm getting pretty good at them [22:25] gnight [22:25] jam-laptop: :) [22:25] thanks for the heads up, I was about to resolve conflicts in smtp_connection, better to that with a fresh mind :) [22:25] jam-laptop: so, do you have time perhaps, to do the revspec bug I filed ? [22:26] its not inside the 'make commit fast' envelope, but its really annoying ;) [22:27] lifeless: well, not tonight :) I'm working hard against bug #114615 [22:27] Launchpad bug 114615 in bzr "Commit can fail and corrupt tree state after encountering some merge/conflicts" [High,Confirmed] https://launchpad.net/bugs/114615 [22:27] jam-laptop: ok [22:27] but I can look at bug #154204 tomorrow [22:27] Launchpad bug 154204 in bzr "revision specs do not lock branches appropriately." [Critical,Triaged] https://launchpad.net/bugs/154204 [22:28] bah corruption [22:28] (that is the one you are talking about, right?) [22:28] yes, it [22:28] ... [22:28] is [22:28] jelmer: thanks for asking. 
I didn't have any more opportunity to try it; I tried it on a faster machine but it gave me ca-cert errors... weird because the machine has the same configuration as this one [22:28] This is an odd case, where because of a conflict you can get some weird tree states [22:28] Like I just renamed a file back to its original location [22:28] and I get: [22:29] ('a', 'n.OTHER', ...), ('r', 'a/n', ...), ('a', ...), ('r', 'a/n', ...) [22:29] Or if your dirstate parser is rusty [22:29] it says that the file a/n.OTHER [22:29] is renamed to 'a/n' in THIS [22:29] and absent in the base [22:30] and renamed to 'a/n' in the other parent [22:30] (aka, it doesn't exist anywhere, but still has an entry.) [22:34] hmm [22:34] so rename is failing to change the 'a' back to 'f' [22:35] well, this is the "temporary" file that the merge conflict generates [22:35] (n.OTHER) [22:35] yah [22:36] so this suggests to me a general rename bug [22:36] so what it *should* do is just recognize it as an absent row [22:36] and nuke it [22:36] that renaming, when it is the last reference to a path, is remove it [22:36] right, we're agreed. [22:36] I don't know that it is the bug under question [22:36] but it is a weird one [22:39] Well, in Internet Years, dirstate is approaching the age where it starts "experimenting" with recreational chemicals, so... [22:45] lifeless: are you trying to get to the point that we don't need to build inventories during commit? [22:45] (or is that quite a way off?) [22:47] I'm going to see about getting rid of the wt inventory during commit [22:47] basis inventory removal is a tad further off [22:50] lifeless: bzr-packs has some impressive performance characteristics. [22:50] lifeless: considerably faster than mercurial in some scenarios, and only slightly slower in others. [22:50] nDuff: Thats excellent news. [22:50] lifeless: any idea how to detect the record_entry_contents API breakage? [22:51] (You now have an extra parameter) [22:51] nDuff: We found a serious performance bug in extracting texts yesterday, 30% win for large binaries, may help you there too. [22:51] I'm trying to update cvsps [22:51] cvsps-import [22:51] and I would like it to work with both pre 0.92 [22:51] and 0.92 [22:51] jam-laptop: Uhm, look at bzrlib.version_info is probably best. [22:51] commit is going to be in flux for at least one more release though [22:52] well, I would technically like to work with bzr.dev pre and post fix, too [22:52] nDuff: (the fix for that is in my repository branch) [22:52] oh, and that api break isn't mentioned in NEWS [22:52] you talk about a *different* api break [22:52] (requiring the root node) [22:52] but not that the number of parameters changed [22:52] lifeless: for the large files, didn't you get that merged? [22:52] heh, sorry [22:52] jam-laptop: it is merged, nDuff is testing packs though [22:52] ah, ok [22:53] lifeless: http://people.ubuntu.com/~robertc/baz2.0/repository/ ? [22:53] yup [22:54] it shows up when applying long delta chains [22:54] basically we did 3 list clones - newlist = oldlist[:] [22:54] which on files with many \n's leads to thousands of object reference-dereference pairs per delta in the delta chain [22:55] so a file that has been altered hundreds of times does millions of unneeded object reference pairs [22:57] jelmer: ok, so now svn-import gives me "Invalid revision id {None} in SvnRepository" [22:59] mdke: what version of bzr-svn? 
afaik that was a bug fixed in 0.4.2 or 0.4.3 [23:00] jelmer: 0.4.1-1 (gutsy) [23:00] nDuff: I'd be interested in knowing where we were slow, so we can fix that too. [23:01] mdke: .debs of newer versions are available in debian, http://packages.debian.org/sid/bzr-svn [23:01] jelmer: should I install a later version? is your repository the most reliable one? [23:02] ah, ok [23:02] jelmer: no idea about the ca-cert error I had on the other computer, I guess? [23:03] mdke: depends on the error [23:03] mdke: prefixing the url with "svn+" may help [23:03] "server certificate verification failed" [23:03] does "svn ls " work? [23:04] jelmer: do you have some debugging notes? [23:04] lifeless: for what specifically? [23:04] yeah, svn ls works [23:04] for e.g. me to point e.g. mdke at [23:04] currently you're really the only person able to help users. [23:06] there's the FAQ that is going to be part of 0.4.4 [23:07] lifeless: let me get all my results together. there are a few places where I was less rigorous about recording them than I should have been. [23:08] lifeless: that, and the list of known bugs [23:09] nDuff: sure, not meaning to put you on the spot in any way. Knowing the envelope you're working on and where it was slow is all. If you find particular ops slow please also feel free to mail me a callgrind for that run [23:09] lifeless: just threw an exception checking out your branch; see http://pastebin.com/m41646d13 [23:09] nDuff: e.g. 'bzr --lsprof-file foo.callgrind command thing thing' [23:09] mdke: so, prefixing the url with "svn+" should fix the error then [23:09] nDuff: ok, thats the bad index in that repo; known issue - I have a fixed copy, one sec. [23:10] nDuff: http://people.ubuntu.com/~robertc/pack-repository.knits [23:10] nDuff: ^ that has all the code too, and is probably the one you pulled initially. [23:11] lifeless: btw, any news on pqm for bzr-gtk/my pqm merge requests? [23:11] sorry no [23:12] haven't forgotten, just had no cycles [23:12] jelmer: right. I'm going to put another one on, leave it overnight and see what happens in the morning [23:13] lifeless: no hurries, just making sure it doesn't drop off your todo list :-) [23:13] kk [23:13] mdke: with a newer version ? [23:13] yes [23:13] jelmer: incidentally, since you're here, I'll just ask - if it all goes well, do I get separate bzr branches for trunk and each svn branch? or a single bzr branch? I don't understand the explanation in bzr help svn-import about what --all and --standalone do [23:14] mdke: yes, you will [23:14] great [23:14] ok, it seems to be getting on with the first branch already, splendid [23:14] mdke: --all is about importing revisions that were part of historic branches that have since been removed [23:14] mdke: --standalone is basically about not using a shared bzr repository [23:14] ah, perhaps I wanted that [23:15] --standalone will blow up the amount of disk space required [23:15] LP doesn't support shared bzr repositories, does it? [23:15] it doesn't, but you can push from a shared bzr repository to launchpad without problems [23:16] so I can do the import now without --standalone and then create separate branches by pushing them to launchpad? [23:16] when using a shared repository you'll also have separate branches [23:16] but yes [23:16] ok. 
sorry if I don't understand the concepts very well [23:17] the aim is to have LP host branches centrally that the team can access, as long as I can achieve that, I'm happy [23:17] mdke: so branch is a line of development [23:17] mdke: repository is physical storage for historical data [23:18] mdke: every branch has to have a repository available to store its data; either one per branch, which we call standalone branches, or one shared across many branches, which we call a shared repository [23:18] very clear [23:18] and I can use my shared repository to push standalone branches to Launchpad? [23:18] or not? [23:19] yes, because branch and repository are orthogonal cocepts [23:19] *concepts* [23:19] a branch is a branch [23:19] so you can push a branch from your shared repo to launchpad, and vice verca [23:19] so when I upload a branch from the shared repo to LP, it will take the historical data relevant to it and store it in a standalone branch? [23:19] yes [23:20] great. [23:20] so essentially the history doesn't get lost either way, whether I import it with --standalone or not [23:20] right [23:21] its just much more efficient in terms of disk storage without --standalone [23:21] thanks, and sorry for being a bit slow, but I wanted to make absolutely sure [23:21] ok, we'll see where it gets to overnight. possibly I'll have to leave it to work for some time :D [23:22] thanks for all the help, both [23:27] interesting. [23:27] jam-laptop: looks like we need a tree.iter_entries_by_dir [23:27] that supports file id filtering [23:27] lifeless: why would an uncommit+recommit take considerably less CPU time than the initial commit? [23:29] nDuff: the recommit step ? [23:29] or the two combined ?. Anyhow, no matter - it shouldn't. Unless the hash cache was empty on the first commit [23:30] lifeless: hmm. initial commit was ~15min of CPU time; the recommit was ~2m wall-clock. wrt the cache in question -- would it be populated by a "bzr status"? [23:30] yes [23:31] 15 mins of CPU to 2 is quite remarkable [23:31] hrm; even when it's needing to repopulate the cache, a bzr status isn't taking nearly 13 minutes. [23:32] thats really quite bizarre, no pun intended. If you find yourself able to reproduce, a callgraph of it would be wonderful [23:39] morning all [23:41] moin [23:41] I replied to your graph patch review
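The uncommit-then-recommit speedup puzzled over at the end ([23:27]-[23:31]) comes down to the stat/sha cache also mentioned earlier ([01:30], [03:37]): a file is only re-read and re-hashed when its stat fingerprint has changed, so a prior status (or the first commit itself) leaves the next commit with mostly cache hits. A toy version of that idea; ToyHashCache and its fingerprint are invented for illustration, not bzrlib's actual hash cache.

    import hashlib
    import os

    class ToyHashCache(object):
        """Map path -> (stat fingerprint, sha1); re-hash only when the stat changes."""

        def __init__(self):
            self._cache = {}

        def _fingerprint(self, path):
            st = os.stat(path)
            # Size and mtime stand in for the fuller fingerprint a real VCS keeps;
            # a real implementation also distrusts files modified "just now".
            return (st.st_size, st.st_mtime)

        def sha1(self, path):
            fingerprint = self._fingerprint(path)
            cached = self._cache.get(path)
            if cached is not None and cached[0] == fingerprint:
                return cached[1]                      # cache hit: no file read
            with open(path, 'rb') as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            self._cache[path] = (fingerprint, digest)
            return digest

    if __name__ == '__main__':
        import tempfile
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write(b'hello\n')
        cache = ToyHashCache()
        print(cache.sha1(f.name))   # reads and hashes the file
        print(cache.sha1(f.name))   # served from the cache, no read

The first pass pays for hashing every file and a second pass over unchanged files is nearly free, which is the shape of the 15-minute commit versus 2-minute recommit reported above; dropping the cache, as "bzr add" did to the stat cache in 0.91 per [03:37], forces everything to be re-hashed.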