[00:01] working adsl FTW
=== lifeless_ is now known as lifeless
[00:06] Are there instructions on how to add bzr to the SCM in XCode?
[00:16] spiv, thanks for your mail to sg
[00:27] Zectbumo, not that i know of
[00:27] i believe there are some people who do use it there though
[00:27] a post to the list may shake them out
[00:27] where is the list?
[00:27] bazaar@lists.ubuntu.com
[00:27] or you could post where you've got up to in adding it
[00:28] thanks poolie, I'll try that
[00:28] lifeless: I would like to know more of this 'working adsl' of which you speak.
[00:30] the electrons move
[00:30] from node to node
[00:30] on wires copper
[00:30] like autumn leaves
[00:31] in the summer time
[00:31] igc: ping
[00:31] the living is easy
[00:31] hi lifeless
[00:31] igc: do you want to talk about this plugin stuff
[00:31] igc: or just rapid fire emails ?
[00:32] as in auto-loading of plugins?
[00:32] I'm yet to look at it to be honest
[00:32] as in I b:rejected your patch - you may not have seen that
[00:32] ah - no I hadn't seen that
[00:32] * igc looks
[00:33] lifeless, are you in need of relief from skull drills?
[00:33] poolie: no drilling so far today
[00:33] poolie: if it starts up I'll ring you and run for a train
[00:33] igc: you seem to be reimplementing bzr 0.8's hook system, which is sad.
[00:33] igc: so I'd like to understand what you want to achieve, and hopefully suggest a smaller way to do it.
[00:34] lifeless: fire away
[00:34] uhm, you're the one with a mission in this area :) how about you describe what you want to achieve
[00:35] lifeless: I want an easy, scalable way of configuring command line scripts as hooks ...
[00:35] a few people have rejected Bazaar for Hg on this feature alone
[00:36] igc: Ok. I think this is a rather restrictive and difficult to work with medium, so I don't want our core to get cluttered; we can do this entirely as a plugin.
[00:36] I don't care a lot about the Python hooks stuff because we support that already
[00:36] theres a /lot/ of friction between object-model and command line
[00:36] and requiring that every object model hook be command line exposed - urgk. Not to mention locking coherency and performance.
[00:36] and windows.
[00:37] * Toksyury1l wonders absently if there are any plans to rewrite bzr in a compiled language someday in the future
=== Toksyury1l is now known as Toksyuryel
[00:37] Toksyuryel: python can be compiled FWIW :)
[00:37] ok - so we can make it selective as to which hooks can exposed if you wish
[00:37] Toksyuryel: but more seriously we have rewritten the inner loops in C already
[00:37] oh, sweet ^^
[00:37] s/can/get/
[00:37] Toksyuryel: (Not all inner loops, but selected critical ones)
[00:39] spiv: you seem to put success/failure before body content in replies; this doesn't work in the general case
[00:39] * Toksyuryel nods "well it is nice to know that it's even being attempted ^_^ bazaar is a great system, and I think being available in a version with far less overhead would get a lot more people interested in using it :)"
[00:39] lifeless: chunked replies can be interrupted with an error
[00:39] python is already a compiled language
[00:39] but this isn't a helpful response :)
[00:39] spiv: smells like duplication to me
[00:40] spiv: I would be strongly inclined to have [BODY] STATUS
[00:40] So most errors would tend to always have no body, then the error status?
[00:41] That seems reasonable.
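To make the framing concrete: a sketch of the [BODY] STATUS layout being proposed here, with hypothetical helper names and wire details; an illustration of the idea, not bzrlib's actual smart-protocol code.

    def encode_response(body_chunks, error=None):
        """Frame a reply as zero or more body chunks plus a trailing status.

        A pure error carries no body at all; an error that strikes
        mid-stream simply truncates the body and the trailing status
        reports it, with no duplicate success/failure marker up front.
        """
        frames = []
        for chunk in body_chunks:
            frames.append('b%d\n%s' % (len(chunk), chunk))  # body chunk frame
        if error is None:
            frames.append('S\n')             # success trailer
        else:
            frames.append('E%s\n' % error)   # error trailer, body-free
        return ''.join(frames)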
[00:41] well the idea I have is that pure errors have no body
[00:41] my understanding is that compiling python would still lag behind something like C, and could potentially introduce strange compile-time errors that interpreted python wouldn't experience
[00:41] HTTP's 'errors are html pages' is nasty
[00:41] * Toksyuryel could be wrong about this
[00:41] Right, I agree.
[00:41] I was just referring to the case where an error doesn't occur until the body has already begun.
[00:42] That does sound like a good simplification to me, thanks.
[00:46] except in the case of http 1.0 indeterminate length replies, you can just treat a connection which closes unexpectedly as an error.
[00:47] igc: reviewing your patches now
[00:47] igc: I get the impression that you are cloning hg's style here or something, but it has a smell about it that is quite un-bzr-like.
[00:47] igc: I appreciate that some folk may say 'we have to have shell hooks' - but lets not copy an inferior system to give them their feature.
[00:48] (if hg was superior, we wouldn't be hacking on bzr ;))
[00:48] thanks lifeless. The design is meant to be better than Hg's fwiw :-)
[00:48] and it is
[00:48] (not saying it can't be better though)
[00:48] Right now I have a pretty strong conviction that this all belongs in a plugin
[00:49] the shell command wrapper stuff is probably good to put in the core once the code duplication is removed
[00:49] oh, I missed another spot of duplication, the unit test support for running real sub processes.
[00:49] the config stuff made me wince; I've replied to it
[00:49] but only comment because I may be being conservative
[00:57] igc: right, the main one I've looked closely at as well. I think you got the wrong end of the problem and pulled the thread there
[00:58] igc: if you instead start with adapting a config driven shell interface into the existing hook structure, with no changes to the current api (it doesn't need any) I think you'll find it works a /lot/ better and more smoothly.
[00:58] (And yes, I do mean start over on the third patch - sorry)
[00:58] * igc digests that
[01:00] hint: If shell hooks are configured in *Config, then *Config knows the hook exists and should register it
[01:02] the current precommit hook also has a performance problem
[01:02] its recreating the work of the commit by calling changes_from
[01:03] anyhow, I need caffeine apparently, so I'll be back in a few minutes; I'm very happy to have a voice call if you think more bandwidth will help
[01:37] igc: you have mail; I'm not arguing against having shell scripts, I'm arguing against an implementation that is problematic (and makes changes to core it should not)
[01:37] lifeless: understood
[01:38] I'm ok with going back to [] instead of init_hook
[01:38] the tricky bit I don't see ...
[01:38] is how your auto-register approach can work when the set of hooks ...
[01:38] is dependent on the *branch*
[01:39] That's the important api change ...
[01:39] get_hooks(hook_name, branch) instead of hooks[hook_name]
[01:57] igc: when the branch reads its config file it can add hooks, can it not ? And clean them up when unlock() drops the lock count to 0.
[01:57] igc: -> no api change needed :]
[01:57] * igc thinks
[01:58] Branch.hooks is branch independent, its global.
[01:58] sounds ok (which is great)
[01:58] We can *either* make it branch specific, which has some possible benefits, but some downsides as well.
[01:58] OR we can do what I just described, which will also work well.
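A sketch of the adaptation lifeless is suggesting: read a shell command out of the branch's configuration and register it as an ordinary Python hook, leaving the existing hook API untouched. The option name is hypothetical, and the hook call follows bzr's hook API of this era from memory, so treat the details as assumptions.

    import subprocess

    from bzrlib.branch import Branch

    def install_shell_hooks(branch):
        # 'post_commit_shell_hook' is a hypothetical option name; the
        # normal config stack already makes locations.conf override
        # branch.conf, which is the behaviour igc asks for above.
        command = branch.get_config().get_user_option('post_commit_shell_hook')
        if command is None:
            return

        def run_shell_hook(*hook_args):
            # The wrapper, not the core, crosses the object-model /
            # command-line boundary.
            subprocess.call(command, shell=True)

        Branch.hooks.install_hook('post_commit', run_shell_hook)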
=== asak_ is now known as asak
[01:59] lifeless: I'd probably prefer making it branch specific but ...
[01:59] I'll try the other way if you'd prefer
[02:00] as long as locations.conf overrides branch.conf ...
[02:00] I'm happy enough
[02:02] igc: I am not wedded to it being global (well, more precisely, Branch.hooks must stay global, but you could clone the current global hooks with a deep copy to aBranch.hooks, which is then local and can be tweaked by the branch. But I think thats a mistake due to interactions with bound branches, which is why I think singleton is better in the first place.)
[02:03] ok - let me get the run-shell-command patch resubmitted and I'll go with your 'stay global' approach
[02:04] lifeless: thanks for the reviews btw - much appreciated
[02:08] np
=== jam_ is now known as jam
=== Verterok is now known as Verterok_
[03:36] abentley: ping
[03:36] lifeless: pong
[03:36] abentley: is there a trivial method to apply an inventory delta to a tree: to actually do the tree rearrangements requested
[03:37] Yes, WorkingTree.apply_inventory_delta
[03:37] abentley: that only alters the inventory
[03:38] abentley: I want to shuffle directories on disk; write file content out to new files and symlinks etc.
[03:38] True, but that's all the inventory delta was meant to describe.
[03:38] abentley: I know. the use case is 'I have a partially committed tree - all the content is in but not the revision id.' Now I want to extract this to disk. Hmm, perhaps export will do it
[03:39] I think the best option would be to convert it to _iter_changes format, which revert uses.
[03:41] what's the eta on the RC?
[03:41] abentley: no need, export() will work just fine
[03:41] I brain farted for a second :)
[03:42] lifeless: So I've got an implementation of iter_files_bytes that reads each pack only once.
[03:42] Now 50% of the time is spent reading the index to determine the build-dependencies.
[03:43] abentley: cool; I guess that means its faster ?
[03:44] Yes, it's faster for the checkout-from branch case when there's no accelerator tree.
[03:45] But it's about half the speed of checkout using an accelerator tree.
[03:45] So I'm looking for ways to reduce the cost of index lookups.
[03:45] Any ideas?
[03:48] paste your index using code
[03:49] but in general: less trips to the index layer - better. And we need to do the next iteration of the index layer too - I have a half prepped branch working on that
[03:53] http://paste.ubuntu-nl.org/54807/
[03:55] abentley: its pretty dense; I guess thats doing inventory and text in a single pass ?
[03:56] abentley: or is it because its part of the refactored knit layer, which inventory and revision still use ?
[03:56] not saying its bad btw
[03:56] Yes, knit_kind can be "file", "inventory", "revision", "signature".
[03:56] k. I'm a little confused as to what streams are, I suggest a comment or something
[03:57] stream_id would be a better name; they are tuples of (file_id, version_id)
[03:57] thats a component isn't it ?
[03:57] so I would say component_key
[03:58] 'the name of a component'
[03:58] Well, they are actually the names of the desired streams, i.e. fulltexts.
[03:59] fulltext_keys perhaps ?
[03:59] I use key fairly consistently elsewhere when talking about a tuple of ids
[03:59] Could work.
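For readers without the paste: the shape of the build-dependency walk under review is roughly the following, batching each wave of keys into a single iter_entries() call so the number of trips into the index layer tracks the delta-chain depth rather than the number of texts. This is a guess at the structure, not abentley's actual patch, and the position of the compression-parent reference list is an assumption.

    def find_build_dependencies(index, keys):
        """Map each requested key to the compression parents it needs."""
        build_details = {}
        pending = set(keys)
        while pending:
            new_pending = set()
            # one index trip per wave, not per key
            for (idx, key, value, references) in index.iter_entries(pending):
                compression_parents = references[1]  # position assumed
                build_details[key] = compression_parents
                for parent in compression_parents:
                    if parent not in build_details:
                        new_pending.add(parent)
            pending = new_pending
        return build_details

Note that the membership test against build_details plays the role of the requirements set discussed just below, updated once per wave instead of inside the inner loop.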
[04:00] trivial note: while pending_versions: is faster than while len(pending_versions) > 0: and IMO just as readable
[04:00] There's currently a lot of pointless conversion between various tuple formats, so I'm not sure exactly what the inputs will be when I'm done.
[04:01] one thing you could do to reduce overhead
[04:01] make pending_versions a set of the native keys
[04:01] that will reduce conversions
[04:02] if you had per-kind build_maps that would do it. But maybe that doesn't fit further out
[04:02] another alternative would be (kind, key) tuples
[04:02] where key is (revision_id,) or (file_id, revision_id)
[04:02] less conversion - just wrapping
[04:03] one suggestion here - move requirements.update(new_requirements) out of the inner loop
[04:03] instead, when you do pending_versions = new_pending
[04:04] also do requirements.update(new_pending)
[04:04] but otherwise that seems fine - you spider out nicely
[04:04] The output is expected as (kind, file_id, revision_id), so it seemed simplest to retain that format.
[04:04] so I suspect you'll be paying bisect() costs in index.py
[04:04] thats something that is possibly optimisable without changing the disk format.
[04:04] But your (kind, key) notion is interesting.
[04:07] I see what you mean about requirements.update().
[04:07] sets are fast its unlikely to be a problem, but little bits add up :)
[04:08] I'll probably see how iter_all_entries performs, but it's questionable to use that IRL.
[04:08] iter_all_entries skips the bisection
[04:08] and I know the bisection to be a problem
[04:08] Would stink rather badly for many cases, though.
[04:11] Yeah, looks like ~all of the iter_entries time is spent in the bisect code.
[04:11] feel free to improve it :) but your current patch is orthogonal I think
[04:13] * igc food
[04:13] you won't be slower than the current code
[04:13] lifeless: This would probably require me to have more than a superficial understanding of bisection...
[04:13] abentley: fair enough ;)
[04:23] abentley: I can describe the theory in an email if the code is too obtuse
[04:24] lifeless: The bisect code is slow, but iter_all_entries is slower.
[04:32] abentley: what project? bzr?
[04:32] Yes, doing a checkout of Bazaar takes 8 seconds with iter_all_entries, and 5 with iter_entries.
[04:32] heh
[04:32] so, thats size_history issue showing up :)
[04:33] a project with less deep history might see a different ratio
[04:33] something like mozilla iter_entries will be faster ;)
[04:36] We don't have a way of storing arbitrary metadata on revisions, right?
[04:37] how does --fixes do it?
[04:37] I thought it had a particular field.
[04:38] lifeless: Especially mozilla with only one commit
[04:39] fullermd: we do, its abused
[04:39] I regret us adding it :(
[04:39] abentley: well moz with one commit, iter_all will be faster
[04:39] abentley: moz with all its commits, iter_entries will be faster
[04:39] Oh, we do? Hm.
[04:39] Oh, gotcha.
[04:39] I thought the whole point of such things was to abuse them :)
[04:40] * lifeless abuses fullermd
[04:40] Whip me. Beat me. Make me maintain AIX.
[04:40] now thats cruelty
[04:40] Ah, but if we made him rewrite AIX in Perl...
[04:41] I could rewrite it in APL, and it'd probably be more comprehensible than the original...
[04:41] Plus you could use the insanity defence if you happened to go on a killing spree afterward.
[04:42] Did I mention that my first sysadmin job was on AIX?
[04:42] * fullermd loads another magazine.
[04:43] Anyway, some damn fool opened up Yet Another VCS Thread on $MAILING_LIST, and it got me thinking about the question "How do I find foo/bar/baz.c r1.1384.1.7 post-conversion".
[04:44] Seems like stashing in random-metadata-slot at conversion time is the least evil answer.
[04:44] fullermd: that can be done in the revision properties list, but it makes the revision object rather large :|
[04:45] fullermd: if we're talking about CVS, we don't have to worry about renames, so we can use the filenames as file_ids.
[04:45] You could use r1.1384.1.7 as part of the revision-id.
[04:46] abentley: N files in a commit though
[04:46] Urg. That would be Really Ugly(tm) when you have a few dozen files in a commit...
[04:46] Okay, clearly I haven't given this enough thought.
[04:47] Of course not. It involves CVS. Your mind shunts itself aside in self-defense.
[04:47] Though it could be argued that CVS has only one file in each commit...
[04:47] Well, yeah. But converting it that way would be a good way to ensure no project ever gets far enough to ask bzr the question...
=== asac_ is now known as asac
[05:08] bbiab
[05:34] lifeless: Okay, I looked at the bisect_multi_bytes code and it kinda made sense. What's wrong with it?
[05:34] abentley: its slow
[05:35] so there are two separate sets of bisection occurring
[05:35] But the algorithm is suitable?
[05:35] abentley: one is down at the disk level where we are finding the location of the key and reading its data (and data adjacent to it).
[05:35] abentley: this bisection is completely fine AFAICT, reading the right data, and not slow.
[05:35] abentley: the second set of bisection is occurring in memory
[05:36] where we have a list of ranges that have been parsed
[05:36] in a file like |-----------------------------|
[05:36] we mark the bits we've parsed:
[05:36] |+++----------+++-----+++--+--+++-------|
[05:36] (for instance) - thats a file after looking up one key 2/3's up the file or so
[05:37] --lsprof suggests that we're examining that map of parsed regions expensively
[05:37] it may be that the map needs to be represented in memory in some better fashion
[05:38] or just that the way bisect is used/how the results pan out can be improved
[05:40] One option that might improve performance is if we can turn it into a coroutine.
[05:40] Because I suspect it's got keys cached that we need to find on the next lookup.
[05:41] Even if not, the position data could be quite useful, I'd expect.
[05:41] if by key you mean the (file_id, revision_id) tuple stuff - that should not trigger a lookup in the parsed regions, unless the key is not fully parsed
[05:42] remember that a key on disk is [key, present_flag, references(pointers), byte_value]
[05:42] so if I read the content for key X, I need to issue a second readv request to resolve its references into strings
[05:42] I mean that when I'm building a list of dependencies, I have to call GraphIndex.iter_entries() multiple times.
[05:42] right
[05:42] And I assume there's no caching.
[05:42] there is caching
[05:43] iter_entries is slow because of the in memory data structure that represents what parts of the file we have parsed
[05:44] Where is the caching?
[05:44] self.bisect_nodes?
[05:44] self._nodes_by_reference is part of it
[05:45] sorry
[05:45] _keys_by_offset
[05:45] that maps a pointer to a parsed key
[05:45] self._parsed_key_map is the bitmap of the parts of the file we have cached
[05:47] Hmm. Would we want some kind of tree structure?
[05:48] I don't know :). I'm paging this in as we speak.
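A toy model of the in-memory parsed-region map lifeless is describing, as a standalone sketch (the real structure lives in bzrlib's index layer and differs in detail): a sorted list of (start, end) byte ranges, probed with bisect on every lookup, which is exactly the work --lsprof shows getting expensive.

    import bisect

    class ParsedRegionMap(object):
        """Track which byte ranges of an index file have been parsed."""

        def __init__(self):
            self._ranges = []  # sorted, non-overlapping (start, end) tuples

        def mark_parsed(self, start, end):
            bisect.insort(self._ranges, (start, end))
            # a real implementation would coalesce adjacent ranges here

        def is_parsed(self, offset):
            # Find the last range starting at or before offset; doing
            # this repeatedly over a growing list is the hot spot.
            i = bisect.bisect_right(self._ranges, (offset, float('inf')))
            if i == 0:
                return False
            start, end = self._ranges[i - 1]
            return start <= offset < end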
[05:48] mainly I want to ensure you see the bit that is a problem (in memory data structure) vs the bit that is ok (the actual IO's we perform)
[05:48] it may be the structure is ok but we don't use it well
[05:48] or it may be that the structure is not ok
[05:48] I do see that.
[05:49] cool
[05:55] * abentley is off to sleep
[05:59] night
[07:09] Hello bzrrrrr!
[07:23] hello matthew
[07:41] "bzr: ERROR: bzrlib.errors.BzrCheckError: Internal check failed: file u'/etc/resolv.conf' entered as kind 'file' id 'resolv.conf-20071108133634-j0pcs1k03mehwsbd-106', now of kind 'symlink'"
[07:42] what's the rationale for having an error for that instead of just versioning the change?
[08:20] New bug: #189182 in bzr-hg "unexpected keyword argument 'find_ghosts' in hg-0.9.5" [Undecided,New] https://launchpad.net/bugs/189182
[08:51] bob2, you shouldn't be seeing that, we're meant to fix it at another level
[08:51] unless you're using a very old bzr?
[08:55] gotta be old bzr I would have thought
[09:15] hi all
[09:18] can anyone help? i have bazaar installed on my laptop and my desktop, i'm trying to push the repo i have on my laptop into my desktop bazaar repo using the bzr+ssh command but i fail, any clues ?
[09:23] .bzr.log is a good place to look for clues
[09:24] on both sides rolly ?
=== weigon__ is now known as weigon
[09:25] on the remote side, sure. If you are successfully establishing an ssh connection
[09:26] let's see if this file exists ;)
[09:26] "but i fail" doesn't say much about your problem, so I can't say
[09:28] the ssh connection succeeds, but the bzr smart server says something about permissions
[09:29] bzr push bzr+ssh://username@myhostname/Projet, error : Generic bzr smart protocol error: Permission denied: "/Projet": [Errno 13] Permission denied: '/Projet'
[09:29] the Projet dir is in my home on the desktop
[09:30] and when i take a look at the /var/log/secure, i can see the successful ssh connection
[09:31] xinity_mbp: /Projet is at the root of your filesystem, are you logging in as root ?
[09:31] nop weigon
[09:32] the Projet dir is in the ~username dir
[09:32] which means ~username/Projet
[09:32] i found this on .bzr.log
[09:32] the "permission denied" smells like it tries to access /Projet instead of ~username/Projet
[09:33] you need to put the full path in there
[09:33] bzr+ssh://username@myhostname/home/username/Projet or whatever it may be
[09:33] http://cl1p.net/xinity_bzr/
[09:34] ah better now :)
[09:35] I love bzr, even when it crashes :D It's so easy to tell what's wrong
[09:37] rolly: what you love is, in practice, Python. Most Python programs show this attribute: "Easy to debug" ;)
[09:39] yeah it's true
[09:39] seems to be working, here is my last log :
[09:39] I have 0 python experience and I was able to patch a bzr module
[09:40] rolly: And now you need to be a self-employed contractor for Germany's biggest mail portal, and you can charge premium prices for patching (complex and hard Python) modules without understanding them :-P
[09:42] Well, I can certainly provide the lack of understanding
[09:43] Oh, sorry, that was a completely inappropriate association I had.
[09:44] But this guy managed to sit around for more than 2 months, providing exactly 3 one-line changes replacing constants, as a contractor. That's upmanship.
[09:44] those must have been some pretty important 3 lines :p
[09:45] (for him to charge premium)
[09:45] xinity_mbp: I think a newer client version would have reported that error properly.
[09:45] anyways, past my bedtime. Goodnight all
[09:46] (or perhaps a newer client *and* server version)
[09:46] Using fetch logic to copy between RemoteRepository(bzr+ssh://xinity@myhostname/home/xinity/Projet/.bzr/)(remote) and KnitPackRepository('file:///Volumes/Home/Users/xinity/Documents/Exploitation/Projet/.bzr/repository/')
[09:46] here is the last log i have
[10:05] ok seems to be working, one last thing i didn't catch
[10:06] bzr branch Exploitation bzr+ssh://xinity@myhostname/home/xinity/Projet/Exploitation sends the .bzr dir, but not all the files, did i miss something ?
[10:07] That's expected. "bzr branch" only builds working trees for local branches.
[10:07] But if you're just sharing the branch with other people, that doesn't matter. You can "bzr branch" and so on with just the .bzr directory.
[10:12] ok so how to pull the whole tree to the remote instance ?
[10:13] dato: Hi!
[10:14] yes?
[10:15] I tried to use your VimEditorIntegration plugin and found out that it does not work on Vim 7.1
[10:16] cvv: showing the diff no longer works with recent bzrs. maybe you want to try `bzr ci --show-diff`
[10:18] What do you mean? bzr diff works perfectly in bzr 1.0
[10:19] cvv: what does not work for you in 7.1?
[10:20] I do not see the diff. both panels have the same text, but `bzr diff` reports more differences
[10:21] cvv: try running `bzr commit --show-diff`
[10:22] bzr: ERROR: no such option: --show-diff
[10:23] ok
[10:23] then it's not that
[10:23] well
[10:23] er, probably is
[10:23] I never used the --show-diff option in the past
[10:23] it's a bit new
[10:24] dato: "bzr diff"
[10:24] cvv: so, if you do `bzr commit`, and the editor starts, and you go to another terminal, and you do `bzr diff` there, do you get an error about the repository being locked?
[10:24] dato: you should not.
[10:25] Qhestion: of course you should
[10:25] no.
[10:25] diff is a readonly task, commit is a readwrite task.
[10:25] but there is a handling for that
[10:25] Qhestion: sorry, but the facts support me
[10:26] (unless the behavior was changed in bzr.dev in the last two days)
[10:26] ok right
[10:26] you can't diff while committing
[10:26] ok you are right.
[10:29] bzr diff
[10:29] bzr: ERROR: Could not acquire lock [Errno 11] Resource temporarily unavailable
[10:30] cvv: right. you need bzr 0.91 or newer to be able to see the diff in the editor, I'm afraid.
[10:30] I just committed files, where were they committed to? Is everything contained in the same project directory?
[10:31] Coke: try "bzr info"
[10:32] Ok. I will now upgrade to bzr 1.0
[10:32] spiv: so everything is a branch?
[10:33] Coke: I'm not sure what you mean by "everything" :)
[10:33] spiv: every repository
[10:33] No, in bzr we use "repository" to mean something different to a "branch".
[10:33] spiv: what would be considered a "local copy"
[10:34] A repository is where the revisions of one or more branches are stored.
[10:34] (Often a repository is co-located with a branch, and so holds just data for that branch)
[10:34] spiv: ah, if we are several networked developers working on the same project we have a central repository and we each have our own private branch?
[10:35] cvv: good. in that case, you just use `bzr commit --show-diff`
[10:35] Well, it's usually easiest to ignore repositories.
[10:35] spiv: good.
[10:35] The things you work with are branches most of the time.
[10:35] (Or perhaps a checkout of a branch)
[10:36] spiv: so, each developer has a branch and then there's a branch at our project server too?
[10:36] So when you do "bzr commit" it creates a new revision on that branch.
[10:37] You can work like that. You can also have just one branch, and have each developer just have a checkout of that branch.
[10:37] (i.e. very similar to how you'd use SVN or CVS)
[10:38] spiv: ok. what's the difference between a checkout and a branch?
[10:38] A checkout is also called a "working tree".
[10:38] So it's the files on disk that you work with.
[10:38] A branch is a line of development. A series of revisions.
[10:39] spiv: ok, then I guess it works sort of the same way like SVN in that respect
[10:39] my oh my i'm lost :(
[10:39] So if you have a checkout of a branch, and someone else commits to the branch, then your checkout would be out-of-date.
[10:39] So you wouldn't be able to commit to that branch with that checkout until you did a "bzr update".
[10:39] Argh! There's no bash autocompletion for bzr yet!
[10:40] spiv: ah, just tested checking out. ok. it's very similar to cvs
[10:40] (Although you would still be able to make "local commits")
[10:40] If you're coming from cvs or svn, using checkouts is a pretty good way to ease into bzr.
[10:41] Most people using bazaar get addicted to making branches though :)
[10:41] spiv: yes. it does seem tempting.
[10:41] Often a good way of working is to make a new branch for every distinct feature you're developing.
[10:42] spiv: different thinking
[10:42] spiv: branching and merging is a pain in the ass with CVS and SVN
[10:42] Exactly.
[10:48] sounds weird, i followed the pdf doc to build a central repo
[10:48] when i do a bzr ls bzr+ssh.... it shows a list of files
[10:49] but when i connect remotely, i can't see those files
[10:49] spiv: if I work on my branch and change files, can I revert or is the branch "hot"?
[10:50] xinity_mbp: 'bzr ls' lists files that are versioned, not what is actually on the filesystem.
[10:50] spiv: lemme rephrase that. :) are the changes stored in each branch?
[10:50] Coke: yes, each branch keeps a full history
[10:50] this is very confusing for xchat having someone else called mbp :)
[10:51] spiv: so I can relax and work on my branch (which happens to be the branch root location) and not worry about not being able to revert certain changes?
[10:51] Coke: if you have the bzr-gtk plugin installed, "bzr viz" does a great job of showing you the history in a pretty way
[10:52] spiv: I hate pretty!
[10:52] :)
[10:52] Odd_Bloke: how to send those files to the remote repo ?
[10:52] xinity_mbp: you could install the "push-and-update" plugin
[10:53] spiv: let's try then ;)
[10:53] Coke: I don't quite understand your concern. Perhaps you should clarify what you mean by "revert"?
[10:53] Coke: There's a "bzr revert" command, but I'm not sure if it has the same meaning as what you want or not.
[10:53] spiv: I edit a file and save, oops! made a mistake, can't remember what I've changed, go back to the last committed revision
[10:54] spiv: no, more like "revert" in the english sense
[10:54] spiv: in svn I'd just remove the changed file and run svn update
[10:54] "bzr revert the_file" will revert it to the last committed revision.
[10:54] spiv: is it doable for the entire branch, including subdirectories?
[10:54] (and "bzr revert" will revert the entire working tree)
[10:54] spiv: ah
[10:54] Well... I'm all good now.
[10:55] xinity_mbp: The files are already there, in version control. The only reason they aren't on the filesystem is that they haven't been checked out.
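The concept behind the plugin Odd_Bloke points xinity_mbp at, reduced to a sketch: a push over bzr+ssh only updates the branch (the .bzr directory), so the remote working tree needs a separate update before the files appear on disk. Hypothetical helper, not the plugin's actual code.

    import subprocess

    def push_and_update(user_host, remote_path):
        """Push to a remote branch, then refresh its working tree.

        remote_path is an absolute path on the server, e.g.
        '/home/xinity/Projet'.
        """
        subprocess.check_call(
            ['bzr', 'push', 'bzr+ssh://%s%s' % (user_host, remote_path)])
        # 'bzr push' does not touch the remote working tree, hence:
        subprocess.check_call(
            ['ssh', user_host, 'bzr', 'update', remote_path])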
[10:55] There's just one more thing, where can I get free project hosting that uses bzr instead of svn?
[10:55] Coke: revert changes the working tree, not the branch, btw.
[10:55] Coke: Launchpad.
[10:56] spiv: my branch is my working tree. :)
[10:56] Odd_Bloke: sorry for my noob attitude, should i do a checkout on the remote side ?
[10:56] xinity_mbp: Yeah. That's what the push-and-update plugin does for you. :)
[10:56] Odd_Bloke: thanks.
[10:56] Odd_Bloke: cool ;)
[10:57] Odd_Bloke: I don't get it, is that an URL?
[10:57] ah, .net, of course. It's a network, not an .org.
[10:57] Coke: you can host bzr branches on anything that supports sftp and http
[10:58] Coke: Your branch and your working tree are conceptually different. The branch is to do with history, whereas your working tree is just whatever files happen to be in the directory you're using bzr to manage.
[10:58] (or ssh)
[10:58] luks: that's what I will do with the projects at work, but I have a few things hosted on SF.net
[10:58] luks: want to switch
[11:05] Odd_Bloke: so after that if i use bzr push it should update the remote repo right ?
[11:07] xinity_mbp: Depends what you mean by 'that'. :p
[11:08] Odd_Bloke: i've installed the push and update plugin, bzr plugins finds it
[11:08] xinity_mbp: I've never used it myself, so I don't really know.
[11:08] Try and find out. :)
[11:09] Odd_Bloke: but when i try to do a bzr push it says something like : This transport does not update the working tree of: .... :(
[11:11] xinity_mbp: OK, it looks like it adds a push-and-update command, so try 'bzr help commands' and look for that.
[11:12] Odd_Bloke: nothing about push and update :(
[11:15] xinity_mbp: I've just installed it locally and I have a push-and-update command...
[11:15] Odd_Bloke: use bzr help commands ?
[11:15] xinity_mbp: Both there and when I try to run it.
[11:18] Odd_Bloke: ok bzr help push_and_update says : Helper functions for pushing and updating a remote tree.
[11:18] Odd_Bloke: but nothing about how to use it :(
[11:19] xinity_mbp: Use it exactly as you would push.
[11:20] Odd_Bloke: i did but it still says :
[11:23] Odd_Bloke: bzr commit works perfectly, but bzr push says : This transport does not update the working tree of:
[11:24] xinity_mbp: the way the plugin works, I suspect it'd still print that message
[11:24] xinity_mbp: but it may have updated the working tree anyway
[11:24] Odd_Bloke: but the new file i've added locally isn't in the remote tree, i still need to do a checkout on the remote side
[11:25] Odd_Bloke: bzr update zorry
[11:39] xinity_mbp: you need to use push-and-update, not push
[11:40] (I think)
[11:40] luks: not anymore
[11:40] oh
[11:40] push-and-update could use a README file
[11:46] any chance anybody has any special tricks for getting pylint to understand lazy_import?
[11:47] mtaylor: s/^lazy_import/#\0/ and s/^""")$/#\0/ ?
[11:48] yeah... there's that.
[11:48] but then I wind up forgetting to uncomment it before I commit
[12:18] anybody ever touch the compiler module?
[12:20] once... years ago... after that the guys in white gave me my own room with padding :P
[12:21] mtaylor: I've played with it a little, but nothing especially in-depth.
[12:21] Odd_Bloke: I'm trying to inject something into the tree...
[12:21] Odd_Bloke: got it, it works now
[12:21] ;)
[12:22] it's causing me to want to lay in a padded room :)
[12:22] :P
[12:22] mtaylor: Way over my head, sorry. :)
[12:22] I was just trying to get pretty representations of stuff.
[12:22] xinity_mbp: Awesome. :)
[12:22] yeah!
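For context, the idiom mtaylor is fighting: bzrlib's lazy_import takes the import block as a string, so static tools like pylint cannot see the names it binds. Odd_Bloke's substitutions comment out the call and its closing quote, turning the block back into plain imports for the linter. The lazy_import call below is bzrlib's real idiom; the module names are just examples.

    from bzrlib.lazy_import import lazy_import
    lazy_import(globals(), """
    from bzrlib import (
        errors,
        osutils,
        )
    """)
    # After s/^lazy_import/#\0/ and s/^""")$/#\0/ the same block reads:
    #
    #   #lazy_import(globals(), """
    #   from bzrlib import (
    #       errors,
    #       osutils,
    #       )
    #   #""")
    #
    # i.e. ordinary imports that pylint can follow, which is why
    # forgetting to revert it before committing is the real hazard.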
[12:25] now i should be sure of what to do next time ;)
[12:26] have to say that bzr is great.. played two days and already made my own bzr serve command that uses paramiko to make a custom ssh server and uses public keys to authenticate.. :P
[12:29] Parker-: terrific !
[12:30] still in proof of concept state...
[12:30] could be a very useful plugin for windows
[12:30] it works with windows :P
[12:30] https://launchpad.net/bzr-auth
[12:31] now uses ssh-agent (or Pageant in windows) to get keys
[12:38] wow, that's great Parker
[12:38] you should really post to bazaar@lists.ubuntu.com about it
[12:39] er... I'm a little and shy boy
[12:39] :D
[12:40] hmmhh.. is bazaar@lists.canonical.com the same?
[12:41] yes
[12:46] New bug: #189227 in bzr-svn "Strip a trailing \n when submitting the commit message to SVN" [Undecided,New] https://launchpad.net/bugs/189227
[12:47] yes
[12:54] have to fix it a little bit before the announcement
=== mrevell is now known as mrevell-lunch
[13:34] poolie: ah, that'd be the problem
[13:34] was just surprised, since tla handled it fine ;)
=== mvo__ is now known as mvo
[14:01] New bug: #189246 in bzr "UnicodeDecodeError exception in merge" [Undecided,New] https://launchpad.net/bugs/189246
[14:11] Hi, I have bzr-1.1 on a gentoo machine. I have a problem committing my code. Get a ".bzr/branch/revision-history.tmp.1202219907.956677914.25812.1333525853": [Errno 13] Permission denied"
[14:12] The owner of the bzr root, and the group, is bzr.
[14:12] wica: This suggests that you've created the branch with a different user and haven't set the permissions on it correctly.
[14:12] My normal user is in the bzr group
[14:12] and all my dirs have 775
[14:13] Odd_Bloke: Yep, that I understand. But the permissions look ok.
[14:13] ls -ld ".bzr/branch/" .bzr/branch/revision-history.tmp.1202219907.956677914.25812.1333525853
[14:13] That file is not there
[14:14] the normal user, in the cli, can create files there
[14:14] wica: Is the above the full message that bzr gives?
[14:14] bzr: ERROR: Permission denied: "/var/bzr/iceshop/trunk/.bzr/branch/revision-history.tmp.1202219907.956677914.25812.1333525853": [Errno 13] Permission denied
[14:14] this is the full msg
[14:16] when I change the owner of /var/bzr/ to my normal user, I can commit
[14:16] OK, does the 'bzr' group have the execute bit set on all directories leading up to that path?
=== mrevell-lunch is now known as mrevell
[14:17] I did a chmod -R 1775
[14:17] So it should be
[14:22] OK, so the use of the sticky bit might cause problems if you're not the owner of /var/bzr/iceshop/trunk/.bzr/branch
[14:22] will try chmod -R -x
[14:22] Why?
=== jam_ is now known as jam
[14:23] -x will remove the sticky bit
[14:23] just to see what happens
[14:24] -x will also stop you being able to go into directories...
[14:30] has anyone experimented with gittorrent-like things for bzr?
[14:32] wica: x is the execute bit, not the sticky bit. The sticky bit is t.
[14:33] abentley: Yes, that is true. don't know why I typed +x
[14:33] hmmhh... what then is the s bit?
[14:33] But the problem is found
[14:34] The user with the problem has created the problem
[14:34] :/
[14:34] Parker-: you are probably thinking of SUID or SGID.
[14:34] jrydberg_: haven't seen anything like that yet
[14:34] ah ok
[14:34] wica, usually the user is THE problem :)
[14:35] People seem to confuse SGID with the sticky bit a lot.
[14:35] jeh
[14:36] hmhh.. have to think what to do with the bza-auth
[14:39] bzr-auth even
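Since the mode bits keep getting tangled above, a quick reference in Python's stat terms; the chmod semantics are standard Unix, nothing bzr-specific, and the path is just the one from wica's error.

    import os
    import stat

    # x = execute/search: needed on every directory leading to the branch.
    # t = sticky bit: restricts deletion in shared dirs such as /tmp;
    #     the leading 1 in wica's 'chmod -R 1775' set this bit.
    # s on a directory = setgid: new files inherit the directory's group,
    #     which is usually what a group-shared repository actually wants.
    mode = (stat.S_IRWXU | stat.S_IRWXG |       # rwx for user and group
            stat.S_IROTH | stat.S_IXOTH |       # r-x for everyone else
            stat.S_ISGID)                       # inherit the 'bzr' group
    os.chmod('/var/bzr/iceshop/trunk', mode)    # equivalent to chmod 2775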
=== mw|out is now known as mw
[16:47] New bug: #189282 in bzr "Internal error on branch of awn-core (httplib.ResponseNotReady)" [Undecided,New] https://launchpad.net/bugs/189282
[17:21] New bug: #189300 in bzr "bzr push to saved location fails" [Undecided,New] https://launchpad.net/bugs/189300
=== AnMaster is now known as AnMaster_
=== AnMaster_ is now known as AnMaster1
=== AnMaster1 is now known as AnMaster
=== jam_ is now known as jam
[19:55] im getting this error when i'm trying to update or pull, any ideas? bzr: ERROR: Cannot lock LockDir(http://bazaar.launchpad.net/...bzr/branch/lock): Transport operation not possible: http does not support mkdir()
[19:55] Bazaar (bzr) 1.0.0.candidate.1
[20:25] is it bound
[20:25] ?
[20:28] dont really know: this is what bzr info gives me: checkout of branch: http://... , parent branch: http://...
[20:28] is it?
[20:29] hmmhh.. did you use 'bzr checkout' or 'bzr branch'
[20:30] alefteris, yes, it looks like it's bound
[20:30] you can just do "bzr unbind"
[20:30] although it seems the branch is locked
[20:31] you might want to do bzr break-lock on it if you have write access
[20:31] because isn't http:// access read-only?
[20:31] (use bzr+ssh instead of http)
[20:32] so what do i have to do to get the latest revision from launchpad? mind that the branch is on a remote server, where i havent got my ssh keys
[20:33] bzr branch http://...
[20:33] alefteris, just do "bzr unbind"
[20:33] and then
[20:33] bzr pull http://...
[20:33] which is probably faster than re-branching
[20:35] ok you guys rock, thanks a lot :)
[20:36] alefteris, :D
[20:36] doh... but I'm a soft meatbag :(
[20:37] Parker-, re-branching works too, just a bit slower
[20:37] it's a matter of sticking around long enough here
[20:37] things tend to repeat themselves a lot
[20:37] we should have a !explain_this bot here at some point I think
[20:38] beuno, yeah I know.. :)
[20:39] maybe we should put some kind of notice text if a user uses checkout + http
[20:39] Parker-, right, seems reasonable, like when pushing non-locally, with the "this does not update the working tree" notice
[20:40] and I think there should be better error information when that "cannot mkdir" comes up
[20:40] or something
[20:41] Parker-, absolutely. If you're subscribed to the mailing list, you might want to propose it (or even cook up a patch if you're up to it)
[20:41] beuno, maybe a more descriptive error message would help also. i searched the web as well for the error, but found only bug reports and no clear solution about what i should do about it
[20:41] alefteris, absolutely
[20:41] I'll file a bug for it
[20:42] thank you
[20:42] that way it can be tracked
[20:42] Parker-, ^
[20:42] roger :)
[20:42] do it because I'm a newbie here :P
[20:42] started to use bzr two days ago :)
[20:43] Parker-, will do, and welcome!
[20:44] thanks... today published my first proof of concept plugin to launchpad :P
[20:44] Parker-, that was you? cool! I have that thread starred to look into later on. We are really in need of something like that. I'll try and peek into the code and see if I can help you out with that
[20:44] hello. how to enable nautilus integration ?
[20:46] beuno, heh... thanks :)
[20:46] tacone, you have to install bzr-gtk
[20:47] I did.
[20:47] I noticed from the topic 1.1 is out. I have 0.9
[20:47] 1.1 will enable naut-int by default ?
[20:47] tacone, ah, just saw this "(Note that NautilusIntegration is disabled at the moment, friction in the interface between nautilus and bzr resulted in substantial performance issues)."
[20:48] jelmer, is that still true?
[20:48] moin
[20:48] hey lifeless
[20:48] tacone, bzr doesn't come with any GUI tools by default, but I do strongly recommend upgrading
[20:48] beuno, and yeah... help would be great..
[20:49] I saw it too. but maybe it's not updated.
[20:49] tacone, you can download the latest from: https://launchpad.net/~bzr/+archive if you're on Ubuntu
[20:50] beuno: I just did (5 seconds :-))
[20:50] hmmhh.. ubuntu-backports has 1.0
[20:50] nothing seems to change
[20:50] bzr --version gives 1.1.0 now
[20:50] should I try to restart nautilus maybe.
[20:50] tacone, it might not be enabled
[20:50] let me give it a try
[20:52] thanks.
[20:52] I will restart x in the meantime
[20:56] beuno: I restarted the whole pc but I see no naut-int.
[20:56] tacone, did you follow the installation instructions in the README?
[20:57] you have to install python-nautilus
[20:57] beuno: Nobody disabled it afaik
[20:57] copy a file into ~/.nautilus/python-extensions
[20:57] wow. where's the readme ? :-D
[20:57] tacone, and 0.93 doesn't seem to work with 1.1
[20:57] branching trunk to see if it does
[20:57] but of course, jelmer would know much more about this
[20:58] abentley: ping
[20:58] so should I downgrade bzr ?
[20:58] lifeless: pong
[20:58] abentley: I think I have a clue about improving the in memory index stuff
[20:59] tacone, hold on a bit, lemme triple check how everything works, and I'll walk you through it
[20:59] abentley: I'm going to have a stab at it today
[20:59] Have fun. I'll be busy anyhow.
[20:59] thanks beuno, waiting for you here :)
[20:59] abentley: I'm thinking you may want to merge-and-test-and-revert occasionally once I have something up
[21:00] lifeless: If you want to play with my fast-iter-files-bytes branch, it's here: http://code.aaronbentley.com/bzr/bzrrepo/fast-iter-files-bytes
[21:01] I do not consider it merge worthy, so don't panic about how it works.
[21:01] k
[21:09] tacone, got everything working, but I can't seem to get nautilus to do stuff...
[21:10] nice :-D
[21:10] I will live without
[21:10] thank you anyway. so kind.
[21:10] tacone, it might be worth a try to follow the readme instructions
[21:10] and restart X
[21:11] which I can't do at this moment
[21:11] where do I find the readme ?
[21:11] tacone, in the plugin directory
[21:11] ok
[21:11] download 0.93
[21:11] ok.
[21:11] thank you very much. have a good evening
[21:12] tacone, thanks, you too
[21:30] New bug: #189390 in bzr "Warn users when doing checkouts with read-only transports" [Undecided,New] https://launchpad.net/bugs/189390
[21:47] beuno: I'm not sure who added that bit to the wiki for NautilusBzr
[21:48] jelmer_, I couldn't get it to work though. Does it require restarting X?
[21:49] jelmer_: well the code is disabled
[21:49] jelmer_: there is an XXX about it; or was recently
[21:49] ah, somebody has commented it out in the setup
[21:51] beuno: It requires you to restart nautilus
[21:51] beuno: that should be all
[21:51] you also need to have python-nautilus and libnautilus-dev (or something) installed
[21:53] poolie: ping
[21:53] lifeless: ping
[21:53] jam: gnip
[21:53] jelmer_, still doesn't work :/ I'll try to debug when I get home
[21:54] lifeless: I've been inspecting why "bzr annotate" is slow, and I'm finding some interesting points.
[21:54] I thought I would discuss it a bit before I really got down and started coding
[21:54] cool
[21:54] Basically, the #1 thing that pops up is that for 700 revisions, we call _lookup_keys_via_location about 70,000 times
[21:55] It seems that all of our normal Knit calls have to do a bisect lookup
[21:55] so every time we do "get_options()", "get_method()", etc.
[21:55] And "get_parents()" has to do 2 lookups
[21:55] because it checks to see if the parents are ghosts
[21:55] yes
[21:56] so we should not be using get_parents except when it matters
[21:56] get_delta() ends up doing about 5 lookups, because it uses get_options, get_method, and get_parents
[21:56] and for decompression trees it does not matter
[21:56] a grab-it-all-at-once api would be better for knit text extraction
[21:56] also the index bit is slow right now, I'm looking at that today, had some ideas come to me after explaining it to aaron yesterday
[21:57] well, as to that, we don't really need the fulltexts to do reannotate
[21:57] if we are going to be doing it from the line deltas anyway
[21:57] we can just build up annotated fulltexts on the fly
[21:57] rather than building up fulltexts
[21:57] grabbing the deltas
[21:57] and then combining the two into annotated fulltexts
[21:57] jam: The line deltas aren't accurate.
[21:57] abentley: but you are using them
[21:57] abentley: "annotate_knit"
[21:57] They don't have correct information about the last line.
[21:58] I believe that operation uses both the fulltexts and the line deltas.
[21:58] abentley: you use the matching blocks from the line deltas, and the fulltexts in reannotate
[21:59] but you use the matching blocks assuming they are accurate
[21:59] Is the only problem the "eol" marker?
[22:00] jam: you're off on a tangent now, I meant that the in-memory management of the pack index's disk data can be improved; this doesn't affect fulltexts/not
[22:00] jam: I believe so. Got work stuff.
[22:02] lifeless: well, I'm saying you don't need to extract all the knits into fulltexts
[22:02] and was trying to figure out a better way to stream out what I did need
[22:02] jam: sure; I didn't think we did. Anyhow its the exact same set of index calls needed
[22:02] It also makes me feel like we should be caching in Knitgraph a bit more
[22:03] we have to access the component, delta or fulltext, from the index layer
[22:03] and we should access each index record once.
[22:12] jam: if you look at KnitContent.get_line_delta_blocks, you'll see it does not blindly trust the matching_blocks -- it uses the fulltexts to ensure the delta of the last line is correct.
[22:13] This is probably the biggest reason why I hate knit deltas.
[22:13] abentley: I do see that, though I wonder if we could just handle the eol on our own and not have to build up the fulltext
[22:13] but thanks for pointing it out
[22:14] but yeah, the knit way of storing things is to record whether there is an eol
[22:14] jam: You'll have to be careful if you take that approach.
[22:14] then always force a final eor
[22:14] eol
[22:14] for example, the last line could match some other line because of the lack of an EOL.
[22:15] And of course, there may also be real differences.
[22:15] Is the RC out yet?
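A standalone illustration of abentley's point: knit storage keeps lines normalised to end in '\n' plus a separate no-eol flag, so a delta computed over the normalised lines cannot express a change that only affects the final newline. Plain difflib stands in for the knit matcher here; nothing below is bzrlib code.

    from difflib import SequenceMatcher

    # Two versions whose only difference is the trailing newline.
    old = ['hello\n', 'world']      # last line has no EOL
    new = ['hello\n', 'world\n']

    def normalise(lines):
        # what knit storage effectively records, with a no-eol flag aside
        return [l if l.endswith('\n') else l + '\n' for l in lines]

    matcher = SequenceMatcher(None, normalise(old), normalise(new))
    print matcher.get_opcodes()   # [('equal', 0, 2, 0, 2)]: "no change"

    # The stored delta is empty even though the texts differ, which is
    # why get_line_delta_blocks consults the fulltexts before trusting
    # the matching blocks for the last line.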
[22:15] jam: the encoding misses data
[22:15] jam: it does not indicate if the delta affects the last line
[22:15] other than EOL
[22:16] true enough
[22:16] though if you are doing annotated fulltexts anyway, you'll still have a fulltext to work with
[22:16] you just don't have to have 2 of them
[22:17] TBH, I'm not sure whether you can calculate it without using the fulltexts or doing equivalent work.
[22:17] jam: depends on the deltas I think
[22:17] anyway, the biggest expense is all around the KnitGraphIndex at the moment
[22:17] so it is the place to focus
[22:17] reannotate is about 3/15s
[22:18] _get_entries is 8.5s/15s
[22:18] Because in order to compare against the last line, you may need to hit any delta in the history to find its contents.
[22:18] And knit delta format won't tell you how many lines there were in that version.
[22:19] Sure, I wasn't planning on shortcutting going into all history yet
[22:19] just thinking that caching 700 fulltexts in memory was probably not a good idea
[22:19] jam: we're doing that, are we?
[22:19] Yeah, probably not the best idea.
[22:20] ancestry = get_ancestry(versions)
[22:20] lines = get_line_list(ancestry)
[22:20] (pseudocode)
[22:20] wheeee boom
[22:21] jam, put the option to the user, if he/she wants to use memory caching?
[22:21] annotate an iso, dare you to
[22:21] Perhaps an LRU cache would make sense, with reasonable groupings.
[22:21] Parker-: well we would need at least a couple copies
[22:21] Parker-: we try to solve problems in such a way that no questions/options are needed to get good performance all around
[22:21] but we shouldn't ever need all of them at once
[22:21] and we should be able to build them up as we go
[22:21] options require explanations and understanding of consequences
[22:22] lifeless, yeah, but some users have too much memory to use.. so if they want... they can also boost with memory caching?
[22:22] jam: well, this was a quick fix. You can probably do better if you're willing to reimplement knit building.
[22:22] Parker-: perhaps; but if we can solve it such that the extra memory is irrelevant
[22:22] Parker-: then there is no question to ask
[22:23] there is the possibility of caching intermediate representations for something like gannotate
[22:23] jam: But I'm really surprised that you're not pursuing annotation caching. Because this annotate code is pretty fast when you're dealing with < 1000 revisions.
[22:23] but that is certainly a different question
[22:23] I didn't mean to solve this one... but in future or something...
[22:23] abentley: it is 15s for repository.py which is 700 revisions
[22:23] I'm not sure if that is "pretty fast"
[22:24] I'm trying to find the case where it was "9 minutes"
[22:24] bzr annotate -r 1000 NEWS was pretty fast.
[22:24] But that was on knits.
[22:24] I have the feeling it is a mixed "too many packs" and too many revisions
[22:24] Parker-: sure; I'm not rejecting your idea, but suggesting we should see if there are solutions that are as good without options
[22:24] jam: how many packs do you have ?
[22:24] lifeless: here only about 6 or so
[22:25] jam: and whats a histogram of components for them
[22:25] but then.. what do I do with my gigs of ram :(
[22:25] lifeless: *I* don't have the 9-minute annotate problem
[22:25] Parker-: use it for java
[22:25] lifeless: it was someone else's error report
[22:25] lifeless, ah yes... forgot java :)
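A sketch of the LRU cache abentley floats above, as a middle ground between re-extracting every fulltext and pinning all 700 in memory at once. Standalone illustration only; the sizing and eviction policy are arbitrary.

    class LRUCache(object):
        """Keep at most max_size recently used fulltexts in memory."""

        def __init__(self, max_size=64):
            self._max_size = max_size
            self._cache = {}
            self._order = []   # least recently used first

        def get(self, key, build_fulltext):
            if key in self._cache:
                self._order.remove(key)   # O(n); fine for a sketch
            else:
                if len(self._cache) >= self._max_size:
                    oldest = self._order.pop(0)
                    del self._cache[oldest]
                self._cache[key] = build_fulltext(key)
            self._order.append(key)
            return self._cache[key]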
[22:25] jam: right; an interesting report would be a histogram of that fileid across packs
[22:28] btw, I believe that get_line_lists does not take nearly as much memory as repeated calls to get_lines.
[22:28] abentley: well lines that are shared between versions will stay shared as in-memory strings
[22:28] Right. Mostly.
[22:29] I'm not sure whether that happens when a version has more than one descendant.
[22:29] Actually, it's probably unique there, too.
[22:29] It would be the list that would have to be copied.
[22:30] morning
[22:30] igc: evening.
[22:30] hi abentley
[22:30] abentley: yes list copies - thats the code we were talking about saturdayish
[22:30] lifeless: indeedy.
[22:31] later, gators.
[22:32] later abentley
[22:32] have a good evening
[22:32] statik: ping
[22:34] abentley: jam: bz2 is only slightly better than gzip
[22:34] I thought bzip2 was slower though
[22:35] it's insanely slower, worst case
[22:35] it usually is
[22:35] i had a log file which took 2 hours to compress with bzip and 5 minutes with gzip
[22:35] and it is generally slow symmetrically
[22:35] so it is slow to compress and can be slow to decompress
[22:35] so I'm wondering if this change you guys requested is the right thing
[22:35] 45K -> 40K is the difference
[22:35] lifeless: well, you are the one doing the profiling
[22:36] bzip2 comes into its own in the 900k range
[22:36] not really in the 40k range
[22:36] The big difference being that gzip has a very limited window
[22:36] something like 16k
[22:36] while bzip has a much larger window
[22:36] huffman vs simple sliding window
[22:36] I'm going to add size-doubling-every-request to this get_parents thing
[22:37] so that full-history is not linear round trip counts
[22:37] afaik, LZMA is mostly gzip with a huge window, and can do better than bzip2 on compression, and still keep gzip decompression speeds
[22:38] cute
[22:38] lifeless: bz2 may not make sense at this level, I'll certainly let you do the tradeoff evaluation
[22:38] must be some fun with window reference pointers
[22:38] well I've done the commit already
[22:38] I know we do get farking huge histories
[22:39] lifeless: but if the request is designed to buffer X amount of data per round-trip an individual request may be quite limited for all cases
[22:39] X, then 2X, then 4X
[22:39] currently its X
[22:41] New bug: #189419 in bzr "Odd behaviour when adding directory with wrong case on Windows" [Undecided,New] https://launchpad.net/bugs/189419
[22:46] lifeless: we might consider capping it at Y*X, if only for interactivity purposes
[22:46] (say 16X or something)
[22:46] I'm seeing 2.4K revisions in a 64K compressed blob
[22:47] our entire history is what, 15K?
[22:47] so thats 2.4 + 4.8 + 9.6 - 3 round trips
[22:47] lifeless: revisions.kndx is 1.5MB
[22:48] you have knit repos left?
[22:48] we have to have a chat
[22:48] lifeless: yep
[22:48] my development stations are all packs
[22:48] my public repo is still knits
[22:48] gzip revisions.kndx is .5MB
[22:48] bzip2 is about the same
[22:49] 482KB versus 427KB
[22:52] hi statik`
[22:58] hello
[23:00] hey poolie, phone time, right?
[23:00] poolie: I can still merge stuff for 1.2?
[23:01] lifeless, yes
[23:01] IcanHAZcheezeburger
[23:01] * lifeless breaks bzr.dev network with itself, again
[23:02] heh
[23:20] New bug: #189431 in bzr "pre_commit hook overly expensive" [Undecided,New] https://launchpad.net/bugs/189431
[23:38] hi, i'm looking at converting from svn to bzr. Does bzr support svn style properties?
[23:39] some yes
[23:40] is this for 3rd party extension, or for file-ending support on windows etc?
[23:40] (that is, are you asking about 'generic stuff' or specific features)
[23:41] it doesn't support keywords or eol-style
[23:41] foom: yet; I have a plan.
[23:42] does 1.2 make any more great strides in speed?
[23:42] yes network pull on smart server will be way faster
[23:42] and use less memory
[23:42] and make more coffee
[23:42] how about local operations?
[23:43] branch and checkout are faster
[23:43] annotate too I think
[23:43] i haven't even gotten to the point where pulling data is a problem yet. :)
[23:43] give me a sample branch and the ops that are slow and I'll fix it
[23:45] i have a 50krev 1.3G repo, full of proprietary data. I imagine it has the same issues as large public projects, though.
[23:45] we're performing quite adequately on openoffice
[23:45] which is similar in size
[23:46] last time we discussed I think I said words to the effect of 'file bugs, tagged performance'
[23:46] which I'll repeat, I'm happy to explain why this matters if its not obvious
[23:46] hm, i guess i should grab the ooffice repo and try using it
[23:46] igc can help you there
[23:47] it'll at least be interesting to see whether there is a marked difference in performance
[23:47] start by filing bugs though
[23:47] command X is slow, I have <---> this much data of the thing its looking at.
[23:48] we'll probably get you to get some stats about your repo without disclosing data to help analyse things
[23:48] but I'm really begging you - file bugs.
[23:48] ok.
[23:48] lifeless: yes, I ran a full set of local benchmarks on openoffice last night
[23:49] no bug, no chance of a developer thinking about your case. You'll only accidentally get fixed. Bug. Things Get Fixed.
[23:49] on both gutsy and leopard
[23:49] no deep history in those results (yet)
[23:49] jam, did you really want all of ~bzr to join ~bzr-log-rss-devel?
[23:50] igc, is there any way for stats to be automated with bzr.dev? have it spit out nightly stats in HTML format
[23:50] poolie: hes using an idiom to allow devs that are only interested in log-rss to do so, but to grant all bzr devs access too
[23:50] I think its not really needed; but shrug :)
[23:51] do you mean we each individually accept or decline?
[23:51] oh i see
[23:51] beuno: I think so via cricket
[23:51] tb
[23:51] nm
[23:51] igc: do you have the converted repo available?
[23:51] tb?
[23:51] foom: see http://bazaar-vcs.org/Benchmarks
[23:51] igc: I think foom wants to pull the converted deep history repo and play
[23:51] raw snapshots are available here: http://people.ubuntu.com/~ianc/benchmarks/src/
[23:52] I'm working on getting a conversion of openoffice today as it happens
[23:52] igc, cricket? Would be nice to have to see performance gains. Even eventually have statistics of how it's being improved
[23:52] I'll make that available as soon as I can
[23:52] didn't jam do one already ?
[23:53] um - someone at Canonical did, not sure if it was jam
[23:53] w.r.t. cricket that is
[23:53] igc: jam. You should be able to just copy that conversion
[23:53] jam: ^
[23:54] w.r.t. OOo, jam was looking at a conversion from the CVS master but I believe it required a lot of manual work to get right
[23:54] given their tagging conventions, etc.
[23:55] yes but as a data point for playing with ..
[23:55] it doesn't have to be right
[23:55] foom: we could pair-file-bugs if you like
[23:56] spiv, can you explain why we get TooManyConcurrentRequests errors, like in bug 189300
[23:56] Launchpad bug 189300 in bzr "TooManyConcurrentRequests error during push" [Undecided,New] https://launchpad.net/bugs/189300
[23:56] igc: on those benchmarks, it looks like you're testing with no history?
[23:57] lifeless: thanks; not today at least, though.
[23:57] foom: that's right - it's on my list real soon now to add that
[23:57] foom: first step is getting a sensible conversion of 3 of large projects which I'm doing today
[23:57] i would have thought that could only come from some kind of structural code bug
[23:57] poolie: I think maybe because of poor error handling
[23:57] but it seems to occur when there are network errors
[23:57] hm
[23:58] s/3 of/several/
[23:58] maybe if the network is dropped we skip reading the reply, or noticing that there will be no reply
[23:58] poolie: i.e. issuing a request that expects a body, but gets an error instead, but forgets to do cancel_read_body in the error handling
[23:58] and then fail to see the next one?
[23:58] right
[23:58] Which is the sort of thing that the next protocol version ought to help with...
[23:58] igc: we have a mozilla one already
[23:59] igc, i'd be inclined to start with bazaar itself,
[23:59] it's easily accessible and it has a realistic merge graph
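The failure mode spiv is describing, in sketch form: a request that expects a body gets an error reply instead, the error path never tells the medium the body will not be read, so the request slot stays occupied and the next request dies with TooManyConcurrentRequests. cancel_read_body is the real client-side call he names; the surrounding structure is assumed.

    def read_reply_body(request):
        """Read a response body, releasing the request slot on error."""
        try:
            return request.read_body_bytes()
        except Exception:
            # The step the buggy path forgets: without this the medium
            # still counts the request as in flight, and the next call
            # raises TooManyConcurrentRequests.
            request.cancel_read_body()
            raise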