[00:01] <lifeless_> working adsl FTW
[00:06] <Zectbumo> Are there instructions on how to add bzr to the SCM in XCode?
[00:16] <poolie> spiv, thanks for your mail to sg
[00:27] <poolie> Zectbumo, not that i know of
[00:27] <poolie> i believe there are some people who do use it there though
[00:27] <poolie> a post to the list may shake them out
[00:27] <Zectbumo> where is the list?
[00:27] <poolie> bazaar@lists.ubuntu.com
[00:27] <poolie> or you could post where you've got up to in adding it
[00:28] <Zectbumo> thanks poolie, I'll try that
[00:28] <jml> lifeless: I would like to know more of this 'working adsl' of which you speak.
[00:30] <lifeless> the electrons move
[00:30] <lifeless> from node to node
[00:30] <lifeless> on wires copper
[00:30] <poolie> like autumn leaves
[00:31] <Zectbumo> in the summer time
[00:31] <lifeless> igc: ping
[00:31] <lifeless> the living is easy
[00:31] <igc> hi lifeless
[00:31] <lifeless> igc: do you want to talk about this plugin stuff
[00:31] <lifeless> igc: or just rapid fire emails ?
[00:32] <igc> as in auto-loading of plugins?
[00:32] <igc> I'm yet to look at it to be honest
[00:32] <lifeless> as in I b:rejected your patch - you may not have seen that
[00:32] <igc> ah - no I hadn't seen that
[00:32]  * igc looks
[00:33] <poolie> lifeless, are you in need of relief from skull drills?
[00:33] <lifeless> poolie: no drilling so far today
[00:33] <lifeless> poolie: if it starts up I'll ring you and run for a train
[00:33] <lifeless> igc: you seem to be reimplementing bzr 0.8's hook system, which is sad.
[00:33] <lifeless> igc: so I'd like to understand what you want to achieve, and hopefully suggest a smaller way to do it.
[00:34] <igc> lifeless: fire away
[00:34] <lifeless> uhm, you're the one with a mission in this area:) how about you describe what you want to achieve
[00:35] <igc> lifeless: I want an easy, scalable way of configuring command line scripts as hooks ...
[00:35] <igc> a few people have rejected Bazaar for Hg on this feature alone
[00:36] <lifeless> igc: Ok. I think this is a rather restrictive and difficult to work with medium, so don't want our core to get cluttered; we can do this entirely as a plugin.
[00:36] <igc> I don't care a lot about the Python hooks stuff because we support that already
[00:36] <lifeless> there's a /lot/ of friction between object-model and command line
[00:36] <lifeless> and requiring that every object model hook be command line exposed - urgk. Not to mention locking coherency and performance.
[00:36] <lifeless> and windows.
[00:37]  * Toksyury1l wonders absently if there are any plans to rewrite bzr in a compiled language someday in the future
[00:37] <lifeless> Toksyuryel: python can be compiled FWIW :)
[00:37] <igc> ok - so we can make it selective as to which hooks can exposed if you wish
[00:37] <lifeless> Toksyuryel: but more seriously we have rewritten the inner loops in C already
[00:37] <Toksyuryel> oh, sweet ^^
[00:37] <igc> s/can/get/
[00:37] <lifeless> Toksyuryel: (Not all inner loops, but selected critical ones)
[00:39] <lifeless> spiv: you seem to put success/failure before body content in replies; this doesn't work in the general case
[00:39]  * Toksyuryel nods "well it is nice to know that it's even being attempted ^_^ bazaar is a great system, and I think being available in a version with far less overhead would get a lot more people interested in using it :)"
[00:39] <spiv> lifeless: chunked replies can be interrupted with an error
[00:39] <mwhudson> python is already a compiled language
[00:39] <mwhudson> but this isn't a helpful response :)
[00:39] <lifeless> spiv: smells like duplication to me
[00:40] <lifeless> spiv: I would be strongly inclined to have [BODY] STATUS
[00:40] <spiv> So most errors would tend to always have no body, then the error status?
[00:41] <spiv> That seems reasonable.
[00:41] <lifeless> well the idea I have is that pure errors have no body
[00:41] <Toksyuryel> my understanding is that compiling python would still lag behind something like C, and could potentially introduce strange compile-time errors that interpreted python wouldn't experience
[00:41] <lifeless> HTTP's 'errors are html pages' is nasty
[00:41]  * Toksyuryel could be wrong about this
[00:41] <spiv> Right, I agree.
[00:41] <spiv> I was just referring to the case where an error doesn't occur until the body has already begun.
[00:42] <spiv> That does sound like a good simplification to me, thanks.
[00:46] <foom> except in the case of http 1.0 indeterminate length replies, you can just treat a connection which closes unexpectedly as an error.
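The [BODY] STATUS framing discussed above (body chunks first, a trailing status marker, and pure errors carrying no body) can be sketched roughly as follows. `encode_reply` and `decode_reply` are hypothetical names for illustration, not bzrlib's actual smart-protocol API:

```python
def encode_reply(chunks, error=None):
    """Yield body chunks first, then a single trailing status marker.

    A pure error has no body at all: just the error status. An error
    that occurs mid-body simply interrupts the chunk stream with an
    error trailer instead of a success one.
    """
    for chunk in chunks:
        yield ('chunk', chunk)
    if error is None:
        yield ('status', 'success')
    else:
        yield ('status', 'error:' + error)


def decode_reply(stream):
    """Collect body chunks until the status trailer arrives."""
    body = []
    for kind, value in stream:
        if kind == 'chunk':
            body.append(value)
        else:
            return body, value
    # Per foom's point: an unexpectedly closed stream is itself an error.
    raise ValueError('connection closed before status trailer')
```

This avoids the duplication lifeless objects to: there is exactly one place status is reported, and it works whether the error arrives before or after body content has begun.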
[00:47] <lifeless> igc: reviewing your patches now
[00:47] <lifeless> igc: I get the impression that you are cloning hg's style here or something, but it has a smell about it that is quite un-bzr-like.
[00:47] <lifeless> igc: I appreciate that some folk may say 'we have to have shell hooks' - but let's not copy an inferior system to give them their feature.
[00:48] <lifeless> (if hg was superior, we wouldn't be hacking on bzr ;))
[00:48] <igc> thanks lifeless. The design is meant to be better than Hg's fwiw :-)
[00:48] <igc> and it is
[00:48] <igc> (not saying it can't be better though)
[00:48] <lifeless> Right now I have a pretty strong conviction that this all belongs in a plugin
[00:49] <lifeless> the shell command wrapper stuff is probably good to put in the core once the code duplication is removed
[00:49] <lifeless> oh, I missed another spot of duplication, the unit test support for running real sub processes.
[00:49] <lifeless> the config stuff made me wince; I've replied to it
[00:49] <lifeless> but it's only a comment because I may be being conservative
[00:57] <lifeless> igc: right, the main one I've looked closely at as well. I think you got the wrong end of the problem and pulled the thread there
[00:58] <lifeless> igc: if you instead start with adapting a config driven shell interface into the existing hook structure, with no changes to the current api (it doesn't need any) I think you'll find it works a /lot/ better and more smoothly.
[00:58] <lifeless> (And yes, I do mean start over on the third patch - sorry)
[00:58]  * igc digests that
[01:00] <lifeless> hint: If shell hooks are configured in *Config, then *Config knows the hook exists and should register it
[01:02] <lifeless> the current precommit hook also has a performance problem
[01:02] <lifeless> its recreating the work of the commit by calling changes_from
[01:03] <lifeless> anyhow, I need caffeine apparently, so I'll be back in a few minutes; I'm very happy to have a voice call if you think more bandwidth will help
[01:37] <lifeless> igc: you have mail; I'm not arguing against having shell scripts, I'm arguing against an implementation that is problematic (and makes changes to core it should not)
[01:37] <igc> lifeless: understood
[01:38] <igc> I'm ok with going back to [] instead of init_hook
[01:38] <igc> the tricky bit I don't see ...
[01:38] <igc> is how your auto-register approach can work when the set of hooks ...
[01:38] <igc> is dependent on the *branch*
[01:39] <igc> That's the important api change ...
[01:39] <igc> get_hooks(hook_name, branch) instead of hooks[hook_name]
[01:57] <lifeless> igc: when the branch reads its config file it can add hooks, can it not ? And clean them up when unlock() drops the lock count to 0.
[01:57] <lifeless> igc: -> no api change needed :]
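lifeless's suggestion - register config-driven hooks when the branch takes its lock, and remove them when unlock() drops the lock count to 0, with no API change - might look something like this minimal sketch. All names here are illustrative; the real Branch.hooks is a richer object than a plain dict:

```python
# Stand-in for the global Branch.hooks registry.
hooks = {'post_commit': []}


class Branch:
    """Toy branch that registers its config-defined hooks while locked."""

    def __init__(self, config_hooks):
        # e.g. {'post_commit': some_shell_hook_callback}, as read from
        # the branch's config file.
        self._config_hooks = config_hooks
        self._lock_count = 0

    def lock(self):
        if self._lock_count == 0:
            # First lock: publish our hooks into the global registry.
            for name, callback in self._config_hooks.items():
                hooks.setdefault(name, []).append(callback)
        self._lock_count += 1

    def unlock(self):
        self._lock_count -= 1
        if self._lock_count == 0:
            # Lock count back to zero: clean our hooks up again.
            for name, callback in self._config_hooks.items():
                hooks[name].remove(callback)
```

The hook registry stays global and the existing hook-firing code is untouched; the branch-dependence igc wants comes entirely from when the hooks are present.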
[01:57]  * igc thinks
[01:58] <lifeless> Branch.hooks is branch independent, it's global.
[01:58] <igc> sounds ok (which is great)
[01:58] <lifeless> We can *either* make it branch specific, which has some possible benefits, but some downsides as well.
[01:58] <lifeless> OR we can do what I just described, which will also work well.
[01:59] <igc> lifeless: I'd probably prefer making it branch specific but ...
[01:59] <igc> I'll try the other way if you'd prefer
[02:00] <igc> as long as locations.conf overrides branch.conf ...
[02:00] <igc> I'm happy enough
[02:02] <lifeless> igc: I am not wedded to it being global (well, more precisely, Branch.hooks must stay global, but you could clone the current global hooks with a deep copy to aBranch.hooks, which is then local and can be tweaked by the branch). But I think that's a mistake due to interactions with bound branches, which is why I think singleton is better in the first place.
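The deep-copy alternative lifeless describes (clone the global hooks onto a per-branch aBranch.hooks) is easy to picture. This toy sketch uses a plain dict rather than bzrlib's real hook registry, and 'my-shell-hook' is a hypothetical entry:

```python
import copy

# Stand-in for the global Branch.hooks registry.
global_hooks = {'post_commit': []}

# Per-branch copy: deep so that mutating the branch's lists cannot
# leak back into the global registry.
branch_hooks = copy.deepcopy(global_hooks)
branch_hooks['post_commit'].append('my-shell-hook')  # hypothetical hook
```

The downside lifeless notes is real: with bound branches, two branch objects cooperate on one operation, and per-branch copies make it ambiguous whose hooks fire.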
[02:03] <igc> ok - let me get the run-shell-command patch resubmitted and I'll go with your 'stay global' approach
[02:04] <igc> lifeless: thanks for the reviews btw - much appreciated
[02:08] <lifeless> np
[03:36] <lifeless> abentley: ping
[03:36] <abentley> lifeless: pong
[03:36] <lifeless> abentley: is there a trivial method to apply an inventory delta to a tree: to actually do the tree rearrangements requested
[03:37] <abentley> Yes, WorkingTree.apply_inventory_delta
[03:37] <lifeless> abentley: that only alters the inventory
[03:38] <lifeless> abentley: I want to shuffle directories on disk; write file content out to new files and symlinks etc.
[03:38] <abentley> True, but that's all the inventory delta was meant to describe.
[03:38] <lifeless> abentley: I know. the use case is 'I have a partially committed tree - all the content is in but not the revision id.' Now I want to extract this to disk. Hmm, perhaps export will do it
[03:39] <abentley> I think the best option would be to convert it to _iter_changes format, which revert uses.
[03:41] <jml> what's the eta on the RC?
[03:41] <lifeless> abentley: no need, export() will work just fine
[03:41] <lifeless> I brain farted for a second :)
[03:42] <abentley> lifeless: So I've got an implementation of iter_files_bytes that reads each pack only once.
[03:42] <abentley> Now 50% of the time is spent reading the index to determine the build-dependencies.
[03:43] <lifeless> abentley: cool; I guess that means it's faster ?
[03:44] <abentley> Yes, it's faster for the checkout-from branch case when there's no accelerator tree.
[03:45] <abentley> But it's about half the speed of checkout using an accelerator tree.
[03:45] <abentley> So I'm looking for ways to reduce the cost of index lookups.
[03:45] <abentley> Any ideas?
[03:48] <lifeless> paste your index-using code
[03:49] <lifeless> but in general: fewer trips to the index layer, the better. And we need to do the next iteration of the index layer too - I have a half-prepped branch working on that
[03:53] <abentley> http://paste.ubuntu-nl.org/54807/
[03:55] <lifeless> abentley: it's pretty dense; I guess that's doing inventory and text in a single pass ?
[03:56] <lifeless> abentley: or is it because it's part of the refactored knit layer, which inventory and revision still use ?
[03:56] <lifeless> not saying it's bad btw
[03:56] <abentley> Yes, knit_kind can be "file", "inventory", "revision", "signature".
[03:56] <lifeless> k. I'm a little confused as to what streams are, I suggest a comment or something
[03:57] <abentley> stream_id would be a better name; they are tuples of (file_id, version_id)
[03:57] <lifeless> that's a component, isn't it ?
[03:57] <lifeless> so I would say component_key
[03:58] <lifeless> 'the name of a component'
[03:58] <abentley> Well, they are actually the names of the desired streams, i.e. fulltexts.
[03:59] <lifeless> fulltext_keys perhaps ?
[03:59] <lifeless> I use key fairly consistently elsewhere when talking about a tuple of ids
[03:59] <abentley> Could work.
[04:00] <lifeless> trivial note: while pending_versions: is faster than while len(pending_versions) > 0: and IMO just as readable
[04:00] <abentley> There's currently a lot of pointless conversion between various tuple formats, so I'm not sure exactly what the inputs will be when I'm done.
[04:01] <lifeless> one thing you could do to reduce overhead
[04:01] <lifeless> make pending_versions a set of the native keys
[04:01] <lifeless> that will reduce conversions
[04:02] <lifeless> if you had per-kind build_maps that would do it. But maybe that doesn't fit further out
[04:02] <lifeless> another alternative would be (kind, key) tuples
[04:02] <lifeless> where key is (revision_id,) or (file_id, revision_id)
[04:02] <lifeless> less conversion - just wrapping
[04:03] <lifeless> one suggestion here - move requirements.update(new_requirements) out of the inner loop
[04:03] <lifeless> instead, when you do pending_versions = new_pending
[04:04] <lifeless> also do requirements.update(new_pending)
[04:04] <lifeless> but otherwise that seems fine - you spider out nicely
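The spidering loop being discussed, with lifeless's two suggestions applied (`while pending:` instead of a len() comparison, and requirements.update() hoisted out of the inner loop to once per round), might look roughly like this. `spider` and `get_deps` are illustrative names, not the actual patch:

```python
def spider(get_deps, roots):
    """Breadth-first closure over build dependencies (illustrative).

    get_deps(key) returns the keys a fulltext's build depends on;
    the result is every key we must fetch to build the roots.
    """
    requirements = set(roots)
    pending = set(roots)
    while pending:                       # truthiness, not len() > 0
        new_pending = set()
        for key in pending:
            for dep in get_deps(key):
                if dep not in requirements and dep not in new_pending:
                    new_pending.add(dep)
        requirements.update(new_pending)  # one update per round,
        pending = new_pending             # not one per inner iteration
    return requirements
```

Keeping the working sets as native keys throughout, as lifeless suggests, means the only remaining cost per round is the index lookup inside `get_deps`.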
[04:04] <abentley> The output is expected as (kind, file_id, revision_id), so it seemed simplest to retain that format.
[04:04] <lifeless> so I suspect you'll be paying bisect() costs in index.py
[04:04] <lifeless> that's something that is possibly optimisable without changing the disk format.
[04:04] <abentley> But your (kind, key) notion is interesting.
[04:07] <abentley> I see what you mean about requirements.update().
[04:07] <lifeless> sets are fast so it's unlikely to be a problem, but little bits add up :)
[04:08] <abentley> I'll probably see how iter_all_entries performs, but it's questionable to use that IRL.
[04:08] <lifeless> iter_all_entries skips the bisection
[04:08] <lifeless> and I know the bisection to be a problem
[04:08] <abentley> Would stink rather badly for many cases, though.
[04:11] <abentley> Yeah, looks like ~all of the iter_entries time is spent in the bisect code.
[04:11] <lifeless> feel free to improve it :) but your current patch is orthogonal I think
[04:13]  * igc food
[04:13] <lifeless> you won't be slower than the current code
[04:13] <abentley> lifeless: This would probably require me to have more than a superficial understanding of bisection...
[04:13] <lifeless> abentley: fair enough ;)
[04:23] <lifeless> abentley: I can describe the theory in an email if the code is too obtuse
[04:24] <abentley> lifeless: The bisect code is slow, but iter_all_entries is slower.
[04:32] <lifeless> abentley: what project? bzr?
[04:32] <abentley> Yes, doing a checkout of Bazaar takes 8 seconds with iter_all_entries, and 5 with iter_entries.
[04:32] <lifeless> heh
[04:32] <lifeless> so, that's the size-of-history issue showing up :)
[04:33] <lifeless> a project with less deep history might see a different ratio
[04:33] <lifeless> for something like mozilla, iter_entries will be faster ;)
[04:36] <fullermd> We don't have a way of storing arbitrary metadata on revisions, right?
[04:37] <bob2> how does --fixes do it?
[04:37] <fullermd> I thought it had a particular field.
[04:38] <abentley> lifeless: Especially mozilla with only one commit
[04:39] <lifeless> fullermd: we do, it's abused
[04:39] <lifeless> I regret us adding it :(
[04:39] <lifeless> abentley: well, moz with one commit, iter_all will be faster
[04:39] <lifeless> abentley: moz with all its commits, iter_entries will be faster
[04:39] <fullermd> Oh, we do?  Hm.
[04:39] <abentley> Oh, gotcha.
[04:39] <fullermd> I thought the whole point of such things was to abuse them   :)
[04:40]  * lifeless abuses fullermd
[04:40] <fullermd> Whip me.  Beat me.  Make me maintain AIX.
[04:40] <lifeless> now thats cruelty
[04:40] <abentley> Ah, but if we made him rewrite AIX in Perl...
[04:41] <fullermd> I could rewrite it in APL, and it'd probably be more comprehensible than the original...
[04:41] <abentley> Plus you could use the insanity defence if you happened to go on a killing spree afterward.
[04:42] <fullermd> Did I mention that my first sysadmin job was on AIX?
[04:42]  * fullermd loads another magazine.
[04:43] <fullermd> Anyway, some damn fool opened up Yet Another VCS Thread on $MAILING_LIST, and it got me thinking about the question "How do I find foo/bar/baz.c r1.1384.1.7 post-conversion".
[04:44] <fullermd> Seems like stashing in random-metadata-slot at conversion time is the least evil answer.
[04:44] <lifeless> fullermd: that can be done in the revision properties list, but it makes the revision object rather large :|
[04:45] <abentley> fullermd: if we're talking about CVS, we don't have to worry about renames, so we can use the filenames as file_ids.
[04:45] <abentley> You could use r1.1384.1.7 as part of the revision-id.
[04:46] <lifeless> abentley: N files in a commit though
[04:46] <fullermd> Urg.  That would be Really Ugly(tm) when you have a few dozen files in a commit...
[04:46] <abentley> Okay, clearly I haven't given this enough thought.
[04:47] <fullermd> Of course not.  It involves CVS.  Your mind shunts itself aside in self-defense.
[04:47] <abentley> Though it could be argued that CVS has only one file in each commit...
[04:47] <fullermd> Well, yeah.  But converting it that way would be a good way to ensure no project ever gets far enough to ask bzr the question...
[05:08] <igc> bbiab
[05:34] <abentley> lifeless: Okay, I looked at the bisect_multi_bytes code and it kinda made sense.  What's wrong with it?
[05:34] <lifeless> abentley: it's slow
[05:35] <lifeless> so there are two separate sets of bisection occurring
[05:35] <abentley> But the algorithm is suitable?
[05:35] <lifeless> abentley: one is down at the disk level where we are finding the location of the key and reading its data (and data adjacent to it).
[05:35] <lifeless> abentley: this bisection is completely fine AFAICT, reading the right data, and not slow.
[05:35] <lifeless> abentley: the second set of bisection is occurring in memory
[05:36] <lifeless> where we have a list of ranges that have been parsed
[05:36] <lifeless> in a file like |-----------------------------|
[05:36] <lifeless> we mark the bits we've parsed:
[05:36] <lifeless> |+++----------+++-----+++--+--+++-------|
[05:36] <lifeless> (for instance) - that's a file after looking up one key about 2/3 of the way up the file
[05:37] <lifeless> --lsprof suggests that we're examining that map of parsed regions expensively
[05:37] <lifeless> it may be that the map needs to be represented in memory in some better fashion
[05:38] <lifeless> or just that the way bisect is used/how the results pan out can be improved
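One plausible in-memory shape for that map of parsed regions is a sorted list of (start, end) byte ranges queried with bisect; this is only a sketch of the idea, not bzrlib's actual _parsed_key_map code:

```python
import bisect


class ParsedRegions:
    """Track which byte ranges of an index file have been parsed."""

    def __init__(self):
        self._ranges = []  # sorted, non-overlapping (start, end) tuples

    def add(self, start, end):
        """Record [start, end) as parsed, coalescing adjacent ranges."""
        i = bisect.bisect_left(self._ranges, (start, end))
        self._ranges.insert(i, (start, end))
        merged = []
        for s, e in self._ranges:
            if merged and s <= merged[-1][1]:
                # Overlaps or touches the previous range: extend it.
                merged[-1] = (merged[-1][0], max(merged[-1][1], e))
            else:
                merged.append((s, e))
        self._ranges = merged

    def covers(self, offset):
        """Has the byte at `offset` already been parsed?"""
        i = bisect.bisect_right(self._ranges, (offset, float('inf'))) - 1
        return i >= 0 and self._ranges[i][0] <= offset < self._ranges[i][1]
```

Coalescing on insert keeps the list short as lookups fill in the gaps, so each `covers` query stays a single O(log n) bisection rather than a scan of every parsed fragment.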
[05:40] <abentley> One option that might improve performance is if we can turn it into a coroutine.
[05:40] <abentley> Because I suspect it's got keys cached that we need to find on the next lookup.
[05:41] <abentley> Even if not, the position data could be quite useful, I'd expect.
[05:41] <lifeless> if by key you mean the (file_id, revision_id) tuple stuff - that should not trigger a lookup in the parsed regions, unless the key is not fully parsed
[05:42] <lifeless> remember that a key on disk is [key, present_flag, references(pointers), byte_value]
[05:42] <lifeless> so if I read the content for key X, I need to issue a second readv request to resolve its references into strings
[05:42] <abentley> I mean that when I'm building a list of dependencies, I have to call GraphIndex.iter_entries() multiple times.
[05:42] <lifeless> right
[05:42] <abentley> And I assume there's no caching.
[05:42] <lifeless> there is caching
[05:43] <lifeless> iter_entries is slow because of the in memory data structure that represents what parts of the file we have parsed
[05:44] <abentley> Where is the caching?
[05:44] <abentley> self.bisect_nodes?
[05:44] <lifeless> self._nodes_by_reference is part of it
[05:45] <lifeless> sorry
[05:45] <lifeless> _keys_by_offset
[05:45] <lifeless> that maps a pointer to a parsed key
[05:45] <lifeless> self._parsed_key_map is the bitmap of the parts of the file we have cached
[05:47] <abentley> Hmm.  Would we want some kind of tree structure?
[05:48] <lifeless> I don't know :). I'm paging this in as we speak.
[05:48] <lifeless> mainly I want to ensure you see the bit that is a problem (in memory data structure) vs the bit that is ok (the actual IO's we perform)
[05:48] <lifeless> it may be the structure is ok but we don't use it well
[05:48] <lifeless> or it may be that the structure is not ok
[05:48] <abentley> I do see that.
[05:49] <lifeless> cool
[05:55]  * abentley is off to sleep
[05:59] <lifeless> night
[07:09] <mrevell> Hello bzrrrrr!
[07:23] <poolie> hello matthew
[07:41] <bob2> "bzr: ERROR: bzrlib.errors.BzrCheckError: Internal check failed: file u'/etc/resolv.conf' entered as kind 'file' id 'resolv.conf-20071108133634-j0pcs1k03mehwsbd-106', now of kind 'symlink'"
[07:42] <bob2> what's the rationale for having an error for that instead of just versioning the change?
[08:20] <ubotu> New bug: #189182 in bzr-hg "unexpected keyword argument 'find_ghosts' in hg-0.9.5" [Undecided,New] https://launchpad.net/bugs/189182
[08:51] <poolie> bob2, you shouldn't be seeing that, we're meant to fix it at another level
[08:51] <poolie> unless you're using a very old bzr?
[08:55] <lifeless> gotta be old bzr I would have thought
[09:15] <xinity_mbp> hi all
[09:18] <xinity_mbp> can anyone help? i have bazaar installed on my laptop and my desktop; i tried to push the repo on my laptop to my desktop bazaar repo using a bzr+ssh URL, but it fails. any clues ?
[09:23] <rolly> .bzr.log is a good place to look for clues
[09:24] <xinity_mbp> on both sides rolly ?
[09:25] <rolly> on the remote side, sure. If you are successfully establishing an ssh connection
[09:26] <xinity_mbp> let's see if this file exists ;)
[09:26] <rolly> "but i fail" doesn't say much about your problem, so I can't say
[09:28] <xinity_mbp> the ssh connection succeeds, but the bzr smart server says something about permissions
[09:29] <xinity_mbp> bzr push bzr+ssh://username@myhostname/Projet, error : Generic bzr smart protocol error: Permission denied: "/Projet": [Errno 13] Permission denied: '/Projet'
[09:29] <xinity_mbp> the Projet dir is in my home on the desktop
[09:30] <xinity_mbp> and when i take a look at /var/log/secure, i can see the successful ssh connection
[09:31] <weigon> xinity_mbp: /Projet is at the root of your filesystem - are you logging in as root ?
[09:31] <xinity_mbp> nop weigon
[09:32] <xinity_mbp> the Projet dir is in ~username dir
[09:32] <xinity_mbp> which means ~username/Projet
[09:32] <xinity_mbp> i found this on .bzr.log
[09:32] <weigon> the "permission denied" smells like it tries to access /Projet instead of ~username/Projet
[09:33] <rolly> you need to put the full path in there
[09:33] <rolly> bzr+ssh://username@myhostname/home/username/Projet or whatever it may be
[09:33] <xinity_mbp> http://cl1p.net/xinity_bzr/
[09:34] <xinity_mbp> ah better now :)
[09:35] <rolly> I love bzr, even when it crashes :D It's so easy to tell what's wrong
[09:37] <yacc> rolly: what you love, in practice, is Python. Most Python programs share this attribute: "Easy to debug" ;)
[09:39] <rolly> yeah it's true
[09:39] <xinity_mbp> seems to be working , here is my last log :
[09:39] <rolly> I have 0 python experience and I was able to patch a bzr module
[09:40] <yacc> rolly: And now you need to be a self-employed contractor for Germany's biggest mail portal, and you can charge premium prices for patching (complex and hard) Python modules without understanding them :-P
[09:42] <rolly> Well, I can certainly provide the lack of understanding
[09:43] <yacc> Oh, sorry, that was a completely inappropriate association on my part.
[09:44] <yacc> But this guy managed to sit around for more than 2 months, providing exactly 3 one-line changes replacing constants, as a contractor. That's one-upmanship.
[09:44] <rolly> those must have been some pretty important 3 lines :p
[09:45] <rolly> (for him to charge premium)
[09:45] <spiv> xinity_mbp: I think a newer client version would have reported that error properly.
[09:45] <rolly> anyways, past my bedtime. Goodnight all
[09:46] <spiv> (or perhaps a newer client *and* server version)
[09:46] <xinity_mbp> Using fetch logic to copy between RemoteRepository(bzr+ssh://xinity@myhostname/home/xinity/Projet/.bzr/)(remote) and KnitPackRepository('file:///Volumes/Home/Users/xinity/Documents/Exploitation/Projet/.bzr/repository/')
[09:46] <xinity_mbp> here is the last log i have
[10:05] <xinity_mbp> ok seems to be working, a last thing i didn't catch
[10:06] <xinity_mbp> bzr branch Exploitation bzr+ssh://xinity@myhostname/home/xinity/Projet/Exploitation sends the .bzr dir, but not all the files - did i miss something ?
[10:07] <spiv> That's expected.  "bzr branch" only builds working trees for local branches.
[10:07] <spiv> But if you're just sharing the branch with other people, that doesn't matter.  You can "bzr branch" and so on with just the .bzr directory.
[10:12] <xinity_mbp> ok so how to pull the whole tree to the remote instance ?
[10:13] <cvv> dato:Hi!
[10:14] <dato> yes?
[10:15] <cvv> I tried to use your VimEditorIntegration plugin and found out that it does not work on Vim 7.1
[10:16] <dato> cvv: showing the diff no longer works with recent bzrs. maybe you want to try `bzr ci --show-diff`
[10:18] <cvv> What do you mean? bzr diff works perfectly in bzr 1.0
[10:19] <dato> cvv: what does not work for you in 7.1?
[10:20] <cvv> I do not see the diff. both panels show the same text, but `bzr diff` reports more differences
[10:21] <dato> cvv: try running `bzr commit --show-diff`
[10:22] <cvv> bzr: ERROR: no such option: --show-diff
[10:23] <dato> ok
[10:23] <dato> then it's not that
[10:23] <dato> well
[10:23] <dato> er, probably is
[10:23] <cvv> I never used the --show-diff option in the past
[10:23] <dato> it's a bit new
[10:24] <Qhestion> dato: "bzr diff"
[10:24] <dato> cvv: so, if you do `bzr commit`, and the editor starts, and you go to another terminal, and you do `bzr diff` there, do you get an error about the repository being locked?
[10:24] <Qhestion> dato: you should not.
[10:25] <dato> Qhestion: of course you should
[10:25] <Qhestion> no.
[10:25] <Qhestion> diff is a readonly task, commit is a readwrite task.
[10:25] <Qhestion> but there is a handling for that
[10:25] <dato> Qhestion: sorry, but the facts support me
[10:26] <dato> (unless the behavior was changed in bzr.dev in the last two days)
[10:26] <Qhestion> ok right
[10:26] <dato> you can't diff while committing
[10:26] <Qhestion> ok you are right.
[10:29] <cvv> bzr diff
[10:29] <cvv> bzr: ERROR: Could not acquire lock [Errno 11] Resource temporarily unavailable
[10:30] <dato> cvv: right. you need bzr 0.91 or newer to be able to see the diff in the editor, I'm afraid.
[10:30] <Coke> I just commited files, where were they commited to? Is everything contained in the same project directory?
[10:31] <spiv> Coke: try "bzr info"
[10:32] <cvv> Ok. I'll upgrade to bzr 1.0 now
[10:32] <Coke> spiv: so everything is a branch?
[10:33] <spiv> Coke: I'm not sure what you mean by "everything" :)
[10:33] <Coke> spiv: every repository
[10:33] <spiv> No, in bzr we use "repository" to mean something different to a "branch".
[10:33] <Coke> spiv: what would be considered a "local copy"
[10:34] <spiv> A repository is where the revisions of one or more branches are stored.
[10:34] <spiv> (Often a repository is co-located with a branch, and so holds just data for that branch)
[10:34] <Coke> spiv: ah, if we are several networked developers working on the same project we have a central repository and we each have our own private branch?
[10:35] <dato> cvv: good. in that case, you just use `bzr commit --show-diff`
[10:35] <spiv> Well, it's usually easiest to ignore repostories.
[10:35] <Coke> spiv: good.
[10:35] <spiv> The things you work with are branches most of the time.
[10:35] <spiv> (Or perhaps a checkout of a branch)
[10:36] <Coke> spiv: so, each developer has a branch and then there's a branch at our project server too?
[10:36] <spiv> So when you do "bzr commit" it creates a new revision on that branch.
[10:37] <spiv> You can work like that.  You can also have just one branch, and have each developer just have a checkout of that branch.
[10:37] <spiv> (i.e. very similar to how you'd use SVN or CVS)
[10:38] <Coke> spiv: ok. what's the difference between a checkout and a branch?
[10:38] <spiv> A checkout is also called a "working tree".
[10:38] <spiv> So it's the files on disk that you work with.
[10:38] <spiv> A branch is a line of development.  A series of revisions.
[10:39] <Coke> spiv: ok, then I guess it works sort of the same way like SVN in that respect
[10:39] <xinity_mbp> my oh my i'm lost :(
[10:39] <spiv> So if you have a checkout of a branch, and someone else commits to the branch, then your checkout would be out-of-date.
[10:39] <spiv> So you wouldn't be able to commit to that branch with that checkout until you did a "bzr update".
[10:39] <Coke> Argh! There's no bash autocompletion for bzr yet!
[10:40] <Coke> spiv: ah, just tested checking out. ok. it's very similar to cvs
[10:40] <spiv> (Although you would still be able to make "local commits")
[10:40] <spiv> If you're coming from cvs or svn, using checkouts is a pretty good way to ease into bzr.
[10:41] <spiv> Most people using bazaar get addicted to making branches though :)
[10:41] <Coke> spiv: yes. it does seem tempting.
[10:41] <spiv> Often a good way of working is to make a new branch for every distinct feature you're developing.
[10:42] <Coke> spiv: different thinking
[10:42] <Coke> spiv: branching and merging is a pain in the ass with CVS and SVN
[10:42] <spiv> Exactly.
[10:48] <xinity_mbp> sounds weird, i followed the pdf doc to build a central repo
[10:48] <xinity_mbp> when i do a bzr ls bzr+ssh.... it shows a list of files
[10:49] <xinity_mbp> but when i connect remotely, i can't see those files
[10:49] <Coke> spiv: if I work on my branch and change files, can I revert or is the branch "hot"?
[10:50] <Odd_Bloke> xinity_mbp: 'bzr ls' lists files that are versioned, not what is actually on the filesystem.
[10:50] <Coke> spiv: lemme rephrase that. :) are the changes stored in each branch?
[10:50] <spiv> Coke: yes, each branch keeps a full history
[10:50] <poolie_> this is very confusing for xchat having someone else called mbp :)
[10:51] <Coke> spiv: so I can relax and work on my branch (which happens to be the branch root location) and not worry about not being able to revert certain changes?
[10:51] <spiv> Coke: if you have the bzr-gtk plugin installed, "bzr viz" does a great job of showing you the history in a pretty way
[10:52] <Coke> spiv: I hate pretty!
[10:52] <Coke> :)
[10:52] <xinity_mbp> Odd_Bloke: how to send those files to the remote repo ?
[10:52] <spiv> xinity_mbp: you could install the "push-and-update" plugin
[10:53] <xinity_mbp> spiv: let's try then ;)
[10:53] <spiv> Coke: I don't quite understand your concern.  Perhaps you should clarify what you mean by "revert"?
[10:53] <spiv> Coke: There's a "bzr revert" command, but I'm not sure if it has the same meaning as what you want or not.
[10:53] <Coke> spiv: I edit file and save, oops! made a mistake, can't remember what I've changed, go back to last commited revision
[10:54] <Coke> spiv: no, more like "revert" in the english sense
[10:54] <Coke> spiv: in svn I'd just remove the changed file and run svn update
[10:54] <spiv> "bzr revert the_file" will revert it to the last committed revision.
[10:54] <Coke> spiv: is it doable for the entire branch, including subdirectories?
[10:54] <spiv> (and "bzr revert" will revert the entire working tree)
[10:54] <Coke> spiv: ah
[10:54] <Coke> Well... I'm all good now.
[10:55] <Odd_Bloke> xinity_mbp: The files are already there, in version control.  The only reason they aren't on the filesystem is that they haven't been checked out.
[10:55] <Coke> There's just one more thing, where can I get free project hosting that use bzr instead of svn?
[10:55] <spiv> Coke: revert changes the working tree, not the branch, btw.
[10:55] <Odd_Bloke> Coke: Launchpad.
[10:55] <Coke> spiv: my branch is my working tree. :)
[10:56] <xinity_mbp> Odd_Bloke: sorry for my noob attitude , should i do a checkout on the remote side ?
[10:56] <Odd_Bloke> xinity_mbp: Yeah.  That's what the push-and-update plugin does for you. :)
[10:56] <Coke> Odd_Bloke: thanks.
[10:56] <xinity_mbp> Odd_Bloke: cool ;)
[10:57] <Coke> Odd_Bloke: I don't get it, is that an URL?
[10:57] <Coke> ah, .net, of course. It's a network, not an .org.
[10:57] <luks> Coke: you can host bzr branches on anything that supports sftp and http
[10:58] <Odd_Bloke> Coke: Your branch and your working tree are conceptually different.  The branch is to do with history, whereas your working tree is just whatever files happen to be in the directory you're using bzr to manage.
[10:58] <luks> (or ssh)
[10:58] <Coke> luks: that's what I will do with the projects at work, but I have a few things hosted on SF.net
[10:58] <Coke> luks: want to switch
[11:05] <xinity_mbp> Odd_Bloke: so after that if i use bzr push it should update the remote repo right ?
[11:07] <Odd_Bloke> xinity_mbp: Depends what you mean by 'that'. :p
[11:08] <xinity_mbp> Odd_Bloke: i've installed the push and update plugin, bzr plugins finds it
[11:08] <Odd_Bloke> xinity_mbp: I've never used it myself, so I don't really know.
[11:08] <Odd_Bloke> Try and find out. :)
[11:09] <xinity_mbp> Odd_Bloke: but when i try to do a bzr push it says something like : This transport does not update the working tree of: .... :(
[11:11] <Odd_Bloke> xinity_mbp: OK, it looks like it adds a push-and-update command, so try 'bzr help commands' and look for that.
[11:12] <xinity_mbp> Odd_Bloke: nothing about push and update :(
[11:15] <Odd_Bloke> xinity_mbp: I've just installed it locally and I have a push-and-update command...
[11:15] <xinity_mbp> Odd_Bloke: use bzr help commands?
[11:15] <Odd_Bloke> xinity_mbp: Both there and when I try to run it.
[11:18] <xinity_mbp> Odd_Bloke: ok bzr help push_and_update says : Helper functions for pushing and updating a remote tree.
[11:18] <xinity_mbp> Odd_Bloke: but nothing about how to use it :(
[11:19] <Odd_Bloke> xinity_mbp: Use it exactly as you would push.
[11:20] <xinity_mbp> Odd_Bloke: i did but it still says :
[11:23] <xinity_mbp> Odd_Bloke: bzr commit works perfectly, but bzr push says : This transport does not update the working tree of:
[11:24] <spiv> xinity_mbp: the way the plugin works, I suspect it'd still print that message
[11:24] <spiv> xinity_mbp: but it may have updated the working tree anyway
[11:24] <xinity_mbp> Odd_Bloke: but the new file i've added locally isn't in the remote tree, i still need to do a checkout on the remote side
[11:25] <xinity_mbp> Odd_Bloke: bzr update, sorry
[11:39] <luks> xinity_mbp: you need to use push-and-update, not push
[11:40] <luks> (I think)
[11:40] <dato> luks: not anymore
[11:40] <luks> oh
[11:40] <dato> push-and-update could use a README file
[11:46] <mtaylor> any chance anybody has an special tricks for getting pylint to understand lazy_import?
[11:47] <spiv> mtaylor: s/^lazy_import/#\0/  and  s/^""")$/#\0/  ?
[11:48] <mtaylor> yeah... there's that.
[11:48] <mtaylor> but then I wind up forgetting to uncomment it before I commit
[12:18] <mtaylor> anybody ever touch the compiler module?
[12:20] <Parker-> once... years ago... after that the guys in white gave me my own room with padding :P
[12:21] <Odd_Bloke> mtaylor: I've played with it a little, but nothing especially in-depth.
[12:21] <mtaylor> Odd_Bloke: I'm trying to inject something into the tree...
[12:21] <xinity_mbp> Odd_Bloke: got it , it works now
[12:21] <xinity_mbp> ;)
[12:22] <mtaylor> it's causing me to want to lay in a padded room :)
[12:22] <Parker-> :P
[12:22] <Odd_Bloke> mtaylor: Way over my head, sorry. :)
[12:22] <Odd_Bloke> I was just trying to get pretty representations of stuff.
[12:22] <Odd_Bloke> xinity_mbp: Awesome. :)
[12:22] <xinity_mbp> yeah!
[12:25] <xinity_mbp> now i should be sure of what to do next time;)
[12:26] <Parker-> have to say that bzr is great.. played two days and already made my own bzr serve command that uses paramiko to make a custom ssh server and uses public keys to authenticate.. :P
[12:29] <xinity_mbp> Parker-: terrific !
[12:30] <Parker-> still in proof of concept state...
[12:30] <luks> could be very useful plugin for windows
[12:30] <Parker-> it works with windows :P
[12:30] <Parker-> https://launchpad.net/bzr-auth
[12:31] <Parker-> now uses ssh-agent (or Pageant in windows) to get keys
[12:38] <poolie_> wow, that's great Parker
[12:38] <poolie_> you should really post to bazaar@lists.ubuntu.com about it
[12:39] <Parker-> er... I'm a little shy boy
[12:39] <Parker-> :D
[12:40] <Parker-> hmmhh.. is bazaar@lists.canonical.com same?
[12:41] <dato> yes
[12:46] <ubotu> New bug: #189227 in bzr-svn "Strip a trailing \n when submitting the commit message to SVN" [Undecided,New] https://launchpad.net/bugs/189227
[12:47] <poolie_> yes
[12:54] <Parker-> have to fix it a little bit before the announcement
[13:34] <bob2> poolie: ah, that'd be the problem
[13:34] <bob2> was just surprised, since tla handled it fine ;)
[14:01] <ubotu> New bug: #189246 in bzr "UnicodeDecodeError exception in merge" [Undecided,New] https://launchpad.net/bugs/189246
[14:11] <wica> HI, I have bzr-1.1 on a gentoo machine. I have problem to commit my code. Get a ".bzr/branch/revision-history.tmp.1202219907.956677914.25812.1333525853": [Errno 13] Permission denied"
[14:12] <wica> The owner of bzr root, and the group is bzr.
[14:12] <Odd_Bloke> wica: This suggests that you've created the branch with a different user and haven't set the permissions on it correctly.
[14:12] <wica> My normal user is in the bzr group
[14:12] <wica> and all my dirs have 775
[14:13] <wica> Odd_Bloke: Yep, that I understand. But the permissions look ok.
[14:13] <bob2> ls -ld ".bzr/branch/" .bzr/branch/revision-history.tmp.1202219907.956677914.25812.1333525853
[14:13] <wica> That file is not there
[14:14] <wica> the normal user, in the cli, can create files there
[14:14] <Odd_Bloke> wica: Is the above the full message that bzr gives?
[14:14] <wica>  bzr: ERROR: Permission denied: "/var/bzr/iceshop/trunk/.bzr/branch/revision-history.tmp.1202219907.956677914.25812.1333525853": [Errno 13] Permission denied
[14:14] <wica> this is the full msg
[14:16] <wica> when I change the owner of /var/bzr/ to my normal user. I can commit
[14:16] <Odd_Bloke> OK, does the 'bzr' group have the execute bit set on all directories leading up to that path?
[14:17] <wica> I did a chmod -R 1775
[14:17] <wica> So it should be
[14:22] <Odd_Bloke> OK, so the use of the sticky bit might cause problems if you're not the owner of /var/bzr/iceshop/trunk/.bzr/branch
[14:22] <wica> will try chmod -R -x
[14:22] <Odd_Bloke> Why?
[14:23] <wica> -x will remove the sticky bit
[14:23] <wica> just to see what happens
[14:24] <Odd_Bloke> -x will also stop you being able to go into directories...
[14:30] <jrydberg_> has anyone experimented with gittorrent-like things for bzr?
[14:32] <abentley> wica: x is the execute bit, not the sticky bit.  The sticky bit is t.
[14:33] <wica> abentley: Yes, that is true. don't know why I typed +x
[14:33] <Parker-> hmmhh... what then is s bit?
[14:33] <wica> But the problem is found
[14:34] <wica> The user with the problem, has created the problem
[14:34] <wica> :/
[14:34] <abentley> Parker-: you are probably thinking of SUID or SGID.
[14:34] <jelmer> jrydberg_: haven't seen anything like that yet
[14:34] <Parker-> ah ok
[14:34] <Parker-> wica, usually the user is THE problem :)
[14:35] <abentley> People seem to confuse SGID with the sticky bit a lot.
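The three special bits being conflated in this exchange can be inspected from Python's `stat` module. The shared-repository recipe sketched in the comment (setgid on group-writable directories so new files inherit the group) is the general Unix convention, not something bzr-specific; names here are illustrative.

```python
import os
import stat
import tempfile

# The three "special" permission bits confused above:
SPECIAL_BITS = {
    'setuid (s on user)':  stat.S_ISUID,  # 04000
    'setgid (s on group)': stat.S_ISGID,  # 02000
    'sticky (t)':          stat.S_ISVTX,  # 01000
}

def describe_mode(path):
    """Return the names of the special bits set on *path*."""
    mode = os.stat(path).st_mode
    return [name for name, bit in SPECIAL_BITS.items() if mode & bit]

# For a shared tree like /var/bzr the usual recipe is setgid on the
# directories plus group rwx, so files created by any group member get
# the shared group.  chmod -R 1775 sets the *sticky* bit (01000)
# instead, which only restricts deletion and doesn't help here.
d = tempfile.mkdtemp()
os.chmod(d, 0o2775)           # rwxrwsr-x: group-writable with setgid
print(describe_mode(d))
```

Note that `chmod +x`/`-x` touches none of these bits; it only toggles the execute/search bits, which is why removing it stops directory traversal.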
[14:35] <Parker-> jeh
[14:36] <Parker-> hmhh.. have to think what to do with the bza-auth
[14:39] <Parker-> bzr-auth even
[16:47] <ubotu> New bug: #189282 in bzr "Internal error on branch of awn-core (httplib.ResponseNotReady)" [Undecided,New] https://launchpad.net/bugs/189282
[17:21] <ubotu> New bug: #189300 in bzr "bzr push to saved location fails" [Undecided,New] https://launchpad.net/bugs/189300
[19:55] <alefteris> im getting this error when i'm trying to update or pull, any ideas? bzr: ERROR: Cannot lock LockDir(http://bazaar.launchpad.net/...bzr/branch/lock): Transport operation not possible: http does not support mkdir()
[19:55] <alefteris> Bazaar (bzr) 1.0.0.candidate.1
[20:25] <Parker-> is it binded?
[20:28] <alefteris> dont really know: this is what bzr info gives me: checkout of branch: http://... , parent branch: http://...
[20:28] <alefteris> is it?
[20:29] <Parker-> hmmhh.. did you use 'bzr checkout' or 'bzr branch'
[20:30] <beuno> alefteris, yes, it looks like it's binded
[20:30] <beuno> you can just do "bzr unbind"
[20:30] <beuno> although it seems the branch is locked
[20:31] <beuno> you might want to do bzr break-lock on it if you have write access
[20:31] <Parker-> because isn't http:// access read-only?
[20:31] <beuno> (use bzr+ssh instead of http)
[20:32] <alefteris> so what do i have to do to get the latest revision from launchpad? mind that the branch is on a remote server, where i haven't got my ssh keys
[20:33] <Parker-> bzr branch http://...
[20:33] <beuno> alefteris, just do "bzr unbind"
[20:33] <beuno> and then
[20:33] <beuno> bzr pull http://...
[20:33] <beuno> which is probably faster than re-branching
[20:35] <alefteris> ok you guys rock, thanks a lot :)
[20:36] <beuno> alefteris, :D
[20:36] <Parker-> doh... but I'm soft meatbag :(
[20:37] <beuno> Parker-, re-branching works too, just a bit slower
[20:37] <beuno> it's a matter of sticking around long enough here
[20:37] <beuno> things tend to repeat themselves a lot
[20:37] <beuno> we should have a !explain_this bot here at some point I think
[20:38] <Parker-> beuno, yeah I know.. :)
[20:39] <Parker-> maybe should put some kind of notice text if user uses checkout + http
[20:39] <beuno> Parker-, right, seems reasonable, like when pushing non-locally, with the "this does not update the working tree" notice
[20:40] <Parker-> and I think there should be better error information when comes that "cannot mkdir"
[20:40] <Parker-> or something
[20:41] <beuno> Parker-, absolutely. If you're subscribed to the mailing list, you might want to propose it (or even cook up a patch if you're up to it)
[20:41] <alefteris> beuno, maybe a more descriptive error message would help also. i searched the web as well for the error, but found only bug reports and no clear solution about what i should do about it
[20:41] <beuno> alefteris, absolutely
[20:41] <beuno> I'll file a bug for it
[20:42] <alefteris> thank you
[20:42] <beuno> that way it can be tracked
[20:42] <beuno> Parker-, ^
[20:42] <Parker-> roger :)
[20:42] <Parker-> do it, because I'm a newbie here :P
[20:42] <Parker-> started to use bzr two days ago :)
[20:43] <beuno> Parker-, will do, and welcome!
[20:44] <Parker-> thanks... today published first proof of concept plugin to launchpad :P
[20:44] <beuno> Parker-, that was you?  cool!  I have that thread starred to look into later on. We are really in need of something like that. I'll try and peek into the code and see if I can help you out with that
[20:44] <tacone> hello. how to enable nautilus integration ?
[20:46] <Parker-> beuno, heh... thanks :)
[20:46] <beuno> tacone, you have to install bzr-gtk
[20:47] <tacone> I did.
[20:47] <tacone> I noticed from the topic 1.1 is out. I have 0.9
[20:47] <tacone> 1.1 will enable naut-int by default ?
[20:47] <beuno> tacone, ah, just saw this "(Note that NautilusIntegration is disabled at the moment, friction in the interface between nautilus and bzr resulted in substantial performance issues)."
[20:48] <beuno> jelmer, is that still true?
[20:48] <lifeless> moin
[20:48] <beuno> hey lifeless
[20:48] <beuno> tacone, bzr doesn't come with any GUI tools by default, but I do strongly recommend upgrading
[20:48] <Parker-> beuno, and yeah... help would be great..
[20:49] <tacone> I saw it too. but maybe it's not updated.
[20:49] <beuno> tacone, you can download the latest from: https://launchpad.net/~bzr/+archive if you're on Ubuntu
[20:50] <tacone> beuno: I just did (5 seconds :-))
[20:50] <Parker-> hmmhh.. ubuntu-backports has 1.0
[20:50] <tacone> nothing seems to change
[20:50] <tacone> bzr --version gives 1.1.0 now
[20:50] <tacone> should I try to restart nautilus maybe.
[20:50] <beuno> tacone, it might not be enabled
[20:50] <beuno> let me give it a try
[20:52] <tacone> thanks.
[20:52] <tacone> I will restart x in the meantime
[20:56] <tacone> beuno: I restarted the whole pc but I see no naut-int.
[20:56] <beuno> tacone, did you follow the installations instructions in the README?
[20:57] <beuno> you have to install python-nautilus
[20:57] <jelmer> beuno: Nobody disabled it afaik
[20:57] <beuno> copy a file into ~/.nautilus/python-extensions
[20:57] <tacone> wow. where's readme ? :-D
[20:57] <beuno> tacone, and 0.93 doesn't seem to work with 1.1
[20:57] <beuno> branching trunk to see if it does
[20:57] <beuno> but of course, jelmer would know much more about this
[20:58] <lifeless> abentley: ping
[20:58] <tacone> so should I downgrade bzr ?
[20:58] <abentley> lifeless: pong
[20:58] <lifeless> abentley: I think I have a clue about improving the in memory index stuff
[20:59] <beuno> tacone, hold on a bit, lemme triple check how everything works, and I'll walk you through it
[20:59] <lifeless> abentley: I'm going to have a stab at it today
[20:59] <abentley> Have fun.  I'll be busy anyhow.
[20:59] <tacone> thanks beuno, waiting for you here :)
[20:59] <lifeless> abentley: I'm thinking you may want to merge-and-test-and-revert occasionally once I have something up
[21:00] <abentley> lifeless: If you want to play with my fast-iter-files-bytes branch, it's here: http://code.aaronbentley.com/bzr/bzrrepo/fast-iter-files-bytes
[21:01] <abentley> I do not consider it merge worthy, so don't panic about how it works.
[21:01] <lifeless> k
[21:09] <beuno> tacone, got everything working, but I can't seem to get nautilus to do stuff...
[21:10] <tacone> nice :-D
[21:10] <tacone> I will live without
[21:10] <tacone> thank you anyway. so kind.
[21:10] <beuno> tacone, it might be worth a try to follow the readme instructions
[21:10] <beuno> and restart X
[21:11] <beuno> which I can't do at this moment
[21:11] <tacone> where do I find readme ?
[21:11] <beuno> tacone, in the plugin directory
[21:11] <tacone> ok
[21:11] <beuno> download 0.93
[21:11] <tacone> ok.
[21:11] <tacone> thank you very much. have a good evening
[21:12] <beuno> tacone, thanks, you too
[21:30] <ubotu> New bug: #189390 in bzr "Warn users when doing checkouts with read-only transports" [Undecided,New] https://launchpad.net/bugs/189390
[21:47] <jelmer_> beuno: I'm not sure who added that bit to the wiki for NautilusBzr
[21:48] <beuno> jelmer_, I couldn't get it to work though. Does it require restarting X?
[21:49] <lifeless> jelmer_: well the code is disabled
[21:49] <lifeless> jelmer_: there is an XXX about it; or was recently
[21:49] <jelmer_> ah, somebody has commented it out in the setup
[21:51] <jelmer_> beuno: It requires you to restart nautilus
[21:51] <jelmer_> beuno: that should be all
[21:51] <jelmer_> you also need to have python-nautilus and libnautilus-dev (or something) installed
[21:53] <jam> poolie: ping
[21:53] <jam> lifeless: ping
[21:53] <lifeless> jam: gnip
[21:53] <beuno> jelmer_, still doesn't work  :/    I'll try to debug when I get home
[21:54] <jam> lifeless: I've been inspecting why "bzr annotate" is slow, and I'm finding some interesting points.
[21:54] <jam> I thought I would discuss it a bit before I really got down and started coding
[21:54] <lifeless> cool
[21:54] <jam> Basically, the #1 thing that pops up, is that for 700 revisions, we call _lookup_keys_via_location about 70,000 times
[21:55] <jam> It seems that all of our normal Knit calls have to do a bisect lookup
[21:55] <jam> so every time we do "get_options()", "get_method()", etc.
[21:55] <jam> And "get_parents()" has to do 2 lookups
[21:55] <jam> because it checks to see if the parents are ghosts
[21:55] <lifeless> yes
[21:56] <lifeless> so we should not be using get_parents except when it matters
[21:56] <jam> get_delta() ends up doing about 5 lookups, because it uses the get_options, get_methods, and get-parents
[21:56] <lifeless> and for decompression trees it does not matter
[21:56] <lifeless> a grab-it-all-at-once api would be better for knit text extraction
[21:56] <lifeless> also the index bit is slow right now, I'm looking at that today, had some ideas come to me after explaining it to aaron yesterday
[21:57] <jam> well, as to that, we don't really need the fulltexts to do reannotate
[21:57] <jam> if we are going to be doing it from the line deltas anyway
[21:57] <jam> we can just build up annotated fulltexts on the fly
[21:57] <jam> rather than building up fulltexts
[21:57] <jam> grabbing the deltas
[21:57] <jam> and then combining the two into annotated fulltexts
[21:57] <abentley> jam: The line deltas aren't accurate.
[21:57] <jam> abentley: but you are using them
[21:57] <jam> abentley: "annotate_knit"
[21:57] <abentley> They don't have correct information about the last line.
[21:58] <abentley> I believe that operation uses both the fulltexts and the line deltas.
[21:58] <jam> abentley: you use the matching blocks from the line deltas, and the fulltexts in reannotate
[21:59] <jam> but you use the matching blocks assuming they are accurate
[21:59] <jam> Is the only problem the "eol" marker?
[22:00] <lifeless> jam: you're off on a tangent now, I meant that the in memory management of the pack disk index disk data can be improved; this doesn't affect fulltexts/not
[22:00] <abentley> jam: I believe so.  Got work stuff.
[22:02] <jam> lifeless: well, I'm saying you don't need to extract all the knits into fulltexts
[22:02] <jam> and was trying to figure out a better way to stream out what I did need
[22:02] <lifeless> jam: sure; I didn't think we did. Anyhow its the exact same set of index calls needed
[22:02] <jam> It also makes me feel like we should be caching in Knitgraph a bit more
[22:03] <lifeless> we have to access the component, delta or fulltext, from the index layer
[22:03] <lifeless> and we should access each index record once.
[22:12] <abentley> jam: if you look at KnitContent.get_line_delta_blocks, you'll see it does not blindly trust the matching_blocks-- it uses the fulltexts to ensure the delta of the last line is correct.
[22:13] <abentley> This is probably the biggest reason why I hate knit deltas.
[22:13] <jam> abentley: I do see that, though I wonder if we could just handle the eol on our own and not have to build up the fulltext
[22:13] <jam> but thanks for pointing it out
[22:14] <jam> but yeah, the knit way of storing things is to record whether there is an eol
[22:14] <abentley> jam: You'll have to be careful if you take that approach.
[22:14] <jam> then always force a final eor
[22:14] <jam> eol
[22:14] <abentley> for example, the last line could match some other line because of the lack of an EOL.
[22:15] <abentley> And of course, there may also be real differences.
[22:15] <jml> Is the RC out yet?
[22:15] <lifeless> jam: the encoding misses data
[22:15] <lifeless> jam: it does not indicate if the delta affects the last line
[22:15] <abentley> other than EOL
[22:16] <jam> true enough
[22:16] <jam> though if you are doing annotated fulltexts anyway, you'll still have a fulltext to work with
[22:16] <jam> you just don't have to have 2 of them
[22:17] <abentley> TBH, I'm not sure whether you can calculate it without using the fulltexts or doing equivalent work.
[22:17] <lifeless> jam: depends on the deltas I think
[22:17] <jam> anyway, the biggest expense is all around the KnitGraphIndex at the moment
[22:17] <jam> so it is the place to focus
[22:17] <jam> reannotate is about 3/15s
[22:18] <jam> _get_entries is 8.5s/15s
[22:18] <abentley> Because in order to compare against the last line, you may need to hit any delta in the history to find its contents.
[22:18] <abentley> And knit delta format won't tell you how many lines there were in that version.
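abentley's last-line caveat can be seen with a toy example. This sketch uses stdlib `difflib` rather than bzr's internal matcher, purely to illustrate why matching blocks computed over EOL-normalised lines can't be trusted for the final line:

```python
from difflib import SequenceMatcher

# Two versions of a file that differ *only* in whether the final line
# ends with a newline.  Knits normalise stored texts to always end with
# EOL and record the real ending in an option flag, so a line-based
# delta alone cannot tell you whether the last line really matched.
old = "a\nb".splitlines(keepends=True)      # ['a\n', 'b']
new = "a\nb\n".splitlines(keepends=True)    # ['a\n', 'b\n']

blocks = SequenceMatcher(None, old, new).get_matching_blocks()
# Only the first line matches: 'b' != 'b\n' as strings, even though an
# EOL-forced (normalised) comparison would call the texts identical.
matched = sum(b.size for b in blocks)
print(matched)  # 1
```

The same mismatch in reverse is the trap: a delta computed over normalised texts reports the last lines equal, so annotate code has to consult the real fulltexts to verify the final line, exactly as `KnitContent.get_line_delta_blocks` is described doing above.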
[22:19] <jam> Sure, I wasn't planning on shortcutting going into all history yet
[22:19] <jam> just thinking that caching 700 fulltexts in memory was probably not a good idea
[22:19] <abentley> jam: we're doing that, are we?
[22:19] <abentley> Yeah, probably not the best idea.
[22:20] <jam> ancestry = get_ancestry(versions)
[22:20] <jam> lines = get_line_list(ancestry)
[22:20] <jam> (pseudocode)
[22:20] <lifeless> wheeee boom
[22:21] <Parker-> jam, put option to user, if he/she want to use memory caching?
[22:21] <lifeless> annotate an iso, dare you to
[22:21] <abentley> Perhaps an LRU cache would make sense, with reasonable groupings.
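The LRU idea abentley floats here might look like the following minimal sketch. It is illustrative only, not bzrlib's actual cache; the class name, `max_items` default, and API are made up.

```python
from collections import OrderedDict

class LRUCache:
    """Bounded cache that evicts the least recently used entry,
    e.g. for holding a limited number of reconstructed fulltexts
    instead of all 700 at once."""

    def __init__(self, max_items=50):
        self.max_items = max_items
        self._data = OrderedDict()

    def get(self, key, default=None):
        try:
            self._data.move_to_end(key)     # mark as most recently used
            return self._data[key]
        except KeyError:
            return default

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self.max_items:
            self._data.popitem(last=False)  # evict least recently used
```

The "reasonable groupings" part is the hard bit: the cache only pays off if the annotate traversal revisits versions while they are still resident, which depends on visiting the graph in a topological order that keeps live ancestors close together.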
[22:21] <jam> Parker-: well we would need at least a couple copies
[22:21] <lifeless> Parker-: we try to solve problems in such a way that no questions/options are needed to get good performance all around
[22:21] <jam> but we shouldn't ever need all of them at once
[22:21] <jam> and we should be able to build them up as we go
[22:21] <lifeless> options require explanations and understanding of consequences
[22:22] <Parker-> lifeless, yeah, but some users have plenty of memory to spare.. so if they want... they can also boost with memory caching?
[22:22] <abentley> jam: well, this was a quick fix.  You can probably do better if you're willing to reimplement knit building.
[22:22] <lifeless> Parker-: perhaps; but if we can solve it such that the extra memory is irrelevant
[22:22] <lifeless> Parker-: then there is no question to ask
[22:23] <jam> there is the possibility of caching intermediate representations for something like gannotate
[22:23] <abentley> jam: But I'm really surprised that you're not pursuing annotation caching.  Because this annotate code is pretty fast when you're dealing with < 1000 revisions.
[22:23] <jam> but that is certainly a different question
[22:23] <Parker-> I didnt meant to solve this one... but in future or something...
[22:23] <jam> abentley: it is 15s for repository.py which is 700 revisions
[22:23] <jam> I'm not sure if that is "pretty fast"
[22:24] <jam> I'm trying to find the case where it was "9 minutes"
[22:24] <abentley> bzr annotate -r 1000 NEWS was pretty fast.
[22:24] <abentley> But that was on knits.
[22:24] <jam> I have the feeling it is a mixed "too many packs" and too many revisions
[22:24] <lifeless> Parker-: sure; I'm not rejecting your idea, but suggesting we should see if there are solutions that are as good without options
[22:24] <lifeless> jam: how many packs do you have ?
[22:24] <jam> lifeless: here only about 6 or so
[22:25] <lifeless> jam: and whats a histogram of components for them
[22:25] <Parker-> but then.. what do I do with my gigs of ram :(
[22:25] <jam> lifeless: *I* don't have the 9-minute annotate problem
[22:25] <lifeless> Parker-: use it for java
[22:25] <jam> lifeless: it was someone else's error report
[22:25] <Parker-> lifeless, ah yes...
[22:25] <Parker-> forgot java :)
[22:25] <lifeless> jam: right; and interested report would be a histogram of that fileid across packs
[22:28] <abentley> btw, I believe that get_line_lists does not take nearly as much memory as repeated calls to get_lines.
[22:28] <jam> abentley: well lines that are shared between versions will stay shared as in-memory strings
[22:28] <abentley> Right.  Mostly.
[22:29] <abentley> I'm not sure whether that happens when a version has more than one descendant.
[22:29] <abentley> Actually, it's probably unique there, too.
[22:29] <abentley> It would be the list that would have to be copied.
[22:30] <igc> morning
[22:30] <abentley> igc: evening.
[22:30] <igc> hi abentley
[22:30] <lifeless> abentley: yes list copies - thats the code we were talking about saturdayish
[22:30] <abentley> lifeless: indeedy.
[22:31] <abentley> later, gators.
[22:32] <jam> later abentley
[22:32] <jam> have a good evening
[22:32] <jam> statik: ping
[22:34] <lifeless> abentley: jam: bz2 is only slightly better than gzip
[22:34] <lifeless> I thought bzip2 was slower though
[22:35] <foom> it's insanely slower, worst case
[22:35] <jam> it usually is
[22:35] <foom> i had a log file which took 2 hours to compress with bzip and 5 minutes with gzip
[22:35] <jam> and it is generally slow symmetrically
[22:35] <jam> so it is slow to compress and can be slow to decompress
[22:35] <lifeless> so I'm wondering if this change you guys requested is the right thing
[22:35] <lifeless> 45K -> 40K is the difference
[22:35] <jam> lifeless: well, you are the one doing the profilling
[22:36] <jam> bzip2 comes into its own in the 900k range
[22:36] <jam> not really in the 40k range
[22:36] <jam> The big difference being that gzip has a very limited window
[22:36] <jam> something like 16k
[22:36] <jam> while bzip has a much larger window
[22:36] <lifeless> huffman vs simple sliding window
[22:36] <lifeless> I'm going to add size-doubling-every-request to this get_parents thing
[22:37] <lifeless> so that full-history is not linear round trip counts
[22:37] <jam> afaik, LZMA is mostly gzip with a huge window, and can do better than bzip2 on compression, and still keep gzip decompression speeds
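The window-size point can be demonstrated directly with the stdlib `zlib` and `bz2` modules. The data here is synthetic and the sizes illustrative, not bzr's actual blobs: a 60 KB incompressible block repeated at a distance beyond deflate's ~32 KiB back-reference window, which bz2's much larger (100 KiB to 900 KiB) block can still exploit.

```python
import bz2
import random
import zlib

# A pseudo-random (hence individually incompressible) 60 KB blob,
# duplicated with enough filler that the second copy lies outside
# deflate's sliding window.
random.seed(0)
blob = bytes(random.getrandbits(8) for _ in range(60000))
data = blob + b'-' * 1000 + blob

z = len(zlib.compress(data, 9))   # window too small: stores blob ~twice
b = len(bz2.compress(data, 9))    # one block sees both copies
print(z, b)
```

On small inputs with only short-range redundancy (like the 45K vs 40K index blobs mentioned above) the two are close, which is why bz2's worst-case speed penalty may not buy much there.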
[22:38] <lifeless> cute
[22:38] <jam> lifeless: bz2 may not make sense at this level, I'll certainly let you do the tradeoff evaluation
[22:38] <lifeless> must be some fun with window reference pointers
[22:38] <lifeless> well I've done the commit already
[22:38] <lifeless> I know we do get farking huge histories
[22:39] <jam> lifeless: but if the request is designed to buffer X amount of data per round-trip an individual request may be quite limited for all cases
[22:39] <lifeless> X, then 2X, then 4X
[22:39] <lifeless> currently its X
[22:41] <ubotu> New bug: #189419 in bzr "Odd behaviour when adding directory with wrong case on Windows" [Undecided,New] https://launchpad.net/bugs/189419
[22:46] <jam> lifeless: we might consider capping it at Y*X, if only for interactivity purposes
[22:46] <jam> (say 16X or something)
[22:46] <lifeless> I'm seeing 2.4K revisions in a 64K compressed blob
[22:47] <lifeless> our entire history is what, 15K?
[22:47] <lifeless> so thats 2.4 + 4.8 + 9.6 - 3 round trips
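The X, 2X, 4X scheme with jam's proposed Y*X cap might be sketched like this; the function name and defaults are made up for illustration and are not bzr's API.

```python
def batch_sizes(total, start=64, cap_factor=16):
    """Yield per-round-trip request sizes that double each round,
    capped at start * cap_factor (for interactivity), until *total*
    items have been requested.
    """
    size, remaining = start, total
    cap = start * cap_factor
    while remaining > 0:
        step = min(size, remaining)
        yield step
        remaining -= step
        size = min(size * 2, cap)

# 15K revisions starting at X = 2400 per request:
print(list(batch_sizes(15000, start=2400)))  # [2400, 4800, 7800]
```

That reproduces the 2.4K + 4.8K + rest arithmetic above: three round trips for the full 15K-revision history, instead of the linear count a fixed X gives.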
[22:47] <jam> lifeless: revisions.kndx is 1.5MB
[22:48] <lifeless> you have knit repos left?
[22:48] <lifeless> we have to have a chat
[22:48] <jam> lifeless: yep
[22:48] <jam> my development stations are all packs
[22:48] <jam> my public repo is still knits
[22:48] <jam> gzip revisions.kndx is .5MB
[22:48] <jam> bzip2 is about the same
[22:49] <jam> 482KB versus 427KB
[22:52] <lifeless> hi statik`
[22:58] <poolie> hello
[23:00] <jam> hey poolie, phone time, right?
[23:00] <lifeless> poolie: I can still merge stuff for 1.2?
[23:01] <poolie> lifeless, yes
[23:01] <lifeless> IcanHAZcheezeburger
[23:01]  * lifeless breaks bzr.dev network with itself, again
[23:02] <Parker-> heh
[23:20] <ubotu> New bug: #189431 in bzr "pre_commit hook overly expensive" [Undecided,New] https://launchpad.net/bugs/189431
[23:38] <mattimustang> hi, i'm looking at converting from svn to bzr. Does bzr support svn style properties?
[23:39] <lifeless> some yes
[23:40] <lifeless> is this for 3rd party extension, or for file-ending support on windows etc?
[23:40] <lifeless> (that is are you asking about 'generic stuff' or specific features)
[23:41] <foom> it doesn't support keywords or eol-style
[23:41] <lifeless> foom: yet; I have a plan.
[23:42] <foom> does 1.2 make any more great strides in speed?
[23:42] <lifeless> yes network pull on smart server will be way faster
[23:42] <lifeless> and use less memory
[23:42] <lifeless> and make more coffee
[23:42] <foom> how about local operations?
[23:43] <lifeless> branch and checkout are faster
[23:43] <lifeless> annotate too I think
[23:43] <foom> i haven't even gotten to the point where pulling data is a problem yet. :)
[23:43] <lifeless> give me a sample branch and the ops that are slow and I'll fix it
[23:45] <foom> i have a 50krev 1.3G repo, full of proprietary data. I imagine it has the same issues as large public projects, though.
[23:45] <lifeless> we're performing quite adequately on openoffice
[23:45] <lifeless> which is similar in size
[23:46] <lifeless> last time we discussed I think I said words to the effect of 'file bugs, tagged performance'
[23:46] <lifeless> which I'll repeat, I'm happy to explain why this matters if its not obvious
[23:46] <foom> hm, i guess i should grab the ooffice repo and try using it
[23:46] <lifeless> igc can help you there
[23:47] <foom> it'll at least be interesting to see whether there is a marked difference in performance
[23:47] <lifeless> start by filing bugs though
[23:47] <lifeless> command X is slow, I have <---> this much data of the thing its looking at.
[23:48] <lifeless> we'll probably get you to get some stats about your repo without disclosing data to help analyse things
[23:48] <lifeless> but I'm really begging you - file bugs.
[23:48] <foom> ok.
[23:48] <igc> lifeless: yes, I ran a full set of local benchmarks on openoffice last night
[23:49] <lifeless> no bug, no chance of developer thinking about your case. You'll only accidentally get fixed. Bug. Things Get Fixed.
[23:49] <igc> on both gutsy and leopard
[23:49] <igc> no deep history in those results (yet)
[23:49] <poolie> jam, did you really want all of ~bzr to join ~bzr-log-rss-devel?
[23:50] <beuno> igc, is there any way for stats to be automated with bzr.dev?  have it spit out nightly stats in HTML format
[23:50] <lifeless> poolie: he's using an idiom to allow devs that are only interested in log-rss to do so, but to grant all bzr devs access too
[23:50] <lifeless> I think its not really needed; but shrug:)
[23:51] <poolie> do you mean we each individually accept or decline?
[23:51] <poolie> oh i see
[23:51] <igc> beuno: I think so via cricket
[23:51] <poolie> tb
[23:51] <poolie> nm
[23:51] <foom> igc: do you have the converted repo available?
[23:51] <lifeless> tb?
[23:51] <igc> foom: see http://bazaar-vcs.org/Benchmarks
[23:51] <lifeless> igc: I think foom wants to pull the converted deep history repo and play
[23:51] <igc> raw snapshots are available here: http://people.ubuntu.com/~ianc/benchmarks/src/
[23:52] <igc> I'm working on getting a conversion of openoffice today as it happens
[23:52] <beuno> igc, cricket?   Would be nice to have, to see performance gains. Even eventually have statistics of how it's being improved
[23:52] <igc> I'll make that available as soon as I can
[23:52] <lifeless> didn't jam do one already ?
[23:53] <igc> um - someone at Canonical did, not sure if it was jam
[23:53] <igc> w.r.t. cricket that is
[23:53] <lifeless> igc: jam. You should be able to just copy that conversion
[23:53] <lifeless> jam: ^
[23:54] <igc> w.r.t. OOo, jam was looking at a conversion from the CVS master but I believe it required a lot of manual work to get right
[23:54] <igc> given their tagging conventions, etc.
[23:55] <lifeless> yes but as a data point for playing with ..
[23:55] <lifeless> it doesn't have to be right
[23:55] <lifeless> foom: we could pair-file-bugs if you like
[23:56] <poolie> spiv, can you explain why we get TooManyConcurrentRequests errors, like in bug 189300
[23:56] <ubotu> Launchpad bug 189300 in bzr "TooManyConcurrentRequests error during push" [Undecided,New] https://launchpad.net/bugs/189300
[23:56] <foom> igc: on those benchmarks, it looks like you're testing with no history?
[23:57] <foom> lifeless: thanks; not today at least, though.
[23:57] <igc> foom: that's right - it's on my list real soon now to add that
[23:57] <igc> foom: first step is getting a sensible conversion of 3 of large projects which I'm doing today
[23:57] <poolie> i would have thought that could only come from some kind of structural code bug
[23:57] <spiv> poolie: I think maybe because of poor error handling
[23:57] <poolie> but it seems to occur when there are network errors
[23:57] <poolie> hm
[23:58] <igc> s/3 of/several/
[23:58] <poolie> maybe if the network is dropped we skip reading the reply, or noticing that there will be no reply
[23:58] <spiv> poolie: i.e. issuing a request that expects a body, but gets an error instead, but forgets to do cancel_read_body in the error handling
[23:58] <poolie> and then fail to see the next one?
[23:58] <poolie> right
[23:58] <spiv> Which is the sort of thing that the next protocol version ought to help with...
[23:58] <lifeless> igc: we have a mozilla one already
[23:59] <poolie> igc, i'd be inclined to start with bazaar itself,
[23:59] <poolie> it's easily accessible and it has a realistic merge graph