lifeless_working adsl FTW00:01
=== lifeless_ is now known as lifeless
ZectbumoAre there instructions on how to add bzr to the SCM in XCode?00:06
pooliespiv, thanks for your mail to sg00:16
poolieZectbumo, not that i know of00:27
pooliei believe there are some people who do use it there though00:27
pooliea post to the list may shake them out00:27
Zectbumowhere is the list?00:27
poolieor you could post where you've got up to in adding it00:27
Zectbumothanks poolie, I'll try that00:28
jmllifeless: I would like to know more of this 'working adsl' of which you speak.00:28
lifelessthe electrons move00:30
lifelessfrom node to node00:30
lifelesson wires copper00:30
poolielike autumn leaves00:30
Zectbumoin the summer time00:31
lifelessigc: ping00:31
lifelessthe living is easy00:31
igchi lifeless00:31
lifelessigc: do you want to talk about this plugin stuff00:31
lifelessigc: or just rapid fire emails ?00:31
igcas in auto-loading of plugins?00:32
igcI'm yet to look at it to be honest00:32
lifelessas in I b:rejected your patch - you may not have seen that00:32
igcah - no I hadn't seen that00:32
* igc looks00:32
poolielifeless, are you in need of relief from skull drills?00:33
lifelesspoolie: no drilling so far today00:33
lifelesspoolie: if it starts up I'll ring you and run for a train00:33
lifelessigc: you seem to be reimplementing bzr 0.8's hook system, which is sad.00:33
lifelessigc: so I'd like to understand what you want to achieve, and hopefully suggest a smaller way to do it.00:33
igclifeless: fire away00:34
lifelessuhm, you're the one with a mission in this area:) how about you describe what you want to achieve00:34
igclifeless: I want an easy, scalable way of configuring command line scripts as hooks ...00:35
igca few people have rejected Bazaar for Hg on this feature alone00:35
lifelessigc: Ok. I think this is a rather restrictive and difficult to work with medium, so don't want our core to get cluttered; we can do this entirely as a plugin.00:36
igcI don't care a lot about the Python hooks stuff because we support that already00:36
lifelesstheres a /lot/ of friction between object-model and command line00:36
lifelessand requiring that every object model hook be command line exposed - urgk. Not to mention locking coherency and performance.00:36
lifelessand windows.00:36
* Toksyury1l wonders absently if there are any plans to rewrite bzr in a compiled language someday in the future00:37
=== Toksyury1l is now known as Toksyuryel
lifelessToksyuryel: python can be compiled FWIW :)00:37
igcok - so we can make it selective as to which hooks can be exposed if you wish00:37
lifelessToksyuryel: but more seriously we have rewritten the inner loops in C already00:37
Toksyuryeloh, sweet ^^00:37
lifelessToksyuryel: (Not all inner loops, but selected critical ones)00:37
lifelessspiv: you seem to put success/failure before body content in replies; this doesn't work in the general case00:39
* Toksyuryel nods "well it is nice to know that it's even being attempted ^_^ bazaar is a great system, and I think being available in a version with far less overhead would get a lot more people interested in using it :)"00:39
spivlifeless: chunked replies can be interrupted with an error00:39
mwhudsonpython is already a compiled language00:39
mwhudsonbut this isn't a helpful response :)00:39
lifelessspiv: smells like duplication to me00:39
lifelessspiv: I would be strongly inclined to have [BODY] STATUS00:40
spivSo most errors would tend to always have no body, then the error status?00:40
spivThat seems reasonable.00:41
lifelesswell the idea I have is that pure errors have no body00:41
Toksyuryelmy understanding is that compiling python would still lag behind something like C, and could potentially introduce strange compile-time errors that interpreted python wouldn't experience00:41
lifelessHTTP's 'errors are html pages' is nasty00:41
* Toksyuryel could be wrong about this00:41
spivRight, I agree.00:41
spivI was just referring to the case where an error doesn't occur until the body has already begun.00:41
spivThat does sound like a good simplification to me, thanks.00:42
foomexcept in the case of http 1.0 indeterminate length replies, you can just treat a connection which closes unexpectedly as an error.00:46
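[Editor's note: the exchange above sketches lifeless's "[BODY] STATUS" layout: body chunks stream first, a status trailer comes last, a pure error carries no body, and an error can still interrupt a partially-sent body. Below is a toy, hedged illustration of that idea; all names are hypothetical and this is not bzr's actual smart-protocol code.]

```python
# Toy sketch of the "[BODY] STATUS" layout discussed above: zero or more
# body chunks, then exactly one status trailer. A pure error is a status
# with no body; an error mid-stream interrupts the body and still ends
# with a status. Hypothetical names, not the real bzr smart protocol.

def encode_response(chunks):
    """Yield wire parts: body chunks first, one status trailer last."""
    try:
        for chunk in chunks:
            yield ('body', chunk)
    except Exception as e:
        # An error can arrive after part of the body was already sent;
        # the trailer is how the client learns the reply failed.
        yield ('status', 'error: %s' % e)
    else:
        yield ('status', 'ok')

def ok_chunks():
    yield 'hello '
    yield 'world'

def failing_chunks():
    yield 'partial'
    raise RuntimeError('disk on fire')

print(list(encode_response(ok_chunks())))
print(list(encode_response(failing_chunks())))
```

Putting the status last means errors never need an HTML-page-style body, which is the "nasty" HTTP behaviour lifeless objects to.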
lifelessigc: reviewing your patches now00:47
lifelessigc: I get the impression that you are cloning hg's style here or something, but it has a smell about it that is quite un-bzr-like.00:47
lifelessigc: I appreciate that some folk may say 'we have to have shell hooks' - but lets not copy an inferior system to give them their feature.00:47
lifeless(if hg was superior, we wouldn't be hacking on bzr ;))00:48
igcthanks lifeless. The design is meant to be better than Hg's fwiw :-)00:48
igcand it is00:48
igc(not saying it can't be better though)00:48
lifelessRight now I have a pretty strong conviction that this all belongs in a plugin00:48
lifelessthe shell command wrapper stuff is probably good to put in the core once the code duplication is removed00:49
lifelessoh, I missed another spot of duplication, the unit test support for running real sub processes.00:49
lifelessthe config stuff made me wince; I've replied to it00:49
lifelessbut only comment because I may be being conservative00:49
lifelessigc: right the main one I've looked closely at as well. I think you got the wrong end of the problem and pulled the thread there00:57
lifelessigc: if you instead start with adapting a config driven shell interface into the existing hook structure, with no changes to the current api (it doesn't need any) I think you'll find it works a /lot/ better and more smoothly.00:58
lifeless(And yes, I do mean start over on the third patch - sorry)00:58
* igc digests that00:58
lifelesshint: If shell hooks are configured in *Config, then *Config knows the hook exists and should register it01:00
lifelessthe current precommit hook also has a performance problem01:02
lifelessits recreating the work of the commit by calling changes_from01:02
lifelessanyhow, I need caffeine apparently, so I'll be back in a few minutes; I'm very happy to have a voice call if you think more bandwidth will help01:03
lifelessigc: you have mail; I'm not arguing against having shell scripts, I'm arguing against an implementation that is problematic (and makes changes to core it should not)01:37
igclifeless: understood01:37
igcI'm ok with going back to [] instead of init_hook01:38
igcthe tricky bit I don't see ...01:38
igcis how your auto-register approach can work when the set of hooks ...01:38
igcis dependent on the *branch*01:38
igcThat's the important api change ...01:39
igcget_hooks(hook_name, branch) instead of hooks[hook_name]01:39
lifelessigc: when the branch reads its config file it can add hooks, can it not ? And clean them up when unlock() drops the lock count to 0.01:57
lifelessigc: -> no api change needed :]01:57
* igc thinks01:57
lifelessBranch.hooks is branch independent, its global.01:58
igcsounds ok (which is great)01:58
lifelessWe can *either* make it branch specific, which has some possible benefits, but some downsides as well.01:58
lifelessOR we can do what I just described, which will also work well.01:58
=== asak_ is now known as asak
igclifeless: I'd probably prefer making it branch specific but ...01:59
igcI'll try the other way if you'd prefer01:59
igcas long as locations.conf overrides branch.conf ...02:00
igcI'm happy enough02:00
lifelessigc: I am not wedded to it being global (well, more precisely, Branch.hooks must stay global, but you could clone the current global hooks with a deep copy to aBranch.hooks, which is then local and can be tweaked by the branch. But I think thats a mistake due to interactions with bound branches, which is why I think singleton is better in the first place.)02:02
igcok - let me get the run-shell-command patch resubmitted and I'll go with your 'stay global' approach02:03
igclifeless: thanks for the reviews btw - much appreciated02:04
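[Editor's note: the plan settled above is lifeless's "no api change" approach: the hook registry stays global, a branch registers its config-declared shell hooks when it takes its lock, and cleans them up when unlock() drops the lock count to zero. A minimal sketch follows; the class and registry names are illustrative, not bzr's actual API.]

```python
# Sketch of config-driven shell hooks against a *global* registry, per
# the discussion: register on first lock, deregister on last unlock.
# Names (hooks, Branch, lock_write) are illustrative only.

import subprocess

hooks = {'pre_commit': []}  # global registry, in the spirit of Branch.hooks

def make_shell_hook(command):
    # Assumption: each configured hook is simply a command line to run.
    def hook(*args):
        return subprocess.call(command, shell=True)
    hook.command = command
    return hook

class Branch:
    def __init__(self, config_hooks):
        self._config_hooks = config_hooks  # e.g. {'pre_commit': 'make check'}
        self._lock_count = 0
        self._registered = []

    def lock_write(self):
        self._lock_count += 1
        if self._lock_count == 1:
            # Reading the config registers its shell hooks globally...
            for name, cmd in self._config_hooks.items():
                h = make_shell_hook(cmd)
                hooks[name].append(h)
                self._registered.append((name, h))

    def unlock(self):
        self._lock_count -= 1
        if self._lock_count == 0:
            # ...and dropping the last lock cleans them up again.
            for name, h in self._registered:
                hooks[name].remove(h)
            self._registered = []

b = Branch({'pre_commit': 'true'})
b.lock_write()
print(len(hooks['pre_commit']))  # 1: hook registered while locked
b.unlock()
print(len(hooks['pre_commit']))  # 0: cleaned up at lock count zero
```

The registry itself never changes shape, which is why no get_hooks(hook_name, branch) style API change is needed.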
=== jam_ is now known as jam
=== Verterok is now known as Verterok_
lifelessabentley: ping03:36
abentleylifeless: pong03:36
lifelessabentley: is there a trivial method to apply an inventory delta to a tree: to actually do the tree rearrangements requested03:36
abentleyYes, WorkingTree.apply_inventory_delta03:37
lifelessabentley: that only alters the inventory03:37
lifelessabentley: I want to shuffle directories on disk; write file content out to new files and symlinks etc.03:38
abentleyTrue, but that's all the inventory delta was meant to describe.03:38
lifelessabentley: I know. the use case is 'I have a partially committed tree - all the content is in but not the revision id.' Now I want to extract this to disk. Hmm, perhaps export will do it03:38
abentleyI think the best option would be to convert it to _iter_changes format, which revert uses.03:39
jmlwhat's the eta on the RC?03:41
lifelessabentley: no need, export() will work just fine03:41
lifelessI brain farted for a second :)03:41
abentleylifeless: So I've got an implementation of iter_files_bytes that reads each pack only once.03:42
abentleyNow 50% of the time is spent reading the index to determine the build-dependencies.03:42
lifelessabentley: cool; I guess that means its faster ?03:43
abentleyYes, it's faster for the checkout-from branch case when there's no accelerator tree.03:44
abentleyBut it's about half the speed of checkout using an accelerator tree.03:45
abentleySo I'm looking for ways to reduce the cost of index lookups.03:45
abentleyAny ideas?03:45
lifelesspaste your index-using code00:48
lifelessbut in general: less trips to the index layer - better. And we need to do the next iteration of the index layer too - I have a half prepped branch working on that03:49
lifelessabentley: its pretty dense; I guess thats doing inventory and text in a single pass ?03:55
lifelessabentley: or is it because its part of the refactored knit layer, which inventory and revision still use ?03:56
lifelessnot saying its bad btw03:56
abentleyYes, knit_kind can be "file", "inventory", "revision", "signature".03:56
lifelessk. I'm a little confused as to what streams are, I suggest a comment or something00:56
abentleystream_id would be a better name; they are tuples of (file_id, version_id)03:57
lifelessthats a component isn't it ?03:57
lifelessso I would say component_key03:57
lifeless'the name of a component'03:58
abentleyWell, they are actually the names of the desired streams, i.e. fulltexts.03:58
lifelessfulltext_keys perhaps ?03:59
lifelessI use key fairly consistently elsewhere when talking about a tuple of ids03:59
abentleyCould work.03:59
lifelesstrivial note: while pending_versions: is faster than while len(pending_versions) > 0: and IMO just as readable04:00
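[Editor's note: lifeless's style point in runnable form. An empty Python container is falsy, so testing the container directly means the same as the explicit length check.]

```python
# An empty set/list/dict is falsy, so `while pending_versions:` is
# equivalent to `while len(pending_versions) > 0:` and avoids a call.
pending_versions = {('file-id', 'rev-1'), ('file-id', 'rev-2')}
processed = []
while pending_versions:        # same meaning as: len(pending_versions) > 0
    processed.append(pending_versions.pop())
print(sorted(processed))
```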
abentleyThere's currently a lot of pointless conversion between various tuple formats, so I'm not sure exactly what the inputs will be when I'm done.04:00
lifelessone thing you could do to reduce overhead04:01
lifelessmake pending_versions a set of the native keys04:01
lifelessthat will reduce conversions04:01
lifelessif you had per-kind build_maps that would do it. But maybe that doesn't fit further out04:02
lifelessanother alternative would be (kind, key) tuples04:02
lifelesswhere key is (revision_id,) or (file_id, revision_id)04:02
lifelessless conversion - just wrapping04:02
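[Editor's note: a tiny, hedged illustration of the "(kind, key) tuples" idea just mentioned: keep each index's native key tuple intact and wrap it with its kind, so crossing layers is cheap wrapping rather than field-by-field conversion. Key shapes follow the chat: (revision_id,) for revisions, (file_id, revision_id) for file texts; the names are illustrative.]

```python
# Wrap native keys instead of converting them: (kind, key) where the key
# keeps whatever shape its index uses natively.

def wrap(kind, key):
    return (kind, key)

requests = [
    wrap('revision', ('rev-123',)),           # revision key: (revision_id,)
    wrap('file', ('file-abc', 'rev-123')),    # text key: (file_id, revision_id)
]

for kind, key in requests:
    # Unwrapping is plain tuple unpacking; the native key is untouched.
    print(kind, key)
```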
lifelessone suggestion here - move requirements.update(new_requirements) out of the inner loop04:03
lifelessinstead, when you do pending_versions = new_pending04:03
lifelessalso do requirements.update(new_pending)04:04
lifelessbut otherwise that seems fine - you spider out nicely04:04
abentleyThe output is expected as (kind, file_id, revision_id), so it seemed simplest to retain that format.04:04
lifelessso I suspect you'll be paying bisect() costs in index.py04:04
lifelessthats something that is possibly optimisable without changing the disk format.04:04
abentleyBut your (kind, key) notion is interesting.04:04
abentleyI see what you mean about requirements.update().04:07
lifelesssets are fast its unlikely to be a problem, but little bits add up :)04:07
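[Editor's note: a sketch of the hoisting lifeless suggests above: rather than calling requirements.update() once per node inside the inner loop, batch the whole next wave in when pending_versions is swapped. The function and variable names are illustrative, not abentley's actual patch.]

```python
# Breadth-first "spidering" of build dependencies with the update()
# hoisted out of the inner loop: one set.update() per wave, not per node.

def spider(roots, deps):
    """Collect every transitive build dependency of `roots`."""
    requirements = set(roots)
    pending_versions = set(roots)
    while pending_versions:
        new_pending = set()
        for key in pending_versions:
            for dep in deps.get(key, ()):
                if dep not in requirements:
                    new_pending.add(dep)
        pending_versions = new_pending
        requirements.update(new_pending)   # hoisted: once per wave
    return requirements

deps = {'a': ['b', 'c'], 'b': ['c', 'd'], 'd': []}
print(sorted(spider(['a'], deps)))
```

As the chat notes, sets are fast enough that this rarely dominates, but the little bits add up.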
abentleyI'll probably see how iter_all_entries performs, but it's questionable to use that IRL.04:08
lifelessiter_all_entries skips the bisection04:08
lifelessand I know the bisection to be a problem04:08
abentleyWould stink rather badly for many cases, though.04:08
abentleyYeah, looks like ~all of the iter_entries time is spent in the bisect code.04:11
lifelessfeel free to improve it :) but your current patch is orthogonal I think04:11
* igc food04:13
lifelessyou won't be slower than the current code04:13
abentleylifeless: This would probably require me to have more than a superficial understanding of bisection...04:13
lifelessabentley: fair enough ;)04:13
lifelessabentley: I can describe the theory in an email if the code is too obtuse04:23
abentleylifeless: The bisect code is slow, but iter_all_entries is slower.04:24
lifelessabentley: what project? bzr?04:32
abentleyYes, doing a checkout of Bazaar takes 8 seconds with iter_all_entries, and 5 with iter_entries.04:32
lifelessso, thats size_history issue showing up:)04:32
lifelessa project with less deep history might see a different ratio04:33
lifelesssomething like mozilla iter_entries will be faster ;)04:33
fullermdWe don't have a way of storing arbitrary metadata on revisions, right?04:36
bob2how does --fixes do it?04:37
fullermdI thought it had a particular field.04:37
abentleylifeless: Especially mozilla with only one commit04:38
lifelessfullermd: we do, its abused04:39
lifelessI regret us adding it :(04:39
lifelessabentley: well moz with one commit, iter_all will be faster04:39
lifelessabentley: moz with all its commits, iter_entries will be faster04:39
fullermdOh, we do?  Hm.04:39
abentleyOh, gotcha.04:39
fullermdI thought the whole point of such things was to abuse them   :)04:39
* lifeless abuses fullermd04:40
fullermdWhip me.  Beat me.  Make me maintain AIX.04:40
lifelessnow thats cruelty04:40
abentleyAh, but if we made him rewrite AIX in Perl...04:40
fullermdI could rewrite it in APL, and it'd probably be more comprehensible than the original...04:41
abentleyPlus you could use the insanity defence if you happened to go on a killing spree afterward.04:41
fullermdDid I mention that my first sysadmin job was on AIX?04:42
* fullermd loads another magazine.04:42
fullermdAnyway, some damn fool opened up Yet Another VCS Thread on $MAILING_LIST, and it got me thinking about the question "How do I find foo/bar/baz.c r1.1384.1.7 post-conversion".04:43
fullermdSeems like stashing in random-metadata-slot at conversion time is the least evil answer.04:44
lifelessfullermd: that can be done in the revision properties list, but it makes the revision object rather large :|04:44
abentleyfullermd: if we're talking about CVS, we don't have to worry about renames, so we can use the filenames as file_ids.04:45
abentleyYou could use r1.1384.1.7 as part of the revision-id.04:45
lifelessabentley: N files in a commit though04:46
fullermdUrg.  That would be Really Ugly(tm) when you have a few dozen files in a commit...04:46
abentleyOkay, clearly I haven't given this enough thought.04:46
fullermdOf course not.  It involves CVS.  Your mind shunts itself aside in self-defense.04:47
abentleyThough it could be argued that CVS has only one file in each commit...04:47
fullermdWell, yeah.  But converting it that way would be a good way to ensure no project ever gets far enough to ask bzr the question...04:47
=== asac_ is now known as asac
abentleylifeless: Okay, I looked at the bisect_multi_bytes code and it kinda made sense.  What's wrong with it?05:34
lifelessabentley: its slow05:34
lifelessso there are two separate sets of bisection occuring05:35
abentleyBut the algorithm is suitable?05:35
lifelessabentley: one is down at the disk level where we are finding the location of the key and reading its data (and data adjacent to it).05:35
lifelessabentley: this bisection is completely fine AFAICT, reading the right data, and not slow.05:35
lifelessabentley: the second set of bisection is occuring in memory05:35
lifelesswhere we have a list of ranges that have been parsed05:36
lifelessin a file like |-----------------------------|05:36
lifelesswe mark the bits we've parsed:05:36
lifeless(for instance) - thats a file after looking up one key 2/3's up the file or so05:36
lifeless--lsprof suggests that we're examining that map of parsed regions expensively00:37
lifelessit may be that the map needs to be represented in memory in some better fashion05:37
lifelessor just that the way bisect is used/how the results pan out can be improved05:38
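[Editor's note: a hedged sketch of the in-memory structure lifeless describes: a sorted list of (start, end) byte ranges already parsed out of the index file, consulted with bisect before deciding whether a disk readv is needed. This mirrors the idea only; the class and names are not bzr's actual parsed-region code.]

```python
# Track which byte ranges of an index file have been parsed, and answer
# "is this offset already covered?" with a bisect over the range starts.

import bisect

class ParsedRegions:
    def __init__(self):
        self._starts = []   # kept sorted; parallel to self._regions
        self._regions = []  # non-overlapping (start, end) pairs

    def covers(self, offset):
        """True if `offset` falls inside an already-parsed region."""
        i = bisect.bisect_right(self._starts, offset) - 1
        return i >= 0 and offset < self._regions[i][1]

    def add(self, start, end):
        # Assumption: callers never insert overlapping regions.
        i = bisect.bisect_right(self._starts, start)
        self._starts.insert(i, start)
        self._regions.insert(i, (start, end))

regions = ParsedRegions()
regions.add(0, 100)       # header parsed
regions.add(4000, 4500)   # one key looked up ~2/3 of the way up the file
print(regions.covers(4200))  # True: no disk read needed
print(regions.covers(2000))  # False: this part must still be readv'd
```

Each covers() call is O(log n), so if this shape is right, the expense lsprof shows would come from how often it's consulted rather than the lookup itself.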
abentleyOne option that might improve performance is if we can turn it into a coroutine.05:40
abentleyBecause I suspect it's got keys cached that we need to find on the next lookup.05:40
abentleyEven if not, the position data could be quite useful, I'd expect.05:41
lifelessif by key you mean the (file_id, revision_id) tuple stuff - thats should not trigger a lookup in the parsed regions, unless the key is not fully parsed05:41
lifelessremember that a key on disk is [key, present_flag, references(pointers), byte_value]05:42
lifelessso if I read the content for key X, I need to issue a second readv request to resolve its references into strings05:42
abentleyI mean that when I'm building a list of dependencies, I have to call GraphIndex.iter_entries() multiple times.05:42
abentleyAnd I assume there's no caching.05:42
lifelessthere is caching05:42
lifelessiter_entries is slow because of the in memory data structure that represents what parts of the file we have parsed05:43
abentleyWhere is the caching?05:44
lifelessself._nodes_by_reference is part of it05:44
lifelessthat maps a pointer to a parsed key05:45
lifelessself._parsed_key_map is the bitmap of the parts of the file we have cached05:45
abentleyHmm.  Would we want some kind of tree structure?05:47
lifelessI don't know :). I'm paging this in as we speak.05:48
lifelessmainly I want to ensure you see the bit that is a problem (in memory data structure) vs the bit that is ok (the actual IO's we perform)05:48
lifelessit may be the structure is ok but we don't use it well05:48
lifelessor it may be that the structure is not ok05:48
abentleyI do see that.05:48
* abentley is off to sleep05:55
mrevellHello bzrrrrr!07:09
pooliehello matthew07:23
bob2"bzr: ERROR: bzrlib.errors.BzrCheckError: Internal check failed: file u'/etc/resolv.conf' entered as kind 'file' id 'resolv.conf-20071108133634-j0pcs1k03mehwsbd-106', now of kind 'symlink'"07:41
bob2what's the rationale for having an error for that instead of just versioning the change?07:42
ubotuNew bug: #189182 in bzr-hg "unexpected keyword argument 'find_ghosts' in hg-0.9.5" [Undecided,New] https://launchpad.net/bugs/18918208:20
pooliebob2, you shouldn't be seeing that, we're meant to fix it at another level08:51
poolieunless you're using a very old bzr?08:51
lifelessgotta be old bzr I would have thought08:55
xinity_mbphy all09:15
xinity_mbpanyone may help, i have bazaar installed on my laptop and my desktop , i tried to push the repo i have on my laptop into my desktop bazaar repo using the bzr+ssh command but i failed, any clues ?09:18
rolly.bzr.log is a good place to look for clues09:23
xinity_mbpon both sides rolly ?09:24
=== weigon__ is now known as weigon
rollyon the remote side, sure. If you are successfully establishing an ssh connection09:25
xinity_mbplet's see if this file exists ;)09:26
rolly"but i fail" doesn't say much about your problem, so I can't say09:26
xinity_mbpthe ssh connection succeed, but the bzr smart server says something about permissions09:28
xinity_mbpbzr push bzr+ssh://username@myhostname/Projet, error : Generic bzr smart protocol error: Permission denied: "/Projet": [Errno 13] Permission denied: '/Projet'09:29
xinity_mbpthe Projet dir is in my home on the desktop09:29
xinity_mbpand when i take a look at the /var/log/secure, i can see, the successfull ssh connection09:30
weigonxinity_mbp: /Projet is at the root of your filesystem, are you logging in as root ?09:31
xinity_mbpnop weigon09:31
xinity_mbpthe Projet dir is in ~username dir09:32
xinity_mbpwich means ~username/Projet09:32
xinity_mbpi found this on .bzr.log09:32
weigonthe "permission denied" smells like it tries to access /Projet instead of ~username/Project09:32
rollyyou need to put the full path in there09:33
rollybzr+ssh://username@myhostname/home/username/Projet or whatever it may be09:33
xinity_mbpah better now :)09:34
rollyI love bzr, even when it crashes :D It's so easy to tell what's wrong09:35
yaccrolly: what you love in practice is Python. Most Python programs share this attribute: "Easy to debug" ;)09:37
rollyyeah it's true09:39
xinity_mbpseems to be working , here is my last log :09:39
rollyI have 0 python experience and I was able to patch a bzr module09:39
yaccrolly: And now you need to be a self-employed contractor for Germany's biggest mail portal, and you can charge premium prices for patching (complex and hard Python) modules without understanding them :-P09:40
rollyWell, I can certainly provide the lack of understanding09:42
yaccOh, sorry, that was a completely inappropriate association I had.09:43
yaccBut this guy managed to sit around for more than 2 months, providing exactly 3 one-line changes replacing constants, as a contractor. That's one-upmanship.09:44
rollythose must have been some pretty important 3 lines :p09:44
rolly(for him to charge premium)09:45
spivxinity_mbp: I think a newer client version would have reported that error properly.09:45
rollyanyways, past my bedtime. Goodnight all09:45
spiv(or perhaps a newer client *and* server version)09:46
xinity_mbpUsing fetch logic to copy between RemoteRepository(bzr+ssh://xinity@myhostname/home/xinity/Projet/.bzr/)(remote) and KnitPackRepository('file:///Volumes/Home/Users/xinity/Documents/Exploitation/Projet/.bzr/repository/')09:46
xinity_mbphere is the last log i have09:46
xinity_mbpok seems to be working, a last thing i didn't catch10:05
xinity_mbpbzr branch Exploitation bzr+ssh://xinity@myhostname/home/xinity/Projet/Exploitation sends the .bzr dir , but not all the files, did i miss something ?10:06
spivThat's expected.  "bzr branch" only builds working trees for local branches.10:07
spivBut if you're just sharing the branch with other people, that doesn't matter.  You can "bzr branch" and so on with just the .bzr directory.10:07
xinity_mbpok so how to pull the whole tree to the remote instance ?10:12
cvvI tried to use your VimEditorIntegration plugin and found out that it does not work on Vim 7.110:15
datocvv: showing the diff no longer works with recent bzrs. maybe you want to try `bzr ci --show-diff`10:16
cvvWhat do you mean? bzr diff work perfectly in bzr 1.010:18
datocvv: what does not work for you in 7.1?10:19
cvvI do not see the diff. both panels show the same text, but `bzr diff` reports more differences10:20
datocvv: try running `bzr commit --show-diff`10:21
cvvbzr: ERROR: no such option: --show-diff10:22
datothen it's not that10:23
datoer, probably is10:23
cvvI never use option --show-diff in past10:23
datoit's a bit new10:23
Qhestiondato: "bzr diff"10:24
datocvv: so, if you do `bzr commit`, and the editor starts, and you go to another terminal, and you do `bzr diff` there, do you get an error about the repository being locked?10:24
Qhestiondato: you should not.10:24
datoQhestion: of course you should10:25
Qhestiondiff is a readonly task, commit is a readwrite task.10:25
Qhestionbut there is a handling for that10:25
datoQhestion: sorry, but the facts support me10:25
dato(unless the behavior was changed in bzr.dev in the last two days)10:26
Qhestionok right10:26
datoyou can't diff while committing10:26
Qhestionok you are right.10:26
cvvbzr diff10:29
cvvbzr: ERROR: Could not acquire lock [Errno 11] Resource temporarily unavailable10:29
datocvv: right. you need bzr 0.91 or newer to be able to see the diff in the editor, I'm afraid.10:30
CokeI just commited files, where were they commited to? Is everything contained in the same project directory?10:30
spivCoke: try "bzr info"10:31
cvvOk. I will now upgrade to bzr-1.010:32
Cokespiv: so everything is a branch?10:32
spivCoke: I'm not sure what you mean by "everything" :)10:33
Cokespiv: every repository10:33
spivNo, in bzr we use "repository" to mean something different to a "branch".10:33
Cokespiv: what would be considered a "local copy"10:33
spivA repository is where the revisions of one or more branches are stored.10:34
spiv(Often a repository is co-located with a branch, and so holds just data for that branch)10:34
Cokespiv: ah, if we are several networked developers working on the same project we have a central repository and we each have our own private branch?10:34
datocvv: good. in that case, you just use `bzr commit --show-diff`10:35
spivWell, it's usually easiest to ignore repostories.10:35
Cokespiv: good.10:35
spivThe things you work with are branches most of the time.10:35
spiv(Or perhaps a checkout of a branch)10:35
Cokespiv: so, each developer has a branch and then there's a branch at our project server too?10:36
spivSo when you do "bzr commit" it creates a new revision on that branch.10:36
spivYou can work like that.  You can also have just one branch, and have each developer just have a checkout of that branch.10:37
spiv(i.e. very similar to how you'd use SVN or CVS)10:37
Cokespiv: ok. what's the difference between a checkout and a branch?10:38
spivA checkout is also called a "working tree".10:38
spivSo it's the files on disk that you work with.10:38
spivA branch is a line of development.  A series of revisions.10:38
Cokespiv: ok, then I guess it works sort of the same way like SVN in that respect10:39
xinity_mbpmy oh my i'm lost :(10:39
spivSo if you have a checkout of a branch, and someone else commits to the branch, then your checkout would be out-of-date.10:39
spivSo you wouldn't be able to commit to that branch with that checkout until you did a "bzr update".10:39
CokeArgh! There's no bash autocompletion for bzr yet!10:39
Cokespiv: ah, just tested checking out. ok. it's very similar to cvs10:40
spiv(Although you would still be able to make "local commits")10:40
spivIf you're coming from cvs or svn, using checkouts is a pretty good way to ease into bzr.10:40
spivMost people using bazaar get addicted to making branches though :)10:41
Cokespiv: yes. it does seem tempting.10:41
spivOften a good way of working is to make a new branch for every distinct feature you're developing.10:41
Cokespiv: different thinking10:42
Cokespiv: branching and merging is a pain in the ass with CVS and SVN10:42
xinity_mbpsounds weird , i followed the pdf doc to build a central repo10:48
xinity_mbpwhen i do a bzr ls bzr+ssh.... it shows a list of files10:48
xinity_mbpbut when i connect remotely, i can't see those files10:49
Cokespiv: if I work on my branch and change files, can I revert or is the branch "hot"?10:49
Odd_Blokexinity_mbp: 'bzr ls' lists files that are versioned, not what is actually on the filesystem.10:50
Cokespiv: lemme rephrase that. :) are the changes stored in each branch?10:50
spivCoke: yes, each branch keeps a full history10:50
poolie_this is very confusing for xchat having someone else called mbp :)10:50
Cokespiv: so I can relax and work on my branch (which happens to be the branch root location) and not worry about not being able to revert certain changes?10:51
spivCoke: if you have the bzr-gtk plugin installed, "bzr viz" does a great job of showing you the history in a pretty way10:51
Cokespiv: I hate pretty!10:52
xinity_mbpOdd_Bloke: how to send those files to the remote repo ?10:52
spivxinity_mbp: you could install the "push-and-update" plugin10:52
xinity_mbpspiv: let's try then ;)10:53
spivCoke: I don't quite understand your concern.  Perhaps you should clarify what you mean by "revert"?10:53
spivCoke: There's a "bzr revert" command, but I'm not sure if has the same meaning as what you want or not.10:53
Cokespiv: I edit file and save, oops! made a mistake, can't remember what I've changed, go back to last commited revision10:53
Cokespiv: no, more like "revert" in the english sense10:54
Cokespiv: in svn I'd just remove the changed file and run svn update10:54
spiv"bzr revert the_file" will revert it to the last committed revision.10:54
Cokespiv: is it doable for the entire branch, including subdirectories?10:54
spiv(and "bzr revert" will revert the entire working tree)10:54
Cokespiv: ah10:54
CokeWell... I'm all good now.10:54
Odd_Blokexinity_mbp: The files are already there, in version control.  The only reason they aren't on the filesystem is that they haven't been checked out.10:55
CokeThere's just one more thing, where can I get free project hosting that use bzr instead of svn?10:55
spivCoke: revert changes the working tree, not the branch, btw.10:55
Odd_BlokeCoke: Launchpad.10:55
Cokespiv: my branch is my working tree. :)10:55
xinity_mbpOdd_Bloke: sorry for my noob attitude , should i do a checkout on the remote side ?10:56
Odd_Blokexinity_mbp: Yeah.  That's what the push-and-update plugin does for you. :)10:56
CokeOdd_Bloke: thanks.10:56
xinity_mbpOdd_Bloke: cool ;)10:56
CokeOdd_Bloke: I don't get it, is that an URL?10:57
Cokeah, .net, of course. It's a network, not an .org.10:57
luksCoke: you can host bzr branches on anything that supports sftp and http10:57
Odd_BlokeCoke: Your branch and your working tree are conceptually different.  The branch is to do with history, whereas your working tree is just whatever files happen to be in the directory you're using bzr to manage.10:58
luks(or ssh)10:58
Cokeluks: that's what I will do with the projects at work, but I have a few things hosted on SF.net10:58
Cokeluks: want to switch10:58
xinity_mbpOdd_Bloke: so after that if i use bzr push it should update the remote repo right ?11:05
Odd_Blokexinity_mbp: Depends what you mean by 'that'. :p11:07
xinity_mbpOdd_Bloke: i've installed the push and update plugin, bzr plugins founds it11:08
Odd_Blokexinity_mbp: I've never used it myself, so I don't really know.11:08
Odd_BlokeTry and find out. :)11:08
xinity_mbpOdd_Bloke: but when i try to do a bzr push it says something like : This transport does not update the working tree of: .... :(11:09
Odd_Blokexinity_mbp: OK, it looks like it adds a push-and-update command, so try 'bzr help commands' and look for that.11:11
xinity_mbpOdd_Bloke: nothing about push and update :(11:12
Odd_Blokexinity_mbp: I've just installed it locally and I have a push-and-update command...11:15
xinity_mbpOdd_Bloke: use bzr help commands ?11:15
Odd_Blokexinity_mbp: Both there and when I try to run it.11:15
xinity_mbpOdd_Bloke: ok bzr help push_and_update says : Helper functions for pushing and updating a remote tree.11:18
xinity_mbpOdd_Bloke: but nothing about how to use it :(11:18
Odd_Blokexinity_mbp: Use it exactly as you would push.11:19
xinity_mbpOdd_Bloke: i did but it still says :11:20
xinity_mbpOdd_Bloke: bzr commit works perfectly, but bzr push says : This transport does not update the working tree of:11:23
spivxinity_mbp: the way the plugin works, I suspect it'd still print that message11:24
spivxinity_mbp: but it may have updated the working tree anyway11:24
xinity_mbpOdd_Bloke: but the new file i've added locally isn't in the remote tree, i still need to do a checkout on the remote side11:24
xinity_mbpOdd_Bloke: bzr update, sorry11:25
luksxinity_mbp: you need to use push-and-update, not push11:39
luks(I think)11:40
datoluks: not anymore11:40
datopush-and-update could use a README file11:40
mtaylorany chance anybody has any special tricks for getting pylint to understand lazy_import?11:46
spivmtaylor: s/^lazy_import/#\0/  and  s/^""")$/#\0/  ?11:47
mtayloryeah... there's that.11:48
mtaylorbut then I wind up forgetting to uncomment it before I commit11:48
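[Editor's note: spiv's sed one-liners above comment out the lazy_import(...) call and its closing `""")` so pylint sees plain code. Here is the same trick as a small, hedged Python sketch that produces a lint-only copy instead of editing the file, which sidesteps mtaylor's "forget to uncomment before commit" problem. The function name is hypothetical.]

```python
# Lint-time preprocessing: comment out the lazy_import call and its
# closing '""")' line, mirroring spiv's s/^lazy_import/#\0/ and
# s/^""")$/#\0/ substitutions. Operates on a copy of the source text.

import re

def defang_lazy_import(source):
    source = re.sub(r'^lazy_import', r'#\g<0>', source, flags=re.M)
    source = re.sub(r'^"""\)$', r'#\g<0>', source, flags=re.M)
    return source

src = 'lazy_import(globals(), """\nfrom bzrlib import errors\n""")\n'
print(defang_lazy_import(src))
```

Feeding pylint the transformed copy (e.g. via a temporary file) leaves the committed source untouched.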
mtayloranybody ever touch the compiler module?12:18
Parker-once... years ago... after that guys in white give me my own room with padding :P12:20
Odd_Blokemtaylor: I've played with it a little, but nothing especially in-depth.12:21
mtaylorOdd_Bloke: I'm trying to inject something into the tree...12:21
xinity_mbpOdd_Bloke: got it , it works now12:21
mtaylorit's causing me to want to lay in a padded room :)12:22
Odd_Blokemtaylor: Way over my head, sorry. :)12:22
Odd_BlokeI was just trying to get pretty representations of stuff.12:22
Odd_Blokexinity_mbp: Awesome. :)12:22
xinity_mbpnow i should be sure of what to do next time;)12:25
Parker-have to say that bzr is great.. played two days and already made my own bzr serve command that uses paramiko to make a custom ssh server and uses public keys to authenticate.. :P12:26
xinity_mbpParker-: terrific !12:29
Parker-still in proof of concept state...12:30
lukscould be very useful plugin for windows12:30
Parker-it works with windows :P12:30
Parker-now uses ssh-agent (or Pageant in windows) to get keys12:31
poolie_wow, that's great Parker12:38
poolie_you should really post to bazaar@lists.ubuntu.com about it12:38
Parker-er... I'm little and shy boy12:39
Parker-hmmhh.. is bazaar@lists.canonical.com same?12:40
ubotuNew bug: #189227 in bzr-svn "Strip a trailing \n when submitting the commit message to SVN" [Undecided,New] https://launchpad.net/bugs/18922712:46
Parker-have to fix it a little bit before the announcement12:54
=== mrevell is now known as mrevell-lunch
bob2poolie: ah, that'd be the problem13:34
bob2was just surprised, since tla handled it fine ;)13:34
=== mvo__ is now known as mvo
ubotuNew bug: #189246 in bzr "UnicodeDecodeError exception in merge" [Undecided,New] https://launchpad.net/bugs/18924614:01
wicaHi, I have bzr-1.1 on a gentoo machine. I have a problem committing my code; I get ".bzr/branch/revision-history.tmp.1202219907.956677914.25812.1333525853": [Errno 13] Permission denied14:11
wicaThe owner of bzr root, and the group is bzr.14:12
Odd_Blokewica: This suggests that you've created the branch with a different user and haven't set the permissions on it correctly.14:12
wicaMy normal user is in the bzr group14:12
wicaand all my dirs have 77514:12
wicaOdd_Bloke: Yep, that I understand. But the permissions look ok.14:13
bob2ls -ld ".bzr/branch/" .bzr/branch/revision-history.tmp.1202219907.956677914.25812.133352585314:13
wicaThat file is not there14:13
wicathe normal user, in the cli, can create files there14:14
Odd_Blokewica: Is the above the full message that bzr gives?14:14
wica bzr: ERROR: Permission denied: "/var/bzr/iceshop/trunk/.bzr/branch/revision-history.tmp.1202219907.956677914.25812.1333525853": [Errno 13] Permission denied14:14
wicathis is the full msg14:14
wicawhen I change the owner of /var/bzr/ to my normal user. I can commit14:16
Odd_BlokeOK, does the 'bzr' group have the execute bit set on all directories leading up to that path?14:16
=== mrevell-lunch is now known as mrevell
wicaI did a chmod -R 177514:17
wicaSo it should be14:17
Odd_BlokeOK, so the use of the sticky bit might cause problems if you're not the owner of /var/bzr/iceshop/trunk/.bzr/branch14:22
wicawill try chmod -R -x14:22
=== jam_ is now known as jam
wica-x will remove the sticky bit14:23
wicajust to see what happens14:23
Odd_Bloke-x will also stop you being able to go into directories...14:24
jrydberg_has anyone experimented with gittorrent-like things for bzr?14:30
abentleywica: x is the execute bit, not the sticky bit.  The sticky bit is t.14:32
wicaabentley: Yes, that is true. don't know why I typed +x14:33
Parker-hmmhh... what then is s bit?14:33
wicaBut the problem is found14:33
wicaThe user with the problem, has created the problem14:34
abentleyParker-: you are probably thinking of SUID or SGID.14:34
jelmerjrydberg_: haven't seen anything like that yet14:34
Parker-ah ok14:34
Parker-wica, usually the user is THE problem :)14:34
abentleyPeople seem to confuse SGID with the sticky bit a lot.14:35
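The fix the discussion converges on — group write access plus SGID on directories (the `s` bit, not the sticky `t` bit) — can be sketched against a scratch directory. The path layout mirrors wica's `/var/bzr/iceshop/trunk`; in the real case you would also `chgrp -R bzr` the tree so the group applies:

```shell
# scratch stand-in for /var/bzr so nothing on the system is touched
REPO="$(mktemp -d)"
mkdir -p "$REPO/iceshop/trunk/.bzr/branch"
chmod -R g+rwX "$REPO"                      # capital X: execute bit on dirs only
find "$REPO" -type d -exec chmod g+s {} +   # SGID (s), not the sticky bit (t)
ls -ld "$REPO/iceshop/trunk/.bzr/branch"
```

With SGID set, files bzr creates (like `revision-history.tmp.*`) inherit the directory's group instead of the committing user's primary group, which is what makes multi-user commits work.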
Parker-hmhh.. have to think what to do with the bza-auth14:36
Parker-bzr-auth even14:39
=== mw|out is now known as mw
ubotuNew bug: #189282 in bzr "Internal error on branch of awn-core (httplib.ResponseNotReady)" [Undecided,New] https://launchpad.net/bugs/18928216:47
ubotuNew bug: #189300 in bzr "bzr push to saved location fails" [Undecided,New] https://launchpad.net/bugs/18930017:21
=== AnMaster is now known as AnMaster_
=== AnMaster_ is now known as AnMaster1
=== AnMaster1 is now known as AnMaster
=== jam_ is now known as jam
alefterisi'm getting this error when i'm trying to update or pull, any ideas? bzr: ERROR: Cannot lock LockDir(http://bazaar.launchpad.net/...bzr/branch/lock): Transport operation not possible: http does not support mkdir()19:55
alefterisBazaar (bzr) 1.0.0.candidate.119:55
Parker-is it bound?20:25
alefterisdont really know: this is what bzr info gives me: checkout of branch: http://... , parent branch: http://...20:28
alefterisis it?20:28
Parker-hmmhh.. did you use 'bzr checkout' or 'bzr branch'20:29
beunoalefteris, yes, it looks like it's bound20:30
beunoyou can just do "bzr unbind"20:30
beunoalthough it seems the branch is locked20:30
beunoyou might want to do bzr break-lock on it if you have write access20:31
Parker-because isn't http:// access read-only?20:31
beuno(use bzr+ssh instead of http)20:31
alefterisso what do i have to do to get the latest revision from launchpad? mind that the branch is on a remote server, and i haven't got my ssh keys there20:32
Parker-bzr branch http://...20:33
beunoalefteris, just do "bzr unbind"20:33
beunoand then20:33
beunobzr pull http://...20:33
beunowhich is probably faster than re-branching20:33
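The recovery beuno walks through, gathered in one place. The elided URLs are whatever `bzr info` reports for the bound and parent branches; `break-lock` only applies if you have write access, which over plain http you do not:

```
bzr unbind                  # stop treating this as a bound checkout
bzr pull http://...         # fetch the latest revisions over read-only http
# only with write access (e.g. bzr+ssh), and only if the lock is stale:
# bzr break-lock bzr+ssh://...
```

The underlying cause is that a checkout tries to lock its master branch on every commit or update, and a lock is a directory creation — which http transports cannot do.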
alefterisok you guys rock, thanks a lot :)20:35
beunoalefteris, :D20:36
Parker-doh... but I'm a soft meatbag :(20:36
beunoParker-, re-branching works too, just a bit slower20:37
beunoit's a matter of sticking around long enough here20:37
beunothings tend to repeat themselves a lot20:37
beunowe should have a !explain_this bot here at some point I think20:37
Parker-beuno, yeah I know.. :)20:38
Parker-maybe we should put some kind of notice text if a user uses checkout + http20:39
beunoParker-, right, seems reasonable, like when pushing non-locally, with the "this does not update the working tree" notice20:39
Parker-and I think there should be better error information when that "cannot mkdir" error comes up20:40
Parker-or something20:40
beunoParker-, absolutely. If you're subscribed to the mailing list, you might want to propose it (or even cook up a patch if you're up to it)20:41
alefterisbeuno, maybe a more descriptive error message would help also. i searched the web as well for the error, but found only bug reports and no clear solution about what i should do about it20:41
beunoalefteris, absolutely20:41
beunoI'll file a bug for it20:41
alefteristhank you20:42
beunothat way it can be tracked20:42
beunoParker-, ^20:42
Parker-roger :)20:42
Parker-do it, because I'm a newbie here :P20:42
Parker-started to use bzr two days ago :)20:42
beunoParker-, will do, and welcome!20:43
Parker-thanks... today published first proof of concept plugin to launchpad :P20:44
beunoParker-, that was you?  cool!  I have that thread starred to look into later on. We are really in need of something like that. I'll try and peek into the code and see if I can help you out with that20:44
taconehello. how to enable nautilus integration ?20:44
Parker-beuno, heh... thanks :)20:46
beunotacone, you have to install bzr-gtk20:46
taconeI did.20:47
taconeI noticed from the topic 1.1 is out. I have 0.920:47
tacone1.1 will enable naut-int by default ?20:47
beunotacone, ah, just saw this "(Note that NautilusIntegration is disabled at the moment, friction in the interface between nautilus and bzr resulted in substantial performance issues)."20:47
beunojelmer, is that still true?20:48
beunohey lifeless20:48
beunotacone, bzr doesn't come with any GUI tools by default, but I do strongly recommend upgrading20:48
Parker-beuno, and yeah... help would be great..20:48
taconeI saw it too. but maybe it's not updated.20:49
beunotacone, you can download the latest from: https://launchpad.net/~bzr/+archive if you're on Ubuntu20:49
taconebeuno: I just did (5 seconds :-))20:50
Parker-hmmhh.. ubuntu-backports has 1.020:50
taconenothing seems to change20:50
taconebzr --version gives 1.1.0 now20:50
taconeshould I try to restart nautilus maybe.20:50
beunotacone, it might not be enabled20:50
beunolet me give it a try20:50
taconeI will restart x in the meantime20:52
taconebeuno: I restarted the whole pc but I see no naut-int.20:56
beunotacone, did you follow the installation instructions in the README?20:56
beunoyou have to install python-nautilus20:57
jelmerbeuno: Nobody disabled it afaik20:57
beunocopy a file into ~/.nautilus/python-extensions20:57
taconewow. where's readme ? :-D20:57
beunotacone, and 0.93 doesn't seem to work with 1.120:57
beunobranching trunk to see if it does20:57
beunobut of course, jelmer would know much more about this20:57
lifelessabentley: ping20:58
taconeso should I downgrade bzr ?20:58
abentleylifeless: pong20:58
lifelessabentley: I think I have a clue about improving the in memory index stuff20:58
beunotacone, hold on a bit, lemme triple check how everything works, and I'll walk you through it20:59
lifelessabentley: I'm going to have a stab at it today20:59
abentleyHave fun.  I'll be busy anyhow.20:59
taconethanks beuno, waiting for you here :)20:59
lifelessabentley: I'm thinking you may want to merge-and-test-and-revert occasionally once I have something up20:59
abentleylifeless: If you want to play with my fast-iter-files-bytes branch, it's here: http://code.aaronbentley.com/bzr/bzrrepo/fast-iter-files-bytes21:00
abentleyI do not consider it merge worthy, so don't panic about how it works.21:01
beunotacone, got everything working, but I can't seem to get nautilus to do stuff...21:09
taconenice :-D21:10
taconeI will live without21:10
taconethank you anyway. so kind.21:10
beunotacone, it might be worth a try to follow the readme instructions21:10
beunoand restart X21:10
beunowhich I can't do at this moment21:11
taconewhere do I find readme ?21:11
beunotacone, in the plugin directory21:11
beunodownload 0.9321:11
taconethank you very much. have a good evening21:11
beunotacone, thanks, you too21:12
ubotuNew bug: #189390 in bzr "Warn users when doing checkouts with read-only transports" [Undecided,New] https://launchpad.net/bugs/18939021:30
jelmer_beuno: I'm not sure who added that bit to the wiki for NautilusBzr21:47
beunojelmer_, I couldn't get it to work though. Does it require restarting X?21:48
lifelessjelmer_: well the code is disabled21:49
lifelessjelmer_: there is an XXX about it; or was recently21:49
jelmer_ah, somebody has commented it out in the setup21:49
jelmer_beuno: It requires you to restart nautilus21:51
jelmer_beuno: that should be all21:51
jelmer_you also need to have python-nautilus and libnautilus-dev (or something) installed21:51
jampoolie: ping21:53
jamlifeless: ping21:53
lifelessjam: gnip21:53
beunojelmer_, still doesn't work  :/    I'll try to debug when I get home21:53
jamlifeless: I've been inspecting why "bzr annotate" is slow, and I'm finding some interesting points.21:54
jamI thought I would discuss it a bit before I really got down and started coding21:54
jamBasically, the #1 thing that pops up, is that for 700 revisions, we call _lookup_keys_via_location about 70,000 times21:54
jamIt seems that all of our normal Knit calls have to do a bisect lookup21:55
jamso every time we do "get_options()", "get_method()", etc.21:55
jamAnd "get_parents()" has to do 2 lookups21:55
jambecause it checks to see if the parents are ghosts21:55
lifelessso we should not be using get_parents except when it matters21:56
jamget_delta() ends up doing about 5 lookups, because it uses the get_options, get_methods, and get-parents21:56
lifelessand for decompression trees it does not matter21:56
lifelessa grab-it-all-at-once api would be better for knit text extraction21:56
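jam's numbers (70,000 `_lookup_keys_via_location` calls for 700 revisions) come from each accessor re-doing its own bisect. A toy sketch of the caching idea — one bisect per key, shared by all accessors — with invented names, not bzrlib's actual index code:

```python
import bisect


class CachedKnitIndex:
    """Toy model: one bisect lookup per key, shared by all accessors."""

    def __init__(self, entries):
        # entries: iterable of (key, options, method, parents) tuples
        self._entries = sorted(entries)
        self._keys = [e[0] for e in self._entries]
        self._cache = {}

    def _lookup(self, key):
        entry = self._cache.get(key)
        if entry is None:
            i = bisect.bisect_left(self._keys, key)
            if i == len(self._keys) or self._keys[i] != key:
                raise KeyError(key)
            entry = self._cache[key] = self._entries[i]
        return entry

    def get_options(self, key):
        return self._lookup(key)[1]

    def get_method(self, key):
        return self._lookup(key)[2]

    def get_parents(self, key):
        return self._lookup(key)[3]
```

Calling `get_options`, `get_method`, and `get_parents` on the same key then costs a single bisect rather than three (or five, counting the ghost checks jam mentions), which is the spirit of lifeless's grab-it-all-at-once API.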
lifelessalso the index bit is slow right now, I'm looking at that today, had some ideas come to me after explaining it to aaron yesterday21:56
jamwell, as to that, we don't really need the fulltexts to do reannotate21:57
jamif we are going to be doing it from the line deltas anyway21:57
jamwe can just build up annotated fulltexts on the fly21:57
jamrather than building up fulltexts21:57
jamgrabbing the deltas21:57
jamand then combining the two into annotated fulltexts21:57
abentleyjam: The line deltas aren't accurate.21:57
jamabentley: but you are using them21:57
jamabentley: "annotate_knit"21:57
abentleyThey don't have correct information about the last line.21:57
abentleyI believe that operation uses both the fulltexts and the line deltas.21:58
jamabentley: you use the matching blocks from the line deltas, and the fulltexts in reannotate21:58
jambut you use the matching blocks assuming they are accurate21:59
jamIs the only problem the "eol" marker?21:59
lifelessjam: you're off on a tangent now, I meant that the in memory management of the pack disk index disk data can be improved; this doesn't affect fulltexts/not22:00
abentleyjam: I believe so.  Got work stuff.22:00
jamlifeless: well, I'm saying you don't need to extract all the knits into fulltexts22:02
jamand was trying to figure out a better way to stream out what I did need22:02
lifelessjam: sure; I didn't think we did. Anyhow its the exact same set of index calls needed22:02
jamIt also makes me feel like we should be caching in Knitgraph a bit more22:02
lifelesswe have to access the component, delta or fulltext, from the index layer22:03
lifelessand we should access each index record once.22:03
abentleyjam: if you look at KnitContent.get_line_delta_blocks, you'll see it does not blindly trust the matching_blocks-- it uses the fulltexts to ensure the delta of the last line is correct.22:12
abentleyThis is probably the biggest reason why I hate knit deltas.22:13
jamabentley: I do see that, though I wonder if we could just handle the eol on our own and not have to build up the fulltext22:13
jambut thanks for pointing it out22:13
jambut yeah, the knit way of storing things is to record whether there is an eol22:14
abentleyjam: You'll have to be careful if you take that approach.22:14
jamthen always force a final eor22:14
abentleyfor example, the last line could match some other line because of the lack of an EOL.22:14
abentleyAnd of course, there may also be real differences.22:15
jmlIs the RC out yet?22:15
lifelessjam: the encoding misses data22:15
lifelessjam: it does not indicate if the delta affects the last line22:15
abentleyother than EOL22:15
jamtrue enough22:16
jamthough if you are doing annotated fulltexts anyway, you'll still have a fulltext to work with22:16
jamyou just don't have to have 2 of them22:16
abentleyTBH, I'm not sure whether you can calculate it without using using the fulltexts or doing equivalent work.22:17
lifelessjam: depends on the deltas I think22:17
jamanyway, the biggest expense is all around the KnitGraphIndex at the moment22:17
jamso it is the place to focus22:17
jamreannotate is about 3/15s22:17
jam_get_entries is 8.5s/15s22:18
abentleyBecause in order to compare against the last line, you may need to hit any delta in the history to find its contents.22:18
abentleyAnd knit delta format won't tell you how many lines there were in that version.22:18
jamSure, I wasn't planning on shortcutting going into all history yet22:19
jamjust thinking that caching 700 fulltexts in memory was probably not a good idea22:19
abentleyjam: we're doing that, are we?22:19
abentleyYeah, probably not the best idea.22:19
jamancestry = get_ancestry(versions)22:20
jamlines = get_line_list(ancestry)22:20
lifelesswheeee boom22:20
Parker-jam, put option to user, if he/she want to use memory caching?22:21
lifelessannotate an iso, dare you to22:21
abentleyPerhaps an LRU cache would make sense, with reasonable groupings.22:21
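abentley's LRU suggestion in a minimal stand-alone form (bzrlib has its own cache utilities; this is just an illustration of the eviction policy, with invented names):

```python
from collections import OrderedDict


class LRUCache:
    """Keep at most max_items values; evict the least recently used."""

    def __init__(self, max_items=64):
        self._max = max_items
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)   # mark as most recently used
        return self._data[key]

    def add(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self._max:
            self._data.popitem(last=False)   # drop the least recently used
```

Applied to annotation, this would bound the number of fulltexts held in memory to a fixed working set instead of one per revision in the ancestry.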
jamParker-: well we would need at least a couple copies22:21
lifelessParker-: we try to solve problems in such a way that no questions/options are needed to get good performance all around22:21
jambut we shouldn't ever need all of them at once22:21
jamand we should be able to build them up as we go22:21
lifelessoptions require explanations and understanding of consequences22:21
Parker-lifeless, yeah, but some users have plenty of memory to spare.. so if they want... they can also boost with memory caching?22:22
abentleyjam: well, this was a quick fix.  You can probably do better if you're willing to reimplement knit building.22:22
lifelessParker-: perhaps; but if we can solve it such that the extra memory is irrelevant22:22
lifelessParker-: then there is no question to ask22:22
jamthere is the possibility of caching intermediate representations for something like gannotate22:23
abentleyjam: But I'm really surprised that you're not pursuing annotation caching.  Because this annotate code is pretty fast when you're dealing with < 1000 revisions.22:23
jambut that is certainly a different question22:23
Parker-I didnt meant to solve this one... but in future or something...22:23
jamabentley: it is 15s for repository.py which is 700 revisions22:23
jamI'm not sure if that is "pretty fast"22:23
jamI'm trying to find the case where it was "9 minutes"22:24
abentleybzr annotate -r 1000 NEWS was pretty fast.22:24
abentleyBut that was on knits.22:24
jamI have the feeling it is a mixed "too many packs" and too many revisions22:24
lifelessParker-: sure; I'm not rejecting your idea, but suggesting we should see if there are solutions that are as good without options22:24
lifelessjam: how many packs do you have ?22:24
jamlifeless: here only about 6 or so22:24
lifelessjam: and whats a histogram of components for them22:25
Parker-but then.. what I do with my gigs of ram :(22:25
jamlifeless: *I* don't have the 9-minute annotate problem22:25
lifelessParker-: use it for java22:25
jamlifeless: it was someone else's error report22:25
Parker-lifeless, ah yes...22:25
Parker-forgot java :)22:25
lifelessjam: right; an interesting report would be a histogram of that fileid across packs22:25
abentleybtw, I believe that get_line_lists does not take nearly as much memory as repeated calls to get_lines.22:28
jamabentley: well lines that are shared between versions will stay shared as in-memory strings22:28
abentleyRight.  Mostly.22:28
abentleyI'm not sure whether that happens when a version has more than one descendant.22:29
abentleyActually, it's probably unique there, too.22:29
abentleyIt would be the list that would have to be copied.22:29
abentleyigc: evening.22:30
igchi abentley22:30
lifelessabentley: yes list copies - thats the code we were talking about saturdayish22:30
abentleylifeless: indeedy.22:30
abentleylater, gators.22:31
jamlater abentley22:32
jamhave a good evening22:32
jamstatik: ping22:32
lifelessabentley: jam: bz2 is only slightly better than gzip22:34
lifelessI thought bzip2 was slower though22:34
foomit's insanely slower, worst case22:35
jamit usually is22:35
foomi had a log file which took 2 hours to compress with bzip and 5 minutes with gzip22:35
jamand it is generally slow symmetrically22:35
jamso it is slow to compress and can be slow to decompress22:35
lifelessso I'm wondering if this change you guys requested is the right thing22:35
lifeless45K -> 40K is the difference22:35
jamlifeless: well, you are the one doing the profiling22:35
jambzip2 comes into its own in the 900k range22:36
jamnot really in the 40k range22:36
jamThe big difference being that gzip has a very limited window22:36
jamsomething like 16k22:36
jamwhile bzip has a much larger window22:36
lifelesshuffman vs simple sliding window22:36
lifelessI'm going to add size-doubling-every-request to this get_parents thing22:36
lifelessso that full-history is not linear round trip counts22:37
jamafaik, LZMA is mostly gzip with a huge window, and can do better than bzip2 on compression, and still keep gzip decompression speeds22:37
jamlifeless: bz2 may not make sense at this level, I'll certainly let you do the tradeoff evaluation22:38
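The window-size point is easy to observe with the stdlib. The sample data below is invented and the exact sizes vary with input, so no specific ratios are claimed — only that on small, repetitive blobs the two compressors land close together:

```python
import bz2
import zlib

# repetitive data that fits well inside gzip's ~32k sliding window
data = b"revision-id: someone@example.com-20080205-abcdef\n" * 2000

gz = zlib.compress(data, 9)   # DEFLATE, the same algorithm family as gzip
bz = bz2.compress(data, 9)    # block-sorting compressor, blocks up to 900k

# both shrink this dramatically; bzip2's larger blocks only start to win
# once redundancy is spread wider than gzip's window can see
print(len(data), len(gz), len(bz))
```

That matches jam's 45K vs 40K observation: at these blob sizes the choice is mostly about CPU cost, not compression ratio.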
lifelessmust be some fun with window reference pointers22:38
lifelesswell I've done the commit already22:38
lifelessI know we do get farking huge histories22:38
jamlifeless: but if the request is designed to buffer X amount of data per round-trip an individual request may be quite limited for all cases22:39
lifelessX, then 2X, then 4X22:39
lifelesscurrently its X22:39
ubotuNew bug: #189419 in bzr "Odd behaviour when adding directory with wrong case on Windows" [Undecided,New] https://launchpad.net/bugs/18941922:41
jamlifeless: we might consider capping it at Y*X, if only for interactivity purposes22:46
jam(say 16X or something)22:46
lifelessI'm seeing 2.4K revisions in a 64K compressed blob22:46
lifelessour entire history is what, 15K?22:47
lifelessso thats 2.4 + 4.8 + 9.6 - 3 round trips22:47
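lifeless's size-doubling scheme, with jam's suggested cap, as a sketch (the function and parameter names are invented here, not bzr's API):

```python
def doubling_batches(keys, base=100, cap_factor=16):
    """Yield slices of keys sized X, 2X, 4X, ... capped at cap_factor * X,
    so a full-history walk costs O(log n) round trips instead of O(n)."""
    size = base
    start = 0
    while start < len(keys):
        yield keys[start:start + size]
        start += size
        size = min(size * 2, base * cap_factor)
```

With 15,000 revisions and the 2,400-revision blob lifeless quotes as the base, this walks the whole history in three round trips (2,400 + 4,800 + 7,800), while the cap keeps any single request small enough to stay interactive.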
jamlifeless: revisions.kndx is 1.5MB22:47
lifelessyou have knit repos left?22:48
lifelesswe have to have a chat22:48
jamlifeless: yep22:48
jammy development stations are all packs22:48
jammy public repo is still knits22:48
jamgzip revisions.kndx is .5MB22:48
jambzip2 is about the same22:48
jam482KB versus 427KB22:49
lifelesshi statik`22:52
jamhey poolie, phone time, right?23:00
lifelesspoolie: I can still merge stuff for 1.2?23:00
poolielifeless, yes23:01
* lifeless breaks bzr.dev network with itself, again23:01
ubotuNew bug: #189431 in bzr "pre_commit hook overly expensive" [Undecided,New] https://launchpad.net/bugs/18943123:20
mattimustanghi, i'm looking at converting from svn to bzr. Does bzr support svn style properties?23:38
lifelesssome yes23:39
lifelessis this for 3rd party extension, or for file-ending support on windows etc?23:40
lifeless(that is are you asking about 'generic stuff' or specific features)23:40
foomit doesn't support keywords or eol-style23:41
lifelessfoom: yet; I have a plan.23:41
foomdoes 1.2 make any more great strides in speed?23:42
lifelessyes network pull on smart server will be way faster23:42
lifelessand use less memory23:42
lifelessand make more coffee23:42
foomhow about local operations?23:42
lifelessbranch and checkout are faster23:43
lifelessannotate too I think23:43
foomi haven't even gotten to the point where pulling data is a problem yet. :)23:43
lifelessgive me a sample branch and the ops that are slow and I'll fix it23:43
foomi have a 50krev 1.3G repo, full of proprietary data. I imagine it has the same issues as large public projects, though.23:45
lifelesswe're performing quite adequately on openoffice23:45
lifelesswhich is similar in size23:45
lifelesslast time we discussed I think I said words to the effect of 'file bugs, tagged performance'23:46
lifelesswhich I'll repeat, I'm happy to explain why this matters if its not obvious23:46
foomhm, i guess i should grab the ooffice repo and try using it23:46
lifelessigc can help you there23:46
foomit'll at least be interesting to see whether there is a marked difference in performance23:47
lifelessstart by filing bugs though23:47
lifelesscommand X is slow, I have <---> this much data of the thing its looking at.23:47
lifelesswe'll probably get you to get some stats about your repo without disclosing data to help analyse things23:48
lifelessbut I'm really begging you - file bugs.23:48
igclifeless: yes, I ran a full set of local benchmarks on openoffice last night23:48
lifelessno bug, no chance of developer thinking about your case. You'll only accidentally get fixed. Bug. Things Get Fixed.23:49
igcon both gutsy and leopard23:49
igcno deep history in those results (yet)23:49
pooliejam, did you really want all of ~bzr to join ~bzr-log-rss-devel?23:49
beunoigc, is there any way for stats to be automated with bzr.dev?  have it spit out nightly stats in HTML format23:50
lifelesspoolie: hes using an idiom to allow devs that are only interested in log-rss to do so, but to grant all bzr devs access to23:50
lifelessI think its not really needed; but shrug:)23:50
pooliedo you mean we each individually accept or decline?23:51
poolieoh i see23:51
igcbeuno: I think so via cricket23:51
foomigc: do you have the converted repo available?23:51
igcfoom: see http://bazaar-vcs.org/Benchmarks23:51
lifelessigc: I think foom wants to pull the converted deep history repo and play23:51
igcraw snapshots are available here: http://people.ubuntu.com/~ianc/benchmarks/src/23:51
igcI'm working on getting a conversion of openoffice today as it happens23:52
beunoigc, cricket?   Would be nice to have to see performance gains. Even eventually have statistics of how it's being improved23:52
igcI'll make that available as soon as I can23:52
lifelessdidn't jam do one already ?23:52
igcum - someone at Canonical did, not sure if it was jam23:53
igcw.r.t. cricket that is23:53
lifelessigc: jam. You should be able to just copy that conversion23:53
lifelessjam: ^23:53
igcw.r.t. OOo, jam was looking at a conversion from the CVS master but I believe it required a lot of manual work to get right23:54
igcgiven their tagging conventions, etc.23:54
lifelessyes but as a data point for playing with ..23:55
lifelessit doesn't have to be right23:55
lifelessfoom: we could pair-file-bugs if you like23:55
pooliespiv, can you explain why we get TooManyConcurrentRequests errors, like in bug 18930023:56
ubotuLaunchpad bug 189300 in bzr "TooManyConcurrentRequests error during push" [Undecided,New] https://launchpad.net/bugs/18930023:56
foomigc: on those benchmarks, it looks like you're testing with no history?23:56
foomlifeless: thanks; not today at least, though.23:57
igcfoom: that's right - it's on my list real soon now to add that23:57
igcfoom: first step is getting a sensible conversion of 3 of large projects which I'm doing today23:57
pooliei would have thought that could only come from some kind of structural code bug23:57
spivpoolie: I think maybe because of poor error handling23:57
pooliebut it seems to occur when there are network errors23:57
igcs/3 of/several/23:58
pooliemaybe if the network is dropped we skip reading the reply, or noticing that there will be no reply23:58
spivpoolie: i.e. issuing a request that expects a body, but gets an error instead, but forgets to do cancel_read_body in the error handling23:58
poolieand then fail to see the next one?23:58
spivWhich is the sort of thing that the next protocol version ought to help with...23:58
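spiv's diagnosis can be modeled as a toy: a medium that allows one outstanding request at a time, where skipping cleanup on the error path leaves it busy, so it is the *next* call that raises TooManyConcurrentRequests. The names echo the discussion, but none of this is bzrlib's real smart-protocol code:

```python
class TooManyConcurrentRequests(Exception):
    pass


class SmartClientMedium:
    """Toy model: at most one outstanding request on the medium."""

    def __init__(self):
        self._current_request = None

    def start_request(self):
        if self._current_request is not None:
            raise TooManyConcurrentRequests("previous request never finished")
        self._current_request = object()

    def finish_request(self):
        self._current_request = None


def call_expecting_body(medium, network_ok=True):
    medium.start_request()
    try:
        if not network_ok:
            # the server sent an error (or the connection dropped)
            # instead of the body we expected
            raise IOError("no response body")
        return "body bytes"
    finally:
        # the equivalent of cancel_read_body: without this cleanup on the
        # error path, the medium stays busy and the *next* request fails
        medium.finish_request()
```

If the `finally` block is omitted, the first network error leaves `_current_request` set and every later request fails with TooManyConcurrentRequests — matching the symptom in bug 189300.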
lifelessigc: we have a mozilla one already23:58
poolieigc, i'd be inclined to start with bazaar itself,23:59
poolieit's easily accessible and it has a realistic merge graph23:59

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!