[00:12] [PUPPETS]Gonzo: Are you awake, Guillermo? [00:21] hello [00:22] hi === mw is now known as mw|out [00:31] abentley: because there's no bzrdir_implementations test for it, I guess. [00:32] spiv: I misspoke, RemoteRepository, not RemoteBzrDir [00:33] It looked very much like you had decided that it should not be implemented. [00:33] If I'm mistaken, I'll gladly implement it. [00:34] abentley: hmm [00:35] abentley: well, currently the smart protocol doesn't have any verbs to talk about working trees [00:35] abentley: so it might have been intentional. [00:36] We need it to properly implement clone_on_transport, so I can do the VFS fallback thing for now. [00:36] abentley: ideally, I would like for it to be the case that a push over the smart protocol would cause the server to update a working tree on the remote side if there's one there. [00:37] spiv: I think that's a problem because of conflicts [00:38] lifeless: yeah, it's definitely not a trivial problem [00:39] lifeless: I also don't think it's a particularly important problem. [00:40] lifeless: so don't worry, there's not too much risk of me doing anything about it anytime soon :) [00:41] spiv: Not creating working trees remotely is handled in other ways, so there's no need to disable set_make_working_trees [00:43] abentley: incidentally, there's no direct repository_implementations test for set_make_working_trees. It probably gets exercised by test_clone_shared_no_tree, depending on whether the test bails out early or not... if you want to add a simple direct test for set_make_working_trees (and make_working_trees), that would be good. [00:43] abentley: sounds reasonable to me [00:44] Okay, I'm happy to do that, and it will make my repository acquisition policy patch nicer. [00:47] lifeless: I'm wondering whether we should be adding a scenario to the bzrdir implementation tests: bzrdir with non-default repo, tree and branch. 
[00:48] I had a test for clone_on_transport that was creating a repo in the default format, not the bzrdir format. [00:48] I'm manually forcing that test to run with "knit". [00:52] abentley: so a scenario which runs all the bzrdir tests hoping to catch things that make outputs be default rather than the source format ? [00:52] Right. [00:52] lifeless: considering the two recent bugs with tzdata and python-xml [00:52] both could not be reproduced with upgrades, but were likely caused by them [00:52] abentley: so you're going to remove the special casing for Remote* in that patch, then? [00:53] spiv: Yes. [00:53] i wonder if some better database could keep track of every package they ever published [00:53] abentley: ok, good. I don't need to complain about your use of "from remote import ..." then ;) [00:53] abentley: I think that has some value [00:54] spiv: Why do I always do that? [00:55] poolie: yes, there are such things :P [00:55] poolie: archive, my conflict finder, etc === mwhudson__ is now known as mwhudson [01:08] yay mass api changes landed [01:15] New bug: #214878 in bzr "Crash when trying to push to a remote directory (FTP)" [Undecided,New] https://launchpad.net/bugs/214878 [01:16] hm, i guess now would be a good time to fix how loggerhead does annotate [01:16] (it calls annotate_iter currently) [01:17] so many things to do === hexmode` is now known as hexmode [01:49] lifeless: did you get a chance to look at the new rev of that doc? [01:51] markh: yes, I approved on bb [01:53] it addresses the concerns you had re the tone? [02:04] are bzr 1.3.1 bits (release) packaged anywhere? [02:11] hardy === poolie changed the topic of #bzr to: http://bazaar-vcs.org/ | Bazaar 1.4 bug day today! - help us report, triage, fix and review... [02:19] and gutsy-backports i believe [02:23] so [02:23] I have a task for someone [02:23] try freeze on bzr [02:24] oh, python freeze, to turn it into a binary? [02:24] lifeless: quick call? 
[02:24] yes [02:24] sure [02:51] New bug: #214894 in bzr "AssertionError when pushing to remote bzr+ssh repository (len(history) != revno)" [Undecided,New] https://launchpad.net/bugs/214894 [02:56] fbond: ping [03:19] commenting on bug 209281 patch [03:19] Launchpad bug 209281 in bzr "Windows diff apps don't understand symlinks created by Cygwin bzr diff --using" [Undecided,Invalid] https://launchpad.net/bugs/209281 === BasicMac is now known as BasicOSX [03:24] * poolie looking at "plugins in arch-indep directory" [03:25] lifeless: hi [03:26] fbond: re looms and rebasing [03:27] fbond: why do you want to break collaboration? [03:27] ok, already merged [03:27] lifeless: well, I wasn't collaborating at the time :) [03:28] lifeless: frankly, quilt isn't a collaboration tool, so if I'm using looms as a better quilt, I don't care much about collaboration. [03:28] fbond: right, I realise quilt isn't a collaboration tool... one of the things about looms that makes it a better quilt is the collaboration aspects :P [03:28] * poolie reads command-not-found core [03:29] poolie: perhaps a #bzr-bug-day channel? [03:29] lifeless: Okay, my understanding of the impact of rebasing is probably limited. [03:29] fbond: that said, I'd like to understand why you want to rebase rather than have the changes merged up (we can put equivalent automation in both approaches) [03:29] maybe [03:30] is this too noisy? [03:30] poolie: it's low signal [03:30] i'm trying not to overlap with spiv [03:30] What I want to do with looms is to manage a set of possibly dependent changesets and then pull them off, one at a time, as a single changeset that I can then merge, push, pull, etc. [03:31] poolie: well, as a journal I don't know that it will scale. I suggest gobby if you want to avoid overlap and scale with people doing this [03:31] What I don't like about the current behavior is that I end up getting changes mixed into higher threads that are really specific to a lower thread. 
[03:31] fbond: so merges don't preclude that, in fact they allow you to handle more merge cases. [03:32] lifeless: It's certainly possible that I'm overlooking a feature that will do what I want. [03:33] fbond: I don't understand what you mean by mixed in [03:33] fbond: rebasing would have as much code commingling [03:33] Hmm. I really want a single commit per thread. [03:33] and quilt has as much too; when you work as a stack. [03:34] fbond: there is at any point in time a cherrypick merge that will drag out a single thread with nothing from other threads. [03:34] fbond: 'bzr merge someloom -r thread:threadbelow..thread:feature' [03:35] lifeless: Okay, I think I see how that would work. [03:35] merging make-dist and command-not-found core.. [03:36] fbond: if you happen to take threads from bottom to top, then cherrypicking is equivalent to just merging/pulling individual threads [03:36] fbond: the bottom thread is typically the trunk [03:36] spiv: can you do the tweaks and submit your _get_line patch? [03:37] Yeah. [03:37] What I'm really after is to have a set of mutable commits, each of which represents a specific feature or fix. I'd like them to remain mutable until I decide that feature/fix is complete, at which point I want to roll that commit into a "final" commit that I apply to trunk. [03:37] if anyone wants to approve and merge the post_change_branch_tip hook patch, it'd be appreciated :) [03:37] jamesh, it's near the top of my list [03:37] lifeless: It sounds like cherry-picking the "final" commits would do that. [03:37] how much hope this gives you i can't say :) [03:37] thanks [03:37] fbond: so, there are two basic ways to mutate a commit; you can either commit a change on top of it, or you can remove it from history and commit it fresh [03:38] fbond: the former allows other developers (and yourself in dependent commits) to adjust to your mutations [03:38] lifeless: right, in the end I want a single commit with a single log message, though. 
[03:38] So I guess I cherry pick the commits that make up that single commit and then `bzr revert --forget-merges'? [03:38] fbond: the latter looks to the vcs system as two people doing slightly different things, except when special logic (rebase) is applied to avoid massive conflicts. [03:39] lifeless: yeah, I follow you. [03:39] lifeless: But I wouldn't be sharing my loom with others. [03:39] fbond: the problem with rebase is that other people have to know exactly when you rebased to reproduce it exactly, but by the very nature of rebase they cannot know this. [03:39] lifeless: The loom would be a temporary tool for me to manipulate commits. [03:39] fbond: doing a --forget-merges will in fact do what you want yes. [03:39] lifeless: yes, it would, at the cost of having to cherry pick multiple commits instead of rolling off a single commit... [03:40] fbond: well you're not grabbing multiple commits; by forgetting merges you get a single commit [03:40] lifeless: I'm tempted in that case to bzr down-thread; bzr uncommit; [make changes]; bzr commit; bzr up-thread [03:40] fbond: if you do that, bzr will still give you a merge :P [03:41] lifeless: yes, forgetting merges ends up with the same result, but I have to manually track which commits actually "belong" with that thread, and which are (sort of) incidental commits that result from merging mutations to lower threads. [03:41] lifeless: Am I making sense? [03:41] fbond: no, because the recipe I gave you before lets bzr track that for you [03:41] lifeless: let me re-read that [03:43] lifeless: bzr must be smarter than I was thinking; let me make sure I understand: [03:43] fbond: anyhow, meta discussion - I would like looms to work well for you and will happily make changes *that don't preclude its use as a collaboration tool* to do that. [03:43] lifeless: The cherry pick would merge those "incidental" commits, but --forget-merges would drop them and I'd be left with a diff that represents my intent. Right? 
[03:44] right [03:44] lifeless: Okay, I can be happy with that. [03:44] lifeless: I have a tendency to want history to represent my intent, rather than representing what I actually did. [03:44] if you'd like to write up some docs for the HOWTO that describe what you want and why [03:44] I'd be delighted to try and make this clear to other folk with the same penchant. [03:45] lifeless: Yeah, I think that would be a good idea. I feel that this work-flow must not be that uncommon. [03:45] lifeless: I'll write something up. [03:46] thanks! [03:46] lifeless: and thank you, too. [03:46] lifeless: I'd like to try this out a bit before writing anything, so it'll probably be a few days. [03:47] lifeless: Where should I submit a document, the wiki, or elsewhere? [03:50] the bug report about this would be good [03:51] https://bugs.edge.launchpad.net/bzr-loom/+bug/214657 [03:51] Launchpad bug 214657 in bzr-loom "Should support rebasing threads" [Undecided,New] [03:51] lifeless: okay, you got it [03:51] lifeless: so that description actually corresponds to MissingFeature i think (like 'you should install paramiko') [03:51] poolie: ECONTEXT [03:51] and UnavailableFeature means an unavoidable platform problem? [03:53] re the warning about windows diff apps [03:55] we currently differentiate between skipped, unavailable, known failure [03:55] * TFKyle wonders why people would use bzr in cygwin, just for symlink support? [03:56] i will try to update the doc to reality [03:56] well they might be using cygwin anyway [03:57] TFKyle: Some people like shell wildcard expansion to work (and other shell features, too). [03:58] fbond: hmm, couldn't they still just use a normal windows Python+bzr on there? [03:59] TFKyle: Oh, dunno, don't use any of the above. [04:03] spiv: Other methods, when invoked on formats that don't support them, raise UpgradeRequired. Would that be suitable for you? 
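[Editor's note] lifeless's cherry-pick recipe above ('bzr merge someloom -r thread:threadbelow..thread:feature') can be modelled as a set difference: a thread's own changes are whatever it contains that the thread beneath it does not. A toy Python sketch of that idea — the loom structure, thread names, and change ids here are purely illustrative, not bzr-loom's actual data model:

```python
def thread_delta(loom, thread):
    # loom: ordered list of (thread_name, set_of_change_ids), bottom first.
    # The changes "belonging" to a thread are those not already present
    # in the thread beneath it -- what the cherry-pick merge drags out.
    names = [name for name, _ in loom]
    changes = dict(loom)
    i = names.index(thread)
    below = changes[names[i - 1]] if i > 0 else set()
    return changes[thread] - below

# Hypothetical loom: trunk at the bottom, a feature thread on top.
loom = [
    ("trunk", {"base"}),
    ("bugfix", {"base", "fix-1"}),
    ("feature", {"base", "fix-1", "feat-1", "feat-2"}),
]
```

With this model, `thread_delta(loom, "feature")` yields only `{"feat-1", "feat-2"}` — the analogue of the single clean diff fbond is after once `bzr revert --forget-merges` drops the incidental merge bookkeeping.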
[04:05] abentley: Hmm, the message on UpgradeRequired says "branch" rather than "repository", but apart from that it seems ok. [04:05] Okay, class RepositoryUpgradeRequired(UpgradeRequired) [04:07] I don't understand why that's better than NotImplementedError, though. [04:07] spiv: It's better because the semantics are unambiguous. [04:08] I'm also not sure that it's tasteful to have set_make_working_trees succeed if the new_value happens to match what the repository is hard-coded to do, and raise otherwise. [04:08] We use NotImplementedError for things that *must* be implemented, like Tree.get_file. [04:09] It seems like that will lead to callers getting surprised later rather than sooner. [04:09] Okay, I can raise it all the time. [04:09] Well, we also use it for leave_lock_in_place [04:10] (for instance) [04:11] So if Tree.get_file raises NotImplementedError, that means "implementor was drunk", but if Repository.set_make_working_trees raises NotImplementedError, that means "implementor made a rational choice not to implement it"? [04:11] Well, it means "this object doesn't implement that" [04:11] How you assign blame from there seems to be out-of-scope ;) [04:12] I don't mind if we decide that we should have distinct exceptions for those cases, but we don't seem to at the moment. [04:12] (Otherwise I'd expect us to already have a RepositoryUpgradeRequired defined...) [04:13] Test suites need to distinguish between these cases. [04:13] Anyway, I don't feel strongly about how this should be done. [04:15] So if it does make your testing easier, please do add the exception! [04:16] I see that Robert on the list shares my impression about the meaning of NotImplementedError [04:16] So clearly there's a HACKING guideline waiting to be written, one way or the other :) [04:20] abentley: *optional* methods use NotImplementedError today, and the test suite looks for that. 
[04:20] abentley: get_file is not an optional method [04:21] It sounded like you were saying "iff a method is optional, it raises NotImplementedError if not implemented" [04:22] abentley: yes, I misread your line beginning 'So if' [04:22] abentley: your line was correct [04:23] in this particular case I don't think RepositoryUpgradeRequired is appropriate, because we might in future have repositories that don't support this method [04:23] (see the discussion about 'repositories are not semantic') [04:23] So get_file raises it, and therefore we don't have clear semantics for NotImplementedError. [04:23] we do have clear semantics: that method isn't available on that object [04:24] You said it meant the method was optional. [04:24] We don't have clear semantics about whether the method is optional. [04:24] No, I said on optional methods we use NotImplementedError when it has not been implemented. [04:24] NotImplementedError always means 'not implemented' [04:25] Fine. The semantics we have are not rich enough for me. [04:25] ok [04:25] what problem is it causing [04:25] I'm getting test failures. [04:26] as in, is it a testing problem or a client code problem [04:26] I have code that exercises set_make_working_trees, and it fails if NotImplementedError is raised. [04:27] I don't want it to be unimplemented except on very old formats. [04:28] Because clone_on_transport is supposed to create an exact copy, and it can't do that if it can't set that setting. [04:29] abentley: given the current API your code should catch NotImplementedError [04:29] abentley: we do this elsewhere in the test suite when testing optional methods. [04:29] abentley: will that do the job for you ? [04:29] No. Then I don't have a test that it's implemented on RemoteRepository. [04:29] interface tests are not the right place for such a test [04:30] bzrlib/tests/test_remote.py is where such a test should live. 
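[Editor's note] The convention being worked through here — optional methods raise NotImplementedError, callers and tests catch it — can be sketched in a few lines of plain Python, including abentley's proposed RepositoryUpgradeRequired subclass. The class and function names below are illustrative, not bzrlib's actual code:

```python
class UpgradeRequired(Exception):
    """A newer format is needed for this operation."""

class RepositoryUpgradeRequired(UpgradeRequired):
    """Proposed subclass: lets a test suite tell 'format too old'
    apart from an ordinary unimplemented optional method."""

class OldFormatRepository:
    def set_make_working_trees(self, new_value):
        # Optional method: this (hypothetical) old format cannot
        # store the setting, so it always raises.
        raise RepositoryUpgradeRequired("upgrade the repository first")

def try_copy_setting(new_value, repo):
    # Client code treats both exceptions as "setting not supported",
    # the way interface tests catch NotImplementedError today.
    try:
        repo.set_make_working_trees(new_value)
        return True
    except (NotImplementedError, UpgradeRequired):
        return False
```

Because the new exception subclasses a common base, existing `except UpgradeRequired` handlers keep working while tests can still assert on the more specific type.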
[04:30] such a test can test that the method doesn't raise and let the interface tests test the precise behaviour [04:30] You know what, screw this. Spiv approved my plan. I'm going to submit it to PQM. If you think it should be changed, change it. [04:31] where did that come from [04:33] That comes from your word games and from acting like your opinion is the only valid one. Last I checked, poolie was in charge of this project, not you. [04:33] *blink* [04:39] I didn't know I was playing word games; and I don't see what 'who's in charge' has to do with this - our code gateway is geared around getting enough approval from mature devs, not any one person oking or not oking each patch [04:39] but you're obviously upset, and I'm feeling got at with the word games ad hominem [04:47] lifeless: "who's in charge" has to do with flatly stating that X is wrong and Y is right, despite the fact that other people clearly feel otherwise. [04:49] lifeless: "Word games" has to do with our argument about whether NotImplementedError had rich enough semantics for our use case. [04:50] Submitting to PQM has to do with the fact that I have a tweak vote from a mature dev and no vetoes. [04:51] abentley: I have replied to your playing with stacked branches mail as promised. basically I agree with everything you raised. [04:52] abentley: In the prior conversation I was making statements about what is present in the code base, I don't think opinion has anything to do with that. I wasn't trying to argue with you about N.I.E., I was trying to work through it to grok the issue you had. [04:53] and of course sending to PQM is fine; as you say - you have enough votes and no vetoes. I was purely surprised by the sudden vitriol. [04:58] From my perspective, your patches are sailing past, while the things I try to do are blocked or delayed, primarily by you. There has been some tension building. 
[05:00] oh [05:00] I know three I've caused grief of various sorts on [05:00] the rich-root format meta-topic [05:00] (mind you, I have patches I haven't sent in because they made you unhappy; I think that that one is quite mutual) [05:01] the fast-build-tree using Storage [05:01] and Storage [05:02] we had a long discussion about Storage, and I ended up ok with the goal, but not bringing another concept without cleaning up what we had first, because we have been building a very long tail of uncleaned up things [05:02] And MPRepo [05:02] that one I thought john was working on [05:04] MPRepo was the one I did around Xmas, before trying again with Storage. [05:04] yes, using multiparent diffs [05:07] Also, the other weekend when you suddenly said reconcile should change SHA1s. That derailed my plans to implement pack1.4 that weekend. [05:11] abentley: well, I was offering that to avoid 'pull' in general doing magic. I certainly didn't intend it to derail you. [05:12] I've reviewed TransportConfig [05:12] :tweak on it, just could be clearer in the class docstring [05:12] Well, it did, because rewriting SHA1s was what I was trying to avoid. [05:13] And there would have been no point to doing pack1.4. [05:13] the problem is we have stuffed sha1s today. I'd like to say all existing upgraded rich-root/subtrees repositories are just broken, but that seems overly harsh to our users. [05:15] anyhow; this is a good point for my day to end [05:15] But what are those sha1s good for anyway? We already have sha1s of all the repository contents, so we can do file integrity checks. [05:16] we can't detect bit errors or deliberate corruption in the inventory without them. [05:16] We can't detect deliberate corruption anyway-- they can just rewrite the revision. [05:17] I'm off. 
I do *not* want to be a blocker for stuff you are doing; OTOH I am not going to avoid commenting when I think there is something we can improve on, or which we need to do differently. [05:18] if you want to talk later about the tension; try and defuse it or find something I can do differently that will be less disruptive for you, I'd be happy to do that when I'm not about to halt() from low blood sugar [05:30] lifeless: I am still willing to try. Lately I have felt like we had lost the ability to work together, which saddens me, because you've been part of my open-source contributions from the very beginning. [05:55] how do i unloomify a branch? [05:57] mwhudson: you fix https://bugs.edge.launchpad.net/bzr-loom/+bug/203285 :( [05:57] Launchpad bug 203285 in bzr-loom "unable to "de-loom" a branch" [Medium,Triaged] [05:58] spiv: oh great [05:58] Or more helpfully, you pull the relevant revision into a non-loom branch, and then replace the loom with that branch. [05:59] mwhudson: what I do is: move to the top thread, bzr init a new branch, pull in the loom [05:59] mwhudson: maybe that's over complex [05:59] You can also do bzr export-loom to export all threads as branches. [06:04] bundlebuggy seems to have died [06:32] abentley: I'm trying to think of a use case for a branch to say "refuse to stack on me" [06:33] jamesh: Should be back. [06:34] abentley: I can't think of one that's interesting. [06:34] jml: If a branch has been stacked on, then it must not be deleted. [06:35] But if anyone can stack on anyone else's branch, then deleting a branch might break someone. [06:35] abentley: so, I guess if someone has published a branch that they know will be deleted in future... [06:36] Yes, or else if they just don't want to worry about whether it's safe to delete their branch. [06:36] abentley: maybe it's better to do it the other way around. explicitly say that a branch is ok for stacking [06:37] jml: We may want to take that approach. 
But it implies that no existing branches would be stackable. [06:37] stack-onable [06:38] abentley: but if there's a simple way to say 'this branch is now ok to be used as a base for stacking', then I think that's ok. [06:39] Well, it might not be too bad to add that to existing formats. [06:39] abentley: I just can't see myself saying "don't stack on this branch". [06:39] Generally we would take the approach of creating a new format. [06:40] I was being blissfully ignorant of the implementation :) [06:40] jml: lucky you. [06:40] it's a user's prerogative :) [06:40] But if we create a new format, then users with old clients will be affected... [06:40] oh wait, free software [06:41] * jml submits a patch or something. [06:41] So you can see that the knock-on effect is pretty severe. [06:41] *nod* [06:42] I think I could live with adding an "allow-stacking" flag to current formats. [06:42] Though of course it will be required for the actually stacked branches. [06:43] yeah [06:44] abentley: more broadly, if I have a branch that I don't want people to stack on, I'll make that clear-ish in its name (e.g. mumak.net/experimental/some-branch) [08:00] lifeless: does addCleanup have tests? [08:05] * mwhudson works himself into further confusion with looms [08:08] jml: I don't see any in test_selftest [08:08] jml: which is where they ought to be [08:08] spiv: grep didn't illuminate anything. [08:08] Probably not, then. [08:09] if i type 'bzr down-thread' and then decide i didn't want to do that, what should i do? [08:12] bzr switch, I think [08:12] I might be horribly wrong though [08:13] i think it's too late for this loom [08:20] What's the current "best" repository format to use with bzr-svn? [08:21] Reason I'm asking is that I am attempting to update my bzr-svn created GTK checkout, but I realized that when it was created (circa bzr 1.3 and bzr-svn 0.4.7) it was in a knits repo. 
`bzr update` failed (!), [08:21] so I've tried doing the incremental pull trick into a new repo of format --rich-root-packs [08:22] It is taking 32 minutes per 250 revisions, local disk! This is somewhat astonishing, so I figure I must be doing something horribly wrong. [08:22] [There is a vcs discussion on a GNOME list this week, and I wanted to be able to report hosting of a few branches] [08:27] rich-root-pack is the best repository format. [08:28] Converting between repository formats can be pretty slow, it's not a code path we've done a lot of optimisation on. [08:29] Especially if it's between formats with different inventory serialisation, that tends to be very slow. [08:34] spiv: thanks. [08:34] spiv: do you think it would be faster to import from Subversion directly rather than trying to convert between repository formats? [08:35] [re]import; the last one took nine days but this time I have it running on a server in `screen` so it ought to be a bit faster :) [08:35] spiv: (not expecting you to "know"; just curious what your gut feel would be) [08:37] AfC: Hmm, tough call :) [08:38] AfC: With bzr-svn 0.4.9, I'd expect re-importing from scratch to at least be competitive. 
I wouldn't want to take a bet about which will be faster, though :) [08:40] poolie: finally sent that _get_line patch to PQM [08:42] abentley: I am willing to try too; perhaps tomorrow, but let's try [08:42] jml: don't think so; blame poolie :P [08:42] mwhudson: down-thread is ~= switch, so you can just switch again; or do up-thread [08:43] lifeless: i got conflicts on down-thread and more when i up-thread-ed again [08:44] mwhudson: so to get conflicts on down-thread you must have had outstanding uncommitted changes [08:44] mwhudson: these get preserved (deliberately) [08:44] * mwhudson off [08:44] er [08:44] lifeless: yes, i wanted to commit them to a lower thread [08:44] if you don't resolve conflicts before switching again, you are in for a world of pain; I think switch and X-thread should refuse to operate while there are unresolved conflicts [08:45] oop, thing has finished, bye again :P [08:45] lifeless: ok, but that means there's no "oh crap, i didn't mean to type that" option [08:45] * mwhudson is gone too [08:45] mwhudson: propose a better answer then :P [08:46] Yeah, in an ideal world "bzr down-thread; *oops, conflicts, I don't want to do that!*; bzr up-thread" would be a no-op. [08:46] I don't want to be the poor guy that implements it, though ;) [08:47] (Assuming in this case that there's nothing outstanding in the lower thread to merge back up) [08:47] (Although I occasionally wonder if a "bzr up-thread --dont-merge" option would be a good idea) [09:04] spiv: well done [09:04] spiv: thanks for all the reviews and merges! [09:09] hi - anyone know where I might find news of nautilus integration for bzr? or konqueror, if that's available? 
[09:09] nexus10: the bzr-gtk plugin has a nautilus-integration component [09:10] spiv: yeah, but I couldn't get it working, and from a recent irc chat log it seems others have the same problem [09:10] nexus10: see the README file that comes with bzr-gtk for details of how to get it going [09:11] spiv: I've tried, failed -- better try again then :-) [09:11] nexus10: Hmm, it worked ok for me last time I tried. I think it needs a fairly new Nautilus and python-nautilus bindings. I use Hardy, and that seems new enough. [09:11] (I'm also using the current development branch of bzr-gtk rather than a release. I don't know if that makes much difference) [09:14] nexus10: there's a bzr-gtk mailing list, btw [09:14] nexus10: if you're stuck, that would be a good place to ask for help [09:15] spiv: thanks, I'm on Gentoo so I'll make sure nautilus is the latest stable, and try on the bzr-gtk ml if probs [09:17] spiv: do you know if there's any integration with konqueror? Found one quote that suggested there is one, but can't find it... [09:17] nexus10: I don't know, sorry [09:17] nexus10: there's a "qbzr", I think [09:17] nexus10: that may be related [09:18] spiv: thanks, np -- yeah, qbzr is quite useful === pmezard_ is now known as pmezard [10:24] I just posted a fix for #213425 [10:25] Reviews and testing of that patch are welcome [10:25] Happy bug day! [10:25] * spiv gets some dinner [10:27] thanks spiv [10:27] bon appétit [10:35] good night [10:35] spiv: thanks for the bug report, should be fixed now [10:58] http://people.debian.org/~igloo/popcon-graphs/index.php?packages=darcs%2Cgit-core%2Cmercurial%2Cbzr&show_installed=on&want_percent=on&want_legend=on&want_ticks=on&from_date=2003-10-01&to_date=&hlght_date=&date_fmt=%25Y-%25m&beenhere=1 [10:58] (graph of number of people installing debian packages of VCS systems) [10:59] bzr trailing behind but moving the right way. About to overtake darcs. [11:14] nice graphs :-) [11:14] does ubuntu have anything like that? 
[11:18] wow, bzr-svn seems to've lost 30 users by being uninstallable in sid for a week or so [12:07] bzr: ERROR: [Errno 12] Can't create tunnel: Cannot allocate memory [12:07] (with bzr-svn; but the next incremental pull succeeded) [12:09] spiv: BTW pulling with bzr-svn direct outpaces pulling from a Knit repo by 24 minutes to 32 minutes [12:36] New bug: #215059 in bzr "Cannot push if login is an e-mail address" [Undecided,New] https://launchpad.net/bugs/215059 === kiko is now known as kiko-afk [12:45] New bug: #215063 in bzr "v1.3 fails badly when patch being applied twice" [Undecided,New] https://launchpad.net/bugs/215063 === mrevell is now known as mrevell-lunch [13:48] I'm getting this error "bzr: warning: unknown encoding . Continuing with ascii encoding." on Mac OS X 10.5 with bzr 1.3 [13:49] my locale settings are: "LC_CTYPE="UTF-8"" [13:49] Set by default on 10.5 [13:52] AfC: thanks for letting me know which one won the race :) === mw|out is now known as mw [13:56] spiv: I think "won" might be overstating it a bit. It's only at 4800 of >15,000. {sigh}. And I keep getting [13:56] bzr: ERROR: [Errno 12] Can't create tunnel: Cannot allocate memory [13:57] Hard to know what is to blame there, but that sort of thing is bad form. [14:01] Adding export LANG=en_US.UTF-8 to .profile fixed this error :-) === mrevell-lunch is now known as mrevell === thekorn_ is now known as thekorn [14:51] New bug: #215127 in bzr-push-and-update "Error running update on remote path containing a space" [Undecided,New] https://launchpad.net/bugs/215127 [15:36] Hello all [15:37] I just wanted to say 'thank you' for an absolutely superb tool. [15:37] bzr has improved Leo's development effort in ways I never dreamed possible. [15:38] More details at: http://groups.google.com/group/leo-editor/browse_thread/thread/4a0a72ad45d14fe0 [15:39] edreamleo: Thanks! [15:40] edreamleo: that's great. 
[15:43] The sound bite: bzr does for archiving what Python did for programming :-) [15:44] That is, nothing is dangerous anymore, except possibly not making branches... === weigon__ is now known as weigon [16:20] hello [16:20] bazaar.launchpad.net can't handle files with nonlatin characters [16:20] e.g. http://bazaar.launchpad.net/~aantny/screenlets/universal-applets/files/aantny%40gmail.com-20080406154451-ihk6pxnwyxhwdurl?file_id=share-20070820102741-g2qhjaw8auklwqnv-30 === Mez is now known as Floodbot5 === Floodbot5 is now known as Mez [18:54] ehlo [18:54] if you merge a branch and it creates some directory and you get a conflict and you then try to revert, it don't want to remove the directory and you have to manually remove it [18:54] what gives? === kiko-afk is now known as kiko [19:06] b0ef: People have this strange tendency to complain when we delete their files, even if that's what revert technically should do. So we try to accommodate them. [19:07] abentley: right, but why is there no bzr revert --force? [19:07] because force is very vague? [19:08] we have --no-backups. [19:08] right, so that will do what I want? [19:08] Dunno. If it doesn't, I'll be happy to fix it. [19:09] abentley: right, I'll get back to you, then;) [19:10] New bug: #215253 in bzr "bzr qannotate should allow to navigate from a diff to a revision" [Undecided,New] https://launchpad.net/bugs/215253 [19:13] abentley: nope, doesn't work [19:13] abentley: I have to manually delete the directory [19:13] abentley: even with --no-backup [19:14] Okay. Please file a bug, and I'll take care of it. [19:14] abentley: aiight, thanks [19:18] abentley: seems 54172 pretty much describes it [19:27] abentley: still alive? [19:28] New bug: #215256 in qbzr "bzr qannotate: implement "search" and "search next"" [Undecided,New] https://launchpad.net/bugs/215256 [19:28] Yes. 
[19:28] New bug: #215258 in qbzr "qbzr: provide extensive help" [Undecided,New] https://launchpad.net/bugs/215258 === cprov is now known as cprov-out [20:01] hello folks! [20:02] up until recently, we've been using bazaar successfully in our game project, now we have a problem though... bzr push against a central "smart server" results in long waiting time with a lot of data being received from the server >50MB.. is this normal? [20:03] blazer, what version of bzr are you using locally and on the server? [20:03] blazer: Bazaar periodically auto-packs the repository to enhance performance. [20:03] we're using 1.3 on both client and server [20:04] If your *next* push is fast, it was probably auto-pack. [20:04] aah [20:04] abentley, couldn't we somehow report to the user that autopack is happening? There seems to be quite a few users dropping in here because of it [20:04] it will transfer the whole repository then? [20:05] blazer: Not necessarily the whole repository, but it could be a significant portion of it. [20:06] is there any documentation available on this behaviour? I might as well indulge in *something* interesting while waiting ^^ [20:06] I'm confused what's being transferred in that case. [20:06] beuno: Sure, and it's already been suggested. [20:06] But no one has actually done it. Which is a pity. It would make a very small patch. [20:07] abentley, do you know if there is a bug open for it? === mw is now known as mw|food [20:09] beuno: No, I don't. [20:09] apparently the server stopped sending at about 70MB of data and now the client is sending... I guess this is a good thing? ;) [20:10] cory: pack-format repositories create a new pack file on every commit. [20:10] Retrieving those files one-by-one would be tremendously slow. [20:11] abentley: Well, I've run into the same thing, and found that if I did the pack locally on the server and then went about my normal operations on the client, I didn't notice any long waits. 
Is the client unnecessarily getting involved in the re-pack? [20:11] So bazaar combines pack files when there are too many. It does basically log scaling, so that 1 pack has 1 revision, the next has 10, the next has 100, etc. [20:13] cory: In some cases, perhaps. With shared repositories, etc, the local repository may not have all the revisions in the remote repository. [20:14] But because of the log scaling, long auto-packs are rare anyhow. [20:17] abentley: The local repository may not have all the revisions...what's the implication of that here? Doesn't that mean that the server should always do its own packing independently of the client? [20:18] The implication is that it is sometimes necessary to download revisions from the server in order to perform the auto-pack. [20:18] SFTP and FTP servers are not smart enough to perform an auto-pack. [20:18] Appreciate your patients. I'm just curious and trying to understand. I've since simplified things, but at one point I had a gigantic shared repo, and it seemed to repack more often than I expected, which often ran me out of RAM on the client. [20:19] patience, gah [20:19] it does seem to autopack awfully often [20:19] once for every 10 revisions pushed or something [20:19] Certainly understandable in the non-smart-server cases, sure. [20:20] cory: For the smart server, it would make loads of sense to auto-pack on the server. [20:21] abentley: Ok, I was assuming that here. Things make sense again, thanks. [20:21] You know how it is-- first make it work, then make it right, then make it fast. [20:22] server-side autopacking is definitely a valuable optimization, but we're still getting into the "make it fast" stage. [20:26] abentley, I'll take a look at it to see if I understand what needs to be done, and open a bug for it if there isn't one already. Do you have the time/patience to help me with some bits if I get stuck?
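The log-scaled pack distribution described above (1 pack of 1 revision, the next of 10, the next of 100, etc.) can be sketched roughly as follows. This is a simplified illustration of the decimal scheme, not bzr's actual autopack code; the function name is invented:

```python
def pack_distribution(revision_count):
    """Ideal pack sizes for a repository with revision_count revisions,
    using decimal log scaling: each decimal digit d at magnitude m
    contributes d packs of m revisions each (simplified sketch, not
    bzr's implementation).
    """
    sizes = []
    magnitude = 10 ** (len(str(revision_count)) - 1)
    for digit in str(revision_count):
        sizes.extend([magnitude] * int(digit))
        magnitude //= 10
    return sizes
```

For 111 revisions this yields one pack of 100, one of 10, and one of 1, so combining small packs into the next tier is only needed when a tier boundary is crossed, which is why long auto-packs are rare.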
[20:27] beuno: Sure [20:28] abentley, :) [20:29] poolie: I just switched my .bash* and .vim files to be in a subdir, like you suggested. Much better idea. :) [20:32] abentley, I've got another thing I'd like to work on. It seems to me many new users tend to forget to do "bzr whoami" before starting to use bzr. I was thinking it might be a good idea to tell the user if they've not set it, the first time they run any bzr command, that they should. Do you think that's something reasonable? [20:34] the smart server should autopack already [20:34] beuno: I'm mixed about it, but I'd probably come down on your side. [20:34] lifeless: Oh, cool. [20:34] blazer, bug #215323 :) [20:34] Launchpad bug 215323 in bzr "The use should be notified that auto-packing is taking place" [Low,Confirmed] https://launchpad.net/bugs/215323 [20:34] uhm [20:35] lifeless, any opinions ^? [20:35] if we fallback to self._real_repository too early that would prevent the smart server doing it though. [20:36] beuno: autopacking is potentially slow; at least on sftp you get a progress bar when it kicks in [20:36] lifeless, cool, I'll work on it then. How about notifying users *once* they don't have a "whoami" set? [20:37] abentley: good morning; I haven't had coffee or food yet fwiw [20:37] I tend to forget myself when installing bzr from scratch, and end up committing revisions with gibberish author info [20:38] lifeless: morning [20:38] you know, I think packs on the smart server will autopack over the wire with the current code [20:38] ouch [20:38] beuno: one way would be to require an explicit setting always [20:39] maybe just add a "Committing to /foo/bar as Author " [20:39] beuno: There's potentially a lotta output on an initial commit. It's easy to miss. [20:40] lifeless, you mean change it so that bzr requires you to set it before committing by default? I actually don't dislike that...
(if it's what you meant) [20:40] New bug: #215323 in bzr "The use should be notified that auto-packing is taking place" [Low,Confirmed] https://launchpad.net/bugs/215323 [20:41] abentley, right, it might help further down the road for people who use different emails for different projects, so you know for sure with what you are committing (I use different info for work than for open source) [20:41] beuno: yes, I did [20:41] and I'm out of caffeine until shops open in a couple of hours :( [20:43] lifeless, cool. I like that even more (thought it would be too intrusive to propose). I'll file a bug and work on that one too :) [20:43] lifeless, sugar is a pretty good replacement for caffeine [20:46] I'm getting a traceback when using bzr annotate. :'( [20:56] awmcclain: :( [20:56] lifeless: :''''''''( [20:58] awmcclain: works for me [20:58] awmcclain: './bzr annotate %' in my editor, current tree [20:58] annotate also works for me. [20:58] It would help if you pasted the traceback. [20:59] abentley: what's the current bundle format? I'm looking to see if I can make this new datastream integrate bundles too/or reuse bundles [20:59] http://dpaste.com/44228/ [20:59] Sorry I didn't get to that sooner. :) [21:00] now the server seems to be autopacking again during a merge command.. why are we punished with this slowness on the day of the deadline? ^^ [21:00] lifeless: Current format is 4 [21:01] what's 9 ? [21:02] 0.9 ~= 3 [21:02] blazer: if you are doing a great number of merges of new revisions, you could potentially be crossing the log borders more than once; that said, bzr autopacks *all* the time and it's not a problem [21:02] abentley: ah, ok. [21:02] blazer: look in ~/.bzr.log [21:02] blazer: that will have some info [21:04] lifeless: on the server or client? the server one doesn't seem to contain much info [21:05] client [21:05] where would the logfile reside on a windows installation?
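Returning to the whoami proposal discussed above: the "require an explicit setting" variant could be as simple as a heuristic that refuses to commit when the configured identity does not look like a deliberate "Name &lt;email&gt;" setting. A hypothetical sketch (these names and the regex are invented, not bzr's actual config code):

```python
import re

def identity_is_set(whoami):
    """Treat 'Name <user@host>' as a deliberately configured identity;
    a bare username guessed from the environment does not count.
    (Hypothetical heuristic for the proposal discussed above.)
    """
    pattern = r'.+\s+<[^<>@\s]+@[^<>@\s]+>'
    return re.fullmatch(pattern, whoami or '') is not None

def check_before_commit(whoami):
    # The "require an explicit setting always" variant: refuse to commit
    # rather than silently using gibberish author info.
    if not identity_is_set(whoami):
        raise ValueError('no identity set; run bzr whoami first')
```

The gentler variant beuno suggests would downgrade the error to a one-time warning printed before the first commit.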
[21:07] bzr --version will tell you [21:08] awmcclain: you have bzr 1.2; it's known to work in 1.4.x [21:08] awmcclain: are you able to upgrade to 1.3.1 or bzr.dev? [21:08] abentley: bb is responding very slowly/not at all [21:09] Ahh... I didn't know that we were at 1.4! [21:09] hmm.. I have a lot of "not updating child fraction" in the logfile.. should I be worried about this? what is a child fraction? [21:11] I'm tempted to say it's irrational [21:11] but really, it's just an overly verbose debug line [21:11] I see.. [21:12] I'll have to dig into the source code someday when I have time :) === mw|food is now known as mw [21:12] lifeless: shouldn't it be a warning? I find it usually appears when there are progress bars that don't make sense [21:15] abentley: can I ask you some random things about v4 bundles as a shortcut to chasing code pointers? [21:15] Okay. [21:16] in add_fulltext_record, is parents the fully qualified names? [21:16] e.g. (('file', fileid, revision), ...) [21:20] Sorry, dunno. [21:20] ok [21:20] I think it's probably just the string revids [21:20] add_fulltext_record seems to be used for revisions and signatures only [21:21] New bug: #215350 in bzr-gtk "Viz revision menu "view changes" broken" [Undecided,New] https://launchpad.net/bugs/215350 [21:24] lifeless: Yes, that would make sense. [21:24] The MPDiff format does snapshots as regular mpdiffs with 0 parents. [21:25] abentley: so here is what I'm thinking; I don't know if it makes sense to do today or not [21:25] (time is short) [21:25] I'm generating a new stream for versioned files to eliminate join; [21:26] a single record needs to be able to emit multiple items (for weaves); it needs to handle at least four record types - compressed knit delta, compressed knit ft, plain ft, weave. [21:27] it seems a very small step to use fully qualified keys (type, id, revision) and allow mpdiff records too [21:27] it needs to preserve those record types on the wire?
[21:27] the initial code I need to write to eliminate join() doesn't have to specify serialization, only the object interface [21:28] that's a good question; I don't really want to impose large recoding overheads that aren't needed [21:29] it's possible that we might make the wire protocol require a subset [21:30] so the plain ft is there to avoid a gzip,ungzip cycle on deannotation/annotation [21:30] Well, mpdiff does generate some overhead, though it's optimized for knit. multiparent entries are recompressed, for example. [21:31] i.e. knit deltas for revisions with multiple parents are also compared against their other parents. [21:32] the weave is there so that if a server is sending from a weave repo it can just dump the weave (or a filtered list if someone cares to write one) into the stream as-is, which is much cheaper than the N extractions + N diffs otherwise needed [21:32] abentley: mpdiff is optimised for size, right? [21:32] abentley: so I'm thinking of this as a lower layer than the different tradeoffs made by mpdiff vs knit vs annotated-knit vs weave [21:33] Right. [21:33] I'll be quite happy to get rid of join; anything beyond that in code reuse or flexibility is a bonus [21:35] So mpdiff support at this layer would be about harmonizing the bundle and repo interfaces? [21:36] bundle repo and versionedfile [21:37] it certainly sounds feasible to me. [21:37] if we could in principle create a v5 bundle serialiser that can use the same stream as I've made (even tho I'm not doing the network implementation today, that will be in the near future) [21:38] ok, I'll write up a document describing what I think will work, I'd value you saying 'this appears to do what bundle serialisation needs' (even though I will be checking against the v4 serialiser code) [21:38] lifeless: I think converting from v4 would be trivial. Am I missing something?
[21:38] I don't think you're missing anything [21:38] I'm not parsing "if we could in principle create a v5 bundle serialiser...". [21:38] but I've not done much with bundle internals, and I'd rather have a safety net [21:39] oh [21:39] rephrasing [21:39] "is the concrete stream I come up with compatible with the needs of bundles" [21:40] It sounds like yes. It wouldn't be hard to extend bundle format 4 to support different record types. [21:40] The one thing is weaves. [21:41] Because bundles do expect each record to correspond to one version, right now. [21:41] so I'm thinking the programming interface to read a stream will be iterator of iterators [21:41] unified keyspace [21:44] Could you give me the example for a weave record vs a knit record? [21:44] Would you get all the fulltexts out of the weave? Is that what you mean by iterator of iterators? [21:44] actually; scrub my last two lines [21:44] I am explaining awfully. [21:45] I'll be back in a second as well, nature calleth. [21:48] I'm thinking of the stream as a sequence of records; each record can yield one or more repository entries (e.g. ('revision', revid)) [21:49] So would a knit record represent a single repository entry? [21:49] yes [21:50] the reason for this two layered structure is to handle knit and pack repos nearly optimally in the knit->knit and pack->pack case by avoiding double-handling on the client and server [21:51] a stream from a knit repository would be one stream record per knit versioned file, with the contents of that versioned file that are needed included [21:51] a stream from a pack repository would be one stream record, with all the jumbled-up texts inventories etc included [21:53] So the record would logically provide a set of unified-keyspace keys, though the byte representation could be optimized.
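The "iterator of iterators" stream sketched above, where each record can yield one or more unified-keyspace entries such as ('revision', revid), might look like this in outline (the names and data shapes are hypothetical, not the interface that eventually landed in bzrlib):

```python
def iter_entries(stream):
    """Flatten a stream (an iterable of records) into (key, bytes)
    pairs. A knit-style record yields exactly one entry; a weave-style
    record may yield many fulltexts extracted from a single blob.
    """
    for record in stream:
        for key, data in record:
            yield key, data

# One single-entry (knit-like) record plus one multi-entry (weave-like)
# record, all keyed in a unified (type, ...) keyspace:
stream = [
    [(('revision', 'rev-1'), b'revision bytes')],
    [(('text', 'file-a', 'rev-1'), b'fulltext 1'),
     (('text', 'file-a', 'rev-2'), b'fulltext 2')],
]
```

The consumer never needs to know whether an entry came from a knit record or was extracted from a weave blob; only the physical layer differs.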
[21:53] yes [21:53] one way of doing this would be to have a key prefix in the record [21:53] not sure if this is a good idea [21:54] but it would look like prefix=('text', FILEID), .... [21:54] allowing VersionedFile's current keyspace to just be appended [21:54] to get the unique keys [21:54] lifeless: It sounds almost like a normal bundle could be a record. [21:55] abentley: yes, I think that that would work well [21:55] q [21:55] or it could be a series of records - the byte representation doesn't need to change either way [21:56] its just whatever we think best in terms of programmatic access at this point [21:56] the key question is, does this sound nice? [21:56] is there anything 'ewww' about it, or suggestions to improve? [21:57] Well, I go a bit eew at having multiple keys per record, but I can't think of anything better. [21:57] (sorry to be monopolising your time with this, but you're the only other folk around to tease this out with) [21:57] Unless we make an exception for weaves and recode them. [21:58] we could at a programmatic level just yield many fulltexts from a weave [21:58] at a physical level though it would really be nice to be able to just drop the blob into the stream [21:59] also - performance wise, multiple keys per record with a record key prefix helps knit insertions [22:00] hmm, thats a bit false [22:00] The thing is, current bundles use two pack records for each logical record; one for metadata, one for the bytes. [22:00] it makes it easier in some ways to add many things to a target knit without having a cache of knits [22:01] abentley: I think that that serialisation would work very well for weaves; in the metadata the list of keys to be extracted; in the bytes the raw weave [22:01] How would you represent this for knits? AIUI, the index contains data not represented elsewhere. 
[22:02] there is a current knit stream, let me dig up its contents [22:03] the knit data stream has a list of (version_id, options, read_length, parents) [22:03] and then sum(read_length) bytes after all of that [22:04] so it gets to do a single readv() from disk to output the stream [22:04] and likewise on insertion, it can often do a single write() [22:04] this is the other thing I would like to preserve, because that's pretty good for performance [22:05] mapping one of these current knit streams into a single record lets me do that without loss of generality [22:06] I think it loses the ability to do random access, though. [22:06] we don't do random access today AFAIK [22:06] not on streams [22:06] Hello, is bzr+ssh:// recommended over sftp:// ? Any reasons? [22:06] do we on bundles ? [22:06] True. [22:07] guilhemb: hi, bzr+ssh runs a bzr process at the other end, so it can do more [22:07] I thought I remembered a bug specific to the bzr+ssh protocol in the past six weeks or so. I may be wrong. [22:07] lifeless: mmm what more, for example? [22:07] sending mail [22:07] avoiding some (significant) network traffic [22:08] lifeless: thank you [22:10] lifeless: It's true that we don't at present, but I was thinking once stacking works, we would be able to do bundles as stacked repos and merge directives as stacked branches. [22:10] abentley: yes, we've talked about that before :). Hmm, what would be needed to add random IO to this [22:11] Well, nothing would be needed for the format. [22:11] You'd just need to use lots of single-entry records. [22:11] I think in principle you'd parse the stream once to build an index and then use the index [22:12] this wouldn't imply single or multiple-entry records [22:12] we could buffer the whole bundle if we wanted, though on pathological bundles this would really blow memory wise [22:12] Well, let's say I've got a stream of an entire repository in a single record. How would I get out one revision?
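The knit data stream layout described above, a header listing (version_id, options, read_length, parents) tuples followed by sum(read_length) concatenated bytes, is what allows the sender to do a single readv() and the receiver often a single write(). A minimal sketch of that layout (hypothetical serializer, not bzr's wire code):

```python
def serialize(entries):
    """entries: list of (version_id, options, raw_bytes, parents).
    Returns a header of (version_id, options, read_length, parents)
    tuples plus one concatenated body, mirroring the knit data stream
    shape described above (sketch only).
    """
    header = [(vid, opts, len(raw), parents)
              for vid, opts, raw, parents in entries]
    body = b''.join(raw for _, _, raw, _ in entries)
    return header, body

def deserialize(header, body):
    """Split the body back into per-version byte strings by walking
    the read_length fields in order."""
    out, offset = [], 0
    for vid, opts, length, parents in header:
        out.append((vid, opts, body[offset:offset + length], parents))
        offset += length
    return out
```

Because the body is one contiguous run of bytes in index order, the sender can produce it with a single vectored read and the receiver can append it with a single write, which is the property lifeless wants to preserve.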
[22:13] We could also persist an index, to avoid the initial read. [22:14] parse the stream once, lets handwave about how we get code reuse, but say that instead of a get_stream on this, we have a 'get_repository' that indexes and then provides VersionedFiles indices for it [22:14] Depending on size, we might even stick it at the beginning. [22:14] for each record, we just need back the index entries it contains [22:14] this could be one, or it could be N [22:14] I mean an index of the stream. [22:15] a VersionedFile index needs to be able to answer 'versions()', 'get_parent_map()' [22:15] yup [22:15] So we would know exactly where to seek if we want ('revision', 'foo-bar-asdf') [22:15] yes [22:15] building the index would do that [22:16] we'd have a KnitDataAccess implementation for the whole stream [22:16] (or roughly that) [22:16] this is what packs do today [22:17] Okay, but still, how do you seek to a single knit delta when it's inside a massive pack record? [22:17] the index provides the readv() that is needed [22:18] and there is a data access object which does the readv and strips out any serialisation metadata that needs to be removed [22:18] the data access object is stateless [22:19] it just knows that e.g. stuff in a .pack has a pack record prefix to strip [22:19] It just seems pretty gross. The VersionedFileIndex would have to know way too much about the pack serialization format. [22:19] or that stuff in a .knit has no surrounding metadata [22:19] abentley: hmm, can I point you at the current code? [22:19] okay. [22:20] bzrlib.knit._KnitAccess and bzrlib.knit._PackAccess [22:20] oh, and ._StreamAccess too [22:20] these provide the abstraction so that the index doesn't know anything about the serialisation format [22:21] These are exactly what made me spit the dummy and start a new repository format with none of this legacy stuff. [22:21] oh :( [22:23] brb [22:28] lifeless: I'm not sure we're in sync here. 
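The division of labour described above, where the index supplies (offset, length) readv requests and a stateless access object strips whatever container framing surrounds the bytes, can be caricatured as follows. The real classes are bzrlib.knit._KnitAccess and _PackAccess; the class names and the fixed 4-byte framing here are invented for illustration:

```python
class KnitStyleAccess:
    """.knit-style storage: the bytes at (offset, length) are the
    record itself; there is no surrounding metadata to strip."""
    def get(self, data, offset, length):
        return data[offset:offset + length]

class PackStyleAccess:
    """.pack-style storage: each record carries container framing that
    must be stripped (a fixed 4 bytes is an invented size for this
    sketch, not the real pack format)."""
    FRAMING = 4
    def get(self, data, offset, length):
        return data[offset:offset + length][self.FRAMING:]

def read_entry(index, access, data, key):
    """The index maps keys to (offset, length); the access object is
    stateless and only knows how to unwrap its storage's framing, so
    the index stays ignorant of the serialisation format."""
    offset, length = index[key]
    return access.get(data, offset, length)
```

The point of the abstraction is that the same index code works against .knit files, packs, or a buffered stream; only the access object changes.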
[22:28] ok [22:28] We are still talking about the case where there is one record with a bunch of entries in it? [22:29] yes [22:30] So in order to do random access, you have to seek into the position in the container that corresponds with the position in the stream that holds the actual bytes? [22:31] lets consider two cases [22:31] a weave [22:31] and a record with many things from the same 'knit' [22:32] in the former case, we have to seek in the container and read the entire weave bytes [22:33] in the latter case, we only need a small number of bytes [22:33] but the 'bzrlib.pack' container API won't let us read less than a full record at its physical layer. [22:34] So we'd want to make sure, for performance, that we fix that [22:34] one way would be to improve that api - to have a way to say 'read bytes X..Y from within the container record starting at byte Z of this container [22:36] another would be to make sure that we always have a bzrlib.pack container record just for the smallest read we may want to do [22:37] lifeless: We could also achieve it with the current API by chunking up the stream. [22:37] abentley: jinx :) [22:38] or did you mean chunking at arbitrary points like every 512K or something [22:41] No, I mean more or less what you did. [22:45] I think its useful to be able to read part of a containers record [22:45] but perhaps not worth adding [22:45] one way we can achieve small chunks is nested pack containers [22:46] still, this is down to serialisation now, and in the first instance what I need is the in-memory python code [22:46] so I'll write up a developer doc now, and try and include these issues [22:50] Sure. [22:52] brekkie time, back soon [22:56] So to get back to your main question, I think that bundles could use this format. [22:56] They would probably ignore the fact that multiple entries were allowed per record. [22:57] It's a bit overkill for them, and for the other uses. [22:57] But for weave, it seems necessary. 
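The API extension floated above, reading only bytes X..Y from within a container record that starts at byte Z, could look like this. The fixed header size and function name are invented for illustration; bzrlib.pack's real records have variable-length headers:

```python
HEADER_LEN = 4  # invented fixed-size record header for this sketch

def read_within_record(container, record_start, x, y):
    """Return bytes [x, y) of the body of the record that starts at
    byte record_start of the container, without reading the whole
    record (the capability discussed above)."""
    body_start = record_start + HEADER_LEN
    return container[body_start + x:body_start + y]
```

With this, pulling one knit delta out of a massive pack record costs only that delta's bytes. The alternative discussed, chunking the stream so every smallest read gets its own record, achieves the same effect within the existing full-record API.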
[23:00] cool [23:00] is that sha field in mpdiffs that of the reconstructed text? [23:01] I'm on the phone. [23:01] okies [23:04] Dammit, why isn't REALLY USEFUL STUFF like "bzr modified" mentioned in the help index.... [23:05] awilkins: WTF! I didn't know that even existed! [23:06] It's great... you can just go bzr modified | xargs dostuff [23:06] awilkins: definitely, except the only problem is that it outputs paths relative to the root of the branch [23:07] it'd be nice if bzr st and friends displayed paths relative to `pwd` [23:07] I hadn't noticed that .... but I tend to do that stuff in the roor [23:07] s/roor/root [23:07] awilkins: yeah, if there existed `bzr root` I would be happy too [23:08] Peh, it has a "hidden = true" [23:08] Should be "danguseful = true" [23:10] Doesn't even show if you pass --long to help ; but --long says "help on all commands", not "help on commands except really useful stuff we don't want you to see to retain our mystique as UBER DOODS"... :-P [23:10] awilkins: bzr help hidden-commands [23:11] abentley: I hate to be pedantic, but (AFAICT) that topic is hidden by default, so how are you supposed to know it's there without picking through the source? [23:12] Ok, I'm lying, it's in the topics list [23:12] liar [23:13] * awilkins cuts out his tongue and pickles it in a jar [23:13] o.O [23:13] awilkins: Boutique, not-maintained, achievable-by-other-means functionality is typically hidden. [23:14] If you can use xargs, surely you can use grep. [23:14] There should be a status --modified also. [23:14] There isn't at present, but there should be. [23:15] Is find-merge-base in the category of "achievable by other means" ? [23:15] yes. [23:15] bzr revision-info -r ancestor: [23:15] Aha [23:16] Also, it is in the category of boutique. [23:16] Define boutique in the context of bzr code [23:17] Operations that an end user should never need.
[23:21] Does boutique imply achievable-by-other-means, or was your list above a set (not a list of synonyms)? [23:22] A set [23:23] awilkins: 'bzr root' exists [23:23] I found find-merge-base and revision-info to be useful enough to patch them into the BazaarClient library that bzr-eclipse is using (not for use in bzr-eclipse, but so I could use the features in my current Eclipse plugin project). [23:24] lifeless: well it must've only appeared there after your announcement 30 seconds ago :D [23:24] * jdong questions his own sanity [23:24] Now, I was less experienced with bzr at the time... they may well be achievable through other means. [23:24] bzr root isn't even in the hidden list .... [23:25] I don't think there's anything you can get out of revision-info that you can't get out of log. [23:25] it is in the normal bzr help commands list [23:25] abentley: True, but it's nice to have the convenience of not having to parse it out. [23:26] abentley: I respect the Pythonic "do it one way" thing though [23:26] * awilkins attending to food brb [23:27] wasn't there a "bzr help allcommands" or something [23:30] awilkins: More commands is more intimidating to users, so where possible, we prefer to expose functionality as options to existing commands. [23:46] Meh, I keep typing bz rviz] [23:46] It does weird things with the bz utility [23:48] :P [23:52] Mmmm, faggots, mushy peas, mashed spud and onion gravy [23:52] don't smoke awilkins [23:52] but definitely eat mushy peas and mashed spuds [23:52] with onion gravy [23:52] Not fags [23:52] faggots [23:53] "british meatball" - they usually include offal such as liver [23:53] oh. well, I also don't recommend eating sticks [23:53] whoah. that's one I haven't heard before :) [23:54] Definitely comfort food. [23:55] Especially "Mr Brains" brand faggots [23:55] New bug: #215426 in bzr "MemoryError - recv for HTTP through Proxy" [Undecided,New] https://launchpad.net/bugs/215426