=== garyvdm_ is now known as garyvdm [00:19] I really want a 'bzr edit-diff' which gives me the diff, lets me edit it, then saves the result back [00:23] Obmar: yes, that's an ugly bug, need to get some time and fix it [00:26] verterok: That would be great. I would love to be able to do bzr from in the IDE. [00:28] spiv: ping [00:29] spiv: the _commit_write_group check; I think that that may be better on the RepositoryPackCollection level ? [00:33] lifeless: rather than on the KnitPackRepository? [00:33] lifeless: sure, I essentially flipped a mental coin when deciding where to place that. [00:36] I've moved it down [00:36] def _commit_write_group(self): [00:36] all_missing = set() [00:36] for prefix, versioned_file in ( [00:36] ('revisions', self.repo.revisions), [00:36] ('inventories', self.repo.inventories), [00:36] ('texts', self.repo.texts), [00:36] ('signatures', self.repo.signatures), [00:36] ): [00:36] # We use KnitVersionedFiles exclusively so can rely on _index. [00:36] missing = versioned_file._index.get_missing_compression_parents() [00:36] all_missing.update([(prefix,) + key for key in missing]) [00:36] if all_missing: [00:36] raise errors.BzrCheckError( [00:36] "Repository %s has missing compression parent(s) %r " [00:36] % (self, sorted(all_missing))) [00:37] ^ ok ? [00:37] self.repo at the end obviosuly [00:40] Looks ok. [00:44] spiv: ok, that branch is on its way to pqm [00:44] where is the next one? [00:55] spiv: ping [00:58] (some people at) nasa \heart bzr [00:58] yay [00:58] nice [01:04] lifeless: tests all passing now I think; just removing a cruft [01:12] beuno: http://blog.crisp.se/davidbarnholdt/2009/02/18/1234986060000.html really is nice [01:21] spiv: well the scan api is gold [01:21] spiv: I'd like to take the next lowest layer on, or ?...? [01:23] lifeless: I think I've removed the worst of the cruft now [01:23] lifeless: I still haven't added a flag to add_records, but otherwise I'm happy with this [01:24] lifeless: I'm pushing it up to lp:~spiv/bzr/missing-parents-integration [01:25] spiv: k, give me a shout when its done [01:27] spiv: also, have you had a chance to review network-name? [01:27] spiv: if not thats fine, I might interrupt poolie [01:28] who has the context for this [01:28] lifeless: I haven't, but now is a good time for me to do so [01:40] lifeless: reviewed, one typo found :) [01:45] spiv: bombs away [01:46] decided against a comment, because of too much duplication [01:47] * spiv nods [01:54] lifeless: nice drawings :) [01:55] spiv: so, I'll review this branch; make any changes i need, and you ack the changes? [01:55] lifeless: +1 [02:00] spiv: looking at it, it seems easier to put back the older buffer code, and just make a single call at the end with the new flag, to add_records [02:00] spiv: sound ok? [02:02] lifeless: yep [02:03] lifeless: having a single call to add_records would be more clearly correct, I think. Less room for accidental double-insertion. [02:03] (Which is a bug I noticed I had during our phone chat this morning...) [02:09] * spiv -> lunch [02:15] lifeless: ping for a quick question [02:15] i'm just looking at mark's vcsrace on status [02:15] the results are generally really good except for status which spends a considerable amount of user time compared to the competition [02:16] i think this is because of overhead of building tuples from the dirstate etc [02:17] poolie: [waiting for the question] [02:18] would you agree? 
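For reference, the _commit_write_group check pasted above is easier to read laid out properly, with the self.repo correction lifeless notes applied (a sketch of the RepositoryPackCollection-level version discussed here, not necessarily the exact code that landed):

    def _commit_write_group(self):
        # Refuse to commit a write group while any inserted record still
        # lacks its compression (delta) parent.
        all_missing = set()
        for prefix, versioned_file in (
                ('revisions', self.repo.revisions),
                ('inventories', self.repo.inventories),
                ('texts', self.repo.texts),
                ('signatures', self.repo.signatures),
                ):
            # We use KnitVersionedFiles exclusively so can rely on _index.
            missing = versioned_file._index.get_missing_compression_parents()
            all_missing.update([(prefix,) + key for key in missing])
        if all_missing:
            raise errors.BzrCheckError(
                "Repository %s has missing compression parent(s) %r "
                % (self.repo, sorted(all_missing)))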
[02:19] from memory i mean [02:19] well [02:20] One planned experiment is to write a pure C implementation [02:20] yes [02:20] actually i see something interesting in the profile [02:20] we may be able to squeeze some more out of chdir [02:20] the system time is actually pretty good [02:20] running up oprofile over it would be interesting too [02:20] the interesting thing is that we're using the generic InterTree.compare [02:20] poolie: pending merge? [02:20] because mark's measuring the special case of changes from the empty tree [02:21] oh [02:21] i suspect dirstate just gives you an EmptyTree [02:21] and this is unoptimized [02:21] grah [02:21] * lifeless hates on special cases [02:21] i may be wrong [02:21] quick, add another special case :) [02:21] measurement is the only way to be sure :) [02:22] well, it is definitely using the generic one [02:23] hm no it's not literally EmptyTree, which is deprecated [02:24] spiv: get_missing_compression_parents: perhaps on _index only ? [02:24] spiv: I don't think it needs to be top level [02:25] spiv: actually scratch that, the tests would be ugly [02:25] spiv: its fine [02:34] crapzor. [02:34] it's doing it again [02:34] same branch. one more revision. [02:38] poolie: IIRC we've gone back and forth on the patch that removes the special case when there is no entries [02:38] at least, I seem to remember fixing that in the past [02:38] # the source revid must be in the target dirstate [02:38] if not (source._revision_id == NULL_REVISION or [02:38] source._revision_id in target.get_parent_ids()): [02:38] spiv: done; bzr sending it [02:39] Looks like we should be handling the NULL tree [02:40] If we aren't, something else is wrong [02:41] spiv: mail sent; review @ will. next is suspend/resume yeah? [03:00] hi when I use ftp push, Does it actually upload the files. So that if I were to have a HDD failure, i could retrieve those files from the FTP server? [03:02] It does it just upload some kind of data about changes to the files [I Dont understand versioning systems] [03:02] PHPlover: it creates a database [03:03] PHPlover: you can inspect what it has uploaded by using bzr - e.g. 'bzr log URL' [03:03] ok, and does that database store the complete data of every file in my project, Or just info about the files [03:03] it stores the complete copy of each commit, which includes the files that *are versioned*, but not those that are not [03:04] PHPlover: the easiest way to see what its done, is to 'bzr branch URL /tmp/spare-directory' and have a look at what bzr puts in spare-directory [03:05] poolie: ping [03:05] poolie: does bencode avoid \n ?, or can/should I avoid a \n some other way . (network_name currently has \n, but that breaks SmartServerRequest.do() [03:08] lifeless: \n in HPSS args are fine with HPSS v3 [03:08] spiv: interesting. [03:08] bencode length prefixes strings. [03:08] spiv: how do a I register a verb to be only used with one HPSS version then ? :P [03:08] File "/home/robertc/source/baz/push.roundtrips/bzrlib/remote.py", line 2002, in _translate_error [03:09] raise errors.UnknownErrorFromSmartServer(err) [03:09] UnknownErrorFromSmartServer: Server sent an unexpected error: ('error', 'do() takes exactly 4 arguments (3 given)') [03:09] the blackbox tests pass [03:09] but the per-repository RemoteRepositoryFormat-v2 tests are failing [03:09] for the reason-under-discussion [03:09] Currently that's taken care of in the client; the client never tries sending various requests to older servers [03:09] ok, so I should guard based on a check? 
[03:10] Right. The client asks the medium if .is_remote_before((1,2)) or whatever. [03:11] And also does _remember_remote_is_before((x,y)) when it notices a smart method is not implemented to save it trying that or other new methods again. [03:11] is that _protocol_ or _bzrlib version_ [03:12] [how do I get these tests to work :P] [03:12] It's the bzrlib version. [03:12] When the protocol is v2, the medium immediately "remembers" that remote is before (1,6) (or whenever v3 was added). [03:13] ok, so its a lie, but don't look under the hook ? [03:13] *hood* [03:13] Yeah. [03:13] I'm not 100% happy with it, but it does the job. Well, jobs. [03:14] (Avoid trying calls that require \n in args and whatever that are new in v3, and also avoid repeatedly trying an RPC once we know the server doesn't implement it.) [03:15] where is the code so I can figure out what .is_remote_before to use [03:15] like is 1,6 right [03:15] or 1,2 [03:15] or ? [03:16] * spiv is looking [03:17] protocol_version in medium still returns '2' :P [03:18] or should I say is_before(1,13) ? [03:20] Probalby 1,13 would be best. [03:21] * spiv can't find where the remembered version is set low for v2 protocols. [03:23] ok I'm unblocked on that, though 1,13 fails the current version test :P [03:23] spiv: *and* [03:23] it still runs the verb on the Remotev2 tests [03:23] hmm, checking [03:24] Ah, found it. [03:24] ok fixed [03:24] cool [03:24] so the branch for partial insertion is polished and up for review [03:24] bzrlib/smart/client.py calls self._medium._remember_remote_is_before((1, 6)) [03:25] once it finds out that v3 doesn't work. [03:25] Cool, I'll take a look. [03:25] I'm waiting on the next unit of work [03:26] the network data serialisation for record streams is also outstanding for review [03:26] http://bundlebuggy.aaronbentley.com/project/bzr/request/%3C1234854080.11037.18.camel%40lifeless-64%3E [03:36] spiv: ok, and I've just flushed the new verbs to qm [03:36] *pqm* [03:36] spiv: how should we split up the remaining work? Want a quick call? [03:41] lifeless: ok [03:41] lifeless: (I'm reviewing the network serialisation atm) [03:42] ok, I'll ring you now if thats ok [03:42] Yep [03:48] lifeless: http://people.ubuntu.com/~andrew/bzr/streaming-push [03:48] lifeless: ...nearly pushed [03:49] lifeless: done. [03:50] spiv: cool [03:54] * igc lunch [04:05] spiv: so RemoteVersionedFile has some of our fetch refactoring [04:09] spiv: it looks like RemoteVersionedFile is a little confused, the loom basically resets @ Very fetching [04:10] spiv: or something. econfused:P cherrypicking and reviewing [04:12] lifeless: weird. [04:12] jam: dropping the group size will shrink things because of the full texts at new groups === jfroy is now known as cad_monster === cad_monster is now known as jfroy [04:42] mwhudson: ping? [04:43] igc, can you remind me why content-filtering needs to change dirstate? [04:43] would it be at all possible to avoid it? [04:43] poolie: sha on disk != sha in storage ? 
[04:46] well, of course that's true for filtering [04:46] it's been a while but what lifeless says sounds right [04:47] poolie1:^ [04:47] it was more the next logical step after that that I'm thinking about [04:47] spiv: testing a massaged Very fetching with the lower layers removed [04:47] spiv: I'm keeping length_prefix, though really its in the wrong home [04:49] aaargh [04:50] Raawfle: some context might help [04:50] spiv: as we knew, progress breaks [04:52] spiv: so I'm going to commit it, and then start working on the progress bars [04:53] lifeless: sounds good [05:01] poolie: can you decode what this means for me ? [05:01] /home/robertc/source/baz/fetch.sinks/bzrlib/ui/__init__.py:91: UserWarning: ProgressTask(0/1, msg='fetch texts') is not the active task ProgressTask(None/None, msg='') [05:01] % (task, self._task_stack[-1])) [05:04] lifeless: probably that you're missing a finally block [05:05] prior to the code that constructs ProgressTask(0/1, msg='fetch texts') [05:05] hth [05:05] poolie1: ok, will look for that [05:05] poolie1: if it helps we have generators around generators [05:06] no, that doesn't help :-) [05:06] the previous pb seems to contain nothing [05:07] so you can probably just remove it [05:07] is previous a higher one? [05:07] or one that wasn't cleaned up from an earlier test? [05:07] so, there's supposed to be a stack [05:08] this message indicates that you tried to finish the one that's not on top of the stack [05:08] so actually "prior" is not necessarily true [05:08] so there are (at least) two here [05:08] A and B [05:09] it may be that B is conceptually inside A, and now all of A is finished but you forgot to finish B first [05:09] or, it may be that B is a sibling of A, and you forgot to finish A before starting B [05:09] if you see what i mean [05:10] yeh [05:10] I'm worried that generators are interacting badly [05:10] spiv and I overhauled fetch() [05:11] one answer is to remove the progress entirely for now [05:12] for many generators i'd say that giving an activity indicator may be more appropriate [05:13] because, firstly you may not know how many things are generating [05:13] indeed [05:13] and, as a technical reflection, finishing up is hard [05:15] ah found it [05:15] we have a function that passses a pb into a method [05:15] that method finishes it and rebinds it [05:15] I'll change it to an instance variable, bet it fixes it [05:15] hm as i may have said in a review, i think passing pbs around is whack [05:15] it's one thing i'd like to fix here [05:16] except in pretty special cases, a progress bar should correspond to a for loop in one function [05:16] poolie1: I'm not justifing the current code ;) [05:16] and the nesting of them i think can be done by the ui global :) [05:16] i know, i'm just repeating this thought in case you want to agree :) [05:17] oh, I agree [05:18] no warnings now, that was it [05:18] ok [05:19] well, i'm glad it wasn't spurious and didn't take too long to track down [05:19] :) [05:19] igc, so, i'm not quite done on filtering but it's not all that hadr [05:20] i am inclined to see if doing it on top of dirstate is within reach [05:20] at the least i'd like to document why [05:20] poolie1: did you see my comment about sha on disk and sha in repo? 
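A minimal sketch of the discipline poolie describes above: a progress task should be created and finished in the same function, inside a finally block, so the ProgressTask stack stays balanced and bzr doesn't warn about finishing a task that isn't on top (bzrlib.ui names; illustrative only):

    from bzrlib import ui

    def fetch_texts(keys):
        pb = ui.ui_factory.nested_progress_bar()
        try:
            for i, key in enumerate(keys):
                pb.update('fetch texts', i, len(keys))
                # ... fetch the text for key ...
        finally:
            # Finish the task we created before any outer task finishes,
            # otherwise "... is not the active task" warnings appear.
            pb.finished()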
[05:20] yes but no but yes but [05:21] obviously they are different, [05:21] but this doesn't change dirstate to store both of them [05:21] what does it do [05:22] it seems to change it to record the filtered sha1 [05:23] sorry, i mean the sha1 of running the input filters on the file [05:24] Peng_: hola [05:28] mwhudson: Hi. [05:28] Um. [05:31] mwhudson: I have a dumb question about your changes to LH's served_url stuff. I used to monkeypatch served_url to return URLs relative to /bzr instead of /loggerhead, but now that doesn't work. So...what do you think I should do? [05:32] Peng_: um [05:32] :D [05:32] spiv: sending it for review, I'm happy and only trivial changes; if you're happy with my arbitrary cleanups we can just land it [05:32] Peng_: think of a way you'd like this to work and tell me about it :) [05:33] mwhudson: Heh. I was happy with how I was doing it before. :P [05:33] Peng_: you can probably monkey patch BranchFileSystem.app_for_branch [05:35] Peng_: alternatively, one could extract the calculation of the default value out into a method that you could then monkey patch [05:40] I'm barely awake enough to understand what that means, but I think it sounds good. [05:41] lifeless: fancy that, I just sent you a review too. [05:41] lifeless: If you only made arbitrary trivial cleanups to my code, then yes I'm definitely +1 on principle :) [05:41] lifeless: I can eyeball it anyway if you want. [05:41] spiv: just sent it [05:41] spiv: its basically - delete the RVF stuff [05:42] spiv: fix progress bars [05:44] spiv: we can't do RemoteStreamSink until the vf network bytes lands [05:45] I know. I tried not to let immiment achievements cloud my review judgement, though... [05:49] lifeless: there's some odd stuff in that diff you just sent; additions to the makefile etc. [05:49] lifeless: perhaps the branch you sent has more recent bzr.dev than bzr send diffed against? [05:51] unlikely [05:51] however wrong parent is likely [05:54] Ah, yeah. [05:54] resent [05:55] out to the shops before they close, will finish igc's review today though [05:56] morning [05:58] Heh, I see _length_prefix did sneak into that diff. [05:59] spiv: I kept it deliberately [06:00] * spiv nods [06:00] lifeless: reviewed (tweak) [06:01] * spiv keeps reviewing [06:03] spiv: ok network serialisation on its way to pqm [06:04] spiv: so we can merge that into the fetch refactoring branch [06:08] lifeless: just bb:approved your "insert record streams with missing compression parents" patch [06:08] excellent [06:08] I'll kick that off now [06:09] spiv: so what do we have left [06:09] suspend resume? is that up for review yet? [06:11] Hmm, I don't think so. [06:11] so it depended on missing parents and partial streams [06:11] both are fully acked [06:11] why don't you polish that, and I'll get the network stream integrated with RemoteSink [06:11] which will fail on stacked branches until suspend resume is done [06:12] Ok. Let's see if I can figure out which branches to merge into suspend-resume to bring it up to date... [06:12] :P [06:12] so administrivia [06:12] I'd *love* to finish this today [06:12] but its after [06:12] 5 [06:13] if you're up for a sprint-to-finish, so am I. And I'm sure we can convince poolie for some slack on Monday :) [06:13] Oh that's right, we were sensible and made that a branch off bzr.dev. 
That helps ;) [06:14] I'm still going strong atm ;) [06:14] http://bundlebuggy.aaronbentley.com/project/bzr/request/%3C1235097663.6233.119.camel%40lifeless-64%3E is what you need for suspendresume [06:14] I'll probably need to stop about 6:30 [06:14] the clock is on :) [06:15] :) [06:18] you probably want bzr.dev too or that matter [06:21] Yeah, already grabbed that :) [06:31] spiv: ok, failure in my streams bytes thing in reconcile [06:31] while I investigate that the other approved patch is on the way [06:31] igc, still around at all? [06:38] poolie1: yep [06:40] i'm going to send a review in a sec [06:40] do you want to talk about it/ [06:41] lifeless: http://rafb.net/p/wgjSMm15.html is failing on the assertRaises; because the add_lines after the resume_write_group doesn't actually make a record with a compression parent. [06:42] lifeless: what's a good way to add a record with a compression parent there, should I crib from test_versionedfiles perhaps? [06:42] With a *missing* compression parent, specifically... [06:42] spiv: you need two repos [06:43] in repo one you do add_lines(base, (), [... ... ...]) [06:43] lifeless: that's what I feared :) [06:43] * spiv does that [06:43] and add_line(child, (base,), [.....]) [06:44] then repo2.texts.insert_record_stream(repo1.texts.get_record_stream(base,...) [06:44] sorry not base there [06:44] just delta [06:44] * spiv nods [06:45] or [06:45] use knit.make_pack_factory [06:45] to get a vf object [06:45] in lieu of a second repo [06:53] Existing tests passing, I think... [06:53] cool [07:02] Ok, wrote a test for filling in a missing parent after a suspend/resume, currently failing :) [07:02] Fixing... [07:02] spiv: I'm modifying RemoteSink to fallback to _real when stacked, pending the stuff you're doing [07:04] spiv: minor tweak needed, our substreams need to encode a prefix record [07:05] spiv: dealing with the keywidth mismatch [07:08] Oh, right. [07:09] so in do_chunk on the reader [07:09] does it know its a pack stream and give us only the bytes_record ? [07:09] Ok, filling in a missing parent after suspend/resume works. === chx is now known as chx_sleeping [07:10] woooooooooooooo [07:10] ooooo [07:10] ooo [07:10] and the current branch is into ascii tests [07:10] which is fetch refactor [07:11] Hmm, I think I need more context to understand your question. [07:11] do_chunk is called as bytes of the request *body* arrive. [07:11] yes [07:11] SmartServerRepositoryInsertStream [07:12] I think it will be fine [07:12] So long as the client is generating a request body stream of a pack, which IIRC it is... [07:13] i.e. it's basically just sending what the ContainerWriter is emitting [07:13] So that should be fine, yeah. [07:20] hi all (completely forget my ping this morning :) [07:20] Ok, added what I think is the last test, that suspend/resume doesn't accidentally make an uncommittable write group committable. 
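Spelled out, the recipe lifeless gives above for manufacturing a record with a missing compression parent looks roughly like this (locking and write-group setup elided, the tuple keys are made up for illustration, and whether the child is actually stored as a delta depends on the knit's own heuristics):

    base = ('file-id', 'base-revid')
    child = ('file-id', 'child-revid')
    # In repo1, the child text is (normally) stored as a delta against base.
    repo1.texts.add_lines(base, (), ['common line\n'])
    repo1.texts.add_lines(child, (base,), ['common line\n', 'new line\n'])

    # Stream only the child record, without the delta closure, so its
    # compression parent never reaches repo2.
    stream = repo1.texts.get_record_stream([child], 'unordered', False)
    repo2.texts.insert_record_stream(stream)
    # repo2's write group now has a missing compression parent, and
    # _commit_write_group should refuse to commit it.

As noted in the chat, knit.make_pack_factory can provide a VersionedFiles object to play the part of the second repository instead.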
[07:20] Not passing yet :) [07:21] hello vila [07:22] hi poolie1 :) === poolie1 is now known as poolie [07:30] lifeless: passing, just fixing up duplication added to make it pass [07:32] spiv: cool [07:32] spiv: I think I'm done too, bit of staring involved [07:32] spiv: the glue is a little awkward, pack -> bytes -> pack reader -> bytes -> NetworkRecordStream [07:33] lifeless: ok, pushing [07:33] lifeless: it'll be at http://people.ubuntu.com/~andrew/bzr/suspend-write-group shortly [07:33] spiv: was there a test to run to check this works [07:34] spiv: one branch landed [07:34] pulling .dev and sending the next [07:35] lifeless: there were acceptance tests in that loom [07:36] lifeless: although now that I look at it again, the test_push_smart_stacked_streaming_acceptance test is slightly wrong :) [07:37] lifeless: we expect 2 insert_stream RPCs now, not 1. [07:37] If things are working, that is :) [07:37] spiv: cool cool [07:37] lifeless: so, my branch is pushed, and all tests seem to be passing here. [07:38] (running a bigger selftest to be sure, but so far so good) [07:38] spiv: whats a good test to test RemoteSink ? [07:39] lifeless: hmm [07:40] fetch tests pass [07:41] lifeless: https://code.edge.launchpad.net/~lifeless/subunit/polish/+merge/2735 [07:42] dammit, its creating manually >< [07:42] lifeless: I suppose a simple test_remote test that shows that RemoteRepo._get_sink().insert_stream(...) invokes the right RPC would be good. [07:42] lifeless: btw, "bzr send --no-bundle" to merge@code.launchpad.net Just Works. [07:42] Possibly even with a stream of [] ;) [07:46] I'm sending up the suspend-write-group branch for review [07:46] cool [07:46] resending partial inserts (typo :() [07:47] :( [07:49] jml: thank; commonprefix? [07:50] lifeless: os.commonprefix. [07:50] lifeless: it's a blight on the standard library. only use it if you want your file operations to be broken [07:50] lifeless: oh, heh, I already had that exact one-liner in my suspend-write-group branch. [07:50] lifeless: os.commonprefix('/foo/bar/baz', '/foo/baz/bop') => '/foo/ba' [07:50] spiv: the typo ? [07:52] lifeless: yeah [07:53] spiv: :) [07:54] spiv: I don't see the patch [07:59] spiv: ok, its writing a pack directly [08:00] night all [08:02] lifeless: ok, patch sent. [08:03] lifeless: I'm done for the evening, but I'm very happy with the shape of the resume_write_group code. Maybe it's my Friday evening judgement, but I think that patch only has a few minors warts, rather than any glaring problems ;) [08:04] lifeless: thanks for helping me get it into shape [08:04] * spiv -> food, weekend, world of goo, etc! [08:11] spiv: approved [08:12] spiv: I'm going to keep plugging at joining these dots [08:12] * lifeless will be begging for reviews I think [08:12] lifeless: I'll pqm-submit it, thanks. [08:13] the dependent patch is onto ascii [08:13] should land fine I think [08:19] spiv: woo, 74 -> 60 [08:20] lifeless: the ratchet? [08:20] for non-stacked [08:20] stacked goes 99 -> 101 [08:20] Nice. [08:20] the latter is uninvestigated as yet [08:20] but I think its going to be due to my if block pending suspend integration [08:20] Is stacked still hitting the thing in InterPackRepo that explicitly uses something dumb? [08:20] so commiting and sending for review [08:21] (the "if len(fallback_repos) > 0" check) [08:21] I have a merge request from a friend who didn't set his 'whoami' info on the shared computer he was using. How can I modify the commit email address before I merge into my main branch? 
[08:24] spiv: yes [08:25] nhaines: you can't, because commits are immutable; you can either degrade to a regular patch (do a merge, then bzr revert --forget-merges), or not [08:27] lifeless: both what I feared and what I needed to know. Thanks. :) [08:34] spiv: one more landed [08:46] I'm looking for a reviewer for http://bundlebuggy.aaronbentley.com/project/bzr/request/%3C1235118639.6233.151.camel%40lifeless-64%3E [08:55] lifeless: bzrlib/smart/protocol.py -> wouldnt just raise do? [08:57] ronny: no, interim revisions in called code trash state [08:57] s/revisions/exceptions./ [08:59] ah, i see [09:00] thats actually a pretty standard python recipe [09:01] never ran across it [09:01] wont that go away in python3 anyway? [09:01] (afair they put the traces on the exception) [09:10] \quit [09:17] lifeless: haha, that diff removes _length_prefix again :) [09:18] Ah, and moves it to a different file. [09:18] lifeless: looks good. bb:approve [09:18] Ok, really gone for the night now. [09:23] spiv: ciao : [09:23] spiv: just finishing the whole thing off now :) [09:29] lifeless: hmm, i made up a small testcase, and it doesnt seem to trash state, was this a bug in earlyer python? [09:29] ronny: possibly [09:29] unless im misstaken http://paste.pocoo.org/show/104633 demonstrates that it works in 2.4/2.5 [09:31] ronny: I'd need to check theexact way things propogate [09:31] your test does look plausible to me at ist consideration [09:33] bbl, need to solve some snow issues === monty|meeting is now known as montywi [09:56] ok [09:56] 83 round trips [09:56] so 18 less [09:56] and massively less for non-muppet content [10:57] I just installed using Bazaar-1.11-OSX10.5.dmg and I see a strange message: "WARNING: bzrlib version doesn't match the bzr program" [10:57] which goes on to print: bzrlib from ['/Library/Python/2.5/site-packages/bzrlib'] is version (1, 11, 0, 'final', 0) [10:57] which looks like a match to me. [10:57] http://pastebin.com/m1260cb9a [10:58] Any suggestion as to what I might look at next? [10:59] * beuno pokes verterok and vila [11:19] * verterok is not here [11:19] hi beuno [11:20] hi verterok [11:20] EricHerman: hi, check you have another bzrlib in your PYTHONPATH [11:20] this is what you get for being an apple fanboy [11:20] looking [11:21] pings for support on it ;) [11:21] beuno: apple fan boy? where? I use Ubuntu ;) [11:22] verterok, I do not have PYTHONPATH set in my envrionment [11:23] verterok, oh, don't tempt me... [11:24] * EricHerman tries find / -name bzrlib [11:30] find only reports /Library/Python/2.5/site-packages/bzrlib [11:35] EricHerman: weird, I didn't installed 1.11, but 1.10 works ok. [11:36] EricHerman: I'll download and install 1.11, and report back if I hit any issues [11:36] EricHerman: did you upgraded from a previous version? [11:37] I /had/ 1.6 which was built from source, but I removed it. [11:37] it's possible that there was something left over, but I don't know what that might have been. [11:38] * domas hands over strace to Eric :) [11:38] EricHerman: which bzr script are you running? [11:40] james_w, I'm not sure how to answer that question. I envoke /usr/bin/bzr .... [11:40] (that file itself does not have a version number in it) [11:41] ls -ld $(which bzr) [11:41] -rwxrwxr-x 1 root wheel 4313 Mar 22 2008 /usr/bin/bzr [11:41] date is suspicious on that, hmmm! [11:42] EricHerman: yeah, just wondered if you had an old /usr/local/bin/bzr or something [11:42] EricHerman: please check if you have a bzr in /usr/local/bin [11:43] 'which bzr' [11:43] Indeed! 
but oddly it *also* gives that message. [11:43] I think I'll try to remove everything and start again. [11:44] EricHerman: good idea :) [12:22] well, having done a full rm -rf of all the potentailly bzr related stuff I could find on the system (with luck I didn't nuke *too* much!), then doing a reinstall using Bazaar-1.11-OSX10.5.dmg ... I no longer see the strange message. Conclusion: something stale on the filesystem was at issue. Had to be one of these: http://rafb.net/p/X85tuO44.html [12:23] on a clean system Bazaar-1.11-OSX10.5.dmg should be just fine. [12:50] hi, I can I do with bzr the same thing than svn status -r HEAD ? [12:53] harobed: svn status -r HEAD is not a valid svn command [12:54] sorry, svn status -u [12:54] or svn diff -r HEAD [12:55] to see differences between repository and local working copy [12:56] "bzr diff :parent --new ." [12:57] Or maybe "bzr diff :bound --new ." if you're in a checkout/bound branch. [12:57] (Not tested, but I think that should work) [12:58] ok, I'll test it [12:59] Please tell me if it works, it might interest me in the future :-) [13:00] Although I usually do "bzr status" and "bzr missing :bound". [13:01] I've this message : bzr: ERROR: No bound location assigned. [13:03] ok, I've binded to parent bancsh [13:03] branch [13:03] bzr diff :bound --new . work as svn diff -r HEAD [13:04] but I don't know how to do : svn status -u [13:04] Well. If you don't want it to be a checkout, don't bind it... [13:06] bzr diff :bound --new . | lsdiff [13:07] does "bzr status -r branch::bound" work? [13:10] james_w: yes, it work [13:48] Er... I constantly get "Server does not understand Bazaar network protocol 3, reconnecting. (Upgrade the server to avoid this.)" messages, even after upgrading the server to 1.12... [13:49] (In a branch bound to a bzr+ssh://foo branch in a remote repo) [13:53] Hah. It helps if I upgrade the correct server. [13:53] heh :-) [13:53] * Lo-lan-do hides [13:53] :D [13:54] Damn. I upgraded my workstation to 1.12 and now I can't reinstall bzr-svn. [13:55] (Current bzr-svn depends on subvertpy, which is stuck in NEW) [13:56] I'll make do with a local install in the meantime. [14:00] Lo-lan-do: You could build subvertpy yourself. jelmer has published the packaging branch somewhere. [14:01] Yeah, I did that already :-) === `6og is now known as Kamping_Kaiser [14:01] hey guys :) [14:03] What's the status of nested-tree support in bzr? I see some stuff in the wiki about a branch jelmer was working on being merged in 0.15, and that it just requires a manual bzr add path but it doesn't seem to work -- just ignores the subtree, listing it as unknown. [14:03] That branch was back in 2005, so It's hard to believe it's not finished. Was it abandoned? [14:10] vila: good morning [14:13] /join #mercurial [14:13] sorry guys; stupid irc client :) [14:13] jam: good morning [14:14] jel: it wasn't specifically abandoned, just focus of dev shifted a bit. IIRC abentley is planning on working on it near full-time in ~the next month [14:14] vila: just thought I'd say hi. Also, a quick question about the EC2 machine [14:15] Are you interested in setting it up with my help? Or should I just do it? [14:15] jam: ah, that's great news. Are there any details of how it will be implemented/used? 
[14:15] jel: So the original design was that if you did: [14:15] bzr branch $METAPROJECT project [14:15] cd project [14:15] bzr branch $SUBPROJECT subdir [14:15] bzr add subdir [14:15] bzr commit [14:16] Then you would have a "combined" project [14:16] such that in the future doing "bzr branch $METAPROJECT" would give you both [14:16] but the project histories would be versioned independently [14:16] just that when you do "bzr commit" in the outer-dir, it would recursively commit the children [14:16] jam: I'm still uncomfortable with using Windows remotely, I still plan to set one up as a VM locally (with an emphasis on using only free software and an automated setup) so I will certainly monitor what happen on the EC2 machine [14:16] and track the committed revision in the parent [14:16] jam: ok, cool, sounds like the old plan, which looked very usable :) [14:17] vila: k, I thought I might get you to write my documentation for me again. :) I have 3-4 emails about when I set up kerguelen [14:17] and I just need to turn it into a real "setting up a win32 build host" document [14:17] jam: certainly something I plan to do anyway :) [14:17] Since we're setting up yet-another host, it is a good time to do so [14:18] abentley: Is there a name for your nested-tree project, or a branch/feed somewhere where I can track its progress? [14:19] jam: yes, that's the "yet-another" that triggers my desire to automate it (and the aim to allow others to do it as well, no matter what degree of automation I'll achieve) [14:19] jel: I haven't set anything up yet. [14:19] ok, thanks [14:20] abentley: Not to push you, but can you confirm that you're planning to work on it soon? It's an important feature for me, and I need to make a decision on which vcs to use, is all. [14:20] jel: I am planning to work on it soon. [14:21] k, thanks :) [14:22] abentley: have you seen my mail about iter_changes bogus indentation regarding tree-reference handling ? [14:23] vila: Yes. Thanks for that. I'm surprised it didn't break any test cases. [14:23] abentley: yeah, same here, shame :-( [14:58] jam: ping [14:59] hi barry [14:59] jam: hi! i upgraded code.python.org to bzr 1.12 today [14:59] jam: but i think i want to upgrade the repo formats of the mirrors [15:00] barry, you can use recursive upgrade plugin: http://bazaar-vcs.org/BzrPlugins [15:00] or [15:00] jam: we're on rich-root-pack right now. thinking about 1.9-rich-root, but i'm trying to decide whether the performance improvement would be worth (potentially) forcing people to upgrade their clients [15:00] jam has a nice script in contrib/ [15:01] barry: from my experimentation with the python repo, I would say yes [15:01] but I don't create the policy for upgrading clients [15:01] 0.92 -> 1.9 was a big improvment for me, FWIW [15:01] beuno: for python.org it is even better, because the bzr-svn conversion compresses astoundingly well [15:01] barry: I don't know how hard it is for your clients to upgrade [15:01] so I can't really make that decision for yo [15:01] you [15:02] I can say that for incremental updates [15:02] it should be at least 3x faster [15:02] for initial branch of everything, I know there is 20% less data to copy [15:02] but that isn't necessarily an across-the [15:02] sorry [15:02] 20% isn't necessarily worth upgrading clients [15:02] beuno, jam that's significant. 
i don't think there are /that/ many users of the mirrors and it's mostly early adopters anyway [15:02] barry: so for *my* anecdote [15:03] doing "bzr up" for bzr.dev [15:03] went from 45s to 15s [15:03] when there are about 3 revisions to update [15:03] I would expect python.org to do even better [15:05] sorry to ask but, 15 seconds is still a lot no? [15:05] jam: thanks. sounds like it's worth it unless i get cries of objections [15:07] santagada: I was going to respond... [15:07] Anyway, it just took 20s to update 7 mainline revs == 67 ancestry revs [15:07] and all that over a dumb transport [15:08] I'm not entirely sure how long things should be, but given that connecting ssh to launchpad takes 7-8s... [15:12] is it possible/easy to change the e-mail address associated with a commit that has been made? [15:12] i.e., the `bzr whoami` entry [15:12] or once its in, can you no longer do anyrthing about it? [15:13] oldman: You can 'uncommit' back to it, but that'll break your history. [15:15] a simple sed -e 's/@example.com/@real.com/' of .bzr would not be safe? [15:16] oldman: I very much doubt it. [15:16] ok [15:16] so if I... [15:16] bzr branch lp:project; bzr uncommit; bzr uncommit; bzr commit -m 'reapplied x' [15:17] and then bzr push lp:project [15:17] would that work? :) [15:19] oldman: You would have to 'bzr push --overwrite'. [15:19] And it would break anyone else's branch of lp:project. [15:20] it would only break their branch if they were > the revno I uncommit to though wouldn't it? === chx_sleeping is now known as chx [16:20] is there a quick way to clean up all ~1~ back ups in a working tree? [16:20] phinze, bzr clean-tree --detritus [16:20] (which is provided by bzrtools) [16:21] beautiful [16:21] Peng_, See my last post to the mailing list :-) [16:21] --detritus ...? what strange voodoo is this? :) [16:21] phinze: it's a beautiful English word akin to 'cruft' [16:21] jelmer: Well, it hasn't been merged yet. :P [16:21] "debris: the remains of something that has been destroyed or broken up " [16:21] nice [16:26] hmm might it make sense to have an option to bzr switch that performs clean-tree as well? [16:28] i use it on lightweight checkouts to switch branches and focus, and i don't need old detritus lying around [16:32] jelmer: quick question about svn import layouts [16:32] I'm accessing a repo which is of the form "trunk/project" [16:32] rather than "project/trunk" [16:32] do you have a standard format for that? [16:33] maybe it is the "itrunk" layouts [16:36] doesn't seem to work [16:39] jam, yeah, itrunk is for that [16:39] itrunk 1/2/3 always gives me "specified URL is under a branch" [16:39] The specific URL is: http://xdelta.googlecode.com/svn/trunk/xdelta3 [16:40] jam, if you're interested in just that branch, "bzr branch" should work [16:40] Importing as "trunk" just worked, but gave me both projects [16:40] "bzr branch http://xdelta.googlecode.com/svn/trunk/xdelta3 xdelta3" should work [16:40] jelmer: it did... so why doesn't svn-import work? [16:41] jam, bzr branch always works, it assumes the URL you give it is a branch [16:41] svn-import imports multiple branches [16:41] so it has to guess what the layout is [16:42] sure. though I would have thought it matched the "itrunk" layout pretty closely [16:42] which is why it is strange that "bzr svn-import --layout=itrunk1" never worked [16:43] then again, it isn't clear (to me) what 'level' means [16:43] jam, which URL did you give it ? http://xdelta.googlecode.com/svn ? 
[16:43] http://xdelta.googlecode.com/svn/trunk/xdelta3 [16:44] I should give it the root of the repo? [16:44] For auto-detect import: http://xdelta.googlecode.com/svn/trunk worked just fine [16:44] jam: svn-import on a branch should fail [16:45] http://xdelta.googlecode.com/svn/trunk doesn't work here: [16:45] bzr: ERROR: http://xdelta.googlecode.com/svn/trunk appears to contain a branch. For individual branches, use 'bzr branch'. [16:45] I just did "bzr svn-import http://xdelta.googlecode.com/svn/trunk" I'll try to verify that [16:46] nope, you're right [16:46] I must have used .../svn the first time [16:48] jelmer: so would doing "bzr branch .../svn/trunk/xdelta3" give me the same final branch as "bzr svn-import --layout=itrunk1" ? [16:48] (are they compatible?) [16:49] jam, yes [16:49] jam, well, if you're using 0.5 [16:52] jelmer: yeah, I ame [16:52] interestingly I get an error at the very end [16:53] File "/home/jameinel/.bazaar/plugins/svn/revmeta.py", line 1188, in get_revision [16:53] cached.metaiterators.add(metaiterator) [16:53] File "/home/jameinel/.bazaar/plugins/svn/revmeta.py", line 963, in __hash__ [16:53] return hash((type(self), self.from_revnum, self.to_revnum, tuple(self.prefixes), hash(self.layou [16:53] t))) [16:53] TypeError: 'NoneType' object is not iterable [16:53] which bit is None ? [16:53] I don't know [16:54] self.prefixes probably [16:54] anyway, "bzr branch" is great for what I need. Just wanted to understand things a bit better. [16:54] thanks for your tie [16:54] time [16:58] jam, Thanks, I think what's wrong [16:58] jam, bug in itrunk1 [16:58] * jelmer fix0rs [16:58] :) \o/ [17:03] jelmer: for #332116, I had already tried svn-upgrade... but for some reason, it refuses to upgrade that branch. [17:03] nevans, what's the error? [17:03] it successfully upgraded trunk and another branch. [17:03] no error. it just doesn't do anything. [17:04] lemme try again. I'll attach the bzr.log [17:22] jelmer: How much of an impact does svn 1.5 have on performance and whatnot? [17:22] Peng_, svn 1.5 or bzr-svn 0.5 ? === jfroy_ is now known as jfroy [17:41] jelmer: svn 1.5. [17:42] jelmer: The revision "Use global needed variable." vanished from bzr-svn 0.5's history. Was that intentional? [17:50] I've got a mix of 1.3 and 1.4, so I'm wondering if it's worth doing anything about it. bzr-svn is already fast enough with them, I guess, but I dunno. [17:52] Peng_, the main advantage of 1.5 is that you can use revision proeprties for bzr-svn metadata [17:52] retrieving that metadata will also be faster with 1.5 [17:52] other things won't be significantly faster [17:52] Peng_, yeah, it's correct that revision vanished [17:58] Poor, unloved revision. :( [17:58] Anyway, thanks for the information. :) [17:58] abentley, hey [17:59] hey [18:01] abentley, I'm looking at the integrating with bazaar wiki page, and there's a line that reads: [18:01] source.create_checkout('/tmp/newBzrCheckout', None, True, accelerator_tree) [18:01] Does that operate in place, or return the checkout? The wiki makes it look like it operates in place. [18:01] jelmer: good morning [18:01] abentley, and so that would make me think that source would change types. [18:02] jfroy, hi [18:02] rockstar: I believe source.create_checkout returns the working tree. [18:02] abentley, okay, that's better. [18:03] rockstar: It does not change the type of the branch. The branch remains a branch. It's just a branch that is checked out somewhere. 
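In other words, the create_checkout pattern being discussed, as a sketch (the URL and paths are the placeholders from the wiki line quoted above; the positional arguments line up with to_location, revision_id, lightweight and accelerator_tree):

    from bzrlib.branch import Branch

    source = Branch.open('lp:blah')
    accelerator_tree = None  # or a local tree to copy file contents from
    checkout = source.create_checkout('/tmp/newBzrCheckout', None, True,
                                      accelerator_tree)
    # 'checkout' is the WorkingTree of the new (lightweight) checkout;
    # 'source' is unchanged and still refers to the branch opened above.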
[18:03] abentley, okay, so when I reference branch now, it's still the remote branch that I opened with Branch.open('lp:blah') [18:04] Yes. [18:34] jam, any chance you can do a quick review of my InterBranch patch? [18:34] jam, That one would really help a lot of things wrt bzr-svn UI and performance and is a requirement for bzr-git pull from remote branches [19:03] I can't seem to branch with bzr-svn 0.5 [19:03] hi LaserJock [19:03] LaserJock, what doesn't work? [19:03] I just get bzr: ERROR: Not a branch: [19:04] which SVN url? [19:04] svn://anonsvn.kde.org/home/kde/trunk/extragear/sysadmin/kiosktool/ [19:04] svn co works fine [19:05] LaserJock, the layout for the KDE repository is hardcoded [19:05] LaserJock, try svn://anonsvn.kde.org/home/kde/trunk/extragear [19:06] jelmer: ok, well that works [19:07] jelmer: so is there a way to get sysadmin/kiosktool ? [19:08] LaserJock, you can set layout = itrunk3 in the configuration for the KDE URL [19:09] LaserJock, patches to bzrlib.plugins.svn.layout.custom.KDELayout are also welcome [19:18] jelmer: where do you set layout = itrunk3 ? [19:18] LaserJock, ~/.bazaar/subversion.conf or ~/.bazaar/locations.conf [19:18] in the section related to the KDE repo [19:19] jelmer: there isn't a KDE section yet as I haven't branched, can I make one or do I need to branch something first? [19:21] LaserJock, you can add one [19:22] LaserJock, you might want to add use-cache = False too [19:24] jelmer: got it going [19:25] jelmer: this is *way* faster than 0.4 . I've been stuck with svn for KDE but this definately seems usable [19:29] LaserJock, cool ! [19:30] LaserJock, that was one of the main goals of 0.5 [19:36] LaserJock, Ideally the KDELayout class should be customized to properly recognize all branches in the KDE svn repository [19:37] LaserJock: that would, among other things, fix "bzr svn-import svn://anonsvn.kde.org/home/kde" [19:47] abentley: Hi [19:47] jelmer: hi [19:47] abentley: Is there some way to register a remote branch on Launchpad? [19:48] jelmer: I believe so. [19:48] abentley: Any pointer ? Where should I be looking? [19:49] Go to your profile page, hit the code tab, click the big orange "register a branch". [19:49] abentley: Ah, thanks [19:50] jelmer: np [19:50] abentley: Do you happen to know if there's something similar for the remote API? [19:50] jelmer: I don't know. [19:51] abentley: ok, thanks again [19:58] jelmer: is push now safe to use to create a new branch with bzr-svn? [19:58] the svn-push command is gone, but the release notes are not clear if push will work right now or is currently broken pending some changes to bzr [19:58] jfroy, svn-push still works for now [19:58] jfroy, "bzr push" working out of the box is waiting for a patch to enter bzr.dev [19:59] OK, so it's just gone from the command help topic. [20:03] jfroy: yep [20:16] Hi there [20:16] I'm having trouble splitting a tree with bzr [20:16] doing bzr split only produces an error [20:17] ...What's the error? [20:17] bzr: ERROR: No WorkingTree exists for "file:///home/guillaume/sandbox/bzr_tree/Bell/trunk/.bzr/checkout/". [20:17] don't understand it [20:19] It seems "bzr split" needs the branch to have a working tree. You could run "bzr co" to create it and "bzr remove-tree" to remove it again afterwards. [20:19] * Peng_ shrugs [20:20] ? [20:20] I'm trying to branch the tree and split from there maybe [20:21] now it says: [20:21] bzr: ERROR: To use this feature you must upgrade your branch at file:///home/guillaume/sandbox/bzr_branch/. 
[20:22] I try a checkout as you suggested [20:22] etenil, I think you need to upgrade to a rich-root format (such as 1.9-rich-root) to be able to use split [20:22] how can I upgrade the tree? [20:23] is there a command? [20:23] bzr upgrade --1.9-rich-root [20:23] ok, I'll give it a try [20:24] bzr: ERROR: no such option: --1.9-rich-root [20:24] -_- [20:24] I can just use --rich-root [20:24] from what the help says [20:27] Once I have split my tree, will the created branch have its versions renumbered? [20:33] etenil: It would be better to use --rich-root-pack than --rich-root. [20:34] etenil: What version of bzr are you running? [20:40] jelmer: Not to be excessively stalky, but why did you just rename lp:bzr-svn's branch? [20:41] Oh, it's not mirrored now. [20:42] Peng_, Yep [20:42] Peng_, this should save me some explanation when people try lp:bzr-svn and it's out of date [20:42] Ah. [20:43] The email alerts for new revisions were nice, but oh well. [20:43] Peng_, ah, there's no emails for remote branches? [20:44] jelmer: I dunno. Probably not, since it doesn't show the log. [20:44] Peng_, ideal would be a mixture of mirrored and remote branches [20:44] where it would redirect .bzr to the actual branch but still keep a lp-specific mirror [20:45] That would be neat, but kind of confusing. [20:46] bzr update returns tree is up to date (rev 42) and my repo is at revision 43 (using launchpad)... [20:46] I don't know what to do :/ [20:46] Goundy: "bzr pull"? [20:46] Peng ah [20:47] Peng I don't get it... I get an error ^^ [20:47] You do? [20:48] Peng I think I understand I did a bzr pull and kicked out my mainline before >< [20:48] forgot [20:48] thank you Peng [20:49] the conversion worked [20:49] i try the split [20:49] Goundy: You figured it out? That's good. :) [20:49] Peng_ yea >_< thanks ! [20:51] jelmer: md 2.0b1 packages in debian/experimental [20:52] Tak, w00t! [20:55] the libpython md-bzr backend is rocking [20:55] that works!!!! [21:01] Tak, I'll give it a try :-) [21:07] alright, thanx for your help all [21:07] bye [21:18] Hi again [21:18] I have split up a directory from a tree [21:19] but there are gaps in the revisions [21:19] is it possible to renumber the revisions? [21:24] etenil, what do you mean with gaps? [21:24] well [21:25] my bzr tree comes from a svn tree. And many projects were in there. Some revisions are only relevant to some of them [21:27] etenil: bzr and svn revision numbers are different - the svn revision numbers are per-repository, the bzr revision numbers are per-branch [21:27] or maybe I should split and renumber with svn then convert [21:27] etenil, if you have bzr-svn installed then "bzr log -v" will show the original svn revno as well as the bzr revno [21:28] ok [21:28] but I don't want to keep the svn tree [21:28] etenil, the information about the original svn revno is in the bzr tree [21:28] I want to split it per project and port to bzr [21:29] etenil, why are you running "bzr split" btw? If you need just a subdir of the svn repository, you can just "bzr branch" that instead [21:30] jelmer: that's ok, I think svndumpfilter can split the svn repo and renumber it [21:30] I can import it after that [21:40] jelmer: 'bzr register-branch' ? [21:40] lifeless, that registers a mirrored rather than a remote branch [21:42] oh [21:42] is that a problem? 
[21:42] lifeless, in some situations it's useful to be able to register remote branches [21:43] since they don't have a delay [21:44] lifeless, not that I would register remote branches very often, being able to do it through the web ui keeps me happy enough [21:56] jam: hi [21:57] hi lifeless, just writing up some summary stuff about my latest version of gc, and I just put together an xdelta repo [21:57] I saw [21:57] is it faster than last time you looked at xdelta? [21:58] the way it is implemented right now, not really [21:58] I'm doing multi-parent with unlimited delta chains [21:58] it is faster to compress than gc [21:58] but slower to extract [21:58] (3m45s versus 10min to compress, but 1m30s versus 17s to extract) [21:59] very interesting [22:00] any feeling on why its faster? [Can we use the compressor in gc ?] [22:01] lifeless: why the compression is faster? [22:01] it probably has less *stuff* to compare against [22:02] doing multi-parent [22:02] means you are comparing 1k lines versus 2k lines [22:02] while gc is comparing with all texts inserted so far [22:02] OTOH gc doesn't have to build new dicts all the time [22:02] sure [22:03] I'm wondering if xdelta simply has a better compare logic we could adapt [22:03] xdelta is also written in C by people who have been spending a lot of time optimizing it :) [22:03] at least, more time than you and I have probably spent on GC [22:03] oh, and for the record, I should mention that the "repository-details" time for the knit repo is 30s [22:04] so it is ~18s to extract all texts for gc, 30s for knits, and currently 1m30s for xdelta [22:04] though when gc was experiencing bad paging, it was down at 7min [22:04] so I'm willing to give xdelta a bit of margin here. [22:04] me too [22:05] xdelta certainly has a good[same as knit] story for skinny packs/delta reuse [22:05] lifeless: I should also mention that your new changes to the streaming logic confuse existing plugins [22:05] "knit-delta-closure" isn't in the adapter registry [22:05] I'm very concerned about getting good compression out of it though [22:05] lifeless: well, for multiparent versus my best gc so far, I get 9M for xdelta, versus 11MB for gc [22:06] so at least *right now* the story is just fine [22:06] jam: yah, one reason we pushed hard on this is that its the start of the release cycle, best time for disruption [22:06] as far as being in the adapter registry, just try for fulltext [22:07] you can see the change in knit.py in insert_record_stream [22:07] k [22:08] anyway, I need to get going [22:08] have a good weekend, lifelessf [22:09] ciao [22:09] I'm going to send up the final parts this morning [22:09] if you have a chance to review it, would love that:) [22:11] jam: thats *very* interesting that mp is smaller than gc; even with reordered texts? === sdboyer-laptop is now known as Crell_ [22:46] I have a directory with a few hundred files in it. I want to ignore most of them, but not all. if I ignore $DIR/*, will it cause problems down the line if I want to add/modify specific files from there? I'm asking because of the "Warning: the following files are version controlled and match your ignore pattern" when I added the ignore === Crell_ is now known as sdboyer-laptop === chx is now known as chx_sleeping [23:00] ameoba_: No, it's just letting you know. [23:01] fullermd- that's what I was hoping for. thx [23:01] ignore means "Don't add this file we we find it recursively". It doesn't have any effect on files that are already versioned or added explicitly. 
[23:06] uh oh, i think i broke shelve [23:06] i shelved on a bound checkout before running bzr up and then tried to unshelve [23:21] Trying to find an equivalent to git-cherry-pick. I want to grab individual patches from other branches and don't want a seperate merge message. [23:22] Any tips for how I can do this? [23:23] phinze: it should unshelve fine; what happens? [23:23] tyhicks: merge -r x..y does a cherry pick [23:24] tyhicks: because its not a transitive link in the graph the old revisions are not shown in log; you do need to supply a commit message (but it's scriptable with a tiny bit of python if you are doing this a lot and want to carry the commit message and author across automatically) [23:25] lifeless: IIRC, I have to do a commit afterwards and that creates another merge message [23:25] tyhicks: it will create a new commit object; just like git-cherry-pick does [23:25] tyhicks: it won't show as a merge, because cherry picks are not full merges [23:25] lifeless: https://bugs.edge.launchpad.net/bzr/+bug/332314 [23:26] Error: Could not parse data returned by Ubuntu: HTTP Error 502: Bad Gateway (https://launchpad.net/bugs/332314/+text) [23:26] spiv: guess what [23:26] lifeless: short story: the client isn't falling back when Repository.insert_stream is unavailable. [23:27] spiv: doh! [23:27] spiv: I'm seeing something very similar, in the stacked case, which hasn't landed yet [23:27] FAILED (failures=8, errors=19, known_failure_count=20) [23:28] spiv: are you going to do some bzr coding today? [23:28] lifeless: sadly no :( [23:28] spiv: I'm hoping to get the stcked case up for review [23:28] lifeless: I have to head up to the blue mountains today/tomorrow [23:28] nice [23:29] "have to". Woe is me ;) [23:29] spiv: hiking? [23:30] A bunch of stuff, farewell party, visiting family, possibly a small bushwalk might happen too... [23:30] enjoy! [23:32] Thanks! [23:34] can you look at remote.py, 1370 [23:34] that looks like it should fallback fine to me [23:34] spiv: perhaps the stream is being eaten partly? [23:35] lifeless: ah, hmm [23:36] lifeless: It *might* be an interaction with body_stream? [23:36] lifeless: oh! [23:36] I think the stream is consumed [23:36] the error is raised [23:37] remaining elements are consumed [23:37] yes, I bet the stream is being at least partly consumed [23:37] no error, but data not getting there [23:37] I'd only *expect* partly, but I've been wrong before. [23:37] your bug report shows the case that the entire stream is eaten [23:37] Right -- but I was also only pushing one new rev. [23:37] thats why the pack is deleted (it showed up as a no-op insertion) [23:37] one rev has a inventory too at minimum [23:37] so 2 records [23:38] * spiv nods [23:38] If I guard it with 1,13 [23:38] and make it mandatory [23:38] that will be safe I think [23:38] (That rev should have a text and sig, too) [23:39] So long as some prior RPC notices that remote is not 1,13 [23:39] But that guard is a good idea anyway, avoid trying the RPC when we know it won't work. [23:39] if its mandatory - no fallback - then we won't insert a partial stream [23:39] thats the correctness aspect [23:39] Oh, I see. [23:40] An ugly hack would be to first try with a stream of [] [23:40] garh no [23:40] I definitely don't want that as the final solution. [23:41] But it would fix the current bug that's breaking bzr.dev... [23:41] Oh, right. Of course, you are unlikely to get an error back until the whole stream is consumed, [23:41] because the whole request is sent before a response is sent. 
[23:42] So yeah, the entire stream will be consumed... so I guess you're thinking of moving the fallback logic to closer to where the stream is being generated? [23:43] assuming this passes is it ok, to stop people running trunk corrupting repos [23:43] http://paste.ubuntu.com/120786/ [23:44] _is_remote_before((1,13)) will not be true most of the time, though. [23:44] Unless the server is *really* old ((1,6) or earlier) [23:45] so unknown is > everything? [23:45] Nothing ever does _remember_remote_is_before((1,13)) or whatever the magic is. [23:45] I thought v3 got the servers version [23:45] Right. It assumes "current", and then ratchets down based purely on RPCs that are unknown. [23:46] There's a header, but it's currently purely advisory. [23:46] ah [23:46] so this is critical to fix:P [23:46] (And its value is not necessarily parseable, it's more intended for human-consumption) [23:46] uhm, inserting [] will probably work [23:47] You could make a new version of get_parent_map that's only in 1.13 ;) [23:47] hey people, quick census / survey: what would you consider to be the "essential plug-ins you should have"? [23:47] jfroy: depends on what I'm doing [23:47] I'm building a bazaar distribution and I'm now selecting the plug-ins I will include along with core. [23:48] jfroy: for me personally, it's bzrtools, loom, pqm, launchpad, gtk, svn [23:48] jfroy: but see my patch for bundling some plugins for a list of ones we concluded were good to bundle [23:48] But it definitely depends on what the person is doing. [23:48] lifeless: so I need to go [23:48] The target audience is people who use primary subversion (bzr-svn is already in), and people who may be using another DVCS, likely git. [23:48] lifeless: I think you now have all the relevant info for this bug, though. [23:49] spiv: ok; I'll insert [] [23:49] lifeless: where is that patch? mailing list, bug? [23:49] bb:approve for that? [23:49] lifeless: ok [23:49] jfroy: bundle buggy somewhere; its not finished {doesn't run the install for each plugin} [23:50] spiv: I'm surprised the RemoteV2 tests dont' trigger this [23:50] lifeless: yep. Ideally add the _remember_remote_is_before / _remote_is_before guard like other client methods do. [23:50] lifeless: RemoteV2 tests know _remote_is_before((1,6))... [23:50] lifeless: oh! [23:50] spiv: I'll get the simplest possible correctness fix up now [23:50] lifeless: RemoteV2 also triggers an instant UnknownSmartMethod when you try call_with_body_stream [23:50] spiv: ah [23:51] lifeless: it never consumes the stream in that case, because the encoder knows it can't be done :) [23:51] lifeless: http://bundlebuggy.aaronbentley.com/project/bzr/request/%3C1217210104.21600.106.camel%40lifeless-64%3E? [23:51] lifeless: perhaps it should consume the stream, just to make it harder to get confused... [23:51] lifeless: after all, the caller has to cope with a consumed stream anyway... [23:56] spiv: well, we could look at buffering the stream in a T join, until we know success/fail (HTTP can error before a post completes) [23:56] jfroy: yes