[01:14] looking at bug 726584 and the corresponding RT. we have 2.3b5 on jubany atm. I can find a ppa from max for 2.3.0-1; but it's unclear to me which version matches up to revno? I'm guessing by the dates that we need 2.3.1 or possibly .2? [01:14] Launchpad bug 726584 in Ubuntu Distributed Development "flash-kernel import fails apparently mismatching maverick and natty branches" [Undecided,New] https://launchpad.net/bugs/726584 [01:22] spm: let me double check [01:23] ta [01:23] spm: yes, 2.3.1 [01:23] coolio, do we have a ppa of that already I can backport from? [01:25] spm: none that I can see. [01:26] (There's the daily builds from trunk PPA, but I think that's a bit riskier than we want to use) [01:32] okidoki. I'll update the RT with a request for a PPA we can backport off. ta spiv [01:32] spm: Thanks! Glad to have progress :) [01:32] heh, nothing quite like being down 1-2 folks in a team of 4. :-) [01:33] Yeah, I bet. [01:39] maxb, hi, could you update the ppas some time? [01:58] poolie: hi [01:59] yes, I promised to do that this evening and have so far failed, due to Rocket Bunnies :-) [01:59] I shall do it before I sleep [02:04] thanks mate :) [02:04] i sympathize about the bunnies [02:07] yikes, there have been 19 revisions in the debian packaging branch since the last PPA release [02:26] * maxb is confused [02:27] How can I be getting conflicts in files when merging 2.3.1 into ppa/natty ?! [02:27] upstream files, that is, not debian/* [02:48] Cherrypicking, maybe? [03:13] Is there a way to force Bazaar back to the original behaviour of not attempting to use empty commit messages and just straight up cancelling a commit if it gets one? === psynaptic is now known as psynaptic|away [03:16] that changed? [03:21] lifeless: yeah. It's rather off-putting. Now, every time I go "oops, not ready to commit all that after all", quit editor without saving, and it asks you a stupid "do you really want to commit empty" question, to which I have to answer 'n'. [03:21] lifeless: it used to just say "no commit message specified, quitting" which, as far as I can tell, is how it's supposed to work :( [03:50] Was probably changed for https://bugs.launchpad.net/bzr/+bug/530265 [03:50] Ubuntu bug 530265 in Bazaar "Not modifying a suggested commit message commits anyway" [Medium,Fix released] [03:54] I don't think there's a setting to disable that prompt, but I guess it'd be possible to write a plugin to intercept ui_factory.get_boolean("Commit message was not edited, use anyway") and return a hard-coded answer. [03:56] spiv: or handle empty messages specially [07:02] Say I want to revert an old change in the trunk, I can do this: (in trunk) bzr merge -r old-rev2..old-rev1 && bzr ci -m "Undo an old change". Say there are many changes committed after this reversion. Later on, I want to bring that undone change back, I can do this: (still in trunk) bzr merge -r old-rev1..old-rev2 && bzr ci -m "Bring that old change back". I tried and this worked. But is this the canonical/recommended way? [07:04] yes [07:04] hi jam [07:04] poolie: Thanks [07:40] hi all ! [08:23] hi po [08:23] hi poolie [08:26] and vila, too! [08:26] _o/ [08:29] Does bzr actually use per-file DAGs to assist merging? [08:30] I'm getting all sorts of spurious conflicts in criss-cross situations, even though the specific files conflicting didn't criss-cross [08:30] e.g. 
the current merge of loggerhead trunk into daily-ppa [08:35] and in fact, 4 files which didn't conflict drastically failed to have their changes merged correctly [08:35] maxb: not as well as it should, jam knows the best about this specific point [08:36] maxb: merge --weave does [08:36] (does what you want) [08:36] maxb: thats the main problem with re-landing code :) [08:36] but I'm trying to at least make sure that I do the work of merging into experimental [08:37] I didn't realize the ppa branch would have that problem, too [08:37] --weave ... did not do what I wanted :-( [08:37] At least for the daily-ppa branch I can just revert all the non-debian bits to the pending merge tip and be done with it [08:42] maxb: can you point me to the branches you are merging? I can give it a look over [08:42] maxb: I would guess the daily ppa would have tried to build the old trunk tip [08:42] which has been removed from history [08:42] which is going to confuse the hell out of everything [08:45] Most recently I was trying to merge lp:loggerhead into lp:~bzr/loggerhead/daily-ppa [08:45] maxb: right, and you actually need to revert all of the 'experimental' branch back out of the daily-ppa, since it used to be in lp:loggerhead [08:45] maxb: so probably the best thing is as you said [08:46] But that had already been done in the past [08:46] maxb: but not in the trunk branch, so things get really weird in history [08:46] evidently :-) [08:46] daily-ppa basically has a change for almost every file that supersedes what is in trunk [08:47] what was in trunk [08:47] but then trunk updates == conflicts [08:47] ugh [08:47] bzr really ought to understand "I merged foo and reverted to its version" as "I'm happy to accept foo's changes in future" [08:47] maxb: if we had reverted trunks contents using "bzr revert -r OLD; bzr commit", and then rebuilt experimental from there [08:47] it would work better for you [08:48] right, I'd better get myself to work now. I'll re-stare at trees [08:48] later [09:01] vila, mgz: ping about tarfile exports. "tarfile.extractfile()" seems to hang indefinitely if you give it corrupt data. Which does exercise the test, but means it has a really bad failure mode [09:02] weird, what is hanging ? [09:02] f = tarfile.extractfile('path'); f.read() the .read() is hanging [09:03] O_o [09:04] I'll try to trace into it to give better detail [09:04] is there a pipe there ? [09:04] vila: nope, it is reading from disk [09:04] O_O [09:05] hi jam, vila [09:06] _o/ [09:08] vila: ah, maybe it is "assertEqualDiff being unhappy trying to match 800 lines of gobbledy-gook [09:08] vila: yeah, stopping the hang shows it in difflib [09:08] so better and worse :) [09:10] 800 lines ? That's a lot... [09:10] yeah, swapping out assertEqualDiff made it happier [09:10] _file_content = ('!\r\n\t\n \r' [09:10] + '\n'.join([osutils.rand_chars(80) for i in range(800)]) [09:10] + '\n') [09:10] doesn't seem very big to mee [09:10] wow, that's a bit bad it would hang [09:10] me [09:10] hmm, please don't use rand_chars in tests, it is *guaranteed* to fail one day or another [09:10] poolie: it might just go O(N^2) and die, or it might have a real bug in the diff code. [09:10] vila: it is meant to [09:11] how do you expect it to fail? [09:11] (I need a way to force '\r' and '\n' in the compressed streams) [09:11] which has all sorts of "tar header", etc. [09:11] I can hand-craft something today [09:11] because you will encounter a combination that while unlikely will break the test ? 
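A deterministic middle ground between jam's random payload and a fully hand-crafted one is to seed the random source, so the test data is byte-for-byte identical on every run while staying large and varied enough that its compressed form almost certainly contains the '\r' and '\n' bytes the test needs to exercise. A minimal sketch using the stdlib random module rather than bzrlib's osutils.rand_chars (this is not bzrlib's actual test code):

    import random

    # Sketch only: a fixed seed makes the "random" payload reproducible,
    # so any failure can be re-run and debugged with the same data.
    _rng = random.Random(20110316)
    _chars = 'abcdefghijklmnopqrstuvwxyz0123456789'
    _file_content = ('!\r\n\t\n \r'
                     + '\n'.join(''.join(_rng.choice(_chars) for _ in range(80))
                                 for _ in range(800))
                     + '\n')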
[09:11] or I can throw enough random data at it, that I'm sure it has at least one char [09:11] vila: this is file content itself [09:12] if we are not managing to preserve exact fidelity [09:12] yeah, hand crafting is better, it's stable ;) [09:12] we have a problem [09:12] vila: except there are timestamps, and mtimes that will vary [09:12] which means that the real data may change its compressed form [09:12] still deterministic [09:12] vila: deterministic in the sense that time.time() is always changing? [09:13] yes, in predictable ways (except on vbox slaves but I digress ;) [09:14] so modulo restting the system clock, you can run the same twice and get the same result [09:14] introducing randomness is the opposite of test isolation :) [09:14] vila: still isolated tests [09:14] each test does not depend on the other [09:15] I can run 'osutils.rand_chars()' manually and hard-code it in the file, but that just bloats the test file (IMO) [09:15] and give us data to analyze if the test ever fails whereas rand_chars won't [09:16] and of course you're supposed to provide a sample that is not 80*800 chars long... [09:19] or said otherwise: if the test fails with 80*800 random chars, how would you debug it ? [09:19] do we have any tests using rand_chars? [09:22] hmm, not much but grep rand_chars in bzrlib/tests find 15 matches (not all of them are relevant though) [09:22] ./test_lockdir.py:394: lf1.nonce = osutils.rand_chars(20) seems valid for example [09:23] mm [09:23] well, almost, there should probably be an assertion that we don't collide [09:23] poolie: indirectly almost all of them via 'gen_file_id()" [09:23] we should probably make them deterministic [09:23] right [09:30] 2.3.1 is up in ~bzr/proposed - testers appreciated [09:34] thanks maxb [09:34] i'll install it === hunger_ is now known as hunger [09:46] jelmer, can you help me think of some other options? [09:53] vila: I used hard-coded data. Now the test fails intermittently [09:53] (explicitly not fixing the bug so that I know I should have a failing test.) [09:53] it was failing reliably with 64k data [09:53] it seems the timestamps are having a pretty strong effect on the compressed steram [09:53] stream [09:54] jam: then your data are not good enough ? Or you should isolate the test from the timestamps [09:54] vila: I can make it 64k data and be done with it [09:54] (which I had) [09:55] poolie: other options for? (and shouldn't you be sleeping by now?) [09:56] jam: huh ? You said bloating the test wasn't good no ? [09:57] vila: osutils.rand_chars(65536) isn't many characters in the file [09:57] or said otherwise: if the test fails with 80*800 random chars, how would you debug it ? [09:57] [osutils.rand_chars(80) + '\n' for i in xrange(800)] isn't much bigger [09:57] s/80*800/65536/ [09:58] jam: how will you debug it if it fails ? [09:58] vila: if we are corrupting the data stream, it doesn't matter what the content is, IMO [09:58] vila: start looking for missing binary streams [09:58] if it is 100 random chars, does that help debugging? [09:58] same problem [09:58] yes the problem is the random part [09:58] to trigger it, needs enough random data, that you can't eyeball the corruption [09:59] vila: note that we also want to trigger it for raw tar, .tgz and .tbz2, and .tar.xz [09:59] and .tar.lzma [10:00] if the test is random it's not a reliable reproducing recipe, do we agree on that ? 
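vila's "isolate the test from the timestamps" suggestion is workable because only two clocks normally leak into a .tgz: the mtime gzip writes into its own header and the mtime of each tar member. Pinning both makes the compressed bytes reproducible. A rough sketch (a hypothetical helper, not bzrlib code; the gzip mtime argument needs Python 2.7 or later):

    import gzip
    import io
    import tarfile

    def deterministic_tgz(out_path, member_name, content):
        # Pin the timestamp gzip stores in its header...
        gz = gzip.GzipFile(out_path, mode='wb', mtime=0)
        tar = tarfile.open(fileobj=gz, mode='w')
        info = tarfile.TarInfo(member_name)
        info.size = len(content)
        info.mtime = 0                    # ...and the per-member timestamp too
        tar.addfile(info, io.BytesIO(content))
        tar.close()
        gz.close()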
[10:01] vila: no [10:01] as I mentioned earlier, if the bug is too hard to reliably reproduce, I don't care about having a test for it as long as the diagnosis is 100%: we must use 'wb' and be done with it [10:01] vila: if the chance of failing is 1 in 2^256 [10:01] I'm fine with that [10:01] but you're pulling this number out of thin air right ? [10:02] 65k bytes, compresses down to about 40kb. If you then have 1-in-256 chance per byte to be a '\r' or '\n', then your total chances are (let me grab a calc) [10:02] since you can only prove it to be right by establishing which inputs are valid/invalid [10:03] vila: given 10k compressed characters, the odds it not having what we want is 10^-17 [10:03] vila: what bounds would you be happy with? [10:03] 0 [10:03] vila: note, zlib changes the compressed stream from source from time to time [10:03] it is allowed to [10:03] for the same compression settings [10:03] as long as the actual decompressed content is the same [10:04] vila: I can't give you 0 [10:04] I can give you epsilon [10:04] that's my point [10:04] vila: it means we can't force the compressed stream, but we can be really darn certain [10:06] the options are: [10:06] 1) no test [10:06] 2) test with a hard-coded reduced length [10:06] 3) test with random data [10:07] (3) is the worse because, a) it can't be debugged if it fails b) it will fail [10:07] vila: I disagree on (b) [10:08] if we are failing because of the content of files [10:08] then we have *serious* problems internally [10:08] do you have a fix for the bug ? [10:10] vila: certainly, but I would like to make sure we are testing all of them. Since mgz was kind enough to notice we had the same bug in other exporters [10:12] jam: can you seed rand_chars and guarantee you will get the same chars ? [10:12] jam: can you still haven't answer: "How would you debug it ?" [10:12] vila: with 800 random characters I had 2 accidental successes in 30 runs [10:13] vila: looking at the stream isn't a way to debug it, regardless. [10:13] unless you know a way to debug a gzip stream [10:13] Is that a way to say it can't be debugged ? [10:13] especially a gzip(tar) stream [10:13] vila: it can be "oh it isn't getting the data back properly, I must have missed something" [10:14] but analyzing the data stream probably isn't the way to debug it [10:15] then how about forgetting about the compressed stream and ensures we never corrupt it instead ? [10:16] vila: we are corrupting the compressed stream [10:16] how do we forget about it? [10:16] 'bzr export -' is writing the stream to stdout, and we weren't setting the binary flag [10:16] by using a mock compressor ? [10:17] 'bzr export foo.tar.gz' was opening the output file without the binary flag [10:17] vila: the bug was both in the command [10:17] and in the individual code ptahs [10:17] paths [10:17] right [10:17] and the issue is that it was corrupted because somewhere 'w' was used instead of 'wb' [10:18] vila: if it was just the 'export -' we could put '\r\n' in the file content and 'plain_tar_export' it. 
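The shape of the fix under discussion, as a sketch rather than the actual bzrlib patch (the helper name is made up): both code paths need to hand the exporter a binary stream, i.e. open the output file with 'wb' instead of 'w', and for 'bzr export -' switch stdout to binary mode on Windows, where text mode rewrites '\n' as '\r\n' and corrupts compressed data.

    import os
    import sys

    def open_export_stream(dest):
        # "bzr export -": the archive goes to stdout, which must be binary
        if dest == '-':
            if sys.platform == 'win32':
                import msvcrt
                msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
            return sys.stdout
        # "bzr export foo.tar.gz": open with 'wb', never 'w'
        return open(dest, 'wb')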
[10:18] vila: right [10:18] plus the fact that GzipFile(mode='w') automatically sets 'wb' [10:18] so we had 'w' before [10:18] but we switched to GzipFile(open('w')) [10:18] and ooops, it broke [10:18] if you can't reliably inject a stream that would be corrupted because you don't control how it's transformed, I'm saying: control it [10:19] vila: GzipFile is going to be python-version dependent, and possibly platform dependent [10:19] indeed [10:19] how do you want to control it, since we want to be testing the specific code path [10:19] BzipFile happened to be ok, plain_tar_export was bad, GzipFile was bad [10:19] ZipFile I think was ok [10:20] I'd like to make sure that transforming the code paths in ways that look fine, still are actually fine [10:20] 'make sure' != 1e10-17 [10:20] vila: osutils.rand_chars(65536) [10:20] 10^-17 is sure for me [10:20] if you prefer, I can make it [10:20] 10^-999 [10:21] yeah, some japanese may disagree :-/ [10:22] vila: /me => lunch, bbiab [10:23] i've been feeling sad about that [10:24] poolie: ? [10:24] I didn't mean to go to lunch and make you sad :) [10:25] :) the Japanese disasters [10:32] hello inada-n, i hope you're doing ok [10:34] hello, poolie. [10:34] About earthquake? [10:34] yes [10:34] I'm OK in Tokyo. [10:35] good [10:35] good [10:35] Thanks, a lot. === psynaptic|away is now known as psynaptic === Tak|Work is now known as Tak === deryck is now known as deryck[lunch] === vila changed the topic of #bzr to: Bazaar version control | try https://answers.launchpad.net/bzr for more help | http://irclogs.ubuntu.com/ | Patch pilot: poolie | 2.3.1 is officially out, 2.4b1 has been frozen === Ursinha is now known as Ursinha-lunch === deryck[lunch] is now known as deryck [17:12] goaeruaeos [17:12] I have to figure this out every six months === beuno is now known as beuno-lunch [17:12] how to get a list of landings I've done on launchpad [17:14] jml: some filtering from qlog maybe ? [17:15] vila: ahh, never mind. I apparently wrote a little plugin called pqmstats that does what I want [17:15] hehe :) [17:16] jml: don't tell me, you went: "Let's write a plugin to do X and let's call it pqmstats, oh wait, here is one !" :-D [17:18] vila: yeah, exactly like that [17:19] good, I'm not suffering from Alzheimer then ;D [17:19] hehe [17:20] jelmer: I think you miss my msg yesterday about lp:~vila/ubuntu/lucid/bzr/sru-2.1.3 [17:20] vila: I think I might - what was it? [17:21] jelmer: can you have a look at it (no urgency), there is a dirty trick in debian/watch but I filed bug #736145 about it [17:21] Launchpad bug 736145 in Launchpad itself "no series specific download pages" [Low,Triaged] https://launchpad.net/bugs/736145 [17:22] jelmer: 2.1.4 will be released soon so it's just a warm-up to see if I get it right [17:23] vila: what about https://launchpad.net/bzr/2.3 ? [17:23] Error: Could not parse data returned by Ubuntu: 2 (https://launchpad.net/bugs/2) [17:23] vila: it's only got the latest release, but that should be sufficient in our case [17:23] EPARSE, what about 2.3 ? [17:24] and what is your last remark about ? 2.1 or 2.3 ? [17:25] vila: there is https://launchpad.net/bzr/2.1 as well [17:25] Error: Could not parse data returned by Ubuntu: 2 (https://launchpad.net/bugs/2) [17:25] vila: So in the watch file we can just use the series-specific page, which will always contain the latest release in that series. For debian/watch that's sufficient [17:26] haaa, for watch file! [17:26] vila: well, that /is/ what you were asking about.. 
:) [17:26] jelmer: but these pages doesn't mention the tar.gz [17:26] jelmer: but these pages don't mention the tar.gz [17:26] vila: they do [17:26] just check https://launchpad.net/bzr/2.1 or https://launchpad.net/bzr/2.3 [17:26] Error: Could not parse data returned by Ubuntu: 2 (https://launchpad.net/bugs/2) [17:26] Error: Could not parse data returned by Ubuntu: 2 (https://launchpad.net/bugs/2) [17:26] there's a link on the right hand side [17:26] ooooooh ! [17:27] right, so for old stable series and as long as we are fast enough to keep track, yeah, far better than my dirty trick [17:33] jelmer: otherwise, is this branch a good base for 2.1.4 ? AIUI the first line in changelog should mention lucid instead of UNRELEASED but the rest should be ok ? [17:35] jelmer: or will 2.1.4 require a roundtrip via debian (squeeze provides 2.1.2) ? [17:35] vila: the version should be 2.3.1-0ubuntu1 [17:36] 2.1.3-0ubuntu1 you mean ? [17:36] vila: (your version is not intended to be uploaded to Debian but to Ubuntu, in case of debian it would be 2.3.1-1) [17:36] vila: sorry, yes [17:36] ok [17:37] 0 because 2.1.3-1 will come from debian and override it ? [17:37] yes, exactly [17:38] vila: it looks good to me otherwise [17:38] ok, thanks ! [17:40] vila: one minor nitpick - maybe this is loggerhead's formatting - there should be one extra level of indentation for the + signs under "New upstream release" [17:40] oh, not loggerhead [17:41] the '+' should below the 'N'ew ? [17:41] it should have one more space than the * [17:41] so it should be below the space in the previous line [17:41] ha no [17:41] sry, was looking at wrong file [17:42] yeah, below the sapce [17:42] space [17:44] hello #bzr [17:45] hello g0nzal0 [17:46] I'm using bzr 2.2.1 with the svn plugin (version 1.0.4) [17:46] I'm trying to branch an svn repository, but said repository has a commit of a file named \ [17:47] I forgot to add a part of a message to a commit. is there a simple way to edit a log? This is a simple versioned directory, not a remote branch, fwiw [17:47] philsf: uncommit/commit if it's the last commit you did [17:47] the svn plugin conveniently checks for this and raises an exception [17:48] is there a way around that? I can't get a full copy of that svn repo with bzr otherwise :( === psynaptic is now known as psynaptic|break [17:49] g0nzal0: Actually, I think the forbidding of backslashes in pathnames is done in the core of Bazaar [17:50] this means there's no way you can have such a file in a Bazaar branch, that you can work with using an unmodified copy of Bazaar [17:51] maxb, yes it is (tried commenting out the check on the svn plugin and got an exception from bzr) [17:51] magcius, the file is removed in the svn repo in a later commit [17:52] *removed from [17:52] I mean, maxb [17:52] I'm not sure that will help you [17:53] it doesn't, bzr branch stops when it hits the commit that adds that backslash-named file :( [17:54] g0nzal0: the best way to work around it is to "svnadmin dump" the repository, eliminate the backslash and reload it [17:55] jelmer, ah, I don't think I have permissions to do that, but I can talk to the sysadmin and try that, thanks! === fjlacoste is now known as flacoste [17:56] g0nzal0: there is a bug report in bzr about this, but it hasn't seen much activity recently [17:56] * jelmer will be back later === beuno-lunch is now known as beuno === psynaptic|break is now known as psynaptic [18:28] hi folks, someone seems to have merged changes in from a branch and merged into trunk. 
now, the revno in the trunk is 30 numbers smaller than before, what might've happened? [18:30] cr3, they merged trunk into a branch and pushed that up to trunk [18:31] beuno: that resulted in lots of commits being conflated into one, might there be a way to revert that or do something reasonable about it? [18:31] I guess someone could push --overwrite [18:31] if they have trunk [18:32] beuno: yeah, "if they have trunk" is the problem, I just pulled from trunk so mine is reverted now :( [18:32] hm [18:33] not sure how to revert that easily [18:33] beuno: I'm about to take the last snapshot I had, reproduce the commits that were done previously and then try to push that. I expect that to be a lot of work, so looking for the quick fix [18:34] beuno: would the person that merged trunk into the other branch and pushed had to use --overwrite as well? ie, would bzr have provided somekind of warning to prevent this and required explicit action on the part of the user to proceed? [18:39] cr3, no, no --overwrite [18:39] you can, however, set a flag that won't allow this to happen again [18:39] append_only or something [18:39] I forget where to set it [18:41] beuno: sounds like a good lead, thanks so much! [18:47] vila: it was not the last commit. Can I uncommit/commit and still preserve the original revision number? [18:53] beuno: I found that I can add this line to .bzr/branch/branch.conf: append_revisions_only = True. however, not sure how I can commit that so it applies to everyone branching the project [18:54] cr3, that's a good question [18:54] maybe lifeless knows [18:55] * jelmer waves [18:56] heya jelmer [18:56] cr3: "commit" it? It applies to a particular copy of a branch, not a project. [18:57] Usually you would set it on the branches on a server that people share [18:57] Bazaar might copy it to new branches 'bzr branch'-ed from that - I'm not sure, you'd have to test. [18:58] cr3: Before you try to regenerate any commits manually: You really do not need to. No data has been lost. Bazaar has just been instructed to understand it differently? [18:58] maxb: I just tried locally: 1. created file .bzr/branch/branch.conf with append_revisions_only = True; 2. branched project to other directory; 3. looked at .bzr/branch/branch.conf in other directory: branch.conf does not get branched [18:58] cr3: Is this project public? It would be easier to explain with specific examples. If not, I can explain more generically [18:59] maxb: I managed to regenerate the data already and the diff (not bzr diff, just /usr/bin/diff) of my new branch and the one on the server seem the same now [18:59] cr3: However I would still advise you *NOT* to push --overwrite that [19:00] maxb: the project is public and, if you happen to have a moment, I wouldn't mind understanding how to resolve this kind of situation [19:00] We can fix your revnos without forcing everyone else to "pull --overwrite" and potentially have to replay their own local commits [19:00] maxb: really? why not? [19:00] I don't understand the "why not?" question [19:00] maxb: good point. although there are probably not that many people branching the project, I try to do things right when possible [19:01] Ok - give me the URL for the branch concerned, and I'll have a look [19:01] maxb: I meant "why not push --overwrite", but you answered before I hit enter :) [19:01] maxb: lp:checkbox [19:01] fetching..... meanwhile, have you used "bzr qlog" previously? 
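Before the walkthrough, it is worth spelling out how the revnos got rewritten in the first place. Schematically (a reconstruction of what beuno describes, not the exact commands anyone ran), pushing a branch that has trunk merged *into it* replaces trunk's left-hand history, and the revision numbers with it, without ever needing --overwrite:

    bzr branch lp:checkbox feature
    cd feature
    # ... local commits ...
    bzr merge lp:checkbox        # old trunk becomes a merged (right-hand) parent
    bzr commit -m "Merge trunk"
    bzr push lp:checkbox         # accepted without --overwrite, because the new tip
                                 # descends from trunk's old tip -- but trunk's
                                 # mainline now follows the feature branch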
[19:02] I find it an awesome tool for understanding revision graphs [19:02] maxb: the problematic revision is 864 and no, haven't used qlog before [19:02] yikes, that revision merged an awful lot of trunk changes, didn't it :-) [19:02] maxb: totally :) [19:02] maxb: about 30 revisions worth :) [19:03] well, 30 revisions based on difference of current revision in trunk and what it should be [19:03] OK, so here's how we go about restoring the "left hand" or "first parent" ancestry that determines revnos [19:03] First you branch lp:checkbox at 863 [19:04] oh, sorry [19:04] I don't mean 863, 1 sec [19:05] hmm. We need to find the revision which was the tip of lp:checkbox which was before that erroneous merge was pushed [19:07] Ah, OK, so I think that was the current 862 "Merged from testsprint-checkbox-base-sru-changes." [19:07] Right, let me start again, and get the numbers right this time [19:08] maxb: I'll follow what you say in qlog [19:08] vila: that was my impression with rand_chars as well, but jam is right. provided that the (teeny) chance of the randomness working against us results in a spurious pass rather than a spurious failure, it's the best way to write that kind of test. [19:08] So, we branch from lp:checkbox 862, because that is the last revision that is part of the old mainline, which is still on the mainline. [19:09] maxb: why not 863? [19:10] Having looked at the history, it looks to me like the "bad" merge was an attempt to land the revision which is currently 863 on trunk [19:11] More importantly, 863 is diverged from the long string of revisions which we want to put back on the mainline [19:11] maxb: exactly, now I understand where you're getting at [19:11] OK, so having done that, for the sake of illustration, you might choose to turn on append_revisions_only = True in the local branch [19:12] then we're going to pull, not merge, r862.2.31 of lp:checkbox [19:13] (and yes, we could just have branched that directly - this was either my mistake or a fortuitous illustration of exploring the history) [19:13] :-) === psynaptic is now known as psynaptic|break [19:14] After that, we *merge* the formerly "bad" merge commit - i.e. we 'bzr merge -r 864 lp:checkbox' [19:14] maxb: I'm doing bzr branch -r862 lp:checkbox then bzr pull -r862.2.31 lp:checkbox, right? I'm waiting for the first branch to finish, this is taking a while unfortunately [19:14] aha, after that pull, I get: Now on revision 893. [19:14] which is sounding very sane! [19:15] And commit that with a message of something like "Land translation work by Mahyuddin Susanto via Michael Terry." [19:15] maxb: you didn't forget 863, right? [19:15] No, this is intentional - the one merge is going to pull in 863 & 864 [19:16] of course, my bad :) [19:16] Once you've done this commit, you are back in the rough "shape" of the tree that you wanted to have kept all along [19:17] maxb: this is amasing, really! [19:17] The remaining thing is to merge (one by one if you want to preserve their mainline revnos in full) the remaining commits. Fortunately in this case there is only one. [19:17] maxb: Thanks so much for taking the time to explain this to me, I'm keeping this irc log and framing it somewhere :) [19:17] So, "bzr merge -r 865 lp:checkbox" (or for that matter just "bzr merge lp:checkbox") [19:17] And commit [19:18] and we're in our final desired state. 
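A compact recap of the sequence maxb has just walked through. The revision numbers are the ones from this particular lp:checkbox incident, the local directory name is only for illustration, and the final push without --overwrite is the step he describes next:

    bzr branch -r 862 lp:checkbox checkbox-fixup  # last old-mainline revision still on the mainline
    cd checkbox-fixup
    bzr pull -r 862.2.31 lp:checkbox              # fast-forward along the displaced mainline ("Now on revision 893")
    bzr merge -r 864 lp:checkbox                  # re-land the "bad" merge as an ordinary merge (brings in 863 and 864)
    bzr commit -m "Land translation work by Mahyuddin Susanto via Michael Terry."
    bzr merge lp:checkbox                         # the one commit made on trunk since then (currently 865)
    bzr commit -m "Merge remaining trunk commit."
    bzr push lp:checkbox                          # no --overwrite, so contributors don't have to pull --overwrite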
Only thing remaining is to then push back to lp:checkbox - *without* --overwrite [19:18] makes perfect sense [19:18] excellent, so my contributors will not need to pull --overwrite themselves. win! [19:19] To prevent it happening again, you can enable append_revisions_only on the Launchpad copy of the branch [19:20] maxb: how can I ssh/scp/ftp there again? [19:20] I once accessed the server to rm branches but that was a very long time ago [19:21] If your local bzr is new enough, you can just run "bzr config -d lp:checkbox append_revisions_only=True" [19:22] um [19:22] wait a moment, I think that may not work [19:23] bzr config -d bzr+ssh://bazaar.launchpad.net/+branch/checkbox append_revisions_only=True [19:23] owing to a bug in current bazaar, you need to not use the lp: shortcut here === psynaptic|break is now known as psynaptic === Ursinha-lunch is now known as Ursinha [21:42] I forgot to add a part of a message to a commit. is there a simple way to edit a log? This is a simple versioned directory, not a remote branch, fwiw. it was not the last commit. Can I uncommit/commit and still preserve the original revision number? === psynaptic is now known as psynaptic|away === BasicPRO is now known as BasicOSX === psynaptic|away is now known as psynaptic === psynaptic is now known as psynaptic|away [23:22] * jelmer waves to spiv [23:26] hi all [23:27] poolie: g'day [23:27] hi jelmer [23:28] jam, still around? [23:31] * spiv waves back [23:33] spiv: Why do we never store directory service URLs but always resolved URLs? Is it since the way a directory service resolves a URL might differ per user? [23:34] jelmer: I think it's just by accident, rather than design [23:34] I'm pretty sure there's a bug or two about how we should store unresolved directory service locations. [23:35] spiv: I was looking at one of the bugs that the linaro folks were hitting, where we are calling urlutils.normalize_url() on a URL that can contain e.g. "lp:bzr" [23:35] There is a bit of an issue that directory services might resolve differently, or even be not present at all, for different users. [23:36] So I guess if we start storing them we should take care to handle that gracefully [23:36] I wouldn't want my bzr to crash with a traceback just because someone else's branch I access has a parent_location with a custom directory service URL. [23:37] right, that makes sense [23:38] spiv: so I guess then a related issue is: is "lp:bzr" something that urlutils should handle gracefully? [23:38] Hmm [23:38] We're really mixing different types of variable [23:38] On the one hand it would be a nice solution here, but I'm worried about misinterpreting what could be a proper URL with an entirely different meaning [23:38] yeah [23:39] "URL" and "location" might be good names for those types, if we had statically typed variables. [23:39] Or "URL" and "BRL" ;) [23:40] So in my ideal world, we'd never all normalize_url on something that isn't a proper URL [23:40] your ideal world has URLs ? 
:-P [23:41] But we sometimes use them interchangeably, although sometimes not (the CLI isn't consistent about what it accepts) [23:41] s/all/call [23:43] To be pedantic, strictly speaking I didn't imply my ideal world would have to have URLs ;) [23:44] jelmer: I think it's just by accident, rather than design <-- i agree [23:45] Anyway, given we mix URLs and not-quite-URLs quite a bit now, we have two options that I can see: [23:46] 1) make functions like normalize_url that expect URLs also cope with not-quite-URLs [23:47] 2) try to be stricter about not using not-quite-URLs interchangeably with URLs [23:47] Maybe there are other options. [23:47] i think 'lp:' is a proper url [23:47] or 'should be treated the same' [23:49] Or another way to look at it perhaps is: why is linaro calling normalize_url? Is it because they really want "normalize_location" and that's the closest we have? [23:49] spiv: This is the stacked-on argument to cmd_push() [23:49] Ah, right. [23:50] So, essentially, yes :) [23:50] spiv: yes, indeed [23:50] I don't feel strongly about which route we should take, but I do feel strongly that the current situation is a problem :) [23:50] :) [23:51] I think in the short term it would probably make sense to do the directory lookup in cmd_push for --stacked-on, as it's consistent with how we treat directory URLs in other places. [23:52] there is a bug on directory lookups in push :parent as well IIRC [23:52] so you'dfix more than one bug :> [23:52] sorry, I mean, while you're there consider doing more ;) [23:53] I was about to say, I'm not sure that's the same bug (although I'm sure it's related) [23:55] is :parent a directory as well? It seems odd that it breaks in that case, as "bzr push lp:foo" works [23:58] jelmer: it needs to double dereference [23:59] jelmer: when :parent returns lp:foo it breaks [23:59] lifeless: ahh
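The short-term fix spiv and jelmer settle on -- have cmd_push dereference its --stacked-on argument before treating it as a URL -- might look roughly like this. The helper name is made up and the sketch assumes the bzrlib 2.x directory_service registry; the loop covers the case lifeless raises at the end, where one directory lookup (":parent") yields another directory URL ("lp:foo"):

    from bzrlib import urlutils
    from bzrlib.directory_service import directories

    def resolve_location(location, _max_hops=5):
        # Keep dereferencing until the location stops changing, so that
        # ":parent" -> "lp:foo" -> real URL is handled in one call.
        for _ in range(_max_hops):
            resolved = directories.dereference(location)
            if resolved == location:
                break
            location = resolved
        return urlutils.normalize_url(location)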