[01:41] so, i have the small changes that i made to make bazaar explorer not freak out when signing commits [01:41] how do i go about making a patch file to submit with the bug report? [01:44] AuroraBorealis: commit the change, and push the branch with the change to launchpad. If you commit with --bug=lp:1234 it'll get automatically linked to the right bug. (If you don't, you can use the "link branch" link on the bug page to do the same thing.) [01:44] ah ok, will try that in a second! [01:45] AuroraBorealis: you probably also want to add a merge proposal for the branch you push to launchpad, but just doing the above will at least make the branch appear on that bug page [01:46] do i have to do anything on launchpad to push to it? [01:46] i have not done that before [01:48] (for a project that isn't mine) [01:51] or do i click "register a branch" [01:56] AuroraBorealis: "bzr push lp://" [01:57] AuroraBorealis: you'll need to add an SSH public key to your LP account (if you haven't already) and run "bzr launchpad-login" (if you haven't already) [01:57] yeah i have that [01:57] Er, I typoed slightly, it's lp:~ [01:57] e.g. lp:~spiv/bzr/foo [02:01] how do i link it to a bug in the command line? [02:02] ironically i can't use bazaar explorer because of the bug i'm trying to fix xD [02:03] oh --bug [02:05] Does that work? The documented option is --fixes [02:06] yeah it's --fixes [02:06] which i found after googling [02:08] ok, so i pushed it, it's here: https://code.launchpad.net/~markgrandi/bzr/gpg_devttynotfound_fix [02:08] will it do something to the bug report after it's done 'scanning' the branch? [02:09] Yep. [02:12] AuroraBorealis: look now [02:12] hooray! [02:13] now what [02:18] do i just wait for someone to do something with it? [02:49] hi all [02:50] hi. [03:21] hi jelmer [04:33] I'm using bzr builddeb. I'm backporting something. I did a build. Took 4 hours, no problem.
At the very end, I discover that although I fixed Build-Depends, I missed correcting the same entry in Depends. [04:34] ouch [04:34] Is there a way to get the .deb rebuilt based on new debian/control [04:34] was the 4h in building the source deb or building the binary? [04:34] without doing another 4 hour build? ie, it's just the very outside wrapper that is changing [04:34] poolie: binary [04:34] oh, without actually recompiling it? [04:34] probably [04:35] I mean, sure I could do surgery with tar :) but I'm trying to find a more proper way to do it. [04:35] dpkg-buildpackage with '-nc' "don't clean" somewhere should do it [04:35] The dependency was making "libgmp-dev" (oneiric, unstable) be "libgmp3-dev" (natty) [04:36] poolie: ok. I've got a feeling that with builddeb's defaults, the build-dir/ is already gone [04:36] [I know it is] [04:37] hm, maybe the things you wanted to preserve are already deleted then? [04:37] i guess you could force installation without the dependency to test the package, then do a full clean rebuild when you're happy [04:37] yeah [04:39] Be faster to make an empty package called "libgmp-dev" :/ [04:39] poolie: (and, yes, --force-depends has it on board now for testing) [04:40] i think there is some way to configure that into dpkg short of making an empty package [04:40] I'm guessing that `bzr buildpkg --dont-purge` more or less leads to `-nc` [04:49] well, it would make it at least possible to use -nc later [06:31] hello guys ! [06:51] hello vila [06:54] morning all [06:54] poolie, jam : _o/ [07:05] hi guys [07:05] did you do testing of the new pqm that showed a tmpfs didn't help? [07:06] poolie: replied to your mail, [07:06] but in a nutshell, I'm very surprised it doesn't give better results but I don't know which test to propose to ensure it's really active in the chroot [07:19] I'm also surprised about tmpfs, poolie. Didn't you try it locally and found it was quite useful?
[07:19] yes [07:33] poolie: though I'll also say, timing results are showing the new machine running the test suite in <30 min. Which is a pretty good starting point. As long as it is <1hr, I don't see it impacting our workflow much. [07:34] yeah, that is nice [07:34] i was just surprised [07:35] poolie: any idea about further tests ? [07:35] i don't see your mail yet [07:39] vila: you did use the tmpfs *in the chroot* right ? [07:39] vila: as opposed to *outside the chroot* [07:39] vila: to test, schroot ...; mount :) [07:41] lifeless: I don't have access to run such commands ... but I did indeed ask to check inside the chroot [07:41] ok. weird. [07:41] dunno what kind of check was done though... [07:54] 1f [07:54] pff [08:07] grr, I hate it when postfix stops working and I don't notice until someone tells me: 'where is your mail ?' [08:39] guys, i might stop soon [08:40] vila, lifeless, i reckon perhaps it was broken by the schroot fstab causing the external /tmp to be bound over the internal tmpfs [08:42] poolie: I had the same kind of feeling but... I trusted our admin [08:44] attn packagers and installer builders: 2.4.1 has been frozen ;) [08:49] good for you === poolie changed the topic of #bzr to: Bazaar version control | try https://answers.launchpad.net/bzr for more help | http://irclogs.ubuntu.com/ | Patch pilot: jelmer === gthorslund_ is now known as gthorslund === zyga is now known as zyga-afk [10:48] vila: hmm, is it not possible to change a remote branch's configuration? [10:58] maxb: hi [10:59] maxb: bzr-builder fails to build on lucid and maverick with an error from the python helper [10:59] do you perhaps know what's going on there? [10:59] hey Riddell [10:59] Do you have a buildlog handy?
[10:59] good morning [11:01] maxb: FATAL: python-backport-helper needs updating to know which of python-support or python-central 'bzr-builder' should be build with [11:01] https://launchpadlibrarian.net/79653003/buildlog_ubuntu-lucid-i386.bzr-builder_0.7.1%2Bbzr144~daily10~lucid1_FAILEDTOBUILD.txt.gz [11:01] oh, right [11:01] Well, the error's pretty clear :-) === zyga-afk is now known as zyga [11:01] bzr-builder wasn't in the daily ppa until recently, perhaps it's not in the hardcoded list? [11:01] I was being a bit paranoid at the time, perhaps it should just default to python-support [11:02] maxb: it has to be central for all bzr-related packages [11:02] maxb: since they install under bzrlib.plugins [11:02] although defaulting to python-support seems reasonable, given that's also the default e.g. debhelper uses [11:07] Hello, I've come in this morning to bzr complaining that it hasn't got enough values to unpack with all my branches. Pastebin of the error: http://pastebin.ubuntu.com/687544/ how can I fix this? [11:14] daubers: what are the contents of your .bzr/branch/last-revision file? [11:18] jelmer: It's empty [11:19] daubers: did you perhaps have a recent machine crash after changing that branch? [11:19] jelmer: Not as far as I'm aware [11:28] jelmer: it should be possible to change a remote config, if it's not, it's a bug [11:28] jelmer: unless you don't have write access of course [11:28] vila: I'm finding that calling set_append_revisions_only(True) doesn't work [11:29] jelmer: oh, I thought you were referring to either the new design or 'bzr config' [11:29] jelmer: that's even more surprising given that the package importer did that for a bunch of lp branches [11:30] Hmmm.... 
whatever caused it has caused it on every branch on this box [11:31] daubers: bug 623152 seems to be what you're hitting [11:31] Launchpad bug 623152 in bzr (Ubuntu) "ValueError: need more than 1 value to unpack in _last_revision_info" [Medium,Triaged] https://launchpad.net/bugs/623152 [11:32] jelmer: Yeah, looks like it. I'll restore them from the backup, probably easiest I think [11:33] daubers: It's surprising if you're hitting this without an unclean unmount though [11:34] jelmer: This is on the remote server, all pushed through sftp [11:34] Checked the RAID it's on, and that's fine [11:35] I was going to mention that newer versions of bzr can fsync after updating last-revision, but that doesn't really help if you're using sftp. [11:35] vila: ugh. package importer is *really* unhappy [11:36] vila: ah, this is where it gets interesting [11:36] 1092 failures, all the recent ones are: with key lazr.restfulclient.errors.HTTPError::main:get_versions:get_debian_versions:__getitem__:get:_request [11:36] (Pdb) print branch.get_config().get_user_option('append_revisions_only') [11:36] True [11:36] (Pdb) print branch.get_append_revisions_only() [11:36] False [11:37] jam: lp outage [11:37] jam: requeued [11:37] jelmer: urgh [11:38] jelmer: what kind of branch ? [11:38] vila: remote. it looks like we have a default get_append_revisions_only() implementation that returns False [11:38] vila: should those be marked as transient-should-always-be-retried? [11:39] (you may have already done so) [11:39] jam: in theory, yes, in practice there are potentially many root causes [11:39] jam: there is a bug about the importer making tea while lp is down [11:39] vila: well, lazr.restfulclient.errors.HTTPError sure looks transient to me. [11:40] jam: in this case yes, but it's not a hard rule [11:40] jam: not all lp errors are transient [11:43] vila: sure, though again, I was saying this specific traceback could be marked as auto-retry.
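The ValueError daubers hit above comes from bzr's two-field last-revision file being read back empty after an interrupted write. A minimal reconstruction of the failure mode (a simplified sketch of the parsing behind LP bug 623152, not bzrlib's actual `_last_revision_info` code):

```python
def parse_last_revision(content):
    """Parse text shaped like .bzr/branch/last-revision: '<revno> <revision-id>'.

    Simplified illustration only, not the real bzrlib implementation.
    """
    # An empty file yields a single empty field, so the two-way unpack
    # raises ValueError (reported as "need more than 1 value to unpack").
    revno, revid = content.rstrip("\n").split(" ", 1)
    return int(revno), revid

print(parse_last_revision("42 joe@example.com-20110912-abcdef\n"))

try:
    parse_last_revision("")  # what an empty last-revision file produces
except ValueError:
    print("empty file -> ValueError")
```

This also explains jelmer's fsync remark: the fix is making sure the two-field write actually reaches disk, which bzr can't control over sftp.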
[11:43] And the other discussion, all failures should be considered soft-transient, so try X times, and then mark it hard-failure. [11:44] but that is something else [11:44] vila: is there a bug # for the .xz thing? [11:44] I just saw some of the failing imports with it [11:44] jam: right, but the bug explains that you can't *start* auto-retrying until lp is up again or you end up re-trying too much too soon and it becomes a permanent failure [11:44] and it would be good to either link it to the bug, or requeue it if it has been fixed [11:45] vila: sure. Though if you round-robin 500 packages that gives you some time :) [11:45] Especially since LP isn't supposed to go down for more than 5min now. [11:45] then again, maybe it makes this more important [11:45] since they'll be going down 2-3 times / week no [11:45] now [11:45] jam: feel free to play around with it, I'm just explaining the status quo [11:46] vila, jelmer: anyway, .tar.xz thing? [11:46] jam: yes; there is an udd bug for the .xz issue, linked to the upstream bug IIRC [11:46] bzr-builddeb should be able to handle xz now, I haven't seen the bug for the udd issue [11:46] vila: sure, I mean linked to: http://package-import.ubuntu.com/status/bedtools.html#2011-09-11%2009:29:19.287652 [11:47] the actual package-importer tracebacks, etc. [11:47] ha, that, no, not done, patches welcome [11:47] jelmer: bedtools and qtwebkit-source are at least failing with "unknown extension ...tar.xz" [11:47] jelmer: did you work out the bz2 issues for the kde packages? 
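jam's "soft-transient" policy above (try X times, then mark it a hard failure) can be sketched as below; the function and exception names are hypothetical, not the package importer's real code:

```python
class HardFailure(Exception):
    """Raised when a transient-looking error keeps recurring."""

def run_with_retries(operation, max_attempts=3):
    # Treat every failure as soft-transient: retry up to max_attempts,
    # then promote the last error to a hard failure.
    last_error = None
    for _ in range(max_attempts):
        try:
            return operation()
        except Exception as e:  # e.g. an HTTP error from the LP API
            last_error = e
    raise HardFailure("gave up after %d attempts: %r" % (max_attempts, last_error))

# Demo: an import that fails twice (an outage), then succeeds.
calls = {"n": 0}
def flaky_import():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient LP outage")
    return "imported"

print(run_with_retries(flaky_import))  # succeeds on the third attempt
```

As vila points out, retrying immediately during an outage burns all the attempts at once and turns a transient failure into a permanent one; a real implementation would also wait between attempts or wait for the service to come back.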
[11:48] jam: we need to update to a newer bzr-builddeb on jubany, that should fix it [11:48] (the .tar.xz issue) [11:48] jam: Riddell did, it turns out OpenSUSE's bz2 generates slightly different output because of a patch they ship [11:48] ouch [11:49] jam: he's filed a bug against pristine-tar: debian bug 641019 [11:49] Debian bug 641019 in pristine-tar "pristine-tar does not work with tar files made by openSUSE" [Normal,Open] http://bugs.debian.org/641019 [11:49] jelmer: oh, right, I was about to pull the newer builddeb on jubany and was interrupted by the week-end, thanks for the reminder, I'll do that now [11:50] jelmer: oh, also, you mentioned detecting dpkg-mergechangelogs to avoid installing a broken hook ? [11:50] vila: what's the current procedure for updating the failed-imports => bug #. Is it still just ssh to jubany and manually update the file? [11:50] vila: yeah, it would be nice to look into that, but I haven't actually landed any branches which help address that. [11:51] jam: the explanations file is part of the branch [11:51] jelmer: ok, np, just checking [11:51] builddeb upgraded on jubany, brace yourselves :) [11:52] requeueing bedtools (last to fail with .xz) [11:53] ha crap, forgot --priority [11:54] I'll let it purge its queue first [12:05] 447 to go. Not bad, though maybe 1/minute or so? [12:07] jelmer: on RemoteBranch.get_append_only... I *think* it was meant that you could try to push things, because the real branch on the remote side would refuse it [12:08] rather than doing a round-trip just to check the bit. [12:08] jam: try analyze_log with a tail -F from jubany :-p [12:08] jam: ah, that makes sense [12:09] jam: so in that case this is a bug I introduced because I made _get_append_revisions_only public. [12:11] jelmer: could be. I won't claim 100% accuracy on my understanding. [12:11] vila: I try not to ssh into jubany unless I must :) [12:12] jam: wrt 2.5-better-gpm-estimate, didn't that branch already land on 2.4?
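jam's explanation of the RemoteBranch behaviour above can be sketched as a class hierarchy: the branch config carries the setting, but the getter answers with a cheap default instead of a network round-trip, relying on the server side to enforce the real value on push. The classes below are illustrative stand-ins for bzrlib's (with a plain dict for the config), not the real code:

```python
class Branch(object):
    def __init__(self):
        self._config = {}

    def get_config(self):
        return self._config

    def get_append_revisions_only(self):
        # Default: avoid a round-trip and assume not append-only; the
        # authoritative branch refuses bad pushes anyway.
        return False

class RemoteBranch(Branch):
    pass  # inherits the cheap default

branch = RemoteBranch()
branch.get_config()["append_revisions_only"] = "True"

# Reproduces the mismatch seen in the Pdb session earlier:
print(branch.get_config().get("append_revisions_only"))  # 'True'
print(branch.get_append_revisions_only())                # False
```

That trade-off is fine as long as the getter stays an internal shortcut; making it public API (as jelmer notes he did) turns the shortcut into a wrong answer.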
[12:13] * jelmer vaguely recalls reviewing something similar earlier [12:14] hmm, maybe I just looked at it earlier and then didn't have time to actually review.. [12:17] jelmer: the estimation changes haven't landed anywhere IIRC [12:17] there were 2 or 3 other tweaks to get_parent_map that I've proposed that have landed [12:17] let me double check that [12:18] vila: "tail .../progress_log" | scripts/analyze_log.py - [12:18] Just gives me "- does not exist" [12:18] I'm guessing it is an out-of-date udd branch? [12:18] Is it supposed to be somewhere else? [12:20] (just running it normally shows average speed of 41s, which is close to my guessed time.) [12:20] so it will take ~5-7 hrs to finish the backlog. [12:21] jam: refresh your memory: https://code.launchpad.net/~vila/udd/analyze_log/+merge/74056 [12:23] vila: says merged here, doesn't work on jubany [12:23] you don't need it on jubany, you pipe from jubany and execute locally [12:24] /msg vila seems a bit convoluted [12:30] * vila checks with a mirror, a bit tired, yeah, may be, but convoluted... [12:39] jelmer: any idea why installing python-lzma on natty requires installing python-2.6 ? [12:39] does lzma *only* work on 2.6? [12:39] or would it be a packaging bug? [12:41] jam: my guess is that it's just a packaging bug [12:52] ./import_package.py seems to spend an awful lot of time waiting for stuff. It is at 400+s running, and only 20s CPU time. [12:54] which one ? 
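jam's backlog estimate above is consistent with the measured rate: 447 queued imports at roughly 41 seconds each, assuming they run one at a time:

```python
# Rough check of the package-importer backlog estimate discussed above.
queued = 447
seconds_per_import = 41  # the measured average speed

hours = queued * seconds_per_import / 3600.0
print("%.1f hours" % hours)  # ~5.1, the low end of the "~5-7 hrs" guess
```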
[12:55] anon-proxy [12:55] I'm at least poking at the unicode decode errors [12:55] but so far, I haven't found a version that has anything but ascii [12:56] the one I looked at has a NFKD file [12:58] http://package-import.ubuntu.com/status/acidbase.html#2011-09-01%2007:39:12.274606 [13:32] jam: bah, I mixed bzip2 and .xz issues, no bug for .xz AFAIR, I just mentioned it to jelmer who proposed a fix that I reviewed [13:32] the bzip2 issue has a bug linked to pristine-tar upstream [13:34] vila: debian bug 499489 [13:34] Debian bug 499489 in pristine-tar "please add LZMA/XZ/Lzip support" [Wishlist,Open] http://bugs.debian.org/499489 [13:38] jelmer: thanks [13:39] jelmer: this makes me feel a bit more how pristine-tar can blow up for the package-importer... as soon as someone starts using a new compressor or a new option... boom [13:40] jelmer: I wonder if we can be more robust by getting the *tar* and compressing the tar delta instead of trying to get a delta from the compressed form [13:41] vila: fundamentally the .dsc files contain the sha1 hash of the tar.bz2 [13:41] vila: that won't help - pristine-tar has trouble reproducing the compression [13:41] which is what we are trying to reproduce [13:41] jelmer: anyway, with your xz patch, all previous failing imports have now passed [13:41] vila: it has no problems with the tar file that is compressed [13:42] jelmer: aiui, it needs to recognize the compressed form [13:42] oh, argh, doomed === med_out is now known as medberry [13:45] vila: it needs to reproduce the compressed file 100% from the pristine tar delta, which means tracking all the compression parameters [13:46] yeah, yeah, hence: doomed [13:46] apparently that is fairly tricky for .xz [13:47] adding these parameters to the .dsc is not an option ? :-} [13:47] half-kidding [13:49] vila: they are not known by the debian developer either.. [13:52] jelmer: really ? the upstream devs already provide the compressed tarballs ?
[13:52] vila: in a lot of cases, yes [13:52] jelmer: how about mentioning the *tarballs* sha sums instead ? [13:52] in some situations the tarball gets repacked, e.g. if it's zipped or if there are non-DFSG-compatible files in it [13:53] vila: if we do that, we still can't reproduce the tarball [13:53] vila: and we need to reproduce it bit for bit [13:54] jelmer: I mean: work with the uncompressed tarballs and generate the deltas for the tarballs, compress the tarballs and the deltas [13:54] jelmer: this way we still can reproduce the tarballs bit for bit no ? [13:55] vila: we have to reproduce the final files, not the tarballs [13:55] and we only have to track the tar format and not the compressors formats (which doesn't mention the parameters anyway, at least for bzip2) [13:55] vila: we have to reproduce the *compression* as well [13:56] jelmer: why ? [13:56] to build the package you extract the tarball anyway, isn't that what matters ? [13:57] I understand the practices and toolchains require the compressed form today, I'm just trying to find an escape from the 'sorry, no idea what parameters were used here, will never be able to reproduce the compressed form' trap [13:58] which is currently blocking the bzip2 produced on Suse with a non-standard bzip2 [13:58] and even there the fix may be doable [14:44] hi jelmer [14:45] hi Noldorin_ [14:46] jelmer, been testing this bzr-git issue over the past few days, but can't seem to get anywhere with it i'm afraid... [14:46] Noldorin_: what have you been testing exactly? [14:46] that issue you ran into earlier? [14:47] jelmer, yes with the unknown git database entry... you remember? [14:47] i'm trying to merge in parts of the revision that make it fail [14:48] but it gives loads of conflicts [14:48] not sure how to go about it :-S [14:48] have tried a few things over the past few days...
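The pristine-tar constraint jelmer describes in the discussion above (the compressed bytes, not just the tar contents, must be reproduced bit for bit) is easy to demonstrate: the same payload compressed with two bzip2 block sizes yields different streams, and therefore different sha1 sums, even though both decompress to identical content. A small illustration with Python's stdlib bz2 module:

```python
import bz2
import hashlib

# Stand-in for an uncompressed upstream tarball.
tar_bytes = b"fake tar payload\n" * 1000

# Same data, two different bzip2 block sizes (compresslevel 1 vs 9).
level1 = bz2.compress(tar_bytes, 1)
level9 = bz2.compress(tar_bytes, 9)

# Both round-trip to identical content...
assert bz2.decompress(level1) == bz2.decompress(level9) == tar_bytes

# ...but the streams differ from the header onwards (the 4th byte of a
# bzip2 stream encodes the block size), so a .dsc sha1 recorded for one
# stream can never match the other.
print(level1[:4], level9[:4])  # b'BZh1' vs b'BZh9'
print(hashlib.sha1(level1).hexdigest() == hashlib.sha1(level9).hexdigest())  # False
```

This is why vila's idea of working from the uncompressed tarballs doesn't escape the trap: the .dsc checksums the compressed file, and every compressor parameter (or patched compressor, as in the openSUSE case) changes those bytes.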
[14:49] Noldorin_: merging it won't help in this regard, the revision itself would still be processed in its current form [14:50] how should i go about testing this problematic r47 then? [14:50] Noldorin_: try creating a new revision on top of r46 which has some of the changes from r47 and see what the minimal set of changes is that triggers the issue [14:51] jelmer, but how do i do that specifically? [14:51] exactly what i'm trying [14:51] it just messes up the whole branch hmm [14:51] Noldorin_: bzr branch -r46 /path/to/original/branch test; then create a new commit with some of the changes from r47, commit, then dpush into git [14:52] jelmer, yep yep, but how exactly do i get "some of the changes"? [14:52] doing by hand is not practical [14:52] there are many many changes [14:54] Noldorin_: you might be able to do "bzr merge ../path/to/individual/file -r47" [14:54] Noldorin_: and do that for every file [14:55] but you'd have to make sure it doesn't record any pending merges ("bzr st" should display this) [14:55] jelmer, wouldn't always record pending merge? [14:55] i don't know what the alternative is.. [14:56] Noldorin_: you can also revert pending merges with "bzr revert --forget-merges" [14:57] ah ok [14:59] jelmer: did you say you were going to upload bzr-explorer 1.2.1 to debian? it doesn't seem to have arrived [14:59] Riddell: I did say that, and then I got distracted by something else last week. Sorry about that. [14:59] Riddell: I'll have a look at uploading qbzr and bzr-explorer now. [15:07] Using bzrlib, Let's say I create a local branch of a remote tree and make/commit a change in my local working tree. Do I use "clone_on_transport" to push that change back to the remote tree? [15:10] timrc: hi [15:10] timrc: do you perhaps mean remote branch rather than remote tree? 
[15:11] timrc: If so, you should be able to use wt.branch.push(Branch.open(remote_url)) [15:11] http://paste.ubuntu.com/687688/ [15:11] that is pretty much what I have, so far [15:11] remote branch, yeah [15:11] timrc: it looks like you indeed want to push to a remote branch (we don't support operations against remote working trees) [15:12] jelmer, okay [15:12] timrc: it seems like you indeed want local_branch.push(remote_branch) [15:12] timrc: clone_on_transport creates a clone of the local branch, so would error out saying that a branch already exists (I think) [15:13] it has an option, 'use_existing_dir' which threw me off [15:13] I was using create_clone_on_transport to create a new remote branch and then using the launchpad mergeProposal API call, but I want to eliminate the mergeProposal step [15:34] jam: around? [15:34] Riddell: babune's all red: http://babune.ladeuil.net:24842/job/selftest-chroot-oneiric/lastFailedBuild/testReport/junit/bzrlib.tests.test_transport/TestTransport/test_transport_fallback/ [15:35] Riddell: how did you manage to pass on pqm ? [15:38] Riddell: Ouch, I see, ignore_i18n is called only once and doesn't clean up at the end of the tests, protecting the others [15:39] Riddell: when running with --parallel=fork (as babune does) this works only for test_trans_dependency and breaks for the other tests [15:39] test_transport_dependency [15:48] jelmer, that did the trick, thanks [15:58] vila: uh oh [15:59] vila: so test_transport_fallback is the only one that's failing [15:59] or are there others? [15:59] Riddell: I replied to your email about this problematic test but it got delayed locally :-/ Dunno if you got it ? [15:59] I got a reply late on Friday [15:59] Riddell: a single one is failing on babune, but given the way we split with --parallel, there is no guarantee that it's the only one [16:00] Riddell: i.e. every test that potentially tries to mutter() or whatever (exceptions ?
str(e)) risks triggering the same issue [16:00] why does --parallel affect it? [16:01] Riddell: with --parallel, several *processes* are involved [16:01] Riddell: each one running a slice of the whole test suite [16:02] Riddell: the slicing is always the same for a given test suite but can vary as soon as you add or remove a test or change the parametrization, and so on [16:02] Riddell: monkey-patching like you did is against all isolation rules :) [16:02] Riddell: at the very least you should use overrideAttr [16:03] fg [16:03] tsk [16:03] Riddell: so the patching is cleaned up at the end of the test [16:03] ok, I'll try that [16:04] Riddell: moreover, as mentioned in my mail, if the test needs disk resources (and it does as soon as it requires a config file which itself is required by i18n) it should use TestCaseInTempDir or TestCaseWithTransport [16:04] Riddell: OR [16:04] well I'm trying to get it to not use i18n so that's not an issue now surely [16:05] Riddell: the failing test has no idea you monkey patched i18n.install, it's running in a different process ! [16:06] Riddell: an alternative is to force the install of a Null translation for *all* tests (overridden for tests that want to test specific aspects) [16:07] yes that might be an idea [16:08] Riddell: I thought there was some code doing that when you started working on i18n but it probably wasn't commented well enough to explain the current issue [16:08] where would that be? [16:08] i18n, let me refresh my memory (maybe that was in an mp that never landed) [16:09] Riddell: _translations back in revno 5994 [16:10] Riddell: may be incomplete [16:11] Riddell: the idea is that it's None until one is installed in the nominal case (i.e.
for users) [16:12] Riddell: but tests can install whatever they need: either a null one or a test specific one [16:12] Riddell: now, poolie may ask you to put that in library_state instead, but I still don't fully understand the library_state story [16:15] Riddell: the implementation in bzrlib/i18n.py revno 5994 may be broken, you certainly know better, but the bit that should be applied here is that you don't want a test to trigger an access to a file on disk at any cost [16:22] Riddell: revno 5875.3.25 pretty much contains what I'm trying to explain (except for making sure all tests start with a null translation which should be an overrideAttr call in TestCase.setUp() roughly) [16:33] vila: how about this? https://code.launchpad.net/~jr/bzr/i18n-fix-tests/+merge/75037 === yofel_ is now known as yofel [16:39] Riddell: approved, babune will tell you if it's still unhappy ;) [16:39] thanks [18:22] abentley: I'm around now if you're still interested in chatting [18:23] jam: cool. Mumble? [18:24] sure, just a sec === beuno is now known as beuno-lunch === beuno-lunch is now known as beuno [20:15] guys i really like bzr-colo. thank you! [20:17] it would be really amazing if bzr-colo had some sort of secondary/temporary versioning system that would get rid of unknown files when you did a bzr switch [20:17] i think i could implement that. [20:29] Oh right, I forgot I can't bzr check this repo because of symlinks. Bah. [20:56] fullermd: git stash like? :) [20:56] gah [20:56] lamalex: ^ [20:57] nigelb, yes sort of but implicitly when you change branches [21:07] hey jelmer -- are you aware that bzr-git borks on dpush when there are uncommitted changes? [21:09] Noldorin_: complains or actually tracebacks? [21:09] jelmer, just complains i think. i can pastebin if you like?
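The overrideAttr pattern vila keeps recommending in the i18n discussion above can be sketched with stdlib unittest: patch an attribute for the duration of one test and register a cleanup that restores it, so nothing leaks into tests run later (or into another slice of a --parallel run). FakeI18n is a hypothetical stand-in for bzrlib.i18n, and overrideAttr here is a minimal re-implementation, not bzrlib's:

```python
import unittest

class FakeI18n(object):
    # Stand-in for a module-level install() that loads real translations.
    install = staticmethod(lambda: "real translations")

i18n = FakeI18n()

class I18nTest(unittest.TestCase):
    def overrideAttr(self, obj, attr_name, new):
        # Minimal sketch of bzrlib's TestCase.overrideAttr: save the old
        # value and schedule its restoration when the test finishes.
        old = getattr(obj, attr_name)
        self.addCleanup(setattr, obj, attr_name, old)
        setattr(obj, attr_name, new)

    def test_null_translations(self):
        self.overrideAttr(i18n, "install", lambda: "null translations")
        self.assertEqual(i18n.install(), "null translations")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(I18nTest))
print(result.wasSuccessful())  # True
print(i18n.install())          # back to "real translations" after cleanup
```

Bare monkey-patching without the cleanup would leave the patched install() in place for every test that runs afterwards in the same process, which is exactly the isolation failure being debugged.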
[21:09] Noldorin_: please do; it might be expected behaviour though [21:10] okay [21:10] jelmer, http://pastebin.com/zs6bL0Yg [21:11] Noldorin_: I think this is just brokenness from the bug you were working on earlier [21:11] ah ok [21:11] i think so too [21:11] it disappears after bzr revert [21:11] of course [21:12] jelmer, ah wait, i'm just encountering this error again with "working tree is out of date" [21:12] :-S [21:12] weird [21:12] i just did a bzr revert and then bzr st [21:12] after that [21:18] Noldorin_: that restores from the repository though, and that might have inconsistent data because of the failed dpush [21:19] jelmer, ohh...why should the dpush corrupt the repository data though? [21:19] Noldorin_: because of the bug you hit [21:19] oh ok [21:19] which is about misrepresenting bzr data in git [21:19] jelmer, so these myriad problems are all to do with the bug eh?! [21:20] basically [21:20] jelmer, i guess i have to reclone the branch and do uncommit/revert for each new test? [21:20] Noldorin_: yep (and not in a shared repository) [21:20] ah ok [21:20] jelmer, what is a shared repository? [21:20] Noldorin_: in two words, "bzr init-repo" [21:21] okay got it [21:21] i don't use that, so no problem [21:47] jelmer, hmm how can i get a list of changes introduced in r47? [21:47] Noldorin_: bzr log -v -r47 [21:47] thanks [21:47] -v was what i needed ;-) [21:48] from what I could tell earlier the problem is related to one of the tree shape operations [21:48] rather than the contents changing [21:49] yes [21:49] well as i mention in the commit message, i restructure the directory hierarchy in this changeset [21:49] hmm [21:51] jelmer, hence i guess it's the moves/renaming of files/dirs that's messing it up?
[21:52] Noldorin_: presumably [21:52] okay [21:52] *continues checking* [21:53] Noldorin_: really useful would be some sort of recipe that reproduces this issue in a clean branch [21:53] s/clean/empty/ [21:54] that way we can add a regression test, then improve the code until it works [21:54] yep [21:54] i will do some more investigation on my branch first though === supton__ is now known as supton [23:24] Hrm, what time does poolie start his day? [23:28] nigelb: usually around now [23:28] nigelb: I'm doubting your statements about your regular sleeping times btw :) [23:28] jelmer: okay :) [23:28] haha [23:28] I was working [23:28] Just finished :) [23:28] Like, $DAYJOB work. [23:56] nigelb, hello [23:58] Morning!
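The recipe jelmer asks for above (narrow r47 down to the minimal set of changes that still triggers the dpush bug, so it can become a regression test) is essentially delta debugging. A toy greedy sketch, with a hypothetical trigger predicate standing in for "commit the subset on top of r46 and dpush":

```python
def minimize(changes, triggers_bug):
    """Greedily drop changes that aren't needed to reproduce the bug.

    A naive one-pass sketch; real delta debugging (ddmin) also tries
    dropping subsets, but the idea is the same.
    """
    needed = list(changes)
    for change in list(changes):
        trial = [c for c in needed if c != change]
        if triggers_bug(trial):
            needed = trial  # this change wasn't required to trigger it
    return needed

# Demo: pretend the bug needs the two tree-shape (rename) operations.
changes = ["rename src/ -> lib/", "edit README", "rename a.c -> b.c", "edit Makefile"]

def triggers_bug(subset):
    # Hypothetical stand-in for: commit subset, dpush, observe failure.
    return "rename src/ -> lib/" in subset and "rename a.c -> b.c" in subset

print(minimize(changes, triggers_bug))
# -> ['rename src/ -> lib/', 'rename a.c -> b.c']
```

Once the surviving changes are known, they can be replayed into an empty branch to build the clean reproduction recipe jelmer wants for a regression test.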