[00:00] the bug on the nightlies was that they didn't build them at all
[00:01] pyrex wasn't installed because the usual debs don't need it as the tarballs contain the c files
[00:01] oh!
[00:01] righto
[00:01] I fixed that, but neglected to check the paths
[00:01] so, can you check then, that the so's are going to the right place
[00:01] and if they aren't make the build fail
[00:01] just looking now
[00:01] / workaround
[00:02] I'd rather the dailies fail if the .so's won't be available to the user.
[00:04] todays uploads have python*/*-packages/bzrlib/*.so
[00:04] thats great
[00:04] I can add a check that that glob matches something at build time
[00:04] please
[00:05] look for one of the pyrex based ones
[00:06] Noldorin: http://pastebin.ca/1533156
[00:10] sup
[00:10] lifeless: what are the current plans for python3 support?
[00:10] lifeless: same result
[00:10] ronny: there aren't any
[00:11] Noldorin: can you paste the output please?
[00:11] sure
[00:11] ronny: by which I mean, that we'll accept patches making it easier to use a porting tool etc
[00:11] lifeless: http://pastebin.ca/1533161
[00:11] ronny: but we need to keep supporting python 2 for 5 more years or so
[00:11] ronny: and python 3 is basically a different language/platform
[00:12] Noldorin: _wow_
[00:12] Noldorin: can you do that again, with -Dtransport please
[00:12] sure
[00:13] indeed, it is behaving rather unusually :P
[00:15] lifeless: http://pastebin.ca/1533166
[00:20] lifeless: making any sense to you?
[00:21] Noldorin: yes, look at lines 33, 34, 35
[00:22] we read a file twice
[00:22] then get an error renaming the directory above it
[00:22] I suspect this is just too much damage to reliably work around it
[00:22] showing that trace to your sysadmins should convince them something is wrong pretty quickly :)
[00:22] I'll update the bug with more info
[00:23] heh, fair enough
[00:23] thanks :)
[00:23] and do whatever I can to help them get to a cause
[00:23] lifeless: do you recommend i just point the admins to the bug report, or specifically this trace?
[00:25] Noldorin: both I suspect
[00:26] right
[00:27] not sure what i can expect them to do however :(
[00:27] if the problem is due to IIS6
[00:27] IIS has many options
[00:27] it could be a backing server
[00:27] they can file a bug with microsoft
[00:27] etc
[00:27] I wouldn't want to assume they can't do anything ;)
[00:28] yeah, i know
[00:28] it certainly doesn't hurt to try
[00:28] it's just i'm not optimistic!
[00:28] but yeah
[00:28] if you could update the bug report with the new info, that would be great :)
[00:29] I will be doing so shortly
[00:29] great
[00:30] the latest -Dtransport trace should really be attached there, of course
[00:47] *** FYI. restarting codehosting for a Cherry Pick ***
[00:52] Hello
[00:53] hai
[00:54] So, after I cancelled a bzr merge, when I tried the same bzr merge, bzr complained about already existing upload packs
[00:56] lifeless: i have to go now, but i plan on emailing my host tomorrow.
[00:57] if the bug report will be updated by then, that would be helpful
[00:57] anyway, i'll let you know the result.
[00:58] it looks like it's related to the bug https://bugs.launchpad.net/bzr/+bug/165293
[00:58] Ubuntu bug 165293 in bzr "collisions through uploading same-named .pack files not handled correctly" [High,Confirmed]
[01:01] lifeless: is that alright...? if you're busy, i could always update it myself. but i feel you understand the issue significantly better than me
[01:04] Noldorin: It's in my queue to update
[01:04] :)
[01:04] right ho
[01:04] bye
[01:04] JoaoJoao: you hit ctrl-C while bzr was doing the merge?
[01:06] yep
[01:07] ok, I suspect you've found/provoked a bug in the final insertion stage
[01:07] what repo do you have - info -v will tell you
[01:07] it prints a repository line
[01:08] Noldorin: bye ;)
[01:08] you mean the format?
[01:09] it's 2a
[01:09] ok
[01:10] bzr dump-btree .bzr/repository/pack-names
[01:10] pastebin the output (its not personal)
[01:24] JoaoJoao: ^
[01:30] igc: could I beg three reviews off you?
[01:30] https://code.edge.launchpad.net/~lifeless/bzr/bug-398668/+merge/10288
[01:30] has two reviews listed in the last comment
[01:30] and after that, the main branch itself
[01:33] * lifeless foods
[01:35] lifeless: I'm not at the computer with the repo, I'll post that tomorrow, ok?
[01:35] ok
=== thumper is now known as thumper-afk
[02:05] bbiab
[02:58] * igc lunch
[03:22] <[1]reggie> anyone got a second to answer a question about bundles?
[03:22] best to just ask
[03:22] <[1]reggie> sorry
[03:22] <[1]reggie> our commit messages have a bundle file attached. I uncommitted some revs and then reverted them (mistake). now I want to reapply from the bundles
[03:23] <[1]reggie> but when I save a bundle file to my desktop and then do bzr merge it just says nothing to do
[03:23] <[1]reggie> this is bzr 1.1
[03:23] <[1]reggie> 1.17
[03:24] <[1]reggie> I thought the bundle file contained the "diff" of the revision and it could be applied right from the bundle
[03:29] [1]reggie: if you are getting 'nothing to do' that probably means the changes are already merged (by ancestry, if not by diff)
[03:29] I'm not sure about "uncommitted some revs and then reverted them"
[03:29] if you reverted some changes and then committed
[03:30] then I would see how you would get the behavior you are seeing
[03:30] *uncommit* should remove them from the ancestry, such that 'bzr merge' will re-apply it.
[03:30] <[1]reggie> jam, could the problem be the encoding of the bundle? it's not a readable diff. it's encoded for email
[03:30] <[1]reggie> does merge handle the decoding or do I have to do that?
[03:31] [1]reggie: possible. If we can't detect that it is a real bundle, then we would probably try to merge the containing dir
[03:31] though last I looked at the code, it would read through as much as it could to find the # Bazaar bundle ... header
[03:31] even if it was in the middle of an email
[03:31] (inline vs attachment)
[03:31] I suspect a commit-of-the-revert rather than an uncommit
[03:31] [1]reggie: so my initial thought is that you didn't uncommit as much as you thought you did
[03:31] so the bundle is already merged, as jam says above
[03:32] <[1]reggie> i basically did several bzr uncommit --local
[03:32] [1]reggie: there is also the possibility that you could pull the revision-id directly from the bundle
[03:32] <[1]reggie> and then a bzr revert
[03:32] [1]reggie: uncommit --local will be negated by you next 'bzr update'
[03:32] *your* next
[03:32] since you didn't uncommit the remote revs
[03:53] <[1]reggie> jam, ok, sorted some things out but I hit a new snag
[03:53] <[1]reggie> the uncommit command told me that I could restore to this tip by running a certain bzr pull command
[03:53] <[1]reggie> but I never pushed the commits that I uncommitted
[03:54] <[1]reggie> and when I run the pull command it says the branches have diverged and I need to run bzr merge
[03:54] <[1]reggie> but bzr missing says I'm up to date and bzr merge says nothing to do
[03:54] <[1]reggie> I think it is confused
[03:54] [1]reggie: are you doing "bzr merge . -r revid:..." ?
[03:54] not just plain "bzr merge"
[03:55] <[1]reggie> the command it gave me to run was bzr pull . -r revid:reggie.burnett@sun.com-20090817220415-3vg2w7n0a46u09jj
[03:55] <[1]reggie> but that says 'these branches have diverged'
[03:57] [1]reggie: because you've done commits since the uncommit, the histories have diverged
[03:57] you can do "bzr merge . -r revid:reggie.burnett@sun.com-20090817220415-3vg2w7n0a46u09jj"
=== thumper-afk is now known as thumper
[04:41] Instead of keeping a changelog file for changes in software releases, I want to keep a changes_from_previous file, that is, a changelog for the current release only, so that each changeset would be a revision of that file matching the related release. What do you think, is it a good idea?
[04:56] I wanted to ask if it's possible to warn someone if they are about to commit a forbidden (or highly questionable) file type, such as pdf, doc, jpeg. We want to make sure our learning courses for ubuntu have only the sources.
[04:58] yes, a precommit hook can do policy checks and abort commits that fail $whatever-your-policy is
[04:59] useful
[05:07] doctormo: fwiw, if you go down that path - don't just rely on "*.(pdf|jpg)" for example; Suggest you use file(1) against the suspect. Assuming I'm not telling you stuff you didn't already know. :-)
[05:08] spm: This is an official Ubuntu project to teach people how stuff works, the course writers will need help understanding bzr to get into development. I was thinking gtk-bzr
=== thumper is now known as thumper-afk
[06:45] if both branches and checkouts can be branches, what is the value of having a mirror branch vs a mirror checkout?
[06:45] s/can be branches/can be branched
[06:46] johnjosephbachir: branches carry the history along
[06:46] checkouts just point to it
[06:46] so do non-lightweight checkouts
[06:46] they carry all the history
[06:47] If you're never committing to it, there's no significant difference.
[06:47] if you are, the default merging behaviour differs
[06:47] Though of course, you can't really branch a checkout. When you think you are, you're branching a branch, which you find by pointing at a checkout of it.
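A rough sketch of the precommit policy hook described above for blocking pdf/doc/jpeg commits. It assumes bzrlib's 'pre_commit' branch hook (bzr 0.91 and later) and a TreeDelta whose .added list holds (path, file_id, kind) tuples, so treat the names, signature and extension list as illustrative rather than an exact API to copy; checking content with file(1), as spm suggests, would be more robust than trusting file names.

```python
# Illustrative bzrlib plugin: reject commits that add binary document files.
# Hook name and TreeDelta layout are assumptions based on bzr 0.91+ docs.
from bzrlib import branch, errors

FORBIDDEN_SUFFIXES = ('.pdf', '.doc', '.jpeg', '.jpg')   # example policy only

def reject_forbidden_files(local, master, old_revno, old_revid,
                           future_revno, future_revid,
                           tree_delta, future_tree):
    # tree_delta.added is assumed to be a list of (path, file_id, kind).
    bad = [path for path, file_id, kind in tree_delta.added
           if kind == 'file' and path.lower().endswith(FORBIDDEN_SUFFIXES)]
    if bad:
        # Raising aborts the commit before it completes.
        raise errors.BzrError(
            'commit rejected, sources only please: %s' % ', '.join(bad))

branch.Branch.hooks.install_named_hook(
    'pre_commit', reject_forbidden_files, 'forbid binary documents')
```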
[06:47] 'bzr up' can do unpleasant things
[06:47] okay
[07:03] hi all
[07:04] hmpf, massive upgrade all around, massive reboots will follow :) bbar
[07:11] hey vila-massive-reboots!
[07:24] re
[07:27] i'm strongly thinking about writing a netbeans bzr plugin
[07:31] that would be great
[07:31] i use netbeans
[07:31] and one doesn't exist for bzr, only for git
=== lionel_ is now known as lionel
[07:58] lifeless: inserting CountingDecorator has no effect (unsurprisingly but I checked anyway), do you still want it ?
[07:59] vila: it totally should for parallel=fork
[07:59] lifeless: yeah, we both wish, but it doesn't work
[08:00] do you get a progress bar?
[08:00] parallel=subprocess will add its own decorator.
[08:00] oh yes, I always have a pb, the # tests is missing though
[08:01] I mean a counter ;)
[08:01] yes, I got the counter but without indication of how many tests *will* be run, I can live with that for a while though
[08:02] it's better than not having it (man, 2 weeks without it... such a joy to get it back ;-)
[08:02] ok
[08:02] I did the better API
[08:02] it needs a review I think, in subunit. You could do that if you wanted ;)
[08:02] Already ? Great, I'll have a look
[08:03] lp:~lifeless/subunit/push-pop-progress ?
[08:06] something like that
[08:06] a week and a bit back actually ;)
[08:06] yeah. 20090808 sounds like that
[08:08] * fullermd begins to lose hope that bundlebuggy will display the page...
[08:08] lifeless: I still can't imagine when you can have a negative delta with PROGRESS_CUR, care to enlighten me ?
[08:09] fullermd: wait for abentley to be back from vacations and restart his BB host ?
[08:09] Well, I don't so much have any _need_ to see it...
[08:10] stop hoping then :)
[08:10] but he's supposed to come back today...
[08:10] That's sorta my mantra. Everything's easier when you give up hoping :)
[08:10] nono. hope is important
[08:11] I think the trick is not to give up but to accept that not all hopes can be fulfilled :)
[08:11] ... and keep smiling anyway (yeah, right...) :-D
[08:14] It takes more muscles to frown than to smile, but it doesn't take any to just sit there with a stupid expression on your face.
[08:14] vila: test filtering
[08:17] lifeless: ooooh, excellent, thanks
=== Leonidas_ is now known as Leonidas
[08:40] will a bzr up stop if a shell session is terminated
[08:40] ?
[08:45] OllieR: if it gets SIGHUP'd
[08:45] its up to your shell
[08:45] ok so yeah the update would fail?
[08:46] can you do a disown on that command then
[08:46] screen's easiest
[08:46] i tried but then it asks for a password and doesn't allow my input
[08:47] can't think how I would do it
[08:47] That's what nohup is for...
[08:49] ok but can i issue a password with nohup?
[08:50] I'd use screen
[08:50] :)
[08:51] nohup doesn't disconnect it from the terminal. It just blocks SIGHUP.
[08:53] Of course, it also may screw with std{out,err} and do other such manipulation, so something like screen can be a better choice (and for other reasons as well), but it's a simple choice.
[09:00] lifeless: thanks screen is ideal :)
=== denys_ is now known as denys
[09:09] * igc dinner
=== obstriege is now known as obst
=== denys_ is now known as denys
[09:43] can we move cbranch into bzr.dev?
=== thumper-afk is now known as thumper
[09:43] it is the only reason I still have bzrtools
[09:49] thumper: You just don't like bzrtools, are you ? I knew it...
[09:49] vila: I love it
[09:49] I use cbranch and shelve
[09:49] now shelve is in trunk
[09:49] I want cbranch there too
[09:50] because right now, bzrtools won't run with bzr.dev or the nightly ppa
[09:50] which for me is a serious PITA
[09:50] Yeah, you want to empty bzrtools, you don't like it, stop pretending :-D
[09:50] * vila is afraid the initial joke went unnoticed :-D
[09:51] vila, :P
[09:51] thumper: hmm, bzrtools still hasn't relaxed its compatibility policy and refuses to load ?
[09:52] no...
[09:52] vila: I did notice
[09:52] vila: I just ignored it :)
[09:52] thumper: abentley is coming back from vacations today right ? The patch is easy enough (jam used such a patched version to build the windows installers)
[09:53] vila: oh, I'm sure it is simple
[09:53] vila: but this isn't the first time things have gotten out of sync
[09:53] vila: and since it is pretty trivial to make this right
[09:53] vila: lets DTRT
[09:54] thumper: as for your initial question, past 2.0 the overall workflow UI will be reworked so certainly the cbranch feature will be taken into account
[09:54] I wouldn't want to bring cbranch in before 2.0
[09:55] and after 2.0, well as vila says theres a big overhaul to do
[09:55] thumper: ha ! You're preaching the chore (sp?) !! But try again with abentley :-D
[09:55] well, we're almost there now anyway
[09:55] I know, I know, I'm just frustrated
[09:55] thumper: talk to your wife about that :)
[09:55] I'm running the 1.18 branch build locally instead of my system installed package
[09:56] vila: "choir". Though, for some of us, choir WOULD be a chore...
[09:57] * vila giggles
[09:58] fullermd: by the way, using merge proposals on LP is now the preferred way for submissions,
[10:00] fullermd: regarding your DWIM revisions, 1) using mp will ensure a better tracking, 2) it will certainly get more attention once 2.0 is out 3) Can't you try it as a plugin ? Such features are really hard to judge without using them for a while....
[10:00] Well, that requires uploading a branch, rather than just sending a bundle. And that means either waiting forever, or stacking, and from all the fun stacking seems to be giving everyone for 2a upgrades...
[10:01] fullermd: bzr.dev is not in 2a (yet) and push is stacked by default on lp and works flawlessly, just give it a try, really
[10:01] I don't see how it could work as a plugin, since it requires changing functions.
[10:01] *All* merge proposals (i.e. the vast majority of landed patches) for the last releases came from stacked branches on lp
[10:01] monkey patching ?
[10:01] Well, yeah. If it were in 2a already, issues stacking causes upgrading to 2a would be moot :)
[10:02] (and for changes like those trivial cleanups in revspec.py in my other [MERGE], it seems insane to create and upload branches for 3 lines of change)
[10:03] I'm vaguely aware of what monkey patching is, but I haven't the slightest idea how to go about it, nor would I want to if it were at all avoidable.
[10:03] fullermd: pushing a stacked branch to lp is pretty quick
[10:03] you should talk to lifeless one of these days, he had some n-line patches (with n quite small) landed in no time with that
[10:03] fullermd: also, when the merge directive stuff for 2a is sorted out
[10:03] fullermd: you can use bzr send
[10:03] fullermd: and LP will create the branch for you
[10:04] does anyone else get this? I commit a single character change in one file and I need to upload 3mb for the commit to finish. doesn't make any sense...
[10:04] thumper: hey ! Right I think the md stuff *is* sorted out (in trunk if not in 1.18), what email should be used ?
[10:04] It's still conceptual overhead of having a branch around, for a change I could write on my hand in ballpoint.
[10:05] vila: well, LP needs to be updated for it to work
[10:05] fullermd: ho, come on, the point is how that branch/bundle is handled *once* you have created it
[10:05] thumper: ha, right,
[10:05] I'm also still ill at ease with the fact that the LP merge requests go off to the team, not the list.
[10:05] vila: merge@code.launchpad.net
[10:05] thumper: thanks for the reminder, sry for being lazy :)
[10:06] Maybe with internal stuff that's no difference, but with something like the DWIM revspecs, which is a user-visible change, I'd sure like the opportunity for users on the list to be able to see and comment on it before it lands or is rejected.
[10:06] fullermd: right, I keep forgetting that, because, in my case, all the mp related mails are filtered in the same mailbox as the list :-/
[10:06] I feel like the move to LP reviews really cuts off a lot of the chance for community desirability discussion. Maybe that's intentional.
[10:06] fullermd: not intentional !!!!
[10:06] Well, it is for me too ('cuz otherwise it ends up in the -bugs mailbox. Have I mentioned how much I hate LP's mail sending stuff lately?)
[10:06] eeerk, how can you imagine that :-(
[10:07] I don't, quite, but nobody seems at all concerned that essentially all the code reviews are no longer seen by anybody other than the devs and a few non-dev masochists like me who joined the team...
[10:07] I had some trouble defining my filters correctly but it's all good now and *I* am quite happy with lp mails...
[10:08] fullermd: that's a good point, I really think every dev has more or less handled the filtering and doesn't realize that ...
[10:09] fullermd: there were talks at one point about tagging the mails and letting people tweak some preferences in their subscription, I think it's worth raising the point again
[10:09] modified system/application/models/select_model.php (I added one word to this file)
[10:09] [###############- ] 9211kB @ 26kB/s | Uploading data to master branch - Stage:Copying content texts:Copied record 178/196
[10:09] It just seems crazy
[10:11] fullermd: Anyway, overall, I'm convinced there is value in your DWIM rev spec and that surely is worth discussing and getting more exposure, I was trying to give my understanding of the apparent lack of feedback so far
[10:11] OllieR: What protocol are you talking across?
[10:11] sftp://
[10:12] So it's presumably repacking something, which requires downloading N% of the repo, then re-uploading it.
[10:12] fullermd: ok so is there any way to speed this up?
[10:12] If you can use a smart protocol, that should all happen server-side, and not have to shuffle the data across the network twice.
[10:14] OllieR: keep in mind that if it's indeed a repack that is occurring, it shouldn't occur frequently (1 every 10 then 100 then 1000 commits)
[10:15] OllieR: another option is to issue 'bzr pack' on the server side, but that also requires bzr on the server
[10:16] OllieR: its bzr doing autocompaction
[10:16] OllieR: it will happen very rarely, and it keeps the repository performance good
[10:18] fullermd: how do you mean a smart protocol?
[10:18] OllieR: bzr[+something]://
[10:20] I just wanted to share how I am pleased with 'bzr co' grabbing 600M of RAM on my virtual machine :-))
[10:22] domas: do you have the C extensions? they can help a _lot_
[10:22] I hope I do, um. :) I guess I'd use PPA builds
[10:27] This I think is the problem
[10:27] checkout of branch: /home/bazaar/mycompany/example.com/screenings/trunk
[10:27] shared repository: /home/bazaar
[10:28] so every tree uses a single shared repository despite the fact that they are usually not related
[10:28] i.e. /home/bazaar/mycompany/example2.com/web/trunk is completely different code to /home/bazaar/mycompany/example.com/screenings/trunk
[10:31] http://bash.pastebin.com/m646edb67 - 7gb repo
[10:33] Could anyone advise on this setup?
[10:35] Well, it sounds like a situation where you (not necessarily you personally, but you somebody-associated) control the server. So using bzr+ssh instead of sftp probably wouldn't be all that hard.
[10:35] And it would certainly mitigate issues like this by not getting the network involved in stuff it doesn't have to be.
[10:37] i just ran a bzr pack on the server and then committed again from my local system and it was a lot faster
[10:37] I will look into bzr+ssh
[10:37] OllieR: the pack didn't make a difference :)
[10:38] fullermd: do you know if having one shared repo for all projects is a bad idea? I would assume I should have a shared repo for each project...
[10:38] OllieR: the previous commit had already done the tuning :)
[10:38] lifeless: I cancelled the commit, as it was taking forever
[10:38] then ran a pack on the server
[10:38] ah
[10:38] ok
[10:38] then committed from local
[10:39] OllieR: Well... it doesn't _gain_ you much, in cases where there's no shared history.
[10:40] fullermd: but it could potentially slow things down?
[10:40] And at some level, it will slow things down, since there's a lot of data there you can't possibly care about at any given moment, you've got more to troll through to find the stuff you do.
[10:40] Now, whether that's significant, is an empirical question...
[10:41] I'd say that putting repos at project boundaries makes more sense, since that's where most of your sharing is.
[10:42] yeah everything on this dev box seems to have all of a sudden slowed right down
[10:42] I think it's axiomatic that one-repo-for-everything is going to be slower. But how much slower? If it's 2% slower, who cares? If it's 20% slower, that may be another matter.
[10:43] It seems unlikely for it to have caused a sudden dropoff, unless you just put something new and huge into it.
[10:43] A 300 meg project in a 7 gig repo is going to be slower than a 300 meg project in its own 300 meg repo.
[10:43] But it'll still be way faster than a 7 gig project in its own 7 gig repo.
[10:45] ok thanks for the info
[10:46] generally we have log tree depth
[10:46] so it shouldn't matter much
[10:46] 200+ way fan out on a typical B+Tree index in recent repo formats
[10:47] which is to say, you need 200 times as much data to add a single additional level to the index.
=== beuno_ is now known as beuno
=== vila is now known as vila-fud
=== vila-fud is now known as vila
[13:04] Hello! How can I upgrade the working tree of a branch?
=== mrevell is now known as mrevell-lunch
[14:17] hi
[14:17] I'm using bzr push to push the history to a location that is backed up daily
[14:18] now using push with (X:\Folder\Folder) also uploads the working tree which I don't need
[14:18] what can I do to prevent doing this?
[14:19] See "bzr init-repo --no-trees".
[14:20] so I should initialize the repo on the server at first?
[14:20] Yes, as a parent directory of wherever you're pushing your branch to.
[14:21] You can use e.g. an sftp:// URL to "bzr init-repo".
[14:21] so sometimes push just pushes the history and sometimes it pushes the working tree, too, depending on the protocol?
[14:23] Depends on both the protocol you're pushing over (sftp, local file access, etc.) and how the destination has been set up prior to the push.
[14:23] hmeland: you can also just do bzr remove-tree REMOTE_BRANCH
[14:24] k
[14:24] I meant NEBAP|work ^^
[14:24] NEBAP|work: I would do what hmeland suggested if you are pushing multiple branches.
[14:25] so if I want to push to a non-existing local location without pushing the working tree, I should do:
[14:25] bzr init-repo --no-trees
[14:25] bzr push
[14:25] ?
[14:25] You can use "bzr reconfigure --use-shared --with-no-trees X:\Folder\Folder" to convert your existing stand-alone branch to a shared repository without any working tree.
[14:26] hmm
[14:26] NEBAP|work: Yes, with the appropriate target arguments.
[14:26] using push with the ftp protocol automatically pushes only the history
[14:26] does it create a shared repo automatically?
[14:26] or is it still a standalone tree without the working tree?
[14:27] No, but updating the working tree over ftp isn't done, due to it being hard to detect whether there would be any conflicts.
[14:27] k
[14:28] so using ftp the push destination is still a standalone tree?
[14:29] Yes. You could then later do "bzr update" in a shell on the target host to update the working tree.
[14:29] k
[14:29] no that was just for info :)
[14:30] just thinking which is the better solution, creating a shared repo and pushing to it
[14:30] Whereas that wouldn't do anything in a branch in a tree-less repo.
[14:30] or deleting the working tree of the standalone puhs ..
[14:30] s/puhs/push
[14:34] would be good to offer an option to push without the working tree like "bzr push --no-trees"
[14:42] morning all
[14:42] fjalvingh: are you around?
[14:42] vila: how's it going?
[14:43] morning jam !
[14:43] jam: morning, Jam
[14:43] jam: morning
[14:43] going fine, but I realized time is running and I still haven't reviewed 1.19-merge_sort :-/
[14:45] vila: I was wondering about that. as I made sure to submit it before you woke up, and yesterday you said "I'm ready to review" :)
[14:46] jam: yeah, went into balancing --parallel=fork and ran into bumps... multi-thread debug is so fun ;)
[14:58] jam: and besides, when i said "ready to review", I had actually reviewed most of your commits, but you added ~15 more :-)
[15:04] jam: overall you added new implementations in tsort, leaving the old ones without deprecating them, right ?
[15:05] hmm, not really :-/
[15:05] vila: I moved tsort.topo_sort, but not merge_sort
[15:05] I don't want to implement 'mainline_revs'
[15:05] and some other stuff
[15:05] because it is cruft
[15:05] And there is at least 1 place that needs to add a node to the graph
[15:06] before calling merge_sort
[15:06] so I need that functionality before I can completely deprecate the existing implementation.
[15:06] yeah, re-reading your cover letter, sorry for the noise
[15:07] hello
[15:07] i have a bunch of commits in my repo which I did, but different computers show different names. can i change them?
=== abentley1 is now known as abentley
[15:10] abentley: welcome back !
[15:10] vila: Thanks!
[15:13] any idea how i can change my email address retroactively in commits?
[15:13] Stavros: short answer: you can't
[15:13] aw
[15:13] long answer?
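For the treeless-mirror setup NEBAP|work was after, here is a small illustrative sketch that simply drives the commands quoted above from Python; the URLs and branch name are placeholders, not a recommended layout.

```python
# Sketch only: create a shared repository that never keeps working trees on
# the backed-up host, then push a branch into it so only history travels.
import subprocess

MIRROR_REPO = 'sftp://backup.example.com/srv/mirror'   # placeholder URL
LOCAL_BRANCH = '.'                                      # run from the branch

def make_treeless_mirror():
    # "bzr init-repo --no-trees" and "bzr push" as discussed above.
    subprocess.check_call(['bzr', 'init-repo', '--no-trees', MIRROR_REPO])
    subprocess.check_call(['bzr', 'push', MIRROR_REPO + '/trunk'],
                          cwd=LOCAL_BRANCH)

if __name__ == '__main__':
    make_treeless_mirror()
```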
[15:14] there were talks about 3 distinct solutions (none fully available yet AFAIK):
[15:14] 1) use some variation of bzr-rebase
[15:15] 2) handle some aliases in bzr so that the same user can be "credited" for all his emails
[15:15] 3) fast-export, hack hack, fast-import (this is rude :)
[15:15] write a script which rebuilds your repo one commit at a time
[15:16] saving commit messages but altering email addresses
[15:16] and also changing your boss's name to BADGER BADGER BADGER BADGER BADGER, just because you can.
[15:16] hmm, can i export the repo in a format i can easily parse and substitute the emails?
[15:16] which is what the fast-import/export method does
[15:16] quicksilver: i am the boss :(
[15:16] so i'll go with MUSHROOM
[15:16] Stavros: LOL
[15:16] what's fast-export/import, btw?
[15:17] also, is bzr-rebase useful/mature/stable?
[15:17] http://bazaar-vcs.org/BzrFastImport
[15:17] and yes, it is
[15:17] https://edge.launchpad.net/bzr-fastimport
[15:18] oh aha, interesting
[15:18] let me get that
[15:19] hmm, i need to run something other than setup.py install?
[15:21] Stavros: you might just do bzr branch lp:bzr-fastimport fastimport into your bzr plugins dir
[15:21] ah right, thanks
[15:22] how can i get the branch without the repo?
[15:22] Stavros: lp: means get repo from Launchpad?
[15:23] yes, i mean get just the "clean" copy, without the .bzr dir
[15:23] i guess i can just delete it afterwards
[15:23] Stavros: just remove the .bzr afterwards.
[15:23] thanks
[15:23] Stavros: It won't hurt though and you can easily update later on
[15:24] that is true, i should redownload it
[15:24] Stavros: or you can use 'bzr export' to get a clean tree
[15:25] Stavros: just "bzr pull" inside the branch
[15:25] Stavros: but why do you want that ?
[15:26] vila: OCD :P
[15:26] Stavros: ha, good, fine, no problem, JDI :)
[15:27] what's jdi?
[15:27] also, i figure i can do bzr co lp:bzr-fastimport and then bzr up whenever i want to update
[15:27] Stavros: "just do it"
[15:28] ah
[15:28] hmm, this fast-export file is binary
[15:28] i was hoping for base64 encoding or something
[15:29] i can't edit this without breaking it..
=== abentley1 is now known as abentley
[15:29] Stavros: It is not really binary; it is text intermixed with blobs which can be binary
[15:29] fjalvingh: yes, but editing will break the blobs
[15:29] Stavros: Only if you edit inside them
[15:30] really? text editors can handle that sort of thing?
[15:30] should i use vim?
[15:30] Stavros: some can; you could also try something like sed or so
[15:30] ah, that should work very well
[15:31] Stavros: but before trying all this: be aware that this will make a new, independent repo
[15:31] Stavros: so if you have other branches lying around merges will no longer work
[15:31] so i won't be able to push it to the parent repo any more?
[15:32] Stavros: well, you could push --overwrite, but otherwise no
[15:32] Stavros: only by overwriting.
[15:32] hmm
[15:32] that might be acceptable
[15:32] i will try
[15:32] Stavros: you basically say "this is the new ancestry, forget about the old"
[15:33] ah
[15:33] hmm
[15:33] Stavros: and one other thing: if you have renames followed by edits in the same commit (something common in Java) import will probably fail..
[15:34] hmm, that shouldn't be the case, thanks for the heads up though
[15:40] it crashes with "cannot import name serializer", that's odd
[15:40] hrm, is it normal for a pull to show all pulled changes as conflicts?
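A Python 3 sketch of the "parse and substitute" filter discussed above. It rewrites the email only in committer/author/tagger header lines and copies the counted data blocks through byte-for-byte, so the blobs are never edited; the old and new addresses are placeholders, and the exact fast-export/fast-import invocations should be checked against the bzr-fastimport plugin's own help.

```python
# Sketch: rewrite an email address in a fast-import stream without touching
# the binary blobs.  Blobs follow "data <count>" headers in the stream, so
# that many bytes are copied through untouched.
import sys

OLD = b'old@example.com'   # placeholder
NEW = b'new@example.com'   # placeholder

def filter_stream(src, dst):
    while True:
        line = src.readline()
        if not line:
            break
        if line.startswith((b'committer ', b'author ', b'tagger ')):
            line = line.replace(OLD, NEW)
        dst.write(line)
        if line.startswith(b'data '):
            # Copy the raw block (commit message or file blob) exactly.
            count = int(line[len(b'data '):])
            dst.write(src.read(count))

if __name__ == '__main__':
    filter_stream(sys.stdin.buffer, sys.stdout.buffer)
    # Roughly: bzr fast-export OLD_BRANCH > old.fi
    #          python rewrite_email.py < old.fi > new.fi
    #          bzr fast-import new.fi NEW_REPO   (check the plugin's help)
```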
[15:41] Tak: not normal, but it can happen
[15:41] if you changed things, that is
[15:42] night
[15:42] igc: Good night
[15:43] <[1]reggie> bzr tag keeps giving me bzr: ERROR: Cannot lock LockDir(http://bazaar.launchpad.net/~mysql-clr-team/connectornet/6.1/.bzr/branch/lock): Transport operation not possible: http does not support mkdir()
[15:43] well, yeah, they're valid changes from upstream, but all the conflicts are empty for tree, with stuff in merge-source
[15:43] so I'm not sure why they weren't just merged in
[15:44] <[1]reggie> I've done bzr lp-login and then done bzr push --remember lp:
[15:44] <[1]reggie> but bzr tag still is trying to use http instead of bzr+ssh
[15:44] <[1]reggie> what am I doing wrong?
[15:45] jesus, 1.17 is out? what am i doing with 1.13?
=== mrevell-lunch is now known as mrevell
=== abentley1 is now known as abentley
=== cprov is now known as cprov-lunch
[16:00] <[1]reggie> nm, got it working
=== wadesworld is now known as wade
=== wade is now known as wadesworld
[16:09] So I just upgraded to jaunty ppa bzr (from normal jaunty bzr) and my stacked bzr-svn repo is complaining with "ERROR: SvnRepository(...) is not compatible with KnitPackRepository(...) different serializers". Is that some known problem?
=== NfNitLoo` is now known as NfNitLoop
=== deryck is now known as deryck[lunch]
=== abentley1 is now known as abentley
=== beuno is now known as beuno-lunch
=== deryck[lunch] is now known as deryck
[17:43] Hello there
[17:44] I'm having a small problem with my repository
[17:44] I have a base code in the root directory
[17:45] and several independent applications that rely on the base code in sub-folders
[17:45] basically, I'd like to version the root directory and each sub-directory independently
[17:45] do you know how I could proceed?
[17:52] Etenil: You cannot, really.
[17:53] fjalvingh, really??
[17:54] what should I do then?
[17:54] move my applications out of the common codebase?
[17:56] Etenil: In time there will be the nested trees extension. But its progress is slow.
=== beuno-lunch is now known as beuno
[17:56] What I did is just not to manage the root itself.
[17:57] yeah, that's what I did so far, but I need to have at least `index.php' in the root folder
[17:57] I created separate repositories/branches for the shared projects
=== kiko is now known as kiko-fud
[17:58] Etenil: Hmm.. It might be that you can just branch /inside/ the root.
[17:58] So there are separate branches (not related) and the one is /inside/ the other
[17:58] The outer one has the directory for the inner one as .bzrignored
[17:59] Limitation would be that the inner branch always is a single directory which in turn contains the other dirs
[17:59] oh, so I could ignore each sub-folder that contains an app
[17:59] There is also something called the "config" extension/plugin but I don't use that one.
=== cprov-lunch is now known as cprov
=== Noldorin_ is now known as Noldorin
[18:21] hello! Is it possible to revert a merge that hasn't been committed?
[18:21] and if so how thx
[18:22] jenred, yes, but theres no way to distinguish it from the changes you had that weren't committed yet (if you had any)
[18:23] so I merged 2 branches and now I want to drop the branch that I just merged b/c there are too many conflicts
[18:24] is there anything like an "unmerge"
[18:24] jenred, just "bzr revert" should do it
[18:25] ok I think that worked thanks!
[18:44] jam: What does the "known" in KnownGraph refer to?
[18:46] garyvdm: we know the full ancestry
[18:47] versus Graph which tries to know as little as possible
[18:47] jam: I see - So in that case it will work nicely for qlog :-)
[18:50] yep
[19:04] garyvdm: but keep in mind that it's a step (a significant one, but a step anyway), the final target is to make it lazy in the end :)
[19:04] vila: Yes
[19:06] jam, vila: Would a KnownGraph be mutable? I think for qlog to be lazy, it's going to run merge_sort every time it loads more of the graph.
[19:07] KnownGraph is not currently mutable
[19:07] But it will have all of the graph that it wants merge_sort to look at loaded
[19:07] garyvdm: no, it's not intended to be mutable
[19:07] I plan to allow it to add new nodes on the end, probably not in the middle
[19:07] jam: which end?
[19:07] also, the time for "KG.merge_sort()" is around 40ms for a bzr.dev tree
[19:08] garyvdm: I call it FullyKnownGraph in my head myself
[19:08] garyvdm: new revs, not old ones
[19:10] jam: so if I load bits of the graph, just create a new KG every time some more is loaded?
[19:10] Or us the old merge sort?
[19:10] *ues
[19:10] *use
[19:12] merge_sort won't give you the right answers without the whole graph
[19:12] new or old
[19:13] it needs all ancestors to know if a revision was merged earlier than now
[19:14] jam: I plan to only do this if bzr-historycache is available. I will use merge_sort to just give me the info I need to lay out the graph, and get the revno from bzr-historycache.
[19:15] garyvdm: it gives wrong answers for the graph as well
[19:15] I can give specifics though it takes some drawing
[19:15] If historycache is not available, I will only load the graph in 2 stages: mainline, and the whole graph
[19:16] jam: in that order?
[19:17] Loading the graph lazily is very complex!
[19:17] so, if you have a long lived branch, that gets merged at rev 10 and rev 20
[19:17] the revs merged into 10 will show up underneath 20
[19:17] if you haven't loaded the ancestry from 10
[19:17] to know that they were first merged there
[19:21] jam: I see. And there is no way to ensure that for rev a, you have loaded all the revisions that have rev a as a parent (other than loading everything :-()
[19:21] well there are ways, but nothing we store currently
[19:22] * vila EODing by shaving 30% of selftest --parallel=fork execution time, YES :-)
[19:22] gdfo is something that might work well here
[19:22] * garyvdm goes to read up on what gdfo is
[19:22] Greatest Distance From Origin
[19:22] it is part of the KnownGraph code
[19:23] not to be confused with Get The F Offline
[19:23] basically rev.gdfo = max(parents.gdfo) + 1
[19:23] or Get The F Out
[19:23] but that is G*t*Fo
[19:24] jam: which is phonetically rather close when slurred in Dutch ;)
[19:24] it is pretty close in american english, too :)
[19:24] garyvdm: the idea is that if rev1.gdfo >= rev2.gdfo you know that rev1 cannot be an ancestor of rev2
[19:25] we use it for the KnownGraph.heads() work
[19:25] * garyvdm is digesting..
[19:25] anyway, it allows you to load some of the graph
[19:25] until you reach tips that have gdfo <= the one you have now
[19:26] since you know that the current one cannot be an ancestor of the others
[19:26] the main problem
[19:26] is that you need the whole graph to *compute* gdfo in the first place
[19:26] vila is trying to figure out how to record it properly
[19:26] as it interacts with ghosts ... poorly
[19:27] And would need to be recomputed on merge?
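A pure-Python illustration of the gdfo definition jam gives above (rev.gdfo = max(parents.gdfo) + 1) and of the pruning rule that rev1 cannot be an ancestor of rev2 when rev1.gdfo >= rev2.gdfo. The parent_map shape is made up for the example and is not bzrlib's KnownGraph API; note how a missing (ghost) parent makes the computation fail, matching the remark above.

```python
# parent_map maps revision id -> tuple of parent ids; originals have no parents.
def compute_gdfo(parent_map):
    gdfo = {}
    remaining = dict(parent_map)
    for rev, parents in parent_map.items():
        if not parents:
            gdfo[rev] = 1          # origins (and a plugin's first rev) get 1
            del remaining[rev]
    while remaining:
        progressed = False
        for rev, parents in list(remaining.items()):
            if all(p in gdfo for p in parents):
                # rev.gdfo = max(parents.gdfo) + 1
                gdfo[rev] = max(gdfo[p] for p in parents) + 1
                del remaining[rev]
                progressed = True
        if not progressed:
            raise ValueError('graph has a cycle or a ghost parent')
    return gdfo

def cannot_be_ancestor(gdfo, rev1, rev2):
    # If rev1.gdfo >= rev2.gdfo, rev1 cannot be an ancestor of rev2.
    return gdfo[rev1] >= gdfo[rev2]

# Example: A is the origin, B and C branch from A, D merges B and C.
example = {'A': (), 'B': ('A',), 'C': ('A',), 'D': ('B', 'C')}
assert compute_gdfo(example) == {'A': 1, 'B': 2, 'C': 2, 'D': 3}
assert cannot_be_ancestor(compute_gdfo(example), 'D', 'B')
```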
[19:27] no
[19:27] the new revs change
[19:27] but gdfo is stable as long as *ancestors* don't change
[19:27] I see
[19:28] So I should hold off trying to make qlog graph lazy until that is done?
[19:29] i think you could lazily load a lot of things like revision info (I think you already do)
[19:29] but I wouldn't try to lazy load the graph itself
[19:29] I'll also note
[19:29] that in current circumstances, computing a node accurately seemed to take about 50% of the graph of bzr.dev anyway except in trivial cases
[19:30] stuff like 'brisbane-core' requires a lot of history to be searched
[19:30] though potentially that is easier with gdfo... not sure
[19:30] oh, and we have several cases where we merge in a plugin
[19:30] and a plugin's first rev is gdfo == 1
[19:30] so we have to load the whole graph again
[19:31] that said...
[19:31] I made loading the whole graph significantly cheaper
[19:31] OOo went from 33s => 10s
[19:31] bzr.dev from 1.5s => 300ms
[19:31] Wow!
[19:32] This was with KnownGraph?
[19:32] Will that make it into 2.0?
[19:32] jam: Yes - revisions are only loaded if they appear on screen. (Or if you do a search, all revisions are loaded.)
[19:33] just landed in bzr.dev
[19:33] jam: :-)
[19:33] and merge sort is about 3x faster as well
[19:48] well, 8x if you don't count the time to build the KnownGraph
=== kiko-fud is now known as kiko
[20:43] moin
[20:46] jam: I'm quite worried that this will regress shared repo cases
[20:46] lifeless: it does the same work we do now, just in a tighter loop
[20:46] jam: oh cool, so it doesn't load unreferenced nodes?
[20:47] lifeless: no
[20:47] it walks the ancestry, and just loops on ancestors present on the current btree page
[20:47] argh double negatives ;)
[20:55] jam: read your reply on the bug; thats really good
[22:05] jam: around?
[22:06] yeah, for now
[22:06] I need a quick incremental review, in about 3 minutes
[22:06] for landing 2a default
[22:06] k
[22:07] its stacking [again]
=== oubiwann_ is now known as obiwann
[22:16] verterok: hi
[22:17] Is there a value I could cache to know if the contents of the working tree have changed at all?
[22:18] does the wt have a testament?
[22:19] james_w: tree.has_changes()
[22:20] but I don't have the old tree
[22:20] I was just watching my commit hook run the tests after I had just done a full test suite run
[22:20] I'll need more context here
[22:20] it would be nice if my test runner could write something about the state out
[22:20] then the hook could not bother running if the tree it was committing matched that state
[22:21] would also solve an issue I have with package maintenance
[22:21] you can make a testament of any inventory; however the working tree inventory may not match disk at all
[22:22] caching?
[22:22] reality
[22:22] or wouldn't match the inventory of the revision tree it became?
[22:22] neither
[22:22] ok
[22:22] I'll ponder some more
[22:22] fields like size
[22:22] sha1 etc
[22:23] in a working inventory aren't synced with disk, and even if they were they could become dated asynchronously
[22:24] well, it could force a disk scan couldn't it?
[22:25] it's racy, but...
[22:25] even if it did, it can skew
[22:27] jam: ugh, I'm having to fix a totally unrelated bug to make this work :(
[22:27] jam: how would you feel about me landing this with the test_repository_clone_preserves.. test made KnownFailure
[22:28] james_w: not to mention partial commits, etc
[22:28] lifeless: "landing this" ?
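One possible shape for the cached value james_w is asking about: a fingerprint of the working tree taken by an explicit disk scan, which sidesteps the stale size/sha1 fields mentioned above at the cost of the race lifeless points out. This is only an approximation; the cache file name is made up, and unlike a real testament it knows nothing about versioned vs unversioned files or partial commits.

```python
# Sketch: hash relative paths and file contents (skipping .bzr and the cache
# file itself), and skip the test run when the fingerprint is unchanged.
import hashlib
import os

CACHE_NAME = '.test-fingerprint'   # hypothetical file kept in the tree root

def tree_fingerprint(root):
    digest = hashlib.sha1()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = sorted(d for d in dirnames if d != '.bzr')
        for name in sorted(n for n in filenames if n != CACHE_NAME):
            path = os.path.join(dirpath, name)
            digest.update(os.path.relpath(path, root).encode('utf-8'))
            with open(path, 'rb') as f:
                digest.update(f.read())
    return digest.hexdigest()

def tests_needed(root):
    try:
        with open(os.path.join(root, CACHE_NAME)) as f:
            return f.read().strip() != tree_fingerprint(root)
    except IOError:
        return True                 # no record of a previous full run

def record_test_run(root):
    with open(os.path.join(root, CACHE_NAME), 'w') as f:
        f.write(tree_fingerprint(root))
```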
[22:28] default 2a
[22:29] I don't see a 'test_repository_clone_preserves", do you mean "per_workingtree...test_clone_preserves" ?
[22:29] line 944 in per_repo/test_repo
[22:30] jam: yeah, I would compare to the testament of the revision tree being committed, which should account for that sort of thing
[22:30] james_w: myself, I'd make the test suite faster to run
[22:31] :)
[22:31] james_w: I think you could generate a testament from the WT, but the internal code wouldn't do that.
[22:31] I'm just seeing a version tracking system on one hand, and repeated work from not knowing whether something has changed on the other
[22:31] easier to stage a commit, and then validate off of that revision
[22:31] james_w: for an off the wall approach
[22:31] james_w: make all your test runs do a commit
[22:31] and uncommit if they fail
[22:32] could work :-)
[22:32] doesn't really help the packaging case, but neat idea anyway
[22:32] as for faster to run, that solves it for one project, but not other cases
[22:33] lifeless: so I've at least tracked down the test. How is it failing?
[22:33] as it is actually a fairly common case we've run into because "bzr upgrade repo" doesn't upgrade the branches
[22:33] jam: firstly the whole thing is testing off stuff
[22:34] thanks for your time
[22:34] """Cloning an unstackable branch format to a somewhere with a default
[22:34] stack-on stack-on branch upgrades branch and repo to match the target
[22:34] and honour the policy.
[22:34] """
[22:34] my updated docstring
[22:34] lifeless: what would happen is if your local repo was stackable, but the branch format was too old
[22:34] then pushing that to lp would fail
[22:34] because it tried to auto-upgrade the repo to a bad format
[22:34] yes
[22:34] but
[22:34] that is still broken in trunk
[22:34] its just that the test happens to fail closed
[22:35] what we want is that when there is a policy pointing at a stackable branch, and the source repo-or-branch isn't stackable, that its upgraded to match
[22:36] further, when the source repo is incompatible (e.g. 2a) and the stacked-on is (say) 1.9, we want to not stack
[22:36] but there is a separate bug that it currently aborts
[22:41] so how is it failing at this point? versus the test passing today
[22:42] the test is changed slightly in my branch, but not enough
[22:42] the thing is that its broken today
[22:42] we actually use the default format /somewhere/ and so the target is often unstackable
[22:42] and the test doesn't notice
[22:43] when the default is changed, we end up with stackable much more often
[22:43] and the test against whether repo is stackable suddenly blows up
[22:43] lifeless: so last I checked, BzrDir.initialize_on_transport_ex() would create the repo format you requested, but always uses the default branch format.
[22:44] well, init_ex doesn't make a branch
[22:44] lifeless: it *does* use the default branch for the format and stacking checks
[22:44] which was a source of bogus "this format doesn't support stacking" messages in the past
[22:45] yes
[22:45] so, there is a broken code path
[22:46] we can either fix it, then land this default branch. or vice versa :)
[22:46] so
[22:46] specifically, there are 7 formats that don't update
[22:46] don't upgrade
[22:47] and the test was simply not noticing that they don't
[22:47] FAIL: per_repository.test_repository.TestRepository.test_clone_stacking_policy_upgrades(RepositoryFormatKnit3)
[22:47] FAIL: per_repository.test_repository.TestRepository.test_clone_stacking_policy_upgrades(RepositoryFormatKnit4)
[22:47] FAIL: per_repository.test_repository.TestRepository.test_clone_stacking_policy_upgrades(RepositoryFormatKnitPack1)
[22:47] FAIL: per_repository.test_repository.TestRepository.test_clone_stacking_policy_upgrades(RepositoryFormatKnitPack4)
[22:47] FAIL: per_repository.test_repository.TestRepository.test_clone_stacking_policy_upgrades(RepositoryFormatKnitPack3)
[22:47] FAIL: per_repository.test_repository.TestRepository.test_clone_stacking_policy_upgrades(RepositoryFormatKnit1)
[22:47] FAIL: per_repository.test_repository.TestRepository.test_clone_stacking_policy_upgrades(RepositoryFormat7)
[22:50] lifeless: so I'm not going to make you block forever on it. But I'll note things tend not to get fixed once they are marked KnownFailure
[22:51] I suppose for my balance point.... I'd probably go ahead
[22:51] I think I have a hacked up version
[22:51] its a bit heinous
[22:54] jam: re: KF - I know, but we can file bugs that are critical :)
[22:56] jam:
[22:56] http://paste.ubuntu.com/255402/
[22:56] thats the change vs the branch you reviewed last night.
[22:57] I think it captures what we want more accurately.
[22:57] lifeless: you've got some serious typos in the docstring, one not introduced by you
[22:57] "to a somewhere"
[22:58] and then "stack-on stack-on branch"
[22:58] fixed
[23:00] the "if stack_on_format in... elif..." seems like it could really use a final "else" clause causing an exception
[23:00] mm
[23:00] you need a long list of formats that just work if you do that
[23:00] I don't quite understand why you don't just grab the BzrDir format and pass that in
[23:01] I suppose you are intentionally using a diff format?
[23:01] yes
[23:01] a stackable one matching the serializer
[23:01] the unlisted formats are already stackable
[23:01] and thus its a no-op really.
[23:01] lifeless: k, probably a comment then
[23:02] done
[23:02] lifeless: good enough then
[23:02] DoIt?
[23:02] yep
[23:03] thanks!
[23:18] igc: Hi! Can I have your opinion on the branch I did against bzr-fastexport to fix #347729 ?
[23:20] igc is likely still asleep
[23:20] give him a couple of hours
[23:21] lifeless: oh ok :)
[23:55] \o/ onto ascii