[00:02] lifeless: did you revert the bug policy change? [00:11] yes [00:11] about 5am or so [00:14] it's now blocked on you, aaron's approve vote notwithstanding [00:17] i can't go for "use either one randomly" [00:18] i could be convinced to use it as Ubuntu does, meaning that random users can't "really" confirm it [00:18] but that feels more heavyweight than necessary [00:18] especially if you think about bugs flipping incomplete/confirmed [00:18] "somebody does it" or even "lots of people do it" is not very convincing [00:19] lots of people use 8-space tabs but i don't want random bzrlib files being done with 8-space tabs [00:21] mm [00:21] poolie: so, I don't think I was asking for random; only to not have to memorise different rules when the difference doesn't gain us anything. [00:21] if you can point to something we gain by being consistent, I'll jump on board and be as consistent as a consistent thing [00:21] the rule being "don't use triaged"? [00:23] a rule we haven't been honouring for some time, based on bug database evidence [00:23] sheesh [00:23] morning [00:23] if it hasn't been causing harm, I think we should relax the rule [00:23] hi poolie, lifeless, jam [00:24] hi igc [00:24] also, according to bug database evidence, we have lots of bugs, therefore we should keep having them? [00:24] be serious [00:24] hi igc [00:25] sheesh yourself, totally useless analogy - not the same thing at all. [00:25] * lifeless goes back to fixing bug 393677 [00:26] Launchpad bug 393677 in bzr "pushing a 1.9 branch stacked on a 2a branch causes problems" [High,Confirmed] https://launchpad.net/bugs/393677 [00:55] poolie: sorry we got cut off. my cell phone died and I had to pick up my son anyway. If you want to chat more, maybe we can do so after a couple hours [01:25] np [01:25] jam, ping me here if you like [01:26] Hi igc. [01:26] hi garyvdm! [01:26] I've been working on a blueprint for a text conflict resolving tool. [01:27] It's not finished yet, but there's lots done so far. [01:27] http://bazaar-vcs.org/qbzr/Blueprint/diffmerge [01:28] ooh [01:28] Just wanted to bring your attention to it. I would like to discuss it in detail post 2.0 [01:28] And with vila [01:28] Hi poolie [01:29] igc: The bits I still need to spec out are the integration with annotate and log. [01:30] automatically helping you find the context for the merge would be excellent [01:30] igc: Hey, quick fast-import Q... does it rely on essentially holding the stream in memory? [01:31] fullermd: no [01:31] little bits of [01:31] memory issues? [01:31] fullermd: no [01:31] Good night all. [01:32] So, if I've got this ~3.5 gig SVN repo, it probably won't want to suck up memory on that order to import? [01:32] fullermd: heh [01:32] no [01:32] ports? [01:32] fullermd: that's a baby :-) [01:32] No, ports is still in CVS. src is in SVN. [01:33] I still don't have the guts to tackle ports :p [01:33] \o/ [01:33] fullermd: the emacs fastimport dump is 7.5GB gzipped [01:33] fullermd: does it have modules at all? [01:34] It _is_ a module ;) [01:35] fullermd: the main thing to watch is that svn-fast-export.py falls over on huge repositories (like OOo) with "too many open files" [01:35] fullermd: that's a bug somewhere in the python-subversion bindings I think [01:36] Funny, I was thinking that the other week when I glanced at what was needed for it... [01:36] fullermd: but it often works (and there's a svn-fast-export.c that doesn't have the issue if it does) [01:36] "Subversion python bindings?
You mean the ones that jelmer gave up on using 'cuz they were too buggy?" [01:37] fullermd: there's at least 3 binding to my knowledge, so probably but not sure [01:37] bindings [01:37] python-subversion [01:37] subvertpy [01:37] and pysubversion [01:37] IIRC [01:38] So much simpler with CVS, where everybody gets to just implement their own RCS file parser... [01:47] abentley: I've been using your pipeline plugin since yesterday and I love it. Thanks a lot! :) [01:54] igc, i thought you wrote some developer docs about content filtering but i don't see them in the tree [01:54] did they get lost, or was it perhaps just a mail thread that i was thinking of? [01:55] abentley: poolie has asked that I land the iter changes branch and we tweak from there - even if that is to then put the UI back the way it is now. [01:55] abentley: just giving you a heads up === Noldorin_ is now known as Noldorin [02:00] jkakar: Cool! [02:02] abentley: I have a pipeline with 8 branches and am happily making progress jumping between them. [02:03] lifeless, poolie: I don't want to get in the way of fixing bugs. But it feels like fixing the bug has gotten married to UI changes whose value is not clear-cut, and could easily have been done separately. [02:04] jkakar: Wow, that might be the deepest bzr pipeline ever. :-) [02:05] abentley: Heh. :) [02:05] abentley: I'm refactoring a bunch of existing code that has a nice hierarchy of objects already implemented. [02:06] abentley: I'm using a pipe for each level, making the changes I want, and slowly working my way through it. Having pipes is actually helping me focus and figure out clean ways to make the changes I want to make. [02:07] jkakar: I find it helps that way too. Sometimes I'll work on something as a pipeline, even when I wind up submitting only the last pipe. [02:08] abentley: Yeah, I could see doing that too. [02:11] poolie: I'll look - one moment [02:20] lifeless: nice mail re bugs; thanks [02:21] poolie: there's no separate design doc to my knowledge. There's pieces that may help though ... [02:21] (1) http://bazaar-vcs.org/LineEndings/Roadmap [02:21] (2) bzrlib/filters/__init__.py docstring [02:21] poolie: anytime [02:21] poolie: all it takes is a good 1/2 hour chat :P [02:22] (3) bzrlib/dirstate/SHA1Provider class docstrings [02:23] (4) I'm sure there's useful tidbits in various email threads as well fwiw (not that they count as doc) [02:31] igc, thanks [02:31] i might provide a bit of an overview or fusion of them [02:43] Hmm, the -s option to selftest seems to be broken. [02:43] spiv: fixed [02:43] Or the aliases, anyway. [02:43] lifeless: in bzr.dev? [02:44] in my branch, which a local typo stopped landing this morning [02:44] it's on its way again now [02:44] Ah, good. Thanks. [02:44] just use a long alias for now [02:45] spiv: https://code.edge.launchpad.net/~lifeless/bzr/test-speed/+merge/10633 [02:45] while U wait [02:46] Yeah, I'm using the full python name for now. [02:50] spiv: so two things [02:50] 1) ICanHazReview [02:50] I waved that branch in front of poolie, but I suspect you'll have more immediate interest in it, given you're working on perf [02:51] 2) self.make_branch_and_tree - It seems that this creates disk branches always, or something. I'm confused about it. I'd like to make a playdate to beat up on it, late this week or early next week.
[02:54] spiv: ^ [02:54] lifeless: this is an interesting case for wanting some kind of ad-hoc dependency injection to tests [02:54] I haven't looked at make_branch_and_tree recently, but it usually does confuse and vex me when I do. [02:54] poolie: your branch [02:54] ie i'd like to say, "run everything with crlf conversion turned on" and see what fails [02:54] ? [02:54] this meaning content filtering [02:54] poolie: I agree. broadly 'parameterise everything in this dimension' [02:55] obviously that would require the tests be written in a way to work with that [02:55] so you could not just do it now very easily [02:55] yes, words out of my mouth. [02:57] lifeless: looking at your branch now [02:57] danke [03:13] igc, what i have so far is http://pastebin.ubuntu.com/259053/ [03:13] do you see anything wrong? [03:19] polie: I'll take a look [03:19] spiv: ping [03:19] spiv: when adding remoting of exceptions, do you test them? [03:20] or just make-it-work and depend on tests that build on that ? [03:20] lifeless: I try to test them [03:21] I don't think there's perfect coverage [03:21] I'm just looking up the relevant tests. [03:21] I'm adding IncompatibleRepositories [03:22] There's test_remote.TestErrorTranslationSuccess for the client-side [03:22] poolie: that's excellent - thanks [03:22] q [03:22] thanks [03:23] poolie: I particularly like the clarification re content-only, not tree munging [03:23] mm [03:23] it would be interesting to do that later [03:23] poolie: I'm *not* volunteering for that one fwiw :-) [03:24] :) [03:24] And there's some per-request error serialisation tests in test_smart. [03:24] poolie: I'm planning to fix the current content filtering bugs real soon now and then never touch it again :-) [03:24] ok [03:25] let's talk, later [03:25] maybe after this bug i'll understand it better [03:25] this doc is yak-shaving for bug 415508 [03:25] Launchpad bug 415508 in bzr "Content filtering breaks commit w/ merge parents" [Critical,In progress] https://launchpad.net/bugs/415508 [03:26] I don't think I have direct tests for the generic server-side serialisation of various exceptions (bzrlib.smart.requests._translate_error), although I do have tests that translation will occur (even when during streamed bodies etc). [03:26] Those cases are probably covered indirectly, though. [03:27] poolie: understood. the one I need to finish fixing is bug 385879 [03:27] Launchpad bug 385879 in bzr "EOL filter only applied to files when first checked out" [High,In progress] https://launchpad.net/bugs/385879 [03:30] * igc lunch [03:33] spiv: so [03:33] spiv: rich objects [03:33] do we flatten to strings and keep them that way [03:33] or do we try to reconstruct [in these exceptions] [03:41] Command 'pqm-submmit' not found, perhaps you meant 'pqm-submit'? [y/n]: y [03:41] :) [03:42] poolie: I'm halt()ing for work. Hopefully I'll sleep more tonight [03:50] lifeless: we try to reconstruct some things [03:50] lifeless: e.g. if we already have a Branch or Repository locally, we may as well use that. [03:50] lifeless: I don't think there's a firm policy yet. [03:51] spiv: I'm dealing with Repository here, and won't have at either of them. In fact, one won't exist cause it will be deleted :P [03:51] At the moment it's really been driven by what the individual exceptions require/expect. 
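A minimal sketch of the round trip spiv describes: the server flattens an exception into a named tuple of strings, and the client rebuilds a typed exception from it. The names below (translate_error, the wire-tuple layout, the string-only IncompatibleRepositories) are illustrative assumptions, not bzrlib's actual API:

    class IncompatibleRepositories(Exception):
        """Client-side exception carrying plain strings, per the
        'flatten to strings' option under discussion."""
        def __init__(self, source, target):
            Exception.__init__(self, '%s is not compatible with %s'
                               % (source, target))
            self.source = source
            self.target = target

    def translate_error(error_tuple):
        # Assumed wire form: ('IncompatibleRepositories', source, target)
        name, args = error_tuple[0], error_tuple[1:]
        if name == 'IncompatibleRepositories':
            return IncompatibleRepositories(*args)
        return Exception(repr(error_tuple))  # unknown errors stay generic

    def test_translate_incompatible_repositories():
        # The TestErrorTranslationSuccess style: wire tuple in, typed
        # exception out, string arguments preserved.
        err = translate_error(('IncompatibleRepositories',
                               'bzr://host/a', 'bzr://host/b'))
        assert isinstance(err, IncompatibleRepositories)
        assert err.source == 'bzr://host/a'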
[03:51] spiv: I'm inclined to coerce this one down to strings and stay there [03:52] I'm starting to think I should be more inclined to adjust the exception classes than adjust the error deserialisation to suit them, although I don't have any specific cases in mind. [03:52] That's ok with me, although I suggest making it explicit in the exception class that it has strings rather than repo instances. [03:52] Or at least, that it can have. [03:53] sure [03:53] tomorrow :) [03:53] Enjoy your afternoon :) [03:54] you'll probably get to enjoy this sort of schedule soon :P [03:54] waking @ 4 that is [03:54] Heh. [03:55] If upstairs keeps deciding that hammering things at 1am is a great idea, I won't even have to wait ;) [03:55] >< [04:44] ok switching machines === poolie changed the topic of #bzr to: Bazaar version control system | 1.18 final released soon | try https://answers.launchpad.net/bzr for more help | http://bazaar-vcs.org | http://irclogs.ubuntu.com/ [04:51] igc: mini-review wanted on http://bazaar.launchpad.net/~mbp/bzr/doc/revision/4636 [04:51] just a few more additions beyond what i showed you before [04:54] moved to https://code.edge.launchpad.net/~mbp/bzr/doc/+merge/10638 === gorozco is now known as p4tux [05:11] igc, it seems like what we need for bug 415508 is for commit to say to the wt: give me the hash of the canonical form of this file iff you know it without reading the whole file [05:11] Launchpad bug 415508 in bzr "Content filtering breaks commit w/ merge parents" [Critical,In progress] https://launchpad.net/bugs/415508 [05:20] back [05:20] poolie: I'll take a look now [05:33] igc: sadly the dirstate.py code seems to assume the stat size is the same as the size column [05:33] so we can't rely on the value cached there to be the size of the canonical form [05:48] poolie: where does it still do that? [05:48] add() [05:49] and py_update_entry [05:49] * igc looks [05:50] and the pyrex update_entry [05:59] igc: what do you think? [06:00] that code might not be active on an important path i suppose [06:01] iter_changes [06:01] poolie: starting with your doc patch ... [06:01] I'm not sure the comment about the UI is right [06:01] it would be worth a smoke test to make sure iter_changes with size changing filters returns the right size [06:01] e.g. diff ought to show ... [06:02] how the canonical form changes [06:02] ok, does it? [06:02] that's painfully for external diff tools but the right thing IMO [06:02] i'm trying to document what does happen more than what should happen [06:02] poolie: yes, I think diff does [06:03] also export dumps the canonical format ... [06:03] though it has an option to apply filters to the output [06:05] poolie: so I'll approve your doc patch with those tweaks [06:05] so, maybe i shouldn't generalize, and just say that different commands might have different defaults? [06:05] or do they all default to the canonical form? 
[06:06] also i'll have to update it about dirstate, assuming that's correct that it's not always storing the canonical size [06:06] poolie: all canonical to my knowledge [06:07] wrt dirstate, I *thought* we had py_update_entry covered and the pyrex implementation [06:08] add() I can't recall changing, though the docstring is explicit about the fingerprint applying to the conical form === abentley1 is now known as abentley [06:10] right, but they are using st_size [06:13] i haven't actually proved it's wrong but they do seem to be accessing something they should not [06:13] test_intertree tests both implementations for iter_changes [06:20] lifeless/igc: teddybear re testing this: [06:20] this being the actual originally reported bug, that multiple commits here store multiple copies even when the canonical form hasn't changed [06:20] i think we are lacking smoke tests here [06:20] or integration tests for all of content filtering [06:22] but probably it also needs a more localized test for the specific change to record_entry_contents [06:22] and we could do that in isolation by just feeding it pre-canned content summaries, along with a fake tree [06:23] and then we'd observe what file graph it eventually writes out [06:24] so [06:24] the bug was in tree - it exposed data inconsistent with its model [06:25] but we're changing record_entry to stop using that exposed data, because exposing the right data is too big a change [or something] [06:25] so, if the new API doesn't have flaky data I don't think we need any new record_changes tests [06:26] actually it's too expensive [06:26] the altered existing ones are _very_ comprehensive. [06:26] poolie: re expensive - we have a size field, we could write the canonical size there when we determine the canonical hash. [06:26] so [06:26] we could [06:27] and on stat cache misses we'd read the file and figure out the right canonical hash - thats not so bad [06:27] we'd have to either detect existing dirstates that have it wrong, or bump the version, or tolerate it sometimes being wrong til people's dirstate gets rewritten [06:27] the last of those might be ok [06:27] thats all true. I'm not meaning to judge the solution. [06:27] just teddybearing where we need to test. [06:27] i'd kind of like to do it, but not as part of this change [06:28] we are changing record_entry; but we're changing it in a way consistent with its tests. [06:28] bbiab [06:28] maybe, depending what ian says, i'll file a bug for that [06:28] cheerio [06:28] we're also changing tree, with the new api. We should test *there* on both filtered and unfiltered [06:28] lifeless: anyhow, it's expensive and not very useful [06:28] mm, i have a test for it [06:29] not very useful because knowing the size doesn't help commit very much [06:29] do we need glue to test that record_entry does the right thing when given data that we've seperately tested is canonical? 
[06:29] if it's the same it still needs to check the hash, and if it's different it needs to read the whole thing [06:30] i guess i'm feeling the need for an overall integration test [06:30] or smoke test [06:30] there don't seem to be any/many for commit with content filtering [06:30] mmm, I think we don't :- if anything we'd want a test that the interface used by record_entry is actually the tree one, but in fact record_entry takes the result generated by the commit code [06:30] maybe i'll put this up and see what happens [06:31] so a test to close the loop would be test that a) commit calls tree.NEWAPI and then calls record_entry(...NEWAPIRESULT) [06:32] neither of which need to hit disk or exercise the whole stack. And with that test changes to commit to not use NEWAPI will show up, and we know (because the repository tests do this) that record_changes is compatible with the NEWAPI signature [06:32] YMMV, Just my thoughts, etc. [06:36] mm [06:36] i agree with testing bits in isolation [06:36] i guess i just feel like it can miss the big picture [06:36] anyhow https://code.edge.launchpad.net/~mbp/bzr/415508-content-filtering/+merge/10640 [06:39] back [07:05] hi all [07:07] hello vila [07:07] hmm, someone broke the tests on leopard.... [07:08] igc, feel free to say "nothing" but what do you think about smoke tests for content filtering? [07:08] ha ha ! no paramiko on that slave, some test didn't check its dependencies :) [07:08] poolie: I feel we need them [07:09] it's the kind of thing that would be nice to write in a (whisper it) doctest-like mode [07:09] narrative mode [07:09] poolie: but I agree with lifeless about making sure we're testing the low level interaction for this particular bug [07:09] mm [07:09] it's not either or [07:09] poolie: hehe, no problem wrote both kind of tests :) [07:09] it would be slow and clumsy to test everything that way [07:10] poolie: I'm a fan of high level tests because it's end-to-end that ultimately matters [07:10] poolie: but good quality comes from checking each layer is doing it's bit correctly as well [07:11] igc: it's not either or, but high level tests tends to leave *bug* holes in the coverage [07:11] vila: right [07:11] s/bug/big/ funny typo :) [07:11] vila: and low-level ones alone leave integration holes so either-or is not the answer! [07:11] Hello there, can someone please explain how I can set a tag for a particular revision. I did "bzr help tag" but did not quite get ot. [07:11] *it [07:12] to tag the current revision ... [07:12] igc: I find the integration holes easier to deal with... :) [07:12] bzr tag xxx [07:12] to tags an explicit revision ... [07:12] bzr tag -rN yyy [07:12] al-maisan: ^^^ [07:13] igc: thanks, will try that. [07:13] well i guess it was a silly question [07:13] i was actually wanting to ask "why don't we have them" and have the answer not be "cos ian got sick and the rest of us dropped the ball" [07:13] :-} [07:13] vila, so what's up next for you? [07:14] poolie: today: fix the leopard regression and try to package a new bzr-gtk :-/ [07:14] The later is required as the last release version don't work with 1.18.... [07:15] and I'll try to create the PPAs that time [07:15] so, I think we have many more high level tests that we need at the moment. Thats not a reason not to write ones we need. But I feel an urge to be parsimonious about adding them. 
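The "close the loop" test lifeless sketches, proving commit feeds the tree's answer straight into record_entry without touching disk, could be shaped like this. NEWAPI is written as content_summary purely as a placeholder, and the fake tree returns the pre-canned summaries suggested earlier:

    class FakeTree(object):
        # Pre-canned content summary: (kind, sha1, size, executable)
        def content_summary(self, file_id):
            return ('file', 'abc123', 42, False)

    class RecordingBuilder(object):
        def __init__(self):
            self.calls = []
        def record_entry(self, file_id, summary):
            self.calls.append((file_id, summary))

    def commit_one_entry(tree, builder, file_id):
        # The glue under test: commit must pass the tree's result through
        builder.record_entry(file_id, tree.content_summary(file_id))

    def test_commit_uses_new_api_result():
        builder = RecordingBuilder()
        commit_one_entry(FakeTree(), builder, 'file-id-1')
        assert builder.calls == [
            ('file-id-1', ('file', 'abc123', 42, False))]

Neither side hits disk or exercises the whole stack, which is exactly the point lifeless makes.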
[07:15] lifeless: agreed, and many of them should be turned into lower level ones too [07:16] i think it's unbalanced [07:16] like here we have a major feature and there's not really any integration tests [07:16] i think in some older tests there are probably too many high level tests [07:17] and the ones we do have are probably too slow to run and too hard to debug [07:29] hi igc [07:30] hi bialix: no explorer work so far today - hopefully tonight after dinner [07:30] ok [07:30] you answer my question :-) [07:31] vila: bonjour? [07:31] bialix: still replying to emails in my queue and tying up other stuff :-( [07:31] igc: don't worry [07:32] igc: I'm just want to know about installer [07:32] hello bialix ! [07:32] igc: btw, quick answer on your q in bzr-windows about msi: I think the answer is NO [07:32] vila: hello! [07:33] well my plan is to package 0.7 tonight and hope someone picks up the ball and gets it into karmic [07:33] vila: I'd like to beg you again about merging my patch (now patch for paramiko default) [07:34] bialix: was on my list :-( [07:34] vila: https://code.launchpad.net/~bialix/bzr/win32-paramiko-default/+merge/10563 [07:34] vila: I'm sorry? [07:34] igc: I can't help with karmic. Gary could, but it's hard to catch him [07:35] from irclogs he was here this night [07:37] hello bialix [07:38] hello poolie [07:38] how's 2.0beta going? [07:48] FWIW, I'll pretty much always write high-level tests, 'cuz it's very easy to say what I want like that, compared to faking my way through the API... [07:50] fullermd: And you do right ! [07:51] I mean, it's a very good way to start, others can then rewrite your tests into lower levels ones [07:53] I often start by writing a *shell* script to reproduce a bug (based on a skeleton I derived from one of your bug reports, just because it was using 'bzr' as a parameter to help test against various versions (among other tricks) [07:53] once I Isolate the problem enough I can write more precise tests. [07:54] There has been discussions (I hope it wasn't in a code review :-/) about defining some kind of easy to use language to allow people to write tests more easily, so the idea in the air :) [07:54] is in the air even (does it make sense in english ?) [07:54] Actually, IIRC, I started doing that after I got all riled up about some bug that turned out to be fallout from one of my aliases... [07:55] So having a var I could stick --no-aliases in was a "Don't be a dumbass in public" mechanism ;) [07:55] I remember this idea was first developed by James Blackwell (?) [08:02] bialix: sent to pqm [08:02] vila: thanks! and 2.0 branch too? [08:02] bialix: against trunk though [08:03] poolie said in ML 2.0 branch is separate now [08:03] indeed [08:03] you need explicit approval to land there I think [08:04] or the RM will cherry-pick it ? [08:04] * fullermd dreads 2.0 betas... [08:04] Once they start coming out I have to make packaging decisions :( [08:04] poolie: waht about merging paramiko-default patch to 2.0? [08:04] poolie: what about merging paramiko-default patch to 2.0? [08:05] fullermd: yeah, yeah, people keep being afraid, but the truth is, we generally fail in unexpected places anyway but rarely in parts people were afraid about :-D [08:07] fullermd: and if you could send me a short mail pointing to the better instructions to install the most important xBSD distro to test against, I could add it to the test farm one of these days... 
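The repro-script skeleton vila describes above, with 'bzr' as a parameter so one script can be run against several versions, plus fullermd's --no-aliases guard, is roughly the following (a sketch; the setup commands are placeholders to adapt to the bug at hand):

    #!/bin/sh
    # BZR is a parameter so the same script can be pointed at different
    # bzr versions; --no-aliases keeps local aliases from skewing results.
    set -e
    BZR=${BZR:-bzr}
    cd "$(mktemp -d)"
    $BZR --no-aliases init repro
    cd repro
    echo content > file
    $BZR --no-aliases add file
    $BZR --no-aliases commit -m "setup"
    # ... commands reproducing the suspect behaviour go here ...

Run it as BZR=/path/to/other/bzr sh repro.sh to compare versions.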
[08:07] fullermd: It doesn't have to be detailed as long as it's: download the iso here, boot, click, click, done :) [08:08] bialix: nominate it for 2.0 in launchpad [08:08] or target it [08:08] nominate patch? [08:08] or nominate bug? [08:08] bialix: and ensures that it can be merged cleanly into the 2.0 branch (that will help) [08:08] bialix: bug [08:09] vila: it will be merged cleanly yes [08:09] vila: *blink* Non sequitor? Or did I forget a conversation? [08:09] bialix: so that it will either: 1) be merged in 2.0 2) be re-targeted [08:09] vila: unless you guys doing extra NEWS items and they are always conflicts [08:09] forget about NEWS conflicts, there is no way the RM can avoid handling them :) [08:10] vila, lifeless: I have no control over targetting bugs, so... [08:11] fullermd: Clearly non sequitur, but I wanted to ask for a long time, now, was as good as any other time :) [08:11] bug https://bugs.launchpad.net/bzr/+bug/414743 nominated [08:11] bialix: are you kidding ? [08:11] Launchpad bug 414743 in bzr "paramiko should be default client for Windows" [High,Fix committed] [08:11] vila: about what? [08:12] about nominating bugs, just select the milestone, *that's* nominating :) [08:12] wow, you're not part of bzr devs team on launchpad... :-/ [08:12] vila: I'm not part of bzr team @ lp. so I can't [08:13] yes [08:13] bialix: nice picture though :D [08:13] thanks [08:13] also is 2 years old [08:13] oh, and I was wrong, nominating is a precise meaning, you seem to have done it anyway [08:14] * bialix now is older and not so nice :-P [08:14] Ooh, that's a good excuse. I'm gonna use that. [08:17] fullermd: always happy to help (tm) [08:17] I meant bialix's :) [08:18] sorry? [08:18] I'll try throwing something a sketch together for you. I only really know Free, though; it's been 10 years or so since I touched Net/Open. [08:18] "older and not so nice" [08:18] fullermd: whatever you know the best [08:19] Free is FreeBSD? [08:19] Some of the test failures, like the '..' one, are so weird I can't believe it actually works anywhere, so that's an easily portable fix. [08:19] The one with the disappearing revs, I have NFC though. I don't even have a guess as to the culprit, so I dunno how portable the failure would even be. [08:19] * fullermd nods at bialix. [08:20] fullermd: you have *fixes* for test failures on FreeBSD ???? [08:20] No, but the '..' would be pretty trivial. [08:20] fullermd: sorry, it seems I don't follow your last jokes [08:20] * bialix trying to steal .desktop file from bzr-gtk [08:20] bialix: Don't worry, it wasn't a very good one. [08:21] fullermd: ok, because I like your jokes [08:21] anybody knows something] about .desktop files? [08:21] anybody knows something about .desktop files? [08:22] Heck, I use ctwm; I don't even know anything about desktops :] [08:23] * bialix even don't know what is ctwm and fear to ask [08:24] The only proper way to run a GUI, naturally. [08:24] bialix: pre-historic window manager, a bit younger than twm [08:25] Pre-historic?! Bah. Kids these days... [08:25] lol [08:25] I'll have you know that its latest commit was in June of this year. [08:26] fullermd: sorry, I'm a bit grumpy because I played with gentto and went as far as twm running... :-D [08:26] yeah gentto, that's it [08:26] I hear gentto comes with a good hhtp server. [08:27] LOL [08:28] bialix: the .desktop spec is fairly clear; what are you having trouble with? 
[08:28] AfC: Icon [08:28] I've hacked something based on olive-gtk.desktop and pushed to trunk [08:28] bialix: Should be just [08:28] anybody able to test? [08:29] Icon=/fully/qualified/path/to/file.png [08:30] AfC: hmm, because bzr-explorer it's a plugin then I dont think I know what will be /fully/qualified/path [08:31] I'd show you a working example, but judging by the discourteousness rudeness above, I think we'll skip that. [08:32] * bialix wonder... [08:32] AfC: I don't quite understand what do you mean [08:33] bialix: that was a Gentoo user expressing that he doesn't quite appreciate other people rubbishing the hard work of people who contribute to community distros. [08:34] AfC: I'm Windows user/developer [08:34] AfC: and I'm native Russian [08:35] so if you think I say something wrong -- I'll live with that [08:35] bialix: it wasn't you. It was the others. [08:35] AfC: ohhh, so sorry, that was a joke about me having lost the habit of installing linux the hard way, [08:36] * bialix feels out of sync [08:36] bialix: but in this case, the working example I was going to show you was in a Gentoo package [to my knowledge it's not packaged in Debian or Ubuntu or Fedora] [08:37] I think I'd better skip all this stuff [08:37] bialix: but as the others here would just make fun of me if I attempted to show it, I think we'll just move on. [08:37] Contrast my requirement to fullermd for an install: 'download iso, boot, click , click' and the way Gentoo is installed... different needs, different means, not a critic of the Gentoo community... [08:41] bialix: I do remember that http://standards.freedesktop.org/desktop-entry-spec/latest/apa.html was not much help, but http://standards.freedesktop.org/desktop-entry-spec/latest/ar01s02.html does give the rules [08:42] bialix: I believe the FHS path for .desktop files is /usr/share/applications [08:42] bialix: I have lots of .desktop files there. [08:43] bialix: if you're running Microsoft Windows (only) then that won't be of much help to you [08:43] AfC: so IIUC it should be done when packaging [08:43] bialix: yeah, that gets tricky [08:43] AfC: not only [08:43] AfC: but bzr development I do only there [08:43] bialix: it's one of those things that the program knows the data for, but that needs to get deployed by make install && packaging [08:43] AfC: Ok, thanks [08:44] bialix: so what I can show you is a Perl script (a ./configure) that writes out a .desktop file at configure time. [08:44] I understand now [08:44] bialix: see the very end of http://research.operationaldynamics.com/bzr/slashtime/mainline/configure [08:44] I think it will be toooo much for me [08:45] bialix: `make install` moves that to $(DESTDIR)$(PREFIX)/share/applications [08:45] s/that/the .desktop file that outputs/ [08:46] I see [08:46] bialix: sorry I don't have a cleaner example; most projects do m4 macros or similar template substitution. I did it with Perl [08:46] np [08:46] I'm already understand I'm unable to propely do this stuff [08:47] * bialix avoids do the things he can't test [08:47] bialix: and, for what it's worth, that IS an in-production application, so it is a real example [08:47] bialix: even if it's a very tiny program [08:47] tiny? 
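For reference, pulling AfC's pointers together: a minimal entry of the kind under discussion would be installed to /usr/share/applications and look something like the following. All values here are illustrative, not bzr-explorer's actual file:

    [Desktop Entry]
    Type=Application
    Name=Bazaar Explorer
    Comment=Desktop application for the Bazaar version control system
    Exec=bzr explorer %f
    Icon=/usr/share/icons/hicolor/48x48/apps/bzr-explorer.png
    Terminal=false
    Categories=Development;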
[08:48] bialix: the package is just a small program that tells you the time[zone] in various places: http://blogs.operationaldynamics.com/andrew/software/slashtime/slashtime-the-graphical-version.html [08:48] I understand [08:49] "tiny" :) [08:49] :-) [08:49] but "complete" in the sense that it installs properly, has translations, has a .desktop file, etc [08:51] The cafe I am sitting in is closing. See ya [08:51] * bialix waves [09:30] * igc dinner [09:43] hello again bialix [09:46] hello again poolie [09:47] poolie: I've tried to ask about including paramiko-default patch to 2.0 [09:47] hey [09:47] vila said you should judge [09:47] i just sent another post in the metronome thread [09:48] well, as you know, i think it's a good idea in general [09:49] if we're going to do it we should do it before 2.0beta [09:50] so we going to? [09:51] what's the worst that could happen? [09:51] :) [09:52] people continu asking why push to lp does not work on windows? [09:53] *continue [09:54] i guess if there are people who have complicated putty setups eg to set up proxies they might stop working [09:55] as I said BZR_SSH=plink force it [09:55] and this mentioned in NEWS in Compatibility break section [09:56] I'm aware about 3 cases when plink does not work with lp [09:56] mm [09:57] it seems it depends on actual plink/putty version [09:57] also, it would kind of avoid unnecessary variation [09:58] like if you have bzr working ok, and then for some other reason you install putty, it'll start doing something different [09:58] it? [09:58] it meaning this change [09:58] yes [09:58] meh, where are the 1.18 and 2.0 branches on bazaar-vcs.org ? Or does the pqm direclty use the lp branches now ? [09:58] so being a bit more selfcontained is worthwhile [09:58] yes, direct to lp i think [09:59] ok, let's do it! [10:00] poolie: so I send bialix patch to pqm against lp:bzr/2.0 ? [10:00] yes please [10:01] thanks gentlemen [10:01] just for future reference, i don't think this kind of fix would be appropriate to merge into the branch after the stable release [10:01] thanks for the patch bialix [10:01] it was easy [10:01] wow, oh yes, lp:bzr/2.0 resolves to ~bzr-qm/bzr/2.0 .... cute :) [10:02] ok so i should probably go... [10:02] poolie: need some help ? [10:02] poolie: GO ! [10:03] ok, cheerio [10:04] bialix: your started your branch from bzr.dev after 2.0 got created... [10:04] (and melody from crazy frog fly away...) [10:04] vila: ashes on my head! [10:04] https://edge.launchpad.net/bzr/+milestone/2.0 :) [10:08] vila: if you have rebase plugin installed, then cherrypicking will be easy [10:08] bialix: no [10:09] bialix: I mean, I don't have it installed [10:09] you want me to cherrypick it? [10:09] yes please [10:09] k [10:09] lp:bzr/2.0 yes? [10:10] bialix: yup [10:15] vila: your no applicable to both cases [10:16] ? cherrry-pick is not easy ? [10:17] no, it's easy if you know how to run rebase :-P [10:17] jelmer should be proud of his weird design of -r argument for rebase command! [10:18] vila: rebased, and no conflicts encountered! [10:19] pushed somewhere ? [10:19] vila: quick check: latest revision in 2.0 branch is luks patch? 
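For anyone following along without the rebase plugin: the same cherry-pick onto the release branch can be done with merge -c. Branch names and the revision number here are illustrative:

    bzr branch lp:bzr/2.0 bzr-2.0-work
    cd bzr-2.0-work
    # Pick just the paramiko-default change out of the trunk-based branch:
    bzr merge -c 4640 ../win32-paramiko-default
    bzr commit -m "Use paramiko as the default SSH client on Windows."
    bzr push lp:~user/bzr/win32-paramiko-default-2.0

bzr records no merge parent for a cherry-pick, so the change lands as an ordinary commit on the 2.0 line.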
[10:19] vila: pushing now [10:20] revno 4634 luks patch for branch --switch [10:21] vila: bzr+ssh://bazaar.launchpad.net/~bialix/bzr/win32-paramiko-default-2.0/ [10:21] pushed [10:23] bialix: sent to pqm [10:23] thx [10:24] I should certainly try to merge 2.0 to trunk sooner rather than later to catch the conflicts asap [10:25] bialix: remind me to do that if I forget :D [10:25] tomorrow? [10:25] or after 2.0 is out? [10:26] bialix: once it's landed in the 2.0 branch [10:26] hmmmmm [10:26] tomorrow then ;-) [10:26] ok [10:56] bialix: still there ? [11:00] yip [11:00] vila: what's up? [11:01] ha, tortoisebzr now includes a wtcache.py file that uses the 'with' feature [11:01] is 2.6 a strong requirement on windows now ? [11:02] it makes the dev/dev installer build fails [11:02] wtcache? I think it's recent work from Naoki INADA [11:02] yup [11:02] vila: no, it's wrong [11:02] he's asking about testing his new work [11:03] wait a sec [11:03] this has landed on lp:tortoisebzr .... that's more than testing :) [11:03] vila: https://lists.launchpad.net/tortoisebzr-developers/msg00041.html [11:04] vila: lp:tbzr is open for everyone and Naoki the one who trying to maintain it [11:04] you should not be so grumpy [11:04] write him a mail, ok? [11:04] (preferrable using ML, but it's up to you) [11:04] not grumpy ! (Damn, am I grumpy today ? Afc first now bialix, soon fullermd ? ) :-) :-/ [11:05] bialix: just a heads-up, that's what the test farm is for :-D [11:05] vila: err! no! no! you're wonderfuil today [11:05] bad word chosen by me [11:05] sory [11:05] sorry [11:05] if you can pass the info to INADA, that would be nice ! [11:06] bialix: is he here ? [11:06] ok, so I'll answer him to tbzr ML. is ti what you want? [11:06] he? [11:06] yes please, he asked for feedback, that's feedback :) [11:06] INADA, is he here on IRC ? [11:07] his nick at lp: songofcandy [11:08] or something like that [11:08] I'm not sure he's chatting here [11:08] Japan timezone IIUC [11:08] vila: do you have precise traceback or error message? [11:08] just a sec [11:09] http://babune.ladeuil.net:26862/builders/installer-dev-plugin-dev/builds/35/steps/compile/logs/stdio [11:09] look at the end [11:10] (cough) [11:10] is it URL stable? [11:11] bialix: pretty stable yes [11:11] vila: error said it's not error but warning [11:11] bialix: and it's read-only now [11:11] SyntaxError: invalid syntax (wtcache.py, line 202) [11:12] first the warning then an error, at least, that's how I read it [11:13] bialix: the fix should a mere s/with/try+finally/ but without a way to test... [11:14] vila: http://pastebin.com/m75032c50 ok? [11:14] vila: Hmmwhat? I have to be grumpy now? But I just got coffee... [11:14] bialix: perfect [11:15] fullermd: you don't have too, you have to say *I* am :) [11:15] fullermd: you don't have too, you just have to say *I* am :) [11:15] Oh. Well, that's why I had to get coffee, see. 'cuz you're so grumpy. And stuff. [11:15] lol [11:16] hehe, and people think that because I work from home I can't have a good laugh.... [11:16] lol [11:16] I don't work from home. But I have a _really_ short commute from my home to my office. [11:16] Heck, that's WHY you work from home. If you stare at a computer screen for hours, then bust out laughing in an office, people eventually put you in a straitjacket. [11:17] vila: thanks for head up, mail sent, waiting for songofacandy response [11:17] I mean, so I hear. Not that I would ever be in a situation where I'd have such an experience. No siree Bob. That would be crazy. 
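The wtcache.py breakage above is exactly the s/with/try+finally/ shape bialix describes: 'with' needs Python 2.6 (or 2.5 with a __future__ import), so on the older interpreter used for the installer build it first warns and then raises SyntaxError. Schematically (illustrative code, not the actual pastebin):

    import threading

    class WTCacheSketch(object):
        """Illustrative stand-in for the tortoisebzr cache object."""
        def __init__(self):
            self.lock = threading.Lock()
            self.cache = {}

        # Before (needs 'with', i.e. Python 2.6+):
        #     with self.lock:
        #         self.cache[path] = status

        def update(self, path, status):
            # After: the try/finally spelling runs on 2.4/2.5 as well.
            self.lock.acquire()
            try:
                self.cache[path] = status
            finally:
                self.lock.release()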
[11:18] fullermd: indeed [11:19] fullermd: I do this all the time [11:19] in the office [11:19] fullermd: what's wrong with wearing a straitjacket in the office? [11:19] AfC: It depends on who puts you in it :p [11:20] fullermd: and what colour it is. White is so passé [11:20] Yah, Tie-dye is much more festive. [11:29] Szilvester -- around? [11:35] mrevell: i think he's phanatic when he's here [11:35] hey mwhudson, expanding your nick soon? Thanks, yeah, looks like he's not around. [11:35] mrevell: not sure about that, people give intellectronica enough grief for a long nick :) [11:36] * mwhudson zzz [11:36] :) === JaredWigmore is now known as JaredW [12:02] vila: ping [12:03] vila: check mail, Naoki said he's fixed this issue, how hard is to test with your army of slaves? [12:05] it's the cost of the whips which really gets you down. [12:10] * bialix seems to lost his sense of humor today. too little coffee in the blood maybe [12:11] bialix: test launched, I'll keep you informed [12:12] cool [12:25] Hello, is there a hook to send mails at post-commit? like in subversion? I saw that exists bzr-email plugin but I want tu put in my server and not in every client [12:30] danigm: https://launchpad.net/bzr-hookless-email [12:31] bialix: ok, thanks === mrevell is now known as mrevellunch === abentley1 is now known as abentley === abentley1 is now known as abentley [13:17] bialix: test ok ! congrats to Naoki :) [13:17] bialix: I was a bit long as I had to manually update the branch... I don't understand why, but that doesn't really matter for now [13:18] test farm all green again :) [13:24] vila: how about publishing that installer as "nightly" build for testing? [13:25] bialix: I have no idea of its quality *today*... but that's indeed the mid-term plan [13:25] well, Naoki asked for testing his changes [13:25] but it's almost impossible to do without any installer [13:25] even broken one [13:25] AIUI jam uses the same script to build the official installers (using dev/release instead of dev/dev) === mrevellunch is now known as mrevell [13:35] abentley: BB down ? === abentley1 is now known as abentley [13:58] abentley: BB down ? [13:58] abentley: not sure you got my first message [14:08] vila: restarted. [14:08] abentley: thanks [14:11] vila: I use release/release to build the official installers "make installer-all PYTHON=..." [14:12] jam: yeah, right thanks for the heads-up :) [14:12] jam: good morning :) [14:13] jam: already get your first coffee ? :D [14:14] vila: yep === Noldorin_ is now known as Noldorin === loxs_wrk is now known as loxs [14:47] vila/jam: so what about "nightly" builds for windows installer/tbz [14:48] tbzr? [14:49] bialix, jam: I see two parts: 1) deciding whether the dev/dev installer is correctly built (jam ?), 2) Finding the right process to upload the installers to ... lp ? [14:50] jam: for some context: the builds have been green for some days except this morning due to an update in tbzr, I raised the problem, bialix caught it, Naoki fixed it [14:51] jam: a concrete example of the usefulness of the build farm :) [14:51] jam: but now, bialix want more ! :-D [14:51] ... 
nad garyvdm would like to test it but asked for installer [14:51] *and [14:51] *rats [14:51] * bialix don't want more [14:52] I've just threw idea [14:52] so i think vila points out that we have daily builds [14:52] * vila rats forgot the BIG RED FLASHING JOKE :) [14:52] we just need to figure out where to put them [14:52] lp is not the good place [14:53] lp seems like it would be an ok place [14:53] given that the nightly-ppa is there [14:53] I love the agreement :) [14:53] bialix: why not a good place ? [14:54] It isn't perfect, though [14:54] as from what I can tell, downloads are release specific [14:54] I suppose we could create a "windows-nightly" release target? [14:54] maybe to distinguish nightly it should be named something appropriate? [14:55] I also don't know how we would prune out old downloads like they do with old ppa items [14:55] err, aren't the nightly names already contain the trunk revno ? [14:55] vila: because it should be separate from official stables? [14:55] * pygi almost convinced libburnia folks (e.g. myself and some others) to move to bzr :p [14:56] bialix: ho, that's what the PPA are for, but may be the nightly PPA doesn't have the right kind of download space [14:56] pygi: If you can't convince yourself.... :-D [14:56] PPA != download installers area? [14:56] pygi afraids of qbzr and garyvdm? [14:57] bialix, nah, one of the devs started with VCS back in the time we started with the project [14:57] and he doesn't understand all the benefits of bzr just yet (he uses it in a bad way) [14:57] (we use svn for most components atm, only use bzr for one) [14:57] VCS or VSS? [14:58] pygi: I'm bad in jokes today [14:58] I know [14:58] :P [14:58] :-P [14:58] vila, isn't that always the hardest? :) [14:58] hmm, right, ~bzr-nightly-ppa is a team, not a project, it can have a PPA but not an upload area, AIUI [14:58] pygi: depends.... ;) [14:59] vila, on cheese? [15:01] pygi: hardest cheese ? [15:01] vila, no..I asked what does it depends on xD [15:03] :) [15:18] vila: I have a question for you. Did you see any of the conversation robert and I had about the 'sort_gc_optimal' stuff? [15:19] on IRC ? [15:20] yeah [15:20] last night [15:20] I can rehash the high points if you want [15:21] I prefer :-) I was rather sleepy when I read that [15:21] (yes, I was lurking :) [15:21] so... topo_sort is non deterministic [15:22] as it depends on sort order from dicts [15:22] right now when we pack a repo [15:22] we do "sort_gc_optimal" [15:22] which is basically [15:22] reversed(topo_sort()) [15:23] so it is also non-deterministic [15:23] We also have a concept that fetching between 2a repos should be "groupcompress" ordered. 
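To pin down the non-determinism jam just described: a standalone sketch of the reversed(topo_sort()) shape with a lexicographic tie-break, so the result depends only on the graph and not on dict iteration order. Illustrative code assuming a DAG, not bzrlib's topo_sort:

    import heapq

    def stable_topo_sort(graph):
        """graph: {key: (parent_key, ...)}. Parents come out before
        children; ties always break on sorted key, so two repos holding
        the same graph produce the same order."""
        pending = {}    # key -> count of not-yet-emitted present parents
        children = {}
        for key, parents in graph.items():
            present = [p for p in parents if p in graph]  # skip ghosts
            pending[key] = len(present)
            for p in present:
                children.setdefault(p, []).append(key)
        ready = sorted(k for k, n in pending.items() if n == 0)
        result = []
        while ready:
            key = heapq.heappop(ready)
            result.append(key)
            for child in children.get(key, ()):
                pending[child] -= 1
                if pending[child] == 0:
                    heapq.heappush(ready, child)
        return result

    def gc_sort(graph):
        # 'gc optimal' per the discussion: newest first.
        return list(reversed(stable_topo_sort(graph)))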
[15:23] With the idea that streaming one to another can opportunistically repack a little bit [15:23] The fundamental problem is that we 're-use' blocks when streaming from another repo [15:23] so if you have a poorly packed repo [15:23] and you stream it into another [15:24] you end up with a poorly packed repo, that has only 1 pack file [15:24] and thus autopack will never 'clean up' the new repo [15:24] so we would *like* to have a way to opportunistically repack on the fly [15:24] without having that degrade into repacking everything all the time [15:24] one possibility [15:24] if sort_gc_optimal was more stable [15:24] was that you could compare the stream with the 'optimal' ordering [15:25] and if it was close enough, re-use the incoming stream [15:25] if it was different enough [15:25] repack [15:25] ha yes, my first reaction was: 1) make topo_sort deterministic, 2) switch to merge_sort order, [15:25] because it is currently non-deterministic [15:25] that doesn't really work [15:25] so the problem is 'what is deterministic' :) [15:26] hehe, hence 2) [15:26] given a graph, you want to get identical results between repos [15:26] which is a starting point [15:26] the issue is that given 2 *similar but not identical* graphs, you want them to somehow be similar [15:26] hello jam, vila [15:26] hi poolie [15:26] we overlap again [15:26] poolie: yep. You're up way too late again :) [15:27] how identical ? fully identical or identical enough ? :-} [15:27] well the point is that if they are fully identical then you should get exact results [15:27] that is mostly a given [15:28] but often you will have 2 repos with slightly different revs [15:28] and you want the sorting between them to be "stable" in a similar sense [15:28] right, so merge_sort works where topo_sort fails [15:28] sort of [15:28] but merge_sort isn't all that stable either [15:29] it isn't stable with ghosts only, and I'd argue that filing ghosts is a f. good occasion to recompress [15:31] vila: let me give an example, just a sec [15:32] http://paste.ubuntu.com/259307/ [15:32] so if you consider that one side has graph F and the other G [15:33] jam, the only fixcommitted bug from spiv i see is https://bugs.edge.launchpad.net/bzr/+bug/402657 [15:33] Launchpad bug 402657 in bzr "2a fetch over dumb transport reads one group at a time" [Critical,Fix committed] [15:33] which does have a branch attached but maybe not an mp [15:33] actually it does have https://code.edge.launchpad.net/~spiv/bzr/gc-batching/+merge/10643 [15:33] it does [15:33] I just didn't see it in my inbox for some reason [15:33] I'll certainly review it [15:33] argh... hopefully this is a MAD issue [15:33] his MP has: ``bzr push`` locally on windows will no longer give a locking error with [15:34] maybe he merged Robert's branch before it landed... [15:35] mm [15:35] probably out of sync [15:35] you could diff it again yourself [15:35] yeah [15:35] jam: yeah, that would be a MAD issue I think :( [15:35] just painful how many times we run into code review problems [15:35] though I suppose we can now switch to having "bzr.dev" hosted on LP [15:36] since we've had success doing that for the release branches [15:36] jam: hmm, merge sort should be: A B|C D|E F or A B|C D|E G (same level nodes can be forced to respect an arbitrary but stable order (think revid alphanumerical)) [15:36] Launchpad could pay attention to the base_revision_id of merge directives and if it's not present in the target rescan before generating a diff. 
[15:36] vila: I'm not sure that you understand merge-sort [15:36] Well, mirror then scan. [15:36] it is "topological, with revisions appearing just before the rev that merged them" [15:37] (Although in this case I created the merge proposal via the web, so that wouldn't have helped...) [15:37] spiv: supposedly it does something different if you send an MD attachment, rather than say "propose this branch" [15:37] jam: I can see that, and I think it's a weakness as you just demonstrate :) [15:37] Anyway, it *is* late, and I'm off to sleep :) [15:37] at least I've seen comments from Aaron that it re-uses the exact patch you sent [15:37] spiv: sleep well [15:37] vila: so while investigating, I also realized that the 'most stable' sort order may not be the best "compression" order. [15:38] but I figured it would be good to talk about possibilities [15:38] for example [15:38] if you want stability you need leveling, [15:38] sorted((gdfo, revision_id) for gdfo, revision_id in revisions) [15:38] however [15:38] for *best compression* order [15:38] it is probably *not* good to group by gdfo [15:38] as they *specifically* don't have ancestry in common [15:39] anybody knows if sf.net hosted trac supports bzr plugin? [15:39] or they couldn't have the same gdfo [15:39] correct [15:39] certainly we would need to actually do real-world values to be sure [15:39] I was about to say that merge_sort flatten the rev space but we want different ways to flatten [15:39] but my gut says that gdfo would be more stable [15:39] hsn: i don't know, but given they support bzr it seems like a reasonable thing they should do [15:39] but would be worse compression [15:39] versus depth-first sort of sorting [15:40] my guts agree with yours :) [15:40] so right now I have an updated "gc_sort" which is topo_sort, but at each of the steps it sorts the parent_keys/child_keys [15:41] i.e. A BD CE F ? [15:41] though perhaps it is enough to just go via parent_keys [15:41] since that is a stable ordering [15:41] F D B E C A [15:42] is the current order, I believe [15:42] right, exactly what I said :-) (Believe it or not :) [15:42] and the other would be [15:42] G D B E C A [15:42] and what if G has D as second parent ? [15:43] G files a paternity suit, obviously. It plays out on national TV. Big scandal. [15:44] * vila puts his tie-dye suit back [15:44] vila: D *is* G's second parent [15:44] the point is that it was sorted(parent_keys) [15:44] D < E [15:44] ok, that was unclear, [15:44] so there is an open question of whether that is important [15:45] if you don't have it [15:45] then you get [15:45] the question was: is it independent of the parent order [15:45] F E C D B A [15:45] G D B E C A [15:45] well, depending on whether you walk right->left parent or left->right parent [15:46] but walking left->right the 'right' gets put on the stack last, and thus gets popped off first [15:46] probably we would want to walk left-parents first [15:46] hmm.... [15:46] I wonder if walking left parents first is more important than lexicographical order [15:46] walking when ? [15:47] oh, missed that line [15:47] no, I don't get it, what walk, what stack ? [15:48] when you create the group from the full texts ? You walk the graph ? 
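The sorted((gdfo, revision_id)) levelling jam floats above, written out. gdfo (greatest distance from origin) never changes for existing revisions when a new tip is added, which is the stability he expects, but it groups same-level rather than related texts, hence the suspected worse compression. Again an illustrative sketch assuming a DAG:

    def compute_gdfo(graph):
        """graph: {key: (parent_key, ...)}; ghosts (absent parents) are
        ignored. Returns {key: generation}, origin revisions at 1."""
        gdfo = {}
        remaining = dict(graph)
        while remaining:
            for key in list(remaining):
                parents = [p for p in remaining[key] if p in graph]
                if all(p in gdfo for p in parents):
                    gdfo[key] = 1 + max([gdfo[p] for p in parents] or [0])
                    del remaining[key]
        return gdfo

    def gdfo_sort(graph):
        # Stable under graph growth; groups same-level (unrelated) revs.
        gdfo = compute_gdfo(graph)
        return sorted(graph, key=lambda key: (gdfo[key], key))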
[15:50] vila: we sort the graph to figure out how we want to group the texts [15:50] topo_sort uses a stack of nodes left to walk [15:50] go to a node [15:50] if parents are ready [15:50] push them on to the stack [15:50] pop from the stack to go to the next node [15:51] there are other ways [15:51] like everything in 'pending' is ready to be evaluated [15:51] evaluate all pending nodes before evaluating their parents [15:51] a bit more of a 'breadth-first' method [15:51] oh, you're coding gc_sort ? [15:51] similarly for the (gdfo, revision_id) sorted version [15:52] vila: well, updating it. Given that we already have one based on topo_sort :) [15:52] I don't really want to make topo_sort perfectly stable, because it gets to be *fast* [15:52] ok, I missed that part of the context :) [15:52] I think having a dedicated gc_sort is the way to go [15:54] I agree :) [15:54] I'm just trying to define the problem space well enough to decide on all the options for sorting an arbitrary graph :) [15:55] for example, I *could* do merge_sort(sorted(tips)) [15:55] and define merge-sort when there is more than one tip [15:55] given that the algorithm is only defined for 1 tip [15:55] of course, you can always have "special_tip => [tip1, tip2, tip3]" and filter special_tip back out at the end [15:56] hi, what is the diff between bzr checkout and bzr clone or branch [15:56] ? [15:56] exactly and force or not an order on the tips [15:58] cristi_an: in a bzr checkout when you do "bzr commit" it commits the change to the source branch, and then locally [15:58] versus only committing locally, and you have to 'bzr push' the data back to the source [15:58] jam: and otherwise locally [15:58] interesting and so different from cvs and svn [15:58] note that one of the main benefits is when you want to collaborate on a shared branch [15:58] since it helps to keep the 2 parties in sync [15:59] if one commits, the other has to update before they can commit [15:59] often, it is desirable to work on your own local branches only to get things done before they are integrated [15:59] yep but the merge is done ok [15:59] alter ? [15:59] later [15:59] however managing the *integration* branch is very well done via checkouts [16:00] jam: last sentence i did not fully understand [16:00] :( [16:01] You have a project [16:01] it will almost always have an integration branch of some sort [16:01] (trunk, head, development focus, etc.) [16:01] trunk...yes [16:01] as in svn [16:02] you do work on branches [16:02] that is more familiar to me [16:02] but eventually that work is integrated into another branch [16:02] (trunk) [16:02] correct at some point [16:02] if multiple people are committing to an integration branch [16:02] a good way to collaborate [16:02] is via 'bzr checkout' [16:02] rather than having a local copy [16:02] that is ...a pain in the ass [16:02] which may be out of date [16:03] a bzr checkout requires you to be up-to-date before you can commit [16:03] that is more like svn cvs style [16:03] with checkout [16:03] cristi_an: so you can still develop your feature in a branch.
But then "cd trunk-checkout; bzr merge ../my-feature/branch; bzr commit -m "merging my feature"" [16:04] checkouts in bzr are a way to bridge the gap between: [16:04] I'm developing independently (distrbuted) [16:04] I'm collaborating (centralized) [16:04] Anything you can do with checkouts you can do with regular branches and a bit of care [16:04] checkouts just help with the 'bit of care' part [16:05] i see [16:05] vila: right [16:05] the main problem with "force an order on the tips" [16:05] is that any arbitrary new tip can upset the ordering [16:05] if you sorted(tips) [16:05] then if you have G F, and someone commits an "A" [16:05] suddenly your whole graph gets rebalanced based on A [16:06] since that is now the new sort tip [16:06] of course, sorting somehow is better than not sorting at all [16:06] vila: and this is now a bit circular. But it is the bits I'm trying to understand as I come up with the graph => order of texts in groups algorithm [16:06] right, it should only occur at the "tip[s]" of the graph so, only for the last inserted revisions [16:07] vila: well, most algorithms are dependent on the starting point [16:07] merge_sort and topo_sort both obviously being effectde [16:07] effected [16:07] affected... actually [16:07] (i think, I'm really bad about that one) [16:07] one question then, we're talking about file graphs here right ? How many revisions can you put into a group ? [16:07] vila: as many as will fit [16:07] yes it is a bit loose [16:08] the current rule is: [16:08] fit up to 2MB worth of mixed-file-id data, or 4MB of single file-id data [16:08] but inside a single pack right, until the next autopack that is [16:08] or 2x the size of the largest content, if it is >4MB [16:08] ? [16:08] vila: well, we don't split groups between packs (yet, if ever) [16:08] jam i reviewd out discussion [16:09] jam: thx much more clear now [16:09] My point is that inside a pack you don't insert so your graph is immutable [16:09] cristi_an: happy to help [16:09] vila: sure, but when you transmit those to another repo... [16:09] or during pack [16:09] etc [16:09] My "big picture" work is opportunistically repacking groups [16:09] during transmission [16:10] and the "little picture" is that we need a stable way of listing content, so that the opportunistic repacker doesn't flip-flop all the time [16:10] but even in that context you know the full graph you care about no ? [16:10] thinking that things should be group A B C then C B A then BCA then... [16:10] how can i update a branch ? [16:10] since a checkout is bzr update [16:10] cristi_an: a regular branch is usually "bzr pull" a checkout is usually "bzr update" [16:11] same as for checkout ? [16:11] could you really receive disconnected parts of a graph, starting a group and then receive connecting parts while still using the same group ? [16:11] i mean for a project that was get usign checkout ? [16:11] vila: well, if you consider a streaming interface, yes [16:11] ha ! [16:12] you would get C => [A B], A => [], B => [A] [16:12] jam [16:12] thx [16:12] however, that isn't particularly of concern [16:12] well maybe some [16:12] get_record_stream() determines the order to send data [16:12] jam: I would tend to punt during the transfer waiting for the next autopack to get it right, is that acceptable ? [16:13] but insert_record_stream() is responsible for putting that data into the target repo, and possibly rebuilding groups [16:13] vila: we have determined that it is not [16:13] if the sending order can acted upon then... 
surely that's better [16:13] specifically [16:13] if my source repo has 1 large pack, and 200 loose packs [16:13] and I fetch from that [16:13] the data stream will create 1 large pack [16:13] with lots of loose data inside [16:13] and then my target repository [16:13] will have a single large pack [16:13] that will never get repacked [16:14] Too big to fail.... :) [16:14] don't do it then :) [16:14] create several smaller packs [16:14] vila: how do we know what to put where? [16:14] what heuristic do we use that we are ready to start a new pack [16:15] at the moment, what I'm thinking [16:15] size [16:15] is to have insert_record_stream() keep a "pending group" open [16:15] and if it gets a record that it thinks needs repacking [16:15] to put it into the pending group [16:15] with the algorithm there being [16:15] a Group with 1 entry [16:15] (at the moment) [16:15] and then fetch requests things in 'groupcompress' order [16:15] so if things are out-of-order they will get repacked on the fly [16:16] as we will split up the old group if it doesn't yield data in the correct order automatically [16:16] I'm not sure if you understand that from just what I've written. But it makes sense in my head... [16:17] trying to rephrase: you have a way to detect that you're creating a bad group and trigger its reconstruction afterwards ? [16:17] vila: mmm... [16:17] the source is already split up into groups [16:17] if we request "unordered" we get the groups as they exist [16:18] if we make a request for "topological" or "groupcompress" ordered [16:18] then the source will send groups that match the requested ordering [16:18] but will create new groups if it needs to [16:18] or truncate existing groups, etc. [16:18] at the moment [16:19] we have bug #402645 [16:19] Launchpad bug 402645 in bzr "fetch for --2a formats seems to be fragmenting" [Critical,In progress] https://launchpad.net/bugs/402645 [16:19] which is that we make a groupcompress request [16:19] so why not use unordered ? It seems to guarantee that the groups are not that bad and can be made better at the next repack [16:19] which causes the source to create small group fragments [16:19] vila: see above as for "at next repack" [16:19] after initial fetch [16:19] "next repack" never happens [16:20] but if the groups are correct that's not a problem [16:20] well, for the data transmitted with the initial fetch, it will never be repacked via autopack [16:20] vila: but they aren't in a real world repo [16:20] which has some amount of "perfect" groups === Noldorin_ is now known as Noldorin [16:20] and then a whole lot of new "not yet autopacked" groups [16:20] and probably some "autopacked, but not with all the new stuff yet" [16:20] given that our "optimal packing" is newest first [16:20] we can never be truly optimal in a live repo [16:20] since there will always be a bit more new stuff [16:21] hmmm [16:21] so the other bug is bug #402652 [16:21] Launchpad bug 402652 in bzr "smart fetch for --2a does not opportunistically combine groups" [High,Triaged] https://launchpad.net/bugs/402652 [16:21] the twin to #402645 [16:21] I'm back to the idea that we should create several packs [16:21] namely, if we are going to fragment, then we should recombine [16:21] vila: that is a possibility [16:21] but how do we know what/when/where to do that? 
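One possible shape for the "pending group" idea jam sketches above: records from healthy groups stream through untouched, while records that arrive fragmented (crudely detected here as coming from one-entry groups) are buffered and recompressed together near the group size target. Every name and attribute here is hypothetical glue, not the real insert_record_stream:

    PENDING_LIMIT = 2 * 1024 * 1024   # roughly the mixed-content group target

    class OpportunisticInserter(object):
        def __init__(self, write_existing, recompress_and_write):
            self.write_existing = write_existing        # reuse a group as-is
            self.recompress_and_write = recompress_and_write
            self.pending = []
            self.pending_bytes = 0

        def insert_record(self, record):
            # record.group_entry_count / record.bytes are assumed attributes
            if record.group_entry_count == 1:   # looks fragmented
                self.pending.append(record)
                self.pending_bytes += len(record.bytes)
                if self.pending_bytes >= PENDING_LIMIT:
                    self.flush()
            else:
                self.write_existing(record)

        def flush(self):
            # Build one new group from everything buffered so far.
            if self.pending:
                self.recompress_and_write(self.pending)
                self.pending = []
                self.pending_bytes = 0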
[16:22] remember that individual packs are meant to be fairly self-contained [16:22] though I guess not fully self contained [16:22] given that a pack with rev 10 won't have the text files for rev 9 [16:22] well, from rev 9 that are still present in rev 10 [16:22] where is the easiest: only locally do you have the right info to do a good job, the knowledge can't be shared [16:23] when: repack, repack, repack [16:23] vila: so I already have a patch for review for bug #402645 [16:23] Launchpad bug 402645 in bzr "fetch for --2a formats seems to be fragmenting" [Critical,In progress] https://launchpad.net/bugs/402645 [16:23] which would be nice to get reviewed [16:23] that could not be simpler than being handled at repack time [16:23] I think lifeless was so-so on it, but I don't think he was blocking [16:24] that left what... and that one is not easy :D [16:24] vila: so... the problem with 'repack' time, is that for very large projects repacking everything takes a while [16:24] it would be nice to have a lighter repack [16:24] which is what we have autopack for today [16:24] but that assumes that a single pack file is already as optimally packed as it could be [16:24] one option [16:25] yeah, I mean autopack, sorry, the point where you decided to group the last 10 packs [16:25] is that we get the pack code to produce deterministic output [16:25] such that we compare what a pack actually contains [16:25] with what the optimizer would create [16:25] wait ! [16:25] and if they differ, redo the work [16:25] you just pointed out the flaw ! [16:25] we assume that a single pack file is already optimally packed [16:26] that's wrong and we know when [16:26] so how about repacking that one when we know it's wrong ? Is it already too late ? [16:26] repacking with a special sense here: don't try to combine with others, focus on that one [16:26] or not... [16:27] vila: how do we know when? [16:27] didn't you say you can identify that a group is badly compressed ? [16:27] (it's a serious question, how do we detect that a pack is sub-optimal without fully repacking) [16:27] vila: I'm saying maybe we can [16:28] potentially by checking the sort order [16:28] though that requires.... [16:28] a deterministic sort order [16:28] which ... is where I started this discussion :) [16:28] gc_sort is deterministic, we agreed on that :) [16:28] it is not [16:28] it potentially is [16:28] it should [16:29] so here is the next issue with determinism, and why it would be good to be stable when changing graph [16:29] consider [16:29] A B C D E F G [16:29] and locally you can even realize that you're receiving a group that will be better compressed here but the other side couldn't know [16:29] which are currently in groups of: [16:29] [G F E] [D C B A] [16:29] it would be good if, when adding H [16:29] we don't end up with [16:29] [H G F] [E D C] [B A] [16:29] If instead we could get [16:30] [H G F E] [D C B A] [16:30] that would mean that our 'lite-repack' doesn't have to touch everything [16:30] just a group here and there that we push some extra data into [16:30] vila: you can, but if you aren't going to look beyond the current pack file, you actually have *less* data... [16:30] one question: is it legal to create groups in new packs from bits of groups in old packs ? [16:31] it is legal to have multiple copies of a text in different packs [16:31] Is that sufficient for your question?
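(Editor's aside: a toy version of the stable 'lite-repack' behaviour vila and jam want above — when H arrives, top up the newest group instead of re-cutting everything, so [G F E] [D C B A] becomes [H G F E] [D C B A]. The function and the cap of 4 are invented for the example.)

    def regroup_stable(groups, new_keys, cap=4):
        # groups: newest group first, newest key first within a group,
        # e.g. [['G', 'F', 'E'], ['D', 'C', 'B', 'A']]
        for key in new_keys:
            if groups and len(groups[0]) < cap:
                groups[0].insert(0, key)  # top up the newest group
            else:
                groups.insert(0, [key])   # start a fresh group
        return groups

    print(regroup_stable([['G', 'F', 'E'], ['D', 'C', 'B', 'A']], ['H']))
    # [['H', 'G', 'F', 'E'], ['D', 'C', 'B', 'A']] -- older groups untouched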
[16:31] right [16:32] of course, having 2 copies of a text in your repository is, by definition, sub-optimal :) [16:32] so we can create a new pack with only well compressed groups and wait for the autopack to clean the dust [16:32] or force the cleaning [16:32] vila: so I think you are saying that Repository.start_write_group() should be creating 2 pack files [16:32] one with 'seems to be good' data [16:33] and one with 'seems to need repacking' data [16:33] though again, if you are only creating 2 packs... [16:33] autopack doesn't kick in for a while [16:33] but you force it with a focus on the 'needs repacking' one [16:34] what worries me is that we may still transfer too much data... [16:35] vila: well, there are other problems. Like the fact that 'optimally' you would probably want to mix some of the data in the 'needs repacking' pack with the data in the 'fully packed' pack [16:35] nahhh, what is the risk that a group contains irrelevant data for a given fetch ? Isn't it capped by your 2M limit anyway ? Or can that trigger multiple times for a single fetch [16:35] Consider a group of a single file content [16:36] which has 100 revs of that content, and you get a single 'ungrouped' text of the same file [16:36] vila: so the server side code knows that you are requesting XX bytes out of the 2M group, and can opportunistically split up a group [16:36] which is currently causing our fragmentation issues :) [16:37] Is it a bug that it split too much or is it needed for some reason ? [16:39] the bug is that our current fetch says: [16:40] give me the first 2 texts from group A, and the first 2 texts from group B, then the middle 3 texts from A, then middle 3 from B, then the last 2 from A, and last 2 from B [16:40] which causes a nice 7-entry group A and 7-entry group B to be split up [16:40] into 6 groups 2, 2, 3, 3, 2, 2 [16:40] the *good* is that we don't send the full 14 texts multiple times [16:40] the bad is we end up fragmenting without recombining afterwards [16:41] The original idea we had for smart fetch was bug #402652 [16:41] Launchpad bug 402652 in bzr "smart fetch for --2a does not opportunistically combine groups" [High,Triaged] https://launchpad.net/bugs/402652 [16:41] we could create optimal groups on the fly [16:41] (whether that ends up being done in the server or the client...) [16:42] I vote for the client... half-heartedly :-/ [16:42] vila: in theory client scales better, because you have 1 server and N clients [16:43] Is it the fetch for the smart server or also for dumb ones ? [16:43] that said, because of 'push vs pull vs copy' who the client is ...
[16:43] (it can be the source or target, or neither as an intermediary) [16:43] vila: so it is basically the same fetch, it just depends on the layering [16:43] "smart fetch" is [16:44] dumb_fetch => SmartObject => bytes on the wire => RemoteRepository => local repo [16:44] versus [16:44] dumb_fetch => local_repo :) [16:45] So for plain 'http' requests, we have to read the whole group [16:45] and then locally we split it up into smaller bits [16:45] we have a local cache of groups [16:45] so we are unlikely to fetch the same big group twice [16:45] though it is possible that we do so if we are really bad about our fetch ordering [16:50] 1) we shouldn't be "fragmenting without recombining" or at least we should give hints about it so that the other side knows there is work to do, [16:52] 2) if the "smart" is to send multiple groups to reduce the amount of data transmitted and we fail to achieve that, that's not smart :-/ [16:56] vila: I'm pretty sure (2) is working [16:56] with the caveat that it is introducing (1) :) [16:57] so if we are happy with 2, the only problem is to not consider the received pack as optimally compressed no ? [16:58] What if you just look at that pack file, counting groups/file and deciding if it's worth keeping or not ? [16:59] Kind of an autopack after fetch focused on the just received pack ? === deryck is now known as deryck[lunch] [17:00] You can even spawn it in the background :-) [17:12] * kfogel is away: lunch + an errand [17:26] lifeless: hey man, I fixed that branch and re-submitted the review [17:31] oubiwann: just so you know, lifeless is usually sleeping for another 4hrs or so [17:31] jam: yeah, that's what I figured... [17:31] jam: I msg'ed him on the off chance that he checks his irc highlights ;-) [17:32] it does === deryck[lunch] is now known as deryck === beuno is now known as beuno-afk [18:02] pfffffffff [18:03] jam: ping [18:03] vila: ? [18:03] jam: I just uploaded new packages for bzr-gtk in bzr-beta-ppa [18:04] When should I do the same for bzr-ppa at the latest for 1.18 ? [18:04] well, doesn't really matter I suspect [18:04] I think you should copy them from the beta ppa [18:05] well, now that the script is written... :-) [18:05] Hi vila, jam [18:06] hi garyvdm, too bad, I'm about to leave :-) [18:06] np - just saying hi. [18:06] Hi bialix [18:06] Hi garyvdm! [18:06] I've seen your blueprint for the conflict manager, nice job, I'll try to comment if I find the time before you code it :-D [18:07] garyvdm: have some quick questions [18:07] may I? [18:07] sure [18:07] garyvdm: you can answer in short form [18:07] vila: There's lots more that I still have in my head. [18:07] james_w: builddeb rocks, just wanted to let you know :) [18:08] james_w: bzr-builddeb I meant ! [18:08] thanks :-) [18:08] garyvdm: 1) your fix for qdiff and painting is great. is it possible to do a similar thing for annotate? today annotate is slow on scrolling... [18:08] james_w: hi, great idea around bzr-build [18:08] garyvdm: as all of us :) The tricky thing is to get them out :) [18:09] james_w: at first I've thought you stole my scmproj cookies :-) [18:09] bialix: annotate is slow to scroll because it's only loading revisions on screen, like qlog. [18:09] james_w: sorry: bzr-builder of course [18:10] garyvdm: and so? [18:10] bialix: I plan to make both qlog and qannotate be a bit more aggressive with revision loading. [18:10] bialix: It's the same bit of code.
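(Editor's aside, referring back to jam's [16:40] example: a replay of how an interleaved fetch order re-cuts two 7-text groups into 2, 2, 3, 3, 2, 2 — each contiguous run from one source group gets flushed as its own new group. fragment() is an invented illustration, not the fetch code.)

    def fragment(request_order):
        # request_order: (source_group, text) pairs in the order requested.
        groups, current, current_src = [], [], None
        for src, text in request_order:
            if src != current_src and current:
                groups.append(current)  # source switched: flush the run
                current = []
            current_src = src
            current.append(text)
        if current:
            groups.append(current)
        return groups

    order = ([('A', t) for t in ('a1', 'a2')] + [('B', t) for t in ('b1', 'b2')] +
             [('A', t) for t in ('a3', 'a4', 'a5')] + [('B', t) for t in ('b3', 'b4', 'b5')] +
             [('A', t) for t in ('a6', 'a7')] + [('B', t) for t in ('b6', 'b7')])
    print([len(g) for g in fragment(order)])  # [2, 2, 3, 3, 2, 2]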
[18:10] garyvdm: that's great, especially if you put this aggression in a thread ;-) [18:10] :-o [18:10] ok, ok [18:10] oh, ok [18:10] bialix: yeah, it is quite similar to scmproj :-) [18:11] james_w: but it's much more than that in one area and much less than that in other [18:11] ;-) [18:11] but I like the idea [18:11] heh [18:11] maybe I can persuade guys like jam to use it for working with windows installer configuration [18:13] garyvdm: 2nd q: oh, you already answered it about aggressive loading [18:14] garyvdm: maybe I'll put some of my thoughts about threads/subprocesses and multipass work as a text document? (not wiki :-P) and we can work on it together? [18:14] garyvdm: 3rd) can we put the throbber in a status bar? [18:14] something like in firefox [18:14] (and other browsers) [18:15] bialix: If you want to look at the qlog/qannotate revisions loading, the code is in lib/revtreeview.py [18:15] bialix: You can put the throbber anywhere, so it is a question of design. [18:15] mostly I don't really like how it appears and disappears [18:16] The throbber will look funny just below the buttons. [18:16] Or were you thinking above the buttons? [18:16] well, if it will be a status bar, then we have to use a progress bar instead of a spinning wheel [18:16] no, I mean a status bar, like in firefox [18:17] wanna screenshot? [18:17] Don't worry. I know what you mean [18:17] ok [18:18] garyvdm: hasn't igc asked you about packaging bzr-explorer? [18:18] bialix: maybe just have a throbber that sits in the top right, with no text like firefox... [18:18] No [18:19] top right? under title bar icon/buttons? [18:19] bialix: My knowledge about debian packaging is limited to "How to update qbzr's packages." [18:20] bialix: I'll have a shot on the weekend. [18:20] new firefox (at least on windows) lacks this spinning wheel at top right, but I remember it [18:20] "How to update qbzr's packages." -- that's cool anyway [18:21] I did not even notice that the top right throbber had gone... [18:21] garyvdm: I've just asked. igc wants bzr-explorer 0.7 into karmic [18:21] (I'm about to package the 0.7 release now) [18:22] bialix: 0.7 - I just read on the ml about 0.8? [18:23] bzr-explorer 0.7 is not released yet [18:23] tonight it will be [18:23] Oh - worry - miss read [18:23] *sorry [18:23] np [18:23] garyvdm: also I want to redesign qbzr.conf. completely [18:24] a [18:24] I've been thinking about this for many months [18:24] jam: ? [18:24] sorry, wrong window [18:24] bialix: I would recommend that you try to find someone else to do the packaging, cause I will be very slow. Maybe luks? [18:25] garyvdm: don't worry about packaging, I'm sure Ian will find the way [18:25] I was unsure whether he was asking you or not [18:26] bialix: To be honest, I'm not too familiar with the .conf code. You know it much better. [18:26] garyvdm: about qbzr.conf: I want to put data in more sections and subsections. instead of keeping everything in [DEFAULT]. If you have any thoughts on this topic we can discuss it. otherwise I'm planning to work on it soon [18:26] ah, ok [18:27] I'm still unsure ow to organize things like geometry settings, color settings [18:27] s/ow/how/ [18:27] Yes - It would be nice if the window sizes were separate from everything else. [18:28] and I want some universal MRU handling (latest 10 pull locations used, push, commit messages etc) [18:28] garyvdm: precisely.
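(Editor's aside: one possible shape for the sectioned qbzr.conf bialix describes above, instead of keeping everything under [DEFAULT]; every section and key name here is invented for illustration, not an agreed layout.)

    [DEFAULT]
    # only truly global options stay here
    default_diff = qdiff

    [GEOMETRY]
    # window sizes kept apart from everything else, as garyvdm suggests
    qlog = 800x600
    qcommit = 600x400

    [COLORS]
    added = green
    removed = red

    [MRU]
    # universal most-recently-used handling (latest pull/push locations etc.)
    pull_locations = lp:qbzr, ../trunk
    push_locations = lp:~user/qbzr/fixes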
[18:28] it was my first intent [18:29] * bialix often mulls ideas for several weeks/months because there are always bugs to fix rather than redesigning deep internal code [18:29] garyvdm: also I want to redesign most of qsubprocess windows: to always show working branch/tree location. [18:30] Yes [18:30] Like qcommit [18:30] I found it's very easy to lose identity while working with multiple branches in bzr-explorer [18:30] yep, but maybe a bit shorter [18:31] garyvdm: and perhaps the latest: today I've asked about qrevert to a revision [18:31] I've not filed a bug report yet [18:32] trying to understand where to put this control [18:32] bialix: I have the same thing. I've been thinking about this for about a year now: http://bazaar-vcs.org/qbzr/Blueprint/diffmerge [18:32] Note that blueprint is not finished... [18:33] we have several dialogs where we expect the user to specify a revision [18:33] Lots still in my head. [18:33] garyvdm: I like your screenshots [18:33] very much [18:33] very impressive [18:33] it seems meld's way is too complicated (from screenshots) [18:34] so, garyvdm, what can we do about the revision selector? ;-) [18:34] Screenshots made with balsamiq [18:34] bialix: Yes I need to finish that. [18:35] I remember you stumbled upon some bugs/issues with qt [18:35] igc wants me to make it do all different types of revision selectors. [18:35] I'm still thinking about reusing the qlog widget [18:36] Yeah - had problems putting qlog inside a qcombobox [18:36] is it possible to run a separate dialog with a qlog-like graph and allow the user to click/dbl-click on a revision and we pick it? [18:36] That would be easy [18:37] for selecting a range -- just drag the mouse with the button pressed and all [18:37] or something like that [18:37] something easy [18:38] bialix: I'm busy trying to use jam's new graph code to make qlog faster. [18:39] It's promising. [18:39] garyvdm: I'd like to fix https://bugs.launchpad.net/qbzr/+bug/417895 [18:39] Launchpad bug 417895 in qbzr "qannotate error when clicking on line which origin is dotted revno" [Critical,Confirmed] [18:39] if you have time to comment on it, I'll try to find the root of the problem [18:39] garyvdm: btw, feedback is welcome if you find "it would be nicer if..." [18:40] he, faster :-( [18:40] heh [18:40] today qlog on bzr.dev opens for me in ~10 seconds [18:40] jam: sure, I've already got a draft mail :-) [18:41] bialix: I want that to be 1sec... [18:41] I will be happy if it starts painting the latest revnos first and then loads the tail later [18:41] hmm... cold cache it is 18s here, 12s before the window shows [18:42] hot, 5s [18:42] (and works in a thread, because it completely ignores repaint while loading) [18:42] Speed is important for using it for a revision selector [18:42] I think we have to load only mainline first [18:42] It takes about 45 sec for OOo [18:42] yes [18:42] because for a revision selector this is what you need 90% of the time [18:43] well, for OOo, *everything* is mainline... [18:43] oh, I recall [18:43] garyvdm: qlog on multiple branches [18:43] garyvdm: can we not show dotted revnos for non-trunk branches? [18:43] * garyvdm hides under the desk. [18:44] no, it's an easy question [18:44] or the right revno...
[18:44] right mainline revno would be confusing [18:44] but dotted revno is unusable [18:44] I can't merge using this revno [18:45] Easy to not show the wrong revno [18:45] I'll be happy just to see no revno at all [18:45] * bialix has too many questions/ideas so not everything is filed as bugs [18:46] :-) [18:47] garyvdm: it was the last question. I promise! [18:47] jam: for OOo, I think we have to look at loading just part of the mainline... [18:47] Sure [18:47] * bialix packaging 0.7 now [18:48] oh [18:48] and after rebase qlog refresh crashed... [18:48] garyvdm: so... qlog could just load enough of the mainline to fill the first screen [18:48] * bialix hides [18:48] and read those revisions so that it could put up summary messages [18:48] and then spawn something that would load the rest of the data [18:48] Jam: yes [18:48] you can number mainline revs, etc [18:48] without needing the full graph [18:49] bialix: please log a bug. [18:49] arguably, we could have something in between 'get_parent_map()' and 'get_known_graph_ancestry()', I'm just not sure exactly how to flesh that out [18:49] so I didn't bother yet [18:49] (give me as much parent information as you can cheaply determine from these tips) [18:50] note also that if something like OOo switches, they will likely have a better conversion that actually makes use of their metadata in cvs to do merges [18:50] I'm not positive of that [18:50] but they have 3rd party tools to do a lot of merging, etc. for them [18:50] built on top of cvs [18:50] so there is some merge info there if people choose to extract it [18:50] I need to get a copy of an OOo branch... [18:51] ping igc, he uploaded it somewhere [18:51] let me check [18:51] is the mysql branch also very heavy with much merge info? [18:51] bialix: mysql is *very* bushy [18:51] bialix: Yes - I do have a copy of that. [18:51] something like 65k ancestry revs, versus... 3k mainline? [18:51] bushy? hmmm [18:52] * bialix wonders what fullermd would say about this [18:52] jam: Approx how many MB is the OOo branch? I have limited bandwidth. [18:52] gigs? [18:52] yeah [18:53] checking [18:53] 1.3GB in tar.bz2 form [18:53] 945MB in .bzr/ [18:53] * bialix wonders when people start using torrents to get such big repos [18:53] bialix: there already is gittorrent, IIRC [18:53] cool [18:54] garyvdm: 3.0GB of .bzr/ + working tree [18:54] wow [18:54] garyvdm: http://people.canonical.com/~ianc/benchmarks/repos/bzr/ [18:55] that has, emacs, OOo, python, mozilla, etc. [18:55] interesting, how big are the lp sources? [18:55] Cool [18:55] comparing to those [18:55] bialix: about 100-200MB, IIRC [18:56] (note that numbers quoted are for --2a formats... LP was 1.2GB in --pack-0.92) [18:56] MySQL is 600MB in --pack-0.92, around 225MB in --2a, IIRC [18:56] jam: because bzr also uses packs as its repo format it's possible to build bzrtorrent, hm? [18:56] jam: I think I'm going to ask igc to post me a flash drive with some repos on. It will be cheaper that way :-( [18:56] * pgega is away: I'm busy [18:57] bialix: well, I think it is based on git's sha1 content stuff. But certainly we *could* do a torrent-like protocol. I thought someone brought it up on the mailing list a while back [18:57] garyvdm: where do you live? [18:57] South Africa [18:57] jam: yeah sha-1 hashes should help them [18:59] Someone worked out that if you wanted to download more than 5GiB in south africa, it's quicker to fly to hong kong, download it there, and fly back...
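(Editor's aside: a sketch of the "load just part of the mainline" idea jam and garyvdm discuss above — walk leftmost parents from the tip with get_parent_map(), so qlog can paint the first screen before the whole graph is loaded. This is one way to do it under that assumption, not qlog's actual code.)

    from bzrlib.revision import NULL_REVISION

    def iter_mainline(branch):
        # Walk the leftmost-parent chain from the branch tip one revision
        # at a time; callers can stop as soon as the first screen is full.
        repo = branch.repository
        rev_id = branch.last_revision()
        while rev_id != NULL_REVISION:
            yield rev_id
            parents = repo.get_parent_map([rev_id]).get(rev_id, ())
            if not parents:
                break  # a ghost, or the start of history
            rev_id = parents[0]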
[18:59] we have sha1 hashes, but things aren't addressed in that sense [18:59] garyvdm: well, there is the old 'lorry full of backup tapes' question [18:59] btw jam, how hard is it to implement commit --amend? is it possible to avoid the uncommit/commit dance to simply fix the commit message? [18:59] at what point is a truck full of backup tapes higher bandwidth than OC-3 ? [19:00] * pgega is back (gone 00:03:15) [19:00] bialix: The hardest part would be to get it past review [19:00] I don't see the specific benefit over uncommit/commit myself. [19:00] given that as I understand it [19:00] commit --amend == uncommit + commit [19:00] use case: you have changes in your tree when you realize you need to amend [19:00] as in, it takes the current content on disk [19:00] and recommits it [19:00] bialix: git commit --amend (I believe) will include those changes [19:01] perhaps not if you haven't 'added' them? [19:01] hm [19:01] not really sure [19:01] but as I understood [19:01] it was a way to resolve conflicts in the merge [19:01] which means it has to include content changes as well as metadata changes [19:01] I was under the impression it just creates a new commit object for the old tree/blobs [19:01] it == git [19:01] beuno-afk, ping when you've got a minute. [19:02] bialix: The discussion I read about it was talking about what to do when an autocommit after-merge got something wrong [19:02] and the recommendation was to use commit --amend [19:02] hence... it has to change content somehow [19:02] bialix: bushy mysql branch: http://imagebin.ca/view/3GLAnke.html . I just expanded the twisties on the screen. It gets much wider than that... [19:02] I don't use git at all, only read the docs sometimes [19:02] garyvdm: so... mysql also has the policy of letting anyone overwrite the mainline [19:03] so "bzr merge trunk; bzr commit -m 'merge trunk into my branch'; bzr push trunk" is acceptable to them [19:03] which means their history is much harder to understand [19:03] jam: Yes. I chatted to vila about that at uds. They need a "land" command [19:03] garyvdm: there are a lot of possibilities. They need to allow one first :) [19:04] for example, setting 'append-revisions-only' would force the issue [19:04] but they didn't want to impact the developers [19:04] I think I would find a "land" command useful too. [19:04] garyvdm: what about 'bzr merge --swap-parents' ? [19:04] jam: one day after 2.0 I'll be asking you a lot about file-specific commit messages. I want them in qbzr [19:04] * garyvdm goes to look [19:04] garyvdm: doesn't exist yet [19:04] just ~ the same idea as land [19:04] :-) [19:04] Yes [19:05] land == merge --swap-parents [19:05] and we'll need commit --revision-properties-from-file FILE [19:05] merge-to [19:06] I still think it's a bad idea [19:06] Hi luks, [19:06] hey [19:06] Important for mysql people to use qbzr [19:06] luks: which one? [19:06] bialix: per-file commit messages as they are implemented in bzr-gtk [19:07] heh [19:07] they became the de-facto standard now [19:07] and only jam knows all the secrets of them! [19:07] (joke) [19:08] it just hides information if there is no official support in bzrlib [19:08] luks: this is the latest feature bzr-gtk has and we don't [19:08] bialix: we can start with showing per-file commit messages in qlog, and qannotate (In the revision html view) === pgega is now known as pgega|away [19:08] vila told me it's officially supported [19:08] or something like that [19:08] bialix: yet bzr doesn't support them, loggerhead doesn't support them, etc. [19:09] I'm sure.
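(Editor's aside: the uncommit/commit dance bialix wants to avoid above, spelled out. bzr has no commit --amend, so fixing the last commit message today looks like this; note that any other pending changes in the working tree get swept into the new commit, which is the content-change wrinkle jam raises.)

    $ bzr uncommit --force   # remove the last revision from the branch;
                             # the working tree is left alone (--force skips
                             # the confirmation prompt)
    $ bzr commit -m "the message I meant to write"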
I think mysql pays for bzr support. [19:09] garyvdm: yes, but we need commit support too [19:09] garyvdm: yes, although not to us [19:09] :-) [19:09] so we can ignore these properties [19:10] it's just a matter of competition with bzr-gtk [19:10] I find it weird to update tools just to fit some weird workflow [19:10] I can completely understand why it was done in bzr-gtk [19:10] Bialix: I think that mysql guys would love for qlog to show pfcm (per file commit messages) because bzr viz is unusable on the mysql branch. [19:10] because it's core gui tools [19:11] garyvdm: As I understand they have a custom "glog" which uses gtk widgets and does effectively qlog [19:11] bzr-gtk ^ [19:11] heh [19:11] heh!~ [19:11] * bialix does not see the difference [19:12] just mainline, or expandable? [19:12] jam^ [19:13] garyvdm: expandable [19:13] bialix: 'bzr vis' doesn't have twisties [19:13] so you get *all* revisions *all* the time [19:13] ah [19:13] so it quickly becomes unusable on big histories [19:13] yep [19:13] I wonder why that is not in bzr-gtk. [19:13] garyvdm: I really don't know [19:14] other than some concerns about "not developing a VCS" so as not to get in trouble with their existing bk license [19:14] any idea where I can find the code? [19:14] I c [19:14] whether that has finally run out or not, I don't know [19:14] guilheimb is the best mysql contact I know of [19:14] he just sent an email to the list if you want to get his address [19:14] vila might know too === bialix is now known as bialix-dinner [19:19] jam: I was thinking about storing gdfo, and I'm not sure why ghosts are a problem, because you can calculate the gdfo on commit, and then when you pull, you copy the gdfo from the branch you are pulling from? [19:20] garyvdm: fetch can bring in a node which was a ghost locally without you realizing it [19:21] garyvdm: http://paste.ubuntu.com/259422/ [19:21] time goes down [19:21] merging D into B will bring in the ghost [19:21] ok [19:21] causing B to have a new gdfo [19:23] bla [19:25] jam: but the gdfo for b and d are known, and the gdfo for the new revision is the max(gdfo of parents) [19:25] max(gdfo of parents) +1 [19:26] garyvdm: the gdfo for b *changes* when the ghost is filled in [19:26] we need to detect that [19:26] and propagate that change [19:26] garyvdm: for example: http://paste.ubuntu.com/259426/ [19:27] merging D into Z [19:27] * vila remembers when jam explained it to him the first time and sends some love to garyvdm :-) [19:27] (so in those graphs, the first one has 'ghost' as a ghost, but the second has the actual revision.) [19:27] if you merge D into Z [19:27] it has to renumber, B, X, Y, Z === vila is now known as vila-dinner [19:28] In the first graph, we think the gdfo(B) == 2 (origin => A => B) [19:28] however, after filling in the ghost it is 3 [19:28] origin => A => C => ghost => B [19:28] sorry. it is 4 [19:29] Are the ghosts in that dag the same rev? [19:29] garyvdm: right. In the first it is a real ghost, in the second it is a known revision [19:30] (the whole point is that 'filling in ghosts' is a problem for caching gdfo) [19:30] and unless you track a list of ghosts, it isn't cheap to determine what ghosts may have been filled in by fetch [19:30] if any [19:30] Ok let's say the ghost is rev g [19:30] sure [19:31] Whoever committed b would have had to have g in their repo [19:31] So the correct gdfo of g was known when b was committed [19:32] garyvdm: yes. but bzr-svn has had a great knack for introducing a ghost in other people's repos [19:32] and ...
[19:32] wouldn't store the gdfo property [19:33] So bzr-svn would need to check for introducing a ghost on fetch, but normal bzr would not. [19:35] garyvdm: it all depends on how ghosts can be created and when they can be filled [19:36] converting from Arch is another way that ghosts were generated [19:36] etc [19:36] jam: ok I see. [19:36] And fetching bzr-svn => bzr branch can create a pure bzr branch that has a ghost [19:36] and then fetching that bzr branch to another bzr branch can propagate that ghost [19:36] and then fetching into that from the original bzr branch will fill in the ghost [19:37] I can draw a diagram if you prefer [19:37] basically, though, allowing a bzr revision to be stored in a non-bzr location [19:37] DW. I understand. [19:37] and then pulled out [19:37] *could* introduce all kinds of craziness [19:37] Jam: and checking for ghosts on fetch would involve loading the whole graph. [19:39] garyvdm: unless we start keeping a list somewhere of things that are ghosts. Yes [19:39] yes [19:39] also, if you are loading the whole graph, you may as well number it from scratch [19:39] since the numbering is ... low ms [19:39] versus the loading [19:40] I think gdfo on OOo is something like 700ms versus 10+s to load the whole graph [19:40] So is the plan to track the ghosts? [19:40] Hi amanica! [19:40] garyvdm: well, if we want to cache gdfo, I think we have to track ghosts [19:40] we don't do either yet [19:41] vila-dinner is the one looking at that side of it [19:41] :) [19:41] Hi garyvdm! :) === bialix-dinner is now known as bialix === EdwinGrubbs is now known as Edwin-lunch [19:50] jam: My ideas for KnownGraph, and question about find_ancestry, are very brief, so I'll ask you here. [19:51] For KnownGraph, it would be nice if I could acquire via a public api the parents and children for a revision. [19:52] Qlog uses that; it would be nice if there was just one copy. [19:52] easy enough to add [19:52] would you want it to be multi-way returning a dict? [19:52] or just a single node? [19:52] Jam: just single node [19:52] KG.get_parents(key) [19:52] or KG.get_parent_map(keys) [19:53] would you rather get the node directly, and have a way to walk node children and parents? [19:53] or do you prefer to have KG as the pure api? [19:53] (at the moment, the internals are slightly different between the KGN for python and pyrex) [19:53] KG.get_parents(key), KG.get_children(key) would be cool [19:54] I'll do a patch for that. [19:56] Ok 2nd question: This is how I'm using find_ancestry: http://paste.ubuntu.com/259445/ [19:56] Keys are in the format (revid,) [19:56] Is there a quick way to get just revid? [19:57] jam ^ [19:58] key[-1] ? [19:59] Also, I'm fairly sure that your else: clause will fail [19:59] because CombinedGraphIndex doesn't implement _find_ancestors(), only find_ancestry() [19:59] oh, and revisions._index is not a CombinedGraphIndex [19:59] but a _GCGraphIndex or a _KnitGraphIndex [20:00] * garyvdm tests. [20:10] jam: I don't understand what ref_list_num is for. Is it ok for me to just pass 0, like you do in groupcompress.get_known_graph_ancestry?
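(Editor's aside, before jam's answer to the ref_list_num question below: gdfo is 'greatest distance from origin'. A toy version of the invariant from the ghost discussion above — gdfo(rev) = max(gdfo(parents)) + 1 — showing why filling in a ghost forces renumbering. compute_gdfo is invented, not bzrlib code.)

    def compute_gdfo(parent_map):
        # parent_map: {rev_id: tuple_of_parent_ids}; a ghost is an id that
        # appears as a parent but has no entry of its own.
        gdfo = {}
        def visit(rev):
            if rev in gdfo:
                return gdfo[rev]
            parents = parent_map.get(rev)
            if not parents:
                gdfo[rev] = 0  # origin, or a ghost with unknown ancestry
            else:
                gdfo[rev] = max(visit(p) for p in parents) + 1
            return gdfo[rev]
        for rev in parent_map:
            visit(rev)
        return gdfo

    # With {'origin': (), 'A': ('origin',), 'B': ('A', 'ghost')} this gives
    # gdfo('B') == 2, matching jam's first graph. Fill in the ghost --
    # add {'C': ('A',), 'ghost': ('C',)} -- and gdfo('B') becomes 4: every
    # cached value downstream of a filled-in ghost is stale and must be redone.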
[20:11] garyvdm: Indexes store a list of referenced revisions, which is repository specific === CardinalXiminez_ is now known as CardinalFang [20:12] it turns out that all current repos have the regular 'parents' as the first entry [20:12] (KnitPack also stores the compression parent as the second entry, Groupcompress doesn't have a second entry) [20:12] but it is why the index code doesn't claim to know the right value [20:12] and gets it passed in from higher levels [20:13] I see, so I should just pass 0 then. [20:13] garyvdm: you honestly shouldn't really be accessing that low of a level from qbzr [20:13] as it could technically be different in a different repo [20:15] for now, yes, you could just pass 0 [20:15] Ok - Then I need a higher level way to get the whole ancestry from multiple repos. [20:17] garyvdm: repository.get_graph() [20:17] lifeless: hi [20:17] lifeless: well, he'd like to get it in a KnownGraph form. :) [20:17] hi :) [20:18] Lifeless: I'm trying to take advantage of the better performance of find_ancestry [20:18] garyvdm: you could do: g = repository.get_graph(other_repo); pm = dict((r, p) for r, p in g.iter_ancestry([tips]) if p is not None); kg = KnownGraph(pm) [20:18] but that doesn't give you the benefit of faster ancestry search [20:18] search/loading [20:19] garyvdm: then you need to work on exposing it cleanly higher up [20:19] lifeless: Yes [20:19] otherwise you'll just end up repeating all of the logic to handle stacking etc [20:20] jam: Any reason why Graph.iter_ancestry could not use find_ancestry? [20:21] I want to branch a working copy off of my repo without keeping it connected to the repo. What's the command for this...? [20:21] garyvdm: layering issues? [20:21] chrispitzer: bzr branch [20:21] yea, but then bzr push will go back to the parent... won't it? [20:21] it would certainly be the expected use case [20:21] I don't want that [20:21] then don't push ? [20:21] but iter_ancestry is a bit silly to cast up into a generator when the lower level is creating dicts [20:22] jam: I c [20:22] chrispitzer: if you create a separate branch, you can push anywhere you like [20:22] garyvdm: also, I'm pretty sure get_known_graph_ancestry() doesn't handle stacking yet... [20:23] as you have to explicitly support stacking everywhere, you never (anymore) get it for free [20:23] chrispitzer: You can remember a new push location with bzr push new_push --remember [20:23] especially when you don't even get fallbacks over RemoteRepository anymore... [20:24] garyvdm: and... nobody noticed that it was broken because again, stacking requires explicit testing, since it can't just be a permutation on existing formats... [20:24] jam: So in that case, I shall stick to using repo.get_graph().iter_ancestry() for now [20:24] * jam really dislikes the overall impact of stacking, as it breaks things over and over and over again [20:25] garyvdm: yeah probably [20:26] thanks [20:29] jam: is stacking even worse than os locks? [20:29] I don't believe [20:29] :-) [20:29] bialix: stacking has caused more direct bugs in bzr than just about any other feature [20:30] direct bugs easier to fix? [20:30] it requires a whole lot of code to know whether the thing it is looking at is stacked or not [20:30] especially given that RemoteRepository is no longer abstracting away the fact of stacking [20:30] oh [20:31] so the client now needs to know about stacking, and do operations 2x, etc. [20:31] bialix: not to mention all the recent stuff about "filling in parent inventories", etc.
[20:31] which broke fetch, commit, merging a bundle, autopack, pack, upgrade, ... [20:31] I'm happy to not know about all these nightmares [20:32] but it's scary [20:33] jam: btw, did you see james_w's new bzr-builder plugin? === Edwin-lunch is now known as EdwinGrubbs [20:36] jam: for KnownGraph.get_parents(key), should it check if the key is in the graph and return None if it is not, or just let a KeyError get raised? [20:36] garyvdm: I would probably raise an exception [20:37] I think the common uses [20:37] mean that you should only be using keys that it has told you about [20:37] so it is easier to write code that doesn't have to check for None [20:37] the bigger question is what to return for ghosts? [20:37] ok [20:37] []? [20:37] Since they have a node with parents of None [20:39] jam: [] would be the nicest for qbzr use. However, returning None may be useful for other [20:39] users of the api [20:40] to know if a Node is a ghost [20:40] garyvdm: well, kg.merge_sort() now directly filters out ghost nodes from the return value [20:40] so you can pass in ghosts without having to filter [20:41] Ok, in that case, I'll just return None. [20:56] jam: Sorry to bother you so much. I've added a test that I expect to pass for the py version, and fail for the pyx version. But I don't get any failures. [20:56] How can I check that it is testing the pyx version? [20:56] garyvdm: how are you creating the KnownGraph instance? [20:57] self.create_known_graph() ? [20:57] you can also just print kg [20:57] and it should be obvious whether it is python or pyre [20:57] pyrex [20:57] self.make_known_graph [20:57] ok [20:59] bla - I was running bzr selftest, not ./bzr selftest [21:22] :) [21:37] jam: shelf on windows nearly fixed. [21:37] jam: is there a bug for it? [21:38] lifeless: bug #305006 [21:38] Launchpad bug 305006 in bzr "shelve fails on win32 with "Could not acquire lock"" [Medium,In progress] https://launchpad.net/bugs/305006 [21:38] I believe [21:52] jam: Thanks a lot for all the help. [21:53] Night all [22:05] jam: hai five! [22:06] ? got shelve to work? [22:06] lifeless: grats [22:06] yup [22:06] 96 passing tests [22:06] and a bit of test stipple cleaned up [22:13] jam: branch is up, review request mailed. [22:13] jam: only 5 thisFails calls in tree now. [22:15] abentley: you might like to review it too, as I think you use shelf in the pipelines implementation. [22:34] abentley: are you around? I have a merge_inner question [22:34] lifeless: on the phone [22:35] gimme a ping when you have a minute then. [22:46] oubiwann: thanks [22:46] oubiwann: shall look again [22:46] lifeless: sweet -- thanks! [23:08] jam: thanks [23:20] does bzrlib have a helper function to determine if a given location (or tree/branch pair) is actually a light checkout? [23:27] bialix: I don't think so. tree.bzrdir.root_transport.base != tree.branch.bzrdir.root_transport.base though. [23:40] lifeless: ping [23:41] abentley: hi. so there is a single test left that does locking operations that won't work on windows [23:42] bzrlib.tests.per_workingtree.test_workingtree, test_merge_revert in that [23:43] in that, the merge_inner call fails [23:44] it fails because Merger replaces the base tree with a new revision tree [23:44] but 'base' is a modified tree, so I'm not sure that it's safe for Merger to do that. [23:44] I'm hoping to get a better understanding from you of this part of the code base [23:44] lifeless: Where does this happen? [23:45] in set_base_revision I think.
I paged it out since I pinged you, and I'm running low on blood sugar now, I was about to eat when you pinged. [23:45] Can you bear with me while I do that? It'll be about 10 minutes [23:46] lifeless: I'm looking.... [23:47] if you want to see the error, remove the thisFailsS... line at the top of that test. [23:47] back soon [23:58] lifeless: It looks like jam wanted to update Merger.base_rev_id, so he called set_base_revision, which has side effects that I can't imagine he wanted.
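(Editor's aside: abentley's light-checkout heuristic from [23:27] above, wrapped into the helper bialix asked for — a lightweight checkout keeps its working tree in a different control directory from the branch it points at. A sketch under that assumption, not an official bzrlib API; is_lightweight_checkout is an invented name.)

    from bzrlib.workingtree import WorkingTree

    def is_lightweight_checkout(tree):
        # For a standalone branch or heavyweight checkout the tree and its
        # branch share a control directory; for a lightweight checkout the
        # branch lives elsewhere, so the base URLs differ.
        return (tree.bzrdir.root_transport.base
                != tree.branch.bzrdir.root_transport.base)

    tree = WorkingTree.open('.')
    print(is_lightweight_checkout(tree))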