[00:19] jelmer: lp:~jelmer/bzr-pqm/fix-branch-root
[00:19] jelmer: why the rejection?
[00:24] lifeless: it just fixed the hack that was there, somebody else fixed it properly in the meantime by using the lp API
[00:32] morning
[00:55] lifeless: https://bugs.edge.launchpad.net/bzr/+bug/562666
[00:55] Launchpad bug 562666 in bzr "2a fetch is very cpu intensive" [Undecided,New]
[01:38] hi igc, still here?
[01:39] hi poolie
[01:39] Dirk might be willing to do x64 builds (see mail)
[01:39] that would be cool if it works
[01:40] poolie: it certainly would
[01:40] poolie: I'll reply to Dirk later today
[01:40] cool, i thought you would, just wanted to make sure you saw it
[01:41] poolie: yep, I did
=== retupmoca` is now known as retupmoca
[02:25] can someone remind me how to add an existing branch to a shared repository?
[02:27] You mean reconfigure --use-shared?
[02:27] fullermd: thankyou
[02:40] * igc out for a few hours
[02:47] bzr: ERROR: exceptions.ImportError: cannot import name XMLSerializer
[02:47] aroo?
[02:48] I haven't seen that for a while.
[02:49] IIRC it has been due to installing bzr on top of an older installation without first cleaning out the .pyc files.
[02:49] If that's not the issue, then more details please :)
[02:55] anyone know of some good documentation for bazaar's python API? need to write some stuff that programmatically queries a repo
[03:01] spiv: GREAT.
[03:01] spiv: well. that would fix the activity
[03:01] krobertson: our pydoc is ok
[03:02] krobertson: for specific use cases we generally advise people to look at the code in builtins.py
[03:02] lifeless: thanks, I will check that out
[03:31] spm, lifeless: pqm has (I assume) failed my branch twice, but has not sent me a failure email
[03:32] spiv: Ahh I think I know why - and it's likely your fault. again. ;-)
[03:32] It doesn't want to hurt your feelings...
[03:32] spiv: smtplib.SMTPSenderRefused: (552, '5.3.4 Message size exceeds fixed limit', 'pqm@bazaar-vcs.org') <== I haven't yet had a chance to chase down tho
[03:32] spm: erk
[03:33] that's a *guess* mind. but I've noticed 2 of them...
[03:33] spm: that would be because of the log of the failing test run, I suppose :(
[03:33] spiv: 8.30 last night; and 11.05 this morning. Sound about right?
[03:33] Yep.
[03:34] Any chance you can send me the most recent one manually?
[03:34] How big is the offending message, do you know?
[03:35] I suspect they get dropped; but the pqm log for it may exist...
[03:38] spiv: ~spiv/bzr/diff-relock ?
[03:40] spm: right
[03:47] spm: oh, can we get that limit raised ? :)
[03:47] lifeless: I suspect so. :-) submit an RT? it'll be a GSA task.
[03:48] spiv: https://chinstrap.canonical.com/~spm/patch.1271201969.log.gz
[03:48] spm: how big is that unzipped ?
[03:48] lifeless: 44Mb
[03:48] woah
[03:48] hmm
[03:49] we may want to strip passing tests sooner rather than later.
[03:49] spm: thanks!
[03:49] lifeless: yeah
[03:50] thats a pqm tweak
[03:50] lifeless: afaict, something changed around the 2d of April (~ 3K gzip) and the 6th 22K, and later on the 6th, was 44Mb.
[03:50] spm: subunit detailed streams
[03:50] spm: we're getting much more data about errors now
[03:50] ahhhh
[03:51] wasn't sure if know change; or not. coolio.
[03:51] Not to mention much more data about successes ;)
[03:51] known*
[03:51] heh
[03:52] spm: sent to launchpad at rt.c.c
[03:52] ta
[03:52] spm: which I'm told makes an instant right-queue task
[03:52] spm: is pjdc an appropriate naggee for this ?
[03:53] lifeless: not atm unless super zomg critical
[03:53] spm: we shouldn't leave errors getting blocked for long
[03:53] Erik's the current VG, so our mornings.
[03:54] theres no vg listed atm ?
[03:56] he finishes around midday I believe
[03:56] 11. close enough. DST changes.
[03:57] ok
[03:57] so I want to make sure this is changed before 5pm - or whever you clock off
[03:57] its my fault its been left like this :(
[04:03] lifeless: re bug 562079
[04:03] Launchpad bug 562079 in bzr "incorrect warning after update" [High,Confirmed] https://launchpad.net/bugs/562079
[04:03] is this actually a regression, ie something that used to work?
[04:03] poolie: I'm pretty sure it is
[04:03] also, please put a meaningful subject line on
[04:03] "incorrect warning after update" could mean almost anything
[04:03] I think the rearranged update code is picking up some noise
[04:04] poolie: I do try to put meangingful subjects
[04:04] poolie: I'm sorry that that one isn't meaningful enough
[04:04] np
[04:08] is there code for doing the moral equivalent of ls -lR on a transport somewhere already?
[04:09] iter_files_recursive
[04:09] but it does not directly get the stat values
[04:09] or do you want a human-oriented one?
[04:09] mwhudson: for http you'll need webdav
[04:09] mwhudson: to do listing
[04:09] lifeless: we can switch away from http
[04:10] poolie: thanks, i think that'll do
[04:10] poolie: on the subject of subjects etc; you seem to remind people a lot; perhaps you could just lead by example and say 'I've changed the subject to XXX which I think is clearer'
[04:15] lifeless: for a dump copy, does http://pastebin.ubuntu.com/414067/ sounds vaguely like what you meant wrt ls -lR ?
[04:15] mm
[04:16] i guess i should sort them
[04:20] mwhudson: yeah
[04:20] as a first approx thats simplest and it shouldn't a) infinite loop or b) trigger often
[04:21] simple is good here
[04:21] lifeless: fwiw, i don't think i've said this yet, this is only for the initial mirror of the branch
[04:22] mwhudson: sure, you have and I got that ;)
[04:22] ok cool
[04:23] maybe not explicitly but I'm passingly familiar with the problem
[04:45] hey spiv... it looks like your _move_index code has higher overhead than we predicted
[04:45] I'm profiling loggerhead
[04:46] and a bit of code that iterates over 'get_parent_map()' in order to build up the mainline history
[04:46] ends up calling _move_* 5k times for the mainline
[04:46] which ends up taking 3.7s of runtime
[04:50] I can reproduce it by just using "branch.revision_history()"
[04:50] certainly something we usually avoid, but still, 5k get_parent_map() calls shouldn't end up costing 3.5s of wall-clock time
[04:52] one possibility...
[04:52] hit_names is a list
[04:52] and you do "for name, index in X: if name in hit_names"
[04:53] which means that if you have 20 indices
[04:53] and all are hit
[04:53] you end up doing 20x20 lookups, rather than 20x1 lookups
[04:54] I'll poke a bit
[04:58] changing it to a set didn't really help
[04:58] changing it to only _move_to_front every 10th call to iter_entries did
[04:59] (dropping total time from 3.5s => .35s...)
[05:20] found a better fix, at least for my use case
[05:20] if the given indexes are already at the front, skip trying to reorder
[05:20] happens 5138 out of 5153 times
[05:23] jam: ah, nice
[05:23] I think this is bug #562429, btw
[05:23] Launchpad bug 562429 in bzr "performance regression for 'log' in trunk?" [High,Confirmed] https://launchpad.net/bugs/562429
[05:23] but haven't actually looked at the bug close enough yet.
[05:24] jam: yeah, I just guessed that :)
[05:25] I'm not sure about the hit_names being a set vs a list
[05:25] "if the given indexes are already at the front, skip trying to reorder" sounds good to me.
[05:25] I *think* a set is better
[05:25] but since the real improvement is to just skip
[05:25] I'll leave that code alone for now
[05:25] Yeah.
[05:29] Or to rephrase: "if no accessed index was a miss, then don't reorder."
[05:30] How common is it (in your use case at least) that iter_entries doesn't look at every index in self._indices?
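[Editor's note] jam's list-vs-set point above can be illustrated with a small self-contained sketch (all names here are made up, not bzrlib's): a membership test against a list scans it linearly, so checking 20 hit names against 20 (name, index) pairs costs up to 20x20 comparisons, while a set answers each test in roughly constant time.

```python
# Illustration only (hypothetical names, not bzrlib code): 'name in
# hit_names' on a list is O(len(hit_names)) per test, so 20 pairs x 20
# names is up to 400 comparisons; the same test against a set is O(1)
# on average.
hit_names = ['index-%d' % i for i in range(20)]
pairs = [(name, object()) for name in hit_names]  # stand-in (name, index) pairs

def count_hits(pairs, container):
    # Count how many pair names appear in the container; the answer is
    # identical for a list or a set -- only the lookup cost differs.
    return sum(1 for name, index in pairs if name in container)

assert count_hits(pairs, hit_names) == 20        # list: linear scans
assert count_hits(pairs, set(hit_names)) == 20   # set: hashed lookups
```

As the log itself notes, switching to a set "didn't really help" in this case because the real win was skipping the reorder entirely; the asymptotic point only matters as the number of indices grows.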
[05:31] https://code.edge.launchpad.net/~jameinel/bzr/2.2-move-to-front-if-changed-562429/+merge/23371
[05:31] spiv: so I'm iterating the mainline
[05:31] I may hit a few of them on the way back
[05:31] but the last few thousand are all in the same index
[05:32] I'm thinking that iter_entries could note how many it looked at, and tell _move_to_front, so that _move_to_front can avoid touching the last half if only the first half needs to change.
[05:32] spiv: something like that could be worthwhile, but the above fix is pretty trivial
[05:32] and solves my prob completely
[05:33] to give an impression, my '.rix' has 1.3MB as the largest, next is 120k, 88k, 6x40k, 4x4k, 8x1k
[05:33] So orders of magnitude each, and I'm guessing 90% of my query is that first index
[05:34] spiv: alternatively, your iteration loop could see once all hit_names have been found
[05:34] and stop the loop
[05:34] wouldn't that do the same?
[05:36] jam: which loop? The one in _move_to_front_by_index?
[05:38] The loop in _move_to_front_by_name would benefit from that, but I don't think that alone would benefit as much as your current fix.
[05:39] I expect most of the cost is in _move_to_front_by_index, not _move_to_front_by_name.
[05:40] Although there is a little bit of repeated effort when _move_to_front_by_name is invoked, because it calls _move_to_front_by_index which constructs the same list comprehension.
[05:41] spiv: as an example: http://paste.ubuntu.com/414088/
[05:41] The way _move_to_front_by_index has to zip-then-unzip the self._index_names + self._indices lists doesn't help.
[05:42] and yes, the cost is in _move_to_front_by_index according to lsprof
[05:42] That avoids the packing and repacking
[05:42] well, with a bug
[05:42] http://paste.ubuntu.com/414089/
[05:42] that should be better
[05:43] Oh, right, yes that approach would be a little better.
[05:43] anyway, *my* fix avoids calling all the siblings entirely
[05:43] lifeless: oh good idea re commenting on the bug
[05:43] Right.
[05:43] hi jam
[05:43] hi poolie
[05:43] spiv: it would be nice to be 'cheaper' in case we don't know about other cases
[05:43] Hmm
[05:43] I was also thinking about the 'ordering' issue
[05:43] That's true.
[05:44] And iter_entries() should preserve the existing order
[05:44] given that is *why* we are bothering to reorder them
[05:44] I don't expect http://paste.ubuntu.com/414089/ will perform worse than the current implementation in any case.
[05:44] so if the first indexes are all 'active' then we won't be thrashing the first few for minor improvements
[05:44] which my original proposal of "sort by count() of hits"
[05:44] could have done
[05:45] (though again, in my particular use case, it wouldn't matter)
[05:45] Right.
[05:45] And of course, over the network the relative cost of a little CPU thrashing vs. extra IO looks different to what you're looking at.
[05:45] sure
[05:46] poolie: econtext I think
[05:46] So if we can keep both down reasonably easily, we're both happy ;)
[05:46] from before our conversation, about explaining why i'm retargeting
[05:46] s//resummarizing
[05:47] spiv: right. Also, don't all the sibling entries have identical index ordering?
[05:47] jam: yeah :/
[05:48] well, good and bad
[05:48] it means that if we built a map of where to move them
[05:48] we could just apply it multiple times
[05:48] however
[05:48] the good is that if we know this one is sorted well enough
[05:48] then we don't have to worry that we aren't sorting a sibling
[05:48] Right.
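[Editor's note] The two ideas in the exchange above — jam's "skip if the hit indexes are already at the front" shortcut, and avoiding the zip-then-unzip of self._index_names and self._indices — can be sketched roughly as follows. This is a toy reconstruction under stated assumptions, not the actual bzrlib code; the class name and the exact pastes it stands in for are hypothetical.

```python
class IndexCollection(object):
    """Toy stand-in for bzrlib's parallel name/index lists (not real bzrlib code)."""

    def __init__(self, names, indices):
        self._index_names = list(names)
        self._indices = list(indices)

    def _move_to_front(self, hit_indices):
        # jam's shortcut: if the accessed indices are already the front
        # of the list, there is nothing to reorder (true 5138 of 5153
        # times in his loggerhead profile).
        if self._indices[:len(hit_indices)] == list(hit_indices):
            return
        # Otherwise rebuild both parallel lists directly, keeping the
        # existing relative order, instead of zip/unzip round-trips.
        hit = set(hit_indices)
        hits = [(n, i) for n, i in zip(self._index_names, self._indices) if i in hit]
        misses = [(n, i) for n, i in zip(self._index_names, self._indices) if i not in hit]
        self._index_names = [n for n, i in hits + misses]
        self._indices = [i for n, i in hits + misses]
```

With this shape, a long run of queries that keep hitting the same first index pays only the cheap prefix comparison, which is the access pattern jam describes for iterating the mainline.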
[05:48] poolie: oh right
[05:51] poolie: and for laughts, a bugfix for the bug we discussed
[05:51] https://code.edge.launchpad.net/~lifeless/bzr/update/+merge/23372
[05:52] lifeless: merge: approve
[05:53] though there would be potential other cases if you had >1 pending merge and only one of them ended up filtered
[05:53] right
[05:53] but hey, no need to be perfect about it
[05:53] thats the history analysis or api changes to feed those decisions up the stack that I refer to
[05:57] poolie: https://code.edge.launchpad.net/~lifeless/bzr/update/+merge/23372 - using my patched hydrazone
[05:57] *zine*
[05:57] spiv: some perf results
[05:58] bzr.dev: 2.142s
[05:58] my paste w/o shortcut: 1.491s
[05:58] shortcut: 1.111s
[05:59] thanks
[05:59] shortcut w/ paste: 1.111s
[05:59] enough variability to be within the noise with or without the paste
[05:59] so... we may want to merge it, as it would have helped
[06:00] but the shortcut helps more
[06:00] spiv: oh, and the reason I put it in _move_... is because it is called from 2 places
[06:00] _iter_entries and iter_entries_by_prefix
[06:01] lifeless: what's with the "Status Approved => Queued; Queue Position => 68" message?
[06:02] jam/lifeless: any thoughts on the possibly ip problems in https://answers.edge.launchpad.net/bzr/+question/98657
[06:02] poolie: other than saying "I've seen similar throughput issues from launchpad" I don't really know
[06:02] jam: cool, that's about what I'd expect.
[06:03] jam: I think I'm happy for you to merge the paste too, because as you say there might be other access patterns that the shortcut doesn't entirely fix.
[06:04] jam: I'm glad that the results are matching up with our analysis of the problem :)
[06:11] mwhudson: hey, I beg to differ: '# Looms suck.'
[06:11] jam: moving towards using the queueing abstraction in lp
[06:11] lifeless: don't beg, just invoke /usr/bin/diff :P
[06:12] lifeless: i guess someone should test if that bit is still necessary
[06:12] jam: I realise the extra mails may be a little annoying, I'm going to beg^Wask for a little patience
[06:12] i have this recollection of being extremely angry getting that code to work
[06:12] mwhudson: and *file a bug* if it is
[06:12] (i think weaves broke it too, or something ridiculous like that)
[06:23] oh
[06:24] is 2.2 open ? can't seem to choose it when proposing for merge
[06:25] lifeless: trunk
[06:25] 2.2 is still just a merge from trunk away
[06:25] it is just there to make the RM manager's life easier
[06:25] well, at least how I used to RM
[06:28] speaking of which.. poolie did we ever get pqm happy with your 2.2 tarball?
[06:28] no, still on my plate for today
[06:28] k
[06:28] night everyone
[06:28] so i should close my mail :)
[06:28] night
[06:44] jam: I was noting that you merged to 2.2 in pqm
[07:02] back
[07:43] poolie: sorry for the spam messages from me; still learning hotkeys etc
[07:47] Howdy, anyone with a sec to help with a question regarding shared repository formats?
[07:48] sure, just ask
[07:48] thx1
[07:48] question is: can I find out what the repo format of lp:ipython is?
[07:48] bzr info gives me:
[07:49] Standalone branch (format: unnamed)
[07:49] Location:
[07:49] branch root: bzr+ssh://bazaar.launchpad.net/~ipython-dev/ipython/trunk/
[07:49] bzr: ERROR: Parent not accessible given base "bzr+ssh://bazaar.launchpad.net/~ipython-dev/ipython/trunk/" and relative path "../../../../%7Evcs-imports/ipython/main/"
[07:49] I need to create a shared repo that I can then push from, but all my attempts end up with
[07:49] try bzr info -v nosmart:lp:ipython
[07:49] sorry
[07:49] Using saved push location: lp:ipython
[07:49] bzr: ERROR: RemoteRepository(bzr+ssh://bazaar.launchpad.net/~ipython-dev/ipython/trunk/.bzr/)
[07:49] is not compatible with
[07:49] KnitPackRepository('file:///home/fperez/ipython/repo/.bzr/repository/')
[07:49] different rich-root support
[07:49] try bzr info -v nosmart+lp:ipython
[07:49] that may work better
[07:49] aha!
[07:49] many thanks!
[07:50] Standalone branch (format: pack-0.92)
[07:50] L
[07:50] That's what I needed :)
[07:52] fperez: I suspect though
[07:52] fperez: that the vcs import is in 2a or similar, and that you'll want to upgrade trunk, and your local branches - everything - to 2a.
[07:53] #launchpad can talk about branch mgmt stuff in more detail. Well we know here too, but that is the right channel :P :)
[07:53] Mmh, let me try the push I was about to make and see how it goes. Back in a sec.
[07:54] moin
[07:55] lifeless: actually it just worked:
[07:56] amirbar[lp]> bzr push
[07:56] Using saved push location: lp:ipython
[07:56] Pushed up to revision 1232.
[07:56] thanks a lot, you saved me a ton of grief
[08:00] fperez: ok cool
[08:00] fperez: glad I could help
[08:27] hello. i am trying to write a test case for bug #413406
[08:27] Launchpad bug 413406 in bzr "bzr export fail with directory contain non-ascii char." [Medium,Confirmed] https://launchpad.net/bugs/413406
[08:27] http://pastebin.com/sb8hMUU1 the test doesn't seem to throw the exception while doing the same from command line throws the exception
[08:28] hmm something seems to have broken bzr-search :<
[08:28] do i need to do something differently in the test?
[08:28] parthm: intersting
[08:30] lifeless: the fix is quite simple actually http://pastebin.com/SS5TKqFw i am assuming bzr uses utf8 internally. is that correct? the only thing i don't have is an automated test :(
[08:32] parthm: we use unicode internally
[08:32] parthm: utf8 in some code paths where we have audited it carefully
[08:33] parthm: if name is the filename on disk we're meant to create, it sounds like the user in that bug has an invalid fs encoding
[08:34] hmm, bug shows that assumption is wrong
[08:35] parthm: I've commented on the bug
[08:36] parthm: you need to use the fs encoding, not utf8, because its a pathname on disk we're setting
[08:37] lifeless: makes sense
[08:37] lifeless: i suppose they fixed it in py3.0 :)
[08:37] parthm: possibly, but likely not.
[08:39] parthm: as fo the unit teset
[08:39] test
[08:39] try
[08:39] self.run_bzr(['export', '--format', 'tgz', u'test.tar.gz'])
[08:40] lifeless: yes. that makes the test case work. thanks.
[08:41] I'd add a comment there that we're tickling a posixpath.py bug and need a unicode argument to do so.
[08:41] for clarity.
[08:41] lifeless: for fs encoding does bzr provide an api or is os.getfilesystemencoding() the suggested way? i see osutils doing "_fs_enc = sys.getfilesystemencoding() or 'utf-8'"
[08:41] lifeless: good point. will do that.
[08:42] look in osutils
[08:42] we do have a wrapper/cache I think
[08:43] lifeless: osutils.get_user_encoding?
[08:44] no, thats the terminal
[08:44] _fs_enc
[08:45] is what is used throughout the module
[08:46] lifeless: will use that. maybe that should be _fs_enc? or exposed via a api?
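[Editor's note] The fix lifeless describes above — encoding the pathname that is about to be created on disk with the filesystem encoding rather than a hard-coded UTF-8 — might look something like this sketch. The helper name is made up for illustration; only the `_fs_enc` idiom is taken from the osutils line quoted in the log.

```python
import sys

# Mirrors the osutils idiom quoted above; computed once at import time.
_fs_enc = sys.getfilesystemencoding() or 'utf-8'

def encode_disk_name(name):
    """Encode a unicode pathname we are about to create on disk.

    Hypothetical helper, not the actual bzrlib patch: the point is to
    use the *filesystem* encoding, not utf-8, because the name ends up
    as a path on disk.
    """
    if isinstance(name, bytes):
        return name  # already encoded; pass through unchanged
    return name.encode(_fs_enc)
```

On a UTF-8 filesystem the two choices coincide, which is why the bug only bites users whose filesystem encoding differs from UTF-8.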
[08:49] spell it osutils._fs_enc
[08:49] for test friendliness
[08:49] it is exposed through an API :) - osutils._fs_enc
[08:51] lifeless: :) thanks for your help. i will raise a merge proposal for this fix.
[08:51] cool
[09:02] looks like bzr 2.1.1 was not announced. is missing on the feed on homepage. https://launchpad.net/bzr/+announcements
[09:06] and from the channel topic :-)
[09:29] * igc dinner
[09:49] poolie: is there a way to mute things that are 'to: me' in gmail ?
[09:49] poolie: (e.g. all lp bug fail)
[09:49] are there any PPAs which have packages of the 2.0.x release branch in them?
[09:51] OOPS-1565CEMAIL38
[09:51] https://lp-oops.canonical.com/oops.py/?oopsid=1565CEMAIL38
[09:52] mthaddon: (losa) ^ - looks like we're still failing on merge emails
[09:56] It seems that all of the PPAs listed in http://wiki.bazaar.canonical.com/DistroDownloads don't contain the 2.0.x branch.
[09:59] * vila back online and from dentist
[09:59] hi all
[10:03] how do I convert a lightweight checkout to a heavy checkout?
[10:04] heya vila!
[10:04] edgimar: reconfigure
[10:04] bialix, ok thanks.
[10:05] bialix: _o/
[10:07] bialix, tried 'bzr reconfigure --checkout', and got "bzr: ERROR: exceptions.AssertionError: is not a RemoteBzrDir"
[10:08] (using 2.1.1)
[10:09] this is a bug
[10:10] edgimar: please, file a bug
[10:10] how can it be that this kind of bug has not already occurred (or has it?)-- no one else has ever tried converting a lightweight to a heavy checkout??
[10:11] edgimar: I suspect the problem here is that your master branch is not local one
[10:11] Yes, that is correct.
[10:11] I believe it works correctly if master branch is local one
[10:11] and I think there is even test for the local case
[10:12] just somewhere between here and there something become broken, and there is no corresponding test, so core devs won't noticed it
[10:13] as a workaround you'll need create heavy checkout from scratch :-/
[10:13] ok fine -- but I guess that for every test which is performed on local branches, it should also be tested on remote branches, right?
[10:13] anyway you'll need to download entire branch history from remote location
[10:13] vila: ^
[10:14] edgimar: in theory -- yes. in practice???
[10:14] bialix: you're correct, filing a bug will help find the hole in the test coverage
[10:15] I'm sure that doing the testing in both scenarios for all tests can be automated/encapsulated.
[10:15] anyway, I'll file the bug...
[10:16] edgimar: thanks
=== vila changed the topic of #bzr to: Bazaar version control | try https://answers.launchpad.net/bzr for more help | http://irclogs.ubuntu.com/ | Patch pilot: spiv | bzr 2.1.0 is out
=== vila changed the topic of #bzr to: Bazaar version control | try https://answers.launchpad.net/bzr for more help | http://irclogs.ubuntu.com/ | Patch pilot: spiv | bzr 2.1.1 is out
[10:18] indeed 2.1. is out
[10:22] But filed at: https://bugs.launchpad.net/bzr/+bug/562896
[10:22] Launchpad bug 562896 in bzr "crash when converting lightweight to heavy checkout with remote master branch" [Undecided,New]
[12:20] vila: http://pqm.bazaar-vcs.org/
[12:20] vila: check the pqm message out :P
[12:21] lifeless: even better ! the leading 'success' is new right ? (the running [ x%] was there for a couple of days already)
[12:21] vila: the success is not new
[12:21] lifeless: what is then ?
[12:21] the test ids ?
[12:22] vila: the (lifeless) and (Parth Malwankar)
[12:22] haaa, the *message* ! I had a crash last day and forgot to file the bug after that
[12:23] lifeless: where did you fix that ?
[12:23] hydrazine
[12:24] bzr+ssh://bazaar.launchpad.net/~lifeless/hydrazine/cron/
[12:24] thumper: why you get up, why do new queued proposals get queu position *68* ? it seems oddly high,,,
[12:25] And highly odd.
[12:25] But also even.
[12:26] 2 * 2 * 17, perfectly regular
[12:27] spiv: even if its highly odd, its still oddly high
[12:27] lifeless: :)
[12:30] lifeless: where is this queue coming from ?
[12:30] lp
[12:30] vila: if you use my branch and set a message, and submit them, I'll do the signing.
[12:31] meh
[12:31] you mean your cron job will do the actual pqm submission ?
[12:32] is there an url on lp where I can see this queue ?
[12:38] lifeless: ^ ?
[12:38] vila: oh, no.
[12:38] not wanting to wake up fullermd by fixing typos but did you mean "it'll do the signing" ?
[12:42] my gpg key
[12:42] so 'me'
[12:47] lifeless: *I* do a submission and *you* sign it ?
[12:47] yes
[12:47] decoupled 'queue' and 'tell pqm'
[12:48] anyone with review approval permissions can land stuff now, using my hydrazine cron branch
[12:50] ha ok, I'm not the primary target. So, that will avoid ready-to-land patches languishing but I'm a bit uncomfortable with automated landing... (I already make mistakes manually :)
[12:50] vila: its no more automated than what you're using
[12:50] someone has to say 'land this now', by hand.
[12:52] lifeless: ok, I was confused by 'cron', you still need to run feed-pqm
[12:52] right
[12:52] you run feed-pqm
[12:52] and it sets the metadata
[12:52] then my feed-pqm picks it up and sends the email
[12:54] ok, ok, there is a race condition of course but I doubt we run into it soon, may be the patch pilot should be the one to run with --cron
[12:55] vila: race condition ?
[12:55] if two people try to send_mp()
[12:55] no race
[12:56] or rather, no worse than bzr-pqm's pqm-submit command has
=== mrevell is now known as mrevell-lunch
[12:59] lifeless: hmm, in queue_mp(), you have setStatus(status='Approved') followed by setStatus('Queued')
[13:01] lifeless: then in send_mp, you check that the mp has already been sent and return early, if two users come here they will both miss the other submission and will both submit no ?
[13:02] vila: only one person will be running the email mode
[13:03] vila: so its up to lp to prevent in-flight collisions on the metadata
[13:03] lifeless: says who ?
[13:03] vila: says me
[13:03] by email mode you mean --cron ?
[13:03] yes
[13:03] right, then indeed no race :)
=== masklinn_ is now known as masklinn
[13:16] vila: try it - run that branch 'feed-pqm bzr'
[13:16] vila: and submit it
[13:18] lifeless: wanna approve https://code.edge.launchpad.net/~vila/bzr/cleanup-log-direction/+merge/23382 ?
[13:18] lifeless: trivial
[13:19] vila: not trivial at all, but I have already approved it
=== masklinn_ is now known as masklinn
[13:19] lifeless: hehe, feed-pqm sees it but lp not yet :)
[13:20] lifeless: done
[13:22] lifeless: I see the mp status is not Queued on lp, but this can't be set from the web right ?
[13:22] right
[13:23] lifeless: using your branch means I can't submit directly myself anymore
[13:23] right
[13:23] thats the point
[13:23] it is submitted now
[13:24] in the sense of 'you have done what you need to'
[13:24] you should not have put (vila) in the commit message though
[13:24] lifeless: but I now depend on you before pqm see my submission
[13:24] yes
[13:24] lifeless: meh, I see no code to put it there in your branch
[13:25] vila: trust me
[13:25] lifeless: thanks for the catch on the 2.2 thing. I recently reworked my layout to handle having 3 stable branches, and accidentally did the work in the 2.2 area. So the default submit was the 2.2 branch.
[13:25] vila: change the message, take (vila) out
[13:25] morning vila, /wave lifeless
[13:25] just heading to breakfast
[13:25] jam: moinmoin
[13:25] morning jam enjoy it
[13:26] lifeless: I can't: Looking for ['Approved'] mps in https://api.edge.launchpad.net/beta/bzr
[13:26] Nothing matched status ['Approved']?
[13:26] vila: you can in the web ui, or by passing --queued to feed-pqm
[13:27] lifeless: done
[13:30] vila: http://pqm.bazaar-vcs.org/
[13:30] vila: and if you want to see where teh code is - look for the format string (%s) %s (%s)
[13:30] vila: now, this isn't running fully cronned yet, I need to make a subkey for it or something.
[13:30] vila: but
[13:31] vila: It would be good to get other people with code to land to use this
[13:32] right, I found the code.
[13:34] gnight
[13:34] lifeless: but I still don't get why you want to introduce a *single* sender (I understand why for people without pqm access), that's just adding another point of failure on top of pqm
[13:34] vila: refactoring to remove layers
[13:35] vila: checkout lp:pqm
=== mrevell-lunch is now known as mrevell
[14:47] jam: Am I the only one *feeling* pqm slower these days ?
[14:47] vila: not sure, but the new cron script is certainly making my review folder bloat
[14:47] feed-pqm was bad enough
[14:47] cron adds at least 1 more message... :(
[14:47] killfiles ftw !
[14:48] we will now get 7 messages for your cleanup-log-direction, one is your proposal, 2 is your approval, the rest....
[14:49] vila: how do you separate out the automated messages about "queued" from the "review: approve" messages?
[14:49] anyway, I'll survive
[14:49] but certainly I am surprised when I get up and my review folder has 30 new messages
[14:49] but oh, only 5 of them mean anything
[14:51] jam: I don't kill them but since they are in the same thread, it's easy enough to handle them
[14:52] still a 2.5:1 noise to signal ratio
[14:58] jam: sure, but... wfm :->
[15:02] * bialix feels unhappy too
[15:03] vila: pqm feels much slower for me too, lately
[15:04] * vila suspects subunit buffering..... /me whistles
[15:04] vila: random thought: maybe all the verbose subunit details, including log text from passing tests, is part of it?
[15:05] It emits ~44M of stuff now, which stopped PQM sending me a failure message today :(
[15:05] spiv: my gut feeling is that the bandwidth is not relevant here but I may be wrong
[15:05] wow, 44M !
[15:06] spiv: worth a critical bug
[15:06] I suppose it's not too hard to run the experiment...
[15:08] Submit a branch that changes the Makefile to call date before and after the selftest, and then fail. (i.e. date; selftest; date; false)
[15:08] And then do the same, but remove the --subunit option
[15:08] And then compare the times.
[15:09] Oh, except I'm not sure if sending of failure messages from --subunit is fixed :/
[15:09] and hope the load is similar both times
[15:09] Still, I guess you could do the second half of my experiment, and just watch existing PQM jobs to find the time for the first half.
[15:10] Anyway, bedtime for me.
[15:13] spiv: you wish ! I'm sure Vincent wants to play with his daddy *now* :)
[15:14] vila: well, Vincent slept through the night last night
[15:14] I'm guessing he's been asleep for a while :)
[15:14] spiv: have a good evening
[15:14] jam: how do you know that ? 8-)
[15:15] vila: spiv's wife has a baby blog
[15:16] http://incrementum.dreamwidth.org/
[15:16] Ha yes
[15:16] since I have a recent newborn, it is fun to read about someone else's experience
[15:17] hehe, yeah, I know the feeeling: oooh, now *they* are in trouble :-D
[15:18] jam: anyway, from the discussion above, I guess you don't want me to do a merge proposal for yet another update-copyright cleanup ?
[15:18] jam: I'm sure you Approve the idea of pqm-submitting it directly :-D
[15:19] jam: more seriously, is there an easy way to use your plugin to check the copyrights for *all* files ?
[15:20] vila: I think my plugin has an explicit "don't do everything" check which can be disabled.
[15:20] let me check real quick
[15:20] one option
[15:20] just disrupt the copyright line of every file :)
[15:21] vila: run "bzr update-copyright"
[15:21] should be enough
[15:21] jam: you really want me to resurrect my old perl script ? :-D
[15:21] should do a recursive check-everything-and-update
[15:21] jam: awesome
[15:22] on my bzr.dev it seems to be at about 200 changed, vs 600
[15:22] still going
[15:22] 355 updated
[15:22] $ bzr update-copyright
[15:22] Checked 1183 files
[15:22] Updated 356
[15:22] Already correct 383
[15:22] No copyright 444
[15:22] 381 already correct, 444 no copyright
[15:22] huh ?
[15:22] .png .svg, some .txt don't have copyright statements
[15:22] we have so many of them ?
[15:23] also, I only update specific copyright statements, if they aren't the first line, they don't get touched, etc.
[15:23] we have 1200 files
[15:23] My guess, though, is the "not first line" thing
[15:26] jam: here is the full list: http://paste.ubuntu.com/414352/
[15:26] vila: a lot of those look correct
[15:26] lots of doc file
[15:26] files
[15:26] yup
[15:26] and 'tools' files
[15:26] we have a source check to ensure .py and .pyx inside bzrlib/*
[15:27] but not elsewhere
[15:27] I'm surprised on the .c .h and _patiencediff_py.py files, though
[15:27] ah, not first line
[15:28] though the #! line is wrong anyway...
[15:28] given there is no __name__ == '__main__' line
[15:30] jam: also, http://www.gnu.org/software/hello/manual/texinfo/copying.html says ranges are not allowed
[15:31] I don;t remember the details, but I'm pretty sure there is a legal reason behind that (but IANAL)
[15:31] vila: meh
[15:31] we had that, then we didn't, then we did
[15:31] I've read a discussion about it years ago
[15:31] ranges look a lot cleaner
[15:31] 2005,2006,2007,2008,2009,2010 is getting really long for some of the files
[15:31] they sure look cleaner, but lawyers...
[15:32] yeah, they say The copyright `line' may actually be split across multiple lines
[15:32] vila: well, Martin manually editted some to use ranges, I took that as an opportunity to update the plugin
[15:32] I *really* don't want to rehash all 1.2k files now :)
[15:32] :-)
[15:33] jam: pfff, no need to rehash, what I really like with your plugin is that it updates things without trauma for anyone
[15:33] vila: what about lawyers?
[15:34] they can't really argue that 2002-2009 is any less meaningful than 2002,2003,2004,2005,2006,2007,2008,2009
[15:34] SamB_XP: they sometimes do things in a way that makes no sense to me, yet, they have good reasons for that (or not that good, but you get the idea)
[15:34] it's not like it tells you which parts of the code are copyrighted when either way!
[15:34] if you wanna know that you've got to use "bzr annotate" or something
[15:35] who knows, some lawyers recently spend some 7 years arguing about who owns the Unix copyright...
[15:35] vila: that's different
[15:35] why ? Because they didn't have a VCS ?
[15:35] there was actually some so-called "intellectual property" that had actually changed hands somewhere in the vicinity of that case
[15:36] ;-P
[15:36] I do admit it is kinda pathetic it took 'em 7 years to settle it
[15:36] SamB_XP: I thought they finally agree that "it" didn't change hands :)
[15:36] vila: I meant, the trademark on UNIX has been given to that standards body
[15:37] SamB_XP: yeah, kidding, I stop reading groklaw on a regular basis long ago ;)
[15:37] vila: me too
[15:37] SamB_XP: But I'm glad it still exists !
[15:38] I only looked at it recently because my mom seemed to think something BAD for Linux had been decided in some case [15:38] she didn't have any details, so I looked on groklaw and found out that Novell finally won that very day [15:38] but probably after she'd heard something about the case mentioned on the radio [15:39] yup, typical clash between the broadcast and peer-to-peer models of distributing information :) [15:39] hmm, what was the reason they had to delay the ruling? one of the jury-members had an urgent vacation or something ? [15:40] vila: well, maybe they were actually talking about what the outcome COULD mean [15:40] depending on what it was [15:40] I think my mom said she hadn't been listening very attentively [15:41] was Novell in no particular hurry to win the case or something ? [15:42] sure, it's easier to re-read a web page than to catch the next broadcast of that news you were listening to with one ear [15:43] vila: well, you CAN go online and look for the archived show usually [15:43] or news segment or whatever they call those things [15:43] it's not like, you know, there are for-profit stations with news on the radio anymore ;-P [15:44] :) [15:44] at any rate my mom doesn't listen ... [15:45] ... 
but in any case, a lawyer who prefers the listing of all years to a listing of ranges is, IMNSHO, just being silly and paranoid [15:46] http://www.gnu.org/prep/maintain/html_node/Copyright-Notices.html#Copyright-Notices [15:46] I'm not even sure why it's helpful to have anything but the most recent years (and lots of places they don't) [15:46] but they don't explain *why* ranges should not be used [15:47] citing GNU is not a good way to convince anyone that the idea isn't silly and/or paranoid [15:47] the url above explains that they indicate when older versions might theoretically go into the public domain [15:47] yeah, okay, true [15:48] I don't care that much, it's just that I don't want us to do the bad thing just because we didn't know [15:48] but they're not useful for determining when any part of the version of the source you're seeing now will be in the public domain [15:48] anyway, will any of it EVER be? [15:48] enough law for me, back to coding ;) [15:48] disney always pushes back the copyright expiration just in time, don't they ? [15:48] don't start me on that :) [15:49] heheehe [15:49] vila: the other reason I didn't run 'bzr update-copyright' on everything was to avoid artificially adding 2010 to all the files. 
Since I don't think updating a copyright line would be considered a valid code update :) [15:50] jam: point taken, end of the experiment :) [15:54] jam: as said above, I find the workflow implied by your plugin really nice, the only caveat is that merging bzr.dev in a loom tends to trigger updates and I was looking at a rough estimate of files that still needed to be updated [15:54] vila: I wouldn't block it if you want to just run it [15:55] Probably some of the changes are "spurious" in that it is simple formatting [15:55] alternatively do this [15:55] take an old bzr.dev [15:55] merge in a new bzr.dev [15:55] have those files get updated, and submit it [15:55] then it will only touch actually modified files [15:55] you could merge bzr.dev into, say the 2.0 branch [15:57] hmm, that makes me realize that I need to install the plugin on my other dev machine :0 [16:14] http://hginit.com/ looks fantastic [16:19] vila: graph.is_ancestor() is probably a really painful way to do it. It is a heads() graph search for every pair... [16:23] jam: I know, but what is the cheap alternative ? 
[16:23] vila: graph.find_unique_ancestors() or there is another, let me check [16:24] Graph.find_difference() [16:24] I think you want to do: [16:24] interesting_set = Graph.find_unique_ancestors(wanted_tip, [unwanted_tail]) [16:24] x = merge_sort() [16:24] for rev in x: [16:24] if rev not in interesting_set: [16:24] continue [16:25] jam: haa, I searched for something like that but the name didn't ring a bell [16:29] sabdfl: it has mentioned here (or was it the ML) when it came out, various reactions, summary: except for the graphical (nice) look, our various docs cover the subject but yet another friendly introduction can be tried, people found the humour a bit too distracting === IslandUsurper is now known as IslandUsurperAFK [16:29] s/has/was/ [16:30] sabdfl: thanks for the heads-up anyway [16:30] sure [16:30] my brother says: [16:30] (16:14:38) Brad Shuttleworth: great marketing for his hg-apps [16:30] (16:15:05) Brad Shuttleworth: the bzr stuff is *really* dry, and is all "you can do almost anything you want" rather than what *i* want to know [16:30] (16:15:16) Brad Shuttleworth: "how should i do it, how does it make my life easy". [16:30] i think he's right [16:31] for 3.0 we need to tighten up the UI, the way we prioritise how people learn about the tool [16:31] our stuff is written from the perspective of people who already love it [16:31] not from the perspective of people coming from svn or cvs [16:32] yeah, I mentioned having a 'what's next' command too for beginners (or even newcomers) that could propose commands based on the state of the current tree [16:33] or interactive tutorials... === verterok is now known as verterok|lunch === deryck is now known as deryck[lunch] === verterok|lunch is now known as verterok [17:34] jam: thanks for the heads-up, I did a real-life test (wrong) and didn't notice a big impact on performance. 
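The snippet jam sketches above uses bzrlib's Graph API. As a self-contained illustration of the same idea (a toy parent-map dict and a hypothetical stand-in function, not real bzrlib code), the graph-difference filtering looks roughly like:

```python
def find_unique_ancestors(parent_map, wanted_tip, unwanted_tails):
    """Toy stand-in for bzrlib's Graph.find_unique_ancestors: ancestors of
    wanted_tip that are not ancestors of any unwanted tail."""
    def ancestry(tips):
        seen, todo = set(), list(tips)
        while todo:
            rev = todo.pop()
            if rev in seen or rev not in parent_map:
                continue
            seen.add(rev)
            todo.extend(parent_map[rev])  # walk child -> parents
        return seen
    return ancestry([wanted_tip]) - ancestry(unwanted_tails)

# rev -> parents; 'a'-'b'-'c' is the mainline, 'x'-'y' a side branch
parent_map = {'a': (), 'b': ('a',), 'c': ('b',), 'x': ('a',), 'y': ('x',)}
interesting = find_unique_ancestors(parent_map, 'c', ['y'])

# Then filter an already merge-sorted revision list against the set,
# as in the loop above: skip anything outside the interesting set.
shown = [rev for rev in ['c', 'b', 'a'] if rev in interesting]
```

The point of the pattern is that the (expensive) graph difference is computed once, and the per-revision check becomes a cheap set lookup, instead of one `heads()` search per `is_ancestor()` pair.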
Redoing it more carefully reveals a *huge* penalty if using only is_ancestor [17:35] vila: yeah, if you had say 100 revs to be displayed, it would be bad [17:35] also, Graph.heads() is O(N^2) IIRC [17:35] so is find_unique_ancestors() but at least you only do it once [17:36] yeah, I wanted to do something like that if needed. It's needed :) [17:36] it's still a bit dirty to calculate the graph difference and iterate anyway, but well.. enough on that [17:38] jam: pushed [17:38] vila: well, iter_merge_sorted can be given a range [17:38] or you are already inside there [17:38] anyway, you still need to sort them [17:39] I am and I need a different stop_rule (hackish) [17:39] vila: on the plus side, bzr-history-db, can [17:40] a) Compute the graph difference cheaper [17:40] :-P [17:40] b) compute only a partial merge_sort [17:40] c) already hooks in there, and just needs to support the new rule [17:41] jam: gimme that in core :) [17:42] vila: respond to my comments off-list :) [17:42] I should... I really should... === IslandUsurperAFK is now known as IslandUsurper === deryck[lunch] is now known as deryck === beuno is now known as beuno-lunch [18:19] any loggerhead hackers around [18:29] jam, beuno-lunch is probably at lunch, and mwhudson is a couple of hours away [18:29] I know this is probably in a faq somewhere, but why does PQM say that it's the author of the commit to do the merge? [18:30] jml: pqm says it is the committer, which is accurate, I don't think it sets --author [18:30] jam: thanks. [18:30] the primary reason is that we don't send information for pqm to use to set the real committer [18:30] or author [18:31] Hello! Using bzr+ssh, what is needed from ssh to make it so that the root directory when logging in to ssh is the bazaar depot home, so I can check out files with bzr+ssh://my-server.com/my_project instead of bzr+ssh://my-server.com/svr/bzr/my_project ? 
[18:37] allquixotic: look into the 'bzr_access' script [18:37] in contrib/bzr_access [18:37] http://bazaar.launchpad.net/~bzr-pqm/bzr/bzr.dev/annotate/head%3A/contrib/bzr_access [18:41] jam: that has to be used from the client side? [18:41] allquixotic: you set up bzr_access on the server === beuno-lunch is now known as beuno [19:13] jml, jam, hi, what's up? [19:14] beuno: trying to get my head around some of the loggerhead code [19:15] specifically "get_view()" does something with a revid and a start_revid [19:15] and I don't quite understand what it is trying to do there [19:15] jam, without even looking, isn't that to batch the log view? [19:16] well, it seems to play pretty fast-and-loose with which value it uses at a given time [19:16] and pretty much every time I check start_revid == revid [19:19] hmmm... [19:19] beuno: so I'm trying to improve perf [19:19] and I've found some interesting bits [19:19] with my bzr-history-db plugin, and some reworking of History [19:20] I can get to render bzr.dev in about 0.5s [19:20] I've been following the thread with a sense of shame for not participating [19:20] wow, that's amazing [19:22] the next bit I noticed [19:22] was that we always walk the full mainline [19:22] but we only end up displaying the last 20 revs [19:22] so I can shave maybe another 100ms by skipping that [19:23] however, I just saw that I lose the "Older>>" and "Newer>>" links [19:24] beuno: do you know what causes those to get shown? [19:24] ah 'navigation' is the object [19:25] do we actually pay attention to the page_position / page_count anymore? [19:28] I guess it isn't terrible. 
[19:28] On emacs it is 0.467s vs 0.339s to render that page [19:29] weird, after navigating, it becomes 1.318s [19:29] so if start_revid is set, it slows way down [19:29] and now, even without it I'm at 900ms [19:29] very weird [19:30] i'm guessing a cache is getting filled out and causing gc overhead, but I'm guessing [19:30] ah, I see now [19:30] changes => 300ms [19:30] changes/99865 => 866ms [19:30] changes/99865?start_rev=99865 => 1300ms [19:31] it is being slow on a dotted => revid lookup [19:31] which is a bit weird [19:32] hm [19:32] (sorry, in the middle of firefighting) [19:32] dotted revnos have always been the slow point I think [19:39] I found a bug in my code [19:41] beuno: now changes => 350ms, change/99885 => 350ms, changes/99885?start_revid=99885 => 350ms [19:41] \o/ [19:41] bzr-history-db was finding the map quickly, and then continuing to search the rest of history [19:43] ah [19:44] jam, the start_revid is also used to diff revisions IIRC [19:47] beuno: so how does loggerhead deal with caching a given branch, and handling the 'stateless' nature of HTTP? [19:48] let's exercise my memory [19:48] we use the LRU cache for some bits [19:51] and fall back to sqlite [19:51] beuno: but you basically instantiate a History object per request, right? [19:52] yes [19:52] ok [19:52] cause I got rid of both caches in History :) [19:52] in loggerhead/apps/branch.py [19:52] get_history [19:52] RevInfoMemoryCache and RevInfoDiskCache [19:52] ah, heh [19:52] that's pretty mind-blowing :) [19:53] all about data-structures [19:53] you have to create ones that match what you want to get out of them [19:54] beuno: loggerhead.zptsupport.expand_into [19:54] that is the 'take a template, turn it into html' code? 
[19:55] yes, although that's mwhudson's code, so he can answer with more conviction than me [19:55] it looks like it [20:00] ok, I was wrong, I had a different bit here [20:00] If I stop iterating history at 100 revs [20:00] things are fast [20:00] otherwise emacs trunk takes 6s [20:01] vs 300ms [20:01] which is why I wanted to stop early [20:01] but it seemed to mess up the page navigation code [20:01] it needs to know the ranges [20:01] so maybe you can get $current+1 [20:01] or $current+batch_size [20:03] well, you seem to need to know + and -1 page [20:04] which is a bit tricky to get Newer by 1 page without starting from the 'start_revid' [20:04] but I could start at start_revid, and stop once I've found revid + 1 page [20:04] It means looking at 10k old revisions would be slow [20:04] but I doubt people do that often [20:05] and still faster than today :) [20:06] beuno: the other bit is how to handle the 'merge_points' stuff [20:06] I think I understand the point of it [20:06] but either it is buggy, or I'm actually misunderstanding [20:06] specifically, if I click on a merged revision [20:06] I can sometimes see a link back to the rev that merged me [20:06] but generally not if that revision was trunk [20:07] for now, I've disabled merge_points, because it was the one bit of code that I can't cheaply answer via regular bzrlib apis which bzr-history-db just makes faster [20:07] I'd have to actually query the new db [20:08] I think dropping that is fine for now [20:08] it's interesting to navigate up [20:08] but not crucial [20:08] hey all ... [20:08] bzr: ERROR: exceptions.ImportError: cannot import name XMLSerializer [20:08] happened to me after I upgraded [20:09] is that the "this revision was merged to the mainline at revision" stuff? [20:09] james_w, yes [20:09] and hi :) [20:09] james_w: morning [20:09] mtaylor: could you run whatever you did again with -Derror and pastebin the traceback? [20:09] hi beuno, jam [20:09] james_w: yes! 
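The "stop once I've found revid + 1 page" idea above is a standard stop-early pagination trick: pull one item more than a page's worth, and the existence of that extra item tells you whether an "Older" link is needed without walking the rest of history. A minimal sketch (hypothetical names, not actual loggerhead code):

```python
from itertools import islice

def page_of_history(rev_iter, page_size=20):
    """Take one page from a (possibly huge) newest-first revision iterator.

    Pulling page_size + 1 items tells us whether there are older
    revisions, without iterating the full mainline.
    """
    revs = list(islice(rev_iter, page_size + 1))
    has_older = len(revs) > page_size
    return revs[:page_size], has_older
```

With a 100k-revision branch this touches 21 revisions instead of 100,000, which is the difference between the 6s and 300ms numbers quoted above.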
[20:09] I like that in loggerhead, and would like to be able to query that sort of information in bzr often [20:10] james_w: so right now all revs have a merge_points [20:10] however, it seems buggy in its implementation [20:10] namely, if you click on a directly merged rev, it doesn't seem to show [20:10] for example, expand the first revision here: http://bazaar.launchpad.net/~bzr-pqm/bzr/bzr.dev/changes/5129.1.3 [20:10] It *should* show that it was merged into bzr.dev 5158 [20:10] yeah, I'm fine with losing it for other improvements, but was just piping up to support re-implementing it on top of faster bzrlib APIs that would then make it available in the bzr client :-) [20:11] jam, but if you go to the revision: http://bazaar.launchpad.net/~bzr-pqm/bzr/bzr.dev/revision/5129.1.3 [20:11] This revision was merged to the branch mainline in revision 5158. [20:11] and you can click to it [20:11] which is the interesting bit I think [20:12] james_w: http://pastebin.com/gsd1RQbJ [20:13] however, it is present here: [20:13] mtaylor: how do you have bzr installed on that machine? 
http://bazaar.launchpad.net/~bzr-pqm/bzr/bzr.dev/changes/5080.2.1 [20:13] odd [20:13] james_w: installed from source - it's a centos5 box [20:13] jam, and yes, feels broken [20:14] beuno: ah sorry, the link present there is pointing to another merge [20:14] but if you follow *that* [20:14] http://bazaar.launchpad.net/~bzr-pqm/bzr/bzr.dev/changes/4797.33.4 [20:14] you can see both arrows [20:14] My guess is something weird with _simplify_merge_points [20:14] james_w: the main prob in bzrlib is that we only have child => parent mappings [20:14] to find parent => child, you have to walk all the revs [20:14] yeah [20:14] bzr-history-db has them [20:15] and you built them during the 'load_whole_history' step [20:15] which I'm trying to get rid of :) [20:15] you can answer the question reasonably cheaply using Graph, but not from the cli [20:15] mtaylor: hmm, something is odd [20:15] yeah [20:16] james_w: so with my current implementation, I can probably answer "what mainline rev merged X" fairly cheaply, but it doesn't scale into merges-of-merges [20:16] mtaylor: how does python -c "from bzrlib.xml_serializer import XMLSerializer" fare? [20:16] does that make sense? [20:16] yeah, I think so [20:16] would that be a reasonable tradeoff? [20:16] james_w: quite poorly [20:16] it means that when you find the revision that andrew merged into 2.1 and then into bzr.dev [20:16] ImportError: cannot import name XMLSerializer [20:16] yeah, I'm usually after "does this release contain this revision" type stuff [20:16] you would see when it landed in bzr.dev, but not a direct link to the 2.1 branch [20:16] unless you were looking in the 2.1 branch :) [20:17] mtaylor: python -c "from bzrlib import xml_serializer" ? [20:17] http://pastebin.com/iZNAaZ5c any quick thoughts/fixes on this error? [20:17] hi eday [20:17] james_w: hi! 
[20:17] james_w: that works [20:17] gosh, when it rains it drizzles [20:17] mtaylor: oh, you're here [20:18] james_w: should I just delete the install on the box and go from scratch? [20:18] mtaylor: any chance you have different versions of python on your path and it is getting confused about ti? [20:18] it [20:18] mtaylor: I think it's something to do with elementtree [20:18] jam: not that I can find ... [20:18] bad marshal data [20:18] sure looks like something wrong with .pyc files [20:18] * mtaylor deleted all the .pyc files [20:19] oh - hrm [20:19] mtaylor: can you import elementtree and cElementTree ? [20:20] * mtaylor is very confused - this box was working perfectly and then, all of a sudden, it had bzr 1.14 installed [20:20] james_w: yes [20:20] hrm [20:20] I don't see yet how you can import bzrlib.xml_serializer but not have XMLSerializer in it [20:21] mtaylor: could you pastebin your /usr/lib64/python2.4/site-packages/bzrlib/xml_serializer.py? [20:21] james_w: very odd - I moved bzrlib out of the way and installed again and now it works [20:22] hmm, that is odd [20:22] james_w: http://pastebin.com/hfWnFr7G [20:22] james_w: there's what was there [20:23] aha [20:23] that's out of date or something [20:23] for some reason that file wasn't in sync with the rest of your bzrlib [20:24] james_w: so I installed over top of the 1.14 that was there - perhaps setuptools decided to not install some of the bits of 2.1 [20:24] jam: the bad marshal data went away with 2.1.1 upgrade. strange thing is it had just stopped working, even after removing .pyc files [20:24] that seems both unlikely and likely at the same time [20:25] you are both up and running again now? [20:25] * mtaylor is [20:25] mtaylor: setuptools doesn't delete things that are already there [20:25] and possibly a perms issue for overwriting it? 
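The stale xml_serializer.py diagnosed above is the classic symptom of installing over an old tree: the module imports fine, but one file predates the rest of the package, so a name is missing. A small hypothetical helper for checking where a module actually loads from and whether the missing name is really absent (demonstrated here against a stdlib module, since bzrlib isn't assumed to be installed):

```python
import importlib

def module_report(name, attr=None):
    """Report where a module was loaded from and whether `attr` exists on it.

    Handy when 'cannot import name X' smells like a stale install:
    compare the reported file path against the rest of the package tree.
    """
    mod = importlib.import_module(name)
    return {
        'file': getattr(mod, '__file__', '<builtin>'),
        'has_attr': attr is None or hasattr(mod, attr),
    }
```

For instance, `module_report('bzrlib.xml_serializer', 'XMLSerializer')` would have shown both the offending file's location and the missing attribute in one step, without guessing about .pyc files.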
[20:27] * james_w goes back to using your code instead then [20:27] * mtaylor _so_ prefers packages === grahal_ is now known as grahal [20:31] beuno: \o/ changing it to number until revid + pagesize +1 worked. It only takes 400ms to show the tip of emacs trunk, but I still get the navigation links [20:31] jam, I'm almost in tears knowing that you're working on this :) [20:34] jam, so, I should reply to igc's thread [20:34] but my original idea was to make loggerhead a layer to serve anything bzr over web [20:35] ie, it would always return jsons [20:35] it would come with a set of templates that renders it and makes it standalone [20:36] but it can be very well used for a hosting facility of bzr branches that needs to extract information and display on a web ui :) [20:36] side note [20:36] I believe you always open the branch from scratch each time [20:37] is that correct? [20:37] (It seems to be done in apps/transport/__call__) [20:37] yes [20:37] Which means that any caching that might have been done at the Branch level would be lost [20:37] which means that my changes will destroy loggerhead w/o my plugin... sigh [20:38] some changes would have done ok, as long as the Branch caches stayed in effect [20:38] slow to start, but decent at N+1 requests [20:38] maybe add an lru cache of branch objects? [20:38] they aren't very thread-safe, though [20:38] don't we do that already? [20:39] beuno: transport.py BranchesFromTransportServer __call__ [20:39] always calls open_from_transport [20:39] BranchWSGIApp was always lock_read and unlock around doing stuff [20:39] which I was hoping to push up higher [20:40] but doesn't really matter if we are getting a completely separate Branch object each time. [20:40] ah. 
we stick graphs in the LRU cache [20:40] right [20:40] which is what I'm getting rid of [20:40] in favor of just using Branch apis [20:41] which bzr-history-db was overriding to make fast [20:41] I was hoping to get to the point where I could say [20:41] "you can use loggerhead, and install bzr-history-db as an accelerator if you need it" [20:41] but I'd have to bring back the old code to make that feasible [20:42] and then change the code back to querying primarily on loggerheads cached objects [20:42] well, if your code makes such a difference, I think I'd be inclined to include it by default [20:42] not sure why we wouldn't want that [20:42] are you using sqlite? [21:10] beuno: atm I'm using sqlite [21:11] the goal was to simplify loggerhead [21:11] right, so that would make sqlite a dependency [21:11] switch to standard bzrlib apis [21:11] and introduce this new bzrlib accelerator plugin [21:11] which long-term may not use sqlite [21:11] beuno: sure, but loggerhead stores its existing cache in sqlite if you use anything on disk [21:11] I guess it can run with just in-memory [21:12] yeah, it checks for sqlite, otherwise it defaults to not caching outside of the LRU cache [21:13] beuno: sure, though getting rid of the in-memory cache would be a big win for memory consumption... [21:14] absolutely [21:15] though not that huge maybe... [21:15] I'm seeing 120MB for just emacs/trunk [21:16] and it only goes to 140MB when I get a second branch [21:18] I guess that is vs 30MB, with my code, though [21:18] StaticTuple helping there... [21:18] Peng: /wave [21:24] * jam needs food, bbiab [21:39] abentley: is there a preferred way to check if your cwd is a pipeline or a plain branch? [21:40] I get the impression that's not really so much of a distinction with pipelines [21:40] james_w, plain branches are pipelines with a length of 1. [21:40] right [21:41] so what's the preferred way to determine if there are previous pipes for the current branch? 
[21:42] cli-accessible that is [21:42] james_w, create a PipeManager for the branch, and execute get_prev_pipe [21:43] james_w, from the commandline, run "bzr pipes". [22:04] jam: so sqlite is single client [22:04] jam: but loggerhead is multithreaded; [22:33] lifeless: sqlite is multi-reader 1 writer [22:33] but the writer is supposed to only block when the 'commit' step happens [22:33] not for the whole transaction [22:40] jam: that's not a complete description [22:40] http://www.sqlite.org/lockingv3.html [22:40] lifeless: sure, I've read the doc [22:41] We'd have to see the specific impact in practise [22:41] and see how to work around it. [22:42] For launchpad's instance, we could just move the backend db to postgres and not worry about concurrency
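The "multi-reader, one writer, blocks only at commit" behavior discussed above can be seen directly with two connections. A minimal sketch using Python's stdlib sqlite3 in its default rollback-journal mode (the full locking rules are in the sqlite document linked above; this only illustrates the happy path):

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file. While the writer holds an
# open (uncommitted) transaction, a reader still sees the last committed
# state; the writer only takes the exclusive lock at commit time.
path = os.path.join(tempfile.mkdtemp(), 'demo.db')

writer = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit BEGIN below
writer.execute('CREATE TABLE revs (id INTEGER)')
writer.execute('INSERT INTO revs VALUES (1)')

reader = sqlite3.connect(path)
writer.execute('BEGIN')
writer.execute('INSERT INTO revs VALUES (2)')  # pending, not committed yet

before = reader.execute('SELECT COUNT(*) FROM revs').fetchall()[0][0]
writer.execute('COMMIT')
after = reader.execute('SELECT COUNT(*) FROM revs').fetchall()[0][0]
```

`before` reflects only the committed row; `after` sees both. What this does not show is the contention case lifeless is worried about: a reader that holds its shared lock while the writer tries to commit will make the commit fail with "database is locked", which is why a multithreaded loggerhead needs care (or postgres).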