[00:00] <abentley> So yeah, compatibility would suggest checking whether the class supports that parameter.
[00:00] <abentley> like supports_reverse_cherrypick.
[00:01] <pickscrape> This is weird. I'm trying to bzr rm a symlink to a directory, but it is complaining about the files in the directory the symlink links to
[00:01] <pickscrape> And no, I'm not appending a slash to the end of the symlink name
[00:02] <lifeless> pickscrape: I believe we have a bug :(
[00:02] <lifeless> beuno: wb
[00:03] <pickscrape> lifeless: passing the --force flag does the right thing (the target directory is not affected)
[00:03] <pickscrape> Not the correct user experience though. I'll raise it in launchpad.
[00:06] <tolstoy> Loggerhead! I'm going to see if I can even get CLOSE to setting it up..... ;)
[00:07] <NfNitLoop> hrmm, does bzr version symlinks?
[00:07] <lifeless> jam: why did you change it from using calltree?
[00:07] <lifeless> NfNitLoop: yes
[00:07] <NfNitLoop> Huh. cool.
[00:07] <lifeless> pickscrape: thanks
[00:07] <jam> lifeless: you mean callgrind?, because I don't have KCacheGrind on win32, you can change it back if you like
[00:07] <NfNitLoop> I noticed a bug the other day re: operation on a case-insensitive filesystem.
[00:08] <NfNitLoop> is that known?
[00:08] <NfNitLoop> I tried adding file x, when it had already been added as X.
[00:08] <jam> save('filename.callgrind') should give the same result, btw
[00:08] <NfNitLoop> and it committed it a second time.
[00:08] <NfNitLoop> and then complained that it was missing. :p
[00:10] <lifeless> jam: parameterised it
[00:10] <lifeless> NfNitLoop: oh, foo. File a bug ?
[00:08] <lifeless> NfNitLoop: that's the sort of thing that makes usage for windows users (or FAT under linux) particularly painful.
[00:13] <NfNitLoop> yes.  I'm all too familiar with file case issues and still was a bit confused as to what was going on.
[00:14] <NfNitLoop> I'll try to create a sample and submit that tonight.
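The collision NfNitLoop describes comes from two path strings that differ only in case mapping to the same file on a case-insensitive filesystem. A minimal sketch of the detection idea (hypothetical illustration, not bzr's actual code):

```python
# Hypothetical sketch (not bzr's implementation): spot paths that would
# collide on a case-insensitive filesystem by case-folding each name.
def find_case_collisions(paths):
    seen = {}        # folded name -> first original spelling
    collisions = []
    for p in paths:
        folded = p.lower()
        if folded in seen and seen[folded] != p:
            collisions.append((seen[folded], p))
        else:
            seen.setdefault(folded, p)
    return collisions

print(find_case_collisions(["X", "README", "x"]))  # [('X', 'x')]
```

Adding `x` when `X` is already versioned is exactly the case the bug report covers: both names fold to the same entry on the filesystem, so the inventory ends up with two records for one file.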
[00:17] <jam> lifeless: latest version of PyBloom has a bloom.resize() function, so we can poke at it if we want.
[00:17] <jam> I'm done for the night, though. Off to dance class
[00:17] <jam> lp:~jameinel/+junk/pybloom
[00:18] <lifeless> jam: tchau
[00:18] <lifeless> jam: what dance style?
[00:18] <jam> ballroom, aka latin, waltz, swing, etc
[00:18] <lifeless> nice
[00:18] <lifeless> I keep meaning to get back into that
[00:20] <lifeless> so
[00:20] <lifeless> parse_lines and string split make iter_random_one cry
[00:21] <lifeless> a C parser would be substantially more efficient
[00:25] <lifeless> (I'd like to drive the cost of random access way down, because it is a good total-cost proxy)
[00:28] <tolstoy> Hm. I can't seem to install Paste 1.1.7
[00:29] <Odd_Bloke> Moin.
[00:29] <tolstoy> Er, 1.71.
[00:29] <tolstoy> Make that Paste 1.7.1.   ImportError: No module named setuptools
[00:30] <tolstoy> I wonder why all the other "python setup.py install"s I did worked?
[00:30] <name> do easy_install setuptools
[00:30] <pygi> somebody wanna raise their voice on the whole GNOME+DVCS stuff?
[00:30] <Odd_Bloke> tolstoy: Or, better, use your package management system if you have one.
[00:30] <james_w> hey pygi. I liked your post on the matter.
[00:30] <name> which gnome+dvcs stuff?
[00:31] <tolstoy> Odd_Bloke: This is on Solaris, all inside a user account.
[00:31] <tolstoy> I don't think I have "easy_install" on there.
[00:31] <pygi> james_w, thanks, I just wrote a big comment addressing concerns Jason raised
[00:31] <tolstoy> Yes, yes, it feels like I've gone back in time decades by not having root on this box.
[00:31] <pygi> name, the discussion of GNOME migration to DVCS
[00:32] <Odd_Bloke> tolstoy: http://pypi.python.org/pypi/setuptools/
[00:32] <tolstoy> Are setuptools not normally part of Python? And not required by other setup.py scripts?
[00:33] <tolstoy> (I'm so far away from Python culture these days, alas. Close as I can get is Groovy.)
[00:34] <Odd_Bloke> tolstoy: I believe that most setup.pys use just distutils.
[00:34] <Odd_Bloke> But I'm not sure.
[00:34] <tolstoy> Okay. Cool. That makes sense.
[00:35] <name> Odd_Bloke: yes they do
[00:35] <james_w> pygi: thanks, just read it. Jc2k has raised some similar points to you, and I think it's very important, and not something that's being discussed at the moment. I don't have any voice in GNOME to try and steer the conversation, but I'd be happy to comment on a migration plan for bzr.
[00:36] <james_w> pygi: are you a git fan, or is gitorious just very good for what you want? Would you prefer bzr, or is the abstraction of everything what you would like?
[00:36] <pygi> james_w, as I've written in the post, I'm not a fan of either git or bzr
[00:36] <pygi> I use both where I see fit
[00:36] <pygi> (heck, I've even mentored two bzr students in 2006)
[00:37] <pygi> james_w, currently, something like gitorious doesn't exist for bzr
[00:37] <pygi> and no, you can't count LP, because it is not free
[00:37] <james_w> pygi: ah yeah, I forgot that sorry.
[00:37] <name> pygi: why is LP not free?
[00:37] <pygi> james_w, the problem is people keep going in circles, rather than discussing important points
[00:38] <james_w> pygi: yeah, that's what lp is trying to do, but it's obviously not going to work for GNOME.
[00:38] <pygi> that's why I hope my post will at least put a bug in people's ears :)
[00:38] <pygi> name, because it's not Free by the FSF definition :)
[00:39] <pygi> james_w, if you wanna work with me, we could surely write a plan for bzr
[00:39] <pygi> I'm all for it
[00:39] <pygi> james_w, in the end, it all comes down to communication and collaboration, as I've mentioned
[00:39] <pygi> these folks forgot what that is lately
[00:40] <thumper> pygi: are you looking for somewhere to put bzr gnome branches?
[00:40] <james_w> yeah, these tools are supposed to help with collaboration, not cause fights.
[00:40] <james_w> pygi: there's a #gnome-bzr channel on gimpnet that may be more appropriate if we want to have longer discussions on this topic, there's a bunch of people there willing to work to make this happen.
[00:41] <james_w> hey thumper
[00:41] <thumper> james_w: hey
[00:41] <thumper> james_w: getting kinda late where you are?
[00:41] <lifeless> pygi: where is your post ?
[00:41] <pygi> lifeless, moment :)
[00:41] <pygi> lifeless, http://pygi.wordpress.com/2008/07/01/broken-by-birth/
[00:42] <pygi> thumper, it's a little more complicated than that
[00:42] <pygi> james_w, ok, will come there (You have pm too, btw)
[00:42] <james_w> thumper: aye, just finished a meeting.
[00:43] <thumper> james_w: work style meeting?
[00:43] <james_w> thumper: yup, platform team weekly.
[00:43] <lifeless> pygi: are you aware that savannah, and alioth support bzr ?
[00:44] <lifeless> pygi: (by which I mean, what does 'like gitorious' mean?)
[00:45] <pygi> lifeless, yes, I am aware that they support bzr
[00:45] <lifeless> ok, cool
[00:49] <pygi> lifeless, it's not only about supporting bzr per se
[00:49] <Pilky> pygi: if you're wanting the ability for people to use whatever DVCS they want, then wouldn't svn as the central repo work best?
[00:49] <pygi> it's about making it easier for everyone involved
[00:49] <Pilky> from what I understand the svn integration in bzr, git and hg is better than the integration between the 3
[00:50] <james_w> night all
[00:50] <pygi> Pilky, that's what people are currently doing mostly
[00:50] <lifeless> night james_w
[00:50] <pygi> but that means overhead
[00:51] <Pilky> true
[00:51] <pygi> Pilky, anyway, that was just a suggestion on how things *could* be handled
[00:51] <pygi> what is important is for people to finally realize they're not going anywhere with useless arguments, sometimes even on a personal level
[00:51] <Pilky> yeah
[00:52] <Pilky> the unfortunate reality about us programmers is that when we start liking a tool we'll fight to the death over it ;)
[00:52] <pygi> I know, but sometimes you have to think above tools
[00:53] <pygi> If I was a new folk in the FOSS community, and read planet GNOME, I'd run away from the project as fast as possible
[00:53] <pygi> because of the recent events
[00:53] <lifeless> pygi: I think you make good points
[00:53] <pygi> lifeless, any questions or criticisms perhaps? Those always help flesh out the idea :)
[00:54] <Pilky> pygi: there's a reason I don't do a huge amount of open source stuff (or at least large stuff), it's far easier to get your own way when you're the sole dev ;)
[00:54] <lifeless> pygi: well, one thing to note is that bzr itself has an abstraction system; it can natively talk about hg and svn and a number of bzr formats
[00:54] <lifeless> pygi: (in a way that none of git/hg/svn try to do)
[00:54] <pygi> lifeless, yep, am aware of that =)
[00:55] <pygi> thanks for pointing it out ;)
[00:55] <lifeless> for instance, pqm had a multi-dvcs abstraction
[00:55] <lifeless> and now we're ripping it out to replace with bzr plugins
[00:56] <pygi> Pilky, I prefer to stay out, and maintain a lot of stuff outside such projects (the only exception is xfburn (xfce project), but it was initially started there, so can't change it for the time being)
[00:56] <pygi> lifeless, I should really look further into the pqm branch then (I guess it's on LP as usual)
[00:57] <Pilky> pygi: the only "major" open source project I've worked/am working on is BazaarX, everything else is my shareware apps
[00:57] <pygi> Pilky, ah, no Mac here, sorry :)
[00:58] <lifeless> well, pqm is not the cleanest code base. Yes it's on lp. But the point there was just that /one/ approach would be to write admin toolchains in bzr, as a lingua franca
[00:58] <lifeless> much as is being done with svn today, but being D aware
[00:58] <Pilky> pygi: heh
[00:59] <pygi> lifeless, understood
[00:59]  * pygi is thankful for all the comments ;)
[01:00] <pygi> lets hope I can put them to good use :)
[01:01] <lifeless> I'd argue that gnome has a gitorious itself /already/
[01:01] <lifeless> just called 'gnome-svn-orius'
[01:02] <pygi> ehm, well, if you can call the current platform they use that way, yes
[01:02] <lifeless> well
[01:02] <lifeless> what is it about gitorious as a system that could save admin time?
[01:03] <lifeless> is it outsourcing - but gnome could have gotten a variety of sites to host the svn migration
[01:03] <pygi> lifeless, ehm, no, it's not about that
[01:03] <pygi> lifeless, gnome would have its own instance of course, and not *plain* Gitorious
[01:03] <lifeless> right
[01:03] <pygi> I've outlined some of the things that would save admins time in my latest comment
[01:03] <lifeless> so why would it be easier for the admins :)
[01:04] <pygi> but there is a lot of stuff we could change to make it even easier for them
[01:04] <pygi> that's why I want the discussion
[01:04] <pygi> which I managed to initiate at KDE (sadly I can't attend Akademy, but oh well)
[01:05] <lifeless> pygi: right, so my point is 'what would make the sysadmins life easier' is a good discussion and quite orthogonal to changing vcs -except- insofar as different vcs's have different admin overheads
[01:05] <pygi> lifeless, I definitely agree
[01:05] <pygi> lemme c/p excerpt:
[01:05] <pygi> "Maintainers of specific module could approve users for commit access to “mainline” or main repository, giving administrators less work. New modules/projects could be added by existing GNOME developers after their inclusion has been discussed. New translators and documentation writers could get their commit bits much easier"
[01:06] <pygi> as an example of course, I can't count all the use-cases
[01:06] <lifeless> I think that those things are actually highly automated at gnome already?
[01:07] <lifeless> I was thinking of things like bkor having to script some massive dump+load for all 520 svn modules to handle 1.5
[01:07] <pygi> some things are, some things aren't
[01:07] <pygi> lifeless, that's the part where I say what's needed prior to migration :)
[01:07] <lifeless> K
[01:08] <pygi> lifeless, we could have an interface that would allow upgrading all repos to, say, a new bzr format
[01:08] <lifeless> pygi: right; not to mention that bzr can do that itself remotely
[01:08] <pygi> indeed
[01:09] <pygi> I just gave that as an example
[01:09] <lifeless> :)
[01:10] <pygi> lifeless, I definitely agree that everything you mentioned should be discussed
[01:10] <pygi> lifeless, but before that, we need to make people collaborate the way they can once again
[01:11] <lifeless> we can't make them, we can only try to collaborate with them
[01:11] <pygi> well, yes, and by collaborating with them change the way they think about collaboration =)
[01:11] <lifeless> which is a little hard when you get told you're stupid or insane for making an informed design decision :)
[01:12] <pygi> lifeless, eh, even though I doubt anybody will listen to whatever I wrote in the post, they'd listen even less if I actually argued technical reasons behind whatever DVCS they choose
[01:12] <pygi> or if I accidentally mentioned that I prefer one over the others
[01:12] <lifeless> pygi: sure, not suggesting you should get technical; I was noting that unpleasantness doesn't help collaboration - and there has been plenty in some of the previous months/years about DVCS
[01:13] <pygi> lifeless, and not only DVCS ...
[01:14]  * pygi should probably do some bzr hacking this summer if he gets the time
[01:15] <lifeless> pygi: that would be cool
[01:16] <lifeless> pygi: will you be at GUADEC? I'd be happy to point at internals face-to-face if you are
[01:17] <pygi> lifeless, eh, sadly no :(
[01:17] <lifeless> ah well
[01:18] <pygi> but I'd be happy to learn more about internals over irc/mail/jabber/whatever :)
[01:33] <lifeless> we've done a lot of scalability work since 2007
[01:33] <lifeless> many more things that only look at small subsets of history
[01:34] <lifeless> still need to fix inventories to not be full-tree-objects
[01:34] <NfNitLoop> lifeless: Looks like someone beat me to it (by a year or so) reporting case insensitive fs issues.    https://bugs.launchpad.net/bzr/+bug/77744
[01:34] <NfNitLoop> I pasted my test as well.
[01:34] <lifeless> NfNitLoop: thanks
[01:37] <mwhudson> *cough* bzr log *cough*
[01:37] <NfNitLoop> mwhudson: eh?
[01:38] <mwhudson> just carping at lifeless's "many more things that only look at small subsets of history"
[01:38] <NfNitLoop> aaah.
[01:45] <lifeless> mwhudson: *many* not all
[01:45] <lifeless> mwhudson: if you compare what we _used_ to do
[01:46] <mwhudson> yeah, it's much better, i'm just being annoying
[01:46] <lifeless> go merge the loggerhead search integration branch or something
[01:56] <fullermd> Boy, that's sad.  missing took 26 seconds.  pull took 9.
[01:56] <NfNitLoop> heh.
[02:03] <fullermd> Hm.  Are these location-based aliases documented somewhere I can't find?
[02:06] <mwhudson> fullermd: which version of bzr ?
[02:06] <mwhudson> i know missing got some love recently
[02:07] <fullermd> bzr.dev local, 1.5 remote.
[02:08] <mwhudson> hm
[02:08] <mwhudson> should be new enough :/
[02:19] <lifeless> bkor: ping
[02:20] <lifeless> spiv: ping
[02:20] <spiv> lifeless: pong
[02:21] <lifeless> did you end up having time to look at the gnome hooks?
[02:21] <AfC> lifeless: (I suspect Olav will be asleep)
[02:21] <lifeless> AfC: so do I but it never hurts to try :)
[02:22] <spiv> gnome hooks?
[02:23] <fullermd> Sure.  You don't want 'em just wandering around the garden unsupervised...
[02:24] <lifeless> spiv: yes - the ones they run on svn, which we'd need to support
[02:27] <spiv> Ah, no, I hadn't looked.
[02:28] <lifeless> could I entice you to look? You've done some server-hook stuff before..
[02:28] <igc> morning
[02:30] <lifeless> hi igc
[02:31]  * spiv looks
[02:31] <Odd_Bloke> lifeless: Apologies if I just spammed you with LP merge requests, I was playing around with the interface and didn't realise it generated emails.
[02:32] <spiv> staging.launchpad.net is a handy sandbox for experimenting with.
[02:32] <JamesB192> I am trying to misuse bzr as a versioned backup program: I have created the remote and local branches, and am running a server for the remote. Is there a tutorial for this type of misuse, or do I have to RT(Entire)FM? PS: using bzr send I get the error 'bzr: ERROR: No mail-to address specified.' How do I make it go away?
[02:33] <Odd_Bloke> spiv: Well, the merge requests are valid, so it doesn't matter if they're in LP proper.  I just didn't intend to send lifeless a number of emails (which largely mirror on-list discussion anyway).
[02:33] <lifeless> Odd_Bloke: :)
[02:34] <lifeless> JamesB192: well bzr send is used to create emails, you probably want bzr push
[02:35] <JamesB192> Ah, OK.
[02:37] <Odd_Bloke> lifeless: Did you get a merge request (not via LP) that I sent a little while ago? I CC'd the ML but haven't seen any sign of it and am wondering if it got through.
[02:37] <lifeless> yesterday?
[02:37] <Odd_Bloke> Hold on, I'll find a timestamp.
[02:37] <lifeless> or earlier today, cause I saw one yesterday and one earlier today
[02:38] <Odd_Bloke> lifeless: This one was sent, according to GMail, on the last hour (so around 40 minutes ago).
[02:38] <Odd_Bloke> But I'm using GMail via IMAP, which can occasionally be a bit funky.
[02:39] <Odd_Bloke> Subject was "[PQM/MERGE] Remove Arch-style naming from tests".
[02:40] <lifeless> yes
[02:40] <lifeless> uhm, I think its fine
[02:40] <Odd_Bloke> OK, good.
[02:40] <Odd_Bloke> The reason I asked was because I was going to reply to it to mention that it depends on my 'fix-tests' branch, but haven't had the chance to do so yet.
[02:41] <Odd_Bloke> Then I started playing with LP. :p
[02:41] <lifeless> :)
[02:43] <spiv> lifeless: well, I've looked.  The post_change_branch_tip hook should be able to do the post-commit actions fairly straightforwardly I think.
[02:43] <spiv> lifeless: (and thanks to my push work it turns out that that hook *does* get run server-side now!)
[02:44] <spiv> lifeless: but we probably need to add a pre_change_branch_tip hook as well.
[02:45] <spiv> lifeless: because there's a fair bit of checking of the commit before it is accepted (there must be a log message, PO files should be syntactically correct, etc).
[02:45] <lifeless> spiv: I'm really sorry that I hadn't drawn it more visibly to your attention earlier; I think you were in protocol-3 space
[02:45] <spiv> Yeah, I probably was at that time.
[02:46] <spiv> I don't think adding infrastructure for a pre_change_branch_tip hook should be too hard.
[02:48] <spiv> Also, pre-1.6 clients won't trigger that hook, because they will be using VFS RPCs to update the last revision info.  I'm not sure how much that will matter.
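The mechanism spiv is proposing can be sketched generically: a pre hook that runs before the branch tip moves and may veto the change (as the GNOME svn checks do, e.g. requiring a log message), and a post hook that reacts after it has moved. This is a standalone illustration of the pattern, not bzrlib's actual hook API; all the names here are invented for the sketch.

```python
# Generic sketch of the pre/post change-branch-tip pattern discussed
# (NOT bzrlib's real API; class and hook names are hypothetical).
class TipVetoed(Exception):
    """Raised by a pre hook to reject a tip change."""

class Branch:
    def __init__(self):
        self.tip = None
        self.pre_change_hooks = []   # each called with (old_tip, new_tip, message)
        self.post_change_hooks = []  # each called with (old_tip, new_tip)

    def set_tip(self, new_tip, message):
        # Pre hooks run before anything changes; any of them may raise.
        for hook in self.pre_change_hooks:
            hook(self.tip, new_tip, message)
        old = self.tip
        self.tip = new_tip
        # Post hooks run once the change has been committed.
        for hook in self.post_change_hooks:
            hook(old, new_tip)

def require_log_message(old_tip, new_tip, message):
    # Mirrors one of the GNOME svn checks: every commit needs a message.
    if not message.strip():
        raise TipVetoed("empty log message")

b = Branch()
b.pre_change_hooks.append(require_log_message)
b.set_tip("rev-1", "add feature")
print(b.tip)  # rev-1
```

The server-side point in the conversation is that with the 1.6 smart-server work, hooks like these fire where the branch actually lives, so the checks cannot be bypassed by a client that simply doesn't run them.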
[02:48] <lifeless> spiv: any chance for GUADEC? :P
[02:48] <lifeless> spiv: well, we could add a version check policy on the server?
[02:48] <spiv> If it does matter, we could probably hack up a plugin for the server that traps that VFS call and invokes the right hook(s).
[02:50] <spiv> Hmm, we could do that, I suppose, by e.g. writing a plugin so that the server will reject non-protocol-3.
[02:50] <lifeless> spiv: don't clients give their version now?
[02:51] <spiv> Probably no easier than hooking VFS calls that change */.bzr/branch/last-revision, though.
[02:51] <spiv> They do, but as a string.
[02:51] <spiv> That header was intended for logging rather than greater-than/less-than version comparisons.
[02:51] <spiv> But protocol-3 happens to be new in 1.6, as is that verb.
[02:53] <spiv> I'd rather not get into comparing strings like "1.6b3" and "1.7" and "1.10".
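The hazard spiv is avoiding is concrete: compared as strings, "1.10" sorts before "1.7". A quick illustration (plain Python, not bzrlib code; `as_tuple` is a deliberately naive helper for the sketch):

```python
# Lexicographic comparison of version strings is wrong: at the third
# character, '1' < '7', so "1.10" compares as older than "1.7".
print("1.10" < "1.7")   # True -- but 1.10 is the newer release

def as_tuple(v):
    # Naive numeric split; strings like "1.6b3" need more care, which
    # is exactly why parsing version strings is best avoided.
    return tuple(int(part) for part in v.split("."))

print(as_tuple("1.10") > as_tuple("1.7"))  # True -- the intended order
```

Hence the preference for a separate protocol-level header over greater-than/less-than comparisons on the user-agent string.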
[02:53] <lifeless> right
[02:54] <lifeless> but isn't the string taken from the string made from the tuple ?
[02:54] <spiv> It is bzrlib.__version__, yeah.
[02:54] <spiv> I'd rather introduce a new header if we do that, though.
[02:55] <spiv> So we keep the concepts of user-agent-label and protocol-level-I-speak separate.
[02:56] <spiv> The current header is "Software version", and is intended to be the former.
[02:56] <spiv> Maybe I should just rename it to "User agent" :)
[02:56] <lifeless> right
[02:57] <lifeless> but -
[02:57] <lifeless> there are two levels of protocol
[02:57] <lifeless> there is 'how I encode'
[02:57] <lifeless> and there is 'what verbs I will try'
[02:57] <spiv> We can add another header pretty painlessly, but we should think about how we want it to work and what use cases we want it to satisfy.
[02:57] <lifeless> anyhow
[02:58] <lifeless> I would like to be able to say to bkor: 'bzr X on server, bzr X or > on client, plugin Y on server'
[02:58] <spiv> Ideally, we shouldn't need to rely on the client to indicate what verbs it will try, we should just DTRT based on the verbs they do try :)
[02:58] <lifeless> and have : ported checks from svn run reasonably fast and appropriately, and reliably.
[02:59] <spiv> Porting the checks from SVN will take a bit of effort.  Not gargantuan, but not trivial either.
[02:59] <lifeless> yah
[03:00] <lifeless> I'm sure bkor would be happy to do that bit
[03:00] <spiv> Although "poke the viewcvs so that it knows about the new revision" bit will hopefully be unnecessary with loggerhead :)
[03:00] <lifeless> (though perhaps not as happy as if we did it :P)
[03:01] <lifeless> now for the argh! bit. the DVCS talk is on monday :)
[03:01] <spiv> Well, I'd be happy to help port them.  Realistically it'll probably be easier for us to work with them than for either side to try port them alone.
[03:01] <spiv> Ok, so you'd like the pre-commit capability done ASAP?
[03:01] <lifeless> if it's doable this week, then yes. otherwise no - we can say we know what it will take and it's coming
[03:02] <spiv> Well, I don't think it'll be hard.  I'll spend an hour or so on it to see if it really is feasible that quickly.
[03:03] <lifeless> sweet
[03:04] <spiv> I wouldn't be totally shocked if there's a couple of unexpected bumps between here and having it done, but I wouldn't be shocked if it's plain sailing either.  We'll see :)
[04:05] <jam> lifeless: any updates on btree? I haven't seen any commits or posts since I left :)
[04:16] <lifeless> jam: just doing C parsing for _leaf_nodes
[04:16] <lifeless> jam: 60% of that 110 seconds is claimed to be in the parsing
[04:16] <jam> lifeless: sure
[04:17] <jam> If you consider... it takes 1s to read all nodes (iter_all)
[04:17] <jam> and we get maybe 100-200 nodes per leaf
[04:17] <jam> and we have a 50% cache rate
[04:17] <jam> 200 * .5 * 1 = 100s
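jam's back-of-envelope above, spelled out (the inputs are his rough figures from the conversation, not measurements):

```python
# jam's rough cost model: reading every leaf takes ~1s in total
# (iter_all), an operation touches up to ~200 leaf nodes, and the
# cache absorbs about half of those reads.
time_to_read_all_leaves = 1.0   # seconds, per his iter_all figure
nodes_touched = 200             # upper end of "100-200 nodes per leaf" access
cache_miss_rate = 0.5           # "50% cache rate" -> half the reads miss
estimated_cost = nodes_touched * cache_miss_rate * time_to_read_all_leaves
print(estimated_cost)  # 100.0
```

Which is his "200 * .5 * 1 = 100s", and why lifeless's 60%-in-parsing profile of the 110-second run is plausible: most of the time is repeated leaf parsing.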
[04:17] <jam> lifeless: pyrex or raw C?
[04:18] <lifeless> pyrex
[04:18] <lifeless> adapting _knit_load_data
[04:19] <jam> :)
[04:19] <lifeless> I'm not sure its wrong
[04:19] <lifeless> but profilers -  meh
[04:19] <lifeless> anyhow, finding out is relatively cheap
[04:20] <lifeless> what dance did you do tonight?
[04:21] <lifeless> its disconcerting that the three-level test kicks in before the progress bar debounce
[04:36] <jam> we did rumba and waltz tonight, focusing on underarm turn and breakaway 5-something
[04:36] <jam> 5position whatever, where your foot is crossed behind the other
[04:37] <jam> lifeless: yeah, the "nothing is happening" is why I generally run with -v
[04:37] <jam> lifeless: So I'm poking around at our bloom stuff, and I was noticing that our bottom level rows have more nodes than I think you give them credit for
[04:38] <jam> I'm seeing bloom filters with 9,000 entries
[04:38] <jam> I think the bottom level is < 100, but you have at least that many in the internal node
[04:38] <jam> So to be useful, we would need bloom filters of ... 9000 bytes
[04:39] <jam> Now, this is for the biggest pack in all packs
[04:39] <lifeless> wheee thats quite some compression
[04:39] <lifeless> oh right
[04:39] <lifeless> yes, I was saying that only the bottom level of filter is really useful
[04:39] <jam> lifeless:  that *is* the bottom
[04:40] <jam> leaf nodes + 1 internal = 9000 records per internal node
[04:40] <jam> Otherwise you only have leaf nodes
[04:40] <lifeless> oh
[04:40] <jam> and if you've already read them ...
[04:40] <jam> so the *only* time these are useful is when you have 1 root node, because you didn't fill it all the way
[04:40] <lifeless> eh, this doesn't make sense to me
[04:41] <jam> lifeless: each internal node can map to 100 other nodes
[04:41] <jam> right?
[04:41] <lifeless> how many keys are in each leaf?, and how many leaves per internal node
[04:41] <jam> And each leaf node can hold 90
[04:41] <lifeless> so, 9K yes
[04:41] <lifeless> but - we *are* seeing IO avoidance already
[04:41] <lifeless> I don't know what our FPR is in practice
[04:41] <jam> I get approximately the same values by just using "Num nodes / row_length" manually looking at the headers
[04:42] <jam> lifeless: we aren't getting any hits on the big packs
[04:42] <jam> it is only packs with < 9k records total
[04:42] <jam> so you have a single root node
[04:42] <lifeless> jam: ah, interesting theory
[04:42] <lifeless> jam: making indexbench tell us this would be good
[04:42] <jam> lifeless: -Dindex will print out the bloom statistics for every finalize
[04:42] <lifeless> also, trying, say, a 512-byte filter - which will push out some nodes from the internal
[04:42] <jam> Finished node 1 with bloom filter <BloomSHA1 @ 0x3b3b690: 2048 bits, 3635 entries, 0.6 b/e, 100.0% gray>.
[04:43] <jam> lifeless: the only problem is the finish_node() has no idea what level it is at, so I had to hack in another debug statement before it gets called
[04:43] <lifeless> jam: yes, I added that :P. What I mean is at the external level, to tell us what the aggregate usefulness is of the bottom internal row
[04:44] <jam> so, there is one other case that it slightly helps, which is the 'last added' node.
[04:44] <jam> because that one is also not always full
[04:46] <lifeless> still, minimal
[04:46] <lifeless> ok, there is someting for you to play with
[04:47] <lifeless> I shelved the pyx code contents pushed the rest
[04:49] <lifeless> jam: ^
[04:49] <jam> oh, and iix fits more than tix does into each leaf node
[04:49] <jam> yeah, I saw the commit,
[04:50] <jam> I bumped up the bloom to 1K, and we still get 7K entries, and 98.8% full
[04:50] <jam> Wrote: 4710400 with 1K blooms instead of 4706304 with 256 byte blooms
[04:51] <jam> so only added 1 page in all of that output :)
[04:51] <jam> I guess we have some room to spare
[04:53] <lifeless> still, 98% is not saving much
[04:53] <lifeless> 2K?
[04:53] <lifeless> (meep)
[04:53] <jam> well, we have a few other ones that start hitting 45% full, which is "optimal"
[04:53] <jam> I'll try 2K next
[04:53] <jam> but I'm running the benchmarks first
[04:53] <jam> meep?
[04:53] <jam> lifeless: note the 'gray level' isn't == FPR
[04:55] <lifeless> jam: it's the number of bits on; 2% is the number that can possibly miss, and we get 5 chances at that per value
[04:55] <jam> right
[04:55] <lifeless> jam: so 10% on average, or 'not saving much' :)
[04:55] <jam> sure, but 10% >> 2%
[04:55] <lifeless> true
[04:56] <lifeless> So, I wasn't translating all the way up to the FPR because we both know the maths :)
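The maths they are both alluding to: with fraction `gray` of the bits set and 5 probes per lookup (the hash count used throughout this discussion), a missing key is filtered out only if at least one probe lands on an unset bit. A quick check:

```python
# Filtering power of a nearly-full bloom filter: with fraction `gray`
# of bits set and k independent probes, a miss is rejected only when
# some probe hits an unset bit, i.e. with probability 1 - gray**k.
def reject_rate(gray, k=5):
    return 1 - gray ** k

print(round(reject_rate(0.98), 3))  # ~0.096: a 98%-gray filter rejects ~10% of misses
print(round(reject_rate(0.90), 3))  # ~0.41: a 90%-gray filter rejects ~41%
```

So lifeless's "10% on average" for the 98.8%-full filters, and jam's "10% >> 2%", are the same calculation seen from both sides: the per-probe miss chance is ~2%, but five probes compound it.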
[04:56] <jam> I'm going to give a shot at 2K when this test run finishes, I forgot to supply --no-graph
[04:56] <lifeless> anyhow, meep was simply expressing dismay at 2K/4K being bloom
[04:57] <lifeless> may also find the chunk code is starting to generate more slack space at that fraction too
[04:58] <lifeless> yay gcc
[04:58] <lifeless> _parse_btree_c.c:91: warning: â defined but not used
[04:58] <lifeless> _so_ not helpful
[04:58] <jam> well, I just hit AssertionError: Somehow we ended up with too much compressed data, 4221 > 4096
[04:58] <jam> with 2K bloom filters :)
[04:59] <lifeless> jam: I think that my reserved hack is not quite solid enough
[04:59] <jam> I'm not quite sure why, as I'm using "2*capacity"
[04:59] <jam> I guess maybe we don't get our 2:1 compression with < 2K bytes to work with
[05:00] <jam> dropping to 1.5
[05:02] <jam> ultimately, I think blooms would need to be too big to really be useful, which was something I felt before
[05:02] <jam> It might be a bit better with the md5 bloom
[05:02] <jam> hard to say at these levels ...
[05:04] <jam> lifeless: I can see that 2k blooms work "better" than 1k blooms for miss torture
[05:04] <jam> 309k bloom misses, versus 282k bloom misses
[05:04] <jam> and it saves ... 13.2s => 12.6s
[05:05] <jam> search_from_revid is *slower* 1.597 => 1.639
[05:05] <jam> get_components is faster 15.143 => 14.394
[05:05] <jam> with 24.4k hits versus 19.8k hits
[05:07] <lifeless> jam: well the bloom gets compressed
[05:07] <lifeless> jam: but a grey one won't compress *that* well
[05:07] <jam> well, a 100% grey compresses very well
[05:07] <jam> all 1s :)
[05:07] <lifeless> but is useless :)
[05:08] <lifeless> also, 100% grey == black!
[05:08] <jam> yeah, unfortunately optimal blooms don't compress well
[05:08] <jam> lifeless: black ... or white depending on your terminal color
[05:08] <jam> and whether you consider 1 whiter or blacker than 0
[05:08] <jam> I'm seeing 90% grey at the level 1 nodes for the big packs, and we are down 13k => 8k entries for iix, and 9k => 4.6k for tix
[05:09] <jam> with 75%  for tix
[05:09] <jam> and a corresponding large increase in l1 nodes
[05:09] <lifeless> so broadly
[05:09] <lifeless> we can - not use a bloom
[05:09] <lifeless>  - use a bloom for the entire table, and page it in <some sensible> size
[05:10] <jam> I would say if we want to use a bloom, it needs to be on a page of its own *next to* the node mapping page
[05:10] <lifeless> use blooms for the bottom row only (giving us better fan-out high up the tree)
[05:11] <jam> 4096 bytes for 9k records would give us ~4 bits / entry
[05:11] <jam> (3.64)
[05:11] <jam> which is in the ~20% FPR range
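jam's figures check out against the standard bloom-filter estimate, assuming the 5 hash probes used elsewhere in the discussion:

```python
import math

# One 4096-byte page holding a filter for ~9000 keys.
m = 4096 * 8          # bits available in the page
n = 9000              # keys to represent
k = 5                 # hash probes per key (figure from the discussion)

bits_per_entry = m / n
# Standard false-positive estimate: (1 - e^(-k*n/m))^k
fpr = (1 - math.exp(-k * n / m)) ** k

print(round(bits_per_entry, 2))  # 3.64, jam's "~4 bits / entry (3.64)"
print(round(fpr, 2))             # ~0.23, i.e. his "~20% FPR range"
```

At roughly 10 bits per entry the same formula gives the ~1% FPR usually aimed for, which is the sizing tension the whole exchange is about: a useful filter for 9k keys wants far more than one 4K page.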
[05:12] <jam> lifeless: the one advantage of a whole-pack bloom is that you know what pages to read in, as soon as you know how big the bloom is
[05:12] <jam> for as many keys as you want to check
[05:12] <jam> so while you are right
[05:13] <jam> that it would be 3-round-trips to get down to the almost leaf node
[05:13] <jam> it is only 1 round-trip, but more total data
[05:13] <lifeless> jam: actually, I was comparing against no-blooms :)
[05:13] <jam> and for what we want, doesn't scale terribly well with number of keys
[05:14] <lifeless> so aiming for very low FPR is one approach
[05:14] <lifeless> another approach to analysing this is to say:
[05:15] <lifeless> how much IO do we add (for each scenario) by having the bloom, whether it is a global, or bottom-internal-only, or whatever. And how much do we save?
[05:15] <lifeless> 10% unset is around 50% FPR (more or less)
[05:16] <lifeless> for half the pointer count in the internal node
[05:16] <lifeless> if we strip the bloom from higher nodes
[05:16] <lifeless> we've doubled our bottom internal node row count
[05:16] <jam> lifeless: but we don't have many intermediate nodes anyway
[05:17] <jam> if we figure a bloom-less node is 1:100 fan out
[05:17] <lifeless> jam: right, b+Tree structure for the win
[05:17] <jam> and a bloom one is 1:50
[05:17] <lifeless> its still waaay faster than bisection to locate an individual key
[05:17] <jam> and the tips are... ~100
[05:17] <lifeless> jam: anyhow, my point is to compare IO load, not to compare a specific FPR
[05:18] <jam> you have to have 500,000 nodes before it starts getting interesting
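jam's fan-out argument, roughly: halving internal fan-out (to make room for blooms) only costs an extra tree level once the key count outgrows what the shallower tree can address. A sketch with the figures from the conversation (100 vs 50 pointers per internal node, ~90 keys per leaf); an illustration of the shape of the argument, not his exact arithmetic:

```python
# Keys addressable by a B+tree: each internal level multiplies capacity
# by the fan-out, and leaves hold ~90 keys each.
def capacity(fanout, leaf_keys=90, internal_levels=2):
    return fanout ** internal_levels * leaf_keys

print(capacity(100))  # 900000: two bloom-less internal levels
print(capacity(50))   # 225000: two internal levels at halved fan-out
```

jam's "500,000 nodes before it starts getting interesting" sits between those two capacities: below it the bloom-less tree is no deeper, so the halved fan-out buys nothing structural.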
[05:18] <jam> lifeless: so... what it saves us is searching for missing keys in the bottom-most layer
[05:18] <jam> which helps 'miss torture' but not much else
[05:20] <jam> hmm... you might consider using them for small packs
[05:20] <jam> as you could even aggregate them if you wanted to
[05:20] <lifeless> so miss_torture is trying to simulate something I think we see in real world
[05:20] <lifeless> which the other tests don't yet show
[05:21] <lifeless> anything with CompositeGraphIndex in there will start to show it
[05:21] <jam> lifeless: I think 'search_from_revid' is a reasonable approximation
[05:21] <jam> and it benefits more from having fewer internal nodes
[05:22] <jam> and removing the CPU overhead of computing sha1s to compare against the bloom
[05:23] <jam> I'm a bit curious about start+end keys at this point
[05:23] <jam> It would be the same 50% overhead
[05:23] <jam> but would we get better filtering than bloom
[05:23] <jam> (10%)
[05:24] <lifeless> I don't think so
[05:24] <lifeless> also bloom will be getting nearly 50% filtering
[05:24] <lifeless> (at 10% unset bits)
[05:24] <lifeless> erm, well, 1-0.9^5 or something
[05:25] <lifeless> 43%
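The arithmetic behind these numbers can be sketched as follows. The 5-hash count is taken from lifeless's "1-0.9^5"; the exact parameters PyBloom uses may differ, and note the formula actually gives roughly 41%, in the same ballpark as his quick 43% figure.

```python
def filtering_rate(unset_fraction, num_hashes=5):
    """Fraction of genuinely-absent keys a bloom filter rejects.

    A lookup only reports "maybe present" when all num_hashes probed
    bits are set.  With each bit set with probability
    (1 - unset_fraction), the false-positive rate is
    (1 - unset_fraction) ** num_hashes, and the filtering rate is its
    complement.
    """
    return 1.0 - (1.0 - unset_fraction) ** num_hashes

# At 10% unset bits and 5 hashes: 1 - 0.9**5, i.e. about 41% of
# absent keys are filtered without touching the child node.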
[05:27] <jam> lifeless: Well, removing the blooms entirely (page_size = 16 bytes), I get 10.550s for miss torture
[05:28] <poolie> lifeless: can you help me work the glidepath for stacking?
[05:28] <jam> which is down from 13s at 1k, and 12s at 2k
[05:28] <poolie> afaics there is nothing more needing review
[05:28] <poolie> so i'd like to do some tweaks/merges
[05:29] <lifeless> poolie: oh dang, I forgot to do the annotate performance test
[05:29] <lifeless> jam: 16 bytes?
[05:29] <jam> lifeless: just something to make them ignored
[05:30] <jam> We don't have a way to just turn them off right now
[05:30] <jam> the point is that they a) aren't being used during read, and b) aren't taking up much space
[05:30] <poolie> lifeless: i don't mind doing that test
[05:30] <poolie> but i'd like to know the overall map
[05:31] <poolie> if i just work through from the bottom up and merge them will we be done?
[05:31] <lifeless> poolie: yes, my shallow branch loom has them all except aarons policy work
[05:31] <lifeless> poolie: working from bottom up will bring in the features
[05:32] <poolie> ok
[05:32] <poolie> any tips, or suggestions of anything better to do?
[05:32] <lifeless> poolie: and the bug with 'bzr branch not-stacking-format --stacked localpath' is still something I need to fix (by changing the format on the fly)
[05:32] <lifeless> poolie: for annotate, if its dog slow, undo the delete and do if(len(knit._fallback_vfs)) else: old_code
[05:33] <poolie> right
[05:33] <lifeless> poolie: other than that, no surprises that I know of
[05:35] <mwhudson> jam: heh, your bzr-service c client looks pretty similar to mine in some ways
[05:35] <jam> mwhudson: yeah, but i wrote that 2.5 years ago :)
[05:35] <jam> just never caught on, I guess
[05:35] <mwhudson> we must have similar levels of c (in)experience
[05:35] <jam> probably win32 lacking os.fork() didn't help
[05:39] <mwhudson> i guess doing something fancy with ptys would make some degree of sense
[05:42] <mlh> how light could you make the client in python?
[05:43] <mlh> never mind, I just noticed client.py :-)
[05:43] <jam> mlh: pretty light :)
[05:43] <jam> mwhudson: I remember the big off-put
[05:43] <jam> gpg signing
[05:44] <jam> It would prompt you for the signature in the original shell
[05:44] <jam> not the one you were in
[05:44] <jam> and fixing *that* wasn't worth my effort
[05:44] <mlh> I guess all state has to be passed every request .. and the bzr service would have to be changed likewise
[05:45] <mwhudson> jam: i think doing the stdout/stderr overriding in a more unixy way might help there
[05:45] <mwhudson> jam: but yeah, effort
[05:46] <jam> mwhudson: gpg talks to the tty directly, just like ssh
[05:47] <jam> I *think* you might be able to set GPG_TTY
[05:47] <jam> but then you have to start passing env vars to the back end, etc.
[05:47] <mwhudson> jam: forkpty, maybe?
[05:48] <mwhudson> you can make another file descriptor the controlling terminal for your process *somehow*, i'm sure
[05:48] <poolie> mwhudson: you must close the existing one then the next tty you open will become yours by default iirc
[05:49] <poolie> or something like that
[05:49] <poolie> you might need to change session group?
[05:49] <poolie> i do have a book that describes this if you really care
[05:49] <mwhudson> i have apue somewhere
[05:49] <mwhudson> but i'm very unlikely to care enough to work on this
[05:49] <poolie> 'linux application development' also has several bags of this crack
[05:49] <poolie> or whatever crack comes in. tubes?
[05:50] <mwhudson> i semi-enjoy unix-hackery on this level, but not really enough to put actual work in
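The forkpty route mwhudson suggests looks roughly like this in Python. `pty.fork()` does the setsid-and-acquire-controlling-tty dance poolie describes, so a child that opens /dev/tty directly (as gpg and ssh do) talks to the pseudo-terminal instead of the shell the service was started from. This is an illustrative sketch, not what the bzr-service client actually did.

```python
import os
import pty

def run_in_pty(argv):
    """Run argv with a fresh pseudo-terminal as its controlling tty.

    pty.fork() creates a new session in the child (via setsid) and
    makes the pty slave its controlling terminal, so a program that
    opens /dev/tty -- as gpg does for passphrase prompts -- reads and
    writes the pty rather than the launching shell's terminal.
    """
    pid, master_fd = pty.fork()
    if pid == 0:
        # Child: stdin/stdout/stderr and /dev/tty are the pty slave now.
        os.execvp(argv[0], argv)
    chunks = []
    while True:
        try:
            data = os.read(master_fd, 1024)
        except OSError:  # EIO on Linux once the child side is closed
            break
        if not data:
            break
        chunks.append(data)
    os.waitpid(pid, 0)
    os.close(master_fd)
    return b"".join(chunks)
```

A server could then forward the master fd's bytes over its socket to the real client, which is the "more unixy" stdout/stderr overriding mentioned above.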
[05:50] <mwhudson> poolie: have you seen pyrepl?
[05:51] <jam> right, poolie, play innocent
[05:51] <jam> Anyway, => sleepytime
[05:51] <jam> have a good evening
[05:51] <lifeless> gnight
[05:51] <poolie> night john
[05:51] <poolie> mwhudson: no, i haven't, also python.net seems to be down
[05:52] <mwhudson> yeah, it's some weird-ass virtual machine slice these days and doesn't work very well
[05:52] <mwhudson> i guess i could put the bzrlib apidocs on people.ubuntu.com
[06:03] <lifeless> ok
[06:03] <lifeless> 7 seconds saved
[06:03] <lifeless> by parsing just the key in C
[06:03] <lifeless> clearly faster, I'll finish the conversion
[06:30] <catsclaw> Hi all
[06:30] <catsclaw> My last chance to get Bazaar working the way I need it before I give up and go back to SVN.
[06:30] <gour> huh
[06:31] <catsclaw> Does anyone know a way to get Loggerhead working through the fastcgi wrapper for wsgi?
[06:32] <catsclaw> Or whatever TurboGears uses?
[06:32] <bob2> you tried and it didn't work?
[06:32] <catsclaw> I've got no idea how to do it.  The Loggerhead instructions assume you've got it running as a daemon
[06:33] <catsclaw> There's no mention of any other way of doing it.
[06:33] <mwhudson> some coding will be required
[06:34] <mwhudson> from loggerhead.apps.filesystem import BranchesFromFileServer
[06:34] <mwhudson> BranchesFromFileServer('/path') <- there's your wsgi app
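Hooking that up might look like the sketch below. The stand-in `app` takes the place of `BranchesFromFileServer('/path')` from mwhudson's snippet (whose import path and signature may vary between loggerhead versions); any WSGI callable slots into the same place.

```python
# Stand-in WSGI application; in real use this would be
# loggerhead.apps.filesystem.BranchesFromFileServer('/path'),
# per the snippet above.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"branch listing would go here\n"]

# Serving it over FastCGI with flup (one common WSGI->fastcgi bridge):
#   from flup.server.fcgi import WSGIServer
#   WSGIServer(app).run()
# or, for local testing, the stdlib reference server:
#   from wsgiref.simple_server import make_server
#   make_server("localhost", 8080, app).serve_forever()
```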
[06:36] <pygi> morning
[06:41] <poolie> hello pygi
[06:47] <Odd_Bloke> Whee, I've got automated bisection working.
[06:47] <poolie> hello Odd_Bloke
[06:47] <poolie> in pqm?
[06:48] <Odd_Bloke> Nope, in the bisect plugin.
[06:49] <Odd_Bloke> I've been tidying up my development directory and stumbled on some stuff I started at the London sprint.
[06:49] <jamesh> Odd_Bloke: can it bisect a graph?
[06:50] <lifeless> Odd_Bloke: cool
[06:50] <Odd_Bloke> jamesh: I dunno, I'll test.
[06:51] <Odd_Bloke> It's just hooking into existing bisect stuff.
[06:51] <Odd_Bloke> The majority of this patch is a while loop. :p
[06:53] <Odd_Bloke> It's not quick on bzr.dev.
[06:54] <Odd_Bloke> Oh, and it's bombed with an encoding issue.
[06:54] <Odd_Bloke> Probably Lukas' fault. :p
[06:58] <Odd_Bloke> jamesh: It'll find a dotted revision, if that's what you meant by 'bisect a graph'.
[06:59] <jamesh> Odd_Bloke: I guess what I meant was to use the graph when working out what revisions to test
[07:00] <Odd_Bloke> jamesh: I don't know.  Jeff Licquia wrote most of it, I've just plugged in the necessary bits to get automation.
[07:00] <jamesh> Odd_Bloke: if you keep the assumption that "one revision causes the breakage", each successful test indicates that the revision's entire ancestry is clean, and every failed test indicates that every other revision that includes this one in its revision history is broken
[07:01] <jamesh> rather than treating it as a simple linear sequence of revisions you can cut in the middle
[07:01] <Odd_Bloke> Right, I suspect it just cuts it, but I don't actually know either way.
[07:01] <lifeless> I think cutting in the middle is fine for left-hand
[07:01] <igc> hi pygi. Thanks for your blog encouraging more interop between the DVCSs. For large projects, dynamic interop will be slow but I think there's promise in using fastimport/export for keeping high-performing mirrors around and synched
[07:02] <lifeless> so bisect (mainline), then bisect(new in branch)
[07:02] <jamesh> how does git-bisect handle it?
[07:03] <lifeless> not entirely sure
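jamesh's graph-aware scheme can be sketched like this: assuming a single culprit revision whose breakage propagates to every descendant, a good test result clears the tested revision's entire ancestry, and a bad result confines the search to that ancestry. The pick heuristic and graph representation here are illustrative, not the bisect plugin's actual code.

```python
def ancestry(graph, rev):
    """All ancestors of rev, inclusive, in a {rev: [parents]} DAG."""
    seen, stack = set(), [rev]
    while stack:
        r = stack.pop()
        if r not in seen:
            seen.add(r)
            stack.extend(graph.get(r, ()))
    return seen

def graph_bisect(graph, bad_tip, is_good):
    """Locate the one revision that introduced a breakage."""
    candidates = ancestry(graph, bad_tip)
    while len(candidates) > 1:
        # Test the revision whose ancestry splits the remaining
        # candidates most evenly (a crude stand-in heuristic).
        rev = min(candidates,
                  key=lambda r: abs(2 * len(ancestry(graph, r) & candidates)
                                    - len(candidates)))
        reachable = ancestry(graph, rev) & candidates
        if is_good(rev):
            candidates -= reachable   # rev's whole ancestry is clean
        else:
            candidates = reachable    # the culprit is in rev's ancestry
    return candidates.pop()
```

On a linear history this degenerates to ordinary bisection; on a merge-heavy graph it prunes whole side branches per test, which is the gain over treating revisions as a flat sequence.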
[07:04] <lifeless> ok, saved 30/120 ish seconds
[07:04] <poolie> spiv, ping?
[07:11] <lifeless> spiv: so, are the hooks boom or bust?
[07:12] <gour> pygi: my experience is that fastimport fails with darcs...
[07:13] <spiv> lifeless: boom
[07:14] <lifeless> cool
[07:14] <lifeless> I have iter_random_one down to 78.722,
[07:14] <lifeless> just by parsing faster
[07:18] <Odd_Bloke> It looks like bisect does restart on the branch when it bisects to a merge revision.
[08:15] <Odd_Bloke> What word can I use to refer to { trees, branches, repositories }?  Elements?
[08:15] <luks> thingies?
[08:15] <luks> :)
[08:19] <lifeless> Odd_Bloke: objects? components?
[08:25] <igc> that's it from me today. Night all
[08:42] <lifeless> hmmm
[08:42] <lifeless> night igc
[08:43] <lifeless> I wonder what the gc does with swapped out pages
[08:45] <beuno> howdy lifeless
[08:47] <lifeless> hi beuno
[08:48] <lifeless> lh search merged yet? (<am keen>)
[08:48] <beuno> heh, I promise I'll review the current code today, and send it off to the list so mwhudson feels more pressured  :)
[08:49] <beuno> I've showered and slept, so today should be more productive
[08:49] <lifeless> lol
[08:49] <lifeless> where are you?
[08:49] <beuno> Canonical
[08:50] <lifeless> London?
[08:50] <beuno> 16 hours on a plane, went straight to the meeting
[08:50] <beuno> yeap
[08:50] <lifeless> cool, didn't even know there was a meeting ;)
[08:50] <lifeless> you going to GUADEC?
[08:50] <beuno> bzr-unrelated stuff
[08:51] <lifeless> sounds fun :)
[08:51] <beuno> I don't think so, I'm here for a few days (may end up staying longer though)
[08:51] <beuno> it is!
[08:52] <beuno> Guadec is in Spain, isn't it?
[08:52] <lifeless> turkey
[08:52] <lifeless> next week
[08:52] <lifeless> monday !
[08:53] <beuno> ah, then I'm sure I'm not going
[08:53] <beuno> I do still have to make some things pretty for Jc2k
[08:53] <lifeless> :)
[08:53] <beuno> I hope I can get to that this weekend
[08:57] <lifeless> lol
[08:57] <lifeless> BTreeIndex: iter_random_one in 199.008,
[08:58] <beuno> I don't have a backlog, how much is it without your BTreeIndex magic?
[08:58] <lifeless> dunno yet
[08:58] <lifeless> its new
[08:58] <lifeless> its actually iter_random_one_reopen
[09:02] <lifeless> and - much longer - still going
[09:04] <beuno> oh, cool. mwhudson got the new version of LH back up again in LP
[09:04] <beuno> and it's fast!
[09:06] <mwhudson> still using mega ****wads of ram
[09:07] <mwhudson> but at least that isn't affecting the rest of the codehosting any more
[09:07] <beuno> mwhudson, more or less than before?
[09:07] <lifeless> still going
[09:10] <lifeless> GraphIndex: iter_random_one in 794.916,
[09:10] <mwhudson> beuno: well, different
[09:10] <mwhudson> beuno: it's probably growing more slowly
[09:10] <mwhudson> beuno: if it was using this much on the old machine it would have been killed by now
[09:11] <beuno> ah, not very encouraging
[09:11] <lifeless> mwhudson: kudos
[09:11] <mwhudson> beuno: still, the sysadmins are much happier :)
[09:11] <beuno> mwhudson, is there some way you can get more information on what's eating up most of the ram?  if not, is there anyone in Canonical's office I can stalk to do so?
[09:12] <lifeless> beuno: the sysadmins during london's day
[09:12] <lifeless> beuno: elmo is the boss sysadmin
[09:12] <beuno> right, but you threw hardware at the problem, which isn't ideal as we can't send free hardware with each LH download  :p
[09:12] <mwhudson> beuno: probably not without adding some code
[09:12] <mwhudson> beuno: no one else is running a site remotely like LP yet, fortunately
[09:13] <lifeless> beuno: being *able* to throw hardware at it is a feature
[09:13] <beuno> mwhudson, well, the guy that opened up the 100% CPU bug maybe
[09:13] <Jc2k> hmmm pretty things :)
[09:14] <mwhudson> beuno: the obvious suspect for ram-chewage is all the whole history data we store
[09:15] <mwhudson> if it's that, and i guess one could add some code to estimate it, it probably wouldn't be hard to store it much more compactly
[09:15] <beuno> Jc2k, :)   how's LH running on your server?  is it ram-friendly enough?
[09:15] <Jc2k> indeed, its been fine
[09:16] <beuno> that's good news!  also considering that you're using search too
[09:16] <lifeless> yes, but search is good :)
[09:16] <lifeless> (/me stops wanking)
[09:16] <beuno> lol
[09:17] <Jc2k> :)
[09:18] <lifeless> I tried search with BTree
[09:18] <lifeless> massively bigger index
[09:18] <lifeless> but faster still
[09:18] <lifeless> so there is room I think for having the last page not round to 4K
[09:18] <lifeless> jam: ^
[09:19] <lifeless> (search has _many_ very small indices)
[09:19] <pygi> igc, ;)
[09:19] <pygi> gour, well, file a bug with them? :)
[09:21] <gour> pygi: https://bugs.launchpad.net/bzr-fastimport/+bug/232177
[09:22] <spiv> lifeless: hooks patch sent to list
[09:23] <lifeless> spiv: rocking!
[09:23] <pygi> gour, isn't that a darcs problem with fastexport rather?
[09:24] <gour> pygi: darcs-to-git and darcs2git, yes
[09:26] <lifeless> spiv: I've approved; one thought though
[09:26] <lifeless> spiv: perhaps the hook capture logic could be pulled up
[09:26] <gour> pygi: the point is darcs --> bzr conversion
[09:26] <lifeless> poolie: ping
[09:26] <spiv> lifeless: in the tests?  Yeah, I guess so.
[09:27] <spiv> Some of the test methods are also identical.
[09:27] <lifeless> spiv: I don't think a common base class is _that_ ugly
[09:27] <pygi> gour, well, I understand. What I'm asking is: Is darcs fast-export implementation correct?
[09:29] <spiv> If the common base class inherits from TestCase, and has test methods, but is not meant to be run, it's a bit ugly.
[09:30] <gour> pygi: dunno. tailor is atm the only tool capable of darcs{1,2} --> bzr
[09:31]  * spiv goes for a swim
[09:32] <poolie> lifeless: pong, off the phone now
[09:33] <pygi> igc, we will see what happens, I'd certainly like if people would be able to collaborate even if they disagree on some stuff
[09:33] <lifeless> poolie: feel like another call ?
[09:33] <poolie> :-}
[09:34] <poolie> let me get a cuppa and i'll call you
[09:34] <lifeless> K
[09:34] <gour> pygi: let'em choose whatever dvcs as long as it's called bzr :-)
[09:37] <pygi> gour, hehe
[09:37] <lifeless> spiv: is there a simple recipe for 'give me a 300ms local server'
[09:37] <lifeless> spiv: ?
[09:46] <lifeless> spiv: what we do there is not give it test methods, but helpers :P
[09:46] <lifeless> spiv: add an intermediate class :)
[10:03] <\sh> luks: ping qbzr ppa rebuild hardy ;)
[10:09] <semslie> When merging branches, I have some duplicate conflicts where the same file was added to both branches individually, however the files are identical. Is there a merge algorithm that will take into account that the files are the same and not detect a conflict?
[10:11] <james_w> semslie: unfortunately not, as file joins are not implemented yet.
[10:11] <semslie> james_w: what are file joins?
[10:11] <james_w> they wouldn't need to be to not have a conflict, but bzr currently errs on the side of safety.
[10:12] <james_w> semslie: bzr uses file-ids to track file identity. If a file is added independently in two different places it will get two different file ids. When they are merged this causes the conflict.
[10:13] <james_w> a file join would mark the file as having two file-ids in its history, so there would be no need to conflict.
[10:13] <semslie> james_w: thanks
[10:14] <james_w> resolving this conflict currently is just picking one file-id. You could have a merge algorithm that did that, and most users wouldn't care which was picked, so it's probably a safe thing to do.
[10:14] <james_w> however, I'm not sure that there aren't issues with symmetry that could trip that up.
[10:14] <james_w> semslie: there is a bug report open requesting this if you would like to subscribe.
[10:15] <semslie> I'll take a look
[10:37] <semslie> james_w: do you know where I could find a description of the different merge algorithms available?
[10:41] <james_w> semslie: I don't, sorry.
[10:47] <lifeless> semslie: jam is working on a plugin that accounts for this
[10:47] <semslie> thanks lifeless
[10:47] <lifeless> semslie: it doesn't address the root cause james_w mentions
[10:48] <lifeless> semslie: http://bzr.arbash-meinel.com/branches/bzr/1.6-dev/merge3_per_file
[10:50] <lifeless> (by plugin, I clearly mean branch)
[10:50] <lifeless> http://bzr.arbash-meinel.com/plugins/per_file_remerge may be useful too
[10:51] <lifeless> dunno how to make it jump etc, just saw it on the list
[11:08] <lifeless> mtaylor: hey
[11:08] <mtaylor> hi lifeless
[11:09] <mtaylor> lifeless: I was in the middle of testing something for you, wasn't I...
[11:09] <lifeless> mtaylor: dunno, were you ? ;)
[11:09] <mtaylor> lifeless: ok. good! then no!
[11:09] <lifeless> mtaylor: bzr-search has come a long way since your last attempt i think
[11:09] <mtaylor> :)
[11:09] <mtaylor> sweet. I'll pull again.
[11:10] <lifeless> also there is now a new index layer under development
[11:10] <lifeless> should help people with very large indices :), e.g. mysql.
[11:10] <semslie> thanks for the tips. after playing around a bit I seem to be finding that bazaar detects conflicts in places where subversion does not, which is a bit puzzling. I don't think subversion is wrong here, but it would be nice to see a good description of the different merge algorithms and why they might be detecting conflicts
[11:11] <lifeless> semslie: well, the code is authoritative, but please do file a bug saying you couldn't get the info you needed
[11:11] <lifeless> semslie: also, if you have a particular false-conflict, if you can describe it perhaps we can help you understand why you get it
[11:12] <lifeless> mtaylor: lp:~lifeless/+junk/bzr-index2
[11:12] <mtaylor> lifeless: should I pull that as a separate plugin then?
[11:12] <mtaylor> or is that a branch of search?
[11:12] <lifeless> mtaylor: adds a btree based format; if you wanted to give it a spin (in a new repository, obviously), and tell me how it goes I'd love the feedback
[11:12] <lifeless> mtaylor: index2 is a new repository format
[11:13] <lifeless> mtaylor: not related directly to bzr-search at all
[11:13] <mtaylor> oh. so but will it go in in plugins? or is it a branch of bzr then?
[11:13] <lifeless> plugins
[11:13] <lifeless> as 'index2'
[11:14] <mwhudson> lifeless: have you tried index2 on any bzr-svn repos?
[11:14] <mtaylor> cool
[11:14] <lifeless> mwhudson: haven't created a rich root variation yet
[11:14] <mwhudson> lifeless: oh right
[11:14] <lifeless> mwhudson: currently have a to-london test running though
[11:14] <mwhudson> just thinking that bzr-svn indices tend to be 'rather large'
[11:15] <mtaylor> ImportError: No module named pybloom.pybloom
[11:15] <mtaylor> gah
[11:15] <lifeless> mtaylor: oh, also pull
[11:15] <mtaylor> :)
[11:15] <lifeless> http://bzr.arbash-meinel.com/plugins/pybloom/ to plugins/pybloom
[11:16] <mwhudson> lifeless: cool
[11:17] <lifeless> mwhudson: its about 4 times faster at getting a single key out from 'go' to 'woah'
[11:20] <lifeless> mtaylor: so concretely, in case I was unclear - I'm talking about doing 'bzr init-repo --btree-plain foo'; bzr branch mysql-6.0 foo/6.0; or similar
[11:20] <lifeless> then do log etc
[11:21] <lifeless> not about bzr-search on the btree format - that will be basically unchanged
[11:21] <mtaylor>     raise KeyError('Key %r already registered' % key)
[11:21] <mtaylor> KeyError: "Key 'btree-plain' already registered"
[11:21] <lifeless> mtaylor: oh, one other thing - optional C extension
[11:21] <mtaylor> right
[11:21] <lifeless> in index2, do ./setup.py build_ext -i
[11:21] <lifeless> mtaylor: did you pull the index2 plugin twice?
[11:21] <mtaylor> nope. ImportError: cannot import name _KnitGraphIndex
[11:22] <mtaylor> I get "foo" already registered sometimes when a plugin crashes spectacularly
[11:22] <lifeless> mtaylor: ah! this will want bzr.dev
[11:22] <mtaylor> :)
[11:22] <mtaylor> do we have running new-package-on-push .debs for bzr.dev yet?
[11:22] <mtaylor> because we need them
[11:23] <lifeless> mtaylor: I don't think we do at the moment, I agree.
[11:23] <mtaylor> kiko!!!
[11:23] <lifeless> not kiko's problem :)
[11:25] <mtaylor> ah... but how cool would it be to have a PPA auto-generate from a bzr branch, no?
[11:25] <lifeless> mtaylor: indeed
[11:26] <mwhudson> i think this idea has been mentioned once or twice over the years...
[11:26] <Kinnison> lifeless: cache-hot, on my smallish project, that gives me a ca. 25% speedup on 'bzr log >/dev/null'
[11:26] <Kinnison> lifeless: nice.
[11:26] <mtaylor> PPA-from-bzr-branch++
[11:27] <lifeless> Kinnison: without the C extensions?
[11:27] <Kinnison> lifeless: I built the C extensions
[11:27] <lifeless> Kinnison: cool
[11:27]  * Kinnison tries on something bigger
[11:27] <lifeless> Kinnison: should still be faster with
[11:27] <lifeless> *without*
[11:30] <Kinnison> lifeless: testing against my local copy of bzr.dev yields a 25% improvement in bzr log too
[11:30] <Kinnison> lifeless: really cool work
[11:30] <james_w> does anyone know what generates file ids like "x_Chris_Halls_<halls@debian.org>_Fri_Nov__4_10:34:47_2005_22857.16" ?
[11:30] <lifeless> Kinnison: thanks
[11:30] <lifeless> james_w: tla
[11:31] <james_w> lifeless: ah, thanks.
[11:32] <lifeless> Kinnison: is 25% good ? :)
[11:33] <Kinnison> lifeless: It's a matter of a second on my smallish dev tree
[11:33] <lifeless> so 4 to 3 ?
[11:34] <Kinnison> lifeless: aye
[11:34] <Kinnison> lifeless: and 8 to 6 on bzr.dev
[11:34] <lifeless> Kinnison: try a cold cache comparison
[11:34] <lifeless> erm
[11:34] <lifeless> by which I mean
[11:34] <lifeless> drop caches
[11:34] <lifeless> run bzr info
[11:34] <lifeless> then time bzr log
[11:34] <Kinnison> how do I drop the caches?
[11:34] <lifeless> $ cat bin/drop-caches
[11:34] <lifeless> #!/bin/sh
[11:34] <lifeless> # get written data to disk (not that its guaranteed)
[11:34] <lifeless> sync
[11:34] <lifeless> # (drop the unmodified pages)
[11:35] <lifeless> echo 3 | sudo dd of=/proc/sys/vm/drop_caches 2>/dev/null
[11:36] <lifeless> man, this network test is up to hours now
[11:36] <lifeless> gunna be a long benchmark
[11:37] <mwhudson> branching launchpad into an index2 repo is taking a good long while too
[11:38] <Kinnison> lifeless: Irritatingly, the cold-cache time on the index2 tree is stable where it is wildly variable on the non index2 tree
[11:38] <lifeless> mwhudson: index writing is a little slower
[11:38] <lifeless> Kinnison: likely the non index2 tree is split more on disk
[11:38] <Kinnison> lifeless: placing index2 on cold-cache anywhere from 25% slower to 50% faster
[11:38] <lifeless> Kinnison: new repository single pull -> packed
[11:39] <lifeless> Kinnison: strange
[11:39]  * Kinnison times cold-cache on a new copy of my test branch
[11:40] <Kinnison> on a fresh pull it's slightly more stable, and puts them on a par
[11:40] <Kinnison> well within experimental error
[11:49] <lifeless> mwhudson: did it finish?
[12:00] <mwhudson> lifeless: yeah
[12:02] <lifeless> mwhudson: is it faster?
[12:11] <AnMaster> $ bzr log --short -rtag:0.0.1-release..
[12:11] <AnMaster> bzr: ERROR: Selected log formatter only supports mainline revisions.
[12:12] <AnMaster> what does that mean?
[12:12] <AnMaster> -rtag:0.0.1-release.. was in same branch
[12:12] <AnMaster> other branches have been merged and such to it of course but why doesn't it work then
[12:14] <mwhudson> lifeless: yep, 20s vs 30s for log > /dev/null
[12:14] <mwhudson> the btree one is better packed though ofc
[12:18] <lifeless> mwhudson: 'bzr pack' in both :)
[12:19] <mwhudson> lifeless: yeah, on that
[12:19] <mwhudson> lifeless: they're about the same speed after the pack
[12:21] <lifeless> mwhudson: hmm :P
[12:21] <lifeless> mwhudson: the new index should degrade better, or thats the theory
[12:21] <lifeless> mwhudson: you could try upping the cache too
[12:22] <mwhudson> lifeless: i think i might try going to bed :-p
[12:22] <lifeless> mwhudson: ciao!
[12:53] <lifeless> mtaylor: did that work for you?
[12:53] <mtaylor> lifeless: was at lunch... haven't tested yet
[12:54] <lifeless> mtaylor: no worries
[13:00] <mtaylor> lifeless: lp:bzr should get me bzr.dev, right?
[13:00] <lifeless> uhm probably :P
[13:00] <lifeless> (thats yes, yes it will)
[13:00] <mtaylor> ok. good
[13:02] <mtaylor> testing now
[13:02] <lifeless> log isn't actually a great test, I think IO to get revisions rapidly outweighs indexing, except where the index blows :P
[13:05] <mtaylor> well, what is a good test?
[13:05] <lifeless> 'doing stuff' I think :)
[13:05] <lifeless> log will show up obvious problems :)
[13:06] <mtaylor> :)
[13:11] <lifeless> log -r -400 or something
[13:11] <lifeless> that should be faster
[13:16] <mtaylor> lifeless: still branching in to the new repo
[13:16] <mtaylor> lifeless: have I mentioned how much I enjoy copying 60k revs around ?
[13:16] <lifeless> mtaylor: you enjoy it a lot
[13:16] <mtaylor> yes.
[13:17] <mtaylor> I'd really like to do it more
[13:17] <lifeless> please sir
[13:17] <lifeless> may I have another
[13:18] <LarstiQ> yksi iso olut, olkaa hyvä.
[13:18] <lifeless> LarstiQ: *blink*
[13:18] <LarstiQ> lifeless: about the same in Finnish :)
[13:19]  * LarstiQ practices a bit for his extended stay in Finland from the 11th onwards
[13:19] <LarstiQ> lifeless: that translates "May I have one beer please"
[13:19] <lifeless> LarstiQ: cool
[13:19] <LarstiQ> big beer even
[13:20] <lifeless> the only sort :P
[13:20] <lifeless> but hang on, why are you practicing asking for beer?
[13:20] <LarstiQ> lifeless: ah well, this was the phrase as listed in my dictionary ;)
[13:21] <LarstiQ> would have to add the orangejuice lookup
[13:37] <semslie> lifeless, james_w: turns out bazaar was doing the right thing by correctly finding the last common ancestor, while we were messing it up and checking against the wrong ancestor. problem solved, bazaar ftw :)
[13:38] <james_w> cool, good to know.
[13:38] <semslie> one thing I found was that I preferred the results of a weave merge to the default
[13:38] <semslie> I'm not sure what the implications of that are
[13:43] <lifeless> gnight all
[14:10] <pygi> hey hey folks =)
[14:17] <vxnick> Hi all
[14:18] <vxnick> I was wondering if anyone knows a good way to "push" a repository to a dev server? At the moment, I have a repository setup but would like to update a live version for testing without manually uploading the files
[14:18] <vxnick> I read that using another working copy isn't the best solution, so wondered if there are any other solutions?
[14:19] <beuno> vxnick, I can think of 2 ways
[14:19] <beuno> there's a push-and-update plugin which will do that
[14:19] <vxnick> oh, excellent
[14:19] <beuno> and there's the bzr-upload plugin which will *just* upload the working tree
[14:20] <beuno> no other revision information except the last revision uploaded, so next time you only upload what has changed
[14:21] <vxnick> beuno: thanks a lot, I'll take a look at those :)
[14:21] <beuno> vxnick, you're welcome
[14:26] <abentley> jelmer: I'd be interested to talk about the bzr-gtk Bundle Buggy some time.
[14:32] <vxnick> beuno: bzr-upload works great, thanks again!
[14:34] <jelmer> abentley: Hi
[14:34] <jelmer> abentley: Sure, what about it?
[14:34] <abentley> I've moved trunk to use postgres, so you may be surprised if you just update blindly.  OTOH, BB is much more stable on postgres.
[14:35] <abentley> So I would recommend switching at some point.
[14:36] <jelmer> abentley: I already updated :-) Apart from the fact that I had to point the configuration at sqlite again it seems to still work fine
[14:36] <abentley> Cool.  Sorry about that.  The sqlite-ness has always been assumed as part of the configuration.
[14:36] <abentley> Frankly, I'm not sure what the right way to handle this is.
[14:37] <abentley> I could do a local.conf, but a local-prod.conf and a local-dev.conf?
[14:38] <jelmer> abentley: No worries, it was easy to deal with
[14:39] <jelmer> abentley: Any particular reason pgsql is better than sqlite or just performance?
[14:39] <abentley> jelmer: Stability.  TG interacts poorly with the way SQLite does locking.  PG and TG play better together.
[14:40] <jelmer> abentley: ah, ok
[14:40] <abentley> Also, schema migration is nicer.
[14:42] <jelmer> So far locking hasn't been a problem for us, but I'll see if I can migrate it at some point anyway.
[14:43] <abentley> jelmer: BB seems to crash for you occasionally.  I assumed it was that.
[14:43] <pickscrape> locking granularity leading to improved concurrency is why we moved our trac from sqlite to postgres.
[14:45] <jelmer> abentley: No, the main problem at the moment is DHCP leases expiring and DNS updating failing for some reason
[14:45] <abentley> Oh, weird.
[14:46] <abentley> jelmer: Okay.  So I'll try not to make any changes that require Postgres, but if I goof, please let me know.
[14:46] <jelmer> abentley: Will do
[14:53] <Laibsch> Hi, I am trying to merge some launchpad branches
[14:53] <Laibsch> https://code.launchpad.net/~vcs-imports/subdownloader/trunk shall go into https://code.launchpad.net/~subdownloader-developers/subdownloader/trunk
[14:53] <Laibsch> Here is what I did.  Check out the second branch and cd into it
[14:54] <Laibsch> Then "bzr merge lp:~vcs-imports/subdownloader/trunk" which resulted in http://rafb.net/p/rCkYQS78.html
[14:56] <james_w> hi Laibsch
[14:56] <james_w> it sounds like https://bugs.edge.launchpad.net/bzr/+bug/177874
[15:00] <Laibsch> Great
[15:00] <Laibsch> james_w: thank you for the info
[15:04] <tacone> hello, how can I print the revision number in my sourcecode ?
[15:04] <andrea-bs> tacone: bzr revno?
[15:06] <tacone> no. what should I write in my code, to make the current revision number appear inside that file?
[15:07] <james_w> tacone: that's not possible yet, sorry
[15:08] <james_w> 1.6 might have it, if not then 1.7 I expect
[15:09] <tacone> not a big problem :-). thank you for your answer.
[15:15] <vxnick> james_w: isn't there another command/program to do this? I could've sworn I saw something about this in the manual for printing revision info to source code
[15:15] <vxnick> I was looking for it myself last night - it'd be nice to output the revision number to a PHP file
[15:16] <tacone> vxnick: sed ? :-)
[15:16] <vxnick> tacone: yeah I've done it with that, but it'd be nice to have a post-commit hook to do it, but I'm not familiar with Python
[15:17] <james_w> bzr version-info
[15:17] <james_w> you can hook that in to your build system to do it.
[15:17] <vxnick> Actually, I used sed for subversion revision numbers, with a post-commit hook
[15:17] <vxnick> james_w: that's the one :-)
[15:17] <vxnick> any ideas for that though?
[15:17] <vxnick> i.e. integrating into a build system
[15:18] <james_w> I've never done it, sorry
[15:19] <vxnick> fair enough
[15:19] <vxnick> thanks anyways
[15:21] <vxnick> found a reference: http://doc.bazaar-vcs.org/bzr.0.90/en/user-guide/version_info.html
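Wiring `bzr version-info` into a build typically means regenerating a version module and importing it with a fallback, roughly as below. The package name `myapp` is hypothetical, and the `version_info` dict with a `revno` key is what `--format=python` emits to the best of my knowledge; check the user-guide page linked above.

```python
# Build step (shell), regenerating the module on every build:
#   bzr version-info --format=python > myapp/_version.py
# The generated module defines a `version_info` dict (revno, revision
# id, branch nick, etc.).

def get_revno(default="unknown"):
    """Return the recorded revision number, falling back gracefully
    when the generated module is absent (e.g. a plain source tarball,
    or -- as here -- an environment without the hypothetical myapp)."""
    try:
        from myapp._version import version_info  # written at build time
    except ImportError:
        return default
    return str(version_info.get("revno", default))
```

The same idea works for the PHP case discussed above: have the build step write a small PHP file instead, rather than sed-editing source in a post-commit hook.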
[15:34] <AnMaster> I didn't get an answer before
 $ bzr log --short -rtag:0.0.1-release..
 bzr: ERROR: Selected log formatter only supports mainline revisions.
 what does that mean?
[15:35] <AnMaster> does anyone around now know?
[15:35] <AnMaster> the tag was in the same branch
[15:35] <james_w> AnMaster: what does "bzr log -rtag:0.0.1-release" show?
[15:36] <james_w> (no --short, no "..")
[15:36] <AnMaster> a sec
[15:37] <AnMaster> http://rafb.net/p/RQruyr60.html
[15:38] <james_w> interesting
[15:38] <james_w> and "bzr log --short" works fine?
[15:38] <AnMaster> james_w, no it gives the error I gave above
[15:38] <LarstiQ> AnMaster: without the ..
[15:38] <AnMaster> when done for that tag
[15:38] <james_w> even with no -r
[15:39] <james_w> ?
[15:39] <AnMaster> oh ok a sec
[15:39]  * LarstiQ is suspecting the .. includes non-mainline revisions which --short doesn't like.
[15:39] <AnMaster> without any -r short format works
[15:39] <james_w> sounds like a bug to me
[15:39] <james_w> one more to try "bzr log --short -rtag:0.0.1-release"
[15:39] <AnMaster> I suspect a corrupted repo in some way because the revision number from "bzr log --short -rtag:0.0.1-release" is different
[15:40] <AnMaster> 449.2.20 Arvid Norlander        2007-11-13
[15:40] <AnMaster>       Relase of envbot 0.0.1!
[15:40] <AnMaster> why does revision number differ
[15:40] <james_w> oh wow, that is odd.
[15:40] <AnMaster> corrupted repo?
[15:40] <james_w> is this a public branch?
[15:40] <LarstiQ> AnMaster: I doubt that.
[15:41] <AnMaster> james_w, well both branches are public, just sshed in to the server and there it works, server runs FreeBSD 6.3 with bzr 1.5
[15:41] <AnMaster> python 2.5
[15:41] <LarstiQ> AnMaster: what do you use locally?
[15:41] <AnMaster> my desktop where it gives odd results runs Linux (arch linux currently), with python 2.4
[15:41] <AnMaster> bzr 1.5
[15:42] <AnMaster> however it is mounted over nfs from a gentoo box
[15:42] <AnMaster> does that matter?
[15:42] <LarstiQ> shouldn't.
[15:43] <AnMaster> well let me check on the gentoo box (also bzr 1.5 and python 2.4)
[15:44] <AnMaster> ok this is probably not a bug in bzr I guess...
[15:44] <AnMaster> Reason.... the gentoo box now has two keyboard leds blinking (= kernel oops).... could be hardware issues on it I guess
[15:44] <AnMaster> :/
[15:44] <AnMaster> sigh
[15:44] <LarstiQ> ah.
[15:45] <AnMaster> well got to debug that, when I find out the cause of that I will get back to you (if it wasn't memory corruption simply)
[15:45] <AnMaster> (or harddrive issues or whatever)
[15:46] <LarstiQ> AnMaster: good luck
[15:46] <AnMaster> LarstiQ, I *do* have backups
[15:46] <AnMaster> daily to tape
[15:47] <LarstiQ> still, faulty hardware sucks
[15:47] <AnMaster> yes it does
[15:47] <AnMaster> LarstiQ, anyway it could just as well have been something other than hardware that crashed it, it runs X with the binary nvidia driver (bleh I know, but I need 3D for various reasons)
[15:48]  * AnMaster boots into memcheck
[15:48] <LarstiQ> AnMaster: we'll still be here when you've sorted it all out :)
[15:48] <AnMaster> heh
[15:57] <vila> does someone remember the url announcing mysql's switch to bzr ?
[15:58] <enobrev> vila, http://elliotmurphy.com/2008/06/19/mysql-converts-to-bazaar-and-why-it-matters/
[15:59] <enobrev> oops... link's on that post, though (got that from planet bazaar)
[15:59] <vila> enobrev: thks
[15:59] <enobrev> http://blogs.mysql.com/kaj/2008/06/19/version-control-thanks-bitkeeper-welcome-bazaar/
[16:20] <kwk> Hello is somebody using the Bazaar plugin for eclipse?
[16:22] <enobrev> kwk, i am
[16:22] <enobrev> on win32
[16:22] <enobrev> erm, xp
[16:24] <kwk> enobrev: Ok, did you experience problems when trying to set the bazaar preferences? I'm using the plugin on ubuntu with Eclipse Ganymede and it doesn't work when trying to specify the bzr executable. A bug report is already filed (https://bugs.launchpad.net/bzr-eclipse/+bug/245136).
[16:25] <enobrev> kwk, nope, worked ok for me on both ganymede and europa.  haven't tried it on my ubuntu system yet
[16:26] <kwk> ubottu: exactly. enobrev, can you please send me your configuration file from YOURWORKSPACE/.metadata/.plugins/org.vcs.bazaar.eclipse.core (if such a file exists)? The bug report says that there has to be an initial configuration file that is working. That's why I ask.
[16:27] <kwk> bug 236162
[16:27] <enobrev> nice
[16:28] <enobrev> kwk, dir's empty
[16:28] <kwk> enobrev, hmm, do you have any clue where the settings are located?
[16:29] <enobrev> (looking now)
[16:33] <enobrev> kwk, not sure... not seeing it anywhere
[16:33] <kwk> enobrev, OK thank you very much
[16:34] <AnMaster> LarstiQ, still there? bad ram. I guess that caused it
[16:36] <LarstiQ> AnMaster: that is, if you try what you did before now, the problem doesn't surface?
[16:36] <LarstiQ> AnMaster: it is of course possible that the problems were unrelated, and there is a real bug in bzr.
[16:37] <AnMaster> indeed that is the case
[16:37] <AnMaster> anyway I shut it down now, still got warranty on this ram at least
[17:45] <Laibsch> will "bzr get svn+ssh://rolf@svn.gnucash.org/repo/gnucash" check out all available branches?
[17:46] <jelmer> Laibsch: no, you'd need to use "bzr svn-import" for that
[17:46] <Laibsch> OK
[17:46] <Laibsch> Thanks
[17:48] <jelmer> abentley, is there info about the merge proposal email support somewhere?
[17:49] <Laibsch> jelmer: Should I even check out all branches?  I want trunk plus 2 others.  I will likely want to try and merge and later rebase them.
[17:49] <jelmer> Laibsch: In that case, I would recommend just retrieving those branches
[17:49] <Laibsch> I was wondering if that would be impossible if I checked out the branches individually
[17:49] <Laibsch> OK
[17:49] <Laibsch> Thanks
[17:49] <jelmer> e.g. "bzr get svn+ssh://rolf@svn.gnucash.org/repo/gnucash/trunk" ( I think)
[17:50] <Laibsch> Yes, correct
[17:50]  * jelmer wonders if the svn+ssh url means Laibsch is a gnucash developer
[17:50] <abentley> jelmer: http://news.launchpad.net/cool-new-stuff/email-interface-to-code-review
[17:50] <jelmer> abentley, thanks
[17:50] <Laibsch> jelmer: partially
[17:50] <Laibsch> I have partial rw access
[17:51] <jelmer> abentley: What about submitting merge requests?
[17:51] <abentley> jelmer: That's done through the web right now.
[17:52] <jelmer> abentley: ok - should I file a wishlist bug for being able to do that over email or is it already pleanned ?
[17:52] <abentley> We plan to support merge directives.  Is that what you have in mind?
[17:53] <jelmer> abentley: Yeah, being able to "bzr send" to some lp address
[17:56] <james_w> jam: hi, are you around? I have a question about the merge-into plugin.
[17:58] <james_w> jam: I'm just trying it on a dummy branch, and "merge-into" set a merge revision, but didn't seem to import the text changes.
[18:04] <vgeddes> I am having trouble setting up a dedicated smart-server, ...
[18:04] <vgeddes> I set it up like this: bzr serve --allow-writes --directory=/home/vgeddes/src/nis ...
[18:05] <vgeddes> but `bzr log bzr://localhost//home/vgeddes/src/nis/main'
[18:05] <vgeddes> returns `bzr: ERROR: Not a branch: "bzr://localhost/".
[18:05] <Peng> Maybe // is a problem?
[18:05] <Peng> Also, you do know --allow-writes means there's no auth whatsoever, and anyone can write to your branches?
[18:05] <Peng> Wait.
[18:06] <LarstiQ> vgeddes: try bzr log bzr://localhost/main with that --directory
[18:06] <vgeddes> yeah, thats not the problem, I tried it without the //
[18:06] <Peng> Since you passed a --directory like that, bzr ser -- yeah, what he said.
[18:06] <vgeddes> ha, thanks !
[18:08] <Laibsch> http://rafb.net/p/iVSTU253.html Is that bug ﻿177874?  How can I perform the merge?  The branches had completely separate directories, there should be no conflicts
[18:09] <abentley> james_w: Are you testing with a branch that has commits?
[18:09] <james_w> abentley: yeah, one commit in each
[18:11] <jelmer> Laibsch, not sure, this looks like a bug
[18:11] <jelmer> Laibsch: Can you try merging from a local copy of that branch?
[18:12] <Laibsch> jelmer: How will that work?
[18:12] <Laibsch> I'll check this out into $dir
[18:12] <Laibsch> and then bzr merge $dir?
[18:17] <Laibsch> Well, in any case, those steps lead to apparently the same failure
[18:18] <Laibsch> Bazaar (bzr) 1.3.1 from ahrdy
[18:18] <Laibsch> hardy
[19:21] <pygi> bkor, I think you have your answer now :)
[19:29] <jelmer> Laibsch, please file a bug
[19:32] <awilkins> Verterok: Aha
[19:33] <Verterok> awilkins: hey
[19:34] <awilkins> I'm having trouble ; the plugin isn't initializing properly somewhere ; to use it with a project you have to disconnect it and re-share it
[19:36] <Verterok> awilkins: are you using a build of  the latest code?
[19:37] <awilkins> Verterok: I was just going to pull ; I'm using a slightly patched build
[19:37] <awilkins> Verterok: With the xmloutput fix I just mailed in
[19:37] <Verterok> awilkins: yes, I meant that :)
[19:39] <awilkins> Verterok: Hmm, might not have revisions 163/162
[19:40]  * Verterok checks what changes in 162-163
[19:41] <awilkins> There's also (undecided)  	
[19:41] <awilkins> #245136
[19:41] <Verterok> awilkins: 163 is the BIG integration merge
[19:41] <Verterok> :)
[19:41] <awilkins> Still the same trouble
[19:42] <Verterok> awilkins: revno 163 merges your plugin-dev branch and other improvements (allow selection of xmlrpc service port, etc)
[19:43] <awilkins> I think I was on the integration branch ; I just merged it
[19:43] <awilkins> Verterok: Alas, still same problem.
[19:43] <Verterok> awilkins: steps to reproduce? an eclipse restart?
[19:44] <awilkins> I was going to see if being a standalone over a repo branch makes a difference
[19:45] <awilkins> Verterok: This is the plugin running in debug from another session which is how I typically run it until I think it's stable enough for normal use
[19:45] <awilkins> I have no idea if it's linked to https://bugs.launchpad.net/bzr-eclipse/+bug/245136 at all
[19:46] <awilkins> I just added a chunk of code that sets a reasonable default for Windows (well, reasonable for my personal config....)
[19:46] <Verterok> awilkins: maybe it's related
[19:48] <Verterok> awilkins: while in debug I encountered some problems when using an old workspace; they all went away once I reconfigured using the new preference page
[19:48] <awilkins> It springs the prefs page every time you use the share wizard ; would that be normal?
[19:51] <Verterok> not at all
[19:52] <awilkins> I'll trash the settings for the workspace
[19:52] <Verterok> awilkins: I'll check the preference initializer
[19:53] <awilkins> Hm, there are no settings
[19:53] <awilkins> Should there be a file in .metadata/.plugins/org...core  ?
[19:55] <Verterok> awilkins: I don't think so, but let me double check
[19:55] <awilkins> Where does it store it then?
[19:57] <Verterok> awilkins: ./org.eclipse.core.runtime/.settings/org.vcs.bazaar.eclipse.ui.prefs and ./org.eclipse.core.runtime/.settings/org.vcs.bazaar.eclipse.core.prefs
[19:59] <awilkins> Ok, I've trashed those two files. Darn, should've kept them
[19:59] <Verterok> I'll do the same here and see if I can reproduce the preference reset issue
[20:00] <awilkins> I'm seeing it right now ; I changed the default preference for the UI, not the actual setting.
[20:01] <awilkins> So it's displaying the .bat file and saying "cannot run program "bzr" cannot find file specified"
[20:01] <awilkins> Or that last merge erased my changes
[20:02] <Verterok> awilkins: you changed the preference initializer in the UI to point to the bzr.bat?
[20:02] <Verterok> and it keeps complaining about the bzr file
[20:02] <Verterok> did I understand that right?
[20:02] <fbond> Hi.  I want to produce a patch for a changeset that removes a binary file and can be applied with the patch program.  With no VCS, `diff -Naur' would do this.  What do I use with bzr?
[20:03] <awilkins> Yes, the preference says the bat file, the error message is as if it was just trying to run "bzr" like you would on posix
[20:03] <awilkins> Aha, but if you summon the dialog AGAIN, it doesn't complain
[20:04] <Verterok> awilkins: the core plugin also has a preference initializer :P
[20:04] <awilkins> Still says "no client factory found" though
[20:04] <Verterok> fbond: bzr diff ?
[20:04] <fbond> Verterok: I assume you haven't tried it.
[20:05] <Verterok> fbond: I used bzr diff but I don't know if it's the same as 'diff -Naur'
[20:05] <Verterok> fbond: and I used the generated diff with patch
[20:06] <fbond> Verterok: Then your binary file isn't being removed by patch.
[20:06] <Verterok> fbond: oh, I missed the binary file part :P I never used it with binary data
[20:07] <Verterok> awilkins: that means the bzr-java-lib can create an IBazaarClient
[20:07] <Verterok> awilkins:  commandline or xmlrpc
[20:08] <awilkins> Verterok: Ok, I patched the core initializer with the same default value ; now the projects are connected but no overlays
[20:08] <fbond> I've tried bzr diff -r X..Y --using diff --diff-options '-Nau' but that produces funny output that I can't blame patch for not accepting.
[20:08] <awilkins> Verterok: They were previously in the "Connected/Error" state where only the disconnect operation was available
[20:08] <awilkins> This is XMLRPC
[20:08] <Verterok> awilkins: if you trashed all the preferences, you should enable the decorators again
[20:09] <awilkins> Verterok: The status decorator is enabled in the defaults
[20:09] <Verterok> awilkins: just disable/enable it
[20:09] <awilkins> Verterok: But only shows after you visit the property page, it seems.... must be in the UI defaults
[20:10]  * awilkins restarts workspace again
[20:11] <awilkins> Ok, workspace restarted... now the projects are in the same state
[20:11] <awilkins> (Connected/Error)
[20:11] <awilkins> No stack traces in the console of the calling workspace
[20:12] <awilkins> (host workspace)
[20:12] <awilkins> Verterok: Suddenly starts working after visiting the main preferences page (and not touching anything)
[20:12] <Verterok> awilkins: ohh, I see...
[20:12] <awilkins> Verterok: Let me see if that is also true when you cancel it
[20:13] <Verterok> awilkins: I think it's only doing the setup of the client when you enter the pref. page
[20:14] <awilkins> Verterok: The prefs being a singleton isn't going to work either ; I think that holds on both posix and windows, you just haven't run into it on posix because the default config is valid for posix
[20:15] <awilkins> Verterok: If you visit the prefs and cancel, it doesn't show decorators
[20:15] <awilkins> Verterok: Not for "OK" either
[20:15] <awilkins> Verterok: Or "Apply" ...
[20:16] <awilkins> Aha,
[20:16] <awilkins> To get decorators, you have to visit the decorators prefs and "OK"
[20:17] <Verterok> awilkins: yes
[20:17] <awilkins> To get functional team menu I think you have to visit main prefs and OK
[20:17] <Verterok> awilkins: I'll try to reproduce that problem
[20:17] <awilkins> You don't get decorators if you visit/ok decorators WITHOUT having functional Team menu
[20:18] <Verterok> awilkins: actually I don't fully agree with the non-singleton preferences :) but this problem has higher priority
[20:19] <awilkins> Verterok: How does the prefs dialog check to see whether it holds valid prefs if the prefs are a singleton, and not set until the prefs page is valid?
[20:20] <Verterok> awilkins: the bzr-java-lib preferences are setup when the core plugin is started (valid or invalid)
[20:20] <Verterok> awilkins: in the prefs. dialog when apply/Ok the prefs are stored with the eclipse preferencestore
[20:21] <Verterok> awilkins: and applied to the BazaarClientPreferences singleton
[20:21] <awilkins> Verterok: Yes, but the validity check in the prefs page uses the prefs to start the client - the old prefs, not the prefs in the page
[20:22] <Verterok> awilkins: oops, I missed that :P
[20:22] <awilkins> So if the old prefs are invalid, it's impossible to set new ones
[20:23] <awilkins> Verterok: So by all means have a default static prefs instance but you need to be able to create a new one in the prefs page for validation and use it
[20:23] <Verterok> awilkins: heh, I forgot about the preference listener
[20:23] <Verterok> awilkins: PreferenceStoreChangeListener
[20:29] <Verterok> awilkins: that updates the preference store of the core
[20:31] <awilkins> Verterok: Is that the bit which loads them in the first place?
[20:31] <awilkins> Verterok: Or is it that the UI prefs are never being stored in the core?
[20:32] <Verterok> a bit of both, I think I missed the update of the client when the core/ui are updated
[20:32] <Verterok> but the core prefs are used in the core start to update the client preferences
[20:32] <awilkins> Verterok: The prefs are not being saved either (but they haven't changed from the defaults)
[20:33]  * awilkins changes them
[20:33] <awilkins> If I can iron this out, I'll do some more testing and deploy it to my users (minus the SaveableEditorInput).
[20:34] <Verterok> awilkins: basically I missed a call to EclipseBazaarCore.updateClientPreferences() in PreferenceStoreChangeListener
[20:34] <Verterok> I think that adding a call to that should do the trick :)
[20:34] <mgedmin> did anyone ever already suggest bzr diff --color?
[20:35] <awilkins> mgedmin: I think there is a plugin for it
[20:36] <mgedmin> okay, then, did anyone ever suggest bundling it with bzr so that users could get pleasurable experience right out of the box?
[20:37] <Verterok> awilkins: I can add a prefs. change listener to the core and fire the client prefs. update when the core preferences change
[20:38] <awilkins> Verterok: I added the core-from-UI-prefs update line
[20:39] <awilkins> Verterok: It fixes the prefs page but not the (Connected/Error) state on init
[20:39] <Verterok> awilkins: ok, let me know if that fixes the issue
[20:40] <Verterok> awilkins: I don't fully understand what "(Connected/Error) state on init" means
[20:40] <Verterok> :P
[20:41] <awilkins> It means that when you start, the project thinks it's connected to a bzr repo, but is in an error state so only the disconnect menu item is available
[20:42] <awilkins> This goes away after visiting the Team/Bazaar prefs page
[20:42] <awilkins> The decorators remain absent until you visit the decorators pref page
[20:42] <awilkins> By the way, there is a difference between defaults, and your settings happening to BE the defaults
[20:43] <Verterok> awilkins: ok, the problem can be that a client can't be created, let me debug it a bit
[20:43] <awilkins> If you explicitly duplicate the default settings for decorators, the prefs file vanishes
[20:43] <awilkins> It should not ; an upgrade may change the defaults and you may not want that
[20:43] <awilkins> (IMHO)
[20:45] <Verterok> awilkins: I'm thinking of removing the executable-related prefs from the UI, and storing them only in the core
[20:45] <Verterok> (to avoid these weird errors, and ease maintenance)
[20:45] <awilkins> Verterok: I would agree ; my UI prefs file is empty/absent anyway
[20:46] <Verterok> awilkins: I can confirm the error state at init
[20:47] <awilkins> Verterok: You have a windows box ATM?
[20:47] <Verterok> awilkins: nope, testing on Mac OS X 10.4
[20:50] <Verterok> awilkins: but I borrowed vnc access to a win XP box from beuno's office (/me waves beuno)
[20:50] <Verterok> awilkins: I still need to configure bzr, Java, eclipse, but having a box with XP is a start :)
[20:51] <awilkins> Heh, it's good that not everyone is using Ubuntu for bzr (although I like it, and it's the primary OS of the wife and her sister)
[20:52] <awilkins> I'm using Vista ; I ran into a lovely issue with shelve ; apparently, Vista thinks that gnu patch needs admin rights because it's called "patch.exe"
[20:52] <awilkins> If you try and run it it either crashes or spams a UAC box
[20:53] <awilkins> Until you hand-edit a manifest resource into the file that tells Vista it doesn't really need Admin rights....
[20:55] <Verterok> awilkins: useful trick to know, did you added it on the wiki, your blog?
[20:55] <Verterok> awilkins: Indeed, Ubuntu is quite nice, but I primary use OS X 10.4 (my Gentoo box died a few weeks ago)
[20:56] <awilkins> Verterok: I added it to the sourceforge bug which already described it but the solution didn't work immediately (I think it caches manifests, I had to rename the file a few times)
[20:58] <Verterok> awilkins: great :)
[20:58] <awilkins> If nothing else, putting solutions to things online is always a good idea because sooner or later I run into the problem again and forget how I fixed it
[20:59] <Verterok> hehe, sure
[21:03] <Verterok> awilkins: I think error state at init can be solved by calling EclipseBazaarCore.checkClient() in EclipseBazaarCore.start
[21:04] <awilkins> Righto, back in 1 hr
[21:04] <Verterok> awilkins: seeya
[21:07] <fbond> So, `bzr replay -r 821.. foo' should replay commits 822, 823, etc., but not revision 821, right?
[21:11] <indigo> If i've moved a directory containing a repository and some checkouts to a different path, what can I do to fix the link from the checkouts to the repository?
[21:13] <indigo> guess i have to manually frob .bzr/branch/location
[21:14] <indigo> it's a little annoying that a trailing newline in that file breaks stuff
[21:37] <lifeless> indigo: bzr switch NEWPATH
[21:38] <indigo> i already mangled location manually; is that a problem?
[21:38] <indigo> it seems to be working..
[21:38] <lifeless> hi bkor
[21:38] <bkor> hey lifeless
[21:38] <lifeless> indigo: no problem, just letting you know the easier way :)
[21:38] <Odd_Bloke> Moin.
[21:38] <indigo> ah, thanks
[21:38] <indigo> too bad there isn't a bzr fix-bug-in-this-old-code :(
[21:39] <lifeless> indigo: hmm, I wonder :)
[21:39] <indigo> or even bzr get-this-old-code-to-compile
[21:40] <lifeless> jam: poing
[21:40] <bkor> indigo: $ moap code --help
[21:40] <bkor> commands:
[21:40] <bkor>   develop  develop code
[21:46] <jam>  lifeless: boing boing
[21:48] <jelmer> bkor, moap?
[21:48] <bkor> jelmer: http://thomas.apestaart.org/moap/trac
[21:48] <bkor> I use it for ChangeLog generation
[21:48] <lifeless> jam: what do you think of running with the new index format?
[21:49] <lifeless> jam: I have a network stress test still running overnight :X
[21:50] <jelmer> bkor, ah
[21:50] <schmichael> does bzr have what Hg calls "named branches"?
[21:50] <lifeless> schmichael: all of our branches are named
[21:50] <schmichael> hm... i don't think i'm using the right terms
[21:50] <Peng> schmichael: You can't have multiple branches in one directory.
[21:51] <schmichael> Peng: ah, thanks
[21:51] <lifeless> schmichael: hg has a single directory with multiple heads
[21:51] <lifeless> schmichael: and the ability to name some revisions with movable labels which they then call named branches
[21:51] <lifeless> schmichael: such things being useful because they reuse the working tree and repository
[21:52] <lifeless> schmichael: we allow working tree reuse and repository sharing, but we do it by doing:
[21:52] <lifeless> bzr init-repo --no-trees REPO
[21:52] <lifeless> bzr branch THING REPO/NAME
[21:52] <lifeless> bzr checkout --lightweight REPO/NAME WORKINGTREE
[21:53] <lifeless> cd WORKINGTREE; bzr switch NAME2; bzr switch NAME3 etc
[21:53] <schmichael> bzr switch?
[21:53] <schmichael> i'm not familiar with what that does...
[21:53] <jam> new this-week: http://jam-bazaar.blogspot.com/2008/07/this-week-in-bazaar.html
[21:53] <lifeless> schmichael: it switches a checkout from one branch to another
[21:54] <schmichael> lifeless: great!  thanks!
[21:54] <lifeless> schmichael: no problem :)
[21:55] <Peng> hg's named branch aren't really moving labels.. They're tied into the revision's meta data.
[21:58] <schmichael> Peng, lifeless: thanks!
[21:59] <Peng> branches*
[21:59] <lifeless> Peng: you can commit to a named branch right?
[21:59] <lifeless> (in hg)
[21:59] <Peng> I dunno. I think it might just be that when you commit, if the parent is a named branch, the new revision inherits the name.
[22:00] <lifeless> Peng: I thought it was a list of names that pointed at revision hashes
[22:00] <schmichael> lifeless: yes
[22:00] <schmichael> well actually i'm trying to learn the details of named branches in hg
[22:00] <lifeless> Peng: because otherwise you can't delete them
[22:00] <schmichael> they're new and kind of awkward
[22:01] <Peng> lifeless: Correct, you can't delete them.
[22:01] <lifeless> Peng: oh, I didn't realise that. ouchies.
[22:01] <Peng> I think git's named branches are basically automatically-moving tags.
[22:01] <lifeless> way to make the cost of using them permanent
[22:01] <lifeless> Peng: they are
[22:02] <lifeless> Peng: they are reasonably sensible, except for the all-in-one-place aspect
[22:02] <lifeless> (I have a bzr repo with 13K branches; don't really want that in a plain text file thanks)
[22:02] <Peng> Heh.
[22:03] <Peng> Does bzr perform well if you have thousands of tags?
[22:03] <jam> lifeless: well, at one point hg's named branches were a simple 'tag' that said "follow descendants until you have no more". And it was actually committed into the repository, I don't know if they have moved away from that yet or not.
[22:04] <lifeless> Peng: not terribly well; but tags are per-branch, not per-repository, so they are cappable and trimmable without losing history
[22:04] <jam> I think our scaling for N branches in a shared repository is quite good. I don't know that we have tuned the tag support as much
[22:05] <jam> lifeless: as for running with the current btree stuff.... we certainly can
[22:05] <jam> I was hoping to have one more poke at a whole-pack bloom filter
[22:05] <jam> Since I worked out resizing, etc.
[22:06] <jam> It would be a single global one, so the code gets to be a bit simpler as well
[22:09] <lifeless> jam: sure, I think we can spend our respective fridays tweaking
[22:09] <lifeless> jam: I want to  validate that networking doesn't blow
[22:09] <jam> lifeless: yeah, though we have a pretty good idea that it won't
[22:09] <lifeless> did you see iter_random_one_reload ?
[22:09] <jam> the only thing I might add is speculative reading of extra pages over a network link
[22:10] <jam> lifeless: not specifically
[22:10] <jam> I think I saw you performance testing it
[22:10] <lifeless> it does iter_random_one
[22:10] <jam> with what 800s => 200s ?
[22:10] <lifeless> but reloads the index on each key
[22:11] <lifeless> so its testing raw 'key a get' speed
[22:11] <lifeless> (e.g. no cache for either index)
[22:11] <jam> sure
[22:11] <lifeless> its been running for 12 hours now over the network :P
[22:11] <jam> lifeless: but if you really wanted to be mean, you should run "drop_caches()" inbetween
[22:12] <jam> lifeless: ouch...
[22:12] <lifeless> jam: network - what cache :P
[22:12] <lifeless> jam: (and I do)
[22:12] <jam> lifeless:  of course, that makes your machine pretty useless while it is running :)
[22:12] <lifeless>         for key in order:                                                                                 |from bzrlib.repository import format_registry as repo_registry
[22:12] <lifeless>             index.iter_entries([key]).next()                                                              |repo_registry.register_lazy(
[22:12] <lifeless>             if reload_index:                                                                              |    'Bazaar development format - btree-rich-root (needs bzr.dev from 1.6)\n',
[22:12] <lifeless>                 index = factory(index._transport, index._name, index._size)                               |    'bzrlib.plugins.index2.repofmt',
[22:12] <lifeless>                 drop_cache()                                                                              |    'RepositoryFormatPackBTreeRichRoot',
[22:12] <jam> um, some weird side-by-side pasting there
[22:13] <lifeless> yes, wrong vim copy command
[22:13] <lifeless> :)
[22:15] <mwhudson> hmm, we need 'bzr unpack' to test btrees better :)
[22:15] <jam> lifeless: so... who's to blame for 12 hours worth of pain?
[22:15] <lifeless> well
[22:15] <lifeless> 99704 keys
[22:15] <lifeless> say 3 IO's per key on btree for the larger index
[22:15] <jam> Is btree better/worse than graph at that point (I fully expect it to be better)
[22:16] <lifeless> 0.3 seconds per IO -> 1 sec per key
[22:16] <jam> and is this what, you to Chinstrap?
[22:16] <lifeless> yes
[22:16] <jam> that's about 1 day
[22:16] <lifeless> yes
[22:16] <jam> 1.15 days
[22:16] <lifeless> I didn't think it out in advance you see :)
[22:16] <lifeless> so I'm going to stop it :)
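The arithmetic above checks out as a quick back-of-envelope, using the numbers lifeless gives (99704 keys, ~3 IOs per key on the larger index, ~0.3 s per round trip):

```python
# Sanity check of the runtime estimate discussed above.
keys = 99704
ios_per_key = 3          # btree depth for the larger index
seconds_per_io = 0.3     # observed network round-trip time

seconds_per_key = ios_per_key * seconds_per_io   # ~0.9, call it 1
total_days = keys * 1.0 / 86400                  # at 1 second per key

print(round(seconds_per_key, 1))  # 0.9
print(round(total_days, 2))       # 1.15
```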
[22:17] <jam> lifeless: hence why we should be doing speculative reading of extra pages
[22:17] <lifeless> and do 20 keys
[22:17] <jam> a round-trip should never be < 64k, or something like that
[22:17] <lifeless> jam: so there are two things I'd like to see
[22:17] <lifeless> I'd like the last page to be allowed to be truncated
[22:17] <lifeless> so that a small index is, well, small
[22:17] <jam> sure
[22:18] <jam> I saw that in the backlog
[22:18] <jam> what you need is tail recursion, or whatever it was called :)
[22:18] <lifeless> and I'd like to read more internal nodes
[22:18] <jam> Otherwise if they are separate, you're going to get the same disk space regardless
[22:18] <lifeless> they are laid out as they are to allow the read-more-than-one optimisation
[22:18] <jam> lifeless: you mean internal nodes up front ? yeah
[22:19] <lifeless> jam: for instance, reading 64K on a > 64K index in the first read
[22:19] <lifeless> and then any read covering internal nodes do the same
[22:19] <lifeless> but only read 4K for leaf node requests
[22:20] <jam> I would just try to spread it out, such that if we request 16 4k disjoint pages, we don't actually try to get 16-64k pages
[22:20] <lifeless> e.g. do range(node, node+16)
[22:20] <jam> more of a "if there is some more room, grab these as well"
[22:20] <lifeless> jam: sure
[22:20] <lifeless> in fact
[22:20] <lifeless> the single key workflow is probably best kept at 'minimise IO'
[22:21] <lifeless> but when you have two or three keys being asked for you can see graph walking happening, so start being aggressive
[22:21] <jam> lifeless: interesting thought
[22:21] <jam> and probably quite reasonable
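A rough sketch of that heuristic: stay minimal for a single-key lookup, but once several pages are wanted at once (graph walking), speculatively pull in adjacent pages up to a budget. The function name, neighbour window, and budget are invented for illustration, not bzr's actual index code:

```python
def expand_offsets(wanted, total_pages, budget=16):
    """Speculatively widen a set of requested btree page offsets.

    Hypothetical sketch of the read-ahead idea discussed above:
    a single-key request is left alone (minimise IO), but a
    multi-page request grabs immediate neighbours while there is
    room in the page budget.
    """
    wanted = sorted(set(wanted))
    if len(wanted) <= 1:
        return wanted  # single-key workflow: minimise IO
    result = set(wanted)
    for page in wanted:
        for neighbour in (page - 1, page + 1):
            if 0 <= neighbour < total_pages and len(result) < budget:
                result.add(neighbour)
    return sorted(result)
```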
[22:21] <lifeless> jam: next thing I'm going to do though, is to write a in-place upgrader for these repository formats
[22:23] <jam> lifeless: not very hard, right? Just write the indexes to upload, rename into place?
[22:23] <lifeless> jam: roughly yes
[22:24] <jam> Have you ever considered packing indexes without packing the data?
[22:24] <lifeless> jam: create a new indices dir with the right contents, save the old format, remove the format marker, pivot the indices dirs, write the final format marker, remove the saved format marker
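The "pivot the indices dirs" step in that sequence comes down to two renames: move the live directory aside, then move the freshly built one into place. A sketch under stated assumptions; the directory names and layout here are illustrative, not bzr's actual on-disk format:

```python
import os

def pivot_dirs(root, live_name, fresh_name, saved_name):
    """Swap a freshly built directory into place of the live one.

    Illustrative version of the pivot described above: the old data
    is kept (renamed aside) until the caller decides the upgrade
    succeeded, so a failure partway leaves everything recoverable.
    """
    live = os.path.join(root, live_name)    # e.g. 'indices'
    fresh = os.path.join(root, fresh_name)  # e.g. 'indices.new'
    saved = os.path.join(root, saved_name)  # e.g. 'indices.old'
    os.rename(live, saved)   # preserve the old indices
    os.rename(fresh, live)   # new indices take over
```

On POSIX each `rename` is atomic, though the two-step swap as a whole is not; that is why the surrounding format-marker dance lifeless describes matters.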
[22:25] <lifeless> jam: well, we didn't have indices that could be anything other than optimal. or do you mean one index N packs ?
[22:25] <jam> lifeless: right
[22:25] <jam> The idea is to get the benefit of 1 index
[22:26] <jam> without the cost of moving the data around
[22:26] <jam> Just an idea that came up
[22:26] <jam> I haven't thought about it in-depth
[22:26] <lifeless> one index N packs would make ensuring integrity and so on quite a bit harder
[22:27] <lifeless> is citeseer working for you?
[22:28] <jam> lifeless: http://citeseer.ist.psu.edu/ is down
[22:28] <lifeless> dang, was going to point you at LSM trees
[22:28] <lifeless> and re-read it myself
[22:28] <jam> Well, there was also something about CSB+ trees that were supposed to be cache sensitive
[22:29] <jam> but that was more for in-memory DB
[22:29] <jam> lifeless: http://www.google.com/search?q=lsm+trees&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a
[22:29] <jam> First link is a postscript file on www.cs.umb.edu
[22:30] <lifeless> let's run some arbitrary postscript over the web
[22:31] <lifeless> yah thats it
[22:31] <lifeless> a bunch of similarities with what we do
[22:36] <jam> well, we run arbitrary pdf all the time, right?
[22:37] <lifeless> :)
[22:39] <jam> lifeless: what key is best for Andrew Bennetts?
[22:39] <lifeless> spiv
[22:39] <jam> he seems to have 3 registered with subkeys.pgp.net
[22:39] <jam> oh, sorry, that is michael hudson who has multiples
[22:39] <jam> mwhudson: ^^^??
[22:40] <jam> And it seems he doesn't have any of them in *my* web of trust
[22:41] <mwhudson> jam: ah right, i kept losing hard drives with private keys on them :(
[22:41] <mwhudson> jam: the one on lp.net is right
[22:46] <lifeless> I was thinking of http://www.freenet.org.nz/python/embeddingpyrex/
[22:47] <lifeless> for startup speed
[22:47] <jam> lifeless: except you still have to "import bzrlib" which is the whole cost
[22:48] <jam> now if we could get pyrex to compile bzrlib....
[22:48] <jam> that would be interesting
[22:48] <lifeless> jam: right, thats the point
[22:48] <lifeless> I'm pretty sure a single .so can provide > 1 modules
[22:48] <lifeless> with a little love
[22:49] <jam> lifeless: well, you could have 'bzrlibmodule.so'
[22:49] <jam> and that would clearly be able to have "bzrlib.foo"
[22:49] <jam> I'm not 100% sure about "from bzrlib.foo.bar.baz import bing"
[22:50] <lifeless> loading is left to right
[22:50] <lifeless> so it should work
[22:51] <jam> well, standard attributes would confuse the __import__ stuff, but if you tricked it into thinking they were modules
[22:54] <lifeless> jam: types.ModuleType
[22:54] <lifeless> jam: by 'trick' I hope you mean 'make them modules' :)
[22:55] <mwhudson> the fact that 'from os.path import join' works means this isn't too hard
[22:55] <mwhudson> (os not being a package, and os.path being around from before there _were_ packages)
[22:58] <jam> mwhudson: it sure does make it hard to track down where to find the *code* for os.path.foo though :)
[22:59] <mwhudson> jam: not as hard as pyrexing it all up :)
[22:59] <jam> Especially when the function is "nt.lstat()" but the code for it is found in 'posixmodule.c'
[22:59] <mwhudson> oh yes, that is a good trick
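The os.path trick mwhudson mentions can be sketched in a few lines: a plain module (not a package on disk) can expose a "submodule" simply by binding a module object to an attribute and registering it in sys.modules, which is exactly how `from os.path import join` works. The `fakepkg` names below are invented for illustration.

```python
import sys
import types

# Create a parent module and a child module entirely in memory.
# Neither corresponds to a file or package on disk.
parent = types.ModuleType("fakepkg")
child = types.ModuleType("fakepkg.sub")
child.answer = 42

# Bind the child as an attribute (so fakepkg.sub works) and register
# both in sys.modules (so the import machinery can find them).
parent.sub = child
sys.modules["fakepkg"] = parent
sys.modules["fakepkg.sub"] = child

# Now the "from package import name" form works against a non-package,
# just as it does for os.path.
from fakepkg.sub import answer
```

This is the same mechanism a hypothetical single `bzrlibmodule.so` would need: populate sys.modules with one entry per logical submodule at init time, and left-to-right import resolution does the rest.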
[23:03] <lifeless> jam: design thought
[23:04] <lifeless> jam: how does this sound: 'to convert repository instance Y into a Z, we use an InterRepositoryRepositoryFormat instance'
[23:04] <jam> to simplify all cut&paste you do each time?
[23:04] <lifeless> well currently we do a full fetch
[23:04] <lifeless> so this is new code
[23:05] <lifeless> I'm proposing that as the design/lookup
[23:07] <jam> I thought we already had upgrade/downgrade infrastructure
[23:07] <lifeless> we do
[23:07] <lifeless> MetaToMeta does
[23:07] <lifeless> if repo._format != desired_format:
[23:07] <lifeless>    init_new
[23:07] <lifeless>    new.fetch(repo)
[23:07] <jam> ouch
[23:07] <lifeless>    pivot()
[23:08] <lifeless> which is fine, it works and is very generic
[23:08] <lifeless> and until now we haven't had a meta format which would benefit from special casing
[23:08] <lifeless> (except perhaps plain -> richroot). anyhow:
[23:08] <lifeless> what do you think of the approach.
[23:09] <jam> lifeless: well, interestingly enough, you already have:
[23:09] <jam> # TODO: conversions of Branch and Tree should be done by
[23:09] <jam> # InterXFormat lookups
[23:09] <jam> But not a comment for Repository
[23:10] <jam> so... I'm okay with it, though it is a layer of abstraction, which I don't think will see a lot of benefit
[23:10] <jam> as we won't have tonnes of Repo<=>Repo converters that benefit
[23:10] <lifeless> it's pluggable though, which is a win, I think
[23:10] <jam> versus the work that has to be done anyway to make 'fetch()' work well.
[23:11] <lifeless> sure
[23:11] <jam> so, I'm fine with the idea, though I wouldn't work terribly hard to write the code just yet myself
[23:11] <jam> I like being able to have plugins provide special tools for their customizations
[23:12] <jam> it is just code to support, and maintain, and etc.
[23:12] <jam> For 1 plugin which is going to be core RSN
[23:12] <lifeless> jam: well, I've written several before that would have benefited
[23:13] <lifeless> and I'm expecting we'll keep hacking on index2 for a while; it's just going to be a code dump to get the first generation into core
[23:13] <jam> lifeless: reasonable enough, if you've seen a need for it, it is reasonable to bring in
[23:13] <jam> <== is a big fan of plugins
[23:13] <jam> having written the original import code :)
[23:14] <jam> and a slew of plugins myself
[23:14] <lifeless> :)
[23:15] <jam> lifeless: so, would it be reasonable for me to strip out the per-internal-node bloom code
[23:15] <jam> I don't think we'll see much benefit there
[23:15] <jam> though I can leave it if you want
[23:16] <lifeless> jam: I'd kind of like to see network impact of 50% blooms, that sort of thing
[23:16] <jam> ok
[23:16] <jam> I'll leave it if you are still poking at it
[23:16] <lifeless> jam: so unless it's in the way, I'd rather leave it - but of course make it not cost [much] when not in use
[23:17] <jam> I was going to change how blooms get generated, but I can do so compatibly
[23:17] <lifeless> don't worry about disk layout compatibility, anyone using this today can convert back :P
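For reference, the kind of structure being measured above is a plain Bloom filter: a bit array plus k hash functions, where shrinking the array (a "50% bloom") trades space against a higher false-positive rate. This toy version is illustrative only; it says nothing about how bzr's index2 blooms are generated or laid out on disk.

```python
import hashlib


class Bloom:
    """Toy Bloom filter: k salted SHA-1 hashes over a num_bits bit array."""

    def __init__(self, num_bits, num_hashes):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # the bit array, kept as one big int for simplicity

    def _positions(self, key):
        # Derive k bit positions by salting the key with the hash index.
        for i in range(self.num_hashes):
            digest = hashlib.sha1(b"%d:%s" % (i, key)).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def __contains__(self, key):
        # No false negatives; false positives occur when all k bits collide.
        return all(self.bits & (1 << pos) for pos in self._positions(key))
```

A per-internal-node bloom would keep one small filter like this alongside each internal index node, letting a reader skip a child page without fetching it; the question raised above is whether the network savings justify the extra bytes at each node.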