[00:00] <lifeless> it will make subunit-ls clearer, so if you get the time to review, assume its gone :)
[00:00] <mwhudson> ronny: have you read "xUnit test patterns"?
[00:01] <lifeless> spiv: stacked [merge] sent
[00:02] <ronny> mwhudson: no
[00:03] <mwhudson> ronny: the first 100 pages or so are well worth reading for provoking thoughts about testing
[00:03] <ronny> hmm, added to my should read list
[00:04] <mwhudson> it's unfortunate that you have to buy the other 700 pages too :)
[00:04] <ronny> ew
[00:18] <edcrypt_> igc: ping
[00:22] <igc> edcrypt_: pong
[00:25] <edcrypt_> igc: hi, I'm the one who made that bug report about views
[00:25] <edcrypt_> igc: what is the view-aware API to iterate on the WT?
[00:26] <igc> edcrypt_: there probably isn't one ...
[00:26] <igc> I tend to trap most things at the UI layer inside builtins.py ...
[00:27] <igc> so that the list of files is automagically passed to commands without needing to change their internals
[00:27] <igc> edcrypt_: there are exceptions though ...
[00:27] <lifeless> igc: shouldn't it filter at the tree?
[00:27] <igc> diff comes to mind
[00:27] <lifeless> igc: so that plugins work
[00:28] <igc> lifeless: it depends
[00:28] <igc> whether to use a filtered view needs to be considered on a case by case basis
[00:29] <edcrypt_> igc: WorkingTree.inventory.iter_entries_by_dir() gives all files, but probably I shouldn't be using it
[00:30] <igc> edcrypt_: so if you look at the code for cmd_diff in builtins.py, you'll see it passes the apply_view flag down to the next layer
[00:31] <igc> edcrypt_: you certainly can use that API, you'll just need to post filter the results by seeing if the path falls inside the view or not
[00:31] <igc> that's not hard - see the examples at the top of builtins.py (like internal_files_for_add say)
[00:32] <edcrypt_> igc: ok, taking a look, thanks!
[00:32] <igc> np
[00:32] <igc> edcrypt_: are you looking at fixing ls for me? :-)
[00:33] <lifeless> igc: in what cases is the answer 'no' so far?
[00:33] <igc> lifeless: merge, log, info at least
[00:34] <edcrypt_> igc: hehe, maybe, I have just learned how to create a plugin!
[00:37] <edcrypt_> igc: _check_path_in_view()
[00:38] <edcrypt_> oops, diff._check_path_in_view()
[00:40] <spiv> lifeless: reviewed, bb:approve
[00:42] <lifeless> woo
[00:42] <igc> edcrypt_: might be good to make that a public function in views.py instead of a private one in diff.py.
[00:47] <edcrypt_> igc: indeed
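The post-filtering igc describes (and the `_check_path_in_view` helper edcrypt_ found in diff.py) boils down to a prefix test on paths. A minimal stand-alone sketch - the helper names and the "view as a list of path prefixes" representation are illustrative assumptions, not bzrlib's actual API:

```python
def path_in_view(path, view_files):
    """Return True if `path` is one of the view files or lies under one.

    Hypothetical helper for illustration; a view is modelled here as a
    plain list of relative paths.
    """
    for view_path in view_files:
        view_path = view_path.rstrip('/')
        if path == view_path or path.startswith(view_path + '/'):
            return True
    return False


def filter_by_view(entries, view_files):
    """Post-filter (path, entry) results from an iteration API."""
    return [(path, entry) for (path, entry) in entries
            if path_in_view(path, view_files)]
```

The `startswith(view_path + '/')` check matters: a bare prefix test would wrongly match `srcfoo/a.py` against a view on `src`.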
[00:57] <lifeless> spiv: ok, thats sent.
[01:08] <jelmer> luke-jr, hi
[01:08] <igc> poolie: fyi, here's my plan for today
[01:08] <jelmer> luke-jr, any chance you can comment on bug 326278?
[01:08] <igc> 1. get usertest tweaked with some of the changes on the roadmap
[01:09] <lifeless> jelmer: ping
[01:09] <jelmer> lifeless, pongz0rz
[01:09] <igc> 2. ping you and we'll update orchadas
[01:09] <igc> 3. land content filtering after chatting with you
[01:09]  * igc bbiab - food calling
[01:11] <lifeless> jelmer: could you do a pass over your subunit branches, and submit for merge where relevant
[01:12] <jelmer> lifeless, k
[01:13] <lifeless> thanks
[01:14] <poolie> igc, sounds good
[01:14] <poolie> my laptop won't boot, i'm guessing because of bug 333073 in jaunty :/
[01:14] <poolie> that gave me a bit of a setback
[01:14] <poolie> but let's talk anyhow when you're done
[01:17] <garyvdm> Hi - when I run bzr shelve - I get the following prompt:
[01:17] <garyvdm> Shelve? [yNfq]
[01:18] <garyvdm> yes No f???? q????
[01:18] <garyvdm> Where can I find more info. bzr help shelve does not say anything.
[01:19] <poolie> q is quit
[01:19] <poolie> does ? do anything?
[01:20] <garyvdm> no
[01:20] <garyvdm> no - ? seems to cause no, which is the default action - lol
[01:20] <poolie> srsl? :(
[01:21] <mwhudson> f flips the defaults
[01:21] <lifeless> Odd_Bloke is working on this I thought
[01:22] <garyvdm> mwhudson - I just tested that - f seems to do a "full" diff?
[01:23] <mwhudson> oh
[01:23] <mwhudson> i'm wrong then
[01:24] <garyvdm> ok - it's easy to figure out - but irritating.
[01:30] <poolie> it should definitely be able to give you help
[01:31] <poolie> igc, back in 5
[01:37] <lifeless> spiv: ok with that sent off I'm looking at the next round trip
[01:42] <lifeless> spiv: another regression - we're not using find_repository to do the iteration for us
[01:42] <lifeless> we're walking up dirs manually
[01:44] <spiv> lifeless: actually, I'm not sure if that ever worked...
[01:44] <spiv> But yes, it'd be good to fix that.
[01:44] <spiv> Also, coffee is good...
[01:44]  * spiv does something about that
[01:44] <lifeless> I'm fairly sure it did at one time ;)
[01:44] <lifeless> but I could be wrong
[01:45] <spiv> Me too :)
[01:51] <jml> new small lp-open patch just sent to mailing list.
[01:52]  * jml doubles as an IRC notification bot.
[01:53] <mwhudson> :)
[02:04] <lifeless> spiv: I want to delete InterPackToRemotePack
[02:09] <poolie> igc, i'm free now if you are
[02:09] <poolie> lifeless: +1 (minus a bit for not being right there in the code atm)
[02:17] <spiv> lifeless: so that was only doing two things
[02:17] <spiv> lifeless: (IIRC).  calling the autopack RPC, and making sure _walk_to_common_revisions walks 50 revs at a time
[02:18] <spiv> lifeless: the streaming push takes care of the former, I think the latter might be taken care of by the InterOtherToRemote?
[02:18] <spiv> lifeless: if so, I don't see any reason to keep it either
[02:21] <lifeless> spiv: yeah
[02:21] <lifeless> spiv: thats why I want to delete it; and am doing so ;)
[02:21] <spiv> Ok.  +1 :)
[02:21] <spiv> I'm glad to see the set of InterRepos shrinking for once :)
[02:22] <lifeless> http://paste.ubuntu.com/121655/
[02:22] <lifeless> I'd like to just toss that at pqm - if it passes, land; if it fails, I'll look into it
[02:22] <lifeless> spiv: ^
[02:27] <spiv> lifeless: sure, if it passes, then +1
[02:27] <spiv> lifeless: or rather, bb:approve :)
[02:36] <mwhudson> jml: your blug reports footnote doesn't seem to exist
[02:39] <jml> mwhudson: mea culpa
[02:43] <mwhudson> i think i just made the annotate view in loggerhead twice as fast
[02:44] <lifeless> mwhudson: good
[02:44] <lifeless> :)
[02:44] <mwhudson> by not calling annotate_iter twice!
[02:44]  * mwhudson sighs
[02:46] <lifeless> mwhudson: its amazing how little things can make a difference :)
[02:46] <lifeless> mwhudson: I have a suggestion for you
[02:46]  * mwhudson listens
[02:46] <lifeless> this is for testing
[02:47] <lifeless> create a little delegate branch
[02:47] <lifeless> which logs all the calls made to it
[02:47] <lifeless> and has a repository object it can return which likewise does the same for the branches repository
[02:47] <lifeless> you can do two things with this
[02:47] <lifeless> when asking 'why is X slow' you can see the api calls being made
[02:48] <lifeless> and secondly you can write a test that no more than a certain number of calls/exact calls etc are made
[02:49] <mwhudson> hmm
[02:49] <lifeless> the former is useful for exploring
[02:49] <lifeless> the latter is useful for locking down a behaviour ratchet
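lifeless's suggestion can be sketched generically: a delegate that forwards attribute access to the wrapped object and records every method call, so a test can inspect the call trace ("why is X slow?") or assert a ceiling on the number of calls. This is an illustrative stand-in, not an actual bzrlib test helper:

```python
class CallLogger:
    """Wrap any object and record every method call made through it.

    A generic sketch of the 'delegate branch' idea; the real thing
    would wrap a bzrlib Branch and also hand back a similarly wrapped
    repository object.
    """

    def __init__(self, wrapped, calls=None):
        self._wrapped = wrapped
        # Share `calls` between several loggers to get one unified trace.
        self.calls = calls if calls is not None else []

    def __getattr__(self, name):
        # Only invoked for attributes not found on CallLogger itself,
        # so plain data attributes pass straight through.
        attr = getattr(self._wrapped, name)
        if not callable(attr):
            return attr

        def proxy(*args, **kwargs):
            self.calls.append(name)  # record the call, then delegate
            return attr(*args, **kwargs)
        return proxy
```

A behaviour-ratchet test then becomes a plain assertion, e.g. `assert len(logger.calls) <= 3`, which fails the moment a change adds extra round trips.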
[02:51] <mwhudson> certainly, something i've been doing lately is removing the layers between loggerhead and bzrlib
[02:52] <mwhudson> and it was in the process of doing this that i found this problem
[02:58] <lifeless> spiv: I'd like to make RemoteRepository.*write_group* no-ops like knit/weave repositories
[03:00] <spiv> lifeless: the _real_repo would still need to have a write group, though?
[03:00] <lifeless> well
[03:00] <spiv> lifeless: or are we avoiding enough vfs ops now?
[03:01] <lifeless> as a start point we may find points that break
[03:01] <lifeless> at the moment though it and repo.lock_write are triggering _ensure_real
[03:01] <lifeless> actually
[03:01] <lifeless> lock_write doesn't trigger _ensure real
[03:01] <lifeless> but its a noop for pack repos
[03:03]  * spiv nods
[03:04] <lifeless> light weight commits will still need vfs
[03:04] <lifeless> but bound commit shouldn't
[03:05] <lifeless> calls 13 through 18 are RemoteRepository.start_write_group(), in the acceptance test
[03:18] <lifeless> spiv: ok, BranchFormat.network_name next I think
[03:18] <lifeless> spiv: ping
[03:18] <lifeless> bah
[03:18] <lifeless> spm: ping
[03:18] <spm> lifeless: pong
[03:18] <lifeless> spm: I think pqm is hung, can you check please?
[03:18] <spm> sure
[03:23] <jml> did jelmer's clean-tree patch get reviewed?
[03:27] <spm> lifeless: it was wedged pretty good. should be rockin again
[03:27] <lifeless> spm: did you note the cause?
[03:28] <spm> looked like the old email thing; but that didn't fix it. the sendmail was then hanging around as a defunct process. ie still wedged.
[03:28] <lifeless> ok, I'll try the merge again
[03:33] <spm> damn. it's rewedged itself. looks like on the original one too.
[03:34] <igc> poolie: I'm back
[03:34] <lifeless> spm: ok, patch is bad anyhow, so I suggest just removing the request from me and killing the bzr child
[03:35] <lifeless> spm: you shouldn't ever kill the pqm parent when a test is wedged
[03:35] <igc> is it just me or does bazaar-vcs.org seem really slow these last few days?
[03:36] <spm> lifeless: yeah - that was a pebkac. :-( did the chroot kill without a dir specified. clobbered everything :-( x 3
[03:36] <lifeless> spiv: we need a 'refresh your data' api for all repositories
[03:36] <lifeless> spiv: because this streaming push can work for knits, but we can't let it try :>
[05:17] <poolie> igc, hi?
[05:18] <igc> hi poolie
[05:19] <igc> poolie: you ready to do this orchadas stuff?
[05:19] <poolie> hey
[05:19] <poolie> i am
[05:19] <poolie> i hit an ubuntu bug 332270 that stopped my laptop booting and messed up my day a bit
[05:20] <igc> poolie: yum
[05:20] <igc> poolie: let's start by having a quick call maybe?
[05:20] <poolie> that's why it's called an alpha
[05:20] <poolie> good idea
[05:20] <poolie> i'll call your home?
[05:21] <igc> sure
[05:22] <jam> lifeless: I have some theoretical reasons why multiparent with xdelta is smaller than gc, care to have a call about it tomorrow?
[05:22] <poolie> hello jam
[05:22] <jam> hi poolie
[05:23] <jam> just heading off to bed after perusing the ~200 emails that came in over the weekend...
[05:23] <lifeless> jam: I'd be delighted
[05:23] <jam> lifeless: k, ping me when you wake up
[05:23] <jam> I'm going to sleep now
[05:35] <lifeless> jam: ciao
[05:36] <lifeless> jam: I'd expect the theoretical reason is delta composition rather than recipes
[05:36] <lifeless> but see recipes for why gc is fast at extraction
[05:37] <maco> um, bzr just said "bzr: ERROR: exceptions.AttributeError: children" and aborted the commit i told it to do. how do i make it successfully commit?
[05:38] <lifeless> maco: what bzr version?
[05:39] <maco> 1.12
[05:42] <lifeless> maco: uhm
[05:43] <lifeless> maco: is there more in ~/.bzr.log ?
[05:52] <maco> lifeless: it spit out a backtrace
[05:52] <maco> i'll pastebin it
[05:57] <maco_> kernel panicked
[06:02] <maco> http://paste.ubuntu.com/121690/
[06:02] <maco> lifeless: ^
[06:09] <lifeless> maco: can you pastebin the output from 'bzr st' please
[06:09] <lifeless> (parent appears not to be a directory)
[06:10] <maco> just "added: .kde@"
[06:11] <lifeless> so .kde is a symlink
[06:11] <maco> i just made ~/config-backup, cd'd there, and did "bzr init" then tried to do "bzr add ~/.kde/share" and some other directories, but it didn't like that, so i made the symlink
[06:11] <maco> yes
[06:11] <lifeless> ok
[06:12] <lifeless> so we'll make a bug for the error, because it shouldn't be happening; but if you're trying to backup your .kde directory, a symlink to it won't do that - bzr will version the symlink
[06:12] <maco> i didnt give it ~/.kde to version though
[06:12] <edcrypt_> igc: merge request sent (make ls aware of views)
[06:12] <maco> i gave it ~/.kde/share and ~/.kde/Autostart and ~/.kde/env
[06:12] <lifeless> you need to either make ~/.kde a symlink to your bzr tree, or actually bzr init in ~/.kde
[06:13] <lifeless> maco: you did 'ln -s ~/.kde' inside config-backup didn't you?
[06:13] <maco> yes
[06:13] <igc> edcrypt_: thanks!
[06:13] <maco> but then i did bzr add on .kde/share, not on all of .kde
[06:14] <edcrypt_> igc: np
[06:14] <lifeless> maco: right, so I think I know what the bug is
[06:14] <lifeless> maco: however fixing it won't do what you want, so give me a second to record the issue
[06:14] <maco> was trying to get around it giving "NotBranchError: Not a branch: "/home/maco/.kde/share/"." when i did "bzr add ~/.kde/share"
[06:14] <maco> ok
[06:15] <poolie> igc, ok, i got a report with just one tool running
[06:15] <maco> is it not possible to give an absolute path to bzr add?
[06:15] <poolie> now i'll try it with some more though that will be slower
[06:15] <lifeless> maco: no, its not possible
[06:16] <lifeless> maco: bzr requires that all the files it versions are underneath the directory containing the .bzr control dir - the 'tree' to be versioned
[06:16] <lifeless> maco: roughly, thats where you do 'bzr init'
[06:16] <lifeless> maco: so if you want to version ~/.kde/share, you must have done 'bzr init' in one of: ~/.kde/share, or ~/.kde, or ~
[06:18] <lifeless> maco: I've filed a bug for the specific error you're getting, but the actual fault is in 'bzr add' :(
[06:19] <maco> lifeless: ok thanks. a friend suggested avoiding putting it in ~/.kde in case of the next time i blow away ~/.kde, but i'll just have to remember to keep a backup of the .bzr that's in there
[06:19] <lifeless> maco: put it in ~/.kde like so:
[06:20] <lifeless> rm -rf ~/config-backup; cd ~/.kde; bzr init; bzr add [things]; bzr commit -m "start versioning ~/.kde"; bzr push ~/config-backup
[06:20] <igc> poolie: cool
[06:20] <lifeless> maco: now config-backup is a backup
[06:20] <maco> ah ok good idea. thanks!
[06:21] <lifeless> my pleasure
[06:30] <poolie> http://benchmark.bazaar-vcs.org/usertest/summary.html
[06:46] <maco> if i do "bzr remove" it doesnt remove the file, right? just stops doing version control on it?
[06:46] <poolie> if you do rm --keep it will do that
[06:49] <maco> or well...how do i tell it to not track anything in share/apps/kmail/*/*? bzr ignore? and will that get rid of its past control of those files or not?
[06:49] <poolie> if you have already added them, and you want to just ignore them in future you need to both remove --keep and also ignore them
[06:51] <poolie> igc, still here?
[06:53] <maco> ok
[06:53] <maco> thanks
[06:53] <igc> poolie: yep
[06:54] <poolie> i'd like to make something that will feed usertest output into rrd
[06:54] <poolie> so we have an over-time plot of results
[06:54] <maco> er, was i supposed to ignore or remove first? bzr diff still shows stuff from those files
[06:54] <lifeless> maco: bzr ignore ./share/apps/kmail/
[06:54] <poolie> i'd probably start with getting the total scenario time for each tool
[06:54] <poolie> and recording that each time we execute
[06:54] <lifeless> maco: ignore and remove
[06:57] <poolie> so firstly, is this redundant with something you've done
[06:57] <maco> ok thanks
[06:57] <igc> poolie: no, I'm yet to produce rrd friendly output
[06:57] <poolie> bearing in mind it has to run on a dapper machine with no added software
[06:57] <poolie> and secondly, what should i start from?
[06:57] <poolie> oh
[06:58] <poolie> actually for the overall time, now that we're running each test separately, it's obviously easy
[06:58] <poolie> to do it from the driver script
[06:58] <poolie> i'll do that first then see how we go
[06:58] <igc> poolie: ok
[06:58] <igc> poolie: btw, new usertest just pushed with Description column added to the html report
[06:59] <igc> poolie: that will reduce the # of times I need to explain what each task/action does :-)
[06:59] <poolie> scalability :)
[07:00] <igc> rev 151 also fixes the colouring when multiple projects are given in the one html report
[07:00] <igc> i.e. colouring is per project, not just against the first column :-(
[09:10]  * igc dinner
[09:44] <Odd_Bloke> lifeless: I have a branch to fix the prompt up.  I'll send it in now (as the branch it depends on has been merged now).
[10:52] <ronny> yo
[11:01] <Peng_> lifeless: FWIW, I always run latest bzr.dev, and the streaming push work hasn't broken anything for me. I don't use stacking though.
[11:18] <spiv> Peng_: that's good to know.  Does that include using bzr.dev on the server?
[11:34] <Peng_> spiv: Yes.
[11:35] <Peng_> Except for once half an hour ago, where the revisions on the client and server were incompatible, so I had to do one push over sftp.
[12:05] <lifeless> Peng_: thanks
[12:05] <lifeless> Peng_: is it faster?
[12:06] <Peng_> lifeless: Sorry, but I don't really pay attention. It takes more than 1 second, so I always do something else while it runs.
[12:06] <lifeless> Peng_: fair enough
[12:06] <Peng_> I always run it through "time", and yet I never read the results. :P
[12:06] <lifeless> LOL
[12:16] <jelmer> moin lifeless, Peng
[12:19] <ronny> lifeless: are there any plans to support custom objects in the history store at one point?
[12:19] <Peng_> jelmer: Good morning.
[12:20] <lifeless> ronny: no; as we discussed before I think we'd need to see more examples of custom things before being able to design a good reliable & sufficient interface
[12:20] <lifeless> ronny: so its not 'no, never' its 'no, no _plans_ at the moment, open to things as we see people doing it in plugins'
[12:23] <ronny> k
[12:23] <jelmer> lifeless, what happened to the annotated code stuff you were discussing with James in London during the summer?
[12:23] <jelmer> can't remember the exact name you were using
[12:28] <ronny> the main use-cases *i* see is signed tags, code-review, testresults
[12:30] <james_w> jelmer: "marks"?
[12:31] <jelmer> james_w, yeah, that's it
[12:32] <james_w> I don't know if lifeless has done any work on it, I certainly haven't
[12:32] <james_w> I would love to have it though
[12:33] <james_w> the thing that I'm not sure about is how to store the marks
[12:33] <james_w> "file-id hunk-start hunk-end sha-of-hunk" might work I guess
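james_w's proposed record could be assembled like this. The field layout is exactly his sketch ("file-id hunk-start hunk-end sha-of-hunk"); everything else (the function name, SHA-1 as the hash, bytes lines) is an assumption for illustration, since no marks format was ever implemented:

```python
import hashlib


def make_mark(file_id, start, end, lines):
    """Build a mark record in james_w's proposed layout.

    Hashing the hunk content pins the mark to the exact text, so any
    edit to the hunk invalidates the mark rather than silently
    pointing at changed code. (Hypothetical helper, not bzrlib API.)
    """
    hunk = b''.join(lines[start:end])
    hunk_sha = hashlib.sha1(hunk).hexdigest()
    return '%s %d %d %s' % (file_id, start, end, hunk_sha)
```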
[12:43] <lifeless> jelmer: views
[12:43] <lifeless> jelmer: erm
[12:43] <lifeless> views/filters/tags something like that
[12:43] <lifeless> marks maybe?
[12:43] <lifeless> james_w: yeh I started sketching that
[12:43] <ronny> hmm
[12:43] <lifeless> haven't gone far; performance is a major issue first
[12:45] <lifeless> ronny: so for all of those, I'd definitely help you get support in the core as hooks, so that plugins can do them
[12:45] <lifeless> and once we see what plugins *do* with those hooks look at the design for core inclusion of extensible things
[12:50] <ronny> lifeless: atm I'm putting together a nosetest plugin for subunit
[12:51] <james_w> lifeless: if you've committed anything I'd love to take a look
[12:51] <james_w> lifeless: I understand that it's not your first priority
[12:55] <ronny> lifeless: subunit could use a setup.py
[12:55] <lifeless> ronny: yeh, though it builds native C and shell and stuff; setup.py isn't really a good fit there
[12:55] <jelmer> scons isn't either though >-)
[12:56] <ronny> lifeless: at least for the python part
[12:56] <lifeless> jelmer: scons was an awful nightmare that I regret
[12:56] <lifeless> ronny: certainly thats doable yes
[12:56] <lifeless> I think there is a branch
[12:56] <lifeless> ronny: http://www.somethingaboutorange.com/mrl/projects/nose/doc/plugin_interface.html#prepareTestResult
[12:56] <lifeless> prepareTestResult(self, result)
[12:56] <lifeless> seems to be the bit
[12:57] <lifeless> lp:~lifeless/subunit/filter has the latest subunit branch I was tweaking
[12:58] <ronny> would you mind if I put nosetest support directly into subunit? it's basically just replacing the reporter
[13:15] <lifeless> ronny: I'm not sure what you're asking
[13:15] <lifeless> ronny: but note bug 332770
[13:16] <ronny> lifeless: thats what i do atm
[13:16] <lifeless> which is to say, I'd like it if installing subunit added an option to nose [if nose is installed]
[13:16] <lifeless> I don't want subunit to depend on nose though
[13:16] <lifeless> I consider it a lower level helper
[13:16] <ronny> ok, so separate repo it is
[13:17] <lifeless> ronny: hm? Not sure I follow, however I'm tired :)
[13:17] <lifeless> so - chat with you in ~ 7-8 hours
[13:17] <ronny> k
[13:17] <ronny> cu later
[16:10] <jam> morning vila, I hope you had a good weekend
[16:11] <vila> jam: hehe, hi ! Yes, pretty good, didn't touch a mouse nor a keyboard (didn't happen for a while :)
[16:11] <vila> jam: how was yours ?
[16:11] <jam> jelmer: I'm getting timeouts while trying to access a signature file on your dulwich branch
[16:11] <jam> vila: pretty good
[16:12] <jam> sat was a kids birthday party
[16:12] <jam> so it was fun to see Kareem playing with the neighbor kids
[16:12] <jam> I didn't get back to a comp until late Sunday night as well
[16:12] <jam> jelmer: lp also seems to be having problems with that branch
[16:12] <jelmer> jam: us2 seems to be under heavy load at the moment, one of the admins is looking into it
[16:13] <jam> jelmer: k. It seems funny since everything works until the .six
[16:16] <jam> vila: still looking into the mysql stuff?
[16:16] <vila> jam: on my right screen, yes
[16:17] <vila> jam: trying to find a better ftp test server on the left one
[16:17] <vila> jam: I was so close :-/ And had problems *again* with FtpStatResult and the test suite being too demanding for ftp :-/
[16:17] <Peng_> Hmm, it took 438.396 seconds to pull one revision from subvertpy. bzr-svn is still going. Did my connection hiccup?
[16:18] <vila> jam: the idea was to spend only a couple of hours on it and then address the fact that we can't test under 2.6 anymore until we stop using medusa there, which requires finding another server
[16:19] <vila> jam: I hate ftp
[16:19] <vila> that was rude, ftp doesn't like me that much
[16:21] <jelmer> Peng_, server is having issues
[16:21] <jam> vila: Would it be easier to just get medusa working with 2.6 ?
[16:21] <Peng_> Oh. Oh! I did read backlog, just not enough to understand it. Alright. :)
[16:23] <vila> jam: no, far harder as the bug is deep down inside asyncore/asynchat or something (I suspect the problem is roughly that somewhere in an iterator str() objects are special-cased and unicode isn't, ugly business), so the idea was to just stop using medusa for >= 2.6 and find another ftp test server
[16:24] <vila> jam: I found one that doesn't require running as root, only to find that it returns an empty list when asked to list a non-existing directory and has no problem giving the *size* of a directory
[16:24] <jam> vila: why would we be using Unicode anywhere...
[16:24] <jam> at the FTP level, everything should be 8-bit strings, (I think)
[16:25] <vila> utf-8 (sorry)
[16:25] <jam> vila: we wouldn't be special casing a utf-8 string
[16:25] <jam> as it is just another 8-bit string
[16:25] <vila> but at some point giving utf-8 means handling unicode (from memory)
[16:25] <vila> jam: I don't remember exactly the problem, certainly it was on server side, and ... well there is little we can do there :(
[16:26] <jam> vila: I don't know the failures, but everything *under* FTPTransport should be 8-bit strings. I suppose if we have a unicode file on disk there may be other issues
[16:26] <jam> vila: does everything fail?
[16:26] <jam> Or is it just a couple tests that we could say "py2.6 + unicode files + medusa is KnownFailure"
[16:26] <jam> vila: I certainly would rather you spending your time on other things :)
[16:27] <vila> under 2.6 with medusa, failures are about utf-8 encoded file names IIRC
[16:27] <jam> vila: exactly, so wrap tests that do exactly that with KnownFailure and get on with your life :)
[16:27] <vila> jam: I know :-/ I hoped to be done with it far quicker and then... well each bug looks like the solution was close :)
[16:28] <eferraiuolo> Can I expect the Mac OS X 10.5 Installer for bzr 1.12 soon?
[16:28] <vila> hmm, not sure we can peek under the covers to realize we are under 2.6 + medusa + whatever, the idea was to shortcut at get_transport_permutations
[16:29] <vila> get_test_permutations I meant
[16:29] <Peng_> eferraiuolo: "tonight (European time)" according to a post on the mailing list :)
[16:30] <eferraiuolo> Peng_: thanks for the info :-)
[16:30] <vila> jam: to be precise, I have a test server that passes the test suite, with a constraint-relaxing patch against bzr.dev, so that's not *that* bad :-)
[16:31] <jam> jelmer: I have a trivial patch to 'dulwich' which should help the performance of 'apply_delta'. Care to test it out?
[16:31] <jam> http://paste.ubuntu.com/121930/
[16:32] <vila> jam: on the bbc front all the remaining failures will require a bit of work and I wanted to start with a bzr.dev merge which raised too many conflicts to be solved quickly, so *that* was pushed out of the parallel activities, which are now back to the two mentioned above :)
[16:32] <jam> vila: sure. I'm also submitting the "trailing whitespace" fix
[16:32] <jelmer> jam, Is that against the current trunk? James also submitted a performance improvement yesterday
[16:32] <jam> which is going to be bad for merging into bbc
[16:32] <jam> jelmer: versus 154
[16:32] <jam> it just changes to use a list and append
[16:32] <jam> and then ''.join() at the end
[16:33] <jelmer> jam, ah, cool
[16:33] <jelmer> jam, can you "bzr send" or is it easier if I just pick up from pastebin?
[16:33] <jam> that should avoid O(N^2) performance
[16:33] <jam> I haven't committed, but I can and I'll send it
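The change jam describes is the classic append-then-join pattern: repeated `out += chunk` may copy the whole accumulated string on every iteration (quadratic overall), while appending to a list and joining once at the end is linear. A toy version of the idea, not dulwich's actual apply_delta:

```python
def join_chunks(chunks):
    """Assemble output fragments the way jam's patch does.

    Toy illustration of the list-append + ''.join() technique; the
    real patch applies it inside dulwich's apply_delta.
    """
    out = []
    for chunk in chunks:
        out.append(chunk)  # O(1) amortized; nothing already stored is copied
    return b''.join(out)   # one linear pass over the total output size
```

CPython does special-case `+=` on a string with a single reference (the optimization Peng_ asks about later in the log), but that is an implementation detail that cannot be relied on everywhere, so the explicit list remains the portable choice.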
[16:33] <vila> jam: by the way, if *you* are able to merge bzr.dev cleanly in bbc, just tell me :-)
[16:33] <james_w> nice
[16:34] <james_w> there are probably a couple more places in that file where that approach could be used
[16:34] <jelmer> jam: Please do :-)
[16:34] <jelmer> jam, vila: Also, if you can have a look at my InterBranch patch that would be really helpful for bzr-git..
[16:35]  * jelmer is still wondering what the magic trick is to get a patch reviewed
[16:35] <jam> jelmer: sent
[16:35] <jelmer> jam: Thanks :-)
[16:35] <jam> vila: I'm sure I'm not
[16:35] <jam> I changed some code in fetch
[16:36] <jam> which Robert changed with the network streaming stuff
[16:36] <arshad> hi channel
[16:36] <jam> I added a progress bar
[16:36] <arshad> whats bazaar all about  ??
[16:36] <vila> jam: yeah, so at least you know *one* side of the conflict as opposed to None :-)
[16:37] <jam> jelmer: I think it is beer. And submitting patches that directly affect other people. The problem with InterBranch is that I haven't worked out whether the added complexity is worth the benefit for non-core code. I'll try to give it a look
[16:37] <jam> arshad: freedom :)
[16:37] <arshad> ok ...... but freedom for what??
[16:38] <james_w> jelmer: I assume that is the cause of "AttributeError: 'module' object has no attribute 'InterBranch'" with a newer bzr-git?
[16:38] <jam> arshad: That was a trivial response. To give you a better one, how much do you *know* so I don't repeat myself
[16:38] <jam> bzr is a version control system
[16:39] <jam> do you need to know the diff from others
[16:39] <jam> what a vcs *is*?
[16:39] <jam> etc
[16:40] <arshad> hope u wont fire me for that
[16:41] <jam> arshad: no firing, just trying to understand what info you really want
[16:42] <jam> vila: yeah, robert refactored a lot of stuff that bbc had hacked to get working :(
[16:42] <jam> So I need to actually understand the new stream code
[16:42] <jam> before we can really adapt it
[16:42] <jam> vila: like understanding how to get chk records as part of the new stream
[16:42] <vila> jam: The good thing is that that should address at least *one* failure :-)
[16:43] <vila> jam: ouch, not an easy bird then :-/
[16:43]  * vila throws some stones at jam to help him :)
[16:43] <arshad> can VCS b made as a career
[16:43] <arshad> ??
[16:43] <jelmer> james_w: yep
[16:43] <jelmer> .
[16:43] <Peng_> Doesn't CPython optimize += on strings with only one reference, making it faster than ''.join()ing a list?
[16:44] <james_w> jelmer: am I better merging that, or going back in time a little bit on bzr-git?
[16:44] <vila> arshad: vcs == version control system, try reading http://bazaar-vcs.org a bit
[16:44] <vila> arshad: you should get many answers there
[16:44] <arshad> where??
[16:45] <jam> vila: so one problem I have with your "knownFailure" change to the "autopack_rpc_is_used" test is that you don't test anything, in case we accidentally *fix* that.
[16:45] <jelmer> jam: It's the only way to make "bzr pull" work on remote git repositories since we can't walk the graph of remote git repositories to determine the revno
[16:45] <jelmer> james_w, Better off merging InterBranch, without it you can't pull from remote git branches
[16:46] <vila> jam: that was about the only possible route at the time, as we were testing that the hpss autopack call was in hpss_calls
[16:47] <vila> jam: and that required some InterRepo accepting chks which wasn't the right way
[16:47] <jam> vila: not really
[16:47] <jam> you could have put the knownFailure later
[16:47] <jam> around the actual "x in y" test
[16:47] <jam> anyway, it is fine.
[16:47] <jam> For right now, I'm pulling out your change, because it conflicts with the new streaming RPCs
[16:47] <jam> and I don't know how BBC will interact with that yet
[16:48] <james_w> jelmer: will InterBranch allow things like "missing" against the remote branch? I guess they may have to become a fetch to local then missing operation?
[16:49] <vila> jam: pull out ! pull out ! sounds perfectly right now that the test has changed
[17:00] <jam> vila: dammit.... we have a serious problem
[17:00] <jam> the generic fetching code doesn't really seem to support converting on-the fly
[17:00] <jam> at least not easily
[17:01] <jam> since it expects to be getting a stream
[17:01] <jam> that it can just insert on the other side
[17:01] <jam> and with chk
[17:01] <jam> it has to convert the repos
[17:01] <jam> well, convert the inventories.
[17:01] <jelmer> james_w: It mainly allows for custom implementations of branch-to-branch operations, such as pull
[17:01] <jelmer> james_w: right now bzr-git only overrides update_revisions(), which is the main worker used by pull
[17:02] <jelmer> james_w, but in the end, we would indeed end up with a search_missing_revisions() and that would most likely indeed do a fetch :-/
[17:03] <james_w> yeah, that's a pain
[17:03] <vila> jam: ouch :-/
[17:03] <james_w> jelmer: would it be at all possible to extend git's server to help with this?
[17:04] <james_w> not in terms of would they do it, but in terms of whether git could provide the sort of information that is needed?
[17:04] <jelmer> james_w, it would, though I'm not sure how likely such changes would be accepted upstream
[17:04] <james_w> I guess having it speak the bzr remote protocol would work :-)
[17:04] <jelmer> james_w: :-)
[17:05] <jelmer> james_w, Actually, that has crossed my mind.. the git server John has been working on can provide a "bzr" capability, just like "bzr svn-serve" does
[17:05] <james_w> heh
[17:05] <james_w> jelmer: also, would "bzr format-git-patch" make sense?
[17:06] <james_w> and something to then pull then back in? maybe "bzr patch" is enough
[17:06] <jelmer> james_w, you mean "bzr send --format=git" ? >-) Yes, I think so
[17:06] <james_w> yeah, that too
[17:06] <james_w> :-)
[17:07] <Jc2k> jelmer: i thought about custom extensions to git-serve too :)
[17:07] <james_w> making "format-patch" an alias of "send" might be a good idea
[17:07] <jam> vila: so StreamSink *is* meant to handle some of that, but it only really is defined as handling the case where 1 fulltext == 1 inventory
[17:07] <jam> which is no longer true for bbc
[17:07] <jam> so I have to figure out how to work around it
[17:08] <jam> the stream code seems to think that just "get_bytes_as('fulltext')" would be enough
[17:08] <jam> and, of course, lifeless and spiv are both sleeping right now
[17:08] <jam> I may have to punt... :(
[17:09] <jam> until I can talk to them and see what they were thinking of as a solution for this
[17:10] <Ng> is bzr 1.12's "st" supposed to explode on me?
[17:10] <jelmer> Ng, you may want to update your copy of bzr-loom
[17:10] <vila> Ng: let me guess: it doesn't like 'verbose' and you have installed bzr-loom ? Grr, jelmer is too fast :)
[17:11] <Ng> correct :)
[17:14] <vila> jam: punting sounds the most reasonable thing to do, is StreamSink._extract_and_insert_inventories where the problematic assumption is being made ?
[17:15] <jam> vila: that an inventory is contained within a string
[17:15] <jelmer> james_w, yeah, I agree
[17:15] <jam> the idea is that you can do "read_inventory_from_string()" using the source serializer
[17:15] <jam> the problem is that for chk repos
[17:15] <jam> you can't just have the little bits you need
[17:15] <jam> you have to have all the chk pages
[17:16] <jam> for that inventory
[17:16] <jam> in order to cast up to an Inventory object
[17:16] <jam> that can then be serialized back down into whatever format the target requires
[17:16] <jam> vila: does that make sense?
[17:16] <vila> jam: can't we just transmit the delta and build from that ? Argh, yes, not all formats can build from a delta :-/
[17:17] <jam> vila: so the issue is that "CHKRepo.get_stream()" would want to at best just send the individual pages that changed
[17:17] <jam> but the target repo doesn't have *any* pages
[17:17] <jam> so it wouldn't know how to interpret them
[17:17] <jam> so CHKRepo.get_stream() sort of needs to do the conversion on *its* end
[17:18] <vila> jam: so either the target is chk and we send changed pages (>-/) or better the delta or the target is not and... pfff
[17:18] <jam> The BBC logic just did "for inv in self.source_repo.iter_inventories(revs)"
[17:19] <jam> vila: right, so I can easily figure out what to do when source_format == target_format
[17:19] <jam> However, when source_supports_chks != target.supports_chks
[17:19] <jam> I'm a bit stymied
[17:19] <vila> I can understand that :-/
[17:19] <jam> The code is written such that the *target* does the conversions
[17:19] <jam> But a Pack-0.92 repository isn't going to know what to do with a chk page
[17:20] <jam> So I think we need a get_stream() that can tell the target what format it needs
[17:20] <jam> sorry, tell the *source*
[17:20] <jam> hmm.. maybe I can cheat and try to do that
[17:20] <jam> I just worry that I'm breaking the *spirit* of lifeless & spiv's work
[17:21] <vila> jam: better talk with them then
[17:21] <jam> though I know they wanted to make sure that the streaming code worked with bbc
[17:21] <jam> as part of the "if it doesn't, then we haven't done the right thing"
[17:21] <vila> there is already some XXX in remote.py
[17:22] <vila> so maybe we should just not try to merge yet (and regret it later :-)
[17:22] <jam> vila: well, the other 3 conflicts were easy to resolve... :)
[17:23] <jam> and if we don't merge revno 4032, when we merge 4033 we're going to have a real pain (it is the whitespace cleanup patch)
[17:23] <vila> jam: hmmm, can you resolve the last one with a hammer maybe ?
[17:23] <jam> it is easy enough to do if you merge just before
[17:23] <jam> because then you know you can revert everything
[17:23] <jam> and do your own whitespace cleanup
[17:24] <jam> vila: yeah, I can always put big XXX and say that robert and andrew need to fix this
[17:24] <jam> :)
[17:24] <vila> jam: if we end up with test_autopack_or_streaming_rpc_is_used_when_using_hpss failing, well that will make 5 failures instead of 4, not a big deal
[17:24] <jam> vila :)
[17:24] <vila> jam: sounds like a good idea, at least they get an almost working bbc
[17:25] <vila> jam: and they may already know that bbc can't work right now
[17:26] <jam> vila: I'll also note that "supports_chk" isn't a sufficient test, since you can have different CHK repos with a different paging structure (different hash, etc.)
[17:26] <jam> then the target repo still doesn't understand the source repository
[17:26] <jam> pages
[17:26] <jam> (layered on *top* of the pages is a possible delta format, which could differ but still be understood by both formats)
[17:27] <jam> like gc+chk255 should be able to understand the wire-stream of a chk255 repo
[17:27] <jam> but chk16 != chk255
[17:27] <jam> me == :(
[17:27] <vila> jam: yup, and a delta format may be easier to serialize/deserialize for *future* formats too
[17:28] <jam> vila: I think we are talking different things
[17:28] <jam> logical inventory diff
[17:28] <jam> versus byte-wise diff of chk page
[17:29] <jam> I think defining a "get_stream()" that can return serialized bytes of a logical inventory diff may be the way to go
[17:29] <jam> though possibly slower than a pure chk => chk could be
[17:29] <vila> jam: sorry, I went too fast, but you were already saying the byte-wise diff of chk pages was having problems, so going with a logical inv dif... yes, what you said :)
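[Editorial note: a hypothetical sketch of the "logical inventory diff" jam is proposing here, i.e. a format-neutral serialization of an inventory delta that either repository format could apply. The line format, field names, and `serialize_delta` helper are all invented for illustration and are not the actual bzrlib wire format.]

```python
# Hypothetical sketch: serialize an inventory delta as format-neutral
# lines, so a chk source and a non-chk target can both interpret it.
# Each entry is (old_path, new_path, file_id); None means added/removed.
def serialize_delta(delta):
    lines = ['inventory-delta: 1\n']
    for old_path, new_path, file_id in delta:
        lines.append('%s\x00%s\x00%s\n' % (old_path or '/dev/null',
                                           new_path or '/dev/null',
                                           file_id))
    return ''.join(lines).encode('utf-8')

# Example delta: one file added, one file removed.
delta = [(None, 'doc/intro.txt', 'intro-id'),
         ('old.txt', None, 'old-id')]
```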
[17:30] <vila> jam: I'm not sure you can assume that two repos have the needed pages without checking first (and *if* it's true today, I'll be more comfortable if that wasn't required)
[17:31] <jam> of course, I don't want to have to write a logical diff format + serialized form just to get bbc working
[17:31] <jam> vila: chk pages are the same problem we have today, so I'm not specifically worried
[17:31] <jam> we assume that if you have the inventory at revision X, then you don't need to transmit it
[17:31] <jam> in the same way, you don't need to transmit the chk pages for X
[17:32] <jam> the issue is that if both sides have different chk serializers
[17:32] <jam> the pages for Y don't mean anything to the other side
[17:32] <vila> I think that defining new serialized formats shouldn't be harder than defining a % format
[17:32] <vila> It *is* today but it shouldn't
[17:33] <jam> vila: anytime you work on bytes-on-the-wire or bytes-on-disk it is hard
[17:33] <jam> because you need a whole lot of direct tests
[17:33] <vila> jam: exactly
[17:33] <jam> to make sure that bzr-X.Y can talk to bzr-X.Z
[17:33] <jam> I don't see that going away
[17:34] <vila> I worked in that direction in the transportstats plugin (not the interoperability part, but the easy defined formats)
[17:37] <jam> argh....
[17:37] <vila> jam: but the gap to go there is still rather large :-/
[17:37] <jam> I thought I had a decent workaround
[17:37] <jam> by changing the source to do the reserialization and hand it back
[17:38] <jam> but it seems the Sink always uses the source-serializer to decode the fulltext bytes
[17:38] <jam> (which sort of makes sense, but means I don't have a way to do this yet)
[17:40]  * vila dinner
[17:41] <jam> hmm... it seems that chk_serializers inherit from the XML serializers (perhaps wrongly, but they do)
[17:41] <jam> which means an ugly-ugly hack is to actually write out a xml5 format inventory to the record stream
[17:45] <vila> jam: will that make it less ugly to use composition instead of inheritance (ignoring it doing that) so that you can provide the right xml serializer ?
[17:46] <jam> vila: so the ugly hack is that we will read in the CHK inventory, bring it up all the way to a full Inventory object in memory
[17:46] <jam> and then convert it to some-semi-random not really used XML representation
[17:46] <jam> and write those bytes out on the wire
[17:47] <jam> I think I can make it work
[17:47] <jam> but it certainly isn't what I would consider the "correct" method
[17:47] <jam> One other possibility, is that get_stream() from a CHK repo
[17:47] <vila> jam: At least you'll have some meat to talk with lifeless and spiv :)
[17:47] <jam> could return *all* the pages
[17:48] <jam> which would mean the target would have enough to work with to do it on its end
[17:48] <jam> but it would have to transmit and buffer all of those pages somewhere
[17:48] <jam> until it got to the inv records
[17:48] <jam> and knew what to do with them.
[17:48] <vila> jam: returning all the pages sounds wrong especially considering that you'll certainly end up transmitting some of them multiple times
[17:48] <jam> vila: you can just transmit the set()
[17:49] <jam> for revisions 10..30 send all referenced chk pages
[17:49] <jelmer> hmm
[17:49] <vila> jam: sounds better
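[Editorial note: a sketch of the deduplication jam describes — for a range of revisions, transmit the *union* of referenced chk page keys so shared pages go over the wire once. `referenced_pages_by_rev` and `pages_to_send` are illustrative names, not bzrlib API.]

```python
# Hypothetical sketch: for revisions in a range, send each referenced
# chk page exactly once by taking the set union across revisions.
def pages_to_send(referenced_pages_by_rev, revision_ids):
    needed = set()
    for rev_id in revision_ids:
        needed.update(referenced_pages_by_rev[rev_id])
    return needed

# Example: rev-2 shares most of its pages with rev-1, so the union is
# only four pages rather than six.
refs = {
    'rev-1': {'sha1:aaa', 'sha1:bbb', 'sha1:ccc'},
    'rev-2': {'sha1:aaa', 'sha1:bbb', 'sha1:ddd'},
}
```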
[17:49] <jam> jelmer: ?
[17:49] <jelmer> jam, Trying to figure out what I'm doing wrong benchmarking OOo with bbc and bzr.dev
[17:49] <jam> vila: yeah, I'm still going to let robert and andrew worry about how to semi-optimize transferring between formats
[17:50] <jelmer> I'm seeing a 28x performance gain with bbc
[17:50] <jelmer> which seems a little excessive ;-)
[17:51] <jam> jelmer: not really
[17:51] <jam> it is an O(N^2) versus O(N) sort of change
[17:51] <jam> I don't know *what* you are benchmarking
[17:51] <jelmer> jam, import from OOo-svn into bzr
[17:51] <jam> jelmer: oh sorry, it is a log(N) versus N change
[17:51] <jam> jelmer: so still possible
[17:52] <jam> with bzr.dev
[17:52] <jelmer> bzr.dev gives one revision every 6 or 7 seconds, bbc close to 4 per second
[17:52] <jam> we have to generate an inventory which contains a line for *every* file
[17:52] <jam> with bbc
[17:52] <jam> we only have to update the few pages based on the actual change
[17:52] <jam> O(tree) operation versus O(changes)
[17:53] <jam> jelmer: also the bbc pages aren't delta compressed (yet), which means they are also faster to read, etc.
[17:53] <jam> so *faster* but not *smaller* to store
[17:53] <jam> jelmer: so my initial feeling is that you aren't doing anything wrong
[17:53] <jam> as that is really what bbc is *about*
[17:54] <jam> I would actually expect the difference to get *more* pronounced as time goes on
[17:54] <jam> as the larger OOo gets, the size(changes) stays relatively constant for most projects
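[Editorial note: a rough cost model behind the O(tree) vs O(changes) point above. The tree size, commit size, and fanout are illustrative assumptions (OOo-scale numbers, chk255-style fanout), not measurements.]

```python
import math

# Illustrative model: a full XML inventory writes one line per file
# (O(tree)), while a chk inventory rewrites only the pages on the path
# to each changed entry (roughly O(changes) * page depth).
tree_size = 70_000       # files in the tree (assumed, OOo-scale)
changes = 10             # files touched by a typical commit (assumed)
fanout = 255             # chk page fanout (chk255-style)

full_inventory_lines = tree_size                      # O(tree) work per commit
page_depth = math.ceil(math.log(tree_size, fanout))   # levels of pages
pages_rewritten = changes * page_depth                # O(changes) work, roughly
```

With these numbers a commit touches ~30 pages instead of rewriting 70,000 inventory lines, which is consistent with the large speedup jelmer reports.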
[17:55] <jelmer> right
[17:55] <jam> jelmer: well, at least if you have an optimized Inventory._make_delta() for bzr-svn
[17:55] <jelmer> jam, I do
[17:55] <jelmer> jam: Still surprised it matters this much though
[17:55] <jelmer> jam, Anyhow, I'm not complaining ;-)
[17:56] <jelmer> jam, bbc rocks \o/
[17:57] <jam> jelmer: if you see igc around, I'm sure he'd be interested to hear it
[17:57] <jam> It seems that OOo is looking again at possibly switching to a DVCS
[17:57] <jelmer> jam, This was actually why I was looking into it
[17:57] <jelmer> jam: They were seeing slow imports with bzr.dev
[17:58] <jam> jelmer: yeah, when we imported in the past, it was... 2 weeks to a month? something like that
[17:58] <jam> Also, I wouldn't be surprised if the generic converter is actually *slower* in bzr.dev than it used to be
[17:58] <jam> as it was updated in preparation for bbc
[17:59] <jam> And I believe Ian was looking at fast-import code in the last couple of days
[17:59] <jam> as it had also done some internal hackery for performance that may not be as relevant now
[18:00] <jelmer> based on what I'm seeing now with bzr-svn (4 revs / second) I should be able to import OOo in 20 hours
[18:01] <jam> jelmer: that's a good number
[18:01] <jam> jelmer: is that streaming from their repository? Or do you have an svn dump locally?
[18:01] <jelmer> I have a local dump
[18:02] <jelmer> That's mainly to avoid hammering their server too much though
[18:02] <jelmer> as the process is mainly CPU-bound
[18:06] <jelmer> jam: bzr-svn basically does one "svn log -v" followed by "svn up -rX-1:X" for each revision
[18:07] <jelmer> so while the extra roundtrip-time should have some impact, it shouldn't be significant
[18:07] <jam> jelmer: well, bandwidth and latency
[18:07] <jam> and as you say
[18:07] <jam> if you try multiple times
[18:08] <jam> you only have to have dumped once :)
[18:08] <jam> at 4 revs/sec
[18:08] <jam> latency becomes an issue
[18:08] <jam> 250ms
[18:08] <jam> a latency of 100ms and 2 round trips would cut your conversion speed in half
[18:08] <jam> certainly didn't matter for bzr.dev at 7s/rev
[18:08] <jelmer> jam: Latency to launchpad is 10ms here, not sure how much it is to the ooo svn server
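[Editorial note: the arithmetic behind jam's latency point. At 4 revs/sec each revision costs 250ms of local work; the 100ms latency and 2 round trips are the figures from the conversation above.]

```python
# At 4 revs/sec, each revision takes 250ms of CPU time. Adding two
# network round trips at 100ms latency each stretches that to 450ms,
# cutting throughput from 4 revs/sec to roughly 2.2 -- about half.
cpu_ms_per_rev = 1000 / 4                              # 250ms per revision
latency_ms = 100
round_trips = 2
total_ms = cpu_ms_per_rev + latency_ms * round_trips   # 450ms per revision
revs_per_sec = 1000 / total_ms                         # ~2.2 revs/sec
```

At bzr.dev's 7s/rev, the same 200ms of latency is under 3% overhead, which is why it "certainly didn't matter" before.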
[18:37] <jam> vila: ok, I've hacked it enough that it seems to be working
[18:38] <jam> the RPC tests still fail
[18:38] <jam> though I think we could fix that by allowing chks to be included in the InterPackToRemotePack tests
[18:47]  * ToyKeeper finds it a little odd that two opposite-sounding options are required to produce the desired output...  bzr log --verbose --short
[19:08] <ronny> anyone knows when lifeless will appear?
[19:12] <mthaddon> anyone know what's up with http://benchmark.bazaar-vcs.org/usertest/usertest.log ?
[19:12] <jam> ronny: he usually shows up in about 2 hours
[19:12] <jam> though he almost never actually completely logs off irc
[19:12] <jam> so I'm not sure if something changed
[19:13] <jam> mthaddon: what is the problem?
[19:13] <mthaddon> jam: it seems to be debug output rather than the expected output
[19:13] <ronny> i hacked up an initial nosetest plugin for subunit
[19:14] <mthaddon> jam: either that or the format has changed radically (it's used to generating performance graphs, so I'd need to alter my parser if the format's changed)
[19:18] <jam> mthaddon: poolie or igc would be the ones to ask
[19:18] <jam> If you want, I'll try to mention it when I talk to them later
[19:18] <jam> mthaddon: best guess is that someone forgot a flag when they updated the benchmarks being run
[19:18] <mthaddon> cool, thx - I'll send them an email as well
[19:45]  * jelmer makes a note to buy John beer at the next bzr sprint
[19:46] <jelmer> jam: Thanks for the reviews, it's much appreciated!
[20:37] <glade88> hello. is there a way to make bzr use a network proxy setting?
[20:57] <mwhudson> does nicolas allen irc?
[20:57] <mwhudson> nicholas, sorry
[21:08] <lifeless_> mwhudson: I think so
[21:22] <poolie> jam, hi
[21:22] <Peng_> Do http connections time out...ever?
[21:22] <jam> morning poolie
[21:23] <poolie> Peng_: probably, but i would say not as fast as they should
[21:23] <poolie> they should timeout and retry
[21:23] <poolie> since they're all idempotent
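[Editorial note: a minimal sketch of the "timeout and retry" behaviour poolie describes — safe precisely because the HTTP requests involved are idempotent GETs that can be re-issued. `fetch` is a stand-in for the real transport call; the retry policy shown is invented for illustration.]

```python
import time

# Sketch: retry an idempotent fetch after a timeout instead of hanging.
# A real client would arm a socket timeout so a hung connection raises.
def fetch_with_retry(fetch, url, retries=3, delay=0.0):
    last_error = None
    for attempt in range(retries):
        try:
            return fetch(url)
        except TimeoutError as e:
            last_error = e
            time.sleep(delay)
    raise last_error

# Fake transport that times out twice, then succeeds.
calls = []
def flaky_fetch(url):
    calls.append(url)
    if len(calls) < 3:
        raise TimeoutError('hung connection')
    return b'revision data'
```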
[21:24] <poolie> jam, want to talk?
[21:25] <jam> hey poolie, chatting with robert right now
[21:25] <poolie> ok
[21:26] <poolie> i'll be here all day, ping me if you want a 1:1
[21:27] <james_w> hi poolie
[21:27] <poolie> hello
[21:28] <mwhudson> Peng_: we occasionally had http connections hang for ridiculous amounts of time in the branch puller
[21:29] <poolie> jam, btw not sure if you saw but it looks like rockstar will go to the mysql conf
[21:30] <Peng_> mwhudson: How long?
[21:30] <mwhudson> Peng_: days, iirc
[21:30] <Peng_> Eh. I'm at 12.5 hours so far. That sounds bad.
[21:31] <Peng_> Oh well, I have nothing better to do. :)
[21:32] <mwhudson> well
[21:33] <mwhudson> killing and starting again will probably work...
[21:34] <Peng_> Sure, but how would that be fun?
[21:35] <mwhudson> i'm not going to try to guess what you find fun :)
[21:38]  * mwhudson watches pydoctor trunk process bzrlib
[21:39] <mwhudson> gawd docutils is SO SLOW
[22:23] <lifeless> jam: let me know if you want to continue the compression chat [just skype me]
[22:24] <ronny> lifeless: got an initial nose plugin for subunit wired up, currently it imports the reporter
[22:25] <ronny> lifeless: how about a convention for giving stream output/log lines after the trace?
[22:26] <mwhudson> beuno: hi
[22:26] <lifeless> ronny: How would that map into pyunit [let alone junit etc]
[22:26] <beuno> mwhudson, hiya
[22:27] <mwhudson> beuno: so the masses are complaining about https://bugs.edge.launchpad.net/loggerhead/+bug/253950
[22:27] <ronny> lifeless: ignore unless known otherwhise
[22:27] <lifeless> ronny: no, I mean *where* would that be exposed
[22:28] <ronny> nosetest tracks things like stdin/stderr (and its nice for additional output)
[22:28] <lifeless> ronny: you have a RemoteTestCase which calls result.addFailure(self, RemoteError(self.blah))
[22:28] <mwhudson> beuno: a simple fix is to truncate the list of added/modified/... files when the list gets more than a certain length
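[Editorial note: a sketch of the simple fix mwhudson suggests — cap each added/modified/removed file list at a fixed length and report how many entries were hidden. The helper name and return shape are illustrative, not actual loggerhead code.]

```python
# Sketch: truncate a changed-file list past a limit, returning the
# visible slice plus a count of hidden entries for a "...and N more" UI.
def truncate_file_list(paths, limit=100):
    if len(paths) <= limit:
        return paths, 0
    return paths[:limit], len(paths) - limit

# A 250-file change shows 100 entries and hides 150.
shown, hidden = truncate_file_list(['f%d' % i for i in range(250)], limit=100)
```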
[22:28] <lifeless> ronny: ah
[22:28] <lifeless> so; divide and conquer
[22:28] <beuno> mwhudson, sure. Although, to be fair, doing it with ajax should be pretty quick as well.
[22:28] <mwhudson> beuno: yeah, i guess
[22:28] <ronny> currently my initial wire-up doesn't handle that tho
[22:29] <mwhudson> maybe i should just try that quickly
[22:29] <lifeless> there are two use cases here - there is 'get nose discovered tests run with output to subunit', and there is 'accept a subunit stream for display with the nose infrastructure'
[22:29] <beuno> mwhudson, and we could potentially avoid getting that information at all unless needed, gaining some performance?
[22:30] <mwhudson> doing it nicely will require some infrastructure hacking i guess
[22:30] <ronny> lifeless: i see 2 possible solutions - mangle it into the trace output, or have a new event
[22:30] <lifeless> In terms of the former, just putting it in the comment area in the result is fine [though if people should be seeing it real time (e.g. a debugger prompt), put it in the stream in realtime].
[22:30] <mwhudson> and some nice gifs
[22:30] <beuno> mwhudson, we have most of that in lazr-js  ;)
[22:30] <mwhudson> i should look at lazr-js i guess!
[22:31] <lifeless> for the latter, I don't think you need to accept any random stream into nose so much as nose generated ones, so as long as you use the same approach for in as for out it should work nicely
[22:31] <lifeless> ronny: yes, I see the same two things; I'm cautious about new events though
[22:31] <mwhudson> argh
[22:31] <mwhudson> when will bzr info lp:lazr-js not say "format: unnamed" ?
[22:32] <lifeless> ronny: particularly things that are tricky to map into python space - like jml's testtools, I want to see what multiple projects are doing before committing to a design
[22:33] <lifeless> mwhudson:
[22:33] <lifeless> Repository branch (format: pack-0.92)
[22:33] <lifeless> I bet its info not doing the right thing with hpss branches; perhaps filing a bug would help.
[22:34] <ronny> lifeless: got a link to jml's testtools?
[22:35] <lifeless> https://edge.launchpad.net/testtools
[22:36] <mwhudson> lifeless: would seem to be https://bugs.edge.launchpad.net/bzr/+bug/196080
[22:37] <lifeless> mwhudson: k
[22:37] <lifeless> thanks
[22:37]  * beuno -> home
[22:37] <beuno> bbiab
[22:37] <mwhudson> beuno: so how do i actually use lazr-js, is there documentation or a quickstart somewhere?
[22:45] <phinze> is there any way to get bzr log to show me only the commits made on this task branch?
[22:47] <ronny> afair bzr log -rsubmit:..
[22:47] <ronny> see bzr help revisionspec
[22:55] <igc> jam: I've been struggling to get a good conversion of mysql and emacs to --development5
[22:55] <igc> emacs fell over with a serializer bug after 10 hours
[22:56] <igc> mysql completed but then didn't work ...
[22:56] <igc> and I'm yet to track down why cos ...
[22:56] <igc> bzr check isn't fast
[22:57] <igc> jam: I'm wondering if you've had more luck converting these
[22:57] <jam> igc: I have, but I haven't done a full conversion in a while
[22:58] <igc> if so, could I ask for you to tar.gz the converted branches and put them on orcadas?
[22:58] <igc> that's the missing piece for getting meaningful benchmark results
[23:00] <jml> abentley: thanks for the review
[23:00] <abentley> jml: np
[23:09] <thrope> hi - numpy/scipy developers are discussing dvcs ... I pointed out bzr-svn (they were talking about the shortcomings of git-svn)
[23:09] <thrope> http://projects.scipy.org/pipermail/scipy-dev/2009-February/011205.html
[23:10] <thrope> someone asked "can you point us to a public svn repo where non-trivial branching is happening with bzr?" was wondering if anyone (perhaps jelmer ) could point me to this
[23:11] <jelmer> thrope, hi
[23:11] <jelmer> thrope, non-trivial branching?
[23:11] <fullermd> I thought the point of a DVCS was that ALL branching is trivial   ;>
[23:12] <thrope> not sure - thats what the person asked...
[23:12] <thrope> I dont have a lot of experience - but bzrsvn worked well for me and they were complaining about git-svn
[23:13] <thrope> I guess can anyone point to an open source project using bzr-svn 'officially' with a couple of different bzr branches in svn
[23:14] <jelmer> thrope, well, there are some projects that contain roundtripped revisions using bzr-svn
[23:15] <jelmer> thrope: it seems kind of pointless to have official bzr-svn branches though, since the official repo would always be svn, even if some developers are accessing svn using bzr
[23:16] <thrope> right
[23:16] <thrope> just not sure how to respond to the guys question then
[23:17] <jelmer> thrope, if you're looking for example branches in bzr created from svn branches using bzr-svn, see http://bzr-mirror.gnome.org/
[23:19] <thrope> ok thanks
[23:19] <jelmer> thrope: alternatively, I'm interested to hear about svn repositories that don't work in bzr-svn
[23:25]  * igc out for several hours
[23:25] <igc> back later hopefully
[23:30] <lifeless> spiv: so, what shall we do today?
[23:31] <fullermd> The same thing we do every night; try and take over the world.
[23:31] <spiv> lifeless: so I'm currently wikifying my measurements from yesterday
[23:31] <spiv> lifeless: after that I'm happy to head over to Epping
[23:31] <spiv> lifeless: if that suits you?
[23:33] <lifeless> hornsby would be better for us today; if epping is better for you though, that is fine too
[23:33] <spiv> lifeless: that's ok
[23:34] <lifeless> ok, well I'll pick up food and grab a train
[23:37] <lifeless> late morning is ok for you ?
[23:40] <spiv> Sure.