[00:17] <poolie> right
[00:19] <jml> hello
[00:20] <jml> I've got a straightforward documentation patch that still needs another positive review.
[00:20] <jelmer> lifeless: I'm looking for a SubunitTestRunner - is there something like that yet?
[00:20] <lifeless> jelmer: all my changes are pushed; what is it you want to do ?
[00:20] <lifeless> jelmer: run a suite in a subprocess?
[00:21] <jelmer> yes - samba4's testsuite is subunit based
[00:21] <jelmer> but the main process is written in perl, not python
[00:21] <jelmer> including the subunit parser
[00:21] <poolie> jml, url?
[00:21] <lifeless> jelmer: contribute it back dude
[00:22] <jml> poolie: http://bundlebuggy.aaronbentley.com/request/%3Cd06a5cd30711261654k61edd4c4n66c7d5894d1c485c@mail.gmail.com%3E
[00:23] <jelmer> lifeless: I will at some point, once we get rid of some of our extensions
[00:23] <poolie> btw you should think about naming a child, should you have one, U.R.Lange :)
[00:23] <lifeless> jelmer: in subunit itself, you can use ExecTestCase to run external tests, and IsolatedTestCase/IsolatedTestSuite to run existing tests past a fork() barrier
[00:23] <jelmer> lifeless: I doubt you want support for setting up a Samba server inside subunit ;-)
[00:24] <lifeless> jelmer: well, thats really part of your tests: )
[00:24] <lifeless> jelmer: but the hook to make that easier; that I could go for
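[Editor's note: since Samba's harness parses the subunit stream in Perl, the text protocol itself is the integration point. A minimal sketch of emitting subunit v1-style results follows; this is illustrative only, and a real suite should use the python-subunit library rather than writing the stream by hand.]

```python
# Minimal emitter for the text-based subunit (v1) result protocol.
# Illustrative sketch only -- real test suites should use python-subunit.
import sys

def emit(name, ok, out=sys.stdout):
    out.write("test: %s\n" % name)          # announce the test is starting
    if ok:
        out.write("success: %s\n" % name)   # report a pass
    else:
        out.write("failure: %s\n" % name)   # report a failure
```

Any parser (Perl or otherwise) that understands these `test:`/`success:`/`failure:` lines can consume the stream.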
[00:24] <jml> poolie: heh heh
[00:24] <jml> poolie: I guess that's better than Minnie Lange
[00:31] <jml> poolie: thanks
[00:33] <poolie> jml, can you merge it or do you need an adult to help?
[00:34] <jml> poolie: I need a grown up to do it.
[00:35] <jml> poolie: will I get an email once it's merged?
[00:35] <poolie> no, but you can poll that page
[00:35] <lifeless> jml: subscribe to bazaar-commits
[00:37] <jml> lifeless: yeesh.
[00:42] <lifeless> spiv: you can commit kthxbye
[00:43] <spiv> lifeless: ta :)
[01:03] <spiv> 'bzr: ERROR: bzrlib.errors.ShortReadvError: readv() read unknown bytes rather than unknown bytes at unknown for "http://bazaar-vcs.org/bzr/bzr.dev/.bzr/repository/packs/a3069b4df97462558e8666ff0cc72386.pack": Server aborted the request'
[01:03] <spiv> That's a lot of unknowns.
[01:05] <lifeless> -rw-r--r-- 1 robertc warthogs 58844730 Nov 28 03:44 a3069b4df97462558e8666ff0cc72386.pack
[01:06] <lifeless> .bzr.log have any hints? I pulled ok earlier today
[01:06]  * spiv looks
[01:07] <lifeless> we have 58844730, 52560, 13592, 5453, 4930, 2259, 2052 packs
[01:07] <spiv> http readv of a3069b4df97462558e8666ff0cc72386.pack  offsets => 649 collapsed 149
[01:07] <spiv> got pycurl error: 18, transfer closed with 162685 bytes remaining to read, (18, 'transfer closed with 162685 bytes remaining to read'), url: http://bazaar-vcs.
[01:07] <lifeless> in bytes
[01:07] <spiv> org/bzr/bzr.dev/.bzr/repository/packs/a3069b4df97462558e8666ff0cc72386.pack
[01:07] <lifeless> spiv: interesting
[01:07] <lifeless> link error perhaps?
[01:08] <spiv> Lots of other successful readvs before that, but none with more than 16 parts after collapsing.
[01:08] <spiv> This is updating my bzr.dev from before I was on leave.
[01:09] <spiv> (but using a current bzr.dev to do the pull, and the local repo is now in packs-0.92 format)
[01:09] <lifeless> is it in knit or pack form?
[01:09] <lifeless> strange
[01:09] <spiv> i.e. pulling into 2977 pqm@pqm.ubuntu.com-20071109154145-1yq4oi390uk3z90o
[01:09] <lifeless> are you using bzr.dev to do this? or your old bzr.dev ?
[01:09] <spiv> Current bzr.dev, as of last night.
[01:09] <lifeless> ok, that has the http fix
[01:10] <spiv> (3041 pqm@pqm.ubuntu.com-20071128045852-9ii8fj85vxz1om46)
[01:10] <lifeless> is it repeatable?
[01:10] <spiv> Let's find out.
[01:10] <spiv> Oh, one other interesting fact:
[01:10] <spiv> real    38m32.140s
[01:10] <spiv> user    0m3.196s
[01:10] <spiv> sys     0m0.380s
[01:10] <lifeless> erm
[01:10] <lifeless> ex-wtf-me
[01:10] <spiv> Which seems... odd :)
[01:10] <spiv> Possibly supports the "link error" hypothesis I guess.
[01:11]  * spiv tries again.
[01:11] <lifeless> time bzr pull
[01:11] <lifeless> Using saved location: http://bazaar-vcs.org/bzr/bzr.dev/
[01:12] <lifeless> Now on revision 3047.
[01:12] <lifeless> real    0m36.865s
[01:12] <lifeless> user    0m6.844s
[01:12] <lifeless> sys     0m0.344s
[01:12] <lifeless> still slow
[01:12] <lifeless> but I think mostly fixable via indexing work
[01:12] <lifeless> also we could do
[01:12] <lifeless> read-invs
[01:12] <lifeless> then read revs + texts + sigs in one readv to the pack
[01:12] <lifeless> dropping latency by 2 RTT per pack
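[Editor's note: the latency win lifeless describes comes from coalescing many small reads into fewer contiguous ranges, so one request covers them all. A hedged sketch of the idea; this is not bzrlib's actual implementation, which also applies fudge factors and size caps.]

```python
# Merge nearby (offset, length) read requests into fewer (start, end) ranges
# so a single readv / HTTP range request can satisfy them in one round trip.
def coalesce(offsets, max_gap=0):
    ranges = []
    for start, length in sorted(offsets):
        if ranges and start <= ranges[-1][1] + max_gap:
            # overlaps or touches the previous range: extend it
            ranges[-1] = (ranges[-1][0], max(ranges[-1][1], start + length))
        else:
            ranges.append((start, start + length))
    return ranges
```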
[01:16] <spiv> lifeless: still weirdly slow
[01:16] <spiv> lifeless: long delay at "http readv of a3069b4df97462558e8666ff0cc72386.pack  offsets => 381 collapsed 4" in the .bzr.log
[01:18] <spiv> lifeless: now another long delay now that it's reached "http readv of a3069b4df97462558e8666ff0cc72386.pack  offsets => 649 collapsed 149"
[01:18] <spiv> lifeless: no network traffic atm according to tcpdump
[01:19] <lifeless> spiv: disable your squid.
[01:20] <jelmer> jml: how does trial retrieve tests from a path?
[01:20] <lifeless> spiv: or otherwise bypass; then try again.
[01:20] <jelmer> jml: hi, btw :-)
[01:20] <jml> jelmer: hi
[01:20] <jml> jelmer: black magic.
[01:20] <jelmer> jml: appears to be something custom, and I can't easily figure it out from the source, but I guess I'm missing something...
[01:21] <spiv> lifeless: ok.
[01:21] <jml> jelmer: you are particularly interested in 'trial foo/bar/baz' rather than 'trial foo.bar.baz', right?
[01:21] <jml> (foo bar baz BZR)
[01:22] <jelmer> jml: yep, in running stuff that's not in my PYTHONPATH
[01:22] <spiv> lifeless: btw, some traffic has occurred finally
[01:22] <jml> jelmer: the code is in twisted.trial.runner.filenameToModule
[01:23] <jelmer> jml: This is for the Samba testsuite - I really don't want people to have to install twisted just to run the testsuite but I like how easy it is to use trial
[01:23] <jelmer> jml: thanks!
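[Editor's note: the general technique behind loading tests from a file path, as a hedged sketch. This is not `twisted.trial.runner.filenameToModule`, which also handles packages and already-imported modules; it just shows the core move of making a path importable.]

```python
# Turn a file path into an imported module by putting its directory on
# sys.path and importing by name.  Illustrative sketch only.
import os
import sys

def filename_to_module(path):
    directory, name = os.path.split(os.path.abspath(path))
    module_name = os.path.splitext(name)[0]
    if directory not in sys.path:
        sys.path.insert(0, directory)   # make the file importable
    __import__(module_name)
    return sys.modules[module_name]
```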
[01:24] <spiv> lifeless: for added laughs, Ctrl-C doesn't interrupt it.
[01:24] <spiv> (nor does Ctrl-\)
[01:26] <spiv> lifeless: gdb seems to think it's in pycurl.so (but there's a lot of ?? in the backtrace).
[01:27]  * spiv SIGTERMs it
[01:29] <fullermd> I think 'pycurl is uninterruptable' is on the known list...
[01:30]  * spiv nods
[01:34] <spiv> lifeless: it just completed in 3m 7s real when I bypass squid.
[01:36]  * spiv files a bug
[01:38] <lifeless> spiv: interesting.
[01:45] <spiv> lifeless: filed bug 172701
[01:46] <ubotu> Launchpad bug 172701 in bzr "Large readv requests can fail if there is a squid proxy" [Undecided,New] https://launchpad.net/bugs/172701
[01:46] <spiv> Perhaps I should have just said "readv requests" rather than "Large readv requests" in that summary.
[01:47] <lifeless> spiv: a http request decode of what is sent to squid, and squid to upstream for the .pack read that fails would help
[01:47] <spiv> lifeless: do you know an easy way to get that off the top of your head?  (e.g. the right tcpdump magic?)
[01:48] <spiv> I can look it up if you don't.
[01:48] <fullermd> Wireshark's decoder/trace is handy.
[01:48] <poolie> -R 'http.request' i think?
[01:49] <lifeless> tshark ftw
[01:50] <fullermd> Coming up with 'wireshark' of course mentally requires a detour through "etherea...  wait, what did they change their name to?".  Stupid program renaming.
[01:53] <lifeless> yes
[01:53] <lifeless> because wireshark is so much clearer and more intuitive
[01:56] <ubotu> New bug: #172701 in bzr "Large readv requests can fail if there is a squid proxy" [Undecided,New] https://launchpad.net/bugs/172701
[03:24] <grovers> quick question: what's the best bzr setup for two people working on a project in a LAN? I can't seem to get permissions that work for the both of us
[03:27] <poolie> grovers, sftp tends to have problems with permissions
[03:27] <poolie> so i'd suggest using bzr+ssh, which should preserve them
[03:28] <grovers> is there any way to use bzr+ssh with a non-standard SSH port?
[03:28] <poolie> sure, i think you can put it in the url, or in your .ssh/config (on unix)
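[Editor's note: both options poolie mentions, spelled out with placeholder host/port/user values.]

```
# In the URL (port after the host):
#   bzr branch bzr+ssh://user@devbox.example.com:2222/srv/bzr/project/trunk
#
# Or in ~/.ssh/config (unix), so a plain bzr+ssh://devbox/... URL works:
Host devbox
    HostName devbox.example.com
    Port 2222
    User user
```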
[03:29] <grovers> so it would be acceptable for us to push/pull to each other?
[04:04] <abentley> lifeless: Bundle Buggy's on packs, but I'm in the middle of reconciling my bzr repo.
[04:05] <lifeless> abentley: cool
[04:06] <lifeless> wool
[04:06] <lifeless> :!./bzr --no-plugins selftest reconcile.*ack  -1
[04:06] <lifeless> testing: /home/robertc/source/baz/reconcile/bzr
[04:06] <lifeless>    /home/robertc/source/baz/reconcile/bzrlib (0.93.0.dev.0 python2.5.1.final.0)
[04:06] <lifeless> [141/141 in 11s, 3 skipped] repository_implementations.test_check_reconcile.TestFileParentReconciliation.test_reconcile_behaviour(RepositoryFormatKnitPack3,UnreferencedFileParentsFromNoOpMergeScenario)
[04:06] <lifeless> native packs reconcile FTW
[04:06] <lifeless> let me check all pack tests, then I'm firing this off and done for the day.
[04:19] <lifeless> later all
[04:20] <Peng> Omg! I got git to compile!
[04:22] <Peng> Wait, I don't want to learn git.
[04:24] <lifeless> I was going to say
[04:24] <lifeless> Peng: there is a native pack reconcile patch I just sent to the list; if you
[04:24] <lifeless> are interested.
[04:25] <lifeless> Peng: (take a backup of .bzr first, of course :)).
[04:26] <Peng> I am interested, but I'm too tired to try it now.
[04:27] <Peng> Also, running bzr.dev is pushing it enough. But a patch?
[04:27] <lifeless> :)
[04:27] <lifeless> just giving the customer what they want :)
[04:29] <Peng> The customer wants a nap. :P
[04:30] <Peng> Anyway, thanks.
[04:30]  * lifeless gives Peng a nap
[04:31]  * Peng snoozes.
[04:39]  * spiv -> late lunch
[04:40] <Peng> Woah.
[04:40] <Peng> I keep ~/local in a bzr branch, and the pack for installing git is 68 MB!
[04:42] <Peng> A lot of bin/git-* files are the same 4.2 MB file. Might be hardlinked together, but bzr doesn't know that.
[04:48] <abentley> Heh, converting my branches from format 5 to 6 saved about 14M of storage.
[04:51] <abentley> knits: 145M, Pack+branch5 98M, Pack+branch6 84M.
[04:53] <Peng> Branch6?
[04:53] <Peng> Branch5 to branch6, that would be dirstate's branch format to dirstate-tags's?
[04:54] <Peng> Heehee, I've got git installed. Cloning git.git had crazy fun output, and only took 2 minutes.
[05:02]  * Peng goes to bed.
[05:05] <abentley> Peng: correct.
[05:37] <poolie> ok, i'm looking at bug 165304
[05:37] <ubotu> Launchpad bug 165304 in bzr "can't pull knits to packs over hpss" [Critical,In progress] https://launchpad.net/bugs/165304
[06:07] <poolie> spiv, still here?
[06:08] <poolie> spiv, if you think it's correct could you merge john's fix for check? it looks plausible to me
[06:12] <spiv> poolie: I'm here.
[06:16] <spiv> poolie: I think it may have been obsoleted by an API change that got merged a bit later (which effectively made the same fix); I sent that reply while trying to catch up on the >1000 bazaar@ messages waiting for me :)
[06:16] <spiv> poolie: I'll double check that's what happened, and if not I'll merge John's fix.
[06:16] <poolie> hm
[06:16] <poolie> i'm not sure if reading, or even trying to skim all the messages is a good idea
[06:17] <spiv> Well, I mostly just marked entire threads as "read" based on subject line alone.
[06:17] <spiv> I wanted to make sure I didn't miss anything about e.g. 1.0 planning.
[06:19] <spiv> I had to resist my curiosity several times :)
[06:21] <spiv> poolie: yes, an equivalent fix was made by a separate change.
[06:21] <poolie> oh ok, so it's not still happening?
[06:23] <spiv> As of yesterday, by the looks.
[06:23] <poolie> cool
[06:24] <spiv> In r3040; that particular change was in 3036.1.2.
[06:30] <abentley> lifeless: My tests show that on some merges, the text extraction time for the simple case (no text merge) is the bottleneck.  I'll be interested in putting in iter_files_bytes, once we have a good implementation.
[06:31] <poolie> i'm seeing some odd failures - new tests from alexander for making the tree hidden call build_tree() incorrectly
[06:31] <poolie> passing just a string, not a list
[06:31] <poolie> i don't understand how this passed before
[07:27] <i386> lifeless: ping
[07:29] <lifeless> i386: yo
[07:29] <lifeless> abentley: if you'd like to write up a decent iter_files_bytes, I'd be happy to give pointers
[07:29] <i386> hey :)
[07:29] <i386> Ive been prototyping some APIs for rhodes recently
[07:30] <i386> and I dont like the feel of JavaScript for my usecase
[07:30] <lifeless> abentley: basically we need to plan the texts and retrieve them across N weaves, so need more code extraction for reuse from knits; generalising the underlying stuff to be arbitrary delta code instead, and make it talk (file_id, revision_id) pairs
[07:30] <lifeless> i386: yah
[07:30] <i386> so ive decided the world needs another DSL
[07:30] <lifeless> have you implemented lisp yet ?
[07:31] <i386> :P
[07:31] <i386> I bought a book on ANTLR
[07:31] <i386> from the pragmatic dudes
[07:33] <lifeless> ok...
[07:33] <lifeless> so you're going to write an entire DSL, or layer it on javascript using jantler, if such a beast exists.
[07:36] <i386> No on top of python
[07:36] <i386> since it already has a target
[07:36] <i386> and I already have some code :)
[07:37] <i386> anyway
[07:37] <i386> Ill buy you many beers if you can poke holes in my implementation / give me advice
[07:38] <lifeless> :)
[07:38] <lifeless> mail me a link to code
[07:38] <i386> Only when I have a language though
[07:38] <lifeless> what is rhodes?
[07:39] <i386> http://i386.kruel.org/blog/?p=257
[07:39] <i386> ^^ rhodes
[07:42] <lifeless> i386: hmm, my first reaction is doom
[07:42] <lifeless> rhodes++
[07:42] <lifeless> new language--
[07:42] <Necrogami> hmm
[07:42] <i386> Yeah
[07:42] <i386> I want something simple and really controllable
[07:43] <Necrogami> Bazaar is Python?
[07:43] <lifeless> Necrogami: oh yes; i386 and I are talking off-topic.
[07:43] <i386> Necrogami: yes
[07:43] <Necrogami> Stackless or standard python?
[07:43] <lifeless> standard
[07:43] <Necrogami> :-\
[07:43] <lifeless> python 2.4 or above
[07:43] <i386> perhaps we could take this somewhere else lifeless ?
[07:43] <lifeless> i386: #rhodes I guess :)
[07:44] <Necrogami> I'm an svn user currently; how does it compare? Or should I spend more time reading?
[07:44] <lifeless> Necrogami: stackless python is a little harder for users to get :). I'm not sure why you'd want it to require stackless python.
[07:44] <Necrogami> stackless is quicker :-\
[07:44] <lifeless> Necrogami: we think it's superior to svn (and we're biased :)). It's got many features svn does not, and is missing a few notable ones svn has.
[07:45] <lifeless> Necrogami: speed is relative; we commit on trees with 50K files in about 4 seconds on laptop hardware;
[07:45] <Necrogami> Right
[07:46] <Necrogami> I'm worried about the size of the files I'm committing vs the time it takes to commit
[07:46] <lifeless> Necrogami: most of the places we're not extremely fast are in the process of being solved; and stackless wouldn't give as much of a win on its own as just fixing our code
[07:46] <Necrogami> that's the problem  i'm currently having
[07:46] <lifeless> Necrogami: how many MB are your files ?
[07:46] <Necrogami> 44.82gb
[07:46] <Necrogami> lol
[07:46] <Necrogami> XML
[07:46] <lifeless> each ?
[07:46] <Necrogami> it's one main file
[07:47] <lifeless> *blink*. Just to be clear; you have a single file that is ~40GB ?
[07:47] <Necrogami> but it has about ... 12.5Million Files attached
[07:47] <Necrogami> Yes
[07:47] <Necrogami> Xml rendered map
[07:47] <lifeless> and 12M subordinate files. wow.
[07:47] <Necrogami> It's a map for a game i'm designing
[07:48] <lifeless> so I can tell you today that we can't commit that; we hold files we're committing in memory as we commit them
[07:48] <Necrogami> 12.58 million Sq Miles each sub file is a sq mile
[07:48] <lifeless> Necrogami: ok this isn't making sense to me; if you have separate files then your main file is not going to be 44G
[07:48] <Necrogami> no ...the Sub files are Images
[07:49] <Necrogami> nothing more
[07:49] <lifeless> Necrogami: so to be sure I'm reading this right
[07:49] <lifeless> you have:
[07:49] <Necrogami> The main file contains all of the linking data for down 1 sq ft
[07:49] <lifeless> 12.58M images
[07:49] <lifeless> 1 xml file that is 44GB in size
[07:49] <Necrogami> yep
[07:50] <lifeless> bzr can't handle that today; we currently start swapping at 3M versioned paths; we will be fixing that.
[07:50] <Necrogami> kk
[07:50] <lifeless> each individual file gets held in memory to commit: 1 copy for the basis, 1 copy for the current version.
[07:51] <lifeless> merges need 3 copies
[07:51] <lifeless> so you'd need about 150GB of memory to do a commit of that file.
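[Editor's note: a back-of-envelope check of lifeless's figures, using the numbers from the discussion; his ~150 GB estimate presumably includes working overhead beyond the raw text copies.]

```python
# bzr (at the time) held whole file texts in memory during commit/merge,
# so for the single 44.82 GB XML file under discussion:
file_gb = 44.82
print("commit (basis + current):         ~%.0f GB" % (file_gb * 2))
print("merge  (basis + current + other): ~%.0f GB" % (file_gb * 3))
```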
[07:51] <Necrogami> yeah ow
[07:51] <lifeless> I'm surprised any tool can manage that size file well though, xml requires a data shuffle to modify elements
[07:52] <lifeless> so my immediate reaction is to suggest ways to shrink that file
[07:52] <Necrogami> Yeah were in the process of breaking it down to SQL of some sort
[07:52] <Necrogami> be it oracle or something
[07:52] <lifeless> now, versioning a database is still going to be a problem :)
[07:53] <Necrogami> oh i know
[07:53] <Necrogami> but breaking down the database by regions is a lot easier
[07:53] <lifeless> so, we'd like to handle such size files in principle.
[07:54] <i386> wow
[07:54] <i386> big file
[07:54] <Necrogami> oh i know
[07:54] <lifeless> there are a number of engineering tasks to complete before we can.
[07:54] <Necrogami> i'll definitely have to stick around...
[07:54] <lifeless> mainly writing streaming diff and merge
[07:55] <i386> Necrogami: You might have better luck breaking each "record" of that file into a single file
[07:55] <Necrogami> any current webapi's for bzr?
[07:55] <i386> and having an index that lists all the "records"
[07:55] <Necrogami> uh
[07:55] <lifeless> i386: 24M paths will be a problem too
[07:55] <Necrogami> right?
[07:55] <lifeless> i386: :)
[07:55] <i386> ahh :)
[07:55] <Necrogami> Lifeless.... Every building and item on the map is in the XML
[07:55] <lifeless> Necrogami: if you mean web viewers to see the code; yes - there is loggerhead, and bzr webserver
[07:55] <Necrogami> so record count .. is about 85 Million
[07:56] <i386> whoa
[07:56] <lifeless> Necrogami: sure thats fine; like I said, currently 3M files makes us swap on commonly available hardware
[07:56] <i386> enjoy :)
[07:56] <Necrogami> Last time i ran a record counter...
[07:56] <Necrogami> Yeah
[07:56] <lifeless> we often hold the full tree in memory; the next inventory format should take us some way to solving that and only needing to hold modified regions in memory
[07:56] <Necrogami> Nothing about this game is gonna be "common"
[07:58] <Necrogami> Yeah i just lost a SVN database ...
[07:58] <Necrogami> so i'm looking around for an alternative
[07:58] <i386> backups are good :
[07:58] <i386> :)
[07:59] <Necrogami> Always .. but not necessarily easy
[07:59] <lifeless> I'm quite surprised svn managed to handle that
[07:59] <lifeless> and impressed
[07:59] <Necrogami> it wasn't handling the main file
[07:59] <Necrogami> just all of the images and sub files
[07:59] <lifeless> oh :)
[08:01] <Necrogami> Yeah :-\
[08:01] <Necrogami> the first time i tried to load that main file on there
[08:01] <Necrogami> it segfaulted the box
[08:02] <lifeless> rofl
[08:02] <Necrogami> so i quit there w/ that file
[08:05] <lifeless> Dinner time
[08:05] <poolie_> lifeless, i tried changing the default and get a failure in switch tests
[08:05] <poolie_> this might be after you last tested it
[08:05] <poolie_> nm, enjoy your dinner
[08:05] <lifeless> poolie_: ring me if you want to chat aboot it
[08:26] <ubotu> New bug: #172741 in bzr "merge from knit-repo to knit branch: invalid progress bar 0/0" [Undecided,New] https://launchpad.net/bugs/172741
[09:13] <mrevell> morning
[09:22] <poolie_> hi mrevell
[09:22] <poolie_> on the phone, will be off in a bit
[09:22] <mrevell> Hi poolie_
[09:56] <jdahlin> I have bzr-svn question
[09:56] <jdahlin> Is it not possible to merge from an svn repo into a normal bzr branch?
[09:56] <jdahlin> I branched off a vcs-import of an svn repo, and then tried to merge from the real repository
[09:57] <jdahlin> which  gives me bzr: ERROR: Repository KnitRepository(...) is not compatible with repository SvnRepository(...)
[10:09] <pbor> jdahlin: I think that to be able to merge you should have branched the bzr repo with bzr-svn, branching from a vcs-import doesn't ensure that your bzr branch and the svn repo have a common ancestor
[10:10] <poolie> jdahlin, i think you might need to upgrade to a rich-root repository for the bzr side
[10:24] <jdahlin> pbor, ah, I take it that the vcs import tool is not creating branches which are compatible with bzr-svn.
[10:24] <jdahlin> poolie, hmm, how do I do that, and are you sure that would help?
[10:27] <pbor> jdahlin: yeah, that's what I've been told
[10:31]  * jdahlin prays bzr-svn will survive importing his 4000 revision repository
[10:35] <nebuchadnezzar> hello
[10:37] <nebuchadnezzar> I want to make a svn branch with the bzr-svn plugin but I need to define a http_proxy
[10:39] <nebuchadnezzar> the proxy is defined for the svn command, it works but it doesn't with bzr-svn :-/
[10:40] <pbor> jdahlin: no way, you have to do that in chunks
[10:40] <jdahlin> nebuchadnezzar, did you try setting the environment variable http_proxy ?
[10:40] <nebuchadnezzar> yes
[10:40] <nebuchadnezzar> it is set
[10:40] <jdahlin> pbor, I'm 60% done, so almost there
[10:41] <pbor> jdahlin: then you have plenty of ram :)
[10:41] <jdahlin> pbor, ram usage by that process is also 60%, interestingly :-)
[10:41] <nebuchadnezzar> jdahlin: netstat show me that python send a SYN to the remote web server instead of the proxy
[10:42] <jdahlin> nebuchadnezzar, iirc urllib is supposed to pickup http_proxy
[10:42] <jdahlin> I'm not sure what bzr uses for http though
[10:44] <james_w> bzr uses pycurl by default, but this is probably going through the libsvn, so I'm not sure why it would differ to the svn client.
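[Editor's note: urllib does read `http_proxy` from the environment; here is a quick way to see what Python's stdlib would pick up. Hedged: as james_w notes, bzr's pycurl transport and bzr-svn's libsvn each do their own proxy handling, so this only demonstrates the stdlib side of the question. The proxy URL is an example value.]

```python
# Show which proxy settings Python's stdlib sees in the environment.
import os
import urllib.request

os.environ["http_proxy"] = "http://proxy.example.com:3128"   # example value
proxies = urllib.request.getproxies_environment()
print(proxies["http"])
```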
[10:44] <james_w> nebuchadnezzar: have you tried bzr branch svn+http://...?
[10:46] <nebuchadnezzar> james_w: it works
[10:46] <nebuchadnezzar> with svn+https:// the connection is done via the proxy
[10:56] <VSpike> hi folks.. probably a simple question but working on a train with limited battery and no internet except cellphone...
[10:57] <VSpike> i want to move versioned files from foo to foo/baz
[10:57] <VSpike> I tried bzr move foo foo/baz
[10:57] <VSpike> and it gave me a exception trace
[10:58] <VSpike> which i will email when i can as it asks
[10:58] <VSpike> so i did mkdir foo/baz
[10:59] <VSpike> mv foo/* foo/baz
[10:59] <VSpike> bzr mv foo foo/baz --after
[11:00] <VSpike> but it said foo/baz was not versioned (I'd moved .bzr too)
[11:00] <Kinnison> Was this within a tree?
[11:00] <Kinnison> E.g.
[11:00] <Kinnison> mkdir foo foo/baz
[11:00] <Kinnison> bzr add foo
[11:00] <Kinnison> bzr mv foo foo/baz/
[11:01] <Kinnison> or moving a tree in its entirety?
[11:01] <VSpike> so i moved files back and tried bzr init foo/baz && bzr mv foo foo/baz
[11:01] <VSpike> i want to move tree entirely
[11:02] <VSpike> if i just move it all including .bzr it thinks files have been deleted
[11:03] <VSpike> will i have to move it sideways then down?
[11:06] <Kinnison> Erm
[11:06] <Kinnison> So you have dir foo
[11:06] <Kinnison> and dir foo is a bzr branch?
[11:06] <Kinnison> you can just rename the dir
[11:06] <VSpike> yep
[11:07] <VSpike> .bzr doesn't contain any absolute paths?
[11:07] <Zindar> no
[11:07] <Kinnison> nope, it's all relative to where the .bzr is
[11:07] <VSpike> hm ok i figured i would have to tell it somehow
[11:08] <poolie> mrevell, can you please make sure to read ian's user guide draft, if you haven't already
[11:08] <poolie> it's on the list
[11:08] <VSpike> let me try that again
[11:08] <Zindar> hi all :)
[11:08] <mrevell> poolie: Thanks for the reminder. I'm working through it now.
[11:08] <Kinnison> VSpike: http://rafb.net/p/PcjCBE85.html
[11:19] <VSpike> sorry..dunno if you saw my last msg
[11:19] <VSpike> when i move it bzr status list lots of files as unknown
[11:19] <VSpike> if you did, I missed your response
[11:26] <Kinnison> VSpike: I offered http://rafb.net/p/PcjCBE85.html
[11:26] <Kinnison> that was the last bit I said
[12:06] <AfC> jdahlin_: how's your import going?
[12:07] <jdahlin_> AfC, oh
[12:07] <jdahlin_> it broke :-)
[12:07] <jdahlin_> SubversionException: ("REPORT request failed on '/svn/!svn/vcc/default'", 175002)
[12:08] <AfC> Gee, I wouldn't have used a smiley for that!
[12:08] <jdahlin_> ah, it was at 90% or so
[12:08] <jdahlin_> how could I not smile?
[12:08] <jdahlin_> let me attempt to do it in steps as pbor suggested
[12:09] <jdahlin_> so the first run is simple, just say, -r 500
[12:09] <jdahlin_> and after that should just be a merge -r 1000
[12:09] <jdahlin_> right pbor?
[12:10] <pbor> yeah... I may have a script I used for gedit
[12:10] <pbor> r=$1
[12:10] <pbor> while [ $r -ne $2 ]; do
[12:10] <pbor>         bzr pull -r$r svn+ssh://pborelli@svn.gnome.org/svn/gedit/trunk
[12:10] <pbor>         r=$(( $r + 100 ))
[12:10] <pbor> done
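[Editor's note: a cleaned-up version of pbor's loop. Hedged changes: `-le` instead of `-ne` so an end revision that isn't an exact multiple-of-100 step still terminates, quoting added, and the URL/revision bounds are examples. `BZR` is overridable so the loop can be exercised without a real branch.]

```shell
# Incremental bzr-svn pull in 100-revision chunks, after pbor's gedit script.
BZR=${BZR:-bzr}

chunked_pull() {
    r=$1; end=$2; url=$3
    while [ "$r" -le "$end" ]; do        # -le (not -ne): any end terminates
        "$BZR" pull -r "$r" "$url" || return 1
        r=$(( r + 100 ))
    done
    "$BZR" pull "$url"                   # pick up revisions past the last chunk
}
```

Usage would look like `chunked_pull 1 4000 svn+ssh://user@svn.example.org/svn/project/trunk`.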
[12:10] <AfC> jdahlin_: (can I suggest doing it in steps of 5 or so, to get the technique right first?)
[12:11] <jdahlin_> AfC, too late, but I'll do the 5 revisions in the first merge
[12:11] <jdahlin_> assuming I can branch off the first 500 revisions
[12:11] <AfC> heh
[12:12] <jdahlin_> thanks pbor
[12:12] <jdahlin_> pbor, I tried bzr-svn 6 months ago or so, I had some issues merging back and forth
[12:12] <jdahlin_> pbor, do you know if it's working better these days?
[12:13] <pbor> jdahlin_: I tried it a bit and it worked
[12:13] <AfC> jdahlin_: I gather that it is - although the person who was hacking on it most (jelmer) is now faced with writing bzr-git
[12:13] <pbor> (to be honest I am doing so little dev that I went back to plain svn)
[12:13] <pbor> jdahlin_: make sure to also get the rebase plugin
[12:14] <jdahlin_> cool, bzr-git!
[12:14] <jdahlin_> does that mean that I don't have to use git any longer?
[12:15] <AfC> Hopefully it will someday.
[12:15] <AfC> Roundtripping between dissimilar systems is always going to risk being lossy; I'll be happy with one way import if I can get it.
[12:21] <jdahlin_> pbor, what is the rebase plugin for?
[12:22] <pbor> jdahlin_: suppose you branch from svn at rev 100 and then hack away and commit locally...
[12:23] <pbor> then some commits happen in svn, e.g. the current head revision is 102
[12:23] <pbor> you can rebase your local changes as they were done branching from 102
[12:23] <jdahlin_> I thought I would be able to get these changes by doing another bzr merge?
[12:23] <pbor> and then you can safely push to svn to commit
[12:24] <AfC> Why wouldn't you just merge?
[12:24] <pbor> I guess this allows you to keep the history more linear
[12:25] <pbor> not sure what the practical implications are
[12:26] <ubotu> New bug: #172791 in bzr "bzr merge A B and bzr merge B A produce different sets of conflicts" [Undecided,New] https://launchpad.net/bugs/172791
[12:35] <Janzert> To upgrade from 0.90 is it safe to just run setup.py install or should I remove the 0.90 bzrlib directory first or?
[12:38] <jdahlin__> hmm, and no bzr-svn ubuntu packages for bzr-svn 0.4.4
[12:41] <ubotu> New bug: #172794 in bzr-eclipse "add support for quick diff" [Undecided,New] https://launchpad.net/bugs/172794
[12:41] <jdahlin__> oh, there is one in hardy. nice.
[12:54] <abentley> lifeless: I am starting to work on mpdiff-based packs.  I'll be making iter_files_bytes the underlying content-retrieval mechanism.
[12:57] <AfC> Janzert: should be safe
[12:58] <Janzert> AfC: ok, thanks.
[14:08] <mneisen> Hi, I just wanted to branch the mainline branch of bzr and got the error message: "bzr: ERROR: Connection error: curl connection error (problem with the SSL CA cert (path? access rights?))". What shall I do?
[14:09] <mwhudson> uh
[14:10] <mwhudson> mneisen: what command line did you use?
[14:11] <mneisen> mwhudson: The right one :-D ... Anyway, case closed, I garbled up the URL ... https instead of http. No wonder bzr could not make any sense of it!
[14:12] <mwhudson> mneisen: well, that's not the correct url then is it :)
[14:12] <mwhudson> mneisen: glad to hear it's working in any case
[14:13] <mneisen> mwhudson: thank you anyway?
[14:14] <mneisen> mwhudson: The question mark was unintended.
[14:14] <mwhudson> mneisen: :)
[15:25] <dato> heh, here I was wondering why updating bzr.dev was taking so long
[15:25] <jam> dato: pulling from packs into knits?
[15:26] <dato> yes
[15:26] <jam> if you have an older bzr that is *really* slow
[15:26] <jam> with bzr.dev it is a bit faster
[15:26] <jam> but it isn't transparently fast
[15:26] <dato> the best I could do would be `bzr upgrade` in my bzr.dev copy, right?
[15:27] <jam> dato: well if you only have 1 branch of bzr.dev (no personal ones) I would probably nuke it and start from scratch
[15:27] <jam> because you should "bzr reconcile" before you "bzr upgrade"
[15:27] <dato> ok, thanks; I'll branch from scratch
[15:27] <jam> Though bzr reconcile should be better than it used to be
[15:27] <jam> The last time I reconciled, it took about 1hr
[15:27] <jam> so I'm guessing a fresh pull will be faster for you
[15:42]  * dato does `rsync -avP --exclude .bzr.backup bazaar-vcs.org::bazaar-ng/bzr/bzr.dev bzr.dev`
[15:47] <dato> jelmer: hi, are you around for a bzr-svn problem? I'm finally trying to push a bzr branch to svn, but I get this error:
[15:48] <dato> % b svn-push https://forja.rediris.es/csl2-minirok/trunk/
[15:48] <dato> bzr: ERROR: Not a branch: "https://forja.rediris.es/csl2-minirok/trunk/".
[15:48] <jam> dato: just as an aside, did you try "bzr svn-push svn+https://" ?
[15:48] <dato> nope, should I?
[15:48] <dato> ah
[15:48] <dato> right
[15:49] <dato> nope, same error
[15:57] <jam> dato: are you sure you have the right URL, going there in a browser
[15:57] <jam> gives me a 404
[15:57] <jam> "Page not Found"
[15:57] <jelmer> abentley: hi
[15:57] <jam> dato: it certainly seems difficult for a 404 to be a branch
[15:57] <jam> especially an svn one
[15:58] <jelmer> s/abentley/dato/
[15:58] <dato> hm
[15:58] <jam> dato: https://forja.rediris.es/svn/csl2-minirok/
[15:58] <jam> you need the "svn/" in the url
[15:58] <dato> oops, thanks
[15:58] <jam> dato: however, there is no 'trunk' either
[15:58] <jam> though maybe you are creating it?
[15:58] <dato> yes
[15:58] <dato> sorry for having you debug this
[15:58] <jam> I don't know if you can create branches with svn-push
[15:58] <jam> jelmer: ^^ ?
[15:59] <dato> yes, afaik
[15:59] <dato> not with "push" alone
[15:59] <jelmer> yes, you can - but only with svn-push
[15:59] <dato> jelmer: svn-push -r 1 should just push one revision, right?
[16:00] <jelmer> yep
[16:00] <jelmer> uhm
[16:00] <jelmer> if -r is supported for svn-push
[16:00] <dato> it's in the help
[16:00] <dato> let's see if they've actually enabled the pre-revprop-change
[16:08] <dato> bzr: ERROR: Permission denied: ".": MKACTIVITY of '/svn/csl2-minirok/!svn/act/eedb297f-6600-4219-a18b-b62d24fcce9b': authorization failed (https://forja.rediris.es)
[16:08] <dato> jelmer: is that error sort of known?
[16:09] <jelmer> dato: you weren't expecting a permission denied error?
[16:09] <dato> well
[16:09] <dato> I guess it makes sense, since it didn't ask for a passwrd
[16:10] <jelmer> bug 120768
[16:10] <ubotu> Launchpad bug 120768 in subversion "Should prompt for passwords" [Undecided,New] https://launchpad.net/bugs/120768
[16:18] <jam> vila: ping
[16:18] <vila> jam: pong
[16:18] <jam> vila: I think you took Robert's code too literally
[16:18] <jam> I just sent an email
[16:18] <jam> but: ranges = coalesced[group * max_ranges:group+1 * max_ranges]
[16:18] <jam> looks like it has a host of problems with it
[16:18] <jam> I think the real value you want is:
[16:19] <jam> ranges = coalesced[group:group+max_ranges]
[16:19] <vila> rhaaa, I knew something simpler was possible ;)
[16:19] <jam> I don't know how what you had was working
[16:19] <jam> Unless you only had 1 range
[16:19] <jam> or maybe 2
[16:21] <vila> ..and correct too :)
[16:23] <vila> it worked in the smoke test because no readv required more than 200 ranges
[16:25] <vila> thanks anyway; fixed. nobody could have used that buggy version so far except me; I rarely commit before testing, but I'm highly interruptible these days, so...
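The operator-precedence bug jam points out can be seen in a minimal sketch (values hypothetical; `coalesced` stands in for the coalesced offset list):

```python
# Hypothetical values standing in for the coalesced offset list.
coalesced = list(range(10))
max_ranges = 3
group = 1

# Buggy: 'group+1 * max_ranges' parses as 'group + (1 * max_ranges)',
# so for group=1 this slices coalesced[3:4] instead of coalesced[3:6].
buggy = coalesced[group * max_ranges:group+1 * max_ranges]

# Intended grouping: the next chunk of at most max_ranges entries.
fixed = coalesced[group * max_ranges:(group + 1) * max_ranges]
```

For `group == 0` both expressions happen to produce the same slice, which is consistent with vila's observation that the smoke test passed because no readv ever needed more than one group of ranges.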
[16:30] <vila> jam: did you read the rest of my patch, any comments ?
[16:30] <jam> vila: did you read my email
[16:30] <jam> ?
[16:30] <vila> I tried re-factoring sftp_readv up front, but there were too many unclear small differences between sftp and http
[16:31] <jam> yeah, the off-by one is a killer
[16:31] <jam> http ranges are inclusive
[16:31] <jam> I don't remember sftp
[16:31] <jam> but it is either start+length
[16:31] <vila> jam: about sftp patch ? Yes replied ;-) off-by-one ? did you send several emails ?
[16:31] <jam> or start => end non-inclusive
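The two range conventions jam contrasts can be sketched with a pair of hypothetical helpers (HTTP byte ranges are inclusive on both ends per the HTTP spec; SFTP-style reads take offset/length pairs):

```python
def http_range_header(start, length):
    # HTTP Range headers are inclusive on both ends: bytes=start-end.
    return "bytes=%d-%d" % (start, start + length - 1)

def offset_length_pair(start, length):
    # SFTP-style readv requests are (offset, length) pairs.
    return (start, length)

# Reading 100 bytes from offset 0 under each convention:
header = http_range_header(0, 100)   # inclusive end point: 'bytes=0-99'
pair = offset_length_pair(0, 100)
```

The `- 1` in the first helper is exactly the off-by-one that makes sharing one readv implementation between the two transports awkward.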
[16:31] <jam> vila: I sent 2 email
[16:31] <jam> emails
[16:31] <vila> ok
[16:32] <jam> one to your list email
[16:32] <jam> and one to your patch email
[16:32] <jam> (from bazaar-commits)
[16:32] <jam> vila: for the 65536... openssh supports larger ranges
[16:32] <jam> but the sftp spec
[16:32] <jam> says they "must support at least 32k"
[16:33] <jam> I believe it says they must support a total request of at least 34k
[16:33] <jam> including overhead
[16:33] <vila> paramiko seems to use 32768 anyway in my currently installed version, nvm
[16:34] <vila> but I looked at the async implementation with a thread... ;-)
[16:35] <jam> well paramiko uses threads, IIRC
[16:35] <jam> but bzr itself doesn't
[16:35] <dato> jelmer: so how should I do it, do a dummy commit with the svn client so that the credentials get stored and bzr can use them?
[16:35] <jelmer> dato: yes, that's the only way I'm aware of
[16:35] <vila> ha, your second email is there
[16:36] <jelmer> dato: until I manage to fix upstream svn...
[16:37] <vila> main problem for the http parser: if we turn it into an iterator to return data as soon as they arrive, there will be no guarantee anymore that the socket is left "clean", i.e. no finally statement for the iterator
[16:38] <vila> I can clean that when the last offset is handled but do I have the guarantee that bzr will always consume all offsets ?
[16:46] <vila> jam: Since http processing is still synchronous I don't understand your point, more requests means more latency means more clock time.
[16:47] <jam> vila: well, I'm thinking more polling the socket for more data
[16:47] <jam> I won't make a strict guarantee that we iterate all of the data, just because if we get an exception we won't pull the rest
[16:47] <jam> but the fact that we requested it means we want it
[16:47] <jam> so I don't think we will issue a request and only read 1/2
[16:48] <jam> in 2.5 you could use a try/finally
[16:48] <jam> the poor man's version is a try/except/else
[16:48] <jam> (for 2.4)
[16:48] <jam> but you aren't guaranteed to have the finally happen if we stop iterating
[16:48] <jam> vila: for you other statement...
[16:48] <jam> right now bzr pull http:// *feels* slow
[16:48] <jam> because we spend a long time buffering a huge request
[16:49] <jam> a) Not consuming memory may help VM pressure, but that isn't really a primary effect
[16:49] <jam> b) It will *feel* faster because we can at least spin our little progress bars as we go
[16:49] <jam> c) you will be able to ^C even with pycurl
[16:49] <jam> because it isn't buffering 20MB before returning control
[16:50] <vila> c) hehe, you've left me no surprise to announce ;-)
[16:50] <vila> a) and b) should be addressed by rewriting the http response parsing
[16:51] <jam> vila: well the other fix for (c) is to give the pycurl code a python write function
[16:51] <jam> so that as it gets data, it calls back into python
[16:51] <jam> which can check for ^C
[16:51] <vila> jam: I know
[16:51] <jam> but I'm not sure how well pycurl handles having the callback raise an exception
[16:51] <jam> I think we could even just have a simple progress callback
[16:51] <vila> jam: realized it lately, but didn't look at how do it
[16:51] <jam> and have it still do the buffering internal to pycurl
[16:53] <vila> regarding exceptions/iterators, you say that if an exception occurs in the user code, the exception can be seen in the iterator func and a simple except Exception is enough to catch it ?
[16:53] <jam> vila: no
[16:54] <jam> just if it happens in the readv code
[16:54] <jam> in python2.5
[16:54] <jam> try/finally works
[16:54] <jam> because when a generator is garbage collected
[16:54] <jam> it gets an exception
[16:54] <dato> jelmer: ok, it worked. I'm so happy to have bzr-svn. :-)
[16:54] <jam> but that construct is *illegal* in python2.4 :*(
[16:55] <jam> :'(
[16:55] <jam> hmm. what is the tear smiley?
[16:55] <jam> I guess Colloquy doesn't have it
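The generator cleanup behaviour jam describes can be sketched as follows (legal from Python 2.5 onward; in 2.4 a `yield` inside `try/finally` was a SyntaxError):

```python
cleaned_up = []

def readv_like(chunks):
    # When the generator is exhausted, closed, or garbage collected,
    # a GeneratorExit is raised at the paused yield, so the finally
    # block runs even if the caller abandons the iterator early.
    try:
        for chunk in chunks:
            yield chunk
    finally:
        cleaned_up.append(True)  # e.g. drain the socket for reuse

gen = readv_like(["a", "b"])
next(gen)     # consume only part of the data
gen.close()   # abandoning the iterator still runs the finally block
```

As jam notes, relying on garbage collection to trigger this is not deterministic enough for vila's case, where the socket must be clean before the next request is issued.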
[16:55] <vila> bah, garbage collection is far too late for me, I should clean the http socket before reusing it for another request while at the same time not trying to read too much if it's clean ;-/
[16:56] <jelmer> dato: cool :-)
[16:56] <vila> well, thanks for the explanation, the easy way does not work then
[16:56] <jam> vila: I'm not talking GC as in python interpreter shutdown
[16:56] <jam> I mean as soon as it is no longer referenced
[16:56] <jam> aka, the calling code has had an exception
[16:57] <jam> vila: I wouldn't try too hard
[16:57] <jam> I think we will always consume everything
[16:57] <jam> except when we have an unrecoverable error
[16:57] <jam> I suppose you could set a flag
[16:57] <jam> which tells the HTTPTransport whether it is clean
[16:58] <jam> and set it to false at the beginning of readv()
[16:58] <jam> and only set it True if you get to the end.
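jam's flag idea could look roughly like this (class and attribute names hypothetical, not the actual bzrlib API):

```python
class HttpTransportSketch:
    """Hypothetical transport tracking whether its socket is reusable."""

    def __init__(self, chunks):
        self._chunks = chunks      # stand-in for data arriving off the wire
        self._socket_clean = True  # True only when no response is pending

    def readv(self):
        # Pessimistically mark the connection dirty as soon as the
        # iterator starts producing data...
        self._socket_clean = False
        for chunk in self._chunks:
            yield chunk
        # ...and only mark it clean once everything was consumed.
        self._socket_clean = True

t = HttpTransportSketch(["a", "b"])
list(t.readv())   # fully consuming readv() marks the socket clean again
```

If a later request finds the flag still False, the transport knows the previous response was abandoned mid-stream and can drain or reopen the connection instead of misparsing leftover bytes.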
[16:58] <vila> jam: hmm, not trying too hard and getting blamed when the next http request fails :-)
[16:58] <jam> vila: I'm always happy to blame you
[16:58] <vila> jam: yeah sure, but just  an expect clause is easier and cleaner
[16:58] <vila> jam: that's a good start ;-)
[16:59] <jam> "expect" or "except"
[16:59] <vila> except of course
[17:00] <vila> most of my typos occur because the brain->fingers socket is UDP
[17:01] <jam> have you ever been checked for Dyslexia ?
[17:01] <jam> :)
[17:02] <vila> have you ever tried changing your keyboard mapping between azerty and qwerty hundreds of times a day ? ;-)
[17:02] <jam> vila: I have gone from dvorak to x,dokt
[17:02] <jam> oh, I mean qwerty :)
[17:02] <vila> I now use only qwerty keyboards but the harm is done ;)
[17:02] <jam> ,dppw G hslqk vls, ,jak tsf mdal
[17:02] <jam> Gq.d ld.do jah rosnpdm; ,gkj vdtmar;
[17:02] <jelmer> jam: Why'd you decide to go back?
[17:03] <jam> (I don't know what you mean, I've never had problems with keymaps)
[17:03] <jam> jelmer: I still use dvorak primarily
[17:03] <jam> but it is hard to touch type with one hand
[17:03]  * jelmer has attempted to switch to dvorak a couple of times
[17:03] <jam> with a baby in the other
[17:03] <jelmer> but I just get so annoyed after a couple of days that I switch back
[17:03] <jam> when you have a qwerty keyboard in dvorak mode
[17:03] <jam> jelmer: it took me 1 month to get back to real speed
[17:03] <jam> I had a slow month one time when my wife was away
[17:03] <jelmer> jam: There's single-handed dvorak as well :-)
[17:04] <jam> jelmer: sure, but try getting that on the keyboard :)
[17:04] <jam> And it is also a left form
[17:04] <jam> and a right-form
[17:04] <jam> and the baby goes both ways too
[17:04] <jam> so I would need keys with dynamic labels
[17:04] <jelmer> :-) More keymaps to learn...
[17:04] <jam> which would be really cool
[17:04] <jam> some other oddities
[17:05] <jam> the "A" and "M" keys don't move
[17:05] <jam> so it is really easy to switch to the wrong keyboard after typing one
[17:05] <jam> But Typing Tutor showed my WPM rate to be huge on those keys
[17:05] <jam> I would be like 10-20 WPM for most keys
[17:05] <jam> and then Spacebar was about 90
[17:05] <jam> and A & M were still in the 70 range
[17:05] <jam> the graph looked pretty funny
[17:06] <jam> jelmer: I'm pretty happy with dvorak now. And I know Martin uses it, too.
[17:06] <jam> Vim was the hardest to switch between
[17:06] <jam> because so many commands just become muscle memory
[17:06] <jam> rather than "O" or "a"
[17:06] <jelmer> yeah, I've heard from quite a few people that it's really good
[17:06] <dato> jelmer: I think it'd be best if the changelog of bzr-rebase and bzr-svn would explicitly have a DD adding the DM-foo header, so it's clear it wasn't sneaked or anything. I'll put my name if you don't mind?
[17:06] <jam> When I switched and started getting *good*
[17:06] <jam> it felt like I was typing slower
[17:06] <jam> because my fingers weren't moving as much
[17:07] <jelmer> dato: sure, thanks!
[17:07] <jam> Which I felt was a good sign that it is actually less stressful
[17:10]  * jelmer just needs to gather the discipline at some point to actually make the switch and not quit halfway through again :-)
[17:10] <jam> jelmer: well, I would switch back occasionally when I had a long email to write
[17:10] <jelmer> did you relabel your keyboard or anything?
[17:11] <jam> I did not relabel
[17:11] <jam> but I'm a strong touch typist (now)
[17:11] <jam> In high-school I always looked at my hands, but eventually I just grew out of it
[17:11] <jam> not sure exactly when it happened
[17:11] <jam> but it wasn't a conscious effort
[17:11] <jam> anyway, not having the keyboards labeled actually worked well for me
[17:12] <jam> I do have a keyboard with the labels moved around now
[17:12] <jam> but I couldn't move the "F" and "J" keys
[17:12] <jam> because they have the little dots that you put your fingers on
[17:12] <jam> and I accidentally switched the V and W
[17:12] <jam> and haven't gone back to fix it.
[17:12] <jam> (Which is a pretty big issue because ^W is close window, while ^V is paste text)
[17:13] <jam> not something you want to mix up.
[17:17] <jelmer> :-)
[17:35] <ubotu> New bug: #172861 in bzr "add: accepts aliases due to case insensitivity" [Undecided,New] https://launchpad.net/bugs/172861
[17:45] <ubotu> New bug: #172865 in bzr "commit: fails to detect deletion of aliased file" [Undecided,New] https://launchpad.net/bugs/172865
[18:06] <ubotu> New bug: #172870 in bzr "'bzr fetch' broken" [Low,Triaged] https://launchpad.net/bugs/172870
[18:44] <mwhudson> er
[18:45] <mwhudson> bzr --no-plugins selftest test_build_and_install
[18:45] <mwhudson> fails for me on bzr.dev
[18:45] <mwhudson> oh duh
[18:45] <mwhudson> ./bzr
[18:46] <mwhudson> i'm always doing that
[18:48] <dato> OH GOD
[18:48] <dato> this is *so* fast
[18:48] <dato> commiting in bzr.dev with packs
[18:50] <mwhudson> abentley: should revisiontree have case_sensitive = True somewhere?
[19:14] <mwhudson> how long should i expect reconciling bzr.dev (with bzr.dev) to take?
[19:15] <mwhudson> (in a knits repo)
[19:17] <dato> 16:27 <jam> Though bzr reconcile should be better than it used to be
[19:17] <dato> 16:27 <jam> The last time I reconciled, it took about 1hr
[19:19] <mwhudson> thanks
[19:19] <mwhudson> the progress bar is doing funny things :)
[19:30] <jam> mwhudson: it depends a lot on hardware. I don't think I've tested it after Robert's latest improvements
[19:30] <jam> but just prior to that, my desktop was about 1.5hrs and my laptop was 2-3hrs
[19:30] <mwhudson> going for 28 minutes here
[19:31] <mwhudson> and the progress bar is almost all the way across
[19:31] <mwhudson> for the second time :)
[19:31] <mwhudson> oh wow, just finished
[19:32] <mwhudson> so must have been ~30 minutes
[19:32] <mwhudson> not bad at all
[19:37] <mwhudson> fooey
[19:37] <jam> ?
[19:37] <mwhudson> my upgrade to packs broke
[19:37] <mwhudson> bzr: ERROR: Revision {('aaron.bentley@utoronto.ca-20070517163555-3i7jamitmffdg85l',)} not present in "<bzrlib.knit.KnitGraphIndex object at 0x4cb3ed0>".
[19:37] <mwhudson> /home/mwh/src/bzr/bzr.dev/bzrlib/lockable_files.py:110: UserWarning: file group LockableFiles(<bzrlib.transport.local.LocalTransport url=file:///home/mwh/src/bzr/.bzr/repository.backup/>) was not explicitly unlocked
[19:38] <jam> that isn't good
[19:38] <jam> is that from a 'bzr check' ?
[19:39] <jam> mwhudson: how did you get that error?
[19:39] <mwhudson> "time ./bzr.dev/bzr upgrade --format=rich-root-pack"
[19:39] <jam> so the upgrade itself failed
[19:39] <mwhudson> yes
[19:39] <jam> hmm.. rich-root-pack will have to rebuild all of your inventories
[19:40] <jam> are you sure you don't want pack-0.92 ?
[19:40] <mwhudson> not really no
[19:40] <mwhudson> let me try that instead
[19:40] <jam> It still shouldn't fall over and die
[19:40] <jam> but we should start with 1 step at a time
[19:41] <jam> mwhudson: if you have been using bzr-svn in that repository, you probably *have* to use it, though.
[19:41] <mwhudson> indeed, if this works i can try upgrading again
[19:42] <mwhudson> ok, so _that_ went rather more smoothly
[19:43] <jam> well, pack-0.92 just has to do a very simple transformation
[19:43] <jam> actually, I think it can just shove the inventory data over verbatim
[19:43] <jam> but for the file data
[19:43] <jam> it gunzip, split & strip, gzip
[19:43] <jam> (it strips the annotation lines.)
[19:46] <mwhudson> oh, is pushing from knits to packs still terrible?
[19:46] <mwhudson> er
[19:46] <mwhudson> packs to knits
[19:46] <jam> mwhudson: not as terrible
[19:47] <jam> It has to add the texts one by one
[19:47] <jam> but it doesn't rebuild the whole history of each modified file
[19:47] <mwhudson> jam: ok
[19:47] <jam> I'm using it right now
[19:47] <jam> (my local repos are packs, bound to a knit repo)
[19:47] <mwhudson> oh, actually this is a fairly big push i'm doing
[19:56] <igc> morning
[19:59] <jam> morning igc
[19:59] <igc> hi jam
[19:59] <jam> enjoying OSDC?
[19:59] <igc> excellent
[19:59] <igc> all finished now
[20:00] <ubotu> New bug: #172893 in bzr "test_lock_permission selftest fails" [Undecided,New] https://launchpad.net/bugs/172893
[20:04] <mwhudson> yah, ./bzr.dev/bzr upgrade --format=rich-root-pack fails starting from packs-0.92
[20:04] <mwhudson> i guess that's a bug or something
[20:06] <jam> or you might still have inconsistent data, and the plain upgrade to packs doesn't notice
[20:07] <jam> can you try running 'bzr check'?
[20:07] <jam> It is probably a bit slower than 'bzr reconcile', though.
[20:07] <mwhudson> in a bit
[20:07] <jam> thanks
[20:09] <lifeless> mmoin
[20:09] <mwhudson> actually it fails in 0.3 seconds :)
[20:09] <mwhudson> oh, partial upgrade
[20:09] <beuno> abentley, when you got a minute, I'd like to have a quick chat with you over this whole XML thing if possible. Seems easier then through the ML
[20:18] <lifeless> hi statik
[20:26] <beuno> I'm pulling bzr.dev to my current local knit repo and it's *extremely* slow (I'm at revno 3021 and it's been ~25 minutes and I'm < 10% through)
[20:27] <beuno> is this an expected behaviour due to the change to packs?
[20:27] <lifeless> beuno: what revno are you using to pull?
[20:27] <beuno> lifeless, I'm at 3021 and I just did "bzr pull"
[20:27] <lifeless> beuno: there was a bug where it reannotated too much, which you could be experiencing
[20:28] <beuno> would it be useful if I canceled and debugged somehow, or is it solved?
[20:28] <dato> beuno: jam recommended I pull a new fresh branch
[20:29] <lifeless> beuno: looks like that is not the case
[20:29] <lifeless> beuno: don't stop it
[20:29] <lifeless> beuno: look in  ~/.bzr.log
[20:29] <lifeless> is there http readv log output
[20:29] <lifeless> if so, look for 'error...retring with one request' after a large collapsed readv count
[20:30] <beuno> lifeless, yeap
[20:30] <beuno> http readv of a3069b4df97462558e8666ff0cc72386.pack  offsets => 14 collapsed 4
[20:30] <beuno> GET: [http://bazaar-vcs.org/bzr/bzr.dev/.bzr/repository/packs/a3069b4df97462558e8666ff0cc72386.pack]
[20:30] <beuno> http readv of a3069b4df97462558e8666ff0cc72386.pack  offsets => 15 collapsed 4
[20:30] <beuno> GET: [http://bazaar-vcs.org/bzr/bzr.dev/.bzr/repository/packs/a3069b4df97462558e8666ff0cc72386.pack]
[20:30] <beuno> http readv of a3069b4df97462558e8666ff0cc72386.pack  offsets => 14 collapsed 5
[20:30] <beuno> GET: [http://bazaar-vcs.org/bzr/bzr.dev/.bzr/repository/packs/a3069b4df97462558e8666ff0cc72386.pack]
[20:30] <beuno> lots of those
[20:30] <lifeless> beuno: so each of those is reading one text
[20:31] <lifeless> beuno: then it gets annotated and inserted into your local repo
[20:31] <beuno> lifeless, right, so it's a "known bug". Can we make use of this case in any way, or should I follow dato's suggestion and re-branch?
[20:33] <lifeless> beuno: I would reconcile and upgrade to packs, then pull. No need to rebranch
[20:33] <beuno> lifeless, will do, thanks
[20:41] <lifeless> jam: ping
[20:41] <jam> lifeless: pong
[20:42] <lifeless> how are you going on the pull bug
[20:42] <jam> doing okay
[20:42] <jam> getting sidetracked again because bzr+ssh is taking 6s to copy 2MB on my loopback
[20:43] <lifeless> that will be the data stream api sucking on bzr:
[20:43] <jam> sftp is taking 700ms
[20:43] <jam> but on my server it is 244ms
[20:43] <jam> for bzr+ssh
[20:43] <lifeless> oh
[20:43] <lifeless> push or pull
[20:43] <jam> and 358ms for sftp
[20:43] <jam> transport.get_bytes()
[20:43] <lifeless> pull
[20:43] <jam> I don't know if Mac is just really Fubar
[20:44] <jam> or if we are provoking it
[20:44] <lifeless> do you want me to take over the patch then ?
[20:44] <jam> I'll finish it up
[20:44] <lifeless> ok
[20:44] <jam> just frustrating to use bound branches
[20:44] <jam> when every commit is taking > 6s
[20:44] <jam> because it is loading the remote index
[20:44] <lifeless> are they packs?
[20:45] <jam> remote is still knits
[20:45] <lifeless> well then
[20:45] <jam> I want to test the pack => knit stuff
[20:45] <lifeless> packs -> partial indices.
[20:45] <jam> since it exposed some real issues
[20:45] <lifeless> kk
[20:56] <jam> weird, the Pack=>Pack code isn't being tested by interrepository_implementations
[20:57] <jam> I'm adding it, since the way you had me set up the test is portable across all repos
[20:57] <lifeless> are you sure ?
[20:57] <jam> (with the rule that you either have to copy the basis text, or you have to fail)
[20:57] <jam> lifeless: InterPackRepo isn't listed in __init__.py
[20:57] <lifeless> I am positive I have test failures within interrepository when I bugger up pack to pack
[20:57] <lifeless> jam: which __init__ ?
[20:57] <jam> hmm... you are using InterRepository._optimizers
[20:57] <lifeless> right
[20:58] <lifeless> dynamic registration -> tests
[20:58] <jam> interrepository_implementations/__init__.py
[20:58] <lifeless> so that plugins get tested too
[20:58] <jam> lifeless: I don't think it is working as you expect
[20:58] <jam> since it wasn't testing that code
[20:58] <lifeless> __init__ only lists specific manual combinations where the dynamic stuff isn't sufficient
[20:58] <jam> until I manually added it
[20:58] <jam> hmm.. maybe
[20:58] <jam> I'll take a closer look
[20:59] <lifeless> it may have been masked by poolie's nuking of the default repository format attribute; unlikely but possible; run with -v
[20:59] <jam> Unfortunately the ids only show you the inter object
[20:59] <jam> so Knit1 => Knit3 is just InterKnit
[20:59] <lifeless> right
[20:59] <lifeless> so if -v shows InterPack
[20:59] <lifeless> then it thinks its testing it
[21:00] <lifeless> but if its not failing, something is masking it.
[21:00] <jam> you're right
[21:00] <jam> and it seems like Pack=>Pack is the only thing that fails
[21:00] <jam> Pack => Knit is file
[21:00] <jam> fine
[21:00] <jam> and Knit => Pack is fine
[21:00] <lifeless> probably incorrect parameterisation - some aspect of the format isn't actually one that will cause it to match
[21:00] <jam> (Probably because both use the Knit=>Knit code)
[21:01] <jam> lifeless: it is present, and it is failing
[21:01] <jam> I just thought I should see more failures
[21:01] <lifeless> jam: good; why did you think it wasn't being tested :)
[21:01] <jam> Only 1 fail
[21:01] <jam> I thought it was a Pack => knit
[21:01] <jam> or knit => pack
[21:01] <lifeless> no, they do the dumb recursive-join-of-versioned-files stuff
[21:02] <lifeless> remote* uses data streams
[21:02] <jam> so do we need an inter that goes through Remote ?
[21:04] <jam> by the way, we use pack_repo.Packer() to do the fetch
[21:04] <jam> is this going to conflict with your latest revising of that code?
[21:04] <jam> I suppose I can just review your patch
[21:05] <jam> and get it merged
[21:06] <kiko>  1366 kiko      15   0  302m 300m 2812 R    6 14.9   0:59.62 bzr
[21:07]  * kiko tells bzr to stop growing
[21:07] <jam> kiko: what are you doing? and what version of bzr are you doing it with?
[21:07] <jam> (reconciling launchpad code with an older version of bzr is not recommended)
[21:08] <kiko> jam: just a bzr branch
[21:08] <jam> from/to ?
[21:09] <jam> (that said, at the moment, if you are using http we will buffer all requests before processing them.)
[21:13] <lifeless> jam: there are two
[21:13] <lifeless> jam: er, to and from
[21:14] <lifeless> jam: my reconcile patch contains most of the code you need.
[21:14] <jam> I'm trying to review your "trivial syntax error"
[21:14] <lifeless> kiko: what version of bzr  ?
[21:14] <jam> which I think is your reconcile patch
[21:14] <lifeless> jam: yes; see the prior email for the cover text
[21:15] <kiko> Bazaar (bzr) 0.91.0
[21:15] <kiko> jam: a local branch of launchpad from a local rsynced launchpad
[21:15] <kiko> no repo
[21:15] <lifeless> kiko: use bzr.dev; I think you'll find its a lot more efficient on memory
[21:25] <jam> lifeless: it is a little bit hard to track down what actually changed versus what is just mechanical
[21:25] <jam> since you split the functions up
[21:26] <jam> I can certainly check ReconcilePacker directly
[21:26] <jam> but any hints for just plain Packer ?
[21:26] <lifeless> oh
[21:27] <lifeless> so, in plain Packer there are no changes, just shuffling
[21:27] <lifeless> there is one new method in Packer, which is useful to you
[21:27] <lifeless> but not actually called
[21:27] <lifeless> thats the external_text_references method
[21:28] <lifeless> it might be better on Pack actually; still, it's the thing you need to determine what external texts you need to reconstruct all the texts in the pack
[21:29] <jam> lifeless: "external_compression_parents_of_new_texts" ?
[21:29] <lifeless> yes
[21:30] <lifeless> e.g. for the scenario we cooked up
[21:30] <jam> yeah
[21:30] <lifeless> rev B file id F deltas against rev A file id F
[21:30] <lifeless> that should return [(F,A)]
[21:30] <kiko> lifeless, where does bzr.dev live
[21:31] <lifeless> then you can do present_external_keys = list(self._pack_collection.text_index.combined_index.iter_entries([(F,A)]))
[21:31] <lifeless> if len(present_external_keys) != len([(F,A)]):
[21:31] <lifeless>    error here
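The presence check lifeless sketches in pseudocode boils down to a set difference; a generic standalone sketch (function and key names hypothetical, not the bzrlib internals):

```python
def missing_external_keys(required, index_keys):
    """Return the required compression parents absent from the index."""
    return set(required) - set(index_keys)

# For the (F, A) example above: the delta for file F in rev B can only
# be applied if its compression parent (F, A) is present somewhere in
# the combined text index.
required = [("F", "A")]
missing = missing_external_keys(required, [("F", "A"), ("F", "B")])
# missing is empty here, so every text in the pack can be reconstructed
```

In the real code the equivalent of `index_keys` would come from something like the combined text index's `iter_entries`, and a non-empty result would be the error case lifeless shows above.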
[21:31] <lifeless> kiko: http://bazaar-vcs.org/bzr/bzr.dev, or lp:bzr
[21:33] <kiko> lifeless, and debs?
[21:33] <lifeless> we don't currently build debs of bzr.dev
[21:34] <kiko> ah, I see
[21:34] <lifeless> kiko: you know you can run from source trivially -
[21:34] <kiko> yeah, I know
[21:36] <beuno> lifeless, I reconciled and did: "bzr upgrade --format=knitpacks-experimental", and it's still taking forever to pull with the same feedback in ~/.bzr.log.  What could I have done wrong?
[21:37] <lifeless> beuno: do you have a shared repository? did you actually upgrade the repo and not just a branch?
[21:37] <lifeless> bzr info will tell you
[21:37] <beuno> lifeless, aaaaah, yes, shared repo
[21:37] <beuno> that might be it
[21:38] <beuno> doing the same with the shared repo
[21:41] <lifeless> hi jrydberg
[22:00]  * igc breakfast
[22:13] <poolie> morning
[22:16] <beuno> hello again
[22:16] <beuno> upgrading to packs
[22:16] <beuno> and I'm getting this: http://paste.ubuntu-nl.org/46275/
[22:19] <beuno> (the shared repo has already been reconciled and upgraded)
[22:20] <jam> well, it seems the reconcile is seeing nothing to do
[22:20] <jam> but I don't know why it is trying to remove the un-added pack from the memory index
[22:20] <jam> beuno: so (a) the error is "harmless" your data is good
[22:21] <jam> (b) I'm guessing Robert's new code (which I'm reviewing now) is going to change all of this anyway
[22:21] <lifeless> beuno: what version of bzr are you doing that with ?
[22:21] <lifeless> beuno: oh, yes. you can't reconcile packs without my patch
[22:21] <lifeless> hi poolie
[22:21] <lifeless> poolie: I've changed the default; patch up for review
[22:21] <poolie> ok, thanks
[22:22] <lifeless> poolie: I'm working on 165106 now
[22:22] <dato> bug #165106
[22:22] <ubotu> Launchpad bug 165106 in bzr "check that get_data_stream distinguishes annotated and unannotated knits" [High,Invalid] https://launchpad.net/bugs/165106
[22:22] <lifeless> nearly done
[22:22] <poolie> i didn't read the reconcile patch yet, but will soon
[22:22] <beuno> lifeless, latest bzr.dev
[22:22] <beuno> freshly pulled
[22:22] <jam> lifeless: generally it seems good
[22:23] <beuno> lifeless, it's actually a knit branch with a shared-repo with packs
[22:23] <jam> There are a couple confusing things which should probably be documented
[22:23] <jam> and a small bug in some error handling code
[22:24] <jam> lifeless: and I would like to build on your patch, but I think I can do it without any cleanups
[22:26] <lifeless> jam: feel free to +1, then merge it into yours and build on it and cleanup at the same time :)
[22:26] <lifeless> jam: no point serialising your fixes to me, then me deserialising and doing them
[22:26] <lifeless> jam: unless its easier for you
[22:26] <jam> lifeless: as long as you are okay with my comments
[22:26] <jam> I've already bb:tweak ed it
[22:29] <lifeless> ref-keys is missing_texts
[22:29] <jam> I'm planning on merging your code into my branch
[22:29] <lifeless> k
[22:29] <jam> it should be fine as is
[22:29] <jam> for what I need
[22:29] <lifeless> so your comments and questions are good
[22:29] <lifeless> this is unannotated so the ordering stuff is moot
[22:30] <jam> other than "locality of reference" but sure
[22:31] <lifeless> yah
[22:31] <lifeless> but this is meant to be rare
[21:31] <lifeless> the _use_pack stuff looks correct to me; it's a little redundant, is what I think you are saying
[22:32] <jam> lifeless: correct
[22:32] <lifeless> the knitversionedfile index stuff I'll write up for you
[22:32] <jam> but also I meant to point out you could check the indexes there
[22:33] <lifeless> well we don't want to have to compare 10's of thousands of keys
[22:33] <lifeless> better to set a flag when we notice a problem
[22:33] <lifeless> and not all issues will show up in the indices
[22:48] <poolie> lifeless, did you fix the switch thing implicitly to change the format?
[22:48] <poolie> i think there were no other failures
[22:49] <lifeless> poolie: I fixed the switch thing yes
[22:49] <lifeless> part of the same patch
[22:59] <poolie> lifeless, spiv, igc, jam: one minute to call
[22:59] <igc> sure
[23:30] <vila> spiv: have you looked at my comment on bug #172701 ?
[23:30] <ubotu> Launchpad bug 172701 in bzr "Large readv requests can fail if there is a squid proxy" [High,In progress] https://launchpad.net/bugs/172701
[23:32] <spiv> vila: Thanks; I just saw it.
[23:32] <spiv> vila: I'll try again with pycurl.VERBOSE.  Also I'll try capturing the actual HTTP requests/responses involved, both before and after the proxy.
[23:33] <lifeless> vila: I knew spiv had a proxy
[23:33] <lifeless> vila: I was doing divide and conquer
[23:33] <vila> lifeless: oh, I hoped you had an insight...
[23:33] <spiv> vila: in case it helps, my proxy is an intercepting proxy (i.e. the client thinks it is connecting to port 80 of bazaar-vcs.org, but it actually connects to squid).
[23:34] <spiv> lifeless: you wanted to figure out which software of yours to blame? ;)
[23:34] <vila> hehe
[23:36] <vila> spiv: anyway, there is a bug here since we should have caught the short read and issued another request (the new implementation will catch it, but I want to give read-as-we-receive a try before submitting)
[23:37] <vila> good night all, kudos for the hard work of the last days