[00:00] <jam> I think it would be an incompatible change, or at least a whole lot of shoe-horning to get optimal gc => gc via get_record_stream.
[00:02] <jam> lifeless: I guess I've felt like you've been optimizing the knit => gc case, rather than the gc => gc case. And the former is transient until everyone has upgraded, while the latter is an on-going issue.
[00:06] <lifeless> jam: I've been hammering on the interface that is used
[00:07] <lifeless> we need gc->gc to work when the source is actually a RemoteRepository
[00:07] <jam> lifeless: I suppose I also haven't seen any visibility of your work. I don't know if you just aren't committing, or if you aren't sending emails
[00:08] <jam> Certainly I don't see a public URL to get your latest code.
[00:08] <lifeless> ~lifeless/+junk/bzr-groupcompress on launchpad
[00:08] <lifeless> and my btree-graphindex bzr branch
[00:09] <lifeless> which has outstanding uncommitted work, because I need to fix this api
[00:09] <lifeless> I haven't been able to spend time on the core code for a couple weeks now
[00:10] <lifeless> helping with 1.6 stuff, analysing nasty bugs, and thinking about the interface problems
[00:11] <lifeless> so I think you're seeing 'robert has not been working on this'
[00:11] <lifeless> rather than 'robert has been obsessing about knit->gc'
[00:14] <beuno> mwhudson, have any idea how/where I can upload the 1.6 tarball in LP?  https://edge.launchpad.net/loggerhead/+download  isn't very helpful
[00:15] <mwhudson> beuno: i think maybe you go to the release page
[00:15] <mwhudson> beuno: which you perhaps need to create
[00:16] <beuno> mwhudson, ah, I see.  Ok, I have to go now, but I'll release 1.6 with NEWS, announcements, etc when I come back (or tomorrow morning), unless you can think of something else we're missing
[00:17] <lifeless> jam: before you go, what I need to understand is if you think a hint is _wrong_ or just premature
[00:18] <lifeless> jam: if you think its wrong I'll head back to drawing boards, if you think its premature I'll accept we have a different order on the same TODO list
[00:18] <mwhudson> beuno: awesome
[03:45] <jam> lifeless: just premature, not wrong, I thought I made that somewhat clear.
[03:45] <jam> sorry about the delay
[03:45] <lifeless> jam: it was getting pretty angsty, I wanted to be sure
[03:45] <lifeless> no worries about the delay; life does come first :)
[03:54] <lifeless> jam: do you have an opinion on lexer and cc toolchain?
[03:55] <lifeless> jam: I'm currently trying antlr3
[03:55] <jam> lifeless: I do not
[04:03] <jam> lifeless: what are you trying to parse?
[04:04] <lifeless> dirstate
[04:04] <lifeless> initially that is
[04:04] <jam> seems a bit overboard, but if you feel it is merited
[04:05] <jam> As far as one I came across a while ago: http://spirit.sourceforge.net/
[04:05] <jam> it is C++ using templates
[04:05] <jam> to allow you to write EBNF, I guess
[04:05] <jam> boost is a very nice advanced C++ library
[04:05] <jam> not necessarily something you want to use in this context
[04:16] <lifeless> yah
[04:16] <lifeless> so if we do C++, I'd grab boost, its nice
[04:17] <fullermd> Honkin' big.
[04:17] <lifeless> but I think plain C is probably best at this point
[04:19] <jam> lifeless: sure, the initial docs for antlr only describe java and c++ output
[04:19]  * spiv doesn't like Fridays.
[04:19] <jam> though it seems they have since implemented several output languages
[04:21] <lifeless> http://www.antlr.org/api/C/
[04:21] <lifeless> spiv: heh
[04:25] <mheld> if you had to categorize bzr commands, how would you?
[04:25] <mheld> transfer, edit, info?
[04:27] <Peng_> Huh, importing paste is either taking 0.0037 seconds, or 9.5. That makes starting Loggerhead a little slow...
[04:27] <jam> Peng_: that seems like a bit of variation.
[04:28] <fullermd> Yeah.  Do we get to pick which it is?   8-}
[04:28] <jam> lifeless: yeah, the problem is that on the download page, they say you can use the binaries for C++, C# or python, when they really mean C...
[04:30] <jam> I will also say I have a hard time finding online examples
[04:34] <lifeless> Peng_: out of cache?
[04:39] <Peng_> lifeless: Yeah, probably.
[04:41] <mwhudson> it's not like paste does all that much when you import it
[04:41] <lifeless> mwhudson: 'python'
[04:42] <mwhudson> yeah yeah
[04:44] <lifeless> python, yeah yeah yeah, python, yeah yeah yeah
[04:44] <lifeless> so I played with freeze
[05:00] <markh> lifeless: in a packet trace, are you only interested in the few frames before and after the error?
[05:01] <lifeless> markh: I'm not interested in them at all; I want you to look at it ;)
[05:01] <markh> heh :)
[05:01] <lifeless> markh: we're looking for retransmissions, tcp errors etc
[05:02] <fullermd> Ideally, you'd probably want at least the last few packets before it goes off in the weeds.
[05:02] <lifeless> and to see what happens after
[05:02] <lifeless> if it gives up I think it will send a FIN
[05:03] <fullermd> RST more likely
[05:03] <lifeless> fullermd: yeah mea culpa
[05:03] <lifeless> EMEMORY
[05:03] <fullermd> Of course, it IS Windows; so it might send just about anything  ;)
[05:10] <lifeless> jam: whats up with NEWS
[05:11] <lifeless> jam: we seem to be jumping around with sections, I thought we had a fixed set these days ?
[05:12] <jam> lifeless: I see the same ones IN DEVELOPMENT as elsewhere, what would you have expected?
[05:13] <jam> lifeless: there has been some motion because of things that were in dev that got merged into a release candidate.
[05:13] <jam> I tried to keep them in the same sort order, though.
[05:13] <lifeless> I only ask because I'm running into conflict-after conflict :P
[05:13] <lifeless> jam: also, did you know that buffer() is a zero-copy tool ?
[05:13] <jam> lifeless: Is that because we started a new IN DEVELOPMENT?
[05:14] <jam> lifeless: yes, but buffer() doesn't make the code more obvious, etc.
[05:14] <lifeless> jam: As we work on memory, I expect we'll want more of it
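The zero-copy point lifeless raises can be sketched briefly. `buffer()` is the Python 2 spelling; the snippet below uses `memoryview`, its modern Python 3 equivalent, to show the difference between a copying slice and a zero-copy view (purely illustrative, not bzrlib code):

```python
# Sketch of the buffer() idea: buffer() is Python 2; memoryview is the
# Python 3 equivalent used here.

data = b"x" * (1024 * 1024)  # a 1 MiB byte string

# Ordinary slicing copies the bytes into a new object.
copy_slice = data[10:20]

# A memoryview slice references the original buffer without copying.
view = memoryview(data)
zero_copy_slice = view[10:20]

# Same contents, but the view shares storage with `data`;
# converting it back to bytes is what finally copies.
assert copy_slice == bytes(zero_copy_slice)
assert view.obj is data
```

The trade-off jam points at is real: the view keeps the whole underlying buffer alive, and the code is less obvious than a plain slice, so it pays off mainly in memory-sensitive paths.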
[05:17] <markh> I'm not very familiar with debugging network traces.  The last packet I see before the timeout is from launchpad and apparently carrying http payload.  The first I see after the timeout is also from launchpad, with the TCP "F" flag set, which apparently means "end of data".
[05:18] <markh> I'm using a windows network tool and I'm not sure how to export the data in any meaningful way
[05:18] <fullermd> Well, F would be FIN, which means it's shutting down the connection in an orderly fashion ("Done", rather than RST which means more like "broken, WTF?!?")
[05:19] <fullermd> You don't see any packets from you in the middle, though?
[05:19] <markh> right
[05:19] <markh> nope - I'm filtering by "source=launchpads ip or dest == launchpads ip"
[05:19] <fullermd> I don't see how it could FIN unless it had at least seen your ACK of that last packet...
[05:19] <lifeless> does the sequence number match ?
[05:20] <lifeless> fullermd: I think you can FIN on shutdown() regardless of ack-state
[05:20] <lifeless> so
[05:20]  * fullermd reached for Stevens.
[05:20] <lifeless> I think we should check what the apache front-end has its timeout set to
[05:20]  * lifeless bets 15 minutes
[05:21] <lifeless> spm: ping
[05:21] <spm> lifeless: pong
[05:21] <fullermd> Assuming your ACK did get through, then yeah, it sounds like HTTP keepalive.  LP sent all its data, and considers that you've gotten it, and after 15 minutes of no activity, decides you're gone and closes the session.
[05:21] <lifeless> spm: whats the persistent connection timeout configured on bazaar.launchpad.net
[05:21] <fullermd> 15 minute keepalive timeout sounds insanely long, though.
[05:21] <lifeless> fullermd: not really
[05:21] <markh> what pastebin do you guys use?
[05:22] <lifeless> markh: any that works
[05:23] <markh> http://pastebin.com/m6aed84b0 - but no column headers :(  2nd col is "packet number", 3rd is "time offset", "eden" is my pc
[05:23] <lifeless> spm: we have a situation where we are seeing some http data sent to a bzr session
[05:23] <lifeless> spm: then nothing for 15 minutes
[05:23] <lifeless> spm: then a FIN
[05:23] <spm> lifeless: hmm. not good.
[05:23] <lifeless> spm: working theory is that packets are going AWOL
[05:24] <lifeless> spm: and then apache is timing out the socket gracefully
[05:24] <lifeless> spm: finding out the configured timeout on b.l.n will help
[05:24] <spm> lifeless: sounds reasonable...
[05:25] <spm> lifeless: sure. just getting...
[05:25] <lifeless> if its different we can look at other things
[05:25] <lifeless> if its not configured, it will be the default
[05:26] <spm> lifeless: looks to be default
[05:26] <lifeless> spm: thanks
[05:27] <lifeless> Default:KeepAliveTimeout 15
[05:27] <lifeless> Syntax:KeepAliveTimeout seconds
[05:27] <fullermd> It's 5 here.  But either way, it's a far cry from 15 minutes.
[05:27]  * lamont mutters about &%)^%^*_ tar.gz packaging of bzr snapshots, again
[05:27] <markh> KeepAlive is different
[05:27] <markh> isn't it?
[05:27] <lamont> 1) orig.tar.gz without debian/; 2) drop debian/.bzr from the source package
[05:27] <lifeless> KeepAlive is boolean
[05:27] <markh> a timeout though
[05:28] <mwhudson> lamont: i'm sure you should be asleep
[05:28] <lifeless> lamont: bzr itself?
[05:28] <markh> if it has the connection open, that is how long before it will close it even if the client doesnt IIUC
[05:28] <lifeless> lamont: you have confused me
[05:28] <fullermd> markh: No, KeepAlive is just a flag for whether or not persistent connections are allowed.
[05:28] <lamont> lifeless: see launchpad.net/~ppa-bzr-beta/
[05:28] <lamont> +archive
[05:28] <markh> yes -
[05:28] <fullermd> markh: KeepAliveTimeout is the time after which it closes even if the client doesn't.
[05:28] <markh> and if it is used and the connection remains open, the server has a timeout
[05:28] <lifeless> lamont: bzr upstream does not have debian in it
[05:29] <lifeless> lamont: but the packaging rules are a bzr branch
[05:29] <lamont> the package in the ppa doesn't _HAVE_ an upstream tarball
[05:29] <markh> fullermd: yeah, I think that is what I'm saying :)
[05:29] <markh> which is a different timeout than we are looking for?
[05:29] <lamont> lifeless: which, specifically, is my complaint
[05:29] <lifeless> lamont: uploaded as a native package?
[05:29] <lamont> yes
[05:29] <fullermd> (I don't think there IS a timeout for max length of a persistent session, short of MaxKeepAliveRequests * (KeepAliveTimeout * .9999999))
[05:29] <lamont> OTOH, at least 1.6rc2 has dapper source for me :-)
[05:29] <lifeless> lamont: file a bug
[05:29] <lifeless> lamont: the folk doing these uploads are not DD's
[05:30] <lamont> yeah
[05:30] <lamont> I'll do it once I'm awake again, as mwhudson so aptly points out
[05:30] <lamont> and on that note, bed.
[05:31] <lifeless> still
[05:31] <lifeless> 1/60th of the time out is still damn suspicious
[05:32] <bvk> hi, how do i make a minor edit in an already committed revision, without another commit?
[05:33] <AfC> bvk: uncommit
[05:34] <fullermd> Anyway.  I'm pretty certain a FIN won't be set if there's outstanding un-ACK'd data, no matter what.  shutdown(2) will send the queued data and wait for it to be ACK'd before doing its half-close.
[05:35] <bvk> AfC: thanks, i will try it out
[05:35] <AfC> bvk: (followed by a new commit, of course)
[05:36] <AfC> bvk: (assuming, a) that you are trying to fix the most recent revision, and b) that you haven't sent it anywhere public)
[05:38] <bvk> AfC: it seems, i cannot go down more than a recent commit and make a fix :-(
[05:39] <fullermd> Barring really heroic and unportable measures, I don't think there's any way for userland to 'take back' data sent with send(2); once it goes down there, TCP will reset the connection without any more intervention if it can't get through...
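The FIN behaviour being debated can be demonstrated over loopback. A minimal sketch, not related to bzr's actual transport code: `shutdown(SHUT_WR)` performs the orderly half-close (TCP FIN) after any queued data, and the peer observes it as `recv()` returning an empty byte string once the data has been delivered:

```python
import socket
import threading

# Orderly close over loopback: shutdown() sends queued data, then FIN.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

def server():
    conn, _ = listener.accept()
    conn.sendall(b"HTTP payload")
    conn.shutdown(socket.SHUT_WR)   # half-close: data flushed, then FIN
    conn.close()

t = threading.Thread(target=server)
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
chunks = []
while True:
    chunk = client.recv(4096)
    if not chunk:                   # empty read == peer's FIN arrived
        break
    chunks.append(chunk)
t.join()
client.close()
listener.close()

assert b"".join(chunks) == b"HTTP payload"
```

An abortive close (RST) looks different on the client side: instead of a clean empty read, `recv()` raises `ConnectionResetError`, which matches fullermd's "broken, WTF?!?" description.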
[05:40] <spm> markh: just been looking at that network trace pastebin. those first two packets (4188/89) appear to be identical? Which seems a tad broken.
[05:40] <bvk> AfC: say, i have revisions 1,2 and want to make a minor edit in 1 (but not in 2) revision 2 also needs to be uncommitted
[05:41] <markh> shall I go back a few packets?
[05:41] <AfC> bvk: yes, which is why you should just be making a fix and committing revision 3.
[05:42] <bvk> AfC: is there anything like mercurial patch queues? where i can do hg qpop; hg qpop; edit; hg qrefresh; hg qpush; hg qpush ?
[05:42] <spm> markh: yeah... something just seems really odd... I'd expect to see more "stuff" happening. retries whatever.
[05:42] <fullermd> markh: If you can use something like Wireshark, you can do things like show a trace of the HTTP session, which could be helpful in seeing if it really is "all there".
[05:42] <markh> those 2 packets look identical here too
[05:44] <fullermd> Mmm.  That trace looks all sorts of messy...
[05:44] <mwhudson> bvk: there is the 'loom' plugin
[05:44] <markh> http://pastebin.com/m78da686f has from the GET request
[05:44] <markh> I'll look into wireshark...
[05:45] <fullermd> Look at those 4th and 5th lines again.
[05:45] <fullermd> First, the 4th line is launchpad ACK'ing your packet in the 5th line.  So the order is messed up.
[05:45] <fullermd> And secondly, those are from a different session than the first 2 lines (can't tell about the 3rd); the local port is different.
[05:46] <bvk> mwhudson: i tried loom plugin, but i couldn't get that how to do it right :-( loom doesn't have anything equivalent to hg qrefresh :(
[05:47] <bvk> mwhudson: i still need to make a commit for minor fixes and it is recorded as another revision
[05:47] <mwhudson> i don't know what qrefresh does, so can't really answer that :)
[05:48] <lifeless> mwhudson: its what commit in a loom thread does
[05:48] <lifeless> bvk: in loom you just do a commit on that thread
[05:48] <bvk> mwhudson: qrefresh updates the current working patch (in a queue of patches). one patch acts like one revision, so multiple qrefreshes on a single patch will not result in multiple revisions
[05:49] <markh> I think only one bzr was running in that trace - I was being careful to wait while the 15 minutes expires and not do anything else with bzr
[05:49] <markh> I'll see how I go with wireshark
[05:49] <markh> k
[05:50] <lifeless> bvk: looms map revisions into a stack rather than replacing revisions
[05:50] <lifeless> bvk: the stack has the delta each revision needs when submitted upstream
[05:51] <fullermd> But yeah, that FIN 15 minutes and 10 seconds later is from a different TCP session (local port 60013 instead of 60012), so it's no relation to the other packets.
[05:52] <fullermd> That first connection apparently disappears totally after that line 7 packet.  Which is absurd; you're running locally, you should be able to see your ACK of it no matter what, even if the other end doesn't.
[05:52] <bvk> lifeless: um...if i go down-thread, do some edits and did up-thread, will my top thread gets the edits merged?
[05:52] <lifeless> so, we were looking at the wrong machine in the DC
[05:52] <lifeless> bvk: yes, as a pending merge like normal bzr merges
[05:53] <lifeless> bvk: diff -r thread: at any point shows you the overall pending diff
[05:53] <bvk> lifeless: so, i have to do one more commit in all up-threads from the point of edits?
[05:54] <spm> fullermd: agreed - good catch on that. which also begs the "what is actually open" question...
[05:54] <lifeless> bvk: yes, there are some plans to streamline this
[05:54] <bvk> lifeless: ok
[05:54] <fullermd> (of course, it's an assumption that that HTTP data packet is part of the 60012 connection, since it doesn't show ports or sequence numbers or whatever on those packets, but I think it's a reasonably good one)
[05:55] <spm> markh: might be worthwhile seeing how many, if any?, connections you have open to 91.189.90.161; netstat -an | fgrep 91.189.90.161
[05:56] <bvk> lifeless: one more question, when i do a merge from up-thread, all its revisions are automatically logged in the commit message, can i avoid that with a simple merge note?
[05:56] <markh> none apparently :)  I'm installing wireshark now
[05:56] <lifeless> bvk: they are not logged in the commit message for me; why do you say they are ?
[05:58] <bvk> lifeless: i saw all revisions logged with some indentation :-(
[05:58] <spm> fullermd: yeah, the timing is too tight. but still ... coincidence is possible. :-)
[05:58] <lifeless> bvk: thats display
[05:58] <lifeless> bvk: try e.g. bzr log --short
[05:59] <lifeless> copying commit messages around like that would be horrible duplication
[06:00] <markh> I'm pretty clueless with this :( fullermd - any clues about the syntax for the wireshark filter I'd need?
[06:00] <markh> or spm of course :)
[06:00] <fullermd> "host 91.189.90.161 and port 80" should do it
[06:01] <fullermd> (should show both directions)
[06:01] <spm> markh: just grab the lot - post filtering with wireshark is a doddle. or ^^ :-)
[06:01] <markh> it says thats a syntax error
[06:01]  * markh looks at help
[06:03] <bvk> lifeless: ok, short is not displaying them :-)
[06:04] <spm> markh: are you applying that filter as you capture, or post capture? the eg fullermd gave is "as you capture"? The post capture filters are very different syntax.
[06:05] <markh> um - in the toolbar "filter" box :) - haven't started capturing yet
[06:05] <spm> Ah! :-)
[06:05] <spm> No not that one. :-)
[06:05] <spm> "start a new live capture"
[06:06] <spm> ... or "capture options"; should then see a "capture filter" - put the filter in there.
[06:09] <fullermd> The latter is what I always do (make sure you choose the right interface)
[06:10] <markh> ok, running.  Now to retry a few times until it fails...
[06:18] <markh> a few "dupe ack" messages are flying by, but its still working...
[06:19] <markh> ahh - here we go :)  bb in 15 :)
[06:22] <spm> markh: :-) Once you close the sniffing; go looking for the http GET request that timed out - it should be fairly obvious; select that line; right click; "Follow TCP Stream" - will filter packets to only that connection.
[06:22] <markh> nice :)
[06:23] <fullermd> I'd save it right off too; that way you have a copy around to work with.
[06:43] <markh> fullermd/spm: so I've got a capture - but struggling to get a similar text format out for the selected packets.  What's the best way for you to see the data?
[06:43] <markh> upload a .pcap somewhere?
[06:44] <fullermd> markh: Well, you can try filtering down to just that session, then saving that out to a pcap file, and seeing how big that is.
[06:44] <spm> markh: sure - works for me
[06:44] <lifeless> bbs
[06:44] <fullermd> If it's not huge, we can just grab that and poke at it.
[06:44] <fullermd> (or if the whole capture isn't too huge, that would work too, with less effort)
[06:48] <markh> fullermd/spm: http://starship.python.net/crew/mhammond/bzr-hang-2.zip (151,340 bytes) - it should be from the most recent GET (a few seconds and many packets before the hang), and a dozen or so packets after.
[06:57] <fullermd> OK, that's a lot of dupe ACK's in a bunch there.  Couple times.  Weird.
[06:57]  * markh doesn't know what happened to his isp then.
[06:59] <fullermd> Lot of partial packets there; your MTU is higher than somewhere along the path, which is causing a lot of fragmentation.
[06:59] <fullermd> That shouldn't break anything, but certainly exposes more edges.  And probably messes with your performance a bit.
[06:59] <markh> quite possibly my very old dsl modem
[07:00] <fullermd> If we go to packet 687, which is the last one before the pause, right-click and 'Follow TCP Stream' gives a text dump of the session.
[07:00] <fullermd> There's two things to note.
[07:00] <fullermd> The first, is that looking at the very end, it's obviously in the middle of a line, which confirms that you're not actually getting all the expected data.
[07:01] <fullermd> The second, is that there IS a proxy; see the HTTP response header:
[07:01] <fullermd> Via: 1.0 proxy3.mel.dft.com.au:80 (squid/2.6.STABLE18)
[07:02] <markh> yeah, my isp I believe
[07:03] <fullermd> So, that's suspicious.  HTTP breakage unrelated to proxies happens, but not near as often as related.
[07:04] <fullermd> lifeless may know more as to whether there's something particular about that version that might be tripping us up.
[07:05] <lifeless> australian ISP's as a rule have an intercepting squid
[07:05] <lifeless> squid had a known, serious bug with range requests
[07:05] <markh> heh - well there you go :)
[07:05] <markh> in that version?
[07:07] <lifeless> yes
[07:07] <lifeless> .19 fixes it
[07:07] <lifeless> but they should run 2.7 or 3 anyhow
[07:07] <lifeless> http://www.squid-cache.org/Versions/v2/2.6/changesets/11996.patch
[07:07] <markh> bugger - here goes another support request that will end up in the same bucket my request for them to upgrade their SpamAssassin did :(
[07:08] <lifeless> also
[07:08] <lifeless> http://www.squid-cache.org/bugs/show_bug.cgi?id=2329
[07:08] <lifeless> which is fixed in .20
[07:09] <fullermd> (actually, I could be full of crap on my first point above; I guess the range could just end right in the middle of that line)
[07:09] <lifeless> fullermd: single ranges can, multi ranges will be multipart wrapped
[07:09] <Peng_> Australian ISPs run proxies? Ew.
[07:10] <fullermd> It's multi-part wrapped.  2 ranges.
[07:10] <lifeless> fullermd: so cutting off part way == incomplete
[07:10] <fullermd> I forgot that it was ranged, so I assumed it would have a full index, which wouldn't fail in middle of the line.
[07:10] <fullermd> The range is a little weird, though...
[07:10] <fullermd> Range: bytes=33100-123726,124238-190665
[07:10] <lifeless> thats GraphIndex
[07:10] <fullermd> We really care about saving 500 bytes in the middle, when we're pulling >150k?   :p
[07:10] <lifeless> no
[07:11] <lifeless> we want a few hundred bytes from 150000+
[07:11] <lifeless> and its remote so we expand it to 64K
[07:11] <lifeless> we also want some bytes from a couple of lower spots, apparently
[07:12] <fullermd> Yeah, it just seems weird to me that they expand to those blocks, but don't just collapse it into a single range.
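The collapse-or-not decision fullermd finds weird can be illustrated with a small range-coalescing sketch. This is a hypothetical helper, not bzr's actual GraphIndex logic: it merges adjacent byte ranges whose gap is within a threshold, and the two ranges from the trace sit exactly 511 bytes apart, so the threshold determines whether one request or two is issued:

```python
def coalesce_ranges(ranges, max_gap):
    """Merge (start, end) byte ranges whose separating gap is <= max_gap.

    Illustrative only; it just shows the single-range vs multi-range
    trade-off in the Range header discussed above.
    """
    merged = [list(ranges[0])]
    for start, end in ranges[1:]:
        if start - merged[-1][1] - 1 <= max_gap:
            merged[-1][1] = end          # close enough: extend previous range
        else:
            merged.append([start, end])  # too far: start a new range
    return [tuple(r) for r in merged]

# The two ranges from the trace: Range: bytes=33100-123726,124238-190665
# Gap between them: 124238 - 123726 - 1 = 511 bytes.
ranges = [(33100, 123726), (124238, 190665)]
assert coalesce_ranges(ranges, max_gap=510) == ranges               # kept separate
assert coalesce_ranges(ranges, max_gap=511) == [(33100, 190665)]    # collapsed
```

Collapsing saves a MIME multipart boundary and a little header overhead at the cost of transferring the 511 unwanted bytes in the middle; keeping them separate is what triggers the multipart response the squid bug then mangles.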
[07:12] <markh> that squid bug appears to only happen for cached content?
[07:12]  * fullermd does a quickie test of the range
[07:13] <lifeless> markh: the second bug yes
[07:13] <lifeless> markh: but note that the first bug will cause the second to show
[07:18] <fullermd> Oh, I see what you mean.  I missed that it was lacking the MIME boundary marker.
[07:18] <fullermd> So I was right, for the wrong reason   8-}
[07:20] <lifeless> markh: I'm not convinced squid is the root cause though
[07:21] <lifeless> markh: you are seeing anomalous behaviour
[07:21] <fullermd> Yeah.  There's enough weirdness in there to go around.
[07:22] <xshelf> a quick question: is bzr-fastimport format identical to git-fastimport format?
[07:22] <fullermd> All the dupe ACK's, in several clusters through the trace, are locally generated, rapid-fire.
[07:22] <lifeless> xshelf: bzr-fastimport reads git-fastexport if thats what you're asking
[07:22] <fullermd> That may or may not be related to the near-total fragmentation.
[07:23] <fullermd> I think there's some SACKtion going on, but I don't know SACK well enough to say much about it without a lot more digging than I have time for tonite   :|
[07:23] <xshelf> lifeless: i was looking at reusing git-p4 for bzr
[07:24] <lifeless> xshelf: yes, I believe someone is working on that
[07:24] <lifeless> there is stuff on the list about this
[07:24] <xshelf> lifeless: i read it in the list, let me followup with that person
[07:46] <jml> lifeless: hi.
[07:46] <Peng_> jml: bzr+http? :D
[07:46] <jml> Peng_: working on it!
[07:47] <Peng_> Cool.
[07:56] <lifeless> jml: hi
[07:56] <jml> lifeless: so, tomorrow.
[07:56] <jml> lifeless: we're missing a *time*
[07:57] <lifeless> and a place
[07:57] <lifeless> I've been trying to grab Lynne, no luck
[07:57] <jml> lifeless: Kensington is the place
[07:57] <lifeless> details details details; how do I get there?
[08:03] <lifeless> markh: is there mtrr for windows?
[08:23] <markh> no idea!
[08:24] <markh> I see rio.py has:      if isinstance(value, str):\n      value = unicode(value) - that is always going to be suspect isn't it?  If the string is utf8 or anything else that's non-ascii, we are doomed
[08:25] <markh> some russian dude is hitting it in 1.5, and it looks like he still might in 1.6
[08:27] <lifeless> http://beerpla.net/2008/05/12/a-better-diff-or-what-to-do-when-gnu-diff-runs-out-of-memory-diff-memory-exhausted/
[08:28] <lifeless> agreed
[08:28] <lifeless> is there a bug?
[08:31] <markh> https://bugs.launchpad.net/bzr/+bug/256550
[08:31] <markh> yep
[08:31] <markh> oops :)
[08:31]  * markh answered the bot :)
[08:32] <luks> the problem is not rio, the problem is using non-ascii bytestrings for hostnames
[08:32] <markh> one path I can see is _auto_user_id() calls socket.gethostname(), which returns a string and may be non-ascii in that case
[08:32] <markh> yes :)
[08:32] <luks> personally I'd just decode it using the user's locale
[08:33] <markh> on windows the encoding would be 'mbcs' (which is also filesystemencoding) - but yeah
[08:33] <markh> hard path to test - monkeypatching maybe?
[08:33] <luks> no, that's the filesystem encoding
[08:33] <luks> it would be cp-something
[08:34] <markh> windows is likely to return a string that went via WideCharToMBCS (or however it is spelt), in which case "mbcs" would be the appropriate encoding to use IIUC
[08:35] <markh> the same value directly from the api via unicode would also be an option
[08:37] <markh> win32api.GetComputerNameEx(0) returns unicode :)
[08:39] <markh> but - rio is still potentially broken in the future.  ISTM it should probably throw an exception for a string
[08:39] <markh> as even a utf8 string would blow it up today
[08:39] <luks> well, it does, UnicodeDecodeError :)
[08:39] <markh> :) but only when it actually contains non-ascii
[08:40] <markh> it should raise an exception *every* time it gets a string, as it means one day a Unicode error will happen that is hard to reproduce :)
[08:41] <luks> yeah, I suspect that would break a lot of bzrlib code
[08:41] <markh> well, it could be argued that code is already broken for some users
[08:41] <luks> true
[08:52] <markh> beer-oclock!
[09:06] <lifeless> rio probably wants to be asserting on bytestrings rather than decoding
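The two policies debated here (raise on bytestrings vs decode with the user's locale) can be sketched in Python 3 terms. The helper name and signature are hypothetical, not bzrlib's real rio API; the point is that rejecting bytes makes the failure deterministic, whereas silent ASCII decoding only fails once a non-ASCII hostname shows up:

```python
import locale

def check_text(value, decode_fallback=False):
    """Illustrative rio-style value policy, not bzrlib's actual code.

    Default (lifeless's suggestion): refuse bytestrings outright, so the
    latent decode error surfaces every time, not just for non-ASCII data.
    Fallback (luks's suggestion): decode bytes using the user's locale.
    """
    if isinstance(value, bytes):
        if not decode_fallback:
            raise TypeError("rio values must be text, got bytes: %r" % (value,))
        return value.decode(locale.getpreferredencoding(False))
    return value

assert check_text("hostname") == "hostname"
try:
    check_text(b"hostname")          # rejected even though it is pure ASCII
except TypeError:
    pass
else:
    raise AssertionError("bytes should be rejected by default")
```

This matches markh's argument: a `str`/`bytes` value that happens to be ASCII today is a hard-to-reproduce `UnicodeDecodeError` waiting for the first non-ASCII hostname, so failing on every bytestring is the safer contract.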
[09:21] <uws> Hmm
[09:21] <uws> Will bzr have something like this:  http://www.gnome.org/~federico/news-2008-08.html#12  and http://treitter.livejournal.com/7769.html   ?
[09:21] <uws> interactive rebasing/stacking patches
[09:23]  * AfC writes uws a shell script
[09:23] <AfC> :)
[09:25] <uws> AfC: Eh, how would that work?
[09:25] <uws> AfC: This is about merging multiple commits into one
[09:25] <uws> so that the history is cleaner when merging into mainline
[09:25] <AfC> [Ok, switching back to reality: I know a goodly number of people around here respect (and in some cases, including my own, absolutely worship) the UI that was presented by Darcs. Nothing to do with patch commutation vs tree snapshot issues; but the console UI was bloody brilliant in its interactiveness and consistency]
[09:27] <AfC> uws: well, given that rebase is ultimately just a wrapper around `bzr merge -r X..Y ../theirs` (for X not in mine)
[09:27] <AfC> uws: it's eminently scriptable
[09:27] <lifeless> uws: yes
[09:28] <lifeless> uws: there is a bug open on rebase -i
[09:28] <AfC> (ok, so rebase has nice logic for moving forward and backwards / continuing as conflicts arise)
[09:28] <lifeless> uws: it has some thoughts on how to make it happen
[09:28] <uws> lifeless: will that also make it possible to "collapse" multiple commits into one? E.g. a real patch + 2 subsequent typo fixes
[09:28] <lifeless> uws: sure
[09:29] <lifeless> uws: I mean, its basically uncommit -r -3; commit
[09:29] <lifeless> uws: and then rebase replay after that
[09:30] <lifeless> uws: note that there is a way to do all that rebase does in terms of presenting things without sacrificing history; for the case where you need to evolve a patch-branch rather than simply fake up work done
[09:31] <uws> lifeless: ...which is?
[09:32]  * AfC guesses Robert's loom concept
[09:33] <uws> Something that helps merging stuff into a branch in chunks would also do the job, right?
[09:33] <uws> so that if history is like this:
[09:34] <uws> add-feature-1, fix-typo, fix-typo
[09:34] <uws> add-feature-2, fix-typo, fix-another-typo
[09:34] <uws> i.e. 6 commits
[09:34] <uws> that you can create a branch (preferably in place) that is like this:
[09:34] <uws> add-feature-1 (add-feature-1, fix-typo, fix-typo),  add-feature-2 (add-feature-2, fix-typo, fix-another-typo)
[09:35] <uws> where the parenthesized commits are merged revisions
[09:35] <AfC> uws: this is going to sound silly, but why not just take a branch at point 0
[09:35] <uws> something like "bzr collapse -r 1:3" which will give you an editor with all commit messages
[09:36] <uws> so you can assemble a new commit message from these
[09:36] <AfC> uws: then merge from feature-1+fix-typo+fix-typo,
[09:36] <AfC> uws: then merge from feature-2+fix-another-typo
[09:36] <uws> AfC: Yeah, that's the same net effect.
[09:36] <AfC> you'll end up with two left hand side revisions (merges, as it happens) that you can write up to your heart's content.
[09:36] <uws> but it's cumbersome
[09:36] <AfC> More cumbersome than screwing around with rebase and history erasure?
[09:37] <uws> No, what I described a few lines back doesn't change history, does it?
[09:37] <uws> it just "groups" a few revisions into a merge, and then does the same again
[09:37] <uws> so that a "bzr log" will list only the 2 groups at the top level
[09:37] <AfC> [as an aside, I have a slightly different problem, which is that the left hand edge merges are NOT what are significant in our project, and I'm getting rather tired of writing the same detailed log message again and again as features get merged up]
[09:38] <AfC> uws: {shrug}
[09:38] <AfC> uws: that is exactly how Bazaar operates today
[09:38] <AfC> uws: so I guess I'm a bit vague on which part you consider cumbersome
[09:38] <uws> AfC: Well, I branch a project
[09:38] <AfC> (one possible point that's missing: you can do a merge even if there is no divergence)
[09:38] <AfC> (instead of pulling)
[09:39] <uws> AfC: then I start hacking
[09:39] <uws> commit new feature, fix a syntax error
[09:39] <AfC> yup
[09:39] <uws> but when my work goes back into mainline I don't want the typo fix to show up so prominently
[09:40] <uws> I want to present my branch as a few self-contained revisions (which may be merge revisions containing also the tiny typo fixes)
[09:40] <uws> right now it seems I can only do this with a 2nd personal "helper" branch in which I merge selectively
[09:40] <AfC> uws: well, you either cherrypick, thereby losing that history completely, or you merge it, and hope that most people don't pay much attention to non-left-hand-edge revisions. (`bzr log` is so biased; `bzr viz` is not)
[09:41] <uws> I don't like cherrypicking that much (for the reasons you stated)
[09:42] <AfC> uws: is the source branch a [bzr-svn] checkout in this case?
[09:42] <AfC> if it is not,
[09:42] <AfC> then it's already capable of being the staging area for you to create the merges that are "cleaner"
[09:42] <AfC> if it is, then really you need a temporary 2nd branch to do the work in before bringing it back to the checkout
[09:43]  * AfC is ignoring the --local capabilities, which he doesn't know anything about unfortunately
[09:43] <uws> AfC: Yes it is
[09:43] <AfC> I thought so
[09:43] <AfC> In that case, look at it this way
[09:43] <uws> AfC: --local is just committing without having stuff in the bound branch
[09:44] <AfC> the bzr-svn branch [checkout == slaved to upstream] really wants to be kept pristine as a copy of upstream
[09:44] <uws> next non-local commit will just push the whole bunch on top of the bound branch
[09:44] <AfC> well there you go
[09:44] <uws> AfC: I have a trunk branch, which I don't hack on (it doesn't have a working tree)
[09:44] <uws> then I have my own branch
[09:44] <uws> I keep trunk/ up to date
[09:45] <uws> and then merge trunk into my own branch
[09:45] <AfC> uws: anyway, if this is an upstream like, say, GTK where commit permission is only grudgingly granted after long discussions, then you'll need not just somewhere else to stage your patches, but a whole bunch of somewhere elses to stage each feature.
[09:45] <uws> AfC: in this particular case it's a read only bzr-svn checkout/branch
[09:45] <AfC> uws: yup, it all sounds sane
[09:46] <uws> branched over http://
[09:46] <AfC> uws: so I guess I'm confused as to why you feel uncomfortable doing the merges of N revisions at a time into your own [writable] copy of trunk vs the place where you are working
[09:47] <AfC> uws: let me try a different tack on this:
[09:47] <uws> AfC: My trunk/ access is RO
[09:47] <AfC> uws: a place to do merges is a good idea
[09:48] <AfC> [I'm talking here not ex-cathedra as a Bazaar hacker, because I am not one; but having been leaning on Bazaar fairly heavily for almost 2 years now (as you know) some patterns have seemed to serve me well]
[09:49] <AfC> uws: (if I might suggest, the three branches could be named
[09:50] <AfC> uws: 'upstream' (that's the bzr-svn RO checkout), 'trunk' (RW bzr branch which you keep in sync by pulling regularly from 'upstream') and 'working' (or otherwise feature-named branches, RW bzr)
[09:50] <AfC> uws: and you'd do your merge work in 'trunk')
[09:50] <AfC> {shrug} something like that
[09:50] <uws> AfC: Yeah, I'm not a bzr hacker either. Just a regular user. I shove bzr down my colleagues' throats as well
[09:51] <uws> so I'd better know what/how because they ask me all the time ;)
[09:51] <uws> AfC: But once I merged my feature into 'trunk'
[09:51] <uws> I won't be able to 'pull' upstream anymore. Just merge
[09:52] <uws> Eventually the goal is that my local patches end up upstream after review/changes
[09:52] <AfC> uws: (or, 'trunk' [bzr-svn], 'working', and feature-branches^N)
[09:52] <AfC> uws: realistically, aren't you maybe overthinking this a bit? You're going to be doing
[09:53] <AfC> $ bzr diff -r ancestor:../trunk
[09:53] <AfC> all the time to extract diffs to show people
[09:53] <AfC> and it's not until a patch is accepted that you can go and merge to a RW checkout of trunk - and that merge sums up the other stuff, no?
[09:55]  * AfC thinks uws is going to need a whole plethora of branches each containing a feature. They will be individually diffed against 'trunk' for review, but also will be fairly constantly merged into 'working' which is the branch with a WT which is where you are actually hacking.
[09:56] <uws> AfC: All needing lots of disk space :(
[09:56] <gour> uws: shared repo?
[09:56] <AfC> I have found myself doing this; except that since I'm doing the creative work in 'working', creating little branches for individual featurettes which may be mergeable means manually copying the changes over to the little branches, creating revisions there, and then merging back to 'working'. Messy, but at least I'm not cherry picking
[09:57] <uws> gour: of course, but still lots of disk space
[09:57] <luks> use a treeless shared repository and a single checkout
[09:57] <AfC> uws: the branches storing the individual features as they grow don't need Working Trees
[09:57] <uws> hmmm. idea: have branches without working trees, and have one checkout which I can "bzr switch" to the feature I'm working on
[09:57] <AfC> uws: branches themselves are negligible in size.
[09:58] <uws> AfC: But the working tree is HUGE in this case
[09:58] <AfC> uws: that is essentially how I work
[09:58] <uws> AfC: it's like ~600M
[09:58] <AfC> uws: that's fine. You only need one (maybe 2 if bzr-svn needs a Working Tree)
[09:58] <uws> and > 50k files
[09:58] <AfC> uws: yes, that's fine.
[09:58] <uws> AfC: (it doesn't)
[09:58] <AfC> uws: bzr switch should handle it no problem
[09:58] <uws> oh, for reference: we're talking about Webkit here btw
[09:59] <AfC> uws: I suspect that reality is that you're going to need 2 Working Trees, then - one for hacking in, and one for doing patch prep & merging
[10:00] <AfC> the latter would be the thing you're switching around.
[10:00] <AfC> Well, I've said enough. I look forward to hearing how it works out for you.
[10:01] <AfC> [600 MB? What on earth have they got in there?]
[10:01] <uws> AfC: millions of tests
[10:02] <uws> AfC: Like >300MB of them
[10:04] <lifeless> uws: yes, looms are the [partial] implementation of 'edit and manage with history and collaboration'
[10:05] <gour> what's missing?
[10:05] <robtaylor> lifeless: not sure what you mean, GObject code doesn't get that heavy on memory IO.
[10:06] <lifeless> gour: polish
[10:06] <lifeless> gour: helper routines
[10:06] <robtaylor> lifeless: the objects are allocated by a slice allocator
[10:06] <lifeless> gour: and, in the core of bzr, seriously cherrypicking [equivalent to darcs]
[10:06] <lifeless> robtaylor: so are python objects, but every ref and deref is a memory write
[10:06] <lifeless> robtaylor: its intrinsic to ref counted systems
[10:07] <robtaylor> lifeless: uhhu
[10:07] <robtaylor> lifeless: gstreamer has refcounted objects..
[10:07] <robtaylor> lifeless: I don't think it's something to worry about
[10:08] <robtaylor> lifeless: i thought python keeps a live count for objects anyhow?
[10:08] <gour> lifeless: will something of the above happen in 1.7 time frame?
[10:09] <lifeless> gour: if someone sends in patches :)
[10:10] <lifeless> robtaylor: I will bet you significant amounts that gstreamer has refcounted external objects, not every internal detail
[10:10] <robtaylor> lifeless: GstMiniObject is used for everything
[10:10] <robtaylor> lifeless: refcounting happens all along the pipeline
[10:11] <lifeless> yes, but thats still very coarse AIUI
[10:11] <lifeless> the objects I am working with are 10's of bytes long
[10:11] <lifeless> with millions in even only moderately large trees
[10:11] <robtaylor> lifeless: oh, something like that you just do with structs and slices
[10:12] <robtaylor> lifeless: you only need refcounting if you're dealing with complex lifetimes
[10:12] <lifeless> right, this is kindof my point :)
[10:12] <robtaylor> ok :)
[10:12] <AfC> How come you're so interested in GObject this week?
[10:13] <robtaylor> lifeless: ooi do you always know the size of the objects?
[10:16] <lifeless> robtaylor: for some yes
[10:16] <lifeless> robtaylor: something tasteful will arise, I'm sure
[10:16] <lifeless> AfC: I think a better question, is, what is robert writing
[10:17] <AfC> Which Robert :)
[10:17] <Jc2k> evil twisted things that should not be spoken about
[10:17] <lifeless> Yes
[10:17] <lifeless> AfC: Yes
[10:17] <Jc2k> ever
[10:17] <AfC> [note that I didn't direct the question in any particular direction :)]
[10:18] <AfC> That's ok. There's no way you have a monopoly on crazy ideas. I've got an R&D project going to examine what would happen if we let the Java VM manage GObject life cycles instead of letting GObject's Ref and ToggleRefs do it.
[10:19] <AfC> (so discussion about sizes, access patterns, and what the slab allocator is up to this month are of passing interest)
[10:19] <lifeless> ahha!
[10:20] <lifeless> so I'm writing another C extension
[10:20] <lifeless> somewhat larger than our current ones
[10:20] <lifeless> and I want to use low level C diagnostic tools
[10:20] <lifeless> so I want no python VM
[10:20] <AfC> lifeless: (that's what I vaguely suspected)
[10:22] <robtaylor> AfC: that crazy R+D project sounds pretty interesting =)
[10:22]  * AfC imagines Robert will have more luck than him :)
[10:22] <AfC> robtaylor: well, it's a couple things [and I am only discussing something so OT here insofar as it seems conceptually similar to Robert's experiments]
[10:23] <lifeless> well, I have antlr3 running now
[10:23] <AfC> robtaylor: we have the whole ToggleRef thing in place to manage the relationship between our Java Proxy objects and the underlying GObjects. But it gets complicated;
[10:24] <AfC> and it's not as effective as I would like, because there is too much lock-step going on between the two sides.
[10:24] <AfC> It *works* (there are zero leaks)
[10:24] <AfC> [ok, famous last words. We're pretty good, though]
[10:24] <lifeless> how does gintrospect fit into your bindings?
[10:26] <AfC> But I have encountered situations where not everything is cleaned up at once. The last Ref to a GObject is our ToggleRef; our Java object becomes only weakly reachable and so next GC, ta-da, we drop the ToggleRef and the GObject finalizes.
[10:26] <AfC> That's all good
[10:26] <AfC> The trouble comes when it's not something violent like a 'delete-event' (where 'violent' means code paths in GTK that go above and beyond in breaking references);
[10:27] <AfC> then it's only at that point that some of the child Widgets drop down to us owning the last ref.
[10:27] <AfC> which means that they won't get destroyed until the *next* GC cycle.
[10:27] <AfC> Bah!
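The cascading finalization AfC describes — a parent's death only then releasing the last reference to its children, pushing their cleanup to a later GC pass — has a rough analogue in any managed runtime. A hedged CPython sketch (Node, parent, and child are invented names, not java-gnome or GObject API; the finalization order shown relies on CPython's reference counting):

```python
import weakref

log = []

class Node:
    def __init__(self, name, child=None):
        self.name = name
        self.child = child      # strong ref: parent keeps child alive

child = Node("child")
parent = Node("parent", child)
weakref.finalize(parent, log.append, "parent finalized")
weakref.finalize(child, log.append, "child finalized")

del child    # the parent now holds the only reference to the child
del parent   # parent dies first; only then does the child's last ref drop
# The child is reclaimed strictly *after* its parent -- the knock-on
# effect that makes java-gnome wait for the next GC cycle.
assert log == ["parent finalized", "child finalized"]
```

In CPython the cascade resolves immediately thanks to eager refcounting; in a tracing collector like the JVM's, the second step genuinely waits for the next collection, which is exactly the lag being complained about.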
[10:28] <AfC> lifeless: (short answer, not yet)
[10:29] <AfC> lifeless: (longer term answer, our code generator is nicely abstracted, so the .defs data feeding it can be replaced with introspection data when it becomes available. But it needs to be fairly complete before we reach that point. When we're generating the C API for GNOME libraries off of that data, then I think we'll be set)
[10:29] <AfC> :)
[10:30] <AfC> robtaylor: so anyway, it occurred to me that this lockstep'edness was occurring because the Java VM doesn't have full information about the relationships. If it did, it could see the closed sets and pow them as a group
[10:31] <AfC> robtaylor: I rather expect that the cost of going across the JNI boundary to reach the Java VM's reference queues instead of using GObject's internal hash tables will be prohibitively expensive, but it's been an interesting exercise so far.
[10:32] <robtaylor> AfC: I'd be very interested in having a g_object_ref_with_association (object, owner)
[10:32] <robtaylor> AfC: i suspect the Vala people would too
[10:32] <strk> what's that command to show current revno ?
[10:32] <AfC> robtaylor: everyone who is proxying would, I imagine
[10:32] <strk> it's not printed in 'bzr help'
[10:32] <robtaylor> *nod*
[10:32] <AfC> strk: `bzr revno`
[10:33] <robtaylor> AfC: anyhw, the crack project we're working on in our free cycles is http://wizbit.org
[10:33] <strk> does 'revno' check the revision on the remote or the local tree?
[10:34] <AfC> robtaylor: I should call you sometime and chat with you about that. It fits in nicely with some areas I'm working in
[10:34] <robtaylor> AfC: feel free :)
[10:34] <AfC> strk: so, being pedantic, it will tell you about the "branch"
[10:34] <AfC> strk: if you do `bzr revno` it'll tell you about this Branch (assuming you're in a Branch)
[10:35] <strk> damn. I have two builds from two trees. both trees give same revno, but one works in a way, one doesn't
[10:35] <AfC> strk: if you do `bzr revno bzr://research.operationaldynamics.com/bzr/java-gnome/mainline/` it'll tell you about bzr://research.operationaldynamics.com/bzr/java-gnome/mainline/
[10:35] <AfC> strk: revnos are only meaningful per branch
[10:35] <lifeless> strk: because you can have multiple branches, revnos are only relevant to a branch
[10:36] <AfC> strk: what you probably need to inspect are the revids (though, in human readable terms, the last few log entries should tell you what's what)
[10:36] <lifeless> strk: if you want the uuid, 'bzr revision-info' is the right command to use. But I generally would use 'bzr missing' instead, because that tells me what commits are in each
[10:36] <AfC> strk: `bzr revision-info`
[10:36]  * AfC gets out of lifeless's way
[10:37] <strk> matches as well
[10:37] <strk> (revision-info)
[10:37] <strk> Branches are up to date.
[10:37] <AfC> strk: if you `cd ONE` and do `bzr missing --line /path/to/TWO` you should be told what's different
[10:38] <strk> still up to date
[10:38] <AfC> robtaylor: anyway, I'm trying to figure out if I can replace (override) the GObject Ref mechanism. That means hijacking g_object_ref and g_object_unref, and all the ways to do such a thing are nasty.
[10:39] <AfC> strk: silly question, but I assume `bzr diff` shows nothing in each one
[10:40] <lifeless> strk: you have two separate working trees ?
[10:40] <strk> right (no diff, two working trees)
[10:40] <AfC> robtaylor: hijacking GObject's memory allocation would be a second step. That's not just crack. That's ice.
[10:40] <AfC> And enough on that topic.
[10:41] <robtaylor> AfC: yeah, pain :/
[10:41] <strk> after 'bzr switch trunk', although revision-info didn't change, 'make' is doing something
[10:42] <strk> this is Bazaar (bzr) 1.6rc2
[10:42] <strk> is it supposed to touch files even if nothing changed ?
[10:44] <AfC> (it could)
[10:45] <lifeless> strk: bzr revision-info changes for me
[10:46] <lifeless> strk: no, it won't touch files if nothing has changed
[10:46] <lifeless> strk: it will touch files if they *happen* to have the same content but appear changed to bzr
[10:46] <strk> it seems a regression was introduced, so I was hoping to figure out which revno worked and which didn't... unfortunately I got the same revno back from the two branches... that's how it all started
[10:47] <strk> now it's hard to tell (we don't store revno in binary modules)
[10:47] <strk> and bzr viz stopped working :/
[10:48] <strk> would switching to a specific revno require online access or my local branch is enough ?
[10:50] <strk> bzr diff -r 9590 -r 9591 # didn't do what I expected, did it ?
[10:50] <strk> diffs between two revisions
[10:51] <strk> it seems it gave me diffs from 9590 to current
[10:52] <strk> bzr diff -r 9590..9591 # did it, it seems
[10:54] <lamby> 'lo jelmer .. Curious to why you uploaded bzr-gtk to experimental? :)
[10:55] <strk> I hope for me
[10:55] <strk> bzr was uploaded to experimental (I guess, as I was prompted for upgrade) but since I upgraded, bzr viz stopped working
[13:13]  * gour finds 'shelve' command useful...
[13:31] <pickscrape> shelve is awesome :)
[13:34] <gour> right. it keeps my history more 'sane', considering I am coming from the darcs world and the nice interactive 'darcs record'
[13:38] <Peng_> "record" would be even better than shelve though.
[13:39]  * gour agrees
[13:40]  * pickscrape doesn't know anything about record
[13:42] <gour> pickscrape: http://www.darcs.net/manual/node8.html#SECTION00861000000000000000
[13:43] <pickscrape> Ah, so it's interactive commit?
[13:43] <gour> yep
[13:44] <gour> ..if you want
[13:44] <pickscrape> That reminds me how I got by without the shelf when using svk: it has interactive commit too.
[13:44] <gour> it would be nice to have it in bzr as well
[13:48] <Peng_> The first interactive commit-like thing I used was bzr's shelve. When I found hg only had a record plugin, I was disappointed, and it took a while to get used to, but now I hate having to go to the trouble to use "bzr shelve".
[13:48] <Peng_> They do have different uses, of course, and I have missed shelving in hg a couple times.
[13:50]  * gour uses shelve as poor-man's-interactive commit
[14:01] <lifeless> Peng_: install bzr-interactive
[14:04] <Peng_> Ooh
[14:05] <Peng_> How does bzr-interactive work? Does it add a "record" command? Does it change "commit"?
[14:05] <Peng_> I think it adds "record".
[14:17] <lifeless> Peng_: also you might like to comment on my tree marks concepts
[14:18] <Peng_> lifeless: What's that?
[14:18] <lifeless> a concept I'm exploring
[14:18] <lifeless> I started a thread about better review support
[14:23] <Peng_> lifeless: Thanks for suggesting bzr-interactive. :)
[14:32] <quicksilver> where do I find information about bzr-interactive? I found the launchpad page but it's a bit bare.
[14:33] <Peng_> quicksilver: The source code, I guess. Or "bzr help interactive" after installing it.
[14:33] <Peng_> quicksilver: There's not much to say. It adds an -i option to commit.
[14:34] <Peng_> It also adds a record-patch command.
[14:35] <quicksilver> I found the author's blog post.
[14:35] <quicksilver> I have a feeling I might consider commit -i harmful.
[14:35] <quicksilver> After all, if you commit -i and commit anything less than all hunks, that suggests that you haven't tested the version you have committed.
[14:35] <quicksilver> That sounds like bad practice?
[14:36] <Peng_> Hmm, I suppose you're right.
[14:36] <uws> Server does not understand Bazaar network protocol 3, reconnecting.  (Upgrade the server to avoid this.)
[14:36] <LarstiQ> quicksilver: now hook up -i code that takes the new tree and runs automated tests on that
[14:36] <uws> ^^ Why do I get this with 1.5 and 1.6 client?
[14:36] <uws> there's no newer version, is there?
[14:36] <Peng_> uws: There is.
[14:37] <Peng_> uws: 1.6 introduces a new network protocol.
[14:37] <Peng_> I usually use interactive commits when working on non-code files, so testing is moot.
[14:38] <quicksilver> LarstiQ: indeed; but I think you take my point.
[14:38] <uws> Peng_: Ok, didn't know that.
[14:38] <uws> but 1.6 is not stable so I won't upgrade the server :)
[14:39] <Peng_> uws: OK. :)
[14:39] <Peng_> uws: You'll have to either downgrade your local bzr to 1.5 or live with that message and the second connection.
[14:51] <LarstiQ> quicksilver: yes, I personally tend to agree, but there are ways to mitigate that risk.
[15:30] <jam> LarstiQ, quicksilver: Personally I only make the test suite pass when i'm ready to merge into the next level, so I'm not particularly concerned about the test suite passing on *every* commit.
[15:30] <jam> One of the nice things about having layered branches.
[15:31] <LarstiQ> jam: right, I didn't mean the *entire* suite. Just test exercising codepaths you've touched.
[15:31] <LarstiQ> even that may be too much, ah well.
[15:37] <jam> LarstiQ: sure, it is just my new response for partial commits, commit -i, etc, etc.
[15:38] <jam> I used to feel it was a bad deal, because you were committing something untested.
[15:38] <jam> But as long as you test it at the time of merging it into the next level.
[15:39] <LarstiQ> hmja.
[15:39] <LarstiQ> jam: you're ahead of me in feeling comfortable about it :)
[15:40] <jam> LarstiQ: I have come to realize that feature branches are a *better* way of creating that perfect merge patch
[15:40] <jam> rather than trying to do it without committing
[15:40] <jam> Especially when I'm exploring the space
[15:40] <jam> I'll do snapshots of stuff that I know I'm going to delete
[15:40] <jam> just to have something I can revert to if I decide.
[15:43] <beuno> jam, hi  :)  I haven't managed to get to the packaging yet, I will try to on the next few hours
[15:48] <jam> LarstiQ: I would say the biggest benefit is in development *pace*. I can snapshot as I go, and not fret *too* much about something breaking until I need everything to work
[15:49] <jam> If too much breaks at the end, I can go back via the snapshots to something earlier, and tweeze the changes out from there.
[15:49] <jam> So I commit small changes often
[15:49] <jam> and don't wait for the test suite to finish on all of them
[15:49] <jam> Though thanks to vila, I *do* tend to run the subset of tests that might be affected
[15:49] <jam> -s bzrlib.tests.test_foo is great
[15:49] <jam> we just need to make it shorter :)
[15:50] <LarstiQ> just - instead of -s? :)
[15:50] <jam> LarstiQ: well, if we could just get it to read my mind instead
[15:50] <jam> then it is just ''
[15:51] <jam> I think my current favorite is aliases, so that BT == bzrlib.tests and BP == bzrlib.plugins
[15:52] <pickscrape> It would be clever if it picked if you from your working directory (assuming your working directory is a test directory)
[15:52] <pickscrape> Damn, what happened to my English there?
[15:56]  * gour is installing bzr-interactive
[15:56] <james_w> vi
[15:58] <LarstiQ> jam: a stat to see which files got recently changed?
[16:01] <jam> LarstiQ: Interesting, though I'm not sure it would be a perfect correlation
[16:03] <gour> hmm, any idea what's wrong with http://rafb.net/p/6VdGNy88.html ?
[16:05] <jam> gour: you logged in and so are pulling from a different URL into a checkout of the http url
[16:05] <jam> why not just "bzr update"
[16:06] <jam> (bzr pull will update both the local branch, and the branch it is bound to, but you are bound to an HTTP branch, and thus updating it fails)
[16:06] <jam> I'm guessing you've logged in, because it isn't warning you
[16:06] <jam> which means that it is pulling from bzr+ssh:///...
[16:06] <jam> and thus doesn't know it is the same branch.
[16:07] <beuno> jam, older versions of bzr don't warn you
[16:07] <beuno> gour, are you using 1.3.1?
[16:07] <beuno> (hardy default)
[16:08] <gour> beuno: no, 1.7dev
[16:09] <beuno> ah, then it is odd
[16:09] <gour> jam: hmm, i just wanted to 'update' upload plugin by pulling latest revs from LP
[16:12] <gour> ..by pulling from LP to my local machine...strange
[16:25] <VSpike> probably a dumb question, but I've got a repo on an NTFS partition mounted in Linux, and I want to do a commit on it, but it shows everything changed, presumably because of permission bits
[16:27] <VSpike> I was going to copy it to an ext3 fs and then twiddle permissions, so I used cp -r but then bzr said it was not a branch
[16:27] <VSpike> I manually copied .bzr because cp left it behind
[16:27] <VSpike> Is that maybe because it is in a shared repository?
[16:33] <Peng_> Well, if your copied branch had no repository, that would cause problems...
[16:33] <Peng_> You should use "bzr branch", and maybe "bzr merge --uncommitted" if you want to pull the working copy changes too.
[16:34] <VSpike> I've worked around it by using cp -a on the root of the shared repo
[16:35] <VSpike> How can I get bzr to show me which file properties have actually changed?
[16:35] <VSpike> I tried using find to remove all *execute* bits, but that doesn't seem to be it
[16:56] <timnik> "bzr diff filename" will show me a diff of filename's changes. How might I tell bzr to use a graphical program such as meld to show me the diff?
[16:56] <timnik> Uncle google didn't seem to be of much help on this one.
[16:56] <pickscrape> bzr diff --using=meld ?
[16:57] <timnik> pickscrape, sweeeet :-)
[16:57] <ToyKeeper> timnik: bzr alias mdiff='diff --using=meld'   ...  then run 'bzr mdiff'
[16:57] <timnik> thanks!!
[16:57] <timnik> ToyKeeper, oooh, I like :-P
[16:57] <timnik> thank you
[16:57] <pickscrape> Isn't the bzr alias command new in 1.6?
[16:58]  * ToyKeeper tries again to figure out how to split a subdir into a new branch, including only the history for that subdir
[16:58] <gimaker> pickscrape, correct.
[16:58] <pickscrape> Just in case timnik is running 1.5 and find that doesn't work :)
[16:58] <gimaker> you can still edit bazaar.conf manually though
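For anyone on a pre-1.6 release without the `bzr alias` command, ToyKeeper's mdiff alias can be added by hand; a sketch of the relevant bazaar.conf fragment (the [ALIASES] section is real bzr configuration syntax, the alias body is just the example from above):

```ini
[ALIASES]
mdiff = diff --using=meld
```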
[16:59] <ToyKeeper> So far, it looks like 'bzr split foo/' and then 'bzr remove-revision N' for each unwanted rev N.  :(
[16:59] <gimaker> personally I like to use what ships with ubuntu, which is 1.3.1
[17:01] <TheEric> found a simple solution to our xpattern bug and bzr/lp with the revision history set to 0 - repush the last revision and *poof* everything was back to the way it was.
[17:08] <ToyKeeper> bleh, remove-revision seems to be broken.
[17:46] <Necoro> does bzr-svn support stacked branches?
[17:46] <Necoro> so that there would not be the need to download large repositories
[17:50]  * beuno releases Loggerhead 1.6
[17:52] <james_w> congratulations beuno and mwhudson
[17:52] <beuno> james_w, thanks!  :)
[17:55] <Necoro> hmm - ok ... atleast it works
[17:55] <Necoro> wondering how long - or if I'm missing an important point ;)
[18:01] <tacone> congrats
[18:03] <jam> beuno: good to hear
[18:04] <beuno> jam, next on my list is packaging bzr. So here I go...
[18:05] <hsn_> bzr check reports: 1 inconsistent parent. Is there a way to determine which files/branches/projects are bad? It's a shared repo between several projects
[18:07] <Peng_> hsn_: Just run "bzr reconcile".
[18:07] <Peng> Ooh, Loggerhead 1.6. Congrats!
[18:08] <hsn_> Peng: I heard rumours that bzr reconcile takes hours to run
[18:08] <Peng_> hsn_: Is your repo gigabytes in size?
[18:08] <hsn_> about 0.5gb
[18:08] <Peng_> hsn_: Eh. It'll probably take a while then.
[18:08] <Necoro> hmm ... are stacked branches a good replacement for looms?
[18:08] <Peng_> hsn_: A couple inconsistent parents aren't a serious issue. Reconcile will fix it, but you don't really need to bother.
[18:08] <Necoro> or not really
[18:09] <Peng_> Necoro: Are pencils a good replacement for skateboards?
[18:10] <Necoro> Peng_: hmm - then I misunderstood what stacked branches were supposed to do
[18:10] <Peng_> Necoro: They allow you to avoid storing the entire repo locally.
[18:11] <Peng_> Necoro: Sort of like a lightweight checkout, only it acts like a branch and you can commit locally and everything.
[18:11] <Necoro> ok ... but if you would make stacked branches of the "trunk" stacked branch you would end up with lightweight feature branches, right?
[18:12] <Necoro> I just noticed that looms do not work with stacked branches - and I don't want to download this 11k-revs-svn-repo
[18:12] <Necoro> so I'm looking for an alternative :)
[18:13] <Necoro> and having a stack of branches seemed reasonable ... because looms are a stack too
[18:16] <jam> beuno: do you want to send your email to the 'bazaar-announce@' as well. I think it is worthy and I'll "accept it"
[18:17] <beuno> jam, yes!  I'll send it now, thanks  :)
[18:17] <beuno> jam, sent
[18:18] <jam> beuno: Did you see: https://bugs.launchpad.net/bugs/258166
[18:18] <jam> lamont feels we are being bad packagers
[18:18] <beuno> jam, yeah, we talked it over, he's right, etc
[18:18] <beuno> we'll need to change the packaging doc a bit
[18:18] <jam> beuno: also, update the docs
[18:19] <beuno> :)
[18:19] <Necoro> ok ... branching fails if "A" is a stacked branch of an svn-repo and you are trying to create another stacked branch "B" from "A"
[18:20] <Necoro> then downloading the 11k revs is the only alternative =|
[18:21] <lamont> jam: I wouldn't care so much if it weren't 3MB+ of tarball and my  link being crap
[18:22] <jam> Necoro: branching from a stacked branch only creates a stacked branch if you request it
[18:22] <Necoro> jam: i did: bzr branch --stacked A B
[18:22] <Necoro> so I requested it :)
[18:22] <Necoro> but I guess the problem lies in bzr-svn
[18:22] <jam> I'm surprised, as I believe it is something we implement.
[18:22] <jam> But it may be something that is rough
[18:23] <Necoro> tells me something about: bzr: ERROR: Connection closed: Connection closed unexpectedly
[18:24] <jam> beuno: one thing that would be nice, try to keep a good log of everything it takes you to make a release.
[18:24] <jam> I'd *really* like to see it more automated.
[18:24] <jam> It takes far too much dev time to package a new release, considering we want to do 2 per month.
[18:24] <jam> (rc & final)
[18:25] <jam> beuno: also, it seems the dapper packaging has the old bug #249452
[18:25] <beuno> jam, ok, I'll try and do it this time
[18:26]  * Necoro just does the 11k revs pull now :)
[18:26] <TheEric> so, we fixed our issue with the revision history being set to 1 and not being in sync, but now we're getting the revision emails all over again. All 1264 of them.
[18:29] <jam> lifeless: I know you didn't like the workaround in status, but this seems to be the 3rd or 4th bug report of the status bug: bug #258358
[18:36]  * LeoNerd discovers the branch/uncommit/commit/replay  method of fixing earlier commits
[18:36] <LeoNerd> 7 shades of evil, but cute at the same time
[18:36] <jam> so what do you guys think about bug #258349
[18:36] <jam> should I mark it as 'invalid' because it is a limitation of cygwin?
[18:41] <Necoro> hmm - ok
[18:42] <beuno> jam, I think it's invalid, as bzr can't really do anything about it
[18:48] <beuno> jam, btw, it seems the bzr 1.6* announcements are missing in LP
[18:48] <jam> beuno: hmm... did I make them?
[18:48] <jam> I think you have to do a separate announcement for a release.
[18:49] <jam>  And it isn't part of the official "releasing.txt" document that reminds us about all that stuff
[18:49] <beuno> jam, ah, ok. I just saw that 1.5 was the last one announced, so I thought it was
[18:50] <jam> beuno: yeah, looks like I manually did that in 1.5 (and for 1.5rc1) I'll go ahead and do it for 1.6rc3
[18:50] <jam> I know poolie didn't for 1.6rc1 and the betas
[18:52] <jam> beuno: could it be that they were automatic when being put into the ~bzr ppa, but aren't because it is the ~bzr-beta-ppa ?
[18:52] <beuno> jam, nah, poolie did it manually
[18:53] <beuno> they don't get done automagically, although that would be a cool feature
[18:53] <Peng_> beuno: Thanks for giving me credit in the Loggerhead 1.6 release. I've really enjoyed making my little contributions, and just watching you guys work. :-)
[18:54] <beuno> Peng_, you absolutely deserve the credits, you're part of our test suite!
[18:55] <Peng_> Heh.
[18:56] <Necoro> hmm - is there a way of having a diff of a branch to its parent branch sent per mail? - but not the "bzr send" way of working
[18:56] <Necoro> just the patch :) w/o having to do "bzr diff > some.patch" and then attach this to some mail
[18:57] <Kinnison> Necoro: You mean you don't want a bundle?
[18:57] <Necoro> Kinnison: right ... because the project uses svn - and all the bundle information is just unnecessary
[18:58] <Peng_> bzr send has a --no-bundle option. I think it'll still include a bit of metadata, but not that much.
[18:58] <Necoro> and "bzr send --no-bundle" needs my branch being publically available
[18:58] <Peng_> Oh.
[18:59] <Necoro> oh - it works, if I use "." as public branch =D
[18:59] <jam> Necoro: so as you commit multiple times, you want it to keep growing? Or you don't want this to be automatically done, just as a one-time thing?
[19:00] <Necoro> jam: just when I finished some work (bug fix for example), I want all commits in one patch sent to the m-l
[19:02] <jam> Necoro: is there a reason you don't want the branch metadata included?
[19:03] <jam> As then people can just "save attachment your.patch" "bzr merge ../your.patch"
[19:03] <Necoro> because they do not use bzr
[19:03] <Necoro> and it would just be unnecessary noise
[19:03] <Necoro> probably annoying some guys
[19:06] <Necoro> I just noticed that git-send-email, which I took as the model for the functionality, actually needs patches as files
[19:06] <Necoro> but I don't want to have tons of patches lying around -.-
[19:07] <Necoro> just something like: "bzr send-this-loom --mail-to some@addre.ss"
[19:07] <Necoro> I guess I have to write it myself
[19:12] <jam> Necoro: 'bzr send -r thread:..-1 --no-bundle' though I guess you want to break apart the loom?
[19:12] <jam> I think something like that would be more-than-welcomed into the loom plugin.
[19:12] <jam> Possibly as a wrapper for the 'bzr send' command itself when you are working in a loom
[19:13] <jam> (like is done for 'bzr status' and some other commands)
[19:17] <Necoro> jam: you need to specify the public branch ... so the correct command would be: bzr send -r thread:..-1 --no-bundle . .
[19:17] <Necoro> notice the two dots :)
[19:17] <Necoro> but I think there is still too much bzr-metadata in it
[19:17] <jam> Necoro: well, you could set your public_branch location in ~/.bazaar/locations.conf as an alternative
[19:18] <Necoro> true
[19:18] <jam> Though 'bzr send' *is* meant as a way to convey bazaar changes between people (hence a public location if you don't send the metadata, etc.)
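jam's suggestion of configuring public_branch would look roughly like this in ~/.bazaar/locations.conf (the branch path and URL are placeholders, not real locations):

```ini
[/home/you/src/project/feature-x]
public_branch = http://example.com/bzr/feature-x
```

With that in place, `bzr send --no-bundle` run inside that branch no longer needs the public branch given on the command line.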
[19:19] <Necoro> jam: yeah - BUT: this is the wrong assumption: not every project is using bzr just because one guy happens to do so ;)
[19:20] <Necoro> so for the moment I'll go with the normal "diff && attach to mail"-procedure
[19:21] <Necoro> and see if I have some time left next month to write a nice "mail-to-non-bzr-users"-plugin :)
[19:31] <jam> Necoro: I might point you toward bug #227340
[19:32] <jam> which at least seems related
[19:32] <Necoro> jam: thx
[19:39] <beuno> lamont, ping
[19:39] <lamont> beuno: can I have a couple min first?
[19:39]  * beuno hands a couple of minutes to lamont 
[19:40] <rockstar> If I bzr upgrade a local branch, and then push it up to an existing branch on Launchpad, will that upgrade the branch there as well?
[19:41] <Peng_> rockstar: no
[19:41] <Peng_> rockstar: You'll have to run upgrade on Launchpad over sftp or (in bzr 1.6) bzr+ssh.
[19:42] <rockstar> I didn't think so.  Just trying to plan my upgrade properly.  the latest rc seems to have deprecated this specific format.
[19:42] <lamont> beuno: wassup?
[19:43] <beuno> lamont, so, I was looking at what I did for the 1.6rc2, and I did name the tarball .orig*
[19:44] <lamont> if the version in debian/changelog is FOO-BAR then you need bzr_FOO.orig.tar.gz
[19:44] <lamont> so did you name it bzr_1.6~rc2.orig.tar.gz, or something else?
[19:44] <lamont> and it needs to be in the parent directory of wherever you do debuild
[19:44] <lamont> which is to say, in the parent dir of bzr-1.6~rc2
[19:45] <beuno> lamont, bzr-1.6rc2.orig.tar.gz
[19:45] <beuno> the - screwed me over, didn't it?
[19:45] <lamont> note the missing ~
[19:45] <lamont> and the dash instead of underscore
[19:46] <lamont> once you have the orig.tar.gz as it needs to be, if you forget the -I.bzr -i.bzr it'll bitch about binary files in the diff... :-)
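lamont's naming rule (a version of the form FOO-BAR in debian/changelog needs bzr_FOO.orig.tar.gz in the parent directory) can be sketched in plain shell; the version string here is a hypothetical example:

```shell
# Derive the orig tarball name from a Debian version of the form FOO-BAR.
version="1.6~rc3-1"            # hypothetical debian/changelog version
upstream="${version%-*}"       # strip the Debian revision -> 1.6~rc3
tarball="bzr_${upstream}.orig.tar.gz"
echo "$tarball"
```

Note the underscore after the package name and the preserved `~`; getting either wrong (e.g. `bzr-1.6rc2.orig.tar.gz`) is exactly the mistake discussed above.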
[19:46] <rocky> jelmer: don't suppose you're around?
[19:49] <beuno> lamont, ok, let's see how this turns out then...
[19:53] <Peng_> rockstar: Yeah. Recent changes ruined the performance of pre-pack repos, so they were deprecated. It was about time anyway, since they're old and inferior.
[19:54] <rockstar> Peng_, yea, I knew there were upgrades needed.  I also know that a few of my last attempts didn't work.
[19:55]  * rockstar procrastinates an upgrade
[19:59] <Peng_> rockstar: How large is the repo? If it's small or you have a speedy connection, it's no big deal.
[20:01] <beuno> lamont, debuild -S -sa -i -D
[20:02] <beuno> is still the correct command, right?
[20:02]  * lamont goes looking for what -D does
[20:03] <beuno> jam, if LP doesn't take up too much of my time next week, I may attempt to script this, and correct the docs
[20:03] <lamont> I expect that -i wants to be -i.bzr. OTOH, if that command successfully builds a source package, and the .dsc references bzr_1.6~rc3.orig.tar.gz, then \o/
[20:05] <beuno> lamont, lintian is now a bit happier than before: https://pastebin.canonical.com/8231/
[20:05] <beuno> and: dpkg-source: building bzr using existing bzr_1.6~rc3.orig.tar.gz
[20:06] <lamont> does debian/rules actually use quilt, or should the build-dep be dropped?
[20:06] <lamont> line 2 is "meh"
[20:06] <lamont> line 3-4 are expected
[20:06] <beuno> well, we don't use quilt in our PPAs, maybe Debian does?
[20:07] <lamont> dunno
[20:07] <lamont> in short, it's just saying that you claim to need quilt, but aren't seeming to use it
[20:07] <chadmiller> Hi all.  One of my cow-orkers says his branch/repo reset its revno to zero.  Sure enough, "bzr log" starts with revno 6193, but "bzr revno" says "15".  Any ideas?
[20:07] <lamont> --> "meh"
[20:08] <beuno> lamont, heh, ok. I can either change the standards version and remove quilt, reducing the amount of
[20:08] <beuno> "meh"'s
[20:08] <rockstar> chadmiller, bzr reconcile
[20:08] <beuno> or upload
[20:08] <lamont> if you change the standards version, you have to actually go read the cheatsheet to make sure you really are compliant
[20:08] <lamont> IOW, upload
[20:08] <rockstar> chadmiller, before he does that...
[20:08] <chadmiller> "'bzr reconcile' seems to have fixed some things.  I'm running 'bzr check' now."
[20:08] <beuno> since you've been so kind to help me, I'll let you decide
[20:09] <chadmiller> He made backups, of course.
[20:09] <rockstar> chadmiller, damn.  That's a bzr bug that I know exists and would like to know the cause.
[20:09] <chadmiller> 'bzr check seems to be hung however in the "checking versionedfile 0/20980"'
[20:09]  * beuno uploads
[20:09] <chadmiller> I told him "check" will take a while.
[20:09] <rockstar> Can you pastebin the logs ?
[20:10] <rockstar> From the old branch, the broken one?
[20:10] <beuno> lamont, thanks a lot for your help, and please poke me if the debs have anymore problems  :)
[20:10] <lamont> thanks
[20:11] <chadmiller> rockstar: I'm getting them.
[20:11] <chadmiller> rockstar: It may be big.  Want it in email?
[20:11] <rockstar> chadmiller, that would be awesome
[20:12] <rockstar> paul at canonical com
[20:12] <chadmiller> 'kay.
[20:13] <beuno> jam, so, by my calculations, once I've automated everything in my head, uploading bzr to all PPA's, sans bzrtools and other interesting plugins, it takes a bit less than half of my day away.  I wouldn't be surprised if doing it in full takes up a full day
[20:16] <jam> beuno: right, and we need someone to do that 2x per month
[20:16] <jam> Not to mention all the PQM time gardening a new release branch.
[20:17] <jam> (That has gotten better, though)
[20:17] <beuno> jam, yeah, too much. On the bright side, it should be scriptable, because everything is really automated
[20:17] <jam> Anyway, out of 20 working days, killing 2 is 10% of a dev's time spent just getting releases out every month.
[20:17] <beuno> but a script to do it would take a few days work
[20:17] <beuno> which seems like a good investment
[20:17] <jam> beuno: sounds like you get paid back in 2 months
[20:18] <jam> pretty good ROI
[20:18] <beuno> absolutely
[20:18] <jam> beuno: *and* if it is automated enough, then *I* can do it, which gives you an even better ROI
[20:18] <beuno> jam, heh
[20:19] <beuno> well, I don't really know if I'll be able to do this. I still don't have a manager, so I'm doing a little bit of everything. That may change next week
[20:19] <beuno> but I'll certainly try to do it
[20:19] <jam> beuno: who do you expect to be your line manager, btw
[20:20] <beuno> jam, they're still trying to figure that out  :)
[20:20] <beuno> I'm hanging around in the lp-bzr team for now, which is where I already know everybody, and can be productive *today*
[20:21] <beuno> and I expect to be working on LP for the next few months at least
[20:22] <jam> beuno: well, you've been productive in #bzr today as well, so thanks :)
[20:23] <pickscrape> beuno: while you're around, are you aware of any good docs on METAL?
[20:23] <beuno> jam, happy to help!  :)
[20:23] <rockstar> pickscrape, zope's docs are pretty thorough
[20:23] <beuno> pickscrape, zope has the best ones I think
[20:23] <jam> beuno: hey, that's my line :) (At least according to vila)
[20:24] <beuno> jam, very happy to help  :p
[20:24] <pickscrape> beuno: ok, I'll have a look. The ones google found for me were really sparse on detail and examples
[20:26] <beuno> pickscrape, http://www.zope.org/Documentation/Books/ZopeBook/2_6Edition/AppendixC.stx
[20:26] <pickscrape> beuno: thanks :)
[20:27] <pickscrape> Hopefully I'll be able to make progress on the breadcrumbs tonight. I hit a wall with it not recognising my second template
[20:28] <beuno> pickscrape, cool!  let me know if you're stuck, I'll be happy to give you a push
[20:28] <beuno> loggerhead is crack-ish sometimes
[20:28] <beuno> as rockstar   :)
[20:28]  * rockstar is indeed crack-ish
[20:28] <pickscrape> :) Well, tal etc is completely new to me, so I'm learning as I'm going along
[20:29] <beuno> heh
[20:29] <beuno> I think I meant "ask rockstar"
[20:29] <beuno> but it's friday, and it's been a *very* active week for me
[20:29] <pickscrape> :)
[20:29] <rockstar> beuno, by the way, paste has all sorts of profiling goodies.  I'm going to take a whack at it this weekend.
[20:30] <pickscrape> Has the "unification" stuff been merged to trunk yet? If so I'll probably have a bit of fun with conflicts when I merge it :)
[20:30] <beuno> rockstar, that would be awesome
[20:30] <beuno> pickscrape, it has
[20:30] <rockstar> pickscrape, it has
[20:30] <beuno> and you will  :)
[20:31] <pickscrape> ok, I'll complete what I'm doing then before merging so I can do it all in one go.
[20:31] <beuno> pickscrape, it's a great way to learn about resolving conflicts
[20:31] <pickscrape> Yeah. I'm used to the svk way (which is actually pretty nice)
[20:39] <beuno> lamont, out of curiosity. All the other bzr releases had the .bzr dir in them, right?
[20:39] <lamont> only if they were packaged without an orig.tar.gz
[20:40] <beuno> so just the last one I uploaded?
[20:40] <beuno> poolie removes them?
[20:40] <lamont> .bzr has binary files , and .diff.gz doesn't handle them
[20:40] <lamont> ergo, they're not there, one way or another.
[20:40] <lamont> -mix 1520 : cat /home/lamont/.devscripts
[20:40] <lamont> DEBUILD_DPKG_BUILDPACKAGE_OPTS="-I.bzr -I.svn -I.git -i.bzr -i.git"
[20:40] <beuno> ah, so you can leave them, if you do the .orig bit correctly
[20:41] <lamont> I tend to just not worry about remembering it. :-)
[20:41] <lamont> how so?
[20:41] <lamont> .orig doesn't have a debian/ directory
[20:41] <lamont> and debian/.bzr is the bit that kills you
[20:41] <beuno> I'm confused
[20:41] <beuno> deleting the debian/.bzr isn't in the docs
[20:42] <beuno> which makes me think poolie doesn't remove it
[20:42] <lamont> of course, there's no reason that I can think of to ship a stale bzr-1.6/.bzr directory as part of the source, when the current .bzr directory is what everyone wanting to muck with it should be referencing
[20:42] <lamont> with the -I and -i options, you don't have to delete it, it becomes a -x.bzr to the diff command
[20:43] <beuno> I see
[20:43] <beuno> ok, so I don't need to add that extra step to the docs
[20:43] <lamont> so if you look at poolie's old source packages, you'll find that the diff doesn't have debian/.bzr in it.  exactly _how_ poolie accomplished that, dunno
[20:49] <beuno> jam, after I finish uploading to PPA's, would you like to help me debug why bzr suddenly is eating *all* my ram + swap, bringing my laptop to its knees, to pull 1 day's worth of the LP source?
[20:49] <beuno> I've tried it 3 times now
[20:49] <beuno> and it does it every time
[20:49] <beuno> have to kill -9 it
[20:50] <jam> beuno: sure
[20:50] <jam> quid-pro-quo and all
[20:50] <beuno> :)
[21:02] <beuno> lamont, jam, just uploaded the last of 'em. Dapper with the shell fix. I'm off for ~40 minutes, and then I'll check back to fix anything that I might have overlooked, and try to debug my bzr problem if jam is still around
[21:04] <beuno> jam, nm about the debugging, it was bzr-search still trying to index everything
[21:04] <jam> beuno: yeah, first thing I was going to recommend was "--no-plugins" :)
[21:25] <TheEric> What's the "could not acquire lock" error?
[21:29] <Peng_> TheEric: It means whatever you were trying to operate on was locked (i.e. someone else is using it, or was and they crashed).
[21:29] <Peng_> TheEric: Or maybe you tried to write to something read-only, like http.
[21:31] <TheEric> any way to override that?
[21:32] <TheEric> There was a bit of an error that we -think- was fixed in the revision history, where it was out of sync with everything else.
[21:35] <Peng_> TheEric: More details plz.
[21:35] <Peng_> TheEric: Locking has nothing to do with history issues.
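For the record, when a lock was left behind by a crashed process, bzr has a built-in way to remove it. A sketch with a hypothetical branch location; only run this after confirming nobody is actually holding the lock:

```
# Show branch info (including lock holder), then remove a stale lock
bzr info sftp://host/path/branch
bzr break-lock sftp://host/path/branch
```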
[21:41]  * meteoroid loses 10lb on the treadmill while branching a couple hundred revisions from fricking slow googlecode svn
[21:41] <meteoroid> :-P
[21:42] <mwhudson> googlecode svn is pretty terrible
[21:42] <mwhudson> (especially *because it's google!*)
[21:43] <meteoroid> i know, i was surprised..
[21:43] <meteoroid> looks like my idea of offering professional hosted, private SCM and other things for people like Trac is not so bad..
[21:43] <meteoroid> our svn is really fast, and soon i'll have bzr talking webdav or something fancy so i can use it as my personal primary
[21:51] <lifeless> in what way is googlecode's svn worse than $vanilla svn?
[21:52] <jam> meteoroid: have you looked at "bzr-webdav" the plugin Vila wrote?
[21:52]  * jam is away for a bit
[21:52] <lifeless> bye jam
[21:52] <meteoroid> jam: yeah i haven't got the server setup yet
[21:52] <meteoroid> and i don't have 1.6 going
[22:02] <mwhudson> lifeless: it's unreliable
[22:02] <mwhudson> code imports from google code rarely complete if it's more than a few hundred revisions
[22:02] <mwhudson> (this is partly because cscvs is terrible, of course, but ...)
[22:07] <meteoroid> lifeless: well, it's terribly slow, is the problem.
[22:07] <meteoroid> 154 revisions and still maybe 75% complete
[22:07] <meteoroid> it's been over 20 minutes!
[22:07] <meteoroid> the servers are either overloaded or excessively throttled
[22:07] <meteoroid> i mean, i pushed four revisions from bzr into svn at googlecode and it took 5 minutes..
[22:08] <meteoroid> four revisions! a few config files and a python script that downloads code from elsewhere
[22:10] <Peng_> I don't have problems with Google Code's svn, but I guess I don't use it for anything large.
[22:13] <Toranin> anyone here have a recommended best way to convert a very long-historied (well over 90000 revs) svn repos?  I couldn't get the bzr-svn extension to work (may try again later), and it's looking like svn2bzr (dumpfile version) will end up taking like 2 weeks to make it through the whole thing
[22:14] <Peng_> Toranin: You should use bzr-svn, and get help here if you encounter problems.
[22:14] <lifeless> Toranin: bzr-svn
[22:14] <aquarius> I've got one folder which is under bzr control; I've done bzr push to a private sftp repository. If I want to edit it on another machine as well, should I bzr branch sftp://wherever or bzr checkout sftp://wherever ?
[22:14] <lifeless> Toranin: older versions will leak like a sieve (the bindings to svn were dud, new versions of bzr-svn have their own, and shouldn't leak).
[22:14] <mwhudson> some kind of fast-import based approach would be the other idea i guess
[22:15] <lifeless> Toranin: if it does leak, just ctrl-C and resume
[22:15] <mwhudson> but yeah, bzr-svn would be the thing to try
[22:15] <Toranin> I have the new version, but the selftests fail
[22:15] <lifeless> aquarius: thats right
[22:15] <Toranin> with the same assertions I see when trying it on the repos
[22:15] <lifeless> Toranin: please do file a bug
[22:15] <aquarius> lifeless: heh. Sorry, my question was "which of those should I use?" I'm a bit unclear on the difference :)
[22:15] <mwhudson> Toranin: what version of bazaar
[22:15] <mwhudson> ?
[22:16] <lifeless> aquarius: branch will make a new branch; checkout will let you work on the existing branch directly
[22:16] <Toranin> IIRC (this was a few days ago) 1.6rc1, with bzr-svn 1.4rc1
[22:16] <meteoroid> Toranin: do you have access to ubuntu hardy? that's how i keep bzr-svn functional
[22:16] <meteoroid> i just suck it out of svn then i can talk to it from plain-jane bzr from any other box
[22:16] <Toranin> no, I'm stuck with platform FreeBSD 6.3-RELEASE amd64
[22:17] <aquarius> lifeless: so if I checkout, then bzr commit commits straight back to sftp, but if I branch then bzr commit goes locally and I have to bzr push to get it back into sftp?
[22:17] <Toranin> but I do have a jail with separate software
[22:17] <lifeless> aquarius: exactly
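The difference lifeless confirms can be shown as a short command sketch (URLs and directory names hypothetical): a checkout binds commits to the remote branch, while a branch keeps them local until pushed.

```
# Checkout: working tree bound to the remote branch; commit goes straight back.
bzr checkout sftp://host/path/branch work

# Branch: a new standalone branch; commits stay local until an explicit push.
bzr branch sftp://host/path/branch mine
cd mine
bzr commit -m "change"                # recorded locally only
bzr push sftp://host/path/branch      # now published
```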
[22:17] <Toranin> so I can keep a functional bzr around without too much trouble
[22:17] <lifeless> Toranin: so, I think you need bzr 1.6rc3 because there was a patch to bzr Jelmer put in on wednesday or so
[22:17] <aquarius> lifeless: ok, I'm starting to get a handle on this, cheers. :)
[22:17] <meteoroid> well if you can get subversion 1.5 with its python bindings working, you should be home free on bzr-svn
[22:18] <meteoroid> and that's the route to go, i think.
[22:18] <lifeless> aquarius: excellent
[22:18] <aquarius> and...if I already have the code on the second machine, and I bzr branch in that folder, will it cope with some of the files already being there? Or will it chuck some sort of horrible wobbler?
[22:18] <Toranin> meteoroid: old bzr-svn (with svn's python bindings) on svn 1.5 gives an assertion failure and crashes the Python process
[22:18] <Toranin> something about paths not being allowed to start with a '/'
[22:19] <meteoroid> when does it give this failure?
[22:19] <Toranin> whenever you attempt to run any command that accesses a repository
[22:19] <Toranin> whether remote or local
[22:20] <meteoroid> sample command?
[22:20] <Toranin> hmm
[22:20] <Toranin> (this was a few days ago when I last tried it)
[22:20] <Toranin> bzr log svn+https://url/to/trunk
[22:20] <meteoroid> try something like: bzr svn-import svn+http://some-svn-repo/
[22:21] <meteoroid> i had issues with high rev count but if you have the latest bzr-svn it should have a leaking issue fixed..
[22:22] <Toranin> meteoroid: let me get the latest bzr and bzr-svn as of today, and I'll get back to you with results and selftest failures if any
[22:22] <Toranin> or to the channel anyway
[22:23] <lifeless> aquarius: if you run bzr branch somewhere, it will make a subdirectory and put its own files within that
[22:24] <lifeless> aquarius: you can copy your own versions over the result
[22:30] <aquarius> lifeless: ah. The folder in question is my home folder...so I wanted to do "bzr branch sftp://blah . "
[22:31] <Toranin> meteoroid: okay, running a svn-import on a svn+file:// of a backup repos
[22:31] <Toranin> at 90488 revs
[22:32] <Toranin> it's at "determining changes" now, we'll see if it gets past there before I have to leave for the evening
[22:34] <meteoroid> right on
[22:34] <meteoroid> how much ram on this box?
[22:34] <meteoroid> you did 'nice' right? heh
[22:34] <meteoroid> i found, btw, on a multicore, that branching http from localhost was faster because apache used 30-60% of a core and bzr would stay steady around 60-80
[22:34] <meteoroid> otherwise bzr just takes up like 60-80 with a file://
[22:35] <meteoroid> probably would be faster with svnserve because http is chatty
[22:36] <Toranin> no nice on it
[22:36] <Peng_> Toranin: You should watch the RAM usage, just to be safe.
[22:36] <Toranin> *nods*
[22:36] <Toranin> it's not using much
[22:37] <Toranin> only a couple hundred MB so far, I have about 700MB of leeway in free/inactive RAM
[22:38] <Toranin> it's spinning at "determining revisions to fetch 0/33" now
[22:38] <Toranin> using virtually no CPU and in BSD "lockd" state
[22:41] <pasky> Hi! I've read http://bazaar-vcs.org/BzrVsGit#head-826d31d333758d3cd08eb5aeecd8bf77b1025373 and now I'm confused, how is that "far more flexible and powerful" than Git's alternates feature?
[22:44] <bpeterson> hi guys!
[22:45] <bpeterson> I was wondering if https://bugs.launchpad.net/bzr/+bug/252212 can be fixed before the 1.6 release
[22:45] <lifeless> pasky: well its quite different, we also have an alternates-like feature
[22:46] <lifeless> I don't know that it's "far more flexible and powerful", just a different way of structuring some things; but it is nicer IMO to be able to manage directories as directories
[22:47] <pasky> so what's the structural difference to alternates?
[22:47] <pasky> I'm sorry, I don't understand "manage directories as directories"
[22:47] <lifeless> alternates are pointers to a separate repository
[22:48] <lifeless> bzr shared repositories are a single repository which separate branches use for common storage
[22:48] <pasky> oh, I see
[22:49] <pasky> git can do that too :)
[22:49]  * lamont gets popcorn
[22:50] <pasky> git "has" a git-new-workdir command that does pretty much the same thing
[22:50] <lifeless> pasky: "directories as directories" I meant to write "branches as directories"
[22:51] <pasky> the only catch is in the "has", since it's not part of the official command set (it's in contrib/) and it is pretty much completely undocumented
[22:51] <lifeless> git-new-workdir looks a little raw
[22:51] <lifeless> it also needs symlinks which are not portable
[22:51] <pasky> that's right
[22:52] <lifeless> in a purely technical sense, bzr builds in the capability to do N workdirs -> M branches -> 1 history store, portably on windows/macosX/*nix
[22:52] <lifeless> meh, no caffeine yet, please excuse my syntax and spelling glitches
[22:52] <pasky> :)
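The "N workdirs -> M branches -> 1 history store" layout lifeless describes maps onto real bzr commands roughly like this (directory names are hypothetical):

```
bzr init-repo project        # one shared history store
cd project
bzr init trunk               # branches created inside share the repo's storage
bzr branch trunk feature-x   # another branch, same store
bzr checkout trunk work      # a workdir on an existing branch
```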
[22:54] <pasky> how would you feel about "Git can achieve the same functionality with the semi-official git-new-workdir extension, which however has many usability and portability problems?"
[22:56] <Toranin> meteoroid: in case this wasn't obvious to you, if you point SVN_SSH at a script that looks like 'shift; exec "$@"' you can have it run svnserve locally and talk to it directly
[22:56] <meteoroid> that's pretty neat..
[22:56] <meteoroid> it's obvious why it works but yeh i never thought of that
[22:56] <meteoroid> i forget that svnserve is just a different name for same executable
[22:56] <meteoroid> like sh/bash
[22:56] <meteoroid> vi/vim
[22:57] <Toranin> well, actually in this case the script gets run as "scriptname localhost svnserve -t"
[22:57] <Toranin> and it just pops off the localhost (or whatever hostname) and then passes control to svnserve
[22:57] <meteoroid> ah sure..
[22:57] <lifeless> pasky: I don't think its the same functionality
[22:58] <meteoroid> that's pretty clever :)
[22:58] <meteoroid> learn something new every day..
[22:58] <Toranin> mine checks if the hostname is localhost and borks if not
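Toranin's SVN_SSH trick can be written out as a tiny wrapper. svn invokes the tunnel command as "wrapper HOSTNAME svnserve -t", so dropping the first argument and exec'ing the rest runs svnserve locally instead of over ssh. A sketch (file name hypothetical), demonstrated with `echo` standing in for svnserve:

```shell
# Create the wrapper Toranin describes: drop the hostname, exec the rest.
cat > svn-local.sh <<'EOF'
#!/bin/sh
shift        # discard the hostname svn passes first
exec "$@"    # run svnserve -t (or whatever follows) directly
EOF
chmod +x svn-local.sh

# svn would use it roughly via:
#   SVN_SSH=$PWD/svn-local.sh svn log svn+ssh://localhost/repo
# Demonstrate the argument handling with echo in place of svnserve:
./svn-local.sh localhost echo svnserve -t
```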
[22:58] <lifeless> pasky: because there is no constraint linking the workdir to the repository, git can't prevent bad gc actions
[22:58] <meteoroid> am always telling people to write their admin scripts with bash instead of perl  because they will learn one-liners that are golden in an emergency
[22:59] <meteoroid> i challenge perl to have a decent one-liner with seven subcommands ;d
[22:59] <Toranin> I go back and forth between bash and Python depending on how elaborate they get
[22:59] <meteoroid> yah i'd tend to choose python if i went over bash
[22:59] <pickscrape> I challenge perl to be readable ;)
[22:59] <meteoroid> had a new client who needed me to clean up tons of deploy scripts
[22:59] <meteoroid> and i'm like
[22:59] <Toranin> I have an equals sign!
[22:59] <pasky> lifeless: the refs/ space is shared so this would only cover *quite* a corner case when you have uncommitted objects in your index for more than 14 days, and even in that case it would be usually trivially repairable
[22:59] <meteoroid> this is the same script, with variables changed in each copy
[22:59] <pasky> lifeless: so I don't know if that's practical concern, unless you have something else on your mind?
[22:59] <meteoroid> why not set defaults that are for staging build and then have it accept options
[23:00] <Toranin> maybe by the time I leave I will have two, or maybe THREE equals signs on the "determining revisions to fetch"
[23:00] <meteoroid> even my bash scripts do that ;d
[23:00] <lifeless> pasky: you get a copied index?
[23:00] <meteoroid> i mean, sure, chomp is funny sounding, and =~ is cool looking, but there is more to life ;d
[23:00] <pasky> lifeless: well obviously, each workdir has to have its own index
[23:01] <lifeless> pasky: I guessed it would but confirming is useful
[23:01] <lifeless> pasky: I'm not a git dev :)
[23:01] <Toranin> RAM usage is creeping up, but it still looks heavily I/O bound -- <0.2% cpu use
[23:01] <lifeless> If you wanted to change to the sentence you wrote, I'd be ok with that
[23:01] <pasky> so if you git add something and wait 14 days without committing and then git gc in another shared repository, the object will go away
[23:02] <pasky> ok
[23:02] <lifeless> I'm sure other folk will tweak it back and forth
[23:02] <lifeless> these things are dynamic :P
[23:04] <lifeless> pasky: while you are here, can you answer a couple of questions about git?
[23:04] <pasky> sure
[23:04] <pasky> BTW, I'm also confused about "Git is not a realistic option on Windows"
[23:04] <lifeless> when you fetch, and get a pack over the network
[23:05] <pasky> I'm now using MSysGit daily and I'd appreciate if you could elaborate on that :)
[23:05] <lifeless> It seems obvious to me that you decompress every text to get its sha1 and index it
[23:05] <lifeless> but do you keep the pack, or unpack to single files?
[23:06] <pasky> we don't decompress the text, the object id is part of the header of each object record; and we keep the pack
[23:06] <pasky> unpacking to single files is an extremely rare operation that is used pretty much only for repairing corrupted packs, I think
[23:07] <lifeless> ok, so if someone gave you garbage, you'll only notice when you access the object?
[23:07] <pasky> interesting question :) let me check
[23:07] <lifeless> I mean e.g. a bit error in an object
[23:08] <lifeless> you could hash the entire pack
[23:08] <pasky> ok so I was wrong on at least one count, we do unpack to single files if there's less than 100 objects in the pack
[23:09] <lifeless> but that doesn't prevent targeted attacks (not that they can get you to use wrong data, just to make you think you have content you don't)
[23:09] <pasky> (which makes sense since having many little packs is not much good, and sooner or later git gc will automatically fire up and collect the objects to a larger pack)
[23:09] <LeoNerd> Can I do a   bzr log -r a..b    which starts at the revision -after- a, i.e. the output doesn't include a itself?
[23:09] <LeoNerd> I tag all my releases with a 'RELEASED ...' tag, so I have a little script which finds the latest such tag...
[23:09] <lifeless> LeoNerd: its half-open by default I think :)
[23:10] <LeoNerd> bzr log -r `bzr-find-latest-RELEASED`..    <== includes the released tag itself
[23:11] <pasky> on your other question, there is a hash of the entire pack
[23:11] <lifeless> LeoNerd: hmm
[23:11] <pasky> and I currently don't see your targeted attack scenario?
[23:11] <LeoNerd> I vaaaaugely recall a VCS that can do   -r after:somespec   and so on.. is that bzr or something else?
[23:11] <lifeless> pasky: its would be just a nuisance
[23:12] <pasky> (I'm still reading the code :)
[23:12] <lifeless> pasky: take a pack, edit some delta content to nulls, rehash the pack, and then give that when requested
[23:12] <lifeless> pasky: you'll have some unreconstructable content and no warning
[23:13] <LeoNerd> Oooh.. boo
[23:13] <LeoNerd> bzr can do  before:tag:whatever   but not after:tag:whatever
[23:13] <LeoNerd> Feature request? :)
[23:13] <lifeless> pasky: anyhow, the key question I had was about the work done during fetch
[23:13] <pasky> hmmm
[23:13] <lifeless> pasky: I've read the pack creation code closely
[23:13] <pasky> the code is trying to confuse me
[23:14] <pasky> since it suggests that objects in fact are uncompressed
[23:14] <lifeless> I would expect for correctness a full decompress
[23:14] <pasky> which comes as a surprise to me :)
[23:14] <pasky> but I guess it's so
[23:14] <lifeless> LeoNerd: uhm, please bring up on list?
[23:15] <LeoNerd> "the list"? Ooh.. a mailing list?
[23:15] <LeoNerd> I suppose I could add another; I'm only on about 15 already :P
[23:15] <lifeless> LeoNerd: yeah, we have one of those :)
[23:15] <LeoNerd> Ya.. most people do
[23:15] <pasky> hm, so yes, we do a full decompress and verify correctness
[23:16] <pasky> I currently don't see the code that verifies correctness of delta-based objects but that's probably just because I'm kind of tired :)
[23:16] <lifeless> LeoNerd: I think the bug is that log is not half-open when given a range
[23:17] <LeoNerd> It is half-open
[23:17] <LeoNerd> Just at the wrong end
[23:17] <lifeless> LeoNerd: no its not
[23:17] <lifeless> log -r 1..2
[23:17] <LeoNerd> Most people expect that   bzr log -r 10..20   should include 10
[23:17] <pasky> lifeless: so thanks for teaching me more about git internals ;)
[23:17] <lifeless> shows 1 and 2
[23:17] <lifeless> pasky: :)
[23:17] <LeoNerd> Oh..
[23:17] <LeoNerd> Yes..
[23:17] <LeoNerd> But.. I want   bzr log -r 10..    to show 11 onwards, and not 10
[23:17] <LeoNerd> I imagine most people would complain
[23:17] <LeoNerd> Isn't   after:   easier to add?
[23:18] <LeoNerd> bzr log -r after:10..
[23:18] <lifeless> LeoNerd: if you can define it sanely
[23:18] <lifeless> we have a graph ;)
[23:18] <LeoNerd> after:10  is 11,   after:anythingelse dies
[23:18] <lifeless> LeoNerd: well you want after:tag:foo
[23:18] <LeoNerd> Yes,..
[23:18] <lifeless> but sure - lets raise
[23:18] <LeoNerd> But tag:foo   is just a "nice" name for 10, no?
[23:18] <lifeless> no
[23:19] <lifeless> its a nice name for revid:SOME_UUID_HERE
[23:19] <LeoNerd> Ooh.. hrm...
[23:19] <Peng_> Well, what does before: do?
[23:19] <lifeless> I'm sure its doable
[23:19] <lifeless> Peng_: left hand parent
[23:19] <lifeless> LeoNerd: I'm just saying we should take a minute to discuss
[23:20] <LeoNerd> Right..
[23:20] <lifeless> if we can tweak a fundamental to be more consistent
[23:20] <LeoNerd> OK.. I'll take a peek at the ML then
[23:20] <lifeless> e.g. merge -r x..y does not include x
[23:20] <lifeless> then we can be simpler
[23:21] <LeoNerd> https://lists.canonical.com/mailman/listinfo/bazaar  <== this?
[23:21] <lifeless> LeoNerd: yes
[23:23] <LeoNerd> Aww.. I don't get a choice of English (UK) ? :P Boo
[23:27] <james_w> bazaar is gone from Debian apparently
[23:27] <LeoNerd> ?
[23:27] <lifeless> LeoNerd: the old C version
[23:27] <LeoNerd> Ohdear... so you're right.
[23:28] <james_w> oh yeah, sorry :-)
[23:28] <LeoNerd> Oh.. ohyes. Got me all in a panic for a moment
[23:28]  * LeoNerd recalls the SAS
[23:28] <lifeless> pasky: I can't comment on windows, I put away my windows-hacker clothes some time back
[23:29] <pasky> lifeless: ok - I've just edited that section too to clarify a bit wrt. current state of msysgit, I look forward to seeing further edits by others :)
[23:29] <pasky> now my hands are off again ;)
[23:31] <lifeless> ok ;)