[01:05] <spiv> vila, jam, lifeless: IIRC my change to --parallel was to make it allocate by round-robin
[01:13] <poolie> hi spiv
[05:27] <AfC> I'm curious what might be done to help bzr take advantage of multi-core systems.
[05:27] <AfC> I guess Python is a serious roadblock in here :/
[05:27] <Peng> Does bzr even do anything very parallelizable?
[05:27] <Peng> Other than the test suite? :P
[05:28] <AuroraBorealis_> what needs to be
[05:28] <AuroraBorealis_> parallelized
[05:29] <AuroraBorealis_> honestly
[05:29]  * AfC is watching a bzr process at "fetching revisions 1566/264178" at about 20 per second on one core, and watching three other cores of this server idle.
[05:30] <AfC> Gonna be a while, this one.
[05:30] <AuroraBorealis_> that's probably because
[05:30] <AuroraBorealis_> it's limited by your download speed
[05:30] <AfC> AuroraBorealis_: it's local
[05:30] <AuroraBorealis_> not because it's single-threaded.
[05:30] <Peng> AfC: And how exactly would you multithread that?
[05:30] <AfC> The system is not in iowait at all.
[05:30] <AuroraBorealis_> I've copied the bazaar source code which is like 37000 revisions
[05:30] <AuroraBorealis_> in seconds...
[05:31] <AfC> It's a quad Xeon. It's not a slow system. Anyway
[05:31] <AfC> Peng: that is, of course, a fascinating question
[05:31] <AfC> Peng: I don't know how much the Python runtime is an obstacle here.
[05:32] <AfC> Peng: but, on e.g. import loads, mapping any one revision would conceivably be independent of mapping any other revision.
[05:32] <AuroraBorealis_> python probably is not the obstacle
[05:32] <AfC> I imagine at some point there is the whole "sequence adjacent revision texts" thing
[05:33] <AuroraBorealis_> branching the bazaar source code which is at 38,000 revisions took less than a minute
[05:33] <AuroraBorealis_> I feel that's acceptable
[05:33] <bob2> shared or non-shared repo?
[05:34] <AfC> AuroraBorealis_: if it's CPU bound, then yes, the Python runtime's global locks will be an obstacle (if threading is the approach one uses for parallelism).
[05:34] <AfC> 6000 down. Only 258 thousand to go.
[05:35] <bob2> well, if it's cpu-bound /and/ parallelisable :)
[05:35] <AfC> bob2: heh
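
A minimal sketch of the fan-out AfC describes (generic Python, not bzr code): since mapping any one revision would be independent of mapping any other, the multiprocessing module can sidestep the interpreter's global lock by using one worker process per core, where threads could not.

    from multiprocessing import Pool

    def map_revision(rev_id):
        # stand-in for CPU-bound per-revision work during an import
        return hash(rev_id)

    if __name__ == '__main__':
        revs = ['rev-%d' % i for i in range(264178)]
        pool = Pool()  # one worker process per core by default
        results = pool.map(map_revision, revs, chunksize=1000)
        pool.close()
        pool.join()
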
[05:35] <AuroraBorealis_> dunno why it's taking so long for you.
[05:35] <AuroraBorealis_> what are you branching?
[05:35] <AfC> Linux
[05:35] <bob2> it's 10x as many revs as bzr
[05:35] <bob2> AuroraBorealis_, one minute to branch into a new unshared repository?
[05:35] <AuroraBorealis_> wait
[05:35] <AuroraBorealis_> I have it locally
[05:35] <AuroraBorealis_> so yes
[05:35] <AuroraBorealis_> well I didn't put it into a repo
[05:36] <AuroraBorealis_> and where are you getting Linux in a bazaar repo?
[05:36] <AuroraBorealis_> it uses git
[05:37] <bob2> lots of people import lots of things
[05:37] <AuroraBorealis_> I mean going from git->bzr sounds like it could take a while
[05:37] <AfC> Interesting data points: getting the kernel from github was 15 minutes. Getting the vcs-import on Launchpad was 1 hour 20 min. I'm assuming that was lp's net I/O fault, but we'll see later today.
[05:39] <bob2> barely relatedly, 1:04 to git clone linux-2.6 with cold cache, 40s with warm
[05:39] <AfC> bob2: that is interesting, thanks
[05:39] <bob2> lp to au is tedious :/
[05:40] <AfC> bob2: yeah. I'm getting tired of hearing all the reasons why Launchpad can't mirror content.
[05:43] <AuroraBorealis_> sounds like a problem with lp rather than bazaar :>
[05:43] <AfC> Just hit the repack at 10k revisions. That was exciting.
[05:45] <AuroraBorealis_> what's the link to clone the kernel?
[05:45] <AuroraBorealis_> kernel.org is...down
[05:46] <AfC> AuroraBorealis_: here
[05:46] <AfC> $ git clone https://github.com/torvalds/linux.git linux
[05:47] <AfC> AuroraBorealis_: then, see https://bugs.launchpad.net/bzr-git/+bug/861973 as you'll hit a bug when you try to bzr branch from the local copy
[05:47] <AuroraBorealis_> of course the git gui sucks
[05:47] <AfC> [the local copy being necessary since currently released bzr-git crashes when talking to github. THAT is fixed in the ppa:bzr/daily (yeay) though this bug's fix isn't there yet]
[05:48] <AuroraBorealis_> again, going through bzr->git seems like it has some problems...
[05:48] <AuroraBorealis_> just sayin
[05:50] <AuroraBorealis_> 'cause not only does bazaar have to support itself, but now it has to flawlessly support every other version control system out there
[05:52] <AuroraBorealis_> also github is sure taking its dear sweet time. sigh, no progress indicators
[05:56] <AuroraBorealis_> yeah. cloning from github does NOT take 1 minute =P
[06:20] <vila> hi all !
[06:21] <AfC> AuroraBorealis_: as I said above, I got it in ~15 minutes.
[06:50]  * fullermd vilas at wave.
[06:52] <vila> fullermd: :)
[06:57] <AuroraBorealis_> well
[06:57] <AuroraBorealis_> the fast-export output for the linux kernel is currently 15 GB
[06:57] <AuroraBorealis_> I might run out of drive space o.o
[07:00] <jam> morning all
[07:01] <vila> jam: _o/
[07:02] <vila> jam: no hang today, you're getting closer, the hang(s?) is scared and hides ;)
[07:03] <fullermd> Maybe it's just hanging him out to dry.
[07:03] <jam> vila: well, in testing, it was hanging around 2pm, and then stopped around 5pm. Apparently it is related to the phase of the moon.
[07:03] <vila> jam: also note that windows has never hung which is a good sign
[07:04] <vila> jam: testing locally you mean ? How ? Otherwise, yes, of course it's related to the moon...
[07:04] <vila> :)
[07:05] <jam> vila: yes, locally
[07:05] <jam> and on devpad
[07:05] <jam> I could reliably get it to hang, and then it started always working...
[07:06] <vila> jam: infuriating
[07:06] <jam> yep
[07:07] <vila> but being able to trigger it more and more is a sure sign you're making progress
[07:07] <vila> don't let it drive you nuts, you'll win in the end, it knows that...
[07:10] <vila> jam: which server is involved in the hang, the pipe or the TCP one ?
[07:34] <jam> vila: tcp, we generally don't use the pipe one in testing.
[07:36] <vila> jam: did you consider using cethread there ? I seem to remember some hangs being caused by uncaught exceptions...
[07:49] <jam> vila: cethread?
[07:49] <jam> CatchingExceptionThread?
[07:49]  * vila nods
[07:49] <jam> I don't think that is this specific problem, but i'm still digging.
[07:49] <vila> bzrlib.cethread
[07:50] <jam> I'm not getting an uncaught exception traceback on the terminal
[07:51] <vila> I think I encountered cases where you don't get tracebacks (can't remember the details, that was... hairy)
[07:54] <mgz> morning all
[07:55] <vila> mgz: _o/
[08:02] <jelmer> 'morning
[08:08] <mgz> jelmer: what's the process for landing things for bzr-builddeb?
[08:08] <mgz> (thanks for reviewing)
[08:09] <jelmer> mgz: once you have approval, you can land them manually by doing one merge per MP into trunk
[08:10] <mgz> hm, I'll need to join the right team
[08:12] <mgz> does that mean bugging james_w?
[08:16] <jelmer> yep
[08:16]  * mgz gets out his butterfly net
[08:29] <poolie> hello mgz, jelmer, europa
[08:29] <mgz> hey poolie!
[08:29] <jelmer> hi poolie
[08:38] <jelmer> hah, netsplits
[08:38] <jelmer> mgz: one of the other bzr-builddeb-hackers should also be able to land it for you until you get commit access
[09:02] <vila> jelmer: _o/
[09:02] <jelmer> jam: hi
[09:02] <jelmer> jam: did you see my follow-up to https://code.launchpad.net/~jelmer/bzr/vf-fileids-altered-by-revisions/+merge/76851 ?
[09:02] <vila> . o O (Using semaphores during net storm sounds appropriate)
[09:02] <jam> jelmer: approved
[09:03] <jelmer> jam: Thanks!
[09:12] <jelmer> vila: wrt https://code.launchpad.net/~jr/bzr/plugin-test-failure/+merge/77569
[09:12] <jelmer> vila: isn't the test for export-pot ?
[09:12] <vila> yes, but applied to a plugin
[09:13] <vila> i.e. it should run if the plugin is available and not run if it's not there, so putting it in the plugin test suite makes sense
[09:13] <vila> now it may also make sense to have another test that we fail gracefully when we try to export-pot an unknown plugin
[09:14] <jelmer> vila: it should run in either case I think; it's a test for 'bzr export-pot', not for bzrlib.plugins.launchpad
[09:14] <jelmer> vila: perhaps rather than using launchpad the test should register a dummy plugin
[09:14] <vila> would work too but that seems more complex than two simpler tests
[09:15] <vila> the dummy plugin stuff is brittle
[09:15] <jelmer> vila: it seems wrong to put a export-pot test in the launchpad plugin testsuite though
[09:15] <vila> really ? Why ?
[09:15] <jelmer> vila: I would never think to look there if I was looking for export-pot tests
[09:15] <vila> haa
[09:15] <mgz> hm.
[09:15] <jelmer> vila: I'd have to check the testsuite of every other plugin too to see if it happens to test export-pot
[09:16] <mgz> I think it's a question of whether this is just a test that any plugin works at all
[09:16] <jelmer> vila: using requiresFeature in the tests seems like the right thing to me
[09:16] <mgz> or whether every plugin that grows translatability will want a test that it works
[09:16] <jelmer> mgz: it is
[09:16] <vila> well, if there are bits specific to plugins in export-pot, it makes sense to run that kind of test for all plugins
[09:16] <mgz> if the former, the current location and a skip seem fine
[09:16] <mgz> if the latter, there should be a testcase class that plugins can subclass and use
[09:17] <vila> mgz: +1
[09:17] <vila> that'll make plugin authors' lives easier to have already-written tests for them
[09:18] <jelmer> mgz: in this case, it's the former
[09:25] <vila> jelmer: and you won't test the unknown plugin case
[09:56] <jelmer> vila: The unknown plugin situation deserves its own testcase
[10:18] <mgz> okay, all caught up on launchpad mails, found some interesting bug reports
[10:40] <Riddell> ModuleAvailableFeature from tests is deprecated, anyone know what replaces it?
[10:42] <mgz> Riddell: from tests.features instead?
[10:42] <jam> Riddell: use it from features
[10:43] <Riddell> right, just a move
[10:43] <jam> bzrlib.tests.features.ModuleAvailableFeature
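
In import form, the move jam describes looks like this:

    # deprecated location
    from bzrlib.tests import ModuleAvailableFeature
    # current location
    from bzrlib.tests.features import ModuleAvailableFeature
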
[11:58] <Noldorin> hi jelmer
[11:59] <jelmer> hi Noldorin
[11:59] <Noldorin> jelmer, how's it going?
[12:00] <Noldorin> jelmer, you seem to disappear each time i poke you hehe
[12:00] <jelmer> alright, how are you ?
[12:00] <Noldorin> pretty good
[12:00] <Noldorin> working on various things; putting bzr-git aside for now
[12:00] <Noldorin> but speaking of which, how's progress? :-)
[12:03] <Noldorin> jelmer,
[12:11] <jelmer> Noldorin: nothing new yet
[12:12] <Noldorin> jelmer, ah okay. any time soon though you think? :-)
[12:12] <jelmer> Noldorin: no idea, sorry
[12:12] <Noldorin> okay
[12:12] <jelmer> it might be very soon, might take a month or so
[12:12] <Noldorin> haha
[12:12] <jelmer> it just depends on what else comes my way
[12:12] <Noldorin> very specific
[12:12] <Noldorin> oh well
[12:12] <Noldorin> fair enough
[12:13] <Noldorin> jelmer, it's okay, I understand. you seem quite overworked, to be fair :-P
[12:13] <Noldorin> shame there isn't someone else helping you out
[12:15] <jelmer> Noldorin: nothing stopping you :)
[12:15] <Noldorin> jelmer, except my lack of knowledge of both python and bzr-git? :-P
[12:15] <jelmer> that's fixable
[12:16] <Noldorin> I mean... the code is well-written, without doubt... but not exactly well commented heh
[12:16] <Noldorin> alas, I don't have the many hours required for such a task
[12:17] <jam> vila: so, I found out why it was hanging, now to figure out what to do next.
[12:17] <jam> Specifically, if any client causes an exception in SmartTCPServer_for_testing it causes the server to stop
[12:18] <jam> which also has the side effect that the next connection sees "oh, I'm stopping"
[12:18] <jam> but doesn't actually close the connection
[12:18] <jam> and because we keep a list of ".clients" the socket stays around and doesn't get garbage collected
[12:18] <jam> so the client just hangs indefinitely
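
A minimal reproduction of the shape of that hang (plain sockets, nothing from bzrlib): the server accepts a connection, keeps a reference to it, and never replies or closes it, so the client blocks in recv().

    import socket

    listener = socket.socket()
    listener.bind(('127.0.0.1', 0))
    listener.listen(1)

    client = socket.create_connection(listener.getsockname())
    conn, _ = listener.accept()  # server keeps the socket alive (like '.clients')...
    client.settimeout(5.0)       # ...but never replies and never closes it
    try:
        client.recv(1)           # without the timeout, this would block forever
    except socket.timeout:
        print 'this is the hang: no data, no EOF'
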
[12:24] <vila> jam: which exception ?
[12:24] <jam> vila: I'm not sure on that part yet, but this fixes the hang: http://paste.ubuntu.com/699711/
[12:25] <jam> I think there was a failure in the previous request (which might eventually bubble up)
[12:25] <jam> but we don't get there, because the next request is denied and just hangs
[12:25] <vila> that's probably the race
[12:26] <jam> vila: right, now we might start seeing test *failures* after this
[12:26] <jam> but we shouldn't see hangs
[12:26] <jam> which I'm happier with :)
[12:27] <vila> hehe, progress !
[12:27] <Noldorin> jelmer, okay, so I'm actually a little bored now. if I can make this fix within an hour or two, maybe it's just worth it :-P
[12:27] <Noldorin> jelmer, that is, if you could provide any specific details on how that function needs to be rewritten... that would help a lot
[12:28] <Noldorin> i know you summarised already, which helps ;-)
[12:29] <jam> vila: and I *think* what is happening is that we get 3-4 connections to the smart server during the test case
[12:29] <jam> and some of those start a connection, and then never make another request
[12:29] <jam> so they eventually timeout as an idle connection
[12:29] <jam> which is actually true
[12:29] <jam> they are dead, but just never called disconnect()
[12:30] <vila> probably related to daemon threads which should instead be joined during tests (at least)
[12:30] <jam> however, if any of those trigger ConnectionTimeout, that is an exception which gets passed to the server and shuts it down
[12:30] <jam> because you don't have the "handle a connection timeout" logic in SmartTCPServer_for_testing that is in SmartTCPServer
[12:30] <jam> we re-implemented the '.serve()' stack
[12:30] <vila> well, there was no timeout when it was implemented
[12:31] <jam> vila: sure, though that also meant we just stayed connected on dead connections forever.
[12:31] <jam> so I think the sequence is
[12:31] <vila> known thread leaks
[12:31] <jam> we make an initial request to the server
[12:31] <jam> and never disconnect
[12:31] <jam> that connection is then 'idle'
[12:31] <jam> if the rest of the test takes longer than 4.0 seconds to finish
[12:32] <vila> when I mentioned the issue, the answer was, we don't want to change the server design because we don't care
[12:32] <jam> the idle thread wakes up and disconnects the client
[12:32] <jam> which has a side effect of shutting down the server as a whole
[12:32] <jam> which causes further requests to get sidelined
[12:32] <jam> causing a hang without the patch, and probably a test failure with it.
[12:32] <vila> so ConnectionTimeout just needs to be added to ignored_exceptions
[12:33] <jam> vila: right, something like that
[12:33] <jam> I'm still poking around there
[12:33] <jam> vila: though not exactly that
[12:33] <jam> because handle_error is the one shutting down the server
[12:33] <jam> not CEThread
[12:33] <jam> CEThread would raise it as an exception during thread.join
[12:34] <jam> but before we get to the point of reaping old connections
[12:34] <jam> we've already shut down the server because one request failed
[12:36] <vila> I think you have been mixing two designs
[12:36] <vila> if ConnectionTimeout is not an error, handle_error should never see it
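
A sketch of what vila is suggesting, using the SocketServer-style names from the discussion (process_request, close_request, stop_server) as stand-ins rather than the real bzrlib test-server code: treat ConnectionTimeout as an expected event that just reaps the idle connection, and make sure real errors disconnect the peer before the server stops.

    class ConnectionTimeout(Exception):
        """Raised when an idle client connection is reaped."""

    ignored_exceptions = (ConnectionTimeout,)

    def handle_request(server, request):
        try:
            server.process_request(request)
        except ignored_exceptions:
            server.close_request(request)  # expected: drop the idle client, keep serving
        except Exception:
            server.close_request(request)  # always unblock the peer...
            server.stop_server()           # ...before shutting down on a real error
            raise
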
[12:37] <jam> vila: I'm not 100% sure about ConnectionTimeout, I am pretty sure about handle_request causing a hang if the server is in shutdown mode.
[12:40] <vila> jam: I don't quite get why a client can connect if the server is shutting down, isn't the listening socket closed as soon as the server shuts down ?
[12:40] <jam> vila: *not* if it is shutting down because of handle_error
[12:40] <jam> vila: If you call "stop_server" then it tries to connect to its own socket to force the server to stop accepting connections
[12:40] <vila> ha right !
[12:40] <jam> but if you call handle_error()
[12:40] <jam> it just sets "self.stopping"
[12:40] <jam> but it is stuck in "self.accept()"
[12:40] <jam> socket.accept()
[12:40] <vila> because only a single client could fail before your patch
[12:40] <jam> so it will accept *1 more connection*
[12:40] <jam> that it won't respond to
[12:41] <jam> vila: yeah probably
[12:41] <vila> well, that's more than probably :)
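
The stop_server trick jam mentions, as a standalone sketch (plain sockets, not the bzr code): a thread blocked in accept() only wakes when a connection arrives, so setting a flag alone is not enough; the server makes one throwaway connection to its own listening socket.

    import socket
    import threading

    listener = socket.socket()
    listener.bind(('127.0.0.1', 0))
    listener.listen(1)
    stopping = threading.Event()

    def serve():
        while not stopping.is_set():
            conn, addr = listener.accept()  # blocks until *some* connection arrives
            conn.close()                    # including the throwaway wake-up one

    t = threading.Thread(target=serve)
    t.start()

    stopping.set()  # the flag alone would never be noticed, because...
    socket.create_connection(listener.getsockname()).close()  # ...accept() must be unblocked
    t.join()
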
[12:41] <jam> oh, another small bug
[12:41] <jam> vila: http://paste.ubuntu.com/699723/
[12:41] <vila> the test runs in the main thread so only one client can run at a time, therefore only one client can fail at a time
[12:41] <jam> can you point out why close_request won't get called?
[12:42] <vila> same constraint
[12:42] <jam> vila: handle_request has a blanket "raise" in it
[12:42] <vila> the server shutdown takes care of closing
[12:43] <vila> where ?
[12:43] <jam> vila: bzrlib.tests.test_server.TestingTCPServerMixin.handle_error
[12:44] <vila> and ?
[12:44] <jam> vila: we get an exception, we 'handle it' by re-raising it, which means you won't call close_request() though the code sure looks like it would
[12:44] <jelmer> Noldorin: I'm not sure if I can really provide much more details without spending a considerable amount of time on it myself
[12:44] <vila> no
[12:45] <jelmer> Noldorin: either way, it would be worth reading up on the bzr and git internals if you want to have a stab at this
[12:45] <jam> vila: TestingTCPServerMixin.handle_request calls process_request in a try/except
[12:45] <vila> the code calls handle_error and then close_request; if an exception is raised in handle_error, close won't be called
[12:45] <Noldorin> jelmer, hmm okay sure. anything i should read beside the general code and bug report itself?
[12:45] <jam> vila: except handle_error calls "raise"
[12:45] <jam> re-raising the existing error
[12:45] <vila> that's what I just said
[12:45] <Noldorin> jelmer, even a rough pseudocode implementation of the new implementation perhaps...? ;-)
[12:45] <vila> jam: and I said previously: <vila> the server shutdown takes care of closing
[12:46] <vila> jam: which was based on the same assumption that a single client can fail at a time
[12:46] <jam> vila: so, it violates the SocketServer paradigm to have handle_error raise an exception. Sure, it may get caught later.
[12:46] <jelmer> Noldorin: the repository structure (how a tree is built up, etc)
[12:46] <jam> but it can also cause hangs
[12:46] <vila> what paradigm ?
[12:46] <jam> because the client is never disconnected
[12:46] <jelmer> Noldorin: sorry, I'm not entirely sure what the pseudocode would have to look like yet either
[12:46] <Noldorin> heh okay
[12:47] <Noldorin> that's fine
[12:47] <jam> so say the client makes a bad request, that triggers an exception on the server
[12:47] <jam> the client then hangs forever waiting for the server to respond
[12:47] <jam> because the server didn't close the connection.
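
A sketch of the control flow jam is objecting to (the shape of Python's SocketServer dispatch, not the bzr code verbatim): if handle_error re-raises, the close_request() on the next line is never reached, so the peer never sees EOF.

    def handle_request(server, request, client_address):
        try:
            server.process_request(request, client_address)
        except Exception:
            server.handle_error(request, client_address)  # a blanket 'raise' in here means...
            server.close_request(request)                 # ...this line is never reached, and
                                                          # the client never sees EOF
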
[12:47] <vila> doesn't happen in tests
[12:47] <jam> vila: we haven't triggered it yet in tests
[12:47] <vila> you can't
[12:48] <vila> unless you introduce a new kind of failure like you did with the timeout
[12:48] <jam> vila: if process_request itself was buggy it would be triggering this
[12:48] <jam> it happens that process_request immediately defers to a thread for the actual processing.
[12:48] <vila> by design
[12:48] <jam> vila: sure, but if there were a bug in that code, it would cause hangs
[12:49] <jam> vila: also you have "t.pending_exception()" so if an exception happens at thread start, it will also cause hangs.
[12:49] <vila> which is why there should be no bug there :)
[12:49] <jam> again, I haven't tracked down where the actual failure is causing the code to then hang, I'm pointing out that the current implementation is likely to hang if there are failures in a few key aspects of the code.
[12:50] <jam> rather than getting a test suite failure that we can debug
[12:50] <vila> well, I suggest focusing on this new failure mode you've introduced first
[12:50] <jam> vila: I have to get an actual failing test first
[12:50] <jam> which involved not having the test suite hang
[12:50] <jam> On windows, because the hang is in a sock.recv() you can't even ^C out of it to get a traceback.
[12:50] <vila> use cethread to check which exception is shutting down the server then
[12:51] <vila> jam: the TCP server on windows ?
[12:51] <jam> vila: socket.recv() is one of those "blocking and cannot be interrupted" calls on Windows
[12:51] <jam> this is actually during "self.make_repository()"
[12:51] <vila> jam: the TCP server on windows ?
[12:51] <jam> vila: the *client* code is blocked
[12:52] <vila> jam: the TCP server on windows ?
[12:52] <Noldorin> jelmer, heh. i guess that 'in progress' is for me? ;-)
[12:52] <jelmer> no, just in general since there's a testcase and some initial work
[12:52] <jam> vila: Repeating yourself doesn't make me understand what you are asking. Vs what?
[12:53] <vila> jam: is that when using the TCP server on windows ?
[12:53] <jam> vila: yes
[12:53] <vila> as opposed to the pipe server, there are only 2
[12:53] <jam> vila: as I said way back when, we generally don't use the pipe server in testing
[12:53] <jam> This is a per_interrepository test.
[12:53] <jam> But per_repository where Repository format is Remote triggers the same code paths
[12:54] <jam> We have a SmartTCPServer_for_testing running, it received a request but decided it didn't want to respond because it was shutting down.
[12:54] <Noldorin> jelmer, ok sure ;-)
[12:54] <jam> The client is then blocked waiting for the server to respond (or disconnect it)
[12:54] <jam> On Windows, the client is blocked so badly you have to kill the process.
[12:54] <jam> On Linux, you can at least do ^C and get a traceback in ~/.bzr.log
[12:55] <vila> yeah, on linux too
[12:55] <vila> not always when a hang happens
[12:55] <jam> vila: for this particular hang, ^C worked fine for me
[12:55] <jam> it is how I pinpointed the hanging part
[12:55] <vila> hmm, lucky you then ;)
[12:57] <vila> jam: so, IIUC, if the client is in a blocking read, you should not shut down the server
[12:57] <vila> jam: or if you do so, you should close the connected socket in a way that will unblock the client
[12:57] <jam> vila: right, I think "closing sockets on shutdown" is the most reasonable process.
[12:58] <jam> and currently the code has a few bugs where during exceptions that will stop the server, it doesn't close the connections
[12:59] <vila> apart from the timeout case, I think the test server is quite well tested in this regard
[12:59] <vila> the only problematic case being the TCP smart server because it didn't track the client connections
[12:59] <jam> an exception during process_request will cause the server to not disconnect the client
[13:00] <jam> that includes process_request_thread because SocketServer was also written as "self.handle_error(); self.close_request(request)"
[13:01] <vila> that was the right thing to do as long as a single client thread could fail
[13:02] <vila> if this is not true anymore for your server, you should probably overload verify_request and handle_error
[13:24] <mgz> hm, not sure if it'd be better to give instructions or just do the changes.
[14:09] <jam> vila: so interestingly enough, you had a test case for "test_server_fails_while_serving_or_stopping", but you had wrapped the client.read with a socket timeout and suppressed the socket error.
[14:09] <jam> It turns out that you don't need the timeout if you close the connection.
[14:09] <jam> however, there is a catch
[14:09] <jam> if you use SocketServer.StreamRequestHandler
[14:09] <jam> it creates an "rfile"
[14:10] <jam> which is *another handle to the socket*
[14:10] <jam> so just closing the 'request' object isn't sufficient
[14:10] <jam> you have to either a) Not use StreamRequestHandler, or b) somehow get at the handler to tell it to close its handle because you had an error.
[14:11] <jam> alternatively, if you don't raise an exception in handle_error, the StreamRequestHandler goes out of scope naturally, and closes its connection
[14:11] <jam> but with an exception, I think it ends up in the traceback, and thus doesn't get gc'd
[14:13] <jam> given that we avoid StreamRequestHandler everywhere else because makefile doesn't play very nicely with other bits (in general)
[14:13] <jam> it would seem reasonable to just avoid it.
[14:13] <vila> and replace all read/write with recv/send ?
[14:14] <jam> there are only one or two read/write calls anyway.
[14:14] <jam> The comment on the timeout was that "the server may not get cycles" but the reality was that the server got the cycles, it just wasn't disconnecting the client.
[14:15] <vila> which comment ?
[14:17] <mgz> yeah, makefile is more trouble than it's worth
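
What the rfile problem looks like in isolation (standard Python socket semantics, not bzr code): makefile() hands out a second reference to the underlying OS socket, so closing the socket object alone does not send EOF to the peer.

    import socket

    server = socket.socket()
    server.bind(('127.0.0.1', 0))
    server.listen(1)
    client = socket.create_connection(server.getsockname())
    conn, _ = server.accept()

    rfile = conn.makefile('rb')   # what StreamRequestHandler does internally
    conn.close()                  # the OS socket stays open: rfile still references it
    rfile.close()                 # only now does the peer get EOF
    assert client.recv(1) == b''  # recv() returns an empty string at EOF
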
[14:20] <AfC> That bzr branch of a local git to local bzr tree that I kicked off earlier is at 8h53m22s elapsed, 245 minutes CPU time. Halfway done.
[14:31] <jelmer> hmm
[14:31]  * jelmer ponders deprecating make_branch_and_tree
[14:32] <AfC> jelmer: oh, hey
[14:32] <AfC> jelmer: I applied poolie's workaround for the tag deletes
[14:32] <AfC> jelmer: and I'm churning through it now.
[14:32] <jelmer> AfC: cool
[14:33] <jelmer> AfC: note that his bug shouldn't crop up if you clone from the remote git branch directly
[14:35] <AfC> jelmer: yeah, I couldn't do that [at the time] because bzr-git choked on github URLs (or, rather, the 405 redirect thereby)
[14:35] <AfC> jelmer: ppa:bzr/daily appears to have fixed that, I discovered subsequently.
[14:35] <jelmer> AfC: git:// should work in older versions of bzr-git, too
[14:35] <jelmer> AfC: but yeah, http/https weren't supported earlier
[14:35] <AfC> jelmer: plus, old habit - I got tired of things crashing :/
[14:36] <jelmer> AfC: yeah, this is all still pretty experimental stuff..
[14:40] <vila> hang on lucid :-/
[14:40] <vila> I was hoping to *not* see that as it means it can happen on pqm too :-/
[14:41] <AfC> vila: I don't want to hear these things!
[14:41] <vila> AfC: sorry, I wasn't talking to you :)
[14:41] <AfC> heh
[14:45]  * AfC adds 8GB of swap to his server in hopes that bzr will not OOM on me
[15:24] <vila> Finally, https://launchpad.net/bzr/+milestones starts to become readable...
[15:47] <jelmer> vila: still there?
[15:47] <jelmer> it looks like any bzrlib usage now breaks if bzrlib isn't explicitly initialized
[15:49] <jelmer> I've filed a bug
[15:51] <vila> ha, I had some doubts about that :-/
[15:52] <vila> jelmer: assign it to me, I ~know what is going on
[15:53] <vila> jelmer: sorry about that :-/
[15:53] <vila> jelmer: how did you encounter it ?
[15:53] <jelmer> vila: I was writing a simple test script when I hit it: http://pastebin.ubuntu.com/699875/
[15:54] <vila> ha. Add 'with bzrlib.initialize():' :-p
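
Roughly the fix vila means, as a sketch (the pastebin script itself has expired; this assumes a script run from inside a branch): wrap library use in the state returned by bzrlib.initialize(), which is usable as a context manager.

    import bzrlib
    from bzrlib.branch import Branch

    with bzrlib.initialize():
        b = Branch.open('.')
        print b.last_revision()
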
[16:11] <SpamapS> Hi guys, I'm trying to use bzr builder to do a dailydeb for a source tree that has a debian/ dir that differs from the packaging branch..
[16:11] <SpamapS> Is there a way to tell nest-part to just do a forcible merge?
[16:17] <vila> jelmer: where did you file this bug ?
[16:24] <AuroraBorealis> well. converting the linux kernel git repo to bazaar has taken 9 hours so far and only 1/6th done haha
[16:25] <AuroraBorealis> no, I lied, it's half done
[16:29] <AuroraBorealis> all this in the name of science!
[16:30] <AuroraBorealis> :O\
[16:30] <science> AuroraBorealis: thanks !
[16:30] <AuroraBorealis> someone here was saying a local branch with a ton of commits took forever
[16:30] <AuroraBorealis> so I'm trying it out
[16:32] <AuroraBorealis> but I think he was trying to branch the git repo, which means it had to convert it, which of course is taking forever
[16:42] <mgz> jelmer: if you get a moment, please do merge my bzr-builddeb branches
[16:43] <mgz> jam: you can review your own fix for bug 856731 if you like
[16:49] <SpamapS> Who would be the best person to ask about build recipes? I am having a hell of a time with one right now. :-/
[16:49] <SpamapS> james_w: ping?
[17:22] <jelmer> mgz: will do
[17:33] <vila> jelmer: still there ?
[17:34] <vila> jelmer: I'm running a full test run before submitting my fix for bug #863401
[17:57] <vindolin> anyone else having problems committing with bazaar explorer? bzr: ERROR: exceptions.TypeError: unsupported type u'Collecting changes [0] - Stage'
[17:58] <AuroraBorealis> does it happen with the command line bazaar?
[17:58] <vindolin> AuroraBorealis: no.. works perfect
[17:59] <AuroraBorealis> hmm
[17:59] <AuroraBorealis> latest version of explorer and all that?
[17:59] <vindolin> 1.2.1
[17:59] <vindolin> and all that :)
[18:00] <vindolin> http://paste.pocoo.org/show/eUq2c1UEqxfo3wHvM2eE/
[18:00] <vindolin> ^ traceback
[18:00] <AuroraBorealis> well you are using bazaar 2.5 :o
[18:05] <vindolin> AuroraBorealis: what's wrong with that?
[18:05] <AuroraBorealis> well the '2.5dev2' kinda speaks for itself =P
[18:06] <AuroraBorealis> are you sure it's not a problem with the development version?
[18:07] <vila> vindolin: check that 'all that' is the tip for all your plugins and if you still encounter the issue: https://bugs.launchpad.net/bzr/+filebug
[18:08] <vila> vindolin: it looks like qbzr is confused by a progress bar output
[18:08] <vila> vindolin: so may be you should report at: https://bugs.launchpad.net/qbzr/+filebug instead
[18:09] <vila> vindolin: unless you played tricks with BZR_PROGRESS_BAR but I fail to imagine which...
[18:12] <vindolin> I have no idea where the dev version comes from.. checking..
[18:14] <vila> vindolin: someone installed it from source, there is no released version with 'dev' in its name, but you're using a bzr installed in /usr/lib/python2.7/dist-packages/ nevertheless
[18:15] <vindolin> ok.. apt-get removed everything bzr.. reinstalled.. works :/
[18:16] <vindolin> bzr is now 2.4.1.. strange
[18:16] <vila> vindolin: meh, I'm glad for you, let us know what happened if you ever find out
[18:16] <vila> It's still a weird behavior even running from sources
[18:17] <vindolin> ok thanks everyone
[19:44] <jeff_> hello people, have a silly newbie question.
[19:45] <jeff_> can't seem to find the answer in the docs for bzr: I just did bzr add, not realising that my .bzrignore filter was not on this machine. How do I tell bzr to forget my massively incorrect add operation, without any changes to my files?
[20:03] <lifeless> vila: hi
[20:03] <lifeless> vila: I thought initialize was mandatory for about 6 months now; surely the break has already been done?
[20:04] <vila> lifeless: I thought so too but as you can see, that's not the case
[20:04] <lifeless> vila: jelmers bug doesn't indicate that
[20:04] <lifeless> vila: 'fetch.py' might be years old
[20:05] <lifeless> vila: poolies 'new' deprecation thing says its ok to break apis across major releases
[20:05] <vila> lifeless: what triggered this bug is that the config now needs initialize to be called or bzrlib.global_state is None
[20:05] <lifeless> vila: sounds fine to me :)
[20:05] <lifeless> vila: I'm just saying, if its cleaner not implicitly initializing (it is I think), then you've already made that transition.
[20:06] <vila> lifeless: our own generate_docs.py wasn't calling it
[20:06] <vila> no, it doesn't initialize implicitly
[20:06] <lifeless> vila: actually, did you just say you're using bzrlib.global_state from config?
[20:06] <lifeless> thats a bug.
[20:06] <lifeless> You should be passing the library state *into the config system*
[20:07] <lifeless> nothing new should ever be accessing bzrlib.global_state
[20:07] <vila> from where ?
[20:07] <vila> yeah, there are so many constraints on it that nobody can use it
[20:08] <vila> without violating at least one
[20:08] <lifeless> I'm not sure what you mean
[20:08] <lifeless> the thing was a centralisation of existing globals spread out through the code base
[20:09] <vila> how should the config system acquire the library state ?
[20:09] <lifeless> it should be passed in
[20:09] <vila> by who ?
[20:09] <lifeless> callers
[20:09] <vila> you want it to be a parameter to all bzrlib calls ?
[20:09] <lifeless> or held as an instance variable
[20:10] <vila> initialized when ?
[20:10] <lifeless> when you create the object
[20:10] <vila> in bzrlib.initialize ?
[20:11] <lifeless> why would bzrlib.initialize need to know about configs?
[20:11] <vila> which object ? the config ? From where... ELOOP
[20:11] <lifeless> let me phrase it differently. *a* goal is to be properly isolated in tests from the test runner, which also uses a BzrLibraryState
[20:12] <lifeless> uses of 'bzrlib.global_state' are in violation of that goal, though we have workarounds today.
[20:12] <vila> not the bzr one :-/
[20:12] <vila> what workarounds ?
[20:13] <vila> either the state is passed all over the place or you need to access it
[20:14] <lifeless> indeed bzrlib/tests/__init__ is missing a push-pop of the library state in each test.
[20:14] <lifeless> that will be contributing to test isolation issues.
[20:14] <lifeless> (I haven't pulled recently, you may have worked around it)
[20:14] <lifeless> yes, thats right - the state needs to be passed around when creating objects that need access to the state.
[20:15] <vila> since branches need a config, all branch operations should pass the state then
[20:15] <lifeless> note the BIG comments in bzrlib/__init__.py
[20:15] <lifeless> # If using this variable by looking it up (because it can't be easily obtained)
[20:15] <lifeless> # it is important to store the reference you get, rather than looking it up
[20:15] <lifeless> # repeatedly; that way your code will behave properly in the bzrlib test suite
[20:15] <lifeless> # and from programs that do use multiple library contexts.
[20:15] <vila> and since branches can be obtained from wt, same goes for all wt operations, etc
[20:15] <lifeless> vila: that's bogus
[20:15] <lifeless> pass it to __init__. Once.
[20:15] <lifeless> fin, end of story, done
[20:16] <vila> which __init__ ?
[20:16] <lifeless> THE OBJECT INIT
[20:16] <lifeless> sorry, I have to go, EBABY etc.
[20:16] <vila> oh come on, where did the caller get it from ?
[20:16] <lifeless> vila: the ultimate caller gets it from bzrlib.initialize()
[20:17] <lifeless> they pass it to Branch.open() which passes it to the constructor of the Branch. All ops on the branch use self.library_state.
[20:17] <lifeless> for instance.
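
A purely hypothetical sketch of the pattern lifeless is describing (these classes are stand-ins, not bzrlib's real ones): the ultimate caller obtains the state once, factories thread it through, and every object reads configuration from the state it was constructed with instead of looking up bzrlib.global_state.

    class LibraryState(object):
        """Stand-in for the object bzrlib.initialize() would return."""
        def __init__(self, config):
            self.config = config

    class Branch(object):
        def __init__(self, library_state):
            self.library_state = library_state  # stored once at construction

        @classmethod
        def open(cls, url, library_state):
            return cls(library_state)

        def get_config(self):
            # every operation reads from the state it was built with,
            # never from a module-level global
            return self.library_state.config

    state = LibraryState(config={'email': 'Jane <jane@example.com>'})
    b = Branch.open('file:///srv/branch', state)
    assert b.get_config() is state.config
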
[20:18] <vila> that still means a lot of code paths need to carry it
[20:18] <lifeless> I *have* to go, sorry. This seems very straight forward to me.
[20:19] <vila> easier said than done, I fully understand *how* it can be done, I just don't think adding such a parameter to a bunch of signatures will outweigh the benefits.
[20:20] <vila> and I probably meant the losses :)
[20:23] <lifeless> in terms of overheads, it's at most 1*number-of-classes + the call stacks for key factory functions like Branch.open
[20:23]  * lifeless is gone
[22:15] <lifeless> vila: and back but I suspect you are sleeping now :)