[00:02] <maxb> svz90: Easiest thing to do would be to configure it in your OpenSSH .ssh/config file
[00:04] <svz90> maxb: Thanks.
[05:32] <pattern> i've just done a "bzr add foo", but before i committed the change i changed my mind and i no longer want to add "foo"
[05:32] <pattern> how can i tell bzr i don't want to add "foo" anymore?
[07:31] <bob2> bzr rm foo
[07:31] <bob2> with --keep if you want the file left on disk
[14:06] <renatosilva> english question: in a commit message like "reverted changeset xyz", reverted is past participle, not simple past, right? that is, changeset was reverted, not I reverted it.
[14:09] <Meths> As a commit message it would make more sense to say "Reverts changeset xyz", as the message should say what the changeset does.  But in the above either works really; not sure how a grammar perfectionist would interpret it though.
[14:24] <fullermd> I'd go with "revert changeset" myself.  Imperative FTW.
[14:28] <renatosilva> can it be read, is it usually read, as past participle?
[14:29] <renatosilva> is it "undone commit 123.", "commit 123 undone.", or "undid commit 123."?
[14:37] <Meths> 3rd IMO
[14:51] <renatosilva> there's one guy saying 1st could be read as "the commit has undone commit 123"
[14:52] <renatosilva> and other guys saying "commit 123 undone" ("commit 123 has been undone") is ok, even though 3rd one is more usual
[15:35] <fullermd> vila: Around?
[16:01] <vila> fullermd: briefly, may I help ?
[16:02] <fullermd> I wondered if you've ever seen oddities of selftest on your FreeBSD vm with it locking itself up waiting on a semaphore.
[16:03] <vila> this doesn't ring any bell, FreeBSD (the babune slave) is one of the most stable (if not *the* most)
[16:03] <vila> fullermd: is it something you've seen with explicit output, or only as weird behavior?
[16:04] <fullermd> Well, for instance, http://babune.ladeuil.net:24842/job/selftest-freebsd/lastCompletedBuild/testReport/bzrlib.tests.blackbox.test_branch/TestSmartServerBranching/test_branch_from_trivial_branch_streaming_acceptance/?
[16:04] <fullermd> That one.
[16:04] <vila> while being stable it's also quite slow (compared to the others), but I don't have a good explanation for this
[16:04] <fullermd> If I run it via 'bzr selftest branch_streaming_acceptance', it runs fine.
[16:05] <fullermd> If I just hit it with a plain 'bzr selftest', it just stops dead.  python had 'usem' as a wchan in ps/top.  I've let it sit for 4 or 5 minutes, and it gets nowhere.
[16:05] <vila> haaa interesting
[16:05] <fullermd> (runs in <3 seconds standalone)
[16:05] <fullermd> I've tried running a full selftest a half dozen times, and I keep finding new places further on that it locks down.
[16:05] <fullermd> Except last try around, where I found another place _early_ on that it started locking.  Gruuh.
[16:06] <vila> so, IIRC, that's a test which runs both a client and a server in the same process
[16:06]  * fullermd nods.
[16:06] <fullermd> All the ones that lock are of that ilk AFAICS.
[16:07] <vila> I've been suspecting a weird interaction between python and the BSDs (including OSX), and other OSes too (though quite differently), but I've never been able to diagnose it precisely enough
[16:07] <vila> weird interaction around sockets
[16:08] <vila> some double handling of socket state (especially when both sides of the socket are used in the same process)
[16:08] <vila> but that's only a theory so far... and pretty weak :-/
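[Editor's note: vila's theory concerns the pattern these tests use, where both sides of a TCP connection live in one process. A minimal sketch of that pattern, in plain Python sockets and threads rather than bzrlib code:]

```python
import socket
import threading

# Minimal sketch (plain sockets, not bzrlib code) of the pattern under
# suspicion: both ends of one TCP connection live in the same process,
# with accept() running on a background thread while the main thread
# plays the client.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(('127.0.0.1', 0))       # port 0: let the OS pick a free port
listener.listen(1)

received = []

def serve():
    conn, addr = listener.accept()    # parks the thread until a client arrives
    received.append(conn.recv(4))
    conn.close()

server_thread = threading.Thread(target=serve)
server_thread.start()

client = socket.create_connection(listener.getsockname())
client.sendall(b'ping')
server_thread.join()
client.close()
listener.close()
print(received)  # [b'ping']
```

[This runs fine in isolation; the hang under discussion only shows up when the accept() side never gets woken, as the rest of the log explores.]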
[16:09] <fullermd> Mmm.  And you're using --parallel and somehow don't hit it?
[16:09] <vila> I do use --parallel yes
[16:09] <vila> well babune that is
[16:09] <fullermd> Annoyingly it seems like sometimes it sneaks past it fine.  Weird timing stuff; always works when it's part of a short list of tests being run.
[16:10] <vila> ha, that rings a bell
[16:10] <fullermd> Would make it easy enough to work around by running a couple times, if it didn't happen 5 or 10 minutes into a freakin' test run...
[16:10]  * fullermd is now trying `./bzr selftest -x branch_streaming_acceptance -x test_create_clone_on_transport_use_existing_dir -x RemoteBranch -x RemoteBzr`.  Furrfu.
[16:10] <vila> my suspicion began when there was a bunch of sockets waiting to die which seemed to slow the overall run
[16:11] <vila> which led me to suspect some select() call
[16:12] <vila> a bit like python relying on the OS to sort things out while... somehow checking until it was happy, with sleep()s intermixed... as you can see, nothing very concrete
[16:13]  * vila backtracks
[16:13] <vila> fullermd: you encounter hangs ?
[16:13] <fullermd> Except for me, python ends up waiting on a semaphore of some sort, so it never checks again.
[16:13] <fullermd> Well...  halting problem.  It COULD just be being very very slow, for 4 or 5 minutes, using no CPU and never waking up.
[16:13] <fullermd> But it sure smells like...  oh, look.
[16:14]  * fullermd tries a-fscking-gain...
[16:14]  * vila holds his breath...
[16:14] <vila> look what ?
[16:14] <vila> :D
[16:15] <fullermd> Oh, it locked itself again a thousand or so tests in.
[16:15] <vila> what's the flag... -Dthreads -Ethreads ?
[16:16] <vila> -Ethreads
[16:17] <vila> It was quite useful when debugging the leaks, but I think it was a bit too intrusive and led to some failures (changing the output that some tests are checking), so be careful when encountering failures
[16:18] <fullermd> Well, we're 1600-some in to this attempt...
[16:18] <fullermd> Aaaand, it does.
[16:19] <vila> oh yes, lsof !
[16:20] <vila> lsof was also useful combined with -Ethreads, whose output gives the sockets referenced by tests
[16:20] <vila> both client and server side
[16:20] <fullermd> % fstat -p59868 | grep -c tcp
[16:20] <fullermd> 212
[16:20] <fullermd> That's a couple TCP sockets open...
[16:20] <vila> that's bad
[16:21] <vila> so, the other factor here is paramiko which leaves pending sockets
[16:21] <fullermd> Not a factor here.  I've had paramiko uninstalled for, like, a year and a half.
[16:21] <vila> but they can't be easily collected as they are internal to pa...
[16:21] <vila> damn
[16:21] <vila> one more theory dead
[16:22]  * fullermd , slayer of theories!
[16:24] <fullermd> When you say -Ethreads, does that mean some sort of python internal threads, or does it use OS-level threads?
[16:25] <vila> python threads
[16:25] <vila> -Ethreads outputs debug statements at various "interesting" points
[16:26] <fullermd> There are 3 threads in the hung process.  One seems to be sitting in the 'accept' wchan, which I WAG may mean it's sitting in accept(2)...
[16:26]  * vila hates vbox a bit more every day especially when killing one VM sometimes kill *all* the running VMs
[16:26] <vila> WAG ?
[16:26] <fullermd> Wild-Ass Guess
[16:26] <vila> :)
[16:27] <vila> one thread for the accept(), one thread for the serving, one thread for the client (usually the main one)
[16:29] <vila> and by the way, the selftest is running with BZR_CONCURRENCY=4 and the VM has 2 processors configured
[16:32] <fullermd> Well, I'm not doing any --parallel'ing.
[16:32] <vila> I got that, just mentioning
[16:32]  * fullermd nods.
[16:33] <vila> I'm running a straight selftest right now
[16:34] <fullermd> Oh, hey, it locked up only 197 in this time.
[16:35] <vila> with -Ethreads ?
[16:35] <fullermd> Only 16 TCP sockets open this time.
[16:35] <fullermd> Trying that out now.
[16:36] <vila> this... is surprising, I was pretty sure we collect almost all sockets now, except for the paramiko ones (which aren't relevant here)
[16:36] <vila> you're running bzr.dev there right ?
[16:36] <fullermd> Well, locked up, but I guess I should use -v too so's to have some idea where it is...
[16:37] <fullermd> Well, technically, this is bzr.dev+someotherchanges.  But nothing that would mess with network.
[16:38] <fullermd> Well, it dumps a pile of information, but I don't see that it's much use...
[16:38] <vila> 2700 tests and running and fstat reports only 'internet stream tcp' lines, that's sockets right ?
[16:38] <fullermd> Yah.
[16:38] <fullermd> 'sockstat | grep python' should tell you what they are (well, unless you've got a lot of other stuff python running; then grep for the pid)
[16:38] <vila> it helps make sure the sockets are for past tests, not the current ones, and it also helps in finding which tests are leaking
[16:40]  * fullermd frowns.
[16:40] <fullermd> Hm.
[16:40] <fullermd> I'm trying a selftest --no-plugins now.  That seems to clean up after itself pretty fast; I'm in the 500's now and nothing hanging around...
[16:41] <vila> right, ~4000 here and still no socket leaks (only 3 displayed)
[16:41] <fullermd> Oh, well, that was a good theory while it lasted.  Locked up.
[16:42] <vila> out of curiosity, what changes do you keep there ? Are you sure they aren't some gems worth proposing ? ;D
[16:42] <fullermd> Well, I WANT to propose them.  That's why I want selftest to run   :p
[16:42] <fullermd> Here's an interesting thing:
[16:42] <fullermd> USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
[16:42] <vila> ha great :)
[16:42] <fullermd> fullermd python     60430 4  tcp4   127.0.0.1:39758       *:*
[16:42] <fullermd> fullermd python     60430 5  tcp4   127.0.0.1:39758       127.0.0.1:16483
[16:43] <fullermd> That's presumably the socket of the current test.  One of the threads is waiting in accept.  The socket is listening.  But there's also a connection made to it.
[16:44] <fullermd> That's on a test_pull_smart_stacked_[something]
[16:44] <vila> yes, could be the client, the accept() one, or the serving one; -Ethreads should help there, no?
[16:44] <vila> also, you should certainly see the mirrored one most of the time, unless you happened to list them while they were being killed?
[16:45] <fullermd> OK...
[16:45] <fullermd> ...nching.test_branch_from_trivial_branch_streaming_acceptanceServer thread ('127.0.0.1', 31415) started
[16:45] <fullermd> Client thread ('127.0.0.1', 10740) -> ('127.0.0.1', 31415) started
[16:45] <fullermd> fullermd python     60607 4  tcp4   127.0.0.1:31415       *:*
[16:45] <fullermd> fullermd python     60607 5  tcp4   127.0.0.1:31415       127.0.0.1:10740
[16:45] <fullermd> fullermd python     60607 6  tcp4   127.0.0.1:10740       127.0.0.1:31415
[16:46] <vila> 7000 and fstat is still clean as a baby, your 212 number above is still very weird
[16:46] <fullermd> Only tcp sockets open for the process.
[16:46] <fullermd> So it's got both sides of the connection open.  But one thread is still sitting in accept.
[16:46] <vila> sounds fine
[16:47] <fullermd> And it's dead there.
[16:47] <vila> there is one thread waiting in the accept, spawning a thread for serving each connection and accept'ing again
[16:47] <vila> dead and staying dead right now ?
[16:47] <vila> keep it alive !!!!
[16:48] <fullermd> Well, it's been a couple minutes...
[16:48] <vila> seriously,
[16:48] <vila> this is the dirty bit when shutting down a test server
[16:48] <vila> what's the test name ?
[16:49] <fullermd> ...nching.test_branch_from_trivial_branch_streaming_acceptance
[16:49] <vila> let me see
[16:49] <fullermd> In this case.  I don't think it matters though; it's semi-random.
[16:49]  * fullermd just fired off a selftest in a virgin bzr.dev, just for kicks.
[16:50] <vila> could be, but I want to look at what kind of test server is involved and show you the relevant code so you may have your own take on it
[16:50] <vila> (not asking you to debug it, but to look at the code in a context where it seems to be failing)
[16:50] <fullermd> Man, the relevant code is python.  My take will be "Hey, I can just rewrite that in perl..."   :p
[16:51] <vila> hehe, no I mean as socket code, whatever the language, we're almost doing C there :)
[16:52] <fullermd> Remember, every test it's halted on so far works just peachy if I run it alone.
[16:52] <vila> ok, so that's a TestCaseWithTransport so it should be an http server
[16:52] <fullermd> Or in a small group.  It's only when I run a big enough group (e.g., a full selftest) that it semi-arbitrarily picks a time to lock itself up.
[16:52] <vila> I've seen very very very weird failures when chasing the leaks
[16:52] <vila> exactly
[16:52] <fullermd> Oh, there went virgin bzr.dev.
[16:52] <fullermd> | [1516/24890 in 3m27s, 1 failed] per_branch.test_branch.TestBranch.test_comm..
[16:54] <vila> bah, showing you the code is a bad idea, too many classes involved
[16:54]  * fullermd has no class.
[16:55] <vila> ha, no, have a look at bt.test_server.TestingTCPServerInAThread.stop_server
[16:56] <vila> the dirty bit I was referring to is defined there: the server is blocked in an accept call, so we 1) tell it to stop accepting connections (after getting out of its current call), 2) give it a dummy connection
[16:57] <vila> there is also a if debug_threads():... call that you could copy/adapt/add in various points
[16:57] <vila> I suspect the case you're encountering is in this area but exhibits unexpected behavior
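[Editor's note: the shutdown trick vila describes in stop_server can be sketched like so. This is a simplified stand-in, not the actual bt.test_server.TestingTCPServerInAThread code: one thread blocks in accept(), hands each connection to a serving thread, and stop() wakes the blocked accept() with a throwaway connection.]

```python
import socket
import threading

# Sketch (not the real TestingTCPServerInAThread) of the shutdown trick:
# the acceptor thread is parked in accept(), so stop() first flips a flag,
# then makes a dummy connection purely to wake accept() up so the thread
# can notice the flag and exit.
class SketchServer:
    def __init__(self):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind(('127.0.0.1', 0))
        self.sock.listen(5)
        self.serving = True
        self.handled = []
        self.first_request = threading.Event()

    def accept_loop(self):
        while True:
            conn, _ = self.sock.accept()      # blocks between clients
            if not self.serving:              # woken by the dummy connection
                conn.close()
                return
            threading.Thread(target=self.handle, args=(conn,)).start()

    def handle(self, conn):
        self.handled.append(conn.recv(4))
        conn.close()
        self.first_request.set()

    def stop(self):
        self.serving = False
        # the "dirty bit": a throwaway connection just to unblock accept()
        socket.create_connection(self.sock.getsockname()).close()

server = SketchServer()
acceptor = threading.Thread(target=server.accept_loop)
acceptor.start()

client = socket.create_connection(server.sock.getsockname())
client.sendall(b'ping')
client.close()
server.first_request.wait(5)   # make sure the real request was served first
server.stop()
acceptor.join()
server.sock.close()
print(server.handled)  # [b'ping']
```

[The hang fullermd sees would correspond to accept() never returning even though a connection exists, i.e. the wake-up step failing somewhere in this dance.]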
[16:58] <vila> 14000 tests and still clean
[16:59] <vila> hmm, IPV6 ?
[17:00] <fullermd> All those connections are over 127.0.0.1
[17:00] <vila> naah, you're running py2.6 right? and anyway, I'm pretty sure I've used a variation of the right python code
[17:00] <vila> we've seen weird things even when using only ipv4, as long as the host is configured for ipv6 (paramiko at least has a bug for this kind of config)
[17:01] <vila> most of the time we force ipv4 by using 127.0.0.1 but some tests may still use localhost and have gone unnoticed
[17:02] <vila> we have some fugitives like that here and there ;D
[17:02] <fullermd> Just switch it to force ipv6 by using ::1 instead.  If their system doesn't support v6 yet, set it on fire; better for everyone that way   :p
[17:04] <vila> hehe
[17:05] <vila> 18000 and still clean, babune:~ :) $ fstat -p1004 |wc -l
[17:05] <vila>       12
[17:06] <vila> but if you encounter the problem with bzr.dev and --no-plugins that should rule your proposal out :)
[17:08] <vila> 20000, rising briefly to ~40 (paramiko), back at 12
[17:08] <fullermd> It doesn't seem to [generally] do any notable accumulation of stuff over time.  Most of the locks have no sockets except the current ones.
[17:09] <vila> but when it blocks it's on a socket right ?
[17:09] <vila> and one in the accept() state ? (with no foreign address ?)
[17:10] <fullermd> Yah.
[17:10] <vila> or are you unsure about that ?
[17:10] <fullermd> Well, I'm not _sure_ insofar as I've dug into the stack.  But that's what everything looks like.
[17:11] <vila> so, I encountered weird things when trying the kill-the-socket-with-shutdown approach IIRC
[17:12] <vila> I tried various ways with mixed results until I ended up giving it a dummy connection and even there I think I tried various tricks before settling on a simple close()
[17:13] <vila> that's why I suggested adding some sys.stderr.write in this area but if you're blocked on a case where there is no foreign address...
[17:13] <vila> it kind of means that this last_conn didn't succeed ?
[17:14] <vila> or did you not even reach the point where 'Server thread %s will be joined'?  Or is this message not flushed? <shudder>
[17:16] <fullermd> I don't recall ever seeing joins on the locks.
[17:16] <vila> oook
[17:16] <fullermd> See the one I pasted ~half hour ago.  The "Client thread started" is the last thing that shows up.
[17:17] <fullermd> As if the server thread sat in accept() waiting for the client thread (which the OS sees as connected), and never got out of accept().
[17:17] <vila> that part is expected, the server thread has spawned another thread
[17:18] <vila> what isn't expected is that this spawned thread can't finish (or its related client thread)
[17:18]  * fullermd sighs.
[17:18] <vila> the client thread being the main thread
[17:18] <fullermd> It sucks trying to figure out why changes were made when you can't ask the maker   :|
[17:18] <vila> so, this kind of hang is the hardest
[17:18] <vila> which changes ?
[17:19] <fullermd> The ones I'm working on cleaning up, which led me to trying selftest in the first place.
[17:19] <vila> which changes ? :D
[17:20] <fullermd> upgrade enhancements.
[17:20] <vila> ha :-/
[17:21] <vila> err, maybe I misinterpreted here, you mean in upgrade.py ?
[17:21] <fullermd> Yah.
[17:22] <vila> qblame mentions only living people, so you should be able to reach them no ?
[17:22] <fullermd> Yes, that would be for the existing merged code, not the outstanding unmerged.
[17:23] <fullermd> Which is igc   :(
[17:23] <vila> ha, I didn't misinterpret :-/
[17:23] <vila> you're digging an old mp ?
[17:23] <fullermd> https://code.launchpad.net/~bzr/bzr/smooth-upgrades/+merge/8921
[17:26]  * vila reading
[17:27] <vila> by the way, the selftest succeeded here with only: bzrlib.tests.test_smart_transport.TestServerHooks.test_server_started_hook_memory is leaking threads among 2 leaking tests.
[17:27] <vila> 1 non-main threads were left active in the end.
[17:32] <vila> hmm, lots of stuff there ;-/ Nice to see you working on it, I should leave now, but I'll be pp next week and will look at anything you'll submit, so feel free to begin with high level questions or even re-start the discussion on the ML
[17:33] <vila> --pack should be useless, --cleanup I think has been implemented elsewhere, so it's worth having a look at that while it still makes sense to propose it there (I went with a bunch of upgrades lately and could have used it)
[17:36] <fullermd> You're thinking of pack --clean-obsolete-packs I think.
[17:42] <jelmer> fullermd: that's an interesting mp
[17:44] <fullermd> Yeah, it should be a real nice addition to 2.0.0   :|
[17:50] <jelmer> :-P
[17:56] <fullermd> 's one of the things whose continued absence pisses me off.  I don't really have time or inclination to work on it, but I have even less of both to keep not having it.  Sigh.
[19:23] <fullermd> vila: Just dropped what I could do on it.  Enjoy your piloting   :)
[19:26] <nonix4> if revision x in branch a adds file c, and revision y in branch b adds file d, where d is actually descendant of c... is there a simple way to tell that to bzr after both commits have been made? (both mentioned revisions containing other unrelated changes as well)
[19:27] <fullermd> I'm not sure what you mean by "descendant".  But the answer will be "no".  Once you've made a revision in the current World Order, it's too late to go back and tell bzr more about it.
[19:30] <nonix4> except for uncommit?
[19:31] <fullermd> Well, semantics.  Even uncommit doesn't change a revision, it just pops things off the top of the history and throws them away.
[19:34] <fullermd> There is occasional discussion about ways to _annotate_ information after the fact (specifically to this case, things like equivalences between separate files).  But it's all speculative and far-future, which doesn't help anything now.
[19:47] <nonix4> well... how about cherry-picking creation of one file from a distant branch?
[19:47] <fullermd> You could do that.  Being a cherry-pick, it wouldn't record any of the revision info.  But it would use the same file-id.
[21:38] <mgz> ah, was going to ask about the 'local oddities' fullermd ran into with the test suite, but I see it's in the history so I'll just read up.
[21:41] <mgz> I'd be interested if lp:~gz/bzr/cleanup_testcases_by_collection_613247 helped though; I recall freebsd has some lower default resource limits than other *nixes, so leaks could hurt
[21:45] <lifeless> hi mgz
[21:46] <lifeless> mgz: if you're interested in testrepository, I would love to know if trunk still works on win32
[21:49] <mgz> hm, looks more like it's just the client-server tests being dodgy still, from the log
[21:49] <mgz> lifeless: I'll pull and test.
[21:51] <lifeless> thanks!
[21:51] <lifeless> mgz: its specifically 'testr run' I'm hacking on
[21:52] <lifeless> mgz: I've added a SIGPIPE fixup for unix
[21:52] <lifeless> but that may bork win32 subprocess invocations
[21:52] <mgz> hm, some kind of import issue
[21:52] <mgz> I wish the framework didn't hide that stuff
[21:52] <lifeless> ?
[21:52] <mgz> ...  File "C:\Python24\lib\unittest.py", line 532, in loadTestsFromName
[21:52] <mgz>     parent, obj = obj, getattr(obj, part)
[21:52] <mgz> AttributeError: 'module' object has no attribute 'tests'
[21:52] <mgz> not useful.
[21:52] <lifeless> ah yes
[21:52] <lifeless> I filed a bug on python
[21:52] <lifeless> and there is one on testtools too
[21:52] <lifeless> workaround
[21:53] <lifeless> python
[21:53] <lifeless> import testrepository
[21:53] <mgz> yup.
[21:53] <lifeless> import testrepository.tests
[21:53] <lifeless> ...
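[Editor's note: the workaround lifeless spells out works because a package only exposes a submodule as an attribute once that submodule has been imported, and older unittest loaders resolved dotted test names with getattr(). A generic demonstration, using the stdlib's xml/xml.dom as stand-ins for testrepository/testrepository.tests:]

```python
import sys

# A package does not expose a submodule as an attribute until that
# submodule is imported; old unittest TestLoader.loadTestsFromName
# walked dotted names with getattr(), hence the AttributeError above.
# 'xml'/'xml.dom' stand in for testrepository/testrepository.tests.
import xml
sys.modules.pop('xml.dom', None)          # ensure a clean slate for the demo
if hasattr(xml, 'dom'):
    delattr(xml, 'dom')

before = hasattr(xml, 'dom')              # not imported yet
import xml.dom
after = hasattr(xml, 'dom')               # importing binds the attribute
print(before, after)  # False True
```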
[21:54]  * lifeless wags on testrepository.ui.cli
[21:54] <mgz> what are the dependencies, explicitly?
[21:55] <mgz> I'm failing to find them documented anywhere.
[21:55] <lifeless> cat INSTALL.txt
[21:55] <mgz> doh.
[21:55] <mgz> okay, I'll fix up my python path and go again.
[21:55] <lifeless> the autotools have kind of devalued that file
[21:55] <lifeless> which is a bit sad
[22:01] <mgz> ...
[22:01] <mgz>   File "C:\bzr\testscenarios\lib\testscenarios\scenarios.py", line 26, in ?
[22:01] <mgz>     from itertools import (
[22:01] <mgz> ImportError: cannot import name product
[22:01] <mgz> will fix that and see what else I hit.
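[Editor's note: the ImportError is because itertools.product only appeared in Python 2.6, and mgz is running 2.4. One way to fix it is a guarded module-level fallback along these lines, equivalent to the pure-Python version in the itertools documentation:]

```python
try:
    from itertools import product
except ImportError:
    # itertools.product was added in Python 2.6; pure-Python equivalent
    # (from the itertools docs) for older interpreters.
    def product(*iterables):
        pools = [tuple(pool) for pool in iterables]
        result = [[]]
        for pool in pools:
            result = [x + [y] for x in result for y in pool]
        for prod in result:
            yield tuple(prod)

print(list(product('ab', [1, 2])))
# [('a', 1), ('a', 2), ('b', 1), ('b', 2)]
```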
[22:18] <mgz> so, your sigpipe fixup also wasn't valid 2.4 syntax, but that and a few other things look like easy fixes.
[22:18] <mgz> 17 failures, a bunch look quoting related.
[22:19] <lifeless> heh
[22:19] <mgz> some rename-over-existing-file ones.
[22:19] <lifeless> feel like filing bugs?
[22:19] <mgz> only if I can't fix 'em.
[22:19] <lifeless> cool
[22:20] <lifeless> well I'm mid rewrite of run to do parallel test invocation
[22:24] <mgz> hm, so, could either fix the quoting in commands.run, or make it use a subprocess pipe rather than a shell pipe. you got any strong feelings there?
[22:25] <lifeless> thats one of the things I'm mid rewrite on
[22:25] <mgz> okay, will leave that one for the mo.
[22:25] <mgz> a test with a tempdir that needs quoting would be worth adding though.
[22:25] <lifeless> the shell=True is for shell expansion
[22:26] <lifeless> mgz: please do
[22:32] <fullermd> mgz: Yeah, it's not a resource limit.  More something squirrelly with the sockets.  Race or sumfin'.
[23:38] <mgz> wow python-dev has had a lot of long pointless threads of late.
[23:39] <lifeless> yeah
[23:39] <lifeless> win
[23:39] <mgz> lifeless: I hate DocTestMatches, it's awful. Down to six failures though, four of which are quoting related.
[23:40] <lifeless> mgz: :(
[23:40] <lifeless> mgz: I'd welcome a more powerful full text matcher
[23:40] <mgz> fullermd: I'd be interested if you gave that branch a run anyway, though it probably won't help. I'll do yours in turn to check you've not regressed anything.
[23:41] <mgz> lifeless:
[23:41] <mgz> FAIL: testrepository.tests.test_testr.TestExecuted.test_runs_and_returns_run_argv_no_args
[23:41] <mgz> AssertionError: Match failed. Matchee: "True True True
[23:41] <mgz> ['b:\\temp\\tmprbtwer\\testr']
[23:41] <mgz> "
[23:41] <mgz> Matcher: DocTestMatches('True True True\n[...]\n', flags=8)
[23:41] <mgz> Difference: Expected:
[23:42] <mgz> I don't see why that's a mismatch, and the output sucks.
[23:42] <mgz> it possibly has an extra newline off the end? who knows.
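[Editor's note: testtools' DocTestMatches with flags=8 is doctest matching with doctest.ELLIPSIS (which equals 8), so the comparison itself is doctest's, and a trailing-newline difference really is enough to fail it. That can be checked with doctest.OutputChecker directly; the strings here are simplified stand-ins for the failure pasted above:]

```python
import doctest

# DocTestMatches(..., flags=8) uses doctest matching with ELLIPSIS
# (doctest.ELLIPSIS == 8).  A missing trailing newline is enough to make
# otherwise-matching output mismatch, as mgz suspects.
checker = doctest.OutputChecker()
want = "True True True\n[...]\n"

good = "True True True\n['/tmp/x/testr']\n"
bad = "True True True\n['/tmp/x/testr']"    # same text, no trailing newline

print(checker.check_output(want, good, doctest.ELLIPSIS))  # True
print(checker.check_output(want, bad, doctest.ELLIPSIS))   # False
```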
[23:43] <mgz> the test_bdist was even worse because it prints pages of text every time.
[23:45] <mgz> I don't think the Matcher scheme works well with more than one line of text, the top thingy needs to know about every leaf object to not spam you.
[23:51] <lifeless> mgz: it should have printed a difflike thing
[23:53] <mgz> but the matchee is always printed anyway.
[23:54] <mgz> `assertThat(some_giant_thing, WhateverMatcher)` brings pain.
[23:54] <lifeless> mgz: that can be changed
[23:54] <lifeless> mgz: OTOH its good to be able to see definitional errors
[23:55] <mgz> it's good to be able to see more than the tail of the last failing test's output.