[00:00] <mtaylor> thumper: if I'm going to propose some work on launchpad and I want to get direction approved, should I file a blueprint or a bug?
[00:00] <mtaylor> thumper: or should I write a patch containing a user story and submit that :)
[00:01] <thumper> mtaylor: you should have a pre-implementation call :)
[00:01] <lifeless> mtaylor: chat with a dev
[00:01] <lifeless> mtaylor: everything else is makework.
[00:01] <lifeless> mtaylor: you could chat on the lp-dev list, here, or on voice
[00:02] <mtaylor> lifeless: ok.... just wanted to be sure I was prepping for the right thing
[00:02] <lifeless> if you need a persistent record, I'd start with the list, or a bug [perhaps]
[00:02] <mtaylor> _I_ don't need a persistent record - I just wanted to make sure I was being friendly and all
[00:03] <lifeless> whats your ideal, when someone does stuff for dribble?
[00:03] <mtaylor> lifeless: depends on scope of work - if it's a new feature or something decent sized, we ask them to file a blueprint
[00:03] <lifeless> what do you use the blueprint for?
[00:04] <mtaylor> lifeless: tracking work against the task
[00:04] <mtaylor> lifeless: they'll then tie their bzr branch to that blueprint, we'll use the blueprint during planning meetings to check progress on it
[00:04] <mtaylor> etc.
[00:04] <lifeless> mtaylor: but not for deciding on direction :)
[00:05] <mtaylor> lifeless: well, not directly, no... although we'd certainly like to :)
[00:05] <mtaylor> lifeless: but fair enough
[00:33] <mwhudson> lifeless: say, do you have any particular opinions about pyunit-compatible test runners?
[00:34] <lifeless> why yes, I do
[00:34] <mwhudson> lifeless: which would you recommend? (he asks, half expecting "bzrlib's")
[00:35] <lifeless> testrepository if you can
[00:35] <lifeless> failing that testtools.run
[00:35] <mwhudson> oh, testtools has a runner?
[00:35] <mwhudson> didn't know that
[00:35] <lifeless> and failing that your own glue to bzrlib.tests.$stuff
[00:35] <lifeless> my current absolute favourite is testrepository
[00:36] <lifeless> which is generally used via subunit.run
[00:36] <lifeless> which backs onto testtools stuff
[00:36] <mwhudson> lifeless: i'm currently writing up a python programming practices document for the $censored work
[00:37] <mwhudson> so i get to inflict my opinions on how some things should be done
[00:37] <lifeless> have I shown you testrepository and all the bits
[00:37] <mwhudson> i'm aware of it
[00:37] <lifeless> I think you should experience before recommending
[00:37] <mwhudson> yeah
[00:38] <lifeless> so, apt-get install testrepository
[00:39] <lifeless> this doesn't do test discovery yet; it will be trivial once someone gets around to the testtools + discovery [Foord's thing] glue
[00:39] <lifeless> I should ask, do you have say, 10 minutes for a quick tour ?
[00:40] <mwhudson> yes
[00:40] <mwhudson> ah, i was going to ask about discovery
[00:40] <lifeless> it would be good to add that
[00:40] <lifeless> perhaps you could find a day and JFDI ?
[00:40] <mwhudson> perhaps
[00:41] <lifeless> let me see, how hard would it be
[00:41] <lifeless> http://pypi.python.org/pypi/discover/0.3.2
[00:43]  * mwhudson looks at unittest2 and finds nothing that looks particularly good, apart from the discovery stuff
[00:44] <lifeless> yeah
[00:44] <lifeless> I asked if he'd contribute to testtools
[00:44] <lifeless> but apparently it was too hard. Or something.
[00:44] <lifeless> <- a little bitter
[00:45] <mwhudson> :(
[00:45] <mwhudson> i think i told him to look at testtools at pycon 2009
[00:45] <lifeless> anyhow
[00:45] <lifeless> shiny shiny stuff
[00:45] <lifeless> so, do you have something small with a test_suite method ?
[00:45] <mwhudson> um
[00:45] <mwhudson> no
[00:46] <lifeless> if not, make something small with a test_suite method (because thats something that stock pyunit, and thus testtools.run, can support)
[00:46] <lifeless> just
[00:46] <lifeless> from testtools import TestCase
[00:46] <lifeless> class foo(TestCase):
[00:46] <lifeless>  def test_pass(self):pass
[00:47] <lifeless>  def test_fail(self):self.fail('sad')
[00:47] <bryceh> bzr question...  I have a branch with an html file in it.  I want whenever I commit to the branch that it updates a "last-updated-date" line in the footer.  Any way to set up bzr to do this easily?
[00:47] <lifeless> def test_suite():
[00:47] <jelmer> bryceh: see the bzr-keywords plugin
[00:47] <lifeless>     return unittest.TestLoader().loadTestsFromNames(['foo'])
[00:47] <lifeless> in foo.py
[00:47] <bryceh> jelmer, thanks
[00:48] <lifeless> mwhudson: we can test this works with 'python -m testtools.run foo.test_suite'
[00:48] <lifeless> make another file, .testr.conf
[00:48] <lifeless> [DEFAULT]
[00:49] <lifeless> test_command=python -m subunit.run $IDLIST
[00:49] <lifeless> test_id_list_default=foo.test_suite
[00:49] <lifeless> EOF
[00:49] <lifeless> then, testr init; testr run
[00:49] <lifeless> should show you one failure
[00:49] <lifeless> testr run --failing
[00:49] <lifeless> should run the one failing test
[00:49] <lifeless> tour over
[00:50] <lifeless> for more detail, keep chatting, and/or read /usr/share/doc/testrepository/MANUAL.txt
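The dictated .testr.conf, assembled as one file (the chat's `EOF` just marked the end of the dictation; testr substitutes `$IDLIST` with the ids of the tests it wants run):

```ini
[DEFAULT]
test_command=python -m subunit.run $IDLIST
test_id_list_default=foo.test_suite
```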
[00:50] <mwhudson> ok
[00:50] <lifeless> I totally totally depend on the 'run only failing' stuff now that I have it
[00:51]  * jelmer admits to also being a total testr fanboy
[00:51] <mwhudson> hm yeah that looks pretty neat
[00:51] <mwhudson> py.test had this --looponfailing thing that i remember using a bit, ages ago
[00:52] <lifeless> sounds similar
[00:52] <lifeless> this is not coupled to test runner or python
[00:52] <lifeless> as long as there is an interface, and it can output subunit, it works ;)
[00:53] <mwhudson> so where should discovery fit?
[00:54] <mwhudson> as an alternative to test_id_list_default ?
[00:55] <lifeless> so
[00:55] <lifeless> small bit of glue in subunit.run
[00:55] <lifeless> then you'd do
[00:55] <lifeless> python -m subunit.run --discover [discovery arg]
[00:55] <lifeless> when running it by hand
[00:55] <lifeless> for .testr.conf you'd do
[00:56] <lifeless> test_id_list_default=--discover <whatever you decided on>
[00:56] <lifeless> thats the entire set of changes needed AFAICT
[00:56] <mwhudson> ah, so subunit needs to change a bit?
[00:57] <lifeless> tiny tiny
[00:57] <lifeless> not the protocol, just the glue to setup a test list
[00:58] <lifeless> rather than calling the unittest loader with the parameters its given
[00:58] <lifeless> it needs to call the discover loader
[00:58] <lifeless> thats it
[00:59] <mwhudson> right
[00:59] <mwhudson> ok well it doesn't sound too hard...
[00:59] <lifeless> note that this workflow works with trial --reporter=subunit too
[00:59] <lifeless> for huge value of entertaining
[01:00] <mwhudson> right, and i guess trial already has its own approach to discovery
[01:00] <lifeless> yes
[01:04] <lifeless> I'd like bzr switch to stash the .testrepository with the branch
[01:04] <lifeless> or perhaps I want a push/pop thing in testr
[01:06] <mwhudson> lifeless: i guess i could find this out myself, but does trial have anything like the load_tests() protocol?
[01:06] <lifeless> it honours test_suite
[01:06] <lifeless> I would like to teach it load_tests
[01:06] <lifeless> because load_tests is much better
[01:07] <lifeless> mtaylor: ^ also another reason to use cppunit -or- make gtest talk subunit
[01:08] <mwhudson> lifeless: why is load_tests 'much' better?
[01:08] <mwhudson> i can see how it's a bit more convenient
[01:09] <lifeless> test multiplication is a lot easier with it
[01:09] <lifeless> less duplication
[01:10] <lifeless> and no hard-coding of TestSuite class
[01:10] <lifeless> and no hard coding of loader
[01:10] <lifeless> the whole process becomes more easily mutated, 2-line patches to change things, rather than change-per-source-file
[01:12] <mwhudson> ah yeah, that makes sense
[02:39] <mars> lifeless, you wouldn't happen to have any idea why the zope testrunner + subunit would produce a partial traceback like this?: http://pastebin.ubuntu.com/443072/
[02:51] <lifeless> mars: that looks truncated
[02:52] <mars> lifeless, yes.  I thought a buffer in my own code was holding the output, but it turns out to be somewhere in the testrunner itself.
[02:52] <mars> lifeless, is the subunit protocol written down anywhere?  I'm wondering if that "error: ..." line is correct?
[02:53] <mars> lines that say "failure: " end with "[ multipart "
[03:00] <mars> lifeless, found the protocol, EBNF in the README.  Clever.
[03:02] <lifeless> :)
[03:02] <lifeless> so you have the simple output there
[03:02] <lifeless> [\n
[03:02] <lifeless> there should be a trailing ]\n
[03:03] <lifeless> and the output of a simple attachment like that is done at once, so it has nearly no chance of not completing
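For reference, a well-formed simple failure in the subunit v1 text protocol looks roughly like this (sketched from memory, not quoted from the README): the details block opens with ` [` and must close with the trailing `]` line that the truncated stream was missing.

```
test: foo.test_fail
failure: foo.test_fail [
Traceback (most recent call last):
  ...
AssertionError: sad
]
```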
[03:03] <lifeless> if its blowing up in stopTest
[03:03] <lifeless> one possibility is stdout being closed or something
[03:04] <lifeless> but that would fly in the face of actually *getting* that traceback
[03:09] <mars> yeah :/
[03:09] <mars> lifeless, could it be that the parent process (the one watching the subunit stream) choked on some child process output?
[03:10] <mars> silently ate it maybe?
[03:11] <mars> thread dump in the parent says nothing out of the ordinary:
[03:11] <mars> Thread 1
[03:11] <mars> #0 0x00007f535ef0cdc2 in select () from None
[03:11] <mars> #1 0x00007f535ec39784 in time_sleep (self=<value optimized out>, args=<value optimized out>) from /build/buildd/python2.5-2.5.2/Modules/timemodule.c
[03:11] <mars> /var/launchpad/tmp/eggs/zope.testing-3.9.4-py2.5.egg/zope/testing/testrunner/runner.py (521): resume_tests
[03:19] <mars> according to the code and thread dump it looks like the parent is running on the while-input loop
[03:19] <lifeless> how long does this take to trigger ?
[03:19] <lifeless> and is it reliable ?
[03:19] <lifeless> if its not too long, you might try running the child under strace
[03:19] <lifeless> as an ugly debugging hack
[03:19] <mars> not reliable, maybe 40% failure rate, takes 2 hours to trigger with a full suite run.
[03:19] <lifeless> also is it only on that test, or a general sort of thing?
[03:20] <mars> it happens anywhere in the windmill portion of the suite
[03:20] <lifeless> does it happen if you just run windmill?
[03:20] <mars> that is why I really really wanted that full traceback :)
[03:20] <mars> lifeless, not in my testing, no
[03:21] <mars> and it doesn't happen on buildbot either
[03:21] <mars> just on ec2
[03:21] <lifeless> and you can trigger it locally running everything ?
[03:21] <lifeless> or on ec2 w/windmill only ?
[03:21] <mars> nope
[03:21] <mars> only when running the full suite, including the windmill suite, on ec2
[03:23] <lifeless> bugger
[03:23] <lifeless> uhm
[03:23] <lifeless> buffer limit ?
[03:23] <mars> hmm
[03:23] <lifeless> oh
[03:23] <lifeless> you use wait() right
[03:24] <lifeless> IIRC that can trigger on anything in your process group
[03:24] <lifeless> want to bet that there is a child of a child which is being allowed to zombie until the child goes, and you're getting a signal for that child-of-child, at which point the pipe to the immediate child gets closed and things go boom
[03:24] <lifeless> or something
[03:24] <mars> well, this is without using my code, but I can check the zope testrunner - it may .wait() as well
[03:25] <lifeless> I'm not terribly convinced of my explanation there
[03:25] <mars> lol
[03:25] <mars> my first thought was "yay IPC is fun"
[03:28] <mars> hmm
[03:28] <mars> lifeless, looking at the zope testrunner code, what you say may be possible.  I would have to think through the process though
[03:29] <mars> there is a note in there about how reading from the child process (the child layer) can be interrupted by EINTR if something sends SIGCHLD
[03:32] <mars> but it would be really really odd that I have seen the traceback truncated on that stopTest line so many times
[03:34] <lifeless> thats true
[03:35] <lifeless> OTOH subunit writes in a very predictable pattern
[04:14] <lifeless> how do I tell what version of bzr-loom launchpad has ?
[04:22] <lifeless> mwhudson: thumper: ^
[04:24] <mwhudson> lifeless: i guess look at utilities/sourcedeps.conf
[05:33] <thumper> probably quite old
[05:50] <ScottK> poolie: I was thinking about your comments about high dkim verification failure rates.  You might be better off doing dkim verification at the border too and then consuming an authresults header inside LP.
[06:22] <poolie> hi scottk
[06:22] <poolie> do you mean my speculation that many of them may have inadvertently broken headers?
[06:22] <poolie> two more thoughts there:
[06:23] <poolie> 1- it's much easier for me to land code into the bit of launchpad that processes incoming mail than it is to do that _and also_ get sysadmins to set up something in the border MTA
[06:23] <poolie> 2- i'd like to hope that we don't actually damage the mail on the way from our DC border in to the incoming queue
[06:23] <poolie> imbw
[06:23] <poolie> but i thought i'd try verifying them there first and if it turns out 100% are broken, i guess we'll know
[06:49] <mwhudson> given that we successfully verify gpg-signed mail, we can't damage it that badly (although i guess that doesn't sign headers?)
[09:18] <henninge> jtv-eat: the query actually looks quite simple, given that I already have the product id and the sourcepackagename(s).
[09:18] <henninge> http://paste.ubuntu.com/443212/
[11:02] <deryck> Morning, all.
[11:29] <ScottK> mwhudson: A dkim signature covers a lot more of the message than a gpg signature.  In the one case where I've seen real success data, a large enterprise doing dkim verification on a second-tier MTA rather than on the border, verification reliability was significantly lower than in cases where I've seen it done on the border MTA.
[11:32] <mwhudson> ScottK: that makes sense
[11:32]  * mwhudson goes to bed
[12:53] <deryck> BjornT, ping
[13:01] <BjornT> hi deryck
[13:28] <wgrant>  /builders is timing out on both production and edge.
[13:28] <noodles775> Yep... saw that too :/
[13:29] <wgrant> How's it dying?
[13:29] <noodles775> view/other_builders
[13:30] <noodles775> wgrant: I'll paste the oops..
[13:30] <wgrant> Hm. That's a bit odd.
[13:31] <noodles775> wgrant: http://pastebin.ubuntu.com/443300/
[13:31] <wgrant> Ah.
[13:37] <leonardr> jamesh, i have a storm question
[13:38] <jamesh> leonardr: shoot
[13:39] <leonardr> is it possible to define a global hook equivalent to __storm_flushed__, or do i need to create a superclass/mixin and make everything inherit the behavior?
[13:40] <jamesh> leonardr: there is an internal event for that, but it is not part of the public API
[13:41] <jamesh> what would you want to use it for?
[13:44] <leonardr> jamesh: see https://code.launchpad.net/~leonardr/launchpad/test-representation-cache/+merge/26513
[13:44] <leonardr> i'm invalidating a memcached cache
[13:48] <jamesh> leonardr: ah.  The event I was thinking of is just fired once in response to store.flush() -- not once per object
[13:50] <leonardr> ah, ok
[13:50] <leonardr> it sounds like a superclass/mixin is the best bet
[13:51] <jamesh> leonardr: there are also internal "flushed" events for each object, but no way to globally register for all objects
[13:52] <jamesh> leonardr: note that not all data modifying Storm APIs will result in Python objects being flushed.
[13:53] <leonardr> jamesh: I'm talking to stub about that now. are you thinking of his "4) Using Storm for bulk updates, inserts or deletions."?
[13:54] <jamesh> leonardr: right.  The ResultSet set() and remove() methods
[13:54] <noodles775> wgrant: temporary patch - can you check pls: http://pastebin.ubuntu.com/443324/
[13:55] <jamesh> there isn't a bulk insert method in Storm, but I doubt you care too much about inserts if you're working with a cache
[13:55] <wgrant> noodles775: Looks fine.
[13:55] <wgrant> Are we blaming missing indices for the slow query?
[13:56] <noodles775> I'm assuming so, but haven't checked yet... want to get the page back up so we can fix it properly in time.
[13:56] <wgrant> Yeah.
[14:02] <stub> jamesh: store.execute(Update({Person.displayname: 'foo'}, Person.name=='stub')) would be how to get Storm to do updates without clearing the cache (not that we do that, but it would be nice to be able to do that).
[14:03] <jamesh> stub: you don't even really need to go behind Storm's back like that
[14:04] <stub> I thought that would be the blessed approach for bulk updates; didn't think that *was* going behind Storm's back ;)
[14:04] <jamesh> stub: if I use ResultSet.set() to do a bulk update, you'll only see __storm_flushed__() calls for the objects that are actually live in memory
[14:04] <stub> Ahh... that would be a better approach for bulk updates ;)
[14:05] <jamesh> it is really only intended to help you keep local caches up to date (e.g. storing computed values on the Python object) -- not external ones like I assume you're talking about
[14:06] <stub> So there is a check to see if the object is live. We could hook in there to clear the cache at that point... although there are still problems we can only solve at the database level. Not sure if they are problems we really need to worry about at this stage.
[14:06] <stub> Oh... no we couldn't since we won't know the keys :-(
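A minimal standalone sketch of the superclass/mixin approach leonardr settles on, with a fake cache standing in for memcached. The names here (`representation_cache`, `cache_key`, `CacheInvalidationMixin`) are illustrative rather than Launchpad's real API, and storm itself is not imported so the sketch runs on its own; `__storm_flushed__` is the per-object flush hook discussed above.

```python
class FakeCache:
    """Stand-in for a memcached client with a delete() method."""

    def __init__(self):
        self.data = {}

    def delete(self, key):
        self.data.pop(key, None)


representation_cache = FakeCache()


class CacheInvalidationMixin:
    """Inherit from this (alongside the Storm base) to drop stale entries."""

    def cache_key(self):
        return '%s:%s' % (type(self).__name__, self.id)

    def __storm_flushed__(self):
        # Storm calls this on each object whose changes were flushed.
        # Per jamesh: bulk ResultSet.set()/remove() only reaches objects
        # live in memory, so this cannot catch every database update.
        representation_cache.delete(self.cache_key())
```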
[14:47] <deryck> Does sinzui have a mark-released button feature again?  A script?  Or too much time on his hands? :-)
[14:54] <sinzui> deryck, I used my script from *before* the feature that I had to retract. I discovered we failed to update a lot of bugs on 10.04, 02
[15:00] <bac> reviewers meeting starting now
[16:17] <gmb> deryck, bug 386757 has been fixed for malone, hasn't it?
[16:17] <mup> Bug #386757: JavaScript for subscribing should be refactored for reuse <Launchpad Bazaar Integration:Fix Released by rockstar> <Launchpad Bugs:Triaged> <https://launchpad.net/bugs/386757>
[16:18] <gmb> Ah, or am I getting confused about branches that have landed but don't actually fix the bug?
[16:19] <deryck> gmb, not really fixed completely.
[16:19] <gmb> deryck, Ah, okay.
[16:20] <deryck> gmb, I did some work, rockstar did some work, and once we have state on the bug subscriptions, then we can do more work to unify our approaches.
[16:20] <deryck> though I think rockstar went really different from my work
[16:20] <gmb> deryck, Okay, I'll add it to the subs story.
[16:20] <gmb> And we can take a look at it later.
[16:20] <deryck> gmb, yeah, we should look again late in the UI refactor for this story.  It will most certainly have to touch the js subscribing code.
[16:20] <gmb> Right.
[16:21] <rockstar> deryck, I think my work was the culmination of discussions you and I had at the lazr-js sprint/UDS
[16:22] <rockstar> deryck, also, I have a branch that I've been trying to get back to that does the wizard widget, so we should have that at some point as well.
[16:23] <deryck> rockstar, right, I knew we talked about it.  I just meant it's more different than alike to what we have now in bugs.  but better, definitely.
[16:24] <rockstar> deryck, ah yes, because branch subscriptions have state, and you had planned to add the same to bugs.
[16:25] <deryck> rockstar, exactly.
[16:25] <deryck> So I think it will line up nicely with our subscriptions refactor the next couple months.
[16:29] <rockstar> deryck, sweet.
[16:34] <henninge> sinzui: I have a question about packaging if you have 2 minutes ;)
[16:35] <sinzui> I can help
[16:35] <henninge> sinzui: Is it correct that we are now copying packaging links whenever a new distroseries is created?
[16:35] <sinzui> Yes. they were copied.
[16:36] <henninge> sinzui: for maverick or for all series?
[16:36] <sinzui> for maverick using the script that bootstraps soyuz
[16:37] <sinzui> When we copy the packages from lucid to maverick and started the builds, we also copied the packaging links
[16:38] <henninge> sinzui: and that will happen on each new series?
[16:38] <henninge> in the future?
[16:39] <sinzui> yes.
[16:39] <henninge> sinzui: thanks, great help. ;)
[16:39] <sinzui> It is only for Ubuntu because only Ubuntu uses soyuz
[16:41] <henninge> sinzui: well, it looks like only Ubuntu supports packaging anyway, at least UI-wise.
[16:41] <henninge> "+ubuntupkg" ...
[16:42] <sinzui> correct
[16:45] <rockstar> henninge, yes, if you try and build for anything but Ubuntu, you can really bugger the build farm. abentley knows from experience.
[16:47] <henninge> rockstar: cool, that makes things easier for me, too.
[17:24] <abentley> noodles, why did you turn buildbase.queuebuild into a staticmethod?
[19:23] <thumper> jelmer: http://launchpadlibrarian.net/49567000/chicken-chicken-git-mirror.log
[19:29]  * maxb ponders creating a dev.launchpad.net/FailingBzrSvnImports to gather common failure cases, and wonders if some such thing exists already
[19:30] <maxb> Also, what is the current situation with svn imports requiring a username/password? Does that still require losa intervention on the import slaves?
[19:30] <mars> gary_poster, ping, buildout question, when you have some spare time
[19:33] <gary_poster> mars, what's up?
[19:35] <mars> gary_poster, being annoyed by buildout picking the first zope.interface it sees, then barfing when it reads versions.cfg and realizes that it needs an older zope.interface
[19:37] <mars> gary_poster, just a sec, ran it through -vvvv, and it says there is a version conflict when loading zc.recipe.testrunner
[19:38] <mars> so it could be that zc.recipe.testrunner needs a newer zope.interface
[19:38] <gary_poster> ok
[19:38] <gary_poster> are you specifying the version of zc.recipe.testrunner?
[19:39] <mars> gary_poster, yep.  According to zc.recipe.testrunner, it does not need an explicit zope.interface
[19:40] <gary_poster> mars, sounds odd.  You have a branch for me to look at?
[19:40] <mars> gary_poster, yes, I'll push the changes
[19:41] <gary_poster> ok
[19:42] <mars> gary_poster, bzr branch lp:~mars/lazr-js/1.0
[19:42] <gary_poster> k
[19:43] <mars> gary_poster, also worth noting: the Makefile uses a LAZR_SOURCEDEPS_DIR variable, and mine points to ~/.buildout/
[19:43] <gary_poster> k
[19:45] <mars> gary_poster, zope.interface-3.5.1 is in the global download-cache, but it doesn't want to use it.  The global eggs/ directory only has zope.interface-3.5.3 (maybe because buildout refuses to build the 3.5.1 egg)
[19:49] <gary_poster> mars, the buildout.cfg file is trying to do something that appears to be against your goals.  ...wanna talk on mumble?
[19:50] <mars> gary_poster, sure
[19:50] <gary_poster> k, I'm there :-)
[21:04] <lifeless> mars: there was a import failures wiki page some time back
[21:05] <mars> maxb, ^
[21:08] <lifeless> bah
[21:08] <lifeless> sorry mars
[21:09] <mars> np :)
[21:28] <mars> lifeless, gary_poster, by the way, by hacking our testrunner I managed to get it to spit out the full windmill suite hang traceback: http://pastebin.ubuntu.com/443544/
[21:28] <lifeless> grah
[21:28] <mars> ?
[21:29] <mars> oh
[21:29] <gary_poster> "Look," Gary said intelligently, "a ValueError!"  Though it does look like something in subunit on the face of it.
[21:29] <mars> lifeless, ?
[21:29] <lifeless> checking the code
[21:30] <lifeless> so, error is not None - this is a plain pyunit api call
[21:30] <lifeless> rather than a lovely shiny testtools one
[21:31] <lifeless> stopTest was called
[21:31] <lifeless> and then the zope testrunner formatter is calling addError
[21:31] <lifeless> from within stopTest
[21:31] <lifeless> thats odd
[21:32] <lifeless> the error object is failing in _exc_info_to_string
[21:32] <lifeless> so err is not a exc_info tuple
[21:32] <lifeless> its something else
[21:32] <lifeless> look at zope/testing/testrunner/formatter.py
[21:32] <lifeless> whats the easiest way to get that for me? I have a 2 month old checkout of lp
[21:33] <lifeless> mars: ^
[21:34] <mars> lifeless, looking
[21:34] <mars> lifeless, check in eggs/zope.testing-3.9.4
[21:34] <mars> do you have that version?
[21:35] <lifeless> sec, starting the vm
[21:35] <lifeless> yes, I do
[21:35] <lifeless> well, a py2.5.egg
[21:35] <lifeless> should be good enough
[21:36] <mars> that's the one
[21:36] <lifeless> ok so this is subunit glue
[21:36] <lifeless> and its calling _get_text_details
[21:36] <lifeless> yes, its buggy
[21:37] <lifeless> I'll put a patch up
[21:38] <mars> lifeless, wow, thanks!
[21:41] <lifeless> ok, grab lp:~lifeless/zope.testing/subunit
[21:42] <lifeless> I don't know if it passes tests, because it blows up when I follow the getting started instructions
[21:42] <mars> ok
[21:42] <mars> lifeless, it will take a while to run a new round of tests.  A few hours :(
[21:43] <mars> it will probably have to wait for tomorrow.
[21:43] <lifeless> https://code.edge.launchpad.net/~lifeless/zope.testing/subunit/+merge/26638
[21:43] <lifeless> mars: so anyhow, its something blowing up in another thread, but the error reporting codepath was broken
[21:44] <mars> ok
[21:44]  * mars waits for the diff to update
[21:44] <lifeless> so we'll still expect an error
[21:44] <lifeless> but it should be relevant to the windmill tests this time
[21:44] <mars> \o/
[21:45] <mars> that would be a big step forward
[21:45] <mars> Hurray for cascading failures!
[21:45] <lifeless> :)
[21:45] <lifeless> actualy, hurray for untested code :P
[21:49] <mars> lifeless, many thanks.  I'll let you know how it goes.
[21:49] <lifeless> please do
[21:49] <lifeless> sidnei: when you get back
[21:49] <lifeless> https://code.edge.launchpad.net/~lifeless/zope.testing/subunit/+merge/26638
[21:53] <mars> by coincidence, this may have been the source?  Exception in thread Thread-3 (most likely raised during interpreter shutdown):
[21:53] <mars> Traceback (most recent call last):
[21:53] <mars>   File "/usr/lib/python2.5/threading.py", line 486, in __bootstrap_inner
[21:53] <mars>   File "/usr/lib/python2.5/threading.py", line 446, in run
[21:53] <mars>   File "/var/launchpad/tmp/eggs/windmill-1.3beta3_lp_r1440-py2.5.egg/windmill/server/https.py", line 398, in start
[21:53] <mars>   File "/usr/lib/python2.5/SocketServer.py", line 218, in handle_request
[21:53] <mars> <type 'exceptions.AttributeError'>: 'NoneType' object has no attribute 'error'
[21:53] <lifeless> very likely
[21:53] <lifeless> the check was a threading thingy
[21:55] <mars> so my 'reprint the exception' code in bin/test actually spat out the entire error
[21:56] <mars> who knows why it was getting truncated
[21:57] <mars> wait, spoke too soon.  There is nothing in the on-disk capture file, just the console.  May have just been a lucky break.
[22:09] <maxb> Is there a good way to hold a conversation with the requester of a vcs import?
[22:09] <lifeless> 'contact this person' in launchpad
[22:10] <maxb> ... and manually keep the whiteboard up to date with the state :-/
[22:10] <lifeless> yes :(
[22:12] <maxb> In a most perplexing move, someone's trying to register a ~lrcshow-x/lrcshow-x/trunk vcs-import when ~vcs-imports/lrcshow-x/trunk already exists
[22:12] <mwhudson> maxb: hey, at least you can see who registered the import now
[22:12] <mwhudson> that was one of the more frustrating problems with the ye olde system
[22:12] <maxb> But given the history of their svn repository, they clearly don't understand version control very well
[22:17] <lifeless> mwhudson: so what were your testing framework thoughts after looking at stuff yesterday
[22:18] <mwhudson> lifeless: i admit to getting a little sidetracked
[22:19] <mwhudson> lifeless: i think discover + subunit + testr run looks pretty nice though
[22:29] <maxb> Could someone look up OOPS-1614M3672 for me? (occurred attempting to rename a vcs-import branch)
[22:29] <lifeless> sec
[22:29] <lifeless>   Module lp.code.model.branchnamespace, line 135, in validateRegistrant
[22:30] <lifeless>     % (registrant.displayname, owner.displayname))
[22:30] <lifeless> BranchCreatorNotMemberOfOwnerTeam: Max Bowsher is not a member of lrcShow-X team
[22:30] <lifeless> zomg 500ms of sql time there
[22:30] <thumper> maxb: arse
[22:31] <thumper> maxb: was that over the api?
[22:31] <maxb> No, web ui
[22:31] <thumper> maxb: what did the form look like for the owner?
[22:31] <thumper> maxb: you should have had a dropdown
[22:32] <thumper> maxb: if you didn't, it is a bug
[22:32] <maxb> oh, I see
[22:33] <thumper> maxb: BTW I think you are doing an awesome job with the imports
[22:33] <maxb> I have edit perms by virtue of being in ~vcs-imports, but that means I can edit a branch of which I'm not in the owner team
[22:33] <maxb> So the vcs-imports special edit hack has been incompletely applied
[22:34] <thumper> maxb: you should be able to edit the branch yes
[22:34] <thumper> maxb: however you can only assign to a team you are a member of
[22:34] <thumper> maxb: the owner widget should be a drop down for you
[22:35] <maxb> I am not trying to change the branch owner. It is a dropdown. It's objecting about the fact that the existing owner of the branch is not one that I would be permitted to assign the branch to, despite the fact that I can edit the branch
[22:37] <maxb> i.e. it's over-aggressive validation. I suspect it's never been an issue before because ~vcs-imports have often been ~bazaar-experts previously
[22:37] <thumper> oh
[22:37] <thumper> arse
[22:37] <thumper> this is exactly the same problem as with the source package branches
[22:37] <thumper> maxb: can you file a bug please
[22:37] <maxb> Sure
[22:37] <thumper> thanks
[22:44] <maxb> lifeless: may I have some more of the traceback?
[22:46] <mwhudson> thumper: an oops is a better result than maxb accidentally assigning the branch to himself i guess :-)
[22:58] <sidnei> lifeless, i'm done for the day, but will look into it tomorrow
[22:58] <lifeless> sidnei: tresquis is on it
[22:58] <lifeless> sidnei: but thanks
[23:08] <maxb> 588943 ftr
[23:17] <sidnei> lifeless, even better. thanks!
[23:27] <wgrant> Hmm.
[23:27] <wgrant> It looks like if you have a 1.0 API override (because the devel API has changed, and we need to retain compat), beta (which came before 1.0) doesn't inherit it.