[00:07]  * poolie might try to finish my dkim branch
[00:08] <poolie> oh, poo, codehosting i guess is entirely off line
[00:15] <mtaylor> yup
[00:15]  * mtaylor once again annoyingly renews his objection to making launchpad completely unusable for 2 hours in the middle of the afternoon for the US west coast in the middle of the week
[00:17] <poolie> i know, it's crazy
[00:18] <poolie> also, i don't see any good reason why you shouldn't be able to at least read branches
[00:18] <poolie> and indeed why not write to them, because they're not stored in the db
[00:18] <poolie> but, it's getting better
[00:18] <poolie> i have trust in lifeless and co
[00:32] <lifeless> so today we had a db config issue on the readonly slave; another case where schema problems have bitten us.
[00:32] <lifeless> (the way we do schema changes, I mean)
[00:38] <mars> lifeless, what server does PQM run on?
[00:38] <lifeless> prae
[00:39] <lifeless> pqm runs outside a chroot, but code from our branches runs inside the chroot
[00:39] <mars> ah, darnit
[00:39] <lifeless> ?
[00:40] <mars> darnit that we are not there yet.
[00:40] <mars> with Python 2.6
[00:40] <lifeless> right
[00:40] <lifeless> there may be other gotchas
[00:40] <mars> on prae? yes, maybe
[00:40] <lifeless> the simplest thing IME is to stay compatible until we have *every possible case* covered.
[00:40] <lifeless> no shortcuts
[00:41] <mars> well, that is going to be what happens now :)
[00:59] <bac> thumper, rockstar, stub, mwhudson, stevek, lifeless, wgrant -- Review Meeting starting soon
[01:00] <wallyworld__> bac: thumper sends his apologies, but i'll lurk if that's ok
[01:00] <bac> wallyworld__: that's great.
[01:02] <bac> lifeless: ping
[01:06] <mwhudson> Edwin-afk: so when does devel open again? :-)
[01:07] <lifeless> bac: hi
[01:08] <bac> lifeless: join us in #launchpad-meeting?
[02:23] <wgrant> StevenK: An Ohloh reimport is in process, BTW.
[02:23] <wgrant> https://www.ohloh.net/p/launchpad/enlistments
[02:24] <StevenK> Huzzah
[02:30] <StevenK> mars: Do you have a few seconds for me to bend your ear?
[02:31] <mars> StevenK, sure
[02:32] <StevenK> mars: I keep seeing failures such as https://hudson.wedontsleep.org/job/db-devel/lastFailedBuild/testReport/junit/lp.codehosting.puller.tests.test_worker/TestWorkerProgressReporting/test_network/ appearing in hudson, do you have any clues?
[02:32] <StevenK> I think the same failures happen on ec2 too
[02:33] <mars> looking
[02:33] <mars> wgrant, this project does amazing things for one's Ohloh Kudos rank :)
[02:33] <StevenK> mars: If that small snippet isn't helpful, there's a full console output link on the left
[02:34] <mars> StevenK, unfortunately that tells me enough.  This is Benji's error from ec2 earlier today: https://pastebin.canonical.com/38615/
[02:35] <mars> /with/ a patch I had applied to try and fix it
[02:35] <mars> what's funny is the thread ID is the same
[02:35] <mars> Always the 18th thread started
[02:36] <wgrant> mars: It does!
[02:36] <wgrant> It looks like this import might only take a few days.
[02:37] <mars> StevenK, I am almost to the point of desperate measures to solve this.  Two thoughts: in the tearDown, enumerate all running threads, .join(3.0) on them.  Give them time to halt.
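mars's first thought above, sketched as a standalone helper (names like `join_leaked_threads` are illustrative only, not Launchpad's actual test infrastructure):

```python
import threading

def join_leaked_threads(timeout=3.0):
    # Give every non-main thread a grace period to finish, then
    # report whichever ones are still alive as suspected leaks.
    current = threading.current_thread()
    for thread in threading.enumerate():
        if thread is not current:
            thread.join(timeout)
    return [t.name for t in threading.enumerate()
            if t is not current and t.is_alive()]
```

Calling something like this from tearDown before the leak check would at least distinguish threads that are merely slow to exit from ones that are genuinely wedged.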
[02:37] <StevenK> mars: There are two others linked from https://hudson.wedontsleep.org/job/db-devel/lastFailedBuild/testReport/junit/ as well
[02:37] <mars> StevenK, or run something yucky like a custom tracer via threading.settrace(), then pick out whatever the heck it is that is hanging around
[02:38] <mars> StevenK, I think it might be an intermittent race condition between the zope testrunner and our test infrastructure.  I solved this once before (and forget how once again)
[02:40] <mars> StevenK, this work will probably become a priority.  When it does, you will have a fix for it.
[02:40] <StevenK> mars: Excellent. My only other concern is why does it impact hudson and ec2, but not buildbot?
[02:41] <mars> StevenK, my theory: BB servers are faster, affecting the race
[02:42] <StevenK> That could be why many of us can't reproduce it locally
[02:43] <mars> yes
[02:43] <StevenK> mars: So it may even end up being a race in zope's testrunner, and we just happen to tickle it?
[02:44] <mars> maybe.  The testrunner does no thread cleanup
[02:46] <StevenK> mars: That sounds like a zope fail to me
[02:46] <wgrant> When's devel likely to reopen?
[02:46] <StevenK> wgrant: It was going to be in an hour, but now it's 3.
[02:46] <StevenK> More seriously, RSN
[02:47] <mars> StevenK, got to run - fwiw, the testrunner could die horribly when doing a thread.join() - unjoined threads are test garbage and break isolation
[02:47] <mars> best leave their cleanup to the test itself
[02:47] <mars> same as leaking memory garbage
[02:47] <mars> later
[03:19] <lifeless> mars: StevenK: do we know what threads they are?
[03:19] <wgrant> https://code.edge.launchpad.net/~wgrant/launchpad/bug-655648-a-f-maverick/+merge/37820, https://code.edge.launchpad.net/~wgrant/launchpad/bug-629921-packages-empty-filter <-- can someone please land those?
[03:21] <StevenK> lifeless: Personally, I have no idea
[03:21] <lifeless> energy invested in making that automatically determined would be of great value
[03:21] <lifeless> or we could just disable the check, though I know that's not a terribly popular idea.
[03:25] <StevenK> lifeless: Hudson is showing, time and again, that there are a number of them that fail the same way
[03:25] <lifeless> Ursinha: I may be confused
[03:25] <lifeless> Ursinha: but I thought there was a report of oopses-received-today
[03:26] <Ursinha> lifeless, it's the lpnet-oops.html
[03:26] <Ursinha> same for edge and staging
[03:26] <lifeless> Ursinha: how often does it update?
[03:26] <Ursinha> lifeless, hourly, I guess
[03:26] <Ursinha> let me check
[03:26] <lifeless> last updated at 11pm utc, not updated since.
[03:30] <StevenK> lifeless: Hints on where to look would be awesome
[03:30] <lifeless> StevenK: have the teardown that bitches introspect the thread objects
[03:30] <lifeless> there are a few attributes that may be useful, like name
[03:31] <lifeless> but also whether it's daemonised, and if accessible the start function would be good. oh and the class, though I think we're seeing that already.
[03:32] <mwhudson> does anyone know if there's a bug report i can associate https://code.edge.launchpad.net/~jameinel/launchpad/lp-service/+merge/37531 with?
[03:32] <lifeless> mwhudson: yes
[03:32] <lifeless> hmm, there was.
[03:32] <lifeless> but hell, file a new one.
[03:37] <mwhudson> lifeless: ok, let me know the number when you have it?
[03:37] <mwhudson> or link it, either works
[03:39] <StevenK> lifeless: Using threading.enumerate(): http://paste.ubuntu.com/512834/
[03:40] <mars> lifeless, we don't know what threads they are.  That is why I am thinking of using the threading.settrace() method to find out.  My research says threads are a black box otherwise
[03:40] <mars> unless you use the thread name well
[03:41] <StevenK> mars: ^
[03:42] <mars> yes, not the most helpful thread names :)
[03:42] <mars> StevenK, didn't think of using the imp module - would that work?
[03:43] <mars> err, inspect, not imp
[03:43] <mars> StevenK, for each thread, can you see the .__class__ method?
[03:43] <mars> attribute
[03:44] <mars> bad typing tonight
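mars's threading.settrace() idea above could look roughly like this sketch (hypothetical names; the eventual fix may well differ): install a global trace hook before the suspect threads start, and log which functions each thread enters.

```python
import threading

thread_calls = {}  # thread name -> list of function names it entered

def call_logger(frame, event, arg):
    # threading.settrace() hands this to sys.settrace() in every
    # thread started afterwards; we only care about 'call' events.
    if event == 'call':
        name = threading.current_thread().name
        thread_calls.setdefault(name, []).append(frame.f_code.co_name)
    return None  # no line-by-line tracing needed

# Must be installed before the suspect threads are started.
threading.settrace(call_logger)
```

A leaked "Thread-18" would then show up in `thread_calls` with the functions it ran, which is usually enough to identify where it came from despite the unhelpful name.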
[03:45] <lifeless> mwhudson: I couldn't find it; want to file one?
[03:46] <mwhudson> lifeless: ok
[03:46] <lifeless> mars: not a black box
[03:47] <lifeless> >>> t = threading.Thread(target=lambda:None)
[03:47] <lifeless> >>> t._Thread__target
[03:47] <lifeless> <function <lambda> at 0x7f75dfdd92a8>
[03:47] <lifeless> t.daemon
[03:47] <lifeless> StevenK: print those two things as well please
[03:47] <lifeless> (t.name, t.daemon, t._Thread__target)
[03:51] <mars> lifeless, ah, you are accessing a private object member - clever
[03:52] <lifeless> >>> t = threading.Thread(target=lambda:None)
[03:52] <lifeless> >>> dir(t)
[03:52] <lifeless> and look
[03:52] <lifeless> we may also want _Thread__args
[03:52] <lifeless> but that's more likely to throw up in our face, I think.
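Pulled together, the introspection above might read as follows. Note that `_Thread__target` is Python 2's name-mangled spelling from the session above; newer Pythons call it `_target`, so this sketch tries both:

```python
import threading

def describe_thread(t):
    # Summarise a leaked thread for the error report: its name,
    # whether it is daemonised, and the callable it was started with.
    target = (getattr(t, '_Thread__target', None)
              or getattr(t, '_target', None))
    return (t.name, t.daemon, target)
```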
[03:54] <mwhudson> um
[03:54] <mwhudson> is person search on launchpad completely horked right now?
[03:54] <lifeless> yes
[03:54] <lifeless> the huge vocab bug
[03:54] <lifeless> 8.4 regression
[03:54] <mwhudson> ah ok
[03:54] <lifeless> once Ursinha gets back to me about lpnet-oops being stale I will raise the timeout for it via flags.
[03:56] <lifeless> it's probably validpersoncache
[03:56] <lifeless> but also we've got this bizarre thing where staging is fine and prod sucks
[03:56] <lifeless> which reminds me, it's bug filing time on that
[04:00] <Ursinha> lifeless, so.. it seems oops-tools can't find any oopses
[04:00] <lifeless> aarrrgghh
[04:01] <Ursinha> I'm checking devapd
[04:01] <lifeless> Ursinha: are there any on disk on sodium?
[04:01] <Ursinha> devpad
[04:01] <Ursinha> that's what I'm checking
[04:01] <lifeless> kk, great minds :)
[04:03] <StevenK> lifeless, mars: Sorry, was afk: http://paste.ubuntu.com/512844/
[04:03] <Ursinha> lifeless, hm, I see oopses there
[04:03] <Ursinha> lifeless, /srv/launchpad.net-logs/lpnet/gandwana/2010-10-14
[04:03] <Ursinha> I see a bunch
[04:04] <Ursinha> for instance
[04:04] <Ursinha> 340
[04:04] <lifeless> Ursinha: ok, so oopstools is broken ?
[04:05] <lifeless> StevenK: so, that tells us that that one has a bzr server
[04:05] <Ursinha> lifeless, investigating what's happening..
[04:05] <lifeless> StevenK: two threads; making the error print this extra info would be useful :)
[04:05] <StevenK> lifeless: Which I'm guessing it started and didn't tear down?
[04:05] <lifeless> StevenK: there are a few possibilities
[04:06] <mwhudson> oh god
[04:06] <lifeless> StevenK: process_request_thread may mean that there is a half closed socket, for instance.
[04:06] <lifeless> mwhudson: been drinking ? :)
[04:06] <mwhudson> i think this is because bzrlib's test http server implementation doesn't join() its thread
[04:06] <mwhudson> lifeless: i wish
[04:06] <Ursinha> lifeless, are you the one that's viewing src/oopstools/oops/dboopsloader.py?
[04:06] <lifeless> mwhudson: invoking the 'oh god' :)
[04:07] <lifeless> Ursinha: no
[04:07] <Ursinha> hm
[04:07] <StevenK> mwhudson: Does that make it a bzrlib bug, then?
[04:07] <mwhudson> eggs/bzr-2.2.0-py2.6-linux-x86_64.egg/bzrlib/tests/http_server.py:597 and thereabouts
[04:08] <mwhudson> StevenK: 'difference of opinion' vs bug
[04:08] <mwhudson> that said, i thought we joined that thread in launchpad, so maybe it's something else
[04:08] <StevenK> Heh, I see that, reading the comment.
[04:08] <mwhudson> that's not entirely unrelated
[04:08] <Ursinha> lifeless, I see a problem here in the oops loader, have to find out how to break the lock file
[04:08] <lifeless> Ursinha: ok
[04:08] <lifeless> Ursinha: I wait with bated breath
[04:08] <mwhudson> StevenK: you could try putting a join() in there and seeing what happens
[04:09] <mars> mwhudson, this argues for a test teardown .join() call then
[04:09] <wgrant>  /win 2
[04:10] <lifeless> mars: not really
[04:11] <mars> lifeless, why not?
[04:11] <lifeless> mars: it argues that our test code looking for thread leaks is wrong
[04:11] <lifeless> mars: / unnecessarily strict
[04:11] <lifeless> mars: looking over the life of the whole test run should be sufficient, for instance (which is what bzrlib does, more or less)
[04:12] <lifeless> that said, there's no reason not to join there, that comment is *almost certainly* premature optimisation.
[04:12] <mars> lifeless, sorry, you lost me - no reason not to join where?
[04:12] <lifeless> Can I encourage 'you' to put a patch forward to bzr to fix this.
[04:12] <lifeless> mars: stop_server(self)
[04:14] <StevenK> Adding self._http_thread.join() to bzrlib has no effect on the output
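For reference, the stop-then-join pattern being debated, as a minimal standalone sketch using the stdlib's http.server rather than bzrlib's test server (class and attribute names here are illustrative, not bzrlib's actual API):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StoppableHTTPServer:
    # Illustrative only -- not bzrlib's HttpServer.  The point is the
    # ordering in stop(): shut down the serve loop, join the thread,
    # then close the listening socket so nothing leaks into teardown.
    def __init__(self):
        self._server = HTTPServer(('127.0.0.1', 0), BaseHTTPRequestHandler)
        self._thread = threading.Thread(target=self._server.serve_forever)

    def start(self):
        self._thread.start()

    def stop(self):
        self._server.shutdown()      # unblocks serve_forever()
        self._thread.join()          # the join() under discussion
        self._server.server_close()  # release the socket
```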
[04:14] <Ursinha> lifeless, oops-tools are mostly matsubara-afk's domain, I'm not sure what to do there without possibly breaking something
[04:15] <StevenK> lifeless: ^
[04:16] <Ursinha> hmm
[04:16] <Ursinha> I guess I solved it :)
[04:16] <Ursinha> update_infestation is running, let's see if oops-tools loads all oopses now
[04:17] <lifeless> StevenK: we're not sure that function is being called, are we?
[04:18] <lifeless> hmm, stop_server is being called
[04:18] <lifeless> and a gc.collect() is being called
[04:18] <lifeless> at least, according to the test
[04:19] <StevenK> Indeed
[04:19] <lifeless> Ursinha: awesome, how long should I wait before trying
[04:20] <Ursinha> lifeless, no idea, I'll let you know as soon as it finishes, but shouldn't take long
[04:21] <Ursinha> lifeless, hm, something is really wrong, script output was "no infestation updated"
[04:21] <Ursinha> it's not recognizing the oopses
[04:22] <Ursinha> hmm
[04:23] <Ursinha> lifeless, the line was commented out in the crontab...
[04:23] <Ursinha> I ran that manually and it's loading the oopses
[04:23] <Ursinha> but not sure why that was commented
[04:23] <Ursinha> hopefully it won't cause any trouble
[04:32] <Ursinha> lifeless, I ran out of ideas. It loaded 7k oopses into the database but it just cannot find them for lpnet, only edge
[04:32] <Ursinha> edge has a partial report now, but lpnet doesn't
[04:32] <Ursinha> report generator claims there are no oopses for lpnet
[04:32] <Ursinha> we'll have to wait for diogo
[04:34] <lifeless> Ursinha: thanks for trying
[04:34] <lifeless> stub: hey, can we have a brief skype call?
[04:35] <Ursinha> no problem lifeless, sorry for not being more helpful
[04:35] <Ursinha> going to eat something and sleep
[04:35] <lifeless> Ursinha: ciao
[04:38] <stub> lifeless: sure
[04:38] <stub> lifeless: stuartbishop
[04:43] <lifeless>  https://bugs.edge.launchpad.net/soyuz/+bug/659129
[04:43] <_mup_> Bug #659129: Distribution:+ppas timeout in PPA search <timeout> <Soyuz:Triaged by julian-edwards> <https://launchpad.net/bugs/659129>
[04:46] <mars> lifeless, StevenK, I tried the .enumerate() patch, but the results are nonsense.  Here is the patch, in case you see anything obviously wrong with it: http://pastebin.ubuntu.com/512867/
[04:47] <mars> lifeless, StevenK, the threads are still alive at the end of the test run, after TestCaseWithTransport.tearDown(self), but the zope testrunner doesn't complain
[04:51] <StevenK> mars: Throw that at ec2 and see what happens after it runs the test suite?
[04:51] <mars> StevenK, it fails locally.  It is bunk :(
[04:51] <mars> no point in running it through ec2
[04:59] <lifeless> StevenK: http://wiki.postgresql.org/wiki/Postgres-XC
[04:59] <lifeless> bah
[04:59] <lifeless> stub: http://wiki.postgresql.org/wiki/Postgres-XC
[05:17] <lifeless> https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1747S227
[05:17] <lifeless> ^
[05:17] <lifeless> 06:27 < bigjools> lifeless: https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1747S227
[05:17] <lifeless> 06:27 < bigjools> the fixed query is not the one that's timing out
[05:17] <lifeless> 06:28 < bigjools> I smell a problem with the ValidPersonOrTeamCache view
[05:18] <lifeless> https://bugs.edge.launchpad.net/launchpad-registry/+bug/655802
[05:18] <_mup_> Bug #655802: Branch:+huge-vocabulary timeout (Person and team AJAX picker fails) <timeout> <Launchpad Registry:Triaged> <https://launchpad.net/bugs/655802>
[05:20] <lifeless> stub: https://lp-oops.canonical.com/oops.py/?oopsid=1740EC788
[05:26] <StevenK> "The PostgreSQL Project finally switched from CVS to Git in September 2010"
[05:26] <StevenK> O.o
[05:40] <lifeless> stub: https://bugs.edge.launchpad.net/launchpad-foundations/+bug/660291
[05:40] <_mup_> Bug #660291: inconsistent performance between staging and prod <Launchpad Foundations:New> <https://launchpad.net/bugs/660291>
[06:20] <lifeless> stub: I've tagged all the bugs pg83; probably a little zealous, but better safe than sorry ;)
[06:33] <stub> ALL the bugs??
[06:35] <StevenK> "Every bug is now tagged pg83, enjoy" ? :-)
[06:44] <lifeless> stub: all the timeout ones
[07:12] <stub> So can anyone tell me why the debugging information isn't being printed at https://lpbuildbot.canonical.com/builders/lucid_prod_lp/builds/7/steps/shell_7/logs/stdio ? Relevant code is test_on_merge.py around line 80.
[07:27] <LPCIBot> Project devel build (104): STILL FAILING in 4 hr 2 min: https://hudson.wedontsleep.org/job/devel/104/
[07:27] <lifeless> stub: what debugging info are you expecting ?
[07:28] <stub> lifeless: The information about the open connections that test_on_merge.py should emit around line 80.
[07:29] <lifeless> stub: if not results:
[07:29] <lifeless>             break
[07:30] <lifeless> hmm, no, that can't be triggering
[07:30] <stub> yer - I don't follow. Either it is a logic error I can't see, or buildbot is stripping the information, or it isn't on that branch.
[07:30] <StevenK> I suspect the latter
[07:31] <lifeless> that code is in prod-devel
[07:31] <StevenK> Given how noisy test_on_merge usually is
[07:31] <lifeless> hangon
[07:32] <lifeless> stub: thats not the code thats executing
[07:32] <lifeless> print 'Cannot rebuild database. There are open connections.'
[07:32] <lifeless> !=
[07:32] <lifeless> Cannot rebuild database. There are 1 open connections.
[07:32] <stub> ahh
[07:32] <StevenK> Can we move to Hudson now? Buildbot is annoying me.
[07:33] <lifeless> StevenK: you were working on reliability - hows that going?
[07:33] <StevenK> lifeless: I got distracted by real work
[07:33] <lifeless> StevenK: fair enough
[07:34] <lifeless> stub: I don't know what code *is* running, but it's not test_on_merge
[07:34] <stub> I can't find it in the tree
[07:34] <stub> Maybe buildbot scripts?
[07:34] <lifeless> I guess
[07:34] <lifeless> it claims it's running test_on_merge
[07:34] <stub> yer...
[07:35] <lifeless> perhaps a stale .pyc issue
[07:36] <lifeless> thats the old output
[07:36] <stub> Right. Or that branch just doesn't have my patch.
[07:36] <lifeless> and now bb is down ><
[07:36] <lifeless> stub: it does
[07:36] <lifeless> stub: I've pulled and checked
[07:36] <lifeless> rev 9848
[07:38] <lifeless> redoing just to be sure
[07:38] <lifeless> nope, its good.
[07:42] <lifeless> hmm, where *has* julian's distribution:+ppas patch gone
[07:43] <lifeless> ah, its in devel
[07:44] <lifeless> way past eod; ciao.
[07:44] <lifeless> stub: build 7 was old
[07:44] <lifeless> https://lpbuildbot.canonical.com/builders/lucid_prod_lp/builds/9/steps/shell_7/logs/stdio was current
[07:45] <lifeless> till spm nuked bb
[07:45] <StevenK> My fault
[07:48] <wgrant> Could someone be convinced to EC2 my two long-approved branches?
[07:49] <StevenK> wgrant: Oh, sorry, I meant to do that. Link me again?
[07:50] <wgrant> StevenK: https://code.edge.launchpad.net/~wgrant/launchpad/+activereviews
[07:50] <StevenK> wgrant: Rah
[07:50] <wgrant> Oh?
[07:51]  * StevenK blinks
[07:51] <StevenK>     return self._mp.queue_status == 'Approved'
[07:52] <StevenK> AttributeError: 'Entry' object has no attribute 'queue_status'
[07:52] <wgrant> The URL is right?
[07:53] <StevenK> Oh, wait
[07:54] <StevenK> wgrant: Yeah, my fault
[07:55] <wgrant> Yay.
[07:55] <wgrant> EC2 doesn't hate me for the seventh time :)
[08:02] <StevenK> Hm
[08:02] <StevenK> ec2 still uses pg8.3
[08:02] <wgrant> Eep.
[08:03] <StevenK> I think we need a new machine image
[08:07] <StevenK> wgrant: You don't really care about the ec2 URLs, right?
[08:07]  * StevenK also notes that ec2 land gets really unhappy if two copies are running
[08:22] <StevenK> wgrant: Both branches are in ec2
[08:42] <lifeless> grah we're in testfix ? :(
[08:52] <StevenK> lifeless: Only because buildbot sucks
[08:53] <adeuring> good morning
[08:59] <maxb> heh, oops
[08:59] <maxb> "
[08:59] <maxb> Invalid stacked on location: /+branch/qbzr
[08:59] <maxb> "
[08:59] <maxb> So whilst the ssh server understands those new URLs, Launchpad gets irked if you use them as stacking locations
[09:04] <lifeless> please file a bug
[09:11] <maxb> filed as bug 660358, with a bonus easter egg extra idea :-)
[09:47] <bac> lifeless: still around?
[09:50] <StevenK> mars: O hai -- our ec2 images still use postgres 8.3, could you prod them up to 8.4?
[10:00] <wgrant> StevenK: Thanks.
[10:01]  * bigjools is way behind on email
[10:01] <allenap> mrevell: Fancy looking at bug 660283?
[10:01] <_mup_> Bug #660283: Bug search pages should document valid search expressions <Launchpad Bugs:New> <https://launchpad.net/bugs/660283>
[10:01]  * mrevell looks
[10:01] <mrevell> thanks allenap
[10:03] <bigjools> I'm confused.  Should we have "timeout" and "oops" on timeouts?  Or just oops?    Or just timeout?
[10:08] <jml> bigjools: timeouts should have 'oops' and 'timeout' unless something has changed recently
[10:08] <bigjools> jml: I need to have a word with Rob then :)
[10:09] <jml> bigjools: is lifeless deleting tags?
[10:11] <bigjools> jml: a bunch of soyuz bugs had "oops timeout" turned into "timeout" IIRC and when he added the pg83 tag the oops tag got removed.  I don't know if that's deliberate.
[10:11] <jml> bigjools: me either.
[10:12] <stub> Might have been removed on purpose - pg83 indicates we don't have a current valid OOPS (although a number of non-db ones have got that tag too...)
[10:13] <jml> ahh, yeah, that'll be it.
[10:13] <bigjools> furry muff
[10:24] <lifeless> bigjools: I'm told that timeout and oops tags are mutually exclusive
[10:24] <lifeless> bigjools: by Ursinha
[10:24] <bigjools> lifeless: that seems sub-optimal to me
[10:25] <bigjools> I'll talk to her and see why, thanks
[10:25] <lifeless> https://dev.launchpad.net/LaunchpadBugTags doesn't explain
[10:25] <lifeless> there was a different page
[10:26] <lifeless> anyhow, on my second day or so I was updating bugs with 'timeout oops' as per how I read the policy
[10:26] <lifeless> and Ursinha said that it was meant to be one or the other
[10:26] <jml> bigjools: to me one seems as good as the other, just as long as we're consistent so tools can be written
[10:26] <lifeless> jml: bigjools: ah, here it is https://dev.launchpad.net/PolicyAndProcess/ZeroOOPSPolicy
[10:26] <jml> lifeless: thanks
[10:26] <lifeless> "It should be tagged with either 'oops' or 'timeout'."
[10:27] <bigjools> it doesn't say why
[10:27] <bigjools> and I find it useful to have both tags
[10:28] <lifeless> bigjools: It doesn't matter to me either way as long as I don't need to remember different rules for different parts of the same project
[10:28] <bigjools> agreed
[10:28] <lifeless> bigjools: I'd be delighted to change, if you want to bring it up with Ursinha/the list
[10:29] <lifeless> my (probably faulty) recollection is that it was for qa tooling
[10:29] <bigjools> I find it useful to search for just one tag "oops" and get everything related.  If stuff gets tagged with just "timeout" it's more time-consuming, at least for me, to remember to look for the other tag too.
[10:29] <jml> also, someday I'm going to make that oops/timeout graph that lifeless asked me for
[10:29] <bigjools> heh
[10:30] <jml> consistent tagging will be important
[10:43]  * bigjools totally loves PG84's psql that tells you what other tables reference your column
[10:44] <bigjools> stub: regarding that sql you did in the bug comment, I vaguely remember someone saying something about there being a way to check person validity directly on Person?
[10:45] <stub> bigjools: Yer - love that. Pain in the arse backtracking that stuff before
[10:46] <stub> bigjools: You need person, emailaddress and account for our current 'valid person' rules.
[10:46] <stub> bigjools: With just person, you can't tell if their account is active or if they have a preferred email address
[10:46] <bigjools> ok
[10:47] <bigjools> I still need to order on person, so I need that extra crap :(
[10:48] <stub> So my timings on current staging seem ok. lifeless got one with a 10 second query.
[10:48] <bigjools> I'll make another patch to try out with that changed query
[10:51] <bigjools> sqlobj doesn't do LEFT OUTER JOIN does it :/
[10:53] <stub> I've blotted all that from my mind.
[10:57] <bigjools> this is going to be tricky to change
[10:58] <wgrant> It's not easily Stormifiable?
[10:58] <bigjools> no
[10:59] <bigjools> take a look at Distribution.searchPPAs
[10:59] <bigjools> it has fti stuff - last time I tried that in Storm it was a world of pain
[11:00] <wgrant> If necessary you could just SQL() that bit.
[11:00] <bigjools> I could, which is what I think I did
[11:00] <bigjools> but something else is nagging me and I can't remember what it was
[11:00] <wgrant> The horrible horrible string concatenation in that method?
[11:00] <bigjools> heh
[11:01] <bigjools> that's how it was done with sqlobj
[11:01] <wgrant> yes, but that's like so three years ago.
[11:01] <bigjools> I do not want to stormify this query
[11:01] <bigjools> not right now anyway
[11:01] <stub> Seems the fastest approach to me
[11:02] <wgrant> It looks easy enough to Stormify... as long as the callsites aren't braindead.
[11:02] <bigjools> ...
[11:02] <bigjools> you know what they say about assumptions
[11:02] <wgrant> But this should have only one callsite.
[11:02] <bigjools> hahaha
[11:03] <wgrant> Well, there's only one non-test callsite that cares about the result.
[11:05]  * bigjools considers store.execute
[11:05]  * stub ♥ store.execute()
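The store.execute() route might look roughly like this sketch: the fti match and ranking stay as raw SQL rather than being Stormified, and only the ordering is pluggable. Table and column names here are illustrative, not Launchpad's actual schema.

```python
def build_ppa_search_sql(order_by='Archive.displayname'):
    # Keep the full-text bits (fti/ftq/rank) as plain SQL and hand the
    # whole thing to store.execute(); parameters are bound by name.
    return """
        SELECT Archive.id
        FROM Archive
        JOIN Person ON Archive.owner = Person.id
        LEFT OUTER JOIN EmailAddress
            ON EmailAddress.person = Person.id
           AND EmailAddress.status = %(preferred)s
        WHERE Archive.fti @@ ftq(%(text)s)
        ORDER BY rank(Archive.fti, ftq(%(text)s)) DESC, """ + order_by
```

Something like `store.execute(build_ppa_search_sql(), params)` then sidesteps SQLObject's missing LEFT OUTER JOIN support entirely.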
[11:06] <stub> Why are you ordering by Person.name anyway?
[11:06] <stub> Seems... odd...
[11:06] <bigjools> because this stuff needs to appear on +ppas
[11:07] <stub> Yer, but Person.name is a very arbitrary order.
[11:07] <bigjools> true but it's less arbitrary than anything else
[11:07] <wgrant> displayname is better.
[11:08] <wgrant> But isn't relevance better still?
[11:08] <bigjools> yes
[11:08] <stub> If you pick a field in Archive to order by, you don't have to rewrite it in Storm :)
[11:08] <bigjools> stub: I know, don't think I hadn't considered that :)
[11:08] <wgrant> Oh, you order by relevance then name, I see.
[11:09]  * stub changes his Person.name to 'aaaaaaaaaaaaaaaa_stub'
[11:09] <bigjools> I could order by relevance, then ppa name perhaps
[11:09]  * bigjools thinks
[11:09]  * nigelb points stub to http://uncyclopedia.wikia.com/wiki/AAAAAAAAA!
[11:10] <bigjools> like :)
[11:13] <bigjools> wgrant: actually something sensible needs to be the default for when no search term is supplied
[11:14] <bigjools> I like displayname to some extent
[11:14] <wgrant> bigjools: Person.name is probably not that
[11:14] <wgrant> Archive.displayname might be.
[11:14] <bigjools> indeed
[11:14] <bigjools> since it will include the person.name anyway unless they changed it :)
[11:14] <wgrant> Um, well, not any more.
[11:14] <wgrant> There is no default
[11:15] <bigjools> mmm true
[11:15] <bigjools> I think it's a better default, let's see
[11:15] <wgrant> We really need to fix that lack of default.
[11:15] <wgrant> Although it's not so bad now that the key doesn't acquire that name permanently.
[11:24] <bigjools> WARNING:root:Memcache set failed for...
[11:24] <bigjools> whut?
[11:25] <wgrant> Didn't you disable memcache on your system?
[11:26] <bigjools> I did
[11:26] <bigjools> sigh
[11:31] <bigjools> actually, not on this machine, so something else is wrong
[11:33] <bigjools> test isolation crappiness, it seems
[11:40] <jml> actually...
[11:40] <jml> mpt did a spec about consolidating the name/displayname/title thingy to make it consistent across LP
[11:40] <jml> we should do that
[11:44] <wgrant> We should do a lot of things :(
[11:44] <jml> Yes!
[11:44] <jml> Do more things!
[11:47] <wgrant> An intriguing proposal.
[11:48] <bigjools> wgrant: https://staging.launchpad.net/ubuntu/+ppas?name_filter=
[11:49] <bigjools> jml: +1000000000000000000
[11:50] <wgrant> bigjools: Looks a bit crap, but not much worse than the old one.
[11:50] <stub> The testsuite doesn't talk to the system memcache
[11:50] <bigjools> wgrant: seems better to me since it's sorting by something you can see :)
[11:50] <wgrant> bigjools: True.
[11:53] <stub> What was the hack in /etc/default to stop rabbitmq starting up? I used to use update-rc.d but that isn't nice apparently
[11:53] <wgrant> This a.bono guy has a lot of PPAs.
[11:55] <lifeless> please file a bug on the rabbit package if there isn't something in /etc/default/rabbitmq
[11:56] <bigjools> stub: I put DAEMON=true in it
[11:56] <stub> And that *stops* it starting?
[11:56] <bigjools> yes
[11:56] <stub> huh
[11:56] <mpt> jml, https://dev.launchpad.net/RegistrySimplifications
[11:56] <bigjools> best not to ask :)
[11:56] <lifeless> please file a bug, that's really non-obvious
[11:57] <jml> mpt: thank you.
[11:57] <mpt> I didn't touch Archive. though
[11:57] <wgrant> mpt: That is a lot of steps.
[11:57] <jml> wgrant: but they are simple steps.
[11:57] <wgrant> True.
[11:57] <jml> (some of them might be tedious, but that's a different axis)
[12:00] <mpt> Archive.displayname is, I guess, what ends up being shown in the left pane of Ubuntu Software Center
[12:01] <deryck> Morning, all.
[12:01] <jml> deryck: good morning
[12:07] <bigjools> morning deryck
[12:08] <bigjools> stub: so why was that join to Person having such a big effect in the query?  And only on 8.4?
[12:08] <stub> I suspect it always had that effect - from my testing it reduced a 1.5 second query to 0.5.
[12:09] <stub> bug #660460
[12:09] <_mup_> Bug #660460: Need option to not launch server on boot <amd64> <apport-bug> <maverick> <rabbitmq-server (Ubuntu):New> <https://launchpad.net/bugs/660460>
[12:11] <stub> Booger. /ubuntu/+ppas just gave me a timeout on staging.
[12:12] <stub> There seems to be a db restore going on... might just be busy
[12:18] <bigjools> yeah it was very quick last time
[12:18] <StevenK> wgrant: Both your branches failed. :-(
[12:21] <wgrant> @#@!(U$@!$!
[12:21] <wgrant> #7
[12:21] <wgrant> Although that's only failure #3 for the other one.
[12:23] <wgrant> So, um...
[12:30] <jml> StevenK: I need to check it out, but I think that's a very old "bug" in bzr
[12:33] <StevenK> jml: Sure, I'd be happy to be pointed at where the issue actually is.
[12:33] <jml> StevenK: looking now.
[12:33] <jml> iirc, it might have something to do with python's socket lib being terrible
[12:34] <jml> StevenK: https://bugs.edge.launchpad.net/bzr/+bug/193253
[12:34] <_mup_> Bug #193253: sockets being leaked in branch puller tests <Bazaar:Invalid> <Launchpad Bazaar Integration:Fix Released by stub> <https://launchpad.net/bugs/193253>
[12:37]  * jml replies on list
[12:42] <jml> done
[13:49] <LPCIBot> Project devel build (105): STILL FAILING in 4 hr 8 min: https://hudson.wedontsleep.org/job/devel/105/
[15:28] <deryck> rockstar, ping.  (read: yo, where my new lazr-js, sucka!)
[15:31] <maxb> jelmer: You know that chardet-dep branch that you fixed? Well I think you forgot to push :-)
[15:32] <jelmer> maxb: bzr says "No new revisions to push.". Looking into it..
[15:33] <sinzui> deryck, ping
[15:33] <deryck> hi sinzui
[15:34] <sinzui> deryck, I would like your +1 to run this sql update in production. I just confirmed the test run on staging did purge 2500 known spam messages from openjdk: http://pastebin.ubuntu.com/513143/
[15:35]  * deryck looks
[15:36] <rockstar> deryck, I have it reviewed and everything, but I have 9 test failures, and they are all bugs' fault.
[15:36] <deryck> "fault" is such a harsh term. ;)
[15:37] <deryck> sinzui, r=me.  Do I need to update this on the LPS page?
[15:37] <deryck> rockstar, do you need our help fixing these test failures?  Are they Windmill or yuitest?
[15:37] <sinzui> deryck, I do not know.
[15:39] <deryck> sinzui, if you didn't add the query to LPS, I wouldn't think I need to do so.  If a losa wants it added there, I can update when you ping back.
[15:39] <sinzui> I will do that now
[15:41] <rockstar> deryck, windmill.  I think I just need to see details on the failures before I can make a statement on help.
[15:41] <deryck> rockstar, ok, cool.
[15:44] <sinzui> deryck, https://wiki.canonical.com/InformationInfrastructure/OSA/LaunchpadProductionStatus
[15:44] <sinzui> deryck, bug 660541
[15:44] <_mup_> Bug #660541: Messages from the list in the moderation queue are spam, discard them with a script <mailing-lists> <Launchpad Registry:In Progress by sinzui> <https://launchpad.net/bugs/660541>
[15:44]  * deryck updates LPS page
[15:45] <deryck> sinzui, done. r=me, of course.
[15:45] <sinzui> thanks
[15:46] <deryck> np
[15:47] <jelmer> maxb: fixed
[16:00] <jcsackett> rockstar, ping.
[16:03] <rockstar> jcsackett, pong.
[16:04] <jcsackett> rockstar, could i get your input on https://bugs.edge.launchpad.net/launchpad-code/+bug/652126?
[16:04] <_mup_> Bug #652126: branch collection action portlet is missing links <bridging-the-gap> <Launchpad Bazaar Integration:In Progress by jcsackett> <https://launchpad.net/bugs/652126>
[16:04] <jcsackett> it was filed as part of our bridging-the-gap work; i think we can either dismiss it or we need to talk about a different way of presenting the statistics it's talking about.
[16:05] <jcsackett> if it's the latter, i don't want to take a stab at it without talking with someone on the code team.
[16:06] <jcsackett> rockstar ^
[16:25] <dobey> gary_poster: you uh, TIMEd? :)
[16:26] <gary_poster> dobey, you mean Time Inc?  If so, yeah :-)
[16:26] <dobey> gary_poster: i mean, i see a CTCP TIME from you :)
[16:28] <dobey> anyway, lunch is calling. bbiab
[16:28] <gary_poster> dobey: ah, yeah.  We were having a discussion of python keyring and I was bringing up concerns you mentioned in passing, and I idly did a whois sort of thing
[16:28] <gary_poster> nothing to respond to
[16:28] <dobey> ah ok
[16:28] <dobey> cheers then
[16:29] <gary_poster> bye :-)
[16:34] <jml> bigjools: otp now, fwiw.
[16:34] <jml> bigjools: errr, off the phone
[16:35] <bigjools> :)
[16:35] <bigjools> jml: tests are working now
[16:35] <jml> \o/
[16:35] <bigjools> jml: one more thing, we're still trapping xmlrpclib.Failure - is that raised by twisted?
[16:35] <jcsackett> rockstar: had a chance to look at that bug?
[16:35] <jml> bigjools: you mean xmlrpclib.Fault?
[16:35] <bigjools> yes :/
[16:35] <rockstar> jcsackett, yeah, but I'm not entirely sure why we need links there.
[16:35] <jml> bigjools: yes, it is. I think t.w.xmlrpc.Fault is just a re-export of that
[16:36] <bigjools> coolio
[16:36] <rockstar> It's displaying information, but it doesn't need to link anywhere.
[16:36] <rockstar> At least, that's my opinion.
[16:36] <bigjools> jml: committing and pushing so you can peruse, one sec
[16:36] <jcsackett> rockstar: thus my pinging you. it looks to me like that is meant as statistics, not action.
[16:36] <jcsackett> rockstar: inasmuch as the only place those could link to is the very page you're looking at.
[16:37] <jml> bigjools: ta
[16:37] <rockstar> jcsackett, yes.
[16:37] <jcsackett> rockstar: do you think it's better to dismiss the bug or find a different way to display the info?
[16:37] <jcsackett> rockstar: i'm not sure, but perhaps the sidebar things are meant to be action links only?
[16:37]  * jcsackett looks for precedent.
[16:38] <rockstar> jcsackett, well, there are lots of things wrong with that portlet.
[16:38] <bigjools> jml: http://bazaar.launchpad.net/~julian-edwards/launchpad/builderslave-resume/revision/11676
[16:38] <bigjools> well, if it would not OOPS :/
[16:38] <jcsackett> rockstar: so would something like showing at the top of the page the "N branches, M commits" and removing it from the portlet make sense?
[16:38] <rockstar> jcsackett, maybe.
[16:39] <rockstar> jcsackett, although I'd contend that it shouldn't link to active reviews if there are no active reviews...
[16:39] <jml> bigjools: I'll just pull the branch.
[16:39] <jcsackett> rockstar: also valid, but maybe that's a different bug.
[16:39] <bigjools> jml: url works now, but ok
[16:39] <rockstar> jcsackett, I hate hate hate when we do things like "0 foos in the bar"
[16:40] <jcsackett> rockstar: yeah; i can see that.
[16:40] <rockstar> jcsackett, if you're going to fix this portlet, I suggest killing the whole portlet and moving all of the information to be part of the page.
[16:40] <jcsackett> rockstar: i was just thinking that.
[16:40] <jcsackett> jcsackett: perhaps placed above the drop down selector
[16:40]  * jcsackett looks at what he typed.
[16:41] <jcsackett> i clearly need more sleep and/or coffee. rockstar, last comment was meant for you, not me. :-P
[16:43] <jml> bigjools: instead of: "server_proxy.queryFactory = QuietQueryFactory" you could also do "server_proxy.queryFactory.noisy = False" and then delete the QuietQueryFactory class
[16:44] <bigjools> jml: I know, I kinda like the explicit factory
[16:44] <bigjools> but meh
[16:44] <jml> bigjools: yeah, it's a taste thing. I think they are both equally explicit in terms of intent though
[16:46] <jml> bigjools: looks good.
[16:46] <bigjools> jml: cool, thanks.  So now I have some more tests to write that I thought of that are lacking from the previous manager.
[16:46] <jml> bigjools: it might be worth adding an errback that transforms the CancelledError into a TimeoutError or something, just to make the API clearer.
[16:46] <bigjools> ah very good point
[16:46] <jml> bigjools: and _with_timeout probably ought to be a standalone function in lp.services.twistedsupport
[16:47] <bigjools> also we need the top-level to handle it
[16:48] <jml> handle the timeout? yes.
[16:48] <bigjools> at the moment it would print a stack trace
[16:48] <bigjools> it just needs to be added to the list of known failures
[16:49] <bigjools> jml: oh one more thing, when testing on dogfood I noticed that not all types of failure would get back to the top level scan(), which left the scan "dead" after a traceback.
[16:49] <bigjools> I'm not sure what was going on - but at the very least I need to make sure that *all* errors are caught in scan()
[16:50] <bigjools> and I don't know why it's not doing that
[16:50] <jml> bigjools: hmm.
[16:50] <jml> bigjools: I don't either – that's not much to go on. Was it an "Unhandled error"?
[16:50] <bigjools> once it gets in there the failure counting will deal nicely with it
[16:50]  * bigjools hunts logs
[16:51] <bigjools> jml: yes, Unhandled Error
[16:51] <jml> bigjools: ahh right
[16:51] <bigjools> it was a coding mistake
[16:52] <bigjools> but it should be caught further up
[16:52] <bigjools> the manager should trap those and kill the build
[16:52] <jml> bigjools: that means a Deferred somewhere has had errback called on it, but there's nothing handling that error. Normally either the deferred is not being returned ...
[16:52] <jml> or it is and someone didn't add errback
[16:52] <bigjools> hmmm
[16:52] <jml> bigjools: there's not many options open to you if you don't return the Deferred.
[16:53] <bigjools> jml: so since _startCycle() does d.addErrback(self._scanFailed) then I presume it's the former
[16:54] <jml> bigjools: or something down deeper.
[16:54] <jml> bigjools: setting 'debug' (or is it 'DEBUG'?) on the Deferred class to True will give you more information
[16:54] <bigjools> debug
[16:55] <bigjools> can you get on mawson?
[16:55] <jml> no
[16:56] <bigjools> http://pastebin.ubuntu.com/513226/
[16:56] <bigjools> that's the general type of thing
[16:57] <bigjools> I need to handle shutdown gracefully as well, did we work out if there was a reactor hook?
[16:58] <jml> bigjools: I didn't find out.
[16:59] <jml> bigjools: part of the problem with unhandled errors is that there's no way of telling a particular error is unhandled until the gc kills the deferred with the error.
[16:59]  * jml has a look at the code
[16:59] <bigjools> and that's bad, because in at least one case it failed the builder immediately (I need to purge all those)
[17:00] <bigjools> jml: def defer_with_timeout(self, d, timeout):
[17:00] <bigjools> look ok?
[17:00] <bigjools> in lib/lp/services/twistedsupport/xmlrpc.py
[17:02] <rockstar> jcsackett, I think that's fine.  It'll probably make that page a little busier.
[17:03] <jml> bigjools: it's actually not anything to do w/ xmlrpc, I'd put it in __init__ and call it cancel_on_timeout, I think.
[17:03] <jml> bigjools: and I presume you saw the recent discussion on #twisted
[17:03] <bigjools> no, not paying attention
[17:03] <jcsackett> rockstar: i think if it's refactored into one smooth sentence it'll be alright. i'll get a ui review on it when i'm done to be sure though.
[17:03] <jml> bigjools:  reactor.addSystemEventTrigger('before', 'shutdown', f)
[17:03] <jml> bigjools: also, we should be using Services, but one thing at a time
[17:04] <bigjools> oo nice
[17:07] <jml> mrevell: do we have any docs about the product release finder?
[17:09] <mrevell> hmm
[17:11] <mrevell> jml we have this http://blog.launchpad.net/cool-new-stuff/automatically-import-files-to-launchpad-using-product-release-finder
[17:14] <jml> mrevell: ta
[17:20] <bigjools> jml: I want to test the scenario where the deferred completes instead of cancelling
[17:20] <bigjools> and struggling a bit
[17:21] <jml> bigjools: from a proxy or from cancel_on_timeout?
[17:21] <jml> bigjools: from a proxy, just do DeadProxy again but with 'return defer.succeed(None)'
[17:21] <bigjools> jml: testing the cancel on timeout
[17:22] <jml> bigjools: from cancel_on_timeout, d = cancel_on_timeout(defer.succeed(None), timeout=10)
[17:22] <bigjools> jml: can I assert it was not cancelled?
[17:22] <bigjools> that will not fail either way :)
[17:22] <jml> bigjools: sort of. you can assert that you can add a callback to it and get whatever you passed in to succeed()
[17:23] <jml> bigjools: and rely on the TestCase to assert that the callLater has been cancelled
[17:23] <bigjools> right
[17:23] <jml> bigjools: if that's too implicit for you...
[17:24] <jml> actually... that won't work anyway
[17:24] <jml> self.assertEqual([], clock.getDelayedCalls())
[17:25] <jml> because Trial makes assertions about the actual reactor, and presumably you are passing in a fake.
[17:26] <jml> (see IReactorTime for docs on that)
[17:28] <bigjools> implicit is fine
[17:28] <jml> bigjools: not if you are passing in a clock, it isn't. it won't make the assertion at all.
[17:28] <bigjools> oh, meh
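The test jml sketches above (add a callback to an already-fired deferred, then check the fake clock has no pending calls) can be illustrated with plain-Python stand-ins. Everything here except the name `cancel_on_timeout` is illustrative: `FakeClock` mimics the small slice of `twisted.internet.task.Clock` the test needs, and `FiredDeferred` plays the role of `defer.succeed(value)`.

```python
class FakeDelayedCall:
    """Stand-in for the object Twisted's callLater returns."""
    def __init__(self):
        self._cancelled = False
    def active(self):
        return not self._cancelled
    def cancel(self):
        self._cancelled = True

class FakeClock:
    """Mimics the bits of twisted.internet.task.Clock the test uses."""
    def __init__(self):
        self._calls = []
    def callLater(self, delay, func):
        call = FakeDelayedCall()
        self._calls.append(call)
        return call
    def getDelayedCalls(self):
        return [c for c in self._calls if c.active()]

class FiredDeferred:
    """Stand-in for defer.succeed(value): callbacks run immediately."""
    def __init__(self, value):
        self.value = value
    def addBoth(self, func):
        self.value = func(self.value)
        return self

def cancel_on_timeout(d, timeout, clock):
    # In real Twisted the delayed call would invoke d.cancel.
    delayed = clock.callLater(timeout, lambda: None)
    def stop_timer(result):
        if delayed.active():
            delayed.cancel()        # result arrived first: stop the timeout
        return result
    return d.addBoth(stop_timer)

clock = FakeClock()
d = cancel_on_timeout(FiredDeferred("result"), timeout=10, clock=clock)
assert d.value == "result"            # the callback still sees the value
assert clock.getDelayedCalls() == []  # the timeout was cancelled, not left pending
```

The second assertion is the explicit version of jml's `self.assertEqual([], clock.getDelayedCalls())` above; with a fake clock it has to be made explicitly, since Trial only checks the real reactor.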
[17:30] <LPCIBot> Project db-devel build (66): STILL FAILING in 4 hr 19 min: https://hudson.wedontsleep.org/job/db-devel/66/
[17:31] <jml> now, where was I
[17:31] <jml> error handling.
[17:32] <LPCIBot> Project devel build (106): STILL FAILING in 3 hr 43 min: https://hudson.wedontsleep.org/job/devel/106/
[17:32] <LPCIBot> Launchpad Patch Queue Manager: [r=jelmer][ui=none][bug=656295] Allow derivers to specify which
[17:32] <LPCIBot> packagesets they want to copy to the child distroseries in
[17:32] <LPCIBot> InitialiseDistroSeries.
[17:34] <bigjools> jml: http://pastebin.ubuntu.com/513242/
[17:35] <jml> bigjools: sweet.
[17:35] <bigjools> I'll commit here but land it in a separate branch too
[17:35] <jml> bigjools: good call.
[17:36] <bigjools> one of the few things I can :)
[17:36] <jml> bigjools: I've just looked at that unhandled error in the paste you posted earlier
[17:37] <jml> bigjools: this is the relevant code:
[17:37] <jml>         d = self.scan()
[17:37] <jml>         d.addErrback(self._scanFailed)
[17:37] <jml>         return d
[17:37] <jml> if _scanFailed fails unexpectedly, nothing will handle that
[17:37] <bigjools> ha
[17:37] <jml> so you could change it to:
[17:37] <jml>         d = self.scan()
[17:37] <jml>         d.addErrback(self._scanFailed)
[17:37] <jml>         d.addErrabck(self._handleUtterDisaster)
[17:37] <jml>         return d
[17:37] <jml> but I'm not sure that would win you very much.
[17:37] <bigjools> so I need an error handler for the error handler
[17:38] <jml> (particularly if you typo addErrback!)
[17:38] <bigjools> belt & braces
[17:38] <bigjools> :)
[17:39] <jml> bigjools: hmm, I'm not sure writing more code to handle runtime cases where there are programming errors counts as belt&braces
[17:39] <jml> *deleting* code would be a far more sound strategy :)
[17:39] <bigjools> I know what you mean, *but* there is one piece of code that's out of the manager's hands
[17:39] <bigjools> builder.getCurrentBuildFarmJob()
[17:40] <bigjools> that can failed and we have no control
[17:40] <bigjools> fail, even
[17:40] <jml> fair enough. in that case maybe put a try/except around it in _scanFailed?
[17:40] <bigjools> can do - and fail the job immediately
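The failure mode discussed above (an errback that itself fails leaves an "Unhandled error in Deferred") can be shown with a toy errback chain. This is only a model to make the point; Twisted's real `Deferred` is far more involved, and `scan_failed` here is a hypothetical stand-in for `_scanFailed`.

```python
class MiniDeferred:
    """Toy model of a Deferred's errback chain, illustration only."""
    def __init__(self, error):
        self.error = error          # pending failure, or None once handled
        self._errbacks = []
    def addErrback(self, handler):
        self._errbacks.append(handler)
        return self
    def run(self):
        for handler in self._errbacks:
            if self.error is None:
                break               # failure already consumed
            try:
                handler(self.error)
                self.error = None   # handler succeeded: failure consumed
            except Exception as exc:
                self.error = exc    # handler itself blew up: new failure
        return self.error           # non-None here == "Unhandled error"

def scan_failed(error):
    # e.g. builder.getCurrentBuildFarmJob() failing inside the handler
    raise RuntimeError("bug in the error handler")

d = MiniDeferred(ValueError("scan failed"))
d.addErrback(scan_failed)
assert isinstance(d.run(), RuntimeError)   # the handler's own error leaks

d2 = MiniDeferred(ValueError("scan failed"))
d2.addErrback(scan_failed)
d2.addErrback(lambda err: None)            # the "error handler for the error handler"
assert d2.run() is None                    # nothing leaks now
```

This is why jml suggests either a second errback or a try/except inside the handler: in the model above, only a later handler (or internal catch) stops the handler's own exception from going unhandled.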
[17:51] <jml> unfortunately I have to go now.
[18:11] <dobey> abentley: around?
[18:18] <gary_poster> bac: hi?
[18:24] <abentley> dobey: back.
[18:26] <dobey> abentley: hi. about the bzr upgrade of remote branches issues. is there some way to separately upgrade the repository from the branch itself? or is that what the link on the web does?
[18:27] <abentley> dobey: I don't understand the question.  Technically, the branch will already be in format 7, and its repository is the only thing that could be upgraded.
[18:28] <dobey> abentley: ok, i upgraded a branch to 2a several weeks ago, and no new revisions have been committed, and the web page still says KnitPacks5 for the repository format
[18:28] <dobey> abentley: https://code.edge.launchpad.net/~configglue/configglue/trunk
[18:31] <abentley> dobey: the branch and its repository have been upgraded, but the database is out of sync.
[18:32] <dobey> abentley: and there's nothing i can do myself short of committing a new revision to force a rescan, right?
[18:32] <abentley> dobey: well, we could try taking out a branch lock and seeing if that triggers a scan.
[18:33] <abentley> dobey: or you could uncommit a revision and then push it again.
[18:35] <dobey> abentley: ok. i'm not especially concerned at this point. i was just wondering since your e-mail suggested this case shouldn't be happening, as an upgrade should cause a rescan to trigger.
[18:35] <abentley> It shouldn't be happening.  An upgrade should cause a rescan to trigger.  I can investigate why that didn't happen, now that I have an example.
[18:37] <dobey> abentley: do you know more specifically when upgrades should have started causing a rescan? if the code that handles that was deployed after i did the upgrade, that would probably be the reason. but "these days" isn't a very exact metric :)
[18:37] <jcsackett> is there a way to force lowercase on text in a template? ideally without using "python: " ?
[18:38] <abentley> dobey: months ago.
[18:38] <dobey> ah ok
[18:39] <abentley> dobey: probably around May.
[18:39] <dobey> yeah this was definitely upgraded after that then
[18:40] <dobey> well the last revision was in August, and I think I did the upgrade probably about 3-4 weeks ago
[18:41] <abentley> dobey: my logs go back to Sept 23.  Was it before that?
[18:42] <dobey> let me see if i can find an exact date for it
[18:42] <dobey> abentley: it probably would have been sept 22 or 23 though
[18:43] <dobey> Thu 2010-09-23 11:09:34 -0400
[18:43] <dobey> 0.025  bazaar version: 2.2.0
[18:43] <dobey> 0.025  bzr arguments: [u'upgrade', u'--default', u'lp:configglue']
[18:43] <dobey> abentley: should i file a bug?
[18:46] <abentley> dobey: I guess.  I'm not sure we can do more than mark it incomplete, at this stage.
[18:47] <abentley> dobey: what's your offset from UTC?
[18:49] <dobey> abentley: -0400
[18:50] <abentley> So 11:09 would be 15:09
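abentley's conversion can be double-checked with the standard library's `datetime` (the timestamp is the one dobey pasted above):

```python
from datetime import datetime, timedelta, timezone

# dobey's log timestamp, recorded at UTC-4
local = datetime(2010, 9, 23, 11, 9, 34, tzinfo=timezone(timedelta(hours=-4)))
utc = local.astimezone(timezone.utc)
assert (utc.hour, utc.minute) == (15, 9)  # 11:09 -0400 == 15:09 UTC
```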
[18:50] <abentley> dobey: So you remotely upgraded the branch, rather than using the web page, right?
[18:51] <dobey> abentley: right, i did bzr upgrade --default lp:configglue
[18:52] <abentley> dobey: okay, it's possible we don't handle that case.  I was thinking of web-based upgrades, where I know we handle it.
[18:55] <dobey> abentley: ok, i'll file a bug
[19:03] <LPCIBot> Yippie, build fixed!
[19:03] <LPCIBot> Project devel build (107): FIXED in 1 hr 30 min: https://hudson.wedontsleep.org/job/devel/107/
[19:03] <LPCIBot> * Launchpad Patch Queue Manager: [r=leonardr][ui=none][no-qa] Modify xx-person-subscriptions.txt not
[19:03] <LPCIBot> to use sample data.
[19:03] <LPCIBot> * Launchpad Patch Queue Manager: [r=lifeless][ui=none][bug 46581] Do not oops when users hack poll
[19:03] <LPCIBot> types.
[19:03] <_mup_> Bug #46581: Change a poll type URL manually crashes <oops> <polls> <Launchpad Registry:Fix Committed by sinzui> <https://launchpad.net/bugs/46581>
[19:04] <dobey> abentley: filed bug #660695
[19:04] <_mup_> Bug #660695: Remote upgrade not triggering rescans <Launchpad Bazaar Integration:New> <https://launchpad.net/bugs/660695>
[19:05] <abentley> dobey: I can't reproduce it.  I just created a branch and upgraded it remotely.
[19:05] <dobey> hmm
[19:06] <dobey> abentley: let me see if i have another branch i can try on
[19:06] <abentley> dobey: And it shows the correct repository format.
[19:06] <abentley> dobey: wrong branch format, though.
[19:08] <dobey> hmm
[19:21] <dobey> abentley: interesting. i see that as well on another branch i tried. and yet another i tried, has the right branch format, but still says KnitPack6
[19:23] <abentley> dobey: you mean, you just created a new branch in KnitPack6, upgraded it, and it's showing the wrong data?
[19:23] <dobey> abentley: no, i found a branch already on lp which i already owned, which was in the old format, and upgraded it
[19:24] <abentley> dobey: but you upgraded it just now?  Okay, maybe it's something to do with the format.
[19:24] <dobey> yes
[19:25] <dobey> abentley: it seems that things that are already BranchFormat7, exhibit the issue of showing the old Repository format info
[19:25] <abentley> dobey: aha!
[19:25] <dobey> abentley: and BranchFormat6 branches seem to always show BranchFormat6, but the Repository format gets updated
[19:27] <dobey> abentley: that makes sense to you i take it? :)
[19:27] <abentley> dobey: I can see how such a bug would happen.  Someone misunderstood the significance of branch format, and assumed if it hadn't changed, the repository format hadn't changed.
[19:29] <dobey> abentley: ah. and presumably the reverse as well, in the other case where BranchFormat doesn't change?
[19:29] <dobey> (doesn't change in the UI)
[19:29] <abentley> dobey: AFAICT, there are no cases where the branch changes in the UI from an upgrade alone.
[19:31] <dobey> abentley: so for the case where the UI still shows BranchFormat6 after an upgrade to 2a, is a separate bug?
[19:31] <abentley> dobey: yes, I've filed a bug for it.
[19:31] <dobey> abentley: ok, cool. glad i could help :)
[19:31] <abentley> dobey: bug 660706
[19:31] <_mup_> Bug #660706: Wronng branch format shown on web page after remote upgrade <branch-scanner> <Launchpad Bazaar Integration:Triaged> <https://launchpad.net/bugs/660706>
[19:34] <abentley> dobey: Successfully reproduced the bug.
[19:39] <dobey> abentley: cool.
[20:09] <mwhudson> good morning
[20:11] <lifeless> moin
[21:07] <wgrant> Any progress on the EC2 breakage?
[21:08] <mwhudson> wgrant: i just sent a mail that may be related
[21:08] <wgrant> Ah.
[21:08]  * wgrant looks.
[21:09] <wgrant> Hm, yes.
[21:09] <lifeless> mwhudson: so, have you filed it upstream yet?
[21:09] <mwhudson> lifeless: no
[21:10] <mwhudson> at least, not that i remember
[21:15] <lifeless> mwhudson: (hint)
[21:16] <mwhudson> yeah yeah
[21:16] <mwhudson> i don't really want to remember the details, they made me very angry
[21:18] <lifeless> Simple* can do that ;)
[21:20] <mwhudson> i'll do it when i get through my mail
[21:20] <LPCIBot> Project db-devel build (67): STILL FAILING in 3 hr 49 min: https://hudson.wedontsleep.org/job/db-devel/67/
[21:20] <LPCIBot> * Launchpad Patch Queue Manager: [rs=buildbot-poller] automatic merge from stable. Revisions: 11696
[21:20] <LPCIBot> included.
[21:20] <mwhudson> which has a new way for oopsprune to fail today
[21:20] <LPCIBot> * Launchpad Patch Queue Manager: [r=gmb][ui=none][bug=655567] Wire up structural bug subscription
[21:20] <LPCIBot> filters to the subscriber selection machinery.
[21:20] <mwhudson> IOError: [Errno 13] Permission denied: '/srv/launchpad.net/production-logs/2010-10-14/57341.C3848'
[21:23] <wgrant> I wonder why Hudson managed to discover the errors more than a week before EC2.
[21:26] <lifeless> wgrant: hudson is nice :)
[21:27] <wgrant> Well, yes, but...
[21:30] <lifeless> mwhudson: #python-testing may interest you as a lurking channel
[21:30] <lifeless> I need a teddy bear for where to put some testing code
[21:30] <mwhudson> lifeless: lp.testing.$whatever?
[21:30] <lifeless> mwhudson: mocker vs fixtures.*
[21:30] <mwhudson> ah
[21:31] <lifeless> so, I'm writing some code that invokes external processes.
[21:31] <lifeless> I have no interest in actually invoking them in my tests.
[21:31] <lifeless> partly because the process is bin/test so it would be hugely meta to do so
[21:31] <lifeless> and partly because the zope stack is so slow I'd die of old age waiting for things to run
[21:33] <wgrant> Can I send you off on a diversion to fix that slowness? :P
[21:33] <wgrant> That generally seems to be fairly effective.
[21:33] <lifeless> wgrant: yes, ditch zcml & global state. Thanks, I'll expect a patch :)
[21:35] <jam> mwhudson: just to be sure, merging *from* devel and proposing *into* db-devel is always safe, right?
[21:36] <jam> (aside from short-term bugs in devel that haven't made it into stable yet, that would eventually cause testfix mode)
[21:36] <mwhudson> jam: yeah, but you'll end up with a diff that's way bigger than necessary in the mp
[21:36] <mwhudson> it sucks that we can't retarget the existing mp
[21:36] <mwhudson> thumper: hey, fix that :-p
[21:37] <thumper> yeah yeah
[21:37] <thumper> I've talked with wally about that
[21:38] <jam> mwhudson: why would it be too big? Stuff that devel has that hasn't propagated into db-devel yet?
[21:38] <wgrant> Right. That can take days if buildbot is screwed.
[21:38] <wgrant> Which is fairly often these days.
[21:39] <mwhudson> jam: yeah, exactly
[21:39] <lifeless> mwhudson: so where was I
[21:40] <lifeless> right, I want to use a test double for 'subprocess.Popen'
[21:40] <lifeless> I *could* use a mocking library
[21:40] <lifeless> but I'll still have an implementation of subprocess hanging around which is a little larger than a mock
[21:41] <lifeless> So instead I'm thinking to add a tested double and a glue fixture, with setUp monkey patching the system module
[21:41] <lifeless> mwhudson: what do you think?
[21:42] <wgrant> lifeless: You wanted less global state, but you are advocating monkeypatching out subprocess.Popen? :P
[21:42] <lifeless> wgrant: sadly yes. But for the duration of a single test
[21:43] <lifeless> wgrant: modules are themselves global state; we can work better layers up on top of them though.
[21:43] <lifeless> another option is to make the code I write take a Popen implementation
[21:46] <mwhudson> dependency injection woo
[21:46] <mwhudson> that said, i think monkey patching will produce less wtfs per second
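The "glue fixture" lifeless describes (setUp monkey-patches `subprocess.Popen` with a test double, tearDown restores it) can be sketched like this. The class and method names are hypothetical; the real code would build on the `fixtures` library rather than hand-rolling the setUp/tearDown pair.

```python
import subprocess

class FakePopenFixture:
    """Sketch: swap subprocess.Popen for a recording double, for one test only."""
    def __init__(self):
        self.procs = []                    # commands the code under test "ran"
    def setUp(self):
        self._real_popen = subprocess.Popen
        subprocess.Popen = self._fake_popen
    def tearDown(self):
        subprocess.Popen = self._real_popen
    def _fake_popen(self, args, **kwargs):
        self.procs.append(args)            # record instead of spawning bin/test
        return None                        # a fuller double would mimic a Popen object

fixture = FakePopenFixture()
fixture.setUp()
try:
    subprocess.Popen(["bin/test", "-vvt", "lp.codehosting"])
finally:
    fixture.tearDown()
assert fixture.procs == [["bin/test", "-vvt", "lp.codehosting"]]
assert subprocess.Popen is fixture._real_popen  # real Popen restored
```

The dependency-injection alternative lifeless mentions would instead have the code under test accept a `popen` callable as a parameter, so tests pass the double in explicitly and nothing global is touched.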
[21:47] <wgrant> jelmer: Hi.
[21:48] <lifeless> yeah
[21:48] <jelmer> wgrant: Hello.
[21:49] <wgrant> jelmer: I was just stalking recent Launchpad branches, and I think the assert in 135610-duplicated-ancestry is still a little wrong.
[21:50] <lifeless> hmm, I probably need to upload fixtures to debian soonish
[21:50] <wgrant> There can be more than one Published publication if the publisher hasn't finished yet.
[21:50] <wgrant> (it first marks everything as Published, then later marks old ones Superseded)
[21:52] <jelmer> lifeless: NEW is stalled again though :-/
[21:52] <jelmer> wgrant: Hmm
[21:53] <jam> when logging into the local launchpad instance, what username are you supposed to use? (I'm trying to get an ssh key registered, etc)
[21:53] <jam> It seems to be redirecting me to "testopenid.dev" but that isn't letting me create a new account
[21:53] <wgrant> jam: admin@canonical.com is an admin.
[21:53] <wgrant> admin@canonical.com:test
[21:54] <jam> wgrant: and that person is then "name16"... very strange
[21:54] <jam> but it worked, thanks
[21:54] <wgrant> jam: Well, username != password
[21:55] <wgrant> I think there's a script (utilities/make-lp-user?) which will create a new user and upload your SSH key to it.
[21:55] <jam> wgrant: or login name, or account name, or...
[21:55] <wgrant> Not sure if it still works.
[21:55] <wgrant> jam: By password I of course meant email address.
[21:55] <wgrant> username != email address
[21:56] <jam> wgrant: yeah, but also calling "admin@canonical.com" "name16" isn't very obvious, either
[21:56] <jam> "admin" would have been a bit more obvious
[21:56] <wgrant> admin@canonical.com is a relatively new alias.
[21:56] <wgrant> name16 dates to 2005 or so.
[21:56] <wgrant> Yay sampledata.
[21:56] <jam> I wouldn't have too much problem, but it seems that every "make schema" nukes the stored ssh keys
[21:56] <wgrant> Why are you running make schema so often?
[21:56] <jam> so just about every time I update my launchpad branch, I have to start from scratch
[21:57] <wgrant> You could always upgrade your DB with database/schema/upgrade.py, I guess.
[21:57] <jam> wgrant: I get a lot of "schema is out of date" failures, but partly because this was originally based on db-devel
[21:57] <wgrant> But make-lp-user may be a better option.
[21:58] <wgrant> I have a couple of scripts which I use to prepare a fresh DB.
[21:58] <wgrant> So make schema isn't so bad.
[22:07] <wallyworld>  abentley, thumper, rockstar: standup?
[22:07] <rockstar> wallyworld, sure.
[22:14] <poolie> jam, so, still hoping for a landing before uds?
[22:14] <jam> poolie: well, land, maybe deployed to staging, pretty unlikely to be deployed to production
[22:14] <jam> is my current thought
[22:15] <wgrant> Hm.
[22:15] <wgrant> launchpad-developer-dependencies should depend on lpreview-body, or something.
[22:15] <jam> mwhudson: so I haven't reproduced the failing tests yet, but mostly because "make schema && bin/test -vvt lp.codehosting" seems to be awfully slow going
[22:15] <jam> but I'll hopefully get there before I stop for tonight
[22:16] <mwhudson> jam: you should definitely only have to run make schema once per branch
[22:16] <jam> mwhudson: well, and every time I update from somewhere
[22:16] <jam> I think I was slightly out of date from the last time I pulled in db-devel
[22:16] <mwhudson> jam: if you're targeted devel now, then merge from there, commit, run make schema
[22:16] <mwhudson> once :)
[22:16] <jam> mwhudson: right
[22:17] <mwhudson> it can be a pain though, it takes a long time
[22:17] <poolie> jam so it looks like you're not blocked, at least
[22:17] <jam> I'm at the point that "bin/test" has been running for 20 minutes or so, and it is on "lp.codehosting.tests"
[22:17] <jam> take that back "bin/test" has been running for 1:06
[22:18] <poolie> jam i recently fixed a bug where resizing the window made the tests crash
[22:18] <poolie> _that_ is annoying when they've been going for an hour
[22:18] <jam> poolie: definitely
[22:19] <jam> I'm running it on a 1GB VM, and having bin/test fork bin/test fork python fork... and all at 177MB of virtual memory isn't very happy, either
[22:19] <mwhudson> argh
[22:19] <jam> I'm pretty sure lp. has now bloated from about 120MB to about 150MB since I last updated
[22:20] <jam> mwhudson: I at least make sure to "/etc/init.d/apache2 stop" since that would be running yet-another copy of lp.*
[22:22] <lifeless> jam: yes, layers are terrible (layers support the idea of unteardownable fixtures)
[22:22] <lifeless> which is frankly batshit insane
[22:23] <poolie> hi lifeless
[22:24] <rockstar> wallyworld, do you have time for a chat now, or do you have kids to deal with?
[22:25] <wallyworld> rockstar: i have to do the school drop off - can you ping me later after you've done your things?
[22:25] <lifeless> hi poolie
[22:25] <rockstar> wallyworld, my things don't need to be done for 2.5 hours, so ping me when you're back.
[22:25] <jam> lifeless: would it matter if you just made sure all of the layers that *launchpad* has can be torn down?
[22:26] <lifeless> jam: at that point we can just drop in testresources :)
[22:26] <rockstar> lifeless, yes please.
[22:26] <wgrant> Does lp-propose do prereq branches?
[22:26] <wallyworld> rockstar: actually, we can have a chat now if you like. i don't have to do it for another 20 mins or so
[22:26] <rockstar> wallyworld, okay.
[22:26] <jam> lifeless: so is that why it forks its own children? So that it can ensure clean state?
[22:27] <lifeless> jam: yes, it forks its own child when it hits 'NotImplementedError' in a layer tearDown
[22:27] <jml> best feature evar
[22:27] <wallyworld> rockstar: skype?
[22:27] <rockstar> wallyworld, yea, I'm up.
[22:27] <lifeless> and depending on where it's at in the test process hitting that error means it won't tear down other layers, so a month ago you'd have had multiple librarians, memcached, etc all running too
[22:28] <jam> any ideas on the "No module named mailman.monkeypatches.*" stuff? I did just run "rocketfuel-get" and "make"
[22:28] <mars> jam, try 'make clean && make' just to be safe, and check for conflicts
[22:28] <lifeless> jam: rm -rf lib/mailman
[22:28] <lifeless> jam: then do 'make'
[22:29] <jml> mwhudson: I can no longer understand your comment in layers.py about twisted signal handling: http://paste.ubuntu.com/513406/
[22:29] <jam> lifeless: so you're saying that I need to start this 1+ hour test run from scratch again...
[22:29] <jam> :)
[22:29] <lifeless> jam: it's a stale statically-copied mailman element
[22:30] <jam> I find it interesting that the import fascist complains so loudly, yet it doesn't stop anything from running
[22:30] <mwhudson> jml: it's possible it doesn't apply with twisted any more
[22:30] <jml> mwhudson: well, aiui, when the reactor is running there *is* a sigchld handler installed
[22:31] <mwhudson> jml: let me read some code again
[22:31] <jml> mwhudson: ok.
[22:32] <mwhudson> jml: ah ok
[22:32] <mwhudson> the point is to make sure that the sigchld handler is not installed while the test_* method runs
[22:33] <jml> mwhudson: the point of the comment or the code?
[22:33] <mwhudson> if a test returns a non-fired deferred, trial starts the reactor
[22:33] <mwhudson> which installs a sigchld handler
[22:33] <mwhudson> and then stops/crashes/mutilates it again
[22:33] <mwhudson> which doesn't uninstall the handler
[22:33] <mwhudson> jml: the code
[22:34] <mwhudson> jml: i can probably rephrase the comment a bunch
[22:34] <jml> mwhudson: I thought the code was about undoing the mangling that trial does
[22:34] <mwhudson> jml: i don't think so
[22:34] <jml> mwhudson: then save & restore are odd names
[22:35] <mwhudson> jml: they're about undoing the mangling that reactor.run does
[22:35] <jml> mwhudson: oh ok, that's what I meant.
[22:36] <jml> mwhudson: if you could rephrase that comment that would help me, I think
[22:38] <mwhudson> jml: ok
[22:40] <mwhudson> jml: twisted no longer raises PotentialZombieWarning
[22:40] <jml> mwhudson: so we can forget about the comment?
[22:40] <mwhudson> well, simplify it a lot
[22:40] <jml> heh
[22:41] <mwhudson> i think we suppressed the warnings in a few tests, but i deleted the suppressions when i upgraded to 10.1
[22:44] <mwhudson> jml: it's still a bit of a novel: http://paste.ubuntu.com/513424/
[22:45] <jml> mwhudson: that's good, thanks.
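The save/restore dance mwhudson describes (remember the SIGCHLD handler before the test, put it back afterwards, so a reactor started and stopped inside the test can't leak its handler) looks roughly like this. The class name is hypothetical and this assumes a POSIX platform; the real logic lives in Launchpad's layers.py.

```python
import signal

class SignalStateFixture:
    """Sketch: snapshot the SIGCHLD handler around a test."""
    def setUp(self):
        self._saved = signal.getsignal(signal.SIGCHLD)
    def tearDown(self):
        signal.signal(signal.SIGCHLD, self._saved)

fixture = SignalStateFixture()
before = signal.getsignal(signal.SIGCHLD)
fixture.setUp()
signal.signal(signal.SIGCHLD, signal.SIG_IGN)  # the test (or reactor.run) mangles it
fixture.tearDown()
assert signal.getsignal(signal.SIGCHLD) == before  # handler restored
```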
[22:46] <jml> mwhudson: so if we replaced our Twisted tests with something that always ran the reactor, basically we'd be unable to use tachandler as-is
[22:46] <LPCIBot> Project devel build (108): FAILURE in 3 hr 42 min: https://hudson.wedontsleep.org/job/devel/108/
[22:46] <LPCIBot> * Launchpad Patch Queue Manager: [r=edwin-grubbs][ui=none][bug=347218] Allow a project's or
[22:46] <LPCIBot> distribution's bug supervisor to set the official bug tags for it.
[22:46] <LPCIBot> * Launchpad Patch Queue Manager: [r=gary][ui=none][bug=628510] Fixes bug 628510 by overriding the
[22:46] <LPCIBot> default permission value for the oops file and oops dir when
[22:46] <LPCIBot> those are created.
[22:46] <_mup_> Bug #628510: OSError at /oops.py/  when using lp-oops <canonical-losa-lp> <Launchpad Foundations:In Progress by matsubara> <https://launchpad.net/bugs/628510>
[22:47] <mwhudson> jml: yeah
[22:47] <gary_poster> "IOError: [Errno 28] No space left on device"
[22:47] <lifeless> \o/
[22:47] <mwhudson> yeeeeeeeeeeeha!
[22:48] <wallyworld> rockstar: just checked - old version has same issue :-(
[22:50] <rockstar> wallyworld, okay. Maybe the problem IS in our code.
[22:50]  * rockstar sads
[22:51] <jml> mwhudson: ok, thanks. the testtools/twisted thing I'm coming up with does the signal save/restore, but always runs tests in the reactor
[22:51] <jml> mwhudson: so I guess I'm going to have to fix tachandler
[22:52] <mwhudson> jml: ok
[22:52] <mwhudson> jml: i think there is something in twisted that is supposed to do signal handling in a less interruptive fashion
[22:53] <mwhudson> jml: but i don't know the details and it didn't seem to work for the buildd-manager so ...
[22:53] <jml> mwhudson: maybe I'll be forced to write APIs for twistd
[22:54] <mwhudson> jml: woo
[22:57] <jml> I would like some more days
[22:59] <poolie> hello jml
[22:59] <jml> poolie: hello
[22:59] <jml> poolie: do you have some days that I can use?
[23:10] <thumper> jml: you can use saturday and sunday :)
[23:10] <jml> thumper: you'd think so
[23:10] <jml> thumper: but they are decreasingly available to me for programming
[23:12] <poolie> jml, octember's looking good
[23:14] <thumper> :)
[23:14] <jml> poolie: Cool, thanks. I'll see what I can do.
[23:21]  * jml -> bed
[23:21] <jml> g'night