[01:47] <wallyworld_> rockstar: ping?
[01:47] <rockstar> wallyworld_, hi.
[01:48] <wallyworld_> hey, with merge queues, i had started to do personproduct listings as a next step for person listings. is that something you still need me to do?
[01:48] <wallyworld_> and i guess product listings
[01:50] <wallyworld_> btw, i missed getting the person listings landed before the cut off. the branch passed ec2 ages ago but i was never lucky enough to find a non-testfix window to get the thing landed
[01:51] <rockstar> wallyworld_, I'm not sure why we'd need personproduct listings.
[01:51] <rockstar> Oh, product listings.
[01:51] <rockstar> Um, I'm not sure we do, but thumper might have an opinion.
[01:51] <rockstar> We have this idea that we need listings of everything, and I don't necessarily agree with that.
[01:52] <wallyworld_> ok. wrt not getting the branch landed - it was done in db-devel to pick up the db changes but after the rollout, those db changes will be in prod
[01:52] <rockstar> wallyworld_, so, one thing that would be helpful with merge queues would be to take the JSON constraint out of the config.  You were right, we shouldn't force JSON on users.
[01:53] <wallyworld_> should i merge the stuff i did in db-devel into devel and land that after rollout? to get the person listing stuff in?
[01:53] <wallyworld_> ok. you want me to remove the json constraint?
[01:58] <rockstar> wallyworld_, yeah, if you could.
[01:59] <wallyworld_> ack
[01:59] <rockstar> wallyworld_, it's still (currently) only editable through the API anyway.
[01:59] <rockstar> wallyworld_, I think landing on devel once the devel rev has been tagged is okay, no need to merge (provided there are no conflicts)
[02:00] <wallyworld_> ok. i'll do that, so that you can pick up what i've done
[02:01] <wallyworld_> rockstar: is there a bug filed already for the json issue?
[02:02] <rockstar> wallyworld_, no, I don't think so.
[02:02] <wallyworld_> ok
[02:02] <rockstar> wallyworld_, the duplication of bug and kanban task prevented me from ever being organized enough.
[02:02] <wallyworld_> :-)
[02:03] <rockstar> wallyworld_, the only other backend thing that really needs to get done immediately is queuing a merge proposal, which I'm working on right now.
[02:03] <wallyworld_> ok
[02:03] <rockstar> Launchpad still has to be in a chroot though, because you can't develop ubuntuone and launchpad together on the same system.
[02:04] <wallyworld_> that's too bad
[02:05] <wgrant> buildout mess?
[02:05] <wgrant> Not fixable with virtualenv?
[02:06] <rockstar> wgrant, well, first, Launchpad eats postgres. All of it.
[02:06] <wgrant> It only does that once.
[02:06] <wgrant> After that it leaves it alone.
[02:13] <abentley> lifeless: Can you recommend a commandline testrunner compatible with testrepository?
[02:17] <spiv> abentley: I'd guess trial --reporter=subunit
[02:18] <abentley> spiv: I don't have a dependency on Twisted.  Doesn't trial depend on it?
[02:19] <spiv> yes
[02:20] <spiv> (but your tests don't have to depend on Twisted or Trial, if that makes a difference)
[02:22] <spiv> There are probably subunit plugins for other tools like nose, but that seems no different to me than using trial.
[02:22] <lifeless> abentley: python -m subunit.run
[02:23] <lifeless> abentley: or as spiv says trial --reporter=subunit
[02:23] <spiv> lifeless: oh, subunit has a runner?  Cute!
[02:23] <abentley> lifeless: cool.
[02:24] <lifeless> there is a nose subunit plugin floating around but it's not really all that polished AIUI
[02:25] <elmo> rockstar: u1 have a very good system of spinning up a non-system postgres for their test suite, you should be able to mix and match
[02:27] <rockstar> elmo, yeah, and at some point, I'll sort that out.
[02:27] <rockstar> (Probably not on my first day with the u1 team)
[02:27] <lifeless> abentley: oh, and bzr selftest --subunit
[02:27] <lifeless>     Hard / Soft  Page ID
[02:27] <lifeless>      180 /   53  Person:+commentedbugs
[02:27] <lifeless>       78 /  268  BugTask:+index
[02:27] <lifeless>       46 / 2908  Archive:+index
[02:27] <lifeless>       19 /  331  Distribution:+bugtarget-portlet-bugfilters-stats
[02:27] <lifeless>       14 /  312  Distribution:+bugs
[02:27] <lifeless>       10 /   12  Person:+bugs
[02:28] <lifeless>        8 /  100  MaloneApplication:+bugs
[02:28] <lifeless>        8 /    5  DistroSeries:+queue
[02:28] <lifeless>        5 /   15  DistroSeriesLanguage:+index
[02:28] <lifeless>        5 /    6  ProjectGroup:+milestones
[02:28] <abentley> lifeless: sure.  And the Launchpad test runner too.  But this is a separate project.
[02:28] <lifeless> sinzui: ^ Person:+commentedbugs has a fix in lp from stub
[02:28] <wgrant> Hmm... Archive:+index is still broken?
[02:28] <lifeless> abentley: righto, was just enumerating as things occurred to me ;)
[02:28] <lifeless> wgrant: broken - not. Not all it can be - yes.
[02:30] <wgrant> lifeless: Hard timeouts == broken.
[02:31] <lifeless> wgrant: sure
[02:31] <lifeless> 46 is tolerable while we're recovering from 8.4
[02:31] <lifeless> wgrant: compared to hundreds per day
[02:31] <lifeless> wgrant: also note - no XMLRPC ones ;)
[02:31] <wgrant> Indeed.
[02:31] <wgrant> A bit disturbing that that was all just an overloaded appserver.
[02:33] <james_w> I'm a bit surprised that +branches pages don't show up on that list, it suggests to me that branch listings are rarely used, which is concerning in itself
[02:33] <wgrant> Is +branches linked from anywhere?
[02:33] <james_w> sorry, I was being overly precise
[02:33] <wgrant> +code-index?
[02:33] <james_w> whatever you get when you go to the "Code" tab
[02:34] <james_w> particularly person and person+pillar listings give me timeouts
[02:35]  * james_w causes a few OOPS for those pages to try and get lifeless to fix them :-)
[02:37] <lifeless> james_w: https://devpad.canonical.com/~lpqateam/ppr/lpnet/latest-daily-pageids.html
[02:43] <lifeless> thumper: https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1781G129
[03:04] <abentley> lifeless: what would test_id_option be for subunit.run ?
[03:18] <lifeless> abentley: subunit.run doesn't support filtering at the moment
[03:18] <lifeless> cat .testr.conf
[03:18] <lifeless> [DEFAULT]
[03:18] <lifeless> test_command=PYTHONPATH=. python -m subunit.run foo.tests.test_suite
[03:18] <lifeless> abentley: ^ something like that is all you'd need
[03:19] <abentley> lifeless: but I wouldn't be able to use testr run --failing?
[03:19] <lifeless> it will work, but it wouldn't filter
[03:20] <lifeless> I really should add filtering to subunit.run
[03:21] <abentley> lifeless: Yes, I think that would be very helpful.
[03:22] <spiv> Trial can filter :P
[03:23] <abentley> spiv: indeed, and I guess that makes it the best tool for the job, at the moment.
[03:24] <spiv> *nod*
[03:24] <abentley> spiv: err, how do I make it filter?
[03:25] <spiv> abentley: you can pass it test names
[03:25] <spiv> (or test class names, or test module names)
[03:26] <abentley> spiv: So what do I specify for test_id_option ?
[03:27] <spiv> e.g. if there's a test called package.tests.module.Class.method, trial will run it with any of "trial package", "trial package.tests", "trial.tests.module", "trial.tests.module.Class", "trial.tests.module.Class.method" (although obviously all but the latter are likely to find more tests than just that one)
[03:27] <spiv> heh, I typoed some of those badly
[03:28] <spiv> Basically "trial $python_dotted_name" will run all the tests contained in that dotted name
[03:28] <spiv> It'll probably fail to work nicely if you have test ids that don't correspond to python dotted names (e.g. if you have test multiplication via testscenarios or something)
[03:29] <abentley> spiv: I can't use something like that.  I need something that allows me to specify a file with test ids.
[03:29] <spiv> trial `cat $file` ?
[03:30] <abentley> spiv: testrepository does not appear to support that.
[03:30] <spiv> sh -c "trial `cat $file`" ?  ;)
[03:30] <spiv> Hmm, what a shame.
[03:30] <spiv> So close and yet so far!
[03:31] <abentley> spiv: testrepository expects that there is an option you can supply to the elsewhere-specified command that will make it read a file listing the test ids to run.
[03:31] <abentley> spiv: So I can't change the command, only append an argument.
[03:31] <spiv> abentley: sounds like a job for a shell script until one of these tools gets better :(
[03:32] <abentley> spiv: Yeah, seems like.
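The shell-script workaround spiv and abentley settle on could look roughly like the following Python shim. Everything here is an assumption for illustration: the `--load-list=FILE` flag name, the shim itself, and the idea that testrepository's `test_id_option` would point at it — trial had no such option at the time, which is the whole problem being discussed.

```python
#!/usr/bin/env python
# Hypothetical shim letting testrepository drive trial from a file of
# test ids.  testrepository appends something like "--load-list=FILE"
# (an assumed flag name); trial only accepts test ids as positional
# arguments, so the shim reads the file and translates.
import os
import sys


def build_trial_argv(id_file_path, base=("trial",)):
    """Read one test id per line (skipping blanks) and build the
    argument vector trial expects."""
    with open(id_file_path) as f:
        ids = [line.strip() for line in f if line.strip()]
    return list(base) + ids


def main(argv):
    # Invoked by testrepository as: shim.py --load-list=FILE
    id_file = argv[1].split("=", 1)[1]
    os.execvp("trial", build_trial_argv(id_file))
```

In a `.testr.conf` this would appear as `test_command=shim.py` with `test_id_option=--load-list=%s`, matching the contract abentley describes: testrepository can only append an argument, not change the command.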
[06:44] <wgrant> spm: Around?
[06:45] <wgrant> I think germanium is pretty upset.
[06:46] <wgrant> No uploads accepted or packages published for nearly half an hour.
[07:03] <wgrant> losa ping: ^^
[07:32] <poolie> jml: up yet?
[08:04] <wgrant> Could someone prod ISD about the issue in #launchpad?
[08:06] <nigelb> SSO isn't LP per se is it?
[08:06] <wgrant> Exactly.
[08:06] <wgrant> Needs ISD.
[08:06] <wgrant> And they like to hide.
[08:08] <nigelb> wgrant: :/
[08:08] <nigelb> I remember seeing a sort of rant mail about it in launchpad-dev
[08:08] <wgrant> It's not an awfully wonderful situation.
[08:10] <poolie> i will try
[08:10] <wgrant> Thanks poolie.
[08:10] <poolie> what is the issue, in one sentence?
[08:10] <poolie> and how critical?
[08:10] <wgrant> Another email address that SSO won't send mail to.
[08:10] <wgrant> Two support cases have been opened about it.
[08:10] <wgrant> But both have been ignored.
[08:10] <poolie> :/
[08:10] <wgrant> ixokai in #launchpad
[08:11] <poolie> and the thing with germanium is separate?
[08:11] <wgrant> Yes.
[08:11] <wgrant> That doesn't need ISD, fortunately.
[08:11] <wgrant> It just needs a LOSA, hopefully.
[08:11] <poolie> ah, spm is away today
[08:12] <wgrant> Ahh.
[08:12] <mthaddon> o/
[08:13] <poolie> mthaddon, meet wgrant :-)
[08:13] <mthaddon> yeah, just catching up on backscroll in #launchpad
[08:14] <wgrant> mthaddon: The main issue is that germanium's publisher is somehow unhappy.
[08:15] <mthaddon> wgrant: any clues as to where to look?
[08:16] <wgrant> mthaddon: The logs for cron.publish-ppa might have something interesting.
[08:16] <wgrant> I don't yet know where they live.
[08:16] <mthaddon> yeah, looking at that now
[08:17] <mthaddon> can't see anything untoward in there
[08:17] <wgrant> Is there anything not untoward there?
[08:18] <mthaddon> looks like there were issues with claiming the lockfile for a while, but it seemed to clear itself up a few mins ago
[08:19] <mthaddon> definitely seeing stuff processing now - you sure it's still an issue?
[08:19] <wgrant> Ah, so it is.
[08:19] <wgrant> It's happening now.
[08:19] <wgrant> Not quite done, but getting there.
[08:19] <wgrant> I wonder why it was broken.
[08:19] <mthaddon> k
[08:19] <wgrant> Thanks.
[08:40] <adeuring> good morning
[09:01] <wgrant> Morning bigjools.
[09:02] <bigjools> morning
[09:02] <wgrant> The PPA publisher was MIA for ~2 hours earlier, but mysteriously recovered once mthaddon started poking around.
[09:03] <bigjools> :/
[09:03] <bigjools> that happened the other day too
[09:03] <bigjools>  the librarian went awol
[09:03] <wgrant> Ah, interesting.
[09:04] <bigjools> no librarian = sad soyuz
[09:05] <stub> Are we running Librarian with tokens now?
[09:05] <bigjools> no idea
[09:05] <stub> Not sure what has changed with the Librarian apart from that the last few years. Or Apache stuff could be wonky I guess.
[09:05] <wgrant> We're not.
[09:05] <stub> Or Squid
[09:05] <wgrant> It's still webapp proxying.
[09:10] <mrevell> Bonjour
[09:34] <jml> poolie: am now
[09:36] <jml> File "/srv/launchpad.net/production/launchpad-rev-11926/lib/lp/registry/interfaces/person.py", line 172, in validate_person_common obj, getattr(obj, 'name', None))) PrivatePersonLinkageError: Cannot link person (name=canonical, visibility=PRIVATE) to (name=None)
[09:36] <jml> I get this when I try to make ~canonical a member of ~canonical-data-science.
[09:36] <jml> What's the deal?
[09:38] <stub> http://docs.fabfile.org/0.9.3/tutorial.html
[10:44] <bigjools> jml: did you have any comments about the buildd-manager changes?
[12:04] <jml> bigjools: none from my skim. glad that BuilderSlave has a multi-file fetch method.
[12:04]  * jml skims again
[12:05] <bigjools> jml: and shockingly easy to implement
[12:05] <jml> bigjools: yeah
[12:05] <jml> bigjools: I found that once I got the hang of Twisted, a lot of stuff like that just falls into place.
[12:05] <bigjools> jml: it's starting to click for me now
[12:06] <bigjools> I need to read up on all the protocol stuff though
[12:08] <jml> *nod*
[12:08] <jml> bigjools: I don't think you need files_downloaded  (in packagebuild.py)
[12:08] <jml> you could also do:
[12:08] <jml>   d = slave.getFiles(filenames_to_download)
[12:09] <jml>   d.addCallback(lambda x: self.storeBuildInfo(self, librarian, slave_status))
[12:09] <jml>   d.addCallback(build_info_stored)
[12:09] <jml>   return d
[12:09] <bigjools> possibly, I'll take a look.  I went through many iterations writing that code and I expect there's some cruft
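For readers unfamiliar with the pattern jml is sketching, the chaining can be mimicked with a toy stand-in. This is not Twisted's real Deferred — just enough of its shape to show how each callback's return value feeds the next one once the underlying operation (here, the file downloads) completes.

```python
# Toy stand-in for twisted.internet.defer.Deferred -- NOT the real API,
# just enough to show how each callback's return value feeds the next.
class ToyDeferred:
    def __init__(self):
        self._callbacks = []
        self._result = None
        self._fired = False

    def addCallback(self, fn):
        # If the result already arrived, run immediately; otherwise queue.
        if self._fired:
            self._result = fn(self._result)
        else:
            self._callbacks.append(fn)
        return self

    def callback(self, result):
        # Fire the chain: each callback consumes the previous return value.
        self._fired = True
        self._result = result
        while self._callbacks:
            self._result = self._callbacks.pop(0)(self._result)


log = []
d = ToyDeferred()  # stands in for slave.getFiles(filenames_to_download)
d.addCallback(lambda _: log.append("files fetched") or "build info")
d.addCallback(lambda info: log.append("stored: " + info))
d.callback(None)  # the downloads finishing fires the whole chain
# log is now ["files fetched", "stored: build info"]
```

The key property — and the reason jml's version works without a `files_downloaded` intermediary — is that nothing after `addCallback` runs until `callback` fires, and the chain preserves ordering.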
[12:10] <jml> spot the lie: http://paste.ubuntu.com/532951/
[12:10] <bigjools> no lie :/
[12:10] <bigjools> well, maybe a half-truth
[12:11] <jml> when was the "next Paris summit"?
[12:11] <jml> :)
[12:11] <bigjools> yeah :)
[12:12]  * jml is looking forward to finishing the damn testtools-experiment branch.
[12:13] <bigjools> thank you again for your tutelage jml
[12:13] <jml> np.
[12:38] <jtv> danilos, henninge: I can't find any remaining call sites for migrate_kde_potemplates.py…  am I missing one?  Or can I just drop it?
[12:39] <henninge> jtv: no idea but that sounds droppable ;)
[12:41] <danilos> jtv, I think we had it killed already
[12:41] <jtv> danilos: yes, I think we did—and this is just a remaining piece.
[12:42] <jtv> Then there's message-sharing-populate.py.
[12:42] <danilos> jtv, do away with it then... that too
[12:42] <danilos> jtv, there was another one, remove-obsolete-translations I think
[12:42] <jtv> Oh!  And here I just updated that one.  :)
[12:43] <jtv> That's in something called KNOWN_BROKEN.
[12:44] <jml> I'm offline to get some more writing, diagramming and strategizing done. (so leet). Available on mobile, back in time for standup.
[13:01] <lifeless> gmb: hey
[13:01] <lifeless> gmb: so I wanted to chat with you & deryck about bug subscriptions
[13:02] <lifeless> gmb: we seem to be a little confused about them. Perhaps a LEP - not about the 'subscribe-to-search' bit - but about subscriptions /per se/ would clarify things?
[13:02] <lifeless> gmb: I mean, the 'we email dup subscribers' is deeply embedded in the code - we've unpleasant queries that exist just to answer that question.
[13:09] <gmb> lifeless: I think a LEP to clarify what we actually want to do with subscriptions themselves is a very good idea.
[13:47] <lifeless> danilos: hi
[13:47] <lifeless> danilos: I'm replying now, but would love more data about the use case!
[13:48] <danilos> lifeless, which one? (I am kinda mentioning both :)
[13:48] <lifeless> danilos: these guys recommend  model by 'what queries do you want to do'
[13:48] <lifeless> gimme 2 minutes, I'll get this reply to you.
[13:48] <danilos> lifeless, sure thing
[13:48] <danilos> lifeless, I have a vague feeling that it wouldn't really work for us at all, but it's worth asking the questions while you are there :)
[14:05] <lifeless> danilos: hi
[14:05] <lifeless> danilos: sorry about that, had to move to the training room :)
[14:05] <danilos> lifeless: training is good for your health :)
[14:06] <lifeless> :)
[14:27] <lifeless> danilos: anyhow, does my reply help? Should we discuss it some more?
[14:30] <danilos> lifeless: sorry, got distracted myself
[14:31] <bigjools> sinzui: hi - from your bug comment are you volunteering to fix bug 654476?
[14:33] <danilos> lifeless: paraphrasing for the second use case seems about right
[14:34] <sinzui> bigjools, I thought I had volunteered a few weeks ago
[14:35] <bigjools> sinzui: really?  I didn't know we'd narrowed it down to the issue I last mentioned on the bug
[14:36] <sinzui> bigjools, well, we scheduled the team membership policy fix for this release. I think adding a check for PPAs as well as other teams takes but a few minutes of work
[14:36] <bigjools> sinzui: indeed, ok thanks.  I'll re-assign the bug to your team.
[14:36] <sinzui> okay
[15:00] <lifeless> bigjools: I'm really happy that the builddmanager is coming along so nicely
[15:00] <bigjools> lifeless: you are by no means the only one!
[15:00] <bigjools> lifeless: I have one more scalability issue to talk to you about though
[15:02] <bigjools> when you've got some free time it would be good to chat in more depth
[15:03] <lifeless> bigjools: shoot
[15:03] <bigjools> oh, free now?
[15:04] <lifeless> We're doing some simple stuff right now
[15:04] <lifeless> I can track IRC well enough to get bootstrapped on your issue
[15:04] <bigjools> lifeless: ok, so, the main issue is that every time we add a new builder, the b-m issues "the" query at regular intervals.  That does not scale.
[15:05] <bigjools> partly because the b-m is using the master store
[15:05] <lifeless> what does this query do (in english)
[15:05] <bigjools> figures out what to dispatch next
[15:06] <bigjools> *cough* a bit like a queueing system one might say *cough*
[15:06] <lifeless> do we model the queue directly, or do we model inputs and assemble the queue every time?
[15:07] <bigjools> there is no queue as such, it issues the query every 15 seconds (was 5 but I slowed it) to see if there's a candidate
[15:07] <bigjools> so more like the latter
[15:07] <lifeless> so long term I think we need to overhaul the entire queueing model for buildds
[15:07] <bigjools> the buildqueue used to model a "queue" but things got quite complicated in the last year ;)
[15:07] <lifeless> as I think I've said before
[15:08] <lifeless> to reduce starvation and so forth
[15:08] <bigjools> it's what everyone has said before I think :)
[15:08] <lifeless> in the short term
[15:08] <bigjools> it's not a short-term problem
[15:08] <bigjools> but when we get those 200 extra builders ...
[15:08] <lifeless> what about leaving the builder idle until the next 15 second poll
[15:08] <bigjools> I don't understand what you mean by that
[15:09] <lifeless> ok
[15:09] <lifeless> so lets say we had the 200 builders all attached
[15:09] <lifeless> and nothing to build
[15:09] <lifeless> they are presumably in some state where we can dispatch a build whenever we want
[15:09] <bigjools> yes
[15:10] <lifeless> and then if I upload a build, after processing etc, your 15 second frequency queue sampling operation will find a build and dispatch it.
[15:10] <lifeless> bigjools: I'm assuming your query is something like this - pseudo code:
[15:10] <lifeless> needed_build_requests = len([builder for builder in builders if builder.is_idle])
[15:11] <lifeless> build_requests = issue_THE_query(needed_build_requests)
[15:11] <bigjools> no
[15:11] <lifeless> so that if there are no idle builders you don't issue the query at all
[15:11] <bigjools> there is a separate query for each builder
[15:11] <lifeless> bigjools: is it grouped by equivalent builders?
[15:11] <bigjools> no
[15:11] <lifeless> like, surely all virtualised i386 builders could be queried at once
[15:12] <bigjools> yes that would be eminently sensible
[15:12] <lifeless> anyhow, what I meant was
[15:12] <bigjools> but since forever it has always issued one query per builder
[15:12] <lifeless> put the builder in whatever state a builder waiting for a build is
[15:12] <jml> sinzui: hi
[15:12] <sinzui> hi jml
[15:12] <lifeless> and let the next lookup after 15 seconds sort it out
[15:12] <jml> sinzui: thanks for deduping my bug
[15:12] <jml> sinzui: I was wondering if you know of a work around?
[15:13] <lifeless> bigjools: but if you're going to do one-per-builder anyway...
[15:13] <sinzui> jml: maybe make canonical the owner of the team?
[15:13] <jml> sinzui: even though I can't add canonical to the team?
[15:13] <lifeless> bigjools: then I think making the modelling be able to answer the query instantly is probably the key thing.
[15:13] <bigjools> lifeless: so we could mitigate things a lot by remodelling to issue one query per (arch,virt)
[15:13] <bigjools> the query is already fairly quick to be fair
[15:14] <bigjools> it's just issued *a lot* :)
[15:14] <lifeless> bigjools: and add a limit for the number of available builders ?
[15:14] <jml> sinzui: "The person/team named 'canonical' is not a valid owner for Canonical Data Science CoP."
[15:14] <lifeless> bigjools: (such that you can short circuit when limit=0)
[15:14] <bigjools> yes, we'd need to track state carefully - but this is a major, major change to the b-m operation
[15:14] <lifeless> bigjools: the idea being that that would let you drop it down to checking every 1-2 seconds, for instance.
[15:15] <bigjools> lifeless: I think that this is a good direction to head.  It's an almost total rewrite of the b-m to do it though :(
[15:16] <sinzui> jml: yes that is correct, it will not be valid because it will cause a membership change. I think OEM/ISD has employed scripts to add members from one team to another
[15:17] <jml> sinzui: so, effectively there is no practical work-around to the problem beyond manually adding every member of ~canonical to the new team?
[15:17] <sinzui> jml: I really do not think there is an acceptable work around. This bug has been a super pain for 18 months
[15:17] <jml> or scriptedly
[15:17] <jml> sinzui: ok.
[15:18] <jml> sinzui: we are going to fix this with the new privacy model, right?
[15:19] <sinzui> Our pleas for help were largely ignored last year. So while I want to fix this, I cannot say it will make everyone happy.
[15:19] <sinzui> restating the problem as a membership policy issue, not a visibility issue, was a break-through for us two months ago
[15:20] <jml> sinzui: making everyone happy is not a requirement. covering this use case (make a team that's visible to members of another team) is.
[15:21] <jml> multi-tasking is melting my brain.
[15:21] <sinzui> The need for team hierarchy will be lessened when users and teams can be permitted to observe without being members. And explaining that members need to guard their messages on mailing lists and contact-this-team emails may suffice
[15:22] <sinzui> jml: This is a super hard problem. Lots of people have worked on this and it has befuddled everyone.
[15:22] <lifeless> bigjools: the long term one would be 'be able to answer a single node's needs very very efficiently'
[15:23] <lifeless> bigjools: though I do think querying for a set of builders' needs at once is sensible regardless.
[15:23] <sinzui> bac checked the TP leaks a few months ago and reported that they were quite bad.
[15:23] <bigjools> lifeless: totally.  In fact when we start using a real queueing solution we just have one queue per (arch,virt) and be done with it.
[15:24] <bigjools> lifeless: one of the complications right now is that stuff can be in the queue but we have rules that stop it getting dispatched
[15:24] <lifeless> oh?
[15:24] <lifeless> is there lots of that? or just a few jobs at a time?
[15:24] <bigjools> lifeless: there are two: 1. private sources need to be published, 2. we don't allow PPAs to issue too many concurrent builds
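The one-query-per-(arch, virt) idea from the exchange above could be sketched like this. The builder dicts and the `find_candidates` callable are made-up stand-ins, not the real BuildQueue model; the point is the shape of the change — group idle builders, query once per group with a limit, and skip the query entirely when a group has no idle builders.

```python
from collections import defaultdict


def dispatch_candidates(builders, find_candidates):
    """Group idle builders by (arch, virt) and issue one candidate
    query per group, instead of one query per builder.

    find_candidates(arch, virt, limit) stands in for "the" query and
    returns up to `limit` dispatchable jobs for that processor/virt
    combination.  A fully busy farm forms no groups, so it issues no
    queries at all -- the limit=0 short circuit lifeless describes.
    """
    groups = defaultdict(list)
    for builder in builders:
        if builder["idle"]:
            groups[(builder["arch"], builder["virt"])].append(builder)

    assignments = []
    for (arch, virt), idle in groups.items():
        jobs = find_candidates(arch, virt, len(idle))
        # Pair each returned job with an idle builder; any builders
        # left over simply stay idle until the next poll.
        for builder, job in zip(idle, jobs):
            assignments.append((builder["name"], job))
    return assignments
```

With, say, 200 builders spread over a handful of (arch, virt) combinations, each poll costs a few grouped queries rather than 200 individual ones, which is what would let the poll interval drop back toward 1-2 seconds.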
[15:25] <lifeless> jam: cassandra is basically bzr's store ;)
[15:27] <deryck> gmb, hi.  In the continuing saga of lazr-js upgrade, I have a clean test run!  hurrahs heard round the world!
[15:28] <gmb> deryck: Let joy be unconfined.
[15:29] <lifeless> gmb: so you'll do a LEP then?
[15:31] <jam> lifeless: well, pack files, but certainly different indexing at the top level, and it isn't multi-writer safe :)
[15:32] <gmb> lifeless: Yes. I'll start drafting one on Thursday (I'm away tomorrow)
[15:33] <lifeless> gmb: awesome, thanks!
[15:33] <lifeless> jam: the indexing is a little different, but not all that much ;)
[15:33] <jam> lifeless: the in-memory-only hash map is pretty different
[15:33] <lifeless> jam: its (rowkey, columnname) tuples for the indices AFAICT.
[16:49] <jcsackett> so, back from vacation and i'm trying to get the branch-lander working again per matsubara's email, but all i get when trying to authorize is a redirect circle between the login page and "confirm sharing these details" page. anyone run into this when the change landed last week?
[16:50] <matsubara> jcsackett, what's the URL for that page?
[16:52] <jcsackett> matsubara: login.launchpad.net
[16:52] <jcsackett> auto-taken there when i run land.
[16:53] <matsubara> jcsackett, sounds like you're having problem with sso authentication. do other LP pages work for you?
[16:56] <jcsackett> matsubara: i had thought they were, but you're right, i can't login period right now on this machine (oddly on another one i'm having no problems). i'll fidget with it a bit more.
[16:56] <jcsackett> thanks.
[17:06] <jcsackett> ah, botched my ssl configuration.
[17:10] <bigjools> jml: was there some sort of bug in Trial to do with leaving a dirty reactor sometimes?
[17:11] <jml> bigjools: it's not a bug in trial, it's a bug in the tests that cause it.
[17:11] <jml> (or the code that the tests call)
[17:11] <bigjools> jml: I know that, but I was wondering if there was something in Trial too.  I have a test that's randomly leaving a dirty reactor :/
[17:12] <bigjools> so far only happens on a full test run
[17:12] <jml> bigjools: it's almost certainly not a bug in trial.
[17:12] <bigjools> ok, good to confirm if nothing else
[17:12] <jml> bigjools: a very common class of test that leaves junk in the reactor does so intermittently
[17:12] <jml> bigjools: point me at the test and the error. I might be able to spot the problem.
[17:13] <bigjools> jml: test_binarypackagebuild/test_date_finished_set
[17:14] <jml> bigjools: in which branch?
[17:14] <bigjools> jml: the one you reviewed
[17:15] <bigjools> jml: lp:~julian-edwards/launchpad/async-file-uploads-bug-662631
[17:16] <jml> bigjools: ta
[17:16]  * bigjools wishes the dev wiki would redirect back to the page you were on after logging in
[17:17] <deryck> mars, ping.
[17:17] <mars> Hi deryck, what's up?
[17:17] <deryck> Hi mars.  I got my lazr-js upgrade branch passing ec2 now, but I still have horribly long run times....
[17:18] <deryck> mars, can I send you a log and some notes this morning and see if you have any idea what's going on?
[17:18] <mars> deryck, do you still see thread trash lying around?
[17:18] <jml> bigjools: which subclass? (specifically, what is self.build set to?)
[17:18] <deryck> mars, yes
[17:18] <bigjools> jml: TestHandleStatusForBinaryPackageBuild
[17:19] <bigjools> it seems to work for the recipe one
[17:19] <mars> deryck, I can look, certainly.  Fire the mail over.  I can reply with a HTTP on-the-wire technique I successfully piloted the last time we did this, it should sniff out the off-site accesses, if any are around.
[17:19] <jml> bigjools: ta
[17:19] <deryck> mars, ah, ok.  That would be useful to try.
[17:21] <jml> bigjools: what's the left-over?
[17:22] <bigjools> jml: http://pastebin.ubuntu.com/533122/
[17:22] <jml> wow. that's a lot.
[17:22] <bigjools> I would guess at a timeout
[17:23] <deryck> mars, thanks for the help!  Two emails fwd'ed to you.
[17:23] <bigjools> jml: it's probably the getFiles() - it sets up a lot of callbacks
[17:23] <jml> bigjools: self.buildqueue_record.builder.cleanSlave() returns a Deferred but the code doesn't handle it.
[17:23]  * bigjools looks
[17:23] <jml> bigjools: maybe not the bug here, but it's a start.
[17:24] <jml> bigjools: in _handleStatus_OK
[17:24] <bigjools> I didn't change any of that
[17:24] <bigjools> so it was there before
[17:25] <jml> bigjools: but you've moved more tests to run under trial with the stricter checking
[17:25] <bigjools> jml: and that same call is repeated in a few places
[17:25] <bigjools> and that code is in production right now
[17:26] <jml> bigjools: perhaps it doesn't matter when the slave gets cleaned, as long as it's eventually
[17:26] <bigjools> I think it's luck that it's not breaking
[17:27] <bigjools> the polling loop will wait until it's clean
[17:27] <bigjools> this is probably the test problem though
[17:27] <jml> bigjools: I'm inclined to support the luck theory
[17:27] <bigjools> but why is it intermittent
[17:27] <jml> bigjools: it's the most obvious part.
[17:27] <jml> bigjools: because sometimes it finishes before the test does?
[17:28] <bigjools> ah
[17:28] <jml> and sometimes it finishes after
[17:28] <bigjools> mebbe
[17:28] <bigjools> fun
[17:28] <jml> yeah
[17:28] <jml> can't guarantee order of execution without deferreds.
[18:12] <deryck> mars, I *just* found more combo loading attempts going on.
[18:12] <jml> ugh.
[18:12] <mars> deryck, where?
[18:13] <mars> deryck, outside your search-and-replace sweep?
[18:13] <deryck> mars, vocab picker uses YUI().use
[18:13] <jml> 10 personal emails to answer. My friends, family and contributors to my free software projects must hate me :(
[18:13] <deryck> mars, yeah, missed it somehow.
[18:13] <deryck> no one hates jml :-)
[18:13] <jml> :)
[18:15] <deryck> mars, found a bunch with a more liberal grep now.  Oh, well.  Easy sed fix and maybe this will be finished.
[19:04] <jml> I think I might declare tomorrow to be LEP day
[19:05] <jml> in any case, g'night all
[19:05] <mars> good night jml
[20:13] <LPCIBot> Yippie, build fixed!
[20:13] <LPCIBot> Project devel build (225): FIXED in 4 hr 3 min: https://hudson.wedontsleep.org/job/devel/225/
[20:33] <poolie> hello all
[20:36] <wallyworld> thumper: standup now?
[20:37] <thumper> wallyworld: hi
[20:37]  * thumper plugs in
[20:49] <jcsackett> sinzui: did you ever find that branch regarding lpstats you talked about this morning?
[20:55] <maxb> matsubara-afk: Were you specifically wanting another lucid build of bzr-pqm, or did you just misclick whilst requesting the maverick one?
[20:57] <bryceh> Anyone recognize this error?  "Error: Couldn't find a distribution for 'zope.pagetemplate==3.5.0-p1'." http://pastebin.ubuntu.com/533225/
[20:57] <bryceh> just started seeing it yesterday after merging current db-devel.  not seeing it with the same branch on ec2 though
[20:58] <mwhudson> bryceh: is your download-cache up to date?
[20:59] <jcsackett> EdwinGrubbs: do you know anything about the lpstats thing sinzui mentioned this morning? bug 675596
[20:59] <_mup_> Bug #675596: lpstats queries need to be updated to use usage enums <Launchpad Registry:Triaged> <https://launchpad.net/bugs/675596>
[21:00] <bryceh> mwhudson, perhaps not (been away for past week)
[21:00] <sinzui> jcsackett, I am looking from the trunk. I do not know the project name :(
[21:01] <sinzui> I do not even know who owns it
[21:01] <jcsackett> sinzui: dig. so it's not just launchpadusagestatistics.py or something, is it?
[21:01] <bryceh> mwhudson, do I simply delete download-cache/* or is there a command to refresh it?
[21:01] <EdwinGrubbs> jcsackett: https://lpstats.canonical.com/graphs/
[21:01] <sinzui> jcsackett, You have never seen the source code
[21:02] <jcsackett> sinzui: that was what i had assumed (but not what i was hoping. :-P)
[21:03] <mwhudson> bryceh: bzr up download-cache
[21:04] <sinzui> jcsackett, https://code.launchpad.net/pgmetrics
[21:05] <sinzui> jcsackett, I think I am wrong
[21:07] <jcsackett> sinzui: yeah, this code doesn't look like it could produce that site.
[21:28] <wgrant> jcsackett, sinzui: Isn't it tuolumne?
[21:29] <sinzui> wgrant, yes, which is a brilliant name for code that can run on many machines
[21:30] <jcsackett> sinzui: what's the name mean?
[21:31] <lifeless> wgrant: pgmetrics is probably the glue for tuolumne for postgresql that we use.
[21:32] <lifeless> jcsackett: the source code is tuolumne
[21:32] <lifeless> but I don't remember offhand which  branch we use. flacoste will know.
[21:33] <jcsackett> lifeless: right, given sinzui's comment i was just curious if the name itself had some meaning. google only reveals a county with the same name.
[21:33] <mwhudson> jcsackett: that's where the name comes from
[21:33] <jcsackett> ah.
[21:33] <mwhudson> the project was created by mthaddon, who was living in california and visiting yosemite at the time
[21:33] <flacoste> yeah, it's a climber ref
[21:34] <jcsackett> dig.
[21:34] <flacoste> jcsackett: you want to read https://wiki.canonical.com/Launchpad/PolicyandProcess/AddLPStatsGraph
[21:35] <jcsackett> flacoste: already reading. :-P
[21:35] <flacoste> all you need sohuld be there
[21:35] <flacoste> if anything is missing or unclear, please help by asking questions and feeding it back into that page
[21:35] <jcsackett> not quite all--i'm making sure graphs don't break as we migrate the official_* booleans to the *_usage enums.
[21:35] <jcsackett> rather than adding a graph.
[21:35] <flacoste> jcsackett: right, but it's the same process
[21:36]  * jcsackett nods.
[21:36] <flacoste> and you won't need to add a graph
[21:36] <flacoste> only update the metrics
[21:37] <jcsackett> flacoste: yup. sinzui has shown me a branch dealing with similar reqs.
[21:37] <jcsackett> or a revision, rather.
[21:38] <wgrant> gmb: You're not still around, are you?
[21:39] <gary_poster> Hey thumper .  crowberry doesn't need access to memcache, right?
[21:40] <thumper> gary_poster: what are we putting in memcache?
[21:41] <gary_poster> thumper: AFAIK it's just used for template snippets right now.
[21:41] <gary_poster> but I'm asking to make sure :-)
[21:42] <gary_poster> memcache is blocked from crowberry now, thumper, so this is a "is the status quo OK" question
[21:42] <gary_poster> (not asking about a change)
[21:43] <thumper> I think that's fine
[21:43] <gary_poster> ok thanks thumper
[21:44] <lifeless> whats on crowberry that is *trying* ?
[21:47] <wgrant> Could someone EC2 https://code.launchpad.net/~wgrant/launchpad/bug-654372-optimise-domination/+merge/40854?
[21:56] <lifeless> flacoste: hi
[21:56] <flacoste> hi lifeless
[21:56] <lifeless> flacoste: rt 42199 and 40480 should have reversed priorities
[21:56] <lifeless> flacoste: so that we have more insurance and can switch around without downtime.
[21:57] <lifeless> oh, bah mea culpa.
[21:57] <lifeless> flacoste: ignore me, tis fine.
[21:57] <flacoste> ok
[21:57] <flacoste> i will
[21:57] <lifeless> I misread as production ;)
[22:58] <lifeless> night
[23:02] <thumper> wallyworld: can you check the code team oopses for yesterday, the TypeError ones for +check-links
[23:02] <thumper> wallyworld: has this been fixed?
[23:02] <wallyworld> thumper: will check
[23:06] <wallyworld> thumper: there's a few from 15th but trying those urls right now results in no oops
[23:07] <wallyworld> thumper: the stack trace in the oops shows the branch which fixes it wasn't deployed yet
[23:07] <wallyworld> on the 15th
[23:37] <thumper> wallyworld: I'd like you to take a look at this: https://bugs.edge.launchpad.net/launchpad-code/+bug/675517 once you've finished the current page you're looking at
[23:37] <_mup_> Bug #675517: Certain error tracebacks can leak through codehosting <oops> <Launchpad Bazaar Integration:Triaged> <https://launchpad.net/bugs/675517>
[23:38] <wallyworld> thumper: ok