[02:17] <lifeless>     Hard / Soft  Page ID
[02:17] <lifeless>      296 / 6927  Archive:+index
[02:17] <lifeless>       83 /  377  BugTask:+index
[02:17] <lifeless>       43 /  169  ProjectGroupSet:CollectionResource:#project_groups
[02:17] <lifeless>       22 /  261  Distribution:+bugs
[02:17] <lifeless>       12 /   29  Archive:+copy-packages
[02:17] <lifeless>       10 /  344  POFile:+translate
[02:17] <lifeless>       10 /   27  MailingListApplication:MailingListAPIView
[02:17] <lifeless>        9 /   34  DistroSeries:+queue
[02:17] <lifeless>        8 /   29  Milestone:+index
[02:17] <lifeless>        8 /   11  DistroSeriesLanguage:+index
[02:40] <wgrant> lifeless: Do you have a +copy-packages OOPS? Is it repeated DistroArchSeries queries again?
[02:40] <wgrant> Repeated DistroArchSeries and Archive queries, in fact.
[02:41] <wgrant> (wow, that is a lot of Archive:+index soft timeouts. I wouldn't have guessed that it had that many hits)
[02:54] <lifeless> wgrant: this is why I worry :)
[02:57] <wgrant> lifeless: If you'd let PQM be taken out of RC earlier, it would be fixed by now :P
[03:02] <wgrant> lifeless: Could you check a +copy-packages OOPS? I think it's the same issue as +queue -- SPR.getBuildByArch being awful.
[03:05] <lifeless> wgrant: I'm on leave, nutting to do with me ;)
[03:06] <wgrant> Pfft.
[03:06] <lifeless> Archive:+index 20753
[03:06] <lifeless> 99% under 15.84
[03:07] <wgrant> Ow.
[03:09] <lifeless> wgrant: I do
[03:14] <lifeless> wgrant: https://bugs.launchpad.net/soyuz/+bug/575450
[03:14] <_mup_> Bug #575450: Archive:+copy-packages nearly unusable due to timeouts <ppa> <timeout> <Soyuz:Triaged> <https://launchpad.net/bugs/575450>
[03:15] <wgrant> lifeless: Wow, those are really awful queries.
[03:15] <lifeless> tada
[03:15] <wgrant> Not what I thought it would be.
[03:15] <wgrant> Thanks.
[03:15] <lifeless> and 150 noddy repeats
[03:16] <wgrant> Those are hopefully fixed by my Archive:+index branch.
[03:16] <wgrant> But they will probably reveal more.
[03:16] <lifeless> sure
[03:16] <lifeless> a query scaling test for copy-packages would be nice
[03:17] <wgrant> But by that point I probably won't have to pester you.
[03:17] <wgrant> Yeah.
[03:17] <wgrant> But it's so awful at the moment that it's pointless to test.
[03:17] <lifeless> https://launchpad.net/ubuntu/+archive/primary/+copy-packages
[03:17] <lifeless> looks similar to me
[03:17] <lifeless> wgrant: its never pointless to stop the erosion
[03:18] <wgrant> lifeless: Pointless? No. Depressing and embarrassing? Definitely.
[04:17] <wgrant> lifeless: I think we need to segregate OOPSes by HTTP method.
[04:18] <wgrant> Looking at that +copy-packages OOPS again, it's a GET, not a POST.
[04:18] <wgrant> So it's just batchnavigator's COUNTing stupidity.
[04:42] <lifeless> wgrant: file a bug on oops-tools, I think that is a good, valid heuristic.
[04:43] <wgrant> lifeless: k, thanks.
[04:56] <wgrant> lifeless: How are we going to fix batchnavigator? Can we get postgres to give us an estimate?
[04:57] <lifeless> broadly
[04:58] <lifeless> a) there is an estimator in the code base already
[04:58] <lifeless> b) we don't want to pass storm result sets outside the persistence layer anyway
[04:58] <lifeless> c) while counting can be slow, we should be able to do better than 3 seconds
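[Editor's note: lifeless's point (c) — that a count can be made far cheaper than 3 seconds — is usually done by asking PostgreSQL's planner for its row estimate instead of running COUNT(*). A minimal sketch of that idea, with a hypothetical helper name (this is not the estimator already in the LP code base that he mentions in point (a)):]

```python
import re

# The first line of PostgreSQL EXPLAIN output carries the planner's
# row estimate, e.g.:
#   Seq Scan on bugtask  (cost=0.00..458.00 rows=21000 width=4)
_ROWS_RE = re.compile(r"rows=(\d+)")

def estimated_count(explain_output):
    """Parse EXPLAIN output and return the planner's row estimate
    for the top plan node, or None if no estimate is present."""
    first_line = explain_output.splitlines()[0]
    match = _ROWS_RE.search(first_line)
    return int(match.group(1)) if match else None

# A batch navigator would run `cursor.execute("EXPLAIN " + query)`
# and feed the first result row to estimated_count(): the estimate
# is returned instantly, while COUNT(*) has to scan every row.
```

The estimate can be off for complex queries, so a real implementation would typically fall back to an exact COUNT(*) only when the estimate is small.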
[07:21] <lifeless> \o/ parallel testing for testr
[07:22] <lifeless> about 0.3 seconds (25%) off of the time to test testtools.
[07:38] <wgrant> I love well-designed tests.
[07:50] <lifeless> wgrant: did you file that bug about presentation of the 'Ubuntu also tracks bugs for packages derived from this project: bzr in ubuntu.' links ?
[07:50] <wgrant> lifeless: I didn't.
[07:50] <wgrant> I feel like I should talk to someone first.
[07:50] <lifeless> thats a mispattern
[07:51] <lifeless> I'm sure you can point out that the current approach doesn't scale to even just the distros hosted in LP.
[07:52] <lifeless> the mispattern is needing discussion *before* creating a place to talk about the issues
[07:52] <wgrant> Well, It is tempting to at least raise it with Curtis first.
[07:53] <lifeless> its been 1? 2? weeks.
[07:53] <lifeless> I call timeout
[07:53] <lifeless> (the timeout should be ~ 5 minutes)
[07:53] <wgrant> Wellll, I forgot.
[07:53] <lifeless> wgrant: exactly
[07:53] <wgrant> You are on leave; you cannot call timeout :P
[07:53] <lifeless> can too; nyarh nyarh nyarh
[07:53] <jelmer> mørning lifeless, wgrant
[07:54] <lifeless> ø?
[07:54] <wgrant> Evening jelmer.
[07:54] <jelmer> I'm feeling creative this morning.
[07:54] <wgrant> My compose key is sadly broken :(
[07:54] <lifeless> ö
[07:54] <wgrant> Thére wë go.
[07:54] <jelmer> wgrant: do you need one for Strine?
[07:55] <wgrant> jelmer: Apparently.
[08:26] <adeuring> good morning
[08:27] <al-maisan> moin adeuring :)
[08:28] <adeuring> hey al-maisan!
[08:28] <lifeless> wgrant: btw, how many cpus do you have?
[08:29] <wgrant> lifeless: 6 cores.
[08:29] <wgrant> Morning adeuring, al-maisan.
[08:29] <lifeless> wgrant: you might want to try glueing latest testr to running lp tests in vm's.
[08:29] <lifeless> (one vm per test command invocation)
[08:29] <al-maisan> hello wgrant, how are things?
[08:30] <lifeless> wgrant: (if you want 6 way concurrency in an lp test run)
[08:30] <wgrant> al-maisan: Pretty good.
[08:30] <wgrant> Enjoying my final week of freedom.
[08:30] <al-maisan> :)
[08:30] <al-maisan> there are some pretty interesting discussions going on re soyuz :)
[08:31] <wgrant> You could say that.
[08:32] <StevenK> wgrant: Yeah, right. You're spending it working on Launchpad
[08:32] <wgrant> StevenK: Says he who is on leave but present...
[08:32] <lifeless> StevenK: if you love your job, you never need to work :)
[08:32] <StevenK> I'm too attached to IRC to /quit
[08:32] <al-maisan> true .. but also slightly dangerous ;)
[08:33]  * StevenK is going to spend some time this week working on Hudson
[08:34]  * jelmer waves to StevenK, al-maisan, adeuring
[08:34] <StevenK> jelmer: O hai!
[08:34]  * al-maisan waves back :)
[08:35] <wgrant> lifeless: Is there any documentation on the testr parallelisation functionality?
[08:35] <wgrant> Test runs in half an hour would be pretty cool.
[08:35] <lifeless> wgrant: MANUAL.txt
[08:35] <lifeless> wgrant: testr run --help
[08:35] <lifeless> wgrant: NEWS
[08:36] <wgrant> lifeless: I suppose I'll need a very recent (!)release?
[08:36] <lifeless> tip
[08:36] <wgrant> k
[08:36] <lifeless> its not finalised
[08:36] <mwhudson_> morning jelmer
[08:36] <lifeless> I may tweak the config if it doesn't fit well with our needs here
[08:36] <jelmer> hey Michael
[08:37] <jelmer> mwhudson_: Thanks for landing that C introspection branch :-)
[08:38] <wgrant> Bah, fixtures isn't packaged :(
[08:38] <jelmer> wgrant: it is in the testing-cabal PPA
[08:38] <wgrant> jelmer: Perfect, thanks.
[08:38] <mwhudson_> jelmer: thanks for the prods
[08:38] <jelmer> wgrant: ppa:testing-cabal/archive
[08:42] <lifeless> tip testr is in that ppa too
[08:42] <lifeless> wgrant: its also in Debian.
[08:44] <wgrant> Maybe I should go back to unstable for a while.
[08:44] <lifeless> or natty
[08:44] <wgrant> I trust unstable more.
[08:44] <lifeless> unstable is ~frozen
[08:45] <wgrant> But it doesn't have Unity rewrite crack :)
[08:45] <wgrant> Anyway.
[08:45]  * wgrant investigates testr --parallel
[08:45] <al-maisan> wgrant: you can use natty with the normal gnome desktop as well
[08:46] <wgrant> But NM doesn't seem to really work :/
[08:51] <StevenK> lifeless: Off the top of your head, will the Hudson EC2 plugin just deal with and mount an EBS volume, or does that require changes? I can't find much TFM to R
[08:51] <lifeless> should just work
[08:52] <lifeless> ^-WAG
[08:54] <bigjools> morning
[08:55] <wgrant> Morning bigjools.
[08:56] <bigjools> inbox most definitely not zero today
[08:57] <wgrant> I should clean up mine this week.
[08:58] <wgrant> Because I guess I will acquire a new problematic one soon.
[08:58] <wgrant> :(
[08:58] <mwhudson_> i had a soul-crushing amount of email this morning
[08:59] <bigjools> wgrant: you already subscribe to bugmail, that forms 80% of mine
[08:59] <bigjools> then the oops reports
[08:59] <bigjools> then warthogs on a bad day :)
[08:59] <wgrant> bigjools: There's a lot of bugmail, sure, but it's nice and quick to get through.
[09:00] <bigjools> hahaha
[09:13] <mrevell> Hello
[09:22] <bigjools> g'day mrevell
[09:25] <thumper> wgrant: just one week to go :)
[09:25] <thumper> mwhudson_: yeah, saw your tweet
[09:25]  * thumper goes to make a cuppa tea
[09:25] <thumper> hi mrevell
[09:25]  * thumper waves at StevenK
[09:25]  * thumper thinks that is everyone now
[09:25] <mrevell> haha
[09:25] <mrevell> Good work thumper
[09:27] <wgrant> thumper: Indeed!
[09:43]  * bigjools pouts
[09:44] <wgrant> Uhoh.
[09:44] <bigjools> wgrant: https://bugs.launchpad.net/soyuz/+bug/685076
[09:44] <_mup_> Bug #685076: PPA deletion leaves unremoved publications <Soyuz:New> <https://launchpad.net/bugs/685076>
[09:44] <bigjools> ARGH.
[09:45] <wgrant> bigjools: How is that ARGH-worthy?
[09:45] <bigjools> because I overlooked it
[09:45] <wgrant> Heh.
[09:46] <bigjools> there's a pheasant outside my window.  Pie anyone?
[09:46] <lifeless> mmm
[09:46] <lifeless> PIE
[09:46] <bigjools> :D
[09:48] <bigjools> wgrant: so all these bugs you're filing!
[09:48] <bigjools> shall I just assign you to them now? :)
[09:49] <wgrant> Maybe.
[09:50] <bigjools> wgrant: I am not clearly following the timeline you wrote in bug 682692.  Could be my lack of caffeine though.
[09:50] <_mup_> Bug #682692: Some PPAs have duplicated builds <soyuz-build> <soyuz-publisher> <Soyuz:Triaged> <https://launchpad.net/bugs/682692>
[09:52] <wgrant> bigjools: What's ununderstandable?
[09:53] <bigjools> I am having trouble understanding where the problem is in that example timeline
[09:54] <wgrant> bigjools: The first copy was deleted 18 seconds before the second copy was performed.
[09:54] <wgrant> So the status summary returned FULLYBUILT instead of FULLYBUILT_PENDING, so the copy was permitted with unpublished builds.
[09:55] <bigjools> wgrant: can you condense that comment into a simple instruction to re-create the bug?
[09:55] <bigjools> it will double as a QA test when we fix it
[09:55] <bigjools> congrats on figuring it out
[09:58] <wgrant> bigjools: Done.
[09:59] <bigjools> wgrant: rock!  thanks
[09:59] <wgrant> Hmm. Should I expect an acknowledgement when I reply to an rt.admin.canonical.com ticket?
[10:06] <bigjools> wgrant: https://answers.launchpad.net/soyuz/+question/136685
[10:06] <bigjools> wgrant: RT should reply but since it doesn't know you I expect it won't
[10:07] <wgrant> bigjools: Have you looked at the end of that question?
[10:07] <bigjools> the massive list of files you mean? :)
[10:07] <wgrant> I was thinking more about my response and reviewed branch.
[10:08] <bigjools> oh didn't get that far
[10:08] <bigjools> carry on
[11:52] <bigjools> wgrant: any further comments re my bug comment: https://bugs.launchpad.net/soyuz/+bug/684180/comments/3 ?
[11:52] <_mup_> Bug #684180: Deleted sources can leave binaries hanging around forever <soyuz-core> <soyuz-publisher> <Soyuz:Triaged> <https://launchpad.net/bugs/684180>
[11:53] <wgrant> bigjools: "Argh" just about covers it.
[11:53] <bigjools> that's my Word of the Day
[11:53] <wgrant> Deletion won't cover everything.
[11:53] <wgrant> What if the source is superseded instead?
[11:53] <wgrant> We can't block that.
[11:53] <wgrant> We can, instead, cry.
[11:54] <bigjools> it won't but it covers this case
[11:54] <bigjools> we can block superseding surely?
[11:54] <bigjools> it'll get caught later
[11:54] <wgrant> Pllllllease don't.
[11:54] <wgrant> That violates the very little layering we have :(
[11:55] <bigjools> the layering sucks
[11:55] <wgrant> Why?
[11:55] <bigjools> we don't have layers, we have something organised into modules
[11:55] <wgrant> Making the publisher depend on build statuses is unprecedented.
[11:55] <wgrant> And slow.
[11:56] <bigjools> in that case we need to make domination do more work
[11:56] <bigjools> which is equally as unpalatable
[11:56] <wgrant> We need to think about how binaries and sources interact.
[11:56] <wgrant> For copies, for deletions, for dominations.
[11:56] <bigjools> "badly" :)
[11:56] <wgrant> It is all tied together.
[11:56] <wgrant> And it is only consistent in one respect: it is fucked.
[11:57]  * bigjools loses mouthful of coffee
[12:00] <bigjools> wgrant: we can't auto-supersede binaries if the source is superseded
[12:00] <bigjools> they might be a dependent
[12:00] <wgrant> I know
[12:00] <wgrant> Except that's not the reason.
[12:00] <bigjools> my main issue is preventing the manual deletion that people do in PPAs
[12:00] <bigjools> what reason?
[12:01] <wgrant> We don't seem to care about breaking dependencies.
[12:01] <deryck> Morning, all.
[12:01] <bigjools> howdy deryck
[12:02] <bigjools> wgrant: why do we have this then? http://people.canonical.com/~ubuntu-archive/NBS/
[12:02] <wgrant> I guess it is sort of dependency-related.
[12:02] <wgrant> But not entirely.
[12:02] <wgrant> Right.
[12:02] <wgrant> NBS.
[12:02] <wgrant> I hate NBS.
[12:02] <bigjools> that's why we can't auto-supersede the binaries
[12:03] <wgrant> I don't like introducing a hack to partially fix a problem without at least thinking about how to fix the whole thing.
[12:04] <bigjools> I contest your definition of a hack
[12:04] <wgrant> Blocking deletion because builds are in the wild is a hack.
[12:04] <bigjools> I disagree
[12:04] <wgrant> We should just make them be created deleted. Or something like that.
[12:05] <bigjools> that's a hack too then ;)
[12:05] <wgrant> It's a less obtrusive one.
[12:05] <bigjools> GRRR what has a maverick update done to my fonts
[12:07] <wgrant> So, we can't really reject the binaries if the source is deleted. Blocking deletion is undesirable. This leaves us with accepting binaries for a deleted source.
[12:07] <bigjools> we can just throw them away and mark the build superseded
[12:07] <bigjools> if the build is not active then we mark it superseded immediately
[12:08] <wgrant> We could. But that seems like a waste.
[12:08] <bigjools> someone wants the package deleted - we're just doing what they want.
[12:11] <wgrant> I think most of my problems with this area stem from the way we fail to treat sources and their binaries as a single unit.
[12:11] <wgrant> And the way this interacts with copies.
[12:13] <bigjools> how would you treat them as a single unit?
[12:14] <wgrant> We already do. For copies. And in the UI. Everywhere except most of the code.
[12:14] <bigjools> how would you do that in the code?
[12:18] <wgrant> I don't know exactly. But it's terribly confusing the way it is now: the same source can have different builds in different series and different archives. Copy from Hardy to Lucid, and you can no longer copy from Lucid to anywhere else. Copy one of those to another archive, and it will have all the same archs, but exactly one of them is actually a different build. Copy to a non-virt archive, it now contains another superset of the builds of the original archive.
[12:18] <wgrant> Want to do a security update? hppa is being slow. Let's copy without it.
[12:18] <wgrant> Oh look, can't copy it any more.
[12:20] <bigjools> you conveniently forgot about copy with binaries
[12:20] <bigjools> or mixed examples with it set/not set
[12:20] <wgrant> All those cases are copying with binaries.
[12:20] <wgrant> Without binaries it's boring.
[12:21] <bigjools> " one of them is actually a different build" - how?
[12:22] <wgrant> bigjools: Because I copied from Hardy in one PPA to Lucid in another. Lucid has armel, Hardy does not. Lucid gets a new armel build, different from the one in the origin archive.
[12:23] <bigjools> why are we creating new builds for binary copies?
[12:23] <wgrant> Because the target series or archive can have more architectures.
[12:23] <bigjools> that's crack
[12:23] <wgrant> I think it probably is.
[12:23] <bigjools> it's a binary copy, I don't expect more builds
[12:23] <wgrant> It's also really, really slow.
[12:24] <wgrant> But inter-series it becomes awkward.
[12:25] <wgrant> My intuition says that builds should live in a set. And only sets should be copiable. Atomically. Inter-archive that's fine, since if you want more archs you can just rebuild it all.
[12:26] <wgrant> But inter-series it doesn't work.
[12:26] <bigjools> yup
[12:27] <wgrant> So perhaps there should be an exception, where a set can be extended within an archive by copying to a new series.
[12:27] <wgrant> But then when somebody copies someone else's set to another archive, they are screwed. Because they can't append to it, and they can no longer rebuild it.
[12:28] <wgrant> "Argh"
[12:29] <bigjools> wgrant: but soyuz isn't that complicated, don't worry.
[12:30] <bigjools> we'll just throw more people at it
[12:31] <wgrant> :)
[12:31] <wgrant> Yes, the model is somewhat overcomplicated... but it can't be simplified away.
[12:32] <bigjools> it's the nature of the beast
[13:04] <bigjools> wgrant: so I think I'll amend my suggestion on that bug to just throw away the builds that are in progress and mark the build record as superseded.
[13:04] <wgrant> bigjools: When they try to finish?
[13:04] <bigjools> I just need to decide whether to do it in p-a or the b-m
[13:04] <bigjools> yes
[13:05] <bigjools> if they are not dispatched yet then we can supersede immediately
[13:05] <wgrant> They'll supersede automatically when they try to start.
[13:05] <bigjools> good point, well made
[13:05] <bigjools> I was thinking only of deletions
[13:06] <bigjools> what do you think of the cleanup SQL I propose to do?
[13:08] <wgrant> As long as there are no Published sourcepubs remaining, sure.
[13:08] <bigjools> yeah, it's just superseding binaries where the source is already superseded without being published at all
[13:08] <bigjools> should be safe
[13:09] <wgrant> It's possible that it was revived later. But I guess we could ignore that.
[13:09] <bigjools> some of the sources date back to 2009
[13:09] <wgrant> Only?
[13:09] <bigjools> 2009-10-06 is the earliest
[13:10] <wgrant> Hmmmm.
[13:10] <wgrant> That's pretty strange.
[13:10] <bigjools> 245 SPPHes
[13:21]  * bigjools -> fud
[14:26] <allenap> I have a new frozenset subclass. When it gets wrapped in a Zope security proxy nothing is available, not even the usual frozenset methods. Does anyone know how to configure this in ZCML (or otherwise) without repeating the existing configuration? Right now I think I'm going to modify BasicTypes directly.
[14:34] <flacoste> allenap: that's weird because frozenset is already part of _default_checker in zope.security.checker
[14:34] <allenap> flacoste: Yeah :-/
[14:35] <flacoste> allenap: have you tried the security debugger on it?
[14:35] <allenap> flacoste: Is that ZOPE_WATCH_CHECKERS? Not yet.
[14:35] <flacoste> allenap: no, it's something else
[14:35] <flacoste> hang on
[14:35] <flacoste> looking for it
[14:36] <flacoste> something i wrote a long time ago
[14:36] <flacoste> it's in lazr.restful.debug
[14:36] <flacoste> from lazr.restful.debug import debug_proxy
[14:36] <flacoste> print debug_proxy(obj)
[14:37] <flacoste> allenap: it will show all layers of wrapping, along with the checkers registered for the security proxy
[14:37] <allenap> flacoste: Cool, that sounds useful.
[14:39] <allenap> flacoste: It gets: zope.security._proxy._Proxy (using zope.security.checker.Checker)
[14:41] <flacoste> hmm
[14:42] <flacoste> allenap: is that all?
[14:42] <allenap> flacoste: Yes! That's it :)
[14:42] <flacoste> allenap: it's weird, because it should be showing the permission declared on it
[14:42] <flacoste> so it's as if the read_permission and write_permission were unset
[14:42] <flacoste> but the default _setChecker
[14:42] <flacoste> is a NamesChecker that grants access to all of the set protocol
[14:43] <flacoste> i suspect that something is changing the default one
[14:43] <flacoste> hang on, i'll try something
[14:43] <allenap> flacoste: Yeah, that's what's stumping me.
[14:44] <flacoste> shit, my tree is out of date
[14:44] <flacoste> allenap: try printing the checker in make harness on devel
[14:44] <flacoste> see if it is the same thing
[14:44] <allenap> flacoste: That's where I did it.
[14:46] <flacoste> allenap: ok, let me compare notes (after my standup)
[14:47] <allenap> flacoste: Okay, thanks.
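[Editor's note: allenap's symptom is consistent with zope.security keeping its BasicTypes checker table keyed on the exact class, so a frozenset subclass falls through to a checker that grants nothing. A pure-Python toy of that lookup behaviour — the names below are illustrative, not the actual zope.security internals:]

```python
# Toy model of a type-keyed checker registry: the base type has a
# checker granting the set protocol, but a subclass misses the
# exact-type lookup and gets an empty checker instead.

class NamesChecker:
    def __init__(self, names):
        self.names = frozenset(names)
    def check(self, name):
        # Raise if the proxied attribute is not explicitly granted.
        if name not in self.names:
            raise AttributeError("forbidden: %s" % name)

BASIC_TYPES = {frozenset: NamesChecker(["__contains__", "__iter__", "union"])}
EMPTY_CHECKER = NamesChecker([])

def select_checker(obj):
    # Exact-type lookup: subclasses of frozenset do not match.
    return BASIC_TYPES.get(type(obj), EMPTY_CHECKER)

class TaggedSet(frozenset):
    pass

assert select_checker(frozenset()) is not EMPTY_CHECKER
assert select_checker(TaggedSet()) is EMPTY_CHECKER  # the subclass misses
```

This would explain why allenap saw a bare `zope.security.checker.Checker` with no names granted, and why registering the subclass explicitly (or, as he suggests, adding it to BasicTypes) fixes it.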
[14:51] <flacoste> mrevell, jml: http://leankitblog.com/2010/12/10-kanban-boards-leankit-kanban-style/
[16:04] <timrc> We're granting people/teams upload rights to PPAs via newComponentUploader() in launchpadlib.  Our problem is that, when we do this, the PPA does not show up in +archivesubscriptions -- seems to me this should be happening?  Am I off my rocker?
[16:06] <bigjools> timrc: sounds wrong.  Can you file a bug with the details to re-create that please.
[16:06] <timrc> bigjools: sure, np
[16:06] <bigjools> timrc: presumably it's a private PPA?
[16:06] <timrc> bigjools: yep
[16:07] <timrc> bigjools: I'll be sure to note that detail in the bug report
[16:07] <bigjools> well you can't have subscriptions on them otherwise :)
[16:07] <timrc> bigjools: the only way I know of to expose the subscription is to use Manage access via LP
[16:07] <bigjools> thought I'd check the blindingly obvious first
[16:07] <bigjools> you can use the API as well
[16:08] <timrc> bigjools: but maybe +archivesubscriptions only lists archives that you can install from and not ones you can upload to? I dunno, I'm a newbie :)
[16:08] <timrc> and not ones you can _only_ upload to?
[16:09] <bigjools> timrc: it's possible that's happened, yes, and it would be a bug if that's the case.
[16:25] <gary_poster> sinzui: might you know anything about https://bugs.launchpad.net/launchpad-foundations/+bug/683945 ?  I don't have anyone ATM who does.
[16:25] <_mup_> Bug #683945: disabled lists are never completing disabling <canonical-losa-lp> <Launchpad Foundations:New> <https://launchpad.net/bugs/683945>
[16:32] <flacoste> abentley: how's the release going?
[16:32] <abentley> flacoste: It's going fine.  We've nearly got QA complete.
[16:33] <flacoste> abentley: did we sort out the DKIM QA?
[16:33] <abentley> flacoste: poolie was supposed to do it on (his) saturday, but he hasn't updated the tags.
[16:34] <flacoste> abentley: should we consider reverting it?
[16:34] <flacoste> abentley: and should we do a no-downtime today with bzr 2.2.2?
[16:34] <abentley> flacoste: I am hoping that he's done it and just forgotten to update the tags.
[16:35] <abentley> flacoste: I was going to chase it down today.
[16:35] <flacoste> abentley: ok, let's see what he says when he comes online later today
[16:35] <flacoste> abentley: what about bzr 2.2.2?
[16:36] <abentley> flacoste: I don't feel strongly either way.
[16:37] <flacoste> hmm, ok
[16:38] <abentley> flacoste: If we do, we might as well do 1206 as well, so we can nix both cowboys.
[16:40] <flacoste> abentley: the cowboys are not part of the nodowntime set
[16:40] <flacoste> so it's probably not that useful
[16:41] <abentley> Hmm, I thought one of them was not part of the nodowntime set because of the cowboys.
[17:01] <gary_poster> losas, https://bugs.launchpad.net/launchpad-foundations/+bug/684672 is dealt with, right?  The bug report doesn't give a status update
[17:01] <_mup_> Bug #684672: staging update broken on revno 10029 <canonical-losa-lp> <Launchpad Foundations:Triaged> <https://launchpad.net/bugs/684672>
[17:02] <gary_poster> eh, sorry, going to ops
[17:40] <timrc> bigjools: So, I found a minute to play around.  You have to call newSubscription() to have the PPA appear in +archivesubscriptions .. which is fine as being 'subscribed' to the archive simply means you can install from it.  I suppose there is a use case where you'd like to make the archive 'write-only' in that a person / team can upload to the PPA, but not install from it (e.g. the PPA is upstream to another archive that is installable)
[17:41] <bigjools> timrc: actually I think we have a bug somewhere about having implicit subscriptions for PPA uploaders.
[17:41] <timrc> bigjools: so I think the takeaway here is that it'd be nice to see the access rights you have on a private PPA, regardless of subscription.. is there a way to do this from the UI?
[17:41] <bigjools> you can query what rights someone has via the API
[17:41] <timrc> bigjools: I was hoping there'd already be something out there to support our users :) but I'll put this on my list of TODOs
[17:42] <bigjools> yeah some of this is a little clunky still
[17:44] <timrc> bigjools: well I appreciate your time, thanks
[17:44] <bigjools> timrc: no problem, let me know if you have more questions
[18:24] <sinzui> gary_poster, sorry I was OTP. I marked the bug as fix released. Chex and I resolved the stuck Mm + XMLRPC combo
[18:41] <mkanat> Hey hey. Any hope of getting my loggerhead update merged and staged?
[18:43] <mkanat> This is the MP: https://code.launchpad.net/~mkanat/loggerhead/launchpad/+merge/42338
[18:43] <lifeless> that won't get it deployed
[18:43] <lifeless> you need to propose a source dep change to launchpad itself
[18:44] <mkanat> lifeless: Right, but first I have to merge to this branch, which I don't control.
[18:44] <lifeless> mmm, well both things I need
[18:44] <lifeless> anyhow, right now - we're frozen.
[18:44] <mkanat> lifeless: Okay. How long does that last?
[18:44] <lifeless> I'm not sure, i'm on leave - check the -dev list archive, dates should have been announced there
[18:45] <mkanat> lifeless: Okay, thanks.
[18:59] <gary_poster> ack, thanks sinzui
[19:21] <lifeless> sigh, the python3 shenanigans are just plain tiring
[19:23] <benji> lifeless: which shenanigans in particular are getting you down?
[19:23] <lifeless> benji: most currently the transform stuff
[19:24] <lifeless> benji: but more generally the way its just so painful to do widespread compatibility with
[19:24]  * lifeless thinks py3k was a mistake
[19:26] <benji> unfortunately I have to agree; too much incompatibility for such small gains
[19:26] <benji> I still think someone who wanted to be BDFL 2.0 could take up Python 2 and run with it.
[19:36] <mkanat> benji: What I hear is that 2.7 is the last 2.x release ever.
[19:37] <benji> mkanat: right, the last official one; the python-dev folks won't be working on any more releases, but someone else could take it up -- they'd have to use a name other than "Python" though
[19:37] <mkanat> benji: Ah, yeah.
[19:40] <lifeless> benji: they could use Compatython
[19:40] <benji> heh
[19:45] <lifeless> benji: hi
[19:45] <lifeless> benji: can you join #launchpad ?
[19:45] <lifeless> benji: (generally every dev should be in that channel btw)
[19:55] <thumper> morning
[19:55]  * thumper relocates
[20:13] <mhall119> question, if I have a launchpad username, can I get their openid identity_url from the launchpad API?
[20:16] <thumper> mhall119: what problem are you trying to solve?
[20:21] <mhall119> loco.ubuntu.com pre-creates Django users based on a team's admins
[20:22] <mhall119> however, when those admins go to log in to loco.u.c, django-openid-auth sees an existing user with that username, and instead assigns them to $username+1
[20:23] <mhall119> if they already had their openid identity_url set, it would just log them into the correct account
[20:23] <mhall119> see https://code.launchpad.net/~mhall119/django-openid-auth/fixes-639772/+merge/38545
[20:51] <EdwinGrubbs> lifeless: what do you think of this solution for specifying the store used by Set classes? http://pastebin.ubuntu.com/540423/
[20:51] <lifeless> EdwinGrubbs: that makes a context-specific item a global behaviour: thats undesirable
[20:53] <EdwinGrubbs> lifeless: what do you mean? The dbpolicy is only being used during the duration of the with-statement.
[20:53] <lifeless> right
[20:56] <EdwinGrubbs> lifeless: Does that mean that the BaseDatabasePolicy shouldn't have had the __enter__() and __exit__() methods added so that someone could use "with MasterDatabasePolicy()"
[20:58] <lifeless> EdwinGrubbs: the with statement you wrote changes the selector for all threads at once AFAICT
[20:59] <lifeless> EdwinGrubbs: it may well mean that the __enter__ and __exit__ are bad ideas outside of the test suite.
[21:00] <lifeless> EdwinGrubbs: but even if it was threaded
[21:01] <lifeless> EdwinGrubbs: its still a risky idiom to change context-wide state when calling an object with a defined lifetime greater than ones own context
[21:01] <EdwinGrubbs> lifeless: then, Person.getOrCreateByOpenIDIdentifier() should probably stop using  a policy as a context manager also.
[21:02] <lifeless> EdwinGrubbs: myself, I would have used the person_validity_queries static method that was on Person and not put anything onto PersonSet
[21:02] <lifeless> EdwinGrubbs: re Person.getOrCreateByOpenIDIdentifier - yes, I suspect so.
[21:03] <lifeless> EdwinGrubbs: I had a quick look through zope.component and zope.interfaces to see if I could find a threaded context in there - I couldn't, but its pretty convoluted code, so it could be anything.
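[Editor's note: lifeless's objection is that a globally-installed policy leaks into every thread, whereas a thread-local stack does not. A minimal sketch of the thread-local variant — the class and function names here are hypothetical, not the Launchpad dbpolicy implementation under discussion:]

```python
import threading

# A policy stack kept in thread-local storage: entering a policy in
# one thread must not change what other threads see.
_state = threading.local()

class DatabasePolicy:
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        stack = getattr(_state, "stack", None)
        if stack is None:
            stack = _state.stack = []
        stack.append(self)
        return self
    def __exit__(self, *exc_info):
        _state.stack.pop()
        return False

def current_policy():
    stack = getattr(_state, "stack", [])
    return stack[-1].name if stack else "default"

results = {}

def other_thread():
    # Runs while the main thread is inside the with-block.
    results["other"] = current_policy()

with DatabasePolicy("master"):
    t = threading.Thread(target=other_thread)
    t.start()
    t.join()
    results["main"] = current_policy()

# The policy is visible only in the thread that entered it.
assert results["main"] == "master"
assert results["other"] == "default"
```

If the real selector were process-global instead, `results["other"]` would come back as "master" — exactly the cross-thread leak lifeless is warning about.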
[21:03] <mhall119> if I have a launchpad username, can I get their openid  identity_url from the launchpad API?
[21:04] <james_w> mhall119, it doesn't look like it
[21:04] <lifeless> mkanat: https://api.launchpad.net/+apidoc/ has the api docs
[21:04] <lifeless> bah
[21:04] <lifeless> mhall119: ^
[21:04] <lifeless> mhall119: That said, I think its probably undesirable to share that information about users to people other than the user themselves.
[21:05] <mhall119> lifeless: if you view the source of https://launchpad.net/~lifeless
[21:05] <mhall119> it's in there
[21:05] <mhall119> <link rel="openid2.local_id" href="https://login.launchpad.net/+id/kPbPBDC" />
[21:06] <lifeless> mhall119: hmm, we should nuke that
[21:06] <lifeless> we use the ubuntu sso these days
[21:06] <mhall119> why?
[21:06] <lifeless> launchpad does not contain an authentication database.
[21:06] <lifeless> we delegate to sso.ubuntu.com
[21:06] <lifeless> login.launchpad.net is just a skin on sso.ubuntu.com
[21:07] <mhall119> okay, any way to get the identity url from sso?
[21:07] <mhall119> given a launchpad username
[21:08] <lifeless> try asking in #canonical-isd
[21:08] <mhall119> thanks
[21:08] <lifeless> personally though, I'd just stop preallocating things
[21:08] <lifeless> and do admin lookup on-demand.
[21:08] <mhall119> we setup "user profiles" for admins in a team, so we can display their real names
[21:09] <mhall119> but that requires that we have a django user account for them
[21:09] <mhall119> which throws off django-openid-auth once the admin actually tries to log in for the first time
[21:11] <lifeless> right
[21:11] <lifeless> so do that setup *after* they login
[21:11] <lifeless> its just a thought
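[Editor's note: since the identity URL is not exposed via the API, mhall119's workaround is to read the `openid2.local_id` link from the profile page markup. A sketch of that extraction using only the standard library; fetching the page itself is left out, and the sample markup is taken from the conversation above:]

```python
from html.parser import HTMLParser

class OpenIDLinkParser(HTMLParser):
    """Collect the href of <link rel="openid2.local_id" ...>."""
    def __init__(self):
        super().__init__()
        self.identity_url = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "openid2.local_id":
            self.identity_url = attrs.get("href")

def extract_identity_url(html):
    """Return the OpenID identity URL found in profile-page HTML,
    or None if the link element is absent."""
    parser = OpenIDLinkParser()
    parser.feed(html)
    return parser.identity_url

sample = ('<head><link rel="openid2.local_id" '
          'href="https://login.launchpad.net/+id/kPbPBDC" /></head>')
assert extract_identity_url(sample) == "https://login.launchpad.net/+id/kPbPBDC"
```

Scraping markup like this is fragile by nature — as lifeless notes, the page element may be removed — so treating the URL as unavailable when the link is missing is part of the design.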
[21:23] <cody-somerville> Is this page cached? https://code.launchpad.net/~offspring-hackers
[21:23] <cody-somerville> That page says the branch was modified 1 minute ago - the branch was last modified on the 3rd.
[21:24] <lifeless> metadata changes count too
[21:26] <cody-somerville> ah
[21:27] <cody-somerville> right
[21:43] <lifeless> mwhudson: get a losa to set the review bits properly
[21:50] <mars> flacoste, pong?
[21:50] <flacoste> hi mars
[22:01] <benji> grr, the frequent firefox grey-screen-of-wait I'm getting lately is irritating
[22:39] <poolie> lifeless/whoever: is there a bug or rt for live log feeds from production?
[22:39] <poolie> should i file one?
[22:39] <poolie> wbn
[22:39] <lifeless> realtime? possibly
[22:39] <lifeless> up to you
[22:45] <wgrant> 17 minutes for DB changes!? I thought the last estimate was slightly under two hours...
[22:45] <lifeless> recife branch was rolled back
[22:45] <wgrant> Ahh.
[22:46] <poolie> lifeless: well, at least frequently you don't have a multi-minute wait
[22:46] <poolie> or apparently a 24h wait
[22:47] <wgrant> Aren't they synced every 5 minutes?
[22:47] <lifeless> poolie: EPARSE
[22:47] <poolie> not from staging, apparently
[22:48] <poolie> lifeless: if something i changed produces an error or debugging message on staging, i would like to be able to see that message
[22:48] <poolie> at the moment i need to either prod a losa, or wait 24h
[22:49] <poolie> iwbn if it was available to developers less than a minute after being emitted
[22:49] <lifeless> poolie: sure, sounds nice to be better at
[22:49] <lifeless> I wouldn't aim at realtime just yet
[22:49] <poolie> near-
[22:49] <lifeless> staging load is very high
[22:50] <wgrant> Is the scripts split going to happen at some point?
[22:50] <lifeless> poolie: I don't want to think about this deeply on leave
[22:50] <lifeless> poolie: so, as I said before, up to you
[22:54] <poolie> sure
[23:43] <poolie> abentley, i'm back to qa now
[23:45] <poolie> i think 643219 and bug 316272 are ok
[23:45] <_mup_> Bug #316272: launchpad should verify gmail or DomainKeys authenticators <dkim> <feature> <qa-needstesting> <Launchpad Registry:Fix Committed by mbp> <https://launchpad.net/bugs/316272>
[23:46] <poolie> spiv/wgrant/abentley: would you like to read my qa steps on those bugs and suggest any others?
[23:47] <wgrant> poolie: Have you tried DKIM to an existing bug?
[23:48] <poolie> i have, and it worked
[23:48] <wgrant> Sounds good then.
[23:48] <poolie> and gpg is still accepted
[23:48] <wgrant> Yep, saw that.
[23:55] <abentley> poolie: ideally, your reviewer will have seen your testing plan as part of your merge proposal, but I trust your judgement.
[23:55] <poolie> that would be good
[23:55] <poolie> nobody asked me for one
[23:55] <poolie> i'm just checking the last bug is ok
[23:55] <poolie> it doesn't seem to have regressed, at any rate
[23:55] <poolie> i should now manually retag them qa-ok?
[23:56] <abentley> poolie: yes.
[23:57] <poolie> and they'll automatically go to fixreleased during rollout to lpnet?
[23:57] <abentley> poolie: we have a standard template for proposals, which includes a section for testing it.  But very few people are using the template.  I should bring that up at the reviewer meeting.
[23:58] <abentley> poolie: You don't have to worry about marking them fixreleased.  I believe sinzui runs a script to do that.