[00:00] <wallyworld> no, not :)
[00:00] <wallyworld> :(
[00:00] <lifeless> wallyworld: one common idiom is an 'integration' branch holding things merged from other people that is merged to trunk
[00:00] <lifeless> another is a 'bits' branch where you do little trivial things
[00:00] <lifeless> see e.g. stub's pending-db-patches branch
[00:00] <wallyworld> yes, i totally ignored those
[00:00] <wallyworld> in my thinking
[00:00] <lifeless> wallyworld: its ok, really. LP has -massive- use cases
[00:00] <wallyworld> bad mistake. mad at myself for making it
[00:01] <lifeless> and our object model being so heavy warps our thinking.
[00:01] <lifeless> I do it all the time, and have to back up and do another run at the problem.
[00:01] <wallyworld> that's true actually, about the object model
[00:01] <wallyworld> especially for newer people like me
[00:01] <lifeless> yes
[00:02] <lifeless> its a map, with lots of here-be-dragons, and not the territory.
[00:02] <lifeless> wallyworld: here's another case - one I'm looking at at the moment
[00:02] <wallyworld> it's also "decay" in the codebase
[00:02] <lifeless> wallyworld: https://bugs.qastaging.launchpad.net/ubuntu/+source/afflib/+bug/230350/+index
[00:02] <_mup_> Bug #230350: Missing Debian Maintainer field <Ubuntu:Invalid> <afflib (Ubuntu):Fix Released by saivann> <alac-decoder (Ubuntu):Fix Released by warp10> <axiom (Ubuntu):Invalid> <beneath-a-steel-sky (Ubuntu):Fix Released> <bibletime-i18n (Ubuntu):Fix Released by txwikinger> <binkd (Ubuntu):Fix Released> <bzr-builddeb (Ubuntu):Won't Fix> <capisuite (Ubuntu):Invalid> <chinput (Ubuntu):Fix Released by warp10> <chmsee (Ubuntu):Fix Released> <ciso (U
[00:03] <lifeless> that has 160 bug tasks.
[00:03] <lifeless> *160*
[00:03]  * wallyworld looks
[00:03] <lifeless> wallyworld: this is just an example of where users go waaaaay outside the expected bounds us mere developers created.
[00:03]  * wallyworld tried to look but got a timeout
[00:03] <lifeless> right
[00:04] <lifeless> thus I'm looking at it :>
[00:04] <wallyworld> :-)
[00:05]  * lifeless goes back to bug-279513
[00:06] <StevenK> Hm
[00:06] <StevenK> Jenkins, I'm disappointed.
[00:06] <wgrant> lifeless: Thanks.
[00:06] <StevenK> I was expecting a "trigger another build when this one is started."
[00:07] <lifeless> StevenK: I think the advanced triggers plugin can do that
[00:09] <StevenK> lifeless: I can't see that on the plugins page
[00:11] <lifeless> StevenK: #jenkins I think
[00:11] <lifeless> a quick google doesn't show me anything
[00:11] <lifeless> it would be easy to do a plugin for it
[00:11] <lifeless> but I'm reasonably sure there is one
[00:13] <StevenK> I can't see one either, but I can work around it
[00:14] <LPCIBot> Project db-devel build #407: STILL FAILING in 5 hr 36 min: https://hudson.wedontsleep.org/job/db-devel/407/
[00:14] <StevenK> wgrant: So, what do I do to this to make it only run windmill tests?
[00:15] <wgrant> StevenK: How do you call the test suite at the moment?
[00:15] <StevenK> make TESTOPTS="--subunit" check | subunit2junitxml -f -o test-results.xml
[00:15] <wgrant> Add '--layer=WindmillLayer' to TESTOPTS
[00:16] <StevenK> make TESTOPTS="--subunit --layer=WindmillLayer" check | subunit2junitxml -f -o test-results.xml
[00:16] <wgrant> Yup.
[00:16] <StevenK> Added
[00:16] <wgrant> Thanks.
[00:17] <StevenK> Scheduled a new build of it
[00:17] <wgrant> They only take ~35 minutes on ec2.
[00:19] <wgrant> StevenK: Is there a bug for the checkwatches.txt thing?
[00:19] <StevenK> Not that I can recall
[00:19] <wgrant> OK.
[00:19] <StevenK> It's only 500 lines, it should be shot.
[00:21] <wgrant> I need to rewrite most of it to get rid of more OOPSes.
[00:21] <wgrant> So I might.
[00:21] <wgrant> But checkwatches :(
[00:21] <StevenK> Haha
[00:22] <wgrant> Ah, good.
[00:30] <thumper> oops
[00:30] <thumper> forgot to add a particular tag to that landing
[00:32] <wgrant> thumper: Which?
[00:32] <thumper> wgrant: the mantis reland
[00:33] <thumper> I did an ec2 land
[00:33] <wgrant> It seems to have been rejected anyway.
[00:33] <wgrant> Ahh.
[00:33] <wgrant> What was missing? It doesn't want to be --rollback.
[00:33] <wgrant> It looks fine to me.
[00:33] <thumper> oh, ok
[00:33] <wgrant> I already rolled it back last night.
[00:33] <wgrant> So we are clear QA-wise.
[00:34] <thumper> ok
[00:34] <thumper> good
[00:34]  * thumper needs lunch
[01:28] <wgrant> Finished: SUCCESS
[01:29] <StevenK> wgrant?
[01:29] <wgrant> StevenK: windmill worked.
[01:29] <StevenK> \o/
[01:29] <wgrant> Thanks!
[01:30] <StevenK> It took an hour, though. :-(
[01:30] <wgrant> Yeah.
[01:30] <StevenK> I should be able to cut about 30 minutes off Jenkins build times by working on EBS
[01:30] <StevenK> But I have no idea where to start
[01:31] <lifeless> is bug 181368 fixed?
[01:31] <wgrant> Maybe.
[01:32] <wgrant> Certainly not released.
[01:32] <wgrant> But maybe fixed.
[01:35] <lifeless> wgrant: I thought the publisher was in nodowntime
[01:35] <wgrant> lifeless: No, because of poppy.
[01:35] <wgrant> No separate tree yet.
[01:47] <thumper> https://code.launchpad.net/~thumper/lazr.enum/case-insensitive-getTokenByTerm/+merge/51843 anyone?
[01:52] <thumper> why does jenkins now show recent success for db-devel?
[01:54] <StevenK> It doesn't?
[01:56] <thumper> last success 6 days ago
[01:56] <thumper> that isn't recent to me
[01:57] <StevenK> Oh. s/now/not/ ?
[01:57] <thumper> oh
[01:57] <thumper> yeah
[01:57]  * thumper is on school run
[01:57] <StevenK> Because checkwatches.txt has been failing pretty much every test run
[01:58] <StevenK> But not on ec2 or buildbot :-(
[02:15] <thumper> hmm....
[02:15] <thumper> why?
[02:16] <StevenK> Neither wgrant nor I have any clue
[02:18] <wgrant> thumper: I adjusted a couple of things relating to that test.
[02:18] <wgrant> thumper: So I'm going to look at it this afternoon.
[02:18] <wgrant> And either fix or delete it.
[02:19] <thumper> ok
[02:19] <wallyworld> lifeless: you mentioned before that the lp templates mostly show bug tasks and not bugs..... however i think we want to stick with just displaying bugs for revisions listing on the branch index page
[02:20] <lifeless> why?
[02:20] <wallyworld> a bug can have many bug tasks and we really just want the one bug item which the user can click on don't we? there's already a lot of info displayed
[02:20] <lifeless> wallyworld: bugtask:+index already picks a single task
[02:20] <lifeless> wallyworld: see the method I pointed you at ;)
[02:20] <wallyworld> the bug summarises the issue the mp is solving
[02:21] <wallyworld> if you mean the getLinkedBugTasks(), i think we need an analogous getLinkedBugs() method
[02:21] <wallyworld> we don't always want the entire list of bug tasks, sometimes just the bugs
[02:22] <wgrant> We have too many h2s.
[02:22] <wgrant> But it is tempting to keep them because they just look so nice.
[02:23] <wallyworld> lifeless:  i think the current extended revision listing looks ok as is, but we just want to be able to hide private bugs properly
[02:23] <lifeless> wallyworld: you're not making any sense.
[02:24] <lifeless> wallyworld: get linked bug tasks already chooses *precisely one task*
[02:24] <lifeless> wallyworld: the difference you are arguing for is precisely empty.
[02:24] <wallyworld> but a bug can have many associated bug tasks, no?
[02:24] <lifeless> yes, but thats unrelated to the discussion
[02:24] <lifeless> because getLinkedBugtasks never returns more than one task per found bug
[02:25] <wallyworld> just saw that now
[02:25] <lifeless> wallyworld: you *must* have a task to show importance, status, tags, context. Bugs on their own are approximately never interesting.
[02:25] <wallyworld> why is that?
[02:26] <wallyworld> why only return just the one task?
[02:26] <lifeless> wallyworld: because thats what bugtask:+index spent 500 queries figuring out.
[02:26] <lifeless> wallyworld: I was merely fixing the performance not the definition.
[02:27] <wallyworld> ok. fair enough. where in lp then can the user see all bug tasks associated with a branch?
[02:27] <wallyworld> and why is the method called getLinkedBugTasks if it only returns one?
[02:28] <lifeless> because it returns one task per bug - it returns many tasks
[02:28] <lifeless> just not many tasks per linked bug
[02:29] <wallyworld> of course, reading comprehension problem :-(
[02:29] <lifeless> there is nowhere in LP that you can see all the bug tasks associated with a branch. The model links to bug, but linking to bug isn't useful for the rest of the system.
[02:29] <lifeless> s/model/schema/
[02:29] <wallyworld> ok
[02:30] <lifeless> wallyworld: series branches only show bugtask for bugs which have an open bugtask; regular branches show a bugtask for all their bugs.
[02:30] <lifeless> wallyworld: whether this is right or wrong is a separate discussion IMO : I want to just highlight a few key things:
[02:31] <lifeless>  - everything about a bug except visibility, description, comments, attachments - is on bugtask
[02:32] <lifeless>  - our /convention/ throughout the system is to not show bugs, only bugtasks.
[02:32] <wallyworld> and also on bug are the associated branches from memory
[02:32] <lifeless> wallyworld: bugs, questions, specs and other things are indeed related by bu.
[02:32] <lifeless> *bug*
[02:33] <lifeless>  - we show bugtasks because we show status and importance. And we badge things with 'has branch', 'has spec' etc etc.
[02:33] <wallyworld> ok. there's currently functionality which constructs mp emails using bugs not bugtasks. i guess this needs changing too?
[02:33] <lifeless> wallyworld: case by case: we're talking web UI here.
[02:34] <lifeless> the web UI is different to email because:
[02:34] <lifeless>  - its live: folk expect data to be presented rather than clicking through
[02:34] <lifeless>  - email they expect it to be stale - to be given deltas rather than seeing it as it is right now
[02:35] <wallyworld> i ask because i think (from memory) the email stuff uses branch.linked_bugs and that needs to be removed
[02:36] <wallyworld> because that also exposes private bugs to email recipients
[02:37] <wallyworld> so it possibly needs to replaced with something using getLinkedBugTasks
[02:37] <wgrant> The codebrowse OOPSes have stopped!
[02:37] <StevenK> Surely not!
[02:38] <wgrant> Maybe it's just hung again :P
[02:39] <lifeless> wallyworld: right, I'd use getLinkedBugTasks
[02:39] <lifeless> wallyworld: you could use task.bug
[02:40] <lifeless> wallyworld: or you could show bug status as well in the email
[02:40] <wallyworld> lifeless: yes, i'm doing just that elsewhere atm
[02:40] <wallyworld> using task.bug i mean
[02:41] <wallyworld> have to look at performance though. hopefully any alternatives are no worse than using branch.linked_bugs
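[Editor's note: the one-task-per-bug behaviour of getLinkedBugTasks discussed above can be sketched in plain Python. The data shapes and function below are simplified stand-ins for illustration, not Launchpad's real models or code.]

```python
# Illustrative sketch of "one bugtask per linked bug": given all tasks
# linked to a branch, return many tasks, but never more than one per
# bug, and filter out bugs the viewer cannot see. The namedtuples are
# hypothetical stand-ins for Launchpad's real Bug/BugTask models.
from collections import namedtuple

Bug = namedtuple("Bug", "id private")
BugTask = namedtuple("BugTask", "bug target status")

def get_linked_bug_tasks(tasks, viewer_can_see):
    """Keep the first (assumed most relevant) task for each bug."""
    seen = set()
    result = []
    for task in tasks:
        if task.bug.id in seen or not viewer_can_see(task.bug):
            continue
        seen.add(task.bug.id)
        result.append(task)
    return result

bug1, bug2 = Bug(1, False), Bug(2, True)
tasks = [
    BugTask(bug1, "ubuntu", "Triaged"),
    BugTask(bug1, "debian", "New"),   # second task for bug1: dropped
    BugTask(bug2, "ubuntu", "New"),   # private bug: filtered out
]
visible = get_linked_bug_tasks(tasks, lambda bug: not bug.private)
# -> a single task: bug1's "ubuntu" task
```

This also illustrates why the method is well named despite the confusion above: it returns many tasks overall, just never more than one per linked bug.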
[02:42] <wallyworld> wgrant: saw the email about windmill. me sad. a windmill test of mine actually caught a legitimate bug so we do need those tests
[02:43] <StevenK> wallyworld: Which are being run on Jenkins
[02:43] <wgrant> wallyworld: Sure, we need to turn them back on at some point.
[02:43] <wgrant> But for now they are out of the critical path, but Jenkins will tell us if things go bad.
[02:43] <wallyworld> StevenK: yes but not as part of ec2 land hence we can now land bad code
[02:44] <wgrant> wallyworld: It was turned off for most of last year anyway.
[02:44] <wgrant> The number of issues that it catches is minuscule.
[02:44] <wallyworld> sure, we just got lucky
[02:44] <wgrant> We would be better served by finding them later and being able to fix them more quickly.
[02:44] <wgrant> Which is what this gives us.
[02:44] <wgrant> Because branches will no longer be delayed by 10 hours due to Windmill being shit.
[02:45] <wallyworld> well if we use more and more XHR then minuscule won't necessarily be true
[02:45] <wgrant> This is a short-term thing.
[02:45] <lifeless> huh
[02:45] <lifeless> qatagger is lying
[02:45] <wgrant> Eventually we will convince deryck to fix it.
[02:45] <wallyworld> wgrant: just wanted to hear that it is short term :-)
[02:45] <wgrant> lifeless: Oh?
[02:45] <lifeless> look at it now
[02:45] <lifeless> then file a bug :)
[02:45] <wgrant> You mean how it shows it as deployable?
[02:46] <wgrant> It has always done that.
[02:46] <lifeless> no
[02:46] <lifeless> I mean the 2nd line
[02:46] <lifeless> earlier today it was showing 12486
[02:46] <lifeless> which was correct
[02:46]  * wallyworld needs food
[02:46] <lifeless> now its showing 12488
[02:46] <lifeless> rather than 'nothing can be deployed'
[02:46] <wgrant> lifeless: That was probably before the rollback was in place.
[02:50] <lifeless> wgrant: no
[02:50] <lifeless> wgrant: it wasn't
[02:50] <lifeless> the /rev/ can be ok without it being shown deployable at the top
[02:50] <wgrant> Ah.
[02:51] <lifeless> wgrant: this is a previously unobserved corner case
[02:51] <lifeless> the rule for deployable is 'all the revs before are deployable, and any rollbacks for broken revs are included in the deploy'
[02:55] <lifeless> wgrant: https://bugs.launchpad.net/qa-tagger/+bug/727552
[02:55] <_mup_> Bug #727552: when first rev is bad, with a later rollback, qatagger incorrectly reports it deployable at the top <qa-tagger:Triaged> < https://launchpad.net/bugs/727552 >
[02:55] <wgrant> lifeless: Thanks.
[02:55] <lifeless> its probably an off by one
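[Editor's note: the deployable rule lifeless states above can be sketched as follows. The function name and data shapes are invented for illustration; this is not qa-tagger's actual code.]

```python
# Hedged sketch of the "deployable" rule: the tree is deployable up to
# rev N when every qa-bad rev <= N has a corresponding rollback <= N.
# Data shape (revno, status, rolls_back) is hypothetical.

def deployable_tip(revs):
    """revs: ordered (revno, status, rolls_back) tuples, where status
    is 'ok' or 'bad' and rolls_back names the bad revno a rollback
    undoes (or None). Returns the highest deployable revno, or None."""
    outstanding_bad = set()
    tip = None
    for revno, status, rolls_back in revs:
        if status == "bad":
            outstanding_bad.add(revno)
        if rolls_back in outstanding_bad:
            outstanding_bad.discard(rolls_back)
        if not outstanding_bad:
            tip = revno
    return tip

# The corner case from bug #727552: the *first* rev is bad and a later
# rev rolls it back, so nothing before the rollback is deployable.
history = [(12486, "bad", None), (12487, "ok", None), (12488, "ok", 12486)]
# deployable_tip(history) -> 12488; without the rollback rev -> None
```

Under this rule a report of "12488 deployable" before the rollback landed would indeed be wrong, consistent with the off-by-one suspicion above.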
[02:57]  * thumper hangs head
[02:57] <thumper> WTF...
[02:57] <LPCIBot> Project devel build #492: FAILURE in 6 hr 8 min: https://hudson.wedontsleep.org/job/devel/492/
[02:58] <wgrant> WTF @ Hudson
[02:59] <wgrant> But checkwatches.txt passed again...
[03:00] <wallyworld> what's the sop for deprecating something in the web services interface?
[03:00] <lifeless> we don't
[03:00] <thumper> getUtility(ILaunchBag).user.preferredemail.email inside the email handler
[03:00] <lifeless> if its in beta, leave it there.
[03:00] <wallyworld> even if it's a security/privacy flaw?
[03:00] <lifeless> if its in 1.0 leave it there.
[03:00] <lifeless> if its devel, just delete it from devel.
[03:01] <wgrant> wallyworld: What is?
[03:01] <thumper> there isn't a user in the launchbag for process mail
[03:01] <lifeless> wallyworld: change its behaviour
[03:01] <wallyworld> because we export branch.linked_bugs
[03:01] <wgrant> wallyworld: lazr.restful checks launchpad.View on each object in a collection before it returns it.
[03:01] <lifeless> wallyworld: just change it to only return visible bugs. But I fixed this already I thought.
[03:01] <thumper> it is handled by lazr.restful
[03:01] <thumper> we don't need to mess with it
[03:01] <wallyworld> currently it's still defined as an SQLRelatedJoin
[03:02] <lifeless> wallyworld: do you *think* its broken, or have you *shown* it
[03:02] <wallyworld> ok. didn't realise lazr.restful did some filtering
[03:02] <lifeless> wallyworld: it will be doing sucky late evaluation at the moment
[03:02] <lifeless> wallyworld: and the collection size will be wrong.
[03:02] <poolie> lifeless, i guess "7% headroom" comes from 13/14
[03:02] <lifeless> wallyworld: so its improvable, but it won't actually be disclosing
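[Editor's note: the point about lazr.restful's per-object permission check can be sketched like this. This is an illustrative plain-Python model, not lazr.restful's real code; `publish_collection` and `check_view` are invented names.]

```python
# Why late, per-object filtering keeps private items out of the
# response but leaves the advertised collection size wrong: the size
# is computed from the unfiltered query, while each entry is checked
# (the launchpad.View analogue here) lazily afterwards.

def publish_collection(objects, check_view):
    total = len(objects)                 # counted before filtering
    visible = [o for o in objects if check_view(o)]
    return {"total_size": total, "entries": visible}

bugs = [{"id": 1, "private": False}, {"id": 2, "private": True}]
page = publish_collection(bugs, lambda b: not b["private"])
# page["total_size"] is 2 even though only 1 entry is returned:
# nothing private is disclosed, but the size is off.
```

That is the "improvable, but not actually disclosing" situation described above: the private bug never reaches the client, yet the collection metadata betrays that something was filtered.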
[03:02] <lifeless> poolie: 1/14, yes.
[03:03] <lifeless> poolie: I realised the flaw right after I hit send.
[03:03] <poolie> but i don't really see how that corresponds to requests/day
[03:03] <poolie> oh ok
[03:03] <poolie> nice mail otherwise though
[03:04] <lifeless> poolie: we'll free up 1 second for every request that is > 13 seconds today.
[03:04] <lifeless> poolie: which is a smaller number. A thousand or so
[03:04]  * wallyworld still thinks we should be able to mark stuff as deprecated if required whether it be a security hole or implementation issue
[03:04] <lifeless> wallyworld: we can in extremis, but it needs to be last-resort
[03:05] <wallyworld> python, java etc all do it :-)
[03:05] <lifeless> wallyworld: this is why I don't want *any* new verbs / attributes or types in the 1.0 web service.
[03:05] <poolie> so you'll save ~701s of cpu time per day?
[03:05] <poolie> oh, a bit more than that if you also kill those in the >13s range
[03:05] <poolie> right
[03:05] <lifeless> wallyworld: we made a promise to not break 1.0 for the folk using it in shipped Ubuntu versions.
[03:05] <poolie> over 32 cores?
[03:05] <lifeless> poolie: the 701 is the >14 requests.
[03:06] <poolie> right, got that
[03:06] <poolie> so say 1000 requests, and you'll cut off one second each
[03:06] <lifeless> poolie: currently over 18 python processes.
[03:06] <wallyworld> so we could mark something as deprecated and then support it until the lts runs out
[03:06] <lifeless> wallyworld: there is no deprecation marker.
[03:06] <lifeless> wallyworld: we'd just remove it from devel.
[03:06] <poolie> so it's less than .001% more headroom
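[Editor's note: a back-of-envelope check of the figures traded above. All inputs (the 14-second cap, the ~701 daily requests over 14 s) are taken from the log itself, not independently verified.]

```python
# Lowering a 14 s request cap to 13 s frees at most 1/14 of the worst
# case per capped request -- the "7% headroom" figure -- and roughly
# 701 requests/day currently exceed 14 s, each saving about 1 second.
headroom = 1 / 14                  # fraction freed per capped request
seconds_freed_per_day = 701 * 1    # ~1 s saved on each >14 s request
print(f"{headroom:.1%} headroom, ~{seconds_freed_per_day} s/day freed")
```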
[03:06] <wallyworld> yes, and i'm suggesting we need such a marker :-)
[03:07] <lifeless> wallyworld: I think it would be counterproductive.
[03:07] <poolie> still worth doing
[03:07] <lifeless> wallyworld: 1.0 users cannot use the replacement.
[03:07] <lifeless> wallyworld: devel users don't need the old method anymore.
[03:08] <wallyworld> but in the general case, marking things as deprecated is a fairly common paradigm
[03:08] <lifeless> sure
[03:08] <lifeless> this isn't the general case
[03:08] <lifeless> its a webservice with fairly specific constraints
[03:09] <lifeless> one of which is that things work smoothly and seamlessly with the 1.0 version of the service.
[03:09] <lifeless> Separately, I don't want us carrying the debt of things we want to remove *in devel* until we delete every prior edition.
[03:09] <lifeless> we have enough debt.
[03:10] <wallyworld> sure, but we should still be able to work towards a 1.1 version, no? ie we need a story around how we evolve the interface
[03:10] <lifeless> wallyworld: we have one
[03:10] <lifeless> wallyworld: whats the problem you are trying to solve?
[03:11] <wallyworld> talking generalities. the problem of being able to mark certain things as "do not use, will disappear in a future version". give people time to migrate away
[03:11] <lifeless> wallyworld: but it doesn't for us.
[03:12] <wallyworld> this is more a pub discussion :-)
[03:12] <lifeless> wallyworld: the timescale impedance is immense.
[03:12] <lifeless> wallyworld: 5 years vs daily change.
[03:12] <lifeless> wallyworld: and new things *are not accessible* until you step over the version, at which point - unless we carry that *5 year* burden for *10 years* - the old stuff is flat gone.
[03:13] <wallyworld> yes, the timeframes are large
[03:14] <wallyworld> it would be interesting to know which apis were being actively used. i don't suppose we gather those sorts of metrics
[03:14] <lifeless> we have logs that can tell us reasonably well
[03:15] <wallyworld> moot point for now. i was more just thinking out loud. i have to stop doing that
[03:15] <lifeless> wallyworld: its fine - I do that too
[03:15] <lifeless> wallyworld: but expect to be engaged on such utterances ;)
[03:16] <wallyworld> :-)
[03:16] <lifeless> wallyworld: happens to me all the time - surely you've seen me go 'I have no real answer to hand' :)
[03:18] <wallyworld> yes :-)
[03:19] <lifeless> \o/ I now can start coding today.
[03:19] <lifeless> this is why I'm so unproductive, no cycles to write code
[03:20] <wallyworld> doesn't help you get dragged into various discussions all the time
[03:24] <jtv> You know your jet lag is bad when you catch yourself thinking that you want database sharting.
[03:30] <thumper> it seems to me that there should never be a user in the ILaunchBag during process-mail
[03:32] <lifeless> why not ?
[03:32] <lifeless> when the sender of the mail is identified we should setup a participation and security policy for them.
[03:33] <lifeless> I filed a bug about this.
[03:37] <StevenK> Rargh, and then devel #492 fails with _lock_actions garbage
[03:46] <wgrant> lifeless: https://code.launchpad.net/~wgrant/launchpad/unbreak-incoming-mail/+merge/51850
[03:46] <wgrant> Slow scan and diff generation are slow.
[03:55] <lifeless> is lib/lp/soyuz/tests/../stories/soyuz/xx-distributionsourcepackagerelease-pages.txt of any use
[03:56] <lifeless> or is it redundant with unit tests?
[04:02] <StevenK> lifeless: That doctest makes me sad
[04:02] <wgrant> StevenK: It's a doctest.
[04:02] <StevenK> Haha
[04:15] <lifeless> can I just delete soyuz ?
[04:15] <wgrant> Please.
[04:15] <wgrant> But delete checkwatches first.
[04:15] <StevenK> lifeless: Patches welcome
[04:15] <lifeless> sob
[04:16] <lifeless> why is default distroseries different to current
[04:16] <wgrant> Default distroseries?
[04:16] <lifeless>     - http://launchpad.dev/ubuntutest/+source/testing-dspr/1.0
[04:16] <lifeless>     + http://launchpad.dev/ubuntutest/hoary-test/+source/testing-dspr/1.0
[04:16] <lifeless> in that test
[04:17] <lifeless> after I tell getPubSource where to publish - the ubuntutest currentseries
[04:17] <wgrant> That's a DSPR wersus a DSSPR...
[04:17] <wgrant> Wow. versus.
[04:17] <lifeless>     >>> anon_browser.open(
[04:17] <lifeless>     ...     'http://launchpad.dev/ubuntutest/+source/testing-dspr')
[04:18] <lifeless>     >>> anon_browser.getLink('1.0').click()
[04:18] <lifeless> are the lead in lines
[04:18] <lifeless>     >>> print anon_browser.url
[04:20] <wgrant> getLink('1.0') is a little ambiguous.
[04:21] <lifeless> wgrant: I'd really like to just delete the test :)
[04:31] <lifeless> wgrant: so
[04:31] <lifeless> you think there are multiple 1.0 links ?
[04:32] <wgrant> lifeless: Quite possibly.
[04:32] <wgrant> Can't check details right now though, sorry.
[04:57] <lifeless> wgrant: argh
[04:58] <wgrant> Oh?
[04:58] <lifeless> wgrant: I *think* I may have found a reason for distro.getCurrentSourceReleases.
[04:58] <lifeless> wgrant: perhaps not a good reason
[04:58] <lifeless> wgrant: /ubuntu/+source/packagedeletedinseries-1
[04:59] <wgrant> lifeless: Latest upload?
[04:59] <lifeless> yeah
[05:07] <lifeless> what about dropping 'latest upload' as uninteresting
[05:10] <wgrant> Component, Maintainer and Architectures are potentially useful.
[05:10] <wgrant> Urgency and Latest upload are not.
[05:15] <lifeless> sob
[05:15] <lifeless> maintainer etc all come from 'current'
[05:15] <lifeless> so, does this need to handle packages never published in 'currentseries' ?
[05:15]  * lifeless thinks
[05:16] <wgrant> It would be nice. But it's not really critical.
[05:16] <lifeless> I'm going to shelve the branch
[05:16] <lifeless> its taking up too much time
[05:16] <wgrant> I say use the current series' publications, otherwise say it's not published.
[05:17] <lifeless> makes sense to me too
[05:17] <lifeless> but the perf win for this is nowhere near big enough to justify the time investment
[05:19] <lifeless> https://code.launchpad.net/~lifeless/launchpad/bug-279513/+merge/51063 - abandoned
[05:37] <lifeless> anyone know - does storm sniff tables in SQL(...) ?
[05:39] <lifeless> whats a good test class for Archive:getPublishedSources?
[05:48] <lifeless> wgrant: ^
[05:49] <lifeless> wgrant: also, soyuz permissions - are they lists of attributes, or just-add-to-interface-and-its-done ?
[05:50] <wgrant> lifeless: You may need to create a new class in test_archive.
[05:50] <wgrant> lifeless: Most of Archive is tested in archive.txt.
[05:50] <wgrant> lifeless: As for permissions, it depends on the class.
[05:50] <wgrant> You'll have to check configure.zcml.
[05:59] <lifeless> wgrant: thanks
[06:01] <stub> lifeless: No, SQL is essentially a string. Storm doesn't know how to parse SQL - just generate it.
[06:01] <lifeless> stub: thanks
[06:01] <lifeless> I'll use a using clause
[06:03] <lifeless> whats the easiest way to find what a Archive:EntryResource:getPublishedSources?ws.op=getPublishedSources looks like given that its timing out
[06:04] <wgrant> You want to know the data from a specific URL that is timing out?
[06:04] <wgrant> (also, QA party)
[06:05] <lifeless> wgrant: yes
[06:05] <lifeless> wgrant: we're qa'd ?
[06:06] <wgrant> No, it's time for a QA party to QA all the stuff that is now on qas.
[06:06] <lifeless> wgrant: ah
[06:06] <lifeless> I'll do mine straight up
[06:07] <lifeless> !!! frell
[06:07] <wgrant> ?
[06:07] <wgrant> Still broken?
[06:07] <lifeless> TypeError
[06:07] <lifeless> just getting a traceback
[06:07] <lifeless> ('Could not adapt', <lp.code.model.seriessourcepackagebranch.SeriesSourcePackageBranch object at 0xaefff50>, <InterfaceClass lp.code.interfaces.linkedbranch.ICanHasLinkedBranch>)
[06:07] <wgrant> Hah.
[06:07] <wgrant> I suggest rolling both back.
[06:08] <wgrant> ... and adding tests.
[06:08] <lifeless> doit
[06:08] <wgrant> Me, or you?
[06:08] <lifeless> you, if you would; its 7pm here and I'm about to get snatched for dinner
[06:08] <wgrant> k
[06:09] <lifeless> I landed this by accident if you recall :(
[06:10] <wgrant> But it passed buildbot
[06:10] <lifeless> yes
[06:10] <lifeless> so ec2 would not have helper.
[06:10] <lifeless> *helped*
[06:10] <lifeless> I'm just saying its cursed.
[06:10] <wgrant> Heh.
[06:14] <wgrant> In PQM.
[06:15] <lifeless> thanks
[06:23] <wgrant> OK, how do I delete launchpadlib's credentials?
[06:23] <wgrant> I am trying to log in as one of my alternate SSO accounts on qas.
[06:24] <wgrant> But I can't work out how to remove my system authorization.
[06:24] <wgrant> Hm, maybe my system launchpadlib still uses GNOME Keyring.
[06:25] <wgrant> Ah, there.
[06:26] <lifeless> so, we get 800 hits a day on /branches
[06:27] <lifeless> > 800 - 800 soft timeouts
[06:27] <lifeless> so probably 0.02% of traffic or so, if we wanted to let it die for a day.
[06:27] <lifeless> just a thought
[06:28] <wgrant> It should be through buildbot in 7 hoursish.
[06:29] <wgrant> And there is no QA required after what is there now.
[06:29] <wgrant> So, in the likely event that nobody requests one overnight, we can do it in the morning.
[06:31] <wgrant> wallyworld: I like your fix for the AJAX build request.
[06:32] <wallyworld> wgrant: great. it seems to work better now
[06:32] <wgrant> Assuming that this revert works, we are QA'd up.
[06:35] <LPCIBot> Project windmill build #2: FAILURE in 1 hr 9 min: https://hudson.wedontsleep.org/job/windmill/2/
[06:35] <wgrant> My point exactly.
[06:37] <wgrant> wallyworld: Could you check that test? It passes locally, but this probably indicates that it has a race.
[06:37] <lifeless> grr
[06:37] <lifeless> TacException: Error running ['/usr/bin/python2.6', '-Wignore::DeprecationWarning', '/home/robertc/launchpad/lp-branches/working/bin/twistd', '-o', '-y', '/home/robertc/launchpad/lp-branches/working/daemons/librarian.tac', '--pidfile', '/tmp/tmpZckiy9/librarian.pid', '--logfile', '/tmp/tmpZckiy9/librarian.log']: unclean stdout/err: No handlers could be found for logger "QueueProcessorThread"
[06:37] <wgrant> Oh, it's setting daily_build, not requesting a build.
[06:37] <wgrant> So not your fault :P
[06:37] <LPCIBot> Project db-devel build #408: STILL FAILING in 2 min 18 sec: https://hudson.wedontsleep.org/job/db-devel/408/
[06:38] <wgrant> bzr: ERROR: http://bazaar.launchpad.net/~launchpad-pqm/launchpad/db-devel/.bzr/repository/packs/09fb66d3dfb9df78e2bb1ca7b6c7aac9.pack is redirected to https://launchpad.net
[06:38] <wgrant> Not again :/
[06:38] <lifeless> and poolie is gone
[06:38] <wallyworld> wgrant:  i don't think that test is one of mine. i would have to look to be sure
[06:38] <wgrant> lifeless: It doesn't appear to have been during a push.
[06:39] <lifeless> ok
[06:39] <lifeless> something solidly else then
[06:39] <lifeless> the redirect is berk
[06:39] <wgrant> PQM has been checking my branch for 25 minutes.
[06:39] <lifeless> speak of the man
[06:39] <wgrant> Shouldn't push for another couple.
[06:39] <wgrant> It always used to redirect on 404, if branch-rewrite failed.
[06:39] <wgrant> Not sure if it still does.
[06:40] <wgrant> It doesn't.
[06:40] <wgrant> poolie: https://hudson.wedontsleep.org/job/db-devel/408/
[06:40] <wgrant> Another pack redirect.
[06:40] <wgrant> But PQM wasn't pushing at the time.
[06:40] <lifeless> well
[06:40] <stub> lifeless: Just a guess, but shouldn't that command line be bin/py or python2.6 -S for all the buildout deps and environment config to be pulled in?
[06:41] <lifeless> stub: bin/twistd is a similar alias
[06:41] <poolie> hi wgrant
[06:41] <lifeless> stub: that wraps /usr/bin/twistd with the local python etc etc
[06:41] <stub> yup. ic.
[06:41] <lifeless> the fail is the 'no handlers' message
[06:42] <stub> lifeless: We silence some loggers in lib/lp_sitecustomize.py if that is appropriate here.
[06:42] <poolie> wgrant, well, the big thing is to just work out what the redirect message is, and who is sending it
[06:42] <poolie> but i'll reopen the bug
[06:43] <lifeless> stub: I haven't looked
[06:44] <wgrant> poolie: I don't think devpad has codehosting HTTP logs :(
[06:45] <lifeless> stub: it may be a message we want - won't know until we dig
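[Editor's note: the failure above is the Python 2 "No handlers could be found for logger ..." warning leaking to stderr and making the twistd output unclean. The conventional fix, sketched below with the stdlib only, is to attach a NullHandler to the named logger; whether this particular message should be silenced or surfaced is exactly the open question in the exchange above.]

```python
# Attach a NullHandler so a stray record from this logger is consumed
# instead of triggering the no-handlers warning (Python 2 printed that
# warning to stderr, which is what broke the TacException check).
import logging

logging.getLogger("QueueProcessorThread").addHandler(logging.NullHandler())

# A warning from the logger is now swallowed silently: the NullHandler
# counts as a configured handler, so logging's fallback never fires.
logging.getLogger("QueueProcessorThread").warning("queue idle")
```

In Launchpad's case the natural home for such a line would be the logger-silencing code stub mentions in lib/lp_sitecustomize.py.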
[06:45] <stub> wtf do I feel like I have a hangover? Maybe I should have gone out last night and at least had a reason?
[06:45] <lifeless> wgrant: for crowberry ?
[06:45] <lifeless> stub: cause you slept in :)
[06:45] <wgrant> lifeless: Yes.
[06:45] <stub> lifeless: Getting better. Awake at noon today. Might even make our call tomorrow :-P
[06:45] <LPCIBot> Project db-devel build #409: STILL FAILING in 0.47 sec: https://hudson.wedontsleep.org/job/db-devel/409/
[06:45] <poolie> are there logs at all for this?
[06:45] <wgrant> Er, still?
[06:46] <wgrant> Ah, looks like the slave needs a good killing.
[06:47] <wgrant> poolie: I presume there are logs from Apache...
[06:48] <poolie> i discovered that's not a reliable kind of presumption
[06:48] <wgrant> I know we don't keep haproxy logs.
[06:48] <jam> Hi poolie, funny to see you online when I'm waking up :)
[06:49] <wgrant> Morning jam.
[06:49] <wgrant> jam: You'll be pleased to know that loggerhead has stopped OOPSing.
[06:49] <jam> \o/
[06:49] <jam> wgrant: what was up ?
[06:49] <wgrant> Thanks for fixing that.
[06:49] <jam> Or that is the 16k oops / day thing?
[06:49] <wgrant> jam: The 16000 daily socket errors.
[06:49] <poolie> :)
[06:50] <poolie> congratulations on waking up
[06:50] <jam> there are more fixes in loggerhead trunk, to handle various bits of that correctly all the way through the stack
[06:50] <poolie> at what seems like a reasonable time in that tz
[06:50] <poolie> as it happens i'm just answering your mail
[06:50] <jam> poolie: Still getting my head around the area, and setting up K with the new schools, etc. So I'll probably only be around a half-day today
[06:50] <wgrant> jam: Although when QAing it I found bug #726985
[06:50] <_mup_> Bug #726985: codebrowse OOPSes with GeneratorExit when connection closed early <Launchpad itself:Triaged> < https://launchpad.net/bugs/726985 >
[06:50] <jam> but yeah, jet lag doesn't seem to be too bad for me this time
[06:50] <jam> my son isn't quite sure about the new TZ, though.
[06:50] <wgrant> Heh.
[06:51] <stub> lifeless: So we only gain using pgbouncer for appserver connections. At the moment, at any point in time we have 160 connections to the master db. With pgbouncer, that will usually be sitting at 40-50 (about half the main connections are active at any point in time, only < 10 of the session connections are active at any point in time - they run in autocommit and normally are touched only at the start of an appserver request).
[06:52] <jam> wgrant: seems pretty easy to fix up. I'm curious who is killing the generator, or how it is falling out of the stack early, etc.
[06:52] <jam> I think there is already a StopIteration check, probably fine to add GeneratorExit
[06:52] <wgrant> jam: Yeah, I think so.
[06:52] <wgrant> It's not OOPSing on prod, at least.
[06:52] <jam> right
[06:52] <stub> hmm... can use it too for librarian, codehosting and friends too. I think everything but the cronjobs.
[06:52] <jam> I think the issue is just that *any* exception is getting logged
[06:52] <jam> even the ones that don't mean much.
[06:52] <jam> like GeneratorExit means we exited early
[06:53] <jam> but is likely to not actually matter
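jam's point above — that GeneratorExit usually just means the consumer hung up early, not that something broke — can be sketched in plain Python (illustrative names only; this is not Loggerhead's actual code):

```python
errors = []

def body_chunks(chunks):
    """Yield response chunks; record only genuinely unexpected failures."""
    try:
        for chunk in chunks:
            yield chunk
    except GeneratorExit:
        # Consumer closed the connection early: expected, don't log it.
        raise
    except Exception:
        errors.append("unexpected failure")
        raise

gen = body_chunks(["a", "b", "c"])
assert next(gen) == "a"   # serve one chunk
gen.close()               # early close raises GeneratorExit at the yield
print(errors)             # -> []
```

Re-raising from the GeneratorExit handler lets `close()` complete normally while keeping the error-logging path for real exceptions.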
[06:54] <jam> btw, poolie, welcome back from your bike ride, sounds like you had a good time
[06:54] <lifeless> stub: we only have 18*4 = 72 appserver threads
[06:55] <lifeless> stub: so 1/2 active would be a reduction of 32 on that 160 ?
[06:56] <stub> lpnet1-8,lpnet11-14 each using 4 threads. lpnet15 using 32.
[06:56] <deryck> Morning, all.
[06:57] <stub> I'm not even counting edge actually - guess I should be here.
[06:57] <wgrant> Morning deryck.
[06:57] <lifeless> stub: 15 is getting reconfigured
[06:57] <stub> Looks like lpnet15 is a bogus config
[06:57] <lifeless> stub: yes, along with 9 and 10 that are not currently running
[06:58] <wgrant> deryck: I have some questions about writing Windmill tests properly, if you have a moment.
[06:58] <lifeless> we have 13 lpnet servers, 5 edge
[06:58] <lifeless> for 18*4 once lpnet15 is fixed
[06:58] <deryck> wgrant, I think the sprint I'm in will start shortly.  But ask away and I'll respond as I can.
[06:58] <wgrant> deryck: Ah, didn't realise you were at one.
[06:58] <lifeless> stub: will be going up to 80 with single-threaded appservers (but we'll actually have two threads per appserver to permit opstats queries)
[06:58] <deryck> wgrant, yes, in South Africa.  Ensemble sprint.
[06:58] <stub> lifeless: So if the 50% active connections figure holds, it could be a win. If you are correct about GIL contention and stuff, the activity will grow. I need to decide if it is enough of a win anyway to bother.
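The thread arithmetic being discussed (server and thread counts are from the conversation; the 50% activity ratio is stub's estimate):

```python
# Appserver connection arithmetic from the discussion above.
appservers = 18                 # 13 lpnet + 5 edge, per lifeless
threads_per_appserver = 4
main_threads = appservers * threads_per_appserver
active_fraction = 0.5           # stub's estimate of active connections
active_main = int(main_threads * active_fraction)
print(main_threads, active_main)  # -> 72 36
```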
[06:59] <deryck> wgrant, this is why I haven't looked at Windmill this week yet. ;)
[06:59] <lifeless> stub: I'd be inclined to move ahead with it, if only because we can more easily kick everyone off
[06:59] <lifeless> stub: though you were saying we had cross transaction temp tables?
[07:00] <wgrant> deryck: Looking at lib/lp/code/windmill/tests/test_recipe_index.py, test_inline_recipe_daily_build just failed on our Windmill-specific Jenkins job.
[07:00] <wgrant> deryck: It is a race of some sort.
[07:00] <wgrant> deryck: The lack of a wait between the getUserClient and click looks very suspicious to me.
[07:00] <wgrant> Should there be one there?
[07:00] <LPCIBot> Project db-devel build #410: STILL FAILING in 0.49 sec: https://hudson.wedontsleep.org/job/db-devel/410/
[07:00] <wgrant> I don't really know how much waiting a page load does.
[07:00]  * deryck looks
[07:01] <wgrant> StevenK: Could you please kill/clean the db-devel Jenkins slave?
[07:02] <deryck> wgrant, what line in the test?  I don't see a getUserClient call.  do you mean the getClientFor stuff?
[07:02] <wgrant> deryck: https://hudson.wedontsleep.org/job/windmill/lastCompletedBuild/testReport/ has the failure.
[07:02] <stub> lifeless: Yes. A pgbouncer instance runs in a single pooling mode. Session or Transaction interest us. Transaction would save us idle database connections, but doesn't support cross transaction resources. Session doesn't change our utilization, so we would be trading some extra connection control for crappier authentication and extra complexity.
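For readers unfamiliar with the trade-off stub describes: pgbouncer's pooling mode is a per-instance setting. A hypothetical configuration (host, database and pool sizes are illustrative, not Launchpad's real values):

```ini
; pool_mode = transaction releases a server connection at commit/rollback,
; which is what frees idle connections -- but it breaks cross-transaction
; state such as temp tables, WITH HOLD cursors and prepared statements.
; pool_mode = session keeps one server connection per client session.
[databases]
launchpad_main = host=db-master port=5432 dbname=launchpad_prod

[pgbouncer]
pool_mode = transaction
max_client_conn = 200
default_pool_size = 40
```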
[07:02] <wgrant> deryck: Er, getClientFor, yes.
[07:02] <deryck> ok
[07:02] <StevenK> wgrant: i-3EDE07B4
[07:02] <StevenK> ?
[07:02] <wgrant> deryck: It loads the page then immediately tries to click on a JS widget.
[07:02] <wgrant> deryck: Which sounds prone to races.
[07:03] <lifeless> stub: So I think what I care about is perf and reliability
[07:03] <lifeless> stub: if we won't gain measurable perf, and will cost reliability, let's defer it.
[07:03] <wgrant> StevenK: Yes.
[07:03] <lifeless> stub: we can work on getting rid of those tables and cursors later.
[07:03] <deryck> wgrant, there's a waits.forPageLoad in the getClientFor method before it returns.  So I think it's fine to click immediately after the method.
[07:03] <stub> I'm looking at pgpool-ii to see if it gives us better options.
[07:03] <wgrant> deryck: :(
[07:04] <lifeless> stub: cool
[07:04] <lifeless> stub: go with what makes the most sense to you
[07:04] <wgrant> deryck: So that waits for all the JS to finish?
[07:04] <stub> I don't think we want to drop temporary tables and cursors entirely - they are the only sane way of doing some bulk operations. The rosetta scripts make use of them extensively.
[07:05] <deryck> wgrant, sorry, sprint starting.  will look again when I can.
[07:05] <wgrant> deryck: Thanks, no rush.
[07:05] <wgrant> It's not blocking anything any more.
[07:05] <deryck> wgrant, but yes, the js should be loaded and done by that method return.
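Windmill specifics aside, the generic shape of such a wait — poll a readiness condition with a timeout rather than clicking immediately — is simple. A hedged sketch, not Windmill's actual implementation:

```python
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll condition() until it returns true or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

state = {"js_ready": False}
state["js_ready"] = True                     # pretend the page finished loading
print(wait_for(lambda: state["js_ready"]))   # -> True
```

Clicking only after `wait_for(...)` returns True is what closes the race wgrant suspects.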
[07:06] <lifeless> stub: they aren't crash proof though
[07:06] <lifeless> stub: I think if we make a crash proof implementation for those scripts, we wouldn't need cursors / cross transaction temp tables.
[07:07] <StevenK> wgrant: Done. Why?
[07:07] <poolie> lifeless, re bug heat, perhaps it would be useful to try to gauge how many users actually find it useful?
[07:07] <wgrant> StevenK: It failed to check out the branch due to an odd pack redirect.
[07:07] <StevenK> AGAIN?!
[07:07] <wgrant> Yes.
[07:07] <poolie> i think it could be potentially useful but it never actually seems useful
[07:08] <lifeless> poolie: the distro do use it
[07:08] <poolie> and like it?
[07:08] <poolie> that's good to know
[07:08] <lifeless> poolie: I think we should finish and polish it before assessing whether it's good or not.
[07:08] <lifeless> like many things it's been done just-enough, rather than made excellent
[07:08] <poolie> hm
[07:08] <poolie> that doesn't seem like a very general algorithm
[07:09] <poolie> but, i think it's worth working out whether the problems are essential or just bugs
[07:09] <stub> lifeless: The pattern is to prepare the data into a holding area and then perform bulk operations in multiple transactions with small batches.
[07:09] <lifeless> stub: yup, thats what I thought it was.
[07:10] <lifeless> stub: the crash proofness I refer to is the assumption [or fact] that identifying the work is expensive.
[07:10] <stub>  I think you are proposing the holding area be a real table rather than a temporary one, which is doable but a little more effort (db patch to create the table, extra code to clean out the holding area when necessary)
[07:10] <lifeless> stub: for instance, yes. alternatively make identifying the remaining work cheap.
[07:12] <stub> lifeless: Even when identifying the remaining work is cheap, it is still more expensive than doing it upfront. Garbo and the librarian-gc have made use of pretty much every possible technique.
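The holding-area pattern stub describes — identify the work once up front, then apply it in small batches with a commit between each, so a crash loses at most one batch — looks roughly like this (names and callbacks are illustrative):

```python
def process_in_batches(work_ids, apply_batch, commit, batch_size=3):
    """Apply work in small batches, committing after each one."""
    done = 0
    for start in range(0, len(work_ids), batch_size):
        batch = work_ids[start:start + batch_size]
        apply_batch(batch)   # e.g. one bulk UPDATE/DELETE over the batch
        commit()             # transaction boundary: a crash loses <= 1 batch
        done += len(batch)
    return done

applied, commits = [], []
n = process_in_batches(list(range(7)), applied.extend,
                       lambda: commits.append(True))
print(n, len(commits))  # -> 7 3
```

If the holding area is a real table rather than a temporary one (as mooted above), a restarted run can resume from whatever rows remain.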
[07:13] <lifeless> do storm StringColumns support a .like() call for queries?
[07:13] <lifeless> e.g. foo.like('%bnar%')
[07:13] <stub> I think they do, or there is a LIKE function.
[07:14] <lifeless> stub: I think it depends on the operation whether upfront checking is needed, is all I'm saying
[07:14] <lifeless> stub: I recognise the need for variety.
[07:14] <stub> yes. For our current use cases, I think we could minimize their use if we wanted to, but I don't think it would be sane to eliminate them altogether.
[07:15] <lifeless> stub: if we *were* to eliminate them, we could:
[07:15] <lifeless>  - parallelise more easily
[07:15] <lifeless>  - kill the tasks during db deploys more easily
[07:15] <lifeless>  - use pgbouncer more easily
[07:15] <lifeless> it's just a thought
[07:16] <stub> storm.expr has a Like binary operator. Don't know about foo.like() syntax.
[07:17] <stub> yup
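stub and lifeless settle that storm.expr has a Like operator. Independent of Storm's API, SQL LIKE's matching semantics ('%' matches any run of characters, '_' any single character) can be sketched in pure Python:

```python
import re

def sql_like(pattern, value):
    """Approximate SQL LIKE matching: '%' -> '.*', '_' -> '.'."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.fullmatch("".join(parts), value) is not None

print(sql_like("%bnar%", "foobnarbaz"))  # -> True
print(sql_like("b_r", "bar"))            # -> True
print(sql_like("b_r", "baar"))           # -> False
```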
[07:41] <LPCIBot> Project devel build #493: STILL FAILING in 4 hr 43 min: https://hudson.wedontsleep.org/job/devel/493/
[07:42] <wgrant> That's not good.
[07:43] <lifeless> does IntColumn.__eq__(Enum) work ?
[07:44] <wgrant> I don't know, but it would certainly make you a bad person.
[07:44] <wgrant> Why?
[07:49] <lifeless> wgrant: migrating Archive.getPublishedSources to storm so I can DRS it
[07:52] <poolie> who has permission to mark a branch as an official source package branch?
[07:52] <lifeless> the tb
[07:55] <poolie> because they're the owner of the distro series it's going into?
[08:04] <wgrant> lifeless: I think you just need ~ubuntu-branches membership.
[08:04] <lifeless> Because they are in the special group created by jml's initial work
[08:04] <wgrant> Celebrities, yay.
[08:05] <lifeless> and that group is meant to only have the TB in it now
[08:05] <lifeless> because it effectively controls upload rights
[08:05] <lifeless> we should indeed move that to a right granted by owning the distro
[08:05] <poolie> ok
[08:05] <poolie> so allowing them to only push the button if they also own the branch would fix this?
[08:05] <wgrant> We also need to delete bazaar-experts.
[08:06] <poolie> it would avoid them shooting themselves in the foot by accidentally trusting someone else's branch
[08:07] <lifeless> poolie: uh
[08:07] <lifeless> poolie: I am missing a lot of context here I presume
[08:07] <lifeless> what button, what problem, what accidental trust ?
[08:07] <poolie> nm
[08:07] <poolie> i sent mail
[08:07] <poolie> bug 516709
[08:07] <_mup_> Bug #516709: revisit official package branch permissions <launchpad> <lp-code> <Launchpad itself:Triaged> <Ubuntu Distributed Development:Invalid> < https://launchpad.net/bugs/516709 >
[08:08] <wgrant> poolie: Owner == Maintainer
[08:08] <poolie> not urgent
[08:08] <wgrant> And that is shown on Distribution:+index.
[08:09] <poolie> ok
[08:09] <poolie> there's still a bug as to how you're supposed to know owner==maintainer
[08:09] <wgrant> Pillars don't have the "owner" term exposed in the UI. It's called "Maintainer" instead (and "Registered by" is the owner for both of those objects, but that is a bug)
[08:10] <wgrant> Where have you seen a reference to a distribution owner?
[08:10] <wgrant> Or a project owner?
[08:10] <poolie> robert's comment in the thread about that bug, and the api
[08:12] <lifeless> poolie: it's not clear what you are trying to optimise for in that discussion
[08:13] <poolie> > Lets aim for 'excellent' not 'least work' (while still aiming for 'done soon').
[08:13] <poolie> that's kind of vague
[08:13] <poolie> who could disagree?
[08:13] <lifeless> well
[08:13] <poolie> but it's not very helpful in working out what is actually simplest
[08:13] <lifeless> simplest to implement? use? describe?
[08:14] <poolie> any
[08:14] <lifeless> I'm essentially getting confused
[08:15] <lifeless> I'm not sure if you're trying to redesign the distro branches feature [which isn't fully implemented, and the design calls for the permissions to override - that's why there is a bug open for that]
[08:15] <lifeless> or something else
[08:16] <poolie> i'm trying to clarify what if anything needs to be done for https://bugs.launchpad.net/udd/+bug/516709
[08:16] <_mup_> Bug #516709: revisit official package branch permissions <launchpad> <lp-code> <Launchpad itself:Triaged> <Ubuntu Distributed Development:Invalid> < https://launchpad.net/bugs/516709 >
[08:16] <lifeless> poolie: we need to ensure the identity statement that only uploaders to the distro can upload to the distro
[08:16] <poolie> sure
[08:17] <poolie> there are various possibilities to get there
[08:17] <lifeless> what are your constraints
[08:17] <lifeless> what are your requirements, and the requirements of your stakeholders?
[08:18] <poolie> i think the only hard requirement is, just as you said, that only package uploaders should be able to write to these branches
[08:18] <poolie> afaict that is in fact already met
[08:18] <lifeless> well, its not.
[08:18] <poolie> then there are two soft requirements
[08:18] <poolie> 1- it should not be too easy to make mistakes that open up access-
[08:18] <lifeless> because the permissions | together rather than excluding.
[08:18] <poolie> 2- it should not complicate lp's access control user model any more than it is at present
[08:19] <poolie> which permissions?
[08:19] <lifeless> poolie: I'm going to put on my paranoid hat and say that 1 is not a soft requirement.
[08:19] <lifeless> poolie: it's a *hard* requirement that it be effectively impossible to open up access.
[08:19] <lifeless> poolie: this has been well established over the years of UDS discussions.
[08:19] <poolie> what i mean is it is not a black and white issue
[08:19] <poolie> how easy is 'too easy'?
[08:20] <lifeless> if it's possible to have something look like it's distro uploader only, it must be distro uploader only.
[08:20] <poolie> well, that's really my second point, that the user model should be understandable
[08:20] <lifeless> I'd rather for 2) we say 'we want it to be easy to understand and use'
[08:21] <poolie> sure, i agree with that too, i'm just trying to be more specific
[08:21] <poolie> ok
[08:23] <lifeless> poolie: so we have a trivial-to-code, tolerable solution mooted in the bug
[08:23] <lifeless> poolie: I assert that something needs to be done because we don't satisfy the identity requirement today.
[08:23] <poolie> it has the drawback that we will then have branches owned by say ~mbp that can't be written by ~mbp
[08:23] <lifeless> right
[08:24] <poolie> i think this does poorly on transparency and understandability
[08:24] <lifeless> if you want to incrementally improve things, and aim for shortest path to tolerable source uploads from bzr
[08:24] <lifeless> then I think you should accept that we already have 'branches owned by ~mbp that ~ubuntu-devel can write to'
[08:24] <lifeless> this is neither much worse, nor much better than the status quo.
[08:25] <poolie> i didn't know we could already have such branches
[08:25] <poolie> where?
[08:25] <poolie> but this is on the path to bzr source uploads
[08:25] <poolie> because having them be adequately secure is important
[08:25] <lifeless> poolie: if we bless, for instance, ~mbp/bzr/packaging as lp:ubuntu/bzr
[08:26] <lifeless> then ubuntu-devel get write access to ~mbp/bzr/packaging
[08:26] <poolie> right, and i keep it
[08:26] <poolie> that's what i think 516709 is about
[08:26] <lifeless> yes
[08:26] <poolie> is blessing existing branches actually useful?
[08:26] <lifeless> I can't tell if we're violently agreeing or what
[08:26] <lifeless> poolie: unless you rework the feature from the ground up, it's necessary.
[08:27] <lifeless> I'd love to see it reworked.
[08:27] <poolie> it seems to me, from this thread, that we should get away from blessing existing branches
[08:27] <poolie> why?
[08:27] <poolie> i mean, why would it require redoing the feature?
[08:27] <lifeless> because there is *no* lp:ubuntu/bzr branch - at all, and no space for it.
[08:27] <lifeless> lp:ubuntu/bzr is *defined* as being an alias.
[08:29] <poolie> so, given the thread so far, it seemed like it would be cleanest to make it an alias to a branch owned by, say, ~techboard or something similar
[08:29] <lifeless> that will still require lp changes
[08:29] <lifeless> and will need care to accommodate linaro etc
[08:29] <lifeless> who have a different governance structure
[08:29] <lifeless> [but the distro permissions wouldn't need reworking, if we go with 'distro permissions are all that matter for blessed branches']
[08:30] <poolie> hm
[08:30] <poolie> good point about linaro
[08:31] <poolie> however, i don't think that making the branches owned by the series owner should be a problem
[08:31] <wgrant> (I'm pretty sure that source package branches need a complete rethink for when we want to support multiple distributions. More complete than you suggest.)
[08:31] <lifeless> wgrant: Possibly.
[08:31] <poolie> that just effectively formalizes that the only  way to them is through the distro permissions
[08:31] <lifeless> poolie: this sets off all sorts of alarm bells
[08:31] <LPCIBot> Yippie, build fixed!
[08:31] <LPCIBot> Project windmill build #3: FIXED in 1 hr 9 min: https://hudson.wedontsleep.org/job/windmill/3/
[08:32] <lifeless> poolie: and I really don't want to spend a lot of time painting the bikeshed
[08:32] <poolie> sure
[08:32] <poolie> and it's late
[08:32] <lifeless> poolie: the requirement we were given was 'match the distro permissions'.
[08:32] <lifeless> I can see two routes to that:
[08:32] <lifeless>  - genuine *distro* branches that are not aliases.
[08:32] <lifeless>  - *completely* overriding the permissions when blessed.
[08:33] <poolie> those both have some problems
[08:33] <lifeless> I've seen no proposals that meet the requirement other than those two. The former is a moderate amount of work. The latter is going to be confusing if the feature is misused but easy to do and secure.
[08:33] <poolie> the first is apparently a fair amount of work; the second complicates the already hellacious security model
[08:34] <lifeless> the incremental complication is near-zero.
[08:34] <lifeless> Because the elephant is inside the room already.
[08:34] <poolie> i'm confused because i think i'm proposing exactly what you proposed on 18/2
[08:34] <lifeless> it's fine to argue we shouldn't arbitrarily bless branches. But that's orthogonal to meeting the requirement.
[08:34] <poolie> > I'd make *that* owner for these branches the owner
[08:34] <poolie> of the distro series the branch is for, not a celebrity.
[08:35] <lifeless> poolie: you've elided the context
[08:35] <lifeless> 'We probably want an owner in the same sense that a team has an owner:
[08:35] <lifeless> someone that has administrative privilege over the thing but no direct
[08:35] <lifeless> access to the content of the thing'
[08:36] <poolie> sure
[08:36] <poolie> anyhow, it's late
[08:37] <lifeless> that is, people that are members of distro.maintainer can say 'this is the branch' or 'thats the branch' but cannot write to the content of the branch unless they separately have upload permissions
[08:37] <lifeless> this is part of doing genuine distro branches
[08:37] <poolie> i'm trying to work out a path to unblock the problems identified in <https://bugs.launchpad.net/udd/+bug/516709/comments/8> with the existing branch
[08:37] <_mup_> Bug #516709: revisit official package branch permissions <launchpad> <lp-code> <Launchpad itself:Triaged> <Ubuntu Distributed Development:Invalid> < https://launchpad.net/bugs/516709 >
[08:37] <poolie> i don't want to bake into the user model the idea that branches owned by random users can be blessed
[08:37] <lifeless> poolie: it's already there
[08:38] <lifeless> I appreciate the motivation, but *it's already there*
[08:38] <lifeless> poolie: conflating 'fix this previous design shortcut' and 'meet the stakeholder requirements' will push your critical path way back.
[08:39] <poolie> hm
[08:39] <poolie> ok, good night
[08:39] <lifeless> heh
[08:39] <poolie> i'll think about it
[08:39] <lifeless> poolie: I'm in town in the weekend
[08:40] <lifeless> which reminds me, I need to mail folk and say 'hi, meetup?'
[08:41] <wgrant> jam: Around?
[08:43] <poolie> lifeless, so essentially you're disagreeing with jml's assertion in #8 that the usability consequences need to be fixed at the same time?
[08:43] <poolie> you may be right that that will be faster
[08:43] <lifeless> yes
[08:43] <lifeless> it's a bit like your thing on reviews
[08:44] <lifeless> that asking for more work at the same time often just stalls things.
[08:44] <poolie> true
[08:44] <lifeless> designing a better solution than we collectively came up with the first time around is possible.
[08:45] <lifeless> But it's going to be more work than finishing our initial design.
[08:45] <poolie> yeah
[08:46] <poolie> i wonder which people own official branches at the moment and how they use them
[08:46] <poolie> does anyone rely on having separate access through that, or does anyone have it but not need it
[08:46] <poolie> it may not actually cause a problem
[08:48] <poolie> lifeless, this feels like the kind of thing where lp puts in a poor user model, and then spends more time explaining and supporting it than it would have taken to just finish it properly in the first place
[08:48] <lifeless> poolie: quite possibly. it's a 4 year old design
[08:49] <lifeless> poolie: the design was optimised for delivery not explainability or usability.
[08:49] <lifeless> like many things at the time.
[08:51] <poolie> ok, perhaps you're right we should split them, then separately get away from blessing existing branches
[08:52] <poolie> is that in fact what you're saying?
[08:53] <lifeless> yes
[08:53] <lifeless> I'm not trying to defend blessing at all
[08:53] <lifeless> it's fugly
[08:55] <adeuring> good morning
[08:56] <poolie> ok, i see
[08:56] <poolie> well, really good night then
[09:07] <bigjools> morning all
[09:07] <wgrant> Morning bigjools.
[09:07] <wgrant> P-a-s makes me cry.
[09:08] <maxb> it does that to most people :-)
[09:08] <mrevell> Hi
[09:11] <bigjools> wgrant: I'm sure a boy of your calibre can fix it
[09:11] <wgrant> Hah.
[09:27] <wgrant> lifeless: How do I push a config overlay onto the FS?
[09:27] <lifeless> wgrant: see layers.py
[09:27] <wgrant> Thanks.
[09:35] <bigjools> wgrant: so, sparc builders
[09:35] <wgrant> bigjools: We worked out the problem.
[09:36] <bigjools> wgrant: yeah, I'm interested to hear details, steve said it was a corrupt bq record?
[09:36] <wgrant> bigjools: https://wiki.canonical.com/IncidentReports/2011-03-02-LP-build-with-wrong-status
[09:36] <wgrant> No BQ.
[09:36] <wgrant> But the BFJ was BUILDING, when it should have been FAILEDTOBUILD.
[09:37] <wgrant> Very similar to Bug #717969
[09:37] <_mup_> Bug #717969: storeBuildInfo is sometimes ineffective <soyuz-build> <Launchpad itself:Triaged> < https://launchpad.net/bugs/717969 >
[09:37] <wgrant> Except the opposite.
[09:37] <StevenK> Oh, yes, sorry, a BFJ record
[09:38] <bigjools> wgrant: the report doesn't say *why* the build candidates were not being considered
[09:38] <wgrant> bigjools: The 80% rule.
[09:38] <wgrant> Due to the 80% PPA rule, this phantom build prevented further sparc builds in that PPA from being dispatched.
[09:38] <bigjools> which PPA?
[09:38] <wgrant> It's in the description!
[09:39] <wgrant> https://launchpad.net/~ubuntu-mozilla-security/+archive/ppa
[09:39] <bigjools> urgh
[09:39] <bigjools> ok I need to teach you how to write incident reports
[09:39] <bigjools> :)
[09:39] <wgrant> Yes.
[09:39] <wgrant> this was all written well afterwards, so it was sort of not very well done.
[09:40] <bigjools> the other bug is that nonvirtual PPAs should not be doing the 80% check
[09:40] <wgrant> Indeed.
[09:40] <bigjools> so the sBI bug is serious
[09:41] <wgrant> I have seen it only three times. But yes, it is rather serious.
[09:41] <bigjools> I concur that there's a missing commit
[09:41] <wgrant> NFI where, though.
[09:41] <bigjools> but what's worse is that we have a problem with transaction boundaries
[09:41] <wgrant> Oh?
[09:41] <bigjools> in that the boundary is wrong, if we can get in an unrecoverable situation
[09:41] <bigjools> the b-m should have seen the build and fixed it
[09:42] <wgrant> Yeah.
[09:42] <bigjools> the first step is a minimal test case to re-create it
[09:43] <wgrant> But the first step of that is working out what has gone wrong.
[09:43] <bigjools> no, it's not
[09:44] <bigjools> it's working out a test case that re-creates it :)
[09:44] <wgrant> How do you propose to write a test case when we don't know how to reproduce it?
[09:44] <wgrant> Well, yes.
[09:44] <bigjools> "what has gone wrong" is not the same as "how to reproduce it"
[09:44] <bigjools> but we're on the same wavelength, so no worries
[09:55] <bigjools> wgrant: need to talk to you at some point about folding the SoyuzTestPublisher into the LaunchpadObjectFactory
[09:56] <bigjools> the stuff the LOF does is completely wrong in some cases
[09:57] <wgrant> bigjools: james_w had a branch to do that at one point.
[09:57] <wgrant> And I've wanted to get that landed for a long time.
[09:57] <wgrant> I occasionally improve STP when writing tests... but some of it is just so wrong that it needs a big rethink.
[09:57] <bigjools> yes
[09:57] <bigjools> I want to deprecate STP
[09:57] <bigjools> but the LOF needs fixing, it creates bad data
[09:58] <wgrant> Yup.
[09:58] <wgrant> Still, not data that is as bad as the sampledata :)

[10:07] <danilos> stub, hi, I'd like to create a (constraint) trigger that stops a row from being removed if it's the last one for a foreign key
[10:08] <danilos> stub, I have https://pastebin.canonical.com/44140/ so far: I am using a subselect to be able to add a LIMIT 1, and COUNT() to ensure I get a value in the variable ("IF FOUND" didn't work for me)
[10:08] <wgrant> henninge: Hi.
[10:08] <henninge> Hi wgrant!
[10:08] <stub> a foreign key references a single row...
[10:08] <danilos> stub, also, one thing I am worried about is race conditions
[10:08] <wgrant> henninge: Did you land that sharing information thing just now?
[10:08] <danilos> stub, right, but I might have multiple rows that reference that same single row in the other table
[10:08] <henninge> wgrant: yes
[10:08] <danilos> stub, basically 1 StructuralSubscription -> n BugSubscriptionFilters
[10:08] <henninge> wgrant: PQM was *very* quick.
[10:09] <wgrant> henninge: How long did it take from when you submitted it?
[10:09] <wgrant> henninge: It should have been less than a minute.
[10:09] <danilos> stub, BugSubscriptionFilter.structuralsubscription is what references StructuralSubscription
[10:09] <henninge> wgrant: seemed like it
[10:09] <wgrant> henninge: Great, thanks.
[10:09] <stub> danilos: So normally, we would just allow the removal and have a garbo job clear out the lost rows.
[10:09] <henninge> wgrant: did you do that? Great work!
[10:09] <henninge> thanks a lot
[10:09] <danilos> stub, oh, I want to stop removal of the final row, to avoid race conditions
[10:10] <danilos> stub, iow, I stop the removal in python, but it's potentially easy to cause a race condition and still remove it
[10:11] <stub> And you want to trigger an OOPS instead?
[10:11] <wgrant> henninge: Yeah, previously we ran buildout and then bin/test to run a test that ran one line of shell... now we just do that directly in PQM.
[10:11] <wgrant> It's a little bit faster.
[10:11] <henninge> just a tiny bit ... ;)
[10:11] <danilos> stub, will that happen if delete removes 0 rows?
[10:12] <danilos> stub, (and even so, I'd prefer that)
[10:12] <danilos> stub, or, I could allow removal and then insert another row, would that be better?
[10:12] <wgrant> I don't understand why you want to prevent removal.
[10:13] <danilos> wgrant, so python code can assume that there is always at least one BugSubscriptionFilter for every StructuralSubscription (that's what we have already agreed on as the model)
[10:13] <stub> danilos: The DELETE trigger can allow or block the deletion. I don't think it can make it fail silently or remove a different row. That would be lying.
[10:13] <wgrant> danilos: Right, but that doesn't really explain why :)
[10:15] <danilos> wgrant, well, to clean up the semantics, allow easier linking etc (i.e. use just bugsubscriptionfilter for linking instead of having to have complex logic for choosing structuralsubscription or bugsubscriptionfilter)
[10:15] <danilos> wgrant, today, you can have "no filtering" in two different ways: SS with no BSFs, or SS with BSF with no filters
[10:15] <danilos> wgrant, we don't like the duality
[10:16] <wgrant> Ah, right, forgot that you wanted to link them to notifications.
[10:16] <stub> I don't see why a structuralsubscription must have >= 1 bugsubscriptionfilter's myself.
[10:17] <danilos> stub, right, the trigger I have so far seems to give me the right behaviour, I am just wondering if that's the optimal or good way to do it?
[10:17] <danilos> stub, see above, though we can argue if they are strong enough reasons
[10:18] <stub> danilos: It doesn't work if two transactions delete the last two referencing rows in separate transactions at the same time.
[10:18] <stub> Both sessions think there is another reference, both succeed, both rows are removed.
[10:19] <danilos> stub, that's what I was wondering; I see that one can have a CONSTRAINT TRIGGER to happen at the end of transaction, which would work better, but that one can only be an AFTER trigger, meaning it can't stop deletion (naturally, since everything else in the transaction might be messed up)
[10:19] <danilos> stub, also, would it be possible to explicitly lock rows that I care about?
[10:19] <stub> You're just shuffling the race condition around
[10:19] <stub> Yes, SELECT FOR UPDATE will do that.
[10:19] <stub> But this is a complex way of proceeding. Ideally we find a non-complex way of solving the problem.
[10:20] <danilos> stub, agreed re shuffling race condition around, but having more safeguards in place reduces the chances of it happening, and ideally, I'd be avoiding it completely
[10:20] <danilos> stub, any suggestions on a non-complex way? (not that I see this as overly complex)
[10:21] <stub> We just keep outerjoining with the table and not fighting the model SQL gives us.
[10:22] <danilos> stub, that's something I'd have to (re)discuss with my team, since we are generally relying on this already in a few places; if it's not too many places, perhaps we can go back
[10:25] <danilos> stub, one of the reasons we are going this way is to allow easier merging of structuralsubscription and bugsubscriptionfilter tables in the future
[10:26] <wgrant> bigjools: Do you want to review https://code.launchpad.net/~wgrant/launchpad/generalise-add-missing-builds/+merge/51873, or should I grab someone else?
[10:26] <stub> fwiw, there is nothing in the data model preventing a bug from having no linked bugtasks, but that hasn't been a problem.
[10:27] <wgrant> bigjools: Also, is there an existing wiki page on adding new archs, or should I create one?
[10:27] <bigjools> wgrant: I can do it in a bit
[10:27] <wgrant> I searched around earlier, but found nothing :/
[10:28] <wgrant> Which I guess is unsurprising, given the questionable results of the last two new ones :)
[10:28] <bigjools> wgrant: feel free to start one
[10:34] <danilos> stub, right, so your suggestion is to just ignore the potential race condition? (in our particular case, it is very unlikely, but if we don't put any safeguards in, a malicious person will be able to do it, especially if we allow API access)
[10:34] <stub> danilos: So to do this. In Python, you do 'SELECT * FROM BugSubscriptionFilter FOR UPDATE', then check there is at least one other link. If there is, proceed with the deletion.
[10:35] <stub> If not, render an error message
[10:35] <stub> If two transactions try that at the same time, one will block until the other commits, then a serialization exception is raised and Zope retries the request
[10:36] <stub> danilos: Doing it in the Python level means you catch the case with a nice error
[10:36] <danilos> stub, right, the reason I wanted this as the trigger is because I ran into a place where someone was not using the appropriate Python API, but instead removed a row directly using store.find(...).remove()
[10:36] <danilos> stub, true, agreed
[10:36] <stub> danilos: The trigger would need to duplicate this logic, but it will cause an OOPS.
[10:37] <danilos> stub, right, but if we guard the proper API in Python, this would indicate a bug, which is exactly what we want :) though, I can't make it work in the trigger
[10:38] <stub> danilos: I'd ignore the race condition - do you really care if there is a structuralsubscription with no bugsubscription? Malicious isn't an issue - you can only shoot yourself in the food.
[10:38] <danilos> stub, do you know how to "discard results of a SELECT using PERFORM"?
[10:38] <danilos> stub, right, but I hate to see food flying around :)
[10:38] <danilos> stub, and you are definitely right, we can cause problems for ourselves in many more different ways, so I'll just guard against it in the python code
[10:38] <danilos> stub, thanks
[10:39] <stub> danilos: Yes, but the fallout from all the extra complexity you are adding is likely worse than the problem.
[10:40] <stub> PERFORM is a pl/pgSQL statement so only useful in a stored procedure. Syntax similar to SELECT but it throws the results away.
[10:40] <danilos> stub, I assume that last comment is re trigger implementation?
[10:41] <stub> It depends on how contrived your race condition is.
[10:43] <stub> If you have a form """( ) bugsubscription a (x) bugsubscription b""", it is reasonable to assume someone clicks 'delete', thinks 'oh shit', changes the check box and clicks submit again. So catching this in Python is reasonable.
[10:43] <stub> If you need to contrive an example with two browser windows opened to the correct page and the user submitting both at once, you might not want to bother.
[10:43] <danilos> stub, atm it's the latter, but the UI is not settled yet
[10:44] <stub> I certainly wouldn't bother catching the case of the form submission being processed at the same time as some background script running that does the same removal (esp. as I doubt we would have a background script performing that removal)
[10:45] <danilos> stub, I can imagine the UI becoming what you described first, and it seems it's not hard to guard against it with SELECT FOR UPDATE
[10:47] <danilos> stub, ok, and just for fun, the trigger works with PERFORM FOR UPDATE, so something I learned today :)
[10:49] <stub> Right. Should be able to simplify it too. Once you have locked all the rows referencing the structural subscription, you only care if there is a result from SELECT TRUE FROM BugSubscriptionFilter WHERE structuralsubscription = OLD.structuralsubscription AND id <> OLD.id
[10:50] <stub> I think you can lie and silently not do the deletion, but I think it is better to raise an exception here.
[11:00] <danilos> stub, ah, SELECT TRUE trick is a very nice one
[11:00] <danilos> stub, is it worth doing a LIMIT on that query as well? (just for my education, it seems it is :)
[11:00] <stub> danilos: That wouldn't lock all the rows then
[11:01] <stub> Oh... the check if the result exists. Yes.
[11:01] <danilos> stub, ah, so I would put the FOR UPDATE in this query as well?
[11:01] <stub> Thought you meant on the FOR UPDATE, which should not have a LIMIT :)
[11:01] <danilos> stub, right :)
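For reference, the trigger-side variant danilos got working with PERFORM ... FOR UPDATE might look roughly like this. This is a sketch assembled from the discussion (table, column, and function names are illustrative), not the actual Launchpad trigger:

```sql
-- Hypothetical BEFORE DELETE trigger function on BugSubscriptionFilter.
-- PERFORM runs the query and discards the rows (it is the pl/pgSQL way
-- to execute a SELECT for its side effects); FOR UPDATE locks the
-- sibling rows so a concurrent deleter blocks until we commit.
CREATE OR REPLACE FUNCTION bugsubscriptionfilter_guard_delete()
RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    PERFORM TRUE FROM BugSubscriptionFilter
        WHERE structuralsubscription = OLD.structuralsubscription
        FOR UPDATE;
    IF NOT EXISTS (
        SELECT TRUE FROM BugSubscriptionFilter
            WHERE structuralsubscription = OLD.structuralsubscription
                AND id <> OLD.id) THEN
        -- stub's preference: raise rather than silently skip the delete.
        RAISE EXCEPTION 'cannot delete the last BugSubscriptionFilter';
    END IF;
    RETURN OLD;
END;
$$;
```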
[11:27] <bigjools> wgrant: DistroSeries:+queue just hove into view on the oops reports
[11:27] <bigjools> I hate that page
[11:27] <wgrant> bigjools: Yes. :(
[11:27] <bigjools> I remember when it used to have 3000+ queries on it.  I was quite chuffed when I got it to ~600
[11:27] <bigjools> that looks big now :/
[11:28] <wgrant> A lot of it is the same problem as +copy-packages: createMissingBuilds is shit.
[11:28] <bigjools> there's also a massive problem when there are many packages with custom files
[11:28] <wgrant> Yeah.
[11:28] <bigjools> I only optimised binary and source uploads, custom packages are still bad
[11:29] <wgrant> It's fairly easy to optimise now that we're allowed to do caching on the model.
[11:29] <wgrant> GETs, that is.
[11:29] <wgrant> POSTs are still hard.
[11:29] <bigjools> we can redirect
[11:29] <bigjools> I think it does that already
[11:29] <wgrant> We already do. Most of the work is skipped on POST.
[11:30] <wgrant> *Most*.
[11:30] <bigjools> :)
[12:19] <Daviey> bigjools, I /just/ noticed bug #610491 is fixed... but I can't get it to work... getPublishedSources(package_signer=IPerson)... i would have thought would return a packageset?
[12:19] <_mup_> Bug #610491: [API] Please expose getPublishedSources(package_creator,package_signer) <api> <lp-soyuz> <Launchpad itself:Triaged> < https://launchpad.net/bugs/610491 >
[12:20] <bigjools> Daviey: why do you think it was fixed?
[12:21] <bigjools> getPublishedSources returns publications, not packagesets BTW
[12:22] <Daviey> bigjools, ah yeah... that is what i want
[12:23] <bigjools> we can add the filtering as per the bug, but it's low priority
[12:23] <danilos> StevenK, hi, are you still awake? :)
[12:23] <bigjools> I'm sure everyone wants a fast, non-oopsing Launchpad first :)
[12:26] <Daviey> bigjools, i thought it was fixed because i confused it with bug #372704
[12:26] <_mup_> Bug #372704: expose Signed-by and Changed-by via API <api> <lp-soyuz> <Launchpad itself:Fix Released by julian-edwards> < https://launchpad.net/bugs/372704 >
[12:26] <bigjools> heh
[12:27] <Daviey> ETOOMANYSIMILARBUGS! :)
[12:35] <jml> My laptop is busted.
[12:36] <StevenK> danilos: Hai?
[12:36] <danilos> StevenK, you were listed as OCR, I've removed you thinking you might be done with that
[12:37] <StevenK> danilos: It's 1130pm, I so am :-)
[12:37] <danilos> StevenK, ok, cool, enjoy the night then :)
[12:39] <bigjools> jml: that sucks
[12:41] <wgrant> jam: Hi.
[12:42] <jam> hi wgrant
[12:42] <wgrant> jam: We have another chance to roll out the forking service next week. Do you think we should try?
[12:43] <jam> wgrant: I would like to land one more patch, but I think I can make it by next week
[12:43] <jam> https://code.launchpad.net/~jameinel/launchpad/lp-serve-child-hangup/+merge/50055
[12:43] <jam> but poolie approved it, pending switching to signal.alarm()
[12:43] <wgrant> jam: Right, I've seen that sitting around.
[12:43] <wgrant> I think I would prefer an explicit handler.
[12:43] <jam> well, it seems approved from the conversation, I'll certainly put it up again for review
[12:44] <jam> wgrant: something to handle SIGALRM?
[12:44] <wgrant> Yeah.
[12:44] <wgrant> But I guess it doesn't really matter.
[12:45] <jam> wgrant: it makes the testing easier
[12:45] <jam> since otherwise I have to worry about it when testing that specific code
[12:45] <wgrant> True.
[12:45] <jam> I'll think about it
[12:45] <jam> I can certainly just install a handler for the test itself
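jam's branch itself isn't shown here, but installing an explicit SIGALRM handler inside a test, as wgrant suggests, looks roughly like this (POSIX only; `setitimer` is used so the test waits 50 ms rather than the whole-second granularity of `signal.alarm()`):

```python
import signal
import time

fired = []

def on_alarm(signum, frame):
    # Explicit handler: the test can assert it ran, instead of relying
    # on the default SIGALRM action (which terminates the process).
    fired.append(signum)

old_handler = signal.signal(signal.SIGALRM, on_alarm)
try:
    signal.setitimer(signal.ITIMER_REAL, 0.05)  # one-shot alarm in 50 ms
    time.sleep(0.2)  # long enough for the signal to arrive mid-sleep
finally:
    # Restore the previous handler so the test leaves no global state.
    signal.signal(signal.SIGALRM, old_handler)

print(fired)
```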
[12:48] <wgrant> Thanks.
[12:49] <jam> wgrant: any idea why ^C in the middle of the test cancels the current test with OK, rather than generating a traceback and failure where it is blocked?
[12:49] <jam> I think this changed with testtools
[12:49] <jam> maybe jml ^^ is better to ask (though prob not in this timezone :)
[12:50] <jml> jam: it's complex. depends on the test.
[12:50] <jam> jml: I've seen it on just about all test runs I've tried
[12:58] <jam> wgrant: do we have a way to tell that a process, which is not a direct child, has exited?
[12:59] <wgrant> jam: "We" meaning victims of POSIX? I don't think so.
[13:00] <jml> poll ps :P
[13:00] <jam> jml: yeah, that's what I thought was required. which... I'm not doing. But thanks for confirming
[13:01] <jml> jam: I guess you could do something like upstart
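jml's "poll ps" can be done without spawning ps at all: on POSIX, `kill(pid, 0)` sends no signal but still performs the existence and permission checks, so it works for processes that are not our children. A sketch (you would still have to poll it in a loop to notice *when* the process exits):

```python
import errno
import os

def pid_alive(pid):
    """Return True if a process with this PID currently exists (POSIX).

    Signal 0 delivers nothing; the kernel only does the error checking,
    so this works even when pid is not a child of ours.
    """
    try:
        os.kill(pid, 0)
    except OSError as e:
        if e.errno == errno.ESRCH:   # no such process
            return False
        if e.errno == errno.EPERM:   # exists, but owned by another user
            return True
        raise
    return True

print(pid_alive(os.getpid()))  # our own process certainly exists
```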
[13:02]  * danilos -> food
[13:10] <jam> wgrant: https://code.launchpad.net/~jameinel/launchpad/lp-serve-child-hangup/+merge/51893
[13:10] <jam> if you want to give it a look over
[13:10] <jam> the only other thing I can think of, is wanting to have the full TIMESTAMP and PID in log files, rather than mutter's default of time-since-started
[13:10] <jam> but that doesn't seem like a blocker for rollout
[13:12] <wgrant> jam: Thanks. I'll have a look tomorrow.
[13:34] <leonardr> hey, benji, i'd like to talk about the way in which exceptions in launchpad are turned into http responses in lazr.restful
[13:35] <leonardr> the more i look at this, the more it looks like two systems that do the same thing
[13:35] <leonardr> for instance, you filed bug 631711
[13:35] <_mup_> Bug #631711: Ability to specify HTTP result code when using lazr.restful.error.expose <lazr.restful:Triaged> < https://launchpad.net/bugs/631711 >
[13:35] <leonardr> but we have that--it's the webservice_error() decorator
[13:35] <leonardr> but the webservice_error() decorator on its own doesn't work
[13:36] <leonardr> you have to call expose() on the exception object as well
[13:36] <leonardr> and i can't change webservice_error() to call expose() on the exception class, because expose() is for objects
[13:36] <leonardr> so you have to change it in two places--call webservice_error() on the class and expose() on the object
[13:36] <leonardr> this has been bothering people for a while and i think i'll do something about it
[13:37] <leonardr> but, i'd like to check in with you since it looks like you are responsible for the other system
[13:38] <benji> leonardr: I'm on a call at the moment.  I'll be off in about 5 minutes.
[13:38] <leonardr> sure
[13:43] <LPCIBot> Yippie, build fixed!
[13:43] <LPCIBot> Project devel build #494: FIXED in 5 hr 11 min: https://hudson.wedontsleep.org/job/devel/494/
[13:46] <benji> gary_poster: thanks, we'll need it; I'm almost certain we're upside down, but there's always the option of a short sale
[13:46] <LPCIBot> Project db-devel build #411: STILL FAILING in 6 hr 4 min: https://hudson.wedontsleep.org/job/db-devel/411/
[13:46] <benji> gary_poster: thanks for the pointer, it's hard to find a good agent
[13:46] <gary_poster> benji, ack, and following you around on the channels is exciting ;-)
[13:46] <benji> LOL
[13:47] <benji> ok, I really need to do something about that
[13:47]  * benji starts a local chapter of Channel Hoppers Anonymous
[13:48] <benji> leonardr: the root difference is that one is about exposing exception classes and the other is about exposing exception instances (especially instances of built-in exceptions)
[13:49] <leonardr> right
[13:49] <leonardr> the problem is that on its own, exposing an exception class does absolutely nothing
[13:49] <leonardr> you also have to expose the instance
[13:49] <benji> oh, you do?  I thought that if the framework caught an instance of an exposed exception class then it would handle it specially
[13:50] <leonardr> no, it doesn't
[13:50] <leonardr> i don't know if it used to and doesn't anymore, or what
[13:50] <benji> hmm
[13:50] <leonardr> ideally there would be one method that you could call within a class definition, or on a class, or on an instance
[13:50] <benji> two courses of action suggest themselves: make exposing an exception class actually do something, or do away with the class-based mechanism altogether
[13:53] <leonardr> the class-based mechanism is very useful because you can take care of the whole thing in one place
[13:55] <leonardr> the reason you filed that bug is because expose() only works for exception classes that give a 400 error code
[13:55] <leonardr> so... let me try to come up with a solution
[14:02] <abentley> henninge: standup?
[14:02] <henninge> abentley, adeuring: Can we do skype?
[14:02] <abentley> henninge: sure.
[14:02] <adeuring> sure
[14:03]  * adeuring needs to start a machine where skype is installed. 2 minutes, please
[14:29] <leonardr> benji: argh, we actually have _three_ systems, only one of which works
[14:30] <benji> leonardr: yow
[14:30] <leonardr> but two of the systems are alternate spellings
[14:42] <leonardr> benji: do you know of any reason why it shouldn't be allowable to call expose() on an exception __class__ (or do the equivalent--basically, force a class to implement a "error handled specially by the web service" interface
[14:44] <benji> leonardr: that seems reasonable; I'd make it a class decorator (as well as a normal function) so you could use it either way
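The class-decorator-plus-instance-function shape benji suggests (and the `@error_status(400)` spelling leonardr floats later in this log) can be sketched like this. The annotation attribute name is the one mentioned in the discussion; this is an illustration, not lazr.restful's actual implementation:

```python
def error_status(status):
    """Class decorator: annotate an exception class with an HTTP status."""
    def decorate(cls):
        setattr(cls, "__lazr_webservice_error", status)
        return cls
    return decorate

def expose(exception, status=400):
    """Annotate a single exception *instance* the same way."""
    setattr(exception, "__lazr_webservice_error", status)
    return exception

@error_status(404)
class NoSuchThing(Exception):
    pass

def status_of(exc):
    # One lookup serves both spellings: an instance annotation shadows
    # the class annotation, and unannotated exceptions default to 500.
    return getattr(exc, "__lazr_webservice_error", 500)

print(status_of(NoSuchThing("gone")))             # class annotation
print(status_of(expose(ValueError("bad"), 403)))  # instance annotation
print(status_of(RuntimeError("boom")))            # default: server error
```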
[15:00] <bigjools> sinzui_: will your connection stay up long enough to answer a question? :)
[15:01] <sinzui_> doubtful. unity cannot decide whether to show the menu in the menu bar or the window. My windows are shaking with anger
[15:02] <bigjools> sinzui_: well let me try. rvba is having problems using create_initialized_view in a test with a principal of None
[15:02] <sinzui_> he is rendering content?
[15:02] <bigjools> https://pastebin.canonical.com/44154/ causes https://pastebin.canonical.com/44155/
[15:02] <bigjools> yes
[15:03] <rvba> sinzui_: calling render on the view triggers a LocationError: 'name_link'
[15:03] <rvba> sinzui_: (Hi by the way ;-))
[15:03] <sinzui_> He needs to pass the principal because menu links and login often need to get the user from the interaction
[15:04] <bigjools> what about an anonymous user?
[15:05] <sinzui_> Most Lp pages will not render with an anonymous user. The interaction is not setup right with anon
[15:05] <bigjools> hah
[15:05] <rvba> sinzui_: ok
[15:05] <bigjools> didn't know that
[15:05] <sinzui_> he can pass the arg like
[15:05] <bigjools> I guess I was lucky to find a page that does when I've done this
[15:05] <sinzui_> principal=distro.owner to hack the case
[15:05] <rvba> sinzui_: login(someuser) won't do?
[15:05] <sinzui_> the login() will still work for permission
[15:06] <sinzui_> no
[15:06] <bigjools> rvba: login() is for the zope interaction
[15:06] <sinzui_> rvba, We prefer login_person() so that we have the person to pass as the principal
[15:07] <rvba> sinzui_: got it
[15:08] <rvba> bigjools: I thought every security check was "zope interaction"
[15:08] <bigjools> rvba: yes, logging in sets up the user for the interaction
[15:09] <rvba> bigjools: what is the part that is *not* zope interaction?
[15:09] <leonardr> benji: see if this makes sense to you: http://pastebin.ubuntu.com/574501/
[15:09]  * benji looks
[15:09] <bigjools> rvba: anything that's not using database objects, views and utilities generally
[15:10] <bigjools> so not much :)
[15:10] <rvba> bigjools: ok, good to know
[15:11] <rvba> sinzui_: thanks for the clarification
[15:13] <benji> leonardr: that sounds like it accurately describes the status quo
[15:13] <leonardr> ok, now to figure out a better system
[15:13] <leonardr> i'm kind of thinking of a system in which we do a named operation based on __lazr_webservice_error
[15:14] <leonardr> er, a named adapter lookup
[15:14] <leonardr> since the way we describe this class of exception is by saying what response code we want to send instead of 500
[15:14] <leonardr> so if you could do @error_status(400) on an exception class, or expose(exception, 400) on an instance
[15:14] <leonardr> lazr.restful would do a named lookup adapting to IWebServiceErrorView, '400', and you would get a ClientErrorView
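Reduced to its essentials, the named-lookup idea is a registry keyed by status-code name with a generic fallback. A plain-dict sketch (the real thing would be a zope.component named adapter lookup; the view class names here just echo the discussion):

```python
# Stand-in for a named adapter registry keyed on IWebServiceErrorView.
class WebServiceExceptionView:
    status = 500  # default: server error, becomes an OOPS

class ClientErrorView(WebServiceExceptionView):
    status = 400

VIEW_REGISTRY = {"400": ClientErrorView, "500": WebServiceExceptionView}

def view_for(exc):
    # The annotation (however it was applied) names the lookup; fall
    # back to the generic view when nothing is registered by that name.
    name = str(getattr(exc, "__lazr_webservice_error", 500))
    return VIEW_REGISTRY.get(name, WebServiceExceptionView)

err = ValueError("bad input")
err.__lazr_webservice_error = 400
print(view_for(err).__name__)          # a client-error view
print(view_for(KeyError()).__name__)   # unannotated: the generic view
```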
[15:57] <danilos> abentley, hi, do you mind taking a look at https://code.launchpad.net/~danilo/launchpad/devel-bug-720826-delete-race/+merge/51890?
[15:58] <abentley> danilos: sure.  OTP.
[15:58] <danilos> abentley, great, thanks, I'll be around for discussion as well
[16:14] <leonardr> benji: what do you think of this? http://pastebin.ubuntu.com/574528/
[16:14] <leonardr> (skip to "a better system")
[16:15] <abentley> danilos: back.
[16:15] <danilos> abentley, generally, the branch is quite simple; few things I have on the top of my head: perhaps I should split the delete_final test into a test-per-attribute, though I am not sure about that; second, I am not testing the race condition handling
[16:16] <danilos> "I have on the top of my head" is an interesting way to put it, though :)
[16:18] <abentley> danilos: I haven't seen "FOR UPDATE" before.  Is that why you didn't use Storm expressions?
[16:18] <danilos> abentley, yeah
[16:18] <danilos> abentley, that's how one locks rows in Postgres, I just learned that today from stub :)
[16:19] <abentley> danilos: cool way of verifying that an object is deleted.  I've been grabbing the ID and then querying for it.
[16:20] <danilos> abentley, this should handle race conditions where someone tries to delete it from a different place, the check succeeds in both, and then you try the delete; this way, one of them gets locked out until the check and delete are done
[16:21] <abentley> danilos: I meant using self.assertIs(None, Store.of(bug_subscription_filter))
[16:21] <danilos> abentley, ah, right :)
[16:22] <abentley> danilos: what is getDescriptionFor?
[16:23] <danilos> abentley, oh, it returns you things like Choice(blahblah) object from the interface; I've looked it up using help(Interface)
[16:24] <danilos> abentley, perhaps I could say it returns an attribute "descriptor"
[16:24] <abentley> danilos: Okay, cool.
[16:26] <abentley> danilos: if I were writing it, I'd probably look up the default on a separate line, just to avoid clutter.  But that's a matter of personal taste.
[16:26] <abentley> danilos: r=me.
[16:27] <danilos> abentley, a thought did occur to me as well, so since it's not just me, I'll do it :)
[16:30] <danilos> abentley, and thanks!
[16:30] <abentley> danilos: np.
[17:29] <lifeless> Ursinha: matsubara-lunch: whats the landing process for qatagger ?
[17:30] <Ursinha> lifeless, it's all explained on the README file (I believe)
[17:30] <Ursinha> lifeless, wait, let me find the wiki page
[17:31] <lifeless> Ursinha: its not :)
[17:31] <lifeless> Ursinha: I have this branch diogo approved, I would like to land it
[17:31] <Ursinha> lifeless, first what I do is to merge it on trunk
[17:31] <Ursinha> after that, a make dist
[17:32] <Ursinha> I pick the package and copy over to devpad, untar it, make install
[17:32] <Ursinha> and create a qa link for that folder
[17:32] <Ursinha> change the cron entry and watch
[17:32] <Ursinha> ah, I copy the files last-revno-* from current to qa folder
[17:32] <leonardr> benji: got some time for more brainstorming?
[17:33] <Ursinha> and create a link for logs and current-milestone
[17:33] <Ursinha> I'm sure that's explained somewhere, I guess a wiki page I wrote
[17:33] <Ursinha> just a moment
[17:33] <Ursinha> lifeless, did you manage to get access to lpqateam?
[17:33] <Ursinha> on devpad, that is
[17:33] <lifeless> Ursinha: haven't moved that forward
[17:33] <lifeless> Ursinha: thats *deploy*
[17:34] <lifeless> Ursinha: how do I get my change into trunk ?
[17:34] <Ursinha> ah
[17:34] <Ursinha> for me that's land :)
[17:34] <lifeless> Ursinha: I like it :)
[17:34] <Ursinha> lifeless, if that's approved, merge into devel branch and push
[17:34] <Ursinha> then you have to deploy
[17:34] <Ursinha> that's what I described
[17:34] <lifeless> Ursinha: ah, so I can't run the test suite at the moment
[17:34] <lifeless> some buildout error
[17:35] <lifeless> perhaps you could check I haven't broken a test and push it to devel if its ok ?
[17:35] <Ursinha> ah, I know
[17:35] <Ursinha> I guess that's trying to find a launchpadlib version that isn't there anymore, just a moment
[17:40] <lifeless> yes
[17:40] <lifeless> Error: Couldn't find a distribution for 'launchpadlib==1.6.3'.
[17:40] <Ursinha> lifeless, it seems mars removed his pypi folder that contains the dependencies, I'll add one to ~ursula
[17:40] <Ursinha> you can fix the find-links to point to ~ursula, I'll fix that
[17:41] <lifeless> Ursinha: can you merge this one into trunk - I need to go chase two in-progress timeout bugs
[17:42] <Ursinha> lifeless, of course :)
[17:42] <lifeless> Ursinha: I appreciate you fixing the process blockers immediately - thats great stuff.
[17:42] <benji> leonardr: sure, give me about 10 minutes
[17:42] <leonardr> k
[17:42] <lifeless> Ursinha: thanks
[17:42] <Ursinha> lifeless, np
[17:46] <lifeless> bigjools: are you around for much longer ?
[17:46] <bigjools> lifeless: 10 mins
[17:46] <lifeless> I'm having trouble converting archive.getPublishedSources to storm
[17:46] <bigjools> ho ho :)
[17:47] <lifeless> I want to do that to change the prejoins to set based eager loaders
[17:47] <bigjools> why are you doing that?
[17:47] <bigjools> ok
[17:47] <lifeless> https://bugs.launchpad.net/launchpad/+bug/727560
[17:47] <_mup_> Bug #727560: Archive:EntryResource:getPublishedSources <timeout> <Launchpad itself:Triaged> < https://launchpad.net/bugs/727560 >
[17:47] <lifeless> 15 second query atm
[17:47] <bigjools> meep
[17:47] <lifeless> you could say that
[17:50] <lifeless> bigjools: are there unit tests for this, or just 'archive.txt' ?
[17:51] <bigjools> lifeless: just the doctest :/
[17:51] <lifeless> thanks
[17:51] <bigjools> feel free to rip it out
[17:51] <lifeless> migrate to unit tests ?
[17:51] <bigjools> however, that method is used *everywhere* - loads of other code and tests use it
[17:51] <lifeless> that would make it a lot easier to debug
[17:51] <bigjools> yes, migrate if you want
[17:52] <bigjools> archive.txt is way too big
[17:52] <lifeless> I'll have a look see
[17:52] <lifeless> thanks
[17:52] <bigjools> np
[17:55] <bigjools> lifeless: did you want any more help?
[17:58] <sinzui_> abentley, do you have time to review https://code.launchpad.net/~sinzui/launchpad/sane-definition-status-0/+merge/51932
[17:58] <bigjools> too late if you do, I'm off :)
[17:58] <bigjools> good night all
[18:15] <leonardr> benji: i think i've reduced my remaining problems do "why doesn't this component lookup succeed?" want to take a look?
[18:15] <benji> leonardr: sure
[18:15] <leonardr> ok, pushing
[18:16] <leonardr> lp:~leonardr/lazr.restful/consistent-error-exposing
[18:16] <leonardr> if you run bin/test there's a breakpoint right at the bit where everything goes south
[18:17] <leonardr> look at the SystemErrorView registration in configure.zcml. that's the one that's failing
[18:17] <leonardr> and look in _resource.py, the except clause in ReadWriteResource.__call__
[18:17] <leonardr> that's where the lookup happens
[18:19] <abentley> sinzui_: happy to.
[18:20] <leonardr> i've manually verified IException.providedBy(e), IWebServiceLayer.providedBy(self.request), and Interface.providedBy(IWebServiceErrorView)
[18:22] <leonardr> hm, it succeeds if i copy simple.py and remove the thing i'm trying to adapt to
[18:22] <leonardr> but i get the wrong status code
[18:22] <leonardr> i get SystemErrorView
[18:23] <leonardr> rather than WebServiceExceptionView
[18:25] <benji> looking at it now
[18:26] <leonardr> i think i got it, actually. i'm just not sure why i had to start adapting to Interface (presumably the default) instead of IWebServiceErrorView
[18:27] <leonardr> let me get it working and then you can critique the working version
[18:35] <leonardr> benji: pushed working version
[18:35] <leonardr> basically, WebServiceExceptionView is now the default view for all exceptions unless another view is registered
[18:35] <benji> looking
[18:35] <leonardr> and if the exception happens to have been annotated (by any mechanism), WebServiceExceptionView will defer to the annotation when it comes to the status of the response
[18:37] <leonardr> and the exception handler does a lookup for any exception, not just for a client error
[18:39] <abentley> sinzui_: Why are you addressing this issue by providing a new enum?  Why not just check that the value is in a set of acceptable values?
[18:43] <benji> leonardr: looks good very good to me
[18:44] <leonardr> benji: ok, maybe you'll like this other change, which takes it one step further...
[18:49] <leonardr> benji: check the new version. it *always* turns an exception into a view, and gets rid of IWebServiceError altogether
[18:50] <benji> k
[18:54] <benji> leonardr: wouldn't that mean that an "accidental" exception would result in a 400?  It seems a 500 would be more appropriate.
[18:54] <leonardr> benji: an accidental exception would turn into an un-annotated WebServiceExceptionView
[18:54] <leonardr> and its status would be the default, 500
[18:55] <benji> leonardr: ok, sounds like a plan; simpler code and it behaves the way we want
[18:56] <leonardr> all right, i'll fix the test failures and submit a formal mp
[19:04] <leonardr> benji: this does mean that the publisher will not be handling any errors. is that ok?
[19:05] <lifeless> leonardr: will we still get OOPSes on unannotated exceptions?
[19:06] <benji> leonardr: hmm, does our custom publisher do retries on DB availability errors (like conflicts)?  If so, it won't get the error to know that the error happened.
[19:07] <leonardr> lifeless: i don't know. lazr.restful will set X-Lazr-OopsId if request.oopsid is present, but i imagine there's more to "getting an oops" than that
[19:07] <sinzui_> maybe I should install unity-2d. The constant shaking windows is making me ill.
[19:07] <lifeless> leonardr: I mean the server side mechanism where raising is called()
[19:08] <benji> leonardr: lifeless makes a good point; we may want to only catch the errors that were marked as "not exceptional exceptions" instead of intercepting potentially hair-on-fire exceptions (MemoryError comes to mind)
[19:09] <lifeless> benji: you credit me with too much... that is also an excellent point
[19:09] <leonardr> benji: what if we raised exceptions if their error code was 500, for whatever reason? either because no other code was specified or because the error was explicitly marked as a server error?
[19:09] <lifeless> benji: my concern was more prosaic - I want our feedback loop for fixing crasher bugs to remain intact.
[19:10] <lifeless> leonardr: benji: may I ask what problem you're trying to solve ?
[19:10] <lifeless> s/trying to solve/solving/
[19:11] <benji> leonardr: I'm not sure I understood that part ("what if we raised exceptions if their error code was 500...")
[19:11] <leonardr> lifeless: we have 2.5 systems for telling lazr.restful to treat a given exception/exception class as a certain http response code, and only one of them works
[19:11] <leonardr> that's done; this is ancillary cleanup work, in which the question is "which exceptions can lazr.restful handle on its own, and which should it delegate to the publisher?"
[19:12] <leonardr> benji: so, earlier we effectively re-raised exceptions if their status code was not 400
[19:12] <lifeless> oh. Have you considered 'if lazr.restful does not explicitly know, its not its business' ?
[19:12] <benji> leonardr: I favor reverting that last bit (r181)
[19:13] <leonardr> lifeless: yes. the question is what it means to 'explicitly know' this
[19:13] <lifeless> leonardr: that would seem to be dependent on the system you've chosen for telling lazr.restful how to determine result codes for exceptions
[19:14] <leonardr> benji: if i revert the last bit, then exceptions will be re-raised unless they have been annotated with a return value. i don't think that's necessary or sufficient
[19:14] <benji> leonardr: that's not the way I read the code.  It seems to me that we reraised if the exception wasn't marked as "an exception that the client caused".
[19:14] <leonardr> an exception that the client caused -> 4xx response code
[19:14] <leonardr> we can make that explicit
[19:15] <leonardr> let me do some code and tests and show you what i mean
[19:15] <benji> I think we're talking about this bit, right? http://pastebin.ubuntu.com/574616/
[19:16] <leonardr> benji: yeah. the comment is wrong
[19:16] <benji> leonardr: the comment before or after the change is wrong?
[19:16] <leonardr> before
[19:17] <leonardr> if an exception is annotated as 301 or 503 (or 200 for some reason)
[19:17] <leonardr> lazr.restful would have handled it
[19:17] <leonardr> even though its decoration does not indicate that it's the client's fault
[19:17] <benji> ah, ok
[19:17] <leonardr> i think it makes sense to only handle exceptions where the response code is 4xx
[19:17] <leonardr> and let the publisher handle the rest
[19:18] <leonardr> that way, if you explicitly decorate an exception as 5xx, it will become an oops
[19:18] <benji> the exceptions-as-control-flow thing was throwing me
[19:19] <lifeless> leonardr: +1
[19:20] <benji> the issue there would be that (if I'm understanding all the mechanisms correctly) if you decorate an exception as a 50X but the code in question re-raises the exception, the publisher's error view may well return something other than 50X (say it may have 500 hard-coded).  Does that jibe with your understanding?
[19:21] <leonardr> benji: yes, that might happen
[19:21] <leonardr> i think that failure mode is minor compared to the failure modes that might happen if lazr.restful handled all the errors
[19:24] <benji> It seems to me that since we're not crying out for more control over 500s, we should designate this decoration mechanism as being for 4xx status codes; that way we don't have to get our fingers in the "a *real* error happened" reporting story while also greatly improving the "client did something wrong" story.
[19:25] <leonardr> i think just handling 4xx is good enough--that's effectively what we were doing before (except we only handled 400)
[19:25] <leonardr> and the publisher may have some way of handling exceptions that are marked as redirects, or something weird like that
[19:26] <benji> +1
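The rule the three of them converge on — render the error ourselves only when the annotation is 4xx, and re-raise everything else so the publisher's retry and OOPS machinery still sees it — fits in a few lines. A sketch with illustrative names, not lazr.restful's actual code:

```python
def handle_webservice_exception(exc):
    """Return (status, body) for client errors; re-raise everything else.

    Re-raising 5xx (and anything unannotated, which defaults to 500)
    keeps the publisher's DB-conflict retries and OOPS reporting in
    the loop, as lifeless and benji want.
    """
    status = getattr(exc, "__lazr_webservice_error", 500)
    if 400 <= status < 500:
        return status, str(exc)
    raise exc

err = ValueError("no such bug")
err.__lazr_webservice_error = 404
print(handle_webservice_exception(err))  # handled: the client's fault

try:
    handle_webservice_exception(RuntimeError("db conflict"))
except RuntimeError:
    print("re-raised for the publisher")
```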
[19:37] <leonardr> ok, i think we have a winner
[19:45] <LPCIBot> Project db-devel build #412: STILL FAILING in 5 hr 17 min: https://hudson.wedontsleep.org/job/db-devel/412/
[19:45] <lifeless> oh yay, tz boobytraps in archive.getPublishedSources
[19:51] <leonardr> benji: https://code.launchpad.net/~leonardr/lazr.restful/consistent-error-exposing/+merge/51946
[19:51] <leonardr> punt to abentley if you don't want to do this right now
[19:53] <benji> leonardr: I'd love to do the review, but I have some pressing work I really need to put some time into
[19:54] <leonardr> abentley, can you take a look?
[19:55] <LPCIBot> Project windmill build #5: FAILURE in 1 hr 13 min: https://hudson.wedontsleep.org/job/windmill/5/
[20:41] <thumper> matsubara: ping
[20:42] <matsubara> hi thumper
[20:42] <thumper> matsubara: sorry for not getting back to you earlier
[20:42] <thumper> matsubara: you were asking about some recipe testing
[20:42] <thumper> matsubara: what do you need from me?
[20:43] <matsubara> thumper, can we have a quick call?
[20:43] <thumper> matsubara: sure... skype or mumble?
[20:43] <matsubara> thumper, let's try mumble
[20:44] <thumper> matsubara: ok
[20:44] <matsubara> I'm on the product channel
[20:45] <matsubara> thumper, https://dev.launchpad.net/QA/ExploratoryTesting
[20:59] <leonardr> benji, i'm not sure why we have both webservice_error and @error_status--they do the same thing with slightly different syntax. do you have any ideas? should we standardize?
[21:01] <benji> leonardr: nope, no idea; I'm pretty sure both existed when I started looking at them
[21:03]  * leonardr will leave it alone for now
[21:04] <thumper> leonardr: mumble?
[21:04] <leonardr> thumper, yes!
[21:26] <leonardr> thumper: https://code.launchpad.net/~leonardr/lazr.restful/consistent-error-exposing/+merge/51946
[21:39] <thumper> leonardr: ok, I'm left a little confused with your merge proposal
[21:40] <thumper> leonardr: I think I was fine up until the last test
[21:40] <thumper> which left me questioning
[21:40] <leonardr> thumper: an alternative is to prohibit the caller from marking an exception with a response code not in the 4xx or 5xx series
[21:41] <thumper> leonardr: in test_decorated_exception_instance_becomes_error_view when the resource is called, you get a result
[21:41] <thumper> leonardr: but in the last test, when you call the resource it raises
[21:41] <thumper> why?
[21:42] <leonardr> thumper: because of line 27 in the diff
[21:42] <leonardr> if the response code is 4xx we handle the response (since the error was the client's fault)
[21:42] <leonardr> in all other cases we let the publisher handle it
[21:42] <thumper> ahh
[21:42] <thumper> ok
[21:42] <thumper> although the 200 case seems a bit of an edge case
[21:43] <thumper> I guess it is there for completeness
[21:43] <leonardr> thumper: yeah, we could also prohibit it, like i said
[21:43] <leonardr> but if the user wants to do that, we don't really know what they're trying to do, so better let the publisher handle it
[21:43] <thumper> approved
[21:43] <leonardr> great
[21:44] <leonardr> i'll land but not do a release, as integrating into launchpad may require other changes
[21:45] <thumper> ok
[21:56] <lifeless> matsubara: please triage new bugs :) - 728059
[21:57] <matsubara> lifeless, will do
[21:57] <matsubara> I just filed them
[21:59] <poolie> i just got Error 324 (net::ERR_EMPTY_RESPONSE): Unknown error. from https://bugs.launchpad.net/~canonical-bazaar/+assignedbugs
[21:59] <poolie> good morning btw
[22:01] <lifeless> wallyworld: BrowsesWithQueryLimits is your best option for that test
[22:07] <wallyworld> lifeless: yeah. i'll need to modify it to suit.
[22:08] <wallyworld> currently it takes a context object and i need to give it a url
[22:09] <wallyworld> i'm keen for the branch to land to improve +daily-builds performance even more
[22:12] <lifeless> wallyworld: I would give it an object and a view selector
[22:12] <lifeless> defaulting to +index
[22:12] <lifeless> wallyworld: then you can use the Root object and +daily-builds
[22:13] <wallyworld> ok. haven't really thought too much about it yet. have to context switch back. sounds good what you're suggesting
[22:14] <lifeless> wallyworld: or you could make a parallel one (perhaps sharing code) that takes a url
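The shape lifeless is suggesting (an object plus a view selector defaulting to +index, with a query limit) might look like the sketch below. The real helper is Launchpad's BrowsesWithQueryLimits; this class and its methods are purely hypothetical.

```python
class BrowseQueryLimit:
    """Hypothetical query-limit checker: a context object plus a view
    selector (defaulting to +index), e.g. the Root object and +daily-builds.
    Not the real BrowsesWithQueryLimits."""

    def __init__(self, obj, view_name="+index", limit=50):
        self.obj = obj
        self.view_name = view_name
        self.limit = limit
        self.queries = []

    def record(self, sql):
        # Called once per SQL statement issued while rendering the view.
        self.queries.append(sql)

    def check(self):
        if len(self.queries) > self.limit:
            raise AssertionError(
                "%s/%s issued %d queries (limit %d)"
                % (self.obj, self.view_name, len(self.queries), self.limit))
```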
[22:14] <lifeless> wallyworld: we need to talk separately about your bug task search patch
[22:15] <wallyworld> ok. i'll take another look and see if i can decorate the rs outside the search implementation
[22:15] <lifeless> wallyworld: can you tell me what you need to do ?
[22:15] <wallyworld> basically the current search functionality only returns bugtask and i need to also access tables from the query eg BugBranch
[22:15] <lifeless> [this is why I don't like itsy bitsy landings without context - they are harder to review]
[22:15] <lifeless> wallyworld: /why/
[22:16] <wallyworld> so i can, for each bugtask, create a data structure that allows me to look up a bugtask based on branch id, so i can find all info i need with one query
[22:17] <wallyworld> otherwise i would have to do one query per branch
[22:18] <lifeless> wallyworld: this is the privacy thing you're fixing ?
[22:18] <lifeless> wallyworld: I can give you a one liner to do what you need a different way
[22:18] <wallyworld> and when using the search method and passing in linked_branches=any(), the BugBranch table is in the query but i don't get to access it currently
[22:18] <wallyworld> yes
[22:20] <lifeless> visible_bugs = getUtility(IBugTaskSet).searchBugIds(BugTaskSearchParams(user=self.user, bug=any(*(bug.id for bug in bugs_you_have_today))
[22:21] <lifeless> this is one extra query, but should be fairly lean, and it will be constant.
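The idea behind lifeless's one-liner: issue a single bulk search for the ids of bugs visible to the user, then filter locally, so visibility costs one constant extra query rather than one per bug. The searchBugIds call above is Launchpad's; the helper below is a plain-Python stand-in for the filtering step.

```python
def filter_visible_bugtasks(bugtasks, visible_bug_ids):
    # visible_bug_ids comes from one bulk query (the searchBugIds call);
    # filtering then happens locally at constant query cost, instead of a
    # per-bug visibility check. bugtasks is modelled as (task, bug_id)
    # pairs purely for illustration.
    return [task for task, bug_id in bugtasks if bug_id in visible_bug_ids]
```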
[22:22] <lifeless> the problem with passing a result decorator into the query is that you actually want a wider tuple out, which such a decorator won't give you
[22:23] <wallyworld> actually mine does. it works :-)
[22:23] <wallyworld> and doesn't cause any extra sql
[22:24] <wallyworld> but i'll take a look at your suggestion
[22:24] <lifeless> pastebin it ?
[22:25] <wallyworld> lifeless: https://pastebin.canonical.com/44193/
[22:29] <wallyworld> lifeless: with your suggestion, at first look, it still seems like i won't have access to the info i need with one query, which is bugtasks for each branch
[22:30] <wallyworld> i'll look in more detail in case i'm missing something
[22:30] <lifeless> ok
[22:30] <lifeless> uhm, I'm worried about the interface basically
[22:30] <lifeless> there is a compiler in there that is fiendish
[22:31] <lifeless> what you have will actually cause more queries
[22:31] <lifeless> if you have *visible* private bugs
[22:31] <lifeless> because their visibility won't be cached
[22:31] <wallyworld> hmm. ok
[22:31] <lifeless> if you tested with *invisible* private bugs, it will seem fine.
[22:32] <lifeless> because you're fixing half the problem
[22:32] <wallyworld> the main problem is that the search() assumes you only want bugtasks and not other stuff from the query
[22:32] <wgrant> sinzui: No critical issues with your CSS changes?
[22:32] <lifeless> wallyworld: thats what its designed for
[22:32] <sinzui> no critical
[22:33] <wgrant> OK, we shall deploy then.
[22:33] <wgrant> Thanks.
[22:33] <wallyworld> hmmm. don't like the design then :-)
[22:33] <lifeless> wallyworld: there are some options - eager load (e.g. into a cache property) the branches; ask for related things in the result
[22:33] <lifeless> wallyworld: Here is another problem with your fix
[22:33] <lifeless> wallyworld: you'll get multiple rows back for the same bugtask.
[22:35] <lifeless> wallyworld: (if branch A and B are both linked to bug C, then you'll get a row for (C, A) and for (C, B))
[22:35] <wallyworld> so long as the related branch in each row is correct that won't matter
[22:35] <wallyworld> i'm building a dict of branch->related bug tasks
[22:36] <lifeless> wallyworld: why won't it matter? It will still cost cpu: cache refreshing of duplicate rows in storm.
[22:36] <wallyworld> but the rows for (C,A) and (C,B) are valid and required
[22:36] <lifeless> wallyworld: I would do a separate single query on bugbranch for (BugBranch.bugID.is_in(bug_ids), BugBranch.branchID.is_in(source_branch_ids))
[22:37] <wallyworld> sounds reasonable. get the bug ids from the search() and then another query to load the required info
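Assembling the branch->bugtasks dict from lifeless's separate bulk BugBranch query could look like this sketch; the two inputs are stand-ins for the results of the search() and the BugBranch query, and the function name is hypothetical.

```python
def bugtasks_by_branch(bug_branch_rows, tasks_by_bug):
    # bug_branch_rows: (bug_id, branch_id) pairs from one bulk query on
    # BugBranch, i.e. BugBranch.bugID.is_in(bug_ids) AND
    # BugBranch.branchID.is_in(source_branch_ids).
    # tasks_by_bug: bug_id -> list of bugtasks from the earlier search().
    mapping = {}
    for bug_id, branch_id in bug_branch_rows:
        mapping.setdefault(branch_id, []).extend(tasks_by_bug.get(bug_id, []))
    return mapping
```

This keeps the search() resultset to plain bugtasks (no duplicate (C, A)/(C, B) rows to refresh in Storm's cache) at the cost of exactly one extra query.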
[22:38] <wallyworld> if visible private bugs were cached, my way would require fewer queries :-P
[22:40] <wallyworld> thanks for the input. i'll get back to it after food and after i resubmit the +daily-builds perf improvement stuff
[22:41] <wgrant> Looking at the stable merge failure.
[22:45] <wgrant> sinzui: Oh, yesterday I also killed Windmill and made PQM fast.
[22:45] <huwshimi> wgrant: oh just that, I'm surprised you remembered at all!
[22:45] <sinzui> wgrant: you made 5 minute PQM (a merge in less than 5 minutes) actually work as promised!
[22:46] <sinzui> I think we gave up on that 18 months ago
[22:46] <wgrant> Sadly prod config landings are still slow.
[22:47] <wgrant> But they are rare enough.
[22:47] <poolie> hi all
[22:47] <wgrant> Oh no :(
[22:47] <wgrant> Now PQM is going to spam us more often with stable->db-devel conflicts :(
[22:48] <jcsackett> wgrant: the 5 min for main launchpad stuff is amazing. i think we can all bear the configs being longer. :-)
[22:55] <wgrant> Huh.
[22:55] <wgrant> A test went missing when r12399 was automatically merged into db-devel.
[22:55] <lifeless> orly?
[22:56] <wgrant> lifeless: Compare the stable r12399 and db-devel r10214 diffs, lib/lp/registry/tests/test_person.py
[22:57] <lifeless> wgrant: if it was already in db-devel
[22:57] <lifeless> then it wouldn't show in the diff
[22:57] <wgrant> It's not in db-devel now, and doesn't seem to have been removed since.
[22:58] <lifeless> ok, thats more worrying then
[22:58] <wgrant> But let's see the tree at that rev...
[22:58] <wgrant> No, not there in 10214 either.
[22:59] <lifeless> ok, thats whack
[22:59] <lifeless> if you merge the two revs locally, does bzr do that?
[22:59] <wgrant> Waiting for that to happen.
[22:59] <wgrant> Warning: criss-cross merge encountered.  See bzr help criss-cross.
[22:59] <wgrant> Heh.
[22:59] <wgrant> And it's missing.
[23:00] <wgrant> So the criss-cross kills it without a conflict.
[23:00] <wgrant> This is sort of really bad.
[23:00] <lifeless> we can teach pqm to abort on criss cross
[23:01] <wgrant> I'll revive the test now and look through the stable->db-devel diff to check that nothing else is missing.
[23:01] <spiv> lifeless: not just whack, wiggida wiggida wiggida wack!
[23:01] <lifeless> hi daddy :)
[23:02] <spiv> lifeless: well it's a criss-cross problem :P
[23:02] <jcsackett> ...gonna make you jump, jump.
[23:03] <lifeless> spiv: oh, I missed the ref.
[23:03] <lifeless> spiv: 5:45 starts on thursday ;)
[23:03] <wgrant> I think I may be too young for this.
[23:05] <jcsackett> wgrant: http://en.wikipedia.org/wiki/Kris_Kross
[23:05] <wgrant> Ah, yes, 1992.
[23:06] <spiv> wgrant: http://www.youtube.com/watch?v=SmO4xdnf6Gk, especially around the 40 second mark
[23:07] <huwshimi> wgrant: That was the year you were born, right?
[23:07] <wgrant> huwshimi: '91.
[23:07] <huwshimi> wgrant: oh, my apologies :D
[23:08] <wgrant> I am at least a few months older than this... abomination.
[23:08] <lifeless> no excuses then
[23:09] <huwshimi> wgrant: That would only make you a little younger than they were when the song came out
[23:12] <lifeless> wow
[23:12] <lifeless> I'm not an apple aficionado
[23:12] <lifeless> but this is sweet hardware: http://www.apple.com/ipad/features/
[23:13] <wgrant> Yes, even I have to admit that :(
[23:14]  * huwshimi is wondering if Ubuntu will work on his released-last-week MacBook Pro
[23:15]  * huwshimi is about to find out
[23:19] <lifeless> bah
[23:20] <lifeless> how is %foo% meant to be escaped for storm
[23:20] <lifeless>   File "/home/robertc/launchpad/lp-sourcedeps/eggs/storm-0.18-py2.6-linux-x86_64.egg/storm/database.py", line 366, in _check_disconnect
[23:20] <lifeless>     return function(*args, **kwargs)
[23:20] <lifeless>   File "/home/robertc/launchpad/lp-branches/working/lib/lp/testing/pgsql.py", line 115, in execute
[23:20] <lifeless>     return self.real_cursor.execute(*args, **kwargs)
[23:20] <lifeless> ("SELECT COUNT(*) FROM SourcePackageName, SourcePackagePublishingHistory, SourcePackageRelease WHERE SourcePackagePublishingHistory.archive = %s AND SourcePackagePublishingHistory.sourcepackagerelease = SourcePackageRelease.id AND SourcePackageRelease.sourcepackagename = SourcePackageName.id AND (SourcePackageName.name LIKE '%' || 'cd' || '%')", (9,))
[23:20] <lifeless> IndexError: tuple index out of range
[23:28] <lifeless> argh, I'm blind
[23:28] <lifeless> sqlobject vs storm difference on % escaping. Baby saviour wept.
[23:29] <wgrant> Are you blind for not seeing it, or blind because you saw it?
[23:30] <lifeless> blind having fixed it
[23:30] <lifeless> " '%%%%' || %s '%%%%'" % quote_like(name)
[23:31] <lifeless> damned if I know why the change was needed
[23:31] <LPCIBot> Yippie, build fixed!
[23:31] <LPCIBot> Project windmill build #6: FIXED in 49 min: https://hudson.wedontsleep.org/job/windmill/6/
[23:36] <lifeless> oh fail
[23:36] <lifeless> julian used bool(resultset) all over with getPublishedSources
[23:38] <lifeless> wgrant: pop quiz
[23:38] <lifeless> binary_copies)                                                                                       |            clauses.append(
[23:38] <lifeless> lib/lp/soyuz/scripts/packagecopier.py line 564
[23:38] <lifeless> would .one() or .first() be appropriate?
[23:39] <wgrant> lifeless: Which line?
[23:39] <wgrant>         source_copy = source_in_destination[0]
[23:39] <wgrant> ?
[23:39] <lifeless> yes
[23:39] <wgrant> first
[23:39] <wgrant> one will crash
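The semantic difference wgrant is pointing at, sketched over plain lists rather than Storm's real ResultSet (so these are stand-ins, not Storm's implementation): .first() quietly returns the first row or None, while .one() blows up when the result has more than one row.

```python
def first(rows):
    # Storm-style .first(): the first row, or None when empty.
    return rows[0] if rows else None

def one(rows):
    # Storm-style .one(): raises when the result holds more than one row,
    # which is why .first() is the safe choice for the packagecopier lookup.
    if len(rows) > 1:
        raise AssertionError("expected at most one row, got %d" % len(rows))
    return rows[0] if rows else None
```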
[23:40] <lifeless> is a failover server?
[23:40] <lifeless> \o/ test passes
[23:45] <wgrant> Yay, appservers updated.
[23:46] <lifeless> hah, and a trivial query reduction in distroseriesdifference.py
[23:46] <wgrant> Oh?
[23:46] <lifeless> *sql is not python*
[23:46] <lifeless> second hunk
[23:46] <lifeless> http://pastebin.com/0ijzdQjD
[23:48] <wgrant> Ah
[23:51] <lifeless> also in findSourcePackageRelease
[23:51] <lifeless> same bug
[23:52] <lifeless> and getSourceAncestry
[23:53] <lifeless> also known as: LBYL is slow
[23:56] <lifeless> actually,
[23:56] <lifeless> -        if pubs:
[23:56] <lifeless> +        try:
[23:56] <lifeless>              return pubs[0]
[23:56] <lifeless> -        else:
[23:56] <lifeless> +        except IndexError:
[23:56] <lifeless> is cleaner
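The diff lifeless pastes swaps a look-before-you-leap truthiness test (which, on a lazy result set, issues an extra COUNT/EXISTS query before the real fetch) for EAFP. A plain-Python sketch of the pattern, with a hypothetical function name:

```python
def first_publication(pubs):
    # EAFP: just index and catch the miss. `if pubs:` on a lazy result set
    # costs an extra COUNT/EXISTS round trip before fetching any rows.
    try:
        return pubs[0]
    except IndexError:
        return None
```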
[23:59] <wallyworld> abentley: you still there?