[00:02] <StevenK> Ah ha. lifeless's comments are a little misleading.
[00:03] <StevenK> BugNominationView._getListingItem isn't it at all, it's BugListingBatchNavigator._getListingItem
[00:05] <poolie> StevenK, hey
[00:05] <poolie> in pursuit of splitting out the buildd tree
[00:05] <poolie> i'm thinking of moving the tests that rely on zopelessdblayer etc out of the buildd tree into lp
[00:05] <poolie> so they will rely on a buildd but not be part of it
[00:06] <StevenK> poolie: And making lp-buildd be pulled in via sourcecode or the download-cache?
[00:06] <poolie> yep
[00:06] <poolie> https://bugs.launchpad.net/launchpad/+bug/800295 fwiw
[00:06] <_mup_> Bug #800295: buildd is unnecessarily coupled to the launchpad tree through tac file and readyservice <tech-debt> <Launchpad itself:In Progress by mbp> < https://launchpad.net/bugs/800295 >
[00:06] <poolie> welcome back, bot
[00:10] <StevenK> Actually, it's not. It's NominatedBugListingBatchNavigator.
[00:18] <wgrant> StevenK: Do you know if overlay distros work yet?
[00:18] <wgrant> I think they probably function.
[00:19] <wgrant> Enough for someone to kill archivepurpose 4 with a few hours of work.
[00:19] <StevenK> Ha. Maybe they do.
[00:20] <StevenK> wgrant: Should I put up a NDT for r14231?
[00:20] <wgrant> Since I've got Julian convinced that we should kill archivepurpose 7...
[00:20] <wgrant> StevenK: Was hoping you'd ask :)
[00:20] <wgrant> And 4 is pretty clearly actually an overlay distro.
[00:20] <StevenK> What's 7 again?
[00:20] <wgrant> 7 is DEBUG
[00:20] <wgrant> I successfully convinced him that it's a terrible idea.
[00:20] <StevenK> And 4 is PARTNER
[00:20] <wgrant> Yes.
[00:20] <StevenK> They both need to die.
[00:20] <wgrant> Yes.
[00:21] <wgrant> And I think 4 can die pretty cheaply by being turned into https://launchpad.net/canonical-partner
[00:21] <wgrant> And that lets us remove a tonnnnnne of awfulness.
[00:21] <wgrant> all_distro_archives can fuck off.
[00:21] <StevenK> Needs IS involvement to get it onto archive.c.c.
[00:21] <StevenK> But certainly doable
[00:21] <wgrant> We can remove all the Distribution and DistroSeries publication-related methods.
[00:22] <wgrant> IT WILL BE BEAUTIFUL
[00:22] <wgrant> Yes, it requires some IS involvement.
[00:22] <wgrant> But I think it's very doable.
[00:22] <wgrant> Although I bet it's signed with the Ubuntu archive key.
[00:22] <wgrant> Just to make things difficult.
[00:22] <StevenK> Oh, rofl
[00:22]  * StevenK starts putting together a NDT
[00:24] <wgrant> StevenK: Have you heard about my new solution to kill off 7?
[00:24] <wgrant> I'm not sure if I've run it past you.
[00:26] <wgrant> Need to run it past Colin and Adam and the like too, but I think it's sensible.
[00:27] <StevenK> wgrant: Only in vague terms, I think.
[00:28] <wgrant> StevenK: We publish them in separate $COMPONENT/debug indices, like $COMPONENT/debian-installer, and handle the split like we do with security and ports.
[00:28] <wgrant> Eventually when Soyuz gets archive views, we can move the split to there... but until then we might as well utilise the existing mechanism.
[00:29] <StevenK> Doesn't that make archive.u.c explode in size?
[00:30] <wgrant> Remember that there's already the secret magic splitter that throws ports binaries and indices elsewhere.
[00:30] <wgrant> And keeps security on security.u.c.
[00:30] <wgrant> We repurpose that to create a fourth archive -- ddebs.ubuntu.com
[00:30] <wgrant> Because this is really the same thing as the ports split.
[00:31] <StevenK> Oh, right.
[00:31] <wgrant> We need to keep some binaries off the mirrors.
[00:31] <wgrant> It's divided by component instead of arch, but it's really the same thing.
[00:32] <StevenK> With the appropriate magic-mirror hacking, that sounds like a good plan.
[00:32] <StevenK> wgrant: And ddebs.u.c already exists on macquarie
[00:32] <wgrant> Yes. The retracers will probably need to look at both for a while.
[00:33] <wgrant> For values of "a while" that probably number in the several years.
[00:33] <wgrant> But details, details.
[00:35] <wgrant> apt and apt-ftparchive seem happy with custom slashed components.
[00:35] <wgrant> Yay
[00:35] <wgrant> It is practical.
[00:35] <wgrant> And the Soyuz side is dead-easy.
[00:37] <wgrant> So, we now have feasible solutions to both partner and debug that probably take less than two days each.
[00:37] <wgrant> One is already escalated...
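(The layout wgrant describes above -- debug binaries published under a slashed sub-component, parallel to the existing $COMPONENT/debian-installer split -- can be sketched roughly as follows. This is an illustrative sketch only; the function and its names are hypothetical, not Soyuz code.)

```python
# Hypothetical sketch of the proposed index layout: debug ddebs get their
# own slashed sub-component under each component, just like the existing
# debian-installer split. Names here are illustrative, not real Soyuz code.

def index_directory(suite, component, subcomponent=None):
    """Return the dists/ directory that would hold a Packages index."""
    parts = ["dists", suite, component]
    if subcomponent:  # e.g. "debian-installer" or the proposed "debug"
        parts.append(subcomponent)
    return "/".join(parts)

print(index_directory("natty", "main"))                      # dists/natty/main
print(index_directory("natty", "main", "debian-installer"))  # existing split
print(index_directory("natty", "main", "debug"))             # proposed ddeb split
```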
[00:43] <StevenK> wgrant: Isn't there a better method to call to get upload perms for a user than distribution.main_archive.verifyUpload(person, name, component, distroseries) ?
[00:43] <wgrant> Don't think so.
[00:44] <wgrant> jml cleaned it up a couple of years back.
[00:44] <wgrant> It would be nice if it was set-based, but it's not.
[00:44] <StevenK> Since I think the lookup of the component is grabbing SPPH, and then verifyUpload does the same thing.
[00:44] <wgrant> It's grabbing it to check the component?
[00:44] <wgrant> It could be cleaned up further, but it's a deep hole.
[00:45] <StevenK>                 component = self.distroseries.getSourcePackage(
[00:45] <StevenK>                     name).latest_published_component
[00:45] <StevenK>                 if distribution.main_archive.verifyUpload(
[00:45] <StevenK>                     person, name, component, self.distroseries) is None:
[00:45] <StevenK>                     return True
[00:45] <wgrant> One I plan to follow eventually, but hopefully after abolishing archivepurpose (4, 7).
[00:45] <wgrant> StevenK: You could possibly check how branches do it.;
[00:46] <wgrant> launchpad.Edit for IBranch needs to do the same thing as IBugNomination.canApprove.
[00:49] <wallyworld_> poolie: hi. i claimed your 678090-affected-count mp and then read the comments. are you still working on it?
[00:49] <StevenK> wgrant: Digging around IBranch isn't turning up anything
[00:49] <poolie> wallyworld_, not right at this moment but i will
[00:50] <wgrant>     def checkAuthenticated(self, user):
[00:50] <wgrant>         can_edit = (
[00:50] <wgrant>             user.inTeam(self.obj.owner) or
[00:50] <wgrant>             user_has_special_branch_access(user.person) or
[00:50] <wgrant>             can_upload_linked_package(user, self.obj))
[00:50] <wgrant>     for ssp in ssp_list:
[00:50] <wgrant>         archive = ssp.sourcepackage.get_default_archive()
[00:50] <wgrant>         if archive.canUploadSuiteSourcePackage(person_role.person, ssp):
[00:50] <wgrant>             return True
[00:50] <wallyworld_> poolie: ok. just wanted to see if i needed to review now or not. thanks
[00:50] <poolie> do you have any comments beyond what robert and i said?
[00:50] <wgrant> Which uses latest_published_component.
[00:50] <wallyworld_> poolie: not sure. didn't get that far. i'll have a look
[00:52] <StevenK> wgrant: Ah, where is that?
[00:52] <wgrant> StevenK: c.l.security
[00:55] <wallyworld_> poolie: so you can delete the view property 'other_users_affected_count' i think
[00:55] <poolie> yep probably
[00:56] <poolie> is my change to @cachedproperty ok?
[00:56] <wallyworld_> poolie: other than that, looks like a nice change. i was going to also suggest the feature flag
[00:56] <wallyworld_> poolie: i like it how you've been doing these polish 'paper cut' bugs of late
[00:57] <poolie> thanks
[00:57] <wallyworld_> poolie: i think the cachedproperty change is ok
[00:57] <wallyworld_> i've run into issues before with cached permissions but i think this case is ok
[00:58] <poolie> it's just cached for the duration of the page view, right?
[00:58] <wallyworld_> yep. the rendering
[00:59] <wallyworld_> poolie:  so, a cached property is good if a property is accessed more than once during a render. is this property used more than once?
[00:59] <poolie> yes, the template puts it in twice
[00:59] <poolie> for something to do with js fallbacks
[00:59] <wallyworld_> ok
[00:59] <poolie> which seems ugly but i didn't want to change it without fully understanding it
[00:59] <poolie> and just caching it seemed fine
[01:00] <wallyworld_> sure
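(The @cachedproperty behaviour being reviewed here -- compute once per instance, i.e. once per page render, then reuse -- can be illustrated with a minimal stand-in. Launchpad's real decorator lives elsewhere in the tree; this sketch only demonstrates the caching semantics, and the BugView class is hypothetical.)

```python
# Minimal stand-in for a @cachedproperty decorator: a non-data descriptor
# that computes the value on first access and stashes it in the instance
# dict, which shadows the descriptor for all later accesses.

class cachedproperty:
    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        value = self.func(obj)
        obj.__dict__[self.name] = value  # shadows the descriptor from now on
        return value


class BugView:
    calls = 0

    @cachedproperty
    def other_users_affected_count(self):
        BugView.calls += 1  # stands in for an expensive DB query
        return 42


view = BugView()
view.other_users_affected_count
view.other_users_affected_count  # template uses it twice (js fallback)
print(BugView.calls)  # 1 -- two accesses, one query
```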
[01:01] <poolie> lifeless, do you have any opinion where TacTestSetup ought to live, so that it can be used by both lp and buildd tests?
[01:01] <poolie> it is not enormous but not trivial
[01:01] <wgrant> In Twisted :)
[01:01] <poolie> it would seem like a waste to make a new package just for that
[01:01] <poolie> :)
[01:01] <poolie> perhaps it could go in testfixtures
[01:02] <lifeless> poolie: you could create a txfixtures
[01:02] <lifeless> avoiding packages just makes problems
[01:02] <poolie> as a new package, depended upon by both?
[01:02] <lifeless> yes
[01:03] <lifeless> txfixtures with twisted specific Fixtures
[01:03] <lifeless> it can depend on fixtures
[01:03] <lifeless> lp and buildd can both use txfixtures
[01:03] <poolie> k
[01:03] <poolie> aside from that i think it's all ready to be cut out
[01:04] <poolie> nb it doesn't actually depend on twisted
[01:04] <poolie> so, in a sense it could go in plain fixtures
[01:04] <poolie> well, you probably cannot run its own tests without twisted
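(The shape lifeless proposes -- a txfixtures package holding Twisted-flavoured Fixtures, depending on the fixtures library -- looks roughly like this. The base class here is a tiny stand-in for fixtures.Fixture so the sketch runs standalone, and TacServiceFixture is a hypothetical name; the real TacTestSetup does actual process management.)

```python
# Rough sketch of a txfixtures-style fixture: start a .tac service for a
# test, tear it down afterwards. The Fixture base is a minimal stand-in
# for the fixtures library's setUp/addCleanup/cleanUp protocol.

class Fixture:
    """Minimal stand-in for fixtures.Fixture."""
    def setUp(self):
        self._cleanups = []

    def addCleanup(self, fn):
        self._cleanups.append(fn)

    def cleanUp(self):
        for fn in reversed(self._cleanups):
            fn()


class TacServiceFixture(Fixture):
    """Start a (pretend) twistd service from a .tac file."""
    def __init__(self, tacfile, port):
        self.tacfile = tacfile
        self.port = port
        self.running = False

    def setUp(self):
        super().setUp()
        self.running = True  # real code: spawn twistd, poll until the port listens
        self.addCleanup(self.stop)

    def stop(self):
        self.running = False  # real code: kill the daemon, remove the pidfile


service = TacServiceFixture("buildd-slave.tac", 8221)
service.setUp()
assert service.running
service.cleanUp()
assert not service.running
```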
[01:05] <poolie> wallyworld_, while you're here can you read https://code.launchpad.net/~mbp/launchpad/800295-buildd-split/+merge/81111
[01:05]  * wallyworld_ reads
[01:17] <wallyworld_> poolie: looks ok to me, but i think the port to listen on needs to be parameterised
[01:27] <wgrant> lifeless: Do you know exactly what OEM needs? Just bugs with foo and hwe-foo tasks?
[01:27] <wgrant> lifeless: I wonder if we can just have a temporary owning context.
[01:28] <wgrant> lifeless: There's no real need to respect multiple policies.
[01:28] <lifeless> I don't precisely know; nor do I understand what you mean :)
[01:29] <wgrant> lifeless: What if we say that OEM's multi-pillar bugs favour one of the targets, and use the access policy from that target?
[01:29] <wgrant> Rather than respecting the union of policies from each target.
[01:30] <lifeless> I have no objection to that
[01:30] <lifeless> seems harder to do well
[01:30] <lifeless> but thats your problem :)
[01:32] <wgrant> lifeless: Also, I forget some details of yesterday's conversation. IIRC you suggested just having the enum value on Bug/BugTask, without the pillar reference. But then we have to indirect through Product/Distribution/ProductSeries/DistroSeries to work out the policy.
[01:34] <poolie> some of these osutils almost need their own package
[01:34] <lifeless> for full observers
[01:34] <lifeless> not for restricted
[01:34] <wgrant> Sure.
[01:34] <lifeless> which is a small set
[01:34] <wgrant> Sure, but it makes queries far worse.
[01:34] <lifeless> you've tested ?
[01:35] <wgrant> I mean in terms of complexity.
[01:35] <wgrant> Speed, I haven't tested yet.
[01:35] <lifeless> they are a little more awkward to write, thats true.
[01:35] <lifeless> might be a driver for having two separate tables
[01:35] <wgrant> How?
[01:35]  * lifeless speculates wildly
[01:35] <wgrant> The problem is not that there's one table.
[01:36] <lifeless> wgrant: its a lopsided table
[01:36] <wgrant> The problem is that we have to indirect through four other tables to get to the one table.
[01:36] <lifeless> wgrant: task -> policyuse -> grants
[01:36] <lifeless> gnuoy: one in between table
[01:36] <lifeless> bah
[01:36] <lifeless> wgrant: ^
[01:37] <wgrant> lifeless: How do you get from task to policyuse unless you refer to policyuse on task?
[01:37] <wgrant> You'd have to go through ProductSeries or DistroSeries.
[01:37] <lifeless> task.product == policyuse.product
[01:37] <lifeless> mmm, sure. ok - 2 tables
[01:37] <lifeless> task -> [*series] -> policyuse
[01:37] <wgrant> And we already have to denorm this slightly, on grant.
[01:37] <wgrant> I'm not sure that one is avoidable.
[01:38] <lifeless> grant knowing the asset ?
[01:38] <wgrant> Right, an artifact-specific grant has to know the artifact, its target, and its policy.
[01:38] <lifeless> target == user ?
[01:39] <lifeless> 'person' ?
[01:39] <wgrant> pillar
[01:39] <lifeless> why does it need pillar? it can use policyuse.id
[01:39] <lifeless> (person, artifact, policyuse)
[01:39] <wgrant> Right, but policyuse requires target and policy.
[01:39] <lifeless> yes
[01:40] <wgrant> Which means we're denorming (target, policy) from bugtask onto grant.
[01:40] <wgrant> It's indirect, but it's still denormed.
[01:40] <lifeless> mmm
[01:40] <lifeless> so we'd do that for reporting
[01:40] <lifeless> ack, its a denorm
[01:41] <lifeless> thats part of your schema anyhow though, right ?
[01:41] <wgrant> Yes.
[01:41] <lifeless> so no-change
[01:41] <wgrant> The change is that you suggest I no longer denorm that on bugtask itself.
[01:42] <wgrant> Which seems odd, when we're already denorming it elsewhere.
[01:42] <wgrant> Further away.
[01:42] <lifeless> well, denorms are done for a reason
[01:42] <lifeless> whats the reason for it to be denormalised on bugtask ?
[01:42] <lifeless> or rather
[01:42] <lifeless> let me ask it differently
[01:42] <lifeless> if bug is the native location for this
[01:43] <lifeless> whats the best denorm to use on bugtask
[01:43] <lifeless> so bug.policy = ENUM
[01:43] <wgrant> Right, better question.
[01:43] <lifeless> bugtask.whatshouldwehavehere.
[01:44] <lifeless> brb, nature
[01:44] <wgrant> I suspect policyuse. So setting Bug.access_policy calculates policyuse and throws it at BugTask and artifact.
[01:45] <wgrant> Although then we can't optimise public bugs.
[01:45] <wgrant> Because we'd have to look up the public policy. But that may not be a problem.
[01:59] <poolie> ok, lp:txfixtures exists!
[02:04] <wgrant> lifeless: We seem to have an OOPS issue.
[02:04] <wgrant>  * 308 Exceptions
[02:04] <wgrant> All the None/None OOPSes are missing.
[02:04]  * wgrant checks devpad.
[02:05] <wgrant> django.db.utils.DatabaseError: invalid byte sequence for encoding "UTF8": 0xf8
[02:05] <wgrant> HINT:  This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
[02:05] <wgrant> yay
[02:05] <wgrant>  raised with datedir: /srv/launchpad.net-logs/production/chaenomeles/2011-11-03 filename: OOPS-4e0b81ac6e642ee8099df77672c8be1a
[02:05] <wgrant> fwiw
[02:07] <wgrant> ['HTTP_USER_AGENT', '\xf8\x07p\x01']
[02:07] <wgrant> That's a new one...
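(The failure above is the loader choking on header bytes that aren't valid UTF-8. The defensive decode wgrant floats a little later, .decode('utf-8', 'replace'), looks like this on the exact bytes from this OOPS.)

```python
# The 0xf8 byte isn't a valid UTF-8 start byte, which is what broke the
# dboopsloader. Decoding with errors='replace' is lossy but can't crash:
# invalid bytes become U+FFFD, the Unicode replacement character.

raw = b'\xf8\x07p\x01'  # the HTTP_USER_AGENT value from the OOPS above
decoded = raw.decode('utf-8', 'replace')
print(repr(decoded))  # '\ufffd\x07p\x01'
```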
[02:09] <poolie> M-x ^g p ^a
[02:09] <poolie> help!
[02:09] <poolie> wallyworld_, so what did you think of https://code.launchpad.net/~mbp/launchpad/800295-buildd-split/+merge/81111
[02:10] <poolie> lifeless, i'm just going to copy some of the osutils functions in to txfixtures to stop this recursing too much
[02:10] <wallyworld_> [11:17:10] <wallyworld_> poolie: looks ok to me, but i think the port to listen on needs to be parameterised
[02:11] <poolie> some kind of global osutils
[02:11] <poolie> wallyworld_, where?
[02:12] <lifeless> wgrant: win
[02:12] <lifeless> wgrant: rye has a branch up that should help with this
[02:12] <wgrant> lifeless: .decode('utf-8', 'replace')?
[02:12] <lifeless> wgrant: not proposed for merging yet
[02:12] <wallyworld_> poolie: _hasDaemonStarted().... return self._isPortListening('localhost', 8221)
[02:12] <lifeless> well, the db schema is royally messed up. headers are bytes.
[02:12] <wgrant> Not sure whether to hack code or OOPS now..
[02:13] <poolie> wgrant, that particular one looks a lot like unix keyboard input not broken utf8
[02:13] <wgrant> Yes.
[02:13] <lifeless> wgrant: https://bugs.launchpad.net/python-oops-tools/+bug/884265
[02:13] <_mup_> Bug #884265: req_vars header values may cause crash in dboopsloader <python-oops-tools:Triaged> < https://launchpad.net/bugs/884265 >
[02:13] <poolie> of course in general escaping might be good
[02:13] <wallyworld_> poolie: but i'm not very familiar with that stuff so i could be wrong
[02:13] <poolie> wallyworld_, it's already hardcoded in the tac file...
[02:13] <poolie> the coupling is not ideal but since we're apparently happy just counting on that port for these tests i can live with it
[02:14] <lifeless> I think someone is going around attempting an exploit
[02:14] <wallyworld_> poolie: so, my ignorance - i thought hasDaemonStarted() is a generic method used for different services
[02:14] <wallyworld_> hence each would be on a different port
[02:14] <lifeless> wgrant: https://code.launchpad.net/~rye/python-oops-tools/quote-req-vars
[02:14] <poolie> i guess i could make a tac file in a tmpfile
[02:14] <wgrant> lifeless: Intriguing.
[02:14] <poolie> wallyworld_, the version that hardcodes the value is specific to tests that always use buildd-slave-test.conf which is always 8221
[02:15] <wgrant> But I guess that works.
[02:15] <wgrant> For now.
[02:15] <wallyworld_> poolie: ok. thanks for clarifying
[02:15]  * wgrant cowboys.
[02:16] <poolie>  np
[02:16] <poolie> thanks for the speedy review
[02:16] <poolie> all good then?
[02:17] <lifeless> wgrant: where did you see the error ?
[02:18] <wgrant> lifeless: Ran make update_db manually.
[02:18] <lifeless> poolie: lazr.utils might be a place you can put osutils things
[02:18] <wgrant>         if isinstance(value, str):
[02:18] <wgrant> Needed to add that.
[02:18] <wgrant> Since we have floats.
[02:18] <wgrant> Perhaps U1 does not.
[02:18] <poolie> ok lunch, biab
[02:18] <lifeless> poolie: though I have not checked the dependencies it would drag in
[02:18] <lifeless> wgrant: well, they don't have native types at all yet (rfc822 still)
[02:19] <poolie> i don't have a good bias towards lazr.* but perhaps it's only restfulclient
[02:19] <lifeless> wgrant: I'm working on the IBodyProducer thing they need to migrate
[02:19] <wgrant> lifeless: Ah, of course.
[02:20] <wallyworld_> poolie: looks ok
[02:22] <wgrant> lifeless: Thanks.
[02:22] <wgrant> lifeless: Any thoughts on bugtask?
[02:24] <lifeless> permit bug.accesspolicy to be NULL
[02:25] <lifeless> I agree; trigger updates both denormed locations (grant + tasks)
[02:26] <wgrant> Great.
[02:26] <lifeless> wgrant: raises the question, perhaps the artifact table should denorm the policyuse, rather than the grants.
[02:26] <lifeless> (Or does it already?
[02:26] <lifeless> )
[02:26] <wgrant> It doesn't already. It could. But currently the only method to manipulate artifacts is ensure(), which is nice.
[02:27] <lifeless> so bug.accesspolicy being null implies task.policyuse is nullable, so public case is optimisable
[02:27] <wgrant> Yep.
[02:27] <wgrant> I'm not sure what to do with artifact-specific permissions when the artifact becomes public.
[02:27] <lifeless> now, when the policy changes, there are less artifact rows to change than grant rows
[02:27] <wgrant> Indeed.
[02:27] <wgrant> And querying through artifact for reporting should be quickish... hopefully.
[02:28] <lifeless> which, if you're hitting artifact anyway on reports
[02:28] <lifeless> -> I would put the policyuse on artifact for restricted observers
[02:28] <lifeless> -> and on grant for full observers.
[02:28] <wgrant> Right.
[02:28] <lifeless> yes, that breaks your combined schema. I'm feeling more and more sure it should be separate.
[02:29] <wgrant> Possibly, yes.
[02:29] <wgrant> It shouldn't make a significant performance difference.
[02:29] <wgrant> To have to go through artifact.
[02:29] <lifeless> Well, I think you are anyway, aren't you ?
[02:29] <lifeless> or were you thinking to just report 'there are N total grants'
[02:30] <wgrant> Pretty much that.
[02:30] <lifeless> being able to report 'there are N private objects' and 'Y security ones' might be useful itself.
[02:30] <wgrant> In order to be fast.
[02:30] <wgrant> Yes.
[02:30] <wgrant> True.
[02:30] <lifeless> so, O(artifacts) < O(restricted grants) if there is one restricted grant per artifact. But there may not be.
[02:30] <lifeless> so you could denorm to both.
[02:30] <wgrant> I had considered putting policy on artifact, but it makes reporting slightly awkward.
[02:30] <wgrant> But should be fine.
[02:30] <lifeless> Hard to predict.
[02:31] <wgrant> It really depends on the project.
[02:31] <wgrant> Security will usually have one restricted observer.
[02:31] <lifeless> the reporter?
[02:31] <wgrant> Artifacts in private projects probably won't have any.
[02:31] <wgrant> Right.
[02:32] <lifeless> ok, so plan: bug.accesspolicy nullable; task.policyuse nullable; possibly add an artifact.policyuse; possibly drop grant.policyuse
[02:32] <lifeless> last two subject to scaling testing by you
[02:32] <wgrant> It's going to be awkward to get something representative, but I'll do what I can.
[02:33] <lifeless> I feel your pain
[02:34] <lifeless> you could do what I did with bugsummary
[02:34] <wgrant> What's that?
[02:34] <lifeless> conversion script into temp tables
[02:34] <lifeless> ran it repeatedly on staging until happy
[02:34] <wgrant> FSVO "you"
[02:34] <lifeless> we should have a reasonable corpus already, particularly if you use private branches
[02:34] <lifeless> wgrant: s/staging/dogfood/
[02:34] <wgrant> Not very representative of performance, but maybe.
[02:34] <lifeless> if its fast there...
[02:34] <wgrant> Sometimes hundreds of times faster, sometimes infinitely slower :)
[02:35] <wgrant> Heh
[02:35] <wgrant> True
[02:35] <lifeless> its not the resources that make us make things fast, its the constraints :)
[02:36] <wgrant> Mmm, yeah, but there's so many bad queries and schema in LP that pretty much anything happening on mawson blows away the cache and kills everything.
[02:36] <wgrant> I will turn everything off :)
[02:38] <wgrant> lifeless: Ah, denorming like this also more easily allows us to have a master target for The OEM Conundrum.
[02:39] <wgrant> Since the policyuse is not calculated from each task's target separately.
[02:39] <lifeless> it seems to make that unneeded though
[02:39] <lifeless> I suspect doing a master target may be a big fail
[02:39] <lifeless> I wouldn't try it myself
[02:40] <lifeless> (because what if hwe aren't allowed to see all of oem's bugs)
[02:45] <wgrant> lifeless: Then they can have restricted observers on all the relevant artifacts, just like they do now.
[02:46] <lifeless> true
[02:46] <wgrant> Unless they're in the OEM bug supervisor team (which is subscribed automatically, and will be the observer in the new model), they are being subscribed manually.
[02:46] <wgrant> How would you do it without a master task? Which target would it use to look up the policy?
[02:47] <lifeless> would what
[02:47] <wgrant> it == the trigger to push Bug.access_policy through to BugTask.policy_use
[02:48] <wgrant> And to AccessPolicyArtifact.policy_use, or AccessPolicyGrant.policy_use.
[02:49] <lifeless> wgrant: I was assuming it would run a small function (task, policy enum) on each task.
[02:50] <wgrant> lifeless: One of those has to win, in order to be put into AccessPolicyArtifact or AccessPolicyGrant.
[02:51] <lifeless> ah
[02:51] <lifeless> ok, I see. Yes, go ahead.
[02:51] <wgrant> I figure most of the cases are bugs with $foo and hwe-$foo tasks, in which case $foo should win. Not sure how we'll define that, but that can be clarified next week.
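(The "one task wins" rule wgrant sketches -- among a bug's task targets, $foo beats hwe-$foo when picking which access policy applies -- could look like this. The prefix-based rule is a placeholder of my own; as the line above says, the real definition is deliberately left for next week.)

```python
# Hedged sketch of picking a single "master" target to derive the access
# policy from: plain targets beat their hwe- counterparts. The hwe- prefix
# test is a placeholder rule, not a settled definition.

def pick_policy_target(targets):
    plain = [t for t in targets if not t.startswith('hwe-')]
    # Fall back to any target if only hwe- ones exist; sort for determinism.
    return sorted(plain or targets)[0]


print(pick_policy_target(['hwe-foo', 'foo']))  # foo
print(pick_policy_target(['hwe-foo']))         # hwe-foo
```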
[02:59] <wgrant> fuuuu
[02:59] <wgrant>   File "/srv/lp-oops.canonical.com/cgi-bin/lpoops/src/oopstools/oops/models.py", line 370, in _get_oops_tuple
[03:00] <wgrant>     for (start, end, db_id, statement) in statements:
[03:00] <wgrant> ValueError: need more than 3 values to unpack
[03:00]  * wgrant enfixorates further.
[03:00] <lifeless> EWTFISGOINGON
[03:00] <lifeless> don't tell me bson is translating (foo, None) as (foo,)
[03:00] <lifeless> that would be fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
[03:00] <wgrant> It would indeed.
[03:01] <wgrant> I remember we ran into problems like this when I instrumented openid.
[03:01] <wgrant> I assume that's what this is.
[03:01] <wgrant> But I guess we'll find out in a sec.
[03:01] <wgrant> Except we've had 2000ish openid timeouts since we moved, and they haven't been problematic.
[03:02] <wgrant> [212, 212, 'sendmail'], [215, 215, 'sendmail'], [218, 218, 'sendmail']
[03:02] <wgrant> etc.
[03:02] <wgrant> sigh.
[03:03]  * wgrant greps.
[03:03] <lifeless> are they old oopses or something ?
[03:03] <wgrant> No, this is OOPS-11a3f88d3d139e90764487de402825c4
[03:04] <lifeless> can we make sure we have bugs on the producer side as well as whatever you do to fix the consumption ?
[03:04] <wgrant> The fourth item should be the detail, shouldn't it?
[03:04] <lifeless> yes
[03:04] <lifeless> the statement
[03:04] <lifeless> (start, end, db_id, statement)
[03:04] <wgrant> For this issue I'll file a producer bug, sure.
[03:04] <wgrant> For the user-agent one it probably didn't make sense.
[03:05] <lifeless> the missing value one needs a bug too
[03:05] <lifeless> user agent is crap data, I'm fine with sanitising that in oops-tools
[03:05] <wgrant> The missing value one is quite possibly the same thing.
[03:06] <lifeless> yes
[03:06] <wgrant> Hmm, putting such an array through bson works fine.
[03:06] <wgrant> with None and ''.
[03:06] <wgrant> So not quite that trivial and awful, sadly.
[03:09] <lifeless> headddeeeeesk
[03:09] <wgrant> ?
[03:12] <lifeless> just at the symptoms
[03:12] <wgrant> Oh.
[03:13] <wgrant> fwe9ofju*(OW#YU8947238942389234
[03:13] <wgrant> \
[03:13] <wgrant> >>> bson.loads(bson.dumps({'blah': (1, 2, object())}))
[03:13] <wgrant> {u'blah': [1, 2]}
[03:13] <wgrant> It's unrepresentable, so just omit it :D
[03:14] <wgrant> lifeless: reasons for hating BSON++
[03:14] <lifeless> wgrant: so we have unrepresentable objects in the dict ?
[03:15] <lifeless> wgrant: I doubt json does better.
[03:15] <lifeless> wgrant: or things we expect to stringify() ?
[03:15] <lifeless> wgrant: anyhow, if we're doing that (or analagous) we're breaking the oops contract *anyhow*
[03:16] <lifeless> >>> json.dumps({'blah': (1, 2, object())})
[03:16] <lifeless> TypeError: <object object at 0x1731080> is not JSON serializable
[03:17] <stub> Given this is error handling, a case can be made to sanitize the dictionary before serialization, converting unrepresentable stuff to repr(foo) and maybe logging a warning.
[03:17] <wgrant> lifeless: JSON crashes, yes.
[03:17] <wgrant> Which is better.
[03:17] <wgrant> It tells you that you're being a moron, rather than nodding and ignoring you.
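(stub's suggestion above -- sanitise the dict before serialisation, converting unrepresentable values to repr() -- has a one-line JSON form: json.dumps takes a default= hook that is only consulted for values it can't encode natively. A minimal sketch:)

```python
import json

# Sanitise-on-serialise: unrepresentable values become their repr() string
# instead of being silently dropped (bson) or crashing the dump (plain json).

def dump_oops(report):
    # default= is only called for values json can't encode natively
    return json.dumps(report, default=repr)


report = {'req_vars': [('HTTP_USER_AGENT', 'Mozilla'), ('blob', object())]}
print(dump_oops(report))  # the object() survives as "<object object at 0x...>"
```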
[03:23] <wgrant> Ah.
[03:23] <wgrant> In the sendmail case it's an email.header.Header.
[03:23] <wgrant> Which stringifies fine.
[03:24] <wgrant> But doesn't JSON.
[03:24] <wgrant> HMmm.
[03:24] <wgrant> Oh, but we've probably never used it with JSON.
[03:24] <wgrant> Only RFC822.
[03:24] <lifeless> where it worked by chance
[03:24] <wgrant> Yep.
[03:24] <lifeless> its against the oops contract
[03:25] <wgrant> (Pdb) bson.loads(bson.dumps({'foo': object()}))
[03:25] <wgrant> {}
[03:25] <wgrant> yay
[03:27] <wgrant> I guess that explains the field.blob thing too.
[03:27] <wgrant> It'll be an uploaded file thingy.
[03:27] <lifeless> yeah
[03:27] <lifeless> reasonable to not include it :)
[03:28] <wgrant> Yeah, but not to just exclude it.
[03:28] <wgrant> Particularly since once it's a dict the key will just disappear.
[03:28] <lifeless> yes
[03:28] <lifeless> worth a bug filed upstream
[03:28] <stub> Does the bson library provide hooks for serialization of custom objects?
[03:29] <wgrant> Oh god bson uses tabs.
[03:29] <wgrant> I guess the problem is encode_value()
[03:30] <wgrant> Which has a huge long if/elif, and no catch-all.
[03:31] <wgrant> lifeless: What happens if the OOPS serialiser raises an exception?
[03:32] <lifeless> wgrant: it blows up the stack
[03:32] <lifeless> wgrant: there isn't [yet] a fallback mechanism.
[03:32] <wgrant> Probably ending up in the appserver error log.
[03:32] <wgrant> So we can't very well just make this blow up.
[03:32] <lifeless> wgrant: core oops + oops-twisted is a place to put one, and it probably makes sense to do so.
[03:33] <lifeless> e.g. generate an oops saying 'could not generate this oops' with the dict keys and values coerced, nonrecursive, to strings, with try:excepts everywhere.
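(The fallback lifeless describes -- a degraded "could not generate this oops" report with keys and values coerced, non-recursively, to strings, with try/excepts everywhere -- might be sketched like this. The function name and placeholder strings are mine.)

```python
# Sketch of a last-ditch OOPS coercion: every key and value is forced to a
# string, and the coercion itself is wrapped so it can never raise.

def coerce_report(report):
    safe = {}
    for key, value in report.items():
        try:
            key = str(key)
        except Exception:
            key = '<unprintable key>'
        try:
            value = str(value)
        except Exception:
            value = '<unprintable value>'
        safe[key] = value
    return safe


class Evil:
    def __str__(self):
        raise RuntimeError('nope')  # simulates a value that won't stringify


report = {'id': 'OOPS-1234', 'duration': 1.5, 'blob': Evil()}
print(coerce_report(report))
# {'id': 'OOPS-1234', 'duration': '1.5', 'blob': '<unprintable value>'}
```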
[03:35] <wgrant> So, should I file LP bugs complaining about the sendmail and field.blob cases?
[03:36] <lifeless> yes
[03:37] <lifeless> field.blob might be oops-wsgi
[03:37] <lifeless> dunno if it was a loggerhead or appserver oops
[03:38] <wgrant> Don't see why loggerhead would have Zope field names and uploaded blobs :)
[04:00] <lifeless> wallyworld_: yo
[04:00] <lifeless> wallyworld_: OCR time :) - https://code.launchpad.net/~lifeless/python-oops-wsgi/0.0.6/+merge/81230
[04:00]  * wallyworld_ looks
[04:01]  * wallyworld_ wishes txlongpoll was done so he didn't have to wait for the mp diff
[04:03] <lifeless> you'd still be waiting
[04:03] <lifeless> you just wouldn't be refreshing
[04:18] <wallyworld_> lifeless: at the last epic, the longpoll demo session made a point of showing that the diff generation was instant
[04:19] <wgrant> wallyworld_: The page updates instantly once the diff is generated.
[04:19] <wgrant> wallyworld_: The diff generation still takes a while, and is even still on a minutely cronjob.
[04:19] <wallyworld_> that's a shame :-(
[04:19] <wallyworld_> i hate cron jobs for this sort of stuff
[04:20] <wallyworld_> should be event driven
[04:20] <wgrant> Eventually we'll have an MQ-based jobrunner, I assume.
[04:20] <wallyworld_> lifeless: dumb question given i'm not familiar with the code i'm looking at - does it matter that the default tracker will call ensure_start_response twice?
[04:41] <lifeless> wallyworld_: thats whats passed into it
[04:41] <lifeless> wallyworld_: no it doesn't matter
[04:42] <lifeless> wallyworld_: the code is structured to let that change without folk that write custom trackers having to know it changes
[04:42] <lifeless> wallyworld_: on_first_bytes is called once only, and on_finish once only
[04:43] <wallyworld_> lifeless: cool. it all seems ok but lacking any familiarity with the package, i'm not sure if i've missed anything
[04:44] <lifeless> thats fine :)
[04:44] <lifeless> unless you want to become a wsgi expert - you can cross reference with PEP 3333
[04:46] <wallyworld_> to save stalling i'll +1 it :-)

[04:50] <stub> wgrant: https://code.launchpad.net/~wgrant/launchpad/observer-db/+merge/81104 is flagged as in progress, but the db patch seems done (including the partial index on AccessPolicyGrant which I thought was staying as it was). Want me to tick it yet?
[04:52] <wgrant> stub: lifeless revealed some additional requirements last night. And convinced me to enum it.
[04:52] <stub> ok :-)
[04:54] <stub> Anyone know if there is a bug open for the slow bugsubscription,teamparticipation join we were discussing the other day?
[04:54] <lifeless> more strictly, OEM revealed ...
[04:55] <wgrant> lifeless: Well, I assume you found out earlier than you let on :)
[04:55] <lifeless> yesterday early aco
[04:55] <lifeless> *avo*
[04:55] <wgrant> Oh.
[04:56] <lifeless> ~EOD in florida
[04:56] <stub> Where did the 'deployable revisions' reports go? My bookmarks are no longer valid
[04:57] <StevenK> stub: I removed the private redirect yesterday
[04:58] <StevenK> stub: Didn't you see my mail about the deployment reports becoming public, and that I would remove the redirect in one month?
[04:58] <stub> StevenK: I probably saw it and ignored it :-)
[04:58] <lifeless> StevenK: in such cases a follow up at the time can be very useful
[04:58] <StevenK> Bad stub, no cookie.
[04:59] <stub> I can just about cope with tomorrow. One month's time? Ha!
[04:59] <StevenK> stub: lpqateam.canonical.com/qa-reports
[04:59] <spm> [15:58:40] <stub> StevenK: I probably saw it and it was washed away in the flood waters, while I desperately tried to recover it; but alas; twas not to be <== fixed
[05:00] <StevenK> spm: Now do it in haiku?
[05:01] <spm> I wouldn't like to keep stealing the limelight like that
[05:01] <spm> I'm a shy retiring type doncha know
[05:01] <lifeless> terrible liar
[05:01] <StevenK> Who are you, and what have you done with the real spm?
[05:02] <lifeless> spm 1, StevenK 0
[05:02] <StevenK> Oh, pft.
[05:10] <poolie> i'm ready to add txfixtures as a dependency
[05:10] <poolie> are there any docs for this?
[05:10] <poolie> also, i presume it will be an egg rather than a branch?
[05:34] <lifeless> so, are intel 320 class SSD's good ?
[05:36] <poolie> generally reckoned to be
[05:36] <StevenK> This X201 has a 160, I think, and it's excellent.
[05:40] <lifeless> I've hit the wall on my laptop
[05:40] <lifeless> 130GB just isn't enough
[05:41] <StevenK> I only have 160GB, but I also have a desktop and a fileserver
[05:41] <lifeless> so do I but the pain of rotating stuff on and off had gotten to me
[05:42] <StevenK> Sigh. Forgot to unmount NFS partition before moving networks.
[05:46] <StevenK> wgrant: Can I get from ISPPH to ISourcePackage easily?
[05:47] <StevenK> lifeless: What do you need to rotate on and off the laptop?
[05:48] <wgrant> StevenK: Maybe.
[05:49] <StevenK> wgrant: In a test/harness environment, so feel free to suggest evils like rSP()
[05:51] <wgrant> StevenK: SPPH.meta_sourcepackage
[05:51] <poolie> lifeless, oh they come up to 300gb now? or more?
[05:53] <poolie> do i need a review to add the tar to the download-cache, or do i just add it?
[05:53] <wgrant> Just add it.
[05:54] <poolie> and then propose an addition to versions.cfg
[06:03] <StevenK> wgrant: I thought that returned a DSP?
[06:03] <wgrant> StevenK: No.
[06:03] <wgrant> You might be thinking of SP.distribution_sourcepackage
[06:14] <StevenK> Bleh. Still doesn't make SPPH queries locally.
[06:16] <wgrant> You're probably an admin.
[06:18] <lifeless> poolie: yeah 360GB
[06:19] <lifeless> poolie: faster too if you have 6Gbps SATA
[06:20] <StevenK> wgrant: No, I'm logged in as a user.
[06:21] <StevenK> I already thought of that too.
[06:21] <wgrant> It's not the archive owner?
[06:22] <wgrant> Or the distribution or distroseries owner or driver?
[06:23] <StevenK> Logged in as myself, as a member of ubuntu-team
[06:23] <wgrant> That would do it.
[06:23] <wgrant> Don't be a member of ubuntu-team.
[06:26] <StevenK> Nice. ?batch=80 == 1500 queries
[06:30]  * StevenK tries to work out how to make IBugNomination.canApprove() nicer.
[06:39] <stub> lifeless: The Intel drives are what people are currently recommending for databases - at a decent price point, reliable but a little slower than some less reliable drives
[06:41] <lifeless> stub: the 320 series are laptop form factor
[06:41] <lifeless> stub: there are larger ones around I believe
[06:42] <stub> lifeless: Intel 320's is what I'm seeing. Not sure if there are server versions too.
[06:43] <stub> lifeless: http://postgresql.1045698.n5.nabble.com/Recommendations-for-SSDs-in-production-td4958689.html for a thread I saw today
[06:43] <StevenK> Bleh. Brain fried.
[06:44] <lifeless> hah up to 600GB now
[06:44] <lifeless> http://ark.intel.com/products/56569/Intel-SSD-320-Series-%28600GB-2_5in-SATA-3Gbs-25nm-MLC%29
[06:44] <poolie> do i have to do something special to get the egg hooked up by buildout
[06:44] <poolie> after putting it in the download cache and in versions.cfg
[06:44] <stub> Apparently there is a review of the Intel SSDs on Tom's Hardware today
[06:45] <lifeless> the 710
[06:45] <lifeless> http://www.tomshardware.com/reviews/ssd-710-enterprise-x25-e,3038.html
[06:52] <poolie> when i try to import it (and do nothing else with it)
[06:52] <lifeless> which looks to be that product http://ark.intel.com/products/56585/Intel-SSD-710-Series-%28300GB-2_5in-SATA-3Gbs-25nm-MLC%29
[06:52] <poolie> i get an importerror kind of mixed together with a zopexmlconfigurationerror
[06:57] <wallyworld_> poolie: did you do a make?
[06:57] <poolie> yeah
[06:57] <wallyworld_> you need to do that
[06:57] <poolie> i hadn't mentioned it in setup.py, perhaps that was it
[07:09] <poolie> yep got it
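For readers following along, the wiring poolie converges on here amounts to three steps: drop the sdist tarball into the download-cache, pin the version in versions.cfg, and declare the dependency in setup.py (the version number below is hypothetical, purely for illustration):

```ini
; versions.cfg — pin the new egg so buildout picks up the tarball
; added to the download-cache (version shown is hypothetical)
[versions]
txfixtures = 0.1.2
```

The missing piece that produced poolie's ImportError-wrapped-in-ZopeXMLConfigurationError was the setup.py entry: add the package to install_requires there, then run make so buildout rebuilds the environment.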
[07:14] <poolie> avagoodweekend wallyworld_
[07:14] <wallyworld_> poolie: still here. just don't want any last minute reviews. i'm fighting doc tests :-(
[07:26] <poolie> what's the oldest python version lp deploys on? 2.4?
[07:29] <lifeless> 2.6
[07:31] <poolie> great thanks
[07:31] <poolie> so i can use relative imports to help the transition of buildd code?
[07:35] <lifeless> maybe :) we had some trouble there recently
[07:37] <poolie> i see https://bugs.launchpad.net/bugs/825485
[07:37] <_mup_> Bug #825485: canonical.launchpad.scripts.oops imports broken on natty & oneiric <qa-untestable> <Launchpad itself:Fix Released by abentley> < https://launchpad.net/bugs/825485 >
[07:38] <poolie> so i guess that's a no
[07:38] <poolie> or, it's dangerous
[07:39] <poolie> well, it's fine without
[07:41] <wgrant> poolie, lifeless: LP deploys on 2.6. lp-buildd has no such guarantee.
[07:42] <poolie> right and on hardy it may be earlier
[07:42] <wgrant> Indeed, it's 2.5.
[07:42] <wgrant> And lp-buildd is largely deployed on Hardy.
[07:42] <poolie> it's only the tests
[07:42] <poolie> but, it would be nice to run the tests there
[07:45] <lifeless> wgrant: oh? we missed a platform
[07:45] <lifeless> wgrant: is there a bug ?
[07:50] <wgrant> lifeless: Huh?
[07:51] <wgrant> lp-buildd has always had entirely unrelated deployment constraints.
[07:51] <wgrant> Because it deploys on crap like hppa.
[07:51] <poolie> https://code.launchpad.net/~mbp/launchpad/use-txfixtures/+merge/81243
[07:51] <poolie> one step closer to deploying it separately
[07:51] <wgrant> And has to build for 5 year old distroseries.
[07:51] <wgrant> So can't run recent kernels.
[07:52] <lifeless> wgrant: I thought it ran outside the build environment
[07:52] <lifeless> wgrant: hppa I can understand :(
[07:52] <wgrant> It runs outside the chroot.
[07:52] <lifeless> right, so why does that constrain us
[07:54] <wgrant> There were issues with building dapper on recent kernels, and we have to support hppa for another 18 months, which sticks them back on hardy.
[07:54] <wgrant> lpia is similar.
[07:54] <wgrant> Because those archs don't exist in >= lucid.
[07:54] <wgrant> Our powerpc buildds also don't really like to boot on most releases.
[07:54] <wgrant> sparc as well.
[07:54] <wgrant> ia64/sparc can never come past lucid, hppa/lpia past hardy.
[07:55] <wgrant> But we need to support ia64/sparc for another 3.5 years.
[07:56] <poolie> ok
[07:57] <poolie> i fixed a 2.5ism in txfixtures
[07:57] <poolie> that's it for me for today
[08:40] <adeuring> good morning
[11:25] <jelmer> hi adeuring, do you have time for a quick review?
[11:25] <adeuring> jelmer: sure
[11:26] <jelmer> adeuring: https://code.launchpad.net/~jelmer/launchpad/newer-bzr-git/+merge/81259
[11:26] <jelmer> it's a single revision, the diff is also here: http://bazaar.launchpad.net/~jelmer/launchpad/newer-bzr-git/revision/14251
[11:36] <adeuring> jelmer: r=me
[11:59] <rvba> adeuring: Hi! Could you please have a look at this tiny optimization branch? https://code.launchpad.net/~rvb/launchpad/branches-timeout-bug-827935-3/+merge/81262
[12:11] <bigjools> anyone else seeing MP preview diffs that just say "Empty" ?
[12:11] <StevenK> bigjools: The MPJ ran after the branch was merged.
[12:12] <nigelb> ah, if it was merged it can't find the diff?
[12:12] <StevenK> It compares the branch against tip. If the tip has the contents merged, the diff is nil.
[12:12] <nigelb> AHHH.
[12:13] <bigjools> special
[12:13] <StevenK> Just slow.
[12:13] <bigjools> thanks
[12:13] <nigelb> Actually, that means fast.
[12:13] <bigjools> it means I pushed and lp-landed quickly
[12:13] <StevenK> It may mean the MPJ ran slow.
[12:13] <nigelb> bigjools managed to beat the cron.
[12:13] <bigjools> rabbit is coming!
[12:13] <StevenK> MPJ is horrible and needs to be beaten with a stick.
[12:14] <nigelb> rabbit is the holder of the stick I hear.
[12:14] <StevenK> Possibly over rabbit.
[12:15] <nigelb> Wait, I thought this bit was done in qastaging with rabbit!
[12:15] <nigelb> I meant to test!
[12:19] <bigjools> nigelb: no, I am talking about starting jobs with rabbit
[12:20] <bigjools> different to the browser doing long-polling
[12:20] <nigelb> bigjools: as opposed to a cron?
[12:20] <bigjools> yes
[12:20] <nigelb> Yeah, that'd be nice :)
[12:20] <bigjools> we turn cronscripts into daemons
[12:20] <nigelb> INSTANT GRATIFICATION
[12:20] <nigelb> or at least more instant than currently.
[12:21] <StevenK> Then we can remove the job system entirely.
[12:21] <StevenK> AND IT WILL BE GLORIOUS
[12:21] <nigelb> Some of the stuff discussed in the launchpad session at UDS was nice as well. Like the brainstorm for the new profile page.
[12:22] <bigjools> StevenK: I doubt we can remove the job system for a while.  Although I'd love to replace it with celery etc.
[12:22] <StevenK> :-(
[12:22] <StevenK> The job system makes me very sad.
[12:22] <bigjools> in what way?
[12:23] <StevenK> The horrid duplication of effort
[12:23] <bigjools> I only have one problem
[12:23] <bigjools> it's NIH
[12:23] <StevenK> bigjools: My main issue is the sixty-four million interfaces and model classes you have to add for a new job type and they're all subtly different
[12:23] <StevenK> And require different cron jobs
[12:24] <StevenK>  /wrists
[12:24] <bigjools> yeah it's complex and over-engineered
[12:24] <StevenK> bigjools: Preaching. Choir.
[12:24] <bigjools> and on cue my random music selection starts playing Rage Against the Machine
[12:25] <nigelb> bigjools: Excellent timing ;)
[12:28] <bigjools> StevenK: I also get frustrated at the myriad ways to start a job off
[12:38] <jtv> Anyone mind if I run the archive publisher on dogfood?
[12:38] <jtv> bigjools, StevenK, wgrant?
[12:39] <wgrant> Not I.
[12:39] <nigelb> WCPGW.
[12:40] <StevenK> nigelb: It's dogfood. Don't ask that.
[12:40] <nigelb> heh
[12:41] <bigjools> jtv: fine
[12:42] <bigjools> btw you can cancel virtual builds over the API now. You'll also be able to do it in the UI at the next rollout
[12:44]  * StevenK QA'd it this morning
[12:44] <bigjools> StevenK: it should not have been marked released though, because you qa-ed the wrong thing :)
[12:44] <StevenK> I QA'd the UI only
[12:44] <bigjools> ah ok - but it's not released
[12:45] <bigjools> we need a permanent QA team in AsiaPac :)
[12:46] <bigjools> right, food time, ttyl
[12:47] <wgrant> Yay, ddebs built on prod.
[12:47] <wgrant> Finally.
[12:48] <bac> bigjools: would you revisit https://code.launchpad.net/~mbp/launchpad/800295-buildd-split/+merge/81111 and perhaps change your vote to 'approve' if you agree or make another comment?  horse is out of the barn.
[12:52] <adeuring> rvba: sure, I'll look (sorry for the delay -- had lunch)
[12:53] <rvba> adeuring: thanks!
[13:10] <adeuring> rvba: the changes look good, but a suggestion nevertheless: What about making is_branch_count_zero a @cachedproperty? After all, it's accessed more than once and it calls a method. (I have no idea though how expensive the method is...)
[13:12] <rvba> adeuring: well, the method is definitely not expensive, because self.branches().visible_branches_for_view and self.branch_count are @cachedproperty.
[13:12] <adeuring> rvba: ah, ok, so let's leave the @property
[13:12] <adeuring> rvba: another question: If branch_count needs to be evaluated in is_branch_count_zero, do you expect the query in these cases to be faster?
[13:13] <adeuring> ...if not, would it make sense to call resultset.any() instead?
[13:14] <maxb> Hm
[13:14] <maxb> https://launchpad.net/builders --> Forbidden
[13:14] <rvba> adeuring: I'm not sure I understand your suggestion, this branch does not bring any improvement to the cases where branch_count needs to be evaluated.
[13:14] <maxb> I suppose a private object has leaked onto the page?
[13:15] <wgrant> maxb: yeah, that happens when a recipe is building into a private archive
[13:15] <rvba> adeuring: When the batch is empty, that is.  But hopefully this won't happen in many cases, because when the number of branches is huge, chances are that you have all the statuses represented and the only filtering possible is by status.
[13:16] <wgrant> maxb: There's a bug about it.
[13:17] <adeuring> rvba: right; I just wondered if it would be hard to add that case too. But that's probably my bad habit of squeezing too many changes into one branch ;)
[13:17] <adeuring> rvba: r=me then
[13:17] <nigelb> g22
[13:18] <rvba> adeuring: Thank you.  I know it's strange but for these requests .is_empty() is almost as expensive as .count().
[13:19] <deryck> abentley, adeuring -- I'm on weekly checkpoint call for feature so will need to hold standup for a bit.
[13:19] <rvba> I know that's utterly bizarre but that's what the profiling I've done says.
[13:19] <deryck> abentley, adeuring -- I can ping when done.
[13:19] <abentley> deryck: Okay.
[13:20] <adeuring> rvba: well, SQL queries can be "structurally slow" ;)
[13:20] <adeuring> deryck: ok
[13:31] <jelmer> thanks adeuring, bac
[13:47] <danhg> #ubuntu-uds
[13:49] <deryck> abentley, adeuring -- let's do standup at top of hour.  cool?
[13:49] <adeuring> deryck: ok
[13:49] <abentley> deryck: sure.
[14:23] <rvba> adeuring: It's not a review but I'd like your help for something if you have 2 minutes…
[14:23] <adeuring> rvba: sure
[14:24] <rvba> adeuring: I'd like you to hit https://code.qastaging.launchpad.net/~ubuntu-branches 5-6 times and give me the oops links
[14:24] <rvba> :)
[14:24] <rvba> the reason for this:
[14:24] <rvba> I have landed 2 branches to try some improvements on this page. All is protected by a FF that is on only for me.
[14:25] <adeuring> ok
[14:25] <rvba> And since this page is heavily dependent on TeamParticipation I need someone in the same teams.
[14:25] <rvba> to compare how my branches improve things
[14:27] <adeuring> rvba: https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-a22826044c7c82ef553e7d363ecb35d8 https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-06fabadb8b802665135e2f048d72e8ba https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-e13505381303340a3d138501c0f14c3a https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-51525de16620edd7a8c6c5ecfa64e39b https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-e7b00379ecad43e9a36798e84a87fb7a https://lp-oops.c
[14:28] <rvba> adeuring: thank you very much.
[14:28] <adeuring> welcome :)
[14:41] <rvba> adeuring: sorry to bother you again… but did one of the requests succeed?  I'd like to make sure the FF has been properly set…
[14:41] <adeuring> rvba: no, all requests timed out
[14:42] <rvba> adeuring: ok, thanks.
[14:44] <gmb> danhg: You've marked bug 240067 as New... It's going to pop up when we come to process the to-Triage list, so I'm just wondering if there's any reason we should leave it as New.
[14:44] <_mup_> Bug #240067: Launchpad projects need wikis <feature> <lp-foundations> <ubuntu-platform> <Launchpad itself:New> < https://launchpad.net/bugs/240067 >
[14:48] <jelmer> danhg: did you really mean to change the status of bug 240067 back to new?
[14:48] <_mup_> Bug #240067: Launchpad projects need wikis <feature> <lp-foundations> <ubuntu-platform> <Launchpad itself:Triaged> < https://launchpad.net/bugs/240067 >
[14:50] <bigjools> jelmer: you scared him off
[15:37] <rvba> adeuring: could you have a look at this tiny MP? https://code.launchpad.net/~rvb/launchpad/branches-timeout-bug-827935-4/+merge/81290
[15:37] <adeuring> rvba: sure
[15:37] <rvba> thx
[15:43] <adeuring> rvba: looks good; I think always showing the link is fine. But again a question, just out of curiosity: Would it be possible to use a UNION ALL in the query? That might be faster than the current query
[15:44] <adeuring> rvba: r=me, btw
[15:44] <rvba> adeuring: I tried a bunch of things with this query: the problem is not the OR in itself.
[15:45] <adeuring> ok
[15:45] <rvba> The problem is simply Branch.transitively_private = FALSE fetches 300k rows (in the bad case I'm working on)
[15:45] <rvba> So we get a seq scan which is the right thing to do.
[15:46] <rvba> Because 1/3 of the table is returned.
[15:46] <rvba> But this is painfully slow.
[15:46] <rvba> All of this simply to choose whether or not to display a link…
[15:46] <rvba> You see what I mean :)
[15:46] <adeuring> rvba: right...
[15:47] <rvba> With the branch that you reviewed earlier, I expect a 50% gain.
[15:47] <adeuring> rvba: sounds good!
[15:48] <rvba> Indeed, but this is so sensitive that I'll say "victory" when I see the numbers ;)
[15:48] <rvba> adeuring: thanks for the review.
[15:48] <adeuring> rvba: yeah, I know the feeling "what else might come next" with timeout issues ;)
[15:48] <rvba> adeuring: exactly
[15:49] <adeuring> rvba: but work on timeout bugs can be fun nevertheless
[15:49] <rvba> adeuring: It's rather interesting I think.  It feels a little bit like open heart surgery.
[15:51] <adeuring> rvba: interesting comparison :) To me it felt sometimes like tinkering with a motorbike engine, when I was a youngster ;)
[15:53] <rvba> adeuring: ;)
[15:56] <bigjools> jml: hello. I am just looking into the escalated debtags bug and wondered how much you've looked at that so far?
[16:46] <gary_poster> danhg, are you around?  If so, may I assign this commercial feedback ticket to you?  https://support.one.ubuntu.com/Ticket/Display.html?id=7272
[17:05] <nigelb> Is there a way for me to tell launchpadlib to use my local launchpad?
[17:05] <nigelb> (besides hacking hosts file)
[17:11] <gary_poster> yes, nigelb.  /me tries to remember how
[17:11] <nigelb> yay!
[17:13] <gary_poster> nigelb, use the service_root argument, and specify 'https://api.launchpad.dev/'
[17:14] <gary_poster> or launchpadlib.uris.DEV_SERVICE_ROOT
[17:14] <nigelb> gary_poster: Aha, thanks!
[17:14] <gary_poster> welcome
[17:14] <nigelb> I want to make it a setting in summit so I can move it out to local lp for testing :)
[17:15] <deryck> abentley, adeuring -- I think by Monday or Tuesday, whenever we get the current branches all landed and deployed, we should turn on the new bug lists for ourselves.
[17:15] <adeuring> deryck: yeah
[17:16] <abentley> deryck: that includes the beta banner that lets us turn it off, right?
[17:16] <deryck> should, yes.  depending on how well things go for adeuring.
[17:16] <nigelb> buglists is ready?
[17:16] <deryck> well, actually, I don't think adeuring will have this off switch in his first pass.
[17:16] <deryck> nigelb, getting very close.
[17:16] <nigelb> deryck: Exciting! :)
[17:17] <deryck> nigelb, we have two weeks left of polish and then we will turn them on for beta testers.
[17:17] <adeuring> I'm still dealing with a test failure...
[17:17] <nigelb> deryck: Yay! Looking forward :)
[17:18] <deryck> adeuring, I looked at the MP for the python side today.  I imagine there is some test fallout :)
[17:18] <deryck> adeuring, but I like the approach you've taken.
[17:18] <adeuring> deryck: thanks!
[17:18] <deryck> adeuring, I got it intellectually, but seeing it in code made it real for me.  And I like it.
[17:20] <deryck> bac, I have a js branch for review if you have the time.
[17:54]  * bigjools outta here, have a nice weekend all
[18:12] <jml> the answer to bigjools's question is "not at all"
[18:44] <abentley> bac: could you please review https://code.launchpad.net/~abentley/launchpad/new-bug-fields/+merge/81310 ?
[18:44] <bac> deryck: done
[18:44] <bac> abentley: sure
[18:48] <deryck> bac, thanks!  I've struggled with the naming too.  Configurer was even an option at one point. ;)
[18:48] <deryck> I thought "settings" might be more obvious, too.
[18:55]  * deryck has to leave now.
[19:13] <benji> does anyone know how to get a branch scan job to run in a dev environment?
[19:14] <nigelb> I did that once.
[19:15] <abentley> benji: make scan_branches
[19:15] <nigelb> aha.
[19:16] <abentley> benji: or just run cronscripts/scan_branches.py
[19:16] <benji> abentley: right I know how to get the jobs to run, but it is creating the jobs to begin with that I don't see a sane way of doing
[19:17] <abentley> benji: scheduling the jobs is done by codehosting.  Locking the branch (e.g. by pushing to it) will schedule a scan.
[19:19] <benji> ok, let me see if I can do something with that (I have a branch that I want to get scanned, but it's foreign to the dev environment)
[19:20] <abentley> benji: I don't understand how you could scan a branch that was not part of the dev environment.
[19:22] <benji> abentley: I can't ;)  I want to get it into the environment so I can get a scan job created.
[19:23] <abentley> benji: Okay, you need to push it, which will create a scan job, which will run when you do make scan_branches.
[19:48] <benji> will someone with code import fu take a look at this question?  https://answers.launchpad.net/launchpad/+question/176710
[19:48] <benji> I /think/ the answer is "no" but want to be sure.
[19:50] <abentley> benji: the answer is "no", because you cannot move a branch from one project to another.
[19:51] <benji> thanks abentley!  I'll reply thusly.
[19:52] <jelmer> other than some extra resource usage on lp, deleting and creating the branch again should have the same effect as moving
[20:55] <abentley> bac: like this? http://pastebin.ubuntu.com/728537/
[20:56] <bac> abentley: sure.  i just thought the test *after* it had been recalculated looked funny.  i'm sure it was correct but i had to think about it.
[20:58] <abentley> bac: Yes, it had to be correct (because x / 60 != 0 if x > 60), but it was confusing.
[22:24] <wgrant> benji, abentley: You *can* move a branch from one project to another, but the UI was removed because it was too hard to make it work for package branches as well. It's still perfectly doable through the API.
[22:29] <jelmer> wgrant: oh, interesting
[22:30] <jelmer> wgrant: moving between package branches and product branches would actually be a very nice use of such a UI
[22:30] <wgrant> jelmer: Yes.
[22:35] <wgrant> jelmer: Although I somewhat disagree with how package branches were implemented.
[22:35] <wgrant> jelmer: Really they should probably be special distro-related branches on the product.
[22:35] <wgrant> IMO
[22:36] <wgrant> Rather than splitting all the branches for a codebase across $lots of places.
[22:37] <jelmer> wgrant: agreed
[22:38] <jelmer> wgrant: that still seems possible though, it mainly seems like a UI issue to me
[22:38] <wgrant> It's going to be particularly awkward for derived distros, because they are going to need cross-pillar stacking or hundreds of gigabytes of extra storage.
[22:39] <jelmer> we support multiple levels of stacking fine afaik
[22:39] <wgrant> We do, but we rarely do it cross-pillar.
[22:40] <wgrant> Because that crosses privilege boundaries, which is very risky.
[22:41] <jelmer> Hmm, I'm not sure I understand what you mean by cross-pillar in this context.. perhaps my understanding of that term is incorrect
[22:41] <jelmer> do you mean between the different derived distros? Isn't there an implicit trust of the parent distro?
[22:41] <wgrant> Well, say Linaro wants to use package branches.
[22:41] <wgrant> Right.
[22:41] <wgrant> I guess.