[00:02] Ah ha. lifeless's comments are a little misleading. [00:03] BugNominationView._getListingItem isn't it at all, it's BugListingBatchNavigator._getListingItem [00:05] StevenK, hey [00:05] in pursuit of splitting out the buildd tree [00:05] i'm thinking of moving the tests that rely on zopelessdblayer etc out of the buildd tree into lp [00:05] so they will rely on a buildd but not be part of it [00:06] poolie: And making lp-buildd be pulled in via sourcecode or the download-cache? [00:06] yep [00:06] https://bugs.launchpad.net/launchpad/+bug/800295 fwiw [00:06] <_mup_> Bug #800295: buildd is unnecessarily coupled to the launchpad tree through tac file and readyservice < https://launchpad.net/bugs/800295 > [00:06] welcome back, bot [00:10] Actually, it's not. It's NominatedBugListingBatchNavigator. [00:18] StevenK: Do you know if overlay distros work yet? [00:18] I think they probably function. [00:19] Enough for someone to kill archivepurpose 4 with a few hours of work. [00:19] Ha. Maybe they do. [00:20] wgrant: Should I put up a NDT for r14231? [00:20] Since I've got Julian convinced that we should kill archivepurpose 7... [00:20] StevenK: Was hoping you'd ask :) [00:20] And 4 is pretty clearly actually an overlay distro. [00:20] What's 7 again? [00:20] 7 is DEBUG [00:20] I successfully convinced him that it's a terrible idea. [00:20] And 4 is PARTNER [00:20] Yes. [00:20] They both need to die. [00:20] Yes. [00:21] And I think 4 can die pretty cheaply by being turned into https://launchpad.net/canonical-partner [00:21] And that lets us remove a tonnnnnne of awfulness. [00:21] all_distro_archives can fuck off. [00:21] Needs IS involvment to get it onto archive.c.c. [00:21] But certainly doable [00:21] We can remove all the Distribution and DistroSeries publication-related methods. [00:22] IT WILL BE BEAUTIFUL [00:22] Yes, it requires some IS involvement. [00:22] But I think it's very doable. [00:22] Although I bet it's signed with the Ubuntu archive key. [00:22] Just to make things difficult. [00:22] Oh, rofl [00:22] * StevenK starts putting together a NDT [00:24] StevenK: Have you heard about my new solution to kill off 7? [00:24] I'm not sure if I've run it past you. [00:26] Need to run it past Colin and Adam and the like too, but I think it's sensible. [00:27] wgrant: Only in vague terms, I think. [00:28] StevenK: We publish them in separate $COMPONENT/debug indices, like $COMPONENT/debian-installer, and handle the split like we do with security and ports. [00:28] Eventually when Soyuz gets archive views, we can move the split to there... but until then we might as well utilise the existing mechanism. [00:29] Doesn't that make archive.u.c explode in size? [00:30] Remember that there's already the secret magic splitter that throws ports binaries and indices elsewhere. [00:30] And keeps security on security.u.c. [00:30] We repurpose that to create a fourth archive -- ddebs.ubuntu.com [00:30] Because this is really the same thing as the ports split. [00:31] Oh, right. [00:31] We need to keep some binaries off the mirrors. [00:31] It's divided by component instead of arch, but it's really the same thing. [00:32] With the appropriate magic-mirror hacking, that sounds like a good plan. [00:32] wgrant: And ddebs.u.c already exists on macquarie [00:32] Yes. The retracers will probably need to look at both for a while. [00:33] For values of "a while" that probably number in the several years. [00:33] But details, details. 
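For context on the scheme wgrant describes above: the debug indices would sit beside the existing per-component debian-installer split in the dists tree, with the ddebs themselves carried off to ddebs.ubuntu.com the way the existing splitter already handles ports and security. A rough picture of the proposed layout (illustrative only, extrapolated from the description above; suite name is a placeholder):

```
dists/oneiric/main/binary-i386/Packages                     existing
dists/oneiric/main/debian-installer/binary-i386/Packages    existing d-i split
dists/oneiric/main/debug/binary-i386/Packages               proposed ddeb index
```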
[00:35] apt and apt-ftparchive seem happy with custom slashed components. [00:35] Yay [00:35] It is practical. [00:35] And the Soyuz side is dead-easy. [00:37] So, we now have feasible solutions to both partner and debug that probably take less than two days each. [00:37] One is already escalated... [00:43] wgrant: Isn't there a better method to call to get upload perms for a user than distribution.main_archive.verifyUpload(person, name, component, distroseries) ? [00:43] Don't think so. [00:44] jml cleaned it up a couple of years back. [00:44] It would be nice if it was set-based, but it's not. [00:44] Since I think the lookup of the component is grabbing SPPH, and then verifyUpload does the same thing. [00:44] It's grabbing it to check the component? [00:44] It could be cleaned up further, but it's a deep hole. [00:45] component = self.distroseries.getSourcePackage( [00:45] name).latest_published_component [00:45] if distribution.main_archive.verifyUpload( [00:45] person, name, component, self.distroseries) is None: [00:45] return True [00:45] One I plan to follow eventually, but hopefully after abolishing archivepurpose (4, 7). [00:45] StevenK: You could possibly check how branches do it.; [00:46] launchpad.Edit for IBranch needs to do the same thing as IBugNomination.canApprove. [00:49] poolie: hi. i claimed your 678090-affected-count mp and then read the comments. are you still working on it? [00:49] wgrant: Digging around IBranch isn't turning up anything [00:49] wallyworld_, not right at this moment but i will [00:50] def checkAuthenticated(self, user): [00:50] can_edit = ( [00:50] user.inTeam(self.obj.owner) or [00:50] user_has_special_branch_access(user.person) or [00:50] can_upload_linked_package(user, self.obj)) [00:50] for ssp in ssp_list: [00:50] archive = ssp.sourcepackage.get_default_archive() [00:50] if archive.canUploadSuiteSourcePackage(person_role.person, ssp): [00:50] return True [00:50] poolie: ok. just wanted to see if i needed to review now or not. thanks [00:50] do you have any comments beyond what robert and i said? [00:50] Which uses latest_published_component. [00:50] poolie: not sure. didn't get that far. i'll have a look [00:52] wgrant: Ah, where is that? [00:52] StevenK: c.l.security [00:55] poolie: so you can delete the view property 'other_users_affected_count' i think [00:55] yep probably [00:56] is my change to @cachedproperty ok? [00:56] poolie: other than that, looks like a nice change. i was going to also suggest the feature flag [00:56] poolie: i like it how you've been doing these polish 'paper cut' bugs of late [00:57] thanks [00:57] poolie: i think the cachedproperty change is ok [00:57] i've run into issues before with cached permissions but i think this case is ok [00:58] it's just cached for the duration of the page view, right? [00:58] yep. the rendering [00:59] poolie: so, a cached property is good if a property is accessed more than once during a render. is this property used more than once? [00:59] yes, the template puts it in twice [00:59] for something to do with js fallbacks [00:59] ok [00:59] which seems ugly but i didn't want to change it without fully understanding it [00:59] and just caching it seemed fine [01:00] sure [01:01] lifeless, do you have any opinion where TacTestSetup ought to live, so that it can be used by both lp and buildd tests? 
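The duplication StevenK is poking at is easier to see with the quoted snippet laid out as a small helper. This is only a sketch using the names pasted above; the wrapping function is hypothetical, and the point of the complaint is that verifyUpload() internally repeats a similar publishing-history lookup.

```python
# Hypothetical helper showing the check quoted above; only the two calls
# (getSourcePackage/latest_published_component and verifyUpload) come from
# the log, the rest is scaffolding.
def can_approve(person, distribution, distroseries, name):
    component = distroseries.getSourcePackage(
        name).latest_published_component
    # verifyUpload() returns None when the upload is permitted, otherwise
    # a rejection reason -- hence the "is None" test.
    return distribution.main_archive.verifyUpload(
        person, name, component, distroseries) is None
```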
[01:01] it is not enormous but not trivial [01:01] In Twisted :) [01:01] it would seem like a waste to make a new package just for that [01:01] :) [01:01] perhaps it could go in testfixtures [01:02] poolie: you could create a txfixtures [01:02] avoiding packages just makes problems [01:02] as a new package, depended upon by both? [01:02] yes [01:03] txfixtures with twisted specific Fixtures [01:03] it can depend on fixtures [01:03] lp and buildd can both use txfixtures [01:03] k [01:03] aside from that i think it's all ready to be cut out [01:04] nb it doesn't actually depend on twisted [01:04] so, in a sense it could go in plain fixtures [01:04] well, you probably cannot run its owntests without twisted [01:05] wallyworld_, while you're here can you read https://code.launchpad.net/~mbp/launchpad/800295-buildd-split/+merge/81111 [01:05] * wallyworld_ reads [01:17] poolie: looks ok to me, but i think the port to listen on needs to be paramaterised [01:27] lifeless: Do you know exactly what OEM needs? Just bugs with foo and hwe-foo tasks? [01:27] lifeless: I wonder if we can just have a temporary owning context. [01:28] lifeless: There's no real need to respect multiple policies. [01:28] I don't know precisely know; nor do I understand what you mean :) [01:29] lifeless: What if we say that OEM's multi-pillar bugs favour one of the targets, and use the access policy from that target? [01:29] Rather than respecting the union of policies from each target. [01:30] I have no objection to that [01:30] seems harder to do well [01:30] but thats your problem :)O [01:32] lifeless: Also, I forget some details of yesterday's conversation. IIRC you suggested just having the enum value on Bug/BugTask, without the pillar reference. But then we have to indirect through Product/Distribution/ProductSeries/DistroSeries to work out the policy. [01:34] some of these osutils almost need their own package [01:34] for full observers [01:34] not for restricted [01:34] Sure. [01:34] which is a small set [01:34] Sure, but it makes queries far worse. [01:34] you've tested ? [01:35] I mean in terms of complexity. [01:35] Speed, I haven't tested yet. [01:35] the are a little more awkward to write, thats true. [01:35] might be a driver for having two separate tables [01:35] How? [01:35] * lifeless speculates wildly [01:35] The problem is not that there's one table. [01:36] wgrant: its a lopsided table [01:36] THe problem is that we have to indirect through four other tables to get to the one table. [01:36] wgrant: task -> policyuse -> grants [01:36] gnuoy: one in between table [01:36] bah [01:36] wgrant: ^ [01:37] lifeless: How do you get from task to policyuse unless you refer to policyuse on task? [01:37] You'd have to go through ProductSeries or DistroSeries. [01:37] task.product == policyuse.product [01:37] mmm, sure. ok - 2 tables [01:37] task -> [*series] -> policyuse [01:37] And we already have to denorm this slightly, on grant. [01:37] I'm not sure that one is avoidable. [01:38] grant knowing the asset ? [01:38] Right, an artifact-specific grant has to know the artifact, its target, and its policy. [01:38] target == user ? [01:39] 'person' ? [01:39] pillar [01:39] why does it need pillar? it can use policyuse.id [01:39] (person, artifact, policyuse) [01:39] Right, but policyuse requires target and policy. [01:39] yes [01:40] Which means we're denorming (target, policy) from bugtask onto grant. [01:40] It's indirect, but it's still denormed. 
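As a sketch of what the proposed txfixtures package might hold: a Fixture (from the plain fixtures library, with no Twisted dependency needed, as poolie notes) that runs a TacTestSetup-style service for the lifetime of a test. The class and attribute names here are assumptions, not the API txfixtures actually grew.

```python
from fixtures import Fixture

class TacServiceFixture(Fixture):
    """Run a .tac-based service around a test (illustrative sketch)."""

    def __init__(self, tac_setup):
        # tac_setup: any object with TacTestSetup-style setUp()/tearDown().
        super(TacServiceFixture, self).__init__()
        self.tac_setup = tac_setup

    def setUp(self):
        super(TacServiceFixture, self).setUp()
        self.tac_setup.setUp()
        self.addCleanup(self.tac_setup.tearDown)
```

A test in either tree could then do self.useFixture(TacServiceFixture(...)) without caring which package owns the tac machinery.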
[01:40] mmm [01:40] so we'd do that for reporting [01:40] ack, its a denorm [01:41] thats part of your schema anyhow though, right ? [01:41] Yes. [01:41] so no-change [01:41] The change is that you suggest I no longer denorm that on bugtask itself. [01:42] Which seems odd, when we're already denorming it elsewhere. [01:42] Further away. [01:42] well, denorms are done for a reason [01:42] whats the reason for it to be denormalised on bugtask ? [01:42] or rather [01:42] let me ask it differently [01:42] if bug is the native location for this [01:43] whats the best denorm to use on bugtask [01:43] so bug.policy = ENUM [01:43] Right, better question. [01:43] bugtask.whatshouldwehavehere. [01:44] brb, nature [01:44] I suspect policyuse. So setting Bug.access_policy calculates policyuse and throws it at BugTask and artifact. [01:45] Although then we can't optimise public bugs. [01:45] Because we'd have to look up the public policy. But that may not be a problem. [01:59] ok, lp:txfixtures exists! [02:04] lifeless: We seem to have an OOPS issue. [02:04] * 308 Exceptions [02:04] All the None/None OOPSes are missing. [02:04] * wgrant checks devpad. [02:05] django.db.utils.DatabaseError: invalid byte sequence for encoding "UTF8": 0xf8 [02:05] HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding". [02:05] yay [02:05] raised with datedir: /srv/launchpad.net-logs/production/chaenomeles/2011-11-03 filename: OOPS-4e0b81ac6e642ee8099df77672c8be1a [02:05] fwiw [02:07] ['HTTP_USER_AGENT', '\xf8\x07p\x01'] [02:07] That's a new one... [02:09] M-x ^g p ^a [02:09] help! [02:09] wallyworld_, so what did you think of https://code.launchpad.net/~mbp/launchpad/800295-buildd-split/+merge/81111 [02:10] lifeless, i'm just going to copy some of the osutils functions in to txfixtures to stop this recursing too much [02:10] [11:17:10] poolie: looks ok to me, but i think the port to listen on needs to be paramaterised [02:11] some kind of global osutils [02:11] wallyworld_, where? [02:12] wgrant: win [02:12] wgrant: rye has a branch up that should help with this [02:12] lifeless: .decode('utf-8', 'replace')? [02:12] wgrant: not proposed for merging yet [02:12] poolie: _hasDaemonStarted().... return self._isPortListening('localhost', 8221) [02:12] well, the db schema is royally messed up. headers are bytes. [02:12] Not sure whether to hack code or OOPS now.. [02:13] wgrant, that particular one looks a lot like unix keyboard input not broken utf8 [02:13] Yes. [02:13] wgrant: https://bugs.launchpad.net/python-oops-tools/+bug/884265 [02:13] <_mup_> Bug #884265: req_vars header values may cause crash in dboopsloader < https://launchpad.net/bugs/884265 > [02:13] of course in general escaping might be good [02:13] poolie: but i'm not very familiar with that stuff so i could be wrong [02:13] wallyworld_, it's already hardcoded in the tac file... [02:13] the coupling is not ideal but since we're apparently happy just counting on that port for these tests i can live with it [02:14] I think someone is going around attempting an exploit [02:14] poolie: so, my ignorance - i thought hasDaemonStarted() is a generic method used for different services [02:14] hence each would be on a different port [02:14] wgrant: https://code.launchpad.net/~rye/python-oops-tools/quote-req-vars [02:14] i guess i could make a tac file in a tmpfile [02:14] lifeless: Intriguing. 
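wgrant's .decode('utf-8', 'replace') suggestion above, spelled out for byte values like the bogus user agent. This is a minimal sketch only; rye's branch is the real fix, and the function name here is made up.

```python
def sanitise_req_var(value):
    # Byte strings with invalid UTF-8 (e.g. '\xf8\x07p\x01') become valid
    # unicode with U+FFFD replacement characters instead of crashing the
    # loader with "invalid byte sequence for encoding UTF8".
    if isinstance(value, str):
        return value.decode('utf-8', 'replace')
    return value

sanitise_req_var('\xf8\x07p\x01')   # u'\ufffd\x07p\x01'
```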
[02:14] wallyworld_, the version that hardcodes the value is specific to tests that always use buildd-slave-test.conf which is always 8221 [02:15] But I guess that works. [02:15] For now. [02:15] poolie: ok. thanks for clarifying [02:15] * wgrant cowboys. [02:16] np [02:16] thanks for the speedy review [02:16] all good then? [02:17] wgrant: where did you see the error ? [02:18] lifeless: Ran make update_db manually. [02:18] poolie: lazr.utils might be a place you can put osutils things [02:18] if isinstance(value, str): [02:18] Needed to add that. [02:18] Since we have floats. [02:18] Perhaps U1 does not. [02:18] ok lunch, biab [02:18] poolie: though I have not checked the dependencies it would drag in [02:18] wgrant: well, they don't have native types at all yet (rfc822 still) [02:19] i don't have a good bias towards lazr.* but perhaps it's only restfulclient [02:19] wgrant: I'm working on the IBodyProducer thing they need to migrate [02:19] lifeless: Ah, of course. [02:20] poolie: looks ok [02:22] lifeless: Thanks. [02:22] lifeless: Any thoughts on bugtask? [02:24] permit bug.accesspolicy to be NULL [02:25] I agree; trigger updates both denormed locations (grant + tasks) [02:26] Great. [02:26] wgrant: raises the question, perhaps the artifact table should denorm the policyuse, rather than the grants. [02:26] (Or does it already? [02:26] ) [02:26] It doesn't already. It could. But currently the only method to manipulate artifacts is ensure(), which is nice. [02:27] so bug.accesspolicy being null implies task.policyuse is nullable, so public case is optimisable [02:27] Yep. [02:27] I'm not sure what to do with artifact-specific permissions when the artifact becomes public. [02:27] now, when the policy changes, there are less artifact rows to change than grant rows [02:27] Indeed. [02:27] And querying through artifact for reporting should be quickish... hopefully. [02:28] which, if you're hitting artifact anyway on reports [02:28] -> I would put the policyuse on artifact for restricted observers [02:28] -> and on grant for full observers. [02:28] Right. [02:28] yes, that breaks your combined schema. I'm feeling more and more sure it should be separate. [02:29] Possibly, yes. [02:29] It shouldn't make a significant performance difference. [02:29] To have to go through artifact. [02:29] Well, I think you are anyway, aren't you ? [02:29] or were you thinking to just report 'there are N total grants' [02:30] Pretty much that. [02:30] being able to report 'there are N private objects' and 'Y security ones' might be useful itself. [02:30] In order to be fast. [02:30] Yes. [02:30] True. [02:30] so, O(artifacts) < O(restricted grants) if there are 1 restricted grant per artifact. But there may not be. [02:30] so you could denorm to both. [02:30] I had considered putting policy on artifact, but it makes reporting slightly awkward. [02:30] But should be fine. [02:30] Hard to predict. [02:31] It really depends on the project. [02:31] Security will usually have one restricted observer. [02:31] the reporter? [02:31] Artifacts in private projects probably won't have any. [02:31] Right. [02:32] ok, so plan: bug.accesspolicy nullable; task.policyuse nullable; possibly add an artifact.policyuse; possibly drop grant.policyuse [02:32] last two subject to scaling testing by you [02:32] It's going to be awkward to get something representative, but I'll do what I can. [02:33] I feel your pain [02:34] you could do what I did with bugsummary [02:34] What's that? 
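The plan agreed further down ("bug.accesspolicy nullable; task.policyuse nullable; possibly add an artifact.policyuse; possibly drop grant.policyuse") roughed out in Storm terms. Every name here just follows the conversation; it is not the schema that eventually landed, and the real tables carry many more columns.

```python
from storm.locals import Int, Reference, Storm

class AccessPolicyUse(Storm):
    # The (target, policy) pair normalised into one row, so BugTask and the
    # grant/artifact tables can denorm a single foreign key instead of both
    # values separately.
    __storm_table__ = 'AccessPolicyUse'
    id = Int(primary=True)
    product_id = Int(name='product', allow_none=True)
    distribution_id = Int(name='distribution', allow_none=True)
    policy = Int(allow_none=False)      # the access policy enum value

class BugTask(Storm):
    __storm_table__ = 'BugTask'
    id = Int(primary=True)
    # NULL means public, which keeps the common public-bug case cheap.
    policy_use_id = Int(name='policyuse', allow_none=True)
    policy_use = Reference(policy_use_id, AccessPolicyUse.id)
```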
[02:34] conversion script into temp tables [02:34] ran it repeatedly on staging until happy [02:34] FSVO "you" [02:34] we should have a reasonable corpus already, particularly if you use private branches [02:34] wgrant: s/staging/dogfood/ [02:34] Not very representative of performance, but maybe. [02:34] if its fast there... [02:34] Sometimes hundreds of times faster, sometimes infinitely slower :) [02:35] Heh [02:35] True [02:35] its not the resources that make us make things fast, its the constraints :) [02:36] Mmm, yeah, but there's so many bad queries and schema in LP that pretty much anything happening on mawson blows away the cache and kills everything. [02:36] I will turn everything off :) [02:38] lifeless: Ah, denorming like this also more easily allows us to have a master target for The OEM Conundrum. [02:39] Since the policyuse is not calculated from each task's target separately. [02:39] it seems to make that unneeded though [02:39] I suspect doing a master target may be a big fail [02:39] I wouldn't try it myself [02:40] (because what if hwe aren't allowed to see all of oem's bugs) [02:45] lifeless: Then they can have restricted observers on all the relevant artifacts, just like they do now. [02:46] true [02:46] Unless they're in the OEM bug supervisor team (which is subscribed automatically, and will be the observer in the new model), they are being subscribed manually. [02:46] How would you do it without a master task? Which target would it use to look up the policy? [02:47] would what [02:47] it == the trigger to push Bug.access_policy through to BugTask.policy_use [02:48] And to AccessPolicyArtifact.policy_use, or AccessPolicyGrant.policy_use. [02:49] wgrant: I was assuming it would run a small function (task, policy enum) on each task. [02:50] lifeless: One of those has to win, in order to be put into AccessPolicyArtifact or AccessPolicyGrant. [02:51] ah [02:51] ok, I see. Yes, go ahead. [02:51] I figure most of the cases are bugs with $foo and hwe-$foo tasks, in which case $foo should win. Not sure how we'll define that, but that can be clarified next week. [02:59] fuuuu [02:59] File "/srv/lp-oops.canonical.com/cgi-bin/lpoops/src/oopstools/oops/models.py", line 370, in _get_oops_tuple [03:00] for (start, end, db_id, statement) in statements: [03:00] ValueError: need more than 3 values to unpack [03:00] * wgrant enfixorates further. [03:00] EWTFISGOINGON [03:00] don't tell me bson is translating (foo, None) as (foo,) [03:00] that would be fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu [03:00] It would indeed. [03:01] I remember we ran into problems like this when I instrumented openid. [03:01] I assume that's what this is. [03:01] But I guess we'll find out in a sec. [03:01] Except we've had 2000ish openid timeouts since we moved, and they haven't been problematic. [03:02] [212, 212, 'sendmail'], [215, 215, 'sendmail'], [218, 218, 'sendmail'] [03:02] etc. [03:02] sigh. [03:03] * wgrant greps. [03:03] are they old oopses or something ? [03:03] No, this is OOPS-11a3f88d3d139e90764487de402825c4 [03:04] can we make sure we have bugs on the producer side as well as whatever you do to fix the consumption ? [03:04] The fourth item should be the detail, shouldn't it? [03:04] yes [03:04] the statement [03:04] (start, end, db_id, statement) [03:04] For this issue I'll file a producer bug, sure. [03:04] For the user-agent one it probably didn't make sense. 
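The ValueError further down comes from entries like [212, 212, 'sendmail'] carrying only three items, with the statement text missing. A tolerant consumer would pad rather than unpack blindly; this is a sketch of that shape, not the actual oops-tools change.

```python
def iter_statements(statements):
    for entry in statements:
        if len(entry) == 3:
            # Degenerate entries: (start, end, db_id) only, no statement.
            start, end, db_id = entry
            statement = None
        else:
            start, end, db_id, statement = entry
        yield start, end, db_id, statement
```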
[03:05] the missing value one needs a bug too [03:05] user agent is crap data, I'm fine with sanitising that in oops-tools [03:05] The missing value one is quite possibly the same thing. [03:06] yes [03:06] Hmm, putting such an array through bson works fine. [03:06] with None and ''. [03:06] So not quite that trivial and awful, sadly. [03:09] headddeeeeesk [03:09] ? [03:12] just at the symptoms [03:12] Oh. [03:13] fwe9ofju*(OW#YU8947238942389234 [03:13] \ [03:13] >>> bson.loads(bson.dumps({'blah': (1, 2, object())})) [03:13] {u'blah': [1, 2]} [03:13] It's unrepresentable, so just omit it :D [03:14] lifeless: reasons for hating BSON++ [03:14] wgrant: so we have unrepresentable objects in the dict ? [03:15] wgrant: I doubt json does better. [03:15] wgrant: or things we expect to stringify() ? [03:15] wgrant: anyhow, if we're doing that (or analagous) we're breaking the oops contract *anyhow* [03:16] >>> json.dumps({'blah': (1, 2, object())}) [03:16] TypeError: is not JSON serializable [03:17] Given this is error handling, a case can be made to sanitize the dictionary before serialization, converting unrepresentable stuff to repr(foo) and maybe logging a warning. [03:17] lifeless: JSON crashes, yes. [03:17] Which is better. [03:17] It tells you that you're being a moron, rather than nodding and ignoring you. [03:23] Ah. [03:23] In the sendmail case it's an email.header.Header. [03:23] Which stringifies fine. [03:24] But doesn't JSON. [03:24] HMmm. [03:24] Oh, but we've probably never used it with JSON. [03:24] Only RFC822. [03:24] where it worked by chance [03:24] Yep. [03:24] its against the oops contract [03:25] (Pdb) bson.loads(bson.dumps({'foo': object()})) [03:25] {} [03:25] yay [03:27] I guess that explains the field.blob thing too. [03:27] It'll be an uploaded file thingy. [03:27] yeah [03:27] reasonable to not include it :) [03:28] Yeah, but not to just exclude it. [03:28] Particularly since once it's a dict the key will just disappear. [03:28] yes [03:28] worth a bug filed ustream [03:28] Does the bson library provide hooks for serialization of custom objects? [03:29] Oh god bson uses tabs. [03:29] I guess the problem is encode_value() [03:30] Which has a huge long if/elif, and no catch-all. [03:31] lifeless: What happens if the OOPS serialiser raises an exception? [03:32] wgrant: it blows up the stack [03:32] wgrant: there isn't [yet] a fallback mechanism. [03:32] Probably ending up in the appserver error log. [03:32] So we can't very well just make this blow up. [03:32] wgrant: core oops + oops-twisted is a place to put one, and it probably makes sense to do so. [03:33] e.g. generate an oops saying 'could not generate this oops' with the dict keys and values coerced, nonrecursive, to strings, with try:excepts everywhere. [03:35] So, should I file LP bugs complaining about the sendmail and field.blob cases? 
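A minimal sketch of the sanitise-before-serialise idea lifeless floats above: coerce anything the serialiser cannot represent (email.header.Header, uploaded-file objects, plain object()s) to repr() so keys stop silently vanishing from the OOPS dict. Illustrative only; not the fix that was actually filed.

```python
import bson

SAFE_TYPES = (bool, int, long, float, str, unicode, type(None))

def sanitise(value):
    if isinstance(value, dict):
        return dict((str(key), sanitise(val)) for key, val in value.items())
    if isinstance(value, (list, tuple)):
        return [sanitise(item) for item in value]
    if isinstance(value, SAFE_TYPES):
        return value
    # Unrepresentable: keep *something* rather than letting bson drop it.
    return repr(value)

bson.dumps(sanitise({'blah': (1, 2, object())}))   # the key survives as a repr
```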
[03:36] yes [03:37] field.blob might be oops-wsgi [03:37] dunno if it was a loggerhead or appserver oops [03:38] Don't see why loggerhead would have Zope field names and uploaded blobs :) [04:00] wallyworld_: yo [04:00] wallyworld_: OCR time :) - https://code.launchpad.net/~lifeless/python-oops-wsgi/0.0.6/+merge/81230 [04:00] * wallyworld_ looks [04:01] * wallyworld_ wishes txlongpoll was done so he didn't have to wait for the mp diff [04:03] you'd still be waiting [04:03] you just wouldn't be refreshing [04:18] lifeless: at the last epic, the longpoll demo session made a point of showing that the diff generation was instant [04:19] wallyworld_: The page updates instantly once the diff is generated. [04:19] wallyworld_: The diff generation still takes a while, and is even still on a minutely cronjob. [04:19] that's a shame :-( [04:19] i hate cron jobs for this sort of stuff [04:20] should be event driven [04:20] Eventually we'll have an MQ-based jobrunner, I assume. [04:20] lifeless: dumb question given i'm not familiar with the the code i'm looking at - does it matter that the default tracker will call ensure_start_response twice? [04:41] wallyworld_: thats whats passed into it [04:41] wallyworld_: no it doesn't matter [04:42] wallyworld_: the code is structured to let that change without folk that write custom trackers having to know it changes [04:42] wallyworld_: on_first_bytes is called once only, and on_finish once only [04:43] lifeless: cool. it all seems ok but lacking any familiarity with the package, i'm not sure if i've missed anything [04:44] thats fine :) [04:44] unless you want to become a wsgi expert - you can cross reference with PEP 3333 [04:46] to save stalling i'll +1 it :-) [04:47] thankyouverramuch [04:50] wgrant: https://code.launchpad.net/~wgrant/launchpad/observer-db/+merge/81104 is flagged as in progress, but the db patch seems done (including the partial index on AccessPolicyGrant which I thought was staying as it was). Want me to tick it yet? [04:52] stub: lifeless revealed some additional requirements last night. And convinced me to enum it. [04:52] ok :-) [04:54] Anyone know if there is a bug open for the slow bugsubscription,teamparticipation join we were discussing the other day? [04:54] more strictly, OEM revealed ... [04:55] lifeless: Well, I assume you found out earlier than you let on :) [04:55] yesterday early aco [04:55] *avo* [04:55] Oh. [04:56] ~EOD in florida [04:56] Where did the 'deployable revisions' reports go? By bookmarks are no longer valid [04:57] stub: I removed the private redirect yesterday [04:58] stub: Didn't you see my mail about the deployment reports becoming public, and that I would remove the redirect in one month? [04:58] StevenK: I probably saw it and ignored it :-) [04:58] StevenK: in such cases a follow up at the time can be very useful [04:58] Bad stub, no cookie. [04:59] I can just about cope with tomorrow. One months time? Ha! [04:59] stub: lpqateam.canonical.com/qa-reports [04:59] [15:58:40] StevenK: I probably saw it and it was washed away in the flood waters, while I desperately tried to recover it; but alas; twas not to be <== fixed [05:00] spm: Now do it in haiku? [05:01] I wouldn't like to keep stealing the limelight like that [05:01] I'm a shy retiring type doncha know [05:01] terrible liar [05:01] Who are you, and what have you done with the real spm? [05:02] spm 1, StevenK 0 [05:02] Oh, pft. [05:10] i'm ready to add txfixtures as a dependency [05:10] are there any docs for this? 
[05:10] also, i presume it will be an egg rather than a branch? [05:34] so, are intel 320 class SSD's good ? [05:36] generally reckoned to be [05:36] This X201 has a 160, I think, and it's excellent. [05:40] I've hit the wall on my laptop [05:40] 130GB just isn't enough [05:41] I only have 160GB, but I also have a desktop and a fileserver [05:41] so do I but the pain of rotating stuff on and off had gotten to me [05:42] Sigh. Forgot to unmount NFS partition before moving networks. [05:46] wgrant: Can I get from ISPPH to ISourcePackage easily? [05:47] lifeless: What do you need to rotate on and off the laptop? [05:48] StevenK: Maybe. [05:49] wgrant: In a test/harness environment, so feel free to suggest evils like rSP() [05:51] StevenK: SPPH.meta_sourcepackage [05:51] lifeless, oh they come up to 300gb now? or more? [05:53] do i need a review to add the tar to the download-cache, or do i just add it? [05:53] Just add it. [05:54] and then propose an addition to versions.conf [06:02] \ [06:03] wgrant: I thought that returned a DSP? [06:03] StevenK: No. [06:03] You might be thinking of SP.distribution_sourcepackage [06:14] Bleh. Still doesn't make SPPH queries locally. [06:16] You're probably an admin. [06:18] poolie: yeah 360GB [06:19] poolie: faster too if you have 6GBps SATA [06:20] wgrant: No, I'm logged in as a user. [06:21] I already thought of that too. [06:21] It's not the archive owner? [06:22] Or the distribution or distroseries owner or driver? [06:23] Logged in as myself, as a member of ubuntu-team [06:23] That would do it. [06:23] Don't be a member of ubuntu-team. [06:26] Nice. ?batch=80 == 1500 queries [06:30] * StevenK tries to work out how to make IBugNomination.canApprove() nicer. [06:39] lifeless: The Intel drives are what people are currently recommending for databases - at a decent price point, reliable but a little slower than some less reliable drives [06:41] stub: the 320 series are laptop form factor [06:41] stub: there are larger ones around I believe [06:42] lifeless: Intel 320's is what I'm seeing. Not sure if there are server versions too. [06:43] lifeless: http://postgresql.1045698.n5.nabble.com/Recommendations-for-SSDs-in-production-td4958689.html for a thread I saw today [06:43] Bleh. Brain fried. [06:44] hah up to 600GB now [06:44] http://ark.intel.com/products/56569/Intel-SSD-320-Series-%28600GB-2_5in-SATA-3Gbs-25nm-MLC%29 [06:44] do i have to do something special to get the egg hooked up by buildout [06:44] after putting it in the download cache and in versions.cfg [06:44] Aparently there is a review of the Intel SSDs on Tom's Hardware today [06:45] the 710 [06:45] http://www.tomshardware.com/reviews/ssd-710-enterprise-x25-e,3038.html [06:52] when i try to import it (and do nothing else with it) [06:52] which looks to be that product http://ark.intel.com/products/56585/Intel-SSD-710-Series-%28300GB-2_5in-SATA-3Gbs-25nm-MLC%29 [06:52] i get an importerror kind of mixed together with a zopexmlconfigurationerror [06:57] poolie: did you do a make? [06:57] yeah [06:57] you need to do that [06:57] i hadn't mentioned it it setup.py, perhaps that was it [07:09] yep got it === wallyworld_ changed the topic of #launchpad-dev to: https://dev.launchpad.net/ | On call reviewer: - | Critical bugtasks: 266 [07:14] avagoodweekend wallyworld_ [07:14] poolie: still here. just don't want any last minute reviews. i'm fighting doc tests :-( [07:26] what's the oldest python version lp deploys on? 2.4? 
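For the record, the "egg hooked up by buildout" dance that tripped poolie up above has three parts: the tarball in download-cache, a pin in versions.cfg, and an entry in setup.py so buildout actually installs it on the next make. A stripped-down illustration of the setup.py side (the name and version are placeholders):

```python
from setuptools import setup

setup(
    name='example',
    version='0.0.0',
    install_requires=[
        # Without this entry the egg sits unused in download-cache and the
        # import fails as above; the exact version is pinned in versions.cfg.
        'txfixtures',
    ],
)
```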
[07:29] 2.6 [07:31] great thanks [07:31] so i can use relative imports to help the transition of buildd code? [07:35] maybe :) we has some trouble there recently [07:37] i see https://bugs.launchpad.net/bugs/825485 [07:37] <_mup_> Bug #825485: canonical.launchpad.scripts.oops imports broken on natty & oneiric < https://launchpad.net/bugs/825485 > [07:38] so i guess that's a no [07:38] or, it's dangerous [07:39] well, it's fine without [07:41] poolie, lifeless: LP deploys on 2.6. lp-buildd has no such guarantee. [07:42] right and on hardy it may be earlier [07:42] Indeed, it's 2.5. [07:42] And lp-buildd is largely deployed on Hardy. [07:42] it's only the tests [07:42] but, it would be nice to run the tests there [07:45] wgrant: oh? we missed a platform [07:45] wgrant: is there a bug ? [07:50] lifeless: Huh? [07:51] lp-buildd has always has entirely unrelated deployment constraints. [07:51] Because it deploys on crap like hppa. [07:51] https://code.launchpad.net/~mbp/launchpad/use-txfixtures/+merge/81243 [07:51] one step closer to deploying it separately [07:51] And has to build for 5 year old distroseries. [07:51] So can't run recent kernels. [07:52] wgrant: I thought it ran outside the build environment [07:52] wgrant: hppa I can understand :( [07:52] It runs outside the chroot. [07:52] right, so why does that constrain us [07:54] There were issues with building dapper on recent kernels, and we have to support hppa for another 18 months, which sticks them back on hardy. [07:54] lpia is similar. [07:54] Because those archs don't existing in >= lucid. [07:54] Our powerpc buildds also don't really like to boot on most releases. [07:54] sparc as well. [07:54] ia64/sparc can never come past lucid, hppa/lpia past hardy. [07:55] But we need to support ia64/sparc for another 3.5 years. [07:56] ok [07:57] i fixed a 2.5ism in txfixtures [07:57] that's it for me for today [08:40] good morning === adeuring changed the topic of #launchpad-dev to: https://dev.launchpad.net/ | On call reviewer: adeuring | Critical bugtasks: 266 [11:25] hi adeuring, do you have time for a quick review? [11:25] jelmer: sure [11:26] adeuring: https://code.launchpad.net/~jelmer/launchpad/newer-bzr-git/+merge/81259 [11:26] it's a single revision, the diff is also here: http://bazaar.launchpad.net/~jelmer/launchpad/newer-bzr-git/revision/14251 [11:36] jelmer: r=me [11:59] adeuring: Hi! Could you please have a look at this tiny optimization branch? https://code.launchpad.net/~rvb/launchpad/branches-timeout-bug-827935-3/+merge/81262 [12:11] anyone else seeing MP preview diffs that just say "Empty" ? [12:11] bigjools: The MPJ ran after the branch was merged. [12:12] ah, if it was merged it can't find the diff? [12:12] It compares the branch against tip. If the tip has the contents merged, the diff is nil. [12:12] AHHH. [12:13] special [12:13] Just slow. [12:13] thanks [12:13] Actually, that means fast. [12:13] it means I pushed and lp-landed quickly [12:13] It may mean the MPJ ran slow. [12:13] bigjools managed to beat the cron. [12:13] rabbit is coming! [12:13] MPJ is horrible and needs to be beaten with a stick. [12:14] rabbit is the holder of the stick I hear. [12:14] Possibly over rabbit. [12:15] Wait, I thought this bit wwas done in Qastaging wwith rabbit! [12:15] I meant to test! [12:19] nigelb: no, I am talking about starting jobs with rabbit [12:20] different to the browser doing long-polling [12:20] bigjools: as opposed to a cron? 
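On the relative-imports question earlier in this exchange: explicit relative imports (PEP 328) are available on both Python 2.5 (the hardy buildds) and 2.6 (LP itself), and the absolute_import future statement removes implicit-relative ambiguity, i.e. local modules shadowing top-level packages. Whether that is enough for the buildd tree is the open question above. A sketch of the style, inside some package module; the sibling module name is a placeholder:

```python
from __future__ import absolute_import

# Explicit relative import: works on 2.5 and 2.6 alike.
from . import slave_config

# With absolute_import in force this can only be the top-level package,
# never a sibling module that happens to share the name.
import oops
```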
[12:20] yes [12:20] Yeah, that'd be nice :) [12:20] we turn cronscripts into daemons [12:20] INSTANT GRATIFICATION [12:20] or at least more instant than currently. [12:21] Then we can remove the job system entirely. [12:21] AND IT WILL BE GLORIOUS [12:21] Some of the stuff discussed in the launchpad session at UDS was nice as well. Like the brainstorm for the new profile page. [12:22] StevenK: I doubt we can remove the job system for a while. Although I'd love to replace it with celery etc. [12:22] :-( [12:22] The job system makes me very sad. [12:22] in what way? [12:23] The horrid duplication of effort [12:23] I only have one problem [12:23] it's NIH [12:23] bigjools: My main issue is the sixty-four million interfaces and model classes you have to add for a new job type and they're all subtly different [12:23] And require different cron jobs [12:24] /wrists [12:24] yeah it's complex and over-engineered [12:24] bigjools: Preaching. Choir. [12:24] and on cue my random music selection starts playing Rage Against the Machine [12:25] bigjools: Excellent timing ;) [12:28] StevenK: I also get frustrated at the myriad ways to start a job off === bac changed the topic of #launchpad-dev to: https://dev.launchpad.net/ | On call reviewer: adeuring, bac | Critical bugtasks: 266 [12:38] Anyone mind if I run the archive publisher on dogfood? [12:38] bigjools, StevenK, wgrant? [12:39] Not I. [12:39] WCPGW. [12:40] nigelb: It's dogfood. Don't ask that. [12:40] heh [12:41] jtv: fine [12:42] btw you can cancel virtual builds over the API now. You'll also be able to do it in the UI at the next rollout [12:44] * StevenK QA'd it this morning [12:44] StevenK: it should not have been marked released though, because you qa-ed the wrong thing :) [12:44] I QA'd the UI only [12:44] ah ok - but it's not released [12:45] we need a permanent QA team in AsiaPac :) [12:46] right, food time, ttyl [12:47] Yay, ddebs built on prod. [12:47] Finally. [12:48] bigjools: would you revisit https://code.launchpad.net/~mbp/launchpad/800295-buildd-split/+merge/81111 and perhaps change your vote to 'approve' if you agree or make another comment? horse is out of the barn. [12:52] rvba: sure, I'll look (sorry for the delay -- had lunch) [12:53] adeuring: thanks! [13:10] rvba: the changes look good, but a suggestion nevertheless: What about making is_branch_count_zero a @cachedproperty? after all, its accessed more than once and it calls a method. (I hva no idea though how expensive the method is...) [13:12] adeuring: well, the method is definitely no expensive: because self.branches().visible_branches_for_view and self.branch_count are @cacheproperty. [13:12] s/no/not/ [13:12] rvba: ah, ok, so let's leave the @property [13:12] rvba: another question: If branch_count needs to be evaluated in is_branchcount__zero: DO you expect the query in these cases to be faster? [13:13] ...if not, would it make sense to call resulstset.any() instead? [13:14] Hm [13:14] https://launchpad.net/builders --> Forbidden [13:14] adeuring: I'm not sure I understand your suggestion, this branch does not bring any improvement to the cases where branch_count needs to be evaluated. [13:14] I suppose a private object has leaked onto the page? [13:15] maxb: yeah, that happens when a recipe is building int a private archive [13:15] into [13:15] adeuring: When the batch is empty that is. 
But hopefully this wont happen in many cases because when the number of branches is huge, chances are that you have all the statuses represented and the only filtering possible is by status. [13:16] maxb: There's a bug about it. [13:17] rvba: right; i I just wondered if it would be hard to add that case too. But that's probably my bad habit to squeeze too much changes into one branch ;) [13:17] rbva: r=me then [13:17] g22 [13:18] adeuring: Thank you. I know it's strange but for these requests .is_empty() is almost as expensive as .count(). [13:19] abentley, adeuring -- I'm on weekly checkpoint call for feature so will need to hold standup for a bit. [13:19] I know that's utterly bizarre but that's what the profiling I've do says. [13:19] abentley, adeuring -- I can ping when done. [13:19] deryck: Okay. [13:20] rvba: well, SQL queries can be "strucluraaly slow" ;) [13:20] deryck: ok [13:31] thanks adeuring, bac [13:47] #ubuntu-uds [13:49] abentley, adeuring -- let's do standup at top of hour. cool? [13:49] deryck: ok [13:49] deryck: sure. [14:23] adeuring: It's not a rewiew but I'd like your help for something if you have 2 minutes… [14:23] rvba: sure [14:24] adeuring: I'd like you to hit https://code.qastaging.launchpad.net/~ubuntu-branches 5-6 time and give me the oops links [14:24] :) [14:24] times* even [14:24] the reason for this: [14:24] I have landed 2 branches to try some improvements on this page. All is protected by a FF that is on only for me. [14:25] ok [14:25] And since this page is heavily dependent on TeamParticipation I need someone in the same teams. [14:25] to compare how my branches improve things [14:27] rvba: https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-a22826044c7c82ef553e7d363ecb35d8 https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-06fabadb8b802665135e2f048d72e8ba https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-e13505381303340a3d138501c0f14c3a https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-51525de16620edd7a8c6c5ecfa64e39b https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-e7b00379ecad43e9a36798e84a87fb7a https://lp-oops.c [14:28] adeuring: thank you very much. [14:28] welcome :) [14:41] adeuring: sorry to bother you again… but did one of the requests succeed? I'd like to make sure the FF has been properly set… [14:41] rvba: no, all requests timed out [14:42] adeuring: ok, thanks. [14:44] danhg: You've marked bug 240067 as New... It's going to pop up when we come to process the to-Triage list, so I'm just wondering if there's any reason we should leave it as New. [14:44] <_mup_> Bug #240067: Launchpad projects need wikis < https://launchpad.net/bugs/240067 > [14:48] danhg: did you really mean to change the status of bug 240067 back to new? [14:48] <_mup_> Bug #240067: Launchpad projects need wikis < https://launchpad.net/bugs/240067 > [14:50] jelmer: you scared him off [15:37] adeuring: could you have a look at this tiny MP? https://code.launchpad.net/~rvb/launchpad/branches-timeout-bug-827935-4/+merge/81290 [15:37] rvba: sure [15:37] thx [15:43] rvba: looks good; I think always showing the link is fine. But again a question, just out of curiosity: Would it be possible to use a UNION ALL in the query? Tht might be faster than the current query [15:44] rvba: r=me, btw [15:44] adeuring: I tried a bunch of things with this query: the problem is not the OR in itself. [15:45] ok [15:45] The problem is simply Branch.transitively_private = FALSE fetches 300k rows (in the bad case I'm working on) [15:45] So we get a seq scan which is the right thing to do. 
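The pattern adeuring and rvba settle on above, sketched out: cache the expensive count for the lifetime of the view, and leave the boolean wrapper as a plain property, since it becomes free once the count is cached. The class name, method bodies, and import path are illustrative, not the branch under review.

```python
from lp.services.propertycache import cachedproperty

class BranchListingView:

    @cachedproperty
    def branch_count(self):
        # One query, cached for the duration of the render even if the
        # template reads it several times.
        return self.branches().visible_branches_for_view.count()

    @property
    def is_branch_count_zero(self):
        # Cheap: reuses the cached count, so no @cachedproperty needed here.
        return self.branch_count == 0
```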
[15:46] Because 1/3 of the table is returned. [15:46] But this is painfully slow. [15:46] All of this simply to choose whether or not to display a link… [15:46] You see what I mean :) [15:46] rvba: right... [15:47] With the branch that you reviewed earlier, I expect a 50% gain. [15:47] rvba: sounds good! [15:48] Indeed, but this is so sensible that I'll say "victory" when I see the numbers ;) [15:48] adeuring: thanks for the review. [15:48] rvba: yeah, I know the feeling "what else might come next" with timeout issues ;) [15:48] adeuring: exactly [15:49] rvba: but work on timeout bugs can be fiun nevertheless [15:49] adeuring: It's rather interesting I think. It feels a little bit like open heart surgery. [15:51] rvba: interesting comparison :) To me it felt sometimes like tinkering with a motorbike engine, when I was a youngster ;) as a [15:53] adeuring: ;) [15:56] jml: hello. I am just looking into the escalated debtags bug and wondered how much you've looked at that so far? [16:46] danhg, are you around? If so, may I assign this commercial feedback ticket to you? https://support.one.ubuntu.com/Ticket/Display.html?id=7272 [17:05] Is there a way for me to tell launchpadlib to use my local launchpad? [17:05] (besides hacking hosts file) === beuno is now known as beuno-lunch [17:11] yes, nigelb. /me tries to remember how [17:11] yay! [17:13] nigelb, use the service_root argument, and specify 'https://api.launchpad.dev/' [17:14] or launchpadlib.uris.DEV_SERVICE_ROOT [17:14] gary_poster: Aha, thanks! [17:14] welcome [17:14] I want to make it a setting in summit so I can move it out to local lp for testing :) [17:15] abentley, adeuring -- I think by Monday or Tuesday, whenever we get the current branches all landed and deployed, we should turn on the new bug lists for ourselves. [17:15] deryck: yeah [17:16] deryck: that includes the beta banner that lets us turn it off, right? [17:16] should, yes. depending on how well things go for adeuring. [17:16] buglists is ready? [17:16] well, actually, I don't think adeuring will have this off switch in his first pass. [17:16] nigelb, gettting very close. [17:16] deryck: Exciting! :) [17:17] nigelb, we have two weeks left of polish and then we will turn them on for beta testers. [17:17] I'm still dealing with a test failure... [17:17] deryck: Yay! Looking forward :) [17:18] adeuring, I looked at the MP for the python side today. I imagine there is some test fallout :) [17:18] adeuring, but I like the approach you've taken. [17:18] deryck: thanks! [17:18] adeuring, I got it intellectually, but seeing it in code made it real for me. And I like it. [17:20] bac, I have a js branch for review if you have the time. [17:54] * bigjools outta here, have a nice weekend all [18:12] the answer to bigjools's question is "not at all" [18:44] bac: could you please review https://code.launchpad.net/~abentley/launchpad/new-bug-fields/+merge/81310 ? [18:44] deryck: done [18:44] abentley: sure [18:48] bac, thanks! I've struggled with the naming too. Configurer was even an option at one point. ;) [18:48] I thought "settings" might be more obvious, too. [18:55] * deryck has to leave now. === beuno-lunch is now known as beuno [19:13] does anyone know how to get a branch scan job to run in a dev environment? [19:14] I did that once. [19:15] benji: make scan_branches [19:15] aha. 
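gary_poster's service_root suggestion from earlier in this exchange, spelled out for a script or a summit setting. The consumer name is just an example:

```python
from launchpadlib.launchpad import Launchpad
from launchpadlib.uris import DEV_SERVICE_ROOT

# Point launchpadlib at a local dev Launchpad instead of production.
lp = Launchpad.login_with('summit-testing', service_root=DEV_SERVICE_ROOT)
# Equivalently: service_root='https://api.launchpad.dev/'
```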
[19:16] benji: or just run cronscripts/scan_branches.py [19:16] abentley: right I know how to get the jobs to run, but it is creating the jobs to begin with that I don't see a sane way of doing [19:17] benji: scheduling the jobs is done by codehosting. Locking the branch (e.g. by pushing to it) will schedule a scan. [19:19] ok, let me see if I can do something with that (I have a branch that I want to get scanned, but it's foreign to the dev environment) [19:20] benji: I don't understand how you could scan a branch that was not part of the dev environment. [19:22] abentley: I can't ;) I want to get it into the environment so I can get a scan job created. [19:23] benji: Okay, you need to push it, which will create a scan job, which will run when you do make scan_branches. [19:48] will someone with code import fu take a look at this question? https://answers.launchpad.net/launchpad/+question/176710 [19:48] I /think/ the answer is "no" but want to be sure. [19:50] benji: the answer is "no", because you cannot move a branch from one project to another. [19:51] thanks abentley! I'll reply thusly. [19:52] other than some extra resource usage on lp, deleteing and creating the branch again should have the same effect as moving [20:55] bac: like this? http://pastebin.ubuntu.com/728537/ [20:56] abentley: sure. i just thought the test *after* it had been recalculated looked funny. i'm sure it was correct but i had to think about it. [20:58] bac: Yes, it had to be correct, (because x / 60 !=0 if x > 60) but it was confusing. === bac changed the topic of #launchpad-dev to: https://dev.launchpad.net/ | On call reviewer: - | Critical bugtasks: 266 [22:24] benji, abentley: You *can* move a branch from one project to another, but the UI was removed because it was too hard to make it work for package branches as well. It's still perfectly doable through the API. [22:29] wgrant: oh, interesting [22:30] wgrant: moving between package branches and product branches would actually be a very nice use of such a UI [22:30] jelmer: Yes. [22:35] jelmer: Although I somewhat disagree with how package branches were implemented. [22:35] jelmer: Really they should probably be special distro-related branches on the product. [22:35] IMO [22:36] Rather than splitting all the branches for a codebase across $lots of places. [22:37] wgrant: agreed [22:38] wgrant: that still seems possible though, it mainly seems like a UI issue to me [22:38] It's going to be particularly awkward for derived distros, because they are going to need cross-pillar stacking or hundreds of gigabytes of extra storage. [22:39] we support multiple levels of stacking fine afaik [22:39] We do, but we rarely do it cross-pillar. [22:40] Because that crosses privilege boundaries, which is very riksy. [22:41] Hmm, I'm not sure I understand what you mean by cross-pilar in this context.. perhaps my understanding of that term is incorrect [22:41] do you mean between the different derived distros? Isn't there an implicit trust of the parent distro? [22:41] Well, say Linaro wants to use package branches. [22:41] Right. [22:41] I guess.
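wgrant's "doable through the API" claim further down, sketched with launchpadlib. The branch lookup is real API surface, but the retargeting call and its signature are my assumption (something like setTarget); check the IBranch webservice documentation before relying on this.

```python
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with('retarget-example', 'production')
branch = lp.branches.getByUniqueName(
    unique_name='~someone/oldproject/trunk')    # example branch
# Assumed operation name and signature -- not confirmed in the log:
branch.setTarget(project=lp.projects['newproject'])
```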