[00:26] wgrant: when trying to fix the test for the group security issue: I can't add the private_team to the main_team since (I think) the main team's owner isn't a member of the private team. Do I have to make the main_team_owner own the private_team, and then switch the owner to private_team_owner afterwards, or is there a better way?
[00:26] (does that even make sense?)
[00:28] thomi: You could do it in "with admin_logged_in", or make the super-team owner a member of the sub-team temporarily.
[00:29] wgrant: ahh - I forgot about admin... that's useful
[00:31] thomi: Can you fix the commit message on that branch?
[00:35] wgrant: done
[00:36] wgrant: I was thinking: it'd be handy to have an automated script that did the re-scan dance - perhaps you already have something like that?
[00:39] No. Automating it reduces the motivation to fix it properly :)
[00:39] And it is fixable properly now.
[00:42] wgrant: with the new db hardware?
[00:43] Yeah
[00:44] I understand we're testing it in the slave pool, before switching it to db master?
[00:44] Still some teething issues, but everything is performing much better now.
[00:44] The old DB servers are dead to us.
[00:44] One new server is now the master, the other is the slave.
[00:44] nice
[00:44] so... why does this still fail? Is it just a matter of tweaking the db performance?
[00:46] Some queries aren't performing quite as well on the new servers, and need analysis. And the two queries in question could use rewrites anyway.
[00:46] (the schema also needs fixing in the post-git world, but that's a fair bit more work and we should be able to make the existing one mostly work again)
[00:51] interesting
[00:51] I'm surprised that certain queries perform _worse_ on newer hardware
[00:51] It's very different hardware.
[00:51] Completely different IO performance.
[00:52] curious
[00:52] wildcherry (the old master) had like 24 disks. aurora has 4 disks, but SSD caching with bcache.
[00:52] ahhh
[00:52] Currently we're running in writethrough mode, which means that big random writes can easily saturate.
[00:52] what's the db query that makes diff generation slow / flaky?
[00:53] (same with random reads, though most tables are cached on the SSDs... except branchrevision, the 500GB table that this query touches)
[00:53] It's not the diff generation itself.
[00:53] It's the step before: the branch scanner.
[00:53] It reads the branch's full history from bzrlib, then creates a row in the BranchRevision table for each revision.
[00:53] That's a lot of rows.
[00:54] O.0
[00:54] for what purpose? there can't be that many places where we need that information, surely?
[00:55] It's used in just a few places: to show the recent revisions on Branch:+index, to show the unmerged revisions on BranchMergeProposal:+index, and to detect when a branch has been merged into the other.
[00:55] The last use case is the one that's difficult to implement without something roughly like this table.
[00:56] ahhh.. yeah
[00:56] hmmmm
[00:59] I have a PoC which stores it more sensibly.
[00:59] Because, funnily enough, thousands of slight variants of an append-only DAG have some redundancy
[00:59] heh
[01:02] There's about 4 billion rows in that table now.
[01:05] thomi: https://lpstats.canonical.com/graphs/AppServer5XXsLpnet/20150125/20150224/
[01:05] That's the webapp's OOPS/timeout rate. You can probably see where we tested the new servers as slaves for a week or so, then when we promoted one to the master, then when I fixed one remaining timeout.
[01:06] nice
[01:06] what happened late on the 16th?
[01:06] That's the teething issue.
[01:06] ahh
[01:07] We still don't know what caused it, but something was causing 200Mbps of replication traffic and maxing out the write IO on both.
[01:07] Oh, no, that was the spike on the 20th.
[01:08] The 16th/17th was just issues with wildcherry being crap. It probably realised we were about to shoot it in the head, so it panicked.
[01:08] I realise now it was actually late on the 17th.
[01:08] Daisy, daisy...
[01:08] haha
[01:08] there's all sorts of graphs in here :D
[01:22] Hmm
[01:23] My chances of having power all afternoon are looking slim
[01:23] Nasty storm 20km away and power already flickering..
[01:31] either your storms are worse than ours, or your infrastructure is worse (or both)
[01:32] wgrant: does this look correct to you? (minus the dumb spelling mistake): http://bazaar.launchpad.net/~thomir/launchpad/devel-fix-group-security/view/head:/lib/lp/app/tests/test_security.py#L223
[01:33] thomi: That looks correct to me. Does it correctly fail?
[01:35] (and does it stop failing if you remove the retraction?)
[01:36] wgrant: no - it fails in the first check - something about 'if self.obj.is_team and user.inTeam(self.obj.teamowner)'
[01:36] just trying to get my head around what 'self.obj' and 'user' are, in that context
[01:38] we do "self.forwardCheckAuthenticated(user, self.obj, 'launchpad.View')" which understandably fails, since main_team_owner isn't a member of the subsidiary private team
[02:03] thomi: Which line fails?
[02:03] Ah, you could well be giving the Authorization adapter a security-proxied object.
[02:04] wgrant: line 949 in security.py raises an exception when trying to access self.obj.teamowner
[02:04] The adapters in lp.security normally get naked objects, as they're the things that check whether non-naked objects can be accessed.
[02:04] ugh - I have to use that... ummm... thing
[02:04] removeSecurityProxy
[02:04] Try importing zope.security.proxy.removeSecurityProxy and doing 'checker = PublicOrPrivateTeamsExistence(removeSecurityProxy(private_team))' instead
[02:04] Ah, in fact, the other tests already do that.
[02:20] wgrant: so, with that in place, I now can't get it to fail, I think because IPerson.super_teams doesn't take into consideration the TeamParticipation status - so security.py ln 1038 succeeds.
[02:21] that seems like a bug though, surely calling person.super_teams should only return approved teams?
[02:22] thomi: One point of clarification: TeamParticipation is the transitive closure of APPROVED and ADMIN TeamMemberships.
[02:22] ahh, ok
[02:23] in that case, something else is happening :D
[02:23] And super_teams isn't even cached.
[02:26] hmm, ok, so my membership retraction isn't working it seems
[02:26] That's not helpful.
[02:26] :D
[02:26] Oh, you're doing it in the wrong direction.
[02:26] member.retractTeamMembership(team), oddly enough.
[02:27] O.0
[02:27] I even read the docstring
[02:27] I thought there was a more sensible method elsewhere, but I can't see it.
[02:27] lol
[02:28] I guess it's that way to make the permissions easier.
[02:28] Since a person should be able to remove themselves from any team.
[02:29] So you just protect person.retractTeamMembership with launchpad.Edit on the Person, and pretend you didn't just give it the most confusing name ever.
[02:29] haha
[02:30] ok, that now passes without the retraction, and fails with it, so that's good :D
[02:31] Excellent.
[02:31] The test is half the battle.
[02:31] Well, probably more than half in this case.
[02:31] yeah
[02:31] now I hope that the solution we discussed earlier actually works :D
[02:32] it seems to :D
[02:33] wgrant: should I also fix this for TeamMembershipStatus.EXPIRED ? Seems like it's going to cause the same problem
[02:44] thomi: Yep.
[02:44] thomi: A team admin should probably be able to see it if there's any TeamMembership at all.
[02:44] wgrant: was Person.setMembershipData the convenience method you were thinking of earlier?
[02:45] No.
[02:45] I think it may have been a figment of my imagination.
[02:45] Though that looks like it does the right thing.
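The "transitive closure of APPROVED and ADMIN TeamMemberships" model discussed above can be sketched in plain Python. This is an illustrative toy only: the names and data shapes below are made up for the example and are not Launchpad's real classes or schema.

```python
# Toy model of TeamParticipation: a person "participates" in every team
# reachable through a chain of active (APPROVED or ADMIN) memberships.
# Retracting a membership (DEACTIVATED/EXPIRED) breaks the chain, which is
# why the test in the log only fails once the retraction goes in the right
# direction.

APPROVED, ADMIN, DEACTIVATED, EXPIRED = "approved", "admin", "deactivated", "expired"

def participations(person, memberships):
    """Return the set of teams `person` transitively participates in.

    `memberships` maps a member (person or team) to a list of
    (team, status) tuples. Only APPROVED/ADMIN links count.
    """
    seen = set()
    stack = [person]
    while stack:
        member = stack.pop()
        for team, status in memberships.get(member, []):
            if status in (APPROVED, ADMIN) and team not in seen:
                seen.add(team)
                stack.append(team)  # follow the chain into super-teams
    return seen

memberships = {
    "thomi": [("private_team", APPROVED)],
    "private_team": [("main_team", APPROVED)],
}
# Active membership: participation reaches the super-team too.
print(participations("thomi", memberships))
# After retraction, participation in both teams is gone.
memberships["thomi"] = [("private_team", DEACTIVATED)]
print(participations("thomi", memberships))
```

This also shows why `person.super_teams` returning only approved teams is the expected behaviour: a DEACTIVATED row still exists in the membership table, but contributes nothing to the closure.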
[03:32] wgrant: thinking about the manual QA for https://bugs.launchpad.net/launchpad/+bug/1423428 - after reading https://dev.launchpad.net/Soyuz/QA it seems like I can't dput a src package that I know will fail to build... any advice?
[03:32] Bug #1423428: bad english when retrying a failed build
[03:46] thomi: Ubuntu has lots of failed builds, though you probably don't have privs to retry them. If you don't have any failures in your PPA, I can test one.
[03:48] thomi: Can you link the bug to the branch?
[03:48] For devel-fix-group-security, I mean.
[03:48] hmmm, I didn't already? sure
[03:51] wgrant: done.
[03:51] I'll do the manual QA for both those tomorrow first thing
[03:57] thomi: Yep, we have nothing urgent to deploy, so tomorrow's fine. Thanks.
[03:58] * wgrant offline for a bit for router recabling.
[11:59] Hey! I wanted to use the launchpad-api to get the source-package name for a selected binary package
[11:59] I tried using binary_package_publishing_history, but it doesn't have any information about its parent source package
[12:00] Is there any way of doing that? Earlier I had to query packages.ubuntu.com instead, but that's not an entirely clean solution
[12:02] I know it's relatively easy the other way around, since source_package_publishing_history has getPublishedBinaries...
[12:22] I don't think there's a sensible way currently, unless you've gone via source_package_publishing_history.getPublishedBinaries() to start with. We could perhaps expose BPPH.binarypackagerelease.build._getLatestPublication() or something like that as a new attribute ...
[21:29] hey wgrant
[21:31] thomi: Morning. Just saw the Fix Committed -> In Progress... does the bugfix not work?
[21:32] wgrant: crap - that was unintentional. stale page on my end I guess
[21:32] wgrant: but I'm having difficulty doing the QA - how can I get my user on qastaging to get in a few more groups? is there an admin account, or..?
[21:33] I think I'm missing permissions, so I don't see a 'retry build' link on https://qastaging.launchpad.net/ubuntu/+source/cupsys/1.2.2-0ubuntu0.6.06.4/+build/435478 for example
[22:07] thomi: Do you have a failed build in a PPA that you have access to, perhaps?
[22:11] wgrant: I just checked, it seems I don't
[22:11] at least, none of my personal ones
[22:11] I can start going through all the teams I'm in I guess
[22:15] Damn you and your former QA team apparently living up to its name.
[22:15] wgrant: I found one :D
[22:15] ...and it works
[22:15] nice
[22:15] :)
[22:15] I would be pretty worried if it didn't :P
[22:16] yeah
[22:16] now... the other bug will be harder to set up
[22:16] Quite. You may want to create a second account, or get someone else to assist you.
[22:16] yeah
[22:17] ..and figure out how to manually expire membership I guess
[22:19] "The name 'private-master' has been blocked by the Launchpad administrators." - damn those launchpad administrators!
[22:27] wgrant: I created a second account, but it seems the second account can't create private teams? I guess I need to be in ~canonical or something to do that?
[22:28] thomi: You need to own a project with a commercial subscription, yeah. What you can do is create the team with your real account, then change the owner to the new account.
[22:28] ahhh, cunning
[22:28] Or I can add your new account to some team.
[22:29] probably useful for the future - account name is thomi-r (or... is the db reset regularly? if so, then it won't be useful for the future I guess)
[22:29] Yep, qastaging and staging have their DBs replaced regularly.
[22:30] thomi: Try now.
[22:30] ok, I'll do it the other way then
[22:30] nice, thanks
[22:38] wgrant: ok, that seems to work.
Only thing I'm worried about is that I can't set this up without the main_team_owner being a deactivated member of the private sub-team (since you need to be a member of the private sub-team in order to be able to add it to the main team).
[22:38] I wonder if that's invalidated the test
[22:39] thomi: What if you create both teams with one account and link them together, then revoke the subteam's membership, add the other user as an admin of the superteam, then see how it works as the other user?
[22:40] ahhh, yeah, that might work
[22:40] * thomi trashes the qa database some more
[22:44] that works too - setting to qa-ok
[22:44] Excellent, thanks.
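On the earlier binary-to-source question: since the API (as described in this log) only exposes the source-to-binaries direction via source_package_publishing_history.getPublishedBinaries(), the workaround is to build the reverse mapping yourself. A minimal sketch of that inversion, assuming you have already collected (source_name, [binary_names]) pairs from the source side; the function and the sample data are illustrative, not a Launchpad API:

```python
# Hedged sketch: invert a source -> binaries listing into binary -> source.
# In practice the `pairs` input would come from iterating published sources
# and calling getPublishedBinaries() on each; here it is plain data.

def binary_to_source(pairs):
    """Build a binary-package-name -> source-package-name mapping."""
    mapping = {}
    for source, binaries in pairs:
        for binary in binaries:
            mapping[binary] = source
    return mapping

pairs = [
    ("cupsys", ["cupsys", "cupsys-client", "libcupsys2"]),
    ("bzr", ["bzr"]),
]
lookup = binary_to_source(pairs)
print(lookup["libcupsys2"])  # prints "cupsys"
```

This trades one bulk pass over the source side for O(1) lookups afterwards, which is the only clean option until something like the BPPH attribute mooted in the log is exposed.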