[00:00] so if we do 1
[00:00] we'll be able to switch to new versions at any point
[00:00] the deployment will be fast
[00:01] but there will be an open issue of garbage collection
[00:01] of two aspects
[00:01] old trees, and old processes
[00:01] with the constraint that the tree shouldn't be deleted until the processes running from it have exited
[00:01] and the possible complication that some processes may never exit
[00:02] well, I have asked that we have a signal to send those processes telling them to exit after the next verb; other than hostile attacks, they will always exit in that situation
[00:02] right, but this may take a long time
[00:03] and it may break some clients
[00:04] so adding that thing will help with gc of processes
[00:05] but it won't make it fast enough that we want to do it within the deployment loop itself
[00:06] I know it will break old clients, but folk do upgrade bzr fairly regularly
[00:07] ok
[00:07] well, that exists in bzr now
[00:07] i don't think calling it right away is necessarily a good idea
[00:08] so, you want to make it easier to rev the LP bzr service
[00:08] which is good
[00:09] Right now, it's nodowntime, but a manual request
[00:09] fsvo 'nodowntime'
[00:09] what do you mean?
[00:10] it drops connections at present?
[00:11] anyhow
[00:11] yes, i want to be able to land changes
[00:13] so...
[00:14] so you can today
[00:14] if you want to improve the deploy process, cool. I think you have a better handle on the constraints.
[00:14] but what's running will be uncontrolled vs what's in the tree
[00:15] which is undesirable
[00:15] I can't parse that
[00:15] the changes won't necessarily run until a restart is requested?
[00:16] so it seems like you're saying this is not actually a blocker to landing new bzr changes?
[00:17] we can do a new deployment, which will cause it to start a new version of bzr, but keep running the current frontend?
[00:17] a ndt request for codehosting will be slow and a bit manual, but when it completes everything is running the deployed revision
[00:17] because it's slow and manual, it's not included in the 'nodowntime' alias.
[00:18] but if you have a change for the service, it's a stock-standard deploy request away.
[00:19] ok, so there's no reason to think this bug is a blocker to doing other changes?
[00:19] it's just something that would be nice to do?
[00:20] it's very important to do as part of getting all LP running the same codebase all the time
[00:20] that's an operational efficiency priority
[00:23] i see it's critical
[00:24] i guess i'm wondering if it is super-critical, or if it blocks anything else
[00:24] and also, what would actually have to be done to close it
[00:24] It adds friction to the testing of the forking daemon
[00:24] because it takes so long to deploy a change (e.g. to stop using it)
[00:25] to close it, we need to have deploys be time bounded and not need admin intervention; they don't have to be super fast, just admin free.
[00:28] that seems to imply the tree will get automatically deleted hours later?
[00:29] hm
[00:29] Yes, when it reaches the front of the gc queue.
[00:30] is there any kind of technical gc queue at the moment or is that just done by losas?
[00:30] it's the last stage of the deploy script
[00:31] I would be happy if codehosting deploys took 2 hours - unsupervised, totally automatic
[00:31] ok, so the deploy script could pause for a long time waiting for the process to complete
[00:31] as long as it could eg poll something to see when it was done with that tree
[00:31] ideally shorter, but the key thing is the totally automatic aspect, which we don't have today.
[00:32] we tried to get it with an haproxy timeout, but that broke bzr (per the not-auto-reconnecting thing), and when we set a higher timeout, we found a set of clients that never go idle.
[00:32] those are the CI servers.
[00:32] lots of short verbs
[00:33] ..
[00:33] thus the two-pronged approach: tell the backend to shutdown as soon as it can, and tell bzr clients to reconnect if the channel goes away between verbs
[00:33] so perhaps this requires no more code changes, only changes to the deployment scripts?
[00:33] 1- tell haproxy to stop using that ssh server
[00:33] 2- send sighup to the bzr processes
[00:34] has the client side reconnect been SRU'd yet ?
[00:34] poolie: does sighup tell the bzr process to 'exit as soon as idle' ?
[00:34] 3- start a new ssh server and hook it into haproxy
[00:34] poolie: we have a signal for the front end already, that stops it listening
[00:34] lifeless, i can't SRU it until it has been well tested in trunk
[00:35] so there is a chicken and egg
[00:35] poolie: so 1) is signal the frontend, and already happens. As does 3 - just adding in step 2 is probably enough, when we're happy enough with the disconnect reliability.
[00:36] yes, the bzr side of this bug means the server will drop the connection after the current request
[00:36] ok so all we need is a deployment script change to have it sighup the bzr processes?
[00:37] or, perhaps the ssh server should do that when it is itself told to shut down
[00:37] yes, that's where I would put it in. I wouldn't want to do that until we think a majority of users will be running disconnect-ready clients though
[00:38] we could do it on staging
[00:38] as a way to ease into breaking the chicken-egg thing
[00:38] staging is not actually used
[00:38] so, this is why i saw a way out of letting the backend continue running for a few hours
[00:38] that will let most clients finish their business
[00:39] so that's the current status quo
[00:39] if we let them finish, we're not actually learning more about the safety of the code
[00:41] mm
[00:41] so, i don't know
[00:42] i don't want to just leave it stuck here
[00:42] we can do some manual tests against staging but i am not sure that will really give enough assurance
[00:42] in some ways i think the current status quo is a better design
[00:42] to just not disconnect people unless we have to
[00:43] why rely on the reconnect working when you can just avoid the issue
[00:44] well, this is one of the reasons http has a connection:close header
[00:44] perhaps we should add the ability to signal that to bzr, to make it more reliable.
[00:44] That said, we *know* there are clients out there that will stay connected for weeks.
[00:45] right
[00:45] so they're going to either have to upgrade bzrlib, or cope with some disconnections
[00:45] That is clearly unsustainable, and there is a certain elegance in having one solution.
[00:45] yes, we intend to communicate with them once the code is in all supported bzr series, and say 'please upgrade, you can use a point release if you want'
[00:47] so, again, a chicken and egg
[00:47] I have to go, sorry.
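To make the tail end of the deploy discussed above concrete, here is a minimal sketch, assuming the SIGHUP-means-exit-after-the-next-verb behaviour described in the log; the function and constants (retire_old_backend, POLL_INTERVAL, DEADLINE) are invented for illustration, and the haproxy/frontend steps (1 and 3) are not shown.

    import os
    import shutil
    import signal
    import time

    POLL_INTERVAL = 30           # seconds between liveness checks (invented)
    DEADLINE = 2 * 60 * 60       # "2 hours, totally automatic" bound from the log


    def pids_still_running(pids):
        """Return the subset of pids that are still alive."""
        alive = set()
        for pid in pids:
            try:
                os.kill(pid, 0)  # signal 0 only checks for existence
            except OSError:
                continue         # gone (or at least no longer ours)
            alive.add(pid)
        return alive


    def retire_old_backend(backend_pids, old_tree):
        """Ask idle backends to exit, wait (bounded), then gc the old tree."""
        # Step 2 from the log: ask each bzr backend to exit after its next verb.
        for pid in backend_pids:
            os.kill(pid, signal.SIGHUP)

        deadline = time.time() + DEADLINE
        remaining = set(backend_pids)
        while remaining and time.time() < deadline:
            time.sleep(POLL_INTERVAL)
            remaining = pids_still_running(remaining)

        if remaining:
            # Time-bounded, not silent: leave the tree for a later gc pass
            # rather than deleting it out from under live processes.
            raise RuntimeError("backends still running: %r" % sorted(remaining))

        # Constraint from the log: only delete the tree once nothing runs from it.
        shutil.rmtree(old_tree)

The point is the constraint stated above: the old tree is only removed once nothing is running from it, and the wait is time-bounded so no admin has to babysit the deploy.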
[00:47] np
[00:59] poolie, hi. bzr has been doing something a bit weird for me. I have a LP repo initialized with bzr init-repo --no-trees, and a lightweight checkout of local branches. For weeks or maybe even months now, (almost?) every time I switch the sandbox to local devel pull from LP's devel I get this: http://pastebin.ubuntu.com/776983/ . I then do break-lock and update and I'm fine, but it's kind of annoying. This is with bzr 2.4.2. Do you have any ideas or advice?
[01:00] hi, please file a bug, with a traceback and reproduction instructions, and try 2.5
[01:00] i don't recall an existing bug about that
[01:02] poolie ack will do. thanks
[01:02] (will do tomorrow) :-)
[01:02] bye
[01:02] thanks
[03:30] poolie: back
[03:42] lifeless, i don't want to ruin your vacation
[03:44] what's the preferred style for multi-line comments? # or '''
[03:45] for a comment, definitely #
[03:47] ok, thanks. makes it less readable though
[03:48] i saw that guido approves of ''' :-)
[03:58] guido also is quoted as saying 'what was I thinking' (about other things, but it shows his approval is no guarantee...)
[04:14] wallyworld_, i guess you know a ''' is not a comment?
[04:14] it's a string
[04:14] poolie: yes
[04:14] so apart from the specific use of docstrings, putting comments in them feels a bit weird
[04:14] but it serves the purpose :-)
[04:59] stub: hi there. if you had a moment, now or later, i'd love to get a 2nd opinion on some sql. i think it will be efficient but i'd like to check
[05:05] wallyworld_: sure
[05:05] stub: thanks. here's something to look at. i can explain whatever is unclear. https://pastebin.canonical.com/57479/
[05:06] stub: the code runs in the security adaptor
[05:07] Oh god.
[05:07] stub: the sql is ostensibly correct because the unit tests pass (the ones written so far). it's the performance aspects i'm interested in
[05:08] wgrant: what's wrong?
[05:08] This is crazy :)
[05:08] what is?
[05:08] Let's not put an expensive global bug search into person traversals..
[05:09] wgrant: it's only used in the api - for views, it's pre-cached
[05:09] I'm tending to agree with that, but looking at the SQL at the moment
[05:09] wallyworld_: Or for anything under a person.
[05:09] eg. a PPA, a branch.
[05:09] What is 'user_bug_filter'?
[05:09] Having pillar-wide visibility would render this obsolete.
[05:09] Which means we won't be causing more timeouts.
[05:09] stub: that's an existing filter implemented in BugTask used to return the bugtasks a user can see
[05:09] Which would be a good thing.
[05:10] wallyworld_: Have an example?
[05:11] stub: https://pastebin.canonical.com/57480/
[05:12] It probably won't be *that* bad most of the time, because of the lack of other constraints.
[05:12] However.
[05:12] It is a slippery and unnecessary slope, and will be slow in some situations.
[05:13] wgrant: that's the idea, and remember, it's only for when the API is used
[05:13] No.
[05:13] It's for any traversal through a person.
[05:13] And "only for when the API is used" is particularly bad.
[05:13] for the common use case, where someone opens a bugtask view, we pre-cache
[05:13] Because this adds probably >1s per request.
[05:13] And API makes lots of requests.
[05:14] another common case is when someone navigates to a private team page
[05:14] ah, but yes, any traversal through a private team will trigger it
[05:15] wallyworld_: what's this for?
[05:16] lifeless: it's in the security adaptor for limitedView permission on private teams
[05:16] wallyworld_: ah, person visibility ?
[05:16] wallyworld_: cool
[05:16] you don't need the bug ids, just exists
[05:16] so you can tune this to be more simple still
[05:16] wallyworld_: this only runs when the user doesn't have direct access to the team right ? (non-admin, non-member)
[05:17] lifeless: yes, these checks occur last in the security adaptor
[05:17] the least expensive ones are run first
[05:17] cool
[05:17] and \o/
[05:18] wgrant: this doesn't add 1s per request, it will add (amount to be checked) per request *by someone that cannot see the team today*
[05:18] lifeless: this will allow someone to click through a private team in the bug subscription portlet and see the limited details of the team for example
[05:18] And it scales for every single reference ever.
[05:18] It's silly and unnecessary.
[05:18] Overcomplicated, confusing, and slow.
[05:18] wgrant: linear probing increase on the query planner, stats-based linear search of log-time checks from there
[05:19] lifeless: + 100 other queries for the other person foreign keys.
[05:19] That clause is certainly a killer. Materializing a list of all public bug ids is a sequential scan of the bug table
[05:19] it's not confusing to me, or complicated. slow perhaps, which is why i've asked for advice
[05:19] wallyworld_: The query isn't.
[05:19] The concept is.
[05:19] not to me
[05:19] it's what we are basing social private teams on
[05:20] wgrant: not all fk's will lead to disclosure (libraryfilealias owner for instance, or message owner)
[05:20] stub: you mean the bug filter clause i pasted second?
[05:20] lifeless: No, but most have to, in the proposed model.
[05:20] many will yes
[05:21] So, we are going to be in a nice situation where you can see a private team when it's linked to something, except when you can't.
[05:21] we may want to build a fast data structure for it in time, but this is a reasonable approximation, especially given that it's an uncommon path anyway.
[05:21] And it's impossible to say who can see a private team.
[05:21] wallyworld_: Yes. The SQL you have will certainly need optimizing for use in a security adapter. Technically, it could be called an unlimited number of times on a page (once per private team)
[05:21] It's exactly the opposite of the disclosure project.
[05:21] lifeless: wgrant: but isn't the issue here that even if a private team is on a fk, the logged in user may not be able to see that object, so the team's existence should not be shown in that case
[05:21] Realistically, once or maybe twice
[05:22] It's making disclosure of private teams entirely opaque.
[05:22] wgrant: it's making it *possible* to use them sensibly
[05:22] stub: what we do now is pre-cache the permissions for pages which require this, so the security adaptor is not called
[05:22] lifeless: There are simpler ways.
[05:22] wgrant: such as?
[05:22] lifeless: Pillar-wide scope.
[05:23] To subscribe a private team to a bug, it has to be able to be disclosed to that project.
[05:23] So visibility is simply based on whether you can see a pillar in which the private team is involved.
[05:23] It makes it easy to see what is visible where.
[05:23] And it makes it difficult to make mistakes.
[05:24] stub: for example, on the bug page, the subscribers portlet which may have many private teams listed, all the permissions are cached
[05:24] And it is cheap to query, and very understandable.
[05:24] (it doesn't make sense until we have private projects, however)
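As a side note on the adapter mechanics mentioned earlier in this exchange ("these checks occur last in the security adaptor", "the least expensive ones are run first"), here is a minimal sketch of that ordering. All names are hypothetical, including the placeholder for the SQL probe; this is not Launchpad's actual adapter class.

    # Hypothetical names throughout; a sketch of cheap-checks-first ordering,
    # not Launchpad's real security adapter.

    def has_mutually_visible_artifacts(user, team):
        # Placeholder for the expensive SQL probe being discussed; see the
        # query sketches later in the log.
        raise NotImplementedError


    class LimitedViewForPrivateTeam:
        """Stand-in for a LimitedView security adapter on a private team."""

        def __init__(self, team):
            self.team = team

        def checkAuthenticated(self, user):
            # Cheapest checks first: admins and members never reach the
            # database probe at all.
            if user.is_admin:
                return True
            if user.inTeam(self.team):
                return True
            # Last resort, and the expensive part being tuned in the log:
            # is there anything the viewing user can see that this team is
            # linked to (bug subscriptions, assignments, and so on)?
            return has_mutually_visible_artifacts(user, self.team)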
[05:24] and it doesn't address things like ayatana
[05:25] Oh?
[05:25] private list of folk working on the project, public bugs, do public triage.
[05:25] as a for instance.
[05:25] The team membership is still private.
[05:25] right
[05:25] Visible only to members/admins/etc.
[05:25] does our model even support the simple determination of which pillars a private team is involved with yet? we would need something like the TeamParticipation table, no?
[05:25] But the existence of the team is not.
[05:25] LimitedView is existence, not membership.
[05:26] but then the next team over, which is totally private, but has access to private ayatana things, what is their safeguard for disclosure ?
[05:26] I posit that LimitedView should be granted pillar-wide.
[05:26] Huh?
[05:26] Confused.
[05:26] let's talk in Budapest; what wallyworld_ is doing is what we agreed made sense on the call a week or two back
[05:26] My plan was to talk in Budapest; I didn't expect this particular case to arise so quickly.
[05:26] or we can take this offline now and beat it to death, but I don't want my holiday to die a death of a thousand cuts
[05:27] What we have is surprisingly quick in any case for the example personid (243656)
[05:28] stub: is person 243656 some random person?
[05:28] http://paste.ubuntu.com/777089/
[05:28] wallyworld_: It was the id from your second pastebin :)
[05:28] 243656 is a dev person
[05:28] stub: ah, ok. that was from a unit test
[05:29] stub: try with me or you
[05:29] let's try it with a user that is a member of some teams then
[05:30] or a user that can see a bug that a team is subscribed to but is not a member of the team
[05:30] 1.2 seconds for me
[05:30] 7.5 seconds for me
[05:30] So let's see if we can optimize this.
[05:30] stub: lifeless also mentioned the bug_id bits could be optimised out
[05:30] yer
[05:31] http://paste.ubuntu.com/777092/
[05:31] oh, I fail at adjusting
[05:32] wallyworld_: hang on, there are two persons involved
[05:32] yes, the private team we are checking the visibility of and the person who wants to see
[05:32] there is the person being *viewed* and the person *viewing*
[05:32] stub's test only has one id
[05:32] the pathological case is a private team that the person doesn't know about
[05:33] ah, didn't notice that
[05:33] the bug visibility code is 'which bugs can this person see'
[05:33] but we want 'are there mutually visible bugs'
[05:33] I think you'll need to start from the ground up
[05:34] yes, the team being checked has its id in the TeamParticipation query near the top
[05:34] yes, but that's still wrong
[05:34] there are *two* team expansions to do
[05:34] * lifeless stops to think
[05:34] lifeless: the query checks directly subscribed to and assignee
[05:34] i think the query is correct
[05:35] the private team is visible if:
[05:35] - it has any interaction with a truly public object
[05:35] - there is any interaction with a visible object
[05:36] so the one half should be 'user's visible objects', the other half 'interactions with objects'
[05:36] the current implementation is for a subset of such rules, as per the comments in the python in the pastebin
[05:36] just the things we can do quickly
[05:36] and we can short circuit a bunch of that by saying 'mutual interactions'
[05:36] wallyworld_: yes, I saw that
[05:37] so the blueprint check is one sided because they are only public
[05:38] i included that check to hook into the team participation query
[05:38] the select bug_id ... would probably be better phrased as two separate things
[05:38] a) bug subscription or assignee to public bugs
[05:38] b) subscription or assignee to a mutually visible private bug
[05:39] b) is not quite right i don't think
[05:40] Better https://pastebin.canonical.com/57481/
[05:40] just because a private team can see a private bug doesn't mean i can see that private team
[05:40] * wallyworld_ looks
[05:41] wallyworld_: I know - there are two conditions: interact *and* mutually visible
[05:41] lifeless: not sure i agree with mutually visible
[05:42] wallyworld_: so, if the private team interacts with something I cannot see, I cannot see the team, right ?
[05:42] lifeless: yes
[05:42] that's why mutually visible
[05:43] i think i misunderstood you
[05:43] if the private team does not interact with it, I can't see the team.
[05:43] I believe in an exhaustive test suite and the query is right when it passes and is fast enough :)
[05:43] And if I can't see the bug I can't see the team
[05:43] and they cannot interact with something they cannot see
[05:44] that 2nd point is irrelevant if we query for the things the team interacts with, no?
[05:44] since the result set will by definition contain the things they can see
[05:44] depends on wgrant's implementation plans for folk that are kicked out of a project
[05:45] That's one bit I hadn't worked out yet.
[05:45] anyhow, 'visible to user + interacted with by the team' is sufficient for now
[05:45] stub: so, if i redo it based on your query, how do we want to ensure it's fast enough in practice?
[05:45] wallyworld_: a doubly nested for loop on staging
[05:45] for person in persons for team in teams
[05:46] stub: is that query rephrased at all, or just new constants plugged in ?
[05:46] it is rephrased somewhat
[05:47] https://pastebin.canonical.com/57482/ is even more reworked
[05:47] lifeless: so you'd prefer this to land behind a feature flag? so we can test on staging and leave the flag off if we are not happy?
[05:47] Just mild tweaks though
[05:47] wallyworld_: always ;)
[05:47] ok
[05:47] wallyworld_: safety nets make radical change easier
[05:47] wallyworld_: this is radical
[05:48] So where we want to know 'EXISTS', don't generate more results than absolutely necessary
[05:48] stub: so if i land this, and it's up on qas, can you help me test it for performance before fflag is activated?
[05:49] pg will shortcut a simple query in an EXISTS, but I don't know if it is smart enough for a UNION query so I optimized those bits. Also, UNION ALL will be faster than UNION, again PG might have optimized that for us but better safe than sorry
[05:49] stub: cool, I was just checking with me and wanted to know whether to fiddle with c&p from your query
[05:49] wallyworld_: If it is up on qas, you don't need me - you can get query times for a page request directly
[05:49] wallyworld_: I'm pretty sure you will want to unwrap the user_bug_visibility filter and partition the checks
[05:50] wallyworld_: IMBW but splitting on unbalanced data often makes pg happier
[05:50] lifeless: not sure i understand what you mean by partition the checks
[05:51] My updated query will involve changing the user_bug_visibility in any case - changing the single EXISTS to two EXISTS
[05:51] oic
[05:51] stub: with the testing question, i will and can do my own tests. lifeless mentioned the use of a double for loop, hence my question about additional tests
[05:51] wallyworld_: rather than checking 'all interacted with' against 'all visible'
[05:52] he means 'WHERE Bug.id IN (SELECT id FROM Bug WHERE private IS FALSE) OR Bug.id IN (SELECT id FROM Bug WHERE private IS TRUE AND (...)'
[05:52] wallyworld_: do two checks with - 'interacted with public' || 'interacted with private-visible'
[05:52] ah, ok
[05:52] note that one half of that is dramatically cheaper
[05:52] and it may short circuit the check?
[05:52] don't need to know what the viewer did if there are any public interactions
[05:53] sorry if i've stirred up a hornet's nest with this. i really appreciate your and stub's help
[05:56] may want a CTE of the bug subscriptions, may not.
[05:56] CTE?
[05:57] common table expression - the WITH stuff
[05:57] ah, ok
[06:04] hmmm... make run is failing with "mkdir: cannot create directory `/var/tmp/bazaar.launchpad.dev/mirrors': Permission denied"
[06:04] anyhow, stub has a fast query now.
[06:04] it still needs work
[06:04] remember that disclosure brings in a new model for 'can view'
[06:05] Worth getting it as fast as possible as there will be outliers that cause grief
[06:05] so there are diminishing returns in optimising this today: a) it's used in the may-fail-anyway case, and b) the underpinnings are going to be reworked soon
[06:05] stub: sure, not meaning you should stop
[06:05] just be aware that we'll need to do it all again in 3 weeks or so :)
[06:06] yer
[06:06] just looking at my last query plan on how much was never executed - short circuits are great, but they won't always happen
[06:08] huwshimi: sudo remove that dir should fix it i think
[06:08] wallyworld_: Thanks, I'll give it a go
[06:09] wallyworld_: It doesn't exist
[06:09] lifeless: stub: doing this now fixes the current 404 on prod when users try and click through to private teams, plus picks up the work curtis just did this week on the page templating to take account of limitedView permissions
[06:09] wallyworld_: Oh, the parent does, I'll remove that
[06:09] huwshimi: remove the parent
[06:09] beat me to it :-)
[06:10] I haven't been following the frequent and lengthy discussions so am trusting a sane rationale for all this was produced :)
[06:10] wallyworld_: Perfect, thankyou
[06:10] huwshimi: np. it's apache that writes it out as root i think
[06:10] silly apache
[06:12] stub: lifeless and curtis agree on it all. it's a step along the way to a better solution and a fairly radical change to allow some private team details to be known. stakeholders seem pleased that we are doing it
[06:22] wallyworld_: So if a team has subscribed to a public bug, or subscribed to a private bug that has been made public, that team is always visible?
[06:25] A simple CSS review: https://code.launchpad.net/~huwshimi/launchpad/tag-link-colour-907132/+merge/86499
[06:27] stub: yes. if a user can see a bug that a team is subscribed to (it may be a private bug), that team has limitedView visibility for the user
[06:28] Note that LimitedView doesn't provide access to the member listing.
[06:28] just the name, displayname etc
[06:28] Yes, and if that is a public bug then it is visible to everybody
[06:28] stub: yes
[06:28] since the team has placed itself in a public role
[06:29] Or someone twiddled the privacy setting on a bug they are subscribed to :)
[06:29] Not necessarily a member of the team
[06:29] stub: and we are implementing code to warn the user who is placing the team in such a role when this happens
[06:29] Exactly.
[06:29] This is why it gets very messy.
[06:29] Anyone can disclose the team.
[06:31] huwshimi: with the new screenshot, it looks like "None" is lighter than the other tags?
[06:31] wallyworld_: Yeah, it's not a link
[06:32] huwshimi: my (perhaps invalid) opinion is that the label should be different from the text content
[06:32] colour
[06:33] sure, links need to be different as well
[06:33] wallyworld_: I think that's valid, but as you say the links should be different
[06:33] wallyworld_: I guess this branch is just about link colours
[06:33] yes. and i don't like the grey personally
[06:33] Should the tag links perhaps look like tags?
[06:33] like status/importance do
[06:34] grey to me means disabled
[06:34] wallyworld_: While you are there, can you quickly twiddle the code that generates https://pastebin.canonical.com/57480/ to say 'IS FALSE' rather than '= FALSE' to give us maximum chance of being optimized.
[06:34] wgrant: I think we should do something with our tags, but if we do we need to do it everywhere
[06:34] stub: sure
[06:35] wallyworld_: And the UNION to a UNION ALL
[06:35] huwshimi: I agree, but we changed status and importance in only one place...
[06:35] stub: this will be in the existing bug filter code. so i'll fiddle the sql and let the existing tests validate correctness
[06:36] wallyworld_: Yes. Doing my suggestion will be a win for everything.
[06:36] wallyworld_: I'd like to change it to two EXISTS rather than one using UNION ALL, but that might have fallout.
[06:37] stub: you mean performance regressions?
[06:37] for existing bug queries
[06:37] wallyworld_: The changes I suggested will not cause regressions. The change I'd like to do might, so I won't get you to make a quick hack right now :)
[06:37] (just do the IS FALSE and UNION ALL)
[06:37] wgrant: Yeah, we need to fix that
[06:39] huwshimi: They look all old and crap everywhere else :(
[06:39] stub: ok. i'll also split the other query as suggested earlier, and add you as a reviewer for the mp to check it?
[06:39] Sure
[06:39] thanks :-) i may not get it done tonight, have other stuff on
[06:40] gotta finish converting the existing doc test to unit tests
[06:40] huwshimi: so were you going to dick with the colours a bit?
[06:42] wallyworld_: Well there are outstanding issues that will affect what other changes would be made. E.g. bug 894734
[06:42] <_mup_> Bug #894734: "Assignee: None" and "Tags: None" clutter bug listings < https://launchpad.net/bugs/894734 >
[06:43] wallyworld_: And all the other bugs about content not being linked
[06:43] huwshimi: and you are sure about the yucky grey?
[06:44] wallyworld_: Well we could look at having a saturated version of the blue
[06:44] huwshimi: my opinion is that that would be better, since grey to me signifies disabled, and is used elsewhere in lp to mean that (eg bug and branch auto linking in text content)
[06:45] wallyworld_: https://pastebin.canonical.com/57483/ is what I end up with anyway. I could split up the bug check into 'subscribed to public bugs', 'subscribed to visible bugs' etc., but that would involve bypassing user_bug_filter. I thought this would be bad, as that stanza will be evolving as bug privacy evolves.
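The pastebinned queries themselves are not reproduced in the log, so the following is only an illustrative sketch of the shape stub and lifeless describe above: two EXISTS branches, the cheap "public interactions" test first and the "mutually visible private bugs" test second, with IS FALSE-style predicates. Table and column names are from memory of the Launchpad schema and may not match exactly, and the innermost subquery is a deliberately naive stand-in for user_bug_filter (assignees, team participation and the rest are omitted).

    # Illustrative only: shaped after the two-branch split discussed above,
    # not the actual pastebinned query.
    TEAM_HAS_VISIBLE_INTERACTION = """
    SELECT
        EXISTS (
            -- Cheap branch: the team is subscribed to a plain public bug, so
            -- no viewer-specific work is needed.  If this matches, the second
            -- branch often never has to run (though, as stub notes, short
            -- circuits are not guaranteed).
            SELECT 1
            FROM BugSubscription
            JOIN Bug ON Bug.id = BugSubscription.bug
            WHERE BugSubscription.person = %(team_id)s
              AND Bug.private IS FALSE)
        OR EXISTS (
            -- Expensive branch: the team is subscribed to a private bug that
            -- the viewing user can also see ("mutually visible").
            SELECT 1
            FROM BugSubscription
            JOIN Bug ON Bug.id = BugSubscription.bug
            WHERE BugSubscription.person = %(team_id)s
              AND Bug.private IS TRUE
              AND Bug.id IN (
                  -- naive stand-in for user_bug_filter
                  SELECT bug FROM BugSubscription
                  WHERE person = %(user_id)s))
    """

The intermediate step mentioned in the log, a single EXISTS over a UNION ALL, has the same two halves; splitting them into separate EXISTS clauses is what lets the cheap half pay off on its own.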
[06:46] stub: yes, that's why i wanted to use that method to get the user bug filter
[06:46] stub: not looking too closely, that query took 0.5, the last one took 0.3.
[06:47] wallyworld_: I'm about to finish up for the day (and wander over the road for the Big Bash), but I'll have a play and probably update things tomorrow.
[06:47] wallyworld_: That was using incorrect parameters I think
[06:48] oh no
[06:49] yer - https://pastebin.canonical.com/57482/ does seem quicker, both in the cost estimate and the actual query time
[06:49] huwshimi: ok. i went to the big bash last night. saw warnie and liz. bloody excellent. have fun
[06:50] stub: ok. i'll go with the quicker one for now. i have to run up and make dinner - my wife came home from a procedure at hospital today. but i'll check irc again after dinner
[06:50] thanks so much for your help
[06:51] wallyworld_: Yeah, should be good
[06:51] wallyworld_: np. I'm done anyway - need to go play with replication
[08:58] Any reviewers in the house? https://code.launchpad.net/~jtv/launchpad/lint-lpserve/+merge/86498
[09:03] jtv1: done
[09:03] thanks
=== almaisan-away is now known as al-maisan
=== jtv1 is now known as jtv
[10:10] Does anyone have any idea why I'd get this error http://pastebin.ubuntu.com/777225/ whilst running cronscripts/scan_branches.py against a local codehosting instance?
[10:12] Ah, hang on.
[10:12] Because I'm an ijit, that's why.
[10:13] gmb: was /var/tmp/bazaar.launchpad.dev/mirrors 700 or so?
[10:13] That's been happening for me recently.
[10:15] wgrant, Hmm, lemme check. I'm using dev.launchpad.net/Running/RemoteAccess and the Apache config for bazaar-internal looks a bit weird...
[10:15] No, /mirrors is rw-rw-r--
[10:16] /var/tmp/bazaar.launchpad.dev is sensible too?
[10:16] Apache likes to create them as root with a restricted umask :/
[10:17] Yep, that's sane.
[10:17] Huh
[10:17] I guess the apache error log is the next step.
[10:17] Yep.
[10:19] Haha
[10:19] [client 172.16.15.154] client denied by server configuration: /var/tmp/bazaar.launchpad.dev/mirrors/00/00/00/4d/.bzr/branch-format
[10:19] The clue's in the brackets.
[10:19] The RemoteAccess document needs a rewrite :)
[10:19] Ah
[10:20] I keep my container's /etc/hosts pointing at 127.0.0.88
[10:20] Wise.
[10:20] Only the host's gets the real IP.
[10:21] As it should be.
[10:21] I'll update the page.
[10:22] Great.
[10:50] Argh. Can anyone remember francis's non-netstat way of finding out what's hogging a port?
[10:50] fuser?
[10:51] fuser /tcp
[10:51] Or lsof
[10:51] lsof!
[10:51] That's it.
[10:51] Thanks.
[10:51] Twisted: I hate you, die in a fire.
[10:51] gmb: I tend to reach for fuser first, then netstat, then lsof. But whatever works.
[10:51] my third successive day of doing maintenance rotation. FML
[10:52] bigjools: I shall think of you while I play Half-Life.
[10:52] gmb: Twisted is good, except its logging
[10:52] StevenK, Ah, cool. I went netstat->lsof, but I'll try your ordering next time.
[10:52] StevenK: you still play Half Life?
[10:53] bigjools: I haven't played it before. It hit 75% off on Steam today, so I grabbed all of it.
[10:54] StevenK: it's *awesome* :)
[10:54] I wasted my life on it 10+ years ago
[10:54] Haha
[10:54] if you go online with it, prepare to end yours too
[10:55] Now for the Quake Collection to hit 75% off. Which, amusingly enough, doesn't include Quake 4.
[10:56] bigjools: The first Portal is 50% off, so it's probably only a few quid.
[10:57] StevenK: cool, I might get it on the PS3
[11:07] $3.75 vs ???
[11:11] stub: It's $10 normally
[11:11] It was on special. $10 I think got you both 1+2
[11:12] yesterday I think, maybe still is.
[11:12] I'm backlogged about a year with games, so no need to pay full price :)
[11:14] Haha
[11:15] Hello, we'd appreciate a review of https://code.launchpad.net/~james-w/launchpad/bpph-binary-file-urls/+merge/86470
[11:18] * bigjools will look
[11:22] bigjools: thanks.
[11:24] gmb: the great thing about Twisted is that it holds your puny hatred in contempt, dwarfed as it is by its own vast loathing.
[11:25] gmb: try getting that with Django!
[11:31] jml, james_w: needs fixing, but minor stuff
[11:32] * gmb wonders if he can fix bug 808930 buy just adding another commit()....
[11:32] <_mup_> Bug #808930: Timeout running branch scanner job < https://launchpad.net/bugs/808930 >
[11:33] (The answer is "probably." Whether it's the _right_ fix or not is debatable)
[11:34] "Buy" just adding another commit? Seriously Graham, _that's_ the way you're spelling it?
[11:36] gmb: Howso?
[11:37] wgrant, Well, first, s/probably/possibly. We're suffering death by SQL (in at least one case) with a big for loop that holds the txn open whilst it adds new revisions.
[11:38] Now, adding a commit at the end of that loop may obviate the problem.
[11:38] But as a solution, it stinks.
[11:38] Is this creating new Revisions?
[11:38] Or BranchRevisions?
[11:38] This is new Revisions.
[11:41] It doesn't do a massive manually constructed INSERT?
[11:41] Nope.
[11:41] Huh.
[11:41] Lots of revision = Revision(...)
[11:41] How did that ever work :/
[11:41] Awesome!
[11:41] wgrant, There's a lovely comment to the effect of "this is bad, but, eh, it works."
[11:41] So, this is the issue I thought you'd hit first.
[11:41] Didn't think it was quite *that* bad, though.
[11:42] :)
[11:42] There's already a mass-insertion method for branchrevision.
[11:42] And binarypackagepublishinghistory
[11:42] Ah, cool.
[11:43] I shall crib appropriately.
[11:43] And possibly bugnotificationrecipient and co.
[11:43] It's ugly, but it works.
[11:43] Hey, I'm not pretty, but I do okay.
[11:43] "Ugly but works" works for me.
[11:43] You may well need a looptuner, since inserting 200000 revisions in a single statement might not be too helpful, but we'll see...
[11:43] LoopTuner was one of the things I was thinking of.
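A rough sketch of the direction wgrant suggests above: batch the new Revision rows instead of creating one ORM object per row inside a single long transaction. This is plain DB-API pseudocode with invented names (insert_revisions, BATCH_SIZE) and a trimmed column list, not the branch scanner's real code or Launchpad's actual mass-insertion helpers.

    BATCH_SIZE = 1000  # invented figure; a LoopTuner would tune this dynamically


    def insert_revisions(connection, revision_rows):
        """Insert (revision_id, log_body, revision_date) tuples in batches."""
        cursor = connection.cursor()
        batch = []
        for row in revision_rows:
            batch.append(row)
            if len(batch) >= BATCH_SIZE:
                _flush(connection, cursor, batch)
                batch = []
        if batch:
            _flush(connection, cursor, batch)


    def _flush(connection, cursor, batch):
        # One parameterised batch per round trip instead of one ORM object per
        # revision; the real Revision table has more columns than shown here.
        cursor.executemany(
            "INSERT INTO Revision (revision_id, log_body, revision_date) "
            "VALUES (%s, %s, %s)",
            batch)
        # Committing per batch keeps the transaction short, which is the point
        # of the LoopTuner suggestion above.
        connection.commit()

A LoopTuner-style wrapper would grow or shrink the batch size to hit a target time per iteration rather than hard-coding it.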
[11:56] morning
[12:00] I'll get my pending QA done at some point over the next couple of days, BTW - currently on holiday but should be able to squeeze it in to unblock people
[12:01] mind you I'm not sure if anyone's planning another ftpmaster deployment this year :-)
[12:59] ok, I have to admit, github's "Merge" button is really really nice for code review.
[13:00] I know that tarmac does something equivalent for us, but I'd really really like to see something like that on LP :-)
[13:05] nigelb: :)
[13:05] nigelb: subsume reviewboard!
[13:06] heh
[13:06] even though I'm not a django fan
[13:07] rick_h__: The other day I probably complained a little too hard about sqlalchemy and michael bates replied to me :P
[13:07] beyer?
[13:07] bah, beyer
[13:07] nigelb: he's really awesome at being open/communicating
[13:07] and a super genius
[13:07] but cool, he helped you out then?
[13:08] Well, eventually, it wasn't sqlalchemy that was giving me the error, either
[13:08] Something else threw an sqlalchemy error :)
[13:08] orly?
[13:08] now I'm curious, what was throwing an SA exception at you?
[13:08] Yeah. Flask-Admin.
[13:09] ah, gotcha
[13:09] yea, I've not used flask other than hello world to kind of look at it
[13:09] I'm coding exclusively in Flask for work.
[13:09] It's harder, but it's much more flexible than a full framework.
[13:09] yea, I liked what I saw
[13:10] and if I had a small app I'd use it
[13:11] My boss wrote something like login.launchpad.net in Flask, except it's an OAuth provider and not OpenID.
[13:12] cool, check out https://github.com/bbangert/velruse sometime
[13:12] another "to check out" sometime item
[13:12] * nigelb looks
[13:13] Interesting!
[13:13] yea, I really love the idea
[13:13] I was pitching it at my last job as a way to centralize auth through our apps
[13:15] we have a centralized auth through our app. And have a client lib for our apps :-)
[13:16] cool
[13:16] yea, we had a ton of small little apps and used ldap internally, but for external users thought it might make sense.
[13:16] we'd have needed to find a way to hack an ldap provider into there somehow, but would have been cool
[13:17] that's the one thing I like about almost all Canonical apps. Everything uses openID.
[13:17] right
[13:17] took a little bit, but thank goodness
[13:25] nigelb: How much data?
[13:25] StevenK: Every single person row.
[13:25] Argh
[13:25] :D
[13:25] Yes, write a garbo job.
[13:26] nigelb: lib/lp/scripts/garbo.py. Create a new class with a descriptive name, copy method *names* from the other classes.
[13:27] nigelb: What the methods return should be self-explanatory. If you want more help from me, you'll have to wait until it's not 0030.
[13:27] StevenK: Well, I originally wanted to poke you tomorrow and work along asking you questions.
[13:28] I suppose you can do that.
[13:28] I'll make some time before I start work to finish it off.
[13:28] I haven't worked on LP in a while :)
[14:00] bigjools, how do I know what to compare the list of urls to?
[15:21] james_w: the stuff you created in the factory
[15:22] make a list out of it and compare to the listified results
[15:22] bigjools, do the same url generation against them?
[15:22] james_w: well I'd construct an expected URL
[15:23] bigjools, using ProxiedLibraryFileAlias?
[15:23] yup
[15:24] ok
[15:24] james_w: or even just file.htt-purl
[15:24] err
[15:24] file.http_url
[15:24] except that won't match as it's not proxied?
[15:24] there's nothing to proxy
[15:24] unless it's private
[15:34] bigjools,
[15:34] AssertionError: !=:
[15:34] reference = ['http://localhost:41286/93/filename-100190']
[15:34] actual = [u'http://launchpad.dev/~person-name-100146/+archive/unique-from-factory-py-line2685-100148/+files/filename-100190']
[15:34] : ['http://localhost:41286/93/filename-100190'] != [u'http://launchpad.dev/~person-name-100146/+archive/unique-from-factory-py-line2685-100148/+files/filename-100190']
[15:35] so it looks like they are proxied?
[15:35] archive = self.factory.makeArchive(private=False)
[15:36] james_w: meh, my bad then.
[15:37] the redirect happens in the appserver
[16:06] deryck: can you review this please? https://code.launchpad.net/~rharding/launchpad/sort_buttons_907376/+merge/86579
[16:09] bigjools, https://code.launchpad.net/~james-w/launchpad/bpph-binary-file-urls/+merge/86470
[16:11] checking
[16:12] james_w: perfect, thanks. Do you need me to land it? I can't recall if you can do that
[16:13] bigjools, I had big problems doing so last time, so if you could throw it at ec2 it would be much appreciated
[16:13] sure thing
[16:13] james_w: can you set a commit message
[16:13] thanks
[16:13] oh yeah
[16:14] done
=== deryck is now known as deryck[lunch]
=== al-maisan is now known as almaisan-away
=== deryck[lunch] is now known as deryck
[17:54] deryck: ping, have a sec to peek at https://code.launchpad.net/~rharding/launchpad/sort_buttons_907376/+merge/86579
[17:56] rick_h__, sure
[17:56] deryck: ty
[18:02] rick_h__, so it feels like the kind of thing we should test since we keep breaking because of it.
[18:02] rick_h__, but it seems hard to test, too.
[18:02] deryck: right, because you have to find/replace all cases wherever you're catching the history event
[18:02] the only *good* way would be to wrap history event access entirely with our own api
[18:03] deryck: I searched before, but missed this case because it was in the .pt and not in the .js
[18:03] deryck: I'd definitely prefer to have this stuff in a .js file and the .pt only do a single "setup()" kind of call, but didn't want to change this that much
[18:03] rick_h__, yeah. so let's do this change as it is now. and can you file a high bug about needing to wrap history handling to prevent this happening again?
[18:03] deryck: sure thing
[18:04] rick_h__, awesome, thanks!
[19:46] gary_poster, so about deploy…. I can hold for a bit to see if you guys can qa benji's work, if that's cool.
[19:47] gary_poster, or were you thinking this is something to wait for tomorrow on?
[19:47] benji, see ops :-/
[19:47] I mean deryck, see ops :-/
[19:47] gary_poster, gotchas, I see now. thanks.
[21:25] Does anyone mind if I set up /srv/launchpad.net/ubuntu-archive/gnupg-home/ on mawson so that I can QA https://code.launchpad.net/~cjwatson/launchpad/sign-installer/+merge/85670 ? Much like the setup on production only with a dogfood key
[21:27] y'know what, I'm pretty certain nobody cares so I'll JFDI and we can remove it later if it breaks somebody
[21:30] * cjwatson wonders what "The calling script may set GNUPGHOME ..." is supposed to mean in cronscripts/publishing/distro-parts/ubuntu/publish-distro.d/10-sign-releases given that it proceeds to unconditionally override that variable
[21:32] maybe the right answer is ln -s $HOME/.gnupg /srv/launchpad.net/ubuntu-archive/gnupg-home
[21:34] any objections to upgrading mawson to >= r14564?
[21:54] OK, nobody else logged in, so upgrading mawson
[21:57] hi all