[00:00] <poolie> so if we do 1
[00:00] <poolie> we'll be able to switch to new versions at any point
[00:00] <poolie> the deployment will be fast
[00:01] <poolie> but there will be an open issue of garbage collection
[00:01] <poolie> of two aspects
[00:01] <poolie> old trees, and old processes
[00:01] <poolie> with the constraint that the tree shouldn't be deleted until the processes running from it have exited
[00:01] <poolie> and the possible complication that some processes may never exit
[00:02] <lifeless> well, I have asked that we have a signal to send those processes telling them to exit after the next verb; other than hostile attacks, they will always exit in that situation
[00:02] <poolie> right, but this may take a long time
[00:03] <poolie> and it may break some clients
[00:04] <poolie> so adding that thing will help with gc of processes
[00:05] <poolie> but it won't make it fast enough that we want to do it within the deployment loop itself
[00:06] <lifeless> I know it will break old clients, but folk do upgrade bzr fairly regularly
[00:07] <poolie> ok
[00:07] <poolie> well, that exists in bzr now
[00:07] <poolie> i don't think calling it right away is necessarily a good idea
[00:08] <lifeless> so, you want to make it easier to rev the LP bzr service
[00:08] <lifeless> which is good
[00:09] <lifeless> Right now, it's nodowntime but a manual request
[00:09] <poolie> fsvo 'nodowntime'
[00:09] <lifeless> what do you mean?
[00:10] <poolie> it drops connections at present?
[00:11] <poolie> anyhow
[00:11] <poolie> yes, i want to be able to land changes
[00:13] <poolie> so...
[00:14] <lifeless> so you can today
[00:14] <lifeless> if you want to improve the deploy process, cool. I think you have a better handle on the constraints.
[00:14] <poolie> but what's running will be uncontrolled vs what's in the tree
[00:15] <poolie> which is undesirable
[00:15] <lifeless> I can't parse that
[00:15] <poolie> the changes won't necessarily run until a restart is requested?
[00:16] <poolie> so it seems like you're saying this is not actually a blocker to landing new bzr changes?
[00:17] <poolie> we can do a new deployment, which will cause it to start a new version of bzr, but keep running the current frontend?
[00:17] <lifeless> a ndt request for codehosting will be slow and a bit manual, but when it completes everything is running the deployed revision
[00:17] <lifeless> because it's slow and manual, it's not included in the 'nodowntime' alias.
[00:18] <lifeless> but if you have a change for the service, its a stock standard deploy request away.
[00:19] <poolie> ok, so there's no reason to think this bug is a blocker to doing other changes?
[00:19] <poolie> it's just something that would be nice to do?
[00:20] <lifeless> it's very important to do as part of getting all LP running the same codebase all the time
[00:20] <lifeless> that's an operational efficiency priority
[00:23] <poolie> i see it's critical
[00:24] <poolie> i guess i'm wondering if it is super-critical, or if it blocks anything else
[00:24] <poolie> and also, what would actually have to be done to close it
[00:24] <lifeless> It adds friction to the testing of the forking daemon
[00:24] <lifeless> because it takes so long to deploy a change (e.g. to stop using it)
[00:25] <lifeless> to close it, we need to have deploys be time bounded and not need admin intervention; they don't have to be super fast, just admin free.
[00:28] <poolie> that seems to imply the tree will get automatically deleted hours later?
[00:29] <poolie> hm
[00:29] <lifeless> Yes, when it reaches the front of the gc queue.
[00:30] <poolie> is there any kind of technical gc queue at the moment or is that just done by losas?
[00:30] <lifeless> it's the last stage of the deploy script
[00:31] <lifeless> I would be happy if codehosting deploys took 2 hours - unsupervised, totally automatic
[00:31] <poolie> ok, so the deploy script could pause for a long time waiting for the process to complete
[00:31] <poolie> as long as it could eg poll something to see when it was done with that tree
[00:31] <lifeless> ideally shorter, but the key thing is the totally automatic aspect, which we don't have today.
[00:32] <lifeless> we tried to get it with an haproxy timeout, but that broke bzr (per the not-auto-reconnecting thing), and when we set a higher timeout, we found a set of clients that never go idle.
[00:32] <lifeless> those are the CI servers.
[00:32] <lifeless> lots of short verbs
[00:33] <poolie> ..
[00:33] <lifeless> thus the two-pronged approach: tell the backend to shutdown as soon as it can, and tell bzr clients to reconnect if the channel goes away between verbs
[00:33] <poolie> so perhaps this requires no more code changes, only changes to the deployment scripts?
[00:33] <poolie> 1- tell haproxy to stop using that ssh server
[00:33] <poolie> 2- send sighup to the bzr processes
[00:34] <lifeless> has the client side reconnect been SRU'd yet ?
[00:34] <lifeless> poolie: does sighup tell the bzr process to 'exit as soon as idle' ?
[00:34] <poolie> 3- start a new ssh server and hook it into haproxy
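poolie's three deployment steps, plus the time-bounded wait lifeless asks for, could be sketched roughly like this (a minimal sketch only; every function name here is a hypothetical stand-in, not part of the real deploy scripts):

```python
import time

def rolling_deploy(stop_frontend, sighup_backends, backends_gone,
                   start_frontend, poll_interval=30, timeout=2 * 60 * 60):
    """Hypothetical drain-and-restart loop for a codehosting deploy.

    1. tell haproxy/the frontend to stop routing to the old ssh server
    2. SIGHUP the old bzr processes so they exit after their next verb
    3. start the new ssh server, then wait (time-bounded, admin-free)
       for the old processes to exit so the old tree can be deleted
    """
    stop_frontend()          # step 1: stop accepting new connections
    sighup_backends()        # step 2: ask backends to exit once idle
    start_frontend()         # step 3: the new version starts serving
    deadline = time.monotonic() + timeout
    while not backends_gone():
        if time.monotonic() >= deadline:
            return False     # stragglers remain; leave the tree for gc
        time.sleep(poll_interval)
    return True              # safe to delete the old tree now
```

The property the discussion is after is the time bound: the deploy either finishes cleanly or hands the leftover tree to a gc queue, with no admin intervention either way.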
[00:34] <lifeless> poolie: we have a signal for the front end already, that stops it listening
[00:34] <poolie> lifeless, i can't SRU it until it has been well tested in trunk
[00:35] <poolie> so there is a chicken and egg
[00:35] <lifeless> poolie: so 1) is signal the frontend, and already happens. As does 3 - just adding in step 2 is probably enough, when we're happy enough with the disconnect reliability.
[00:36] <poolie> yes, the bzr side of this bug means the server will drop the connection after the current request
[00:36] <poolie> ok so all we need is a deployment script change to have it sighup the bzr processes?
[00:37] <poolie> or, perhaps the ssh server should do that when it is itself told to shut down
[00:37] <lifeless> yes, that's where I would put it. I wouldn't want to do that until we think a majority of users will be running disconnect-ready clients though
[00:38] <lifeless> we could do it on staging
[00:38] <lifeless> as a way to ease into breaking the chicken-egg thing
[00:38] <poolie> staging is not actually used
[00:38] <poolie> so, this is why i saw a way out of letting the backend continue running for a few hours
[00:38] <poolie> that will let most clients finish their business
[00:39] <lifeless> so that's the current status quo
[00:39] <lifeless> if we let them finish, we're not actually learning more about the safety of the code
[00:41] <poolie> mm
[00:41] <poolie> so, i don't know
[00:42] <poolie> i don't want to just leave it stuck here
[00:42] <poolie> we can do some manual tests against staging but i am not sure that will really give enough assurance
[00:42] <poolie> in some ways i think the current status quo is a better design
[00:42] <poolie> to just not disconnect people unless we have to
[00:43] <poolie> why rely on the reconnect working when you can just avoid the issue
[00:44] <lifeless> well, this is one of the reasons http has a connection:close header
[00:44] <lifeless> perhaps we should add the ability to signal that to bzr, to make it more reliable.
[00:44] <lifeless> That said, we *know* there are clients out there that will stay connected for weeks.
[00:45] <poolie> right
[00:45] <poolie> so they're going to either have to upgrade bzrlib, or cope with some disconnections
[00:45] <lifeless> That is clearly unsustainable, and there is a certain elegance in having one solution.
[00:45] <lifeless> yes, we intend to communicate with them once the code is in all supported bzr series, and say 'please upgrade, you can use a point release if you want'
[00:47] <poolie> so, again, a chicken and egg
[00:47] <lifeless> I have to go, sorry.
[00:47] <poolie> np
[00:59] <gary_poster> poolie, hi.  bzr has been doing something a bit weird for me.  I have a LP repo initialized with bzr init-repo --no-trees, and a lightweight checkout of local branches.  For weeks or maybe even months now, (almost?) every time I switch the sandbox to local devel pull from LP's devel I get this: http://pastebin.ubuntu.com/776983/ .  I then do break-lock and update and I'm fine, but it's kind of annoying. This is with
[00:59] <gary_poster>  bzr 2.4.2. Do you have any ideas or advice?
[01:00] <poolie> hi, please file a bug, with a traceback and reproduction instructions, and try 2.5
[01:00] <poolie> i don't recall an existing bug about that
[01:02] <gary_poster> poolie ack will do.  thanks
[01:02] <gary_poster> (will do tomorrow
[01:02] <gary_poster> ) :-)
[01:02] <gary_poster> bye
[01:02] <poolie> thanks
[03:30] <lifeless> poolie: back
[03:42] <poolie> lifeless, i don't want to ruin your vacation
[03:44] <wallyworld_> what's the preferred style for multi-line comments? # or '''
[03:45] <poolie> for a comment, definitely #
[03:47] <wallyworld_> ok, thanks. makes it less readable though
[03:48] <wallyworld_> i saw that guido approves of ''' :-)
[03:58] <lifeless> guido also is quoted as saying 'what was I thinking' (about other things, but it shows his approval is no guarantee...)
[04:14] <poolie> wallyworld_, i guess you know a ''' is not a comment?
[04:14] <poolie> it's a string
[04:14] <wallyworld_> poolie: yes
[04:14] <poolie> so apart from the specific use of docstrings, putting comments in them feels a bit weird
[04:14] <wallyworld_> but it serves the purpose :-)
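The distinction poolie is making: a `#` line is discarded by the tokenizer, while a `'''...'''` block is a real string expression that Python evaluates; only in the docstring position does it get kept (as `__doc__`). A small illustration:

```python
def documented():
    """This is a docstring: a real string object, kept at runtime."""
    # This is a comment: thrown away before the code ever runs.
    '''
    This triple-quoted text is neither a comment nor a docstring here;
    it is just a string expression that is evaluated and discarded.
    '''
    return 42

# The docstring survives as an attribute; the others leave no trace.
print(documented.__doc__)
```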
[04:59] <wallyworld_> stub: hi there. if you had a moment, now or later, i'd love to get a 2nd opinion on some sql. i think it will be efficient but i'd like to check
[05:05] <stub> wallyworld_: sure
[05:05] <wallyworld_> stub: thanks. here's something to look at. i can explain whatever is unclear. https://pastebin.canonical.com/57479/
[05:06] <wallyworld_> stub: the code runs in the security adaptor
[05:07] <wgrant> Oh god.
[05:07] <wallyworld_> stub: the sql is ostensibly correct because the unit tests pass (the ones written so far). it's the performance aspects i'm interested in
[05:08] <wallyworld_> wgrant: what's wrong?
[05:08] <wgrant> This is crazy :)
[05:08] <wallyworld_> what is?
[05:08] <wgrant> Let's not put an expensive global bug search into person traversals..
[05:09] <wallyworld_> wgrant: it's only used in the api - for views, it's pre-cached
[05:09] <stub> I'm tending to agree with that, but looking at the SQL at the moment
[05:09] <wgrant> wallyworld_: Or for anything under a person.
[05:09] <wgrant> eg. a PPA, a branch.
[05:09] <stub> What is 'user_bug_filter'?
[05:09] <wgrant> Having pillar-wide visibility would render this obsolete.
[05:09] <wgrant> Which means we won't be causing more timeouts.
[05:09] <wallyworld_> stub: that's an existing filter implemented in BugTask used to return the bugtasks a user can see
[05:09] <wgrant> Which would be a good thing.
[05:10] <stub> wallyworld_: Have an example?
[05:11] <wallyworld_> stub: https://pastebin.canonical.com/57480/
[05:12] <wgrant> It probably won't be *that* bad most of the time, because of the lack of other constraints.
[05:12] <wgrant> However.
[05:12] <wgrant> It is a slippery and unnecessary slope, and will be slow in some situations.
[05:13] <wallyworld_> wgrant: that's the idea, and remember, it's only for when the API is used
[05:13] <wgrant> No.
[05:13] <wgrant> It's for any traversal through a person.
[05:13] <wgrant> And "only for when the API is used" is particularly bad.
[05:13] <wallyworld_> for the common use case, where someone opens a bugtask view, we pre-cache
[05:13] <wgrant> Because this adds probably >1s per request.
[05:13] <wgrant> And API makes lots of requests.
[05:14] <wallyworld_> another common case is when someone navigates to a private team page
[05:14] <wallyworld_> ah, but yes, any traversal through a private team will trigger it
[05:15] <lifeless> wallyworld_: whats this for ?
[05:16] <wallyworld_> lifeless: it's in the security adaptor for limitedView permission on private teams
[05:16] <lifeless> wallyworld_: ah, person visibility ?
[05:16] <lifeless> wallyworld_: cool
[05:16] <lifeless> you don't need the bug ids, just exists
[05:16] <lifeless> so you can tune this to be more simple still
[05:16] <lifeless> wallyworld_: this only runs when the user doesn't have direct access to the team right ? (non-admin, non-member)
[05:17] <wallyworld_> lifeless: yes, these checks occur last in the security adaptor
[05:17] <wallyworld_> the least expensive ones are run first
[05:17] <lifeless> cool
[05:17] <lifeless> and \o/
[05:18] <lifeless> wgrant: this doesn't add 1s per request, it will add (amount to be checked) per request *by someone that cannot see the team today*
[05:18] <wallyworld_> lifeless: this will allow someone to click through a private team in the bug subscription portlet and see the limited details of the team for example
[05:18] <wgrant> And it scales for every single reference ever.
[05:18] <wgrant> It's silly and unnecessary.
[05:18] <wgrant> Overcomplicated, confusing, and slow.
[05:18] <lifeless> wgrant: linear probing increase on the query planner, stats-based linear search of log-time checks from there
[05:19] <wgrant> lifeless: + 100 other queries for the other person foreign keys.
[05:19] <stub> That clause is certainly a killer. Materializing a list of all public bug ids is a sequential scan of the bug table
[05:19] <wallyworld_> it's not confusing to me, or complicated. slow perhaps, which is why i've asked for advice
[05:19] <wgrant> wallyworld_: The query isn't.
[05:19] <wgrant> The concept is.
[05:19] <wallyworld_> not to me
[05:19] <wallyworld_> it's what we are basing social private teams on
[05:20] <lifeless> wgrant: not all fk's will lead to disclosure (libraryfilealias owner for instance, or message owner)
[05:20] <wallyworld_> stub: you mean the bug filter clause i pasted second
[05:20] <wallyworld_> ?
[05:20] <wgrant> lifeless: No, but most have to, in the proposed model.
[05:20] <lifeless> many will yes
[05:21] <wgrant> So, we are going to be in a nice situation where you can see a private team when it's linked to something, except when you can't.
[05:21] <lifeless> we may want to build a fast data structure for it in time, but this is a reasonable approximation, especially given that it's an uncommon path anyway.
[05:21] <wgrant> And it's impossible to say who can see a private team.
[05:21] <stub> wallyworld_: Yes. The SQL you have will certainly need optimizing for use in a security adapter. Technically, it could be called an unlimited number of times on a page (once per private team)
[05:21] <wgrant> It's exactly the opposite of the disclosure project.
[05:21] <wallyworld_> lifeless: wgrant: but isn't the issue here that even if a private team is on a fk, the logged in user may not be able to see that object, so the team's existence should not be shown in that case
[05:21] <stub> Realistically, once or maybe twice
[05:22] <wgrant> It's making disclosure of private teams entirely opaque.
[05:22] <lifeless> wgrant: it's making it *possible* to use them sensibly
[05:22] <wallyworld_> stub: what we do now is pre-cache the permissions for pages which require this, so the security adaptor is not called
[05:22] <wgrant> lifeless: There are simpler ways.
[05:22] <lifeless> wgrant: such as?
[05:22] <wgrant> lifeless: Pillar-wide scope.
[05:23] <wgrant> To subscribe a private team to a bug, it has to be able to be disclosed to that project.
[05:23] <wgrant> So visibility is simply based on whether you can see a pillar in which the private team is involved.
[05:23] <wgrant> It makes it easy to see what is visible where.
[05:23] <wgrant> And it makes it difficult to make mistakes.
[05:24] <wallyworld_> stub: for example, on the bug page, the subscribers portlet which may have many private teams listed, all the permissions are cached
[05:24] <wgrant> And it is cheap to query, and very understandable.
[05:24] <wgrant> (it doesn't make sense until we have private projects, however)
[05:24] <lifeless> and it doesn't address things like ayatana
[05:25] <wgrant> Oh?
[05:25] <lifeless> private list of folk working on the project, public bugs, do public triage.
[05:25] <lifeless> as a for instance.
[05:25] <wgrant> The team membership is still private.
[05:25] <lifeless> right
[05:25] <wgrant> Visible only to members/admins/etc.
[05:25] <wallyworld_> does our model even support the simple determination of which pillars a private team is involved with yet? we would need something like the TeamParticipation table, no?
[05:25] <wgrant> But the existence of the team is not.
[05:25] <wgrant> LimitedView is existence, not membership.
[05:26] <lifeless> but then the next team over, which is totally private, but has access to private ayatana things, what is their safeguard for disclosure ?
[05:26] <wgrant> I posit that LimitedView should be granted pillar-wide.
[05:26] <wgrant> Huh?
[05:26] <wgrant> Confused.
[05:26] <lifeless> let's talk in budapest, what wallyworld_ is doing is what we agreed made sense on the call a week or two back
[05:26] <wgrant> My plan was to talk in Budapest; I didn't expect this particular case to arise so quickly.
[05:26] <lifeless> or we can take this offline now and beat it to death, but I don't want my holiday to die a death of a thousand cuts
[05:27] <stub> What we have is surprisingly quick in any case for the example personid (243656)
[05:28] <wallyworld_> stub: is person 243656 some random person?
[05:28] <stub> http://paste.ubuntu.com/777089/
[05:28] <stub> wallyworld_: It was the id from your second pastebin :)
[05:28] <wgrant> 243656 is a dev person
[05:28] <wallyworld_> stub: ah, ok. that was from a unit test
[05:29] <lifeless> stub: try with me or you
[05:29] <stub> let's try it with a user that is a member of some teams then
[05:30] <wallyworld_> or a user that can see a bug that a team is subscribed to but is not a member of the team
[05:30] <stub> 1.2 seconds for me
[05:30] <lifeless> 7.5 seconds for me
[05:30] <stub> So let's see if we can optimize this.
[05:30] <wallyworld_> stub: lifeless also mentioned the bug_id bits could be optimised out
[05:30] <stub> yer
[05:31] <lifeless> http://paste.ubuntu.com/777092/
[05:31] <lifeless> oh, I fail at adjusting
[05:32] <lifeless> wallyworld_: hang on, there are two persons involved
[05:32] <wallyworld_> yes, the private team we are checking the visibility of and the person who wants to see
[05:32] <lifeless> there is the person being *viewed* and the person *viewing*
[05:32] <lifeless> stubs test only has one id
[05:32] <lifeless> the pathological case is a private team that the person doesn't know about
[05:33] <wallyworld_> ah, didn't notice that
[05:33] <lifeless> the bug visibility code is 'which bugs can this person see'
[05:33] <lifeless> but we want 'are there mutually visible bugs'
[05:33] <lifeless> I think you'll need to start from ground up
[05:34] <wallyworld_> yes, the team being checked has its id in the TeamParticipation query near the top
[05:34] <lifeless> yes, but thats still wrong
[05:34] <lifeless> there are *two* team expansions to do
[05:34]  * lifeless stops to think
[05:34] <wallyworld_> lifeless: the query checks directly subscribed to and assignee
[05:34] <wallyworld_> i think the query is correct
[05:35] <lifeless> the private team is visible if:
[05:35] <lifeless>  - it has any interaction with a truly public object
[05:35] <lifeless>  - there is any interaction with a visible object
[05:36] <lifeless> so the one half should be 'users visible objects' the other half 'interactions with objects'
[05:36] <wallyworld_> the current implementation is for a subset of such rules, as per the comments in the python in the pastebin
[05:36] <wallyworld_> just the things we can do quickly
[05:36] <lifeless> and we can short circuit a bunch of that by saying 'mutual interactions'
[05:36] <lifeless> wallyworld_: yes, I saw that
[05:37] <lifeless> so the blueprint check is one sided because they are only public
[05:38] <wallyworld_> i included that check to hook into the team participation query
[05:38] <lifeless> the select bug_id. .. would probably be better phrased as two separate things
[05:38] <lifeless> a) bug subscription or assignee to public bugs
[05:38] <lifeless> b) subscription or assignee to a mutually visible private bug
[05:39] <wallyworld_> b) is not quite right i don't think
[05:40] <stub> Better https://pastebin.canonical.com/57481/
[05:40] <wallyworld_> just because a private team can see a private bug doesn't mean i can see that private team
[05:40]  * wallyworld_ looks
[05:41] <lifeless> wallyworld_: I know - there are two conditions: *interacts* and *mutually visible*
[05:41] <wallyworld_> lifeless: not sure i agree with mutually visible
[05:42] <lifeless> wallyworld_: so, if the private team interacts with something I cannot see, I cannot see the team, right ?
[05:42] <wallyworld_> lifeless: yes
[05:42] <lifeless> that's why mutually visible
[05:43] <wallyworld_> i think i misunderstood you
[05:43] <lifeless> if the private team does not interact with it, I can't see the team.
[05:43] <stub> I believe in an exhaustive test suite and the query is right when it passes and is fast enough :)
[05:43] <lifeless> And if I can't see the bug I can't see the team
[05:43] <lifeless> and they cannot interact with something they cannot see
[05:44] <wallyworld_> that 2nd point is irrelevant if we query for the things the team inetracts with, no?
[05:44] <wallyworld_> since the result set will by definition contain the things they can see
[05:44] <lifeless> depends on wgrants implementation plans for folk that are kicked out of a project
[05:45] <wgrant> That's one bit I hadn't worked out yet.
[05:45] <lifeless> anyhow, 'visible to user + interacted with by the team' is sufficient for now
[05:45] <wallyworld_> stub: so, if i redo it based on your query, how do we want to ensure it's fast enough in practice?
[05:45] <lifeless> wallyworld_: a doubly nested for loop on staging
[05:45] <lifeless> for person in person for team in teams
[05:46] <lifeless> stub: is that query rephrased at all, or just new constants plugged in ?
[05:46] <stub> it is rephrased somewhat
[05:47] <stub> https://pastebin.canonical.com/57482/ is even more reworked
[05:47] <wallyworld_> lifeless: so you'd prefer this to land behind a feature flag? so we can test on staging and leave the flag off if we are not happy?
[05:47] <stub> Just mild tweaks though
[05:47] <lifeless> wallyworld_: always ;)
[05:47] <wallyworld_> ok
[05:47] <lifeless> wallyworld_: safety nets make radical change easier
[05:47] <lifeless> wallyworld_: this is radical
[05:48] <stub> So where we want to know 'EXISTS', don't generate more results than absolutely necessary
[05:48] <wallyworld_> stub: so if i land this, and it's up on qas, can you help me test it for performance before fflag is activated?
[05:49] <stub> pg will shortcut a simple query in an EXISTS, but I don't know if it is smart enough for a UNION query so I optimized those bits. Also, UNION ALL will be faster than UNION, again PG might have optimized that for us but better safe than sorry
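stub's point about `UNION` vs `UNION ALL` is that plain `UNION` implies a dedup step, which inside an `EXISTS` (where only "is there at least one row?" matters) is wasted work. A quick illustration of the semantics using sqlite as a stand-in for PostgreSQL (the planners differ, but the behaviour being relied on is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sub (person INTEGER);
    INSERT INTO sub VALUES (1), (1), (2);
""")

# UNION deduplicates across both arms; UNION ALL just concatenates.
union = conn.execute(
    "SELECT person FROM sub UNION SELECT person FROM sub").fetchall()
union_all = conn.execute(
    "SELECT person FROM sub UNION ALL SELECT person FROM sub").fetchall()

# EXISTS only asks whether any row matches, so the dedup pass
# bought by a plain UNION underneath it is pure overhead.
exists = conn.execute(
    "SELECT EXISTS (SELECT person FROM sub WHERE person = 1)").fetchone()[0]

print(len(union), len(union_all), exists)  # 2 6 1
```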
[05:49] <lifeless> stub: cool, I was just checking with me and wanted to know whether to fiddle with c&p from your query
[05:49] <stub> wallyworld_: If it is up on qas, you don't need me - you can get query times for a page request directly
[05:49] <lifeless> wallyworld_: I'm pretty sure you will want to unwrap the user_bug_visibility filter and partition the checks
[05:50] <lifeless> wallyworld_: IMBW but splitting on unbalanced data often makes pg happier
[05:50] <wallyworld_> lifeless: not sure i understand what you mean by partition the checks
[05:51] <stub> My updated query will involve changing the user_bug_visibility in any case - changing the single EXISTS to two EXISTS
[05:51] <stub> oic
[05:51] <wallyworld_> stub: with the testing question, i will and can do my own tests. lifeless mentioned the use of a double for loop, hence my question about additional tests
[05:51] <lifeless> wallyworld_: rather than checking 'all interacted with' against 'all visible'
[05:52] <stub> he means 'WHERE Bug.id IN (SELECT id FROM Bug WHERE private IS FALSE) OR Bug.id IN (SELECT id FROM Bug WHERE private IS TRUE AND (...)'
[05:52] <lifeless> wallyworld_: do two checks with - 'interacted with public' || 'interacted with private-visible'
[05:52] <wallyworld_> ah, ok
[05:52] <lifeless> note that one half of that is dramatically cheaper
[05:52] <wallyworld_> and it may short circuit the check?
[05:52] <lifeless> don't need to know what the viewer did if there are any public interactions
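lifeless's partitioning amounts to ordinary short-circuit evaluation: run the cheap 'any public interaction?' test first, and only fall back to the expensive mutually-visible-private query on a miss. Sketched in Python (the predicate names are made up for illustration, not Launchpad code):

```python
def team_is_limited_view_visible(has_public_interaction,
                                 has_visible_private_interaction):
    """Cheap check first; the expensive one only runs on a miss."""
    return has_public_interaction() or has_visible_private_interaction()

calls = []

def cheap():
    calls.append("cheap")       # e.g. 'subscribed to any public bug?'
    return True

def expensive():
    calls.append("expensive")   # never reached when cheap() is True
    return False

visible = team_is_limited_view_visible(cheap, expensive)
```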
[05:53] <wallyworld_> sorry if i've stirred up a hornet's nest with this. i really appreciate your and stub's help
[05:56] <lifeless> may want a CTE of the bug subscriptions, may not.
[05:56] <wallyworld_> CTE?
[05:57] <lifeless> common table expression - the WITH stuff
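For reference, a common table expression is the `WITH` prefix that names a subquery so the rest of the statement can reuse it, e.g. naming the candidate bug subscriptions once (illustrative schema, not Launchpad's; sqlite shown, but PostgreSQL's `WITH` reads the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bugsubscription (bug INTEGER, person INTEGER);
    INSERT INTO bugsubscription VALUES (1, 10), (2, 10), (2, 20);
""")

# The CTE names the team's subscriptions once; the outer query
# then refers to it as if it were a table.
count = conn.execute("""
    WITH team_subs AS (
        SELECT bug FROM bugsubscription WHERE person = 10
    )
    SELECT COUNT(*) FROM team_subs
""").fetchone()[0]

print(count)  # 2
```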
[05:57] <wallyworld_> ah, ok
[06:04] <huwshimi> hmmm... make run is failing with "mkdir: cannot create directory `/var/tmp/bazaar.launchpad.dev/mirrors': Permission denied"
[06:04] <lifeless> anyhow, stub has a fast query now.
[06:04] <stub> it still needs work
[06:04] <lifeless> remember that disclosure brings in a new model for 'can view'
[06:05] <stub> Worth getting it as fast as possible as there will be outliers that cause grief
[06:05] <lifeless> so there are diminishing returns in optimising this today: a) it's used in the may-fail-anyway case, and b) the underpinnings are going to be reworked soon
[06:05] <lifeless> stub: sure, not meaning you should stop
[06:05] <lifeless> just be aware that we'll need to do it all again in 3 weeks or so :)
[06:06] <stub> yer
[06:06] <stub> just looking at my last query plan on how much was never executed - short circuits are great, but they won't always happen
[06:08] <wallyworld_> huwshimi: sudo remove that dir should fix it i think
[06:08] <huwshimi> wallyworld_: Thanks, I'll give it a go
[06:09] <huwshimi> wallyworld_: It doesn't exist
[06:09] <wallyworld_> lifeless: stub: doing this now fixes the current 404 on prod when users try and click through to private teams, plus picks up the work curtis just did this week on the page templating to take account of limitedView permissions
[06:09] <huwshimi> wallyworld_: Oh, the parent does, I'll remove that
[06:09] <wallyworld_> huwshimi: remove the parent
[06:09] <wallyworld_> beat me to it :-)
[06:10] <stub> I haven't been following the frequent and lengthy discussions so am trusting a sane rationale for all this was produced :)
[06:10] <huwshimi> wallyworld_: Perfect, thankyou
[06:10] <wallyworld_> huwshimi: np. it's apache that writes it out as root i think
[06:10] <huwshimi> silly apache
[06:12] <wallyworld_> stub: lifeless and curtis agree on it all. it's a step along the way to a better solution and a fairly radical change to allow some private team details to be known. stakeholders seem pleased that we are doing it
[06:22] <stub> wallyworld_: So if a team has subscribed to a public bug, or subscribed to a private bug that has been made public, that team is always visible?
[06:25] <huwshimi> A simple CSS review: https://code.launchpad.net/~huwshimi/launchpad/tag-link-colour-907132/+merge/86499
[06:27] <wallyworld_> stub: yes. if a user can see a bug that a team is subscribed to (it may be a private bug), that team has limitedView visibility for the user
[06:28] <wgrant> Note that LimitedView doesn't provide access to the member listing.
[06:28] <wallyworld_> just the name, displayname etc
[06:28] <stub> Yes, and if that is a public bug then it is visible to everybody
[06:28] <wallyworld_> stub: yes
[06:28] <wallyworld_> since the team has placed itself in a public role
[06:29] <stub> Or someone twiddled the privacy setting on a bug they are subscribed to :)
[06:29] <stub> Not necessarily a member of the team
[06:29] <wallyworld_> stub: and we are implementing code to warn the user who is placing the team in such a role when this happens
[06:29] <wgrant> Exactly.
[06:29] <wgrant> This is why it gets very messy.
[06:29] <wgrant> Anyone can disclose the team.
[06:31] <wallyworld_> huwshimi: with the new screenshot, it looks like "None" is lighter than the other tags?
[06:31] <huwshimi> wallyworld_: Yeah, it's not a link
[06:32] <wallyworld_> huwshimi: my (perhaps invalid) opinion is that the label should be different from the text content
[06:32] <wallyworld_> colour
[06:33] <wallyworld_> sure, links need to be different as well
[06:33] <huwshimi> wallyworld_: I think that's valid, but as you say the links should be different
[06:33] <huwshimi> wallyworld_: I guess this branch is just about link colours
[06:33] <wallyworld_> yes. and i don't like the grey personally
[06:33] <wgrant> Should the tag links perhaps look like tags?
[06:33] <wgrant> like status/importance do
[06:34] <wallyworld_> grey to me means disabled
[06:34] <stub> wallyworld_: While you are there, can you quickly twiddle the code that generates https://pastebin.canonical.com/57480/ to say 'IS FALSE' rather than '= FALSE' to give us maximum chance of being optimized.
[06:34] <huwshimi> wgrant: I think we should do something with our tags, but if we do we need to do it everywhere
[06:34] <wallyworld_> stub: sure
[06:35] <stub> wallyworld_: And the UNION to a UNION ALL
[06:35] <wgrant> huwshimi: I agree, but we changed status and importance in only one place...
[06:35] <wallyworld_> stub: this will be in the existing bug filter code. so i'll fiddle the sql and let the existing tests validate correctness
[06:36] <stub> wallyworld_: Yes. Doing my suggestion will be a win for everything.
[06:36] <stub> wallyworld_: I'd like to change it to two EXISTS rather than one using UNION ALL, but that might have fallout.
[06:37] <wallyworld_> stub: you mean performance regressions?
[06:37] <wallyworld_> for existing bug queries
[06:37] <stub> wallyworld_: The changes I suggested will not cause regressions. The change I'd like to do might, so I won't get you to make a quick hack right now :)
[06:37] <stub> (just do the IS FALSE and UNION ALL)
[06:37] <huwshimi> wgrant: Yeah, we need to fix that
[06:39] <wgrant> huwshimi: They look all old and crap everywhere else :(
[06:39] <wallyworld_> stub: ok. i'll also split the other query as suggested earlier, and add you as a reviewer for the mp to check it?
[06:39] <stub> Sure
[06:39] <wallyworld_> thanks :-) i may not get it done tonight, have other stuff on
[06:40] <wallyworld_> gotta finish converting the existing doc test to unit tests
[06:40] <wallyworld_> huwshimi: so were you going to dick with the colours a bit?
[06:42] <huwshimi> wallyworld_: Well there are outstanding issues that will affect what other changes would be made. E.g. bug 894734
[06:42] <_mup_> Bug #894734: "Assignee: None" and "Tags: None" clutter bug listings <bug-columns> <Launchpad itself:Triaged> < https://launchpad.net/bugs/894734 >
[06:43] <huwshimi> wallyworld_: And all the other bugs about content not being linked
[06:43] <wallyworld_> huwshimi: and you are sure about the yucky grey?
[06:44] <huwshimi> wallyworld_: Well we could look at having a saturated version of the blue
[06:44] <wallyworld_> huwshimi: my opinion is that that would be better, since grey to me signifies disabled, and is used elsewhere in lp to mean that (eg bug and branch auto linking in text content)
[06:45] <stub> wallyworld_: https://pastebin.canonical.com/57483/ is what I end up with anyway. I could split up the bug check into 'subscribed to public bugs', 'subscribed to visible bugs' etc. but that would involve bypassing user_bug_filter. I thought this would be bad, as that stanza will be evolving as bug privacy evolves.
[06:46] <wallyworld_> stub: yes, that's why i wanted to use that method to get the user bug filter
[06:46] <wallyworld_> stub: not looking too closely, that query took 0.5, the last one took 0.3.
[06:47] <huwshimi> wallyworld_: I'm about to finish up for the day (and wander over the road for the Big Bash), but I'll have a play and probably update things tomorrow.
[06:47] <stub> wallyworld_: That was using incorrect parameters I think
[06:48] <stub> oh no
[06:49] <stub> yer - https://pastebin.canonical.com/57482/ does seem quicker, both in the cost estimate and the actual query time
[06:49] <wallyworld_> huwshimi: ok. i went to the big bash last night. saw warnie and liz. bloody excellent. have fun
[06:50] <wallyworld_> stub: ok. i'll go with the quicker one for now. i have to run up and make dinner - my wife came home from a procedure at hospital today. but i'll check irc again after dinner
[06:50] <wallyworld_> thanks so much for your help
[06:51] <huwshimi> wallyworld_: Yeah, should be good
[06:51] <stub> wallyworld_: np. I'm done anyway - need to go play with replication
[08:58] <jtv1> Any reviewers in the house?  https://code.launchpad.net/~jtv/launchpad/lint-lpserve/+merge/86498
[09:03] <stub> jtv1: done
[09:03] <jtv1> thanks
[10:10] <gmb> Does anyone have any idea why I'd get this error http://pastebin.ubuntu.com/777225/ whilst running cronscripts/scan_branches.py against a local codehosting instance?
[10:12] <gmb> Ah, hang on.
[10:12] <gmb> Because I'm an ijit, that's why.
[10:13] <wgrant> gmb: was /var/tmp/bazaar.launchpad.dev/mirrors 700 or so?
[10:13] <wgrant> That's been happening for me recently.
[10:15] <gmb> wgrant, Hmm, lemme check. I'm using dev.launchpad.net/Running/RemoteAccess and the Apache config for bazaar-internal looks a bit weird...
[10:15] <gmb> No, /mirrors is rw-rw-r--
[10:16] <wgrant> /var/tmp/bazaar.launchpad.dev is sensible too?
[10:16] <wgrant> Apache likes to create them as root with a restricted umask :/
[10:17] <gmb> Yep, that's sane.
[10:17] <wgrant> Huh
[10:17] <wgrant> I guess the apache error log is the next step.
[10:17] <gmb> Yep.
[10:19] <gmb> Haha
[10:19] <gmb> [client 172.16.15.154] client denied by server configuration: /var/tmp/bazaar.launchpad.dev/mirrors/00/00/00/4d/.bzr/branch-format
[10:19] <gmb> The clue's in the brackets.
[10:19] <gmb> The RemoteAccess document needs a rewrite :)
[10:19] <wgrant> Ah
[10:20] <wgrant> I keep my container's /etc/hosts pointing at 127.0.0.88
[10:20] <gmb> Wise.
[10:20] <wgrant> Only the host's gets the real IP.
[10:21] <gmb> As it should be.
[10:21] <gmb> I'll update the page.
[10:22] <wgrant> Great.
[10:50] <gmb> Argh. Can anyone remember francis's non-netstat way of finding out what's hogging a port?
[10:50] <StevenK> fuser?
[10:51] <StevenK> fuser <port number>/tcp
[10:51] <StevenK> Or lsof
[10:51] <gmb> lsof!
[10:51] <gmb> That's it.
[10:51] <gmb> Thanks.
[10:51] <gmb> Twisted: I hate you, die in a fire.
[10:51] <StevenK> gmb: I tend to reach for fuser first, then netstat, then lsof. But whatever works.
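(The `fuser <port>/tcp` and `lsof -i :<port>` commands above answer this from a shell. For a programmatic check, e.g. in a test harness, a bind attempt tells you whether a port is already held — a minimal sketch; `port_in_use` is a hypothetical helper, not Launchpad code:)

```python
import errno
import socket


def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on (host, port)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        # bind() fails with EADDRINUSE if an active listener holds the port.
        sock.bind((host, port))
    except OSError as e:
        if e.errno == errno.EADDRINUSE:
            return True
        raise
    finally:
        sock.close()
    return False
```

(If it returns True, `fuser`, `lsof`, or `netstat` will tell you *which* process holds the port.)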
[10:51] <bigjools> my third successive day of doing maintenance rotation. FML
[10:52] <StevenK> bigjools: I shall think of you while I play Half-Life.
[10:52] <bigjools> gmb: Twisted is good, except its logging
[10:52] <gmb> StevenK, Ah, cool. I went netstat->lsof, but I'll try your ordering next time.
[10:52] <bigjools> StevenK: you still play Half Life?
[10:53] <StevenK> bigjools: I haven't played it before. It hit 75% off on Steam today, so I grabbed all of it.
[10:54] <bigjools> StevenK: it's *awesome* :)
[10:54] <bigjools> I wasted my life on it 10+ years ago
[10:54] <StevenK> Haha
[10:54] <bigjools> if you go online with it, prepare to end yours too
[10:55] <StevenK> Now for the Quake Collection to hit 75% off. Which, amusingly enough, doesn't include Quake 4.
[10:56] <StevenK> bigjools: The first Portal is 50% off, so it's probably only a few quid.
[10:57] <bigjools> StevenK: cool, I might get it on the PS3
[11:07] <stub> $3.75 vs ???
[11:11] <StevenK> stub: It's $10 normally
[11:11] <stub> It was on special. $10 I think got you both 1+2
[11:12] <stub> yesterday I think, maybe still is.
[11:12] <stub> I'm backlogged about a year with games, so no need to pay full price :)
[11:14] <StevenK> Haha
[11:15] <jml> Hello, we'd appreciate a review of https://code.launchpad.net/~james-w/launchpad/bpph-binary-file-urls/+merge/86470
[11:18]  * bigjools will look
[11:22] <jml> bigjools: thanks.
[11:24] <jml> gmb: the great thing about Twisted is that it holds your puny hatred in contempt, dwarfed as it is by its own vast loathing.
[11:25] <jml> gmb: try getting that with Django!
[11:31] <bigjools> jml, james_w: needs fixing, but minor stuff
[11:32]  * gmb wonders if he can fix bug 808930 buy just adding another commit()....
[11:32] <_mup_> Bug #808930: Timeout running branch scanner job <escalated> <linaro> <oops> <Launchpad itself:In Progress by gmb> < https://launchpad.net/bugs/808930 >
[11:33] <gmb> (The answer is "probably." Whether it's the _right_ fix or not is debatable)
[11:34] <gmb> "Buy" just adding another commit? Seriously Graham, _that's_ the way you're spelling it?
[11:36] <wgrant> gmb: Howso?
[11:37] <gmb> wgrant, Well, first, s/probably/possibly. We're suffering death by SQL (in at least one case) with a big for loop that holds the txn open whilst it adds new revisions.
[11:38] <gmb> Now, adding a commit at the end of that loop may obviate the problem.
[11:38] <gmb> But as a solution, it stinks.
[11:38] <wgrant> Is this creating new Revisions?
[11:38] <wgrant> Or BranchRevisions?
[11:38] <gmb> This is new Revisions.
[11:41] <wgrant> It doesn't do a massive manually constructed INSERT?
[11:41] <gmb> Nope.
[11:41] <wgrant> Huh.
[11:41] <gmb> Lots of revision = Revision(...)
[11:41] <wgrant> How did that ever work :/
[11:41] <wgrant> Awesome!
[11:41] <gmb> wgrant, There's a lovely comment to the effect of "this is bad, but, eh, it works."
[11:41] <wgrant> So, this is the issue I thought you'd hit first.
[11:41] <wgrant> Didn't think it was quite *that* bad, though.
[11:42] <gmb> :)
[11:42] <wgrant> There's already a mass-insertion method for branchrevision.
[11:42] <wgrant> And binarypackagepublishinghistory
[11:42] <gmb> Ah, cool.
[11:43] <gmb> I shall crib appropriately.
[11:43] <wgrant> And possibly bugnotificationrecipient and co.
[11:43] <wgrant> It's ugly, but it works.
[11:43] <gmb> Hey, I'm not pretty, but I do okay.
[11:43] <gmb> "Ugly but works" works for me.
[11:43] <wgrant> You may well need a looptuner, since inserting 200000 revisions in a single statement might not be too helpful, but we'll see...
[11:43] <gmb> LoopTuner was one of the things I was thinking of.
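(The fix being discussed — replacing a long loop of `revision = Revision(...)` inside one open transaction with batched multi-row inserts and per-batch commits — can be sketched roughly like this. This uses sqlite3 as a stand-in; the table, columns, and `bulk_insert_revisions` name are hypothetical, and Launchpad's real code goes through Storm and its existing mass-insertion helpers:)

```python
import sqlite3


def bulk_insert_revisions(conn, rows, batch_size=1000):
    """Insert (revision_id, log_body) rows in batches, committing after
    each batch so no single transaction stays open for the whole run."""
    cur = conn.cursor()
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        # One batched INSERT per chunk instead of one INSERT per row.
        cur.executemany(
            "INSERT INTO revision (revision_id, log_body) VALUES (?, ?)",
            batch)
        conn.commit()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE revision (revision_id TEXT, log_body TEXT)")
rows = [("rev-%d" % i, "log %d" % i) for i in range(2500)]
bulk_insert_revisions(conn, rows, batch_size=1000)
```

(The `batch_size` knob is what a LoopTuner would adjust dynamically, as wgrant notes: 200,000 rows in one statement is as bad as 200,000 statements in one transaction.)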
[11:56] <rick_h__> morning
[12:00] <cjwatson> I'll get my pending QA done at some point over the next couple of days, BTW - currently on holiday but should be able to squeeze it in to unblock people
[12:01] <cjwatson> mind you I'm not sure if anyone's planning another ftpmaster deployment this year :-)
[12:59] <nigelb> ok, I have to admit, githubs "Merge" button is really really nice for code review.
[13:00] <nigelb> I know that tarmac does something equivalent for us, but I'd really really like to see something like that on LP :-)
[13:05] <rick_h__> nigelb: :)
[13:05] <rick_h__> nigelb: subsume reviewboard!
[13:06] <nigelb> heh
[13:06] <rick_h__> even though I'm not a django fan
[13:07] <nigelb> rick_h__: The other day I probably complained a little too hard about sqlalchemy and michael bates replied to me :P
[13:07] <rick_h__> beyer?
[13:07] <nigelb> bah, beyer
[13:07] <rick_h__> nigelb: he's really awesome at being open/communicating
[13:07] <rick_h__> and a super genius
[13:07] <rick_h__> but cool, he helped you out then?
[13:08] <nigelb> Well, eventually, it wasn't sqlalchemy that was giving me the error
[13:08] <nigelb> Something else threw an sqlalchemy error :)
[13:08] <rick_h__> orly?
[13:08] <rick_h__> now I'm curious, what was throwing an SA exception at you?
[13:08] <nigelb> Yeah. Flask-Admin.
[13:09] <rick_h__> ah, gotcha
[13:09] <rick_h__> yea, I've not used flask other than hello world to kind of look at it
[13:09] <nigelb> I'm coding exclusively in Flask for work.
[13:09] <nigelb> It's harder, but it's much more flexible than a full framework.
[13:09] <rick_h__> yea, I liked what I saw
[13:10] <rick_h__> and if I had a small app I'd use it
[13:11] <nigelb> My boss wrote something like login.launchpad.net in Flask, except it's an OAuth provider and not OpenID.
[13:12] <rick_h__> cool, check out https://github.com/bbangert/velruse sometime
[13:12] <rick_h__> another "to check out" sometime item
[13:12]  * nigelb looks
[13:13] <nigelb> Interesting!
[13:13] <rick_h__> yea, I really love the idea
[13:13] <rick_h__> I was pitching it at my last job as a way to centralize auth through our apps
[13:15] <nigelb> we have a centralized auth through our app. And have a client lib for our apps :-)
[13:16] <rick_h__> cool
[13:16] <rick_h__> yea, we had a ton of small little apps and used ldap internally, but for external users thought it might make sense.
[13:16] <rick_h__> we'd have needed to find a way to hack an ldap provider into there somehow, but would have been cool
[13:17] <nigelb> that's the one thing I like about almost all Canonical apps. Everything uses openID.
[13:17] <rick_h__> right
[13:17] <rick_h__> took a little bit, but thank goodness
[13:25] <StevenK> nigelb: How much data?
[13:25] <nigelb> StevenK: Every single person row.
[13:25] <StevenK> Argh
[13:25] <nigelb> :D
[13:25] <StevenK> Yes, write a garbo job.
[13:26] <StevenK> nigelb: lib/lp/scripts/garbo.py. Create a new class with a descriptive name, copy method *names* from the other classes.
[13:27] <StevenK> nigelb: What the methods return should be self-explanatory. If you want more help from me, you'll have to wait until it's not 0030.
[13:27] <nigelb> StevenK: Well, I originally, wanted to poke you tomorrow and work along asking you questions.
[13:28] <StevenK> I suppose you can do that.
[13:28] <nigelb> I'll make some time before I start work to finish it off.
[13:28] <nigelb> I haven't worked on LP in a while :)
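(The garbo-job shape StevenK describes — a class with an `isDone()` predicate and a callable that processes one chunk per invocation — can be sketched like this. The `PersonPruner` name and the in-memory work queue are hypothetical; the method names are modeled loosely on the classes in `lib/lp/scripts/garbo.py`, so treat this as an approximation of the interface, not the real one:)

```python
class PersonPruner:
    """Toy tunable-loop garbo job: drains a work queue one chunk at a time.

    Modeled loosely on Launchpad's garbo job classes; the real ones issue
    DELETE/UPDATE statements and commit the transaction per chunk.
    """

    def __init__(self, person_ids):
        self.pending = list(person_ids)
        self.processed = []

    def isDone(self):
        # The runner keeps calling us until this returns True.
        return not self.pending

    def __call__(self, chunk_size):
        # Process one chunk; the runner tunes chunk_size between calls.
        size = int(chunk_size)
        chunk, self.pending = self.pending[:size], self.pending[size:]
        self.processed.extend(chunk)


# What the garbo runner does, in miniature:
job = PersonPruner(range(10))
while not job.isDone():
    job(chunk_size=3)
```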
[14:00] <james_w> bigjools, how do I know what to compare the list of urls to?
[15:21] <bigjools> james_w: the stuff you created in the factory
[15:22] <bigjools> make a list out of it and compare to the listified results
[15:22] <james_w> bigjools, do the same url generation against them?
[15:22] <bigjools> james_w: well I'd construct an expected URL
[15:23] <james_w> bigjools, using ProxiedLibraryFileAlias?
[15:23] <bigjools> yup
[15:24] <james_w> ok
[15:24] <bigjools> james_w: or even just file.htt-purl
[15:24] <bigjools> err
[15:24] <bigjools> file.http_url
[15:24] <james_w> except that won't match as it's not proxied?
[15:24] <bigjools> there's nothing to proxy
[15:24] <bigjools> unless it's private
[15:34] <james_w> bigjools,
[15:34] <james_w> AssertionError: !=:
[15:34] <james_w> reference = ['http://localhost:41286/93/filename-100190']
[15:34] <james_w> actual = [u'http://launchpad.dev/~person-name-100146/+archive/unique-from-factory-py-line2685-100148/+files/filename-100190']
[15:34] <james_w> : ['http://localhost:41286/93/filename-100190'] != [u'http://launchpad.dev/~person-name-100146/+archive/unique-from-factory-py-line2685-100148/+files/filename-100190']
[15:35] <james_w> so it looks like they are proxied?
[15:35] <james_w> archive = self.factory.makeArchive(private=False)
[15:36] <bigjools> james_w: meh, my bad then.
[15:37] <bigjools> the redirect happens in the appserver
[16:06] <rick_h__> deryck: can you review this please? https://code.launchpad.net/~rharding/launchpad/sort_buttons_907376/+merge/86579
[16:09] <james_w> bigjools, https://code.launchpad.net/~james-w/launchpad/bpph-binary-file-urls/+merge/86470
[16:11] <bigjools> checking
[16:12] <bigjools> james_w: perfect, thanks. Do you need me to land it? I can't recall if you can do that
[16:13] <james_w> bigjools, I had big problems doing so last time, so if you could throw it at ec2 it would be much appreciated
[16:13] <bigjools> sure thing
[16:13] <bigjools> james_w: can you set a commit message
[16:13] <james_w> thanks
[16:13] <james_w> oh yeah
[16:14] <james_w> done
[17:54] <rick_h__> deryck: ping, have a sec to peek at https://code.launchpad.net/~rharding/launchpad/sort_buttons_907376/+merge/86579
[17:56] <deryck> rick_h__, sure
[17:56] <rick_h__> deryck: ty
[18:02] <deryck> rick_h__, so it feels like the kind of thing we should test since we keep breaking because of it.
[18:02] <deryck> rick_h__, but it seems hard to test, too.
[18:02] <rick_h__> deryck: right, because you have to find/replace all cases wherever you're catching the history event
[18:02] <rick_h__> the only *good* way would be to wrap history event access entirely with our own api
[18:03] <rick_h__> deryck: I searched before, but missed this case because it was in the .pt and not in the .js
[18:03] <rick_h__> deryck: I'd definitely prefer to have this stuff in a .js file and the .pt only do a single "setup()" kind of call, but didn't want to change this that much
[18:03] <deryck> rick_h__, yeah.  so let's do this change as it is now.  and can you file a high bug about needing to wrap history handling to prevent this happening again?
[18:03] <rick_h__> deryck: sure thing
[18:04] <deryck> rick_h__, awesome, thanks!
[19:46] <deryck> gary_poster, so about deploy…. I can hold for a bit to see if you guys can qa benji's work, if that's cool.
[19:47] <deryck> gary_poster, or were you thinking this is something to wait for tomorrow on?
[19:47] <gary_poster> benji, see ops:-/
[19:47] <gary_poster> I mean deryck, see ops :-/
[19:47] <deryck> gary_poster, gotcha, I see now.  thanks.
[21:25] <cjwatson> Does anyone mind if I set up /srv/launchpad.net/ubuntu-archive/gnupg-home/ on mawson so that I can QA https://code.launchpad.net/~cjwatson/launchpad/sign-installer/+merge/85670 ?  Much like the setup on production only with a dogfood key
[21:27] <cjwatson> y'know what, I'm pretty certain nobody cares so I'll JFDI and we can remove it later if it breaks somebody
[21:30]  * cjwatson wonders what "The calling script may set GNUPGHOME ..." is supposed to mean in cronscripts/publishing/distro-parts/ubuntu/publish-distro.d/10-sign-releases given that it proceeds to unconditionally override that variable
[21:32] <cjwatson> maybe the right answer is ln -s $HOME/.gnupg /srv/launchpad.net/ubuntu-archive/gnupg-home
[21:34] <cjwatson> any objections to upgrading mawson to >= r14564?
[21:54] <cjwatson> OK, nobody else logged in, so upgrading mawson
[21:57] <poolie> hi all