[02:09] <wallyworld_> StevenK: wgrant: bug 900431. the get_branch_privacy_filter() method ignores stacked on branches. so we have a new version of the transitively_private issue. i'm not sure if we want to solve this or not. thoughts?
[02:09] <_mup_> Bug #900431: branch visibility queries do not consider visibility of stacked on branches <403> <branches> <disclosure> <privacy> <sharing> <Launchpad itself:Triaged> < https://launchpad.net/bugs/900431 >
[02:17] <wgrant> wallyworld_: Right, that's related to the MP security check performance issue
[02:17] <wgrant> wallyworld_: It's not a recurrence of the transitively_private issue; it existed even then.
[02:17] <wallyworld_> it's a similar issue though
[02:17] <wgrant> transitively_private ensured that the stacked branch was private, but it never ensured that the stacked-on branch was actually visible
[02:18] <wallyworld_> sort of. it allowed a quick check to be made
[02:18] <wgrant> Which is why I didn't really want to keep it -- it didn't solve enough of the problem that it was really worth it
[02:18] <wallyworld_> without needing recursion each time
[02:18] <wgrant> Right, but you still had to check the stacked-on visibility
[02:18] <wgrant> It wasn't a sufficient check
[02:18] <wallyworld_> it was sufficient before we introduced info types
[02:19] <wgrant> No
[02:19] <wgrant> I can very well be subscribed to the stacked branch, but not the stacked-on branch
[02:19] <wgrant> Even back in the olden days
[02:19] <wallyworld_> i was ignoring subscription/visibility conflation
[02:19] <wallyworld_> is there a bug for the mp security check?
[02:20] <wgrant> There's one or two timeout bugs about it
[02:20] <wallyworld_> so do we think we can solve this without a denormalisation effort?
[02:20] <wgrant> And the subscription/visibility conflation isn't really relevant here; visibility can differ regardless of the precise mechanism
[02:21] <wgrant> I don't believe the denormalisation is practical even if we wanted to do it
[02:21] <wallyworld_> so we need efficient sql then
[02:21] <wgrant> In either the old or the new model
[02:21] <wgrant> Right
[02:22] <wallyworld_> denormalisation worked for private
[02:22] <wgrant> It didn't.
[02:22] <wallyworld_> why?
[02:22] <wgrant> I have this private branch
[02:22] <wgrant> It's stacked on another private branch
[02:22] <wgrant> I'm subscribed to my branch, but not the one it's stacked on
[02:22] <wgrant> transitively_private is true on the private branch, but it doesn't tell the code that I can't see the stacked-on branch
[02:23] <wallyworld_> it wasn't supposed to
[02:23] <wgrant> Right, so the situation is not significantly different from today
[02:23] <wgrant> In the old days you had to check visibility of the stacked-on branches
[02:23] <wallyworld_> it was just supposed to highlight that public branches stacked on private were private
[02:23] <wgrant> Now you have to check visibility of the stacked-on branches.
[02:24] <wgrant> Sure, but that's only vaguely related to this issue
[02:24] <wallyworld_> right. so the reason it is no longer needed isn't because it didn't work as advertised, but because we no longer allow public stacked on private
[02:25] <wgrant> The role that transitively_private served was of very limited use, due to exactly this issue
[02:25] <wgrant> From the day transitively_private was created.
[02:25] <wallyworld_> it served the purpose it was written for - to stop public branches stacked on private advertising themselves as public
[02:25] <lifeless> yay svn
[02:26] <wgrant> wallyworld_: Right, but that's not fundamentally hugely useful in itself
[02:26] <wgrant> lifeless: Oh?
[02:26] <wgrant> wallyworld_: transitively_private was floated as a partial solution to the issue you're looking at now
[02:26] <wgrant> But it doesn't work for that at all unless you squint very hard
[02:27] <wallyworld_> i recall it slightly differently - it was intended to stop people trying to access a so called public branch and being denied access
[02:27] <wgrant> Right
[02:27] <wgrant> But that doesn't help much
[02:27] <wgrant> Since you can still access a so-called private branch to which you have access
[02:27] <wallyworld_> i think it does, but anyway
[02:27] <wgrant> But still be denied access
[02:28] <wallyworld_> sure, different use case
[02:28] <wgrant> It helps the public case, but not anything else
[02:28] <wgrant> It's very very similar
[02:28] <wgrant> I have access to a branch, except I don't.
[02:28] <wallyworld_> but in the public case, you expect access
[02:28] <wgrant> transitively_private solved the public special-case of that
[02:28] <wallyworld_> since there's no padlock etc
[02:28] <wgrant> Sure, and in the private case where I have access I expect access.
[02:29] <wallyworld_> 5 minutes is up, i haven't paid for the 10 minute argument
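The gap wgrant describes (being subscribed to a branch but not to the branch it is stacked on) can be sketched as a recursive check over the whole stacked-on chain. A minimal sketch in Python/sqlite with a hypothetical, much-simplified schema (`branch`, `branch_grant`); real Launchpad visibility also involves information types and several grant mechanisms:

```python
import sqlite3

# Hypothetical, much-simplified schema: real Launchpad visibility also
# involves information types, artifact grants and subscriptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE branch (id INTEGER PRIMARY KEY, private INTEGER,
                     stacked_on INTEGER REFERENCES branch);
CREATE TABLE branch_grant (branch INTEGER, person TEXT);
""")
# Private branch 2 is stacked on private branch 1; alice has access
# to branch 2 only -- exactly wgrant's example.
conn.executemany("INSERT INTO branch VALUES (?, ?, ?)",
                 [(1, 1, None), (2, 1, 1)])
conn.execute("INSERT INTO branch_grant VALUES (2, 'alice')")

def fully_visible(branch_id, person):
    """True only if every branch in the stacked-on chain is visible."""
    (hidden,) = conn.execute("""
        WITH RECURSIVE chain(id) AS (
            SELECT :b
            UNION ALL
            SELECT branch.stacked_on FROM branch
            JOIN chain ON branch.id = chain.id
            WHERE branch.stacked_on IS NOT NULL
        )
        -- Count private branches in the chain that the person cannot see.
        SELECT COUNT(*) FROM chain JOIN branch b ON b.id = chain.id
        WHERE b.private = 1
          AND NOT EXISTS (SELECT 1 FROM branch_grant g
                          WHERE g.branch = b.id AND g.person = :p)
    """, {"b": branch_id, "p": person}).fetchone()
    return hidden == 0
```

Here `fully_visible(2, 'alice')` is False even though alice can see branch 2 itself, which is precisely the case a per-branch filter like get_branch_privacy_filter() misses.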
[02:30] <lifeless> wgrant: stabbing log rotate, tired of the incident reports
[02:30] <wgrant> lifeless: Ahh
[02:30] <lifeless> https://code.launchpad.net/~fdrake/zconfig/trunk is the closest zconfig has to a trunk
[02:30] <wgrant> lifeless: svn.zope.orgZ?
[02:30] <wgrant> -Z
[02:41] <wgrant> Oh, that's an import from there, I see
[03:00] <StevenK> wallyworld_, wgrant: https://code.launchpad.net/~stevenk/launchpad/proper-error-on-bug-attachment-bad-filename/+merge/122608
[03:02] <wallyworld_> StevenK: perhaps include the filename in the exception message?
[03:03] <StevenK> wallyworld_: I think this particular exception is only triggered via the API, in which case the user is well aware which filename.
[03:03] <wallyworld_> ok
[03:04] <StevenK> I'm still not sure about raising BadRequest directly
[03:04] <wallyworld_> we are losing the root cause info from the original exception
[03:04] <wallyworld_> so the user doesn't get to know why it's a bad filename
[03:05] <StevenK> wallyworld_: They didn't before either -- it tripped a constraint in the DB.
[03:05] <wgrant> Raising a BadRequest in the model sounds like a Bad Idea™
[03:05] <StevenK> Add more lines and create class InvalidFilename(Exception) ?
[03:06] <wallyworld_> sounds more reasonable. but we need to try and propagate the cause of the error
[03:10] <StevenK> wallyworld_: http://pastebin.ubuntu.com/1184984/
[03:11] <mwhudson> wgrant: postgres performance question
[03:11] <mwhudson> i'm finding that
[03:11] <mwhudson> select * from foo where bar in $SUB_QUERY1 and bar in $SUB_QUERY2;
[03:11] <mwhudson> is waaay slower than
[03:11] <mwhudson> select * from foo where bar in ($SUB_QUERY1 intersect $SUB_QUERY2);
[03:11] <mwhudson> wgrant: have you run into this?
[03:11] <wgrant> mwhudson: It's probably trying to be too smart in the first one. Do you have plans for both?
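The two query shapes mwhudson compares return the same rows; only the plan the optimizer picks differs. A toy demonstration of the equivalence (not of the performance difference) on sqlite, with hypothetical tables standing in for the two subqueries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo (id INTEGER PRIMARY KEY, bar INTEGER);
CREATE TABLE a (bar INTEGER);
CREATE TABLE b (bar INTEGER);
""")
conn.executemany("INSERT INTO foo (bar) VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO a VALUES (?)", [(1,), (2,)])
conn.executemany("INSERT INTO b VALUES (?)", [(2,), (3,)])

# Two AND-ed IN clauses: the planner checks each subquery condition per row.
slow_form = conn.execute("""
    SELECT id FROM foo
    WHERE bar IN (SELECT bar FROM a) AND bar IN (SELECT bar FROM b)
""").fetchall()

# Same rows, but the intersection is computed once up front.
fast_form = conn.execute("""
    SELECT id FROM foo
    WHERE bar IN (SELECT bar FROM a INTERSECT SELECT bar FROM b)
""").fetchall()
```

Whether the second form is actually faster depends on the planner and the statistics, which is why wgrant asks for the EXPLAIN output below.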
[03:12] <wallyworld_> StevenK: but it doesn't say *why* it's a bad filename - can we extract and use that from the original exception?
[03:12] <StevenK> wallyworld_: Nope.
[03:12] <mwhudson> wgrant: yes, but they may cause eyeball stabbing
[03:12] <wgrant> StevenK, wallyworld_: How does the webapp handle this?
[03:12] <StevenK> wallyworld_: Because the constraint error doesn't say either.
[03:12] <wgrant> StevenK, wallyworld_: This is a normal validation problem
[03:12] <wgrant> I'd suggest looking at how eg. product name validation is done
[03:13] <StevenK> wgrant: I think well-behaved browsers are immune
[03:13] <wgrant> Oh right, cause the filename is embedded in the widget, blah
[03:13] <wgrant> So difficult to use a traditional validator
[03:13] <StevenK> Right, which is why I was saying it's really only API
[03:14] <mwhudson> wgrant: slow: http://paste.ubuntu.com/1184985/ faster: http://paste.ubuntu.com/1184987/
[03:15] <wgrant> mwhudson: EXPLAIN ANALYZE is probably helpful, to see where its estimates go wrong
[03:15] <mwhudson> wgrant: heh, http://explain.depesz.com/s/5i9 <- fast http://explain.depesz.com/s/xUB <- slow
[03:17] <wallyworld_> StevenK: the IntegrityError exception does have a useful message doesn't it? so can we not reuse that?
[03:19] <mwhudson> wgrant: seems that the materialize is the problem?  i'm not super experienced at reading explains
[03:19] <wgrant> mwhudson: You're missing an index
[03:20] <wgrant> Not quite sure what your schema is, but I suspect a CREATE INDEX foo ON dashboard_app_namedattribute (content_type_id, name, value); will fix it
[03:20] <mwhudson> oh right, django
[03:20] <wgrant> Currently it finds everything with the matching content_type_id
[03:20] <wgrant> Then filters by name and value
[03:20] <mwhudson> yeah, that's not going to be helpful
[03:20] <wgrant> Which seems like the wrong way to do things, but is also trivially indexable
[03:20] <wgrant> And should take the query into the hundreds of microseconds.
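wgrant's suggestion is a standard composite-index fix: when a query filters on equality over (content_type_id, name, value), one index covering all three columns lets the database probe directly instead of finding every row for the content type and filtering the rest afterwards. A sketch on sqlite; the table layout is assumed, as wgrant notes he hasn't seen the real Django schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE dashboard_app_namedattribute (
    id INTEGER PRIMARY KEY, content_type_id INTEGER, name TEXT, value TEXT)
""")
# The composite index from the suggestion: all three equality predicates
# can be satisfied by a single index probe.
conn.execute("""
CREATE INDEX foo ON dashboard_app_namedattribute
    (content_type_id, name, value)
""")
plan = conn.execute("""
EXPLAIN QUERY PLAN
SELECT id FROM dashboard_app_namedattribute
WHERE content_type_id = 23 AND name = 'target.device_type'
  AND value = 'panda'
""").fetchall()
# The detail column of the plan should show the index being used.
plan_text = " ".join(row[-1] for row in plan)
```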
[03:21] <mwhudson> oh
[03:21] <mwhudson> is it pulling the common part out of the two subqueries and so doing things in a boneheaded order?
[03:21] <wgrant> Oh, except for the testrun join, but still, tens of milliseconds.
[03:21] <wgrant> I don't know what the query looks like
[03:21] <wgrant> But no, it seems to be doing them separately
[03:22] <wgrant> It's just really slow to filter dashboard_app_namedattribute how you've asked
[03:22] <mwhudson> django generic relations, yay
[03:22] <wgrant> :)
[03:22] <mwhudson> oh well, i've wasted enough of your time, thanks
[03:22] <wgrant> Still, create that index and see what happens
[03:22] <wgrant> Should work
[03:22] <wgrant> No idea how you create custom indices in Django, though...
[03:22] <wgrant> yay django
[03:22] <wgrant> I love django, as you can tell
[03:23] <mwhudson> well
[03:23] <mwhudson> i'm not even slightly pretending to be portable in this project
[03:23] <wgrant> Good :)
[03:23] <mwhudson> so i can generate them with sql/$model.sql i guess
[03:23] <wgrant> Do you understand how to read the relevant bit of the explain?
[03:23] <mwhudson> the index doesn't help though :(
[03:23] <wgrant> :(
[03:24] <wgrant> Does it change the plan at all?
[03:24] <mwhudson> CREATE INDEX
[03:24] <mwhudson> Time: 36607.299 ms
[03:24] <mwhudson> wee
[03:24] <mwhudson> the index is used by the plan now
[03:25] <mwhudson> i think, as i could probably have predicted, the problem is that our schema is a bit nuts
[03:25] <mwhudson>                                              ->  Index Scan using foo on dashboard_app_namedattribute u1  (cost=0.00..1773.90 rows=527 width=4) (actual time=0.043..30.131 rows=28501 loops=1)
[03:25] <mwhudson>                                                    Index Cond: ((content_type_id = 23) AND (name = 'target.device_type'::text) AND (value = 'panda'::text))
[03:25] <wgrant> Django has that effect on people
[03:25] <mwhudson> 30000 index scans ?
[03:25] <mwhudson> not going to be good, i suspect
[03:26] <wgrant> And that's going straight into the testrun index scan, I guess, yeah :/
[03:26] <wgrant> Not ideal
[03:26] <mwhudson> http://explain.depesz.com/s/xkDK <- plan with index
[03:27] <wgrant> What's the actual query?
[03:27] <mwhudson> ah sorry
[03:28] <mwhudson> wgrant: http://paste.ubuntu.com/1185000/
[03:28] <wgrant> my eyes
[03:28] <mwhudson> brutally word-wrapped to avoid the one long line effect
[03:28] <mwhudson> hee hee
[03:30] <wgrant> Mmm
[03:30] <wgrant> That's ugly
[03:30] <mwhudson> yeah
[03:30] <wgrant> But nothing blatantly obvious
[03:30]  * wgrant runs off to lunch, will look more later
[03:31] <mwhudson> i think the problem is that the two queries on attributes vary massively in selectivity
[03:31] <mwhudson> there are a _lot_ of target.device_type = panda attributes
[03:31]  * mwhudson looks at the intersect plan some more
[03:32] <mwhudson> heh the index helps the fast plan a _lot_
[03:33] <mwhudson> down to 100ms now
[03:44] <StevenK> mwhudson: Have you seen http://sqlformat.appspot.com/ ?
[03:47] <mwhudson> StevenK: no!
[03:47] <mwhudson> i actually went looking for one of them a few days ago
[03:47] <mwhudson> StevenK: it doesn't do a very good job on this query :)
[03:47] <StevenK> Hahaha
[03:48] <StevenK> Yeah, its formatting rules are a little strange
[03:48] <mwhudson> for which i think we can just blame django
[03:56] <mwhudson> heh heh
[03:56] <mwhudson> optimization is fun: i now have a version of this query that runs in 22ms
[03:56] <mwhudson> about 200x faster than where i started
[03:57] <StevenK> \o/
[03:57] <StevenK> wallyworld_: So http://pastebin.ubuntu.com/1184984/ is still bad?
[03:59] <wallyworld_> StevenK: i think you missed my last comment - can we extract the root cause message from the IntegrityError and include it with the new exception?
[04:00] <wallyworld_> otherwise the user has no idea *why* the filename is invalid
[04:00] <wallyworld_> so they don't know what to fix or correct
[04:00] <StevenK> wallyworld_: I did answer -- no, because the constraint error doesn't say either.
[04:01] <wallyworld_> you sure? eg from hwdb.txt "IntegrityError: ... violates unique constraint "hwsystemfingerprint..."
[04:01] <wallyworld_> there's info in there somewhere
[04:01] <StevenK> IntegrityError: new row for relation "libraryfilealias" violates check constraint "valid_filename"
[04:02] <StevenK> And it's an OOPS, so the user gets a 500.
[04:02] <wallyworld_> :-(
[04:02] <StevenK> Seriously, *anything* we do is going to be better.
[04:03] <wallyworld_> ok. we do seem to have an inherent layering issue though
[04:04] <StevenK> wallyworld_: Mmmm, we could move the exception handling into ILibraryFileAliasSet.create
[04:06] <wallyworld_> that would be a bit better
[04:06] <StevenK> But still makes you unhappy?
[04:07] <wallyworld_> there's still an issue extracting the reason for the error and presenting it to the user
[04:08] <wallyworld_> but moving the exception handling to the correct place helps
[04:08] <StevenK> "valid_filename" CHECK (filename !~~ '%/%'::text)
[04:08] <StevenK> Ah ha
[04:08] <StevenK> raise InvalidFilename("Filenames can not contain slashes.")
[04:08] <wallyworld_> yay
[04:09] <StevenK> wallyworld_: Let me move it into LFA and test, as well as change the message. Hopefully that will fix your little red wagon. :-)
[04:09] <wallyworld_> ok, thanks. i'm just thinking of the users :-)
[04:09] <StevenK> Suuuuuuure
[04:11] <wallyworld_> it would be nice to be able to look at the constraint name that was violated and re-raise with a sensible message. is there more than one i wonder. or is there only the slashes one
[04:12] <StevenK> That's the only CHECK CONSTRAINT that libraryfilealias has.
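The approach StevenK lands can be sketched as up-front validation mirroring the DB CHECK constraint, so API clients get a real error message instead of a 500/OOPS from an IntegrityError. `check_filename` is a hypothetical helper name; only `InvalidFilename`, the constraint, and the message come from the discussion:

```python
class InvalidFilename(Exception):
    """Raised for filenames the librarian will refuse to store."""


def check_filename(filename):
    # Mirrors the only CHECK constraint on libraryfilealias quoted above:
    #   "valid_filename" CHECK (filename !~~ '%/%')
    # i.e. filenames may not contain slashes.  Validating before the
    # INSERT turns a DB-level IntegrityError (an OOPS to the user) into
    # a client-facing error that says what to fix.
    if "/" in filename:
        raise InvalidFilename("Filenames cannot contain slashes.")
```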
[04:13] <mwhudson> wgrant: for when you get back, it turns out that punching through the genericrelation abstract gets me a query that runs in 22ms :)
[04:13] <mwhudson> *abstraction
[04:16] <mwhudson> (with the index, 200ms without)
[04:21] <StevenK> wallyworld_: http://pastebin.ubuntu.com/1185054/
[04:24] <wallyworld_> StevenK: comment on line 48 needs fixing. are there more test changes? i think it looks ok though. the users thank you
[04:25] <StevenK> wallyworld_: It does? But it is a BadRequest?
[04:25] <StevenK> The API will never see the internal exception, but it raises BadRequest
[04:26] <wallyworld_> oh, yes, sorry. misread
[04:27] <StevenK> wallyworld_: So, push it up and prod you somewhere inappropriate when the diff updates?
[04:27] <wallyworld_> why not. i'm waiting with anticipation
[04:27]  * wallyworld_ grips the desk
[04:31] <wgrant> And he drops out just as I return
[04:32] <wgrant> StevenK: Want to extend that ndt by a rev?
[04:32] <StevenK> wgrant: To r15901? Sure.
[04:51] <wgrant> mwhudson: How'd you avoid that expensive join?
[04:52] <wgrant> I guess eliminating the duplicate dashboard_app_testrun join might have helped
[05:05] <StevenK> wgrant: https://oops.canonical.com/?oopsid=475a419919a34a618febf7b2ba9573e0
[05:06] <StevenK> wgrant: Why are their 5 repeated Archive lookups? :-(
[05:06] <lifeless> StevenK: 'there'
[05:07] <StevenK> lifeless: And where is your Grammar Nazi cape?
[05:07] <lifeless> in the kitchen
[05:07] <StevenK> Maybe I should have used "you're" just to make you twitch.
[05:11]  * wgrant twitches
[05:12] <wgrant> StevenK: It's probably trying to find archives that are buildable into, maybe.
[05:12] <wgrant> Checking the tracebacks will tell you for sure
[05:13] <StevenK> wgrant: The tracebacks for the UNION'd queries are quite unreadable
[05:14] <wgrant> Let me prove you wrong :)
[05:14] <wgrant> By reading them
[05:14] <StevenK> Oh wait, there it is
[05:15] <StevenK> The stack is like 100 deep and the font is tiny :-(
[05:15] <wgrant>   File "/srv/launchpad.net/production/launchpad-rev-15890/lib/lp/code/model/sourcepackagerecipe.py", line 89, in get_buildable_distroseries_set
[05:15] <wgrant>     supported_distros = set([ppa.distribution for ppa in ppas])
[05:15] <wgrant> lol
[05:15] <StevenK> That has to win some kind of award
[05:18] <StevenK> wgrant: Tempted to just have that use ILaunchpadCelebrities.ubuntu rather than that insane query.
[05:18] <wgrant> :(
[05:18] <lifeless> LOLWHAT
[05:19] <StevenK> Oh, SERIOUSLY
[05:19] <StevenK>                     |     # Now add in Ubuntu.
[05:19] <StevenK>                     |     supported_distros.add(getUtility(ILaunchpadCelebrities).ubuntu)
[05:19] <wgrant> heh
[05:19] <StevenK> (Guess what I'm reading ... :-)
[05:19] <wgrant> heh heh
[05:45] <lifeless> https://code.launchpad.net/~lifeless/zconfig/bug-481512/+merge/122612 - fingers crossed.
[05:48] <StevenK> lifeless: WTB CMPXCHG for Python, PST ?
[05:48] <lifeless> yup
[07:25] <adeuring> good morning
[08:01] <jelmer> moin
[08:13] <jelmer> mgz: there?
[08:13] <mgz> where?
[08:14] <jelmer> mgz: le mumbles
[08:15] <mgz> yeah.
[08:15] <ev> is there an estimate on staging coming back from the dead?
[08:17] <wgrant> ev: No, could easily be another day or so. Can you use qastaging instead?
[08:17] <ev> wgrant: absolutely can - didn't realize it existed
[08:17] <ev> thanks a bunch!
[08:18] <wgrant> It's been around for nearly two years now, I believe
[08:18] <wgrant> The DB is reset less frequently and the code updated slightly more quickly
[08:18] <ev> neat
[09:57] <mwhudson> wgrant: it was eliminating the joins in the subqueries that flipped the performance to good
[09:58] <wgrant> mwhudson: Ah, good
[10:07] <jam> jelmer: python-all-dbg: Depends: python-all (= 2.6.6-3+squeeze7~lp1) but it is not going to be installed                   Depends: python2.5-dbg (>= 2.5.5-6) but it is not installable
[10:07] <jam> it looks like your squeeze package wants to keep python-2.5
[10:07] <jam> vs 2.7?
[11:00] <czajkowski> mgz: jelmer jam does this make sense to any of ye ? https://bugs.launchpad.net/lpsetup/+bug/1045728
[11:00] <_mup_> Bug #1045728: lxcip.py looks for liblxc.so.0 in the wrong directory on 12.10 <lpsetup:New> < https://launchpad.net/bugs/1045728 >
[11:01] <jam> czajkowski: looks to be a yellow squad thing (since they introduced lpsetup).
[11:01] <jam> I understand what the bug is reporting (a library is looked for in .../lxc/lxc.so, but it would be found as just .../lxc.so)
[11:01] <jam> I don't know what a fix would be.
[11:02] <wgrant> Right, it moved in quantal
[11:03] <czajkowski> jam: ahh ok
[11:04] <czajkowski> are yellow off that area now though ?
[11:04] <jam> I think so. I seem to remember a "we need maintenance to take over our work" email.
[11:04] <czajkowski> yup
[11:04] <czajkowski> saw that
[11:07] <czajkowski> jam: so it's yers now :)
[11:07] <jam> unless I stick my head in the sand for wgrant next week :)
[11:08]  * wgrant points at the SEP field
[11:08] <jam> SEP ?
[11:09] <wgrant> Somebody Else's Problem :)
[11:09] <wgrant> It's not my problem (yet?), so I don't acknowledge its existence.
[11:12] <jelmer> wgrant: as if you are ever reluctant about acknowledging problems.. :P
[11:13] <wgrant> Heh.
[11:14] <czajkowski> jelmer: few of your questions on LP answers need follow up, do you want me to assign them to you so you'll get the mails ?
[11:14] <jelmer> czajkowski: that'd be great, thanks
[11:30] <czajkowski> blues, I've created a ticket to track the bugzilla imports not working issue like mrevell suggested last week, this will be handy come handing off time
[11:30] <czajkowski> https://support.one.ubuntu.com//Ticket/Display.html?id=21832
[11:30] <czajkowski> as we're trying this out for a month it'd be nice to have some feedback on this new way of tracking items raised.
[12:25] <jam> jelmer, mgz: I think gary_poster has officially handed lpsetup over to Maintenance, so can one of you look at: https://code.launchpad.net/~jameinel/lpsetup/missing_lxc_1045728/+merge/122663
[12:25] <jelmer> mgz: Can you look at that one perhaps? I haven't dealt with lpsetup yet
[12:26] <gary_poster> no-one has except yellow, jelmer :-) we can do a review-of-the-review if you like?
[12:26] <jam> gary_poster: it is a pretty small change, (just probe in 2 locations for lxc.so)
[12:26] <jam> feedback on how I refactored for testability would be nice
[12:28] <gary_poster> looks nice on the face of it jam.  I'll look more carefully--or get someone else to--but why don't we try to get someone else looking at the code first so we can start the acclimatization process, and then we'll come around and confirm
[12:28] <jam> gary_poster: fine with me
[12:29] <gary_poster> please ask reviewer to ping me (or someone else on yellow) to review the review
[12:30] <mgz> so, I can confirm that you know how to write code jam, no idea if it's the right code.
[12:30] <gary_poster> lol
[12:31] <gary_poster> oh come on, lxcip is itsy bitsy and self contained :-)
[12:32] <jam> gary_poster: is there a tarmac or something protecting the trunk?
[12:32] <jam> or are you just expected to run pre-commit.sh yourself?
[12:32] <gary_poster> yes jam
[12:33] <gary_poster> jam, tarmac.  pre-commit.sh if you want to be super careful. It takes awhile though
[12:34] <mgz> well, I don't understand why _load_library exists, and ctypes.find_library doesn't work
[12:34] <mgz> but I presume there's a reason
[12:36] <jam> mgz: does find_library know how to handle the multiarch stuff?
[12:36] <mgz> I'd expect debian to patch the logic they need for multiarch, but perhaps they've not.
[12:36] <gary_poster> I don't think it does. frankban, question about lpsetup from mgz ^^
[12:37] <mgz> ctypes hasn't been well maintained of later
[12:37] <mgz> -r
[12:37] <mgz> and it's always been on the nastier side of neat hax
[12:38] <mgz> there's a comment in my ctypes.util about the logic being wrong for biarch and multiarch
[12:42] <gary_poster> jam, frankban reports that our pre-commit.sh only tests lpsetup, not lxcip.  An oversight.  In order to make the tests work on the machine we need to make some sudoers changes on that machine, since the tests need to run as root because lxc-ip needs to run as root
[12:42] <gary_poster> I'll file a bug jam, and ask bac to comment
[12:42] <gary_poster> (since he knows the most about the tarmac setup)
[12:43] <jam> gary_poster: thanks. Looking at the code, it would appear to just run 'nosetests' which should run all of them, but maybe they fail when not run as root.
[12:43] <gary_poster> right, I believe that's the case
[12:43] <gary_poster> well...
[12:43] <gary_poster> if they fail then tarmac won't work...
[12:43] <gary_poster> and it does
 filed by lool is of some interest
[12:47] <gary_poster> agreed
[12:47] <gary_poster> bac, do we run nosetests as root on lpsetup tarmac?
[12:56] <frankban> mgz: that's exactly the problem we had with multiarch. and about the tests: nosetests run in the base directory will only run the tests for lpsetup. `cd lplxcip && sudo nosetests` must be used to run lxcip tests. This is because 1) lxcip requires root 2) test_lxcip creates an LXC container, used to exercise the main functions
[12:57] <czajkowski> a
[12:58]  * bac looks
[13:04] <gary_poster> bac, frankban, jam, ok cool.  I filed https://bugs.launchpad.net/lpsetup/+bug/1045807 .  bac, could you link the tarmac config info so that people can know where to look and what to do?  Or is this something only yellow squad can do (in which case I should probably file another bug :-P ) ?
[13:04] <_mup_> Bug #1045807: tarmac should run lplxcip tests <lpsetup:Triaged> < https://launchpad.net/bugs/1045807 >
[13:05] <bac> gary_poster: pre-commit.sh is run by the tarmac user.  however that user has sudo privileges so tests for lp-lxc-ip could be run as root.
[13:05] <gary_poster> cool bac
[13:05] <gary_poster> so it could be a trivial fix
[13:06] <bac> yes, just mod pre-commit.sh
[13:06] <gary_poster> bac, do you want me to add that to the bug (happy to) or are you?
[13:06] <gary_poster> IOW, are you doing it now?  If not, I will :-)
[13:06] <bac> mgz, jam: if you're interested the tarmac+lp2kanban canonistack branch is at https://bazaar.launchpad.net/%7Eyellow/lp-tarmac-configs/tarmac-puppet/files
[13:06] <bac> gary_poster: please do it
[13:06] <gary_poster> bac, on it.  thx
[13:07] <bac> gary_poster: i turned off lp2kanban for the yellow board.  let me know if you ever want it fired up again.
[13:08] <gary_poster> ack thanks bac
[13:09] <bac> gary_poster: we should probably add the maintenance squad to the list of people who can access the canonistack instance.  unfortunately it is currently a static list of people, not teams.
[13:10] <gary_poster> bac, makes sense. Ideally we would add all of LP
[13:10] <gary_poster> bac, can you do that as a misc thing today, or should I file a bug for "someday"?
[13:30] <bac> gary_poster: it shouldn't be hard to write a lplib script that can take a list of teams and do it.  i'll add a misc card for it.
[13:32] <gary_poster> thanks bac. slack might be more apt.  dunno
[14:45] <sinzui> jcsackett: am I really here? My irc is borked after quantal upgrade
[14:45] <sinzui> well the fonts are extra ugly too
[14:45] <rick_h_> sinzui: howdy
[14:45] <czajkowski> sinzui: welcome
[14:45] <sinzui> So freenode is indeed on despite the off switch
[14:46] <czajkowski> sinzui: how were the holidays and were you sick ?
[14:47] <sinzui> um, Staying in a beach house with extended family was awkward. That is the most polite way to describe the ongoing civil wars in the house. There were only a few places to hide
[14:47] <mgz> see, you make that sound fun.
[14:48] <sinzui> I travelled to a Renaissance Festival yesterday. That felt like a holiday.
[14:48] <rick_h_> my wife was just saying that she wants to go to that when it's in town.
[14:49] <rick_h_> now I've got to go hiding
[14:50] <sinzui> I go several times a year for the bad food, shlocky celtic music, and fashions (and fashion tragedies)
[14:50] <rick_h_> hah, now I'll have to rethink my lack of desire to go. Your 'bad food' really sold me
[14:51] <sinzui> This year's secret theme is Doctor Who. I have never seen so many Whovians in one place before.
[14:52] <sinzui> rick_h_ steak-on-a-stick is just the start. It eventually leads to macaroni-and-cheese-on-a-stick
[14:52] <sinzui> I am not kidding either
[14:53] <rick_h_> lol, I went one year and just don't have the desire to go back, but the wife wants to take the 2yr old and 'experience' it
[14:53] <rick_h_> yea, the giant muttons walking around with people snacking on them really set the scene
[14:53] <cjwatson> An ex of mine does full-on week-long Tudor immersion stuff
[14:53] <sinzui> My children demand going. Anne made the costumes too
[14:54] <cjwatson> Scary amount of work
[14:54] <sinzui> I dress as myself because I look the same in all eras
[14:54] <rick_h_> cjwatson: oh hey, you're around. I'm looking at this MP
[14:54] <rick_h_> cjwatson: what do you think of trying to celery-ize this? It seems like a perfect fit though you'd be the first thing full time on celery
[14:54] <cjwatson> Heh, that was sort of why I wasn't :-)
[14:55] <rick_h_> cjwatson: in looking at the irc logs you link it seems wgrant and StevenK were going to check some perf stuff, did that all work out?
[14:55] <rick_h_> cjwatson: oh come on :P be brave and daring. It's a new job that only needs to happen in a specific use case, celery/queue ftw
[14:55] <rick_h_> a job per bug found would rock
[14:55] <cjwatson> I tried to include the necessary bits for celerification but I don't really know what I'm doing.  Would it just be an extra feature flag?
[14:56] <rick_h_> abentley: would probably be able to help tell you that. I think they're pretty close
[14:56]  * sinzui restarts to see if he can be on more than one irc server at a time.
[14:56] <cjwatson> The perf work is basically not likely to get to the point of being good enough to avoid the need for this job
[14:56] <cjwatson> So it's IMO orthogonal
[14:56] <cjwatson> After this branch it would just have the effect of making the jobs run faster
[14:56] <rick_h_> cjwatson: ok, but there were no performance reasons against this job?
[14:57] <cjwatson> Not that I know of
[14:57] <rick_h_> cjwatson: gotcha
[14:59] <cjwatson> So, I'm up for giving celery a try if it's at all likely to be workable.  How about I stick the job creation behind a feature flag and then we can try it on dogfood?
[14:59] <cjwatson> Not that I know whether it has celery runners
[14:59] <abentley> cjwatson: You need to run celeryRunOnCommit.
[14:59] <rick_h_>  abentley can you peek at https://code.launchpad.net/~cjwatson/launchpad/process-accepted-bugs-job/+merge/122420 and advise on testing it?
[14:59] <abentley> cjwatson: The feature flag is already there.
[15:00] <cjwatson> abentley: Right, I have celeryRunOnCommit.
[15:00] <abentley> cjwatson: jobs.celery.enabled_classes
[15:00] <cjwatson> abentley: I mean a different feature flag specific to this job, so that if it's off the code can adopt the previous behaviour of closing the bugs directly
[15:00] <cjwatson> So that we could land this and test somewhere without fear of breaking existing behaviour yet
[15:00] <abentley> cjwatson: Well, in theory you could reuse jobs.celery.enabled_classes for that...
[15:01] <cjwatson> I guess
[15:01] <abentley> cjwatson: I thought you already had a cron script.
[15:01] <cjwatson> You mean in stable or in my branch?
[15:02] <abentley> cjwatson: in your branch.
[15:02] <cjwatson> Not a specific one; it would involve cronscripts/process-job-source.py.
[15:03] <cjwatson> That will work, although it won't be as responsive as celery presumably could be.
[15:04] <cjwatson> I don't think responsiveness is hugely critical, but I'm wary because I'm going from synchronous behaviour to asynchronous so I don't want to create a significant perceived slowdown.
[15:04] <abentley> cjwatson: So I think the decision about whether to test the job under a feature flag is orthogonal to celery, then.
[15:04] <cjwatson> That's probably a fair point.
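The gating cjwatson describes (a flag that, when off, keeps the previous synchronous bug-closing behaviour) can be sketched as a simple branch at the point where the upload is processed. All names here are illustrative, not Launchpad APIs:

```python
def close_bugs_for_upload(upload, flag_enabled, create_job, close_directly):
    """Hypothetical sketch of feature-flag gating for a new job type.

    With the flag on, enqueue an asynchronous job (to be run later by a
    cron job-source runner or celery); with it off, fall back to the old
    in-process, synchronous behaviour, so the branch can land and be
    tested on dogfood without changing production behaviour.
    """
    if flag_enabled:
        return create_job(upload)
    return close_directly(upload)
```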
[15:09] <abentley> rick_h_: I don't know a lot about what's running or not running on dogfood.  You'd need to ensure a celeryd is running with the 'job' queue (and maybe the 'job_slow' queue).  And then you'd do an upload.
[15:10] <cjwatson> There's no celeryd running right now.
[15:11] <cjwatson> Where's this done on production?  I don't see it in lp-production-crontabs - is there an init script?
[15:13] <abentley> cjwatson: It runs on ackee.  I don't know the details of the deployment.  I think it's puppetized.
[15:19] <rick_h_> abentley: ok yuck on the extra work then.
[15:20] <cjwatson> That might not be too horrible; I'll need to talk to ops I suppose.
[15:21] <rick_h_> cjwatson: ok, well I guess it's up to you. I don't want to hold up your stuff, but it seems like a good candidate for the celery work.
[15:21] <cjwatson> I don't mind trying, and it isn't OMG urgent.  It might produce a better result all round.
[15:21] <cjwatson> Or it might involve mountains of extra debugging.  BUT THAT'S THE FUN
[15:22] <rick_h_> cjwatson: wooo more fun for you
[16:00] <rick_h_> abentley: does this ring any bell while trying to run tests? ImportError: cannot import name BzrDirMetaFormat1Colo
[16:00] <rick_h_> abentley: full trace: http://paste.mitechie.com/show/771/
[16:01] <abentley> rick_h_: No.  It sounds like it's running against an old version of bzrlib.
[16:02] <rick_h_> abentley: ok thanks...looks like something broke on me from thurs. Will peek further
[16:19] <czajkowski> bac: bug 1045774 want me to set the importance on it ?
[16:20] <_mup_> Bug #1045774: Attempting to authorize from cron script leads to repeated reauth attempts <launchpadlib :Triaged> < https://launchpad.net/bugs/1045774 >
[16:20] <bac> czajkowski: yes please
[16:20] <czajkowski> low or high?
[16:21] <mgz> 5am is pretty low :)
[16:28] <mgz> I'd be tempted to revert r93 on launchpadlib
[16:29] <mgz> it's a pain to deal with regardless
[16:29] <mgz> needing to hit enter isn't that much more pain.
[17:35] <cody-somerville> On the sharing pages, does anybody else notice that 'First' and 'Previous' never become links? (though clicking the words still works as expected).
[19:01] <rick_h_> deryck: is alive!
[19:01] <deryck> rick_h_, indeed! :)
[19:01] <deryck> actually deryck's internet lives!
[19:01] <deryck> hi abentley, too :)
[19:01] <abentley> deryck: Hi!
[19:02] <deryck> I'll do the webcam home tour at tomorrow's stand-up.
[19:02] <rick_h_> hah
[19:02] <rick_h_> how many sq ft was that going to be? /me blocks out the mansion tour on the calendar :P
[19:03] <deryck> heh
[19:03] <deryck> yeah, it could take awhile :)
[19:22]  * sinzui rescues children from state institution
[19:59] <abentley> rick_h_: Merged devel and got BzrDirMetaFormat1Colo
[19:59] <rick_h_> abentley: yea, I got it working with an apt-get update, source deps update, make clean and rebuild
[20:00] <rick_h_> abentley: not sure which one is the magic sauce tbh, the last thing I tried was make clean_buildout && make clean so not sure if the other steps were required/not
[20:00] <abentley> rick_h_: Yikes.  Given that bzr is an egg, make should be all that's needed.
[20:00] <rick_h_> abentley: might have been, when you said bzrlib was out of date I went hitting the system first
[20:01] <rick_h_> so I probably went all the way around the thing
[20:01] <abentley> rick_h_: Sorry if I misled you, but Launchpad doesn't use the system bzrlib.
[20:02] <rick_h_> well, I did a bin/py and import bzrlib; help(bzrlib) and found it was 0.0.1 behind my system so thought my lxc needed to be updated
[20:02] <rick_h_> anyway, all better, thanks
[20:05] <abentley> rick_h_: plain "make" solved it for me.  (Then I had to do update-sourcecode to handle bzr-loom imports.)
[20:06] <rick_h_> abentley: cool, I'll rework my depth chart of "crap's broke...try these commands"
[20:08] <abentley> rick_h_: yeah, "make" should be at the top of your list.  That's why I patched the 2.7 JS stuff-- because I use make a lot.
[20:10] <cody-somerville> sinzui, Does the sharingdetailsview use the API to get the information it shows or just to revoke grants to individual artifacts?
[20:12] <sinzui> cody-somerville, 100% api
[20:12] <cody-somerville> sinzui, I can't figure out how to get the data on the individual artifacts.
[20:12] <sinzui> ah
[20:13] <sinzui> that bloody cache again
[20:13]  * sinzui looks
[20:13] <cody-somerville> sinzui, getPillarGranteeData doesn't return details on the individual assets granted - just if there is any for the different information types
[20:14] <sinzui> I think the cache is being loaded, then maintained by the script. I recall there is a reload to get data. I am looking at how the page first gets the data. I suspect we may not be using the API if I do not see a reload to sync changes
[20:15] <sinzui> but it is supposed to be 100% api
[20:19] <sinzui> cody-somerville, getSharedArtifacts() is not exported. I will report the bug now
[20:20] <cr3> I just noticed that blueprints now have a work items section in addition to the whiteboard. Should work items be defined like before, ie: Lorem ipsum dolor sit amet: TODO
[20:25] <sinzui> cody-somerville, https://bugs.launchpad.net/launchpad/+bug/1046022 should be fixed in a few days
[20:25] <_mup_> Bug #1046022: Cannot get the artifacts shared with a person over the API <api> <disclosure> <sharing> <Launchpad itself:Triaged> < https://launchpad.net/bugs/1046022 >
[20:25] <rick_h_> cr3: I believe so. If I recall it has a format validation that runs in case it's slightly off
[20:25] <czajkowski> cr3: that was there at last uds http://blog.launchpad.net/cool-new-stuff/work-items-in-blueprints
[20:25] <rick_h_> czajkowski: ftw!
[20:25] <cr3> rick_h_: sweet, thanks!
[20:27] <czajkowski> new features go on the blog
[20:27] <czajkowski> always check there :)
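The format cr3 asks about is plain text, one work item per line with a colon-separated status, and the saved field is headed "Work items:". A minimal parser sketch of that convention (a hypothetical helper, not Launchpad's actual validator; the status set is an assumption):

```python
# Hypothetical sketch of parsing blueprint work items -- NOT Launchpad's
# real validator. Assumes "description: STATUS" per line, with an optional
# "Work items:" header and an assumed set of valid statuses.
VALID_STATUSES = {"TODO", "INPROGRESS", "DONE", "POSTPONED"}

def parse_work_items(text):
    """Return (description, status) pairs; skip blanks and the header."""
    items = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.lower() == "work items:":
            continue
        description, sep, status = line.rpartition(":")
        status = status.strip().upper()
        if sep and status in VALID_STATUSES:
            items.append((description.strip(), status))
        else:
            raise ValueError("malformed work item: %r" % line)
    return items

print(parse_work_items("Work items:\nLorem ipsum dolor sit amet: TODO"))
# → [('Lorem ipsum dolor sit amet', 'TODO')]
```

This also explains the redundancy cr3 notices later: the "Work items:" header is part of the stored text, so it reappears above a text area that is already labelled "Work items".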
[20:40] <abentley> rick_h_: could you please review https://code.launchpad.net/~abentley/launchpad/blueprint-info-type-change/+merge/122753 ?
[20:40] <rick_h_> abentley: rgr
[20:40] <abentley> rick_h_: thanks.
[20:44] <rick_h_> abentley: r=me
[20:45] <abentley> rick_h_: thanks.  Have a good one!
[20:45] <cr3> czajkowski: weird, when I save my work items, it adds the string "Work items:" at the top of my work items, which looks redundant considering the label of the text area is also called "Work items" :)
[20:49] <cr3> ... and then I get an email that says: work items set to: work items: ..
[20:57] <sinzui> cr3, work items are not complete and have no commitment to ever be completed.
[20:59] <cr3> sinzui: ah, that has happened to me too: working on a feature that almost got done but not quite :(
[20:59] <sinzui> cr3 czajkowski: the feature was built by Linaro for their needs and only they can use it fully. When the registry.upcoming_work_view.enabled flag is set to default or the guards are removed from the code, everyone can use the full feature
[20:59] <cr3> sinzui: I'm sure contributions are quite welcome :)
[20:59] <sinzui> cr3, they are. At this time it is a maintenance burden since removing the flags breaks Lp :(
[21:12] <cody-somerville> sinzui, I notice that the API still returns 'USERDATA' for 'private'. Is that a bug/going to be changed to reflect what's in the UI?
[21:15] <sinzui> cody-somerville, that is intentional. "Private" is for the punters. USERDATA is for people who know better
[21:16] <cody-somerville> sinzui, It's likely that the lp api will be used by the punters too ;)
[21:34] <cody-somerville> sinzui, Since 'private' is really 'USERDATA', should I migrate all our bugs to proprietary?
[21:35] <sinzui> cody-somerville, you can try. Bugs that are shared with another project will raise an exception
[21:36] <sinzui> I think true private projects will require the bugs to be proprietary, but that is two or more months away.
[21:36] <rick_h_> no, we're currently looking at providing a 3 way option of public, proprietary, and embargoed
[21:36] <sinzui> how can a private project have public bugs?
[21:37] <sinzui> Good luck getting users to see the bugs when they cannot traverse to the project
[21:37] <rick_h_> sorry, didn't mean that. Meant the project gets those three choices
[21:37] <sinzui> yes. I agree
[21:37] <rick_h_> so then the bugs have to be one of the 'private' types
[21:38] <rick_h_> but won't require proprietary, embargoed will also be an option from what I understand
[21:41] <sinzui> rick_h_ private/USERDATA can be shared with other projects, which leaks. Leaks a lot, in fact. We agreed not to support it. So the sharing policies must be Proprietary only. Embargoed is not supported by bugs but could be. Any other chosen sharing policy will lead to constraint violations because the default cannot be enforced
[21:43] <rick_h_> sinzui: ok, I think our goal is to support embargoed in bugs, but as you said we've just started to maybe that doesn't work out
[21:44] <sinzui> rick_h_: I think we should. The asymmetry will probably cause problems
[21:45] <sinzui> eg, can a proprietary bug be linked to an embargoed branch? What happens if the user cannot see both?
[21:45] <sinzui> It probably causes the same problems as public branches stacked on private branches.
[21:45] <rick_h_> sinzui: right, I assume we'll have some restrictions to manage that for sure
[21:47] <cody-somerville> I was told that the ability to affect /private/ bugs on a private project against public projects would be retained - via using the 'private' type for some bugs vs. 'proprietary', ie. you have to make a conscious decision to lower the privacy type.
[21:48] <sinzui> cody-somerville, yes, for public projects with private defaults.
[21:49] <sinzui> cody-somerville, the bug-linking feature was (and honestly always will be) a requirement to address the needs of true private projects looking at bugs in many projects
[21:50] <cody-somerville> but the bug linking feature isn't going to be implemented - correct?
[21:50] <sinzui> cody-somerville, The other choice is to cut scope, accept the risk, and allow projects to share with non-canonical projects
[21:51] <cody-somerville> sinzui, I'll need to check with the folks who make heavy use of the bug linking. It might be acceptable that their projects have private bugs by default but not be an entirely private project. If not, that's indeed a discussion we'll have to have.
[21:51] <sinzui> It is a valid decision, though you need to be really sure the other project knows what you are doing to honour your intentions. And hope that the project does not send bugs to upstream bug trackers or mailing lists
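sinzui's warning above implies that a bulk migration from USERDATA to Proprietary has to expect failures on bugs shared with another project. A self-contained sketch of that pattern (the Bug class and exception here are stand-ins to illustrate the shape of the loop, not Launchpad's real model or API):

```python
# Stand-in objects -- Launchpad's real bug model and exception types differ.
class SharedBugError(Exception):
    """Raised when a bug shared with another project cannot change type."""

class Bug:
    def __init__(self, bug_id, shared_with_other_project=False):
        self.id = bug_id
        self.information_type = "USERDATA"  # the API's name for "Private"
        self._shared = shared_with_other_project

    def transition_to(self, information_type):
        # Per the discussion above, shared bugs refuse the transition.
        if self._shared:
            raise SharedBugError("bug %d is shared with another project" % self.id)
        self.information_type = information_type

def migrate_to_proprietary(bugs):
    """Try each bug; collect the ids of the ones that cannot be migrated."""
    failed = []
    for bug in bugs:
        try:
            bug.transition_to("PROPRIETARY")
        except SharedBugError:
            failed.append(bug.id)
    return failed

bugs = [Bug(1), Bug(2, shared_with_other_project=True), Bug(3)]
print(migrate_to_proprietary(bugs))  # → [2]
```

The point of the try/except per bug is that one shared bug should not abort the whole migration; the failures come back as a list to be handled separately.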
[22:00] <cody-somerville> lifeless, would there be an easy way to add support to lazr.restful for prefetching certain relationships - sorta like how django lets you?
[22:00] <lifeless> cody-somerville: possibly
[22:00] <lifeless> cody-somerville: but I don't know what it is
[22:03] <cody-somerville> lifeless, basically being able to specify, when I request e.g. a project, that I want the same response to also include the entity linked via the maintainer field instead of just a link
[22:04] <cody-somerville> same rationale as making one db request to get the info I know I'll need instead of a bunch of small ones
[22:05] <lifeless> yes, I know what you mean
[22:05] <lifeless> there was a plan put together by leonard + gary + benji + others
[22:06] <cody-somerville> lifeless, the getPillarGranteeData method on the sharing_service returns a json response with the bits of info I need on both the grants and the user so my script to inspect sharing information on a project is super fast compared to if I had to make an api request for each user
[22:06] <lifeless> yup
[22:06] <lifeless> that's why that approach has been adopted
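The speed-up cody-somerville describes is the classic N+1 problem, the same trade-off Django's select_related addresses. A toy illustration of why a bulk endpoint in the style of getPillarGranteeData wins (a request counter stands in for HTTP round trips; none of these names are lazr.restful's real interface):

```python
# Toy model: each "fetch" function is one HTTP round trip. A bulk endpoint
# folds the N follow-up requests for linked entities into one response.
REQUESTS = {"count": 0}

USERS = {u: {"name": u} for u in ("alice", "bob", "carol")}
GRANTS = [{"grantee": "alice"}, {"grantee": "bob"}, {"grantee": "carol"}]

def fetch_user(name):
    """One request per linked entity (following a link)."""
    REQUESTS["count"] += 1
    return USERS[name]

def fetch_grants_links_only():
    """Returns grants with links; the caller must follow each one."""
    REQUESTS["count"] += 1
    return GRANTS

def fetch_grants_prefetched():
    """One request; grantee entities are inlined in the response."""
    REQUESTS["count"] += 1
    return [dict(g, grantee=USERS[g["grantee"]]) for g in GRANTS]

# Link-following style: 1 + N requests.
REQUESTS["count"] = 0
for grant in fetch_grants_links_only():
    fetch_user(grant["grantee"])
n_plus_one = REQUESTS["count"]

# Prefetched style: a single request.
REQUESTS["count"] = 0
fetch_grants_prefetched()
bulk = REQUESTS["count"]

print(n_plus_one, bulk)  # → 4 1
```

With real network latency per round trip, the gap grows linearly with the number of grantees, which is why the script built on the bulk response is "super fast" by comparison.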
[23:02] <sinzui> wallyworld_, https://launchpad.net/~launchpad/+related-software <- makes little sense
[23:03] <wgrant> cjwatson: It's a daemon
[23:03] <wgrant> blah
[23:03] <wgrant> I was scrolled back a few hours
[23:03] <wgrant> nevermind
[23:06] <sinzui> wallyworld_, https://bugs.launchpad.net/launchpad/+bugs/?field.tag=related-projects-packages <- the madness