[00:11] <wgrant> Hmm
[00:12] <wgrant> So pillarname is the most read table on the system by around 50%. And the main query that used it was 30x slower than it needed to be.
[00:12] <wgrant> Our performance situation is hilarious.
[00:13] <wgrant> == Most Bloated Tables ==
[00:13] <wgrant> logintoken || 96% || 555 MB of 576 MB
[00:14] <wgrant> Is autovacuum broken?
[00:14] <lifeless> no
[00:17] <wgrant> Oh-ho-ho
[00:17] <wgrant> I think it's all slony
[00:18] <wgrant> The performance regression in week 47 last year (the week of Nov 19) coincides with slony jumping from 20% to 85-110% master CPU usage.
[00:18] <wgrant> I wonder if that's when we upgraded.
[00:19] <wgrant> Compare https://devpad.canonical.com/~lpqateam/dbr/weekly/db-report-2011-11-11-2011-11-18.txt and https://devpad.canonical.com/~lpqateam/dbr/weekly/db-report-2011-11-18-2011-11-25.txt
[00:20] <wgrant> No major write load changes.
[00:22] <wgrant> The upgrade was the 18th.
[00:22] <wgrant> To slony2
[00:22] <wgrant> That is slightly suspicious, if you ask me.
[00:27]  * StevenK blinks at FormFields.select()
[00:28] <StevenK> Its second argument is *names, and when I pass it a list, (Pdb) p names
[00:28] <StevenK> (['name', 'displayname', 'visibility', 'contactemail', 'teamdescription', 'subscriptionpolicy', 'defaultmembershipperiod', 'renewal_policy', 'defaultrenewalperiod', 'teamowner'],)
[00:36] <wallyworld_> StevenK: who is a lp.dev user with launchpad.Commercial?
[00:37] <StevenK> admin ?
[00:37] <wgrant> wallyworld_: commercial-member@canonical.com
[00:37] <wgrant> or admin, yeah
[00:37] <wallyworld_> thanks
[00:40] <wgrant> wallyworld_, sinzui: Why BugTaskVisibilityPolicy?
[00:40] <wgrant> Isn't it global to all artifacts?
[00:41] <wallyworld_> wgrant: it's not
[00:41] <wallyworld_> unless something went wrong with the landing
[00:41] <wgrant> Oh, the commit message was wrong.
[00:41] <wgrant> The code is right.
[00:42] <wallyworld_> bugger, didn't change it when i changed the code
[00:42] <wallyworld_> StevenK: i think the rule in checkAllowVisibility() is wrong? if the ff is on, it no longer allows lp.Commercial
[00:42] <wgrant> I had also intended that it be AccessPolicyType, but we can discuss that.
[00:42] <wgrant> Since it's not an actual policy.
[00:42] <wallyworld_> sure, easy to change
[00:43] <wgrant> Just a class of policy.
[00:46] <StevenK> wallyworld_: Damn good point
[00:46] <StevenK> wallyworld_: I can subtly fix that in this branch if you want
[00:46] <wallyworld_> StevenK: yes please :-)
[00:46] <wallyworld_> and add a test
[00:52] <wallyworld_> StevenK: just noticed, the public/private field on Team+edit suffers the same issue as the private_bugs field
[00:53] <wallyworld_> StevenK: when you edit a private team, the public/private field says Public
[00:53] <wallyworld_> it's not being initialised from the context properly
[00:54] <StevenK> Right
[00:55] <wallyworld_> StevenK: lp:~wallyworld/launchpad/private_bugs-init-932721 has my solution
[00:55] <wallyworld_> to the problem which works
[00:57] <lifeless> StevenK: thats a list in a tuple
[00:57] <StevenK> lifeless: Well, yes.
[00:58] <StevenK> lifeless: I needed a *
[00:58] <StevenK> Which blows the list apart into arguments, I think
[00:59] <lifeless> if thing = ([foo,bar],)
[01:00] <lifeless> quux(*thing) == quux([foo,bar])
[01:01] <StevenK> thing = [foo, bar]
[01:01] <lifeless> then
[01:01] <lifeless> quux(*thing) == quux(foo, bar)
[01:01] <StevenK> Right, so I was right
[01:01] <lifeless> (but your names above was ([foo, bar],))
[01:02] <StevenK> lifeless: I was calling select(list), where it was 'def select(self, *names)'
[01:02] <StevenK> Calling select(list) got me ([foo, bar],)
[01:02] <StevenK> Calling select(*list) got me (foo, bar)
[01:02] <lifeless> kk
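[Editor's note: the select()/*names confusion above is plain Python argument unpacking; a minimal sketch with an illustrative select(), not the real FormFields API:]

```python
def select(*names):
    # *names collects the positional arguments into a tuple
    return names

fields = ['name', 'displayname', 'visibility']

# Passing the list directly wraps it in a one-element tuple:
as_is = select(fields)      # a list inside a tuple: ([...],)

# Unpacking with * spreads the list into separate arguments:
unpacked = select(*fields)  # a flat tuple of the names
```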
[01:13]  * StevenK grumbles at IPerson.checkAllowVisibility()
[02:16] <mwhudson> what is the intent of the register merge button doing ajax rather than a direct post?
[02:17] <mwhudson> is it supposed to generate the diff immediately?
[02:17] <StevenK> No
[02:18] <StevenK> It is supposed to tell you when stuff is wrong so you can fix it
[02:19] <wgrant> It'll be unbroken in a few hours.
[02:33] <mwhudson> StevenK: ah ok
[02:33] <mwhudson> StevenK: 'wrong' ?
[02:35] <StevenK> wallyworld_: ^
[02:36] <wgrant> Requesting a review from someone who can't see the branches.
[02:39] <wallyworld_> StevenK: ?
[02:39] <wallyworld_> it will warn you if you are about to expose private branches
[02:39] <wallyworld_> and ask you to confirm
[02:43] <mwhudson> ah ok
[02:55] <wallyworld_> mwhudson: since now nominated reviewers are subscribed to the source/target branch if the branch(es) would otherwise have been invisible to the reviewer
[03:02] <StevenK> BAH
[03:02] <StevenK> I see what broke the test
[03:02] <StevenK> We only want to render the context for edit
[03:03] <StevenK> Otherwise it looks for IPersonSet.visibility, and then the toys get de-pramed
[03:04] <StevenK> wallyworld_: I think I'm done with your changes.
[03:04] <wallyworld_> StevenK: excellent
[03:05] <StevenK> wallyworld_: I'll get you to review the changes I added, if you don't mind.
[03:05] <wallyworld_> np
[03:05] <StevenK> But that will be about ten minutes, since the kettle is calling
[03:05] <wallyworld_> can relate to that
[03:29] <StevenK> wallyworld_: Sorry, I also cleaned up after lunch, but r14801 only on https://code.launchpad.net/~stevenk/launchpad/better-name-field/+merge/93317
[03:29]  * wallyworld_ looks
[03:29] <StevenK> wallyworld_: Feel free to add your +1 on the MP if you want.
[03:32] <wgrant> StevenK: Do you plan a fastdowntime tonight?
[03:33] <wallyworld_> StevenK: why not use check_permission('launchpad.Commercial', self)? it was just the placement that was wrong, not the method call itself
[03:33] <StevenK> wallyworld_: Because it wasn't working, and check_permission() in model code is always a bit suspect
[03:33] <StevenK> wgrant: That sounds like a good plan
[03:34] <StevenK> I think the mass index death patch is first
[03:34] <wgrant> It is indeed.
[03:34]  * StevenK reaches for staging logs
[03:34] <wgrant> Hmm
[03:34] <wgrant> Now, unity, give me back my coding terminal.
[03:34] <wgrant> Please?
[03:34] <StevenK> wgrant: gvim? :-P
[03:34] <wgrant> It's still there in Alt-Tab, but has otherwise disappeared.
[03:35] <wallyworld_> StevenK: ah, didn't notice it was in model. in such circumstances, i would instantiate and call the security adaptor.
[03:35] <StevenK> wallyworld_: Example?
[03:35] <wallyworld_> so that we know the same security checks are used
[03:35] <StevenK> wallyworld_: I quite like the use of IPersonRoles
[03:35] <wgrant> IPersonRoles is the right thing to do here.
[03:36] <wallyworld_> StevenK: except that it used to be launchpad.Commercial
[03:36] <wallyworld_> and now it's different
[03:36] <StevenK> in_commercial_admin() is identical
[03:36] <StevenK> For this purpose
[03:37] <wallyworld_> so we really want to allow in_admin as well?
[03:37] <StevenK> Both should be allowed
[03:37] <StevenK> .in_admin == ~admins
[03:37] <wallyworld_> sure, but it's new behaviour
[03:38] <wallyworld_> just checking that it's really required
[03:38] <StevenK> Not really, now it's explicit
[03:38] <StevenK> admins have lp.Commercial
[03:38] <wallyworld_> right, didn't know that
[03:39] <StevenK> admins also have lp.Moderate and I think they have lp.Special
[03:39] <wgrant> No
[03:39] <wgrant> lp.Special exists solely to exclude admins.
[03:39] <wgrant> That's its entire purpose.
[03:39] <StevenK> Scoff
[03:40] <lifeless> which is a bit backwards way of doing it
[03:40] <lifeless> its on the wrong dimension
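[Editor's note: the in_commercial_admin/in_admin discussion above boils down to a role check where either role suffices, mirroring the point that admins implicitly hold lp.Commercial. A hypothetical sketch with made-up names, not the real Launchpad IPersonRoles API:]

```python
# Hypothetical roles adapter: membership is modelled as a set of
# team names, and properties expose the roles being discussed.
class Roles:
    def __init__(self, teams):
        self._teams = set(teams)

    @property
    def in_admin(self):
        return 'admins' in self._teams

    @property
    def in_commercial_admin(self):
        return 'commercial-admins' in self._teams

def check_allow_visibility(roles):
    # Either role is sufficient; the explicit role check stands in
    # for a check_permission() call, which is suspect in model code.
    return roles.in_admin or roles.in_commercial_admin
```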
[03:41] <wallyworld_> StevenK: r=me
[03:42] <StevenK> wallyworld_: Thanks!
[03:42] <StevenK> wallyworld_: Did you want to scribble the MP, or you don't care?
[03:42] <wallyworld_> np. be good to get all this stuff landed
[03:42] <wallyworld_> StevenK: already have :-)
[03:45] <StevenK> wgrant: I'm not sure I trust dbupgrade.log
[03:45] <wgrant> StevenK: Oh?
[03:45] <wgrant> The times in there should be accurate.
[03:46] <StevenK> 2012-02-15 11:15:36 INFO    2209-10-0 applied just now in 0.1 seconds
[03:46] <lifeless> yes, any ?
[03:47] <StevenK> I don't believe it. That patch drops 63 indices
[03:50] <lifeless> dropping an index is cheap
[03:50] <StevenK> So stub said, delete a file, sync, move on
[03:51] <lifeless> little more than that but yeah
[03:51] <lifeless> (metadata updates, locks, etc)
[03:51] <StevenK> I'd just expect dropping 63 to take more than 100ms
[03:51] <lifeless> did you confirm my question about FK indices ?
[03:52] <lifeless> StevenK: its one transaction, so only a few syncs, not one per index.
[03:52] <StevenK> I saw it, I didn't think we were dropping any
[03:52] <lifeless> branch.owner ?
[03:53] <lifeless> I haven't gone looking for others but I'd expect that to be the index we need for efficient referential-integrity queries
[03:54] <StevenK> But we don't select via owner. The index has been untouched
[03:54] <wgrant> It was branch__owner_name__idx that was dropped
[03:54] <wgrant> Not owner
[03:55] <wgrant> owner_name probably hasn't been used since unique_name was introduced.
[03:55] <mwhudson> that index was probably a hang over from before source package branches
[03:55] <lifeless> ah, cool.
[03:55] <lifeless> StevenK: the indexes I'm concerned about *wouldn't be used*
[03:55] <lifeless> or at least, very rarely
[03:55] <wgrant> Person merges would use them.
[03:55] <StevenK> lifeless: So, I trust stub's judgement on this
[03:56] <lifeless> sure
[03:56] <lifeless> I will ask if he checked
[03:56] <lifeless> which is all I was asking you
[03:56] <lifeless> that someone took the time to remember this step
[03:56] <StevenK> wgrant, I, and stub spent a few hours talking over all of them
[03:56] <wgrant> I disagree with the removal of some.
[03:56] <wgrant> eg the ones on incrementaldiff
[03:57] <wgrant> They're only unused because the feature is disabled.
[03:57] <wgrant> But I don't care enough to revert the patch.
[03:57] <StevenK> Index creation can be done without downtime
[03:57] <wgrant> They're easy to add back later if we don't want to delete the feature.
[03:57] <wgrant> Right.
[03:57] <StevenK> wgrant: FDT request up
[03:58]  * StevenK glares at his laptop
[03:58] <StevenK> If I double-click a text file on my desktop to bring up gedit, Unity crashes
[03:58] <StevenK> If I use gnome-do to bring up the same file, it doesn't
[03:59]  * wgrant blames nautilus
[03:59] <wgrant> And thumper.
[03:59] <StevenK> It's oneiric, so his care factor is probably a large negative number.
[04:49]  * StevenK stabs _lock_actions more.
[04:55] <cody-somerville> Why does buildmailman.py take so long to run?
[04:56] <StevenK> Because mailman is large
[04:56] <cody-somerville> and wtf
[04:56] <cody-somerville> ImportError: Bad magic number in /home/cody-somerville/Projects/launchpad/lp-branches/devel/lib/lp/services/timeline/timeline.pyc
[04:58] <cody-somerville> fixie fixie
[04:58] <cody-somerville> anyhow, either launchpad has gotten slower to start or my machine has gotten slower :(
[04:59] <cody-somerville> hmm... ImportError: No module named convoy.meta. Am I missing a dep?
[04:59] <StevenK> Yes
[05:00] <StevenK> launchpad-developer-dependencies depends on it
[05:02] <cody-somerville> What is convoy anyhow?
[05:04] <cody-somerville> and can I copy the lucid build with binaries of it into natty series of ppa?
[05:05] <StevenK> Why are you developing LP on Natty?
[05:05] <StevenK> I was going to remove the Natty packages from the PPA
[05:06] <cody-somerville> :( cause I like Natty
[05:07] <StevenK> cody-somerville: We officially support the latest LTS and the latest release only.
[05:08] <StevenK> However, I'll fix the convoy recipe to build for Natty.
[05:08] <cody-somerville> \o/
[05:08] <cody-somerville> StevenK, You're the best.
[05:09] <StevenK> cody-somerville: And to answer your question: Convoy is a WSGI app for loading multiple files in the same request.
[05:09] <StevenK> Our plan is to use it for JS rather than the horrible launchpad.js
[05:09] <cody-somerville> Is it faster?
[05:10] <StevenK> It will result in domReady happening quicker, because your browser no longer has to digest 3MiB of JS
[05:12] <StevenK> cody-somerville: launchpad.js is a concatenation of every JS file in our tree + YUI. convoy means that we only send down the files that are needed for each page in one request
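[Editor's note: the idea behind convoy described above — one request returning only the JS files a page needs, rather than one giant launchpad.js — can be sketched as a tiny WSGI combo loader. This is a hypothetical illustration, not convoy's real API:]

```python
from urllib.parse import parse_qs

# Stand-in for files on disk; real code would read from a tree.
MODULES = {'widget.js': b'// widget\n', 'picker.js': b'// picker\n'}

def combo_app(environ, start_response):
    # A request like ?f=widget.js&f=picker.js names the files wanted;
    # the response concatenates just those, in order.
    wanted = parse_qs(environ.get('QUERY_STRING', '')).get('f', [])
    body = b''.join(MODULES.get(name, b'') for name in wanted)
    start_response('200 OK', [('Content-Type', 'text/javascript')])
    return [body]
```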
[05:13] <cody-somerville> Cool. Are you taking advantage of xsendfile header?
[05:14] <StevenK> No, we're not using XSendFile
[05:21] <wgrant> stub: I found some more queries today which have massive planner overhead due to the stats things.
[05:22] <wgrant> stub: Should we drop the target down to something more sensible except for problematic cols?
[05:22] <stub> wgrant: I'm considering that, yes. Unfortunately, for the problematic issues we would need to set it to a value we know is too low (100).
[05:23] <wgrant> stub: What if we drop to like 500 and see how it goes?
[05:23] <stub> I'm trying to assemble a test case for upstream, so the more the merrier.
[05:23] <wgrant> stub: Also, slony 2 is a performance regression.
[05:24] <stub> Dropping it to 500 would push the issue below the radar, but it will still be lurking. Not sure if that is the best idea.
[05:24] <wgrant> The unidentified global performance regression from November coincides exactly with the slony 2 upgrade, which coincides with 15x more sl_log_* tuple reads and slony going from 15% DB CPU usage to 110%
[05:24] <stub> So I have a collection of ideas I'm not happy with :)
[05:24] <wgrant> True
[05:25] <stub> wgrant: There is a known regression in Slony that is fixed with the next patch level that matches what you are seeing.
[05:25] <wgrant> http://www.slony.info/bugzilla/show_bug.cgi?id=167 is possibly relevant
[05:25] <wgrant> Yeah
[05:25] <wgrant> We should upgrade to 2.1, or possibly increase the intervals back to their defaults.
[05:25] <wgrant> See if that alleviates it.
[05:26] <stub> Is it affecting perceived performance? atm we have CPU and disk to spare I think, and the sl_log_ entries are going to end up in RAM anyway.
[05:27] <wgrant> Compare https://devpad.canonical.com/~lpqateam/dbr/daily/db-report-2011-11-16.txt and https://devpad.canonical.com/~lpqateam/dbr/daily/db-report-2011-11-18.txt
[05:27] <wgrant> I agree that it shouldn't be a problem.
[05:27] <wgrant> But the PPR graphs say otherwise.
[05:27] <wgrant> https://lpstats.canonical.com/graphs/PPR/20111017/20120217/
[05:28] <wgrant> Week 47 is the slony upgrade
[05:28] <wgrant> web increases by ~0.8s during that week and stays there
[05:29] <wgrant> It's possibly not related.
[05:29] <wgrant> But slony eating more than a core sounds undesirable anyway.
[05:48] <stub> wgrant: I have a suspicion we did a point release upgrade of PostgreSQL to 8.4.9 at the same time
[05:50] <wgrant> 21:18 < mthaddon> stub: any objections if we upgrade postgresql while we're at it? (security update)
[05:50] <wgrant> Indeed.
[05:50] <stub> Hmm... or would that have been the previous month? That point release was out 26 Sept, but not sure how long it took for the packages to be made, dribble through to lucid and actually get deployed.
[05:50] <wgrant> Buildbot broke in late October
[05:50] <stub> Right. So the PPR graph is quite probably the planner issue.
[05:51] <wgrant> Yeah
[05:51] <wgrant> Fix join selectivity estimation for unique columns (Tom Lane)
[05:51] <wgrant> This fixes an erroneous planner heuristic that could lead to poor estimates of the result size of a join.
[05:51] <wgrant> That doesn't look irrelevant.
[05:52] <stub> It was suspected when I discussed this on IRC, but I'll need to prove it to Tom.
[05:52] <wgrant> Fortunately we have a nice sacrificial sourcherry where we can reproduce the issue...
[05:53] <stub> Not really sure how PostgreSQL downgrades work :)
[05:53] <wgrant> No, but patching out the planner changes does :)
[05:55]  * stub gets dragged out to lunch
[06:34] <wgrant> stub: Setting branch.owner and teamparticipation.{person,team} to 2500 is enough to get it to 40ms on DF
[06:34] <wgrant> For one query I tested.
[06:36] <wgrant> https://pastebin.canonical.com/60335/ is the one.
[06:38] <wgrant> Ouch
[06:38] <wgrant> *Up* to 50ms if I remove the second part of the UNION
[07:01] <wgrant> stub: http://paste.ubuntu.com/844047/ is a reasonably minimal (but LP prod-specific) example with timings.
[07:02] <wgrant> So it gets pretty serious pretty quickly.
[07:09] <wgrant> And reproduced all those results to within 1ms on clones of those tables with only the relevant columns and no indices.
[07:12] <stub> wgrant: You are *upping* the stats to improve the planner time?
[07:13] <stub> 2500 is the current production and staging default
[07:28] <czajkowski> aloha
[07:36] <wgrant> stub: No, this is on DF, which defaults to 100
[07:37] <wgrant> The first column is the list of columns that I changed to 2500 and reanalyzed.
[07:38] <stub> wgrant: "Setting branch.owner and teamparticipation.{person,team} to 2500 is enough to get it to 40ms on DF". Did upping the stats to 2500 improve planner time to 40ms, or are you talking about query time?
[07:38] <wgrant> stub: enough to get it to 40ms from 1ms
[07:38] <wgrant> Remember that on DF planning is quick
[07:38] <stub> oic
[07:38] <wgrant> because of the tiny stats
[07:53] <wgrant> stub: Hm, did you tweak stuff on prod? I have yesterday's 200ms query taking ~15ms now.
[07:55] <stub> I applied a patch changing the statistics target on the TeamMembership.person column to 100
[07:56] <wgrant> Ah, assumed that wasn't live, since it was on db-devel.
[07:56] <stub> (all columns on that table could happily go to 100, but I didn't want to be invasive yet as there is a reasonable chance I'll be changing the default)
[07:56] <stub> Guess I should have landed it on devel now we have twiddled the pqm rule.
[07:57] <wgrant> Yeah, otherwise scheduling fastdowntime gets confusing.
[07:57] <wgrant> For us mortals without access to prod LDR :)
[07:58] <stub> Should wire that up on a web page in Launchpad. We never made a /+admin or similar jump off page did we?
[07:58] <wgrant> I've been considering that for a while.
[07:59] <wgrant> Just needs to be something trivial and text, like +opstats, I guess.
[08:00] <wgrant> There's still significant overhead on a lot of queries, but it's vastly reduced on yesterday's.
[08:01] <wgrant> Heh
[08:02] <wgrant> My pillarname fix worked well
[08:02] <wgrant> Reads are down by 99%
[08:11] <wgrant> Wow
[08:12] <wgrant> pillarname has in fact dropped off the DBR entirely, from being the top read table by like 50%
[08:19] <stub> wgrant: When do Bug.access_policy and branch.access_policy get dropped or repaired?
[08:20] <stub> wgrant: What was the pillarname fix?
[08:22] <wgrant> stub: PillarNameSet.getByName was doing a full seq scan on pillarname
[08:22] <wgrant> That's the API that's, you know, used to traverse to products and distros.
[08:22] <stub> Yer. Always wondering why that table was so popular for reads :)
[08:23] <wgrant> It's probably read by more requests than any other table.
[08:23] <wgrant> But the reads should each be roughly one tuple.
[08:23] <wgrant> stub: Hmm, I should probably drop them with the rest of the schema changes.
[08:23] <wgrant> Forgot that detail.
[08:24] <wgrant> branch.access_policy will probably come back at some point, but bug.access_policy is dead to us.
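[Editor's note: the pillarname fix above is a classic full-scan-to-index-probe repair. The same shape of problem can be shown with sqlite3's EXPLAIN QUERY PLAN on an invented table (not the real Launchpad schema); the plan flips from a scan to an indexed search once a unique index on the lookup column exists:]

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE pillarname (id INTEGER, name TEXT)')

def plan(sql):
    # EXPLAIN QUERY PLAN's detail column describes how each table is read.
    rows = conn.execute('EXPLAIN QUERY PLAN ' + sql).fetchall()
    return ' '.join(row[-1] for row in rows)

query = "SELECT id FROM pillarname WHERE name = 'ubuntu'"
before = plan(query)   # full table scan

conn.execute('CREATE UNIQUE INDEX pillarname__name__key '
             'ON pillarname (name)')
after = plan(query)    # single-tuple probe via the index
```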
[08:29] <StevenK> I love it when a branch goes through ec2 first time
[08:29] <StevenK> Yay for 4 changes as well as knowing what tests are impacted
[08:39] <wgrant> stub: Thanks
[08:40] <wgrant> Those columns are gone in -2
[08:49] <stub> wgrant: So people get access to an artifact either via being granted access to a policy or being granted directly to the artifact?
[08:50] <wgrant> stub: Right. Same as the old schema, just with multiple policies per artifact, and modelled slightly differently with denormalisation.
[08:52] <wgrant> So to see an Ubuntu security bug you need to either be in ubuntu-security (which has a grant for the ubuntu security policy), or have a specific grant for the bug.
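[Editor's note: the access rule just described — visibility via a policy grant to one of your teams, or via a direct grant on the artifact — can be sketched with made-up data structures (illustrative only, not the real schema):]

```python
# (team, policy) pairs and (user, artifact) pairs stand in for the
# grant tables; artifact_policies is the denormalised policy link.
policy_grants = {('ubuntu-security', 'ubuntu/security')}
artifact_grants = {('some-reporter', 'bug-1')}
artifact_policies = {'bug-1': {'ubuntu/security'}}

def can_see(user, teams, artifact):
    # Direct grant on the artifact wins outright...
    if (user, artifact) in artifact_grants:
        return True
    # ...otherwise any team grant on any of the artifact's policies.
    policies = artifact_policies.get(artifact, set())
    return any((team, policy) in policy_grants
               for team in teams for policy in policies)
```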
[08:52] <stub> wgrant: I don't understand why any of the 'FK to be added later' columns need to have their foreign keys added later rather than now.
[08:53] <wgrant> stub: Person merging and branch deletion need special code to handle foreign keys to their tables.
[08:53] <stub> ok
[08:53] <wgrant> -3 adds that code, -4 will add the foreign keys
[08:53] <wgrant> The ugliest part of fastdowntime :(
[08:54] <wgrant> We really should rework that code to make it easier, but I'm not quite sure how.
[08:54] <stub> Hey, you guys wanted it like this :-)
[08:54] <wgrant> Pfft.
[08:55] <stub> You could add the columns to the 'ignore these columns' sets in -1 and -3, but probably no real gain
[08:56] <stub> well... one less fast downtime
[08:56] <wgrant> Ah, yes, could this time, because there's an initial patch.
[08:57] <wgrant> Might as well do that.
[08:57] <stub> I'm pretty certain person merge has the ignores stuff you need. I don't know about the branch removal code.
[08:58] <stub> Probably not - ignores doesn't make sense there except for this sort of db patch juggling.
[08:58] <wgrant> Branch removal doesn't actually check. It just has a doctest that lists all the FKs.
[08:58] <StevenK> Oh, twitch
[08:58] <StevenK> That's a pointless test
[08:58] <wgrant> Oh?
[08:59] <StevenK> How does listing all FKs on branch help in any way?
[08:59] <wgrant> Because it will fail if you've added one, hinting that you need to add new rules to handle it.
[09:00] <stub> AccessPolicyArtifact needs a PRIMARY KEY
[09:05] <wgrant> Curses, you're right.
[09:05] <czajkowski> morning
[09:05] <wgrant> Morning czajkowski.
[09:06] <czajkowski> has anything happened since last Friday with launchpadlib? a user could log in on Friday and today is seeing errors and is not able to use it
[09:06] <wgrant> czajkowski: The user is using staging. staging's database gets reset over the weekend, so the credentials from last week are no longer valid.
[09:07] <czajkowski> wgrant: thank you
[09:07] <wgrant> They may need to use a keyring manager to remove the cached credentials.
[09:07] <czajkowski> nods
[09:07] <czajkowski> thanks for explaining
[09:07] <wgrant> np
[09:08] <StevenK> czajkowski: That's one thing to keep in mind -- "I'm having a problem with LP" covers a multitude of sins, you should get into the habit of asking if they're using production, edge, staging or qastaging if they don't say so themselves.
[09:08] <wgrant> (in this case the error showed it was staging)
[09:08] <czajkowski> StevenK: wgrant nods ok
[09:09] <StevenK> czajkowski: mrevell can explain the gory history of those four instances (and there is a fifth, but non-developers tend not to use it) if you care. :-)
[09:10] <lifeless> StevenK: czajkowski: s/edge//
[09:11] <czajkowski> thanks folks
[09:11] <StevenK> Oh, right, edge redirects now
[09:11] <wgrant> lifeless, StevenK: Not for the API
[09:16] <cody-somerville> How long does the Soyuz test suite take to run these days?
[09:16] <StevenK> Like an hour?
[09:17] <StevenK> buildmaster doesn't add much
[09:17] <cody-somerville> omg.
[09:17] <StevenK> cody-somerville: The entire testsuite on buildbot is now taking about 5.5 hours
[09:18] <cody-somerville> lol. I just ran it in 123.1 seconds - Tests with failures: runTest
[09:19] <cody-somerville> Okay. So I'm looking to export the authorized_size attribute on PPAs. StevenK: I'll buy you a beer at UDS if you do it for me :)
[09:19] <wgrant> stub: Declared the PK, dropped an index that was redundant with it, and renamed one which had an obsolete name.
[09:19] <StevenK> cody-somerville: And if I'm not at UDS? :-)
[09:20] <wgrant> It's a trap, because we don't go to UDS :)
[09:20] <cody-somerville> lol
[09:20] <StevenK> cody-somerville: To export it read-only, it's fairly easy
[09:20] <StevenK> To *change* it over the API, that's a little harder
[09:20] <cody-somerville> I want to be able to change it (provided one has the necessary permissions)
[09:21] <stub> ta. just looking at the triggers
[09:21] <StevenK> Yes, I thought you might. :-)
[09:21] <cody-somerville> Isn't it just an int field?
[09:24] <StevenK> cody-somerville: So, you can wrap it in exported(), and write a bunch of tests to see :-)
[09:25] <cody-somerville> If only that was easier :(
[09:25] <stub> wgrant: SECURITY DEFINER functions also need to add a 'SET search_path TO public' clause for security reasons (patch-2209-00-5.sql has an example).
[09:25] <StevenK> cody-somerville: webservice tests aren't too bad
[09:25] <cody-somerville> when I run make schema, I get 'ProgrammingError: text search configuration "default" does not exist'
[09:26] <stub> Not that it should be possible to exploit that in our setup, but defence in depth and all that.
[09:27] <wgrant> Yep
[09:27] <wgrant> cody-somerville: Have you run launchpad-database-setup?
[09:27] <wgrant> It's in utilities/
[09:27] <cody-somerville> I have in the past
[09:28] <StevenK> You probably need to again
[09:29] <cody-somerville> ugh. it destroyed my database, lol.
[09:32] <stub> cody-somerville: In a good way?
[09:32] <stub> That script scares me
[09:33] <cody-somerville> No. I thought I could provide a different account name as argument and it would just destroy stuff belonging to that
[09:33] <cody-somerville> but nope. it just went ahead and dropped the entire cluster
[09:34] <wgrant> That's why it gives you a warning :)
[09:35] <wgrant> Oh, but it lies.
[09:35] <wgrant> Because it indeed drops the whole cluster.
[09:35] <cody-somerville> 'THIS SCRIPT WILL DESTROY ALL POSTGRESQL DATA for the given user'
[09:36] <stub> That isn't a lie, it just doesn't tell you everything
[09:36] <cody-somerville> I gave it a non-existent user as an argument, lol
[09:37] <cody-somerville> welp, at least the tests are running correctly now
[09:37] <wgrant> stub: Is there any way we can tell what's reading 1.5 million branch tuples/sec?
[09:37] <wgrant> That seems pretty unlikely to be legit.
[09:39] <stub> wgrant: Those stats are not linked to particular queries or connections unfortunately.
[09:39] <wgrant> Yeah.
[09:39] <wgrant> Will see if the go away in 15 minutes when everything's disabled.
[09:39] <stub> wgrant: The only way I can think of would be to turn on statement + plan logging and analyze that.
[09:39] <wgrant> That will narrow it down to the webapp.
[09:46] <wgrant> stub: Ah, thanks for the approval.
[09:55] <wgrant> Unless it's translations_import_queue_gardener it must be the webapp...
[10:07] <gmb> Folks, I know I'm being dumb, but what do I need to do to make this go away:
[10:07] <gmb> Traceback (most recent call last):
[10:07] <gmb>   File "utilities/js-deps", line 4, in <module>
[10:07] <gmb>     from convoy.meta import main
[10:07] <gmb> ImportError: No module named convoy.meta
[10:07] <gmb> (when running `make`)
[10:08] <StevenK> Upgrade your dependencies
[10:08] <gmb> StevenK, That's what I thought... I'll try again.
[10:08] <wgrant> gmb: apt dependencies
[10:08] <wgrant> Make sure launchpad-developer-dependencies is installed
[10:08] <wgrant> And that you're not running natty, which IIRC you are.
[10:08] <gmb> Ahahahahahahhaahahasdhashdaqhrwuqwhrquiwhrqui3hr1ui32hr1ui3rh13iurh3stabstabstabstabdie
[10:08] <StevenK> Haha
[10:08] <gmb> wgrant, I'm on Precise.
[10:09] <wgrant> Oh, OK
[10:09] <gmb> For juju goodness.
[10:09] <gmb> And lxc
[10:09] <wgrant> One of the other MacBookers was on Natty at the 'dome
[10:09] <gmb> and other crazy-ass stuff.
[10:09] <gmb> wgrant, Yeah, but I'm running in a VM; I'm too lazy to do the work to get Ubuntu working properly on the metal.
[10:09] <wgrant> Heh
[10:11] <gmb> And whaddaya know, lp-dev-deps isn't installed any more.
[10:11] <gmb> Goodness knows why.
[10:11] <gmb> Thanks fellas.
[10:15] <bigjools> gmb: release upgrade removes it
[10:16] <gmb> bigjools, Yeah, I must've forgotten to re-add it afterwards.
[10:52] <jml> bigjools: testtools 0.9.14 released, fixing the subunit incompatibility.
[10:53] <jml> I really want that in precise.
[11:16] <StevenK> jml: Feature freeze is today, get cracking
[11:21] <rick_h> morning
[11:22] <jml> StevenK: I don't know what to do. Honest.
[11:23] <jml> StevenK: otherwise I would.
[11:23] <StevenK> Looks like it has been synced from Debian in every release except precise
[11:29] <bigjools> jml: awesome
[11:30] <jml> StevenK: normally lifeless takes care of it, but he's been busy this cycle.
[11:33] <bigjools> jml: I'll try and get it poked in
[11:33] <jml> bigjools: thanks.
[13:01] <wgrant> stub: I think I've found the thing that's reading branch a lot. The planner makes very bad life choices: http://paste.ubuntu.com/844328/ is the >1s query issued by BranchLookup.getUniqueName, <1ms if I extract the distroseries bits into a CTE.
[13:02] <stub> \o/
[13:03] <wgrant> Branch.owner is extraordinarily skewed, but the planner should be able to tell that those joins map perfectly through unique indices :/
[13:03] <wgrant> Unless I'm missing something.
[13:05] <stub> Yes, it is expecting a user to not have many branches
[13:05] <wgrant> Yeah, which is a reasonable assumption.
[13:05] <wgrant> I can't fault it for that.
[13:06] <wgrant> But I can fault it for not doing the dead-simple plan which evaluates the four unique joins individually :)
[13:06] <stub> It might know that id XXXX has lots of branches (if the sample size is high enough). But it won't know that that Person.name matches that id.
[13:06] <wgrant> Right, which is why it normally works, because we usually query by ID.
[13:07] <wgrant> But in this case the indices should tell it that the WHERE clause constrains each join to a single row, surely.
[13:11] <stub> The planner estimates agree with that
[13:12] <stub> It was expecting a single row at every stage of the plan, but things went pear shaped when it found that user actually owned 21k branches
[13:12] <StevenK> That's what happens for ubuntu-branches, though
[13:13] <wgrant> I'm still not sure how that can work out cheaper than the plan I suggest, but I guess index reads might do it.
[13:13] <StevenK> I say go for the CTE
[13:15] <stub> Yes, planner being too thick. For every table in there we can guarantee a single row due to uniqueness on the id column, except for Branch which *probably* will return 1 row but that is a guess, not a guarantee.
[13:15] <stub> Well, 0 or 1 rows
[13:16] <wgrant> It can reason through the unique indices and say there's only one branch row.
[13:16] <wgrant> Because it knows there's only one row for every element of one of its unique keys.
[13:18] <stub> I think the problem is that it doesn't know that. It could, but not implemented.
[13:19] <wgrant> Seems like it would be a useful sort of thing to know :/
[13:20] <stub> wgrant: Paste your rewritten query?
[13:20] <wgrant> http://paste.ubuntu.com/844355/
[13:21] <wgrant> Joining the CTE is slow. It may be the one-row hint provided by the = that does it.
[13:21] <wgrant> It's still not doing the right thing, but it works.
[13:22] <wgrant> (it does a nested loop from branch onto person)
[13:26] <stub> So you want to filter the Branch table on those criteria? Where do you start? Join with distroseries, you end up with a working set of maybe 20k. Join with SourcePackageName you end up with a working set of maybe 30. But join with Person, you are probably going to end up with 1 row in your working set.
[13:27] <stub> Rather than rewrite for a CTE, we might be able to craft an index that will be used for this and similar queries
[13:28] <stub> Say (person, distroseries, sourcepackagename)
[13:28] <stub> Suspect we always limit on both distroseries and sourcepackagename, never just sourcepackagename?
[13:29] <wgrant> This query is the only one that appears to be particularly problematic.
[13:29] <wgrant> Because it's called as the sole query behind an XML-RPC API
[13:29] <wgrant> So has none of the objects.
[13:29] <wgrant> So the stats are useless.
[13:29] <wgrant> Because it's all string-based.
[13:30] <wgrant> So, yeah, never just sourcepackagename.
[13:31] <stub> Index fails for me
[13:31] <wgrant> Likewise.
[13:43]  * wgrant sleeps.
[13:44] <stub> Night. I can't improve on what you have.
[13:46] <wgrant> stub: Thanks. I might land that tomorrow, then.
[13:47] <stub> http://paste.ubuntu.com/844395/ without the CTE
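The rewrite wgrant and stub discuss — resolving the unique person row in a CTE first, so the join starts from a guaranteed one-row working set instead of filtering Branch late — can be sketched in miniature. This is an illustrative Python/sqlite3 mock-up with invented table contents, not Launchpad's actual schema or the Postgres plan in the (now-expired) pastes:

```python
import sqlite3

# Toy schema standing in for Launchpad's person/branch tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE branch (
        id INTEGER PRIMARY KEY,
        owner INTEGER,
        distroseries INTEGER,
        sourcepackagename INTEGER);
    INSERT INTO person VALUES (1, 'ubuntu-branches'), (2, 'someone-else');
    INSERT INTO branch VALUES (10, 1, 5, 7), (11, 2, 5, 7);
""")

# Resolve the unique owner row first; the outer query then joins from
# at most one row rather than scanning branch and joining person late.
rows = conn.execute("""
    WITH owner AS (SELECT id FROM person WHERE name = ?)
    SELECT branch.id
    FROM branch JOIN owner ON branch.owner = owner.id
    WHERE branch.distroseries = ? AND branch.sourcepackagename = ?
""", ("ubuntu-branches", 5, 7)).fetchall()
print(rows)  # [(10,)]
```

In the Postgres of this era, a CTE was also an optimization fence, which is presumably why materializing the one-row person lookup changed the plan even though the planner's own estimates did not.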
[14:34] <rick_h> https://code.launchpad.net/~rharding/launchpad/gallery-accordian_fix/+merge/93406
[14:58] <rick_h> jcsackett: can you look over this when you get a sec pls? https://code.launchpad.net/~rharding/launchpad/gallery-accordian_fix/+merge/93406
[15:02] <czajkowski> bigjools: did the person on this ticket contact you on irc to resolve the issue https://support.one.ubuntu.com/Ticket/Display.html?id=4997
[15:05] <jcsackett> rick_h: sure, i'll take a look in a sec.
[15:10] <czajkowski> sinzui: you about ?
[15:11] <sinzui> I am
[15:11] <czajkowski> ready for G+ ?
[15:12] <sinzui> I am
[15:13] <czajkowski> sinzui: https://plus.google.com/hangouts/133f83a9936535af8066fcbc7938f0325c3f4d8b?authuser=0&hl=en-GB#
[15:13] <czajkowski> sorry for the delay
[15:13] <czajkowski> we've been cleaning the RT system
[15:18] <jcsackett> did our diff builder get much smarter? or has it always just shown one line when a file has just been moved?
[15:38] <sinzui> czajkowski, bzr+ssh://bazaar.launchpad.net/~launchpad/lp-dev-utils/trunk/
[16:01] <mabac> jcsackett, thanks for reviewing https://code.launchpad.net/~linaro-infrastructure/launchpad/workitems-model-classes/+merge/92174. would you also land it for us?
[16:01] <abner`> folks, I'm planning to have a new bts installed in my server, but there's a complicated requirement here: I need two instances in two servers being synchronized. Any idea if it's possible to be done with launchpad?
[16:02] <sinzui> czajkowski, this is the command to disable a translation project because the project is already registered ./disable_projects.py --msg=2 gmusicbrowserhutranslate
[16:05] <jcsackett> lifeless: you around?
[16:05] <bigjools> jml: https://launchpad.net/ubuntu/+source/python-testtools
[16:06] <jml> bigjools: \o/
[16:06] <jml> bigjools: thanks :)
[16:06] <bigjools> jml: my extreme pleasure :)
[16:07] <jcsackett> mabac: yes, that's on my todo list. :-)
[16:07] <mabac> jcsackett, cool, thank you very much! :)
[16:15] <sinzui> czajkowski, this is the command to disable one or more projects that sends an explanation to the project maintainer: ./disable_projects.py totten25
[16:18] <sinzui> czajkowski, http://opensource.org/docs/osd
[16:56] <syst3mw0rm> Hi
[16:57] <syst3mw0rm> I would like to know the status of grackle project...as mailman is investigating the possibility of using grackle as new archiver framework
[17:13] <sinzui> syst3mw0rm, It is not yet operational
[17:14] <sinzui> syst3mw0rm, There are two needed parts to start testing. I am working on the client library right now. It might be done tomorrow
[17:14] <syst3mw0rm> sinzui, what is the second part ?
[17:15] <sinzui> syst3mw0rm, The Cassandra-based store is yet to be built and might be a month away given that my squad does not have time to work on it
[17:16] <syst3mw0rm> sinzui: Well, what do you think about the idea of using grackle as new archiver framework ?
[17:16] <syst3mw0rm> for mailman
[17:16] <sinzui> syst3mw0rm, I have been using a memory implementation store to test the client lib. I suspect the memory implementation will evolve into a reference implementation that can be used to create other backends.
[17:17] <sinzui> syst3mw0rm, That is my dream, but I think Cassandra's java is a nuisance. I think something else with a python heritage would be more suitable.
[17:18] <syst3mw0rm> so you mean that grackle can be used with other backends as well, and you will be using cassandra-based store for launchpad ?
[17:18] <sinzui> syst3mw0rm, I already have a branch queued to land that makes our mailman use grackle
[17:18] <syst3mw0rm> can you provide me with the link ?
[17:19] <syst3mw0rm> sinzui, why did you choose cassandra when you think that cassandra's java is a nuisance ?
[17:20] <sinzui> syst3mw0rm, correct. grackle is providing a fast way to inject a message and a mechanism to get JSON encoded messages back. We have already completed the JS/AJAX lib that will allow the browser to read the archive doing a minimal number of server queries
[17:21] <sinzui> I do not like java. I do not want to maintain a java-stack and python stack to run an archive.
[17:22] <sinzui> As a developer, or someone deploying the code, I want few dependencies, not many
[17:22] <sinzui> syst3mw0rm, Canonical likes Cassandra; it is not considered an extra dependency since it is the preferred blob store
[17:23] <syst3mw0rm> ok
[17:23] <sinzui> And to be fair to Cassandra, It was created to store messages by Facebook. I was surprised that an archive did not already exist
[17:25] <syst3mw0rm> So, what did they use to store the messages when Cassandra was not open source ?
[17:27] <sinzui> syst3mw0rm, yes. I think Facebook has one or more message stores in Cassandra. Their messages though are short, and without attachments.
[17:30] <syst3mw0rm> sinzui: can you give me link of the branch, where makes mailman use grackle ?
[17:30] <sinzui> syst3mw0rm, If I were to hack on the store on my own time, I would keep the mbox and use a db (sqlite default, but allow mysql, drizzle, postgresql) to store the indexes and json data. I think that will be fast enough for giant lists and easy to deploy
[17:31] <sinzui> The mbox would only be needed to get the non-text parts
[17:31] <sinzui> syst3mw0rm, https://code.launchpad.net/grackle
[17:31] <sinzui> I pushed some changes a few days ago
[17:32] <syst3mw0rm> sinzui:  non-text parts, like attachments ?
[17:37] <sinzui> yes
[17:37] <sinzui> ^ syst3mw0rm
[17:37] <syst3mw0rm> sinzui: ok.
[17:38] <syst3mw0rm> sinzui: The link you have provided just has some unit tests..
[17:44] <lifeless> morning
[17:48] <sinzui> syst3mw0rm, there is code in there for the client lib. It is very small. The memory implementation is in the test. I will extract it after I have completed the client API
[17:49] <sinzui> syst3mw0rm, the javascript lib will be moved into grackle soon. It is in Lp at the moment because it had the YUI infrastructure we depend on.
[17:51] <syst3mw0rm> sinzui: nice..Can i also get involved with development of grackle client lib... ? As i am planning to make mailman use grackle..it would be better if i am quite familiar with grackle internals..
[17:55] <sinzui> syst3mw0rm, that would rock. I expect to have the client completed this week. That allows us to refine it next week, integrate the js, and work on the server. wgrant was working on the server, but he has more important things to work on. We can work on the server.
[17:56] <syst3mw0rm> sinzui: seems like a great opportunity..
[17:56] <syst3mw0rm> so i think i will get myself familiarized with cassandra till you finish the client part..and then we can take it forward from there..what do you say ?
[17:57] <syst3mw0rm> we are going to work on cassandra itself, right ?
[17:57] <syst3mw0rm> sinzui: ^
[17:58] <lifeless> with, not on
[17:58] <lifeless> being familiar with how cassandra modelling is done is a good idea though
[17:58] <sinzui> syst3mw0rm, maybe. I certainly will use Cassandra when working on Canonical's time. On my own time I would write something that is easier for someone to deploy with mailman
[17:59] <lifeless> sinzui: cassandra is no harder to deploy than postgresql once packaged :)
[17:59] <syst3mw0rm> lifeless: "with, not on" ?
[17:59] <lifeless> syst3mw0rm: we don't expect to be making changes to cassandra
[17:59] <sinzui> lifeless, There are worlds outside of Ubuntu's Juju paradise.
[18:00] <lifeless> sinzui: oh, I know - I wasn't meaning juju
[18:00] <sinzui> lifeless, Most mailman installs would want a batteries-included option that meets the needs of 80% of mailman installs.
[18:00] <lifeless> sinzui: cassandra will run single-node quite happily
[18:01] <lifeless> sinzui: your time is of course your time; I just don't see much reason to do a different backend for other users: I expect most mailman installs are from Ubuntu packages already
[18:01] <lifeless> sinzui: (or Debian with packages)
[18:03] <sinzui> lifeless, I will not ever install a java stack to get mailman running for my biz/org. The mediocre choices of archivers we have are partly because they are awesomely easy to set up at the moment of mailman's install
[18:03] <syst3mw0rm> sinzui, lifeless i think that i should discuss it with barry warsaw who maintains mailman, as he might have a better idea about which backend service would be good for mailman installations...What do you guys say ?
[18:06] <sinzui> syst3mw0rm, I certainly will talk to barry about the replacement of pipermail. That is why I think the server/store implementation needs to have options
[18:07] <sinzui> python can meet the needs of most cases; galaxy-size hosts need enterprise-level backends
[18:11] <syst3mw0rm> sinzui: your last statement isn't clear to me...
[18:12] <syst3mw0rm> are you saying that we should have two options, cassandra for galaxy-size hosts and python based service for most of the other cases ?
[18:12] <lifeless> that is what sinzui is saying
[18:12] <lifeless> where Launchpad counts as galaxy size
[18:13] <lifeless> salgado: nice finding of deletable code !
[18:14] <sinzui> lifeless, I would not have said that two years ago. The terrible performance we see now is caused by some monster lists we now host
[18:14] <salgado> lifeless, heh, it was danilo who told me about it :)
[18:15] <sinzui> salgado, there are still map views and locate tables
[18:15] <sinzui> personlocation tabled
[18:15] <sinzui> tables
[18:15]  * sinzui stops typing
[18:15] <salgado> sinzui, map views?!  those I'd like to delete
[18:16] <sinzui> I think we have two views, possibly unregistered, sitting in lp.registry.browser.team
[18:16]  * salgado goes find them!
[18:17] <sinzui> wtf: salgado person.index still calls this: <div tal:content="structure context/@@+portlet-map" />
[18:17] <sinzui> I suck I should have removed that 14 months ago
[18:18] <salgado> I'm glad you did not
[18:18] <salgado> the +map view on teams is still registered though
[18:18] <salgado> yay, I'll kill 'em all
[18:18] <salgado> (https://launchpad.net/~linaro-infrastructure/+map)
[18:19] <lifeless> useful!
[18:19] <lifeless> deryck[lunch]: ohhai
[18:21] <sinzui> salgado, I removed a vocab a few months ago that was not being used.  There is no obvious way to locate an unused vocab, but I think there might be 2000 deletable lines
[18:23] <salgado> sinzui, other vocabs, you mean?
[18:23] <sinzui> yes
[18:23] <salgado> niice, I'll have a look and see if I find any others
[18:25] <abner`> folks, I'm planning to have a new bug tracking system installed in my server, but there's a complicated requirement here: I need two instances in two servers being synchronized. Any idea if it's possible to be done with launchpad?
[18:25] <sinzui> salgado, tar up all the windmill directories in the tree and put them in the dev wiki. You might delete 20,000 lines of code that are never executed
[18:27] <lifeless> abner`: in principle yes, in reality no for two reasons; the federation system isn't all that mature (e.g. we have bits turned off because of issues) and we don't have an LP<-> federation module written
[18:27] <lifeless> abner`: but you could just use postgresql replication ..
[18:29] <salgado> sinzui, why put that in the wiki?
[18:29] <abner`> lifeless, I have been trying to do this replication on the db layer, but it doesn't meet all the requirements.
[18:29] <sinzui> I think deryck wanted to keep them as reference to what we did test when writing yui replacement tests
[18:29] <abner`> lifeless, how does launchpad store the data? is it one big db with all projects inside?
[18:30] <lifeless> we have 5 or 6 data stores (db, mailman, bazaar, debbugs replica, debian replica, librarian) with data stored according to its nature
[18:31] <lifeless> there is a big DB, but it's not the largest store if you measure by disk size, and possibly not the largest by addressable items either
[18:34] <deryck> lifeless, I'll ping in about 30 min to 1 hr for the call.  cool?
[18:34] <czajkowski> lifeless: sorry for the multiple pings in -meeting
[18:35] <czajkowski> i added the PPA CoC topic to the CC agenda to get some discussion/answers
[18:36] <abner`> lifeless, hmm.. I'm asking because during the synchronization/replication I would need to transfer only the data of a specific project, not all projects. So having a single db with all projects can also be a problem.
[18:38] <lifeless> deryck: ok
[18:39] <lifeless> czajkowski: my irc client doesn't see any pings
[18:39] <sinzui> salgado, I am disappointed to see that windmill is only 2593 lines
[18:40] <czajkowski> lifeless: what irc client do you use ?
[18:40] <lifeless> czajkowski: irssi; it needs a : to invoke highlight
[18:41] <salgado> sinzui, come on, every windmill line, when removed, should probably be worth like a dozen lines of proper code ;)
[18:41] <lifeless> czajkowski: I could change that, but its good ATM - if someone doesn't want to invoke me, they don't accidentally do so.
[18:41] <czajkowski> ah yes same here
[18:41] <czajkowski> good to know
[18:41] <czajkowski> anyway we got an answer yes they do need to sign the CoC for ppas and it should be enforced
[18:41] <lifeless> czajkowski: thanks for letting me know about the discussion
[18:41] <czajkowski> shall get the mins later and post to the list tomorrow
[18:41] <lifeless> great
[18:42] <salgado> sinzui, btw, I got excited and started removing the location-related attributes (except timezone) from IPerson but then realized .latitude and .longitude are exported on the API.  do we need to follow a process to remove those or can we just assume nobody will care as those things have been removed from the UI anyway?
[18:42] <czajkowski> lifeless: in fact before I head out, here you go http://ubottu.com/meetingology/logs/ubuntu-meeting/2012/ubuntu-meeting.2012-02-16-17.02.moin.txt and you can see the discussion http://ubottu.com/meetingology/logs/ubuntu-meeting/2012/ubuntu-meeting.2012-02-16-17.02.moin.txt
[18:43] <lifeless> salgado: keep the attribute, return None for them, for old API versions
[18:43] <lifeless> salgado: for devel, unexpect it entirely
[18:43] <lifeless> czajkowski: I used my backlog :P
[18:43] <salgado> lifeless, right, makes sense
[18:43] <lifeless> s/unexpect/unexport/
[18:43] <czajkowski> well in case others were curious :)
[18:43] <sinzui> salgado, yes, what lifeless said.
[18:45] <salgado> lifeless, what about stuff that was only exported on the beta API? I guess those can be removed altogether?
[18:45] <lifeless> IMO yes
[18:45] <lifeless> same as devel. YMMV though, I think some folk used beta heavily?
[18:45] <barry> lifeless: hiya
[18:46] <lifeless> barry: oh hey
[18:46] <lifeless> sinzui: and syst3mw0rm: were talking about mailman archivers
[18:46] <lifeless> and we got to a point where consult-barry was raised. I thought, why wait?
[18:47] <barry> syst3mw0rm: hi, thanks for joining us here
[18:47] <barry> lifeless: good thought :)
[18:47] <syst3mw0rm> barry: no problem
[18:50] <sinzui> barry, we were discussing ideals of archivers. We are using java-based Cassandra as the message store for Lp. I believe that is overkill for most mailman installs. A simpler store, possibly taking the batteries-included approach, would work
[18:51] <lifeless> I was putting forward the opinion that apt-get install cassandra is pretty simple :>
[18:51] <barry> right, anything java-based is overkill for standard mm installs :)
[18:52] <barry> lifeless: certainly for lp, that would be fine.  mm3 though runs on lots of *nixen
[18:52] <barry> sinzui: do you have apis designed for the backend store?  what are the basic requirements for that store?
[18:53] <sinzui> barry yes
[18:53] <sinzui> in fact, we have accidentally written a memory-based reference implementation for testing
[18:53] <barry> sinzui: would it be possible to use sqlite + filesystem ?
[18:54] <sinzui> I think it will be easy to play tests against another implementation or mock/fake to verify conformance
[18:54] <barry> or should i say storm + possibly filesystem
[18:54] <sinzui> barry, friend, man-of-my-mind, yes. I think mbox + sqlite will work for 80% of the cases
[18:55] <barry> sinzui: that would be perfect, if by 'mbox' you mean http://docs.python.org/library/mailbox.html
[18:55] <sinzui> barry, sqlite could also be a step to switch to drizzle, mysql, etc ...
[18:55] <sinzui> barry, yes, I do mean that
[18:55] <barry> sinzui: exactly.  and if done through storm, it's an easy-ish config var to change.  e.g. we have postgres support in mm3 bzr right now
[18:55] <barry> sinzui: beauty.  that would let us use maildir for example
[18:56] <sinzui> oh, yes. I forgot that
[18:56] <barry> (maildir on disk format)
[18:56] <barry> sinzui: btw, i *love* the idea of a backend store that does all the threading and whatnot, and vends the necessary info through rest for whatever formatter you want to use
[18:58] <syst3mw0rm> barry: i guess you are talking about the grackle's idea ?
[18:58] <sinzui> barry, I don't have any code yet, but I imagine the mbox is only used to store the attachments and is the portable part of the archive. The db manages, indexes, threads, and stores JSON for fast returns of message data
[18:58] <barry> syst3mw0rm: yep
[18:58] <salgado> lifeless, I can't seem to find how to tell exported() to do so only for certain versions of the API... any pointers? :)
[18:59] <barry> sinzui: yep.  mbox would store the raw email messages too, which you might eventually want to have access to
[19:00] <barry> sinzui: the nice thing is that the db would also store things like takedown flags.  and the really nice thing is that the consumer of the json can do things like email obfuscation algorithms, or body massaging (e.g. hyperlink bug tags, etc.)
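The mbox-plus-sqlite design sinzui and barry converge on can be sketched with just the stdlib modules they mention. This is a speculative mock-up, not grackle code: the schema, function names, and JSON shape are all invented for illustration. The raw message lives in a `mailbox.mbox`; sqlite holds the index, the thread linkage, and a pre-rendered JSON payload so reads never have to parse mail.

```python
import email.message
import json
import mailbox
import os
import sqlite3
import tempfile

def open_store(db_path):
    # sqlite index: message identity, thread linkage via In-Reply-To,
    # where the raw message lives in the mbox, and fast-path JSON.
    db = sqlite3.connect(db_path)
    db.execute("""
        CREATE TABLE IF NOT EXISTS message (
            message_id TEXT PRIMARY KEY,
            in_reply_to TEXT,
            mbox_key TEXT,
            json TEXT)
    """)
    return db

def archive_message(db, box, msg):
    # The mbox keeps the raw message (and thus attachments); the db row
    # records its key plus the JSON the web front end will serve.
    key = box.add(msg)
    payload = json.dumps({
        "message_id": msg["Message-Id"],
        "subject": msg["Subject"],
        "body": msg.get_payload(),
    })
    db.execute("INSERT INTO message VALUES (?, ?, ?, ?)",
               (msg["Message-Id"], msg["In-Reply-To"], str(key), payload))
    db.commit()

# Demo: archive one message, then read it back from the index alone.
tmpdir = tempfile.mkdtemp()
box = mailbox.mbox(os.path.join(tmpdir, "archive.mbox"))
db = open_store(":memory:")

msg = email.message.Message()
msg["Message-Id"] = "<1@example.com>"
msg["Subject"] = "hello"
msg.set_payload("hi")
archive_message(db, box, msg)

record = db.execute(
    "SELECT json FROM message WHERE message_id = ?",
    ("<1@example.com>",)).fetchone()[0]
print(json.loads(record)["subject"])  # hello
```

Swapping `mailbox.mbox` for `mailbox.Maildir` is what makes barry's maildir suggestion cheap, and replacing sqlite with postgres via an ORM is the "easy-ish config var" he refers to.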
[19:00] <lifeless> salgado: something_for_version IIRC
[19:01] <salgado> oh, a different thing, like operation_for_version I guess
[19:01]  * salgado digs
[19:01] <sinzui> barry, we are in agreement. we even intend to have the Cassandra version do the same.
[19:01] <lifeless> salgado: oh oh, I know what you mean
[19:01] <lifeless> salgado: its a dict at the end of the export
[19:02] <salgado> oh, right
[19:02] <lifeless> takes a mapping of versions and shit. Uhm, grep for '\<beta\>' and you'll likely find it quickly
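The policy lifeless describes — frozen API versions keep the attribute but serve None, while devel drops it entirely — can be mimicked in a tiny self-contained sketch. This is not lazr.restful's actual API (there, the per-version dicts are passed to `exported()` itself, which is the "dict at the end of the export" above); every name here is hypothetical:

```python
# Hypothetical per-version export policy for a retired field such as
# Person.latitude. Old versions keep the attribute but get None back;
# "devel" stops exporting it altogether.
FIELD_POLICY = {
    "beta":  {"exported": True, "value": None},  # attr kept, value frozen
    "1.0":   {"exported": True, "value": None},
    "devel": {"exported": False},                # unexported entirely
}

def export_field(version, stored_value):
    """Return what a given API version should see for the field."""
    flags = FIELD_POLICY[version]
    if not flags["exported"]:
        raise KeyError("field is not exported in version %r" % version)
    # A frozen 'value' overrides whatever is stored in the database.
    return flags.get("value", stored_value)
```

The point of the shape is that old clients never see an AttributeError; they just get a permanently-None value, while new clients compiled against devel never learn the field existed.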
[19:02] <barry> sinzui, syst3mw0rm: this would be so cool.  the state-of-the-art in floss archivers has been abysmal for at least a decade.  this would leapfrog everything that i'm aware of
[19:03] <syst3mw0rm> barry: seems cool..but frankly, it will take some time to digest totally whatever you were discussing...:)
[19:03] <sinzui> barry, We will use Lp as a pass-through to do linking in real-time because of permission/data changes. The page code/js does not know it is talking to Lp...it can talk directly to the archive (a wsgi app) if you want
[19:03] <barry> syst3mw0rm: no worries :)
[19:04] <barry> sinzui: rock
[19:04] <lifeless> sinzui: hmm, the js will need to be querying LP; I think you mean 'LP will be offering the same API the archiver does'
[19:05] <lifeless> sinzui: so they can be substituted transparently
[19:05] <lifeless> (Do I understand correctly?)
[19:05] <sinzui> lifeless, correct
[20:59] <barry> lifeless: is there *anything* y'all can do to bump up the timeout on https://launchpad.net/ubuntu/+search?  it's so painful, but such a basic interface
[21:00] <lifeless> barry: patches appreciated
[21:00] <barry> :(
[21:00] <lifeless> let me link you the bug
[21:00] <lifeless> bug 816870
[21:00] <_mup_> Bug #816870: Distribution:+search (package search) timeouts <critical-analysis> <timeout> <Launchpad itself:Triaged> < https://launchpad.net/bugs/816870 >
[21:01] <lifeless> the count alone is 11 seconds to complete
[21:01] <barry> yay!  4 hotter
[21:01] <lifeless> the actual search will then be (probably but not always) less than that
[21:01] <lifeless> we'd likely need 20 seconds to have -any- chance of reliability
[21:02] <barry> lifeless: jfdi!
[21:02] <lifeless> and thats as high as we /can/ go before other components in the chain will time it out
[21:02] <barry> i'm willing to wait :)
[21:02] <lifeless> barry: when we raise such timeouts, we consume more resources, increasing contention for service points -> more issues elsewhere
[21:02] <lifeless> barry: this needs to be fixed, not tolerated. Thus my 'patches appreciated' comment.
[21:03] <barry> lifeless: i know.  i'm just being curmudgeonly.
[21:04] <lifeless> if we could just toss hardware at it, and tolerate the timeouts, it would be a different matter; but anyone who uses LP will be on master most of the time, and thats much harder to scale
[21:04] <lifeless> and this is all DB server CPU time, so we only have 16 concurrent queries we can run
[21:04] <lifeless> -> not many.
[21:04] <lifeless> barry: I'd love it if you fixed it.
[21:05] <barry> lifeless: no, i do appreciate the constraints
[22:05] <wallyworld_> sinzui: jcsackett: wherefor art thee?
[22:05] <sinzui> Fighting
[22:05] <wallyworld_> unity?
[22:05] <jcsackett> wallyworld: be there in just a moment.
[22:05] <thumper> what?
[22:05]  * thumper saw the word
[22:05] <sinzui> I lost X for 30 minutes
[22:05] <wallyworld_> thumper: unity is making me sad today
[22:05] <StevenK> thumper: Do you hilight on unity or something?
[22:05] <thumper> wallyworld_: why?
[22:06] <thumper> StevenK: no, but quassel has a summary buffer
[22:06] <wallyworld_> thumper: dash broken, random stuff left on screen
[22:06] <thumper> which shows all chatter on all channels
[22:06] <thumper> and I happened to see it
[22:06] <sinzui> I am not sure unity is my problem. X went belly up and I had no hardware detection for display, mouse, or keyboard
[22:08] <wallyworld_> thumper: i just upgraded this morning. often when stuff crashes and i go to report the bug, it says i am not running an official ubuntu package and so won't do it
[22:09] <thumper> wallyworld_: which are you using?
[22:09] <wallyworld_> thumper: let me check
[22:09] <thumper> there have been X graphics driver issues
[22:10] <thumper> visual corruption et al
[22:10] <wallyworld_> thumper: 5.2.0+bzr1975ubuntu0+644.really1977
[22:13] <thumper> wallyworld_: apt-cache policy unity
[22:14] <wallyworld_> thumper: https://pastebin.canonical.com/60465/
[22:15] <wallyworld_> thumper: the dash issue is that when i click on an icon to launch an app, nothing happens
[22:19] <thumper> wallyworld_: hmm... that is the latest and greatest
[22:19] <wgrant> wallyworld_: https://devpad.canonical.com/~lpqateam/dbr
[22:20] <wallyworld_> thumper: yes, i like the bleeding edge :-)
[22:21] <thumper> wallyworld_: so bleed
[22:22] <wallyworld_> thumper: indeed. i was just mentioning it :-)
[22:36] <thumper> wallyworld_: I think I've found your bug
[22:36] <wallyworld_> thumper: you're a legend
[22:37] <thumper> wallyworld_: I didn't say I've fixed it :)
[22:37] <wallyworld_> thumper: that's ok. finding it is the first step :-)
[22:38] <wallyworld_> thumper: and fewer others will be affected hopefully
[22:39] <jcsackett> wallyworld_: ok, i've made my vote on the MP and added you directly as a requested look. enjoy. :-P
[22:39] <wallyworld_> jcsackett: thanks, i think :-)
[22:39]  * jcsackett grins
[22:52] <jcsackett> wallyworld_: are you the reviewer today, or just planning on looking at that MP out of the goodness of your heart?
[22:53] <jcsackett> wallyworld_: when you have a moment (say when your day fully begins), can you look at https://code.launchpad.net/~jcsackett/launchpad/trivially-add-sharing-ui-stuff/+merge/93501? it's much shorter than the other MP. :-)
[22:54] <wallyworld_> jcsackett: i had already clicked on it but haven't read it yet :-)
[23:20] <wgrant> wallyworld_: https://code.launchpad.net/~wgrant/launchpad/bug-933853/+merge/93508 is the branch performance fix I mentioned earlier.
[23:25] <StevenK> http://pastebin.ubuntu.com/845139/
[23:26] <StevenK> sinzui: http://pastebin.ubuntu.com/845139/
[23:29] <wgrant> oops
[23:35] <wgrant> lifeless: I was going to do a FF cleanup today anyway...
[23:46] <wgrant> baaah
[23:46] <wgrant> work items landed