[00:18] <rick_h> StevenK: howdy, how goes the convoy branch?
[00:24] <StevenK> rick_h: I've not worked on it. If this critical regression branch gets finished off, I'll toss it at ec2 today.
[00:25] <rick_h> StevenK: awesome, ok. wanted to check. I've got deryck starting to review my JS test branches so was curious
[00:27] <wgrant> Hahahawefwefw
[00:28] <rick_h> quick! someone get wgrant some water before he passes out!
[00:29] <wgrant> I just discovered why Product:+index is so terrible.
[00:30] <wgrant> ProductPackagesPortletView always calculates linkage suggestions, even if it's not going to show them.
[00:30] <wgrant> And that's an expensive FTI query.
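The fix wgrant later files (bug 929241, below) amounts to deferring the expensive query until the portlet knows it will render it. A minimal sketch of that lazy-evaluation pattern, with hypothetical names standing in for the real Launchpad view code:

```python
class PackagesPortlet:
    """Sketch of a portlet that defers an expensive query.

    `run_expensive_query` stands in for the FTI-backed linkage
    suggestion search; it is only invoked when the suggestions
    will actually be shown, and the result is cached.
    """

    def __init__(self, show_suggestions, run_expensive_query):
        self.show_suggestions = show_suggestions
        self._run_expensive_query = run_expensive_query
        self._suggestions = None

    @property
    def suggestions(self):
        # Compute lazily and cache, instead of eagerly in __init__.
        if self._suggestions is None:
            self._suggestions = self._run_expensive_query()
        return self._suggestions

    def render(self):
        if not self.show_suggestions:
            return ""  # Never touches the expensive query.
        return ", ".join(self.suggestions)
```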
[00:31] <huwshimi> Is there a way to merge pipes, or push changes back into the parent? I accidentally created a pipe when I had uncommitted changes and now want these changes to be in the parent pipe.
[00:32] <wgrant> huwshimi: They're just normal branches with handy aliases.
[00:32] <wgrant> You should be able to say 'bzr merge :next'
[00:32] <wgrant> That sort of thing.
[00:33] <huwshimi> wgrant: Ah awesome, that makes sense
[00:36]  * wallyworld punches wireless keyboard and mouse that just died
[00:37] <rick_h> wallyworld: wired > *
[00:37] <wallyworld> i think it's the receiver. bollocks
[00:38]  * StevenK loves his wireless keyboard and mouse
[00:38] <wallyworld> mine is a logitech which should be better than a no-name brand you would think
[00:39] <StevenK> I think they're due for replacement. I've played enough first person shooters that the labels for WASD are either completely or just about worn off
[00:39] <wallyworld> hah
[00:39] <wgrant> That's why real keyboards don't have labels :)
[00:39] <wgrant> Well, that and Dvorak.
[00:39] <wallyworld> i've never tried a Dvorak keyboard
[00:40] <rick_h> vim + dvorak scares me too much
[00:40] <rick_h> though I do love my buckling spring unicomp :)
[00:41] <StevenK> wallyworld: I do hate that the two major choices for keyboard/mouse are Microsoft and Logitech
[00:41] <rick_h> bah
[00:41] <rick_h> http://elitekeyboards.com/ http://www.pckeyboard.com/
[00:41] <wallyworld> yeah. i'll have to buy another one today. can't not have one :-(
[00:42] <rick_h> there's your real keyboards
[00:42] <StevenK> rick_h: How much do you want to bet they don't ship to .au?
[00:42] <wallyworld> wow. the pckeyboard one even has a clitoris
[00:43] <rick_h> wallyworld: yea, but no middle scroll mouse button :( guess it's a trademark thingy they can't do
[00:43]  * wgrant hates non-mechanical switches now.
[00:43] <rick_h> StevenK: ah, ok got me there.
[00:43] <rick_h> wgrant: you tried a topre yet? I can't get my wallet out to go that route
[00:43] <wgrant> rick_h: No, but they intrigue me.
[00:44]  * StevenK prefers keyboards that are quiet
[00:44] <wgrant> But they feel terrible :/
[00:44]  * wallyworld prefers keyboard that work
[00:44] <StevenK> wallyworld: You seem to be typing okay.
[00:44] <rick_h> wgrant: do they? it's one of the few switches I've used
[00:44] <rick_h> StevenK: check out the cherry brown
[00:44] <wallyworld> StevenK: on my laptop keyboard
[00:44] <rick_h> those are pretty quiet switches
[00:45] <wallyworld> i'll have to undock for today
[00:45] <StevenK> Googling for Cherry Brown isn't so helpful
[00:45] <wallyworld> and whenever i undock, it kernel panics
[00:45] <wgrant> rick_h: I was replying to StevenK's quiet keyboards thing.
[00:45] <StevenK> wallyworld: Handy.
[00:45] <wallyworld> yeah
[00:45] <wallyworld> well here goes.....
[00:46] <wgrant> I believe mine has Cherry Browns.
[00:46] <wgrant> They're certainly not quiet.
[00:46] <rick_h> wgrant: oic
[00:46] <wgrant> But they're a bit quieter than Blues :)
[00:46] <StevenK> Bwaha
[00:46] <StevenK> That sounds like a panic
[00:46] <rick_h> wgrant: yea, I've got two of the leopolds, one in brown and one in blue
[00:46] <StevenK> wgrant: So you prefer keyboards that sound like a typewriter factory?
[00:46] <rick_h> from the elite keyboards place
[00:46] <wgrant> StevenK: Yes, because they are so nice to type on.
[00:47] <rick_h> wgrant: ++
[00:47] <wgrant> I used to prefer my ThinkPad keyboard to every other keyboard I'd tried.
[00:47] <wgrant> But now it feels terrible compared to this :/
[00:48] <StevenK> rick_h: $54USD to ship a keyboard from elitekeyboards
[00:48] <rick_h> StevenK: ouch
[00:49] <wgrant> pccasegear has some reasonable keyboards.
[00:49] <wgrant> They're Australian.
[00:50] <wgrant> http://www.pccasegear.com/index.php?main_page=index&cPath=113_1277
[00:50] <StevenK> All four of them are corded :-)
[00:50] <rick_h> cool, the leopolds are two that I have used. wgrant you used cherry reds? another one I'm interested in
[00:51] <wgrant> Why would you want a wireless keyboard :/
[00:51] <wgrant> You have to have wires to your monitor, and your keyboard is right in front of your monitor...
[00:51] <wgrant> rick_h: I just have a Das Keyboard Ultimate S
[00:51] <wgrant> Which has Browns AFAIK
[00:51] <rick_h> wgrant: ah, gotcha
[00:52] <wgrant> I've never tried reds.
[00:52] <rick_h> heh, tonight at the local coders meet up is keyboard night. Brought out the leopold blue, brown, happy hacker, and unicomp
[00:52] <rick_h> I need help...
[00:52] <StevenK> wgrant: My keyboard isn't
[00:52] <StevenK> I have a keyboard drawer
[00:53] <wgrant> I used an actual Model M for a while.
[00:53]  * StevenK wonders why calling this view is returning ""
[00:54] <rick_h> wgrant: my unicomp is one of those split Model M's. Like a Model M and an MS Natural had babies, but they go for like $1k on ebay :/
[00:54] <lifeless> is bug 732510 f-r really?
[00:54] <_mup_> Bug #732510: poppy-sftp should connect to the database with a unique user ID <canonical-losa-lp> <dbuser> <poppy> <qa-ok> <tech-debt> <trivial> <Launchpad itself:Fix Committed by stub> < https://launchpad.net/bugs/732510 >
[00:58] <StevenK> wallyworld_: That was a nice panic?
[00:58]  * StevenK puts together a deployment
[00:59] <wallyworld_> StevenK: i went and got a drink before i rebooted
[01:11] <wgrant> lifeless: No
[01:11] <wgrant> lifeless: Needs downtime.
[01:21] <lifeless> wgrant: paramses?!
[01:23] <wgrant> lifeless: Well, it's a sequence of sets of parameters :(
[01:23] <wgrant> Yes, it's terrible, but params is already used for the singular.
[01:35] <lifeless> rules, searches, clauses, conditions, param_sets, ...
[01:35] <lifeless> I dunno, but accepting it as terrible seems undesirable :)
[01:36] <wgrant> lifeless: It's a sequence of BugTaskSearchParams objects.
[01:36] <lifeless> yes
[01:37] <lifeless> which all get unioned together
[01:37] <lifeless> not anded
[01:37] <lifeless> this is worth calling out perhaps
[01:37] <wgrant> Yes
[01:37] <lifeless> -> alternatives
[01:37] <lifeless> -> alternate_parameters
[01:40] <wgrant> alternatives it is
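The semantics being settled here — a sequence of parameter sets that are unioned, not ANDed — could be sketched like this (a toy stand-in using dicts for BugTaskSearchParams, not the actual Launchpad signature):

```python
def search_bug_tasks(tasks, alternatives):
    """Sketch of a search taking several parameter sets.

    Each element of `alternatives` is one set of search criteria
    (standing in for a BugTaskSearchParams object); a task matches
    if it satisfies *any* of them -- the sets are unioned together,
    not ANDed.
    """
    def matches(task, params):
        return all(task.get(k) == v for k, v in params.items())

    return [t for t in tasks if any(matches(t, p) for p in alternatives)]
```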
[01:46] <lifeless> wgrant: bug 929241
[01:46] <_mup_> Bug #929241: ProductPackagesPortletView calculates suggestions when it knows it won't show them <Launchpad itself:In Progress by wgrant> < https://launchpad.net/bugs/929241 >
[01:46] <lifeless> wgrant: why can't that just be the first line of setUpFields ?
[01:54] <wgrant> lifeless: Because then setUpWidgets complains that there aren't any fields.
[01:55] <lifeless> ah. thanks
[01:55] <wgrant> If I avoid setUpFields, I have to also avoid the rest of the form code.
[01:57] <lifeless> yea
[02:02] <StevenK> WidgetInputError('name', u'Name', LaunchpadValidationError(u'unique-from-test-team-py-line529-100007 is already in use by another person or team.'))
[02:02] <StevenK> I call shenanigans
[02:12] <ayan> maybe i was dreaming but i recall hearing about a C API for launchpad.
[02:12] <ayan> if it exists, where can i download it?
[02:12] <wgrant> I'm not aware of one.
[03:09] <wallyworld_> lifeless: the docs say any alter table patch cannot be applied live. is this true even for "alter table xxxx drop constraint...." patches?
[03:10] <StevenK> Doesn't that take a full write lock on the table?
[03:10] <wallyworld_> no idea
[03:10] <StevenK> wallyworld_: It should still be fine to apply during FDT
[03:10] <wallyworld_> but i would have thought a drop would be virtually instant
[03:10] <StevenK> It still needs a lock
[03:11] <wallyworld_> sure, but a very short one
[03:11] <wallyworld_> which hopefully we could tolerate?
[03:11] <StevenK> wallyworld_: Applied live refers to "Does not need us to go down, can be done live"
[03:11] <wgrant> I think we should tolerate it.
[03:11] <wgrant> Although it will cause timeouts.
[03:11] <wgrant> Ah, except slony.
[03:11] <wgrant> So no, it requires a full lock across the cluster.
[03:12] <StevenK> Which is FDT only
[03:12] <wgrant> If we weren't using slony, I think it would be acceptable, as it would only cause things to time out for a few seconds.
[03:12] <wgrant> Yes.
[03:12] <wallyworld_> ok, will redo patch nr
[03:12] <wgrant> nr?
[03:13] <lifeless> wallyworld_: yes, it is true.
[03:13] <wallyworld_> nr is an (i thought) common abbreviation for number
[03:13] <wgrant> wallyworld_: Right, but this doesn't involve a number change.
[03:13] <wgrant> The docs in allocated.txt are out of date.
[03:13] <wallyworld_> wgrant: my patch nr ends in -1
[03:13] <lifeless> wallyworld_: the reason it says cannot be applied live is because it cannot be applied live.
[03:13] <wgrant> 'tis irrelevant nowadays.
[03:14] <wallyworld_> so do i make the patch nr end in -0?
[03:14] <lifeless> wallyworld_: in a busy cluster with e.g. backups, we might wait up to 24 hours to get a lock.
[03:14] <wgrant> wallyworld_: It no longer matters.
[03:14] <lifeless> wallyworld_: so even without slony, it would be unsafe to apply such a patch live.
[03:14] <wgrant> lifeless: It's safe if you are careful.
[03:14] <wallyworld_> ok, will mark the mp as needs review then
[03:14] <lifeless> wgrant: FSVO
[03:14] <lifeless> wgrant: its not trivially repeatedly safe
[03:14]  * StevenK stops looking at this visibility branch for a bit, since it is making him very very frustrated.
[03:15] <wgrant> lifeless: It involves less downtime than fastdowntime (up to ~9s of timeouts, rather than 90s of outage)
[03:15]  * wallyworld_ taps fingers waiting for diff
[03:15] <lifeless> wgrant: the goal of the process is to make it trivially and repeatedly safe to evolve the schema; having complex rules that only db gurus can get right is not a good way to achieve this.
[03:15] <wgrant> But because of slony we can't do it.
[03:15] <wgrant> Mmm
[03:15] <wgrant> True
[03:15] <lifeless> wgrant: more than 9 seconds, even without slony. not unless and until we have sorted a bunch of other stuff out.
[03:16] <lifeless> wgrant: (other stuff being e.g. backups - anywhere in the cluster. Cronscripts with locks. Etc.
[03:16] <wgrant> lifeless: Assuming that you are smart and don't apply when there are long transactions.
[03:16] <lifeless> (read locks)
[03:16] <wgrant> Which is what I meant by "careful"
[03:16] <lifeless> wgrant: hard to automate -well- because you cannot tell what /will/ be long when you start, only what *is* long already.
[03:16] <wgrant> lifeless: Lies.
[03:17] <wgrant> crontab disablement is handy.
[03:17] <lifeless> wgrant: e.g. the fdt kick everyone off logic would still be needed.
[03:17] <wgrant> Nope.
[03:17] <wallyworld_> hurry up diff!!
[03:17] <wgrant> You set a global flag which tells longrunning jobs to GTFO
[03:17] <wgrant> s/GTFO/not start/
[03:17] <lifeless> wgrant: I agree that in an ideal world it could be made moderately reliable, but I won't go further than that :)
[03:17] <wgrant> Wait for any stragglers to go away.
[03:17] <wgrant> Apply patch, which waits for contending requests to go away
[03:17] <wgrant> Done
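The procedure wgrant lists above (set a global flag, wait for stragglers, then apply) could look roughly like this. Every callable here is a hypothetical hook for illustration, not real Launchpad code:

```python
def apply_patch_carefully(set_no_new_jobs, running_jobs, long_transactions,
                          apply_patch, wait=lambda: None):
    """Sketch of the careful live-apply procedure described above.

    Hooks (all hypothetical): `set_no_new_jobs` flags long-running
    jobs not to start, `running_jobs`/`long_transactions` report
    stragglers, `apply_patch` runs the DDL (which itself waits out
    short contending requests), and `wait` pauses between polls.
    """
    set_no_new_jobs(True)               # global flag: jobs must not start
    try:
        while running_jobs() or long_transactions():
            wait()                      # wait for stragglers to go away
        apply_patch()                   # DDL waits on short contenders
    finally:
        set_no_new_jobs(False)          # let jobs resume
```

As lifeless points out next, this is only "modulo bugs, modulo stale connections, modulo another sysadmin doing something".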
[03:18] <lifeless> wgrant: modulo bugs, modulo stale connections, modulo another sysadmin doing something ...
[03:18] <wgrant> Yes.
[03:18] <lifeless> wgrant: the patch during that period breaks the world, of course.
[03:19] <lifeless> wgrant: taking everything out as fdt does, ignoring cluster sync, we can get less than that 9 second downtime window, and do it reliably (ignoring cluster sync because this is a post-slony discussion)
[03:19] <wgrant> lifeless: It breaks things that use that table for up to $REQUEST_TIMEOUT.
[03:19] <wgrant> Rather than breaking the entire application for $SLONY_TIME
[03:19] <lifeless> wgrant: and it also pauses all other incoming queries for that same window
[03:19] <wgrant> Isn't that
[03:19] <wgrant> "breaks things"?
[03:20] <wgrant> That was my definition of "breaks things".
[03:20] <wgrant> Oh, you mean it will hang threads and therefore cause queueing?
[03:20] <wgrant> True.
[03:20] <lifeless> yes
[03:20] <lifeless> knock on effects
[03:20] <lifeless> rather than 'LP is down, come back soon' folk see 'LP is slow with no notice'
[03:20] <lifeless> this is undesirable
[03:21] <wgrant> Mmm
[03:21] <wgrant> Slow for 5 seconds is better than down ever.
[03:21] <wgrant> s/5/10/
[03:21] <lifeless> debatable; there is no reason FDT can't be that fast *anyhow*, so ...
[03:22] <lifeless> given a less risky and highly repeatable process, with similar limits on performance, why would you want a more risky process?
[03:23] <lifeless> wgrant: are you working on that new loggerhead bug right now? should I delay releasing?
[03:24] <wgrant> lifeless: I'm 30s from pushing
[03:24] <wgrant> Just found another couple
[03:24] <lifeless> cool
[03:28]  * wgrant stomps on the branch scanner's face
[03:28] <wgrant> Hard
[03:28] <StevenK> That's okay, it has another one
[03:29] <wgrant> Unfortunately.
[03:29] <wgrant> wallyworld_: this ajax submit thing is very slow
[03:29] <wgrant> Or is it broken...
[03:29] <wallyworld_> wasn't slow for me before
[03:29] <wgrant> I'm leaning towards broken at this point.
[03:30] <wgrant> Oh
[03:30] <wgrant> It's the longpoll bug
[03:30] <wgrant> Had too many MPs open.
[03:31] <lifeless> rotfl
[03:31] <lifeless> cat just fell off the desk
[03:31] <wallyworld_> meow
[03:31] <lifeless> I have pillows behind my LCD
[03:31] <wallyworld_> wgrant: what's the longpoll bug?
[03:31] <lifeless> nice big one stacked two deep; both cats can lie up there
[03:31] <wgrant> wallyworld_: Web browsers suck
[03:31]  * wallyworld_ hates cats
[03:31] <wgrant> wallyworld_: They try to avoid DoSing, by limiting the number of connections per hostname
[03:32] <lifeless> the white one stretched out, arched back, and slid off the edge of the pillows, and the desk, and fell onto my minitower case :)
[03:32] <wallyworld_> ah :-)
[03:32] <wgrant> lifeless: https://code.launchpad.net/~wgrant/loggerhead/bug-929275/+merge/92194
[03:32] <wgrant> Finally
[03:32] <lifeless> is that a request for a code review ?
[03:33] <wgrant> s/request/fishing attempt/
[03:33] <StevenK> 31	from paste import httpexceptions
[03:33] <StevenK> 32	+from paste.httpexceptions import HTTPNotFound
[03:33] <StevenK> Fail
[03:34] <wgrant> bah
[03:34] <lifeless> wgrant: so, I wonder, would a wsgi middleware to map NoSuchId and NoSuchFile -> 404 be a good idea, or hide too many legit issues
[03:34] <wgrant> lifeless: I don't think that's a good idea.
[03:34] <lifeless> right now, for instance, the code you've added can mask some classes of file system error
[03:35] <lifeless> because they just trigger NoSuchFile
[03:35] <wgrant> I used NoSuchId and NoSuchRevision
[03:35] <wgrant> Aren't those pretty specific?
[03:35] <lifeless> oh hmm,
[03:35] <lifeless> I misread slightly
[03:35] <lifeless> so yes, that should be pretty narrow
[03:36] <lifeless> anyhow, fine at first glance, I'll let StevenK who seems to be in a reviewing mood do it ;)
[03:36] <wgrant> StevenK: Fixed
[03:36] <wgrant> lifeless: I think that could mask legitimate bugs.
[03:36] <wgrant> And encouraging people to write terrible code is not what Loggerhead needs more of :)
[03:41] <StevenK> wgrant: r=me
[03:42] <wgrant> StevenK: Thanks.
[03:42]  * wgrant lands.
[03:47] <wgrant> lifeless: Should I upgrade Launchpad's?
[03:47] <wgrant> Or do you have stuff planned?
[03:52] <lifeless> wgrant: just release stuff from toshio, so please go ahead
[03:52] <wgrant> k
[03:54] <StevenK> WidgetInputError('name', u'Name', LaunchpadValidationError(u'unique-from-test-team-py-line529-100007 is already in use by another person or team.'))
[03:54] <StevenK> LIES!
[04:05] <wgrant> lifeless: the GIL contention between xmlrpc-private and app seems to be growing :(
[04:05] <wgrant> we see lots of https://lp-oops.canonical.com/oops/?oopsid=OOPS-1e8d134f2de3a8101bf09a55c20c00ae now
[04:07] <lifeless> 4 seconds getting feature flags?
[04:07] <wgrant> Precisely.
[04:07] <lifeless> ugh.
[04:07] <lifeless> ugh ugh ugh ugh ugh guh
[04:07] <wgrant> It's not unexpected.
[04:08] <lifeless> doesn't make it /nice/ :)
[04:08] <lifeless> so, we need to figure an appropriate ratio and then manually split the cluster, I guess
[04:08] <wgrant> Yeah
[04:08] <lifeless> I haven't seen anything suggesting haproxy knows how to count across two clusters yet
[04:08] <wgrant> I don't think so, no.
[04:08] <lifeless> that, or we teach lp how to serve both from the same port
[04:09] <wgrant> But we can do the rebalancing entirely in haproxy
[04:09] <wgrant> which is nice
[04:09] <wgrant> I considered that, but that is scary.
[04:10] <lifeless> it could be a pgbouncer limit being hit
[04:10] <wgrant> Unlikely
[04:10] <lifeless> first query
[04:10] <wgrant> Hm, really?
[04:10] <lifeless> https://lp-oops.canonical.com/oops/?oopsid=OOPS-1e8d134f2de3a8101bf09a55c20c00ae#statementlog
[04:10] <wgrant> Oh, xmlrpc, so no auth
[04:11] <wgrant> I've never seen that on a webapp request, though.
[04:11] <lifeless> in main servers we get the ff early too, to determine query timeout
[04:11] <lifeless> should be the first query everywhere
[04:11] <wgrant> I don't think we get it before auth.
[04:11]  * wgrant checks.
[04:11] <lifeless> webapp threads are probably never idle
[04:12] <lifeless> fsvo idle
[04:12] <wgrant> SQL-main-slave SELECT getlocalnodeid()
[04:12] <wgrant> is the first for webapp
[04:12] <lifeless> or more importantly, the number of webapp errors probably dwarfs the number that would match this...
[04:12] <wgrant> Right.
[04:12] <lifeless> which is on -slave
[04:12] <wgrant> Mmm, true.
[04:12] <lifeless> so, two theories; whats the cheapest thing to do
[04:13] <wgrant> Graph pgbouncer connections, which UI has been trying to do this week? :)
[04:13] <wgrant> At least that's what #webops has looked like.
[04:13] <lifeless> excellent
[04:13] <wgrant> I've only glanced, though.
[04:13] <lifeless> as a starting point that makes sense
[04:13] <lifeless> I wonder if pgbouncer logs enough that we can short circuit that
[04:13] <wgrant> I'll check when these started
[04:13] <wgrant> Because we hit the FD limit last week
[04:14] <wgrant> when we moved session to go through pgbouncer.
[04:14] <lifeless> e.g. we've just added X appservers (have we?) -> more connections used
[04:14] <wgrant> No, they've not been added yet
[04:14] <lifeless> Also someone needs to add the new prefixes to oops-tools once they are active
[04:14] <wgrant> At least I hope not
[04:14] <wgrant> Since there's still queue depth issues
[04:17] <StevenK> Last I heard we were still waiting for more RAM and the extra CPU in one appserver
[04:17] <wgrant> HMmm, you may be right.
[04:17] <wgrant> On the pgbouncer thing
[04:17] <wgrant> Unfortunately I started pruning properly again yesterday, so we don't have full records.
[04:17] <wgrant> But I can't see too many similar OOPSes before the 31st.
[04:26] <lifeless> I think we should investigate pgbouncer
[04:26] <lifeless> 4 seconds in the GIL is pretty extreme, we were seeing that when we had 20 threads running on private-xmlrpc full-tilt, or something crazy like it
[04:27] <lifeless> its more plausible to me that this is a queued connection in pgbouncer
[04:27] <lifeless> at least on the data we have
[04:32] <lifeless> grrr, wtf, quoting *everything*. sob.
[04:35] <wgrant> What's quoting everything?
[04:35] <lifeless> handlebars
[04:35] <lifeless> shouldCompileTo("{{awesome}}", {awesome: "&\"'`\\<>"}, '&amp;&quot;&#x27;&#x60;\\&lt;&gt;');
[04:35] <lifeless> its quoting " and ' `
[04:36] <lifeless> some might say this is overkill
[04:36] <wgrant> "' need quoting
[04:36] <wgrant> ` not really
[04:36] <lifeless> why does ' need quoting in a template expansion ?
[04:45] <wgrant> lifeless: Attributes
[04:45] <wgrant> Some misguided folk use ' to delimit XML attributes.
[04:45] <wgrant> It's perfectly valid, if a little discouraged and quite rare.
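For comparison, Python's stdlib `html.escape` draws the line where wgrant does: with `quote=True` it escapes both quote styles (so the result is safe inside double- or single-quoted attributes) but leaves the backtick alone, unlike the handlebars behaviour lifeless is grumbling about:

```python
from html import escape

# quote=True escapes " and ' for attribute contexts;
# the backtick is not treated as special.
print(escape("&\"'<>", quote=True))  # &amp;&quot;&#x27;&lt;&gt;
print(escape("`"))                   # `
```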
[04:52] <lifeless> nah
[04:52] <lifeless> ah
[05:01] <poolie> hi
[05:01] <poolie> is it a regression, or a known problem, that attaching a patch to a bug no longer shows up in the comment timeline?
[05:03] <poolie> oh well, https://bugs.launchpad.net/launchpad/+bug/929313
[05:04] <_mup_> Bug #929313: patches aren't mentioned in the bug comment where they were attached <Launchpad itself:New> < https://launchpad.net/bugs/929313 >
[05:05] <lifeless> I'm not sure it is either
[05:05] <lifeless> o/
[05:07] <StevenK> wgrant: Can haz help? With http://pastebin.ubuntu.com/834814/ and running the new feature flag test I get WidgetInputError('name', u'Name', LaunchpadValidationError(u'unique-from-test-team-py-line529-100007 is already in use by another person or team.'))
[05:09] <wgrant> StevenK: Looking
[05:14] <wgrant> StevenK: I'm not sure that's a problem.
[05:14] <wgrant> StevenK: (Pdb) view.request.response._headers
[05:14] <wgrant> {'location': ['http://launchpad.dev/~unique-from-test-team-py-line529-100007']}
[05:14] <wgrant> It created it anyway, and redirected.
[05:14] <wgrant> it's possible that you're not meant to render the form in that case.
[05:18] <StevenK> So I'm not supposed to call view()?
[05:18] <lifeless> IIRC redirects have no body -> no need to render ever
[05:19] <StevenK> I'm just a bit unclear what I should do in that case.
[05:19] <StevenK> Just call c_i_v() and then check the team exists and is private?
[05:21] <wgrant> StevenK: I believe so.
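The pattern wgrant is suggesting — after a successful POST, assert on the redirect instead of rendering the form — might be captured by a helper like this. The `getStatus()`/`getHeader()` response API is assumed here (Zope-style), and the names are illustrative, not the actual test helper:

```python
def assert_redirected(view, expected_location):
    """Sketch: assert a form POST redirected rather than rendering.

    Assumes `view.request.response` exposes Zope-style
    getStatus()/getHeader() accessors (an assumption here).
    """
    status = view.request.response.getStatus()
    assert status in (302, 303), "expected a redirect, got %r" % status
    location = view.request.response.getHeader("Location")
    assert location == expected_location, location
```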
[05:22] <StevenK> AssertionError: !=:
[05:22] <StevenK> reference = <DBItem PersonVisibility.PRIVATE, (30) Private>
[05:22] <StevenK> actual = <DBItem PersonVisibility.PUBLIC, (1) Public>
[05:22] <StevenK> :-(
[05:25] <wgrant> StevenK: Also, lint.
[05:25] <wgrant> Trailing whitespace everywhere!
[05:26] <StevenK> If everywhere is 3 places
[05:26] <StevenK> Anyway, fixed.
[05:26] <wgrant> It's bright red.
[05:26] <wgrant> So yes, everywhere.
[05:27]  * StevenK isn't sure why visibility isn't set
[06:45] <stub> ImportError: No module named pqm.pqm_submit
[06:49] <stub> Anyone know where that is supposed to be pulled in from? I don't see this bzr plugin in sourcecode.
[06:49] <wgrant> stub: The bzr-pqm package
[06:50] <stub> Ta
[06:50] <wgrant> Are you on precise?
[06:51] <stub> Which is installed
[06:51] <stub> Oneiric
[06:52] <wgrant> Hmm
[06:52] <wgrant> Possibly 2.6 vs 2.7
[06:53] <stub> My system bzr is happy with the pqm plugin. Not sure how Launchpad's bzrlib is supposed to load the system plugin.
[06:54] <stub> Could be right about 2.7 - system bzr uses 2.7 and lp uses 2.6
[07:05] <stub> Nah
[07:05] <stub> So how is the pqm plugin from the package supposed to get loaded when bzrlib.plugins is an egg?
[07:06] <stub> The system path is shadowed
[07:06] <wgrant> bzrlib.plugins is magical.
[07:06] <wgrant> It can pull stuff in from lots of places.
[07:06] <wgrant> I don't know how, though.
[07:15] <wgrant> stub: Will postgres plan through a view as if it wasn't there?
[07:15] <stub> Yes
[07:15] <stub> It gets expanded inline and the whole thing planned
[07:15] <wgrant> Right. Thanks.
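stub's point — the planner expands the view inline and plans the whole query — can be seen with SQLite as a convenient stand-in (Postgres does the same flattening for simple views): a lookup through the view still uses the base table's primary-key index rather than scanning.

```python
import sqlite3

# A simple view over an indexed table; the query through the view
# is flattened, so the plan is a keyed SEARCH, not a full scan.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bug (id INTEGER PRIMARY KEY, title TEXT);
    CREATE VIEW bug_view AS SELECT id, title FROM bug;
""")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM bug_view WHERE id = 1"
).fetchall()
print(plan)
```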
[07:21] <StevenK> stub: You're not really concerned with say, planner changes between 8.4 and 9.1?
[07:23] <stub> StevenK: Usually things get better with a few regressions.
[07:36] <stub> webops ping: Need the pqm pre-commit hook tuned
[07:37] <stub> Or maybe just removed is better now
[07:37] <stub> Need 'Makefile' added to the list of exceptions along with *.sql and fti.py
[07:38] <wgrant> stub: Why?
[07:40] <wgrant> stub: We've never had problems with database/schema/Makefile changes accidentally landing on devel, so I don't see any point in restricting them.
[07:50] <stub> sorry... on the phoe
[07:50] <stub> Had to change Makefile rules for the newly labotimized fti.py invocation
[07:51] <stub> lobotomised  even
[07:51] <wgrant> stub: *-0.sql and fti.py are the only things that are *forbidden*
[07:51] <stub> I think I should give up typing today
[07:51] <wgrant> Everything else is allowed.
[07:51] <stub> Ahh... so my fti.py changes are the trigger
[07:51] <wgrant> Yep
[07:52] <stub> webops ping: Please remove the precommit hook then
[07:52] <wgrant> Heh
[08:00] <hloeung> stub: where would I find this?
[08:00] <hloeung> ah, think I've found it
[08:02] <hloeung> stub: is this for /home/pqm/archives/rocketfuel/launchpad/devel or /home/pqm/archives/rocketfuel/launchpad/db-devel or both?
[08:05] <hloeung> precommit_hook=[ -z "$(bzr status -S database/schema/ | grep -e '\(-0\.sql\|/fti.py\)$')" ]
[08:05] <hloeung> this one?
[08:10] <stub> hloeung: that is the one
[08:11] <stub> hloeung: It is no longer helpful, and a hindrance for my branch
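For reference, the filter that shell one-liner applies to the `bzr status -S database/schema/` output, re-expressed as a Python sketch (function name hypothetical):

```python
import re

# Block the commit if any changed path ends in -0.sql or is fti.py --
# the only things forbidden on devel, per wgrant above.
BLOCKED = re.compile(r"(-0\.sql|/fti\.py)$")

def commit_allowed(changed_paths):
    """True unless a forbidden schema file was touched."""
    return not any(BLOCKED.search(p) for p in changed_paths)
```

Everything else under database/schema/, such as the Makefile change that tripped stub here, is allowed.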
[08:14] <stub> lifeless: Do you really think it is worth keeping memcache around for just the blog info?
[08:22] <hloeung> stub: ok, commented out
[08:22] <stub> Ta
[08:26] <lifeless> stub: well its also used as a cheap temp store for some migrations
[08:27] <lifeless> stub: I nearly wrote 'and move the blog to something else' before I remember that
[08:27] <lifeless> StevenK: I expect us to get fucked over in short order by pg 9.1, but the really traumatic stuff I think was in 8.4
[08:28] <lifeless> StevenK: when a bunch of old implicit optimisations got derailed by CTE AFAICT
[08:51] <mthaddon> wgrant: I'm not sure I see the logic of "we've never had a problem, so let's remove the prevention of a problem"
[08:53] <wgrant> mthaddon: For some things, sure.
[08:54] <mthaddon> can we change it rather than removing it altogether?
[08:54] <wgrant> mthaddon: The particular case I was arguing against was a suggestion to *add* a new restriction that would not have stopped any previous problems.
[08:54] <wgrant> But in general I think that restriction does more harm than good.
[09:04] <adeuring> good morning
[10:19] <wgrant> rick_h: JS on qastaging seems to be pretty broken.
[10:19] <wgrant> LPJS is not defined
[10:20] <wgrant> rick_h: At least on +register-merge
[11:47] <rick_h> wgrant: looking
[11:52] <rick_h> wgrant: @#$#@ see it. Putting up a mp right now. Very stupid
[11:53] <StevenK> rick_h: I got annoyed with the regression branch so much I pushed convoy to the PPA and put up a small MP for convoy.
[11:54] <StevenK> rick_h: I also tossed combo-url at ec2, which failed with codehosting failures.
[11:54] <rick_h> StevenK: or wgrant can you review: https://code.launchpad.net/~rharding/launchpad/graph_lpjs_928500/+merge/92257
[11:54] <rick_h> very small stupid me change that's got qastaging broken JS
[11:55] <rick_h> StevenK: ok, thanks for the heads up. Want to shoot me the failed tests and I can peek at them today?
[12:05] <StevenK> rick_h: Forwarded. Not sure if it's my fault.
[12:23] <rick_h> adeuring: ping, can you peek at the MP please? https://code.launchpad.net/~rharding/launchpad/graph_lpjs_928500/+merge/92257
[12:23] <adeuring> rick_h: sure
[12:41] <adeuring> rick_h: r=me
[12:42] <rick_h> adeuring: thanks
[13:48] <lifeless> bah, whats that thing where you can't sleep?
[13:51] <stub> day?
[13:55] <lifeless> hah yes
[13:55] <lifeless> hey, so I've got the db hardware quote moving again
[13:56] <lifeless> should have updated options soon and will rope flacoste into it when he is back on deck
[13:56] <lifeless> stub: herb is proposing SSD, perhaps with the same RAM we have today, or even a reduction. Your thoughts?
[13:57] <stub> SSDs are great if you pick the right hardware (and a disaster waiting to happen if you pick the wrong ones, such as the ones with volatile write caches on board). But not sure if they help our performance since we see little disk activity overall.
[14:00] <stub> Not sure what RAID setup we would need - need to scan the section in the PG performance tome.
[14:00] <stub> Maybe a single RAID5 would be fine offsetting cost, or maybe we would still need two channels or an expensive RAID0+1 setup.
[14:00] <lifeless> stub: we do have a number of cold-cache bugs open
[14:01] <lifeless> stub: I know we see little absolute disk IO, but when we do its all concentrated AFAICT :)
[14:04] <abentley> gmb: I'm taking on branchscanner-timeout-bug-808930.  Can we chat?
[14:06] <gmb> abentley: Sure, can you give me a little while to get to a point where I can switch? Say 14:30 UTC?
[14:07] <abentley> gmb: That's stand-up time.  15:00?
[14:07] <gmb> abentley: Sure, 15:00 would be fine.
[14:07] <abentley> gmb: great.
[14:07] <flacoste> good night lifeless ;-)
[14:10] <lifeless> flacoste: heh I wish.
[14:11] <lifeless> flacoste: we finally got cynthia to sleep properly, fingers crossed. And now *I* have insomnia.
[14:11] <flacoste> lifeless: that's so not unusual!
[14:11] <lifeless> stub: whats your feeling about RAM usage with SSD's? Could we keep it at todays levels, even with more appservers; or could we even reduce it?
[14:12] <lifeless> stub: and yeah, please do scan the PG performance tome and get back to me
[14:12] <lifeless> flacoste: \i/
[14:12] <stub> lifeless: I'd need to do research or benchmarking. Spinning plates of rust could end up faster if we can afford more RAM
[14:12] <lifeless> so, herb is going to get options on SSD w/128GB of RAM
[14:13] <stub> lifeless: similarly, you are right that we might be able to get away with less RAM with SSDs (although SSD access is still slower than cache)
[14:13] <lifeless> I think we need to advise on the raid setup for those options?
[14:13] <herb> my intention would be raid 10
[14:13] <stub> Yes. There have been several threads on the mailing lists.
[14:13] <lifeless> herb: oh hai!
[14:13] <herb> even with SSD. :)
[14:14] <stub> herb: over hardware raid 5? For paranoia or throughput reasons?
[14:14]  * lifeless guesses at less rewrites-of-sectors
[14:14] <lifeless> (+ performance)
[14:14] <herb> stub: both
[14:15] <lifeless> herb: ok, since thats about as good as it gets, I don't think we need to ask you for a specific config :)
[14:15] <stub> herb: And one or two channels? We have two partitions, but that might be unnecessary since we are not queuing up writes for the right bit of rust to be in the right point in space/time.
[14:16] <herb> my intention was a single channel SAS. even with SSD RAID 10 we won't be able to saturate the channel (nor could we with 16 or more spindles)
[14:16] <lifeless> stub: the controller would still have a limited tagging queue, which can still saturate; but I doubt we would have an issue given our current write load
[14:17] <herb> but I'm happy to take input on that.
[14:17] <stub> elmo has the same book I have, but I'll trawl the mailing lists for the more recent threads. Only major thing I recall is many drives cheat by having write caches that fail for disaster recovery (including 'enterprise' devices)
[14:17] <lifeless> herb: we could add another channel later, right ?
[14:18] <herb> in any case, let's get some order of magnitude pricing. it might show that SSD is a non-starter and RAM is the way to go.
[14:18] <stub> I suspect it is unnecessary
[14:18] <lifeless> herb: if we can add another channel later, I think its totaly fine to start with one; SSD is -very- nice these days :)
[14:18] <herb> lifeless: indeed we could.
[14:19] <lifeless> so, we all seem agreed - lets defer the question
[14:19] <stub> I think we should save money on RAM and buy me a coffee machine. Much better performance improvement.
[14:19] <lifeless> use your frequent flyer points :)
[14:19] <herb> I find your claim to be suspect.
[14:20] <lifeless> herb: lol :) Also, I can point you at some hair raising queries/schema bits, which no amount of hardware can compensate for :)
[14:20] <stub> KLM, Air France, Emirates.... don't think I've earned a real FF point in ages...
[14:22] <stub> Anyway, off for dinner at 9:20pm because some of us have this non-sleeping thing from the other end.
[14:24] <lifeless> ciao
[14:26] <stub> herb: Consider a combination of SSD and spinning metal. No point wasting SSD on logs and dumps.
[14:27] <herb> stub: good call. will do.
[14:30] <deryck> abentley, adeuring, rick_h -- https://plus.google.com/hangouts/extras/talk.google.com/orange-standup
[14:31] <lifeless> stub: herb: May I suggest the san rather than spinning metal for those offcuts ?
[14:31] <lifeless> probably faster than spinning metal ...
[15:01] <abentley> gmb: mum-ble or han-gout?
[15:01] <gmb> abentley: Hangout works for me - besides, I can't guarantee that Mumble will. Bear with me a sec...
[15:03]  * gmb waits for firefox to get its act together
[15:13] <flacoste> adeuring, deryck: are we done with bug 829074? or is there some other branches that need to land?
[15:13] <_mup_> Bug #829074: Show bugs that are not known to affect "official" upstream <bugs> <escalated> <qa-ok> <Launchpad itself:Fix Released by adeuring> < https://launchpad.net/bugs/829074 >
[15:15] <deryck> flacoste, on call with adeuring now.  explain shortly.
[15:15] <flacoste> deryck: ok
[15:21] <adeuring> flacoste: there is a minimal fix in place: you can filter by upstream target, if a packaging link exists
[15:22] <adeuring> flacoste: i am working on a more comprehensive fix:
[15:22] <adeuring> under the label "bug 2325"
[15:22] <_mup_> Bug #2325: Distro CVE report: permission denied! <lp-bugs> <Launchpad itself:Fix Released by kiko> < https://launchpad.net/bugs/2325 >
[15:22] <adeuring> the idea is to provide an option to select arbitrary upstream targets
[15:23] <adeuring> well, not completely arbitrary, but any product or other source package
[15:23] <adeuring> erm, i meant bug 232545
[15:24] <_mup_> Bug #232545: resolved_upstream list does not do product / source package matching <lp-bugs> <ubuntu-upstream-relations> <Launchpad itself:In Progress by adeuring> < https://launchpad.net/bugs/232545 >
[15:24] <flacoste> adeuring: is the comprehensive fix requested by the stakeholders also?
[15:24] <jcsackett> anyone else unable to use ec2 commands b/c of pqm issue? did i miss an email about something i need to change?
[15:24] <adeuring> the bug report was a bit vague...
[15:24] <lifeless> jcsackett: I am, abentley is/has fixed it.
[15:24] <jcsackett> lifeless: ah fantastic. thanks.
[15:25] <lifeless> jcsackett: I haven't tried again since, so I don't know where the fix has gotten up to.
[15:25] <abentley> lifeless: mur?
[15:25] <lifeless> abentley: the pqm-submit config stacks patch
[15:25] <lifeless> abentley: (IIRC the various bits correctly) - a few days ago.
[15:26] <abentley> lifeless: Yes, I've fixed the lp-land command.  Does that also fix ec2 land?
[15:26] <lifeless> abentley: oh, I thought it did/would :)
[15:26] <lifeless> abentley: IIRC the error in ec2 land happens when it invokes bzr lp-land to get the signed email for later delivery
[15:27] <lifeless> jcsackett: can you pastebin the transcript showing the error?
[15:27] <jcsackett> lifeless: sure, one moment.
[15:28] <flacoste> adeuring: might be worth talking to brian and bryce about it
[15:28] <abentley> lifeless: Oh, I didn't think it shelled out to lp-land.
[15:28] <lifeless> it might be the thing rick has fixed
[15:28] <flacoste> adeuring: just to make sure we actually implement what they need
[15:28] <jcsackett> lifeless: http://pastebin.ubuntu.com/835337/
[15:28] <lifeless> in which case pulling trunk will fix it
[15:28] <adeuring> flacoste: ok
[15:29] <lifeless> jcsackett: ah, that's not what I saw a few days back
[15:29] <abentley> jcsackett: That looks unrelated to the issue I fixed.
[15:29] <jcsackett> lifeless, abentley: well, darn.
[15:29] <lifeless> jcsackett: that looks like what stub experienced
[15:29] <jcsackett> lots of different issues cropping up here, huh? :-P
[15:29] <lifeless> jcsackett: system plugin path isn't on your bzr plugins path (is all I know)
[15:29] <abentley> jcsackett: Possibly pqm-submit isn't installed on the box in question.
[15:29] <lifeless> jcsackett: stub filed a bug
[15:30] <flacoste> adeuring: from the launchpadlib pseudo-code that Bryce shows in the bug report, i think it was fine to make this work for linked_product only
[15:30] <flacoste> adeuring: best to validate with them directly
[15:30] <lifeless> jcsackett: look in latest-bugs for LP
[15:30] <jcsackett> pqm-submit is installed, abentley. i was hoping that was the issue too, since it was easily resolved. :-p
[15:30] <adeuring> flacoste: ok
[15:30] <jcsackett> lifeless: ok, thanks.
[15:33] <jcsackett> lifeless: yup, i'm experiencing the same thing. thanks for pointing me to the bug report.
[15:41] <sinzui> danhg, I have 3 draft posts on blog.launchpad.net. Do you want to read them?
[15:49] <deryck> adeuring, abentley, rick_h -- I'm starting on the open questions now. for real. :)
[15:49] <rick_h> deryck: excellent
[16:04] <deryck> allenap or gmb -- if an external tracker is a Trac bug tracker, and we're syncing comments, shouldn't the status also update based on upstream bug?
[16:04] <gmb> Yes
[16:04] <allenap> deryck: Yes, it ought to be.
[16:04] <deryck> there's no mapping required between trac and us, right?
[16:08] <deryck> ah, not updated in a while. resetting watch it is.
[16:13] <deryck> bigjools or jelmer -- I've got a question about failed recipe build I don't know how to answer. Can I assign to one of you, or you point me to who? :)
[16:14] <rick_h> deryck: link? jelmer was just helping someone in #lp and curious if it's the same
[16:14] <deryck> rick_h, https://answers.launchpad.net/launchpad/+question/186193
[16:15] <rick_h> deryck: ok, different one, nvm
[16:15] <deryck> rick_h, thanks anyway :)
[16:19] <bigjools> deryck: I really have NFI what's going on there.
[16:19] <bigjools> other than something wants more memory than it can get
[16:20] <bigjools> deryck: however, it's Java, so all bets are off, we've had this before with java builds
[16:22] <jelmer> deryck: that just sounds like the issue with java using too much memory on the buildds
[16:22] <lifeless> adeuring: that launchpadlib bug is a dupe of a server side bug, i forget the # though.
[16:23] <lifeless> adeuring: you may want to mark it as such to avoid duplicate analysis
[16:23] <adeuring> lifeless: right
[16:23] <deryck> bigjools, jelmer -- ok thanks guys.  I'll find the bug and point the user at it.
[16:23] <lifeless> adeuring: its almost certainly fallout from the materialised INCOMPLETE enums
[16:23] <lifeless> adeuring: (but you know that :P)
[16:23] <jelmer> bigjools: bug 693524 might be related
[16:23] <_mup_> Bug #693524: Daily builds of Java packages fail: "Could not reserve enough space for object heap" <recipe> <Launchpad Auto Build System:Triaged> < https://launchpad.net/bugs/693524 >
[16:23] <adeuring> yeah ;)
[16:42] <deryck> abentley, adeuring, rick_h -- questions are all caught up.  answered or assigned them all.
[16:42] <abentley> deryck: win!
[16:43] <deryck> \o/
[16:45] <deryck> adeuring, have you done interrupts today? ;)
[16:45] <adeuring> deryck: working on project review
[16:45] <deryck> adeuring, awesome!
[16:45] <deryck> adeuring, I shall quit nagging you now. :)
[16:45] <adeuring> ;)
[16:46] <deryck> adeuring, was about to ratchet it up to public internet shaming, but all is well now. ;)
[16:46] <deryck> I had already registered adeuringsucksatinterruptduties.com
[16:46] <deryck> :)
[17:33] <mabac> rick_h, thanks for reviewing https://code.launchpad.net/~linaro-infrastructure/launchpad/workitems-model-classes/+merge/92174 . could you have a look at my update and let me know if I'm on the right track?
[17:34] <rick_h> mabac: will do, sec
[17:38] <mabac> rick_h, absolutely no rush. :) thanks!
[18:00] <abentley> lifeless: What's the best way to examine an OOPS that occurred on a dev instance (launchpad.dev) ?
[18:00] <lifeless> abentley: my preferred way is to glue oops-tools into the rabbit instance and have them come up in the normal web UI
[18:01] <lifeless> any oopses before you set that up will have been tossed by rabbit (as the exchange would have had no queue attached to it)
[18:01] <lifeless> on https://dev.launchpad.net/QA/OopsToolsSetup there is a 'deploying locally' section
[18:01] <abentley> lifeless: I have an oops in /var/tmp/codehosting.test/2012-02-09
[18:02] <lifeless> abentley: ah, interesting - must have been running without rabbit active
[18:02] <lifeless> in which case the simplest thing is dump-bson
[18:02] <lifeless> sorry, bsondump
[18:02] <lifeless> which is in utilities/bsondump, or if you have a buildout of oops-amqp in bin/bsondump of it
[18:02] <lifeless> e.g. bsondump $path
[18:03] <abentley> lifeless: cool.
[18:04] <abentley> lifeless: utilities/bsondump says "No module named bson", but "bin/py utilities/bsondump" says "'module' object has no attribute 'decode_all'"
[18:04] <lifeless> you can load that into the web UI if you want using datedir2amqp from python-oops-datedir2amqp
[18:05] <lifeless> abentley: argh, that blows.
[18:05] <lifeless> abentley: I know the one in oops-amqp works well
[18:06] <lifeless> erm, wrong project :<
[18:06] <lifeless> abentley: quick start:
[18:06] <lifeless> branch python-oops-datedir-repo
[18:06] <lifeless> link in the eggs and download-cache from your LP work area
[18:07] <abentley> lifeless: I installed python-bson with synaptic, and it works with that.
[18:07] <lifeless> ./bootstrap ...
[18:07] <lifeless> abentley: ok cool
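(The bsondump tool discussed above just decodes a BSON-serialized OOPS report into readable form. As a rough illustration of what that decoding involves, here is a stdlib-only toy reader for BSON documents whose values are all UTF-8 strings; the real tool uses the `bson` library and handles every BSON type, and `decode_simple_bson` is a made-up name for this sketch.)

```python
import struct

def decode_simple_bson(data):
    """Decode a BSON document containing only UTF-8 string values.

    Toy sketch of what bsondump does for an OOPS report; real reports
    may contain other BSON element types that this reader rejects.
    """
    total_len, = struct.unpack_from("<i", data, 0)  # int32 document length
    pos, out = 4, {}
    while pos < total_len - 1:          # final byte is the 0x00 terminator
        etype = data[pos]
        pos += 1
        end = data.index(b"\x00", pos)  # key is a null-terminated cstring
        key = data[pos:end].decode("utf-8")
        pos = end + 1
        if etype == 0x02:               # 0x02 = UTF-8 string element
            slen, = struct.unpack_from("<i", data, pos)  # length incl. null
            pos += 4
            out[key] = data[pos:pos + slen - 1].decode("utf-8")
            pos += slen
        else:
            raise ValueError("toy decoder only handles string fields")
    return out
```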
[18:17] <rick_h> mabac: hey, sorry, was going to ping you in irc but you stepped out
[18:17] <rick_h> mabac: replied to your MP, the model definition is different with stormbase
[18:17] <mabac> rick_h, sorry. I'm going in and out of sessions at Linaro Connect
[18:17] <mabac> rick_h, thanks. I'll check in a bit
[18:17] <rick_h> mabac: I hooked you up with an example in the blueprints directory there for you
[18:18] <rick_h> let me know if that doesn't help
[18:22] <mabac> rick_h, awesome. thank you!
[18:23] <rick_h> mabac: thank you for working on it
[18:23] <mabac> my pleasure :)
[19:09] <abentley> gmb: The transaction killer is not related to bug 808930 AFAICT.  The jobs are killed by the job-running infrastructure, because they run too long.  It's not length-of-transaction; it's length-of-job, which is tougher to optimize, because you'll be lucky if you only have O(n) complexity.
[19:09] <_mup_> Bug #808930: Timeout running branch scanner job <escalated> <linaro> <oops> <Launchpad itself:Triaged> < https://launchpad.net/bugs/808930 >
[19:10] <abentley> And with O(n) complexity, it's always possible to time out by increasing n.
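(The distinction abentley draws, length-of-job rather than length-of-transaction, is commonly handled by checkpointing: the job processes a bounded batch, checks a deadline, and returns an offset so a later run can resume instead of being killed. A hypothetical sketch of that pattern; the names and deadline API are illustrative, not Launchpad's actual job infrastructure.)

```python
import time

def run_job_in_batches(items, process, deadline_seconds=5.0, batch_size=100):
    """Process items in bounded batches, stopping before a job timeout.

    Returns how many items were completed; the caller can reschedule
    the job to resume from that offset, so no single run is O(n) in
    the full data set.
    """
    start = time.monotonic()
    done = 0
    for i in range(0, len(items), batch_size):
        if time.monotonic() - start > deadline_seconds:
            return done                      # resume from here next run
        for item in items[i:i + batch_size]:
            process(item)
        done = i + len(items[i:i + batch_size])
    return done
```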
[19:27] <bac> hi, abentley, i've been looking around and cannot figure out the story for nested branches with bzr.  is it in 2.5?  can you point me somewhere to find out about them?
[19:28] <abentley> bac: Nested trees?  Not implemented yet AFAIK.
[19:28] <bac> abentley: ok.  i'd heard they were
[19:28] <abentley> bac: You don't mean colocated branches, do you?
[19:29] <bac> no
[19:29] <bac> nested, as in the equivalent of svn externs
[19:29] <abentley> bac: jelmer is planning to implement them, but he was nowhere near finished at the thunderdome.
[19:30] <bac> abentley: ok, thanks
[22:04] <sinzui> wallyworld_, wgrant, jcsackett, StevenK, I think this summarises my day reading the voucher/commercial subscription code: http://people.canonical.com/~curtis/wtfpm.jpg
[22:07] <maxb> lol
[22:08] <maxb> I shall have to show that picture to my colleagues tomorrow :-)
[22:14] <StevenK> sinzui: http://pastebin.ubuntu.com/835839/
[22:17] <jcsackett> sinzui, wallyworld: bug 929352 is what has hit me.
[22:17] <_mup_> Bug #929352: ec2 land unable to find pqm plugin <Launchpad itself:Confirmed> < https://launchpad.net/bugs/929352 >
[22:19] <sinzui> jcsackett, set the env before calling the script BZR_PLUGINS_AT=gtk@/path/to/your/plugins/pqm c
[22:19] <jcsackett> sinzui: thanks!
[22:19] <sinzui> ^ that is how I test plugin changes in the branch I am hacking on
[22:20] <sinzui> oops
[22:20] <sinzui> jcsackett, set the env before calling the script BZR_PLUGINS_AT=pqm@/path/to/your/plugins/pqm c
[22:21] <wgrant> abentley: Around?
[22:21] <abentley> wgrant: yes.
[22:22] <wgrant> abentley: I think your bzr plugin loading changes break ec2 land
[22:22] <wgrant> It can't load the pqm plugin
[22:23] <abentley> wgrant: You're talking about the site-customize changes?
[22:25] <wgrant> abentley: Yes. Just finding exactly what it is now.
[22:26] <abentley> wgrant: wouldn't have thought that ec2* ran Launchpad in-process.
[22:26] <wgrant> abentley: bin/ec2 is a buildout script
[22:26] <wgrant> So it uses lp_sitecustomize
[22:33] <wgrant> Hmm
[22:33] <wgrant> So loading lp.codehosting breaks everything.
[22:33] <wgrant> abentley: Can we avoid that, or should I revert the whole rev?
[22:34] <abentley> wgrant: I don't think we should avoid that.  If we do, we get weirdness where LoomBranch isn't treated as a subclass of Branch.
[22:35] <wgrant> abentley: I suspect we want to rip that out of lp.codehosting, then.
[22:35] <wgrant> lp_sitecustomize is imported extremely early, and pulls in no other big bit of LP.
[22:35] <wgrant> Just a few isolated bits from lp.services.
[22:36] <abentley> wgrant: I don't know what you're saying would be ripped out of lp.codehosting.
[22:36] <abentley> wgrant: I'm sure the fact that it's loading bzr plugins is related to the fact that pqm-submit isn't loading.
[22:37] <wgrant> Yeah, but separately I don't think lp_sitecustomize should be importing codehosting.
[22:38] <abentley> wgrant: Okay, but the main effect of importing codehosting is initializing the plugins.
[22:40] <wgrant> abentley: True
[22:40] <wgrant> Anyway, I think we have little choice but to revert this.
[22:40] <wgrant> As it breaks the dev toolchain.
[22:41] <abentley> wgrant: Doesn't break *my* dev toolchain, but I guess you should.
[22:41] <wgrant> abentley: Have you rerun bin/buildout?
[22:41] <abentley> wgrant: I don't use ec2.
[22:41] <wgrant> Hah
[22:59] <sinzui> wallyworld_, http://www.youtube.com/watch?v=YWMVVpOsjko
[22:59] <sinzui> It's from 2005 I think
[23:00] <wallyworld_> sinzui: thanks, looking now :-)
[23:02] <james_w> omg buildout is annoying
[23:03] <james_w> "no you can't have that"
[23:03] <james_w> Error: Picked: distribute = 0.6.24
[23:03] <james_w> ok, add it to versions.cfg
[23:03] <james_w> Error: There is a version conflict.
[23:03] <james_w> We already have: distribute 0.6.16dev-r0
[23:04] <james_w> ok, so why didn't you pick that one?
[23:04] <james_w> change versions.cfg
[23:04] <james_w> Error: Couldn't find a distribution for 'distribute==0.6.16dev-r0'.
[23:04] <james_w> but you said you already had it!
[23:08] <wgrant> james_w: That's what you get for daring to use a packaged Python.
[23:08] <wgrant> Upstream doesn't like distributions much.
[23:09] <james_w> clearly
[23:09] <james_w> we're considering migrating to buildout though
[23:10] <james_w> because it's possible that the other alternatives suck more
[23:11] <wgrant> I think buildout sucks less than virtualenv(+pip)
[23:11] <james_w> I find that combination easier to use
[23:12] <wgrant> In some respects.
[23:12] <james_w> but I don't like the "oh, you have that version already installed? well I'm going to try installing it anyway? oh look, it's already in the download cache! well, I'll try downloading it anyway just to make sure"
[23:18] <james_w> https://code.launchpad.net/~james-w/python-oops-amqp/bson-compat/+merge/92389 is what I was trying to do
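(For context on the versions.cfg edits james_w describes: a buildout pin lives in a `[versions]` section, roughly as below. This is a minimal sketch assuming the conventional zc.buildout layout; the distribute number is the one from his transcript, and his actual conflict came from an already-installed 0.6.16dev egg that no pin could download.)

```ini
[buildout]
versions = versions

[versions]
distribute = 0.6.24
```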
[23:47] <mabac> james_w, hi. looks to me like you need a little cider break ;)
[23:48] <mabac> rick_h, thanks for being patient. I have pushed another change to lp:~linaro-infrastructure/launchpad/workitems-model-classes/ and hope that will work better
[23:50] <james_w> mabac, heh, yeah, heading out curling now :-)
[23:51] <mabac> james_w, sounds dangerous ;) take care!
[23:52] <james_w> I will!