[00:28] <wallyworld_> rockstar: can you ping me if/when you might have a moment?
[00:29] <lifeless> rockstar: so what was up with your template
[00:51] <LPCIBot> Project devel build (166): ABORTED in 1 hr 39 min: https://hudson.wedontsleep.org/job/devel/166/
[00:51] <LPCIBot> * Launchpad Patch Queue Manager: [r=adeuring][ui=none][no-qa] New utility script,
[00:51] <LPCIBot> massage-bug-import-xml,
[00:51] <LPCIBot> that attempts to repair some commonly seen problems with bug import
[00:51] <LPCIBot> XML.
[00:52] <LPCIBot> * Launchpad Patch Queue Manager: [r=allenap][ui=none][bug=655242][bug=505304][bug=504126][bug=606866]
[00:52] <LPCIBot> Fix OpenGPG documentation.
[00:52] <LPCIBot> * Launchpad Patch Queue Manager: [rs=me][ui=None][no-qa] Slow the rate of polling the buildd-manager
[00:52] <LPCIBot> does to reduce the load on the database and cesium (the
[00:52] <LPCIBot> buildmaster host).
[00:52] <StevenK> LPCIBot: Shhh, don't tell everyone.
[00:53] <wallyworld_> thumper: quick question... ping?
[01:09] <rockstar> wallyworld_, what's up gangster?
[01:09] <wallyworld_> rockstar: can you skype?
[01:13] <rockstar> wallyworld_, I can't, actually.  :(
[01:13] <wallyworld_> ok. i just wanted to touch base about your merge queues email
[01:14] <wallyworld_> one question - while i wait for your branch to merge in to db-devel, can i merge from your lp copy and later merge from db-devel with no problems?
[01:14] <wallyworld_> that way i can start using your stuff now
[01:14] <rockstar> wallyworld_, yeah, you certainly can.
[01:14] <rockstar> wallyworld_, yeah.  deryck and I have learned some icky edge cases that don't play well with feature flags.
[01:15] <wallyworld_> anything i need to know?
[01:15] <wallyworld_> also, to turn on the merge queue feature flag, do i need to insert a record into my local db?
[01:16] <rockstar> wallyworld_, just do it the way I did it, and you should be good.
[01:16] <wallyworld_> ok. i'm also unsure what you meant by merge queue vocabulary and the comments around that
[01:18] <wallyworld_> actually, to clarify my previous question, i know how to wrap stuff inside a feature flag, but how do i enable that flag locally for my testing?
[01:18] <wallyworld_> in prod, there would be a db record required, no?
[01:20] <rockstar> wallyworld_, see my branch:  lp.code.browser.tests.test_branchmergequeue
[01:21] <wallyworld_> rockstar: np
[01:21] <LPCIBot> Project db-devel build (108): STILL FAILING in 3 hr 55 min: https://hudson.wedontsleep.org/job/db-devel/108/
[01:22] <wallyworld_> that is fine for unit tests, but if i want to run up a system to try out, is there a preferred way for that?
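The pattern being discussed above (code checks a named flag, tests enable it through a fixture, production enables it with a database record) can be sketched in plain Python. This is a hedged stand-in for illustration only, not Launchpad's actual lp.services.features API; the flag name is invented.

```python
from contextlib import contextmanager

# In-memory stand-in for the flag store; in production this would be
# a database table of (scope, flag, value) records.
_flags = {}

def get_feature_flag(name):
    """Return the flag's value, or None if it is unset."""
    return _flags.get(name)

@contextmanager
def feature_fixture(flags):
    """Temporarily set flags for a test, restoring the old state afterwards."""
    saved = dict(_flags)
    _flags.update(flags)
    try:
        yield
    finally:
        _flags.clear()
        _flags.update(saved)

# Application code wrapped in a flag check stays dormant until the
# flag is switched on.
def merge_queue_enabled():
    return get_feature_flag('code.branchmergequeue') == 'on'
```

In a unit test the fixture does the enabling; for a running dev instance, the equivalent of the fixture is inserting the flag record into the local database.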
[01:22] <rockstar> wallyworld_, you don't need to worry about the vocabulary stuff.  I basically just need to be able to get a set of BranchMergeQueue items for a given criteria.  You're currently working on a solution for that.
[01:23] <wallyworld_> rockstar: yes. the only methods currently supported are visibleBy() and ownedBy() but the infrastructure is there to extend as required
[01:23] <rockstar> wallyworld_, yeah, I can extend it.
[01:25] <wallyworld_> rockstar: ok, i'll grab your branch off lp. you can grab my working copy too i guess if you need to. i'll email you a status later today
[01:26] <rockstar> wallyworld_, yeah.  It'd be great if you could get it landed though.  Pipes don't like to have merges from elsewhere.
[01:27] <wallyworld_> rockstar: ok. the main work left is the visibleBy() and the tests and merging in your latest once it lands into db-devel
[01:28] <rockstar> wallyworld_, cool.  I've got LOTS of other things to work on, so don't feel too much pressure.
[01:28] <rockstar> wallyworld_, I'm going to be busy all weekend.
[01:28] <wallyworld_> ok. np. thanks.
[01:43] <thumper> hey
[01:44] <wallyworld_> straw
[01:44] <thumper> I've made it to the shared work space
[01:44] <thumper> very interesting
[01:44] <thumper> and free for where I'm at :-)
[01:44] <wallyworld_> so now everyone else there has to put up with you too and not just us :-)
[01:46] <thumper> heh, yeah
[01:47] <rockstar> thumper, cool.  I'm interested to see what your experience with it is.
[01:47] <thumper> rockstar: sure
[01:47] <thumper> rockstar: how's UDS going for you?
[01:47] <rockstar> thumper, I'm REALLY tired.  :)
[01:47] <thumper> heh, not surprised
[01:47] <rockstar> deryck and I are watching football and spending a quiet evening in.
[01:47] <thumper> :)
[01:48]  * thumper needs to focus on actually doing something :)
[02:39] <lifeless> rockstar: hey
[02:39] <lifeless> rockstar: so what was up with the request.features thing the other day
[02:40] <rockstar> lifeless, complications in using page fragments.
[02:40] <rockstar> It seems that in some cases, page fragments result in entirely separate request objects from the original, even though they're technically part of the same request.
[02:41] <rockstar> I think page fragments are a potential performance problem.
[02:41] <rockstar> Maybe not the biggest performance problem, but something we may need to evaluate.
[02:42] <rockstar> That's why we would get those separate "requests" halfway through a pdb of a page: it was getting another page fragment.
[02:42] <lifeless> yeah, they've been on my radar already.
[02:42] <lifeless> however why weren't the request *objects* getting a start_request event ?
[02:43] <rockstar> lifeless, I think it's because it wasn't an actual request, but a page fragment.
[02:43] <rockstar> They'd technically already had a start_request event.
[02:44] <rockstar> When you left, we were checking for "features" in self.__dict__ when we should have been checking for "_features" in that pdb setup we made.
[02:45] <rockstar> When I fixed that, I found that we weren't ever getting without setting.
[02:45] <rockstar> Although the request object did some funky things.
[02:45] <lifeless> so, you've a work around ?
[02:46] <rockstar> lifeless, no, the actual issue was another type poolie found.
[02:47] <rockstar> I was missing a semicolon in my tal:define.
[02:47] <rockstar> lifeless, so I got it to work when I defined features correctly.
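The missing-semicolon bug is easy to hit because tal:define packs several `name expression` pairs into one attribute, separated by semicolons; drop one and TAL reads the next variable name as part of the previous expression. A hypothetical fragment (the variable and flag names are invented, not rockstar's actual template):

```xml
<div tal:define="branch view/context;
                 show_queue request/features/code.branchmergequeue | nothing"
     tal:condition="show_queue">
  ...
</div>
```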
[02:47] <rockstar> lifeless, the pdb quest was very eye opening though.
[02:47] <lifeless> -ah
[02:47] <lifeless> coolio
[02:48] <rockstar> The branch index page traverses zcml at LEAST 8 times...
[02:48] <lifeless> yeah
[02:48] <lifeless> we have a lot of inefficiencies
[02:48] <lifeless> what we need to do, once we have the coarse performance headaches out of the way
[02:48] <lifeless> is to start streamlining the stack,
[02:53] <thumper> rockstar: what do you mean traverses zcml at least 8 times?
[02:54] <rockstar> thumper, every time we go get a page fragment, we have to look it up in zcml.
[02:54] <thumper> rockstar: you mean humans? or machines?
[02:54] <thumper> rockstar: because the machine doesn't touch the zcml
[02:54] <thumper> rockstar: it is parsed once
[02:54] <rockstar> thumper, for context, lifeless and I went on a pdb adventure yesterday trying to sort out a feature flag problem.
[02:55] <rockstar> thumper, yeah, but it's stored in an object tree, which gets traversed.
[02:57] <wgrant> Surely the adapter lookup doesn't suck too much, though...
[02:57] <thumper> I thought that was pretty much optimized at the zope level
[02:58] <lifeless> thumper: its about 40 function calls to do a notify()
[02:58] <lifeless> make no assumptions
[02:58] <thumper> heh
[02:58] <thumper> 40?
[02:58] <thumper> hmm..
[02:58] <rockstar> thumper, easily 40.
[02:58] <thumper> that's kinda shit
[02:58] <lifeless> conservative
[02:58] <lifeless> theres a profound degree of indirection in the zope core
[02:58] <lifeless> some bits are highly tuned
[02:59] <rockstar> lifeless, yeah, stepping through a whole request life was VERY eye opening for me.
[02:59] <lifeless> but not all, not by any stretch
[02:59] <lifeless> rockstar: ;)
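lifeless's "about 40 function calls to do a notify()" is the kind of claim that is cheap to measure rather than assume. A stdlib sketch (the toy event system below is invented for illustration; it is not Zope's implementation):

```python
import sys

def count_calls(fn, *args):
    """Count the Python-level function calls made while running fn(*args)."""
    calls = [0]
    def profiler(frame, event, arg):
        # sys.setprofile reports a 'call' event for every Python frame entered.
        if event == 'call':
            calls[0] += 1
    sys.setprofile(profiler)
    try:
        fn(*args)
    finally:
        sys.setprofile(None)
    return calls[0]

# A toy event system with one layer of indirection per subscriber;
# Zope's real notify() funnels through many more layers than this.
_subscribers = []

def subscribe(handler):
    _subscribers.append(handler)

def notify(event):
    for handler in _subscribers:
        handler(event)
```

With one no-op subscriber this counts only a couple of Python-level calls; every extra layer of indirection adds frames, which is how a heavily adapted dispatch path can climb toward 40.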
[03:03] <rockstar> lifeless, kapil also said he made a really good debug WSGI middleware that gets good SQL tracing and such.
[03:03] <lifeless> sadly we really don't have a wsgi stack
[03:03] <lifeless> we're glued on the end
[03:03] <lifeless> OTOH we have damn awesome sql tracing already
[03:04]  * wgrant likes the Django Debug Toolbar.
[03:04] <rockstar> wgrant, I have tried to duplicate that since day 1 at Canonical.
[03:05] <lifeless> web debugger?
[03:05] <wgrant> (It hooks into WSGI in a couple of places, adding a toolbar on the right of every page which lets you easily see template contexts, seeing SQL statements and running EXPLAINs and all sorts of other awesome stuff.)
[03:05] <lifeless> should be broadly straight forward.
[03:05] <rockstar> lifeless, no, just adds some HTML with information about query times and templates used and performance times.
[03:05] <lifeless> rockstar: oh, just make the comment at the bottom visible via a feature flag.
[03:05] <rockstar> lifeless, we're not using a WSGI stack?  I thought gary said we were.
[03:06] <lifeless> I've been meaning to do it, but after edge going
[03:06] <wgrant> rockstar: FSVO stack.
[03:06] <rockstar> lifeless, no, you'd never enable this on production.  It'd be a devserver thing.
[03:06] <lifeless> rockstar: we're 95% custom publication code incompatible with wsgi
[03:06] <rockstar> wgrant, FSVO?
[03:06] <wgrant> For Some Values Of
[03:06] <lifeless> rockstar: oh, we have a bunch of stuff that I'd be fine showing to you on prod.
[03:07] <lifeless> you could expand it for lp.dev
[03:07] <rockstar> lifeless, yeah, but the query middleware makes things pretty slow.
[03:07] <wgrant> AIUI it's not so much a WSGI stack as some Zopeish Launchpaddery with a WSGI frontend glued on.
[03:07] <lifeless> right
[03:08] <rockstar> wgrant, yeah, that's what I was assuming, but I guess I assumed there was more WSGI-ish stuff than what's there.
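The debug-toolbar idea sketches naturally as WSGI middleware. A hedged, devserver-only illustration (not Launchpad's or Django's actual code; a real toolbar would also collect SQL statements and template context):

```python
import time

def timing_middleware(app):
    """Wrap a WSGI app, appending a timing comment to each response.

    Content-Length fixup and streaming responses are deliberately
    omitted to keep the sketch short.
    """
    def wrapper(environ, start_response):
        start = time.time()
        # Materialise the response so we can measure full render time.
        body = b''.join(app(environ, start_response))
        elapsed_ms = (time.time() - start) * 1000
        note = ('<!-- rendered in %.1f ms -->' % elapsed_ms).encode('utf-8')
        return [body + note]
    return wrapper
```

As discussed above, this only works if there is a real WSGI stack to hook into; with WSGI merely glued on the end, the middleware never sees the interesting internals.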
[03:10] <wgrant> Is crowberry still unwell?
[03:10] <StevenK> This remains to be seen
[03:11] <StevenK> wgrant: Why? Do you have an issue?
[03:11] <lifeless>     Hard / Soft  Page ID
[03:11] <lifeless>      283 /   17  Archive:EntryResource:getBuildSummariesForSourceIds
[03:11] <lifeless>      180 /   30  Person:+commentedbugs
[03:11] <lifeless>      140 /  203  CodeImportSchedulerApplication:CodeImportSchedulerAPI
[03:11] <lifeless>      100 /  251  BugTask:+index
[03:11] <lifeless>       98 /    3  MailingListApplication:MailingListAPIView
[03:11] <lifeless>       69 /  270  Distribution:+bugs
[03:11] <lifeless>       16 /    0  Sprint:+temp-meeting-export
[03:11] <lifeless>       13 / 1018  Archive:+index
[03:11] <lifeless>       13 /   44  Archive:+packages
[03:11] <wgrant> StevenK: Just wondering about the frequent downtime lately.
[03:11] <lifeless>       13 /    1  ProjectGroup:+milestones
[03:12] <wgrant> lifeless: That top one is a bit odd.
[03:12] <StevenK> wgrant: We are clearly only trying to make you think there is an issue.
[03:14]  * rockstar shuts laptop and concentrates on football
[03:17] <lifeless> wgrant: whats odd aboot it ?
[03:19] <wgrant> lifeless: I don't recall seeing that one high up before.
[03:19] <wgrant> But I may be wrong.
[03:20] <lifeless> https://bugs.edge.launchpad.net/soyuz/+bug/662523
[03:20] <lifeless> wgrant: I've been nagging you to stab it for weeks
[03:20] <wgrant> Hmm.
[03:20] <lifeless> well
[03:20] <lifeless> 11 days :P
[03:21] <wgrant> StevenK: Has there been any discussion at UDS about deleting partner?
[03:21] <lifeless> some
[03:21] <StevenK> wgrant: Indeed. We have a plan. It will just take 4 years.
[03:21] <wgrant> StevenK: ie. until Lucid EOL?
[03:21] <StevenK> Yes
[03:22] <wgrant> Can't we just repurpose commercial-compat to rebrand a PPA as partner?
[03:22]  * StevenK shrugs
[03:22] <wgrant> Then terminate ArchivePurpose 4 with extreme prejudice?
[03:23] <StevenK> wgrant: Does this mean you have a branch that removes every bit of ArchivePurpose.PARTNER and you're looking for an excuse to land it?
[03:23] <wgrant> StevenK: No.
[03:23] <StevenK> Pity.
[03:23] <wgrant> Unlike lucilleconfig.
[03:25] <StevenK> Has that all trickled through?
[03:25]  * thumper cries quietly while looking at blueprint code
[03:26] <mwhudson> thumper: i wonder if you look at the code every week, say, it gets less surprising how awful it is
[03:26] <thumper> mwhudson: I'm betting not
[03:26] <mwhudson> (i doubt it, the brain doesn't remember pain)
[03:26] <thumper> mwhudson: as part of my blueprint work I have a herd of yaks to shave
[03:26] <StevenK> Which particular herd is it?
[03:27] <mwhudson> i suppose if you spend an hour improving it each week, eventually it will be ok
[03:27] <mwhudson> in like 150 weeks
[03:27] <StevenK> Which is 3 years
[03:27] <mwhudson> yeah, sounds optimistic
[03:28] <StevenK> Optimistic at best. A nightmare at worst.
[03:28] <StevenK> thumper: Put the rope down.
[03:28]  * thumper might start soonish with the shaving
[03:28]  * thumper shakes the shaving foam can
[03:29] <thumper> is it worthwhile renaming Specification to Blueprint?
[03:30] <wgrant> thumper: It's been through three names already... leave the poor creature alone.
[03:30] <lifeless> that would confuse people
[03:31] <thumper> what, renaming it?
[03:31] <thumper> wgrant: three names?
[03:31] <wallyworld_> thumper: what's the heritage of the blueprint code> why is it so bad?
[03:32] <wallyworld_> s/>/?
[03:32] <thumper> wallyworld_: that is a long story
[03:32] <thumper> perhaps we can talk about it on Monday
[03:33] <wgrant> thumper: Features, Blueprints, Specifications.
[03:33] <wallyworld_> ok. sounds like something best discussed with a case of red at close hand
[03:33] <wgrant> But it was renamed to Blueprints long before most people here appeared.
[03:33] <thumper> the word Specification is only mentioned 2699 times in the code
[03:33] <thumper> simple find and replace :)
[03:33] <wgrant> Heh.
[03:33] <StevenK> Was there a Feature to rename Features to Blueprints?
[03:34] <thumper> StevenK: no, it would have been Feature to Specification
[03:34] <thumper> and Specs to Blueprints
[03:34] <wgrant> I think it was originally specification.
[03:34] <thumper> the UI changed, but not the code
[03:34] <wgrant> Then to feature, then specification, then blueprint.
[03:34] <wgrant> But that first transition may be wrong.
[03:34] <StevenK> We changed it *back*?
[03:34] <thumper> I just find it a tad annoying in the code
[03:36] <thumper> and 100 files would need to be renamed
[03:37] <thumper> but that is a small price to pay I think for some form of consistency
[03:37] <StevenK> And a 17,000 line diff
[03:37] <thumper> sure, whatever
[03:38] <thumper> as long as it doesn't conflict :-)
[03:38] <StevenK> ... more than 3 times
[03:38] <wgrant> I'd be against the rename. Most other objects in Launchpad now have sane names.
[03:38] <wgrant> Specification -> Blueprint is the opposite of sane.
[03:39] <StevenK> The URL is blueprints.l.n
[03:39] <wgrant> It is.
[03:39] <wallyworld_> i've never used the term Blueprint wrt software. for me, it's always been a Specification Document
[03:39] <wgrant> But most people call them specs.
[03:39] <wallyworld_> blueprints are something you use to build a !$%@$ house
[03:39] <wgrant> Heh.
[03:39] <StevenK> So now they're blues?
[03:39] <mwhudson> i just try to alternate
[03:39] <StevenK> As in, we have them?
[03:40] <StevenK> wallyworld_: No, blueprints are used for a *bikeshed*
[03:40] <wallyworld_> :-D
[03:41] <wallyworld_> so my vote would be to use the industry accepted term, namely Specifications
[03:41] <wallyworld_> why reinvent terms that are in common usage?
[03:41] <wgrant> Because five years ago it was cool.
[03:41] <wgrant> Or something.
[03:41] <StevenK> wgrant is just bitter because five years ago he was eight.
[03:42] <wallyworld_> how could calling something a non-standard term that no-one else uses in that context have been considered cool?
[03:42] <wallyworld_> ever?
[03:42] <wgrant> wallyworld_: Don't ever go back before r3000 or so.
[03:42]  * thumper snorts
[03:43] <wallyworld_> what's lurking there? anything interesting?
[03:44] <thumper> FAAARRRRKKKK!
[03:44] <thumper> buildbot is all read
[03:44] <thumper> or red even
[03:45] <wallyworld_> maybe it's feeling Blue :-)
[03:45] <thumper>    lp.registry.windmill.tests.test_distroseriesdifference_expander.TestDistroSeriesDifferenceExtraJS.test_diff_extra_details_blacklisting
[03:45] <thumper>    lp.registry.windmill.tests.test_product.TestProductIndexPage.test_title_inline_edit
[03:45] <thumper> that's devel
[03:45] <rockstar> thumper, yes, rs=me on deleting all windmill tests
[03:46] <rockstar> :)
[03:46] <wgrant> wallyworld_: Names were a fair bit less sane back then.
[03:46] <thumper> db-devel: test_uploading_collection (lp.soyuz.tests.test_binarypackagebuildbehavior.TestBinaryBuildPackageBehaviorBuildCollection)
[03:46] <lifeless> rockstar: beep, denied.
[03:46] <wallyworld_> thumper: you saying those tests are failing?
[03:46] <thumper> wallyworld_: failed on buildbot
[03:46] <wallyworld_> what's the error?
[03:46] <thumper> wallyworld_: did you want to take a look?
[03:47] <lifeless> thumper: in terms of names, I think we should do serious user research before doing another rename.
[03:47] <thumper> lifeless: ack
[03:47] <rockstar> lifeless, I've said this before, but I think their cost outweighs their value.
[03:47] <wgrant> *cough* Branches *cough*
[03:47] <wallyworld_> thumper: tell me where to look - do i log into devpad to see the logs?
[03:47] <thumper> lifeless: there is enough other yaks to shave in blueprints
[03:47] <lifeless> thumper: exactly.
[03:47] <thumper> wallyworld_: https://lpbuildbot.canonical.com/waterfall - click on summary
[03:48] <rockstar> Blueprints mean something very valuable to the core users of blueprints: Ubuntu.
[03:51] <rockstar> Also, of the three things I've sent to ec2 and the two that deryck sent to ec2 today, we have no emails to show for it.  :(
[03:52] <StevenK> rockstar: I tossed two branches at ec2 and got 2 mails
[03:53] <rockstar> StevenK, lucky you.
[03:53] <wallyworld_> rockstar: i've had stuff go missing lately too :-(
[03:53] <rockstar> StevenK, still, 2 for 7 isn't great.
[03:53] <StevenK> rockstar: Better than 0 for 5 :-P
[03:53] <rockstar> wallyworld_, yeah, it really affects my productivity, because now I have to go back and babysit.
[03:54] <rockstar> StevenK, not for me it isn't!  :)
[03:54] <StevenK> We should get out of testfix first, though
[03:54] <wallyworld_> rockstar: i hear you. i've lost a lot of time as well, also chasing down "failures" which weren't
[03:54] <wgrant> StevenK: Better than 0 for 9 :(
[03:55] <StevenK> The last time it happened to me, there was a bug which I had introduced which was killing stuff
[03:55] <rockstar> One of these branches has support for soupmatchers, which I want to use everywhere, so it'd be nice if that landed...
[03:56] <rockstar> StevenK, so, there's some layer teardown stuff that's causing hangs in windmill.  deryck and I finally reproduced it locally with my lazr-js branch.
[03:57] <wgrant> Ah, excellent news.
[03:58] <wallyworld_> thumper: well, there's no dog's ball anywhere that i can see. and the code in question hasn't changed recently
[03:59] <lifeless> rockstar: interesting
[03:59] <lifeless> rockstar: tried with trunk? landed a layer teardown change today
[03:59] <thumper> wallyworld_: does it pass locally for you?
[03:59] <wallyworld_> trying now, updating as we speak
[04:00] <wallyworld_> s/updating/building
[04:00] <wallyworld_> have to stop my other db-devel system first and switch schemas @#@^^!$@
[04:02] <lifeless> thumper: hey, could you perhaps https://code.edge.launchpad.net/~lifeless/launchpad/edge/+merge/39594 ?
[04:04] <thumper> lifeless: there seems to be some waste there
[04:04]  * thumper comments
[04:05] <lifeless> oh?
[04:05] <lifeless> thanks
[04:05] <rockstar> lifeless, well, ec2 merges into trunk, right?
[04:05] <lifeless> rockstar: yes, but it would depend on the revno at the time you started ec2
[04:06] <thumper> lifeless: done
[04:07] <lifeless> thumper: thanks
[04:07] <rockstar> lifeless, yeah.  I'm sending them back to ec2.
[04:07] <rockstar> lifeless, also, it seems that ec2 is still running postgres 8.3
[04:07] <lifeless>  /o\
[04:08] <lifeless> thumper: >< our coding standards make your change _longer_
[04:09] <thumper> :)
[04:09]  * lifeless ignores the nuts wrapping rules and just does the bzr way.
[04:10] <lifeless> hmm, I've added another 80 lines of budget :)
[04:13] <wallyworld_> thumper: apart from a LayerIsolationError, WFM
[04:13] <thumper> hmm...
[04:13] <thumper> wondering if we should just force a build to see if it works this time
[04:13] <wallyworld_> that would be my plan of attach. at least the problem will be deffered for a little while :-)
[04:14] <wallyworld_> i gotta learn to type better
[04:16] <thumper> I've forced devel
[04:16] <wallyworld_> let's hope it works
[04:16] <thumper> wondering about the db-devel failure though
[04:17]  * wallyworld_ looks
[04:18] <wallyworld_> something just doesn't look right with that build server. those errors don't appear to be coding issues, do they
[04:19] <jtv> How can I have a webservice method that I can invoke in a stories test but not with launchpadlib against launchpad.dev?
[04:21] <jtv> I thought the one implied the other.
[04:22] <lifeless> night all
[04:23] <wgrant> jtv: Are you using launchpadlib in your test?
[04:23] <jtv> no, webservice
[04:23] <wgrant> Your WADL is out of date.
[04:23] <wgrant> make build
[04:23] <jtv> !
[04:24]  * jtv tries that
[04:24] <jtv> wgrant: then again, I don't see the method on staging or qastaging either, and the revision's been on there for a while now.
[04:25] <wgrant> jtv: You're not checking a frozen version of the webservice?
[04:25] <jtv> wgrant: I tried version='devel'
[04:25] <wgrant> Which method?
[04:26] <jtv> IHasTranslationImports.getTranslationImportQueueEntries
[04:27] <jtv> Tested in lib/lp/translations/stories/webservice/xx-translationimportqueue.txt
[04:28] <LPCIBot> Yippie, build fixed!
[04:28] <LPCIBot> Project devel build (167): FIXED in 3 hr 36 min: https://hudson.wedontsleep.org/job/devel/167/
[04:28] <LPCIBot> Launchpad Patch Queue Manager: [r=thumper][ui=none][bug=662552] Re-use POFile edit permission info
[04:28] <LPCIBot> across message sub-views.
[04:35] <jtv> wgrant: what I don't get is how that test can pass, yet I can't get at that method using launchpadlib interactively.  I haven't played with launchpadlib much so maybe I'm just doing something wrong there.
[04:38] <wgrant> jtv: Old-style webservice tests do not use WADL.
[04:38] <wgrant> So they don't care how broken it is.
[04:38] <jtv> wgrant: so I neglected to update the wadl?
[04:39] <wgrant> jtv: It's meant to be autogenerated.
[04:39] <jtv> grrr
[04:39] <jtv> But surely there is a way of getting our methods into the API, or else why do we have any
[04:39] <wgrant> It is in the WADL, though.
[04:39] <wgrant> Or in my WADL, at least.
[04:39] <wgrant> And in my apidoc.
[04:40] <jtv> So maybe I'm just doing something stupid.
[04:40] <wgrant> When did it land?
[04:40] <wgrant> Is it deployed?
[04:40] <jtv> Days ago.  It should be on staging.
[04:40] <jtv> Here's what I do locally:
[04:41] <jtv> l=Launchpad.get_token_and_login('gaga', service_root='https://api.launchpad.dev/', version='devel')
[04:41] <jtv> (Authorize in browser)
[04:41] <wgrant> jtv: It's on qastaging's apidoc.
[04:41] <jtv> l.me.getTranslationImportQueueEntries()
[04:41] <jtv> AttributeError: 'Entry' object has no attribute 'getTranslationImportQueueEntries'
[04:41] <wgrant> jtv:
[04:41] <wgrant> grep -r getTranslationImportQueueEntries lib/canonical/launchpad/apidoc/
[04:42] <jtv> Nothing.
[04:43] <wgrant> Does 'make apidoc' take forever and say it's generating stuff for all three versions?
[04:44] <jtv> I haven't tried that.
[04:44] <jtv> (Didn't know it existed)
[04:44] <wgrant> build depends
[04:44] <wgrant> ... on it
[04:44] <jtv> Well, "make build" does take forever now, after a "make clean."
[04:45] <wgrant> But that's probably buildout automatically sacrificing various items of wildlife that it finds around you.
[04:45] <wgrant> Not the WADL generation.
[04:45] <jtv> wgrant: it's building that now
[04:45] <wgrant> Aha.
[04:47] <jtv> And now my wadl shows the method.  Wonder why "make build" didn't do that before.
[04:47] <jtv> Are these files under revision control?
[04:47] <wgrant> They're not.
[04:48] <wgrant> But this may change soon.
[04:48] <wgrant> They take forever to generate.
[04:48] <jtv> Ah, I did this in an old branch.
[04:48] <wgrant> So 'make build' probably won't regenerate them every time.
[04:48] <jtv> I guess the dependencies aren't complete.
[04:48] <wgrant> Or make run would take even longer than it does already.
[04:48] <wgrant> s/aren't/can't be without taking 15 minutes to start an appserver/
[04:50] <wgrant> Anyway, hopefully that will all work.
[04:50] <wgrant> I need to run off and submit my final project.
[04:50] <jtv> wgrant: thanks as always!
[04:50] <jtv> Final… university project?
[04:50] <wgrant> That's the one.
[04:51]  * thumper closes up.
[04:51] <jtv> wgrant: good luck—ace it like I did.  :-)
[04:51] <jtv> bye thumper
[04:51] <wgrant> Heh. Thanks.
[04:51] <wgrant> Night thumper.
[04:51] <wallyworld_> thumper: quick question before you go
[04:51] <wallyworld_> [04:51] <wallyworld_> ERROR: lp.codehosting.vfs.tests.test_transport.TestLaunchpadTransportImplementation.test_win32_abspath (subunit.RemotedTestCase)
[04:51] <wallyworld_> ----------------------------------------------------------------------
[04:51] <wallyworld_> _StringException: Text attachment: garbage
[04:51] <wallyworld_> ------------
[04:51] <wallyworld_> [<bzrlib.smart.medium.SmartTCPClientMedium object at 0xc109810>]
[04:52] <wallyworld_> does that mean there's something screwed on ec2 wrt code hosting or something
[04:52] <wallyworld_> an ec2 test run just failed with 1000 odd errors all like this
[05:03] <thumper> ?
[05:05] <thumper> wallyworld_: ah....
[05:05] <thumper> not seen that before
[05:06] <thumper> sounds like something screwy
[05:06] <wallyworld_> thumper: no matter. looks like there's something environmental going on
[05:06]  * thumper nods and leaves
[05:06] <wallyworld_> i'll resubmit later. not having much luck with ec2 lately :-( have a good w/e
[05:42] <spiv> losa fyi: I'm trying to reproduce a bzr/codehosting bug, and I think I may have caused a python process to spin in an infinite loop on codehosting.
[05:42] <spm> impressive
[05:43] <spiv> spm: if you're not too busy, I'd be quite curious to know if I really have done that...
[06:51] <jtv> Is anyone here on first-name terms with the webservice API?
[06:51] <wgrant> What has it done to you now?
[06:53] <jtv> Well my method is documented now
[06:53] <wgrant> !!
[06:53] <jtv> I mean, it's in the wadl and apidoc now,
[06:54] <jtv> _but_
[06:54] <jtv> it doesn't appear on the objects that implement it and the interface it's in.
[06:54] <jtv> Instead, it appears in a separate API class named after the interface.
[06:55] <jtv> Which I suppose means that, instead of making objects implement the interface, I need to derive the objects' interfaces from my interface.
[06:55] <wgrant> That's right.
[06:55] <jtv> Pretty unfashionable strong static typing, that.
[06:56] <wgrant> It is unfortunate, but fortunately not difficult.
[06:58] <jtv> Unless it goes against the direction of all other dependencies and so is likely to introduce massive amounts of circular imports.
[06:58] <wgrant> Well, true.
[06:59] <jtv> And guess what?  That appears to be the case.  :)
[06:59] <wgrant> :D
[07:22] <shadeslayer> any ideas if my qtwebkit build was fixed?
[08:44] <adeuring> good morning
[09:04] <mrevell> Hello
[13:06] <lifeless> mwhudson: does anything outside the dc know about guava ?
[13:57] <jml> hello
[14:18] <mwhudson> lifeless: no
[14:18] <lifeless> mthaddon: ^
[14:19] <mthaddon> I'm going to assume a whole lot of context there and merge that config change
[14:19] <lifeless> mthaddon: there is an rt for getting a view on haproxy; should we add to that a note to get view on this new haproxy ?
[14:20] <mthaddon> that'd probably make sense, yeah
[14:25] <mwhudson> lifeless: although guava is in the dmz for historically daft reasons, so it can't connect to dc machines quite as one would expect
[14:25] <mwhudson> (don't know if this is important here)
[14:25] <lifeless> mwhudson: I don't think it is
[14:26] <lifeless> mwhudson: story is - we're haproxying up codebrowse for no-downtime deploys.
[14:26] <mwhudson> mtaylor: the launchpad/bazaar session has just been moved later, hope that's ok with you
[14:26] <lifeless> mwhudson: so we just want to make sure nothing is left using the un-haproxy urls
[14:28] <mwhudson> lifeless: yeah, all real traffic goes via crowberry
[14:43] <mrevell> adeuring, Do you have a moment? I have a question about bug pages
[14:44] <adeuring> mrevell: sure
[14:45] <mrevell> thanks adeuring. I'm working on bug 117460 and I need to add a help link next to "Also affects project" on bug pages. Looking at bug.py I thought I should ask some advice before diving in. Do you have a suggestion for how to do it?
[14:45] <_mup_> Bug #117460: "Also affects project" doesn't provide any instructions <better-forwarding> <trivial> <ui> <Launchpad Documentation:Invalid> <Launchpad Bugs:Triaged by matthew.revell> <https://launchpad.net/bugs/117460>
[14:46]  * adeuring is reading the bug repoert
[14:48] <mrevell> adeuring, It's quite an involved bug report and some of it has been split off into other reports and those solved separately. I'm going to deal with the "people don't understand what 'also affects project' means" aspect.
[14:49] <adeuring> mrevell: right... difficult task ;) shall we talk on mumble about it?
[14:49] <mrevell> adeuring, Sure!
[14:50] <mrevell> adeuring, You don't hear me?
[14:50] <adeuring> mrevell: no, i don't. also, mumble doesn't seem to notice that you are speaking
[14:51] <mrevell> Let me check the mic isn't muted
[15:11] <adeuring> mrevell: so, the links (actually there are two: "add project" and "add distribution") are defined as objects of class Link in lp.bugs.browser.bug ; they are rendered in lp/bugs/templates/bugtasks-and-nominations-table.pt . That is the template where you can add the "help magic", I think.
[15:15] <rockstar> Ooh! I think deryck and I have figured out why ec2 instances aren't sending out email.
[15:15] <mrevell> Thanks adeuring, I shall give that a try!
[15:16] <rockstar> mars, ^^
[15:17] <mars> rockstar, eh?
[15:18] <rockstar> mars, we shutdown automatically after 5 hours, right?  We're 20 minutes into that 5 hours before the tests even start running.  If our tests have any reason to take 4.5 hours (buildbot says 4.4 right now), it shuts down before the tests are done.
[15:18] <rockstar> So any heavy I/O would slow the test suite down, etc.
[15:19] <mars> rockstar, ugh, yes, great hypothesis.  "5 hours will be enough"  ("No one will ever need more than 640K of memory")
[15:19] <rockstar> mars, if the layer tearDown stuff was really an issue, abentley would be saying something about it.
[15:20] <rockstar> mars, but he doesn't have to worry about time limits when he runs his tests.  They're done when they're done.
[15:20] <mars> rockstar, darn test run time - do we graph that I wonder?
[15:20] <mars> rockstar, it used to be ~3 hours
[15:20] <rockstar> mars, deryck and I are cumulatively 0 for 8 on ec2 runs.
[15:20] <mars> that is why we set a 5 hour limit
[15:20] <rockstar> mars, it USED to be ~2 hours, 3 years ago.  :)
[15:20] <lifeless> up it ;)
[15:20] <mars> 8(
[15:21] <lifeless> we're working on test perf now
[15:21] <rockstar> lifeless, yeah, I'm bumping the timeout on a single run.
[15:21] <abentley> rockstar: I don't use test_on_merge because it times out.  Instead I run bin/test, which runs as long as it takes to finish.
[15:21] <mars> rockstar, I think we should up the limit anyway
[15:21] <mars> I'll land the branch. rs=lifeless?
[15:21] <rockstar> abentley, I'm not sure if ec2 runs on test_on_merge specifically, but even if it didn't time out, ec2 shuts down after 300 minutes.
[15:21] <rockstar> mars, rs=rockstar
[15:22] <lifeless> mwhudson: yes
[15:22] <lifeless> bah
[15:22] <lifeless> mars: yes
[15:22] <StevenK> mars: We're in testfix
[15:22]  * rockstar facepalms
[15:23] <abentley> rockstar: So, I know test_on_merge dies due to timeouts, but it's a different timeout.  It's a "minutes without output" timeout.
[15:23] <rockstar> abentley, yeah, so in ec2, we could be getting output, but the test suite could be taking longer than the ~260 minutes we allot it.
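The timeout arithmetic rockstar is describing can be sketched directly. The numbers (a 300-minute instance lifetime, roughly 20 minutes of setup before tests start, a ~4.4-hour suite per buildbot) come from the conversation above; the variable names are illustrative:

```python
# Back-of-envelope check of the ec2 timeout rockstar describes: the
# instance self-destructs 300 minutes after boot, and roughly 20
# minutes of that are consumed by setup before the tests even start.
INSTANCE_LIFETIME = 300   # minutes until the ec2 instance shuts down
SETUP_OVERHEAD = 20       # minutes of boot/branch setup before tests run
TEST_RUNTIME = 4.4 * 60   # current suite runtime per buildbot (~4.4 hours)

budget = INSTANCE_LIFETIME - SETUP_OVERHEAD   # the "~260" rockstar allots
slack = budget - TEST_RUNTIME                 # how little margin is left

print(budget, slack)  # 280 16.0
```

With only about a quarter of an hour of slack, any heavy I/O on the instance is enough to push the run past the shutdown deadline, which matches the 0-for-8 record above.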
[15:24] <abentley> rockstar: indeed.  Now what's this layer teardown thing?
[15:24] <rockstar> abentley, deryck and I reproduced an issue with windmill where it fails to tear down a layer and then just stops dead right then.
[15:25] <abentley> rockstar: Ah.  Not a problem I recall.
[15:26] <rockstar> abentley, yeah, I suspect it has a lot to do with windmill's asshattery.
[15:27] <lifeless> rockstar: interesting
[15:27] <lifeless> rockstar: how do you trigger that ?
[15:27] <rockstar> lifeless, I thought deryck sent the output to the list.
[15:27] <lifeless> I'm behind on mail just now
[15:28] <rockstar> lifeless, he says he didn't send a mail, because he just disabled the test because it was failing spuriously anyway.
[15:28] <rockstar> lifeless, bin/test --layer WindmillLayer
[15:28] <lifeless> rockstar: did he file a bug? We need to treat our test environment as rigorously as we do our target code
[15:29] <rockstar> lifeless, he didn't at the time, but I just asked him to.
[15:29] <lifeless> \o/
[15:30] <rockstar> lifeless, to be fair, we've been trying to get a new lazr-js landed in launchpad for more than a month now.  We're trying to get it done by hook or by crook.
[15:30] <lifeless> rockstar: I'm fine with that
[15:30] <lifeless> as long as we record interesting things like such problems as you've encountered.
[15:33] <mars> rockstar, would that be the spurious failure Ian said he fixed?
[15:34] <rockstar> mars, no idea.  We're getting all manner of spurious failures.
[15:34] <mars> rockstar, I ask because I wonder if the test was re-enabled
[15:36] <rockstar> mars, I don't think so, since this branch is pretty old.  Of course, we merged regularly, but there's lots of stuff that windmill does stupidly.
[15:36] <StevenK> mars: bigjools is landing a testfix
[15:36] <mars> StevenK, ok, thanks
[15:36]  * bigjools is literally just landing it
[15:37] <mars> lifeless, so this is an rs= fix, your new procedure says I need an MP for it as well?
[15:37] <lifeless> mars: my process experiment is for you if you do r=mars
[15:37] <lifeless> mars: rs=otherperson is an existing defined process that I didn't touch
[15:37] <mars> right
[15:38] <mars> rs= is 'land this and skip the MP'
[15:38] <lifeless> we could, if the experiment is successful, further iterate and combine these things
[15:38] <StevenK> lifeless: r=trivial will continue to be verboten?
[15:39] <rockstar> mars, for instance, in a test, windmill has recently decided that even though a link is green and has a javascript event connected to it, it would rather just click through.
[15:39] <lifeless> StevenK: yes
[15:39] <lifeless> 'trivial' is not a very useful metric IMO
[15:39] <StevenK> lifeless: Okay, good
[15:39] <mars> rockstar, timing issue?
[15:39] <rockstar> mars, no, not a timing issue, just a bug of some sort.
[15:39] <rockstar> mars, something is quite obviously broken in windmill.
[15:40] <mars> rockstar, if you can narrow it down, we can take it to #windmill directly
[15:40] <mars> they would really want to know about that
[15:40] <mars> rockstar, I assume it was the 1.4 upgrade
[15:40] <rockstar> mars, probably, but I'd rather invest no more time in windmill at all.
[15:40] <mars> rockstar, show me an alternative that takes less time than fixing what we have, and I'll go for it :)
[15:41] <rockstar> mars, I keep meaning to write to the mailing list about this.  :)
[15:41] <mars> oh, and that doesn't lose any of the benefits that windmill gives us
[15:41] <rockstar> mars, right now, windmill isn't really giving us many benefits.
[15:41] <mars> (notice that I did not say what the benefits /are/)
[15:41] <lifeless> who knows how to get to the staging outbound mailbox ?
[15:41] <mars> lifeless, deryck and team might?
[15:42] <bigjools> my fix is in PQM
[15:42] <mars> bigjools, ok, thanks for letting me know
[15:44] <lifeless> I will hunt deryck
[15:50] <rockstar> lifeless, deryck says bigjools has the credentials in an encrypted text file somewhere and can get to it easier than he can.
[15:50] <bigjools> que?
[15:51] <mars> bigjools, lifeless wants to access the staging outbound mailbox
[15:54] <bigjools> mars: ok, I have it
[15:55] <bigjools> kwallet FTW
[16:08] <flacoste> lifeless: i found ftp://ftp.cs.wpi.edu/pub/techreports/pdf/06-17.pdf
[16:08] <flacoste> it describes a space-efficient way to compute an approximation of the median
[16:08] <flacoste> works on streaming data
[16:09] <flacoste> should be fine for us
[16:12] <flacoste> lifeless: another simpler approach would be to estimate the median using the frequency table
[16:13] <lifeless> flacoste: both sound reasonable to me
[16:13] <lifeless> and incremental
[16:13] <lifeless> we should be able to do what we want without a DB
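The frequency-table estimate flacoste mentions can be sketched in a few lines. This is an illustrative approximation only, under the assumption that values can be bucketed at an acceptable resolution; it is neither the algorithm from the linked paper nor any Launchpad code:

```python
from collections import Counter

class FrequencyMedian:
    """Approximate the median of a stream via a frequency table.

    Values are rounded into fixed-width buckets, so memory is bounded
    by the number of distinct buckets rather than the stream length,
    which makes it incremental and suitable for streaming data.
    """

    def __init__(self, bucket_width=1):
        self.bucket_width = bucket_width
        self.counts = Counter()
        self.total = 0

    def add(self, value):
        # Record the observation in its bucket.
        self.counts[int(value // self.bucket_width)] += 1
        self.total += 1

    def median(self):
        # Walk the buckets in order until we pass half the observations,
        # then return that bucket's midpoint as the estimate.
        midpoint = self.total / 2.0
        seen = 0
        for bucket in sorted(self.counts):
            seen += self.counts[bucket]
            if seen >= midpoint:
                return (bucket + 0.5) * self.bucket_width

stream = [3, 1, 4, 1, 5, 9, 2, 6]
fm = FrequencyMedian()
for v in stream:
    fm.add(v)
print(fm.median())  # 3.5 (true median of the stream is also 3.5)
```

The accuracy/memory trade-off is controlled by `bucket_width`: wider buckets use less memory but give a coarser estimate.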
[16:15] <lifeless> https://bugs.staging.launchpad.net/bzr/+bug/644855
[16:15] <_mup_> Bug #644855: TestVersion.test_main_version fails run from installed package <easy> <selftest> <Bazaar:Fix Released by vila> <Bazaar 2.2:Fix Released by vila> <Bazaar 2.3:Fix Released by vila> <https://launchpad.net/bugs/644855>
[16:15] <lifeless> https://staging.launchpad.net/bzr/+milestone/2.2.2/+subscribe
[16:21] <lifeless> flacoste: ping
[16:21] <flacoste> hi lifeless
[16:21] <lifeless> calling you
[16:22] <flacoste> lifeless: try again?
[16:22] <lifeless> on skype... goes to voicemail
[16:22] <flacoste> hmm
[16:23] <flacoste> yeah, same thing if i try to call you
[16:23] <flacoste> :-/
[16:23] <flacoste> let me restart skype
[16:24] <mars> bigjools, did your testfix go through?  Mine bounced on the testfix regex
[16:24] <flacoste> lifeless: restarted
[16:24] <flacoste> pfff
[16:24] <flacoste> it does ring here
[16:24] <lifeless> ring me ?
[16:30] <bigjools> mars: it's in devel
[16:30] <mars> bigjools, ok, testfix didn't clear - I thought it would have
[16:30] <bigjools> me too :/
[16:31] <StevenK> I think the poller needs to run to do so
[16:31] <StevenK> And that will only do so after buildbot builds devel
[16:32] <mars> StevenK, gary says the poller runs every 5-10 minutes?
[16:32] <StevenK> Yes, but I think the last devel build in buildbot needs to be successful
[16:33]  * mars opts for empirical evidence
[16:33] <mars> pqm-submit - play it again!
[16:34] <StevenK> lp-land FTW
[16:45] <abentley> lifeless: will landing on stable update the cron scripts for qa-staging?
[16:45] <lifeless> mm, ask mthaddon / chex
[16:53] <abentley> mthaddon, chex: will landing on stable update the cron scripts for qa-staging?
[17:03] <Chex> abentley: are you talking about the /cronscripts/ directory in the root of the LP branch?
[17:04] <lifeless> bigjools: how did you go?
[17:05] <abentley> Chex: I'm asking about the behaviour of the whole system.  I haven't actually changed files in that directory, but I've changed files elsewhere that will change their behaviour.  Will I get the updated behaviour on qa-staging the way I would with staging?
[17:06] <bigjools> lifeless: badly, still trying to get staging's inbox to load
[17:08] <lifeless> bigjools: :(
[17:08] <poolie_droid> jml: do we have any launchpad stickers here?
[17:08] <jml> poolie_droid: no, we don't.
[17:08] <poolie_droid> oh well
[17:09] <poolie_droid> vikram was admiring mine
[17:09] <shadeslayer> poolie_droid: bug 668407
[17:09] <_mup_> Bug #668407: daily builds hang at pulling qtwebkit <Launchpad Bazaar Integration:New> <https://launchpad.net/bugs/668407>
[17:09] <lifeless> I forgot my roll... sorry
[17:10] <Chex> abentley: yes you should get the same behaviour that you would on staging with changes
[17:11] <abentley> Chex: I know that asuka is the equivalent of loganberry for staging, but what's the equivalent for qa-staging?  Is there a diagram?
[17:13] <abentley> rockstar: I just got a windmill hang.
[17:17] <Chex> abentley: staging and qastaging are both like loganberry, the difference being qastaging is using a current copy of prod DB schema
[17:17] <rockstar> abentley, ugh.
[17:18] <rockstar> abentley, what kind of hang?
[17:18] <abentley> Chex: what machine is running the cron scripts for qastaging that loganberry runs for production?
[17:18] <Chex> abentley: asuka
[17:18] <abentley> rockstar: The kind where it hangs.
[17:18] <abentley> Chex: asuka is handling both staging and qastaging?
[17:19] <rockstar> abentley, no traceback or anything?
[17:19] <abentley> rockstar: no, that would be a crash.
[17:20] <abentley> rockstar: It started running, and it never stopped, and it didn't appear to be making progress, and I waited about an hour.
[17:20] <Chex> abentley: thats correct
[17:20] <rockstar> abentley, yeah, so the one we saw was where it tracebacked while tearing down a layer, and then it just stopped tearing down the rest of the layers.
[17:22] <abentley> rockstar: This was the whole output until I pressed CTRL-C.
[17:22] <abentley> rockstar: http://pastebin.ubuntu.com/522177/
[17:23] <abentley> rockstar: This is the output after I pressed CTRL-C: http://pastebin.ubuntu.com/522178/
[17:25] <lifeless> abentley: we haven't enabled all cronscripts on qastaging because of load; if there's one you want to have execute work to do QA, ask a losa to run it (however many times you need)
[17:26] <abentley> lifeless: it seems like landing on db-staging would be much simpler.
[17:27] <abentley> lifeless: s/db-staging/db-stable/
[17:27] <lifeless> abentley: we're going to have to do a similar thing for staging too - the load on asuka is just nuts.
[17:27] <lifeless> abentley: or we may find a machine to do scripts on and move staging *and* qastaging scripts to that
[17:29] <abentley> lifeless: up till now, we've relied on staging as a place we can test script changes.  It would be a significant loss if we were forced to use losas for our script testing.
[17:30] <lifeless> abentley: yes, the current setup is undesirable, see bug ...
[17:30] <lifeless> https://bugs.edge.launchpad.net/launchpad-foundations/+bug/633713
[17:30] <_mup_> Bug #633713: staging is overloaded <Launchpad Foundations:Triaged by canonical-losas> <https://launchpad.net/bugs/633713>
[17:32] <abentley> lifeless: Please pursue that second option.  Could we use the internal UEC cloud?
[17:33] <abentley> lifeless: Also, we could perhaps significantly reduce the load with a "New Task System" that was event-driven so it wasn't constantly polling.
[17:35] <lifeless> the polling ones aren't the problems AIUI
[17:35] <lifeless> anyhow, yes, am pursuing things
[17:39] <abentley> lifeless: if the polling ones aren't the problems, you shouldn't need to disable them, right ? ;-)
[17:40] <lifeless> abentley: it was simpler to just not turn anything on - we have /lots/ of cronscripts.
[17:40] <lifeless> abentley: feel free to ask for specific ones to be enabled
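abentley's polling-versus-event-driven point can be illustrated with a toy queue consumer. All names here are made up for the example; this is not how Launchpad's cron scripts are structured:

```python
import queue

# Illustrative contrast: a poller burns wakeups checking for work even
# when there is none, while an event-driven consumer blocks until a
# task actually arrives.
tasks = queue.Queue()

def handle(task):
    return "handled %s" % task

# Polling style: wake up, check, go back to sleep (sleep elided here).
def poll_once():
    try:
        return handle(tasks.get_nowait())
    except queue.Empty:
        return None  # a wasted wakeup -- this is the recurring load

# Event-driven style: block until a task exists, no busy polling.
def wait_for_task():
    return handle(tasks.get())

print(poll_once())        # None -- polled with nothing to do
tasks.put("sync-branch")
print(wait_for_task())    # handled sync-branch
```

The caveat from the channel stands, though: lifeless notes the polling scripts aren't the main load here, so this is a design observation rather than the fix.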
[17:40] <lifeless> gary_poster: are you here today?
[17:41] <lifeless> gary_poster: I'd like to talk about that overload bug with isf/iso
[18:04] <henninge> matsubara: hi!
[18:04] <matsubara> henninge, hi
[18:05] <henninge> matsubara: where does the code live that turns our script output into an oops?
[18:06] <henninge> matsubara: or more specifically, can I avoid an oops by reducing a log message from "warn" to "info"?
[18:06] <henninge> matsubara: or are you the wrong guy to ask? ;-)
[18:10] <matsubara> henninge, lib/canonical/launchpad/webapp/errorlog.py and lib/lp/services/scripts/base.py
[18:11] <henninge> matsubara: thanks, I'll look into those
[18:14] <matsubara> henninge, yes, you can avoid an oops by reducing the log message to info only. See https://bugs.launchpad.net/malone/+bug/661441 where Robert suggests that approach to silence some checkwatches oopses.
[18:14] <_mup_> Bug #661441: Duplicate oops reports logged for each checkwatches oops <oops> <Launchpad Bugs:Triaged> <https://launchpad.net/bugs/661441>
[18:14] <henninge> matsubara: yup, it's in there. WARN and higher get turned into OOPSes.
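henninge's finding (records at WARN and above become OOPSes, INFO does not) can be sketched with a custom logging handler. `report_oops` and `OopsHandler` are stand-in names for whatever errorlog.py actually does, not Launchpad's real API:

```python
import logging

# Hypothetical sketch of the behaviour described above: anything
# logged at WARNING or higher is treated as OOPS-worthy.
oopses = []

def report_oops(record):
    # Stand-in for the real OOPS machinery.
    oopses.append(record.getMessage())

class OopsHandler(logging.Handler):
    def emit(self, record):
        if record.levelno >= logging.WARNING:
            report_oops(record)

log = logging.getLogger("demo-script")
log.setLevel(logging.INFO)
log.addHandler(OopsHandler())

log.warning("bug watch looks stale")   # becomes an OOPS
log.info("bug watch looks stale")      # logged, but no OOPS

print(len(oopses))  # 1
```

This is why downgrading a message from warn to info, as matsubara suggests, silences the corresponding OOPS without losing the log line.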
[18:14] <henninge> :-D
[18:14] <matsubara> :-)
[18:14] <henninge> matsubara: thanks
[18:14] <matsubara> np henninge
[18:53] <gary_poster> lifeless, here, was at lunch, 1 sec
[18:54] <gary_poster> lifeless, dunno what isf/iso is, but back
[20:04] <lifeless> jml: hi
[20:04] <jml> lifeless: hello
[20:04] <lifeless> where are you ?
[20:04] <lifeless> bigjools: we're still blocked on that qa right
[20:05] <lifeless> ?
[20:05] <StevenK> jml: O hai, can haz mentor review for https://code.edge.launchpad.net/~henninge/launchpad/devel-bug-666660-poimport-oops/+merge/39653
[20:06] <jml> lifeless: curucao 4
[20:06] <jml> StevenK: granted
[20:07] <henninge> jml: thanks!
[20:11] <bigjools> lifeless: yes, I am unable to view the staging inbox
[20:19] <jml> lifeless: do you not care?
[20:24] <LPCIBot> Project devel build (168): SUCCESS in 3 hr 58 min: https://hudson.wedontsleep.org/job/devel/168/
[20:24] <lifeless> jml: I care a lot
[20:24] <lifeless> jml: I need to talk to you :)
[20:24] <lifeless> bigjools: what were the details again ?
[20:24] <lifeless> like
[20:24] <lifeless> not the password
[20:24] <lifeless> the other stuff
[20:25] <bigjools> lifeless: ->ops
[20:25] <gary_poster> lifeless: was @ lunch; appear to have replied when you were offline.  here now.  dunno what isf/iso is
[20:31] <lifeless> gary_poster: is foundations/ops
[20:31] <gary_poster> ah!
[20:31] <gary_poster> ok
[20:33] <lifeless> gary_poster: I cc'd you on mail instead
[20:33] <gary_poster> oh
[20:33] <lifeless> we'll get a machine for staging scripts
[20:33] <gary_poster> cool, that will be great
[20:33] <gary_poster> saw email
[20:39] <lifeless> so, I'm trying to qa
[20:39] <lifeless> but mail from bugs on staging doesn't seem to be going to the staging mailbox
[20:44] <lifeless> mbarnett: we're not seeing the last few comments from https://bugs.staging.launchpad.net/bzr/+bug/644855 in the staging imap mbox
[20:44] <_mup_> Bug #644855: TestVersion.test_main_version fails run from installed package <easy> <selftest> <Bazaar:Fix Released by vila> <Bazaar 2.2:Fix Released by vila> <Bazaar 2.3:Fix Released by vila> <https://launchpad.net/bugs/644855>
[20:47] <mbarnett> lifeless: sorry, got a couple pings at the same time there.  give me a minute and i will take a look at that.
[20:48] <lifeless> thanks
[21:12] <mbarnett> lifeless: so, the issue is, new comments should be generating individual emails that are sent to the staging imap mbox, and this step isn't happening?
[21:14] <lifeless> mbarnett: yeah
[21:14] <lifeless> I think that that happens through a script
[21:15] <mbarnett> lifeless: is this unique to that bug?
[21:15]  * mbarnett goes to try and find the cronjob for generating bug-comment-emails
[21:16] <lifeless> last mail I see there was 22 hours ago (for staging bug mail)
[21:19] <lifeless> mbarnett: ^
[21:20] <mbarnett> i ran the process-mail script by hand.  i saw some errors in the log file from the failed staging restore from a couple days ago
[21:20] <mbarnett> but it seemed to return cleanly
[21:21] <lifeless> I'll give it a minute and try again
[21:22] <StevenK> losa ping
[21:22] <mbarnett> heya StevenK
[21:23] <lifeless> StevenK: <- -ops :P
[21:23] <lifeless> StevenK: for the 'ping' itself.
[21:24] <StevenK> lifeless: Yes, brain fail :-(
[21:25] <lifeless> what does 'More than one process found' mean from cronscripts?
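One common cause of that message from a cron script is a single-instance guard tripping: a lock or pid file says another copy is already running (or died without cleaning up). A minimal sketch with an atomic lockfile follows; the path, message text, and function names are illustrative, not Launchpad's cronscript machinery:

```python
import errno
import os

LOCKFILE = "/tmp/demo-cronscript.lock"

# Start clean in case an earlier run left the file behind.
if os.path.exists(LOCKFILE):
    os.remove(LOCKFILE)

def acquire_lock(path=LOCKFILE):
    try:
        # O_EXCL makes creation atomic: open fails if the file exists,
        # so two scripts can never both think they hold the lock.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as e:
        if e.errno == errno.EEXIST:
            raise RuntimeError("More than one process found")
        raise
    os.write(fd, str(os.getpid()).encode())
    os.close(fd)

def release_lock(path=LOCKFILE):
    os.remove(path)

acquire_lock()
try:
    second_run_failed = False
    try:
        acquire_lock()   # a second instance hits the guard
    except RuntimeError:
        second_run_failed = True
finally:
    release_lock()
print(second_run_failed)  # True
```

A stale lockfile left by a crashed run produces the same message, which is why it sometimes appears when no second process is actually alive.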
[21:42] <lifeless> mbarnett: not seeing anything
[21:42] <lifeless> mbarnett: what were the scripts errors ?
[21:43] <mbarnett> lifeless: i saw some reference to shipit, which is what caused me to suspect it was from the failed staging update
[21:43] <mbarnett> that and the fact that they were from 2 days ago
[21:43] <lifeless> right; it might not be starting up properly then
[21:43] <lifeless> ?
[21:47] <mbarnett> lifeless: when you say "it" here, what exactly are you referring to?  /me is trying to figure out what *should* be happening with regard to bug-generated emails.
[21:48] <lifeless> 'the bug mail sender' might not ...
[21:48] <lifeless> mbarnett: I'd expect them to end up on m.c.c in the account 'staging'
[21:51] <mbarnett> lifeless: right, but your question that "it" might not be starting up properly.. i am not sure what the "it" was.
[21:52] <lifeless> mbarnett: the bug mail sending script - the one that gave you shipit errors
[21:52] <lifeless> mbarnett: deleting lib/canonical/shipit should fix those errors
[21:52] <mbarnett> lifeless: it is very possible i was looking at the wrong script... i am not sure if "process-mail" is what gets emails from bug comments to the staging mailbox
[21:53] <lifeless> mbarnett: well the mail is sent to local smtp as usual
[21:53] <lifeless> mbarnett: so you could look at that config
[21:53] <mbarnett> lifeless: those errors were from a couple days ago, i haven't seen them since and the process-mail cronjob runs much more frequently, so i don't think we have a problem there
[21:56] <lifeless> mbarnett: ok; so its going somewhere else ;)
[21:56] <lifeless> mbarnett: mail is being sent to localhost:25
[22:17] <poolie_droid> Does anyone do their lp development on a cloud instance?
[22:18] <lifeless> I use a vm, not a cloud instance though
[22:18] <poolie_droid> As opposed to just running tests there
[22:18] <lifeless> yes
[22:19] <poolie_droid> Yes, I use a vm on my desktop
[22:19] <poolie_droid> Am going to try using ec2 for a bit
[22:26] <mtaylor> lifeless: that launchpad extension thing in the lunch session ... do you know if there's a project for that?
[22:27] <lifeless> lunch session? I missed the lightning talks - was grabbed by two! people for 'must have before eow discussions'
[22:27] <poolie_droid> Nice talk btw mtaylor
[22:27] <mtaylor> lifeless: ah.
[22:27] <mtaylor> poolie_droid: thanks!
[22:27] <lifeless> mtaylor: what thing are you talking about
[22:27] <mtaylor> poolie_droid: at this point, I have more-than-normal amount of experience talking to people who are switching to bzr :)
[22:28] <mtaylor> lifeless: it was a thing someone was showing where he'd made a test-results web app thing, but used some amount of lpapi/something to make a) the app look launchpad-y and b) register with real launchpad the existence of his thing
[22:28] <StevenK> mtaylor: That was cr3
[22:28] <poolie_droid> Mtaylor the speaker was Marc Tardif, if you're talking about the thing with grease monkey
[22:29] <lifeless> mtaylor: look at /launchpad-project
[22:29] <mtaylor> poolie_droid: StevenK: sweet. thanks
[22:29] <lifeless> mtaylor: its there
[22:33] <mtaylor> lifeless: when you say "it's there" ... I am obviously unintelligent, cause I don't see nothing...
[22:33] <mtaylor> lifeless: thanks
[22:48] <poolie_droid> Got it now?
[23:07] <mars> OOPS-1763CMP4
[23:21] <StevenK> mars: You should be able to land your branch now
[23:22] <mars> StevenK, thanks, I completely forgot about that :/
[23:22] <StevenK> mars: Heh, I hadn't. I've been fighting to get us out of testfix for most of the day
[23:23] <mars> StevenK, and most of your weekend, it appears.  That sucks.
[23:26] <mars> Sent.  I'll know in (2*13)+(13/2) minutes
[23:26] <mars> (approximately)