[00:26] <StevenK> wgrant: Can you link me that duplicate_of json URL so I can see that the right stuff is still redacted on qas?
[01:00] <wallyworld_> StevenK: it's a holiday in victoria today afaik
[01:01] <StevenK> wallyworld_: It is, but he was talking earlier. :-)
[01:01] <wallyworld_> of course :-)
[01:05] <wgrant> $ grep duplicate_of.*json /home/wgrant/irclogs/Freenode/#launchpad-dev.log | head -n 1
[01:05] <wgrant> 15:53 < wgrant> Grab https://bugs.qastaging.launchpad.net/api/devel/bugs/718213/duplicate_of?ws.accept=application/json as anonymous
[01:05] <_mup_> Bug #718213: Can't access due to content. <Internet Archive - Tech Support:New> < https://launchpad.net/bugs/718213 >
[01:05] <wgrant> StevenK: ^^
[01:05] <StevenK> Ah ha!
[01:06] <StevenK> wgrant: I think 1234 is no longer private ...
[01:06] <wgrant> That can be easily fixed.
[01:08] <StevenK> But nothing is redacted in the JSON. :-(
[01:08] <wgrant> Not if you can see the bug, no.
[01:09] <StevenK> wget should be anonymous
[01:10] <wgrant> https://api.qastaging.launchpad.net/devel/bugs/718213/duplicate_of?ws.accept=application/json looks thoroughly redacted to me.
[01:10] <_mup_> Bug #718213: Can't access due to content. <Internet Archive - Tech Support:New> < https://launchpad.net/bugs/718213 >
[01:10]  * StevenK suspects caching
[01:11] <StevenK> wgrant: Okay, if it looks like nothing is leaking, I'll mark the bug as qa-ok.
[01:11] <wgrant> Sounds reasonable.
[01:12] <StevenK> But my JSON here is still non-redacted. :-(
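The qa check being discussed can be sketched as a small script: fetch the `duplicate_of` JSON anonymously and look for the web service's redaction placeholder. This is a hedged sketch, not Launchpad's actual qa tooling; the sentinel string is an assumption about what lazr.restful returns for fields an anonymous user may not read, and the URL is the one quoted above.

```python
# Hedged sketch of the redaction check: fetch the JSON without credentials
# (i.e. as the anonymous user) and report which fields carry the redaction
# marker. REDACTED is an assumed sentinel value, not confirmed from source.
import json
import urllib.request

URL = ("https://api.qastaging.launchpad.net/devel/bugs/718213/"
       "duplicate_of?ws.accept=application/json")
REDACTED = "tag:launchpad.net:2008:redacted"

def redacted_fields(payload):
    """Return the names of top-level fields whose value is the marker."""
    data = json.loads(payload)
    return sorted(name for name, value in data.items() if value == REDACTED)

def fetch_anonymous(url=URL):
    """Fetch with no OAuth credentials attached."""
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")
```

Seeing `redacted_fields(fetch_anonymous())` come back empty would match StevenK's "still non-redacted" result here, pointing at caching or a non-anonymous session rather than the fix itself.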
[01:14] <StevenK> wallyworld_: You should be fine to put up a NDT in ~ 6 minutes.
[01:14] <wallyworld_> ok will do
[01:14] <StevenK> Just wait 6 minutes or so first and confirm the deployment report is green
[01:16] <wallyworld_> yep, finishing off something first anyway
[01:48] <thumper> wallyworld_: who is on holiday in Oz today?
[01:48] <wallyworld_> thumper: supposedly wgrant
[01:49] <wallyworld_> victoria has a holiday
[01:49] <wallyworld_> i think that's all?
[01:49] <thumper> south australia?
[01:49] <thumper> not perth?
[01:49] <wallyworld_> maybe, i'd have to check
[01:50] <thumper> wallyworld_: where are you looking?
[01:50] <wallyworld_> thumper: vic, act, sa, tas
[01:50] <wallyworld_> http://www.publicholidaysaustralia.com.au/
[01:51] <thumper> ta
[01:51] <wallyworld_> np
[01:51] <wallyworld_> i ask professor google :-)
[01:51] <thumper> Eight Hours Day?
[01:51] <thumper> really?
[01:51] <thumper> bwahaha
[01:52] <StevenK> How is that funny?
[01:52] <wallyworld_> thumper: i can't believe that one either
[01:52] <thumper> the name is funny
[01:52] <wallyworld_> stupid if you ask me
[01:52]  * thumper gives StevenK a sense of humour
[01:53] <wallyworld_> thumper: has the gtk/qt decision been made? seems qt is the way forward given some of the postings to the mailing list?
[01:54] <thumper> it isn't gtk/qt, but nux/qt
[01:54] <thumper> and no
[01:54] <wallyworld_> ok, thanks
[02:03] <wgrant> thumper: Eight Hours Day is only TAS. VIC isn't that insane :)
[02:12] <nigelb> wgrant takes holidays?
[02:25] <mwhudson> not from irc apparently
[02:31] <nigelb> heh
[03:14]  * StevenK kicks the qa-tagger
[03:15] <StevenK> staging is running r11434, but the deployment report only shows up to r11432.
[03:15] <StevenK> INFO:qa-tagger:Revision 11433 not deployed to LP-staging.
[03:15] <StevenK> INFO:qa-tagger:Revision 11433 not deployed to db-stable
[03:15] <StevenK> LIES!
[05:14]  * StevenK tries to work out why test_sharingservice breaks with his change.
[05:14] <StevenK> wallyworld_: ^ ?
[05:40] <wallyworld_> StevenK: what's your change?
[05:42] <StevenK> wallyworld_: http://pastebin.ubuntu.com/879931/ -- in the branch that creates APs when a product or a distro is created.
[05:45] <wallyworld_> StevenK: looks ok at first read. what's the error?
[05:47] <StevenK> wallyworld_: The final assert in _test_deletePillarSharee fails with 800 lines of JSON
[05:48] <StevenK> It's expecting USERDATA, but EMBARGOEDSECURITY is there instead.
[05:50] <wallyworld_> hmm. not sure. i'll make your change and have a look.
[05:52] <StevenK> wallyworld_: You'll need to merge in lp:~stevenk/launchpad/product-distribution-accesspolicy first
[05:53] <wallyworld_> ah, yeah. i'm in the middle of a branch. i'll shelve my changes
[05:55] <StevenK> wallyworld_: You'll also need to run make schema
[05:55] <wallyworld_> ok
[05:58] <wallyworld_> StevenK: just doing the merge and make schema, and running test_deleteProductShareeAll I get
[05:58] <wallyworld_> duplicate key value violates unique constraint "accesspolicy__product__type__key"
[05:59] <wallyworld_> i'll have to track that down unless you've seen it before?
[05:59] <wallyworld_> ah never mind, i need to add your changes i think
[06:00] <StevenK> Right
[06:02] <wallyworld_> and i see the json mismatch
[06:12] <StevenK> wallyworld_: Good to know it has stumped you too :-)
[06:12] <wallyworld_> it's not obvious, i'm looking through the code
[06:14] <wallyworld_> StevenK: i think i know what it is
[06:14] <wallyworld_> the access_policies and information_types lists were previously aligned
[06:15] <wallyworld_> in that one was made from the other
[06:15] <StevenK> Right
[06:15] <wallyworld_> now, access_policies comes from a db query
[06:15] <wallyworld_> so this will be wrong
[06:15] <wallyworld_>         another_person_data = self._makeShareeData(
[06:15] <wallyworld_>             another, information_types[:1])
[06:15] <wallyworld_> the test can no longer rely on the first info type being the one to use
[06:16] <wallyworld_> make sense?
[06:16] <StevenK> Yup
[06:16] <StevenK> I wonder if information_types = [ap.type for ap in access_policies] will fix it
[06:16] <wallyworld_> it may well
[06:16] <wallyworld_> since the lists would be "aligned" again
[06:16] <StevenK> Success!
[06:17] <wallyworld_> yay
[06:17] <StevenK> +3/-9. That's my kind of test fix.
[06:17] <wallyworld_> yeah
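The one-line fix arrived at above (derive `information_types` from the access policies the DB query returns, so the two lists stay aligned by construction) looks roughly like this in isolation. The `FakePolicy` stand-in is illustrative; the real objects are Launchpad access policy rows:

```python
# Minimal illustration of the alignment fix: build information_types from
# access_policies instead of keeping a separate list whose ordering can
# drift once access_policies comes from a database query.
class FakePolicy:
    """Stand-in for an access policy row; only .type matters here."""
    def __init__(self, type_):
        self.type = type_

# Order mimics what a DB query might return, not insertion order.
access_policies = [FakePolicy("EMBARGOEDSECURITY"), FakePolicy("USERDATA")]

# The fix from the conversation: aligned by construction, so tests can no
# longer be surprised by which info type comes first.
information_types = [ap.type for ap in access_policies]

print(information_types)  # ['EMBARGOEDSECURITY', 'USERDATA']
```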
[06:18]  * StevenK lp-lands this branch, finally
[06:18]  * wallyworld_ unshelves
[06:18] <StevenK> wallyworld_: Thanks for the help
[06:19] <wallyworld_> np.
[06:23] <wallyworld_> StevenK: there's code in sharePillarInformation() which creates access policies if they don't exist. i guess that can be deleted now too
[06:25] <StevenK> wallyworld_: In SharingService itself? No, that's still good for things like PROPRIETARY
[06:26] <wallyworld_> makes sense
[06:26] <wgrant> StevenK, wallyworld_: that should probably go away
[06:27] <wgrant> It's not yet clear how we'll create PROPRIETARY, but it certainly won't be implicit like that.
[06:27] <StevenK> Right, I'll drop it
[06:27] <wallyworld_> the code today only provides proprietary as an option if the project is commercial
[06:27] <wallyworld_> on the +sharing view i mean
[06:27] <wallyworld_> in the sharing picker
[06:27] <wgrant> Even so.
[06:28] <wallyworld_> sure, i wasn't saying it should be implicit, just reiterating what's there now
[06:28] <wgrant> We'll see how sinzui's entitlement stuff ends up.
[06:28] <wgrant> And then work out HTF to do this.
[06:28] <wallyworld_> yep
[06:28] <StevenK> So, dump it in this branch, or leave it for now?
[06:29] <wgrant> Kill, crush, destroy.
[06:30] <StevenK> Which may well end up creating more test failures.
[06:30] <wgrant> True, if it deals with proprietary.
[06:30] <wgrant> So perhaps leave it.
[06:30] <wgrant> If it causes failures.
[06:30] <StevenK> I don't know if it will or not
[06:31] <StevenK> I was willing to push it out and toss it at ec2 and see what happens
[08:52] <adeuring> good morning
[09:22] <StevenK> adeuring: Did you enjoy your OCR duties over the weekend? :-)
[09:23] <adeuring> StevenK: ;)
[12:07] <StevenK> Ahh, DST, you heartless bitch.
[12:07] <czajkowski> and hello there to you too StevenK
[12:07] <StevenK> Haha
[13:30] <deryck> adeuring, abentley -- I completely forgot my audio troubles.
[13:31] <deryck> adeuring, abentley -- I do however have everything else working :)
[13:31] <abentley> deryck: That's good.
[13:31] <abentley> deryck: Maybe the conference system?
[13:31] <deryck> abentley, yeah, let's do that.
[13:31] <adeuring> deryck: sounds good. I got a somewhat enigmatic error when I tried to upgrade to precise...
[13:33] <deryck> flacoste, can orange use your conference code for a conference call?
[13:33] <flacoste> deryck: sure, be my guest
[13:33] <deryck> flacoste, audio input not working from my upgrade yet.
[13:33] <deryck> flacoste, and thanks!
[13:33] <flacoste> deryck: did you report it?
[13:33] <deryck> abentley, adeuring -- use flacoste's conf number.
[13:33] <deryck> flacoste, I did not.  But I will definitely.
[13:33] <adeuring> deryck: ok
[13:33] <flacoste> deryck: asap is best if you want to have sound in precise :-)
[13:34] <deryck> flacoste, ok :)  I had other issues with lp and forgot about my audio troubles actually.
[13:51] <adeuring> abentley: let's use mumble
[13:51] <abentley> adeuring: okay.
[13:54] <abentley> adeuring: lp:~abentley/lazr.jobrunner/run-via-celery
[14:29] <abentley> adeuring: I think we should start using a wiki page to coordinate.  Cool?
[14:29] <adeuring> abentley: sounds good
[15:23] <abentley> deryck, adeuring: I've posted my current notes to a wiki page: https://dev.launchpad.net/CeleryJobRunner
[15:24] <deryck> abentley, nice, thanks
[15:29] <deryck> adeuring, abentley -- I'm going offline for lunch to run errands here shortly.  But will be back after my lunch.
[15:29] <deryck> adeuring, abentley -- and I have a working headset now :)
[15:30] <abentley> deryck: excellent!
[15:34] <adeuring> abentley: cool
[15:34] <abentley> adeuring: I forgot to talk about resources with you, but it seems we should stick to 2 worker processes, i.e. one per lane.
[15:35] <adeuring> abentley: ok
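A Celery deployment matching that decision, two worker processes with one per lane, might be configured along these lines. This is a sketch only: the queue names, broker URL, module layout, and worker invocations are assumptions, not lazr.jobrunner's actual settings.

```python
# Hypothetical Celery app for the two-lane job runner discussed above:
# one queue per lane, one worker process consuming each. All names here
# are illustrative.
from celery import Celery
from kombu import Queue

app = Celery("jobrunner", broker="amqp://guest@localhost//")
app.conf.task_queues = (
    Queue("job_fast"),   # low-latency lane with a short timeout
    Queue("job_slow"),   # lane for jobs that ran past the fast timeout
)
app.conf.task_default_queue = "job_fast"

# One worker process per lane, started e.g. as:
#   celery -A jobrunner worker -Q job_fast --concurrency=1
#   celery -A jobrunner worker -Q job_slow --concurrency=1
```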
[16:39] <jcsackett> sinzui: do you know if wallyworld_ had to do anything special to get his services work running on launchpad.dev? it's failing on post, but i don't see anything in any of his emails/docs.
[16:43] <sinzui> jcsackett, We had discussed a feature flag, but decided it was not needed now
[16:49] <jcsackett> ok, so i may be encountering a bug.
[16:49] <jcsackett> i'll investigate--it's blocking creating sample data. i can always use the factory if i don't find a solution soon.
[16:50] <jcsackett> sinzui ^
[16:50] <sinzui> :(
[17:55] <timrc> What needs to happen to subscribe a private team to a code branch? Do I need to be a member of that team?
[18:13] <sinzui> benji, do you have time to review https://code.launchpad.net/~sinzui/launchpad/entitlement-2/+merge/97074
[18:13] <benji> sinzui: sure
[18:15] <sinzui> benji, one moment. I forgot the dependent branch
[18:15]  * sinzui makes diff smaller
[18:17] <sinzui> benji, https://code.launchpad.net/~sinzui/launchpad/entitlement-2/+merge/97078 will show a smaller diff
[18:17] <benji> sinzui: ok
[18:20] <czajkowski> Is a package in a persons  ppa a valid dependency for a package being daily-built by launchpad?
[18:21] <sinzui> czajkowski, yes, but that relationship is managed via a different mechanism.
[18:22] <sinzui> czajkowski, The owner of the PPA can specify that the PPA depends on another using +edit-dependencies
[18:22] <czajkowski> ahh
[18:23] <czajkowski> sinzui: thank you
[18:23] <sinzui> czajkowski, recipes just create a source package from a branch, after that, the PPA rules that you are building into come into play...
[18:24] <sinzui> if two users are using the same recipe to build a branch, one might fail because they are building into different PPAs
[18:24] <czajkowski> ok think I'll be able to answer the question now
[18:32] <czajkowski> sinzui: any advice when people keep changing the status of a bug after you've already changed it once and left a comment and they keep reopening it
[18:37] <sinzui> czajkowski, I threatened suspension.
[18:38] <sinzui> czajkowski, is this bug 945503 that we are talking about
[18:38] <_mup_> Bug #945503: gcc-4.7 branch imports fails (timeouts) <Linaro GCC:Fix Released> <Launchpad itself:New> < https://launchpad.net/bugs/945503 >
[18:41] <czajkowski> yes it's really annoying the way the person keeps changing it after I went to find out the info from you and others
[18:41] <czajkowski> :/
[18:42] <czajkowski> and it's a co-worker as well
[18:43] <sinzui> czajkowski, doko is a Canonical employee. I deleted the LP task from the bug
[18:44] <czajkowski> sinzui: ahh didnt know I could do that
[18:44] <czajkowski> thanks
[18:45] <sinzui> czajkowski, you can delete an affects line for your projects if there will still be another affects line in the table.
[18:45] <sinzui> I do not think there is a bug in the code in either project for that bug. The bug is being misused
[18:46] <czajkowski> nods
[18:48] <benji> sinzui: I'm done with https://code.launchpad.net/~sinzui/launchpad/entitlement-2/+merge/97078
[18:49] <sinzui> thank you benji
[18:49] <benji> my pleasure
[18:52] <lifeless> flacoste: I'm crook today
[18:52] <deryck> yay!  I survived the small town equivalent of the DMV!
[19:01] <sinzui> deryck, Did you notice that all the pictures/frames on the wall were crooked?
[19:01] <sinzui> deryck, They want to keep you off balance so that you will sign away your soul.
[19:07] <salgado> benji, hi there.  if you have a couple minutes, I have a trivial branch up for review at https://code.launchpad.net/~salgado/launchpad/do-not-migrate-ubuntu-work-items/+merge/97088 :)
[19:07] <benji> salgado: sure
[19:18] <deryck> sinzui, ha! No, I didn't notice that. Just that what seemed to be clocks that hadn't been rolled forward for DST were actually clocks that never moved.
[19:18] <flacoste> lifeless: i have a dr appointment now, but i'll be online later this evening to catch up with bigjools, we could talk then otherwise tomorrow is also fine
[19:19] <sinzui> deryck, indeed
[19:23] <benji> salgado: your branch looks good, I had one small suggestion you might like
[19:24] <salgado> benji, oh, sqlvalues()!  I'd completely forgotten about that. will make the change you suggest, thanks!
[19:25] <benji> salgado: cool
[19:29] <salgado> benji, would you mind landing it for me as well? I've just set the commit message
[19:30] <benji> salgado: I would love to, but my VM for LP dev is currently horribly broken. :\
[19:30]  * benji moves fixing that up on his todo list.
[19:31] <salgado> benji, hmm, ok, will see if I find somebody.  thanks for the review, though :)
[19:31] <benji> my pleasure
[19:32] <salgado> sinzui, would you mind ec2-landing https://code.launchpad.net/~salgado/launchpad/do-not-migrate-ubuntu-work-items/+merge/97088 for me?
[19:33] <sinzui> I will do that now
[19:36] <salgado> thanks sinzui!
[19:46] <abentley> lifeless: Do you know anything about the amqp "immediate" flag?  I think it's what I want, but it's not behaving like I'd expect.
[20:13] <sinzui> abentley, Can you advise me about this? ec2 land does not work for me as of today: https://pastebin.canonical.com/62167/
[20:13] <sinzui> maybe I can run the tests and land via PQM manually
[20:14] <benji> sinzui: that looks somewhat like a failure that bit me because I wasn't running on Precise
[20:15] <sinzui> benji, I have been running precise for 3 months
[20:15] <benji> sinzui: then you deserve not to have this problem ;P
[20:23] <abentley> sinzui: It looks like the wrong kind of config object is being passed to SMTPConnection(); it now expects a config stack.  I'll look into why that's happening.
[20:24] <sinzui> abentley, I saw some bzr updates. do I need to get more maybe?
[20:24]  * sinzui runs update to see
[20:24] <abentley> sinzui: hmm, maybe.
[20:25] <abentley> sinzui: potentially a new bzr is now available on remote instances?
[20:25] <sinzui> bugger I see new bzr and bzrlib, but I cannot install them
[20:27]  * sinzui forces them to install
[20:33] <lifeless> flacoste: I've taken sick leave for today, lets catch up when I'm back on deck
[20:33] <abentley> sinzui: Here is an untested patch that may fix you: https://pastebin.canonical.com/62168/
[20:34] <abentley> sinzui: (It's for lp-dev-utils)
[20:34] <lifeless> abentley: what do you want to achieve?
[20:34] <sinzui> abentley, thank you very much
[20:34] <lifeless> abentley: (as in why are you using immediate?)
[20:35] <abentley> lifeless: I want to have two queues, and use one queue as an overflow when the other cannot consume tasks.
[20:35] <abentley> lifeless: i.e. tasks go to the slow queue unless it's full, in which case they go to the fast queue.
[20:36] <lifeless> the typical pattern for that in amqp is a topic exchange with different binds
[20:36] <abentley> lifeless: "full" meaning all worker processes are occupied.
[20:36] <lifeless> mm
[20:36] <lifeless> actually, I think you're probably over specifying the behaviour
[20:37] <lifeless> a key thing with message systems is that you don't - and shouldn't - know whats on the other end
[20:37] <lifeless> abentley: what problem are you trying to solve?
[20:37] <abentley> lifeless: I want to implement a fast lane and a slow lane, so that fast tasks complete with low latency and slow tasks complete eventually.
[20:38] <lifeless> abentley: is the work queue itself stored in the DB still, or in rabbit ?
[20:39] <abentley> lifeless: the job details are stored in the database, the jobs are dispatched, by id, using rabbit.
[20:39] <lifeless> abentley: if it is, then I'd suggest just using either a topic exchange, or two exchanges; have everything submitted for fast initially and the timeout mechanism resubmits to the other (topic|exchange)
[20:40] <lifeless> probably two exchanges makes the most sense here I suspect, as you may need topics to route to consumers on different machines
[20:40] <abentley> lifeless: At this point, the plan is to use the same machine.
[20:40] <lifeless> abentley: e.g. one exchange 'jobs-slow' one 'jobs-fast'
[20:41] <lifeless> abentley: The big picture vision should permit scaling to N machines, even if the first iteration won't, so selecting something with a decent growth path is relevant.
[20:41] <lifeless> bind all your slow works to jobs-slow, all your fast workers to jobs-fast
[20:41] <abentley> lifeless: trying fast first means you have to timeout slow jobs every time.  Trying slow first means you only have to timeout slow jobs if you get a slow job while another slow job is running.
[20:42] <lifeless> abentley: if you know it is slow, queue it to jobs-slow
[20:42] <abentley> lifeless: I don't know it is slow and I would prefer to operate with zero knowledge.
[20:43] <lifeless> I think then that this is a case of square peg, round hole. Your description now sounds more like 'I want N homogeneous workers and only M of them are allowed to run past the timeout at any one point in time'
[20:44] <lifeless> this is quite different to checking whether a slow queue is in use, because you might have your workers disconnected, which in your described model does not mean 'these things are fast'
[20:45] <abentley> lifeless: That is a reasonable summary.  I'd add that if a task does time out, it is only rescheduled such that it is permitted to run past the timeout next time it runs.
[20:45] <lifeless> ah, good point
[20:45] <lifeless> I really need to crawl back into bed and try and shake this illness
[20:45] <lifeless> uhm
[20:46] <abentley> lifeless: Sure.  I didn't realize you were sick when I asked.
[20:46] <lifeless> what will the timeout be?
[20:46] <lifeless> like
[20:46] <lifeless> if its 60 seconds
[20:46] <lifeless> or even 5 minutes
[20:46] <abentley> lifeless: it looks like 3 minutes.
[20:46] <lifeless> if we have a reasonable number of workers, and most jobs are fast, then queuing to slow first becomes an optimisation rather than a necessity
[20:46] <abentley> lifeless: 3 minutes lets 99% of BranchScanJobs and MergeProposalJobs complete without timing out.
[20:47] <lifeless> so
[20:47] <lifeless> my suggestion is to queue to fast first, and on timeout queue to slow, and revisit queueing-slow-first if the timeout setting needed to have 99% past-first-time grows unacceptably in the future
[20:47] <abentley> lifeless: Per Friday's discussion, we want to avoid using more than 2 cores on ackee.
[20:47] <lifeless> abentley: why is that ?
[20:48] <lifeless> (I only saw a minute or so of chatter)
[20:48] <abentley> lifeless: https://pastebin.canonical.com/62113/
[20:49] <lifeless> ah
[20:49] <lifeless> so jjo missed that you are not adding new work
[20:49] <lifeless> you are migrating work
[20:49] <lifeless> the question then is 'how much work on ackee is from "jobs"' not 'how much spare capacity does it have'
[20:50] <abentley> lifeless: We are migrating work and adding work.
[20:50] <lifeless> (and loganberry - we have two boxes servicing our jobs)
[20:50] <lifeless> abentley: what work is being added ?
[20:50] <abentley> lifeless: The added work is the slow lane.
[20:50] <abentley> lifeless: right now, we simply timeout at 5 minutes.
[20:50] <lifeless> abentley: that is 1% of the total by frequency, and you have an intentional concurrency cap on it
[20:51] <lifeless> abentley: so even if its huge, we can cap that at one core initially
[20:51] <lifeless> abentley: also jjo was considering ackee to have 4 cores, not the 8 that we effectively get
[20:52] <lifeless> abentley: so, I think you're completely safe allocating 4 cores to the new runner, migrating across and reevaluating
[20:53] <lifeless> abentley: a 1% increase in the fast work won't matter from this perspective, and the slow lane you can cap at one worker initially as a safety net
[20:53] <lifeless> abentley: how does that sound? Does it make requeuing less urgent in your opinion?
[20:54] <abentley> lifeless: IOW, giving everything a long timeout as we were planning to do with the slow lane?
[20:55] <abentley> lifeless: Actually, I don't understand.
[20:55] <abentley> lifeless: Do you mean just having a slow lane?
[20:56] <lifeless> I mean:
[20:56] <lifeless> allocate 3 workers to the fast lane, with a 3 minute timeout
[20:56] <lifeless> 1 worker to the slow lane
[20:56] <lifeless> queue to the fast lane initially, and on timeout requeue to the slow lane
[20:56] <abentley> lifeless: How do we implement fast lane/slow lane without requeueing?
[20:57] <lifeless> once we migrate across the existing jobs, reevaluate and possibly give the slow lane another core, and the fast lane another 2
[20:58] <abentley> lifeless: I think this would be an improvement over status quo.
[20:59] <lifeless> you were saying that the optimisation of queueing to slow first was needed due to a resource constraint on ackee; I believe the resources are substantially larger, once you consider: twice the number of effective cores and that the only *new* work is (fixed cores for the slow lane + 1% of fast lane jobs timing out at 3 minutes)
[21:00] <lifeless> So I'm suggesting that queuing to slow first is future work we may not need
[21:02] <abentley> lifeless: they seem to be similar effort, and slow first appears to give better behaviour.
[21:02] <lifeless> slow first depends on interrogating the status of the workers
[21:03] <lifeless> and would still require requeing when slow is saturated
[21:03] <lifeless> so AFAICT its purely additional development, and will have more corner cases.
[21:04] <abentley> lifeless: if the immediate flag worked, RabbitMQ would handle determining whether the message could be consumed immediately.
[21:04] <lifeless> this is an example of 'more corner cases'
[21:05] <lifeless> the immediate flag should return the message to you if there is no consumer available
[21:06] <lifeless> ok, I really have to crash; I really think queuing fast first is a better use of development time today - its a simpler system, doesn't have issues when dealing with upgrades and migrations, and can be transparently enhanced to do slow-first in the future if needed
[21:07] <lifeless> we have way too many frills in LP where we have complexity that doesn't justify itself, this *really* sounds like a case of that to me.
[21:08] <lifeless> so, I'm going to vamoose; please discuss this further with deryck / abel, or I'll be on deck in a couple days I hope.
[21:10] <abentley> lifeless: sure.
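The plan lifeless settles on (queue everything to the fast lane first; on timeout, requeue to the slow lane, whose single worker is permitted to run past the timeout) can be captured broker-free in a few lines. The 180-second value comes from the discussion above; the slow-lane limit, the `JobTimeout` signalling, and the plain-list queues are illustrative stand-ins for real AMQP queues and job objects.

```python
# Broker-free sketch of fast-first dispatch with requeue-on-timeout.
# 180s lets ~99% of BranchScanJobs/MergeProposalJobs finish (per the stats
# quoted above); SLOW_TIMEOUT is an assumed, much longer slow-lane cap.
FAST_TIMEOUT = 180
SLOW_TIMEOUT = 3600

class JobTimeout(Exception):
    """Raised when a job exceeds its lane's timeout."""

def dispatch(job, fast_queue):
    """Every job enters via the fast lane; no a-priori slow/fast knowledge."""
    fast_queue.append(job)

def run_fast_worker(fast_queue, slow_queue):
    """Drain the fast lane; anything that times out moves to the slow lane."""
    while fast_queue:
        job = fast_queue.pop(0)
        try:
            job(FAST_TIMEOUT)
        except JobTimeout:
            # A requeued job is merely *permitted* to run longer next time;
            # the slow lane's single worker caps concurrency of slow work.
            slow_queue.append(job)

# Example: a job needing ~10 minutes times out in the fast lane and is
# requeued to the slow lane.
def ten_minute_job(timeout):
    if timeout < 600:
        raise JobTimeout()

fast, slow = [], []
dispatch(ten_minute_job, fast)
run_fast_worker(fast, slow)
print(slow == [ten_minute_job])  # True
```

The slow-first variant debated above would additionally need to interrogate worker availability (what the AMQP `immediate` flag was meant to signal), which is exactly the extra corner case lifeless argues against taking on first.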
[21:29] <cr3> is there a different meaning between methods that start with search, like searchTasks, and those that start with find, like findPerson?
[21:32] <salgado> cr3, nope, this is just because (AFAIK) there's never been a naming standard for search method names
[21:41] <abentley> sinzui: did that patch work?
[21:58] <sinzui> abentley, yes it did
[21:58] <abentley> sinzui: great.  I'll land a properly-generalized fix tomorrow.
[23:00] <bigjools> morning
[23:56] <StevenK> lifeless: O HAI