[00:31] <wgrant> StevenK: Morning
[00:36] <StevenK> wgrant: O hai
[00:37] <wgrant> StevenK: Your maintainer mail thingy needs QA
[00:37] <wgrant> And then we can deploy.
[00:38] <StevenK> wgrant: Yeah, I have been pondering how best to QA that while I was breakfasting.
[00:39] <wgrant> It should be pretty simple
[00:39] <wgrant> As you've basically just deleted code
[00:44] <StevenK> wgrant: If you file a bug under https://qastaging.launchpad.net/auditor, I shouldn't get a notification about it
[00:48] <wgrant> StevenK: ~auditor-team still has a structsub
[00:51] <StevenK> wgrant: Removed
[00:53] <wgrant> StevenK: https://bugs.qastaging.launchpad.net/auditor/+bug/939498 <- the creation notification should have gone to auditor-team, wallyworld and me; the comment to wallyworld and me.
[00:53] <_mup_> Bug #939498: rhythmbox-metadata crashed with SIGSEGV in _start() <apport-crash> <i386> <precise> <rhythmbox (Ubuntu):New> < https://launchpad.net/bugs/939498 >
[00:56] <StevenK> wgrant: Noted. Still waiting for Thunderbird to choke down the staging inbox
[00:57] <wgrant> StevenK: You've not used wallyworld's script to empty it?
[00:57] <StevenK> I have, there's still 6800 mails in it
[00:59] <wallyworld_> did you use the param to set the time period to clear messages? perhaps they are recent ones not cleared by default
[00:59] <StevenK> Default is 2 days
[00:59] <wallyworld_> yes, so maybe those messages are < 2 days old
[01:00] <StevenK> Earliest message is 28/07 10:00
[01:01] <wallyworld_> which is the 2 day window
[01:02] <wallyworld_> so you need to specify to clear messages starting at 0 days ago
[01:54] <StevenK> wgrant: So it took qastaging long enough for the messages to hit the staging inbox. Three mails for the new bug -- and then one for the comment
[01:55] <StevenK> I guess wallyworld_'s structsub is close only
[01:56] <wgrant> Ah, yeah
[01:56] <wgrant> Makes sense.
[01:56] <wgrant> Sounds good to me.
[02:03] <StevenK> wgrant: I can put together a deployment for up to r15702 if you wish.
[02:04] <wgrant> StevenK: Indeed
[02:56] <wallyworld_> spm: i made a cake on the weekend - the one you sent me a picture of a couple of weeks ago. it turned out ok https://www.dropbox.com/s/irvxydgjw6eolcp/M1410015.JPG
[02:56] <wallyworld_> spm: i even saved you a piece https://www.dropbox.com/s/yxdksp1uhz450qf/M1410020.JPG
[02:56] <wallyworld_> so that should get me a few NDTs without begging
[02:57] <spm> wallyworld_: 1 spm: 0, so thoroughly outtrolled....
[02:58] <wallyworld_> well, this time anyway
[02:58] <spm> wallyworld_: and was it as good as it looks?
[02:58] <spm> it looks terribly rich.
[02:58] <wallyworld_> spm: i think so. ask bigjools
[02:58] <wallyworld_> spm:  it is very rich, about 1kg of sugar all up
[02:58] <spm> bigjools: ^^ your considered opinion?
[02:58] <spm> !!!!!!!!!!!!!!!!!!!!!!!!
[02:58] <wallyworld_> 900g maybe
[02:58] <bigjools> spm: it was so good that I didn't need dinner last night
[02:58] <spm> wallyworld_: was this bribery from your wife to get a cuddle with some new twins?
[02:59] <wallyworld_> spm: it was her birthday. it was bribery to get me a "cuddle"
[02:59] <spm> oic
[03:33] <lifeless> in other news
[03:33] <lifeless> pybars appears to have users.
[03:33] <lifeless> Who would have thought.
[03:33] <lifeless> check the download count on http://pypi.python.org/pypi/pybars
[03:34] <wgrant> Users or Google?
[03:35] <lifeless> I'm fairly sure not google.
[03:35] <lifeless> I've seen much lower counts for other projects.
[03:35] <lifeless> e.g. http://pypi.python.org/pypi/oops_datedir_repo - 370
[03:35] <lifeless> and that's still got users (folk @ Canonical to be sure, but virtualenv users...)
[05:28]  * StevenK stabs django
[05:29] <StevenK> Trying to make use of django.test.TestCase with it is just an exercise in futility
[05:38] <StevenK> ImportError: Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.
[05:38] <StevenK> SIGH
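The standard remedy for that ImportError is to point `DJANGO_SETTINGS_MODULE` at a settings module before anything from Django is imported. A minimal sketch, where `auditor.settings` is an assumed module path rather than necessarily what the auditor tree actually uses:

```python
import os

# Set the settings module before importing django.test.TestCase (or
# anything else that touches django.conf.settings); once Django has
# tried and failed to load settings it is too late.
# "auditor.settings" is an assumed module path here.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "auditor.settings")
```

`setdefault` means a value already exported in the shell (or by a test runner) wins over the in-code fallback.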
[06:37] <nigelb> StevenK: Wait, why are you using django?
[06:37] <StevenK> nigelb: https://launchpad.net/auditor
[06:38] <nigelb> ooh. Nice.
[06:38] <nigelb> also, yay django
[06:38] <nigelb> :P
[06:38]  * nigelb runs
[08:00] <adeuring> good morning
[08:56] <lifeless> wgrant: how much faster would you say the new bug search queries are?
[08:57] <wgrant> lifeless: 42
[08:57] <lifeless> wgrant: haha
[08:58] <lifeless> wgrant: really, you have no idea?
[08:58] <wgrant> Pretty much.
[08:58] <wgrant> I know they're not slower.
[08:58] <lifeless> darn
[08:58] <wgrant> Why?
[08:58] <lifeless> I like being able to say what we've done to make things faster.
[08:58] <wgrant> Yeah, certainly
[08:59] <lifeless> but if its 'no slower', its pretty hard to point at and say 'booyah'
[08:59] <wgrant> It's hard to measure 'cause of all the different cases, and the fact that the access control mechanism changed at the same time.
[08:59] <lifeless> I know :(
[08:59] <wgrant> I was also planning to do StormRangeFactory afterwards, but that didn't happen.
[08:59] <wgrant> That would make the biggest difference.
[08:59] <lifeless> doing less work :)
[09:00] <lifeless> do you know off hand how long say, 'firefox' in Ubuntu takes ?
[09:00] <lifeless> it used to be 3s or so hot to count IIRC
[09:01] <wgrant> Yeah, Ubuntu packages are where the biggest win is.
[09:01] <wgrant> Some of them will be more than 100x faster
[09:01]  * lifeless tries
[09:01] <lifeless> so close
[09:01] <lifeless> 1 → 75 of 111109 results
[09:01] <lifeless> just two more
[09:01] <lifeless> and it will be epic
[09:01] <wgrant> Heh
[09:02] <wgrant> 55ms for a count of firefox's open bugs
[09:02] <lifeless> oh, late eval faillll
[09:02] <wgrant> (could be much faster if we added another set of indices, to restrict to just open bugs. that was another plan alongside SRF)
[09:03] <lifeless> 762.013ms to count for all bugs, 116ms to get the batch
[09:04] <lifeless> 1327.692ms for the batch for firefox itself, 592.421ms to count.
[09:04] <wgrant> Er
[09:04] <lifeless> that looks like ~50% of the times we were seeing this time last year.
[09:04] <wgrant> What?
[09:04] <lifeless> wgrant: cold cache.
[09:04] <wgrant> On which instance?
[09:04] <lifeless> wgrant: 'firefox' FTI in 'ubuntu' context.
[09:04] <wgrant> Oh
[09:05] <wgrant> FTI
[09:05] <wgrant> You didn't say FTI :)
[09:05] <lifeless> yeah, I did, I said "'firefox' in Ubuntu"
[09:05] <wgrant> I thought you meant firefox in Ubuntu
[09:05] <wgrant> ie. https://bugs.launchpad.net/ubuntu/+source/firefox
[09:06] <lifeless> yeah, I saw that
[09:06] <lifeless> I would have quoted consistently for that
[09:06] <lifeless> e.g. not at all, or both.
[09:06] <wgrant> So yeah
[09:06] <wgrant> FTI and count are going to be roughly the same length
[09:07] <wgrant> Because they're pretty similar
[09:07] <wgrant> ie. both terrible
[09:07] <lifeless> we've switched index type already right ?
[09:07] <wgrant> But it's not as problematic as, say, the default Ubuntu listing
[09:07] <wgrant> Yes
[09:07] <wgrant> Where the batch of 75 takes a few ms.
[09:07] <wgrant> But the count takes a second
[09:07] <wgrant> Anyway, I need to eat.
[09:07] <wgrant> bbs
[09:43] <cjwatson> lifeless: Any opinions on ending the beta period of https://blog.launchpad.net/general/beta-test-asynchronous-ppa-package-copies and rolling that out to everyone?  There was one bug, namely bug 1026976, which I fixed a week ago; and I've had a number of positive reports.
[09:43] <_mup_> Bug #1026976: Archive:+copy-packages creates job which oopses when async-copying from private to public archive <oops> <qa-ok> <Launchpad itself:Fix Released by cjwatson> < https://launchpad.net/bugs/1026976 >
[09:43] <lifeless> cjwatson: what failure modes do you think it could have ?
[09:44] <bigjools> did you keep the synchronous exception for small numbers of packages?
[09:44] <maxb> The async stuff is pretty poor at giving you feedback when a copy fails
[09:45] <maxb> I guess I should have bugged, I just assumed it was unfinished for now
[09:45] <cjwatson> maxb: When did you last check?
[09:45] <cjwatson> bigjools: I haven't removed it yet, but I'd like to; that check is fairly bogus IMO
[09:45] <bigjools> it's stabbing in the dark, yeah
[09:45] <cjwatson> The code would be a lot more maintainable if it only had a single well-maintained path
[09:46] <lifeless> cjwatson: I'm hugely +1 on that.
[09:46] <cjwatson> lifeless: Failure to copy at all, I guess; the actual backend code is fairly well-tested
[09:46] <cjwatson> Or UI problems with presenting failures, but I thought that was fixed
[09:46] <bigjools> given that the backend forms part of ubuntu series opening :)
[09:47] <lifeless> cjwatson: sounds reasonable to expand it to me, long as you aren't going on leave tomorrow ;)
[09:47] <maxb> I seem to have lost the email concerned; but anyway; just displaying a message which says a copy failed, with no additional explanation, and emailing an oops code, isn't very nice
[09:48] <cjwatson> maxb: It's supposed to give an explanation of why - there's a screenshot example in the blog post above
[09:48] <cjwatson> E-mailing an OOPS code indeed isn't nice.  I thought I'd made it e-mail an actual error.
[09:49] <maxb> I didn't get an explanation string in the web ui
[09:49] <cjwatson> Did you click the expander arrow?
[09:49] <maxb> yes
[09:49] <cjwatson> If you know what package you were copying, and when, there's some faint possibility I might be able to investigate
[09:49] <lifeless> how long ago ?
[09:50] <maxb> ok, this was 6 days ago, and I've found the email now
[09:50] <maxb> Launchpad encountered an internal error during the following operation: copying a package.  It was logged with id OOPS-952a0272d05f462844490a94901414e9.  Sorry for the inconvenience.
[09:50] <maxb> was the full body text
[09:50] <cjwatson> One moment
[09:51] <cjwatson> Damn, it's expired
[09:51] <maxb> 24/07/12 23:52
[09:51] <cjwatson> Is it possible that this was bug 1023372?
[09:51] <_mup_> Bug #1023372: Direct-copying an already-published package OOPSes <lp-soyuz> <oops> <Launchpad itself:Triaged> < https://launchpad.net/bugs/1023372 >
[09:51] <maxb> Sounds quite likely actually
[09:52] <cjwatson> Because that's the one case I know of where the UI error would be entirely uninformative
[09:52] <cjwatson> The job log has expired from carob too
[09:53] <wgrant> Yeah
[09:53] <maxb> Although, I believe I was trying to copy to un-delete a deleted package, i.e. previously published, not already-published
[09:53] <wgrant> NFI why we have only 5 days of retention there
[09:54] <bigjools> cjwatson: one of my concerns is that all the PCJs go through a single job runner
[09:54] <lifeless> possibly no one has removed the cowboy after we gobbled all of neem's disk ?
[09:54] <bigjools> cjwatson: and ubuntu series opening will dos ppa copies
[09:54] <wgrant> bigjools: No.
[09:54] <wgrant> bigjools: Initiali[sz]eDistroSeries doesn't use PCJs
[09:55] <wgrant> It doesn't use the copier at all, in the Ubuntu case.
[09:55] <cjwatson> Indeed, it uses PCRs instead
[09:55] <bigjools> I clearly need more sleep
[09:55] <maxb> OK, I've reconstructed the thing I was copying, it was bzr-upload 1.1.0-2~bazaar1~precise1 to bzr/proposed (and then I just uploaded a new version when the copy failed)
[09:55] <bigjools> cjwatson: I still have the concern though
[09:55] <cjwatson> Stuff like Archive.copyPackage (e.g. SRUs, syncs from Debian) is our mass-testing of PCJs
[09:55] <bigjools> syncs will fuck it, yes
[09:56] <bigjools> which is what I was thinking of and yet somehow typed wrong
[09:56] <cjwatson> Syncs from Debian are the worst case I'm aware of; I think I've seen that go up to maybe half an hour of backlog, perhaps an hour
[09:56] <cjwatson> maxb: If you can set up a new example of it and get me an OOPS code, I'm interested
[09:56] <wgrant> (IDS doesn't actually use PCRs either -- it uses the cloner directly)
[09:56] <maxb> I'll see if I can retrace the process
[09:57] <maxb> But in a few hours, at this point - I need to go to work now
[09:57] <cjwatson> So production jobs run on, what, ackee and loganberry?
[09:57] <wgrant> cjwatson: Right.
[09:57] <wgrant> cjwatson: lp:lp-production-crontabs is the usual reference.
[09:58] <wgrant> You should be able to see that.
[09:58] <cjwatson> And process-job-source locks for any given job source so you can't run >1 on the same machine easily.
[09:58] <wgrant> Yeah, but that's easily fixable, as you've found
[09:58] <wgrant> IIRC we use a separate runner for the PCJs
[09:58] <cjwatson> That was more process-job-source-groups
[09:58] <wgrant> So IDS won't be a problem there.
[09:58] <cjwatson> If you mean bug 839659
[09:58] <_mup_> Bug #839659: process-job-source-groups.py will starve itself if run with multiple overlapping schedules <Launchpad itself:Triaged> < https://launchpad.net/bugs/839659 >
[09:59] <wgrant> If that's the one we filed after opening some series recently, indeed.
[09:59] <wgrant> It's not.
[09:59] <wgrant> But it may be the same thing.
[09:59] <cjwatson> I filed bug 991876 separately.  I'm not sure that it's the same.
[09:59] <_mup_> Bug #991876: initializedistroseriesjob starved by other jobs <derivation> <new-release-cycle-process> <Launchpad itself:Triaged> < https://launchpad.net/bugs/991876 >
[10:00] <wgrant> Right
[10:00] <cjwatson> No, not as such.
[10:00] <wgrant> They're pretty similar.
[10:00] <wgrant> But different scripts.
[10:00] <wgrant> Probably the same fix.
[10:00] <cjwatson> Anyway, the locking in process-job-source itself: what resource is it actually protecting?
[10:01] <wgrant> Launchpad maintenance engineers' jobs, mostly.
[10:01] <wgrant> It's just the normal script boilerplate.
[10:03] <cjwatson> Well, it does seem at least somewhat real.  JobRunner.runFromSource fetches a list of jobs first and then tries to run them all; if it's racing with another runner it's not clear to me that it will behave sensibly.
[10:04] <cjwatson> I suppose we could have a separate job source depending on whether you were copying to a distribution or a PPA, a bit like publishing.
[10:04] <wgrant> cjwatson: It should be fine.
[10:04] <wgrant> cjwatson: Doesn't it try to acquire the lease and skip if it can't, committing between jobs?
[10:05] <cjwatson> Oh, true.
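The lease-and-skip pattern wgrant describes can be sketched as follows; all four callables are hypothetical stand-ins for the real JobRunner plumbing, not its actual API:

```python
def run_jobs(jobs, acquire_lease, run, commit):
    """Race-tolerant job loop: each runner fetches a candidate list,
    but only executes a job whose lease it can take, committing
    between jobs so concurrent runners see each other's progress."""
    completed = []
    for job in jobs:
        if not acquire_lease(job):
            continue  # another runner claimed this job; skip it
        run(job)
        commit()
        completed.append(job)
    return completed
```

Two runners racing over the same candidate list then partition the jobs between them instead of double-running any.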
[10:05] <wgrant> Anyway, the solution to non-concurrent code is to make it concurrent and fix stuff that breaks.
[10:05] <cjwatson> Still, multiple runners won't magically avoid starvation, they'll just make queues clear more quickly.
[10:05] <wgrant> Yeah
[10:06] <wgrant> There's also celery
[10:06] <wgrant> I'm not sure what the queueing options are there
[10:06] <cjwatson> If the goal is for PPA copies never to be starved by distro copies, the answer is for those not to be in the same queue
[10:09] <lifeless> is the distro copy done as 20K separate jobs?
[10:09] <lifeless> Or one 20K long job ?
[10:09] <lifeless> or something in between ?
[10:09] <cjwatson> You mean syncs from Debian?
[10:11] <cjwatson> Something in between.  It uses Archive.copyPackages, which in principle could copy the whole lot at once, but in practice it does a little bit of advance checking for each package so it times out if asked to do too many in a single job; so our client for that bisects the list until it finds a set it can do in one go.  It tends to wind up being 100 or so per job.
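That bisection strategy could look roughly like this; `try_copy` is an assumed callable standing in for one `Archive.copyPackages` job attempt, returning False when the job would time out:

```python
def copy_in_batches(packages, try_copy):
    """Copy everything, shrinking the per-job batch until each job
    completes without timing out -- a sketch of the client behaviour
    described above, reconstructed from the description, not the code."""
    remaining = list(packages)
    copied = []
    while remaining:
        batch = remaining
        while not try_copy(batch):
            if len(batch) == 1:
                raise RuntimeError("even a single copy timed out")
            batch = batch[:len(batch) // 2]  # halve and retry
        copied.extend(batch)
        remaining = remaining[len(batch):]
    return copied
```

With a per-job ceiling of ~100 packages this converges on batches of about that size, matching the "100 or so per job" figure.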
[10:11] <cjwatson> Copying precise to quantal (say), as wgrant observes, is a different system not in the same queue as PPA copies.
[10:13] <lifeless> well, I mean this specific thing you're worried about starvation thereof
[10:13] <lifeless> celery lets us have multiple execution units for a single queue
[10:13] <cjwatson> Then the answer is that the worst case I'm aware of for that tends to be around 20x100.
[10:13] <lifeless> so, - meh.
[10:14] <lifeless> I wouldn't spend any time on starvation for such a small set
[10:15] <cjwatson> Duration-wise, I can't remember exactly but I think it took an hour or so to clear that.
[10:15] <lifeless> that's entirely tolerable (not desirable)
[10:16] <cjwatson> Mm.  There's no publicly-visible indication of queue length anywhere, which doesn't help.
[10:16] <lifeless> if you have time, enabling concurrent runners (w/celery) would be good, that would allow us to have e.g. 8 of them and have them mostly idle
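For reference, multiple execution units on one queue is just a worker concurrency setting in celery; the app module and queue name below are assumptions for illustration, not the real Launchpad configuration:

```shell
# Eight worker processes all consuming the same job queue, so a burst
# of copies drains in parallel while the workers sit mostly idle
# otherwise.  Module and queue names here are hypothetical.
celery -A lp.services.job.celeryjob worker --queues=package_copy_job --concurrency=8
```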
[10:17] <cjwatson> As far as I'm aware the celery plumbing is in place, but are there any jobs using celery for real yet?
[10:17] <lifeless> not on prod AFAIK, keeps failing qa
[10:17] <cjwatson> I wonder if it would be interesting for the in-progress notification of PPA copies to say how long the queue is, or something.
[10:18] <wgrant> lifeless, cjwatson: Branch scan jobs have been celery'd on prod for a few weeks
[10:18] <wgrant> It's some related followup work that I am chain-reverting.
[10:18] <bigjools> does any of this ever get announced?
[10:19] <bigjools> I try and keep up, but ...
[10:19] <wgrant> I only found out by finding that the logs were missing :)
[10:19] <bigjools> :/
[10:19] <cjwatson> <cjwatson@sarantium ~/src/canonical/lp-production-crontabs>$ grep -i celery *
[10:19] <cjwatson> <cjwatson@sarantium ~/src/canonical/lp-production-crontabs>$
[10:19]  * cjwatson wonders how to tell
[10:19] <wgrant> cjwatson: celery isn't a cronjob
[10:19] <wgrant> That's why it's better than cronjobs
[10:19] <cjwatson> Oh, right
[10:19] <wgrant> You can see celeryd-job.log on ackee
[10:20] <wgrant> https://launchpad.net/+feature-rules
[10:20] <wgrant> jobs.celery.enabled_classesdefault0BranchScanJob
[10:20] <wgrant> Well that went well.
[10:20] <cjwatson> Where do the runners log?
[10:20] <cjwatson> Oh, sorry, read up
[10:29] <bigjools> night all
[11:14] <cjwatson> Is it likely to be deliberate that PlainPackageCopyJob.createMultiple doesn't call celeryRunOnCommit for each of the new jobs?
[11:15] <wgrant> adeuring: ^^
[11:16] <adeuring> cjwatson, wgrant: I don't know -- abentley worked on that part
[12:34] <jam> cjwatson: did you ever get a chance to look at the Maverick filenames? Should I try to regenerate the list, or are you not going to be able to get to it for a while?
[12:34] <cjwatson> If you can generate something fresh, I can probably look this week, crappy Internet connection chaos notwithstanding
[12:35] <cjwatson> (My home internet connection has been down for six days)
[12:58] <deryck> Morning, all.
[12:58] <rick_h_> morning
[13:35] <deryck> rick_h_, ping for standup.
[13:35] <jam> dpm: just a quick ping about where translations is at, did you get a chance to look at the imports
[13:35] <rick_h_> deryck: ah sorry, didn't get invite
[13:37] <dpm> hi jam, I just came back from holiday, but some community members manually approved and fixed the imports in the meantime. Once I've finished catching up, we'll open translations. I need to sync up with the other translations coordinators, but I think we can open and announce tomorrow or wednesday
[13:44] <jam> dpm: sounds good to me
[13:45] <jam> just checking in on it
[13:45] <jam> I have no personal time frame in mind :)
[13:45] <dpm> ok, thanks :)
[14:03] <sinzui> mrevell: do you have time today to talk about Sharing?
[14:03] <mrevell> sinzui, I sure do. My two meetings this afternoon have been cancelled, so I'm wide open.
[14:04] <sinzui> lets talk in 25 minutes.
[14:04] <sinzui> ^ mrevell
[14:05] <mrevell> sinzui, ack, speak then
[14:05] <rick_h_droid> deryck sorry machine completely hung
[14:06] <deryck> rick_h_droid, np.  We ended standup.  Will ping you shortly to chat more about mockups.
[14:06] <rick_h_droid> ok
[14:24] <deryck> rick_h_ or rick_h_droid -- are you back up?
[14:24] <rick_h_> deryck: rgr
[14:24] <deryck> rick_h_, ok, let's chat in 5 then.  And bottom of hour.
[14:25] <rick_h_> deryck: k
[14:29] <deryck> rick_h_, let's use the standup hangout.
[14:56] <timrc> We've been having intermittent build failures due to connection reliability issues with LP... has anyone else been experiencing this problem? Was there a planned (or otherwise) service disruption around 2012-07-30 01:16:00 UTC ?
[14:57] <timrc> blah at my awesome editing skills
[14:59] <czajkowski> timrc: we were due to have disruption at 11am UTC today but it didn't go ahead
[15:32] <rick_h_> deryck: ping, got a sec to hop back on a hangout?
[15:43] <jam> cjwatson: devpad: ~jameinel/cleanup-librarian/maverick-2012-07-30.log.gz
[15:43] <jam> is a new list of files
[16:37] <cjwatson> Anyone want to comment on my suggested approach to bug 745799?
[16:37] <_mup_> Bug #745799: DistroSeries:+queue Timeout accepting packages (bug structural subscriptions) <timeout> <Launchpad itself:Triaged> < https://launchpad.net/bugs/745799 >
[18:01] <rick_h_> deryck: howdy
[18:01] <deryck> hey hey, rick_h_
[18:02] <rick_h_> deryck: got a sec?
[18:02] <deryck> rick_h_, oh, oops.  You asked while I was on another call and forgot to ping back.
[18:02] <deryck> rick_h_, sorry, dude.  let me grab headset and we can go now.
[18:02] <rick_h_> yea, np. Know it's busy but need to ask a few ?
[18:02] <rick_h_> ty much
[18:03] <abentley> rick_h_: btw, it looks like we will prioritize private blueprints over private questions.
[18:04] <rick_h_> k
[18:11] <deryck> rick_h_, I think you is froze again :)
[18:27] <rick_h_> sinzui: ping, have time for a 2min sanity check hangout?
[18:32] <sinzui> rick_h_, yes
[18:41] <sinzui> rick_h_ https://qastaging.launchpad.net/launchpad/+admin
[18:48] <sinzui> rick_h_ http://www.youtube.com/watch?v=Vq6VsKuFY_o
[18:48] <rick_h_> sinzui: ty much
[19:14] <abentley> benji: Could you please review https://code.launchpad.net/~abentley/launchpad/longer-mq-timeout/+merge/117324 ?
[19:18] <benji> abentley: sure
[19:49] <benji> abentley: done with your branch
[19:51] <abentley> benji: ty
[20:05] <benji> abentley: my pleasure
[20:22] <sinzui> benji: do you have time to review https://code.launchpad.net/~sinzui/launchpad/register-project-maintainer/+merge/117335
[21:02] <maxb> sinzui: In http://blog.launchpad.net/general/project-maintainer-can-see-private-bugs, people can lose access, not loose it :-)
[21:02] <sinzui> maxb: alas. I have lost it
[21:02] <sinzui> I will try to correct the blog
[22:18] <wgrant> abentley: Hi, https://bugs.launchpad.net/launchpad/+bug/1018235 needs QA -- can you do it or explain what needs testing?
[22:18] <_mup_> Bug #1018235: TestRunMissingJobs.test_find_missing_ready is disabled due to spurious failures <qa-needstesting> <spurious-test-failure> <test-system> <Launchpad itself:Fix Committed by abentley> < https://launchpad.net/bugs/1018235 >
[22:52] <lifeless> wgrant: where is the sso decoupling at?
[22:53] <wgrant> lifeless: Well, I shelved it while waiting for them to land my first branch
[22:53] <wgrant> lifeless: It took 3 months
[22:53] <wgrant> And then I forgot to pick it up again
[22:53] <wgrant> Maybe I should.
[22:57] <lifeless> there is an RT at the moment around sso staging w teams not there etc etc
[22:57] <lifeless> it would help, I think :)
[23:01]  * StevenK stabs mumble
[23:03] <lifeless> sinzui: how goes the blockage?
[23:59] <StevenK> wgrant: So I do think test_information_type_does_not_leak is a bong test. It structurally subscribes the product owner, creates a userdata bug for that product and then asserts they aren't notified.