[00:05] <bdmurray> sinzui: the +subscriptions mp is ready for rereview now - https://code.edge.launchpad.net/~brian-murray/launchpad/limited-subscriptions-page/+merge/35177
[00:14] <jml> I'm so behind on email.
[00:15] <mwhudson> jml: i suggest putting a small weight on your 'y' key
[00:15] <jml> mwhudson: I much prefer '['
[00:16] <jml> mwhudson: even so, I've still got 50+ threads marked as "actually, I should read this" and another 20+ as "should answer"
[00:16] <mwhudson> ah ok, that's a harder problem
[00:17] <jml> yes.
[00:17] <jml> I think the only answer is to never concentrate ever again
[00:18] <jml> (unless an email demands that I do so)
[00:29] <jkakar> jml: Remove the label from all the emails.  Important things that you should have paid attention to will either (a) get taken care of naturally or (b) spiral out of control and explode.
[00:29] <jkakar> jml: It'll be easy to see which things need attention. :)
[00:30] <jkakar> jml: I've been enjoying the labelling system you described to me... I find it really hard to be disciplined to actually do the "and get back to deal with labelled items" part, though.
[00:31] <jkakar> Much of the time I procrastinate and then, when I get back to the items, they're no longer relevant.  It's quite an effective technique.  I'm very efficient.
[00:36] <thumper> lunch calls
[00:46] <wgrant> buildd-manager seems broken.
[00:47] <wgrant> Can someone check its logs?
[00:47] <wgrant> Did the CP end up happening?
[00:53] <wgrant> StevenK: ^^
[00:53] <poolie> thumper: i have a deal for you
[00:55] <poolie> thumper: get someone from your team to review and sponsor jam's lp-serve branch, and then he can move on to commit to stacked branches
[01:12] <wgrant> sinzui: Does the move of tales to lp.app imply that we're not going to split it?
[01:12] <sinzui> no
[01:12] <wgrant> Ah, good.
[01:13] <sinzui> 25% belongs in lp.app, 50% registry
[01:15] <thumper> poolie: ok
[01:15] <thumper> poolie: deal
[01:20] <rockstar> thumper, skype?
[01:20] <thumper> rockstar: sure, let me finish my roll first
[01:25] <thumper> rockstar: ok, now
[01:31] <wgrant> sinzui: Are you sure that users should be granted access to see *everything*? Even embargoed security bugs?
[01:32] <sinzui> wgrant, I am not sure
[01:37] <lifeless> In the context of an already private project?
[01:37] <lifeless>  / distro
[01:37] <wgrant> Right.
[01:38] <sinzui> lifeless I think the issue can be generalised to all projects and distros.
[01:38] <sinzui> If I grant view on a public project with private bugs and branches, I assume in most cases I mean to let the user see everything
[01:39] <sinzui> but in the case of security (which is security contact, not bug supervisor) we may want to require subscription
[02:47] <wallyworld__> thumper: ping
[02:50] <rockstar> wallyworld__, I understand you're wanting to do some YUI stuff.
[02:51] <wallyworld__> rockstar: yeah. do you want to take a peek at https://code.edge.launchpad.net/~wallyworld/launchpad/improved-broken-link-handling
[02:51] <wallyworld__> and tell me i'm doing it all wrong
[02:51] <wallyworld__> we can skype after you've had a look?
[02:52] <rockstar> wallyworld__, okay, I'll take a look.  I was actually going to say "we can skype tomorrow and chat about it"
[02:53] <wallyworld__> rockstar: tomorrow is fine
[02:53] <rockstar> wallyworld__, as it stands, this is my wife's night off this week, and if I don't watch Madmen with her, I will probably find the locks are changed the next time I leave the house.
[02:53] <wallyworld__> rockstar: np. we can chat tomorrow. i forgot it was way past eod for you, sorry
[02:54] <rockstar> wallyworld__, my EOD is kinda fuzzy some days.  :)
[02:54] <wallyworld__> rockstar: me too :-) i was up late last night figuring out some of the YUI stuff that i'm playing with now
[02:55] <wallyworld__> hopefully i'm not totally on the wrong track. :-)
[02:55] <rockstar> wallyworld__, I'm still trying to find the right track with YUI.  :)
[02:56] <wallyworld__> well, i just tried to see how other stuff had been done and go with the flow. anyway, have a good time watching madmen. talk tomorrow
[02:56] <wgrant> mwhudson: Does Shotwell realise that it doesn't own your photos?
[02:56] <mwhudson> wgrant: what do you mean?
[02:57] <wgrant> mwhudson: F-Spot likes to copy the photos into its local library and store all metadata somewhere other than the EXIF tags.
[02:57] <wgrant> Which makes it impossible to use across multiple machines.
[02:57] <mwhudson> wgrant: i think it's better about the copying (when you import it asks if you want to copy them)
[02:57] <mwhudson> wgrant: dunno about tags
[03:00] <thumper> wallyworld__: hey
[03:02] <wallyworld__> thumper: just wanted to skype at some point about the new work. when you are free
[03:02] <thumper> wallyworld__: I'm never free :)
[03:02] <thumper> cheap maybe, but not free
[03:02] <thumper> wallyworld__: let me get a small bit of work done
[03:02] <thumper> then we can talk
[03:02] <wallyworld__> ack. i'll have lunch
[03:08] <thumper> ProgrammingError: operator does not exist: text = bytea
[03:08] <thumper> has anyone been fixing these on trunk yet?
[03:08] <lifeless> no
[03:08] <mars> thumper, bug 631010
[03:08] <_mup_> Bug #631010: ProgrammingError: operator does not exist: text = bytea <database> <maverick> <storm> <Launchpad Foundations:In Progress by mars> <https://launchpad.net/bugs/631010>
[03:08] <mars> thumper, working on it right now
[03:08] <lifeless> that reminds me to talk to doko or similar about it
[03:08] <thumper> is that the package downgrade hack?
[03:08] <lifeless> yes
[03:09]  * thumper sighs
[03:09] <lifeless> mars: if the fix is 'scatter u around the place' then its going to make the code base really ugly.
[03:09] <lifeless> mars: so please don't do that ;)
[03:09] <mars> lifeless, https://bugs.edge.launchpad.net/launchpad-foundations/+bug/631010/comments/4
[03:09] <_mup_> Bug #631010: ProgrammingError: operator does not exist: text = bytea <database> <maverick> <storm> <Launchpad Foundations:In Progress by mars> <https://launchpad.net/bugs/631010>
[03:10] <lifeless> mars: do you mean to say you're putting the older version in the ppa ?
[03:10] <lifeless> mars: if so cool, but can you please file a bug on maverick at the same time?
[03:10] <thumper> mars: do you have the downgrade command handy?
[03:11] <mars> thumper, no, sorry, in the thread, look for "aptitude hold python-psycopg2", maybe "apt-get install python-psycopg2=2.0.13-2ubuntu2"
[03:11] <mars> just a guess
[03:13] <mars> lifeless, what do you mean "file a bug against maverick"?  Like bug 585704?
[03:13] <_mup_> Bug #585704: Storm test suite fails when using psycopg2 2.2 <Storm:Fix Committed by jamesh> <https://launchpad.net/bugs/585704>
[03:13] <lifeless> no
[03:13] <lifeless> I mean that psycopg2 in maverick is buggy
[03:14] <lifeless> this change (presumably from upstream) is -not safe- to make in python 2.x modules.
[03:14] <lifeless> python 2.x is hopelessly confused about bytes vs text, and I can pretty much guarantee that every python webapp on pgsql will be fucked by maverick.
[03:14] <mars> lifeless, may I suggest you file the bug then, if you know the implications and the people involved?
[03:14] <lifeless> I don't know the people.
[03:14] <lifeless> but sure
[03:16] <wgrant> OTOH we could just fix LP.
[03:17] <thumper> sudo dpkg -i --force-downgrade python-psycopg2_2.0.13-2ubuntu2_amd64.deb
[03:17] <mars> ah
[03:17] <thumper> that's what I did (for future reference)
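Alongside thumper's command, pinning the downgraded package keeps a later `apt-get upgrade` from reinstating the broken version. A sketch only, using the version string quoted earlier in the thread; the pin file name is arbitrary:

```shell
# Write an apt pin for the known-good psycopg2 (version taken from the
# thread above; adjust if yours differs).
cat > psycopg2-pin <<'EOF'
Package: python-psycopg2
Pin: version 2.0.13-2ubuntu2
Pin-Priority: 1001
EOF
# Then, as root:
#   cp psycopg2-pin /etc/apt/preferences.d/psycopg2-pin
#   apt-get install python-psycopg2=2.0.13-2ubuntu2
```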
[03:17] <lifeless> wgrant: it will, I guarantee, bite us for months.
[03:17] <lifeless> wgrant: its unbelievably hard to be 'correct' in this regard in python 2.
[03:17] <lifeless> even if you just use the core built in things like 'Exception'
[03:18] <wgrant> lifeless: It is hard, but Storm already forces us to be pretty much correct.
[03:18] <wgrant> There are only a few places where we're not.
[03:18] <lifeless> wgrant: really?
[03:18] <mars> so does anyone know if simply copying the package from Lucid to Maverick in the LP PPA would work?
[03:18] <lifeless> wgrant: so < 2 hours work?
[03:18] <wgrant> lifeless: Storm will refuse str vs unicode mismatches.
[03:18] <wgrant> lifeless: So it's only where we're using string-based SQL that there are problems.
[03:18] <lifeless> wgrant: thats only one form of incorrectness
[03:19] <lifeless> wgrant: if its so transparent, presumably storm can fix it transparently?
[03:19] <wgrant> lifeless: Storm is pedantic in the way you say psycopg2 shouldn't be.
[03:19] <wgrant> And has been since day 1.
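The pedantry being discussed is easiest to see in Python 3 terms, where the str/bytes split that Python 2 blurred is explicit. A minimal illustration, not Launchpad code:

```python
# In Python 2, str ('ubuntu') and unicode (u'ubuntu') compared equal,
# which let byte strings leak into SQL parameters unnoticed; the newer
# psycopg2 then maps them to bytea rather than text, producing the
# "operator does not exist: text = bytea" errors above.
# Python 3 makes the distinction visible: bytes never equal str.
name_text = 'ubuntu'     # what a TEXT column comparison needs
name_bytes = b'ubuntu'   # what would now be sent as bytea

print(name_text == name_bytes)  # False: the types simply don't match
```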
[03:19] <mars> wgrant, maybe you know if the package can simply be copied?
[03:20] <lifeless> wgrant: then how can this issue exist? we pass 99% of everything through storm
[03:20] <wgrant> lifeless: Pillar name retrievals don't go through Storm.
[03:20] <lifeless> mars: if wgrant is correct just fixing LP would be better?
[03:20] <wgrant> eg. getUtility(IDistributionSet)['ubuntu'] doesn't.
[03:20] <lifeless> wgrant: they don't?
[03:20] <wgrant> Because of the pillar name mess.
[03:20] <lifeless> what does it go through ?
[03:21] <mars> lifeless, I think henning looked at that, but he "stopped doing this when I hit the first decoding errors. :("
[03:21] <mars> He was putting u'' or unicode() around the query parameters in the test suite
[03:21] <wgrant> lifeless: IPillarNameSet, which constructs parameterised SQL manually.
[03:21] <lifeless> wgrant: but that still goes through storm
[03:22] <lifeless> we ask storm to run it and storm asks psycopg to run it
[03:22] <wgrant> Right.
[03:22] <lifeless> so I repeat, why can't storm fix this?
[03:22] <poolie> does "fix released locking" mean that once a bug is fixreleased, it can't leave that state?
[03:22] <lifeless> actually lets back off
[03:22] <lifeless> poolie: I think so
[03:22] <wgrant> Because Storm is even more pedantic than psycopg2. It would go against its normal policies to fix it.
[03:22] <lifeless> wgrant: We have two competing theories.
[03:22] <lifeless> wgrant: one is that its shallow.
[03:23] <lifeless> wgrant: the other is that its complex and contains surprises.
[03:23] <wgrant> mars: You should be able to get away with copying the Lucid *primary archive* version of psycopg2 to Maverick in the PPA.
[03:23] <lifeless> wgrant: The data we have supporting it being shallow is nonexistent.
[03:23] <lifeless> wgrant: its speculation based on the changes storm needed itself to work.
[03:23] <mars> wgrant, ok, that would solve the Py2.5 dep problem by being for Py2.6.  I'll give it a shot
[03:24] <wgrant> I was able to fix lots of it just by hacking the celebrity descriptors to unicode() their names.
[03:24] <wgrant> mars: We shouldn't have psycopg2 for Lucid in the PPA at all.
[03:24] <lifeless> wgrant: the data we have suggesting it being complex is - decode errors being thrown when putting u before strings was attempted.
[03:24] <lifeless> plus how freaking hard it was to get bzr roughly sane, and the testtools stack similarly.
[03:25] <lifeless> wgrant: I humbly submit that you're being optimistic, unless you have more data you haven't presented.
[03:25] <wgrant> The thing is that LP is already very close, because Storm and Zope force it to be.
[03:25] <mars> poolie, I wouldn't be surprised.  There must be a fair bit of thought behind a request like that.
[03:25] <lifeless> wgrant: but they don't
[03:25] <lifeless> Foo.name=='thing' <- clearly not 'correct'
[03:25] <lifeless> Foo.name==u'thing' <- clearly ugly. Vive la Python tres.
[03:26] <wgrant> Well, let's get Zope running on 3.2 :P
[03:26] <lifeless> wgrant: and 'thing' works at the moment in lucid.
[03:26] <lifeless> wgrant: whyever its not working in maverick is a regression in the stack.
[03:26] <lifeless> in python 3 it would be correct, and historically in python 2 it is also correct.
[03:27] <wgrant> Has anybody talked to the psycopg2 folks about this?
[03:29] <lifeless> meh, mere facts.
[03:29] <lifeless> https://bugs.edge.launchpad.net/ubuntu/+source/psycopg2/+bug/650777
[03:29] <_mup_> Bug #650777: operator does not exist: text = bytea <amd64> <apport-bug> <maverick> <psycopg2 (Ubuntu):New> <https://launchpad.net/bugs/650777>
[03:31] <lifeless> http://comments.gmane.org/gmane.comp.python.storm/1256
[03:31] <lifeless> its also partly storm
[03:31] <lifeless> treating str as a bytea
[03:32] <lifeless> which is arguable either way as 'correct' in Python2 - but clearly isn't pleasant to interoperate with.
[03:34] <poolie> is there any systematic preference towards having one feature rules page that appears either editable or not, vs having a separate edit page to which only some people have access?
[03:34] <poolie> launchpad tends towards the latter but the former seems to have less dry
[03:34] <poolie> less repetition, i mean
[03:37] <mtaylor> Can't open average time db /var/debbuild/avg-build-times
[03:37] <mtaylor> Can't open average space db /var/debbuild/avg-build-space
[03:37] <wgrant> mtaylor: That's normal.
[03:46] <mtaylor> wgrant: then why did this fail
[03:46] <mtaylor> wgrant: http://launchpadlibrarian.net/56710656/buildlog_ubuntu-lucid-amd64.drizzle_2010.09.1797-1.1~lucid01_BUILDING.txt.gz
[03:47] <wgrant> That looks like an upload failure.
[03:47] <wgrant> Can you link me to the build?
[03:48] <wgrant> Ah, there.
[03:48] <mtaylor> wgrant: ah yes
[03:48] <mtaylor>  -> http://launchpadlibrarian.net/56710652/57D507LpzoCnzjO9QXav5M64BhB.txt (duplicate key value violates unique constraint "binarypackagerelease_binarypackagename_key"
[03:48] <wgrant> Yes.
[03:48] <wgrant> Which means a bug has resurfaced.
[03:48] <wgrant> Sigh.
[03:48] <wgrant> Your build is fine; just ignore the failure.
[03:49] <wgrant> It already finished once, then magically got illegally retried.
[03:53] <mtaylor> ok. cool
[03:53] <mtaylor> well, glad I could help find a bug :)
[03:55] <lifeless> poolie: I don't care either way.
[03:56] <wgrant> mtaylor: With any luck it will be fixed after tonight's cherrypick.
[03:56] <mtaylor> w00t
[03:56] <wgrant> It replaces what I suspect to be the problematic code, for other reasons.
[04:30] <StevenK> lifeless: Ping
[04:32] <lifeless> hi
[04:33] <StevenK> lifeless: Re: your mail about poppy, is there anything you need me to fix?
[04:34] <lifeless> it was meant to educate
[04:34] <lifeless> I knew you and jml collaborated on that stuff
[04:34] <lifeless> and this got through your pair programming sessions.
[04:35] <StevenK> lifeless: I'm happy for education and the chance to clean up after my mistakes
[04:36] <lifeless> StevenK: my 'test' branch has fixes.
[04:37] <lifeless> StevenK: I appreciate the offer to clean up, but there's no need to handoff here.
[04:41] <lifeless> I'll be back around later
[05:18] <stub> lifeless: Are you hacking layers today?
[05:33] <lifeless> stub: lp:~lifeless/launchpad/test has layer stuff in it and is in ec2land atm
[05:33] <lifeless> stub: got a request?
[05:33] <stub> I didn't want to conflict.
[05:34] <lifeless> if you check that you don't conflict with my branch, it will be fine.
[05:34] <lifeless> what are you doing ?
[05:34] <stub> I think I might have pushed too early - pqm is currently in process of landing by branch, and I pushed new revisions. Have I borked it or will it only merge revisions pushed when it started processing?
[05:35] <stub> Making the thread check more robust - there is a codehosting test that occasionally fails, so I'm giving garbage threads a few seconds to shut down and gc.collect() to help them along
[05:36] <stub> Uncommit and push overwrite before PQM finishes processing?
[05:37] <wgrant> Doesn't the signed request include a rev ID?
[05:38] <stub> No it doesn't
[05:38] <stub> lifeless: ^^
[05:39] <wgrant> Well. That's a bit strange.
[05:39]  * stub tries the push overwrite before it is too late
[05:39] <stub> bah - something else has it locked
[05:39] <lifeless> it will only merge what it found when it started.
[05:40] <stub> ok. So I can get this changed reviewed and check for conflicts :)
[05:40] <lifeless> stub: any reason we can't just remove the thread check? what does it actually help us with? Or make it like the bzr one (reports at the end on tests that leaked)
[05:41] <stub> lifeless: We can drop the thread check if we don't care about tests leaving threads lying around. I've mentioned that option when that has come up in the past, but people have always opted for fixing the source of the issue.
[05:41] <lifeless> stub: I quite like what bzr does:
[05:41] <lifeless>  - gathers threads
[05:41] <lifeless>  - doesn't fail, but reports on the tests that leaked.
[05:41] <lifeless> its a mix of both worlds.
[05:42] <lifeless> I very much like fixing the source of issues, but a thread not being gone by the time a test finishes isn't prima facie an issue
[05:42] <lifeless> bbs
[05:42] <stub> A test can turn the fail into a warn if it wants, or we could set it globally by changing the flag in BaseLayer.
[05:50] <poolie> any guesses why my validate method is getting an empty 'data' dict?
[05:52] <poolie> i guess the field is not correctly seen as a widget
[06:02] <poolie> jtv i realized i can use validate() to give a better syntax error message
[06:02] <jtv> poolie: yes, that's a good place for it.
[06:02] <lifeless> stub: so rather than adding multi second delays
[06:02] <lifeless> stub: lets make it a warning
[06:03] <lifeless> stub: globally
[06:03] <stub> lifeless: Tests rarely hit this - we will be delaying maybe one in 10 test runs a few seconds
[06:03] <jtv> poolie: I see no excuse for the field not to be in the data dict… does the form use GET instead of POST or something?
[06:03] <stub> I'd like to land it as is because the revision did get through PQM :-/
[06:04] <lifeless> stub: thats fine, but perhaps you could follow up with more stuff?
[06:05] <stub> So the sleep is still valuable as it is 1) unreasonable to expect tests to wait on every thread spawned, particularly if they were not responsible for spawning it 2) sometimes threads can take a while to shut down.
[06:05] <stub> Are both those points reasonable?
[06:05] <lifeless> mmm
[06:06] <lifeless> so we had a period in bzr where we tried a few different things
[06:07] <stub> We can of course make the sleep less - maybe 100 checks at 0.1 second intervals rather than 10 at 1 second?
[06:08] <stub> If it is just the thread didn't get allocated enough cpu to shutdown, it is likely sorted in milliseconds.
[06:08] <lifeless> so, to cleanup threads, they *have* to be joined
[06:09] <poolie> jtv i've solved it now
[06:09] <jtv> poolie: what was it?
[06:10] <poolie> i had handcrafted html for the textarea and i should have let launchpad generate it
[06:10] <poolie> well, zope
[06:10] <poolie> then the field name would have matched what it expected
[06:10] <jtv> ah ah ah
[06:10] <jtv> So your data wasn't empty—you just weren't looking for the right key!
[06:11] <stub> lifeless: I looked for a way to do that and couldn't find it. A second check now and it looks like I *can* iterate over the new threads and join them with a timeout.
[06:12] <lifeless> stub: well, more importantly, I think this invalidates 1)
[06:12] <lifeless> test code is either directly responsible for spawning the thread, or its using an api which must have a facility to join its own threads
[06:13] <stub> Yes. I think we can make BaseLayer wait for the threads to terminate for a period of time. I think it is worth failing the test if they don't die, as it is a leak.
[06:13] <lifeless> or its something like a thread pool which reasonably wants to stay active.
[06:13] <poolie> jtv, it was empty, but that's because zope code doesn't pass through unrecognized input parameters
[06:14] <poolie> my god it's a bit disgusting how many http requests there are per page in test mode
[06:14] <stub> We have a flag to turn the fail into a warn. I think it is worth keeping it as a fail until we actually have threads that should remain active across tests.
[06:14] <poolie> i realize in production some things are rolled up into larger resources
[06:14] <lifeless> if we want to catch incidental spawned threads that leak, we either need an API to say that some threads *are* expected, or we have to ensure that all tests clean up everytime
[06:15] <jtv> poolie: a lot of those will be for icing that isn't served by LP
[06:15] <lifeless> setting up a thread pool is both reasonably cheap and also reasonably expressed as a fixture for sharing across tests.
[06:15] <lifeless> so I think we can say 1) it is reasonable to require tests to join all threads they cause (in)directly to be spawned.
[06:15] <jtv> poolie: but it'd be great if we could reduce the requests and db queries we see on dev systems, yes
[06:16] <lifeless> stub: with that assertion, 2) becomes irrelevant,
[06:16] <lifeless> stub: what do you think?
[06:17] <stub> Yes, 2 becomes irrelevant.
[06:18] <stub> 1) is still relevant though, as threads could be spawned by third party libraries like bzrlib.
[06:18] <lifeless> does my argument for revising (1) make sense to you?
[06:18] <stub> No.
[06:18] <stub> If bzrlib starts spawning a new thread where previously it didn't, it is unreasonable to update all the LP tests to check for this new thread to terminate.
[06:19] <stub> or zope or psycopg or whatever
[06:19] <lifeless> But a test fail is a test fail.
[06:19] <jtv> poolie: perhaps you can help me with the traceback you see near the end of this "make harness" session: https://pastebin.canonical.com/37820/
[06:19] <lifeless> stub: I mean, I agree, thats why I was suggesting warning only initially;
[06:20] <stub> The layer will wait for the thread to terminate if it takes time (the spurious failures I'm trying to address), and if it doesn't terminate in a few seconds, then it fails.
[06:20] <jtv> poolie: that's on the staging db.  Why the lp-internal URL scheme, and why does convert_path_to_url choke on it?
[06:20] <stub> warnings just get ignored, and we are left with a test isolation leak
[06:21] <lifeless> I think warnings that are very clear are less likely to be ignored
[06:22] <lifeless> 'test foo leaked 4 threads' is pretty clear.
[06:22] <lifeless> stub: waiting for e.g. a thread pool thread to terminate is likely to just timeout, only a few threads are ones that will naturally go away.
[06:23] <stub> So what is the advantage of warning? I see absolutely none, except a potential problem has leaked into the codebase. Its not like we expect this to happen - we have never had a thread leak that wasn't a) spurious b) an error
[06:23] <lifeless> stub: I don't think its a strong argument for the layer waiting at all given that being able to wait successfully is itself a special case.
[06:23] <stub> (a or b)
[06:23] <poolie> jtv, this is just spm randomly typing stuff?
[06:23] <poolie> jtv, that transport is registered by the lp-serve plugin
[06:24] <lifeless> sigh, openid grrr grr grr grr
[06:24] <poolie> so for some reason that's not loaded in this process
[06:24] <jtv> poolie: no, this is spm pasting a script I wrote up for this purpose into make harness.
[06:24] <poolie> either
[06:24] <poolie> 1- you didn't ask for plugins to be loaded in this interpreter
[06:24] <poolie> 2- that plugin wasn't found, probably because it wasn't on your path
[06:24] <poolie> 3- other
[06:25] <lifeless> stub: so there are three approaches: a) wait and fail; b) warn [no point waiting if it can't fail], c) fail [make it the tests responsibility]
[06:27] <stub> For b), we should wait. We don't want spurious warnings as people just filter them out as noise.
[06:27] <lifeless> stub: I don't see the value of waiting
[06:27] <jtv> poolie: well I didn't ask for them to be loaded in this script, but it's not unthinkable that the branch scanner (which goes through similar steps) doesn't either.  How do I ask for that?
[06:28] <lifeless> stub: we've spent enough time on it - do what makes sense to you.
[06:28] <lifeless> stub: I can say that I wouldn't wait :)
[06:28] <stub> lifeless: warn and not wait is the solution everyone has rejected so far.
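The gather-join-report check being debated above could be sketched roughly as follows; `wait_for_new_threads` is a hypothetical helper, not the real BaseLayer code:

```python
import threading
import time

def wait_for_new_threads(baseline, timeout=5.0):
    """Join threads not in `baseline`, waiting up to `timeout` seconds total.

    Returns the threads still alive afterwards -- the leaks a layer could
    then fail (or merely warn) about.  A sketch of the check discussed
    above, not Launchpad's actual implementation.
    """
    deadline = time.time() + timeout
    leaked = []
    for thread in threading.enumerate():
        if thread in baseline or thread is threading.current_thread():
            continue
        # join() with a timeout never raises; check is_alive() afterwards.
        thread.join(max(0.0, deadline - time.time()))
        if thread.is_alive():
            leaked.append(thread)
    return leaked
```

A layer would snapshot `baseline = set(threading.enumerate())` in setUp and call the helper in tearDown, so a slow-but-terminating thread costs at most the timeout rather than a spurious failure.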
[06:32] <poolie> jtv bzrlib.plugin.load_plugins
[06:32] <poolie> jtv importing lp.codehosting seems to do that as a side effect
[06:33] <jtv> poolie: yup, see it in the __init__
[06:33] <jtv> .py
[06:33] <jtv> So that's my script, not the branch scanner code
[06:33] <jtv> Thanks!
[06:33] <poolie> try running bzrlib.plugin.plugins()
[06:34] <jtv> poolie: locally that gives me a bunch, including lpserve.
[06:34] <poolie> k
[06:34] <poolie> perhaps the paths are being set up wrong so it's not loaded
[06:35] <jtv> poolie: well in that script I think I simply didn't do anything that loads it.  Trying a corrected version now.
[06:39] <lifeless> stub: sure, that doesn't mean its wrong :P
[06:39] <lifeless> stub: anyhow, as I say, do what makes sense to you
[07:23] <wgrant> lifeless: What does the bad-commit-11566 tag do?
[07:24] <poolie> hi jtv? all ok?
[07:24] <jtv> hi poolie!  No reply from spm yet
[07:28] <lifeless> wgrant: https://dev.launchpad.net/QAProcessContinuousRollouts
[07:33] <lifeless> StevenK: I would like an answer on my concurrent poppy question
[07:34] <StevenK> lifeless: I've had a few stack dumps since then, can you please re-ask?
[07:35] <lifeless> StevenK: look in the mailing list :)
[07:36] <StevenK> lifeless: And no, we don't
[07:37] <StevenK> lifeless: Both cocoplum and germanium run the FTP and SFTP servers
[07:37] <StevenK> And I have no idea about how either of those services would cope or behave in a load-balancing situation
[07:38] <noodles775> Morning ppl
[07:39] <lifeless> StevenK: we don't what ?
[07:40] <StevenK> lifeless: We don't run a single copy of both the FTP and SFTP servers.
[07:42] <lifeless> for ppas, how many copies do we run ?
[07:43] <StevenK> One, on germanium
[07:43] <wgrant> But cocoplum's works perfectly well.
[07:44] <StevenK> lifeless: Notice I'm not disagreeing it's a SPOF, I'm disagreeing that we only run one copy.
[07:44] <lifeless> StevenK: I don't know what you're saying at all
[07:44] <lifeless> StevenK: so i'm trying to pin it down
[07:44] <lifeless> wgrant: will a ppa upload to *either* host work for PPAs?
[07:44] <wgrant> lifeless: Yes.
[07:45] <lifeless> so all we have to do is shove ha proxy in front and make all the dns names point at the same place?
[07:45] <StevenK> That will then cause havoc
[07:45] <wgrant> I'd prefer to resolve a germanium SPOF in a way that doesn't involve cocoplum.
[07:45] <lifeless> what havoc?
[07:45] <StevenK> Since people download and upload from ppa.launchpad.net
[07:45] <lifeless> let me explain what I need.
[07:45] <lifeless> I need:
[07:45] <wgrant> lifeless: Does haproxy do FTP and SFTP?
[07:46] <lifeless>  - to know whether the software is safe to run concurrently for a given 'queue' (or whatever abstraction it uses - sounds like it is)
[07:46] <wgrant> poppy can safely run concurrently, yes.
[07:46] <lifeless>  - to provide a detailed request for config changes to the losas to eliminate the current downtime during upgrades.
[07:46] <wgrant> It's not safe to run multiple upload processors, but we do anyway.
[07:46] <lifeless> upload processors are less concerning
[07:47] <lifeless> because its not listening on the network
[07:47] <wgrant> Right.
[07:47] <lifeless> so
[07:47] <wgrant> But we can't safely run multiple upload processors from different hosts.
[07:48] <lifeless> so, which machine is the ppa one
[07:48] <wgrant> germanium
[07:48] <lifeless> and what is cocoplum known as?
[07:48] <wgrant> cocoplum is the other instance -- it is ftpmaster.
[07:48] <lifeless> does ftpmaster provide an HTTPS interface?
[07:48] <lifeless>  and HTTP ?
[07:48] <StevenK> No
[07:48] <wgrant> Not externally.
[07:48] <lifeless> so, how does this sound.
[07:48] <StevenK> HTTP only, and only internally
[07:49] <lifeless> is that http apache?
[07:49] <StevenK> Yes
[07:49] <lifeless> here is what I propose
[07:49] <lifeless> ha proxy with ppa and ftpmaster pointing at it
[07:50] <lifeless> ha proxy directs http requests to germanium
[07:50] <lifeless> ha proxy or germanium's apache sends ftpmaster internal requests to cocoplum
[07:50] <lifeless> ha proxy directs ftp and sftp to either machine depending on whats running
[07:50] <wgrant> This sounds like two SPOFs.
[07:51] <wgrant> Also, will FTP be much of a fan of being run through haproxy?
[07:51] <lifeless> wgrant: SEP
[07:51] <StevenK> Or SFTP, for that matter
[07:51] <wgrant> SFTP should be fine.
[07:51] <wgrant> It's a single TCP stream.
[07:51] <lifeless> so, elmo has said there is a tool (it might not be haproxy) for doing this stuff.
[07:51] <StevenK> I'm worried about host keys
[07:51] <lifeless> if there isn't we'll have to go find or write one.
[07:52] <lifeless> StevenK: ok, thats a good point. We can do one of two things.
[07:52] <wgrant> StevenK: They'd have to be the same on both.
[07:52] <wgrant> Easy enough.
[07:52] <lifeless> we can say 'these should really have been one service', announce and JFDI it.
[07:52] <StevenK> I'm also worried about user-confusion
[07:53] <wgrant> lifeless: Why should ftpmaster.internal requests go through germanium?
[07:53] <lifeless> or we can say 'these are meant to be different', and we then bring up new instances of these services on other machines and tell the twisted daemon to use an appropriate host key.
[07:53] <lifeless> StevenK: getting rid of downtime can go a long way :)
[07:53] <lifeless> note that openssh host keys are not involved here
[07:53] <lifeless> because its twisted doing ssh
[07:53] <wgrant> lifeless: The two hostnames used for FTP and SFTP are upload.ubuntu.com and ppa.launchpad.net.
[07:54] <lifeless> wgrant: so what are the two spofs you see?
[07:54] <lifeless> wgrant: I'm suggesting only that ftpmaster things on the hostname that poppy is on get redirected.
[07:54] <lifeless> wgrant: if there is no http/https on 'upload.ubuntu.com' then its irrelevant
[07:55] <wgrant> lifeless: ftpmaster.internal should still point at cocoplum. upload.ubuntu.com and ppa.launchpad.net will point at haproxy.
[07:55] <lifeless> yes
[07:55] <wgrant> haproxy forwards HTTP(S) to germanium, and sends ftp/sftp to germanium or cocoplum.
[07:55] <lifeless> yes
[07:56] <wgrant> This leaves germanium as only a SPOF for PPA, not ftpmaster too.
[07:57] <lifeless> anything else we need to tweak, or can I shoot this off?
[07:57] <wgrant> Well, it's not safe at the moment.
[07:57] <lifeless> wgrant: is it less safe than what we have now?
[07:58] <wgrant> Uploading to both can result in rather corrupt archives.
[07:58] <wgrant> If I upload a package twice to one host, it will be rejected the second time.
[07:58] <lifeless> what do you mean?
[07:58] <wgrant> If the second one lands on another host, and is processed simultaneously, both uploads may succeed.
[07:58] <wgrant> This is not theoretical -- this happened to me a couple of months ago.
[07:58] <lifeless> sounds like a bug
[07:58] <wgrant> It is, yes.
[07:58] <lifeless> when will you fix it?
[07:59] <wgrant> We need better locking. But neither Julian nor I have much idea of how to do it.
[07:59] <lifeless> [did you see what I did there?]
[07:59] <lifeless> wgrant: how did that happen? or were you uploading to the other host deliberately?
[08:04] <wgrant> lifeless: I'd waited an hour for an upload to be processed on germanium, and had no response. So I uploaded to cocoplum. Also no response.
[08:04] <wgrant> Half an hour later, both uploads were processed simultaneously.
[08:04] <wgrant> And my PPA got rather unhappy.
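The "better locking" wgrant mentions could start from a per-upload exclusive lock. A sketch with a hypothetical `process_upload`; note that `flock` only closes the same-host race, while the cross-host race seen here would need a shared lock such as a PostgreSQL advisory lock:

```python
import fcntl
import os

def process_upload(changes_path):
    """Handle one upload under an exclusive per-upload lock.

    A sketch only: flock stops two processors on the *same* host racing
    on one upload.  The cross-host duplicate described above would need
    a lock visible to both hosts (e.g. pg_advisory_lock) instead.  The
    function and lock-file layout are hypothetical, not poppy's.
    """
    lock_path = changes_path + '.lock'
    with open(lock_path, 'w') as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks any second holder
        # ... accept or reject the upload while the lock is held ...
        return 'processed ' + os.path.basename(changes_path)
```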
[08:06] <lifeless> wgrant: could we just put a unique index on the archive?
[08:06] <wgrant> lifeless: Over about 6 tables, sure.
[08:06] <StevenK> No
[08:07] <wgrant> ie. "no"
[08:07] <wgrant> Archive-BPPH-BPR-BPF-LFA
[08:08] <StevenK> SPPH
[08:08] <StevenK> SPR
[08:08] <wgrant> That too.
[08:08] <wgrant> And SPRF.
[08:08] <stub> I'm fixing the stable->db-devel merge conflict
[08:09] <StevenK> So it's Archive-BPPH-BPR-BPF-SPPH-SPR-SPRF-LFA
[08:09] <wgrant> We could put a constraint in, I suppose.
[08:09] <StevenK> Tasty, 8 table unique index
[08:11] <lifeless> mmm, that says you have a buggy model.
[08:11] <lifeless> we should look at that.
[08:11] <wgrant> The model is horrifyingly complex.
[08:11] <wgrant> I'm not sure it's *buggy*.
[08:11] <lifeless> now
[08:11] <lifeless> its simple to fix this
[08:11] <lifeless> rather than active active
[08:11] <lifeless> active-failover
[08:12] <lifeless> is the load on the system low enough that that would work
[08:12] <StevenK> Not for cocoplum
[08:12] <StevenK> The publisher on cocoplum is mean
[08:13] <stub> wgrant: If you need to enforce uniqueness over that many tables, it is buggy (well - unsupported by RDB model). The data that needs to be unique needs to be broken out into a separate table.
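stub's suggestion, spelled out: rather than a unique index spanning eight tables, keep one narrow side table holding just the columns that must be unique, inserted in the same transaction as the real rows. A duplicate upload then fails the side-table insert and the whole transaction rolls back. This sketch uses SQLite with made-up table and column names, not Launchpad's schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hypothetical side table: one row per accepted upload.
    CREATE TABLE upload_claim (
        archive TEXT NOT NULL,
        source_name TEXT NOT NULL,
        version TEXT NOT NULL,
        UNIQUE (archive, source_name, version)
    );
""")


def claim_upload(conn, archive, source_name, version):
    """Return True if this upload is new; False if a duplicate beat us."""
    try:
        with conn:  # one transaction: the claim plus the real rows
            conn.execute(
                "INSERT INTO upload_claim VALUES (?, ?, ?)",
                (archive, source_name, version))
            # ... the SPR/SPRF/BPR/etc. inserts would go here ...
        return True
    except sqlite3.IntegrityError:
        return False  # unique constraint fired: reject the duplicate
```

The database, not application-level locking, then arbitrates the race between germanium and cocoplum.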
[08:13] <lifeless> StevenK: why are we talking about the publisher?
[08:13] <StevenK> Because it's involved as well, it runs on cocoplum for the Ubuntu archive
[08:14] <stub> fudz
[08:14] <lifeless> StevenK: It wasn't listed in the changes proposed above, can you describe how its connected?
[08:15] <StevenK> lifeless: You asked if the load on the system is low enough
[08:15] <lifeless> ok
[08:15] <lifeless> so if we make germanium the normally active master
[08:15] <lifeless> for poppy
[08:16] <lifeless> can cocoplum handle the load for say 30 minutes during a germanium services reboot ?
[08:16] <StevenK> It will suffer, and I have no idea about germanium's nominal load, since I don't have access to it
[08:17] <lifeless> the question is whether the uploads during that period would feel degraded to users.
[08:17] <lifeless> but if the main load is the archive, it would presumably feel just like upload.ubuntu.com does today.
[08:17] <lifeless> so, we have:
[08:17] <lifeless> upload & ppa -> haproxy
[08:17] <lifeless> http -> germanium
[08:18] <lifeless> sftp and ftp -> germanium unless its down, then to cocoplum
[08:18] <lifeless> announce and consolidate a single hostkey for both services
[08:19] <StevenK> They may already share one, but I'm unsure
[08:19] <lifeless> ok, go/nogo ?
[08:19] <lifeless> I'm going to cc bigjools
[08:19] <lifeless> but I want to be sure we're happy with the plan first
[08:20] <StevenK> To be perfectly honest, I have my doubts, but nothing I can put my finger on.
[08:20] <StevenK> Call it a gut feeling
[08:20] <lifeless> ok
[08:20] <lifeless> Will let Julian mull on it.
[08:20] <StevenK> That bit sounds good
[08:22] <wgrant> stub: I'm not sure there's a significantly better way, unfortunately.
[08:23] <StevenK> In an utterly unrelated question, can I have an __init__ function take arguments if it's being called from getUtility(Interface) ?
[08:23] <lifeless> no, but you can register an instance rather than a factory
[08:24] <stub> StevenK: utilities are singletons and even shared between threads.
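The point lifeless and stub are making, as a toy sketch: `getUtility` hands back one shared instance, so a utility's `__init__` arguments must be fixed when it is registered, not supplied per call. In real Zope code the registration would be `zope.component.provideUtility(instance, IMyInterface)`; the minimal registry below just illustrates the instance-vs-factory distinction with hypothetical names.

```python
# Toy stand-in for the component registry (illustrative only).
_registry = {}


class Mailer:
    def __init__(self, host):
        # Configuration is baked in at construction/registration time.
        self.host = host


def provide_utility(iface, instance):
    """Register an already-constructed instance, not a factory."""
    _registry[iface] = instance


def get_utility(iface):
    """Return the same shared instance every time, with no arguments."""
    return _registry[iface]


provide_utility("IMailer", Mailer(host="smtp.example.com"))
```

Because the instance is shared (even between threads, as stub notes), it should hold only immutable configuration, not per-request state.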
[08:24] <lifeless> wgrant: whats the bug number ?
[08:25] <poolie> it seems like zope, if a field is '', omits it from the data dict?
[08:25] <poolie> is that true or am i doing something weird?
[08:26] <lifeless> I would not be surprised. Displeased, but not surprised.
[08:26] <poolie> hm
[08:26] <poolie> it's kind of unfortunate because it conflates that case with "i made a mistake and didn't get the form i expected"
[08:28] <lifeless> wgrant: does that race condition exist with ftp + sftp as well ?
[08:31] <wgrant> lifeless: I don't think so.
[08:31] <wgrant> But I can't see the configs.
[08:31] <wgrant> Do they use the same upload queue directory?
[08:33] <StevenK> No, they don't
[08:33] <StevenK> Oh, queue directory. Yes.
[08:34] <wgrant> Then they're safe.
[08:40] <poolie> jtv, lifeless, i think i'm finally almost there
[08:42] <adeuring> good morning
[08:46] <poolie> hi abel
[08:46] <wgrant> lifeless: Do you have a plan to remove the main germanium SPOF?
[08:47] <wgrant> That is HTTP.
[08:47] <wgrant> It probably has more users than any other LP service.
[08:50] <lifeless> san
[08:50] <wgrant> Ideally, yes.
[08:50] <wgrant> Do you have one?
[08:50] <lifeless> soonish
[08:52] <wgrant> Excellent.
[08:52] <wgrant> Will it be used for the librarian too?
[08:54] <lifeless> probably
[08:54] <lifeless> though running something like ceph might be even better
[08:54] <wgrant> Hmm. Maybe.
[08:54] <wgrant> But SANs are easier.
[08:55] <lifeless> not really
[08:55] <lifeless> different
[09:04] <lifeless> grrr ec2land fail
[09:04] <wgrant> Can we rename it to ec2reject?
[09:05] <poolie> it's kind of russian-roulette landing isn't it
[09:05] <wgrant> It varies.
[09:05] <wgrant> For a while it was almost always one-shot.
[09:05] <wgrant> Then it... wasn't.
[09:06] <StevenK> ec2 landifyourelucky
[09:06] <lifeless> well
[09:06] <lifeless> this is changing fundamental assumptions
[09:07] <lifeless> so its not surprising to be playing 'tanks'
[09:07] <lifeless> its just slow.
[09:07] <lifeless> bbiab
[09:26] <mrevell> Morning
[10:03] <jml> lifeless: the ec2 land fail... what was the bug?
[10:19] <stub> Does the failure at https://lpbuildbot.canonical.com/builders/lucid_db_lp/builds/272/steps/compile/logs/stdio make any sense to anyone?
[10:20] <stub> Failure looks genuine, and the landing is the stable -> db-devel merge
[10:21] <stub> https://lpbuildbot.canonical.com/changes/13 - tales stuff being modified by someone.
[10:22] <wgrant> Um. "Dissolve cornflakes"?
[10:22] <wgrant> Ah, I see.
[10:22] <stub> I'd guess https://lpbuildbot.canonical.com/changes/9
[10:23] <stub> nope - that is production...
[10:23] <lifeless> jml: interrupted subunit stream, probably due to the layer code - perhaps even the thing I fixed in zope.testrunner that isn't merged yet because it has no tests because buildout was not working for it
[10:24] <lifeless> stub: sinzui moved tales to a new place
[10:24] <bigjools> jelmer: there he is :)
[10:25] <stub> https://lpbuildbot.canonical.com/changes/12 from jcsackett, but that landed on lp/devel too recently to be affecting db-devel?
[10:27] <stub> oic. so moving it broke some new code on db-devel
[10:29] <lifeless> stub: https://bugs.edge.launchpad.net/launchpad-foundations/+bug/607391 needs qa
[10:29] <_mup_> Bug #607391: upgrade robustness for cronscripts <cron> <qa-needstesting> <Launchpad Foundations:Fix Committed by stub> <https://launchpad.net/bugs/607391>
[10:29] <lifeless> stub: and if it needs special deployments stuff, writing up in the normal fashion.
[10:29] <lifeless> stub: its next in the https://devpad.canonical.com/~lpqateam/qa_reports/launchpad-stable-deployment.txt queue
[10:29] <stub> k
[10:31] <stub> I'm sending a [testfix] with the corrected import
[10:33] <jml> lifeless: ahh, ok
[10:33] <jml> lifeless: I guess we ought to upgrade to z.testrunner if we can, just to make sending things upstream easier (or, as you say, abandon it altogether)
[10:33] <lifeless> abandooon
[10:34] <lifeless> one less thing to maintain.
[10:42] <jml> lifeless: well, first port of call is stop using layers :)
[10:43] <jml> alternatively, stop needing z.testrunner to do layers
[10:43] <lifeless> working on that, this branch keeps bouncing with more glitches
[10:43] <lifeless> because the tolerance for bad environment goes way deep
[10:44] <jml> lifeless: good luck!
[10:52] <lifeless> thanks
[10:57] <stub> If we stop using layers, I don't think there is any reason to use z.testrunner.
[10:59] <jml> stub: yeah, that's my impression too
[11:01] <stub> mthaddon: Can you confirm that staging cronscripts are happily running? (This is for QA).
[11:02] <mthaddon> stub: erm, all of them? not quite sure what you mean
[11:02] <stub> mthaddon: Any of them
[11:03] <mthaddon> stub: process-mail.py seems to be working fine
[11:03] <stub> mthaddon: Assuming they are running, can you please create http://paste.ubuntu.com/502544/ as cronscripts.ini in the root of the staging tree and run an arbitrary cronscript?
[11:03] <stub> It should give you a log message about being disabled, and I can qa-ok by bug.
[11:06] <stub> (INFO level message)
[11:07] <stub> Wee! The daily storm is starting. Love the show this time of year ;)
[11:07] <mthaddon> stub: https://pastebin.canonical.com/37829/
[11:08] <stub> mthaddon: Ta. That's everything I need.
[11:08] <mthaddon> k
[11:11] <stub> mthaddon: So two changes you should like have landed. The --log-file argument has changed. Now you can run cronscripts as 'foo.py -q --log-file=debug:/srv/logs/foo.log' and have WARNING and above go to stderr and DEBUG and above go to the log file. -qq if you only want to see ERROR and above as you would expect.
[11:12] <stub> mthaddon: logrotate will work fine on these logs - the module will notice and cope with the file being rotated under its feet.
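The behaviour stub describes can be sketched with the stdlib `logging` module (this is an illustration, not Launchpad's actual script setup): WARNING and above go to stderr, DEBUG and above go to the log file, and `WatchedFileHandler` notices when logrotate moves the file underneath it and reopens it.

```python
import logging
import logging.handlers
import sys


def setup_logging(log_path):
    """Sketch of '-q --log-file=debug:PATH' style logging (assumed names)."""
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)  # let everything reach the handlers

    # WARNING and above to stderr (the '-q' behaviour).
    stderr_handler = logging.StreamHandler(sys.stderr)
    stderr_handler.setLevel(logging.WARNING)
    root.addHandler(stderr_handler)

    # DEBUG and above to the file; WatchedFileHandler copes with logrotate
    # by reopening the path when the inode changes under its feet.
    file_handler = logging.handlers.WatchedFileHandler(log_path)
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    root.addHandler(file_handler)
```

Dropping the stderr handler's level to ERROR would give the `-qq` behaviour.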
[11:13] <stub> mthaddon: The other one is cronscripts.ini being available at a configurable URL and enabling/disabling scripts individually or in bulk. I suspect the first use of this will be to disable the cronscripts while running upgrade.py etc. during the staging restore.
[11:13] <stub> mthaddon: How should we go about documenting these?
[11:13] <mthaddon> hmm
[11:15] <mthaddon> stub: I think probably an email to the losas is the way to start - we can then figure out what to do from there
[11:15] <stub> mthaddon: ok. Will do.
[11:41] <gmb> Well now, this is special...
[11:41] <gmb> Does anyone have any idea why I've suddenly started having this problem after merging devel: http://pastebin.ubuntu.com/502558/
[11:41] <gmb> stub: Is this ^^ something to do with the logging stuff you were working on last week?
[11:42] <gmb> Or am I just seeing "logger" and jumping to conclusions?
[11:42]  * gmb make cleans
[11:42] <stub> gmb: I think it is fallout
[11:43] <gmb> stub: Okay. We'll see what happens when I've cleaned and rebuilt.
[11:44] <stub> gmb: We were previously throwing away a lot of log messages by accident. We should fix that noise in the Librarian log file. Ideally by someone who understands how twisted does its logging.
[11:44] <stub> I think we currently have some frankenstein Python/twisted hybrid?
[11:45] <stub> Might just involve adding a NullHandler to getLogger('librarian') in lp_sitecustomize.py or in the librarian startup gumph.
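stub's suggested fix is a one-liner: attaching a `NullHandler` to the `librarian` logger suppresses the "No handlers could be found for logger" warning without discarding messages once real handlers are configured. (Where exactly it belongs, lp_sitecustomize.py or the librarian startup code, is his open question.)

```python
import logging

# A NullHandler satisfies the "at least one handler" check but emits
# nothing itself, so propagation to properly configured handlers still
# works as before.
logging.getLogger('librarian').addHandler(logging.NullHandler())
```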
[11:46] <stub> gmb: This won't stop the librarian from starting though, so either the real error is further down or the real error isn't being output because librarian logging has been screwed.
[11:46] <gmb> Hmm, interesting.
[11:46] <stub> gmb: Suspect rogue librarian process (aka. the usual suspects)
[11:46] <gmb> Entirely possible.
[11:46]  * gmb goes to poke around
[11:48] <stub> jml might know since I saw his XXX somewhere in there...
[11:48] <jml> hello.
[11:49] <stub> jml: I broke librarian logging. please fix it.
[11:50] <jml> stub: OK. I'll need some coffee and to finish with this email thread first though.
[11:50] <stub> score
[11:52] <gmb> stub, jml: FTR the problem went away after make clean / make. No stale processes hanging around.
[11:52] <jml> \o/
[11:52] <jml> gmb: I guess there's a deeper bug about the shitty failure mode
[11:52] <gmb> Indeed.
[11:53] <gmb> I do like the librarian's strategy of just shouting about the problem twenty or thirty times and then falling over. That might be the new way forward for error handling.
[11:57] <jml> gmb: in particular, shouting about the wrong problem
[11:58] <gmb> Yes. That was somewhat unhelpful.
[11:59] <stub> We could make our logs more readable by dropping the LEVEL prefixes entirely and use repetition to indicate severity.
[12:00] <deryck> Morning, all.
[12:01] <jml> stub: I disagree. We should use ANSI colors to indicate severity.
[12:02] <stub> --log-file=debug:/var/tmp/log.html --log-format=html
No handlers could be found for logger "librarian" No handlers could be found for logger "librarian" No handlers could be found for logger "librarian"</blink>
[12:03] <wgrant> jml: Critical errors can blink!
[12:03] <wgrant> Damn, I was beaten.
[12:04] <deryck> allenap, morning.  Do you think we need a card on the kanban board for bug 650991?  Or does the current card cover it?
[12:04] <_mup_> Bug #650991: Add getSubscriptionsForBug to IStructuralSubscriptionTarget <Launchpad Bugs:In Progress by allenap> <https://launchpad.net/bugs/650991>
[12:05] <allenap> deryck: I've forgotten how I should model these things. So, that bug is going to end up having 2 or more branches. I've completed the first, am working on the second. Should I have a card for each branch or bug?
[12:07] <deryck> allenap, I think generally we want a card for each branch.  Thinking being that the branch is the unit of work.
[12:07] <allenap> deryck: Okay, I'll make it so.
[12:07] <jml> I'm going to gently ambiate feelings of warm encouragement about fixing the librarian's bad failure mode
[12:08] <deryck> allenap, thanks!  Just trying to get us conscious about moving work forward clearly.
[12:10] <deryck> allenap, you can use the incremental flag as you land these and just do qa on the final branch/card that closed the bug, though.
[12:11] <deryck> so the other cards can land and flow straight into done-done.
[12:11] <allenap> deryck: Okay, cool.
[12:39] <adeuring> deryck: could you please run an EXPLAIN ANALYSE for this query on staging: https://pastebin.canonical.com/37832/ ?
[12:40] <bigjools> jelmer: congrats on the new buildd-manager code, it looks to FREAKING ROCK :-)
[12:40] <deryck> adeuring, sure.  Doing so now....
[12:40] <adeuring> thanks!
[12:40] <jml> wuuu
[12:40] <deryck> wow, that's a query, adeuring :-)
[12:40] <adeuring> ;)
[12:40] <jml> bigjools: that's in stable now?
[12:40] <bigjools> jml: it's gone live
[12:41] <bigjools> db-stable I think
[12:41]  * wgrant watches build farm latency drop to zero.
[12:41] <jml> bigjools: ahh, ok.
[12:41] <jml> bigjools: just thinking about merging it into the twisted work
[12:43] <wgrant> Logtails updating a couple of times a minute... I approve.
[12:43] <noodles775> Niice
[12:43] <wgrant> better than every 20 :)
[12:43] <bigjools> :)
[12:44] <bigjools> jml: yes, I'd merge it
[12:44] <bigjools> jml: although thinking about it, I think we already have it
[12:44] <bigjools> it landed a while ago
[12:44] <jml> bigjools: yeah, but we've been developing from devel/stable, not db-devel
[12:44] <jml> bigjools: which revision is jelmer's change?
[12:44] <wgrant> It shouldn't be in db-
[12:45] <jml> or what branch...
[12:45] <bigjools> good point well made
[12:45] <wgrant> 11566 or so?
[12:45]  * wgrant checks.
[12:45]  * bigjools is busy rescuing the librarian
[12:45] <wgrant> jml: 11566
[12:45] <wgrant> devel
[12:45] <jml> wgrant: you know, that's just a little eerie.
[12:45] <jml> wgrant: but thank you.
[12:45] <bigjools> there was another branch
[12:46] <wgrant> Oh, right, 11579
[12:46] <wgrant> That fixes recipe builds.
[12:48] <bigjools> lifeless: ping
[12:48] <bigjools> optimistic ping....
[12:48] <wgrant> It's 1am...
[12:48] <bigjools> they're on DST already?
[12:48] <wgrant> Last Sunday.
[12:49] <bigjools> huh, weird :)
[12:49] <wgrant> We're not until this weekend, though.
[12:49] <wgrant> They are special.
[12:49] <deryck> adeuring, it still hasn't completed.  I think I should kill it.
[12:49] <bigjools> and another month for us
[12:49] <adeuring> deryck: gahhh...
[12:49] <bigjools> to go back that is
[12:49] <bigjools> it's going to make my standups interesting with Steve
[12:54] <adeuring> deryck: could you try this one: https://pastebin.canonical.com/37835/ ?
[12:58] <deryck> adeuring, sure
[13:03] <deryck> adeuring, still going.  I'll kill it now.
[13:03] <adeuring> deryck: yeah, sure
[13:15] <wgrant> bigjools: Why don't we run the upload processor every minute?
[13:15] <bigjools> which upload processor?
[13:16] <wgrant> The one behind poppy.
[13:16] <bigjools> hysterical raisins
[13:16] <wgrant> I presume we now run the buildd-manager one every minute.
[13:16] <bigjools> 30 seconds actually
[13:16] <wgrant> Even better.
[13:16] <wgrant> (how?)
[13:18] <jml> dear pycon, please stop calling for papers when I am phenomenally busy on non-programming stuff, love jml
[13:19] <bigjools> wgrant: oh I lie, it's every 5
[13:19] <bigjools> minutes
[13:19] <wgrant> bigjools: Uh, really?
[13:19] <bigjools> it should be every minute... jelmer?
[13:19] <wgrant> That's going to break date_finished pretty badly.
[13:19] <wgrant> Although not as badly as it was broken before, I guess.
[13:21] <jelmer> bigjools: the upload processor runs every 5 minutes
[13:30] <jml> just to confirm...
[13:30] <jml> IBuilder.active = True does not imply that there's an active build on that builder
[13:30] <jml> ?
[13:30] <wgrant> active just controls whether it's shown on /builders.
[13:31] <wgrant> So, no, it's unrelated to whether there's a build on it.
[13:31] <wgrant> Yes, we like confusing names.
[13:47] <jml> wgrant: ta
[14:17] <wgrant> bigjools: Um.
[14:17] <wgrant> bigjools: You didn't run that SQL, did you?
[14:18] <wgrant> That's rather overbroad :/
[14:24] <bigjools> I did
[14:26] <wgrant> bigjools: I think you just deleted most of hardy.
[14:26] <bigjools> wgrant: huh?
[14:28] <bigjools> wgrant: actually I see what you mean
[14:28] <bigjools> arse
[14:28] <wgrant> bigjools: You found all binaries in intrepid that weren't published or pending, and deleted their files.
[14:28] <wgrant> Most of those are inherited from dapper.
[14:28] <wgrant> Er.
[14:28] <wgrant> Hardy.
[14:28] <wgrant> And will be in jaunty and co too.
[14:31] <wgrant> We need to move this expiration to something more like the librarian GC.
[14:32] <wgrant> (excluding active records, rather than trying to select inactive ones)
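The librarian-GC style wgrant wants can be sketched in SQL: instead of hand-writing a query that *selects* inactive files (which is how the overbroad intrepid deletion happened), compute the set of files referenced by any active record and expire only what is *not* in that set. The schema below is illustrative, not Launchpad's.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE file (
        id INTEGER PRIMARY KEY, name TEXT, expired INTEGER DEFAULT 0);
    CREATE TABLE publication (file_id INTEGER, status TEXT);
""")
conn.executemany("INSERT INTO file (id, name) VALUES (?, ?)",
                 [(1, "a.deb"), (2, "b.deb"), (3, "c.deb")])
conn.executemany("INSERT INTO publication VALUES (?, ?)",
                 [(1, "Published"), (2, "Deleted"), (3, "Pending")])


def expire_unreferenced(conn):
    """Expire only files with NO active (Published/Pending) reference."""
    with conn:
        conn.execute("""
            UPDATE file SET expired = 1
            WHERE id NOT IN (
                SELECT file_id FROM publication
                WHERE status IN ('Published', 'Pending'))
        """)
```

Any new kind of active reference added later makes the GC more conservative by default, rather than silently widening the deletions.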
[14:33] <bigjools> yes, reference counting would be nice
[14:35] <wgrant> I forget how librariangc does it.
[14:35] <wgrant> But it's not too bad.
[14:36] <wgrant> But anyway, I hope the GC hasn't run yet, or we are screwed if we want to be able to initialise natty.
[14:39] <bigjools> it has not
[14:59] <bac> abentley, adeuring, allenap , bac, danilo, sinzui, deryck, EdwinGrubbs, flacoste, gary, gmb, henninge, jelmer, jtv, bigjools, leonard, mars, noodles775
[14:59] <bac> : reviewers meeting starting soonish
[14:59] <bigjools> bac: I will be late
[14:59] <bigjools> sorry
[15:02] <mars> bac, gary is out today, he sends his apologies
[15:02] <maxb> My email "[Launchpad-dev] RFC: Cleaning Launchpad Lucid PPA" is lonely. Anyone feel like replying? :-)
[15:09] <jml> maxb: I feel totally unqualified to reply in any way other than "Gosh I'm glad someone else is taking care of this"
[15:09] <maxb> heh
[15:34] <salgado> sinzui, we seem to have some spurious TeamParticipation entries which might be caused by those changes Edwin did to the code which maintains that table: https://bugs.edge.launchpad.net/launchpad-foundations/+bug/597208 (last comment is the relevant one)
[15:34] <_mup_> Bug #597208: Run cronscripts/check-teamparticipation.py on production and make its output more visible <Launchpad Foundations:Triaged> <https://launchpad.net/bugs/597208>
[15:34] <sinzui> :(
[15:37] <sinzui> salgado, they are all merged teams. I think they were merged using the delete action which is a special form of merge
[15:38] <salgado> sinzui, they're not all merged. for instance, https://edge.launchpad.net/~ubuntuone-users is on that list
[15:38] <sinzui> salgado, I deleted mailing-list-beta-testers myself. This is a bug in the delete rules. I assumed something like this could not happen since delete is a subclass of merge where the destination team is ~registry
[15:39] <salgado> oh, nm me
[15:39] <salgado> I see that the entry is for another team that merged into ubuntuone-users
[15:40] <sinzui> salgado, Do we have a script to clean up the mess that is in TeamParticipation?
[15:40] <salgado> sinzui, nope
[15:41] <gmb> Can someone remind me what I need to do to enable a feature flag in my dev environment? I.e. how do I get features.getFeatureFlag('malone.foo-bar.enabled') to return True?
[15:41] <sinzui> okay, I will arrange a code fix and a cleanup
[15:44] <wgrant> Can someone poke ISD about the login.launchpad.net issue that has been reported in #launchpad?
[15:44] <wgrant> I can confirm; it seems fairly broken.
[15:44] <deryck> gmb, I believe you're looking for noodles775 reply to the email thread "how to use feature flags".  And subsequent emails.
[15:44] <deryck> gmb, unless you mean doing the db insert locally to enable the flag?
[15:45] <deryck> If I'm understanding this right, not having used it myself.
[15:47] <noodles775> gmb: http://pastebin.ubuntu.com/494656/ is what I'm using (just line 3).
[15:47] <gmb> noodles775: Thanks.
[15:49] <noodles775> wgrant: I've pinged the ISD guys.
[15:49] <wgrant> noodles775: Thanks.
[15:49] <flacoste> gmb: poolie landed a UI for this yesterday
[15:57] <bdmurray> sinzui: the +subscriptions mp is ready for rereview now - https://code.edge.launchpad.net/~brian-murray/launchpad/limited-subscriptions-page/+merge/35177
[15:58] <sinzui> bdmurray, salgado should go first
[15:58] <salgado> bdmurray, sinzui, I've just started. :)
[15:58] <bdmurray> sinzui: okay
[16:07] <lifeless> bigjools: yo?
[16:07] <lifeless> flacoste: For simpler rollouts I think qastaging is needed, but not edge gone.
[16:07] <lifeless> flacoste: discuss.
[16:07] <bigjools> lifeless: jeebus, it's what, 4am there?!
[16:08] <lifeless> yes, early plane flight to sydney. brb
[16:14] <gmb> I would like it noted that the fact that +subscribe is handled by BugTaskView instead of BugSubscriptionAddView (or similar) is a) odd and b) annoying.
[16:15] <bigjools> lifeless: anyway unping
[16:15] <bdmurray> I concur
[16:21] <bdmurray> salgado: what bug didn't send you back to +subscriptions on clicking cancel?  it works for me
[16:22] <salgado> bdmurray, I think I tried a couple different ones.  let me try again
[16:24] <lifeless> bigjools: hah, ok
[16:24] <salgado> bdmurray, indeed, reproduced with bug 13 and bug 4
[16:24] <_mup_> Bug #13: empty signing rules lead to invalid checksums <Baz (deprecated):Fix Released by lifeless> <https://launchpad.net/bugs/13>
[16:24] <_mup_> Bug #4: Importing finished po doesn't change progressbar <Launchpad Translations:Fix Released by carlos> <Ubuntu:Invalid> <https://launchpad.net/bugs/4>
[16:24] <bdmurray> salgado: as name12?
[16:24] <salgado> bdmurray, yes
[16:24] <deryck> gmb, indeed, it's ridiculous.  And we're likely doing extra work for the regular view that we don't need.
[16:25] <bdmurray> salgado: okay, thanks I'll poke at it some more
[16:26] <gmb> deryck: Indeed. So, new bug to file, then :)
[16:26] <deryck> gmb, yes, indeed.
[16:26] <gmb> (I'll fix that before doing bug 651108
[16:26] <_mup_> Bug #651108: Update the bug +subscribe view to include the options for bug_notification_level <story-better-bug-notification> <Launchpad Bugs:In Progress by gmb> <https://launchpad.net/bugs/651108>
[16:27] <salgado> bdmurray, I see at http://bazaar.launchpad.net/~brian-murray/launchpad/limited-subscriptions-page/revision/11486 that it will redirect to the referrer when you hit the Continue button but not the Cancel link.  for the link you need to set cancel_url, IIRC
[16:31] <bdmurray> salgado: http://bazaar.launchpad.net/~brian-murray/launchpad/limited-subscriptions-page/annotate/head%3A/lib/lp/bugs/templates/bug-subscription.pt the cancel href is set to the _return_url there
[16:40] <bdmurray> salgado: so I'm not quite certain how that is happening for you.  Does the branch also require sinzui's approval?
[16:40] <salgado> bdmurray, I think I know what's going on... ReturnToReferrerMixin must be mixed into a LaunchpadFormView, but you've mixed into a LaunchpadView
[16:40] <salgado> bdmurray, does the Cancel link work for you?
[16:41] <salgado> bdmurray, yes, it needs sinzui's approval as well
[16:42] <bdmurray> salgado: yes and the test in lib/lp/registry/stories/person/xx-person-subscriptions.txt works too
[16:46] <salgado> bdmurray, I thought that maybe I could have a plugin which was causing chromium to not send the referrer, but I see the same behaviour on epiphany, which has no extensions here
[16:47] <salgado> bdmurray, the last rev on your branch is r11487, correct?
[16:48] <bdmurray> salgado: I just pushed 11488 which fixed the Continue test in the previously mentioned test
[16:54] <lifeless> see you on the flip side
[16:55] <bigjools> jml: I thought of another thing we need to sort out in the async world of buildd-manager code - the log messages are going to look *weird* out of order, so they need vastly improving with more context :)
[16:59] <jml> events!
[17:09] <flacoste> gmb: who is the release manager for 10.10?
[17:10] <gmb> flacoste: Edwin.
[17:11] <flacoste> gmb: thanks! and hi!
[17:11] <gmb> flacoste: Hi! Welcome back :)
[17:21] <henninge> danilos: can you have a look at this, please? http://paste.ubuntu.com/502711/
[17:22] <danilos> henninge, I don't like the orange colour on the web page
[17:22] <henninge> danilos: This error comes straight from gettextpo.
[17:22] <henninge> :-P
[17:22] <henninge> The difference is "msgid_plural"
[17:22] <danilos> henninge, sounds like what jtv has seen recently with upgrade to maverick as well
[17:23] <henninge> ah, I was expecting some upgrade in gettext
[17:23] <danilos> henninge, we should probably normalize these into exceptions in pygettextpo and not worry about the text
[17:23] <danilos> henninge, though, not sure how doable it is... where do you see this?
[17:23] <henninge> in a pagetest
[17:23] <danilos> henninge, locally or? I am assuming it's maverick?
[17:24] <henninge> yes, locally on maverick
[17:24] <henninge> http://paste.ubuntu.com/502712/
[17:24] <henninge> error display in pagetests sucks
[17:31] <henninge> I'll just add some ... to get the test passing on both maverick and older ...
[17:31] <henninge> danilos: ^
[17:31] <danilos> henninge, yeah, it sucks, though I've been fighting my test runner today as well
[17:32] <danilos> henninge, it'd be very useful to try it out on both (you can run just a single test on ec2)
[17:40] <henninge> danilos: I want a full run now, anyway.
[17:46] <bigjools> henninge: it sucks when people do "print browser.contents" and then match with an ellipsis ... :(
[17:51] <danilos> bigjools, oh, you should have seen most of our pagetests from back in the day then, you'd love it
[17:54] <bigjools> danilos: dude, I work in Soyuz .... :)
[18:07] <henninge> bigjools: I replaced it with "extract_text" in this instance ;-)
[18:07] <bigjools> henninge: did you get the page fragment too? :)
[18:08] <henninge> Oh, it was already just a tag being tested.
[18:08] <henninge>   >>> for tag in find_tags_by_class(user_browser.contents, 'error'):
[18:08] <henninge>   ...     print extract_text(tag)
[18:08] <bigjools> heh :)
[18:24] <jml> heh heh
[18:24] <jml> lifeless: lounge?
[18:25] <lifeless> yes
[18:25] <lifeless> downloading my failed stream so i can try to fix the errors
[18:34] <sinzui> jcsackett, you may be partly responsible for the sampledata conflict. Do you want to try to fix it my merging stable into db-devel, then submit the resolution to pqm?
[18:34] <jcsackett> sinzui: actually, i just saw those errors, did a clean merge of devel and db-devel on my machine with resolution, and was about to ask you about it.
[18:34] <sinzui> no
[18:34] <sinzui> stable to db-devel
[18:35] <sinzui> jcsackett, we have a step that ensures that db-devel will also merge into stable for CPs and rollouts. The failures are saying we cannot do either
[18:36] <jcsackett> sinzui: yeah, i follow that; just didn't realize which branch was involved. i'm pulling down stable now to merge--i suspect the resolution step will be the same.
[18:36] <sinzui> jcsackett, pull db-devel, merge in production-stable, resolve, push, send to pqm
[18:37]  * lifeless waves
[18:37] <sinzui> jcsackett, remember that this is developer data, not test/app data, so sending it to ec2 is pointless
[18:37] <jcsackett> sinzui: got it. does using lp-land work for that, or is there some special invocation?
[18:37] <abentley> jcsackett: you're not supposed to land database changes to devel/stable.  The one exception seems to be security.cfg
[18:38] <abentley> jcsackett: In this case, you haven't changed the database schema, so you shouldn't have needed to change the sampledata.
[18:38] <sinzui> jcsackett, bzr pqm-submit -m "[testfix][r=<review>][ui=none] Resolve conflicts."
[18:38] <jcsackett> abentley: i was under the (errant) belief that that rule referred only to schema changes (like adding columns); i had made a change to sample data to illustrate some cases per salgado's request.
[18:38] <jcsackett> this won't happen again. sorry all.
[18:40] <jml> jcsackett: sorry our process isn't simpler or at least more self-documenting
[18:40] <sinzui> abentley, I am not sure jcsackett broke rules since his change was for engineers. The data is not used by the test runner
[18:41] <abentley> sinzui: If there's not a rule, there definitely should be, because what just happened is what will always happen.
[18:41] <jml> yay more rules
[18:41] <jml> oh wait I mean boo.
[18:44] <abentley> jml: simpler rules: "don't change anything in "database/" except security.cfg and current-dev" becomes "don't change anything in "database/" except security.cfg."
[18:44] <jml> abentley: I like simpler.
[18:45] <jcsackett> sinzui: okay, i have resolved conflict; you want to take a look at it beforehand?
[18:46] <sinzui> sure
[18:48] <jcsackett> sinzui: hm; diff looks ridiculous. fixing the merge error was just a case of deciding which block of blank lines to keep.
[18:50] <jcsackett> sinzui: https://pastebin.canonical.com/37863/
[18:50] <sinzui> r=me
[18:50] <jcsackett> cool.
[18:52] <jcsackett> submitted.
[18:52]  * jcsackett hides in cave.
[18:55] <jcsackett> should i update the wiki describing editing sampledata to include the fact that changes must be submitted to db-devel, not devel?
[18:59] <abentley> jcsackett: please do.
[19:23] <jcsackett> sinzui: https://pastebin.canonical.com/37865/
[19:36] <cr3> hi folks, anyone happen to know where/how the "webservice" part of "${context/webservice}" is defined? I looked throughout interfaces and zcmls, but couldn't quite find where/how that attribute or method is defined.
[19:37] <jcsackett> abentley, can you take a look at this? https://pastebin.canonical.com/37865/
[19:38] <jcsackett> i gather that pqm rejected my merge b/c it found conflict markers, but there appear to be none in my branch or in a local checkout of db-devel if i merge my branch into it.
[19:38] <jcsackett> (according to the same test).
[19:39] <abentley> jcsackett: the rejection would have been an attempt to merge stable, not your branch, so that's the closest thing to try.
[19:39] <cr3> nevermind folks, found it under lazr.restful stuff
[19:40] <jcsackett> abentley: why would it be trying to merge stable if it's trying to merge my testfix?
[19:40] <cr3> by the way, what does "LAZR" stand for? Launchpad And Zope REST?
[19:40] <abentley> jcsackett: Oh, I thought you were referring to the original failure.
[19:40] <jcsackett> abentley: no, this was a response to my testfix.
[19:41] <jcsackett> i know what the conflict was in merging stable; that's what i resolved.
[19:41] <jcsackett> was hoping error code in the paste might be more meaningful to someone else, and give me something to chase.
[19:44] <abentley> jcsackett: What you pastebinned is not what PQM does if there's a merge conflict.
[19:44] <bac> hi abentley, i used 'bzr lp-land' this morning and it sent my branch off to pqm even though i had uncommitted changes.  most of the other tools check for that, should lp-land too?
[19:44] <abentley> jcsackett: That's a rule that's checking for unwanted database changes.
[19:44] <jcsackett> abentley: dig. i'm confused since the error appears to be in make check_merge.
[19:45] <abentley> jcsackett: We don't check for conflict markers, we check the status code of the merge command.
[19:46] <jcsackett> abentley: ah, i see. the test i see when running make check_merge is test_no_conflict_markers, which is where i made the guess.
[19:46] <abentley> jcsackett: this is checking that the outcome of the merge is reasonable.
[19:46] <abentley> jcsackett: not checking whether the merge succeeded.
[19:47] <abentley> jcsackett: Okay, maybe we *also* check for conflict markers, but those would be markers that got there because someone committed them.
[19:48] <jcsackett> abentley: okay. so then, is my supposition right that the failure in the paste happens after pqm merges the testfix into db-devel?
[19:48] <jcsackett> abentley: i'm trying to chase down the error so i can figure out what in the testfix is unpalatable and fix it.
[19:48] <abentley> jcsackett: yes.  After the merge, it's running make check_merge
[19:49] <jcsackett> so, theoretically, if i have a clean branch of db-devel, merge my testfix into it, and run make check_merge, if all is well locally i shouldn't be seeing that error?
[19:49] <jcsackett> abentley^
[19:49] <abentley> jcsackett: You should try doing the merge locally and then running "make check_merge" locally.
[19:49] <abentley> jcsackett: yes.
[19:49] <jcsackett> abentley: excellent. that's what i did, and the make check_merge passes locally.
[19:50] <jcsackett> abentley: i suppose not 'excellent', as that removes the most obvious path for a fix.
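[Editor's note: abentley's point above is that PQM trusts the exit status of the merge command rather than grepping the tree for conflict markers. A minimal, hypothetical sketch of that style of check (not PQM's actual code):]

```python
import subprocess

def command_succeeded(cmd, cwd=None):
    """Return True when `cmd` exits with status 0.

    PQM-style check: rather than scanning the working tree for
    conflict markers, trust the command's exit code -- `bzr merge`
    exits non-zero when the merge produces conflicts.
    """
    result = subprocess.run(cmd, cwd=cwd, capture_output=True)
    return result.returncode == 0

# Hypothetical local reproduction of the PQM sequence:
#   command_succeeded(["bzr", "merge", "../my-testfix"], cwd="db-devel")
# then run `make check_merge` in the merged tree.
```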
[19:50] <sinzui> jcsackett, sorry, I was in a meeting. You are landing in db-devel: pqm-submit --submit-branch=bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/launchpad/db-devel
[19:50] <sinzui> jcsackett, bzr before that command and -m "message" after that command
[19:50] <abentley> jcsackett: okay, make sure your db-devel is up-to-date.
[19:52] <sinzui> jcsackett, did you branch devel or db-devel as your base?
[19:52] <jcsackett> sinzui: db-devel.
[19:52] <abentley> bac: I guess that would make sense, to be in line with pqm-submit.
[19:52] <sinzui> okay, then I think you are fine
[19:53] <sinzui> jcsackett, if I gave you my bazaar aliases, then dbsubmit does the pqm-submit --submit-branch=bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/launchpad/db-devel
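[Editor's note: an alias like the one sinzui describes lives in the [ALIASES] section of ~/.bazaar/bazaar.conf; a sketch built from the command he quotes above:]

```ini
[ALIASES]
dbsubmit = pqm-submit --submit-branch=bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/launchpad/db-devel
```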
[19:53] <bac> abentley: yeah, it would be nice.  i just put us in [testfix] b/c i made a change to a failed test but then sent it off to PQM without committing the change.  :(
[19:53] <bac> abentley: i'll open a bug
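[Editor's note: the guard bac is asking lp-land to grow could look roughly like this -- a hypothetical sketch that classifies `bzr status` output, not lp-land's actual implementation (which would use bzrlib's tree APIs directly):]

```python
def has_uncommitted_changes(status_output):
    """Return True if `bzr status` output lists pending tree changes.

    `bzr status` groups changes under headers such as "modified:";
    unknown files are deliberately ignored since they are not part
    of what would be committed.
    """
    change_headers = ("modified:", "added:", "removed:", "renamed:", "kind changed:")
    return any(line.strip() in change_headers
               for line in status_output.splitlines())

# A submit tool could refuse to proceed when this returns True,
# matching the behaviour bac expects from the other landing tools.
```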
[19:54] <jcsackett> sinzui: thanks.
[19:55] <jcsackett> abentley: thanks as well.
[20:06] <jml> g'night all
[20:25] <rockstar> Has the upload server for PPAs changed?  I can't ping upload.launchpad.net
[20:27] <rockstar> Ugh, it's ppa.launchpad.net
[20:31] <bryceh> in a configure.zcml, when should I make my classes class="lp.bugs.model.*" vs. class="canonical.launchpad.database" ?
[20:34] <salgado> bryceh, always the former. canonical.launchpad.database is deprecated and at some point everything there will have been moved somewhere else
[20:34] <bryceh> salgado, ok thanks
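[Editor's note: the distinction salgado describes looks like this in a configure.zcml; the class and interface paths here are illustrative, not actual Launchpad declarations:]

```xml
<!-- preferred: the app-specific package -->
<class class="lp.bugs.model.bugtask.BugTask">
  <allow interface="lp.bugs.interfaces.bugtask.IBugTask" />
</class>

<!-- deprecated: the legacy monolithic package, being emptied out -->
<!-- <class class="canonical.launchpad.database.BugTask"> ... </class> -->
```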
[20:45]  * rockstar goes to find lunch
[21:41] <mwhudson> what does 'dissolve cornflakes' mean as a commit message?
[21:41] <thumper> :)
[21:41] <thumper> I saw that too
[21:42] <bac> hi thumper
[21:42] <thumper> I think it was a merge conflict resolution
[21:42] <thumper> hi bac
[21:43] <bac> thumper: the branch i landed redesigned a project's code page to show whether the project uses launchpad and, if it does not, to show where the code is hosted
[21:43] <thumper> bac: so what changed the capitalisation?
[21:43] <bac> thumper: part of that design was to add side portlets to the page to bring it into the 3.0 UI world
[21:43] <bac> those statements went from in-line to being in a portlet
[21:44]  * thumper is not happy
[21:44] <thumper> who reviewed it?
[21:44] <bac> thumper: salgado, allenap, sinzui, and mrevell
[21:44] <bac> thumper: look at screenshots at http://people.canonical.com/~bac/code_usage/
[21:45] <maxb> uhoh, we have code imports failing with NoWhoami again
[21:45] <maxb> https://answers.edge.launchpad.net/launchpad-code/+question/127341
[21:47] <thumper> bac: what about personal branches? distro branches?
[21:47] <thumper> bac: the style of those will now be inconsistent?
[21:48] <thumper> bac: also, I don't see the point of saying that branches will be public initially
[21:48] <thumper> bac: no-one can make public branches private
[21:49] <mwhudson> maxb: eek
[21:49] <mwhudson> maxb: particularly as it's failed on each importd :/
[21:49] <mwhudson> losa ping
[21:50] <bac> thumper: look at curtis' last comment on https://code.edge.launchpad.net/~bac/launchpad/bug-643538-code/+merge/36377
[21:50] <bac> thumper: the one from jml specifically asking for consistency across tabs for projects
[21:51] <mbarnett> heya mwhudson
[21:52] <mwhudson> mbarnett: has anything happened to the ~importd/.bazaar/bazaar.conf files on the import slaves recently?
[21:52]  * thumper sighs
[21:52] <thumper> bac: so... back to the consistency thing then
[21:52] <thumper> bac: have the other branch listings been changed?
[21:52] <bac> thumper: no
[21:53] <maxb> mwhudson: Yes, I think *ALL* cscvs imports are currently failing, but only when the upstream repository has a new commit, which is why the entire collection of CSCVS imports hasn't migrated to failed already
[21:53]  * thumper agrees with maxb
[21:53] <mwhudson> maxb: this is strange, we definitely had them working at some point since the nowhoami behaviour got rolled out
[21:54] <maxb> So why is this not affecting non-CSCVS imports? :-/
[21:54] <thumper> maxb: all the others use a common base class
[21:54] <thumper> maxb: which I think has been fixed for the no-whoami thingy
[21:54] <mwhudson> no
[21:54] <mwhudson> it's because they use a different api to create commits
[21:55] <mwhudson> at least, i think
[21:55] <mwhudson> create revisions rather
[21:55] <thumper> mwhudson: there was a bzrlib bug where the method was ignoring the param
[21:55] <mwhudson> thumper: ah
[21:55] <thumper> mwhudson: so you still had to set BZR_EMAIL env
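[Editor's note: the workaround thumper describes -- exporting an identity through the environment because the bzrlib call ignored its committer parameter -- amounts to something like this. A sketch only, not the actual import-slave code; the default identity string is made up:]

```python
import os

def ensure_bzr_identity(default="Code Imports <imports@example.com>"):
    """Export BZR_EMAIL if no identity is already set.

    bzrlib consults the BZR_EMAIL environment variable before its
    config files, so setting it sidesteps NoWhoami even when an
    API's committer argument is being ignored.
    """
    if "BZR_EMAIL" not in os.environ:
        os.environ["BZR_EMAIL"] = default
    return os.environ["BZR_EMAIL"]
```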
[21:56] <maxb> This problem arose between 2010-09-08 and 2010-09-12
[21:57] <thumper> mwhudson: also, about the incorrectly stacked distro branches
[21:57] <thumper> mwhudson: we can fix it by running bzr reconfigure --unstacked on them
[21:57] <thumper> mwhudson: I'm trying to avoid writing a "special" fixit script
[21:57] <mwhudson> thumper: seems fair enough
[21:58] <wallyworld___> morning
[21:58] <thumper> wallyworld___: morning
[22:05] <sinzui> thumper, I agree with you
[22:05] <thumper> sinzui: which bit?
[22:06] <thumper> abentley, rockstar, wallyworld___: standup?
[22:06] <sinzui> thumper, I reported bugs on blueprint and malone when I saw project was changed when the feature was about specificationtarget and bugtarget. branch collections/targets must be the one on launchpad and behave like all of launchpad
[22:06] <wallyworld___> yep
[22:06] <abentley> thumper: okie
[22:07] <thumper> sinzui: I don't understand what you just said
[22:07] <sinzui> thumper, no 2.0 designs
[22:07] <sinzui> thumper, do not make IProduct an exception
[22:07] <mbarnett> mwhudson: sorry, overly distracted over here with a db issue.
[22:08] <mbarnett> mwhudson: will have to look at that in a few
[22:08] <bac> thumper: i think sinzui is saying we'll update the other pages as soon as possible.  is that right curtis?
[22:09] <sinzui> yes. If someone else does not report the bug I will
[22:09] <sinzui> besides, jml will not end bridging-the-gap until his vision of consistent UI and clarity of message is met
[22:47] <rockstar> wallyworld___, ping me when you're ready. :)
[22:48] <wallyworld___> rockstar: do you want to talk now or a bit later? if later, i'll go grab a coffee and breakfast
[22:51] <rockstar> wallyworld___, have breakfast first.  I can wait.
[22:51] <wallyworld___> rockstar: i saw your message after i sent mine. i can talk now if you like. might be good to get it done
[22:51] <rockstar> wallyworld___, if you're good to go, I am.
[22:52] <wallyworld___> rockstar: skype?
[23:10] <rockstar> wallyworld___, http://developer.yahoo.com/yui/3/io/
[23:21] <james_w> rockstar: merged your bzr-builder branch, apologies for the delay
[23:21] <james_w> while there I found a case where a bug was corrupting the text of recipes
[23:22] <james_w> but there was in fact not enough checking in __str__ to catch the problem
[23:23] <rockstar> james_w, ah!  Thank you for doing that.
[23:24] <poolie> hi rockstar, james_w
[23:24] <rockstar> Morning poolie.
[23:24] <james_w> hi poolie
[23:24] <james_w> https://code.edge.launchpad.net/~james-w/bzr-builder/fix-text-of-nest-parts/+merge/37076 if someone wants to review
[23:26]  * rockstar walks dog
[23:29]  * thumper afk for brief shopping
[23:37] <james_w> erm, helps if I actually commit
[23:52] <james_w> ok, third time lucky
[23:52] <poolie> hi mars? happy to help with your feature flag test if i can
[23:52] <james_w> https://code.edge.launchpad.net/~james-w/bzr-builder/fix-text-of-nest-parts/+merge/37080 if someone wants to review
[23:54] <mars> poolie, sure: I would be glad for the help: https://code.edge.launchpad.net/~mars/launchpad/add-py25-lint
[23:54] <mars> poolie, this test fails: bin/test canonical.launchpad -t profiling.txt
[23:54] <mars> poolie, I have to run, but I can check back later
[23:54] <poolie> k