[00:23]  * thumper has lost pyflakes
[00:30] <lifeless> it's nice when one index cuts 50% time off of things
[00:31] <thumper> lifeless: like what?
[00:33] <lifeless> thumper: https://bugs.edge.launchpad.net/malone/+bug/618396
[00:34] <lifeless> it won't help with the count(*) blow-out which happens from time to time, but it will shift the whole page performance down by 1.4 seconds or so
[00:56] <thumper> wallyworld: you are very quiet in your corner there
[00:56] <thumper> wallyworld: how are things going?
[00:56] <wallyworld> thumper: head down, tail up
[00:56] <thumper> wallyworld: what are you looking at now?
[00:56] <lifeless> his navel, by that description
[00:57] <thumper> mwhudson: can I have a quick pre-impl with you about branch-distro?
[00:57] <wallyworld> i've landed 3 branches of late and am finishing up the default reviewer test cases prior to doing a mp
[00:57] <wallyworld> after that i was wanting to have a catch up
[00:57] <mwhudson> thumper: ok, one sec
[01:00] <thumper> wallyworld: sounds good
[01:00] <thumper> wallyworld: I want to head to lunch after a quick chat to mwhudson
[01:00] <thumper> it should be very quick :)
[01:00] <wallyworld> thumper: ok, do you want to call me when you are free
[01:01] <thumper> wallyworld: sure
[01:01] <mwhudson> thumper: ok
[01:01] <lifeless> hmm, that's 4 ec2 lands I have running at once.
[01:01] <mwhudson> ah, helps if i launch skype
[01:01] <thumper> wallyworld: I'll call you after lunch
[01:01] <lifeless> Need moar to do
[01:01] <wallyworld> thumper: ack
[01:01] <lifeless> StevenK:
[01:01] <lifeless> StevenK: ping
[01:02] <lifeless> spm: also, when you can, in three minutes or so, I need help QAing a patch on staging.
[01:02] <lifeless> of course that needs staging running
[01:05] <spm> yes, one would tend to follow the other :-)
[01:07] <lifeless> https://bugs.edge.launchpad.net/launchpad-foundations/+bug/634342/comments/3 is what I'll need your help with
[01:09] <lifeless> oh, also
[01:09] <lifeless> 'garbo-daily' didn't run. Not that you need more queued items to look at :)
[01:10] <spm> you guys get the logs for those too, fwiw. :-)
[01:11] <lifeless> where?
[01:12] <spm> sodium: locate 'garbo' | grep log | head ==> /x/launchpad.net-logs/scripts/loganberry/garbo-daily.log
[01:13] <spm> bleh, still doing the session prod fail thing
[01:13] <lifeless> $ less /x/launchpad.net-logs/scripts/loganberry/garbo-daily.log
[01:13] <lifeless> /x/launchpad.net-logs/scripts/loganberry/garbo-daily.log: No such file or directory
[01:15] <lifeless> ah, date suffixes
[01:15] <spm> well locate is always older; yarp
[01:16] <lifeless> it would  be very interesting to say 'no increased LOC outside of tests.'
[01:17] <lifeless> I wonder if that would fly.
[01:31] <mwhudson> aaaaaaaaaaaaaaargh
[01:32]  * mwhudson is splitting up a doctest and man, sample data sucks
[01:39] <mwhudson> as do hidden parameters
[01:41] <lifeless> you have my sympathy
[01:46] <lifeless> poolie: I'd just *love* it if you were to do some more on the UI for flags
[01:46] <poolie> ah, this week i'm feeling you might be lucky
[01:47] <lifeless> \o/
[01:53] <lifeless> StevenK: Yo
[01:56] <lifeless> edge was happy over the weekend
[01:56] <lifeless> 38 Time Outs
[01:56] <lifeless> possibly time to lower it again.
[01:57] <lifeless> 112 /  236  CodeImportSchedulerApplication:CodeImportSchedulerAPI has reached #1
[01:58] <lifeless> 104 /  218  BugTask:+index
[01:58] <lifeless>       44 /   33  RootObject:+login
[01:58] <lifeless>       40 /   56  DistributionSourcePackage:+filebug
[01:58] <lifeless>       37 /   12  Distribution:+search
[02:04] <jelmer> lifeless: hi
[02:04] <poolie> hi jelmer
[02:05] <lifeless> hi jelmer
[02:05] <lifeless> jelmer: if you're actually around now, there's a soyuz patch blocking further migrations of bugfixes to production
[02:05] <lifeless> jelmer: it's what I was pinging about the other day
[02:12] <lifeless> stub: hi
[02:12] <lifeless> stub: can we talk indexes for a minute?
[02:12] <jelmer> lifeless: Which patch is that?
[02:13] <stub> lifeless: ok - gimme a minute to finish this coffee
[02:13] <lifeless> jelmer: rev 11556
[02:13] <lifeless> stub: take your time
[02:13] <lifeless> https://code.edge.launchpad.net/~lifeless/launchpad/bug-121363/+merge/35971 and https://code.edge.launchpad.net/~lifeless/launchpad/bug-618396/+merge/35970
[02:14] <jelmer> lifeless: I'm seeing some similar errors to the OOPS one you fixed earlier. The OOPS ID in the job runner test that you fixed is off now.
[02:14] <lifeless> jelmer: ECONTEXTFAIL
[02:14] <jelmer> lifeless: sorry
[02:16] <stub> y
[02:16] <stub> o
[02:16] <lifeless> jelmer: https://devpad.canonical.com/~lpqateam/qa_reports/launchpad-stable-deployment.txt is the thing I look at for things to deploy.
[02:17] <lifeless> stub: hey
[02:17] <lifeless> stub: so I was looking at a timeout with bug searching and noticed that importance isn't indexed.
[02:17] <lifeless> stub: there was a similar issue already reported with date_closed; I've used a slightly unusual index syntax to get the (foo desc, id) ordering working
[02:18] <lifeless> stub: wondered if you're ok with this (and if so, can we add these indexes w/o downtime)
[02:18] <lifeless> stub: spm added the importance one to staging for me for testing
[02:18] <spm> yo stub
[02:19] <lifeless> jelmer: so, some context and I can understand what you're talking about :)
[02:20] <stub> So with PG 8.2 we would never use the index on importance. We might now if it is there. Did you test any example queries to see if they made use of the index on staging?
[02:20] <lifeless> yes
[02:20] <lifeless> 3500ms -> 1800ms
[02:20] <lifeless> the key thing is the ordering of the second field
[02:20] <stub> Ok. Adding both of those indexes makes sense, and we can create them live.
[02:20] <lifeless> cool
[02:21] <stub> We have a date_closed index - UNQUE (date_closed, id) WHERE status = 30
[02:21] <lifeless> yes, it won't be used
[02:22] <lifeless> oh, bah, I missed the unique and partial aspects
[02:22] <lifeless> anyhow it needs to be '(date_closed, id desc nulls first)'
[02:22] <lifeless> to be used.
[02:22] <stub> The index would be used if you are limiting the search to FIXRELEASED bugs.
[02:22] <jelmer> lifeless: sorry, looking at that revision now
[02:23] <lifeless> stub: no, due to the query being "order by date_closed desc, id"
[02:23] <lifeless> http://www.postgresql.org/docs/8.3/static/indexes-ordering.html
[02:24] <stub> Right. So we should drop that index then I think.
[02:24] <lifeless> jelmer: great, please do: if you can qa it as ok to deploy, or incomplete (can just remove the qa-* tags I think in that case), we can deploy more stuff
[02:24] <stub> (the existing one that is less useful than the one we are creating)
[02:25] <lifeless> stub: I can't see that UNIQUE status=30 index
[02:25] <lifeless> stub: in launchpad_dev
[02:25] <stub>     "bugtask__date_closed__id__idx" UNIQUE, btree (date_closed, id) WHERE status = 30
[02:25] <lifeless>     "bugtask__date_closed__id__idx" btree (date_closed, id DESC)
[02:26] <lifeless> stub: anyhow, my patch drops it
[02:26] <lifeless> oh foo
[02:26] <lifeless> I've applied my patch
[02:26] <lifeless> anyhow, my patch drops that index and adds bugtask__date_closed__id__idx2 (doing the add first, so there is no point in time without an index)
[02:27] <lifeless> stub: should my new one also be partial?
[02:27] <stub> I need to split it in two anyway if we want the indexes built live - dropping indexes has to wait until the next db update.
[02:27] <lifeless> stub: interesting. I'm curious why it needs to wait ?
[02:27] <stub> It needs to grab an exclusive lock on the table for a short period, and that will never happen on the live system.
[02:28] <lifeless> ah, but adding doesn't ?
[02:28] <stub> CREATE INDEX CONCURRENTLY - special syntax.
[02:28] <lifeless> ok
[02:28] <lifeless> what should I do to my patch?
[02:28] <lifeless> a) make it partial?
[02:28] <lifeless> b) make there be two patch files?
[02:29] <stub> Can I see your patch as it stands?
[02:30] <lifeless> https://code.edge.launchpad.net/~lifeless/launchpad/bug-121363/+merge/35971
[02:30] <lifeless> separate one for importance : https://code.edge.launchpad.net/~lifeless/launchpad/bug-618396/+merge/35970
[02:31] <stub> I'm unsure on the partial thing - it will be smaller, which makes things using it faster. But we have to be sure that our test queries still make use of it (ie. for all queries we want to use it we have to specify STATUS=30 in the WHERE clause, and confirm PG is smart enough to realise this matches the index and it can be used - fine for simple queries, but want to test more complex stuff to make sure that info doesn't get lost).
[02:32] <lifeless> about 50% of bugs are closed IIRC
[02:33] <lifeless> so it's not going to be massively smaller - not even a single height difference in any decent-sized btree
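The ordering argument above (a query doing `ORDER BY date_closed DESC, id` can use an index on `(date_closed, id DESC)` scanned backwards) can be sketched in plain Python. This is a toy model: NULLS FIRST/NULLS LAST handling is ignored, and the row values are made up.

```python
# Rows of (date_closed, id). The query order (date_closed DESC, id ASC)
# is exactly the reverse of the index order (date_closed ASC, id DESC),
# which is why the second column needs the DESC modifier in the index
# definition: PG can then satisfy the ORDER BY with a backward scan.
rows = [(20100101, 7), (20100102, 3), (20100101, 4), (20100102, 9)]

query_order = sorted(rows, key=lambda r: (-r[0], r[1]))   # date DESC, id ASC
index_order = sorted(rows, key=lambda r: (r[0], -r[1]))   # date ASC, id DESC

assert query_order == list(reversed(index_order))
```

A plain `(date_closed, id)` index has no such reversal: reading it backwards gives `id DESC` within each date, which does not match the query's `id` ascending.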
[02:34] <jelmer> lifeless: I don't think I'll be able to get to it today (I'm on leave), perhaps StevenK is around to have a look?
[02:35] <lifeless> jelmer: sure, I'll naggify him
[02:35] <lifeless> jelmer: it's Sunday, of course you're on leave.
[02:35] <lifeless> jelmer: what's the OOPS thing about?
[02:36] <jelmer> lifeless: I'm on leave this week as well (I'm in CA)
[02:36] <lifeless> ah!
[02:36] <lifeless> jelmer: have fun!
[02:37] <jelmer> lifeless: I'm trying to land my final branches for the buildd master performance improvements, and jml and bigjools are sprinting on some incremental stuff for that this week
[02:38] <jelmer> lifeless: but one final test (the one you fixed up some stuff in last week wrt OOPSes) is still failing. The OOPS ID the test is expecting is off by one.
[02:43] <jelmer> lifeless: any idea what might be going wrong? It's definitely another test isolation issue.
[02:48] <lifeless> what do you mean 'off by one'
[02:48] <lifeless> pastebin the test perhaps, and the error
[02:48] <lifeless> stub: so I think it shouldn't be partial
[02:48] <stub> lifeless: yup
[02:48] <lifeless> stub: ok, and you want two patch files ?
[02:49] <stub> I'll just do the review with patch numbers
[02:49] <lifeless> thanks
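A sketch of the two-step migration stub describes above: the new index is created live with CREATE INDEX CONCURRENTLY (which avoids the long exclusive lock but cannot run inside a transaction block), while dropping the old index waits for the next db update. The SQL below is assembled from the index names mentioned in the conversation; treat it as illustrative, not the actual patch files.

```python
# Step 1: safe on a live table. CREATE INDEX CONCURRENTLY cannot run
# inside a transaction block, so a driver like psycopg2 would need
# autocommit mode for this statement.
CREATE_LIVE = (
    "CREATE INDEX CONCURRENTLY bugtask__date_closed__id__idx2 "
    "ON BugTask (date_closed, id DESC);"
)

# Step 2: DROP INDEX takes a brief exclusive lock on the table, which
# never becomes available on the busy live system, so it is deferred
# to the next scheduled downtime window.
DROP_LATER = "DROP INDEX bugtask__date_closed__id__idx;"

assert "CONCURRENTLY" in CREATE_LIVE
```

This is why stub splits the patch in two: the CREATE can land now, the DROP rides along with the next downtime db update.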
[02:49] <thumper> wallyworld: if you are working in branch foo, and have devel at the same depth in the directory
[02:49] <thumper> wallyworld: go `bzr merge --preview -d ../devel .`
[02:49] <stub> lifeless: Is the bugtask.importance index being applied live too?
[02:49] <lifeless> stub: please
[02:50] <jelmer> lifeless: http://pastebin.ubuntu.com/496781/
[02:53] <thumper> lifeless: oops 1723XMLP312 :((
[02:54] <stub> I guess including id in the indexes is necessary now, as batch jobs can set date_closed in bulk. Previously id was only there to keep the test suite happy.
[02:54] <thumper> # SQL time: 14 ms
[02:54] <thumper> # Non-sql time: 34074 ms
[02:54] <lifeless> stub: do we need the UNIQUE ?
[02:55] <lifeless> jelmer: where is the test code?
[02:55] <lifeless> jelmer: oh, I see, unmodified
[02:55] <jelmer> lifeless: yes, the test is unmodified so it's a test isolation problem. I have (obviously) changed other parts of the tree.
[02:56] <jelmer> The test doesn't fail when run by itself, only on ec2 as part of a full test run.
[02:56] <lifeless> jelmer: and what do you mean by off by one ?
[02:57] <stub> lifeless: It doesn't make any practical difference. In this case, I'd say not. But that is a pretty arbitrary call.
[02:57] <jelmer> If you look at the last line, you'll see that the assertIn fails because the string that we look for has a slightly different OOPS ID
[02:57] <lifeless> stub: ok, I was wondering if there was a practical issue here
[02:57] <stub> lifeless: Maybe in some future version of PG
[02:57] <thumper> lifeless: it is worth looking at oopses 1723XMLP307 1723XMLP308 1723XMLP309 1723XMLP310 as well
[02:57] <jelmer> lifeless: OOPS-1721T6 vs OOPS-1721T7
[02:57] <lifeless> \o/ next CP passed ec2; now to wait for bb
[02:59] <wgrant> Up to where?
[03:00] <lifeless> wgrant: the soyuz thing
[03:00] <lifeless> StevenK: yo!
[03:00] <mwhudson> talking of hidden parameters earlier
[03:00] <mwhudson> now i'm reading a shell script
[03:00] <mwhudson> :(
[03:00] <wgrant> mwhudson: buildd?
[03:01] <wgrant> Or something less sinister?
[03:01] <lifeless> thumper: that's -special-
[03:01] <thumper> lifeless: I'm going to talk to spm to see if we can see what else was going on at the time
[03:01] <lifeless> gl
[03:02] <lifeless> we've got a single-threaded appserver setup on the way
[03:02] <mwhudson> wgrant: do you know what lexbuilder is?
[03:02] <wgrant> mwhudson: A ha ha.
[03:02] <wgrant> I have heard about it.
[03:02] <mwhudson> i guess it
[03:03] <mwhudson> i guess it's kinda like buildd, but based around live-helper, not sbuild
[03:11] <lifeless> stub: can you give me a shout when those indices are up, I want to see how much of a difference it makes to the overall feeling
[03:12] <poolie> heh, so part of the reason dkim is not working completely is a lack of DRP
[03:12] <poolie> DRY
[03:12] <lifeless> do repeat yourself?
[03:12] <poolie> iow a higher level doesn't ask "is it authenticated" but "does it have a gpg signature"
[03:12] <stub> lifeless: You are landing those branches?
[03:14] <lifeless> stub: they are queued in https://pqm.launchpad.net/
[03:14] <lifeless> jelmer: ok thats a reasonable clue
[03:14] <lifeless> jelmer: for some reason the unique file namer is reusing an older OOPS id
[03:14] <poolie> urk
[03:14] <stub> lifeless: Land them now. There is no point me kicking off the index builds yet as they will block until the daily backup has completed (CREATE INDEX CONCURRENTLY has to wait on all existing transactions to complete at one point in the process)
[03:15] <lifeless> probably a missing +1 somewhere.
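The "missing +1" theory can be illustrated with a toy id allocator. This is not Launchpad's real OOPS machinery, just a hypothetical sketch of how a shared sequential counter makes a test's hard-coded id come out off by one when a badly-isolated earlier test consumes an id first:

```python
class OopsNamer:
    """Toy sequential OOPS-id allocator (hypothetical, for illustration)."""

    def __init__(self, prefix="OOPS-1721T"):
        self.prefix = prefix
        self.count = 0

    def next_id(self):
        self.count += 1
        return self.prefix + str(self.count)

namer = OopsNamer()
# If an earlier test in the same run already consumed an id...
namer.next_id()
# ...the test under discussion sees T2 where it expected T1:
assert namer.next_id() == "OOPS-1721T2"
```

Run in isolation the test would get T1 and pass, which matches jelmer's observation that it only fails in a full ec2 run.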
[03:15] <lifeless> stub: yes, they are landing ;)
[03:15] <stub> And I'm typing slow
[03:15] <lifeless> stub: there are three branches in pqm is all
[03:15] <lifeless> :>
[03:16] <stub> lifeless: So unless you want me to kill the daily backup, we are waiting 5 hours.
[03:17] <stub> (8 hours! Geez...)
[03:17] <stub> (Death to BranchRevision!)
[03:18] <lifeless> stub: it wouldn't worry me
[03:19] <lifeless> stub: we have redundant copies of the data anyway; is that backup offsite?
[03:20] <stub> I'm told the backups get copied somewhere, but I haven't seen the rsync scripts or know the final destination - that is IS magic.
[03:21] <lifeless> spm: yo, mr IS.
[03:21] <lifeless> ^
[03:21] <stub> There is an antique RT trying to get confirmation on this, as I wanted to confirm that everything being backed up is what I think is being backed up (inspired by a call for disaster recovery a year or three ago)
[03:22] <lifeless> heh, yes, I see that RT every time I go look at RFWTAD progress ;)
[03:22] <lifeless> stub: would you be up for a learn-cassandra sprint early next year, in principle?
[03:22] <lifeless> learn/evaluate that is
[03:22] <poolie> i would be
[03:22] <poolie> :)
[03:23] <stub> lifeless: If the timing is good and it is in an exciting location ;)
[03:23] <lifeless> poolie: I'll keep that in mind
[03:23] <lifeless> stub: can you let me and statik know what timings are bad for you then?
[03:23] <lifeless> stub: as for the location, nowhere is as exciting as bangkok
[03:23] <lifeless> sorry.
[03:25] <stub> lifeless: Any time is good really. Late March / early April would be best for my visa purposes, but that is my problem really ;) Bangkok is fine! Come on over! But before April, as that would be the preferred timing for the next lot of mass protests or coup.
[03:25] <lifeless> stub: possibly jan - U1 want to really evaluate cassandra
[03:25] <poolie> lifeless: just invite people until the centre of gravity converges on your preferred location :)
[03:25] <stub> (They like to hold these events on long weekends so everyone can participate, and the Songkran holiday is the longest weekend)
[03:25] <poolie> if that's "dunedin" it may be difficult :)
[03:26] <lifeless> their sharding story is not as nice as they'd like.
[03:26] <lifeless> in all seriousness I'd be expecting e.g. florida or something
[03:26] <stub> Dunedin is fine if there is skiing. Jan doubles up with the LP sprint in Dallas.
[03:26] <poolie> i might try to poke dkim along a bit first, then do flags
[03:36] <mwhudson> even in dunedin i don't think there will be much snow in january
[03:41] <stub> Probably more than AU in high season though.
[03:43] <lifeless> 0 > 0 == False
[03:44] <ajmitch> yay for precedence?
[03:45] <poolie> heh gotta love python
[03:45] <poolie> i'm sure that seemed like a cute idea at the time
[03:46] <poolie> 1 > 0 == False
[03:46] <mwhudson> oh it's chaining, isn't it
[03:47] <stub> If you aren't using brackets for clarity you deserve what you get.
[03:48] <stub> Pity the language doesn't enforce it.
[03:49] <poolie> it's not just precedence
[03:49] <poolie> the problem is that it transforms to
[03:49] <poolie> (1 > 0) and (0 == False)
[03:49] <mwhudson> http://python.net/~mwh/blog/nb.cgi/view/weblog/2006/05/15
[03:50] <poolie> it seems like such a small win at the price of being inconsistent with everything else
[03:50] <poolie> maybe not perl?
[03:50] <poolie> mwhudson: because 'XXX', 'in' and 'progress' might later turn out to be defined?
[03:50] <poolie> that's pretty cute
[03:51] <poolie> oh wow
[03:51] <mwhudson> poolie: well
[03:51] <mwhudson> 'in' is a keyword
[03:51] <stub> Taipei! It was all 'everyone should get to Taipei to see the amazing stuff being done there' and then we kept having sprints in N.America.
[03:51] <mwhudson> but otherwise yeah
[03:51] <mwhudson> it was a mistake of course, the - was meant to be #
[03:52] <poolie> "a == 2 or fucked-up"
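The chaining rule under discussion: Python rewrites `a OP1 b OP2 c` as `(a OP1 b) and (b OP2 c)`, evaluating `b` once. That is why `0 > 0 == False` is False, and why poolie's `1 > 0 == False` is the real surprise:

```python
# Comparison chaining: a > b == c means (a > b) and (b == c).
assert (0 > 0 == False) is False    # (0 > 0) is False, chain short-circuits
assert ((0 > 0) == False) is True   # what C-style precedence would give

# The surprising case: both links of the chain are true,
# because 0 == False under Python's bool/int unification.
assert (1 > 0 == False) is True     # (1 > 0) and (0 == False)
assert ((1 > 0) == False) is False
```

So the result flips depending on whether you read it chained or bracketed, which is stub's argument for always using brackets.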
[03:59] <lifeless> stub: ok, so in 6 hours I should ask again?
[04:00] <stub> or 5 hours
[04:00] <lifeless> stub: yeah, but you need time to add them :P
[04:00] <stub> I might need to be reminded too ;)
[04:00] <lifeless> ok, 8pm
[04:01] <lifeless> StevenK: ping . Surely you're not _still_ asleep at 1pm
[04:05] <spm> note to self. LPIA != PA-RISC. wgrant, that builder is now lpia (again)
[04:08] <wgrant> spm: Which builder?
[04:09] <wgrant> Ah, in the question.
[04:09] <wgrant> Thanks.
[04:09] <wgrant> Looks good.
[04:10] <spm> oh. sorry, yes. rehinium or similar spelling
[04:12] <lifeless> stub: 44 /   33  RootObject:+login - I think thats the OAuthNonce thing, 3rd highest oops count atm
[04:13] <wgrant> is there a good reason for OAuthNonce to be in the DB, rather than memcache?
[04:17] <stub> We actually make use of the transactional integrity it gives us for some edge cases
[04:17] <stub> I had a branch that moved it to memcache, which is when I picked up on this.
[04:18] <wgrant> Does a request retry try to perform the authentication again?
[04:18] <stub> I don't know why +login is messing with OAuthNonce - oauth is a different authentication system to +login which is OpenId (I don't think they make use of the same views?)
[04:19] <stub> wgrant: Yes - the whole request is tried again
[04:19] <wgrant> Ah.
[04:21] <StevenK> lifeless: No, I was afk. Hm?
[04:23] <lifeless> StevenK: can you please qa rev 11556?
[04:25] <StevenK> lifeless: Fix an OOPS when an archive admin uses the +queue page to
[04:25] <StevenK>         accept uploads that close private bugs.?
[04:26] <lifeless> StevenK: there are two linked bugs on that patch
[04:26] <lifeless> both need to be qa'd
[04:26] <lifeless> I'm told dogfood is involved.
[04:26] <lifeless> wgrant reported them and can probably advise.
[04:27] <lifeless> StevenK: if they aren't qa'd, its a team wide pipeline stall :- qa matters :P
[04:27] <wgrant> I'm pretty sure that one of them isn't fixed, but it's easy enough to QA.
[04:27] <lifeless> if it's not fixed, but no worse, tag it qa-ok, set the status back to triaged.
[04:27] <wgrant> If you can't work out details, give me a yell.
[04:28] <lifeless> StevenK: the bugs are linked from the commit message.
[04:28] <StevenK> Well, duh
[04:29] <StevenK> lifeless: Can you give me a few minutes to actually get state rather than jumping and asking "Is it QA'd yet? How about now? Now?"
[04:29] <lifeless> StevenK: Sure, I was simply meaning to give context for assessing priority.
[04:29] <lifeless> I realise that qaing it may take time
[04:30] <lifeless> StevenK: also, I forgot to tell you this last week; statik is ok with funding your hudson experiments on ec2.
[04:30] <lifeless> StevenK: just expense it while it's in the current ballpark; if it starts to really go up in $$$ let me know and we'll recheck.
[04:31] <StevenK> lifeless: My EC2 usage is currently ~ $85US for the month
[04:31] <lifeless> yes
[04:31] <lifeless> the linode is dedicated to hudson too right?
[04:31] <StevenK> It is
[04:32] <lifeless> so, 160 - as long as it stays under 200 I'm sure statik will have no concerns; if it's going to go over, ping me
[04:32] <StevenK> To be completely honest, I think that's another 2 days of test running.
[04:32] <lifeless> who do your expense claims get sent to ? bigjools or vlumper?
[04:33] <StevenK> lifeless: Julian
[04:33] <spiv> vlumper, our transylvanian code team lead?
[04:33] <lifeless> spiv: virtual flacoste :P
[04:34] <wgrant> He's still flumper to me.
[04:36] <lifeless> mailed
[04:38] <StevenK> wgrant: So, create an upload to dogfood that closes a private bug, and see if it goes bang when I accept it. If not, bug 564491 is qa-ok.
[04:39] <wgrant> StevenK: not when you accept it -- it needs to be a non-admin.
[04:39] <StevenK> I can fix that
[04:39] <wgrant> True.
[04:39] <StevenK> I think I've dropped my duck on dogfood
[04:40] <lifeless> StevenK: the short story is : do not stress about the bill. Don't drain the bank account, but don't stress.
[04:40] <lifeless> lol. 'dropped the duck'
[04:40] <StevenK> wgrant: And for bug 566339, create an upload that uses a private e-mail address for Changed-By, same deal.
[04:40] <wgrant> StevenK: I think 566339 is probably not fixed, but yes.
[04:41] <lifeless> thank you guys
[04:41] <StevenK> Yes, but I'd like to confirm it, so I can pull lifeless off my back and onto someone else's. :-P
[04:41] <wgrant> Heh.
[04:42]  * StevenK looks for a private bug
[04:43] <poolie>     ZopeXMLConfigurationError: File "/home/mbp/launchpad/lp-branches/dkim/lib/canonical/launchpad/webapp/configure.zcml", line 204.4-209.10
[04:43] <poolie>     ImportError: cannot import name SAFE_INSTRUCTIONS
[04:43] <poolie> does that mean anything?
[04:43] <mwhudson> poolie: run utilities/update-sourcecode
[04:43] <wgrant> poolie: update-sourcecode
[04:43] <poolie> thanks
[04:43] <wgrant> Your bzr-builder is out of date.
[04:44] <mwhudson> wgrant: is http://williamgrant.id.au/f/1/2009/soyuzness.html still vaguely current?
[04:45] <wgrant> mwhudson: No.
[04:45] <wgrant> There's a wiki page
[04:45] <wgrant> https://dev.launchpad.net/Soyuz/HowToUseSoyuzLocally
[04:45] <wgrant> It is an evolved version of the original.
[04:45] <mwhudson> thanks
[04:49]  * mwhudson is going to try to set up his beagle board xm as a local builder, sounds like a fun waste of time
[04:49] <lifeless> also you may need to delete lib/mailman and run 'make'
[04:50] <wgrant> mwhudson: Are beagleboards ARMv7?
[04:50] <mwhudson> yes
[04:50] <wgrant> You'll need a bit of tweaking to get the buildd to work remotely.
[04:50] <mwhudson> fairly sure, at least
[04:50] <wgrant> But it isn't difficult.
[04:50] <mwhudson> ok
[04:50] <wgrant> (because they need librarian access)
[04:51] <StevenK> Ah ha. I suspect the Lucid upgrade has broken dogfood.
[04:51] <lifeless> <type 'exceptions.TypeError'>: cannot concatenate 'str' and 'NoneType' objects
[04:51] <lifeless> yay buildbot. I hates you.
[04:52] <StevenK> lifeless: I can't QA this on dogfood at least
[04:52] <wgrant> What's up with it?
[04:52] <StevenK> Exception-Value: could not access file "$libdir/plpython": No such file or directory
[04:52] <wgrant> Ah, awesome.
[04:52] <StevenK> And staging is down
[04:52] <wgrant> Easily fixed, at least.
[04:52] <StevenK> lifeless: Dropped
[04:53] <persia> mwhudson, Take care: performance is such on those that I don't even use raw sbuild, let alone anything else.
[04:53] <lifeless> dropped ?
[04:53] <persia> (although the XM might be faster)
[04:53] <StevenK> lifeless: I can't QA those two bugs; dogfood is broken, and staging is down for a code update
[04:53] <mwhudson> persia: it's just for testing, i'm not actually interested in building anything non-trivia;
[04:53] <mwhudson> persia: the xm has 512 megs of ram at least
[04:54] <lifeless> StevenK: wgrant seems to think that dogfood is easily fixed.
[04:54] <wgrant> lifeless: By a sysadmin.
[04:54] <StevenK> I thought it required at least a GSA
[04:54] <lifeless> wgrant: here's one I prepared earlier.
[04:54] <StevenK> Ah, see ... :-)
[04:54] <wgrant> I didn't know LOSAs did DF
[04:54] <lifeless> wgrant: its lp; we start there.
[04:54] <StevenK> They don't
[04:55] <lifeless> StevenK: why not?
[04:55] <lifeless> anyhow, digression, if we need a sysadmin
[04:55] <StevenK> Because we manage it, not them
[04:55] <lifeless> StevenK: that's kind of a statement of the status, not a rationale.
[04:56] <lifeless> I so want buildbot to say 'transubstantiating', not just 'substantiating'
[04:56] <StevenK> lifeless: Then please ask mthaddon or bigjools, obviously I can't recall correctly
[04:57] <lifeless> I've pinged vanguard in is
[04:57] <StevenK> lifeless: Hold on, I'm still checking
[04:57] <lifeless> pjdc: may be easier to discuss here.
[04:58] <pjdc> righto
[04:58] <lifeless> pjdc: the upgrade to lucid has probably jiggled the pg plpython packages around
[04:58] <wgrant> You probably need postgresql-plpython-8.3
[04:58] <wgrant> Unless it's running 8.4 already.
[04:59] <lifeless> also staging is just -down-, it's been down all day AFAIR
[04:59] <StevenK> It's running 8.4
[05:01] <StevenK> lifeless: I'm not even sure if the database has made it in one piece, to be fair
[05:01] <StevenK> \dT psql launchpad_dogfood does not tell a pretty story
[05:04] <StevenK> Ah ha, dogfood is running both.
[05:04] <StevenK> pjdc: Yes, please install postgresql-plpython-8.3 on mawson
[05:05] <pjdc> StevenK: hmm, it doesn't seem to be available in lucid
[05:06] <wgrant> It's not.
[05:06] <wgrant> It's probably in the PPA.
[05:06] <wgrant> Otherwise karmic.
[05:07] <pjdc> it doesn't look like mawson had karmic or ppa sources before the upgrade
[05:07] <wgrant> It wouldn't have. karmic has 8.3 natively, whereas lucid just has 8.4.
[05:08] <pjdc> well, mawson went hardy -> lucid
[05:08] <wgrant> Oh, it was hardy before. Right.
[05:08] <wgrant> My point stands.
[05:08] <pjdc> looks like it was a vanilla hardy version, so i guess let's see if that will install
[05:09] <StevenK> We should probably transition dogfood to 8.4, but that will take about two days. (Sadly, I'm not kidding)
[05:13] <pjdc> StevenK: postgresql-plpython-8.3 is installed (as are its dependencies python2.5 and python2.5-minimal)
[05:13] <poolie> ok it seems like dkim works but not for the case of mail to new@
[05:13] <wgrant> It's not using 8.4 instead?
[05:13] <StevenK> pjdc: Many thanks
[05:14] <StevenK> wgrant: Like I said, it's running both, and psql is still talking to 8.3
[05:16] <pjdc> StevenK: you're welcome!
[05:16] <pjdc> StevenK: if you could ping a GSA when that can be removed again, that'd be super :)
[05:16] <StevenK> pjdc: Was indeed already my plan -- I'll talk to Julian on our stand-up about moving mawson to 8.4
[05:17] <pjdc> excellent
[05:20] <thumper> ah poo
[05:20] <thumper> what "special" things do I need to do to run tests on maverick?
[05:20] <wgrant> Downgrade python-psycopg2 to Lucid's version.
[05:20] <thumper> is it just the database library?
[05:21] <wgrant> We need to work out how best to fix that.
[05:21] <wgrant> Our code is sort of broken.
[05:21] <thumper> wgrant: happen to remember the command to do that off the top of your head?
[05:21] <thumper> that it is
[05:22] <wgrant> thumper: wget http://launchpad.net/ubuntu/+archive/primary/+files/python-psycopg2_2.0.13-2ubuntu2_i386.deb
[05:22] <wgrant> sudo dpkg -i python-psycopg2_2.0.13-2ubuntu2_i386.deb
[05:22] <wgrant> Replace i386 with amd64 if you are one of them.
[05:22]  * thumper is 64 bit
[05:23] <thumper> wgrant: ta
[05:23] <wgrant> (the problem is that we use strs when constructing some queries that want unicodes)
[05:24] <wgrant> eg. lots of places do getUtility(IDistributionSet)['ubuntu'] or similar.
[05:24]  * thumper nods
[05:24] <wgrant> And the celebrity descriptors suffer the same problem.
[05:24] <lifeless> wgrant: the db library is being stupid
[05:24] <wgrant> lifeless: I'd say we are...
[05:24] <lifeless> wgrant: python2.x is -not- suitable for strict u'' enforcement
[05:25] <wgrant> Storm is strict about it.
[05:25] <wgrant> As it should be.
[05:25] <wgrant> psycopg2 is now too.
[05:25] <lifeless> which is a bad idea to do without a new major release
[05:25] <lifeless> I guarantee nearly every single nontrivial db app will be broken by this
[05:25] <lifeless> I'm not denying the semantic clarity it brings
[05:26] <lifeless> but hell
[05:26] <lifeless> even simple things like traceback objects and Exception are not unicode safe in 2.x
[05:26] <lifeless> I suspect we want to put a patch up to be SRU'd to make this sane again.
[05:26] <lifeless> by sane I mean 'works with the status quo'
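A toy model of the strictness being argued about (assumption: this simplifies what Storm and the newer psycopg2 actually do): text parameters must be unicode, and byte strings are rejected rather than silently decoded, which is what breaks Python 2 code passing plain str literals like 'ubuntu':

```python
def bind_text(value):
    """Toy version of a strict text-parameter check (hypothetical)."""
    # Under the stricter rules, text columns only accept unicode;
    # byte strings raise instead of being silently decoded.
    if isinstance(value, bytes):
        raise TypeError("parameter must be text (unicode), not bytes")
    return value

assert bind_text(u"ubuntu") == u"ubuntu"

rejected = False
try:
    bind_text(b"ubuntu")   # a plain str literal under Python 2
except TypeError:
    rejected = True
assert rejected
```

This captures lifeless's complaint: the strictness is semantically cleaner, but every call site that passed a plain str (`getUtility(IDistributionSet)['ubuntu']` and friends) breaks at once when it lands in a point release.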
[05:28]  * thumper frowns
[05:28] <thumper> I've still got problems
[05:28] <lifeless> spm: sorry to interrupt you, but lpbuildbot seems buggered for lp_prod
[05:28] <lifeless> thumper: we know :P
[05:28] <thumper> canonical.testing.layers.MemcachedLayer setup
[05:28] <lifeless> thumper: oh sorry, you mean with maverick ?
[05:29] <thumper> layer setup
[05:29] <lifeless> backtrace?
[05:29] <thumper> No such file or directory
[05:29] <wgrant> What's it whining about?
[05:29] <thumper> what a useful error
[05:29] <wgrant> Do you still have memcached installed?
[05:29] <thumper> no path given
[05:29]  * thumper wonders
[05:29] <wgrant> It's not left some stupid pid around like the librarian does?
[05:30] <thumper> what is c?
[05:30] <lifeless> poolie: cool
[05:30] <lifeless> thumper: 299,792,458 metres per second
[05:31] <poolie> i just filed three consecutive bugs
[05:31] <persia> Depends where you are, really.  Might be a bit faster up a mountain, or a bit slower in a valley.
[05:31] <poolie> it must be a quiet day
[05:31] <poolie> really?
[05:32] <spm> lifeless: le sigh. that's a full stab to recover I suspect... but trying master only.
[05:32] <lifeless> persia: only if you're the smartest mathematician on the disc
[05:32] <spm> cud cud cud cud
[05:33] <persia> lifeless, I'm thinking atmospheric excitement influenced by current density vs. influence of turtles
[05:33] <lifeless> persia: but do you have 4 stomachs ?
[05:33] <thumper> wgrant: apparently memcached was uninstalled as I upgraded
[05:34]  * persia is not nearly that capable a mathematician
[05:34] <mwhudson> wgrant: where in https://dev.launchpad.net/Soyuz/HowToUseSoyuzLocally does bob the builder get created?
[05:34] <wgrant> mwhudson: Sampledata.
[05:34] <wgrant> Others, /builders/+new
[05:34] <mwhudson> as in utilities/soyuz-sampledata-setup.py?
[05:34] <wgrant> mwhudson: No, it's been in the sampledata since 2005.
[05:35] <lifeless> spm: let me guess. You stabbed, but forgot the first aid kit.?
[05:35] <mwhudson> ah, i'm not authorized
[05:35] <wgrant> mwhudson: Are you bootstrapping from nothing?
[05:35] <mwhudson> wgrant: sorry for the noise
[05:35] <mwhudson> wgrant: no
[05:36] <wgrant> That gets a bit messier.
[05:36] <wgrant> So good.
[05:38] <mwhudson> hah
[05:38] <mwhudson> there doesn't seem to be an armel processor in the sample data?
[05:39] <wgrant> There should be a family.
[05:39] <wgrant> maybe not a processor.
[05:39] <wgrant> Indeed not.
[05:39] <spm> lifeless: more like didn't stab enough
[05:39] <mwhudson> wgrant: is there a ui for that?
[05:39] <wgrant> mwhudson: No.
[05:39] <wgrant> You can use the factory, or good old psql.
[05:39] <wgrant> Then you need to add a DAS.
[05:40] <wgrant> For that there is UI.
[05:40] <wgrant> Not sure if it's linked, though -- DistroSeries:+addport, IIRC.
[05:41] <spm> sigh. no. full stab coming up.
[05:44] <mwhudson> wgrant: thanks
[05:44] <mwhudson> some impressive warnings on that page :)
[05:44] <wgrant> Yes.
[05:44] <wgrant> The ZCML also says to poke sabdfl before changing any of the values.
[05:45] <poolie> lifeless: no rush, but you could comment some time on https://bugs.edge.launchpad.net/launchpad-foundations/+bug/643224 as to where the configuration ought to go
[05:46] <lifeless> poolie: I did
[05:47] <lifeless> poolie: or did you want a more detailed sketch ?
[05:48] <lifeless> spm: I only ask because of 502s
[05:49] <lifeless> wgrant: you might want to patch that to instead mention roles.
[05:49] <wgrant> lifeless: Hm?
[05:49] <lifeless> 16:44 < wgrant> The ZCML also says to poke sabdfl before changing any of the values.
[05:50] <lifeless> I'm pretty sure thats delegated now
[05:50] <wgrant> Heh, probably.
[05:51] <lifeless> spm: thanks
[05:52] <spm> sigh. tho one slave is still being levels of stubborn.
[05:54] <lifeless> spm: looks all yellow now
[05:54] <lifeless> I really have to patch bb
[05:55] <lifeless> http://en.wikipedia.org/wiki/Transubstantiation would be a much cooler message
[05:55] <lifeless> nup, db_lp fail
[05:58] <thumper> wallyworld: got those questions?
[05:58] <wallyworld> thumper: skype?
[05:58] <spm> lifeless: well. for varying values of "yellow"
[05:58] <thumper> wallyworld: aye
[06:00] <lifeless> StevenK: are you unblocked for qaing?
[06:04] <mwhudson> wgrant: i uploaded something and ran process-upload -- and now nothing is happening
[06:04] <mwhudson> wgrant: where should i be looking?
[06:09]  * poolie loves makeSignedMessage(oh no actually not signed) :)
[06:10] <poolie> oh even better that's the default :)
[06:11] <wgrant> mwhudson: What did process-upload say?
[06:11] <wgrant> Did you run it with -vvv?
[06:11] <poolie> i realize it means "potentially signed message"
[06:12] <wgrant> (-vv is OK too, but -vvv lets you see its emails, among other details)
[06:12] <mwhudson> wgrant: https://dev.launchpad.net/Soyuz/HowToUseSoyuzLocally doesn't say -vvv so no
[06:12] <wgrant> mwhudson: Bah.
[06:12] <wgrant> -vvv
[06:12] <wgrant> It may have already moved it to rejected or failed.
[06:12] <wgrant> Does root have mail?
[06:13] <wgrant> Oh, it's Zopeless, so it probably won't actually send mail.
[06:13] <wgrant> Sad.
[06:13]  * mwhudson edits the wiki
[06:13] <mwhudson> ah
[06:13] <mwhudson> Unable to find hello_2.4.orig.tar.gz in upload or distribution.
[06:13] <wgrant> debuild -S -sa
[06:13] <mwhudson> i guess i need to force a full upload somehow
[06:14] <poolie> i think -sa does that
[06:14] <poolie> nb with bzr builddeb you need 'bzr builddeb -S -- -sa'
[06:14] <poolie> iirc
[06:14] <wgrant> That's right.
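For reference, the full-source upload recipe wgrant and poolie are trading (a sketch; the package name comes from the log, the flags from debuild's documented behaviour):

```shell
# -S builds a source-only package; -sa forces the .orig tarball into
# the upload, which fixes "Unable to find hello_2.4.orig.tar.gz in
# upload or distribution" on the first upload of an upstream version.
debuild -S -sa

# With bzr builddeb, flags after "--" are passed through to the
# underlying source builder:
bzr builddeb -S -- -sa
```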
[06:14] <mwhudson> http://launchpad.dev:58080/100/nHOxGYhrhsWKP0SXew1T6tEUh30.txt
[06:14] <mwhudson> er
[06:14] <wgrant> Yeah, not useful.
[06:14] <mwhudson>  Cannot build any of the architectures requested: any
[06:14] <wgrant> mwhudson: Do you have chroots?
[06:14] <wgrant> Ah.
[06:14] <mwhudson> i guess because i'm uploading to a ppa
[06:15] <mwhudson> and the builders are both 'official'
[06:15] <wgrant> If you only have an armel chroot, you'll need to mark it unrestricted, or whitelist the PPA.
[06:16] <mwhudson> wgrant: how do i do that?
[06:16] <lifeless> spm: Cannot rebuild database. There are 1 open connections.
[06:16] <lifeless> spm: https://lpbuildbot.canonical.com/builders/lucid_lp/builds/163/steps/shell_7/logs/stdio
[06:16] <wgrant> mwhudson: Go to the PPA's +admin, and check the ARM box.
[06:16] <mwhudson> i presume checking virtualized isn't the thing to do
[06:16] <wgrant> Then try uploading again.
[06:16] <lifeless> spm: please. apply. signals.
[06:17] <wgrant> mwhudson: Ah, you'll need a virt builder if you want it to actually be dispatched, yes.
[06:17] <wgrant> You also need to fill in the VM host with any string.
[06:17] <mwhudson> wgrant: how does it know not to actually try to reset the builder?
[06:18] <wgrant> mwhudson: Because the dev config sets it to a dummy command.
[06:18] <mwhudson> wgrant: ah
[06:18] <wgrant> On production it does some ssh magic that I'm not allowed to see yet.
[06:18] <mwhudson> 2010-09-20 05:18:16 DEBUG   Created armel build of hello 2.4-3ubuntu2 in ubuntu lucid RELEASE [31] in PPA named test-ppa for Ppa-user (2505)
[06:18] <wgrant> Excellent.
[06:19] <mwhudson> i don't see where that build has gone though :(
[06:19] <wgrant> What do buildd-manager logs say?
[06:19] <wgrant> Probably not much.
[06:20] <mwhudson> heh, the last entry in /var/tmp/buildd-manager/buildd-manager.log is from may
[06:20] <wgrant> Not /var/tmp/development-builddmanager.log
[06:20] <wgrant> ?
[06:20] <wgrant> Or development-buildd-manager.log... I forget which.
[06:20] <wgrant> It changed recently.
[06:20] <mwhudson> ah hah
[06:20] <lifeless> arrggggghhhh
[06:20] <lifeless>   File "/srv/buildbot/slaves/launchpad/production-devel/build/orig_sourcecode/eggs/lazr.restful-0.13.0-py2.5.egg/lazr/restful/_resource.py", line 924
[06:20] <lifeless>     except Exception as e:
[06:21] <lifeless> spm: hi, how far along are we to 'all-lucid' ?
[06:21] <mwhudson> wgrant: hm, nothing for a couple of hours
[06:21] <mwhudson> i guess i'll try restarting it...
[06:22] <wgrant> mwhudson: If it's still not helping, change its logging level from INFO to DEBUG.
[06:22] <wgrant> Should be in lib/lp/buildmaster/manager.py
[06:23] <mwhudson> wgrant: 2010-09-20 17:23:36+1200 [-] No build candidates available for builder.
[06:24] <mwhudson> which is i guess not surprising as the queue seems to be empty
[06:24] <poolie> lifeless: does launchpad have something like bzr's overrideAttr?
[06:24] <mwhudson> _why_ the queue is empty though...
[06:24] <poolie> to monkeypatch something for just one test?
[06:24] <mwhudson> oh, i need to publish the source first?
[06:24] <wgrant> mwhudson: Only if it's a private PPA.
[06:25] <mwhudson> ok, not that
[06:25] <lifeless> poolie: no, I'd love a feature that does that
[06:25] <lifeless> poolie: you can use bzr testcases in lp, if you don't mind making your own factory
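bzr's overrideAttr, the helper poolie is asking about, is easy to sketch on top of stock unittest (an illustrative reimplementation, not bzrlib or Launchpad code):

```python
import unittest


class Config:
    timeout = 30


def override_attr(test, obj, name, value):
    # Replace obj.name for the duration of one test; addCleanup
    # restores the old value even if the test body fails.
    old = getattr(obj, name)
    setattr(obj, name, value)
    test.addCleanup(setattr, obj, name, old)
    return old


class DemoTest(unittest.TestCase):
    def test_patched(self):
        override_attr(self, Config, "timeout", 1)
        self.assertEqual(Config.timeout, 1)
```

After the test runs, `Config.timeout` is back to 30 — the cleanup undoes the patch automatically, which is the whole point over ad-hoc setattr in setUp/tearDown.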
[06:25] <wgrant> mwhudson: What's the build status?
[06:25] <wgrant> Hit 'View build records' on the PPA.
[06:27] <mwhudson> wgrant: hm, pending
[06:27] <wgrant> Oh.
[06:27] <wgrant> It's possible that the PPA is created non-virt.
[06:27] <wgrant> Try un-virting the builder.
[06:29] <mwhudson> hah, it's building
[06:29] <mwhudson> well not really
[06:29] <mwhudson> 2010-09-20 17:29:08+1200 [QueryWithTimeoutProtocol,client] <bob:http://10.0.0.46:8221/> response for "ensurepresent": [False, 'Error accessing Librarian: <urlopen error [Errno 111] Connection refused>']
[06:29]  * mwhudson watches the build be dispatched over & over & over again
[06:29] <wgrant> As expected.
[06:30] <wgrant> There is failure counting in db-devel, IIRC.
[06:30] <mwhudson> the development librarian only listens to the local interface?
[06:30] <lifeless> uyes
[06:30] <wgrant> I believe so.
[06:30] <wgrant> I have done this in a year, though.
[06:30] <wgrant> Er.
[06:30] <wgrant> Haven't.
[06:30] <mwhudson> i guess i can fix that, or get the slave to talk to this proxy i happen to have running
[06:33] <mwhudson> graaaar
[06:33] <wgrant> Welcome to Soyuz :)
[06:34] <wgrant> I suppose I can just about validly say that now.
[06:34] <lifeless> I want a rubber stamp for testfix on prod-devel
[06:34] <lifeless> lazr.restful needed an 'Exception as e:' -> 'Exception, e:'
[06:35] <mwhudson> lifeless: sounds ok
[06:36]  * lifeless takes thumpers name in vain as its a fix to the previous +1
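The breakage lifeless is chasing: `except Exception as e:` is Python 2.6+ syntax, so it is a SyntaxError on the builder's Python 2.5, while `except Exception, e:` is the 2.x-only form that breaks on Python 3. A spelling that parses everywhere (a generic sketch, not the actual lazr.restful patch) is to skip the binding and use `sys.exc_info()`:

```python
import sys


def safe_int(value):
    # 'except Exception as e' -> SyntaxError on Python <= 2.5
    # 'except Exception, e'   -> SyntaxError on Python 3
    # sys.exc_info() sidesteps both, so this parses on 2.4 through 3.x.
    try:
        return int(value)
    except ValueError:
        e = sys.exc_info()[1]
        return "invalid literal: %s" % (e,)
```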
[06:38] <mwhudson> ok, that's actually a little strange, it's the proxy not knowing about hostnames in /etc/hosts
[06:38] <lifeless> what proxy?
[06:38] <mwhudson> polipo
[06:40] <mwhudson> ah, that's a config setting apparently
[06:41] <mwhudson> it's building!
[06:41] <mwhudson> well, unpacking the chroot
[06:42] <wgrant> Go and have dinner.
[06:42] <wgrant> And breakfast.
[06:42] <mwhudson> :)
[06:42] <spm> so buildbot seems to do a good job of firing off an instance; and then losing contact with it. hence db_lp in a questionable state
[06:43] <mwhudson> took 5 minutes on the real builder: https://edge.launchpad.net/ubuntu/+source/hello/2.5-1/+build/1725699
[06:43] <lifeless> add to that that the non-lucid builders should be failing every time atm
[06:43] <lifeless> because of lazr.restful 0.13.0
[06:45] <mwhudson> and it fails
[06:45] <mwhudson> http://archive.launchpad.dev/ubuntu/dists/lucid/main/binary-armel/Packages.gz
[06:46] <wgrant> Ah, yes.
[06:46] <mwhudson> W: Failed to fetch http://archive.launchpad.dev/ubuntu/dists/lucid/main/binary-armel/Packages.gz  504  Host archive.launchpad.dev lookup failed: Host not found
[06:46] <mwhudson> rather
[06:46] <mwhudson> i guess more /etc/hosts nargery?
[06:46] <wgrant> So you need to fix the resolution, and also './scripts/publish-distro.py -C' to actually create the archive.
[06:46] <lifeless> mwhudson: you might like squid
[06:47] <mwhudson> wgrant: what should it resolve to?  same as launchpad.dev?
[06:47] <wgrant> mwhudson: It just needs to make it into Apache.
[06:47] <wgrant> So that should do.
[06:48] <wgrant> I did add it to the stock Apache config, didn't I?
[06:50] <mwhudson> wgrant: it's 403ing, joy
[06:51] <mwhudson> which is a bit strange
[06:52] <wgrant> It is.
[06:52] <wgrant> What does Apache say about it?
[06:52] <mwhudson> [Mon Sep 20 17:51:01 2010] [error] [client 10.0.0.1] client denied by server configuration: /var/tmp/archive/ubuntu/dists/lucid-updates/restricted/binary-armel/Packages.gz
[06:52] <wgrant> Ah.
[06:52] <wgrant> Right.
[06:52] <mwhudson> which is a bit less surprising when you look at the config
[06:52] <wgrant> Forgot that bit.
[06:53] <wgrant> Yeah, I decided to be safe, I suppose.
[06:54] <mwhudson> i certainly hope i don't take this laptop to a cafe and get a 10.x.x.x ip address for a while...
[06:54] <wgrant> Oh, yeah, if you have lp-buildd installed you really want a firewall.
[06:56] <mwhudson> well, that's on the arm board
[06:56] <wgrant> True.
[06:56] <lifeless> stub: ping on the backup :)
[06:56] <lifeless> stub: <- optimist
[06:57] <mwhudson> oh what the hell
[06:57] <mwhudson> W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick/main/binary-armel/Packages.gz  404  Not Found
[06:57] <mwhudson> well, /yes/ but ...
[06:57] <stub> lifeless: Is still running
[06:57] <persia> mwhudson, ports.u.c/ubuntu-ports/...
[06:58] <mwhudson> persia: yes yes, but this is 7 layers in
[06:58] <persia> ftp.internal is merged.
[06:58] <lifeless> stub: thanks
[06:58] <mwhudson> wgrant: any ideas?  is the /etc/sources.list from the chroot mangled somehow?
[07:01] <wgrant> mwhudson: Ah. Check the external dependencies on the PPA's +admin.
[07:01] <wgrant> They are set to archive.ubuntu.com by default, because the dev primary archive doesn't actually have any build-deps that packages might want.
[07:14] <mwhudson> well, this is taking longer to fail at least
[07:16] <wgrant> Is it installing build-deps yet?
[07:16] <mwhudson> past that, it seems
[07:18] <mwhudson> wgrant: this doesn't look very good http://people.canonical.com/~mwh/uhoh.png
[07:18] <mwhudson> wgrant: for several reasons
[07:19] <wgrant> mwhudson: Check the build and b-m logs.
[07:19] <wgrant> Looks like it was GIVENBACK.
[07:19] <mwhudson> i've retried the same build a few times
[07:19] <mwhudson> the buildlog looks fine though
[07:20] <wgrant> Oh, it was successful?
[07:20] <wgrant> You may be running into jelmer's incomplete branch.
[07:20] <wgrant> Try merging his IFORGETTHIS-popen-2 branch.
[07:21] <mwhudson> well
[07:21] <mwhudson> actually i think i'm going to go home
[07:21] <mwhudson> but i'll try that tomorrow :-)
[07:21] <wgrant> That could work.
[07:21] <mwhudson> will it just keep on building in the current state?
[07:22] <wgrant> I suspect so.
[07:22] <wgrant> kill off b-m.
[07:22] <mwhudson> yeah, looks like it
[07:22] <mwhudson> anyway, close enough to success for one day
[07:23]  * mwhudson eods
[07:23] <wgrant> I should also head home.
[07:32] <poolie> what, if anything, is the patch size limit?
[07:33] <lifeless> the review policy asks for patches to be < 800 lines unless you negotiate with your reviewer
[07:34] <lifeless> practically speaking, anything the reviewer finds too awkward/complex/unclear will be pushed back on anyway.
[07:34] <lifeless> personally I think the limit is bogus and should be tossed away, in favour of asking for single-intent, focused patches; which may be very large and mechanical, or small but subtle.
[07:37] <persia> mechanical patches are generally exceedingly large by their very nature
[07:37] <poolie> mm
[07:37] <poolie> i think there's a fair overhead per landing
[07:37] <poolie> which pushes people (or at least me) to do bigger ones
[07:42] <lifeless> spm: substantiate failed
[07:43] <spm> buildbot?
[07:43] <spm> orsum it's effed and died on both ec2 images. sigh.
[07:44] <StevenK> Can we switch to Hudson now?
[07:44] <lifeless> StevenK: is it ready?
[07:44] <lifeless> StevenK: also, how did you go with the qa?
[07:44] <StevenK> QA is underway, still. :-/
[07:44] <lifeless> StevenK: KK
[07:45] <lifeless> StevenK: If there is some way I can help, lemme know.
[07:45] <StevenK> lifeless: Hudson is blocked until we can figure out how to get it building from source in a way that IS are comfortable with.
[07:45] <lifeless> StevenK: have you asked on the hudson list yet ?
[07:45] <lifeless> StevenK: also, I don't see why that blocks it; because there are other things that aren't ready aren't there?
[07:46] <StevenK> lifeless: No, but there is a Debian ITP open about it.
[07:46] <poolie> ok, https://code.edge.launchpad.net/~mbp/launchpad/dkim/+merge/35985 i think fixes a few of those things
[07:46] <StevenK> lifeless: To be brutally honest, it seems like only you and me care, and I'm the only one doing things to it.
[07:47] <lifeless> StevenK: personally, I wouldn't worry about the deb building aspect.
[07:47] <lifeless> its the least of the issues.
[07:48] <StevenK> lifeless: IS refuse to use the upstream .deb package
[07:48] <lifeless> but, and I can't stress this enough, its *really important* to talk to upstream about these things.
[07:48] <lifeless> StevenK: thats fine; we're not ready to deploy it yet are we?
[07:48] <lifeless> StevenK: horses go in front of the carriage.
[07:49] <spm> lifeless: unless you have a horseless carriage, in which case the horses are squished into the engine somehow. which may be behind the carriage.
[07:49] <lifeless> StevenK: in terms of caring / doing; spm has expressed great passion to get rid of BB, so has maris.
[07:49] <lifeless> spm: those are beetles.
[07:49] <lifeless> spm: totally different things.
[07:49] <spm> Lamborghinis!!!!
[07:49] <StevenK> lifeless: James Page has commented on the Debian ITP bug, which already has upstream scribbling on it -- I was going to touch base with him
[07:49] <spm> mainly as BB is an unreliable .... .... .... ... ....!
[07:50] <lifeless> StevenK: why are you worrying about the deployment side of it yet though
[07:50] <StevenK> spm: And Ferraris, some Audis, every Porsche except one
[07:50] <lifeless> frankly, I'd get everything else totally happy first.
[07:53] <poolie> spm, thanks for your patient help, i could have done it without you
[07:53] <poolie> but it would have been harder
[07:53] <poolie> (might as well be honest :)
[07:53] <spm> poolie: ha. np
[07:54] <spm> but, you'll crush my feelings saying I'm not necessary to the process
[07:54] <poolie> i think i should leave now before something goes wrong
[08:05] <lifeless> spm: I think you need a larger weapon
[08:05] <spm> oh yes. just dealing with a pri 90; then it's destressing time
[08:06] <StevenK> NameError: name 'self' is not defined
[08:06] <StevenK> Oh, for the love of ...
[08:09] <lifeless> StevenK: thats on dogfood ?
[08:09]  * jml waves hello
[08:09] <lifeless> StevenK: or local, cause local would let my heart start beating again
[08:09] <lifeless> hi jml
[08:10] <StevenK> lifeless: No, completely different branch
[08:10] <lifeless> whew
[08:10] <StevenK> lifeless: Oh, sorry, I meant production
[08:10]  * StevenK calls for an ambulance to lifeless' house
[08:11] <lifeless> wow, either I really messed up OOPS reporting, or the CP this morning has actually helped a lot
[08:11] <lifeless> https://devpad.canonical.com/~lpqateam/lpnet-oops.html#time-outs
[08:11] <lifeless> also https://lpstats.canonical.com/graphs/OopsLpnetHourly/20100914/20100921/
[08:13] <lifeless> https://lpbuildbot.canonical.com/one_box_per_builder - it is -completely- confuddled.
[08:16] <StevenK> Oh bugger. ImportError: No module named cachedproperty
[08:16] <StevenK> That *is* on dogfood
[08:17] <StevenK> I'm blaming shipit, since that appears in the traceback
[08:17] <StevenK> Looks like you can't use copy_field() when the field you want to copy is in the same class as you are
[08:26] <lifeless> StevenK: your shipit is out of date
[08:26] <lifeless> utilities/update-sourcecode
[08:26] <StevenK> I should run that on dogfood?
[08:27] <lifeless> I would
[08:27] <lifeless> you need to make deps just the same as anywhere else
[08:27] <StevenK> Bleh, it can't check out shipit
[08:27] <lifeless> for prod/staging we run that on devpad (well, we use config-manager, but near enough)
[08:27] <lifeless> could just remove sourcecode/shipit
[08:29] <StevenK> lifeless: I don't like that idea
[08:31] <lifeless> why?
[08:31] <lifeless> is there a shipit.dogfood.launchpad.net ?
[08:31]  * lifeless tires
[08:31] <lifeless> no, there isn't
[08:31] <lifeless> rename it to something else then - mv sourcecode/shipit{,.disabled}
[08:32] <lifeless> shipit is optional in the tree - its closed source
[08:34] <StevenK> lifeless: shipit disabled, thank you
[08:38] <spm> lifeless: buildbot hit with a bigger hammer. thrice.
[08:39] <lifeless> spm: thanks
[08:39] <lifeless> lets see if it sticks
[08:46] <lifeless> grah what does it take to get out of testfix these days
[08:49] <lifeless> spm: bb is building, isn't that meant to take us out of testfix mode?
[08:49] <spm> not sure tbh where the trigger is; at receipt of a fix; or when it fixes.
[08:50] <lifeless> AIUI starting a build is the key
[08:50] <lifeless> bbs, dinner time
[08:56] <lifeless> 3.4 seconds for 99% of all lp back at the 7th aug.
[08:57] <lifeless> 3.95 seconds for 99% of all lp yesterday
[08:57] <lifeless> bah
[08:57] <lifeless> 2.95
[08:57] <StevenK> lifeless: i.e., we've lost 0.5 seconds?
[08:57] <lifeless> I think so
[08:57] <lifeless> dunno how much noise there is in the usage patterns
[08:58] <lifeless> interestingly the mean has moved up 0.06, but the variance has come down more than enough to compensate
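The arithmetic lifeless is describing — mean up slightly while the 99th percentile comes down — falls out directly once the variance shrinks, and is easy to reproduce on synthetic timings (nearest-rank percentile here; one common convention, not necessarily what lpstats uses):

```python
def nearest_rank_percentile(samples, pct):
    # Nearest-rank method: the smallest value such that at least
    # pct% of the samples are <= it.
    ordered = sorted(samples)
    rank = max(1, int(round(pct / 100.0 * len(ordered))))
    return ordered[rank - 1]


# A loose distribution with a long tail vs a tight one whose mean is
# actually a touch higher.  (Synthetic numbers, not the real lp data.)
loose = [1.0] * 90 + [2.0] * 5 + [10.0] * 5   # mean 1.5, p99 = 10.0
tight = [1.6] * 99 + [3.0] * 1                # mean ~1.61, p99 = 1.6
```

The tight distribution's mean is higher, yet its 99th percentile is far lower — exactly the "mean moved up 0.06 but variance came down more than enough to compensate" shape.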
[09:02] <adeuring> good morning
[09:09] <mrevell> Hello
[09:41] <StevenK> allenap: Hi! Are you available for a call in about 10?
[09:42] <wgrant> StevenK: Did you get anywhere with DF?
[09:42] <allenap> StevenK: Would 20 be okay?
[09:42] <allenap> StevenK: So, about 0900 UTC.
[09:42] <StevenK> wgrant: Fighting horribly
[09:42] <wgrant> Heh.
[09:43] <StevenK> And giving up
[09:43] <wgrant> That bad?
[09:44] <lifeless> stub: backup finished?
[09:45] <StevenK> wgrant: Well, Julian's turned up, and I have other things aside from work to be doing
[09:45] <wgrant> Ah, right.
[09:45] <wgrant> So it's not defeat.
[09:46] <lifeless> bigjools: oh hai!
[09:47]  * StevenK doesn't even have the motivation to insult wgrant, so I must be sick
[09:47] <lifeless> :(
[09:47] <lifeless> vitamin C time
[09:47] <bigjools> hai
[09:47]  * bigjools is sprinting with jml this week
[09:48] <lifeless> cool
[09:48] <wgrant> bigjools: Ready for the Twisted overdose?
[09:48] <bigjools> in body yes, in mind no
[09:49] <lifeless> hey, there is a patch - 11556 IIRC - that needs qaing on dogfood; its the blocking patch in the line for doing quicker deployments of a bunch of bugfixes.
[09:49] <lifeless> StevenK was trying to qa it, but ran into some stuff.
[09:49] <bigjools> ok
[09:49] <bigjools> there's 2 that need DF QA
[09:49] <StevenK> wgrant: He probably won't remember it if he drinks heavily enough
[09:50] <jml> http://isometric.sixsided.org/strips/twisted_plutonium/
[09:50] <wgrant> Is Twisted not itself sufficiently intoxicating?
[09:51] <bigjools> StevenK: will you be able to do those QA things?
[09:53] <StevenK> bigjools: Until when?
[09:53] <bigjools> a question is not a valid response to a question
[09:59] <StevenK> bigjools: Bug 564491 qa-ok
[09:59] <_mup_> Bug #564491: Cannot accept package which closes inaccessible bug <boobytrap> <oops> <qa-ok> <queue-page> <ui> <Soyuz:Fix Committed by julian-edwards> <https://launchpad.net/bugs/564491>
[09:59] <StevenK> bigjools: Do you think there is any point QA'ing 556339?
[10:00] <bigjools> bug 556339
[10:00] <_mup_> Bug #556339: systray.py crashed with SIGSEGV in vtable for QPaintDevice() <amd64> <apport-crash> <apport-failed-retrace> <lucid> <hplip (Ubuntu):Confirmed> <https://launchpad.net/bugs/556339>
[10:00] <bigjools> Oo
[10:00] <StevenK> Sorry, 566339
[10:01] <StevenK> bug 566339
[10:01] <_mup_> Bug #566339: Cannot accept package which would notify private email addresses <boobytrap> <qa-needstesting> <Soyuz:Fix Committed by julian-edwards> <https://launchpad.net/bugs/566339>
[10:01] <bigjools> StevenK: no, can you set it back to triaged please
[10:01] <bigjools> and unlink the branch
[10:02] <StevenK> Done
[10:02] <bigjools> cheers - and the other one needs a couple of COPY archive builds
[10:02] <bigjools> on a frozen series, which you then mark released
[10:03] <wgrant> Is buildd-manager actually functional right now?
[10:03] <wgrant> In devel, that is.
[10:03] <StevenK> bigjools: Sorry, you've completely lost me
[10:03] <bigjools> wgrant: why would it not be?
[10:05] <wgrant> r11566
[10:05] <wgrant> AFAICT it will break successful completion.
[10:05] <bigjools> StevenK: the bug was that when a series gets released, any COPY archive builds in progress would cause b-m to oops
[10:05] <lifeless> bigjools: https://devpad.canonical.com/~lpqateam/qa_reports/launchpad-stable-deployment.txt is the report I'm looking at for deployments btw
[10:05] <wgrant> Until the second part lands.
[10:05] <bigjools> wgrant: jelmer said he was going to land everything on Friday, what do you think is missing?
[10:06] <wgrant> I don't think part 2 has landed.
[10:06] <bigjools> what's in part2?
[10:06] <lifeless> jelmer hit trouble with OOPS
[10:07] <lifeless> the same thing we had last week
[10:07] <bigjools> FFS
[10:07] <wgrant> buildd-manager no longer destroys the BQ, so the build will be reset as soon as the upload is queued.
[10:08] <bigjools> https://code.edge.launchpad.net/~jelmer/launchpad/506256-remove-popen-2/+merge/35412
[10:08] <bigjools> not merged :/
[10:08] <wgrant> That's the one.
[10:08] <wgrant> It has an odd test failure.
[10:08] <bigjools> do you know how to fix it?
[10:08] <bigjools> is it even related?
[10:08] <lifeless> StevenK: \o/
[10:08] <wgrant> I don't know how, no. It's a test isolation error of some sort.
[10:08] <bigjools> lifeless: is it too late for me to push this on to you so I can get on with my sprint?
[10:09] <wgrant> Nobody seems to know.
[10:09] <lifeless> bigjools: the oops thing?
[10:09] <bigjools> lifeless: well whatever is blocking jelmer's branch
[10:09] <lifeless> bigjools: its 9pm, I doubt I'll make much traction on it tonight.
[10:09] <bigjools> since it's also now blocking QA of a different branch
[10:09] <lifeless> I spent most of a day staring at it last week.
[10:09] <lifeless> and worked around it
[10:10] <lifeless> I do however have a little more info now from jelmer, so I'll happily stare at it again.
[10:10] <lifeless> bigjools: is wgrant right, that if we deployed r11556 we'd break things?
[10:10] <bigjools> yes
[10:10] <lifeless> ok, so it needs to be marked qa-bad
[10:10] <bigjools> it was a 2-parter
[10:10] <lifeless> with the specific revision. hang a sec and I'll dig it up
[10:11] <bigjools> split up for review purposes, but he should have landed them together really
[10:12] <lifeless> bigjools: 11556 was your patch
[10:12] <allenap> StevenK: http://pastebin.ubuntu.com/496900/
[10:12] <wgrant> 11566
[10:12] <lifeless> wgrant: do you really mean 11556 ?
[10:12] <lifeless> ah
[10:12] <lifeless> 11566, coolio.
[10:12] <wgrant> That's what I said, isn't it?
[10:12] <wgrant> Yes.
[10:12] <lifeless> its pretty far back to consider rollingback too. darn.
[10:13] <StevenK> allenap: http://pastebin.ubuntu.com/496901/
[10:13] <wgrant> lifeless: Howso?
[10:13] <wgrant> That code isn't touched often.
[10:14] <lifeless> wgrant: high chance of unrelated fallout from the same oops thing, I bet it would be bad
[10:14] <bigjools> lifeless: there's nothing wrong with 11556
[10:14] <wgrant> Ah, right.
[10:14] <lifeless> bigjools: ok cool; StevenK was qaing the two bugs - I see one is done.
[10:14] <bigjools> lifeless: I thought you meant jelmer's rev
[10:14] <lifeless> the other is pending?
[10:14] <lifeless> bigjools: I was all confused; I'm synced up now.
[10:14] <bigjools> lifeless: yeah one of them is going to get unlinked, the other is QA OK
[10:15]  * bigjools is going to disconnect from IRC in 5 minutes so I can get some work done
[10:15] <lifeless> bigjools: https://bugs.edge.launchpad.net/soyuz/+bug/566339 - the second bug on 11556 - isn't qa-ok yet.
[10:15] <_mup_> Bug #566339: Cannot accept package which would notify private email addresses <boobytrap> <qa-needstesting> <Soyuz:Triaged by julian-edwards> <https://launchpad.net/bugs/566339>
[10:18] <bigjools> lifeless: that's the one that should be unlinked from the MP
[10:19] <bigjools> it's not fixed, I should not have linked it
[10:19] <lifeless> bigjools: ok, I'll just remove the qa tag from it
[10:19] <bigjools> great
[10:19] <lifeless> that might fix it
[10:19] <bigjools> fix what?
[10:19] <lifeless> if it doesn't, I'll qa-untestable it
[10:19] <lifeless> the deploy script
[10:19] <bigjools> ah that's in action now then
[10:19] <lifeless> candidate-deploy-stuff
[10:20] <lifeless> bigjools: the bits are all coming online yes; its output is linked above
[10:20]  * bigjools is going to run tests locally on jelmer's unlanded branch and try and land it
[10:20] <bigjools> if it still fucks up with this OOPS shit, I am punting to you :)
[10:20] <lifeless> bigjools: it will
[10:20] <bigjools> :/
[10:20] <lifeless> bigjools: I'm looking at it now
[10:20] <bigjools> ok
[10:20] <bigjools> thanks then
[10:20] <lifeless> save your time; I'll email you when I pass out from exhaustion with a brain dump
[10:21] <bigjools> ok :)
[10:22] <lifeless> not that I can land a fix
[10:22] <lifeless> BB shat itself royally.
[10:23] <lifeless> and we're in test fix for the next 2.5? hours
[10:24] <bigjools> can we just kill BB now? :)
[10:24] <lifeless> stub: the new index seems to work - https://lp-oops.canonical.com/oops.py/?oopsid=1724ED352 - 1.6 second query rather than the previous 3+
[10:25] <jml> lifeless, I think testfix rules have changed such that submitting a testfix branch removes the testfix flag
[10:25] <jml> lifeless, can we please have pre-merge testing back?
[10:25] <lifeless> jml: +1
[10:26] <lifeless> jml: I don't know what you mean by that, but the branches are all fine;
[10:26] <bigjools> before we do that, we need a test suite that takes 5 minutes instead of 4 hours
[10:26] <lifeless> its all fixed, just waiting for bb to figure that out.
[10:26] <StevenK> bigjools: Patches welcome
[10:27]  * bigjools wonders what the Aussie equivalent of a P45 is
[10:27] <jml> bigjools, not true! pqm takes 20 minutes to process a branch, so we are by definition happy with a 20 minute processing time per branch.
[10:27] <jml> bigjools, a slab of beer
[10:28] <bigjools> jml: FSVO happy
[10:28] <jml> yeah
[10:28] <jml> the kind of happiness that you drink with ice
[10:29] <lifeless> jml: https://bugs.edge.launchpad.net/launchpad-project/+bugs?field.tag=timeout
[10:29] <jml> lifeless, what am I looking at?
[10:30] <lifeless> 7 of the 60 are done, one under way, and I just closed 2 off as fix released.
[10:30] <lifeless> jml: Progress
[10:30] <jml> \o/
[10:32] <lifeless> jelmer: ping
[10:33] <jml> lifeless, he's on vacation
[10:33] <jml> lifeless, in another tz
[10:34] <lifeless> yes, but he was around not so long ago
[10:34] <lifeless> this thing isn't what I was looking at
[10:38] <lifeless> bigjools-afk: try this http://pastebin.com/fNKSwyTS
[10:38] <lifeless> bigjools-afk: if I'm right it will work
[10:38]  * bigjools-afk is not here
[10:38] <lifeless> it's 9:30 at night; I started at 6:30 - I'm not here either :P
[10:39] <lifeless> bigjools-afk: takes a full test run to show the problem
[10:40] <lifeless> I'll commit this and push it
[10:42] <lifeless> stub: thanks for applying those indices
[10:45] <henninge> jtv, danilos: Here is my current problem: http://paste.ubuntu.com/496907/
[10:46] <henninge> The actual failure is at the end: pofile.currentCount()
[10:46] <henninge> I added some output to see what's going on and I see four current messages but currentCount returns 3.
[10:46] <henninge> what am I missing?
[10:47] <jtv> henninge: is pofile an Ubuntu pofile or an upstream pofile?
[10:47] <henninge> (its five, actually)
[10:47] <henninge> jtv: let me look
[10:47] <henninge> this is from poimport.txt btw
[10:48] <henninge> jtv: upstream
[10:48] <jtv> I'd expect to see 2 current messages there.
[10:48] <henninge> jtv: so does the test. which ones?
[10:49] <jtv> Actually, no, I don't see a reason why the last message shouldn't be current.  But it may have a (deliberate) validation error in the plural.
[10:49]  * henninge checks
[10:50] <jtv> Or maybe it's even supposed not to count as a current translation because it doesn't translate all forms.
[10:50] <jtv> Plurals are nasty.
[10:50] <henninge> jtv: no, the plurals are complete and correct
[10:52] <henninge> jtv: ah!
[10:52] <jtv> (BTW the code for getTranslationMessages has an oversized line)
[10:52] <jtv> ?
[10:52] <henninge> one of the potmsgsets is obsolete, I think
[10:53] <jtv> I was wondering why you weren't filtering for those…
[10:53] <henninge> ;-)
[10:53] <henninge> jtv: yes, balloon does not exist in the template that is being imported in the test.
[10:54] <henninge> jtv: and we create obsolete POTMsgSets for unknown msgids in pofiles, right?
[10:54] <jtv> I'm sure we do _somewhere_…
[10:54] <lifeless> bigjools-afk: ok
[10:55] <lifeless> bigjools-afk: https://code.edge.launchpad.net/~lifeless/launchpad/lessGetLastOops/+merge/35994 - review this
[10:55] <lifeless> merge it into jelmers branch, and then land both.
[10:55] <lifeless> jelmer: ^ that should take care of it
[10:56] <jtv> Can anyone tell me where this revision came from?  http://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel/revision/11369
[10:57] <jtv> It seems to be related to bug 614404, but it's not in a branch linked to that bug.
[10:57] <_mup_> Bug #614404: commit uses system user-id to generate revision-id even when committer id is supplied <Bazaar:Fix Released> <Launchpad Bazaar Integration:Triaged> <https://launchpad.net/bugs/614404>
[10:57] <lifeless> jtv: bzr 2.2
[10:57] <jtv> It was done _for_ bzr 2.2, but this is a Launchpad branch
[10:58] <lifeless> which changes versions.cfg
[10:58] <jtv> (which is breaking things in production for us atm)
[10:58] <lifeless> that commit brings in bzr 2.2 via versions.cfg
[10:58] <jtv> But it's not bzr that's breaking things; it's an undertested Launchpad change to accommodate the bzr upgrade.
[10:59] <jtv> (Well, untested really)
[11:00] <lifeless> I'm not sure what tests would be appropriate; its a fairly deep change in assumptions; how would you ensure that all possible cases are caught, in a reasonable time
[11:00] <lifeless> anyhow
[11:00] <jtv> Yes, that's the big question innit.
[11:00] <jtv> Here's the problem I'm trying to fix:
[11:00] <jtv> directbranchcommit now creates a bzr id based on the responsible person's preferred email address,
[11:01] <jtv> but it doesn't check whether that person has a preferred email address.
[11:01] <jtv> In our case, the person is often a team.
[11:01] <lifeless> there's other code that does this.
[11:01] <lifeless> Perhaps consolidating it all would be a good idea
[11:01] <jtv> Very.
[11:02] <lifeless> (for package branches, an email is needed and teams are often the person so a fallback is needed)
[11:03] <jtv> So we want some kind of "get_bzr_id_for_person" function that falls back to something sensible.
[11:03] <lifeless> no
[11:03] <lifeless> get_email_for_person
[11:03] <lifeless> used by
[11:03] <lifeless> get_bzr_id_for_person
[11:04] <lifeless> the thing for package branches is used in the .changes file
[11:04] <lifeless> when building the recipe
[11:04] <jtv> We already have "get email for person" functions.
[11:04] <lifeless> well, just use whichever one is right :)
[11:04] <jtv> I think the use case you mention is a different one than mine.
[11:05] <lifeless> perhaps; I don't see why the logic would differ
[11:05] <lifeless> both are dealing with:
[11:05] <lifeless>  - need an email address
[11:05] <jtv> In my case, I need a valid bzr user identity.  The sane fallback would probably be a celebrity.
[11:05] <lifeless>  - have to handle teams
[11:05] <lifeless>  - it will be publicly visible
[11:05] <lifeless> jtv: thats what the recipe code does.
[11:06] <lifeless> direct address; fallback to owner preferred; fallback to a well known fake address
[11:06] <lifeless> fallbacks occur when not set or private-address.
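The fallback chain lifeless describes (direct address, then the owner's preferred address, then a well-known fake address) could be sketched roughly like this. The function names `get_email_for_person` and `get_bzr_id_for_person` come from the discussion above; the attribute names, the `NOREPLY` constant, and the stub-object style are illustrative assumptions, not Launchpad's actual API:

```python
# Hypothetical sketch of the fallback chain discussed above:
# direct preferred address -> team owner's preferred address ->
# well-known fake address. Attribute names are assumptions.
NOREPLY = "noreply@launchpad.net"  # placeholder "well known fake address"


def get_email_for_person(person):
    """Return a usable, publicly visible email address for `person`."""
    email = getattr(person, "preferredemail", None)
    if email is not None and not person.hide_email_addresses:
        return email.email
    # Teams often have no preferred address; fall back to the team owner.
    owner = getattr(person, "teamowner", None)
    if owner is not None:
        return get_email_for_person(owner)
    return NOREPLY


def get_bzr_id_for_person(person):
    """Build a bzr committer identity of the form 'Name <email>'."""
    return "%s <%s>" % (person.displayname, get_email_for_person(person))
```

Consolidating both behind one function with a knob, as lifeless suggests, would then just mean parameterising the final fallback.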
[11:06] <jtv> So that's not the same.
[11:06] <lifeless> why can't it be
[11:06] <lifeless> ?
[11:06] <jtv> In our case, fallback to the owner would be misleading.
[11:07] <jtv> In our case the bzr id is not so much for contact purposes as it is for attribution purposes.
[11:07] <lifeless> I don't see that its any better or worse; perhaps I'm wrong about the other code.
[11:07] <jtv> If we get too clever, we only make it harder to see that the change is authored by Launchpad.
[11:07] <wgrant> (Recipes don't fall back to the owner -- the first attempt is the owner, right?)
[11:07] <lifeless> its also for attribution in a .changes file
[11:07] <lifeless> wgrant: recipe owners are teams sometimes
[11:07] <lifeless> wgrant: I believe - maybe wrongly - that the team owner is the first fallback.
[11:07] <lifeless> anyhow.
[11:07] <lifeless> a) its late
[11:08] <wgrant> Oh.
[11:08] <wgrant> I don't think so, but possibly.
[11:08] <lifeless> b) I don't see any difference in needs so far.
[11:08] <wgrant> We need a solution to this.
[11:08] <wgrant> There are two separate implementations at least.
[11:08] <lifeless> c) the recipe code may be wrong, but lets not have two different implementations of the same basic use case.
[11:08] <wgrant> And I can think of two or three other places that need it.
[11:08] <lifeless> jtv: so if your needs are different, please at least minimally use the same function with a knob.
[11:09] <jtv> lifeless: I'm not completely stupid, thank you!
[11:09] <lifeless> jtv: hah, sorry if something came out badly.
[11:09] <lifeless> I'm really very tired.
[11:09] <jtv> Maybe not a good time to discuss this then—it's getting on here as well.
[11:10] <jtv> (Of course we do need it fixed, but we can work it through with the American timezones)
[11:10] <lifeless> jtv: I don't think you're at all stupid; I think perhaps I got the wrong end of the stick with what you were saying about different needs.
[11:10] <lifeless> I'm sure you'll do something sane.
[11:11] <jtv> lifeless: I understand—imagine a smiley face there.
[11:11] <lifeless> :)
[11:11] <henninge> jtv: filtered: http://paste.ubuntu.com/496919/
[11:11] <jtv> These are the rabbit holes we stumble over when we're tired.
[11:12] <jtv> henninge: and the "Foo %i" translation should not validate, so that shouldn't become current.
[11:12] <lifeless> jtv: I don't know if you saw, but it looks like we've shaved 0.5 seconds off of the 99% threshold systemwide
[11:12] <jtv> lifeless: !
[11:12] <jtv> lifeless: stub's graphs are great for this
[11:12] <lifeless> yup
[11:12] <jtv> Seeing the change you've made there is really gratifying.
[11:13] <jtv> testfix :(
[11:14] <lifeless> jtv: small steps; and fortunately its not just me :)
[11:15] <lifeless> I'm fortunate too having the leeway to focus my coding time on this
[11:15] <lifeless> tomorrow should see another CP with a bunch of stuff
[11:15] <jtv> lifeless: got an interesting effect on the +templates page… the biggest of those went down to about 2 seconds for the normal case, but it still takes >7s sometimes
[11:16] <lifeless> jtv: have you captured a ++oops++ for one of the 7 second cases?
[11:16] <jtv> Yes, but not much to see—there are no queries or blocking actions in the interesting part.
[11:16] <lifeless> jtv: so its 'python time' ?
[11:16] <jtv> My current sneaky suspicion is that maybe those templates are usually in storm cache, in which case the page renders in 2s (I could easily shave some more off that but can't justify it right now)
[11:17] <lifeless> jtv: we nuke the storm cache at the end of each request.
[11:17] <jtv> …but when they aren't, deserializing their header strings (kilobyte-order) can take time.
[11:17] <lifeless> jtv: so in prod you won't see a change there.
[11:17] <lifeless> therve has a  branch tuning that if you want to give it a spin
[11:17] <jtv> Tuning it?  In what way?
[11:17] <lifeless> getting rid of fat
[11:18] <lifeless> I filed a bug talking about performance there - saw a page taking 16 seconds to deserialise stuff
[11:18] <lifeless> turns out that when a machine is very busy storm performance takes a huge hit
[11:18] <lifeless> partly its giving up the gil and grabbing it a whole heap
[11:18] <jtv> The problem we feared.
[11:18] <lifeless> and so it goes back on the runqueue each time when it calls into python code
[11:19] <lifeless> thats -machine- load not -many-python-threads- per se.
[11:19] <lifeless> so if the machine is at or beyond 100% cpu, even though the thread has a timeslice permitted, its back trying to grab it rather than going on immediately.
[11:19] <lifeless> prod servers aren't run as hard as staging though
[11:19] <lifeless> anyhow, there is an RT open to get some instances made single core
[11:19] <lifeless> so we can evaluate that.
[11:20] <jtv> What surprised me for the +templates page was the discrete nature of the delay: sometimes it's 2s, sometimes it's 7s, but there's nothing inbetween.
[11:20] <lifeless> thats indeed very interesting.
[11:20] <jtv> (Let me reiterate here my call for a performance experiment server)
[11:20] <lifeless> I wonder what we can do to get a handle on it.
[11:21] <wgrant> It's not because Ubuntu takes 7s and everything else takes 2s?
[11:21] <lifeless> jtv: we can do experiments as needed; as I say there is a single-core one already in the pipeline.
[11:21] <jtv> wgrant: no, I routinely get the Ubuntu page in 2s.
[11:21] <jtv> But if you can think of something with more templates…  :)
[11:21] <lifeless> jtv: so, one thing I'd do is look at the PPR and see if your feeling about it is real
[11:21] <jtv> Maybe it's master/slave, or a difference in privileges.
[11:21] <lifeless> jtv: the ppr graph should show if its 2,7 or around-2, around-7 or whatever
[11:22] <jtv> lifeless: that's where I spotted the difference.  Would be useful to get a bigger sample size though, yes.
[11:22] <lifeless> jtv: if it is pretty clearly 2 peaks, then the ztrace log should let us figure out more data
[11:22] <lifeless> jtv: such as master/slave and so on
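A quick way to test jtv's hunch that the +templates render times cluster at roughly 2s and 7s with nothing in between is to bucket them into a coarse histogram, which is essentially what reading the PPR graph does by eye. A minimal sketch; loading the actual PPR data is left out, `times` is just a list of render times in seconds:

```python
from collections import Counter


def render_time_histogram(times, bucket=0.5):
    """Bucket page render times (in seconds) so that two distinct
    peaks (e.g. ~2s and ~7s) show up as two separate clusters."""
    counts = Counter(round(t / bucket) * bucket for t in times)
    return dict(sorted(counts.items()))
```

Two clearly separated buckets with an empty gap between them would support digging into the ztrace log for master/slave or cache-state differences.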
[11:22]  * jtv looks up ppr
[11:25] <lifeless> Ursinha-afk: I can't see why ;
[11:25] <lifeless> Revision 11556 can be deployed: qa-ok
[11:25] <lifeless> Revision 11557 can not be deployed: not-ok
[11:26] <lifeless> Ursinha-afk: we may need to make the more detailed output visible or something - the summary stuff is only useful if the detailed context above is shown too.
[11:27] <lifeless> Ursinha-afk: (for developers). For folk saying 'time to do a deploy', the summary is sufficient, but I don't think they will mind seeing the rest.
[11:27] <lifeless> Ursinha-afk: https://bugs.edge.launchpad.net/malone/+bug/497386 is qa-ok
[11:27] <_mup_> Bug #497386: ScopedCollection:CollectionResource:#message-page-resource timeouts for bugs with many messages <api> <qa-ok> <timeout> <Launchpad Bugs:Fix Committed by lifeless> <https://launchpad.net/bugs/497386>
[11:30] <lifeless> Ursinha-afk: also we may need a way to say 'this rev fixes bad-commit-12345' other than rollback; for the case when we decide to leave it in and fix it.
[11:30] <gmb> Hooray for misspelt TestCases
[11:31] <lifeless> gnight y'all
[11:32] <jtv> henninge: how's that test doing now?
[11:37] <henninge> jtv: I don't see updateTranslation doing that. It makes all messages current that don't have a translation conflict.
[11:37] <jtv> henninge: it uses a helper… called validate_translation IIRC
[11:37] <henninge> independent of validation errors.
[11:38] <henninge> yes, it does
[11:39] <jtv> henninge: I think it's this line in _makeTranslationMessageCurrent that does it:
[11:39] <jtv>         if new_message.validation_status == TranslationValidationStatus.OK:
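Reconstructed from the single line jtv quotes, the guard in `_makeTranslationMessageCurrent` presumably looks something like the sketch below. The enum values and the `is_current_ubuntu` attribute are illustrative assumptions (the real definitions live in Launchpad's translations code):

```python
from enum import Enum


class TranslationValidationStatus(Enum):
    # Illustrative subset; the real enum is defined in Launchpad.
    UNKNOWN = 0
    OK = 1
    UNKNOWNERROR = 2


def make_translation_message_current(new_message):
    """Mark a message current only if it passed validation.

    Sketch of the guard quoted above; `new_message` is any object
    with `validation_status` and `is_current_ubuntu` attributes.
    """
    if new_message.validation_status == TranslationValidationStatus.OK:
        new_message.is_current_ubuntu = True
```

This is why a translation with a validation error (like the "Foo %i" case) should never become current through the normal path.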
[11:41] <henninge> jtv: the test is now failing on pofile.getPOTMsgSetWithErrors
[11:42] <henninge> which does not find any such POTMsgSets (should find 1)
[11:43] <jtv> henninge: in the recife code as it stands, getPOTMsgSetWithErrors is still hard-wired to look only for upstream-current messages.
[11:44] <henninge> so what should I do? Fix the test?
[11:45] <jtv> Well getPOTMsgSetWithErrors needs fixing, though I'm not sure that's the cause of the test failure.
[11:46] <jtv> I think the other thing is that getPOTMsgSetWithErrors should find the message only if it did become current despite the validation error (which normally it shouldn't).
[11:51] <jtv> henninge: does the test make the failed message current by manipulating the flags directly?  If so, see that it matches expectations—getPOTMsgSetWithErrors is wrongly hard-wired to is_current_upstream, the test is wrongly hard-wired to is_current_ubuntu.
[11:52] <henninge> yes, in order to change that I will have to make the test use is_current_ubuntu
[12:12] <deryck> Morning, all.
[12:14] <persia> Good morning deryck.  I was advised you were the best person to ask for an opinion on 642637
[12:19] <stub> Anyone recall what I can use to see if a script my test just ran emitted the expected OOPS reports?
[12:20] <lifeless> in-process, self.oopses[-1]
[12:20] <lifeless> ex-process getLastOopsReport
[12:20] <lifeless> which is having a little quirky-moment just now, for reasons I haven't dug into.
[12:21]  * lifeless really really is gone now.
[12:21] <stub> ta
[12:22]  * wgrant resets the "seconds since lifeless last left" timer.
[12:49] <mars> wgrant, I think your IRC client does that for you ;)
[12:49] <mars> well, the server /whois actually
[12:50] <wgrant> True, true.
[12:53] <persia> Trick is writing the script to distinguish time-since-last-activity from time-since-last-claim-to-depart
[13:16] <maxb> No CHR today?
[13:34] <allenap> bryceh: Hi, I'm looking into your email now. I'm on CHR today, I haven't done it for months, and it's still a complete pita :) Anyway, I'll try and get you an answer soon.
[13:55] <jtv> ping abentley
[13:55] <abentley> jtv, pong
[13:55] <jtv> hi!  Did you see bug 643345?
[13:55] <_mup_> Bug #643345: Failing branch exports <Launchpad Translations:In Progress by jtv> <https://launchpad.net/bugs/643345>
[13:56] <jtv> The problem in a nutshell: bzr apparently needs an email address when DirectBranchCommit commits, but the user in question doesn't necessarily have a preferredemail.
[13:56] <jtv> (And so our translations-to-branch export script breaks)
[13:59] <abentley> jtv, no, I didn't see it, but I've seen other issues like it.
[13:59] <jtv> It'd be good to have a single solution.
[14:00] <jtv> We run into it with teams in particular.
[14:00] <abentley> jtv, yeah.  It would be nice if everyone had email addresses.  It is 2010, after all.
[14:00] <jtv> Which means that many use twitter for communication.
[14:01] <jtv> I wonder if anyone thinks of us as old-fashioned for using email yet.
[14:01] <jtv> But I digress.
[14:01] <jtv> There are many things one _could_ do to find a valid email address in the case of a team, but I think ultimately they would be misleading.
[14:02] <jtv> So my suggestion is: fall back to a celebrity.
[14:02] <jtv> I don't know if that should be the same one every time, or whether we could have a global one.
[14:02] <jtv> Ahem.  Those would be the same option.  Sorry.
[14:03] <abentley> jtv, I don't know about this.  This is an address that people are using to create commits for the world to see.
[14:04] <jtv> abentley: that's a good point—there may be other things worth trying in some or all of the cases.
[14:05] <abentley> jtv, bzr users have complained about bzr silently generating an email address without them setting it.
[14:05] <abentley> jtv, because they find out much later, and don't want to have to redo all their previous work just to get the right email address on their commits.
[14:05] <jtv> abentley: in our use-case, the best fallback would be some celebrity that stands for "Launchpad committed this."
[14:06] <jtv> I have no idea how suitable that would be for any or all of the other use-cases.
[14:06] <abentley> jtv, are you sure that's what users want, not an opportunity to set the email address to the desired one?
[14:07] <abentley> jtv, because in general, bzr users seem to want the latter.
[14:07] <jtv> abentley: opportunity is always nice, but "you need to set an email address for this team before translations exports to branches will work" would be highly non-obvious.
[14:07] <jtv> Not to mention the potential difficulty in conveying that point to someone without a preferred email address.  :)
[14:07] <abentley> jtv, you mean a warning/error message with that text would not be clear?
[14:08] <jtv> We could write a clear message, but it'd be yet another complication that's hard to foresee or justify through reasoning.
[14:08] <jtv> (The preceding one being "you need to push to your branch," for which we also send out emails)
[14:10] <jtv> Effectively we'd be letting go of Launchpad as a unified repository of identity: "Launchpad knows who you are but Launchpad Translations can't write to a Launchpad Code branch without your email address."
[14:11] <jtv> abentley: I don't suppose we could generate a consistent non-email id along the lines of username@emailless-user.launchpad?
[14:12] <abentley> jtv, we could certainly do that, but do users actually want that behaviour?  Our experience with bzr users suggests they do not.
[14:12] <jtv> abentley: I can see that, but the solution after all is "make sure we have a valid email address for you."
[14:13] <jtv> (Also, aren't there any hidden problems with exposing these email addresses publicly?)
[14:14] <jtv> For our particular script, I think it'd be fine to have a single celebrity produce all commits.  Nice and recognizable.
[14:14] <abentley> jtv, if we ask users to set an email address to be used as their public identity, I don't think exposing it is a problem at all.
[14:16] <abentley> jtv, In my experience, using a fake email address is something users don't want.
[14:16] <abentley> jtv, Do you have contradictory experience?
[14:17] <jtv> abentley: I've seen some pretty hot-headed remarks about public email addresses, but I'm more interested in the question of "using a fake email address is something users don't want given what alternative?"
[14:18] <jtv> If the answer is "if you don't like this, give us an email address to use," I don't think there's much ground for complaint there.
[14:18] <jtv> grounds?
[14:19] <jcsackett> does anyone know (or is there a place i can check) how long staging is likely to be updating for?
[14:19] <jtv> Right now I'm not at all interested in the case where an email address for exactly the right Person (whether user or team) is available.
[14:19] <abentley> jtv, the alternative is to error until users supply an address to use, even if the user-supplied address is fake.
[14:19] <jtv> jcsackett: until it's fixed I guess.  It's been that way since my morning AFAIK.
[14:20] <jtv> abentley: for what our script does, erroring out would be really unsettling.
[14:20] <jcsackett> jtv: thanks.
[14:20] <jtv> jcsackett: at the time I heard mention of fixing it, but haven't had time to look closer.
[14:21] <abentley> jtv, the problem is that commits hang around for a long time.  IME, people would rather get it right at the start, and not have bogus identities on the first few commits.
[14:21] <jcsackett> jtv: i'll check the lists &c, thanks. it's not a rush thing on my part, just curious when i can test something.
[14:22] <jtv> abentley: I'm thinking in the case of our script, it'd be fine to fix that with a celebrity—I just need to pick or create one.  In more interactive use-cases, it'd be different.
[14:22] <jtv> abentley: but I should point out that a preferred email address can disappear "spontaneously."
[14:22] <abentley> I don't understand why you keep going back to your celebrity idea.  Do you think that your intuition trumps my experience?
[14:23] <persia> Could one automatically assign every team an address that would match their mailing list address, and when it wasn't enabled, drop it to /dev/null ?
[14:23] <persia> Users have addresses.
[14:23] <jtv> abentley: no, that's just entirely about our specific use-case where picking a person probably isn't a very good idea in the first place.
[14:24] <abentley> jtv, I don't think your use-case is as specific as you think it is.  It produces commits.  They live a long time.  People get emotional about the contents of commits.
[14:25] <jtv> abentley: the thing is, the choice of person we have there right now is really a bit arbitrary.  It's a cron job doing the actual committing.
[14:27] <jtv> abentley: the branch owner in that script serves as an approximation of whoever enabled the commits, but it picks the current branch owner which isn't necessarily the same person.
[14:28] <jtv> abentley: now, _if_ our use-case is different in that sense, then there is no need to let it weigh down the general solution.
[14:28] <abentley> jtv, it's being done on behalf of someone.  If the service didn't exist, some person would have to be committing to get the same result.
[14:30] <abentley> jtv, the Code team doesn't have any use cases involving automated bzr commits.  We have use-cases involving email addresses for source packages, though.
[14:30] <abentley> jtv, e.g. https://bugs.edge.launchpad.net/launchpad-code/+bug/620137
[14:30] <_mup_> Bug #620137: allow choice of DEBEMAIL for recipes builds <recipe> <Launchpad Bazaar Integration:Triaged> <https://launchpad.net/bugs/620137>
[14:32] <jtv> abentley: but in our case, who exactly is the commit done on behalf of?  The best answer I could give is "all the people who changed the translations."
[14:32] <abentley> jtv, even for source packages generated automatically, even when the email address is valid, users want to specify the *particular* valid address.
[14:32] <abentley> jtv, the commit is done on behalf of the person who enabled exports for that branch.
[14:33] <abentley> jtv, or imports.  Whatever.
[14:33] <jtv> Which is information we don't currently have available with full accuracy.
[14:35] <jtv> In the problem cases we have now, for unrelated reasons, it's rather likely to be inaccurate.
[14:35] <abentley> jtv, limitations of your current data model are not a good justification for planning future behaviour.
[14:37] <jtv> abentley: no, but future behaviour that doesn't solve the problem _and_ requires an extension of the data model isn't very relevant to our problem.
[14:39] <abentley> jtv, that's a tautology that can be shortened to "stuff that doesn't solve our problem doesn't solve our problem".
[14:40] <jtv> abentley: if you want to leave out relevant conditions, sure.  But storing a different person doesn't solve the problem of not having an email address.  It merely increases the relevance of the email address if we have one.
[14:41] <jtv> I would hope that it also increases the chances of having an email address, but it again leaves us with no very good course of action and a non-obvious failure mode when an email address goes away.
[14:42] <abentley> jtv, I don't see how leaving out the conditions changes the truth of the expression.  future behaviour that doesn't solve the problem _and_ *does not* require an extension of the data model isn't very relevant to our problem.
[14:43] <abentley> jtv, what about using the email address of the team owner when the team itself has no email address?
[14:43] <abentley> jtv, for error messages, not for commits.
[14:45] <jtv> abentley: when it comes to error messages we might as well notify an entire team.
[14:46] <abentley> jtv, it would be nice, but we are talking about a situation where we don't have contact information for the team, right?
[14:47] <abentley> jtv, I guess you mean contacting each team member individually?
[14:47] <jtv> abentley: I thought you could email an entire team, as a collection of individuals with separate email addresses.  But no matter.  What I'm so unhappy about is having the extra error condition in the first place.
[14:50] <abentley> jtv, I think that error condition is due to external requirements.  It's irreducible complexity.  But I think we can make it extremely rare.
[14:51] <jtv> abentley: if we can do that, then great.
[14:51] <jtv> How would we go about it for teams?
[14:52] <abentley> jtv, I'm thinking for all Persons, we could have a "public email id" field.  And when someone's configuring imports for a branch, we can require that field to be filled out.
[14:52] <abentley> We can then use the same email id for source package recipe builds.
[14:53] <jtv> abentley: would it be a valid email address?
[14:53] <abentley> jtv, no, it would not be required to be valid.
[14:53] <jtv> (Just wondering if there are any abuse risks)
[14:55] <abentley> jtv, it would be hard for that field to accidentally go away, so the error condition would only be hit if the user decided to clear it manually.
[14:56] <jtv> abentley: what happens when a user transfers branch ownership?  Would we require the new owner to have a public email id?
[14:57] <abentley> jtv, I don't know if it's possible to transfer branch ownership.
[14:57] <jtv> It is.  But I see the point: if we store who initially set up the exports, that indicates which public email id to use.
[14:59] <abentley> jtv, I guess depending on how we do it, transferring ownership could break imports.
[15:00] <abentley> jtv, I'm not sure whether it makes sense to continue doing imports when transferring a branch.  That way, the branch gets contents that its owner never approved.
[15:01] <abentley> jtv, we could also refuse to change ownership to an owner with no public email id.
[15:01] <jtv> abentley: well forgetting admins for the moment, obviously the original branch owner makes the transfer.
[15:02] <jtv> At that point, the original "license" to export to the branch has been given by someone who isn't necessarily an owner of the branch any more, but the new owner receives it with this restriction.
[15:02] <abentley> jtv, right.  So now that the branch is owned by owner-new, should the imports configured by owner-old still apply?
[15:03] <jtv> abentley: I think so—it's behaviour our users already use and expect, and it facilitates transfer to a team.
[15:03] <jtv> If the old owner were to become thoroughly distrusted, I think there's worse problems than ongoing translation commits.
[15:04] <jtv> Also, with the commits happening daily, there's little the new owner could do to become dependent on them not happening before they became obvious.
[15:04] <abentley> jtv, true, although in the case of transfer to a team, owner-old is usually a member of owner-new, and so has the right to configure imports for owner-new.
[15:04] <jtv> Yes.
[15:06] <jtv> Of course keeping the email id of the original requester of the daily exports does mean that we could be committing in the name of a user that may have disappeared.
[15:07] <abentley> jtv, My idea was actually that we would use the public email id of the current owner.
[15:07] <jtv> OIC
[15:07] <jtv> So then transfers are back on the table as an issue to be resolved.
[15:10] <abentley> jtv, if the common case is that owner-old is a member of owner-new, perhaps we should let owner-old set the public email id of owner-new while transferring the branch.
[15:10] <jtv> abentley: you mentioned the possibility of requiring a public email id when transferring a branch… that'd be necessary for what we do, but is it a reasonable thing to ask for other cases?
[15:12] <abentley> jtv, if recipes could be transferred, I think it would be reasonable for that case.
[15:13] <jtv> Then there's transition.  What to do about the existing cases?
[15:14] <jtv> (Also, it'd be nice to have a better way of making "almost-candidates" clear in pickers: "you could almost transfer ownership to this person, _but_ there's no matching public email id yet")
[15:14] <abentley> jtv, for Persons which currently have imports or source package recipe builds, we set the public email id to the value currently being used, and send a notification.
[15:15] <jtv> abentley: I imagine we'd always fall back to a real publicly visible email address when no explicit email id has been set, but I'm thinking especially of the email-less cases.
[15:17] <StevenK> allenap: And you were right, getUtility(IDistributionSet) is not what I want
[15:17] <abentley> jtv, maybe I should have said "for Persons which currently have *working* imports or source package recipe builds..."
[15:18] <jtv> abentley: I don't mean to be finicky about that—just asking what to do about the current problem cases.
[15:20] <abentley> jtv, for the current problem cases, the "no public email id" case applies, and so we use that error handling.
[15:20] <abentley> jtv, send it to the owner, as I suggested, or send it to each member of the team, as you suggested.
[15:20] <jtv> abentley: that's a sizable number of people to bother.  :(
[15:20] <abentley> jtv, this is why I suggested only bothering the owner.
[15:21] <jtv> I see.
[15:22] <jtv> This also means we have to go through a transition period where an owning team must expose a real email address.
[15:23] <jtv> Not saying that's a dealbreaker, just worth addressing if we can.
[15:23] <abentley> jtv, why would an owning team need to expose a real email address?  The public email id need not be real or valid.
[15:24] <jtv> abentley: but we don't support those ids yet, or did I misunderstand?
[15:24] <abentley> jtv, we don't.
[15:24] <jam> hey guys, is it more appropriate to propose merges into "lp:launchpad" or "lp:launchpad/devel" the former being db-devel seemed less appropriate for non-db patches.
[15:24] <jtv> But we do have production failures going on.
[15:25] <abentley> jam, devel unless it has db patches or depends on new versions of scripts.
[15:26] <abentley> jtv, okay, I figured that was pre-transition.
[15:26] <abentley> jtv, what currently happens when we get these failures?
[15:26] <jam> abentley: thanks, that's what I thought, but I'll note that the UI doesn't push you in that direction.
[15:26] <jtv> abentley: all depends on perspective I guess… I was thinking of "from what we have now to the situation where we have public email ids"
[15:27] <jtv> abentley: the exports to branches simply don't work, and we get oopses
[15:27] <jtv> (or at least, logged failures—the oopses turned out not to be working for some reason)
[15:27] <abentley> jam, it's because of the overloading of "development focus" for stacking.  db-devel is the best branch to stack on.
[15:30] <abentley> jtv, so as an immediate fix, you could catch these errors and mail them to the team owner or the team members.
[15:31] <abentley> jtv, I think that having public email ids is the right direction, but perhaps we should get other people involved in the discussion, like sinzui or lifeless or launchpad-dev.
[15:34] <jtv> abentley: discussed with lifeless in the afternoon (about 9 hours ago I think), but got too late in the day for him to make it a reasonable demand.  Timezones make this hard.  I think as an interim measure I'll have to contact the affected teams and urge them to set an email address.  However…
[15:35] <jtv> …I have one question about that: do teams get a preferredemail address in the same way that users do?
[15:35] <jtv> I saw some code earlier that made me wonder.
[15:38] <abentley> jtv, I don't know.
[15:38] <jtv> I'm looking into it.
[15:40] <jtv> abentley: hum—Person.setPreferredEmail starts out with "assert not self.is_team"
[15:40]  * jtv digs for something more conclusive
[15:43] <StevenK> allenap: Changes to db-add-derivedistroseries-api pushed, have a sneaky-peek when you have a chance.
[15:48] <jtv> sinzui: are team email addresses maintained completely separately from user emails?  We have a situation where persons are expected to have a preferredemail, for teams that doesn't seem possible.
[15:49] <sinzui> jtv, they are not separate because we keep people and team in the same table
[15:50] <sinzui> teams have a contact address, users a preferred address, and we test for .is_team in teamowner in sql
[15:51] <sinzui> jtv I have a branch that is dealing with a person/team email address corner-case right now. This is a real nuance.
[15:51] <jtv> sinzui: thanks for clarifying that… is there an easy way to get "preferred email or contact address"?
[15:52] <sinzui> jtv: in python or sql?
[15:52] <jtv> sinzui: python
[15:52] <sinzui> no, this is a look before you leap issue
[15:52] <jtv> sinzui: great thing to hear after the bus has already gone off the edge :)
[15:53]  * sinzui cheats in SQL <person|team> -> EmailAddress (status = 4)
[15:53] <sinzui> jtv, I have long wanted to separate users from teams.
[15:54] <jtv> I've long said that users are the cause of by far most of our problems; anything you can take away from them is good.
[15:54] <jtv> But seriously. :)
[15:55] <sinzui> well, I have argued for a base class and table called agent, and then have three subclasses and tables for robot, user, and team
[15:55] <jtv> sinzui: abentley and I had just figured out that we can fix our present production failures by asking teams to register preferred email addresses, but if they can't have preferred email addresses, we need to work on the code to accept contact addresses as well.
[15:56] <sinzui> If we made everyone robots the users would not be an issue
[15:56] <jtv> Four Terminator films and a TV series—it's been tried.  :(
[15:56] <sinzui> except for when they rebel and turn themselves into users like all Cylons do
[15:56] <jtv> All of this has happened before, and all of it will happen again.
[15:57] <sinzui> I bet the fans are really hoping for it to happen again
[15:57]  * sinzui got bored with Caprica
[15:58]  * jtv tried and failed to watch the pilot
[15:58] <jtv> Unbearably bad.
[15:59] <jtv> sinzui: what's the data model for team contact addresses?  Up to 1 validated address?  Pick any validated address?
[15:59] <sinzui> jtv no :(
[16:00]  * jtv hopes against reason that sinzui is merely disagreeing with his taste in TV series
[16:01] <sinzui> jtv: 1 or 2 addresses: 1 contact address (status 4) and 1 mailing list address (status 3); or 1 contact address that is also the mailing list address (status 4). The magic is status 4, which is used to select the right one in the implementation
[16:01] <jtv> sinzui: isn't status 3 OLD?
[16:02] <sinzui> jtv: but since the address may be None, no contact address, but 1 mailing list address (status 3), in which case we lookup the members of the team and send to their addresses
[16:02] <sinzui> oh, maybe, then I mean 2
[16:02] <jtv> That's VALIDATED.
[16:02] <sinzui> yep, that is what I mean
[16:03] <jtv> ISTM 2 and 4 are the only usable states an email address can be in, and only 2 (VALIDATED) is applicable to teams…  So I guess that amounts to "up to 2 validated addresses, and a bit of complication for picking the right one."
[16:04] <jtv> Wait—you're saying a team _can_ have an email address with status PREFERRED.  Maybe it just doesn't get set as such through the normal channels, but ends up working the same otherwise?
[16:05] <jtv> Of course if the address is then None, we're still, if I conjugate the Colonial correctly, fruck.
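The status model the two are circling around can be sketched as follows. This is a hypothetical reconstruction from the conversation (NEW=1, VALIDATED=2, OLD=3, PREFERRED=4); the enum and helper are illustrative, not Launchpad's actual `EmailAddressStatus`.

```python
from enum import IntEnum

class EmailAddressStatus(IntEnum):
    # Values as worked out in the discussion above; an assumption,
    # not a copy of the real Launchpad enum.
    NEW = 1
    VALIDATED = 2
    OLD = 3
    PREFERRED = 4

def contact_address(addresses):
    """Pick the team's contact address: 'the magic is status 4'.

    `addresses` is a sequence of (email, status) pairs; returns None
    when no contact address is set (the problematic case above).
    """
    for email, status in addresses:
        if status == EmailAddressStatus.PREFERRED:
            return email
    return None  # fall back to the mailing list / member addresses
```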
[16:06] <jcsackett> sinzui: how would you feel about adding a usage_attribute to a view?
[16:06] <jcsackett> i'm sort of reaching through some convolutions to access translations_usage for a product_series, and i think just having the views have their own usage_attribute mirroring whatever is the case on the context might be easier and cleaner.
[16:06] <jcsackett> sorry, ProductSeries.
[16:08] <jtv> jcsackett: in some narrow contexts I dig up a reference to the product-or-distribution, so I can inspect its official_rosetta attribute regardless of what it is.
[16:08] <sinzui> jcsackett, yes it is convoluted because ProductSeries is independent of Product. Any user can set up an automatic import for Ubuntu. henninge, jtv, and danilos can explain the separation and hopefully simplify the issue
[16:08] <jtv> sinzui: how is ProductSeries independent of Product?
[16:09] <sinzui> jtv: if the project chooses unknown, external, (or even does not apply), a user can still set up an automatic import from the series
[16:10] <jtv> sinzui: ah, the translations usage per se can change independently from the series…
[16:11] <jcsackett> sinzui, jtv: okay, so the translations_usage attribute (which is supplanting official_rosetta, jtv), ProductSeries mirrors Product by using the ProductSeries to Product adapter. i'm looking for an easy way to get at that for templates. are you suggesting that we actually need ProductSeries to have its own, entirely separate translations_usage attr?
[16:12] <sinzui> jtv: I think the confusion is that there are many things that can be configured: project, series, and permissions. I think we should have one screen to manage permissions and configuring/enabling the service for the project. I think enabling series imports needs to be clearly explained as a separate issue on the project and series pages.
[16:12] <jtv> Per-series translations usage as we have it now is an emergent property, derived primarily from the presence of (non-deactivated) templates.
[16:12] <sinzui> jtv, jcsackett https://translations.edge.launchpad.net/gedit-class-browser is very confusing
[16:14] <jtv> Yes, some very different things there: setup instructions; import queue; permissions model and translation group.  And no sign of branch synchronization, since that's a per-series setup.
[16:14] <sinzui> How can the project have open permission if it does not use Lp? Why can I see an import queue if it does not use Lp (well, because the series can be set up)? However, if the project sets not applicable, shouldn't the entire page go blank?
[16:15] <jtv> sinzui: you typically want to provide templates and set permissions before you start using Rosetta.
[16:16] <jtv> At that state, you may well need or want to see the import queue to see what's going on with your uploads.
[16:17] <jtv> What's more confusing is that there are 4 access models, but only 2 make sense before you have a translation group—and of course not enabling translation moots them all.
[16:17] <jtv> Causality goes something like this:
[16:18] <sinzui> jtv, so to choose "In Launchpad", the form should insist that I have imported something. The form could include permission widgets as subordinates of "In Launchpad".
[16:18] <jtv> sinzui: I can't answer that without a better understanding of the implications of that setting.
[16:19] <sinzui> jtv: how many projects say official translations, but have never imported a template?
[16:20] <jtv> sinzui: I don't know; I don't see it mattering much.
[16:21] <sinzui> jtv: sure it does. I am trying to locate where translations happen for an upstream project. the project claims it is Lp, but that is not true because they never complete setup
[16:22] <jtv> sinzui: internally I don't think we ever thought of official_rosetta as providing that information, useful as it is.
[16:22] <sinzui> jtv, jcsackett: I think we also have a sizable number of projects that say they use translations, but their code is not i18n enabled
[16:22] <jtv> But looking at it now, would you believe, the majority of official_rosetta projects have no templates.
[16:23] <sinzui> jtv: the bridging-the-gap goal includes clearly stating where services/work is done: in lp, in an external service, and in this case, not applicable
[16:24] <jcsackett> jtv: i can confirm that official_rosetta isn't in use much; the only place that seems to check it is products. distributions, projectgroups, and series (both distro and product) ignore it on the browser side.
[16:24] <jtv> sinzui: in the case of translation, we also have edge cases like "translation is done in Launchpad for these few languages only."
[16:24] <jtv> jcsackett: we also check it for external suggestions.
[16:24] <jcsackett> jtv: ah yes; missed that.
[16:25] <sinzui> jtv: jcsackett I just changed https://translations.edge.launchpad.net/gedit-class-browser to not applicable. the code is not i18n enabled. So I think I should not see the other portlets below the message that clearly states the project has nothing to translate
[16:25] <jtv> (Actually the complexity there is set to move to SuggestivePOTemplate)
[16:25] <sinzui> jtv: yeah, code has similar edge cases
[16:26] <jtv> does that team have any good ways of incorporating those in the usage information?
[16:31] <sinzui> jtv: jcsackett: I think I am reading that rosetta is implicitly enabled by the setup of templates for a series (code is enabled by a branch linked to a series). Should official_translations be enabled when that state is achieved? Does this mean the project owner cannot choose external or not applicable later?
[16:35] <jcsackett> i could see translations_usage on series being a calculated property based on that condition; it does seem like this is less of a case of it following the setting on the related pillar.
[16:35] <jcsackett> at least i assume: are we only talking productseries here, or product and distroseries--it seems similar concerns hold for both.
[16:35] <jcsackett> jtv, sinzui^
[16:36] <jtv> The permissions model certainly provides an additional "master switch" on the translations…  the only difference IIRC between "Closed" permissions and disabling translations is visibility.
[16:37] <jtv> Everything we've done so far has been on the assumption that the considerations for productseries and packages on the one hand are separate from those for products and distributions on the other.
[16:38] <jtv> It will take some re-thinking to get my head around a different way of doing it; though it seems sensible to avoid duplication of the concerns.
[16:40] <jtv> By the way, templates can be disabled.  This should be tantamount (give or take) to reversible deletion.
[16:41] <jtv> Only POTemplates with the iscurrent flag set should be counted when it comes to determining whether their owning object uses Translations.
[16:45] <jtv> jcsackett: for Products, it's quite possible that people set up a pilot, import translations (possibly even a lot of them), and finally decide not to use Launchpad.  So there, whether a project uses Launchpad is meaningful information (though it can only be the case if there are non-deactivated templates).  For productseries, I think having non-disabled templates is a sufficient condition to conclude that it uses Translations.
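The derivation jtv describes can be sketched as a property: a productseries "uses translations" iff it has at least one non-disabled template (`iscurrent` set). Class and attribute names here are illustrative stand-ins, not the real Launchpad model.

```python
class POTemplate:
    def __init__(self, iscurrent=True):
        # iscurrent=False is the "reversible deletion" state described
        # above; disabled templates must not count toward usage.
        self.iscurrent = iscurrent

class ProductSeries:
    def __init__(self, templates=()):
        self.templates = list(templates)

    @property
    def uses_translations(self):
        # Emergent property: derived from the presence of current
        # (non-deactivated) templates, not stored as a flag.
        return any(t.iscurrent for t in self.templates)
```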
[16:47] <jcsackett> jtv, sinzui: okay, so clearly the series needs its own attribute to determine this, rather than just piggybacking off the Product.
[16:47] <jcsackett> which will also ease my initial question, b/c now all contexts hitting the template will have the attribute in question.
[16:47] <jtv> jcsackett: actually that's something we already derive from the presence of templates (with iscurrent set)
[16:48] <jcsackett> jtv: sure, but given the existence of something called "translations_usage" i would think we want that to tell something meaningful.
[16:48] <jtv> jcsackett: yes, I'm talking purely of productseries there.
[16:48]  * jcsackett nods.
[16:49] <jcsackett> so, this still raises the question of consistency: i'm a product owner, and i turn off translations for my product. now i have a series of my product that is running independent of my product decision.
[16:49] <jcsackett> i suppose in this instance it's just a case of "go deactivate everything if you want this off."
[16:49] <jtv> jcsackett: well, the "master usage switch" should override usage for the series.
[16:50] <jcsackett> jtv: okay, i agree. that's what i was driving at (rather circuitously, i admit)
[16:50] <jtv> jcsackett: it's late here, don't use long words.  :)
[16:50] <jcsackett> jtv: sorry. :-)
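The "master usage switch" semantics just agreed on reduce to a simple gate: the product-level setting overrides whatever the series derives from its templates. The function and parameter names below are hypothetical, a sketch of the rule rather than real Launchpad code.

```python
def series_uses_translations(product_enabled, has_current_templates):
    """Series usage is the derived template state, gated by the
    product's master switch: turning the product off turns the
    series off regardless of its own templates."""
    return product_enabled and has_current_templates
```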
[16:50] <sinzui> jcsackett, jtv: I can expound on the question. Should I be able to deactivate the project if it has automatic import enabled?
[16:50] <jtv> In fact, I'd like to wrap up my urgent issue with sinzui and call it a night!
[16:50] <jtv> sinzui: you mean import of translations from a branch?
[16:51] <sinzui> jtv: jcsackett: this was an excellent conversation. We really appreciate it
[16:51] <sinzui> jtv: yes I do mean from a branch
[16:51]  * jcsackett nods. it's helped clear up several things.
[16:51] <jtv> sinzui: so far we have taken the position that you may need some time to set things up right before you open up translation
[16:52] <jtv> (in fact you may set things up experimentally and then give it a miss)
[16:52] <sinzui> jtv: yes, I think I am seeing that situation in projects
[16:53] <sinzui> jtv: do you still need my help with email addresses?
[16:53] <jtv> Imports in particular can take a while.  We can't just say "oh you have templates, you're open for translation" because it could take considerable time to import your existing translations, and you don't want to invite people to do redundant work on translations that you already have coming in anyway.
[16:54] <jtv> sinzui: yes, I still need help with that—first off: is that branch you have waiting in the wings one that would give us an easy way to get a "preferred email address for user or team" in the short term?
[16:54] <jtv> If not: do you have a design in mind that the design for such a function should or could comply with?
[16:55]  * sinzui was thinking of creating an IContactAddressSet adapter for person|team that works like a cast, and provides a set of addresses for the person, or team, or team members
[16:55] <jtv> sinzui: isn't that going a bit overboard with the adapters?  It sounds like a job for an interface.
[16:56] <sinzui> jtv ^ If person, get preferred, if contact address is not None, get contact address, else for member get email address
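sinzui's rule can be sketched directly: a person contributes their preferred address; a team contributes its contact address if one is set, otherwise its members' addresses. All class and attribute names here are hypothetical, a sketch of the proposed `IContactAddressSet` behaviour rather than its implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Simplified stand-in for person|team; not Launchpad's Person model.
    is_team: bool = False
    preferred_email: str = None
    contact_address: str = None
    members: list = field(default_factory=list)

def contact_addresses(agent):
    """If person, preferred address; if team with a contact address,
    that address; else each member's address."""
    if not agent.is_team:
        return {agent.preferred_email}
    if agent.contact_address is not None:
        return {agent.contact_address}
    return {m.preferred_email for m in agent.members}
```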
[16:57] <jtv> sinzui: given what abentley-lunch said, it probably makes sense to have a separate function for what bzr needs, because we may end up falling back on a public email-shaped id that need not be an actual email address.
[16:57] <sinzui> ah, I was missing context
[16:58] <sinzui> jtv: so we want a unique identity as a string, which is commonly an email address
[16:58] <jtv> In the short term however we need a way for people to have "the" email address respected regardless of whether it's a team's or a person's.
[16:58] <jtv> sinzui: you'd have to ask Code people for the details there; my immediate concern is something that definitely obtains a publicizable email address if one is available.
[16:59] <sinzui> teams cannot hide addresses, so the issue is teams without an address
[16:59] <jtv> We can work around the current problem in the very short term by asking affected team owners to set up a contact address.  But the existing code only looks for preferred addresses.
[16:59] <jtv> Yes, that's the issue.
[17:00] <sinzui> does the address need to work?
[17:00]  * sinzui ponders xrds rules for id urls
[17:01] <jtv> sinzui: possibly not, though obviously we want to draw a very clear line around ones that might not.
[17:01] <sinzui> jtv: does this id need to look like an email address, can we use a url
[17:02] <jtv> sinzui: talk to Code people :)
[17:02]  * sinzui looks at membership code for sekret email addresses
[17:31] <jtv> sinzui: bugger—ITeamCreation seems to imply that we only allow contact addresses to be set upon team creation.
[17:31] <sinzui> no, the team can add them at anytime
[17:32] <sinzui> jtv: team use a different view than users
[17:32] <jtv> ah
[17:34] <sinzui> jtv: I am inclined to think of the future with this issue: "launchpad engineering <1234@launchpad.net>" where the number is Person.id like we do for bugs and questions
[17:35] <sinzui> jtv: I have not confirmed that address is available, but I can see there are no handlers looking for that address.
[17:35] <jtv> sinzui: of course we'd like to avoid making up ids if we can; that's something that apparently has been done in the past and which the code people want to do away with.
[17:36] <sinzui> jtv: we use ids because bugs move, and teams change their names
[17:37] <jtv> sinzui: apparently people get really unhappy about made-up ids, and so the feeling among Code seems to be that it's better to error out.
[17:37] <sinzui> I have a problem where it is not possible for the team owner to contact his team because of ridiculous rules involving the team's contact address and mailing list. I might be able to solve the issue with an address like that. It would only work for members of the team
[17:38] <jtv> You mean the address would become functional?
[17:39] <jtv> It sounds cool on the one hand, but suspiciously close to re-inventing mailing lists on the other.
[17:39] <sinzui> jtv: I think we need a functional address, but the debate on it has driven me to tears
[17:39] <jtv> sinzui: I can imagine it vividly
[17:40] <sinzui> mailing lists are indeed the problem...a team can have a list, but no members are subscribed...if a tree falls in a forest and there is no one to hear it, does it make a sound?
[17:40]  * sinzui believes all team members must get email from the list
[17:41] <jtv> So far we were thinking: if we can at least give people, possibly as soon as tomorrow, some control over the effective preferredemail of their team, then we're in a position to require one.
[17:41] <sinzui> jtv: I bet mailing lists would be successful if they were automatically on. all teams have them, and they just work!
[17:42]  * jtv implemented a system like that once… you got a wiki, a mailing list, an svn repository etc.  It was quite nice.
[17:42] <sinzui> projects have series and code, teams have lists and messages
[17:43] <sinzui> jtv: team-name@lists.launchpad.net could be real for all teams, but that is a new feature
[17:54] <jtv> sinzui: it looks as if actually, team.preferredemail should work as it would for user.preferredemail; they're just set in different ways, but setting a team's contact email should produce the effect we're after.
[17:55] <cr3> the launchpad python style guide mentions that methods should be initialCamelCase, but how do I integrate that with frameworks expecting to callback methods which use_underscores in a clear way? is there some kind of comment or something I should mention so that people can know to expect one or the other naming convention?
[17:55] <sinzui> yes, they have different rules about the number of addresses
[17:55] <jtv> abentley-lunch: ^^ this is good news—means we can ask teams to set contact addresses as an immediate workaround.  Need to verify though.
[17:57] <jtv> cr3: our """See `IAnInterfaceThisClassImplements`.""" docstring convention may help.
[17:58] <jtv> cr3: if there's an interface or base class that declares the method that you're defining, mention it in the docstring in that way.
[17:58] <jtv> If there's no such thing, another nice one (which also happens to fit with our guidelines completely) is:
[17:58] <jtv> def doMyThing(self):
[17:58] <jtv>     pass
[17:59] <jtv>  
[17:59] <jtv> my_thing = doMyThing
[18:00] <cr3> jtv: both make sense, I'll try to limit myself to the first approach then and define the interface in an appropriate module, like someframework.py which would scope all those external expectations in a single location
[18:01] <cr3> jtv: the second approach might be nice if I expect the method to be called from within the code base in addition to the framework expecting a certain naming convention. thanks!
[18:02] <jtv> happy hacking
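jtv's alias trick, made concrete: define the camelCase method per the style guide, then bind the underscore name the framework expects to the very same function object. The class and the interface named in the docstring are illustrative, not from a real codebase.

```python
class AuthBackend:
    def getUser(self, user_id):
        """See `IAuthBackend` (hypothetical interface)."""
        return {"id": user_id}

    # Alias for frameworks that call back using underscore names;
    # both names refer to one function, so there is nothing to keep in sync.
    get_user = getUser
```

Since the alias is a plain class attribute, it shows up in introspection and `dir()` just like the original, which is what makes the framework's callback lookup work.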
[18:35] <EdwinGrubbs> abentley: ping
[18:36] <abentley> EdwinGrubbs, pong
[18:40] <EdwinGrubbs> abentley: I read your response to my email about a generic job queue. I'm not interested in creating an end-all-be-all solution. I just think we don't need a bunch of almost identical cronjobs. The registry is adding a couple of job tables, and it would be nice if anybody else adding a table could just re-use one cronjob.
[18:42] <abentley> Edwin, that was a response to Julian Edwards' email, not yours.  My thoughts on having a single script were covered by thumper.
[18:43] <EdwinGrubbs> abentley: I'm wondering if you have any thoughts on how a given class of jobs should opt-in to using a generic job queue. The interface could subclass from a GenericJobSource interface, or for more fine grained control, each implementation could specify classProvides(GenericJobSource).
[18:44] <EdwinGrubbs> abentley: so, in response to thumper, I would like to provide a round-tuit, since you only have square ones.
[18:45] <abentley> EdwinGrubbs, I think that each job should have its own configuration, and should have its own crontab entry, with the particular configuration specified as a commandline parameter.
[18:45] <abentley> EdwinGrubbs, the script itself can then be generic.
[18:48] <abentley> EdwinGrubbs, one thing that's important to the LOSAs is that each task type have its own database ID.
[18:49] <EdwinGrubbs> abentley: why do the LOSAs want each job type to have its own db id? Stub only wanted each cron script to have its own database id.
[18:50] <abentley> EdwinGrubbs, The losas want it so that they have better visibility into resource use problems.  stub only wanted each cron script to have its own database id because each cron script represented a task type.
[18:51] <abentley> EdwinGrubbs, we haven't yet found a way to provide that visibility without database ids, and you say you don't want to do a huge change, so...
[18:53] <EdwinGrubbs> abentley: having the generic cron script use a bunch of separate crontab entries for each task seems like it could lead to a dangerous number of parallel processes. Running them sequentially would help prevent that. A single cron process could switch db ids, and the config could just map job types to db ids.
[18:55] <abentley> EdwinGrubbs, Running jobs sequentially would harm responsiveness, potentially quite significantly.
[18:56] <abentley> EdwinGrubbs, I don't agree that the scale we're talking about could lead to a "dangerous" number of parallel processes.
[18:56] <abentley> EdwinGrubbs, I agree that we could switch DB ids in the script if that seems like a good implementation choice.
[19:07] <EdwinGrubbs> abentley: Even if parallel processes won't overload the system, it will improve throughput if the parallel processes can run any job type, so I think I will go that route.
[19:08] <abentley> EdwinGrubbs, in other words, provide the parallelism via the script, not by running several scripts at once?
[19:09] <EdwinGrubbs> abentley: well, either the script could be threaded, or there could be multiple crontab entries for the same script, but the crontab entries wouldn't have an argument that limited that process to just one type of job.
[19:10] <abentley> EdwinGrubbs, there will need to be limits on the types of jobs that run on a given machine.  CreateMergeProposalJobs must run on crowberry, for example.
[19:11] <abentley> EdwinGrubbs, MergeProposalJobs are supposed to run on loganberry.  (They could also run on crowberry, but that would waste crowberry's resources.)
[19:11] <EdwinGrubbs> abentley: I'm not planning on moving over existing job types. I'm just planning on providing a simple way for a developer to choose to have their job class run on the generic cron script.
[19:12] <abentley> EdwinGrubbs, so when I choose to use that for my latest job type that can only run on loganberry, how will I do that?
[19:17] <EdwinGrubbs> abentley: I wasn't aware of that feature requirement. However, instead of giving the generic cron script a job type argument, the config could specify which servers a job type can run on, or there could be names for groups of jobs depending on what resources they need. A server's config would just have to specify which groups it can run based on which resources (e.g. external network access, or storage access) it provides.
[19:20] <abentley> EdwinGrubbs, I will be happy if the new system provides control over which hosts the job runs on.
[19:21] <abentley> EdwinGrubbs, I still don't think it's a good idea to run distinct job types sequentially, because they can have radically different performance characteristics.
[19:22] <abentley> EdwinGrubbs, so if you introduce parallelism, I would suggest (at least) one thread/process per job type.
[19:24] <abentley> EdwinGrubbs, you should also look at the TwistedJobRunner, because it has support for parallelism.
[19:28] <EdwinGrubbs> abentley: how does someone add a crontab entry anyway? TwistedJobRunner would at least eliminate that step. My main goal is to make it possible to add job types with as little work as possible.
[19:29] <abentley> EdwinGrubbs, add a request to https://wiki.canonical.com/InformationInfrastructure/OSA/LaunchpadProductionStatus
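The scheme the thread converges on can be sketched as one generic runner with a per-job-type config: each type gets its own database user (for LOSA resource visibility) and a set of allowed hosts. The job-type names echo abentley's examples above; the config keys and function are hypothetical, not Launchpad's actual job infrastructure.

```python
# Hypothetical config: job type -> dedicated db user and allowed hosts.
JOB_CONFIG = {
    "create_merge_proposal": {"dbuser": "cmp_job", "hosts": {"crowberry"}},
    "merge_proposal": {"dbuser": "mp_job", "hosts": {"loganberry"}},
}

def runnable_job_types(hostname):
    """Job types a given host may run, per the config.

    A generic cron script would call this, then run each returned
    type in its own process (one process per type, as suggested),
    switching to that type's db user before touching the database.
    """
    return sorted(
        job_type
        for job_type, cfg in JOB_CONFIG.items()
        if hostname in cfg["hosts"]
    )
```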
[19:44] <lifeless> moin
[19:44] <jelmer> lifeless: hi
[19:45] <jelmer> lifeless: Thanks very much for that fix yesterday; I crashed shortly after our conversation on IRC.
[19:45] <lifeless> no probs, did it work?
[19:46] <lifeless> abentley: EdwinGrubbs: I agree that serialised jobs of different sorts is undesirable
[19:46] <jelmer> lifeless: From what I've seen it did.
[19:46] <lifeless> jelmer: great
[19:47]  * jelmer disappears off to vacation
[21:31] <cr3> leonardr: hi there, might you have a moment for advice on decorators for interface methods and attributes?
[21:35] <leonardr> cr3, sure, but my latency may be very high
[21:39] <cr3> leonardr: ditto :) so, the problem I'm trying to solve is that I have classes that are expected to have methods named with underscores by the framework but I'd like to adhere to the python style guide which uses camelCase instead.
[21:39] <leonardr> cr3: you can use export_as to publish a named operation under a name different from the underlying python method name
[21:40] <cr3> leonardr: I was thinking it might be cool to have interfaces define methods using camelCase and decorate the methods with something like @compatible_as("uses_underscores"), where the string would be the compatible method name which would be defined by the decorator in the class implementing the interface
[21:40] <cr3> leonardr: could I use that even if I don't intend to export the method through lazr?
[21:41] <leonardr> you can use it, but nothing will happen
[21:41] <cr3> leonardr: in other words, does export_as simply create another alias for the name of the method in the class implementing the interface?
[21:41] <leonardr> no, lazr.restful creates a _totally new_ adapter class
[21:41] <EdwinGrubbs> losas: I have some non-urgent questions regarding the cronjob setup.
[21:41] <leonardr> in which methods have different names
[21:42] <cr3> leonardr: ah, so what would you think of @compatible_as("other_name") in this non-lazr context then? does this approach make sense?
[21:42] <leonardr> cr3: can you sketch out the interface and its implementation as it would work without the camelCase?
[21:43] <leonardr> i don't have a picture of what you have now
[21:43] <abentley> thumper, how would you ensure that the comments, revisions and incremental diffs appeared in a particular order?
[21:46] <cr3> leonardr: https://pastebin.canonical.com/37433/
[21:49] <leonardr> cr3: you could get that to work by putting the annotation in the *implementation*
[21:49] <cr3> leonardr: get_user = getUser
[21:50] <leonardr> if you had it in the interface, you'd need to run grok or something on the interface to do something like "find all the registered implementations and add methods to them"
[21:50] <leonardr> but if you put it in the implementation, you could just do this, with no annotator:
[21:50] <leonardr> get_user = getUser
[21:50] <leonardr> which i guess is what you just said
[21:51] <cr3> leonardr: if so, jtv suggested that earlier, but I feel it's tucked away too much
[21:51] <leonardr> cr3: tucked away just by virtue of being in the implementation?
[21:52] <cr3> leonardr: that's a good way of putting it, I guess it's alright
[21:53] <lifeless> gary_poster: hi
[21:53] <gary_poster> lifeless, hi
[21:53] <lifeless> would you like to catch up?
[21:53] <leonardr> cr3: the problem is that the interface just puts constraints on any possible implementation
[21:54] <leonardr> you can specify that any possible implementation must implement both get_user and getUser and that they must do the same thing, but you still have to actually do the work
[21:55] <cr3> leonardr: well, @compatible_as would mean that any implementation must implement getUser but would also have get_user defined for free
[21:56] <gary_poster> lifeless: Thank you.  I'll be stepping out in 15 min, and in another conversation now.  Would a very quick catch-up be useful, since the next time we can do so is a whole day away?
[21:57] <cr3> leonardr: but, as you said earlier, that would imply something like grok would have to go modify each implementation. I feel that's somewhat cleaner than rely on each implementation having to remember to define get_user as well as getUser
[21:57] <leonardr> cr3: problem is you can't do that in an interface. interfaces don't give you anything for free, they just specify the work that must be done
[21:57] <lifeless> gary_poster: nothing urgent; was just going to chat over the zope conf stuff; architectural concerns for ajax; apologise for the
[21:57] <lifeless> # of bugs I filed while you were gone...
[21:57] <gary_poster> lifeless: heh, yeah that's a bit overwhelming.  At least I'm not at a lack of things to do ;-)
[21:58] <lifeless> well still have some spikes in https://lpstats.canonical.com/graphs/OopsLpnetHourly/20100914/20100921/ I need to track down
[21:59] <leonardr> cr3: well, that's certainly something grok can do
[21:59] <leonardr> it might be a little simpler if you put the grok code in the implementation, though
[21:59] <leonardr> @fill_in_methods(IAuth)
[22:00] <leonardr> that way IAuth wouldn't have to find Auth. Auth already knows about IAuth
[22:00] <lifeless> Ursinha: hiya
[22:00] <leonardr> you could even have @fill_in_methods() call implements()
[22:00] <leonardr> and use it instead of implements()
[22:01] <cr3> leonardr: interesting, I'll definitely consider that
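leonardr's @fill_in_methods idea could look roughly like this: the interface carries rename annotations, and a class decorator on the implementation adds the underscore aliases. The annotation mechanism (attributes stashed on interface methods, a plain class standing in for a zope `Interface`) is an assumption for the sketch, not zope.interface or grok API.

```python
def renamed(alias):
    """Mark an interface method as also needing `alias` on implementations."""
    def mark(func):
        func._alias = alias  # stash the rename on the interface method
        return func
    return mark

class IAuth:  # stand-in for a zope Interface
    @renamed("get_user")
    def getUser(user_id):
        """Return the user for user_id."""

def fill_in_methods(interface):
    """Class decorator: add the aliases the interface declares."""
    def decorate(cls):
        for name in dir(interface):
            attr = getattr(interface, name, None)
            alias = getattr(attr, "_alias", None)
            if alias and hasattr(cls, name):
                setattr(cls, alias, getattr(cls, name))
        return cls
    return decorate

@fill_in_methods(IAuth)
class Auth:
    def getUser(self, user_id):
        return {"id": user_id}
```

This keeps the rename declaration in the interface, as cr3 wanted, while the work happens where leonardr suggests: in the implementation, which already knows its interface.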
[22:01] <leonardr> cr3: what's the framework that wants you to use underscores?
[22:02] <leonardr> django?
[22:02] <cr3> leonardr: and oauth
[22:03] <wallyworld> morning
[22:04] <leonardr> cr3: consider calling it @implements_for_django or something
[22:04] <leonardr> and then reference django as well in your interface annotations
[22:05] <leonardr> @django_name('get_user')
[22:05] <cr3> leonardr: I feel this is more generic though, since I've already encountered the problem with two frameworks in the very early stages of a project which has only had three commits so far :)
[22:06] <leonardr> cr3: ok, well, make some reference to what is happening
[22:06] <cr3> leonardr: also, I was thinking this could also be an intellectual exercise in an attempt to do things right... as long as it's not too long to implement either :)
[22:06] <thumper> morning
[22:07] <leonardr> that the interface being implemented includes method renames that need to be added
[22:07] <thumper> wallyworld: are you around for a standup now?
[22:07] <cr3> leonardr: indeed
[22:07] <wallyworld> yup
[22:07] <lifeless> Ursinha: when you come back, I need a hand with the deployment script
[22:07] <thumper> abentley, rockstar: one of you host?
[22:08]  * rockstar realizes it's standup time!
[22:08] <abentley> thumper, I will.
[22:10] <gary_poster> lifeless: spikes: interesting.
[22:10] <gary_poster> zope conf stuff: there's some small things of interest.
[22:11] <gary_poster> I mentioned publicly that I wanted to switch to a WSGI front end, and then I would be interested in someone making a WSGI simple openid consumer plugin because I don't see why that can't be shared, and the existing implementations like repoze.who are complex and more flexible than we or many other projects would need.
[22:11] <gary_poster> I got two discussions about the WSGI openid plugin within the hour, with one proposed implementation and a review request (from Martin von Löwis) the next day.  Perhaps I didn't clarify sufficiently that we weren't *immediately* ready :-)
[22:11] <gary_poster> I mentioned the long poll idea, but it took a long time for people to see the point.  Chris McDonough proposed web3 on the web sig with a mechanism that would have let us do what we want but that was not the intent of the design, so no one responded when I said that it would be nice.  Maybe I didn't explain it well enough...it is awfully close to the use case of passing an iterable (which WSGI supports natively);
[22:11] <gary_poster> the difference is that the long poll approach needs you to be able to send headers once the deferreds have been processed.
[22:11] <gary_poster> Anyway, I think I see how to do that.  The kind of collaboration we need is more on the WSGI or Twisted side.  I think we should just try it and see how it works, seeing if we can get Twisted people to help out.
[22:11] <gary_poster> But now I have to run :-)
[22:13] <lifeless> ciao
[22:13] <rockstar> What. The. Hell. My system just shat itself back out to the login screen for no reason at all.
[22:13] <lifeless> X crashed
[22:13] <lifeless> or something similar.
[22:14] <rockstar> lifeless, yeah, the screen went all white or something.
[22:14] <lifeless> check your X log
[22:28] <mtaylor> lifeless: wanna see something weird?
[22:28] <mtaylor> lifeless: https://bugs.edge.launchpad.net/drizzle/+bugs?search=Search&field.assignee=mordred <--- there's my drizzle bugs...
[22:28] <mtaylor> lifeless: now click on #622576, which is listed as High/Confirmed
[22:28] <_mup_> Bug #622576: plugin status vars not working <Drizzle:Confirmed for mordred> <Drizzle dexter:Invalid by mordred> <Drizzle elliott:Invalid by mordred> <https://launchpad.net/bugs/622576>
[22:29] <mtaylor> and then you see it's invalid in all of the places I can mark it invalid. :)
[22:30] <lifeless> please file a bug
[22:31] <lifeless> looks like a series / nomination / infestation bug
[22:31] <lifeless> there are three tasks
[22:31] <lifeless> one task is project wide
[22:31] <lifeless> but its knobs are hidden
[22:51] <jml>  % bzr branch lp:ubuntu/mdadm
[22:51] <jml>  bzr: ERROR: Server sent an unexpected error: ('error', '<Fault -1: "Unexpected Zope exception: CannotHaveLinkedBranch: <Distribution \'Ubuntu\' (ubuntu)> cannot have linked branches.">')
[22:51] <jml>  what am i doing wrong?
[22:51] <jml> ^^^ thumper, from #ubuntu-devel
[22:52] <thumper> it isn't linked most likely
[22:53] <thumper> jml: https://code.edge.launchpad.net/ubuntu/+source/mdadm no branches
[22:53] <jml> thumper, it's the wrong error for no branch
[22:53] <thumper> jml: yes, there is a bug filed about bad errors
[22:54] <jml> thumper, is it going to be fixed soon?
[22:54] <thumper> jml: no one is looking at it right now
[22:55] <jml> thumper, it was introduced by the recent private branches stuff right?
[22:55] <thumper> yes
[22:55] <thumper> as we pushed the checks deeper
[22:57] <jml> thumper, since it's a regression introduced by a recent change, I reckon it would be a good idea to bump the bug up in the queue
[23:04] <lifeless> hmm, time to tackle BugTask:+index methinks
[23:04] <lifeless> Chex: thanks for CP
[23:06] <Chex> lifeless: yes sure, you're welcome
[23:06] <mwhudson> is buildbot un****ed yet?
[23:07] <lifeless> mwhudson: it was fine last night
[23:07] <mwhudson> ok
[23:07] <lifeless> mwhudson: yesterday's CP is live now
[23:07] <mwhudson> lp and db_lp are offline, i see
[23:07] <lifeless> now there is a buglet with the qatagger to identify
[23:07] <lifeless> mwhudson: they are ec2 slaves
[23:07] <mwhudson> my branch passed ec2 yesterday but didn't land because of testfix
[23:07] <lifeless> mwhudson: so thats not in itself an issue
[23:22] <mars> mwhudson, don't worry, the CI system is being a real pain right now, but I plan to work on it.
[23:23] <lifeless> 'click clack'
[23:23] <mwhudson> mars: \o/
[23:26] <thumper> I have a general statement about our buildbot builders: lp and db_lp are failing a lot. If we don't care, we should remove them; if we do care, we should fix them
[23:26] <thumper> lifeless: what is it to be? ^^^
[23:26] <wgrant> We can't really remove them until Hardy is gone.
[23:26] <thumper> so... when are our appservers going lucid
[23:26] <thumper> ?
[23:26] <elmo> they are
[23:26] <elmo> only the DB servers are hardy now
[23:27] <thumper> really?
[23:27] <thumper> awesome
[23:27] <thumper> so... we can get rid of the hardy builders
[23:27] <mars> elmo, just the DB?
[23:27] <wgrant> That's convenient.
[23:27] <mars> what thumper just said
[23:27] <wgrant> As long as the Lucid builders aren't running 8.4.
[23:27] <thumper> python 2.5 is dead, long live python 2.6
[23:27] <elmo> thumper: https://wiki.canonical.com/InformationInfrastructure/OSA/Projects/LucidUpgrades
[23:28] <thumper> elmo: thanks
[23:28] <mars> whoohoo!  When did that get finished?
[23:28] <wgrant> Are the Lucid builders using postgres 8.3 or 8.4?
[23:28] <elmo> mars: last rollout was the last big push
[23:29] <elmo> if someone knows their machine names, I can check
[23:29] <mars> elmo, pilinut
[23:29] <elmo> pilinut is 8.4
[23:29] <wgrant> Because we probably still want the Hardy ones around if they're 8.4
[23:29] <elmo> we could downgrade them to 8.3 on lucid
[23:30] <mars> hmm
[23:31] <deryck> lifeless, ping
[23:31] <mars> so we develop on 8.4, EC2 runs 8.4, buildbot runs 8.3
[23:31] <lifeless> deryck: hey
[23:31] <deryck> hey
[23:31] <lifeless> thumper: I wrote to the list
[23:31] <deryck> lifeless, are you building CPs for every rev in stable now that gets qa-ok?
[23:31] <lifeless> thumper: we care because it breaks production-devel->production-stable builds if python 2.6 only stuff gets in.
[23:31] <lifeless> deryck: hell yeah
[23:32] <deryck> awesome
[23:32] <lifeless> deryck: but only full merges, not actual cherry picks
[23:32] <lifeless> so for instance
[23:32] <lifeless> https://devpad.canonical.com/~lpqateam/qa_reports/launchpad-stable-deployment.txt
[23:32] <lifeless> Revision 11556 can be deployed: qa-ok
[23:32] <lifeless> Revision 11557 can not be deployed: not-ok
[23:32] <mars> elmo, wgrant, unfortunately I do not know enough to say whether downgrading to 8.3 would cause much hardship for packaging, dependencies, etc.
[23:32] <lifeless> I would do 11556, but not 11558 even if it's 'ok', because 11557 isn't ok.
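The deployment rule lifeless describes here amounts to: deploy only up to the last revision before the first non-qa-ok one, even if later revisions are individually qa-ok. A minimal sketch of that rule (the function name and report shape are illustrative, not the actual QA-report tooling):

```python
# Hypothetical sketch of the "contiguous qa-ok" deployment rule.
# report: an ordered list of (revno, status) pairs, as in the
# launchpad-stable-deployment.txt report quoted above.
def deployable_revision(report):
    """Return the highest revno where every revision up to and
    including it is qa-ok, or None if the first rev is not ok."""
    last_ok = None
    for revno, status in report:
        if status != "qa-ok":
            break  # a bad rev blocks everything after it
        last_ok = revno
    return last_ok

report = [(11556, "qa-ok"), (11557, "not-ok"), (11558, "qa-ok")]
print(deployable_revision(report))  # → 11556: 11558 is blocked by 11557
```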
[23:32] <mars> elmo, wgrant, stub would know though.  And he'll be online in a few hours
[23:33] <lifeless> mars: thumper: no we can't get rid of the hardy builders
[23:33] <lifeless> see my mail to the list where I already detailed this.
[23:33] <deryck> lifeless, ok, cool.  And the upshot is we don't need to build CP merge requests ourselves any longer, right?
[23:33] <thumper> lifeless: so we should fix them
[23:33] <thumper> lifeless: because right now they aren't
[23:33] <lifeless> thumper: yes
[23:33] <lifeless> thumper: I totally agree
[23:33]  * thumper cracks the whip
[23:33] <elmo> mars: I think it's fine - we run sourcherry as lucid but with 8.4 + 8.3 installed and 8.3 is the primary
[23:33] <elmo> mars: FWIW
[23:34] <lifeless> we also need to make devel->stable promotion require *both*
[23:34] <wgrant> mars: We ran 8.3 on Lucid for ages.
[23:34] <wgrant> On every dev system.
[23:34] <wgrant> And mawson is running it.
[23:34] <lifeless> deryck: so, if you want something that is *after* a non-qa'd rev, you can do a few things:
[23:34] <lifeless>  - help get that non-qa'd rev qa'd
[23:34] <lifeless>  - do a manual cp around it
[23:35] <lifeless> sometimes we'll have a bad rev, like 11566, where we need to wait for the fix to be ready as well and land a wide range of revs.
[23:35] <lifeless> the goal is, obviously, to be doing 1 rev at a time.
[23:35] <mwhudson> lifeless: i guess reverting the bad rev is also an option?
[23:35] <mars> wgrant, yep.  I am worried about test coverage, and trying to get all systems running the same thing.  Dev is moving to 8.4.
[23:35] <mars> or has moved for new setups
[23:36] <wgrant> mars: Right, which is why I fear killing off lp and db_lp when they are the only 8.3 we have.
[23:36] <wgrant> Besides production.
[23:36] <lifeless> mwhudson: yes, in which case the revert is the 'fix' and we still need to land a wide range of revisions.
[23:36] <mwhudson> lifeless: right
[23:37] <lifeless> deryck: once mthaddon has the qastaging up and running
[23:37] <lifeless> deryck: we can stop deploying to edge w/out qa
[23:38] <lifeless> deryck: and this will become a bit simpler - we'll just deploy stable to edge + lpnet, rather than deploying production-stable, or something like that.
[23:38] <wgrant> Revs without MPs make me sad.
[23:38] <mwhudson> wgrant, lifeless: btw, do you know if the buildd-manager problem i had yesterday is fixed yet?
[23:38] <deryck> lifeless, this is awesome.  I don't see an email outlining this is active now.  I see your "always deploy devel" email reminder, but that's it.  What subject is it?
[23:39] <wgrant> mwhudson: It should be.
[23:39] <wgrant> Not sure how.
[23:39] <wgrant> But the branch landed overnight.
[23:39] <mwhudson> heh
[23:39] <mwhudson> i guess i can find out
[23:39] <mwhudson> wgrant: it was buildd-manager only?  i don't need to reinstall launchpad-buildd?
[23:39] <wgrant> No, it's a master-only change.
[23:39] <mwhudson> cool
[23:40] <wgrant> If you confirm that it no longer exhibits yesterday's problem, I'll update the docs with an extra step that's required to process the uploads.
[23:40] <lifeless> deryck: well the overall RFWTAD process documents the end goal
[23:40] <lifeless> deryck: I'm just following the existing process for deploying midcycle using the qa tools that have come live from that process to aid me
[23:41] <lifeless> deryck: I'm waiting for mars and Ursinha to send their mail about the qatagger.
[23:41] <lifeless> which was meant to happen last week.
[23:41]  * lifeless eyeballs mars and Ursinha 
[23:41] <deryck> lifeless, right, I totally get it.  And I completely understand we're moving to this.... just meant that since you're patching everything as CPs now, our devs don't need to put single CPs together.  bdmurray just put one up, for example.
[23:41] <mars> wgrant, I'm just looking up that thread that Robert mentioned.  Otherwise, I wonder why we don't run lucid 8.4 + lucid 8.3.  I would think it would be easier to maintain than lucid 8.4 + hardy 8.3.
[23:41] <mwhudson> wgrant: ok
[23:42] <mars> lifeless, on the list for today
[23:42] <lifeless> deryck: well they do if they want it promptly: we're a week behind on what I'm doing because of (teething problems, things not being qa'd promptly)
[23:42] <wgrant> mars: Do we know when 8.4 is happening?
[23:42] <wgrant> Not until the next rollout?
[23:42]  * mars shrugs
[23:42] <lifeless> wgrant: it will be done rolling AIUI
[23:43] <wgrant> Even the master?
[23:43] <lifeless> wgrant: same slony version, different pg, replicate, switch master, continue.
[23:43] <wgrant> That seems unlikely, but nice.
[23:43] <wgrant> Ah, that works?
[23:43] <wgrant> Handy.
[23:43] <lifeless> AIUI, YMMV, AS
[23:44] <lifeless> deryck: so, if something needs CPing - do it. If its near the number in https://devpad.canonical.com/~lpqateam/qa_reports/launchpad-stable-deployment.txt, consider helping that number move up
[23:44] <lifeless> deryck: otherwise just CP as normal.
[23:44] <lifeless> deryck: which is why I haven't sent a mail about this; it's not a reliable service yet
[23:44] <lifeless> :)
[23:45] <wgrant> Why is the Launchpad Credentials Manager going through the webapp with OAuth, rather than using a restricted API?
[23:45] <wgrant> That seems unnecessarily fragile.
[23:45] <lifeless> wgrant: its being backed out
[23:46] <wgrant> Ah.
[23:47] <mars> lifeless, so I read the thread.  Are you saying we need the hardy builders because the LP packages have to still install on the db machines, which are postgres 8.3 on hardy?
[23:47] <lifeless> yes
[23:47] <lifeless> otherwise we can't rollout
[23:47] <lifeless> which would be ... bad
[23:47] <wgrant> Why do the DB servers have LP code on them?
[23:47] <lifeless> wgrant: security.py
[23:47] <lifeless> wgrant: database/schema/patch*
[23:47] <wgrant> Well, that *could* be run remotely.
[23:47] <mwhudson> wgrant: gaarrr
[23:48] <wgrant> mwhudson: Hm?
[23:48] <lifeless> wgrant: more fragile in the event of network glitches etc.
[23:48] <mwhudson> wgrant: i now have a build that's "Needs building  on Bob The Builder", but buildd-manager isn't scanning that builder
[23:48] <lifeless> wgrant: we choose not to, for now.
[23:48] <deryck> lifeless, gotcha, thanks
[23:48] <mwhudson> i started the manager before i started the librarian, so the attempt to dispatch the job failed hilariously
[23:48]  * deryck is taking pics of kids in super suits while chatting :-)
[23:49] <mwhudson> the builder is marked ok and idle though
[23:49] <lifeless> wgrant: https://bugs.edge.launchpad.net/soyuz/+bug/641338 - is that a root cause dup?
[23:49] <_mup_> Bug #641338: Archive:EntryResource:syncSource timeouts <timeout> <Soyuz:Triaged> <https://launchpad.net/bugs/641338>
[23:49] <mwhudson> i guess i can restart buildd-manager
[23:49] <mars> lifeless, so if the release window for the DB server updates is near, then we do not have to fix the lp and db_lp builders.
[23:49] <wgrant> mwhudson: buildd-manager isn't scanning the builder, or just isn't dispatching?
[23:49] <mwhudson> wgrant: isn't scanning
[23:49] <mars> lifeless, if they are upgraded next week, then why go through the trouble?
[23:49] <wgrant> mwhudson: Urgh.
[23:49] <wgrant> Kill it and restart it.
[23:49] <mwhudson> the attempt to dispatch really did blow up though
[23:49] <mwhudson> three tracebacks!
[23:50] <wgrant> lifeless: Probably not entirely.
[23:50] <wgrant> lifeless: The copy checker is awfully slow.
[23:50] <wgrant> Unnecessarily so.
[23:50] <mwhudson> 2010-09-21 10:44:27+1200 [-] canonical.librarian.interfaces.UploadFailed: [localhost:58090]: [Errno 110] Connection timed out
[23:50] <lifeless> mars: I can't answer that until I know how much trouble is needed.
[23:50] <mwhudson> 2010-09-21 10:44:35+1200 [-] exceptions.AttributeError: 'BuilderSlave' object has no attribute 'clean'
[23:50] <mwhudson>         exceptions.AttributeError: 'NoneType' object has no attribute 'specific_job'
[23:50] <lifeless> mars: but its a pipeline stall for 8 hours (two runs) when it breaks.
[23:50] <wgrant> mwhudson: WTF? BuilderSlave?
[23:50] <lifeless> mars: and I've run into this every second day or so in the last work week
[23:50] <wgrant> It should be a RecordingSlave!
[23:50] <wgrant> JULIAN!
[23:51] <mars> lifeless, yes, I understand, it sucks.
[23:51] <mwhudson> yes, i think it's the failure counting stuff
[23:51] <mwhudson> let me paste the whole misery
[23:51] <mwhudson> bah
[23:51] <mars> need to go afk, back later
[23:51] <wgrant> mwhudson: You could try r11580
[23:51] <mwhudson> now it's just failing
[23:51] <lifeless> wgrant: are you working on that ?
[23:51] <lifeless> wgrant: its perf tuesday ;)
[23:51] <wgrant> 11581 has some buildd-master changes that I don't quite understand, so I can't trust that they're not this breakage.
[23:52] <jml> did you guys see https://code.edge.launchpad.net/~jml/launchpad/buildd-deferred/+merge/36010 and https://code.edge.launchpad.net/~jml/launchpad/buildd-deferred-fo-sho/+merge/36037, btw?
[23:52] <wgrant> lifeless: It's also I-have-far-too-many-projects week :(
[23:52] <lifeless> mars: will Ursinha be back? or do you know about the qatagger deployment story?
[23:52] <jml> (speaking of BuilderSlave)
[23:52] <mwhudson> wgrant: http://pastebin.ubuntu.com/497319/
[23:52] <wgrant> jml: That's the change to which I refer.
[23:53] <wgrant> I'm currently trying to interpret the diff.
[23:53] <lifeless> for line in lines:
[23:53] <lifeless> ...
[23:53] <wgrant> lifeless: Hm?
[23:53] <mwhudson> and bah, it fails in almost the same way when i restart the buildd-manager
[23:53] <lifeless> wgrant: a joke
[23:53] <wgrant> Ah.
[23:54] <jml> wgrant, my hunch is that it doesn't swap a BuilderSlave for a RecordingSlave anywhere
[23:54] <wgrant> jml: It already didn't in some situations.
[23:54] <wgrant> Which caused breakage. But it may no longer cause the same breakage, so I don't recognise it.
[23:54] <jml> wgrant, I mean, the diff doesn't really change call sites
[23:54] <jml> wgrant, if we're talking about buildd-deferred/+merge/36010
[23:55] <jml> wgrant, the breakage is because of the shift away from magic getattr methods
[23:55] <wgrant> mwhudson: Is the librarian running?
[23:55] <mwhudson> wgrant: it wasn't then
[23:56] <mwhudson> wgrant: but when i restart with it running, it fails in the same way, except without the first traceback
[23:56] <jml> wgrant, BuilderSlave needs an explicit clean() method that forwards to the xmlrpc proxy
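jml's fix is that, with the magic getattr forwarding gone, BuilderSlave needs an explicit clean() that forwards to its XML-RPC proxy. A hedged sketch of that shape (the class and attribute names follow the discussion but are illustrative, not the actual buildd-manager code):

```python
# Illustrative stand-in for the buildd-manager class under discussion.
# Previously a magic __getattr__ forwarded unknown methods to the
# XML-RPC proxy; after the refactor each remote call needs an explicit
# forwarding method, otherwise you get:
#   AttributeError: 'BuilderSlave' object has no attribute 'clean'
class BuilderSlave:
    def __init__(self, proxy):
        # proxy: an xmlrpc.client.ServerProxy (or anything exposing the
        # slave's remote methods) pointing at the builder's endpoint.
        self._server = proxy

    def clean(self):
        # Explicitly forward to the slave's XML-RPC 'clean' method.
        return self._server.clean()
```

In real use the proxy would be something like `xmlrpc.client.ServerProxy("http://builder:8221/rpc")`; taking it as a constructor argument just makes the forwarding testable without a live builder.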
[23:56] <wgrant> mwhudson: Pastebin pls.
[23:56] <wgrant> Better tests pls.
[23:56] <wgrant> The fact that buildd-manager has been completely broken in devel twice in the last week hints that we might possibly need better tests, I think.
[23:57] <jml> wgrant, working on it.
[23:57] <wgrant> Excellent.
[23:57] <jml> wgrant, see the two diffs I pasted.
[23:57] <lifeless> wgrant: presumably 2201  OOPS-1723N1442  Archive:+copy-packages is similar ? (that's statement counts)
[23:57] <mwhudson> wgrant: http://pastebin.ubuntu.com/497321/
[23:57] <wgrant> It should be easier now that jelmer's branch has removed the Popen crap.
[23:57] <wgrant> lifeless: Yes.
[23:57] <jml> mwhudson, http://pastebin.ubuntu.com/497322/ this will address the clean() thing
[23:57] <wgrant> Yeah, looks like it.
[23:57] <mwhudson> wgrant, jml: it's possible that my playing around yesterday left the db in a strange state
[23:58] <wgrant> Not sure about the specific_job thing, though.
[23:58] <wgrant> That's a bug, even if your DB is borked.
[23:58] <jml> mwhudson, soyuz ought to be more robust in the case of... yeah, what wgrant said
[23:58] <wgrant> (which it is)
[23:58] <wgrant> jml: Not Soyuz any more lalala.
[23:58] <mwhudson> yeah
[23:58] <jml> buildmaster
[23:58] <jml> such a silly name. I'd have called it a chuzwazza
[23:58] <mwhudson> ok, it's building now
[23:59] <mwhudson> (with jml's patch)
[23:59] <jml> I can add that patch with tests tomorrow, if you'd like
[23:59] <mwhudson> it seems like it would be a good idea