[02:34] <wallyworld> thumper: do you need to change the overall mp status to Approve, or i can....?
[02:34] <thumper> wallyworld: well, someone does
[02:34] <StevenK> wallyworld: You should be able to
[02:34] <wallyworld> thumper: i know i can - just wasn't sure what the correct protocol was
[02:35] <wallyworld> thanks
[02:35] <thumper> wallyworld: if you have the reviews, you can change the overall status
[02:35] <thumper> that's fine as protocol goes
[02:48] <wgrant> spm: How should I coerce you into adding my key to PQM?
[02:48] <StevenK> wgrant: Via an RT
[02:48] <wgrant> :(
[02:48]  * wgrant dislikes RT.
[02:48] <wgrant> Mostly because it's Perl.
[02:50] <StevenK> wgrant: Then check out http://jifty.org/view/FAQ
[02:53] <thumper> OMFG
[02:54] <thumper> launchpad.dev is seriously borked
[02:54] <thumper> how do I turn off dev mode?
[02:54] <wgrant> thumper: The 500 JS requests per page?
[02:54] <thumper> wgrant: yeah
[02:54] <wgrant> thumper: configs/development/launchpad.conf
[02:54] <wgrant> s/devmode on/devmode off/
[02:54] <wgrant> Reduces it to a nice fast one request.
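The one-line change wgrant describes can be sketched against a scratch copy of the file (the real path, per the conversation, is configs/development/launchpad.conf in the tree):

```shell
# Demo of the substitution wgrant suggests, run against a throwaway copy
# so nothing in a real tree is touched.
echo 'devmode on' > /tmp/launchpad.conf.demo
sed -i 's/devmode on/devmode off/' /tmp/launchpad.conf.demo
cat /tmp/launchpad.conf.demo
```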
[02:55] <StevenK> wgrant: You didn't just review your lunch due to that web page? :-)
[02:55] <thumper> I remember deryck saying that devmode had a problem, but that is just absurd
[02:55] <wgrant> StevenK: I am immune to such things.
[02:56] <wgrant> thumper: It is just about unusable, yeah.
[02:56] <wgrant> Fortunately, devmode is not important.
[02:56] <thumper> no, it is unusable
[02:56] <wgrant> It does load *eventually*.
[02:56] <StevenK> wgrant: Wait until Wednesday
[02:56] <wgrant> After firing off 500 requests through a single-threaded slow appserver, generating dozens of oopses in the process.
[02:57] <wgrant> StevenK: Oh?
[02:57] <StevenK> And then I'll see how immune you are in person :-)
[02:57] <wgrant> Ah, right.
[03:57]  * thumper screams a little
[03:57] <thumper> gah
[04:01] <StevenK> thumper?
[04:02]  * wgrant is reminded of Wellington, where thumper spent most of the time muttering "fuck fuck fucking fuck"
[04:06] <StevenK> -            return "You already have a PPA named '%s'." % proposed_name
[04:06] <StevenK> +            return "A PPA named '%s' already exists." % proposed_name
[04:06] <StevenK> thumper: Does that address your concerns re: bug 682548?
[04:06] <_mup_> Bug #682548: Archive.validate has poor wording for matching team ppas <confusing-ui> <ppa> <trivial> <Soyuz:Triaged> <https://launchpad.net/bugs/682548>
[04:07] <thumper> StevenK: I would have preferred two different errors depending on whether the PPA is for me or a team I'm a member of
[04:07] <thumper> perhaps "You already have a PPA named '%s'." % proposed_name for me
[04:07] <thumper> and "%s already has a PPA named '%s'." % (owner.display_name, proposed_name)
[04:07] <thumper> for a team
[04:07] <thumper> that's much nicer
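thumper's proposal above can be sketched as a small helper; the function and parameter names are illustrative, not Launchpad's actual Archive.validate signature:

```python
# Sketch of the two-message wording thumper proposes: the error depends on
# whether the colliding PPA belongs to the requesting user or to a team
# they are a member of. Names here are hypothetical.

def duplicate_ppa_message(user, owner, proposed_name):
    """Return an error string for a PPA name collision."""
    if owner == user:
        return "You already have a PPA named '%s'." % proposed_name
    return "%s already has a PPA named '%s'." % (owner, proposed_name)
```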
[04:08] <StevenK> I can abstract that easily
[04:08] <StevenK> ... maybe
[04:12] <thumper> yes you can
[04:13] <thumper> I looked at it when I was making the related  change
[04:13] <thumper> but decided not to get too distracted
[04:13] <thumper> I'm still dealing with the other branch actually
[04:13] <thumper> right now
[04:14] <StevenK> thumper: I have a branch for this change, so I'm happy to fix it
[04:14] <thumper> good :-)
[04:29] <thumper> wallyworld: ping
[04:30] <wallyworld> thumper: pong
[04:30] <thumper> wallyworld: have you landed your built-packages-listing branch yet?
[04:30] <thumper> because I want you to delete some whitespace
[04:30] <wallyworld> thumper: it's in ec2 now
[04:31] <wallyworld> thumper: can i pull it from ec2?
[04:31] <thumper> wallyworld: what about the filter follow up?
[04:31] <wallyworld> thumper: no, i was going to land the base branch first
[04:31] <wallyworld> thumper: i can make the changes in the 2nd one
[04:31] <thumper> wallyworld: you have space between the open brace and the hyperlink
[04:32] <thumper> wallyworld: (<a href="blah">
[04:32] <thumper> wallyworld: followed by a newline, then 'some text'
[04:32]  * wallyworld looks
[04:32] <thumper> wallyworld: will give a blank space between ( and some text
[04:32] <thumper> to avoid that, you need the text to be hard against the closing > of the anchor tag
[04:36] <wallyworld> thumper: you mean it should be:  .....ourceBuilds">Learn more about....  ?
[04:36] <thumper> yep
[04:37] <wallyworld> thumper: ok. will fix in the filter branch. thanks for letting me know. i didn't notice the extra space
[04:54] <StevenK> thumper: If I look at https://bugs.launchpad.net/soyuz/+bug/680889 and then click the diff link from the linked MP, the floating diff box starts at line 123 and I can't dismiss it
[04:54] <_mup_> Bug #680889: Needs to handle "all linux-any" like "linux-any" <qa-needstesting> <soyuz-build> <Soyuz:Fix Committed by wgrant> <https://launchpad.net/bugs/680889>
[04:55] <lifeless>     Hard / Soft  Page ID
[04:55] <lifeless>       86 / 4920  Archive:+index
[04:55] <lifeless>       56 /  194  BugTask:+index
[04:55] <lifeless>       32 /    0  Person:EntryResource:retractTeamMembership
[04:55] <lifeless>       29 /  366  POFile:+translate
[04:55] <lifeless>       15 /  234  Distribution:+bugs
[04:55] <lifeless>       12 /  112  ProjectGroupSet:CollectionResource:#project_groups
[04:55] <lifeless>        6 /  251  Distribution:+bugtarget-portlet-bugfilters-stats
[04:55] <lifeless>        5 /    5  ProjectGroup:+milestones
[04:55] <lifeless>        4 /   12  DistroArchSeries:+index
[04:55] <lifeless>        4 /    7  NullBugTask:+index
[04:55] <lifeless> someone should do a deploy ;)
[04:57] <wgrant> lifeless: A couple of my other revs are blocking it.
[04:57] <wgrant> I would QA them if I had DF access :)
[04:58] <lifeless> wgrant: you don't need df access
[04:58] <lifeless> wgrant: you can ask spm or another losa to do superuser stuff involved in qa
[04:58] <wgrant> lifeless: ROFL
[04:58] <spm> hmmm?
[04:58] <lifeless> wgrant: but also please ensure you file bugs/rts so that we can qa such things on qastaging in the future
[05:03] <StevenK> lifeless: Can be summed up as "Please run soyuz services on qastaging"
[05:04] <StevenK> But that's rofl-tastic at the moment, given asuka's load
[05:04] <lifeless> StevenK: needs to be more specific or it won't happen.
[05:05] <lifeless> StevenK: please help it happen.
[05:05] <lifeless> as for load, there's another box coming, don't wait to describe (in detail) whats needed for that box.
[05:06] <wgrant> It's also not clear how staging Soyuz should work.
[05:06] <wgrant> Given that staging doesn't have the archives.
[05:07] <lifeless> the goal is simple.
[05:07] <wgrant> Sure.
[05:07] <wgrant> But Soyuz isn't :)
[05:07] <lifeless> everything else I happily delegate to folk working :)
[05:07] <wgrant> Heh.
[05:07] <lifeless> seriously. We need an archive on a box running soyuz stuff against qastaging? RT IT.
[05:07] <lifeless> -now- -please-.
[05:08] <wgrant> We need to first work out if it's possible.
[05:08] <lifeless> wgrant: huh? no.
[05:09] <lifeless> wgrant: that would be backwards.
[05:09] <lifeless> I've tried rting this and got back a 'huh, please spell out whats not running' - staging *is* running all soyuz services according to the losas.
[05:10] <wgrant> staging is running buildd-manager.
[05:10] <wgrant> I don't think qastaging is.
[05:10] <lifeless> qastaging isn't yet.
[05:10] <wgrant> And AFAIK they're not running anything else.
[05:10] <lifeless> but you can qa on staging too.
[05:10] <lifeless> wgrant: if they aren't, RT IT
[05:10] <lifeless> or tell me, and I will file an RT, in all caps, because I'm having to repeat silly, obvious, clearly sensible things on my holiday :)
[05:11] <lifeless> james and charlie both want the work queue for losas/sysadmins present in rt, not in our heads to be filed when its become top priority.
[05:12] <spm> and for us too - makes it easier to get a scope for work and what's important vs what can wait
[05:12] <lifeless> we know we want this, and we know that theres stuff to do to make it work, so what stops us from rting the specifics today?
[05:12] <lifeless> I *can't* because I don't know the specifics.
[05:12] <lifeless> wgrant: StevenK: ^ seriously.
[05:13] <wgrant> I think Soyuz people (since we're about to blink out of existence) need to talk about what makes sense, given experiences with dogfood. Until then we do not know specifics.
[05:13] <lifeless> how can you not know specifics?
[05:13] <lifeless> I don't mean *how*, I mean *what*
[05:13] <wgrant> Ah, so the general specifics, I see :P
[05:13] <lifeless> service foo-bar talking to qastaging db
[05:14] <lifeless> thats specific
[05:14] <lifeless> what machine? - losa choice
[05:16] <lifeless> soyuz folk are of course welcome to talk to bigjools and so forth, but its not a question of what makes sense: we /need/ to be able to qa /everything/ on [qa]staging
[05:16] <lifeless> dogfood as a place to do major experimentation and stress testing is great.
[05:17] <lifeless> blocking deployments because a patch happens to be to the foo-bar component and that only runs on dogfood - thats a huge problem
[05:17] <wgrant> Sure.
[05:17] <lifeless> it partitions the codebase, blocks the whole team, increases the risk of subsequent deploys.
[05:17] <lifeless> So all I'm asking is that the missing services be:
[05:17] <lifeless>  - enumerated
[05:17] <lifeless>  - in an RT
[05:17] <lifeless>  - or a bug
[05:17] <lifeless> and I'm totally baffled what is hard/objectionable/unworthy in that request.
[05:18] <lifeless> can you help me understand?
[05:20] <wgrant> The publisher will explode at the slightest problem. For example, staging won't have any of the archives on disk. What will the publisher do? Boom. What if something that the publisher wants is deleted from the production librarian? Boom.
[05:20] <wgrant> I'm sure there are others.
[05:20] <wgrant> How do we test ftpmaster-tools changes?
[05:20] <lifeless> ask a losa to run the command
[05:20] <wgrant> What if we need to test override behaviour?
[05:20] <wgrant> Hmm.
[05:21] <lifeless> if you need the archives on disk, ask for that in the rt
[05:21] <lifeless> thats *exactly* the sort of thing I'm asking be written down and made explicit
[05:23] <wgrant> That's an awful lot of data to be copying around.
[05:23] <lifeless> having a good solid QA pipeline is fundamental to continuous deployments
[05:24] <lifeless> are you worried about cost or something?
[05:24] <lifeless> if so, say so!
[05:25] <wgrant> And practicality of syncing that regularly.
[05:26] <lifeless> how often does it need syncing?
[05:26] <spm> chuckles
[05:26] <lifeless> I would say once a week when the db is reset
[05:27] <lifeless> more importantly
[05:27] <lifeless> wgrant: this is a guess, but are you trying to make sure it will all work before asking for it ?
[05:28] <wgrant> lifeless: Yes.
[05:28] <wgrant> Asking for impossible things is not something I want to do!
[05:28] <lifeless> wgrant: thats a particularly bad antipattern
[05:28] <lifeless> please stop
[05:29] <lifeless> teamwork depends on clearly articulating and socialising the needs *before* solutions are arrived at.
[05:29] <lifeless> its only impossible until a solution is arrived at
[05:29] <lifeless> and no discussion on sollutions can happen until the requirements and constraints have been articulated
[05:30] <lifeless> you should feel *no shame* at asking for a dozen impossible things before breakfast [assuming they are things we want to do that will help us]
[05:30] <spm> wgrant: fwiw? you'd be hardpressed, REALLY hardpressed to outdo lifeless for impossible RT's and impossible numbers of same.
[05:30] <wgrant> So, I don't see how discussing this on an RT ticket will help, when it mostly needs thought within Soyuz. Perhaps I misunderstand your use of RT.
[05:31] <lifeless> soyuz is only a fraction of the solution space
[05:31] <spm> it's both discussion, task request, please help, everything
[05:31] <lifeless> the gsas
[05:31] <lifeless> and losas
[05:31] <lifeless> are the ones that will be:
[05:31] <lifeless>  - provisioning
[05:31] <lifeless>  - admining
[05:31] <lifeless>  - maintaining
[05:31] <lifeless>  - implementing
[05:31] <lifeless>  - supporting
[05:31] <lifeless> it
[05:32] <lifeless> its TOTALLY premature to have any discussion about *how* without them being involved, and their *requested* forum is RT.
[05:32] <spm> and loling at it's crackfullness. don't forget this one.
[05:32] <lifeless> spm: yeah but you do that on your sekret comments
[05:32] <spm> not always
[05:32] <lifeless> true
[05:32] <wgrant> lifeless: We don't know what we want.
[05:32] <spm> sometimes we practice democratic and open crackful sharing
[05:32] <lifeless> anyhow, this is way depressing for my holidays.
[05:32] <lifeless> wgrant: I know what we want.
[05:33] <lifeless> wgrant: I just don't know how to enumerate it.
[05:33] <lifeless> wgrant: and I'm feeling sad that I'm not getting much support on that.
[05:37] <lifeless> wgrant: anyhow, I'll leave you be, and will sit on all ex-soyuz folk at the epic and get this written down
[05:37] <lifeless> unless one of you takes pity on me and writes it up
[05:42] <wgrant> Do we have goals for the Epic yet?
[05:43] <lifeless> broadly yes
[05:44] <lifeless> there will be some presentations - e.g. a ta update, strategy update, tl update
[05:44] <lifeless> and we'll be settling into the new team structure
[05:45] <StevenK> Hudson, damn it!
[05:45] <StevenK> WANT!
[05:45]  * spm notes the prior comments about crack
[05:46] <spm> spm 1, stevenk 0
[05:46] <StevenK> Hudson is far less crackful than buildbot
[05:46] <spm> spm 2, stevenk 0
[05:46] <spm> (was a nil response, auto win. sorry)
[05:47] <StevenK> And doesn't require writing PYTHON to teach it how to build a project
[05:47] <wgrant> Just a lot of XML?
[05:47] <lifeless> no
[05:48] <wgrant> But JAVA!
[05:48] <spm> yes.
[05:48] <lifeless> pretty nice, I know
[05:48]  * spm has supported jvm's before. it's not pretty.
[05:48] <lifeless> fast though
[05:51] <spm> until a GC comes along, then you freeze for a second or more.
[05:52] <poolie> hm
[05:52] <lifeless> well you want to avoid stop the world gcs
[05:52] <lifeless> which current gc engines are pretty good at, if tuned to the workload
[05:52] <poolie> you would think so
[05:53] <spm> I remain... sceptical. based on prior experience.
[05:53] <StevenK> spm: Given current buildbot, or java, which would you prefer?
[05:53] <poolie> are we talking about Hudson for testing lp?
[05:53] <spm> strawman choice :-)
[05:53] <StevenK> Just so I know if I'm pushing uphill or down
[05:54] <poolie> if that has occasional pauses it shouldn't really matter
[05:54] <spm> ha. no. this is personal opinion not pushback. if LP decides it wants hudson, that's cool.
[05:54] <lifeless> we're considering a java db for some stuff
[05:54] <lifeless> (cassandra)
[05:54] <poolie> how about using jvm based things like Flume for logging? or cassandra
[05:55] <lifeless> no problem
[05:55] <lifeless> there are several considerations of course
[05:55] <lifeless> canonical runs some jvm services internally already
[05:55] <poolie> well, my question was more "do the SAs fear it"
[05:56] <lifeless> AIUI the biggest consideration is the preferred tech mandate for best-of-breed fungible components
[05:59] <poolie> iow trying to decide if they're the best option to be used everywhere?
[05:59] <lifeless> bringing in something different to whats already used, for instance
[06:04] <poolie> right, the "is it worth changing/diverging" conversation
[06:05] <lifeless> with the first of a kind
[06:05] <spm> I'm not too fussed myself over what tech you choose. just saying that my experience with JVMs is that it does have some nasty side effects. and GC under load is a biggie.
[06:05] <lifeless> we can get into a bit of an overoptimisation discussion
[06:05] <poolie> mm
[06:05] <poolie> i mean of course erlang and python use gc too
[06:06] <spm> not that you'd notice from some of the "leaks" we see... :-)
[06:06] <poolie> so the thing would have to be whether the jvm is worse at it (whihc seems a bit unlikely) or whether the application architecture exacerbates the problem
[06:06] <lifeless> the reason you don't notice the pauses in python is because its never fast to start with
[06:06] <spm> haha
[06:06] <poolie> :) or indeed gil
[06:08] <spm> just at $job-1 we were running via a j2ee. with extra gc logging. and the amount of time lost to gc's was truly astonishing.
[06:10] <poolie> well, you probably don't know how much time we're losing to gc inside python :)
[06:10] <poolie> i'm just saying it might be even more
[06:10] <poolie> (though it's probably not)
[06:11] <spm> that would make me a sad panda
[06:11] <poolie> it might have just been more visible with the extra logging on
[06:11] <lifeless> GIL + gc would exceed stop the world gc + object moving overheads IMO
[06:13] <spm> can/should we be tuning python for that in any way? just thinking that with jvm you can tweak a fair bit to optimise for your load and usage.
[06:14] <lifeless> spm: yes, rt #idunnoitsmyholiday
[06:14] <lifeless> spm: 'single threaded appservers'
[06:14] <spm> hahahaha
[06:14] <lifeless> spm: wins on two counts
[06:14] <lifeless> firstly one thread
[06:14] <spm> ok, so you guys are worrying about that then already. /me washes hands.
[06:14] <lifeless> secondly smaller memory footprint so overhead per-cpu of gc time is reduced
[06:15] <lifeless> even though the total time on the machine is increased, if that makes sense
[06:15] <spm> yeah that was the unobvious killer - you can't just throw memory at a jvm, that can make things worse.
[06:15] <lifeless> you might like the cassandra slides I sent around
[06:16] <spm> shrug. I have a .procmail From: lifeless >> /dev/null
[06:16] <spm> it saves time.
[06:16] <spm> :-P
[06:16] <lifeless> s/null/rw/
[06:17] <spm> read-only surely?
[06:17] <lifeless> raid warning :P
[06:17] <spm> hahaha
[06:18] <poolie> spm i think the "can't throw memory at it" problem is kind of related to it insisting on the whole app being inside a single OS process
[06:18] <poolie> which is an example of an architectural limit
[06:18] <poolie> and kind of the opposite of robert's single-thread appserver experiment
[06:18] <spm> yeah, I'd go with that
[06:19] <spm> certainly we did notice that we could only get so much out of a single process, then it made more sense to fire up more on the same (4 cpu, not cores, cpu) server
[06:20] <spm> the process couldn't use the h/w resources it had available.
[06:21] <spm> mind you... coldfusion code; which gets interpreted by the CF interpreter, which itself is a jvm/j2ee container. wheee.
[06:21] <poolie> right, that's just what i mean
[06:21] <poolie> python obviously has this effect to an even higher degree, because of only really using one core at a time
[06:22] <spm> so poolie, you'll write a fix for python for us ... today?
[06:22] <poolie> better minds than mine have failed
[06:23] <poolie> the best idea is to run more processes
[06:23] <spm> :-)
[06:23] <poolie> this is kind of good for horizontal scaling and robustness anyhow
[06:23] <spm> yeah, it's a nice match there.
[06:23] <spm> if frustrating.
[06:25] <poolie> why frustrating?
[06:27] <spm> in that, eg, we don't fire up multiple squids or apaches etc to max h/w out. they can just make use of it. recognising I'm comparing apples with fish, so a tad unfair.
[06:27] <poolie> !?
[06:27] <poolie> but apache at least does spawn multiple processes
[06:27] <poolie> and squid does that too for some specific bits, last time i looked
[06:27] <spm> yeah - it's handled internal to itself. it's not a start apache1 through 200 thing.
[06:28] <poolie> ah, yes
[06:28] <poolie> maybe we could fix that
[06:28] <spm> ha. from rambling discussions comes "hey that's a problem, we should look at fixing that" :-D
[06:29] <poolie> do you really manually start 200 things?
[06:29] <poolie> it seems it could at least be scripted
[06:30] <poolie> anyhow, that could be good to file bugs about
[06:33] <spm> we have, more or less, apps 1-16 ish, plus edge 1-5. each with their own separate configs and such. so sorta. 200 is more for how many apache processes I've enabled previously with a single trivial change.
[06:34] <spm> which is also about 20 different init.d scripts :-/
[06:35] <poolie> sheesh
[06:35] <poolie> that definitely seems worth fixing
[06:36] <lifeless> theres a pending MP from me with a config autogenerator
[06:36] <lifeless> which is a first step at reducing the insanity
[06:37] <spm> how would/could you fix the multiple procs per server thing? idly curious.
[06:37] <lifeless> me? i wouldn't.
[06:37] <spm> :-)
[06:37] <lifeless> I'd make the variation per configured proc 0
[06:37] <lifeless> and then use existing sysadmin tools to dial the number of processes desired
[06:37] <poolie> +1
[06:37] <lifeless> last thing I want to maintain is another process-manager implementation :)
[06:38] <spm> :-)
[06:39] <spm> oh bother. stub's not around, and staging update failed in a rather impressive and interesting way.
[07:31] <wgrant> spm: WTF, but OK.
[07:31] <wgrant> Thanks.
[07:32] <spm> wgrant: yes. :-)
[07:32] <spm> the cleanup of the allowed email addresses is painful. all these ... funky ones.
[07:33] <lifeless> wtf?
[07:33] <wgrant> "Note that due to PQM's finnkiy nature, all submissions must come from the 1st/default address."
[07:33] <spm> pqm - only accepts the 1st email in a gpg key
[07:34] <lifeless> gpgv
[07:34] <spm> send from another, even if in the allowed key, fail.
[07:34] <lifeless> oh
[07:34] <lifeless> so there are two discrete things
[07:34] <lifeless> gpgv
[07:34] <lifeless> and email addresses used for per project/branch permissions
[07:35] <lifeless> gpgv ensures you own the email address
[07:35] <lifeless> and the email address is what the policy check is done on.
[07:36] <lifeless> trivial to list N addresses for a person if you want to
[07:36] <lifeless> just a config issue
[07:36] <spm> you're kidding. gnnnnnnnnnnnngh.
[07:36] <lifeless> no
[07:36] <lifeless> noone has ever asked me
[07:36] <spm> hahahahaahhaha
[07:37] <spm> we thought you knew :-D
[07:37] <lifeless> nope
[07:37] <spm> so where/how?
[07:37] <lifeless> just throw any additional emails you want folk to support in the email list for the group
[07:37] <lifeless> done
[07:38] <spm> ahh. code change?
[07:38] <lifeless> no
[07:38]  * spm notes to self, rob is on HOLIDAYS... let it go......
[07:38] <lifeless> you can even turn off email checking entirely if you just have one group
[07:38] <lifeless> keep verify_sigs on though :)
[07:39] <spm> we do have others, but I don't think they're really used. and could possibly be dropped safely.
[07:40] <lifeless> I do object to the 'finnkiy nature' bit
[07:40] <lifeless> when its a config issue:)
[07:40] <StevenK> Let's just switch to tarmac
[07:40]  * StevenK hides
[07:40] <wgrant> Isn't that almost done?
[07:41] <spm> lifeless: that's still finicky (now spelt with more correct)
[09:18] <mrevell> Morning
[11:16] <wgrant> Does anyone know what's up with the lucid_db_lp?
[11:16] <wgrant> It looks force-worthy.
[12:02] <deryck> Morning, all.
[14:49] <danilos> gary_poster, hi :)
[14:49] <gary_poster> heh, hey danilos
[14:50] <danilos> gary_poster, so, canonical/launchpad/versioninfo.py loads bzr-version-info.py using imp.load_source('...', 'bzr-version-info.py') and that tries to read it from the cwd
[14:50] <gary_poster> huh
[14:51] <danilos> gary_poster, that means that none of our cronscripts which are run like $LP_PY /path/to/lp/tree/cronscripts/blah.py get a reasonable value of version info data
[14:51] <danilos> gary_poster, small script to demonstrate the problem: https://pastebin.canonical.com/40858/
[14:52] <danilos> gary_poster, I figure we can run our scripts using (cd $LP_ROOT && $LP_PY ...) to work-around the problem, but that'd probably be ugly for the future and we'd want to fix versioninfo to read LP_ROOT/bzr-version-info.py instead
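The cwd-dependence danilos describes can be reproduced in miniature; the helper below is illustrative, not Launchpad's actual versioninfo code:

```python
# A load that uses a *relative* path resolves against os.getcwd(), so a
# cronscript invoked as $LP_PY /path/to/tree/cronscripts/blah.py from
# elsewhere cannot find bzr-version-info.py in the tree root.
import os
import tempfile

def read_version_file(filename='bzr-version-info.py'):
    """Mimic a cwd-relative load: only works when run from the tree root."""
    try:
        with open(filename) as f:
            return f.read()
    except IOError:  # file not found relative to the current directory
        return None

os.chdir(tempfile.mkdtemp())   # simulate running from outside the tree
print(read_version_file())     # the version info silently comes back empty
```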
[14:53] <gary_poster> danilos: /me wonders if a symlink into lib is not a reasonable solution
[14:54] <danilos> gary_poster, I have no particular opinion, I basically want to solve bug 682186 and ensure nobody else runs into it themselves :)
[14:54] <_mup_> Bug #682186: X-Generator: Launchpad (build Unknown) <Launchpad Translations:Triaged> <https://launchpad.net/bugs/682186>
[14:55] <danilos> gary_poster, btw, I've tried that (symlinking into lib) and that doesn't work for me
[14:56] <gary_poster> danilos, duped, weird
[14:57] <gary_poster> danilos: I'll futz around for a sec and get back to you if I find something I think is reasonable
[14:57] <danilos> gary_poster, imp.find_module can find it then though
[14:59] <danilos> gary_poster, anyway, thanks, got a call now :)
[15:12] <gary_poster> flacoste: do you happen to know why Steve A used the imp module to implement canonical/launchpad/versioninfo.py's import of bzr-version-info.py?  His checkin message is "add revno to main template". :-)
[15:12] <gary_poster> Seems like a simpler, more robust approach would be to symlink that file from the top of the tree into lib (as an importable name) and then do try: ... except ImportError: in canonical/launchpad/versioninfo.py .
[15:12] <gary_poster> Any idea why it is the way it is, historically or otherwise?  symlinks were supported in bzr at the time, as far as I can tell from a quick web search.
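The guarded-import shape gary_poster proposes looks roughly like this; the module name `bzr_version_info` and the 'Unknown' default are illustrative assumptions, not the actual Launchpad names:

```python
# Sketch of the symlink-plus-guarded-import approach: expose the version
# file under an importable name (e.g. a symlink into lib/) and fall back
# gracefully when it is absent, instead of depending on the cwd.
try:
    from bzr_version_info import version_info  # hypothetical symlinked module
except ImportError:
    version_info = None

revno = version_info['revno'] if version_info else 'Unknown'
print(revno)
```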
[15:18] <bigjools> mthaddon: how's the script doing?
[15:18] <mthaddon> bigjools: have done a few more runs - all still taking an hour or so
[15:19] <bigjools> mthaddon: that's weird, I'd expect it to finish to completion with the limit you used.  Maybe it is finishing to completion, it's just genuinely taking that long
[15:19] <bigjools> can I take a peek at the log?
[15:20] <mthaddon> bigjools: devpad:~mthaddon/2010-12-13-ppa-log-parser*.log
[15:22] <bigjools> mthaddon: so it looks like it's hitting the limit each time
[15:23] <bigjools> I'd be tempted to remove that
[15:23] <EdwinGrubbs> lifeless: ping
[15:23] <mthaddon> bigjools: I'd rather not - I really don't think there's any point to running a script as long as the last run we did - I'm fine to do it batches like this until we catch up
[15:23] <mthaddon> bigjools: do we have any way of estimating how far from catching up we are?
[15:24] <bigjools> mthaddon: roughly, it's processing around the "Tue Sep 21" date range
[15:24] <bigjools> so it has a long way to go
[15:25] <mthaddon> ok
[15:25] <bigjools> mthaddon: huh, actually scratch that
[15:25] <bigjools> it's processing files in some weird ordering
[15:25] <bigjools> so I have NFI how long it will take, especially when it logs lines like "Finished parsing <gzip on 0x8722e18>"
[15:25] <bigjools> :/
[15:38] <gary_poster> danilos: http://pastebin.ubuntu.com/543094/ is smallest change that works locally.  http://pastebin.ubuntu.com/543095/ is in the direction of a cleanup, IMO (would also maybe want to change bzr-version-info.py name or something).
[15:38] <gary_poster> Next steps: maybe check with losas to see if they read bzr-version-info.py from some surprising place, or if it is always in the root of the tree.  I think it is always in the root of the tree, including in production.  Then choose one of those two, apply, and do something else. ;-)
[15:58] <flacoste> gary_poster: no idea
[15:58] <gary_poster> ack flacoste thanks
[16:04] <danilos> gary_poster, hey, done with a call, let me look at that
[16:18] <danilos> gary_poster, I don't mind either solution, I'd be happy to drive the fix forward, but I am not sure where I'd put a test for it :) and whether it would be a useless test (i.e. more of a baggage test that takes a long time to run since I'd have to use popen or something to make sure python is outside the tree)
[16:21] <gary_poster> danilos, that would be great if you would drive it forward, though you can also ask me to.  For test, a popen.call from an alt directory calling the equivalent of "bin/py -c 'from canonical.launchpad.versioninfo import revno; print revno is not None'" might be good enough.
[16:22] <gary_poster> and should be pretty fast.
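A self-contained version of the test shape gary_poster suggests: run the check in a child interpreter started from another directory and assert on its stdout. In the real test the command would be Launchpad's bin/py and the canonical.launchpad.versioninfo import; here a stdlib module stands in.

```python
# Spawn the interpreter with a different cwd and report whether an import
# succeeded, by checking the child's stdout.
import subprocess
import sys
import tempfile

def import_ok_from(directory, module_name):
    """True if `module_name` imports in a child process run in `directory`."""
    code = "import %s; print('ok')" % module_name
    proc = subprocess.run([sys.executable, '-c', code],
                          cwd=directory, capture_output=True)
    return proc.stdout.strip() == b'ok'

print(import_ok_from(tempfile.gettempdir(), 'os'))
```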
[16:25] <danilos> gary_poster, right, that's what I was thinking, but popen is an order of magnitude slower than a bare unit test or something, that's what I meant by "slow"
[16:25] <gary_poster> sure
[16:26] <danilos> gary_poster, anyway, I'll go with your second (symlink) option and only add a test to that, how does that feel to you?
[16:29] <gary_poster> danilos: sounds good to me, cool
[16:29] <gary_poster> thank you
[16:30] <danilos> gary_poster, no worries, it solves a bug for me as well :)
[16:31] <gary_poster> :-)
[16:50] <danilos> gary_poster, do you think it's ok to move versioninfo to lp.app or lp.services along the way?
[16:50] <danilos> gary_poster, (I'll do it only if it doesn't cause too much work, but just wondering)
[16:51] <gary_poster> danilos: I think that would be another nice cleanup, yeah.  maybe app?  <shrug>
[16:51] <gary_poster> (lp.app I mean)
[16:52] <danilos> gary_poster, yeah, that's what I lean to as well
[16:52] <gary_poster> cool
[17:10] <jcsackett> did something land to speed up ec2 test recently? i just noticed runs are taking like 3.5 hours, where they used to take 4-5 hours for me.
[17:11] <jcsackett> i suppose this could have happened awhile ago, and i've just been oblivious.
[17:18] <bigjools> did windmill get turned off?
[17:29] <mars> that could do it
[17:29] <mars> windmill takes 40 minutes to run
[17:31] <bigjools> I noticed it's not running locally for me
[17:34] <danilos> bigjools, is it perhaps crawling locally for you?
[17:35] <bigjools> danilos: more of an amble
[17:35]  * bigjools EODs
[17:45] <deryck> jcsackett, it is windmill being turned off that saved the time.
[17:54] <jcsackett> deryck: ah, cool. glad that happened. 3.5 hours for results is way better.
[18:37] <lifeless> EdwinGrubbs: ?
[19:16] <lifeless> jml: ping
[19:27] <lifeless> flacoste: ping
[19:29] <jml> lifeless: hi
[19:29] <jml> lifeless: wassup?
[19:30] <flacoste> hi lifeless
[19:30] <lifeless> oh hai, I pinged, then remembered you're on leave now, right ?
[19:30] <lifeless> so I mailed flacoste instead :)
[19:30] <lifeless> flacoste: hi
[19:30] <jml> lifeless: ok :)
[19:31] <lifeless> so one thing I was very worried about in the bugjam was that we'd close stuff we shouldn't, and I've forwarded you a particular mail that triggered those fears, along with a plea for some guidance to the jammers
[19:33] <lifeless> jml: I was wondering with your testtools open bug
[19:33] <lifeless> jml: if it was a threading correctness issue that python-on-linux handles better
[19:33] <Ursinha> hi abentley, is it possible for the owner of an mp to set its status to Approved? even if the person isn't a reviewer?
[19:34] <lifeless> no
[19:34] <abentley> Ursinha, no, and we certainly don't want it.
[19:34] <lifeless> Ursinha: if this is for tarmac, its a problem - we need to be using the merge queues facility, not the old 'lands stuff that is approved' hack
[19:34] <Ursinha> mars, ^
[19:34] <Ursinha> abentley, I see I'm able to change the status of the mp to approved
[19:35] <Ursinha> and I'm not a reviewer
[19:35] <abentley> Ursinha, which mp?
[19:35] <Ursinha> abentley, this one: https://code.launchpad.net/~ursinha/launchpad/add-ec2land-rules-orphaned-branches-no-conflicts/+merge/31386
[19:35] <lifeless> Ursinha: you are
[19:35] <Ursinha> lifeless, a launchpad reviewer?
[19:35] <Ursinha> no
[19:35] <lifeless> yup
[19:36] <lifeless> Ursinha: yes, you are in ~launchpad
[19:36] <lifeless> which is in ~canonical-launchpad-reviewers
[19:36] <lifeless> and ~launchpad-reviewers
[19:37] <Ursinha> lifeless, ~launchpad is a subgroup of the other two, and not the inverse?
[19:37] <Ursinha> that makes no sense...
[19:37] <lifeless> sure it does
[19:37] <lifeless> as far as LP is concerned, all canonical lp staff are permitted to land things and review.
[19:37] <lifeless> we use social glue for the process of learning, not technical.
[19:38] <Ursinha> lifeless, right
[19:38] <lifeless> maybe not optimal, but that's a different discussion.
[19:38] <Ursinha> lifeless, this is for tarmac
[19:38] <lifeless> as far as lp is concerned, you are a reviewer for lp:launchpad and lp:launchpad/db-devel/trunk
[19:38] <Ursinha> we're discussing changing lp-land to set the mp to approved, so tarmac can handle it
[19:38] <lifeless> tarmac shouldn't need that
[19:39] <lifeless> with the merge queue stuff
[19:39] <Ursinha> lifeless, is that implemented already?
[19:39] <lifeless> and lp-land is the wrong time to set it, as aaron says.
[19:39] <lifeless> Ursinha: I don't know. I shouldn't be here anyway :)
[19:39] <abentley> Ursinha, lp-land is all about PQM.  Are you planning to make Tarmac read emails the way PQM does?
[19:39] <EdwinGrubbs> lifeless: I was just going to ask if there was a workaround for when somebody forgets to use [rollback=] in the pqm commit message that was better than just marking the bug qa-untestable temporarily.
[19:40] <lifeless> EdwinGrubbs: no; as you can see from abentley's excellent LEP we have a lot of polish to go on the deployment magic
[19:40] <Ursinha> abentley, no, just use the same mechanism to make people able to be part of the simplify merge machinery beta
[19:40] <lifeless> EdwinGrubbs: I deliberately closed my eyes and went 'lalalala' at the start, so that we'd have *some* deployments rather than still be building up infrastructure now :)
[19:41] <lifeless> EdwinGrubbs: you just need to manually do the arithmetic when someone forgets a tag like that at the moment.
[19:41] <EdwinGrubbs> ok, thanks
[19:41] <abentley> Ursinha, Tarmac currently uses an "approved" review as a signal that it should perform a merge, right?
[19:41] <Ursinha> abentley, I believe it uses the mp status
[19:41] <abentley> Ursinha, I mean a status of "approved" on the whole proposal.
[19:41] <Ursinha> abentley, ah, yes
[19:42] <abentley> Ursinha, so this is not going to be any kind of smooth transition.  Because we don't use the Approved status that way.
[19:43] <Ursinha> abentley, is the Approved status used in any way today?
[19:43] <abentley> Ursinha, yes.  It's set when the last reviewer approves it.
[19:43] <Ursinha> abentley, and what does that mean?
[19:44] <abentley> Ursinha, it means that the reviewer has no major objections to landing it, but there may be some requested changes.
[19:45] <Ursinha> abentley, wait, you're talking about the queue_status or the vote status?
[19:45] <lifeless> queue status
[19:45] <lifeless> well
[19:45] <lifeless> mp status
[19:45] <Ursinha> right
[19:45] <lifeless> queue status is orthogonal
[19:45] <abentley> Ursinha, I'm talking about the merge proposal status.
[19:45] <Ursinha> that's the name of the property, I don't know why it's called that way
[19:45] <lifeless> I'm going to leave this discussion in abentley's more than capable hands, and go play some wow. enjoy!
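The signal being discussed above - Tarmac landing branches whose overall merge proposal status (the queue_status property in the Launchpad API) is "Approved" - can be sketched as below. This is an illustrative approximation, not Tarmac's actual code; the launchpadlib wiring in the comments is an assumption, and FakeProposal is a stand-in for real BranchMergeProposal entries.

```python
# Filter merge proposals down to those whose overall status is Approved,
# the condition a lander like Tarmac uses as its "go" signal.
def approved_proposals(proposals):
    """Return the proposals whose overall status (queue_status) is Approved."""
    return [mp for mp in proposals if mp.queue_status == "Approved"]

# Rough launchpadlib wiring (assumed, not verified here):
#   lp = Launchpad.login_with("lander", "production")
#   branch = lp.branches.getByUrl(url="lp:launchpad")
#   for mp in approved_proposals(branch.landing_candidates):
#       ...merge and commit mp...

# Stand-in objects for demonstration; the real items are
# BranchMergeProposal entries from the Launchpad API.
class FakeProposal:
    def __init__(self, queue_status):
        self.queue_status = queue_status

candidates = [FakeProposal("Approved"), FakeProposal("Needs review")]
ready = approved_proposals(candidates)
```

This is exactly why the meaning of "Approved" matters in the discussion that follows: whoever flips that status is effectively triggering a landing.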
[19:47] <abentley> Ursinha, we could change Tarmac, or we could change our practice.  I think changing our practice would be expedient.
[19:48] <abentley> Ursinha, So we would be changing it such that the reviewer never marks it "Approved".  The submitter would do that.
[19:49] <abentley> Ursinha, that would give them a window to make any suggested changes.
[19:49] <abentley> Ursinha, and if the submitter isn't a reviewer, they would need to ask a reviewer to do that, just as they need to get a committer to land on PQM.
[19:50] <abentley> Ursinha, But if we're changing our practice anyway, I don't see value in using lp-land to land merge proposals.
[19:53] <Ursinha> abentley, right, so you suggest changing the practice by marking mps Approved with a different meaning, and doing that manually
[19:53] <abentley> Ursinha, right.
[19:53] <Ursinha> I agree lp-land isn't the right place to do that, we're trying to find more automated ways to test the smm changes
[19:54] <abentley> Ursinha, lp-land only exists because pqm isn't well integrated with Launchpad.
[19:54] <Ursinha> abentley, right
[19:56] <lifeless> one final note
[19:57] <lifeless> a team of 30 is slow to change
[19:57] <Ursinha> lifeless, what do you suggest?
[19:57] <lifeless> you may find that new instructions will not be followed immediately, so plan for that happening.
[19:57] <lifeless> Ursinha: myself, I'd help rockstar finish merge queues.
[19:57] <lifeless> My understanding was that that was the plan.
[19:58] <abentley> lifeless, indeed, that is planned and will result in a better system.  Only question is "when".
[19:58] <lifeless> anyhow, I only popped back to make that note about inertia; folk will do the old process for some time after a change is announced
[19:58] <lifeless> so if the system would be fragile, were someone to do the old process, then that's a risk that needs some consideration.
[19:59] <Ursinha> lifeless, one of the ideas of using ec2 land / bzr lp-land to set things up was to help with that, afaik
[20:00] <abentley> Ursinha, the problem is that "Approved" is already being used with a laxer meaning.  If people follow current practice, lp-land can't help with that.
[20:00] <abentley> Ursinha, because by the time they run "lp-land" the change will already be merged.
[20:01] <Ursinha> abentley, right, got it
[20:24] <deryck> benji, ping
[20:24] <benji> hey deryck, what's up?
[20:25] <deryck> benji, hey, is revno 12032 in devel users?  Something about smoke test script?
[20:28] <benji> deryck: I'm afraid I can't parse the question "is revno 12032 in devel users?"
[20:28] <benji> revno 12032 does contain a librarian smoke test script
[20:29] <deryck> benji, sorry I fail.  "yours" I meant.  Is it your rev in devel?  Did you land it?
[20:29] <benji> oh, yes, I did
[20:30] <deryck> benji, can I ask for qa from you for that? :-)  If it gets marked qa-ok, I can deploy a revno we need and get a feature out.
[20:32] <benji> deryck: sure; I'd say that it's untestable, so I'll mark it thusly
[20:32] <deryck> awesome.  easy peasy then.
[20:32] <deryck> benji, thanks!
[20:34] <benji> deryck: done
[20:37] <deryck> cool
[20:47] <wgrant> sinzui: Uh, how did my rev roll back yours? :/
[20:47] <wgrant> It's not in the diff.
[20:59] <wgrant> Morning all.
[21:01] <mwhudson> morning
[21:03] <mars> morning
[21:06] <wallyworld_> abentley: thumper: mumble?
[21:06] <abentley> wallyworld_, thumper may be delayed.
[21:06] <abentley> wallyworld_, see his email.
[21:08] <wallyworld_> abentley: ok. we can do it later
[21:09] <wallyworld_> abentley: my stupid pulse audio was not working again too, so it's reboot time :-(
[22:01] <jcsackett> sinzui: question for you on bug 684151. you say we can just change the name to remove '-secondary', but then won't it clash with the form instance at the top of the page?
[22:01] <_mup_> Bug #684151: Search field at the bottom of a search results page never works <trivial> <launchpad-web:In Progress by jcsackett> <https://launchpad.net/bugs/684151>
[22:01] <sinzui> jcsackett, no, field names are not unique
[20:02] <sinzui> jcsackett, the error was making the names unique just because the ids are unique
[22:02] <sinzui> the ids do need to be unique
[22:03] <jcsackett> sinzui: okay. question the second; how does one go about testing this, since the usual create_initialized_view and pass in "field.text" approach isn't sure to hit the right form?
[22:03] <jcsackett> the only tests for this i see are launchpad-search-pages.txt, which all go via "+search", which isn't specific to the form being used. :-/
[20:04] <sinzui> jcsackett,  get the second form by id and verify its input type="text" element has a sane name attr
[22:04] <sinzui> jcsackett, It can be done using the beautiful soup instance returned by get_tag_by_id()
[22:05] <jcsackett> ah, so just test the name instance, don't bother testing function from that form?
[22:05] <jcsackett> that's much easier. :-P
[20:06] <sinzui> jcsackett, Correct. We can test the contract the markup provides rather than the view's processing of the data
[22:06] <jcsackett> sinzui: dig. thanks!
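A minimal sketch of the test sinzui describes, under stated assumptions: the real helper is a BeautifulSoup-based get_tag_by_id() in Launchpad's test infrastructure, for which stdlib ElementTree stands in here, and the markup and ids below are illustrative rather than the actual search page's. The point it demonstrates: field names may repeat across forms (so both search boxes submit the same field), while ids must stay unique.

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for the rendered search page: two forms whose
# text inputs share a name but carry distinct ids.
RENDERED_PAGE = """<div>
  <form id="search-form">
    <input type="text" name="field.text" id="field.text"/>
  </form>
  <form id="search-form-secondary">
    <input type="text" name="field.text" id="field.text-secondary"/>
  </form>
</div>"""

def text_input_name(markup, form_id):
    """Return the name attr of the text input inside the form with form_id."""
    root = ET.fromstring(markup)
    for form in root.iter("form"):
        if form.get("id") == form_id:
            return form.find("input").get("name")
    raise LookupError("no form with id %r" % form_id)
```

Asserting that both forms' inputs report name "field.text" checks the contract of the markup directly, without driving the view's form-processing machinery at all, which is what makes the test so much simpler than going through create_initialized_view.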
[23:33] <thumper> well that took longer than expected
[23:33] <spm> thumper: this is your haircut you left for, yesterday morning?
[23:34] <thumper> spm: heh, no
[23:34] <thumper> spm: my daughter's class was going for a walk this morning
[23:34] <spm> oh lovely
[23:34] <thumper> spm: and I went to help thinking it would be all over in 1.5 hours
[23:34] <thumper> spm: but no
[23:34] <spm> but no.
[23:35] <thumper> spm: delayed further by a tree falling down on the route we took