[00:02]  * wallyworld types s/launchpad.See/launchpad.LimitedView/g
[00:02] <StevenK>  /g is probably not necessary
[00:03] <wallyworld> but harmless nonetheless
[00:03] <wallyworld> it was figurative anyway - my ide is doing it :-P
[00:03] <StevenK> wallyworld: :%s/\(launchpad.\)See/\1LimitedView/
[00:04] <wallyworld> now you're just showing off
[00:04] <lifeless> don't you use an IDE ?
[00:04] <lifeless> ;)
[00:05] <wallyworld> yes, but i don't do regex search and replace very often
[00:07] <mwhudson> bzr ls -0VRk file | xargs -o sed -i -e 's/
[00:07] <mwhudson> the fact that i can type that without thinking much is a little scary
[00:08] <mwhudson> (should be -0)
[00:08] <wgrant> bzr sed!
[00:09] <mwhudson> yes... i think
[00:11] <wallyworld> i like my IDE because for non-trivial search and replace you can preview the changes and veto any matches which should not be included
[00:12] <wgrant> sed, diff...
[00:12] <wgrant> done
[00:13] <wallyworld> sure, but text based (harder to read the output) and not nearly as user friendly
[00:14] <wallyworld> ie can you click through from a match to the source code
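The previewable bulk replace wallyworld describes can be sketched outside any IDE. This is an illustrative Python version (the helper name, pattern, and file layout are hypothetical, not Launchpad tooling); it produces a reviewable unified diff per file so matches can be vetoed before anything is written back:

```python
import difflib
import re
from pathlib import Path

def preview_replace(root, pattern, replacement, suffix=".py"):
    """Yield (path, unified_diff) for each file the substitution would change."""
    for path in sorted(Path(root).rglob("*" + suffix)):
        old = path.read_text()
        new = re.sub(pattern, replacement, old)
        if new != old:
            diff = difflib.unified_diff(
                old.splitlines(keepends=True),
                new.splitlines(keepends=True),
                fromfile=str(path),
                tofile=str(path),
            )
            yield path, "".join(diff)
```

Writing changes back only after reviewing each diff gives the veto step without IDE support.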
[00:33] <poolie> lifeless, fwiw i think you should just capture all traceback
[00:34] <wgrant> lifeless: 20 frames isn't many
[00:34] <wgrant> (thanks to TAL)
[00:35] <lifeless> whats a better number ?
[00:35] <poolie> 60?
[00:35] <poolie> 50?
[00:37] <StevenK> lifeless: So what about ++profile++sqltrace ?
[00:38] <lifeless> what about it ?
[00:39] <StevenK> lifeless: Oh, you want to do it all the time?
[00:39] <lifeless> in the query log in OOPSes, yes
[00:41] <StevenK> Oh, and do the work in the OOPS handler?
[00:41] <lifeless> the timeline, yes.
[00:41] <lifeless> oops handler is too late
[00:44] <lifeless> I found a methodology error too
[00:44] <lifeless> timeit python API
[00:44] <lifeless> != timeit CLI behaviour
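The log doesn't say which API/CLI mismatch lifeless hit; one common one is that `timeit.timeit()` returns the *total* time for `number` executions, while `python -m timeit` repeats the whole measurement and reports the best per-loop time:

```python
import timeit

stmt = "sum(range(100))"

# The Python API returns the total time for `number` executions.
total = timeit.timeit(stmt, number=10000)

# The CLI behaviour is closer to: repeat the measurement a few times,
# keep the best run, and report the per-loop time from it.
best_per_loop = min(timeit.repeat(stmt, number=10000, repeat=3)) / 10000

print(total, best_per_loop)
```

Comparing `total` directly against the CLI's per-loop figure makes the API look thousands of times slower than it is.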
[00:46] <lifeless> StevenK: this doesn't stop the sqltrace profiler being useful, it just means we have equivalent data whenever we get an OOPS
[00:46] <lifeless> it does point the way at possibly simplifying the profiler implementation
[00:55] <huwshimi> I assume I can ignore these errors and do a lp-land? http://paste.ubuntu.com/739827/
[00:56] <StevenK> huwshimi: That's odd. That's local, or via ec2?
[00:56] <huwshimi> StevenK: ec2. from yesterday afternoon
[00:56] <StevenK> Wow.
[00:58] <huwshimi> StevenK: Did I break the world?
[00:59] <StevenK> huwshimi: I just find it surprising that something else is listening on 2121 on an ec2 instance while the test is running
[01:01] <wgrant> StevenK: Some things listen on random ports.
[01:01] <wgrant> eg. the librarian
[01:03] <huwshimi> StevenK: So I can ignore it then?
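For context on wgrant's point: a service can bind port 0 and let the kernel pick a free ephemeral port, which is why what's listening on a shared test host varies from run to run. A minimal sketch:

```python
import socket

# Binding to port 0 asks the kernel for any free ephemeral port --
# services like the test librarian do this, so finding some port
# (such as 2121) unexpectedly busy on a busy host is not unusual.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]
print(port)
sock.close()
```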
[01:12] <hloeung> poolie, jelmer: ok, packages are there now. Where would you like me to install / update?
[01:13] <wgrant> hloeung: clementine
[01:13] <wgrant> (the guest on concordia)
[01:14] <hloeung> hmm, it seems I don't have SSH access into clementine
[01:14] <hloeung> nevermind, found another way around that
[01:14] <wgrant> Correct.
[01:14] <wgrant> It's a guest that is torn down every build.
[01:15] <wgrant> You need to update its image.
[01:15] <hloeung> ah right
[01:29] <lifeless> StevenK: oh hai ;) https://code.launchpad.net/~lifeless/python-oops-tools/bug-890001/+merge/82334
[01:29] <lifeless> StevenK: (diff coming)
[01:31] <benji> hi lifeless; if I put a fastdowntime deployment on the deployment page would you (or stub) have time to do it in the next few hours?
[01:32] <lifeless> benji: no; we only have one window a day for fdt
[01:32] <lifeless> benji: (because they are downtime)
[01:33] <StevenK> lifeless: r=me
[01:33] <benji> lifeless: darn, well that's understandable, I'll put it up during my day tomorrow to get in line for the next one; thanks
[01:33] <lifeless> benji: they are also done by losa FWIW. Perhaps you mean a live schema patch ?
[01:33] <lifeless> benji: next one is in ~ 6 hours
[01:33] <StevenK> Which patches are unapplied?
[01:34] <benji> lifeless: oh right, I think stub said that this (adding a table) was doable live
[01:34] <lifeless> benji: does the table have FK references ?
[01:34]  * benji tries to remember
[01:34] <lifeless> if it does, it requires downtime.
[01:35] <wallyworld> sinzui: if you are still around - do you have any objection to me adding isBugSupervisor(obj) and isSecurityContact(obj) to IPersonRoles? to complement isOwner(obj) and isDriver(obj)
[01:36] <benji> lifeless: yep it does; fastdowntime it is
[01:36] <wallyworld> sinzui:  or do we want to avoid referencing bug model artifacts from registry
[01:36] <wallyworld> i can see why we would not want to do that
[01:40] <wgrant> benji: So, request it for today's window.
[01:41] <wgrant> benji: Since we can't do it tomorrow.
[01:41] <sinzui> wallyworld, go ahead. They are consistent. <- StevenK wallyworld may be doing some of the role checking you are working on
[01:41] <wgrant> (there is an extended outage for a slony upgrade)
[01:41] <StevenK> I wonder if my db-devel patch has hit prod
[01:41] <wgrant> StevenK: No
[01:41] <wgrant> StevenK: is that before benji's?
[01:42] <wallyworld> sinzui: thanks. i thought they were consistent but wasn't sure if we considered registry a lower layer than bugs, hence the question about referencing
[01:42] <StevenK> wgrant: Let me check
[01:42] <wgrant> benji: I also added your patch number to the allocation listing this morning, since it seems you missed that.
[01:43]  * wgrant lunches.
[01:43] <StevenK> wgrant: db-stable r11140
[01:43] <sinzui> wallyworld, registry is considered the lowest and only required app.
[01:43] <wgrant> StevenK: That's unfortunate.
[01:43] <StevenK> wgrant: Oh? Why?
[01:43] <wgrant> Because we can't do benji's tonight.
[01:44] <wgrant> And I have two landing now.
[01:44] <wgrant> And tomorrow's window is used.
[01:44] <wgrant> Well, only one of my two is landing now.
[01:44] <StevenK> Lean on lifeless for more than one patch a day?
[01:44] <wgrant> But the other is ready, blocked on the first being deployed.
[01:44] <wgrant> Which probably can't happen until Wed.
[01:44] <wallyworld> sinzui: yes. so the fact that registry (PersonRoles) would now reference interfaces declared in the bugs layer may be considered "dirty"
[01:44] <StevenK> FDT is not ... fast
[01:45] <wgrant> wallyworld: Dirty but sometimes unavoidable.
[01:45] <wgrant> wallyworld: And bugsupervisor is registry anyway.
[01:45] <StevenK> wallyworld: That's a layering violation
[01:45] <sinzui> wallyworld, yes it is, but it will not be forever. At least not for security... it will go to registry because many artefacts could be labeled security
[01:45] <wallyworld> wgrant: IHasBugSupervisor is bugs
[01:46] <StevenK> Move it down, I'd suggest
[01:46] <StevenK> In a separate branch
[01:46] <StevenK> sinzui: When were you thinking of chatting? I'll be lunching soon
[01:46] <wallyworld> StevenK: yes, that was the basis of my question. i didn't like the layering violation
[01:47] <wallyworld> i think the consensus is just do it now and fix the layering later
[01:47] <StevenK> Fixing the layering should be very quick
[01:48] <StevenK> And I don't like leaving tech debt around if it can be helped.
[01:48] <StevenK> We already have too much!
[01:48] <wallyworld> yes, possibly touch many files though
[01:48] <wallyworld> i won't leave it hanging around
[01:49] <StevenK> It can be done via regex ... :-P
[01:52] <lifeless> its questionable to say its bugs, given the upcoming issuetracker concept :)
[01:55] <StevenK> ForbiddenAttribute: ('bug_supervisor', <lp.registry.model.distributionsourcepackage.DistributionSourcePackage object at 0xcbf0290>)
[01:55] <StevenK> :-(
[01:58] <sinzui> StevenK, wallyworld. we want the DSP to have a bug supervisor. It is not in scope (yet) for disclosure
[01:59] <sinzui> see bug 191809 and bug 619218
[01:59] <_mup_> Bug #191809: A DistributionSourcePackage needs a bug supervisor role to control permissions <lp-bugs> <search> <ubuntu-qa> <Launchpad itself:Triaged> < https://launchpad.net/bugs/191809 >
[01:59] <_mup_> Bug #619218: It isn't possible to set the bug supervisor for a source package <lp-bugs> <Launchpad itself:Triaged> < https://launchpad.net/bugs/619218 >
[02:00] <huwshimi> lifeless: If you get the chance at some stage I would like to get another review of this: https://code.launchpad.net/~huwshimi/launchpad/avatars-everywhere-712894/+merge/81430
[02:00] <sinzui> well I say the latter is a dupe of the former bug
[02:01] <wallyworld> not a recent problem then
[02:02] <sinzui> no. dsp could have a field that returns the distro bug supervisor so that the interface is met
[02:05] <lifeless> wgrant: remember that oops that was breaking staging amqp oops loading
[02:05] <wallyworld> yes it could
[02:22] <lifeless> wgrant: anyhow, its hilarious
[02:23] <lifeless> StevenK: hi. Diff coming soon - https://code.launchpad.net/~lifeless/python-oops-tools/bug-889982/+merge/82337
[02:25] <lifeless> wgrant: this was the cause - https://bugs.launchpad.net/launchpad/+bug/890951
[02:25] <_mup_> Bug #890951: bad oops prefix in staging translations-export-to-branch <oops> <Launchpad itself:Triaged> < https://launchpad.net/bugs/890951 >
[02:42] <lifeless> sinzui: what was the subject of the mail that kicked off the thread about arbitrary CoCs for teams ?
[02:42] <lifeless> sinzui: I had some thoughts but have lost the thread
[02:43] <StevenK> sinzui: I'm just confused that IHasBugSupervisor.providedBy(self.pillar) is returning True for DSPs
[02:43] <sinzui> lifeless, Proposed team agreements feature
[02:43] <lifeless> cool
[02:43] <lifeless> I'm aware this is a community driven thing
[02:44] <lifeless> I thing there is a good inflection point in LP to push most of it out of LP entirely - so they can do whatever they want as a django app or similar.
[02:44] <lifeless> s/thing/think/
[02:44] <StevenK> Now I'm being mocked.
[02:44] <lifeless> I will mail my thoughts in in a bit.
[02:44] <StevenK> In [3]: IHasBugSupervisor.providedBy(dsp)
[02:44] <StevenK> Out[3]: False
[02:44] <lifeless> StevenK: by me asking for reviews?
[02:44] <StevenK> lifeless: Certainly not!
[02:44] <lifeless> :)
[02:45] <StevenK> lifeless: I've approved your last one
[02:49] <StevenK> OH, it's the bloody view change
[02:54] <lifeless> poolie: bug 885972 might be up your alley
[02:54] <_mup_> Bug #885972: raw_sendmail creates TimedActions with invalid detail <regression> <Launchpad itself:Triaged> < https://launchpad.net/bugs/885972 >
[02:56] <sinzui> trying to see all of the disclosure feature is like looking into the face of Cthulhu; madness
[02:57] <wgrant> lifeless: Haha, handy.
[03:00] <wgrant> sinzui: Rather.
[03:00] <wgrant> Intertwined and partially invisible madness :(
[03:19] <StevenK> % uptime
[03:19] <StevenK>  14:19:32 up 14:07,  3 users,  load average: 31.39, 24.86, 12.65
[03:19] <StevenK> Dear RabbitMQLayer, DIE
[03:20] <lifeless> wgrant: oh, you just read scrollback ?
[03:21] <wgrant> lifeless: Yes.
[03:25]  * StevenK is slowly getting happier with this branch
[03:26] <StevenK> 4 files changed, 53 insertions(+), 54 deletions(-)
[03:40] <jtv1> StevenK, wgrant: no joy from dogfood buildmaster.  :(
[03:41] <wgrant> Well, it's not running at the moment... that could be relevant.
[03:41] <wgrant> I wonder why not.
[03:41]  * wgrant starts.
[03:43] <wgrant> It is building stuff now.
[03:45] <lifeless> hmm
[03:45] <lifeless> what, if any, js library do we use in oops-tools, I wonder
[03:45] <lifeless> (I want expandable sections on the timeline)
[03:47] <lifeless> also, I want a tool to report on out of date deps
[03:47] <lifeless> does setuptools/buildout perhaps know already ?
[03:48] <lifeless> StevenK: oh hai :) https://code.launchpad.net/~lifeless/python-oops-datedir-repo/timeline/+merge/82344
[03:49] <StevenK> sinzui: Can haz chat, if you're still around?
[03:51] <StevenK> lifeless: r=me
[03:53] <jtv1> StevenK, wgrant: builds are failing on dogfood, but I don't see anything explaining the failures.  Am I missing something, or did I simply break the build farm?
[03:53] <wgrant> Where are the failures?
[03:55] <jtv> https://dogfood.launchpad.net/~jtv/+archive/ppa/+packages?batch=150
[03:56] <poolie> https://dev.launchpad.net/EC2Test/Image we use an EBS instance too, right?
[03:56] <StevenK> We do not.
[03:56] <poolie> no, sorry, instance storage?
[03:56] <StevenK> Right
[03:56] <StevenK> It is not EBS-based.
[03:56] <StevenK> Er, backed
[03:57] <poolie> and 'large'
[03:57] <StevenK> I think it's xlarge
[03:57] <StevenK> It's in the code anyway
[03:57] <mwhudson> ec2 test predates ebs backed images
[03:58] <mwhudson> i think instance store makes more sense for ec2 test-ish ephemeral vms anyway
[04:00] <poolie> yeah
[04:00] <poolie> i may try actually changing it to put more on a ramdisk
[04:00] <mwhudson> ah yeah, that may be a win
[04:00] <StevenK> I'd like EBS-backed instances for Jenkins, only for the reason that we can start testing quicker if we have a seeded download-cache and such
[04:00] <poolie> wee
[04:00] <mwhudson> there are much more powerful instances now too, may be worth trying them out
[04:00] <poolie> 23MB/s apt update
[04:01] <mwhudson> (partly the power is in multi-cores, which doesn't help so much of course)
[04:01] <mwhudson> "High-Memory Quadruple Extra Large" instances are really quite something
[04:02] <poolie> so apparently the local store is kinda slow
[04:02] <poolie> don't have data
[04:02] <poolie> may try it now while this is running
[04:02] <mwhudson> ebs is even slower :-)
[04:02] <mwhudson> but yes ramdisks do help afaik
[04:03] <mwhudson> "High-Memory Extra Large Instance" would almost certainly be better for us than basic extra large (fewer cores, but more 'compute units' per core)
[04:03] <mwhudson> cheaper too
[04:06] <mwhudson> maybe "High-Memory Double Extra Large Instance" with everything stuffed onto a ramdisk would be fast enough that the extra cost is mitigated?
[04:06] <StevenK> That's c1.2xlarge?
[04:06] <StevenK> We currently use c1.xlarge instances
[04:07] <mwhudson> no, m2.xlarge or m2.2xlarge
[04:08] <mwhudson> don't think c1.2xlarge is actually a thing?
[04:08] <StevenK> I was guessing
[04:08] <mwhudson> heh :)
[04:08] <mwhudson> the names are hard to remember, there's almost a pattern
[04:08] <jtv> StevenK, wgrant: no logs of my build failures whatsoever afaict.  I think it's all broken.
[04:08] <StevenK> Yes, almost.
[04:09] <wgrant> jtv: Indeed, it looks that way.
[04:09] <jtv> :-(
[04:09] <wgrant> Did you try it locally?
[04:09] <jtv> No.
[04:09] <StevenK> storeBuildInfo is now utterly fucked?
[04:09] <StevenK> Or is failure counting screwing us over?
[04:09] <jtv> Or log retrieval… who knows.
[04:09] <wgrant> The builds dispatch fine.
[04:10] <wgrant> I saw logs.
[04:10] <jtv> Failure accounting will have seen some changes, I think.
[04:10] <StevenK> I saw logtails
[04:10] <jtv> Yes, there were logs previously.
[04:10] <StevenK> *COUNTING*
[04:10] <jtv> Ah.
[04:10] <jtv> Failure counting will have seen some changes, I think.
[04:11] <jtv> With retries in particular, IIRC.
[04:11] <jtv> Some information or other from a previous failure now survives into a next dispatch, that previously didn't.
[04:12] <poolie> mwhudson, trying a tmpfs hack now
[04:13] <mwhudson> poolie: woo
[04:13] <lifeless> poolie: will you look at that sendmail issue? [I d/c'd before apologies if you replied already]
[04:13] <mwhudson> poolie: i think i'll email launchpad-dev about trying m2.xlarge, do you want to follow up about tmpfss?
[04:13] <StevenK> lifeless: You need better internets
[04:13] <lifeless> StevenK: I know :(
[04:14] <lifeless> StevenK: https://code.launchpad.net/~lifeless/python-timeline/backtrace/+merge/82348
[04:14] <poolie> cat is sleeping on my mouse cord
[04:14] <StevenK> No diff, no review.
[04:14] <poolie> makes it hard
[04:14] <StevenK> :-P
[04:14] <StevenK> poolie: Your mouse has a cord? :-P
[04:14] <poolie> that may be my problem
[04:15] <poolie> lifeless, are you talking about bug 885972
[04:15] <_mup_> Bug #885972: raw_sendmail creates TimedActions with invalid detail <oops-infrastructure> <regression> <Launchpad itself:Triaged> < https://launchpad.net/bugs/885972 >
[04:15] <StevenK> My keyboard and mouse are cordless. It's quite nice.
[04:15] <lifeless> poolie: yes
[04:15] <lifeless> StevenK: there is a diff
[04:15] <lifeless> :P
[04:15] <jtv> StevenK: don't rub it in—my wrists have been aching ever since bluetooth stopped working
[04:16] <StevenK> Haha
[04:16] <jtv> (*mostly* stopped working… it still works for some things)
[04:16] <StevenK> Mine are abusing 2.4GHz
[04:17] <lifeless> StevenK: I'm sorry for muddling the LGPL change in there, I should have self-reviewed it separately; I claim tiredness.
[04:17] <lifeless> StevenK: it is a separate commit, you may be better clicking on the commits
[04:18] <poolie> lifeless, i'll see what I can do
[04:18]  * jtv goes for a little cry or food or something
[04:19] <StevenK> lifeless: I'm already trying to keep track of my own branch, there are six other branches you've tossed me over the course of the day, and sinzui has sent me a novel that I need to reply to. My brain, it is leaking out my ears.
[04:21]  * StevenK stabs lifeless for the 1,400 lines.
[04:24] <StevenK> # The fifth element defaults to None:
[04:24] <StevenK> Bruce Willis would not approve.
[04:25] <lifeless> poolie: thanks
[04:25] <lifeless> StevenK: yeah, I know. Gotta love small reusable components!
[04:27] <lifeless> StevenK: don't worry, my next branch is on LP itself.
[04:27] <StevenK> lifeless: You don't specify 'v3' for LGPL in setup.py
[04:27] <lifeless> StevenK: you can't
[04:27] <StevenK> You did for AGPL
[04:27] <lifeless> yes
[04:27] <lifeless> http://mail.python.org/pipermail/catalog-sig/2011-November/004028.html
[04:27] <StevenK> And you seem to repeat LGPL once too many in README
[04:28] <lifeless> hmm, let me see
[04:29] <lifeless> thats a nuisance, cause I've got that copied all over the place now.
[04:29] <lifeless> Nice catch
[04:29] <lifeless> its in every darn little header too
[04:29] <lifeless> fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
[04:30] <lifeless> I'll just delete that last line.
[04:42] <poolie> mwhudson, really a one line patch? that's cool
[04:42] <mwhudson> poolie: it's two, it turns out
[04:42] <mwhudson> but yes
[04:42] <mwhudson> very easy
[04:42] <poolie> i get the chance to add better handling of instance creation errors
[04:42] <poolie> by printing the reason code
[04:43] <poolie> apparently they have sold out of computers
[04:45] <mwhudson> heh
[04:45] <lifeless> yes we have no bananas ?
[04:45] <StevenK> No bananas today!
[04:46] <poolie> http://support.rightscale.com/06-FAQs/FAQ_0112_-_How_come_I_get_an_insufficient_capacity_error_when_launching_an_EC2_instance%3F
[04:46] <mwhudson> i saw a fun fact a while ago: amazon adds more capacity to ec2 every day than they had running the entire amazon organization for the first five years of its existence
[04:46] <lifeless> yeah
[04:46] <lifeless> scary shit
[04:47] <StevenK> mwhudson: *!!!*
[04:47] <poolie> yep, i heard that too
[04:47]  * mwhudson fails to google up the source for that
[04:48] <lifeless> it was in their newsletter
[04:48] <poolie> the other one is that google is the largest pc manufacturer in the world
[04:48] <poolie> perhaps fsvo pc
[04:48] <mwhudson> "launch an instance in the -Any- availability zone" -- _surely_ we do that already
[04:48] <StevenK> Step 5 of that answer makes me eyeroll.
[04:48] <poolie> and fsvo manufacturer
[04:48] <lifeless> mwhudson: no, you get to choose a zone
[04:48] <lifeless> mwhudson: this is 'are you feeling lucky punk' zoning
[04:50] <mwhudson> i know you can specify a zone, but you can also say "i don't care which zone" can't you?
[04:51] <mwhudson> i can't see anything in devscripts/ec2test that would specify a zone
[04:51] <mwhudson> (or a region, come to that)
[04:51] <mwhudson> anyway, time to go
[04:51] <poolie> hm maybe i can also fix this '400 bad request' in shutdown
[04:51] <lifeless> the endpoint you connect to
[04:51] <lifeless> isn't it
[04:54] <mwhudson> lifeless: no, zone is a parameter to runinstances
[04:54] <mwhudson> (he says confidently, after reading the api docs)
[04:54] <mwhudson> region, i think you're right
[04:54] <lifeless> meh, I'll need to page in then
[04:55] <mwhudson> Placement.AvailabilityZone | The Availability Zone you want to launch the instance into Type: xsd:string Default: EC2 chooses a zone for you
[04:55] <mwhudson> anyway, time to wander around in the sun for a bit
[04:55]  * mwhudson eods
[04:57] <lifeless> now for the joy of updating LP
[04:57] <poolie> that error might have meant there were none anywhere in east1
[05:03]  * StevenK seriously ponders a patch to bin/test that makes --subunit the default if output isn't a TTY.
[05:17] <poolie> ok so this does seem a _lot_ faster
[05:17] <poolie> because the job i started later has finished ahead of the original version of the code i started half an hour earlier
[05:17] <poolie> it may be measurement error, i can't really believe it
[05:17] <wgrant> Which job?
[05:19] <poolie> update-image with fs performance tweaks vs without
[05:19] <wgrant> Hmm, what does update-image do that can be temporary?
[05:19] <poolie> branching and setting up lp seems massively faster
[05:19] <wgrant> Assuming you mean a tmpfs.
[05:19] <poolie> this is with data=writeback etc
[05:19] <poolie> also
[05:19] <wgrant> Ahh
[05:20] <wgrant> poolie: https://launchpad.net/builders/actinium
[05:20] <wgrant> Building now
[05:20] <poolie> ec2instance discards stdout when it runs commands on the slave
[05:20] <poolie> no
[05:26] <poolie> ok, stdout is out of order with stderr
[05:28] <wgrant> poolie: Huh
[05:28] <poolie> is possibly hard to avoid
[05:28] <wgrant> poolie: maverick and oneiric built successfully 5 hours ago
[05:28] <poolie> there are two pipes; they have an ordering
[05:29] <poolie> we are not as smart about reading from them as a regular ssh would be
[05:29] <poolie> conceptual pipes
[05:29] <poolie> for some reason we have our own select/recv loop
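Folding the two streams together, as poolie later suggests, sidesteps the ordering problem entirely because there is only one pipe to read. A minimal local illustration with `subprocess` (the command run is just a stand-in for a remote ssh invocation):

```python
import subprocess
import sys

# Folding stderr into stdout gives a single pipe, so the ordering of
# the two streams is decided by the writing process, not by however
# the reader interleaves its select()/recv() calls.
result = subprocess.run(
    [sys.executable, "-u", "-c",
     "import sys; print('out'); print('err', file=sys.stderr)"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
print(result.stdout.decode())
```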
[05:35] <StevenK> lifeless: https://code.launchpad.net/~stevenk/launchpad/subunit-default-not-a-tty/+merge/82352
[05:38] <lifeless> does that work?
[05:38] <lifeless> I thought optparse options didn't like being written to
[05:38] <lifeless> also where are you needing this?
[05:39] <StevenK> Nowhere, I'm scratching an itch
[05:39] <stub> overwriting optparse options works
[05:39] <lifeless> StevenK: I'm just worried it may mess something up for other people
[05:40] <lifeless> StevenK: is there a --non-subunit option if they want non-subunit there ?
[05:40] <stub> Although you are usually better off using parser.set_defaults() *before* parsing the command line
[05:40] <StevenK> I don't think so.
[05:40] <StevenK> lifeless: It has seriously taken me 5 minutes, I don't mind tossing it if need be.
[05:40] <lifeless> StevenK: it would be nice to have an opt-out, I think.
[05:41] <lifeless> StevenK: i've been known to run non-subunit through less, for some nitty debugging situations
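The flag behaviour being discussed could look something like this optparse sketch. The option names are assumptions based on the thread (`--no-subunit` was only a suggestion), not bin/test's actual interface, and it uses `parser.set_defaults()` before parsing as stub recommends:

```python
import sys
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--subunit", action="store_true", dest="subunit")
parser.add_option("--no-subunit", action="store_false", dest="subunit")

# Default to subunit output when stdout is not a terminal (e.g. piped
# into a results parser), while both explicit flags remain overrides.
parser.set_defaults(subunit=not sys.stdout.isatty())

options, args = parser.parse_args([])
```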
[05:46] <stub> (branch_nick, revno) doesn't really identify the revision of a tree that was used because branch_nick is not unique. Is there a better key available? I need to record what branch+revno was used for the latest database update
[05:54] <poolie> wgrant, mwhudson, something like https://code.launchpad.net/~mbp/launchpad/ec2-tmpfs/+merge/82355
[05:54] <poolie> still testing it
[05:56]  * wallyworld off to the vet again. poor dog :-(
[06:03] <lifeless> stub: revid is a globally unique identifier
[06:03] <lifeless> stub: can look that up in the history of a given branch to identify the local revno for it
[06:03] <stub> lifeless: For a revision, or for the branch at that revision?
[06:03] <lifeless> stub: revision
[06:05] <stub> So I could, say, keep a table with every revid in the branch that has been applied to production.
[06:05] <stub> Or just the latest revid, hopefully 'good enough' (although someone cowboying the wrong way could send automation crazy)
[06:06] <lifeless> so the branch has many more revids than db schema patches
[06:06] <lifeless> whats the big picture here
[06:07] <stub> there are more to db updates than schema patches (trusted.sql, update.py, its dependencies...)
[06:07] <stub> https://bugs.launchpad.net/launchpad/+bug/845464
[06:07] <_mup_> Bug #845464: deployment report should fail revisions in stable containing undeployed db changes <fastdowntime-later> <Launchpad itself:Triaged> <qa-tagger:Triaged> < https://launchpad.net/bugs/845464 >
[06:07] <lifeless> stub: can we make trusted.sql into patches, and ignore update.py ?
[06:08] <stub> We need to know what has been deployed. That is what is safe to merge from db-stable -> devel.
[06:08] <lifeless> yes, I agree
[06:09] <lifeless> uhm, yes, if you record a single 'latest rev deployed' in the prod DB, and we only deploy forwards, then you can just do a merge graph check to see what revs are safe to merge from db-stable to devel
[06:09] <stub> Knowing the db patch numbers that have been applied isn't perfect, as patch numbers could conflict if people mess up, and we want the version-applied rather than a potentially modified one.
[06:10] <stub> Knowing the revision number isn't perfect, as there is a chance we didn't deploy from db-stable
[06:10] <lifeless> well, lets not overthink it - because the things that can go wrong are infinite :)
[06:10] <stub> But we probably won't get perfect, so looking for a balance of correct and good enough
[06:10] <lifeless> and we'll get cascading failures in both of those cases.
[06:11] <lifeless> in the former case we'll have developers going zomg wtf, not to mention failures on staging, failures on pqm
[06:11] <lifeless> (More reliably too if we drop the redundant patch info from the file - less ways for devs to mess up)
[06:12] <lifeless> for the latter case, hmmm, no guarantee of cascade failures, OTOH can we automate it so we can't messs up ?
[06:12] <stub> At the moment I'm thinking of lastest revid and assuming that the branch is lp:launchpad/db-stable
[06:13] <lifeless> I think that would work fine
[06:13] <lifeless> s/stest/test/
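The "merge graph check" lifeless describes can be modelled with a toy revision graph: record the latest deployed revid, and anything in its ancestry is safe to merge. This is illustrative only; the revids are invented and real code would walk the bzr branch rather than a dict:

```python
# Toy revision graph: revid -> list of parent revids.
graph = {
    "rev-a": [],
    "rev-b": ["rev-a"],
    "rev-c": ["rev-b"],  # latest revision deployed to production
    "rev-d": ["rev-c"],  # on db-stable but not yet deployed
}

def deployed_set(graph, deployed_tip):
    """All revisions that are ancestors of (or equal to) the deployed tip."""
    safe, todo = set(), [deployed_tip]
    while todo:
        rev = todo.pop()
        if rev not in safe:
            safe.add(rev)
            todo.extend(graph[rev])
    return safe

safe = deployed_set(graph, "rev-c")
# Only revisions in `safe` are safe to merge from db-stable into devel.
```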
[06:14] <poolie> hm i guess we could fold together stdout and stderr on the server and that would avoid the issue
[06:23] <nigelb> mwhudson: Oooh. nice idea with using the ec2 with higher memory!
[06:25] <nigelb> The shaping launchpad's next 18 months thread is full of win :)
[06:28] <poolie> it sure is
[06:40] <lifeless> wallyworld: wgrant: Picking two random js fluent folk; what js should we (or do we?) use in python-oops-tools? Whats the easiest way to tell? If none is in use, whats the easiest way to get up and running with YUI, (given that unlike LP the user base is captive, and having it greased-lightning fast isn't critical)
[06:41] <wgrant> lifeless: It doesn't have any JS at the moment, except for the jQuery that's built into Django's admin UI.
[06:44] <lifeless> so, what I want to do is take the text traceback
[06:45] <lifeless> StevenK: https://code.launchpad.net/~lifeless/launchpad/oops-polish/+merge/82359 if you are still reviewing
[06:45] <StevenK> I'm not, sorry.
[06:47]  * micahg hugs LP devs for the cancel build option
[06:47] <wgrant> micahg: Watch out, it's permanent and non-retriable.
[06:48] <micahg> understood (does it mention that on the cancel page)?
[06:48] <wgrant> I don't know. I didn't realise it was publicly available until a couple of days ago.
[06:48] <micahg> I've usually only wanted it in cases where I know it's going to fail and will be uploading a new version
[06:48] <lifeless> wgrant: anyhow, that MP above
[06:49] <lifeless> wgrant: will get us backtraces from actions in the LP timeline
[06:49] <lifeless> wgrant: I want to put them in the linear query section of the oops viewing template in python-oops
[06:49] <lifeless> wgrant: hidden by default
[06:49] <lifeless> wgrant: with a clicky clicky thing to show it
[06:50] <lifeless> wgrant: I'm not bothered about automated correlation or anything at this point
[06:51] <lifeless> wgrant: I'm seeking opinionated 'do it this way its easiest' answers.
[06:52] <wgrant> lifeless: I've never started a fresh YUI project before.
[06:52] <wgrant> jQuery is trivial.
[06:52] <micahg> ah, but the cancel button only seems to be on pending builds, but that's progress :)
[06:53] <wgrant> micahg: It works on building virtual builds.
[06:53] <micahg> ah, ok, I just checked a native one
[06:53] <wgrant> It doesn't work on non-virtual builds, because we can't really cancel them.
[06:54] <lifeless> wgrant: so, opinionated recommendation ?
[06:54] <wgrant> lifeless: Work out how to use YUI, I guess.
[06:54] <wgrant> I don't have a good solution.
[06:56] <lifeless> hmm, but jquery is bundled
[06:56] <wgrant> With Django, yes.
[06:56] <lifeless> wouldn't it make sense to use one library within the one site ?
[06:56] <wgrant> Because Django is hideous.
[06:56] <nigelb> wgrant: So much hate for Django :(
[06:56] <poolie> the ec2 ubuntu archive seems to be unreachable
[06:56] <poolie> :/
[07:01] <poolie> fwiw the default az is that aws chooses an az for you
[07:04] <nigelb> That's probably for the best.
[07:04] <nigelb> I've seen cases where aws says some zone doesn't have the machine you want.
[07:04] <nigelb> *doesn't have any more of
[07:04] <nigelb> That's when I realized there exists such a limit per AZ.
[07:06] <poolie> i wonder if we should mirror that image in to singapore
[07:07] <wgrant> Isn't Singapore significantly more expensive?
[07:09] <poolie> only slightly more expensive
[07:09] <poolie> i think 57c/hr vs 50
[07:09] <wgrant> Ah, not so bad.
[07:10] <poolie> also it may not be on fire at the moment
[07:11] <wallyworld> lifeless: i've not used jQuery, but if it's bundled with Django and python-oops-tools uses that, it makes sense to me to just use jQuery
[07:12] <nigelb> poolie: wait, what's on fire?
[07:18]  * wallyworld now has 2 launchers running after unity crashed :-/
[07:19] <poolie> i'm having a lot of trouble launching anything in us-east-1, and lots of things are failing with network glitches after they are launched
[07:36] <StevenK> Heh, so effectively AWS is.
[07:38] <nigelb> poolie: Now you're scaring me.
[07:38]  * nigelb has workplace depending on us-east-1.
[07:39] <StevenK> There is now us-east-2
[07:39] <StevenK> But migrating AZs is horrid
[07:40] <nigelb> poolie: AWS status says there was some DNS issues in us-east-1
[07:40] <nigelb> Ugh, but that was at least 4 hours ago. and that's marked as resolved already.
[07:41] <jtv-afk> StevenK, wgrant: successful package build on dogfood!  Now it says "uploading build."
[07:43] <jtv> It's been "uploading build" for a long time though—6 minutes!
[07:43] <wgrant> jtv: Is the build upload cronjob enabled?
[07:43] <jtv> Oh, that's a separate cron job?
[07:43] <wgrant> Yes
[07:45] <jtv> I looked at the “crontab -l” output for the launchpad user, but the only thing mentioning “upload” there is process-upload.
[07:46] <wgrant> #*/5 * * * * LPCONFIG=dogfood /srv/launchpad.net/codelines/current/scripts/process-upload.py -C buildd --builds -v /srv/launchpad.net/builddmaster -a jelmer@canonical.com >> /srv/launchpad.net/dogfood-logs/upload-builds.log 2>&1
[07:46] <wgrant> It's disabled.
[07:53] <poolie> nigelb, ok, image building in singapore!
[07:53] <nigelb> poolie: *whee*
[07:53] <poolie> maybe
[07:53] <nigelb> heh
[07:53] <lifeless> pop quiz
[07:54] <poolie> for me that is ~165ms away
[07:54] <lifeless> how bad would it be to just add the tb as another table column
[07:54] <nigelb> how far is us-east-1?
[07:54] <poolie> 275ms
[07:54] <nigelb> that's significantly faster!
[07:54] <poolie> yeah
[07:54] <poolie> at least if you're going to do anything interactive
[07:55] <poolie> us still seems to be having network problems though
[07:55] <nigelb> ah, right. I can't ping our machines from outside.
[07:56] <poolie> us still seems to be having network problems though
[07:56] <poolie> and so am i :)
[07:56] <nigelb> Did you raise a ticket?
[07:56] <poolie> i thought it was me
[07:56] <lifeless> jtv: I believe you are OCR today?
[07:56] <jtv> lifeless: I believe you are right.
[07:57] <lifeless> I have a midsize branch https://code.launchpad.net/~lifeless/launchpad/oops-polish/+merge/82359 if you have the time
[07:57] <lifeless> its just updating to newer dependencies and dealing with the fallout
[07:59] <jtv> lifeless: I'll take it
[08:00] <poolie> wow actually despite the smaller mean ping time that is noticeably more responsive
[08:00] <lifeless> jtv: thank you
[08:00] <lifeless> poolie: despite or because of ?
[08:00] <jtv> wgrant: would it be okay to reproduce the command line from that cron job?
[08:01] <lifeless> jtv: if there are no passwords or production paths in it, it should be fine.
[08:01] <lifeless> jtv: if you're in doubt, you can paste it to me and I can venture an opinion
[08:01] <jtv> That's dogfood, and I'm copying it off public IRC.  :)
[08:03] <lifeless> hah :P
[08:03] <wgrant> jtv: Huh?
[08:03] <wgrant> jtv: Oh, you mean run it manually?
[08:03] <jtv> Yes
[08:03] <wgrant> Sure, or just reenable the cron job...
[08:03] <jtv> Note the bit about being smacked on the head.
[08:04] <jtv> \o/ successfully built
[08:04] <wgrant> He's away for ages, he won't notice.
[08:04] <wgrant> And there's no auditing.
[08:04] <wgrant> So we just have to delete IRC logs and you're safe :)
[08:04] <jtv> The secret to a happy life: impunity!
[08:07] <nigelb> poolie: I just did an apt-get update on a machine and it seems to hit the mirror fine.
[08:08] <poolie> lifeless, "despite the measured difference being small the subjective difference is large"
[08:08] <lifeless> ah :)
[08:09] <lifeless> log senses :)
[08:09] <poolie> yeah
[08:09] <poolie> or perhaps there is less jitter or variability or something
[08:10] <poolie> nup, there's very little
[08:10] <poolie> i don't know then
[08:10] <poolie> error: lp.code.model.tests.test_codeimportjob.TestCodeImportJobWorkflowNewJob.test_dateDueOldPreviousResult [ multipart
[08:10] <poolie> is this spurious? i did'nt break it i think
[08:10] <poolie> s//i don't think i broke it
[08:10] <wgrant> /date/i -> spurious
[08:11] <jtv> lifeless: question about ll.132—133 of the diff… was it meant as a feature for "name1" to be able to occur twice with different values?
[08:11] <jtv> (req_vars0
[08:11] <jtv> *(req_vars)
[08:11] <jtv> It doesn't make much sense to me, but I wonder why the test seems to want to exercise that.
[08:11] <lifeless> jtv: it was a bogus concept
[08:11] <lifeless> jtv: all our producers and consumers dictified it.
[08:12] <jtv> Amusing.
[08:12] <lifeless> jtv: the change here eliminates it
[08:12] <jtv> Well, this does look a lot better.
[08:12] <poolie> you have got to be ... kidding me
[08:12] <lifeless> you'll see a lot of s/dict(// in the diff :)
[08:12] <poolie>             counter = count(randint(0, 1000000))
[08:12] <lifeless> poolie: thats in your TCP stack ?
[08:13] <poolie> that's in the test factory
[08:13] <poolie> eventually the same number will turn up twice
[08:14] <jtv> Yowza—the tests even depended on ordering of req_vars.
[08:14] <poolie> http://irclogs.ubuntu.com/2011/04/13/%23launchpad-yellow.html#t18:36
[08:15] <nigelb> Interesting.
[08:15] <lifeless> jtv: this is a long running source of low grade pain
[08:15] <lifeless> jtv: I'm very very happy to finally have hit the point in this oops yak shaving exercise that addressing it was reasonable.
[08:16] <wgrant> poolie: Was that your failure?
[08:16] <poolie> that is giving me a very wgrant moment
[08:16] <wgrant> Oh?
[08:16] <poolie> who on earth counts on random integers being unique in a test
[08:16] <StevenK> We do.
[08:16] <wgrant> It's probably not random.
[08:16] <lifeless> jtv: after this, there is https://code.launchpad.net/~lifeless/python-oops-tools/polish/+merge/82365 which shows backtraces on each SQL query in lp-oops, something that I think will help folk tracking down sources of thousand-query-death.
[08:17] <wgrant> Oh, it uses getUniqueString
[08:17] <wgrant> I see
[08:17] <lifeless> jtv: its also pretty small and self contained.
[08:17] <nigelb> "a wgrant moment" heh.
[08:18] <poolie> wgrant, there may be some other explanation
[08:18] <lifeless> jtv: I will be back soon - getting dinner for the family
[08:18] <wgrant> We see similar sorts of things when a test runs a script but doesn't mark the DB dirty. Sequences don't get reset, so extremely confusing errors crop up a few tests later.
[08:18] <poolie> but that randint call looks like a pretty big red flag
[08:18] <wgrant> But in this case you're right, I think.
[08:19] <poolie> i wonder why it is trying to make them unique per-thread?
[08:19] <poolie> why would you want that
[08:23] <wgrant> poolie: No idea.
[08:29] <poolie> 'wgrant moment' meaning swearing at people
[08:29] <poolie> anyhow, https://bugs.launchpad.net/launchpad/+bug/891028
[08:30] <_mup_> Bug #891028: TestFactory getUniqueInteger isn't <spurious-test-failure> <Launchpad itself:Triaged> < https://launchpad.net/bugs/891028 >
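The failure being chased above can be sketched in a few lines. The quoted `counter = count(randint(0, 1000000))` means each factory instance (roughly one per thread or test run) seeds its "unique" integer counter with a random start drawn from a space of only about a million values, so the birthday bound makes an eventual repeated starting point, and hence a repeated `person-name-NNNNNN`, near-certain. A hedged reconstruction: only the `count(randint(...))` line is from the real factory; the surrounding scaffolding here is hypothetical.

```python
# Sketch of the collision mode behind bug 891028. The expression
# `count(randint(0, 1000000))` is quoted from the test factory;
# make_factory_counter and p_all_distinct are illustrative only.
from itertools import count
from math import exp
from random import randint

def make_factory_counter():
    # Each factory (e.g. one per thread) gets its own counter with
    # a random starting point in [0, 1000000].
    return count(randint(0, 1000000))

def p_all_distinct(n, space=1_000_001):
    # Birthday approximation: probability that n independently chosen
    # starting points in a space of `space` values are all distinct.
    return exp(-n * (n - 1) / (2 * space))

p100 = p_all_distinct(100)    # ~0.995: a clash is still unlikely
p2000 = p_all_distinct(2000)  # ~0.135: only ~14% chance of NO clash
```

With thousands of test runs across ec2 and buildbot, overlapping counter ranges handing out the same integer twice is the expected outcome, not a freak event, which matches everyone independently hitting `person-name-923594`.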
[08:42] <lifeless> jtv: I am back, if that matters :)
[08:43] <jtv> lifeless: I am sure you are happy to be back.  And that is a good thing, right?
[08:43] <lifeless> :P
[08:43] <jtv> Nice error page we have now…
[08:43] <jtv> (Which is what I get instead of your other MP)
[08:44] <lifeless> hmm, works for me
[08:44] <lifeless> I think you hit fdt
[08:45] <jtv> Yes.  It was a nice frowney.
[09:03] <mrevell> Hello
[09:03] <adeuring> good morning
[09:16] <jtv> morning mrevell, morning adeuring
[09:17] <mrevell> Howdy howdy howdy
[09:21] <poolie> hi mrevell
[09:26] <lifeless> jtv: so did you have any thoughts on https://code.launchpad.net/~lifeless/python-oops-tools/polish/+merge/82365 ?
[09:26] <jtv> Not yet.
[09:26] <lifeless> k
[09:26] <jtv> I think I'll have my standup first.
[09:26] <lifeless> fair enough
[09:26] <lifeless> I'm going to halt() - tl meeting in the early am
[09:26] <lifeless> jtv: so if you don't get to it, its not a problem, I can nab first-up OCR tomorrow
[09:27] <jtv> I should get to it anyway.  Good night!
[09:27] <lifeless> nn
[09:29] <poolie> mrevell, i liked curtis's karma rant
[09:29] <poolie> i was going to speculate but i won't
[09:35] <mrevell> poolie, Let's speculate; it's fun :) I have a reply in mind, but I want to stick to Curtis' request of "doing something" rather than just bikeshedding. Dashboards and walls are on the roadmap for next year. Karma ... let's kill it and, if we replace it with anything, let's replace it with human-awarded recognitions. "poolie thanked mrevell for xxxxx", or similar.
[09:36] <poolie> i would possibly, if it gets to the top of my queue
[09:36] <poolie> put up a patch that hides all mention of karma as a number
[09:37] <poolie> so you just see Person has been active in Project
[09:37] <poolie> or maybe "N most active people"
[09:37] <mrevell> Right, so long as the people in that list are there because of the number of useful actions rather than their karma score.
[09:37] <poolie> well
[09:38] <poolie> i think this is complementary to, but decoupled from, having a better sense of what is useful action
[09:38] <poolie> maybe if we had a really well calibrated measure of useful we'd be ok to show it as a number, but maybe not even then
[09:41] <poolie> mrevell, oh the other thing as far as 'thankyou'
[09:41] <poolie> would be to add some social network buttons
[09:41] <poolie> i realize this will be controversial and might be a bad idea
[09:41] <poolie> also might not actually be useful
[09:41] <poolie> might be interesting
[09:43] <wgrant> "wgrant liked this bug"?
[09:43] <wgrant> That doesn't sound very useful...
[09:44] <poolie> yeah one drawback is that a lot of lp is bugs
[09:44] <poolie> mm
[09:44] <poolie> i can imagine voting up your merge proposals, or thanking you for fixing things
[09:45] <mrevell> Karma, as anything other than a list of actions, is a false representation of the value of someone's contributions. I think we can make useful judgements about what is a useful action but when we start weighting actions and so on, it leads to discontent. There are some people who like karma and I know that turning other types of software into pseudo-games is a thing right now, but I'd rather see human recognition of good work done than a calculation that is basically broken.
[09:45] <poolie> yeah
[09:45] <mrevell> poolie, As for +1s and Likes, it fits in with what some people have asked for. Like you say, you'd limit it to certain actions.
[09:45] <mrevell> So
[09:46] <poolie> i think if i saw a person's timeline, or a project's timeline, that would give me a much better calibrated sense of their contribution
[09:46] <poolie> amhnews's timeline :)
[09:46] <mrevell> poolie, Right yeah
[09:46] <mrevell> So, you wouldn't like a bug, you'd like that someone had reported a bug.
[09:46] <mrevell> Or rather, thank them
[09:46] <mrevell> in some way
[09:47] <poolie> maybe thank them for fixing it
[09:47] <poolie> maybe +1 a project
[09:47] <mrevell> poolie, or reporting a bug, I think that's fine.
[09:47] <mrevell> Yeah, +1/Like a project
[09:47] <mrevell> or a patch
[09:47] <mrevell> or a branch
[09:47] <poolie> so
[09:47] <mrevell> Hmm, interesting.
[09:48] <mrevell> I can feel some LEPping coming on.
[09:48] <poolie> so it seems at least interesting if not totally justifiable
[09:48] <mrevell> :)
[09:48] <poolie> mm
[09:48] <poolie> downsides:
[09:48] <poolie>  - may break things
[09:48] <poolie> - people will bitch about it
[09:48] <poolie> - gives personal data to $company
[09:48] <poolie> - need to choose which ones to do
[09:48] <poolie> - opportunity cost
[09:49] <mrevell> 1. Everyone we do breaks things
[09:49] <mrevell> 2. Depends who and whether they have a point.
[09:49] <mrevell> 3. We should have very clear and effective opt-outs or even make it opt-in.
[09:50] <mrevell> 5. Dashboards and walls are important, IMO; this is just a bit of icing that we might be able to get on top.
[09:50] <mrevell> s/Everyone/Everything
[09:50] <mrevell> poolie, Do you think this will break things more than usual?
[09:50] <poolie> probably less
[09:51] <poolie> less than changing the buildds :)
[09:52] <wgrant> How will it work with our ever-expanding volume of confidential data?
[09:52] <poolie> it's a really good question
[09:52] <poolie> one answer is that i think we run urchin.js on all pages at the moment
[09:52] <poolie> (?)
[09:52] <mrevell> wgrant, I think that's the part where I get to say, "Ha, mere implementation detail" :)
[09:52] <poolie> so google already know all the urls and get to run js on them
[09:52] <wgrant> poolie: I believe so, yes, which is pretty awful.
[09:53] <wgrant> But being open to Google Analytics compromises is one thing.
[09:53] <wgrant> Most of the web is open to that.
[09:53] <poolie> :)
[09:53] <mrevell> I have a call shortly. I'm spending to day preparing the LP roadmap for the stakeholder meeting tomorrow. We have a skeleton Activity Walls LEP: contributions welcome: https://dev.launchpad.net/LEP/ActivityWalls
[09:53] <wgrant> What do the social network buttons do?
[09:54] <wgrant> I've never used one.
[09:54] <poolie> so this would potentially encourage people to click "+1 woo hoo" on the Ubuntu Electric Car project
[09:54] <poolie> despite it being private
[09:54] <adeuring> stub: could you have a look at this MP: https://code.launchpad.net/~adeuring/launchpad/bug-sorting-by-attachment-age/+merge/82315 ?
[09:54] <wgrant> poolie: That's sort of what I'm worried about.
[09:54] <stub> adeuring: yup. saw that earlier - will go over it now
[09:54] <poolie> i was wondering about that too
[09:54] <adeuring> stub: thanks!
[09:54] <poolie> perhaps a Must is that they only appear on public pages
[09:54] <wgrant> I don't think it's really useful for something like LP.
[09:54] <wgrant> But maybe.
[09:54] <poolie> if that can be easily cheaply computed
[09:55] <poolie> wgrant, they typically act as a counter of how many people clicked it, they may post it to a feed you create on the web
[09:55] <wgrant> But then we're further restricting functionality by privacy, which seems sort of bad.
[09:55] <poolie> they can skew google's search results
[09:55] <poolie> which is i think one of the more interesting outcomes
[09:56] <poolie> https://plus.google.com/105614526446850346960/posts/VZYQR5rsKUA
[09:59] <poolie> the first step would be to give less stupid search snippets
[09:59] <poolie> i think i'm tired
[10:24] <nigelb> poolie: any luck with singapore?
[10:35] <stub> adeuring: done.
[10:35] <adeuring> stub: thanks!
[10:36] <stub> adeuring: I can throw together the populate script after the patch is deployed if you want (for i in range(1,1000000): cur.execute(...); con.commit())
[10:36] <adeuring> stub: that would be great!
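stub's one-liner above (a `cur.execute(...)` and `con.commit()` inside a loop) is the usual shape of a post-deployment populate script: do the backfill in pieces and commit as you go, so the migration never holds one huge transaction. A hedged sketch of that pattern, using stdlib `sqlite3` so it runs anywhere; the real script would target the Launchpad PostgreSQL database, and the `bugtask`/`date_last_attachment` names here are made up for illustration.

```python
# Hedged sketch of the populate/backfill pattern stub describes:
# update rows in batches and commit after each batch. sqlite3 stands
# in for PostgreSQL; table and column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute(
    "CREATE TABLE bugtask (id INTEGER PRIMARY KEY,"
    " date_last_attachment TEXT)"
)
cur.executemany(
    "INSERT INTO bugtask (id) VALUES (?)", [(i,) for i in range(1, 101)]
)
con.commit()

BATCH = 25
for start in range(1, 101, BATCH):
    cur.execute(
        "UPDATE bugtask SET date_last_attachment = 'backfilled'"
        " WHERE id >= ? AND id < ?",
        (start, start + BATCH),
    )
    con.commit()  # one commit per batch, as in stub's loop

backfilled = cur.execute(
    "SELECT count(*) FROM bugtask"
    " WHERE date_last_attachment IS NOT NULL"
).fetchone()[0]
```

The per-batch commit is the point: on a production database it keeps lock hold times and replication lag bounded, at the cost of the backfill not being atomic.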
[10:48] <stub> Can I get the revision id of the last committed revision in my branch with the bzr command line?
[10:50]  * stub finds log --show-ids
[11:14] <wgrant> stub: bzr version-info --custom --template="{revision_id}"
[11:14] <stub> ta - that is nicer
[11:18] <poolie> nigelb, it's still building there
[11:18] <poolie> i think it will be ok unless there's another glitch or bug
[11:19] <poolie> mrevell, https://www.google.com/support/webmasters/bin/topic.py?hl=en&topic=1140191
[11:20] <mrevell> Ah thanks poolie
[11:22] <mrevell> Which Google account? martin.pool@canonical.com?
[11:22] <mrevell> arg
[11:22] <poolie> :)
[11:22] <stub> slow but :-/
[11:23] <stub> 10 seconds more to fastdowntime. Hopefully faster on real disks.
[11:25] <stub> oh... don't test on a remote checkout idiot
[11:35] <poolie> woo, ok, a spicy singaporean AMI!
[11:40] <rick_h_> morning party people
[11:43] <wgrant> Morning rick_h_
[11:47] <nigelb> poolie: Yay! :)
[11:47] <nigelb> (sorry, meetings suck. I was stuck in one for an hour)
[11:48] <nigelb> ~>
[12:01] <wgrant> poolie: HMMMM
[12:01] <wgrant> https://lpbuildbot.canonical.com/builders/lucid_lp/builds/1559/steps/shell_6/logs/summary
[12:02] <wgrant> There are a few potential suspicious revisions :(
[12:03] <rick_h_> hey, I've been fighting that for the last 2 days
[12:03] <rick_h_> the random generated "person-name" is always the same id number
[12:03] <wgrant> Have you ever managed to reproduce it locally?
[12:03] <rick_h_> no, not once
[12:03] <rick_h_> always works locally
[12:04] <rick_h_> tried 4 times, my bugfix branches 3 times and ran devel last night to see if maybe it was me
[12:04] <rick_h_> always get the same 923594
[12:04] <rick_h_> but the getUniqueInteger method should be 'random' starting point
[12:06] <poolie> rick_h_, i filed a bug
[12:06] <poolie> rick_h_, where are you btw?
[12:06] <rick_h_> Michigan USA
[12:06] <poolie> this is like the things i was fighting a few weeks ago where
[12:06] <poolie> amazon seemed to have great aptitude for finding timing dependent failures
[12:07] <rick_h_> I thought maybe as well, but I've done 4 runs over 2 days (3 runs yesterday) and it's always the same
[12:07] <poolie> always the same test?
[12:07] <wgrant> rick_h_: What's the earliest devel revno that showed the problem?
[12:07] <rick_h_> I just want to land my pretty "add a css class" bugfix :)
[12:07] <poolie> i wonder if it is not coincidental
[12:07] <rick_h_> poolie: yes, same test, same invalid person name
[12:07] <poolie> whoa
[12:08] <wgrant> It may be the DB dirtiness issue that I alluded to earlier.
[12:08] <rick_h_> sec, let me see if i can find my original failed test branch
[12:08] <wgrant> Someone should try running the previous 100 tests or so
[12:08] <wgrant> See if it's reproducible.
[12:09] <wgrant> Locally, I mean.
[12:10] <rick_h_> https://code.launchpad.net/~rharding/launchpad/bugfix_814696 is my branch I first got it
[12:11] <rick_h_> so 14294 was where I started out at?
[12:11] <wgrant> The ec2 email should say which version of devel was used.
[12:11] <rick_h_> sorry, still getting back into bzr
[12:11] <rick_h_> yea, I'm still searching for that original email
[12:11] <poolie> add a -r option to ec2 test
[12:11] <wgrant> It automatically merges devel before it starts running tests.
[12:11] <rick_h_> ah, nice
[12:14] <poolie> there is a --trunk option
[12:14] <poolie> add a -r and see if trunk sucked in the past
[12:15] <wgrant> It's easy enough to bisect, but I'm hoping to find out how far back we might have to look.
[12:15] <wgrant> Without running too many ec2 instances...
[12:17] <poolie> rvba, does your patch seriously cut that page from 1000+ to 7 queries?
[12:18] <poolie> wgrant, there's only say 200 per month
[12:18] <poolie> less than 8 runs should be enough
[12:18] <wgrant> That's true.
[12:20] <wgrant> But anyway, hopefully gary or someone can sort this out before I get a chance to.
[12:25] <rick_h_> ok, found that first test email. Source: bzr+ssh://bazaar.launchpad.net/~rharding/launchpad/bugfix_814696 r14296
[12:26] <rick_h_> Target: bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel/ r14301
[12:26] <wgrant> Hmmm
[12:26] <wgrant> So it wasn't the testfix.
[12:33] <poolie> +sed -ie 's![ \t]\/[ \t]*ext3[ \t]*defaults[ \t]!/ ext3 data=writeback,commit=3600,async,relatime!' /etc/fstab
[12:33] <poolie> isn't that just such a good idea
[12:33] <poolie> don't tell elmo
[12:34] <rick_h_> poolie: ooh pretty
[12:40] <nigelb> poolie: elmo will probably shoot you or shoot himself.
[12:41] <poolie> i hope it's actually faster after all these round trips
[12:44] <nigelb> heh
[13:25] <rvba> poolie: I was lunching… what patch?
[13:27] <rvba> Gosh, I've been bitten by this too: "NameAlreadyTaken: The name 'person-name-923594' is already taken."
[13:29] <rick_h_> do you always get it? or is it temp?
[13:29] <rvba> I just got it in an ec2 run.
[13:29] <rick_h_> ah ok
[13:40] <rick_h_> rvba: ping
[13:41] <rvba> rick_h_: pong
[13:41] <rick_h_> I'm finishing up getting that bugfix for the close button we talked about yesterday
[13:41] <rvba> Great
[13:41] <rick_h_> I'm wondering if you know the what for on the green "steps" bar that shows up on the edit sub link?
[13:41] <rick_h_> I thought it was mostly decorative/header showing kind of thing
[13:42] <rick_h_> but the docs seem to imply some sort of multi-step process or something in overlay.js?
[13:42] <rvba> let me have a look.
[13:44] <rick_h_> Hmm, I guess some links do have it only half across, some all the way,
[13:46] <rvba> hum, this step 'thing' is new to me… let's see where it's used.
[13:47] <rick_h_> https://bugs.launchpad.net/launchpad/+bug/741463
[13:47] <_mup_> Bug #741463: popup diff close button misplaced <trivial> <ui> <Launchpad itself:In Progress by rharding> < https://launchpad.net/bugs/741463 >
[13:47] <rick_h_> most of the ajax links on the right have it in some fashion
[13:47] <rick_h_> "link to related branch" is half across
[13:47] <rick_h_> subscribe someone else as well
[13:48] <rick_h_> https://bugs.launchpad.net/launchpad "subscribe to bug mail" is all the way across
[13:49] <rvba> weird
[13:51] <rvba> rick_h_: oh I see
[13:52] <rvba> rick_h_: it's a kind of weird looking progress bar for multi step stuff.
[13:52] <rick_h_> right, that's what the code seemed to say, but things like that "subscribe to bug mail" didn't seem to describe 100% done of some process
[13:53] <rvba> rick_h_: like when you "subscribe someone else", first you search for a name (step1) and then you pick up someone from the list you get (step2).
[13:53] <rick_h_> ok, so that overlay itself has another step after submitting the form then
[13:53] <rick_h_> so if I'm loading this diff, should I include it as just 100% done then? Or leave it out since there's not really a process to go through
[13:53] <rick_h_> you're just loading the content
[13:54] <rvba> rick_h_: I suppose it should be 100% because there is no next step ;)
[13:54] <rick_h_> ok, will go that route then
[13:54] <rick_h_> thanks
[13:55] <rvba> rick_h_: thank you, you made me discover something I simply never spotted before ;)
[13:59] <rvba> rick_h_: did you see other pickups where the green bar is simply absent?
[14:00] <rick_h_> rvba: yea, like the "this bug affects me"
[14:00] <rick_h_> but that's a slightly different overlya
[14:00] <rick_h_> overlay
[14:00] <rvba> I suppose it's still based on the very same js code.
[14:02] <rvba> rick_h_: in fact, maybe it's best to leave it out then.
[14:07] <rick_h_> rvba: yea, there's a flag in the JS to show progress bar
[14:07] <rick_h_> in the pretty overlay ATTRS
[14:08] <rvba> rick_h_: the presence or not of the green bar does not seem to be very consistent across lp.  But it seems to be associated with "performing an action" within an overlay.  In your instance, you don't *do* anything with the overlay, it's simply showing information.
[14:08] <rvba> So yeah, on second though, I think you can leave it out.
[14:08] <rick_h_> yep, I kind of wanted it since i thought it looked good, but seems to have a reason/madness if a bit inconsistent
[14:09] <rick_h_> ok, I can agree with that then
[14:09] <rvba> Cool.
[14:13] <jcsackett> morning, all.
[14:13] <rvba> Hey jcsackett.
[14:14] <jcsackett> hi, rvba. :-)
[14:14] <rick_h_> morning
[14:16] <rick_h_> rvba: ok, branch up, put you in as requested reviewer
[14:18] <rvba> rick_h_: sure, I'm in the middle of something right now but I'll get to it later.
[14:18] <rick_h_> np, thanks for the help getting things figured out
[14:18] <rvba> welcome.
[14:27] <rick_h_> adeuring: we were talking about that ec2 test failures this morning. That's the error I was dealing with yesterday on the call
[14:27] <adeuring> rick_h_: ah, thanks for the hint!
[14:27] <nigelb> hah, adeuring mailed about the same failure that everyone's talking about.
[14:49] <deryck> gary_poster, we have a persistent build failure affecting ec2 and buildbot runs, which seems like a timing issue in factory generated stuff....
[14:50] <deryck> gary_poster, I'd ping bigjools, too, but he isn't here. :)  it seems like the kind of thing a maintenance squad should look at...
[14:50] <deryck> gary_poster, given no one can pin down a change that introduced it.
[14:50] <deryck> gary_poster, what do you think? :)
[14:55] <gary_poster> deryck, sorry, my IRC ping failed me somehow, sorry.
[14:55] <deryck> np
[14:56] <deryck> I see you are aware from mail.
[14:56] <gary_poster> yeah
[14:56] <gary_poster> yeah, deryck, I'll dig into it or get someone else to
[14:56] <gary_poster> deryck, do you know when this started?
[14:56] <gary_poster> the most recent buildbot failure was the first one I've seen
[14:57] <deryck> gary_poster, I know that rick_h_ was having the issue in ec2 early yesterday.
[14:57] <gary_poster> deryck, ah ok, so another intermittent issue. :-/  ok, thanks
[14:58] <deryck> gary_poster, well, it's not intermittent, I don't think.  Not entirely.  rick_h_ gets it consistently in every run as does adeuring.
[14:58] <rick_h_> yes, I've completed 4 runs over the last 48 hours, each with the same failure and everyone has the same person id as the same failure id
[14:59] <gary_poster> ah ok
[15:01] <nigelb> gary_poster: If it helps, poolie ran into it all day as well I think.
[15:42] <abentley> rick_h_, adeuring, deryck: Here are my current aliases, with comments about what they do: https://pastebin.canonical.com/55863/
[15:43] <rick_h_> thanks abentley
[15:43] <adeuring> abentley: cool, thanks
[15:43] <deryck> abentley, yes, thanks!
[15:44] <deryck> rick_h_, let's mumble top of the hour.  cool?
[15:44] <rick_h_> sounds good
[15:47] <abentley> rick_h_: Here's a snippet that can't be expressed as an alias: bzr lp-find-proposal -r mainline:annotate:lib/lp/bugs/javascript/buglisting.js:20
[15:48] <abentley> rick_h_: It means "find me the merge proposal in which landing this line was discussed".
[15:49] <rick_h_> cool, going through the app auth process
[15:51] <abentley> rick_h_: it needs to run in the branch the proposal was targeted against.
[16:03] <bac> rick_h_: so are you seeing all fail with person-name-923594 ?
[16:07] <rvba> bac: I've had the problem in ec2 a few hours ago too.
[16:08] <bac> rvba: can you confirm it was with exactly the person name i listed?
[16:08] <rvba> bac: yep (NameAlreadyTaken: The name 'person-name-923594' is already taken.)
[16:13] <jelmer> hi bac, rvba
[16:13] <jelmer> I've just had an ec2 failure with that error as well
[16:13] <bac> thanks jelmer
[16:13] <rvba> hi jelmer.
[16:14] <jelmer> I had a quick look, but I didn't see any recent changes in that area. Has either of you looked into it further?
[16:15] <rvba> not me, bac is on the case ;)
[16:25] <rick_h_> bac: yes
[16:26] <bac> thanks rick_h_
[16:27] <rvba> rick_h_: btw I've reviewed your mp (you should have received an email about that).  I think you need to add a test for the change and then you're good to go!
[16:28] <rick_h_> rvba: yep, thanks. Great feedback, appreciate it
[16:28] <rick_h_> working on it now
[16:28] <rvba> Great.
[16:34] <allenap> abentley: Could you sanity check a script for me? https://code.launchpad.net/~allenap/launchpad/wipe-precise-translations/+merge/82422
[16:35] <abentley> allenap: looking.
[16:35] <allenap> Thanks.
[16:37] <abentley> allenap: It looks okay, but I'm no expert on this domain.
[16:37] <allenap> abentley: Thanks, awesome.
[17:36] <mtaylor> it's posssible to do a bug import into a project that already has bugs, yeah?
[17:42] <gary_poster> adeuring, when did you first see the person-name-923594 problem?  rick_h, are you the only one who saw the problem 48 hours ago to your knowledge?  Is there reason to suspect that it might be in a diff that you made?
[17:42] <gary_poster> It's odd that everyone can reproduce it now
[17:43] <gary_poster> and buildbot only just failed with it now
[17:43] <gary_poster> but people report it earlier
[17:43] <rick_h_> gary_poster: I was the first to see it because I was trying to run my first ec2 test Tues morning.
[17:43] <rick_h_> but no change of mine has landed
[17:43] <gary_poster> rick_h_ ok cool thanks
[17:43] <rick_h_> and that first run was just to add a css class to an item
[17:44] <gary_poster> rick_h_, that was it! ;-)
[17:44] <adeuring> gary_poster: received a failure message yesterday, 22:39 UTC
[17:44] <rick_h_> I just think I got lucky to start running ec2 tests at the right moment
[17:44] <gary_poster> ok thanks adeuring
[17:46] <gary_poster> rick_h_ you said before that you had seen the error 48 hours earlier, which would have been Monday morning.  Was it Monday or Tuesday that you first saw it?
[17:46] <rick_h_> sorry, guess I started trying to run tests monday afternoon
[17:46] <rick_h_> and we had the ec2 issues that you helped me through
[17:46] <rick_h_> so the actual run didn't go until Tues
[17:46] <adeuring> gary_poster: first line from the log of the test run: Tests started at approximately Tue, 15 Nov 2011 18:04:11 UTC
[17:46] <adeuring> bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel/, revision 14301
[17:46] <rick_h_> I got the failure email about 2pm EST on tues
[17:47] <gary_poster> rick_h_, adeuring, ok thanks, that's a big help.
[17:47] <adeuring> rick_h_: the revision number from your failure could help Gray too
[17:47] <adeuring> s/Gray/Gary/
[17:47] <rick_h_> adeuring: yea, it was the same, 14301
[17:48] <gary_poster> ok thanks
[17:48] <rick_h_> Source: bzr+ssh://bazaar.launchpad.net/~rharding/launchpad/bugfix_814696 r14296
[17:48] <rick_h_> Target: bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel/ r14301
[17:48] <gary_poster> yeah, 14301 then
[17:48] <deryck> abentley, I filed bug 891239 after doing qa on my integration work.  Is that what your branch fix from yesterday addressed?
[17:49] <_mup_> Bug #891239: CustomBugListings js error: Cannot read property 'mustache_model' <bug-columns> <Launchpad itself:Triaged> < https://launchpad.net/bugs/891239 >
[17:50] <mtaylor> sinzui: how do I add a file to a question so that a losa can get at my bug import xml file again?
[17:51] <sinzui> mtaylor, launchpad does not support attachments to questions
[17:51] <sinzui> see bug 77123
[17:51] <_mup_> Bug #77123: Attachments in answer tracker <feature> <lp-answers> <Launchpad itself:Triaged> < https://launchpad.net/bugs/77123 >
[17:52] <mtaylor> sinzui: so how do I send the bug xml file?
[17:52] <mtaylor> sinzui: upload and add a link?
[17:52] <sinzui> mtaylor, yes
[17:52] <mtaylor> sinzui: ok. cool
[18:00] <mtaylor> sinzui: sweet. https://answers.launchpad.net/launchpad/+question/179006 if you get bored
[18:13] <abentley> deryck: no.
[18:13] <deryck> abentley, ok, thanks.  And we timed out the same.  I just confirmed running your branch it's not fixed by it.
[18:14] <deryck> abentley, I'm happy to keep poking and fix it.
[18:14] <abentley> It was https://bugs.launchpad.net/launchpad/+bug/890745
[18:14] <_mup_> Bug #890745: dynamic bug listings don't handle next and prev correctly on first and last batch <Launchpad itself:In Progress by abentley> < https://launchpad.net/bugs/890745 >
[18:37] <abentley> deryck: My nets went down again.
[18:37] <deryck> abentley, np.  wb.
[18:37] <deryck> abentley, I've got some serious storms brewing here, so could lose power or nets, too.
[18:37] <deryck> rick_h_, adeuring ^^
[18:37] <abentley> deryck: Surfing on 3G currently.
[18:38] <deryck> bummer
[18:43] <rick_h_> ouch
[19:08] <bac> hi allenap, you still here?
[19:20] <bac> wallyworld: ping
[20:24] <deryck> abentley, interestingly chromium fires a history change event on load that ff does not.
[20:25] <abentley> deryck: Oh well, I guess we want that to support HistoryHash stuffs, too.
[20:26] <rick_h_> deryck: yea, that's a pita
[20:26] <rick_h_> you basically have to catch the event and check if you've got anything on the stack
[20:26] <rick_h_> and edge case it
[20:26] <rick_h_> there's a big discussion on trying to decide one way or another, but different browsers choose to fire that on load or not to
[20:27] <deryck> rick_h_, yeah, such is the pain of browsers.
[20:27] <deryck> rick_h_, I think I like FF's approach on this one.  Load really isn't a history change.
[20:27] <rick_h_> yea, same here
[20:28] <rick_h_> but I think the idea is that you might be able to (on a pure client side app) determine a page when loaded
[20:28] <rick_h_> where normally the server takes care of handling the initial url params
[20:28] <rick_h_> so you need to trigger "I'm here, and here's my first page of data load info"
[20:29] <abentley> rick_h_: YUI indicates what caused the event, so it should be pretty clean.
[20:29] <deryck> yeah, it just seems "load" represents that already. ;)  and "history" represents everything there after.
[20:30] <rick_h_> cool, I've not used that yet, only history.js
[20:52] <adeuring> deryck: my daily request to run SQL queries: https://pastebin.canonical.com/55909/
[20:53] <deryck> adeuring, ok, np
[20:54] <adeuring> deryck: thanks!
[20:54] <deryck> np!
[20:58] <adeuring> deryck: sorry, I forgot a LIMIT clause...
[20:59] <adeuring> deryck: these queires please: https://pastebin.canonical.com/55910/
[21:05] <adeuring> deryck: my first paste mised a LIMIT clause. please use this one:  https://pastebin.canonical.com/55910/
[21:05] <deryck> adeuring, ok.  sorry I got my system locked up somehow.
[21:05] <adeuring> np
[21:06] <abentley> deryck: we were going to chat?
[21:06] <deryck> abentley, ah, forgot.
[21:06] <deryck> abentley, let's do it first thing tomorrow.  still got plenty going here.  and need to switch locations for work here shortly.
[21:07] <abentley> deryck: might as well make it part of our regular Thursday call, eh?
[21:08] <deryck> abentley, yeah, that works.
[21:09] <deryck> adeuring, the first query is taking forever.... still going.
[21:09] <deryck> oh just finished
[21:09] <adeuring> deryck: ouch...
[21:10] <deryck> adeuring, they're done now.
[21:10] <deryck> adeuring, https://pastebin.canonical.com/55913/
[21:10] <adeuring> deryck: thanks!
[21:10] <deryck> adeuring, I need to switch locations and drop girls off at gym.  if you need anything else, email.  I'll hit it when I return.
[21:10] <adeuring> deryck: ok
[21:12] <deryck> later on, all.
[21:24] <poolie> hi
[21:24] <poolie> so, this time my singapore ami build completed
[21:24] <poolie> let's see how it goes
[21:34] <deryck> adeuring, did you need any more queries?
[21:34] <deryck> the gym has wifi :)
[21:35] <adeuring> deryck: no, thanks. I'll ignore the quite slow distribution-related query for now; the others were fast enough, I think.
[21:35] <deryck> adeuring, awesome
[21:37] <poolie> StevenK, thanks for name-to-service
[21:37] <poolie> o/ deryck
[21:37] <poolie> adeuring, thanks for the beta banner, it looks really nice
[21:38] <adeuring> poolie: thanks :)
[21:41] <deryck> hi poolie
[21:42] <poolie> hi, how are you?
[22:09] <deryck> poolie, I'm good, thanks.  How's things with you?
[22:09] <deryck> abentley, here's how I decided to fix this:  http://pastebin.ubuntu.com/740645/
[22:18] <deryck> oops, time to go.  later on everyone.
[22:34] <flacoste> when a branch scanning job fails
[22:34] <flacoste> how do we get the scanner to scan the branch again
[22:34] <flacoste> for example: https://code.launchpad.net/~stub/launchpad/replication
[22:38] <lifeless> push to the branch
[22:38] <lifeless> even a no-op will create a new job
[22:38] <abentley> flacoste: We used to lock the branch and unlock it, but now that Bazaar Experts has been disbanded, you'll need to get a losa to do that.
[22:39] <flacoste> abentley: what command would they run?
[22:39] <flacoste> lifeless: i can't push to the branch
[22:39] <abentley> flacoste: push is probably the simplest.
[22:39] <flacoste> anything that can be done without downloading the branch first?
[22:40] <lifeless> flacoste: any sftp session that writes, or any bzr session that takes a write lock
[22:40] <flacoste> lifeless: sorry, i'm thick, do you have any specific command suggestion :-)
[22:41] <flacoste> bzr break-lock?
[22:41] <lifeless> branch.lock_write(); branch.unlock()
[22:42] <abentley> flacoste: Sure, 'bzr init; bzr push ~stub/launchpad/replication'.  It doesn't have to work, it just has to lock.
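The lock-and-unlock trick lifeless describes above can be sketched in Python. This is a hedged sketch, not Launchpad's actual tooling: `poke_branch` and `FakeBranch` are hypothetical names invented here, and the sketch only assumes a bzrlib-style branch object exposing `lock_write()` and `unlock()`. The point, per the conversation, is that taking and releasing a write lock is enough to queue a fresh scan job; no content has to be pushed.

```python
def poke_branch(branch):
    """Take and release a write lock on a branch.

    Per the discussion above, any write lock is enough to make
    Launchpad's scanner create a new scan job, so nothing needs
    to be committed or pushed.
    """
    branch.lock_write()
    try:
        pass  # nothing to change; the lock itself is the trigger
    finally:
        branch.unlock()


class FakeBranch:
    """Stand-in for a bzrlib Branch, recording lock calls for illustration."""

    def __init__(self):
        self.calls = []

    def lock_write(self):
        self.calls.append("lock_write")

    def unlock(self):
        self.calls.append("unlock")


branch = FakeBranch()
poke_branch(branch)
print(branch.calls)  # ['lock_write', 'unlock']
```

With the real bzrlib API the branch object would come from something like `Branch.open(url)`, which requires write access to the branch, matching flacoste's constraint that he couldn't simply push to it himself.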
[22:42] <flacoste> abentley: ah thanks
[23:11] <lifeless> abentley: still on OCR? if so - https://code.launchpad.net/~lifeless/python-oops-tools/hostname/+merge/82466
[23:11] <abentley> lifeless: sorry.
[23:12] <lifeless> abentley: no worries
[23:12] <lifeless> uhm, who can I nab, my tz... wgrant ^
[23:12] <lifeless> wgrant: 1 liner
[23:14] <wgrant> lifeless: Looking
[23:14] <wgrant> You ask too much.
[23:14] <wgrant> lifeless: Want to add reporter while you're there?
[23:15] <lifeless> wgrant: sure
[23:16] <lifeless> well
[23:16] <lifeless> appinstance.name I think is the model here
[23:21] <lifeless> wgrant: pushed
[23:27] <wgrant> lifeless: Done
[23:27] <wgrant> Thanks
[23:42] <poolie> hi
[23:42] <poolie> re https://bugs.launchpad.net/launchpad/+bug/891373 where is the bzr-builder that people are supposed to use to test against launchpad?
[23:42] <_mup_> Bug #891373: recipe help page has the wrong ppa name for bzr-builder <Launchpad itself:Triaged> < https://launchpad.net/bugs/891373 >
[23:42] <poolie> in ppa:launchpad?
[23:43] <poolie> not a good idea for people to add that though, since it has so much stuff
[23:50] <wgrant> poolie: We've never had anything to do with that.
[23:50] <wgrant> poolie: james_w/bzr handed a bzr-builder to us/IS.