[00:22] <StevenK> Is twisted.internet.error.CannotListenError: Couldn't listen on any:2121: [Errno 98] Address already in use.
[00:22] <StevenK> Is ^ on ec2 normal?
[00:34] <wgrant> StevenK: It started appearing a week or two ago.
[00:34] <wgrant> But yes.
[00:38] <wgrant> Hmm.
[00:39] <poolie> StevenK, see my mail a week ago
[00:39] <wgrant> We should probably present a subtle beta notification to everyone when there's something optional on a page, and allow them to opt-in then and there.
[00:39] <poolie> wgrant, i agree with the amendment that not every beta is going to be available to every person
[00:39] <poolie> necessarily
[00:40] <wgrant> poolie: Right, a feature flag to turn on the option for certain teams, I guess.
[00:41] <poolie> perhaps plenty of them are though
[00:41] <wgrant> IIRC Gmail did something like this years ago.
[00:41] <wgrant> Not sure if they still do.
[00:41] <wgrant> Labs or something, but seems Google discontinued all of that.
[00:41] <poolie> yes, google love it
[00:41] <poolie> they do things i really envy
[00:41] <poolie> such as correlating this with their logs
[00:41] <poolie> to see if it makes things slower, crashier, more popular
[00:42] <wgrant> I hope we can get to a point eventually where that would be useful.
[00:42] <wgrant> Right now it's not.
[00:43] <poolie> stevenk https://bugs.launchpad.net/launchpad/+bug/894205
[00:43] <_mup_> Bug #894205: spurious test_poppy failures <poppy> <soyuz> <spurious-test-failure> <Launchpad itself:Triaged> < https://launchpad.net/bugs/894205 >
[00:43] <wgrant> We need an ec2test -D
[00:43] <poolie> for what?
[00:43] <wgrant> So we can break into the instance once it hits that failure.
[00:44] <wgrant> And see what's going on.
[00:44] <poolie> oh postmortem
[00:44] <poolie> can't be that hard
[00:45] <poolie> wouldn't ec2 test --attached -t '-D' do the job?
[00:46] <poolie> except perhaps you don't want to do that every time just in case it fails
[00:46] <wgrant> Or perhaps if we just immediately emailed on failure.
[00:46] <wgrant> Or indeed just ran a few instances and watched ec2 list
[00:46] <poolie> yeah
[00:48] <poolie> i thought ec2 list used to give a progress percentage
[00:48] <wgrant> No
[00:48] <poolie> but apparently not any more?
[00:48] <poolie> well
[00:48] <wgrant> It never did.
[00:48] <wgrant> Just a time.
[00:48] <poolie> maybe i'm thinking of bzr pqm
[00:48] <poolie> i guess once you're experienced it's enough to know it takes about 4h
[00:48] <wgrant> Yeah
[00:50] <poolie> the other thing i would like to do here is make the flags admin ui a bit better
[00:50] <poolie> oh and developer control on *staging
[00:50] <poolie> so much to do
[00:52] <wgrant> wallyworld__: What if we said something like "Albert Einstein | [Proprietary: everything (!)] [Security: 6 bugs, 4 branches (!)] [(+)]" (where (!) is the edit icons, (+) is the add icon)
[00:53] <wgrant> wallyworld__: Gets rid of the obscure "observer" and "restricted observer" terms.
[00:53] <wgrant> Removes the third column.
[00:53] <wgrant> And shows more directly what's what.
[00:53] <wallyworld__> wgrant: sounds reasonable at first glance
[00:53] <poolie> wgrant, so
[00:54] <poolie> self.factory.makeBugComment()
[00:54] <poolie> fails if i don't previously log in as a user
[00:54] <poolie> is this a bug? it seems inconsistent with other factory things
[00:54] <wgrant> poolie: A bug, but an unsurprising one.
[00:54] <poolie> this is from https://code.launchpad.net/~mbp/launchpad/888353-microformats/+merge/82767 circa line 59
[01:15] <wallyworld__> StevenK: wgrant: have you used ec2 test recently? does it work for you?
[01:15] <StevenK> ec2 test and ec2 land are mostly the same
[01:15] <wallyworld__> yes, that's what i thought. but first ec2 test complained there was no submit branch
[01:15] <wallyworld__> so i added one to branch.conf
[01:15] <wallyworld__> then it gets to starting ec2 and complains there's a missing revision
[01:15] <StevenK> What are you trying to do?
[01:15] <StevenK> Oh, you do everything in one working directory, don't you?
[01:15] <wallyworld__> just try ec2 test to confirm it still works for me
[01:15] <wallyworld__> i have a lightweight checkout
[01:15] <wallyworld__> i guess it needs submit_branch=... so that it knows where to grab the code from inside the ec2 instance
[01:15] <wgrant> wallyworld__: I used ec2 test 12 hours ago.
[01:15] <wgrant> Worked fine.
[01:15] <wallyworld__> hmmm. don't know what's broken for me then :-(
[01:15] <wgrant> wallyworld__: rocketfuel-setup adds a submit_branch
[01:15] <wgrant> To ~/.bazaar/locations.conf
[01:16] <wallyworld__> wgrant: mine has a submit_branch:policy = appendpath
[01:16] <wgrant> Er
[01:16] <wgrant> really?
[01:17] <wgrant> public_branch:policy and push_location:policy should be appendpath
[01:17] <wgrant> submit_branch should not.
[01:17] <wallyworld__> yeah, it's been that way forever. maybe it was mis-copied from somewhere
[01:17] <wgrant> http://paste.ubuntu.com/747751/ is what I use.
[01:18] <wallyworld__> i'll try those settings. but i'm sure ec2 test used to work
[01:19] <wgrant> ec2 land will work, since it gets it from the MP
[01:19] <wgrant> ec2 test uses submit_branch.
[01:20] <wallyworld__> right, makes sense
[01:21] <wallyworld__> wgrant: so now the code in ec2 says public branch out of date
[01:22] <wgrant> wallyworld__: The public branch is probably out of date :)
[01:22] <wgrant> Or wrongly configured.
[01:22] <wallyworld__> yet, i'm running ec2 test from a branch i've just bzr pulled to
[01:22] <wgrant> It has the same name?
[01:23] <wallyworld__> as what?
[01:23] <wgrant> Your local branch and LP branch need to have the same name.
[01:23] <wgrant> Or you need to configure public_location explicitly.
[01:24] <wallyworld__> ah, that may be it
[01:24] <poolie> i have been using ec2 test a lot recently
[01:25] <wallyworld__> my public_branch is bzr+ssh://bazaar.launchpad.net/~wallyworld/launchpad/devel
[01:25] <wallyworld__> my submit branch is bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel
[01:26] <wgrant> Ah
[01:26] <wgrant> There's the problem.
[01:26] <wgrant> public_branch needs to be the public URL of the current branch.
[01:26] <wgrant> That's why it uses appendpath
[01:27] <wgrant> so my observer-db-2 branch maps to bzr+ssh://bazaar.launchpad.net/~wgrant/launchpad/observer-db-2
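Putting the settings from this thread together, a locations.conf along these lines (paths and ~user are placeholders, not anyone's actual config) gives a fixed submit_branch while appendpath derives per-branch public and push URLs:

```ini
[/home/user/launchpad/lp-branches]
# Fixed landing target: always trunk, never appendpath.
submit_branch = bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel
# appendpath: the branch's directory name is appended, so a branch at
# lp-branches/observer-db-2 publishes to .../~user/launchpad/observer-db-2.
public_branch = bzr+ssh://bazaar.launchpad.net/~user/launchpad
public_branch:policy = appendpath
push_location = bzr+ssh://bazaar.launchpad.net/~user/launchpad
push_location:policy = appendpath
```

This is why `submit_branch:policy = appendpath` (as in wallyworld__'s config) breaks ec2 test: the submit target must stay pointed at trunk regardless of which branch you are in.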
[01:27] <wallyworld__> wgrant: so, how to fix? edit ~/.bazaar/locations.conf?
[01:27] <wgrant> wallyworld__: Right.
[01:28] <poolie> if this is misconfigured i don't understand how ec2 would be working for you at all
[01:28] <poolie> or are you always giving the full url?
[01:28] <wgrant> Indeed, even ec2 land should fail here.
[01:28] <wgrant> If public_branch is wrong.
[01:29] <wallyworld__> it's all worked so far for whatever reason
[01:29] <wallyworld__> i have a single working tree and use bzr switch
[01:29] <wallyworld__> if the working tree has the current branch i want to land, i just type ec2 land
[01:30] <wallyworld__> if the working tree is switched to a different branch, i used ec2 land fullurl
[01:32] <wallyworld__> wgrant: so my locations.conf is the same as yours, yet i get the public and submit branch mismatch
[01:33] <wgrant> wallyworld__: Right, I use multiple working trees.
[01:33] <wgrant> wallyworld__: Where are your branches?
[01:33] <wgrant> Directly in lp-branches?
[01:33] <wallyworld__> yes
[01:33] <wgrant> Hmm
[01:33] <wallyworld__> and the branch directories are empty
[01:33] <wgrant> That should still work.
[01:33] <wallyworld__> except for .bzr
[01:34] <wgrant> wgrant@lucid-test-lp:~/launchpad/lp-branches$ bzr co --lightweight rip-out-accountpassword rip-out-accountpassword-co
[01:35] <wgrant> wgrant@lucid-test-lp:~/launchpad/lp-branches$ bzr info rip-out-accountpassword-co | grep public public branch: bzr+ssh://bazaar.launchpad.net/~wgrant/launchpad/rip-out-accountpassword
[01:35] <wgrant> WFM :/
[01:35] <wgrant> You don't have it configured in branch.conf?
[01:35] <wallyworld__> my branch.conf has parent_location
[01:36] <wallyworld__> to point back to the parent devel checkout on disk which i pull into from lp and then branch from when i create a new branch
[01:37] <poolie> [/home/mbp/launchpad/lp-branches/work/.bzr/branches/]
[01:37] <poolie> submit_branch = bzr+ssh://bazaar.launchpad.net/~launchpad-pqm/launchpad/devel
[01:37] <poolie> [/home/mbp/ianb/lp-branches/] actually
[01:38] <wallyworld__> that matches my submit_branch when i type bzr info
[01:38] <wallyworld__> but ec2 test complains that something is out of date
[01:38] <poolie> what specifically?
[01:39] <wallyworld__> bzrlib.errors.PublicBranchOutOfDate: Public branch "bzr+ssh://bazaar.launchpad.net/~wallyworld/launchpad/devel" lacks revision "launchpad@pqm.canonical.com-20111123175547-p02iyjil3n13end4"
[01:39] <wallyworld__> which it does if i goto ~wallyworld/launchpad/devel
[01:40] <wallyworld__> but i'm running ec2 test from a branch which is fully up to date via bzr pull
[01:40] <poolie> are you wanting to run the tests on devel?
[01:41] <wallyworld__> i just wanted to get ec2 test working, because ec2 demo is based off that and i want to see if ec2 demo works
[01:42] <poolie> there is a special 'ec2 test --trunk' option that doesn't do a merge
[01:42] <wallyworld__> sounds like my config is hosed somehow, yet ec2 land, bzr pull, bzr push etc all work
[01:43] <poolie> wallyworld__, i think your setup has a confusion between a branch called devel owned by you, and the real devel owned by launchpad-pqm
[01:43] <poolie> it's kind of bad this state is possible
[01:43] <poolie> the short answer is to either use --trunk or test one of your own real branches
[01:43] <wallyworld__> i tried one of my own real branches and got similar errors
[01:44] <wallyworld__> --trunk is fine for ec2 test but i really wanted to see if ec2 demo works with a branch containing some changes
[01:45] <wallyworld__> since it was stated ec2 demo is broken, and i wanted to see why/how
[01:45] <poolie> really?
[01:45] <StevenK> wallyworld__: ec2 test should work fine
[01:46] <StevenK> Since ec2 land runs test with details from the MP
[01:46] <poolie> ec2 demo also works fine for me
[01:46] <poolie> wallyworld__, give me the name of one of your branches?
[01:46] <wallyworld__> poolie: someone stated on the thread i started about interactive demos that ec2 demo was broken
[01:47] <wallyworld__> poolie: delete-all-bugtasks-889202
[01:47] <wallyworld__> StevenK: ec2 land works fine for me. but ec2 test doesn't (i haven't run it in ages, and tried a bit earlier for the first time in a while)
[01:49] <wallyworld__> perhaps my workflow of having a single local checkout of devel and branching off that and using a single working tree which i switch between isn't very common
[01:49] <StevenK> Ya think?
[01:49] <poolie> i think it's common
[01:49] <poolie> i use colos which are similar enough
[01:49] <poolie> wallyworld__, can you be more specific about what's going wrong
[01:49] <poolie> do you get a traceback or something?
[01:50] <StevenK> steven@liquified:~/launchpad/lp-branches% ls -1 | wc -l
[01:50] <StevenK> 130
[01:50] <wallyworld__> i have all my branch dirs in lp-branches too
[01:50] <wallyworld__> but use devel-sandbox as my only dir with a working tree
[01:51] <poolie> that's fine
[01:51] <poolie> there is no good reason why that would affect ec2
[01:52] <poolie> lifeless, what actually is bitrotten in ec2 demo?
[01:52] <wallyworld__> poolie: i tried it again in my devel-sandbox with a previous branch and it worked this time
[01:53] <wallyworld__> poolie: but it didn't work in my local trunk working tree that i use to branch from
[01:53] <poolie> so
[01:53] <poolie> that's probably best fixed by adding this configuration
[01:53] <poolie> [/home/ianb/launchpad/lp-branches/devel]
[01:53] <poolie> public_location = lp:launchpad
[01:54] <poolie> so it knows it's just a mirror
[01:54] <wallyworld__> ah, ok. that's great. i'll try it. thanks :-)
[01:59] <wallyworld__> poolie: that worked. the key though is public_branch. thanks.
[02:41] <poolie> wallyworld__, all good now?
[02:41] <poolie> i might have some lunch
[03:07] <lifeless> wallyworld__: I work as you do
[03:07] <lifeless> wallyworld__: I think it's pretty common
[03:08] <wallyworld__> saves a lot of time checking out, and disk space too
[03:08] <wallyworld__> s/checking out/branching
[03:09] <mwhudson> to be pedantic, i think you do mean checking out
[03:09] <mwhudson> or tree building
[03:09] <mwhudson> branching is not the bit you wait for :-)
[03:09] <wallyworld__> it is if you branch from lp:launchpad rather than a local ondisk copy
[03:10] <wgrant> No.
[03:10] <wgrant> Well, only if you don't have a shared repo.
[03:10] <wgrant> In which case you are wrong.
[03:10] <mwhudson> if you're using a shared repo, you're doomed :)
[03:10] <mwhudson> +not
[03:11] <wgrant> Exactly.
[03:11] <wallyworld__> maybe i misremembered then. i could have sworn branching off lp:launchpad took a fair bit longer than branching from local
[03:12] <wgrant> Well, that's orthogonal.
[03:12] <wgrant> You can branch locally perfectly well without colo.
[03:12] <wgrant> bzr branch devel somethingelse
[03:13] <wallyworld__> right
[03:14] <StevenK> steven@liquified:~/launchpad/lp-branches% du -sh .
[03:14] <StevenK> 21G	.
[03:14] <StevenK> Crumbs
[03:14] <wgrant> StevenK: How!?
[03:14] <wgrant> Do you never delete branches?
[03:14] <StevenK> Not usually
[03:17] <wallyworld__> i don't delete branches either and mine is 1.5GB
[03:18] <wallyworld__> and that includes a few non lp branches
[03:18] <wallyworld__> ls -1 | wc -l -> 254
[03:40] <nigelb> that's crazy.
[03:40] <nigelb> I delete branches after they're deployed.
[04:22] <wallyworld__> nigelb: because i only use one working tree, the branches are just directories, and i keep them around locally for easy reference
[04:23] <nigelb> Nice.
[04:23] <nigelb> I ran out of space on my hard-disk.
[04:23] <nigelb> Too much code :)
[04:23] <StevenK> /dev/mapper/sys-home   46G   39G  4.7G  90% /home
[04:24] <StevenK> But I have another 390Gb that is unallocated. :-)
[04:26] <nigelb> I need to buy more hard-disk space.
[04:26] <nigelb> Going to work with lots of code soonish.
[04:26] <nigelb> Morning poolie!
[04:26] <nigelb> Err, Afternoon, rather :)
[04:27] <poolie> hey there
[04:29]  * StevenK deletes 10Gb of old branches
[04:32] <wallyworld__> StevenK: wgrant: attempting to make a multi-pillar bug private - do you agree that it should raise ValueError or do you prefer a custom exception type?
[04:33] <wgrant> A custom type.
[04:34] <wallyworld__> ockey dokey
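A minimal sketch of the custom-exception route being agreed on here (all names — CannotBePrivate, set_private, the Bug stub — are hypothetical illustrations, not Launchpad's actual code):

```python
class CannotBePrivate(Exception):
    """Hypothetical dedicated error for refusing a privacy change."""


class Bug:
    """Minimal stand-in for the real model object."""

    def __init__(self, affected_pillars):
        self.affected_pillars = affected_pillars
        self.private = False


def set_private(bug):
    # Refuse with a dedicated exception instead of a bare ValueError,
    # so callers (and the view layer) can catch it precisely.
    if len(bug.affected_pillars) > 1:
        raise CannotBePrivate("multi-pillar bugs cannot be made private")
    bug.private = True
```

The advantage over ValueError is that callers can distinguish "this operation is disallowed for this bug" from any other validation failure bubbling through the same code path.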
[04:35] <wgrant> OMG
[04:35] <wgrant> Python 3.3 will have slightly less stupid Unicode.
[04:36] <wgrant> It won't just support the BMP.
[04:36] <StevenK> You mean like Perl 5.10 YEARS ago?
[04:40]  * nigelb sees StevenK's subtle stab.
[04:45] <wgrant> Bah, no jtv.
[05:21] <StevenK> (Pdb) p self.errors
[05:21] <StevenK> [WidgetInputError('owner', u'Maintainer', ConstraintNotSatisfied(<Person at 0xd3471d0 foobarbaz (Foobarbaz)>))]
[05:21] <StevenK> :-(
[05:23] <wgrant> Hmm?
[05:23] <StevenK> Product:+edit-people
[05:23] <wgrant> Sure. But why is that saddening?
[05:24] <StevenK> Because the error that is set is "Constraint not satisfied"
[05:27] <StevenK> It's due to the validator failing, but I'm not sure how to deal with it properly.
[05:50] <wallyworld__> StevenK: could be trying to assign an open team as a pillar owner
[05:50] <StevenK> I am!
[05:50] <StevenK> That's the point.
[05:51] <StevenK> Just the error message is horrible.
[05:51] <wallyworld__> that's how it bubbles up from storm i guess
[05:51] <StevenK> Right
[05:51] <wallyworld__> we should catch that and rethrow
[05:51] <StevenK> wallyworld__: Catch it where?
[05:52] <StevenK> It's already in self.errors in the validate() of the view.
[05:52] <wallyworld__> hmmm. not sure then off hand
[05:52] <wallyworld__> in the model code somewhere
[05:52] <wallyworld__> so that by the time the view sees it, it is a nice error
[05:52] <StevenK> Can you think of an example?
[05:53] <wallyworld__> doesn't validate() take the form values prior to executing the form action?
[05:53] <wallyworld__> so a check could be done there that the owner isn't open
[05:53] <wallyworld__> and the field error set accordingly
[05:54] <wallyworld__> but the vocab shouldn't allow such a team to be picked anyway
[05:54] <StevenK> wallyworld__: The first thing validate() does is 'if len(self.errors) >= 1: return'
[05:54] <wallyworld__> so my guess as to how validate works is wrong :-(
[05:55] <wallyworld__> seems backwards
[05:55] <wallyworld__> shouldn't validate happen first, then the action
[05:55] <StevenK> It does happen that way
[05:56] <StevenK> wallyworld__: I can explain this easier over mumble, since it seems I've confused you.
[05:56] <wallyworld__> StevenK: let me look quickly at the code
[05:57] <StevenK> wallyworld__: The view in question is ProductEditPeopleView
[05:59] <wallyworld__> StevenK: so looks like custom_widget('owner', .....) needs to have the correct vocab passed in
[05:59] <wallyworld__> so that only closed teams or people can be picked
[05:59] <StevenK> wallyworld__: But I didn't use the picker to generate that error
[06:00] <wallyworld__> you typed into the field directly?
[06:00] <StevenK> And hit the button, yes
[06:00] <wallyworld__> ok. so the picker vocab is still wrong :-)
[06:00] <StevenK> Right, so I can fix that at the same time, but my question still stands
[06:00] <wallyworld__> but i don't understand how validate() has errors already set
[06:01] <wallyworld__> at the very start of the method
[06:01] <wallyworld__> when no validation has been done yet
[06:01] <StevenK> LaunchpadFormView has done it? Or storm's?
[06:02] <wallyworld__> perhaps. it seems apparent something is having a go at validation before the view nominated validate() is called
[06:03] <StevenK> Agreed.
[06:03] <wallyworld__> based on the form schema
[06:03] <wallyworld__> and the declared fields
[06:03] <wallyworld__> but that's the limit of my knowledge
[06:03] <StevenK> It seems ugly to just reach into self.errors and pull out ConstraintNotSatisfied and replace it
[06:03] <wallyworld__> yup. let me check something
[06:05] <wallyworld__> StevenK: so the storm validator for closed teams raises a OpenTeamLinkageError
[06:05] <wallyworld__> not sure how this is munged into a constraint error
[06:05] <StevenK> By Zope, I guess
[06:05] <StevenK> I have no idea how validators work
[06:05] <wallyworld__> i do seem to recall a doc test checking for OpenTeamLinkageError
[06:06] <wallyworld__> rather than ConstraintError
[06:07] <wallyworld__> sadly, i have in the past set breakpoints inside LaunchpadFormView to track the flow when a form submit is done
[06:07] <wallyworld__> you may need to do the same here
[06:07] <wallyworld__> bbiab
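The shape wallyworld__ sketches above — validate the submitted owner yourself and set a readable field error before the generic "Constraint not satisfied" surfaces — might look like this. The class is a toy stand-in modelled on LaunchpadFormView's setFieldError, not the real view code, and the membership-policy check is illustrative:

```python
class FormView:
    """Toy stand-in for a LaunchpadFormView-style validate() hook."""

    def __init__(self):
        self.errors = {}

    def setFieldError(self, field, message):
        # Records a per-field error to display next to the widget.
        self.errors[field] = message

    def validate(self, data):
        owner = data.get("owner")
        # Check the constraint ourselves so the user sees a readable
        # message instead of Zope's "Constraint not satisfied".
        if owner is not None and owner.membership_policy == "OPEN":
            self.setFieldError(
                "owner",
                "An open team cannot be the maintainer of a project.")


class Team:
    """Minimal stand-in for a person/team with a membership policy."""

    def __init__(self, membership_policy):
        self.membership_policy = membership_policy
```

This doesn't remove the need to fix the picker vocabulary (so open teams can't be selected at all), but it covers the case StevenK hit of typing a name directly into the field.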
[06:13] <huwshimi> wallyworld__: I've just sent a somewhat lengthy reply to the list about prototyping. Once you've had a chance to read it I'm happy to have a bit more of a chat here if it helps (not saying you should read it now).
[06:32] <wgrant> jtv: Evening.
[06:41] <jtv> Hi wgrant
[06:42] <jtv> Sorry I had to email; internet connection where I was was just unusable.
[06:43] <wgrant> jtv: Did you end up testing recipe builds on DF?
[06:43] <jtv> Yes.
[06:43] <jtv> Ahem.
[06:43] <jtv> Not on DF.
[06:43] <jtv> On staging.
[06:43] <wgrant> Ah, I just tested on DF anyway.
[06:43] <wgrant> Looks good.
[06:43] <jtv> On staging as well.
[06:47] <wgrant> jtv: there is a lot of commit/DatabaseTransactionPolicy/dosomething/commit which seems like it could be done in a single context manager. You also unexplainedly abort in a few places.
[06:47] <wgrant> It's all a bit of a mess, but I guess that can't really be helped...
[06:48] <wgrant> Without the rewrite that can't happen.
[06:48] <jtv> Right. Ideally, I'd like to abort everywhere.
[06:48] <jtv> Because it's all supposed to be read-only transactions.
[06:48] <jtv> But that might upset tests.
[06:48] <wgrant> Yeah.
[06:48] <jtv> OTOH aborts may possibly be slower, depending on implementation.
[06:48] <wgrant> If it wasn't for tests I would have required you to also have the context manager assert that it was already a read-only txn.
[06:49] <jtv> It's a risk that's hard to manage.  :/
[06:49] <jtv> That's also why I have all these minimal read-write regions: can't afford to have anything in there yield.
[06:50] <wgrant> It would be nice if we could add a hook to the reactor to confirm that the transaction is read-only before and after each callback.
[06:50] <jtv> Yeah.  I asked around, but Twisted doesn't seem to have anything like that.
[06:51] <jtv> Ultimately Robert is right: the twisted-based daemon and the ORM-backed logic shouldn't even be in the same process.
[06:52] <jtv> Although adding more moving parts is no happier a prospect than twistification probably was.
[06:52] <wgrant> Well.
[06:52] <wgrant> A Twisted bridge between two XML-RPC sides is probably not that bad.
[06:53] <wgrant> And the current architecture *could* work well, I suspect, except that the code architecture is still from the non-Twisted world.
[06:53] <jtv> We don't have just 2 sides though: there's the slaves, the builder logic, the build manager, and Librarian.
[06:53] <wgrant> Anyway, I'd really like to see you use 'with write_transaction:' or something like that, rather than the commit/DatabaseTransactionPolicy/commit
[06:53] <wgrant> Like with dbuser.
[06:54] <wgrant> Mm.
[06:54] <wgrant> Sort of, I guess.
[06:55] <jtv> Here's where I regret that it took so long.  I forget so many details.  What I remember is that write_transaction had both general problems and specific problems.
[06:55] <wgrant> I don't mean the existing write_transaction.
[06:55] <jtv> One of the general ones was that it doesn't enforce a fresh transaction.
[06:55] <wgrant> Which is a function decorator that also retries.
[06:55] <wgrant> Just something to encapsulate what you repeat.
[06:55] <jtv> So you can get more of these weird “A and C have been done, but B aborted somehow” effects.
[07:02] <jtv> wgrant: Ah yes, another problem with read_transaction is that it allows you to write to the database.  I didn't want that in this case, because of the high probability that there would be accidental, untested database writes in the read-only sections.
[07:03] <wgrant> jtv: Sure, the existing decorators are not useful here. I should have used a different name.
[07:04] <jtv> Naming is hard.  Any specific suggestions?
[07:06] <wgrant> with a_promise_that_i_am_not_evil_twisted_code
[07:11] <jtv> with hand_on_heart:
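The 'with write_transaction:' idea wgrant floats could be a small context manager wrapping the repeated commit / DatabaseTransactionPolicy / work / commit dance. A self-contained sketch with stand-in transaction and policy objects (nothing here is the real lp code; the log list exists only to show the ordering):

```python
from contextlib import contextmanager

log = []  # records the order of operations, for illustration only


class FakeTransaction:
    def commit(self):
        log.append("commit")


class FakeReadWritePolicy:
    """Stand-in for a DatabaseTransactionPolicy allowing writes."""

    def __enter__(self):
        log.append("rw-policy on")

    def __exit__(self, *exc):
        log.append("rw-policy off")


@contextmanager
def write_transaction(txn, policy):
    # Encapsulates the repeated pattern: start from a fresh transaction,
    # permit writes only inside the block, and commit on the way out so
    # an open write transaction is never leaked back to the event loop.
    txn.commit()
    with policy:
        yield
    txn.commit()
```

One such helper also centralises the "forgot a commit" risk wgrant raises later: the bracketing commits live in exactly one place instead of at every call site.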
[07:57] <jtv> Do we provide download statistics for user PPAs anywhere?
[07:58] <StevenK> In the DB
[08:00] <wgrant> jtv: And the API
[08:00] <jtv> Only there?
[08:01] <wgrant> Yes.
[08:01] <wgrant> I only added an initial API, then got busy and never designed a UI.
[08:03] <jtv> wgrant: Oh well, I'll tell them it's your fault then.  Thanks.
[08:05] <jtv> wgrant: meanwhile, any further thoughts on my MP?  Anything I need to change?  Do you think it's landable?
[08:09] <wgrant> jtv: I don't like the duplication. It's too easy to forget a commit. Apart from that it looks pretty sane, but I need to go over it again.
[08:09] <jtv> The duplication of what?
[08:09] <wgrant> commit/DatabaseTransactionPolicy/commit
[08:30]  * stub reads up on celery
[09:04] <lifeless> stub: how much of the last fdt was slony / fti/trusted / the actual patch ?
[09:04] <lifeless> stub: e.g. how much identifiable fat do we have ?
[09:04] <stub> lifeless: I don't think we have done a rollout with the new Slony yet, so I don't know the real, current answer to that.
[09:05] <stub> lifeless: I don't think we can tell, because to minimize overhead we collapse everything update.py does into a single big .sql script rather than running EXECUTE SCRIPT several times. We can tell how long it takes to run security.py vs. update.py, but beyond that?
[09:09]  * stub wonders if the timestamps in LaunchpadDatabaseRevision have the information, or if the transaction_timestamp vs. statement_timestamp dance that has to be done means they lie.
[09:09] <stub> oh... staging has been new slony for ages. duh.
[09:12] <wgrant> stub: We did an fdt with the new slony on Monday
[09:13] <wgrant> And probably Friday too
[09:13] <stub> lifeless: http://paste.ubuntu.com/747987/ is the relevant section from staging.  51s to apply no patches (but this still resets security and applies trusted.sql)
[09:13] <wgrant> Yeah, 18th and 21st there were fastdowntimes.
[09:14]  * stub looks for the production logs
[09:14] <wgrant> lifeless: :( postgres on carob is hungry.
[09:22] <stub> http://paste.ubuntu.com/747996/ is relevant stuff from a production outage, 3:12 outage
[09:24] <stub> The fat seems to be in waiting for sync and things to propagate. About 10 seconds every time we pause and ask if everything is ok and caught up.
[09:26] <stub> 13s to lock all the tables and run trusted.sql and the single db patch. The db patch thinks it took 2.3 seconds.
[09:26] <stub> 47s for that update to propagate and be run on the slaves and success to be reported back to the master.
[09:27] <stub> We run security.py in series, but that is only a few seconds as we bypass slony entirely for that.
[09:27] <wgrant> rvba: Can you QA bug #867941, please?
[09:27] <_mup_> Bug #867941: person:+activereviews times out <affects-ubuntu> <code-review> <qa-bad> <timeout> <Launchpad itself:Fix Committed by rvb> < https://launchpad.net/bugs/867941 >
[09:27] <wgrant> Your rollback is on qastaging.
[09:28] <rvba> wgrant: oh, great, I was just waiting for it to be deployed to qastaging.
[09:28] <wgrant> rvba: buildbot finally passed :)
[09:28] <rvba> Finally! :)
[09:28] <wgrant> stub: The previous fastdowntime might be better to analyse.
[09:29] <stub> The biggest fat I can identify is adding new stuff we have created to replication. If we create new tables, we need to call EXECUTE SCRIPT twice.
[09:29] <wgrant> stub: What are our slony intervals nowadays?
[09:29] <wgrant> We'll hopefully do a fastdowntime tomorrow.
[09:29] <wgrant> I might do a double one.
[09:29] <wgrant> The second being a no-op.
[09:30] <wgrant> So we can see how that goes.
[09:30] <wgrant> And we can fit both in 5 minutes, so it's fine.
[09:30] <stub> default settings if you mean the slond config timings
[09:31] <stub> so 2 seconds for sync-check-interval. We could lower that and see if it speeds up all the handshaking
[09:32] <wgrant> Hmm.
[09:32] <wgrant> What's the other one? 10s?
[09:32] <wgrant> IIRC that's the default
[09:33] <stub> wgrant: That is a heartbeat - shouldn't affect handshaking
[09:33] <wgrant> You'd think not.
[09:33] <wgrant> But in my testing locally it did, IIRC.
[09:33] <wgrant> It's been a few months, though.
[09:34] <stub> hmm
[09:34] <stub> well, easy enough to tweak it and see what happens. We are not going to overload things changing these settings
[09:34] <stub> But only one at a time of course.
[09:35] <stub> wgrant: Any opinion on changing sync-check-interval or sync-interval-timeout first?
[09:36] <mrevell> Hallo
[09:40] <wgrant> stub: I am no longer sufficiently informed to have such an opinion.
[09:41] <lifeless> wgrant: yeah, it is
[09:45] <rvba> lifeless: Hi Rob, any way for me to access an oops report… like manually or something? The web application seems to be still jammed ;).
[09:46] <rvba> I suppose the old oops cleaning up stuff is taking more time than anticipated.
[09:47] <stub> wgrant: I'll knock the sync-check-interval (how often a daemon polls to see if a sync should be sent) to half, 1s. Lets see if it affects that 10s time.
[09:53] <stub> wgrant: huh. 'If the node is not an origin.... wasteful for this value to be much less than sync_interval_timeout'. So yeah, twiddle them both.
[09:54] <stub> Ideally we would change this just before the rollout...
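For reference, the knobs in this thread map to settings in the slon daemon's config file. A sketch of the change stub proposes (values illustrative; the exact option names and defaults should be verified against the Slony-I documentation for the deployed version):

```
# slon.conf sketch -- halving the sync check poll as discussed
sync_interval = 1000           # ms between polls for a pending SYNC (was 2000)
sync_interval_timeout = 10000  # ms before a SYNC is generated regardless
```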
[09:56] <lifeless> rvba: grep for the id in the relevant dir on carob
[09:57] <lifeless> rvba: the relevant dir for staging/qasstaging is /srv/oops-amqp/<instance>/<day>
[09:57] <lifeless> rvba: for production its /srv/launchpad.net/logs/production/host/<day>
[09:57] <rvba> lifeless: Ok, thanks… for the tip.
[09:58] <lifeless> rvba: it should be unjammed now
[09:58] <rvba> lifeless: I can access the tools now… but my qastaging oops from 10 minutes ago is not there.
[09:59] <lifeless> rvba: if its not there yet, its probably in the rabbit queue waiting for the consumer to catch up
[09:59] <rvba> Right…
[09:59] <wgrant> Do we have a graph of that, or just a nagios check?
[09:59] <lifeless> for that there isn't much to do but wait ><
[10:00] <lifeless> we want a graph, I think liam was doing/has done, but I can't see it on tuolumne
[10:00] <rvba> I think gnuoy is working on a graph for that.
[10:00] <lifeless> waiting for him to say 'done' before asking where it is ;)
[10:01] <gnuoy> wgrant, we have graphs of oops queue lengths in staging and qastaging
[10:01] <wgrant> Ooh
[10:01]  * wgrant looks
[10:01] <wgrant> rvba: qastaging's amqp2disk is doing lots of stuff.
[10:01] <gnuoy> I've just landed a fix to stop there being missing data
[10:01] <wgrant> Or at least spamming a lot of syscalls.
[10:01] <rvba> +activereviews for ~ubuntu-branches issues 169 (as opposed to 1400+ previously) but I'd like to see the forced oops to make sure all the repeated statements are gone.
[10:02] <rvba> 169 queries that is.
[10:02] <wgrant> Yeah, it's writing out OOPSes.
[10:02] <wgrant> So it should be there eventually.
[10:02] <wgrant> gnuoy: Ah, nice.
[10:03] <rvba> "QAStaging Rabbit Oops Queue" … cool
[10:03] <wgrant> 300 unacked messages is a bit odd.
[10:03] <wgrant> I guess we'll see what happens once the queue reduces.
[10:03] <gnuoy> There are also graphs for rabbits rss and the mnesia db size
[10:03] <lifeless> wgrant: pipelining perhaps
[10:03] <rvba> Very nice, thanks gnuoy!
[10:04] <gnuoy> rvba, no problem
[10:04] <wgrant> lifeless: Except that it's been that way for 11 hours
[10:04] <wgrant> constant
[10:04] <wgrant> May just be amqp2disk hanging during your terrorisation of postgres, though.
[10:06] <wgrant> It's certainly not being fast at the moment, but it is working.
[10:06] <wgrant> I wonder how it will cope with fastdowntime.
[10:06] <wgrant> rvba: What's the OOPS ID you're looking for?
[10:07] <rvba> wgrant: OOPS-9daabd974426cdd97b512b23d6906f55
[10:07] <rvba> It's not in /srv/oops-amqp/qastaging/2011-11-24 yet
[10:08] <wgrant> That may be because it is under a different filename.
[10:08] <wgrant> However, not in the DB yet either.
[10:08] <wgrant> I guess it's near the end of the queue.
[10:09] <rvba> The number of queries plus the fact that it's not (always) timing out tells me it's qa ok but I'd like to be sure there is no repeated stmt left…
[10:10] <wgrant> Yep
[10:10] <stub> What are we using rabbit for atm? Just OOPS or is txlongpoll in there as well?
[10:11] <wgrant> rvba: https://lp-oops.canonical.com/?oopsid=OOPS-9daabd974426cdd97b512b23d6906f55
[10:11] <wgrant> And indeed, amqp2disk has gone silent.
[10:11] <wgrant> So it was near the end of the queue.
[10:11] <wgrant> And we are now caught up.
[10:11] <wgrant> 123 repetitions
[10:12] <wgrant> However, only one significantly repeated query.
[10:12] <wgrant> So much better.
[10:13] <rvba> Rarg, fetching branches info…
[10:15] <lifeless> wgrant: that would fit
[10:15] <rvba> I'm already fetching data for target/sources/prerequisite branches…
[10:16] <lifeless> wgrant: x number pipelined, and the consumer wedged
[10:16] <wgrant> lifeless: Possibly. We'll see soon.
[10:16] <wgrant> When the graph updates.
[10:16] <lifeless> wgrant: no, when the queue clears
[10:16] <wgrant> rvba: Fortunately we have tracebacks :)
[10:16] <lifeless> wgrant: if its pipelined it will be constant(ish)
[10:16] <wgrant> lifeless: The queue is clear.
[10:16] <rvba> wgrant: indeed! tracebacks ftw!
[10:16] <lifeless> ah cool
[10:17] <wgrant>     if can_access and self.stacked_on is not None:
[10:17] <wgrant> You're not preloading stacked_on, I guess :)
[10:17] <lifeless> thats spelt wrong
[10:17] <lifeless> self.stacked_onID is not None
[10:17] <wgrant> But damn, 169 queries with 123 able to be trivially eliminated... not bad.
[10:17] <lifeless> unless you're actually accessing the stacked_on branch
[10:17] <wgrant> lifeless: Unless it accesses the attribute in the next line.
[10:17] <wgrant> Which is probable.
[10:17] <lifeless> yeah, so needs eager loading
[10:18] <lifeless> rvba: nice work
[10:18] <wgrant>             if self.stacked_on not in checked_branches:
[10:18] <wgrant>                 can_access = self.stacked_on.visibleByUser(
[10:18] <wgrant>                     user, checked_branches)
[10:18] <rvba> thanks lifeless.
[10:18] <wgrant> is the next bit
[10:18] <rvba> Yep, you're right wgrant, stacked_on branches.
[10:20] <wgrant> rvba: I guess that will only really be a significant problem for ~ubuntu-branches, because everyone else will work with only a few trunks.
[10:20] <rvba> wgrant: right, but fixing up ~ubuntu-branches is my goal!
[10:21] <wgrant> Yep.
[10:21] <wgrant> rvba: So, qa-ok, then?
[10:21] <wgrant> We should deploy.
[10:21] <rvba> wgrant: yep
[10:22] <wgrant> rvba: Can you tag the bug and request a deployment once the QA report is updated?
[10:23] <rvba> wgrant: sure
[10:23] <rvba> bug tagged
[10:24] <rvba> I've got another branch coming up to reuse the same eager loading method to fix product:+activereviews.  Right now it can be as bad as issuing 1800+ queries.
[10:24] <wgrant> Yeah.
[10:24] <wgrant> Fortunately they're very quick queries.
[10:24] <wgrant> But still really bad.
[10:24] <rvba> Right, but still.
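The N+1 pattern the channel is chasing above (one query per branch when `stacked_on` is touched per row, versus one bulk query for the whole batch) can be illustrated with a minimal sketch. All names here are hypothetical stand-ins, not Launchpad's actual `BranchCollection` API:

```python
class FakeStore:
    """In-memory stand-in for a database, counting 'queries' issued."""
    def __init__(self, rows):
        self.rows = rows          # branch id -> row dict
        self.queries = 0

    def get(self, branch_id):
        self.queries += 1         # one query per call: the N+1 trap
        return self.rows[branch_id]

    def get_many(self, branch_ids):
        self.queries += 1         # one bulk query, regardless of batch size
        return {i: self.rows[i] for i in branch_ids}

def load_naive(store, branch_ids):
    # Per-row access, like touching self.stacked_on in a loop:
    # issues len(branch_ids) queries.
    return [store.get(i) for i in branch_ids]

def load_preloaded(store, branch_ids):
    # Eager loading: fetch the whole batch up front in one query.
    rows = store.get_many(branch_ids)
    return [rows[i] for i in branch_ids]

rows = {i: {"stacked_on": None} for i in range(50)}
s1 = FakeStore(rows); load_naive(s1, list(rows))
s2 = FakeStore(rows); load_preloaded(s2, list(rows))
print(s1.queries, s2.queries)  # → 50 1
```

This is the shape of the fix rvba landed: the 123 repeated statements collapse into one preload issued before the per-branch visibility checks run.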
[10:30]  * jml waits for diffs
[10:33] <jtv> allenap: done with your review.
[10:35] <allenap> jtv: Thanks!
[10:38] <wgrant> Aha, green deployment report.
[10:38] <rvba> wgrant: ok
[10:39] <jml> 9 minutes and counting
[10:39] <wgrant> jml: Is that including scan time?
[10:40] <jml> wgrant: Don't think so, LP has known there are pending changes for that whole time.
[10:40] <wgrant> jml: Nowadays it shows that also when a scan is pending.
[10:40] <jml> wgrant: interesting.
[10:41] <jml> so, yeah, probably including scan time.
[10:41] <wgrant> Does the branch page also have the pending change warning?
[10:42] <jml> wgrant: it's finished now
[10:42] <jml> wgrant: so no :)
[10:42] <wgrant> Heh
[10:43] <wgrant> Excellent timing.
[10:45] <wgrant> rvba: Thanks.
[10:46] <rvba> wgrant: Glad to help out with that, I know you've been doing that a lot lately.
[10:52] <rvba> wgrant: funny, the check performed on the stacked_on branch is recursive! The only way to load all the branches in one query is to use a recursive CTE.
[10:52] <wgrant> rvba: Ah, how lovely.
[10:53] <wgrant> Nearly not worth it.
[10:53] <rvba> Yeah, that's what I was thinking, I'll just preload one level of stacked_on branches.
[10:55] <wgrant> Good plan.
[10:56] <lifeless> rvba: don't need one query
[10:57] <lifeless> rvba: need 2 - one for the collection, and then the stacked set for the batch (in the pre iter hook) - and I would definitely use a CTE at this point
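The recursive CTE lifeless suggests walks an entire `stacked_on` chain in a single query, however deep the stacking goes. A self-contained demo using Python's bundled sqlite3 (the one-column table layout is a simplified assumption, not Launchpad's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE branch (id INTEGER PRIMARY KEY, stacked_on INTEGER)")
# A stacking chain 3 -> 2 -> 1 -> NULL, plus an unrelated branch 4.
conn.executemany("INSERT INTO branch VALUES (?, ?)",
                 [(1, None), (2, 1), (3, 2), (4, None)])

def stacked_closure(conn, batch_ids):
    """Return ids of every branch reachable via stacked_on from the batch."""
    placeholders = ",".join("?" * len(batch_ids))
    rows = conn.execute(f"""
        WITH RECURSIVE chain(id) AS (
            SELECT id FROM branch WHERE id IN ({placeholders})
            UNION
            SELECT b.stacked_on FROM branch b
            JOIN chain c ON b.id = c.id
            WHERE b.stacked_on IS NOT NULL
        )
        SELECT id FROM chain
    """, batch_ids)
    return sorted(r[0] for r in rows)

print(stacked_closure(conn, [3]))  # → [1, 2, 3]
```

In lifeless's two-query plan, this closure query would run once per batch in the pre-iteration hook, with the visibility filter from BranchCollection applied on top.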
[10:58] <lifeless> wgrant: did you say didrocks was an edge user?
[10:58] <rvba> lifeless: ok, you think it's worth it? Also all these branches will have to be checked by branch._checkBranchVisibleByUser
[10:58] <wgrant> lifeless: I fixed that yesterday.
[10:58] <lifeless> wgrant: thanks
[10:58] <wgrant> lifeless: That's why the edge graphs are back to their old levels.
[10:59] <wgrant> bdmurray also migrated his stuff away
[10:59] <wgrant> Will attack more tomorrow.
[10:59] <lifeless> rvba: yes, they need to have visibility checked, but the eager loading query should do that
[10:59] <lifeless> rvba: as in, your eager loading query should filter non-visible stacked-on branches
[10:59] <lifeless> rvba: I think its worth it yes, we have some silly deep stacking chains
[11:00] <rvba> lifeless: all right, let's do this all the way then :)
[11:01] <rvba> lifeless: not sure I understand "your eager loading query should filter non-visible stacked-on branches", my plan is to preload all the stacked_on branches, then let the python sort the visibility without having to hit the database.
[11:02] <lifeless> rvba: as a matter of principle, you shouldn't pull stuff from the DB the user cannot see.
[11:03] <lifeless> rvba: as a matter of pragmatism, filtering in the query is faster - because the python visibility rules trigger late evaluation
[11:03] <rvba> lifeless: Okay, so this means I'll have to code the visibility rules in SQL, right?
[11:03] <lifeless> rvba: they are already coded, in BranchCollection
[11:03] <wgrant> It probably already is in the main query.
[11:04] <lifeless> rvba: you should be able to use a (new) BranchCollection to make the query for stacked on, with the CTE as an additional filter
[11:04] <wgrant> Yeah.
[11:04] <wgrant> The main query for that page already does the privacy check.
[11:04] <rvba> lifeless: Okay.
[11:04] <wgrant> It might just not prepopulate the security adapter cache.
[11:05] <wgrant> Bah, although it's not recursive.
[11:05] <lifeless> right, and if it doesn't, it a) should and b) thats even more reason not to retrieve things that the user cannot see :)
[11:08] <rvba> allenap: can you please have a look at https://code.launchpad.net/~rvb/launchpad/activereviews-bug-893702/+merge/83172 ?
[11:08] <allenap> rvba: Certainly.
[11:08] <rvba> thx
[11:26] <rvba> lifeless: hum… to check the visibility, BranchCollection materializes the list of visible branches (to use branch.id in (...)) … and that's precisely what's making the main query really slow.  I think I'll have to be very careful and maybe I won't be able to use BranchCollection here, because the remedy may be worse than the disease if I do …
[11:36] <lifeless> rvba: or you need to teach branchcollection a couple of ways to check...
[11:36] <lifeless> rvba: e.g. BranchCollection()...EagerLoadById() -> checks per branch rather than all-visible etc
[11:37] <rvba> lifeless: yeah, I need to think about it. I'm sure there is a proper way, but I'll need to be careful when doing the qa to make sure I'm not issuing one more giant query :)
[11:41] <rvba> lifeless: If you don't mind, I'll ping you again when I have something ready so you can have a look.
[11:47] <lifeless> sure
[11:52] <rvba> Thanks.
[12:34] <jml> hey
[12:35] <andy_js> How do I get launchpad to use openid.net instead of testopenid.dev?
[12:35] <jml> did you get rid of ReadyService?
[12:37] <wgrant> jml: launchpad-buildd no longer uses it, but other things do.
[12:37] <jml> wgrant: what does it use instead?
[12:37] <wgrant> andy_js: grep for testopenid.dev in the configs and change it to openid.net, I guess.
[12:37] <wgrant> jml: It tries to connect to the port.
[12:38] <jml> and keeps retrying until syn/ack?
[12:38] <wgrant> I believe so.
[12:38] <jml> thanks.
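The readiness check wgrant describes (retry connecting until the TCP handshake succeeds, instead of a dedicated ReadyService) is a common pattern; a minimal sketch, with illustrative host, port, and timeout values:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Retry connecting until the port accepts, or raise after timeout.

    A successful connect means the kernel completed the SYN/SYN-ACK
    handshake, i.e. something is listening on that port.
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return  # service is listening
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not ready")
            time.sleep(interval)
```

Connection refused and connect timeouts both surface as `OSError`, so the loop keeps retrying until the service binds its socket or the overall deadline passes.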
[12:42] <andy_js> wgrant: It's not that simple I'm afraid.  Changing 'enable_test_openid_provider' in schema-lazr.conf to False seems like the right solution, but after I do that the login page says 'OpenID Provider Is Unavailable at This Time'
[12:42] <wgrant> andy_js: That just disables the serving of testopenid.
[12:42] <wgrant> It doesn't affect its use.
[12:45] <andy_js> But once you disable 'enable_test_openid_provider' launchpad no longer tries to redirect to 'testopenid.dev' (which is not available on my non-local instance)
[12:45] <wgrant> No.
[12:45] <wgrant> Because it can't perform discovery on it.
[12:45] <wgrant> It'll complain that it's unavailable instead.
[12:46] <wgrant> I don't know if openid.net supports the discovery functionality that we require.
[12:47] <andy_js> So does launchpad support OpenID or not?
[12:48] <wgrant> Launchpad supports OpenID, but currently only in SSO mode, to a predefined provider.
[12:48] <wgrant> I only know of one instance other than launchpad.net, and it uses CommunityID as its OpenID provider.
[12:49] <wgrant> launchpad.net uses Ubuntu SSO, which is powered by canonical-identity-provider.
[12:51] <wgrant> However, any provider that supports Yadis or XRDS discovery should work.
[12:53] <andy_js> All I'm trying to do is set up an instance of launchpad which is remotely accessible.
[12:53] <andy_js> The only blocker for me is getting login to work.
[12:53] <wgrant> You are aware of the branding licensing issues?
[12:54] <andy_js> I thought it was free for non-commercial use.  Is there more to it than that?
[12:54] <wgrant> https://dev.launchpad.net/LaunchpadLicense
[12:54] <wgrant> The branding must be changed for a production instance.
[12:55] <wgrant> Commercially or non-commercially.
[12:55] <andy_js> Ah that's no problem.  I was going to rebrand it for my project anyway.
[12:56] <wgrant> The code itself is AGPLv3, so you can use it for whatever you wish. The images and name need to be changed, but that's not an insurmountable obstacle.
[12:56] <wgrant> Why are you not using launchpad.net, out of interest?
[12:56] <andy_js> 1) I want complete control
[12:57] <andy_js> 2) I don't think my architecture is supported (solaris-i386, solaris-amd64)
[12:57] <wgrant> Ah, for PPAs, I see.
[12:58] <wgrant> That's the same reason the only other instance I know of exists -- to build PPAs for Debian.
[12:58] <andy_js> Yep.  The possibility of having PPAs with tight integration with bug tracking and scm is really enticing
[12:59] <wgrant> Getting a productionish build farm up and running is a bit of effort, but doable.
[13:00] <andy_js> Is it possible to run ppa's without setting up a build-farm?
[13:00] <wgrant> Not if you want binaries. There has to be somewhere to build them :)
[13:00] <andy_js> So I can't just upload using dput?
[13:01] <andy_js> Because I don't think our architecture is quite mature enough for doing the build-farm route.
[13:02] <wgrant> There is an old option to the upload processor that should let you upload source and binaries, but it hasn't been used for a few years.
[13:02] <wgrant> Why do you say that?
[13:03] <andy_js> I haven't been able to create a working ISO in a few months so I have nothing current to install on the build servers (that, and I don't actually have any build servers :P)
[13:04] <andy_js> Anyway, if I can upload stuff using dput then it should be able to do what I want.
[13:04] <andy_js> If it's an old crusty option that doesn't work quite right I guess I'll have to fix it
[13:05] <wgrant> LP was designed from the start to have sources uploaded to it, and build binaries itself. If you just want to publish an archive with pre-built binaries, why not use something a little simpler and less terrifying, like reprepro?
[13:06] <andy_js> reprepro is what I'm using now and it does the job fairly well
[13:07] <andy_js> I want the other things that launchpad offers (bug tracking, mailing lists, scm management, the ability to let people create their projects)
[13:07] <andy_js> And in the future I do intend to make use of the build-farm feature
[13:08] <wgrant> I suspect that you underestimate the resources and effort required to maintain a complete Launchpad instance.
[13:10] <andy_js> It seemed like it would be easier than writing my own software which is what I started to do.
[13:10] <andy_js> Is it really that much of a pain in the arse to maintain?
[13:10] <wgrant> Probably, but for everything except non-Ubuntu PPAs you can just use launchpad.net...
[13:12] <andy_js> Well I'm actually getting my sources from Debian so I guess that wouldn't be acceptable.
[13:12] <wgrant> Hmm?
[13:13] <andy_js> Hmm what? :)
[13:13] <wgrant> I'm not sure of the relevance of you getting your sources from Debian.
[13:15] <andy_js> I thought you just said PPA's were only for Ubuntu?
[13:15] <andy_js> (on launchpad.net, anyway)
[13:15] <wgrant> launchpad.net will only build for releases and architectures that Ubuntu supports.
[13:16] <wgrant> My suggestion would be that you track your bugs and projects and everything except packages on launchpad.net.
[13:16] <wgrant> Since you're not going to be using build farm functionality yet anyway, PPAs probably provide little to no benefit to you.
[13:17] <wgrant> If you do really want to use them, you could use a local instance for PPAs, I suppose. But maintaining your own codehosting and mailing list setups is no small effort, and I don't see much benefit over using launchpad.net.
[13:21] <andy_js> I want to be able to give people the ability to create their own PPA and upload packages via dput.
[13:22] <wgrant> Ah, including uploading binaries?
[13:22] <andy_js> Yes
[13:23] <wgrant> That would require some hackery of the upload policies of a local instance. The secure upload paths do not permit binaries. But that is easily changed.
[13:25] <andy_js> Hackery is fine as long as what I want to do can be achieved without having to rewrite major chunks.
[13:26] <andy_js> wgrant: Thank you very much for your help so far btw :)
[13:26] <wgrant> It was done regularly until a couple of years ago. Security updates were built outside LP, and source+binaries uploaded at once.
[13:26] <wgrant> We still have tests for that, so it shouldn't have bitrotten.
[13:28] <wgrant> Anyway, I should sleep.
[15:30] <abentley> allenap: could you please review https://code.launchpad.net/~abentley/launchpad/history-model/+merge/83310?
[15:34] <allenap> abentley: I'm not going to be able to finish that before I have to go out. I will be working this evening, from 2000 UTC to 2200 UTC. Would it be okay if I did it then?
[15:34] <abentley> allenap: sure, no big rush.
[15:37] <allenap> abentley: Cool.
[17:46] <flacoste> abentley: shouldn't the Order by buttons reflect the list of fields displayed?
[17:47] <abentley> flacoste: they should, but we haven't done that yet.
[17:47] <flacoste> ah ok
[17:51] <flacoste> gmb: you actually reported a known bug that deryck is fixing, it should have been mentioned as a known issue in the blog post
[17:51] <flacoste> maybe we should do a duplicate guess contest
[17:52] <flacoste> i bet we are going to get 5 duplicates of that bug
[17:52] <gmb> flacoste: Yeah, I noticed it got duped. My bug title was obviously far enough away from the original bug for the match to fail.
[20:13] <lifeless> righto, vandalism undone
[20:13] <lifeless> all hail deleting of bugtasks too
[22:03] <allenap> abentley: Do you have a few minutes to talk about PackagingJob?
[22:04] <lifeless> allenap: just the lad
[22:04] <andy_js> How do I get login to work in a production environment?
[22:04] <lifeless> allenap: I want to talk to you aboot script activity ;)
[22:04] <abentley> allenap: sure.
[22:04] <lifeless> allenap: I'm happy to queue behind you and abentley
[22:04] <allenap> lifeless: Yeah, I know :) I really wanted to reply on the wiki but I have hardly had a moment :-/
[22:05] <allenap> Cool, abentley first, he's closer to EOD.
[22:06] <allenap> abentley: In trying to open translations for Precise we first have to erase all existing data that has been created for it.
[22:06] <allenap> abentley: We have a script to do that: http://paste.ubuntu.com/748706/
[22:07] <allenap> However, there are also references to some of that data from PackagingJob records. Nearly 80k of them iirc.
[22:07] <allenap> abentley: Jeroen reckoned that we could probably delete them, but that we'd need to recreate them.
[22:07] <allenap> abentley: Your name and Danilo's are all over PackagingJob, hence why I'm asking you.
[22:09] <abentley> allenap: My work on packaging has been SourcePackageRecipeBuilds.  Those did use a PackagingJob, IIRC.
[22:10] <abentley> Oh, and I guess the translation stuff counts, too.
[22:11] <abentley> allenap: So, I don't think you need to recreate those records.
[22:11] <allenap> \o/
[22:11] <allenap> We can just delete and be done?
[22:12] <abentley> allenap: Records that are not active are strictly for historical record.
[22:12] <abentley> allenap: If you're deleting the Precise translations, then you wouldn't need info about the jobs related to those Precise translations.
[22:13] <abentley> allenap: So yes, I think you can just delete and be done.
[22:13] <allenap> abentley: Great. So, as long as the job records are in terminal states we can trash them. If they're not in terminal states we ought to get them to terminal states before trashing them.
[22:14] <abentley> allenap: I don't think you even need to get them to a terminal state, since you'll be deleting whatever they could act on.
[22:15] <abentley> allenap: but of course, you want to avoid doing this for non-Precise jobs.
[22:15] <allenap> abentley: Okay, tip top. Thank you, I think that's all I need to know.
[22:15] <abentley> allenap: cool.
[22:16] <allenap> abentley: Yes, accidentally overstepping has been my main fear when crafting this script.
[22:17] <lifeless> allenap: voice?
[22:17] <allenap> lifeless: Okay, let me re-read your last comment first though :)
[22:17] <lifeless> sure
[22:20] <allenap> lifeless: Okay, ready.
[22:33] <huwshimi> lifeless: Just before I do it, I want to bump bug 894449 to critical. It creates a completely broken UI and would oops if there were such things for the UI. Is there any reason why it shouldn't be critical (in case I missed something)?
[22:33] <_mup_> Bug #894449: It's possible to create an entirely useless bug listing <bug-columns> <ui> <Launchpad itself:Triaged> < https://launchpad.net/bugs/894449 >
[22:33] <wgrant> That's only High.
[22:34] <lifeless> huwshimi: It doesn't prevent users doing what they want, and it only affects users explicitly opted into the beta program.
[22:34] <wgrant> Users can make the UI in a beta feature show nothing if they explicitly tell it to.
[22:35] <lifeless> huwshimi: So I don't think it meets our guidelines for critical.
[22:35] <lifeless> huwshimi: *however* there is a bug that the new UI breaks some searches, so I think we have to turn it off anyhow.
[22:35] <lifeless> huwshimi: because *that* bug is critical.
[22:35] <huwshimi> lifeless: OK, no problems