[00:00] <poolie> sure, call me?
[00:00] <lifeless> are you on skype?
[00:00] <poolie> i can be
[00:02] <lifeless> please
[00:02] <poolie> mm it's trying to connect
[00:02] <lifeless> ok, I'll ring ze landline
[00:32] <jam> rockstar: just to confirm, the email I CC'd you on failed with Submit Request Failure as I expected it would
[00:50] <rockstar> jam, hm.
[00:53] <jam> rockstar: would forwarding the failure help you?
[00:54] <rockstar> jam, no, I see that it's valid.
[00:54] <jam> sure, I didn't know if the failure included any info you could use
[00:57] <rockstar> jam, does the same thing happen on bugs as well?
[00:57] <jam> same thing
[00:58] <rockstar> jam, hm.  I think the bug probably goes with foundations.  I wonder if something has changed with python gpg bindings.
[01:02] <bac> lifeless: meeting ping
[01:04] <lifeless> hish
[01:14] <poolie> jam by 'avatar' do you mean 'current user'?
[01:14] <poolie> is that the actual name for it in that code?
[01:14]  * poolie is not pedantic, just curious
[01:14] <jam> right
[01:14] <poolie> at least in this context
[01:14] <jam> avatar.user_id is the database identifier used for the branch vfs
[01:17] <poolie> ok
[01:18] <poolie> jam, robert recently landed (or at least proposed) some new scopes, which may give you a template
[01:20] <poolie> lifeless: where did they go? they don't seem to be in devel yet
[01:21] <poolie> oh, it is landed, i just expected it would be in scopes.py
[01:23] <thumper> rockstar: want to talk?
[01:23] <poolie> i'll reply by mail
[01:32] <lifeless> poolie: feel free to refactor; I was following what I saw (right or wrong)
[01:33] <rockstar> thumper, 5 minutes.
[01:33] <thumper> rockstar: ok, I'll go make coffee
[01:34] <wallyworld_> thumper: white with 2 sugars please
[01:34] <thumper> wallyworld_: pass me your cup
[01:34] <wallyworld_> thumper:  i would if i knew how to get the cup icon to display in irc :-)
[01:36] <poolie> ☕☕
[01:36] <poolie> "find" in character map :)
[01:36] <wallyworld_> poolie: show off!
[01:43]  * thumper waits for rockstar
[01:44] <rockstar> thumper, wait no more!
[02:06] <wgrant> Can we get ~ubuntu-bugs removed from ~launchpad-beta-testers? It was added overnight to allow the ubuntu-bugs ML to receive comments from remote bug trackers, but has the side effect of redirecting lots of people to edge.
[02:07] <wgrant> Where "lots" is around 400.
[02:08] <lifeless> that seems like a bad idea
[02:08] <lifeless> file a bug, malone, critical sev
[02:09] <wgrant> Why does it need a bug?
[02:09] <lifeless> well, won't the comments not go to ubuntu-bugs if we remove it ?
[02:09] <wgrant> There's already a bug for sending the remote comments to everybody.
[02:09] <lifeless> ok.
[02:09] <wgrant> And they've only been going there since last night.
[02:09] <wgrant> So it's not a significant loss.
[02:09] <lifeless> makes sense to me
[02:09] <lifeless> bugsquad can do it
[02:10] <wgrant> Bug 639736
[02:10] <wgrant> lifeless: How?
[02:10] <lifeless> exit the team?
[02:10] <wgrant> Bug Control is a member, but bdmurray is the only admin.
[02:10] <lifeless> oh
[02:10] <lifeless> bdmurray: yo!
[02:10] <wgrant> Otherwise an ~l-b-t admin can do it.
[02:10] <lifeless> bdmurray: a) you may want to have someother admins
[02:11] <lifeless> bdmurray: and b) ^ the discussion above
[02:14] <bdmurray> wgrant: sure I'll do it now
[02:14] <wgrant> bdmurray: Thanks.
[02:14] <bdmurray> I was trying to work around a Launchpad bug
[02:14] <wgrant> is there a reason for bugcontrol to be a member these days?
[02:15] <wgrant> Yeah, I saw the mail go past overnight :)
[02:15] <wgrant> ~ubuntu-bugcontrol could be the bug supervisor, and ~ubuntu-bugs just be subscribed.
[02:18] <bdmurray> wgrant: well Launchpad is oops'ing on me now wrt ubuntu-bugs and lp-beta-testers
[02:18] <wgrant> bdmurray: What's the traceback?
[02:18] <bdmurray> wgrant: its a TimeoutError in the sql statement
[02:18] <wgrant> Awesome.
[02:19] <wgrant> Retry a couple of times, I suppose.
[02:19] <wgrant> Maybe it will eventually work.
[02:29] <StevenK> thumper: O hai; I seem to recall SQLObjectNotFound coming up in your review of my branch -- did you recommend to just bin it and I missed it?
[02:30] <thumper> StevenK: I think I recommended getting it from lazr.lifecycle
[02:30] <StevenK> Ah yes, that sounds right
[02:30] <StevenK> thumper: I'm fixing up all of the lint :-)
[02:31] <StevenK> thumper: I don't see SQLObjectNotFound in lazr.lifecycle?
[02:32] <thumper> what do you see?
[02:32] <StevenK> I just grepped for it, I got no results
[02:32] <lifeless> thumper: is there a bug that
[02:32] <lifeless> https://code.edge.launchpad.net/~lifeless/launchpad/bugmessage/+merge/35620/reviews/45660/+reassign
[02:33] <lifeless> doesn't let you change the review type requested ?
[02:33] <thumper> lifeless: yes
[02:33] <lifeless> ok cool.
[02:34] <thumper> StevenK: you are right
[02:34] <thumper> it isn't there
[02:34] <thumper> from lp.app.errors import NotFoundError
[02:34] <mwhudson> is this a way of teaching your mentee a lesson? :-)
[02:34] <thumper> that'll do a 404
[02:35] <thumper> mwhudson: heh
[02:35] <mwhudson> "don't take everything on trust"
[02:35] <thumper> mwhudson: and "everybody lies"
[02:35] <mwhudson> :)
[02:35] <StevenK> Thank you, House
[02:36]  * StevenK passes thumper a cane, and some Vicodin
[02:36]  * thumper hobbles off to do some shopping
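The `NotFoundError` import thumper points StevenK at is what the webapp turns into an HTTP 404. A minimal sketch of that pattern; the class here is a stand-in for `lp.app.errors.NotFoundError`, and the `branches` lookup is invented for illustration:

```python
# Raising NotFoundError from lookup code is what gets rendered as a 404.
# This NotFoundError is a toy stand-in for lp.app.errors.NotFoundError;
# the `branches` dict is purely illustrative.

class NotFoundError(LookupError):
    """Stand-in for lp.app.errors.NotFoundError (rendered as a 404)."""

branches = {'trunk': 'lp:launchpad'}

def get_branch(name):
    try:
        return branches[name]
    except KeyError:
        # The traversal machinery catches this and serves a 404 page.
        raise NotFoundError(name)

assert get_branch('trunk') == 'lp:launchpad'
try:
    get_branch('no-such-branch')
except NotFoundError:
    print('404')
```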
[03:03] <lifeless> man
[03:03] <lifeless>     2026  OOPS-1719N1979  BugTask:+index
[03:03] <lifeless> still need to get that down
[03:05] <lifeless> Ursinha-afk: if you see this https://devpad.canonical.com/~lpqateam/qa_reports/launchpad-stable-deployment.txt - not sure why 'Revision 11522 can not be deployed: not-ok'
[03:05] <bdmurray> wgrant: so I set the expiration date for the 16th we'll see what that does
[03:06] <lifeless> also we probably want to report on more of the non-ok revisions
[03:18] <poolie> lifeless: bug 640125 for logging stuff
[04:30] <wallyworld__> anyone seen this error running bin/test -vvm lp.registry.browser.tests.test_milestone
[04:30] <wallyworld__> Failure in test lp.registry.browser.tests.test_milestone.TestMilestoneMemcache.test_milestone_index_memcache_anonymous
[04:30] <wallyworld__>   File "/home/ian/projects/lp-branches/current/lib/lp/registry/browser/tests/test_milestone.py", line 108, in test_milestone_index_memcache_anonymous
[04:30] <wallyworld__>     'anonymous, view/expire_cache_minutes minute', content)
[04:30] <wallyworld__>   File "/home/ian/projects/lp-branches/current/lib/lp/testing/memcache.py", line 33, in assertCacheHit
[04:30] <wallyworld__>     self.assertTrue(cache_start in before)
[04:30] <wallyworld__> AssertionError
[04:32] <wallyworld__> i was testing a small change which touched test_milestone.py and get the same error even after reverting back to the original version
[04:40] <thumper> wallyworld__: what touched test_milestone?
[04:41] <jtv> Is ec2 broken for anyone else?
[04:41] <jtv> The script, I mean.
[04:41] <wallyworld__> thumper: line 86 change to series = self.factory.makeProductSeries(product=product) -- it was just self.factory.makeSeries(...)
[04:41] <thumper> jtv: I'm just testing, but it is an older one
[04:41] <thumper> wallyworld__: ah.
[04:41] <jtv> I just updated everything, and regret it
[04:42] <jtv> Actually it may just be "ec2 land"—I'm firing off an "ec2 test" that's really firing up an instance afaics
[04:42] <wallyworld__> thumper: i re-ran the tests on a different branch without the changes and it still failed the same way :-(
[04:43] <thumper> wallyworld__: I'll try here
[04:43] <wallyworld__> ok. thanks. i'll have to get out my scuba gear and dive in to take a look
[04:45] <thumper> wallyworld__: it runs here on your tales-linkify-broken-links branch
[04:45] <thumper> wallyworld__: on lucid
[04:45] <wallyworld__> thumper: hmmmm. that's the same branch i tried it on :-(
[04:45] <thumper> wallyworld__: it could be a lucid vs maverick issue
[04:45] <thumper> can we find other volunteers to try?
[04:46] <wallyworld__> ok. I'll make sure all other tests etc run locally and then run it through ec2 test
[04:47] <thumper> wallyworld__: one thing on that branch
[04:47] <wallyworld__> thumper: yeeeees?
[04:47] <thumper> wallyworld__: can I get you to prefix any variable without a proxy with naked_ ?
[04:48] <thumper> so...
[04:48] <thumper> naked_product = removeSecurityProxy(branch.product)
[04:48] <thumper> wallyworld__: as this confused me looking through the tests
[04:48] <wallyworld__> yup. good idea
[04:48] <thumper> wallyworld__: as I expected it to fail not realising that the product didn't have a proxy
[04:48]  * wallyworld__ nods
[04:48] <thumper> thanks
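The `naked_` convention thumper asks for can be sketched like this. `removeSecurityProxy` here is a toy stand-in for `zope.security.proxy.removeSecurityProxy`; the proxy and `Product` classes are invented for illustration:

```python
# Convention from the review above: any variable holding an object with
# its security proxy stripped gets a `naked_` prefix, so a reader of a
# test immediately knows attribute access bypasses permission checks.

class SecurityProxy:
    """Toy proxy: blocks attribute access like a Zope security proxy."""
    def __init__(self, wrapped):
        object.__setattr__(self, '_wrapped', wrapped)

    def __getattr__(self, name):
        raise PermissionError('%s is protected' % name)

def removeSecurityProxy(obj):
    # Stand-in for zope.security.proxy.removeSecurityProxy.
    return object.__getattribute__(obj, '_wrapped')

class Product:
    def __init__(self, name):
        self.name = name

product = SecurityProxy(Product('launchpad'))

# The prefix flags that permission checks no longer apply here.
naked_product = removeSecurityProxy(product)
print(naked_product.name)
```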
[04:49] <wallyworld__> maybe one day next year this branch will get landed :-)
[04:51] <thumper> it is ' ' close
[04:51]  * thumper mimics ' ' this close with fingers slightly apart
[04:51] <lifeless> jtv: symptoms?
[04:51] <wallyworld__> still, i'm getting to play with pipes going back and firth between the 2 branches. nice plugin :-)
[04:52] <wallyworld__> s/firth/forth
[04:53] <jtv> lifeless: http://paste.ubuntu.com/494554/
[04:54] <jtv> "ec2 test" works
[04:56] <thumper> wallyworld__: don't forget to pump :)
[04:56]  * thumper steps away to cook, back later for more evening calls
[04:57] <thumper> wallyworld__: I'll check back on the tales fix branch before I sit down to eat to see if we can get that into the landing queue
[04:57] <wallyworld__> thumper: i won't forget :-)
[04:58] <wallyworld__> thumper: i'm just about to push the "final" changes. had to answer the door. a box from Think Geek arrived :-)
[05:04] <wallyworld__> lifeless: what's the status of your getLastOops fix? ie when is it safe to try and land a branch again?
[05:34] <lifeless> it landed
[05:34] <lifeless> so the symptoms are addressed, I'm landing another branch via ec2 land atm, if that works I'll mail the list. if it doesn't I'll mail the list.
[05:57] <lifeless> stub: hai
[05:57] <lifeless> stub: I has db patch up
[05:57] <stub> yo
[05:57] <stub> yup
[05:57] <lifeless> I'd like to check its a sane approach with you
[06:04] <stub> lifeless: That seems fine. I want to run a quick test on staging to see if we should populate it in the DB patch and make it NOT NULL.
[06:05] <lifeless> stub: coolio
[06:05] <lifeless> stub: I was figuring on a garbo task to populate it
[06:06] <stub> We could just use the message id instead of the index, but that would break existing links.
[06:06] <stub> lifeless: I don't see how it is useful if it information is only available sometimes.
[06:07] <lifeless> stub: garbo task to populate over a cycle
[06:07] <stub> ok
[06:07] <lifeless> once the row exists land a patch to set it on all new rows
[06:07] <stub> The table isn't wide - I don't think it will take long to rewrite it but I'll check first.
[06:07] <lifeless> then, once the garbo has caught up, we know its all there
[06:08] <lifeless> land a patch to use it
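The rollout dance lifeless and stub settle on (nullable column plus index now, incremental garbo backfill, NOT NULL enforced a later cycle) can be sketched as a batched backfill loop. sqlite3 stands in for PostgreSQL, and the `message`/`idx` schema is invented for illustration, not Launchpad's real tables:

```python
# Incremental backfill: populate a new nullable column in small batches
# from a recurring task, so millions of rows never get rewritten in one
# transaction.  Schema and sqlite3 are illustrative stand-ins.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE message (id INTEGER PRIMARY KEY, bug INTEGER, idx INTEGER)')
conn.executemany(
    'INSERT INTO message (bug) VALUES (?)', [(b,) for b in (1, 1, 1, 2, 2)])

BATCH = 2

def backfill_batch():
    """Populate `idx` for up to BATCH rows; return the number updated."""
    rows = conn.execute(
        'SELECT id, bug FROM message WHERE idx IS NULL ORDER BY id LIMIT ?',
        (BATCH,)).fetchall()
    for id_, bug in rows:
        # idx = position of this message within its bug, matching the
        # "index instead of message id" scheme discussed above.
        (n,) = conn.execute(
            'SELECT COUNT(*) FROM message WHERE bug = ? AND id < ?',
            (bug, id_)).fetchone()
        conn.execute('UPDATE message SET idx = ? WHERE id = ?', (n, id_))
    conn.commit()
    return len(rows)

# A garbo task would call this once per run until it returns 0; only
# then is it safe to land the NOT NULL constraint.
while backfill_batch():
    pass
print(conn.execute('SELECT idx FROM message ORDER BY id').fetchall())
```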
[06:41] <lifeless> stub: its going to be millions of rows though
[06:48] <thumper> wallyworld___: around?
[06:52] <thumper> wallyworld_: around?
[06:52] <wallyworld_> thumper: like a rissole
[06:53] <thumper> wallyworld_: I've approved the tales branch with an optional but suggested change
[06:53] <wallyworld_> ok. will check email
[06:53] <wallyworld_> btw, a make clean helped that milestone test to pass
[06:55] <wallyworld_> thumper: i'll make the suggested change and land it once ec2 is back in business. thx
[06:55] <thumper> wallyworld_: ack
[06:55]  * thumper done until call with mthaddon later
[07:02] <stub> lifeless: its not too slow to update, but it is too slow to calculate all the indexes
[07:03] <stub> well... both...
[07:03] <lifeless> stub: times?
[07:03] <stub> longer than I could be arsed waiting ;)
[07:06] <lifeless> stub: so incremental is the way
[07:07] <lifeless> stub: how long to add null and index ?
[07:07] <lifeless> stub: or should we add null and index after the rollout ?
[07:09] <stub> The index is added in this rollout, we populate, then we add the not null next rollout.
[07:09] <lifeless> kk
[07:09] <lifeless> what patch number ?
[07:10] <lifeless> stub: how many rows do we need to calculate?
[07:16] <lifeless> stub:?
[07:17] <stub> 3.3 million rows to calculate
[07:22] <stub> So 42 seconds to calculate on staging with this approach... I'll leave the update running for a bit.
[07:22] <stub> patch will be 2208-14-0
[07:24] <lifeless> how are you populating ?
[07:31] <lifeless> stub: and can you approve so I cans lands it ?
[07:42] <poolie> lifeless:  are your Epic slides on the web?
[07:42] <lifeless> yes
[07:42] <lifeless> linked from the architectureguide
[07:42] <poolie> tx
[07:43] <lifeless> and emails n stuff
[07:50] <stub> lifeless: http://paste.ubuntu.com/494601/ is the best I can do, but the update is too slow (killed it again)
[07:51] <stub> lifeless: So I'll approve the patch in the form you have and we will need datamigration and a follow up patch
[07:57] <lifeless> thanks
[08:44] <adeuring> good morning
[08:49] <mrevell> Morning
[10:00] <jelmer> lifeless, hi
[10:00] <jelmer> lifeless, I saw your getLastOops branch landed. Did it help with your ec2 runs?
[10:28] <jml> good morning.
[10:29] <jelmer> hi jml
[10:48] <jml> I thought someone had done some work recently on exposing blueprints over the webservice
[10:49] <wgrant> There have been a few efforts there.
[10:49] <wgrant> ajmitch had one.
[10:49] <wgrant> I don't recall who tried it last.
[10:49] <wgrant> Everyone seems to give up quickly...
[10:51] <jml> I think james_w & cjwatson have both tried.
[10:51] <jml> *sigh*
[10:52] <wgrant> Maybe some things just weren't meant to be (I may be thinking of the app itself here, though...)
[10:53] <nigelb> jml: ajmitch said he was working on one.
[10:53] <nigelb> arg, I should read
[10:54] <jml> nigelb, it happens :)
[10:54]  * nigelb is waking up at 3 pm - confusion guaranteed
[11:00] <ajmitch> yeah, james_w had done a bit more than me & submitted it for review - I think people recoiled in horror from the blueprints internals that got exposed
[11:00] <nigelb> heh
[11:00] <ajmitch> I don't know if anyone else has tried further since then
[11:01] <nigelb> Now I know why wgrant always wants that part killed
[11:01] <wgrant> I too tried back in the early days.
[11:02] <wgrant> But there's a lot of refactoring required.
[11:02] <wgrant> Then I decided that attempting to arrange the application's demise was more productive.
[11:02] <ajmitch> this is why I've not really touched it since
[11:03] <nigelb> oh, it is going to be killed?
[11:03] <ajmitch> if wgrant has his way
[11:04] <wgrant> Sadly, others do not seem convinced.
[11:08] <ajmitch> someone broke the css again?
[11:17] <henninge> ajmitch: yup, looks like it
[12:03] <deryck> Morning, all.
[12:03] <jml> I'd love to spend a couple of months improving our developer documentation & tools.
[12:03] <jml> deryck, good morning
[12:07] <jml> deryck, just quickly, what's the status of https://dev.launchpad.net/LEP/BugzillaComponents ?
[12:08] <deryck> jml, we're in progress on it.  bryceh started coding a couple weeks ago.
[12:08] <jml> deryck, ta.
[12:08] <deryck> jml, I thought you approved this one.  Did we leap before we looked?
[12:08] <jml> deryck, it's approved, but it was in the "Ready to code" section. Just tidying up the LEP page.
[12:08] <deryck> ah, ok.
[12:09] <jml> the page tells a slightly different story to the Kanban boards, I think :)
[12:16] <lifeless> jelmer: well it, itself, got through ec2
[12:18] <lifeless> deryck: for your delectation https://code.edge.launchpad.net/~lifeless/launchpad/bugmessage/+merge/35620
[12:19] <lifeless> jelmer: two branches since have passed ec2, but not got through pqm
[12:19] <deryck> lifeless, nice!
[12:28]  * thumper halts
[12:28] <lifeless> deryck: hopefully
[12:29] <lifeless> deryck: we're 3 weeks out from starting the data migration ;)
[12:29] <deryck> heh, well true.  But still it's step in the right direction.
[12:30] <deryck> bigjools, hey.  I just now noticed you requested a review from me.  Sorry.  You still need this?
[12:31] <bigjools> hey deryck, yeah it would not hurt if you had a look.  It's to do with closing private bugs from uploads.
[12:31] <deryck> bigjools, ok, will look now.
[12:31] <bigjools> cheers
[12:32] <bigjools> deryck: we had the conversation ages ago where we said it would be a good idea to do the closing in a Job, but I've decided on the dirty hack instead :)
[12:32] <deryck> bigjools, right,  I recall that now.  And you dirty hacker, you. ;)
[12:33] <lifeless> deryck: bugtrackerset isn't paginated
[12:33] <lifeless> deryck: though it only has 900 rows, so its not really dying for it.
[12:35] <lifeless> jml: thanks for landing that fix the other day
[12:35] <lifeless> bigjools: your cps are done; can you refreeze the db in prod though?
[12:35] <jml> lifeless, np
[12:36] <bigjools> lifeless: it was first on my list of things to do this morning, and the branch is playing in prod_lp right now
[12:36] <lifeless> bigjools: sweet, thanks
[12:36] <bigjools> lifeless: although the DAS enablement branch should have gone to appservers as well, BTW
[12:36] <lifeless> bigjools: we deployed to all
[12:37] <lifeless> bigjools: edge prod backends the works.
[12:37] <bigjools> lifeless: your CP request just said "soyuz" ?
[12:37] <lifeless> yes, but there were 4? 5? things, and we want same revs as much as possible.
[12:37] <bigjools> if it went everywhere then great
[12:37] <lifeless> so chex just said 'meh, the works'.
[12:38] <bigjools> :)
[12:38] <bigjools> thanks for getting it in
[12:38] <deryck> lifeless, right.  I didn't think it was paginated.  Would still benefit from that, though the page isn't really doing much short of the listing I think.
[12:38] <lifeless> bigjools: If you'd had it partially there I would have known in advance to ensure it got to the appservers :) Still, it did, so it's cool.
[12:38] <lifeless> deryck: I'm sure of it
[12:38] <lifeless> 11 /    8  BugTrackerSet:+index
[12:39] <lifeless> pity that the oops are not listed
[12:43] <lifeless> SpamapS: hey
[12:43] <lifeless> SpamapS: what was that 11 second api you found again?
[12:50] <wgrant> bigjools: What do you think about bug #566339? I'm fairly sure it's not the one that you've fixed.
[12:50] <bigjools> possibly yeah, there's more than one place that sends emails
[12:51] <wgrant> Yay.
[12:51] <deryck> bigjools, so I'm not really worried about this change as it stands now, but I do have concerns that other additions to the script could come along acting on a non-security-proxied bug and doing things we don't want.
[12:51] <bigjools> well the bug mail and the upload notifications I mean
[12:52] <bigjools> deryck: totally - this change is completely isolated to that one function though, so it should be fine. I hope :)
[12:52] <bigjools> deryck: thanks for looking
[12:52] <deryck> bigjools, well, I would rather have the comment above be a true XXX or a strongly worded "if you do more than fix release bugs here, this has to move to jobs."  What do you think of that?
[12:53] <bigjools> deryck: possibly - I'm not sure the job is necessary but I agree with a strongly-worded warning about being careful with the unproxied object in that function/.
[12:54] <bigjools> then the developer can make an informed choice about what to do
[12:54] <deryck> bigjools, right, that works.  Thanks.  r=me.
[12:54] <bigjools> deryck: great, thanks
[12:56] <lifeless> danilos: if I'm still not getting it, perhaps we can talk my morning; tis midnight here.
[12:59] <danilos> lifeless, sure
[13:00] <lifeless> danilos: also I really want to figure out if it was canonical_url, or the template code, or 50/50 that contributed to the big +templates speedup
[13:00] <lifeless> danilos: I'm already encouraging gary to get the new template engine widely deployed asap, but don't have enough data yet to say if we need to overhaul/replace/ditch canonical_url.
[13:06] <lifeless> deryck: https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1720G80
[13:06] <lifeless> 938 queries!
[13:07] <deryck> oh, wow.  At least one query per tracker then.
[13:07] <lifeless> 1 915 4354 4 4350 SQL-launchpad-main-slave SELECT COUNT(*) FROM BugWatch WHERE BugWatch.bugtracker = %s
[13:07] <lifeless> its counting the bugs per tracker.
[13:08] <lifeless> group by time boys!
[13:08] <deryck> ok, so that's a pretty cheap fix!
[13:08] <deryck> right!  group by FTW! :-)
[13:08] <lifeless> should be for a while, yes
[13:08] <deryck> Today's my long day, so I'll see what I can do today.
[13:10] <lifeless> https://bugs.edge.launchpad.net/malone/+bug/618375
[13:11] <stub> lifeless: I'm thinking of making scripts that call logger.exception(msg) emit an OOPS. Worthwhile making scripts that call logger.error(msg) emit an OOPS too? Or logger.warn(msg)?
[13:11] <lifeless> deryck: Time: 90.955 ms
[13:11] <wgrant> jelmer: Morning.
[13:11] <lifeless> select BugWatch.bugtracker, count(bugwatch.*) from bugwatch group by bugwatch.bugtracker;
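The 938-query OOPS above is a classic N+1: one `COUNT(*)` issued per bug tracker. lifeless's `GROUP BY` rewrite collapses that to a single query; a toy sqlite3 demonstration (the schema and data are illustrative, not Launchpad's):

```python
# N+1 COUNT-per-tracker versus one GROUP BY query over the same data.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE bugwatch (id INTEGER PRIMARY KEY, bugtracker INTEGER)')
conn.executemany('INSERT INTO bugwatch (bugtracker) VALUES (?)',
                 [(t,) for t in (1, 1, 1, 2, 3, 3)])

# The pattern the OOPS shows: one COUNT(*) round-trip per tracker.
trackers = [r[0] for r in conn.execute('SELECT DISTINCT bugtracker FROM bugwatch')]
slow = {t: conn.execute('SELECT COUNT(*) FROM bugwatch WHERE bugtracker = ?',
                        (t,)).fetchone()[0]
        for t in trackers}

# The fix: the same answer from a single grouped query.
fast = dict(conn.execute(
    'SELECT bugtracker, COUNT(*) FROM bugwatch GROUP BY bugtracker'))

assert slow == fast == {1: 3, 2: 1, 3: 2}
```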
[13:11] <jelmer> wgrant: hi
[13:11] <lifeless> stub: \o/
[13:12] <lifeless> stub: here are my thoughts
[13:12] <lifeless> stub: things that go wrong in scripts should be oops
[13:12] <stub> To all three? I was thinking of starting with exception and error.
[13:12] <lifeless> stub: except for things that go wrong writing oops out.
[13:12] <lifeless> stub: I'd do exception and error, and let everyone know that thats happening.
[13:13] <stub> k
[13:13] <lifeless> stub: just make sure there is a backdoor to say 'hey, oops writing failed'
[13:13] <lifeless> (and include the traceback of *that* failure)
[13:13] <lifeless> ok, really must go.
[13:13] <lifeless> avoir.
[13:13] <stub> lifeless: I'm not going to stop the log.error() output, just create the logs in addition.
[13:13] <stub> night
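Stub's idea above (scripts that call `logger.exception`/`logger.error` also emit an OOPS, without suppressing the normal log output) can be sketched as a `logging.Handler`. `record_oops` here is a hypothetical hook standing in for Launchpad's real OOPS machinery:

```python
# A handler that records an OOPS for every ERROR-level log record
# (logger.exception logs at ERROR), leaving normal log output untouched.
import logging

oopses = []

def record_oops(report):
    # Stand-in for writing an OOPS report to disk.
    oopses.append(report)

class OopsHandler(logging.Handler):
    def __init__(self):
        # ERROR threshold covers both logger.error and logger.exception.
        super().__init__(level=logging.ERROR)

    def emit(self, record):
        try:
            record_oops({'message': record.getMessage(),
                         'exc_info': record.exc_info})
        except Exception:
            # The "backdoor" lifeless asks for: if writing the OOPS
            # fails, report that failure rather than swallowing it.
            self.handleError(record)

log = logging.getLogger('script')
log.addHandler(OopsHandler())

log.error('something went wrong')
try:
    1 / 0
except ZeroDivisionError:
    log.exception('boom')

print(len(oopses))
```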
[13:14] <wgrant> jelmer: Your build uploading fixes seem to have alleviated my concerns. Looks pretty good now.
[13:17] <jelmer> wgrant: Thanks!
[13:17] <wgrant> jelmer: Are you deliberately storing the upload log in all cases now?
[13:19] <wgrant> There's also a tiny race (the upload processor may catch a build after b-m moves it into incoming, but before it commits the UPLOADING status), but that may not be a concern.
[13:19] <wgrant> This is all too complicated :(
[13:23] <jelmer> wgrant: you're thinking we should have an explicit database commit before we do the move?
[13:24] <wgrant> jelmer: That might work.
[13:24] <wgrant> Otherwise, well, the extremely unlikely case will probably happen at some point :)
[13:24] <wgrant> And then we'll have builds stuck in UPLOADING and the world will end.
[13:25] <jelmer> wgrant: actually, it won't be an issue at the moment since the uploadprocessor will warn if it encounters a build that doesn't have the right build.status and skip that directory.
[13:26] <wgrant> jelmer: Didn't you just change it to move it to failed?
[13:26] <wgrant> It was OK when it just skipped.
[13:37] <jelmer> wgrant: hmm, perhaps it should be changed back to only skip to cope with that situation
[13:39] <wgrant> jelmer: It'd be nice if it would skip it once, but that's probably hard to arrange.
[13:54] <wgrant> jelmer: Maybe skip if date_finished is in the last 5 minutes or something.
[13:58] <gmb> Gaaaaah.
[13:58] <gmb> Can anyone remember how to raise an exception (or do something similar anyway) in YUI3?
[13:59] <james_w> jml: I'm pretty close I think. I need to move the code from security.py on to the model, and tweak some methods, then get the pre-requisite branch reviewed
[13:59] <james_w> I haven't worked on it in a while though
[14:00] <gmb> Or maybe I should just stick with the JS way of doing things.
[14:02] <jelmer> wgrant: A commit/flush seems like the proper thing to do though.
[14:03] <wgrant> jelmer: As long as nothing expects an UPLOADING build to have an entry in incoming.
[14:04] <jelmer> wgrant: right
[14:08] <wgrant> jelmer: Previously the upload log was recorded only on failure -- it's probably fine to record on success too, but I'd like to be sure that's a deliberate change.
[14:08] <jelmer> wgrant: yeah, I saw your point earlier. I was going to wait for Julian to come back from lunch and then ask him about it.
[14:09]  * bigjools is back
[14:09] <wgrant> It might be a good idea to record it. But it's a little bit of extra librarian bloat.
[14:09] <jelmer> wgrant: it's not recorded on successful uploads to save up on space, but lp-code has a triaged bug open about the fact that not storing it for successful builds is inconsistent
[14:09] <wgrant> Yeah, I saw that.
[14:10] <wgrant> Given the other large files we store for every build anyway, the space is trivial.
[14:11] <bigjools> jelmer: the bug is filed on lp-code?
[14:11] <jelmer> bigjools: yeah - it just talks about recipe builds
[14:11] <bigjools> I'm inclined to stay with the existing behaviour
[14:11] <jelmer> bigjools: https://bugs.edge.launchpad.net/launchpad-code/+bug/
[14:11] <jelmer> 581892
[14:12] <bigjools> I don't see the point of storing successful upload logs
[14:12] <bigjools> nobody will look at them
[14:13] <jelmer> abentley: hi
[14:13] <abentley> jelmer, hi.
[14:14] <jelmer> abentley: You appear to be the reporter of bug 581892. I'm touching the related code at the moment and we were just discussing how useful having the upload logs for successful builds would be.
[14:15] <jelmer> abentley: The upload logs are currently not stored because they're not particularly useful for anything and just take up space on the librarian.
[14:15] <abentley> jelmer, I think that knowing what a successful upload looks like can help with debugging unsuccessful ones.
[14:16] <abentley> jelmer, it also seems asymmetrical.  Why store build logs and not upload logs?
[14:17] <jelmer> abentley: build logs contain output from the package build system, they might be interesting to the uploader.
[14:17] <bigjools> my suggestion is to make it optional depending on what the upload policy says
[14:18] <bigjools> nobody wants or needs it in soyuz world
[14:18] <bigjools> AFAIK anyway
[14:18] <bigjools> but that should not block this landing
[14:18] <bigjools> (with the existing behaviour)
[14:18] <jml> james_w, that's great.
[14:18] <abentley> jelmer, I don't understand why "contain output from the package build system" makes it more storable.
[14:19] <wgrant> It's useful to see how the package was built. There's more than a binary situation there.
[14:19] <jml> james_w, btw, do you think it would be useful to have a tool that gets the build dependency branches for a given project?
[14:19] <jelmer> abentley: it makes it more interesting, thus more storable. as a package uploader I often look at successful build logs, I have never even thought about looking at the upload logs.
[14:19] <wgrant> With a successful upload... it just works.
[14:19] <jelmer> bigjools: That seems reasonable
[14:20] <gmb> mars, Around?
[14:20] <james_w> jml: I think so
[14:20] <abentley> jelmer, bigjools: when I was first looking at it and coding my own build page, it looked like something was wrong.
[14:21] <abentley> Because I knew the upload had happened, and I knew we store logs, but I couldnt' see it.
[14:21] <jml> james_w, I don't think it would be too hard to do with what we have now.
[14:21] <james_w> jml: indeed not
[14:21] <abentley> wgrant, could you give me an example of different ways the package could be built?
[14:21] <jelmer> bigjools: I was hoping there was quick consensus on this and we could deal with that bug (whether it be wontfix or fixcommitted), but since there isn't I'll just keep things as is for the moment and leave that bug alone.
[14:22] <jml> james_w, perhaps I'll knock up a prototype and then see if I can get someone else to take over work on it :)
[14:22] <bigjools> abentley: the soyuz build pages seem to be fine without it
[14:22] <jelmer> abentley: the compiler output would be part of the build output
[14:22] <wgrant> abentley: configure might make a bad decision based on the installed build-dependencies.
[14:22] <bigjools> jelmer: it should be policy based
[14:22] <jml> (although more likely I'll get distracted fixing txrestfulclient
[14:22] <jml> )
[14:22] <wgrant> abentley: There could be build warnings.
[14:23] <wgrant> We could have braindead packages detecting the installed kernel version and tailoring themselves to it grrrrrr.
[14:23] <james_w> jml: good idea
[14:23] <james_w> both of them
[14:23] <jml> :)
[14:24] <abentley> wgrant, thanks.
[14:24] <wgrant> So, I agree that it would be nice to store successful upload logs, and it would make things more consistent.
[14:24] <wgrant> But they're really not useful, and are a bit of bloat.
[14:25] <wgrant> abentley: Why did Code want them?
[14:26] <abentley> wgrant, when I was coding up the the build pages, I found it surprising.  I thought the upload logs were "missing".
[14:26] <wgrant> Ah.
[14:26] <gmb> mars, Unping; I'm an idiot.
[14:27] <bigjools> I don't think that's a compelling reason to add them, personally.
[14:28] <wgrant> Well, it does simplify archiveuploader if they are added.
[14:29] <abentley> bigjools, we probably have different thresholds for breaking symmetry.  Their slight bloat doesn't seem like a compelling reason to make the system more complicated to me.
[14:30] <bigjools> I think you have that back to front, but that's just my opinion
[14:30] <wgrant> Code does have branchrevision... :P
[14:30] <abentley> bigjools, that opinion would be consistent with us having different thresholds :-)
[14:31] <bigjools> heh, once you're used to branchrevision, nothing seems like bloat :)
[14:31] <abentley> wgrant, thumper and I are looking at ways to get rid of it (or at least remove the scaling problems).
[14:32] <wgrant> abentley: Excellent.
[14:40] <bigjools> wgrant: I have a challenge for you
[14:50] <wgrant> bigjools: Oh?
[14:50] <bigjools> wgrant: yeah, we need a test case for bug 556839
[14:51] <wgrant> Is that the +queue 403 that I was arguing about earlier?
[14:51] <bigjools> it's the double copy thing.  Crap that's a private bug, sorry.
[14:51] <wgrant> Oh, that one.
[14:52] <bigjools> noodles775 may have pinpointed the source of the problem in the packagecopier code
[14:52] <wgrant> yeah, can't see it.
[14:52] <Ursinha> lifeless, rev 11522 was blocked on bug 86185, that was also linked to the landed branch and qa-needstesting when it was in fact qa-untestable, because unrelated to the fix
[14:52] <bigjools> where if you're copying the same SPR with binaries, it lets it through
[14:52] <Ursinha> lifeless, the problem we discussed before (bug 638468)
[14:52] <wgrant> What's the actual problem?
[14:52] <wgrant> A double copy should be fine.
[14:53] <bigjools> wgrant: it was nightmarishly hard to reproduce, but I eventually found out that if you double tap the copy button you end up with the same package trying to get published twice, and the publisher spazzes out with a QueueInconsistentStateError
[14:54] <wgrant> Oh, delayed copies?
[14:54] <bigjools> nope, regular
[14:54] <wgrant> Er.
[14:54] <wgrant> Why would it whinge about that?
[14:54] <wgrant> That doesn't make sense.
[14:54] <bigjools> they were both private archives
[14:54] <wgrant> What's the QISE?
[14:55] <bigjools> "blah" is already published in archive for <series>
[14:55] <wgrant> That sounds delayed to me.
[14:55] <wgrant> In fact, yes.
[14:55] <wgrant> The publisher only deals with the queue in the case of a delayed copy.
[14:56]  * bigjools shrugs.  It happens.
[14:57] <bigjools> but yeah it's in process-accepted
[14:57] <wgrant> You're sure it was direct?
[14:57] <bigjools> so something went very wrong
[14:57] <wgrant> Ah, so no, delayed.
[14:57] <wgrant> It's not easy to reproduce?
[14:57] <bigjools> with the double tap I can do it
[14:57] <wgrant> Hm, I guess you might have to get another request in before the PackageUpload is there.
[14:57]  * wgrant checks.
[14:59] <wgrant> Yeah, so you need to have two concurrent transactions.
[14:59] <wgrant> Or the second will be rejected.
[14:59] <wgrant> That is hard to test :(
[14:59] <bigjools> exactly :)
[14:59] <wgrant> It's the same as the upload conflicts we get sometimes.
[14:59] <wgrant> Since simultaneous uploads on different machines can violate the file conflict checks.
[15:00] <wgrant> These should probably be solved similarly.
[15:00] <bigjools> yup
[15:01] <bigjools> and I've no real idea how
[15:02] <wgrant> Everywhere else solves them with UNIQUE constraints. But I don't think we can.
[15:03] <wgrant> We need some way to lock an archive.
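The UNIQUE-constraint approach wgrant mentions (and doubts is applicable here) can be sketched with the stdlib sqlite3 module standing in for PostgreSQL; the table and column names are invented for illustration and do not reflect Launchpad's actual schema. The point is that the second identical publication attempt is rejected atomically by the database, with no application-level locking:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE publication (
        archive TEXT NOT NULL,
        package TEXT NOT NULL,
        series  TEXT NOT NULL,
        UNIQUE (archive, package, series)  -- one publication per package/series
    )
""")

def publish(archive, package, series):
    """Attempt a publication; return True on success, False on a duplicate."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO publication VALUES (?, ?, ?)",
                (archive, package, series),
            )
        return True
    except sqlite3.IntegrityError:
        return False

print(publish("ppa", "blah", "maverick"))  # True: the first copy succeeds
print(publish("ppa", "blah", "maverick"))  # False: the double tap is rejected
```

Two concurrent transactions racing on the same INSERT get the same guarantee: one commits, the other fails the constraint, which is exactly the property the check-then-insert code in the copier lacks.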
[15:04] <bigjools> maybe a partial commit
[15:04] <bigjools> as soon as we get into the copy code
[15:04] <bigjools> so it's a race as to which appserver thread gets there first and only one wins
[15:04] <wgrant> Partial commit in webapp == nauseating.
[15:04]  * bigjools shrugs
[15:04] <bigjools> how else?
[15:05] <bigjools> we need a lock
[15:05] <wgrant> We do.
[15:05] <wgrant> But I've no idea how to do that here.
[15:05] <wgrant> Hopefully somebody else does.
[15:05] <bigjools> balls, stub just left
[15:06] <bigjools> hmmm
[15:06] <bigjools> maybe we can use optimistic locking
[15:06] <bigjools> not sure that's the right term actually
[15:07] <bigjools> since the appservers run with isolation level serializable, we can make the txn write to the same thing
[15:07] <wgrant> No.
[15:07] <bigjools> one will fail
[15:07] <wgrant> The appservers are READ COMMITTED :(
[15:07] <bigjools> nope
[15:07] <bigjools> stub told me not 10 minutes ago they're serializable
[15:08] <wgrant> Er.
[15:08] <bigjools> which in PG is actually snapshot isolation
[15:08] <wgrant> Did that change recently?
[15:08] <wgrant> I see you're right now.
[15:09] <wgrant> But I see a lot of READ COMMITTED in old logs.
[15:09] <bigjools> however I have a horrible feeling that the transaction just gets retried if it fails
[15:09] <wgrant> Yep.
[15:09] <bigjools> we could work around that easy I think
[15:09] <bigjools> treat a lock flag on an archive as a copy mutex
[15:10] <bigjools> we just need a way of resetting it later
[15:10] <wgrant> Well, we could probably just set it and unset it.
[15:10] <wgrant> And that would cause a conflict.
[15:10] <bigjools> it won't work if the transaction is retried
[15:11] <wgrant> And while the request would be retried in the case of a conflict, that would mean that the check would be executed, and fail this time.
[15:11] <wgrant> So I think it would work.
[15:11] <bigjools> you'd need to 1) check it's not already set, 2) then set it or fail
[15:11]  * bigjools chuckling at jml's tweet
[15:12] <wgrant> If the start of the copy just sets and unsets it, it should cause a conflict in other simultaneous copies, right?
[15:12] <wgrant> In READ COMMITTED that wouldn't work, since they'd have the same final value.
[15:12] <bigjools> wgrant: in that scenario, the 2nd copy would fail once then work the second time
[15:13] <bigjools> I think
[15:13] <wgrant> bigjools: It wouldn't work the second time: the whole request would be retried, which would mean the duplicate check would fail the second time.
[15:14] <bigjools> ok I'm thinking of it re-running the code for the request, maybe that doesn't happen :)
[15:14] <wgrant> It doesn't just rerun the SQL.
[15:14] <wgrant> That would be insane.
[15:15] <bigjools> eparse.  what?
[15:15] <wgrant> When retrying because of a conflict, the request is restarted.
[15:15] <wgrant> So all the checks are run again.
[15:15] <wgrant> In a fresh transaction.
[15:16] <bigjools> ok
[15:16] <bigjools> so it would work the 2nd time
[15:16] <wgrant> In that the duplicate check would reject the copy.
[15:16] <bigjools> since there would no longer be a conflict
[15:16] <wgrant> Why not?
[15:16] <wgrant> Oh.
[15:17] <wgrant> Conflict as in transaction conflict, not archive file conflict.
[15:17] <bigjools> yes
[15:17] <bigjools> the current code does not prevent archive file conflicts
[15:17] <wgrant> Right.
[15:18] <bigjools> so we need to check the flag before embarking on the copy, then try and reset it later.
[15:18] <bigjools> that's the nasty bit, only the publisher comes to mind, and that's a long delay to prevent a copy :/
[15:19] <bigjools> hmmm if you don't double tap the copy button, the 2nd request after the current one finishes does get rejected, as I recall
[15:19] <wgrant> Why check it? Nothing should leave it set.
[15:19] <wgrant> It does, yes. It will confirm that there's no existing delayed copy.
[15:20] <bigjools> if it's not left checked then what's to stop another transaction crossing?
[15:20] <wgrant> bigjools: If each copy transaction sets and unsets it, they should all conflict.
[15:21] <bigjools> yes and the retry will work, as we said
[15:21] <bigjools> so we need the isolation guard and also a code guard
[15:21] <wgrant> The retry has two possible outcomes:
[15:22] <wgrant>  1) It occurs during the same conflicting transaction. It too will conflict, and be retried again; or
[15:22] <wgrant>  2) It occurs after the conflicting transaction. It will perform the copy checks, notice the file or PackageUpload conflict, and reject the copy.
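The set-and-unset scheme plus wgrant's two retry outcomes can be sketched as a process-local simulation; all class and variable names are invented, and a shared version counter stands in for SERIALIZABLE's first-updater-wins rule on the flag row:

```python
class SerializationConflict(Exception):
    """Stands in for PostgreSQL's 'could not serialize access' error."""

published = set()   # archive state: packages already copied
row_version = [0]   # version counter for the shared "archive flag" row

class CopyTxn:
    """One snapshot-isolated copy transaction."""
    def __init__(self, package):
        self.package = package
        self.snapshot = row_version[0]       # snapshot taken at BEGIN
        self.ok = package not in published   # the duplicate copy check

    def commit(self):
        if not self.ok:
            # The QueueInconsistentStateError-style rejection.
            raise ValueError("%s is already published" % self.package)
        # Set-and-unset the flag row: under SERIALIZABLE, two concurrent
        # writers of the same row conflict even though the final value is
        # unchanged (under READ COMMITTED they would not).
        if row_version[0] != self.snapshot:
            raise SerializationConflict()
        row_version[0] += 1
        published.add(self.package)

def request(package):
    """The webapp retry loop: a conflict restarts the whole request."""
    while True:
        txn = CopyTxn(package)
        try:
            txn.commit()
            return "copied"
        except SerializationConflict:
            continue           # outcome 1: fresh txn, all checks run again
        except ValueError:
            return "rejected"  # outcome 2: the duplicate check fires

# A double tap: two requests begin before either commits.
first, second = CopyTxn("blah"), CopyTxn("blah")
first.commit()             # the winner publishes
try:
    second.commit()        # the loser hits a write conflict...
except SerializationConflict:
    print(request("blah")) # ...and its restarted request prints "rejected"
```

This is only a sketch of the conversation's conclusion, not Launchpad's code: the real version would touch an actual row on the archive inside the copy transaction and rely on the appserver's existing retry machinery.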
[15:23] <bigjools> there is another way
[15:23] <bigjools> we make a copy a "copy request"
[15:24] <bigjools> they need to go offline anyway, they are too complicated to happen in a request
[15:24] <bigjools> then we serialize the copies through a script
[15:24] <wgrant> I don't think they're actually that complicated. The checker can be optimised much further.
[15:24] <bigjools> it's not just the checker
[15:24] <wgrant> And inline feedback is a whole lot better than forcing clients to poll..
[15:25] <bigjools> copying one package with loads of binaries also goes titsup
[15:25] <wgrant> Is that the checker, or the actual copy?
[15:25] <bigjools> I *think* the latter
[15:25] <wgrant> Hmm.
[15:25] <wgrant> Odd.
[15:25] <Ursinha> sinzui, hi, I'm sorry about your qa-ok bugs retagged by mistake -- I've already fixed them back to qa-ok. We're experiencing precondition errors in the API with the script, so it believes the last run was unsuccessful and tries to tag again
[15:25] <bigjools> but I am unsure
[15:25] <wgrant> I guess for a kernel it could get fairly large.
[15:25] <Ursinha> sinzui, I'm working on a fix
[15:26] <bigjools> that's exactly the scenario I am thinking of :)
[15:26] <bigjools> we can't currently copy a kernel reliably
[15:26] <bigjools> unless it's delayed
[15:26] <sinzui> Ursinha, its okay
[15:26] <wgrant> But there's only 8 archs and probably only a few dozen binaries for each.
[15:26] <bigjools> actually, y'know what, thinking about it, delayed copies are requests-to-copy
[15:27] <wgrant> So it's only a few hundred inserts, even if we do it naively.
[15:27] <bigjools> but we pre-checked 'em
[15:28] <wgrant> We should try to optimise before we externalise.
[15:28] <bigjools> agreed
[15:28] <wgrant> Particularly given that the primary motive for delayed copies has evaporated.
[15:28] <bigjools> yeah :/
[15:29] <wgrant> Still, they'll be useful for distro copies.
[15:29] <wgrant> So it wasn't all wasted effort.
[16:38] <SpamapS> lifeless: the 11 second api call was for basically all bugs.
[16:46] <jcsackett> is there a way to do conditional define statements in a div in TAL? sort of "condition:view/attr then define:var attr else define:var some_other_attr" ?
[16:51] <benji> jcsackett: tal:define="var python:view.attr and one_value or another_value"
[16:51] <jcsackett> benji: thanks. you are my new hero.
[16:51] <benji> :)
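benji's `tal:define="var python:view.attr and one_value or another_value"` uses the classic pre-2.5 Python and/or conditional idiom, which TAL `python:` expressions evaluate as ordinary Python. One caveat worth noting (the variable names here are just placeholders): the idiom silently falls through when the "true" value is itself falsy, which a real conditional expression avoids:

```python
# The and/or idiom from the TAL expression, as plain Python:
view_attr = True
var = view_attr and "one_value" or "another_value"
print(var)  # one_value

# Pitfall: if the middle operand is falsy, the idiom yields the third
# operand even though the condition is true:
var = True and "" or "fallback"
print(var)  # fallback -- not what was intended

# A Python 2.5+ conditional expression does the right thing:
var = "" if True else "fallback"
print(repr(var))  # '' -- the empty string, as intended
```

So the and/or trick is fine as long as `one_value` can never be falsy; otherwise the ternary form is safer in the TAL expression too.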
[17:00] <statik> sinzui, your mail about the text = bytea error on launchpad/maverick, there is a branch attached to bug#585704. Should I apply that patch to the version of storm shipping in maverick today?
[17:03] <statik> the patch is here: https://code.edge.launchpad.net/~jamesh/storm/bug-585704/+merge/26472/+preview-diff/+files/preview.diff
[17:03] <sinzui> statik: I believe henninge tried it without success.
[17:05] <statik> odd. it's a one-line patch to storm/variables.py, excluding tests. you would probably also have to fix some of the launchpad tests to make it all work though (U1 had to do the same)
[17:06] <statik> sinzui: i ask because you were "hoping it would be fixed", and I can patch and upload storm to ubuntu if there is a patch required to enable launchpad.
[17:06] <sinzui> statik. yes. excluding tests will not fix Lp
[17:07] <sinzui> statik, Lp dies on the first db request for a lot of pages, such as a person page
[17:12] <statik> sinzui, my point is simply that downgrading psycopg is not a solution, and there is a rapidly closing window where fixes could be uploaded to maverick, and i am ready and willing to upload such a patch to storm if it helps the LP team.
[17:14] <sinzui> statik. I am out of my depth. gary, stub, or benji-lunch may understand the issue better. I do not know for certain there is a fix. I do know that the foundations team controls the versions of storm and psycopg2 and applies patches as needed
[17:14] <statik> ah, ok.
[17:19] <mars> deryck, ping, may have found a nasty regression on the +filebug page JavaScript.  I need some help verifying the problem.
[17:20] <deryck> mars, sure.  What's up?
[17:22] <mars> deryck, was visiting https://bugs.edge.launchpad.net/launchpad-foundations/+filebug, used the "Extra options" to assign a person.  Pressing the 'Find' link does not throw up a JavaScript dialog - instead it takes you to the search page, trashing all form input.
[17:22] <deryck> yup
[17:22] <deryck> nasty and known ;)
[17:22] <mars> :(
[17:23] <mars> ok, thanks deryck
[17:23]  * deryck is looking for bug
[17:23] <deryck> https://bugs.edge.launchpad.net/malone/+bug/513591
[17:24] <deryck> mars, it's on my top 15 bugs list, but I just haven't gotten someone on it yet.
[17:24] <mars> thanks, dupe search didn't turn up that bug
[17:25] <mars> deryck, thanks for letting me know.  I'll stop bugging you now :)
[17:25] <deryck> np at all
[18:04] <Ursinha> rockstar, hi :) could you triage bug 639785, please? I've seen those on lpnet for a few days now
[18:04] <rockstar> Ursinha, since you asked nicely.
[18:05] <rockstar> Also, wtf is mup?
[18:05] <Ursinha> it's dead
[18:05] <Ursinha> _mup_, hi
[18:09] <Ursinha> rockstar, thanks :)
[18:11]  * gmb EoDs. Night folks.
[18:57] <rockstar> jml, can you point me to an example LEP that you find to be the best example of how you envision a LEP?
[19:36] <lifeless> Ursinha: ah cool
[19:57] <lifeless> jcsackett: can you QA https://bugs.edge.launchpad.net/launchpad-registry/+bug/623408 please?
[20:15] <lifeless> deryck: hiya
[20:16] <lifeless> abentley: ping
[20:16] <abentley> lifeless, pong
[20:16] <jcsackett> lifeless: shoot, yes. sorry. qa'ed them in fact a few days ago off the kanban and forgot to update LP.
[20:16] <deryck> hi lifeless
[20:16] <jcsackett> lifeless: done.
[20:16] <lifeless> abentley: just seeking an answer on the (probably unclear) mail I sent about the builder patch in stable & CPing it
[20:16] <lifeless> jcsackett: thanks!
[20:17] <jcsackett> lifeless: you're welcome. :-)
[20:17] <lifeless> deryck: so, I'm thinking to do bugtrackerset, unless you have it in-flight at the moment
[20:18] <deryck> lifeless, oh, no.  Was actually just about to turn to it in 30 minutes or so.  Your timing is good.
[20:18] <lifeless> deryck: nono, don't let me stop you :)
[20:19] <lifeless> deryck: just when you sign off, commit the in-progress work and push it; then tag me.
[20:19] <deryck> ah, sure.  I can do that.
[20:19] <deryck> thanks lifeless!
[20:20] <abentley> lifeless, I suppose it probably is.
[20:20] <lifeless> abentley: thanks
[20:20] <lifeless> abentley: we've got a bunch of OOPS and timeout fixes we can pull across https://devpad.canonical.com/~lpqateam/qa_reports/launchpad-stable-deployment.txt
[20:20] <abentley> lifeless, it does require a new bzr-builder, and it requires it in sourcedeps.conf
[20:21] <lifeless> it's easier to merge rather than CP when possible
[20:21] <lifeless> abentley: ah cool.
[20:21] <abentley> lifeless, I'm not sure what you hope to gain from it.
[20:22] <lifeless> if 'it' is your patch, bringing it across lets me do a branch merge
[20:22] <abentley> lifeless, it's just pushing responsibility for safety checks into a better place.
[20:22] <lifeless> rather than a partial merge, when preparing the new revision for production-stable
[20:23] <lifeless> abentley: cool, it was tagged 'orphaned' was all, so I wanted to be sure it was ok :)
[20:23] <lifeless> categorised as orphaned, I should say, not tagged.
[20:23] <abentley> lifeless, It's "orphaned" because it doesn't fix any bugs.
[20:23] <lifeless> yeah, which is cool
[20:23] <lifeless> as I say, I wanted to do a proper merge and was just crossing t's and dotting i's.
[20:23] <abentley> I did not bother creating a bug that the code was not in the most optimal place.
[20:24] <lifeless> exactly, I wouldn't either.
[20:24] <abentley> lifeless, it sounds like a rather aggressive approach to CPing to me.
[20:24] <lifeless> abentley: it's precisely what we'll be doing once we have the production schema qa environment anyway
[20:25] <lifeless> abentley: it can be done now but it needs a little more effort, which I'm delighted to put in, given the rewards
[20:25] <abentley> lifeless, that also worries me.
[20:25] <lifeless> abentley: can you enlarge on that?
[20:26] <abentley> lifeless, merely sticking UI stuff behind feature  flags doesn't seem like a very good way to avoid introducing regressions.
[20:26] <abentley> lifeless, I'm working on a lot of changes to put incremental diffs in merge proposals, and there's been a lot of refactoring.
[20:26] <lifeless> abentley: we're also QAing it before deploying it.
[20:27] <lifeless> abentley: I'm not CPing anything that hasn't been QAd.
[20:27] <abentley> lifeless, One can only QA so much.  I would rather give the changes time to cook, too.
[20:28] <lifeless> abentley: we're giving it to users uncooked at the moment
[20:28] <lifeless> and unqaed
[20:28] <abentley> lifeless, not really.  It stays on staging for half a cycle.
[20:28] <lifeless> abentley: if it's in db-devel, yes.
[20:29] <lifeless> abentley: anything in devel, which is all that's in scope for now, hits users the next day after buildbot blesses it. (At the moment; this will change with RFWTAD)
[20:31] <abentley> lifeless, some users, not all users.
[20:32] <abentley> lifeless, I'd rather break edge than break both edge and production.
[20:32] <lifeless> we appear to break both edge and production once a cycle or two at the moment due to the skew between edge and production.
[20:35] <abentley> lifeless, I wonder how that compares to the number of times we break edge alone?
[20:35] <lifeless> abentley: so do I
[20:35] <lifeless> abentley: I also wonder how many of the things that break edge we would pickup if we qa'd
[20:36] <abentley> lifeless, we often do qa on edge.
[20:36] <abentley> lifeless, feature flags have been presented as equivalently safe to the edge/production split.  I'm just becoming acutely aware that they are not.
[20:37] <abentley> lifeless, you can argue that on balance the risks are reduced or increased, but it's definitely a big change.
[20:37] <lifeless> abentley: Its a huge change
[20:37] <lifeless> abentley: I think it has massive benefits and that the risks are approximately the same.
[20:39] <abentley> lifeless, I think the risks of breaking production are increased.
[20:40] <lifeless> abentley: that's true; the things being used to mitigate that are smaller rollouts (more granular rollbacks), manually initiated deployments (a human paying attention for issues after the deploy), and no-downtime rollouts (lets us roll back without scheduling stuff if there is an issue)
[20:42] <abentley> lifeless, anyhow the reason I didn't reply to your email is because I haven't landed anything worth CPing, so I thought it was for someone else.
[20:43] <lifeless> abentley: no worries
[20:43] <lifeless> abentley: I figured I was unclear or confusing :)
[20:43] <abentley> lifeless, definitely some context on the change would have helped.  I don't keep revnos in my head.
[20:44] <lifeless> hah! yes.
[20:46] <lifeless> sinzui: hi, bug https://bugs.edge.launchpad.net/launchpad-registry/+bug/640700 - seeking your thoughts
[21:11] <sinzui> lifeless, I think half the oopses were me after doko made me an admin of openjdk. I was just using the form normally. The so-called problem message changed twice over the 90 minutes I was using the feature
[21:11] <lifeless> sinzui: and you were seeing it batched ?
[21:11] <sinzui> yes
[21:12] <sinzui> Skipping to the last batch, after doing a first batch, seems to introduce the problem.
[21:14] <sinzui> lifeless,  the batches are essentially sequential by date. I realised that the problem message (shown as a message id/timestamp) in the oops was from the start of the batches
[21:15] <lifeless> moderate uses currentBatch
[21:20] <lifeless> sinzui: so i'm thinking there are two things we need to do.
[21:20] <lifeless> we need to use message id as the selector in the form.
[21:20] <lifeless> I'm not sure how one does that with the way forms are wired up in lp
[21:21] <sinzui> lifeless, I think looping in the action is brittle
[21:21] <lifeless> we also need to handle receiving a form telling us to do something nonsensical, and show those (1..N) messages as unmoderatable rather than cancel the entire thing.
[21:21] <lifeless> sinzui: I agree
[21:22] <lifeless> sinzui: in the sense that we bail out if anything is wrong, rather than doing what we can.
[21:22] <sinzui> we loop over the messages assuming the batch has not mutated since the form was rendered. Maybe we should iterate over what is submitted and look for the messages another way
[21:22] <lifeless> oh
[21:22] <lifeless> also
[21:22] <lifeless> the batch may not match the backing data
[21:22] <lifeless> yes
[21:22] <lifeless> so we need to query for only the messages the form refers to
[21:22] <sinzui> oh, in the 90 minutes I was testing, surely we got spam in the queue
[21:22] <sinzui> I agree
[21:22] <lifeless> and handle those
[21:23] <lifeless> sinzui: can you give me some tips on getting the message id (or some other primary key) in the form data?
[21:23] <sinzui> well I have a few hours more experience than you...
[21:23] <deryck> bryceh, hey.  Let's wait another day before we do another Bugzilla tracker.  I see a backlog for gnome-bugs.  And I want to make sure checkwatches isn't overloaded.
[21:23] <lifeless> sinzui: hah!
[21:24] <lifeless> sinzui: ok, I will look at it during my day
[21:24] <lifeless> thumper: is https://bugs.edge.launchpad.net/launchpad-code/+bug/633758 qa'd?
[21:24] <bryceh>  deryck, okie doke
[21:24] <sinzui> MessageApproval (the message in holds) has a foreign key to the Message and the MailingList. We can query the table for ids
[21:24] <sinzui> ^ lifeless
[21:25] <deryck> lifeless, I'm being too optimistic.  I'm not going to get to this bugtracker set stuff today.  Unless later tonight for me.
[21:25] <bryceh> deryck, might let some of the grousing about getting emails die down a bit
[21:25] <deryck> yeah :-)
[21:25] <lifeless> deryck: de nada
[21:25] <lifeless> deryck: Being fancy free as I am, I may.
[21:25] <lifeless> deryck: or I may not, I'm trying to organise a CP too
[21:25] <deryck> right, np
[21:25] <lifeless> hmm
[21:25] <lifeless> actually, friday is a silly day for that
[21:26] <lifeless> I'll agitate folk to QA stuff
[21:26] <lifeless> and CP on monday
[21:35] <bryceh> deryck, btw, I assume there's no way we can temporarily suppress the emails from the checkwatches results, or is there?
[21:35] <deryck> bryceh, not really.
[21:36] <deryck> hmmm, well unless there is a flag to the script.  Let me see....
[21:38] <deryck> bryceh, no, sorry
[21:38] <deryck> bryceh, maybe after Mozilla we should hold off unless someone asks about a tracker?
[21:39] <bryceh> ok cool, I'd be embarrassed if there had been an easy way to prevent the emails :-)
[21:40] <deryck> heh
[21:51] <lifeless> bbiab, doctor appt
[21:54] <wallyworld_> morning
[21:54] <jelmer> 'morning wallyworld_
[21:55] <wallyworld_> hi jelmer
[22:00] <abentley> thumper, are we standupping at 5:15 or 6:00?
[22:02] <wallyworld_> abentley: i've logged in now so we can do it early if everyone is here
[22:31] <mwhudson> morning
[23:25] <mwhudson> Launchpad encountered an error during the following operation: generating the diff for a merge proposal.  The source branch has pending writes.
[23:25] <mwhudson> it doesn't say which branch :)
[23:31] <thumper> :(
[23:31] <thumper> mwhudson: file a bug :)
[23:32] <mwhudson> thumper: ok
[23:38] <mwhudson> hm, https://bugs.edge.launchpad.net/launchpad-code/+bug/612171 is somewhat related