[00:00] <sinzui> wgrant, I do not know. mars and gary will know
[00:08] <wgrant> Can someone mentor StevenK's code* of https://code.edge.launchpad.net/~wgrant/launchpad/bug-655648-a-f-maverick/+merge/37820?
[00:12] <lifeless> wgrant: yes we can switch it off
[00:30] <lifeless> spm: deathrow failing to run?
[00:30] <spm> what do the logs say?
[00:30] <wgrant> Erk.
[00:30] <wgrant> When did it start to fail?
[00:30] <wgrant> Is it complaining that it can't remove files because LFCs are missing?
[00:31] <lifeless> wgrant: one day ago
[00:31] <wgrant> One day ago, or 13 hours ago?
[00:31] <lifeless> spm: The script 'process-death-row' didn't run on 'cocoplum' between 2010-10-07 22:07:04 and 2010-10-07 23:07:04 (last seen 2010-10-06 07:13:08.625020)
[00:32] <wgrant> Oh, cocoplum.
[00:32] <lifeless> wgrant: first error was The script 'process-death-row' didn't run on 'cocoplum' between 2010-10-06 15:07:04 and 2010-10-06 16:07:04 (last seen 2010-10-06 07:13:08.625020)
[00:32] <lifeless> and
[00:32] <lifeless> The script 'process-death-row' didn't run on 'germanium' between 2010-10-06 11:07:03 and 2010-10-06 13:07:03 (last seen 2010-10-06 07:33:51.715570)
[00:32] <wgrant> Is germanium's OK?
[00:32] <spm> lifeless: no, the logs from cocoplum for that script. they'll be on devpad
[00:32] <wgrant> Crap.
[00:33] <wgrant> So it predates the DB surgery last night.
[00:33] <wgrant> That's a good thing.
[00:36] <lifeless> spm: where would I look?
[00:37] <lifeless> wgrant: publish queue or pload ?
[00:38] <spm> devpad? heh, what I usually do is 'locate death' and infer from there :-)
[00:38] <wgrant> lifeless: pload?
[00:38] <lifeless> 2010-10-07 20:33:04 INFO    Creating lockfile: /var/lock/launchpad-process-death-row.lock
[00:38] <lifeless> 2010-10-07 20:45:06 ERROR   Unexpected exception while doing death-row unpublish
[00:38] <lifeless>  -> http://launchpadlibrarian.net/57251973/rbZP27X6hCTbVYqO1d4X8iLXCSg.txt (terminating connection due to administrator command
[00:38] <lifeless> server closed the connection unexpectedly
[00:38] <lifeless>         This probably means the server terminated abnormally
[00:39] <lifeless>         before or while processing the request.
[00:39] <lifeless> )
[00:39] <wgrant> Well, that is awesome.
[00:39] <lifeless> spm: there are death-row entries in the log file, what does the 'failed to run' stuff look for ?
[00:39] <lifeless> wgrant: it continues:
[00:39] <lifeless> )
[00:39] <lifeless> 2010-10-07 20:45:15 INFO    Processing http://ppa.launchpad.net/soren/ppa/ubuntu
[00:39] <lifeless> 2010-10-07 21:03:05 INFO    Creating lockfile: /var/lock/launchpad-process-death-row.lock
[00:40] <wgrant> So it mysteriously died without saying anything?
[00:39] <spm> that last line is a continual source of useless information across a bunch of scripts. it's not creating a lock file; it's starting to try and create a lock file, and may or may not have succeeded.
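(Editorial aside: the misleading log line spm complains about is easy to avoid by logging only after the lock is actually held. A minimal sketch, assuming a simple lockfile scheme; the function name and messages are hypothetical, not Launchpad's real locking code:)

```python
import os

def acquire_lockfile(path):
    """Try to create a lockfile atomically; claim success only afterwards.

    Hypothetical sketch: log 'attempting' before the create, and
    'created' only once the create has actually succeeded.
    """
    print("Attempting to create lockfile: %s" % path)
    try:
        # O_CREAT | O_EXCL makes the existence check and the creation a
        # single atomic step, so two processes cannot both "win".
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        print("Lockfile already held: %s" % path)
        return None
    print("Created lockfile: %s" % path)
    return fd
```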
[00:43] <spm> it looks to be nicely hung, just awaiting being drawn and quartered.
[00:43] <spm> Process 32346 attached - interrupt to quit
[00:43] <spm> restart_syscall(<... resuming interrupted call ...>
[00:43] <wgrant> Is it running with maximum verbosity?
[00:43] <spm> -vv
[00:44] <wgrant> Does that show DEBUG? I forget.
[00:44] <spm> i've killed cocoplum
[00:44] <spm> yes
[00:44] <wgrant> Where was cocoplum's hanging?
[00:44] <wgrant> (in the log)
[00:48] <spm> this is where it gets tricky to tell
[00:49] <spm> but if I hazard a guess: 2010-10-07 22:10:11 DEBUG   Unpublishing death row for Primary Archive for Ubuntu.
[00:49] <wgrant> Well, that's handy.
[00:49] <wgrant> Can you set that LP_DEBUG_SQL thingy and run it again?
[00:49] <wgrant> Since it's not being very verbose...
[00:53] <lifeless> spm: does nagios alert on this condition?
[00:53] <spm> lifeless: no; that's what scriptactivity is for
[00:54] <lifeless> perhaps we should feed that into nagios
[00:54] <spm> given the number of apparently false alarms we're getting from scriptactivity; I'd be unhappy at doing that atm :-(
[00:55] <lifeless> ok
[00:55] <lifeless> are there any false alarms without bugs?
[00:56] <spm> basically we've been doing a major ... redo(?) of nagios alerts the past few months. so critical alerts really are critical; with a special subset of those being ZOMG critical. everything else is warning; with the idea that they get manually scanned, vs an actual alert. "Red Fatigue" is a real problem.
[00:57] <lifeless> I get that
[00:57] <lifeless> I'm asking if the false alarms have had bugs created for them
[00:57] <wgrant> lucid_db_lp hates me.
[00:59] <wgrant> ...
[00:59] <wgrant> Somehow production-devel got merged into db-devel with my branch. Despite not showing up in the MP diff.
[01:00] <lifeless> wgrant: heh; did you  build your branch on my cp branch ?
[01:02] <wgrant> Yes, but then I replayed on top of db-devel. Apparently that created a merge from production-devel.
[01:02] <wgrant> And somehow the diff doesn't include the merged changes.
[01:04] <lifeless> what rev
[01:05] <lifeless> of db-devel
[01:05] <wgrant> 9875
[01:05] <wgrant> https://code.edge.launchpad.net/~wgrant/launchpad/bug-655614-disabled-arch-indices/+merge/37849
[01:10] <spm> lifeless: (sorry distracted a bit there) I'm not aware of bugs being filed; I'd assume that those who are responsible for the scripts in question are reading the logs on devpad for same and filing bugs from that? :-)
[01:10] <lifeless> wgrant: the diff doesn't update once merged.
[01:10] <lifeless> spm: I make no such assumption.
[01:11] <wgrant> lifeless: I know.
[01:12] <wgrant> lifeless: So the diff should show the result of the merge.
[01:12] <wgrant> But it ignores the fact that there's a production-devel in there.
[01:13] <lifeless> spm: can you start either filing drive-by bugs when you get a false alert - any false alert, or failing that at least email me and I'll file the bugs.
[01:15] <lifeless> wgrant: the content of the commit on db-devel is definitely fine.
[01:16] <wgrant> lifeless: bzr cdiff -c9875 doesn't show a windmill change in versions.cfg?
[01:17] <lifeless> wgrant: well it has lib/devscripts/ec2test/tests/test_remote.py changed
[01:18] <wgrant> Right, that's another one. Note how that doesn't show up in the MP diff.
[01:18] <lifeless> wgrant: and versions.cfg
[01:18] <lifeless> -# r1544 of lp:windmill (the tip revision at the time of packaging).
[01:18] <lifeless> +# r1440 of lp:~bjornt/windmill/1.3-lp. It includes our patches to make test
[01:18] <spm> lifeless: heh, by false alerts; i mean that some of these generate errors semi frequently; and those failure alerts go to the LP list; and no one in turns says "hey this is a problem! the logs say X, can a losa investigate Y; I've filed Bug Z to track this" ergo, false alarm.
[01:18] <mwhudson> like oops-prune, for example?
[01:18] <spm> right
[01:19] <spm> tho that one did exactly as I described. diogo did chase it up and file a bug.
[01:19] <mwhudson> although for that one, i thought it was a conscious decision that it didn't warrant a CP
[01:19] <wgrant> lifeless: So why isn't that in the MP diff?
[01:19] <mwhudson> so the monitoring arguably should have been disabled
[01:19] <lifeless> spm: ergo a bug should be filed that it should not generate that error
[01:20] <wgrant> The MP diff isn't very useful if it doesn't show the real diff.
[01:20] <spm> otherwise we're just middlemanning the information for developers. we as losas are going to be looking at the exact same info that the developers already have access to. so we're duplicating for no reason.
[01:20] <lifeless> spm: if the bug gets marked 'invalid' clearly it *is* an error and *should* be addressed.
[01:21] <spm> mwhudson: yeah. that's a trap. disabling is very easy to forget to re-enable.
[01:22] <spm> so you end up with convoluted loops like we have with the staging restores. we have a text lock file that stops staging alerts during updates; but have another alert that checks for the age of that lock. meta alerts ftw.
[01:23] <mwhudson> at '+ 1 month' mail ...
[01:23] <lifeless> spm: there is a difference in focus between day to day reactive and feature work
[01:23] <lifeless> spm: I'm not advocating lots of handoffs; I'm saying that operational issues should:
[01:24] <lifeless>  - trap to the monitoring system [nagios]
[01:24] <lifeless>  - not trap if nothing is wrong
[01:24] <lifeless> and that when you see a trap, if its ignorable, we should change things to not trap.
[01:24] <lifeless> I appreciate that the scriptactivity stuff isn't hooked to nagios, but it would be better if it was, and if we can't because its too noisy, we need to fix that.
[01:25] <lifeless> and the losas are best placed to judge.
[01:25] <spm> right. And I'm saying we're so flat out reacting to ignorable alerts, or puppeting requests where others already have all the info they require, that we have approx zero time to focus on all the feature requests that are asked of us.
[01:26] <lifeless> spm: so the way to dig out of that is to fix things one at a time; ignorable alerts seem straightforward. I won't know if it's one of those unless you or someone else tells me.
[01:27] <spm> it's not an us vs them situation here. its "our" system, and as we move more to regular rollouts etc, devs NEED to start being more aware of operational stuff happening. they can't shunt it aside as "oh I'm working on a feature"
[01:27] <lifeless> spm: I agree
[01:27] <lifeless> spm: who *should* I be asking to ensure that ignorable mails are turned into bug reports?
[01:28] <spm> so personal take - the scriptactivity reports, which are really operationally lite compared to alerts, are the first step: devs have to see an operational area and fix it, so it becomes less noisy and more useful for them.
[01:28] <spm> scriptactivity is purely under the control of dev's.
[01:29] <spm> our involvement there is around ensuring things are added to be checked for failure. which is itself a failure state :-)
[01:31] <spm> so look at cocoplum and germanium with the process-death-row alerts.  they're generating emails every hour. is this appropriate? it needs fixing, sure, manual intervention was required. But how long can it remain broken before we need to look? Were the logs recorded sufficient to identify what was the problem?
[01:32] <spm> is it a critical service like, eg the branchscanner which is time sensitive and IMHO should be nagios'ed; or more "important, not critical"
[01:33] <spm> and if so; how can we modify the logic in the activity report so we get timely alerts without dying under noise
[01:34] <spm> ideally, every one of those scriptactivity responses would get a personal ACK, "this is noise, bug X; this is a problem losas do Y"
[01:35] <lifeless> spm: so
[01:38] <lifeless> spm: things like nagios understand the idea of edge/level triggering of notifications
[01:38] <lifeless> spm: reproducing that in other tools seems pointless
[01:45] <spm> hrm. sorta? In that you have 4 states: unknown. critical. warning. ok. And nagios is geared to appropriately deal with that. How those states are defined, is not a nagios thing; we have bunches of scripts to get these for funkier corners of losaville. I put SA into the same boat.
[01:46] <spm> but who gets the alerts, and how timely they are is a BIG component here.
[01:47] <spm> or rephrase. Critical bug reports. should they be tied to nagios? (going for absurd by way of demonstratin')
[01:48] <spm> so a critical security bug gets an sms sent to The Developer On Duty; and if they don't ACK within 10 minutes it goes to a team lead; and if they don't ACK in 10, it goes to francis.
[01:49] <spm> if X breaking is more a case of "we can wait a few hours, or days" then SMS alerts is the wrong way; and just get the script in question to send a more descriptive and full email to a generic list; nagios is overengineered and heavy weight for this case.
[01:50] <spm> ie. it's friday. a week into our shift to daylight savings, and I personally still haven't adjusted the hours I get SMS alerts. multiply that by more people? urg.
[01:54] <lifeless> spm: there's a pretty strong argument we should be running a 24x7 shop for security issues
[01:54] <lifeless> spm: so I don't think thats particularly absurd
[01:55] <lifeless> spm: I also get and agree that we shouldn't be alerting everyone on every fault, but again, reimplementing filtering and routing of that in an adhoc lp specific script seems unoptimal
[01:59] <spm> as in adjusting how often we send out alerts and who they get escalated to etc? shrug, 6 of 1. for security alerts; if you really want to go that way, sure. nagios would be best; but if we just want to send a simple email; it's overkill. The logic on what is/not an alert will still need to be written. that part must happen.
[01:59] <lifeless> yes
[02:00] <lifeless> and my first comment was that when that logic is wrong its a bug, and it needs to be filed.
[02:00] <lifeless> which I stand by.
[02:00] <spm> at the moment that logic is (simplistically) in the scriptactivity script; so that's the obvious place to fix it.
[02:00] <lifeless> sure, I made no comment at the start of this conversation about where to fix it.
[02:01] <spm> it has been a wide ranging discussion full of interesting and pertinent ideas and concepts. :-)
[02:02] <lifeless> hah
[02:02] <lifeless> :)
[02:10] <spm> but yes, back to the start. atm, for better or worse, and for a while longer; reports out of scriptactivity are going to still go to the LP list. If those messages atm are noisy, or not as helpful, or indicative of bugs needing to be filed, then my 2c is that the recipients - the LP list - should be filing bugs about them. if we want losas to do bug filing; then the obvious fix is send the results to losas@. Which'd get me crucified by the others, but
[02:10] <spm> anyway.
[02:11] <spm> or phrased another way - losas are responsible for real time alerts; lp-devs are responsible for not-realtime alerts.
[02:11] <spm> grotesquely simplified.
[02:12] <lifeless> spm: scriptactivity failures are realtime IMNSHO, thats also a massive simplification
[02:12] <spm> I'm not sure they are.
[02:12] <spm> some of those scripts run every 5 minutes; but we only care about alerts after 12+ hours.
[02:12] <spm> checkwatches springs to mind as culprit #1
[02:14] <spm> the poorly named nightly.sh, has a whole bunch of scripts in it. some that take 5+ hours to run. an additional hour or 3 on waiting for a success/fail result isn't urgent.
[02:14] <lifeless> so simplistically, scriptactivity should only be looking for a run every 12 hours
[02:14] <lifeless> theres a mismatch there.
[02:14] <spm> made worse by the fact that the scripts in nightly.sh are serial, and get bounced around start/stop times by other scripts being fast or slow
[02:16] <spm> hrm. possibly comms misunderstanding; we have 22 separately timed runs (crontab entries) for scriptactivity; which in turn measures some 22 x ?? scripts.
[02:16] <spm> eg: nightly.sh:
[02:16] <spm> 0 5 * * * $LP_PY /srv/launchpad.net/production/launchpad/scripts/script-monitor.py -U statistician 1440 loganberry:karma-update loganberry:allocate-revision-karma loganberry:launchpad-stats loganberry:expire-questions loganberry:checkwatches loganberry:productreleasefinder loganberry:update-cache loganberry:launchpad-targetnamecacheupdater loganberry:flag-expired-memberships loganberry:update-personal-standing
[02:17] <spm> ie. check for a successful run of those scripts on those server(s) once a day at 5am, for activity in the last 24 hours.
[02:18] <spm> which gets busted nicely; if said script in total takes 24.1 hours to run.
[02:19] <lifeless> well shannon has something to say about that
[02:20] <lifeless> have to sample at twice the frequency you want to measure
[02:23] <spm> I think my head imploded trying to appreciate that; so I'll just go for the wise 'nod'
[02:23] <lifeless> so we need 12 hourly probes
[02:23] <lifeless> looking for a run in the last 24
[02:26] <lifeless> actually thats not quite right
[02:27] <lifeless> we want 12 hourly probes and an error message if we have two successive probes with no activity, we also need that when those long scripts take forever, we a) consider them as one group and b) report on activity as it goes
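(Editorial aside: lifeless's proposal - probe every 12 hours, alert only after two successive probes see no activity - is a level-triggered check. A minimal sketch under those assumptions; the function name and history format are hypothetical, not the real script-monitor logic:)

```python
def should_alert(probe_history, threshold=2):
    """Level-triggered alerting: fire only after `threshold` successive
    probes saw no completed run.

    probe_history is a list of booleans, oldest first; True means that
    probe found script activity in its window.
    """
    streak = 0
    for saw_activity in probe_history:
        if saw_activity:
            streak = 0  # any activity resets the failure streak
        else:
            streak += 1
    return streak >= threshold
```

With 12-hourly probes looking back 24 hours, a single slow run straddling one probe window does not page anyone; two empty probes in a row (24+ hours of silence) does.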
[02:39] <LPCIBot> Project devel build (100): STILL FAILING in 3 hr 28 min: https://hudson.wedontsleep.org/job/devel/100/
[02:39] <LPCIBot> Launchpad Patch Queue Manager: [r=EdwinGrubbs][ui=none][bug 652007] Switch mlist-sync to use
[02:39] <LPCIBot> LaunchpadScript as its base.
[02:39] <_mup_> Bug #652007: scripts/mlist-sync.py should use LaunchpadScript as a base <mailing-lists> <oops> <Launchpad Registry:Fix Committed by sinzui> <https://launchpad.net/bugs/652007>
[02:49] <wgrant> Hm.
[02:49] <wgrant> bin/kill-test-services is broken.
[02:49] <wgrant> The librarian stuff has been changed.
[02:50] <lifeless> ah fnord
[02:50] <lifeless> a) its untested
[02:51] <lifeless> b) its untested
[02:51] <lifeless> c) we should really move to the model I wrote up :)
[03:22] <LPCIBot> Project db-devel build (59): STILL FAILING in 3 hr 41 min: https://hudson.wedontsleep.org/job/db-devel/59/
[03:22] <LPCIBot> Launchpad Patch Queue Manager: [testfix][rs=sinzui][ui=none] Resolved adapter conflicts in
[03:22] <LPCIBot> configure.zcml.
[03:33] <sinzui> spm: I want to move productreleasefinder to a separate script or run it in parallel. Half of those scripts do not need exclusive use of the database
[04:17] <bac> spiv:  the test suite has some known spurious tests and i think your branch got hit by them all.  :(
[04:18] <spiv> bac: I'm a lucky guy!
[04:19] <bac> yep.  so i'm going to wade through the discussion and see where we are.  will land after things return to normalcy but will probably miss today's PQM deadline.  no big deal since this isn't a rush by any means.
[04:21] <spiv> No, clearly not :)
[04:35] <spm> sinzui: sure. probably as part of the current release? or maybe as a CP or something? I guess the only Q I'd have - did you have a particular time you wanted it to run within; or just leave it to us to try and pick a "not as busy" time?
[04:51] <LPCIBot> Project db-devel build (60): STILL FAILING in 1 hr 29 min: https://hudson.wedontsleep.org/job/db-devel/60/
[04:51] <LPCIBot> * Launchpad Patch Queue Manager: [rs=buildbot-poller] automatic merge from stable. Revisions: 11689,
[04:51] <LPCIBot> 11690 included.
[04:51] <LPCIBot> * Launchpad Patch Queue Manager: [release-critical=edwingrubbs][rs=mwhudson][ui=none][no-qa] Rollback dropping of BranchRevision.id
[04:52] <lifeless> spm: deadrows still dead
[04:54] <spm> i think that's a lie; the logs suggest stuff is happening.
[04:54] <lifeless> so the scriptactivity check for it is wrong?
[04:54] <spm> overly aggressive *atm*; probably
[04:54] <lifeless> I'll file a bug
[04:54] <spm> dunno if a spike in actions is causing it to appear to fail, or something funky
[04:55] <lifeless> if the script is running
[04:55] <lifeless> and scriptactivity is complaining, then our detection logic is flawed.
[04:55] <spm> (last seen 2010-10-06 07:33:51.715570) <== that worries me
[04:56] <spm> I know we've seen one script previously (don't recall which) that was working, to a point; and would then die. so stuff was happening but the magic record "I have finished successfully!" wasn't getting written.
[04:57] <spm> StevenK: ^^ can you offer any insights as to what/where/why we can look for what's up?
[04:59] <mwhudson> you can also have the fun where the outage causes so much work to build up that it takes longer than the check period to complete
[05:00] <mwhudson> really scriptactivity is pretty awful
[05:05] <spm> https://pastebin.canonical.com/38362/
[05:05] <spm> vis last current entry: 2010-10-08 02:58:28 INFO    Processing http://ppa.launchpad.net/mozillateam/ppa/ubuntu <== germanium
[05:05] <spm> so stuff is happening; but we're not getting completion.
[05:06] <spm> *successful* completion.
[05:06] <mwhudson> can you see when the currently running process started?
[05:06] <spm> ~ 3.5 hours ago
[05:09] <mwhudson> ah
[05:09] <mwhudson> so i think my guess seems likely
[05:09] <spm> hrm. maybe. I suspect there's more to it than that tho.
[05:10] <spm> the current process, via the log, has been "stuck" for an hour.
[05:11] <spm> ugh. there's whole bunches of oopses on germanium which aren't (apparently) on lp-oops
[05:13] <spm> bah. the oops themselves are on devpad tho
[05:13] <spm> eg /x/launchpad.net-logs/production/germanium/lp_publish/2010-10-06/85613.PPAPUBLISH854
[05:25] <StevenK> I wonder if they all are PoolOverwrite
[05:40] <spm> appear to be
[05:40] <spm> based on the random 4 I've catted
[06:04] <LPCIBot> Project devel build (101): STILL FAILING in 3 hr 25 min: https://hudson.wedontsleep.org/job/devel/101/
[06:04] <LPCIBot> Launchpad Patch Queue Manager: [r=danilo][ui=none][bug=652256] Adds a configuration link for
[06:04] <LPCIBot> translations_usage on the product translations front page.
[06:21] <LPCIBot> Project db-devel build (61): STILL FAILING in 17 min: https://hudson.wedontsleep.org/job/db-devel/61/
[06:30] <wgrant> StevenK, spm: deathrow is failing with PoolOverwriteErrors?
[06:31] <spm> wgrant: not sure. something is failing with those errors; the oops isn't actually recording what script is doing the failing.
[06:31] <wgrant> Ah, that would be p-d, then.
[06:32] <wgrant> publish-distro, that is.
[06:32] <wgrant> Although those shouldn't be happening any more.
[06:33]  * StevenK kicks Firefox and Thunderbird
[06:33] <wgrant> StevenK: Chromium! Evolution!
[06:34] <StevenK> wgrant: Evolution doesn't like IMAP mailboxes with >10,000 messages for me
[06:34] <wgrant> StevenK: Even in Maverick?
[06:34] <wgrant> It was a bit bad in Lucid, but works OK in Maverick.
[06:34] <StevenK> Evolution still makes me twitch
[06:34] <wgrant> :(
[06:37] <lifeless> StevenK: Ever hacked on it ?
[06:39] <StevenK> lifeless: On Evolution? Nope
[06:40] <lifeless> do so, *then* you'll have reason to twitch
[06:40] <StevenK> Don't make me look at the bad code
[06:40] <lifeless> its not 'bad'
[06:42] <spm> isn't this the place someone says "patches accepted"?
[06:42] <lifeless> lies
[06:42] <spm> *where* someone says. gah
[06:48] <wgrant> StevenK: i-d-s copies packagesets now, doesn't it?
[06:49] <StevenK> wgrant: Yes
[06:50] <StevenK> wgrant: I have a branch to limit which ones are copied, too
[06:50] <wgrant> StevenK: NewReleaseCycleProcess still says to copy them manually.
[06:50] <wgrant> And it suggests that -a will be needed to exclude disabled archs.
[06:50] <wgrant> But I think that was CPd a couple of days back.
[06:50] <StevenK> Yes, I know, I need to talk to Colin again
[06:51] <wgrant> OK, great.
[07:50]  * wgrant gives the publisher a couple of good stabs.
[07:50] <StevenK> Which tentacle are you trying to cut off?
[07:51] <wgrant> Refactoring the crap I was dealing with last night. Mostly Release generation.
[07:51] <wgrant> It has this awesome reorder_components function, which looks safe enough (it creates a new list).
[07:51] <wgrant> But of course it also calls .remove on the old list.
[07:52] <wgrant> So it looks like it's going to be non-destructive to its input, but then completely obliterates it.
[07:53] <wgrant> Hm, that's actually a bit interesting. It will have been mutating the publisher's internal config.
[07:53] <wgrant> Surprising that it works.
[07:56] <wgrant> Even better is that the tests don't care.
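(Editorial aside: the reorder_components trap wgrant describes - a function that builds and returns a new list, but also calls .remove on its input - can be sketched roughly like this. Names and the exact leftover handling are hypothetical, not Launchpad's actual publisher code:)

```python
def reorder_components_buggy(components, preferred_order):
    """Looks non-destructive (it returns a new list) but drains its input."""
    ordered = []
    for name in preferred_order:
        if name in components:
            ordered.append(name)
            components.remove(name)  # mutates the caller's list!
    return ordered + components      # leftovers kept in original order

def reorder_components_safe(components, preferred_order):
    """Non-destructive version: work on a copy, leave the input alone."""
    remaining = list(components)
    ordered = [n for n in preferred_order if n in remaining]
    for n in ordered:
        remaining.remove(n)
    return ordered + remaining
```

If the caller's list is the publisher's internal config, the buggy version silently rewrites that config as a side effect, which is exactly why "it looks safe enough" is so misleading here.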
[09:03] <wgrant> bigjools: Morning. dogfood looks happy!
[09:04] <bigjools> morning
[09:12] <wgrant> The publisher tests do not whinge if you disable publication for non-Release pockets :(
[09:41] <lifeless> stub: are you going to be able to make the cassandra training? will you be staying for the end of the week ? [I'll stay for the 2 end days if you're going to stay, otherwise I'll just go for the 3 days training.
[09:41] <lifeless> stub: also, can tables be renamed in postgresql?
[09:41] <stub> I think I'm going for the 3 days training.
[09:41] <stub> Yes, tables can be renamed in PostgreSQL
[09:41] <jml> mwhudson: hi
[09:42] <jml> mwhudson: only a little
[09:42] <jml> mwhudson: I was a little surprised that the service wasn't written with Twisted
[09:42] <lifeless> stub: when will you know ?
[09:43] <stub> Early next week - I need to ensure I can sort out paperwork and stuff earlier than expected.
[09:43] <lifeless> ok
[09:43] <lifeless> I'll hold off booking for now
[09:44] <lifeless> doing the extra 2 days experimenting with stuff for LP with you could be great.
[09:44] <stub> Come home via Bangkok ;)
[09:45] <lifeless> the routing on that would be nuts I think, now that I'm in NZ
[09:47] <poolie> hello stub, jml, lifeless
[09:51] <jml> poolie: hi
[09:52] <jml> poolie: replying to some of your emails
[09:52] <jml> poolie: wrt your dkim branch... I don't think you should try too much harder to refactor that bit.
[09:53] <jml> poolie: I think someone should though. in a different branch.
[09:53] <poolie> your reply about notifications the other day made my day
[09:54] <jml> :)
[09:56] <spiv> bac: heh, a completely different failure this time
[09:59] <bac> spiv: well, that keeps things interesting...
[10:01] <poolie> jml we all seem to be pretty much in agreement then
[10:01] <poolie> i may put up a change
[10:01] <poolie> just wondered if i was missing something
[10:01] <wgrant> spiv: Ah, you're trying to compete with my 6 unique failures for a single branch?
[10:03] <spiv> wgrant: well, 2 from 2 tries so far.
[10:03] <spiv> This time it's a spurious windmill failure.
[10:04] <spiv> Last time it was 4 (non-windmill) failures.
[10:09] <poolie> this is really sweet
[10:09] <poolie> just add "variations = [VaryByHttpClientImplementation()]" to a test class and it spans out
[10:09] <poolie> better than writing load_tests
[10:37] <jml> poolie: that is cool
[10:37] <jml> poolie: what does that?
[10:38] <poolie> http://bazaar.launchpad.net/~mbp/bzr/597791-http-tests/annotate/head:/bzrlib/tests/test_variations.py
[10:38] <poolie> jml, and http://bazaar.launchpad.net/~mbp/bzr/597791-http-tests/annotate/head:/bzrlib/tests/test_http.py
[10:38] <poolie> i may split/copy it into testtools
[10:39] <jml> yeah.
[10:39] <jml> for some reason, testscenarios is a separate project
[10:40] <poolie> oh, or testscenarios
[10:40] <poolie> hard to keep up
[10:40] <poolie> and testfixtures is another one again i think
[10:41] <jml> yeah
[10:43] <jml> "do one thing and do it well" vs "make it easy to find, discover & maintain these things"
[10:45] <jml> I guess if I had more energy I'd try to come up with a good over-arching name for the project of high quality testing tools for Python
[10:45] <jml> and then make a cool website and stuff
[10:59] <lifeless> poolie: I'd love a patch for testscenarios to do that
[10:59] <poolie> yeah, it's a funny thing
[10:59] <lifeless> jml: tip ? :P
[10:59] <poolie> activation energy to do and test it in another library, and to bump the dependency
[11:00] <poolie> how about if you review it here and if we're happy, i'll move it
[11:00] <poolie> is the general plan to copy or to move?
[11:00] <lifeless> I don't think bzr uses testscenarios itself, yet.
[11:00] <lifeless> will chat more next week; gnight.
[11:01] <jml> g'night.
[14:08] <jml> if we wanted to have future distroseries for Ubuntu open on Launchpad so that people could assign bugs to those future distroseries, what would we need to do?
[14:09] <wgrant> It should mostly work now.
[14:09] <wgrant> (I talked about this a bit with cjwatson earlier)
[14:10] <wgrant> We can have a new series set to Experimental, and it will happily coexist uninitialised.
[14:10] <jml> hmm.
[14:10] <wgrant> I think all uploads will be held in NEW. Perhaps they should be rejected.
[14:11] <wgrant> (until a couple of months back it would have caused publisher carnage, but that was fixed)
[14:11] <jml> Arguably that should be independent of the series status
[14:11] <jml> right, but maybe code other than the publisher will break if we start doing that
[14:12] <jml> also, "Experimental" isn't such a great status for Natty
[14:12] <wgrant> It's not, no.
[14:12] <sinzui> jml: I agree with wgrant. We can do it now, but it will not be open for building and translations.
[14:13] <wgrant> Oh, there's also Future.
[14:13] <wgrant> Forgot about that status.
[14:13] <jml> that's a much more appropriate status
[14:14] <jml> who would have permission to try an experiment on staging?
[14:14] <jml> (I know staging isn't up right now)
[14:14] <jml> and by permission I mean computer permisson
[14:14] <wgrant> It's possible that ~ubuntu-drivers could do it, but it may well need an ~admin.
[14:14] <jml> I'm surrounded by people who have the authority to do so :)
[14:15] <wgrant> (I'm not sure I've ever seen Future used outside Debian)
[14:15] <wgrant> (so it may not even be exposed in the UI)
[14:15] <jml> right
[14:16] <wgrant> Yeah, it needs an admin.
[14:16] <jml> so "mostly work" might be more accurately phrased "doesn't work"
[14:16] <wgrant> Well, Experimental would work fine.
[14:17] <wgrant> Gr no staging.
[14:17] <jml> another question, when is the psycopg problem going to be fixed so I can upgrade in peace?
[14:18] <sinzui> jml: the bug was marked fixed a few days ago
[14:18] <jml> sinzui: what do I have to do to get the fix?
[14:19] <wgrant> It's not fixed..
[14:19] <wgrant> launchpad-dependencies depends on the old version.
[14:19] <sinzui> maybe wait for deps to update
[14:19] <jml> wgrant: yeah, that's what I mean :)
[14:20] <wgrant> Forcing installation of the old version is not a fix. It breaks upgrades and introduces all sorts of madness.
[14:20] <wgrant> The real fix is inhibited by an argument over who is at fault.
[14:23] <wgrant> sinzui: I can't view bug listings at all for a distro that doesn't use Launchpad Bugs. Is this intentional?
[14:23] <wgrant> Not even +bugs works.
[14:24] <sinzui> wgrant,  not exactly. I expected hacking the url to work
[14:24] <wgrant> jml: Release nominations work as expected for Experimental and Future series.
[14:25] <jml> wgrant: thanks.
[14:25] <jml> wgrant: I'm just mucking around on a launchpad.dev instance now
[14:25] <jml> I notice that we can't register a series as "Future". It seems to default to Experimental, but I'm not sure why that is.
[14:26] <sinzui> jml: there are some old rules about the status of a new series and how it can progress through the statuses
[14:26] <wgrant> Hah, newSeries does, yes. How very odd.
[14:27] <jml> sinzui: that makes me want to ask a bunch of questions:
[14:27] <jml> sinzui: 1. what are these rules?
[14:27] <jml> sinzui: 2. are these rules still appropriate? what was the justification for them?
[14:27] <wgrant> Some of the rules are still critical to Soyuz operation.
[14:27] <jml> sinzui: 3. why are these rules not obvious on the series registration page?
[14:27] <wgrant> The one about starting in Experimental... not so much.
[14:27] <sinzui> they exist to ensure you cannot accidentally move an ubuntu series back in status. If you make a mistake, godspeed: find a losa and hack the db!
[14:28] <jml> wgrant: it would be nice to decouple soyuz operation and distroseries status, I think
[14:28] <wgrant> jml: Probably.
[14:29] <wgrant> We need a tristate column to control the state of the RELEASE pocket, mainly.
[14:29] <sinzui> jml, wgrant Only losas are creating series. Drivers cannot update the statuses. We rely on a losa to do this. This is an Ubuntu exception
[14:29] <jml> sinzui: I don't understand... how can rules about the status of a *new* series prevent accidentally moving a series back in status
[14:29] <jml> sinzui: why are only losas allowed to create series?
[14:29] <wgrant> Because they can seriously break Soyuz if done badly.
[14:29] <sinzui> Ubuntu wants it that way
[14:29] <jml> sinzui: pretend they don't.
[14:29] <sinzui> I sent an email to several people asking why this is
[14:30] <sinzui> bdmurray, deduced the issue is that Ubuntu made its driver team the owners. There are too many untrusted people who would have permission to create series
[14:31] <jml> sinzui: where is this documented?
[14:31] <sinzui> bdmurray sent an email last month explaining the issue and suggesting that Ubuntu have a separate owner team
[14:31] <jml> sinzui: how would I have found out about this if you were asleep?
[14:32] <sinzui> jml: There is a bug about this. I have sent many emails to the Lp list over the last two years. mdz also asked about what a driver can do (Ubuntu driver team or driver role?), and the Ubuntu council is where the recent emails were
[14:33] <wgrant> (there is one thing that ~ubuntu-drivers can (but shouldn't be able to) do that nobody has mentioned yet)
[14:34] <sinzui> jml: bug 174375
[14:34] <_mup_> Bug #174375: Distribution drivers permissions may need redesign <Launchpad Registry:Triaged> <ubuntu-community:Confirmed for techboard> <https://launchpad.net/bugs/174375>
[14:35] <jml> wgrant: which is?
[14:35] <wgrant> jml: Well, check the owner of the Ubuntu primary archive...
[14:35] <wgrant> And weep.
[14:36] <jml> wgrant: I don't know how to do that.
[14:36] <wgrant> Well, it's ~ubuntu-drivers.
[14:36] <wgrant> So they can copy stuff in and probably add permissions.
[14:37] <wgrant> (just noticed this now; it's not really related to the distribution.driver field, besides being the same team)
[14:39] <jml> do I have to bug a losa if I want to experiment on staging?
[14:40] <Chex> jml: if you mean working on the servers behind the web-interface of staging, then yes, through us
[14:41] <jml> Chex: in this particular context I mean creating a new series of Ubuntu
[14:41] <jml> once staging has restored itself to its usual glory, of course
[14:41] <wgrant> It has been down for hours... I fear that it will not.
[14:41] <jml> oh. pity.
[14:42] <wgrant> I am hopeful that it's because someone is secretly setting up qastaging.
[14:42] <wgrant> But I doubt it.
[14:49] <sinzui> Chex, wgrant, jml: Edwin reported that staging has been down now for at least 12 hours
[14:49] <sinzui> Chex, is staging stuck?
[14:49]  * sinzui has qa to do on staging
[15:33] <dobey> hey abentley
[15:34] <abentley> dobey: hey.
[15:34] <dobey> how's it going?
[15:34] <abentley> dobey: alright.  Yourself?
[15:34] <dobey> pretty good
[15:35] <dobey> any luck with the branch scanning db issue?
[15:35] <sinzui> does anyone have the power to moderate launchpad-feedback? flacoste?
[15:35] <flacoste> sinzui: i have
[15:35] <flacoste> sinzui: i'll take care of it
[15:35] <sinzui> thanks
[15:36] <abentley> dobey: I am waiting for Chex to tell me what the procedure is to reinstate the needed db permissions.
[15:37] <dobey> abentley: ok
[15:38] <abentley> dobey: the fix will be permanent when we do our next rollout, but I'm trying to get it reinstated now.
[15:39] <bigjools> jml: what do you think about this: http://bazaar.launchpad.net/~julian-edwards/launchpad/builderslave-resume/revision/11661
[15:40] <dobey> abentley: ok. i'm just asking about it constantly because tarmac requires metadata from the LP API that is only available when rescan succeeds, and we have some u1 branches that are not getting rescanned properly
[15:41] <jml> bigjools: the comment and the logic look ok. I'm surprised that transaction.commit() doesn't do the reset though
[15:41] <abentley> dobey: I understand that you need it.  I think it's unacceptable that it's breaking, but I don't have the power to fix it myself.
[15:42] <jml> bigjools: also, I need to talk about packagesets
[15:42] <jml> but maybe that's Monday
[15:42] <flacoste> sinzui: done, there were 3 messages not spam
[15:42] <bigjools> jml: the tests pass... not sure what else to say about it
[15:42] <bigjools> jml: when would you like to talk about packagesets?
[15:42] <bigjools> oh Monday
[15:43] <jml> bigjools: what does write_transaction do that's different to transaction.commit()?
[15:43] <bigjools> I think the firefighting this week has made me blind
[15:43] <dobey> abentley: understandable
[15:43] <bigjools> jml: the decorator is itself decorated with the reset_store one
[15:43] <sinzui> thanks flacoste
[15:43] <jml> I'm pretty sure I have more grey hairs today than I did Monday.
[15:43] <jml> bigjools: ahh, ok.
[15:44] <bigjools> jml: which is a little nasty for partial commits
[15:44] <jml> bigjools: yeah
[15:45]  * bigjools now intrepidly steps forward into the buildd-slavescanner.txt wilderness
[15:45] <jml> bigjools: perhaps write_transaction shouldn't have the reset_store decorator, and the librarian should have reset_store explicitly on methods that currently have write_transaction
[15:46] <bigjools> jml: yeah I did raise that with gary_poster, I think it warranted some careful testing to see if that reset is still needed, since it might be a hangover from sqlobject
[15:47] <gary_poster> yes, removing that is a bit of a big deal
[15:47] <gary_poster> since we also do that for transactions
[15:47] <gary_poster> in the webapp
[15:47] <gary_poster> Landscape doesn't reset store
[15:47] <bigjools> interesting
[15:48] <gary_poster> But it's been like this for us for so long that I'm afraid that we might rely on it
[15:48] <bigjools> I wonder how much of this stuff is slowing us down
[15:48] <jml> bigjools: but what I'm saying is that we don't have to worry about that
[15:48] <gary_poster> so changing it would be one of those "who knows what will fall over when we switch it on"
[15:48] <jml> bigjools: we split it off write_transaction, and then change the places where write_transaction is used so that it uses reset_store explicitly
[15:48] <bigjools> gary_poster: indeed!
[15:48] <jml> absolutely 0 risk.
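The decorator layering bigjools and jml are discussing can be sketched with a toy stand-in (none of this is real Launchpad code; the store cache, commit log, and function names here are invented for illustration):

```python
# Hypothetical sketch of the layering described above: write_transaction is
# itself wrapped by reset_store, so every commit also flushes the store's
# object cache as a side effect. The "store" here is just a dict.
import functools

FAKE_STORE_CACHE = {}   # stand-in for Storm's per-store object cache
COMMITS = []            # stand-in log of transaction.commit() calls

def reset_store(func):
    """Clear the (fake) store cache after the wrapped call returns."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        finally:
            FAKE_STORE_CACHE.clear()
    return wrapper

def write_transaction(func):
    """Commit after the wrapped call; note it is itself wrapped by reset_store."""
    @reset_store  # <-- the extra layer jml suggests splitting off
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        COMMITS.append(func.__name__)  # stand-in for transaction.commit()
        return result
    return wrapper

@write_transaction
def add_file(name):
    FAKE_STORE_CACHE[name] = "dirty"
    return name

add_file("foo.txt")
print(COMMITS)           # ['add_file']
print(FAKE_STORE_CACHE)  # {} -- committing also reset the store
```

jml's proposed split would amount to deleting the `@reset_store` line inside `write_transaction` and instead stacking `@reset_store` explicitly above `@write_transaction` at the call sites that still need the reset.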
[15:49] <bigjools> jml: yes, it's only used in one other place
[15:49] <gary_poster> but then we expose something that has untested semantics?
[15:49] <gary_poster> sorry, I had a nick ping, but am not really fully in context yet
[15:50] <gary_poster> IOW, why do we want to remove it out of write_transaction, if we are worried it might be necessary in write_transaction?
[15:50] <bigjools> gary_poster: 'tis ok, it's not urgent by any means, and jml is talking about something different.  But I reckon we should spend some time examining if the reset is needed at some point.
[15:50] <gary_poster> completely agree
[15:51] <gary_poster> I've wondered about that since I started, actually
[15:51] <gary_poster> (ish ;-) )
[15:53] <bigjools> gary_poster: it may have been needed by the sqlobject compat layer, I wonder if jamesh would know
[15:53] <gary_poster> he did
[15:54] <gary_poster> he told me that it was
[15:54] <bigjools> aha
[15:54] <gary_poster> so like I said, we probably don't need it
[15:54] <gary_poster> except that it may be a pervasive, underlying assumption for some of our code now
[15:54] <bigjools> right
[15:54] <bigjools> yeah :(
[15:55] <bigjools> but our 100% test coverage would root that out!
[15:55]  * bigjools runs
[15:55] <gary_poster> :-)
[15:58] <sinzui> henninge, noodles775: have you discussed UI review graduation
[15:59] <henninge> sinzui: no and noodles775 is not my mentor ... ;)
[15:59] <sinzui> henninge, thanks, sorry about my incompetence.
[15:59] <henninge> sinzui: np
[16:12] <noodles775> :)
[16:41] <rockstar> Edwin-afk, ping
[16:44] <jcsackett> sinzui: you encountered oddities with metal:head_epilogue and conditional content, right? specifically the condition being ignored and it always being there?
[16:45] <sinzui> bac did
[16:46] <jcsackett> ah. do you remember what he did to get around that?
[16:46] <jcsackett> alternatively: bac? you still around?
[16:46]  * sinzui checks pastebin
[16:47] <sinzui> jcsackett, http://pastebin.ubuntu.com/507152/
[16:51] <jcsackett> sinzui: thanks.
[18:02] <dobey> abentley: still around?
[18:02] <abentley> dobey: yes.
[18:04] <dobey> abentley: so it looks like Chex did the patch, but am still seeing a couple branches (that were already pushed), waiting to be rescanned. any ideas on how to get them to 'finish' the rescan?
[18:05] <abentley> dobey: Yes, the branches need to be write-locked and unlocked to generate a new scan job.  Pushing to them should do the trick, or I can do a manual thing.
[18:06] <dobey> abentley: can you do the manual thing? the owner of the two branches i'm looking at isn't around, so i can't ask him to repush at the moment
[18:06] <abentley> dobey: sure.  What branches?
[18:06] <dobey> https://code.edge.launchpad.net/~rodrigo-moya/libubuntuone/dont-depend-on-gconf
[18:06] <dobey> https://code.edge.launchpad.net/~rodrigo-moya/ubuntuone-client/dont-use-dialog-vbox
[18:06] <dobey> those two
[18:09] <abentley> dobey: okay, those should get scanned in the next minute.
[18:09] <dobey> abentley: great. thanks much!
[18:10] <abentley> dobey: you're welcome.  Sorry for the glitch.
[18:34] <mars> A good Friday laugh: http://people.canonical.com/~mars/performance.png :)
[18:35] <bigjools> I feel like the guy at the bottom this week
[18:35] <mars> lol
[18:37] <bigjools> g'night all
[18:37] <mars> good night bigjools
[18:56] <gary_poster> sinzui: I'm trying to figure out an immediate way to help Darwin @ https://answers.edge.launchpad.net/launchpad-foundations/+question/128430 .  I was going to suggest that he merge his accounts (pointing to https://help.launchpad.net/YourAccount/Merging).
[18:56] <gary_poster> However, to do that without screwing up his PPA he would need to log into his primary account.  I could suggest merging his account to the secondary account, and then changing his name for him back to his primary account.
[18:56] <gary_poster> Thoughts or ideas?
[18:56] <gary_poster> Alternatively, I could try to construct SQL to try to figure out what is going on and ask the LOSAs to run it, but that's going to be such an expensive flail that I was hoping there would be another way
[18:57] <sinzui> gary_poster, The reverse merge and rename is viable. I have suggested it to users as well.
[18:57] <gary_poster> ok thanks sinzui.  I'll give it a whirl, and hope stub can investigate the underlying problem
[18:57] <sinzui> gary_poster, deleting a PPA is very destructive, you cannot get it back...the name is taken forever
[18:58] <gary_poster> sinzui: does that mean that the reverse merge/rename is dead for him?
[18:58] <gary_poster> (cause he has a PPA in the primary account)
[18:58] <sinzui> We cannot rename someone with a PPA. It must be deleted first
[18:59] <sinzui> I do not think you can merge a profile with a PPA
[18:59] <gary_poster> :-(
[18:59] <gary_poster> So not an option
[18:59] <gary_poster> So DB fiddling is the only real option
[18:59] <sinzui> gary_poster, are the account ids wrong/switched?
[19:01] <gary_poster> he doesn't have access to his primary account anymore.  He reports that he tried a password reset and never got an email back.  Unless something is seriously hosed in the LP database, that's probably what I need him to do.
[19:01] <gary_poster> (I mean, work with ISD to get the password reset working for him)
[19:01] <sinzui> gary_poster, did the user delete an email address on his baudm profile
[19:02] <gary_poster> sinzui: he didn't report doing so, but I don't see how he could be where we are without doing so
[19:02] <gary_poster> (I have not asked, but would be happy to if you thought it were worthwhile)
[19:02] <sinzui> password reset is automatic from a form. it sends it to your email address. Follow the link and you are done.
[19:02] <gary_poster> he says he didn't get the email
[19:03] <gary_poster> I'll pursue that
[19:03] <flacoste> gary_poster: do you know if stub qa-ed rev 9762?
[19:03] <gary_poster> not by number :-) looking...
[19:03] <flacoste> gary_poster: see https://devpad.canonical.com/~lpqateam/qa_reports/deployment-db-stable.html
[19:03] <gary_poster> k
[19:03] <flacoste> we are now on 9880
[19:07] <cr3> when should an interface inherit from another interface or the model implement that interface?
[19:09] <cr3> for example, the Product class implements IHasLogo and the IProduct interface inherits from IHasLogo too
[19:12] <gary_poster> it's a semantic question cr3.  If an interface semantically subsumes another interface, it should inherit.  OTOH, if a class happens to implement two semantically different interfaces, the interfaces themselves should not inherit
[19:13] <gary_poster> flacoste: what you see is an instance of bug 640566.  The old bug was added into the mix erroneously.  The new revision was in fact QAd according to the other bug: https://bugs.edge.launchpad.net/launchpad-foundations/+bug/644975
[19:13] <_mup_> Bug #640566: qa-tagger should not consider linked bugs to branches when bugs are already Fix Released <new-merge-workflow> <qa-tagger:In Progress by ursinha> <https://launchpad.net/bugs/640566>
[19:13] <_mup_> Bug #644975: Export OpenIdIdentifier to Canonical SSO <qa-ok> <Canonical SSO provider:New> <Launchpad Foundations:Fix Committed by stub> <https://launchpad.net/bugs/644975>
[19:14] <gary_poster> I am determining with Ursinha now what we should do practically about that old bug
[19:14] <cr3> gary_poster: in the case of IHasLogo though, if the IProduct interface already inherits from IHasLogo, does it make semantic sense for the Product implementation to explicitly call implements for the IHasLogo interface? shouldn't that be implied by calling implements for the IProduct interface?
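On cr3's question: in zope.interface, a class declared as implementing IProduct automatically provides IHasLogo when IProduct inherits from it, so an explicit IHasLogo declaration on Product is redundant (though harmless). A dependency-free sketch of the same semantics using stdlib typing.Protocol — an analogy only, since Launchpad itself uses zope.interface, and the method names below are invented:

```python
# Toy analogy with stdlib typing.Protocol: satisfying the child interface
# implies satisfying the parent, so no separate declaration is needed.
from typing import Protocol, runtime_checkable

@runtime_checkable
class IHasLogo(Protocol):
    def logo(self) -> str: ...

@runtime_checkable
class IProduct(IHasLogo, Protocol):  # IProduct semantically subsumes IHasLogo
    def name(self) -> str: ...

class Product:
    """Declared (structurally) as an IProduct only; IHasLogo is implied."""
    def logo(self) -> str:
        return "logo.png"

    def name(self) -> str:
        return "frobnicator"

p = Product()
print(isinstance(p, IProduct))   # True
print(isinstance(p, IHasLogo))   # True -- implied by the inheritance
```

The zope.interface equivalent behaves the same way: `IHasLogo.providedBy(product)` is already true for any class declared with `implements(IProduct)`.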
[19:14] <flacoste> gary_poster: ah, ok, that means the report isn't useful at this stage then
[19:14] <flacoste> gary_poster: to determine which revision can be rolled out
[19:14] <Ursinha> flacoste, this kind of situation is an exception
[19:15] <gary_poster> flacoste: correct, when people use the same branch repeatedly, it falls over until that bug is fixed
[19:15] <flacoste> which is common in our team at this stage
[19:15] <gary_poster> flacoste: I'm going to mark that qa-ok, on Ursinha's suggestion
[19:15] <flacoste> stub, lifeless and others do this
[19:15] <flacoste> ok
[19:15] <flacoste> staging is down btw
[19:15] <flacoste> we are stuck on a replication error
[19:15] <gary_poster> They are the only two AFAIK, but in any case Ursinha is working on it
[19:15] <flacoste> I've been known to reuse branches also
[19:16] <flacoste> not that i'm a big committer anymore
[19:16] <gary_poster> :-)
[19:16] <gary_poster> ok report should be happy again next time it runs
[19:18] <gary_poster> flacoste: staging replication error: my only remediation options are to ask LOSAs to review the kinds of things stub have asked them to review in the past, or wake up stub, or send stub an email.  My inclination is to send stub an email, since I assume LOSAs have already done "the usual" whatever that is.  Agree/disagree?
[19:19] <flacoste> gary_poster: it also means that it's likely that we won't have an estimate of the time the DB upgrade will take
[19:19] <gary_poster> flacoste: yes
[19:19] <flacoste> Chex: have you gone through the usual slony debug checklist?
[19:22] <Ursinha> gary_poster, flacoste, it's updated now: https://devpad.canonical.com/~lpqateam/qa_reports/deployment-db-stable.html
[19:23] <gary_poster> Thanks Ursinha
[19:24] <flacoste> Ursinha: do we have a bug to include the revision until tip?
[19:25] <flacoste> Ursinha: for example, on https://devpad.canonical.com/~lpqateam/qa_reports/deployment-stable.html, i have no idea how many revisions follow that not-qaed revision
[19:25] <flacoste> if it's only 1, it's not the same things as if there are 10 other revisions after that
[19:25] <Ursinha> flacoste, no, there's not
[19:25] <Ursinha> makes sense
[19:25] <flacoste> i'll file one
[19:25] <flacoste> what project?
[19:25] <Ursinha> flacoste, qa-tagger
[19:30] <gary_poster> Ursinha: I was pretty sure that these are shown
[19:31] <gary_poster> But maybe I misremember :-/
[19:31] <Ursinha> gary_poster, iirc they were never there
[19:31] <Ursinha> mars knows better
[19:31] <flacoste> Ursinha: bugs 65712 and 657015
[19:31] <gary_poster> Ursinha: ok, you would probably know :-) .
[19:31] <flacoste> bug 657012
[19:31] <_mup_> Bug #657012: Display the revision number of tip on the report <qa-tagger:New> <https://launchpad.net/bugs/657012>
[19:32] <flacoste> and bug 657015
[19:32] <_mup_> Bug #657015: Deployable revisions report should go until tip <qa-tagger:New> <https://launchpad.net/bugs/657015>
[19:32] <mars> Ursinha, ? reading backscroll
[19:34] <mars> Ah, yes.  Ursinha, they are included in the fake report but the real application does not produce them
[19:41] <Ursinha> thanks flacoste
[19:41] <Ursinha> and mars
[19:46] <sinzui> gary_poster, I updated the bug. We should request a losa to try a merge from edge to see if it fixes the users login issue
[19:47] <gary_poster> I was just reading your reply sinzui, thank you.  OK, I'll try that.
[19:48] <gary_poster> sinzui: djclue917 got an email address when he used that email address again, somehow, I think.
[19:48] <gary_poster> oh, duh, you said this
[19:49] <sinzui> login is so very confusing that I cannot remember what I say about it either
[19:51] <sinzui> gary_poster, since the address is not preferred, I wonder if it is possible that the address was taken from the Lp profile...Lp profile -> 3 addresses, but two accounts with different email addresses.
[19:55] <gary_poster> sinzui: does the lock on this page show that the email address is preferred now? https://edge.launchpad.net/~djclue917
[19:55] <sinzui> The lock means it is hidden.
[19:55] <gary_poster> oh ok
[19:55] <sinzui> gary_poster, you have demi-god powers to see it
[19:55] <gary_poster> go, me!
[20:06] <mars> gary_poster or benji, API question for you: do you know what happens if you log in twice, passing different parameters to allow_access_levels?
[20:06] <mars> login_with(allow_access_levels=["READ_PUBLIC", "READ_PRIVATE"])
[20:07] <mars> then, log in again with: login_with(allow_access_levels=["READ_PUBLIC", "WRITE_PUBLIC"])
[20:07] <benji> mars: I don't know definitively, but my strong suspicion would be that the second supersedes the first
[20:07] <gary_poster> not offhand, mars.  if benji doesn't, can look around.
[20:07] <gary_poster> well, I'd hope so, but that doesn't mean it's so :-)
[20:07] <mars> benji, by supersedes, do you mean it will ask you to authorize the application again?
[20:08] <mars> since you changed the token's authorization
[20:08]  * gary_poster hopes so
[20:08] <mars> yes, that is what I suspected
[20:08] <gary_poster> have you tried, mars?  that's what I'd do, with staging, since we are coming up empty
[20:09] <mars> gary_poster, not yet, will do so.  benji, gary_poster, thanks for the help
[20:09] <gary_poster> such as it was, in my case :-)
[20:10] <benji> mars: yep, that is my expectation (and hope)
[20:12] <rockstar> abentley, you know what would be cool?  A pump command that could work while I'm in the pipe I want changes to get pumped to.
[20:17] <abentley> rockstar: yeah, I sometimes want that.
[20:19] <abentley> rockstar: I think you can abuse pump --from-submit for that, but that will also apply any changes from the submit branch.
[20:20] <rockstar> abentley, yeah, and then we get weird criss-cross merges too.  I always have to be careful with that.  :)
[20:21] <abentley> rockstar: err, --from-submit will rarely give you criss-cross.
[20:22] <abentley> rockstar: only if, say, you submit your first pipe, then merge & commit, then merge again after the pipe lands.
[20:25] <mars> gary_poster, do you remember the conditions under which a staging code update happens?
[20:25] <mars> I am looking at the graphs and there appears to be no regular pattern there
[20:26] <gary_poster> mars, I think code updates happen as soon as a new revision is available, as checked by some time-regular cron-like thing; and db updates are on the weekend
[20:26] <mars> gary_poster, ok, thanks.  I'll just say 'is down regularly'
[20:26] <james_w> mars, I don't think it asks you to reauth
[20:27] <james_w> mars, I think you just get 403 if you now want to do something that you didn't ask for permission for earlier
[20:27] <mars> james_w, when escalating or dropping permissions?
[20:27] <mars> ah
[20:27] <james_w> mars, neither way does anything to existing tokens I don't think
[20:27] <mars> ok.  Thanks james_w
[20:27] <james_w> dropping is clearly not going to cause functional issues
[20:28] <james_w> I may be wrong though, but I think that allow_access_levels is just sent to launchpad in the GET request to +authorize-token
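james_w's description can be modelled as a toy sketch (hypothetical names throughout; this is not launchpadlib or the Launchpad web service): a token keeps the access levels it was authorized with, a later login mints a separate token rather than mutating the old one, and a request outside the token's levels gets a 403.

```python
# Toy model of the behaviour described above. Nothing here is real
# Launchpad OAuth code; Token and request() are invented stand-ins.

class Token:
    def __init__(self, levels):
        # The levels granted at +authorize-token time, frozen thereafter.
        self.levels = frozenset(levels)

def request(token, needed_level):
    """Return an HTTP-ish status for an operation needing `needed_level`."""
    return 200 if needed_level in token.levels else 403

# First login: authorized for public + private reads.
t1 = Token(["READ_PUBLIC", "READ_PRIVATE"])
# Second login with different levels mints a *new* token; t1 is untouched,
# per "neither way does anything to existing tokens".
t2 = Token(["READ_PUBLIC", "WRITE_PUBLIC"])

print(request(t1, "READ_PRIVATE"))   # 200
print(request(t1, "WRITE_PUBLIC"))   # 403 -- t1 never asked for write access
print(request(t2, "WRITE_PUBLIC"))   # 200
```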
[21:21] <benji> gary_poster: on a freshly installed Kubuntu machine plus apt-get install python-keyring-kwallet I get "OSError: can't get access to dbus" when trying to store a password
[21:22] <gary_poster> bah
[21:22] <gary_poster> benji, sounds like it's time to trash keyring with prejudice
[21:23] <benji> I concur.
[21:24] <gary_poster> cool, go for it
[21:25] <james_w> with what in preference?
[21:25] <benji> gary_poster: do you want me to whip up a KWallet integration to go with our Gnome keyring support or go with what we have until the time comes?
[21:25] <dobey> why is the prerequisite on a merge proposal a link to a branch, and not a merge proposal?
[21:26] <gary_poster> benji: assuming KWallet integration is on the order of half a day to a day, go for it
[21:26] <benji> sounds good
[21:26] <gary_poster> cool
[21:26] <dobey> gary_poster, benji: if this is in Python, which I presume it is, why not use python-keyring?
[21:27] <gary_poster> dobey: that's exactly what we are talking about.  We've encountered several annoyances with it.  The straw that broke the camel's back was that it doesn't seem to work on Kubuntu, which, other than Ubuntu/Gnome, is the other keyring that we care about.
[21:28] <james_w> have you filed a bug?
[21:29] <benji> james_w: not yet, but it's on my list
[21:29] <dobey> gary_poster: that's odd
[21:30] <gary_poster> dobey: "why is the prerequisite on a merge proposal a link to a branch, and not a merge proposal?" I don't understand why it would be a merge proposal, so I don't understand your perspective yet
[21:32] <rockstar> abentley, shall we have our standing up?
[21:32] <dobey> gary_poster: so we just ran into this with tarmac, and i filed bug #657038. i'm working on a fix to tarmac to deal with it, but it doesn't seem like prerequisites work the optimal way for that. it seems like i would want to specify a merge proposal, because i am requiring the prerequisite to be merged into its target before my proposal can land.
[21:32] <_mup_> Bug #657038: Branches with prerequisites can land without prerequisite having landed <Tarmac:In Progress by dobey> <https://launchpad.net/bugs/657038>
[21:32] <abentley> rockstar: sure.
[21:33] <dobey> gary_poster: and since a branch can have many proposals with different targets associated, there is no easy way to determine what the developer meant by specifying a prerequisite
[21:35] <gary_poster> dobey, ah I see.  AFAIK, prerequisites are only about presenting diffs from LP's perspective.  I can see why you would want that, and I expect the difference simply comes from the fact that the prerequisites weren't originally envisioned for that use case.
[21:36] <dobey> gary_poster: so total usability fail then :)
[21:37] <gary_poster> for this use case, yes, sounds like it
[21:37] <dobey> well for both use cases
[21:37] <dobey> because in the "show a different diff on LP" case, the diff doesn't necessarily show what will actually land
[21:40] <gary_poster> MPs weren't made for tarmac
[21:40] <dobey> ignoring tarmac
[21:41] <dobey> if LP is showing a diff based on the prerequisite, and that proposal is approved/landed before the prerequisite, it might actually result in code being landed that wasn't shown in the diff, or finished being reviewed
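dobey's point can be made concrete by treating branches as sets of revision ids (a toy model only; real bzr/Launchpad diffs are not computed this way, and the revision names are invented):

```python
# Toy revision-set model of the prerequisite problem described above.
target = {"r1", "r2"}                 # trunk
prereq = target | {"p1", "p2"}        # "big honking branch", not yet landed
source = prereq | {"s1"}              # "trivial branch" built on top of it

# What the preview diff shows: source vs target *with the prerequisite
# merged in*, i.e. only the source's own revisions.
shown_in_diff = source - prereq
print(sorted(shown_in_diff))          # ['s1']

# What actually lands if source merges before the prerequisite does:
actually_lands = source - target
print(sorted(actually_lands))         # ['p1', 'p2', 's1']

# The gap is code landing without having appeared in the reviewed diff:
print(sorted(actually_lands - shown_in_diff))   # ['p1', 'p2']
```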
[21:41] <gary_poster> (not that I don't think tarmac ought to be a driver, actually)
[21:45] <gary_poster> approved out-of-order is fine, and even nice to have IME.  Comes in handy for trivial branches after big honking branches.
[21:45] <gary_poster> But anyway, I see your point.
[21:45] <gary_poster> Make a bug. See where it goes. :-)
[21:47] <dobey> well, it's good if the "trivial" branch doesn't also include the changes from the "big honking" branch. but if it doesn't, then there probably isn't much need for specifying a prerequisite :)
[21:47] <gary_poster> it did
[21:47] <gary_poster> they did
[21:49] <dobey> as it is now though, there's no easy way to implement a check to ensure that the 'big honking branch' lands first, to avoid having the 'trivial branch' land with unapproved code
[21:49] <dobey> no?
[21:50] <bryceh> I've been getting spurious warnings about LPModerate and XMLRPCRunner import errors - I am guessing I need to have something installed that isn't?  (This is lucid, dist-updated to current, with db-devel head)  http://pastebin.ubuntu.com/509048/
[21:54] <gary_poster> dobey, right, which is the use case that tarmac has that I didn't have when I was doing things manually.  MPs were a way of social communication for me, rather than a mechanism for enforcement and automation, and I think that's how they started in vision.  I landed things myself, as it was appropriate to do so.  But anyway, like I said, file a bug!
[21:55] <dobey> yeah i will
[21:55] <gary_poster> bryceh: smells like Mailman failed to build to me.  Possible?
[21:56] <bryceh> gary_poster, it's set to the stock defaults, I've not touched it in any way... does it require configuration for launchpad?
[21:56] <gary_poster> bryceh: I mean, the mailman that launchpad builds itself in the Make
[21:57] <gary_poster> not system mailman
[21:57] <bryceh> oh, well I notice in the db-devel bzr log there's been some changes to it, but I've not made any changes
[21:59] <gary_poster> bryceh: did you look at the output of make when you built this branch?  If not, try make clean && make, and see if it seems ok
[21:59] <gary_poster> if so, retry tests
[21:59] <bryceh> ok
[21:59]  * gary_poster hopes it will then work, because he hasn't seen this problem
[22:05] <bryceh> much better, thanks gary!
[22:05] <gary_poster> yay, bryceh :-)