[16:00] <Ursinha> me?
[16:00] <sinzui> you
[16:00] <sinzui> us
[16:00] <intellectronica> Ursinha: no, not you
[16:00] <sinzui> them
[16:00] <Ursinha> ah
[16:00] <Ursinha> :(
[16:00] <matsubara> sorry
[16:00] <matsubara> #startmeeting
[16:00] <MootBot> Meeting started at 10:00. The chair is matsubara.
[16:00] <MootBot> Commands Available: [TOPIC], [IDEA], [ACTION], [AGREED], [LINK], [VOTE]
[16:00] <Ursinha> roll call, roll call
[16:00] <matsubara> my firefox died
[16:00] <sinzui> me
[16:00]  * stub belches
[16:00] <matsubara> hang on a second please
[16:00] <henninge> me
[16:00] <Ursinha> poor matsubara
[16:01]  * jml eavesdrops
[16:01]  * bigjools wafts stub's belch away
[16:01] <rockstar> ni
[16:01] <matsubara> Welcome to this week's Launchpad Production Meeting. For the next 45 minutes or so, we'll be coordinating the resolution of specific Launchpad bugs and issues.
[16:01] <matsubara> [TOPIC] Roll Call
[16:01] <MootBot> New Topic:  Roll Call
[16:01] <Ursinha> meeee
[16:01] <bigjools> me
[16:01] <matsubara> Not on the Launchpad Dev team? Welcome! Come "me" with the rest of us!
[16:01] <stub> me
[16:01] <henninge> me again
[16:01] <intellectronica> i
[16:01] <matsubara> flacoste, hi
[16:02] <flacoste> me
[16:02] <matsubara> herb, hi
[16:02] <herb> me
[16:02] <matsubara> ok, everyone here.
[16:02] <matsubara> [TOPIC] Agenda
[16:02] <MootBot> New Topic:  Agenda
[16:02] <matsubara>  * Actions from last meeting
[16:02] <matsubara>  * Oops report & Critical Bugs & Broken scripts
[16:02] <matsubara>  * Operations report (mthaddon/herb/spm)
[16:02] <matsubara>  * DBA report (stub)
[16:02] <matsubara> [TOPIC] * Actions from last meeting
[16:02] <MootBot> New Topic:  * Actions from last meeting
[16:02] <matsubara>  * matsubara to chase rockstar about failure on updatebranches script
[16:02] <matsubara>  * stub to give a try on bug 354593 with mars help if needed
[16:02] <matsubara>  * stub to fix bug 310818
[16:02] <matsubara>  * mars to take a look at OOPS-1307J16
[16:02] <matsubara>  * Discuss the solution proposed by gary_poster after the meeting, about ExpatErrors and bug 403606
[16:02] <matsubara>  * mars and stub to discuss the Disconnection and OperationalErrors after the meeting
[16:02] <jml> me
[16:03] <Ursinha> yay, jml
[16:03] <matsubara> I suck, I didn't chase rockstar about the updatebranches script failures
[16:03] <rockstar> matsubara, I thought we agreed that mwhudson would be better to chase on it.
[16:03] <jml> matsubara, got a URL for the failure?
[16:03] <matsubara> otoh, the script is not failing anymore...
[16:03] <rockstar> matsubara, I know mwhudson was looking at it on his Tuesday.
[16:03] <rockstar> jml would be good to ask as well.
[16:04] <matsubara> rockstar, all right. I'll talk to jml and mwhudson later on today
[16:04] <matsubara> [action] * matsubara to chase mwhudson/jml about failure on updatebranches script
[16:04] <MootBot> ACTION received:  * matsubara to chase mwhudson/jml about failure on updatebranches script
[16:04] <rockstar> matsubara, jml is here right now. :)
[16:04] <matsubara> jml, I'll get you a URL for the scripts after the meeting. I need to trawl my emails to find it
[16:04] <jml> matsubara, ok. thanks.
[16:05] <matsubara> stub, how's 354593 fix coming along?
[16:05] <flacoste> why is this High again?
[16:06] <matsubara> I wonder if mars had time to look over OOPS-1307J16
[16:06] <matsubara> flacoste, do you know ^?
[16:06] <flacoste> hmm, i put it as such
[16:06] <flacoste> any reason it should be?
[16:06] <matsubara> flacoste, according to the bug history you made it high :-)
[16:06] <flacoste> debranding of the SSO is a U1/ISD affair anyway
[16:06] <stub> matsubara: Slow. I need to discuss with people how to actually do it - maybe next week on the sprint if I get time.
[16:07]  * sinzui agrees with flacoste
[16:07] <flacoste> stub: i think we should try to get stu and James to do it :-)
[16:07] <flacoste> especially stu, it would be a good test case for knowledge transfer
[16:07] <stub> Anything that means I don't have to work out how ZPT macros works is fine by me.
[16:07] <flacoste> +1
[16:08] <matsubara> Ursinha, what's up with  "Discuss the solution proposed by gary_poster after the meeting, about ExpatErrors and bug 403606"?
[16:08] <Ursinha> matsubara, the ExpatErrors were being discussed by mars and gary
[16:08] <gary_poster> matsubara: that's now registry. it actually is a legitimate oops
[16:08] <matsubara> [action] stub to delegate bug 354593 to ISD
[16:08] <MootBot> ACTION received:  stub to delegate bug 354593 to ISD
[16:08] <gary_poster> it indicates a problem with mailman integration
[16:09] <sinzui> I will ask barry to look into bug 403606
[16:09] <matsubara> stub, you recently fixed a DisconnectionError bug. Was it related to the errors you discussed with mars? Is that action item now done?
[16:09] <matsubara> thanks sinzui and gary_poster
[16:10] <gary_poster> :-)
[16:10] <stub> matsubara: I landed code to log OOPS reports on DisconnectionError before retrying the request. Is that what you mean?
[16:10] <matsubara> stub, I mean: "* mars and stub to discuss the Disconnection and OperationalErrors after the meeting"
[16:11] <Ursinha> stub, is that what caused the TransactionRollbackError oopses?
[16:11] <stub> We discussed. I don't recall much about the conversation though :)
[16:11] <matsubara> :-)
[16:11] <stub> Ursinha: That fix was, yes. I've got another branch that turns the volume down so we don't log the TransactionCommitError's
[16:12] <matsubara> [action] sinzui to ask barry to fix bug 403606
[16:12] <MootBot> ACTION received:  sinzui to ask barry to fix bug 403606
[16:12] <Ursinha> stub, good, I filed bug 409907 for that
[16:13] <matsubara> Ursinha, is there a bug for OOPS-1307J16?
[16:13] <Ursinha> matsubara, not that I opened one, because we needed to know what was going on over there
[16:13] <Ursinha> to open the bug
[16:13] <Ursinha> so mars was going to investigate that
[16:15] <Ursinha> I don't recall having those anymore
[16:15] <matsubara> [action] ursinha to chase mars about OOPS-1307J16 and file a bug about it
[16:15] <MootBot> ACTION received:  ursinha to chase mars about OOPS-1307J16 and file a bug about it
[16:15] <matsubara> I think that's all for last meeting's action items
[16:15] <matsubara> thanks everyone
[16:15] <matsubara> [TOPIC] * Oops report & Critical Bugs & Broken scripts
[16:15] <MootBot> New Topic:  * Oops report & Critical Bugs & Broken scripts
[16:16] <Ursinha> there are two issues to discuss
[16:16] <Ursinha> one was about bug 409907, that I already mentioned to stub and it's being handled
[16:16] <Ursinha> the other is about the select replication_lag() timeouts we're having
[16:17] <Ursinha> mthaddon also reported problems, but we don't know if they're related to that
[16:18] <Ursinha> I don't know if there's much to be discussed at this point, because it seems we need to fix oops reports first to be able to see the real problem here
[16:18] <Ursinha> is that correct, stub?
[16:18] <matsubara> should we request a CP for the branch that fixes the oops log?
[16:19] <intellectronica> given that we're skipping a release, that's probably a good idea
[16:19] <Ursinha> flacoste, I've spoken with jtv yesterday about those, and he also said it was unlikely to be translations' fault (possible but unlikely)
[16:20] <stub> I landed code today that should tell us more about whether the timeout is actually occurring due to blocking on the database, or elsewhere.
[16:20] <Ursinha> stub, should we request a CP?
[16:20] <flacoste> yeah, i really think a CP is a good idea
[16:20] <Ursinha> (please please)
[16:20] <matsubara> [action] stub to request CP for his branch that fixes oops logging
[16:20] <MootBot> ACTION received:  stub to request CP for his branch that fixes oops logging
[16:20] <Ursinha> cool
[16:21] <Ursinha> we have two critical bugs, already fix committed
[16:21] <Ursinha> so, good
[16:21] <matsubara> cool
[16:21] <Ursinha> about the failing scripts
[16:21] <matsubara> we had some scripts failing this week
[16:21] <matsubara> nightly, productreleasefinder and garbo-hourly
[16:22] <matsubara> and rosetta-poimport too
[16:22] <matsubara> nightly was already addressed by jtv
[16:22] <Ursinha> matsubara, productreleasefinder isn't expected to fail anymore? sinzui?
[16:22] <matsubara> as a rosetta script was taking too much time and jtv will remove it from nightly and add a cronjob for it
[16:22] <sinzui> Ursinha: no, but the errors I see are not failures... the script was not run
[16:23] <matsubara> stub, do you know why garbo-hourly is failing?
[16:23] <stub> Its failing?
[16:23] <sinzui> matsubara: many scripts are not running because of one long process
[16:23] <matsubara> henninge, rosetta-poimport failed on the 5th. can you investigate and reply to the list?
[16:24] <Ursinha> matsubara, it's not being run, it seems
[16:24] <henninge> matsubara: sure, I will.
[16:24] <matsubara> stub, I got a few emails: "Scripts failed to run: loganberry:garbo-hourly"
[16:24] <sinzui> Ursinha, matsubara: there is some traffic about this. spm reported the long-running process a week ago. I had asked why the prf had not run
[16:24] <matsubara> and no replies to the list, so I'm asking here
[16:25] <matsubara> thanks henninge
[16:25] <Ursinha> matsubara, actually stub replied
[16:25] <stub> Oh - there were some blocked runs because the rosetta export-to-branch script was running in a 5 hour long transaction
[16:25] <stub> So the script blocks because it doesn't want to make anything worse.
[16:25] <matsubara> [action] henninge to investigate rosetta-poimport script failure on the Aug 5th and report back to the list
[16:25] <MootBot> ACTION received:  henninge to investigate rosetta-poimport script failure on the Aug 5th and report back to the list
[16:27] <Ursinha> so I guess it's ok
[16:28] <Ursinha> that's all for this section
[16:28] <Ursinha> from me
[16:28] <Ursinha> thanks everyone
[16:28] <Ursinha> !
[16:28] <Ursinha> you can move on matsubara
[16:28] <matsubara> all right. thanks everyone
[16:28] <matsubara> [TOPIC] * Operations report (mthaddon/herb/spm)
[16:28] <MootBot> New Topic:  * Operations report (mthaddon/herb/spm)
[16:28] <herb> 2009-07-31 - Rolled out r8323 to bzrsyncd
[16:28] <herb> 2009-08-05 - Cherry picks for code imports, lpnet* and the script server.
[16:29] <herb> Our monitoring system has been timing out in connecting to the app servers more often this week. Admittedly its timeout is set lower than the OOPS timeout. But we've also been noticing higher load on the app servers. This was discussed by Ursinha during the oops/critical bugs/broken scripts section.
[16:29] <herb> There's currently 1 cherry pick and 1 database query awaiting (dis)approval.
[16:29] <herb> The LOSAs currently have 14 bugs marked high and triaged, only 1 of which is assigned to someone and targeted for a release. We would be grateful if we saw some movement on these.
[16:29] <herb> We're currently running with a single slave in preparation for the sprint next week.
[16:30] <mthaddon> also wanted to check that there should be a cherry pick request for the cowboyed storm change to lpnet9 and lpnet10 (per the production status wiki page)
[16:30] <flacoste> cowboyed storm change?
[16:30] <mthaddon> flacoste: https://pastebin.canonical.com/20503/ under eggs/storm-0.14salgado_storm_launchpad_288_308-py2.4-linux-i686.egg
[16:30] <flacoste> mthaddon, herb: i'll look at the LPS to approve/decline
[16:31] <flacoste> right
[16:31] <flacoste> mthaddon: the cherry pick would simply be to update that dependency
[16:31] <matsubara> herb, do you keep that list of 14 bugs somewhere? in a wiki page or have a tag to group them?
[16:31] <herb> matsubara: bugs.launchpad.net/~canonical-losas
[16:32] <mthaddon> flacoste: well in any case, the CP that was requested (and performed) yesterday overwrote it, so it needs to be formalised so other CPs don't overwrite it again
[16:32] <flacoste> sinzui: can salgado make an appropriate CP request?
[16:32] <sinzui> Yes
[16:33] <flacoste> it's simply a new upload to download-cache with a versions.cfg change
[16:34] <matsubara> sinzui, flacoste, intellectronica, rockstar: Could you take a look at herb's bug list (bugs.launchpad.net/~canonical-losas) and see what your teams can do about the high ones in the short term?
[16:35] <flacoste> ok
[16:35] <herb> clearly we're not looking for all of them to be fixed by the next meeting (though that would be great ;)
[16:35] <herb> we'd just mostly like to know they're staying on the right radars and are being worked on as appropriate.
[16:36] <matsubara> cool
[16:36] <matsubara> anything else for herb?
[16:36] <intellectronica> herb: so, basically, these are mostly bugs which will make life easier for you when fixed?
[16:36] <sinzui> bug 348722 should become invalid when we update all pmt teams to become true private teams
[16:37] <herb> intellectronica: some of them are genuine operational issues, some of them are quality of life issues for the LOSAs
[16:37] <sinzui> There should be no private-membership teams at the start of week 1
[16:37] <intellectronica> cool, sure, we'll take a look and see if there's any low hanging fruit
[16:38] <sinzui> barry will be working with the losas on August 11 to fix bug 325962
[16:38] <herb> sinzui: that was the one that was assigned and targeted at a release.
[16:39] <sinzui> herb, many times
[16:39] <matsubara> all right. I think that's it
[16:39] <sinzui> herb, it failed my rule: a bug is not high if it has not been worked on by all parties within 3 months
[16:39] <herb> thanks
[16:39] <matsubara> thanks herb and everyone
[16:39] <matsubara> [TOPIC] * DBA report (stub)
[16:39] <MootBot> New Topic:  * DBA report (stub)
[16:40] <stub> We set off some alerts when the poimport script and PostgreSQL decided that lots of disk space should be used. We see some smaller spikes, which is just PG using disk to store intermediary results, but this time it was large enough to set off the alarms.
[16:40] <stub> We have seen this once before, and in neither case have we been able to repeat it. My best hypothesis is the planner statistics triggering a really bad query plan, so I'll bump the planner statistics sample size on the production dbs in case this stops future occurrences.
[16:41] <matsubara> henninge, maybe the last rosetta-poimport failure was related to that ^
[16:43] <henninge> matsubara: I believe we already know what it was about and it may be related to that.
[16:43] <henninge> matsubara: I'll talk to the guys.
[16:43] <matsubara> henninge, cool. thanks
[16:43] <matsubara> stub, anything else?
[16:43] <stub> Not that I can think of
[16:43] <matsubara> all right. thank you stub
[16:43] <matsubara> I guess that's all for today
[16:44] <matsubara> Thank you all for attending this week's Launchpad Production Meeting. See https://dev.launchpad.net/MeetingAgenda for the logs.
[16:44] <matsubara> #endmeeting
[16:44] <MootBot> Meeting finished at 10:44.
[16:44] <herb> thanks everyone
[16:44] <Ursinha> right on time
[16:44] <matsubara> :-)
[16:44] <Ursinha> thanks guys