[00:17] <jml> sorry guys.
[00:18]  * jml is back.
[15:50] <Ursinha> henninge, :)
[15:50] <henninge> Ursinha: Hi :)
[16:02] <rockstar> Hi
[16:02] <Ursinha> #startmeeting
[16:02] <Ursinha> Welcome to this week's Launchpad Production Meeting. For the next 45 minutes or so, we'll be coordinating the resolution of specific Launchpad bugs and issues.
[16:02] <Ursinha> [TOPIC] Roll Call
[16:02] <Ursinha> Not on the Launchpad Dev team? Welcome! Come "me" with the rest of us!
[16:02] <MootBot> Meeting started at 10:02. The chair is Ursinha.
[16:02] <MootBot> Commands Available: [TOPIC], [IDEA], [ACTION], [AGREED], [LINK], [VOTE]
[16:02] <MootBot> New Topic:  Roll Call
[16:02] <bigjools> me
[16:05] <Ursinha> me
[16:05] <henninge> ich
[16:05] <sinzui> me
[16:05] <rockstar> me
[16:05] <flacoste> me
[16:05] <herb> me
[16:05] <Ursinha> herb, intellectronica, hi
[16:05] <Ursinha> :)
[16:05] <Ursinha> who's missing?
[16:05] <Ursinha> stub is missing, but he can join later
[16:05] <Ursinha> intellectronica is missing too
[16:05] <Ursinha> let's move on
[16:05] <Ursinha> [TOPIC] Agenda
[16:05] <Ursinha> * Actions from last meeting
[16:05] <Ursinha> * Oops report & Critical Bugs
[16:05] <Ursinha> * Operations report (mthaddon/herb/spm)
[16:05] <Ursinha> * DBA report (stub)
[16:05] <MootBot> New Topic:  Agenda
[16:05] <Ursinha> [TOPIC] * Actions from last meeting
[16:05] <Ursinha> * Ursinha to talk to intellectronica about bug 357316
[16:05] <Ursinha> * Ursinha to talk to henninge about bug 302449
[16:05] <Ursinha> * rockstar to confirm that bzr fix for bug 360791 was applied to LP's bzr tree.
[16:05] <MootBot> New Topic:  * Actions from last meeting
[16:05] <Ursinha> * cprov to request CP of fix for bug 370513
[16:06] <Ursinha> I suck and failed mine
[16:06] <Ursinha> [action] Ursinha to talk to intellectronica about bug 357316
[16:06] <MootBot> ACTION received:  Ursinha to talk to intellectronica about bug 357316
[16:06] <Ursinha> henninge, hi :)
[16:06] <henninge> I think danilo was onto that ...
[16:06] <rockstar> I don't know if the fix has been cherry picked into production...
[16:06] <Ursinha> henninge, can you just confirm that, please? it's set as medium, do we use that status?
[16:08] <henninge> Ursinha: in rosetta we do ;)
[16:08] <Ursinha> rockstar, hm, can you check that too?
[16:08] <rockstar> Code team does, for things of medium importance.
[16:08] <bigjools> as does Soyuz
[16:08] <herb> rockstar: it was cherry picked on 2009-05-09
[16:08] <Ursinha> I rarely see medium statuses :) that's why I'm asking
[16:08] <Ursinha> thanks herb
[16:08] <rockstar> herb: cool, thanks.
[16:08] <Ursinha> [action] henninge to check with danilo the status of bug 302449
[16:08] <MootBot> ACTION received:  henninge to check with danilo the status of bug 302449
[16:08] <Ursinha> cool
[16:08] <Ursinha> moving on then
[16:08] <Ursinha> [TOPIC] * Oops report & Critical Bugs
[16:08] <Ursinha> there's only one worth mentioning, that is the one causing the InterfaceError oopses, we're still having lots and lots of occurrences (bugs 374909 and 376207), seems to be worked on by jamesh, is that correct flacoste?
[16:08] <MootBot> New Topic:  * Oops report & Critical Bugs
[16:11] <flacoste> so, jamesh is working on 374909
[16:11] <flacoste> and the other one also
[16:11] <Ursinha> right
[16:11] <flacoste> but stuart has an easy fix for the latter, which I'll likely ask to be cherrypicked
[16:11] <flacoste> we can deploy jamesh's proper fix during next roll-out
[16:11] <Ursinha> flacoste, hm, good
[16:11] <Ursinha> flacoste, about the cp, when do you think it can be done?
[16:11] <flacoste> i didn't look at the branch
[16:11] <herb> 374909 is still cropping up from time to time, btw. though it's much gooder(tm) than it was last week.
[16:11] <flacoste> but once i approve it, as soon as the LOSA can take care of it
[16:12] <Ursinha> flacoste, right. okay
[16:13] <flacoste> i don't think i'll ask a C-P of the InterfaceError
[16:14] <flacoste> (storm fix being worked on by jamesh)
[16:14] <herb> flacoste: why?
[16:14] <flacoste> well, because that's not a root-cause
[16:14] <flacoste> so with the other fix in place, we shouldn't see it
[16:14] <herb> ok
[16:14] <flacoste> that fix is more prophylactic
[16:14] <Ursinha> flacoste, who's investigating the root cause?
[16:17] <flacoste> if you are talking about why we are getting disconnection errors in the first place
[16:17] <flacoste> nobody, really
[16:17] <flacoste> but we have a fix for the places where we should be trapping those
[16:17] <flacoste> and recovering
[16:17] <Ursinha> yes, the disconnection errors
[16:17] <flacoste> we have no idea why it's happening
[16:17] <flacoste> there is nothing in the DB logs
[16:17] <flacoste> about them
[16:17] <Ursinha> this is creepy
[16:17] <Ursinha> the fix you say you have
[16:18] <Ursinha> is inside the fixes for one of those bugs?
[16:18] <flacoste> yes
[16:18] <Ursinha> great
[16:18] <Ursinha> so, you'll ask a cp for the second bug
[16:18] <flacoste> exactly
[16:20] <Ursinha> [action] flacoste to ask a cp for fix for bug 376207 after reviewing it
[16:20] <MootBot> ACTION received:  flacoste to ask a cp for fix for bug 376207 after reviewing it
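The recovery approach flacoste describes above — trapping disconnection errors at the places where they surface and retrying against a fresh connection, rather than chasing the root cause — can be sketched roughly like this. This is an illustrative Python sketch only: the `DisconnectionError` class, `reconnect` hook, and retry policy are assumptions, not Launchpad's or Storm's actual code.

```python
class DisconnectionError(Exception):
    """Stand-in for a database driver's 'connection lost' error."""

def with_reconnect(op, reconnect, attempts=3):
    """Run op(); on a disconnection, reconnect and retry a few times.

    Re-raises the error if the final attempt still fails.
    """
    for attempt in range(attempts):
        try:
            return op()
        except DisconnectionError:
            if attempt == attempts - 1:
                raise
            reconnect()  # drop the stale connection, open a fresh one

# Minimal usage sketch: the first call fails, reconnect fixes it.
state = {"connected": False}

def op():
    if not state["connected"]:
        raise DisconnectionError("server closed the connection")
    return "row"

def reconnect():
    state["connected"] = True

print(with_reconnect(op, reconnect))  # recovers and prints "row"
```

The point of the pattern, as discussed above, is that it is prophylactic: it makes the callers resilient without explaining why the disconnections occur in the first place.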
[16:20] <Ursinha> awesome
[16:20] <Ursinha> we have one critical bug, being worked on
[16:20] <Ursinha> so, unless anyone has anything to point out, moving to the next section!
[16:20] <Ursinha> good
[16:20] <Ursinha> [TOPIC] * Operations report (mthaddon/herb/spm)
[16:20] <MootBot> New Topic:  * Operations report (mthaddon/herb/spm)
[16:20] <herb> 2009-05-07 - Cherry pick r8906 to the scripts server and r122 of storm to lpnet* & edge*
[16:20] <herb> 2009-05-09 - Cherry pick r4006 of bzr to the codehosting server and r123 of storm to lpnet* & edge*
[16:20] <herb> 2009-05-09 - Cherry pick r8348, r8312 to the PPA server and r8376 to lpnet*
[16:20] <herb> 2009-05-10 - mailman didn't have the necessary access to the DB server, but it was only noticed after restarting for log rotation. mailing lists were unavailable for approximately 7 hours.
[16:20] <herb> We still seem to be encountering bug #156453 and bug #118626, but the situation is much improved since the rollout.
[16:23] <herb> flacoste: cprov requested a rollout of the current production tree to cesium. Apparently there was a critical fix that was included while in RC, but we didn't re-roll to cesium. Can you (dis)approve that?
[16:23] <bigjools> herb: it's already approved by kiko
[16:23] <flacoste> herb: i think kiko did? but otherwise, i can look into it
[16:23] <flacoste> right
[16:23] <herb> bigjools: missed it.  thanks
[16:23] <Ursinha> any other questions to herb?
[16:23] <Ursinha> okay
[16:23] <Ursinha> [TOPIC] * DBA report (stub)
[16:23] <MootBot> New Topic:  * DBA report (stub)
[16:23] <rockstar> herb: I was under the impression that the loggerhead stuff was WAY better than "much improved"
[16:23] <flacoste> stub sent it to the list
[16:23] <herb> rockstar: we're still restarting a couple of times a week. which is much improved over a couple of times a day.
[16:23] <herb> rockstar: order of magnitude.
[16:23] <Ursinha> flacoste, to lp list? I can't seem to find it
[16:23] <rockstar> herb: okay.  Is it memory restarts, or hanging restarts?
[16:23] <flacoste> The ex-master database (launchpad_prod on hackberry) is bloated and
[16:23] <flacoste> started exceeding its free space map settings. Nothing really to worry
[16:23] <flacoste> about - it might cause bloat to spiral but I suspect not in this case.
[16:23] <flacoste> The losas can bounce it after shutting down the systems using it as a
[16:23] <flacoste> slave, and I've suggested using it as the standalone replica for
[16:23] <flacoste> read-only mode launchpad during the rollout because we then rebuild it
[16:23] <rockstar> (The memory restarts shouldn't be happening anymore)
[16:23] <flacoste> afterwards and it will be all nice and freshly packed.
[16:23] <flacoste> Nothing major to do with database patches this cycle. rockstar's
[16:23] <flacoste> bugbranch and specbranch column pruning needs to be cleared with Mark
[16:23] <flacoste> still.
[16:23] <Ursinha> thanks flacoste
[16:24] <rockstar> flacoste: yeah, and there are other branches dependent on that one.
[16:24] <herb> rockstar: The memory situation seems ok (ie. not death spiraling).  seems to be hangs at this point.
[16:25] <herb> rockstar: we still see it ~1.5GB resident, but doesn't seem to grow beyond that.
[16:25] <rockstar> herb: well, "death spiraling" to me is different than the memory issues.
[16:26] <rockstar> herb: it's going to be *kinda* memory intensive just because of what it's serving.
[16:26] <herb> rockstar: understood
[16:26] <herb> rockstar: as I said, much improved. 1.2 - 1.5G is much better than 3.7G
[16:26] <rockstar> herb: and we know the cause of the hangs, we just don't know how to fix it.
[16:26] <herb> rockstar: good news, bad news, eh?
[16:26] <rockstar> herb: yeah, something like that.
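The monitoring herb and rockstar discuss above — watching a process's resident memory and deciding whether it needs a restart — might look something like the following sketch. This is illustrative only: the 3 GiB threshold, the `/proc/<pid>/status` parsing, and the function names are assumptions for the example, not how the LOSAs actually monitor loggerhead.

```python
def rss_kib(status_text):
    """Extract VmRSS (resident set size, in KiB) from the text of
    a Linux /proc/<pid>/status file. Returns None if absent."""
    for line in status_text.splitlines():
        if line.startswith("VmRSS:"):
            return int(line.split()[1])  # value is reported in kB
    return None

def needs_restart(status_text, limit_gib=3.0):
    """True if the process's resident memory exceeds the limit."""
    rss = rss_kib(status_text)
    return rss is not None and rss > limit_gib * 1024 * 1024

# Sample status text mimicking the ~1.5 GiB resident size herb reports.
sample = "Name:\tloggerhead\nVmRSS:\t  1572864 kB\n"
print(needs_restart(sample))  # 1.5 GiB is under the 3 GiB limit -> False
```

A check like this catches the memory side of the problem; as rockstar notes, the hangs are a separate failure mode that a size threshold alone would not detect.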
[16:27] <Ursinha> okay. anyone else want to say something?
[16:27] <Ursinha> 5
[16:27] <Ursinha> 4
[16:28] <Ursinha> 3
[16:28] <Ursinha> 2
[16:28] <Ursinha> 1
[16:28] <Ursinha> Thank you all for attending this week's Launchpad Production Meeting. See the channel topic for the location of the logs.
[16:28] <Ursinha> #endmeeting
[16:28] <MootBot> Meeting finished at 10:27.
[16:28] <Ursinha> thanks all
[16:28] <henninge> thanks Ursinha ;)
[16:28] <Ursinha> henninge, :)