=== salgado is now known as salgado-brb
=== salgado-brb is now known as salgado
=== mrevell is now known as mrevell-luncheon
=== mrevell-luncheon is now known as mrevell
=== salgado is now known as salgado-afk
=== salgado-afk is now known as salgado-lunch
=== matsubara-lunch is now known as matsubara
[16:00] #startmeeting
[16:00] Meeting started at 10:00. The chair is matsubara.
[16:00] Commands Available: [TOPIC], [IDEA], [ACTION], [AGREED], [LINK], [VOTE]
[16:00] Welcome to this week's Launchpad Production Meeting. For the next 45 minutes or so, we'll be coordinating the resolution of specific Launchpad bugs and issues.
[16:00] [TOPIC] Roll Call
[16:00] New Topic: Roll Call
[16:00] Not on the Launchpad Dev team? Welcome! Come "me" with the rest of us!
[16:00] me
[16:00] me
[16:00] me
[16:01] me
[16:01] me
[16:01] sorry mrjazzcat, I always forget to ping you about the meeting. I'll add you to the "Who should be here?" section if you don't mind
[16:01] yes, please
[16:01] on the MeetingAgenda page, I mean
[16:01] no worries
[16:02] [action] add brian to the list of attendees in the MeetingAgenda page
[16:02] ACTION received: add brian to the list of attendees in the MeetingAgenda page
[16:02] Ursula won't be around today
[16:02] and I'll be standing in for Gary
[16:02] rockstar, hi, around?
[16:03] allenap, hi
[16:03] well, let's move on and then Gavin and Paul can join in later
[16:03] [TOPIC] Agenda
[16:03] New Topic: Agenda
[16:03] * Actions from last meeting
[16:03] * Oops report & Critical Bugs & Broken scripts
[16:03] * Operations report (mthaddon/Chex/spm/mbarnett)
[16:03] * DBA report (stub)
[16:03] * Proposed items
[16:03] [TOPIC] * Actions from last meeting
[16:03] New Topic: * Actions from last meeting
[16:04] * allenap to dig the master bug of OOPS-1474EA771
[16:04] https://lp-oops.canonical.com/oops.py/?oopsid=1474EA771
[16:04] * salgado to take a look in the TypeError oopses (OOPS-1479S1000)
[16:04] * already did that, this is bug 403281, it happened because mthaddon was testing the new read-only switch on staging.
[16:04] * rockstar to take a look in OOPS-1480CMP1
[16:04] https://lp-oops.canonical.com/oops.py/?oopsid=1479S1000
[16:04] Launchpad bug 403281 in launchpad-foundations "public xmlrpc requests broken during read only period" [Undecided,Triaged] https://launchpad.net/bugs/403281
[16:04] https://lp-oops.canonical.com/oops.py/?oopsid=1480CMP1
[16:04] ok, so I'll re-add both items for allenap and rockstar
[16:05] [action] * allenap to dig the master bug of OOPS-1474EA771
[16:05] https://lp-oops.canonical.com/oops.py/?oopsid=1474EA771
[16:05] ACTION received: * allenap to dig the master bug of OOPS-1474EA771
[16:05] https://lp-oops.canonical.com/oops.py/?oopsid=1474EA771
[16:05] [action] * rockstar to take a look in OOPS-1480CMP1
[16:05] https://lp-oops.canonical.com/oops.py/?oopsid=1480CMP1
[16:05] ACTION received: * rockstar to take a look in OOPS-1480CMP1
[16:05] https://lp-oops.canonical.com/oops.py/?oopsid=1480CMP1
[16:05] [TOPIC] * Oops report & Critical Bugs & Broken scripts
[16:05] New Topic: * Oops report & Critical Bugs & Broken scripts
[16:05] we have some oops reports but most of them are foundations issues
[16:06] https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1488EA884
[16:06] Looks like an anonymous user is trying to do some operation which (s)he's not allowed to do. Should we really log an oops for this?
[16:06] maybe related to https://bugs.edge.launchpad.net/launchpad-foundations/+bug/271029
[16:06] More non-informational disconnection errors https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1489J147
[16:06] https://lp-oops.canonical.com/oops.py/?oopsid=1488EA884
[16:06] InternalError after the rollout https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1489C1094
[16:06] https://lp-oops.canonical.com/oops.py/?oopsid=1489J147
[16:06] Ubuntu bug 271029 in launchpad-foundations "ForbiddenAttribute exception raised changing property of object" [Medium,Triaged]
[16:06] https://lp-oops.canonical.com/oops.py/?oopsid=1489C1094
[16:06] code team, BranchMergeProposalExists https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1488EA174
[16:06] https://lp-oops.canonical.com/oops.py/?oopsid=1488EA174
[16:06] so, that's it and there's no one from Code to take a look at the BranchMergeProposalExists one
[16:06] [action] matsubara to email Tim about https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1488EA174
[16:06] https://lp-oops.canonical.com/oops.py/?oopsid=1488EA174
[16:06] ACTION received: matsubara to email Tim about https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1488EA174
[16:06] https://lp-oops.canonical.com/oops.py/?oopsid=1488EA174
[16:07] [action] matsubara to talk to leonard about https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1488EA884
[16:07] https://lp-oops.canonical.com/oops.py/?oopsid=1488EA884
[16:07] ACTION received: matsubara to talk to leonard about https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1488EA884
[16:07] https://lp-oops.canonical.com/oops.py/?oopsid=1488EA884
[16:07] [action] matsubara to talk to salgado about More non-informational disconnection errors https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1489J147
[16:07] https://lp-oops.canonical.com/oops.py/?oopsid=1489J147
[16:07] ACTION received: matsubara to talk to salgado about More non-informational disconnection errors https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1489J147
[16:07] https://lp-oops.canonical.com/oops.py/?oopsid=1489J147
[16:07] [action] matsubara to talk to stub or gary about InternalError after the rollout https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1489C1094
[16:07] https://lp-oops.canonical.com/oops.py/?oopsid=1489C1094
[16:07] ACTION received: matsubara to talk to stub or gary about InternalError after the rollout https://lp-oops.canonical.com/oops.py/?oopsid=OOPS-1489C1094
[16:07] https://lp-oops.canonical.com/oops.py/?oopsid=1489C1094
[16:07] lovely, looks like I'm running the meeting all by myself heheh
[16:07] me
[16:08] me
[16:08] :)
[16:08] on the broken scripts side
[16:08] sinzui, Scripts failed to run: loganberry:send-person-notifications seems to be broken
[16:09] sinzui, could you take a look and reply to the list?
[16:09] matsubara: all scripts appear to be broken
[16:09] all?
[16:09] They are not running and I am tempted to say something new was added that is taking forever and a day
[16:09] I only see notifications for send-person-notifications and garbo-hourly
[16:10] sinzui, can you confirm and reply to the list that's the case, at least for the send-person-notifications one?
[16:10] I'll ask losas and/or stub about garbo-hourly not running as well
[16:10] matsubara: Re. OOPS-1474EA771, it's bug 508302, and deryck is working on it today.
[16:10] Launchpad bug 508302 in malone "NotImplementedError OOPS when reporting a bug" [High,In progress] https://launchpad.net/bugs/508302
[16:10] https://lp-oops.canonical.com/oops.py/?oopsid=1474EA771
[16:10] thanks allenap, I'll adjust the bug link on that oops report
[16:11] [action] matsubara to fix bug link on OOPS-1474EA771 to point to bug 508302
[16:11] https://lp-oops.canonical.com/oops.py/?oopsid=1474EA771
[16:11] ACTION received: matsubara to fix bug link on OOPS-1474EA771 to point to bug 508302
[16:11] https://lp-oops.canonical.com/oops.py/?oopsid=1474EA771
[16:11] [action] sinzui to investigate failure on send-person-notifications and reply to the list with his findings
[16:11] ACTION received: sinzui to investigate failure on send-person-notifications and reply to the list with his findings
[16:13] btw, the updatebranches script also failed recently but that's been fixed by spm. the new rollout changed the script name and losas updated the notification thing to recognize the new name
[16:13] on the critical bugs side
[16:13] matsubara, updatebranches no longer runs.
[16:13] matsubara: er, not quite
[16:13] we have 3 critical bugs
[16:13] It's been replaced by scan_branches
[16:13] matsubara: we've had to revert it a bunch of times
[16:14] mthaddon, hmm no? spm's email seems to indicate that
[16:14] matsubara: spm went to bed a while ago - a new problem was discovered since then
[16:14] matsubara: abentley and Chex have been working on it
[16:14] oh, I was looking at this latest email to the list replying to one of the script failure notifications
[16:15] well, if they're already working on it, it's ok. :-)
[16:15] matsubara: not really...
[16:15] mthaddon, no? what else is expected?
[16:15] matsubara: as I understand it, we've reverted to the old script because we still don't know what was wrong
[16:15] matsubara: and the fact that we've reverted between the old and new scripts twice now on production is a problem in itself
[16:16] matsubara: and also the fact that the first we heard about the problem was from a user report
[16:16] mthaddon, I meant it's ok in the sense that people are already working on a solution and there's nothing much to be done during this meeting to have people act on it
[16:16] i.e. we don't have a good measure of when this problem is even happening
[16:17] matsubara: maybe not, but I'd like a bit of discussion about this class of problem and what can be done to prevent it in the future
[16:17] mthaddon: what is the exact problem that we need to be able to track? (sorry, I am not fully up to date on what broke)
[16:17] danilo_: aiui email notifications of branch updates failed to be sent out
[16:18] mthaddon: "reverted ...twice...on production": I think we all agree this sucks. However, AIUI, this was successfully QAd. Either the QA was bad, or staging is not close enough to prod in some way. I don't think we know yet.
[16:18] mthaddon, I'm unaware of the details as well. My expectation is that an IncidentLog will be filed and action to prevent it will be included in the incident log
[16:18] mthaddon: ah, right, that could have a bigger impact (it might be harming us in translations as well)
[16:19] matsubara: this doesn't really qualify as an incident log item since there's no measurable service that's been interrupted (we don't have any kind of nagios monitoring of this) - I guess I'm asking how we plan to approach it from here
[16:20] and how we got into this situation
[16:21] mthaddon, gary_poster, matsubara: we are obviously missing a dedicated "communications person" for this specific item (someone to keep the entire situation in check); we've discussed that approach before, it'd be nice to find someone who can offload the communication side from abentley and others working on it
[16:21] danilo_: to the degree there's a failure there (communications), it'd probably be mine as RM
[16:21] maybe we can have somebody else too
[16:21] but that's RM stuff
[16:22] but AIUI that's not the prob
[16:22] gary_poster, not necessarily, we discussed this in a TL call a few weeks (months?) back where we need someone to communicate with everyone
[16:22] maybe so
[16:22] but probs I see:
[16:23] gary_poster, it's mostly about having someone take responsibility for making sure problems are visible and we know what's going on
[16:23] - we didn't catch this on staging. Why?
[16:23] either QA was bad or staging is too diff
[16:23] we need to know why
[16:23] and fix it
[16:23] yep, I agree with that
[16:24] then also, unless I misunderstand, mthaddon is saying that we don't have an automated nagios-like process verifying basic success on production for this thing
[16:24] gary_poster, neither of those is easy to fix (one depends on people always DTRT, another on machines always DTRT), so we need to be able to easily find out when it's broken rather than wait for users to report it
[16:24] danilo_: but doesn't that depend on one of the three things I said? (people DTRT, machines DTRT, nagios-like-thing DTRT)
[16:25] gary_poster, it does, I was typing before you typed the last one :)
[16:25] gary_poster: it's possible we can't do that for *everything*, but if we decide this is a sufficiently important thing that we care about it if it fails, it sounds like we need to monitor it somehow, yeah (possibly we are already with OOPSes, but why didn't we catch it til a user told us about it?)
[16:25] :-) ok
[16:25] gary_poster, the 4th is lack of coordination and communication :)
[16:26] mthaddon: right. For me, this gets to my "too many different kinds of moving parts" in our architecture. If we have fewer moving parts then we can institute more uniform nagios-like-checks.
[16:26] maybe the jobs system can help with this
[16:26] anyway, gary_poster, I think we should just raise the importance of ensuring sufficient monitoring of this part of code-hosting by thumper, and we can be done with the topic
[16:27] maybe we can architect the jobs system to give us a nagios-like hook
[16:27] gary_poster, we don't have to solve the problem here :)
[16:27] because doing it with cron scripts is one-per-job
[16:27] danilo_, can you raise the topic in the next TL meeting?
[16:28] danilo_: ack. I kind of disagree with your summary though, and your action item, so that's why I'm continuing to blather :-)
[16:28] matsubara, we are having a week-long TL meeting next week, so it'd be best to action it for someone from the code team to pass it on to thumper, imho :)
[16:28] (IOW, this is not a problem for thumper, it is a problem for Björn, team leads, etc.)
[16:29] gary_poster, well, sure, I agree, but one step at a time
[16:29] matsubara: two action items: :-)
[16:29] [action] rockstar to raise the importance of ensuring sufficient monitoring of this part (i.e. branch update emails failing to be delivered) of code-hosting by thumper
[16:29] ACTION received: rockstar to raise the importance of ensuring sufficient monitoring of this part (i.e. branch update emails failing to be delivered) of code-hosting by thumper
[16:29] gary_poster, there's the immediate problem and then there's the elegant solution; I'm always for fixing the immediate problem first and having the elegant solution come out of that
[16:29] yeah, that's number one
[16:30] number two is gary to bring up architecture concerns to the team lead mtg :-)
[16:30] gary_poster, as for the other one, I think it ties in well with what we discussed today and what we'll want to discuss anyway
[16:30] [action] TLs + Bjorn to talk about "too many different kinds of moving parts" in our architecture. If we have fewer moving parts then we can institute more uniform nagios-like-checks.
[16:30] ACTION received: TLs + Bjorn to talk about "too many different kinds of moving parts" in our architecture. If we have fewer moving parts then we can institute more uniform nagios-like-checks.
[16:30] does that summarize it well?
[16:31] yeah thank you. though it's probably my action, since I'm the one with the bee in my bonnet :-) but that's fine
[16:31] gary_poster, matsubara: I don't like action items like that because they put no responsibility on anyone in particular, thus meaning that if they get done, they get done unrelated to the action item; thus, you don't really need it
[16:31] so give it to me :-)
[16:31] danilo_, I'll add it to gary's queue when I add the summary to the MeetingAgenda page
[16:31] gary_poster, heh, that's ok, I am certain we would have discussed this regardless of us having any particular action item
[16:32] matsubara, sure, thanks
[16:32] :-)
[16:32] it serves as a reminder as well
[16:32] anyway, thanks for the comments
[16:32] In fairness, the "not getting branch update emails" thing was because a rather large part of the code hosting system was made into a job.
[16:33] To whom are you being fair? :-)
[16:33] Never mind, I'll be quiet :-)
[16:33] :)
[16:33] we have 3 critical bugs, one in progress, one fix committed
[16:33] I'm not sure how "sufficient monitoring" would have fixed this.
[16:33] the other one is triaged, bug 511567
[16:33] Launchpad bug 511567 in launchpad-foundations "Can't remove authorised app" [Critical,Triaged] https://launchpad.net/bugs/511567
[16:33] gary_poster, to the code team in general.
[16:33] hmm
[16:33] rockstar, sufficient monitoring of scripts that do this
[16:33] that's a dupe
[16:33] and I filed that bug a few days ago
[16:34] danilo_, how so?
[16:34] or maybe I filed the dupe
[16:34] rockstar: ah, gotcha. Tim can beat us into shape at the TL sprint so we understand.
[16:34] gary_poster, yeah, I'll talk to him.
[16:34] cool
[16:34] rockstar, monitoring should have caught the problem (i.e. "hey, this script is failing"); I won't pretend to understand the entire problem, so we might be entirely off base, but we should be able to check our service level
"hey, this script is failing"); I won't pretend to understand the entire problem, so we might be entirely off base, but we should be able to check our service level [16:34] danilo_, there wasn't a script failing. [16:35] It ran fine, it was just a new script that had apparently left out some old functionality. [16:36] rockstar, right, never mind the "implementation details", the problem is: "why we didn't catch it before someone told us it's failing"; there's not necessarily a technical solution [16:38] matsubara, am I still on the channel? [16:38] yes [16:38] oh, ok, it's just everybody being quite :) [16:38] matsubara, I think we should go on [16:38] sorry, I was looking for a bug report to dupe against 511567 [16:38] anyway [16:38] thanks [16:39] [TOPIC] * Operations report (mthaddon/Chex/spm/mbarnett) [16:39] New Topic: * Operations report (mthaddon/Chex/spm/mbarnett) [16:41] hello? [16:41] Chex, mbarnett ? [16:41] sorry [16:42] sorry [16:42] here is the report [16:42] - LP rollout 10.01 Wednesday was successful: [16:42] : See https://wiki.canonical.com/InformationInfrastructure/OSA/LPRollout20100127 for more details. [16:42] : The read-only switch left idle connections to the master DB, it is currently being investigated [16:42] - New LP Appserver is online, some issues with internal access, but now everything is OK. [16:42] - New branch-scanner having issues, just reverted back to old again. Based on meeting dicsussion here, [16:42] continuing to address. [16:42] and thats all for us. Any questions/comments? [16:43] Chex, what's this new LP appserver online? I guess I'll have to tell oops-tools about oops reports from it? [16:43] [action] matsubara to update oops-tools to know about the new lp appserver [16:43] ACTION received: matsubara to update oops-tools to know about the new lp appserver [16:43] Chex: do you know if the new servers have access to the private librarian? [16:43] matsubara: soybean was recently put online as a replacement for gangotri + [16:44] noodles775: that was resolved earlier today [16:44] mbarnett, oh, so it's using the same config files? [16:44] A user was seeing about 1 in 4 requests to download a... ah, great, thanks! [16:44] matsubara: it took over lpnet1, lpnet2, and edge1 from gangotri, stole lpnet9 from gandwana, and added a sparkly new lpnet15 standard lpnet appserver [16:45] mbarnett, ok, it's the new lpnet15 instance I care about. I'll check the configs and update oops-tools accordingly [16:45] thanks [16:45] moving on [16:45] matsubara: thank you. [16:45] [TOPIC] * DBA report (stub) [16:45] New Topic: * DBA report (stub) [16:45] stub sent the report to the list [16:46] allenap, he mentioned something about checkwatches being very cpu intensive. it's probably of interest of the Bugs team [16:46] mars: deryck has just forwarded the message to me. [16:46] matsubara: ^ [16:46] thanks allenap [16:46] [TOPIC] * Proposed items [16:47] New Topic: * Proposed items [16:47] no proposed items [16:47] which brings this meeting to a close [16:47] Thank you all for attending this week's Launchpad Production Meeting. See https://dev.launchpad.net/MeetingAgenda for the logs. [16:47] and sorry for the delay [16:47] #endmeeting [16:47] Meeting finished at 10:47. === EdwinGrubbs is now known as Edwin-lunch === matsubara is now known as matsubara-afk === salgado is now known as salgado-afk