[00:35] is there a way to move a PPA from a team to another or do we need to create a new PPA and copy all the packages over? [00:38] new ppa + copy [00:42] hm, ok. thanks lifeless [02:36] StevenK: what's up? [02:38] rick_h: sidnei marked my MP as Needs Fixing :-( [02:38] I didn't think combo_app() was actually tested. [02:40] StevenK: yea, it is [02:40] the TestApp() wraps combo_app [02:40] and loads it as a wsgi appliaction that gets tested in the tests/test_combo.py [02:41] StevenK: not sure what he wants test-wise, the current tests make sure combo_app functions, I suppose a test mounting _application in TestApp would work [02:41] StevenK: and yea, I kind of agree with him that it'd probably be best to not _ it if we're going to import it [02:42] Right [02:42] I also think I didn't explain it the best in the MP, if he's asking "Why do you need this?" [02:42] StevenK: yea, I didn't follow it either until you showed me how we were changing out code side [02:43] though I think that the reason of "combo_app should be able to be wrapped as a wsgi app" is enough reason [02:43] Right, so pasting our WSGI wrapper might help [02:43] StevenK: yea, at least in the MP so there's a record I guess. Technically it's the more 'correct' way anyway. [02:43] rick_h: I'll look at sorting it out after lunch. [02:44] StevenK: ok, thanks [02:44] I didn't see his response so sorry I didn't catch him during hte day with it [02:44] just ping'd to let him know we needed it and ask him to take a peek [04:39] * StevenK tries to work out how to run convoy's test suite [04:46] Grarrrr [04:46] * wgrant mauls the branch scanner [04:46] * wgrant chainsaws the branch scanner [04:47] Haha [04:47] Whyfor? [04:48] It is slow. [04:48] It holds locks. [04:48] It randomly hangs. [04:48] It's like Soyuz, except more unreliable and with a simpler task. [04:48] wgrant: Can't bug 910492 be closed? [04:48] <_mup_> Bug #910492: long urls break lazr restful object representation cache < https://launchpad.net/bugs/910492 > [04:49] StevenK: Done. [04:49] # Bug heat increases by a quarter of the maximum bug heat [04:49] # divided by the number of days since the bug's creation date. [04:49] wut [04:49] lifeless: Bug heat confuses me [04:51] I expect there are plentiful lies by now in the code [04:51] due to age [04:51] Ha ha [04:53] Hm [04:54] wgrant: while you're in a chainsawing slow services mindset, may I direct your energies towards.... checkwatches? [04:54] or have I trolled too far? [04:54] Hey, it hasn't hung in weeks. [04:55] is it running? [04:56] Unfortunately. [04:56] :-) [05:40] wgrant: O hai. So given https://code.launchpad.net/~launchpad/+recipe/launchpad-convoy . If I push changes to the packaging branch does that mean it will build 0.2.1-0~19-oneiric1 again? [05:42] StevenK: Yes [05:42] StevenK: The packaging revno is not included in that version template. [05:42] Also, why do both thumper and StevenK ask me about recipes, when it was their project :( [05:42] Haha [05:42] wgrant: What would you recommend? [05:43] wgrant: You know, the brain supresses bad memories ... [05:47] StevenK: Either commit to trunk or include the packaging revno in the template (possibly temporarily) [05:50] wgrant: I don't want the packaging in trunk [05:52] StevenK: I never suggested that :) [05:53] I could pull out lp:convoy, but then I/someone else has to merge lp:convoy into the packaging branch and push it [05:53] What about one of the options I gave? 
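A minimal sketch of the kind of test discussed above for convoy's combo_app: wrap it as a WSGI application inside a TestApp and exercise it. The use of webtest, the convoy.combo import path, combo_app's signature (taking the static root directory) and the "?a.js" query convention are all assumptions made for illustration, not details taken from the MP or from tests/test_combo.py.

    import os
    import tempfile

    from webtest import TestApp           # assumption: a webtest-style TestApp
    from convoy.combo import combo_app    # assumed import path

    def test_combo_app_is_wsgi_wrappable():
        # Serve a throwaway directory with one JS file through combo_app.
        root = tempfile.mkdtemp()
        with open(os.path.join(root, 'a.js'), 'w') as f:
            f.write('// a\n')
        app = TestApp(combo_app(root))     # combo_app assumed to take the static root
        response = app.get('/?a.js')       # combo-loader style query string
        assert response.status_int == 200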
[05:55] I'm just not sure how/where to inject the packaging revno [05:55] Without screwing up upgrades [06:00] Let me find one of mine. [06:01] StevenK: https://code.launchpad.net/~wgrant/+recipe/ivle-trunk [06:04] Why +dr? [06:04] debian revision [06:04] It's arbitrary. [06:04] Right [06:05] StevenK: https://code.launchpad.net/~wgrant/launchpad/stop-this-aging-nonsense/+merge/91763 [06:06] It's over 9000! [06:07] (IE, r=me) [06:07] Thanks. [06:09] I thought bug heat was already done in the DB [06:10] The calculation is a PL/Python function, if that's what you mean. [06:11] Right [06:13] lol. stop this aging nonsense. [06:13] Yes. wgrant will forever be 16. [06:13] hahaha [06:14] wgrant is the Australian vampire I suppose. :P [06:14] StevenK: ^ [06:14] Haha [06:18] He doesn't sparkle though. [06:18] Spoken like a Twilight fan [06:19] Heh === almaisan-away is now known as al-maisan [07:27] stub: Oh, didn't know that was possible. Thanks. [07:29] The planner gets to inline SQL functions into SQL queries. Not sure if it will help in this case. [07:30] Unlikely, since the surrounding Python is terrible. [07:30] But it's something. [08:50] good morning [10:30] jml: who do we ping to get your updated testtools in the archive? Do you know who packages it? [10:30] bigjools: it gets imported from Debian, lifeless is the maintainer there. [10:31] jml: ok ta [10:31] jml: there's an ubuntu-specific version in the archive at the moment [10:31] oh really? [10:31] jml: 0.9.11-1ubuntu1 [10:31] * jml should really pay more attention to downstreams. [10:31] I haven't looked at its local patch [10:32] heh, debian has 0.9.11-1 === al-maisan is now known as almaisan-away [10:44] " * Build using dh_python2." [10:44] from doko [10:44] I guess it's not much of a patch :) [10:44] (incidentally, well done LP for making that easy to find out: https://launchpad.net/ubuntu/+source/python-testtools) [10:46] \m/ [11:03] gmb, wgrant: could you review this MP: https://code.launchpad.net/~adeuring/launchpad/bug-829074-ui/+merge/91796? [11:04] adeuring: Sure thing. === gmb changed the topic of #launchpad-dev to: https://dev.launchpad.net/ | On call reviewer: gmb | Firefighting: - | Critical bugtasks: 4*10^2 [11:05] gmb: thanks! [11:05] adeuring: Just finishing another branch, but I'll get to it presently. [11:48] adeuring: Looks good. r=me. [11:48] gmb: thanks! [11:49] Welcome :) [11:52] adeuring: I think your change in r14748 does require QA. [11:53] StevenK: yes and no -- the point is that the new featrues can't be used yet. The branch just reviewed by Graham will make that easier [12:01] morning [12:01] morning rick_h [12:16] bigjools: wtf, just read that kubuntu is being killed :-( [12:18] wallyworld_: it no longer has a dedicated canonical engineer working on it (as was the case when riddell was on rotation to Bazaar) [12:18] wallyworld_: that's not quite the same as it being killed [12:19] jelmer: the net effect will be the same i fear [12:20] they seem to've done fine for 11.10 when jonathan was on rotation to bzr, and {x,edu,l}ubuntu seem to do fine with just infrastructure support too [12:21] not saying it won't have a negative impact [12:21] I think it would take much more than a single bullet to kill of kubuntu. [12:21] yeah, maybe i'm being too pessimistic [12:21] just a bit sad i guess [12:21] wallyworld_: not killed [12:22] wallyworld_: I think it's a good thing actually [12:22] really? 
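For reference, a purely illustrative Python rendering of the age-based heat bonus quoted earlier in the log ("a quarter of the maximum bug heat divided by the number of days since the bug's creation date"). The real calculation was a PL/Python function in the database, and the stop-this-aging-nonsense branch reviewed above removes this component; the function and argument names here are invented.

    def age_bonus(max_heat, days_since_creation):
        # "Bug heat increases by a quarter of the maximum bug heat
        #  divided by the number of days since the bug's creation date."
        return (max_heat / 4.0) / max(days_since_creation, 1)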
[12:23] yeah, it means any criticism about it will need to be levelled at the community, not canonical [12:23] true [12:23] and the community will not be encumbered by anything [12:26] StevenK: can the combo-url land? Or we waiting on RT? [12:28] rick_h: We are waiting for the convoy MP. [12:28] StevenK: ok cool. [12:30] rick_h: If that lands, then I can update the convoy package, and land combo-url [13:36] adeuring: do you know anyone that knows translations well? [13:36] rick_h: jtv for example [13:36] adeuring: I'm trying to find some way to mass download .pot files without any success in wiki/google [13:36] adeuring: ok, thanks [13:57] jtv: ping, got a sec for a translation question? someone is asking about mass downloading all ubuntu .pot files for spanish languages? [13:58] jtv: I don't see any way to mass download from the wiki/webui. I see a lp-translations tools package that seems to do mass uploads though, but code doesn't seem to download? [13:58] adeuring: so did maint. RT, questions, translations, and new projects [13:59] rick_h: cool -- i sucked again :( [13:59] bah, missed jtv [14:01] Morning, all. [14:05] morning deryck === almaisan-away is now known as al-maisan [14:10] adeuring, rick_h -- I'd like to do a G+ hangout for our standup today. [14:11] deryck: sounds like a plan === matsubara is now known as matsubara-lunch [14:16] deryck: gahh -- i still have no g+ account :( [14:16] adeuring, see my PM to you. :) [14:33] abentley, we're G+ hanging out today for standup. [15:17] jcsackett, sinzui: A branch's unique name is a well-established term. Unique names do not include the lp: prefix. [15:17] my apologies [15:18] sinzui: np, just let's keep the definition consistent. [15:26] what's happening with bug heat? [15:26] I thought it was going to be removed - is it going to stay around in some form (given it has just changed)? === matsubara-lunch is now known as matsubara [15:30] abentley, sinzui: so do we need to roll that back? [15:31] jcsackett: No, but you should change the function name, or else change where you attach the prefix. [15:31] abentley: ok, i'll land a follow up to correct the name. [15:31] jcsackett: thanks. [15:34] jcsackett: You should also change the HTML so it's not called #branch-unique-name. [15:34] * jcsackett nods [15:34] seems odd unique name was ever used, as it's all about presenting the location, which incorporates the unique name but isn't the same thing at all. [15:35] jcsackett: I don't know where this is, but it's possible the authors thought it was about presenting the name, not the location. [15:36] abentley: fair. could be it was misappropriated for presenting the location later. [15:38] jcsackett: Nope, looks like it was always presenting the name and calling it the location. [15:38] right on. well, i'll be making it consistent shortly. [15:39] jcsackett: I guess you could stick the lp: in the template. [15:39] possibly, but it would have to exist outside of the node, since that gets set by the js. [15:39] seems a might bit hackish. [15:42] jcsackett: Yes, it's treating HTML as a template language. [16:18] jelmer: https://blog.launchpad.net/general/bugheatchange [16:20] abentley: ah, thanks - missed that for some reason [16:26] danhg, talky talky time? [16:47] Hey sinzui, I'm in the middle of MaaS tests, I should be free by 18:00 GMT? [16:47] okay [17:29] adeuring, hey, any luck on those interrupt duties? (a friendly ping from one slacker to another. ;) [17:30] deryck: sgh... goit distracted again by working on a branch... 
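A tiny sketch of the naming point settled above: a branch's unique name carries no lp: prefix, and the prefix belongs wherever the location is presented (template or JS), not in the name itself. The class and attribute names below are illustrative, not real Launchpad code.

    class BranchPresenter(object):
        def __init__(self, unique_name):
            # e.g. "~launchpad/convoy/packaging" -- never prefixed
            self.unique_name = unique_name

        @property
        def location(self):
            # The user-facing location: the "lp:" prefix is attached here,
            # at presentation time, rather than stored in unique_name.
            return "lp:" + self.unique_name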
[17:52] deryck: hey [17:59] hi lifeless [18:00] how is your schedule to-day? [18:01] lifeless, unfortunately, my wife ninja-scheduled me for the dentist. and I need to leave a little early today. [18:01] she knows I'm a baby and need her to hold my hand. [18:04] deryck: I take it that that is in a few minutes time? If it was say 40-50 minutes away we could do a quick call... [18:05] lifeless, no, it's actually couple hours away still. I'm just heads down trying to finish these hanging actions from TL call. [18:05] lifeless, I'd really love to chat, but I don't want to be yet another day working on these "investigations" either. :) [18:06] lifeless, how about tomorrow post TL call? [18:06] IIRC I was going to give you a hand with one of them [18:06] lifeless, you gave me enough of I hand, I think. I filed a bug this morning. [18:06] uhm, I have no idea, let me check (the new time has thrown out my memosied schedule) [18:06] let me see bug number.... [18:06] lifeless, bug 928327 [18:06] <_mup_> Bug #928327: codebrowse hangs due to exception/oops handling < https://launchpad.net/bugs/928327 > [18:07] lifeless, my guess/diagnose could easily be wrong ^^ so I appreciate you looking at the bug. [18:07] gary_poster: we have a parallel testing biweekly thing conflicting with the TL new time [18:07] deryck: I have a slot *before* the tl meeting; after has my 1:1 with statik [18:08] lifeless, that works for me better actually. forgot about the TL call time shift. [18:09] deryck: why do you think oops is implicated ? [18:10] the hang seems to be in oops_middleware [18:10] I don't follow - oops_middleware is in the call stack yes, but its a WSGI middleware, so it will always be so. [18:11] thread 11 in https://pastebin.canonical.com/59603/ is in the middle of a global GC run [18:12] but the other one has no GC in it, so either different cases, or not GC. [18:12] lifeless, so I saw the threads that seemed stuck in sock_sendall had stuff happening in httpexceptions and oops_middleware.... [18:12] lifeless, so I just assumed something was hanging in dealing with an oops. [18:13] so this, for instance: [18:13] #6 0x00000000004fa67b in sock_sendall (s=0xa8e4ba0, args=) from ../Modules/socketmodule.c [18:13] #7 0x00000000004a7c5e in call_function () from ../Python/ceval.c [18:13] /usr/lib/python2.6/socket.py (282): flush [18:13] /usr/lib/python2.6/socket.py (292): write [18:13] /srv/codebrowse.launchpad.net/production/launchpad2-rev-14640/eggs/Paste-1.7.2-py2.6.egg/paste/httpserver.py (123): wsgi_write_chunk [18:13] /srv/codebrowse.launchpad.net/production/launchpad2-rev-14640/eggs/oops_wsgi-0.0.8-py2.6.egg/oops_wsgi/middleware.py (131): oops_write [18:14] ? [18:15] deryck: ^ [18:16] lifeless, indeed. that's what I meant. [18:17] deryck: ok, so the way wsgi works means that every layer that offers facilities will /tend/ to have its own 'write' callable that is passed down. [18:17] lifeless, yeah I noticed [18:17] lifeless, I think TL wins ;-) [18:17] oops_write is the callable passed from the oops middleware to the next deeper wsgi thing [18:18] and wsgi_write_chunk is the callable that was returned by the paste http server [18:18] flacoste, lifeless, should we move parallel testing to 4PM Eastern, 21 UTC? [18:18] http_exceptions etc [18:18] Wed still? [18:18] gary_poster: is that 1 hour later or something? [18:18] lifeless, ah, ok. Didn't realize that. [18:18] lifeless, right [18:18] gary_poster: I have a call with statik then [18:19] lifeless, doesn't parallel testing take precedence? 
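A minimal sketch of the WSGI layering just described: start_response returns a write callable, and each middleware layer can hand the application its own wrapper around the deeper layer's write, which is why both oops_write and wsgi_write_chunk appear in the same backtrace. This is a generic illustration of the pattern, not the actual oops_wsgi or Paste code; all names are made up.

    def write_wrapping_middleware(app):
        """Wrap the write callable that the deeper WSGI layer hands back."""
        def middleware(environ, start_response):
            def wrapping_start_response(status, headers, exc_info=None):
                inner_write = start_response(status, headers, exc_info)
                def write(chunk):
                    # A real layer might record the body for OOPS reports or
                    # count bytes here before delegating downward.
                    return inner_write(chunk)
                return write
            return app(environ, wrapping_start_response)
        return middleware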
;-) [18:19] lifeless, ok. I'll look at schedules and make another proposal later [18:19] gary_poster: I'm about a month out what with budapest, sickness, the QBR. [18:19] gary_poster: if I hadn't missed that many 1:1's I'd say sure... [18:19] lifeless, sure. np [18:19] but IME if you don't pin statik down with a nailgun .. :) [18:19] lifeless, so it seems those pastes are pretty useless then, if I understand that right. except for knowing we're stuck in socket send. or am I missing something? [18:20] deryck: well, we don't know we're stuck in socket send [18:20] ok [18:20] deryck: there are lots of threads, and some of them were writing content when the core was taken [18:20] we don't know how long they had been there [18:21] deryck: it *may* be that that is a smoking gun indicating e.g. network issues talking to haproxy or something [18:21] or it may be totally irrelevant [18:21] ah, gotcha. [18:21] deryck: lets go through in some detail tomorrow, for now I've gardened the bug to have just the definitive data [18:21] gary_poster: +1 [18:22] cool [18:22] lifeless, ok [18:22] deryck: note that sock_sendall is a python module, so it may well get involved in or mangled by GIL issues, bad locking etc [18:22] deryck: we may end up spelunking into C [18:23] deryck: that said, we're missing line numbers [18:23] sounds fun :) [18:23] deryck: what command did you use to get the traces ? [18:23] deryck: and did you get missing symbol errors when you fired up gdb in the chroot ? [18:23] lifeless, used pygdb. and no, I don't think so. I can look again now. [18:24] deryck: if you could, with regular gdb, uhm, 'thread apply all bt' and see if you get line numbers for the C frames [18:24] if you don't, then we haven't got the debug environment right [18:25] lifeless, ah, yes, that is better. line numbers indeed. [18:25] lifeless, could have sworn I did this and didn't get anything, and then tried pystack macros which hung. [18:26] lifeless, but may the regular bt attempt was when I was running locally still, and not in right env. [18:27] deryck: not to worry - you have line numbers now ;) - could you refresh the paste links in the bug ? [18:31] gmb: are you still ocr? [18:35] lifeless, done. === al-maisan is now known as almaisan-away [18:36] deryck: in bug 928327 ? I still see the old numbers. 
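A rough sketch of the debug setup being asked for here, for anyone replaying this later. The package name and paths are assumptions about a stock Ubuntu python2.6 deployment, not commands quoted from the log.

    $ sudo apt-get install python2.6-dbg   # or the matching -dbgsym ddeb, to get symbols
    $ gdb /usr/bin/python2.6 /path/to/core
    (gdb) thread apply all bt              # per-thread C backtraces; line numbers only
                                           # appear if the symbols match the interpreter
                                           # that produced the core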
[18:36] <_mup_> Bug #928327: codebrowse hangs in production < https://launchpad.net/bugs/928327 > [18:37] deryck: ahha [18:37] deryck: mmm, cluster lag [18:39] deryck: the trick for the source is apt-get source python2.6 in the chroot [18:40] deryck: so we can see that in https://pastebin.canonical.com/59625/ [18:41] thread 3 is in a libc call - n = send(s->sock_fd, buf, len, flags); [18:42] thread 7 is in the same libc call - send() [18:42] the content being written looks like fairly inane annotated pages [18:43] mysql-5.1-wl820 in thread3 [18:43] same branch in thread 7 [18:44] totally different thing in thread 8 - ~dcplusplus-team/dcplusplus/dcpp-plugins/revision/3/win32/PluginPage.h [18:44] and it renders pretty snappily [18:45] thread 10 is in the python zlib module [18:45] I've found race conditions / bugs in it before [18:45] so little alerts are going off for me [18:45] note that it is in PyEval_RestoreThread [18:46] see (http://docs.python.org/c-api/init.html) - in short, this is a common place for hangs [18:46] it means it will be trying to get the GIL [18:47] now, looking down its frames, that is in knit extraction [18:48] so this should be safe as long as loggerhead isn't sharing the same objects across threads [18:48] (it may be safe if the objects are being shared, but its less of an automatic assumption) [18:49] gary_poster: how about at the old TL call position? [18:49] threads 14,13,12 10 are all waiting on the GIL [18:50] (determined by taking the GIL lock which the call to RestoreThread identifies and searching fo rit [18:51] lifeless, interesting. I had to read back a few times, but I follow now. I feel +2 times smarter now. :) [18:51] if you check the code for PyEval_RestoreThread you can see how I got the GIL lock just from the backtrace [18:51] because the only lock it tries to get is the GIL [18:52] thread 11 is doing a GC [18:52] this means thread 11 holds the GIL [18:52] the threads that are in sock_sendall have released the GIL [18:52] (line 2723 in socketmodule.c is wrapped in Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS [18:53] again - see http://docs.python.org/c-api/init.html - that means they have released the GIL [18:53] so thats all the threads [18:53] the other threads have noise in their stack [18:53] I *suspect* they are killed threads by the paste thread killing code [18:54] e.g. dead but not joined yet [18:54] now, if the server has been attempted to shutdown [18:54] but hasn't gone [18:54] this would explain why there is no main thread visible [18:55] (thread 1 shows [18:55] Thread 1 (Thread 27332): [18:55] #0 0x00002b34765e5ebc in ?? () [18:55] #1 0x0000000000000000 in ?? 
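A small runnable illustration (Unix-only, standard library) of the Py_BEGIN_ALLOW_THREADS point above: a thread blocked inside a C-level socket call has released the GIL, so other threads keep running, which is why the threads parked in sock_sendall are not, by themselves, what stops the process.

    import socket
    import threading
    import time

    a, b = socket.socketpair()   # Unix-only: a connected pair of sockets

    def blocked_reader():
        # recv() blocks in C with the GIL released.
        a.recv(1)

    t = threading.Thread(target=blocked_reader)
    t.start()
    time.sleep(0.1)
    # The main thread still executes Python bytecode while the other thread
    # is parked inside the blocking C call.
    print("still running while the other thread blocks in recv()")
    b.send(b"x")   # unblock the reader so it can exit
    t.join()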
() [18:55] ) [18:55] deryck: ^ probably need to ask webops for the exact sequence of events leading up to the core to validate that theory [18:55] if we can validate the theory then we can make an interesting observation [18:56] which is that the listen event loop *has* shutdown properly; what is missing is cleanup of these other threads [18:56] which cannot happen until garbage collection completes [18:56] lifeless, right [18:56] well, not properly :P [18:57] there are 2 threads in sock operations, 4 threads waiting for the gil (and apparently fine otherwise) and 1 in gc with nothing sensible higher up its stack [18:58] thats a total of 7, but we'd expect 10 worker threads IIRC, plus mainloop [18:58] so I strongly suspect a SIGINT or something already sent [18:59] now, lets peek at the other core [19:00] thread 4 is in sendall [19:00] as is 9, same spot as they have all been [19:00] thread 12 is taking a threading lock [19:00] lets see [19:00] flacoste, sorry, just saw this. The old team lead call time is fine with me, but I thought lifeless would prefer not to meet that early. The later we make it, the less likely Europeans from my team can attend, and the easier it is for lifeless, AFAIK. [19:01] gary_poster: I can attend at that time, but we'll have to boot deryck :) [19:01] gary_poster: who claimed that spot like a flash, before [19:01] lifeless, heh [19:01] um [19:02] ok, I'll go look at the calendar... [19:02] deryck: you'd be ok ~ this time on your thursday ? [19:02] I'm more Batman than Flash. are we talking about me? :) [19:02] heh [19:02] lifeless, sure. === abentley changed the topic of #launchpad-dev to: https://dev.launchpad.net/ | On call reviewer: - | Firefighting: - | Critical bugtasks: 4*10^2 [19:02] ok, so deryck if you move your one +24 hours or so, and gary_poster you can move the paralleltests one an hour earlier. [19:03] lifeless, done! [19:03] thank you lifeless & deryck [19:03] deryck: thread 12 looks like its the implementation for a threaded queue or something (haven't checked the .py source yet) [19:04] flacoste, done on calendar [19:04] deryck: so its not going to be the gil that its waiting ok - in fact it has just released the gil (see threadmodule.c line 46) [19:05] deryck: it looks like it is waiting for another request to service, judging from the call stack [19:06] thread 13 is running some code *I think* [19:06] the python integration blew up though (frame 0 is the fail) [19:06] so we need to check the source to see if it holds the GIL [19:07] and indeed, line 943 is in the middle of the big opcode jump case statement [19:07] so, thread 13 holds the gil [19:08] and is processing a knit repository, which the other inventory access in the other core was doing as well [19:08] /~starbuggers/sakila-server/mysql-5.1-wl820/view/head:/plugin/java_udf/java_context_test.cc is the file [19:10] deryck, rick_h: Interrupt dutes done in less than an hour. Went down the whole list. [19:10] /~starbuggers/sakila-server/mysql-5.1-wl820/view/head:/plugin/java_udf/grokjni.pl was the inventory content the other core was doing [19:10] abentley, nice! I'll look forward to mine in an hour then. :) [19:11] abentley: rocking [19:11] deryck: coat tail rider :P [19:11] so, weak correlation there [19:11] rick_h, now you've finally figured me out. oh no, my secret is exposed! :) [19:12] thread 14 is another waiting-for-a-request [19:12] as is 15,16,17 [19:12] lifeless, ah, so dealing with the same objects in different threads. did I understand that right? 
[19:12] 18 [19:12] deryck: no, no indication of that yet; was noting that the same branch is being accessed from each core [19:12] so there may be something to do with that content [19:12] lifeless, ok [19:13] its also a 'knit' format branch which is bzr < 1.0's native format [19:13] I think, or something in that general area [19:13] 18 is waiting for a request [19:13] 19 and 20 too [19:14] 21 is waiting for the GIL [19:14] and its the actual mainloop - note the serve_forever () and the PyMain at the top of the stack [19:14] Py_Main I mean [19:15] deryck: I think https://pastebin.canonical.com/59626/ has two different bt's in it, its a little confusing [19:15] yeah, definitely does [19:17] my info is ok, because I started at the bottom which was indeed the other set of bt's [19:17] anyhow, what does this mean [19:18] 12,14,15,16,17,18,19,20 are workers waiting for a request, 21 is the mainloop, 13 is doing work - thats 9 waiting, one mainloop and one worker working [19:18] so this second core looks totally healthy and unstuck [19:20] threads 9 and 4 are a little worrying - that send() behaviour [19:20] but they don't hold the GIL [19:21] did I cut-n-paste wrong or something, to get different bt's? I thought I just scp'ed gdb and pasted straight as is. [19:21] gdb.txt, I meant. [19:21] there is nothing, assuming thread 13 would come alive again, stopping the healthy workers from serving more requests [19:21] deryck: https://pastebin.canonical.com/59626/ and https://pastebin.canonical.com/59625/ [19:21] deryck: compare the first four lines [19:21] deryck: and then the bottom four lines [19:22] the bottom four lines of 59625 appear in the middleish of 59626 [19:23] deryck: so, the core with happy workers has only one real issue and thats a busy thread; its possible that that isn't releasing the GIL for some reason, but just regular bzrlib code *should* give other threads timeslices [19:23] deryck: were both cores taken from hung loggerheads? How was hung determined ? [19:24] lifeless, that's a webops question. not sure. I can ask them. [19:24] deryck: the mysql urls in question both render near-instantly for me [19:25] http://bazaar.launchpad.net/~starbuggers/sakila-server/mysql-5.1-wl820/view/head:/plugin/java_udf/grokjni.pl and http://bazaar.launchpad.net/~starbuggers/sakila-server/mysql-5.1-wl820/view/head:/plugin/java_udf/java_context_test.cc [19:26] deryck: so, you may want to copy some of this to the bug; the bad news is I see now reason for the second process to appear hung, and the first process appears to have had its mainloop killed (e.g. via the OOM killer, manual SIGINT, whatever) and that *will* stop it serving. [19:26] deryck: we now need to track down more data around the state of both of the cores, to see if we can infer anything else. [19:27] deryck: I hope this has helped! [19:27] lifeless, I really don't mind copying this too the bug. but it's a lot of text. Would it be better for you to just summarize this briefly there? [19:27] just so I don't mis-represent. [19:28] one core has damaged (I suspect killed but not joined()) threads including a missing mainloop. The missing mainloop would on its own make it appear dead to haproxy. 
[19:29] It is in gc in another thread; one possible theory is it got too big memory wise and what we are looking at is damaged fallout from some attempt to recover it [19:29] the other core appears entirely healthy except for the oddness that stuff is stuck in send(); but that is normal if the OS buffer is full, which will happen if the internets are not brilliantly happy (because buffering affects the entire chain) [19:31] so we need to know for the first one, as much as we can about how it got to that state - were any sysadmin interventions applied first? (if so, the core doesn't represent the failure, it represents the failure + mangling) [19:32] for the second, we need to know the symptoms that were being reported [19:32] deryck: I suggest putting the transcript in an attachment for folk wanting to check the workings [19:37] lifeless, done [19:41] cool [19:41] and now, breakfast. === matsubara is now known as matsubara-afk [20:06] sinzui: is there anything we can do to fix private mailing list archive access? :( [20:07] barry: isd have a fix [20:07] barry: it is 'in deployment' [20:07] lifeless: excellent, thanks [20:08] lifeless, since when. [20:10] barry, lifeless bug 663923 give no indication there is a fix available [20:10] <_mup_> Bug #663923: Cannot view list archive of private team < https://launchpad.net/bugs/663923 > [20:11] * barry subscribes [20:11] I still believe grackle will be deployed and that bug will be fixed [20:13] sinzui: what's grackle? [20:13] barry, the archiver we are writing [20:14] ah, right. do you mean once grackle is deployed, you won't need the openid dance? [20:14] correct [20:15] cool [20:15] that'll be nice [20:15] heck, i might even switch to grackle in mm3 [20:17] barry, possibly. I think Cassandra should be a choice rather than a requirement. We can written an almost complete memory store implementation that could be subclassed to implement a sql or simple mbox implementation [20:17] s/We can written/We HAVE written/ [20:19] nice. what's the status of it? is code available? is it functional yet? [20:23] barry: We started work on it at the Thunderdome, but I haven't been involved since. [20:23] where are the branches? :) [20:23] barry: lp:grackle [20:24] * barry branches it for later [20:25] barry, I need one more day to complete the client. We can them complete the server in a few days [20:26] barry, all the code is in trunk https://code.launchpad.net/grackle [20:26] sinzui: since the ISD weekly report [20:27] thanks. i will definitely keep my eye on it [20:27] sinzui, it is also in my team's goals for Q4 [20:28] mars, thanks [20:28] sinzui: are you on isd-announce? [20:28] no [20:37] sinzui: I'm not sure how to get you on it; but it does have a aweekly summary of what ISD are up to that may be informative [20:38] lifeless, I do not need to be more involved. This issue will be closed soon [20:47] sinzui: bug 928391 [20:47] <_mup_> Bug #928391: ProgrammingError creating new team < https://launchpad.net/bugs/928391 > [20:47] sinzui: I think that that might be something your squad knows aboot [20:48] lifeless, i learn about it an hour ago. [20:48] My team will fix it [20:53] kk [21:02] dentist time, yuck [21:12] rick_h: have you closed bug #294656 ? [21:12] <_mup_> Bug #294656: Every page requests two JavaScript libraries (remove MochiKit) < https://launchpad.net/bugs/294656 > [21:13] abentley: ah, sorry. Guess that never got linked to the branch. 
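A small runnable demonstration (Unix-only, standard library) of the earlier point about send(): once the kernel buffers between sender and receiver fill up and nothing drains them, sendall() simply blocks in C, which is what the stuck threads in both cores look like. The buffer size and timeout below are arbitrary choices for the demo.

    import socket

    a, b = socket.socketpair()
    a.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)  # shrink the send buffer
    a.settimeout(2)   # give up after two seconds so the demo terminates

    try:
        while True:
            a.sendall(b"x" * 4096)   # nobody reads from b, so this eventually blocks
    except socket.timeout:
        print("kernel buffers full: sendall() blocked, like the threads stuck in send()")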
Yea, mochi is done and gone [22:27] sinzui: https://pastebin.canonical.com/59655/ [22:31] fuuuu [22:31] [22:31] From the isd team creation forbidden [22:32] g'morning wallyworld_, wgrant [22:33] jelmer: g'day [22:37] wallyworld_, are you running tip? I see what looks like a fix: https://code.launchpad.net/~bzr-pqm-devel/bzr-pqm/devel [22:38] sinzui: no, just whatever a default precise install provides. i'll try tip, thanks [22:38] Morning jelmer. [22:39] StevenK: Bug #928440 [22:39] <_mup_> Bug #928440: When attempting to create a new team, I'm told I am "Not allowed here" < https://launchpad.net/bugs/928440 > [22:39] See my comment [22:42] wallyworld_, or revert to -r 80. [22:43] sinzui: tip still breaks, so will try that rev [22:47] wallyworld_, the branch is in ec2 now [22:47] sinzui: and rev 80 breaks too. i need to see where RemoteBranch lives [22:47] thanks for landing [22:48] sinzui: the issue is the version of bzrlib [22:48] wallyworld_, yes, I think I am using the system lib [22:49] sinzui: makes sense. i am using the one from lp-sourcedeps [23:16] lifeless: Shall I start landing my heat incineration branches? [23:27] wgrant: yes [23:27] noone has flamed us AFAICT [23:27] * thumper flames lifeless [23:27] That was my thinking [23:27] * thumper flames wgrant and wallyworld for good measure [23:27] Uhoh [23:28] * thumper leaves again [23:28] thumper: what have i done this time? [23:28] wallyworld: I'm sure you know... [23:28] thumper: well, it could be one or soooo many things [23:28] s/or/of [23:29] lifeless: Does this count as removing complexity to offset disclosure? :P [23:29] wgrant: I can see we're going ot have fun with that [23:29] disclosure is offsetting user ticket complexity and performance suckiness too [23:29] It is [23:30] I am trying to respect the 5s rule with bug searches. [23:30] As much as I can. [23:30] anyhow [23:30] heat was signed off on by stakeholders including ubuntu, for the changes effective monday [23:31] I see no reason to wait an extended period [23:31] Sure. [23:31] The stakeholders aren't the only stakeholders, but indeed the outcry seems to be nonexistent. [23:31] wgrant: btw [23:31] Which is as I expected. [23:31] wgrant: you can't delete the garbo job straight away [23:31] Oh? [23:32] (he says, as he Ctrl+Cs the lp-landing of the garbo job removal) [23:32] exercise for the reader. You will facepalm. [23:32] tell me if you timeout ;) [23:33] rick_h: is bug 928500 your work? [23:33] <_mup_> Bug #928500: 'Series and Milestones' graph not loading - LPJS is not defined < https://launchpad.net/bugs/928500 > [23:33] lifeless: I don't see the issue. [23:33] wgrant: we are changing the rule for heat calculation to not include age. [23:34] wgrant: what process will we use to update bugs that *are not changed* to use the new rule ? [23:34] lifeless: I decided that we don't care enough. [23:34] Do we? [23:34] Well, if we can point any-and-all 'wtf' bug reports to you, sure. [23:35] I think its pretty cheap to let the garbo do one full scan post-heat-calculation-change, and it ensures that it is all consistent [23:35] Then I'll mark the bug as affecting and then not-affecting me, and then say "wtf" back because the value is correct :) [23:35] But true. [23:36] So, I guess I'll put the DB patch in a separate pipe and do that first. [23:37] \o/ [23:37] lifeless: Hm, [23:37] lifeless: Except that the updater never completes. 
[23:37] Bug #906193 [23:37] wgrant: I'm pretty sure it is incremental [23:37] Probably better to do a one-off [23:37] <_mup_> Bug #906193: BugHeatUpdater never completes < https://launchpad.net/bugs/906193 > [23:37] It's not [23:37] Oh [23:37] I guess it is [23:37] the warning was bogus, last I looked at it [23:37] it doesn't do a full scan in 1 hour [23:38] Yeah, true. [23:38] It probably never catches up, though. [23:38] Anyway, will land the DB patch without the garbo dropping. [23:38] let me check ze code [23:39] wgrant: so yeah - [23:39] def _outdated_bugs(self): [23:39] outdated_bugs = getUtility(IBugSet).getBugsWithOutdatedHeat( [23:39] self.max_heat_age) [23:39] But it seems to never finish. [23:39] Which means it is behind. [23:39] It's incremental, but probably never catches up. [23:40] your new function should be cheaper [23:40] I suspect it's better just to do a one-off four-line script to update everything. [23:40] It is about 5 times cheaper, true. [23:40] I don't have an opinion; script is fine if thats what you think is best [23:40] my reading of the code is that the heat updater will, on each run, do the lowest-id N older-than-X bugs [23:41] this could be always be behind but still hit everything [23:41] It does, yes. [23:41] Oh? [23:41] What if the first hundred thousand bugs get updated regularly? [23:41] The top 800000 never will [23:41] so lets say it takes Y days to become stale [23:42] mm, rephrase [23:42] runs hourly [23:42] in one hour, it does N bugs [23:42] if the number of bugs *becoming stale* per hours is greater than N [23:42] I wouldn't expect it to be, but it looks like isDone is never hit. [23:42] Which suggests that it is. [23:43] then after X days, it will have to do the first N again, and you'll have a loop over 24*Y*N bugs [23:43] any bug updated for other reasons within that Y period will perturbate the loop and get other bugs updated. [23:43] anyhoo; shrug. Like I say, choose the best use of your time w/curtis, and have fun.
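A back-of-the-envelope sketch of the argument above about the hourly BugHeatUpdater: a run that touches at most N bugs per hour only keeps up if bugs go stale more slowly than that; otherwise the backlog grows and isDone is never reached. All the numbers here are invented for illustration.

    bugs_total = 900000   # hypothetical number of bugs with heat
    n_per_run = 1000      # hypothetical batch size for one hourly garbo run
    stale_days = 30       # Y: days before a bug's heat counts as outdated

    stale_per_hour = bugs_total / (stale_days * 24.0)
    print("bugs going stale per hour: %.0f" % stale_per_hour)
    print("hourly batch size: %d" % n_per_run)
    print("updater keeps up: %s" % (stale_per_hour <= n_per_run))
    # If stale_per_hour exceeds n_per_run the backlog grows without bound,
    # which is consistent with the updater never reporting completion.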