[00:27] do folks know which ports the juju client needs open?
[00:27] also I am working with mxc in #juju and he is hitting http://pastebin.com/AQArivJz
[00:28] doing a comparison, the next logical step from juju should have been juju.provider.azure environ.go:406 picked tools
[00:28] but that is where the bad request comes from
[00:29] does anyone know why juju.provider.azure environ.go:406 would fail with a bad request on bootstrap?
[00:31] arosales: um...
[00:31] arosales: the api/state ports to the server
[00:32] arosales: i'd poke axw when he starts about the bootstrap issue
[00:38] thumper, thanks i'll see if axw has any insights when he comes online
[00:39] thumper, re the api/state ports to the server
[00:39] yes...
[00:39] on the client
[00:40] does the juju-core binary need to call out on any specific port?
[00:40] probably also needs access to the charm store (not sure which port) if deploying standard charms
[00:41] aside from deploying, and even at that the client wouldn't do the deploying
[00:41] the default port for state is 37017 and api is 17070
[00:41] client in this context is where juju commands are being issued from
[00:41] right, the juju binary talks to the bootstrap node primarily over the api now (17070), but a few still use state directly (37017)
[00:53] thumper, gotcha, thanks for the info
[00:58] axw, hello
[00:58] arosales: howdy
[00:59] I am working with mxc in #juju and he is hitting http://pastebin.com/AQArivJz
[00:59] doing a comparison, the next logical step from juju should have been juju.provider.azure environ.go:406
[01:00] do you know why that section of code would fail with a 400: bad request
[01:00] axw, ^
[01:00] arosales: not sure, looking
[01:01] axw thanks
[01:01] I wasn't able to reproduce the issue he was seeing
[01:05] axw, I am going to step away from the computer but any insights you have would be appreciated. I think this early in the process he may have a cert config issue, but I am uncertain on how to confirm that.
[01:05] arosales: no worries, I will see what I can see
[01:06] arosales: is there a bug or something I can update?
[01:06] no mxc on #juju atm
[01:07] axw, unfortunately no, he only posted http://askubuntu.com/questions/388419/juju-bootstrap-fails-in-azure-badrequest-the-affinity-group-name-is-empty-or
[01:07] he was in #juju but left
[01:07] he also had a post to the juju mailing list
[01:08] axw, don't spend too much time, I thought I would just check with you on why that section of code was failing as a clue to what may have been going on.
[01:08] okey dokey, I will update the list if I find something, or file a bug if there is one
[01:08] axw, thanks
[01:08] arosales: sure, nps
=== smoser` is now known as smoser
[02:00] hazmat: There will be a couple of new APIs coming in 1.18 still, but we don't expect any existing ones to be removed. (status is added, PutCharm/Deploy/UpgradeCharm added, etc) MachineConfig is likely to stick around as is (though wasn't in 1.16)
[02:04] uh oh, I think my hard drive is failing
[02:06] * axw fscks, bbs
=== gary_poster is now known as gary_poster|away
[03:26] * thumper heads to the supermarket
[04:31] thumper, did you get my mail yesterday about cadmin?
[04:31] hi dimitern
[04:31] thumper, hey
[04:32] dimitern: yeah, I got it, but haven't looked yet, sorry
[04:32] thumper, no worries
[04:34] hatch, hey, you around?
[04:34] dimitern sort of :) what's up?
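
As an aside on the port question above: the client only makes outbound connections to the bootstrap node on those two ports, so nothing needs to be opened on the client side. A minimal sketch of opening them on the bootstrap node with ufw, assuming the default ports have not been changed and that you are managing the firewall by hand (Juju's cloud providers normally open these through security groups themselves):

    # On the bootstrap node: allow the Juju API port and the MongoDB/state port.
    # 17070 and 37017 are the defaults; adjust if they were overridden.
    sudo ufw allow 17070/tcp
    sudo ufw allow 37017/tcp
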
[04:35] hatch, I wanted to check with you about the charms upload implementation
[04:35] oh ok cool, so where are we on that?
[04:36] hatch, basically, we settled on no API calls, just a POST /charms?series= and a multipart/form-data body containing a single zip file with the charm, and returning a json response in the form {"code":, "error": "message", "charmUrl": "the url"}
[04:37] what about the auth?
[04:37] hatch, if there is an error, code and error are populated and charmUrl is missing; on success charmUrl is populated and error and code are missing
[04:37] hatch, basic http auth with a user-tag and password
[04:38] hatch, i'm finishing the proposal later today
[04:38] hmm
[04:39] so https post with username and secret?
[04:39] hatch, the implementation allows further development, like implementing GET /charms?url=&file=icon.svg, but not implemented for now
[04:40] hatch, the same creds as for Login() - tag and password, where the tag must be a "user-xxx"
[04:41] and what does this give us over the previously discussed pattern?
[04:41] hatch, it's the simplest approach
[04:43] The only part I'm concerned about is the http auth stuff
[04:45] not that we can't do it
[04:45] I can't remember how the auth data is handled across the various login methods
[04:45] I'll have to check when I'm back at my computer
[04:46] hatch, well, it's standard "Authorization" header containing `Basic realm="juju" `
[04:46] right but I'm not sure if we store that information or not
[04:46] hatch, ah, sorry `Basic ` no realm
[04:47] it's been quite a while since I looked at the login code
[04:47] hatch, it should be almost exactly the same
[04:47] I mean that the user will already be logged in
[04:47] so if we don't store their creds we will need to ask for them again
[04:48] hatch, hmm, yes, but you can generate the base64 token and store it in a cookie or something i guess
[04:48] right yeah
[04:49] hatch, anyway, all that is just a heads up for what's coming, not set in stone, we'll change accordingly to accommodate the gui as needed
[04:50] right, yeah thanks, I've got this all down and when I'm not so darn tired I'll take a look at what we have/need - but it sounds good :)
[04:50] thanks
[04:51] hatch, sure, sorry to bug you so late btw :)
[04:51] dimitern can you cc me on the proposal so I can take a look in the morning?
[04:51] haha no problem :) thanks for keeping me up to date
[04:52] hatch, I will update the document and send you a link
[04:52] great thanks
[04:52] have a good one!
[04:53] hatch, you too!
[05:06] * thumper has a "fuck yeah!" moment
[05:10] thumper, \o/
[05:11] dimitern: yay tests is all I can say
[05:11] dimitern: http://pastebin.ubuntu.com/6549442/ are the tests that are passing :-)
[05:12] thumper, :) looking
[05:12] dimitern: working on the server side of juju-run
[05:12] thumper, great!
[05:12] dimitern: so you can do something like this on any machine hosting a unit
[05:13] thumper, looking forward to trying it out when done
[05:13] juju-run my-unit/4 "some magic"
[05:13] inside cron
[05:13] so runs in a hook context etc...
[05:13] * thumper pops the stack and writes tests for uniter.RunCommands
[05:14] nice!
[05:15] actually, now might be a good time to stop for the day
[05:15] finish on a high and all that
[05:15] * thumper does some admin bits
[05:17] :)
[05:18] axw: i hope that failure is not related to not having a tty
[05:18] davecheney: which failure is that?
[05:19] davecheney: the azure/ec2/etc. bootstrap failure?
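
A rough sketch of how the charm-upload endpoint dimitern describes above could be exercised from a client, based only on that description; the form field name "charm", the "user-admin" tag, the port, and the -k flag (for Juju's self-signed certificate) are illustrative assumptions, not the final API:

    # Hypothetical use of the proposed POST /charms endpoint.
    # -u sends the Basic Authorization header built from the user tag and password.
    curl -k -u user-admin:PASSWORD \
        -F charm=@mycharm.zip \
        "https://<bootstrap-host>:17070/charms?series=precise"
    # Per the proposal, a success response carries only charmUrl,
    # and an error response carries only code and error.

Similarly, thumper's juju-run example above could eventually be wired into cron roughly as below; the feature is still being written at this point in the log, so the crontab line and the "log-level" config key are placeholders only:

    # Hypothetical /etc/crontab entry on a machine hosting my-unit/4.
    # juju-run executes the command inside that unit's hook context,
    # so hook tools such as config-get are available.
    */10 * * * * root juju-run my-unit/4 "config-get log-level"
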
[05:19] the same
[05:19] my money is still on timeouts
[05:19] davecheney: nfi, the gpgv thing is weird
[05:20] I've found a bunch of bug reports on this, and all the resolutions are "oh I just re-ran apt-get update and it fixed it"
[05:20] axw: do you pass ssh -t when dialing the bootstrap node?
[05:20] davecheney: yes, for sudo
[05:20] why would sudo require that
[05:20] ubuntu user doesn't need a password
[05:21] davecheney: it's needed for manual provisioning/bootstrap
[05:21] could be disabled for cloud provider
[05:21] nonsense
[05:22] davecheney: nonsense? it's needed because the ubuntu user doesn't necessarily exist on a machine you're manually bootstrapping
[05:22] axw: seriously ?
[05:22] oh ffs
[05:22] that's just peachy
[05:23] davecheney: why does that matter?
[05:23] i guess if you pass -t to ssh
[05:23] that is all we need
[05:23] yup
[05:25] night all
[07:57] jam, so, looks like we can pull the trigger on 1.16.5?
[07:57] fwereade_: I believe so. I don't have anything stuck in my head we're waiting for
=== rvba` is now known as rvba
[08:00] jam, \o/
[08:00] 1.17.0 on the other hand ... :)
[08:04] does anyone know if there are cloud (ec2, azure, ...) mirrors for cloud-archive?
[08:04] seems that installing mongodb-server from cloud archive is what takes so damn long on azure
[08:06] jam, yeah :(
[08:06] axw: I don't believe it is mirrored by anyone, ATM.
[08:06] jam, actually, hmm, I should make force-destroy-machine delegate actual removal to the provisioner
[08:07] jam, axw, who should we be talking to about that? ben howard?
[08:07] jam: okey dokey
[08:07] fwereade_: I think it is possible, but hard to mirror stuff that isn't in the central archive.
[08:08] I actually think the juju-mongodb proposal, possibly with V8 stripped out, will get us big wins here
[08:08] as we can make the package that gets installed a lot smaller
[08:08] (today we install 'mongodb' which gives us client and server, and client is 60-90MB)
[08:08] while I agree local mirrors will still be faster, I'm not sure how that works with non-Ubuntu archives
[08:09] (when we say "add cloud-archive:tools" how is that found in a mirror?)
=== axw_ is now known as axw
[08:30] jam, fwereade_: https://bugs.launchpad.net/juju-core/+bug/1259453 -- marked as Low, feel free to increase if you think it's worth pursuing
[08:30] <_mup_> Bug #1259453: Bootstrap is significantly delayed by installing mongodb-server from cloud-archive
[08:30] it won't be a problem with Trusty
[08:39] axw, I think precise is still pretty important, but it sounds like it may be hard to do
[08:39] yeah, probably
[09:58] * fwereade_ wrote some code!
[09:59] rogpeppe1, you're OCR -- https://codereview.appspot.com/39970043/ and https://codereview.appspot.com/37610044/
[09:59] rogpeppe1, they're identical
[09:59] fwereade_: looking
[10:04] fwereade_: dimitern did comment on it. if you set safe-mode: true will this actually stop instances?
[10:04] jam, safe mode distinguishes between "stopping" and "unknown"
[10:05] if anyone has spare cycles, this could do with a review sooner rather than later (it fixes a 1.17 critical): https://codereview.appspot.com/39940043/
[10:05] fwereade_: did you actually test this?
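
To illustrate the ssh -t exchange above: -t forces pseudo-terminal allocation, which is what lets sudo prompt for a password on a manually provisioned host where no passwordless ubuntu user exists. The user and host below are placeholders:

    # Without -t there is no tty, so sudo cannot prompt and fails;
    # with -t, ssh allocates a pty and the password prompt works.
    ssh -t someuser@manually-added-host "sudo apt-get update"
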
[10:05] fwereade_, so safe-mode is selectively destructive then :)
[10:05] * axw eods
[10:06] jam, I'm running it now, I *think* I'm being clever and parallel rather than slapdash ;p
[10:09] fwereade_: so this is 'juju status' in manual provisioning when you try to add, and it can create the machine in the DB but calls DestroyMachine http://paste.ubuntu.com/6550335/
[10:10] that is what I was saying about it ending up 'pending but dying'
[10:11] but never alive
[10:12] jam, so what is it that failed after the machine was created in the db?
[10:12] fwereade_: well, I haven't finished the manual stuff, so MachineConfig failed in the middle, and then it couldn't set up upstart, but the theoretical "something after allocating a machine id but before the agent is actually running"
[10:12] we try to clean up
[10:13] but I don't *quite* see the point, as the cleanup doesn't actually work
[10:13] jam, yeah, the problem is (I think) that an instance id is assigned so we don't fast-forward the destroy
[10:14] jam, I'm not really comfortable with differentiating based on the "manual:" bit
[10:14] fwereade_: well, we're in manual provisioning code right then, if we have a way to actually signal it should be destroyed
[10:14] jam, just because I'm sure that one day some provider will give us a real instance id starting with "manual:"
[10:14] we *could* call something to remove the instance id as we clean up
[10:15] jam, well, yeah, we can write the code :)
[10:16] jam, but, generally, removing an instance id is a pretty surprising thing to do
[10:16] jam, and I'm reluctant to normalise it
[10:17] jam, really we should have flagged manual instances in some way that wasn't just a magic string embedded in the instance id
[10:21] jam, plausibly we could start doing that now, though..?
[10:21] fwereade_: given that this is all about compatibility with 1.16 direct DB access, I'm not planning on changing DB internals just yet :)
[10:21] jam, rogpeppe1: fwiw, yes, the provisioner works as expected live
[10:21] I think it is fine, in the "we hit an error while manually provisioning" cleanup case, for us to do the steps we need to clean it up
[10:21] fwereade_: great
[10:22] fwereade_: we'll need to think what that looks like in the API-only case as well, because that is also broken in trunk
[10:22] so there is certainly "it is out of scope for today" but I think it is a bug to be fixed
[10:22] fwereade_: sorry, some context?
[10:23] jam, yeah, indeed -- an easy fix here is fine -- but for trunk, and going forward, I think we should be a bit more precise about specific machines' providers
[10:23] rogpeppe1, jam asked if I'd tested it live, I just finished doing so
[10:23] fwereade_: ah, cool
[10:24] fwereade_: i *thought* that's what you meant, but just checking
[10:24] jam, the trouble is that we still don't have an explicit concept of provider in state, it's still all gummed up with the environment
[10:25] fwereade_: anyway, I'm at the "file a bug, and get on with what I'm actually trying to solve" point.
[10:32] jam, yeah, that sounds reasonable
[10:34] fwereade_: so there isn't any way to actually get rid of the machine in 1.16.3, right? (We might be able to call ForceDestroyMachines if it was available)
[10:38] jam, I think that is so, yes
[10:39] jam, and with my change there still won't be in 1.16.5
[10:40] jam, it'll get to Dead but won't actually be removed, I think
[10:40] jam, would you bug it for 1.18 please?
[10:40] fwereade_: and I think we would still want to distinguish "I did get it set up, so I want the agent to clean up after itself" from "I created the record, but the agent will never come up"
[10:40] fwereade_: https://bugs.launchpad.net/juju-core/+bug/1259496
[10:40] <_mup_> Bug #1259496: juju add-machine ssh: may not clean up properly on failure
[10:41] jam, definitely, it's a very specific situation
[10:41] jam, I worry most about the code being abused once it exists
[10:42] fwereade_: bug #1259490 ... It sounds to me like he means "juju debug-log" is too verbose, but what he actually means is that the only way he has to see what is going on is an overly-verbose-for-debugging log
[10:42] <_mup_> Bug #1259490: juju-log in debug mode is too verbose
[10:45] jam, maybe -- the hook-tool invocation line is probably more a DEBUG-level thing -- but it sounds like what he actually wants is to set the log level to INFO..?
[10:45] It *might* be that he just wants to be able to INFO level filter the debug-log, but it sounds mostly like he's having trouble getting appropriate feedback.
[10:48] jam, rogpeppe1, standup?
[11:44] natefinch: still up for 1:1?
[11:44] natefinch, yep
[11:44] k, I'm there
[12:03] natefinch: your feed is paused, is it working for you?
[12:10] wow, had to hard-reset my laptop
[12:10] welcome back natefinch
[12:10] natefinch: I sent you an email, I'm not sure there's much to finish up
[12:11] jam: ok, cool. I sorta figured
[12:58] rogpeppe1: https://codereview.appspot.com/36540043/ is in again, looks and feels better now.
[12:59] rogpeppe1: thx for your review hints, helped a lot.
[12:59] TheMue: cool. looking.
[13:01] TheMue: hmm, seems like you didn't use bzr mv, which is a pity as i can't easily see the diffs from my last review
[13:01] rogpeppe1: I used bzr mv
=== gary_poster|away is now known as gary_poster
[13:01] TheMue: weird
[13:01] rogpeppe1: and rietveld discovers it correctly
[13:01] TheMue: oh well, it's probably just a rietveld thing
[13:02] rogpeppe1: our good old friend :D
[13:04] rogpeppe1: oh, interesting, it changed the patch sets in the display, I see your troubles :(
[14:11] okay, I'm back around reliably again now
[14:27] fwereade_, jam: charm upload's state operations https://codereview.appspot.com/40160043
[14:41] hey.
[14:41] general question...
[14:41] generic answer...
[14:41] what would people think about having an 'environment' file for jujud
[14:42] that does what?
[14:42] my motivation for this is right now 'lxc-create' uses 'ubuntu-cloudimg-query', which *can* read an environment variable to set a mirror.
[14:42] but there is no way to get environment variables into juju
[14:42] same is true for 'http_proxy' and 'https_proxy'
[14:42] which would be respected by utilities if they could be set.
[14:43] but putting stuff in /etc/environment doesn't percolate through to daemons started with upstart.
[14:59] smoser: I agree some way to chuck in extra cloud-init values seems really useful
[14:59] though, that also means the bypass that manual provisioning currently does needs to start using cloud-init...
[15:00] mgz, well, i wasn't talking about cloud-init specifically
[15:00] but for the manual provisioning, my solution for that is to actually use cloud-init
[15:01] (and cloud-init provides a consistent interface to "run cloud-init now")
[15:01] yeah, that would be good
[15:01] smoser: I'd prefer cloud-init extra config to random envvars
[15:02] well, cloud-config wouldn't even suffice here.
[15:02] mgz, well... outside of somewhat abusing it.
[15:02] cloud-init can set http-proxy for apt
[15:03] but doesn't do it for /etc/environment
[15:03] the only way to solve this would be to provide a boothook that dpkg-divert'd the lxc-create to a wrapper.
[15:06] some guy hacked around this previously by modifying his base image to have HTTP_PROXY set for the ubuntu user... but it seems like a really fragile way of saying you need to go out through a gateway for your cloud
[15:13] rogpeppe1: any hints on debugging "panic: Session already closed" in tests for status cmd after I've been fiddling with the api bits?
[15:14] mgz: what does the traceback look like?
[15:15] I shall pastebin
[15:16] rogpeppe1: http://paste.ubuntu.com/6551589/
[15:17] TheMue: you've got a review BTW
[15:19] mgz: i think i'd start by trying to find out when the state.State was closed
[15:20] mgz: as i think that's probably the only way that that panic can happen
[15:20] mgz: usual drill: add some printfs...
[15:21] it's a bugger as it's one giant table test, so I can't really do a minimal run...
[15:22] rogpeppe1: just seen, thx
[15:22] mgz: you could see if you get the issue with all but two of the tests omitted
[15:29] rogpeppe1: heh, if I only use the fallback path, it works
[15:29] mgz: the fallback path?
[15:29] probably something in the testing reset state logic breaks the api
[15:29] fallback to direct state access
[15:29] mgz: ah yes
[15:29] mgz: yeah, probably
[15:31] kills some pingers and calls JujuConnSuite.Reset()...
[15:39] rogpeppe1: okay, it involves how I'm using conn from inside the apiserver...
[15:40] mgz: oh yes?
[15:40] if I just don't close it after creating one, everything is fine... but that seems like a leak?
[15:41] rogpeppe1: http://paste.ubuntu.com/6551726/ how bogus is that?
[15:42] mgz: were you closing the Conn?
[15:43] mgz: if so, that was definitely the reason for the problem
[15:43] mgz: NewConnFromState just shares the State that's passed into it
[15:43] I was, doubtingly, and not closing it does indeed help. but if I'm calling New... every api call, what's doing the c... okay, ace
[15:44] thanks!
[15:44] mgz: np
[16:32] dimitern: you've got a review https://codereview.appspot.com/40160043/
[16:33] rogpeppe1, cheers!
[16:38] another fun mystery...
[16:38] switching to the api has led to a test mismatch, where setting a machine's agent state is expected to be:
[16:39] "agent-state":"started"
[16:39] but comes out as:
[16:39] "agent-state":"down", "agent-state-info":"(started)"
[16:39] mgz: is that a function of instance state?
[16:41] it's all entwined in a bunch of declarative testing stuff, but I can't quite see how I have affected it at all with the api...
[16:44] may well just be a fixup error I made somewhere, in which case I'm happy for the test... but it's not obvious where the problem lies
[16:49] hm, more likely, the provider bit is just not working
[16:54] what could cause AgentAlive to be unhappy...
[17:15] rogpeppe1, fwereade_, next (and last for today) CL https://codereview.appspot.com/40290044
[17:16] * dimitern reached eod
[17:16] dimitern: looking
[17:16] rogpeppe1, thanks
[17:16] * fwereade_ supper, might pop back on later
[17:17] dimitern: i'd like to see a version of the PutCharm document that encapsulates the current proposal
[17:17] dimitern: i'm not sure the history matters so much
[17:17] rogpeppe1, well, see it - i've updated the doc and even sent you (and the others) a mail
[17:18] rogpeppe1, last section "Chosen Implementation"
[17:19] dimitern: ah cool. perhaps that could go at the top.
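
Coming back to smoser's proxy question earlier in the log: cloud-init can already point apt at a proxy through cloud-config, which covers package installs but, as noted above, does not set http_proxy in /etc/environment or for upstart-started daemons such as jujud. A minimal cloud-config sketch, with the proxy address as a placeholder:

    #cloud-config
    # Configures apt's Acquire::http::Proxy on first boot; it does not
    # export http_proxy for other tools such as lxc-create.
    apt_proxy: http://10.0.3.1:3128
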
[17:27] rogpeppe1, feel free to edit to your heart's content :)
=== teknico_ is now known as teknico
[17:27] dimitern: okeydokey :-)
[18:03] I am getting an ERROR Get : 301 response missing Location header. It started happening after I destroyed-environment. I hadn't done anything to juju since the previous bootstrap to make this happen, except a system update
[18:03] I tried removing and reinstalling juju-core but it still exists
[18:09] rogpeppe1, (et al): have posted https://codereview.appspot.com/40350043 with current progress and a cry for help
[18:09] mgz: will look after i've finished with dimitern's
[18:11] thanks!
[18:37] dimitern: reviewed
[18:37] rogpeppe1, thank you
[18:54] mgz: you've got a review
[18:54] rogpeppe1: thanks!
[18:57] mgz: i don't quite understand your CL description
[18:57] mgz: what are the #1, #3 etc referring to?
[19:03] rogpeppe1: in cmd/juju/status_test.go statusTests is a list of test() things, which have several expect{} asserts within
[19:04] mgz: yeah, i'm looking at it currently
[19:04] mgz: are the #1 etc referring to steps of test 0 ?
[19:04] yeah, I didn't 0-index
[19:04] mgz: oh, confusing?
[19:04] and the first test() fails
[19:05] see the string for which one it actually is if the numbering is confusing :)
[19:05] rogpeppe1, mgz, when I wrote those tests, each test() was a separate case
[19:06] rogpeppe1, mgz, and it seems it still is
[19:06] mgz: i'm finding it difficult to parse: "
[19:06] The test #1 expect #3 which does SetAgentAlive
[19:06] passes
[19:06] "
[19:06] right
[19:06] rogpeppe1, mgz, it's just written so that you can also build cases incrementally with multiple expects
[19:06] and the following one fails
[19:07] mgz: should that be "expects" ?
[19:07] mgz, once you're using the api, the setagentalive tests are meaningless
[19:07] mgz, because once you connect to the api as a machine or a unit the agent is set to alive
[19:08] dimitern: surely they're not meaningless?
[19:08] dimitern: 'cos we're connecting as a client
[19:08] status still needs to care, is the issue
[19:08] rogpeppe1, oh, right
[19:08] mgz: i see step #6 fail FWIW
[19:09] mgz: ah, you're counting expects!
[19:09] rogpeppe1: yeah, it's step #6, but expect #4
[19:09] mgz: ... using 0-based counting for steps, but 1-based for expects :-)
[19:10] mgz, if it helps, split the test()s so that each one has a single expect(), then you can comment out the rest and run them one by one
[19:10] mgz, this will introduce some code duplication, because the first test case relies on setting and testing stuff as it goes
[19:15] mgz: it's weird - i've just verified that SetAgentAlive is called and AgentAlive initially returns true but returns false a few moments later
[19:15] rogpeppe1, i had that issue before
[19:16] dimitern: oh yeah?
[19:16] dimitern: do you remember what the issue was?
[19:16] rogpeppe1, it was something to do with the pinger being killed at some point
[19:16] rogpeppe1: right, that's why the cry for help :)
[19:16] rogpeppe1, or maybe it was related to startsync on the right state - BackingState or State in the suite
[19:18] * dimitern can't type? time to call it a night
[19:18] g'night all, see you tomorrow guys
[19:18] night dimitern :)
[19:23] mgz: ah, i think i know what the issue might be
[19:23] mgz: the api's state presence hasn't seen the agent becoming alive yet
[19:24] mgz: dimiter is right - if you startsync in the api's state, it should fix the issue.
[19:24] ah, interesting
[19:25] mgz: the logic in startAliveMachine is doing the right thing but on the wrong State
[19:25] mgz: i have a feeling that it's there because of the issue that dimiter encountered before (the same issue)
[19:27] mgz: hope that helps enough to get you through it.
[19:27] rogpeppe1: hopefully! thanks!
[19:28] mgz: perhaps this is a case for moving those tests closer to the implementation
[19:28] rogpeppe1: indeed
[19:28] mgz: and making the status tests in cmd/juju just a smoke test
[19:38] rogpeppe1: how do I get a reference to the api, given that it's created with NewApiClientFromName on each Run invocation?
[19:51] mgz, I am seeing a test failure in 1.16 tip. I think something in my own configuration is interfering. any insights? http://pastebin.ubuntu.com/6552812/
[19:51] sinzui: looking
[19:51] that looks like joy
[19:52] I am on trusty BTW, though I did a major git reconfiguration last week
[19:53] it seems like you may have personal git config that breaks the expectations of how juju-core thinks git does things
[19:53] which is pretty naive
[19:53] mgz: ISTR there's a method on the dummy provider
[19:53] mgz: that returns the State used by the API server
[19:58] rogpeppe1: GetStateInAPIServer sounds good
[20:23] okay, all fixed
[21:22] thumper: Is there an equivalent to all-machines.log for the local provider? So far, I've only found individual agent logs.
[21:22] abentley: yes, all-machines.log
[21:23] abentley: well, in trunk
[21:23] abentley: there were changes recently to have the local provider use rsyslog too
[21:23] but not in the 1.16 branch
[21:23] thumper: Oh, cool. Yes, I was using the 1.16 branch.
[21:25] jcsackett: the big changes are done for splitting upgrade and deploy into separate tests.
[21:44] thumper: you know anything about the uniter test failures I mentioned in email?
[21:44] hi natefinch
[21:44] ah... which email?
[21:44] * thumper looks at email
[21:44] thumper - very recent
[21:45] no, not seen it
[21:45] different git?
[21:45] seems only to be leading #
[21:46] yeah, that's what I was thinking
[21:47] thumper, what's your git version? I'm 1.8.5
[21:47] Installed: 1:1.8.3.2-1
[21:48] so maybe that's it. I'm too bleeding-edge for Juju :)
[21:48] abentley: ack, thanks.
[21:52] * thumper takes a deep breath
[21:52] * thumper exhales slowly
[22:26] * wallyworld -> dentist, yay :-(
[23:15] hi wallyworld