=== CyberJacob is now known as CyberJacob|Away
[00:37] lazyPower, well i wasn't able to reproduce the issue utlemming was having
[00:37] but i was using trusty.. i'll try again with precise
[00:37] hazmat: did the redirect daemon work ootb?
[00:38] lazyPower, the underlying symptom was exception on connection
[00:38] lazyPower, which didn't occur
[00:39] Hmmmm.. curious
=== 21WAAD3MF is now known as wallyworld
[01:43] negronjl: pushed fixes+additions to seafile, if you could review them
[01:43] sorry for the branch name being a little bit... long
[01:44] jose: no worries ... I'll review a bit later tonight
[01:44] thanks :)
[03:10] how can I access juju's bootstrap env ?
[03:10] i'd like to hack into the db and remove an entry :)
=== timrc is now known as timrc-afk
=== wallyworld_ is now known as wallyworld
=== vladk|offline is now known as vladk
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== vladk is now known as vladk|offline
=== axw is now known as axw-away
=== vladk|offline is now known as vladk
[11:45] do we already have any clue why add-machine doesn't work for local providers anymore ?
=== timrc-afk is now known as timrc
[13:20] hi
[13:21] I'm unable to deploy any charms to a local environment on my trusty box
[13:21] http://paste.ubuntu.com/7310317/ is all-machines.log
[13:21] anyone have any ideas of how to debug further?
=== BradCrittenden is now known as bac
[13:29] james_w: I believe that's the error I ran into a few days ago. Try adding "default_series: precise" to your local entry in environments.yaml
[13:29] Sorry, default-series: precise
[13:36] cory_fu: no change
[13:38] Hrm. mbruzek, tvansteenburgh: Do you recall what ended up being the fix for the environ-provisioner error on LXC?
[13:38] Yes
[13:38] setting default series and cleaning up the local env.
[13:39] "cleaning up"?
[13:39] Oh, let me pastebin the cleanup script
[13:39] Just a second
[13:39] http://pastebin.ubuntu.com/7314725/
[13:40] You may need to change line 21 to: sudo rm -rf ~/.juju/local
[13:40] http://paste.ubuntu.com/7314726/
[13:40] mbruzek: gunicorn readme fix:
[13:40] https://code.launchpad.net/~bloodearnest/charms/precise/gunicorn/fix-readme/+merge/216836
[13:41] james_w: Also, do you have encrypted home dirs enabled? If so, you may need to set the root-dir in the local env to outside of the home dir (e.g., /var/juju, as in my cleanup script)
[13:41] I saw that bloodearnest thank you.
[13:41] I don't
[13:41] Ok, then mbruzek's version is what you want
[13:43] james_w: NB: You should run the cleanup script *after* doing juju destroy-environment --force -y local
[13:43] And you will get several "file not found" responses from the script, which is fine
[13:47] ok, machine 0 is reporting: agent-state: down
[13:48] now I did deploy ubuntu and it has agent-state: started
[13:50] and I still have the environ-provisioner message
[13:53] cory_fu: any other ideas?
[13:55] Hrm. Other than trying juju destroy-environment --force -y local ; clean-lxc ; juju bootstrap a couple more times, not really. :-/
[13:56] What versions of juju and juju-local do you have?
[13:57] hmm
[13:58] they weren't installed
[13:58] trying again after installing them
[13:59] 1.18.1-0ubuntu1
[13:59] Ok, that's the current version
[13:59] still looks to be the same
[14:01] https://bugs.launchpad.net/juju-core/+bug/1248800 mentions the error and says "restarting the provisioner fixed it"
[14:01] how would I try that?
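Pulling together cory_fu's and mbruzek's advice above: the local entry in ~/.juju/environments.yaml needs a default-series line, and the environment needs a full teardown before re-bootstrapping. A minimal sketch, assuming juju 1.18 and an environment named "local" (the root-dir value is only an illustrative choice for the encrypted-home case; the cleanup script is the one from the pastebin):

    #   local:
    #     type: local
    #     default-series: precise
    #     root-dir: /var/juju        # only needed if your home directory is encrypted
    juju destroy-environment --force -y local
    # run the cleanup script from the pastebin above (a few "file not found" messages are expected)
    juju bootstrap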
[14:01] <_mup_> Bug #1248800: worker/provisioner missed signal to start new machine
[14:03] Hrm. I'm really not sure
[14:04] I can't get my work done without a working environment
[14:04] I guess I could try deploying to ec2
[14:05] I think the provisioner should be restarted when you destroy-env and bootstrap
=== ming is now known as Guest7729
[14:27] james_w, Are you still having problems with juju and local?
[14:27] mbruzek: yep
[14:28] Can you pastebin the error you are seeing or your log file?
[14:30] mbruzek: http://paste.ubuntu.com/7315103/ is the all-machines.log
[14:33] OK james_w try this please
[14:33] juju destroy-environment -e local -y --force
[14:33] (run clean script)
[14:33] juju sync-tools
[14:34] juju bootstrap -e local
[14:34] (with the --upload-tools flag)
[14:35] james_w, I am assuming your ~/.juju/environments.yaml file already has the default-series: set to something valid.
[14:36] mbruzek: precise
[14:44] james_w, Any progress?
[14:45] mbruzek: still looks the same
[14:48] james_w, What is this bit about you not having juju-local installed? Was this working at some point, or is this your first attempt at getting juju local running?
[14:48] mbruzek: first attempt with juju-core
[14:51] james_w, Can you describe what is not working for you? The log on pastebin looks mostly OK.
[14:51] mbruzek: no services start
[14:51] Are they all stuck in pending?
[14:52] http://pastebin.ubuntu.com/7315210/
[14:52] james_w: is your LAN using the 10.0.3.0 segment?
[14:53] I'm working on getting a MAAS / Juju deployment setup to manage an Openstack cluster, and I've gotten to the point that I have Juju basically going, but any host I add gets stuck in "pending".
[14:53] oh hey, looks like james_w is facing something similar maybe...
[14:53] lazyPower: 10.0.1
[14:53] james_w: ok, i ask because ip collision will prevent the lxc containers from booting
[14:53] qhartman: mine is with lxc
[14:53] and any nodes I try to destroy get stuck in "dying".
[14:53] james_w, ah
[14:53] if you're not using 10.0.3.0 you'll be fine.
[14:54] for that issue anyway
[14:54] qhartman: logs?
[14:54] qhartman do you have a pastebin of the logs?
[14:54] lazyPower, Love to. Which ones?
[14:54] all-machines / machine-0
[14:54] rgr
[14:54] ~/.juju/local/log/all-machines.log
[14:55] mbruzek: thats fine for lxc, but he's working with maas.
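Collected in one place, mbruzek's reset sequence from 14:33-14:34 reads roughly as follows (assuming the environment is named "local"; the clean script is the pastebinned one above):

    juju destroy-environment -e local -y --force
    # run the cleanup script here
    juju sync-tools -e local
    juju bootstrap -e local --upload-tools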
[14:55] s/he's/qhartman
[14:55] Thanks lazyPower
[14:55] np bruddah
[14:58] There's machine-0.log: 2014-04-22 23:00:34 INFO juju.cmd supercommand.go:297 running juju-1.18.1-trusty-amd64 [gc]
[14:58] 2014-04-22 23:00:34 INFO juju.cmd.jujud machine.go:127 machine agent machine-0 start (1.18.1-trusty-amd64 [gc])
[14:58] 2014-04-22 23:00:34 DEBUG juju.agent agent.go:384 read agent config, format "1.18"
[14:58] 2014-04-22 23:00:34 INFO juju.cmd.jujud machine.go:155 Starting StateWorker for machine-0
[14:58] 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "state"
[14:58] 2014-04-22 23:00:34 INFO juju.state open.go:81 opening state; mongo addresses: ["localhost:37017"]; entity "machine-0"
[14:58] 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "api"
[14:58] qhartman: pastebin plz
[14:58] 2014-04-22 23:00:34 INFO juju apiclient.go:114 state/api: dialing "wss://localhost:17070/"
[14:58] 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "termination"
[14:58] 2014-04-22 23:00:34 ERROR juju apiclient.go:119 state/api: websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused
[14:58] 2014-04-22 23:00:34 ERROR juju runner.go:220 worker: exited "api": websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused
[14:58] 2014-04-22 23:00:34 INFO juju runner.go:254 worker: restarting "api" in 3s
[14:58] 2014-04-22 23:00:34 INFO juju.state open.go:119 connection established
[14:58] 2014-04-22 23:00:34 DEBUG juju.utils gomaxprocs.go:24 setting GOMAXPROCS to 16
[14:58] 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "instancepoller"
[14:58] 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "apiserver"
[14:58] 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "cleaner"
[14:58] 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "resumer"
[14:58] qhartman please use pastebin
[14:58] 2014-04-22 23:00:34 INFO juju.state.apiserver apiserver.go:43 listening on "[::]:17070"
[14:58] 2014-04-22 23:00:34 INFO juju runner.go:262 worker: start "minunitsworker"
[14:58] 2014-04-22 23:00:37 INFO juju runner.go:262 worker: start "api"
[14:58] 2014-04-22 23:00:37 INFO juju apiclient.go:114 state/api: dialing "wss://localhost:17070/"
[14:59] 2014-04-22 23:00:37 INFO juju.state.apiserver apiserver.go:131 [1] API connection from 127.0.0.1:56002
[14:59] 2014-04-22 23:0
[14:59] oh balls, sorry
[14:59] yeah
[14:59] there: http://pastebin.com/44wjjRvA
[14:59] thought I had copied the URL when I hadn't
[15:00] qhartman: on the machine that's in 'pending' is this machine registered in the maas region controller?
[15:00] yup
[15:00] and when I juju-created it it correctly started up and did the fastpath install
[15:00] ok
[15:00] and then... nothing
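Since the ~/.juju/local/log path mentioned above only applies to the local (LXC) provider, here is a hedged sketch of where the same logs usually live for a MAAS environment like qhartman's, on the bootstrap node itself:

    juju ssh 0                                        # machine 0 is the bootstrap node
    sudo tail -n 100 /var/log/juju/machine-0.log      # bootstrap machine agent log
    sudo tail -n 100 /var/log/juju/all-machines.log   # aggregated log from all agents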
[15:01] i'm curious because the logs state mysql has no node associated
[15:01] line 57
[15:01] right, I've been futzing around with trying to add/remove services and machines since then
[15:01] and a lot of that is weirdly not reflected in that log
[15:02] probably a good place to start would be how to clean up this "dying" machine-1 so I can start again from a clean-ish spot
[15:02] juju remove-unit --force
[15:02] aha
[15:02] er
[15:02] where am i this morning
[15:02] heh
[15:02] * lazyPower needs coffee
[15:02] so say we all
[15:02] juju destroy-machine --force
[15:02] right
[15:02] remove-unit :P hah
[15:03] i'm clearly working on something else with that command
[15:03] * lazyPower whistles innocently
[15:03] heh
[15:04] alright, now it says it's dead with a pending agent-state
[15:04] aaand, gone
[15:04] ok
[15:06] boom
[15:08] alright "juju deploy mysql", it seems to have grabbed the same machine it used for machine-1 before, but now it's calling it machine-3
[15:08] everything is "pending"...
[15:09] right, juju will assign its own alias to the machine
[15:09] it increments +1
[15:09] for each
[15:09] right, makes sense
[15:10] is there any activity I can look for on the machine?
[15:10] the machine-0 log hasn't changed much
[15:11] here are the new lines: http://pastebin.com/wFyGGTYF
[15:12] did the node ever get bootstrapped into juju? when i boot my kvm maas units, it takes ~ 2 minutes for them to come online and register with the juju bootstrap node
[15:12] also, can your maas units reach the bootstrap node?
[15:12] they should be able to. Do they try to reach it by IP or name?
[15:12] most of my help will be anecdotal qhartman, i've got a physical hardware machine as my region cluster, and all my maas nodes are KVM
[15:12] it tries by name first, then by ip
[15:12] ok
[15:13] yeah, I'm trying to set things up so everything is multi-homed which seems to be confusing things somewhat.
[15:14] so, "juju bootstrap" from my MAAS controller spun up and bootstrapped the juju node machine-0
[15:14] but I didn't do any "bootstrap" on any other machine.
[15:14] you dont need to
[15:14] and your juju bootstrap controller came online right?
[15:15] its up, running, and communicating with you - i guess thats the case since you could issue a juju deploy mysql
[15:15] hummm
[15:15] I guess so, here's my juju status: http://pastebin.com/D25ANbyw
[15:15] yeah, everything for machine-0 seems right as far as I can tell
[15:15] it seems like the agent isn't doing what it needs to.
[15:15] right
[15:15] how does that get installed?
[15:16] does machine-0 try to ssh as some user over to it or something?
[15:16] ^
[15:16] during cloud init it pushes the proper ssh keys to the node
[15:16] then the bootstrap node ssh's into it and kicks off the agent installation
[15:16] hm, ok
[15:16] i'm thinking this may be wrt the tools
[15:17] try doing a juju sync-tools and retry the provisioning
[15:17] destroy the service and the machine, and remove and re-add them to maas? All the way to the beginning?
[15:18] I see the ubuntu user on machine-3 has a bunch of juju keys in its authorized_keys
[15:18] so that seems to have worked....
[15:18] I wonder if the keys got out of sync somewhere...
[15:19] its doubtful but possible.
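The clear-and-retry cycle qhartman works through above, written out as one hedged sketch (machine numbers and the service name are illustrative; the node-side check assumes a stock Ubuntu cloud image reachable as the ubuntu user):

    juju destroy-service mysql            # let the wedged service die
    juju destroy-machine --force 1        # clear the machine stuck in "dying"
    juju sync-tools                       # lazyPower's suggestion, in case the tools are the issue
    juju deploy mysql                     # MAAS hands back a node; juju gives it a new number
    juju status
    # from the node itself, cloud-init output shows whether the agent install got anywhere:
    ssh ubuntu@<node> tail -n 50 /var/log/cloud-init-output.log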
[15:19] ah, I think I figured it out
[15:20] the routing is messed up on machine-3, its default path is super wrong
[15:20] so it can't download the tools from canonical
[15:20] ok
[15:20] whee
[15:20] now I have somewhere to go
[15:21] Thanks for the help
[15:21] I'll be back if I hit another wall.
[15:21] :D
[15:21] np qhartman, glad that helped
[15:21] * qhartman enters lurk mode
[15:31] marcoceppi: can I deploy both precise and trusty units in Canonistack?
[15:31] while I wait for a trusty-enabled postgresql charm
=== BradCrittenden is now known as bac
[16:48] mbruzek: found the problem. I had to disable ufw on my hos
[16:48] host
[16:48] mbruzek: now the units start
[16:48] but the unit agent fails with
[16:48] james_w, that is GREAT
[16:48] 2014-04-23 16:47:19 ERROR juju runner.go:220 worker: exited "uniter": ModeInstalling cs:precise/ubuntu-4: git init failed: exec: "git": executable file not found in $PATH
[16:49] is that part of the 'tools'
[16:49] Thanks for sharing that
[16:49] or a problem with the charm?
[16:49] Looks like git is not installed in the charm
[16:50] there's also 2014-04-23 16:49:35 WARNING juju.worker.uniter.charm git_deployer.go:200 no current staging repo
=== vladk is now known as vladk|offline
[16:56] james_w, Did you get the 1.18 to work or did you update to the latest version?
[16:56] mbruzek: I upgraded
[16:56] mbruzek: not to say 1.18 wouldn't have worked with the disabled firewall
[16:56] Based on your statement I suspect that the ufw could have been disabled with 1.18
[17:04] the git thing is because the package install during cloud-init failed
[17:04] apparently because there's something wrong with dns in the container
[17:06] lazyPower, I ended up needing to nuke and pave the host that was formerly machine-3, but adding the ip-forwarding on the MAAS controller so the route works ended up allowing "juju deploy" to work
[17:06] I now have nodes joining juju and deploying services
[17:06] \o/
[17:12] qhartman: awesome!
[17:12] glad you got it sorted :)
[17:21] https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1205086 is the dns problem
[17:21] <_mup_> Bug #1205086: lxc-net dnsmasq --strict-order breaks dns for lxc non-recursive nameserver
=== roadmr is now known as roadmr_afk
[17:40] calamity! Trusty doesn't have a nova-volume charm!
[17:40] is that by design?
[17:42] qhartman: there's an audit going on for charms to have series promotion
[17:42] which means this doc doesn't apply correctly: https://help.ubuntu.com/community/UbuntuCloudInfrastructure . Is there a version that has been updated for Trusty? I have not found one.
[17:43] right now, your best bet is to target precise - if you need a trusty charm, you can be one of the brave beta users and just specify trusty, deploy, and pray it works correctly
[17:43] most of the charms will be ok, but there are some discrepancies between precise => trusty deployments.
[17:43] right
[17:44] so, I'm on trusty, so to deploy the precise charm I use "juju deploy cs:precise/nova-volume"?
[17:44] so in order for you to do that, you need to create a local charm repository, and charm-get the charm you want to deploy on trusty into the trusty series directory
[17:44] ah, ok
[17:44] then juju deploy local:trusty/nova-volume --repository=../../ (from within the nova charm dir)
[17:44] but be warned, if it blows up
[17:44] yeah
[17:44] as of right now, you're accepting a backwoods warranty. if it blows up in half, you own both halves.
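lazyPower's local-repository recipe for running a precise-only charm on trusty, spelled out as a sketch. The directory layout is illustrative, the lp: branch alias mirrors the lp:charms/precise/memcached example later in the log, and whether nova-volume actually behaves on trusty is exactly the open question above:

    mkdir -p ~/charms/trusty
    bzr branch lp:charms/precise/nova-volume ~/charms/trusty/nova-volume
    juju deploy local:trusty/nova-volume --repository=$HOME/charms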
[17:44] I get to keep all the pieces
[17:45] :P
[17:45] well, this is a testing deployment anyway
[17:45] your feedback will be gold
[17:45] if you run into any caveats make sure you ping the juju list with them
[17:45] once I get a feel for things I'm going to kill it and re-deploy anyway
[17:45] will do
[17:45] thanks qhartman
[17:45] sure thing
[17:46] What's the state of the ceph charms on Trusty? Big picture I'm planning on using that for storage on this deployment
[17:47] not sure. check the charm store
[17:47] k
[17:47] i think most if not all of the openstack charms have trusty releases
[17:47] i know they were sprinting to get them pushed day of trusty release.
[17:47] Hey, am I doing something wrong or is there simply no charm for memcached / wordpress on trusty?
[17:47] but i'm not sure of anything beyond that, i've been busy with other areas of focus
[17:48] flohack, no trusty charm that I see
[17:48] flohack: there is not. The trusty charms are part of an audit, and slow going.
[17:48] lazyPower, sure.
[17:48] if you want to be a +1 reviewer to charms in the trusty series, we'd appreciate the extra hands on deck, and possible amulet test submissions
[17:48] lazyPower: ok, thanks! anything I could do about it? Like copy it locally, try and see if it works when modifying the series!?
[17:49] flohack: theres 2 requirements you'll need to be aware of. it has to pass a full blown charm review, and contain deployment tests (amulet flavored is preferable)
[17:49] but the caveat here is series support in amulet is pending a merge last i checked. so it may blow up on you attempting to write tests
[17:49] marcoceppi: any updates on that to speak of?
[17:50] he's at lunch, so responses may be latent.
[17:50] lazyPower: what's the tl;dr?
[17:50] lazyPower: Ok, I'm not familiar with writing/testing/auditing charms so far, I'm a professional dev though, so maybe with a few pointers, I'll be able to contribute!?
[17:50] marcoceppi: trusty / series support in amulet.
[17:51] flohack: i ran a charm school yesterday on it. let me fish up the link
[17:51] lazyPower: that will be fixed in 1.5, to be released early next week
[17:51] flohack: https://www.youtube.com/watch?v=2Y1MiSPox5I#t=31
[17:51] here's how to get acquainted with the review queue, and there is a docs page on writing amulet tests https://juju.ubuntu.com/docs/tools-amulet.html
[17:52] lazyPower: anything to read? I'm so much quicker when reading compared to watching a video ;-)
[17:52] flohack: https://juju.ubuntu.com/docs/reference-reviewers.html
[17:53] flohack: if you start doing reviews / audit work - any questions should go here or to the mailing list. We'll be more than happy to help support you in your efforts
=== CyberJacob|Away is now known as CyberJacob
[17:54] lazyPower: Great, so for starters, how do I copy the precise charm for memcached to a local repository?
[17:54] is there a git to clone?
[17:56] flohack, bzr branch lp:charms/precise/memcached
[17:56] cheers mates! I'll take it from there and get back with questions!
[17:59] actually use charm-get from charm-tools
[17:59] its a wrapper, but keeps things consistent
=== vladk|offline is now known as vladk
=== roadmr_afk is now known as roadmr
[18:21] hey there
[18:21] I'm trying to start up an lxc instance and I'm getting this error
[18:22] agent-state-info: '(error: container failed to start)'
[18:22] how can I debug what is going wrong?
[18:22] this is on 1.18.1.4
[18:22] is there a limit to the number of containers I can run?
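On kiko's "container failed to start" question just above, a hedged sketch of the first things worth looking at with the 1.18 local provider (the paths are the ones mentioned in this channel; container names are whatever lxc reports):

    sudo lxc-ls --fancy                          # what lxc itself thinks the containers are doing
    tail -n 50 ~/.juju/local/log/machine-0.log   # provisioning errors usually surface here
    ls /var/log/juju-juju-local/                 # per-machine agent logs, if they were created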
[18:24] mm
[18:53] kiko: nope, have you put in a default-series directive in your environments.yaml?
[18:54] kiko: the only limit imposed by lxc depends on your physical hardware; if you go spinning up crazy containers with crazy processes, it'll cause other unpredictable behavior due to being out of resources. But juju / lxc impose no limit on the quantity of machines you can spin up.
[18:54] lazyPower, I haven't put default-series in my environments
[18:54] and I already have a bunch of working containers
[18:55] it seems like the new way to run containers isn't working
[18:55] hmm
[18:57] okay
[18:57] so it seems like the container actually DOES run
[18:57] hmm
[19:02] lazyPower, so here's what I am seeing
[19:02] lazyPower, the container is running (i.e. lxc-console --name gets to it)
[19:02] lazyPower, juju status says "container failed to start"
[19:03] the rest looks all normal
[19:03] any notice in the logs about a tools mismatch?
[19:03] any possible IP collisions?
[19:04] lazyPower, well, which log should I look at? oddly, there is no log in /var/log/juju-juju-local for that machine
[19:04] should be in $HOME/.juju/local/logs
[19:04] ah, there is now
[19:04] the tools mismatch log message scrolls in machine-0.log
[19:04] and can be corrected with sync-tools
[19:05] lazyPower, the word "mismatch" does not appear in machine-0.log
[19:05] lazyPower, is machine 0 special when using the local provider?
[19:05] I notice it's set to localhost
[19:05] It is. machine-0, your bootstrap node, is the parent machine warehousing the lxc containers
[19:06] however we are running to the end of my knowledge of common problems with LXC - if its not ip collision or tools/series.
[19:06] that's very interesting! is it documented anywhere?
[19:06] which aspect of the output? that the bootstrap node is localhost?
[19:07] hmmm
[19:07] I guess, yes, or that the bootstrap node's logs are interesting :)
=== natefinch is now known as natefinch-afk
[19:10] lazyPower, aha, machine-0's log has a lot of interesting stuff
=== roadmr is now known as roadmr_afk
[19:10] 2014-04-23 19:09:34 DEBUG juju.environs.simplestreams simplestreams.go:490 fetchData failed for "http://192.168.99.5:8040/tools/streams/v1/index.sjson": file "tools/streams/v1/index.sjson" not found
[19:11] 2014-04-23 19:09:37 WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
[19:22] is there any way to hardcode the type of machine juju deploys on EC2? it's very annoying finding out that even though you set the constraints it deploys a machine bigger than what you were expecting
[19:23] kiko: thats.. not good. it cant find the simplestreams data
[19:24] marcoceppi: simplestreams on juju-local is updated when you run sync-tools no?
[19:25] lazyPower: kiko local provider doesn't use simplestreams
[19:25] last I checked
[19:25] wat - i thought /tools/streams - was in fact simplestreams
[19:25] that DEBUG is a red herring
[19:25] maybe my terminology is wrong
[19:25] bah
[19:25] thanks for the clarification
[19:26] kiko: is 192.168.99.5 your machine?
[19:26] marcoceppi, yes
[19:27] marcoceppi, good to hear that simplestreams is a red herring
[19:27] kiko: I might be wrong though, local has changed a lot.
[19:27] What version is this?
[19:27] marcoceppi, 1.18.1.4
[19:28] kiko: can you destroy then re-run with the --debug flag?
[19:29] marcoceppi, no, this is in production
[19:29] oh
[19:29] you mean destroy the machine?
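The 19:22 EC2 sizing question goes unanswered above; for what it's worth, a hedged sketch of how machine sizing is normally pinned with juju 1.18 constraints (values are illustrative, and juju still picks the cheapest instance type that satisfies them):

    juju deploy mysql --constraints "cpu-cores=2 mem=4G root-disk=20G"
    juju set-constraints mem=2G cpu-cores=1    # environment-wide defaults for new machines
    juju get-constraints                       # show what is currently in force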
[19:29] I have already
[19:29] many times
[19:29] local provider in production?
[19:29] marcoceppi, sure, I hear that's the way to do it :)
=== natefinch-afk is now known as natefinch
[19:33] 'Fix genghisapp charm directory name' https://github.com/juju/docs/pull/84
[19:45] so I have a charm that failed to deploy, and destroying it doesn't seem to work.
[19:46] How can I force that to happen (no --force it seems) and/or force the charm to be redeployed on the node to see if I've fixed the problem that caused it to fail?
=== timrc is now known as timrc-afk
=== timrc-afk is now known as timrc
[19:53] If I force destroy the machine, then I can destroy the services that are in a bad state.
[19:54] qhartman: you need to do 'juju resolved unitname/#' first
[19:55] as juju is event-based, it needs to be taken out of the error state before continuing with the next action
[19:55] jose, oooooh, that makes sense
[19:57] cool. Is this sort of usage stuff documented anywhere? I've found lots of howto-style docs, but very little reference for juju.
[19:57] well, we have docs at juju.ubuntu.com/docs
[19:57] but I'm not sure if that specific point is documented, should be
[19:57] ok
[19:58] yeah, I've been poking around on there
[19:58] https://juju.ubuntu.com/docs/charms-destroy.html#life:-dying
[19:58] there it is :)
[19:58] and this sort of middle-ground stuff isn't covered well, or I'm just not seeing it. Lots of bootstrappy stuff, and lots of dev-oriented stuff though
[19:58] awesome
[19:58] I must just need to learn how to find stuff here
[19:59] qhartman: feedback on the list about your experience with the docs is gold as well :)
[19:59] lazyPower, I'll add that to the list
[19:59] if you have any questions you're welcome to just ask around here :)
[19:59] lazyPower, I posted a couple items a few minutes ago
[20:00] jose, yup, and gotten lots of help so far
[20:00] qhartman: btw, nova-volume has not been promulgated to trusty yet
[20:00] jose, so, how did you find that page? I don't see it in an index anywhere and a couple of intentionally naive but sensible searches didn't turn it up
[20:01] juju.ubuntu.com/docs, on the sidebar, the 'Destroying Services' page, and read until the end of it
[20:01] jose, yup, I knew I was breaking ground somewhat, was mostly hoping that my notes could help someone else along
[20:01] jose, aha, I see it now
[20:02] I think the search looks for articles on the blog (if there's any)
[20:02] yeah, it doesn't seem to search the docs at all
=== roadmr_afk is now known as roadmr
[20:33] What exactly is juju-mongodb for? Is it to enable SSL? What else? And why on Trusty only?
[20:41] zdr: juju-mongodb is the tweaked and tuned package for mongodb's inclusion into the juju ecosystem.
[20:43] mongodb is the storage mechanism for your topology and other juju specific bits
=== vladk is now known as vladk|offline
[21:56] marcoceppi: I can't find that ppa in my "subscriptions" list thingy -- should I have access or should I not? :)
[22:13] sarnold: shhh, we won't talk about that
[22:15] marcoceppi: sorry, my irc machine died after I saw a highlight but before I could switch to this channel to see what was said.. I assume it was you who replied to me, anyway :)
=== sarnold_ is now known as sarnold
=== NoNameYet_xnox is now known as xnox
=== CyberJacob is now known as CyberJacob|Away
=== wedgwood is now known as Guest68564
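The stuck-deployment recovery flow jose and qhartman land on above, as a single hedged sketch (unit, service, and machine names are illustrative):

    juju resolved mysql/0              # take the failed unit out of its error state
    juju resolved --retry mysql/0      # or re-run the failed hook after fixing the charm
    juju destroy-service mysql         # now the service can actually be destroyed
    juju destroy-machine --force 3     # and the stuck machine can be cleared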