[00:33] vern: so hml is working on that as part of her work enabling a config option for some init commands for just that kind of case
[00:33] vern: she's out today but it might be worth an email to the list perhaps? I'm not up to date on the latest of the work.
[00:35] thanks rick_h. For now I was able to ssh in and do the config while the bootstrap was in progress
[00:36] vern: gotcha
[02:54] I tried installing Canonical Kubernetes via conjure-up. Now I've got LXD causing a hard lock on my Ubuntu host when it starts up -- anyone have thoughts on what may be going on? I've put CPU/memory/task limits on the service, and it's still hard-locking.
[03:28] apes: Hard lock as in the machine has become completely unresponsive to ping, ssh, etc.? If so, make sure you're running an up-to-date kernel, and check the logs/console for any kernel oops info.
[03:31] blahdeblah: I'm just running it on my local machine -- let me see if it will even respond to ping. The kernel is up to date. There's some stuff in the logs about disks, but I'm not sure if it's related.
[03:48] blahdeblah: Here are the relevant syslogs: https://gist.github.com/apeschel/a7b2e08b88c6c24cc6335b20892b5860
=== frankban|afk is now known as frankban
[09:03] Hi guys!
[09:03] Still struggling a bit with openstack and juju
[09:03] do any of you guys use gnocchi without ceph?
[09:05] what does gnocchi have to do with ceph?
[09:07] I was wondering the same thing :)
[09:07] haha, so why do you ask?
[09:07] we used the telemetry base, and removed anything that had to do with ceph
[09:08] but I cannot seem to find out how to tell gnocchi to use anything other than ceph
[09:09] https://gyazo.com/c4ee1fa9942b5a89e2cc30108cf6eca8
[09:10] hmm
[09:10] that is an oversight i think
[09:10] very bad in my opinion to force ceph
[09:10] but as it looks now, it forces ceph for the storage
[09:11] Yes that's what i thought as well :/
[09:11] so i was wondering if you guys know any way to tell it to use local storage instead ^^
[09:12] not using the charm i think
[09:13] hmm, i see. So we have to scrap that idea as well then? >_< haha, we've been struggling to get openstack up and running for over a month now
[09:13] you'd btw have better luck asking this question in #openstack-charms
[09:14] utking: you can of course still deploy ceph
[09:14] just use an image file on 3 separate nodes as storage instead of a block device
[09:15] the official documentation supports redis and swift
[09:15] for gnocchi, that is
[09:15] Yeah i know, but we are trying to deploy an "older" classic style of openstack, and then comparing it to a hyper-converged openstack :)
[09:15] but that doesn't seem to be available for the gnocchi charm
[09:15] i saw that
[09:15] Been looking through the config files of the charm as well
[09:17] otherwise you need to modify the template of the gnocchi config to support another type of data storage
[09:18] but you don't need to use ceph for instance storage, just remove the relations of ceph to cinder/nova and you will not have a "hyper-converged" env
[09:18] just use it for gnocchi storage
[09:18] Ah, and still deploy ceph you mean?
[09:19] yea :)
[09:19] Could do that yes
[09:19] hmm, nice, thanks BlackDex! :)
[09:19] yw :)
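On the gnocchi storage question above: upstream Gnocchi supports several storage drivers (file, swift, redis, ceph), but the charm only renders ceph settings, so as noted at [09:17] using anything else means patching the charm's gnocchi.conf template. As a rough illustration only, the rendered [storage] stanza for local file storage would look something like the sketch below; the option names come from the upstream Gnocchi documentation of that era, so verify them against the Gnocchi release you actually deploy.

    # Illustrative gnocchi.conf [storage] stanza for local file storage.
    # The gnocchi charm does not expose this; its template would have to be
    # modified to render something like this instead of the ceph driver settings.
    [storage]
    driver = file
    file_basepath = /var/lib/gnocchi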
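Separately, on the [02:54] LXD hard-lock report: resource limits like the ones apes mentions are usually applied through a systemd drop-in. The sketch below assumes a snap-packaged LXD (unit name snap.lxd.daemon.service) and a reasonably recent systemd; both are assumptions, so adjust the unit name, directives, and values to the host in question.

    # Sketch: cap LXD's CPU, memory, and task count with a systemd drop-in.
    # snap.lxd.daemon.service is assumed; older systemd uses MemoryLimit= instead of MemoryMax=.
    sudo mkdir -p /etc/systemd/system/snap.lxd.daemon.service.d
    sudo tee /etc/systemd/system/snap.lxd.daemon.service.d/limits.conf <<'EOF'
    [Service]
    CPUQuota=200%
    MemoryMax=4G
    TasksMax=512
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart snap.lxd.daemon.service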
[10:26] hi
[10:35] there's still some pretty funky stuff going on with juju gui, jaas, bundles, machine allocation and so on
[10:35] magicaltrout: the one issue got a fix landed in the gui and should go out in a release soon
[10:37] i just ended up with 3 apps and 6 machines
[10:57] magicaltrout: is there a backlog for what you mean there?
[11:25] rick_h: have you run into anything like this when upgrading juju? https://bugs.launchpad.net/juju/+bug/1755155
[11:25] Bug #1755155: charm hook failures after controller model upgrade from 2.1.2 to 2.3.4
[11:27] zeestrat: hmm, looking. There was an upgrade bug that thumper fixed that missed an upgrade step when jumping versions, but I thought that was in 2.3.4. Let me double check
[11:30] lol zeestrat https://bugs.launchpad.net/juju/+bug/1746265 was what I was thinking but I guess that's not the same
[11:30] Bug #1746265: juju-upgrade from 2.2.9 to 2.3.2 fails with state changing too quickly
[11:32] rick_h: Nah, that looked to be fixed in 2.3.4 as we didn't see those in staging.
[11:33] zeestrat: yea, that was in 2.3.3
[11:33] zeestrat: I'll ask around. let me see what I can do
[11:38] rick_h: thanks a bunch. If there's any further info or troubleshooting needed just shout. Kinda need to get this one unstuck :)
[11:38] zeestrat: no doubt
[11:47] zeestrat: I pinged someone on the juju team about it and they're going to get it some eyeballs.
[11:48] zeestrat: sorry, folks just getting back from travel have left the eyeball count a little light the last few days, but we're getting on it
[11:50] rick_h: No problemo, thanks for asking. Juju agents are rather hardheaded so they'll keep on retrying in the meantime ;)
[11:50] zeestrat: yea :/
[14:49] I have a case where the kubernetes-master node is in a waiting state; it gives the message "Waiting to retry addon deployment". Etcd is running just fine.
[14:50] Rebooting and restarting snap services doesn't help
[14:50] Any ideas?
[15:03] boo he went
[15:03] i had the same
[15:12] magicaltrout: yea? kwmonroe / tvansteenburgh ^
[15:15] yeah, although in the end i just switched off the dashboard and dns via juju config
[15:15] and then switched them back on
[15:15] seemed to fix it
[15:32] hm, ryebot mentioned something about kubefed having trouble.. maybe ^^ that's the issue?
[15:35] hmm
[15:35] SuneK: magicaltrout did that persist no matter how long you waited?
[15:35] oops, just magicaltrout I guess
[16:00] i got bored after about 15 minutes
[16:00] if that helps
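For anyone who hits the same "Waiting to retry addon deployment" state, the workaround magicaltrout describes at [15:15] translates roughly into the commands below. The option names (enable-dashboard-addons, enable-kube-dns) are assumed from the kubernetes-master charm of that era; run juju config kubernetes-master to confirm the keys on your deployment.

    # Toggle the addons off, let the charm settle, then toggle them back on.
    juju config kubernetes-master enable-dashboard-addons=false enable-kube-dns=false
    # wait for juju status to settle, then:
    juju config kubernetes-master enable-dashboard-addons=true enable-kube-dns=true
    # watch the unit leave the "Waiting to retry addon deployment" state
    juju status kubernetes-master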
[16:01] on a different note
[16:01] juju add-ssh-key blah..... juju ssh 9... unauthorized
[16:01] have i missed a step?
[16:01] magicaltrout: hmm, nope... it should show the key in .authorized_keys
[16:06] hmm, weird
[16:06] we gave a colleague access to a controller
[16:07] juju status works
[16:07] we've added his key, i can see it on the remote units and we can log in using ssh ubuntu@....
[16:07] using that key
[16:07] but juju ssh 9.. for example says "unauthorized access"
[16:07] ? is it what you think it is?
[16:09] is that a question or a prophetic statement?
[16:10] we have tested a default key and a newly generated one, both do the same
[16:10] i'm assuming it's something we've done but it's a bit weird
[16:11] heh, I mean that maybe juju ssh 9 isn't hitting the model you think it is or something?
[16:12] I'm not sure; as you note, if you can ssh ubuntu@... then I'm not really sure what juju is doing differently for you there that would fail
[16:12] well
[16:13] juju status shows us all the machines/units etc
[16:13] so i'm assuming juju ssh will then access the correct box
[16:14] https://gist.github.com/buggtb/89252ab1a378dd5c3a48ab372c5e71fd
[16:14] tried the same with unit names as well
[16:14] same output
=== frankban is now known as frankban|afk
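A few client-side checks help narrow down the kind of juju ssh failure discussed above. These are standard juju 2.x commands; the controller and model names below are placeholders, not taken from the conversation.

    # Confirm which controller, model, and user the client is actually using
    juju whoami
    juju models
    # List the SSH keys registered with the current model
    juju ssh-keys --full
    # Add the colleague's public key to the model if it is not listed
    juju add-ssh-key "$(cat ~/.ssh/id_rsa.pub)"
    # Target the model explicitly in case the default model is not the one with machine 9
    juju ssh -m mycontroller:mymodel 9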