[00:33] <rick_h> vern: so hml is working on that as part of her work enabling a config option for some init commands for just that kind of case
[00:33] <rick_h> vern: she's out today but might be worth an email to the list perhaps? I'm not up to date on the latest of the work.
[00:35] <vern> thanks rick_h. for now I was able to ssh in and do the config while the bootstrap was in progress
[00:36] <rick_h> vern: gotcha
[02:54] <apes> I tried installing Canonical Kubernetes via conjure-up. Now I've got LXD causing a hard lock on my Ubuntu host when it starts up -- anyone have thoughts on what may be going on? I've put CPU/memory/task limits on the service, and it's still hard-locking.
[03:28] <blahdeblah> apes: Hard lock as in the machine has become completely unresponsive to ping, ssh, etc.?  If so, make sure you're running an up-to-date kernel, and check logs/console for any kernel oops info.
[03:31] <apes> blahdeblah: I'm just running it on my local machine -- let me see if it will even respond to ping. Kernel is up to date. There's some stuff in the logs about disks, but I'm not sure if it's related.
[03:48] <apes> blahdeblah: Here's the relevant syslogs: https://gist.github.com/apeschel/a7b2e08b88c6c24cc6335b20892b5860
[09:03] <utking> Hi guys!
[09:03] <utking> Still struggling a bit with openstack and juju
[09:03] <utking> do any of you guys use gnocchi without ceph?
[09:05] <BlackDex> what does gnocchi have to do with ceph?
[09:07] <utking> I was wondering the same thing :)
[09:07] <BlackDex> haha, so why do you ask?
[09:07] <utking> we used the telemetry base, and removed anything that had to do with ceph
[09:08] <utking> but cannot seem to find out how to tell gnocchi to use anything else than ceph
[09:09] <utking> https://gyazo.com/c4ee1fa9942b5a89e2cc30108cf6eca8
[09:10] <BlackDex> hmm
[09:10] <BlackDex> that is an oversight i think
[09:10] <BlackDex> very bad in my opinion to force ceph
[09:10] <BlackDex> but as it looks like now, it forces ceph for the storage
[09:11] <utking> Yes that's what i thought as well :/
[09:11] <utking> so i was wondering if you guys know any way to tell it to use local storage instead ^^
[09:12] <BlackDex> not using the charm i think
[09:13] <utking> hmm, i see. So we have to scrap that idea as well then? >_< haha, we've been struggling to get openstack up and running for over a month now
[09:13] <BlackDex> btw you'd have better luck asking this question in #openstack-charms
[09:14] <BlackDex> utking: you can of course still deploy ceph
[09:14] <BlackDex> just use an image file on 3 separate nodes as storage instead of a block device
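The image-file trick BlackDex describes can be done with a sparse file exposed through a loop device. A minimal sketch, assuming loop-backed OSDs are acceptable for a test deployment (the size, path, and application name are illustrative):

```shell
# On each of the 3 nodes: create a sparse 10G image file and expose it
# as a block device via the loop driver (requires root).
truncate -s 10G /srv/ceph-osd.img
losetup --find --show /srv/ceph-osd.img   # prints the device, e.g. /dev/loop0

# Then point the ceph-osd charm at that device instead of a real disk:
# juju config ceph-osd osd-devices=/dev/loop0
```

Loop devices don't survive a reboot by default, so a production setup would need a systemd unit or /etc/fstab-style automation to re-attach them.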
[09:15] <BlackDex> the official gnocchi documentation supports redis and swift
[09:15] <utking> Yeah i know, but we are trying to deploy an "older" classic way of openstack, and then comparing it to a hyper-converged openstack :)
[09:15] <BlackDex> but doesn't seem to be available for the gnocchi charm
[09:15] <utking> i saw that
[09:15] <utking> Been looking through the config files of the charm as well
[09:17] <BlackDex> otherwise you need to modify the template of the gnocchi config to support another type of data storage
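Upstream gnocchi does ship a local-disk `file` storage driver, so a patched charm template for the non-ceph case would render something roughly like this in gnocchi.conf (the basepath is illustrative):

```ini
[storage]
# gnocchi's upstream "file" driver writes metric data to local disk
# instead of ceph/swift/redis
driver = file
file_basepath = /var/lib/gnocchi
```

The charm's template only renders the ceph variant, which is exactly the limitation being discussed here.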
[09:18] <BlackDex> but you don't need to use ceph for instance storage, just remove the relations of ceph to cinder/nova and you won't have a "hyper-converged" env
[09:18] <BlackDex> just use it for gnocchi storage
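The "remove the relations" step would look roughly like this; the application names (`cinder-ceph`, `nova-compute`, `ceph-mon`) are the usual OpenStack charm names but will vary per deployment, so check `juju status` first:

```shell
# Detach ceph from volume and instance storage, leaving it related
# only to gnocchi. Application names are deployment-specific.
juju remove-relation cinder-ceph ceph-mon
juju remove-relation nova-compute ceph-mon
```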
[09:18] <utking> Ah, and still deploy ceph you mean?
[09:19] <BlackDex> yea :)
[09:19] <utking> Could do that yes
[09:19] <utking> hmm, nice, thanks BlackDex! :)
[09:19] <BlackDex> yw :)
[10:26] <kumar> hi
[10:35] <magicaltrout> there's still some pretty funky stuff going on with juju gui, jaas, bundles, machine allocation and so on
[10:35] <rick_h> magicaltrout: the one issue got a fix landed in the gui and should go out in release soon
[10:37] <magicaltrout> i just ended up with 3 apps and 6 machines
[10:57] <rick_h> magicaltrout: is there a backlog for what you mean there?
[11:25] <zeestrat> rick_h: you run into anything like this when upgrading juju https://bugs.launchpad.net/juju/+bug/1755155?
[11:25] <mup> Bug #1755155: charm hook failures after controller model upgrade from 2.1.2 to 2.3.4 <juju:New> <https://launchpad.net/bugs/1755155>
[11:27] <rick_h> zeestrat: hmm looking. There was an upgrade bug that thumper fixed that missed an upgrade step when jumping versions but I thought that was in 2.3.4. Let me double check
[11:30] <rick_h> lol zeestrat https://bugs.launchpad.net/juju/+bug/1746265 was what I was thinking but guess that's not the same
[11:30] <mup> Bug #1746265: juju-upgrade from 2.2.9 to 2.3.2 fails with state changing too quickly <upgrade-juju> <juju:Fix Committed by jameinel> <juju 2.2:Won't Fix> <juju 2.3:Fix Released by jameinel> <https://launchpad.net/bugs/1746265>
[11:32] <zeestrat> rick_h: Nah, that looked to be fixed in 2.3.4 as we didn't see those in staging.
[11:33] <rick_h> zeestrat: yea, that was in 2.3.3
[11:33] <rick_h> zeestrat: I'll ask around. let me see what I can do
[11:38] <zeestrat> rick_h: thanks a bunch. If there's any further info or troubleshooting needed just shout. Kinda need to get this one unstuck :)
[11:38] <rick_h> zeestrat: no doubt
[11:47] <rick_h> zeestrat: I pinged someone on the juju team about it and they're going to get it some eyeballs.
[11:48] <rick_h> zeestrat: sorry, folks just getting back from travel have left the eyeball count a little light the last few days, but we're getting on it
[11:50] <zeestrat> rick_h: No problemo, thanks for asking. Juju agents are rather hardheaded so they'll keep on retrying in the meantime ;)
[11:50] <rick_h> zeestrat: yea :/
[14:49] <SuneK> I have a case where the kubernetes-master node is in a waiting state, it gives the following message "Waiting to retry addon deployment". Etcd is running just fine.
[14:50] <SuneK> Rebooting and restarting snap services doesn't help
[14:50] <SuneK> Any ideas?
[15:03] <magicaltrout> boo he went
[15:03] <magicaltrout> i had the same
[15:12] <rick_h> magicaltrout: yea? kwmonroe / tvansteenburgh ^
[15:15] <magicaltrout> yeah, although in the end i just switched off the dashboard and dns via juju config
[15:15] <magicaltrout> and then switched them back on
[15:15] <magicaltrout> seemed to fix it
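magicaltrout's workaround of cycling the addons would look roughly like this, assuming the option names used by the kubernetes-master charm at the time (`enable-dashboard-addons`, `enable-kube-dns`); verify yours with `juju config kubernetes-master`:

```shell
# Toggle the dashboard and DNS addons off...
juju config kubernetes-master enable-dashboard-addons=false enable-kube-dns=false

# ...wait for the unit to settle (watch `juju status`), then re-enable them
juju config kubernetes-master enable-dashboard-addons=true enable-kube-dns=true
```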
[15:32] <kwmonroe> hm, ryebot mentioned something about kubefed having trouble.. maybe ^^ that's the issue?
[15:35] <ryebot> hmm
[15:35] <ryebot> SuneK: magicaltrout did that persist no matter how long you waited?
[15:35] <ryebot> oops, just magicaltrout I guess
[16:00] <magicaltrout> i got bored after about 15 minutes
[16:00] <magicaltrout> if that helps
[16:01] <magicaltrout> on a different note
[16:01] <magicaltrout> juju add-ssh-key blah..... juju ssh 9... unauthorized
[16:01] <magicaltrout> have i missed a step?
[16:01] <rick_h> magicaltrout: hmm, nope...should show the key in ~/.ssh/authorized_keys
[16:06] <magicaltrout> hmm weird
[16:06] <magicaltrout> we gave a colleague access to a controller
[16:07] <magicaltrout> juju status works
[16:07] <magicaltrout> we've added his key, i can see it on the remote units and we can login using ssh ubuntu@....
[16:07] <magicaltrout> using that key
[16:07] <magicaltrout> but juju ssh 9.. for example says "unauthorized access"
[16:07] <rick_h> ? is it what you think it is?
[16:09] <magicaltrout> is that a question or a prophetic statement?
[16:10] <magicaltrout> we have tested a default key and a newly generated one both do the same
[16:10] <magicaltrout> i'm assuming it's something we've done but it's a bit weird
[16:11] <rick_h> heh, I mean that using juju ssh 9 maybe isn't the model you think it is or something?
[16:12] <rick_h> I'm not sure, as you note if you can ssh ubuntu@... then not really sure what juju is doing different for you there that would fail
[16:12] <magicaltrout> well
[16:13] <magicaltrout> juju status shows us all the machines/units etc
[16:13] <magicaltrout> so i'm assuming juju ssh will then access the correct box
[16:14] <magicaltrout> https://gist.github.com/buggtb/89252ab1a378dd5c3a48ab372c5e71fd
[16:14] <magicaltrout> tried the same with unit names as well
[16:14] <magicaltrout> same output
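For reference, the flow under discussion uses these juju commands; the key path and unit name below are placeholders, not values from the conversation:

```shell
# Add the colleague's public key to the current model
juju add-ssh-key "$(cat /path/to/colleague_key.pub)"

# Confirm the key was registered with the model
juju ssh-keys

# Then ssh by machine number or by unit name
juju ssh 9
juju ssh myapp/0
```

Note that `juju ssh` authenticates through the controller, so the juju user also needs access to the model itself (via `juju grant`), not just a key on the machines — which may be relevant when plain `ssh ubuntu@...` works but `juju ssh` reports unauthorized.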