[07:48] <kjackal> Good morning Juju world
[07:51] <junaidali> Hi guys, I'm trying to restore a failed juju controller. I have created a new controller and am running the command 'juju restore-backup -b --file=juju-backup-20170110-092916.tar.gz'
[07:51] <junaidali> but it errors out with the message 'ERROR old bootstrap instance ["bootstrap:6tdgfk"] still seems to exist; will not replace'
[07:54] <junaidali> I'm following this doc -> https://jujucharms.com/docs/2.0/controllers-backup. Am I missing anything here?
[08:33] <kjackal> hi junaidali, you might want to also ask at #juju-dev
[09:20] <junaidali> kjackal: thanks
[10:09] <axino> hi
[10:09] <axino> does anyone have an example of a config file you can pass to "juju deploy --config" ?
[10:10] <axino> https://jujucharms.com/docs/devel/charms-config wee
[10:13] <kjackal> axino: what are you looking for? Why isn't the example enough?
[10:13] <axino> kjackal: the example is enough and is what I was looking for
[10:13] <axino> :)
[10:13] <axino> I was just answering to myself
[10:13] <kjackal> aaah :) sorry
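For anyone finding this later: a minimal sketch of the YAML that `juju deploy --config` accepts — the top-level key is the application name, nested keys are that charm's config options (the application name and options here are illustrative):

```yaml
# settings.yaml (names are illustrative)
mediawiki:
  name: Juju Wiki
  skin: monobook
```

Used as `juju deploy mediawiki --config settings.yaml`.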
[11:41] <Hetfield> Hi, I'm using conjure-up with the openstack-base bundle
[11:43] <Hetfield> unfortunately it seems to be using up to 18 machines (from juju status). I have 8 registered with the MAAS server
[11:44] <Hetfield> the docs say I only need 4 machines. how do I tell juju to relocate services to the available machines? (ram is enough)
[14:28] <Zic> hi here, what is the best way to control where Juju deploys the "roles" of a multi-machine charm? For example, I'm still on my PoC testing of canonical-kubernetes: I deployed some VMs "foo-k8smaster-0[123]" and "foo-k8snode-0[123]" and added all of them to Juju with manual provisioning (juju add-machine)
[14:29] <Zic> if I run a juju deploy ./bundle.yaml of my custom canonical-kubernetes template (which has 3 kubernetes-masters), Juju will deploy them somewhere other than foo-k8smaster-0[123]
[14:30] <Zic> (in this particular example, foo-k8smaster-01 was dedicated to the EasyRSA part, and foo-k8snode-02 was chosen to be the kubeapi-load-balancer...)
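A hypothetical bundle fragment showing the usual answer here: `to:` placement directives that pin applications to specific machine IDs (the IDs, charm revisions, and names below are assumptions — take the real IDs from `juju status` after each manual `juju add-machine`; note that mapping a bundle onto machines that already exist in the model additionally needs `juju deploy --map-machines=existing`, available in newer Juju releases):

```yaml
# Pin specific applications to specific machines via "to:" directives.
machines:
  "0": {}   # e.g. foo-k8smaster-01
  "4": {}   # e.g. foo-k8snode-02
applications:
  easyrsa:
    charm: cs:~containers/easyrsa
    num_units: 1
    to: ["0"]
  kubeapi-load-balancer:
    charm: cs:~containers/kubeapi-load-balancer
    num_units: 1
    to: ["4"]
```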
[15:25] <petevg> cory_fu: matrix PR for you when you get the chance: https://github.com/juju-solutions/matrix/pull/69 (I think that this is the best way to get the matrix output_dir passed through everything -- it avoids attempting to resolve collisions between the args for bundletester and the args for matrix, and also only necessitates a code change in one more place. The
[15:25] <petevg> only downside is that it doesn't work if we point the cwr output at an S3 bucket ... but I figure that we can cross that bridge when we come to it.)
[15:28] <sfeole> hey everyone, i'm working on my own charm and using subprocess.run to carry out a particular action. when i install the charm, the install hook fails, complaining that module 'subprocess' has no attribute 'run'. according to the docs the reactive framework runs in python3. Anyone have ideas?
[15:30] <tvansteenburgh> sfeole: subprocess.run was added in 3.5
[15:30] <sfeole> tvansteenburgh, reactive does not?
[15:30] <sfeole> tvansteenburgh, oh i missed that
[15:33] <sfeole> tvansteenburgh, thanks, btw, libjuju rocks
[15:33] <tvansteenburgh> sfeole: thanks, glad to see people using it
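For anyone hitting the same error: a small sketch of one way to stay compatible — fall back to `subprocess.check_output` when `subprocess.run` (added in Python 3.5) is missing. The helper name is made up:

```python
import subprocess


def run_cmd(args):
    """Run a command and return its stdout as text.

    subprocess.run() only exists on Python 3.5+; a charm hook may land on
    an older python3, so fall back to check_output() there.
    """
    if hasattr(subprocess, "run"):
        result = subprocess.run(args, stdout=subprocess.PIPE, check=True)
        return result.stdout.decode()
    # Older interpreters: check_output() also raises on non-zero exit.
    return subprocess.check_output(args).decode()


print(run_cmd(["echo", "hello"]))
```

Both branches raise `CalledProcessError` on a non-zero exit, so error handling stays the same either way.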
[16:49] <jhobbs> where can i get a juju deployer that works with juju 2.0 and juju 2.1?
[16:49] <jrwren> it's built in as `juju deploy bundle.yaml`
[16:50] <jhobbs> yeah but you can't deploy to existing machines with that
[16:55] <vmorris> jhobbs: juju-deployer doesn't work for both versions? https://launchpad.net/juju-deployer
[16:57] <jhobbs> vmorris: yeah i think it does, just wondering what ppa to install it from i guess
[16:57] <jhobbs> the version in xenial is old
[16:58] <vmorris> i think the last doc i saw said to use virtualenv and pip
[16:58] <jhobbs> ah ok
[16:58] <vmorris> https://pypi.python.org/pypi/juju-deployer/
[16:58] <jhobbs> that would work too i think
[16:58] <jhobbs> thanks vmorris
[16:58] <vmorris> jhobbs yw
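For the record, the virtualenv + pip route vmorris mentioned looks roughly like this — a sketch assuming `python3 -m venv` works and PyPI is reachable; the venv path is illustrative:

```shell
# Install juju-deployer in an isolated virtualenv, keeping it separate
# from the older distro (xenial) package.
VENV="$HOME/deployer-env"
python3 -m venv "$VENV"
"$VENV/bin/pip" install juju-deployer   # pulls the latest release from PyPI
"$VENV/bin/juju-deployer" --help
```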
[17:03] <emjburns> Hi, I'm using the canonical distribution of kubernetes charm. I can use the juju command to ssh into the nodes, but I'm wondering how to ssh into the nodes w/o juju - where are the credentials stored? I'm also wondering if there's any plan for the juju command to support rsync
[17:08] <jhobbs> emjburns: ~/.local/share/juju/ssh
[17:09] <lazyPower> emjburns - "ssh into the nodes without juju" - are you referring to just ssh user@ip? without routing through juju? i'm not certain i understand that portion of the question.
[17:11] <emjburns> lazyPower: yes, i'm looking to do just ssh user@ip. I'd like to get rsync to work so i can grab the log files off each machine and put them in a central place (my kubectl logs command isn't working because of a lack of FQDNs in my cluster)
[17:11] <emjburns> jhobbs thanks!
[17:11] <jhobbs> you're welcome emjburns
[17:12] <lazyPower> emjburns ah, yeah :) that credentials path jhobbs posted is where you can find the client ssh credentials. you can alternatively add your own ssh key to the mix as well
[17:12] <tvansteenburgh> jhobbs: the latest juju-deployer is in ppa:tvansteenburgh/ppa
[17:12] <lazyPower> emjburns juju run --application foobar "ssh-import-id gh:my-github-id"  or leave off the gh: and use your launchpad id to import from launchpad.
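Putting jhobbs' key path and the rsync idea together — a sketch, with a placeholder address and log path (real IPs come from `juju status`; `ubuntu` is the usual default user on Juju-deployed machines):

```shell
# Client key the juju CLI generated; usable with plain ssh/rsync as well.
JUJU_KEY="$HOME/.local/share/juju/ssh/juju_id_rsa"

# Plain ssh, bypassing the juju client (10.0.0.42 is a placeholder, so
# these will fail anywhere but a real deployment):
ssh -o BatchMode=yes -o ConnectTimeout=5 -i "$JUJU_KEY" ubuntu@10.0.0.42 hostname

# Pull logs off the machine into a local directory using the same key:
rsync -av -e "ssh -o BatchMode=yes -i $JUJU_KEY" \
    ubuntu@10.0.0.42:/var/log/juju/ "./logs-10.0.0.42/"
```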
[17:15] <emjburns> lazyPower good to know! The next thing I'm wondering: is there a tutorial anyone can point me to for hooking up the juju ELK stack with my kubernetes cluster? I'm new to ELK.
[17:16] <lazyPower> emjburns - ok, bit of contention there. our ELK offering is using older versions of all those components (pre-5.0 releases)
[17:16] <lazyPower> you can still use it, just know that it comes with that caveat; it's not the most recent version of Elastic's wares. I've been leading an effort to get community maintainers to help pitch in and make those charms production-ready
[17:16] <emjburns> lazyPower ok also good to know. what would you suggest that I use then? (that effort would be awesome, btw)
[17:17] <lazyPower> emjburns - the recommended path is to deploy the beats-core bundle
[17:17] <lazyPower> and then relate those beats to your services you wish to monitor
[17:17] <lazyPower> i haven't done that in a few weeks, so you might be bitten by a series mismatch on the beats charms themselves.
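For reference, that deploy-then-relate flow as a bundle fragment — the charm names and relation endpoints below are assumptions from memory, so check the charm store pages before using them:

```yaml
# Hypothetical fragment: filebeat (a subordinate charm) attached to an
# existing application and wired into elasticsearch.
applications:
  filebeat:
    charm: cs:filebeat
  elasticsearch:
    charm: cs:elasticsearch
    num_units: 1
relations:
  - [filebeat:beats-host, kubernetes-worker:juju-info]
  - [filebeat:elasticsearch, elasticsearch:client]
```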
[17:19] <emjburns> hmm ok. any tutorial you can point me towards that's more in depth than the beats-core info page?
[17:19] <lazyPower> I do believe I wrote a blog post about this, 1 moment
[17:20] <lazyPower> emjburns https://insights.ubuntu.com/2016/09/22/monitoring-big-software-stacks-with-the-elastic-stack/
[17:21] <emjburns> lazyPower cool thank you so much!
[17:21] <lazyPower> emjburns no problem. If you're interested in the 5.0 upgrade story and have cycles to lend, i could certainly use your hands :)
[17:23] <emjburns> lazyPower ok if beats doesn't work out for me I may see what I can do
[17:25] <lazyPower> I'm all ears on this issue, there are several users that have requested better monitoring/metric collection from their k8s clusters, and i'm happy to entertain ideas/suggestions until we have a solid roadmap built around the story. Our initial goal is to provide options, so ELK, perhaps Graylog if someone has the time to write the charm, Prometheus, et al.
[17:27] <emjburns> lazyPower I'm quite new to using the kubernetes charm, but I would absolutely love if it came with (or had the option to enable in the bundle.yaml) some logging solution. (or, i'm always happy to follow tutorials to set it up using other charms!)
[17:28] <lazyPower> emjburns - well it does ship with log aggregation for the running workloads. there's a fluentd forwarding system that is used in the administrative dashboard
[17:29] <lazyPower> but if you're looking for trend reporting, and long term retention, that's not enabled in our current bundle, and would be better served as an external deployment in my humble opinion (how do you log kubernetes if your cluster is unhealthy?)
[17:30] <emjburns> lazyPower makes total sense as an external deployment, good point.