=== med_ is now known as Guest47520
=== Guest47520 is now known as medberry
=== mup_ is now known as mup
[07:48] Good morning Juju world
[07:51] Hi guys, I'm trying to restore a failed juju controller. I have created a new controller and am running the command 'juju restore-backup -b --file=juju-backup-20170110-092916.tar.gz'
[07:51] but it is erroring out with the message 'ERROR old bootstrap instance ["bootstrap:6tdgfk"] still seems to exist; will not replace'
[07:54] I'm following this doc -> https://jujucharms.com/docs/2.0/controllers-backup. Am I missing anything here?
[08:33] hi junaidali, you might want to also ask at #juju-dev
[09:20] kjackal: thanks
[10:09] hi
[10:09] does anyone have an example of a config file you can pass to "juju deploy --config" ?
[10:10] https://jujucharms.com/docs/devel/charms-config wee
[10:13] axino: what are you looking for? Why isn't the example enough?
[10:13] kjackal: the example is enough and is what I was looking for
[10:13] :)
[10:13] I was just answering to myself
[10:13] aaah :) sorry
[11:41] Hi i'm using conjure-up with openstack base charm
[11:43] unfortunately it seems to be using up to 18 machines (from juju status). i have 8 registered with maas server
[11:44] docs say i only need 4 machines. how to tell juju to relocate services to available machines? (ram is enough)
=== deanman_ is now known as deanman
[14:28] hi here, what is the best way to control where Juju deploys "roles" of a multi-machine charm? For example, I'm still on my PoC testing of canonical-kubernetes, I deployed some VMs "foo-k8smaster-0[123]" and "foo-k8snode-0[123]" and added all of them to Juju with manual provisioning (juju add-machine)
[14:29] if I run a juju deploy ./bundle.yaml of my custom canonical-kubernetes templates (which have 3 kubernetes-masters), Juju will deploy them somewhere other than foo-k8smaster-0[123]
[14:30] (for this special example, foo-k8smaster-01 was dedicated to the EasyRSA part, and foo-k8snode-02 was chosen to be the kube-api-loadbalancer...)
=== scuttle|afk is now known as scuttlemonkey
=== deanman is now known as deanman_
[15:25] cory_fu: matrix PR for you when you get the chance: https://github.com/juju-solutions/matrix/pull/69 (I think that this is the best way to get the matrix output_dir passed through everything -- it avoids attempting to resolve collisions between the args for bundletester and the args for matrix, and also only necessitates a code change in one more place. The only downside is that it doesn't work if we point the cwr output at an S3 bucket ... but I figure that we can cross that bridge when we come to it.)
[15:28] hey everyone, i'm working on my own charm and using subprocess.run to carry out a particular action. when i install the charm, the install hook fails complaining that subprocess does not have a module called 'run'. according to the docs the reactive framework runs in python3. Anyone have ideas?
[15:30] sfeole: subprocess.run was added in 3.5
[15:30] tvansteenburgh, reactive does not?
[15:30] tvansteenburgh, oh i missed that
[15:33] tvansteenburgh, thanks, btw, libjuju rocks
[15:33] sfeole: thanks, glad to see people using it
[16:49] where can i get a juju deployer that works with juju 2.0 and juju 2.1?
[16:49] it's built in as `juju deploy bundle.yaml`
[16:50] yeah but you can't deploy to existing machines with that
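For the 10:09 question about a config file for `juju deploy --config`, a minimal sketch: the top-level key is the application name and the nested keys are that charm's config options (the application name and option keys below are only illustrative; valid keys come from each charm's config.yaml):

    # mysql-config.yaml -- illustrative application name and options
    mysql:
      dataset-size: 512M
      flavor: percona

    juju deploy mysql --config ./mysql-config.yaml

And for the 14:28 and 16:50 placement questions, a sketch of bundle placement directives. The to: entries reference machines declared in the bundle's own machines: section; whether a bundle can be mapped onto machines already added with `juju add-machine` depends on the Juju version, while for a single charm `juju deploy <charm> --to <machine-id>` does target an existing machine. Charm URLs and machine numbers here are assumptions:

    machines:
      "0":
        series: xenial
      "1":
        series: xenial
    services:            # some bundle formats use "applications:" instead
      kubernetes-master:
        charm: cs:~containers/kubernetes-master
        num_units: 1
        to: ["0"]
      easyrsa:
        charm: cs:~containers/easyrsa
        num_units: 1
        to: ["1"]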
[16:55] jhobbs: juju-deployer doesn't work for both versions? https://launchpad.net/juju-deployer
[16:57] vmorris: yeah i think it does, just wondering what ppa to install it from i guess
[16:57] the version in xenial is old
[16:58] i think the last doc i saw said to use virtualenv and pip
[16:58] ah ok
[16:58] https://pypi.python.org/pypi/juju-deployer/
[16:58] that would work too i think
[16:58] thanks vmorris
[16:58] jhobbs yw
[17:03] Hi, I'm using the canonical distribution of kubernetes charm. I can use the juju command to ssh into the nodes, but I'm wondering how to ssh into the nodes w/o juju - where are the credentials stored? I'm also wondering if there's any plan for the juju command to support rsync
[17:08] emjburns: ~/.local/share/juju/ssh
[17:09] emjburns - "ssh into the nodes without juju" - are you referring to just ssh user@ip? without routing through juju? i'm not certain i understand that portion of the question.
[17:11] lazyPower: yes, i'm looking to do just ssh user@ip. I'd like to get rsync to work so i can grab the log files off each machine and put them in a central place (my kubectl logs command isn't working because of lack of FQDNs in my cluster)
[17:11] jhobbs thanks!
[17:11] you're welcome emjburns
[17:12] emjburns ah, yeah :) that credentials path jhobbs posted is where you can find the client ssh credentials. you can alternatively add your own ssh key to the mix as well
[17:12] jhobbs: the latest juju-deployer is in ppa:tvansteenburgh/ppa
[17:12] emjburns juju run --application foobar "ssh-import-id gh:my-github-id" or leave off the gh: and use your launchpad id to import from launchpad.
[17:15] lazyPower good to know! The next thing I'm wondering: is there a tutorial anyone can point me to for hooking up the juju ELK stack with my kubernetes cluster? I'm new to ELK.
[17:16] emjburns - ok, bit of contention there. our ELK offering is using older versions of all those components. (pre 5.0 release)
[17:16] you can still use it, just know that it comes with that caveat, it's not the most recent version of elastic's wares. I've been leading an effort to try and get community maintainers to help pitch in and make those charms prod ready
[17:16] lazyPower ok also good to know. what would you suggest that I use then? (that effort would be awesome, btw)
[17:17] emjburns - the recommended path is to deploy the beats-core bundle
[17:17] and then relate those beats to your services you wish to monitor
[17:17] i haven't done that in a few weeks, you might be bitten by series mismatch on the beats charms themselves.
[17:19] hmm ok. any tutorial you can point me towards that's more in depth than the beats-core info page?
[17:19] I do believe i wrote a blog post about this, 1 moment
[17:20] emjburns https://insights.ubuntu.com/2016/09/22/monitoring-big-software-stacks-with-the-elastic-stack/
[17:21] lazyPower cool thank you so much!
[17:21] emjburns no problem. If you're interested in the 5.0 upgrade story and have cycles to lend, i could certainly use your hands :)
[17:23] lazyPower ok if beats doesn't work out for me I may see what I can do
[17:25] I'm all ears on this issue, there are several users that have requested better monitoring/metric collection from their k8s clusters, and i'm happy to entertain ideas/suggestions until we have a solid roadmap built around the story. Our initial goal is to provide options, so ELK, perhaps graylog if someone has the time to write the charm, prometheus, et al.
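Following up the 17:03 question about ssh and rsync without going through juju: a sketch using the client key from the path given above (the key filename juju_id_rsa and the ubuntu login user are assumptions -- check the directory for the actual filename):

    # plain ssh to a machine, bypassing the juju ssh command
    ssh -i ~/.local/share/juju/ssh/juju_id_rsa ubuntu@<machine-ip>

    # rsync the logs off a machine into a local directory
    rsync -avz -e "ssh -i ~/.local/share/juju/ssh/juju_id_rsa" \
        ubuntu@<machine-ip>:/var/log/ ./logs/<machine-ip>/

    # juju does have scp built in (though not rsync), e.g.
    juju scp kubernetes-worker/0:/var/log/syslog ./logs/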
[17:27] lazyPower I'm quite new to using the kubernetes charm, but I would absolutely love if it came with (or had the option to enable in the bundle.yaml) some logging solution. (or, i'm always happy to follow tutorials to set it up using other charms!)
[17:28] emjburns - well it does ship with log aggregation for the running workloads. there's a fluentd forwarding system that is used in the administrative dashboard
[17:29] but if you're looking for trend reporting, and long term retention, that's not enabled in our current bundle, and would be better served as an external deployment in my humble opinion (how do you log kubernetes if your cluster is unhealthy?)
[17:30] lazyPower makes total sense as an external deployment, good point.
=== CyberJacob is now known as zz_CyberJacob
=== redir is now known as redir|exercise
=== redir|exercise is now known as redir
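A sketch of the beats-core path recommended at 17:17, assuming the bundle name and relation endpoint names shown here (in particular filebeat:beats-host and topbeat:beats-host) -- check the charm store pages for the exact endpoints and current series support before relying on them:

    # deploy the beats-core bundle (elasticsearch, kibana and the beats subordinates)
    juju deploy beats-core

    # attach the subordinate beats to the workloads you want monitored
    juju add-relation filebeat:beats-host kubernetes-worker
    juju add-relation topbeat:beats-host kubernetes-worker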