[00:55] <bolthole> hi Juju mavens.... I'm trying to find some kind of comparison between juju and kubernetes
[00:55] <bolthole> They seem to be somewhat competing tech, both being "orchestration tools"..  but searches only ever seem to bring up "how to deploy kubernetes USING juju" !!
[00:56] <jrwren> bolthole: juju is a layer higher ;]
[00:57] <jrwren> bolthole: kubernetes is a container management system, juju is a service modeling system. Some parts of your juju-modeled services might be containers, some might not be, and some of those may be kubernetes containers.
[00:57] <jrwren> bolthole: does that make sense?
[01:03] <bolthole> jrwren .... kind of..  Some coworkers are claiming that kubernetes is also a service orchestration tool
[01:04] <bolthole> at this point, I am somewhat familiar with juju, but not at all with kubernetes. So need more info to make good comparison to management
[01:07] <jrwren> bolthole: i don't know enough about kubernetes to say anymore than I already have. Sorry.
[02:23] <firl> marcoceppi you guys still at the summit?
[09:30] <jacekn> hello. Can somebody tell me if https://bugs.launchpad.net/charms/+bug/1538573 is on you charmers' radar?
[09:30] <mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1538573>
[10:54] <stub> jacekn: It needs a merge proposal targeting lp:charms/trusty/collectd to get on the review list.
[10:55] <jacekn> stub: but that's not mergeable, it's a completely new charm
[10:57] <stub> jacekn: ok. I can give it a look over and try and help with Amulet, but one of the eco team will need to help with landing once they are back from their thing.
[10:58] <jacekn> stub: cool, thanks. FTR I asked here before about amulet, nobody was able to help; looks like it was never used with "juju-info" relations
[10:58] <stub> What sort of things did you find that failed?
[10:59] <stub> I fall back to raw juju commands or plugins like juju-wait when Amulet doesn't stretch far enough.
[10:59] <jacekn> stub: can't remember exactly but it was something about cs:ubuntu and the collectd relation not being added
[10:59] <jacekn> stub: I think it may have been because "juju-info" is not a declared interface or something like that
[11:00] <stub> That sounds like it is coming from juju-deployer, which Amulet uses for the initial deploy.
[11:00] <jacekn> could be yeah, anyway have a look when you have a moment
[11:24] <Laney> hi juju-ers
[11:24] <Laney> I have a subordinate service (nrpe-external-master) and now I want the thing it is subordinate to to drop some config into it (nagios checks) on upgrade
[11:25] <Laney> I wrote a hook nrpe-external-master-relation-changed and then did juju upgrade-charm my-unit but it didn't get called
[11:25] <Laney> any way to get it to work so that I can add more in future and just be able to run some command?
[11:35] <rick_h__> Laney: i'd suggest an email to the list. i think folks might have a solution and be able to discuss it with a wider group there.
[11:36] <Laney> rick_h__: ok, which list?
[11:36] <Laney> I'm sure there is a solution as like 9999 Canonical charms use nrpe-external-master :P
[11:38] <Laney> found juju@l.u.c
[11:39]  * Laney hopes someone is moderating the list because he doesn't want to subscribe
[11:42] <rick_h__> Laney: yes the juju@ list
[11:44] <Laney> rick_h__: thanks, I sent something
[11:45] <Laney> no I didn't, it got rejected
[11:46] <Laney> bah
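A hedged sketch of the hook Laney describes: a `nrpe-external-master-relation-changed` hook on the principal charm that publishes a check definition to the subordinate. The check name, thresholds, and the `monitors` relation key are assumptions based on the common nrpe-external-master interface convention, not taken from Laney's charm. Note that, as Laney observed, `juju upgrade-charm` by itself does not fire relation hooks; the hook only runs when the relation data actually changes.

```shell
#!/bin/bash
# hooks/nrpe-external-master-relation-changed (principal charm side)
# Publishes a nagios check over the relation; the subordinate's own
# relation-changed hook reads it with relation-get and writes the nrpe
# config. Check name and thresholds here are illustrative only.
set -eu

relation-set monitors="$(cat <<'EOF'
monitors:
    remote:
        nrpe:
            check_root_disk:
                command: check_disk -w 20% -c 10% -p /
EOF
)"
```

This uses Juju hook tools (`relation-set`), so it only runs inside a hook context, not standalone.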
[17:31] <lazypower|summit> bolthole o/
[17:55] <bolthole> hi lazypower
[17:55] <bolthole> did you see my question yesterday ?
[18:17] <lazypower|summit> bolthole no, but i did see the one this morning about the differences between k8s and juju
[18:17] <lazypower|summit> bolthole which is one that i can answer :)
[18:18] <firl> lazypower|summit: you around?
[18:19] <lazypower|summit> bolthole - the main difference is that Juju is a modeling tool / orchestrator - while Kubernetes is a container orchestrator. They have somewhat similar properties but the sidewalk ends pretty quickly during a comparison. Juju allows you to model things like networks and storage across providers. K8s allows you to describe the deployment of containers, and scale/"manage" them. But it's very inflexible in terms of making services
[18:19] <lazypower|summit>  talk to one another consistently, and it's quite an exercise in abstract thinking, as you have to work with yaml files describing pods and services (akin to charms and units)
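For concreteness, the kind of YAML lazypower means - a minimal Kubernetes pod description, roughly what a unit is to Juju. The names and image are illustrative, not from the conversation:

```yaml
# Minimal pod spec (Kubernetes v1 API); one pod is loosely akin to a Juju unit
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
  labels:
    app: wordpress   # a Service selects pods by label, loosely akin to relating units
spec:
  containers:
    - name: wordpress
      image: wordpress:latest
      ports:
        - containerPort: 80
```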
[18:19] <lazypower|summit> firl o/ hey there
[18:20] <firl> have time for the kubernetes stuff?
[18:20] <lazypower|summit> firl I'm hacking on some end of week stuff, but i started to dig into the failures in the head of the k8s work
[18:21] <lazypower|summit> i've got a few potential fixes sitting on my disk that need testing
[18:21] <lazypower|summit> firl is there anything specific i can help with aside from being super latent to deliver a fix?
[18:21] <firl> haha just waiting to do the install
[18:21] <firl> I am on PTO next week
[18:22] <firl> what’s the github for the layer again?
[18:22] <lazypower|summit> sorry again :( summit/fosdem pretty well ate up my week and i'm out until Thurs of next week on PTO as well
[18:22] <lazypower|summit> http://github.com/mbruzek/layer-k8s
[18:22] <lazypower|summit> the specific layer that's having an issue is the flannel layer, it's not been reworked to be compliant with the etcd changes to support a proper cluster
[18:23] <firl> gotcha
[18:23] <lazypower|summit> http://github.com/chuckbutler/layer-flannel  (mostly undocumented as it was intended to be temporary)
[18:23] <firl> so what’s the best way to install kubernetes then?
[18:23] <lazypower|summit> if you use cs:trusty/etcd-4
[18:23] <lazypower|summit> it'll be fine as is from tip of the built charm living in mbruzek's github repo
[18:24] <firl> so use etcd-4 and the layer-k8s?
[18:24] <firl> and it should work
[18:25] <lazypower|summit> yep. you will need to run `charm build` in the k8s layer
[18:25] <lazypower|summit> you can't just deploy the layer and expect magic :)
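lazypower's instructions assembled into one sequence. Hedged sketch: the built charm's name, the series, and the `charm build` output path are assumptions (they depend on the layer's metadata and your charm-tools/$JUJU_REPOSITORY setup); only the repo URL and `cs:trusty/etcd-4` come from the conversation.

```shell
# Build the k8s layer into a deployable charm
git clone http://github.com/mbruzek/layer-k8s
cd layer-k8s
charm build    # output typically lands under $JUJU_REPOSITORY/trusty/

# Deploy the known-good etcd revision alongside the freshly built charm
juju deploy cs:trusty/etcd-4 etcd
juju deploy local:trusty/k8s    # built charm name is a guess
juju add-relation k8s etcd
```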
[18:25] <firl> haha ok
[18:25] <firl> is this going to be close to the final solution ( curious )
[18:25] <lazypower|summit> firl - we should have a published bundle in the store ~ the time you get back from PTO
[18:25] <lazypower|summit> we have a line item to deliver that by end of week in our namespace
[18:26] <firl> oh awesome
[18:26] <firl> I didn’t realize it was that soon
[18:26] <lazypower|summit> pretty close, yeah
[18:26] <firl> I should be able to wait for the bundle then
[18:26] <lazypower|summit> we're still cycling on breaking apart the components
[18:26] <lazypower|summit> so you can independently deploy/scale api-server vs nodes
[18:27] <firl> out of curiosity how do I setup the external net for floating IPs essentially
[18:27] <firl> for the kubectl service layers
[18:27] <lazypower|summit> That depends on which SDN layer is in the output bundle
[18:27] <firl> kk
[18:27] <lazypower|summit> if its weave, you use weave router and reverse proxy nodes
[18:28] <lazypower|summit> if its no sdn - you have to do some iptables shenanigans and supposedly we can get some floating IP bits from the cloud provider, but that gets expensive in a hurry
[18:28] <lazypower|summit> plus you're limited to i think like... 7... unless you request more
[18:28] <firl> eww
[18:28] <lazypower|summit> if you use flannel, basically same principle as weave but i'm still working through the nuances of that
[18:29] <firl> the bundle won't have any by default?
[18:29] <lazypower|summit> i'm trying to nail down some time to work with illya of weaveworks to do a full integration w/ weave,  and similar with flannel but haven't heard back from my contact(s) at CoreOS
[18:29] <lazypower|summit> we'll publish a different flavored bundle with each configuration we're going to support
[18:30] <lazypower|summit> oh! and we may even have a go w/ the fan :)
[18:30] <lazypower|summit> but dont ask me about that one yet, i still have a lot of catching up to do with our darling SDN solution
[18:30] <firl> haha
[18:30] <haasn> I don't understand how juju is supposed to work. I have a MAAS setup, and I understand that juju charms let me automatically deploy services on this MAAS. But before I am able to do this, I need to bootstrap juju? And this needs another machine for some reason? What is this machine doing?
[18:31] <firl> sounds good, it will be interesting to see, thanks man
[18:31] <firl> I will check back in after PTO
[18:31] <lazypower|summit> haasn - juju deploys a controller - which is responsible for managing each charm you deploy
[18:31] <lazypower|summit> firl np np, thanks for the interest :)
[18:31] <haasn> lazypower|summit: Can I just install this controller “locally” instead of needing a dedicated machine for just that?
[18:31] <haasn> i.e. on some machine of my choosing
[18:32] <haasn> (in this case it would be the same as the MAAS controller machine)
[18:32] <haasn> I guess I could also set up a virtual machine on the MAAS controller, add that to the MAAS, then somehow coax juju bootstrap into choosing this machine for juju.. but it still seems like overkill
[18:32] <lazypower|summit> Not easily, no.
[18:32] <lazypower|summit> that would be my suggestion
[18:32] <lazypower|summit> to register a VM, tag it, and use tags as a constraint
[18:32] <haasn> I found juju-local which lets me use lxc containers, but I don't want to use lxc containers for *everything*, just the controller
[18:33] <lazypower|summit> juju bootstrap --constraints="tags=bootstrap"
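The full workflow lazypower suggests, hedged: the MAAS CLI syntax varies by MAAS version, `<profile>` and `<system-id>` are placeholders, and the tag name is arbitrary.

```shell
# Register the VM in MAAS, then tag it so a constraint can select it
maas <profile> tags new name=bootstrap
maas <profile> tag update-nodes bootstrap add=<system-id>

# Bootstrap lands on a machine carrying the tag
juju bootstrap --constraints="tags=bootstrap"
```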
[18:33] <haasn> ah okay
[18:33] <lazypower|summit> if maas knew about LXD containers (which seems like an odd duck, as lxd containers are very machine-ish but they don't have all the same properties of a machine) it could support that :)
[18:35] <jrwren> haasn: once you bootstrap you don't have to waste machine 0 to just controller. you can deploy to lxc:0 to use that same machine.
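jrwren's suggestion as commands. The service names are illustrative; `lxc:0` asks Juju for a new container on machine 0 (1.x placement syntax):

```shell
juju bootstrap                   # machine 0 becomes the controller
juju deploy mysql --to lxc:0     # new LXC container on the controller machine
juju status                      # the unit lands on machine 0/lxc/0
```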
[18:35] <lazypower|summit> jrwren - aiui, we're discouraging that moving forward with the 2.0 changes
[18:36] <haasn> I guess I'll make as small a VM I can on some slow VM host
[18:36] <jrwren> lazypower|summit: why is that?
[18:36] <haasn> It's not like the machine is doing anything is it?
[18:36] <lazypower|summit> jrwren - i'm going to defer to rick_h__  to make sure i'm not spreading FUD
[18:36] <jrwren> oh, well, 2.0 will be lxd instead of lxc
[18:36] <jrwren> but same principle should apply. i'll be sad if it doesn't
[18:37] <lazypower|summit> haasn quite a bit actually, it's only coordinating every service you deploy into the model, ever :)
[18:37] <lazypower|summit> jrwren have you used JES yet?
[18:37] <haasn> “coordinating”?
[18:37] <lazypower|summit> the 'default' bootstrap model is "special" - when you add a new model on the controller there is no machine 0
[18:37] <jrwren> lazypower|summit: yes, ah yes, I see what you mean.
[18:37] <jrwren> lazypower|summit: still, it works ok with JES
[18:38] <lazypower|summit> haasn correct, it coordinates event states on many units that belong to many applications
[18:38] <lazypower|summit> jrwren - how do you deploy to 0 (eg: the controller) when you are in a different model, and there is no machine 0 until you either add-machine or juju deploy "thing"
[18:38] <jrwren> haasn: i agree with you, it shouldn't be doing anything unless i'm deploying or adding relations ;]
[18:38] <jrwren> lazypower|summit: the controller lives in a model itself, you can use that model for something.
[18:39] <lazypower|summit> heh
[18:39] <lazypower|summit> ok
[18:39] <lazypower|summit> have fun thinking that ;)
[18:39] <jrwren> *shrug*
[18:39] <jrwren> if I have to burn yet another machine for a controller, i'm sad.  that is all I know for sure.
[18:40] <lazypower|summit> i mean you can colocate stuff on the controller, in that "special" environment
[18:40] <haasn> I guess the implicit assumption in enterprise software is that you have hundreds of machines, so one more control machine is just another figure
[18:40] <jrwren> but I guess that is just a good reason to run JES controller in a manual model in something like maas so a machine isn't wasted
[18:41] <jrwren> haasn: yes, I agree, that seems to be an implicit assumption. I think it is more cloud-scale assumption than enterprise.
[18:43] <bolthole> sooo.. speaking of juju-local... no support for using juju+docker instead of juju+lxc. how come? just that no one is interested in doing the work?
[18:43] <lazypower|summit> bolthole - oh? juju can deliver docker, and deliver docker payloads
[18:43] <lazypower|summit> http://github.com/juju-solutions/layer-docker
[18:44] <bolthole> thats not what I asked
[18:44] <bolthole> i think
[18:44] <bolthole> What if I just want to run juju, with environment=local, but use docker containers instead of lxc, and nothing else changes?
[18:44] <bolthole> normal juju use other than that
[18:45] <bolthole> (i'm new to docker, so I dont know if this even makes sense for sure)
[18:45] <bolthole> is it that the charm infrastructure relies too heavily on assumptions about the OS layer inside lxc , but certain things are just not present inside docker containers by default?
[18:50] <lazypower|summit> bolthole that doesn't really make sense due to the fact that app containers (eg: docker) are intended to be immutable artifacts
[18:50] <lazypower|summit> while its reasonably safe to assume you could start one with a /sbin/init system running, we have machine/system containers to do this that are intended to run an init system
[18:52] <lazypower|summit> bolthole - i suppose the short way to say that is "lack of effort, and the end product sounds like a hacky workaround to use docker for the sake of using docker"
[18:54] <bolthole> mm. sometimes, though, due to "Reasons".. it is necessary to use docker for the sake of docker :-/
[18:54] <bolthole> we shall see
[19:50] <jrwren> i am trying to use mongodb interface. after charm build it creates a hooks/relations/mongodb dir with modules in it. From where do these modules come?
[20:27] <smartbit> on OSX I connect to http://127.0.0.1:6079 with vagrant image http://cloud-images.ubuntu.com/vagrant/trusty/current/trusty-server-cloudimg-amd64-juju-vagrant-disk1.box. I get a "Connecting to the Juju environment" and no login.
[20:27] <smartbit> Tried config.trigger.after 'sudo route add -net 10.0.3.0/24 172.16.250.15 >/dev/null' to no avail.
[20:27] <smartbit> Any suggestions?
[22:37] <haasn> I created a container via the web UI and this container is stuck on “pending” (as per `juju status`). How can I figure out what's wrong? I attached to the container and it doesn't seem to be doing anything
[22:37] <haasn> I can reach it from the outside, too
[22:38] <haasn> (ping)
[22:39] <rick_h__> haasn: can you ssh to the container and look for a log in /var/log/juju with some activity
[22:40] <rick_h__> haasn: normally the container is running a unit of something so I'd say to grab the unit log, but if you just asked for a container there should be a machine log there I believe
[22:40] <haasn> https://0x0.st/X_x.txt this maybe?
[22:43] <rick_h__> haasn: hmm, replicasets are a mongodb thing. Is this in lxc locally? what version of juju?
[22:44] <haasn> hmm no I'm getting those errors outside of the container too (/var/log/juju/machine-0.log seems to be present on both)
[22:44] <haasn> I tried destroying the container with `juju destroy-machine 0/lxc/0` and now it's stuck in “state: pending life: dying”
[22:44] <rick_h__> haasn: is the container on machine 0?
[22:44] <haasn> and still doesn't seem to be doing anything
[22:44] <haasn> yes
[22:44] <haasn> and now I get that “public no address” error about once every few seconds
[22:44] <rick_h__> haasn: and this is on lxc provider? lxd? aws?
[22:46] <haasn> no idea. `juju version` on the machine I'm controlling from is 1.25.0-wily-amd64 but I have no idea if that's relevant
[22:47] <haasn> well, I managed to get rid of the container by using lxc-stop, lxc-destroy and juju destroy-machine --force
[22:47] <haasn> Time for another attempt. Can I set the MAC address to use for containers when creating them? Part of the reason it was stuck to begin with may have had to do with the fact that my DHCP server didn't have an entry for the random MAC it generated
[22:47] <haasn> (then again, it's not like the GUI asked me..)
[22:48] <rick_h__> haasn: right but you ran "juju bootstrap" on some provider?
[22:48] <haasn> Yes, juju bootstrapped itself into a machine
[22:48] <rick_h__> haasn: where you configured an entry in your environments.yaml
[22:48] <rick_h__> using the manual provider?
[22:48] <haasn> Yes, I configured that file and added a single environment of type: maas
[22:49] <haasn> This works, too
[22:49] <haasn> I have multiple maas nodes that I can request just fine with juju and deploy charms on
[22:49] <haasn> But now I want to have a maas node that hosts multiple LXC containers
[22:49] <rick_h__> ah ok
[22:49] <haasn> (for small charms not worth wasting a full server on)
[22:49] <rick_h__> that helps clarify
[22:49] <haasn> It's the creation of these LXC containers that doesn't seem to be working
[22:49] <rick_h__> ok, so right, maas should be able to provide dhcp addresses to the containers on the machines
[22:50] <haasn> I'm trying to figure out how to replicate the configuration the GUI tried to apply from the command line
[22:50] <rick_h__> sure thing, sec
[22:50] <haasn> I'm not using maas DHCP, I'm using an external DHCP server. I add the MACs of new servers manually
[22:51] <rick_h__> haasn: https://jujucharms.com/docs/1.25/charms-deploying search for "placement"
[22:52] <haasn> I understand that I can use that --to syntax to install multiple charms on a single machine, but that installs them “side by side” inside the same machine, on the same FS - they don't get isolation, they don't get individual IPs, etc. I figured I would prefer having them just live in LXC containers which are reasonably lightweight but provide some free isolation
[22:54] <rick_h__> haasn: ah sorry, I thought it had a container example there as well but not seeing it.
[22:55] <rick_h__> haasn: you do what the gui is doing by specifying a container path in the --to flag
[22:55] <haasn> I guess the container example was something like “juju add-machine lxc:0” to add a new LXC container on machine 0
[22:55] <haasn> which seems to run into the exact same result as doing it from the GUI
[22:55] <rick_h__> haasn: juju help deploy
[22:55] <rick_h__> there's an example there of deploying into a container
[22:55] <haasn> rick_h__: I'm not that far yet. I need a container working before I can deploy services into it
[22:56] <rick_h__> so juju deploy mysql --to 0/lxc/0 I think
[22:56] <haasn> The container doesn't work, and the problem seems to be that it picks a random MAC address instead of letting me configure what MAC to use
[22:56] <haasn> I'm looking for a way to solve this issue
[22:56] <rick_h__> haasn: right, but saying you can have Juju create the container and put the service on it at the same time
[22:56] <haasn> ah okay
[22:56] <rick_h__> haasn: but you're right, if there's a dhcp issues with the mac then solving that first is a good thing
[22:57] <rick_h__> haasn: I don't know of any way to inject the mac address there for you
[22:57] <haasn> I guess the best “solution” here would be to allocate the LXC container manually, give it a static IP of my choice, then add knowledge of this to juju? https://askubuntu.com/questions/671326/manually-provision-an-existing-lxc-container-in-juju-local
[22:57] <rick_h__> haasn: unfortunately things are setup expecting maas to be doing the heavy lifting there in my experience.
[22:57] <haasn> fair enough
[22:58] <haasn> This alone might be a reason to set up an extra subnet for the maas and use its DHCP, if it makes life easy when allocating lots of lxc containers dynamically
[22:58] <haasn> but I guess I can just “colocate” my services that run on the same machine (via --to) and stick to the current, simpler design that doesn't require any new network hardware
[22:58] <haasn> and only use maas for physical machines or VMs
[23:02] <haasn> Am I correct in assuming that (currently) `juju expose` pretty much does absolutely nothing for maas environments and all services are always publicly reachable?
[23:02] <haasn> I heard that some new rewrite of juju in Go is going to have a juju firewall that manages which services to expose or not to expose, but I take it this is in the distant future?
[23:02] <rick_h__> haasn: yea, that's true because there's not a provider security/firewall api in question
[23:03] <rick_h__> haasn: I think there was something looked into to control a firewall, but it's not on all hosts by default atm
[23:15] <haasn> How do service relations work? If I write “juju deploy wordpress && juju add-relation wordpress mysql”, does that mean wordpress “automagically” uses the mysql service as its backend? What happens if I just write “juju deploy wordpress” without having a mysql service anywhere else?
[23:16] <rick_h__> haasn: so first, what will happen if you don't have a database. The wordpress service will be deployed and report that it's blocked, waiting for a database, before it's useful
[23:17] <rick_h__> haasn: the two charms both declare they can communicate around a protocol by defining both ends of a relationship
[23:17] <rick_h__> haasn: https://jujucharms.com/docs/1.25/charms-relations has a starter and there's other docs on writing them
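rick_h__'s walkthrough as a command sequence (1.x CLI syntax; wordpress/mysql is the example pair from the conversation):

```shell
juju deploy wordpress               # deploys, then blocks waiting for a database
juju deploy mysql
juju add-relation wordpress mysql   # joins the matching database endpoints
juju expose wordpress               # a no-op on MAAS, as discussed above
```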
[23:20] <haasn> makes sense, thanks
[23:22] <haasn> is the machine ID treated like a nonce? i.e. if I add and destroy machines often, it will always only grow?
[23:23] <rick_h__> yes