=== salmankhan1 is now known as salmankhan
[11:35] Is it possible to deploy Kubernetes with conjure-up to a non-default VPC on EC2?
[12:37] Hello all! I'm quite new to juju and am writing a charm which utilises layer:snap, but it never seems to actually install the snap. Is there anyone here who might know what I've done wrong?
[12:43] I'm having issues with a charm, https://github.com/MartinHell/charm-collectd/blob/6338fe9d99d8c8c4f510cff28cf617aebdd6f901/reactive/collectd.py#L220 "AttributeError: module 'charmhelpers.fetch' has no attribute 'archiveurl'"
[13:13] nvm i fixed it
[13:29] hello :) I've brought up canonical-kubernetes using juju, after having conjure-up fail. I think I was left with a kubernetes setup that has lots of the settings as per the conjure-up defaults. Would someone be able to advise, for example, how I'd repeat this process to get the kubernetes "external" IPs to be in a subnet of my choosing?
[13:30] If it makes any difference, we're hosting this ourselves and it's all provisioned through MAAS
[13:38] hi EdS, I can give it a try
[13:39] hi kjackal :) thank you!
[13:39] EdS: you are deploying canonical-kubernetes
[13:40] what do you mean by "external" IPs?
[13:40] ok, sorry for my terminology. I mean the IP addresses assigned to services that I expose.
[13:41] ok, how do you expose the services? nodeport?
[13:41] the kubernetes cluster can "expose" a service and it is then assigned an "external IP"
[13:41] however, I've never seen anywhere where I can define the CIDR for these addresses
[13:43] Conjure-up appeared to allow me to set the desired properties of kubernetes, but did not work.
[13:44] Juju has worked really smoothly, but I missed out on all the tweaking that would make this new kubernetes cluster usable to us!
[13:44] yes, I have exposed the first test service with nodeport
[13:44] and I have ended up with a seemingly random IP, 10.1.63.10
[13:45] the 10.1.63.10 is one of the kubernetes nodes, right?
[13:46] no, the nodes are on 10.10.10.0/16
[13:46] ah, sorry, that's the IP of the pod
[13:46] k8s has a service-cidr config variable
[13:47] ok, brilliant, that sounds like the right thing.
[13:47] can you show me a juju config kubernetes-master
[13:47] the exposed service, if I read this right, is 10.152.183.97
[13:48] that sounds better, because service-cidr has a default value of 10.152.183.0/24
[13:48] aha ok
[13:49] so, I think the question is now much simpler. Do you know how to set that? :p
[13:49] but you cannot change the service-cidr after the initial deployment
[13:49] ok, that's fine
[13:49] you will need to redeploy k8s
[13:49] how would I set it, at all?
[13:49] give me a sec, looking for the documentation page
[13:50] aaah it will be faster if I just tell you
[13:50] I think juju was so smooth it felt like magic (ok, so it's in the name) that important things like this were missed, at least for me. Perhaps because of my half-success with conjure-up leaving config around? If that's even possible to happen? IDK
[13:51] that's a good suggestion
[13:52] so what we will do is grab the bundle from the store, change the config variable, and deploy it
[13:53] ok, I have that already as I had to tweak constraints
[13:53] can you do a "charm pull canonical-kubernetes"
[13:53] :)
[13:53] awesome
[13:54] so you go under the kubernetes-master service and you set the service-cidr to what you need
[13:54] let me do this here so I can tell you exactly how this looks
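(A sketch of the edit being described here, assuming `charm pull` drops the bundle into ./canonical-kubernetes with its default bundle.yaml; the CIDR value is purely illustrative:)

    # fetch the bundle locally so it can be edited before deploying
    charm pull canonical-kubernetes
    # in canonical-kubernetes/bundle.yaml, set the CIDR under the
    # kubernetes-master service's options, e.g.:
    #   kubernetes-master:
    #     options:
    #       service-cidr: 10.100.0.0/24    # illustrative value
    juju deploy ./canonical-kubernetes/bundle.yaml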
[13:54] EdS: When you say that conjure-up failed, can you give me more info? I don't know much about the k8s side, but I'd like to sort out any issues with conjure-up at least.
[13:56] EdS: it should look like this: http://pastebin.ubuntu.com/25732311/
[13:57] oh wow :)
[13:57] ok, will give that a shot.
[13:57] Cory, two seconds. :)
[13:59] cory_fu: I have a feeling that I was running into several things at once. I'm hunting a few tickets
[14:01] first one: too many machines used, so it ran out of machines to provision
[14:03] like this: https://github.com/conjure-up/spells/issues/67
[14:03] except our scenario was less extreme than 4->18
[14:03] We just had a discussion yesterday about having Juju do better about verifying MAAS / cloud limits / availability early on. :/
[14:03] thanks so much for your help kjackal, that had eluded me for ages
[14:04] lol yeah, might help me out. I unpacked a lot of extra machines trying to get around this
[14:05] but got it going in the end.
[14:06] EdS: Odd. I thought that the "too many machines" bug was resolved already. Any chance you still have the ~/.cache/conjure-up/conjure-up.log file?
[14:06] while you're here... can you satisfy an enquiring mind? did my conjure-up attempts store configuration that was used in a subsequent attempt with juju and a bundle file I specified myself? Or am I overthinking this?
[14:07] This wasn't exactly in the last few days. I can go digging and see if I have it.
[14:07] ooh, lots of evidence :/
[14:08] shall I pastebin?
[14:08] EdS: Not currently. If you don't go past the "Configure Applications" screen and click the "Deploy All" (or every individual deploy) button, nothing will get saved
[14:09] Well, technically, we were planning on having a resume feature, so we might persist choices into a sqlite db in that ~/.cache/conjure-up directory, but they're never read in again
[14:09] EdS: Yeah, a pastebin of the log would be helpful.
[14:09] ok, thanks, that clears up a few doubts
[14:15] jam: Hey, can you confirm whether Juju would trigger a config-changed hook if a unit's IP address changes due to DHCP?
[14:15] my conjure-up log... sorry about the many times I tried this... http://pastebin.ubuntu.com/25732388/
[14:16] cory_fu: so we trigger config-changed on startup anyway, but I'm not 100% sure where we ended up on auto-populating private-address with new values, because of charms that override the value. (openstack charms used to set the VIP instead of their personal addresses)
[14:16] that said, if a live machine changes its IP address, I think we'll notice within 10 minutes or so; I'm not sure if that immediately triggers a config-changed.
[14:30] EdS: From that log, it looks like you might have had several successful runs. Did any of those actually succeed or did they get stuck?
[14:31] It always got stuck, but that may have been because of various external things.
[14:31] cory_fu: I was setting up MAAS, juju and reading lots.
[14:35] EdS: Odd. If it got stuck deploying, I would have expected to see log messages about 00-deploy_done failing
[14:36] cory_fu & kjackal: :D thanks so much - that has straightened a lot out in my head!
[14:37] it's entirely possible I have cleared out the log of the failed runs, but it never felt like I truly succeeded with conjure-up
[14:40] EdS: I do see some failures in there related to the connection to the controller failing. That seems plausible if the machines were provisioned and not released.
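(A cleanup after a failed run might look like this sketch; the model name and system ID are placeholders, and "admin" is an assumed MAAS CLI profile name:)

    # remove whatever the failed run deployed
    juju destroy-model my-failed-model -y      # placeholder model name
    # release any machine MAAS still shows as allocated
    maas admin machine release <system-id>     # placeholder system ID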
[14:42] EdS: You asked about it saving info; as I mentioned, there shouldn't be any persistent effects if you stop before the deploy, but from the log, it looks like you went that far a few times. Obviously, you'd have to clean up any provisioned machines or anything else that Juju or conjure-up claimed in MAAS
[14:46] cory_fu: yeah, I managed those bits. I think the difference between doing it with conjure-up and juju tricked me into thinking I'd get a similar opportunity to tweak the settings. When juju + MAAS worked, everything came up, but I now realise it was with the defaults, not any leftover config.
[14:47] cory_fu: I think, as I'm at the early stages of this setup, I'll tear it all down and try to get the settings I wanted :)
[15:12] EdS: Ok. If you end up trying conjure-up again with any MAAS issues sorted out and hit problems, let me or stokachu know. We're travelling, so might not respond right away, but we'd like to sort out any bugs you might run into.
[15:13] But Juju direct is also entirely viable and should be just as configurable, even if it might not be presented as nicely. (At the end of the day, conjure-up is just calling out to Juju, after all.)
[15:14] cory_fu: superb, thank you. I'm just setting off from the start again with juju + the bundle. I think, personally, the yaml is fine for me. Enjoy your travels.
[17:57] Hiya.. I know I had this working before.. but then I wiped that box & started again.. I'm trying to have my kubernetes (loaded via conjure-up) use my docker registry (running on the host that did the conjure-up).. I thought I used juju run-action registry to make this work before, but that seems to be for secured registries, and mine is unsecured..
[17:57] I found https://insights.ubuntu.com/2017/10/11/private-docker-registries-and-the-canonical-distribution-of-kubernetes/ which hints I need to set a config key.. which I think is now 'docker-opts' not 'docker-config' as in the article.
[17:57] how's the config here look? https://insights.ubuntu.com/2017/10/11/private-docker-registries-and-the-canonical-distribution-of-kubernetes/
[17:58] Tim passed me the link here the other day :)
[17:58] yeah.. that's the link I just pasted, right? ;p
[17:58] oh haha sorry
[17:59] docker-opts sounds familiar from recent docker versions
[17:59] anyways.. I've done "juju config kubernetes-worker docker-opts='--insecure-registry 192.168.1.xx:2375'".. do I also need to do the juju run-action registry step?
[18:00] (because atm, if I have an image: tag in my yml for 192.168.1.xx/myimage:latest it complains getsockopt connection refused)
[18:00] not if you've already got the registry, and it sounds like you have.
[18:00] I have a registry running at 192.168.1.xx:2375 that I can talk to, push images to, run containers on, etc.
[18:01] seems tho that my worker node can't talk to it.. I'm missing something.
[18:01] yeah, don't deploy a registry with juju run-action then :)
[18:02] is your registry in the same subnet as the nodes?
[18:03] BarDweller: i would juju ssh to the node and try a docker pull from there, and see what that tells you
[18:03] sounds like a networking issue
[18:03] good plan..
[18:04] juju ssh kubernetes-worker/0.. and then `docker images` is showing me a different docker registry.. but from that env I can ping my other one ok.. lemme see if I can talk to my other reg from that shell if I change DOCKER_HOST
[18:05] yep
[18:05] so the kubernetes-worker/0 is capable of reaching my docker registry, and can talk to it.. but seems configured to use a different registry
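(A condensed sketch of the debugging path suggested above; the unit number and registry address are illustrative:)

    # point the workers at the insecure registry
    juju config kubernetes-worker docker-opts="--insecure-registry 192.168.1.xx:2375"
    # then verify from a worker itself
    juju ssh kubernetes-worker/0
    docker info                                    # should list it under "Insecure Registries"
    docker pull 192.168.1.xx:2375/myimage:latest   # the real end-to-end test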
[18:06] hmm.. do I need to do something after the juju config that tells the worker to use my registry? (restart the worker or sommat?)
[18:06] juju config kubernetes-worker - do the docker-opts have the correct registry in that output?
[18:07] BarDweller: when you set it via config, the charm should do everything for you
[18:07] if it's not, that's a bug
[18:09] yes, juju config kubernetes-worker shows the options I put in docker-opts (--insecure-registry 192.168.1.xx:2375)
[18:10] any way I can kick it to tell it to re-read it?
[18:10] hmm.. wait up
[18:11] vagrant ssh into the kubernetes-worker/0, then docker info shows my registry listed in there.. digging further
[18:19] hmm. I have torn down my canonical-kubernetes setup to rebuild with a different service-cidr. This is now stuck waiting with flannel blocked :/
[18:19] I think I will try again; I have noticed the 1.7->1.8 version bump.
[18:26] yeah.. there's something odd here.. it's not a network issue, it's a docker config issue.. I'm trying variants atm
[18:27] mebbe it'd be easier if I just started using the registry from the juju charm?
[18:28] BarDweller: if you're just playing around that's fine, but it's not a production setup
[18:28] yeah.. this isn't for prod, it's for local dev
[18:29] I just need a way to push custom images that I can load into the kube =)
[18:32] for production systems, used only internally within our company, would you consider it ok to run a registry pod/service with images stored in an NFS PV?
[18:33] I'm not sure we need or want to give docker-registry a server of its own. It seems overkill for us.
[18:37] BarDweller: gotcha. i'm still keen to figure out what the issue is so we can fix it if we need to
[18:37] EdS: yes - the helm chart in that blog post is great for that
[18:38] yeah.. I'm still digging.. I'm not entirely sure I've got everything lined up right
[18:39] tvansteenburgh: thank you :)
[18:40] I know if I do `juju ssh kubernetes-worker/0` and then do `export DOCKER_HOST=192.168.1.xx:2375` and then do `docker images`, I can see my expected images
[18:40] so I know my docker is up, and reachable by the worker node.
[18:41] BarDweller: okay, that's good feedback - we can try to reproduce
[18:41] so then I do `unset DOCKER_HOST` and then `docker info`, and I note at the bottom it lists "Insecure Registries: 192.168.1.xx:2375"
[18:42] hmm
[18:42] and then I try `docker pull 192.168.1.xx:2375/my-image:latest` and it says error: image not found.
[18:43] which is an improvement from before.. where it was saying getsockopts conn refused.
[18:43] BarDweller: would you mind filing a bug with this info here: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/new
[18:44] I had this working a few weeks back.. I have the kube yamls that say so.. but I wiped the host I'd done the magic to add the registry on.. and failed to add what I did to my vagrantfile =)
[18:45] mebbe my registry isn't the right thing
[18:45] I assume your `docker images`, when you set the DOCKER_HOST, shows my-image.
[18:46] I use an insecure registry hosted outside k8s and all I had to do was add that option
[18:46] I'm seeing people saying they can do things like http://ip:port/v2/_catalog to see images.. mine doesn't seem to like that, it just gives back {"message":"page not found"}
[18:46] docker pull 192.168.1.xx/image_name just works
[18:46] are you running a version 1 registry?
[18:46] yes, if I set my DOCKER_HOST to be 192.168.1.xx then docker images will show my-image
[18:46] checking..
[18:47] apparently I'm running 17.09.0-ce, api version 1.32 (min ver 1.12), build date sep 26 2017
[18:48] (from docker version while DOCKER_HOST is set)
[18:48] I wonder if I don't have a docker registry, I just have a docker server.. #noobquestion is there a difference?
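(One quick way to tell the two apart from the command line; the addresses are illustrative, with 2375 being the conventional dockerd TCP port and 5000 the conventional registry port:)

    # a docker daemon answers the Engine API...
    curl http://192.168.1.xx:2375/version
    # ...while a v2 registry answers the Registry API
    curl http://192.168.1.xx:5000/v2/_catalog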
[18:49] BarDweller: how are you running it? the docker registry is a docker container named registry
[18:49] I'm running registry:2, for example
[18:50] with DOCKER_HOST working, it sounds like you're using a docker daemon instead of a registry
[18:50] loosely.. apt-get install -y docker-ce socat.. then update the dockerd options to pass -H tcp://0.0.0.0:2375
[18:50] yes, that's the realisation I'm coming to (re daemon vs registry)
[18:51] ah, yep. a registry is typically on port 5000 and is run via something like `docker run -p 5000:5000 registry:2`
[18:51] ok.. so should I change my original question to.. is there a way to have my kube-worker pull an image from my docker daemon?
[18:53] BarDweller: I would think it would be easier to crank up a registry myself.
[18:54] BarDweller: docker run -p 5000:5000 -e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry -v /my/registry/volume:/var/lib/registry registry:2
[18:54] BarDweller: something like that would do it
[18:55] hehe.. sounds like an idea.. I'll have a bash
[18:56] although you'd kinda think by this point I should just use the juju docker-registry charm
[18:57] BarDweller: if you have a machine for juju to snap up for it, sure. For me, I'm using bare metal and didn't want to waste resources on something that is used so infrequently. I also was able to put it on the nfs server, so file io was local
[18:58] I have all this inside a vagrant vm.. so it really doesn't make too much difference.. at the mo the vm is running the dockerd.. I'll try deploying a registry first, because that might integrate easier
[18:58] BarDweller: sounds like a good idea
[19:14] ouch.. I think I figured this out =)
[19:17] so to have a docker client talk to an insecure registry, you add the --insecure-registry option to the dockerd (or use daemon.json); however, if the docker you are using is remote, you do it to _that_ docker.. which is awesome in my case, because it means the clients of my vm won't need to care
[19:29] cool.. my image came up finally inside kube =) thanks for the assist =)
[19:36] BarDweller: glad to hear you got it going!
[19:39] yep.. I think before I had used the juju charm to deploy a registry.. but it's not clear to me how I ever had that working, because I never configured anything beyond 'domain' and set ingress=true.. I never had all the insecure-registry stuff before
[19:40] anyways.. updated the vagrantfile to not do that, and instead use juju config to add the insecure-registry bit for the registry launched onto the docker daemon as part of the provisioning
[19:53] I'm seeing "Too many arguments." during config-changed, and I can't figure out where it's coming from
[19:53] I've grepped all of the juju code base
[19:56] is that bash's too many arguments?
[19:57] is something being expanded to a long list? e.g. ls * in a folder with many thousands of files will trigger that, IIRC
[20:00] hmm, I'll have to dig around to see if anything like that is happening
[20:05] that was off the top of my head, sorry if I'm way off the mark!
[20:09] :)
=== frankban is now known as frankban|afk
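(On the unresolved "Too many arguments." thread: if it is bash's test builtin, the classic trigger is an unquoted expansion handing [ more words than it expects — a hypothetical reproduction:)

    var="one two three"
    [ -n $var ] && echo yes      # expands to [ -n one two three ] -> bash: [: too many arguments
    [ -n "$var" ] && echo yes    # quoted, test sees a single word -> prints yes

(Note that a huge glob passed to an external command like ls produces a different error, "Argument list too long", from the kernel rather than from bash's builtins.)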