=== mbarnett_ is now known as mbarnett
[05:03] jamespage: what is the right time to talk to you?
[05:03] are you online now?
[06:54] pranav: https://developer.openstack.org/api-ref/compute/?expanded=list-all-major-versions-detail,list-servers-detail#list-servers
[07:11] Good morning Juju world!
[07:56] dpawar: +1.5 hours from now please
=== degville_ is now known as degville
[10:37] so has anyone on the juju team looked at https://bugs.launchpad.net/juju/+bug/1681495 ?
[10:37] Bug #1681495: add juju lxd proxy configuration
[15:04] SimonKLB: ping
[16:03] lazyPower: I'm going to try again now to deploy on centos7 and wait longer. How can I monitor the progress?
[16:36] erik_lonroth: there should be a wget going to fetch that cloud image if it doesn't exist.
[16:36] erik_lonroth: additionally `lxc image list` should show something like juju/centos/7
[16:36] just got kubernetes up and running and ran into this issue attempting a helm install https://github.com/kubernetes/helm/issues/1455
[16:37] tychicus: which bundle? kubernetes-core or canonical-kubernetes?
[16:37] canonical-kubernetes
[16:38] tychicus: it's the "error upgrade connection" issue, not the DNS issue, correct?
[16:38] lazyPower: originally the DNS issue; that is fixed, now getting the upgrade issue
[16:38] tychicus: that means the reverse proxy isn't respecting the http2 protocol.
[16:39] 504 Gateway Time-out
[16:39] tychicus: which version of kube-apiloadbalancer do you have deployed?
[16:39] that should have been resolved with our 1.6.1 push the week before last...
[16:40] ok, just did the install within the last 24 hours
[16:40] lazyPower: how do I find the kube-apiloadbalancer version?
[16:41] juju status kube-api-loadbalancer, there will be a charm revision in that output
[16:41] eg: kube-api-loadbalancer/7
[16:42] argh, i typoed it
[16:42] kubeapi-load-balancer
[16:42] lazyPower: Nothing seems to have made it https://postimg.org/image/ynuq1ac41/
[16:43] 1.10
[16:43] lazyPower: As you can see from the image, it seems centos7 is not in the list of images even now after waiting some time.
[16:45] erik_lonroth: http://pad.lv/1495978
[16:46] it looks like this isn't supported on the lxd provider at this time. It's also been closed as of 2/22 with a "fix committed" status, which confuses me as i see nothing related to actually fixing the issue
[16:46] tychicus: 1 moment.
[16:46] lazyPower: np, thanks
[16:46] rick_h: do you have any clarity you can shed here on http://pad.lv/1495978 ?
[16:47] lazyPower: looking
[16:47] lazyPower: That's exactly my thought also. I read that bug earlier; that's also why I asked here. I was assuming I made some kind of error.
[16:48] lazyPower: yea... so we were going to get to that a bit ago. I'm not sure where it's at atm.
[16:49] it was closed recently as fix released
[16:49] lazyPower: erik_lonroth so the deal was that you could do it with your own images in maas/lxd images/etc, but having it work with Juju in the cloud meant "picking" an image that we don't provide/support as the basis for juju working or not.
[16:49] does this mean it's coming or does this mean we dropped it altogether? there's a lot of churn in that issue.
[16:49] ah ok
[16:49] lazyPower: erik_lonroth so the issue was what process would be put in place to select which image id in each cloud was "THE CENTOS" that would be tested/supported/used.
[16:49] so the churn continues...
[16:50] lazyPower: erik_lonroth and how we'd handle it if that image was pulled, changed, etc.
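[Editor's note: a rough sketch of the checks lazyPower describes above for monitoring the centos7 image fetch on the lxd provider; the juju/centos/7 alias is quoted from the chat and the exact listing will vary.]

    # see whether the cloud image has been imported yet (look for an alias like juju/centos/7)
    lxc image list
    # the fetch itself is a wget, so an in-progress download shows up in the process list
    ps aux | grep wget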
[16:50] lazyPower: erik_lonroth it was "90%" done at one point but I'm not sure if it ever got over that hump. This is a LXD case, does it work sans-lxd?
[16:50] lazyPower: erik_lonroth we'd have to bug balloons to see if he has any insight beyond my history of it.
[16:51] lazyPower: erik_lonroth I think azure and aws were the two targets first.
[16:53] I understand, but the argument proposed in the bug is that supporting at least some kind of workflow for centos would be a winning feature of juju. At the least, a good idea would be to provide some kind of indication that this needs either some manual "fix" to let me install centos, or how to get this feature to operate as I would hope. After all, centos is one of the "major" distros out there.
[16:54] ... as it is now, me as a noob on juju, I'm getting into a situation where I can't distinguish between this being a "method error" or a "bug"
[16:57] what is this?
[16:58] balloons: do you have any insights on centos charms on lxd or public clouds?
[16:58] balloons: I know there was work to select a centos image in azure/aws but not sure if that ever "got released"
[16:59] erik_lonroth: understand and agree. We also love saying Juju is cross OS and we work on the client being multi-platform. Our issue with ootb centos is a :(
[17:02] I understand also. I'd love to help out here as I see the potential. Fixing issues like this also bridges a healthy transition across OSes, and I see a number of "charms" that need this functionality.
[17:04] One charm I'm looking for is "freeIPA", which is a KEY element if you want to bring in enterprise-level authentication systems with juju. At the moment, I don't know of a charm that deals with authentication in the charm store. (If you know of any, please let me know =)
[17:05] https://www.freeipa.org/page/Main_Page
[17:05] rick_h, we do test centos on aws indeed
[17:06] balloons: ah ok, it'd be nice to see if it's using some hard-coded image id or if it's available to all.
[17:06] rick_h, the deploy is a simple charm, but you can do a basic bootstrap and deploy
[17:06] balloons: k, erik_lonroth what charm are you using? I wonder if I can test it on aws
[17:07] maybe try it and see where it works vs doesn't, and at least get farther on what works vs doesn't atm
[17:08] I'm using my own charm, which is part of my "tutorial" writing to help my colleagues get started on juju development: https://github.com/erik78se/juju/wiki/The-hello-world-charm
[17:09] I've tested my charm on ubuntu already and now I wanted to "prove" juju as being linux-agnostic.
[17:09] ... well, at least "centos"/"ubuntu" agnostic.
[17:10] erik_lonroth: rgr
[17:11] This is what my next tutorial would likely be... showing that the hello-world example can indeed be commissioned on both ubuntu/centos.
[17:12] rick_h, seems I misspoke. I looked at the test and it still bootstraps with trusty
[17:12] balloons: k, I'm ok with it bootstrapping, just curious about the charm deploy
[17:12] balloons: e.g. can we mix centos/ubuntu workloads and the like
[17:12] rick_h, that is the idea of the test, it seems
[17:17] I really don't know at the moment where we are in this process...? Are we stuck with "not being able to support centos images", or is there a workaround, or what is the verdict here?
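[Editor's note: a minimal sketch of how rick_h's aws test of the hello-world charm would look (the result is reported at [17:55] below); the local charm path is hypothetical, and on AWS the marketplace terms for the centos7 image have to be accepted first.]

    # deploy a local charm onto a centos7 machine; an install hook that assumes apt
    # will error, as seen in the test reported further down
    juju deploy ./hello-world --series centos7
    # watch the unit come up (or hit an install-hook error)
    juju status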
[17:18] rick_h: What's the plan on different LXD storage backends when deploying on MAAS? The default directory option that Juju uses is quite inflexible. We have separate /var partitions on our MAAS nodes which we need to watch out for, as they get rather full when you're colocating LXD containers (something which could quickly take down all LXD containers).
[17:19] zeestrat: yea there's a bug around the /var issue. I know there was a plan but not sure where it's at on the todo list atm
[17:22] If you're thinking of #1634390, then that is something else. This is about being able to select different storage backends such as LVM, instead of just straight-up directories, for LXD to use as a storage backend.
[17:22] Bug #1634390: jujud services not starting after reboot when /var is on separate partition
[17:44] Just wondering if this was something on the roadmap. If not, I'll write up a bug with the details.
[17:54] lazyPower: applied the settings listed here https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/ and now I get a slightly different error message: Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.148.0.100:10250: getsockopt: connection timed out
[17:55] erik_lonroth: ok, so on aws, once you agree to the terms of the centos7 image, I was able to deploy your charm with --series=centos7, and then the install hook errors with the "apt not found"
[17:56] erik_lonroth: so yea, it works in theory, but not across the board atm
[17:56] kubernetes is deployed on top of openstack
[17:56] erik_lonroth: confirmed gce doesn't have a centos image set for use
[17:58] tychicus: sorry i'm getting pulled, i will circle back in a moment
[17:58] good grief that centos image is slow
[18:22] edited the security group to enable ingress on tcp 10250 from 0.0.0.0/0 to the specific kubernetes worker that helm was attempting to communicate with. this allowed helm to run
[18:23] tychicus: ah, juju run --application kubernetes-worker "open-port 10250" && juju expose kubernetes-worker
[18:23] tychicus: that's how you would accomplish that natively in juju terms.
[18:23] thanks
[18:23] and that will expose all workers
[18:25] next question is related to persistent volume storage. there is a great write-up here https://kubernetes.io/docs/getting-started-guides/ubuntu/storage/ but I already have ceph deployed for openstack, so I think I would just use cinder to give persistent volumes to kubernetes
[18:26] is that as simple as using the storage charm and doing juju deploy canonical-kubernetes --storage data=50G
[18:34] tychicus: so the only storage backends we support today are nfs and rbd from ceph
[18:35] you can do some manual ops to get things like ebs working pending our cloud-native support, however those are WIP and in a pre-alpha state at this time. (and you're deploying on openstack)
[18:36] tychicus: so wrt supporting cinder, a bug filed would go a long way towards helping us prioritize those storage providers natively without any manual intervention
[18:37] lazyPower: so can I connect to the existing ceph rbd?
[18:37] you certainly can
[18:38] tychicus: add the relation between your ceph-mon and kubernetes-worker charms, and there's an action for enlisting RBDs as PVs
[18:38] tychicus: one thing to note, that action doesn't validate, and it's entirely possible to over-provision your rbds. eg: enlist a 90tb rbd as a pv, and k8s will gladly take it and do what it will with the rbd, even though there's no hope of fulfilling that request's initial storage capacity without adding physical disks and enlisting them in the OSDs.
[18:39] that's one thing i've discovered that I haven't come up with a fix for just yet
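[Editor's note: a sketch of the relation/action flow lazyPower outlines above, assuming a flat (non-nested) deployment where ceph and kubernetes sit in the same model; the action for enlisting an RBD as a PV is not named in the chat, so the second command just lists what the charm exposes.]

    # relate the existing ceph-mon application to the kubernetes worker charm
    juju add-relation ceph-mon kubernetes-worker
    # list the charm's actions and pick the one that enlists an RBD as a PV
    juju actions kubernetes-worker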
[18:41] tychicus: additionally you can enlist the credentials and use the rbd auto-provisioner
[18:41] but that's not something we've enabled out of the box just yet. it's on the todo list.
[18:42] tychicus: https://github.com/kubernetes/kubernetes/tree/master/examples/persistent-volume-provisioning - the CephRBD example illustrates how to set up the rbd auto-provisioner.
[18:43] lazyPower: just to make sure that I understand all of this correctly, since I am still very new to juju and I very well could have done some things wrong: I used juju + maas to deploy openstack on physical hardware. Then I used juju to deploy kubernetes on top of openstack. I currently have 2 juju bootstrap nodes; 1 node controls openstack, the other controls kubernetes
[18:44] tychicus: ah, nested...
[18:45] So the relationship i outlined was for if it were on the same layer as the openstack deployment; since it's nested, you cannot reasonably relate k8s to ceph.
[18:45] good call. in this case, i would recommend enlisting the ceph credentials and using the rbd provisioner
[18:45] I can use the existing Ceph cluster deployed for openstack to provide persistent storage to the kubernetes cluster via https://github.com/kubernetes/kubernetes/tree/master/examples/persistent-volume-provisioning
[18:46] correct
[18:46] the RBD example there should get you moving. You'll need to create a secret with your ceph credentials and enlist the storage class
[18:46] thanks, and thanks again for your assistance
[18:47] as far as having 2 bootstrap nodes, is that correct, or should I only have a single bootstrap node with multiple models?
[18:47] you should have been able to re-use that initial bootstrap node for your openstack deployment
[18:48] ok
[18:48] rick_h: fact check me here?
[18:48] * rick_h reads backward
[18:48] i'm going to owe mr _h a pizza by the end of today
[18:48] or a beer, notwithstanding.
[18:48] do you know offhand what the syntax is for telling juju about an already existing bootstrap node?
[18:49] tychicus: e.g. using a single controller for multiple providers?
[18:49] tychicus: it doesn't support that at this time.
[18:49] gah
[18:49] i lied :( sorry
[18:49] really glad i pinged
[18:49] tychicus: if they're on the same provider, you can just add-model with a region flag
[18:50] ok, and does controller == bootstrap node?
[18:50] you are correct
[18:50] a controller is the 2.0+ terminology for a bootstrap node.
[18:51] ok, so it makes sense to have 1 controller for MAAS and another controller for openstack
[18:51] hey folks, i'm getting an error about a node not having an address family in common with my MAAS server that i'm trying to bootstrap a controller for (full error message here: https://pastebin.com/Pd8ktgPm). any ideas as to how to fix this?
[18:51] * rick_h is sorry to not say "yes!"
[18:52] vasey: looks like your network config in maas doesn't match what's happening in that rack.
[18:52] vasey: did you model the network fabrics in maas to match what's coming back from the DHCP server? i can only presume you're using an external DHCP in this setup...
[18:52] rick_h: thanks for the clarification
[18:52] lazyPower: thanks for all the help
[18:52] tychicus: no problem, ping me with any questions about k8s :) happy to help.
[18:53] i clearly need to brush up on my non-k8s bits... i'm a bit embarrassed to have fibbed about the controller support :S I'm living in a JAAS world, and it's easy to blur the lines apparently.
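[Editor's note: a short sketch of the add-model route rick_h mentions above for reusing one controller when the workloads share a provider; the model and cloud/region names are made up.]

    # on an existing controller, add a second model in a given cloud/region
    juju add-model openstack-workloads mymaas/default
    # list controllers and their models to confirm
    juju controllers
    juju models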
[18:54] so far I am very happy with what I have seen in juju thus far, excited to learn more
[19:00] That's great to hear :)
[19:00] lazyPower: i'm using the MAAS built-in DHCP for this setup, and everything seems fine on that end; should i have given my juju a static IP?
[19:01] well this is fun, i can't find a bug with that error text either
[19:01] this might be greenfield
[19:02] actually this seems related https://bugs.launchpad.net/maas/+bug/1683433
[19:02] Bug #1683433: MAAS 2.2-rc1 refuses to deploy if all a node's interfaces are set to DHCP
[19:02] vasey: do the nodes successfully boot if you manually deploy one?
[19:05] rick_h: Thanx for testing it out, and yeah, the "apt" error was very much expected. But the centos awkwardness for testing with lxd really seems to me something worth addressing. The typical development workflow would be to test on a local lxc, and if that scenario fails, I think the concept also fails.... Can I help out here somehow?
[19:08] lazyPower: i haven't tried a manual deployment, though they automatically PXE boot into the ephemeral ubuntu image just fine
[19:09] erik_lonroth: I think it's got to be streams work with lxd->image listing->juju trust that has to be set up for lxd
[19:09] vasey: that error message is actually surfacing from maas, i've grepped through the juju tree and don't find that error message.
[19:09] vasey: so it leads me to believe there's an issue with the unit configuration in maas.
[19:09] that coupled with 1683433 linked above, i think it's related to the networking configuration and it's a bug in 2.2.0-rc1+
[19:11] rick_h: I'm sorry, but I don't quite follow you there. "streams"?
[19:48] lazyPower: interesting! i'll look into that
[19:51] erik_lonroth: streams are a json listing of image type / series (like centos type, series 7, or ubuntu type, series xenial) and where to fetch those cloud images from.
[20:14] how do you connect to an existing juju controller from a new juju client?
[20:25] hey guys, is anyone using the purity charm?
[20:27] lazyPower: turns out none of my nodes had an interface configured with an auto-assigned address; that did the trick
[20:47] tychicus: try again? where's the client at?
[20:47] bildz: not tried it out myself, what's up?
[20:50] vasey: awesome! glad we got you fixed up.
[20:50] bildz: would you be referring to https://jujucharms.com/u/brent-clements/cinder-purity/ ?
[21:17] hello people
[21:17] can you still get the dashboard list of your own charms in the charmstore?
[21:18] oh i see what it's doing there
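[Editor's note: the [20:14] question above about pointing a new juju client at an existing controller isn't answered directly in the log; this is a hedged sketch of the usual Juju 2.x registration flow, with a made-up user name and a placeholder token.]

    # on a client that already knows the controller, create a user;
    # this prints a one-time registration string
    juju add-user newuser
    # on the new client, use that string to register the controller and log in
    juju register <registration-string>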
[21:43] magicaltrout: ?
[22:48] this is what gets on my tits about the store search
[22:48] rick_h: what do i need to change to get my gitlab charm to show up?
[22:48] just call it something else?
[22:48] i don't even know how you know it exists
[22:49] magicaltrout: heh, I saw your email and I knew to check your user account.
[22:49] meh
[22:49] magicaltrout: the best thing is to get it promulgated, I guess.
[22:50] that takes effort! :P
[22:50] i have a backlog of charms to get promulgated, this is why i have new people starting, to get through my self-inflicted backlog ;)
[22:50] magicaltrout: yea, looks like the one there now has just been limped along by lazyPower and others.
[22:51] magicaltrout: shouldn't be too bad to get it swapped out. I'm sure they'd like to have someone else doing awesome stuff in that namespace
[22:51] i don't get why the search just ignores anything with the same name
[22:51] why not just segregate the results somehow
[22:52] magicaltrout: we used to do that and it confused folks, getting several versions of ceph and the like
[22:52] hmmmmmmmmmmmmmmmmmmm
[22:52] fair enough
[22:53] can't you just tell them not to be confused? ;)
[22:53] or "check this box to return stuff we really have no idea if it works or not" ;)
[22:54] magicaltrout: yea, so the new agreement was to add an option to "find all other options" or the like
[22:54] magicaltrout: but it's not been completed yet
[22:55] ah
[22:55] cool
[22:56] * rick_h sees a handful of gitlab deploys and checks whose charm they are running
[22:57] hmm, yea, 8 of the current one out there that I can tell
[22:58] magicaltrout: so yea, we'd have to work out a migration path as part of swapping the promulgated version :(
[22:58] magicaltrout: or push yours as trusty, as the current one is only precise
[22:58] which would be <3
[22:58] mine will be xenial and trusty when i get the former checked
[22:58] magicaltrout: cool
[22:59] https://jujucharms.com/u/spiculecharms/gitlab okay rick_h
[22:59] that's the one, although I need to push an updated readme and a few tweaks in the morning
[22:59] i'll write some tests and get it into the review queue when I get an hour or 3 later in the week
[22:59] magicaltrout: it's not public
[22:59] meh
[22:59] magicaltrout: rgr
[23:00] my guesses failed me
[23:01] public now rick_h
[23:01] magicaltrout: ty I see it
[23:01] magicaltrout: ooh, it supports the http endpoint! /me rushes to use it with the ssl-termination charm
[23:04] it's set to the latest version, but if people want to run a specific version they can; i've tested 8.0 through to 9.1 or wherever we are now
[23:04] i have some code to decouple the database and other bits but i've not tested them thoroughly yet
[23:04] it will land one day
=== firl_ is now known as firl