[05:03] <dpawar> jamespage: what is the right time to talk to you?
[05:03] <dpawar> are you online now ?
[06:54] <dpawar> pranav: https://developer.openstack.org/api-ref/compute/?expanded=list-all-major-versions-detail,list-servers-detail#list-servers
[07:11] <kjackal> Good morning Juju world!
[07:56] <jamespage> dpawar: +1.5 hours from now please
[10:37] <cnf> so has anyone on the juju team looked at https://bugs.launchpad.net/juju/+bug/1681495 ?
[10:37] <mup> Bug #1681495: add juju lxd proxy configuration <juju:New> <https://launchpad.net/bugs/1681495>
[15:04] <urulama> SimonKLB: ping
[16:03] <erik_lonroth> lazyPower: I'm going to try again now to deploy on centos7 and wait longer. How can I monitor the progress?
[16:36] <lazyPower> erik_lonroth: there should be a wget going to fetch that cloud image if it doesn't exist.
[16:36] <lazyPower> erik_lonroth: additionally `lxc image list` should show something like juju/centos/7
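A quick way to check that, as described (the exact alias juju creates is an assumption and varies by juju version):

```shell
# List locally cached LXD images; a juju-fetched centos image would
# show up with an alias along the lines of juju/centos/7.
lxc image list
lxc image list | grep -i centos
```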
[16:36] <tychicus> just got kubernetes up and running and ran into this issue attempting a helm install https://github.com/kubernetes/helm/issues/1455
[16:37] <lazyPower> tychicus: which bundle? kubernetes-core or canonical-kubernetes?
[16:37] <tychicus> canonical-kubernetes
[16:38] <lazyPower> tychicus: its the "error upgrade connection" not the DNS issue correct?
[16:38] <tychicus> lazyPower: originally the DNS issue, that is fixed, now getting the upgrade issue
[16:38] <lazyPower> tychicus: that means the reverse proxy isn't respecting the http2 protocol.
[16:39] <tychicus> 504 Gateway Time-out
[16:39] <lazyPower> tychicus: which version of kube-apiloadbalancer do you have deployed?
[16:39] <lazyPower> that should have been resolved with our 1.6.1 push week before last...
[16:40] <tychicus> ok, just did the install within the last 24 hours
[16:40] <tychicus> lazyPower: how do I find the kube-apiloadbalancer version?
[16:41] <lazyPower> juju status kube-api-loadbalancer, there will be a charm revision in that output
[16:41] <lazyPower> eg: kube-api-loadbalancer/7
[16:42] <lazyPower> argh, i typod it
[16:42] <lazyPower> kubeapi-load-balancer
[16:42] <erik_lonroth> lazyPower: Nothing seems to have made it https://postimg.org/image/ynuq1ac41/
[16:43] <tychicus> 1.10
[16:43] <erik_lonroth> lazyPower: As you can see from the image, it seems centos7 is not in the list of images even now after waiting some time.
[16:45] <lazyPower> erik_lonroth: http://pad.lv/1495978
[16:46] <lazyPower> it looks like this isn't supported on the lxd provider at this time. It's also been closed as of 2/22 with a "fix committed" status, which confuses me as i see nothing related to actually fixing the issue
[16:46] <lazyPower> tychicus: 1 moment.
[16:46] <tychicus> lazyPower: np, thanks
[16:46] <lazyPower> rick_h: do you have any clarity here you can shed on http://pad.lv/1495978
[16:47] <rick_h> lazyPower: looking
[16:47] <erik_lonroth> lazyPower: That's exactly my thought also. I read that bug earlier, that's also why I asked here. I was assuming I had made some kind of error.
[16:48] <rick_h> lazyPower: yea....so we were going to get that a bit ago. I'm not sure where it's at atm.
[16:49] <lazyPower> it was closed recently as fix released
[16:49] <rick_h> lazyPower: erik_lonroth so the deal was that you could do it with your own images in maas/lxd images/etc but having it work with Juju in the cloud meant "picking" an image that we don't provide/support as the basis for juju working or not.
[16:49] <lazyPower> does this mean its coming or does this mean we dropped it all together? there's a lot of churn in that issue.
[16:49] <lazyPower> ah ok
[16:49] <rick_h> lazyPower: erik_lonroth so the issue was what process would be put in place to select what image id in each cloud was "THE CENTOS" that would be tested/supported/used.
[16:49] <lazyPower> so the churn continues...
[16:50] <rick_h> lazyPower: erik_lonroth and how we'd handle if that image was pulled, changed, etc.
[16:50] <rick_h> lazyPower: erik_lonroth it was "90%" done at one point but I'm not sure if it ever got over that hump. This is a LXD case, does it work sans-lxd?
[16:50] <rick_h> lazyPower: erik_lonroth we'd have to bug balloons to see if he has any insight beyond my history of it.
[16:51] <rick_h> lazyPower: erik_lonroth I think azure and aws were the two targets first.
[16:53] <erik_lonroth> I understand, but the argument proposed in the bug is that supporting at least some kind of workflow for centos would be a winning feature of juju. At the very least, it would be good to provide some indication that this needs a manual "fix" to let me install centos, or how to get this feature to operate as I would hope. After all, centos is one of the major dists out there.
[16:54] <erik_lonroth> ... as it is now, as a noob on juju I can't distinguish between this being a "method error" or a "bug"
[16:57] <balloons> what is this?
[16:58] <rick_h> balloons: do you have any insights on centos charms on lxd or public clouds?
[16:58] <rick_h> balloons: I know there was work to select a centos image in azure/aws but not sure if that ever "got released"
[16:59] <rick_h> erik_lonroth: understand and agree. We also love saying Juju is cross OS and we work on the client being multi-platform. Our issues with ootb centos is a :(
[17:02] <erik_lonroth> I understand also. I'd love to help out here as I see the potential. Fixing issues like this also bridges a healthy transition across OSes, and I see a number of "charms" that need this functionality.
[17:04] <erik_lonroth> One charm I'm looking for is "freeIPA", which is a KEY element if you want to bring enterprise-level authentication systems in with juju. At the moment, I don't know of any charm in the charm store that deals with authentication. (If you know of any, please let me know =)
[17:05] <erik_lonroth> https://www.freeipa.org/page/Main_Page
[17:05] <balloons> rick_h, we do test on centos on aws indeed
[17:06] <rick_h> balloons: ah ok, it'd be nice to see if it's using some hard coded image id or if it's available to all.
[17:06] <balloons> rick_h, the deploy is a simple charm, but you can do a basic bootstrap and deploy
[17:06] <rick_h> balloons: k, erik_lonroth what charm are you using? I wonder if I can test it on aws
[17:07] <rick_h> maybe try it and see where it works vs doesn't, and at least get farther on knowing what works vs doesn't atm
[17:08] <erik_lonroth> I'm using my own charm which is part of my "tutorial" writing to help my colleagues get started on juju development: https://github.com/erik78se/juju/wiki/The-hello-world-charm
[17:09] <erik_lonroth> I've tested my charm on ubuntu already and now I wanted to "prove" juju as being linux-agnostic.
[17:09] <erik_lonroth> ... well, at least "centos"/"ubuntu" agnostic.
[17:10] <rick_h> erik_lonroth: rgr
[17:11] <erik_lonroth> This is what my next tutorial would likely be... showing that the hello-world example indeed can be commissioned on both ubuntu/centos.
[17:12] <balloons> rick_h, seems I misspoke. I looked at the test and it still bootstraps with trusty
[17:12] <rick_h> balloons: k, I'm ok with it bootstrapping just curios on the charm deploy
[17:12] <rick_h> balloons: e.g. can we mix centos/ubuntu workloads and the like
[17:12] <balloons> rick_h, that is the idea of the test it seems
[17:17] <erik_lonroth> I don't know really at the moment where we are in this process...? Are we stuck with "not being able to support centos images", or is there a workaround, or what is the verdict here?
[17:18] <zeestrat> rick_h: What's the plan on different LXD storage backends when deploying on MAAS? The default directory option that Juju uses is quite inflexible. We have separate /var partitions on our MAAS nodes which we need to watch out for as they get rather full when you're colocating LXD containers (something which quickly could take down all LXD containers).
[17:19] <rick_h> zeestrat: yea there's a bug around the /var issue. I know there was a plan but not sure where it's at on the todo list atm
[17:22] <zeestrat> If you're thinking about #1634390, then that is something else. This is about being able to select different storage backends such as LVM instead of just straight up directories for LXD to use as a storage backend.
[17:22] <mup> Bug #1634390: jujud services not starting after reboot when /var is on separate partition  <uosci> <juju:Triaged> <juju-core:Triaged> <https://launchpad.net/bugs/1634390>
[17:44] <zeestrat> Just wondering if this was something on the roadmap. If not, I'll write up a bug with the details.
[17:54] <tychicus> lazyPower: applied the settings listed here https://kubernetes.io/docs/getting-started-guides/ubuntu/troubleshooting/ now I get a slightly different error message Error: forwarding ports: error upgrading connection: error dialing backend: dial tcp 10.148.0.100:10250: getsockopt: connection timed out
[17:55] <rick_h> erik_lonroth: ok, so on aws, once you agree to the terms of the centos7 image, I was able to deploy your charm on --series=centos7 and then the install hook errors with the "apt not found"
[17:56] <rick_h> erik_lonroth: so yea, it works in theory, but not across the board atm
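The "apt not found" failure above comes from the tutorial charm's install hook assuming apt exists; a hedged sketch of what a distro-aware install hook could look like (the `hello` package and the hook itself are placeholders, not the actual charm's code):

```shell
#!/bin/sh
# Hypothetical cross-distro install hook: detect the package manager
# instead of hard-coding apt, which is what breaks on centos7.
set -e
if command -v apt-get >/dev/null 2>&1; then
    apt-get install -y hello
elif command -v yum >/dev/null 2>&1; then
    yum install -y hello
else
    echo "no supported package manager found" >&2
    exit 1
fi
```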
[17:56] <tychicus> kubernetes is deployed on top of openstack
[17:56] <rick_h> erik_lonroth: confirmed gce doesn't have a centos image set for use
[17:58] <lazyPower> tychicus: sorry i'm getting pulled, i will circle back in a moment
[17:58] <rick_h> good grief that centos image is slow
[18:22] <tychicus> edited the security group to enable ingress on tcp 10250 0.0.0.0/0 to the specific kubernetes worker that helm was attempting to communicate with. this allowed helm to run
[18:23] <lazyPower> tychicus: ah, juju run --application kubernetes-worker "open-port 10250" && juju expose kubernetes-worker
[18:23] <lazyPower> tychicus: that's how you would accomplish that natively in juju terms.
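Put together, the native approach described above is a two-step sketch (assuming the default kubernetes-worker application name from the canonical-kubernetes bundle; whether this opens 10250 to the world depends on the provider's firewall model):

```shell
# Open the kubelet port on every kubernetes-worker unit...
juju run --application kubernetes-worker "open-port 10250"
# ...then expose the application so the provider firewall allows ingress.
juju expose kubernetes-worker
```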
[18:23] <tychicus> thanks
[18:23] <tychicus> and that will expose all workers
[18:25] <tychicus> next question is related to persistent volume storage there is a great write up here https://kubernetes.io/docs/getting-started-guides/ubuntu/storage/ but I already have ceph deployed for openstack, so I think I would just use cinder to give persistent volumes to kubernetes
[18:26] <tychicus> is that as simple as using the storage charm and doing juju deploy canonical-kubernetes --storage data=50G
[18:34] <lazyPower> tychicus: so the only storage backends we support today are nfs and rbd from ceph
[18:35] <lazyPower> you can do some manual ops to get things like ebs working pending our cloud native support, however those are WIP and in a pre-alpha state at this time. (and you're deploying on openstack)
[18:36] <lazyPower> tychicus: so wrt supporting cinder, a bug filed would go a long way towards helping us prioritize those storage providers natively without any manual intervention
[18:37] <tychicus>  lazyPower: so can I connect to the existing ceph rdb?
[18:37] <lazyPower> you certainly can
[18:38] <lazyPower> tychicus: add the relation between your ceph-mon and kubernetes-worker charms, and there's an action for enlisting RBD's as PV's
[18:38] <lazyPower> tychicus: one thing to note, that action doesn't validate and it's entirely possible to over-provision your rbd's. eg: enlist a 90tb rbd as a pv, and k8s will gladly take it and do what it will with the rbd, even though there's no hope of fulfilling that request's initial storage capacity without adding physical disks and enlisting them in the OSD's.
[18:39] <lazyPower> thats one thing i've discovered that I haven't come up with a fix for just yet
[18:41] <lazyPower> tychicus: additionally you can enlist the credentials and use the rbd auto-provisioner
[18:41] <lazyPower> but thats not something we've enabled out of the box just yet. its on the todo list.
[18:42] <lazyPower> tychicus: https://github.com/kubernetes/kubernetes/tree/master/examples/persistent-volume-provisioning - the CephRBD example illustrates how to setup the rbd auto provisioner.
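The linked CephRBD example boils down to a secret holding the base64-encoded ceph admin key plus a StorageClass pointing the kubernetes.io/rbd provisioner at the monitors; a minimal sketch, with placeholder monitor addresses, pool name, and key:

```yaml
# All values below are hypothetical -- substitute your own monitors,
# pool, and ceph admin key.
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: kube-system
type: kubernetes.io/rbd
data:
  key: QVFEQ1p...  # base64-encoded ceph admin key
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.148.0.10:6789,10.148.0.11:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: kube-system
  pool: kube
```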
[18:43] <tychicus> lazyPower: just to make sure that I understand all of this correctly, since I am still very new to juju and I very well could have done some things wrong.  I used juju + maas to deploy openstack on physical hardware.  Then I used juju to deploy kubernetes on top of openstack.  I currently have 2 juju bootstrap nodes 1 node controls openstack, the other controls kubernetes
[18:44] <lazyPower> tychicus: ah, nested...
[18:45] <lazyPower> So that relationship i outlined assumed k8s was on the same layer as the openstack deployment; since it's nested you cannot reasonably relate k8s to ceph.
[18:45] <lazyPower> good call. in this case, i would recommend enlistment of the ceph credentials, and using the rbd provisioner
[18:45] <tychicus> I can use the existing Ceph Cluster deployed for openstack to provide persistent storage to the kubernetes cluster via https://github.com/kubernetes/kubernetes/tree/master/examples/persistent-volume-provisioning
[18:46] <lazyPower> correct
[18:46] <lazyPower> the RBD example there should get you moving. You'll need to create a secret with your ceph credentials and enlist the storage class
[18:46] <tychicus> thanks, and thanks again for your assistance
[18:47] <tychicus> as far as having 2 bootstrap nodes, is that correct, or should I only have a single bootstrap node with multiple models?
[18:47] <lazyPower> you should have been able to re-use that initial bootstrap node for your openstack deployment
[18:48] <tychicus> ok
[18:48] <lazyPower> rick_h: fact check me here?
[18:48]  * rick_h reads backward
[18:48] <lazyPower> i'm going to owe mr _h a pizza by the end of today
[18:48] <lazyPower> or a beer, notwithstanding.
[18:48] <tychicus> do you know offhand what the syntax is for telling juju about an already existing bootstrap node?
[18:49] <rick_h> tychicus: e.g. using a single controller for multiple providers?
[18:49] <rick_h> tychicus: it doesn't support that at this time.
[18:49] <lazyPower> gah
[18:49] <lazyPower> i lied :( sorry
[18:49] <lazyPower> really glad i pinged
[18:49] <rick_h> tychicus: if they're on the same provider, you can just add-model with a region flag
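A sketch of the add-model case, assuming a single AWS controller and hypothetical model and region names (here the two models only differ by region, which is the supported scenario):

```shell
# Reuse the existing controller; only the region flag differs.
juju add-model apps-east aws/us-east-1
juju add-model apps-west aws/us-west-2
```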
[18:50] <tychicus> ok and does controller == bootstrap node?
[18:50] <lazyPower> you are correct
[18:50] <lazyPower> a controller is the 2.0+ terminology for a bootstrap node.
[18:51] <tychicus> ok, so it makes sense to have 1 controller for MaaS and another controller for openstack
[18:51] <vasey> hey folks, i'm getting an error about a node not having an address family in common with my MAAS server that i'm trying to bootstrap a controller for (full error message here: https://pastebin.com/Pd8ktgPm) any ideas as to how to fix this?
[18:51]  * rick_h is sorry to not say "yes!"
[18:52] <lazyPower> vasey: looks like your network config in maas doesn't match whats happening in that rack.
[18:52] <lazyPower> vasey: did you model the network fabrics in maas that match what's coming back from the DHCP server? i can only presume you're using an external DHCP in this setup...
[18:52] <tychicus> rick_h: thanks for the clarification
[18:52] <tychicus> lazyPower: thanks for all the help
[18:52] <lazyPower> tychicus: no problem, ping me with any questions about k8s :) happy to help.
[18:53] <lazyPower> i clearly need to brush up on my non k8s bits... i'm a bit embarrassed to have fibbed about the controller support :S  I'm living in a JAAS world, and it's easy to blur the lines apparently.
[18:54] <tychicus> so far I am very happy with what I have seen in juju thus far, excited to learn more
[19:00] <lazyPower> Thats great to hear :)
[19:00] <vasey> lazyPower: i'm using the MAAS built-in DHCP for this setup, and everything seems fine on that end; should i have given my juju a static IP?
[19:01] <lazyPower> well this is fun, i cant find a bug with that error text either
[19:01] <lazyPower> this might be greenfield
[19:02] <lazyPower> actually this seems related https://bugs.launchpad.net/maas/+bug/1683433
[19:02] <mup> Bug #1683433: MAAS 2.2-rc1 refuses to deploy if all a node's interfaces are set to DHCP <regression> <MAAS:Fix Committed by blake-rouse> <https://launchpad.net/bugs/1683433>
[19:02] <lazyPower> vasey: do the nodes successfully boot if you manually deploy one?
[19:05] <erik_lonroth> rick_h: Thanx for testing it out, and yeah, the "apt" error was very much expected. But the centos awkwardness for testing with lxd really seems worth addressing. The typical development workflow would be to test on a local lxd, and if that scenario fails, I think the concept also fails... Can I help out here somehow?
[19:08] <vasey> lazyPower: i haven't tried a manual deployment, though they automatically PXE boot into the ephemeral ubuntu image just fine
[19:09] <rick_h> erik_lonroth: I think it's got to be streams work with lxd->image listing->juju trust that has to be setup for lxd
[19:09] <lazyPower> vasey: that error message is actually surfacing from maas, i've grepped through the juju tree and dont find that error message.
[19:09] <lazyPower> vasey: so it leads me to believe there's an issue with the unit configuration in maas.
[19:09] <lazyPower> that coupled with 1683433 linked above, i think it's related to the networking configuration and it's a bug in 2.2.0-rc1+
[19:11] <erik_lonroth> rick_h: I'm sorry, but I don't quite follow you there. "streams" ?
[19:48] <vasey> lazyPower: interesting! i'll look into that
[19:51] <lazyPower> erik_lonroth: streams are a json listing of image type / series (like centos type, series is 7, or ubuntu type, series is xenial) and where to fetch those cloud images from.
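Abbreviated, a simplestreams product entry for such an image looks something like the following (all field values here are hypothetical, for illustration only):

```json
{
  "products": {
    "com.ubuntu.cloud:server:centos7:amd64": {
      "release": "centos7",
      "arch": "amd64",
      "versions": {
        "20170417": {
          "items": {
            "lxd.tar.xz": {
              "ftype": "lxd.tar.xz",
              "path": "server/releases/centos7/release-20170417/lxd.tar.xz"
            }
          }
        }
      }
    }
  }
}
```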
[20:14] <tychicus> how do you connect to an existing juju controller from a new juju client?
[20:25] <bildz> hey guys, is anyone using the purity charm?
[20:27] <vasey> lazyPower: turns out none of my nodes had an interface configured with an auto-assigned address, that did the trick
[20:47] <rick_h> tychicus: try again? where's the client at?
[20:47] <rick_h> bildz: not tried it out myself, what's up?
[20:50] <lazyPower> vasey: awesome! glad we got you fixed up.
[20:50] <lazyPower> bildz: would you be referring to https://jujucharms.com/u/brent-clements/cinder-purity/ ?
[21:17] <magicaltrout> hello people
[21:17] <magicaltrout> can you still get the dashboard list of your own charms in the charmstore?
[21:18] <magicaltrout> oh i see what its doing there
[21:43] <rick_h> magicaltrout: ?
[22:48] <magicaltrout> this is what gets on my tits about the store search
[22:48] <magicaltrout> rick_h: what do i need to change to get my gitlab charm to show up?
[22:48] <magicaltrout> just call it something else?
[22:48] <magicaltrout> i don't even know how you know it exists
[22:49] <rick_h> magicaltrout: heh, I saw your email and I knew to check your user account.
[22:49] <magicaltrout> meh
[22:49] <rick_h> magicaltrout: the best thing is to get it promulgated I guess.
[22:50] <magicaltrout> that takes effort! :P
[22:50] <magicaltrout> i have a backlog of charms to get promulgated, this is why i have new people starting, to get through my self inflicted backlog ;)
[22:50] <rick_h> magicaltrout: yea, looks like the one there now has just been limped along by lazyPower and others.
[22:51] <rick_h> magicaltrout: shouldn't be too bad to get it swapped out. I'm sure they'd like to have someone else doing awesome stuff in that namespace
[22:51] <magicaltrout> i don't get why the search just ignores anything with the same name
[22:51] <magicaltrout> why not just segregate the results somehow
[22:52] <rick_h> magicaltrout: used to do that and it confused folks getting several versions of ceph and the like
[22:52] <magicaltrout> hmmmmmmmmmmmmmmmmmmm
[22:52] <magicaltrout> fair enough
[22:53] <magicaltrout> can't you just tell them not to be confused? ;)
[22:53] <magicaltrout> or "check this box to return stuff we really have no idea if it works or not" ;)
[22:54] <rick_h> magicaltrout: yea, so the new agreement was to add an option to "find all other options" or the like
[22:54] <rick_h> magicaltrout: but not been completed yet
[22:55] <magicaltrout> ah
[22:55] <magicaltrout> cool
[22:56]  * rick_h sees a handful of gitlab deploys and checks who's charm they are running
[22:57] <rick_h> hmm, yea 8 of the current one out there I can tell
[22:58] <rick_h> magicaltrout: so yea we'd have to work out a migration path as part of swapping the promulgated version :(
[22:58] <rick_h> magicaltrout: or push yours as trusty as the current one is only precise
[22:58] <rick_h> which would be <3
[22:58] <magicaltrout> mine will be xenial and trusty when i get the former checked
[22:58] <rick_h> magicaltrout: cool
[22:59] <magicaltrout> https://jujucharms.com/u/spiculecharms/gitlab okay rick_h
[22:59] <magicaltrout> thats the one, although I need to push an updated readme and a few tweaks in the morning
[22:59] <magicaltrout> i'll write some tests and get it into the review queue when I get an hour or 3 later in the week
[22:59] <rick_h> magicaltrout: it's not public
[22:59] <magicaltrout> meh
[22:59] <rick_h> magicaltrout: rgr
[23:00] <magicaltrout> my guesses failed me
[23:01] <magicaltrout> public now rick_h
[23:01] <rick_h> magicaltrout: ty I see it
[23:01] <rick_h> magicaltrout: ooh, support http endpoint! /me rushes to use it with the ssl-termination charm
[23:04] <magicaltrout> its set to the latest version, but if people want to run a specific version they can, i've tested 8.0 through to 9.1 or wherever we are now
[23:04] <magicaltrout> i have some code to decouple the database and other bits but i've not tested them thoroughly yet
[23:04] <magicaltrout> it will land one day