[00:17] thumper: can haz review? https://github.com/juju/juju/pull/8867 [00:31] veebers: I made the mistake of trying to use go-guru-callers, it is taking a long time and making my computer very sad. [00:34] babbageclunk: hah yeah I don't think I've had anything useful from that before. I usually need to kill it before long [00:36] I think I might just be setting the scope wrong... [00:40] babbageclunk: I *think* the scope should be something like: github.com/juju/juju/... but I'm not 100% certain [00:52] veebers: yeah - from the docs it sounds like it should be github.com/juju/juju/cmd/jujud - the package that has the main (which is the starting point). I guess the problem is that it's whole-program analysis for a too-big program, at least for my computer. [00:53] babbageclunk: get more computers [00:54] babbageclunk: heh yeah, I would be interested in how well it works with a smaller project etc. I notice a bit of slowdown when I change branches etc. as it compiles bits to give me completion etc. === chigang__ is now known as chigang [02:59] wallyworld: with you shortly, finishing with IS [03:07] wallyworld: omw now [03:17] wallyworld: bug 1778970 [03:17] Bug #1778970: offer will not leave model [06:18] wallyworld, would u mind taking a look at these PRs when u have time: https://github.com/juju/charm/pull/249/files https://github.com/juju/jujusvg/pull/56/files https://github.com/juju/charmstore/pull/808/files ? thanks [06:32] kelvin_: sure, will do [06:32] vino: want to join HO again? [06:32] yes. [06:39] kelvin_: with the svg PR, you should wait to land the latest charm.v6 change and use the dep from that one [06:41] wallyworld, yes, the charm.v6 is the dep for all the others, and I will do juju finally after all of these have landed as well. [06:44] wallyworld, and one more for bundlechanges please, thanks https://github.com/juju/bundlechanges/pull/41/files [06:45] i will update dependencies.tsv for it. [06:50] kelvin_: and for svg as well [06:50] even though the PR is already proposed [06:55] kelvin_: just in a meeting, will finish looking soon [07:01] wallyworld, thanks [07:07] wallyworld, vino: team standup? [07:08] thumper: in meeting which was delayed [07:08] try and finish soon [09:49] Anyone ever get this when building Juju? "readSym out of sync". [09:50] manadart: never seen that before [10:58] Need a review for constraints (cores, mem, instance type) support for LXD container machines: https://github.com/juju/juju/pull/8869 [11:02] manadart: looking now [11:03] manadart: look really clean... [11:04] s/look/looks [11:05] stickupkid: I finished it and then rewrote it. This way it's a one-liner to apply it in the provider. [11:05] manadart: mine's merging now [11:06] stickupkid: Nice. want to sync up after lunch? [11:06] manadart: hell yeah [11:15] manadart: you've got a failure in your PR, it looks like an intermittent failure so I've just done a retry [12:27] jam stickupkid: Looking for a review of https://github.com/juju/juju/pull/8862. [12:27] Going to land https://github.com/juju/juju/pull/8869 when it goes green. [12:38] trying to deploy openstack-lxd but having issues, keystone never completes the database relation even when the mysql/0 unit is ready. Ceph-mon/0,1,2 gets stuck in Bootstrapping MON cluster [12:38] any suggestions on how I can fix it? [13:34] manadart: quid-pro-quo? https://github.com/juju/juju/pull/8871 [13:34] jam: Deal. [13:41] stickupkid: Want to jump on the guild HO? [13:42] yeah [13:42] ah, someone already there [13:46] manadart: reviewed 8862 [13:46] jam: Many thanks.
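For the guru scope question above, this is roughly what the underlying query looks like on the command line (editor integrations such as go-guru-callers pass the same -scope value); the file path and byte offset are placeholders, not a real call site:

    # whole-program "callers" query, scoped to the package that contains main
    guru -scope github.com/juju/juju/cmd/jujud callers $GOPATH/src/github.com/juju/juju/worker/somefile.go:#1234
    # narrowing the scope, or excluding big subtrees with a leading '-', keeps the analysis cheaper
    guru -scope 'github.com/juju/juju/cmd/jujud,-github.com/juju/juju/vendor/...' callers <file.go>:#<offset>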
[13:47] jam: Approved yours too. [14:13] rick_h_: question for you about build/release/etc process. [14:13] with 2.3 being an LTS, I'd like to keep the 2.3 branch up to date and merge those changes into the 2.4 branch. However, as we are real-close to a 2.4.0 should I wait to merge a 2.3 patch into 2.4 ? [14:14] even if it is effectively a no-op? (it does have some small, eg line-numbers-in-a-diff, changes) [14:14] I do think that we generally want 2.3 => 2.4 => develop, so we know that any patches that we *do* backport to 2.3 are fully applied to new code bases. [14:16] anyway, I have https://github.com/juju/juju/pull/8872 that potentially updates 2.4, but I'm happy to wait until 2.4.0 is cut to actually merge that. [14:16] have a good weekend lal [14:16] all [14:22] jam: definitely wait atm [14:22] jam: it'll have to go into 2.4.1 [14:23] have a good weekend jam [14:24] hi i have fresh deployment of openstack using juju charms / conjure-up but i cannot start first instance , in logs i see: "Instance failed to spawn: ImageNotFound: Image could not be found." what's weird looks like imagename is missing [14:25] conjure-up deploys kubernetes (CDK bundle) without setting the machines names, and so this causes MAAS to auto-pick a random name for each machine from pet names library. is there a way that we can have the conjure-up to use a naming convention? [14:25] So I'm in #conjure-up channel and they are redirecting me to here [14:28] adham: we're here too. no, you can't have conjure-up set machine names [14:28] because juju can't set machine names [14:28] because juju doesn't care about machine names [14:28] w0jtas: starting the first instance in which way? [14:29] Kwonroe, I have 70+ machine created by conjure-up kubernetes, if you saw my MAAS window, and see how much funny names are there [14:29] kwmonroe: lol, just realized the pet-names library is called "pet" when we say to stop keeping pets (servers) and start driving cattle. [14:29] you would definitely reconsider the deployment here [14:30] connjure-up & juju are a great tool but honestly this little issue is destroying their greatness [14:30] adham: the more machines the better. You shouldn't ever really care about the machines but what's running on them. Juju's taking the application/task based view of the world and so machines are expendable little things that you can reallocate all the time. [14:30] adham: can I ask why the names are important? What task/etc are you doing that is driving you to referencing the machines individually? [14:30] rick_h_: in horizon i want to start first instance with ubuntu 16.04 using lxd [14:30] rick_h_, the machines on MAAS got not tag, no definition, they are just pet names, you cannot distinguish which machine is what [14:31] it would make sense if we have for example, lb1, controller1, master1, something relative [14:31] but not blank [14:31] adham: oh we definitely encourage putting tags on your maas machines so that you can target machines for storage/networking/etc. [14:31] you want me to tag 70+ that was created by conjure-up/juju? [14:31] w0jtas: do you have the images used loaded into glance? I'm not sure how that's pre-seeded in an OS install. You might check with the OS folks. [14:32] adham: no, in MAAS you do it once and you don't have to redo/etc. It's just part of setting up the machine infrastructure. 
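A minimal sketch of the 2.3 => 2.4 => develop flow jam describes, assuming local branches named after the series and a remote called origin (in practice these land as reviewed merge PRs rather than direct pushes):

    git checkout 2.4
    git merge origin/2.3     # every fix backported to 2.3 flows forward into 2.4
    git checkout develop
    git merge origin/2.4     # and from 2.4 into develop, so nothing is lost on newer code bases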
Maybe I'm missing where you're heading there [14:32] rick_h_: i have on list 2 ubuntu images to choose, 14.04 and 16.04 when creating instances [14:32] adham: so Juju supports leveraging/using MAAS tags. [14:32] here is a sample of how machine names look like on my MAAS >> aware-code aware-yak casual-corgi casual-whale casual-mole clear-hound close-liger cool-troll decent-beetle divine-bug driven-drake easy-cod equal-frog equal-swan exotic-earwig expert-cow expert-slug fair-bee first-dog frank-monkey gentle-racer good-koi grown-bunny guided-eft handy-wahoo hip-hornet holy-bass holy-hen intent-bear large-kit [14:32] how can I know which one is at least load balancer, and which one is controller? [14:33] adham: by looking at juju and saying "juju ssh load-balancer/0" [14:33] using the task based approach [14:33] adham: the machine names are part of the MAAS setup when you commission the machines though. [14:33] adham: for instance, in my maas I have nuc1 through nuc8 [14:33] juju/conjure-up doesn't really care about the maas machine name [14:34] during Kunjure Up, I'm using our MAAS as the cloud [14:34] conjure-up* [14:34] adham: right, understand. But the names of the machines in MAAS come from commissioning in MAAS, before you ever run conjure-up [14:35] Yes, if you commission a pod that doesn't have a name [14:35] adham: conjure-up or juju don't change or modify the maas machine names at all [14:35] or a machine I mean [14:35] adham: conjure-up doesn't comission the machine. That's done ahead of time when adding hardware to MAAS [14:35] those machines did not show up unless I ran the conjure-up because these are actually the kubernetes machines, if I deleted those machines, kubernetes will go down [14:36] adham: so I'm failing to grok that statement there [14:36] I'm confused to be honest [14:36] how can I cooperate the 2 of them, or should I install kubernetes away from MAAS [14:36] adham: so you have a maas, with nothing running on it. And you go to the list of nodes. Each node has a name. That name is the machine name. Before kubernetes, conjure-up, juju, anything else is involved. [14:37] adham: did you conjure-up kubernetes onto your MAAS? [14:37] yes [14:37] rick_h_: anything i should check on my setup ? it's fresh conjure-up openstack / lxd setup, my first attempt so i am newbie here :( [14:37] and our MAAS has VMs and machines on it already [14:37] adham: ok, before you ran conjure-up you had a MAAS setup and that MAAS had X machines comissed into it [14:37] yes [14:37] adham: and when you go to the nodelist you see those names [14:38] the funny ones are only after kubernetes deployment [14:38] w0jtas: sorry, I don't know enough about openstack/glance to diagnose. I have to suggest checking out the openstack charms irc channel/mailing list. [14:38] anyone: how can I get 2 different configs of charms installed on same machine? I have different configurations of ceph-osd charms for 2 types of servers ? Thanks [14:38] but nothing changed to our VMs and machines (where they have proper names) [14:38] adham: are the funny ones VM's registered in MAAS? and not the original MAAS machines then? 
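A short sketch of the task-based view rick_h_ is describing; the unit names below are the ones the CDK bundle normally uses, so adjust them to whatever juju status actually shows:

    juju status                        # every application/unit and the machine it landed on
    juju ssh kubeapi-load-balancer/0   # ssh by unit name; the MAAS pet name never matters
    juju ssh kubernetes-master/0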
[14:39] they are not original maas machines [14:39] rathore_: so you have to deploy them as two different applications [14:39] they were made by conjure-up [14:39] rathore_: juju deploy ceph-osd ceph-mode1 [14:39] rathore_: juju deploy ceph-osd ceph-mode2 [14:39] w0jtas: what's wrong [14:39] rick_h_ : Thanks a lot [14:40] rathore_: np, if you need different configs then you'll want to log/perform other operations/scale them differently, so it's just reusing the same charm for two purposes. [14:40] stokachu: i installed openstack using conjure-up / lxd and now in horizon i want to run a first instance, but it's failing and in the node logs i see "Instance failed to spawn: ImageNotFound: Image could not be found." [14:40] adham: ok, so conjure-up created some VMs with pet-names that are now registered in MAAS somehow? [14:40] correct [14:40] adham: is this the MAAS "devices" list or the node list? [14:41] w0jtas: ok, so a glance issue is happening [14:41] rick_h_: Cool. I will try it out [14:41] When we first saw this, we thought that it was spam or a virus that came with kubernetes, so we deleted them, and the kubernetes deployment became offline [14:41] we deleted kubernetes completely and brought down the controller [14:41] stokachu: how do i check glance's condition then? any status debug or whatever [14:41] w0jtas: sec [14:41] adham: ouch yea, ok. I'm guessing this is on the devices list vs the machine list? [14:41] https://github.com/conjure-up/spells/blob/master/openstack-novalxd/steps/01_glance/glance.sh [14:42] w0jtas: ^ [14:42] after discussion with conjure-up and juju, we understood that this is normal and caused by the deployment because no machine names were given [14:42] that's basically what you need to run to import the images, maybe that failed somewhere, can you check in ~/.cache/conjure-up/openstack-novalxd/steps [14:42] we then tried to redeploy kubernetes, and here we see again the list of funny names, 70+ machines [14:42] those names can be seen from MAAS [14:43] adham: ok, sorry I'm catching up. So these are probably the lxd containers created for the k8s cluster registered in MAAS as devices. [14:43] and if we list the vms from the terminal on linux [14:43] adham: gotcha [14:43] we still see those funny names, almost everywhere [14:44] adham: so can you confirm that in MAAS you go to the nodes page and there's the table. At the top of the table are filters for "12 Machines 34 Devices 1 Controller" [14:44] adham: and that the funny names only show up in the Devices filter? [14:44] stokachu: where should i find the glance.log file? [14:44] yes [14:44] and also in the vm list from the terminal (outside maas) when listing the VMs [14:45] Actually it's machines, not devices, that I'm referring to [14:45] sorry, my apologies, nodes [14:45] w0jtas: ~/.cache/conjure-up/openstack-novalxd/steps/01_glance [14:45] I currently do not have access to the MAAS [14:45] seems like the Kubernetes deployment has taken over the local load balancer on the same server [14:46] w0jtas: you can also juju ssh nova-cloud-controller/0 [14:46] source the credentials file there [14:46] and perform glance commands [14:46] Is there a way that I can disable the kubernetes load balancer and keep the original load balancer on the server as the default? [14:47] kwmonroe: ^ wasn't there something about leveraging external cloud bits? [14:47] adham: will ask the k8s experts. I don't think so though because the charms are a combined solution tested together to work, so pulling it apart would potentially break stuff.
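A minimal sketch of the two-application approach rick_h_ spells out for rathore_ above (the application names, config files and option values here are illustrative):

    juju deploy ceph-osd ceph-osd-ssd --config ceph-osd-ssd.yaml
    juju deploy ceph-osd ceph-osd-hdd --config ceph-osd-hdd.yaml
    # each application keeps its own config and its own relations
    juju config ceph-osd-ssd osd-devices='/dev/nvme0n1'
    juju add-relation ceph-osd-ssd ceph-mon
    juju add-relation ceph-osd-hdd ceph-mon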
[14:49] thx rick_h_, I'm in kubernetes channel talking with them [14:49] adham: ok [14:49] but can you please help me about anyway if it's possible at all to have juju or conjure-up to set names for hte machines? [14:49] rather than leaving them blank, and force MAAS to set funny names to them? [14:49] adham: the other question is if there's any way to get conjure-up to use non-petnames for the containers created. I'm not sure how that is setup. [14:50] yea, I'm not sure if conjure-up is asking maas to name them or is providing names for them. I've not registered VMs in MAAS like that. [14:51] i'm trying to bring in cory_fu from conjure-up channel [14:51] as he's the one who redirected me here [14:51] adham: it looks like the add-device allows setting a name. So the question is how would you name them on deploy of the k8s cluster? I mean you don't want to be "juju deploy canonical-kubernetes --lxd1: prettyname, --lxd2: prettyname... [14:52] rick_h_: To clarify, it sounds like adham is using pods in MAAS such that the VMs are created on-demand rather than the older way of doing things where all VMs are pre-created with specific resource sets and managed manually via MAAS [14:52] In that scenario, the names are auto-generated by MAAS and conjure-up / juju has no way to influence them [14:52] cory_fu: oh, the kvm pods stuff? [14:52] Yes [14:52] oh, I was wondering why I'd not run into this before [14:53] thx cory_fu, appreciated... [14:53] rick_h_: My understanding is that this should function very similarly to the public cloud, where you have no control over the instance name / ID, but Juju should create tags in the metadata to indicate which Juju machine is running on that instance. [14:54] adham: ok, so bad news is I've got no path forward for you. I'd love if you filed a bug on bugs.launchpad.net/juju and bring up names to kvm pods in MAAS though as that might be something we need to update Juju to supply at VM creation time but I've not played with the pods stuff in MAAS yet. [14:54] rick_h_, adham: For instance, on my k8s deployment on AWS, my instance i-04c41c1309bde47d4 got the tag juju-machine-id=conjure-kubernetes-core-0a9-machine-0 [14:54] cory_fu: right, exactly. [14:54] cory_fu: so we'll have to setup something using pods and see what Juju does and update anywhere we're not treating it correctly [14:54] https://stackoverflow.com/questions/50970133/installed-kubernetes-on-ubuntu-and-i-see-a-lot-of-nodes-are-getting-created-in-m [14:54] rick_h_: At the end of the day, though, I think adham's real issue is that there were too many VMs created and he can't track down why or what roles each is playing. [14:54] would this help? [14:56] adham: a bit, but the key thing is how the MAAS is setup regarding the pods usage/etc. [14:56] I'm really not sure why more than around 9 VMs would have been created unless conjure-up was run multiple times. That's one for the Juju controller, and one for each machine required by CDK [14:56] adham: because the root thing is that this isn't a typical MAAS with bare metal machines going [14:57] exactly cory_fu " [14:57] rick_h_: At the end of the day, though, I think adham's real issue is that there were too many VMs created and he can't track down why or what roles each is playing." [14:57] adham: Do you still have your ~/.cache/conjure-up/conjure-up.log file available? That should have a record of everything that conjure-up did, including requesting new machines. 
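Since, as cory_fu notes, Juju tags each instance with a juju-machine-id, the mapping back from those pet-named VMs can also be read from Juju's side; a rough sketch, with the machine number as an example:

    juju machines                      # machine number, instance ID and DNS name for every machine in the model
    juju show-machine 3 --format yaml  # full detail for one machine, including the instance MAAS allocated for it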
[14:57] this is exactly what I'm running through [14:57] adham: then where did the VMs come from? What's the "virtual machine manager" tool? [14:57] Luckily, I still have this https://github.com/conjure-up/conjure-up/issues/1476 [14:58] this issue happened (after deleting the machines thinking they are a virus), but I'm over it [14:58] stokachu: so i see error on glance in neutron.log: keystoneauth1.exceptions.auth.AuthorizationFailure, then in keystone i see error: Unable to load certificate - ensure you have configured PKI with "keystone-manage pki_setup" [14:58] this problem is no longer persisting, but if you are looking for the log files, it's all packaged there [14:58] virtual machine manager to list the vms [14:59] adham: I'm trying to understand your setup so we can replicate it and diagnose why the tags about what resources were used for aren't making it. You say you comissions X bare metal nodes into a MAAS running somewhere correct? [14:59] so rick, like cory_fu mentioned KVM, but I use the virtual-machine manager for virsh [14:59] stokachu: and command not working: keystone-manage: error: argument command: invalid choice: 'pki_setup' [15:00] adham: I don't see any logs attached to that GitHub issue. Also, that connection error indicates that conjure-up tried to connect to a controller but couldn't, presumably because you had deleted the VM while Juju still had a record of it as a valid controller (in ~/.local/share/juju/controllers.yaml) [15:00] to replicate my enviornment >> follow >> https://tutorials.ubuntu.com/tutorial/create-kvm-pods-with-maas >> once you can commission machines successfully, you can proceed with conjure-up kubernetes, you will reproduce 100% what I have here [15:00] adham: ok, so you have MAAS running and you used a the virtual-machine-manager to create VMs and registered those VMs in MAAS? [15:01] adham: ty, that's what I needed. [15:01] jam: In case it gets lost in the torrent of Github mail. I commented on the PR. Pinning a single CPU is done via range syntax - "limits.cpu=0-0 [15:02] I ran cory_fu, I ran juju unregister controller so I think this issue fixed [15:02] hmm [15:02] let me check [15:03] there is only one controller there which is the current one [15:03] I removed the previous controller [15:04] cory_fu, I thought I attached the logs, one moment, I'll double check [15:05] cory_fu, can you please recheck the issue? [15:05] I uploaded the logs [15:06] adham: I think one thing that isn't clear from that tutorial when using MAAS with conjure-up is that once you have a pod available, if you don't tell conjure-up what pre-allocated machines to assign each unit to by providing a (if I recall correctly) tag= constraint on the CONFIGURE APPLICATIONS screen of conjure-up (you have to select Configure for each application), then Juju will assume you want new machines and will ask MAAS to [15:06] allocate them as it sees fit, which will likely lead to new machines being allocated from the pod. But I'm not certain about that because I don't have much experience with pods in MAAS [15:06] adham: so looking at that tutorial "... will be created and a random name will be assigned to it as soon as you hit the "Compose machine" button." [15:06] cory_fu: right, Juju isn't pod-aware so the thing is Juju just asks MAAS for a machine to use and since they're generated I'll bet it just creates them [15:07] cory_fu: I'll have to play with this and try it out. I've not used it yet. 
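A sketch of the constraint cory_fu is describing (stokachu later confirms the key is "tags="); the tag name and application here are illustrative, and in conjure-up the same string goes into the constraints box on the CONFIGURE APPLICATIONS screen:

    # pre-create the VMs (or tag existing machines) in MAAS with a tag such as "worker", then:
    juju deploy kubernetes-worker --constraints tags=worker
    # or, for an application that is already deployed, before adding more units:
    juju set-constraints kubernetes-worker tags=worker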
[15:07] I can retry with this [15:07] Guys, it's been a huge struggle to find someone I can talk to about this issue, someone who knows about it and can help, or at least has the knowledge [15:08] adham: hey, we're here most days. Happy to help as this is going to get me to play with something new in MAAS I've not done yet. [15:08] do you guys mind if I email both of you in a group email with updates so we can continue the discussion about this? [15:08] adham: sorry it took a bit to dig into what you had running there, but I think it's coming together [15:08] adham: file a bug please. That's the email thread and it'll let others see/find/etc [15:09] that's fine, and even better for me [15:09] can you pls tell me where to file it [15:09] and cory_fu, can you please also watch this? [15:09] adham: Of course [15:09] or stay in the loop, in case we need to refer back to conjure-up :D [15:09] thx [15:09] rick_h_, where can I file the bug? [15:10] adham: bugs.launchpad.net/juju [15:10] do you need the logs as provided in the github issue? [15:11] adham: Please do link to the GitHub issue and StackOverflow question, for context [15:13] adham: anything you've got we'll take and look into. [15:14] adham: to be clear, we can't change/set the machine names as those come from MAAS. However, we should have noted with tags or metadata in MAAS what the machines are up to. [15:14] adham: The only thing I see in that conjure-up.log file is 5 failed attempts to bootstrap. I could see that having created 5 new VMs in MAAS but I can't possibly imagine how it would have created more than that, though. Does your MAAS have those created VMs still available? Can you see if there were any Juju-assigned tags on them? [15:18] rick_h_: https://bugs.launchpad.net/juju/+bug/1779161 [15:18] Bug #1779161: conjure-up kubernetes creates 70+ VMs on KVM managed by MAAS with funny names [15:20] cory_fu: I had a few VMs before Kubernetes --- I created this issue after deleting the VMs, then I couldn't conjure-up/down kubernetes anymore. I thought those VMs were viruses, so I deleted them manually via the virtual machine manager and MAAS; having done this corrupted kubernetes and prevented conjure-up/down from working [15:20] this is the time when I uploaded the logs [15:21] someone explained to me the details of juju, and from there I was able to tear the rest of kubernetes down manually via juju, and then I redeployed and confirmed that those vms are from kubernetes [15:22] rick_h_: I don't really expect this >> "juju deploy canonical-kubernetes --lxd1: prettyname, --lxd2: prettyname... because it doesn't make sense [15:22] but at least instance i-04c41c1309bde47d4 got the tag juju-machine-id=conjure-kubernetes-core-0a9-machine-0 << I expect the machine name to be conjure-kubernetes-core-0a9-machine-0 if the name is set to blank [15:23] this way it should (hopefully) keep MAAS from giving those un-named machines pet names [15:25] cory_fu, are you still here? [15:27] rick_h_ are you still here? [15:28] adham: still here. But it's the work day so will go in/out sometimes with calls/etc. [15:29] ahh, no, it's alright, just making sure that both of you got everything and at least we're all linked together on the bug ticket [15:29] adham: so we'll have to see. Typically with MAAS, machine names are set up long before Juju gets the machine. With this support for kvm-pods, if MAAS now allows that to be tweaked over the API we'll have to see what changes Juju needs to work with it.
[15:29] I am going to tear down the current kubernetes installation [15:29] adham: right, I've fired off an initial email making sure if anyone on the team's played with the pods stuff and honestly we'll have to find a block of time to set that all up and see how it works [15:29] we've not used it yet [15:30] And try >> "cory_fu> adham: I think one thing that isn't clear from that tutorial when using MAAS with conjure-up is that once you have a pod available, if you don't tell conjure-up what pre-allocated machines to assign each unit to by providing a (if I recall correctly) tag= constraint on the CONFIGURE APPLICATIONS screen of conjure-up (you have to select Configure for each application), then Juju will assume you want new machines and [15:30] will ask MAAS to" [15:37] adham, rick_h_: I can't seem to subscribe myself to the Juju bug on LP [15:38] adham, rick_h_: Can one of you please try subscribing johnsca (Cory Johns) to that bug? [15:38] cory_fu: you show up as subscribed to me [15:38] adham: And yes, sorry I got pulled away for a minute as well [15:39] under the "Notified of all changes" heading is Cory Johns [15:39] kwmonroe: Oh, ok. It's not showing for me. That's fine then [15:39] I subscribed you [15:39] can you pls check cory_fu [15:39] adham: so I don't udnerstand what you mean by "And try..." with cory's quote [15:40] I'm going to see if I set the constraints [15:40] adham: I guess it just doesn't show me to myself. :p [15:40] I will avoid the MAAS autonaming [15:40] adham: I can tell you that's true. Against, MAAS is allocating machines on the fly using a virtual machine setup so they won't be reused unless you specify a placement constraint [15:40] stokachu: Can you please confirm the correct syntax for setting a constraint to target a specific machine in conjure-up? [15:40] stokachu: Also, how do you handle the case where there are multiple units, like for kubernetes-worker? [15:42] adham: Please watch for stokachu's reply to ^, because I'm not certain of the correct syntax [15:43] thanks cory_fu, will do, it's 1:43 AM here, I might go to sleep soon as I have work tomorrow at 8 AM [15:43] I can't really keep my eyes open anymore [15:44] adham: I completely understand. [15:44] I am actually talking to you guys from Australia [15:45] adham: One of the Juju folks that you've spoken with in the past, yeah, the ones from Australia, could also tell you the constraint syntax to target MAAS machines, since it's actually a Juju constraint that's being specified [15:45] wallyworld is the one I spoke to and who advised me to reach out to conjure-up team [15:46] I think it's confusing probably or I'm not doing good in explaining [15:46] my apologize guys for the hassle [15:46] is it possible that we can use the bug ticket for discussion and mentioning? It would be really great if stokachu posted an update there [15:46] ? [15:47] adham: Yes, I'll comment on there with my understanding of the issue as well [15:47] thanks cory_fu [15:47] goodnight [16:08] it's "tags=" [16:08] see https://docs.jujucharms.com/2.3/en/reference-constraints [16:29] kwmonroe: https://www.dropbox.com/s/jtcterg4ft7f09z/Screenshot%20from%202018-06-28%2017-26-15.png?dl=0 [16:29] we've got some catching up to do ;) [16:37] magicaltrout: :) [19:29] zeestrat: is this your stuff turned tutorial? https://tutorials.ubuntu.com/tutorial/tutorial-charm-development-part1#0 [19:32] bdx: ^ might be cool for your new folks when you bring them in [19:40] rick_h_: no, sir. What tipped you off? Looks nice though. 
[19:41] zeestrat: no? I thought you were working with folks on putting your docs stuff into a tutorial [19:41] zeestrat: I saw it shared from the twitter account actually. https://twitter.com/ubuntu/status/1012417625685708800 [19:47] Nope. I know our boy Lönroth has been working on some docs/tutorial stuff so he might know. [19:48] zeestrat: ok well figured I'd check [19:49] Thanks for the consideration :) looks like the tutorial page could use some authorship details and perhaps a link to the source. [20:15] Hi guys, trying to install Kubernetes right now, and it seems the Kubernetes Charms are using series xenial. Any idea if it would work with Bionic ? [20:18] PatrickD_: yeah it should. that likely won't be the default until after the 18.04.1 release [20:21] What's the easiest way to force it to Bionic ? [20:24] juju deploy cs:bionic/ ? [20:26] PatrickD_: trying to figure that out. i thought you could do `juju deploy canonical-kubernetes --series bionic --force`, but it appears those args only work on individual charms, not bundles [20:32] yeah, we also tried that :) [20:33] PatrickD_: for now i'm afraid you'll have to deploy the charms individually so you can use `--force --series bionic` [20:33] PatrickD_: raw bundle is here: https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml [20:33] that'll show you what charm and relations you need [20:33] charms [20:34] Or I use Xenial, and find a way to use a 4.12+ kernel (drivers for Dell 640 requirement). Any easy way to do that ? [20:36] PatrickD_: where are you deploying? public cloud? [20:37] on MAAS [20:39] PatrickD_: i'm not exactly a MAAS expert, but it seems like you could create a xenial image with the kernel you want [20:45] PatrickD_: also `juju model-config -h | grep cloudinit -C3` [20:46] could potentially upgrade kernel that way. haven't tried it. [20:47] PatrickD_: honestly charm pull the bundle and edit the default series on it to bionic. [20:47] PatrickD_: otherwise we rely on the charm default as the author suggests series X. No way around it without editing the bundle because it's a lot of assuming that each charm supports a series in a bundle [20:49] rick_h_: that won't work [20:49] rick_h_: we don't have bionic in the charms' series list yet [20:49] tvansteenburgh: oh, at all? I gotcha. [20:49] tvansteenburgh: yea, then the "bundle" isn't ready for that yet heh. [20:50] yeah, trying to use bionic kernel in xenial, which would be just fine. [20:52] PatrickD_: in that case I'd deploy the bundle and then juju run the commands to get the new kernel in place/reboot [20:52] PatrickD_: vs bootstrapping the whole other series (if just the kernel is what you're after) [21:28] Also have an issue with MAAS zones in Juju. we have 3 zones (default, A and B). It either goes to default or B, but I want it to go to A... Any way to specify MAAS zone when deploying the bundle ? [21:31] PatrickD_: I thought zones were meant to be like AZ in public clouds so that they were rotated/spread across for fault tolerance. [21:31] PatrickD_: so Juju should rotate them as you add-unit. [21:31] PatrickD_: that said, you might try a constraint of zone=xxx [21:31] but not sure on that tbh [21:32] PatrickD_: I'd start with a juju deploy xx --constraints zone=yyy first [21:32] and see if Juju will do that [21:32] constraint doesn't work, but what you say makes sense... maybe I should remove unused zones then. 
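Going back to PatrickD_'s 4.12+ kernel question, here is a rough sketch of the two routes kwmonroe and rick_h_ mention; the HWE package name is an assumption about what pulls a newer kernel onto xenial, so check it actually gives the version the hardware needs:

    # route 1: model-wide cloud-init userdata, applied to machines as Juju creates them
    cat > userdata.yaml <<'EOF'
    packages:
      - linux-generic-hwe-16.04
    EOF
    juju model-config cloudinit-userdata="$(cat userdata.yaml)"

    # route 2: after deploying the bundle, upgrade and reboot the existing machines
    juju run --all 'sudo apt-get install -y linux-generic-hwe-16.04'
    juju run --all 'sudo reboot'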
[21:33] considering that there are zero machines in default and B, it's a bit strange that it wants to go to B [21:45] PatrickD_: hmm, yea that's not right [22:13] we don't currently support zone placement in bundles [22:14] we don't have zone as a constraint [22:14] we have talked about it before [22:14] a key here is that maas zones are quite different to other cloud zones [22:16] tvansteenburgh, PatrickD_: I *think* you could specify a bundle overlay to change the default series [22:16] * thumper takes a look [22:16] thumper: sure, if the charm supports the series, but it doesn't [22:16] ah [22:16] hmm... [22:16] yeah [22:16] fair call [22:17] PatrickD_: fwiw i'm working on adding bionic to the charms, but it won't be done today