wallyworld | thumper: can haz review? https://github.com/juju/juju/pull/8867 | 00:17 |
babbageclunk | veebers: I made the mistake of trying to use go-guru-callers, it is taking a long time and making my computer very sad. | 00:31 |
veebers | babbageclunk: hah yeah I don't think I've had anything useful from that before. I usually need to kill it before long | 00:34 |
babbageclunk | I think I might just be setting the scope wrong... | 00:36 |
veebers | babbageclunk: I *think* the scope should be something like: github.com/juju/juju/... but I'm not 100% certain | 00:40 |
babbageclunk | veebers: yeah - from the docs it sounds like it should be github.com/juju/juju/cmd/jujud - the package that has the main (which is the starting point). I guess the problem is that it's whole-program analysis for a too-big program, at least for my computer. | 00:52 |
veebers | babbageclunk: get more computers | 00:53 |
veebers | babbageclunk: heh yeah, I would be interested in how well it works with a smaller project etc. I notice a bit of slow down when I change branches etc. as it compiles bits to give me completion etc. | 00:54 |
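For context, guru's whole-program queries take a -scope flag naming the packages to analyse, and narrowing that scope is the usual fix for the slowness described above. A minimal sketch from the shell; the file path and byte offset are placeholders:

```
# Find the callers of the function at byte offset 1234 in the given file,
# limiting whole-program analysis to jujud's main package.
guru -scope github.com/juju/juju/cmd/jujud callers \
    $GOPATH/src/github.com/juju/juju/worker/uniter/uniter.go:#1234
```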
=== chigang__ is now known as chigang
thumper | wallyworld: with you shortly, finishing with IS | 02:59 |
thumper | wallyworld: omw now | 03:07 |
thumper | wallyworld: bug 1778970 | 03:17 |
mup | Bug #1778970: offer will not leave model <juju:New> <https://launchpad.net/bugs/1778970> | 03:17 |
kelvin_ | wallyworld, would you mind taking a look at these PRs when you have time: https://github.com/juju/charm/pull/249/files https://github.com/juju/jujusvg/pull/56/files https://github.com/juju/charmstore/pull/808/files ? thanks | 06:18 |
wallyworld | kelvin_: sure, will do | 06:32 |
wallyworld | vino: want to join HO again? | 06:32 |
vino | yes. | 06:32 |
wallyworld | kelvin_: with the svg PR, you should wait to land the latest charm.v6 change and use the dep from that one | 06:39 |
kelvin_ | wallyworld, yes, the charm.v6 is the dep for all the others, and I will do juju last, after all of these have landed | 06:41 |
kelvin_ | wallyworld, and one more for bundlechanges please, thanks https://github.com/juju/bundlechanges/pull/41/files | 06:44 |
kelvin_ | i will update dependencies.tsv for it. | 06:45 |
wallyworld | kelvin_: and for svg as well | 06:50 |
wallyworld | even though the PR is already proposed | 06:50 |
wallyworld | kelvin_: just in a meeting, will finish looking soon | 06:55 |
kelvin_ | wallyworld, thanks | 07:01 |
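For reference, dependencies.tsv is juju's godeps manifest: one tab-separated line per dependency giving the import path, VCS, revision, and revision timestamp. A hypothetical entry for the charm.v6 bump (the revision hash and date here are invented for illustration):

```
gopkg.in/juju/charm.v6	git	8b19e9f1db4494f83e32b73aa9ff0a9e4c703c6c	2018-06-29T06:41:00Z
```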
thumper | wallyworld, vino: team standup? | 07:07 |
wallyworld | thumper: in meeting which was delayed | 07:08 |
wallyworld | will try to finish soon | 07:08 |
manadart | Anyone ever get this when building Juju? "readSym out of sync". | 09:49 |
stickupkid | manadart: never seen that before | 09:50 |
manadart | Need a review for constraints (cores, mem, instance type) support for LXD container machines: https://github.com/juju/juju/pull/8869 | 10:58 |
stickupkid | manadart: looking now | 11:02 |
stickupkid | manadart: looks really clean... | 11:03 |
manadart | stickupkid: I finished it and then rewrote it. This way it's a one-liner to apply it in the provider. | 11:05 |
stickupkid | manadart: mine's merging now | 11:05 |
manadart | stickupkid: Nice. want to sync up after lunch? | 11:06 |
stickupkid | manadart: hell yeah | 11:06 |
stickupkid | manadart: you've got a failure in your PR; it looks like an intermittent failure so I've just done a retry | 11:15 |
manadart | jam stickupkid: Looking for a review of https://github.com/juju/juju/pull/8862. | 12:27 |
manadart | Going to land https://github.com/juju/juju/pull/8869 when it goes green. | 12:27 |
rathore | trying to deploy openstack-lxd but having issues: keystone never completes the database relation even when the mysql/0 unit is ready, and ceph-mon/0,1,2 get stuck at "Bootstrapping MON cluster" | 12:38 |
rathore | any suggestions on how I can fix it? | 12:38 |
jam | manadart: quid-pro-quo? https://github.com/juju/juju/pull/8871 | 13:34 |
manadart | jam: Deal. | 13:34 |
manadart | stickupkid: Want to jump on the guild HO? | 13:41 |
stickupkid | yeah | 13:42 |
stickupkid | ah, someone already there | 13:42 |
jam | manadart: reviewed 8862 | 13:46 |
manadart | jam: Many thanks. | 13:46 |
manadart | jam: Approved yours too. | 13:47 |
jam | rick_h_: question for you about build/release/etc process. | 14:13 |
jam | with 2.3 being an LTS, I'd like to keep the 2.3 branch up to date and merge those changes into the 2.4 branch. However, as we are really close to a 2.4.0, should I wait to merge a 2.3 patch into 2.4? | 14:13 |
jam | even if it is effectively a no-op? (it does have some small, eg line-numbers-in-a-diff, changes) | 14:14 |
jam | I do think that we generally want 2.3 => 2.4 => develop, so we know that any patches that we *do* backport to 2.3 are fully applied to new code bases. | 14:14 |
jam | anyway, I have https://github.com/juju/juju/pull/8872 that potentially updates 2.4, but I'm happy to wait until 2.4.0 is cut to actually merge that. | 14:16 |
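A sketch of the forward-merge flow jam is describing, assuming a git remote named upstream pointing at juju/juju and the branch names used in the discussion:

```
# Land fixes on the oldest maintained branch first, then merge forward so
# every 2.3 backport is guaranteed to reach 2.4 (and later develop).
git checkout 2.3 && git pull upstream 2.3
git checkout 2.4 && git pull upstream 2.4
git merge 2.3              # often a near no-op, e.g. only diff-context changes
git push upstream 2.4      # repeat the same pattern from 2.4 into develop
```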
jam | have a good weekend all | 14:16 |
rick_h_ | jam: definitely wait atm | 14:22 |
rick_h_ | jam: it'll have to go into 2.4.1 | 14:22 |
rick_h_ | have a good weekend jam | 14:23 |
w0jtas | hi, I have a fresh deployment of openstack using juju charms / conjure-up but I cannot start the first instance; in the logs I see: "Instance failed to spawn: ImageNotFound: Image could not be found." What's weird is it looks like the image name is missing | 14:24 |
adham | conjure-up deploys kubernetes (the CDK bundle) without setting the machine names, so MAAS auto-picks a random name for each machine from the pet-names library. Is there a way we can have conjure-up use a naming convention? | 14:25 |
adham | So I'm in #conjure-up channel and they are redirecting me to here | 14:25 |
kwmonroe | adham: we're here too. no, you can't have conjure-up set machine names | 14:28 |
kwmonroe | because juju can't set machine names | 14:28 |
kwmonroe | because juju doesn't care about machine names | 14:28 |
rick_h_ | w0jtas: starting the first instance in which way? | 14:28 |
adham | kwmonroe, I have 70+ machines created by conjure-up kubernetes; if you saw my MAAS window and how many funny names are there | 14:29 |
rick_h_ | kwmonroe: lol, just realized the pet-names library is called "pet" when we say to stop keeping pets (servers) and start driving cattle. | 14:29 |
adham | you would definitely reconsider the deployment here | 14:29 |
adham | conjure-up & juju are great tools but honestly this little issue is destroying their greatness | 14:30 |
rick_h_ | adham: the more machines the better. You shouldn't ever really care about the machines but what's running on them. Juju's taking the application/task based view of the world and so machines are expendable little things that you can reallocate all the time. | 14:30 |
rick_h_ | adham: can I ask why the names are important? What task/etc are you doing that is driving you to referencing the machines individually? | 14:30 |
w0jtas | rick_h_: in horizon i want to start first instance with ubuntu 16.04 using lxd | 14:30 |
adham | rick_h_, the machines on MAAS have no tags, no description; they are just pet names, you cannot distinguish which machine is which | 14:30 |
adham | it would make sense if we had, for example, lb1, controller1, master1, something descriptive | 14:31 |
adham | but not blank | 14:31 |
rick_h_ | adham: oh we definitely encourage putting tags on your maas machines so that you can target machines for storage/networking/etc. | 14:31 |
adham | you want me to tag 70+ machines that were created by conjure-up/juju? | 14:31 |
rick_h_ | w0jtas: do you have the images used loaded into glance? I'm not sure how that's pre-seeded in an OS install. You might check with the OS folks. | 14:31 |
rick_h_ | adham: no, in MAAS you do it once and you don't have to redo/etc. It's just part of setting up the machine infrastructure. Maybe I'm missing where you're heading there | 14:32 |
w0jtas | rick_h_: i have on list 2 ubuntu images to choose, 14.04 and 16.04 when creating instances | 14:32 |
rick_h_ | adham: so Juju supports leveraging/using MAAS tags. | 14:32 |
adham | here is a sample of how machine names look like on my MAAS >> aware-code aware-yak casual-corgi casual-whale casual-mole clear-hound close-liger cool-troll decent-beetle divine-bug driven-drake easy-cod equal-frog equal-swan exotic-earwig expert-cow expert-slug fair-bee first-dog frank-monkey gentle-racer good-koi grown-bunny guided-eft handy-wahoo hip-hornet holy-bass holy-hen intent-bear large-kit | 14:32 |
adham | how can I know which one is, say, the load balancer, and which one is the controller? | 14:32 |
rick_h_ | adham: by looking at juju and saying "juju ssh load-balancer/0" | 14:33 |
rick_h_ | using the task based approach | 14:33 |
rick_h_ | adham: the machine names are part of the MAAS setup when you commission the machines though. | 14:33 |
rick_h_ | adham: for instance, in my maas I have nuc1 through nuc8 | 14:33 |
rick_h_ | juju/conjure-up doesn't really care about the maas machine name | 14:33 |
adham | during conjure-up, I'm using our MAAS as the cloud | 14:34 |
rick_h_ | adham: right, understand. But the names of the machines in MAAS come from commissioning in MAAS, before you ever run conjure-up | 14:34 |
adham | Yes, if you commission a pod that doesn't have a name | 14:35 |
rick_h_ | adham: conjure-up or juju don't change or modify the maas machine names at all | 14:35 |
adham | or a machine I mean | 14:35 |
rick_h_ | adham: conjure-up doesn't commission the machine. That's done ahead of time when adding hardware to MAAS | 14:35 |
adham | those machines did not show up until I ran conjure-up, because these are actually the kubernetes machines; if I deleted those machines, kubernetes would go down | 14:35 |
rick_h_ | adham: so I'm failing to grok that statement there | 14:36 |
adham | I'm confused to be honest | 14:36 |
adham | how can I get the two of them to cooperate, or should I install kubernetes away from MAAS? | 14:36 |
rick_h_ | adham: so you have a maas, with nothing running on it. And you go to the list of nodes. Each node has a name. That name is the machine name. Before kubernetes, conjure-up, juju, anything else is involved. | 14:36 |
rick_h_ | adham: did you conjure-up kubernetes onto your MAAS? | 14:37 |
adham | yes | 14:37 |
w0jtas | rick_h_: anything I should check on my setup? It's a fresh conjure-up openstack / lxd setup, my first attempt, so I am a newbie here :( | 14:37 |
adham | and our MAAS has VMs and machines on it already | 14:37 |
rick_h_ | adham: ok, before you ran conjure-up you had a MAAS setup and that MAAS had X machines commissioned into it | 14:37 |
adham | yes | 14:37 |
rick_h_ | adham: and when you go to the nodelist you see those names | 14:37 |
adham | the funny ones are only after kubernetes deployment | 14:38 |
rick_h_ | w0jtas: sorry, I don't know enough about openstack/glance to diagnose. I have to suggest checking out the openstack charms irc channel/mailing list. | 14:38 |
rathore_ | anyone: how can I get 2 different configs of charms installed on same machine? I have different configurations of ceph-osd charms for 2 types of servers ? Thanks | 14:38 |
adham | but nothing changed for our VMs and machines (which have proper names) | 14:38 |
rick_h_ | adham: are the funny ones VM's registered in MAAS? and not the original MAAS machines then? | 14:38 |
adham | they are not original maas machines | 14:39 |
rick_h_ | rathore_: so you have to deploy them as two different applications | 14:39 |
adham | they were made by conjure-up | 14:39 |
rick_h_ | rathore_: juju deploy ceph-osd ceph-mode1 | 14:39 |
rick_h_ | rathore_: juju deploy ceph-osd ceph-mode2 | 14:39 |
stokachu | w0jtas: what's wrong | 14:39 |
rathore_ | rick_h_ : Thanks a lot | 14:39 |
rick_h_ | rathore_: np, if you need different ocnfigs then you'll want to log/perform other operations/scale them differently so it's just reusing the same charm for two purposes. | 14:40 |
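A minimal sketch of the pattern rick_h_ describes: the same charm deployed twice under different application names, each with its own configuration. The YAML file names and the osd-devices value are illustrative:

```
# Two independent applications from the one ceph-osd charm.
juju deploy ceph-osd ceph-osd-ssd --config ceph-osd-ssd.yaml
juju deploy ceph-osd ceph-osd-hdd --config ceph-osd-hdd.yaml

# Each can then be scaled and reconfigured separately.
juju add-unit ceph-osd-ssd
juju config ceph-osd-hdd osd-devices=/dev/sdb
```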
w0jtas | stokachu: i installed openstack using conjure-up / lxd and now in horizon i want to run first instance, but it's failing and in node logs i see "Instance failed to spawn: ImageNotFound: Image could not be found." | 14:40 |
rick_h_ | adham: ok, so conjure-up created some VMs with pet-names that are now registered in MAAS somehow? | 14:40 |
adham | correct | 14:40 |
rick_h_ | adham: is this the MAAS "devices" list or the node list? | 14:40 |
stokachu | w0jtas: ok, so a glance issue is happening | 14:41 |
rathore_ | rick_h_: Cool. I will try it out | 14:41 |
adham | When we first saw this, we thought it was spam or a virus that came with kubernetes, so we deleted them, and the kubernetes deployment went offline | 14:41 |
adham | we deleted kubernetes completely and brought down the controller | 14:41 |
w0jtas | stokachu: how do i check glance condition then ? any status debug or whatever | 14:41 |
stokachu | w0jtas: sec | 14:41 |
rick_h_ | adham: ouch yea, ok. I'm guessing this is on devices list vs the machine list? | 14:41 |
stokachu | https://github.com/conjure-up/spells/blob/master/openstack-novalxd/steps/01_glance/glance.sh | 14:41 |
stokachu | w0jtas: ^ | 14:42 |
adham | after discussion with the conjure-up and juju folks, we understood that this is normal and caused by the deployment, because no machine names are given | 14:42 |
stokachu | that's basically what you need to run to import the images; maybe that failed somewhere, can you check in ~/.cache/conjure-up/openstack-novalxd/steps | 14:42 |
adham | we then tried to redeploy kubernetes, and here we see again the list of 70+ machines with funny names | 14:42 |
adham | those names can be seen from MAAS | 14:42 |
rick_h_ | adham: ok, sorry I'm catching up. So these are probably the lxd containers created for the k8s cluster registered in MAAS as devices. | 14:43 |
adham | and if we list the VMs from the terminal on linux | 14:43 |
rick_h_ | adham: gotcha | 14:43 |
adham | we still see those funny names, almost everywhere | 14:43 |
rick_h_ | adham: so can you confirm that in MAAS you go to the nodes page and there's the table. At the top of the table is filters for "12 Machines 34 Devices 1 Controller" | 14:44 |
rick_h_ | adham: and that the funny names only show up in the Devices filter? | 14:44 |
w0jtas | stokachu: where should i find glance.log file ? | 14:44 |
adham | yes | 14:44 |
adham | and also from the terminal (outside MAAS) when listing the VMs | 14:44 |
adham | Actually, it's machines, not devices, that I'm referring to | 14:45 |
adham | sorry, my apologies, nodes | 14:45 |
stokachu | w0jtas: ~/.cache/conjure-up/openstack-novalxd/steps/01_glance | 14:45 |
adham | I currently do not have access to the MAAS | 14:45 |
adham | seems like Kubernetes deployment has taken over the local load balancer on the same server | 14:45 |
stokachu | w0jtas: you can also juju ssh nova-cloud-controller/0 | 14:46 |
stokachu | source the credentials file there | 14:46 |
stokachu | and perform glance commands | 14:46 |
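Putting stokachu's suggestion together, a rough sketch; the name of the credentials file on the unit is an assumption, so use whatever the charm actually drops there:

```
juju ssh nova-cloud-controller/0
# On the unit: source the admin credentials (file name assumed), then check
# whether the expected images were imported.
source ~/admin-openrc      # hypothetical path
glance image-list
```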
adham | Is there a way that I can disable the kubernetes load balancer and keep the original load balancer on the server as the default? | 14:46 |
rick_h_ | kwmonroe: ^ wasn't there something about leveraging external cloud bits? | 14:47 |
rick_h_ | adham: will ask the k8s experts. I don't think so though because the charms are a combined solution tested together to work so pulling it apart would potentially break stuff. | 14:47 |
adham | thx rick_h_, I'm in kubernetes channel talking with them | 14:49 |
rick_h_ | adham: ok | 14:49 |
adham | but can you please help me anyway: is it possible at all to have juju or conjure-up set names for the machines? | 14:49 |
adham | rather than leaving them blank and forcing MAAS to give them funny names? | 14:49 |
rick_h_ | adham: the other question is if there's any way to get conjure-up to use non-petnames for the containers created. I'm not sure how that is setup. | 14:49 |
rick_h_ | yea, I'm not sure if conjure-up is asking maas to name them or is providing names for them. I've not registered VMs in MAAS like that. | 14:50 |
adham | i'm trying to bring in cory_fu from conjure-up channel | 14:51 |
adham | as he's the one who redirected me here | 14:51 |
rick_h_ | adham: it looks like the add-device allows setting a name. So the question is how would you name them on deploy of the k8s cluster? I mean you don't want to be "juju deploy canonical-kubernetes --lxd1: prettyname, --lxd2: prettyname... | 14:51 |
cory_fu | rick_h_: To clarify, it sounds like adham is using pods in MAAS such that the VMs are created on-demand rather than the older way of doing things where all VMs are pre-created with specific resource sets and managed manually via MAAS | 14:52 |
cory_fu | In that scenario, the names are auto-generated by MAAS and conjure-up / juju has no way to influence them | 14:52 |
rick_h_ | cory_fu: oh, the kvm pods stuff? | 14:52 |
cory_fu | Yes | 14:52 |
rick_h_ | oh, I was wondering why I'd not run into this before | 14:52 |
adham | thx cory_fu, appreciated... | 14:53 |
cory_fu | rick_h_: My understanding is that this should function very similarly to the public cloud, where you have no control over the instance name / ID, but Juju should create tags in the metadata to indicate which Juju machine is running on that instance. | 14:53 |
rick_h_ | adham: ok, so bad news is I've got no path forward for you. I'd love it if you filed a bug on bugs.launchpad.net/juju and brought up naming of kvm pods in MAAS, as that might be something we need to update Juju to supply at VM creation time, but I've not played with the pods stuff in MAAS yet. | 14:54 |
cory_fu | rick_h_, adham: For instance, on my k8s deployment on AWS, my instance i-04c41c1309bde47d4 got the tag juju-machine-id=conjure-kubernetes-core-0a9-machine-0 | 14:54 |
rick_h_ | cory_fu: right, exactly. | 14:54 |
rick_h_ | cory_fu: so we'll have to setup something using pods and see what Juju does and update anywhere we're not treating it correctly | 14:54 |
adham | https://stackoverflow.com/questions/50970133/installed-kubernetes-on-ubuntu-and-i-see-a-lot-of-nodes-are-getting-created-in-m | 14:54 |
cory_fu | rick_h_: At the end of the day, though, I think adham's real issue is that there were too many VMs created and he can't track down why or what roles each is playing. | 14:54 |
adham | would this help? | 14:54 |
rick_h_ | adham: a bit, but the key thing is how the MAAS is setup regarding the pods usage/etc. | 14:56 |
cory_fu | I'm really not sure why more than around 9 VMs would have been created unless conjure-up was run multiple times. That's one for the Juju controller, and one for each machine required by CDK | 14:56 |
rick_h_ | adham: because the root thing is that this isn't a typical MAAS with bare metal machines going | 14:56 |
adham | exactly cory_fu: "<cory_fu> rick_h_: At the end of the day, though, I think adham's real issue is that there were too many VMs created and he can't track down why or what roles each is playing." | 14:57 |
cory_fu | adham: Do you still have your ~/.cache/conjure-up/conjure-up.log file available? That should have a record of everything that conjure-up did, including requesting new machines. | 14:57 |
adham | this is exactly what I'm running through | 14:57 |
rick_h_ | adham: then where did the VMs come from? What's the "virtual machine manager" tool? | 14:57 |
adham | Luckily, I still have this https://github.com/conjure-up/conjure-up/issues/1476 | 14:57 |
adham | this issue happened (after deleting the machines thinking they were a virus), but I'm over it | 14:58 |
w0jtas | stokachu: so I see an error from glance in neutron.log: keystoneauth1.exceptions.auth.AuthorizationFailure; then in keystone I see the error: Unable to load certificate - ensure you have configured PKI with "keystone-manage pki_setup" | 14:58 |
adham | this problem no longer persists, but if you are looking for the log files, they're all packaged there | 14:58 |
adham | virtual machine manager to list the vms | 14:58 |
rick_h_ | adham: I'm trying to understand your setup so we can replicate it and diagnose why the tags about what the resources were used for aren't making it. You said you commissioned X bare metal nodes into a MAAS running somewhere, correct? | 14:59 |
adham | so rick, like cory_fu mentioned, it's KVM, but I use virtual-machine-manager for virsh | 14:59 |
w0jtas | stokachu: and the command doesn't work: keystone-manage: error: argument command: invalid choice: 'pki_setup' | 14:59 |
cory_fu | adham: I don't see any logs attached to that GitHub issue. Also, that connection error indicates that conjure-up tried to connect to a controller but couldn't, presumably because you had deleted the VM while Juju still had a record of it as a valid controller (in ~/.local/share/juju/controllers.yaml) | 15:00 |
adham | to replicate my environment >> follow >> https://tutorials.ubuntu.com/tutorial/create-kvm-pods-with-maas >> once you can commission machines successfully, you can proceed with conjure-up kubernetes, and you will reproduce 100% what I have here | 15:00 |
rick_h_ | adham: ok, so you have MAAS running and you used the virtual-machine-manager to create VMs and registered those VMs in MAAS? | 15:00 |
rick_h_ | adham: ty, that's what I needed. | 15:01 |
manadart | jam: In case it gets lost in the torrent of Github mail: I commented on the PR. Pinning a single CPU is done via range syntax - "limits.cpu=0-0" | 15:01 |
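For reference, the distinction manadart is pointing at: in LXD, limits.cpu=1 means "any one CPU" (the scheduler picks), while the range form pins the container to specific CPUs. For example, against a standalone container:

```
# Pin container "c1" to CPU 0 using range syntax.
lxc config set c1 limits.cpu 0-0
```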
adham | cory_fu: I ran juju unregister on the controller, so I think this issue is fixed | 15:02 |
adham | hmm | 15:02 |
adham | let me check | 15:02 |
adham | there is only one controller there which is the current one | 15:03 |
adham | I removed the previous controller | 15:03 |
adham | cory_fu, I thought I attached the logs, one moment, I'll double check | 15:04 |
adham | cory_fu, can you please recheck the issue? | 15:05 |
adham | I uploaded the logs | 15:05 |
cory_fu | adham: I think one thing that isn't clear from that tutorial when using MAAS with conjure-up is that once you have a pod available, if you don't tell conjure-up what pre-allocated machines to assign each unit to by providing a (if I recall correctly) tag=<machine-tag> constraint on the CONFIGURE APPLICATIONS screen of conjure-up (you have to select Configure for each application), then Juju will assume you want new machines and will ask MAAS to allocate them as it sees fit, which will likely lead to new machines being allocated from the pod. But I'm not certain about that because I don't have much experience with pods in MAAS | 15:06 |
rick_h_ | adham: so looking at that tutorial "... will be created and a random name will be assigned to it as soon as you hit the "Compose machine" button." | 15:06 |
rick_h_ | cory_fu: right, Juju isn't pod-aware so the thing is Juju just asks MAAS for a machine to use and since they're generated I'll bet it just creates them | 15:06 |
rick_h_ | cory_fu: I'll have to play with this and try it out. I've not used it yet. | 15:07 |
adham | I can retry with this | 15:07 |
adham | Guys, it's been a huge struggle to find someone I can talk to about this issue, someone who knows it and can help, at least with knowledge | 15:07 |
rick_h_ | adham: hey, we're here most days. Happy to help as this is going to get me to play with something new in MAAS I've not done yet. | 15:08 |
adham | do you guys mind if I can please email both of you in a group email with updates where we can continue discussion about this? | 15:08 |
rick_h_ | adham: sorry it took a bit to dig into what you had running there, but I think it's coming together | 15:08 |
rick_h_ | adham: file a bug please. That's the email thread and it'll let others see/find/etc | 15:08 |
adham | that's fine, and even better for me | 15:09 |
adham | can you pls tell me where to file it | 15:09 |
adham | and cory_fu, can you please also watch this? | 15:09 |
cory_fu | adham: Of course | 15:09 |
adham | or stay in the loop, in case we need to refer back to conjure-up :D | 15:09 |
adham | thx | 15:09 |
adham | rick_h_, where can I file the bug? | 15:09 |
rick_h_ | adham: bugs.launchpad.net/juju | 15:10 |
adham | do you need the logs as provided in the github issue? | 15:10 |
cory_fu | adham: Please do link to the GitHub issue and StackOverflow question, for context | 15:11 |
rick_h_ | adham: anything you've got we'll take and look into. | 15:13 |
rick_h_ | adham: to be clear, we can't change/set the machine names as those come from MAAS. However, we should have noted with tags or metadata in MAAS what the machines are up to. | 15:14 |
cory_fu | adham: The only thing I see in that conjure-up.log file is 5 failed attempts to bootstrap. I could see that having created 5 new VMs in MAAS, but I can't possibly imagine how it would have created more than that. Does your MAAS have those created VMs still available? Can you see if there were any Juju-assigned tags on them? | 15:14 |
adham | rick_h_: https://bugs.launchpad.net/juju/+bug/1779161 | 15:18 |
mup | Bug #1779161: conjure-up kubernetes creates 70+ VMs on KVM managed by MAAS with funny names <juju:New> <https://launchpad.net/bugs/1779161> | 15:18 |
adham | cory_fu: I had a few VMs before Kubernetes --- I created this issue after deleting the VMs, when I could no longer conjure-up/down kubernetes. I thought those VMs were viruses, so I deleted them manually via the virtual machine manager and MAAS; doing that corrupted kubernetes and prevented conjure-up/down from working | 15:20 |
adham | this is the time when I uploaded the logs | 15:20 |
adham | someone explained to me the details of juju, and from here I was able to tear the rest of kubernetes down manually via juju, and then I redeployed and confirmed that those vms are from kubernetes | 15:21 |
adham | rick_h_: I don't really expect this >> "juju deploy canonical-kubernetes --lxd1: prettyname, --lxd2: prettyname..." because it doesn't make sense | 15:22 |
adham | but given that instance i-04c41c1309bde47d4 got the tag juju-machine-id=conjure-kubernetes-core-0a9-machine-0, I would expect the machine name to be conjure-kubernetes-core-0a9-machine-0 if the name is left blank | 15:22 |
adham | this way it should (hopefully) prevent MAAS from giving those un-named machines pet names | 15:23 |
adham | cory_fu, are you still here? | 15:25 |
adham | rick_h_ are you still here? | 15:27 |
rick_h_ | adham: still here. But it's the work day so will go in/out sometimes with calls/etc. | 15:28 |
adham | ahh, no, it's alright, just making sure that both of you got everything and at least we're all linked together on the bug ticket | 15:29 |
rick_h_ | adham: so we'll have to see. Typically with MAAS, machine names are set up long before Juju gets the machine. With this support for kvm-pods, if MAAS now allows that to be tweaked over the API, we'll have to see what changes Juju needs to work with it. | 15:29 |
adham | I am going to tear down the current kubernetes installation | 15:29 |
rick_h_ | adham: right, I've fired off an initial email asking whether anyone on the team has played with the pods stuff, and honestly we'll have to find a block of time to set that all up and see how it works | 15:29 |
rick_h_ | we've not used it yet | 15:29 |
adham | And try >> "cory_fu> adham: I think one thing that isn't clear from that tutorial when using MAAS with conjure-up is that once you have a pod available, if you don't tell conjure-up what pre-allocated machines to assign each unit to by providing a (if I recall correctly) tag=<machine-tag> constraint on the CONFIGURE APPLICATIONS screen of conjure-up (you have to select Configure for each application), then Juju will assume you want new machines and will ask MAAS to" | 15:30 |
cory_fu | adham, rick_h_: I can't seem to subscribe myself to the Juju bug on LP | 15:37 |
cory_fu | adham, rick_h_: Can one of you please try subscribing johnsca (Cory Johns) to that bug? | 15:38 |
kwmonroe | cory_fu: you show up as subscribed to me | 15:38 |
cory_fu | adham: And yes, sorry I got pulled away for a minute as well | 15:38 |
kwmonroe | under the "Notified of all changes" heading is Cory Johns | 15:39 |
cory_fu | kwmonroe: Oh, ok. It's not showing for me. That's fine then | 15:39 |
adham | I subscribed you | 15:39 |
adham | can you pls check cory_fu | 15:39 |
rick_h_ | adham: so I don't understand what you mean by "And try..." with cory's quote | 15:39 |
adham | I'm going to see what happens if I set the constraints | 15:40 |
cory_fu | adham: I guess it just doesn't show me to myself. :p | 15:40 |
adham | I will avoid the MAAS autonaming | 15:40 |
rick_h_ | adham: I can tell you that's true. Again, MAAS is allocating machines on the fly using a virtual machine setup, so they won't be reused unless you specify a placement constraint | 15:40 |
cory_fu | stokachu: Can you please confirm the correct syntax for setting a constraint to target a specific machine in conjure-up? | 15:40 |
cory_fu | stokachu: Also, how do you handle the case where there are multiple units, like for kubernetes-worker? | 15:40 |
cory_fu | adham: Please watch for stokachu's reply to ^, because I'm not certain of the correct syntax | 15:42 |
adham | thanks cory_fu, will do, it's 1:43 AM here, I might go to sleep soon as I have work tomorrow at 8 AM | 15:43 |
adham | I can't really keep my eyes open anymore | 15:43 |
cory_fu | adham: I completely understand. | 15:44 |
adham | I am actually talking to you guys from Australia | 15:44 |
cory_fu | adham: One of the Juju folks that you've spoken with in the past, yeah, the ones from Australia, could also tell you the constraint syntax to target MAAS machines, since it's actually a Juju constraint that's being specified | 15:45 |
adham | wallyworld is the one I spoke to and who advised me to reach out to conjure-up team | 15:45 |
adham | I think it's confusing probably or I'm not doing good in explaining | 15:46 |
adham | my apologies, guys, for the hassle | 15:46 |
adham | is it possible that we can use the bug ticket for discussion and mentioning? It would be really great if stokachu posted an update there | 15:46 |
adham | ? | 15:46 |
cory_fu | adham: Yes, I'll comment on there with my understanding of the issue as well | 15:47 |
adham | thanks cory_fu | 15:47 |
adham | goodnight | 15:47 |
stokachu | it's "tags=<tagname>" | 16:08 |
stokachu | see https://docs.jujucharms.com/2.3/en/reference-constraints | 16:08 |
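Combining this with cory_fu's earlier note: tag the pre-allocated MAAS machines, then pass that tag as a Juju constraint (in conjure-up's CONFIGURE APPLICATIONS screen, or directly on the CLI). A hedged example, where "k8s-worker" is an invented tag name:

```
# Deploy only onto MAAS machines carrying the "k8s-worker" tag.
juju deploy kubernetes-worker --constraints tags=k8s-worker
```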
magicaltrout | kwmonroe: https://www.dropbox.com/s/jtcterg4ft7f09z/Screenshot%20from%202018-06-28%2017-26-15.png?dl=0 | 16:29 |
magicaltrout | we've got some catching up to do ;) | 16:29 |
rick_h_ | magicaltrout: :) | 16:37 |
rick_h_ | zeestrat: is this your stuff turned tutorial? https://tutorials.ubuntu.com/tutorial/tutorial-charm-development-part1#0 | 19:29 |
rick_h_ | bdx: ^ might be cool for your new folks when you bring them in | 19:32 |
zeestrat | rick_h_: no, sir. What tipped you off? Looks nice though. | 19:40 |
rick_h_ | zeestrat: no? I thought you were working with folks on putting your docs stuff into a tutorial | 19:41 |
rick_h_ | zeestrat: I saw it shared from the twitter account actually. https://twitter.com/ubuntu/status/1012417625685708800 | 19:41 |
zeestrat | Nope. I know our boy Lönroth has been working on some docs/tutorial stuff so he might know. | 19:47 |
rick_h_ | zeestrat: ok well figured I'd check | 19:48 |
zeestrat | Thanks for the consideration :) looks like the tutorial page could use some authorship details and perhaps a link to the source. | 19:49 |
PatrickD_ | Hi guys, trying to install Kubernetes right now, and it seems the Kubernetes Charms are using series xenial. Any idea if it would work with Bionic ? | 20:15 |
tvansteenburgh | PatrickD_: yeah it should. that likely won't be the default until after the 18.04.1 release | 20:18 |
PatrickD_ | What's the easiest way to force it to Bionic ? | 20:21 |
pmatulis | juju deploy cs:bionic/<charm> ? | 20:24 |
tvansteenburgh | PatrickD_: trying to figure that out. i thought you could do `juju deploy canonical-kubernetes --series bionic --force`, but it appears those args only work on individual charms, not bundles | 20:26 |
PatrickD_ | yeah, we also tried that :) | 20:32 |
tvansteenburgh | PatrickD_: for now i'm afraid you'll have to deploy the charms individually so you can use `--force --series bionic` | 20:33 |
tvansteenburgh | PatrickD_: raw bundle is here: https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml | 20:33 |
tvansteenburgh | that'll show you what charms and relations you need | 20:33 |
PatrickD_ | Or I use Xenial, and find a way to use a 4.12+ kernel (drivers for Dell 640 requirement). Any easy way to do that ? | 20:34 |
tvansteenburgh | PatrickD_: where are you deploying? public cloud? | 20:36 |
PatrickD_ | on MAAS | 20:37 |
tvansteenburgh | PatrickD_: i'm not exactly a MAAS expert, but it seems like you could create a xenial image with the kernel you want | 20:39 |
tvansteenburgh | PatrickD_: also `juju model-config -h | grep cloudinit -C3` | 20:45 |
tvansteenburgh | could potentially upgrade kernel that way. haven't tried it. | 20:46 |
rick_h_ | PatrickD_: honestly charm pull the bundle and edit the default series on it to bionic. | 20:47 |
rick_h_ | PatrickD_: otherwise we rely on the charm default, as the author suggests series X. No way around it without editing the bundle, because otherwise it's a lot of assuming that each charm in the bundle supports the series | 20:47 |
tvansteenburgh | rick_h_: that won't work | 20:49 |
tvansteenburgh | rick_h_: we don't have bionic in the charms' series list yet | 20:49 |
rick_h_ | tvansteenburgh: oh, at all? I gotcha. | 20:49 |
rick_h_ | tvansteenburgh: yea, then the "bundle" isn't ready for that yet heh. | 20:49 |
PatrickD_ | yeah, trying to use bionic kernel in xenial, which would be just fine. | 20:50 |
rick_h_ | PatrickD_: in that case I'd deploy the bundle and then juju run the commands to get the new kernel in place/reboot | 20:52 |
rick_h_ | PatrickD_: vs bootstrapping the whole other series (if just the kernel is what you're after) | 20:52 |
PatrickD_ | Also have an issue with MAAS zones in Juju. we have 3 zones (default, A and B). It either goes to default or B, but I want it to go to A... Any way to specify MAAS zone when deploying the bundle ? | 21:28 |
rick_h_ | PatrickD_: I thought zones were meant to be like AZ in public clouds so that they were rotated/spread across for fault tolerance. | 21:31 |
rick_h_ | PatrickD_: so Juju should rotate them as you add-unit. | 21:31 |
rick_h_ | PatrickD_: that said, you might try a constraint of zone=xxx | 21:31 |
rick_h_ | but not sure on that tbh | 21:31 |
rick_h_ | PatrickD_: I'd start with a juju deploy xx --constraints zone=yyy first | 21:32 |
rick_h_ | and see if Juju will do that | 21:32 |
PatrickD_ | constraint doesn't work, but what you say makes sense... maybe I should remove unused zones then. | 21:32 |
PatrickD_ | considering that there are zero machines in default and B, it's a bit strange that it wants to go to B | 21:33 |
rick_h_ | PatrickD_: hmm, yea that's not right | 21:45 |
thumper | we don't currently support zone placement in bundles | 22:13 |
thumper | we don't have zone as a constraint | 22:14 |
thumper | we have talked about it before | 22:14 |
thumper | a key here is that maas zones are quite different to other cloud zones | 22:14 |
thumper | tvansteenburgh, PatrickD_: I *think* you could specify a bundle overlay to change the default series | 22:16 |
* thumper takes a look | 22:16 |
tvansteenburgh | thumper: sure, if the charm supports the series, but it doesn't | 22:16 |
thumper | ah | 22:16 |
thumper | hmm... | 22:16 |
thumper | yeah | 22:16 |
thumper | fair call | 22:16 |
tvansteenburgh | PatrickD_: fwiw i'm working on adding bionic to the charms, but it won't be done today | 22:17 |