/srv/irclogs.ubuntu.com/2018/06/28/#juju.txt

00:17 <wallyworld> thumper: can haz review? https://github.com/juju/juju/pull/8867
00:31 <babbageclunk> veebers: I made the mistake of trying to use go-guru-callers; it is taking a long time and making my computer very sad.
00:34 <veebers> babbageclunk: hah, yeah, I don't think I've had anything useful from that before. I usually need to kill it before long
00:36 <babbageclunk> I think I might just be setting the scope wrong...
00:40 <veebers> babbageclunk: I *think* the scope should be something like github.com/juju/juju/... but I'm not 100% certain
00:52 <babbageclunk> veebers: yeah - from the docs it sounds like it should be github.com/juju/juju/cmd/jujud - the package that has the main (which is the starting point). I guess the problem is that it's whole-program analysis for a too-big program, at least for my computer.
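
[A minimal sketch of narrowing the guru scope as discussed above, assuming go-guru-callers shells out to the guru tool; the file path and byte offset below are hypothetical:]

    # Restrict analysis to the jujud main package and its deps instead of
    # everything under GOPATH; 'callers' is still whole-program within that
    # scope, which is why overly large scopes are so slow.
    guru -scope github.com/juju/juju/cmd/jujud callers \
        $GOPATH/src/github.com/juju/juju/worker/example.go:#1234
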
00:53 <veebers> babbageclunk: get more computers
00:54 <veebers> babbageclunk: heh, yeah. I would be interested in how well it works with a smaller project etc. I notice a bit of slowdown when I change branches, as it compiles bits to give me completion etc.
=== chigang__ is now known as chigang
02:59 <thumper> wallyworld: with you shortly, finishing with IS
03:07 <thumper> wallyworld: omw now
03:17 <thumper> wallyworld: bug 1778970
03:17 <mup> Bug #1778970: offer will not leave model <juju:New> <https://launchpad.net/bugs/1778970>
06:18 <kelvin_> wallyworld, would you mind taking a look at these PRs when you have time: https://github.com/juju/charm/pull/249/files https://github.com/juju/jujusvg/pull/56/files https://github.com/juju/charmstore/pull/808/files ? thanks
06:32 <wallyworld> kelvin_: sure, will do
06:32 <wallyworld> vino: want to join HO again?
06:32 <vino> yes.
06:39 <wallyworld> kelvin_: with the svg PR, you should wait to land the latest charm.v6 change and use the dep from that one
06:41 <kelvin_> wallyworld, yes, charm.v6 is the dep for all the others, and I will do juju last, after all of these have landed
06:44 <kelvin_> wallyworld, and one more for bundlechanges please, thanks: https://github.com/juju/bundlechanges/pull/41/files
06:45 <kelvin_> i will update dependencies.tsv for it.
06:50 <wallyworld> kelvin_: and for svg as well
06:50 <wallyworld> even though the PR is already proposed
06:55 <wallyworld> kelvin_: just in a meeting, will finish looking soon
07:01 <kelvin_> wallyworld, thanks
07:07 <thumper> wallyworld, vino: team standup?
07:08 <wallyworld> thumper: in a meeting which was delayed
07:08 <wallyworld> will try and finish soon
09:49 <manadart> Anyone ever get this when building Juju? "readSym out of sync".
09:50 <stickupkid> manadart: never seen that before
10:58 <manadart> Need a review for constraints (cores, mem, instance type) support for LXD container machines: https://github.com/juju/juju/pull/8869
11:02 <stickupkid> manadart: looking now
11:03 <stickupkid> manadart: looks really clean...
11:05 <manadart> stickupkid: I finished it and then rewrote it. This way it's a one-liner to apply it in the provider.
11:05 <stickupkid> manadart: mine's merging now
11:06 <manadart> stickupkid: Nice. Want to sync up after lunch?
11:06 <stickupkid> manadart: hell yeah
11:15 <stickupkid> manadart: you've got a failure in your PR; it looks like an intermittent failure so I've just done a retry
12:27 <manadart> jam, stickupkid: Looking for a review of https://github.com/juju/juju/pull/8862.
12:27 <manadart> Going to land https://github.com/juju/juju/pull/8869 when it goes green.
12:38 <rathore> trying to deploy openstack-lxd but having issues; keystone never completes the database relation even when the mysql/0 unit is ready, and ceph-mon/0,1,2 get stuck in "Bootstrapping MON cluster"
12:38 <rathore> any suggestions on how I can fix it?
13:34 <jam> manadart: quid pro quo? https://github.com/juju/juju/pull/8871
13:34 <manadart> jam: Deal.
13:41 <manadart> stickupkid: Want to jump on the guild HO?
13:42 <stickupkid> yeah
13:42 <stickupkid> ah, someone already there
13:46 <jam> manadart: reviewed 8862
13:46 <manadart> jam: Many thanks.
13:47 <manadart> jam: Approved yours too.
14:13 <jam> rick_h_: question for you about the build/release/etc process.
14:13 <jam> with 2.3 being an LTS, I'd like to keep the 2.3 branch up to date and merge those changes into the 2.4 branch. However, as we are really close to a 2.4.0, should I wait to merge a 2.3 patch into 2.4?
14:14 <jam> even if it is effectively a no-op? (it does have some small, e.g. line-numbers-in-a-diff, changes)
14:14 <jam> I do think that we generally want 2.3 => 2.4 => develop, so we know that any patches that we *do* backport to 2.3 are fully applied to new code bases.
14:16 <jam> anyway, I have https://github.com/juju/juju/pull/8872 that potentially updates 2.4, but I'm happy to wait until 2.4.0 is cut to actually merge that.
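
[For context, the forward-merge flow jam describes would look roughly like this; the branch names come from the discussion, but the exact commands are an assumption, not quoted from the channel:]

    git checkout 2.4
    git merge 2.3        # fixes backported to 2.3 flow into 2.4
    git checkout develop
    git merge 2.4        # and from 2.4 into develop
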
14:16 <jam> have a good weekend all
14:22 <rick_h_> jam: definitely wait atm
14:22 <rick_h_> jam: it'll have to go into 2.4.1
14:23 <rick_h_> have a good weekend jam
14:24 <w0jtas> hi, i have a fresh deployment of openstack using juju charms / conjure-up but i cannot start the first instance. in the logs i see: "Instance failed to spawn: ImageNotFound: Image  could not be found." what's weird is that the image name looks to be missing
14:25 <adham> conjure-up deploys kubernetes (the CDK bundle) without setting the machine names, so MAAS auto-picks a random name for each machine from the pet-names library. is there a way we can have conjure-up use a naming convention?
14:25 <adham> I was in the #conjure-up channel and they redirected me here
14:28 <kwmonroe> adham: we're here too. no, you can't have conjure-up set machine names
14:28 <kwmonroe> because juju can't set machine names
14:28 <kwmonroe> because juju doesn't care about machine names
14:28 <rick_h_> w0jtas: starting the first instance in which way?
14:29 <adham> kwmonroe, I have 70+ machines created by conjure-up kubernetes; if you saw my MAAS window you'd see how many funny names are there
14:29 <rick_h_> kwmonroe: lol, just realized the pet-names library is called "pet" when we say to stop keeping pets (servers) and start driving cattle.
14:29 <adham> you would definitely reconsider the deployment here
14:30 <adham> conjure-up & juju are great tools but honestly this little issue is destroying their greatness
14:30 <rick_h_> adham: the more machines the better. You shouldn't ever really care about the machines, but about what's running on them. Juju takes the application/task-based view of the world, so machines are expendable little things that you can reallocate all the time.
14:30 <rick_h_> adham: can I ask why the names are important? What task/etc are you doing that is driving you to reference the machines individually?
14:30 <w0jtas> rick_h_: in horizon i want to start the first instance with ubuntu 16.04 using lxd
14:30 <adham> rick_h_, the machines on MAAS get no tag, no definition; they just have pet names and you cannot distinguish which machine is which
14:31 <adham> it would make sense if we had, for example, lb1, controller1, master1 - something relevant
14:31 <adham> but not blank
14:31 <rick_h_> adham: oh, we definitely encourage putting tags on your maas machines so that you can target machines for storage/networking/etc.
14:31 <adham> you want me to tag 70+ machines that were created by conjure-up/juju?
14:31 <rick_h_> w0jtas: do you have the images loaded into glance? I'm not sure how that's pre-seeded in an OpenStack install. You might check with the OpenStack folks.
14:32 <rick_h_> adham: no, in MAAS you do it once and you don't have to redo it/etc. It's just part of setting up the machine infrastructure. Maybe I'm missing where you're heading there
14:32 <w0jtas> rick_h_: i have 2 ubuntu images to choose from, 14.04 and 16.04, when creating instances
14:32 <rick_h_> adham: so Juju supports leveraging/using MAAS tags.
14:32 <adham> here is a sample of how the machine names look on my MAAS >> aware-code aware-yak casual-corgi casual-whale casual-mole clear-hound close-liger cool-troll decent-beetle divine-bug driven-drake easy-cod equal-frog equal-swan exotic-earwig expert-cow expert-slug fair-bee first-dog frank-monkey gentle-racer good-koi grown-bunny guided-eft handy-wahoo hip-hornet holy-bass holy-hen intent-bear large-kit
14:32 <adham> how can I know which one is, say, the load balancer, and which one is the controller?
14:33 <rick_h_> adham: by looking at juju and saying "juju ssh load-balancer/0"
14:33 <rick_h_> using the task-based approach
14:33 <rick_h_> adham: the machine names are part of the MAAS setup when you commission the machines though.
14:33 <rick_h_> adham: for instance, in my maas I have nuc1 through nuc8
14:33 <rick_h_> juju/conjure-up doesn't really care about the maas machine name
14:34 <adham> during conjure-up, I'm using our MAAS as the cloud
14:34 <rick_h_> adham: right, understood. But the names of the machines in MAAS come from commissioning in MAAS, before you ever run conjure-up
14:35 <adham> Yes, if you commission a pod that doesn't have a name
14:35 <rick_h_> adham: conjure-up and juju don't change or modify the maas machine names at all
14:35 <adham> or a machine, I mean
14:35 <rick_h_> adham: conjure-up doesn't commission the machine. That's done ahead of time when adding hardware to MAAS
14:35 <adham> those machines did not show up until I ran conjure-up, because they are actually the kubernetes machines; if I deleted those machines, kubernetes would go down
14:36 <rick_h_> adham: so I'm failing to grok that statement there
14:36 <adham> I'm confused to be honest
14:36 <adham> how can I make the two of them cooperate, or should I install kubernetes away from MAAS?
14:36 <rick_h_> adham: so you have a maas, with nothing running on it. And you go to the list of nodes. Each node has a name. That name is the machine name. Before kubernetes, conjure-up, juju, or anything else is involved.
14:37 <rick_h_> adham: did you conjure-up kubernetes onto your MAAS?
14:37 <adham> yes
14:37 <w0jtas> rick_h_: anything i should check on my setup? it's a fresh conjure-up openstack / lxd setup, my first attempt, so i am a newbie here :(
14:37 <adham> and our MAAS has VMs and machines on it already
14:37 <rick_h_> adham: ok, before you ran conjure-up you had a MAAS setup and that MAAS had X machines commissioned into it
14:37 <adham> yes
14:37 <rick_h_> adham: and when you go to the node list you see those names
14:38 <adham> the funny ones are only there after the kubernetes deployment
14:38 <rick_h_> w0jtas: sorry, I don't know enough about openstack/glance to diagnose. I have to suggest checking out the openstack charms irc channel/mailing list.
14:38 <rathore_> anyone: how can I get 2 different configs of a charm installed on the same machine? I have different configurations of the ceph-osd charm for 2 types of servers. Thanks
14:38 <adham> but nothing changed for our VMs and machines (where they have proper names)
14:38 <rick_h_> adham: are the funny ones VMs registered in MAAS, and not the original MAAS machines then?
14:39 <adham> they are not original maas machines
14:39 <rick_h_> rathore_: so you have to deploy them as two different applications
14:39 <adham> they were made by conjure-up
14:39 <rick_h_> rathore_: juju deploy ceph-osd ceph-mode1
14:39 <rick_h_> rathore_: juju deploy ceph-osd ceph-mode2
14:39 <stokachu> w0jtas: what's wrong?
14:39 <rathore_> rick_h_: Thanks a lot
14:40 <rick_h_> rathore_: np, if you need different configs then you'll want to log/perform other operations/scale them differently, so it's just reusing the same charm for two purposes.
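
[A minimal sketch of the pattern rick_h_ describes: one charm deployed as two applications, each with its own config. The application names and osd-devices values here are hypothetical:]

    # Two applications from the same charm, configured independently
    juju deploy ceph-osd ceph-osd-ssd
    juju deploy ceph-osd ceph-osd-hdd
    juju config ceph-osd-ssd osd-devices='/dev/nvme0n1'
    juju config ceph-osd-hdd osd-devices='/dev/sdb /dev/sdc'
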
14:40 <w0jtas> stokachu: i installed openstack using conjure-up / lxd and now in horizon i want to run the first instance, but it's failing and in the node logs i see "Instance failed to spawn: ImageNotFound: Image  could not be found."
14:40 <rick_h_> adham: ok, so conjure-up created some VMs with pet-names that are now registered in MAAS somehow?
14:40 <adham> correct
14:40 <rick_h_> adham: is this the MAAS "devices" list or the node list?
14:41 <stokachu> w0jtas: ok, so a glance issue is happening
14:41 <rathore_> rick_h_: Cool. I will try it out
14:41 <adham> When we first saw this, we thought it was spam or a virus that came with kubernetes, so we deleted them, and the kubernetes deployment went offline
14:41 <adham> we deleted kubernetes completely and brought down the controller
14:41 <w0jtas> stokachu: how do i check glance's condition then? any status debug or whatever?
14:41 <stokachu> w0jtas: sec
14:41 <rick_h_> adham: ouch, yea, ok. I'm guessing this is on the devices list vs the machine list?
14:41 <stokachu> https://github.com/conjure-up/spells/blob/master/openstack-novalxd/steps/01_glance/glance.sh
14:42 <stokachu> w0jtas: ^
14:42 <adham> after discussion with the conjure-up and juju folks, we understood that this is normal and caused by the deployment, because no machine names are given
14:42 <stokachu> that's basically what you need to run to import the images; maybe that failed somewhere. can you check in ~/.cache/conjure-up/openstack-novalxd/steps
14:42 <adham> we then tried to redeploy kubernetes, and here we see again the list of funny names, 70+ machines
14:42 <adham> those names can be seen from MAAS
14:43 <rick_h_> adham: ok, sorry, I'm catching up. So these are probably the lxd containers created for the k8s cluster, registered in MAAS as devices.
14:43 <adham> and if we list the VMs from the terminal on linux
14:43 <rick_h_> adham: gotcha
14:43 <adham> we still see those funny names, almost everywhere
14:44 <rick_h_> adham: so can you confirm that in MAAS you go to the nodes page and there's the table. At the top of the table are filters like "12 Machines    34 Devices    1 Controller"
14:44 <rick_h_> adham: and that the funny names only show up in the Devices filter?
14:44 <w0jtas> stokachu: where should i find the glance.log file?
14:44 <adham> yes
14:44 <adham> and also in the vm list from the terminal (outside maas) when listing the VMs
14:45 <adham> Actually it's machines, not devices, that I'm referring to
14:45 <adham> sorry, my apologies: nodes
14:45 <stokachu> w0jtas: ~/.cache/conjure-up/openstack-novalxd/steps/01_glance
14:45 <adham> I currently do not have access to the MAAS
14:45 <adham> seems like the kubernetes deployment has taken over the local load balancer on the same server
14:46 <stokachu> w0jtas: you can also juju ssh nova-cloud-controller/0
14:46 <stokachu> source the credentials file there
14:46 <stokachu> and perform glance commands
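
[A sketch of what that might look like; the credentials file name is an assumption, as it varies by install:]

    juju ssh nova-cloud-controller/0
    source ~/novarc      # or wherever the spell left the credentials
    glance image-list    # an empty list would explain the ImageNotFound error
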
14:46 <adham> Is there a way I can disable the kubernetes load balancer and keep the original load balancer on the server as the default?
14:47 <rick_h_> kwmonroe: ^ wasn't there something about leveraging external cloud bits?
14:47 <rick_h_> adham: will ask the k8s experts. I don't think so though, because the charms are a combined solution tested together to work, so pulling it apart would potentially break stuff.
14:49 <adham> thx rick_h_, I'm in the kubernetes channel talking with them
14:49 <rick_h_> adham: ok
14:49 <adham> but can you please tell me if it's possible at all to have juju or conjure-up set names for the machines?
14:49 <adham> rather than leaving them blank and forcing MAAS to assign funny names to them?
14:49 <rick_h_> adham: the other question is whether there's any way to get conjure-up to use non-petnames for the containers created. I'm not sure how that is set up.
14:50 <rick_h_> yea, I'm not sure if conjure-up is asking maas to name them or is providing names for them. I've not registered VMs in MAAS like that.
14:51 <adham> i'm trying to bring in cory_fu from the conjure-up channel
14:51 <adham> as he's the one who redirected me here
14:51 <rick_h_> adham: it looks like add-device allows setting a name. So the question is how would you name them on deploy of the k8s cluster? I mean you don't want to be typing "juju deploy canonical-kubernetes --lxd1: prettyname, --lxd2: prettyname"...
14:52 <cory_fu> rick_h_: To clarify, it sounds like adham is using pods in MAAS such that the VMs are created on-demand, rather than the older way of doing things where all VMs are pre-created with specific resource sets and managed manually via MAAS
14:52 <cory_fu> In that scenario, the names are auto-generated by MAAS and conjure-up / juju has no way to influence them
14:52 <rick_h_> cory_fu: oh, the kvm pods stuff?
14:52 <cory_fu> Yes
14:52 <rick_h_> oh, I was wondering why I'd not run into this before
14:53 <adham> thx cory_fu, appreciated...
14:53 <cory_fu> rick_h_: My understanding is that this should function very similarly to the public cloud, where you have no control over the instance name / ID, but Juju should create tags in the metadata to indicate which Juju machine is running on that instance.
14:54 <rick_h_> adham: ok, so the bad news is I've got no path forward for you. I'd love it if you filed a bug on bugs.launchpad.net/juju and brought up names for kvm pods in MAAS, as that might be something we need to update Juju to supply at VM creation time, but I've not played with the pods stuff in MAAS yet.
14:54 <cory_fu> rick_h_, adham: For instance, on my k8s deployment on AWS, my instance i-04c41c1309bde47d4 got the tag juju-machine-id=conjure-kubernetes-core-0a9-machine-0
14:54 <rick_h_> cory_fu: right, exactly.
14:54 <rick_h_> cory_fu: so we'll have to set up something using pods, see what Juju does, and update anywhere we're not treating it correctly
14:54 <adham> https://stackoverflow.com/questions/50970133/installed-kubernetes-on-ubuntu-and-i-see-a-lot-of-nodes-are-getting-created-in-m
14:54 <cory_fu> rick_h_: At the end of the day, though, I think adham's real issue is that there were too many VMs created and he can't track down why or what role each is playing.
14:54 <adham> would this help?
14:56 <rick_h_> adham: a bit, but the key thing is how the MAAS is set up regarding the pods usage/etc.
14:56 <cory_fu> I'm really not sure why more than around 9 VMs would have been created unless conjure-up was run multiple times. That's one for the Juju controller, and one for each machine required by CDK
14:56 <rick_h_> adham: because the root thing is that this isn't a typical MAAS with bare metal machines going
14:57 <adham> exactly cory_fu: "<cory_fu> rick_h_: At the end of the day, though, I think adham's real issue is that there were too many VMs created and he can't track down why or what roles each is playing."
14:57 <cory_fu> adham: Do you still have your ~/.cache/conjure-up/conjure-up.log file available? That should have a record of everything that conjure-up did, including requesting new machines.
14:57 <adham> this is exactly what I'm going through
14:57 <rick_h_> adham: then where did the VMs come from? What's the "virtual machine manager" tool?
14:57 <adham> Luckily, I still have this: https://github.com/conjure-up/conjure-up/issues/1476
14:58 <adham> this issue happened (after deleting the machines, thinking they were a virus), but I'm over it
14:58 <w0jtas> stokachu: so i see an error on glance in neutron.log: keystoneauth1.exceptions.auth.AuthorizationFailure, then in keystone i see the error: Unable to load certificate - ensure you have configured PKI with "keystone-manage pki_setup"
14:58 <adham> this problem is no longer persisting, but if you are looking for the log files, it's all packaged there
14:58 <adham> (I use virtual machine manager to list the vms)
14:59 <rick_h_> adham: I'm trying to understand your setup so we can replicate it and diagnose why the tags about what the resources were used for aren't making it. You say you commissioned X bare metal nodes into a MAAS running somewhere, correct?
14:59 <adham> so rick, like cory_fu mentioned, it's KVM, but I use virtual machine manager for virsh
14:59 <w0jtas> stokachu: and the command is not working: keystone-manage: error: argument command: invalid choice: 'pki_setup'
15:00 <cory_fu> adham: I don't see any logs attached to that GitHub issue. Also, that connection error indicates that conjure-up tried to connect to a controller but couldn't, presumably because you had deleted the VM while Juju still had a record of it as a valid controller (in ~/.local/share/juju/controllers.yaml)
15:00 <adham> to replicate my environment >> follow >> https://tutorials.ubuntu.com/tutorial/create-kvm-pods-with-maas >> once you can commission machines successfully, you can proceed with conjure-up kubernetes; you will reproduce 100% what I have here
15:00 <rick_h_> adham: ok, so you have MAAS running and you used virtual-machine-manager to create VMs and registered those VMs in MAAS?
15:01 <rick_h_> adham: ty, that's what I needed.
15:01 <manadart> jam: In case it gets lost in the torrent of GitHub mail: I commented on the PR. Pinning a single CPU is done via range syntax - "limits.cpu=0-0".
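
[For illustration, that LXD range syntax applied directly with the lxc client; the container name here is hypothetical:]

    lxc config set mycontainer limits.cpu 0-0   # pin to CPU 0 only
    lxc config set mycontainer limits.cpu 0-3   # pin to CPUs 0 through 3
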
15:02 <adham> cory_fu, I ran juju unregister on the controller so I think this issue is fixed
15:02 <adham> hmm
15:02 <adham> let me check
15:03 <adham> there is only one controller there, which is the current one
15:03 <adham> I removed the previous controller
15:04 <adham> cory_fu, I thought I attached the logs; one moment, I'll double check
15:05 <adham> cory_fu, can you please recheck the issue?
15:05 <adham> I uploaded the logs
15:06 <cory_fu> adham: I think one thing that isn't clear from that tutorial when using MAAS with conjure-up is that once you have a pod available, if you don't tell conjure-up which pre-allocated machines to assign each unit to by providing a (if I recall correctly) tag=<machine-tag> constraint on the CONFIGURE APPLICATIONS screen of conjure-up (you have to select Configure for each application), then Juju will assume you want new machines and will ask MAAS to allocate them as it sees fit, which will likely lead to new machines being allocated from the pod. But I'm not certain about that because I don't have much experience with pods in MAAS
15:06 <rick_h_> adham: so looking at that tutorial: "... will be created and a random name will be assigned to it as soon as you hit the "Compose machine" button."
15:06 <rick_h_> cory_fu: right, Juju isn't pod-aware, so the thing is Juju just asks MAAS for a machine to use, and since they're generated I'll bet it just creates them
15:07 <rick_h_> cory_fu: I'll have to play with this and try it out. I've not used it yet.
15:07 <adham> I can retry with this
15:07 <adham> Guys, it's been a huge struggle to find someone I can talk to about this issue, someone who knows it and can help, at least with knowledge
15:08 <rick_h_> adham: hey, we're here most days. Happy to help, as this is going to get me to play with something new in MAAS I've not done yet.
15:08 <adham> do you guys mind if I email both of you in a group email with updates so we can continue discussing this?
15:08 <rick_h_> adham: sorry it took a bit to dig into what you had running there, but I think it's coming together
15:08 <rick_h_> adham: file a bug please. That's the email thread, and it'll let others see/find it/etc
15:09 <adham> that's fine, and even better for me
15:09 <adham> can you pls tell me where to file it
15:09 <adham> and cory_fu, can you please also watch this?
15:09 <cory_fu> adham: Of course
15:09 <adham> or stay in the loop, in case we need to refer back to conjure-up :D
15:09 <adham> thx
15:09 <adham> rick_h_, where can I file the bug?
15:10 <rick_h_> adham: bugs.launchpad.net/juju
15:10 <adham> do you need the logs as provided in the github issue?
15:11 <cory_fu> adham: Please do link to the GitHub issue and StackOverflow question, for context
15:13 <rick_h_> adham: anything you've got we'll take and look into.
15:14 <rick_h_> adham: to be clear, we can't change/set the machine names, as those come from MAAS. However, we should have noted with tags or metadata in MAAS what the machines are up to.
15:14 <cory_fu> adham: The only thing I see in that conjure-up.log file is 5 failed attempts to bootstrap. I could see that having created 5 new VMs in MAAS, but I can't possibly imagine how it would have created more than that. Does your MAAS have those created VMs still available? Can you see if there were any Juju-assigned tags on them?
15:18 <adham> rick_h_: https://bugs.launchpad.net/juju/+bug/1779161
15:18 <mup> Bug #1779161: conjure-up kubernetes creates 70+ VMs on KVM managed by MAAS with funny names <juju:New> <https://launchpad.net/bugs/1779161>
15:20 <adham> cory_fu: I had a few VMs before Kubernetes. I created this issue after deleting the VMs, when I couldn't conjure-up/down kubernetes anymore. I thought those VMs were viruses, so I deleted them manually via the virtual machine manager and MAAS; doing that corrupted kubernetes and prevented conjure-up/down from working
15:20 <adham> that is the time when I uploaded the logs
15:21 <adham> someone explained to me the details of juju, and from there I was able to tear the rest of kubernetes down manually via juju; I then redeployed and confirmed that those vms are from kubernetes
15:22 <adham> rick_h_: I don't really expect this >> "juju deploy canonical-kubernetes --lxd1: prettyname, --lxd2: prettyname"... because it doesn't make sense
15:22 <adham> but at least, given that instance i-04c41c1309bde47d4 got the tag juju-machine-id=conjure-kubernetes-core-0a9-machine-0 << I'd expect the machine name to be conjure-kubernetes-core-0a9-machine-0 if the name is set to blank
15:23 <adham> this way it should (hopefully) stop MAAS from giving those un-named machines pet names
15:25 <adham> cory_fu, are you still here?
15:27 <adham> rick_h_, are you still here?
15:28 <rick_h_> adham: still here. But it's the work day, so I will go in/out sometimes with calls/etc.
15:29 <adham> ahh, no, it's alright, just making sure that both of you got everything and that at least we're all linked together on the bug ticket
15:29 <rick_h_> adham: so we'll have to see. Typically with MAAS, machine names are set up long before Juju gets the machine. With this support for kvm-pods, if MAAS now allows that to be tweaked over the API, we'll have to see what changes Juju needs to work with it.
15:29 <adham> I am going to tear down the current kubernetes installation
15:29 <rick_h_> adham: right, I've fired off an initial email asking if anyone on the team has played with the pods stuff, and honestly we'll have to find a block of time to set that all up and see how it works
15:29 <rick_h_> we've not used it yet
15:30 <adham> And try >> "<cory_fu> adham: I think one thing that isn't clear from that tutorial when using MAAS with conjure-up is that once you have a pod available, if you don't tell conjure-up what pre-allocated machines to assign each unit to by providing a (if I recall correctly) tag=<machine-tag> constraint on the CONFIGURE APPLICATIONS screen of conjure-up (you have to select Configure for each application), then Juju will assume you want new machines and will ask MAAS to"
15:37 <cory_fu> adham, rick_h_: I can't seem to subscribe myself to the Juju bug on LP
15:38 <cory_fu> adham, rick_h_: Can one of you please try subscribing johnsca (Cory Johns) to that bug?
15:38 <kwmonroe> cory_fu: you show up as subscribed to me
15:38 <cory_fu> adham: And yes, sorry, I got pulled away for a minute as well
15:39 <kwmonroe> under the "Notified of all changes" heading is Cory Johns
15:39 <cory_fu> kwmonroe: Oh, ok. It's not showing for me. That's fine then
15:39 <adham> I subscribed you
15:39 <adham> can you pls check, cory_fu
15:39 <rick_h_> adham: so I don't understand what you mean by "And try..." with cory's quote
15:40 <adham> I'm going to see if I can set the constraints
15:40 <cory_fu> adham: I guess it just doesn't show me to myself. :p
15:40 <adham> I will avoid the MAAS autonaming
15:40 <rick_h_> adham: I can tell you that's true. Again, MAAS is allocating machines on the fly using a virtual machine setup, so they won't be reused unless you specify a placement constraint
15:40 <cory_fu> stokachu: Can you please confirm the correct syntax for setting a constraint to target a specific machine in conjure-up?
15:40 <cory_fu> stokachu: Also, how do you handle the case where there are multiple units, like for kubernetes-worker?
15:42 <cory_fu> adham: Please watch for stokachu's reply to ^, because I'm not certain of the correct syntax
15:43 <adham> thanks cory_fu, will do. it's 1:43 AM here, I might go to sleep soon as I have work tomorrow at 8 AM
15:43 <adham> I can't really keep my eyes open anymore
15:44 <cory_fu> adham: I completely understand.
15:44 <adham> I am actually talking to you guys from Australia
15:45 <cory_fu> adham: One of the Juju folks that you've spoken with in the past, yeah, the ones from Australia, could also tell you the constraint syntax to target MAAS machines, since it's actually a Juju constraint that's being specified
15:45 <adham> wallyworld is the one I spoke to, and he's the one who advised me to reach out to the conjure-up team
15:46 <adham> I think it's confusing, probably, or I'm not doing a good job of explaining
15:46 <adham> my apologies guys for the hassle
15:46 <adham> is it possible that we can use the bug ticket for discussion and mentions? It would be really great if stokachu posted an update there
15:47 <cory_fu> adham: Yes, I'll comment on there with my understanding of the issue as well
15:47 <adham> thanks cory_fu
15:47 <adham> goodnight
16:08 <stokachu> it's "tags=<tagname>"
16:08 <stokachu> see https://docs.jujucharms.com/2.3/en/reference-constraints
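
[A minimal sketch of that constraint in plain Juju terms; the tag name is hypothetical, and the machines would first need to carry that tag in MAAS:]

    # Only allocate MAAS machines tagged 'k8s'
    juju deploy kubernetes-worker --constraints tags=k8s
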
16:29 <magicaltrout> kwmonroe: https://www.dropbox.com/s/jtcterg4ft7f09z/Screenshot%20from%202018-06-28%2017-26-15.png?dl=0
16:29 <magicaltrout> we've got some catching up to do ;)
16:37 <rick_h_> magicaltrout: :)
19:29 <rick_h_> zeestrat: is this your stuff turned into a tutorial? https://tutorials.ubuntu.com/tutorial/tutorial-charm-development-part1#0
19:32 <rick_h_> bdx: ^ might be cool for your new folks when you bring them in
19:40 <zeestrat> rick_h_: no, sir. What tipped you off? Looks nice though.
19:41 <rick_h_> zeestrat: no? I thought you were working with folks on putting your docs stuff into a tutorial
19:41 <rick_h_> zeestrat: I saw it shared from the twitter account actually. https://twitter.com/ubuntu/status/1012417625685708800
19:47 <zeestrat> Nope. I know our boy Lönroth has been working on some docs/tutorial stuff, so he might know.
19:48 <rick_h_> zeestrat: ok, well, figured I'd check
19:49 <zeestrat> Thanks for the consideration :) looks like the tutorial page could use some authorship details and perhaps a link to the source.
20:15 <PatrickD_> Hi guys, trying to install Kubernetes right now, and it seems the Kubernetes charms are using series xenial. Any idea if it would work with bionic?
20:18 <tvansteenburgh> PatrickD_: yeah, it should. that likely won't be the default until after the 18.04.1 release
20:21 <PatrickD_> What's the easiest way to force it to bionic?
20:24 <pmatulis> juju deploy cs:bionic/<charm> ?
20:26 <tvansteenburgh> PatrickD_: trying to figure that out. i thought you could do `juju deploy canonical-kubernetes --series bionic --force`, but it appears those args only work on individual charms, not bundles
20:32 <PatrickD_> yeah, we also tried that :)
20:33 <tvansteenburgh> PatrickD_: for now i'm afraid you'll have to deploy the charms individually so you can use `--force --series bionic`
20:33 <tvansteenburgh> PatrickD_: the raw bundle is here: https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml
20:33 <tvansteenburgh> that'll show you what charms and relations you need
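
[For illustration, a sketch of deploying one charm that way; the charm URL is a guess, so take the real ones from the bundle.yaml above:]

    juju deploy cs:~containers/kubernetes-master --series bionic --force
    # ...repeat for each charm in the bundle, then add the relations it lists
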
20:34 <PatrickD_> Or I use xenial and find a way to use a 4.12+ kernel (a driver requirement for the Dell 640). Any easy way to do that?
20:36 <tvansteenburgh> PatrickD_: where are you deploying? public cloud?
20:37 <PatrickD_> on MAAS
20:39 <tvansteenburgh> PatrickD_: i'm not exactly a MAAS expert, but it seems like you could create a xenial image with the kernel you want
20:45 <tvansteenburgh> PatrickD_: also `juju model-config -h | grep cloudinit -C3`
20:46 <tvansteenburgh> you could potentially upgrade the kernel that way. haven't tried it.
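
[A sketch of that untried approach; cloudinit-userdata is the key the grep above surfaces, but the payload shape and package name are assumptions:]

    # Ask cloud-init to install the xenial HWE kernel on each new machine
    printf 'packages:\n  - linux-generic-hwe-16.04\n' > cloudinit.yaml
    juju model-config cloudinit-userdata="$(cat cloudinit.yaml)"
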
20:47 <rick_h_> PatrickD_: honestly, charm pull the bundle and edit the default series on it to bionic.
20:47 <rick_h_> PatrickD_: otherwise we rely on the charm default, as the author suggests series X. No way around it without editing the bundle, because it's a lot of assuming that each charm in a bundle supports a series
20:49 <tvansteenburgh> rick_h_: that won't work
20:49 <tvansteenburgh> rick_h_: we don't have bionic in the charms' series list yet
20:49 <rick_h_> tvansteenburgh: oh, at all? I gotcha.
20:49 <rick_h_> tvansteenburgh: yea, then the "bundle" isn't ready for that yet, heh.
20:50 <PatrickD_> yeah, trying to use the bionic kernel in xenial would be just fine.
20:52 <rick_h_> PatrickD_: in that case I'd deploy the bundle and then juju run the commands to get the new kernel in place/reboot
20:52 <rick_h_> PatrickD_: vs bootstrapping the whole other series (if just the kernel is what you're after)
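
[A minimal sketch of that post-deploy route; the HWE kernel package name is an assumption:]

    juju run --all "apt-get update"
    juju run --all "apt-get install -y linux-generic-hwe-16.04"
    juju run --all "reboot"   # reboot to pick up the new kernel
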
21:28 <PatrickD_> Also have an issue with MAAS zones in Juju. we have 3 zones (default, A and B). It either goes to default or B, but I want it to go to A... Any way to specify the MAAS zone when deploying the bundle?
21:31 <rick_h_> PatrickD_: I thought zones were meant to be like AZs in public clouds, so that they were rotated/spread across for fault tolerance.
21:31 <rick_h_> PatrickD_: so Juju should rotate them as you add-unit.
21:31 <rick_h_> PatrickD_: that said, you might try a constraint of zone=xxx
21:31 <rick_h_> but not sure on that tbh
21:32 <rick_h_> PatrickD_: I'd start with a juju deploy xx --constraints zone=yyy first
21:32 <rick_h_> and see if Juju will do that
21:32 <PatrickD_> the constraint doesn't work, but what you say makes sense... maybe I should remove the unused zones then.
21:33 <PatrickD_> considering that there are zero machines in default and B, it's a bit strange that it wants to go to B
21:45 <rick_h_> PatrickD_: hmm, yea, that's not right
22:13 <thumper> we don't currently support zone placement in bundles
22:14 <thumper> we don't have zone as a constraint
22:14 <thumper> we have talked about it before
22:14 <thumper> a key thing here is that maas zones are quite different to other cloud zones
22:16 <thumper> tvansteenburgh, PatrickD_: I *think* you could specify a bundle overlay to change the default series
22:16 * thumper takes a look
22:16 <tvansteenburgh> thumper: sure, if the charm supports the series, but it doesn't
22:16 <thumper> ah
22:16 <thumper> hmm...
22:16 <thumper> yeah
22:16 <thumper> fair call
22:17 <tvansteenburgh> PatrickD_: fwiw i'm working on adding bionic to the charms, but it won't be done today
