[00:20] Budgie^Smore, magicaltrout I'm not even sure Canonical still uses those (pretty awesome) orange boxes for OS training anymore, IIRC they now use remote servers from different providers [00:22] umbSublime pretty sure you are right there, those were still cool though [00:22] heck yah [00:23] they had me drooling when they first came out :p [06:47] The orange boxes tend to get used where network connectivity is limited, cause they can be flown in with archive mirrors and whatnot prepopulated [07:25] Good morning Juju world! [11:16] Hello, can i run juju within a lxd when using MAAS as its cloud-provider? [11:16] i mean the juju bootstrap [11:16] so, i want to bootstrap juju within a lxd container and use maas for the rest [11:16] currently i'm creating a kvm instance and run the bootstrap on that [11:29] BlackDex: not at this time. One "provider" per controller. [11:34] and if i create a lxd and let it be detected by maas? Or isn't that an option? [12:23] BlackDex: yes, if you load a vm of any sort into maas so that it's the maas api/provider that's in use you're ok [12:30] ok rick_h Thx, i will try that :) [13:10] Howdy folks .. I just tried to deploy kubernetes to Azure cloud .. And it is stuck .. I see 4 deployments have failed in Azure portal :/ [13:11] Can anyone help me debug this or get it working [13:18] kim0: i can certainly try to lend a hand here. What units are failed and how did they fail? [13:32] lazyPower: actually not units .. I think it's machines that failed [13:32] kim0 ah, so azure failed to give you the machines? [13:32] https://www.irccloud.com/pastebin/r7A8WpgV/ [13:32] yes .. the deployment to Azure failed [13:32] that certainly looks indicative that azure didn't give you the vms. [13:32] is there a way to delete those VMs and re-deploy them [13:33] you can try juju retry-provisioning 0 [13:33] same with 1 [13:33] interesting Ok [13:33] and it should re-request the machines from azure [13:33] I had done juju add-machine [13:33] but you may have to wait for them to enter a failed state [13:33] but it's not appearing in juju status for some reason [13:33] i don't think it will retry if they are marked as down [13:34] $ juju show-machine 0 [13:34] model: gulfk8s [13:34] machines: [13:34] "0": [13:34] juju-status: [13:34] current: down [13:34] message: agent is not communicating with the server [13:34] since: 04 May 2017 14:05:27+02:00 [13:34] instance-id: machine-0 [13:34] machine-status: [13:34] current: provisioning error [13:34] message: Failed [13:34] It says .. "Provisioning error" I guess juju should know it has failed :) [13:36] I tried "juju retry-provisioning 0" .. but as you expected, it's not retrying [13:36] any way to force-fail the old machines or something [13:36] it's been an hour really .. so I guess it might never fail them [13:39] lazyPower: Any ideas ^ .. Thanks! [13:40] kim0 I don't, the best i could offer would be to remove the model and try again. [13:41] yeah .. I tried deploying this a couple of months back, and got the same VM provisioning failure .. so a little :/ [13:41] I can try one more time
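A minimal recap of the recovery flow discussed above (the machine numbers are the ones from this session; note that retry-provisioning only acts once juju has marked a machine as failed):

    juju show-machine 0           # inspect: look for "provisioning error" / "agent is not communicating"
    juju retry-provisioning 0 1   # ask the cloud for the failed machines again
    juju status                   # watch for the machines to leave the error state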
[13:44] kim0: do you have any restrictions on your account or anything similar? [13:45] kwmonroe i know you have the broadest wealth of knowledge about our azure provider [13:45] kwmonroe do you see failed provisioning often? [13:47] kwmonroe .. The error Azure portal gives me is like "PutVirtualNetworkOperation xxxx was cancelled and superseded by PutVirtualNetworkOperation yyy. (Code: CancelledAndSupersededDueToAnotherOperation)" [13:47] To me .. this looks more like juju's fault than Azure's [13:52] yup lazyPower.. bug 1673847 is the reference bug (kim0 has already commented). [13:52] Bug #1673847: azure does not handle failed deployment well (juju-2.1.2) [13:52] kwmonroe fantastic. Thanks for confirming [13:52] :( [13:53] well poo, won't fix on 2.1, which means you won't see it in this series, but 2.2 is right around the corner [13:53] Is it already fixed in 2.2 devel ? can I try that ? [13:54] I don't see anything related to "fix committed" which tells me it's still present in 2.2 devel [13:54] kwmonroe: Thanks for confirming! Any advice to manually work around this issue ? [13:55] The bug OP says he hits this .. 1 out of 10 times .. to me, I hit it the 2 times I tried juju :) [13:55] If I should just destroy the model & retry, I can do that too [13:58] I tried "juju add-machine" which seems to work .. The machine is started in azure portal .. but it is not listed under my k8s model .. not sure why [13:58] I was hoping to replace the ill machines with healthy ones [14:02] kim0: my workaround (i'm the bug OP btw) is to "juju add-unit <application>" on anything that juju status reports as "provisioning error". [14:02] kim0: it's not pretty, but it keeps any relations intact and is a bit faster than totally removing the application and re-adding it with new relations later. [14:03] aha .. so that will allocate a new VM and [14:03] deploy on it [14:03] correct kim0 [14:03] my only worry, if unit/0 is "special" [14:03] Ok .. easyrsa/0 was down .. trying to add a unit on that [14:04] kim0: it really shouldn't be.. there's no guarantee that /0 was the first unit to become ready (ie, you may have deployed 2 units at the same time and unit/1 just happened to beat /0 to ready state) [14:04] kim0: so things like charm leadership and even tests should not be relying on /0 as any indication of "first" or "leader" or whatever. [14:04] For example .. etcd/0 is on a broken machine .. while etcd/1 & /2 are on working machines .. However it seems they are blocked waiting for etcd/0 ! [14:05] Nice .. easyrsa/1 is now healthy! [14:05] kwmonroe: should I somehow remove the old easyrsa/0 ? [14:05] kim0: i'm not too familiar with etcd -- lazyPower, is there something special about etcd/0? [14:05] * lazyPower reads backscroll [14:06] kim0 kwmonroe - shouldn't be blocked waiting on etcd/0. Which unit has the asterisk next to its name? [14:07] kim0: i do remove the failed units because i can't stand being reminded of them every time i type juju status ;) i think the syntax is "juju remove-unit easyrsa/0", but if that doesn't work, "juju remove-machine <machine>" should do it. [14:07] i would presume etcd/1* if it was the first to come active which would earmark it as the leader for the rest of the units coming online. [14:07] lazyPower: etcd/2 has the * [14:07] is the "every unit on a separate VM" the norm in juju world ? [14:08] kim0 - can you provide more detail as to where it appears they are waiting on etcd/0? [14:09] mm it was just saying "blocked" .. since then I add-unit one etcd .. and now it says ready! [14:09] Ok, it should only be blocked if it's missing the easyrsa details [14:10] that was indeed the case .. I had also added unit on easyrsa [14:10] it requires TLS certs to operate as of about a year ago. I flipped the switch to disallow insecure connections [14:10] sorry I don't know the order of events :) [14:10] All good kim0 - it's a learning experience :) [14:10] exactly
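kwmonroe's replace-rather-than-repair workaround from the exchange above, gathered into one sequence (the application, unit, and machine names are the ones from this session; substitute whatever juju status flags as "provisioning error"):

    juju add-unit easyrsa            # spawn a healthy replacement unit, keeping relations intact
    juju remove-unit easyrsa/0       # drop the failed unit once the replacement is active
    juju remove-machine 0 --force    # reclaim the dead machine if plain removal stalls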
[14:10] I'm trying to post the full juju status [14:11] http://paste.ubuntu.com/24511284/ [14:12] ok that status message for etcd/2 should update within 5 minutes when it runs update-status again and it'll report 2 peers [14:12] What does the * beside etcd/2 mean [14:12] that denotes the current juju leader [14:12] in some instances like the etcd charm, we use the juju leader to coordinate cluster events [14:12] kim0: ideally, each unit would be isolated (either on a totally separate machine or in a separate container). the exception is for "subordinate charms". these are things like nagios or ganglia monitors, rsyslog, etc that make no sense to stand alone in a VM. in that case, both a principal charm (like etcd) and a subordinate (like ganglia-node) would live side-by-side on the same unit. [14:13] kim0: this ideal isolation is not enforced, so you *can* jam a whole bunch of charms onto the same unit, but that usually leads to package conflicts or other headaches.. [14:13] aha .. Ok! I was really thinking about container isolation (k8s style) [14:14] you can also colocate using lxd which doesn't suffer from that, however it comes with other nuances you have to be aware of. Cross host container networking is only supported in MAAS at this time. [14:14] nvm .. I'll keep separate VMs for now [14:15] kim0 as this is your first deployment, next go-around you can start with a much smaller bundle - kubernetes-core, which only requires 3 units. [14:15] colocated 1 etcd/easyrsa, 1 master, 1 worker. [14:15] same kubernetes experience, smaller requirements. CDK is more of a production-facing deployment, for when you want an HA control plane and resilient etcd === menn0_ is now known as menn0 [14:17] I actually care about a production ready deployment [14:17] it seems remove-unit does not remove the underlying machine [14:18] I tried to remove the machine by hand .. but it either needs time, or is not working [14:18] kim0: use a bigger hammer: juju remove-machine --force [14:19] Ok .. just did :) [14:19] removed [14:19] cool [14:19] seems to be converging to a working setup :) [14:19] is there a possibility to power-off all machines (to save money) .. yet start them tomorrow ? [14:21] kim0: even powered off, i think clouds will still charge you (not for IOPS or cpu time, but for holding the underlying resources for you) [14:21] yeah I understand that .. but that cost is like 5% of the running cost [14:21] kim0: so if you really don't need the cluster overnight, just destroy the model and re-deploy it when you need it again [14:21] well .. I've spent 2 hours on a bunch of add-unit / remove-unit / remove-machine --force [14:22] so if possible .. I want to keep it [14:22] kim0 it shouldn't have any issues coming back up from a stopped state [14:22] kim0 if you find issues there, i want bugs please <3 [14:22] do I stop from azure or juju [14:22] the azure cp [14:22] you should be able to halt the vms or suspend them without any penalty to functionality [14:23] cool! [14:23] or you could juju run shutdown [14:23] I am approaching an "all-green" state .. exciting :D [14:23] oh checking that out [14:23] yeah [14:23] juju run --all "shutdown -h now" [14:23] admcleod i'm not sure if that marks the vm as halted in azure cp tho [14:23] oh that's like a parallel ssh .. I see [14:23] lazyPower: neither! [14:23] yeah .. it doesn't [14:24] i know some providers decouple the power state in the hypervisor from the state of the unit... [14:24] *adds to notes* [14:24] I have to "deallocate" the VM .. so I'll use the azure tools [14:24] yeah [14:24] ^ [14:24] do that [14:24] Awesome!
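The halt-versus-deallocate distinction above, in command form (the az invocation is an assumption, any Azure tool that deallocates will do; the resource group and VM names are placeholders):

    juju run --all "shutdown -h now"           # halts the OS on every machine, but azure keeps billing
    az vm deallocate -g <group> -n <vm-name>   # releases the compute so the meter (mostly) stops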
[14:24] kim0 now, you're basically testing our failure recovery model :) [14:24] fyi [14:24] hhhh [14:24] hope it works :) [14:24] so if you do find bugs, i want them, plz plz plz don't let that go unnoticed [14:24] Sure thing [14:24] <3 appreciated [14:25] one last thing [14:25] Should I expect to be able to scale kubernetes worker nodes up/down & upgrade kubernetes versions from now on ? [14:25] yes [14:25] ie should the above work without a party ? [14:25] we're cutting a 1.6.2 release soon (probably today) [14:25] Awesome!!! [14:25] so you can test that function relatively soon [14:25] I will love to upgrade from 1.6.1 to that [14:25] :) [14:25] wish granted [14:25] Great .. I appreciate a ping after it's push [14:26] pushed* [14:26] sure [14:26] on here? [14:26] Yep [14:26] k [14:26] lazyPower: how many hours roughly to go [14:26] if you subscribe to the juju mailing list we also hit that, kubernetes users [14:26] lazyPower: the release notification also goes to a list too doesn't it? [14:26] right [14:26] reddit [14:27] All VMs get public IPs right [14:27] do those VMs get security updates too ? :D [14:27] kim0 not without something like landscape client attached or installing unattended-upgrades [14:28] kim0 and to respond to your question: SUCCESS! -- 146 Passed | 0 Failed | 0 Pending | 442 Skipped [14:28] :D [14:28] we're close, getting clean test results from e2e on our 1.6.1 => 1.6.2 on 2 clouds now [14:29] any way for juju to install unattended-upgrades on the nodes it's managing ? [14:29] or should I do ansible for that :) [14:29] kim0: yea, sec. I highlighted that charm in the juju show last week [14:30] kim0: https://lists.ubuntu.com/archives/juju/2017-April/008838.html [14:30] 👍 [14:33] juju deploy unattended [14:33] ERROR cannot resolve URL "cs:unattended": charm or bundle not found [14:35] kim0: sorry, it's just a user charm atm: https://jujucharms.com/u/paulgear/unattended/ [14:35] kim0: try the copy/paste command there. [14:36] it seems to use a local file ./unattended .. should I git clone it first [14:36] I expected "juju deploy paulgear/unattended" to work but it didn't [14:37] rick_h: is the juju show something that is recorded? [14:38] tychicus: it sure is, check out https://www.youtube.com/jujucharms and there's a playlist for just the juju show episodes [14:38] kim0: the copy and paste should have gotten you juju deploy cs:~paulgear/unattended-2 [14:38] fantastic, thanks! [14:39] rick_h: sorry .. my eye totally missed that little box up there :) [14:40] kim0: all good, helpful for making it dirt simple to get the right username/etc. [14:44] @channel .. Thanks for all the awesome help .. I have an all-green deployment now ;) [14:44] kim0: <3 awesome [14:59] kim0 \o/ partyyyy [15:00] :+1
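An aside on the charm-store deploys above: charms published under a user namespace need the full store URL rather than a bare name (the revision shown is the one current at the time of this log):

    juju deploy cs:~paulgear/unattended-2   # full user-namespace URL works
    juju deploy unattended                  # fails: bare names only resolve to promulgated charms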
[15:00] kim0: heads up -- i just used the azure tools to put a machine in "Stopped (deallocated)", and when i restarted it, it got a new public ip. this seems to cause juju to switch over to the 192.x private address, which is no longer addressable. [15:01] kwmonroe good catch. we're not going to recover from that [15:01] hmm [15:02] At least you guys should allocate static IPs I guess [15:02] rick_h: is there a way for juju to "refresh" the public-ip from the provider? or does it perhaps poll periodically to see if a public ip has changed? [15:03] kwmonroe: so...I know there are open bugs about machines rebooting to different IPs. I'm not sure what the resolution of those bugs turned out to be off the top of my head [15:03] kwmonroe: from my experience the agent will recover, eventually [15:03] ok - i'll at least give it the update-status window and see. meanwhile i'll checkout some lp bugs [15:03] thx rick_h marcoceppi [15:04] how much is that window [15:04] 5 minutes [15:04] cool :) [15:06] Are charms written in a way such that any app is scalable .. For example, I see kubeapi-load-balancer and kubernetes-master and I just don't know whether or not it's wise to try scaling those up [15:06] kim0 - each of those components are written in such a way that you can scale them [15:06] you can scale charms, whether they expect to be scaled is up to the developer [15:06] do I get some error if something should not be scaled [15:06] and they will reconfigure when a scale event (up or down) happens. [15:07] kim0 the only charm in that bundle that doesn't expect to be scaled at this point is easyrsa [15:07] so what happens when I scaled that up [15:07] we have an open item of medium priority on that functionality. it involves cloning the CA, doing intermediate signing certificates, and a dance that i haven't fully wrapped my head around. [15:08] you'll get 2 separate Certificate Authorities, adding new units may be problematic at that point. [15:08] however existing units will be unaffected. [15:08] Ok thanks [15:09] I guess it should at least error out, that scaling easyrsa is not a recommended action or something [15:10] kim0: the juju agent on my re-started unit does eventually sync / display the new public address, which means addressability is good again. it took 15 minutes for me -- not sure why that amount of time, but i'll deallocate again and see if it's consistent. [15:10] Awesome, thanks [15:11] marcoceppi: dunno who monitors the form submission at developer.juju.solutions but if you find one from my new guy and a quality message that i didn't write at all [15:11] can you send him some tokens? [15:13] Docs say "By default, the channel will be set to stable, which means your cluster will always be upgraded to the latest stable version of Kubernetes available." .. Does that auto-reboot nodes by default ? [15:14] kim0: it won't reboot your machines at all, just updates the software running on them [15:17] marcoceppi: so a reboot is not even needed ? [15:18] correct [15:18] and if you combine that with things like live patching, you won't have to reboot for kernel updates either [15:18] winning [15:19] cool stuff [15:20] you still have to reboot for glibc updates though :P hhh [15:23] kim0: second deallocate/restart went faster. the unit refreshed its public ip 5 minutes after coming back up. [15:24] Very good! [15:24] that may cause an issue with the certs [15:24] kim0: so that's all well and good, but if things expect a static ip, you still might be in trouble. [15:25] right lazyPower [15:25] certs add the public IP as a SAN [15:25] at time of request [15:25] which we have an open bug to re-key infra, but that hasn't been started yet [15:25] afaik, the public IP isn't even "on" the VM ? [15:25] does it go through the trouble of finding it out to add it to the cert [15:25] sure does [15:26] unit-get public-address is passed in as a subject-alt-name on the CSR
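For reference, the hook tools lazyPower is referring to (these are real juju hook commands, runnable only inside a charm hook context; the example addresses are invented):

    # inside a charm hook on the unit:
    unit-get public-address    # e.g. 40.68.x.x   -> what the tls layer puts in the CSR's SAN
    unit-get private-address   # e.g. 192.168.x.x -> what juju falls back to after a re-IP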
[15:27] is there a mode, where almost all VMs do not get public IPs to begin with [15:28] kwmonroe https://github.com/juju-solutions/layer-tls-client/issues/8 [15:28] i filed a bug about this so we can track it and get layer-tls-client updated with some logic to re-key during normal operating events [15:28] kim0: ^^ :) [15:29] 👍 [15:30] so basically today, upon nodes reboot .. I should expect the cluster to break, right [15:30] reboots are for wimps [15:30] hhhh [15:31] real men redeploy from scratch [15:32] kim0: no, you should not expect any OS-level reboot to change the IP. it's only when you suspend/deallocate from the cloud control panel. [15:32] Aha got it [15:33] juju set-constraints kubernetes-worker mem=8G [15:33] mm that ^ is still not too helpful to help me get the SSD variant of Azure VMs :/ [15:40] kim0: docs will typically use the least-common-denominator for application constraints.. in your case mem=8G is a common constraint that will work across aws/gce/azure/etc. juju also supports cloud-specific constraints. so in your case, if you're really sure you want azure, you could run "juju set-constraints kubernetes-worker instance-type=Standard_D1_v2" [15:41] Thanks!! [15:42] kim0: the list of instance types for azure (in order of cost) are here: https://github.com/juju/juju/blob/2.1/provider/azure/instancetype.go#L29, and pricing here: https://azure.microsoft.com/en-us/pricing/details/cloud-services/ [15:42] kwmonroe: Thanks for all the help! [15:42] np! [15:43] * magicaltrout gives kwmonroe a supportive pat on the back [15:43] well done kevin \o/ [15:43] heh [15:47] kwmonroe: fyi, it seems the instancetype.go is missing some newer VM types like Standard_DS1_v2 (actually all of the DSv2-series) and Av2-series ..etc [15:55] yeah kim0 - looks like instancetype.go is not the definitive list (sorry 'bout that). juju does support DSX_v2, and you can see the full instance-type constraint list here: http://paste.ubuntu.com/24511657/ [15:56] cool np [16:04] When I try to deploy a VM with constraint "instance-type=Standard_DS1" .. I only get "current: provisioning error" [16:04] Any way to get more meaningful errors ? [16:22] kim0: what region are you deploying to? [16:22] westeurope [16:25] kim0: i was gonna say your region might not have DS machines available, but https://azure.microsoft.com/en-us/regions/services/ shows westeurope is supported [16:25] Can I see the error coming back from azure somewhere [16:25] kim0: you can go through the azure portal, select your resource group, and then "Deployments" [16:26] kwmonroe: Got it .. I see the error [16:26] kim0: is it the PutVirtualNetworkOperation from bug 1673847? [16:26] Bug #1673847: azure does not handle failed deployment well (juju-2.1.2) [16:26] Unable to create the VM because this size is not available in the cluster where the availability set is created [16:27] Strange error .. first time to get this from Azure!! [16:28] kim0: let's see if juju is exposing that with a different status format.. try "juju status --format=yaml" [16:29] not really the machine part is only saying "provisioning error" [16:30] http://paste.ubuntu.com/24511934/plain/ [16:31] I guess this *might* be because I'm using a test account [16:34] ok kim0 - would you mind adding another comment to bug 1673847, listing this new error message? i think minimally it would be nice if juju would expose these provisioning error messages rather than having to dig through the azure portal. [16:34] Bug #1673847: azure does not handle failed deployment well (juju-2.1.2) [16:37] kwmonroe: Done
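Putting the constraint advice above in one place (the instance type is this session's example; note that constraints only affect machines provisioned after they are set):

    juju set-constraints kubernetes-worker instance-type=Standard_DS1_v2
    juju add-unit kubernetes-worker   # the new unit's machine is requested with the new type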
[16:37] kwmonroe: Are we nearing the 1.6.2 push :) [16:38] kim0: that's a question for lazyPower / kjackal / ryebot. they're mashing lots of buttons atm. [16:38] kim0 once my co-worker gets back from lunch we were planning on a release [16:38] Sorry confused [16:38] Awesome keep rocking :) === daniel1 is now known as Odd_Bloke [17:29] o/ juju world [17:39] \o Budgie^Smore [17:39] got a 1.6.2 release coming your way shortly [17:44] lazyPower coolio, would need to find an environment to deploy it since I no longer have the setups I had :-/ [17:45] Budgie^Smore apply for cloud developer credentials [17:45] help us find bugs [17:45] http://developer.juju.solutions [17:46] lazyPower you know me, always happy to go bug hunting ;-) [17:59] OK I have filled in the form, soooooo now we wait ;-) === frankban is now known as frankban|afk [18:02] awesome :) when marco isn't out doing sprint type stuff you should get an email with some creds [18:02] or a request for more info [18:02] either way, you'll hear back from us shortly [18:06] well even if it's a day or 2 it is still faster than HR ;-) [18:07] alright 1.6.2 has hit stable channels [18:07] * lazyPower raises a cuppa coffee to the sky [18:07] kim0 1.6.2 release ping [18:07] \o/ [18:08] how do I upgrade to that [18:08] https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/ [18:54] mm `juju status` does not show an upgrade is available .. just saying :) [18:59] mmm is juju status supposed to show upgrade availability or just the current status of the model? [19:00] Docs say: You can use juju status to see if an upgrade is available. There will either be an upgrade to kubernetes or etcd, or both. [19:00] kubernetes-master seems to have upgraded itself to 1.6.2! [19:00] juju status --format=yaml [19:00] smart output i think hides that notation [19:01] which is a good call, i had not thought about that.. one of those things i've just become accustomed to. conversely, if you're using the GUI, the GUI shows it under the charm detail [19:01] kim0 that's snaps in action ;) [19:02] even in the yaml format .. I cannot easily spot what should tell me there's an upgrade [19:02] @lazyPower .. so it's normal that master auto-upgrades right [19:03] kim0 only when there is an upgrade in its subscribed channel. [19:03] it won't auto-upgrade between minor releases, eg: when we cut and push 1.7 to stable [19:04] you won't auto-upgrade to 1.7, you will need to configure the charms to look at that channel in order to receive the upgrade [19:04] but for minor patch releases, you get those automatically [19:04] kim0: lazyPower juju searches for upgrades like daily [19:04] yeah .. so for -worker .. there is an upgrade (it's now on 1.6.1) .. but my eyes can't see where the upgrade is [19:04] so you won't see an upgraded charm available for quite a while [19:04] well, maybe 6 hours [19:04] master upgraded like 10 mins after you guys announced it :) [19:04] lucky me I guess [19:04] but -worker still waiting [19:05] lazyPower: we should add a "refresh" action to master and worker, so people like kim0 can just get the latest "now" [19:05] kim0 juju upgrade-charm kubernetes-worker should get it prompting you to run the upgrade action. [19:05] @marcoceppi juju run-action kubernetes-worker/0 upgrade already does that ;) [19:05] oh, udhhh [19:05] <3
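The two-step upgrade being described, end to end (both commands appear verbatim in this session; /0 stands in for each worker unit in turn):

    juju upgrade-charm kubernetes-worker          # fetch the latest charm revision from the store
    juju run-action kubernetes-worker/0 upgrade   # run the gated upgrade action on each unit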
[19:05] Here's the yaml format: http://paste.ubuntu.com/24512609/ [19:06] there's no hint there is an upgrade, or is there [19:08] kim0 - It may not tell you anything about the upgrade until the controller polls for upgrades to charms in the env. [19:09] kim0 marcoceppi stated that was ~ every 6 hours or so. perhaps daily. I'm not 100% positive on exactly when it does that either but i can certainly find out and ping back. [19:09] aha I see [19:09] kim0: you can still run the upgrade, even if juju doesn't tell you about one [19:09] I triggered the update manually [19:09] it'll check when you run the command [19:09] yep [19:09] I suppose that kills any running pods .. or does it [19:10] that's why you'll get the prompt about having to run the upgrade action [19:10] Only in rare instances will it incur downtime, and that's why it's gated behind an upgrade action. [19:10] but 90% of the time it's only recycling kubelet, which has no effect on the running workloads [19:10] Well for me, there was no "prompt" .. it just did its thing [19:10] $ juju upgrade-charm kubernetes-worker [19:10] Added charm "cs:~containers/kubernetes-worker-23" to the model. [19:10] that was all .. no questions asked [19:11] Upgrade done! well .. I have to say, this is sweet! [19:13] kim0 thanks for the positive feedback :) the team appreciates it [19:13] What if I want to change the VM type of worker nodes [19:14] I can add a constraint, add new nodes .. but how do I get rid of the old ones [19:14] kim0 i would recommend blue-green style deployment [19:14] kim0 you would have to add / destroy units for that [19:14] kim0 juju deploy cs:~containers/kubernetes-worker worker-beta --constraints="mem=16G" [19:14] add relations [19:14] wait for the dust to settle [19:14] cordon the og nodes [19:14] juju run-action kubernetes-worker/0 pause [19:14] ... down the line... [19:15] once all nodes have been evacuated [19:15] juju remove-application kubernetes-worker [19:15] Got it! [19:16] Why do I need to add relations manually [19:16] i have a lot of doc updates i see [19:16] I didn't need that in the initial deployment [19:16] well when you do the blue-green style, there's nothing telling juju how to wire the deployment [19:16] bundles are pre-modeled applications in a specific formation for you to consume [19:16] ah [19:16] you could take the canonical bundle, and modify it with the alternative deployment and deploy it into the same model [19:16] it will converge [19:17] now that may come with its own flavor of challenges if you have mutated the model... which is why i would encourage you to do it manually [19:17] what you can do is just export the model as well [19:17] make changes in the yaml [19:17] then apply that yaml [19:18] there's a lot of options, but as a dev that really likes to know how things work, i'm always going to recommend the manual method. Keeps things simple when things go wrong and i can help you unwind or recover from that :) [19:18] it's harder when we're playing trace the bug
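lazyPower's blue-green worker swap above, gathered into one sequence (worker-beta and the constraint are his example; the add-relation line is schematic, mirror whatever relations the bundle defines for kubernetes-worker):

    juju deploy cs:~containers/kubernetes-worker worker-beta --constraints="mem=16G"
    juju add-relation worker-beta kubernetes-master   # repeat for each relation in the bundle yaml
    juju run-action kubernetes-worker/0 pause         # cordon the old nodes one at a time
    juju remove-application kubernetes-worker         # once workloads have drained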
[19:22] lazyPower: it's easier to just juju set-constraints I think [19:22] or that [19:22] kim0: we'll document this for sure [19:22] but i hope you remember you did that when you don't want that instance type anymore :) [19:48] https://www.reddit.com/r/kubernetes/comments/699v24/canonicals_support_for_kubernetes_162_released/ [19:48] upboats appreciated [19:53] balloons: i think i need an illustrated map of the ppas you are using... [20:24] is it possible to start a machine that is powered off via juju? [20:27] tychicus: to power it back on with juju? how did you power it off via juju? [20:28] juju run --machine=4 "shutdown -h now" [20:29] machine is tied to maas, I know i can power it back on from maas, but I wasn't sure if there was a way to juju run --machine=x start [20:31] tychicus: oh heh no. juju run works by ssh'ing to the machine [20:32] tychicus: so it has to be listening to ssh for that to do anything unfortunately [20:32] ok, that's what I thought [20:32] OK I upboated that, now if only I had somewhere to "play" with 1.6.2! ;-) [20:33] I couldn't find the upboat icon [20:33] tychicus see the arrows at the side of the link? [20:34] ah those are the boat sails ;) [20:38] I'll run through a cdk deployment today, if only I could figure out why my floating IPs work in openstack, but directly connecting an instance to the ext_net does not :( [20:55] Would "upgrade-charm" also upgrade me to 1.7 when it's released [20:56] or is it involved [21:01] kim0: you'll need to change configuration i believe it just targets 1.6/stable [21:02] ok still easy enough :) [22:26] is it too much to ask for worker autoscaling for kubernetes :) [22:32] kim0: have to see if https://jujucharms.com/charmscaler/ will work out for you [22:33] well that is interesting, however I really meant adding more nodes under k8s which this scaler doesn't seem to do
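Circling back to the 1.7 question above: the configuration change alluded to is the charms' channel setting (the channel key matches the CDK docs quoted earlier; the 1.7/stable value is an assumption, since 1.7 had not shipped at the time of this log):

    juju config kubernetes-master channel=1.7/stable
    juju config kubernetes-worker channel=1.7/stable
    juju run-action kubernetes-worker/0 upgrade   # then trigger the gated upgrade per unit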