/srv/irclogs.ubuntu.com/2017/05/04/#juju.txt

umbSublimeBudgie^Smore, magicaltrout I'm not even sure Canonical still uses those (pretty awesome) orange boxes for OS training anymore, IIRC they now use remote servers from different providers00:20
Budgie^SmoreumbSublime pretty sure you are right there, those were still cool though00:22
umbSublimeheck yah00:22
umbSublimethey had me drooling when they first came out :p00:23
stubThe orange boxes tend to get used where network connectivity is limited, cause they can be flown in with archive mirrors and whatnot prepopulated06:47
kjackalGood morning Juju world!07:25
BlackDexHello, can i run juju within a lxd when using MAAS as its cloud-provider?11:16
BlackDexi mean the juju bootstrap11:16
BlackDexso, i want to bootstrap juju within a lxd container and use maas for the rest11:16
BlackDexcurrently i'm creating a kvm instance and run the bootstrap on that11:16
rick_hBlackDex: not at this time. One "provider" per controller.11:29
BlackDexand if i create a lxd and let it detect by maas? Or that isn't an option?11:34
rick_hBlackDex: yes, if you load a vm of any sort into maas so that it's the maas api/provider that's in use you're ok12:23
BlackDexoke rick_h Thx, i will try that :)12:30
kim0Howdy folks .. I just tried to deploy kubernetes to Azure cloud .. And it is stuck .. I see 4 deployments have failed in Azure portal :/13:10
kim0Can anyone help me debug this or get it working13:11
lazyPowerkim0: i can certainly try to lend a hand here. What units are failed and how did they fail?13:18
kim0lazyPower: actually not units .. I think it's machines that failed13:32
lazyPowerkim0 ah, so azure failed to give you the machines?13:32
kim0https://www.irccloud.com/pastebin/r7A8WpgV/13:32
kim0yes .. the deployment to Azure failed13:32
lazyPowerthat certainly looks indicative that azure didn't give you the vms.13:32
kim0is there a way to delete those VMs and re-deploy them13:32
lazyPoweryou can try juju retry-provisioning 013:33
lazyPowersame with 113:33
kim0interesting Ok13:33
lazyPowerand it should re-request the machines from azure13:33
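
A minimal sketch of the retry flow lazyPower describes, assuming machines 0 and 1 from this session are the failed ones:

    juju retry-provisioning 0 1      # re-request both failed machines from azure
    juju status --format=yaml        # watch machine-status until "provisioning error" clears
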
kim0I had done juju add-machine13:33
lazyPowerbut you may have to wait for them to enter a failed state13:33
kim0but it's not appearing into juju status for some reason13:33
lazyPoweri dont think it will retry if they are marked as down13:33
kim0$ juju show-machine 013:34
kim0model: gulfk8s13:34
kim0machines:13:34
kim0  "0":13:34
kim0    juju-status:13:34
kim0      current: down13:34
kim0      message: agent is not communicating with the server13:34
kim0      since: 04 May 2017 14:05:27+02:0013:34
kim0    instance-id: machine-013:34
kim0    machine-status:13:34
kim0      current: provisioning error13:34
kim0      message: Failed13:34
kim0It says .. "Provisioning error" I guess juju should know it has failed :)13:34
kim0I tried juju "retry-provisioning 0" .. but as you expected, it's not retrying13:36
kim0any way to force-fail the old machines or something13:36
kim0it's been an hour really .. so I guess it might never fail them13:36
kim0lazyPower: Any ideas ^ .. Thanks!13:39
lazyPowerkim0 I dont, the best i could offer would be to remove the model and try again.13:40
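
Roughly what removing the model and trying again looks like, assuming the model name from the show-machine output above and that the original deploy was the canonical-kubernetes bundle:

    juju destroy-model gulfk8s
    juju add-model gulfk8s
    juju deploy canonical-kubernetes
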
kim0yeah .. I tried deploying this a couple of months back, and got the same VM provisioning failure .. so a little :/13:41
kim0I can try one more time13:41
lazyPowerkim0 : do you have any restrictions on your account or anything similar?13:44
lazyPowerkwmonroe i know you have the broadest wealth of knowledge about our azure provider13:45
lazyPowerkwmonroe do you see failed provisioning often?13:45
kim0kwmonroe .. The error Azure portal gives me is like "PutVirtualNetworkOperation xxxx was cancelled and supersceeded by PutVirtualNetworkOperation yyy. Code: CancelledAndSuperseededDueToAnotherOperation)13:47
kim0To me .. this looks more like juju's fault than Azure's13:47
kwmonroeyup lazyPower.. bug 1673847 is the reference bug (kim0 has already commented).13:52
mupBug #1673847: azure does not handle failed deployment well (juju-2.1.2) <azure-provider> <intermittent-failure> <retry> <juju:Triaged> <juju 2.1:Won't Fix> <https://launchpad.net/bugs/1673847>13:52
lazyPowerkwmonroe fantastic. Thanks for confirming13:52
lazyPower:(13:52
lazyPowerwell poo, wont fix on 2.1 which means you wont see it in this series, but 2.2 is right around the corner13:53
kim0Is it already fixed in 2.2 devel ? can I try that ?13:53
lazyPowerI don't see anything related to "fix committed" which tells me its still present in 2.2 devel13:54
kim0kwmonroe: Thanks for confirming! Any advice to manually workaround this issue ?13:54
kim0The bug OP says he hits this .. 1 out of 10 times .. to me, I hit it the 2 times I tried juju :)13:55
kim0If I should just destroy the model & retry, I can do that too13:55
kim0I tried "juju add-machine" which seems to work .. The machine is started in azure portal .. put it is not listed under my k8s model .. not sure why13:58
kim0I was hoping to replace the ill machines with healthy ones13:58
kwmonroekim0: my workaround (i'm the bug OP btw) is to "juju add-unit <application>" on anything that juju status reports as "provisioning error".14:02
kwmonroekim0: it's not pretty, but it keeps any relations intact and is a bit faster than totally removing the application and re-adding it with new relations later.14:02
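
kwmonroe's workaround as commands, using easyrsa from this session as the example application:

    juju add-unit easyrsa     # spawns a replacement unit on a fresh machine
    juju status easyrsa       # wait for the new unit to reach an active state
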
kim0aha .. so that will allocate a new VM and14:03
kim0deploy on it14:03
kwmonroecorrect kim014:03
kim0my only worry, if unit/0 is "special"14:03
kim0Ok .. easyrsa/0 was down .. trying to add a unit on that14:03
kwmonroekim0: it really shouldn't be.. there's no guarantee that /0 was the first unit to become ready (ie, you may have deployed 2 units at the same time and unit/1 just happened to beat /0 to ready state)14:04
kwmonroekim0: so things like charm leadership and even tests should not be relying on /0 as any indication of "first" or "leader" or whatever.14:04
kim0For example .. etcd/0 is on a broken machine .. while etcd/1 & /2 are on working machines .. However it seems they are blocked waiting for etcd/0 !14:04
kim0Nice .. easyrsa/1 is now healthy!14:05
kim0kwmonroe: should I somehow remove the old easyrsa/0 ?14:05
kwmonroekim0: i'm not too familiar with etcd -- lazyPower, is there something special about etcd/0?14:05
* lazyPower reads backscroll14:05
lazyPowerkim0 kwmonroe - shouldn't be blocked waiting on etcd/0. Which unit has the asterisk next to its name?14:06
kwmonroekim0: i do remove the failed units because i can't stand being reminded of them every time i type juju status ;)  i think the syntax is "juju remove-unit easyrsa/0", but if that doesn't work, "juju remove-machine <machine-number-for-easyrsa/0>" should do it.14:07
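
The cleanup step sketched out, with a hypothetical machine number standing in for wherever easyrsa/0 landed:

    juju remove-unit easyrsa/0    # drop the failed unit
    juju remove-machine 3         # reclaim its machine (3 is a placeholder); add --force if it hangs
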
lazyPoweri would presume etcd/1* if it was the first to come active which would earmark it as the leader for the rest of the units coming online.14:07
kim0lazyPower: etcd/2 has the *14:07
kim0is the "every unit on a separate VM" the norm in juju world ?14:07
lazyPowerkim0 - can you provide more detail as to where it appears they are waiting on etcd/0?14:08
kim0mm it was just saying "blocked" .. since then I add-unit one etcd .. and now it says ready!14:09
lazyPowerOk, it should only be blocked if its missing the easyrsa details14:09
kim0that was indeed the case .. I had also added unit on easyrsa14:10
lazyPowerit requires TLS certs to operate as of about a year ago. I flipped the switch to disallow insecure connections14:10
kim0sorry I don't know the order of events :)14:10
lazyPowerAll good kim0 - its a learning experience :)14:10
kim0exactly14:10
kim0I'm trying to post the full juju status14:10
kim0http://paste.ubuntu.com/24511284/14:11
lazyPowerok that status message for etcd/2 should update within 5 minutes when it runs update-status again and it'll report 2 peers14:12
kim0What does the * besides etcd/2 mean14:12
lazyPowerthat denotes the current juju leader14:12
lazyPowerin some instances like the etcd charm, we use the juju leader to coordinate cluster events14:12
kwmonroekim0: ideally, each unit would be isolated (either on a totally separate machine or in a separate container).  the exception is for "subordinate charms".  these are things like nagios or ganglia monitors, rsyslog, etc that make no sense to stand alone in a VM.  in that case, both a principal charm (like etcd) and a subordinate (like ganglia-node) would live side-by-side on the same unit.14:12
kwmonroekim0: this ideal isolation is not enforced, so you *can* jam a whole bunch of charms onto the same unit, but that usually leads to package conflicts or other headaches..14:13
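
A sketch of the two placement styles kwmonroe contrasts; a subordinate rides along via its relation, while --to forces colocation (charm names and endpoints here are illustrative):

    juju deploy ganglia-node                          # subordinate: gets no machine of its own
    juju add-relation ganglia-node:juju-info etcd     # it lands beside each etcd unit
    juju deploy some-app --to lxd:2                   # or colocate a principal in a container on machine 2
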
kim0aha .. Ok! I was really thinking about container isolation (k8s style)14:13
lazyPoweryou can also colocate using lxd which doesn't suffer from that, however it comes with other nuances you have to be aware of. Cross host container networking is only supported in MAAS at this time.14:14
kim0nvm .. I'll keep separate VMs for now14:14
lazyPowerkim0 as this is your first deployment, next go-around you can start with a much smaller bundle - kubernetes-core, which only requires 3 units.14:15
lazyPowercolocated 1 etcd/easyrsa, 1 master, 1 worker.14:15
lazyPowersame kubernetes experience, smaller requirements. CDK is more of a production-facing deployment, where you wish to have an HA control plane and resilient etcd14:15
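
The smaller footprint lazyPower suggests is a one-line deploy, using the bundle name as published at the time:

    juju deploy kubernetes-core    # 3 machines: colocated etcd/easyrsa, one master, one worker
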
=== menn0_ is now known as menn0
kim0I actually care about a production ready deployment14:17
kim0it seems remove-unit  does not remove the underlying machine14:17
kim0I tried to remove the machine by hand .. but it either needs time, or is not working14:18
kwmonroekim0: use a bigger hammer:  juju remove-machine --force <number>14:18
kim0Ok .. just did :)14:19
kim0removed14:19
kim0cool14:19
kim0seems to be converging to a working setup :)14:19
kim0is there a possibility to power-off all machines (to save money) .. yet start them tomorrow ?14:19
kwmonroekim0: even powered off, i think clouds will still charge you (not for IOPS or cpu time, but for holding the underlying resources for you)14:21
kim0yeah I understand that .. but that cost is like 5% of the running cost14:21
kwmonroekim0: so if you really don't need the cluster overnight, just destroy the model and re-deploy it when you need it again14:21
kim0well .. I've spent 2 hours on a bunch of add-unit / remove-unit / remove-machine --force14:21
kim0so if possible .. I want to keep it14:22
lazyPowerkim0 it shouldn't have any issues coming back up from a stopped state14:22
lazyPowerkim0 if you find issues there, i want bugs please <314:22
kim0do I stop from azure or juju14:22
lazyPowerthe azure cp14:22
lazyPoweryou should be able to halt the vms or suspend them without any penalty to functionality14:22
kim0cool!14:23
admcleodor you could juju run shutdown14:23
kim0I am approaching an "all-green" state .. exciting :D14:23
kim0oh checking that out14:23
lazyPoweryeah14:23
lazyPowerjuju run --all "shutdown -h now"14:23
lazyPoweradmcleod i'm not sure if that marks the vm as halted in azure cp tho14:23
kim0oh that's like a parallel ssh .. I see14:23
admcleodlazyPower: neither!14:23
kim0yeah .. it doesn't14:23
lazyPoweri know some providers decouple the power state in the hypervisor from the state of the unit...14:24
admcleod*adds to notes*14:24
kim0I have to "deallocate" the VM .. so I'll use the azure tools14:24
lazyPoweryeah14:24
lazyPower^14:24
lazyPowerdo that14:24
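
What the deallocate/start cycle might look like with the Azure CLI; the resource group and VM names are placeholders you would read out of the portal:

    az vm deallocate -g <resource-group> -n <vm-name>    # stop paying for compute
    az vm start -g <resource-group> -n <vm-name>         # bring it back tomorrow
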
kim0Awesome!14:24
lazyPowerkim0 now, you're basically testing our failure recovery model :)14:24
lazyPowerfyi14:24
kim0hhhh14:24
kim0hope it works :)14:24
lazyPowerso if you do find bugs, i want them, plz plz plz dont let that go unnoticed14:24
kim0Sure thing14:24
lazyPower<3 appreciated14:24
kim0one last thing14:25
kim0Should I expect to be able to scale kubernetes worker nodes up/down & upgrade kubernetes versions from now on ?14:25
lazyPoweryes14:25
kim0ie should the above work without a party ?14:25
lazyPowerwe're cutting a 1.6.2 release soon (probably today)14:25
kim0Awesome!!!14:25
lazyPowerso you can test that function relatively soon14:25
kim0I will love to upgrade from 1.6.1 to that14:25
lazyPower:)14:25
lazyPowerwish granted14:25
kim0Great .. I appreciate a ping after it's push14:25
kim0pushed*14:26
lazyPowersure14:26
lazyPoweron here?14:26
kim0Yep14:26
lazyPowerk14:26
kim0lazyPower: how many hours roughly to go14:26
lazyPowerif you subscribe to the juju mailing list we also hit that, and the kubernetes users list14:26
admcleodlazyPower: the release notification also goes to a list too doesn't it?14:26
admcleodright14:26
lazyPowerreddit14:26
kim0All VMs get public IPs right14:27
kim0do those VMs get security updates too ? :D14:27
lazyPowerkim0 not without something like landscape client attached or installing unattended-upgrades14:27
lazyPowerkim0 and to respond to your question: SUCCESS! -- 146 Passed | 0 Failed | 0 Pending | 442 Skipped14:28
kim0:D14:28
lazyPowerwe're close, getting clean test results from e2e on our 1.6.1 => 1.6.2 on 2 clouds now14:28
kim0any way for juju to install unattended-upgrades on the nodes it's managing ?14:29
kim0or should I do ansible for that :)14:29
rick_hkim0: yea, sec. I highlighted that charm in the juju show last week14:29
rick_hkim0: https://lists.ubuntu.com/archives/juju/2017-April/008838.html14:30
kim0👍14:30
kim0juju deploy unattended14:33
kim0ERROR cannot resolve URL "cs:unattended": charm or bundle not found14:33
rick_hkim0: sorry, it's just a user charm atm: https://jujucharms.com/u/paulgear/unattended/14:35
rick_hkim0: try the copy/paste command there.14:35
kim0it seems to use a local file ./unattended .. should I git clone it first14:36
kim0I expected "juju deploy paulgear/unattended" to work but it didnt14:36
tychicusrick_h: is the juju show something that is recorded?14:37
rick_htychicus: it sure is, check out https://www.youtube.com/jujucharms and there's a playlist for just the juju show episodes14:38
rick_hkim0: the copy and paste should have gotten you juju deploy cs:~paulgear/unattended-214:38
tychicusfantastic, thanks!14:38
kim0rick_h: sorry .. my eye totally missed that little box up there :)14:39
rick_hkim0: all good, helpful for making it dirt simple to get the right username/etc.14:40
kim0@channel .. Thanks for all the awesome help .. I have an all-green deployment now ;)14:44
rick_hkim0: <3 awesome14:44
lazyPowerkim0 \o/ partyyyy14:59
kim0:+115:00
kwmonroekim0: heads up -- i just used the azure tools to put a machine in "Stopped (deallocated)", and when i restarted it, it got a new public ip.  this seems to cause juju to switch over to the 192.x private address, which is no longer addressable.15:00
lazyPowerkwmonroe good catch. we're not going to recover from that15:01
kim0hmm15:01
kim0At least you guys should allocate static IPs I guess15:02
kwmonroerick_h: is there a way for juju to "refresh" the public-ip from the provider?  or does it perhaps poll periodically to see if a public ip has changed?15:02
rick_hkwmonroe: so...I know there's open bugs about machines rebooting to different IPs. I'm not sure what the resolution of those bugs has turned to off the top of my head15:03
marcoceppikwmonroe: from my experience the agent will recover, eventually15:03
kwmonroeok - i'll at least give it the update-status window and see.  meanwhile i'll checkout some lp bugs15:03
kwmonroethx rick_h marcoceppi15:03
kim0how much is that window15:04
kwmonroe5 minutes15:04
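
That window is the model's update-status hook interval; a sketch of inspecting or shortening it, using the Juju 2.x model-config key:

    juju model-config update-status-hook-interval        # defaults to 5m
    juju model-config update-status-hook-interval=1m     # poll more aggressively while debugging
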
kim0cool :)15:04
kim0Are charms written in a way such that any app is scalable .. For example, I see kubeapi-load-balancer and kubernetes-master and I just don't know whether or not it's wise to try scaling those up15:06
lazyPowerkim0 - each of those components are written in such a way that you can scale them15:06
magicaltroutyou can scale charms, whether they expect to be scaled is up to the developer15:06
kim0do I get some error if something should not be scaled15:06
lazyPowerand they will reconfigure when a scale event (up or down) happens.15:06
lazyPowerkim0 the only charm in that bundle that doesn't expect to be scaled at this point is easyrsa15:07
kim0so what happens when I scaled that up15:07
lazyPowerwe have an open item of medium priority on that functionality. it involves cloning the CA, doing intermediate signing certificates, and a dance that i haven't fully wrapped my head around.15:07
lazyPoweryou'll get 2 separate Certificate Authorities, adding new units may be problematic at that point.15:08
lazyPowerhowever existing units will be unaffected.15:08
kim0Ok thanks15:08
kim0I guess it should at least error out, that scaling easyrsa is not a recommended action or something15:09
kwmonroekim0: the juju agent on my re-started unit does eventually sync / display the new public address, which means addressability is good again.  it took 15 minutes for me -- not sure why that amount of time, but i'll deallocate again and see if it's consistent.15:10
kim0Awesome, thanks15:10
magicaltroutmarcoceppi: dunno who monitors the form submission at developer.juju.solutions but if you find one from my new guy and a quality message that i didn't write at all15:11
magicaltroutcan you send him some tokens?15:11
kim0Docs say "By default, the channel will be set to stable, which means your cluster will always be upgraded to the latest stable version of Kubernetes available." .. Does that auto-reboot nodes by default ?15:13
marcoceppikim0: it won't reboot your machines at all, just updates the software running on them15:14
kim0marcoceppi: so a reboot is not even needed ?15:17
marcoceppicorrect15:18
marcoceppiand if you combine that with things like live patching, you won't have to reboot for kernel updates either15:18
magicaltroutwinning15:18
kim0cool stuff15:19
kim0you still have to reboot for glibc updates though :P hhh15:20
kwmonroekim0: second deallocate/restart went faster.  the unit refreshed its public ip 5 minutes after coming back up.15:23
kim0Very good!15:24
lazyPowerthat may cause an issue with the certs15:24
kwmonroekim0: so that's all well and good, but if things expect a static ip, you still might be in trouble.15:24
kwmonroeright lazyPower15:25
lazyPowercerts add the public IP as a SAN15:25
lazyPowerat time of request15:25
lazyPowerwhich we have an open bug to re-key infra, but that hasn't been started yet15:25
kim0afaik, the public IP isn't even "on" the VM ?15:25
kim0does it go through the trouble of finding it out to add it to the cert15:25
lazyPowersure does15:25
lazyPowerunit-get public-address is passed in as a subject-alt-name on the CSR15:26
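
A way to see the address that ends up as the SAN, running the same hook tool in a unit's context (unit name taken from this deployment):

    juju run --unit etcd/0 'unit-get public-address'    # the value placed on the CSR at request time
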
kim0is there a mode, where almost all VMs do not get public IPs to begin with15:27
lazyPowerkwmonroe https://github.com/juju-solutions/layer-tls-client/issues/815:28
lazyPoweri filed a bug about this so we can track it and get layer-tls-client updated with some logic to re-key during normal operating events15:28
kwmonroekim0: ^^ :)15:28
kim0👍15:29
kim0so basically today, upon nodes reboot .. I should expect the cluster to break, right15:30
magicaltroutreboots are for wimps15:30
kim0hhhh15:30
magicaltroutreal men redeploy from scratch15:31
kwmonroekim0: no, you should not expect any OS-level reboot to change the IP.  it's only when you suspend/deallocate from the cloud control panel.15:32
kim0Aha got it15:32
kim0juju set-constraints kubernetes-worker mem=8G15:33
kim0mm that ^ still doesn't help me get the SSD variant of Azure VMs :/15:33
kwmonroekim0: docs will typically use the least-common-denom for application constraints.. in your case mem=8G is a common constraint that will work across aws/gce/azure/etc.  juju also supports cloud-specific constraints.  so in your case, if you're really sure you want azure, you could run "juju set-constraints <app> instance-type=Standard_D1_v2"15:40
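
The two flavors of constraint side by side; the DS instance type is kwmonroe's example of an SSD-backed Azure size:

    juju set-constraints kubernetes-worker mem=8G                         # portable across clouds
    juju set-constraints kubernetes-worker instance-type=Standard_DS1_v2  # azure-specific
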
kim0Thanks!!15:41
kwmonroekim0: the list of instance types for azure (in order of cost) are here:  https://github.com/juju/juju/blob/2.1/provider/azure/instancetype.go#L29, and pricing here: https://azure.microsoft.com/en-us/pricing/details/cloud-services/15:42
kim0kwmonroe: Thanks for all the help!15:42
kwmonroenp!15:42
* magicaltrout gives kwmonroe a supportive pat on the back15:43
magicaltroutwell done kevin \o/15:43
kwmonroeheh15:43
kim0kwmonroe: fyi, it seems the instancetype.go is missing some newer VM types like Standard_DS1_v2 (actually the whole DSv2 series) and the Av2 series ..etc15:47
kwmonroeyeah kim0 - looks like instancetype.go is not the definitive list (sorry 'bout that).  juju does support DSX_v2, and you can see the full instance-type constraint list here:  http://paste.ubuntu.com/24511657/15:55
kim0cool np15:56
kim0When I try to deploy a VM with constraint "instance-type=Standard_DS1" .. I only get "current: provisioning error"16:04
kim0Any way to get more meaningful errors ?16:04
kwmonroekim0: what region are you deploying to?16:22
kim0westeurope16:22
kwmonroekim0: i was gonna say your region might not have DS machines available, but https://azure.microsoft.com/en-us/regions/services/ shows westeurope is supported16:25
kim0Can I see the error coming back from azure somewhere16:25
kwmonroekim0: you can go through the azure portal, select your resource group, and then "Deployments"16:25
kim0kwmonroe: Got it .. I see the error16:26
kwmonroekim0: is it the PutVirtualNetworkOperation from bug 1673847?16:26
mupBug #1673847: azure does not handle failed deployment well (juju-2.1.2) <azure-provider> <intermittent-failure> <retry> <juju:Triaged> <juju 2.1:Won't Fix> <https://launchpad.net/bugs/1673847>16:26
kim0Unable to create the VM because this size is not available in the cluster where the availability set is created16:26
kim0Strange error .. first time to get this from Azure!!16:27
kwmonroekim0: let's see if juju is exposing that with a different status format.. try "juju status <unit> --format=yaml"16:28
kim0not really the machine part is only saying "provisioning error"16:29
kim0http://paste.ubuntu.com/24511934/plain/16:30
kim0I guess this *might* be because I'm using a test account16:31
kwmonroeok kim0 - would you mind adding another comment to bug 1673847, listing this new error message?  i think minimally it would be nice if juju would expose these provisioning error messages rather than having to dig through the azure portal.16:34
mupBug #1673847: azure does not handle failed deployment well (juju-2.1.2) <azure-provider> <intermittent-failure> <retry> <juju:Triaged> <juju 2.1:Won't Fix> <https://launchpad.net/bugs/1673847>16:34
kim0kwmonroe: Done16:37
kim0kwmonroe: Are we nearing the 1.6.2 push :)16:37
kwmonroekim0: that's a question for lazyPower / kjackal / ryebot.  they're mashing lots of buttons atm.16:38
lazyPowerkim0 once my co-worker gets back from lunch we were planning on a release16:38
kim0Sorry confused16:38
kim0Awesome keep rocking :)16:38
=== daniel1 is now known as Odd_Bloke
Budgie^Smore o/ juju world17:29
lazyPower\o Budgie^Smore17:39
lazyPowergot a 1.6.2 release coming your way shortly17:39
Budgie^SmorelazyPower coolio, would need to find an environment to deploy it since I no longer have the setups I had :-/17:44
lazyPowerBudgie^Smore apply for cloud developer credentials17:45
lazyPowerhelp us find bugs17:45
lazyPowerhttp://developer.juju.solutions17:45
Budgie^SmorelazyPower you know me, always happy to go bug hunting ;-)17:46
Budgie^SmoreOK I have filled in the form, soooooo now we wait ;-)17:59
=== frankban is now known as frankban|afk
lazyPowerawesome :) when marco isn't out doing sprint type stuff you should get an email with some creds18:02
lazyPoweror a request for more info18:02
lazyPowereither way, you'll hear back from us shortly18:02
Budgie^Smorewell even if it's a day or 2 it is still faster than HR ;-)18:06
lazyPowerallright 1.6.2 has hit stable channels18:07
* lazyPower raises a cuppa coffee to the sky18:07
lazyPowerkim0 1.6.2 release ping18:07
kim0\o/18:07
kim0how do I upgrade to that18:08
lazyPowerhttps://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/18:08
kim0mm `juju status` does not show an upgrade is available .. just saying :)18:54
Budgie^Smoremmm is juju status supposed to show upgrade availability or just the current status of the model?18:59
kim0Docs say: You can use juju status to see if an upgrade is available. There will either be an upgrade to kubernetes or etcd, or both.19:00
kim0kubernetes-master seems to have upgraded itself to 1.6.2!19:00
lazyPowerjuju status --format=yaml19:00
lazyPowersmart output i think hides that notation19:00
lazyPowerwhich is a good call, i had not thought about that.. one of those things i've just become accustomed to. inversely, if you're using the GUI, the GUI shows it under the charm detail19:01
lazyPowerkim0 thats snaps in action ;)19:01
kim0even in the yaml format .. I cannot easily spot what should tell me there's an upgrade19:02
kim0@lazyPower .. so it's normal that master auto-upgrades right19:02
lazyPowerkim0 only when there is an upgrade in its subscribed channel.19:03
lazyPowerit wont auto-upgrade between minor releases, eg: when we cut and push 1.7 to stable19:03
lazyPoweryou wont auto upgrade to 1.7, you will need to configure the charms to look at that channel in order to receive the upgrade19:04
lazyPowerbut for minor patch releases, you get those automatically19:04
marcoceppikim0: lazyPower juju searches for upgrades like daily19:04
kim0yeah .. so for -worker .. there is an upgrade (it's now on 1.6.1) .. but my eyes can't see where the upgrade is19:04
marcoceppiso you won't see an upgraded charm available for quite a while19:04
marcoceppiwell, maybe 6 hours19:04
kim0master upgraded like 10 mins after you guys announced it :)19:04
kim0lucky me I guess19:04
kim0but -worker still waiting19:04
marcoceppilazyPower: we should add a "refresh" action to master and worker, so people like kim0 can just get the latest "now"19:05
lazyPowerkim0 juju upgrade-charm kubernetes-worker should get it prompting you to run the upgrade action.19:05
lazyPower@marcoceppi juju run-action kubernetes-worker/0 upgrade already does that ;)19:05
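
The two-step worker upgrade in one place; repeat the action per worker unit (unit 0 assumed here):

    juju upgrade-charm kubernetes-worker          # pull the new charm revision
    juju run-action kubernetes-worker/0 upgrade   # the gated step that actually upgrades the unit
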
marcoceppioh, duhhh19:05
lazyPower<319:05
kim0Here's the yaml format: http://paste.ubuntu.com/24512609/19:05
kim0there's no hint there is an upgrade, or is there19:06
lazyPowerkim0 - It may not tell you anything about the upgrade until the controller polls for upgrades to charms in the env.19:08
lazyPowerkim0 marcoceppi stated that was ~ every 6 hours or so. perhaps daily. I'm not 100% positive on exactly when it does that either but i can certainly find out and ping back.19:09
kim0aha I see19:09
marcoceppikim0: you can still run the upgrade, even if juju doesn't tell you about one19:09
kim0I triggered the update manually19:09
marcoceppiit'll check when you run the command19:09
kim0yep19:09
kim0I suppose that kills any running pods .. or does it19:09
marcoceppithat's why you'll get the prompt about having to run the upgrade action19:10
lazyPowerOnly in rare instances will it incur downtime, and thats why its gated behind an upgrade action.19:10
lazyPowerbut 90% of the time its only recycling kubelet, which has no effect on the running workloads19:10
kim0Well for me, there was no "prompt" .. it just did its thing19:10
kim0$ juju upgrade-charm kubernetes-worker19:10
kim0Added charm "cs:~containers/kubernetes-worker-23" to the model.19:10
kim0that was all .. no questions asked19:10
kim0Upgrade done! well .. I have to say, this is sweet!19:11
lazyPowerkim0 thanks for the positive feedback :) the team appreciates it19:13
kim0What if I want to change the VM type of worker nodes19:13
kim0I can add a constraint, add new nodes .. but how do I get rid of the old ones19:14
lazyPowerkim0 i would recommend blue-green style deployment19:14
Budgie^Smorekim0 you would have to add / destroy units for that19:14
lazyPowerkim0 juju deploy cs:~containers/kubernetes-worker worker-beta --constraints="mem=16G"19:14
lazyPoweradd relations19:14
lazyPowerwait for the dust to settle19:14
lazyPowercordon the og nodes19:14
lazyPowerjuju run-action kubernetes-worker/0 pause19:14
lazyPower... down the line...19:14
lazyPoweronce all nodes have been evacuated19:15
lazyPowerjuju remove-application kubernetes-worker19:15
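
Pulled together, the blue-green swap lazyPower outlines might look like the following; the relation endpoints mirror the standard CDK bundle of the era and are worth double-checking against your model:

    juju deploy cs:~containers/kubernetes-worker worker-beta --constraints="mem=16G"
    juju add-relation worker-beta:certificates easyrsa:client
    juju add-relation worker-beta:kube-api-endpoint kubeapi-load-balancer:website
    juju add-relation worker-beta:kube-control kubernetes-master:kube-control
    juju add-relation worker-beta:cni flannel:cni
    juju run-action kubernetes-worker/0 pause     # repeat per old unit to evacuate workloads
    juju remove-application kubernetes-worker     # once all old nodes are drained
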
kim0Got it!19:15
kim0Why do I need to add relations manually19:16
lazyPoweri have a lot of doc updates i see19:16
kim0I didn't need that in the initial deployment19:16
lazyPowerwell when you do the blue-green style, there's nothing telling juju how to wire the deployment19:16
lazyPowerbundles are pre-modeled applications in a specific formation for you to consume19:16
kim0ah19:16
lazyPoweryou could take the canonical bundle, and modify it with the alternative deployment and deploy it into the same model19:16
lazyPowerit will converge19:16
lazyPowernow that may come with its own flavor of challenges if you have mutated the model... which is why i would encourage you to do it manually19:17
lazyPowerwhat you can do is just export the model as well19:17
lazyPowermake changes in the yaml19:17
lazyPowerthen apply that yaml19:17
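
One way to do the export/edit/apply loop, assuming a client new enough to ship export-bundle (older clients exported bundles from the GUI instead):

    juju export-bundle > k8s.yaml    # snapshot the current model as a bundle
    # edit k8s.yaml: constraints, unit counts, placement...
    juju deploy ./k8s.yaml           # re-apply; juju converges the model toward it
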
lazyPowerthere's a lot of options, but as a dev that really likes to know how things work, i'm always going to recommend the manual method. Keeps things simple when things go wrong and i can help you unwind or recover from that :)19:18
lazyPowerits harder when we're playing trace the bug19:18
marcoceppilazyPower:  it's easier to just juju set-constraints I think19:22
lazyPoweror that19:22
marcoceppikim0: we'll document this for sure19:22
lazyPowerbut i hope you remember you did that when you dont want that instance type anymore :)19:22
lazyPowerhttps://www.reddit.com/r/kubernetes/comments/699v24/canonicals_support_for_kubernetes_162_released/19:48
lazyPowerupboats appreciated19:48
mwhudsonballoons: i think i need an illustrated map of the ppas you are using...19:53
tychicusis it possible to start a machine that is powered off via juju?20:24
rick_htychicus: to power it back on with juju? how did you power it off via juju?20:27
tychicusjuju run --machine=4 "shutdown -h now"20:28
tychicusmachine is tied to maas, I know i can power it back on from maas, but I wasn't sure if there was a way to juju run --machine=x start20:29
rick_htychicus: oh heh no. juju run works by ssh'ing to the machine20:31
rick_htychicus: so it has to be listening to ssh for that to do anything unfortunately20:32
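
Since juju run rides over ssh, power-on has to come from the MAAS side; a sketch with the MAAS 2.x CLI, where the profile name and system id are placeholders:

    maas <profile> machine power-on <system-id>    # MAAS drives the BMC, no ssh needed
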
tychicusok, that's what I thought20:32
Budgie^SmoreOK I upboated that, now if only I had somewhere to "play" with 1.6.2! ;-)20:32
tychicusI couldn't find the upboat icon20:33
Budgie^Smoretychicus see the arrows at the side of the link?20:33
tychicusah those are the boat sails ;)20:34
tychicusI'll run through a cdk  deployment today, if only I could figure out why my floating IP's work in openstack, but directly connecting an instance to the ext_net does not :(20:38
kim0Would "upgrade-charm" also upgrade me to 1.7 when it's released20:55
kim0or is it involved20:56
marcoceppikim0: you'll need to change configuration i believe, it just targets 1.6/stable21:01
kim0ok still easy enough :)21:02
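
The configuration change marcoceppi means, sketched with the charms' channel option (1.7/stable is a hypothetical future channel here):

    juju config kubernetes-master channel=1.7/stable
    juju config kubernetes-worker channel=1.7/stable
    juju run-action kubernetes-worker/0 upgrade    # then run the gated upgrade per unit
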
kim0is it too much to ask for worker autoscaling for kubernetes :)22:26
rick_hkim0: have to see if https://jujucharms.com/charmscaler/ will work out for you22:32
kim0well that is interesting, however I really meant adding more nodes under k8s which this scaler doesn't seem to do22:33
