/srv/irclogs.ubuntu.com/2017/02/14/#juju.txt

stokachui see that 502 error sometimes too00:00
stokachuhard to reproduce00:00
stormmorecould it be because I don't have dns set up correctly?00:01
stormmoreyup that was the problem00:05
stormmorenow I have to figure out why my ingress isn't presenting the right wrong cert00:51
lazyPower"the right wrong cert" - so many questions about this statement...00:54
stormmorelazyPower well I am expecting an error due to DNS not being setup instead of a CA / self-signed issue00:57
stormmorelazyPower do you have an example of an https ingress pulling its cert from the secrets collection to help me see where I went wrong?01:01
stormmorefor some reason my ingress is pulling a self-signed cert!!01:02
lazyPowerhttps://kubernetes.io/docs/user-guide/ingress/01:02
lazyPowerunder the TLS header01:02
stormmoreyeah that is what I was using :-/01:03
lazyPowerhttps://github.com/kubernetes/ingress/blob/master/controllers/nginx/README.md#https01:03
lazyPowerare you attempting to configure this via configmap?01:03
lazyPowerjust perchance? we have an open PR to implement that feature, however its author is on vacation01:03
lazyPoweri may need to piggyback it in01:03
stormmoreno just did a kubectl create secret tls wildcard --key <key file> --cert <cert file>01:05
lazyPowerhmm ok01:05
stormmorehttps://gist.github.com/cm-graham/51c866e87934b53daa64afa104a4f6b7 is my YAML01:07
lazyPowerstormmore - can you confirm the structure of the secret has the keys 'tls.key' and 'tls.crt'?01:09
stormmorelazyPower - yeah that is one of the things I checked01:09
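[Aside: a minimal sketch of the pattern being debugged here, assuming a TLS secret named "wildcard" created as above; the host, service name, and ports are placeholders, not taken from stormmore's gist.]
    # verify the secret structure lazyPower asks about:
    kubectl get secret wildcard -o yaml     # data: should carry tls.crt and tls.key
    # a 2017-era Ingress referencing that secret:
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: example-ingress
    spec:
      tls:
      - hosts:
        - app.example.com
        secretName: wildcard                # must match the tls secret's name exactly
      rules:
      - host: app.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: 80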
lazyPowerhonestly i've only ever tested this with self-signed tls keys01:10
lazyPoweri'm not sure i would have noticed a switch in keys01:10
stormmoreI am still curious as to where it got the cert it is serving01:11
lazyPowerlet me finish up collecting this deploy's crashdump report and i'll context switch to looking at this stormmore.01:11
lazyPoweri'm pretty sure the container generates a self signed cert that it will offer up by default01:11
lazyPowerand you're getting that snakeoil key01:11
stormmoreyeah that is what I am thinking01:11
lazyPowerlike the ingress rule itself isn't binding to that secret01:11
lazyPowerso its falling back to default01:11
stormmoreone way to test that theory... going to go kill a container!01:11
stormmorehmmm so the container needs to present that cert first?01:12
lazyPoweri dont think so01:12
lazyPowerstormmore ok sorry about that, now i'm waiting for a deploy of k8s core on gce.01:28
lazyPoweri'll pull this in and try to get a tls ingress workload running01:28
stormmorelazyPower no worries, as always I don't wait for anyone :P01:28
lazyPowergood :)01:28
lazyPowerif you cant get this resolved i'll happily take a bug01:28
stormmorelazyPower I am trying a different namespace to see if it is a namespace overload issue01:28
stormmoreOK definitely not a namespace issue and even deleting the deployment, service and ingress didn't stop it serving the right wrong cert01:30
stormmoreand I have confirmed that the container serves the right cert if I run it locally01:34
lazyPoweryou mean locally via minikube or via lxd deployment or?01:34
stormmoreeven simpler ... docker run -p...01:35
lazyPowerok, i would expect there to be different behavior between that and the k8s ingress controller01:37
lazyPowerit has to proxy tls to the backend in that scenario + re-encrypt with whatever key it's serving from, configured in the LB01:37
stormmorejust ruling out the container 100%01:38
stormmoreit serves both tls and insecure traffic at the moment so should be fine for lb/ingress at the moment01:38
* lazyPower nods01:39
stormmoreyeah I am running out of ideas on what to try next :-/01:42
stormmorewell short of destroying the cluster and starting again!01:46
stormmoreyou should only need to setup the deployment, service and ingress right?02:13
lazyPowerstormmore correct02:13
stormmorehmmm sigh :-/ still not serving the cert from the secret vault02:13
lazyPowerstormmore - this is whats incoming https://github.com/kubernetes/kubernetes/pull/40814/files02:18
lazyPowerand it has tls passthrough from the action to the registry running behind it, but it's configured via configmap.02:19
lazyPowerso this branch is a pre-req to get the functionality, but this is our first workload configured with tls that we have encapsulated as an action, which has the manifests02:20
stormmoreyeah I am monitoring that02:21
stormmoreI am also thinking nexus3 is potentially a better option for us as it gives us other repository types than docker registries02:22
stormmorealso I am only using it at the moment as a test service02:22
stormmoregoing to find an AWS region we aren't using and setup the VPC more appropriately for a cluster02:23
stormmoregoing to head home and see if I can do that02:23
=== thumper is now known as thumper-afk
Budgie^SmorelazyPower it's stormmore, do you know of a good guide for spinning up a k8s aws cluster including setting up the VPC appropriately?03:29
abhay_Hi Chris04:24
abhay_are you there chris  ?04:27
veebersabhay_: see pm04:27
abhay_ok04:27
=== frankban|afk is now known as frankban
kjackal_Good morning Juju World!08:55
kklimondahow do I create bundle with local charms?09:11
kklimonda(using juju 2.0.3)09:11
kklimondaright now I have a bundle/ directory with bundle.yaml and charms/ inside, and I'm putting local charms into charms/)09:14
kklimondabut when I try to use charm: ./charms/ceph-269 I get an error >>path "charms/ceph-269" can not be a relative path<<09:15
kklimondaI guess I can workaround by passing an absolute path instead, but that's not what I should be doing09:28
kjackal_Hi kklimonda, I am afraid an absolute path is the only option at the moment.09:42
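[Aside: a minimal bundle.yaml illustrating the absolute-path workaround, in juju 2.0-era bundle syntax; the path, series, and unit count are placeholders.]
    series: xenial
    services:                                        # top-level key in juju 2.0-era bundles
      ceph:
        charm: /home/ubuntu/bundle/charms/ceph-269   # absolute path; relative paths are rejected
        num_units: 3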
SimonKLBwhat is the best practice for exposing isolated charms (i.e. charms co-located in LXD containers) ?10:22
SimonKLBright now i manually add iptables rules to port forward since expose doesnt seem to be enough, but perhaps there is a juju-way ?10:23
magicaltroutthat is the way SimonKLB10:33
magicaltroutall expose does is if you have a provider that understands firewalls, to open the port10:34
magicaltroutLXD obviously doesn't so expose does nothing10:34
SimonKLBmagicaltrout: right, do you know if there has been any discussion regarding this before? it would be neat if the expose command did the NATing for us if the targeted application was in an LXD container10:37
SimonKLBbut perhaps this is not implemented by choice10:37
magicaltroutSimonKLB: I've brought it up before, I don't think it ever really got anywhere. You could do this type of thing https://insights.ubuntu.com/2015/11/10/converting-eth0-to-br0-and-getting-all-your-lxc-or-lxd-onto-your-lan/10:50
magicaltroutbasically just bridge the virtual adapter through the host adapter and get real ip addresses10:50
magicaltroutall depends on how much you can be bothered :)10:50
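[Aside: the kind of manual forwarding SimonKLB describes - a DNAT rule on the LXD host redirecting a public port to the container; the interface name and container address are placeholders.]
    # forward traffic arriving on the host's port 80 to the charm's LXD container
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
      -j DNAT --to-destination 10.0.8.15:80
    # let the forwarded traffic through if the FORWARD chain policy is restrictive
    iptables -A FORWARD -p tcp -d 10.0.8.15 --dport 80 -j ACCEPT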
SimonKLBmagicaltrout: i wonder how well that's going to work on a public cloud though10:59
kklimondaany idea how to debug juju MAAS integration? I can deploy juju controller just fine, but then I can't deploy my bundle - juju status shows all machines in a pending state, and maas logs show no request to create those machines11:02
magicaltroutSimonKLB: indeed, it will suck on a public cloud :)11:04
magicaltrouti reckon there is probably scope for a juju plugin someone could write to add that functionality though. Something like juju nat <application-name>11:05
magicaltroutbut i'm not that guy! ;)11:05
kklimondaalso, are there any chinese mirrors of streams.canonical.com?11:18
magicaltrouthey kklimonda i'm not a maas user so I can't offer help, but you could also try #maas if you're not already there to see if other people are awake11:40
kklimonda#juju seems to be more active11:41
magicaltroutlol fair enough11:41
magicaltroutit will certainly pick up later in the day when more US folks come online11:41
magicaltroutkjackal_ might be able to provide some MAAS support, I'm not sure11:41
ayushcholcombe: Hey11:43
ayushcholcombe: I needed some help regarding the Ceph Dash charm11:43
ayushhelp11:45
magicaltroutayush: you might find people awake on #openstack-charms11:52
ayushThanks. Will check there :)11:53
anrahHi! Can someone help with juju storage and cinder?11:54
anrahMy deployments are working fine to my private openstack cloud, but I would want to use cinder volumes on my instances for logfiles etc.11:55
anrahI have https enabled on my openstack and I use ssl-hostname-verification: false11:56
anrahUnits get added without problem, but when I want to add storage to instances I get error https://myopenstack.example.com:8776/v2/b3fbae713741428ca81bca384e037540/volumes: x509: certificate signed by unknown authority11:57
kjackal_kklimonda: I am not a maas user either, so apart from the usual juju logs I am not aware of any other debugging facilities12:37
kjackal_anrah:  You might have better luck asking at #openstack-charms12:38
anrahI'm not deploying openstack :)12:40
anrahI'm deploying to OpenStack12:40
anrahOpenStack as provider12:40
kklimondakjackal_: so it seems juju is spending some insane amount of time doing... something, most likely network related, before it starts orchestrating MAAS to prepare machines13:00
cory_fustub, tvansteenburgh: ping for charms.reactive sync13:01
kklimondathis is a lab somewhere deep in china, and the connection to the outside world is just as bad as I've read about - it looks like once juju finishes doing something it starts bringing up nodes, one at a time13:02
magicaltroutkklimonda: so a few things should happen, MAAS will check to make sure it has the correct images as far as I know, if it doesn't it'll download some new ones, likely Trusty and Xenial13:03
magicaltroutthen when juju spins up it will start downloading the juju client software and then do apt-get update etc13:03
kklimondafor the maas images, I've created a local mirror with sstream-mirror and pointed MAAS to it13:04
kklimondait's definitely possible that juju is trying to download something else13:06
magicaltroutyeah it will download the client, then when that's set up, any resources it needs for the charms13:06
kklimondacan I point it to alternate location?13:07
magicaltroutnot a clue, clearly you can run an apt mirror somewhere13:07
magicaltrouti don't know how you point cloud init somewhere else though13:07
* magicaltrout taps in kjackal_ or an american at this point13:08
kklimondaI don't think it's even getting to apt13:08
kjackal_kklimonda: I am not sure either, sorry13:09
kklimondacontroller is deployed and fine, and machines are in pending state without any visible progress for at least 10 minutes (that's how long it took for juju to spawn the first of three machines)13:09
magicaltroutjuju status --format yaml might, or might not give you more to go on13:10
magicaltroutrick_h loves a bit of MAAS when he gets into the office13:12
* rick_h looks up and goes "whodawhat?"13:12
kklimondayeah, I'll wait around ;)13:12
rick_hkklimonda: what's up? /me reads back13:13
magicaltrouthe does love MAAS, don't let him tell you otherwise! ;)13:13
kklimondarick_h: I have a MAAS+Juju deployment somewhere in China, and juju add-machine takes ages13:13
rick_hI do, my maas https://www.flickr.com/gp/7508761@N03/47B58Y13:14
kklimondamy current assumption is that juju, before it even starts machine through MAAS, is trying to download something from the internet13:14
kklimondawhich is kinda no-go given how bad internet is there13:15
rick_hkklimonda: probably pulling tools/etc from our DC perhaps. You might know more looking at the detailed logs and doing stuff like bootstrap --debug which will be more explicit13:15
rick_hkklimonda: hmm, well it shouldn't do anything  before the machine starts13:15
rick_hkklimonda: it's all setup as scripts to be run when the machine starts13:15
kklimondabootstrap part seems fine13:16
kklimondaI've done an sstream-mirror run to get agent/2.0.3/juju-2.0.3-ubuntu-amd64.tgz and later bootstrapped it like that: juju bootstrap --to juju-node.maas --debug --metadata-source  tools maas --no-gui --show-log13:17
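[Aside: a rough sketch of the mirroring workflow kklimonda describes, using sstream-mirror from the simplestreams package; the index URL and filter expressions here are assumptions and may not match the exact invocation used.]
    # mirror only the 2.0.3 amd64 agent tarball to a local directory (URL/filters assumed)
    sstream-mirror --no-verify \
      https://streams.canonical.com/juju/tools/streams/v1/index2.json \
      ./tools 'arch=amd64' 'version~2\.0\.3'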
rick_hkklimonda: hmm, no, add-machine should just be turning on the machine, installing jujud on the machine and registering it to the controller13:17
magicaltroutkklimonda: if you tell maas just to start a xenial image does it come up?13:19
kklimondaI can deploy xenial and trusty just fine through UI and it takes no time at all (other than servers booting up etc.)13:20
kklimondabut juju is not even assigning and deploying a machine until it finishes doing... whatever it's doing13:21
kklimondathe funny part is, it seems to be working just fine - only with a huge delay (like 15 minutes per machine)13:21
rick_hkklimonda: hmm, yea might be worth filing a bug with as many details as you can put down. what versions of maas/juju, how many spaces/subnets are setup, what types of machines they are, etc.13:23
kklimondasigh, it's definitely connecting to streams.canonical.com13:24
kklimondaI just tcpdumped traffic13:24
magicaltrouti blame canonical for not having a DC in the back of china!13:24
kklimondasigh, there are mirrors for old good apt repositories13:27
kklimondabut we're living in a brave new world13:27
kklimondaand apparently infrastructure has not yet caught up ;)13:27
magicaltroutwell you can boot off a streams mirror, I wonder why it's not using your config13:28
magicaltroutor is it a fallback. I'm unsure of how simplestreams works, it's like some black art stuff13:28
kklimondathis part seems to be rather thinly documented13:30
ayushHas anyone used the ceph dashboard chime?13:32
ayushcharm*13:32
marcoceppiayush: I have, a while ago13:33
ayushDid you use it with the ceph charms? Or can it be setup with a separate ceph cluster?13:33
marcoceppiayush: I used it with the ceph charms13:34
ayushOkay.13:34
ayushWhich version of juju were you using?13:34
marcoceppiayush: ultimately because you need to run additional software on the ceph nodes to actually gather the insights13:34
marcoceppijuju 2.013:35
ayushmarcoceppi: I ran this. Could you tell me how to get the credentials? "juju config ceph-dash 'repository=deb https://username:password@private-ppa.launchpad.net/canonical-storage/admin-ceph/ubuntu xenial main'"13:38
marcoceppiayush: you'd have to chat with cholcombe or icey on that13:39
ayushmarcoceppi: Thanks :)13:41
kjackal_cory_fu: I would like your help on the badge status PR. Let me know when you have 5 minutes to spare13:44
cory_fukjackal_: Ok, just finishing up another meeting.  Give me a couple of min13:58
Zichi here14:08
ZiclazyPower: are you around?14:08
ZicI have some problem with conjure-up canonical-kubernetes, two LXD machines for kubernetes-worker are staying in "pending"14:09
Zic(and the charm associated are blocked in "waiting for machine" so)14:09
stokachuZic, yea im working to fix that now14:09
Zicah :}14:09
cory_fustub, tvansteenburgh: Thanks for the charmhelpers fix.  We once again have passing Travis in charms.reactive.  :)14:09
stub\o/14:10
Zicstokachu: do you have a manual workaround?14:10
Zichttp://paste.ubuntu.com/23995096/14:18
cory_fukjackal_: Ok, I'm in dbd14:21
kjackal_cory_fu: going there now14:22
cholcombeayush, you have to be given access to that PPA14:35
cholcombeayush, seems you and icey have been in contact.  i'll move this discussion over to #openstack-charms14:36
=== med_` is now known as medberry
=== medberry is now known as med_
stokachuZic, is this the snap version?15:09
Zicstokachu: nope, but I think it was because I forgot to 'apt update' after the add-apt-repository for the conjure-up's PPA :)15:15
Zic(I got the older version of conjure-up, with the new one from PPA it seems to be OK)15:16
ZiclazyPower: are you awake? :)15:33
kklimondais there a juju way for handling NTP?15:38
lazyPowerkklimonda - juju deploy ntp15:43
lazyPowerZic - heyo15:43
kklimondawill it just deploy itself on each and every machine and keep track of new machines I add?15:44
ZiclazyPower: my conjure-up deployment stalled at "Waiting to retry KubeDNS deployment" on one of the 3 masters, don't know if it's normal: http://paste.ubuntu.com/23995418/15:46
kklimondaah,  I see15:46
Zicon the first deploy, I had a "too many open files" error; I increased fs.file-max via sysctl and did a second deployment :)15:46
kklimondaI can create a relation for other units that need ntp15:47
kklimondacool15:47
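[Aside: the ntp charm is a subordinate, so the pattern kklimonda is converging on looks like this; once related, ntp units follow every unit of the principal application, including machines added later. "ubuntu" stands in for any principal charm.]
    juju deploy ntp
    juju add-relation ntp ubuntu    # an ntp unit appears alongside each ubuntu unit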
Zicnow it's just this silly "waiting" which blocks15:47
lazyPowerZic - i saw that when i was testing before the adontactic was updated15:47
lazyPower*addonTactic15:47
ZiclazyPower: I don't know what I can do to unblock it, it's been in this state for 30 minutes15:49
lazyPowerZic - did you swap with the cs:~lazypower/kubernetes-master-11 charm?15:50
Zicyep15:51
ZiclazyPower: I saw the step "Waiting for crypto master key" (I think it's the one you added)15:54
lazyPowerZic -yeah, thats correct15:54
Zicbut I have one of these kubernetes-master instances which stays waiting on KubeDNS :/15:54
lazyPowerZic - give me 1 moment i'm on a hangout helping debug a private registry problem15:54
lazyPoweri think i might have botched the build with the wrong template15:55
lazyPoweri'll need to triple check15:55
Zic:D15:55
ZiclazyPower: just for info, I used the "juju upgrade-charm kubernetes-master --switch cs:~lazypower/kubernetes-master --channel=edge" command15:56
Zic(I didn't see your update with cs:~lazypower/kubernetes-master the first time :D)15:56
Zic(I didn't see your update with cs:~lazypower/kubernetes-master-11 the first time :D)15:56
Zicbut the return displayed with the --channel=edge was actually cs:~lazypower/kubernetes-master-11 so it seems OK15:57
lazyPowerZic - running another build now, give me a sec to build and re-test16:04
Zicnp :)16:05
ZicFYI, my mth-k8stest-01 VM has 8 vCPUs at 2GHz, 16GB of RAM and 50GB of disk (I saw that CPU and RAM were heavily used in my first attempt with 4 vCPUs/8GB of RAM)16:06
lazyPowerZic - so far still waiting on kube-addons16:22
lazyPowerchurning slowly on lxd but churning all the same16:22
lazyPower(i have an underpowered vm running this deployment)16:22
ZiclazyPower: yeah, it stalled a long time at "Waiting for kube-system pods to deploy" (something like that) but this step passed OK16:25
lazyPowerZic - if it doesn't pass by the first/second update-status message cycle it's broken16:25
lazyPowerZic - dollars to donuts its failing on the template's configmap16:25
lazyPowerZic - kubectl get po --all-namespaces && kubectl describe po kube-dns-blahblah16:26
lazyPowershould say something about an error in the manifest if it's what i think it is16:26
Zicyeah, I tried that, it's in Running16:26
lazyPowerplot thickens...16:26
lazyPowermine turned up green16:26
lazyPowerwwaiting on 1 more master16:26
Zicyep16:26
Zicwas long on the 2 others, but it finally came to green16:27
Zicbut one master is still waiting in kubernetes-master/1       waiting   idle       8        10.41.251.165   6443/tcp        Waiting to retry KubeDNS deployment16:27
Zicoops, missed copy/paste16:27
lazyPowerZic - but hte pod is there and green?16:27
Zicyep16:27
Zicdon't know what it's waiting for though :D16:27
lazyPowerlikely a messaging status bug16:29
lazyPowerif the pod is up16:29
lazyPoweractually Zic16:29
lazyPowerrestart that vm16:30
lazyPowertest the resiliency of the crypto key16:30
lazyPowernothing like an opportunity to skip a step :)16:30
Zicoki16:31
lazyPowerZic https://www.evernote.com/l/AX7J_eiKOdNF94_eSoBqGZ3fjz-ZA8qQzAkB/image.png16:36
lazyPowerconfirmation that its working for me locally. if you see different results, i'm highly interested in what was different16:36
lazyPowerZic - and to be clear, i did the switch *before* the deployment completed16:36
lazyPoweri do need to add one final bit of logic to the charms to update any existing deployments16:37
lazyPowerit's not wiping the auth.setup state which it should be on upgrade-charm.16:37
ZiclazyPower: yep, I did the --switch ~5s after conjure-up prompted me to press Q to quit :)16:38
Zic(as you said, I need to switch when juju status prints "allocating")16:38
lazyPoweryeah :) thats just to intercept before deployment and start with -1116:39
lazyPowernot test an upgrade path16:39
lazyPowerthis was only tested as a fresh deploy, so the upgrade steps still need to be fleshed out but it should be a simple state sniff and change16:39
Zicfor now, it's stale at : kubernetes-master/1       waiting   idle   8        10.41.251.165   6443/tcp        Waiting for kube-system pods to start16:39
Zic(after the reboot of the LXD machine)16:39
lazyPower:\ boo16:40
lazyPowerok, can i trouble you for 1 more fresh deploy?16:40
Zicall kube-system pods are Running...16:40
Zicyep16:40
lazyPowerthanks Zic  - sorry, not sure what happened there16:40
ZicI'm here for ~1 hour more :p16:40
lazyPowerhowever - did your crypto-key validation work?16:40
lazyPowerwere you able to verify all units had the same security key16:40
ZicI didn't test it as I thought this KubeDNS waiting error was blocking the final steps of installation16:41
Zicoh16:41
lazyPowerif the kubedns pod is running16:41
Zicit switched to active o/16:42
Zicjust now16:42
lazyPowerdid it?16:42
Zicyep16:42
lazyPowerfantastic, it was indeed a status messaging bug16:42
lazyPowerlooks like perhaps the fetch might have returned > 016:42
lazyPowernot certain, but that's why the message says that: it's waiting for convergence of the dns container16:42
Zicyeah, I just rebooted the LXD machine with this status message blocked16:43
Zictake ~4minutes to switch to active/idle/green16:43
lazyPowerupdate-status runs every 5 minutes16:44
lazyPowerso that seems about right16:44
Zicok16:44
Zicas "reboot" of LXD machine is too fast, I don't know if it's a good test for resilience16:44
Zicif I poweroff instead, and wait about 5 minutes16:44
ZicI need to find how to re-poweron an LXD machine :D16:44
Zicit's my first-time-use of LXD :p16:45
lazyPowerZic - lxc stop container-id16:45
lazyPowerlxc start container-id16:45
lazyPowerlxc list shows all of them16:45
lazyPowerZic - did you snap install or apt install?16:45
lazyPowerjust curious :)16:45
Zicapt16:45
ZicI just followed the homepage quickstart :)16:46
Zic(as it was updated with conjure-up instruction and add-apt-repository)16:46
lazyPowerok16:46
lazyPower:) I'm running a full snap setup and its working beatifully16:46
lazyPowernot sure if you want to dip your toes in the snap space but its a potential for you16:46
Zicthis mth-k8stest-01 VM will stay for test I think16:46
lazyPoweras the snaps seem to move pretty quickly, and they auto-update16:46
Zicso I can test snaps in it :)16:46
lazyPowernice16:47
lazyPowerjust make sure you purge your apt-packages related to juju before you go the snap route16:47
bdxlazyPower: giving CDK another deploy in a few minutes, trying to get the exact steps documented to get deis running post CDK deploy16:49
Zicfor the test VM I can go through snap, for the real cluster, I have right to reinstall it a last time tomorrow morning :x16:49
Zicso I'm not decided if I can use your edge patch directly to production16:49
Zicor if I should wait that it go to master16:49
lazyPowerZic - wait until its released with the charms16:50
Zic(master branch)16:50
lazyPowerZic - there's additional testing, this was just early validation16:50
lazyPowerZic - as well as teh upgrade path needs to be coded (remember this is fresh deploy test vs an upgrade test)16:50
Zicyeah, but as always, deadlines ruin my world: the last time I can reinstall the cluster is tomorrow morning, so I think I will just install the old (released) version with the bug, with a single master16:52
Zicand when your upgrade goes to release, I will add two more masters16:52
Zicdo you think it's the right path?16:52
Zicor is it better to deploy directly with 3 masters on the old release, and poweroff two masters while waiting for your patch to go to prod?16:53
=== Anita is now known as Guest91022
Zic(it's just for the real cluster, I can do every tests I need/want on other VMs :))17:00
Guest91022Hi, this is regarding revoking a few revisions of a charm. The revisions needed to be revoked, so they were first released to a different channel and then we tried to revoke those revisions. But revoking is happening revision-wise.17:01
Guest91022Sorry, revoking is not happening revision-wise17:02
Guest91022grant/revoke is happening for all revisions of the charm.17:02
Guest91022please advise17:02
ZiclazyPower: for the master part it seems ok for now17:34
ZiclazyPower: but for the worker part, if I reboot one of the workers, the ingress-controller on it passes to "Unknown" and tries to respawn on another node... and stays in Pending17:35
Zicdon't know if it's the normal behaviour with the ingress-controller pod17:35
Zic  5m  20s  22 {default-scheduler }   Warning  FailedScheduling pod (nginx-ingress-controller-3phbl) failed to fit in any node17:36
Zicfit failure summary on nodes : PodFitsHostPorts (2)17:36
Zicoh, as the Ingress Controller listens on 80 it's logical that I got this error17:40
lazyPower:()17:51
lazyPowerZic - interesting17:51
lazyPowerso it's trying to migrate an ingress controller to a unit that's already hosting one to satisfy the replica count17:52
Zicyep17:52
lazyPowerit has requested an impossible scenario :P17:52
lazyPoweri love it17:52
Zicbut as it already has an Ingress which listens on *:80... it raises a PodFitsHostPorts17:52
lazyPoweryet another reason to do worker pooling17:52
lazyPowerand have an ingress tier17:52
Zicdoes StatefulSet have a kind of "max one pod of this type per node"?17:53
Zicit's maybe one possible solution :)17:53
lazyPowerwe would need to investigate the upstream addons and see if they would be keen on accepting that17:53
lazyPowerwe don't modify any of the addon templates in order to keep that "vanilla kubernetes" label17:54
lazyPoweri think we do one thing, which is sniff arch17:54
lazyPowerbut otherwise, it's untainted by any changes17:54
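[Aside: the "max one pod of this type per node" behaviour Zic asks about is what a DaemonSet gives (at most one pod per matching node), rather than a StatefulSet; in 2017-era Kubernetes, scheduler anti-affinity annotations were the alternative. A sketch with placeholder names and image tag, not the CDK addon template itself:]
    apiVersion: extensions/v1beta1    # DaemonSet API group at the time
    kind: DaemonSet
    metadata:
      name: nginx-ingress-controller
    spec:
      template:
        metadata:
          labels:
            app: nginx-ingress
        spec:
          containers:
          - name: nginx-ingress-controller
            image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.1   # placeholder tag
            ports:
            - containerPort: 80
              hostPort: 80    # binding hostPort 80 is why two replicas cannot share a node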
Zicit's not a crash issue, so I'm kinda happy with it anyway :D17:54
lazyPowersounds like its testing positively then?17:54
lazyPoweraside from that one oddball status message issue17:54
Zicyep17:54
lazyPowerfantastic17:55
lazyPoweri'll get the upgrade steps included shortly and get this prepped for the next cut of the charms17:55
lazyPowerthanks for validating Zic17:55
bdxlazyPower: http://paste.ubuntu.com/23996226/18:13
lazyPowerbdx - in a vip meeting, let me circle back to you afterwords18:13
lazyPower~ 40 minutes18:14
bdxk, np18:14
lazyPower<3 ty for being patient18:14
=== mwenning is now known as mwenning-lunch-r
cory_fukjackal_: I know it's late and you should be EOD, but were you +1 to merging https://github.com/juju-solutions/layer-cwr/pull/71 with my fix from earlier?18:44
kjackal_cory_fu: I did not have a successful run but went through the code and it was fine18:45
kjackal_cory_fu: So yes, merge it!18:45
cory_fuheh18:45
cory_fukwmonroe: You want to give it a poke?18:45
cory_fuI ask because I'm trying to resolve the merge conflicts in my containerization branch and would like to get that resolved at the same time18:46
=== frankban is now known as frankban|afk
cory_fukjackal_: Also, one last thing.  Who was you said possibly had a fix for https://travis-ci.org/juju-solutions/layer-cwr/builds/201538658 ?18:47
cory_fu*Who was it you said18:47
kjackal_it was balloons!18:47
cory_fuballoons: Help!  :)18:47
kwmonroeyup cory_fu, do you have cwr charm released in the store?18:48
kjackal_cory_fu: balloons: it is about releasing libcharmstore18:48
cory_fukwmonroe: What do you mean?  That PR branch?18:50
balloonsohh, what did I do? :-)18:50
rick_hkjackal_: libcharmstore? https://github.com/juju/theblues ?18:50
kwmonroeyeah cory_fu.  is it built/pushed/released somewhere, or do i need to do that?18:50
cory_fuballoons: kjackal_ says you might know how to fix our travis failure due to charm-tools failing to install18:50
kwmonroeor cory_fu, do you just want me to pause for 5 minutes, pretend like i read the code, and merge it?18:50
kjackal_rick_h: cory_fu: balloons: its about this: https://github.com/juju/charm-tools/issues/30318:51
cory_furick_h: libcharmstore seems to be just a wrapper around theblues at this point.  Not sure what it provides extra that charm-tools needs18:51
rick_hcory_fu: ah ok cool18:51
cory_fukwmonroe: I can push that branch to the store, but I thought we didn't update the store until it was merged.  I'll push it to edge, tho18:52
kwmonroecory_fu: i would be most thankful for an edge rev18:53
balloonsuse a snap? AFAICT, charm-tools wasn't built for trusty at any point.18:53
balloonsyou could also migrate to xenial I guess18:54
bdxwhere can I find documentation on bundles vs. charmstore, e.g. how do I push a bundle to the charmstore?18:54
rick_hbdx: bundles should act just like charms.18:54
rick_hbdx: they're both just zips that get pushed and released and such18:54
cory_fuballoons: Travis doesn't offer xenial, AFAICT18:55
rick_hbdx: you just have the directory for the readme/yaml file18:55
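[Aside: the minimal flow rick_h describes, sketched with a placeholder namespace and bundle name; the directory needs a bundle.yaml (and, as bdx discovers below, a README.md).]
    charm push ./my-bundle cs:~creativedrive/bundle/my-bundle
    charm release cs:~creativedrive/bundle/my-bundle-0    # revision comes from the push output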
bdxrick_h: http://paste.ubuntu.com/23996456/18:55
balloonscory_fu, building charm-tools for trusty and publishing it seems the most straightforward18:57
bdxrick_h: looks like it needed a README.md18:57
rick_hbdx: otp, will have to look. must be something that's making it not think it's a bundle18:57
rick_hbdx: ah ok18:57
rick_hbdx: crappy error message there for just that :/18:57
bdxrick_h: yeah, I'll file a bug there for clarity18:58
bdxthanks18:58
rick_hbdx: ty18:58
cory_fuballoons, marcoceppi: I thought charm-tools was already available for trusty?18:58
cory_fuballoons: I also can't find the snap for charm-tools18:58
marcoceppisnap install charm18:58
marcoceppithere's a broken dep for trusty18:59
cory_fumarcoceppi: Yeah, I don't understand why the dep broke.  Also, will snaps even work inside Travis?18:59
marcoceppiprobably?19:00
cory_fukwmonroe: Ok, cs:~juju-solutions/cwr-46 is released to edge19:04
kwmonroegracias cory_fu19:05
cory_fukwmonroe: You probably want the bundle, too19:05
kwmonroenah19:05
kwmonroecory_fu: bundle grabs the latest19:05
kwmonroeoh, der.. probably latest stable.  anyway, no biggie, i can wedge 46 into where it needs to go19:05
cory_fukwmonroe: I can release an edge bundle19:06
kwmonroetoo late cory_fu, i just deployed what i needed19:06
cory_fu:)19:06
bdxrick_h: so I was able to get my `charm push` command to succeed, now my bundle shows in the store https://imgur.com/a/w4sYr, but when I select "View", I see this ->  https://imgur.com/a/udu1s19:09
rick_hbdx: what's the ACL on that?19:10
rick_hbdx: hmm ok so that seems like it should work out.19:13
bdxrick_h: unpublished, write for members of ~creativedrive19:13
bdxI could try opening it up to everyone and see if it makes a difference19:14
bdxI was able to deploy it from the cli ...19:14
rick_hbdx: well you should be allowed to look at it like that w/o opening it up19:14
rick_hbdx: right, you're logged in, first question would be a logout/login19:14
cory_fuballoons: Is there a ppa for current stable snap?  Getting this: ZOE ERROR (from /usr/lib/snap/snap): zoeParseOptions: unknown option (--classic)19:16
cory_fuballoons: Also of note, our .travis.yml requests xenial but still gets trusty19:17
bdxrick_h: yeah .. login/logout did not fix19:17
rick_hbdx: k, that sounds like a bug, the output of the ACL from the charm command would be good and I'll try to find a sec to setup a bundle and walk through it myself19:18
catbus1Hi, I used conjure-up to deploy openstack with novakvm on a maas cluster. after it's up and running, I try to ssh to the instance via external port, but I can't. I am checking the switch configurations now (to make sure there is no vlan separating the traffic), but wanted to check here to see if there is any known issue here. I added the external port on maas/conjure-up node to the conjureup0 bridge via brctl.19:18
catbus1I did specify the external port on the neutron-gateway.19:19
bdxrick_h: sweet. thx19:19
balloonscory_fu, ppa for a stable snap?19:36
balloonsthat's a confusing statement19:36
balloonscory_fu, yea, travis might keep you stuck. But again, fix the depends for trusty or ...19:37
bdxcan storage be specified via bundle?19:41
cory_fuballoons: PPA to get the latest stable snapd on trusty.  Version that installs doesn't support --classic19:43
balloonscory_fu, heh. Even edge ppa doesn't supply for trusty; https://launchpad.net/~snappy-dev/+archive/ubuntu/edge19:44
balloonscory_fu, but are you sure it doesn't work? I know I've used classic snaps on trusty19:45
balloonsyou probably just need to use backports19:45
cory_fuballoons: backports.  That sounds promising.19:46
balloonscory_fu, actually, no.. http://packages.ubuntu.com/trusty-updates/devel/snapd19:46
balloonsthat should work19:46
cory_fuballoons: How do I tell it to use trusty-updates?19:47
cory_fuAh, -t19:48
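[Aside: the invocation being worked out - apt's -t flag selects the target release pocket; whether the resulting snapd actually works inside Travis's trusty containers is what gets tested next.]
    sudo apt-get install -t trusty-updates snapd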
cory_fuHave to run an errand.  Hopefully that will work19:49
cory_fuballoons: That didn't work.  :(  https://travis-ci.org/juju-solutions/layer-cwr/builds/20163455319:50
cory_fuAnyway, got to run.19:50
cory_fubbiab19:50
bdxlazyPower: just documenting the next phase of the issue http://paste.ubuntu.com/23996697/19:55
lazyPowerbdx - self signed cert issue at first glance19:57
lazyPowerjust got back from meetings / finishing lunch19:57
marcoceppicory_fu: I'll have a dep fix tomorrow19:57
bdxlazyPower: yea, entirely, just not sure how deis it is ever expected to work if we can't specify our own key/cert :-(19:57
magicaltroutlunch is banned until stuff works!19:58
lazyPowerbdx - spinning up a cluster now19:58
lazyPowergive me 10 to run the deploy and i'll run down the gsg of deis workflow, see if i can identify the weakness here19:58
lazyPowerbdx - i imagine this can be resolved by pulling in the CA from k8s and adding it to your chain, which is not uncommon for self-signed ssl activities19:59
=== thumper-afk is now known as thumper
bdxlazyPower: oooh, I hadn't thought of that20:03
lazyPowerrick_h - do you happen to know if storage made it into the bundle spec?20:12
lazyPoweror is that strictly a pool/post-deployment supported op20:13
kwmonroecory_fu: you badge real good:  http://juju.does-it.net:5001/charm_openjdk_in_cs__kwmonroe_bundle_java_devenv/build-badge.svg20:14
rick_hlazyPower: yes https://github.com/juju/charm/blob/v6-unstable/bundledata.go check storage20:15
lazyPoweraha fantastic20:16
rick_hlazyPower: it can't create pools but can use them, I believe, via constraints and such20:16
bdxrick_h: awesome thanks20:16
bdxrick_h: any docs around that?20:16
=== menn0 is now known as menn0-busy
rick_hbdx: I think it's under documented.20:17
rick_hbdx: sorry, in line at the kid's school ATM so phoning it in (to IRC)20:17
lazyPoweri'm filing a bug for this atm rick_h - i gotchoo20:17
lazyPowerbdx - nope sadly we're behind on that one. However https://github.com/juju/docs/issues/1655 is being edited and will yield more data as we get it20:18
bdxlazyPower: awesome, thx20:19
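[Aside: a sketch of the under-documented storage stanza in bundles, per the bundledata spec linked above; the application, store name, pool, and size are placeholders.]
    services:              # the juju 2.0-era top-level key ("applications" later)
      postgresql:
        charm: cs:postgresql
        num_units: 1
        storage:
          pgdata: ebs,10G  # <store-name>: <pool>,<size>[,<count>]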
lutostagblackboxsw: fginther: we should totally set up a call for figuring out best path to merge https://github.com/juju/autopilot-log-collector & https://github.com/juju/juju-crashdump -- talking use-cases and merging20:22
blackboxswlutostag, seems like  log-collector might be a potential consumer/customer of crashdump as it tries to pull juju logs and other system logs together into a single tarfile20:25
blackboxswahh extra_dir might be what we need there20:25
lutostagblackboxsw: yeah, I was curious about your inner-model stuff in particular to make sure we could do that for you too20:26
stormmorehey lazyPower I am still trying to understand ingress controllers... is there a way of load balancing floating IPs using them?20:27
lutostagblackboxsw: and your use of ps_mem as well (the motivation behind that and what it gives you)20:27
fgintherlutostag, it is a bit tuned to our CI log analysis, but would not be impossible to merge things20:28
lutostagfginther: yeah, OIL has a similar log analysis bit I'm sure, and merging heads seems like a good idea -- get everybody on one crash-collection format, then standardize analysis tools on top of it20:29
lazyPowerstormmore - actually yes, you can create ingress routes that point at things that aren't even in your cluster20:29
lazyPowerstormmore - but it sounds more like you're looking specifically towards floating ip management?20:29
stormmorelazyPower - just trying to efficiently utilize my public IP space while providing the same functionality of using the service type loadBalancer gives in AWS20:30
lazyPowerstormmore - so in CDK today - every worker node acts as an ingress point. Every single worker is effectively an ELB style router20:31
fgintherlutostag, yeah, no objections to going that route. Would hopefully make things easier in the future20:31
lazyPowerstormmore - you use ingress to slice that load balancer up and serve up the containers on proper domains. I would think any floating IP assignment you're looking to utilize there would probably be best served at being pointed directly at the units you're looking to make your ingress tier (this will tie in nicely to future work with support for worker pooling via annotations, but more on this later)20:32
lutostagfginther: since we already have 2 ci-teams doing it, we should smash em together so that others don't have to re-invent, wish I had reached out to you guys earlier tbh :/20:32
stormmorelazyPower yeah I saw that, I am more looking at providing service IP / VIPs instead20:32
lazyPowerok for doing virtual ips, i'll need to do some reading20:32
lazyPowerstormmore - i've only tested nodeport for that level of integration, there's more to be done there for VIP support i'm pretty sure20:32
stormmorelazyPower the problem with using the node's IP is what happens to the IP if the node goes down20:33
lazyPowerright and that varies on DC20:33
lazyPowerit could stay the same, could get reassigned20:33
stormmoreexactly... my idea right now, is figure out how to run keepalived in the environment on one of the interfaces20:34
lazyPowerstormmore - the thing is, when you declare a service in k8s you're getting a VIP20:34
lazyPowerbut its not one that would be routable outside of the cluster20:34
lazyPowerhttps://kubernetes.io/docs/user-guide/services/#ips-and-vips20:35
stormmorehave keepalived as a DaemonSet and then be able to assign an IP to the internal service IP20:35
lazyPowerok, that sounds reasonable20:35
stormmoreseems the logical way of separating out the actual used IPs from the infrastructure 100%20:37
lazyPoweryeah i see what you mean. i was looking at this20:37
lazyPowerhttps://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/20:37
lazyPoweri presume this is more along the lines of what you were looking for, with --type=loadbalancer20:37
lazyPowerwhere its just requesting an haproxy from the cloud at great expense, to act as an ingress point to the cluster VIP of that service20:37
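[Aside: the pattern from that tutorial, roughly - asking the cloud for an external load balancer in front of a deployment's cluster VIP; the deployment name and ports are placeholders.]
    kubectl expose deployment example-app --type=LoadBalancer --port=80 --target-port=8080
    kubectl get service example-app    # EXTERNAL-IP shows <pending> until the cloud provisions the LB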
stormmoreyeah I have been looking at that from an AWS standpoint and allows kubernetes to setup the ELB20:41
lutostagfginther: I'll spend a little time making juju-crashdump more library-like and importable, then maybe I'll set up a call for early next week to discuss what you guys need in terms of output format to minimize disruption to your analysis, if that's ok20:42
fgintherlutostag, a call next week would be fine... But please note that some of the content of autopilot-log-collector is no longer needed20:43
fgintherall of the juju 1 content can be removed20:43
fgintherlutostag, I wouldn't want you to implement a lot of changes for the sake of autopilot-log-collector and have them not be used20:44
cory_fuballoons, marcoceppi: It turns out I was installing "snap" when I should have been installing "snapd".  Unfortunately, it seems that snaps do not in fact work in Travis: https://travis-ci.org/juju-solutions/layer-cwr/builds/20164735620:46
balloonscory_fu, ahh, whoops20:47
balloonsit's possible to ignore that, but not with our package20:47
lutostagfginther: ok, sure, we'll make a list!20:47
cory_fuballoons: What do you mean, "not with our package"?20:57
balloonscory_fu, snapd can be built with selinux / apparmor or perhaps no security model. not sure. But the ubuntu package absolutely wants app armor20:58
lazyPowerkwmonroe - what bundle is that svg from?20:58
lazyPowercharm_openjdk_in_cs__kwmonroe_bundle_java_devenv <-- kinda tells me but kinda doesnt20:58
cory_fulazyPower: The bundle would be cs:~kwmonroe/bundle/java-devenv20:59
lazyPoweroh i guess its this https://jujucharms.com/u/kwmonroe/java-devenv/20:59
lazyPowerninja'd20:59
lazyPowerwatching matt deploy some ci goodness20:59
kwmonroewell if matt can do it, we're in great shape.21:00
lazyPowerbdx - just got deis up and running, stepping into where you found problems i think21:07
lazyPower"register a user and deploy an app" right?21:07
cholcombeso i finally got around to trying lxd on my localhost.  My lxc containers are started with juju and they seem to be stuck in 'allocating'.  I'm not sure why21:12
lazyPowerbdx - can you confirm your deis router is currently pending and not started?21:15
lazyPowerbdx - looks like a collision between the ingress controller we're launching and the deis router.21:15
lutostagcholcombe: if you do a lxc list, do you see any "juju" machines there?21:19
cholcombelutostag: yeah they're def running21:19
cholcombei deployed 3 machines and i see 4 lxc containers with juju- in the name21:19
lazyPowerbdx yeah i was able to get the router scheduled by disabling ingress, we made some assumptions there that would prevent deis from deploying cleanly, both are attempting to bind to host port 8021:20
lutostagcholcombe: I would try lxc exec <juju-...> bash # and then run top in there and see where it is stuck21:20
cholcombelutostag: ok21:20
lutostagcholcombe: at one point there was an issue with things not apt-upgrading appropriately and getting stuck there indefinitely21:20
cholcombelutostag: ahh interesting21:21
cholcombelutostag: i don't see much going on.  let me check another one21:21
lutostagbut that was months ago, still going in and poking is the best way to find it21:22
cholcombelooks like everything is snoozing21:22
lutostagcholcombe: hmm, if they are all still allocating, mind pasting one of the containers "ps aux" to paste.ubuntu.com21:31
cholcombelutostag: sure one sec21:31
lutostagif no luck there, we'll have to see what the juju controller says...21:31
lazyPowerbdx - i see the error between your approach and what its actually doing21:37
lazyPowerbdx - you don't contact the kube apiserver for this, it's attempting to request a service of type loadbalancer to proxy into the cluster and give you all that deis joy21:38
cholcombelutostag: http://paste.ubuntu.com/23997274/21:38
lazyPowerbdx - i'd need to pull in the helm charts and give it a bit more of a high-touch to make it work as is, in the cluster right now. It would have to use type nodeport networking, and we would need to expose some additional ports21:38
lazyPowerBudgie^Smore - cc'ing you on this as well ^    if your VIP work-around works, i'd like to discuss the pattern a little more and dissect how you went about doing it. as it seems like there's a lot of useful applications for that.21:40
cholcombelutostag: the last message i see in the unit logs are that it downloaded and verified my charm21:46
lutostagcholcombe: yeah, I can't see anything popping out at me, nothing helpful in "juju debug-log" ?21:48
cholcombelutostag: no just messages about leadership renewal21:48
lutostagcholcombe: hmm, looks like maybe you are stuck in the exec-start.sh. You could try rebooting one of those containers22:03
lutostagfighting another juju/lxd issue here myself, and a bit out of my depth, wish I could be more helpful22:04
bdxlazyPower: ok, that would make sense, awesome22:07
lazyPowerbdx - however as it stands, this doesn't work in an obvious fashion right away. not sure when i'll have time to get to those chart edits22:10
cholcombelutostag: no worries22:11
bdxlazyPower: what are my options here? am I SOL?22:13
bdxuntil you have time to dive in22:14
lazyPowerbdx  not at all, you can pull down those helm charts and change the controller to be type: hostnetwork or type: nodeport22:14
bdxoooh22:14
lazyPowerthen just reschedule the controller22:14
lazyPowerhelm apply mything.yml22:14
lazyPowerits devops baby22:14
lazyPowerwe have the technology :D22:14
bdxnice22:14
bdxI'll head down that path and see what gives22:14
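[Aside: a sketch of the edit lazyPower suggests - switching the deis router's service from LoadBalancer to NodePort so no cloud load balancer is required; names, namespace, and port values are placeholders and the actual deis chart layout may differ.]
    apiVersion: v1
    kind: Service
    metadata:
      name: deis-router
      namespace: deis
    spec:
      type: NodePort           # was: LoadBalancer
      selector:
        app: deis-router
      ports:
      - port: 80
        nodePort: 30080        # must fall in the cluster's NodePort range (default 30000-32767)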
bdxlazyPower: as always, thanks22:14
lazyPowerbdx - c'mon man :) We're family by now22:15
bdx:)22:16
=== perrito667 is now known as perrito666
=== mup_ is now known as mup
=== SaMnCo_ is now known as SaMnCo
=== psivaa_ is now known as psivaa
=== bryan_att_ is now known as bryan_att
=== rmcadams_ is now known as rmcadams
=== med_ is now known as Guest3904
* Budgie^Smore (stormmore) had to come home since his glasses broke :-/ 22:49
lazyPowerBudgie^Smore - i've been there, done that. literally this last week22:51
andrew-iiOn one of my machines, LXD containers do not seem to deploy. No obvious error, machine (i.e. 0/lxd/0) just never leaves the Pending status, and the agent is stuck on "allocating". Am I missing something obvious?23:00
andrew-iiOh, and occasionally another machine doesn't deploy containers as well. Is there a point where something can cause LXD container deployments to fail without error/retry?23:06
Budgie^SmorelazyPower ouch! just sucks how short-sighted I am :-/23:06
lazyPowerandrew-ii - which version of juju?23:11
andrew-ii2.0.223:12
andrew-iiRunning on MAAS 2.1.323:13
andrew-iiMachine seems to be connected and healthy, and an app will deploy, just not to a container23:14
andrew-iicat /var/log/lxd/lxd.log is just like my other machines, except it doesn't have the "alias=xenial ....", "ephemeral=false...", or "action=start ...." lines23:18
=== menn0-busy is now known as menn0
Budgie^Smoreto make matters worse, I have 4k displays23:33
lazyPowerBudgie^Smore - so i just confirmed the registry action's tls certs work as expected, not sure if you were still on that blocker23:47
lazyPowerbut i can help you decompose a workload using this as a template to reproduce for additional tls terminated apps23:47
lazyPoweri deployed one using letsencrypt certs and it seems to have gone without an issue23:47
Budgie^Smoreyeah that is a blocker still, got sidetracked with another blocker anyway and am looking at integrating with the ELB for now23:51
Budgie^Smorewith the Let's Encrypt method are you manually uploading the key and cert to the k8s secrets vault?23:52
Budgie^SmoreI am also trying to build a cluster in a box so I can test scenarios better than right now23:55
lazyPoweryeah23:57
lazyPowerBudgie^Smore - it's encapsulated in the action, it's taking the base64-encoded certs and stuffing them in the secret template and enlisting it23:57
Budgie^Smorehmmm so if I go to the dashboard and look at the secrets should I see the base64 or the ascii?23:59
lazyPowerbase6423:59
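[Aside: the round-trip being described - Kubernetes stores secret data base64-encoded, so a hand-built TLS secret manifest looks roughly like this; file and secret names are placeholders.]
    base64 -w0 tls.crt    # paste the output under data.tls.crt below
    base64 -w0 tls.key    # paste the output under data.tls.key
    apiVersion: v1
    kind: Secret
    metadata:
      name: registry-tls
    type: kubernetes.io/tls
    data:
      tls.crt: <base64 of tls.crt>
      tls.key: <base64 of tls.key>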
