/srv/irclogs.ubuntu.com/2017/02/02/#juju.txt

lazyPowerwellllllll00:00
skudadifferent tradeoffs, just testing now00:00
lazyPowerthats not *entirely* true00:00
lazyPoweryou would replicate it to another unit, and that filesystem would be networked right? so the state is still available00:00
siva_guru@lazyPower, here is the hooks code00:00
lazyPowerand if you use a deployment its a blue/green00:00
siva_guruhttp://paste.ubuntu.com/23908126/00:00
skudayes, slower, but there00:00
lazyPowerskuda - here's the ark manifest https://gist.github.com/a55050f50fde8daf11434a09023eef8f00:00
skudahehehe00:00
lazyPowerskuda oh you bet, lxd live migration would just blow the stuff and things out of the water there00:00
siva_guru@lazypower, I seeing the following error in the logs00:01
siva_guru2017-02-01 23:45:45 INFO install   File "/var/lib/juju/agents/unit-contrail-analytics-0/charm/hooks/install", line 92
2017-02-01 23:45:45 INFO install     print "NUM CONTROL UNITS: ", len(units("contrail-control"))
2017-02-01 23:45:45 INFO install                               ^
2017-02-01 23:45:45 INFO install SyntaxError: invalid syntax00:01
lazyPowerand not look back00:01
skudabut I can not do a live migration, all the players will be kicked during the migration window00:01
lazyPowerright00:01
siva_guru@lazpower, the same code works fine with py200:01
lazyPowerits possible to CRIU in docker, but most of the demo's i've seen of this have not been k8s00:01
skudaon the other hand it's pretty neat to know that k8s is going to relocate everything automatically when something fails00:01
lazyPowerits been pure docker, with some wizardry in the backend thats not been shared00:01
skudathanks for the manifest lazyPower00:02
lazyPowernp skuda, if you want the docker source too (like you dont trust me, which you shouldnt, i'm a stranger) i can send you over the dockerfile00:02
skudaI can check the Dockerfile in the registry, no?00:03
lazyPoweri dont think i published it00:03
skudaahhh00:03
lazyPoweri think i just docker pushed because i too like pain00:03
skudahahaha00:03
lazyPowersiva_guru  looking now00:03
skudaok, then... if you could send me it the Dockerfile would be awesome00:03
skuda:)00:03
lazyPowersiva_guru - that error is py3 complaining that you didnt paren your print statement , it should read:  print("NUM CONTROL UNITS: ", len(units("contrail-control")))00:04
skudathere is not another cluster aware ui for LXD other than OpenStack, isn't it?00:04
lazyPowerso thats a python3 error, not a hook execution error, it hasn't actually executed that bit, python is interpreted00:04
lazyPowerskuda - let me get back to you on that one00:04
stormmoreI wonder if k8s will do live migrations ever, don't think so cause of that assumption of being able to suffer the loss of a container temporarily until it spins up another00:05
lazyPowerskuda - i know the guys over on flockport are using lxd, and there's some other stuff, but you do know that juju does lxd dontchya? :D00:05
skudaI found some projects on github but all of them only managed a single node00:05
siva_guru@lazyPower, that's the minor thing. The thing I am concerned is how is the relation-joined hook getting called as part of install?00:05
lazyPowersiva_guru - install the python3 flake8 checker, and flake8 your code, it will help you catch all those python3 errors00:05
skudahahaha yes lazyPower it's something to manage the cluster after it's created and get graphs, repeating tasks support and niceties like those.00:06
lazyPowersiva_guru - i see no evidence of it being called though, thats a perfectly acceptable error as python is interpreted, so it was looking through the code file before it executed to map its control flow00:06
lazyPowersiva_guru - python3 flake8 your code, and give it another go once you've resolved the python3 changes00:06
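A minimal sketch of the py3-compatible form of that hook line, for reference; the units() function below is a hypothetical stand-in for whatever helper siva_guru's install hook actually defines:

    #!/usr/bin/env python3
    def units(service):
        # hypothetical stand-in for the charm's units() helper
        return ["contrail-control/0"]

    # py2 form (a SyntaxError under python3):
    #   print "NUM CONTROL UNITS: ", len(units("contrail-control"))
    # py3: print is a function, so the arguments must be parenthesised
    print("NUM CONTROL UNITS: ", len(units("contrail-control")))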
lazyPowersiva_guru if its still misbehaving i'll eat my hat and we'll take another look at why its misbehaving00:07
lazyPowerand i may not eat my hat, because shrimp taco's were delicious and i'm not hungry after eating them00:07
stormmorelazyPower - video or it didn't happen :P00:07
skudastormmore: k8s will not have live migration for some time at least, they are very focused on services with more than 1 instance available at the same time, If you can trust that's always the case you don't need live migrations00:07
lazyPowerstormmore whyyyy did i know you'd have peanut gallery commentary after that? :D00:07
siva_guru@lazyPower, thanks. Will do00:07
lazyPowerskuda - theres the whole class of workload thing again00:08
skudabut it's not always the case, this is the reason why vms are not going to disappear anytime soon00:08
stormmoreskuda yeah I know, especially considering who is behind k8s and their ethos00:08
stormmorelazyPower - humor is the bread of life :P00:08
lazyPowerskuda https://gist.github.com/89f4c7596c0a8ee3c47422e63db1a23a00:08
skudathanks lazyPower00:09
lazyPowernp np00:09
stormmoreskuda but it is the case, Google proved that by running their whole environments that way00:09
lazyPowerunofficially, if that explodes you own both halves. but its a fun validation workload since you're already doing game servers00:09
lazyPowermight be fun to mix it up and add ARK to the list, as private servers seem to be the way to go there00:10
lazyPowerunless you like pain00:10
lazyPowerthen play on the official servers and enjoy the unfettered RUST abusers00:10
skudagoogle is pretty much stateless, I mean stateless like https requests00:10
skudathey don't usually keep sockets open for long, it's about request after request00:10
stormmoreskuda I get that but they run stateful services the same way as stateless00:10
skudaand that makes sense for them, sure00:10
skudawell the industry is going after that now, and it's amazing for many use-cases00:11
stormmorestateless only gives you so much until you have to store something in a stateful service00:11
stormmoreI will admit that stateful services take a bit more planning when you are running them in containers00:12
skudawell that's the trap many people got caught in, "orchestrate all the stateless nginx that you want, you are going to finally consume dynamo, or ebs, or 'put your favourite stateful service here' and pay for it big"00:12
skudabut things are getting better slowly anyway00:13
skudaI would like to see more usage and integration of LXD, it's a container with many good things from vms00:13
skudabut Docker get all the attention00:15
lazyPower^ that00:15
stormmoreDocker is more mature by a long way00:15
lazyPoweri think what has hindered lxd adoption is the fact you dont get a native feel on other clients like osx00:15
lazyPowerit kind of requires an ubuntu rig to really shine. i'm sure someone will fight me on that00:16
lazyPowerbut thats my 2 cents00:16
skudasome people think the problem is that it is too tied to Ubuntu too00:16
lazyPoweras someone that walks the line of both00:16
lazyPoweri hear all those statements and i want to hug them and ask them to humor me00:16
lazyPowerbut nobody ever does00:16
skudahahahaha00:16
stormmoreThat goes to my comment about maturity for LXD00:17
lazyPoweri'm not sure i agree, but thats a matter of opinion anyway00:18
lazyPowerand if we agreed on everything stormmore we would be super boring00:18
skudait's a shame because I think that together Docker and LXD could make amazing things00:18
stormmoreheck I would have been bored already and probably moved to CoreOS or DCOS instead :P00:18
skudatoo many complex tricks are done today to be able to run stateful services in Docker00:19
skudaLXD brings that in a super natural way, with the added plus of live migrations00:19
stormmoreskuda that I definitely disagree with. there is nothing so complex that you can't run it in docker containers00:19
skudasure, but you have two options for example to put 1 mysql online00:20
skudaslow as hell network storage00:20
skudaor superpricey00:20
skudaor the second option, use the local storage of the docker node running it and keep the process always there00:20
stormmoreskuda live migrations are only useful if you are wanting to "service" the underlying hardware; they still don't help you in a failed hw scenario00:20
skudaand well, in MySQL at least you could use Galera or other solutions to create a cluster and try to live with it00:21
stormmoreskuda Ceph for network storage00:21
skudaCeph is pretty slow00:21
skudathe latency is usually terrible and the IOPS are not much better00:22
stormmorethat sounds like a badly configured Ceph setup00:22
stormmoreCERN uses Ceph for the storage requirements with Petabyte sized clusters00:22
skudahttp://cloudscaling.com/blog/cloud-computing/killing-the-storage-unicorn-purpose-built-scaleio-spanks-multi-purpose-ceph-on-performance/00:23
skudastormmore: I am not speaking about size here I am speaking about speed00:23
skudaand I don't have the resources of CERN to install a cluster with hundreds of computers and drives00:23
skudasome databases that are cloud aware easily restore state from instances that keep working after a partial crash, ElasticSearch for example00:24
skudait takes some time and a lot of bandwidth but could be done with local storage easily with Docker00:24
stormmoreskuda they have clusters that do either 100 IOPS (about the same as local HDD) or 500 IOPS00:25
skudabut not 100% of usages are ok with that pattern00:25
skudathat's too slow for a medium/big database00:25
skudaI have been using only SSD for databases like 2 years now, and before that only SAS disks, you need lots of IO sometimes00:26
skudathe same for big minecraft servers00:26
skudaI've seen one super big survival server full of people saturate 1 SSD00:26
skudathose types of workloads are not designed to be put in Ceph00:26
skudabut works amazingly well in local SSD using LXD for example00:27
skudaI know live migration is not going to solve many of the things (failures) that k8s solves, without proper (external to lxd) clustering thought out on your part00:28
skudain the project I am working on now it is supposed to make it possible to migrate minecraft servers between nodes without interruption00:29
skudaobviously, the frontend, admin, api and all the webservices will be running in Docker containers orchestrated via k8s or dc/os00:29
lazyPowerskuda - using mcserver (or is it mcadmin? i forget) as the admin ui i assume?00:33
skudadepending on the tests I will be doing the coming days maybe even the minecraft servers will be split in smaller units, as small as possible, and orchestrated via k8s or dc/os, it's one of the options I will be testing.00:33
skudanope, we are developing one00:33
lazyPoweroh nice00:33
skudathe most used is multicraft I think00:33
lazyPowerman i love it when people show up with their own solutions00:33
lazyPowerthat to me, is far more interesting than say, hopping on github, finding a thing, and then finding a way to profit from it00:33
skudawe tried it but it doesn't solve all our needs and introduces some problems00:33
lazyPoweryeah00:34
lazyPoweri used mcadmin (i think again? naming?) and it was a shitshow when it came to backups and specifically the restore00:34
siva_guru@lazypower, that resolved the issue00:34
lazyPowerevery last single one of them was corrupted00:34
lazyPowersiva_guru FAN TASTIC!00:34
siva_guruThanks for all your help00:34
lazyPowersiva_guru thats what i'm talkin bout boooyaaaaa00:34
lazyPowernp np00:34
lazyPowerhappy to get you unblocked :)00:34
siva_guru;)00:34
siva_guru:)00:34
bdxlazyPower: "i think what has hindered lxd adoption is the fact you dont get a native feel on other clients like osxb" - entirely00:34
skudahahaha, similar problem for multicraft, backups sucks, but not only that, some other things are not fully working or very weird00:35
lazyPowersiva_guru - it can be tough going sometimes, especially when you're making changes you dont fully understand. sorry that bit you, but the py2->py3 change was a painful one for me at first until i started linting *everything*00:35
stormmoreskuda I don't know about that, 15GB/s seems pretty good even for large scale DBs00:35
lazyPowerbdx <3 hey dude00:35
lazyPowerwb00:35
siva_guru@lazypower, yes.. I'm moving from trusty to xenial and from py2 to py300:36
skudastormmore, 15Gb/s? where? with how many disks?00:36
stormmorehttps://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf00:36
skudayou are not going to get 15Gb/s without special network hardware anyway00:36
skudathe total bandwidth of the cluster doesn't matter a lot00:36
skudawhat matters is how much my small Mysql instance on one node will be getting00:37
skudaand it's totally impossible to get more bandwidth than your network card offers you, usually 1Gb, 10Gb in special situations00:37
bdxhaha - just reading the scrollback ... docker has an os x virtualbox wrapper now ... even though the docker containers aren't really being deployed to the osx host, it gives devs the feel/usability as if they were running native00:38
stormmoreskuda sure you can, you can bond NICs. My understanding last I really looked at CERN was they were using 10Gb x 2 for each side of their cluster00:38
skuda150 clients00:38
lazyPowerbdx - s/virtualbox/xhyve/00:38
bdxlazyPower: I'm assuming thats what you are referring to?00:38
lazyPowerftfy00:38
lazyPoweryeah, their xhyve shenanigans00:38
stormmoreskuda well it is a 30PB cluster!00:39
bdxahh yea, my bad00:39
skudaDuring March 2015 CERN IT-DSS provisioned nearly 30 petabytes of rotational disk storage00:39
skudafor a 2 week Ceph test00:39
lazyPower"Oh look its native!!!"00:39
lazyPowerdude...00:39
lazyPowerits boot2docker in a dress00:39
lazyPowerstop lying to me docker inc00:39
lazyPowerbut i'll give them this00:39
lazyPowerit works really well and its gotten a ton of bug fixes00:39
lazyPoweri prefer it over docker-machine now00:39
skudastormore I don't have 30 petabytes of disks to be consumed by 150 clients at the same time hahahaha00:39
skudaIf I had this cluster size, probably I would be fine with Ceph, yes, but for my use case I would be better off purchasing a good SAN before that00:40
stormmoreskuda I get that, just pointing out that Ceph isn't as slow as you think. If it is, the design of your Ceph environment is wrong00:40
skudadid you check the comparison with ScaleIO I sent to you?00:40
bdxstormore: +100:40
skudaI am speaking by the way of Ceph clusters not at the scale of CERN, much smaller ones00:41
skudaBTW in the cern test at 15Gb/s every client is getting 100Mbit/s00:42
skudathat with 150 clients serving and writing 4Mb files, so highly sequential00:43
skudaif you think that's ok for a big OLTP database we have different opinions00:43
stormmoreoh I am aware of ScaleIO and it has a different approach that Ceph. I am only considering Ceph and it checks off more boxes for my workloads than ScaleIO00:43
skudablksize  mode   threads  trans/sec     req/sec  min_req_time  max_req_time  avg_req_time00:44
skuda16384    seqwr  16       122,57Mb/sec  7844,22  0,07          1484,69       2,0400:44
skudathat it's a sad and old intel ssd 32000:45
skuda122Mb local, 1 disk, latency 2,0400:45
skudait's obviously much slower than current generation SSD or nvme00:45
skudastill it's faster than what 1 client is able to get from that super big ceph cluster of CERN00:46
skudait's not needed for every case, sure, sometimes it is00:46
skudaI am not saying ceph is not a cool tech that can work in many many cases00:46
lazyPowerok i need to run some errands and i'm going to be traveling for the next few days until the 8'th. So hit me up on the mailing list if you gents need anything. Otherwise i'll try to check for pings but replies are going to be super latent00:47
lazyPowergood luck in your exploration skuda, i'm here to help if needed00:47
lazyPowerstormmore - keep fighting the good fight00:47
skudaonly saying it's not the solution for everything00:47
lazyPowerbdx - poke magicaltrout in the forehead for me ;P that wily brit00:47
stormmorelazyPower always and have fun in Belgium00:47
skudalazyPower: Thanks! I will contact you if I hit roadblocks!00:47
lazyPowers/me/the mailing list/00:48
lazyPowerftfy00:48
lazyPower<300:48
skudayes!00:48
skudamailing list, I know!!00:48
skudaI am going bed now too, it's 2am here in Spain ;P00:48
skudaI will try tomorrow juju k8s, before in LXD with conjure, later I will try to get it to install on the 4 dedicated servers I have to test00:49
lazyPowerskuda - if you've got the time we will be in ghent belgium. you're more than invited to attend the charmer summit and we can run deployments in real time00:50
lazyPowerand with that i'm leaving for real this time00:50
TeranetQuestion : juju status gives me a bit too much info, is there a way I can filter it so it only lists out the Units I have deployed ???03:10
lazyPowerTeranet: try 'juju status $application'03:12
lazyPowerTeranet or `juju status --format=short`03:12
Teranetthx still not really what I like to see but better03:18
lazyPowerTeranet - if there's another filtered view that would be useful for you, if you dont mind filing a bug its likely to get included in the list of filters. you can see what kind of enhanced status outputs we have available via juju status --help03:21
TeranetThis is almost perfect : juju status --format=oneline     just more like a table look would be nice03:26
Teranetwith color03:26
lazyPowerexcellent, glad you've found something that works better for you03:26
lazyPowerbut thats good feedback, and again a bug would be handy to reference when talking about the feature with the core devs :)03:27
TeranetI certainly can file a bug if I know where this could best be filed, within the juju github bug reports03:27
lazyPowerhttps://bugs.launchpad.net/juju/+filebug   would be preferable03:28
Teranetok will do thx03:32
lazyPowerThanks Teranet :)03:32
Teranetreported it as detailed as I could  : https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/166114503:37
mupBug #1661145: Feature request for juju status  <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1661145>03:37
Teranetnow I do still have to figure out why neutron and openvswitch won't do VLAN's for the Openstack setup on eth1 :-(   grrrr03:38
mhiltonmorning all07:39
kjackalGood morning Juju world!07:59
=== frankban|afk is now known as frankban
admcleodkjackal: :]08:57
ZiclazyPower: hi (NO, it's not a new problem, as usual :>), a simple feature wish (if you confirm it's a good idea, I can officially submit it): redirect http to https in the kube-api-loadbalancer09:29
ZicI can do it on my own but as the vhost file is managed by Juju, it will be overwritten09:29
ZiclazyPower: just for browsing content, I understand that kubectl cannot match the redirection but it is directly configured to https in the default ~/.kube/config09:30
chetannHello ,09:52
chetannneed help in juju09:52
chetannanybody there?09:52
chetannHi , need help in setting up the version of kubernetes using juju09:54
Zicchetann: hi, describe your problem precisely09:57
chetannwe are running this : juju deploy cs:bundle/canonical-kubernetes-2009:58
chetannor let me ask in a different way09:59
chetannin this charm : charm: "cs:~containers/kubernetes-master-10"   how to check what version of kubernetes master is going to be provisioned10:00
chetannwhen we deploy the above charm it deploys version 1.5.2 for the kubernetes master, but we wish to have 1.4.4 for the kubernetes master10:01
marcoceppichetann: Hi, you can do that, but it's a bit manual. Let me dig you up instructions10:03
chetannok10:05
chetannthanks10:05
=== tinwood is now known as tinwood_afk
=== freyes__ is now known as freyes
lazyPowerZic - I think thats a good contribution14:19
ZiclazyPower: I had once again a "certificate error" in one of my k8s-masters (saw this in /var/log/syslog) but it was just for one pod (kube-dns), restarting this pod (by deleting it) just fixed the problem14:40
Zic(the last time, all requests had this type of error)14:40
lazyPowerwe need to figure out why thats happening14:41
Zicnot so important this time as I recovered quickly14:41
lazyPowerZic - fyi i'm going to be traveling until feb 8th14:41
Zicthe kube-dns pod was in CLBO during this time14:41
lazyPowerstarting later today14:41
Zicok, I have no troubles anyway, just to let you know if you see some other report about this :)14:42
lazyPowerZic - i'll keep that in the back of my mind and try to come up with a suggestion for us to trace this issue14:42
Zicthanks :)14:42
lazyPowerbut as it stands right now, you're finding some edge cases we haven't seen in our long running instances, or testing14:43
lazyPowerso its hard to really recommend a fix until we truly understand whats happening14:43
Zicthe weird part is that it only happened to kube-dns this time, all other requests that I saw were OK14:44
Zicand it stopped when I deleted the pod and it respawned14:44
mbruzekHello Zic I see you are back at kubernetes today.14:49
lazyPowero/ mbruzek14:50
mbruzek\o lazyPower14:50
Zicmbruzek: I didn't crash the cluster this time :D14:51
Zicjust saw a strange but quickly fixed error :)14:51
mbruzekI have faith in you Zic, you just are not trying hard enough today. You need more coffee14:51
* lazyPower snickers14:52
lazyPowerFeel the internet-troll flow through you mbruzek. The troll exists in all of us.14:52
lazyPower<314:52
mbruzeksorry. Maybe *I* haven't had enough coffee today.14:53
mbruzekZic knows how to break clusters better than anyone I know. I *like* that!14:53
mbruzekI appreciate that and the feedback and the challenge.14:54
Zic:D14:54
Zicit's the only error I encountered in 1.5.2 :)14:55
lazyPowermbruzek thats a constant state of being for me... lack of coffee14:56
Zichey, this message does not even contain a problem/error, AMAZING -> do you know how I can "clean up" the InfluxDB database? I have some old pods that don't exist anymore, and same for deleted namespaces15:02
ZicI searched through InfluxDB docs but it's not very clear to me15:02
=== tinwood_afk is now known as tinwood
lazyPowerZic - that seems like there's some latency or issue with etcd again if the pods aren't being reaped and namespaces are lingering15:06
Zicoh, I thought that it was normal to keep pods in InfluxDB by default as it can be used for history15:07
Zicbut here, in the drop-down list of Pods, I have some old entries for pods that no longer exist, and old namespaces :(15:08
ZicI need to find some etcdctl command to explore what etcd has15:09
Ziclike a list of pods15:09
Zicetcdctl ls / --recursive15:12
Zicseems OK15:13
lazyPowerZic - yeah, all of the k8s data is stored in /repository/15:14
lazyPowerand it tree's off down there based on object type15:15
ZicI don't know if it's normal but I saw some old namespaces that's here15:15
Zicbut they do not contain any pods or ressources15:15
Zic(and they are not shown in the kubectl get ns)15:15
lazyPoweri dont think it actually wipes the key-space15:15
lazyPoweri think it just wipes the values15:15
Zicok, it seems normal so15:15
ZicI saw also some persistentVolumeClaims that no longer exist15:15
Zicbut as they are not returned by kubectl get pvc --all-namespaces, it seems OK also15:16
ZicI don't know where InfluxDB gets its obsolete pods :/15:16
Zicit's not broken as new namespaces and new pods appeared in Grafana15:16
Zicbut the old one stayed15:17
Zicand as I did so many tests, it's quite long now <315:17
lazyPowerZic - lets follow up on the k8s mailing list to ask about this. I think its behavior of the addon15:17
lazyPowerif the authors indicate it should be getting wiped, we probably have something slightly misconfigured15:17
lazyPoweror some oddity15:17
lazyPowernot certain which15:17
lazyPowerbut i'll err on the former15:17
Zicit's maybe the behaviour yes, I was testing Prometheus (with Grafana also) in my old K8S cluster installed by kubeadm (shame! shame! I had not been introduced to Juju at that time :p) and Prometheus did the wipe15:22
ZicI'm just realising now that InfluxDB has maybe a different behaviour on this point15:22
pranav_Hey. Can anyone here help me with a query on hooks?16:02
perrito666pranav_: ask the question and we'll see who can help you :)16:02
pranav_Alright :). I have multiple relations in my charm that i need to wait on and I want my config-changed hook to be called after all the relations are done16:03
pranav_is there a way that config-change can be called after relation hooks?16:03
perrito666pranav_: until all the relations in one charm right?16:04
pranav_yes. Right now I am moving my charm to blocked state when even one of the relations is not up16:05
pranav_But once relations are done, I don't know how to automatically move to config16:06
perrito666pranav_: mm i thought config-changed was called after relation is established16:07
perrito666lazyPower: happen to know anything about this?16:07
pranav_The documentation says its called after install & upgrade16:07
perrito666rahworks: I see, I believe your option is to check in every relation16:08
pranav_Ah ok. Will have to figure a way out. Can i use the status in any way to automatically trigger somethin in JUJU?16:16
pranav_I did see the following way, but am yet to explore on it :16:17
pranav_@when('apache.installed')
def do_something():
    # Install a webapp on top of the Apache Web server...
    set_state('webapp.available')16:17
rick_hpranav_: perrito666 is this a reactive charm? if so you could use state for this right?16:18
rick_hpranav_: so you can track the state of each relation and then @when each is up, execute16:18
pranav_I haven't checked what reactive charm is. Any pointers to read on it?16:20
rick_hhttps://jujucharms.com/docs/stable/developer-event-cycle16:21
rick_hpranav_: ^ for some beginner notes16:21
rick_hpranav_: lots of folks working on charms have experience on the mailing list and the #juju freenode channel16:21
rick_hpranav_: but it's kind of a framework to help track state and make charming a bit easier16:21
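A minimal reactive sketch of what rick_h is describing, assuming hypothetical interface names; each interface layer sets its '<name>.available' state once that relation is joined and ready, so a handler gated on all of those states only runs after every relation is up:

    from charms.reactive import when, set_state

    @when('database.available', 'cache.available')
    def configure_when_all_relations_ready(database, cache):
        # fires only once *every* listed state is set, i.e. after all
        # relations have been established; do the config work here
        set_state('myapp.configured')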
Zicmbruzek: h/3416:22
Zicoops16:22
mbruzekh/4216:22
Zicthat free hl... don't know why your nick was on my IRC prompt :)16:22
mbruzekNo problem.16:23
pranav_I did read up the event thing but couldn't find anything on the reactive thing. But i will go through it once and get back post some reading. Thanks Guys! :)16:23
Zicmbruzek: to eliminate immediately this use of unwanted hl, I have a question :-] -> we're right that running kubectl commands on a random kubernetes-master (locally for example, or by pointing the ~/.kube/config of your workstation to a master directly instead of the kube-api-loadbalancer) cannot do anything wrong?16:28
Zicbecause I saw in juju status that there is an official "master" of... masters16:29
Zicbut as the nginx vhost of kube-api-loadbalancer just has an upstream { } block, I think it's just round-robin, right?16:29
mbruzekZic: The load balancer is our attempt at making the masters HA.16:30
mbruzekZic: You can scale up your master nodes separately from the worker nodes, and request different sizes from Juju16:31
Zicyeah, but I saw a system of "lock" in /var/log/syslog which said only one of my masters is holding a "lock"16:31
Zicis there a notion of an "active" kubernetes-master? or are they all active?16:31
mbruzekZic: To answer your question more directly. Yes you can point to a master directly in the configuration16:31
Zicok16:31
mbruzekzic: should you lose that node, it will not work.16:32
ZicI feared that I didn't understand something and was doing nasty things by sometimes running against a master which is not "the active one"16:32
Zicmbruzek: yeah, I'm just using this when I don't have the kubectl binary locally16:32
ZicI'm SSHing directly to one master and use its kubectl command16:32
Zicthis message was responsible for my question: leaderelection.go:247] lock is held by mth-k8smaster-03 and has not yet expired16:35
ryebotZic: Shouldn't matter. All of the masters use the same source of truth.16:35
ryebotZic: what was that in response to?16:36
Zicbecause I have this kind of error sometimes in the non-locked masters, but no error at all in the locked one: jwt.go:239] Signature error (key 0): crypto/rsa: verification erro / handlers.go:58] Unable to authenticate the request due to an error: crypto/rsa: verification error16:36
Zicit's not like the first time where all requests have this in return16:36
Zichere, it's just... "sometimes" in /var/log/syslog16:37
Zicall my kubectl commands work perfectly, the dashboard too16:37
ryebotZic: Hmm, not sure what's causing that, but I can tell you with a lot of confidence that it shouldn't matter from where you run kubectl, they all point to the same place16:37
Zicok16:38
ZicI really don't know what can I do with this crypto/rsa error, all is working actually, but I'm fearing a bit16:44
ryebotZic: Can you paste the logs for us somewhere to look at?16:57
Zicyep16:57
Zichttp://paste.ubuntu.com/23912041/16:59
Zicthere is ~5 examples in this extract17:00
ryebotZic: Thanks, we're taking a look17:13
ryebotZic: The lock logging, at least, is normal and expected. Looking into the error.17:17
ryebotZic: Did you by any chance change the service account token signing key?17:18
Zicryebot: nope, my only operation since the restoration of this cluster was testing StatefulSet :)17:21
ryebotZic: okay, cool; still investigating.17:22
Zicryebot: for the record, in the last weeks I had a ton of errors like that, not just a few, in all masters, and all operations were completely blocked if they involved writing (like kubectl create/delete), reading was OK (get/describe)17:24
Zicryebot: here, I just have some, and the "locking" master does not have any17:24
Zicall is working actually, I'm just fearing it will come again :x17:24
ryebotZic: understood, it's a reasonable concern17:25
=== mskalka|afk is now known as mskalka
Zicryebot: another maybe useful piece of information, I had kube-dns showing a "30" in the Restarts column of kubectl get pods17:48
Zicdon't know if it seems high17:49
Zicryebot: I'm leaving my office but I'm staying on IRC as usual, feel free to ping back me if you discover something; and thanks for your involvement as usual :)17:57
Mac_Hi, I'm using "juju charm get" to download charms. But I'm not able to download some of the charms, e.g. keystone, neutron-api, and some others.18:24
Mac_But I can deploy them directly from the charm store.18:24
Mac_$ juju charm get keystone18:26
Mac_Error: keystone not found in charm store.18:26
Mac_Any suggestion?18:27
=== frankban is now known as frankban|afk
magicaltroutjust did my DC/OS office hour demoing juju, quite a few folks on the call and she said she's gonna chuck the video around internally because they're on a big ease of use drive internally18:36
magicaltroutso I better get those Centos base layers working....18:36
rick_hMac_: try just charm get? You using the charm snap?18:37
rick_hMac_: actually the command is "pull" in there now.18:38
rick_hcharm pull keystone18:39
Mac_charm get result in the same error18:39
Mac_$ charm get keystone18:39
Mac_Error: keystone not found in charm store.18:39
rick_hMac_: I think you've got a really out of date tool as get is no longer a valid command18:39
Mac_$ charm pull keystone18:40
Mac_Error: pull is not a valid subcommand18:40
Mac_I'm working on Ubuntu 14.04.5.18:40
Mac_I'm trying to patch the charm for my environment, therefore need to make local charm repo.18:41
rick_hMac_: oic, hmm. I think the new charm command is only available as a snap these days.18:41
rick_hMac_: maybe just download the zip file from the page https://jujucharms.com/keystone/18:42
rick_hMac_: look on the right column by the file listing for "Download .zip"18:42
Mac_So the zip is the same as the "charm get"?18:44
Mac_Will try, thanks.18:44
Mac_rick_h: thanks.18:44
rick_hMac_: yes, it's the zip in the store for that charm18:45
Mac_Another question, can "juju deploy" resolve the series and revision, e.g. "cs:trusty/percona-cluster-31", with the downloaded zip or the dir from "charm get"?18:49
Mac_Or should I select series and revision before download?18:50
rick_hMac_: what version of Juju are you on?18:59
Mac_$ juju --version18:59
Mac_1.25.9-trusty-arm6418:59
rick_hMac_: so for 1.25 you need to set up a charm repo directory structure that has the charm in a directory called trusty18:59
Mac_.19:00
Mac_└── trusty19:00
Mac_    ├── ceilometer19:00
Mac_    ├── ceilometer-agent19:00
Mac_    ├── glance19:00
Mac_    ├── mongodb19:00
Mac_    ├── nagios19:01
Mac_    ├── nova-cloud-controller19:01
Mac_    ├── nrpe19:01
Mac_    ├── ntp19:01
Mac_    └── rabbitmq-server19:01
Mac_Like this?19:01
Mac_And I have something like "cs:~cordteam/trusty/neutron-api-4"19:04
Mac_also need ./~cordteam/trusty/ ?19:05
=== jog_ is now known as jog
Mac_It seems the charm ./revision is not auto generated.20:14
Mac_for example, cs:trusty/ceilometer-240, but the ./revision is 4420:16
Mac_So if I deploy from cs, it shows 240, but if I deploy from local, it show 4420:17
Mac_And the contents are also different20:22
rick_hMac_: so when you deploy a charm locally, it auto updates the revision as it can't tell what changes there are/etc20:22
rick_hMac_: when you go from the store, each upload to the store creates a revision and so the store is tracking it20:23
rick_hMac_: so there's a disconnect when you go from the store to a local files on disk20:23
Mac_ok, I'll try "charm get" again, but I just did that this morning.......20:24
rick_hMac_: I'm sorry, you don't need to re-download20:29
rick_hMac_: if you deploy from local it'll just increment the number over and over20:29
rick_hMac_: there's absolutely no association to the revision you download from the store and the revision it shows once you deploy it locally to be honest20:29
Mac_But I thought the charm deployed with "cs:trusty/ceilometer" and "charm get ceilometer" should both be the latest.....20:34
Mac_And now I cannot "charm get", I think it's because I did "bzr lp-login".20:35
Mac_$ charm get ceilometer20:36
Mac_Branching ceilometer to /cord/build/platform-install/juju-charm/trusty/var20:36
Mac_Warning: Permanently added 'bazaar.launchpad.net,91.189.95.84' (RSA) to the list of known hosts.20:36
Mac_Permission denied (publickey).20:36
Mac_ConnectionReset reading response for 'BzrDir.open_2.1', retrying20:36
Mac_Warning: Permanently added 'bazaar.launchpad.net,91.189.95.84' (RSA) to the list of known hosts.20:36
Mac_Permission denied (publickey).20:36
Mac_Error during branching:  Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.20:36
Mac_Is it because I'm not a charmer?20:36
Mac_And there's no lp-logout, so I'm stuck.....20:38
Mac_@@20:38
rick_hMac_: so the issue is that charm get is from a time when all charms had to be put into launchpad bzr and then the store pulled them out of there20:40
rick_hMac_: but today, charms are uploaded with a newer charm tool (the snap) and can come from github, your own drive, etc20:40
rick_hMac_: that's why the "download zip" is your best bet atm20:40
rick_hMac_: so I'd not use anything pulled from bzr and I'd stop using the charm get command all together because it's just not current enough20:41
ryebotZic: after some investigation, we still don't have a solution. Would you mind opening a bug and tagging us in it so we can track it?20:41
ryebotZic: On our end, we'll keep investigating.20:41
Mac_I see....20:42
Mac_rick_h: Can the new charm tool (the snap) get old version of charm?20:51
rick_hMac_: yes you need to use the full URL to get an older version like cs:trusty/keystone-521:02
iceyhas anybody tried mixing bash + python in a reactive, layered charm?21:04
rick_hicey: not seen it myself, what's got you thinking about the mix?21:05
iceyrick_h: a discussion we had on the openstack team a couple of days ago21:05
iceyrick_h: I couldn't come up with a way to make it work with a bit of thinking but figured that maybe somebody else had thought about it21:06
kwmonroeicey: we've mixed bash actions with reactive py charms.. see https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager as the reactive py charm with actions/* as bash stuffs.21:18
iceykwmonroe: I've done that kind of thing before, I'm wondering more something that actually mixes reactive bits21:19
kwmonroeicey: which reactive bits?  you can do stuff like 'is_state' from bash, https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/mrbench#L2021:20
iceykwmonroe: imagine a base layer that `apt-get install -y samba`, and then adding a python layer on top of that21:21
iceyfor example21:21
kwmonroewell your first problem is samba...21:21
magicaltroutdon't you use hdfs for everything these days?21:21
kwmonroedon't you?!?!21:21
iceyha kwmonroe21:21
iceyok, how about `apt-get install -y squid`21:22
iceypoint is mixing layers that use bash with layers that use python21:22
iceyin the actual reactive bits21:22
kwmonroeicey: does your py layer need to react to things like "apt.installed.squid"?  i think that'll work -- stub would know for sure.21:27
iceykwmonroe: part of the question then is how does the bash reactive stuff get called21:27
iceygiven that both python and bash reactive bits may want to execute on each hook21:28
kwmonroestub: if i apt install squid in a bash layer, and include that bash layer in an upper python layer, will @when(apt.installed.squid) recognize that squid was installed and be set?21:30
kwmonroeicey: as for "how does bash reactive stuff get called", it happens with calls to 'charms.reactive x'21:31
kwmonroecharms.reactive is a bash script available on anything that has charms.reactive in its wheelhouse21:31
iceykwmonroe: but how would my `squid.sh` get executed so that I could call `charms.reactive set_state('apt.squid.installed')21:32
kwmonroeicey: i don't know what squid.sh is in this scenario, but any bash stuff that needs to set a state would do "sudo apt install squid; charms.reactive set_state good2go", and then you could react in a later layer with @when(good2go).21:37
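The python-layer side of that, as a minimal sketch: a handler in the upper layer that only fires after the bash layer has run `charms.reactive set_state good2go` (the 'good2go' state name is just kwmonroe's example from above, and 'myapp.ready' is hypothetical):

    from charms.reactive import when, set_state

    @when('good2go')
    def after_bash_layer_installed_squid():
        # squid (or whatever the bash layer installed) is present now
        set_state('myapp.ready')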
iceykwmonroe: let me make a super basic version and share, I thihnk it's confusing21:38
kwmonroeroger that icey, but don't push my limits.  if you do, you'll have to answer to cory_fu.  all i know is that bash stuff can do reactive stuff by calling "charms.reactive foo", where foo is:  http://paste.ubuntu.com/23913564/21:40
iceykwmonroe: I know about that, we'll see if this (super stupid) example can be made to work ;-)21:41
iceykwmonroe: https://github.com/ChrisMacNaughton/layers_test21:43
kwmonroeicey: by virtue having to do more than 2 clicks through that repo, i can tell you're going to need cory_fu.21:44
iceyHA21:44
iceykwmonroe: I'm just goint to try that charm :-P21:44
iceywell, pair of layers, built into a charm21:44
cory_fuicey: LGTM21:45
iceycory_fu: you think that will actually work?21:45
cory_fuShould, yeah21:45
iceyawesome :)21:45
kwmonroehey icey, LGTM.  that should work.21:46
iceythanks kwmonroe ;-)21:46
iceywow +1 cory_fu kwmonroe :) it works!21:52
iceydownright voodoo ;-P21:52
kwmonroeicey: if you blog about your experiences, you'll need another 15 minutes of help.21:52
iceywhy would I need help to blog about it...?21:53
kwmonroelol21:53
cory_fuicey: Ignore kwmonroe's sass.  :)21:54
kwmonroeicey: i fubar'd that.  i meant to say 'you'll *get* another', as if irc help was tied to evangalism.21:54
cory_fuheh21:54
iceyhahaha21:54
icey=! cory_fu21:54
icey+121:54
kwmonroeyou had it right.. hahaha != cory_fu.  he doesn't mess around.21:55
iceythanks again guys :)21:57
Mac_rick_h: thanks!!22:02
=== mskalka is now known as mskalka|afk
