[00:00] <lazyPower> wellllllll
[00:00] <skuda> different tradeoffs, just testing now
[00:00] <lazyPower> thats not *entirely* true
[00:00] <lazyPower> you would replicate it to another unit, and that filesystem would be networked right? so the state is still available
[00:00] <siva_guru> @lazyPower, here is the hooks code
[00:00] <lazyPower> and if you use a deployment its a blue/green
[00:00] <siva_guru> http://paste.ubuntu.com/23908126/
[00:00] <skuda> yes, slower, but there
[00:00] <lazyPower> skuda - here's the ark manifest https://gist.github.com/a55050f50fde8daf11434a09023eef8f
[00:00] <skuda> hehehe
[00:00] <lazyPower> skuda oh you bet, lxd live migration would just blow the stuff and things out of the water there
[00:01] <siva_guru> @lazypower, I'm seeing the following error in the logs
[00:01] <siva_guru> 2017-02-01 23:45:45 INFO install   File "/var/lib/juju/agents/unit-contrail-analytics-0/charm/hooks/install", line 92 2017-02-01 23:45:45 INFO install     print "NUM CONTROL UNITS: ", len(units("contrail-control")) 2017-02-01 23:45:45 INFO install                               ^ 2017-02-01 23:45:45 INFO install SyntaxError: invalid syntax
[00:01] <lazyPower> and not look back
[00:01] <skuda> but I can not do a live migration, all the players will be kicked during the migration window
[00:01] <lazyPower> right
[00:01] <siva_guru> @lazyPower, the same code works fine with py2
[00:01] <lazyPower> its possible to CRIU in docker, but most of the demo's i've seen of this have not been k8s
[00:01] <skuda> on the other hand it's pretty neat to know that k8s is going to relocate everything automatically when something fails
[00:01] <lazyPower> its been pure docker, with some wizardry in the backend thats not been shared
[00:02] <skuda> thanks for the manifest lazyPower
[00:02] <lazyPower> np skuda, if you want the docker source too (like you dont trust me, which you shouldnt, i'm a stranger) i can send you over the dockerfile
[00:03] <skuda> I can check the Dockerfile in the registry, no?
[00:03] <lazyPower> i dont think i published it
[00:03] <skuda> ahhh
[00:03] <lazyPower> i think i just docker pushed because i too like pain
[00:03] <skuda> hahaha
[00:03] <lazyPower> siva_guru  looking now
[00:03] <skuda> ok, then... if you could send me it the Dockerfile would be awesome
[00:03] <skuda> :)
[00:04] <lazyPower> siva_guru - that error is py3 complaining that you didnt paren your print statement , it should read:  print("NUM CONTROL UNITS: ", len(units("contrail-control")))
[00:04] <skuda> there isn't another cluster-aware ui for LXD other than OpenStack, is there?
[00:04] <lazyPower> so thats a python3 error, not a hook execution error, it hasn't actually executed that bit, python is interpreted
[00:04] <lazyPower> skuda - let me get back to you on that one
[00:05] <stormmore> I wonder if k8s will do live migrations ever, don't think so cause of that assumption of being able to suffer the loss of a container temporarily until it spins up another
[00:05] <lazyPower> skuda - i know the guys over on flockport are using lxd, and there's some other stuff, but you do know that juju does lxd dontchya? :D
[00:05] <skuda> I found some projects on github but all of them were about managing one node
[00:05] <siva_guru> @lazyPower, that's a minor thing. What I'm concerned about is how the relation-joined hook is getting called as part of install?
[00:05] <lazyPower> siva_guru - install the python3 flake8 checker, and flake8 your code, it will help you catch all those python3 errors
[00:06] <skuda> hahaha yes lazyPower it's something to manage the cluster after it's created and get graphs, repeating tasks support and niceties like those.
[00:06] <lazyPower> siva_guru - i see no evidence of it being called though, thats a perfectly acceptable error as python is interpreted, so it was looking through the code file before it executed to map its control flow
[00:06] <lazyPower> siva_guru - python3 flake8 your code, and give it another go once you've resolved the python3 changes
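(A quick editorial sketch of the fix being discussed; the hook code and the units() helper come from siva_guru's paste, so the names here are only illustrative.)

    # Python 2 form -- a SyntaxError under python3, which is what the install log shows:
    print "NUM CONTROL UNITS: ", len(units("contrail-control"))

    # Python 3 form -- print is a function, so the arguments must be parenthesized:
    print("NUM CONTROL UNITS: ", len(units("contrail-control")))

    # Linting the hooks with a python3 flake8 surfaces these before the hook ever runs, e.g.:
    #   python3 -m flake8 hooks/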
[00:07] <lazyPower> siva_guru if its still misbehaving i'll eat my hat and we'll take another look at why its misbehaving
[00:07] <lazyPower> and i may not eat my hat, because shrimp tacos were delicious and i'm not hungry after eating them
[00:07] <stormmore> lazyPower - video or it didn't happen :P
[00:07] <skuda> stormmore: k8s will not have live migration for a while at least, they are very focused on services with more than 1 instance available at the same time, if you can trust that's always the case you don't need live migrations
[00:07] <lazyPower> stormmore whyyyy did i know you'd have peanut gallery commentary after that? :D
[00:07] <siva_guru> @lazyPower, thanks. Will do
[00:08] <lazyPower> skuda - theres the whole class of workload thing again
[00:08] <skuda> but it's not always the case, this is the reason why vms are not going to disappear anytime soon
[00:08] <stormmore> skuda yeah I know, especially considering who is behind k8s and their ethos
[00:08] <stormmore> lazyPower - humor is the bread of life :P
[00:08] <lazyPower> skuda https://gist.github.com/89f4c7596c0a8ee3c47422e63db1a23a
[00:09] <skuda> thanks lazyPower
[00:09] <lazyPower> np np
[00:09] <stormmore> skuda but it is the case, Google proved that by running their whole environments that way
[00:09] <lazyPower> unofficially, if that explodes you own both halves. but its a fun validation workload since you're already doing game servers
[00:10] <lazyPower> might be fun to mix it up and add ARK to the list, as private servers seem to be the way to go there
[00:10] <lazyPower> unless you like pain
[00:10] <lazyPower> then play on the official servers and enjoy the unfettered RUST abusers
[00:10] <skuda> google is pretty much stateless, I mean stateless like https requests
[00:10] <skuda> they don't usually keep sockets open for long, it's request after request
[00:10] <stormmore> skuda I get that but they run stateful services the same way as stateless
[00:10] <skuda> and that makes sense for them, sure
[00:11] <skuda> well the industry is going after that now, and it's amazing for many use-cases
[00:11] <stormmore> stateless only gives you so much until you have to store something in a stateful service
[00:12] <stormmore> I will admit that stateful services take a bit more planning when you are running them in containers
[00:12] <skuda> well that's the trap many people got caught in: "orchestrate all the stateless nginx that you want, you are going to finally consume dynamo, or ebs, or 'put your favourite stateful service here' and pay for it big"
[00:13] <skuda> but things are getting better slowly anyway
[00:13] <skuda> I would like to see more usage and integration of LXD, it's a container with many good things from vms
[00:15] <skuda> but Docker gets all the attention
[00:15] <lazyPower> ^ that
[00:15] <stormmore> Docker is more mature by a long way
[00:15] <lazyPower> i think what has hindered lxd adoption is the fact you dont get a native feel on other clients like osx
[00:16] <lazyPower> it kind of requires an ubuntu rig to really shine. i'm sure someone will fight me on that
[00:16] <lazyPower> but thats my 2 cents
[00:16] <skuda> some people think the problem is that it is too tied to Ubuntu too
[00:16] <lazyPower> as someone that walks the line of both
[00:16] <lazyPower> i hear all those statements and i want to hug them and ask them to humor me
[00:16] <lazyPower> but nobody ever does
[00:16] <skuda> hahahaha
[00:17] <stormmore> That goes to my comment about maturity for LXD
[00:18] <lazyPower> i'm not sure i agree, but thats a matter of opinion anyway
[00:18] <lazyPower> and if we agreed on everything stormmore we would be super boring
[00:18] <skuda> it's a shame because I think that together Docker and LXD could make amazing things
[00:18] <stormmore> heck I would have been bored already and probably moved to CoreOS or DCOS instead :P
[00:19] <skuda> too many complex tricks are done today to be able to run stateful services in Docker
[00:19] <skuda> LXD brings that in a super natural way, with the added plus of live migrations
[00:19] <stormmore> skuda that I definitely disagree with. there is nothing so complex that you can't run it in docker containers
[00:20] <skuda> sure, but for example you have two options to put one mysql online
[00:20] <skuda> slow as hell network storage
[00:20] <skuda> or superpricey
[00:20] <skuda> or the second option, use local storage on the docker node running it and keep the process always there
[00:20] <stormmore> skuda live migrations are only useful if you are wanting to "service" the underlying hardware; it still doesn't help you in a failed hw scenario
[00:21] <skuda> and well, in MySQL at least you could use Galera or other solutions to create a cluster and try to live with it
[00:21] <stormmore> skuda Ceph for network storage
[00:21] <skuda> Ceph is pretty slow
[00:22] <skuda> the latency is usually terrible and the IOPS are not much better
[00:22] <stormmore> that sounds like a badly configured Ceph setup
[00:22] <stormmore> CERN uses Ceph for the storage requirements with Petabyte sized clusters
[00:23] <skuda> http://cloudscaling.com/blog/cloud-computing/killing-the-storage-unicorn-purpose-built-scaleio-spanks-multi-purpose-ceph-on-performance/
[00:23] <skuda> stormmore: I am not speaking about size here I am speaking about speed
[00:23] <skuda> and I don't have the resources of CERN to install a cluster with hundreds of computers and drives
[00:24] <skuda> some databases that are cloud aware easily restore state from instances that keep working after a partial crash, ElasticSearch for example
[00:24] <skuda> it takes some time and a lot of bandwidth but could be done with local storage easily with Docker
[00:25] <stormmore> skuda they have clusters that do either 100 IOPS (about the same as local HDD) or 500 IOPS
[00:25] <skuda> but not 100% of usages are ok with that pattern
[00:25] <skuda> that's too slow for a medium/big database
[00:26] <skuda> I have been using only SSD for databases like 2 years now, and before that only SAS disks, you need lots of IO sometimes
[00:26] <skuda> the same for big minecraft servers
[00:26] <skuda> I've seen one super big survival server full of people saturate 1 SSD
[00:26] <skuda> those types of workloads are not designed to be put in Ceph
[00:27] <skuda> but they work amazingly well on local SSD using LXD for example
[00:28] <skuda> I know live migration isn't going to solve many of the things (failures) that k8s solves without proper (external to lxd) clustering work on your part
[00:29] <skuda> in the project I am working on now it would mean being able to migrate minecraft servers between nodes without interruption
[00:29] <skuda> obviously, the frontend, admin, api and all the webservices will be running in Docker containers orchestrated via k8s or dc/os
[00:33] <lazyPower> skuda - using mcserver (or is it mcadmin? i forget) as the admin ui i assume?
[00:33] <skuda> depending on the tests I will be doing the coming days maybe even the minecraft servers will be split in smaller units, as small as possible, and orchestrated via k8s or dc/os, it's one of the options I will be testing.
[00:33] <skuda> nope, we are developing one
[00:33] <lazyPower> oh nice
[00:33] <skuda> the most used is multicraft I think
[00:33] <lazyPower> man i love it when people show up with their own solutions
[00:33] <lazyPower> that to me, is far more interesting than say, hopping on github, finding a thing, and then finding a way to profit from it
[00:33] <skuda> we tried it but it doesn't solve all our needs and introduces some problems
[00:34] <lazyPower> yeah
[00:34] <lazyPower> i used mcadmin (i think again? naming?) and it was a shitshow when it came to backups and specifically the restore
[00:34] <siva_guru> @lazypower, that resolved the issue
[00:34] <lazyPower> every last single one of them was corrupted
[00:34] <lazyPower> siva_guru FAN TASTIC!
[00:34] <siva_guru> Thanks for all your help
[00:34] <lazyPower> siva_guru thats what i'm talkin bout boooyaaaaa
[00:34] <lazyPower> np np
[00:34] <lazyPower> happy to get you unblocked :)
[00:34] <siva_guru> ;)
[00:34] <siva_guru> :)
[00:34] <bdx> lazyPower: "i think what has hindered lxd adoption is the fact you dont get a native feel on other clients like osxb" - entirely
[00:35] <skuda> hahaha, similar problem for multicraft, backups suck, but not only that, some other things are not fully working or are very weird
[00:35] <lazyPower> siva_guru - it can be tough going sometimes, especially when you're making changes you dont fully understand. sorry that bit you, but the py2->py3 change was a painful one for me at first until i started linting *everything*
[00:35] <stormmore> skuda I don't know about that, 15GB/s seems pretty good even for large scale DBs
[00:35] <lazyPower> bdx <3 hey dude
[00:35] <lazyPower> wb
[00:36] <siva_guru> @lazypower, yes.. I'm moving from trusty to xenial and from py2 to py3
[00:36] <skuda> stormmore, 15Gb/s? where? with how many disks?
[00:36] <stormmore> https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf
[00:36] <skuda> you are not going to get 15Gb/s without special network hardware anyway
[00:36] <skuda> the total bandwidth of the cluster doesn't matter a lot
[00:37] <skuda> what matters is how much my small MySQL instance will be getting on one node
[00:37] <skuda> and it's totally impossible to get more bandwidth than your network card offers you, usually 1Gb, 10Gb in special situations
[00:38] <bdx> haha - just reading the scrollback ... docker has an os x virtualbox wrapper now ... even though the docker containers aren't really being deployed to the osx host, it gives devs the feel/usability as if they were running native
[00:38] <stormmore> skuda sure you can, you can bond NICs. My understanding last I really looked at CERN was they were using 10Gb x 2 for each side of their cluster
[00:38] <skuda> 150 clients
[00:38] <lazyPower> bdx - s/virtualbox/xhyve/
[00:38] <bdx> lazyPower: I'm assuming thats what you are referring to?
[00:38] <lazyPower> ftfy
[00:38] <lazyPower> yeah, their xhyve schenanigans
[00:39] <stormmore> skuda well it is a 30PB cluster!
[00:39] <bdx> ahh yea, my bad
[00:39] <skuda> During March 2015 CERN IT-DSS provisioned nearly 30 petabytes of rotational disk storage
[00:39] <skuda> for a 2 week Ceph test
[00:39] <lazyPower> "Oh look its native!!!"
[00:39] <lazyPower> dude...
[00:39] <lazyPower> its boot2docker in a dress
[00:39] <lazyPower> stop lying to me docker inc
[00:39] <lazyPower> but i'll give them this
[00:39] <lazyPower> it works really well and its gotten a ton of bug fixes
[00:39] <lazyPower> i prefer it over docker-machine now
[00:39] <skuda> stormmore I don't have 30 petabytes of disks to be consumed by 150 clients at the same time hahahaha
[00:40] <skuda> If I had this cluster size, probably I would be fine with Ceph, yes, but for my use case I would be better off purchasing a good SAN before that
[00:40] <stormmore> skuda I get that, just pointing out that Ceph isn't as slow as you think. If it is, the design of your Ceph environment is wrong
[00:40] <skuda> did you check the comparison with ScaleIO I sent to you?
[00:40] <bdx> stormore: +1
[00:41] <skuda> I am speaking by the way of Ceph clusters not at the sale of CERN, much smaller ones
[00:41] <skuda> *scale
[00:42] <skuda> BTW in the cern test at 15Gb/s every client is getting 100Mbit/s
[00:43] <skuda> that with 150 clients serving and writing 4Mb files, so highly sequential
[00:43] <skuda> if you think that's ok for a big OLTP database we have different opinions
[00:43] <stormmore> oh I am aware of ScaleIO and it has a different approach than Ceph. I am only considering Ceph and it checks off more boxes for my workloads than ScaleIO
[00:44] <skuda> blksize	mode	threads	trans/sec	req/sec	min_req_time	max_req_time	avg_req_time
[00:44] <skuda> 16384	seqwr	16	122,57Mb/sec	7844,22	0,07	1484,69	2,04
[00:45] <skuda> that it's a sad and old intel ssd 320
[00:45] <skuda> 122Mb local, 1 disk, latency 2,04
[00:45] <skuda> it's obviously much slower than current generation SSD or nvme
[00:46] <skuda> still it's faster than what 1 client is able to get from that super big ceph cluster of CERN
[00:46] <skuda> it's not needed for every case, sure, sometimes it is
[00:46] <skuda> I am not saying ceph is not a cool tech that can work in many many cases
[00:47] <lazyPower> ok i need to run some errands and i'm going to be traveling for the next few days until the 8th. So hit me up on the mailing list if you gents need anything. Otherwise i'll try to check for pings but replies are going to be super latent
[00:47] <lazyPower> good luck in your exploration skuda, i'm here to help if needed
[00:47] <lazyPower> stormmore - keep fighting the good fight
[00:47] <skuda> only saying it's not the solution for everything
[00:47] <lazyPower> bdx - poke magicaltrout in the forehead for me ;P that wiley brit
[00:47] <stormmore> lazyPower always and have fun in Belgium
[00:47] <skuda> lazyPower: Thanks! I will contact you if I hit roadblocks!
[00:48] <lazyPower> s/me/the mailing list/
[00:48] <lazyPower> ftfy
[00:48] <lazyPower> <3
[00:48] <skuda> yes!
[00:48] <skuda> mailing list, I know!!
[00:48] <skuda> I am going bed now too, it's 2am here in Spain ;P
[00:49] <skuda> tomorrow I will try juju k8s, first in LXD with conjure, later I will try to get it installed on the 4 dedicated servers I have for testing
[00:50] <lazyPower> skuda - if you've got the time we will be in ghent belgium. you're more than invited to attend the charmer summit and we can run deployments in real time
[00:50] <lazyPower> and with that i'm leaving for real this time
[03:10] <Teranet> Question : juju status gives me a bit too much info, is there a way I can filter it so it only lists out the units I have deployed ???
[03:12] <lazyPower> Teranet: try 'juju status $application'
[03:12] <lazyPower> Teranet or `juju status --format=short`
[03:18] <Teranet> thx still not really what I like to see but better
[03:21] <lazyPower> Teranet - if there's another filtered view that would be useful for you, if you dont mind filing a bug its likely to get included in the list of filters. you can see what kind of enhanced status outputs we have available via juju status --help
[03:26] <Teranet> This is almost perfect : juju status --format=oneline     just a more table-like look would be nice
[03:26] <Teranet> with color
[03:26] <lazyPower> excellent, glad you've found something that works better for you
[03:27] <lazyPower> but thats good feedback, and again a bug would be handy to reference when talking about the feature with the core devs :)
[03:27] <Teranet> I certainly can file a bug if I knew where this could best be filed, in the juju github bug reports?
[03:28] <lazyPower> https://bugs.launchpad.net/juju/+filebug   would be preferable
[03:32] <Teranet> ok will do thx
[03:32] <lazyPower> Thanks Teranet :)
[03:37] <Teranet> reported it as detailed as I could  : https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1661145
[03:37] <mup> Bug #1661145: Feature request for juju status  <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1661145>
[03:37] <Teranet> now i do still have to figure out why neutron and openvswitch won't do VLANs for the Openstack setup on eth1 :-(   grrrr
[07:39] <mhilton> morning all
[07:59] <kjackal> Good morning Juju world!
[08:57] <admcleod> kjackal: :]
[09:29] <Zic> lazyPower: hi (NO, it's not a new problem, as usual :>), a simple feature wish (if you confirm it's a good idea, I can officially submit it): redirect http to https in the kube-api-loadbalancer
[09:29] <Zic> I can do it on my own but as the vhost file is managed by Juju, it will be overwritten
[09:30] <Zic> lazyPower: it's just for browsing content; I understand that kubectl can't follow the redirection, but it's configured directly to https in the default ~/.kube/config anyway
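(A minimal sketch of the kind of extra server block such a redirect would need; the real vhost on the kubeapi-load-balancer is templated by the charm, so the listen port and wiring here are assumptions, not the charm's actual config.)

    # assumed plain-http listener added alongside the existing TLS vhost
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }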
[09:52] <chetann> Hello ,
[09:52] <chetann> need help in juju
[09:52] <chetann> anybody there?
[09:54] <chetann> Hi , need help in setting ip version of kubernetes using juju
[09:54] <chetann> Hi , need help in setting up version of kubernetes using juju
[09:57] <Zic> chetann: hi, describe your problem precisely
[09:58] <chetann> we are running this : juju deploy cs:bundle/canonical-kubernetes-20
[09:59] <chetann> or let me ask in a different way
[10:00] <chetann> in this charm : charm: "cs:~containers/kubernetes-master-10"   how do I check what version of kubernetes master is going to be provisioned
[10:01] <chetann> when we deploy the above charm it deploys version 1.5.2 for kubernetes master, but we wish to have 1.4.2 for kubernetes master
[10:01] <chetann> when we deploy the above charm it deploys version 1.5.2 for kubernetes master, but we wish to have 1.4.4 for kubernetes master
[10:03] <marcoceppi> chetann: Hi, you can do that, but it's a bit manual. Let me dig you up instructions
[10:05] <chetann> ok
[10:05] <chetann> thanks
[14:19] <lazyPower> Zic - I think thats a good contribution
[14:40] <Zic> lazyPower: I had once again a "certificate error" in one of my k8s-master (saw this in /var/log/syslog) but it was just for one pod (kube-dns), restarting this pod (by deleting it) just fixed the problem
[14:40] <Zic> (the last time, all requests had this type of error)
[14:41] <lazyPower> we need to figure out why thats happening
[14:41] <Zic> not so important this time as I recovered quickly
[14:41] <lazyPower> Zic - fyi i'm going to be traveling until feb 8th
[14:41] <Zic> the kube-dns pod was in CLBO during this time
[14:41] <lazyPower> starting later today
[14:42] <Zic> ok, I have no troubles anyway, just to let you know if you see some other report about this :)
[14:42] <lazyPower> Zic - i'll keep that in the back of my mind and try to come up with a suggestion for us to trace this issue
[14:42] <Zic> thanks :)
[14:43] <lazyPower> but as it stands right now, you're finding some edge cases we haven't seen in our long running instances, or testing
[14:43] <lazyPower> so its hard to really recommend a fix until we truly understand whats happening
[14:44] <Zic> the weird part is that it only happened to kube-dns this time, all other requests that I saw were OK
[14:44] <Zic> and it stopped when I deleted the pod and it respawned
[14:49] <mbruzek> Hello Zic I see you are back at kubernetes today.
[14:50] <lazyPower> o/ mbruzek
[14:50] <mbruzek> \o lazyPower
[14:51] <Zic> mbruzek: I didn't crash the cluster this time :D
[14:51] <Zic> just saw a strange but quickly fixed error :)
[14:51] <mbruzek> I have faith in you Zic, you just are not trying hard enough today. You need more coffee
[14:52]  * lazyPower snickers
[14:52] <lazyPower> Feel the internet-troll flow through you mbruzek. The troll exists in all of us.
[14:52] <lazyPower> <3
[14:53] <mbruzek> sorry. Maybe *I* haven't had enough coffee today.
[14:53] <mbruzek> Zic knows how to break clusters better than anyone I know. I *like* that!
[14:54] <mbruzek> I appreciate that and the feedback and the challenge.
[14:54] <Zic> :D
[14:55] <Zic> it's the only error I encountered in 1.5.2 :)
[14:56] <lazyPower> mbruzek thats a constant state of being for me... lack of coffee
[15:02] <Zic> hey, this message does not even contain a problem/error, AMAZING -> do you know how I can "clean up" the InfluxDB database? I have some old pods that don't exist anymore, and the same for deleted namespaces
[15:02] <Zic> I searched through InfluxDB docs but it's not very clear to me
[15:06] <lazyPower> Zic - that seems like there's some latency or issue with etcd again if the pods aren't being reaped and namespaces are lingering
[15:07] <Zic> oh, I thought it was normal to keep pods in InfluxDB by default as it can be used for history
[15:08] <Zic> but here, in the drop-down list of Pods, I have some old entries for pods that don't exist anymore, and old namespaces :(
[15:09] <Zic> I need to find some etcdctl command to explore what etcd has
[15:09] <Zic> like a list of pods
[15:12] <Zic> etcdctl ls / --recursive
[15:13] <Zic> seems OK
[15:14] <lazyPower> Zic - yeah, all of the k8s data is stored in /repository/
[15:15] <lazyPower> and it tree's off down there based on object type
[15:15] <Zic> I don't know if it's normal but I saw some old namespaces that are still here
[15:15] <Zic> but they do not contain any pods or resources
[15:15] <Zic> (and they are not shown in the kubectl get ns)
[15:15] <lazyPower> i dont think it actually wipes the key-space
[15:15] <lazyPower> i think it just wipes the values
[15:15] <Zic> ok, it seems normal so
[15:15] <Zic> I also saw some persistentVolumeClaims that no longer exist
[15:16] <Zic> but as they are not returned by kubectl get pvc --all-namespaces, it seems OK also
[15:16] <Zic> I don't know where InfluxDB gets its obsolete pods :/
[15:16] <Zic> it's not broken as new namespaces and new pods appeared in Grafana
[15:17] <Zic> but the old one stayed
[15:17] <Zic> and as I did so many tests, it's quite long now <3
[15:17] <lazyPower> Zic - lets follow up on the k8s mailing list to ask about this. I think its behavior of the addon
[15:17] <lazyPower> if the authors indicate it should be getting wiped, we probably have something slightly misconfigured
[15:17] <lazyPower> or some oddity
[15:17] <lazyPower> not certain which
[15:17] <lazyPower> but i'll err on the former
[15:22] <Zic> it's maybe the behaviour yes, I was testing Prometheus (with Grafana also) in my old K8S cluster installed by kubeadm (shame! shame! I hadn't been introduced to Juju at that time :p) and Prometheus did the wipe
[15:22] <Zic> I'm just realising now that InfluxDB has maybe a different behaviour on this point
[16:02] <pranav_> Hey. Can anyone here help me with a query on hooks?
[16:02] <perrito666> pranav_: ask the question and well see who can help you :)
[16:03] <pranav_> Alright :). I have multiple relations in my charm that i need to wait on and I want my config-changed hook to be called after all the relations are done
[16:03] <pranav_> is there a way that config-changed can be called after relation hooks?
[16:04] <perrito666> pranav_: until all the relations in one charm right?
[16:05] <pranav_> yes. Right now I am moving my charm to blocked state when even one of the relations is not up
[16:06] <pranav_> But once relations are done, I don't know how to automatically move to config
[16:07] <perrito666> pranav_: mm i thought config-changed was called after relation is established
[16:07] <perrito666> lazyPower: happen to know anything about this?
[16:07] <pranav_> The documentation says its called after install & upgrade
[16:08] <perrito666> rahworks: I see, I believe your option is to check in every relation
[16:16] <pranav_> Ah ok. Will have to figure a way out. Can i use the status in any way to automatically trigger something in Juju?
[16:17] <pranav_> I did see the following way, but am yet to explore it :
[16:17] <pranav_> @when('apache.installed') def do_something():    # Install a webapp on top of the Apache Web server...    set_state('webapp.available')
[16:18] <rick_h> pranav_: perrito666 is this a reactive charm? if so you could use state for this right?
[16:18] <rick_h> pranav_: so you can track the state of each relation and then @when.... each is up execute
[16:20] <pranav_> I haven't checked what a reactive charm is. Any pointers to read on it?
[16:21] <rick_h> https://jujucharms.com/docs/stable/developer-event-cycle
[16:21] <rick_h> pranav_: ^ for some beginner notes
[16:21] <rick_h> pranav_: lots of folks working on charms have experience on the mailing list and the #juju freenode channel
[16:21] <rick_h> pranav_: but it's kind of a framework to help track state and make charming a bit easier
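(A minimal sketch of the pattern rick_h describes, using charms.reactive decorators; the state names and handler are illustrative, not pranav_'s actual interfaces.)

    from charms.reactive import when, when_not, set_state

    # Runs only once every relation this charm depends on has raised its "ready" state;
    # the interface layers set these states and pass the relation objects in as arguments.
    @when('db.available', 'cache.available')
    @when_not('myapp.configured')
    def configure_app(db, cache):
        # render config using data from both relations, restart the service, etc.
        set_state('myapp.configured')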
[16:22] <Zic> mbruzek: h/34
[16:22] <Zic> oops
[16:22] <mbruzek> h/42
[16:22] <Zic> sorry for that free highlight... don't know why your nick was in my IRC prompt :)
[16:23] <mbruzek> No problem.
[16:23] <pranav_> I did read up the event thing but couldn't find anything on the reactive thing. But i will go through it once and get back post some reading. Thanks Guys! :)
[16:28] <Zic> mbruzek: to put this unwanted highlight to immediate use, I have a question :-] -> am I right that running kubectl commands on a random kubernetes-master (locally for example, or by pointing your workstation's ~/.kube/config at a master directly instead of the kube-api-loadbalancer) cannot do anything wrong?
[16:29] <Zic> because I saw in juju status that there is an official "master" of... masters
[16:29] <Zic> but as the nginx vhost of kube-api-loadbalancer just has an upstream { } block, I think it's just a round-robin, right?
[16:30] <mbruzek> Zic: The load balancer is our attempt at making the masters HA.
[16:31] <mbruzek> Zic: You can scale up your master nodes separately from the worker nodes, and request different sizes from Juju
[16:31] <Zic> yeah, but I saw a system of "lock" in /var/log/syslog which said only one of my masters is holding a "lock"
[16:31] <Zic> is there a notion of an "active" kubernetes-master? or are they all active?
[16:31] <mbruzek> Zic: To answer your question more directly. Yes you can point to a master directly in the configuration
[16:31] <Zic> ok
[16:32] <mbruzek> zic: should you lose that node, it will not work.
[16:32] <Zic> I feared that I didn't understand something and was doing nasty things by sometimes running commands against a master which is not "the active one"
[16:32] <Zic> mbruzek: yeah, I'm just using this when I don't have the kubectl binary locally
[16:32] <Zic> I'm SSHing directly to one master and use its kubectl command
[16:35] <Zic> this message was what prompted my question: leaderelection.go:247] lock is held by mth-k8smaster-03 and has not yet expired
[16:35] <ryebot> Zic: Shouldn't matter. All of the masters use the same source of truth.
[16:36] <ryebot> Zic: what was that in response to?
[16:36] <Zic> because I have this kind of error sometimes in the non-locked masters, but no error at all in the locked one: jwt.go:239] Signature error (key 0): crypto/rsa: verification erro / handlers.go:58] Unable to authenticate the request due to an error: crypto/rsa: verification error
[16:36] <Zic> it's not like the first time, where all requests had this in return
[16:37] <Zic> here, it's just... "sometimes" in /var/log/syslog
[16:37] <Zic> all my kubectl commands work perfectly, the dashboard too
[16:37] <ryebot> Zic: Hmm, not sure what's causing that, but I can tell you with a lot of confidence that it shouldn't matter from where you run kubectl, they all point to the same place
[16:38] <Zic> ok
[16:44] <Zic> I really don't know what I can do with this crypto/rsa error, all is working actually, but I'm a bit worried
[16:57] <ryebot> Zic: Can you paste the logs for us somewhere to look at?
[16:57] <Zic> yep
[16:59] <Zic> http://paste.ubuntu.com/23912041/
[17:00] <Zic> there is ~5 examples in this extract
[17:13] <ryebot> Zic: Thanks, we're taking a look
[17:17] <ryebot> Zic: The lock logging, at least, is normal and expected. Looking into the error.
[17:18] <ryebot> Zic: Did you by any chance change the service account token signing key?
[17:21] <Zic> ryebot: nope, my only operation since the restoration of this cluster was testing StatefulSets :)
[17:22] <ryebot> Zic: okay, cool; still investigating.
[17:24] <Zic> ryebot: for the record, in the last weeks I had a ton of errors like that, not just a few, in all masters, and all operations were completely blocked if they involved writing (like kubectl create/delete), reading was OK (get/describe)
[17:24] <Zic> ryebot: here, I just have some, and the "locking" master does not have any
[17:24] <Zic> all is working actually, I'm just fearing it will come again :x
[17:25] <ryebot> Zic: understood, it's a reasonable concern
[17:48] <Zic> ryebot: another maybe useful piece of information, I had kube-dns showing a "30" in the Restarts column of kubectl get pods
[17:49] <Zic> don't know if it seems high
[17:57] <Zic> ryebot: I'm leaving my office but I'm staying on IRC as usual, feel free to ping me back if you discover something; and thanks for your involvement as usual :)
[18:24] <Mac_> Hi, I'm using "juju charm get" to download the charm. But I'm not able to download some of the charm, e.g. keystone, neutron-api, and some others.
[18:24] <Mac_> But I can deploy them directly from the charm store.
[18:26] <Mac_> $ juju charm get keystone
[18:26] <Mac_> Error: keystone not found in charm store.
[18:27] <Mac_> Any suggestion?
[18:36] <magicaltrout> just did my DC/OS office hour demoing juju, quite a few folks on the call and she said she's gonna chuck the video around internally because they're on a big ease-of-use drive
[18:36] <magicaltrout> so I better get those Centos base layers working....
[18:37] <rick_h> Mac_: try just charm get? You using the charm snap?
[18:38] <rick_h> Mac_: actually the command is "pull" in there now.
[18:39] <rick_h> charm pull keystone
[18:39] <Mac_> charm get result in the same error
[18:39] <Mac_> $ charm get keystone
[18:39] <Mac_> Error: keystone not found in charm store.
[18:39] <rick_h> Mac_: I think you've got a really out of date tool as get is no longer a valid command
[18:40] <Mac_> $ charm pull keystone
[18:40] <Mac_> Error: pull is not a valid subcommand
[18:40] <Mac_> I'm working on Ubuntu 14.04.5.
[18:41] <Mac_> I'm trying to patch the charm for my environment, therefore need to make a local charm repo.
[18:41] <rick_h> Mac_: oic, hmm. I think the new charm command is only available as a snap these days.
[18:42] <rick_h> Mac_: maybe just download the zip file from the page https://jujucharms.com/keystone/
[18:42] <rick_h> Mac_: look on the right column by the file listing for "Download .zip"
[18:44] <Mac_> So the zip is the same as the "charm get"?
[18:44] <Mac_> Will try, thanks.
[18:44] <Mac_> rick_h: thanks.
[18:45] <rick_h> Mac_: yes, it's the zip in the store for that charm
[18:49] <Mac_> Another question, can "juju deploy" resolve the series and revision, e.g. "cs:trusty/percona-cluster-31", with the downloaded zip or the dir from "charm get"?
[18:50] <Mac_> Or should I select the series and revision before downloading?
[18:59] <rick_h> Mac_: what version of Juju are you on?
[18:59] <Mac_> $ juju --version
[18:59] <Mac_> 1.25.9-trusty-arm64
[18:59] <rick_h> Mac_: so for 1.25 you need to set up a charm repo directory structure that has the charm in a directory called trusty
[19:00] <Mac_> .
[19:00] <Mac_> └── trusty
[19:00] <Mac_>     ├── ceilometer
[19:00] <Mac_>     ├── ceilometer-agent
[19:00] <Mac_>     ├── glance
[19:00] <Mac_>     ├── mongodb
[19:01] <Mac_>     ├── nagios
[19:01] <Mac_>     ├── nova-cloud-controller
[19:01] <Mac_>     ├── nrpe
[19:01] <Mac_>     ├── ntp
[19:01] <Mac_>     └── rabbitmq-server
[19:01] <Mac_> Like this?
[19:04] <Mac_> And I have something like "cs:~cordteam/trusty/neutron-api-4"
[19:05] <Mac_> do I also need ./~cordteam/trusty/ ?
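(For reference, a juju 1.x local-repository deploy generally takes this shape; the repository path is a placeholder, and as far as I know the ~user namespace is a store concept, so a locally patched charm would just live under trusty/<charm-name> -- worth verifying for the cordteam charms.)

    # assumed layout: $REPO/trusty/neutron-api/ containing the unpacked, patched charm
    juju deploy --repository=/path/to/charm-repo local:trusty/neutron-api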
[20:14] <Mac_> It seems the charm ./revision is not auto generated.
[20:16] <Mac_> for example, cs:trusty/ceilometer-240, but the ./revision is 44
[20:17] <Mac_> So if I deploy from cs, it shows 240, but if I deploy from local, it shows 44
[20:22] <Mac_> And the contents are also different
[20:22] <rick_h> Mac_: so when you deploy a charm locally, it auto updates the revision as it can't tell what changes there are/etc
[20:23] <rick_h> Mac_: when you go from the store, each upload to the store creates a revision and so the store is tracking it
[20:23] <rick_h> Mac_: so there's a disconnect when you go from the store to a local files on disk
[20:24] <Mac_> ok, I'll try "charm get" again, but I just did that this morning.......
[20:29] <rick_h> Mac_: I'm sorry, you don't need to re-download
[20:29] <rick_h> Mac_: if you deploy from local it'll just increment the number over and over
[20:29] <rick_h> Mac_: there's absolutely no association between the revision you download from the store and the revision it shows once you deploy it locally to be honest
[20:34] <Mac_> But I thought the charm deployed with "cs:trusty/ceilometer" and "charm get ceilometer" should both be the latest.....
[20:35] <Mac_> And now I cannot "charm get", I think it's because I did "bzr lp-login".
[20:36] <Mac_> $ charm get ceilometer
[20:36] <Mac_> Branching ceilometer to /cord/build/platform-install/juju-charm/trusty/var
[20:36] <Mac_> Warning: Permanently added 'bazaar.launchpad.net,91.189.95.84' (RSA) to the list of known hosts.
[20:36] <Mac_> Permission denied (publickey).
[20:36] <Mac_> ConnectionReset reading response for 'BzrDir.open_2.1', retrying
[20:36] <Mac_> Warning: Permanently added 'bazaar.launchpad.net,91.189.95.84' (RSA) to the list of known hosts.
[20:36] <Mac_> Permission denied (publickey).
[20:36] <Mac_> Error during branching:  Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.
[20:36] <Mac_> Is it because I'm not a charmer?
[20:38] <Mac_> And there's no lp-logout, so I'm stuck.....
[20:38] <Mac_> @@
[20:40] <rick_h> Mac_: so the issue is that charm get is from a time when all charms had to be put into launchpad bzr and then the store pulled them out of there
[20:40] <rick_h> Mac_: but today, charms are uploaded with a newer charm tool (the snap) and can come from github, your own drive, etc
[20:40] <rick_h> Mac_: that's why the "download zip" is your best bet atm
[20:41] <rick_h> Mac_: so I'd not use anything pulled from bzr and I'd stop using the charm get command altogether because it's just not current enough
[20:41] <ryebot> Zic: after some investigation, we still don't have a solution. Would you mind opening a bug and tagging us in it so we can track it?
[20:41] <ryebot> Zic: On our end, we'll keep investigating.
[20:42] <Mac_> I see....
[20:51] <Mac_> rick_h: Can the new charm tool (the snap) get an old version of a charm?
[21:02] <rick_h> Mac_: yes you need to use the full URL to get an older version like cs:trusty/keystone-5
[21:04] <icey> has anybody tried mixing bash + python in a reactive, layered charm?
[21:05] <rick_h> icey: not seen it myself, what's got you thinking about the mix?
[21:05] <icey> rick_h: a discussion we had on the openstack team a couple of days ago
[21:06] <icey> rick_h: I couldn't come up with a way to make it work with a bit of thinking but figured that maybe somebody else had thought about it
[21:18] <kwmonroe> icey: we've mixed bash actions with reactive py charms.. see https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager as the reactive py charm with actions/* as bash stuffs.
[21:19] <icey> kwmonroe: I've done that kind of thing before, I'm wondering more about something that actually mixes the reactive bits
[21:20] <kwmonroe> icey: which reactive bits?  you can do stuff like 'is_state' from bash, https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/actions/mrbench#L20
[21:21] <icey> kwmonroe: imagine a base layer that `apt-get install -y samba`, and then adding a python layer on top of that
[21:21] <icey> for example
[21:21] <kwmonroe> well your first problem is samba...
[21:21] <magicaltrout> don't you use hdfs for everything these days?
[21:21] <kwmonroe> don't you?!?!
[21:21] <icey> ha kwmonroe
[21:22] <icey> ok, how about `apt-get install -y squid`
[21:22] <icey> point is mixing layers that use bash with layers that use python
[21:22] <icey> in the actual reactive bits
[21:27] <kwmonroe> icey: does your py layer need to react to things like "apt.installed.squid"?  i think that'll work -- stub would know for sure.
[21:27] <icey> kwmonroe: part of the question then is how does the bash reactive stuff get called
[21:28] <icey> given that both python and bash reactive bits may want to execute on each hook
[21:30] <kwmonroe> stub: if i apt install squid in a bash layer, and include that bash layer in an upper python layer, will @when(apt.installed.squid) recognize that squid was installed and be set?
[21:31] <kwmonroe> icey: as for "how does bash reactive stuff get called", it happens with calls to 'charms.reactive x'
[21:31] <kwmonroe> charms.reactive is a bash script available on anything that has charms.reactive in its wheelhouse
[21:32] <icey> kwmonroe: but how would my `squid.sh` get executed so that I could call `charms.reactive set_state('apt.squid.installed')
[21:37] <kwmonroe> icey: i don't know what squid.sh is in this scenario, but any bash stuff that needs to set a state would do "sudo apt install squid; charms.reactive set_state good2go", and then you could react in a later layer with @when(good2go).
[21:38] <icey> kwmonroe: let me make a super basic version and share, I think it's confusing
[21:40] <kwmonroe> roger that icey, but don't push my limits.  if you do, you'll have to answer to cory_fu.  all i know is that bash stuff can do reactive stuff by calling "charms.reactive foo", where foo is:  http://paste.ubuntu.com/23913564/
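(A minimal sketch of the pattern kwmonroe describes -- a base layer that installs from bash and flags a state via the charms.reactive CLI, with a python layer reacting on top. File names and state names are illustrative, and the exact wiring of the bash handler is layer-dependent; see the paste above and icey's test repo below for working examples.)

    # bash side (base layer): install the package, then set a state with the CLI helper
    #   sudo apt-get install -y squid
    #   charms.reactive set_state squid.installed

    # python side (upper layer), e.g. reactive/myapp.py: react to the state the bash layer set
    from charms.reactive import when, set_state

    @when('squid.installed')
    def configure_squid():
        # write squid.conf, open the port, etc.
        set_state('myapp.ready')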
[21:41] <icey> kwmonroe: I know about that, we'll see if this (super stupid) example can be made to work ;-)
[21:43] <icey> kwmonroe: https://github.com/ChrisMacNaughton/layers_test
[21:44] <kwmonroe> icey: by virtue of having to do more than 2 clicks through that repo, i can tell you're going to need cory_fu.
[21:44] <icey> HA
[21:44] <icey> kwmonroe: I'm just going to try that charm :-P
[21:44] <icey> well, pair of layers, built into a charm
[21:45] <cory_fu> icey: LGTM
[21:45] <icey> cory_fu: you think that will actually work?
[21:45] <cory_fu> Should, yeah
[21:45] <icey> awesome :)
[21:46] <kwmonroe> hey icey, LGTM.  that should work.
[21:46] <icey> thanks kwmonroe ;-)
[21:52] <icey> wow +1 cory_fu kwmonroe :) it works!
[21:52] <icey> downright voodoo ;-P
[21:52] <kwmonroe> icey: if you blog about your experiences, you'll need another 15 minutes of help.
[21:53] <icey> why would I need help to blog about it...?
[21:53] <kwmonroe> lol
[21:54] <cory_fu> icey: Ignore kwmonroe's sass.  :)
[21:54] <kwmonroe> icey: i fubar'd that.  i meant to say 'you'll *get* another', as if irc help was tied to evangelism.
[21:54] <cory_fu> heh
[21:54] <icey> hahaha
[21:54] <icey> =! cory_fu
[21:54] <icey> +1
[21:55] <kwmonroe> you had it right.. hahaha != cory_fu.  he doesn't mess around.
[21:57] <icey> thanks again guys :)
[22:02] <Mac_> rick_h: thanks!!