[07:59] <Ting_> I have a short-lived access key for aws which has been confirmed working well with the aws command line, but it fails when deploying juju on aws with this access key. Anyone have an idea about this?
[08:00] <Ting_> The error says: authentication failed. please ensure the access key id you have specified is correct.
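A possible angle on the failure above (a sketch, not a confirmed diagnosis): juju's aws provider expects a plain access-key credential (access key id + secret key), while short-lived STS keys also carry a session token, which would explain the same key working for the aws CLI but not for juju. Inspecting what juju actually loaded can help narrow it down:

```shell
# Show the credentials juju has stored for aws, in full,
# to verify the access key id matches the one the aws CLI uses
juju credentials --format yaml

# Re-enter the credential interactively if it looks wrong
juju add-credential aws

# Then retry bootstrap with the named credential
juju bootstrap aws --credential <credential-name>
```

The `<credential-name>` placeholder is whatever name was chosen during `add-credential`.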
[12:24] <tvansteenburgh> bdx: you around?
[14:37] <ybaumy> has anyone tested the grafana charm?
[14:37] <rick_h> ybaumy: just put up a PR against it yesterday
[14:37] <rick_h> ybaumy: will show it off a little bit in the juju show today
[14:38] <ybaumy> rick_h: juju show?
[14:38] <rick_h> ybaumy https://www.youtube.com/watch?v=NUx6kYE60Mc&list=PLW1vKndgh8gI6iRFjGKtpIx2fnJxlr5FF
[14:38] <rick_h> ep #20 coming in 3.5hrs
[14:39] <ybaumy> rick_h: is that you?
[14:39] <rick_h> ybaumy: rick is me yes
[14:39] <rick_h> tim is tvansteenburgh in that last episode
[14:40] <ybaumy> rick_h: never seen it. will watch it tomorrow. would be cool if you could get into grafana
[14:41] <ybaumy> rick_h: i have a real world case which came in this week so i have to set it up for our network guys
[14:41] <ybaumy> rick_h: they want it for their switches
[14:41] <rick_h> ybaumy: cool yea I'm adding MySQL support to it currently so I can build dashboards on data in mysql
[14:41] <rick_h> ybaumy: will talk about it in the show today
[14:42] <ybaumy> rick_h: great will be looking forward to it
[14:45] <rick_h> ybaumy: I did a blog post using grafana for monitoring juju controllers a while ago here as well: http://mitechie.com/blog/2017/3/20/operations-in-production
[14:49] <ybaumy> rick_h: thanks lad
[14:53] <ybaumy> rick_h: i hope grafana will be the first charm that reaches production for me. currently i'm running a test cluster for kubernetes and openstack. kube is fine but with openstack i can't really get storage tiering to work, which is the problem currently.
[14:54] <ybaumy> rick_h: first i was planning on using influxdb for grafana but i guess mysql is fine too
[15:05] <stub> I believe we have at least one grafana instance in production backed by influxdb. Most are talking to prometheus.
[15:05] <ybaumy> i read influxdb is best for performance
[15:05] <ybaumy> in this case
[15:06] <ybaumy> thats why i wanted to use it
[15:06] <ybaumy> but im open for other stuff
[15:07] <stub> Unless you are going to hit scaling limits, I'd go with whatever fits with grafana best and lets you write your queries easily.
[15:07] <stub> (I haven't used influxdb, but most of the grafana docs seem to be based around that backend so it is likely the best fit)
[15:08] <ybaumy> well i have no experience in scaling this application. we have circa 100 switches in the datacenters
[15:10] <ybaumy> i still need information on which metrics the network department want to see there
[15:10] <ybaumy> so i cannot say how many datasources there will be in the end
[15:10] <stub> if you are just graphing metrics, 100 switches is small scale.
[15:11] <ybaumy> ok
[15:11] <ybaumy> thats good to hear
[15:11] <stub> Something like prometheus talks about tens of thousands of devices with hundreds of time series on them
[15:12] <stub> Assuming decent hardware, my guess is any backend is good for you
[15:12] <ybaumy> ok
[15:43] <ybaumy> time to watch GoT final episode. then wait until 2019 .. i hope i make it there
[15:51] <stormmore> morning /o juju world
[17:34] <rick_h> Juju Show #20 in 30 minutes!
[17:34] <rick_h> are you ready?
[17:53] <rick_h> juju show link to join in the fun: https://hangouts.google.com/hangouts/_/vamq45vtirbtrefyry63x4ccsee (tvansteenburgh marcoceppi hml bdx kwmonroe and anyone interested)
[17:53] <rick_h> the link to watch the stream https://www.youtube.com/watch?v=iSVd7g0I4pI
[17:53] <rick_h> ybaumy: ^
[18:01] <rick_h> arosales: aisrael cory_fu ^ going to chat some charming if you can make it
[18:06] <bdx> lol
[18:06] <bdx> yes
[18:09] <arosales> rick_h: thanks for the invite, but I have a conflict
[18:26] <bdx> rick_h: and now this https://bugs.launchpad.net/bugs/1602192
[18:26] <mup> Bug #1602192: when starting many LXD containers, they start failing to boot with "Too many open files" <lxd> <verification-done> <verification-done-xenial> <lxd (Ubuntu):Fix Released> <lxd (Ubuntu Xenial):Fix Released> <https://launchpad.net/bugs/1602192>
[18:26] <kwmonroe> rick_h: production lxd:  https://github.com/lxc/lxd/blob/master/doc/production-setup.md
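The production-setup doc linked above is mostly about raising kernel and file-descriptor limits on the LXD host. Roughly the kind of settings it covers (values here are illustrative, taken from memory of that doc, so check the linked page for the current recommendations):

```shell
# /etc/security/limits.conf additions -- raise open-file limits
#   * soft nofile 1048576
#   * hard nofile 1048576

# sysctl settings for many containers (apply with `sudo sysctl -p`):
#   fs.inotify.max_queued_events = 1048576
#   fs.inotify.max_user_instances = 1048576
#   fs.inotify.max_user_watches = 1048576
#   vm.max_map_count = 262144

# Check the current values on a host before changing anything:
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
ulimit -n
```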
[18:26] <bdx> the fix was released for lxd for the too many open files
[18:28] <bdx> rick_h, kwmonroe: nice work, men
[18:29] <kwmonroe> thanks bdx!
[18:30] <rick_h> arosales: np, always want to reach out to folks
[18:31] <rick_h> bdx: <3
[18:31] <arosales> rick_h: I appreciate that :-)
[18:32] <rick_h> kwmonroe: hah, camera was at 6% battery life left
[18:32] <kwmonroe> woohoo!  perfect!
[18:32] <rick_h> note to self, camera as webcam means full battery only
[18:32] <kwmonroe> or, ya know, plug it in.
[18:33] <rick_h> kwmonroe: :P
[18:34] <bdx> are you guys tracking ^ bug?
[18:35] <bdx> the "too many open files" one
[18:35] <rick_h> bdx: haven't thought about it in a long time tbh. What's up? You still getting it?
[18:36] <bdx> look at the latest in that bug
[18:36] <bdx> from stgrabber
[18:36] <bdx> and yea, I have been
[18:36] <bdx> all the while
[18:37] <bdx> so I was excited to be able to apply the sysctl configs (the production lxd sysctl configs that @kwmonroe listed above)
[18:37] <bdx> and see a significant increase in the # of lxd I could deploy via localhost lxd provider
[18:38] <bdx> but the fact is
[18:38] <bdx> I *really* only care about deploying my lxd to MAAS or AWS
[18:39] <bdx> so the production fix is almost useless unless you want to go around hacking face
[18:39] <bdx> :)
[18:39] <bdx> but now
[18:40] <bdx> with the "The verification of the Stable Release Update for lxd has completed " - per that bug
[18:40] <bdx> we should be seeing resolution for the "too many open files" across all providers
[18:40] <bdx> because its fixed in lxd
[18:40] <bdx> if I'm reading this correctly
[18:40] <rick_h> bdx: right
[18:41] <bdx> yessssss
[18:42] <bdx> ok
[18:42] <bdx> so
[18:43] <bdx> how can I get that stable verified lxd from 'updates' to be the lxd that gets installed on all my maas/aws nodes?
[18:44] <rick_h> so it just means the -updates repos of xenial need to be enabled. Is that out of the box? /me doesn't recall on the cloud images
[18:44] <rick_h> I think they are, but that should be all that's needed.
[18:45] <rick_h> you're looking for the specified lxd on there when it comes up:
[18:45] <rick_h> This bug was fixed in the package lxd - 2.0.10-0ubuntu1~16.04.2
[18:45] <bdx> rick_h: this http://imgur.com/a/FZ9B5 ?
[18:45] <bdx> needs to include updates?
[18:46] <rick_h> bdx: maybe? I'm not sure on maas. My GCE instances I was using for the demo in the show today have them enabled it looks like
[18:47] <rick_h> deb http://us-east1.gce.archive.ubuntu.com/ubuntu/ xenial-updates main restricted
[18:47] <rick_h> Version: 2.0.2-0ubuntu1~16.04.1 - from apt-cache show lxd on there
[18:47] <bdx> rick_h: OOOoooo
[18:47] <rick_h> so it's not quite there...hmmm
[18:47] <bdx> sudo apt list --upgradable | http://paste.ubuntu.com/25433632/
[18:48] <bdx> run apt update
[18:48] <rick_h> might need to be sync'd out to the stuff out there
[18:48] <bdx> then you will get it
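Pulling the SRU'd lxd from xenial-updates, as described above, amounts to:

```shell
# Refresh the package index so the -updates pocket is seen
sudo apt update

# Confirm the candidate version is the fixed one
# (the bug says it was fixed in 2.0.10-0ubuntu1~16.04.2)
apt-cache policy lxd

# Install the update
sudo apt install lxd
```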
[18:48] <rick_h> k, cool
[18:48] <bdx> its just sitting there
[18:49] <bdx> I've been waiting so long for this moment ... I don't even want to install it yet
[18:49] <bdx> :)
[18:50] <rick_h> lol, the cake is a lie?
[18:51] <bdx> I'm soaking it in man
[18:51] <bdx> might even take a few days off work
[18:52] <rick_h> lol, /me feels like bdx got a new toy for christmas
[18:52] <rick_h> ok, I need coffee /me walks away to make some
[18:54] <bdx> definitely ..... not being able to get any density from lxd on my maas nodes has been a thorn no doubt
[20:01] <BarDweller> Hi all.. If I'm running canonical kube, via juju, inside a vm, where I have a bridged networking interface, how do I get the kube to recognise the bridged network i/f as the external IP for ingress etc ?
[20:11] <magicaltrout> "there should be ~400 cores and 2 TB of memory available for your Kube cluster." finally i get to properly deploy some k8s stuff
[20:13] <tvansteenburgh> wow, jackpot
[20:17] <magicaltrout> tvansteenburgh / rick_h i see you folks "messing" with grafana, if i want resource stats out of CDK, would that be the way to go?
[20:17] <magicaltrout> i'm used to deploying nagios on juju, not messed with grafana yet
[20:18] <tvansteenburgh> magicaltrout: our internal clusters use prometheus + grafana
[20:18] <magicaltrout> cool, thanks
[20:21] <tvansteenburgh> BarDweller: we're not ignoring you, i just don't know the answer
[20:22] <BarDweller> no probs.. got any general advice for configuring ingress external ip's ?
[20:22]  * BarDweller kube noob, but learning fast =)
[20:23]  * tvansteenburgh googles
[20:24] <tvansteenburgh> BarDweller: this might be what you want: https://stackoverflow.com/questions/40136891/gcloud-ingress-loadbalancer-static-ip/40164860#40164860
[20:24] <tvansteenburgh> define a Service with the external IP
[20:25] <BarDweller> hmm.. mebbe.. I'm way off in the weeds reading about the nginx kubernetes ingress controller
[20:28] <tvansteenburgh> no, strike that, you don't want type LoadBalancer
[20:28] <tvansteenburgh> https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
[20:29] <tvansteenburgh> that looks better ^
[20:29] <tvansteenburgh> and map that Service to the ingress
[20:29] <stormmore> time to figure out how to charm!
[20:32] <BarDweller> I mean.. I can bring up kube ok, deploy services ok, but the external ip is always blank even if I tell juju to expose it, I think because the ingress thingy doesn't know it's external ip, and the ingress seems to be nginx-ingress-controller, created by the replication controller nginx-ingress-controller which does setup a config map for the controllers it creates.. am reading thru the docs for those to see if I can spot how to tell it to
[20:32] <BarDweller> use a particular ip/interface for the external side
[20:36] <tvansteenburgh> BarDweller: right, and I think the way to do that is the create a Service with an externalIPs entry, and put that Service in front of the nginx-ingress-controller
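The suggestion above can be sketched as a manifest; the selector label and IP here are hypothetical placeholders, not values from CDK (the actual label on the nginx-ingress-controller pods would need to be checked with `kubectl get pods --show-labels`):

```shell
# Hypothetical sketch: a Service with an explicit externalIPs entry,
# fronting the nginx ingress controller pods
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress-external
spec:
  selector:
    app: nginx-ingress-controller   # assumed pod label -- verify on your cluster
  ports:
    - port: 80
      targetPort: 80
  externalIPs:
    - 192.168.1.50                  # IP of the VM's bridged interface
EOF
```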
[21:30] <magicaltrout> tvansteenburgh: other question that seems obvious but i shall ask
[21:31] <magicaltrout> for cdk HA Master we just deploy more?
[21:32] <magicaltrout> scratch that found that github issue that claims that is the case
[21:40] <tvansteenburgh> magicaltrout: affirmative
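Per the exchange above, scaling the master for HA is just adding units (application name assumed to be the usual CDK `kubernetes-master`):

```shell
# Add another master unit and watch it come up
juju add-unit kubernetes-master
juju status kubernetes-master
```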
[21:48] <magicaltrout> okay other random question
[21:48] <magicaltrout> if i'm doing a manual cloud deployment
[21:48] <magicaltrout> how do I add units to a model that's not "default"?
[21:50] <magicaltrout> or when you add units are they model specific?
[21:51] <magicaltrout> model specific
[21:51] <magicaltrout> nifty