[03:31] <rick_h> bdx: that might be possible. I think that reference to a long model history was mentioned before. anastasiamac do you recall a bug/fix around a long model history causing issues recently?
[03:32]  * anastasiamac looking
[03:33] <bdx> rick_h: so, the issue disappears when I create a new model
[03:33] <rick_h> bdx: yea, which is why I don't see it in anything I can access
[03:33] <bdx> I've re-written half of my charms today
[03:33] <bdx> thinking it's my charm code
[03:34] <rick_h> bdx: https://github.com/juju/juju/commit/fd20944902f03e60e8477dd6969a9d42d1c50701
[03:34] <rick_h> anastasiamac: ^
[03:34] <bdx> it's been a very frustrating day .... I was supposed to do a production deploy for one of our apps .... but then the model started acting super wonky, and I started re-writing charm code ... really it's been the wonkiness of the model this whole time
[03:35] <bdx> rick_h: what does that do?
[03:36] <rick_h> bdx: it adds an index to the db to make queries faster when collections get large
[03:36] <rick_h> bdx: at least that's my read of that commit
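The effect rick_h describes can be sketched in a toy example (not Juju's actual code; all names here are hypothetical): without an index, every query over a collection is a full scan, so query time grows with collection size, which is exactly why an old model with a long history gets slow while a fresh model stays fast.

```python
# Hypothetical sketch of why adding an index to a large collection speeds
# up queries. A plain scan is O(n) per lookup; an index (here a dict,
# analogous to a db index on "model-uuid") makes each lookup ~O(1).

def scan_lookup(docs, model_uuid):
    """Unindexed query: walks every document in the collection."""
    return [d for d in docs if d["model-uuid"] == model_uuid]

def build_index(docs):
    """One-time O(n) pass, analogous to creating a db index."""
    index = {}
    for d in docs:
        index.setdefault(d["model-uuid"], []).append(d)
    return index

# Simulate a long-lived model: 10,000 documents spread over 100 models.
docs = [{"model-uuid": f"uuid-{i % 100}", "seq": i} for i in range(10_000)]
index = build_index(docs)

# Both approaches return the same documents; only the cost per query differs.
assert scan_lookup(docs, "uuid-7") == index["uuid-7"]
```

The trade-off is the same one the linked commit makes: pay a little extra on writes and index storage to avoid re-scanning the whole collection on every read.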
[03:36] <anastasiamac> rick_h: bdx: the bug referred to in that commit was https://bugs.launchpad.net/juju/+bug/1668646
[03:36] <mup> Bug #1668646: Model migration fails <juju:Fix Released by thumper> <https://launchpad.net/bugs/1668646>
[03:36] <rick_h> bdx: yea, https://github.com/juju/juju/pull/7059
[03:36] <bdx> yeah
[03:36] <bdx> that it
[03:37] <anastasiamac> rick_h: bdx: but it seemed to have been affecting migration only...
[03:37] <bdx> that *is it
[03:37] <bdx> ooh
[03:37] <rick_h> anastasiamac: yea, just looking. I'm not sure if that change is useful here for bdx's perf problem
[03:37] <bdx> oh, possibly not
[03:37] <bdx> yeah
[03:38] <rick_h> I mean an index on a collection is an index on the collection.
[03:38] <rick_h> but things like juju status/etc I'm not sure if they hit this collection
[03:38] <anastasiamac> rick_h: bdx: we have a cyclone tail here - exciting... i have missed the beginning of ur conversation... what is happening?
[03:38] <rick_h> anastasiamac: oh! are you by it? I guess you are.
[03:39] <rick_h> anastasiamac: how was that? I've never been in a cyclone.
[03:39] <anastasiamac> rick_h: it's a lot of water with a lot of wind :D
[03:39] <rick_h> anastasiamac: bdx is noticing sluggishness in api response on a model in JAAS. He's verified that if he creates a new model it's fast. So an older model with a long history seems to slow down.
[03:39] <rick_h> anastasiamac: heh, I wonder what's different in cyclone, hurricane, etc.
[03:41] <anastasiamac> rick_h: no diff in terminology - except the location/where it happens: http://oceanservice.noaa.gov/facts/cyclone.html
[03:41] <anastasiamac> rick_h: but schools, etc closed. ppl r told to stay indoors :)
[03:42] <rick_h> anastasiamac: well yea. Here hurricanes are beastly natural disasters
[03:42] <rick_h> I'm glad you're ok!
[03:42] <anastasiamac> rick_h: bdx: JAAS is running on 2.0x underneath
[03:42] <bdx> anastasiamac, rick_h: juju has cyclone tails happening in it right now too ... possibly I could grant you guys acls on the model so you can see/experience it first hand
[03:42] <anastasiamac> rick_h: bdx: we have fixed every leak/performance issue we r aware of in 2.2... but it's at beta stage atm...
[03:43] <rick_h> anastasiamac: yes, it is.
[03:43] <bdx> anastasiamac: ahh, so what I'm experiencing is probably fixed in 2.2 you are thinking ?
[03:43] <rick_h> anastasiamac: right, I'm eager for 2.2 to come out and then to test/verify if the perf issue is affected
[03:43] <anastasiamac> bdx: m hoping \o/
[03:43] <magicaltrout> you can't rush perfection ;)
[03:44] <anastasiamac> magicaltrout: :D
[03:48] <bdx> rick_h, anastasiamac: do you think my loaded models with long history are bogging down the controllers though?
[03:48] <rick_h> bdx: no, I can see the cpu/memory/disk IO/etc on all three of them
[03:48] <rick_h> bdx: and nothing there is looking off. Load on all three are < 3
[03:49] <bdx> rick_h: so, what gives then?
[03:49] <rick_h> bdx: that's why I was thinking your hint that a new model was fast means that there's probably just a slow query/something in that model.
[03:51] <bdx> rick_h: yeah ... I mean ... its really chaotic
[03:51] <anastasiamac> bdx: rick_h: here is the list of performance bugs that have been fixed in 2.2 :) it's possible that some of ur long living models have some leaking db connections.. bug # 1635311,
[03:51] <anastasiamac> bug # 1671258, bug # 1651291, bug # 1634328, bug # 1587644, bug # 1581069, bug # 1649719
[03:51] <bdx> rick_h, anastasiamac: I can deploy the same thing 10 times and everything gets borked/stops in a different state every time
[03:51] <bdx> lol
[03:51] <bdx> I can't make any sense of it
[03:52] <bdx> anastasiamac: thx, looking
[04:36] <bdx> rick_h, anastasiamac: on a new model -> http://paste.ubuntu.com/24279037/
[04:37] <bdx> seamless
[12:15] <SimonKLB> is there anyone around that i can chat with regarding the charm partner programme?
[12:21] <stub> SimonKLB: I think you want arosales, who is in a US timezone
[12:26] <stub> SimonKLB: Or SaMnCo , who is EU
[12:29] <SaMnCo> I am EU
[12:29] <SaMnCo> @SimonKLB ^
[12:30] <SaMnCo> And you can certainly talk to me about CPP :)
[12:39] <SimonKLB> thanks!
[13:32] <Zic> hi Juju world, a new question, I found Grafana/InfluxDB very slow these last days on our CDK clusters, I just noted that they have very short "limits" (100MHz / 100Mio per container) set on the influxdb-grafana pods, can I edit this value "freely" in /etc/kubernetes/addons/influxdb-grafana-controller.yaml and not have it overridden by the next Juju updates of CDK?
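For context on the question above: the limits Zic mentions live in the pod spec's `resources` block. This is a hypothetical excerpt, not the actual CDK addon file; the field names follow the standard Kubernetes container resource spec, and the values are illustrative.

```yaml
# Illustrative fragment of an influxdb-grafana controller manifest.
# "100MHz / 100Mio" corresponds to 100 millicores of CPU and 100 MiB
# of memory in Kubernetes resource notation.
spec:
  containers:
  - name: influxdb
    resources:
      limits:
        cpu: 100m       # raise to give the pod more CPU headroom
        memory: 100Mi   # raise to give the pod more memory headroom
```

Whether a hand edit to the addon file survives future CDK upgrades is the part that needs a CDK maintainer's answer, since addon manifests managed by the charm may be rewritten on update.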
[15:04] <rick_h> kwmonroe: ping, halp! I'm trying to figure out where to go here to file a bug on the zeppelin charm but lost in this site.
[15:04] <kwmonroe> rick_h: working as designed.
[15:04] <kwmonroe> :)
[15:04] <rick_h> kwmonroe: bah
[15:05] <rick_h> kwmonroe: I want to file a bug about relating to pgsql but :(
[15:05] <rick_h> and the haproxy one
[15:05] <kwmonroe> rick_h: since zepp is upstream at bigtop, the appropriate thing to do is open a JIRA here:  https://issues.apache.org/jira/browse/BIGTOP/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel
[15:05] <kwmonroe> rick_h: do you see a "Create" button at the top of that page?
[15:06] <rick_h> kwmonroe: no, I probably need to create an account and all that
[15:06] <kwmonroe> ah shoot rick_h.  yeah, i just logged out and noticed "create" isn't an option without an account
[15:06] <rick_h> yea, gotcha. Ok will do that then.
[15:07] <kwmonroe> no no rick_h - that's a poor UX i think.  we keep a fork of their repo.  how about i update our bigtop charm bug links to point to https://github.com/juju-solutions/bigtop/issues
[15:07] <kwmonroe> i think that'll be easier for people to work with
[15:08] <kwmonroe> though it means i'll have to translate issues into a JIRA, but still, i think it's more likely that people will grok opening a gh issue vs a jira.
[15:19] <kklimonda> is there a way for me to tell juju which IP is the "public" one?
[15:19] <kklimonda> (preferably on the model level)
[15:20] <rick_h> kwmonroe: up to you. I'm paid to go through pain on it if I need to but agree. When I went to the bugs-url for the charm I ended up getting lost in links trying to figure out how to find the charm and it wasn't clear to me what was charm vs other code.
[15:24] <kwmonroe> understood rick_h -- unfortunately there's no way to link people directly to a sub-module of bigtop (like the charms) for filing bugs, so i do think it's an extra burden to expect people to figure out bigtop's bug tracker.  please open your issue(s) at  https://github.com/juju-solutions/bigtop/issues and i'll take on the pain of translating to jiras.  i'll update the bugs-url next rev.
[15:26] <rick_h> kwmonroe: all good ty
[15:26] <kwmonroe> plus it's far easier for me to nak issues at github.
[16:02] <jamespage> lazyPower: hullo - do we have a good bundle reference for *beat + elasticsearch and kibana ?
[16:50] <kklimonda> is there something about reservations I'm missing? for the second time I've seen MAAS hand out a reserved IP, wreaking havoc in the network
[17:16] <stormmore> 0/
[17:16] <stormmore> o/ juju world
[17:37] <cholcombe> centos7 bootstrap with the manual provider on juju 2.1.2 doesn't seem to work: http://paste.ubuntu.com/24282504/  Here's a paste of my virtual machine log
[17:38] <cholcombe> slightly more context: http://paste.ubuntu.com/24282509/
[17:40] <cholcombe> axw: ^^
[17:41] <cholcombe> axw: i'll leave this setup running if you want to poke at it or gather any more logs
[17:49] <cholcombe> axw: i think the issue was centos7 comes out of the box with firewalld enabled.  i'm killing that and trying again
[17:50] <cholcombe> axw: yeah that was it.  ignore me
[20:58] <gQuigs> hi there, trying to get started with charms.. and found some odd inconsistencies..   could the charm command be broken on zesty?
[20:59] <gQuigs> bug report - https://bugs.launchpad.net/ubuntu/+source/charm-tools/+bug/1675240
[20:59] <mup> Bug #1675240: charm-create crashed with ImportError in /usr/lib/python2.7/dist-packages/charmtools/utils.py: cannot import name path <amd64> <apport-crash> <zesty> <charm-tools (Ubuntu):New> <https://launchpad.net/bugs/1675240>
[21:05] <petevg> kwmonroe: half of a fix for that matrix issue you found here: https://github.com/juju/python-libjuju/pull/100 (cc cory_fu)
[21:05] <petevg> (The other half is a commit to matrix, but it needs to be fixed in python-libjuju first.)
[21:52] <stormmore> lazyPower, you go to a con and this place goes quiet!
[21:52] <magicalt1out> ARGHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
[22:22] <aisrael> what's up, magicalt1out?
[22:23] <magicalt1out> nowt, it was just a bit quiet :)
[22:23] <aisrael> lol
[23:49] <axw> cholcombe: you can't use centos7 for the bootstrap machine. only ubuntu is supported for running the controller