[00:52] <xavpaice> I'm just about to do a mgopurge of a site running 2.1, and hear rumor that we don't need to stop the state server beforehand - is that the case?
[00:53] <xavpaice> h00pz, first thing to do is request the ability to change the name of a charm for a running application without redeploying the application
[00:55] <h00pz> xavpaice, yah im already building my own bundle, next step, make charm names smaller cuz
[00:57] <xavpaice> at least you can deploy apps with any name you like
[00:58]  * xavpaice was pointed at https://github.com/juju/juju/wiki/MgoPurgeTool which has everything I might need
[07:30] <erik_lonroth3> When I'm setting states with set_state('foo.bar')... am I also required to remove that state when I'm done with it? As I understand it, a "state" is something that is otherwise persistent for the charm until it's removed?
[08:11] <kjackal> Good morning Juju world!
[08:12] <kjackal> erik_lonroth3: Yes, "foo.bar" is more like a flag that you can raise and lower. It is not automatically unset, and it is not an event
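The flag semantics kjackal describes can be sketched in plain Python. This is only an analogy of charms.reactive's set_state/remove_state/is_state (the real framework persists these flags in the unit's state database between hook invocations); the StateRegistry class below is a toy stand-in, not the framework's API:

```python
# Plain-Python sketch of how reactive "states" behave as persistent flags.
# Toy stand-in for charms.reactive's set_state/remove_state/is_state; the
# real framework persists flags in the unit's database between hooks.

class StateRegistry:
    """Illustrative persistent flag store for a single charm unit."""

    def __init__(self):
        self._flags = set()

    def set_state(self, name):
        # Raising a flag: it stays raised until explicitly removed.
        self._flags.add(name)

    def remove_state(self, name):
        # Lowering a flag: the charm must do this itself when done.
        self._flags.discard(name)

    def is_state(self, name):
        return name in self._flags


states = StateRegistry()
states.set_state('foo.bar')
assert states.is_state('foo.bar')      # persists until removed
states.remove_state('foo.bar')
assert not states.is_state('foo.bar')  # nothing unsets it automatically
```

The key point from the conversation: unlike an event, a state does not fire and disappear; it remains set until the charm explicitly lowers it.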
[08:44] <kjackal> Hi rick_h, I heard you showed some interest in the Instana charm PoC. https://jujucharms.com/u/instana-charms/instana/  and https://instana.atlassian.net/wiki/pages/viewpage.action?pageId=15630376
[08:45] <kjackal> This charm is just a PoC that Instana can iterate on. If you want to fill a slot in some upcoming Juju show, I can help.
[08:52] <erik_lonroth3> kjackal: thanx
[08:54] <erik_lonroth3> Also, I'm trying to understand how to differentiate between hooks and states. Its kind of hard to get a good grasp about how it works.
[08:55] <erik_lonroth3> I've understood that hooks are run by juju as part of some kind of cycle. But which are those, and how can I access states from different components etc.? There are a lot of questions and I can't really get my head around the reactive framework
[09:14] <kjackal> erik_lonroth3: Since you are using the reactive framework you should really not use hooks.
[09:15] <erik_lonroth3> I know, but I'm perplexed about which states exist, and how I can use them etc. For example, how would I find out which states a different charm is in?
[09:16] <kjackal> erik_lonroth3: The way I understand it is that hooks are essentially executable files under the /hooks dir. Juju will call these hooks at the right time throughout the lifecycle of the infrastructure
[09:16] <erik_lonroth3> I have understood that "hooks" are run as well as setting reactive states.
[09:17] <erik_lonroth3> juju seems to do both things.
[09:17] <kjackal> erik_lonroth3: reactive takes over the hooks and from then on you are programming the lifecycle of your infrastructure using states
[09:17] <erik_lonroth3> I'm looking for the equivalent to "d_relation_joined" state.
[09:17] <erik_lonroth3> db_relation_joined
[09:18] <erik_lonroth3> Yes, I understand that part. But all the hooks that exist - do they have a corresponding state?
[09:19] <kjackal> No, hooks and states are separate. Let me give you an example
[09:19] <erik_lonroth3> getting coffee
[09:19] <erik_lonroth3> =)
[09:20] <erik_lonroth3> back
[09:21] <kjackal> You have two charms: MariaDB and MediaWiki. These two charms use the mysql interface
[09:21] <kjackal> if you go to http://interfaces.juju.solutions/ and look for the mysql interface you can get to the git repo
[09:22] <kjackal> Here it is: https://github.com/johnsca/juju-relation-mysql
[09:23] <kjackal> Interfaces are the only place where hooks and states are mixed. This gives you an understanding of how the two work together
[09:23] <erik_lonroth3> I'll take a look
[09:23] <kjackal> Have a look here: https://github.com/johnsca/juju-relation-mysql/blob/master/provides.py#L26
[09:24] <erik_lonroth3> I'm trying to write a charm that uses a postgresql database. I haven't got it to work yet since I'm struggling to understand how to use the interfaces, states etc.
[09:24] <kjackal> The joined method of the interface will be called when the @hook('{provides:mysql}-relation-joined') is triggered
[09:24] <kjackal> the joined method of the interface will set the state with conversation.set_state('{relation_name}.database.requested')
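The hook-raises-a-state, charm-reacts-to-the-state flow kjackal is describing can be sketched with a toy dispatcher. In a real charm this is done by charms.reactive's @hook and @when decorators; everything below (the when/dispatch helpers, the relation and state names) is an illustrative stand-in, not the framework itself:

```python
# Toy sketch of the hook -> state -> handler flow described above.
# In a real charm, charms.reactive's @hook and @when decorators do this;
# here a minimal dispatcher shows the mechanics.

states = set()
handlers = []

def when(state):
    """Register a handler to run while `state` is set (like @when)."""
    def register(fn):
        handlers.append((state, fn))
        return fn
    return register

def dispatch():
    """Run every handler whose state is currently set (the reactive loop)."""
    for state, fn in handlers:
        if state in states:
            fn()

# Interface layer: the relation-joined *hook* raises a *state*.
def mysql_relation_joined():
    states.add('db.database.requested')

# Charm layer: reacts to the state, never to the hook directly.
log = []

@when('db.database.requested')
def provision_database():
    log.append('creating database')

mysql_relation_joined()   # Juju fires the hook...
dispatch()                # ...then the reactive loop runs matching handlers
assert log == ['creating database']
```

This is why, as kjackal says, interfaces are the one place hooks and states mix: the interface translates hook events into states, and the charm only ever sees the states.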
[09:25] <erik_lonroth3> let me read.
[09:25] <erik_lonroth3> I'll look and come back. Thanx alot for helping me!
[09:26] <kjackal> Here is the example code to be used for pgsql interface: http://interface-pgsql.readthedocs.io/en/stable/requires.html#example-usage
[09:27] <kjackal> erik_lonroth3: hope it is up to date ^
[09:42] <Zic> lazyPower: just did a quick "real test": I stopped the VPN between FR (which has the kubernetes-master machine and some kube-dns pods) <-> US (some kube-dns pods also). On FR, all resolving through kube-dns is OK; on US, however, resolving works sometimes, sometimes not... (I think it works when the kube-dns service throws a resolving request to a kube-dns pod in US, and doesn't work if it throws to FR kube-dns pods)
[09:42] <Zic> so, in my case, it's a bit of a disaster :(
[09:42] <Zic> 2 points of presence are tied together
[09:43] <Zic> don't know if something exists for this kind of case except the kube-fed
[09:47] <kjackal> Hi Zic, from a high-level perspective your setup is a better fit for federation
[09:48] <Zic> yup, I'm waiting for it on CDK, I know it's already planned very soon :)
[09:48] <Zic> but for now, I don't know what I can do to mitigate this effect :/
[09:48] <kjackal> Zic: kubefed 1.6.2 is already snapped and we are working on lighting up this scenario but we are not there yet
[09:49] <magicaltrout> work faster!
[09:49] <Zic> huhu
[09:49] <kjackal> stop interrupting me magicaltrout lol
[09:52] <Zic> and my tests were pointed at kube-dns; actually I think I have the same problem with all K8s services: they're not updated with the remaining pods on US
[09:52] <Zic> they all continue to forward to FR+US instead of just US, as the master located in France cannot send the updated information, afaik
[09:54] <Zic> kjackal: do you know if converting an existing CDK cluster into one of a set of kube-fed CDK clusters will be possible?
[09:54] <Zic> or do I need to restart from scratch and build a new CDK with kube-fed clusters?
[09:54] <Zic> (rephrasing my question: do you know if such a scenario is planned to be supported in the future Juju scenario of kube-fed)
[09:55] <Zic> CDK classic cluster -> CDK clusters, part of a Federation with a new CDK cluster
[09:57] <kjackal> Zic converting a cluster is a reasonable ask. We are looking at what the most frictionless path is.
[10:03] <Zic> kjackal: in fact, I will leave the FR cluster as it is, and deploy a new one, with a full control plane in the US
[10:03] <Zic> then linking them through a Federation
[10:03] <Zic> anyway, for what I can do now, I'm thinking about two completely separate clusters while waiting for kube-fed; it's maybe a better way
[11:07] <rick_h> kjackal: ty, I'll take a look. I was looking through recently updated charms in the store and ran across it and thought it was cool
[12:49] <lazyPower> Zic:  i'm not sure i follow
[12:49] <lazyPower> Zic: help me understand the conundrum with the US DNS scenario
[12:50] <lazyPower> Zic: ahhhh, i think i'm understanding. As these two clusters are feeding from the same etcd backend, you're getting dns entries that point to services in BOTH clusters
[12:50] <lazyPower> Zic: is that consistent with your findings?
[12:53] <lazyPower> erik_lonroth3: are you moving forward now with a better understanding of states?
[12:53] <lazyPower> magicaltrout: you scallywag :P
[12:53] <erik_lonroth3> I'm going to work more on it later today as I had to do another thing just now.
[12:54] <magicaltrout> just telling kjackal to do some work for a change!
[12:55] <lazyPower> erik_lonroth3: ok. When you progress into looking at that deeper don't hesitate to ping either me or kjackal. I've done a fair amount of charm training, and relationships + states are arguably one of the harder concepts to grasp because they touch both the old world of hook-based programming and the new world of state-based programming.
[12:55] <magicaltrout> nooooooooooooooooooooooooooooooooooooooooooooooooooooo
[12:55] <magicaltrout> not hooks ;'(
[12:55] <lazyPower> magicaltrout: you have to deal with hooks to set the proper states :)
[12:55] <lazyPower> and you know this
[12:55] <erik_lonroth3> Yes, I think that's what confuses me a lot. I also read that the old "hook" framework will go away, which makes me hesitant to even use "hooks" in my code.
[12:55]  * magicaltrout pretends hooks don't exist. It's worked well so far
[12:56] <lazyPower> as far as i know we're not removing the hooks. That's still a baseline juju primitive
[12:56] <magicaltrout> kjackal is quite primitive
[12:56] <lazyPower> erik_lonroth3: and disregard magicaltrout, someone yanked his chain this morning...
[12:56] <magicaltrout> its 2pm
[12:57] <lazyPower> its 8am in KC MO
[12:57] <lazyPower> time zones are not a thing
[12:57] <magicaltrout> you're trying to tell me kjackal isn't primitive?
[12:57] <magicaltrout> hmm
[12:57] <magicaltrout> maybe primal then
[12:57] <lazyPower> well that's debatable
[12:57] <lazyPower> i'll cede to that
[12:57] <magicaltrout> hehe
[12:57] <lazyPower> something something greek geeks something something ;)
[12:59] <kjackal> LOL!!! You people are crazy!
[13:01] <lazyPower> :D
[13:11] <Zic> lazyPower: yup, FR has all the control plane (kubernetes-master, etcd, easyrsa, kubeapilb) + kubernetes-worker, US just has kubernetes-worker
[13:12] <Zic> if the VPN goes offline between FR and US, the Kubernetes services component is buggy, as it continues to forward some requests to FR pods from US
[13:12] <Zic> (which they cannot reach, as the VPN is dead)
[13:13] <Zic> I think there is no local system on kubernetes-worker which checks if all pods of a service are alive; it's only the master & etcd which have the info
[13:17] <lazyPower> Zic: yeah, thats not something we can directly support today without making modifications to upstream
[13:18] <lazyPower> the expectation there would be to run a control plane with independent etcd backends per cluster, and use federation
[13:18] <lazyPower> you're on the right path with the analysis this morning
[13:18] <lazyPower> sorry I didn't catch that before, i clearly had not devoted enough braincells to the question when it was originally posed
[13:19] <Zic> I think I will do some GeoDNS magic that says "if VPN goes down, switch US traffic to FR" while waiting for kube-fed as part of CDK :)
[13:19] <Zic> it's not the best way, but it can mitigate this issue
[13:22] <Zic> lazyPower: as I asked kjackal this morning, do you plan to support converting an existing CDK cluster to be part of a federation in the next Juju scenario with kube-fed? or should I plan new cluster(s) to switch to the fed model?
[13:23] <lazyPower> Zic: you would need to redeploy the US cluster as it's bound to the current FR control plane
[13:23] <lazyPower> you might be able to just unrelate and deploy a new CP and then relate to the new CP
[13:23] <lazyPower> but that's not something I have tested personally.
[13:24] <lazyPower> and i hesitate to say its a good idea to mix stale state with new state.
[13:24] <Zic> lazyPower: the US cluster is just AWS instances, no problem to respawn from scratch here :)
[13:24] <Zic> (it's more the FR part, which is a composition of VMware and physical servers)
[13:25] <Zic> if I can just juju delete-machine all of US, then deploy a new CDK cluster to US and tie it to FR via kube-fed, that will be cool :)
[13:29] <rick_h> Reminder: Juju show at 2pm EST today!
[14:09] <lazyPower> Zic: that's the integration path i would propose. We can certainly work with you to test the feature when it's in alpha state to ensure it covers your needs
[14:09] <lazyPower> kjackal: I'm going to cc you on this as we have a stakeholder for the feature now
[14:19] <SimonKLB> hey folks! I'm currently in the process of writing a charm to support more advanced forms of configuration, which is currently not possible (in a nice way) when only using the normal charm config
[14:19] <SimonKLB> I would love to use subordinate charms for this but, from what I can see, those are restricted in that they are not removable
[14:20] <SimonKLB> what is the exact reason for this and would it be possible to make them more flexible in the future?
[14:20] <lazyPower> ping rick_h ^
[14:21] <lazyPower> rick_h: i know this was a topic we've visited in the past but I do forget the reasons why they are so tightly coupled.
[14:24] <kklimonda> I've had to increase the timeout on lxd waitready (and the lxd upstart service file) -- right now it's 600 seconds. I /believe/ that service was hitting this timeout and some containers were not starting randomly, but I'm not 100% sure (could be a placebo for all I know) - is this something that someone else has ever seen?
[14:27] <lazyPower> kklimonda: not in my experience but i'm sure there's hardware factors at play there
[14:27] <lazyPower> for example i tend to put my lxd machines on ssd's backed by either btrfs or zfs for fast cloning
[14:33] <rick_h> lazyPower: SimonKLB it's the hulk-smashing case. We can't promise they remove cleanly and leave the state like it started. It's not promised in the model tbh
[14:35] <SimonKLB> rick_h: how is it different from a poorly written stop hook in a normal charm? i could see how a normal charm could leave the model in a mess as well if you don't clean everything up properly?
[14:35] <SimonKLB> or what makes the subordinate more invasive in that regard?
[14:37] <rick_h> SimonKLB: because a normal charm is put into a container or on a machine that goes away when torn down
[14:38] <rick_h> SimonKLB: I admit it's not a perfect system. There's room for improvement. However, it's a bit tough to promise the model is good and solid in some situations and subordinates are one of those.
[14:39] <SimonKLB> rick_h: are there any differences between co-locating two charms on the same machine and subordinate charms?
[14:39] <rick_h> SimonKLB: right, but we strongly suggest folks don't hulk smash for the same reason
[14:39] <rick_h> SimonKLB: and push using lxd containers and the like
[14:41] <SimonKLB> rick_h: so right now i do `juju deploy X` and `juju deploy Y --to [machine with X]` which works fine; the reason i would like to use subordinates is that you wouldn't have to know the location of X, and if you add another unit of X, Y would automatically get installed there as well
[14:41] <rick_h> SimonKLB: definitely and agree that it's what they're for
[14:42] <rick_h> SimonKLB: but they carry some extra thought on how clean things will be if you're adding/removing them
[14:42] <SimonKLB> if the only reason for not making stop available for subordinates is that it could leave a mess behind, i don't see why co-locating multiple charms on the same machine is allowed either?
[14:42] <SimonKLB> or am i missing something?
[14:42] <rick_h> SimonKLB: it's allowed but not recommended. Almost all bundles in the store don't do it. It's not "good practice"
[14:43] <rick_h> it's like installing your application and the db on the same machine
[14:43] <rick_h> sure, you want to do it for testing/etc
[14:43] <rick_h> but it's not helping with best practices
[14:43] <rick_h> SimonKLB: so subordinates can do it, but they're not ideal in that they don't remove cleanly and juju shows that.
[14:43] <rick_h> SimonKLB: like I said, it's the history. I'm not saying it can't be made better
[14:43] <rick_h> SimonKLB: basically I'm just telling you how it got to where it is.
[14:45] <SimonKLB> rick_h: yea i see why co-locating stuff like that is a bad habit, but I'm sure there are going to be cases where deploying an extra machine would be unnecessary, especially for "add-on charms" like this
[14:46] <SimonKLB> add-on charms, that you want to be able to add _and_ remove that is
[14:46] <rick_h> SimonKLB: and so we fully support it in subordinates. Please use them. We do all the time for things like the landscape client, nagios, etc.
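For reference, a charm opts into being a subordinate in its metadata.yaml: `subordinate: true` plus a container-scoped relation is what ties each subordinate unit to a principal unit (as with the landscape-client and nagios charms mentioned above). A minimal sketch — the charm and relation names here are illustrative, not from any real charm:

```yaml
name: my-addon          # illustrative charm name
summary: Example add-on (subordinate) charm
subordinate: true       # declares this charm as a subordinate
requires:
  host:                 # illustrative relation name
    interface: juju-info
    scope: container    # container scope co-locates a unit with each principal unit
```

Once related to a principal application, a unit of this charm is deployed alongside every unit of that application automatically, which is the behavior SimonKLB is after.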
[14:46] <SimonKLB> wait, so removing subordinates is possible?
[14:47] <rick_h> SimonKLB: no, you can add them and setup add-on charms.
[14:47] <rick_h> SimonKLB: but if you want to remove and change that thing you need to rebuild it with a new deploy
[14:47] <rick_h> SimonKLB: by all means, file a bug (there might be one) and we can look at improving things
[14:47] <SimonKLB> rick_h: that would be for juju core?
[14:48] <rick_h> SimonKLB: sure thing. https://launchpad.net/juju
[14:48] <SimonKLB> rick_h: would it be best to propose a new charm type or should i push for stop hooks in subordinates?
[14:49] <SimonKLB> im worried that trying to get stop hooks in subordinates would have a lot of pushback since that could break a lot of the current setups
[14:49] <rick_h> SimonKLB: improving subordinates to act like a full application sounds like a starting point
[14:49] <rick_h> SimonKLB: basically I'd focus on the pain point.
[14:49] <rick_h> SimonKLB: e.g. why is the lack of removing them causing you issues
[14:49] <SimonKLB> rick_h: alright, ill give it a shot! :) thanks
[14:50] <rick_h> SimonKLB: np, thanks for the feedback!
[15:39] <bdx> some grumbling going on at a kubernetes workshop about how their workshop doesn't work on ubuntu, but it works on windows ..... https://github.com/apprenda/kubernetes-workshop
[15:43] <bdx> and osx
[15:43] <bdx> what haters
[16:05] <rick_h> bdx: any hint as to what's broken?
[16:12] <bdx> rick_h: I'm inquiring
[16:13] <bdx> rick_h: looks like some compiled "provisioning" bins
[16:14] <rick_h> bdx: yes, looking at their stuff it's all compiled so it's not immediately obvious why they'd hit issues.
[16:14] <bdx> http://imgur.com/a/awIAW
[16:31] <lazyPower> SimonKLB: rick_h - i can say that during my pilot of the dex charm, i opted for snap packaging as my delivery format, and being able to snap remove and have an atomic operation that just wiped it out from the machine was a pleasant experience.
[16:31] <lazyPower> (hours later)
[16:34] <bdx> rick_h: I've spotted some things that would cause the docker file to fail on ubuntu
[16:35] <bdx> https://github.com/apprenda/kubernetes-workshop/blob/master/DockerFiles/web/Dockerfile#L6
[16:35] <bdx> http://paste.ubuntu.com/24549762/
[16:36] <bdx> xenial at least ... I think nginx in trusty still has an nginx conf there
[16:37] <bdx> not sure if that RUN cmd failing would bork the whole dockerfile or not
[16:38] <bdx> nah, the conf doesn't exist in /etc/nginx/conf.d in trusty either
[16:38] <bdx> I bet there are some other small gotchas like that
[16:47] <bdx> but I guess that's different from the workshop actually running on ubuntu
[16:59] <SimonKLB> lazyPower: yea, if there was some kind of payload i would probably just integrate it into the charm, but in the case of the charm I'm currently building it's more about using juju stuff such as interfaces, actions and the config
[17:41] <rick_h> 20min to juju show #12!
[17:47] <rick_h> https://www.youtube.com/watch?v=oJukQzROo-Q to watch
[17:47] <rick_h> https://hangouts.google.com/hangouts/_/jurpmjck7ffwhpi2coqxwpl4aye to participate and get your camera/mic working
[17:48] <tychicus> rick_h: thanks
[17:50] <rick_h> hatch: jrwren lazyPower kwmonroe ^
[17:51] <rick_h> bdx: magicaltrout as well if you're feeling chatty today ^
[17:59] <rick_h> hatch: you coming today?
[18:13] <bdx> that "accounts" page is awesome!
[18:14] <bdx> that will be a great place to track ssh keys too
[18:14] <bdx> when ssh keys become user sensitive
[18:16] <zeestrat> rick_h: How many more betas/RCs are you aiming for with 2.2?
[18:32] <rick_h> zeestrat: I'm honestly not sure. I definitely think there's at least one more beta coming
[18:35] <Merlijn_S> Damn, we have to push more of our charms, we do have a reactive non-subordinate jupyter charm, but it's not in the store for the moment..
[18:39] <rick_h> Merlijn_S: doh!
[18:40] <rick_h> Merlijn_S: well I might know an italian professor interested in checking it out :)
[18:41] <Merlijn_S> haha, I'll contact him. It's also heavily used by my colleagues for teaching
[18:42] <rick_h> Merlijn_S: yea, it seems perfectly awesome for that. I wonder if we could pull off some sort of juju/charmschool with it
[18:43] <Merlijn_S> it also has an auto-generated xkcd password (to stop students from cheating ;)) maybe less relevant for charm schools :)
[18:43] <rick_h> lol
[18:43] <rick_h> have to love open source ops, the great features you get baked in!
[18:45] <lazyPower> Merlijn_S: I have some updates for che incoming
[18:46] <Merlijn_S> Awesome!
[18:46] <lazyPower> is what's in tengu-team's master the latest effort or is there more i've not gotten?
[18:46] <rick_h> Merlijn_S: I was going to do a blog post around this charm, if you guys push yours up I'd appreciate it so I can compare and maybe adjust the focus there a bit
[18:46] <Merlijn_S> @rick_h https://jujucharms.com/u/tengu-team/jupyter-notebook/0
[18:46] <rick_h> Merlijn_S: hah, that's some fast service! :)
[18:47]  * lazyPower notes charm push vs lp push... the day we finally got to party in real time
[18:47] <Merlijn_S> @lazyPower master is the latest, I haven't touched it in a while
[18:47] <lazyPower> Merlijn_S: ack. I have s'more ideas but i'm light on time to contribute them. I will however fix the immutable config since 5.9.1 has some serious UI/UX improvements <3
[18:47]  * rick_h notes Merlijn_S might not have set perms since he can't see it
[18:56] <Merlijn_S> @rick_h can you see it now?
[18:56] <rick_h> Merlijn_S: bingo ty much!
[19:00] <Merlijn_S> @lazyPower I would really like to have the charm always install the latest, but Che is changing so fast that they break the charm quickly..
[19:00] <lazyPower> well tbh it works with the nightlies
[19:01] <lazyPower> but i'm not exactly using your stack anymore because everything i push to github is initially owned by you
[19:01] <lazyPower> https://github.com/juju-solutions/layer-dex/commits/master
[19:02] <lazyPower> all that boilerplate credit :D
[19:03] <Merlijn_S> Ow, that's an issue :)
[19:03] <Merlijn_S> Any idea how to fix this?
[19:03] <lazyPower> The only thing i can figure is to omit the .git from that boilerplate
[19:04] <lazyPower> make the end user initialize the repository
[19:04] <lazyPower> that or just own all the boilerplate coming from the che charm and enjoy having bloated GH stats
[19:04] <lazyPower> either way is fine :) i'm just on a quest to really figure out how this is put together so it's more useful when i'm on my Chromebook
[19:04] <Merlijn_S> It's been a while since I looked at it, but from what I can remember, che insists on it being a git repo. Che pulls that from github during project creation
[19:05] <lazyPower> ah, that makes sense
[19:05] <lazyPower> i've started initializing a blank stack and go from there
[19:05] <lazyPower> the workspace snapshots seem to do a good enough job of preserving state that i'm not redoing config management every time i spin up the container
[19:05] <Merlijn_S> PS: I've been working on documentation for getting started with developing for-and-in JaaS: https://ibcnservices.github.io/tengu-docs/use/eclipse-che.html
[19:06] <lazyPower> kick butt! That's a nice start
[19:06] <Merlijn_S> users can start charming on JaaS without ever leaving their browser
[19:06] <lazyPower> rick_h: ^
[19:06] <rick_h> Merlijn_S: very cool. Will look it over.
[19:07] <Merlijn_S> Would be nice if we could have a way for che to import the credentials of the model/user that deployed che
[19:07] <Merlijn_S> The "controller" relationship might come in handy here
[19:07] <lazyPower> now you're cookin
[19:07] <lazyPower> that's what i'm talkin bout
[19:08] <rick_h> Merlijn_S: heads up some docs changes will be going live hopefully tomorrow https://github.com/juju/docs/pull/1826
[19:09] <rick_h> Merlijn_S: yea the representing a controller as an application will be awesome.
[19:12] <Merlijn_S> @lazyPower: you were the one that talked about connecting the openvpn charm to an SDN-type thing right?
[19:12] <lazyPower> yep
[19:13] <Merlijn_S> We're running into an issue with the openvpn charm where it doesn't correctly push the routes to the GCE network, because each GCE VPS is in a 255.255.255.255 subnet.
[19:14] <lazyPower> hmmm
[19:14] <lazyPower> is it how openvpn is configuring itself? like is it hard coded to 255.255.255.0?
[19:14] <Merlijn_S> So we're looking for a way to tell the VPN charm what networks are attached to the server. If we create a relationship that allows another charm to tell the vpn charm what networks it is attached to, is that something you could use to connect an SDN to the charm?
[19:15] <Merlijn_S> The charm tries to figure out what networks it's connected to based on the output from the puppet `facter` tool
[19:16] <Merlijn_S> but that fails on GCE because each GCE VPS thinks it's the only server connected to that network. GCE does some funky trickery with the default gateway or smth to make it work
[19:17] <lazyPower> Merlijn_S: i think so. I can try to strawman something between openvpn and flannel when i'm not up to my eyeballs in kubernetes features
[19:17] <lazyPower> maybe a friday lab
[19:18] <lazyPower> the basic bit would be adding the route to openvpn incoming from flannel, and then making sure that route is connected and its subnet is accounted for
[19:18] <lazyPower> the rest should be automagic
[19:19] <Merlijn_S> Hm, so would you need anything from the openvpn side? What would that relationship look like?
[19:22] <lazyPower> it should only need to pass its network configuration over
[19:22] <lazyPower> such as cidr
[19:22] <lazyPower> the rest we can probe from route
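A rough sketch of what the openvpn charm could do with a CIDR received over such a relation, using only the Python stdlib. The `route_directive` helper and the relation-data shape are assumptions for illustration, not the charm's existing API (though `push "route ..."` is standard OpenVPN server configuration syntax):

```python
# Sketch: turn a CIDR advertised over a (hypothetical) SDN relation into an
# OpenVPN route directive for server.conf. The helper name and the idea of
# templating it into config are illustrative assumptions, not the charm's API.
import ipaddress

def route_directive(cidr):
    """Expand a CIDR like '10.1.0.0/16' into an OpenVPN push route line."""
    net = ipaddress.ip_network(cidr)
    return f'push "route {net.network_address} {net.netmask}"'

# e.g. a flannel-style pod network CIDR received from the relation:
print(route_directive('10.1.0.0/16'))
# -> push "route 10.1.0.0 255.255.0.0"
```

This also sidesteps the GCE `facter` problem above: the network comes from relation data rather than from probing the host, where GCE's /32 per-VM trickery makes local detection fail.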
[19:28] <kwmonroe> hey petevg, wadda you make of this?  http://paste.ubuntu.com/24550557/ -- i def have bt-0.12.0 on the system and verified this commit is in place:  https://github.com/juju-solutions/bundletester/commit/96f323abd81c537b99612a8fab3cf2fcf41170f5
[19:30] <petevg> kwmonroe: That's bad.
[19:40] <petevg> kwmonroe: that var should just default to a falsey value (that's the default behavior of the argparser lib)
[19:41] <petevg> ... I have no idea why it wouldn't be doing so.
[19:41] <petevg> kwmonroe: are you calling bundletester from a cli, or are you invoking it in some other way?
[19:42] <kwmonroe> petevg: it's cwr that's calling bt, sorry i didn't show the invocation before:  http://paste.ubuntu.com/24550619/
[19:47] <petevg> kwmonroe: that's the problem. cloud weather report runs the tester class of bundletester directly, but it doesn't bother to set up the args.
[19:47] <petevg> kwmonroe: two solutions:
[19:48] <petevg> 1) Make cloud weather report exercise bundletester's arg parser (not great).
[19:48] <petevg> 2) Pass in "no-matrix" explicitly when invoking bundletester from cloud weather report.
[19:48] <petevg> ... or "no_matrix". I think that the dash has been converted to an underscore by the time we invoke the tester.
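petevg's dash-to-underscore point is standard argparse behavior, and it also shows why driving a tester class directly can leave the attribute unset. A small illustration — the manual-namespace part is a sketch of the failure mode, not cloud weather report's actual code:

```python
# argparse converts '--no-matrix' to the attribute name 'no_matrix'
# (dashes become underscores in the default dest).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--no-matrix', action='store_true')

args = parser.parse_args([])          # CLI path: defaults to False
assert args.no_matrix is False

# When invoking the tester class directly (as cwr does), the namespace has
# to be built by hand -- forgetting this attribute is what bites here:
manual = argparse.Namespace()
assert not hasattr(manual, 'no_matrix')
manual.no_matrix = True               # the fix: set it explicitly
assert manual.no_matrix is True
```

So option 2 above amounts to making sure `no_matrix` is set on whatever options object gets handed to the tester.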
[19:50] <kwmonroe> ack petevg, i'll give option 2 a whirl shortly
[19:50] <petevg> Cool. thx, kwmonroe.
[19:53] <Merlijn_S> @rick_h I have some more feedback on the getting started page. Where can I put that?
[19:54] <rick_h> Merlijn_S: shoot me an email and I'll work it into next doc updates I've got going on.
[19:54] <rick_h> Merlijn_S: or pr or pastebin or whatever works for you
[19:54] <Merlijn_S> aight, I'll send you an email
[19:56] <Budgie^Smore> wow I almost fell asleep watching a docker intro video!