[02:11] <thumper> anyone here who understands writing reactive charms?
[02:18] <magicaltrout> that sounds like a trick question
[03:17] <thumper> well, I'm trying to work out why my code isn't doing what I think it should be doing
[06:08] <anrah> I've written some, but can't promise anything :)
[06:08] <anrah> I'm interested whether anyone has an idea how to mock those reactive states
[06:09] <anrah> To write unit tests for the charm without actually deploying the unit
[06:09] <anrah> And leave amulet for integration testing
[12:56] <anrah> Oh, https://www.youtube.com/watch?v=NwzdbzvsvzY answers almost all the questions :)
[14:26] <D4RKS1D3> Hi, I am looking for some information regarding the step between preseed/curtin and cloud-init
[14:27] <D4RKS1D3> and how juju dynamically performs actions against maas
[16:55] <rick_h> reminder Juju Show in 1hr (arosales, hml, kwmonroe, tvansteenburgh, marcoceppi, magicaltrout, bdx, and anyone else that might be interested)
[16:56] <hml> rick_h: I’ll be watching, what’s the topic this week?
[16:56] <rick_h> hml: going to run through new storage stuff to play with in juju 2.3
[17:51] <rick_h> for anyone that wants to join The Juju Show https://hangouts.google.com/hangouts/_/7mskwxg6qnhqbnfbhhwfqrt6tqe and for watchers check out https://www.youtube.com/watch?v=jrOP3nHNRcs
[17:52] <rick_h> 8mins and counting, I wish we had a cool space-x countdown setup heh
[17:56] <hml> rick_h: any chance you can repeat the link to watch the juju show?  i seem to have missed it.  :-)
[17:56] <rick_h> hml: watch is https://www.youtube.com/watch?v=jrOP3nHNRcs
[17:56] <hml> rick_h: ty!
[18:00] <rick_h> anyone else coming in?
[18:00] <rick_h> going once...going twice...
[18:03] <CoderEurope> are you guys taking questions for the show ?
[18:04] <kwmonroe> no questions related to big data.  all else is fair game.
[18:07] <hml> rick_h: you’re in the small window - watching from youtube.  :-)
[18:08] <kwmonroe> gawd i hope i wasn't picking my nose
[18:08] <CoderEurope> Question: On marco's jujucharms webpage , https://jujucharms.com/u/marcoceppi/discourse/ the charm has been updated to xenial (not precise) | My question is I am rerouting people to this page & it looks too "out of date" for them to use. How and when do we change this? perhaps you could refer me to the correct 'web-team' for the jujucharms' page ?
[18:13] <CoderEurope> not quite - it's been updated in github to xenial but the webpage does not reflect this.
[18:14] <kwmonroe> CoderEurope: that's then a question of building the updated gh source and pushing to the charm store.
[18:21] <CoderEurope> kwmonroe, if the gh source is built and pushed to the charm store - are those details (xenial version) automatically updated at the top of the web-page? | if not, I guess I am just saying that this needs a bit of tweaking with versions and instructions.
[18:21] <CoderEurope> Here is the change : https://github.com/marcoceppi/discourse-charm/commits/master
[18:27] <kwmonroe> yeah CoderEurope, whoever builds that updated source can call 'charm build --series xenial' and then push that to the store.  the charm series will be accurate at the top of the jujucharms.com page for the newly pushed charm.
[18:29] <kwmonroe> CoderEurope: alternatively, the source can be updated (metadata.yaml) to specify 1 or more series.  with that, you wouldn't need to specify a series to 'charm build'
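kwmonroe's two options above can be sketched as follows. The charm name, paths, and revision are illustrative placeholders, not the actual discourse charm's layout:

```shell
# Option 1: pass the series at build time, then push the result to the store
# (charm name and build path are illustrative)
charm build --series xenial
charm push ./builds/discourse cs:~marcoceppi/discourse
charm release cs:~marcoceppi/discourse-0

# Option 2: declare supported series once in the charm's metadata.yaml,
# so a plain 'charm build' needs no --series flag:
#   series:
#     - xenial
```

With option 2, the series shown at the top of the jujucharms.com page comes from the pushed charm's metadata.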
[18:29] <CoderEurope> kwmonroe, So how do we get marco to push it for automatic update to the store ? or are we doing that now ?
[18:31] <CoderEurope> great show by the way !
[18:31] <rick_h> CoderEurope: ty
[18:31] <rick_h> CoderEurope: the way we get marcoceppi to update is to go "HEYYYYY marcoceppi!"
[18:32] <CoderEurope> rick_h,  cool beans
[18:32] <rick_h> CoderEurope: but really, the best thing is to setup such that marco isn't the single point there and that you've got folks that can build a community around it
[18:32] <rick_h> CoderEurope: and keep it fresh so folks can go on vacations and such w/o a problem
[18:32] <kwmonroe> CoderEurope: marcoceppi is away at the moment, but when he returns, he'll see all these messages.  as rick_h was saying earlier, maybe a better approach would be to create a discourse-team with interested parties so that any team member could update the source/store.
[18:32] <CoderEurope> that sounds good.
[18:35] <CoderEurope> rick_h, What does a 'typical' jujucharms community team look like ? Can you give me an example link ?
[18:36] <rick_h> CoderEurope: so https://jujucharms.com/u/bigdata-charmers is the bigdata community that kwmonroe is part of
[18:36] <kwmonroe> CoderEurope: and that "team" is defined in launchpad here:  https://launchpad.net/~bigdata-charmers
[18:37] <rick_h> https://jujucharms.com/u/prometheus-charmers/ is another example
[18:37] <rick_h> smaller one working around a single workload (well the space around it)
[18:39] <CoderEurope> thanks guys - I shall revisit this soon.
[18:39] <CoderEurope> As an aside ......
[18:40] <CoderEurope> I am guessing that zookeeper wasn't this project that I backed? https://is.gd/ovyazy
[18:42] <kwmonroe> negative CoderEurope -- the zookeeper charm is based on http://zookeeper.apache.org/
[18:42] <CoderEurope> kwmonroe, yeah thought as much.
[18:58] <magicaltrout> cross model relations...... if I have a k8s cluster running on openstack and would like to flex workers by slapping more into an aws environment how likely is that to "work"?
[18:59] <magicaltrout> s/model/cloud
[19:02] <rick_h> magicaltrout: the thing is going to be if the k8 cluster is controlling things like proxies/other settings it'll be doing it in the wrong cloud.
[19:02] <rick_h> magicaltrout: I think there's some nuance there that I'm not sure about. tvansteenburgh might know more specifically
[19:03] <rick_h> magicaltrout: also note, you're deploying new workers in the other cloud and relating them. So config changes/etc have to be done twice, once in each cloud and such right?
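The cross-model flow magicaltrout is describing might look roughly like the sketch below under Juju 2.x's offer/consume commands. The model names, endpoint names, and worker count are all made up for illustration; whether the k8s charms behave correctly across clouds is exactly the open question rick_h raises above:

```shell
# Illustrative only: model and endpoint names are hypothetical.
# In the openstack-backed model, offer the master's worker-facing endpoint:
juju offer k8s-openstack.kubernetes-master:kube-control

# In the aws model, consume the offer and relate extra workers to it:
juju switch aws-model
juju consume admin/k8s-openstack.kubernetes-master
juju deploy kubernetes-worker -n 4
juju add-relation kubernetes-worker:kube-control kubernetes-master
```

As rick_h notes, config changes then have to be applied in each model separately, since the workers live in a different model and cloud from the master.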
[19:10] <magicaltrout> hmm, we do have a plan to stick the openstack into an address range that shares the AWS VPC range
[19:10] <magicaltrout> so networking stuff would hopefully be reasonably transparent
[19:14] <magicaltrout> i'm told 1300 cores isn't enough
[19:14] <magicaltrout> and apparently we need to flex up to an additional 940 cores on EC2
[19:14] <magicaltrout> \o/
[19:16] <rick_h> lol nice!