[00:17] <stormmore> lazyPowe_ just for my knowledge and if you don't mind me asking, what timezone does your team work in?
[00:58] <lazyPowe_> stormmore most of us are US Central time
[01:00] <stormmore> lazyPowe_ good to know :)
[01:13] <stormmore> lazyPowe_ I am one of those weird people that works in whatever timezone is most suitable for the task at hand ;-)
[01:15] <lazyPowe_> stormmore - i hear ya. As a traveling monk of the python order, i tend to shift my schedule around but normally work CST hours.
[01:17] <stormmore> lazyPowe_ oh I am just a BOFH ;-)
[01:17] <lazyPowe_> I was told i had to quit sporting that title
[01:17] <lazyPowe_> and stop deleting my $lusers $HOME
[01:19]  * stormmore will never stop sporting that title! computers would be so much better without their users! 
[01:21] <lazyPowe_> welp, have a good evening. Things are looking good here
[01:21]  * lazyPowe_ dips out for the night
[07:09] <anrah> Hi all! Is there a way to add another network interface for a unit deployed to OpenStack?
[08:27] <BlackDex> anrah: deployed via juju? bare-metal? using maas? what version of juju?
[08:27] <BlackDex> what version of ubuntu?
[08:32] <kjackal> Good morning Juju world!
[08:33] <miken> axw: Hi there. I've just added a comment on https://bugs.launchpad.net/juju/+bug/1643430 . Interested if you know of any other workaround... (it can wait 'til tomorrow though)
[08:33] <mup> Bug #1643430: Unassigned units in an error state cannot be removed <juju:Triaged> <https://launchpad.net/bugs/1643430>
[08:33] <Budgie^Smore> Morning! D*$n it is already :-/
[08:33] <miken> Morning kjackal
[08:35] <axw> miken: I'm not aware of any workaround at the moment, sorry
[08:35] <axw> miken: I mean, apart from getting access to the controller and hacking the mongo DB
[08:40] <anrah> BlackDex: I mean that I am not deploying openstack but deploying own charms to existing openstack
[08:40] <anrah> BlackDex: 16.04 and juju 2.0.2
[08:43] <BlackDex> miken: what version of juju?
[08:43] <BlackDex> anrah: using maas? or juju local with juju add-machine?
[08:44] <miken> axw: Are there details of doing that on another bug? I could ask those with access to the controller to do so.
[08:44] <miken> BlackDex: 2.0.2
[08:45] <BlackDex> hmm, i know of a tool for juju 1.25 called mgopurge, don't know if that can be used for 2.0.2
[08:46] <BlackDex> miken: https://github.com/niedbalski/fix-1459033
[08:46] <axw> miken: not that I know of. mgopurge is for fixing broken transactions, that wouldn't help in this case.
[08:46] <BlackDex> oh no
[08:46] <BlackDex> no wrong one
[08:47] <BlackDex> or that could also be a fix
[08:47] <BlackDex> maybe
[08:47] <BlackDex> but only for 1.25.2
[08:47] <BlackDex> and the mongo cleanup is https://github.com/niedbalski/fix-1613866
[08:47] <BlackDex> but, i don't know if that works for 2.x
[08:48] <axw> miken: don't try and remove the model (yet)
[08:49] <miken> Thanks BlackDex
[08:49] <miken> axw: oh, let me update that RT then :)
[08:50] <miken> axw: Did you want access to it, or what's possible?
[08:50] <axw> miken: if you destroy-model then it gets into an even worse state. if you restart the controller agent, it should try to assign the unit to a machine again
[08:50] <miken> Ah - thanks. I'll note that.
[08:51] <axw> miken: sorry, should have said before - memories are fading back in
[08:54] <anrah> BlackDex: Not maas, OpenStack is my cloud
[08:55] <BlackDex> maas is not a cloud ;) it's a bare-metal provisioner
[08:55] <anrah> BlackDex: well yeah :) But anyway just to OpenStack
[08:56] <BlackDex> but if i'm not mistaken, juju using lxd on 16.04 would add all bridged interfaces to the LXD containers
[08:56] <BlackDex> if not, then you probably need to change the template/profile of the lxd containers juju created
[08:57] <anrah> I'm not using LXD Juju just provisions new instances to OpenStack
[08:58] <BlackDex> you mean using openstack as a cloud provider?
[08:58] <anrah> Yes
[08:59] <anrah> on my model config I say: network: 5c7cd500-c581-4491-86fa-af95a71e8c18
[08:59] <anrah> basically i want another network to my model and from there to my instances
[09:05] <BlackDex> i haven't done that much with juju using openstack as a cloud provider
[09:05] <BlackDex> i would guess that creating a network and adding the id of that network would be enough
[09:08] <anrah> Yes, that works for one network, but the need is basically to separate the management and data planes
[09:09] <anrah> So for external communications the instances would use interface X and for management (ssh + other stuff) interface Y
[09:10] <anrah> I can do that by manually adding another network after the instance is running but I was wondering is there a way to do that with Juju
[09:10] <BlackDex> then i think you have to look into the spaces for this
[09:11] <BlackDex> https://jujucharms.com/docs/2.0/network-spaces
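For reference, binding application endpoints to spaces in a bundle looks roughly like this. This is a hedged sketch: the charm and space names are invented, and at this point spaces only worked on providers with space support (MAAS, as noted later in the discussion):

```yaml
# Hypothetical bundle fragment: bind endpoints to Juju network spaces.
# 'mgmt-space' and 'data-space' must already exist in the model.
applications:
  mycharm:
    charm: cs:mycharm
    num_units: 1
    bindings:
      "": mgmt-space      # default space for all endpoints
      data: data-space    # put the 'data' endpoint on the data network
```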
[09:24] <CoderEurope> Any time anywhere - twitter works on my phone http://imgur.com/v1sOmkB
[09:25] <magicaltrout> that tweet is a lie
[09:25] <magicaltrout> you don't see the london eye from canonical's office! :P
[09:41] <anrah> BlackDex: yep, currently only for MAAS. Have to figure out something :)
[10:51] <Zic> hi here, maybe it's an "RTFM" question: how can I check, without actually doing any action, if upgrades are available for all charms that I'm using in one juju command?
[10:56] <marcoceppi> Zic: `juju status` should show if upgrades are available at the top of the status output
[10:56] <marcoceppi> Zic: I could also write you a real quick `juju show-upgrades` plugin, because I don't think it's as apparent
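A `show-upgrades` plugin could be little more than a filter over the status JSON. A minimal sketch; the `can-upgrade-to` field name is from memory and should be treated as an assumption:

```python
def charms_with_upgrades(status):
    """Map application name -> newer charm URL, if one is available.

    `status` is the parsed output of `juju status --format=json`; the
    `can-upgrade-to` key (assumed name) carries the newer charm URL.
    """
    return {
        name: app["can-upgrade-to"]
        for name, app in status.get("applications", {}).items()
        if app.get("can-upgrade-to")
    }

# Demo on a canned status document rather than a live controller:
sample = {
    "applications": {
        "easyrsa": {"can-upgrade-to": "cs:~containers/easyrsa-9"},
        "etcd": {},
    }
}
print(charms_with_upgrades(sample))  # {'easyrsa': 'cs:~containers/easyrsa-9'}
```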
[11:00] <stub> magicaltrout: Not now, but the previous office had a great view of it.
[11:03] <Zic> marcoceppi: oh I didn't notice for juju status, thanks
[12:10] <surf> what is the difference between model and controller in JUJU?
[12:14] <Zic> hmm, I think EasyRSA did something wrong here: http://paste.ubuntu.com/23899101/
[12:19] <verterok> good morning
[12:20]  * verterok gets coffee and reads the backlog
[12:29] <Zic> if someone of the CDK team is around?
[12:35] <marcoceppi> surf: a controller is a special model which runs the Juju control plane that exposes the GUI, API server, and dispatches event in the deployment
[14:39] <Zic> lazyPower: would you believe me if I said that I have another problem and was watching for you to come online? (hello anyway ^^)
[14:41] <rick_h> Zic: lol, why does that sound like "I'mmmmm baaaaacccckkkk!" in my head
[14:41] <Zic> huhu :)
[14:41] <Zic> I'm back to haunt
[14:42] <Zic> lazyPower: when you have a little time: http://paste.ubuntu.com/23899271/
[14:42] <Zic> sorry in advance
[14:45] <lazyPower> Zic it appears that the etcd-operator deployment has left some garbage behind, and potentially changed some tls certificates. I'm not positive which, as i haven't used etcd-operator
[14:46] <Zic> oh, I didn't know that was possible, but it was the only thing I did before encountering this error...
[14:46] <Zic> do you think it's recoverable? or must I restore an old snapshot?
[14:46] <lazyPower> well you're getting TLS errors in your log spam here - Jan 31 12:13:18 mth-k8smaster-01 kube-apiserver[1177]: E0131 12:13:18.033706    1177 handlers.go:58] Unable to authenticate the request due to an error: crypto/rsa: verification error
[14:47] <Zic> yeah, but I didn't know etcd-operator would touch that part
[14:47] <lazyPower> try dumping your kubernetes objects and seeing if it left something behind you can delete
[14:47] <lazyPower> i imagine it added something to the k8s object store and thats whats causing the error
[14:47] <lazyPower> but thats just a guess
[14:49] <Zic> lazyPower: I can't run delete command over kubectl anymore btw :(
[14:50] <lazyPower> i'm really baffled at how this keeps happening.
[14:51] <Zic> to reassure you: before all this exotic stuff (Vitess & etcd-operator), everything was working fine (like an ElasticSearch cluster or a Cassandra one)
[14:52] <Zic> I was waiting for your advice before returning to a previous working state
[14:52] <Zic> but then, I will avoid etcd-operator
[14:52] <lazyPower> my thoughts are to check your tls certificates with x509 validation to ensure you have the correct IP Addresses
[14:53] <lazyPower> including the SDN address
[14:53] <lazyPower> Zic - another thought would be to deploy the kubernetes-e2e charm, and run an e2e validation suite post restore/fix to ensure the cluster is behaving as we expect it to
[14:54] <lazyPower> Zic - https://jujucharms.com/u/containers/kubernetes-e2e/
[14:56] <Zic> oh, I was looking for this kind of solution
[14:56] <lazyPower> Zic - i think its probably fine to use etcd-operator, but we need to know what its doing
[14:56] <Zic> is it like a conformance tool for CDK?
[14:56] <lazyPower> and then account for anything its done
[14:56] <lazyPower> yeah, e2e is written by google + contributors to validate the k8s deployment behaves as expected
[14:56] <Zic> sounds cool
[14:56] <lazyPower> it runs very complex scenarios in kubernetes automatically, and generates quite a bit of load during its testing suite
[14:57] <lazyPower> and at the end will report any errors it discovers during the test run. we run this daily on CDK and publish the results to gubernator (their upstream dashboard)
[14:58] <lazyPower> https://k8s-gubernator.appspot.com/builds/canonical-kubernetes-tests/logs/kubernetes-gce-e2e-node
[15:22] <Zic> lazyPower: all my certificates have the right names in them (tested via openssl x509) and they haven't been modified since the 16th of this month
[15:23] <lazyPower> seems like its something else thats caused the problem. did etcd-operator leave anything behind in the kube-system namespace?
[15:23] <Zic> nope, I searched through RCs, deployments, pods, PVs and PVCs, statefulsets, thirdpartyresources, services... in --all-namespaces
[15:24] <Zic> lazyPower: I think I'm done with etcd; Vitess can use either an etcd cluster or ZooKeeper, and I'm more confident with ZK, so I think I will switch to it
[15:24] <Zic> and what's more, I will let you work :D
[15:24] <Zic> we've spent so many hours together, I don't have enough pizzas to send you!
[15:25] <lazyPower> :D
[15:25] <lazyPower> at this point i'd be happy with a pint and a pack of gum :D
[15:26] <lazyPower> however i'm bummed you ran into so many errors that caused the cluster crash, there's obviously something going on with that last deployment (i presume) that has altered the state of the cluster. I'm also wondering if you're using NTP on your servers to ensure there's no clock drift?
[15:26] <lazyPower> i know that clock skew can cause some weird issues here and there to crop up
[16:11] <kwmonroe> rick_h: reminder received (wrt juju ci show-and-tell tomorrow)
[16:11] <rick_h> kwmonroe: you ok for it?
[16:11] <rick_h> kwmonroe: and thanks for the ack
[16:12] <kwmonroe> you bet! what time is the show tomorrow?
[16:12] <rick_h> kwmonroe 2pm est
[16:12] <rick_h> kwmonroe: I'll get you an invite
[16:12] <kwmonroe> thx rick_h
[16:40] <mskalka> marcoceppi, Are you available for a minute? Also do you ever respond to pings with 'Polo!'?
[16:41] <marcoceppi> mskalka: Polo, I mean no - never.
[16:42] <mskalka> so I'm charming up Rocket.Chat just to get familiar with the reactive framework, and I've run into a roadblock, namely the MongoDB charm.
[16:44] <marcoceppi> mskalka: how can I help?
[16:46] <mskalka> after some poking around I see you have a layered version in the works, but it's not completed. I'm interested in pushing it forwards a bit, what's a good place to start?
[16:46] <marcoceppi> mskalka: we need to figure out replicasets, it's a bit out of my depth, but this is the latest layer: https://github.com/marcoceppi/layer-mongodb
[16:47] <marcoceppi> mskalka: I also have this fork for the mongodb interface: https://github.com/marcoceppi/interface-mongodb which adds new support for a peer
[16:48] <marcoceppi> mskalka: basically, when a charm is deployed, the leader should init the RS, then for each peer added the charm should check "am I the master of the RS" and if so, add the new unit to the RS
[16:48] <marcoceppi> I just can't get it to work for me, for the life of me
[16:48] <mskalka> ok. I'll look into how the old charm handles it, maybe I can suss it out
[16:49] <marcoceppi> mskalka: thar be dragons in the old charm
[16:49] <mskalka> thar be dragons in most pre-reactive charms ;)
[16:49] <marcoceppi> truth
[16:50] <mskalka> alright that's a good a place to start as any, thanks!
[16:50] <marcoceppi> mskalka: I'm about to relocate, but I'll be online in a bit again, I'm happy to help work through it.
[16:50] <mskalka> marcoceppi, sounds good, if I run into a major roadblock I'll ping you
[16:56] <mbruzek> lazyPower: Do you have some of the images for older kubernetes? I am working on my presentation and I don't know where to get our older images
[17:00] <marcoceppi> mbruzek: I have some
[17:00] <lazyPower> mbruzek i do, hang on while i fish up a lightning talk slide
[17:00] <bdx> anyone interested in Juju <-> newrelic integration: this would be a great time to make a ticket with them for a python sdk, I've started the commotion, see http://i.imgur.com/dvPlFcb.png
[17:00] <mbruzek> Thanks guys
[17:00] <lazyPower> or marcoceppi can totally swoop in
[17:00]  * marcoceppi HAWK SCREEEEECH
[17:00] <mbruzek> murica
[17:00] <marcoceppi> bdx: yassssss
[17:00] <lazyPower> oh i totally lied - https://docs.google.com/presentation/d/1m69DG957JK9PMCEXNFL80_0DFXBfQbfIEB6obAK6HPY/edit#slide=id.g70d4533c6_0_22
[17:01] <lazyPower> only has one slide of k8s formation(s)
[17:01] <lazyPower> pre circle icons to boot
[17:27] <mskalka> marcoceppi, OK, I have a good idea of what's happening in the old Mongo charm (or a horribly myopic idea, we'll see); can we discuss later this afternoon when I've had a chance to poke at its ugly innards a little more?
[17:29] <Zic> have you ever seen Flannel starting before the network target? I observe this only on my bare-metal server where I use a bonding interface
[17:34] <lazyPower> Zic - it's started after the network-online.target is reached: https://github.com/juju-solutions/charm-flannel/blob/master/templates/flannel.service
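The ordering in the linked unit file is plain systemd dependency handling; roughly (paraphrasing the shape of such a template, not quoting it verbatim):

```
[Unit]
Description=Flannel overlay network daemon
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/flanneld ...
Restart=on-failure
```

With `After=` plus `Wants=` on `network-online.target`, flannel should not start until the network is reported up; a bonded interface that comes up late can still race this if nothing actually waits for the bond.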
[17:49] <stormmore> hola juju world! :)
[17:51] <stormmore> lazyPower should we use an NTP charm as a subordinate on a cluster?
[17:51] <lazyPower> stormmore - indeed
[17:51] <lazyPower> stormmore https://jujucharms.com/ntp/
[17:52] <stormmore> lazyPower I am aware of that one, related to what though in the k8s model?
[17:52] <lazyPower> kubernetes-master, kubernetes-worker, etcd
[17:53] <lazyPower> we should probably include that in the bundle at some point in the near future
[17:53] <stormmore> btw when I actually get my hardware, etc. setup, I plan on having a model just for "testing" the bundles, etc.
[17:53] <stormmore> lazyPower that is what I was thinking, I know it is part of the openstack bundle
[17:53] <lazyPower> stormmore - yeah, thats a great idea to have a separate model for testing things like upgrades and what not before you execute against a production cluster
[17:54] <lazyPower> despite our best efforts to test every scenario, we'll find corner cases that are rough until we've gotten a few hundred upgrades under our belt and shaken all the bugs out of that path
[17:54] <lazyPower> i'm sure Zic will testify :)
[17:56] <stormmore> lazyPower - exactly my thoughts, plus being able to help with developing things along side you guys :)
[17:56] <lazyPower> <3
[18:16] <Zic> lazyPower: I'm testifying everything lazyPower said <3
[18:16] <Zic> (or, near that)
[19:08] <rick_h> marcoceppi: do you know anyone that knows the nagios charms well I can bug?
[19:34] <tvansteenburgh> rick_h: what do you make of this? http://pastebin.ubuntu.com/23901081/
[19:34]  * rick_h goes ruh roh
[19:35] <rick_h> tvansteenburgh: looks like they upped the version of the terms to 1 and you haven't agreed to that yet
[19:35] <rick_h> tvansteenburgh: try juju agree but with a /1 vs a /0?
[19:35] <tvansteenburgh> rick_h: right, but /1 isn't listed in the charm's terms
[19:36] <rick_h> tvansteenburgh: oic, hmm
[19:37] <tvansteenburgh> rick_h: also, it seems i'll never be able to agree to /0 anyway, according to this: https://github.com/juju/juju/blob/afeb62dd9f750437a97ffbf275b1d1524836d513/cmd/juju/romulus/agree/agree.go#L95
[19:40] <rick_h> tvansteenburgh: that's fishy there..
[19:40]  * rick_h is trying to see if he can list the terms for that team with charm terms -u xxx
[19:42] <rick_h> tvansteenburgh: ok, so sounds like a question/bug for cmars
[19:42] <rick_h> tvansteenburgh: I can agree to /1 and get the terms listed/etc
[19:42] <rick_h> tvansteenburgh: but I also see I cannot agree to /0, but that's what's listed in charm terms -u ibmcharmers
[19:42] <mskalka> marcoceppi: *puts on swim trunks* Marco?
[19:43] <rick_h> tvansteenburgh: so it sure seems like the UX is trying to increment the revision to non-zero so that it doesn't do 0-based counting for users
[19:43] <rick_h> tvansteenburgh: but not being complete
[19:45] <tvansteenburgh> rick_h: okay. i've got automation that agrees to terms. it relies on the terms returned by the api being accurate. i guess in the meantime i could maybe parse the charm pull error message for the terms i need to agree to
[19:45] <rick_h> tvansteenburgh: if cmars confirms the logic, I'd just check the version the charm API reports and, if that's 0, add one. The charm should be updated to say /1
[19:45] <rick_h> tvansteenburgh: and that should be reliable
[19:46] <cmars> just back from lunch
[19:46] <tvansteenburgh> rick_h: oh, i see. cool, that would be easier.
[19:46] <cmars> tvansteenburgh, term revisions start with 1
[19:47] <tvansteenburgh> https://api.jujucharms.com/v5/~ibmcharmers/ibm-websphere-liberty-5/meta/any?include=revision-info&include=promulgated&include=id-name&include=owner&include=terms
[19:47] <rick_h> cmars: right, but if I charm terms -u ibmcharmers I get back a /0 for ibm-wlp/0
[19:47] <rick_h> cmars: so the charm is set to that and that doesn't work as you note above
[19:48] <cmars> rick_h, just because the charm metadata has a term revision 0, doesn't mean that a term revision 0 exists
[19:48] <cmars> it's an invalid term ID
[19:49] <tvansteenburgh> ok so that is just parsed out of the charm's metadata.yaml?
[19:49] <rick_h> cmars: ok, so that's from the user? I'd assumed that the terms api would auto handle incrementing the revision so that they can't change existing revisions/etc
[19:49] <cmars> tvansteenburgh, correct
[19:49] <cmars> rick_h, ^^
[19:49] <rick_h> oic, I thought it was pulling from the terms api itself as to what terms are stored
[19:49] <rick_h> cmars: is there a way to query the terms service directly?
[19:49] <cmars> rick_h, yes
[19:50] <cmars> rick_h, see https://github.com/juju/terms-client for example
[19:51] <rick_h> cmars: ah ok, I was looking through juju/charm command and didn't see anything.
[19:51] <tvansteenburgh> aha
[19:51] <cmars> rick_h, we add terms-client to the charm snap
[19:51] <cmars> plugins
[19:54] <rick_h> cmars: sorry, I'm missing something. So as a diff snap? I've got the charm snap but not finding any show-term and the like. Is that a recent update?
[19:54] <stormmore> lazyPower did you get a chance to talk to the team about elasticsearch/kibana?
[19:55] <lazyPower> stormmore ah thanks for reminding me, i haven't.
[19:55] <tvansteenburgh> cmars: my use case is, "for a given charm url, show me a list of the terms i need to agree to"
[19:55] <stormmore> lazyPower no problem, hence the reminder ;-)
[19:55] <tvansteenburgh> rick_h: it's in the latest snap from --candidate
[19:55] <rick_h> tvansteenburgh: ah, I'm on stable
[19:55] <lazyPower> stormmore - let me table this for tomorrow and at bare minimum i'll run a bundle generation and kick a deploy before i EOD today to ensure it still turns up correctly
[19:56] <rick_h> tvansteenburgh: k, I feel less out of it then ty
[19:56] <lazyPower> stormmore - if it works as expected in its current form i'll send you over the bundle in my namespace, and we can pilot from there
[19:56] <stormmore> lazyPower no problem, my dev teams are being slow anyway :)
[19:56] <lazyPower> stormmore - i know for a fact bdx wanted this integration in the past, and i do believe that hasn't changed
[19:57] <cmars> tvansteenburgh, so for that, you'd use this API call: https://github.com/juju/terms-client/blob/master/api/api.go#L315
[19:57] <cmars> it's macaroon authenticated, because the request is made for a logged in user
[19:58] <tvansteenburgh> ok
[19:59] <stormmore> in the meantime I am going to look at deploying Nexus 3 into the cluster for our private registry
[19:59] <cmars> tvansteenburgh, you'd build a list of term IDs from charm metadata, then add them to a CheckAgreementsRequest (https://github.com/juju/terms-client/blob/master/api/wireformat/entities.go#L171) and call GetUnsignedTerms with that
[20:03] <tvansteenburgh> cmars, rick_h: i think i have what i need now, thanks for your help!
[20:03] <cmars> ok, great
[20:31] <cmars> can someone review my merge proposal into charmhelpers? https://code.launchpad.net/~cmars/charm-helpers/add-metricenv/+merge/315952
[20:37] <marcoceppi> cmars: any reason this isn't in hookenv?
[20:37] <cmars> marcoceppi, it's a different hook execution environment
[20:37] <marcoceppi> so are actions, but they're in hookenv
[20:37] <marcoceppi> different or not, it's still a hook environment?
[20:37] <cmars> marcoceppi, you can't use metricenv stuff from normal hooks at all. and vice-versa
[20:38] <marcoceppi> cmars: likewise with relations (to an extent) and actions
[20:39] <cmars> marcoceppi, i thought separating them would make this distinction clearer to the API user. "these aren't available for hooks generally -- these are special"
[20:40] <marcoceppi> cmars: it's better to have the commands check if they are in the right hook context and raise exceptions when not, but not even the actions do this
[20:40] <marcoceppi> I don't really see how this warrants a departure from the existing hookenv.py
[20:40] <marcoceppi> that said, hookenv and charmhelpers in general need to be retired for something better. but that's a longer story
[20:41] <lazyPower> our world is going to catch on fire when we do that
[20:41] <cmars> marcoceppi, i can concatenate it as well. hookenv is only 1037 LOC, there's room ;)
[20:41] <lazyPower> you're referring to gutting plumbing from 99.9% of all charms
[20:41] <marcoceppi> lazyPower: gutting, improving, tomato, tomato
[20:42]  * mskalka shudders
[20:42] <lazyPower> ^
[20:42] <marcoceppi> I have an elaborate plan for this
[20:42] <lazyPower> i have elaborate arguments for you every step of the way <3
[20:43] <lazyPower> but trolling aside, whats your plan marcoceppi?
[20:43] <marcoceppi> lazyPower: we've talked abou tthis before, years in the making
[20:44] <marcoceppi> charmhelpers is bloatware, I'd like to pull the things out of core and make them feel more like how juju presents its tools
[20:45] <lazyPower> yeah
[20:45] <lazyPower> thats easily a 6 month project if not a year in terms of deprecation and cleanup effort
[20:45] <lazyPower> a lot of older charms are gonna get bit by that and die off slowly
[20:45] <lazyPower> which i'm OK with
[20:46] <lazyPower> if you're not maintaining it, let it die (i'm deeply seated in this camp of having unmaintained charms)
[20:46] <cmars> marcoceppi, i can move the code if necessary to get that landed. anything else need to change for that MP?
[20:46] <marcoceppi> cmars: you make a lot of assumptions that add-metric exists on disk
[20:47] <cmars> marcoceppi, no more than open-port
[20:47] <marcoceppi> cmars: open-port has been in juju since 0.2
[20:47] <marcoceppi> cmars: I recommend taking a look at status-set and network-get on how they implement newer features without dealing with tracebacks
[20:49] <magicaltrout> if people didn't have elaborate plans they wouldn't work at Canonical.....
[20:49] <stokachu> world domination
[20:51] <mskalka> marcoceppi: while you're here, can I pick your brain for a minute?
[20:51] <marcoceppi> mskalka: go for it
[20:52] <mskalka> marcoceppi: just want to know what you've tried in the past to get the replset thing moving, then run what I have in mind past you
[20:52] <mskalka> marcoceppi: just to be sure I'm not barking up the wrong trees
[20:57] <marcoceppi> mskalka: so, I've never really tried to code it, I'll be honest I've never gotten it to work manually
[20:58] <marcoceppi> mskalka: my plan was to do this: is-leader? does leader-settings say I've bootstrapped this rs? no - init rs, yes ignore
[20:58] <marcoceppi> on each new peer addition, each unit checks to see if it's the RS leader (not the juju leader) and if it is, adds the peer
[20:59] <mskalka> marcoceppi: that's what I had in mind as well, without the 'is rs init'd', just have a @only_once on leader elected to spin it up
[21:01] <mskalka> marcoceppi: then again a sanity check is probably a good idea. Then just fill in the rest for broken/departed (kick off a new election if leader, else remove)
[21:02] <marcoceppi> mskalka: yeah, I wouldn't always trust @only_once; with @when('leader.elected') we can just see if leader_get('rs.init') is true (and even verify by probing mongo)
[21:03] <marcoceppi> mskalka: there's a weird, potential race condition that could arise, where juju re-elects a leader to a unit which has just run install but not yet gotten config / relations, and so it'd run @only_once and is_leader, init a new RS, and you've got split brain
[21:03] <marcoceppi> by checking (and setting) leadership settings you can persist that data between elections
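The scheme marcoceppi describes boils down to a small decision function. A pure-Python sketch with invented names; the real charm would use leader_get/leader_set from charmhelpers rather than a plain dict:

```python
def decide_rs_action(is_leader, leader_settings):
    """Pick the replica-set action for one unit.

    `leader_settings` stands in for the data persisted via leader-set;
    recording 'rs.init' there survives leader re-elections, which is
    what guards a freshly elected leader against bootstrapping a
    second RS (the split-brain case described above).
    """
    initialized = leader_settings.get("rs.init") == "true"
    if is_leader and not initialized:
        return "init-rs"        # bootstrap the RS, then leader-set rs.init=true
    if is_leader and initialized:
        return "add-new-peers"  # the RS master adds newly joined units
    return "wait"               # non-leaders get added by the master
```

The point of the guard is that `is_leader` alone is not enough: a unit can become the juju leader after the RS already exists, and only the persisted flag tells it so.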
[21:04] <mskalka> marcoceppi: I thought about that, I don't have enough juju experience yet to know if that would be an issue haha
[21:05] <mskalka> marcoceppi: alright, it seems like I'm headed in the right direction then. I'll see if I can finish this up today or tomorrow, time allowing
[21:08] <marcoceppi> mskalka: \o/
[21:08] <mskalka> o7
[21:33] <cmars> marcoceppi, updated, please take another look at https://code.launchpad.net/~cmars/charm-helpers/add-metricenv/+merge/315952 ?
[23:10] <stormmore> so if I need to add an ingress entry point, do I modify the ingress controllers that are already in my cluster or should I be creating a new one?
[23:20] <lazyPower> stormmore - you just add an ingress object
[23:21] <lazyPower> the rest should be handled transparently
[23:21] <lazyPower> stormmore - something like https://bitbucket.org/chuckbutler/awesome-potato/src/f820bfc106fbbc00ca6045e07e27ac1f86b8b4f5/deploy/development.yaml?at=juju-demo&fileviewer=file-view-default#development.yaml-101:115
[23:31] <stormmore> cool thanks, that is what I thought. just a little tired after doing a few 18+ hour days lately
[23:32] <lazyPower> i hear ya
[23:33] <lazyPower> there are cases where you will want to scale your ingress controller, but most of those reasons have vanished in the 1.5.x release of k8s, as its ingress api uses the same controller pod for all namespaces so long as you scope your ingress objects with a namespace
[23:33] <stormmore> is it better to use kind: Deployment vs kind: ReplicationController?
[23:33] <lazyPower> before you had to deploy an ingress controller for every namespace, and it was tedious and resource intensive to run all those nginx pods
[23:33] <lazyPower> yeah, you can use either, but deployments are favored as you can do rolling updates with them
[23:33] <lazyPower> in blue/green style deployments
[23:34] <stormmore> I am still getting my head around all the options in the YAML files
[23:34] <lazyPower> deployments create rc's which create pods
[23:34] <lazyPower> so you can run a --rolling-update on a deployment, and it will upgrade the RC, and slowly phase out the pods under the old RC until it can successfully delete all of them
[23:34] <lazyPower> if your rolling update fails, you can reasonably revert back to the existing RC
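The Deployment-over-RC behaviour described above is driven by the manifest's strategy stanza. A minimal sketch against the beta API of that era; the app name and image are invented:

```yaml
apiVersion: extensions/v1beta1   # Deployments were still a beta resource
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # phase out old pods one at a time
      maxSurge: 1         # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: example/frontend:v2   # bumping this triggers a rolling update
```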
[23:36] <stormmore> nice, so apparently a lot of the videos I have been watching by hightower are outdated :)
[23:36] <lazyPower> kubes moves so fast though
[23:36] <lazyPower> its hard not to be outdated
[23:36] <stormmore> apparently!
[23:37] <lazyPower> Deployments are also still a beta resource
[23:37] <lazyPower> so thats possibly why its not promoted in the training material
[23:37] <lazyPower> like we tend to shy away from anything thats not listed as stable in the API because its subject to change. betas dont usually get changes, but the one time we decide to rely on that, it'll break and we'll have to change an implementation detail
[23:38] <lazyPower> and nobody wants that
[23:38] <stormmore> how would you recommend handling different paths? i.e. I really don't care about all the nodes responding to all the paths and it looks like I can create a single Ingress for the "host" and use paths to point to the right service
[23:38] <stormmore> we were talking about creating api.domain.com/v1/<service> as our structure
[23:39] <lazyPower> thats useful when you want to map a microservice into your url structure, like foo.com/api would route to your backend golang web-api impl
[23:39] <lazyPower> and / routes to your expressjs frontend
[23:39] <lazyPower> i dont use that particular format often, i tend to deploy with subdomains more often than i url mux
[23:39] <lazyPower> but it does work, and works reasonably well might i add
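A single-host, path-routed Ingress along the lines discussed would look roughly like this. Service names and paths are made up; this uses the beta Ingress API current in 1.5.x:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
spec:
  rules:
  - host: api.domain.com
    http:
      paths:
      - path: /v1/users            # one microservice per path
        backend:
          serviceName: users-svc
          servicePort: 80
      - path: /                    # everything else to the frontend
        backend:
          serviceName: frontend-svc
          servicePort: 80
```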
[23:40] <stormmore> lol :) yup sounds like I am on the right track for my thoughts
[23:40] <stormmore> so it appears that the Ingress now uses the service lb, do you know if there is going to be other options for the LB than round robin? for instance least conn?
[23:43] <lazyPower> i dont off hand
[23:43] <lazyPower> i would need to go dig around in the issue tracker
[23:43] <lazyPower> i'm fairly certain there's a lot of talk around this, a lot of users are going the route of cloud-provider LBs, but that gets expensive quickly. We're talking about making some supporting charms to enable that class of infrastructure but nothing concrete yet
[23:44] <lazyPower> https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/183 -- as an example
[23:46] <stormmore> personally I don't care if it's in or out of cluster, other than the fact that out of cluster comes at a cost for bare-metal clusters, but even between services in the cluster it would be nice to be able to better balance connections based on load or another metric
[23:47] <stormmore> for now, I am pushing my dev teams to make their stuff fully stateless to account for the round-robin service LB
[23:49] <lazyPower> hmm
[23:49] <lazyPower> stormmore - remind me another time to revisit the haproxy lb approach.
[23:49] <lazyPower> i'm fairly certain we can tune this behavior in the nginx/haproxy flavors of an ingress controller
[23:50] <lazyPower> i'm totally open to trying to patch this with a configmap so we can further tune the ingress behavior
[23:51] <lazyPower> we have such a patch already submitted that needs additional vetting to enable running a registry in k8s
[23:51] <lazyPower> stormmore - you might want to tag this issue and track it - https://github.com/juju-solutions/kubernetes/pull/100
[23:51] <lazyPower> s/issue/pr/
[23:56] <stormmore> cool thanks