[00:17] lazyPowe_ just for my knowledge and if you don't mind me asking, what timezone does your team work in
[00:58] stormmore most of us are US Central time
[01:00] lazyPowe_ good to know :)
[01:13] lazyPowe_ I am one of those weird people that works in whatever timezone is most suitable for the task at hand ;-)
[01:15] stormmore - i hear ya. As a traveling monk of the python order, i tend to shift my schedule around but normally work CST hours.
[01:17] lazyPowe_ oh I am just a BOFH ;-)
[01:17] I was told i had to quit sporting that title
[01:17] and stop deleting my $lusers $HOME
[01:19] * stormmore will never stop sporting that title! computers would be so much better without their users!
[01:21] welp, have a good evening. Things are looking good here
[01:21] * lazyPowe_ dips out for the night
[07:09] Hi all! Is there a way to add another network interface for a unit deployed to OpenStack?
=== frankban|afk is now known as frankban
[08:27] anrah: deployed via juju? bare-metal? using maas? what version of juju?
[08:27] what version of ubuntu?
[08:32] Good morning Juju world!
[08:33] axw: Hi there. I've just added a comment on https://bugs.launchpad.net/juju/+bug/1643430 . Interested if you know of any other workaround... (it can wait 'til tomorrow though)
[08:33] Bug #1643430: Unassigned units in an error state cannot be removed
[08:33] Morning! D*$n it is already :-/
[08:33] Morning kjackal
[08:35] miken: I'm not aware of any workaround at the moment, sorry
[08:35] miken: I mean, apart from getting access to the controller and hacking the mongo DB
[08:40] BlackDex: I mean that I am not deploying openstack but deploying my own charms to an existing openstack
[08:40] BlackDex: 16.04 and juju 2.0.2
[08:43] miken: what version of juju?
[08:43] anrah: using maas? or juju local with juju add-machine?
[08:44] axw: Are there details of doing that on another bug? I could ask those with access to the controller to do so.
[08:44] BlackDex: 2.0.2
[08:45] hmm, i know of a tool for juju 1.25 called mgopurge, don't know if that can be used for 2.0.2
[08:46] miken: https://github.com/niedbalski/fix-1459033
[08:46] miken: not that I know of. mgopurge is for fixing broken transactions, that wouldn't help in this case.
[08:46] oh no
[08:46] no, wrong one
[08:47] or that could also be a fix
[08:47] maybe
[08:47] but only for 1.25.2
[08:47] and the mongo cleanup is https://github.com/niedbalski/fix-1613866
[08:47] but, i don't know if that works for 2.x
[08:48] miken: don't try and remove the model (yet)
[08:49] Thanks BlackDex
[08:49] axw: oh, let me update that RT then :)
[08:50] axw: Did you want access to it, or what's possible?
[08:50] miken: if you destroy-model then it gets into an even worse state. if you restart the controller agent, it should try to assign the unit to a machine again
[08:50] Ah - thanks. I'll note that.
[08:51] miken: sorry, should have said before - memories are fading back in
[08:54] BlackDex: Not maas, OpenStack is my cloud
[08:55] maas is not a cloud ;) it's a bare-metal provisioner
[08:55] BlackDex: well yeah :) But anyway, just to OpenStack
[08:56] but if i'm not mistaken, juju using lxd on 16.04 would add all bridged interfaces to the lxd's
[08:56] if not, then you probably need to change the template/profile of the lxd containers juju created
[08:57] I'm not using LXD; Juju just provisions new instances on OpenStack
[08:58] you mean using openstack as a cloud provider?
[08:58] Yes
[08:59] on my model config I say: network: 5c7cd500-c581-4491-86fa-af95a71e8c18
[08:59] basically i want to add another network to my model and from there to my instances
[09:05] i haven't done that much with juju using openstack as a cloud provider
[09:05] i would guess that creating a network and adding the id of that network would be enough
[09:08] Yes, that works for one network, but the need is basically to separate the mgmt and data planes
[09:09] So for external communications the instances would use interface X and for management (ssh + other stuff) interface Y
[09:10] I can do that by manually adding another network after the instance is running, but I was wondering if there is a way to do that with Juju
[09:10] then i think you have to look into network spaces for this
[09:11] https://jujucharms.com/docs/2.0/network-spaces
[09:24] Any time anywhere - twitter works on my phone http://imgur.com/v1sOmkB
[09:25] that tweet is a lie
[09:25] you don't see the london eye from canonical's office! :P
[09:41] BlackDex: yep, currently only for MAAS, have to figure out something :)
=== wesleyma` is now known as wesleymason
[10:51] hi here, maybe it's an "RTFM" question: how can I check, without actually doing any action, if upgrades are available for all charms that I'm using, in one juju command?
[10:56] Zic: `juju status` should show if upgrades are available at the top of the status output
[10:56] Zic: I could also write you a real quick `juju show-upgrades` plugin, because I don't think it's as apparent
[11:00] magicaltrout: Not now, but the previous office had a great view of it.
[11:03] marcoceppi: oh, I didn't notice that in juju status, thanks
=== barmaley is now known as barmaley-testing
=== barmaley-testing is now known as barmaley
[12:10] what is the difference between a model and a controller in Juju?
[12:14] hmm, I think EasyRSA did something wrong here: http://paste.ubuntu.com/23899101/
[12:19] good morning
[12:20] * verterok gets coffee and reads the backlog
[12:29] is someone from the CDK team around?
=== sparkieg` is now known as sparkiegeek
[12:35] surf: a controller is a special model which runs the Juju control plane that exposes the GUI and API server, and dispatches events in the deployment
=== scuttle|afk is now known as scuttlemonkey
[14:39] lazyPower: would you believe me if I said that I have another problem and was watching for you to come online? (hello anyway ^^)
[14:41] Zic: lol, why does that sound like "I'mmmmm baaaaacccckkkk!" in my head
[14:41] huhu :)
[14:41] I'm back to haunt
[14:42] lazyPower: when you have a little time: http://paste.ubuntu.com/23899271/
[14:42] sorry in advance
[14:45] Zic it appears that the etcd-operator deployment has left some garbage behind, and potentially changed some tls certificates. I'm not positive which as i haven't used etcd-operator
[14:46] oh, I didn't know that was possible, but it was the only thing I did before encountering this error...
[14:46] do you think it's recoverable? or must I restore to an old snapshot?
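A rough sketch of the `juju show-upgrades` plugin marcoceppi offers above. Juju picks up any executable named `juju-<command>` on the PATH as a plugin; this sketch assumes `juju status --format=json` exposes a `can-upgrade-to` field for applications with a newer charm revision (as the tabular output hints), which is worth verifying against your Juju version before relying on it.

    #!/usr/bin/env python3
    # juju-show-upgrades: hypothetical plugin sketch, not an official Juju command.
    # Lists, per application, whether a newer charm revision is available,
    # without performing any action.
    import json
    import subprocess


    def main():
        status = json.loads(subprocess.check_output(
            ["juju", "status", "--format=json"]).decode("utf-8"))
        for name, app in sorted(status.get("applications", {}).items()):
            target = app.get("can-upgrade-to")  # assumed field name, verify it
            if target:
                print("{}: {} -> {}".format(name, app.get("charm"), target))
            else:
                print("{}: up to date ({})".format(name, app.get("charm")))


    if __name__ == "__main__":
        main()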
[14:46] well you're getting TLS errors in your log spam here - Jan 31 12:13:18 mth-k8smaster-01 kube-apiserver[1177]: E0131 12:13:18.033706 1177 handlers.go:58] Unable to authenticate the request due to an error: crypto/rsa: verification error
[14:47] yeah, but I didn't think etcd-operator would touch that part
[14:47] try dumping your kubernetes objects and seeing if it left something behind you can delete
[14:47] i imagine it added something to the k8s object store and that's what's causing the error
[14:47] but that's just a guess
[14:49] lazyPower: I can't run delete commands via kubectl anymore btw :(
[14:50] i'm really baffled at how this keeps happening.
[14:51] to reassure you, before all this exotic stuff (Vitess & etcd-operator) everything was working fine (like an Elasticsearch cluster or a Cassandra one)
[14:52] I was waiting for your advice before returning to a previous working state
[14:52] but in any case, I will avoid etcd-operator
[14:52] my thoughts are to check your tls certificates with x509 validation to ensure you have the correct IP addresses
[14:53] including the SDN address
[14:53] Zic - another thought would be to deploy the kubernetes-e2e charm, and run an e2e validation suite post restore/fix to ensure the cluster is behaving as we expect it to
[14:54] Zic - https://jujucharms.com/u/containers/kubernetes-e2e/
[14:56] oh, I was looking for this kind of solution
[14:56] Zic - i think it's probably fine to use etcd-operator, but we need to know what it's doing
[14:56] it's like a conformance tool for CDK?
[14:56] and then account for anything it's done
[14:56] yeah, e2e is written by google + contributors to validate the k8s deployment behaves as expected
[14:56] sounds cool
[14:56] it runs very complex scenarios in kubernetes automatically, and generates quite a bit of load during its testing suite
[14:57] and at the end will report any errors it discovers during the test run. we run this daily on CDK and publish the results to gubernator (their upstream dashboard)
[14:58] https://k8s-gubernator.appspot.com/builds/canonical-kubernetes-tests/logs/kubernetes-gce-e2e-node
[15:22] lazyPower: all my certificates have the right names in them (tested via openssl x509) and they have not been modified since the 16th of this month
[15:23] seems like it's something else that's caused the problem. did etcd-operator leave anything behind in the kube-system namespace?
[15:23] nope, I searched through RCs, deployments, pods, PVs and PVClaims, statefulsets, thirdpartyresources, services... in --all-namespaces
[15:24] lazyPower: I think I'm done with etcd; Vitess can use either an etcd cluster or ZooKeeper, and I'm more confident with ZK, so I think I will switch to it
[15:24] and besides, I will let you get back to work :D
[15:24] we've spent so many hours together, I don't have enough pizzas to send you!
[15:25] :D
[15:25] at this point i'd be happy with a pint and a pack of gum :D
[15:26] however i'm bummed you ran into so many errors that caused a cluster crash, there's obviously something going on with that last deployment (i presume) that has altered the state of the cluster. I'm also wondering if you're using NTP on your servers to ensure there's no clock drift?
[15:26] i know that clock skew can cause some weird issues here and there to crop up
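For the certificate check lazyPower suggests above (x509 validation of the names and IP addresses, including the SDN address), the same inspection Zic did with `openssl x509` can be done from Python with the `cryptography` package. The certificate path below is a placeholder, not the canonical CDK location — point it at the kube-apiserver's server certificate on the master.

    #!/usr/bin/env python3
    # Sketch: print the SubjectAltName entries of a server certificate so they
    # can be compared against the expected hostnames, node IPs and SDN/service
    # addresses.
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.x509.oid import ExtensionOID

    CERT_PATH = "/srv/kubernetes/server.crt"  # adjust to wherever your cert lives

    with open(CERT_PATH, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read(), default_backend())

    san = cert.extensions.get_extension_for_oid(
        ExtensionOID.SUBJECT_ALTERNATIVE_NAME).value
    print("Not valid after:", cert.not_valid_after)
    print("DNS names:", san.get_values_for_type(x509.DNSName))
    print("IP addresses:", [str(ip) for ip in san.get_values_for_type(x509.IPAddress)])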
=== zz_CyberJacob is now known as CyberJacob
[16:11] rick_h: reminder received (wrt juju ci show-and-tell tomorrow)
[16:11] kwmonroe: you ok for it?
[16:11] kwmonroe: and thanks for the ack
[16:12] you bet! what time is the show tomorrow?
[16:12] kwmonroe 2pm est
[16:12] kwmonroe: I'll get you an invite
[16:12] thx rick_h
[16:40] marcoceppi, Are you available for a minute? Also do you ever respond to pings with 'Polo!'?
[16:41] mskalka: Polo, I mean no - never.
[16:42] so I'm charming up Rocket.Chat just to get familiar with the reactive framework, and I've run into a roadblock, namely the MongoDB charm.
[16:44] mskalka: how can I help?
[16:46] after some poking around I see you have a layered version in the works, but it's not completed. I'm interested in pushing it forward a bit, what's a good place to start?
[16:46] mskalka: we need to figure out replicasets, it's a bit out of my depth, but this is the latest layer: https://github.com/marcoceppi/layer-mongodb
[16:47] mskalka: I also have this fork for the mongodb interface: https://github.com/marcoceppi/interface-mongodb which adds new support for peers
[16:48] mskalka: basically, when a charm is deployed, the leader should init the RS, then for each peer added the charm should check "am I the master of the RS" and if so, add the new unit to the RS
[16:48] I just can't get it to work for me, for the life of me
[16:48] ok. I'll look into how the old charm handles it, maybe I can suss it out
[16:49] mskalka: thar be dragons in the old charm
[16:49] thar be dragons in most pre-reactive charms ;)
[16:49] truth
[16:50] alright, that's as good a place to start as any, thanks!
[16:50] mskalka: I'm about to relocate, but I'll be online in a bit again, I'm happy to help work through it.
[16:50] marcoceppi, sounds good, if I run into a major roadblock I'll ping you
[16:56] lazyPower: Do you have some of the images for older kubernetes? I am working on my presentation and I don't know where to get our older images
[17:00] mbruzek: I have some
[17:00] mbruzek i do, hang on while i fish up a lightning talk slide
[17:00] anyone interested in Juju <-> newrelic integration: this would be a great time to make a ticket with them for a python sdk, I've started the commotion, see http://i.imgur.com/dvPlFcb.png
[17:00] Thanks guys
[17:00] or marcoceppi can totally swoop in
[17:00] * marcoceppi HAWK SCREEEEECH
[17:00] murica
[17:00] bdx: yassssss
[17:00] oh i totally lied - https://docs.google.com/presentation/d/1m69DG957JK9PMCEXNFL80_0DFXBfQbfIEB6obAK6HPY/edit#slide=id.g70d4533c6_0_22
[17:01] only has one slide of k8s formation(s)
[17:01] pre-circle icons to boot
[17:27] marcoceppi, OK, I have a good idea of what's happening in the old Mongo charm (or a horribly myopic idea, we'll see). Can we discuss later this afternoon when I've had a chance to poke at its ugly innards a little more?
[17:29] did you already have Flannel starting before the network target? I observe this only on my baremetal server where I use a bonding interface
[17:34] Zic - it's started after network-online.target is reached https://github.com/juju-solutions/charm-flannel/blob/master/templates/flannel.service
=== mskalka is now known as mskalka|lunch
[17:49] hola juju world! :)
[17:51] lazyPower, should we use an NTP charm as a subordinate on a cluster?
[17:51] stormmore - indeed
[17:51] stormmore https://jujucharms.com/ntp/
[17:52] lazyPower I am aware of that one, related to what though in the k8s model?
[17:52] kubernetes-master, kubernetes-worker, etcd
[17:53] we should probably include that in the bundle at some point in the near future
[17:53] btw when I actually get my hardware etc. set up, I plan on having a model just for "testing" the bundles, etc.
[17:53] lazyPower that is what I was thinking, I know it is part of the openstack bundle
[17:53] stormmore - yeah, that's a great idea, to have a separate model for testing things like upgrades and what not before you execute against a production cluster
[17:54] despite our best efforts, we're going to try to test every scenario but we'll find corner cases that are rough until we've gotten a few hundred upgrades under our belt and shaken all the bugs out of that path
[17:54] i'm sure Zic will testify :)
[17:56] lazyPower - exactly my thoughts, plus being able to help with developing things alongside you guys :)
[17:56] <3
=== mskalka|lunch is now known as mskalka
[18:16] lazyPower: I'm testifying to everything lazyPower said <3
[18:16] (or, near that)
[19:08] marcoceppi: do you know anyone who knows the nagios charms well that I can bug?
=== frankban is now known as frankban|afk
[19:34] rick_h: what do you make of this? http://pastebin.ubuntu.com/23901081/
[19:34] * rick_h goes ruh roh
=== scuttlemonkey is now known as scuttle|afk
[19:35] tvansteenburgh: looks like they upped the version of the terms to 1 and you haven't agreed to that yet
[19:35] tvansteenburgh: try juju agree but with a /1 vs a /0?
[19:35] rick_h: right, but /1 isn't listed in the charm's terms
[19:36] tvansteenburgh: oic, hmm
[19:37] rick_h: also, it seems i'll never be able to agree to /0 anyway, according to this: https://github.com/juju/juju/blob/afeb62dd9f750437a97ffbf275b1d1524836d513/cmd/juju/romulus/agree/agree.go#L95
[19:40] tvansteenburgh: that's fishy there..
[19:40] * rick_h is trying to see if he can list the terms for that team with charm terms -u xxx
[19:42] tvansteenburgh: ok, so sounds like a question/bug for cmars
[19:42] tvansteenburgh: I can agree to /1 and get the terms listed/etc
[19:42] tvansteenburgh: but I also see I cannot agree to /0, but that's what's listed in charm terms -u ibmcharmers
[19:42] marcoceppi: *puts on swim trunks* Marco?
[19:43] tvansteenburgh: so it sure seems like the UX is trying to increment the revision to non-zero so that it doesn't do 0-based counting for users
[19:43] tvansteenburgh: but not being complete
[19:45] rick_h: okay. i've got automation that agrees to terms. it relies on the terms returned by the api being accurate. i guess in the meantime i could maybe parse the charm pull error message for the terms i need to agree to
[19:45] tvansteenburgh: if cmars confirms the logic, I'd just check the version the charm API says and if that's 0, add one. The charm should be updated to say /1
[19:45] tvansteenburgh: and that should be reliable
[19:46] just back from lunch
[19:46] rick_h: oh, i see. cool, that would be easier.
[19:46] tvansteenburgh, term revisions start with 1
[19:47] https://api.jujucharms.com/v5/~ibmcharmers/ibm-websphere-liberty-5/meta/any?include=revision-info&include=promulgated&include=id-name&include=owner&include=terms
[19:47] cmars: right, but if I charm terms -u ibmcharmes I get back a /0 for ibm-wlp/0
[19:47] cmars: so the charm is set to that and that doesn't work as you note above
[19:47] sorry, -u ibmcharmers
[19:48] rick_h, just because the charm metadata has a term revision 0, doesn't mean that a term revision 0 exists
[19:48] it's an invalid term ID
[19:49] ok so that is just parsed out of the charm's metadata.yaml?
[19:49] cmars: ok, so that's from the user? I'd assumed that the terms api would auto-handle incrementing the revision so that they can't change existing revisions/etc
[19:49] tvansteenburgh, correct
[19:49] rick_h, ^^
[19:49] oic, I thought it was pulling from the terms api itself as to what terms are stored
[19:49] cmars: is there a way to query the terms service directly?
[19:49] rick_h, yes
[19:50] rick_h, see https://github.com/juju/terms-client for example
[19:51] cmars: ah ok, I was looking through the juju/charm command and didn't see anything.
[19:51] aha
[19:51] rick_h, we add terms-client to the charm snap
[19:51] plugins
[19:54] cmars: sorry, I'm missing something. So as a diff snap? I've got the charm snap but not finding any show-term and the like. Is that a recent update?
[19:54] lazyPower did you get a chance to talk to the team about elasticsearch/kibana?
[19:55] stormmore ah thanks for reminding me, i haven't.
[19:55] cmars: my use case is, "for a given charm url, show me a list of the terms i need to agree to"
[19:55] lazyPower no problem, hence the reminder ;-)
[19:55] rick_h: it's in the latest snap from --candidate
[19:55] tvansteenburgh: ah, I'm on stable
[19:55] stormmore - let me table this for tomorrow and at bare minimum i'll run a bundle generation and kick a deploy before i EOD today to ensure it still turns up correctly
[19:56] tvansteenburgh: ok, I feel less out of it then, ty
[19:56] stormmore - if it works as expected in its current form i'll send you over the bundle in my namespace, and we can pilot from there
[19:56] lazyPower no problem, my dev teams are being slow anyway :)
[19:56] stormmore - i know for a fact bdx wanted this integration in the past, and i do believe that hasn't changed
[19:57] tvansteenburgh, so for that, you'd use this API call: https://github.com/juju/terms-client/blob/master/api/api.go#L315
[19:57] it's macaroon-authenticated, because the request is made for a logged-in user
[19:58] ok
[19:59] in the meantime I am going to look at deploying Nexus 3 into the cluster for our private registry
[19:59] tvansteenburgh, you'd build a list of term IDs from charm metadata, then add them to a CheckAgreementsRequest (https://github.com/juju/terms-client/blob/master/api/wireformat/entities.go#L171) and call GetUnsignedTerms with that
[20:03] cmars, rick_h: i think i have what i need now, thanks for your help!
[20:03] ok, great
[20:31] can someone review my merge proposal into charmhelpers? https://code.launchpad.net/~cmars/charm-helpers/add-metricenv/+merge/315952
[20:37] cmars: any reason this isn't in hookenv?
[20:37] marcoceppi, it's a different hook execution environment
[20:37] so are actions, but they're in hookenv
[20:37] different or not, it's still a hook environment?
[20:37] marcoceppi, you can't use metricenv stuff from normal hooks at all. and vice versa
[20:38] cmars: likewise with relations (to an extent) and actions
[20:39] marcoceppi, i thought separating them would make this distinction clearer to the API user. "these aren't available for hooks generally -- these are special"
[20:40] cmars: it's better to have the commands check if they are in the right hook context and raise exceptions when not, but not even the actions do this
[20:40] I don't really see how this warrants a departure from the existing hookenv.py
[20:40] that said, hookenv and charmhelpers in general need to be retired for something better, but that's a longer story
[20:41] our world is going to catch on fire when we do that
[20:41] marcoceppi, i can concatenate it as well. hookenv is only 1037 LOC, there's room ;)
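Going back to the terms discussion above: a sketch of the normalisation rick_h suggests (treat a `/0` term from charm metadata as `/1`, since term revisions start at 1) for automation that agrees to a charm's terms. The shape of the charmstore `meta/any?include=terms` response and the `juju agree --yes` flag are assumptions to verify; the richer, macaroon-authenticated route is GetUnsignedTerms in juju/terms-client, as cmars points out.

    #!/usr/bin/env python3
    # Hypothetical sketch: agree to every term a charm declares, bumping the
    # invalid revision 0 that some charms publish in metadata up to revision 1.
    import json
    import subprocess
    import urllib.request

    CS_META = "https://api.jujucharms.com/v5/{}/meta/any?include=terms"


    def required_terms(charm_id):
        # charm_id e.g. "~ibmcharmers/ibm-websphere-liberty-5"
        with urllib.request.urlopen(CS_META.format(charm_id)) as resp:
            meta = json.loads(resp.read().decode("utf-8"))
        # Assumed response shape: {"Id": ..., "Meta": {"terms": ["owner/name/0", ...]}}
        return meta.get("Meta", {}).get("terms", [])


    def normalise(term):
        # "owner/name/0" -> "owner/name/1"; other revisions are left alone.
        parts = term.split("/")
        if parts and parts[-1] == "0":
            parts[-1] = "1"
        return "/".join(parts)


    def agree_to_terms(charm_id):
        for term in required_terms(charm_id):
            subprocess.check_call(["juju", "agree", "--yes", normalise(term)])


    if __name__ == "__main__":
        agree_to_terms("~ibmcharmers/ibm-websphere-liberty-5")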
[20:41] you're referring to gutting plumbing from 99.9% of all charms
[20:41] lazyPower: gutting, improving, tomato, tomato
[20:42] * mskalka shudders
[20:42] ^
[20:42] I have an elaborate plan for this
[20:42] i have elaborate arguments for you every step of the way <3
[20:43] but trolling aside, what's your plan, marcoceppi?
[20:43] lazyPower: we've talked about this before, years in the making
[20:44] charmhelpers is bloatware, I'd like to pull the things out of core and make them feel more like how juju presents its tools
[20:45] yeah
[20:45] that's easily a 6-month project if not a year in terms of deprecation and cleanup effort
[20:45] a lot of older charms are gonna get bit by that and die off slowly
[20:45] which i'm OK with
[20:46] if you're not maintaining it, let it die (i'm deeply seated in this camp when it comes to unmaintained charms)
[20:46] marcoceppi, i can move the code if necessary to get that landed. anything else that needs to change for that MP?
[20:46] ls
[20:46] cmars: you make a lot of assumptions that add-metric exists on disk
[20:47] marcoceppi, no more than open-port
[20:47] cmars: open-port has been in juju since 0.2
[20:47] cmars: I recommend taking a look at status-set and network-get on how they implement newer features without dealing with tracebacks
[20:49] if people didn't have elaborate plans they wouldn't work at Canonical.....
[20:49] world domination
[20:51] marcoceppi: while you're here, can I pick your brain for a minute?
[20:51] mskalka: go for it
[20:52] marcoceppi: just want to know what you've tried in the past to get the replset thing moving, then run what I have in mind past you
[20:52] marcoceppi: just to be sure I'm not barking up the wrong tree
[20:57] mskalka: so, I've never really tried to code it, I'll be honest I've never gotten it to work manually
[20:58] mskalka: my plan was to do this: is-leader? does leader-settings say I've bootstrapped this rs? no - init rs, yes ignore
[20:58] on each new peer addition, each unit checks to see if it's the RS leader (not the juju leader) and if it is, adds the peer
[20:59] marcoceppi: that's what I had in mind as well, without the 'is rs init'd' check, just have a @only_once on leader elected to spin it up
[21:01] marcoceppi: then again, a sanity check is probably a good idea. Then just fill in the rest for broken/departed (kick off a new election if leader, else remove)
[21:02] mskalka: yeah, I wouldn't always trust @only_once; with @when('leader.elected') we can just see if leader_get('rs.init') is true (and even verify by probing mongo)
[21:03] mskalka: there's a weird, potential race contention that could arise, where juju re-elects a leader to a unit which has just run install but not yet got config / relations, and so it'd run @only_once and is_leader, init a new RS and you've got split brain
[21:03] by checking (and setting) leadership settings you can persist that data between elections
[21:04] marcoceppi: I thought about that, I don't have enough juju experience yet to know if that would be an issue haha
[21:05] marcoceppi: alright, it seems like I'm headed in the right direction then. I'll see if I can finish this up today or tomorrow, time allowing
[21:08] mskalka: \o/
[21:08] o7
[21:33] marcoceppi, updated, please take another look at https://code.launchpad.net/~cmars/charm-helpers/add-metricenv/+merge/315952 ?
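A minimal reactive sketch of the replica-set plan discussed above: the juju leader initialises the RS exactly once, records that in leader settings so a re-election cannot re-run the init, and on each peer joining only the current RS primary (not necessarily the juju leader) adds the member. The flag names, the peer interface method and the mongo-shell wrapper are placeholders, not the actual layer-mongodb/interface-mongodb code; it also assumes the leadership layer's `leadership.is_leader` flag is available.

    # Hypothetical layer-mongodb reactive handlers; names are illustrative only.
    import json
    import subprocess

    from charms.reactive import when, when_not, set_state
    from charmhelpers.core.hookenv import leader_get, leader_set


    def _mongo_eval(expr):
        # Run a JavaScript expression through the local mongo shell and return
        # the parsed JSON result.
        out = subprocess.check_output(
            ["mongo", "--quiet", "--eval", "JSON.stringify({})".format(expr)])
        return json.loads(out.decode("utf-8"))


    @when("mongodb.installed", "leadership.is_leader")
    @when_not("mongodb.replset.initialised")
    def init_replica_set():
        # Only the juju leader bootstraps the RS, and only if leader settings
        # say it has never happened -- guards against the re-election race
        # described above.
        if not leader_get("replset-initialised"):
            _mongo_eval("rs.initiate()")
            leader_set({"replset-initialised": "true"})
        set_state("mongodb.replset.initialised")


    @when("mongodb.replset.initialised", "mongodb-peer.joined")
    def add_new_members(peers):
        # Every unit sees the peer join, but only the current RS primary adds it.
        if not _mongo_eval("db.isMaster()").get("ismaster"):
            return
        for addr in peers.peer_addresses():  # hypothetical interface method
            _mongo_eval('rs.add("{}:27017")'.format(addr))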
=== JoseeAntonioR is now known as jose
=== mskalka is now known as mskalka|afk
[23:10] so if I need to add an ingress entry point, do I modify the ingress controllers that are already in my cluster or should I be creating a new one?
[23:20] stormmore - you just add an ingress object
[23:21] the rest should be handled transparently
[23:21] stormmore - something like https://bitbucket.org/chuckbutler/awesome-potato/src/f820bfc106fbbc00ca6045e07e27ac1f86b8b4f5/deploy/development.yaml?at=juju-demo&fileviewer=file-view-default#development.yaml-101:115
[23:31] cool thanks, that is what I thought. just a little tired after doing a few 18+ hour days lately
[23:32] i hear ya
[23:33] there are cases where you will want to scale your ingress controller, but most of those reasons have vanished in the 1.5.x release of k8s as its ingress api uses the same api-controller pod for all namespaces so long as you scope your ingress objects with a namespace
[23:33] is it better to use kind: Deployment vs kind: ReplicationController?
[23:33] before, you had to deploy an ingress controller for every namespace, and it was tedious and resource-intensive to run all those nginx pods
[23:33] yeah, you can use either, but deployments are favored as you can do rolling updates with them
[23:33] in blue/green style deployments
[23:34] I am still getting my head around all the options in the YAML files
[23:34] deployments create RCs which create pods
[23:34] so you can run a --rolling-update on a deployment, and it will upgrade the RC, and slowly phase out the pods under the old RC until it can successfully delete all of them
[23:34] if your rolling update fails, you can reasonably revert back to the existing RC
[23:36] nice, so apparently a lot of the videos I have been watching by hightower are outdated :)
[23:36] kubes moves so fast though
[23:36] it's hard not to be outdated
[23:36] apparently!
[23:37] Deployments are also still a beta resource
[23:37] so that's possibly why it's not promoted in the training material
[23:37] like we tend to shy away from anything that's not listed as stable in the API because it's subject to change. betas don't usually get changes, but the one time we decide to rely on that, it'll break and we'll have to change an implementation detail
[23:38] and nobody wants that
[23:38] how would you recommend handling different paths? i.e. I really don't care about all the nodes responding to all the paths, and it looks like I can create a single Ingress for the "host" and use paths to point to the right service
[23:38] we were talking about creating api.domain.com/v1/ as our structure
[23:39] that's useful when you want to map a microservice into your url structure, like foo.com/api would route to your backend golang web-api impl
[23:39] and / routes to your expressjs frontend
[23:39] i don't use that particular format often, i tend to deploy with subdomains more often than i url mux
[23:39] but it does work, and works reasonably well might i add
[23:40] lol :) yup, sounds like I am on the right track for my thoughts
[23:40] so it appears that the Ingress now uses the service lb, do you know if there are going to be other options for the LB besides round-robin? for instance least-conn?
[23:43] i don't offhand
[23:43] i would need to go dig around in the issue tracker
[23:43] i'm fairly certain there's a lot of talk around this, a lot of users are going the route of cloud-provider LB's, but that gets expensive quickly. We're talking about making some supporting charms to enable that class of infrastructure but nothing concrete yet
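A sketch of the single-host, path-routed Ingress idea above (api.domain.com with /v1 mapped to one service and / to another), built as a plain dict and dumped to YAML for `kubectl create -f -`. The service names and ports are made up for illustration; `extensions/v1beta1` is the Ingress API group of the 1.5.x era discussed here.

    #!/usr/bin/env python3
    # Emit a path-routed Ingress manifest; pipe the output to `kubectl create -f -`.
    import yaml

    ingress = {
        "apiVersion": "extensions/v1beta1",
        "kind": "Ingress",
        "metadata": {"name": "api-gateway", "namespace": "default"},
        "spec": {
            "rules": [{
                "host": "api.domain.com",
                "http": {"paths": [
                    # hypothetical backends -- substitute your own services
                    {"path": "/v1",
                     "backend": {"serviceName": "api-v1", "servicePort": 8080}},
                    {"path": "/",
                     "backend": {"serviceName": "frontend", "servicePort": 80}},
                ]},
            }],
        },
    }

    print(yaml.safe_dump(ingress, default_flow_style=False))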
[23:44] https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/183 -- as an example
[23:46] personally I don't care if it is in or out of cluster, other than the fact that out-of-cluster comes at a cost for bare-metal clusters, but even between services in the cluster it would be nice to be able to better balance connections based on load or another metric
[23:47] for now, I am pushing my dev teams to make their stuff fully stateless to account for the round-robin service LB
[23:49] hmm
[23:49] stormmore - remind me another time to revisit the haproxy lb approach.
[23:49] i'm fairly certain we can tune this behavior in the nginx/haproxy flavors of an ingress controller
[23:50] i'm totally open to trying to patch this with a configmap so we can further tune the ingress behavior
[23:51] we have such a patch already submitted that needs additional vetting to enable running a registry in k8s
[23:51] stormmore - you might want to tag this PR and track it - https://github.com/juju-solutions/kubernetes/pull/100
[23:56] cool thanks