[08:06] <kjackal> Good morning Juju world!
[11:11] <marcoceppi> kjackal: +1 to this change, i want to land it asap, but please drop maintainers field https://github.com/battlemidget/juju-layer-nginx/pull/14
[11:13] <kjackal> marcoceppi: should I drop the tags field as well?
[11:14] <marcoceppi> kjackal: probably, that one doesn't affect ownership of the charm as much, but if you don't have a strong opinion, then +1 to removing it
[11:17] <kjackal> marcoceppi: ok, done
[11:17] <marcoceppi> kjackal: merged
[11:17] <kjackal> thanks
[14:20] <kjackal> Hey cory_fu, got a question on juju resources and Amulet tests. These two do not mix nicely at the moment. What will be our strategy for testing charms with resources? Should we expect the store to have the resources beforehand?
[14:21] <cory_fu> kjackal: https://github.com/juju/amulet/issues/142
[14:22] <kjackal> cory_fu: yeap, thats why I am saying they do not play well. Should I disable the tests of apache kafka?
[14:22] <cory_fu> kjackal: The standard assumption for charms is that you should be able to deploy them out of the box without error.  That means there will at a minimum need to be a placeholder resource already in the store.  However, that doesn't help for testing a charm from local, since locally deployed charms will never fetch resources from the store
[14:24] <cory_fu> kjackal: The kafka charm should already have a resource uploaded to the store.  That was one of the few big data charms I was able to get a functioning resource into the store for.  However, it should also fall back if the resource is not provided, IIRC
[14:26] <kjackal> cory_fu: yes, understood. For testing local charms you should first attach the resources to the controller. What I am not sure about is what will happen if you call bundletester on a charm in the store. I think in this case the resources will be fetched correctly.
[14:26] <kjackal> cory_fu: I think the fallback works only if getting the resources is not implemented
[14:27] <kjackal> cory_fu: that is on apache kafka, let me look up the line
[14:30] <kjackal> cory_fu: https://github.com/juju-solutions/layer-apache-kafka/blob/fcd0b28242a1330530d219fa6d92e266403d24ea/lib/charms/layer/apache_kafka.py#L38
[14:40] <cory_fu> kjackal: Right, I guess if deploying locally with 2.0 you *must* provide the resource.  If deploying from the store, like with BT, then yes, it will automatically fetch the resource from the store
[14:44] <kjackal> cory_fu: ok, sounds good
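For the local case above, the attach workflow boils down to a couple of commands. A sketch only: the charm path, resource slot name, and tarball are placeholders, and the `juju attach` spelling is the 2.0-era one:

```shell
# Placeholder names throughout; adjust to the charm's metadata.yaml resource slot.
deploy_local_with_resource() {
  # Locally deployed charms never fetch resources from the store,
  # so the resource must be supplied at deploy time...
  juju deploy ./trusty/kafka --resource kafka=./kafka_2.11.tgz
}

attach_resource_later() {
  # ...or attached (or replaced) afterwards:
  juju attach kafka kafka=./kafka_2.11.tgz
}
```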
[19:45] <justicefries> hey all. I'm working on a pretty decent Juju setup, and I'm hitting the point where I need to get some Windows-based CI machines into the mix. i've been having a bit of a general automation nightmare with actually cutting images for them. anyway, what I'm wondering is if I need to actually do Windows config management (install build requirements and CI
[19:45] <justicefries> agent), is that something worth wrapping up as a charm?
[19:45] <justicefries> or should I just keep plugging away with whiskey and powershell?
[20:09] <justicefries> also, can I add dependencies across models? let's say I want to do Kubernetes federation and had the chart to support it, does juju support the whole notion of cross-model dependencies?
[20:09] <lazyPower> justicefries  - so your first question about windows components
[20:10] <lazyPower> we support windows charms written in powershell, and i can put you in touch with some cloudbase peeps who are our primary point of contact for windows charming, they kind of wrote the book on that
[20:10] <justicefries> fantastic, I'd love that. its such a small part of my infra that causes so much pain.
[20:10] <lazyPower> justicefries - and regarding federated clusters - our current k8s charms dont support the federated cluster feature set, but cross model relations are coming (they dont exist today)
[20:11] <lazyPower> i would suspect we'll see something around the 2.1 timeframe, but please dont hold me to that as its speculation. I just know we have it on the roadmap, not when it'll land
[20:11] <justicefries> no, that's fine. I was figuring if it was an issue with the chart, but supported, i'd open a PR.
[20:11] <justicefries> but since its not, I'll wait until they're in. :)
[20:11] <lazyPower> ok :) justicefries - if you're interested in tracking that, the canonical-kubernetes-bundle would be a great place to start that conversation
[20:11] <lazyPower> we've been talking about adding federated cluster support via config initially until xmodel relations land
[20:12] <lazyPower> likely to tackle that in the late december/early jan timeframe assuming we stay on top of our k8s roadmap
[20:12] <lazyPower> https://github.com/juju-solutions/bundle-canonical-kubernetes
[20:12] <justicefries> sure sure. that makes sense. part of me would rather wait, but I suppose that's what joining the convo is for. :)
[20:12] <justicefries> ah yes that was my next question, I was in the juju org. :)
[20:13] <lazyPower> yeah, the earlier you join the conversation and let us know what your production concerns/needs are, the sooner we can get it in our planning sessions and make it happen
[20:13] <lazyPower> for example, the etcd snapshot/restore for cloning clusters came up 2 weeks ago and i just pushed that out this week since it was a bitesized task
[20:14] <justicefries> sure sure. out of curiosity - when that charm got upgraded for people, did that trigger everyone having to re-make clusters on that charm, or was it a graceful upgrade?
[20:15] <lazyPower> so graceful upgrade -- i'm glad you asked. The feature itself was a drop in. but to actually restore a cluster it requires a snapshot/redeploy
[20:15] <lazyPower> its extremely difficult to coordinate the cluster down/up steps during a restore, even the etcd admin guide recommends a nuke/re-pave from a snapshot
[20:15] <justicefries> nice, and sure, that makes sense.
[20:15] <justicefries> yeah.
[20:16] <lazyPower> thats one of our more finicky components, as i'm sure you've noticed if you've had the pleasure of administering an etcd cluster :)
[20:16] <justicefries> edges are all still a little sharp and etcd is that pet you keep chained to the strongest post you got lest it eat the mailman and entire post office.
[20:16] <justicefries> bad analogy but yeah
[20:16] <justicefries> exactly. :)
[20:16] <lazyPower> hahaha
[20:16] <lazyPower> oh man, i wish i could pin that message. thats brilliant
[20:25] <justicefries> now is there any sense of a general system charm so I can throw out having a separate CM setup?
[20:26] <justicefries> without calling out any names, I'm emerging from the relative darkness of pure cloud-config based provisioning and back into the fun world of CM.
[20:27] <lazyPower> when you say general system charm
[20:27] <lazyPower> what do you mean?
[20:27] <justicefries> so I want to lay down my base config on units. nothing application specific.
[20:28] <lazyPower> that sounds really close to what our internal services department uses a layer called "base-node" for
[20:28] <justicefries> ah look at that, in the layer docs there's base layers.
[20:28] <lazyPower> most charms are built from layers, and they just mix in the base-node layer to gain that functionality. it does however mean the onus is on them to keep their deployed charms updated, as they are adding a custom layer
[20:28] <justicefries> nice.
[20:29] <lazyPower> but, its not exactly the best ux. I think there's room for improvement there. if you're looking to just apply a set of policies, a subordinate might be a good route forward; you eat a juju agent in exchange for getting a single source to apply the base configuration.
[20:29] <justicefries> that's true. and i imagine a base layer wouldn't be "converging" on a config over time.
[20:29] <justicefries> hmm, ok.
[20:30] <lazyPower> the benefit to that approach, is it scales with your deployment, has isolated concerns
[20:30] <lazyPower> the detriment is potential race with the principal charm, and eating that second agent.
[20:30] <justicefries> right.
[20:30] <lazyPower> so it really depends on what your policy is for your infra, and how you plan to execute on that
[20:31] <justicefries> maybe I'll start with something traditional and then migrate towards ripping it out. we end up with a lot of dynamic infrastructure that's directly managed now, and I wouldn't mind at some point turning the actual workloads that cause the infra to spin up into a charm creation.
[20:33] <justicefries> i mean, on the other hand, outside of basic things like security.
[20:33] <justicefries> maybe some of the sysctl and other tweaks I do as "base" right now really relate to an application.
[20:33] <justicefries> so there's probably equal room for re-thinking some of how that gets laid down now.
[20:35] <lazyPower> right, the blanket base configuration has a place/time.
[20:35] <lazyPower> which is why i'm hesitant to guide you away from it, more of a choose your own adventure thing
[20:36] <lazyPower> if there's security measures you would take, we're likely to ask that you either bug it or submit a PR so everyone consuming those charms can be as secure as you're setting up your infra to be
[20:36] <justicefries> that makes sense.
[20:37] <justicefries> yeah, I'll first distill everything down into what truly is my base, then re-frame my question if there's anything left with maybe a better idea of how I'd like it to work in juju.
[20:38] <justicefries> probably worth opening a proposal/issue at that point.
[20:39] <lazyPower> yep, i agree with that 100%
[20:45] <justicefries> hmm. one more for now.
[20:46] <justicefries> juju HA - any way to split that out on a way depending on cloud support? eg, AWS/GCE AZs.
[20:56] <justicefries> side note - before I open an issue on juju/juju, any reason its not compiled with 1.7? I'm of course using MacOS Sierra at the moment.
[21:33] <justicefries> actually, it looks like the PR was closed.
[21:33] <justicefries> but the release binary shipping for OSX is still being built with go 1.6
[21:34] <rick_h> justicefries: so the issue is building across trusty/xenial/etc and what Go we can use for that.
[21:34] <rick_h> justicefries: so we try to keep up a bit, but also don't want to create unnecessary work
[21:35] <justicefries> that makes sense.
[21:51] <lazyPower> justicefries - when you say split that out, do you mean specify the AZ?
[21:51] <lazyPower> justicefries - aiui, when you juju ensure-ha, its an auto split across the AZs its deployed into. us-east-1a and us-east-1c for example
[21:51] <justicefries> fantastic.
[21:51] <lazyPower> i admittedly have very little experience with an HA controller.
[21:51] <lazyPower> but would be happy to step through it if you need information
[21:51] <justicefries> i may just start with backups and restores.
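The auto-split behavior described above hangs off a single command. A minimal sketch; in juju 2.0 the command is `enable-ha` (the 1.x name was `ensure-availability`), and the controller count here is just the common choice:

```shell
enable_controller_ha() {
  # Provisions three controller machines, spread across the
  # availability zones of the region the controller lives in:
  juju enable-ha -n 3
}
```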
[21:58] <justicefries> how is jenkins building juju for OSX? want to build on my own, but I must be missing some release configs/flags/something, because its looking for sierra artifacts which I obviously don't want.
[22:10] <rick_h> justicefries: hmm, does the brew recipe not handle that?
[22:10] <rick_h> justicefries: /me isn't sure tbh hasn't looked at build osx magic
[22:11] <justicefries> brew hasn't been updated to 2.0.0 yet, seems there's an open PR.
[22:11] <lazyPower> right, we were blocked until 2 or 3 weeks ago when 2.0.1 landed with the sierra fix
[22:11] <lazyPower> i'm not sure what the status of that would be today, i'm pretty sure our release team handles updating brew.
[22:11] <justicefries> ahh sure.
[22:12]  * lazyPower makes a note to go poke about brew
[22:12] <justicefries> yeah, the 2.0.1 OSX download on jujucharms.com is still built with 1.6
[22:12] <lazyPower> i've been telling people to fetch from the release page and install juju /usr/local/bin/juju
[22:12] <lazyPower> justicefries - pardon my ignorance, whats the big to-do about getting it bumped to go 1.7?
[22:12] <rick_h> lazyPower: yea, the latest news there was updating 1.25 with the same sierra fix (1.25.8 I think) to help with the transition I think
[22:12] <justicefries> i'm doing it in an ubuntu container just because. so go 1.7 added support for sierra.
[22:12] <justicefries> go 1.6 will just randomly panic :O
[22:12] <justicefries> and its not consistent.
[22:12] <lazyPower> ah yeah that i do know about
[22:13] <lazyPower> i get the random panics from both kubernetes kubectl and juju
[22:13] <lazyPower> i thought it was hardware related though
[22:13] <lazyPower> as it affected both, but that makes total sense
[22:13] <justicefries> nope purely OS
[22:13] <lazyPower> welp, nothing to do here
[22:13]  * lazyPower jetpacks away
[22:13] <justicefries> it'll make you feel crazy that's for sure.
[22:13] <lazyPower> indeed, thanks for validating my sanity and hardware
[22:13] <lazyPower> its real fun when running a watch
[22:13] <lazyPower> as it just randomly hangs with a stack for a second, then pops back over to the proper status output
[22:14] <justicefries> yup
[22:16] <bdx> can I associate a users charm store sso login with a juju model user so juju users have access to their charm store team namespaces via juju gui?
[22:16] <lazyPower> rick_h - ^ i'm pretty sure this already exists today?
[22:18] <rick_h> bdx: lazyPower so not with the model user
[22:18] <bdx> rick_h: in what way can it be accomplished?
[22:19] <rick_h> bdx: lazyPower what are we trying to do? I mean the user can do a charm login separate from the model login, so they should be able to access things?
[22:20] <bdx> rick_h: I have my 'creativedrive' namespace with private charms, I'm giving our devs a tour of the gui, and we are wondering how we deploy our 'creativedrive' charms via gui
[22:21] <bdx> rick_h: when I try the usso login button, I get https://s12.postimg.org/7yf8276h9/Screen_Shot_2016_11_16_at_2_20_26_PM.png
[22:21] <rick_h> bdx: oh hmm...
[22:21] <bdx> when I login with my juju admin user I still can't see my 'creativedrive' namespace charms
[22:21] <rick_h> bdx: right, the GUI doesn't know the split I guess
[22:22] <rick_h> bdx: at one point in time there was a double login issue there
[22:22] <rick_h> bdx: if you go to the gui and the user profile, is there a link to login to the charmstore?
[22:22] <rick_h> hatch: ^ where's the login to the charmstore link these days?
[22:23] <bdx> rick_h: when I search for my 'creativedrive' namespace charm https://s11.postimg.org/vtao1yhib/Screen_Shot_2016_11_16_at_2_25_19_PM.png
[22:23] <rick_h> bdx: right, so there was a link to login to the charmstore as a separate login from the juju controller
[22:23] <rick_h> bdx: that would solve what you're looking for
[22:23] <bdx> oooh I found it
[22:24] <bdx> https://s11.postimg.org/sctse3d2b/Screen_Shot_2016_11_16_at_2_26_53_PM.png
[22:24] <rick_h> there you go right, from the profile page
[22:24] <bdx> rick_h, lazyPower: thanks guys ... srry .. my bad
[22:24] <rick_h> been a while since I monkeyed with it
[22:24] <rick_h> bdx: all good, give that a go and try that out and if there's a suggestion on how to make it more obvious let us know
[22:25] <rick_h> bdx: at one point in time I thought a failed search result had a hint, but that might have come/gone
[22:25] <justicefries> hmm. so I've created a user as a superuser, logged in as him. i created a model under admin to use.
[22:25] <justicefries> but when I try to grant it to myself: juju grant justicefries admin admin/aws-test (or aws-test) it can't find it.
[22:25] <rick_h> justicefries: hmm, not sure what "under admin" would be
[22:26] <rick_h> justicefries: so what happens if you just "juju models"
[22:26] <justicefries> so I did --owner=admin when creating the model
[22:26] <justicefries> under my superuser user? nothing.
[22:26] <justicefries> under admin, the model.
[22:26] <rick_h> justicefries: ok, so you created a model without your own user having permission
[22:26] <NewServerGuy> Using Lubuntu, can't install "sudo apt install  zfsutils-linux" from the basic tutorial.
[22:26] <rick_h> justicefries: so you'll need to switch to admin (logout/login)
[22:26] <justicefries> exactly. as admin, I made justicefries a superuser.
[22:26] <rick_h> justicefries: ok, who are you currently logged in as? "juju whoami"
[22:26] <lazyPower> NewServerGuy
[22:27] <justicefries> ok. i'm mostly trying to figure out a good flow for this. want multiple users, but I don't want to "lose" a model if someone leaves, so for production models I want them consolidated somehow.
[22:27] <lazyPower> which version of lubuntu?
[22:27] <justicefries> i'm under my user currently.
[22:27] <justicefries> so I think I know the answer, this is more of a flow question now.
[22:27] <rick_h> justicefries: so as long as you create them, have the admin with admin access, and have users have write access they can't destroy the model/deal with if they leave
[22:27] <NewServerGuy> lazyPower: Yes?
[22:27] <justicefries> got it.
[22:27] <rick_h> justicefries: so I'd suggest just using the admin user, create all the models, add write access for other users
[22:27] <justicefries> so for production models, its probably worth having under a "production" user or "admin".
[22:27] <justicefries> that makes sense.
[22:28] <lazyPower> NewServerGuy - which version of lubuntu are you using? the getting started guide assumes xenial+
[22:28] <rick_h> justicefries: yea, the admin default user is meant to encourage that flow
[22:28] <justicefries> 👍
[22:28] <rick_h> justicefries: so you create all the other users, and admin is kind of like "root"
[22:28] <NewServerGuy> not sure.
[22:28] <justicefries> but there's no way to use --owner with grant, like with add-model
[22:28] <lazyPower> NewServerGuy - can you pastebin the output of lsb_release -a?
[22:28] <justicefries> even if you're a superuser.
[22:29] <rick_h> justicefries: you don't really want to. You want the owner to stay admin so that users can't kill the model off
[22:29] <rick_h> justicefries: so you just grant access
[22:29] <justicefries> +1 make sense.
[22:29] <justicefries> er, makes.
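The flow rick_h suggests, sketched with hypothetical user and model names (the access levels are juju 2.0's):

```shell
setup_shared_model() {
  # admin creates the model and stays its owner...
  juju add-model production
  juju add-user alice
  # ...then grants write access: alice can deploy and relate things,
  # but can't destroy the model or take it with her if she leaves.
  juju grant alice write production
}
```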
[22:30] <NewServerGuy> uname -r
[22:30] <NewServerGuy> 4.2.0-27-generic
[22:30] <NewServerGuy> lazyPower http://paste.ubuntu.com/23487630/
[22:30] <lazyPower> NewServerGuy - i dont think the zfs packages have been backported to trusty, its a xenial forward feature iirc.
[22:30] <lazyPower> NewServerGuy - so, you can still use lxd without zfs...
[22:31] <NewServerGuy> lazyPower what do I change in this script? http://paste.ubuntu.com/23487634/
[22:32] <bdx> lazyPower, NewServerGuy: http://serverascode.com/2014/07/01/zfs-ubuntu-trusty.html
[22:32] <lazyPower> bdx  ah man, thats a ppa though
[22:32] <bdx> you can run zfs on trusty ... I've had mixed results though .
[22:32] <lazyPower> not the canonical supported zfs stuff
[22:32] <bdx> aaah
[22:33] <lazyPower> yeah we experimented with this before too, and it was not as good as we had hoped
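For contrast, on xenial the supported zfs-backed lxd setup is only a couple of commands. A sketch with an arbitrary loop-device size and pool name; the flags are from the lxd 2.0 series shipped with xenial:

```shell
setup_lxd_zfs() {
  # zfsutils-linux is in the xenial archive (not backported to trusty):
  sudo apt install -y zfsutils-linux
  # Non-interactive lxd setup backed by a 20 GB zfs loop device:
  sudo lxd init --auto --storage-backend zfs \
      --storage-create-loop 20 --storage-pool lxd
}
```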
[22:33] <lazyPower> NewServerGuy - lets start with where you're stuck
[22:33] <lazyPower> because this script looks fine, but its obviously not going stellar just yet, so what phase does it get to?
[22:33] <NewServerGuy_fro> refuses to do anything with lxd.
[22:35] <justicefries> one weird thing with multiple users and the GUI, I had to explicitly specify the model, else I got this: ERROR cannot retrieve model details: model name "justicefries/" not valid
[22:35] <lazyPower> ok, so its basically erroring out during the bootstrap phase?
[22:36] <lazyPower> NewServerGuy - if you could capture the output of juju bootstrap lxd lxd  --debug  and pastebin it, that would be a good first step.
[22:40] <NewServerGuy> paste.ubuntu.com/23487670/
[22:41] <lazyPower> NewServerGuy - whats the output of juju --version?
[22:41] <NewServerGuy> 1.25.6-trusty-i386
[22:41] <lazyPower> ahhh and now it gets clearer to me
[22:42] <lazyPower> ok, sorry about the long winded trail to get here. juju 1.25 does not support the lxd provider. You'll need to install the juju 2 package. You can continue using juju 1.25 but you'll want to look at the 1.25 documentation
[22:43] <lazyPower> NewServerGuy - https://jujucharms.com/docs/1.25/config-LXC
[22:43] <lazyPower> this document should un-muddy the waters for you on 1.25
[22:43] <NewServerGuy> lazyPower We're trying to get a cloud going for a classroom.
[22:43] <lazyPower> NewServerGuy - i would encourage you to use xenial, and juju 2.0.1 in that case
[22:44] <NewServerGuy> is xenial an OS?
[22:44] <lazyPower> Xenial is the 16.04 release of Ubuntu
[22:44] <lazyPower> the Lubuntu install you're currently using is our last LTS release, 14.04
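The path lazyPower recommends condenses to something like this sketch, assuming a xenial install where the archive's `juju` package is the 2.x series (the controller name is arbitrary):

```shell
bootstrap_lxd_controller() {
  sudo apt update && sudo apt install -y juju
  # The lxd provider only exists in juju 2.x; 1.25 has no lxd support:
  juju bootstrap lxd lxd --debug
}
```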
[22:45] <justicefries> hm, ok. canonical-kubernetes works nicely.
[22:45] <lazyPower> justicefries too much metal for one hand \ooo, ,ooo/
[22:45] <justicefries> hahaha
[22:46] <justicefries> i set a constraint while everything was still pending to up the worker machine size.
[22:46] <justicefries> and it doesn't seem to be taking. do I have to add the units myself?
[22:46] <lazyPower> ah that wont work for you though, you'll need to set constraints before you deploy
[22:46] <justicefries> 👍
[22:46] <lazyPower> or you'll need to use something like conjure-up where you can edit the constraints on the fly
[22:46] <justicefries> i can probably add units at this point and it'll take yeah?
[22:46] <lazyPower> only if you set-model-constraints
[22:46] <lazyPower> add-unit doesn't take constraints in 2.0+
[22:47] <justicefries> ah ha.
[22:47] <justicefries> even if the application name has a constraint?
[22:47] <lazyPower> err
[22:47] <lazyPower> i'm not sure i follow
[22:47] <justicefries> so I have a constraint on kubernetes-worker
[22:47] <justicefries> and I do: juju add-unit -n 3 kubernetes-worker
[22:47] <lazyPower> ah, it might.
[22:47] <lazyPower> i think it will
[22:48] <lazyPower> give it a whirl and tell me if i need to go home for the day ;)
[22:48] <justicefries> hahaha. ;) sounds good.
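The ordering issue above, in commands. The instance type is illustrative, and both forms are juju 2.0 syntax:

```shell
deploy_with_constraints() {
  # Constraints must be in place before machines are provisioned:
  juju deploy kubernetes-worker --constraints "instance-type=m3.xlarge"
}

bump_existing_app() {
  # set-constraints applies to *future* units of the application,
  # so units added afterwards pick up the new size:
  juju set-constraints kubernetes-worker "instance-type=m3.xlarge"
  juju add-unit -n 3 kubernetes-worker
}
```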
[22:49] <justicefries> it would be nice to be able to at some point pin out "versions" of my application, so if I have a whole bunch of old workers I need to scale down, I can do it in one big swath. not sure mechanically how that'd work yet.
[22:49] <justicefries> again, need to play and come up with what I want haha.
[22:49] <lazyPower> justicefries - welp, have i got good news for you
[22:49] <lazyPower> juju reports versions in status output if the charm author uses the application_set_version helper
[22:50] <lazyPower> so for example in canonical kubernetes, you'll notice all the apps report their current versions, so you have a window into whats out there
[22:50] <justicefries> look at that.
[22:50] <lazyPower> its not exactly pinning, but its introspection, and if the charm supports resources you can even lock that to provide whatever bins you wish at whatever version the charm supports
[22:50] <lazyPower> risky when doing stuff like kubernetes 1.5 with our 1.4 charms
[22:50] <lazyPower> but most of the time it'll just work
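The helper lazyPower mentions wraps a juju 2.0 hook tool, `application-version-set`. A hook-context sketch; the workload's version command is a placeholder, and the function only works where the hook tools are on PATH:

```shell
# Only valid inside a charm hook context.
report_workload_version() {
  # Surfaces the workload's version in `juju status` output:
  application-version-set "$(my-workload --version | head -n1)"
}
```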
[22:51] <justicefries> ok, so if I could do it based on extra model and/or application constraints (i want to know all 1.4.5 with an instance_type=m3.medium) I think that'd be gold.
[22:51] <justicefries> i'm thinking like kubernetes labels and selectors right.
[22:51] <justicefries> i know now that I want to get rid of kubernetes-worker/0-2, but it'd be nice if I could automagically grab that based on constraints or other metadata.
[22:51] <lazyPower> well, i dont think you can do that out of the box without some status parsing/munging
[22:51] <lazyPower> we dont have filters on status other than filtering to the app that i'm aware of
[22:52] <lazyPower> juju help commands && juju status --help would be good there
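Lacking built-in filters, a bit of munging gets close to a label/selector-style query. A toy sketch: the sample lines are fabricated stand-ins for fields you'd first extract from `juju status --format=json`, not real juju output:

```shell
# Columns: unit  workload-version  instance-type  (pre-extracted, hypothetical)
sample='kubernetes-worker/0 1.4.5 m3.medium
kubernetes-worker/1 1.4.5 m3.large
kubernetes-worker/2 1.4.4 m3.medium'

# usage: units_matching <version> <instance-type>
units_matching() {
  echo "$sample" | awk -v v="$1" -v t="$2" '$2 == v && $3 == t {print $1}'
}
```

For example, `units_matching 1.4.5 m3.medium` prints only `kubernetes-worker/0`.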
[22:52] <justicefries> huh. -o yaml and -o json are empty o.o
[22:52] <NewServerGuy_fro> So, lazyPower, running on a laptop, I already got the first page of the tutorial done on my laptop....
[22:52] <justicefries> oh
[22:52] <justicefries> that's output file lol
[22:52] <NewServerGuy_fro> How do I access my mediawiki page on other computers on the network?
[22:52] <lazyPower> NewServerGuy_fro - ok, glad i didn't discourage you
[22:53] <NewServerGuy_fro> No prob. On the laptop I got done two days ago. It's the server that's punking me.
[22:53] <lazyPower> NewServerGuy_fro - so thats a more advanced use of lxc, and it requires bridging the lxc bridge.
[22:53] <lazyPower> to your physical ethernet adapter
[22:53] <lazyPower> and i've helped people turn their lxc deployments into hot dumpster fires by doing that
[22:53] <NewServerGuy_fro> How is that done?
[22:53] <NewServerGuy_fro> SHIT?
[22:53] <justicefries> hmm. so its not quite enough information to automate that the way I'd ultimately like. i think I can work around it just with the policy and multiple models in that case.
[22:53] <NewServerGuy_fro> So that's dangerous?
[22:54] <bdx> lazyPower: is there a best practice for that yet?
[22:54] <lazyPower> well, only if you dont know what you're doing or how your network topology is laid out
[22:54] <NewServerGuy_fro> fuck.
[22:54] <lazyPower> bdx - i'm pretty sure the lxd install screens offer this out of the box these days
[22:54] <lazyPower> did that change?
[22:54] <lazyPower> i'm referring to lxc, on trusty. As i believe thats what NewServerGuy_fro is using.
[22:54] <bdx> ooo
[22:55] <bdx> NATing the host adapter to the container ip?
[22:55] <lazyPower> what you basically do, is remove the nat, and the lxcbr0 becomes a bridge adapter
[22:55] <lazyPower> so you're pulling ip's directly from your router/DHCP server
[22:56] <lazyPower> but again, this is moderate to advanced networking
[22:56] <bdx> ooooh, when lxdbr0 is a bridge on your adapter
[22:56] <NewServerGuy_fro> lazyPower I'm using more up to date Ubuntu on my laptop and installing more uptodate Ubuntu on server now.
[22:56] <bdx> yeah .. I've built fire on top of that method too
[22:56] <justicefries> this sure as hell beats my other kubernetes setup.
[22:56] <NewServerGuy_fro> LAP:Ubuntu ; SERV-1:Ubuntu
[22:57] <NewServerGuy_fro> Where I called you from originally --> SERV-2, is currently install new Ubuntu.
[22:57] <lazyPower> yeah here's a post from 2013 where i covered this - and its really risky without knowing what you're doing - http://dasroot.net/posts/2013-12-22-making-juju-visible-on-your-lan/
[22:58] <lazyPower> you can easily hose a lxc deployment where no containers will work because networking is borked
[22:58] <lazyPower> but if you're feeling brave, there's that.
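The trusty-era change described above amounts to editing lxc's defaults. A sketch under assumptions: a host bridge `br0` over the ethernet adapter already exists, you have console access in case networking breaks, and the file paths are lxc 1.x's:

```shell
use_host_bridge_for_lxc() {
  # Stop lxc from creating the NATed lxcbr0...
  sudo sed -i 's/^USE_LXC_BRIDGE=.*/USE_LXC_BRIDGE="false"/' /etc/default/lxc-net
  # ...and point new containers at the host bridge so they
  # pull IPs straight from the LAN's router/DHCP server:
  sudo sed -i 's/lxc.network.link = lxcbr0/lxc.network.link = br0/' /etc/lxc/default.conf
}
```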
[22:58] <jrwren> NewServerGuy_fro: http://jrwren.wrenfam.com/blog/tag/bridge/ from 2016 instead of 2013 ;}
[22:58] <lazyPower> hey right on jrwren
[22:58] <lazyPower> <3
[22:59] <jrwren> still, its... tricky
[23:00] <bdx> my context is out of focus with aws ... I miss my maas stacks
[23:00] <lazyPower> bdx metal, for lyfe
[23:01] <bdx> 4sho
[23:01] <lazyPower> NewServerGuy_fro - i'm at my EOD, but i'm happy to pick this up tomorrow
[23:02] <lazyPower> and there are others in here to lend a hand like jay, who is pretty knowledgeable. if you get extremely stuck, dont despair, hit us up on the mailing list juju@lists.ubuntu.com and we'll be happy to circle and get you an answer
[23:02] <lazyPower> *circle back
[23:04] <NewServerGuy_fro> lazyPower, THanks man!
[23:04] <lazyPower> anytime. we're here to help :)
[23:13] <justicefries> be nice if I could point at an existing elasticsearch instance (AWS-managed one) within the kubernetes bundle.
[23:13] <justicefries> otherwise this is slick.
[23:16] <bdx> justicefries: you bring up a good point
[23:16] <bdx> it would be cool to be able to drop in managed services in some places for sure
[23:16] <justicefries> yup
[23:29] <justicefries> marcoceppi: wonder if you'd just have a :proxy interface much like you have the :client interface now?
[23:43] <justicefries> weird, kubeapi-load-balancer died out on me, had to add a new unit.
[23:43] <justicefries> and machine
[23:45] <justicefries> marcoceppi: I assume a cloud provider/AWS specific charm would work the same way huh? make an elb-proxy charm, have an interface on it, add relations, the charm handles the specifics of mapping the ELB to related machines.