[08:50] <kjackal> Good morning Juju world!
[09:25] <armaan> jamespage: Hello, for upgrading from Mitaka to Newton, is it essential that we upgrade the host OS from Trusty to Xenial? Is that assumption correct?
[09:26] <jamespage> armaan: Newton is only available on >= xenial for Ubuntu so yes that is the case
[09:26] <jamespage> however I'm not quite sure where Juju is on series upgrades between Ubuntu releases yet
[09:28] <armaan> jamespage: ok, is it possible to throw away one lxc-managed unit of a service, and then redeploy it with Xenial, instead of the in-place upgrade of a whole OS?
[09:30] <armaan> jamespage: I think "juju deploy" has "--series" for that; but I'm not sure if "juju add-unit" does too?
[09:38] <jamespage> armaan: no - I think the juju approach to this was to allow you to in-place upgrade units, and they inform juju that the application is now series X for example
[09:39] <jamespage> but as I said I'm not sure whether that work is in 2.2 or not
[09:39] <jamespage> there are other complexities to the upgrade process as well - for example having to switch from LXC -> LXD
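A sketch of the throw-away-and-redeploy approach armaan asks about, under the assumption that the charm supports both series (application and unit names here are illustrative). Note that juju add-unit has no --series flag; new units inherit the application's series, which is why the question came up:

```shell
# remove the old trusty-based unit
juju remove-unit nova-compute/2            # illustrative unit name

# deploy a fresh xenial copy of the application alongside it
juju deploy --series xenial nova-compute nova-compute-xenial

# additional units inherit the application's (xenial) series
juju add-unit nova-compute-xenial
```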
[09:43] <armaan> jamespage: You mean Mitaka units are managed via lxc and Newton units are managed via lxd?
[09:44] <armaan> jamespage: is there any documentation -- which i can look into?
[09:52] <armaan> jamespage: perhaps, this script will be useful in migrating lxc containers to lxd. https://github.com/lxc/lxd/blob/master/scripts/lxc-to-lxd
[09:57] <stub> Anyone know where the metadata.yaml documentation is?
[10:00] <stub> Or to answer my question, can charms declare their supported architectures now?
[10:52] <rick_h> stub: no, arch hasn't made it to the metadata. It's still a constraint since it's a machine property vs a charm one.
[10:53] <stub> rick_h: I've got people wanting to add supported architectures to the snap layer, so a unit goes blocked if you deploy it onto an unsupported architecture. I was thinking that would be better done as a deploy time check.
[14:17] <stokachu> jamespage: is the charmstore skipping a stable Ocata and waiting on Pike instead?
[14:51] <aluria> hi o/ -- in Juju2, is there a way to get where an application is bound to? I see metadata.yaml's extra-bindings but I'd like to check on a live environment what the config is
[15:43] <jamespage> stokachu: no
[15:43] <jamespage> it's just not been updated for ocata
[15:44] <jamespage> yet
[15:44] <stokachu> ok
[15:45] <stokachu> jamespage: any eta by chance?
[16:02] <chrome0> Network space question: is it possible to update jujus notion of available spaces on a machine? Eg. in cases where the provisioners' knowledge about available network resources is incomplete?
[16:04] <chrome0> ^^ f.ex. a maas deployed machine, but which has no complete knowledge of the network resources, or when doing a juju add-machine ssh:... style manual placement
[17:17] <rick_h> chrome0: hmm, so there's reload spaces but that's about updating Juju to know about the spaces the underlying IaaS substrate knows about.
[17:18] <rick_h> chrome0: there's juju add-space, but honestly I've not tinkered with spaces and manual machines because typically, spaces are set as constraints during deploy "bind x to y" and so the underlying system makes sure the machines/vms have interfaces on y
[17:18] <chrome0> Yep - but in this case it's really applying an override if the underlying substrate has incomplete data
[17:20] <rick_h> chrome0: so the goal, or the thing that's hurting you is that you've got a machine with different networks but you're not able to tell the application which ones to use for what purposes? (matching the bindings to the networks on the host that are sitting there available?)
[17:20] <chrome0> rick_h : Yea so I have net resources on a machine but juju doesn't know about them.
[17:21] <rick_h> chrome0: so the test would be, can you use juju add-space and build up a mental model of what's there and then deploy with bindings ?
[17:23] <chrome0> rick_h : The spaces are present (and in use on other nodes). The machine I'm referring to here is the maas bootstrap node - whose networking maas obv doesn't control
[17:23] <chrome0> I'd like to add a container to that maas node and use bindings there
[17:24] <rick_h> chrome0: ? so you're trying to deploy something with Juju onto the maas machine itself?
[17:24] <chrome0> Yes
[17:24] <rick_h> oic, you just wanted to take the hardness up a level :P
[17:24] <rick_h> hmmm, so you created the container manually on the maas machine and then used juju add-machine to register it in juju?
[17:24] <chrome0> Well, making use of resources and all ;-)
[17:25] <rick_h> chrome0: hah, yea it's a long standing feature request from folks for sure
[17:25] <chrome0> I was actually thinking of doing add-machine for the metal itself
[17:25]  * rick_h cringes a bit at that
[17:25] <rick_h> the worry there is...is there any pattern of deploy/remove/destroy that could try to take down the maas machine then?
[17:26] <rick_h> chrome0: the normal pattern folks use is to create a container on the maas machine and then juju add-machine to the model so you can deploy to it
[17:26] <rick_h> it keeps Juju from messing with anything above the container level and for most things it's just as performant/etc.
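A sketch of the container-on-the-MAAS-node pattern rick_h describes, with made-up names, addresses, and machine ids:

```shell
# on the MAAS node itself: create the container manually
lxc launch ubuntu:xenial juju-target       # illustrative container name

# from the juju client: register the container as a manual machine
juju add-machine ssh:ubuntu@10.20.0.42     # illustrative container IP

# deploy to the machine id that add-machine reported
juju deploy haproxy --to 5
```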
[17:26] <chrome0> Ok, that works for me as well
[17:27] <chrome0> rick_h : ...but I'm still unsure how juju would know about the containers' network spaces?
[17:27] <rick_h> chrome0: well, spaces in the end is tracking of subnets matched against what's on the network definitions in eni
[17:27] <rick_h> chrome0: so...if the container has access to IPs on the host machine network then it should match up?
[17:28] <rick_h> chrome0: if not...then there's the issue in that the container network info is scoped into a locked box and won't be able to see out to the other machines in the model
[17:28] <chrome0> Yep
[17:29] <rick_h> chrome0: honestly, all I can say is to experiment with it. If the containers can get IPs on the host (and that's in the container that's manually setup so juju doesn't control that) it might "just work"
[17:29] <chrome0> Ack, cheers
[17:29] <rick_h> chrome0: if not, I'd hit up the mailing list and see if we can get the network specialist folks to poke at it and suggest tweaks
[17:30] <rick_h> they're probably asleep atm
[17:30] <chrome0> I gotta run now, but will give that a shot later
[17:30] <rick_h> chrome0: k, let me know how it goes
[17:30] <chrome0> Cheers
[17:34] <skay> lazyPower: I have a filebeats charm question. the example shows it being deployed in the same environment with logstash and the service to get logs from
[17:34] <lazyPower> skay: yep
[17:34] <skay> lazyPower: but I'd like to use filebeats from a different environment to send to an ELK stack running elsewhere. is that possible?
[17:35] <lazyPower> skay: not currently, but the layer could be extended to support a manual configuration option for the es/logstash endpoint
[17:35] <skay> lazyPower: okay. thanks for the sanity check
[17:35] <lazyPower> skay: np, sorry about the limited feature set
[17:36] <lazyPower> i haven't touched filebeat in quite a long time, the focus moved to k8s proper.
[17:36] <skay> lazyPower: no apologies
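If layer-filebeat were extended the way lazyPower suggests, usage might look like this; the logstash_hosts option is hypothetical, not a current feature of the charm:

```shell
juju deploy filebeat
# hypothetical config option pointing filebeat at an external ELK stack
juju config filebeat logstash_hosts="logstash.example.com:5044"
```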
[17:36] <rick_h> skay: on the same controller? 2.2?
[17:36] <lazyPower> oh good point rick_h, xmodel relations!
[17:36] <skay> rick_h: I don't know. maybe in our staging environment, but I don't know how prod is set up
[17:36]  * lazyPower had cranial flatulence
[17:36] <skay> rick_h: lazyPower: tell me more, docs?
[17:37] <rick_h> skay: so there's a feature flagged feature in 2.2 that the team's working on that lets you add relations across models
[17:37] <rick_h> skay: sec, I've got a sheet on my desk for a blog post I want to get going but not done it yet. Let me see what I can pull up
[17:37] <skay> rick_h: I am way behind on juju news!
[17:37] <rick_h> skay: :) https://www.youtube.com/playlist?list=PLW1vKndgh8gI6iRFjGKtpIx2fnJxlr5FF just to plug
[17:39] <skay> lazyPower: also, what is k8s an abbreviation for?
[17:39] <lazyPower> skay: kubernetes
[17:40] <skay> aw, this is like i18n or l10n
[17:41] <rick_h> skay: can you load this? https://goo.gl/bHMVkx
[17:41] <skay> rick_h: yes
[17:41] <rick_h> heh yea "1, 2, skip a few..."
[17:41] <rick_h> skay: so that's not in the official docs yet and such, but rough notes on what it is and how it works.
[17:41] <rick_h> skay: we want to get users testing it between 2.2 and 2.3 so I'll be pushing out a larger call to bang on it, but your use case really matches up with what we want to do
[17:42] <rick_h> skay: so any tinkering is <3
[17:42] <lazyPower> ^
[17:42] <lazyPower> bugs will be helpful too if you find defects
[17:42] <rick_h> +1
[17:42] <lazyPower> (in layer-filebeat)
[17:42] <lazyPower> (and juju core)
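Per rick_h's notes, cross-model relations in 2.2 sat behind a feature flag, so the exact syntax may differ by release; the rough shape of the workflow for skay's use case (names illustrative):

```shell
# in the model running the ELK stack: offer the logstash endpoint
juju offer logstash:beat

# in the model running filebeat: relate to the offer by its URL
juju add-relation filebeat admin/elk-model.logstash
```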
[17:47] <bdx> how can I completely uninstall juju-2.0.1 beta from osx?
[17:47] <bdx> users experiencing difficulty installing 2.2 due to pre-existing beta install
[17:48] <rick_h> bdx: brew uninstall --force juju
[17:48] <rick_h> bdx: I had to do this as I was tinkering with things
[17:48] <bdx> rick_h: awesome, thx
[17:48] <rick_h> bdx: then brew upgrade and make sure brew info juju@2.0 looks like it's what you want there
[17:49] <bdx> entirely ... `brew uninstall --force juju` doesn't seem to be doing the trick
[17:49] <bdx> trying to find it and rm manually
[17:49] <rick_h> bdx: does it run successfully? or fail with an error?
[17:52] <bdx> successfully
[17:52] <bdx> it just didn't actually uninstall 2.0.1
[17:53] <lazyPower> :S
[17:54] <bdx> rick_h: a combination of --force install and force uninstall seemed to do the trick
[17:55] <bdx> I'm sorry I don't have more specifics here
[17:55] <bdx> thx thx
[17:55] <rick_h> bdx: hmm, ok thanks for the heads up
[17:55] <rick_h> bdx: if you find something let us know and we'll try to get the word out and make sure folks know the work around
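Reconstructing the sequence bdx reports as working (version numbers and cached state may differ per machine):

```shell
brew uninstall --force juju     # remove all installed versions of the formula
brew cleanup juju               # drop cached bottles of the old beta
brew install --force juju       # reinstall the current stable (2.2 at the time)
juju version                    # confirm the beta is gone
```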
[17:59] <Budgie^Smore> o/ juju world
[18:01] <rick_h> what's up Budgie^Smore
[18:01] <lazyPower> \o Budgie^Smore
[18:02] <Budgie^Smore> not much, waiting for a scheduling email
[18:11] <natefinch> howdy juju folks.  Just curious if auto scaling is a thing that is being planned anytime in the near future?
[18:11] <lazyPower> natefinch: SimonKLB wrote an autoscaler charm actually
[18:11] <lazyPower> natefinch: @k8s-bot ok to test
[18:11] <lazyPower> gah
[18:11] <lazyPower> natefinch: https://jujucharms.com/charmscaler/2
[18:12] <natefinch> wow
[18:13] <natefinch> that was not the answer I was expecting, but that is super cool
[18:19] <lazyPower> natefinch: Happy to help :)
[18:19] <lazyPower> and it's getting better every day, SimonKLB has been pretty responsive to bugs/requests.
[18:20] <natefinch> lazyPower: current company is using Nomad, but it doesn't do well with things that don't work in Docker, like Cassandra.  and since we use several of these apps, I was thinking about juju, but autoscaling is something we'd want to have available once we get to general availability
[22:36] <cholcombe> i think ec2 is still having that issue i saw on friday where instances take an hour or more to spin up
[22:41] <lazyPower> cholcombe: i just ran a deploy in us-east-1 that took me less than 20 minutes to bootstrap, execute tests, and complete.
[22:42] <cholcombe> lazyPower: maybe something is wrong with us-west-2 then?
[22:42] <cholcombe> maybe it's just me then.  hmm
[22:42] <lazyPower> cholcombe: possible