[00:42] <veebers> wallyworld: FYI bug I mentioned in startup that's holding up roll out of snap lxd 3 in the lab: https://bugs.launchpad.net/apparmor/+bug/1777017
[00:42] <mup> Bug #1777017: snap install lxd doesn't work within a container <AppArmor:New> <https://launchpad.net/bugs/1777017>
[00:42] <wallyworld> ack
[00:59] <veebers> Hah, I doubt this Juju was written in Go: https://www.trademe.co.nz/a.aspx?id=1662397527
[03:15] <veebers> wallyworld: is this statement correct "Starting with Juju 2.0, users can upload resources to the controller or the Juju Charm Store. . ." I thought it was only the charmstore that would take a resource? (maybe the controller does some caching after first deploy etc.?)
[03:16] <wallyworld> juju attach-resource
[03:17] <wallyworld> i haven't got the exact workflow mapped out in my head
[03:18] <wallyworld> but one scenario is to upgrade an existing resource for a deployed charm
[03:18] <wallyworld> the controller does cache resources
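The resource-upgrade workflow wallyworld mentions can be sketched roughly as follows (the application and resource names here are hypothetical, and exact command names vary slightly between Juju 2.x releases):

```shell
# Upload a new version of a resource directly to the controller for an
# already-deployed application; "myapp" and "software" are hypothetical.
juju attach-resource myapp software=./myapp-1.2.tar.gz

# Inspect which resource revision each unit is currently using.
juju resources myapp
```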
[03:19] <veebers> wallyworld: ack, thanks
[05:50] <blahdeblah> Hi.  Anyone able to point me to a decent reference about how juju 2 configures lxd network interfaces on trusty?
[06:56] <wallyworld> manadart: ^^^^^ did you have any info we could pass to blahdeblah?
[06:56] <wallyworld> or jam?
[06:58] <blahdeblah> thanks wallyworld
[06:58] <wallyworld> np, sorry i'm not the best person to ask
[07:27] <manadart> blahdeblah: There is no "reference" as far as I am aware, but I can answer any questions you have.
[07:29] <blahdeblah> manadart: So I've got a machine where one of the LXD containers doesn't have the network interfaces I expect, and I am trying to work out what actually controls that.  i.e. How does juju decide which network interfaces to give to a container when it creates one on a host?
[07:29] <manadart> This is also beginning to differ between the 2.3 series and 2.4+ as we accommodate LXD clustering.
[07:30] <manadart> blahdeblah: So this is container machines as opposed to the LXD provider, yes?
[07:30] <blahdeblah> Correct; this is 2.3.4 running MAAS provider to drive an Openstack cloud
[07:37] <blahdeblah> manadart: ^ In case you missed it
[07:38] <manadart> blahdeblah: Got it, just doing a quick scan of the 2.3 logic :)
[07:38] <blahdeblah> thanks - appreciated
[08:00] <manadart> blahdeblah: OK, so there is a series of checks and fallbacks.
[08:01] <manadart> The parent machine is interrogated for its interfaces and these are passed to the call for creating the container.
[08:01] <blahdeblah> manadart: Oh, so the config of the parent is what really drives what the LXDs get?
[08:01] <manadart> It appears that ones without a parent device are assigned the default LXD bridge.
[08:02] <blahdeblah> What do you mean "parent device"?
[08:07] <manadart> For example, when no usable interfaces are passed to the create container call, the default is to create "eth0" with the default LXD bridge as its parent.
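For reference, the default device manadart describes can be inspected like this (the container name is hypothetical, and the output shape may differ between LXD versions):

```shell
# Show the NIC devices attached to a container.
lxc config device show mycontainer
# On a stock setup the fallback device typically looks something like:
#   eth0:
#     name: eth0
#     nictype: bridged
#     parent: lxdbr0
#     type: nic
```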
[08:08] <blahdeblah> Hmmm - not sure how these ones got set up then; they're neutron gateways which end up with 4-5 "physical" interfaces.
[08:08] <manadart> And there are containers on the same machine with *different* network config?
[08:16] <manadart> blahdeblah: Scratch that. LXD differs from KVM. The host machine does not factor in network config when creating a container.
[08:18] <blahdeblah> aluria: ^
[08:19] <blahdeblah> manadart: Oh, so what does decide the network config?
[08:23] <manadart> So when the machine is initialised for LXD containers, the LXD bridge is configured (via file for the older LXD on Trusty) with a 10.0.*.0/24 subnet.
[08:24] <manadart> The container is created with an "eth0" device that has this bridge as its parent.
[08:28] <manadart> Now there is a final fallback that does not attempt to create the container with explicit devices, and instead just passes it the default LXD profile.
[08:29] <manadart> But that is invoked only when there is no bridge device, and I can't see in the logic how this would come about.
[08:29] <blahdeblah> manadart: that's really odd, because these containers have 3-4 interfaces in most cases, passed through without being part of the usual NAT-ed lxdbr0.
[08:29] <blahdeblah> And I would have thought that there would need to be some explicit request from something to make it that way.
[08:30] <blahdeblah> anyway, dinner time for me - aluria will be able to answer any further Qs on this setup
[08:30] <manadart> blahdeblah: OK. Let me interrogate this further.
[08:31] <manadart> It would be good to know what version of LXD is on the machine.
[08:33] <aluria> manadart: hi o/ -- let me move a couple of dhcp agents out from the failing container and I will grab all the info for you
[08:33] <manadart> Gah. LXD/KVM is the other way around. The host interfaces *are* passed via network config.
[08:37] <aluria> manadart: so "lxd" pkg on metal is 2.0.11-0ubuntu1~14.04.4
[08:37] <manadart> aluria: Thx
[08:38] <aluria> "neutron agent-list" shows container names of the type "juju-machine-5-lxc-1", except for 22/lxd/8, which was deployed after juju1->juju2 migration -- "juju-958f87-22-lxd-8"
[08:39] <aluria> for now, I have checked "ip netns" on the container with issues (22/lxd/8) and removed the instances via "neutron dhcp-agent-network-remove" and "neutron dhcp-agent-network-add" somewhere else
[08:59] <manadart> aluria: So the errant container is gone? Do we know what interfaces it got vs what the host had?
[09:00] <aluria> manadart: the errant container is still there, but misses eth2; on a different host, a converted lxc->lxd container shows "convert_net2:" on "lxc config device show <lxdname>"
[09:01] <aluria> manadart: I see 2 possibilities: 1) after the upgrade from juju1->juju2, containers are only created with a couple of nics;  2) we missed an extra constraint so as to create 22/lxd/8 with 3 ifaces (not 2)
[09:02] <aluria> for now, I've run -> lxc config device add juju-958f87-22-lxd-8 eth2 nic name=eth2 nictype=bridged parent=br1-dmz  // and I see the 3rd iface on the container
[09:14] <manadart> aluria: It looks like space constraints would limit the devices passed to the container.
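If space constraints are indeed what limits the devices, one way to ask for all three interfaces would be something like the following (a sketch only; the space names are hypothetical and would need to match the MAAS spaces):

```shell
# Request a container on host machine 22 bound to several network spaces;
# Juju should bridge one host NIC per space into the container.
juju add-machine lxd:22 --constraints spaces=dmz,internal,public
```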
[12:57] <Cynerva> is there a way to specify ec2 availability zones in a bundle?
[12:57] <Cynerva> i can do it without bundles, with `juju add-machine zone=us-east-1c` or `juju deploy ubuntu --to zone=us-east-1c`
[12:58] <Cynerva> if i try to use a similar placement directive in a bundle, i get: invalid placement syntax "zone=us-east-1c"
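One possible workaround (untested sketch; whether bundle machine constraints accept `zones=` depends on the Juju version): declare machines in the bundle with a zones constraint, then place units on those machines instead of using a `zone=` placement directive:

```shell
# Write a minimal bundle that pins machine 0 to an AZ via constraints
# and places the unit there; charm/zone values are illustrative.
cat > bundle.yaml <<'EOF'
machines:
  "0":
    constraints: zones=us-east-1c
applications:
  ubuntu:
    charm: cs:ubuntu
    num_units: 1
    to: ["0"]
EOF
# juju deploy ./bundle.yaml   # would deploy it against a live controller
echo "bundle written"
```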
[13:27] <gman> I seem to be having an issue with the juju etcd charm at 3.2.9; is there a way to install a particular version like etcd 2.3.8?
[13:35] <rick_h_> gman: hmm, that'll be up to the charm and where the charm gets the software from
[13:36] <rick_h_> gman: looking at the charm readme: https://jujucharms.com/etcd/ it looks like you can use a deb or a snap, so maybe there's a snap path you can use, or maybe the deb is older than the snap and will work for you
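If the charm does expose a snap channel option as the readme suggests, pinning an older channel might look like this (assumption: the config key is `channel` and an older channel exists; verify with `juju config etcd` and the charm readme):

```shell
# Hypothetical: deploy etcd pinned to an older snap channel.
juju deploy etcd --config channel=2.3/stable
```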
[13:39] <manadart> If anyone would be so kind... https://github.com/juju/juju/pull/8825. Not a big one.
[13:44] <stickupkid> manadart: looking
[13:46] <stickupkid> manadart: LGTM - becoming cleaner
[13:47] <manadart> stickupkid: Yes; quite satisfying. The next one will remove a hoard of code.
[13:48] <manadart> stickupkid: Thanks.
[13:49] <stickupkid> nps
[17:32] <bdx> hello all
[17:32] <bdx> can we place some priority on stuck models being able to get --force destroyed or something
[17:33] <bdx> I've got models stuck in "destroying" on every controller
[17:34] <bdx> I'm not sure creating 5 new controllers and migrating all my non-stuck models to the new controllers is the correct answer
[17:34] <bdx> sounds like a rat's nest
[17:34] <bdx> possibly a lion's den
[17:34] <bdx> bear's cave
[17:35] <bdx> either way
[17:35] <pmatulis> bdx, maybe check the logs. you might be able to unstick them if you get more information as to what's going on
[17:36] <pmatulis> controller model logs is what i would look at first
[17:36] <bdx> pmatulis: right, I have a good idea as to why each is stuck
[17:37] <bdx> some are stuck bc machines physically left the model without being removed via juju
[17:37] <bdx> some are stuck because x-model relations wouldn't remove
[17:37] <bdx> some are stuck for other reasons
[17:37] <bdx> either way
[17:37] <bdx> I've got a week's worth of things to do today
[17:37] <bdx> and everyday
[17:38] <bdx> chasing these stuck models is not on the agenda
[17:38] <bdx> nor should it be on anyone's
[17:38] <pmatulis> you can --force remove machines
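The force-removal pmatulis refers to looks like this (machine ID is hypothetical; as bdx notes below, it may still fail when the backing instance has already disappeared):

```shell
# Force-remove a machine record even if the provider instance is unreachable.
juju remove-machine 5 --force
```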
[17:41] <bdx> pmatulis: not if they are already physically gone
[17:42] <bdx> possibly in some cases it works
[17:42] <bdx> in others not so much
[17:42] <pmatulis> bdx, did you try?
[17:42] <bdx> oh my
[17:42] <bdx> I'm probably 20hrs into trying to get rid of these models
[17:42] <bdx> giving up
[17:42] <bdx> I don't care enough to keep chasing this rat tail
[17:43] <bdx> but I do care enough to bitch about it and make sure it gets nipped
[17:43] <bdx> pmatulis: thanks for the insight
[17:44] <rick_h_> bdx: it's on the roadmap for this cycle. we've started the spec and definitely will be getting it going
[17:44] <bdx> awesome
[17:44] <bdx> thanks, rick_h_
[17:44] <rick_h_> bdx: we had some conversations around various cases for this last week and atm we're doing the series upgrade and cluster stuff but it'll come around
[17:45] <bdx> awesome
[18:35] <bdx> hey guys
[18:36] <bdx> is there anyone around who still deals in maintaining the haproxy charm development
[18:36] <bdx> I've had a PR up for some time that fixes a critical bug
[18:36] <bdx> if someone could give me a review it would be greatly appreciated
[18:36] <bdx> maintainership
[18:36] <bdx> https://code.launchpad.net/~jamesbeedy/charm-haproxy/options_newline
[18:36] <bdx> thanks
[20:29] <stokachu> bdx: what's your deal
[20:33] <stokachu> bdx: https://twitter.com/JamesBeedy/status/1007715556022079488
[20:35] <stokachu> bdx: what part of the code of conduct do you not understand?
[20:35] <bdx> I understand the code of conduct
[20:36] <bdx> If I need to drop a few curse words to get the wheels rolling so be it
[20:36] <bdx> sue me
[20:36] <stokachu> ok we're done
[20:36] <bdx> I'm sorry if I offended you
[20:37] <stokachu> It's offensive to Ubuntu