[00:42] wallyworld: FYI, the bug I mentioned in standup that's holding up the rollout of snap lxd 3 in the lab: https://bugs.launchpad.net/apparmor/+bug/1777017
[00:42] Bug #1777017: snap install lxd doesn't work within a container
[00:42] ack
[00:59] Hah, I doubt this Juju was written in Go: https://www.trademe.co.nz/a.aspx?id=1662397527
[03:15] wallyworld: is this statement correct: "Starting with Juju 2.0, users can upload resources to the controller or the Juju Charm Store..."? I thought it was only the charm store that would take a resource (maybe the controller does some caching after the first deploy, etc.?)
[03:16] juju attach-resource
[03:17] I haven't got the exact workflow mapped out in my head
[03:18] but one scenario is to upgrade an existing resource for a deployed charm
[03:18] the controller does cache resources
[03:19] wallyworld: ack, thanks
[05:50] Hi. Anyone able to point me to a decent reference about how Juju 2 configures LXD network interfaces on trusty?
[06:56] manadart: ^^^^^ did you have any info we could pass to blahdeblah?
[06:56] or jam?
[06:58] thanks wallyworld
[06:58] np, sorry, I'm not the best person to ask
=== frankban|afk is now known as frankban
[07:27] blahdeblah: There is no "reference" as far as I am aware, but I can answer any questions you have.
[07:29] manadart: So I've got a machine where one of the LXD containers doesn't have the network interfaces I expect, and I am trying to work out what actually controls that. i.e. how does Juju decide which network interfaces to give to a container when it creates one on a host?
[07:29] This is also beginning to differ between the 2.3 series and 2.4+ as we accommodate LXD clustering.
[07:30] blahdeblah: So this is container machines as opposed to the LXD provider, yes?
[07:30] Correct; this is 2.3.4 running the MAAS provider to drive an OpenStack cloud
[07:37] manadart: ^ in case you missed it
[07:38] blahdeblah: Got it, just doing a quick scan of the 2.3 logic :)
[07:38] thanks - appreciated
[08:00] blahdeblah: OK, so there is a series of checks and fallbacks.
[08:01] The parent machine is interrogated for its interfaces and these are passed to the call for creating the container.
[08:01] manadart: Oh, so the config of the parent is what really drives what the LXDs get?
[08:01] It appears that ones without a parent device are assigned the default LXD bridge.
[08:02] What do you mean by "parent device"?
[08:07] For example, when no usable interfaces are passed to the create-container call, the default is to create "eth0" with the default LXD bridge as its parent.
[08:08] Hmmm - not sure how these ones got set up then; they're neutron gateways which end up with 4-5 "physical" interfaces.
[08:08] And there are containers on the same machine with *different* network config?
[08:16] blahdeblah: Scratch that. LXD differs from KVM. The host machine does not factor in network config when creating a container.
[08:18] aluria: ^
[08:19] manadart: Oh, so what does decide the network config?
[08:23] So when the machine is initialised for LXD containers, the LXD bridge is configured (via file for the older LXD on Trusty) with a 10.0.*.0/24 subnet.
[08:24] The container is created with an "eth0" device that has this bridge as its parent.
[08:28] Now there is a final fallback that will not attempt to create the container with explicit devices, and instead just pass it the default LXD profile.
[08:29] But that is invoked only when there is no bridge device, and I can't see in the logic how this would come about.
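Circling back to the resource question from 03:15: a minimal sketch of the upgrade-a-resource workflow described there, assuming a hypothetical application "myapp" that declares a resource named "payload" (both names are illustrative, not from the log):

    # Deploy with a locally supplied resource rather than the charm store copy.
    juju deploy myapp --resource payload=./payload-1.0.tgz

    # Later, push a newer file to the already-deployed application; the
    # controller caches the new revision, as noted above.
    juju attach-resource myapp payload=./payload-2.0.tgz

    # See which resource revision the application is currently using.
    juju resources myapp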
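A quick way to check which of the fallbacks just described applied on a given host, assuming the LXD CLI is available there (the container name is a placeholder):

    # Show the devices Juju attached to the container. Under the bridge
    # fallback described above you would expect a single "eth0" NIC whose
    # parent is the LXD bridge configured at machine init.
    lxc config device show juju-machine-5-lxd-0

    # Show the container's config, including which profiles apply; if no
    # explicit devices are listed above, the container is relying on the
    # default LXD profile (the final fallback mentioned above).
    lxc config show juju-machine-5-lxd-0
    lxc profile show default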
[08:29] manadart: that's really odd, because these containers have 3-4 interfaces in most cases, passed through without being part of the usual NAT-ed lxdbr0.
[08:29] And I would have thought that there would need to be some explicit request from something to make it that way.
[08:30] anyway, dinner time for me - aluria will be able to answer any further Qs on this setup
[08:30] blahdeblah: OK. Let me interrogate this further.
[08:31] It would be good to know what version of LXD is on the machine.
[08:33] manadart: hi o/ -- let me move a couple of DHCP agents out of the failing container and I will grab all the info for you
[08:33] Gah. LXD/KVM is the other way around. The host interfaces *are* passed via network config.
[08:37] manadart: so the "lxd" pkg on the metal is 2.0.11-0ubuntu1~14.04.4
[08:37] aluria: Thx
[08:38] "neutron agent-list" shows container names of the type "juju-machine-5-lxc-1", except for 22/lxd/8, which was deployed after the juju1->juju2 migration -- "juju-958f87-22-lxd-8"
[08:39] for now, I have checked "ip netns" on the container with issues (22/lxd/8) and removed the instances via "neutron dhcp-agent-network-remove" and "neutron dhcp-agent-network-add" somewhere else
[08:59] aluria: So the errant container is gone? Do we know what interfaces it got vs what the host had?
[09:00] manadart: the errant container is still there, but is missing eth2; on a different host, a converted lxc->lxd container shows "convert_net2:" in "lxc config device show"
[09:01] manadart: I see 2 possibilities: 1) after the upgrade from juju1->juju2, containers are only created with a couple of NICs; 2) we missed an extra constraint needed to create 22/lxd/8 with 3 ifaces (not 2)
[09:02] for now, I've run -> lxc config device add juju-958f87-22-lxd-8 eth2 nic name=eth2 nictype=bridged parent=br1-dmz // and I see the 3rd iface on the container
[09:14] aluria: It looks like space constraints would limit the devices passed to the container.
=== mup_ is now known as mup
[12:57] is there a way to specify ec2 availability zones in a bundle?
[12:57] I can do it without bundles, with `juju add-machine zone=us-east-1c` or `juju deploy ubuntu --to zone=us-east-1c`
[12:58] if I try to use a similar placement directive in a bundle, I get: invalid placement syntax "zone=us-east-1c"
[13:27] I seem to be having an issue with juju etcd 3.2.9; is there a way to install a particular version, like etcd 2.3.8?
[13:35] gman: hmm, that'll be up to the charm and where the charm gets the software from
[13:36] gman: looking at the charm readme: https://jujucharms.com/etcd/ it looks like you can use a deb or a snap, so maybe there's a snap path you can use, or maybe the deb is older than the snap and will work for you
[13:39] If anyone would be so kind... https://github.com/juju/juju/pull/8825. Not a big one.
[13:44] manadart: looking
[13:46] manadart: LGTM - becoming cleaner
[13:47] stickupkid: Yes; quite satisfying. The next one will remove a hoard of code.
[13:48] stickupkid: Thanks.
[13:49] nps
=== frankban is now known as frankban|afk
[17:32] hello all
[17:32] can we place some priority on stuck models being able to get --force destroyed or something
[17:33] I've got models stuck in "destroying" on every controller
[17:34] I'm not sure creating 5 new controllers and migrating all my non-stuck models to the new controllers is the correct answer
[17:34] sounds like a rat's nest
[17:34] possibly a lion's den
[17:34] bear's cave
[17:35] either way
[17:35] bdx, maybe check the logs. You might be able to unstick them if you get more information as to what's going on.
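On the availability-zone question from 12:57 above: the error shown indicates that bundle placement directives do not accept the zone= form. The sketch below only illustrates placement syntax bundles are known to accept (machines declared in the bundle, referenced from "to:"), alongside the CLI zone placement quoted in the log; the charm, series, and machine numbers are illustrative:

    # Placement that bundles do accept: machines declared in the bundle's
    # "machines:" section, referenced from an application's "to:" list.
    cat > bundle.yaml <<'EOF'
    series: xenial
    machines:
      "0": {}
    applications:
      ubuntu:
        charm: cs:ubuntu
        num_units: 1
        to: ["0"]
    EOF

    # Zone placement works when adding the machine directly, as in the log;
    # whether the bundle itself can express the zone depends on the Juju version.
    juju add-machine zone=us-east-1c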
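And on the etcd version question from 13:27: as noted above, pinning an older etcd is up to the charm. If the charm exposes a snap channel option (check the linked README), the general shape would be something like the following; the option name "channel" and the channel string are assumptions, not confirmed details:

    # Inspect the options the deployed charm actually exposes first.
    juju config etcd

    # If a snap channel option exists, pinning an older track would look
    # roughly like this (option name and channel are illustrative).
    juju config etcd channel=2.3/stable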
[17:36] controller model logs are what I would look at first
[17:36] pmatulis: right, I have a good idea as to why each is stuck
[17:37] some are stuck because machines physically left the model without being removed via juju
[17:37] some are stuck because cross-model relations wouldn't remove
[17:37] some are stuck for other reasons
[17:37] either way
[17:37] I've got a week's worth of things to do today
[17:37] and every day
[17:38] chasing these stuck models is not on the agenda
[17:38] nor should it be on anyone's
[17:38] you can --force remove machines
[17:41] pmatulis: not if they are already physically gone
[17:42] possibly in some cases it works
[17:42] in others not so much
[17:42] bdx, did you try?
[17:42] oh my
[17:42] I'm probably 20 hrs into trying to get rid of these models
[17:42] giving up
[17:42] I don't care enough to keep chasing this rat tail
[17:43] but I do care enough to bitch about it and make sure it gets nipped
[17:43] pmatulis: thanks for the insight
[17:44] bdx: it's on the roadmap for this cycle. we've started the spec and will definitely be getting it going
[17:44] awesome
[17:44] thanks, rick_h
[17:44] bdx: we had some conversations around various cases for this last week, and atm we're doing the series upgrade and cluster stuff, but it'll come around
[17:45] awesome
[18:35] hey guys
[18:36] is there anyone around who still deals with haproxy charm maintainership / development?
[18:36] I've had a PR up for some time that fixes a critical bug
[18:36] if someone could give me a review it would be greatly appreciated
[18:36] https://code.launchpad.net/~jamesbeedy/charm-haproxy/options_newline
[18:36] thanks
[20:29] bdx: what's your deal
[20:33] bdx: https://twitter.com/JamesBeedy/status/1007715556022079488
[20:35] bdx: what part of the code of conduct do you not understand?
[20:35] I understand the code of conduct
[20:36] If I need to drop a few curse words to get the wheels rolling, so be it
[20:36] sue me
[20:36] ok, we're done
[20:36] I'm sorry if I offended you
[20:37] It's offensive to Ubuntu
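Following up on the stuck-model thread above: the --force flag mentioned there applies to machine removal (forced model destruction was still on the roadmap at this point). A minimal sketch of that workaround, with the machine ID and model name as placeholders; as noted in the log, it does not always help once the machines are already physically gone:

    # Forcibly remove the machine record even though its agent is unresponsive.
    juju remove-machine 5 --force

    # Then retry tearing down the model.
    juju destroy-model mymodel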