/srv/irclogs.ubuntu.com/2018/06/15/#juju.txt

[00:42] <veebers> wallyworld: FYI bug I mentioned in standup that's holding up roll out of snap lxd 3 in the lab: https://bugs.launchpad.net/apparmor/+bug/1777017
[00:42] <mup> Bug #1777017: snap install lxd doesn't work within a container <AppArmor:New> <https://launchpad.net/bugs/1777017>
[00:42] <wallyworld> ack
[00:59] <veebers> Hah, I doubt this Juju was written in Go: https://www.trademe.co.nz/a.aspx?id=1662397527
[03:15] <veebers> wallyworld: is this statement correct: "Starting with Juju 2.0, users can upload resources to the controller or the Juju Charm Store. . ."? I thought it was only the charmstore that would take a resource (maybe the controller does some caching after first deploy etc.?)
[03:16] <wallyworld> juju attach-resource
[03:17] <wallyworld> i haven't got the exact workflow mapped out in my head
[03:18] <wallyworld> but one scenario is to upgrade an existing resource for a deployed charm
[03:18] <wallyworld> the controller does cache resources
[03:19] <veebers> wallyworld: ack, thanks
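[Editor's note: the resource workflow wallyworld sketches above might look like this from the CLI; the charm, resource name, and file paths are hypothetical examples, not from the log.]

```shell
# Deploy a charm whose metadata declares a resource, supplying a local file.
# ("mycharm", "software" and the tarball paths are hypothetical.)
juju deploy ./mycharm --resource software=./software-1.0.tgz

# Later, upload a newer revision of the resource for the deployed
# application; the controller caches it and the charm can fetch it.
juju attach-resource mycharm software=./software-1.1.tgz
```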
[05:50] <blahdeblah> Hi.  Anyone able to point me to a decent reference about how juju 2 configures lxd network interfaces on trusty?
[06:56] <wallyworld> manadart: ^^^^^ did you have any info we could pass to blahdeblah?
[06:56] <wallyworld> or jam?
[06:58] <blahdeblah> thanks wallyworld
[06:58] <wallyworld> np, sorry i'm not the best person to ask
=== frankban|afk is now known as frankban
[07:27] <manadart> blahdeblah: There is no "reference" as far as I am aware, but I can answer any questions you have.
[07:29] <blahdeblah> manadart: So I've got a machine where one of the LXD containers doesn't have the network interfaces I expect, and I am trying to work out what actually controls that.  i.e. How does juju decide which network interfaces to give to a container when it creates one on a host?
[07:29] <manadart> This is also beginning to differ between the 2.3 series and 2.4+ as we accommodate LXD clustering.
[07:30] <manadart> blahdeblah: So this is container machines as opposed to the LXD provider, yes?
[07:30] <blahdeblah> Correct; this is 2.3.4 running the MAAS provider to drive an OpenStack cloud
[07:37] <blahdeblah> manadart: ^ In case you missed it
[07:38] <manadart> blahdeblah: Got it, just doing a quick scan of the 2.3 logic :)
[07:38] <blahdeblah> thanks - appreciated
[08:00] <manadart> blahdeblah: OK, so there is a series of checks and fallbacks.
[08:01] <manadart> The parent machine is interrogated for its interfaces and these are passed to the call for creating the container.
[08:01] <blahdeblah> manadart: Oh, so the config of the parent is what really drives what the LXDs get?
[08:01] <manadart> It appears that ones without a parent device are assigned the default LXD bridge.
[08:02] <blahdeblah> What do you mean "parent device"?
[08:07] <manadart> For example, when no usable interfaces are passed to the create container call, the default is to create "eth0" with the default LXD bridge as its parent.
[08:08] <blahdeblah> Hmmm - not sure how these ones got set up then; they're neutron gateways which end up with 4-5 "physical" interfaces.
[08:08] <manadart> And there are containers on the same machine with *different* network config?
[08:16] <manadart> blahdeblah: Scratch that. LXD differs from KVM. The host machine does not factor in network config when creating a container.
[08:18] <blahdeblah> aluria: ^
[08:19] <blahdeblah> manadart: Oh, so what does decide the network config?
[08:23] <manadart> So when the machine is initialised for LXD containers, the LXD bridge is configured (via file for the older LXD on Trusty) with a 10.0.*.0/24 subnet.
[08:24] <manadart> The container is created with an "eth0" device that has this bridge as its parent.
[08:28] <manadart> Now there is a final fallback that will not attempt to create the container with explicit devices, and instead just pass it the default LXD profile.
[08:29] <manadart> But that is invoked only when there is no bridge device, and I can't see in the logic how this would come about.
[08:29] <blahdeblah> manadart: that's really odd, because these containers have 3-4 interfaces in most cases, passed through without being part of the usual NAT-ed lxdbr0.
[08:29] <blahdeblah> And I would have thought that there would need to be some explicit request from something to make it that way.
[08:30] <blahdeblah> anyway, dinner time for me - aluria will be able to answer any further Qs on this setup
[08:30] <manadart> blahdeblah: OK. Let me interrogate this further.
[08:31] <manadart> It would be good to know what version of LXD is on the machine.
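[Editor's note: to compare what a container actually received against the fallback default profile manadart describes, something like the following could be run on the host; `<container>` is a placeholder for the real container name.]

```shell
# Show the NIC devices attached to a specific container.
lxc config device show <container>

# Show the default profile; its eth0 device is parented on the default
# LXD bridge (lxdbr0), which is the fallback described above.
lxc profile show default
```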
[08:33] <aluria> manadart: hi o/ -- let me move a couple of dhcp agents out from the failing container and I will grab all the info for you
[08:33] <manadart> Gah. LXD/KVM is the other way around. The host interfaces *are* passed via network config.
[08:37] <aluria> manadart: so the "lxd" pkg on metal is 2.0.11-0ubuntu1~14.04.4
[08:37] <manadart> aluria: Thx
[08:38] <aluria> "neutron agent-list" shows container names of the type "juju-machine-5-lxc-1", except for 22/lxd/8, which was deployed after the juju1->juju2 migration -- "juju-958f87-22-lxd-8"
[08:39] <aluria> for now, I have checked "ip netns" on the container with issues (22/lxd/8) and removed the instances via "neutron dhcp-agent-network-remove" and "neutron dhcp-agent-network-add" somewhere else
[08:59] <manadart> aluria: So the errant container is gone? Do we know what interfaces it got vs what the host had?
[09:00] <aluria> manadart: the errant container is still there, but misses eth2; on a different host, a converted lxc->lxd container shows "convert_net2:" on "lxc config device show <lxdname>"
[09:01] <aluria> manadart: I see 2 possibilities: 1) after the upgrade from juju1->juju2, containers are only created with a couple of nics;  2) we missed an extra constraint so as to create 22/lxd/8 with 3 ifaces (not 2)
[09:02] <aluria> for now, I've run -> lxc config device add juju-958f87-22-lxd-8 eth2 nic name=eth2 nictype=bridged parent=br1-dmz  // and I see the 3rd iface on the container
[09:14] <manadart> aluria: It looks like space constraints would limit the devices passed to the container.
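[Editor's note: the space constraints manadart refers to are expressed at deploy time, roughly as below; the space name "dmz" is a hypothetical example, not taken from this environment.]

```shell
# Constrain an application's machines (and containers placed on them)
# to NICs in the named Juju spaces; "dmz" is a hypothetical space name.
juju deploy neutron-gateway --constraints spaces=dmz

# Alternatively, bind a charm endpoint to a space explicitly.
juju deploy neutron-gateway --bind "public=dmz"
```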
=== mup_ is now known as mup
[12:57] <Cynerva> is there a way to specify ec2 availability zones in a bundle?
[12:57] <Cynerva> i can do it without bundles, with `juju add-machine zone=us-east-1c` or `juju deploy ubuntu --to zone=us-east-1c`
[12:58] <Cynerva> if i try to use a similar placement directive in a bundle, i get: invalid placement syntax "zone=us-east-1c"
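[Editor's note: a minimal, hypothetical bundle reproducing the error Cynerva reports; the charm and zone are illustrative only, and the log does not record an answer to the question.]

```shell
# Hypothetical minimal bundle using the placement directive that the
# bundle parser rejects with: invalid placement syntax "zone=us-east-1c"
cat > bundle.yaml <<'EOF'
applications:
  ubuntu:
    charm: cs:ubuntu
    num_units: 1
    to: ["zone=us-east-1c"]
EOF
juju deploy ./bundle.yaml
```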
[13:27] <gman> I seem to be having an issue with juju etcd 3.2.9, is there a way to install a particular version like etcd 2.3.8?
[13:35] <rick_h_> gman: hmm, that'll be up to the charm and where the charm gets the software from
[13:36] <rick_h_> gman: looking at the charm readme: https://jujucharms.com/etcd/ it looks like you can use a deb or a snap, so maybe there's a snap path you can use, or maybe the deb is older than the snap and will work for you
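[Editor's note: if the snap path rick_h_ mentions is exposed as charm config, pinning a version might look like this; the `channel` config key and the channel name are assumptions, so check the charm's actual config keys first.]

```shell
# Inspect the config options the etcd charm actually exposes.
juju config etcd

# Hypothetical: deploy with an older snap channel, if the charm
# exposes a "channel" option as its readme suggests.
juju deploy etcd --config channel=2.3/stable
```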
[13:39] <manadart> If anyone would be so kind... https://github.com/juju/juju/pull/8825. Not a big one.
[13:44] <stickupkid> manadart: looking
[13:46] <stickupkid> manadart: LGTM - becoming cleaner
[13:47] <manadart> stickupkid: Yes; quite satisfying. The next one will remove a hoard of code.
[13:48] <manadart> stickupkid: Thanks.
[13:49] <stickupkid> nps
=== frankban is now known as frankban|afk
[17:32] <bdx> hello all
[17:32] <bdx> can we place some priority on stuck models being able to get --force destroyed or something
[17:33] <bdx> I've got models stuck in "destroying" on every controller
[17:34] <bdx> I'm not sure creating 5 new controllers and migrating all my non-stuck models to the new controllers is the correct answer
[17:34] <bdx> sounds like a rats nest
[17:34] <bdx> possibly a lion's den
[17:34] <bdx> bear's cave
[17:35] <bdx> either way
[17:35] <pmatulis> bdx, maybe check the logs. you might be able to unstick them if you get more information as to what's going on
[17:36] <pmatulis> controller model logs is what i would look at first
[17:36] <bdx> pmatulis: right, I have a good idea as to why each is stuck
[17:37] <bdx> some are stuck bc machines physically left the model without being removed via juju
[17:37] <bdx> some are stuck because x-model relations wouldn't remove
[17:37] <bdx> some are stuck for other reasons
[17:37] <bdx> either way
[17:37] <bdx> I've got a week's worth of things to do today
[17:37] <bdx> and every day
[17:38] <bdx> chasing these stuck models is not on the agenda
[17:38] <bdx> nor should it be on anyone's
[17:38] <pmatulis> you can --force remove machines
[17:41] <bdx> pmatulis: not if they are already physically gone
[17:42] <bdx> possibly in some cases it works
[17:42] <bdx> in others not so much
[17:42] <pmatulis> bdx, did you try?
[17:42] <bdx> oh my
[17:42] <bdx> I'm probably 20hrs into trying to get rid of these models
[17:42] <bdx> giving up
[17:42] <bdx> I don't care enough to keep chasing this rat tail
[17:43] <bdx> but I do care enough to bitch about it and make sure it gets nipped
[17:43] <bdx> pmatulis: thanks for the insight
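[Editor's note: the forced machine removal pmatulis suggests looks like this; the machine ID and model name are hypothetical examples.]

```shell
# Force-remove a machine record even when the underlying instance is
# already gone or unreachable; "5" is a hypothetical machine ID.
juju remove-machine 5 --force

# Then retry tearing down the model.
juju destroy-model mymodel
```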
[17:44] <rick_h_> bdx: it's on the roadmap for this cycle. we've started the spec and definitely will be getting it going
[17:44] <bdx> awesome
[17:44] <bdx> thanks, rick_h
[17:44] <bdx> _
[17:44] <rick_h_> bdx: we had some conversations around various cases for this last week and atm we're doing the series upgrade and cluster stuff but it'll come around
[17:45] <bdx> awesome
[18:35] <bdx> hey guys
[18:36] <bdx> is there anyone around who still deals in maintaining the haproxy charm development
[18:36] <bdx> I've had a PR up for some time that fixes a critical bug
[18:36] <bdx> if someone could give me a review it would be greatly appreciated
[18:36] <bdx> maintainership
[18:36] <bdx> https://code.launchpad.net/~jamesbeedy/charm-haproxy/options_newline
[18:36] <bdx> thanks
[20:29] <stokachu> bdx: what's your deal
[20:33] <stokachu> bdx: https://twitter.com/JamesBeedy/status/1007715556022079488
[20:35] <stokachu> bdx: what part of the code of conduct do you not understand?
[20:35] <bdx> I understand the code of conduct
[20:36] <bdx> If I need to drop a few curse words to get the wheels rolling so be it
[20:36] <bdx> sue me
[20:36] <stokachu> ok we're done
[20:36] <bdx> I'm sorry if I offended you
[20:37] <stokachu> It's offensive to Ubuntu

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!