[00:05] stokachu: thank you for update \o/ it still leaves my questions open :) specifically, r we fixing it on conjure-up side or juju? nested lxd requirement came too late into 2.1 (we have not supported it previously)
[00:07] anastasiamac_, so the branch that jam has will try to get a dhcp address from a nested container
[00:07] anastasiamac_, i think that is a good approach
[00:08] anastasiamac_, and i have been fixing it in conjure-up with the alteration of lxd profiles myself
[00:08] anastasiamac_, now my fixes don't work anymore
[00:08] and this is a heavily utilized feature of conjure-up and is at the center of http://ubuntu.com/cloud
[00:09] anastasiamac_, so releasing juju 2.1 without having a solution i dont think is a good idea
[00:10] anastasiamac_, menn0 https://github.com/juju/juju/compare/staging...jameinel:2.1-bridge-tweak that is what he referenced to me earlier
[00:26] anastasiamac_, does that answer your question?
[00:30] stokachu: yes, coordination is the key :) thank you
[00:30] stokachu: of course, i meant cooperation \o/
[00:30] :D
[01:38] anyone around that can test conjure-up for bug https://bugs.launchpad.net/juju/+bug/1664409?
[01:38] Bug #1664409: juju 2.1-beta5 - juju 2.1rc2 - localhost failing to allocate a nested container with an ip
[01:39] sudo snap install conjure-up --classic --edge
[01:39] conjure-up kubernetes-core localhost
[01:50] axw: thanks for reviews
[01:51] np
[02:02] axw: IIANM, the new azure regions are being tested right? so we'll be able to land your PR "soon"
[02:03] wallyworld: pretty sure that's what aaron said
[02:03] great, yeah i thought so too
[02:12] wallyworld, anastasiamac_: fyi https://github.com/go-yaml/yaml/pull/241
[02:12] looking
[02:13] babbageclunk: do you know how to add patches for upstreams to our codebase?
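The lxd profile alteration stokachu mentions above typically boils down to enabling nesting on the profile used by the host container. A minimal sketch, assuming the `default` profile (the actual profile name is not given in the log):

```shell
# Allow containers using this profile to launch their own nested
# containers ("default" is an assumed profile name; conjure-up may
# target a different profile).
lxc profile set default security.nesting true

# Verify the key was set.
lxc profile get default security.nesting
```

As the conversation notes, by 2.1 this setting alone was no longer sufficient, which is why the juju-side bridge-tweak branch was in play.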
[02:15] thumper: lgtm
[02:15] wallyworld: yeah, but getting it merged may be harder
[02:15] :-( may need to do as a patch like for mgo
[02:15] yeah, that's my thinking
[02:39] I'm probably going to get disconnected as I test briding stuff. If I don't respond, you're talking to the ghost of axw
[02:39] bridging*
[02:40] always the bridge-maid, never the bridge
[02:40] lol
[02:47] wallyworld: to avoid stepping on toes, in case anyone else is doing this... m going to try bootstrap openstack mitaka with ur keystone v3 related changes :D
[02:54] awesome ty
[02:54] we need to thoroughly test this change
[02:55] far out, another 2 blessed builds
[02:56] wallyworld: v3 stuff should probably go into release notes. cut and paste from ur email would b awesome :D
[02:56] indeed, that's the plan, plus a bit more
[02:56] \o/
[02:59] menn0 jam wallyworld: will be a couple of minutes late, start without me
[02:59] ok
[03:00] wallyworld: give me 5 mins - need to sort out kids
[03:01] k
[03:05] https://github.com/juju/juju/pull/6983
[03:05] wallyworld: ^^
=== mbarnett_ is now known as mbarnett
=== JoseeAntonioR is now known as jose
=== petevg_ is now known as petevg
=== arosales_ is now known as arosales
=== zeestrat_ is now known as zeestrat
=== nottrobin_ is now known as nottrobin
[03:54] jam: do you think it's worth us both looking at the LXD networking thing independently? I can repro easily
[03:54] axw: can you look at jameinel/juju 2.1-bridge-tweaks ?
[03:54] jam: ok
[04:04] jam: seems sane, I'll test it
[04:05] right after I have some lunch
[04:23] jam: so that might help in cloud images, but in the case of LXD provider it's not the issue. there, lxd-bridge is failing to start
[04:23] journalctl says: "ip6tables v1.6.0: can't initialize ip6tables table `filter': Tabl"
[04:23] err. ip6tables v1.6.0: can't initialize ip6tables table `filter': Table does not exist...
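A plausible workaround for the lxd-bridge failure above, assuming the error means the ip6table_filter kernel module is simply not loaded on the host (the module name is inferred from the error text, not confirmed in the log):

```shell
# Load the kernel module backing the ip6tables "filter" table, then
# retry the bridge. This persists only until reboot unless the module
# is also listed in /etc/modules.
sudo modprobe ip6table_filter
sudo systemctl restart lxd-bridge

# Check that the filter table can now be read.
sudo ip6tables -t filter -L >/dev/null && echo "filter table available"
```

As axw finds a little later in the log, inside a container the equivalent fix is exposing the needed kernel modules to the container via its profile rather than modprobe on the host.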
[04:24] jam: 2.0.9 made some changes to do with ip6tables: https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1660506, so maybe related
[04:24] Bug #1660506: SRU of LXD 2.0.9 (upstream bugfix release)
[04:27] so it sounds like we have >1 bug, then
[04:52] jam: actually I think I was just looking at outdated instructions. just need to add some kernel modules to the profile
[04:52] testing again now
[05:02] axw: did you want to meet to chat? We talked a bit earlier, so I don't feel like we have to if you're focused
[05:02] jam: I'm happy to skip today
[05:03] jam: which was the kubernetes/conjure-up bug #?
[05:03] was/is
[05:03] axw: bug #1664409
[05:03] Bug #1664409: juju 2.1-beta5 - juju 2.1rc2 - localhost failing to allocate a nested container with an ip
[05:03] thanks
[05:12] plz add PRs to the bugs you work on :)
[05:12] * anastasiamac_ looks at some ppl...
=== bradm_ is now known as bradm
[05:51] axw: so testing the lxd case locally, it does look like my branch fixes the 'nested container doesn't use DHCP', I'm not 100% sure, but I'm pretty close to being sure
[05:51] I saw it trigger the debug statement that should indicate the fix is working
[05:51] I'm creating a trivial bundle for testing
[05:51] jam: ok, cool
[05:52] axw: hm. I wish nested containers would use the host LXD to seed the images... :)
[05:52] I wonder how awful and hackish that would be
[05:54] jam: probably not too hackish if we had the proxying story down, but otherwise I'm not sure
[06:10] jam: I'm not sure where the change was made, but conjure-up kubernetes-core localhost is putting easyrsa on a top-level machine now. the bundle still appears to put it in a lxd container tho
[06:11] jam: anyway, adding a unit of it to a (nested) lxd container works with your patch, doesn't work without it
[06:11] axw: https://github.com/juju/juju/pull/6985
[06:12] axw: I think stokachu changed kubernetes-core to use a local bundle that *doesn't* deploy into a container
[06:12] I still need to solve the multi-network case, which I haven't actually formulated a solution for
[06:12] jam: yeah that's what I figured, just can't see the commit on github
[06:12] * axw nods
[06:12] but that likely also fixes bugs we've seen in GCE, etc.
[06:13] axw: do you know how to tell conjure-up to refresh its cache?
[06:13] jam: I don't know what cache you're talking about, so that'll be a no :)
[06:14] axw: ~/.cache/conjure-up holds the spells, etc.
[06:14] or ~/.cache/conjure-up-spells
[06:14] but I don't know what/how that gets refreshed
[06:15] I see
[06:15] no idea, sorry
[06:15] other than when you do it the first time it says "nothing found, refreshing"
[06:16] jam: reviewed
[06:30] axw: thanks. I'm trying to see if I can reproduce the nesting failure without juju in the mix
[06:30] jam: which one?
[06:32] axw: security.nesting: true not being enough to actually deploy a nested container
[06:32] and I did
[06:32] ok
[06:38] https://github.com/lxc/lxd/issues/2885
[06:41] axw: does this look like the correct for bug 1645408? https://github.com/juju/juju/compare/2.1...wallyworld:azure-min-root-disk?expand=1
[06:41] Bug #1645408: azure root-disk constraint fails if < default root disk size
[06:41] *the correct fix
[06:42] * axw looks
[06:50] wallyworld: hmmm. I guess so? I can't find anywhere that azure specifies what the minimum OS disk size is
[06:50] axw: me either. the best i found is that for linux it will be "approx 30GB"
[06:51] wallyworld: where'd you see that?
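On the conjure-up cache question above: no refresh command is named in the log, but removing the cache directories jam identifies should force conjure-up to re-fetch its spells on the next run. This is an assumption based on the "nothing found, refreshing" first-run behaviour described, not a documented interface:

```shell
# Delete conjure-up's cached spells so the next invocation re-downloads
# them, mirroring the first-run "nothing found, refreshing" path.
rm -rf ~/.cache/conjure-up ~/.cache/conjure-up-spells
```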
[06:52] wallyworld: also, can you please rename defaultRootDiskSize to minRootDiskSize
[06:52] wallyworld: probably worth citing the bug since there's no docs or API that state the minimum
[06:53] axw: her's one place https://docs.microsoft.com/en-gb/azure/virtual-machines/virtual-machines-linux-expand-disks?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json
[06:53] first para
[06:53] another place said ~30GB
[06:54] will do those changes, just have to duck out for school pickup in a sec
[06:54] wallyworld: how very vague :)
[06:54] wallyworld: ok
[06:54] yeah, you'd think they'd say explicitly
=== frankban|afk is now known as frankban
[08:06] anastasiamac_: can you please review https://github.com/juju/utils/pull/265?
[08:11] axw: sure.. but need to go afk for a bit - kids time/dinner/life/etc...
[08:12] axw: nm - lgtm'ed :D
[08:13] anastasiamac_: thanks
[08:27] axw: here's that azure pr
[08:27] https://github.com/juju/juju/pull/6986
[08:28] wallyworld: trade you https://github.com/juju/juju/pull/6987
[08:28] ok
[08:30] wallyworld: LGTM, thanks
[08:30] ty
[08:31] axw: +1 for your too. yay for no more fork
[09:36] axw: the bot seems unhappy
[10:07] axw: not sure if your up for a small review https://github.com/juju/juju/pull/6984
[10:09] jam: looks like bootstrap cmd error waiting for lxd instance address. failed to bootstrap model: refreshing addresses: instances not found
[10:09] wallyworld: well, not the only error. we were seeing mongo teardown errors in 2 of my attempts, and a PubSub timeout failure in the first attempt
[10:10] jam: i have seen a similar lxd error before and it was spurious i think
[10:10] i think you've just been unlucky :-(
[10:12] 4 times unlucky sounds like hardware that is unhappy
[10:16] i think the record is 9 or something
[10:33] jam: 5th time lucky!
[10:38] wallyworld: and here with us actually getting blesses, I thought this was getting better :)
[10:39] indeed :-(
[10:39] we did have a good run there for a bit
[10:42] wallyworld: I'm struggling to find a way to test this bit of code, I don't want to take your evening but having a chat would be good
[10:42] sure
[10:42] HO?
[10:42] wallyworld: https://hangouts.google.com/hangouts/_/canonical.com/a-team-standup?authuser=1
[10:52] wallyworld: sorry was at school classroom meetings. gotta get kids ready shortly, will see if I can squeeze a review in
[10:52] axw: no worries, ty. only if you have time
[11:15] axw: thank you
[12:17] anastasiamac_: thanks for spotting the real failure :)
[12:17] axw: sorry i didn't pick on it earlier \o/ but all in the day's work :D
[12:18] bleh, could have sworn I added the dep
=== coreycb` is now known as coreycb
=== freyes__ is now known as freyes
[13:39] Bug #1613855 opened: SetAPIHostPorts runs afoul of "state changing too quickly"
=== frankban is now known as frankban|afk
[19:29] morning folks
[19:31] morning thumper
[19:31] kinda early for you isnt it?
[19:32] not too early
[19:32] 8:30
[19:32] I'm normally here sometime between 8 and 9
[19:32] is there any way to remove a storage pool once it's created?
[19:32] I thought it was somewhere between 5 and 6
[19:33] cholcombe: I would hope so
[19:33] thumper: i can't seem to find it anywhere
[19:33] i'm on juju 2.0.3
[19:33] mmmm, I would not be so sure, you will need to ask axw but he will not be here for a few hours
[19:34] cholcombe: yeah... seems strange
[19:34] I'm sure there is a reason for the behaviour, I just don't know what that reason is.
[19:34] i'm also looking for the magic juju env variable that lets me mount loopback devices in my lxc containers
[20:10] daughter forgot her guitar, just running down to her school
[20:10] bbs
=== frankban|afk is now known as frankban
[22:14] Hey wallyworld?
[22:14] yo
[22:15] wallyworld: I'm trying to work out why this comment is here: https://github.com/juju/juju/blame/staging/state/cleanup.go#L292
[22:15] I don't think you're actually responsible for it, it looks like it was something that got merged in.
[22:17] I guess 2 qs - do you know how I could find the original commit that added that (going to the previous in the blame doesn't help, the function just disappears)?
[22:17] yeah, not sure. william did the work from memory. i would have thought we'd need to also clean up non local charms eg resources etc
[22:17] you could look in develop history
[22:17] i think it would have been committed to that branch
[22:18] duh, didn't click that I was in the staging branch.
[22:18] i'm not familiar with the code but can look around to see if the reason becomes apparent
[22:19] hmm, no, that doesn't work - no matter what branch I look on it says you did it.
[22:22] i'll have to have a look into it
[22:29] wallyworld: ha ha, that commit is 3694 files changed, 330,000 lines added, 180,000 removed. I think it's when you revived the CMR spike branch you guys had done a long time before?
[22:32] ah, right. yes that branch was merged in. are you saying that branch introduced that isLocal check?
[22:33] wallyworld: no, just that it shadows the real commit that added that and I don't know how to get past it.
[22:33] wallyworld: but I've found it just by assuming that Will did it.
[22:34] yeah, i'm not across the detail there
[22:34] wallyworld: unfortunately his PR doesn't really explain why we wouldn't remove the charm if it's not local.
[22:35] :-(
=== frankban is now known as frankban|afk
=== mwhudson_ is now known as mwhudson
=== mwhudson is now known as Guest32650
=== externalreality_ is now known as externalreality
=== mwhudson_ is now known as mwhudson
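One way around the blame problem babbageclunk hits above (a giant merge commit shadowing the commit that really introduced the code) is git's pickaxe search, which lists commits whose diffs add or remove a given string. A sketch, to be run from a juju checkout; the search string `isLocal` is taken from the conversation, and the exact identifier in cleanup.go may differ:

```shell
# -S lists commits that change the number of occurrences of the string.
# Plain "git log" does not diff merge commits by default, so the
# ordinary commit that introduced the code surfaces instead of the
# 3694-file merge that dominates git blame.
git log --all -S 'isLocal' --oneline -- state/cleanup.go
```

`git log -L 292,292:state/cleanup.go` (line-range history) is another option when the line number of interest is known, as it is here from the blame URL.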