[00:05] <anastasiamac_> stokachu: thank you for update \o/ it still leaves my questions open :) specifically, r we fixing it on conjure-up side or juju? nested lxd requirement came too late into 2.1 (we have not supported it previously)
[00:07] <stokachu> anastasiamac_, so the branch that jam has will try to get a dhcp address from a nested container
[00:07] <stokachu> anastasiamac_, i think that is a good approach
[00:08] <stokachu> anastasiamac_, and i have been fixing it in conjure-up with the alteration of lxd profiles myself
[00:08] <stokachu> anastasiamac_, now my fixes don't work anymore
[00:08] <stokachu> and this is a heavily utilized feature of conjure-up and is at the center of http://ubuntu.com/cloud
[00:09] <stokachu> anastasiamac_, so releasing juju 2.1 without having a solution i dont think is a good idea
[00:10] <stokachu> anastasiamac_, menn0 https://github.com/juju/juju/compare/staging...jameinel:2.1-bridge-tweak that is what he referenced to me earlier
[00:26] <stokachu> anastasiamac_, does that answer your question?
[00:30] <anastasiamac_> stokachu: yes, coordination is the key :) thank you
[00:30] <anastasiamac_> stokachu: of course, i meant cooperation \o/
[00:30] <stokachu> :D
[01:38] <stokachu> anyone around that can test conjure-up for bug https://bugs.launchpad.net/juju/+bug/1664409?
[01:38] <mup> Bug #1664409: juju 2.1-beta5 - juju 2.1rc2 - localhost failing to allocate a nested container with an ip <conjure> <regression> <juju:Triaged> <juju 2.1:Triaged> <https://launchpad.net/bugs/1664409>
[01:39] <stokachu> sudo snap install conjure-up --classic --edge
[01:39] <stokachu> conjure-up kubernetes-core localhost
[01:50] <wallyworld> axw: thanks for reviews
[01:51] <axw> np
[02:02] <wallyworld> axw: IIANM, the new azure regions are being tested right? so we'll be able to land your PR "soon"
[02:03] <axw> wallyworld: pretty sure that's what aaron said
[02:03] <wallyworld> great, yeah i thought so too
[02:12] <thumper> wallyworld, anastasiamac_: fyi https://github.com/go-yaml/yaml/pull/241
[02:12] <wallyworld> looking
[02:13] <thumper> babbageclunk: do you know how to add patches for upstreams to our codebase?
[02:15] <wallyworld> thumper: lgtm
[02:15] <thumper> wallyworld: yeah, but getting it merged may be harder
[02:15] <wallyworld> :-( may need to do as a patch like for mgo
[02:15] <thumper> yeah, that's my thinking
[02:39] <axw> I'm probably going to get disconnected as I test bridging stuff. If I don't respond, you're talking to the ghost of axw
[02:40] <axw> always the bridge-maid, never the bridge
[02:40] <babbageclunk> lol
[02:47] <anastasiamac_> wallyworld: to avoid stepping on toes, in case anyone else is doing this... m going to try bootstrap openstack mitaka with ur keystone v3 related changes :D
[02:54] <wallyworld> awesome ty
[02:54] <wallyworld> we need to thoroughly test this change
[02:55] <wallyworld> far out, another 2 blessed builds
[02:56] <anastasiamac_> wallyworld: v3 stuff should probably go into release notes. cut and paste from ur email would b awesome :D
[02:56] <wallyworld> indeed, that's the plan, plus a bit more
[02:56] <anastasiamac_> \o/
[02:59] <axw> menn0 jam wallyworld: will be a couple of minutes late, start without me
[02:59] <wallyworld> ok
[03:00] <menn0> wallyworld: give me 5 mins - need to sort out kids
[03:01] <wallyworld> k
[03:05] <thumper> https://github.com/juju/juju/pull/6983
[03:05] <thumper> wallyworld: ^^
[03:54] <axw> jam: do you think it's worth us both looking at the LXD networking thing independently? I can repro easily
[03:54] <jam> axw: can you look at jameinel/juju 2.1-bridge-tweaks ?
[03:54] <axw> jam: ok
[04:04] <axw> jam: seems sane, I'll test it
[04:05] <axw> right after I have some lunch
[04:23] <axw> jam: so that might help in cloud images, but in the case of LXD provider it's not the issue. there, lxd-bridge is failing to start
[04:23] <axw> journalctl says: "ip6tables v1.6.0: can't initialize ip6tables table `filter': Table does not exist..."
[04:24] <axw> jam: 2.0.9 made some changes to do with ip6tables: https://bugs.launchpad.net/ubuntu/+source/lxd/+bug/1660506, so maybe related
[04:24] <mup> Bug #1660506: SRU of LXD 2.0.9 (upstream bugfix release) <verification-done> <lxd (Ubuntu):Invalid> <lxd (Ubuntu Xenial):Fix Released by stgraber> <https://launchpad.net/bugs/1660506>
[04:27] <jam> so it sounds like we have >1 bug, then
[04:52] <axw> jam: actually I think I was just looking at outdated instructions. just need to add some kernel modules to the profile
[04:52] <axw> testing again now
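For context, the "add some kernel modules to the profile" fix typically means listing the netfilter modules in the container's LXD profile so lxd-bridge can start inside it. A sketch of what that looks like (the exact key is LXD's `linux.kernel_modules`; the module list here is an assumed example, not necessarily the set axw used):

```shell
# Assumed sketch: expose the host's netfilter/storage modules to the
# container by naming them in the LXD profile, and allow nesting.
# The module list is illustrative and workload-dependent.
lxc profile set default linux.kernel_modules ip_tables,ip6_tables,nf_nat,overlay
lxc profile set default security.nesting true
```

New containers launched with this profile then see those modules loaded, which is what ip6tables needs for its `filter` table.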
[05:02] <jam> axw: did you want to meet to chat? We talked a bit earlier, so I don't feel like we have to if you're focused
[05:02] <axw> jam: I'm happy to skip today
[05:03] <axw> jam: which was the kubernetes/conjure-up bug #?
[05:03] <axw> was/is
[05:03] <jam> axw: bug #1664409
[05:03] <mup> Bug #1664409: juju 2.1-beta5 - juju 2.1rc2 - localhost failing to allocate a nested container with an ip <conjure> <regression> <juju:Triaged> <juju 2.1:Triaged> <https://launchpad.net/bugs/1664409>
[05:03] <axw> thanks
[05:12] <anastasiamac_> plz add PRs to the bugs you work on :)
[05:12]  * anastasiamac_ looks at some ppl...
[05:51] <jam> axw: so testing the lxd case locally, it does look like my branch fixes the 'nested container doesn't use DHCP', I'm not 100% sure, but I'm pretty close to being sure
[05:51] <jam> I saw it trigger the debug statement that should indicate the fix is working
[05:51] <jam> I'm creating a trivial bundle for testing
[05:51] <axw> jam: ok, cool
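A "trivial bundle" for this kind of test presumably just places one unit into a LXD container on a top-level machine. A hypothetical minimal version (file path, charm choice, and bundle contents are all assumptions for illustration):

```shell
# Hypothetical minimal bundle for exercising nested-LXD addressing: one
# top-level machine, plus a unit placed in a LXD container on it ("lxd:0").
# Uses the 2.1-era "services" bundle key.
cat > /tmp/nested-test-bundle.yaml <<'EOF'
series: xenial
machines:
  "0":
    series: xenial
services:
  ubuntu:
    charm: cs:ubuntu
    num_units: 1
    to:
      - lxd:0
EOF
# juju deploy /tmp/nested-test-bundle.yaml   # against a bootstrapped localhost controller
```

If the fix works, the nested container under machine 0 should come up with a DHCP-assigned address.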
[05:52] <jam> axw: hm. I wish nested containers would use the host LXD to seed the images... :)
[05:52] <jam> I wonder how awful and hackish that would be
[05:54] <axw> jam: probably not too hackish if we had the proxying story down, but otherwise I'm not sure
[06:10] <axw> jam: I'm not sure where the change was made, but conjure-up kubernetes-core localhost is putting easyrsa on a top-level machine now. the bundle still appears to put it in a lxd container tho
[06:11] <axw> jam: anyway, adding a unit of it to a (nested) lxd container works with your patch, doesn't work without it
[06:11] <jam> axw: https://github.com/juju/juju/pull/6985
[06:12] <jam> axw: I think stokachu changed kubernetes-core to use a local bundle that *doesn't* deploy into a container
[06:12] <jam> I still need to solve the multi-network case, which I haven't actually formulated a solution for
[06:12] <axw> jam: yeah that's what I figured, just can't see the commit on github
[06:12]  * axw nods
[06:12] <jam> but that likely also fixes bugs we've seen in GCE, etc.
[06:13] <jam> axw: do you know how to tell conjure-up to refresh its cache?
[06:13] <axw> jam: I don't know what cache you're talking about, so that'll be a no :)
[06:14] <jam> axw: ~/.cache/conjure-up holds the spells, etc.
[06:14] <jam> or ~/.cache/conjure-up-spells
[06:14] <jam> but I don't know what/how that gets refreshed
[06:15] <axw> I see
[06:15] <axw> no idea, sorry
[06:15] <jam> other than when you do it the first time it says "nothing found, refreshing"
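For what it's worth, one blunt way to force a re-fetch, assuming (per the first-run behaviour jam describes) that conjure-up re-downloads spells when its cache directories are absent:

```shell
# Assumption: conjure-up re-fetches its spells when the cache directories
# are missing, as on first run. Removing them should force a refresh.
rm -rf "$HOME/.cache/conjure-up" "$HOME/.cache/conjure-up-spells"
# conjure-up kubernetes-core localhost   # next run re-populates the cache
```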
[06:16] <axw> jam: reviewed
[06:30] <jam> axw: thanks. I'm trying to see if I can reproduce the nesting failure without juju in the mix
[06:30] <axw> jam: which one?
[06:32] <jam> axw: security.nesting: true not being enough to actually deploy a nested container
[06:32] <jam> and I did
[06:32] <axw> ok
[06:38] <jam> https://github.com/lxc/lxd/issues/2885
[06:41] <wallyworld> axw: does this look like the correct fix for bug 1645408? https://github.com/juju/juju/compare/2.1...wallyworld:azure-min-root-disk?expand=1
[06:41] <mup> Bug #1645408: azure root-disk constraint fails if < default root disk size <juju:Triaged> <https://launchpad.net/bugs/1645408>
[06:42]  * axw looks
[06:50] <axw> wallyworld: hmmm. I guess so? I can't find anywhere that azure specifies what the minimum OS disk size is
[06:50] <wallyworld> axw: me either. the best i found is that for linux it will be "approx 30GB"
[06:51] <axw> wallyworld: where'd you see that?
[06:52] <axw> wallyworld: also, can you please rename defaultRootDiskSize to minRootDiskSize
[06:52] <axw> wallyworld: probably worth citing the bug since there's no docs or API that state the minimum
[06:53] <wallyworld> axw: here's one place https://docs.microsoft.com/en-gb/azure/virtual-machines/virtual-machines-linux-expand-disks?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json
[06:53] <wallyworld> first para
[06:53] <wallyworld> another place said ~30GB
[06:54] <wallyworld> will do those changes, just have to duck out for school pickup in a sec
[06:54] <axw> wallyworld: how very vague :)
[06:54] <axw> wallyworld: ok
[06:54] <wallyworld> yeah, you'd think they'd say explicitly
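The fix being reviewed amounts to clamping the requested root-disk constraint up to Azure's minimum OS disk size. A sketch of the logic (function name is hypothetical, and the 30 GiB floor is the "approx 30GB" figure from the docs cited above, since no exact value is published):

```shell
# Sketch of the clamp discussed above: Azure rejects OS disks smaller
# than roughly 30 GiB, so round any smaller root-disk constraint up to
# that minimum. 30720 MiB = 30 GiB; the exact floor is an assumption.
min_root_disk_mib=30720

clamp_root_disk() {
  local requested=$1
  if [ "$requested" -lt "$min_root_disk_mib" ]; then
    echo "$min_root_disk_mib"
  else
    echo "$requested"
  fi
}

clamp_root_disk 8192    # a root-disk=8G constraint gets raised to 30720
```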
[08:06] <axw> anastasiamac_: can you please review https://github.com/juju/utils/pull/265?
[08:11] <anastasiamac_> axw: sure.. but need to go afk for a bit - kids time/dinner/life/etc...
[08:12] <anastasiamac_> axw: nm - lgtm'ed :D
[08:13] <axw> anastasiamac_: thanks
[08:27] <wallyworld> axw: here's that azure pr
[08:27] <wallyworld> https://github.com/juju/juju/pull/6986
[08:28] <axw> wallyworld: trade you https://github.com/juju/juju/pull/6987
[08:28] <wallyworld> ok
[08:30] <axw> wallyworld: LGTM, thanks
[08:30] <wallyworld> ty
[08:31] <wallyworld> axw: +1 for yours too. yay for no more fork
[09:36] <jam> axw: the bot seems unhappy
[10:07] <wallyworld> axw: not sure if you're up for a small review https://github.com/juju/juju/pull/6984
[10:09] <wallyworld> jam: looks like bootstrap cmd error waiting for lxd instance address. failed to bootstrap model: refreshing addresses: instances not found
[10:09] <jam> wallyworld: well, not the only error. we were seeing mongo teardown errors in 2 of my attempts, and a PubSub timeout failure in the first attempt
[10:10] <wallyworld> jam: i have seen a similar lxd error before and it was spurious i think
[10:10] <wallyworld> i think you've just been unlucky :-(
[10:12] <jam> 4 times unlucky sounds like hardware that is unhappy
[10:16] <wallyworld> i think the record is 9 or something
[10:33] <wallyworld> jam: 5th time lucky!
[10:38] <jam> wallyworld: and here with us actually getting blesses, I thought this was getting better :)
[10:39] <wallyworld> indeed :-(
[10:39] <wallyworld> we did have a good run there for a bit
[10:42] <jam> wallyworld: I'm struggling to find a way to test this bit of code, I don't want to take your evening but having a chat would be good
[10:42] <wallyworld> sure
[10:42] <wallyworld> HO?
[10:42] <jam> wallyworld: https://hangouts.google.com/hangouts/_/canonical.com/a-team-standup?authuser=1
[10:52] <axw> wallyworld: sorry was at school classroom meetings. gotta get kids ready shortly, will see if I can squeeze a review in
[10:52] <wallyworld> axw: no worries, ty. only if you have time
[11:15] <wallyworld> axw: thank you
[12:17] <axw> anastasiamac_: thanks for spotting the real failure :)
[12:17] <anastasiamac_> axw: sorry i didn't pick up on it earlier \o/ but all in the day's work :D
[12:18] <axw> bleh, could have sworn I added the dep
[13:39] <mup> Bug #1613855 opened: SetAPIHostPorts runs afoul of "state changing too quickly" <juju:Fix Released by mfoord> <juju-core:Triaged> <https://launchpad.net/bugs/1613855>
[19:29] <thumper> morning folks
[19:31] <perrito666> morning thumper
[19:31] <perrito666> kinda early for you isn't it?
[19:32] <thumper> not too early
[19:32] <thumper> 8:30
[19:32] <thumper> I'm normally here sometime between 8 and 9
[19:32] <cholcombe> is there any way to remove a storage pool once it's created?
[19:32] <perrito666> I thought it was somewhere between 5 and 6
[19:33] <thumper> cholcombe: I would hope so
[19:33] <cholcombe> thumper: i can't seem to find it anywhere
[19:33] <cholcombe> i'm on juju 2.0.3
[19:33] <perrito666> mmmm, I would not be so sure, you will need to ask axw but he will not be here for a few hours
[19:34] <thumper> cholcombe: yeah... seems strange
[19:34] <thumper> I'm sure there is a reason for the behaviour, I just don't know what that reason is.
[19:34] <cholcombe> i'm also looking for the magic juju env variable that lets me mount loopback devices in my lxc containers
[20:10] <thumper> daughter forgot her guitar, just running down to her school
[20:10] <thumper> bbs
[22:14] <babbageclunk> Hey wallyworld?
[22:14] <wallyworld> yo
[22:15] <babbageclunk> wallyworld: I'm trying to work out why this comment is here: https://github.com/juju/juju/blame/staging/state/cleanup.go#L292
[22:15] <babbageclunk> I don't think you're actually responsible for it, it looks like it was something that got merged in.
[22:17] <babbageclunk> I guess 2 qs - do you know how I could find the original commit that added that (going to the previous in the blame doesn't help, the function just disappears)?
[22:17] <wallyworld> yeah, not sure. william did the work from memory. i would have thought we'd need to also clean up non local charms eg resources etc
[22:17] <wallyworld> you could look in develop history
[22:17] <wallyworld> i think it would have been committed to that branch
[22:18] <babbageclunk> duh, didn't click that I was in the staging branch.
[22:18] <wallyworld> i'm not familiar with the code but can look around to see if the reason becomes apparent
[22:19] <babbageclunk> hmm, no, that doesn't work - no matter what branch I look on it says you did it.
[22:22] <wallyworld> i'll have to have a look into it
[22:29] <babbageclunk> wallyworld: ha ha, that commit is 3694 files changed, 330,000 lines added, 180,000 removed. I think it's when you revived the CMR spike branch you guys had done a long time before?
[22:32] <wallyworld> ah, right. yes that branch was merged in. are you saying that branch introduced that isLocal check?
[22:33] <babbageclunk> wallyworld: no, just that it shadows the real commit that added that and I don't know how to get past it.
[22:33] <babbageclunk> wallyworld: but I've found it just by assuming that Will did it.
[22:34] <wallyworld> yeah, i'm not across the detail there
[22:34] <babbageclunk> wallyworld: unfortunately his PR doesn't really explain why we wouldn't remove the charm if it's not local.
[22:35] <wallyworld> :-(