[02:38] <blahdeblah> anastasiamac: just dropped some preliminary results from 1.25.9-proposed testing into lp:1645729
[02:38] <blahdeblah> Anyone looking after that bug in axw's absence?
[08:31] <hoenir> can someone review this PR? https://github.com/juju/juju/pull/6523
[08:34] <anastasiamac> hoenir: looking
[08:35] <anastasiamac> hoenir: natefinch should b online soon and he is today's reviewer and has windows expertise. I'll pass this on :) thank you!
[08:37] <hoenir> anastasiamac, thanks!
[08:40] <hoenir> anastasiamac, could you please post the review calendar?
[08:41] <anastasiamac> hoenir: i think it's internal to juju team calendar
[08:41] <anastasiamac> hoenir: but if u ask here, we can tell u who is on a particular day :)
[08:42] <anastasiamac> hoenir: with holidays just around the corner, some ppl r on and off so the schedule is not easily read
[12:20] <voidspace> frobware: ping
[12:35] <redir> web sokets.
[12:35] <redir> ignore ^^
[12:52] <voidspace> jam: so lxc is showing that only default and docker profiles exist on the machine, and despite specifying a memory constraint the default profile has been used
[12:52] <voidspace> jam: and the memory amount is *incorrect* (constraint not honoured) - this is with a modified version of tych0's patch
[12:52] <voidspace> jam: so it looks like this isn't enough and there's more work to do...
[12:53] <jam> voidspace: there can also be instance specific configuration, let me track it down
[12:55] <jam> voidspace: lxc config is where you can set it directly for each instance
[12:55] <voidspace> jam: I'll look
[12:55] <jam> lxc config show $INSTANCE_ID
[12:55] <voidspace> jam thanks
[12:56] <jam> profiles are just a way of aggregating a bunch of default config (AIUI)
[12:56] <jam> but ultimately it is the config of the instance itself that controls it.
[12:56] <voidspace> jam: so we wouldn't/shouldn't create a new profile per config permutation but should do it via config
[12:56] <jam> voidspace: correct.
[12:57] <jam> that's also what we're planning on (already do?) for per container network devices.
[12:57] <voidspace> jam: so I see this: in the config for the instance:   user.limits.memory: 0xc8206f7080MB
[12:57] <jam> voidspace: right, so you're running into our other bug
[12:57] <voidspace> jam: which looks suspicious
[12:57] <jam> I don't have the number offhand
[12:57] <jam> voidspace: but we have an open bug that all of our config gets prefixed with "user."
[12:58] <jam> voidspace: we have some other ones that end up showing up as "user.user.stuff"
[12:58] <jam> I think the machine-ids/model/controller-uuids
[12:58] <voidspace> jam: so all our config is broken anyway... ?
[12:58] <voidspace> ouch
[12:58] <voidspace> but this doesn't look like a sensible value anyway 0xc8206f7080MB
[12:59] <voidspace> jam: I requested 888M
[12:59] <jam> voidspace: *that* looks like it is serializing a memory pointer
[12:59] <voidspace> I will see what a default config looks like
[12:59] <voidspace> yep
[12:59] <jam> so we probably have "Memory"
[12:59] <jam> where it should be
[12:59] <jam> "Memory()"
[12:59] <voidspace> hah
[12:59] <voidspace> jam: this gives me some tools to check though, thanks
[13:00] <jam> voidspace: and it sounds like all our config is, indeed, broken, and we need to fix it and have some care for keys that we are going to look up later
[13:00] <jam> we may need to force some to be "user.user."
[13:00] <jam> voidspace: https://bugs.launchpad.net/ubuntu/+source/juju/+bug/1640079/comments/6 mentions the bug
[13:00] <mup> Bug #1640079: LXDs doesn't start after a reboot <lxd-provider> <openstack> <uosci> <juju:Triaged by rharding> <juju (Ubuntu):Triaged> <https://launchpad.net/bugs/1640079>
[13:00] <jam> but I don't have a root bug
[13:00] <voidspace> jam: ok, thanks - so there maybe three bugs to fix
[13:01] <jam> voidspace: well, that might be the bug, given it seems we set "user.boot.autostart"
[13:01] <voidspace> jam: use the correct config keys, use the right values (not pointers), *then* we can use constraints
[13:01] <voidspace> but fixing constraints won't actually work until the other two are fixed
[13:01] <voidspace> at least with lxd
[13:01] <jam> voidspace: presumably
[13:01] <voidspace> nice :-)
[13:13] <voidspace> jam: the pointer issue is my fault - constraints.Value is defined with pointers
[13:13] <voidspace> jam: trying again, I think the "user" prefix on the key will *still* mean it doesn't work
[13:26] <jam> voidspace: agreed
[13:26] <jam> we have to fix that other bug first
[13:31] <voidspace> jam: the specific problem that ante has is with kvm containers - as reported in the specific bug I'm on
[13:31] <voidspace> jam: I only started looking at lxd first as we had this patch from tycho and thought it would be easier
[13:31] <voidspace> jam: I'm switching to looking at kvm instead, I'll email rick_h to keep him up to date
[13:32] <voidspace> jam: when kvm is done I can look at fixing the lxd issue too which will mean fixing the user key bug too
[13:32] <jam> voidspace: fwiw, I'm pretty sure we care far more about LXD than KVM
[13:32] <voidspace> jam: hmmm, I don't think ante does, as he's doing kvm placement on physical hardware
[13:32] <voidspace> jam: so I will ask rick_h then which is higher priority :-)
[13:32] <rick_h> voidspace: the reason that lxd wasn't in the original bug is because lxd has not handled constraints on the container.
[13:33] <voidspace> rick_h: so from discussion above with jam, we have another bug with lxd that means our config options are being specified incorrectly
[13:33] <rick_h> voidspace: jam so I think it's worth validating with Ante, but I do believe the ideal state is lxd respecting constraints, with both kvm/lxd respecting them with the placement directive.
[13:33] <rick_h> voidspace: I see
[13:33] <voidspace> rick_h: see https://bugs.launchpad.net/ubuntu/+source/juju/+bug/1640079/comments/6
[13:33] <mup> Bug #1640079: LXDs doesn't start after a reboot <lxd-provider> <openstack> <uosci> <juju:Triaged by rharding> <juju (Ubuntu):Triaged> <https://launchpad.net/bugs/1640079>
[13:33] <voidspace> rick_h: so for lxd constraints to work at all, that needs fixing too
[13:33] <rick_h> voidspace: right, that one is the one macgreagoir is looking at
[13:34] <voidspace> rick_h: ah...
[13:34] <jam> rick_h: that bug is also that we are setting "user.user.stuff"
[13:36] <voidspace> jam: rick_h: so what gets set in the lxd config is "user.limits.memory: 888MB" instead of "limits.memory" and thusly lxd ignores it
[13:44] <rick_h> voidspace: ok, so please go ahead and chase the kvm side if you're unlocked to do that now with nested kvms
[13:44] <rick_h> voidspace: and then macgreagoir and yourself can peek and test the fix for the lxd config issues
[13:44] <voidspace> rick_h: ok, yep
[13:53] <frobware> voidspace: pong; though looking through the scrollback it looks like you have answers & further questions...
[14:04] <voidspace> frobware: I do, but I think I'm switching to kvm anyway for the moment
[14:05] <voidspace> frobware: rick_h: for creating a root disk of a specific size for lxd we need to create a new lxd *device* (rather than overriding settings in the metadata)
[14:05] <rick_h> voidspace: let's leave the disk alone for now please
[14:05] <voidspace> frobware: rick_h: currently the only devices we create are nics, so it would need to be new code to create a block device
[14:05] <rick_h> voidspace: just focus on the memory/cpu/easy ones
[14:05] <voidspace> rick_h: yep, that's just how we would do it
[14:05] <rick_h> ah ok coolio
[14:05] <voidspace> rick_h: not hard code, just new code
[14:06] <rick_h> voidspace: rgr
[14:06] <voidspace> rick_h: it's not a config setting via metadata - it's a setting on a block device, which we don't do yet because we just use the default block device
[14:06] <rick_h> voidspace: makes sense.
[14:39] <voidspace> rick_h: with current 2.1 branch, if I do juju add-machine kvm:0 --constraints="mem=888M" then I see the memory constraint honoured
[14:39] <voidspace> rick_h: is "kvm:0" a placement directive or is that not enough to trigger the bug?
[14:39] <rick_h> voidspace: can you check that if that's done in a bundle?
[14:39] <voidspace> rick_h: I can try and do it with a bundle
[14:39] <voidspace> ack
[14:39] <rick_h> voidspace: maybe we're barking up the wrong tree; he was doing it from a bundle in that bug, if I recall? /me checks the bug again
[14:40] <voidspace> rick_h: the bug was specifically from a bundle, yes
[14:40] <voidspace> rick_h: I will try that
[14:40] <rick_h> voidspace: k, ty
[14:41] <voidspace> rick_h: I'm still hopeful it's already been fixed by someone else by accident ;-)
[14:41] <rick_h> voidspace: I give you hope...but reserve the right to take it away :P
[14:41] <voidspace> rick_h: hah :-)
[14:41] <voidspace> sounds about right...
[14:43] <voidspace> macgreagoir: setting the node cpu type to "host-passthrough" in virt-manager is enough to enable nested kvm
[14:44] <voidspace> macgreagoir: that or "copy host cpu configuration" works, but either option has (or had) known issues
[14:44] <voidspace> macgreagoir: and allegedly "host-passthrough" is the least buggy way - according to the fedora and virt-manager folks anyway
[14:44] <voidspace> macgreagoir: but it works...
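For reference, the libvirt domain XML corresponding to the two virt-manager options above (a hedged sketch, assuming standard libvirt CPU mode names):

```xml
<!-- "host-passthrough": reportedly the least buggy way to enable nested KVM -->
<cpu mode='host-passthrough'/>

<!-- "copy host cpu configuration" in virt-manager corresponds to: -->
<!-- <cpu mode='host-model'/> -->
```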
[14:45] <macgreagoir> voidspace: Cool.
[14:46] <macgreagoir> voidspace: I'll make a note to test that in my env too, cheers.
[17:05] <mup> Bug #1649637 opened: Juju agent in a "failed" state after machine reboot on some charms <juju-core:New> <https://launchpad.net/bugs/1649637>
[18:12] <voidspace> rick_h: ping
[18:12] <rick_h> voidspace: pong, otp if I'm slow to react
[18:12] <voidspace> rick_h: kk
[18:13] <voidspace> rick_h: who is a good person to talk to about bundle format?
[18:13] <voidspace> rick_h: I might just email ivoks and ask  to see their actual bundle...
[18:14] <rick_h> voidspace: might as well and see, but the format is https://github.com/juju/charm/blob/v5/bundledata.go
[18:15] <voidspace> rick_h: thanks
[18:26] <voidspace> rick_h: I cannot successfully set the constraint on the kvm machine - as far as I can tell that isn't permitted by bundles
[18:27] <voidspace> rick_h: however if I set the constraint at the service level and use constraints it *does* appear to be ignored
[18:27] <voidspace> rick_h: so I am pursuing that as the bug and emailing ante about the specific way they are doing it
[18:35] <voidspace> rick_h: yup, the *only* valid names are numbers, so I do not *believe* they are specifying the constraint at the machine level
[18:35] <rick_h> voidspace: correct
[18:36] <rick_h> voidspace: I mean normally you'd say juju deploy $aplication --constraints...
[18:36] <voidspace> rick_h: but I can reproduce a service level constraint being ignored by a placement directive in a bundle
[18:37] <voidspace> rick_h: where the service has a constraint and is placed into a KVM container that constraint is ignored in the container creation
[18:37] <voidspace> rick_h: I do not know if that is the bug they mean, the snippet example in the bug seems fictitious
[18:37] <voidspace> rick_h: I have emailed anyway
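A minimal hypothetical bundle of the shape being tested here: a service-level constraint combined with a kvm placement directive. Charm and values are invented, and field names follow the bundledata format linked above (newer bundle revisions use `applications:` rather than `services:`):

```yaml
# Hypothetical reproduction bundle (invented names/values).
services:                   # newer bundle revisions use "applications:"
  ubuntu:
    charm: cs:ubuntu
    num_units: 1
    constraints: mem=888M   # service-level constraint
    to:
      - kvm:0               # placement into a KVM container on machine 0
machines:
  "0": {}
```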
[20:16] <menn0> thumper, babbageclunk: morning
[20:16] <menn0> thumper, babbageclunk: review please: https://github.com/juju/juju/pull/6695
[20:16] <thumper> menn0: morning
[20:16] <babbageclunk> morning menn0!
[20:16] <thumper> I looked
[20:16] <babbageclunk> also morning thumper!
[20:16] <thumper> and I'd like an answer to anastasiamac
[20:16] <babbageclunk> I did not looked.
[20:17] <thumper> apart from that, looked fine
[20:17] <menn0> thumper: already answered
[20:17]  * thumper looks again
[20:18] <thumper> menn0: but where is that close?
[20:18] <menn0> thumper: I believe you implemented it!
[20:18]  * menn0 looks
[20:18] <thumper> must be good then
[20:19] <menn0> thumper: in migration/precheck.go, the defer at the bottom of SourcePrecheck
[20:20] <thumper> ok, got it
[20:20] <thumper> approved
[20:21] <menn0> thumper: cheers
[20:47] <anastasiamac> menn0: thumper: thnx \o/
[20:58] <veebers> thumper: would you have any idea why juju (or maybe it's just lxd) would create lxd containers with ~1GB disk space?
[20:59] <thumper> nope
[20:59] <veebers> only happening on one host, the others seem fine
[20:59] <thumper> some lxd local config?
[20:59] <veebers> hmm, any pointers on who to poke thumper ?
[20:59] <thumper> stgraber
[20:59] <veebers> none of the profiles suggest anything. I'll dig deeper though
[20:59] <veebers> ah yeah very good point, I'll ping him. Cheers
[21:00] <anastasiamac> thumper: menn0: babbageclunk: any takers to review https://github.com/go-goose/goose/pull/36?
[21:38] <menn0> anastasiamac: i've had a look and have a question
[21:38] <anastasiamac> menn0: thnx
[21:40] <anastasiamac> babbageclunk: could u plz also land this for 2.2 (develop) https://github.com/juju/juju/pull/6678
[22:06] <thumper> menn0: oh ffs, I found my bug
[22:06] <thumper> menn0: not the line number problem, but the other bit
[22:06] <thumper> was a copy paste error from a previous test
[22:07] <thumper> where I was calling workertest.CleanKill(c, w)
[22:07] <thumper> I now need to defer that :)
[22:07] <thumper> yes, the worker was dead, so had unsubscribed, so... unsurprisingly, didn't do anything
[22:11] <mup> Bug #1649637 changed: Juju agent in a "failed" state after machine reboot on some charms <juju:Triaged> <https://launchpad.net/bugs/1649637>
[22:23] <babbageclunk> anastasiamac: oops, yup, recreating that PR for develop
[22:24] <anastasiamac> babbageclunk: \o/
[22:31] <alexisb> anastasiamac, ping
[22:32] <alexisb> thumper, ping
[22:33] <thumper> hey
[22:33] <alexisb> heya, in ian's absence can you jump on the release call
[22:33] <thumper> sure
[22:33] <alexisb> abentley has a q I need a tech lead for
[22:59] <babbageclunk> anastasiamac: review plz? (forward ported migration fix) https://github.com/juju/juju/pull/6700
[23:00] <anastasiamac> babbageclunk: u do not need review for simple forward port. u can self-stamp :D
[23:01] <babbageclunk> anastasiamac: ok cool, thanks
[23:01] <anastasiamac> babbageclunk: \o/
[23:21] <alexisb> thumper, babbageclunk thoughts on this: https://bugs.launchpad.net/juju/+bug/1649719
[23:21] <mup> Bug #1649719: 2.1-beta2: memory leak  <oil> <oil-2.0> <juju:New> <https://launchpad.net/bugs/1649719>
[23:22] <thumper> alexisb: hmm...
[23:23] <thumper> did the beta-2 have the fix from axw?
[23:23] <alexisb> that is a good point
[23:23] <alexisb> the one he fwd ported from 1.25 correct?
[23:23]  * alexisb looks
[23:27] <anastasiamac> alexisb: thumper: looks like it has been committed https://github.com/juju/juju/commit/cd41f256af304b883be5176e8562ff7b002e4ace
[23:27] <anastasiamac> alexisb: thumper: dunno if it made beta cut tho...
[23:31] <alexisb> ok that fix is not in beta2
[23:31] <alexisb> so will start with that
[23:31] <thumper> alexisb: I commented on the bug
[23:31] <alexisb> heh, glad you told me
[23:32] <alexisb> was doing that myself, /me refreshes
[23:33] <alexisb> anastasiamac, fyi, the request thumper made in https://launchpad.net/bugs/1649719
[23:33] <mup> Bug #1649719: 2.1-beta2: memory leak  <oil> <oil-2.0> <juju:New> <https://launchpad.net/bugs/1649719>
[23:33] <alexisb> is what we need to send paul
[23:37] <menn0> thumper: great that you found the bug
[23:38] <menn0> the line number thing is still weird though
[23:38] <thumper> yeah
[23:45] <anastasiamac> blahdeblah: ^^
[23:45] <blahdeblah> Oh, that was me Paul, not him Paul?