[02:47] wallyworld: should we fall back to released anytime we'd use devel - even if the user has specified devel explicitly?
[02:48] babbageclunk: yeah, IMO. devel just indicates the level of risk the user wants.
[02:48] ok
[02:48] we need to behave linke snaps
[02:48] *like
[02:53] wallyworld: I'm not sure how snaps work in this case
[02:56] babbageclunk: you need to read up on them then :-)
[02:57] wallyworld: trying - any pointers? The closest I can find is that you can publish to multiple channels when publishing - but that's not the case here is it?
[02:58] no, this isn't about publishing but the user specifying the level of risk they are willing to accept
[02:58] i can try and find some snappy docs
[02:59] Right, but it seems like you could publish to beta without publishing to edge, and then someone who was on edge wouldn't see that version?
[02:59] I'm reading the docs on snapcraft.io
[03:00] babbageclunk: they would see a higher version in beta
[03:00] if published there
[03:01] whoa, there are channel branches?
[03:01] babbageclunk: you always get the highest version - the channel determines how far down you look to see what's available
[03:01] yeah
[03:01] you can create a short lived snap (30 days default) from a branch
[03:01] a feature branch
[03:02] if you want longer you need to pay
[03:03] https://forum.snapcraft.io/t/channel-terminology-and-policy/551
[03:09] Ok, I think I get the fallback between risk levels (modulo the talk about closing, which I guess doesn't matter here).
[03:10] babbageclunk: correct, closing is not relevant here
[03:10] wallyworld: what about the other stream levels (proposed, testing)
[03:11] ignore testing. we don't really use proposed in practice i don't think, but we could, so probably should support it
[03:11] I think it makes sense for all of them to fall back to released, right?
[03:11] it's almost like testing < develop < proposed < released
[03:12] But if they all had released after them that would be right? Or should I only have the fallback for develop?
[03:14] wallyworld: ^
[03:14] babbageclunk: sorry, on call, so distracted
[03:14] the above order would be analogous to snap channels
[03:20] wallyworld: ok thanks!
[04:24] babbageclunk: sorry, out of meeting now if you still had questions
[04:26] wallyworld: no, I think I understand - I've made everything follow that fallback, just going through the places that call PreferredStream now to make them do the fallback if they need to. Thanks!
[04:29] babbageclunk: ok, sgtm
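The fallback order agreed above can be illustrated with the Go sketch below. This is only an illustration of the ordering discussed in the channel (testing, then devel, then proposed, then released, each falling back through the lower-risk streams, like snap channel risk levels); the function and variable names are made up for the example and are not Juju's actual PreferredStream API.

```go
package main

import "fmt"

// Risk order from most to least risky, as discussed above:
// testing < devel < proposed < released.
var streamOrder = []string{"testing", "devel", "proposed", "released"}

// preferredStreams returns the requested stream followed by every
// lower-risk stream, ending at "released", so a search for devel agents
// still finds a released agent when nothing newer has been published.
func preferredStreams(requested string) []string {
	for i, s := range streamOrder {
		if s == requested {
			return streamOrder[i:]
		}
	}
	// An unrecognised stream name still gets a usable fallback.
	return []string{requested, "released"}
}

func main() {
	fmt.Println(preferredStreams("devel"))    // [devel proposed released]
	fmt.Println(preferredStreams("released")) // [released]
}
```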
[04:57] wallyworld: still at the guessing phase, but I have a suspicion that it's nothing to do with skews now. the logs say that the leadership worker is failing with "not found" and "state changing too quickly; try again soon"
[04:57] wallyworld: from looking at the code, that doesn't make sense... unless there's 2 leadership workers
[04:57] interesting
[04:57] wallyworld: we run a leadership worker *per state*
[04:58] wallyworld: so one might be expiring stuff while the other is extending
[04:58] oh dear
[04:58] wallyworld: I think the best thing to do would be to make it so we run one per model
[05:01] axw: that sounds logical, one per state sounds crazy
[05:03] axw: although i think there's an aspect where maybe there's a controller level doc, rather than per model
[05:03] the clock#singular-controller lease doc
[05:03] wallyworld: I'm just talking about the applications one atm
[05:03] ah ok
[05:04] in that case sounds good to do
[05:04] wallyworld: I think we'd want a per-machine one for singular
[05:04] but I'll leave that for now
[05:06] per controller i think?
[05:13] wallyworld: each controller machine needs to run the worker for failover, that's how the singular workers are implemented
[05:13] they'll each try to claim, but only one will succeed
[05:14] * axw bbs
[05:21] wallyworld: https://github.com/juju/names/pull/84
[05:26] thumper: looking
[05:28] thumper: lgtm
=== frankban|afk is now known as frankban
=== hazmat_ is now known as hazmat
=== akhavr1 is now known as akhavr
=== frankban is now known as frankban|afk
[09:47] wallyworld: I was wrong, when we expire the lease, the other manager will refresh its cache when the txn fails. so :/
[09:47] still shit, but probably not shit enough to continue trying to untangle everything
[09:50] not sure why it's getting "state changing too quickly" errors though... that refresh should fix it
=== frankban|afk is now known as frankban
[16:42] curl https://s3.amazonaws.com//aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
[16:42] chmod +x ./awslogs-agent-setup.py
[16:43] jmeta-u./awslogs-agent-setup.py -n -r us-east-1 -c awslogs.confw
[16:43] w
[17:01] whoops
[17:01] somehow I found the shortcut to tell terminator to broadcast to all terminal windows
[17:01] :)
[17:02] glad it wasn't a secret
=== frankban is now known as frankban|afk
[21:20] wallyworld: I think we should skip the release call again
[21:20] we know what we are doing
[21:32] ok
[21:36] wallyworld: https://github.com/juju/juju/pull/7943
[21:36] ok
[21:38] wallyworld: I'm confused, got a minute?
[21:38] sure
[21:38] 1:1 HO
[21:52] thumper: the hamster died, but we had finished
[21:52] yeah
[22:41] wallyworld: I've just clicked that environs.Tools.Tools already does fallback between custom source and public source.
[22:41] wallyworld: implementation wise it's simpler for me if the fallback goes:
[22:42] custom/devel, public/devel, custom/released, public/released. Is that crazy or alright?
[22:44] babbageclunk: i'd have to re-read the code - i didn't think custom came into it for agent binaries, just image metadata
[22:45] for agent binaries, i thought we just had agent-stream to tweak things
[22:46] sorry, thinko'd the function name: environs/tools.FindTools
[22:47] It uses GetMetadataSources which adds one for agent-metadata-url and one for the default public source.
[22:47] we may use custom if agent binaries are sourced from metadata supplied at bootstrap
[22:48] but then it becomes custom > testing > develop > proposed > released
[22:49] No, custom has streams in it.
[22:50] So custom/public (and also cloud sources) are orthogonal to stream.
[23:35] babbageclunk: joining us? we miss you
[23:35] :-)
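The difference between the two search orders debated in the last exchange is just which loop nests inside which when walking the simplestreams sources. A rough Go sketch of the two orderings, using hypothetical names rather than the real environs/tools.FindTools or GetMetadataSources signatures:

```go
package main

import "fmt"

// streamMajor tries every source at the requested risk level before
// dropping to the next (lower-risk) stream:
// custom/devel, public/devel, custom/released, public/released.
func streamMajor(sources, streams []string) []string {
	var order []string
	for _, stream := range streams {
		for _, source := range sources {
			order = append(order, source+"/"+stream)
		}
	}
	return order
}

// sourceMajor exhausts one source across all streams before moving on to
// the next source: custom/devel, custom/released, public/devel, public/released.
func sourceMajor(sources, streams []string) []string {
	var order []string
	for _, source := range sources {
		for _, stream := range streams {
			order = append(order, source+"/"+stream)
		}
	}
	return order
}

func main() {
	sources := []string{"custom", "public"}
	streams := []string{"devel", "released"}
	fmt.Println(streamMajor(sources, streams))
	fmt.Println(sourceMajor(sources, streams))
}
```

Because custom and public sources can each carry every stream, the stream-major ordering proposed above prefers anything at the requested risk level over a more stable agent from a preferred source, whereas the source-major ordering exhausts the custom source entirely before falling back to the public one.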