=== tamas_erdei is now known as terdei
=== tinwood_ is now known as tinwood
[13:53] stickupkid or achilleasa: Can I get a quick review of https://github.com/juju/juju/pull/12351 ?
[13:54] manadart, me check
[13:57] I have to change it.
[13:57] booo
[14:00] stickupkid: It's actually fine. Let me add a comment.
[14:01] stickupkid: Done. Carry on.
[14:20] morning all
[14:39] o/
[14:40] Hi
[14:41] I find Juju sometimes quite slow to detect changes (e.g. after a machine reboot, it sometimes reports an app or a unit as active for minutes before detecting it is down)
[14:41] is there a way to speed things up?
[14:41] force a model status refresh or something?
[14:52] Hybrid512, what version of Juju? I thought that newer Juju does faster detection (older Juju used a "have you pinged within the last minute" check, or something to that effect); newer ones use active connections to the controller, though a socket that is actually dead can still look connected
[14:52] latest
[14:53] 2.8.6
[14:54] I had a very simple deployment (one mysql-innodb-cluster app on 3 VMs); I just powered off the VMs and it took a few minutes before Juju detected the whole deployment as down
[14:54] machines appeared down quite fast, but for LXD containers, units and even apps it took longer
[14:55] I thought we did liveness pings, but I think broken socket connections have a kernel default of about 2.5 min before they are killed
[14:55] sometimes, especially in the case of a hard failure, it would be useful to be able to trigger a detection by hand
[14:55] There isn't really a way to speed it up, as the controller doesn't ping the agents; it is the agents reporting in that tells us they are alive.
[14:56] Hybrid512, and the problem is that the kernel hasn't decided they are dead yet.
[14:56] couldn't it be forced?
[14:57] I mean, I know that my whole setup is down, but I want to make sure the status is clean before restarting things
[14:57] in some situations, you would like things to be a bit faster
[14:58] it's like --force when you destroy a model ... when you want to kill a deployment, you just don't care to wait for the system to cleanly remove every relation, unit, etc. ... you just kill everything and that's fine
[14:59] so having something similar to just clean up a status would be great
[15:01] for example, I just did it right now, same deployment (mysql-innodb-cluster on 3 VMs): everything was green, then I shut off all the VMs through MaaS at once
[15:01] I did it about a minute ago ... everything is still green in my juju status
[15:01] not even a machine is marked as down
[15:02] starting to go down just now ... and not all of them
[15:02] Hybrid512, when you power off a machine, TCP doesn't send a shutdown packet, so we can't tell a disconnect apart from the peer just being idle (IIRC)
[15:02] I *think* we have the ability to force clients to respond to an "are you still alive" packet, but IIRC we were using it mostly from the client to the controller ("is the controller still available"). I'd have to check.
[15:02] can it be tweaked?
[15:03] controller -> agent checks would be nice, even if only triggered forcefully by hand in some situations
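For context on the kernel behaviour discussed above: a silently powered-off peer is only noticed when TCP keepalive probes (or pending writes) time out, and the relevant knobs are per-socket options. Below is a minimal sketch of shortening that detection window, assuming Linux and Python's stdlib socket module; the helper name is hypothetical and this is not Juju's actual code.

```python
import socket

def dial_with_keepalive(host, port, idle=30, interval=10, probes=3):
    # Hypothetical helper: open a TCP connection whose dead-peer
    # detection window is roughly idle + interval * probes seconds,
    # instead of the Linux defaults (2 hours of idle time before the
    # first probe), after which reads/writes fail with ETIMEDOUT.
    sock = socket.create_connection((host, port))
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific option names; other platforms differ.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)       # idle seconds before first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)      # unanswered probes before failure
    return sock
```

With the defaults above, a peer that vanishes without a FIN or RST would be detected in roughly a minute rather than hours, at the cost of a little extra background traffic per connection.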
[15:08] another question: I already asked this one but never got an answer ... is it possible to map Juju models to MaaS tags or, even better, MaaS resource pools?
[15:09] it is quite complicated to see clearly which machines are used by which Juju model within the MaaS UI
[15:09] it would be really great to map a Juju model to a MaaS resource pool when deploying a bundle to a MaaS cloud with Juju
[15:09] Hybrid512, so we do tag each instance with the model uuid, but probably not in a way that MAAS exposes as such.
[15:10] I know there have been some requests from MAAS recently about updating the set of tags that we use to interact with it
[15:10] it would be really great (and a lot more readable) to create a resource pool for these machines, mapped to the Juju model
[15:10] that would make more sense than tags to me (but tags are already a good option)
[15:10] MAAS resource pools are relatively new, so not something that we're leveraging yet. There is also the question of whether we should be targeting an existing pool, or pulling instances out of a 'generic' pool and putting them into an explicit one.
[15:11] I thought pools were an admin-level construct, and if Juju asked MAAS for a machine in a pool, then it *couldn't* use a machine that wasn't already labelled as part of that pool.
[15:12] well, from my perspective, you should just create a resource pool with the model name and allocate deployed machines into that pool
[15:12] btw, tags could be a good alternative, but as far as I can see, my machines don't have any "visible" tag with the model name within the MaaS UI
[17:13] achilleasa, if you get a chance https://github.com/juju/juju/pull/12352
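Pending any built-in support, the model-to-pool mapping discussed above could be scripted against the MAAS API directly. A minimal sketch, assuming MAAS 2.5+ (which introduced resource pools), the requests and requests-oauthlib packages, and a MAAS API key in the usual consumer:token:secret form; the function names and environment variables are my own, and the endpoint paths and the 'pool' parameter on machine update are my reading of the MAAS 2.x API rather than anything confirmed in the discussion.

```python
import os
import requests
from requests_oauthlib import OAuth1

# Assumed environment, e.g. MAAS_URL=http://maas:5240/MAAS/api/2.0
MAAS_URL = os.environ["MAAS_URL"]
consumer, token, secret = os.environ["MAAS_API_KEY"].split(":")
# MAAS uses OAuth 1.0a with the PLAINTEXT signature and no client secret.
AUTH = OAuth1(consumer, "", token, secret, signature_method="PLAINTEXT")

def create_pool(name, description=""):
    # Create a resource pool, e.g. one named after a Juju model.
    r = requests.post(f"{MAAS_URL}/resourcepools/", auth=AUTH,
                      data={"name": name, "description": description})
    r.raise_for_status()
    return r.json()

def move_to_pool(system_id, pool_name):
    # Reassign a machine to the pool; 'pool' as an update parameter is
    # an assumption worth checking against your MAAS version's docs.
    r = requests.put(f"{MAAS_URL}/machines/{system_id}/", auth=AUTH,
                     data={"pool": pool_name})
    r.raise_for_status()
    return r.json()
```

The machine system IDs to feed into move_to_pool could come from the instance IDs reported by `juju status --format=json` for the model in question.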