/srv/irclogs.ubuntu.com/2020/11/23/#juju.txt

=== tamas_erdei is now known as terdei
=== tinwood_ is now known as tinwood
[13:53] <manadart> stickupkid or achilleasa: Can I get a quick review of https://github.com/juju/juju/pull/12351 ?
[13:54] <stickupkid> manadart, me check
[13:57] <manadart> I have to change it.
[13:57] <stickupkid> booo
[14:00] <manadart> stickupkid: It's actually fine. Let me add a comment.
[14:01] <manadart> stickupkid: Done. Carry on.
[14:20] <jam> morning all
[14:39] <Hybrid512> o/
[14:40] <Hybrid512> Hi
[14:41] <Hybrid512> I find juju sometimes quite slow to detect changes (like a machine that has been rebooted ... sometimes it says that an app or a unit is active for minutes before detecting it is down)
[14:41] <Hybrid512> is there a way to speed things up?
[14:41] <Hybrid512> force a model status refresh or something?
[14:52] <jam> Hybrid512, what version of Juju? I thought newer Juju does faster detection (older Juju used a "have you pinged within the last 1 min" check, or something to that effect); new versions use active connections to the controller, though a peer that has stopped sending packets can still look like a connected socket when it is actually dead
[14:52] <Hybrid512> latest
[14:53] <Hybrid512> 2.8.6
[14:54] <Hybrid512> I had a very simple deployment (one mysql-innodb-cluster app on 3 VMs); I just powered off the VMs and it took a few minutes before juju detected the whole deployment as down
[14:54] <Hybrid512> machines appeared down quite fast, but lxd containers, units and even apps ... it took longer
[14:55] <jam> I thought we did liveness pings, but broken socket connections do, I think, have a kernel default of 2.5 min before they are killed
[14:55] <Hybrid512> sometimes, especially in the case of a hard failure, it would be useful to be able to trigger a detection by hand
[14:55] <jam> There isn't really a way to speed it up, as the controller doesn't ping the agents; it is the agents reporting in that tells us they are alive.
[14:56] <jam> Hybrid512, and the problem is that the kernel hasn't decided
[14:56] <jam> they are dead yet.
[14:56] <Hybrid512> couldn't it be forced?
[14:57] <Hybrid512> I mean, I know that my whole setup is down, but I want to make sure the status is clean before restarting things
[14:57] <Hybrid512> in some situations, you would like things to be a bit faster
[14:58] <Hybrid512> it's like the --force when you destroy a model ... when you want to kill a deployment, you just don't care to wait for the system to cleanly remove every relation, unit, ... you just kill everything and that's fine
[14:59] <Hybrid512> so having something similar to just clean a status would be great
[15:01] <Hybrid512> for example, I just did it right now, same deployment (mysql-innodb-cluster on 3 VMs): everything was green, then I shut off the VMs through MaaS all at once
[15:01] <Hybrid512> I did it about a minute ago ... everything is still green in my juju status
[15:01] <Hybrid512> not even a machine is marked as down
[15:02] <Hybrid512> starting to go down just now ... and not all of them
[15:02] <jam> Hybrid512, when you power off a machine, TCP doesn't send a shutdown packet, so we don't see them disconnecting vs just being idle (IIRC)
[15:02] <jam> I *think* we have the ability to force clients to respond to an "are you still alive" packet, but IIRC we were using it mostly from the client to the controller ("is the controller still available"). I'd have to check.
[15:02] <Hybrid512> can it be tweaked?
[15:03] <Hybrid512> controller -> agent checks would be nice, even if triggered by hand only in some situations
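
For context on the kernel-side timeout discussed above: on Linux, TCP keepalive behaviour is governed by three standard sysctls. This is only a sketch of the generic kernel knobs; whether Juju enables keepalives on its API sockets, and with what values, is not confirmed here, and the 2.5 min figure jam mentions may come from Juju's own settings rather than these defaults.

    # read the current keepalive settings (typical Linux defaults: 7200s idle, 75s probe interval, 9 probes)
    sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes
    # tightening them (illustrative values only) makes dead peers get noticed sooner,
    # but only for sockets that have SO_KEEPALIVE enabled
    sudo sysctl -w net.ipv4.tcp_keepalive_time=60 net.ipv4.tcp_keepalive_intvl=10 net.ipv4.tcp_keepalive_probes=3
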
[15:08] <Hybrid512> another question: I already asked this one but never got an answer ... is it possible to map Juju models to MaaS tags or, even better, MaaS resource pools?
[15:09] <Hybrid512> it is quite complicated to clearly see which machines are used by which Juju model within the MaaS UI
[15:09] <Hybrid512> it would be really great to map a Juju model to a MaaS resource pool when deploying a bundle to a MaaS cloud with Juju
[15:09] <jam> Hybrid512, so we do tag each instance with the model uuid, but probably not in a way that MAAS exposes it as such.
[15:10] <jam> I know there have been some requests from MAAS recently for updating the set of tags that we use to interact with it
[15:10] <Hybrid512> it would be really great (and a lot more readable) to create a resource pool for these machines, mapped to the Juju model
[15:10] <Hybrid512> that would make more sense to me than tags (but tags are already a good option)
[15:10] <jam> MAAS resource pools are relatively new, so not something that we're leveraging yet. There is also the question of whether we should be targeting an existing pool, or pulling instances out of a 'generic' pool and putting them into an explicit one.
[15:11] <jam> I thought pools were an admin-level construct, and if Juju asked MAAS for a machine in a pool, then it *couldn't* use a machine that wasn't already labeled as part of that pool.
[15:12] <Hybrid512> well, from my perspective, you should just create a resource pool with the model name and allocate deployed machines inside that pool
[15:12] <Hybrid512> btw, tags could be a good alternative, but as far as I can see, my machines don't have any "visible" tag with the model name within the MaaS UI
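
For anyone trying to correlate the two sides by hand, a rough sketch (assuming a MAAS CLI profile stored in $PROFILE, jq installed, and a placeholder model name "mymodel"; as jam notes above, the model UUID may not actually surface as a MAAS tag, so this only shows where to look on each side):

    # model UUID on the Juju side
    juju show-model mymodel --format=yaml | grep model-uuid
    # tags defined in MAAS, and each machine's tag list
    maas $PROFILE tags read
    maas $PROFILE machines read | jq '.[] | {hostname, tag_names}'
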
[17:13] <stickupkid> achilleasa, if you get a chance https://github.com/juju/juju/pull/12352
