[17:14] <sbraz> falcojr: i meant that "addr" gets *checked* before by something else, sorry, i forgot the word in my comment :P
[17:15] <sbraz> falcojr: you're james falcon, right? :P
[17:20] <falcojr> sbraz: Yes, I am. And thanks, that makes sense
[17:21] <minimal> holmanb: PR raised with upstream ifupdown-ng for "onlink", also raised patch to existing Alpine ifupdown-ng package
[17:22] <minimal> are there any RHEL/Fedora experts in the channel?
[17:54] <sbraz> minimal: i'm no expert but maybe i can ask around if necessary, i know someone from cloudlinux
[17:55] <sbraz> minimal: what are the hopes that debian switches to ifupdown-ng at one point?
[17:57] <minimal> sbraz: no idea if Debian would switch from ifupdown to ifupdown-ng or whether they'd go in the direction of systemd-networkd instead
[17:57] <minimal> sbraz: RHEL/Fedora question is regarding an aspect of their handling of CA certs, haven't found the info I'm looking for online so far
[17:59] <sbraz> we've had numerous issues with ifupdown in the following case: ipv4 dhcp + ipv6 static; at boot a dhcp request is issued, the network comes up, something sends an IPv6 RA (or something, i'm no ipv6 expert), ifupdown either can't add the static ipv6 address or the gateway route is a duplicate, networking.service fails; the dhcp client gets killed; $LEASE hours later, the IPv4 lease expires, the server is unreachable; reboot
[18:00] <sbraz> the server → it works for $LEASE more hours before failing again :P
[18:00] <sbraz> they fixed the route problem in debian 11 by using "ip ro replace default" instead of "ip ro add default"
[18:01] <sbraz> and i think they also run "ip addr flush" now so the address can't be duplicate any more
[18:02] <sbraz> but still, the fact that ifupdown can fail (for a number of reasons) and that the dhcp-obtained ipv4 stays around while the dhcp client is dead… it's not ideal
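(for context, the setup sbraz describes maps to an /etc/network/interfaces stanza roughly like the following — interface name and addresses are made up for illustration; the `accept_ra 0` line is what debian's aa9b12d commit started defaulting when a gateway is set)

```
auto eth0
iface eth0 inet dhcp

iface eth0 inet6 static
    address 2001:db8::10/64
    gateway 2001:db8::1
    # prevent a router advertisement from racing the static config
    accept_ra 0
```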
[18:02] <minimal> sbraz: how are you specifying the network config? via ConfigDrive v2? via OpenStack metadata server? what's the exact network config provided to cloud-init in that situation?
[18:03] <sbraz> minimal: via configdrivev2, and with debian 11 we don't have issues any more because the generated e/n/i file works perfectly
[18:03] <sbraz> but with debian 10 we still had issues so we added a bunch of pre-up scripts
[18:04] <minimal> hmm, wonder what changed between 10 and 11
[18:04] <minimal> am interested to see any example of your ConfigDrive network file as the only one I've seen to date (elsewhere) was a bit "wacky"
[18:15] <sbraz> minimal: i know one of the things that helped was https://salsa.debian.org/debian/ifupdown/-/commit/aa9b12d76a8cc3fb919fc64d624a45f3eca86444 but it's been committed a while back
[18:15] -ubottu:#cloud-init- Commit aa9b12d in debian/ifupdown "Default accept_ra to 0 when gateway is set (Closes: #775660)"
[18:16] <sbraz> but sometimes i think RAs were still received before ifupdown could start
[18:16] <sbraz> https://salsa.debian.org/debian/ifupdown/-/commit/3fb794b2dc1f16da09409522826142ef5e226ddc helped fix the problem
[18:16] -ubottu:#cloud-init- Commit 3fb794b in debian/ifupdown "Fix race condition adding a static IPv6 default route on RA networks"
[18:17] <sbraz> and https://salsa.debian.org/debian/ifupdown/-/commit/fe9fb5882ab5d238122d986454b0d156477bc8d0 is helpful too as it flushes leftover ips
[18:17] -ubottu:#cloud-init- Commit fe9fb58 in debian/ifupdown "Flush automatically assigned addresses on ifup when accept_ra=0."
[18:20] <sbraz> i guess the one thing that changed from 10 to 11 is the route replace, the rest was here before
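(taken together, the three debian commits above boil down to roughly these commands — an illustrative sketch only, needs root, interface and addresses are made up)

```
# don't let router advertisements race the static config (commit aa9b12d)
sysctl -w net.ipv6.conf.eth0.accept_ra=0
# flush leftover automatically assigned addresses (commit fe9fb58)
ip -6 addr flush dev eth0 dynamic
# replace instead of add, so a duplicate default route can't fail ifup (commit 3fb794b)
ip -6 route replace default via 2001:db8::1 dev eth0
```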
[18:56] <minimal> yeah we were chatting on here before Xmas about the handling of DHCPv6 vs static v6 vs SLAAC etc with regard to network config v2
[19:56] <esv> hey folks, does cloud-init treat instances restored from a recovery vault as a cloned/new instance? files under /var/lib/cloud/sem and /var/lib/cloud/instance/sem do not seem to be updated, however per-instance scripts are being run again
[21:43] <blackboxsw> esv on most platforms when a new instance is launched from a cloned image, the platform/cloud itself provides a new instance-id in meta-data or in DMI information for the VM. When cloud-init sees a new instance-id that is different than /var/lib/cloud/data/previous-instance-id it will log something like: "Update datasource metadata and network config due to events: boot-new-instance"
[21:43] <blackboxsw> you can check quickly in logs if instance id changed with 'grep "previous iid found to be" /var/log/cloud-init.log'
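(the check blackboxsw describes is essentially a string comparison between the datasource's current instance-id and the one saved in /var/lib/cloud/data/previous-instance-id on the prior boot; a hypothetical sketch — the function name and IDs below are made up, the real logic lives inside cloud-init)

```shell
# Hypothetical helper mirroring cloud-init's new-instance check:
# it treats the boot as "new instance" when the IDs differ.
is_new_instance() {
    current="$1"
    previous="$2"
    [ "$current" != "$previous" ]
}

if is_new_instance "i-0abc123" "i-0def456"; then
    echo "boot-new-instance: per-instance scripts will run again"
fi
```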
[21:52] <esv> right after posting this i found a hit on what azure does, and indeed the doc says it would change; my confusion comes from the fact that the /var/lib/cloud/instance/sem files have the same inode timestamps as the original creation date.
[21:53] <esv> guess I need to wreck a couple of my lab servers and see what happens
[21:53] <esv> thnx for the tip, will search on that.