[13:22] <ananke> holmanb: thanks! I'll try to take a closer look today, hope I can finish my other projects
[14:13] <holmanb> ananke: sounds good
[15:59] <akutz> falcojr: I just added an assertion to that PR for set-name and resent the PR. I realized it was just printing the result :(
[16:04] <akutz> By the way falcojr, this may be a silly question as I'm sure if you were aware of any land-mines you would just fix them, but could you possibly take 10m today or sometime soon and poke into the network state/render code and see if anything stands out as what might be a "set-name" issue lying in wait?
[16:26] <falcojr> Thanks akutz, yeah...I noticed we had two similar issues like that recently, so I think it's worth doing an audit and/or adding some unit tests to see if there's other big things we've missed
[16:26] <akutz> Yeah, and since I know (not blaming, just observing) your patch back in July caused the DNS one, and you work in that area not infrequently, I was hoping you might have a good sense of any warning signs.
[16:29] <falcojr> yeah, I think until recently there hasn't been much v2 config usage outside of direct netplan passthrough which is why I think we're uncovering some of these issues now
[16:30] <akutz> I *think* what's likely a good indicator is anywhere a device id/name is used. If there's a chance that's been updated, or not updated, to reflect the set-name value, then anything else using an id/name may have an incorrect one. I think it's a disconnect between what key is in the network state and the key in config.ethernets. After the NS is parsed, its keys are the ethernets[*].id values OR set-name if it was specified. But since config.ethernets is still
[16:30] <akutz>  referenced quite a bit, it still uses the old id/key. Both issues are nearly identical, but slightly different. One was iterating over config.ethernets and using that key to access the NS devices, and the other was the opposite.
[16:31] <akutz> One quick hack would be to simply update the keys in config.ethernets to match their set-name values if those are present before doing anything else.
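The "quick hack" above can be sketched as a small normalization pass. This is an illustration only, not cloud-init's actual code: the function name and the assumed shape of the v2 config dict (a top-level "ethernets" mapping whose entries may carry a "set-name" key) are assumptions for the example.

```python
def normalize_ethernet_keys(config: dict) -> dict:
    """Hypothetical sketch: re-key config['ethernets'] by each device's
    set-name (when present) so later lookups by key match the network
    state, which uses set-name as the device id after parsing."""
    ethernets = config.get("ethernets", {})
    normalized = {}
    for key, dev_cfg in ethernets.items():
        # Prefer the set-name value as the canonical key, if specified;
        # otherwise keep the original ethernets[*] key.
        normalized[dev_cfg.get("set-name", key)] = dev_cfg
    config["ethernets"] = normalized
    return config
```

Running this once, before anything else touches the config, would make both iteration directions (config → network state and network state → config) agree on the same key.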
[16:38] <falcojr> good advice
[18:33] <akutz> Hey falcojr, I'm seeing weirdness on the tests for https://github.com/canonical/cloud-init/pull/1100. The 3.5 tests seem to fail randomly between xenial and py3, but only for Python 3.5. It seems to be based on the rendered content for networkd. I'm guessing the rendered text can be different? Not the keys in the dict, but the actual networkd file line/section ordering?
[18:33] <akutz> I can simplify the assertion so this doesn't fail, but I only want to do so if we confirm the rendered output isn't deterministic. 
[18:35] <akutz> It only seems to fail for Python 3.5 though. And it's random at that. 
[18:41] <falcojr> akutz: looks like a symptom of iterating over a dictionary
[18:41] <falcojr> 3.6+ order is preserved (as an implementation detail). 3.5 and below the order can be random
[18:41] <akutz> I guess. I'm fixing it in the test though by sorting the output of both before comparison.
[18:42] <falcojr> sounds good
[18:49] <akutz> Okay, it's pushed and running. Hopefully this clears it up. Not to be pushy, but do we have an idea when 21.5 might drop? I ask because I want to be able to indicate to the Kubernetes image builder community when they can stop downgrading and pinning Cloud-Init to 21.1. Since Cluster API uses "set-name," these latest bugs have hit those images hard.
[18:50] <akutz> (I know 21.4 *just* dropped, but I wasn't sure if perhaps there was a tentative ETA on the next release date)
[18:52] <akutz> Ironically it was folks from MSFT who discovered the latest bug. They based their VMware DS for Cloudbase-Init off of my original DS, and I think between that and Azure, they've been doing a lot more testing around Cloud-Init. 
[19:06] <akutz> Oh for the love of pete, my block comment got flagged in 3.6 by pylint for not having any effect :) Okay, fixing that...
[19:50] <falcojr> akutz: we do quarterly releases. 21.4 is the 4th release of 2021, so next will be 22.1 in 3-ish months from now
[19:50] <akutz> Ack, thank you.