[16:10] <blackboxsw> holmanb: falcojr are there any other compelling branches we'd like to get into Jammy from upstream? I'm thinking about trying to queue up an upload so we can start getting broader exposure and testing to the new LXD datasource as well as GCP being detected in init-local timeframe instead of init-network
[16:12] <blackboxsw> holmanb: if we can get your unittest refactor in that'd be good. also falcojr what do you think about esposem's branch https://github.com/canonical/cloud-init/pull/1124
[16:46] <minimal> I'm assuming the EC2 data source does not currently support AWS' recently announced IPv6 endpoint for IMDS. Is anyone working on this?
[16:53] <blackboxsw> minimal: we have it on the schedule to implement next in our queue. Mr. holmanb has graciously agreed to drive this feature. I expect we'll have it in Jan
[16:56] <blackboxsw> I presume you're referring to nitro systems with ipv6-only IMDS support? https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances
[16:57] <blackboxsw> we've done an initial spike to scope the impact to cloud-init , but haven't coded up the solution quite yet.
[17:03] <minimal> blackboxsw: yup, that's it, have been waiting for a very long time for EC2 to support IPv6-only
[17:04] <blackboxsw> +1 I had originally scheduled it thinking we'd get there by Dec 15th. We'll make sure we keep this prioritized so we can get it out; as it is, it's "next" in our queue
[17:04] <minimal> blackboxsw: I did find it "funny" to see that the recent change to the Vultr DS for supporting their IPv6-only VMs added a bringup of IPv4 to reach the IPv4-only metadata server - I guess IPv6-only means something slightly different to Vultr :-)
[17:08] <blackboxsw> yes, interesting that the "fix" is to set up IPv4 routes to the metadata server instead of standing up an IPv6-speaking IMDS. Hopefully that real support will come
[17:10] <minimal> blackboxsw: well I guess Vultr's "problem" is that (from memory) any IPv6 addresses given to machines are globally routable addresses, rather than AWS doing the "right" thing of using a ULA (non-globally-routable) prefix
[17:10] <minimal> so it wouldn't make sense for Vultr to have their metadata service on a global IPv6 address
[17:12] <blackboxsw> +1 my misread/misunderstanding of the dynamic there.
[17:15] <minimal> I seem to remember someone opened an issue regarding the OpenStack DS a while ago about IPv6 metadata and, again, the issue was that there was no "agreed" IPv6 addressing decision for OpenStack
[17:15] <minimal> maybe 2022 will be the year of IPv6 :-) (famous last words)
[17:46] <minimal> blackboxsw: actually a thought: how would the EC2 DS know to use IPv6 only? I guess it would have to try both v4 and v6 and use whichever responds (and assuming it always responds on IPv4, as it's nothing to do with an IPv6-only VPC, then would the IPv6 IMDS ever be used?)
[18:07] <blackboxsw> minimal: yep the plan is to use "Happy Eyeballs" https://en.wikipedia.org/wiki/Happy_Eyeballs
[18:07] <blackboxsw> basically determine which stack is first responder.
[18:08] <blackboxsw> and prefer that during boot config operations.
[18:10] <minimal> blackboxsw: right. I spent time earlier today trying to see if IPv4 can be completely disabled on Linux in general, and that doesn't seem to be the case, whereas IPv6 (the module at least) has always had a cmdline/sysctl "big switch". The DS connection to the metadata server happens so early in boot that firewall rules wouldn't be in place yet etc...
[18:31] <holmanb> @minimal: I've tested the AWS ipv6 imds, but haven't gotten any further than scoping out the Python tooling available. That's next in my queue :)
[18:37] <holmanb> minimal:  if there isn't an ipv4 switch builtin you could always blackhole ipv4 traffic by dropping ipv4 routes from the routing table - not sure exactly what you're trying to do but that's another option to chew on
[18:38] <holmanb> @blackboxsw - fyi at this point #1126 is pending your approval
[19:02] <blackboxsw> holmanb: thanks checking now.
[20:03] <minimal> holmanb: was intending to disable IPv4 in general. Re dropping routes, I'm not sure the iptables service runs before cloud-init-local, and so a drop rule might not be in place before c-i tries to bring up IPv4 for metadata server access
[20:11] <blackboxsw> https://github.com/canonical/cloud-init/pull/1126 landed for unit test refactor
[20:12] <blackboxsw> that'll inevitably result in PR merge conflicts for any active PR
[20:12] <blackboxsw> like holmanb's https://github.com/canonical/cloud-init/pull/1101
[20:19] <holmanb> @blackboxsw - Thanks!
[21:45] <blackboxsw> Just finished testing latest cloud-init from tip against the LXD datasource in --channel=latest/edge: we properly support the cloud-init.* config keys over user.(user-data,network-config,meta-data), ready for whenever that gets released as stable 4.20 or 4.21
[21:46] <blackboxsw> I might draw up an integration test that'll install lxd from latest/edge, launch a container, set up those cloud-init config settings, and ensure DataSourceLXD acts appropriately
[21:47] <blackboxsw> also I think I'm going to add LXD API 1.0/devices support for cloud-init to pave the way for hot-plug on lxd.
[22:01] <blackboxsw> holmanb: +1 on your sentiments https://github.com/canonical/cloud-init/pull/1101#issuecomment-985870274
[22:03] <holmanb> @blackboxsw: sounds good, I expect the PR to be updated before my eod (~1hr)
[22:04] <holmanb> @blackboxsw: on a somewhat related note, I found this today: https://github.com/canonical/cloud-init/pull/1130
[22:06] <blackboxsw> +1 nice, will review that. I'll also put up a PR for release into Jammy ubuntu/devel that you can take a look at if you want. If we +1 it, I can drop a cloud-init release into tip, which would get us better mechanisms to test both the LXD datasource and GCP.
[22:06] <blackboxsw> as it'd be included in jammy cloudimages in a day or two
[22:07] <holmanb> @blackboxsw: excellent, sounds good to me
[22:08] <holmanb> @blackboxsw: pyright type checking indicates that there are likely other sleeping dragons in the annotation codepath, but I hit that one by accident when testing, so I couldn't ignore it :)
[22:15] <blackboxsw> holmanb: I think we might also want to sort out cloud-init proper better handling that invalid/empty cloud-config too https://github.com/canonical/cloud-init/pull/1130#pullrequestreview-823160074
[22:19] <holmanb> @blackboxsw: +1 I agree
[22:19] <blackboxsw> but I am on the fence about whether that aspect could be a separate PR though
[22:19] <blackboxsw> your call about inclusion or separation
[22:22] <holmanb> Since there is a chance that we might want to do the error handling at some common point in the codepath, I think it makes sense to hold off on the current PR. That way any rework that needs to happen can happen outside of our git history.
[22:23] <holmanb> I don't feel strongly about it, but that's the direction I lean
[22:28] <blackboxsw> +1 works for me
[23:08] <blackboxsw> holmanb: I expect you're probably gone, but if not, https://github.com/canonical/cloud-init/pull/1131 is up for an upstream release PR into Jammy if you or James get a chance to look at it today or Monday morning