[04:22] <knaccc> hey y'all, is it correct that I should not be editing 50-cloud-init.yaml to put in a custom nameserver list or search domains, and should instead create a /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg and then...? any help greatly appreciated
[04:26] <knaccc> i'm very confused that when i set up the custom DNS servers (8.8.8.8 etc), "systemd-resolve --status" shows that it's recognizing that for "Link 2 (enp3s0f0)", but not for "Global". and i think therefore for some reason it's ignoring my DNS settings and search domain
[04:26] <knaccc> the server has 2 eth interfaces, and only one is configured.
[13:51] <Odd_Bloke> knaccc: cloud-init will regenerate /etc/netplan/50-cloud-init.yaml on each boot, so yes, you don't want to modify that.  If you're still having the issue, please pastebin all config you have in /etc/netplan and we can try to help out. :)
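For anyone following along, the setup knaccc asked about (and Odd_Bloke's advice not to touch 50-cloud-init.yaml) can be sketched roughly as two files. The filenames are the conventional ones and the interface name comes from knaccc's own paste, but the addresses and search domain are placeholders; everything is written under a temp root so it can be tried safely:

```shell
# Sketch: disable cloud-init's network rendering, then own DNS settings in a
# separate netplan file. Demoed under a temp root; use root="" on a real server.
root="$(mktemp -d)"
mkdir -p "$root/etc/cloud/cloud.cfg.d" "$root/etc/netplan"

# Tell cloud-init to stop regenerating /etc/netplan/50-cloud-init.yaml
cat > "$root/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg" <<'EOF'
network: {config: disabled}
EOF

# Custom netplan config (placeholder addresses and search domain)
cat > "$root/etc/netplan/60-custom.yaml" <<'EOF'
network:
  version: 2
  ethernets:
    enp3s0f0:
      dhcp4: true
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
        search: [example.internal]
EOF
```

On a real system you would follow up with `netplan apply`. Note the disable file only stops future regeneration; an already-rendered 50-cloud-init.yaml stays in place until you remove or replace it.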
[15:32] <blackboxsw> rharper: lucasmoura Odd_Bloke falcojr this would be additional set of commits if we ran new-update-snapshot https://pastebin.ubuntu.com/p/qVMwsrKHKH/
[15:33] <Odd_Bloke> That LGTM, would definitely be nice to get that Chef change in.
[15:34] <rharper> blackboxsw: yeah, +1
[15:53] <blackboxsw> rharper: Odd_Bloke falcojr lucasmoura https://github.com/canonical/cloud-init/pull/411  upload of new-upstream-release for groovy and SRU afterward
[16:09] <Odd_Bloke> blackboxsw: Reviewing now.
[16:15] <blackboxsw> thanks Odd_Bloke focal right after it https://github.com/canonical/cloud-init/pull/412
[16:15] <blackboxsw> same changeset basically
[16:16] <blackboxsw> I have to change the eoan bionic and xenial a bit more
[16:16] <blackboxsw> aaand cloud-init status tim
[16:16] <blackboxsw> aaand cloud-init status time
[16:17] <blackboxsw> #endmeeting
[16:18] <Odd_Bloke> blackboxsw: Cool; one note: we have a lot of stale Build-Depends now: unittest2 isn't required, six isn't required, and we've stopped using pyflakes and pep8 directly (and my local build didn't fail due to the absence of flake8, so I don't think we need to replace it).
[16:18] <Odd_Bloke> (That build was on focal, I'm just bootstrapping a groovy build env now, to test the actual correct release.)
[16:21] <blackboxsw> #startmeeting cloud-init status meeting
[16:21] <meetingology> Meeting started Tue Jun  2 16:21:15 2020 UTC.  The chair is blackboxsw. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
[16:21] <meetingology> Available commands: action commands idea info link nick
[16:21] <blackboxsw> hi folks, time for another cloud-init upstream status meeting.
[16:22] <blackboxsw> we use this meeting to provide a venue for any cloud-init interested parties to keep up to date on current development, release-related info and expedite distributed development where possible.
[16:22] <blackboxsw> this meeting is a welcome place for interruptions, questions, requests and unrelated discussions at any point. so don't be shy :)
[16:23] <blackboxsw> #chair Odd_Bloke smoser rharper
[16:23] <meetingology> Current chairs: Odd_Bloke blackboxsw rharper smoser
[16:23] <blackboxsw> The topics we generally cover in this meeting are the following: Previous Actions, Recent Changes, In-progress Development, Community Charter, Office Hours (~30 mins).
[16:24] <blackboxsw> previous meeting minutes live here (and I just saw I forgot to publish last minutes so I pushed them now)
[16:24] <blackboxsw> #link https://cloud-init.github.io/
[16:24] <blackboxsw> #topic Previous Actions
[16:25] <blackboxsw> nothing actionable brought up in last meeting on 05/19
[16:26] <blackboxsw> Odd_Bloke: ahh we should fix devel with those pkg drops on next upload
[16:26] <blackboxsw> we did drop that for Xenial, Bionic, Eoan, and maybe Focal too?
[16:26] <blackboxsw> so an oversight for groovy
[16:27] <blackboxsw> next topic
[16:27] <blackboxsw> #topic Recent Changes
[16:28] <blackboxsw> the following are commits landed in tip of master found via git log --since 05/19/2020 : https://paste.ubuntu.com/p/QFvgWhjXY9/
[16:28] <Odd_Bloke> blackboxsw: When you say "next upload" are you referring to the upload you're about to do, or the one after that?
[16:28] <blackboxsw> Odd_Bloke: if you'd like we can adjust the current upload so that devel, focal, bionic, xenial, and eoan all drop those stale deps
[16:28] <blackboxsw> I think X, B E have all dropped them
[16:29] <blackboxsw> so maybe I re-do ubuntu/devel PR Odd_Bloke ?
[16:29] <blackboxsw> probably good/better/correct to keep all releases on the same footing.
[16:29] <Odd_Bloke> blackboxsw: I think it's worth doing, we've uploaded without fixing it a few times before, and we've remembered this time around.
[16:30] <blackboxsw> yeah sounds good Odd_Bloke I'll re-do that devel PR (and make sure focal drops it too)
[16:30] <blackboxsw> if needed
[16:30] <Odd_Bloke> And it should just be a case of pushing a new commit to your existing branch.
[16:30] <Odd_Bloke> Thanks!
[16:30] <blackboxsw> +1
[16:32] <blackboxsw> things of note in the recent commits landed.  https://github.com/canonical/cloud-init/pull/358  Matthew Ruffell improved cc_grub_dpkg to be more dynamic in matching disks instead of a hardcoded device list
[16:33] <blackboxsw> thanks Matthew
[16:33] <blackboxsw> and chef_license support https://github.com/canonical/cloud-init/commit/0919bd46bbd1b12158c369569ec1298bb000dd8a
[16:34] <blackboxsw> thanks bipinbachhao  for the config extension there
[16:34] <blackboxsw> #topic  In-progress Development
[16:35] <blackboxsw> a couple of new notables in flight at the moment:
[16:38] <blackboxsw> - falcojr: introduction of feature-flags for cloud-init upstream to give us a toggle to retain original behavior of #include failures on stable downstream releases. https://github.com/canonical/cloud-init/pull/367  . Upstream cloud-init will fail loudly and raise an Exception if someone tries to #include a url which fails. this differs from original cloud-init behavior which was to try our best to get a system up
[16:38] <blackboxsw> and running, even amid not-critical failures
[16:39] <blackboxsw> per the above, if downstreams (distributions) would like to retain a more permissive warn on #include user-data issues, a cloudinit/feature_overrides.py file would need to be introduced in the downstream
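A rough sketch of what that downstream override file might look like, for readers who want the concrete shape. The flag name follows the feature-flag approach in PR #367 but should be treated as illustrative here, not a documented API:

```shell
# Illustrative only: a downstream package ships cloudinit/feature_overrides.py
# to restore the old warn-don't-raise behavior for failed #include fetches.
# The flag name is assumed from PR #367's approach.
root="$(mktemp -d)"   # stand-in for the package's source tree
mkdir -p "$root/cloudinit"
cat > "$root/cloudinit/feature_overrides.py" <<'EOF'
# Downstream override: warn instead of raising on #include fetch failure.
ERROR_ON_USER_DATA_FAILURE = False
EOF
```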
[16:40] <blackboxsw> - Also meena and Odd_Bloke and others have been working toward a refactor of cloudinit.net modules. Dan added a doc PR to capture this approach https://github.com/canonical/cloud-init/pull/391
[16:41] <blackboxsw> beyond that, there are a number of PRs up from lucas on json schema additions for cloudinit/config/cc_* modules to get better validation of #cloud-config user-data
[16:42] <blackboxsw> For ubuntu proper, we have started the StableReleaseUpdate process for cloud-init to publish master into ubuntu/xenial, bionic, eoan and focal releases
[16:43] <blackboxsw> some of these changes will add the opportunity to enable 'new' features on platforms like Azure
[16:43] <blackboxsw> and AWS
[16:43] <blackboxsw> Azure (xenial) will be dropping walinuxagent support
[16:44] <blackboxsw> AWS will now surface a datasource config option apply_full_imds_network_config boolean
[16:45] <blackboxsw> if set true in an Ec2 (AWS) image, network configuration from cloud-init can come entirely from IMDS for every connected NIC. That config will include all secondary IPv4/IPv6 addresses configured for the machine
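As a sketch, that datasource option would be set via a datasource-config drop-in under /etc/cloud/cloud.cfg.d/; the filename below is arbitrary, and the demo writes under a temp root:

```shell
# Enable full IMDS-driven network config on the Ec2 (AWS) datasource.
# Filename is arbitrary; use root="" on a real image.
root="$(mktemp -d)"
mkdir -p "$root/etc/cloud/cloud.cfg.d"
cat > "$root/etc/cloud/cloud.cfg.d/90-aws-network.cfg" <<'EOF'
datasource:
  Ec2:
    apply_full_imds_network_config: true
EOF
```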
[16:46] <blackboxsw> Upstream has  started the Ubuntu SRU process (which generally takes around 10-14 days). We plan to include every commit that has landed in tip of master as of commitish 5f7825e22241423322dbe628de1b00289cf34114
[16:46] <blackboxsw> the bug related to this SRU work is here
[16:46] <blackboxsw> #link https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1881018
[16:47] <blackboxsw> #topic Community Charter
[16:48] <blackboxsw> upstream has signed up to get as much of the json schema coverage as we can for cloudinit/config/cc*py modules since invalid #cloud-config user-data formats tend to have one of the highest incidences of errors (because writing YAML is something humans shouldn't have to do :) )
[16:49] <blackboxsw> so we are chopping away at defining JSON schema for as many cloud config modules as possible . there are still plenty to choose from. Anyone can feel free to grab a JSON schema bug and help us with bettering cloud-init
[16:49] <blackboxsw> bugs are filed for each config module which needs schema definition:
[16:49] <blackboxsw> #link  https://bugs.launchpad.net/cloud-init/?field.tag=bitezise
[16:50] <blackboxsw> a big thanks to lucasmoura for starting to grab a number of these
[16:50] <blackboxsw> #topic Office Hours (next ~30 mins)
[16:50] <blackboxsw> This 'section' of the meeting is a time where a couple of upstream devs will be available in channel for any discussions, questions, bug work or PR reviews.
[16:51] <blackboxsw> In the absence of discussions/topics here we scrub the review queue.
[16:51] <blackboxsw> since we are mid-stream on Ubuntu SRU at the moment, I'll be addressing review comments on some of the functional 'upload' branches we've put together
[16:52] <blackboxsw> and, let's update the topic for next IRC meeting too while we are at it
[16:59] <blackboxsw> Odd_Bloke: just pushed ubuntu/devel dropping python3-six|unittest2|nose
[17:01] <blackboxsw> and just re-pushed ubuntu/focal to drop python3-six
[17:04] <blackboxsw> oops and missed you others. reworking
[17:12] <blackboxsw> ok re-pushed. focal and devel PRs in shape
[17:13] <blackboxsw> dropped the following build-deps: python3-six, python3-unittest2, python3-pep8, python3-nose, python3-pyflakes
[17:20] <Odd_Bloke> blackboxsw: +1 on the ubuntu/devel upload.
[17:21] <blackboxsw> whew, think we got all of the dropped deps between the two of us... thanks!
[17:21] <blackboxsw> Odd_Bloke: thanks focal looks good and sbuilds
[17:21] <blackboxsw> just finished eoan and building now to test
[17:23] <meena> what? me??
[17:24] <blackboxsw> well yes indeedy meena, just trying to keep you highlighted as participating in the cloud-init status meeting :) you've thankfully reviewed, pushed and prodded us to talk about cloudinit.net refactor and how best to address it I think :) credit due ;)
[17:26] <blackboxsw> community notice: upload to Ubuntu groovy of cloud-init master accepted [ubuntu/groovy-proposed] cloud-init 20.2-45-g5f7825e2-0ubuntu1 (Accepted)
[17:30] <Odd_Bloke> blackboxsw: One issue with https://github.com/canonical/cloud-init/pull/412
[17:31] <meena> blackboxsw: i'm just waiting for Odd_Bloke to provide the basic infrastructure so i can start moving code… without that, i have to bug other projects in my … 2 hours of free time per day.
[17:31] <meena> blackboxsw: yesterday, i tried to build an android app on my laptop and gave up after an hour.
[17:35] <blackboxsw> nice review again Odd_Bloke, will reflect that patch to each series, as every other ubuntu/* is also missing enablement of various cloud datasources beyond just Rbx
[17:54] <blackboxsw> Odd_Bloke: rharper so Xenial is interesting for datasource config via dpkg
[17:55] <blackboxsw> We are missing: Hetzner, IBMCloud, Oracle, and  RbxCloud
[17:55] <blackboxsw> one was an oversight on previous SRUs
[17:55] <blackboxsw> but Oracle and IBMCloud, I'm trying to recall if there is a reason we didn't want to surface either of those datasources as configurable on Xenial
[17:56] <blackboxsw> a little warning bell is going off in my head
[17:56] <blackboxsw> Hetzner I thought was 'ok'
[17:56] <blackboxsw> Oracle currently gets detected as OpenStack on Xenial.
[17:57] <rharper> IBMCloud and Oracle are sensitive
[17:57] <rharper> not sure about Hetzner or RbxCloud though
[17:57] <blackboxsw> upstream Oracle datasource is 'good', but I wasn't sure if there was extra baggage associated with *not* backporting that functionality
[17:57] <rharper> blackboxsw: I think you might want to check with CPC on those
[17:58] <meena> Hetzner is also detected as OpenStack on FreeBSD… but… only thru cloud-init itself, not thru ds-identify
[18:03] <meena> (i'm not sure how much of that is my fault having helped a lot with Hetzner and FreeBSD and ds-identify myself)
[18:03] <knaccc> Odd_Bloke thanks for your reply. I managed to fix things in the end, but kinda by cheating. Now my /etc/netplan/50-cloud-init.yaml only contains the IP addresses configuration, and I make the nameservers and search domain apply in the "Global" scope (as reported by systemd-resolve --status) by simply modifying the /etc/resolv.conf file. All configuration survives reboot just fine, and I am no longer
[18:03] <knaccc> scared that resolv.conf will be overwritten because I found a web page that said that "Note: The mode of operation of systemd-resolved is detected automatically, depending on whether /etc/resolv.conf is a symlink to the local stub DNS resolver file or contains server names." Although you said in your message that "cloud-init will regenerate /etc/netplan/50-cloud-init.yaml on each boot, so yes, you don't
[18:03] <knaccc> want to modify that", the OVH instructions directly contradict that and tell me to edit it to add all IP addresses to my interface (see Ubuntu 18.04 section here: https://docs.ovh.com/gb/en/vps/network-ipaliasing-vps/). I'm therefore very confused about why OVH seem to contradict the instructions that are in that config file, and confused as to what other location I should be editing/creating instead
[18:06] <ddstreet> knaccc why do you want to change resolved 'Global' section?
[18:08] <blackboxsw> heh meena not at fault :) . Just need to make sure we move cloud-platforms to a better way of detecting the right datasource when we can.
[18:08] <knaccc> ddstreet if I put the nameservers and search domain into the /etc/netplan/50-cloud-init.yaml file, it gets ignored completely (i.e. although those configurations show up in systemd-resolve --status against that specific "link", the "Global" nameservers and lack of any search domain in that Global section are taking precedence). Therefore I had to configure nameservers and search domain at the resolv.conf
[18:08] <knaccc> level so that it appeared in the Global section, and then suddenly everything worked for the first time
[18:08] <blackboxsw> I should tie off our cloud-init status meeting. Thanks folks for all who've attended
[18:08] <blackboxsw> #endmeeting
[18:08] <meetingology> Meeting ended Tue Jun  2 18:08:56 2020 UTC.
[18:08] <meetingology> Minutes:        http://ubottu.com/meetingology/logs/cloud-init/2020/cloud-init.2020-06-02-16.21.moin.txt
[18:09] <knaccc> oh oops, sorry i didn't realise the meeting was still in progress when I interjected
[18:09] <blackboxsw> knaccc: the meeting is a welcome place for any and all discussions
[18:09] <blackboxsw> all good
[18:09] <blackboxsw> I was just supposed to end it about ~20 mins ago to capture logs
[18:10] <blackboxsw> we were in the "open discussion office hours"  part of the meeting
[18:10] <ddstreet> knaccc i don't know what your exact config is, but you don't need nameservers in the global section for dns to work
[18:11] <knaccc> ddstreet since the OVH config already specified a resolv.conf file, that seemed to be overriding my settings in the 50-cloud-init.yaml file. I should have thought to try deleting the resolv.conf file to see if that would stop the "Global" section from overriding the link settings
[18:13] <ddstreet> yeah with systemd-resolved you don't want to modify the /etc/resolv.conf file
[18:13] <knaccc> ddstreet do you disagree with the web page I found that says "Note: The mode of operation of systemd-resolved is detected automatically, depending on whether /etc/resolv.conf is a symlink to the local stub DNS resolver file or contains server names"?
[18:14] <knaccc> (that's a quote from here https://wiki.archlinux.org/index.php/Systemd-resolved )
[18:14] <ddstreet> knaccc no that's correct, but if you maintain your /etc/resolv.conf yourself, you need to 100% manage it
[18:14] <knaccc> ddstreet ah yes, that's the impression i got. since this is a dedicated server, and that resolv.conf configuration will probably never change, that should be OK, right?
[18:15] <knaccc> all resolv.conf contains is the cloudflare and google dns servers, and my search domain. so those will probably never change, ever
[18:15] <ddstreet> well that depends on what exactly you did, i.e. remove the symlink and create the file yourself, and remove the 127.0.0.53 from it
[18:15] <ddstreet> i.e. you want to bypass resolved entirely
[18:16] <ddstreet> when you do that, it doesn't matter what systemd-resolve --status says, because you aren't using it
[18:17] <knaccc> ah that makes sense. yes the resolv.conf file was already put there by OVH on a fresh install. so i'm guessing they did it that way to make things easier for people who were used to just editing resolv.conf
[18:18] <knaccc> i'll try deleting resolv.conf and then seeing if 50-cloud-init.yaml nameservers and search domain suddenly get picked up
[18:18] <ddstreet> well you need to recreate the symlink to /run/systemd/resolve/stub-resolv.conf
[18:18] <ddstreet> and make sure something has told resolved about your actual nameservers, i.e. networkd in most cases
[18:19] <knaccc> ddstreet ah thanks, yes i see, i'm supposed to do: ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
[18:20] <knaccc> i'll try that, when i'll be reinstalling another server in a few hours
[18:20] <knaccc> although it seems like since i don't have some kind of fluid situation with frequently changing nameservers, maybe i should just stick to simplicity and just do things in resolv.conf
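The mode-of-operation distinction from the Arch wiki quote above can be checked mechanically; here is a small sketch (the stub path is the standard systemd-resolved one, the helper name is made up, and real systems sometimes use a relative symlink target, so treat this as approximate):

```shell
# Which resolv.conf mode is a path in? A symlink to the resolved stub file
# means systemd-resolved manages DNS; a plain file with server entries
# means you manage it yourself.
resolv_mode() {
  if [ -L "$1" ] && [ "$(readlink "$1")" = "/run/systemd/resolve/stub-resolv.conf" ]; then
    echo stub
  else
    echo manual
  fi
}

# Demo against throwaway files rather than the live /etc/resolv.conf:
tmpd=$(mktemp -d)
echo "nameserver 8.8.8.8" > "$tmpd/manual.conf"
ln -s /run/systemd/resolve/stub-resolv.conf "$tmpd/stub.conf"
resolv_mode "$tmpd/manual.conf"   # prints: manual
resolv_mode "$tmpd/stub.conf"     # prints: stub
```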
[18:20] <knaccc> ddstreet if i'm not supposed to be editing the 50-cloud-init.yaml file, could you point me towards what I should be editing please?
[18:22] <knaccc> i wonder if OVH are using cloud-init in a kind of "one-shot" mode, where it's useful for them to just config a dedicated server the first time using cloud-init, but then i can then just edit the 50-cloud-init.yaml file myself from then on, because neither OVH nor any kind of system thing will ever overwrite 50-cloud-init.yaml
[18:27] <ddstreet> i didn't say you shouldn't edit that file
[18:28] <ddstreet> but in general, no you shouldn't, it's created from the cloud data
[18:28] <ddstreet> you should edit your instance config from the cloud provider's controls
[18:34] <blackboxsw> so rharper/Odd_Bloke for this SRU, I'm thinking we hold on IBMCloud, Hetzner and Oracle datasource enablement and tackle that separately once we circle the wagons with regard to Xenial expected behavior
[18:35] <blackboxsw> I'm good with adding RbxCloud as that datasource was just recently added and detecting it falls at the end of the list for ds-identify
[18:37] <rharper> blackboxsw: that sounds good to me
[18:38] <blackboxsw> this is all for xenial and bionic, only adding RbxCloud, xenial and bionic will still lack Oracle and IBMCloud support. Xenial will also lack Hetzner
[18:44] <knaccc> ddstreet thanks for your advice. is cloud-init attempting to connect to OVH on every reboot to see if there are new settings available? as long as i'm confident that this dedicated server will never need an automatic network setting update or any other type of updates from OVH via cloud-init, do you see any problem with me just disabling cloud-init by doing "touch /etc/cloud/cloud-init.disabled"? is that
[18:44] <knaccc> the best way to disable it? it is a bit creepy to me that OVH could just use cloud-init to mess with my dedicated server unexpectedly
[18:57] <rharper> knaccc: cloud-init generally only operates on first boot; on subsequent boots it runs to determine whether it is on the same instance or whether it has been captured and booted on a different platform, in which case it clears data and runs like first boot again
[18:58] <rharper> knaccc: I'm not sure which Datasource OVH uses (I think OpenStack, but not 100% sure); most datasources do not generate network config on *every* boot; some platforms do (Azure for example); if you are only worried about network config changes, you can configure cloud-init to not bother configuring networking at all;
[18:58] <Odd_Bloke> blackboxsw: To be clear, the issue is not that we're missing DSes, it's that our two lists are inconsistent.
[18:58] <blackboxsw> Odd_Bloke: sorry I figured that out later. but we also have missing datasources on Bionic and Xenial
[18:59] <rharper> Odd_Bloke: do you think we miss these due to image builds for certain platforms which might include the DS automatically ?
[18:59] <lucasmoura> blackboxsw, I am trying to write one of our manual tests for the next SRU. I am starting with focal, but I just want to verify something. Is it right that the cloud-init package to be tested is not on focal-proposed yet ?
[18:59] <rharper> it seems like some of them would scream if cloud-init didn't work at all on certain releases
[19:00] <rharper> lucasmoura: yeah, it's not uploaded to the pocket release yet
[19:00] <lucasmoura> rharper, ack
[19:00] <rharper> lucasmoura: instead you can: add-apt-repository -y ppa:cloud-init-dev/proposed
[19:00] <rharper> then apt update && apt install cloud-init
[19:01] <lucasmoura> Oh right, now I remember that we commented on that issue in the daily. The lxc-proposed-snapshot uses the pocket by default, right?
[19:01] <Odd_Bloke> rharper: blackboxsw: Given the uncertainty we have over it, and the lack of bug reports, I think we should proceed with the current set of DSes.
[19:01] <blackboxsw> Odd_Bloke: I've specifically added RbxCloud (because it's newly added to tip) to X, B, E and F
[19:01] <Odd_Bloke> That seems fine to me too. :)
[19:01] <blackboxsw> ok, the others let's leave untouched
[19:01] <knaccc> rharper yes OVH is openstack, but that's for their public cloud. I think they're just suddenly using cloud-init on their private dedicated servers too since it makes it easy for them. Just to clarify, are you saying that cloud-init is probably connecting to OVH on every reboot to ask OVH if any changes need to be made, and OVH is saying "no" each time? or is my server itself deciding not to
[19:01] <knaccc> contact OVH on subsequent boots?
[19:01] <blackboxsw> and RbxCloud is last detected datasource anyway, so no impact to other datasources unless nothing else matches
[19:02] <rharper> knaccc: no; I can't say what they're doing without seeing some logs;  If they are using it on dedicated servers; it may use "offline" datasources like NoCloud which reads from a file, or ConfigDrive which reads from an iso
[19:05] <rharper> knaccc: you can see which datasource was detected  looking at /run/cloud-init/cloud.cfg , would show something like  datasource_list: [ NoCloud, None ]
[19:07] <rharper> knaccc: on platforms where there is a remote datasource (like OpenStack), cloud-init does not reconnect to the metadata service on subsequent boots by default; in some datasources, the only way to determine an instance's unique id is to fetch the values from the platform's metadata server; for those platforms, cloud-init does fetch the metadata on each boot; if the instance-id matches, no further work is done.
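rharper's earlier suggestion to look at /run/cloud-init/cloud.cfg can be run as-is; this sketch degrades to a note on machines where cloud-init never ran:

```shell
# Show which datasource_list ds-identify settled on at boot.
rc=/run/cloud-init/cloud.cfg
found=$(grep datasource_list "$rc" 2>/dev/null \
        || echo "no $rc: ds-identify found nothing, or cloud-init is disabled")
echo "$found"
```

On a booted cloud instance this prints something like `datasource_list: [ ConfigDrive, None ]`, matching what knaccc reports below.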
[19:08] <Odd_Bloke> blackboxsw: +1 on focal.
[19:08] <blackboxsw> lucasmoura: here's what we typically push to a vm under test https://github.com/cloud-init/ubuntu-sru/blob/master/sru-templates/manual/ec2-sru#L35-L40
[19:08] <blackboxsw> which does the setup cloud-init/proposed  PPA
[19:09] <blackboxsw> falcojr: too ^ so it's a good guideline on what to test until we ping the Ubuntu SRU team once my current work-in-progress ubuntu/* branches are uploaded
[19:09] <blackboxsw> thanks Odd_Bloke build-and-pushing ubuntu/focal then
[19:11] <blackboxsw> just eoan bionic and xenial to go if lucasmoura falcojr wanted to look at those too I've captured the same type of new-upstream-snapshot changesets for those that I just pushed into ubuntu/focal  (though I haven't updated the PR doc/context for the drop of debian/control build-deps and the cloud-init.templates RbxCloud addition
[19:11] <blackboxsw> eoan https://github.com/canonical/cloud-init/pull/409
[19:11] <blackboxsw> bionic https://github.com/canonical/cloud-init/pull/409
[19:12] <knaccc> rharper this is my /etc/cloud/cloud.cfg: https://pastebin.com/raw/YwTa8Cpb I think that since there are no datasources listed in that, it means there are no updates being checked for on every reboot?
[19:12] <blackboxsw> xenial https://github.com/canonical/cloud-init/pull/406
[19:13] <blackboxsw> knaccc: you can /should also check /etc/cloud/cloud.cfg.d/90_dpkg.cfg
[19:13] <blackboxsw> that is typically where the cloud image creator defaults that datasource_list
[19:13] <lucasmoura> blackboxsw, thanks I will use this approach then
[19:13] <knaccc> blackboxsw that has just this line, and I don't know what it means: datasource_list: [ NoCloud, ConfigDrive, OpenNebula, DigitalOcean, Azure, AltCloud, OVF, MAAS, GCE, OpenStack, CloudSigma, SmartOS, Bigstep, Scaleway, AliYun, Ec2, CloudStack, Hetzner, IBMCloud, Oracle, Exoscale, RbxCloud, None ]
[19:14] <lucasmoura> And I can help with reviews, no problem. I will start with eoan
[19:15] <blackboxsw> knaccc: so that list means on initial boot, the ds-identify script will try to detect each of those datasources in that specific order. so if cloud-init hadn't already detected the datasource for your image, it'd try to go through that list to find which one matches your platform. sorry I may have muddied the water with your previous discussion, I was just responding to your latest comment
[19:16] <blackboxsw> I'm reading your backlog knaccc to respond more appropriately to your underlying concern (it was about whether cloud-init would try to redetect/reconfigure your instance across boots right ?)
[19:16] <knaccc> blackboxsw yes that's right
[19:17] <knaccc> blackboxsw i just want to make sure that now that OVH has used cloud-init once to auto-configure the freshly reinstalled private dedicated server, that cloud-init will never mess with my config again on reboot
[19:17] <lucasmoura> blackboxsw, I know we can easily extract that from the changelog, but on the steps to reproduce the package, it is missing the 2.header part
[19:18] <knaccc> blackboxsw in which case, do you agree that the best way to stop cloud-init from messing with my system is to just do "touch /etc/cloud/cloud-init.disabled"?
[19:18] <Odd_Bloke> I'm looking at eoan now.
[19:18] <blackboxsw> thanks Odd_Bloke, I'm uploading ubuntu/focal
[19:19] <Odd_Bloke> blackboxsw: Still working through the patches, but I've already posted a Q, FYI.
[19:19] <rharper> knaccc: no, you need to look at the /run/cloud-init/cloud.cfg ;  /etc/cloud/cloud.cfg is the default settings and it enables all of the supported datasources;  at boot time cloud-init uses a systemd generator to detect which platform (or which datasource) is configured, NoCloud looks for files in /var/lib/cloud/seed/nocloud-net/*  or an iso/filesystem label with 'cidata' for example and then if we find a datasource or platform, then we enable
[19:19] <rharper> cloud-init to run (otherwise we don't run at all)
[19:20] <rharper> knaccc: w.r.t disabling cloud-init, touching that file will disable cloud-init
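The kill switch rharper confirms, as a sketch (demoed under a temp root; on the real machine the path is /etc/cloud/cloud-init.disabled):

```shell
# Disable cloud-init entirely: the boot-time systemd generator checks for
# this marker file and skips enabling any cloud-init units when present.
root="$(mktemp -d)"   # use root="" on the actual server
mkdir -p "$root/etc/cloud"
touch "$root/etc/cloud/cloud-init.disabled"
```

Booting with `cloud-init=disabled` on the kernel command line has the same effect, without touching the filesystem.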
[19:20] <knaccc> rharper aha, /run/cloud-init/cloud.cfg says just: datasource_list: [ ConfigDrive, None ]
[19:21] <blackboxsw> and knaccc cloud-init status --long will show you ultimately what cloud-init proper detected as the valid datasource.
[19:21] <blackboxsw> so I'd expect cloud-init status --long to show ConfigDrive in the output
[19:21] <knaccc> yes, that --long command only shows: DataSourceConfigDrive [net,ver=2][source=/dev/nvme0n1p4]
[19:22] <blackboxsw> ok so the configdrive path source in that case was from /dev/nvme0n1p4
[19:25] <knaccc> blackboxsw ok, now i'm really confused. my two SSD drives (as reported by parted) are /dev/nvme0n1 and /dev/nvme1n1, and i have no idea what /dev/nvme0n1p4 is
[19:26] <blackboxsw>  /dev/nvme0n1p4  is just partition 4 on  /dev/nvme0n1
[19:26] <blackboxsw> I think
[19:28] <knaccc> i guess it could be a temporary partition that existed for a while during machine setup
[19:28] <knaccc> and it gets wiped out when the disk is configured
[19:29] <knaccc> ok, i think i'm starting to get comfortable understanding this enough to just disable cloud-init now and not worry about the consequences, you've both been a great help in enabling me to get some kind of mental model of what cloud-init is doing and where, so thank you blackboxsw rharper
[19:30] <knaccc> i was terrified that cloud-init would suddenly overwrite things on reboot and brick the machine, but i'm feeling much better about how to handle this now
[19:31] <knaccc> (by brick the machine, i mean suddenly revert the network config or something else unexpectedly, and cause things to break)
[19:49] <rharper> knaccc: sure, glad we could help
[19:52] <Odd_Bloke> blackboxsw: Did you see my comments on eoan?
[19:52] <Odd_Bloke> Just realised I didn't ping you with them, now I'm done.
[19:53] <Odd_Bloke> Haha, I see in my inbox right now that you have. :p
[19:57] <powersj> FYI cloud-init + azure whitepaper: https://pages.ubuntu.com/rs/066-EOV-335/images/Cloud-init_on_Azure.pdf
[19:58] <powersj> ^ AnhVoMSFT thanks for your help on that
[20:03] <blackboxsw> wow excellent powersj and AnhVoMSFT !
[20:22] <blackboxsw> Odd_Bloke: ok, thanks I see the issue with the quilt patch renderer-do-not-prefer-netplan.patch
[20:23] <blackboxsw> lucasmoura: it'll be the same issue on bionic too. you can see Odd_Bloke's failure by running: quilt push -a; tox -e py3 tests/unittests/test_render_cloudcfg.py
[20:24] <blackboxsw> just so we know the best approach for verifying future upload checks: apply all quilt patches and test the world with: quilt push -a; tox -p auto
[20:24] <lucasmoura> blackboxsw, okay
[20:24] <blackboxsw> sorry gents, I need about 20 mins to fix this on the 3 branches
[20:24] <blackboxsw> Xenial/bionic/eoan
[20:43] <Odd_Bloke> blackboxsw: I caught this by performing a build locally, should we include that as part of the process before you submit the PR?
[20:44] <blackboxsw> Odd_Bloke: you think new-upstream-snapshot should run sbuild?
[20:44] <blackboxsw> or is it PR review requirement
[20:48] <Odd_Bloke> I think the submitter (and soon-to-be uploader) should run it, I don't think that automation should Just Do It (because different people will have different package building setups).
[20:48] <blackboxsw> Odd_Bloke: agreed. I'll add a PR to ubuntu-sru to mention that pre-requisite prior to PR.
[20:49] <Odd_Bloke> Yeah, new-upstream-snapshot could just mention it in that block of commands it suggests running?
[20:50] <blackboxsw> yes that works
[21:36] <tgm4883> Trying to use cloud-init on a centos 7 image in vmware, feels like I'm close, but cloud-init throws a warning that it's unable to get data from the vmware source. When I use the vmware rpctool I can see and decode the base64 data so the VM at least knows about it.
[21:37] <tgm4883> But nothing that I've told it to do actually happens, any pointers on where to look next? The logs only tell me that the data isn't found, but I can see the data when I query it
[21:42] <rharper> tgm4883:   maybe test out a newer cloud-init version?  https://copr.fedorainfracloud.org/coprs/g/cloud-init/cloud-init-dev/  ;   vmware issues have been a challenge for the community to work with.
[21:43] <rharper> tgm4883: there's also some use of a non-upstream datasource from vmware,  https://github.com/vmware/cloud-init-vmware-guestinfo/blob/master/DataSourceVMwareGuestInfo.py   which we don't directly support; I've seen users attempt to use this version (it works for some and not for others)
[21:43] <tgm4883> Will do
[21:44] <tgm4883> yea that's the one I'm using
[21:45] <tgm4883> My *very limited* understanding of how this all works is that these datasources are essentially plugins for cloud-init correct? So maybe if I understood how the data gets from that plugin to cloud-init (or rather, if I could somehow run and debug the plugin directly) it might help
[21:46] <tgm4883> I do see cloud-init trying to use it, but just says that getting data from that class failed
[21:52] <rharper> tgm4883: you can rerun cloud-init manually if you can access the image;  typically we modify the image with a root password and ssh keys, etc when debugging;  once on a system, you can manually run 'cloud-init init --local' and 'cloud-init init'  (those are the first two stages which exercise the datasource);
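rharper's debugging flow as a small script; it only actually runs the stages when cloud-init is installed (and you'd want to be root on the image under test):

```shell
# Re-run the first two cloud-init boot stages by hand to exercise the
# datasource, as suggested above. Guarded so it is a no-op elsewhere.
if command -v cloud-init >/dev/null 2>&1; then
  cloud-init init --local   # local stage: datasource detection, no network
  cloud-init init           # network stage: fetch metadata, apply config
  status=ran
else
  echo "cloud-init not installed here; run the two stages on the VM itself"
  status=skipped
fi
```

Output from both stages also lands in /var/log/cloud-init.log, which is usually where the "failed to get data" detail shows up.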
[22:18] <blackboxsw> eoan finally fixed for tomorrow, or any late birds still around https://github.com/canonical/cloud-init/pull/409
[22:30] <tgm4883> rharper: thanks for the help. I didn't get a chance to try the new version of cloud-init yet, but I did notice that the datasource that is installed in vmware's RPM is older than the one installed via the install script, and the one in the install script at least returns data when running it by itself, so that gives me some more to test with
[23:33] <rharper> tgm4883: ah, ok
[23:34] <rharper> tgm4883: sure,  glad we can help