[09:12] <qbd> hello Odd_Bloke, just some feedback here that removing the 'groups' for that borg user worked and I can now login
[09:18] <qbd> what would be the recommended way to assemble a two disk mdadm raid0 array using cloud-init ?
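(For reference, a hedged sketch of one way to do this: cloud-init has no dedicated mdadm module, so the array is commonly assembled from `runcmd`; the device names `/dev/vdb`/`/dev/vdc` and the mount point below are assumptions, not from the channel.)

```yaml
#cloud-config
packages:
  - mdadm
runcmd:
  # Assemble a two-disk RAID0 array; adjust device names to your instance.
  - mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
  - mkfs.ext4 /dev/md0
  - mkdir -p /mnt/data && mount /dev/md0 /mnt/data
```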
[12:14] <kryl> Hi, is it possible to have "conflict" between cloud.conf and configs in cloud.cfg.d/* files ?
[12:14] <kryl> I want to debug what's happening? I put files in this directory but I didn't get any actions.
[12:40] <meena> kryl: if there was a conflict, you should have gotten an error
[13:01] <kryl> meena, I hope so, but where? :-p It would be cool to "debug" cloud-init files before launching them on a headless server without KVM.
[13:16] <kryl> is there a module to setup "locales" in debian/ubuntu systems ?
[13:16] <kryl> or a place where I can find existing modules ?
[14:21] <Odd_Bloke> kryl: Check out https://cloudinit.readthedocs.io/en/latest/topics/modules.html
[14:22] <Odd_Bloke> (More specifically: https://cloudinit.readthedocs.io/en/latest/topics/modules.html#locale)
[15:59] <Odd_Bloke> smoser: Thoughts on https://bugs.launchpad.net/cloud-utils/+bug/1912904?
[16:00] <Odd_Bloke> Seems like a reasonable addition to me, but I'm not familiar with how cloud-localds is used (or if we already have a way of doing this).
[16:12] <Krikke> there's an option to add ssh keys in the user config
[16:12] <Krikke> users.ssh_authorized_keys
[16:13] <Krikke> though I'm using terraform to include the file
[16:14] <Krikke> https://github.com/ixevix/bootstrap-vm-terraform-libvirt
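(For reference, a minimal sketch of the user-config form Krikke describes; the user name and key below are placeholders.)

```yaml
#cloud-config
users:
  - name: demo                      # hypothetical user name
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... demo@example   # placeholder public key
```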
[16:25] <kryl> I can enable the locale module in the cloud_config_modules section in cloud.cfg, but what about adding locale: ar_AE, as an example? Do I need to write that in cloud.cfg.d/xxx.cfg, or can I add it in cloud.cfg directly (before or after the cloud_config_modules section)?
[16:26] <kryl> Odd_Bloke, I saw it but it's not so clear. And what about UTF8 ?
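(A minimal sketch answering both questions, assuming standard merge behavior: the `locale:` key can live in cloud.cfg or in a cloud.cfg.d snippet, since top-level keys are merged — its position relative to the cloud_config_modules list shouldn't matter — and a UTF-8 variant can be named explicitly.)

```yaml
#cloud-config
# e.g. placed in /etc/cloud/cloud.cfg.d/90-locale.cfg
locale: ar_AE.UTF-8
```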
[16:27] <minimal> Odd_Bloke: its writing a complete user-data file so only really of use to people to already creating their own user-data with more contents. I'd expect the majority of people using an ISO for cloud-init would be specifying more things in their user-data
[16:27] <minimal> s/people to already/people not already/
[16:27] <kryl> I'm wondering what's the best option to run "ansible" local playbooks. I saw there is something for chef, puppet, saltstack ... but nothing for ansible actually.
[16:41] <Odd_Bloke> minimal: So I do think there's a set of people who want to debug An Image, but have never heard of cloud-init; they want a way to backdoor it, and nothing more.
[16:44] <Odd_Bloke> Some of those folks will have figured something out, but others will have been put off by the variety of tools required to just get in.
[16:45] <Odd_Bloke> To be clear, I'm not saying we therefore must include something to cater to them, but I think that's a use-case that we could serve better.
[16:49] <minimal> in the case of that script it'll mean that defaults will be used for various c-i modules, and (I assume) fallback to DHCP for the network, as no network YAML is specified. Not sure what the end result of the various modules' default settings would be.
[16:50] <minimal> i.e. is the result unsafe (is the default account password-less and accessible via SSH? etc)
[17:45] <Odd_Bloke> Hmm, so one consequence that I've noticed of releasing a hotfix upstream release is that the versions produced by our Ubuntu daily builds now sort _lower_ than the versions in the archive.  The 20.4.1 tag isn't in the main branch's history (because it's on a forked branch with only the fix on top of 20.4), so the most recent tag is 20.4.  (And 20.4 < 20.4.1.)
[18:01] <Odd_Bloke> I have some thoughts on how to address this: add an epoch to all of the recipe builds, so this will never be an issue again; or, temporarily prefix the recipe versions with something which will sort them above 20.4.1 which we can drop once 21.1 is out; or, add a tag like "20.4.2-not-really" to HEAD.
[18:01] <Odd_Bloke> (I don't like that last one.)
[18:02] <Odd_Bloke> I've also realised that this means that we presently have an upgrade issue to the devel release: as 20.4.1 > 20.4, the 20.4 snapshot in hirsute will not be upgraded to.
[18:03] <Odd_Bloke> (We'll release and upload 21.1 before hirsute releases, so this won't be an issue by release time/FF.)
[18:04] <blackboxsw> Odd_Bloke: So I'm wondering as well whether in the future we actually don't bump the 20.4.X revision but instead just bump the ~18.04.2 suffix
[18:04] <blackboxsw> that way the extra releases into ubuntu/xenial|bionic|etc don't cause problems for the upstream daily recipe builds
[18:05] <Odd_Bloke> blackboxsw: I think we want it to be very clear that the Ubuntu stable releases have a bugfix that was important enough to merit an upstream hotfix release.
[18:05] <powersj> then why not call it 21.1 or 20.5
[18:06] <powersj> when we chose semantic-like versioning we said the .x would increment as we did releases. I'd need to look, but I don't think we talked much about doing patch versions
[18:06] <blackboxsw> In this case we didn't go through an official upstream release in master, just hotfix releases direct to the ubuntu/<series> branches
[18:07] <powersj> ah
[18:07] <Odd_Bloke> powersj: I think we'd have this same problem: the 21.1 or 20.5 tags would still not be in history.
[18:07] <blackboxsw> so daily build recipes wouldn't "see" those release tags
[18:07] <powersj> yeah
[18:07] <blackboxsw> ... on master
[18:08] <Odd_Bloke> And 20.4.1 _is_ an official upstream release, but it was a cherry-pick directly on top of 20.4 to minimise the delta to the one required by the fix.
[18:08] <Odd_Bloke> (If this had been Ubuntu-only, we'd have cherry-pick'd into the release branches and this would be a non-issue.)
[18:10] <blackboxsw> right... I'm not quite sure what to do in this particular case. I'm trying to wrap my head around the epoch suggestion to walk through version matrix fallout.
[18:12] <blackboxsw> Odd_Bloke: and at the moment new-upstream-snapshot releases into hirsute will still be something like 20.4-71-ga9c904dc-0ubuntu1 so we'd have to change our tooling too
[18:12] <blackboxsw> right?
[18:12] <blackboxsw> because we need hirsute releases to be 20.4.1-71......
[18:13] <rick_h> Is this an issue because once we cut a release we've not revved to the next release? For instance, once 20.4 was cut daily should have been a pre-release of 21.1? 21.1-alphaX maybe?
[18:26] <kryl> Please, how do I use metadata variables (from EC2)? I can get them listed fine with cloud-init query ds.... but I want to use them to set up my hostname :-) I tried runcmd with {{ varName }} but it doesn't work! I put this config in the cloud.cfg.d directory
[18:26] <kryl> it looks simple, but I don't understand why I should use an external script that parses the metadata URL again?
[18:32] <blackboxsw> Correct me please Odd_Bloke: this is an issue because we hotfixed a release (20.4 + a single cherry-pick) as 20.4.1 directly into the ubuntu/<release> branches. Master never saw this upstream release tag 20.4.1, because the commit we added to master to revert landed along with a bunch of other commits, so tagging that "release 20.4.1" in master would not be representative of what was hotfixed into
[18:32] <blackboxsw> ubuntu/<release>
[18:39] <blackboxsw> kryl: you'll need to provide a ## template: jinja line at the beginning of the #cloud-config that you provide to declare your cloud-config userdata as a template that needs var substitution.
[18:40] <kryl> before or after #cloud-config ?
[18:40] <blackboxsw> kryl: you can check on your booted system `sudo cloud-init query userdata` to see if the first line has that header  Example here https://cloudinit.readthedocs.io/en/latest/topics/instancedata.html#using-instance-data
[18:40] <kryl> I don't provide them as global config, it's just additional files in the cloud.cfg.d directory
[18:41] <blackboxsw> kryl: ahh hrm checking that approach
[18:42] <kryl> one more question: this module: https://cloudinit.readthedocs.io/en/latest/topics/modules.html#resolv-conf won't work on Debian systems at all?
[18:42] <blackboxsw> I'm not certain if we handle jinja template processing of separate cloud.cfg.d files but I'm checking now.
[18:42] <kryl> alright
[18:43] <blackboxsw> kryl: what's your file content in /etc/cloud/cloud.cfg.d/<custom>.cfg
[18:43] <blackboxsw> if you can share some of it
[18:44] <blackboxsw> https://paste.ubuntu.com is a good sharing tool if needed
[18:52] <kryl> something like that : https://paste.ubuntu.com/p/jvdwt9KM4Q/
[18:52] <kryl> and preserve_hostname must be set to false... but that's not the problem here.
[18:53] <blackboxsw> kryl: I'm still checking about inability to provide ## template: jinja in cloud.cfg.d custom config files. But minimally you could provide an /etc/cloud/cloud.cfg.d/99-myname.cfg like the following: https://paste.ubuntu.com/p/vVsPyM5X5Q/
[18:54] <blackboxsw> generally ## template: jinja has to be before #cloud-config
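(For reference, a minimal jinja-templated user-data sketch matching the advice above; the `v1.local_hostname` and `v1.cloud_name` keys are examples from the instance-data docs, and exact keys vary by datasource.)

```yaml
## template: jinja
#cloud-config
preserve_hostname: false
hostname: {{ v1.local_hostname }}
runcmd:
  - echo "running on {{ v1.cloud_name }}"
```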
[18:55] <kryl> got it for usage of cloud-init app
[18:55] <kryl> thank you
[18:59] <blackboxsw> kryl: would you please file a short bug/feature request for supporting jinja templates in /etc/cloud/cloud.cfg.d files? I think that would be a useful feature: https://bugs.launchpad.net/cloud-init/+filebug
[19:00] <kryl> ok I'll do that tomorrow
[19:00] <blackboxsw> kryl: also, if your images provide any config directives on disk (like runcmd), note that any user specifying their own runcmd in #cloud-config at launch time will override the config you provided on disk, and the runcmd that the admin launching the VM provided will be preferred instead.
[19:00] <blackboxsw> thanks kryl
[19:03] <kryl> any idea what about resolv_conf module ?
[19:03] <kryl> I'm wondering if it will apply on Debian systems, because in the documentation it seems to be dedicated to other OSes
[19:21] <Odd_Bloke> blackboxsw: rick_h: So for our daily build versioning, we don't do "next release minus", we do "last release plus"; this is easier to do in LP recipe builds (because it can use git tags to determine "last release"; there's no canonical source of "next release" available there).
[19:22] <blackboxsw> kryl: generally we don't recommend using this module on Debian, as DNS configuration should really come from the network configuration in /etc/network/interfaces, which the system's resolver services then handle properly. So I believe that config module is disallowed on debian/ubuntu so that the network and DNS rendering backend can render the right config from a single source of truth.
[19:22] <Odd_Bloke> So yeah, the problem is that "20.4 plus" sorts lower than "20.4.1".
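(The sorting problem can be seen with a simplified sketch of dpkg's upstream-version comparison — a toy reimplementation for illustration, not dpkg itself; it ignores epochs and Debian revisions.)

```python
import re

def deb_vercmp(a: str, b: str) -> int:
    """Simplified dpkg upstream-version comparison (no epoch/revision handling).

    Alternates non-digit and digit chunks; within non-digit chunks, '~' sorts
    before everything (including end-of-string), letters sort before other
    characters, and end-of-string sorts before any non-'~' character.
    """
    def order(c: str) -> int:
        if c == '~':
            return -1
        if c.isalpha():
            return ord(c)
        return ord(c) + 256  # non-letters sort after letters

    while a or b:
        # Compare the leading non-digit parts character by character;
        # an exhausted part counts as 0 (below any char except '~').
        while (a and not a[0].isdigit()) or (b and not b[0].isdigit()):
            ca = order(a[0]) if a and not a[0].isdigit() else 0
            cb = order(b[0]) if b and not b[0].isdigit() else 0
            if ca != cb:
                return -1 if ca < cb else 1
            if a and not a[0].isdigit():
                a = a[1:]
            if b and not b[0].isdigit():
                b = b[1:]
        # Compare the leading numeric parts numerically.
        ma = re.match(r'\d*', a).group()
        mb = re.match(r'\d*', b).group()
        na, nb = int(ma or 0), int(mb or 0)
        if na != nb:
            return -1 if na < nb else 1
        a, b = a[len(ma):], b[len(mb):]
    return 0

# The daily-recipe version ("20.4 plus commits") sorts below the hotfix tag:
print(deb_vercmp("20.4-72-g28cb7f03", "20.4.1"))  # negative: daily < hotfix
```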
[19:23] <kryl> blackboxsw,ok thank you for your search
[19:24] <Odd_Bloke> But to get recipes to build with "pre-release" versioning, I think we'd have to manually modify the recipes after each release, rather than them just rolling forward automatically with no intervention.
[19:24] <blackboxsw> kryl: I said that with too many words, and I might be incorrect. But generally I think there are enough networking edge cases on Debian and Ubuntu with using cc_resolv_conf that the primary network config files /etc/network/interfaces or /etc/netplan/* (or metadata services providing network_config) are the preferred mechanism to describe such DNS config
[19:24] <blackboxsw> kryl: np
[19:25] <kryl> It's for a special use case: I have to prevent the DHCP client (used from ENI / Debian) from changing resolv.conf... :-) and ideally fix it manually, maybe with cloud-init
[19:27] <Odd_Bloke> blackboxsw: There's an upstream/20.4.1 branch: https://github.com/canonical/cloud-init/tree/upstream/20.4.1
[19:27] <Odd_Bloke> Perhaps we should just merge that back into master?
[19:30] <Odd_Bloke> A quick test indicates that would solve the problem: https://paste.ubuntu.com/p/5CBN49kPjw/
[19:32] <Odd_Bloke> (And that is what you do with hotfix branches in gitflow, at least.)
[19:33] <kryl> I have to leave, GL for next steps ;)
[19:43] <blackboxsw> Odd_Bloke: confirmed. I think merging your upstream/20.4.1 will bump our version in master above 20.4 and pull in that annotated tag, which will get our `git describe` reporting properly
[19:44] <blackboxsw> the only interesting caveat with that is that our git describe offset won't really be comparable in master vs the ubuntu/X|B|F series
[19:45] <blackboxsw> let me rethink that last statement of mine as I don't think that is actually true...
[19:45] <Odd_Bloke> Only until 21.1, then everything will be realigned, I think.
[19:46] <blackboxsw> Odd_Bloke: +1 on that. once 21.1 annotated upstream tag is cut in master all alignment of git describe commit offsets from a given signed tag will be properly aligned. I'm checking ubuntu/xenial's git desc vs master and also what happens with next new-upstream-snapshot
[19:52] <blackboxsw> Odd_Bloke: ok, so +1 on git merge origin/upstream/20.4.1 into master. I'd like us to augment that commit message, though, with the reason why we now have the merge marker, accounting for the hotfix release that needed to be sync'd "up" into master. Does that make sense?
[19:53] <blackboxsw> because I know future-me isn't going to remember why specifically we approached this in master "next time"
[19:54] <blackboxsw> Odd_Bloke: it might also be worth us noting "hotfix" procedure in https://github.com/canonical/uss-tableflip/blob/master/doc/ubuntu_release_process.md or https://github.com/canonical/uss-tableflip/blob/master/doc/upstream_release_process.md
[19:56] <blackboxsw> Odd_Bloke: one issue is that git describe on ubuntu/xenial is currently 20.4.1-426-gb889283c which will still sort higher than master at 20.4.1-72-g28cb7f03  right?
[19:57] <blackboxsw> I presume though, that our daily build recipe will actually just bump the base version 20.4 to 20.4.1 for all ubuntu releases. So maybe this is a non-issue https://code.launchpad.net/~cloud-init-dev/+archive/ubuntu/daily
[19:58] <blackboxsw> because daily recipes are still building {latest-tag}  which will change/increment from 20.4 -> 20.4.1 for all builds
[20:01] <Odd_Bloke> blackboxsw: The version we need to care about isn't the one that `git describe` in ubuntu/xenial produces, but the version in xenial itself (which is the version in debian/changelog in ubuntu/xenial: 20.4.1-0ubuntu1~16.04.1).
[20:05] <rick_h> Odd_Bloke:  looking at the CLA process, do you recall what's meant by the "Please add the Canonical Project Manager or contact"? Is that meant for the group signings using the same form vs individual contributor?
[20:06] <rick_h> oh lol it's in the doc, nvm
[20:09] <blackboxsw> blackboxsw: +1. Ok, you are right: daily recipes construct their own deb versioning based on {latest-tag}-{revno}, sort of like `git describe` behavior (though through Launchpad build recipes), so `git describe` isn't used for daily PPA builds. And for our real releases, right, we rely solely on debian/changelog for the published version; new-upstream-snapshot creates that debian/changelog version.
[20:09] <blackboxsw> Odd_Bloke: ^ oops. I'm talking to myself again
[20:10] <blackboxsw> so on the next new-upstream-snapshot release for the xenial/bionic/etc uploads, we need to make sure we are still dropping the <patchrev> version from 21.1.<patchrev> during that upload, if new-upstream-snapshot isn't smart enough on that front
[21:56] <Odd_Bloke> OK, so unless someone objects before tomorrow, I'm going to get Rick to lift the squash-only requirement on the cloud-init repo for long enough for us to merge the upstream/20.4.1 branch into master.  It has to be a _merge_ (rather than a squash) to get the 20.4.1 tag into master's history, which fixes `git describe` (and by extension Ubuntu daily builds).  (It also represents what happened: we briefly
[21:56] <Odd_Bloke> forked cloud-init for a release; that fork has now been reintegrated into master.)