[00:29] <faa> hello, maybe someone has an example debian double network interface?
[01:21] <johnsonshi> rharper: As discussed previously, cloud-init's swap file module runs before the mount module. Cloud-init swap create also does not ensure that it has a path to the file, thereby throwing an exception. https://bugs.launchpad.net/cloud-init/+bug/1869114
[08:44] <faa> help required: on Debian, can cloud-init write the file (/etc/network/interfaces.d/enp0s0) before the network stage? datasource is NoCloud iso
[08:54] <andras-kovacs> imho by default it makes a config for your interface but what is the problem you are facing right now?
[08:58] <faa> on Debian, if the interface does not have link up, cloud-init ignores this interface (it only writes the config); a network restart is required
[09:04] <faa> log https://pastebin.com/gv6HgpjY first interface link up in image template
[09:05] <andras-kovacs> "enp0s1 | True | 127.0.0.1" ... that's strange, isn't it?
[09:05] <andras-kovacs> are you using NetworkManager?
[09:07] <faa> ip changed, not the standard debian 10 service, with very old cloud-init
[09:13] <andras-kovacs> sorry, I can't follow you
[09:13] <andras-kovacs> if you have problems with the old network.service try to disable it and use NetworkManager instead (but I'm not 100% sure how it looks in Debian nowadays)
[09:15] <faa> it's minimal server install without external packages
[09:24] <nrajasekhar> Hello
[09:26] <nrajasekhar> I need some help with cloud-init usage on suse
[09:26] <nrajasekhar> I want to change the password on first login after creating the aws instance
[09:27] <nrajasekhar> How can I achieve it
[09:28] <nrajasekhar> any help here is much appreciated
[09:29] <andras-kovacs> do you want to change only or store it in a meta service somewhere?
[09:30] <andras-kovacs> I would change it with a runcmd command maybe
[09:31] <nrajasekhar> change and save it
[09:31] <andras-kovacs> I think you can find a whole solution with google for that
[09:32] <nrajasekhar> yes, I have tried all the available options. none of them worked for me
[09:32] <nrajasekhar> I have been referring to the link that I am pasting here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/installation_and_configuration_guide/setting_up_cloud_init
[09:32] <andras-kovacs> I would call a simple shell script which would set up a new password ,encrypt it with my ssh pubkey and upload it to the meta data server
[09:33] <andras-kovacs> first things first, are you trying to make it work with Atomic?
[09:34] <nrajasekhar> sorry but I am not sure what that option means from the link.
[09:34] <andras-kovacs> which option?
[09:35] <nrajasekhar> password: atomic
[09:35] <andras-kovacs> this how-to, while it looks perfectly good to me, I think is a general one and not exactly for AWS
[09:35] <nrajasekhar> oh ok
[09:36] <nrajasekhar> from open build service, I have taken a sample template and am modifying it to achieve this
[09:36] <nrajasekhar> but none of the options worked
[09:36] <andras-kovacs> I don't know that one
[09:37] <nrajasekhar> could you please suggest any method or example from a link?
[09:37] <andras-kovacs> but we need to know what you want to achieve
[09:37] <nrajasekhar> https://build.opensuse.org/
[09:37] <nrajasekhar> ok let me explain
[09:38] <nrajasekhar> we generate Appliance in the OVA format
[09:38] <andras-kovacs> why?
[09:38] <andras-kovacs> I mean why in ova? do you use virtualbox or what?
[09:39] <nrajasekhar> that's the way we deliver it to customers
[09:40] <nrajasekhar> recently my team has decided to deploy our product on to AWS
[09:40] <andras-kovacs> idk but ova usually means virtualbox which also means IDE attached disks
[09:40] <nrajasekhar> upon googling I found that an OVA can be used to create an AWS AMI and an instance
[09:41] <nrajasekhar> https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
[09:42] <nrajasekhar> I was able to successfully deploy the OVA, create an instance out of it, and access my application
[09:42] <andras-kovacs> then what's the problem?
[09:42] <nrajasekhar> but in the OVA, we have hardcoded the root password
[09:42] <andras-kovacs> do you want to randomize some passwords?
[09:43] <nrajasekhar> now we want to make sure on first login, the user changes the password and save it
[09:44] <andras-kovacs> finally! :D
[09:44] <andras-kovacs> so this is what you want here
[09:44] <nrajasekhar> yes :-)
[09:45] <andras-kovacs> you just need to expire the password of that account I think and that's all
[09:45] <andras-kovacs> https://www.tecmint.com/force-user-to-change-password-next-login-in-linux/
[09:46] <andras-kovacs> you can put it in a runcmd command also but I would set it in the "ova" before I upload it to AWS.
[09:46] <andras-kovacs> I mean you can do it without cloud-init also
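The expire-the-password approach suggested above can be sketched as a #cloud-config snippet (a minimal sketch only; the account name `root` and the use of `runcmd` are just one way to do it — as noted, the same command could equally be run in the image before upload):

```yaml
#cloud-config
# Expire the account's password so the next login forces a change.
# `chage -d 0` sets the "last password change" date to the epoch,
# which makes the current password immediately expired.
runcmd:
  - chage -d 0 root
```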
[09:47] <nrajasekhar> ok
[09:47] <nrajasekhar> out of curiosity, is there any option in cloud-init?
[09:50] <andras-kovacs> https://cloudinit.readthedocs.io/en/latest/topics/modules.html#users-and-groups
[09:51] <andras-kovacs> anything else you want there is runcmd for that
[09:52] <nrajasekhar> sure I will give a try.
[09:52] <nrajasekhar> @andras-kovacs: thanks for all the help and info.
[09:55] <andras-kovacs> ywc!
[13:36] <Goneri> the CI is broken because of https://github.com/gabrielfalcao/HTTPretty/issues/397
[13:36] <Goneri> a 1.0.2 release has just been pushed.
[13:37] <Odd_Bloke> nrajasekhar: You should consider whether having a password in a cloud environment is appropriate at all, but https://cloudinit.readthedocs.io/en/latest/topics/modules.html#set-passwords will allow you to set passwords and by default they should require resetting on first login.
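The set-passwords route Odd_Bloke mentions can be sketched like this (a sketch only; the password shown is a placeholder, and `expire: true` is what forces the reset on first login):

```yaml
#cloud-config
# Set an initial password and mark it expired so the user must
# choose a new one at first login. `expire` defaults to true in
# the set-passwords module, but it is spelled out here.
chpasswd:
  expire: true
  list: |
    root:PLACEHOLDER-CHANGE-ME
```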
[13:38] <Odd_Bloke> Goneri: I've just restarted the CI for your PR.  Assuming that new version has been released, we should pick it up automatically.
[13:38] <Odd_Bloke> andras-kovacs: (Thanks for helping out!)
[13:39] <Goneri> thanks
[13:40] <Odd_Bloke> And yep, it's got past the point it failed before.
[13:52] <Goneri> Odd_Bloke, once this patch is merged, I will clean up cloudinit/sources/DataSourceNoCloud.py to remove the OS specific code we use to build devlist.
[13:52] <Goneri> it should be in cloudinit/util.py
[13:52] <Odd_Bloke> OK, nice!
[13:52] <Goneri> and the find_devs() method should be in cloudinit/distros/
[13:53] <Goneri> since it's OS (and distro) specific
[14:26] <eggbean> Is there a vs code extension for #cloud-config?
[14:27] <eggbean> Can't find anything
[14:30] <andras-kovacs> it's not so complex IMHO
[14:31] <andras-kovacs> # vim:syntax=yaml
[14:31] <andras-kovacs> so just use yaml syntax and that's all
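Put together, a minimal user-data file following that advice might look like this (the modeline and the `hostname` key are purely illustrative):

```yaml
#cloud-config
# vim:syntax=yaml
# The first line must be exactly "#cloud-config" for cloud-init
# to treat this user data as cloud config; the vim modeline only
# helps editors pick the right syntax highlighting.
hostname: example
```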
[14:36] <eggbean> andras-kovacs: yeh okay.  It's just that as I am new to it, autocomplete would have been useful to remind me of the exact keys
[14:37] <andras-kovacs> I see :D
[14:44] <amansi26> Hi, I have a doubt. I am using ConfigDrive as the datasource and deploying a DHCP network. netcfg.get('config') in stages.py is returning None. Distro is rhel and cloud-init version is 19.1.
[14:44] <amansi26> can someone please guide me
[14:50] <andras-kovacs> wait a minute
[14:50] <andras-kovacs> so it's rhel 8, right?
[14:51] <andras-kovacs> in 7 the latest is 18.5 as I remember
[14:52] <amansi26> it is rhel 7.7
[14:59] <andras-kovacs> oh wow
[15:02] <amansi26> It is just failing for DHCP network. Static is working fine
[15:02] <andras-kovacs> I'm not sure I understand why you need to "pull" the DHCP config
[15:03] <andras-kovacs> the DHCP server doesn't supply the necessary data?
[15:04] <andras-kovacs> I mean I don't get the deploy part. Cloud-init makes a dhcp config for your interface and that should work.
[15:04] <amansi26> Didn't get your last message
[16:22] <Odd_Bloke> eggbean: I'm not aware of any such extension.  There is JSON Schema for some of the cloud config modules, you might be able to do something with that?
[17:09] <Odd_Bloke> rharper: You still have requested changes on https://github.com/canonical/cloud-init/pull/147, which I believe are all addressed.  Could you either dismiss your review or do enough to give it an Approve?
[17:15] <rharper> lemme look
[17:15] <robjo> rharper: what file contains the more or less complete jsonschema for network and routing configuration?
[17:15] <rharper> we've no schema written for network config v1;  netplan (v2) I think has schema in source;
[17:16] <robjo> OK, I guess I have to keep winging it :(
[17:19] <rharper> Odd_Bloke: maybe I'm missing something, but on conversation page, I see my requested changes/comments, there's a 'view changes' button, which almost always ends up on a 'whoops can't find your changes' page ...  what is that supposed to do?
[18:15] <blackboxsw> Odd_Bloke: approved (but needs rebase) https://github.com/canonical/cloud-init/pull/278/files
[18:37] <Goneri> rharper, is there anything else I need to adjust? https://github.com/canonical/cloud-init/pull/147
[18:49] <rharper> Goneri: thanks, I'm re-reviewing
[18:55] <bwatson> Anyone have any luck bootstrapping a RHEL 8.1 or CentOS 8 generic cloud image (kvm) via cloud-init on VMWare?  I add the *.iso as a virtual CD-ROM to the machine and power it on, but the only thing I see on screen is: Probing EDD (edd=off to disable)... ok
[18:56] <bwatson> or is EL8 just not ready for this yet?  I've been using this technique with EL 6/7 and Ubuntu 16/18 for some time now
[19:02] <Goneri> rharper, thanks. I'm testing a fix.
[19:24] <Goneri> rharper, https://github.com/canonical/cloud-init/pull/147/commits/02694900dd656b94ed056a03952eeab276aa2694
[19:24] <Goneri> rharper, I don't default to DHCP anymore.
[19:28] <rharper> Goneri: ok, dhcp on just one interface then ?
[19:29] <rharper> dhcp_interfaces() does this  ever return more than one ?
[19:29] <Goneri> it depends on the metadata
[19:30] <Goneri> so yes, you can get more than one; in this case the default route will indeed be important
[19:30] <Goneri> but it's no different from what we do on FreeBSD or NetBSD.
[19:31] <Goneri> I would say, we cannot fix the network environment for the user.
[19:32] <rharper> well,  they don't have control over it in a cloud
[19:33] <rharper> in Azure, for example, you have to DHCP on primary nic, and secondary nics can also DHCP but the route can break the primary nic configuration;  so we ensure the network-config we write to the OS has a metric value that's lower priority for non-primary nics
[19:34] <Odd_Bloke> rharper: My guess is that a force-push means that the commits references are no longer present.
[19:35] <rharper> Odd_Bloke: yeah; I was thinking that; it seems like if a force push happens the message should go away, since it's not a great experience
[19:35] <Goneri> rharper, if you've got two DHCP networks without a default route, bad things will happen.
[19:35] <rharper> no, they * both* have a default route
[19:35] <Goneri> but it's already the case with the other BSDs. I'm not sure how I can fix that.
[19:35] <rharper> they may even point to the same router; but the question out of which interface does the packet egress
[19:35] <rharper> on Azure for example, packets destined for the internet may *only* come from eth0;
[19:35] <rharper> they are dropped otherwise
[19:35] <rharper> so the routing table entries matter
[19:36] <rharper> multi-nic DHCP needs help since the platform network metadata is imprecise (they just say DHCP on all of the nics)
[19:36] <Odd_Bloke> blackboxsw: Thanks for the review!
[19:36] <rharper> cloud-init helps out here by ensuring that we don't clobber the default route for the primary interface
[19:36] <rharper> on AWS, this can happen as well and on OpenStack
[19:37] <rharper> I would prefer to *skip* multi-nic DHCP until we know that we won't break primary nic networking configuration
[19:37] <Goneri> rharper, so basically: if the NIC is not primary and a default route exists, then ignore the default route
[19:38] <rharper> yes; the preference is to assign a route metric of lower priority for all nics but primary
[19:38] <rharper> or you can use route-tables
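The route-metric approach rharper describes can be sketched in network-config v2 (netplan-style) terms; the interface names and the metric value here are illustrative, not taken from the PR:

```yaml
# network-config, version 2 (netplan syntax).
# Both NICs use DHCP, but the secondary NIC's DHCP routes get a
# higher (worse) metric, so the primary NIC's default route wins
# and internet-bound packets egress via eth0.
version: 2
ethernets:
  eth0:
    dhcp4: true
  eth1:
    dhcp4: true
    dhcp4-overrides:
      route-metric: 200
```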
[19:38] <Goneri> ok FreeBSD and NetBSD need to be adjusted too then.
[19:39] <Goneri> Could we put that aside for now and merge the current patch? I will prepare a setup to test the scenario that you describe.
[19:39] <Goneri> and come back with a new patch that addresses this problem.
[19:39] <rharper> Goneri: I think that's reasonable since it's a general fix for all BSD networking
[19:39] <Goneri> I would also like to push a fix to clean up the devlist mess.
[19:40] <rharper> https://github.com/aws/ec2-net-utils/blob/master/ec2net-functions
[19:40] <rharper> this is linux specific, but it may be of help to understand how it's solved on Ec2
[19:41] <Goneri> Yes, understood.
[19:42] <rharper> Odd_Bloke: I'm +1 on 147 now, with the note that Goneri will follow up to handle multi-nic DHCP (and other cleanups)
[19:42] <Odd_Bloke> Nice, thanks!
[20:08] <Odd_Bloke> Goneri: #147 landed! \o/
[20:08] <Goneri> ahah!
[20:08] <Goneri> I will refresh https://bsd-cloud-image.org/ later today.
[20:22] <Goneri> thanks meena Odd_Bloke and rharper for the review.
[20:24] <blackboxsw> Odd_Bloke: sorry about missing the build recipe failures a few days ago, I had mistakenly thought it was just focal. but as you pointed out, xenial has been failing for a while.
[20:24] <blackboxsw> so I stopped at fixing focal as I saw bionic and eoan were fine. I forgot to look through xenial
[20:24] <blackboxsw> working that now
[20:24] <blackboxsw> should be minor
[20:31] <rharper> blackboxsw: that was me poking you on that ... I'm subscribed to the recipe failures
[20:31] <Odd_Bloke> rharper: I poked him in privmsg because I was going to take a look if he wasn't already.
[20:32] <rharper> ah
[20:32] <rharper> Odd_Bloke: thanks! =)
[20:32] <blackboxsw> rharper/Odd_Bloke right. I fixed focal a few days ago, didn't realize that xenial patch was broken too
[20:32] <blackboxsw> so that did fall through the cracks until Odd_Bloke pinged me privately
[20:32] <Odd_Bloke> I'm not sure why I'm not getting those emails TBH.
[20:32] <Odd_Bloke> Maybe I'm filtering them, let me check.
[20:50] <blackboxsw> Odd_Bloke: rharper so, we use SRU_BLOCKER/RELEASE_BLOCKER comment in cloud-init to give us a heads up about something that needs attention during the next SRU or RELEASE. I have to quilt refresh a number of patches at the moment on xenial, and I was wondering if either of you have a suggestion on a common comment prefix we can also use for resolved SRU/RELEASE blocker differences.
[20:51] <blackboxsw> a common prefix would allow us to easily see in an ubuntu series if there are patches that will remind us of differing behavior on other series
[20:52] <blackboxsw> since I have to refresh multiple debian/patches now, it'd be nice to start instrumenting that. "SRU_RESOLVED" maybe?
[20:52] <blackboxsw> SRU_FIX?
[20:53]  * blackboxsw goes with SRU_FIX unless there are firm objections
[20:58] <blackboxsw> put up https://github.com/canonical/cloud-init/pull/284
[20:58] <blackboxsw> for review
[20:59] <blackboxsw> I'm thinking we probably should also queue bionic and eoan with new-upstream-snapshot --skip-release just so we'll have a common changeset to look at once we actually do perform an SRU to those series in the future
[20:59] <blackboxsw> what do you folks think? do this for bionic and eoan as well so daily recipes are building the same 'snapshots'
[20:59] <blackboxsw> even though build recipes aren't failing there
[21:00]  * blackboxsw queues those for review pending discussion
[21:00] <Odd_Bloke> blackboxsw: The daily recipes merge master in, so they'd be building the same snapshot regardless, I think?
[21:00] <blackboxsw> Odd_Bloke:
[21:00]  * Odd_Bloke nods sagely in response.
[21:00] <blackboxsw> I think so right. so functionally may not make a difference on finished deb.
[21:00] <blackboxsw> hah
[21:01] <blackboxsw> though it'll make for vastly different xenial vs bionic/eoan on our next SRU
[21:01] <Odd_Bloke> Well, we'll still merge in master at that point to each branch.
[21:01] <blackboxsw> because current fixed xenial  will have snapshotted up until today
[21:01] <blackboxsw> yes bionic/eoan won't do that until we actually try to perform the next SRU
[21:01] <Odd_Bloke> We aren't releasing these branches anywhere, we're essentially just fixing merge conflicts.
[21:02] <blackboxsw> not releasing, but at SRU time, we will review a PR like 284 for xenial and it'll be way different from the PR for bionic/eoan
[21:02] <blackboxsw> bionic/eoan will be a lot bigger and will include what we will be pushing the upstream/ubuntu/xenial today
[21:03] <Odd_Bloke> We don't really review those PR diffs, though, we basically just confirm that the uploader hasn't fat-fingered using the tooling.
[21:03] <rharper> blackboxsw: I think that's fine, we run the same tools on each branch
[21:03] <Odd_Bloke> (i.e. I'm not reading through that xenial diff, I'm reviewing it by performing the actions locally.)
[21:03] <rharper> so they can vary, but my output should match the PR
[21:03] <rharper> Odd_Bloke: exactly
[21:04] <rharper> I end up diffing my local branch against the PR
[21:04] <rharper> and the delta should be timestamps and names
[21:04] <blackboxsw> ahh roger
[21:04] <blackboxsw> in that case it doesn't matter, for future of bionic/eoan
[21:05] <blackboxsw> for this xenial branch you'll see my diffs as well for the manual quilt refresh changes as I changed the patches a bit with the SRU_FIX prefix
[21:05] <blackboxsw> I went through https://github.com/CanonicalLtd/uss-tableflip/blob/master/doc/ubuntu_release_process.md#when-the-daily-recipe-build-fails
[21:20] <blackboxsw> rharper: are we ok with this response? https://github.com/canonical/cloud-init/pull/284#issuecomment-604692176
[21:20] <blackboxsw> also should I put up a doc PR  against uss-tableflip suggesting the use of SRU_FIX in patch comments ?
[21:20] <blackboxsw> for when we add new patches (like netplan eni priority in the near future)
[21:33] <rharper> blackboxsw: it wasn't clear to me whether you added any strings or just ran the steps from the docs?
[21:33] <rharper> blackboxsw: I don't want to add any markers in the patches or code;  I'm just not going to remember to look
[21:33] <rharper> I just want to run the tools and compare branches
[21:35] <blackboxsw> right rharper, but once https://github.com/canonical/cloud-init/pull/267 lands we need another patch, and it'd be nice if that patch indicated that it omitted content from the ubuntu/bionic branch to retain original behavior
[21:35] <blackboxsw> kind of like the existing requirements.txt patch comments noting that we aren't adding jsonschema deps to avoid changing behavior
[21:36] <blackboxsw> except that comment and text currently is unstructured, so breadcrumbs are hard to find.
[21:37] <blackboxsw> Also we need some structure/procedure to track, in cloud-init source, certain features that we don't want to accidentally release into stable releases. And that convention currently is loosely SRU_BLOCKER in comments, with no uss-tableflip documentation that says: hey, make sure you double-check during the next SRU that you aren't leaking unintended behavior
[21:38] <Odd_Bloke> blackboxsw: I can find that requirements.txt comment by looking through d/patches, I don't need to grep for it.
[21:39] <Odd_Bloke> Is it true of all the other modifications we make that they're in d/patches?
[21:39] <blackboxsw> Odd_Bloke: good point. so maybe no prefix required on patched files
[21:39] <Odd_Bloke> If so, then I think that's a better catalogue of changes than any manually managed prefix is going to give us.
[21:42] <rharper> blackboxsw: I suspect before adding the strings, we need a complete use-case and walk through what it will actually improve.   I can see a use-case for documenting behavioral differences between releases ...
[21:42] <rharper> but let's set out with that in-mind and design a tool/process with the goal in mind;
[21:44] <blackboxsw> Odd_Bloke: it's not all in patches, as we have directly adapted cloudinit/settings.py and debian/cloud-init.templates per release to enable datasources after the release goes stable
[21:45] <blackboxsw> and there may be others.  but mostly debian/patches captures the majority of the functional differences in cloud-init source. not packaging diffs
[21:47] <blackboxsw> rharper/Odd_Bloke: good points. so shall I leave the debian/patch comments untouched then. and just manually merge as best I can to avoid any other diff introduced by SRU_FIX prefix?
[21:47] <blackboxsw> I have one approve at the moment
[21:47] <blackboxsw> but can alter that approach to keep the debian/patch diff smaller
[21:56] <Odd_Bloke> rharper: blackboxsw: If either of you are looking for some small reviews to do, I've opened 4 small PRs that cleanup some more Py2 support code I found.
[21:58] <blackboxsw> Im all about the smalls
[21:58] <blackboxsw> :)
[22:05] <rharper> blackboxsw: I would prefer to leave them untouched;  when you say "merge as best you can" what do you mean?
[22:13] <blackboxsw> ok rharper, sorry, force-pushed 284, without taking liberties with the comments
[22:13] <blackboxsw> diff from yours should be smaller now
[22:14] <blackboxsw> rharper: I meant manually merging the existing patch because quilt failed to update
[22:15] <blackboxsw> nothing generally should be dropped, but it presented me with an opportunity I thought to standardize comments. But, I'm good not doing that. it's of limited use anyway
[22:17] <blackboxsw> in manually merging a patch conflict, there really shouldn't be any changes unless upstream changed some subset of the exact lines of the patch file, and there could be cases where the patch refresh author needs to make the appropriate manual decision on what should functionally remain after the patch (like if a variable was renamed or something). So "merge as best you can" meant being smart about your manual choice when resolving that quilt patch update
[22:19] <blackboxsw> rharper: I forgot to ping you again earlier on the netplan prioritization branch https://github.com/canonical/cloud-init/pull/267   do you think it needs a cloud_test to install ifupdown on focal to confirm behavior?
[22:20] <blackboxsw> or shall we just chalk that into a manual one-off SRU/release test
[22:20] <blackboxsw> I thought at standup I had missed review comments from you, but I don't see anything new there.