[04:46] <amiko2020> Hi, I'm having a hard time using cloud-init because I have to delete and recreate the instance every time I want to change something in the script - can anyone please help me edit the script and rerun it without this nightmare?
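For context on the question above: cloud-init can usually be made to rerun without recreating the instance. A minimal sketch, using standard cloud-init subcommands (these must be run on a host with cloud-init installed; adjust to your datasource):

```shell
# Remove cloud-init's per-instance state (and optionally its logs) so
# the next boot is treated like a first boot:
sudo cloud-init clean --logs
sudo reboot

# Or, to re-exercise the stages in place without a reboot:
sudo cloud-init init
sudo cloud-init modules --mode config
sudo cloud-init modules --mode final
```

Note that some modules are marked to run only "once per instance", which is why `clean` (or a new instance-id) is needed to force a full rerun.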
[12:32] <Odd_Bloke> smoser: I think requiring manual intervention in image capture use cases would be a nightmare; I'm confident most people who use cloud-init to make that work have no idea that cloud-init is what makes it work. :p
[12:34] <Odd_Bloke> I agree it works generally correctly as-is, my proposed "check_failsafe" would be something users (or perhaps clouds, if they really can't figure out how to fix their IMDS) would have to enable themselves, if they were in a case where that behaviour is appropriate.
[13:09] <smoser> Odd_Bloke: i see your point. i just don't know. long ago i was a fan of the "trudge along and do what you can" manner of handling errors.
[13:09] <smoser> but that has shifted to "fail loudly".
[13:10] <smoser> and setting 'check_failsafe' seems like it is going to end up with lots of people setting it and then never noticing that stuff didn't work.
[13:10] <smoser> and that host A and host B had the same ssh keys.
[13:28] <Odd_Bloke> Right, with a broken IMDS we cannot be fully correct; the question is whether it's preferable to break new instances, or long-running existing instances.  This would let users choose.  (But as I say, it's more of a thought experiment: I don't think we're going to prioritise fixing a corner case for broken cloud deployments.)
[13:36] <smoser> "silently break new instances"
[13:36] <smoser> is what you're proposing.
[13:38] <smoser> Odd_Bloke: i appreciate the thought experiment, and i'm probably not as opposed as it may seem from above.
[13:39] <smoser> one thing that confuses me, though, is why the sudden increase in these issues ?
[13:39] <smoser> (or at least in reports of them).  it certainly feels like an increase at least.
[13:55] <viks____> Odd_Bloke:  I installed cloud-init 20.3 on debian9, took a snapshot, created a fresh instance out of the snapshot... still the root password did not get set on debian9.. i'm not sure if this is the right way to test... :(
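For reference, setting a root password is normally done via `#cloud-config` user-data. A sketch of such a file, staged locally (the `chpasswd` and `ssh_pwauth` keys are standard cloud-init config for the 20.x era; the password value is a placeholder):

```shell
# Write a minimal user-data file that sets the root password.
cat > user-data <<'EOF'
#cloud-config
ssh_pwauth: true
chpasswd:
  expire: false
  list: |
    root:example-password
EOF

# Quick sanity check that the file was written:
grep -q '^#cloud-config' user-data && echo "user-data looks ok"
```

This file would then be supplied to the instance via the cloud's user-data mechanism (or a NoCloud seed).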
[13:57] <smoser> viks____: did you collect-logs anywhere ?
[13:58] <viks____> smoser: if i create new instance out of a snapshot, will the cloud init run again?
[13:58] <smoser> collect-logs will tell us if it did.
[13:59] <smoser> but yes, it is supposed to.
[14:02] <viks____> smoser: you meant `/var/log/cloud-init.log`?
[14:06] <smoser> run 'cloud-init collect-logs' and post the tarball somewhere.
[14:07] <smoser> there are lots of things it collects... we can troubleshoot one by one or ask you to provide a tarball that has all of them.
[14:07] <smoser> you can also open a bug and post it there if posting it somewhere is a problem
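For reference, the collect-logs invocation smoser is asking for is a single command; run as root so the journal can be included:

```shell
# Bundle cloud-init.log, cloud-init-output.log, the journal, instance
# data, etc. into one tarball in the current directory:
sudo cloud-init collect-logs
ls cloud-init.tar.gz
```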
[14:12] <viks____> here it is: https://file.io/UryHOSz5WtKd
[14:16] <smoser> that 404s for me
[14:19] <viks____> https://file.io/aAMaYHrgUykz
[14:19] <viks____> let me know if that worked
[14:24] <viks____> or https://tmpfiles.org/download/71959/cloud-init.tar.gz
[14:25] <smoser> $ host file.io
[14:25] <smoser> file.io has address 10.123.243.135
[14:25] <smoser> 10. ?
[14:25] <smoser> the tmpfiles.org worked
[14:30] <viks____> ok
[14:31] <smoser> viks____: well i'm pretty sure that cloud-init didn't start on the new instance.
[14:31] <smoser> cloud-init's systemd generator did see that it was running on openstack, and did enable cloud-init.target
[14:31] <smoser> but i do not think that it ran from there.
[14:32] <viks____> https://www.irccloud.com/pastebin/qVtMbCzN/
[14:32] <smoser> but thats not clear, because
[14:32] <smoser>  2020-09-02 07:05:45,161 - util.py[DEBUG]: Cloud-init v. 20.3 running 'init-local' at Wed, 02 Sep 2020 07:05:45 +0000. Up 86824.48 seconds.
[14:33] <smoser> that is confusing to me because that is the first thing that cloud-init should run on boot (other than the generator)
[14:33] <smoser> but certainly the system hadn't been up 86k seconds at that point.
[14:34] <smoser> so i'm not sure what that was
[14:34] <smoser> also the journal.txt that collect-logs gathers is not present. you must not have run it as root.
[14:34] <smoser> i'm not sure really if the pip install is going to work or not.
[14:34] <smoser> not something that is well tested.
[14:34] <smoser> you'd probably have better luck using a build from unstable... or attempting to build a package.
[14:36] <smoser> but i'm guessing building a package of 20.3 on oldstable is non-trivial
[14:36] <smoser> sorry. that's about all the help i can give.
[14:37] <smoser> it's possible that the pip install could/should work, but it's not something i'm familiar with or that i'm able to spend more time troubleshooting now.
[14:39] <viks____> i could not find any prebuilt packages for debian9... so i was trying the above steps... with http://cdimage.debian.org/cdimage/openstack/current-9/debian-9-openstack-amd64.qcow2 , the root password setting was not working via cloud-init, whereas it works for debian10 i.e. http://cdimage.debian.org/cdimage/openstack/current-10/debian-10-openstack-amd64.qcow2. not sure what the problem could be :(
[14:40] <viks____> someara: anyways thanks for your time
[15:20] <meena> how old is Debian 9, if it comes with cloud-init 0.7.x?
[15:34] <viks____> meena: as per the timestamp it was packaged recently http://cdimage.debian.org/cdimage/openstack/current-9/, but as per the wiki it was released in 2017..
[15:42] <Odd_Bloke> I think 17.1 was the first release under the new versioning scheme, so that checks out.
[15:42] <meena> viks____: why do you need such an old image, and why aren't you customising it?
[15:45] <Odd_Bloke> meena: Using a stable version of Debian is reasonable; they don't have an SRU/backporting process like Ubuntu does (or, rather, they don't grant exceptions like we have for cloud-init in Ubuntu), so there isn't a reliable source of new cloud-init packages for Debian.
[15:45] <Odd_Bloke> And I believe viks____ _is_ trying to customise it, with a newer version of cloud-init.
[15:47] <Odd_Bloke> smoser: I don't know if you care to figure it out, but FYI file.io resolves to (two) sensible addresses from here.
[15:50] <viks____> meena: i see from  https://www.debian.org/ that 9 and 10 are currently 2 stable releases under development
[15:56] <smoser> hm.. that is odd indeed.
[15:57] <viks____> meena: do you have any reference on how to customize it without leaving behind any history of the commands used to update cloud-init...?
[15:58] <smoser> hm... even weirder, it resolves to 2 sensible ipv4 addrs here now also.
[15:58] <smoser> https://paste.ubuntu.com/p/Z8WpN9yV7d/
[15:58] <smoser> check that out, that was within 10 seconds of each other, and within 2 minutes ago.
[15:59] <smoser> seems like systemd-resolved must have cached that address, and then re-looked and re-acquired. who knows.
[16:08] <Odd_Bloke> Yeah, the caching sometimes catches me out if I've been connected to a VPN or a captive portal or whatever.
[16:09] <Odd_Bloke> viks____: If you wanted to modify the image in a pristine fashion, you could look at using `mount-image-callback`: http://ubuntu-smoser.blogspot.com/2014/08/mount-image-callback-easily-modify.html
[16:10] <Odd_Bloke> But you'd still need to figure out _how_ to install cloud-init into the mounted image.
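For reference, the `mount-image-callback` invocation from the linked post follows this shape (the tool ships in the cloud-image-utils package and substitutes `_MOUNTPOINT_` with the mount path; the image name and the command run inside the chroot are illustrative):

```shell
# Mount the image, run a command against it, and cleanly unmount:
sudo mount-image-callback debian-9-openstack-amd64.qcow2 -- \
    chroot _MOUNTPOINT_ /bin/sh -c 'dpkg -l cloud-init'
```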
[16:12] <viks____> Odd_Bloke: ok..
[20:45] <apatt> Odd_Bloke Just to close the loop on the discussion we had yesterday, I was able to get what I wanted by putting in a basic network config in /var/lib/cloud/seed/nocloud-net/network-config and then also adding a NoCloud datasource in cloud.cfg pointing to a network share. So now the systems come up, get their networking and then continue on with their business without any of the funky netplan stuff :-)
[20:46] <apatt> cc smoser
[20:47] <apatt> End result is a bare metal image I can use on Lenovo, SMC, Dell and HPE servers in my environment without any changes
[20:50] <apatt> Many thanks to you both for your help
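A sketch of the seed layout apatt described, staged in a local directory for illustration (on the real image these files live under /var/lib/cloud/seed/nocloud-net/; the interface-matching rule and the seedfrom URL are placeholders, not apatt's actual values):

```shell
# Stage the NoCloud seed files locally:
SEED=./seed-staging/nocloud-net
mkdir -p "$SEED"

# Basic v2 network config: DHCP on any ethernet interface.
cat > "$SEED/network-config" <<'EOF'
version: 2
ethernets:
  all-en:
    match:
      name: "en*"
    dhcp4: true
EOF

# NoCloud expects meta-data and user-data alongside it; empty is fine.
touch "$SEED/meta-data" "$SEED/user-data"

# And a config drop-in (for /etc/cloud/cloud.cfg.d/) pointing the
# NoCloud datasource at a network share:
cat > 90-nocloud.cfg <<'EOF'
datasource_list: [ NoCloud, None ]
datasource:
  NoCloud:
    seedfrom: http://imageserver.example/seed/
EOF
```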
[20:52] <smoser> can you explain "also adding a nocloud datasource in cloud.cfg"
[20:52] <smoser> apatt:^