[00:18] +1 apple-corps[m]: you could open a browser to https://paste.ubuntu.com/ and ctrl-C / ctrl-V your content into the form there, if that helps
[00:19] falcojr: paride holmanb for tomorrow: typo/thinko on jammy image stream names for azure integration test runs: https://github.com/canonical/pycloudlib/pull/182
[00:19] Pull 182 in canonical/pycloudlib "azure: fix cut-n-paste image issue" [Open]
[00:23] once the above lands, we'll need a PR to bump the pycloudlib commitish in integration-requirements.txt to tip of main so Jammy Azure builds can start succeeding
[00:23] in cloud-init/integration-requirements.txt, that is
[00:45] blackboxsw: sent a response to the email. It seems like the script is valid and in the expected location. No errors or warnings from cloud-init
=== gschanuel215 is now known as gschanuel21
=== jbg7 is now known as jbg
[19:21] sorry falcojr, if you'd already looked: I fixed the integration test I added on the /run/cloud-init/cloud-id PR https://github.com/canonical/cloud-init/pull/1244
[19:21] Pull 1244 in canonical/cloud-init "cloud-id: publish /run/cloud-init/cloud-id- files" [Open]
[19:21] heh, you already did. thanks!
[19:21] blackboxsw: literally just approved it =P
[19:22] pycloudlib.Instance.read_from_file got me with the rstrip of whitespace on that one
[19:53] blackboxsw: Any ideas on what I can do next to debug that cloud-init script? I replied via email. It appears that the script is shellified. I ran the script as root manually; no errors
[19:54] I replied via email following up on your details regarding the logs, etc.
[20:03] apple-corps[m]: checking now. thx
[20:46] apple-corps[m]: also, the fact that `cloud-init status --long` in your email said "running" means cloud-init is still trying to finish setup.
[20:46] apple-corps[m]: you might also want to run `sudo cloud-init status --wait --long` on the instance to make sure all of cloud-init's boot/config stages have completed
[20:48] if it is taking your system a really long time to get set up, something is likely funky with networking. you can run `cloud-init analyze blame` and `systemd-analyze blame` to get a better understanding of why things are taking so long
[21:25] cjp256: hrm, per your mount integration test, which release (lsb_release) did you try to exercise? I'm seeing failures on Jammy due to keys changing in lsblk --json output.
[21:25] I'll sort it out locally and provide a patch suggestion to https://github.com/canonical/cloud-init/pull/1250; otherwise I think we can land it
[21:25] Pull 1250 in canonical/cloud-init "mounts: fix mount opts string for ephemeral disk" [Open]
[21:33] ahh, it looks like Jammy's `lsblk --json` may have introduced an incompatible JSON format: 'mountpoints': [...] instead of 'mountpoint': '...', unrelated to your PR.
[21:50] suggested integration test diff pushed to https://github.com/canonical/cloud-init/pull/1250
[21:50] Pull 1250 in canonical/cloud-init "mounts: fix mount opts string for ephemeral disk" [Open]
[22:00] the thing about the _netdev mount option is... we shouldn't be running mount -a before the network is up anyway, because cloud-init doesn't run cc_mounts until the init-network stage, when the network is already up.
[22:01] that said, we have invalid disk mount options at the moment; we'll fix that in this PR and then look at whether we can actually drop the _netdev mount option in the future
[22:11] blackboxsw: thanks. I will look to collect more information and attempt to resolve the issue or file a report.
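For reference, a minimal sketch of the Jammy `lsblk --json` key change discussed above. The device path, the mounted filesystem in the sample output, and the use of jq are illustrative assumptions, not part of the original exchange:

```
# Pre-Jammy util-linux: each block device carries a scalar 'mountpoint' key
$ lsblk --json /dev/sda1 | jq '.blockdevices[0].mountpoint'
"/"

# Jammy's util-linux emits a 'mountpoints' list instead
$ lsblk --json /dev/sda1 | jq '.blockdevices[0].mountpoints'
[
  "/"
]

# A tolerant query that handles both formats (jq's // picks the first non-null)
$ lsblk --json /dev/sda1 | jq '.blockdevices[0] | .mountpoint // .mountpoints'
```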
[22:13] falcojr: I pushed the minor integration test change to test_disk_setup as a separate PR https://github.com/canonical/cloud-init/pull/1261
[22:13] Pull 1261 in canonical/cloud-init "tests: lsblk --json output changes mountpoint key to mountpoinst []" [Open]
[22:32] hey ppl, I am working with a customer who's running the following command through a bootcmd: `- cloud-init-per instance command123.sh`
[22:33] and he swears up and down that the instance was rebooted from the command line and not from the hypervisor.
[22:33] my question is why it keeps trying to run it: is it because it has never found the binary before?
[22:35] oh, the command123.sh script (customer specified full path) does not exist.
[22:38] esv: `cloud-init analyze show` will show the number of reboots cloud-init saw and also report per boot whether bootcmd was skipped or not
[22:40] oops. esv, also the bootcmd module runs always (every reboot), so the script will have to be cognizant of idempotent behavior across reboots; see `Module frequency: always` @ https://cloudinit.readthedocs.io/en/latest/topics/modules.html#bootcmd
[22:41] it's different from the runcmd module, which is `Module frequency: once-per-instance` https://cloudinit.readthedocs.io/en/latest/topics/modules.html#runcmd
[22:41] which means cloud-init will only re-run runcmd if the instance-id changes in metadata
[22:42] sorry, I have split my attention here and may have misread the initial question/intent
[22:43] no worries, this information is much appreciated and helps me work on a different case.
[22:44] yeah, mixing cloud-init-per instance in bootcmd is interesting. I'll peek at that in lxc launches to see if it doesn't fire across reboot
[22:44] it is interesting how I tried to change the home directory of a user with a runcmd and with a per-instance script; the runcmd seems to be winning all the time
[22:50] esv: funny, up until now I don't think I've yet used cloud-init-per. I'm walking through how it looks at cloud-init's semaphores etc. to see if that's not working as desired
[22:51] esv: are we sure `cloud-init query instance-id` isn't changing across boots?
[22:52] that's the beauty of my job, every day I get to learn how customers are shooting themselves in the foot
[22:54] so I can confirm the shallow understanding that `cloud-init-per instance SOME_TAG echo 'madeit'` will run once and only once unless the instance-id changes
[22:55] I can also confirm that if the customer's instance-id changes, it'll keep running that thing as if it is a new instance-id
[22:55] the strange thing is that the customer keeps getting error messages on every reboot that the file doesn't exist
[22:55] so if the instance-id changes across reboots, the script will run again
[22:56] root@dev-b:~# cloud-init-per instance WHATa /root/doit.sh
[22:56] YEP
[22:56] root@dev-b:~# cloud-init-per instance WHATa /root/doit.sh
[22:56] root@dev-b:~# cloud-init-per instance WHATb /root/doit.sh
[22:56] YEP
[22:56] root@dev-b:~# cloud-init-per instance WHATb /root/doit.sh
[22:56] sorry, either an instance-id delta or a tag delta results in a re-trigger
[22:56] told the customer to fix the script name and move it from bootcmd to runcmd.
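A rough sketch of the suggested fix (moving the script from bootcmd to runcmd), tried out in an lxd container per the "lxc launches" idea above. The container name is hypothetical and the script path is taken from the customer example; this assumes the lxd snap is installed and configured:

```
# Hypothetical user-data moving the customer's script from bootcmd to runcmd;
# runcmd is once-per-instance, so it re-runs only if the instance-id changes.
cat > user-data.yaml <<'EOF'
#cloud-config
runcmd:
  - /root/command123.sh
EOF

# Launch a test container with that user-data
lxc launch ubuntu:jammy ci-per-test --config=user.user-data="$(cat user-data.yaml)"

# Wait for all cloud-init stages, then check runcmd output for errors
lxc exec ci-per-test -- cloud-init status --wait --long
lxc exec ci-per-test -- cat /var/log/cloud-init-output.log
```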
[22:58] +1, that's easiest and runs once per instance (unless they need that script to run earlier in boot); runcmd will still get executed if the instance-id changes on a subsequent boot, but won't run every boot by default
[22:58] thnx
[22:58] no worries
[23:42] 16:18:57 err_msg = "No such file or directory: b'lxc'" well, that's strange from our jenkins
[23:42] sure enough
[23:43] $ snap changes
[23:43] ID   Status  Spawn               Ready               Summary
[23:43] 453  Done    today at 23:15 UTC  today at 23:16 UTC  Auto-refresh snap "lxd"
[23:43] $ date
[23:43] Thu Feb 10 23:42:41 UTC 2022
[23:43] rekicking the lxd_container job
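One hedged way to keep a snap auto-refresh from racing CI runs like the one above; holding refreshes was not discussed in the channel, the 24-hour window is arbitrary, and `refresh.hold` requires a snapd recent enough to support it:

```
# Confirm whether an auto-refresh raced the job (as in the log above)
snap changes

# Show the current refresh schedule and the next planned refresh
snap refresh --time

# Hold automatic refreshes for 24 hours while jobs run (illustrative window)
sudo snap set system refresh.hold="$(date --date='+24 hours' +%Y-%m-%dT%H:%M:%S%:z)"
```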