[00:18] <blackboxsw> +1 apple-corps[m]: you could open a browser to https://paste.ubuntu.com/ and Ctrl-C/Ctrl-V your content into the form there if that helps
[00:19] <blackboxsw> falcojr: paride holmanb for tomorrow: typo/thinko on jammy image stream names for azure integration test runs: https://github.com/canonical/pycloudlib/pull/182
[00:23] <blackboxsw> once the above lands, we'll need a PR to bump pycloudlib commitish in integration-requirements.txt to tip of main so Jammy Azure builds can start succeeding
[00:23] <blackboxsw> in cloud-init/integration-requirements.txt that is
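(For reference, a pinned pycloudlib entry in integration-requirements.txt generally looks like the illustrative line below; the exact existing entry may differ, and `<commit-sha>` is a placeholder for the tip-of-main commit mentioned above.)
```
# illustrative integration-requirements.txt entry; <commit-sha> is a placeholder
git+https://github.com/canonical/pycloudlib.git@<commit-sha>
```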
[00:45] <apple-corps[m]> blackboxsw: sent a response to the email. It seems like the script is valid and in the expected location. No errors or warnings from cloud-init
[19:21] <blackboxsw> sorry falcojr if you'd already looked: I fixed the integration test I added on the /run/cloud-init/cloud-id PR https://github.com/canonical/cloud-init/pull/1244
[19:21] <blackboxsw> heh, you already did, thanks!
[19:21] <falcojr> blackboxsw: literally just approved it =P
[19:22] <blackboxsw> pycloudlib.Instance.read_from_file got me with the rstrip of whitespace on that one
[19:53] <apple-corps[m]> blackboxsw: Any ideas on what I can do next to debug that cloud-init script? I replied via email. It appears that the script is shellified. I ran the script manually as root with no errors
[19:54] <apple-corps[m]> I replied via email following up on your details regarding the logs, etc.
[20:03] <blackboxsw> apple-corps[m]: checking now. thx
[20:46] <blackboxsw> apple-corps[m]: also, the fact that cloud-init status --long in your email said "running" means cloud-init is still trying to finish setup.
[20:46] <blackboxsw> apple-corps[m]: you might also want to run `sudo cloud-init status --wait --long` on the instance to make sure all of cloud-init's boot/config stages completed
[20:48] <blackboxsw> if it is taking your system a really long time to get set up, something is likely funky with networking. you can run `cloud-init analyze blame` and `systemd-analyze blame` to get a better understanding of why things are taking so long
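(A minimal sketch of the debugging sequence suggested above, run on the affected instance; these are standard cloud-init/systemd commands, and interpreting their output depends on the system.)
```sh
# wait for cloud-init to finish all boot stages, then print detailed status
sudo cloud-init status --wait --long
# show which cloud-init stages/modules consumed the most boot time
cloud-init analyze blame
# show which systemd units consumed the most boot time
sudo systemd-analyze blame
```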
[21:25] <blackboxsw> cjp256: hrm, per your mount integration test, which lsb_release did you try to exercise? I'm seeing failures on Jammy due to keys changing in lsblk --json output.
[21:25] <blackboxsw> I'll sort it locally and provide a patch suggestion to https://github.com/canonical/cloud-init/pull/1250 otherwise I think we can land it
[21:33] <blackboxsw> ahh, it looks like Jammy's `lsblk --json` may have introduced an incompatible JSON format: 'mountpoints': [...] instead of 'mountpoint': '...', unrelated to your PR.
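(A hedged illustration of the key-name difference being described; the device and mount values below are hypothetical, only the key names come from the discussion above.)
```sh
# older lsblk --json output carried a single string per device, e.g.
#   "mountpoint": "/mnt"
# while the Jammy lsblk --json being discussed emits a list instead, e.g.
#   "mountpoints": ["/mnt"]
lsblk --json /dev/vdb
```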
[21:50] <blackboxsw> suggested integration test diff pushed to https://github.com/canonical/cloud-init/pull/1250
[22:00] <blackboxsw> the thing about the _netdev mount option is... we shouldn't ever be running mount -a before the network is up, because cloud-init doesn't run cc_mount until the init-network stage, when the network is already up.
[22:01] <blackboxsw> that said, we have invalid disk mount options at the moment; we'll fix that in this PR and then look at whether we can actually drop the _netdev mount option in the future
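(For context, a minimal sketch of the kind of cloud-config mounts entry under discussion; the device, mount point, and options here are illustrative, not the PR's actual test values.)
```sh
# write an illustrative user-data file with a mounts entry carrying _netdev
cat > user-data <<'EOF'
#cloud-config
mounts:
  - [ /dev/vdb, /mnt/data, auto, "defaults,nofail,_netdev" ]
EOF
```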
[22:11] <apple-corps[m]> blackboxsw: thanks. I will look to collect more information and attempt to resolve the issue or file a report.
[22:13] <blackboxsw> falcojr: I pushed the minor integration test change to test_disk_setup as a separate PR https://github.com/canonical/cloud-init/pull/1261
[22:32] <esv> hey ppl, I am working with a customer who's running the following command thru a bootcmd: - cloud-init-per instance <tag> command123.sh 
[22:33] <esv> and he swears up and down that the instance was rebooted from the command line and not from the hypervisor. 
[22:33] <esv> my question is why it keeps trying to run it. Is it because it has never found the binary before?
[22:35] <esv> oh, the command123.sh script (customer specified full path) does not exist.
[22:38] <blackboxsw> esv: cloud-init analyze show will show the number of reboots cloud-init saw and also report whether bootcmd was skipped or not on each boot
[22:38] <blackboxsw> `cloud-init analyze show`
[22:40] <blackboxsw> oops. esv, also: the bootcmd module always runs (every boot), so the script will have to be cognizant of idempotent behavior across reboots; see `Module frequency: always` @ https://cloudinit.readthedocs.io/en/latest/topics/modules.html#bootcmd
[22:41] <blackboxsw> it's different from the runcmd module, which is `Module frequency: once-per-instance` https://cloudinit.readthedocs.io/en/latest/topics/modules.html#runcmd
[22:41] <blackboxsw> which means cloud-init will only re-run runcmd if instance-id changes in metadata
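(A hedged sketch contrasting the two module frequencies discussed above; the tag and script paths are placeholders, not the customer's actual values.)
```sh
# write an illustrative user-data file contrasting bootcmd (always) with runcmd (once-per-instance)
cat > user-data <<'EOF'
#cloud-config
# bootcmd runs every boot, so gate the command with cloud-init-per if it should
# only run once per instance-id (for a given tag)
bootcmd:
  - cloud-init-per instance mytag /usr/local/bin/run-once-per-instance.sh
# runcmd already runs once per instance and re-runs only if the instance-id changes
runcmd:
  - /usr/local/bin/also-once-per-instance.sh
EOF
```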
[22:42] <blackboxsw> sorry, I have split my attention here and may have misread the initial question/intent
[22:43] <esv> no worries, this information is much appreciated and helps me work on a different case. 
[22:44] <blackboxsw> yeah mixing cloud-init-per instance in bootcmd is interesting. I'll peek at that in lxc launches to see if it holds up across reboot
[22:44] <esv> it is interesting: I tried to change the home directory of a user with a runcmd and with a per-instance script, and the runcmd seems to be winning all the time
[22:44] <blackboxsw> s/hold up/doesn't fire/
[22:50] <blackboxsw> esv, funny, up until now I don't think I've ever used cloud-init-per. I'm walking through how it checks cloud-init's semaphores etc. to see if that's not working as desired
[22:51] <blackboxsw> esv are we sure `cloud-init query instance-id` isn't changing across boots?
[22:52] <esv> that's the beauty of my job, everyday I get to learn how customers are shooting themselves in the foot
[22:54] <blackboxsw> so I can confirm the shallow understanding that cloud-init-per instance SOME_TAG echo 'madeit' will run once and only once unless instance-id changes
[22:55] <blackboxsw> I can also confirm that if the customer's <tag> changes, it'll keep running that command as if it were a new instance-id
[22:55] <esv> the strange thing is that customer keeps getting error messages that the file doesn't exist on every reboot
[22:55] <blackboxsw> so if <tag> changes across reboot, the script will run again
[22:56] <blackboxsw> root@dev-b:~# cloud-init-per instance WHATa /root/doit.sh 
[22:56] <blackboxsw> YEP
[22:56] <blackboxsw> root@dev-b:~# cloud-init-per instance WHATa /root/doit.sh 
[22:56] <blackboxsw> root@dev-b:~# cloud-init-per instance WHATb /root/doit.sh 
[22:56] <blackboxsw> YEP
[22:56] <blackboxsw> root@dev-b:~# cloud-init-per instance WHATb /root/doit.sh 
[22:56] <blackboxsw> sorry either instance-id delta or tag delta results in re-trigger
[22:56] <esv> told the customer to fix the script name and move it from bootcmd to runcmd.
[22:58] <blackboxsw> +1 that's easiest and runs once per instance (unless they need that script to run earlier in boot); runcmd will ensure it gets executed again if the instance-id changes on a subsequent boot, but it won't run every boot by default
[22:58] <esv> thnx
[22:58] <blackboxsw> no worries
[23:42] <blackboxsw> 16:18:57 err_msg    = "No such file or directory: b'lxc'"   well that's strange from our jenkins
[23:42] <blackboxsw> sure enough
[23:43] <blackboxsw> $ snap changes 
[23:43] <blackboxsw> ID   Status  Spawn               Ready               Summary
[23:43] <blackboxsw> 453  Done    today at 23:15 UTC  today at 23:16 UTC  Auto-refresh snap "lxd"
[23:43] <blackboxsw> $ date
[23:43] <blackboxsw> Thu Feb 10 23:42:41 UTC 2022
[23:43] <blackboxsw> rekicking the lxd_container job