[13:47] <Hook25> Hello, any lxd setup I try to create on my host leads cloud-init to remain stuck on  status "running" with detail "DataSourceLXD". I'm using the lxd snap. How should I go about debugging this issue?
[13:58] <falcojr> Hook25: cloud-init is made up of several boot services, so it is likely one of the services isn't starting for some reason. How are you launching the container? Does "systemctl --failed" or "systemctl | grep cloud" say anything interesting? If so, you can drill down into each particular service. /run/cloud-init/status.json should tell you what has started and stopped, and when. If /var/log/cloud-init.log contains any warnings, errors, or tracebacks, that might also point to the issue
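A sketch of the triage steps falcojr describes, bundled into one read-only script; the paths are cloud-init's defaults, and each step is guarded so it is simply skipped on systems where the tool or file is absent:

```shell
#!/bin/sh
# Quick cloud-init triage, per falcojr's suggestions above.
# All steps are read-only and guarded against missing tools/files.

if command -v systemctl >/dev/null; then
    # Any systemd unit that failed outright?
    systemctl --failed --no-pager
    # State of each cloud-init boot stage (init-local, init, config, final).
    systemctl --no-pager | grep -i cloud || true
fi

# Per-stage start/finish timestamps and errors recorded by cloud-init.
if [ -f /run/cloud-init/status.json ]; then
    cat /run/cloud-init/status.json
fi

# Warnings, errors, or tracebacks in the main log.
if [ -f /var/log/cloud-init.log ]; then
    grep -E 'WARN|ERROR|Traceback' /var/log/cloud-init.log || true
fi
```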
[14:06] <Hook25> systemctl lists snap.lxd.activate.service as failed (no idea why, or if it is relevant). As for cloud services, the following are reported as dead: cloud-init.target, cloud-final.service, cloud-config.service, while cloud-config.target is active and all the others are either exited or listening. The log file doesn't list anything particularly useful at
[14:06] <Hook25> first glance (only debug messages, no errors/tracebacks)
[14:07] <Hook25> as for /run/cloud-init/status.json, it lists the datasource "DataSourceLXD" as finished, so maybe that is the issue, as the overall status says it is currently running. The stage is null
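For reference, a hedged sketch of how to pick apart status.json in this situation. The sample data below is invented to mirror the symptom Hook25 describes (datasource recorded, overall status still "running" because a later stage never started); the field names follow cloud-init's status.json "v1" layout:

```shell
# Invented sample: the datasource was detected and the init stage
# finished, but modules-final never started, so cloud-init status
# stays "running".
cat > /tmp/status.json <<'EOF'
{"v1": {"stage": null, "datasource": "DataSourceLXD",
        "init": {"start": 10.0, "finished": 12.0, "errors": []},
        "modules-final": {"start": null, "finished": null, "errors": []}}}
EOF

# Print which stages finished and which are stuck or never reached.
python3 - <<'EOF'
import json

v1 = json.load(open("/tmp/status.json"))["v1"]
for stage, info in v1.items():
    if isinstance(info, dict):
        state = "finished" if info["finished"] else "NOT finished"
        print(f"{stage}: {state}")
EOF
```

A stage with no `finished` timestamp is the one to chase in the journal.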
[14:22] <falcojr> `systemd-analyze critical-chain`? If cloud-init is waiting for lxd to finish, that could be your problem.
[14:23] <Satyr> Hi chat!
[14:23] <Satyr> I've hit a difficulty when trying to build a qcow image via packer with cloud-init on Ubuntu 22.04.3.
[14:23] <Satyr> I need to use fstype=xfs for the root partition and have found no way to do it in 20+ iterations.
[14:23] <Satyr> This configuration (attached) worked for PXE, but does not work for qcow.
[14:23] <Satyr>   storage:
[14:23] <Satyr>     config:
[14:23] <Satyr>     - ptable: gpt
[14:23] <Satyr>       match:
[14:23] <Satyr>         serial: "DELL*"
[14:23] <Satyr>       wipe: superblock-recursive
[14:23] <Satyr>       preserve: false
[14:23] <Satyr>       grub_device: false
[14:23] <Satyr>       type: disk
[14:23] <Satyr>       id: target-disk
[14:23] <Satyr>     - device: target-disk
[14:23] <Satyr>       size: 1G
[14:23] <Satyr>       wipe: superblock
[14:23] <Satyr>       flag: boot
[14:23] <Satyr>       id: format-1
[14:23] <Satyr>     - path: /
[14:23] <Satyr>       device: format-1
[14:23] <Satyr>       type: mount
[14:23] <Satyr>       id: mount-1
[14:23] <Satyr>     - path: /boot/efi
[14:23] <Satyr>       device: format-0
[14:23] <Satyr>       type: mount
[14:23] <Satyr>       id: mount-0
[14:23] <Satyr>     version: 2
[14:23] <Satyr> I get the error:
[14:23] <Satyr> FAIL: autoinstall config did not create needed bootloader partition
[14:23] <Satyr> I dug into the code and saw that the error occurs when no disk is specified with grub_device: true (although this works with PXE).
[14:23] <Satyr> When I set grub_device: true on a disk rather than a partition, I can see that the partitions are created correctly, but then when the script does chroot /target && grub-install, it tries to install grub on both the disk and the partition, and gives another error:
[14:23] <Satyr> finish: cmd-install/stage-curthooks/builtin/cmd-curthooks/install-grub:FAIL:installing grub to target devices
[14:23] <Satyr> finish: cmd-install/stage-curthooks/builtin/cmd-curthooks/configuring-bootloader: FAIL:configuring target system bootloader
[14:23] <Satyr> And I realized that if I use layout, subiquity hard-codes ext4 for Ubuntu in the code.
[14:23] <Satyr> So the question is: is there any way to change the default filesystem for the root partition to xfs in the Ubuntu ISO image, or some other storage config that makes everything go through without errors?
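For what it's worth, curtin-style storage configs normally spell out explicit partition and format actions, which the paste above is missing; a hypothetical sketch of that shape with an xfs root follows. The ids and sizes here are made up, and this is subiquity/curtin territory rather than cloud-init, so treat it only as a starting point:

```yaml
storage:
  version: 2
  config:
  - type: disk
    id: target-disk
    ptable: gpt
    wipe: superblock-recursive
    preserve: false
    grub_device: true          # grub targets the disk, not a partition
  - type: partition            # ESP
    id: part-efi
    device: target-disk
    size: 512M
    flag: boot
  - type: partition            # root, fills the rest of the disk
    id: part-root
    device: target-disk
    size: -1
  - type: format
    id: format-efi
    volume: part-efi
    fstype: fat32
  - type: format
    id: format-root
    volume: part-root
    fstype: xfs                # the xfs root Satyr is after
  - type: mount
    id: mount-root
    device: format-root
    path: /
  - type: mount
    id: mount-efi
    device: format-efi
    path: /boot/efi
```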
[14:23] <falcojr> Satyr: please use a pastebin service for large pastes
[14:24] <Hook25> root@jammy:~# systemd-analyze critical-chain
[14:24] <Hook25> Bootup is not yet finished (org.freedesktop.systemd1.Manager.FinishTimestampMonotonic=0).
[14:24] <Hook25> Please try again later.
[14:24] <Hook25> Hint: Use 'systemctl list-jobs' to see active jobs
[14:24] <Hook25> list-jobs says that snapd.seeded.service has a start job running (job 109) while everything else is waiting, could this be the issue here?
[14:25] <falcojr> Hook25: yes, that looks like your issue
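A few hedged follow-up commands to confirm the seeding hang. `snap changes` and `journalctl` are standard snapd/systemd CLI tools, but the exact output depends on the image; everything here is read-only and guarded against missing tools:

```shell
#!/bin/sh
# Inspect why snapd.seeded.service never finishes.

if command -v snap >/dev/null; then
    # Look for a "seed" change that is stuck in Doing/Do state.
    snap changes || true
    # To follow the seed change live (NOTE: blocks until seeding
    # completes), uncomment:
    # snap watch --last=seed
fi

if command -v journalctl >/dev/null; then
    # Recent log lines from the seeding unit itself.
    journalctl -u snapd.seeded.service --no-pager | tail -n 20 || true
fi
```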
[14:26] <falcojr> Satyr: your question looks to be about subiquity, but this channel is for cloud-init. You may find one or two people here that might have an idea, but none of us are subiquity experts
[14:27] <Hook25> falcojr tyvm
[14:29] <Satyr> falcojr thx, I made an error
[14:50] <falcojr> Satyr: try #ubuntu-server
[17:10] <minimal> as we're only 1 week away from the scheduled release of 24.1, has a fix for #4783 been forgotten about?
[20:11] <blackboxsw> minimal: thanks for the bump on that issue. I agree we need it for 24.1; I've added the 'priority' label to it, and it already has the 24.1 label, so it'll get in before 24.1 is cut.
[21:20] <minimal> blackboxsw: thanks. I see holmanb has recently tagged #4772 as "24.1" for me. I'd also like to get #4876 into 24.1 if possible, it's Alpine specific
[21:21] <blackboxsw> yep, agreed, just got the notification on that too. Thanks for the re-open on that last week; will try to clear it out today. LGTM, just want to retest in an Alpine container
[21:24] <blackboxsw> holman: your InfiniBand PR also landed, right? (related to dhcpcd support on Azure) Anything else we are waiting on to get dhcpcd support into Ubuntu noble?
[21:25] <blackboxsw> if nothing else, maybe end of day today we make a cut to publish where things are at w.r.t. tip of main, provided Jenkins test runs only have explainable errors (such as deb822 warning messages, etc.)
[21:25] <blackboxsw> holman: that and your apport fixes that are in flight
[22:23] <falcojr> I don't actually know why #4783 would be required for 24.1. It's purely a testing issue
[22:26] <minimal> falcojr: it "breaks" my cloud-init packaging for Alpine, which runs tests as part of the packaging. I assume some other distros' packaging could be similarly affected
[22:27] <minimal> for Alpine, packaging is expected to run tests unless there is a "valid" reason not to do so
[22:27] <falcojr> gotcha, that makes sense
[22:29] <minimal> currently I'm having to patch my local "24.1" package, as Alpine has a newer version of py3-jsonschema in the "Edge" (i.e. what will be the next Alpine release) repo
[22:30] <minimal> if c-i 24.1 comes out without a fix for #4783 then I'll have to add a workaround patch to the published Alpine Edge cloud-init 24.1 package