[08:07] <caribou> smoser: thank you very much, you've just saved me hours of investigation
[08:08] <caribou> so I went ahead & tested the kernel in -proposed which does fix the bug, so it's now verification-done
[08:08] <caribou> I'll go back to working on adapting the MP
[11:25] <smoser> caribou: \o/
[11:39] <smoser> caribou: let me know if you need anything. it's always ok to bother me.
[11:48] <smoser> robjo: https://bugs.launchpad.net/cloud-init/+bug/1779139 ?
[11:48] <smoser> you see that ?
[11:49] <robjo> smoser: yes, there is also an equivalent openSUSE bug
[11:49] <robjo> I've not had time to poke at this.
[11:50] <robjo> While the proposed patch of "After=network-online.target" supposedly avoids the problem I know there are other issues related to waiting for the network
[11:50] <robjo> So for now I removed the patch again from the SUSE package
[11:51] <robjo> I think this may also be partially related to the dhcp client issue
[14:18] <smoser> robjo: which dhclient issue?
[14:19] <robjo> smoser: I couldn't find it
[14:19] <smoser> i do remember discussing that. and ultimately i would like to have a sufficient python dhclient in cloud-init.
[14:19] <robjo> but cloud-init uses dhclient to make a request and then looks at the lease file
[14:20] <robjo> well that fails on SUSE because the interface is controlled by wicked
[14:20] <robjo> discussed the dhclient issue recently with blackboxsw and he said he'd put it on the agenda for the summit
[14:21] <robjo> I added some notes to the bug, let me try again to find it
[14:21] <smoser> but that proved to be more than an afternoon's worth of work. all the example python dhclients oddly expect that your interface already has an address
[14:21] <smoser> :)
[14:22] <robjo> Here it is lp#1733226
[14:23] <robjo> https://bugs.launchpad.net/cloud-init/+bug/1733226
[14:35] <smoser> thanks robjo
[15:37] <lorddoskias> hello, how does cloud-init detect first boot? if i have tested it locally on an image, what do i have to do to ensure that the next time the image is run (in the cloud) first boot directives will also run
[15:41] <smoser> lorddoskias: "per instance" things are done when the instance-id found changes.
[15:41] <smoser> instance-id varies by "datasource"
[15:42] <smoser> the goal is that you do not have to 'clean' anything.
[15:43] <lorddoskias> smoser: so what i did is locally configure cloud-init and then execute systemctl start cloud-init-local, systemctl start cloud-init, etc.
[15:44] <lorddoskias> after i've verified that everything is working i uploaded this image to a private openstack-based cloud
[15:44] <smoser> that should be sufficient, yes.
[15:44] <lorddoskias> when i run a new instance based on that image, cloud-init will work as if it's the first time this image is booted?
[15:44] <smoser> you could also test locally by attaching a config-drive (openstack) or a "NoCloud" data disk.
[15:45] <smoser> but yes, it should re-generate ssh keys, and other "per instance" things. as if it was new.
[15:45] <smoser> if you want to clean state just to remove additional unused data, you can do that too... rm -Rf /var/lib/cloud/
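For context, a minimal NoCloud meta-data file looks roughly like this (the values are invented for illustration); the instance-id here is what cloud-init compares across boots to decide whether "per instance" modules should run again:

```yaml
# meta-data on a NoCloud seed disk/ISO (filesystem label "cidata",
# paired with a user-data file); values are only illustrative
instance-id: iid-local01
local-hostname: my-test-vm
```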
[15:45] <lorddoskias> right. Now another question regarding the users module: i can see in the docs that users should be configured like so: users: - name: root
[15:45] <lorddoskias> however, the default config has users: - root
[15:46] <smoser> the default config from where?
[15:46] <lorddoskias> from opensuse
[15:46] <smoser> trunk does not have that... it uses a per-distro user.
[15:46] <lorddoskias> my question is what's the difference between having - name: root and - root
[15:49] <smoser> well, ... if it does work with just 'root' as the value, then it will probably just take defaults from system_info
[15:49] <smoser> and you can specify more values with the name: username
[15:49] <smoser> and such
[15:50] <lorddoskias> doesn't it work the same in the upstream cloud-init i.e. you can have "- blah" or "- admin" or whatever?
[15:50] <lorddoskias> even here the documentation (https://cloudinit.readthedocs.io/en/latest/topics/examples.html?highlight=system_info#including-users-and-groups) gives examples as: users: - default - bob
[15:51] <smoser> probably
[15:51] <lorddoskias> but this syntax is never explained
[15:52] <smoser> well, the syntax is explained in the link you provided ('Valid Values')
[15:52] <smoser> so if you say that
[15:52] <smoser> users: [root]
[15:52] <seba> I have a VM where cloud init is run on every boot and I don't know where to start debugging. I just want it to be run once and not to regenerate ssh-keys on every boot.
[15:52] <smoser> works, i believe you. it probably just picks 'root' as 'name:' and takes defaults for other fields.
[15:53] <seba> on first boot it finds its datasource (iso) and everything works fine. when I reboot it searches for different datasources (iso is not attached anymore)
[15:53] <smoser> but i personally would take the more verbose format.
[15:54] <lorddoskias> smoser: yeah, makes sense
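Roughly what the two forms discussed above look like in a #cloud-config; the username and fields in the verbose entry are invented for illustration:

```yaml
#cloud-config
# Short form: a bare name takes every default from system_info, e.g.
#   users:
#     - default
#     - root
# Verbose form: name: plus whichever fields you want to override
users:
  - default
  - name: bob
    groups: [sudo]
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAA... bob@example
```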
[15:55] <smoser> seba: https://git.launchpad.net/cloud-init/tree/doc/examples/cloud-config.txt
[15:55] <smoser> you want 'manual_cache_clean: True'
[15:55] <smoser> or just
[15:57] <seba> hm. okay, I can see why I want this parameter. I just wonder why I didn't encounter this problem earlier
[15:57] <smoser> never mind 'or just'.  you can write that to a file in /etc/cloud/cloud.cfg.d/
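A sketch of what that drop-in could look like (the filename is arbitrary):

```yaml
# /etc/cloud/cloud.cfg.d/99-manual-cache.cfg
# keep the cached datasource across reboots instead of searching again;
# wipe /var/lib/cloud yourself when you want first-boot behaviour back
manual_cache_clean: True
```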
[15:57] <smoser> you had the cdrom there subsequently? or didn't notice it? if you're running bionic+ cloud-init should completely disable itself on subsequent runs.
[15:57] <seba> debian stretch
[15:58] <smoser> it depends on cloud-init version and such.
[15:58] <seba> I've also run it on xenial and bionic
[15:58] <smoser> on ubuntu, the cloud-init-generator will disable cloud-init if it does not find a datasource
[15:58] <seba> currently cloud-init 0.7.9-2 on the host I encounter problems with
[15:59] <smoser> that's older than i'd care to make guesses on
[15:59] <smoser> manual_cache_clean is what you want though. and that goes back 0.7.9 for sure
[15:59] <seba> noted and thx for the suggestion, will try it :)
[16:06] <seba> smoser, works for me, thx again
[18:45] <blackboxsw> hrm empty /etc/os-release on the copr build images. I'm adding a build to list /etc to see if we have other version files we can source or maybe use lsb_release?
[18:45] <smoser> is that there?
[18:45] <blackboxsw> yeah it looks like it's there, it's just that load_file returns zero-length content
[18:45] <blackboxsw> maybe check /etc/lsb-release
[18:46] <blackboxsw> or lsb_release util as an option.
[18:46] <blackboxsw> checking to see if that util is even there
[18:46] <blackboxsw> but I have to wait 20 mins on each build/failure attempt
[18:46] <blackboxsw> https://copr.fedorainfracloud.org/coprs/blackboxsw/cloud-init/build/776371/
[18:46] <smoser> i doubt it has lsb_release.
[18:47] <blackboxsw> more specifically https://copr-be.cloud.fedoraproject.org/results/blackboxsw/cloud-init/epel-6-x86_64/00776371-cloud-init/build.log.gz
[19:02] <smoser> ugh
[19:02] <smoser> powersj: why green here
[19:02] <smoser>  https://jenkins.ubuntu.com/server/view/cloud-init,%20curtin,%20streams/job/cloud-init-integration-proposed-a/26/console
[19:03] <powersj> because the last command was green
[19:03] <powersj> ugh
[19:13] <blackboxsw> found centos-release system-release redhat-release files in centos 6, might leverage one of them.
[19:13] <blackboxsw> redhat-release is probably the most ubiquitous as I recall that from a loooong time ago in HP-land
[19:14] <blackboxsw> this build, when run, should give us all the content we need for any /etc/*release file in the failed centos6 env
[19:14] <dpb1> blackboxsw: that's what's broken the build?
[19:15] <blackboxsw> dpb1: there is an /etc/os-release in copr centos6 images (and various fedora rawhide images). but it is an empty file
[19:15] <blackboxsw> so the build can't succeed in running get_linux_distro
[19:15] <dpb1> blackboxsw: is it acceptable to just stub it out to get the build passing?
[19:17] <blackboxsw> dpb1: we can stub it out, or make it also check for empty content in /etc/os-release.
[19:17] <blackboxsw> I'll have a fix for this today no prob. but just waiting on a copr build run
[19:17]  * dpb1 nods
[19:17] <blackboxsw> https://copr.fedorainfracloud.org/coprs/blackboxsw/cloud-init/build/776389/ should have enough info for me to get a fix that will work for centos/redhat envs
[19:18] <blackboxsw> easy to just source /etc/redhat-release instead if it's available, I think
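A rough sketch of that fallback, assuming the approach described above; this is not the actual cloud-init util.get_linux_distro code, and the helper name is invented:

```python
import os


def _distro_from_release_files():
    """Best-effort (distro, version) lookup when /etc/os-release is empty."""
    os_release = '/etc/os-release'
    if os.path.exists(os_release) and os.path.getsize(os_release) > 0:
        # normal path: parse the KEY=value pairs (ID, VERSION_ID, ...)
        with open(os_release) as fh:
            data = dict(line.strip().split('=', 1)
                        for line in fh if '=' in line)
        return data.get('ID', '').strip('"'), data.get('VERSION_ID', '').strip('"')
    # copr centos6 case: /etc/os-release exists but is empty, so fall back
    # to redhat-style release files, e.g. "CentOS release 6.10 (Final)"
    for path in ('/etc/centos-release', '/etc/redhat-release',
                 '/etc/system-release'):
        if os.path.exists(path):
            with open(path) as fh:
                words = fh.readline().strip().split()
            name = words[0].lower() if words else ''
            version = next((w for w in words if w[0].isdigit()), '')
            return name, version
    return '', ''
```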
[19:19] <smoser> this garbage is why it's deprecated in python :)
[19:24] <powersj> smoser: pushed a fix for the proposed testing
[19:25] <blackboxsw> yeah our get_linux_distro is basically going to re-write platform.dist() as we start supporting more OSes
[19:27] <smoser> for a more limited set of distros, yes.
[19:27] <blackboxsw> yeah , it's not too bad
[19:35] <blackboxsw> https://pastebin.ubuntu.com/p/7xVR3dQ2yv/.... ok. something to work with. but not optimal
[19:40]  * smoser will check back in later. our launch-softlayer doesn't 'just work' for launching --proposed unfortunately. need to fix that somehow.
[22:13] <blackboxsw> smoser: ok, build worked for centos6, not for rawhide https://copr-be.cloud.fedoraproject.org/results/blackboxsw/cloud-init/fedora-rawhide-i386/00776423-cloud-init/build.log.gz
[22:16] <blackboxsw> BUILDSTDERR: /var/tmp/rpm-tmp.vOrrcj: line 31: /usr/bin/python: No such file or directory .... hrm
[22:20] <blackboxsw> or rather:                mockbuild.exception.Error: Command failed:
[22:20] <blackboxsw>  # /usr/bin/systemd-nspawn -q -M 39c142373fa449acb013a47ca64b0cf4 -D /var/lib/mock/775457-fedora-rawhide-i386-1531181651.602050/root -a --capability=cap_ipc_lock --bind=/tmp/mock-resolv.erqniz1j:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;<mock-chroot>\007"
[22:20] <blackboxsw> --setenv=PS1=<mock-chroot> \s-\v\$  --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target i686 --nodeps /builddir/build/SPECS/cloud-init.spec
[22:20] <blackboxsw> ... bad paste sorry
[22:24] <blackboxsw> yeah so the overall build from spec is failing, but no tracebacks about why in rawhide ... hrm