[08:07] smoser: thank you very much, you've just saved me hours of investigation
[08:08] so I went ahead & tested the kernel in -proposed which does fix the bug, so it's now verification-done
[08:08] I'll go back to working on adapting the MP
=== otubo1 is now known as otubo
[11:25] caribou: \o/
[11:39] caribou: let me know if you need anything. It's always ok to bother me.
[11:48] robjo: https://bugs.launchpad.net/cloud-init/+bug/1779139 ?
[11:48] Ubuntu bug 1779139 in cloud-init "metadata api detection runs before network-online" [Undecided,Incomplete]
[11:48] you see that?
[11:49] smoser: yes, there is also an equivalent openSUSE bug
[11:49] I've not had time to poke at this.
[11:50] While the proposed patch of "After=network-online.target" supposedly avoids the problem, I know there are other issues related to waiting for the network
[11:50] So for now I removed the patch again from the SUSE package
[11:51] I think this may also be partially related to the dhcp client issue
[14:18] robjo: which dhclient issue?
[14:19] smoser: I couldn't find it
[14:19] I do remember discussing that, and ultimately I would like to have a sufficient python dhclient in cloud-init.
[14:19] but cloud-init uses dhclient to make a request and then looks at the lease file
[14:20] well, that fails on SUSE because the interface is controlled by wicked
[14:20] discussed the dhclient issue recently with blackboxsw and he said he'd put it on the agenda for the summit
[14:21] I added some notes to the bug, let me try again to find it
[14:21] but that proved to be more than an afternoon's worth of work.
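The lease-file approach described above (shell out to dhclient, then read the lease file it writes) can be sketched roughly as follows. This is an illustrative parser, not cloud-init's actual implementation; the sample lease text and the handled fields are assumptions based on dhclient's lease-file format.

```python
import re

def parse_dhclient_leases(content):
    """Parse dhclient-style lease blocks into a list of dicts.

    Simplified sketch: real lease files carry more fields, and a
    production parser would also track lease expiry times.
    """
    leases = []
    for block in re.findall(r"lease\s*{([^}]*)}", content):
        lease = {}
        for line in block.splitlines():
            line = line.strip().rstrip(";")
            if line.startswith("fixed-address"):
                lease["fixed-address"] = line.split()[-1]
            elif line.startswith("option "):
                # e.g. 'option subnet-mask 255.255.255.0'
                _, name, value = line.split(None, 2)
                lease[name] = value.strip('"')
        leases.append(lease)
    return leases

# Hypothetical lease file content, as dhclient might write it:
sample = """
lease {
  interface "eth0";
  fixed-address 192.168.1.10;
  option subnet-mask 255.255.255.0;
  option routers 192.168.1.1;
}
"""
print(parse_dhclient_leases(sample))
# -> [{'fixed-address': '192.168.1.10', 'subnet-mask': '255.255.255.0',
#     'routers': '192.168.1.1'}]
```

The caveat raised in the discussion still applies: this only reads the lease after the fact, whereas a pure-python dhclient would have to speak DHCP on an interface that does not yet have an address.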
all the example python dhclients oddly expect that your interface already has an address
[14:21] :)
[14:22] Here it is: lp#1733226
[14:23] https://bugs.launchpad.net/cloud-init/+bug/1733226
[14:23] Ubuntu bug 1733226 in cloud-init "cloud-init-local service fails on SUSE distros" [Undecided,New]
[14:35] thanks robjo
=== telling_ is now known as telling
[15:37] hello, how does cloud-init detect first boot? If I have tested it locally on an image, what do I have to do to ensure that the next time the image is run (in the cloud) first-boot directives will also run
[15:41] lorddoskias: "per instance" things are done when the instance-id found changes.
[15:41] instance-id varies by "datasource"
[15:42] the goal is that you do not have to 'clean' anything.
[15:43] smoser: so what I did is locally configure cloud-init and then execute systemctl start cloud-init-local, systemctl start cloud-init, etc.
[15:44] after I've verified that everything is working I uploaded this image to a private openstack-based cloud
[15:44] that should be sufficient, yes.
[15:44] when I run a new instance based on that image, cloud-init will work as if it's the first time this image is booted?
[15:44] you could also test locally by attaching a config-drive (openstack) or a "NoCloud" data disk.
[15:45] but yes, it should re-generate ssh keys, and other "per instance" things, as if it was new.
[15:45] if you want to clean state just to remove additional unused data, you can do that too... rm -Rf /var/lib/cloud/
[15:45] right. Now another question regarding the users module: I can see in the docs that users should be configured like so: users: - name: root
[15:45] however, the default config has users: - root
[15:46] the default config from where?
[15:46] from opensuse
[15:46] trunk does not have that... it uses a per-distro user.
[15:46] my question is what's the difference between having - name: root and - root
[15:49] well, ...
if it does work with just 'root' as the value, then it will probably just take defaults from system_info
[15:49] and you can specify more values with the name: username
[15:49] and such
[15:50] doesn't it work the same in the upstream cloud-init, i.e. you can have "- blah" or "- admin" or whatever?
[15:50] even here the documentation (https://cloudinit.readthedocs.io/en/latest/topics/examples.html?highlight=system_info#including-users-and-groups) gives examples as: users: - default - bob
[15:51] probably
[15:51] but this syntax is never explained
[15:52] well, the syntax is explained in the link you provided ('Valid Values')
[15:52] so if you say that
[15:52] - users: [root]
[15:52] I have a VM where cloud-init is run on every boot and I don't know where to start debugging. I just want it to be run once and not to regenerate ssh-keys on every boot.
[15:52] works, I believe you. It probably just picks 'root' as 'name:' and takes defaults for other fields.
[15:53] on first boot it finds its datasource (iso) and everything works fine. when I reboot it searches for different datasources (iso is not attached anymore)
[15:53] but I personally would take the more verbose format.
[15:54] smoser: yeah, makes sense
[15:55] seba: https://git.launchpad.net/cloud-init/tree/doc/examples/cloud-config.txt
[15:55] you want 'manual_cache_clean: True'
[15:55] or just
[15:57] hm. okay, I can see why I want this parameter. I just wonder why I didn't encounter this problem earlier
[15:57] never mind 'or just'. you can write that to a file in /etc/cloud/cloud.cfg.d/
[15:57] you had the cdrom there subsequently? or didn't notice it? if you're running bionic+ cloud-init should completely disable itself on subsequent runs.
[15:57] debian stretch
[15:58] it depends on cloud-init version and such.
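The two users: spellings compared above ("- root" vs "- name: root") differ only in shorthand: a bare string is treated as a user name, with every other field taking its default. A hypothetical normalization sketch (this is not cloud-init's actual code, and the function name is invented for illustration):

```python
def normalize_users(users):
    """Normalize a cloud-config 'users:' list.

    A bare string like "root" is shorthand for {"name": "root"};
    dict entries already carry explicit fields and pass through.
    Sketch only: the real module also merges per-distro defaults
    from system_info.
    """
    normalized = []
    for entry in users:
        if isinstance(entry, str):
            normalized.append({"name": entry})
        elif isinstance(entry, dict):
            normalized.append(dict(entry))
    return normalized

# Mixed shorthand and verbose entries, as seen in the docs examples:
print(normalize_users(["root", {"name": "bob", "groups": "sudo"}]))
# -> [{'name': 'root'}, {'name': 'bob', 'groups': 'sudo'}]
```

This is why the verbose "- name: root" form is preferred in the discussion: it makes the field explicit and leaves room to add other keys later.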
[15:58] I've also run it on xenial and bionic
[15:58] on ubuntu, the cloud-init-generator will disable cloud-init if it does not find a datasource
[15:58] currently cloud-init 0.7.9-2 on the host I encounter problems with
[15:59] that's older than I'd care to make guesses on
[15:59] manual_cache_clean is what you want though. and that goes back to 0.7.9 for sure
[15:59] noted and thx for the suggestion, will try it :)
[16:06] smoser, works for me, thx again
=== rezroo1 is now known as rezroo
[18:45] hrm, empty /etc/os-release on the copr build images. I'm adding a build to list /etc to see if we have other version files we can source, or maybe use lsb_release?
[18:45] is that there?
[18:45] yeah it looks there, just load_file looks like zero length
[18:45] maybe check /etc/lsb-release
[18:46] or lsb_release util as an option.
[18:46] checking to see if that util is even there
[18:46] but I have to wait 20 mins on each build/failure attempt
[18:46] https://copr.fedorainfracloud.org/coprs/blackboxsw/cloud-init/build/776371/
[18:46] I doubt it has lsb_release.
[18:47] more specifically https://copr-be.cloud.fedoraproject.org/results/blackboxsw/cloud-init/epel-6-x86_64/00776371-cloud-init/build.log.gz
[19:02] ugh
[19:02] powersj: why green here
[19:02] https://jenkins.ubuntu.com/server/view/cloud-init,%20curtin,%20streams/job/cloud-init-integration-proposed-a/26/console
[19:03] because the last command was green
[19:03] ugh
[19:13] found centos-release, system-release, redhat-release files in centos 6, might leverage one of them.
[19:13] redhat-release is probably the most ubiquitous, as I recall that from a loooong time ago in HP-land
[19:14] this build, when run, should give us all the content we need for any /etc/*release file in the failed centos6 env
[19:14] blackboxsw: that's what's broken the build?
[19:15] dpb1: there is an /etc/os-release in copr centos6 images (and various fedora rawhide images),
but it is an empty file
[19:15] so the build can't succeed in running get_linux_distro
[19:15] blackboxsw: is it acceptable to just stub it out to get the build passing?
[19:17] dpb1: we can stub it out, or make it also check for empty content in /etc/os-release.
[19:17] I'll have a fix for this today no prob. but just waiting on a copr build run
[19:17] * dpb1 nods
[19:17] https://copr.fedorainfracloud.org/coprs/blackboxsw/cloud-init/build/776389/ should have enough info for me to get a fix that will work for centos/redhat envs
[19:18] easy to just source a different /etc/redhat-release file if available instead, I think
[19:19] this garbage is why it's deprecated in python :)
[19:24] smoser: pushed a fix for the proposed testing
[19:25] yeah, our get_linux_distro is basically going to re-write platform.dist() as we start supporting more oses
[19:27] for a more limited set of distros, yes.
[19:27] yeah, it's not too bad
[19:35] https://pastebin.ubuntu.com/p/7xVR3dQ2yv/ .... ok. something to work with. but not optimal
[19:40] * smoser will check back in later. our launch-softlayer doesn't 'just work' for launching --proposed unfortunately. need to fix that somehow.
=== blackboxsw changed the topic of #cloud-init to: Reviews: http://bit.ly/ci-reviews | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting: Monday 7/2 16:00 UTC | cloud-init 18.3 released (06/20/2018)
[22:13] smoser: ok, build worked for centos6, not for rawhide https://copr-be.cloud.fedoraproject.org/results/blackboxsw/cloud-init/fedora-rawhide-i386/00776423-cloud-init/build.log.gz
[22:16] BUILDSTDERR: /var/tmp/rpm-tmp.vOrrcj: line 31: /usr/bin/python: No such file or directory ....
hrm
[22:20] or rather: mockbuild.exception.Error: Command failed:
[22:20] # /usr/bin/systemd-nspawn -q -M 39c142373fa449acb013a47ca64b0cf4 -D /var/lib/mock/775457-fedora-rawhide-i386-1531181651.602050/root -a --capability=cap_ipc_lock --bind=/tmp/mock-resolv.erqniz1j:/etc/resolv.conf --setenv=TERM=vt100 --setenv=SHELL=/bin/bash --setenv=HOME=/builddir --setenv=HOSTNAME=mock --setenv=PATH=/usr/bin:/bin:/usr/sbin:/sbin --setenv=PROMPT_COMMAND=printf "\033]0;\007"
[22:20] --setenv=PS1= \s-\v\$ --setenv=LANG=en_US.UTF-8 -u mockbuild bash --login -c /usr/bin/rpmbuild -bb --target i686 --nodeps /builddir/build/SPECS/cloud-init.spec
[22:20] ... bad paste, sorry
[22:24] yeah, so the overall build from spec is failing, but no tracebacks about why in rawhide ... hrm
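The fix discussed earlier (treat an empty /etc/os-release as missing and fall back to a redhat-release style one-liner) could look roughly like this. This is an illustrative sketch, not the actual get_linux_distro patch; the function name and the fallback regex are assumptions.

```python
import re

def distro_from_release_files(os_release_text, redhat_release_text=""):
    """Return (distro_id, version) from release-file contents.

    Prefers os-release key=value content; if that is empty (as on the
    copr centos6 images discussed above), falls back to parsing a
    redhat-release style line like "CentOS release 6.10 (Final)".
    """
    if os_release_text.strip():
        info = {}
        for line in os_release_text.splitlines():
            if "=" in line:
                key, _, val = line.partition("=")
                info[key] = val.strip().strip('"')
        return info.get("ID", ""), info.get("VERSION_ID", "")
    # Fallback: "<Name> release <version> (...)" one-liner.
    m = re.match(r"(\w+) (?:Linux )?release ([\d.]+)", redhat_release_text)
    if m:
        return m.group(1).lower(), m.group(2)
    return "", ""

# Empty os-release forces the redhat-release fallback:
print(distro_from_release_files("", "CentOS release 6.10 (Final)"))
# -> ('centos', '6.10')
print(distro_from_release_files('ID=ubuntu\nVERSION_ID="18.04"'))
# -> ('ubuntu', '18.04')
```

Keeping this as pure string parsing mirrors the motivation in the log: platform.dist() is deprecated, so cloud-init ends up re-implementing a narrower version of it for the distros it supports.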