[01:32] blackboxsw: i pushed there. === Wulf4 is now known as Wulf [06:09] moin. how do i set up nocloud correctly? i added this datasource: NoCloud: to cloud-init.cfg but im getting an error on first boot. and on disabling ec2 im getting an error too. http://paste.ubuntu.com/26116710/ [06:10] version is now cloud-init-17.1+46.g7acc9e68-1.el7.centos.noarch [06:11] strangely this didnt happen yesterday. but now i created a new vmware image template and then added a host to foreman [09:00] maybe i should use ansible to set up puppet and the rest [09:04] i was using cloud-init with openstack and that was ok. but that nocloud datasource thing i dont get [14:27] ybaumy: i'm not sure about the first issue with ec2 disable. there is probably something in /var/log/cloud-init-output.log or /var/log/cloud-init.log (pastebinning the whole of both would be helpful) [14:27] for nocloud you'd need to give more data... not sure what would cause there to be no md (metadata) on that object. [14:28] for setting up nocloud, i suggest using 'cloud-localds' http://paste.ubuntu.com/26118885/ [14:29] but basically nocloud is an attached disk, in either iso or vfat format, that has 'user-data' and 'meta-data' files on it. meta-data is yaml formatted, and expected to have 'instance-id'. [14:56] smoser: i think i dont need all of those nocloud parameters. i only want to set up puppet and thats about it. i feel like im bending over backwards just for that. [14:56] it's not a lot of parameters. [14:56] instance-id [14:57] i'm only suggesting using cloud-localds to get you a disk so you dont have to worry about implementation, and you can see a working example. [14:58] ok [15:02] smoser: blackboxsw rharper Could I please get some feedback on the chrony support?
See mailing list message dated 11/14 "RFQ chrony support" which should of course have been "RFC" [15:03] :-( [15:03] yeah, that is fair === shardy is now known as shardy_afk [15:13] smoser: how do i create a new instance-id for each cloned vmware image [15:14] i guess they have to be unique [15:15] hmm i could make a firstboot script that runs before cloud-init [15:16] ybaumy: just attach different data in a disk, no? [15:16] i'm confused. [15:16] instance-id=$(uuidgen) [15:16] so i'm missing something for sure. [15:16] im confused as well [15:17] ok let me explain what i want to do [15:18] i have a vmware template that is centos. then i want to automatically deploy this image with foreman and register it to the katello part [15:18] therefore i need puppet integration [15:18] the puppet part is still not working 100%, but 90% [15:19] so i thought using the nocloud datasource is the best solution when running cloud-init [15:19] ok. so i think you have a couple options [15:19] a.) vmware has an "OVF" datasource that should mostly do what you're wondering about. [15:20] i really have no familiarity with vmware though. [15:21] b.) when you create a new "instance" (that is, 'automatically deploy') you can create a second disk and attach it with the necessary NoCloud data [15:22] c.) you can just put NoCloud data into each image hard coded in /var/lib/cloud/seed/nocloud-net. it will just never change, so you'll have to find some other way to identify individual systems or give them specific instructions. [15:22] the 'instance-id' is really just a bit of information specific to this instance. [15:22] it allows cloud-init to do things that should only be done on a "new instance". [15:22] smoser: c) sounds the best option. since all the custom parts are run through puppet when it is set up [15:23] but what does puppet use to determine what their role is [15:23] if 100 things come and say "hi puppet master" all at the same time, how does it decide what is supposed to do what?
[15:23] foreman has hostgroups, which map to a customer, and all those hostgroups have different parameters for puppet [15:24] and hostgroups are based on ? [15:24] hostnames ? which you're providing ? [15:24] how does the instance know its hostname? [15:24] so i just create a host, put it in a hostgroup, and it inherits all parameters for that hostgroup [15:25] hostname comes through dhcp on the first setup before customizing [15:25] hostgroups are just groups with different attributes and parameters [15:26] its really cool [15:27] i will try c) now [15:27] lets see how far i get [15:28] are you using cloud-init for anything else ? [15:28] does it do anything for you? [15:28] nope just puppet i need [15:29] then you can just disable cloud-init entirely. [15:29] ok and use what? [15:29] nothing ? [15:29] you said it doesnt do anything for you [15:29] well there is the problem with proxy servers on subnets [15:30] so i need an instance i can tell which proxy server the puppet.conf should hold [15:30] proxy server=puppetmaster [15:31] does foreman provide any endpoint that the node can query to learn about itself? other than the dhcp server ? [15:31] but maybe you are right and i should just script it myself [15:32] well there is remote ssh execution [15:32] for example [15:32] i could use ansible to register the host and set up puppet as well [15:33] it seems like foreman kind of needs a mechanism by which a node can query its parameters [15:33] which is basically what a datasource is to cloud-init [15:33] then the node would look and see what proxy settings it had and what it was supposed to do.
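Pulling together the NoCloud pieces smoser describes above — a seed with a cloud-config 'user-data' plus a YAML 'meta-data' carrying 'instance-id', baked into the image for option c) or fed to cloud-localds for option b), with a uuidgen-style unique id per clone — a minimal sketch might look like this. The hostname, puppetmaster address, and seed path are invented for illustration:

```python
import os
import uuid

def write_nocloud_seed(seed_dir, hostname="node01",
                       puppet_server="puppetmaster.example.com"):
    """Write a minimal NoCloud seed: YAML 'meta-data' with a fresh
    'instance-id' (so each clone looks like a new instance) and a
    cloud-config 'user-data' that only configures puppet."""
    os.makedirs(seed_dir, exist_ok=True)
    meta_data = (
        "instance-id: iid-%s\n" % uuid.uuid4()
        + "local-hostname: %s\n" % hostname
    )
    user_data = (
        "#cloud-config\n"
        "puppet:\n"
        "  conf:\n"
        "    agent:\n"
        "      server: %s\n" % puppet_server
    )
    with open(os.path.join(seed_dir, "meta-data"), "w") as f:
        f.write(meta_data)
    with open(os.path.join(seed_dir, "user-data"), "w") as f:
        f.write(user_data)

# for option b) the same two files would be handed to cloud-localds
# (cloud-localds seed.img user-data meta-data) to build an attachable
# disk; for option c) seed_dir would be /var/lib/cloud/seed/nocloud-net
# inside the image template
write_nocloud_seed("./nocloud-seed")
print(sorted(os.listdir("./nocloud-seed")))  # ['meta-data', 'user-data']
```

Generating the instance-id at seed-write time is what makes per-instance cloud-init modules re-run on each clone; a static, hard-coded seed (plain option c) would run them only once per image lineage.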
[15:33] thats what i thought yesterday [15:34] other wise you're just deploying dozens of clones that differ only in hostname [15:34] if you can provide hostnames you could manufacture them such that they indicate some information [15:34] group1-name1 [15:35] there are clones as long as they havent contacted the puppetmaster [15:35] and then in code inside key off 'group1' to know the hard coded proxy server. [15:35] but you are right [15:36] i should find another way to setup servername in puppet.conf [15:36] this just doesnt work [15:37] since cloud-init.cfg is not generated [15:37] depending on the hostgroup [15:38] you should convince foreman that they need a node-facing endpiont [15:38] and then write a datasource for cloud-init to utilize that. [15:39] it kind of lacks this feature i must saa [15:39] say [15:39] you need to have *some* id for a node [15:40] https://theforeman.org/plugins/foreman_ansible/0.x/index.html i guess this what im trying now [15:41] but in the end its the same problem [15:41] i need a foreman_url [15:41] which is the proxy puppetmaster in the end [15:41] i have think about it === shardy_afk is now known as shardy [15:48] i think i need a installtion network [15:48] then its always the same [15:48] server [15:49] if i dont have a datasource [15:49] there is no other way i can see === Wulf4 is now known as Wulf [17:24] robjo: https://hackmd.io/CwEwHARgbAzDDsBaArPOjhQMYEZEEMBOHQjQkeAUxgCYb9h8og== [17:24] blackboxsw, rharper [17:24] also. interested in feedback there. [18:23] smoser: added some comments [18:37] @blackboxsw sorry to bump, but can you take a look when you get a chance? https://code.launchpad.net/~dojordan/cloud-init/+git/cloud-init/+merge/334341 [18:48] Good deal dojordan I have context on an azure deployment right now so I'll test it out and see if we can labd it today [18:48] 'land' rather [18:52] great, thanks! 
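smoser's 'group1-name1' suggestion earlier in the log — manufacture hostnames so they encode a group, then key off the group token in code to pick the hard-coded proxy/puppetmaster — might look like this. The mapping table and server names are purely illustrative:

```python
# hypothetical group -> puppetmaster/proxy mapping, baked into the image
GROUP_PROXIES = {
    "group1": "puppetmaster1.example.com",
    "group2": "puppetmaster2.example.com",
}

def proxy_for_hostname(hostname, default="puppetmaster.example.com"):
    """Split a 'group1-name1' style hostname on the first '-' and use
    the leading group token to select the server for puppet.conf."""
    group = hostname.split("-", 1)[0]
    return GROUP_PROXIES.get(group, default)

print(proxy_for_hostname("group1-name1"))  # puppetmaster1.example.com
print(proxy_for_hostname("oddball-host"))  # falls back to the default
```

This only works as far as the DHCP-provided hostnames are trustworthy, which is exactly the limitation discussed above: without a node-facing foreman endpoint, the hostname is the only per-node signal available.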
feel free to ping with questions [18:53] also note that the platform doesn't publicly support it yet so I've been testing the azure fabric changes in private test clusters. I have the logs from that if need be [19:00] smoser: is there an easy way to modify the proposed-branch and merge-into for an MP ? [19:05] robjo: thanks. i responded there. [19:05] we can carry on here. also [19:05] rharper: ? [19:05] https://code.launchpad.net/~cloud-init-dev/cloud-init/+git/cloud-init/+merge/334733 [19:05] the targets need swapping [19:05] :) [19:10] rharper: so.. interestingly, i could 'resubmit' that proposal and swap the source and target [19:10] which is not really re-submitting [19:11] but i only saw that i could do that after i had already submitted another. [19:11] so, commented there to that effect. [19:11] ok, thanks [19:11] I didn't know that either [19:16] smoser: ntp discuss sounds fine, you've a few isc-dhcp where I think you mean isc-ntp; [19:27] more comments in the hackmd.io document, but maybe we should move the discussion here to be more interactive? [19:33] ah. yeah. probably. [19:33] robjo: yes that is fine. sorry [19:34] To my last comment, my concern is primarily with the "custom image builders" and the expectation that is implied in "auto", i.e. it just works [19:35] rharper: fixed those. [19:36] If we use auto and there's no other mechanism then basically the cloud-init package on SUSE would have to supply a /etc/cloud/cloud.cfg.d/00-distro.cfg [19:37] which of course the "custom image builder" can mess up more easily than having something in the cloud-init code that handles "auto" in some deterministic way that is "distro default" compliant [19:37] But maybe I am overthinking the problem [19:37] robjo: in your scenario, wouldnt the user have likely had cloud-init installed from a package you maintain ? [19:37] in which case it would have a setting, not 'auto'.
[19:37] and would Depend on the right client [19:38] Well it appears that lots of custom image builders decide to clobber the default cloud.cfg and thus I'd say the answer is no [19:39] "I changed something and broke it" [19:39] ? [19:39] I see people read the doc, then put in their own cloud.cfg and set "ntp: enabled", done [19:39] right, I would expect this 'auto' problem only to show up on Sles/OpenSuse if the custom image builder is not installing stock sles cloud-init via yast, but rolling their own off of lp:cloud-init master [19:39] wouldn't the distro object itself in code have a preference? the system config is for overriding the built-in-code preference, no ? [19:40] well the "in-code" preference only works if I have an "if distro.version == X:" block [19:40] much like how we set locale by default, or other distro specific settings: default in the code, but allow system settings to override; the default net renderer is another like these, where code has a default preference while allowing system config to override [19:40] 'auto' will generally do the right thing. You're free to have 'auto' on Suse do a different "right thing" than on Ubuntu. [19:40] which means I have to read the os-release file and that gets me back to my testing issue :( [19:40] But I think giving preference to already installed clients versus non-installed clients is important. [19:41] we can fix your tests. i agree those tests are in need of re-factoring. [19:41] blackboxsw or I can help with tests. [19:41] agreed with the preference for installed clients, and maybe that in the end is the solution [19:41] I'm not sure what's wrong with reading os-release? if SLES vs OpenSuSE have different default clients for ntp, that can be accommodated in the distro classes, right? [19:42] it would seem quite reasonable for the 'distro' object that is instantiated to "know" more about the specifics than it does.
[19:42] * blackboxsw loves mocks, yeah if we want some simple targeted mock ups or unit tests on an agreed upon direction, I'm game for helping out [19:42] it's SLES vs. SLES, not SLES vs openSUSE [19:42] :) [19:42] ah [19:42] SLES 12 -> ntp, SLES 15 -> chrony [19:42] so sles.py gets loaded and there I have to make a decision [19:43] yes; we're in a similar situation: Xenial will carry one preference, whereas bionic will likely use something different [19:43] i think reading os-release is fine. [19:43] even in the presence of a default installed client (timesyncd in this case) [19:44] in Ubuntu i'm fine with carrying a package patch to maintain old behavior even if upstream cloud-init changes.. [19:44] I would think the distro object probably already has the system_info structure which has that detail anyhow, right ? [19:44] and if someone builds their own package, or starts wiping out files installed by cloud-init or any other package when they build their image.... [19:44] i dont really see how that is my problem :) [19:44] I can build the package such that on SLES 12 cloud-init delivers /etc/cloud/cloud.cfg.d/00-distro.cfg with system_info['ntp_client'] = ntp and on SLES 15 that's set to chrony, and that's fair, just pointing out that that was not necessary previously and is also reasonably easy to break [19:45] make the change in /etc/cloud/cloud.cfg [19:45] and if the user changes it or blows that file away, then it will default to auto [19:45] and auto will do the right thing [19:45] even including looking at /etc/os-release [19:46] but please prefer installed clients over non-installed clients.
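The SLES 12 -> ntp / SLES 15 -> chrony decision being discussed could be made by parsing /etc/os-release, roughly like this; the function names and the chrony fallback are assumptions for illustration, not cloud-init's actual implementation:

```python
def parse_os_release(content):
    """Parse /etc/os-release style KEY=value lines into a dict."""
    info = {}
    for line in content.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info

def preferred_ntp_client(os_release_content):
    """Sketch of an 'auto' default: ntp on SLES 12, chrony on SLES 15;
    the chrony fallback for everything else is an arbitrary choice here."""
    info = parse_os_release(os_release_content)
    if info.get("ID") == "sles" and info.get("VERSION_ID", "").startswith("12"):
        return "ntp"
    return "chrony"

sles12 = 'ID="sles"\nVERSION_ID="12.3"\n'
sles15 = 'ID="sles"\nVERSION_ID="15"\n'
print(preferred_ntp_client(sles12), preferred_ntp_client(sles15))  # ntp chrony
```

Note that platform.linux_distribution(), mentioned a little later in the log, returns similar information but was deprecated in Python 3.5 and removed in 3.8, so reading os-release directly is the more future-proof route.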
[19:46] rharper: system_info does not have distro version [19:46] might not include release number [19:46] platform.linux_distribution() [19:46] ('Ubuntu', '16.04', 'xenial') [19:46] [19:47] that seems pretty specific [19:47] worse for me, openSUSE and SLES have the same data, which is why handling flavor and distro on SLES/openSUSE is a three way ball juggling exercise ;) [19:49] ah I was looking down the path of determining "flavor" and there we do not look at that information. [19:50] So yes, platform.linux_distribution() would do the trick [21:14] smoser: rharper hrm, so kvm-wise, I've used mount-image-callback to install a new cloud-init deb in the cloudimage I downloaded. Though it seems that the generator hasn't run because cloud-init is no longer configured as a listed systemd dependency in xenial in this image [21:14] should I have done something additional (beyond the mchroot dpkg -i new-cloud-init.deb)? [21:14] no [21:14] it's never listed as a dep [21:14] all generators in /lib/systemd/generators/* are invoked; there I think is a link to the cloud-init one [21:15] /lib/systemd/system-generators/cloud-init-generator [21:16] systemd runs that, which in turn checks for the kernel parameters or disabled file, and if so, leaves things off; otherwise it'll run and import ds-identify stuff, which will result in cloud-init getting enabled via a dynamic link in multi-user.target.wants/ [21:16] in /run [21:16] /run/systemd/generator-early I think [21:16] bbiab [21:16] * rharper relocates [22:23] blackboxsw: updated https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/333513 [22:23] grabbing [22:24] if you agree with my suggested change then we can pull [22:24] yeah that makes sense smoser, will apply and land.
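rharper's generator walkthrough above boils down to a small decision; this sketch mirrors that logic. The real cloud-init-generator also honors an explicit 'cloud-init=enabled' override and consults ds-identify, so treat this as a simplification:

```python
def cloud_init_should_run(kernel_cmdline, disabled_file_present):
    """Decide whether the generator enables cloud-init: a
    'cloud-init=disabled' kernel argument or the presence of the
    disabled file (/etc/cloud/cloud-init.disabled) turns it off;
    otherwise the real generator goes on to run ds-identify and, if
    appropriate, links cloud-init into the /run generator directory
    rharper mentions above."""
    if disabled_file_present:
        return False
    if "cloud-init=disabled" in kernel_cmdline.split():
        return False
    return True

print(cloud_init_should_run("ro quiet splash", False))               # True
print(cloud_init_should_run("ro quiet cloud-init=disabled", False))  # False
```

This is also why installing a new deb via mount-image-callback needs no extra enablement step: the generator re-evaluates these inputs on every boot.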
[22:25] just ran into (and am fixing with a trivial) https://bugs.launchpad.net/cloud-init/+bug/1736600 [22:25] Launchpad bug 1736600 in cloud-init "cloud-init modules -h documents unsupported --mode init" [Low,Triaged] [22:25] just finished testing ovf with msaika's changes [22:25] looks to work without impacting standard ovf behavior, with one curiosity. [22:26] if I rm -rf /var/log/cloud* /var/lib/cloud; sudo reboot on the kvm machine with the updated locally installed deb package, cloud-init doesn't run by default; I had to cloud-init init --local; cloud-init init; cloud-init modules --mode config ... --mode final . [22:28] blackboxsw: did your ovf test use no-cloud ? [22:28] if so, you nuked your /var/lib/cloud/seed/no-cloud [22:28] which means ds-identify won't enable cloud-init [22:28] hrm yes I think I did... and I nuked seed (which contained nothing) [22:28] blackboxsw: aren't you working on a cloud-init clean to DTRT so we don't have to rm -rf /var/lib/cloud* [22:28] but [22:28] ugh [22:28] * smoser sees [22:28] tests/cloud_tests/testcases/modules/salt_minion.py [22:28] its presence triggers cloud-init [22:29] ahhh [22:29] and says what is going on there ? [22:29] oops [22:29] right [22:29] oh. cloud_tests. silly me [22:29] and rharper yeah that branch cloud-init clean is minutes from landing [22:29] :) [22:29] =) [22:29] then I don't have to worry about shooting myself in the foot [22:29] I know smoser really wants to relocate seed to somewhere else outside of /var/lib/cloud [22:30] * rharper relocates home [22:30] bbiab [22:30] msaika is also using the seed directory for vmware's markerfiles too [22:30] and that branch is imminent [22:30] https://code.launchpad.net/~msaikia/cloud-init/+git/cloud-init/+merge/330105 [22:30] ugh. markerfiles should go in /var/lib/cloud/data, no? [22:30] was there a reason for seed?
[22:31] powersj: or anyone who wanted [22:31] https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/334780 [22:31] strike that comment about /var/lib/cloud/seed ... looks like the marker files are just in /var/lib/cloud/ [22:32] smoser: thx I'll play with that next [22:32] smoser: still time to comment on that. She's around all week (per pvt msg). [22:32] smoser: I'll suggest the path change [22:33] markerfiles were originally under / and I wanted something that cloud-init clean would still blow away [22:37] oh i see. [22:37] i have to read it to know what marker files are for. i forget in that context [22:37] i also approved https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/330115 [22:38] blackboxsw: did you agree/disagree with my comment on 333513 ? [22:39] * blackboxsw wonders if we should suggest the same marker-path changes for https://code.launchpad.net/~dojordan/cloud-init/+git/cloud-init/+merge/334341 dojordan is doing something kinda similar with markerfiles for azure [22:44] smoser: right, yeah vmware is using them for user-provided customization scripts which run pre- and post-cloud-init network configuration in vmware's get_data() method [22:54] * smoser is behind on reviews [22:54] and has to run now. [23:03] FWIW the marker file in my PR won't exist once the actual provisioning is completed. therefore by the time you can login and run clean, the file will be gone
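The foot-gun discussed above — rm -rf /var/lib/cloud wipes the NoCloud seed, after which ds-identify won't re-enable cloud-init on reboot — is why a clean helper wants to spare the seed directory. A hypothetical sketch of that behavior (not the actual 'cloud-init clean' implementation), demonstrated on a throwaway directory tree:

```python
import os
import shutil
import tempfile

def clean_cloud_state(var_lib_cloud, keep=("seed",)):
    """Remove per-instance state under a /var/lib/cloud style tree while
    keeping the seed directory, so ds-identify can still find a NoCloud
    seed and enable cloud-init on the next boot."""
    removed = []
    for entry in sorted(os.listdir(var_lib_cloud)):
        if entry in keep:
            continue
        path = os.path.join(var_lib_cloud, entry)
        if os.path.isdir(path):
            shutil.rmtree(path)
        else:
            os.remove(path)
        removed.append(entry)
    return removed

# demo on a throwaway tree shaped like /var/lib/cloud
root = tempfile.mkdtemp()
for name in ("seed", "instances", "data"):
    os.makedirs(os.path.join(root, name))
print(clean_cloud_state(root))  # ['data', 'instances']
print(os.listdir(root))         # ['seed']
```

If marker files live under /var/lib/cloud (rather than /) as suggested above, a clean like this would still blow them away, which is exactly the property the markerfile discussion wanted.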