smoser | blackboxsw: i pushed there. | 01:32 |
=== Wulf4 is now known as Wulf | ||
ybaumy | morning. how do i set up nocloud correctly? i added this datasource: NoCloud: to cloud-init.cfg but im getting an error on first boot. and on disabling ec2 im getting an error too. http://paste.ubuntu.com/26116710/ | 06:09 |
ybaumy | version is now cloud-init-17.1+46.g7acc9e68-1.el7.centos.noarch | 06:10 |
ybaumy | strangely this didnt happen yesterday. but now i created a new vmware image template and then added a host to foreman | 06:11 |
ybaumy | maybe i should use ansible to setup puppet and the rest | 09:00 |
ybaumy | i was using cloud-init with openstack and that was ok. but that datasource nocloud thing i dont get | 09:04 |
smoser | ybaumy: i'm not sure about the first issue with ec2 disable. there is probably something in /var/log/cloud-init-output.log or /var/log/cloud-init.log (pastebinning the whole of both would be helpful) | 14:27 |
smoser | for nocloud you'd need to give more data... not sure what would cause there to be no md (metadata) on that object. | 14:27 |
smoser | for setting up nocloud, i suggest using 'cloud-localds' http://paste.ubuntu.com/26118885/ | 14:28 |
smoser | but basically nocloud is an attached disk, in either iso or vfat format, that has 'user-data' and 'meta-data' files on it. meta-data is yaml formatted, and expected to have 'instance-id'. | 14:29 |
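A minimal sketch of the seed smoser describes, using cloud-localds from cloud-image-utils; the instance-id, hostname, and puppet server below are placeholders, not values from the paste:

```sh
# meta-data is YAML and needs at least an instance-id
cat > meta-data <<EOF
instance-id: iid-local01
local-hostname: testhost
EOF

# user-data is cloud-config; a puppet section like ybaumy wants might look like:
cat > user-data <<EOF
#cloud-config
puppet:
  conf:
    agent:
      server: puppetmaster.example.com
EOF

# cloud-localds builds a seed disk image that can be attached to the VM
cloud-localds seed.img user-data meta-data
```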
ybaumy | smoser: i think i dont need all of those nocloud parameters. i only want to set up puppet and thats about it. i feel like im jumping through hoops just to get that done. | 14:56 |
smoser | its not a lot of parameters. | 14:56 |
smoser | instance-id | 14:56 |
smoser | i'm only suggesting using cloud-localds to get you a disk so you dont have to worry about implementation, and you can see a working example. | 14:57 |
ybaumy | ok | 14:58 |
robjo | smoser: blackboxsw rharper Could I please get some feedback on the chrony support? See mailing list message dated 11/14 "RFQ chrony support" which should of course have been "RFC" | 15:02 |
smoser | :-( | 15:03 |
smoser | yeah, that is fair | 15:03 |
=== shardy is now known as shardy_afk | ||
ybaumy | smoser: how do i create a new instance-id for each cloned vmware image | 15:13 |
ybaumy | i guess they have to be unique | 15:14 |
ybaumy | hmm i could make a firstboot script that runs before cloud-init | 15:15 |
smoser | ybaumy: just attach different data in a disk, no ? | 15:16 |
smoser | i'm confused. | 15:16 |
smoser | instance-id=$(uuidgen) | 15:16 |
smoser | so i'm missing something for sure. | 15:16 |
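Following smoser's uuidgen line, a per-clone seed could be regenerated like this (a sketch; the seed.img and user-data names carry over from the example above):

```sh
# give each clone its own instance-id so cloud-init treats it as a new instance
printf 'instance-id: %s\n' "$(uuidgen)" > meta-data
cloud-localds seed.img user-data meta-data
# then attach seed.img to the clone before its first boot
```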
ybaumy | im confused as well | 15:16 |
ybaumy | ok let me explain what i want to do | 15:17 |
ybaumy | i have a vmware template that is centos. then i want to automatically deploy this image with foreman and register it to the katello part | 15:18 |
ybaumy | therefore i need puppet integration | 15:18 |
ybaumy | the puppet part is still not working 100%, more like 90% | 15:18 |
ybaumy | so i thought using nocloud datasource is the best solution when running cloud-init | 15:19 |
smoser | ok. so i think you have a couple options | 15:19 |
smoser | a.) vmware has an "OVF" datasource that should mostly do what you're wondering about. | 15:19 |
smoser | i really have no familiarity with vmware though. | 15:20 |
smoser | b.) when you create a new "instance" (that is 'automatically deploy') you can create a second disk and attach it with the necessary NoCloud data | 15:21 |
smoser | c.) you can just put NoCloud data into each image hard coded in /var/lib/cloud/seed/nocloud-net. it will just never change, you'll have to find some other way to identify individual systems or give them specific instructions. | 15:22 |
smoser | the 'instance-id' is really just a bit of information specific to this instance. | 15:22 |
smoser | it allows cloud-init to do things that should only be done on a "new instance". | 15:22 |
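Option c) as a sketch, run while building the template image; the instance-id and puppet server values are placeholders:

```sh
# bake a static NoCloud seed into the image; cloud-init reads this path directly
mkdir -p /var/lib/cloud/seed/nocloud-net
printf 'instance-id: iid-template01\n' > /var/lib/cloud/seed/nocloud-net/meta-data
cat > /var/lib/cloud/seed/nocloud-net/user-data <<EOF
#cloud-config
puppet:
  conf:
    agent:
      server: puppetmaster.example.com
EOF
```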
ybaumy | smoser: c) sounds like the best option, since all the custom parts are run through puppet once it is set up | 15:22 |
smoser | but what does puppet use to determine what each node's role is | 15:23 |
smoser | if 100 things come and say "hi puppet master" all at the same time, how does it decide what is supposed to do what? | 15:23 |
ybaumy | foreman has hostgroups (each one is a customer) and all those hostgroups have different parameters for puppet | 15:23 |
smoser | and hostgroups are based on ? | 15:24 |
smoser | hostnames ? which you're providing ? | 15:24 |
smoser | how does the instance know its hostname? | 15:24 |
ybaumy | so i just create a host put it in a hostgroup which inherits all parameters for that hostgroup | 15:24 |
ybaumy | the hostname comes through dhcp on first setup, before customizing | 15:25 |
ybaumy | hostgroups are just groups with different attributes and parameters | 15:25 |
ybaumy | its really cool | 15:26 |
ybaumy | i will try c) now | 15:27 |
ybaumy | lets see how far i get | 15:27 |
smoser | are you using cloud-init for anything else ? | 15:28 |
smoser | does it do anything for you? | 15:28 |
ybaumy | nope, just puppet is all i need | 15:28 |
smoser | then you can just disable cloud-init entirely. | 15:29 |
ybaumy | ok and use what? | 15:29 |
smoser | nothing ? | 15:29 |
smoser | you said it doesnt do anything for you | 15:29 |
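For reference, disabling cloud-init entirely can be done via the disabled file or the kernel parameter that the systemd generator checks (as rharper describes further down):

```sh
# either mechanism stops cloud-init from running at all
touch /etc/cloud/cloud-init.disabled
# or boot with this on the kernel command line:
#   cloud-init=disabled
```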
ybaumy | well there is the problem with proxy servers on subnets | 15:29 |
ybaumy | so i need an instance i can tell which proxy server puppet.conf should hold | 15:30 |
ybaumy | proxy server=puppetmaster | 15:30 |
smoser | does foreman provide any endpoint that the node can query to learn about itself? other than the dhcp server ? | 15:31 |
ybaumy | but maybe you are right and i should just script it myself | 15:31 |
ybaumy | well there is remote ssh execution | 15:32 |
ybaumy | for example | 15:32 |
ybaumy | i could use ansible to register the host and setup puppet as well | 15:32 |
smoser | it seems like foreman kind of needs a mechanism by which a node can query its parameters | 15:33 |
smoser | which is basically what a datasource is to cloud-init | 15:33 |
smoser | then the node would look and see what proxy settings it had and what it was supposed to do. | 15:33 |
ybaumy | thats what i thought yesterday | 15:33 |
smoser | otherwise you're just deploying dozens of clones that differ only in hostname | 15:34 |
smoser | if you can provide hostnames you could manufacture them such that they indicate some information | 15:34 |
smoser | group1-name1 | 15:34 |
ybaumy | they are clones as long as they havent contacted the puppetmaster | 15:35 |
smoser | and then in code, key off 'group1' to look up the hard coded proxy server. | 15:35 |
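A hypothetical sketch of smoser's suggestion: encode the group in the hostname and map it to a hard-coded proxy/puppetmaster (the names here are invented; `puppet config set` is available in reasonably modern puppet releases):

```sh
# hostname like "group1-name1" -> take the part before the first dash
group=${HOSTNAME%%-*}
case "$group" in
  group1) server=puppetmaster1.example.com ;;
  group2) server=puppetmaster2.example.com ;;
  *)      server=puppetmaster.example.com ;;
esac
# write the chosen server into puppet.conf's [agent] section
puppet config set server "$server" --section agent
```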
ybaumy | but you are right | 15:35 |
ybaumy | i should find another way to set up the server name in puppet.conf | 15:36 |
ybaumy | this just doesnt work | 15:36 |
ybaumy | since cloud-init.cfg is not generated | 15:37 |
ybaumy | depending on the hostgroup | 15:37 |
smoser | you should convince foreman that they need a node-facing endpoint | 15:38 |
smoser | and then write a datasource for cloud-init to utilize that. | 15:38 |
ybaumy | it kind of lacks this feature i must say | 15:39 |
smoser | you need to have *some* id for a node | 15:39 |
ybaumy | https://theforeman.org/plugins/foreman_ansible/0.x/index.html i guess this what im trying now | 15:40 |
ybaumy | but in the end its the same problem | 15:41 |
ybaumy | i need a foreman_url | 15:41 |
ybaumy | which is the proxy puppetmaster in the end | 15:41 |
ybaumy | i have to think about it | 15:41 |
=== shardy_afk is now known as shardy | ||
ybaumy | i think i need an installation network | 15:48 |
ybaumy | then its always the same | 15:48 |
ybaumy | server | 15:48 |
ybaumy | if i dont have a datasource | 15:49 |
ybaumy | there is no other way i can see | 15:49 |
=== Wulf4 is now known as Wulf | ||
smoser | robjo: https://hackmd.io/CwEwHARgbAzDDsBaArPOjhQMYEZEEMBOHQjQkeAUxgCYb9h8og== | 17:24 |
smoser | blackboxsw, rharper | 17:24 |
smoser | also. interested in feedback there. | 17:24 |
robjo | smoser: added some comments | 18:23 |
dojordan | @blackboxsw sorry to bump, but can you take a look when you get a chance? https://code.launchpad.net/~dojordan/cloud-init/+git/cloud-init/+merge/334341 | 18:37 |
blackboxsw | Good deal dojordan I have context on an azure deployment right now so I'll test it out and see if we can land it today | 18:48 |
dojordan | great, thanks! feel free to ping with questions | 18:52 |
dojordan | also note that the platform doesn't publicly support it yet so I've been testing the azure fabric changes in private test clusters. I have the logs from that if need be | 18:53 |
rharper | smoser: is there an easy way to modify the proposed branch and merge-into target for an MP ? | 19:00 |
smoser | robjo: thanks. i responded there. | 19:05 |
smoser | we can carry on here. also | 19:05 |
smoser | rharper: ? | 19:05 |
rharper | https://code.launchpad.net/~cloud-init-dev/cloud-init/+git/cloud-init/+merge/334733 | 19:05 |
rharper | the targets need swapping | 19:05 |
smoser | :) | 19:05 |
smoser | rharper: so.. interestingly, i could 'resubmit' that proposal and swap the source and target | 19:10 |
smoser | which is not really re-submitting | 19:10 |
smoser | but i only saw that i could do that after i had already submitted another. | 19:11 |
smoser | so, commented there to that effect. | 19:11 |
rharper | ok, thanks | 19:11 |
rharper | I didn't know that either | 19:11 |
rharper | smoser: ntp discuss sounds fine, you've a few isc-dhcp where I think you mean isc-ntp; | 19:16 |
robjo | more comments in the hackmd.io document, but maybe we should move the discussion here to be more interactive? | 19:27 |
smoser | ah. yeah. probably. | 19:33 |
smoser | robjo: yes that is fine. sorry | 19:33 |
robjo | To my last comment, my concern is primarily with the "custom image builders" and the expectation that is implied in "auto", i.e. it just works | 19:34 |
smoser | rharper: fixed those. | 19:35 |
robjo | If we use auto and there's no other mechanism then basically the cloud-init package on SUSE would have to supply a /etc/cloud/cloud.cfg.d/00-distro.cfg | 19:36 |
robjo | which of course the "custom image builder" can mess up more easily than having something in the cloud-init code that handles "auto" in some deterministic way that is "distro default" compliant | 19:37 |
robjo | But maybe I am overthinking the problem | 19:37 |
smoser | robjo: in your scenario, wouldnt the user have likely had cloud-init installed from a package you maintain ? | 19:37 |
smoser | in which case it would have a setting not 'auto'. | 19:37 |
smoser | and would Depend on the right client | 19:37 |
robjo | Well it appears that lots of custom image builders decide to clobber the default cloud.cfg and thus I'd say the answer is no | 19:38 |
smoser | "I changed something and broke it" | 19:39 |
smoser | ? | 19:39 |
robjo | I see people reading the doc, then putting in their own cloud.cfg and setting "ntp: enabled", done | 19:39 |
blackboxsw | right, I would expect this 'auto' problem only to show up on Sles/OpenSuse if the custom image builder is not installing stock sles cloud-init via yast, but rolling their own off of lp:cloud-init master | 19:39 |
rharper | wouldn't the distro object itself in code have a preference? the system config is for overriding the built-in-code preference, no ? | 19:39 |
robjo | well the "in-code" preference only works if I have an "if distro.version == X:" block | 19:40 |
rharper | much like how we set locale by default, or other distro specific settings, default in the code, but allow system settings to override; default net renderer is another like these where code has a default preference, and allowing system config to override | 19:40 |
smoser | 'auto' will generally do the right thing. You're free to have 'auto' on Suse have different "right thing" than on Ubuntu. | 19:40 |
robjo | which means I have to read the os-release file and that gets me back to my testing issue :( | 19:40 |
smoser | But I think giving preference to already installed clients versus non-installed clients is important. | 19:40 |
smoser | we can fix your tests. i agree those tests are in need of re-factoring. | 19:41 |
smoser | blackboxsw or I can help with tests. | 19:41 |
robjo | agreed with the preference to installed clients, and maybe that in the end is the solution | 19:41 |
rharper | I'm not sure what's wrong with reading os-release? if SLES vs OpenSuSE have different default clients for ntp, that can be accommodated in the distro classes, right? | 19:41 |
smoser | it would seem quite reasonable for the 'distro' object that is instantiated to "know" more about the specifics than it does. | 19:42 |
* blackboxsw loves mocks, yeah if we want some simple targeted mock ups or unit tests on a agreed upon direction, I'm game for helping out | 19:42 | |
robjo | it's SLES vs. SLES, not SLES vs openSUSE | 19:42 |
smoser | :) | 19:42 |
rharper | ah | 19:42 |
robjo | SLES 12 -> ntp, SLES 15 -> chrony | 19:42 |
robjo | so sles.py gets loaded and there I have to make a decision | 19:42 |
rharper | yes; we're in similar situation Xenial will carry one preference, where as bionic will likely use something different | 19:43 |
smoser | i think reading os-release is fine. | 19:43 |
rharper | even in the presence of a default installed client (timesyncd in this case) | 19:43 |
smoser | in Ubuntu i'm fine with carrying a package patch to maintain old behavior even if upstream cloud-init changes.. | 19:44 |
rharper | I would think the distro object probably already has the system_info structure which has that detail anyhow, right ? | 19:44 |
smoser | and if someone builds their own package, or starts wiping out files installed by cloud-init or any other package when they build their image.... | 19:44 |
smoser | i dont really see how that is my problem :) | 19:44 |
robjo | I can build the package such that on SLES 12 cloud-init delivers /etc/cloud/cloud.cfg.d/00-distro.cfg with system_info['ntp_client'] = ntp and on SLES 15 that's set to chrony, and that's fair, just pointing out that that was not necessary previously and is also reasonably easy to break | 19:44 |
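As a sketch, the drop-in robjo describes could be generated at package build time like this; note the ntp_client key under system_info is the shape being proposed in this discussion, not a settled interface:

```sh
# shipped by the SLES 12 cloud-init package; the SLES 15 package
# would write "ntp_client: chrony" instead
cat > /etc/cloud/cloud.cfg.d/00-distro.cfg <<EOF
system_info:
  ntp_client: ntp
EOF
```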
smoser | make the change in /etc/cloud/cloud.cfg | 19:45 |
smoser | and if the user changes it or blows that file away, then it will default to auto | 19:45 |
smoser | and auto will do the right thing | 19:45 |
smoser | even including looking at /etc/os-release | 19:45 |
smoser | but please prefer installed clients over non-installed clients. | 19:46 |
robjo | rharper: system_info does not have distro version | 19:46 |
rharper | might not include release number | 19:46 |
rharper | platform.linux_distribution() | 19:46 |
rharper | ('Ubuntu', '16.04', 'xenial') | 19:46 |
rharper | that seems pretty specific | 19:47 |
robjo | worse for me, openSUSE and SLES have the same data, which is why handling flavor and distro on SLES/openSUSE is a three way ball juggling exercise ;) | 19:47 |
robjo | ah I was looking down the path of determining "flavor" and there we do not look at that information. | 19:49 |
robjo | So yes, platform.linux_distribution() would do the trick | 19:50 |
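The check rharper pasted, runnable from a shell (platform.linux_distribution() is deprecated as of python 3.5, but was current at the time):

```sh
# print the (distro, version, codename) tuple for this host
python -c 'import platform; print(platform.linux_distribution())'
# e.g. ('Ubuntu', '16.04', 'xenial')
```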
blackboxsw | smoser: rharper hrm, so kvm-wise, I've used mount-image-callback to install a new cloud-init deb in the cloudimage I downloaded. Though it seems that the generator hasn't run because cloud-init is no longer configured as a listed systemd dependency in xenial in this image | 21:14 |
blackboxsw | should I have done something additional (beyond the mchroot dpkg -i new-cloud-init.deb)? | 21:14 |
rharper | no | 21:14 |
rharper | it's never listed as a dep | 21:14 |
rharper | all generators in /lib/systemd/generators/* are invoked; I think there is a link to the cloud-init one | 21:14 |
rharper | /lib/systemd/system-generators/cloud-init-generator | 21:15 |
rharper | systemd runs that, which in turn checks for the kernel parameters or the disabled file and, if disabled, leaves things off; otherwise it'll run and import ds-identify stuff, which will result in cloud-init getting enabled via a dynamic link in multi-user.target.wants/ | 21:16 |
rharper | in /run | 21:16 |
rharper | /run/systemd/generator-early I think | 21:16 |
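A quick way to see what the generator decided on a given boot, based on the paths rharper lists (the directory may be spelled generator.early rather than generator-early):

```sh
# the enabling symlink appears here when the generator turned cloud-init on
ls -l /run/systemd/generator.early/multi-user.target.wants/ | grep cloud-init
# ds-identify logs its datasource decision here
cat /run/cloud-init/ds-identify.log
```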
rharper | bbiab | 21:16 |
* rharper relocates | 21:16 | |
smoser | blackboxsw: updated https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/333513 | 22:23 |
blackboxsw | grabbing | 22:23 |
smoser | if you agree with my suggested change then we can pull | 22:24 |
blackboxsw | yeah that makes sense smoser will apply and land. | 22:24 |
blackboxsw | just ran into (and am fixing with a trivial) https://bugs.launchpad.net/cloud-init/+bug/1736600 | 22:25 |
ubot5 | Launchpad bug 1736600 in cloud-init "cloud-init modules -h documents unsupported --mode init" [Low,Triaged] | 22:25 |
blackboxsw | just finished testing ovf with msaikia's changes | 22:25 |
blackboxsw | looks to work without impacting standard ovf behavior, with one curiosity. | 22:25 |
blackboxsw | if I rm -rf /var/log/cloud* /var/lib/cloud; sudo reboot on the kvm machine with the updated locally installed deb package, cloud-init doesn't run by default; I had to run cloud-init init --local; cloud-init init; cloud-init modules --mode config ... --mode final . | 22:26 |
rharper | blackboxsw: did your ovf test use no-cloud ? | 22:28 |
rharper | if so, you nuked your /var/lib/cloud/seed/no-cloud | 22:28 |
rharper | which means ds-identify won't enable cloud-init | 22:28 |
blackboxsw | hrm yes I think I did... and I nuked seed (which contained nothing) | 22:28 |
rharper | blackboxsw: aren't you working on a cloud-init clean to DTRT so we don't have to rm -rf /var/lib/cloud* | 22:28 |
rharper | but | 22:28 |
smoser | ugh | 22:28 |
* smoser sees | 22:28 | |
smoser | tests/cloud_tests/testcases/modules/salt_minion.py | 22:28 |
rharper | its presence triggers cloud-init | 22:28 |
blackboxsw | ahhh | 22:29 |
smoser | and say, what is going on there ? | 22:29 |
blackboxsw | oops | 22:29 |
blackboxsw | right | 22:29 |
smoser | oh. cloud_tests. silly me | 22:29 |
blackboxsw | and rharper yeah that branch cloud-init clean is minutes from landing | 22:29 |
blackboxsw | :) | 22:29 |
rharper | =) | 22:29 |
blackboxsw | then I don't have to worry about shooting myself in the foot | 22:29 |
rharper | I know smoser really wants to relocate seed to somewhere else outside of /var/lib/cloud | 22:29 |
* rharper relocates home | 22:30 | |
rharper | bbiab | 22:30 |
blackboxsw | msaikia is also using the seed directory for vmware's markerfiles too | 22:30 |
blackboxsw | and that branch is imminent | 22:30 |
blackboxsw | https://code.launchpad.net/~msaikia/cloud-init/+git/cloud-init/+merge/330105 | 22:30 |
smoser | ugh. markerfiles should go in /var/lib/cloud/data, no? | 22:30 |
smoser | was there a reason for seed? | 22:30 |
smoser | powersj: or anyone who wanted | 22:31 |
smoser | https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/334780 | 22:31 |
blackboxsw | strike that comment about /var/lib/cloud/seed ... looks like the marker files are just in /var/lib/cloud/<markerfile> | 22:31 |
powersj | smoser: thx I'll play with that next | 22:32 |
blackboxsw | smoser: still time to comment on that. She's around all week (per pvt msg). | 22:32 |
blackboxsw | smoser: I'll suggest the path change | 22:32 |
blackboxsw | markerfiles were originally under / and I wanted something that cloud-init clean would still blow away | 22:33 |
smoser | oh i see. | 22:37 |
smoser | i have to read it to know what marker files are for. i forget in that context | 22:37 |
smoser | i also approved https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/330115 | 22:37 |
smoser | blackboxsw: did you agree/disagree with my comment on 333513 ? | 22:38 |
* blackboxsw wonders if we should suggest the same marker-path changes for https://code.launchpad.net/~dojordan/cloud-init/+git/cloud-init/+merge/334341 dojordan is doing something kinda similar with markerfiles for azure | 22:39 | |
blackboxsw | smoser: right yeah vmware is using them for user-provided customization scripts which run pre- and post-cloud-init network configuration in vmware's get_data() method | 22:44 |
* smoser is behind on reviews | 22:54 | |
smoser | and has to run now. | 22:54 |
dojordan | FWIW the marker file in my PR won't exist once the actual provisioning is completed. therefore by the time you can login and run clean, the file will be gone | 23:03 |