[09:44] I have no idea how the current/latest fbsd stuff works out for c-i
=== harmw_ is now known as harmw
[09:44] it's been quite a while
=== smoser` is now known as smoser
[14:18] harlowja, i do not remember any specs per se on updating metadata. I'm not sure whether or not the MD gets updated if you add a device or not (in the openstack metadata service)
[14:19] but i have said before, attempting to do dynamic interface configuration over config drive is really just stupid, and should not be attempted.
[17:48] smoser ya, the shitty part is that we have folks here who poll the metadata stuff to see when it changes
[17:49] and instead if said polling turned into 'wait for event that configdrive ejected and reinserted' that'd at least avoid polling
[17:49] you can't just pull a disk from a guest
[17:49] and expect that that won't upset something
[17:49] try it, just yank your disk from your mac. see what happens.
[17:50] its *not* a sane event mechanism.
[17:50] k
[17:50] what would change in the md ?
[17:50] macs don't have disks
[17:50] lol
[17:50] my mac runs off pixie dust
[17:50] lol
[17:50] obviously
[17:50] well, pry it open, get something to dissolve the glue, then rip it out
[17:50] lol
[17:51] what would change in the md ?
[17:51] so the guys here alter the instance users via it apparently
[17:52] and apparently that works
[17:52] http://lists.openstack.org/pipermail/openstack-dev/2016-June/097705.html
[17:52] and it appears you can update said stuff, which is weird
[17:53] not sure why nova allows updating that stuff and having it be reflected
[17:53] (abuse waiting to happen)
[17:54] (maybe its one of those bug-features, lol)
[17:54] (that snuck in)
[17:55] anyways, i'm not super-attached to that, the guys here have 'used this feature' (for better/worse), ha
[17:58] it just does raise the question, how do things that change get reflected in the instance (long polling metadata, repeated polling ... blah blah)
[18:00] smoser how does amazon do this, just repeated polling?
[18:04] i completely disagree with "abusing nova"
[18:04] metadata is dynamic. config drive should not be.
[18:04] but config-drive has an equivalent of the metadata in it
[18:04] aka its a mirror of metadata
[18:05] so then its a bad equivalent of the metadata service
[18:05] yes
[18:05] web services are read/write and dynamic
[18:05] ya, so then maybe config-drive needs to be dumber
[18:06] disks are read/write and dynamic, but i don't think you really want to deal with that.
[18:06] and not contain a full mirror
[18:06] i really think basically config drive should provide you with networking information that is guaranteed static
[18:06] and tells you where you can get dynamic information
[18:07] wrt disks being read/write and dynamic...
[18:07] one could argue that since disks *are* read/write, then the guest should be able to modify the contents of metadata by updating a key in the json and writing the new file
[18:07] ya, so it either is accepted that its not a full equivalent, orrr it is mutated into being that
[18:07] and the host should have to deal with that
[18:08] right, the pull the disk solution
[18:08] but that would be more insane than posting events through an iso9660 filesystem
[18:08] (i was suggesting guest tells host, and host tells guest...
[18:08] ie, bi-directional
[18:08] sounds like a qemu-metadata-agent would be better, then
[18:09] which then is pretty much the metadata service
[18:09] lol
[18:09] right.
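For context on the repeated-polling approach being debated above, a minimal sketch of what such a metadata poller might look like, assuming the standard OpenStack metadata endpoint at 169.254.169.254; the interval and the hash-based change detection are illustrative, not anything cloud-init or nova ships.

# Sketch of the "repeatedly poll the metadata service" approach discussed
# above. The URL is the usual OpenStack endpoint; everything else here
# (interval, hashing, the print) is illustrative only.
import hashlib
import time
import urllib.request

METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

def fetch_metadata():
    # Return the raw JSON body; callers can json.loads() it if needed.
    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        return resp.read()

def poll_for_changes(interval=30):
    last_digest = None
    while True:
        try:
            digest = hashlib.sha256(fetch_metadata()).hexdigest()
        except OSError:
            digest = last_digest  # transient failure; try again next round
        if last_digest is not None and digest != last_digest:
            print("metadata changed")  # react here (re-read users, etc.)
        last_digest = digest
        time.sleep(interval)

if __name__ == "__main__":
    poll_for_changes()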
[18:09] except for web services do not poll well
[18:09] as you suggested
[18:10] but long-poll may be acceptable
[18:10] i hadn't thought of metadata changing
[18:10] ya, i wasn't really aware this was a feature, ha
[18:10] until i tried it, lol
[18:10] but for other events such as hotplug or remove of a disk or network, you do get an event sent to the guest
[18:10] which it can respond to
[18:11] ideally the networking information in the MD would get updated
[18:11] right, so its not like it would be impossible to do the eject metadata disk/cd
[18:11] (if full parity is really wanted)
[18:11] it might just be weird (ie for windows)
[18:11] (and the program doing something with that disk needs to be written so that it can have the disk drop at anytime..)
[18:12] what happens if you eject while the guest is doing a read ?
[18:12] or has it mounted.
[18:12] and thus has whatever flag in that cdrom there is that says "no thank you" to the eject button
[18:12] ya, that sucks
[18:12] lol
[18:12] there are solutions to this, and those solutions should be used.
[18:13] ie, etcd or zookeeper
[18:13] or fancier stuff like that.
[18:13] ya
[18:13] nova is weird
[18:13] lol
[18:13] in that this feature exists, people started using it, but it is pretty shoddy lol
[18:13] and half-parity and half...
[18:17] openstack is weird
[18:17] ha
[18:23] the nova metadata service really should not be used as a replacement for zookeeper or etcd and the like.
[18:23] i do agree that that is a bit of abuse.
[18:23] i think its sane to expect metadata to change in it.
[18:23] i think its *insane* to expect config drive to change
[18:24] k
[18:24] ya, the feature parity of what config-drive is and what it is not i think then needs to be worked out
[18:30] harlowja, i didn't know of python -m json.tool
[18:30] thats nice
[18:31] rharper, ^ . that is easy pretty-print of json
[18:32] harlowja, yeah, you need etcd or zookeeper.
[18:33] or probably one of 30 different other solutions
[18:33] ya
[18:33] something like that
[18:33] and then...
[18:33] in full openstack style
[18:33] you should suggest a project to run that as a service
[18:35] :)
[18:47] smoser: nice
[18:56] harlowja, so in rhel/centos
[18:57] we're not currently writing any .link files for renaming
[18:57] but i think you do write the 70-persistent-net files
[18:58] is that right ?
[18:58] ya, i do write it
[18:58] i just realized that on ubuntu, we're writing those .link files, but they will [almost] never be read.
[18:58] or respected at least
[18:58] as cloud-init then reads the config drive information and renames them itself.
[18:59] ya, i also noticed that the team here is injecting dns searchdomains into the old eni format
[18:59] and it appears neutron doesn't put those in yet
[19:00] ?
[19:00] it only appears to put in the nameservers, but not the searchdomains (and there is ongoing/idle/dead? work in neutron to fix this)
[19:00] oh.
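On the python -m json.tool aside above: a small example of pretty-printing a metadata or network_data JSON file, both from the shell and from Python; the file path is hypothetical.

# "python -m json.tool" pretty-prints JSON from the shell, e.g.:
#   python -m json.tool network_data.json
#   cat network_data.json | python -m json.tool
# The same thing from Python code (path is just an example):
import json

def pretty_print(path="network_data.json"):
    with open(path) as fh:
        data = json.load(fh)
    print(json.dumps(data, indent=2, sort_keys=True))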
[19:00] so your openstack declares networking in the old ENI format way
[19:05] ya, which does look for search domains
[19:05] it appears the network_data json stuff doesn't have that, lol
[19:05] https://bugs.launchpad.net/neutron/+bug/1108644
[19:06] not implemented (yet)
[19:06] so ummm, ya, oops
[19:06] and afaik the old renderers will use that if its avail
[19:07] while the newer ones may or may not
[19:07] ha
[19:07] https://code.launchpad.net/~harlowja/cloud-init/cloud-init-dns-sysconfig/+merge/297817 at least uses it if it exists (which in the openstack case it won't, lol)
[19:07] pretty sure the eni renderer already uses it (if it exists)
[19:07] smoser also, something we need to clean up is the 2 eni parsers that now exist, lol
[19:08] pretty sure i can just use the better one and delete mine, lol
[19:10] yeah. i think so. get rid of yours. as the other is pretty good at this point.
[19:18] ya
[20:14] when making a custom module, why is #cloud-config needed at the top of the injected user-data to 'activate' the module?
[20:32] sather, custom module ?
=== gfidente is now known as gfidente|afk
[21:00] smoser: cloud-init directive
[21:01] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/config/cc_foo.py
[21:01] if you provide cloud-config via user-data, it must start with #cloud-config (or in a multipart archive, it must be type cloud-config)
[21:01] building off of ^^
[21:02] in order for a module to be run, though, it has to be listed in /etc/cloud/cloud.cfg
[21:02] they're not discovered.
[21:02] hmm
[21:03] Passing it as user-data though will get it to run?
[21:04] (assuming it has #cloud-config as first line of the file)?
[21:06] no.
[21:06] you cannot really pass in config modules
[21:06] you *can* pass in 'handlers'
[21:06] ok, that's what I'm doing
[21:06] sorry for the confusion
[21:06] well, i don't know that you are.
[21:08] http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/part-handler-v2.txt
[21:08] that is a part handler
[21:08] you can pass it in as user-data.
[21:09] ok no
[21:09] I'm passing a yaml-formatted text file
[21:09] with #config-data on top
[21:09] right.
[21:09] I have a custom cc_do_something.py module in my /usr/lib/python/site-packages/cloudinit/config
[21:09] directory
[21:10] when the user-data text has the key `do_something:` it runs that module
[21:11] so it's not a part-handler
[21:12] sather, right. it doesn't work like that.
[21:12] you have to enable your module
[21:13] by adding it to one of the sections (cloud_init_modules, cloud_config_modules, or cloud_final_modules)
[21:13] then cloud-init will run it and feed it the whole config
[21:13] which you can do whatever you want with
[21:14] interesting. I'm not sure how this is working then
[21:15] I kind of re-implemented cc_write_files.py because I have to use a specifically formatted yaml file and can't prepend the necessary info for write_files
[21:15] but it seems to work without touching /etc/cloud/cloud.cfg
[21:16] (original question was escaping #cloud-config at the top of the injected user-data)
[21:18] by 're-implemented' I mean I just used cc_foo.py as a base and wrote to a file using the value from cfg['key']
[21:18] without using cloudinit.util
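To illustrate the module-enablement flow described above, a minimal sketch of the kind of custom config module sather is building. The handle() signature follows the cc_foo.py example linked earlier; the module name cc_do_something.py, the do_something key, and the output path are taken from or invented for this conversation, not part of cloud-init itself. As smoser says, it only runs once listed in one of the module sections in /etc/cloud/cloud.cfg.

# cc_do_something.py -- sketch of a custom cloud-init config module,
# modeled on the cc_foo.py example linked above. The "do_something" key
# and the output path are hypothetical.
#
# For cloud-init to run this, add it to a module list in /etc/cloud/cloud.cfg:
#
#   cloud_config_modules:
#     - do_something
#
# and provide the config via user-data starting with "#cloud-config":
#
#   #cloud-config
#   do_something: "some value to write out"

def handle(name, cfg, cloud, log, args):
    # cfg is the whole merged cloud-config; pull out just our key.
    value = cfg.get("do_something")
    if value is None:
        log.debug("%s: no 'do_something' key in config, skipping", name)
        return
    # Plain file write (the conversation mentions not using cloudinit.util).
    with open("/tmp/do_something.out", "w") as fh:
        fh.write(str(value))
    log.info("%s: wrote do_something value", name)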