[08:35] i need help with the OVF datasource http://pastebin.centos.org/876866/ [08:39] smoser: you are here? [08:45] anyone ever used that OVF provider? [11:22] hi :) [11:22] i want to use cloud-init on ubuntu 16.04 to enable a second interface on a newly created instance [11:23] any tips how to do that? everything i tried needs another reboot (e.g. using write_files to write to interfaces.d) or is ignored completely (e.g. a config file from the example i wrote to /etc/cloud/cloud.cfg.d/) [11:23] it drives me crazy [13:23] ybaumy: are you trying to run on a cloud somewhere or just providing data locally ? [13:41] smoser: cannot get OVF to mount an iso at all. i am now switching to the NoCloud datasource which at least mounts /dev/sr0 and finds a user-data and meta-data file... but nothing happens [13:41] ybaumy: i'd suggest using NoCloud rather than ovf [13:41] but i'd be surprised if doc/sources/ovf/README does not work [13:42] and similar for nocloud [13:42] https://asciinema.org/a/132013 [13:43] 1. im not using ubuntu .. im using centos 2. like in the paste url earlier it just doesn't seem to recognize the OVF in datasource_list at all [13:45] ok that cloud-localds i haven't done [13:46] and im using vmware [13:49] ybaumy: well... a.) try the copr repo to get newer cloud-init [13:50] b.) if it fails, run 'cloud-init collect-logs' and file a bug [13:52] ybaumy: i'd also kind of expect the openstack centos image to work with nocloud identically to how the asciinema demo for ubuntu did [13:53] something from http://cloud.centos.org/centos/7/images/ [14:04] smoser: thanks will try [14:13] works!!!! [14:14] i had that seedfrom: None in there [14:14] and removed it to see what happens [14:14] that was the problem [14:30] ybaumy: so the centos image and nocloud "just worked" ? [14:31] i should make an ascii...spelling-thing of that too [14:35] no i have created a custom ovf image in vcloud director [14:35] i used the latest copr cloud-init version [15:01] oh. ok. good.
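The NoCloud seed discussed above (what `cloud-localds` builds) is just an ISO labeled `cidata` that contains `user-data` and `meta-data` files. A minimal sketch of preparing one in Python; the file contents and directory name are illustrative, and it assumes `genisoimage` is installed (the flags match what the `cloud-localds` script passes):

```python
import pathlib


def build_nocloud_seed(seed_dir):
    """Write user-data/meta-data and return the genisoimage command
    that would pack them into a NoCloud seed ISO.  The volume label
    must be 'cidata' for the NoCloud datasource to find the seed."""
    d = pathlib.Path(seed_dir)
    d.mkdir(parents=True, exist_ok=True)
    # illustrative metadata; instance-id changes trigger re-run of per-instance config
    (d / "meta-data").write_text(
        "instance-id: iid-local01\nlocal-hostname: testvm\n")
    (d / "user-data").write_text(
        "#cloud-config\nruncmd:\n - echo 'nocloud worked'\n")
    # same flags cloud-localds passes to genisoimage
    return ["genisoimage", "-output", str(d / "seed.iso"),
            "-volid", "cidata", "-joliet", "-rock",
            str(d / "user-data"), str(d / "meta-data")]
```

Attaching the resulting `seed.iso` as a CD-ROM gives the `/dev/sr0` mount behavior described in the chat.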
[16:33] https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348000 is up for the azure case. I've not pushed logic to keep the original fallback network_config yet [16:57] smoser: rharper ping when you have about 10 mins to talk about the azure upgrade path [16:57] I added scenarios to the bottom of the doc https://hackmd.io/aODzXfa_TOikNtYBLt8erA?both [17:10] blackboxsw: i'm here. [17:15] smoser: joining [17:41] blackboxsw: smoser still going ? [17:41] yep join in on the fun [17:41] k [18:19] blackboxsw or rharper https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348711 [18:19] or powersj [18:19] that will go a long way toward "fixing" our gpg related errors [19:22] couple comments on that branch https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348711 [19:22] looks good though [19:22] take 'em or leave 'em as far as my suggestions go [19:26] blackboxsw: https://hackmd.io/jlq3C4qbSgurZ_DZ5GTiuw [19:27] i updated the top hunk of that to use log2dch --hackmd [19:27] blackboxsw: in your suggestion of less try/except... [19:27] hard to read in line.. [19:28] ahh excellent on hackmd output [19:28] but i think that you'd have to pass in "retries=(1)" in order to get a single run [19:28] in curtin's subp we do basically: [19:28] subp() # first time [19:28] for trynum, naplen in enumerate(...): [19:29] subp other tries [19:29] i wanted to avoid the two 'subp' calls [19:29] ohh shoot, right smoser yeah I only thought through the retries part, forgot the initial :/ [19:29] yeah. so that is why it is as it is. [19:33] yeah oops, +1 [19:34] was thinking about appending to retries when enumerating, but that's just adding complexity in another place. :/ [19:35] the way it's done here is actually a way the read_url logic could have been done [19:35] the exception_cb stuff. [19:36] your iterator can be anything, and decide to exit based on other things [19:38] although the context of the exception is important there.
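The retry shape being debated above can be sketched either way. A minimal, hedged sketch of the single-call-site variant (the "append to retries" idea): `subprocess.run` stands in for cloud-init's `subp`, and the nap lengths are illustrative. The curtin shape instead makes an initial call before the loop, which is why it ends up with two call sites.

```python
import subprocess
import time


def subp_with_retries(cmd, retries=(1, 2)):
    """Run cmd with a single subprocess call site.

    `retries` holds the nap lengths to sleep between failed attempts;
    a trailing sentinel (naplen=None) marks the final attempt, whose
    failure is re-raised instead of slept on.
    """
    for naplen in list(retries) + [None]:
        try:
            return subprocess.run(cmd, check=True, capture_output=True)
        except subprocess.CalledProcessError:
            if naplen is None:
                raise  # final attempt: propagate the failure
            time.sleep(naplen)
```

The trade-off from the chat in one line: the sentinel keeps one `subp()` call site, at the cost of the small `+ [None]` complexity when building the iteration.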
[19:44] smoser: that makes more sense, I was just thinking a bit too simplistically (heh, and incorrectly) there [20:30] blackboxsw: i think at one point you suggested dropping the minion integration test [20:30] is that right ? [20:30] because all it does is verify we write files that the unit tests already cover [20:31] i'm looking at [20:31] https://bugs.launchpad.net/cloud-init/+bug/1778737 [20:31] Ubuntu bug 1778737 in cloud-init "salt-minion test needs fixing" [Undecided,New] [20:31] smoser: yeah I mentioned that it really doesn't do much more than the unit tests and validate that the minion package was installed. [20:32] we could tweak it to install the salt server on the instance, then we'd actually have something to test (full integration there) [20:32] the easiest way to fix that specific issue is to just rid ourselves of that test :) [20:32] i'm not opposed to installing the server and configuring the minion to talk to localhost [20:32] smoser: you could drop it for now, I'll file a bug for a real integration test and can assign it to me to resolve [20:33] or you can assign the bug to me now and I can get to it post this SRU. it'd be nice to not have a timing issue affecting CI, I didn't see how frequent the failure was [20:33] reading the bug [20:35] yeah I've seen those tracebacks frequently via journalctl. But was able to get a working minion talking to a non-local server fairly easily. it wouldn't be too hard to get that server set up locally to avoid a couple of the issues with trying to look up a salt hostname etc. [20:36] it didn't involve too much config, if we can write_files to seed the expected client key [20:36] ... in the server config [20:37] smoser: yeah, I'm all for dropping the existing salt-minion test for the moment. it really doesn't do much. and it can easily be referenced whenever we get to a better integration test for it [20:52] blackboxsw: ok.
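The "minion talking to a localhost master" idea above could be seeded from cloud-config alone. A hedged sketch, not a working test: the `salt_minion`/`conf` keys come from cloud-init's salt-minion module, while running the master on the same instance (and any key seeding via `write_files`) is an assumption from the chat, not verified config:

```yaml
#cloud-config
packages:
 - salt-master            # assumption: run the master on the same instance
salt_minion:
  conf:
    master: localhost     # point the minion at the local master, avoiding hostname lookup
```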
removal on its way [20:55] blackboxsw: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348719 [20:56] approved https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348719 [20:57] BTW, having a datasource express different scoped update_event lists it reacts to for 'network' vs 'storage' is making the update_metadata and clear_cached_data logic a bit complex, as we now have to determine what cached attributes to clear across an update_metadata refresh. it'll be interesting how this solution shakes out. [21:01] blackboxsw: hrm, AFAICT these are separate things; 1) what does the cached metadata look like (this should always match what we fetched from the service) 2) what does the ds/cloud-init do when (1) is complete and the ds is configured to apply the update ? [21:03] +1 on part 1. for part 2, right, we need to (today) also set ds._network_config back to UNSET or None to ensure that the updated cached metadata gets propagated to ds.network_config, instead of the cached value there too [21:03] hrm [21:03] ... and we need to make sure get_data doesn't arbitrarily clear_cached_data on the _network_config attr by default [21:03] because that would ensure any subsequent call to ds.network_config would get the 'new' metadata [21:03] well, I think we talked about decoupling the fetch of new data [21:03] from updating the ds object attributes [21:07] correct there too, crawl_data (read) vs get_data (persist). but in this case get_data doesn't persist the _network_config attribute, that is done within each ds.network_config call. [21:08] so if ds.update_metadata is called with EventType.BOOT and the ds 'network' scope wants to react to that event, we perform the following: [21:09] 1. get_data (which calls crawl_data) clears the generic ds.cached_attr_defaults and persists new values to them. 2. clear ds._network_config so the next call to ds.network_config (note no underbar) will generate the network config from fresh metadata.
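The two steps just described can be sketched as a toy datasource. This is a simplified illustration, not cloud-init's actual `DataSource` class: the `UNSET` sentinel and the property name mirror the chat, while `_crawl` / `_generate_network_config` and the `"boot"` event string are hypothetical stand-ins.

```python
UNSET = object()  # sentinel: "cache was cleared, regenerate on next access"


class ToyDataSource:
    def __init__(self):
        self.metadata = None
        self._network_config = UNSET

    @property
    def network_config(self):
        # Regenerate only when the cache was explicitly cleared;
        # otherwise keep presenting the cached value.
        if self._network_config is UNSET:
            self._network_config = self._generate_network_config()
        return self._network_config

    def _generate_network_config(self):
        # hypothetical: derive network config from the crawled metadata
        return {"version": 2, "source": self.metadata}

    def update_metadata(self, events):
        if "boot" in events:  # ds 'network' scope reacts to this event
            self.metadata = self._crawl()    # 1. persist fresh metadata
            self._network_config = UNSET     # 2. clear so the property re-renders
            return True
        return False

    def _crawl(self):
        # hypothetical stand-in for crawl_data() hitting the metadata service
        return {"instance-id": "iid-001"}
```

If step 2 is skipped, the property keeps returning the originally cached config, which is exactly the "continue to present cached original ds.network_config" behavior described below.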
[21:11] if we don't clear ds._network_config on a datasource, then we expect that this datasource would continue to present the cached original ds.network_config [21:11] I should have a diff here locally that I can push in the next few minutes to better explain [21:14] yes, I see what you're saying; we may need to let the network_config property be a bit more complex and ask the ds for other state; maybe it could choose whether to render the current value versus re-rendering, based on whether the refresh indicated new settings (and we have an event flag saying we need to update) [21:14] I much prefer the network_config property handler deal with clearing rather than other parts of the object resetting it underneath [21:15] but let's see what you have in a diff and go from there [21:20] this is interesting https://jenkins.ubuntu.com/server/job/cloud-init-ci/ [21:21] the "average stage time" of the maas compat test either went up recently or is affected a bunch by the 700ms fail times [21:21] (it thinks the average is 2m 3s, while reality seems more like 3m) [21:45] ok finally pushed the changes to https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348000