[08:35] <ybaumy> i need help with the OVF datasource http://pastebin.centos.org/876866/
[08:39] <ybaumy> smoser: you are here?
[08:45] <ybaumy> anyone ever used that OVF provider?
[11:22] <BloqueNegro> hi :)
[11:22] <BloqueNegro> i want to use cloud-init on ubuntu 16.04 to enable a second interface on a newly created instance
[11:23] <BloqueNegro> any tips on how to do that? everything i tried needs another reboot (e.g. using write_files to write to interfaces.d) or is ignored completely (e.g. a config file from the example i wrote to /etc/cloud/cloud.cfg.d/)
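One reason the write_files approach needs a reboot is ordering: cloud-init applies network configuration very early in boot, well before write_files runs, so a second NIC has to be described to cloud-init as network config rather than dropped in as a file. A hedged sketch (the interface name ens4 and the v1 schema are assumptions, not from the chat) of a NoCloud `network-config` seed file, which cloud-init on 16.04 renders into /etc/network/interfaces.d/50-cloud-init.cfg:

```yaml
# "network-config" file alongside user-data/meta-data in a NoCloud seed
# (cloud-init network config v1); ens4 is a placeholder interface name
version: 1
config:
  - type: physical
    name: ens4
    subnets:
      - type: dhcp
```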
[11:23] <BloqueNegro> it drives me crazy
[13:23] <smoser> ybaumy: are you trying to run on a cloud somewhere or just providing data locally ?
[13:41] <ybaumy> smoser: cannot get OVF to mount an ISO at all. i am now switching to the NoCloud datasource, which at least mounts /dev/sr0 and finds the user-data and meta-data files... but nothing happens
[13:41] <smoser> ybaumy: i'd suggest using NoCloud rather than ovf
[13:41] <smoser> but i'd be surprised if doc/sources/ovf/README does not work
[13:42] <smoser> and similar for nocloud
[13:42] <smoser>  https://asciinema.org/a/132013
[13:43] <ybaumy> 1. i'm not using ubuntu, i'm using centos. 2. like in the paste url earlier, it just doesn't seem to recognize OVF in datasource_list at all
[13:45] <ybaumy> ok, that cloud-localds step i haven't done
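For readers following along: cloud-localds (from cloud-image-utils) packs a user-data and meta-data file into a small seed disk labelled `cidata`, which the NoCloud datasource picks up when it is attached to the VM. A minimal sketch of preparing the two input files (the contents here are placeholders, not from the chat):

```python
from pathlib import Path

# Minimal NoCloud seed inputs; instance-id is required in meta-data,
# and user-data must start with a recognized header such as #cloud-config.
Path("meta-data").write_text(
    "instance-id: iid-local01\n"
    "local-hostname: testvm\n"
)
Path("user-data").write_text(
    "#cloud-config\n"
    "password: passw0rd\n"          # placeholder credentials for a throwaway VM
    "chpasswd: {expire: false}\n"
    "ssh_pwauth: true\n"
)
# Then:  cloud-localds seed.img user-data meta-data
# and attach seed.img (or an equivalent ISO) to the VM as a disk/CD-ROM.
```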
[13:46] <ybaumy> and im using vmware
[13:49] <smoser> ybaumy: well... a.) try the copr repo to get newer cloud-init
[13:50] <smoser> b.) if it fails, run 'cloud-init collect-logs' and file a bug
[13:52] <smoser> ybaumy: i'd also kind of expect the openstack centos image to work with nocloud identically to how the asciinema demo for ubuntu did
[13:53] <smoser> something from http://cloud.centos.org/centos/7/images/
[14:04] <ybaumy> smoser: thanks will try
[14:13] <ybaumy> works!!!!
[14:14] <ybaumy> i had that seedfrom: None in there
[14:14] <ybaumy> and removed it to see what happens
[14:14] <ybaumy> that was the problem
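For context on why removing it helped, a hedged guess at the mechanics: in YAML a bare `None` is the *string* "None" (YAML's null is `null` or `~`), so NoCloud treats it as a real location to fetch the seed from instead of using the attached ISO. A sketch of a working cloud.cfg.d entry (the filename is hypothetical):

```yaml
# /etc/cloud/cloud.cfg.d/99-nocloud.cfg (hypothetical filename)
datasource_list: [ NoCloud, None ]
datasource:
  NoCloud: {}
  # omit seedfrom entirely when the seed is on an attached ISO (/dev/sr0);
  # a literal "seedfrom: None" is the string "None", not a YAML null
```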
[14:30] <smoser> ybaumy: so the centos image and nocloud "just worked" ?
[14:31] <smoser> i should make a ascii...spelling-thing of that too
[14:35] <ybaumy> no i have created a custom ovf image in vcloud director
[14:35] <ybaumy> i used copr cloud-init latest version
[15:01] <smoser> oh. ok. good.
[16:33] <blackboxsw> https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348000 is up for the azure case. I've not pushed logic to keep the original fallback network_config yet
[16:57] <blackboxsw> smoser: rharper ping when you have about 10 mins to talk upgrade path azure
[16:57] <blackboxsw> I added scenarios to the bottom of the doc https://hackmd.io/aODzXfa_TOikNtYBLt8erA?both
[17:10] <smoser> blackboxsw: i'm here.
[17:15] <blackboxsw> smoser: joining
[17:41] <rharper> blackboxsw: smoser still going ?
[17:41] <blackboxsw> yep join in on the fun
[17:41] <rharper> k
[18:19] <smoser> blackboxsw or rharper https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348711
[18:19] <smoser> or powersj
[18:19] <smoser> that will go a long way toward "fix"ing our gpg-related errors
[19:22] <blackboxsw> couple comments on that branch https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348711
[19:22] <blackboxsw> looks good though
[19:22] <blackboxsw> take 'em or leave 'em as far as my suggestions go
[19:26] <smoser> blackboxsw: https://hackmd.io/jlq3C4qbSgurZ_DZ5GTiuw
[19:27] <smoser> i updated the top hunk of that to use log2dch --hackmd
[19:27] <smoser> blackboxsw: in your suggestion of less try/except...
[19:27] <smoser> hard to read in line..
[19:28] <blackboxsw> ahh excellent on hackmd output
[19:28] <smoser> but i think that you'd have to pass in a "retries=(1)" in order to get a single run
[19:28] <smoser> in curtin's subp we do basically:
[19:28] <smoser>   subp() # first time
[19:28] <smoser>   for trynum, naplen in enumerate(...):
[19:29] <smoser>     subp other tries
[19:29] <smoser> i wanted to avoid the two 'subp' calls
[19:29] <blackboxsw> ohh shoot, right smoser, yeah I only thought through the retries part, forgot the initial call :/
[19:29] <smoser> yeah. so that is why it is as it is.
[19:33] <blackboxsw> yeah oops, +1
[19:34] <blackboxsw> was thinking about appending to retries when enumerating, but that's just adding complexity in another place. :/
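To make the trade-off concrete, here is a sketch of the single-call-site loop smoser describes, with plain subprocess standing in for cloud-init's subp wrapper (the name subp_retry and its default retries are illustrative, not the branch's actual code):

```python
import subprocess
import time

def subp_retry(cmd, retries=(1, 2)):
    """Run cmd once, then retry after each sleep length in retries.

    There is exactly one subprocess call site, inside the loop, so the
    duplicated "first try" invocation outside the loop is avoided.
    """
    naplens = list(retries) + [None]   # None marks the final attempt
    for naplen in naplens:
        try:
            return subprocess.run(cmd, check=True, capture_output=True)
        except subprocess.CalledProcessError:
            if naplen is None:
                raise                  # out of retries
            time.sleep(naplen)
```

In this sketch, passing retries=() gives exactly one attempt, which is the wrinkle about forcing "a single run" that came up above.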
[19:35] <smoser> the way its done here is actually a way the read_url logic could have been done
[19:35] <smoser> the exception_cb stuff.
[19:36] <smoser> your iterator can be anything, and decide to exit based on other things
[19:38] <smoser> although the context of the exception is important there.
[19:44] <blackboxsw> smoser: that makes more sense, I was just thinking a bit too simplistically (heh, and incorrectly) there
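The read_url-style alternative mentioned here, where the nap iterator (and optionally an exception_cb) drives the loop, might look like this; names and signatures are illustrative, not cloud-init's actual API:

```python
import time

def retry_call(func, naplens, exception_cb=None):
    """Call func until it succeeds, sleeping between attempts.

    naplens can be any iterable, so the caller controls how many tries
    happen and how long the naps are; exception_cb can inspect the
    exception and abort early by returning False.
    """
    naplens = iter(naplens)
    while True:
        try:
            return func()
        except Exception as exc:
            try:
                naplen = next(naplens)  # iterator exhausted -> out of tries
            except StopIteration:
                raise exc
            if exception_cb and not exception_cb(exc):
                raise
            time.sleep(naplen)
```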
[20:30] <smoser> blackboxsw: i think at one point you suggested dropping the minion integration test
[20:30] <smoser> is that right ?
[20:30] <smoser> because all it does is verify we write files that we already unit-test
[20:31] <smoser> i'm looking at
[20:31] <smoser> https://bugs.launchpad.net/cloud-init/+bug/1778737
[20:31] <blackboxsw> smoser: yeah I mentioned that it really doesn't do much more than the unit tests, beyond validating that the minion package was installed.
[20:32] <blackboxsw> we could tweak it to install the salt server on the instance, then we'd actually have something to test (full integration there)
[20:32] <smoser> the easiest way to fix that specific issue is to just rid ourselves of that test :)
[20:32] <smoser> i'm not opposed to installing server and configuring minion to talk to localhost
[20:32] <blackboxsw> smoser: you could drop it for now, I'll file a bug for a real integration test and can assign it to me to resolve
[20:33] <blackboxsw> or you can assign the bug to me now and I can get to it post this SRU. it'd be nice to not have a timing issue affecting CI; I didn't see how frequent the failure was
[20:33] <blackboxsw> reading the bug
[20:35] <blackboxsw> yeah I've seen those tracebacks frequently via journalctl. but I was able to get a working minion talking to a non-local server fairly easily. it wouldn't be too hard to get that server set up locally to avoid a couple of the issues with trying to look up a salt hostname etc.
[20:36] <blackboxsw> it didn't involve too much config, if we can use write_files to seed the expected client key
[20:36] <blackboxsw> ... in the server config
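A rough sketch of that localhost-server idea (untested; cloud-init's salt_minion module and its conf key are real, but the exact layout and the key-seeding approach here are assumptions):

```yaml
#cloud-config
# install a salt master on the same instance and point the minion at it,
# so the integration test exercises a real minion/master handshake
packages:
  - salt-master
salt_minion:
  conf:
    master: 127.0.0.1
  # public_key/private_key could be seeded here, with write_files used on
  # the master side to pre-accept this minion's key and avoid the
  # hostname-lookup and key-acceptance timing issues mentioned above
```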
[20:37] <blackboxsw> smoser: yeah, I'm all for dropping the existing salt-minion test for the moment. it really doesn't do much. and can easily be referenced whenever we get to a better integration test for it
[20:52] <smoser> blackboxsw: ok. removal on its way
[20:55] <smoser> blackboxsw: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348719
[20:56] <blackboxsw> approved https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/348719
[20:57] <blackboxsw> BTW, having a datasource express different scoped update_event lists it reacts to for 'network' vs 'storage' is making update_metadata and clear_cached_data logic a bit complex, as we now have to determine which cached attributes to clear across an update_metadata refresh. it'll be interesting how this solution shakes out.
[21:01] <rharper> blackboxsw: hrm, AFAICT there are separate things;  1) what does the cached metadata look like (this should always match what we fetched from the service  2) what does the ds/cloud-init do when (1) is complete and ds is configured to apply the update ?
[21:03] <blackboxsw> +1 on part 1. for part 2, right, we need to (today) also set ds._network_config back to UNSET or None to ensure that the updated metadata gets propagated to ds.network_config, instead of the cached value there too
[21:03] <rharper> hrm
[21:03] <blackboxsw> ... and we need to make sure get_data doesn't arbitrarily clear_cached_data on the _network_config attr by default
[21:03] <blackboxsw> because that would ensure any subsequent call to ds.network_config would get the 'new' metadata
[21:03] <rharper> well, I think we talked about decoupling the fetch of new data
[21:03] <rharper> from updating the ds object attributes
[21:07] <blackboxsw> correct there too, crawl_data(read)  vs get_data (persist). but in this case get_data doesn't persist the _network_config attribute, that is done within each ds.network_config call.
[21:08] <blackboxsw> so if ds.update_metadata is called with EventType.BOOT and the ds 'network' scope wants to react to that event, we perform the following:
[21:09] <blackboxsw> 1. get_data (which calls crawl_data) clears the generic ds.cached_attr_defaults and persists new values to them. 2. clear ds._network_config so the next call to ds.network_config (note: no underscore) will generate the network config from fresh metadata.
[21:11] <blackboxsw> if we don't clear ds._network_config on a datasource, then we expect that this datasource would continue to present cached original ds.network_config
[21:11] <blackboxsw> I should have a diff here locally that I can push in the next few minutes to better explain
[21:14] <rharper> yes, I see what you're saying; we may need to let the network_config property be a bit more complex and ask the ds for other state; maybe it could choose whether to return the current value versus re-rendering, based on whether the refresh indicated new settings (and we've an event flag saying we need to update)
[21:14] <rharper> I much prefer that the network_config property handler deal with clearing rather than other parts of the object resetting it underneath
[21:15] <rharper> but let's see what you have in a diff and go from there
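A toy illustration of the scheme under discussion (not cloud-init's real classes; UNSET, the metadata shape, and update_metadata's signature are all simplified): the property caches its rendering in _network_config, and update_metadata clears it back to UNSET so the next access re-renders from fresh metadata.

```python
UNSET = object()   # sentinel: "not rendered yet", distinct from a None config

class DataSource:
    def __init__(self):
        self.metadata = {"net": "v1"}
        self._network_config = UNSET
        self.generated = 0          # counts regenerations, for illustration

    @property
    def network_config(self):
        # only regenerate when the cached rendering has been cleared
        if self._network_config is UNSET:
            self.generated += 1
            self._network_config = {"config": self.metadata["net"]}
        return self._network_config

    def update_metadata(self, react=True):
        self.metadata = {"net": "v2"}    # stands in for get_data/crawl_data
        if react:                        # ds scope wants to handle this event
            self._network_config = UNSET # force re-render on next access
```

rharper's preference would move the clearing decision into the property itself, e.g. by having it consult a "needs refresh" flag, rather than having update_metadata reset the attribute underneath it.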
[21:20] <smoser> this is interesting https://jenkins.ubuntu.com/server/job/cloud-init-ci/
[21:21] <smoser> the "averarge stage time" of the maas compat test either went up recently or is affected a bunch by the 700ms fails times
[21:21] <smoser> (it thinks average is 2m 3s, while reality seems more like 3m)
[21:45] <blackboxsw> ok finally pushed the changes to https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/348000