=== Wulf4 is now known as Wulf
=== blaisebool is now known as Guest46113
[16:21] I'm running AWS EC2 instances. I would like to pass a parametrized URL as userdata, like "#include http://path/foo?a=b&c=d". foo should contain python code (the same for all instances) that will be executed by cloud-init and dynamically create a cloud-config configuration and inject it into cloud-init. I kind of got it working with a part-handler and sys._getframe, but it's really ugly. Is there (or should there be) any proper way to achieve the same?
[16:26] Wulf, you're wanting the instance to identify itself in the '#include' i guess ?
[16:26] somehow ?
[16:26] (headers would be an option)
[16:27] smoser: no, the web server is static (e.g. s3), so the client (cloud-init) won't identify itself
[16:27] smoser: but the url might contain something like ?name=newhostname
[16:28] i guess you overrode the part-handler for '#include' ?
[16:28] what you're asking for does sound interesting
[16:28] smoser: no, for a new mime type
[16:42] So is there currently any other way to download+execute arbitrary code than a part-handler?
[16:43] well, a '#!'
[16:44] Wulf, but that has the same general issue ... i think you're wanting python access to hostname or instanceid ... is that right ?
[16:44] is that what you were using sys._getframe for ?
[16:46] smoser: I want python access to the cloud-config part so I can set the hostname, a list of ssh keys, host-specific mounts, etc. I use sys._getframe to access walker_callback.data['handlers']['text/cloud-config']
[16:49] so i think i understand, but i'm kind of confused.
[16:51] is the url static for each node ?
[16:51] ie, you want a static url and static content
[16:52] that produces different cloud-config for the node based on information available on the node.
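An aside for readers of this log: the "part-handler" being discussed is cloud-init's documented hook for custom MIME types in multipart user-data, a script whose first line is `#part-handler` and which defines `list_types()` and `handle_part()`. The sketch below shows roughly what such a handler could look like for the scenario above; the MIME type name, the `name=` query parameter, and the `build_cloud_config` helper are all hypothetical, and it deliberately leaves out the `sys._getframe` injection trick, which pokes at cloud-init internals.

```python
#part-handler
# Illustrative sketch only: list_types()/handle_part() follow
# cloud-init's documented part-handler interface, but the MIME type,
# parameter names, and helper below are made up for this example.
from urllib.parse import parse_qsl


def list_types():
    # MIME types this handler wants to receive (hypothetical type name).
    return ["text/x-dynamic-cloud-config"]


def build_cloud_config(query):
    # Turn a "name=newhostname&..." style query string into a
    # cloud-config dict; only 'name' is handled in this sketch.
    params = dict(parse_qsl(query))
    cfg = {}
    if "name" in params:
        cfg["hostname"] = params["name"]
    return cfg


def handle_part(data, ctype, filename, payload):
    # cloud-init invokes this once with ctype '__begin__', once per
    # matching part, and once with '__end__'.
    if ctype in ("__begin__", "__end__"):
        return
    cfg = build_cloud_config(payload)
    print("generated cloud-config: %s" % cfg)
```

Note this only gets you a generated dict; there is no clean public API at this point for injecting it back into cloud-init's cloud-config handling, which is exactly why the poster resorted to `sys._getframe`.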
[16:52] smoser: the content is static, the url might vary with ?parameter=value, which the webserver behind it ignores
[16:52] i think
[17:14] powersj, do you know why the copr builds are failing ?
[17:15] smoser: I looked briefly friday and it appears that the lxd pull is failing
[17:15] lxd file pull*
[17:16] it says the file does not exist; I didn't have enough time to determine if it was due to the lxd 2.15 update, but that is the only thing to have changed
=== blackboxsw_away is now known as blackboxsw
[17:23] blackboxsw: \o/
[17:27] powersj: and smoser you guys are around an awful lot for vacationing ;)
[17:27] blackboxsw, i'm officially in today
[17:27] ohh didn't realize that.
[17:27] * powersj isn't, but broken AC means I'm sitting at a coffee shop
[17:27] powersj has no excuse.
[17:28] blackboxsw, i'm out the rest of the week though
[18:08] blackboxsw, if you were just bored...
[18:09] 2 things
[18:09] a.) when i run: tox-venv py3 python3 -m nose tests/unittests/
[18:09] tests.unittests.test_datasource.test_openstack.TestOpenStackDataSource is the slowest.
[18:10] b.) i believe if you launch an openstack instance without user-data then it will re-try to get it (if it's not been given, the request will 404)
[18:10] and we should accept that as "no user data", rather than trying again
[18:11] (i think these 2 things might be related)
[18:12] roger smoser
[18:12] ok so we want the code to avoid retries on 404?
[18:13] for user-data specifically
[18:25] hmm, I already see MetadataReader.should_retry_cb in cloudinit/sources/helpers/openstack.py, which should return False if the error code is >= 400
[18:34] blackboxsw, ssh ubuntu@10.245.162.120
[18:34] i think you should be able to get there (with vpn)
[18:34] /var/log/cloud-init.log -> http://paste.ubuntu.com/25012904/
[18:34] I'm there
[18:35] 2017-07-03 18:26:51,445 - url_helper.py[DEBUG]: [0/6] open 'http://169.254.169.254/openstack/latest/user_data' with {'timeout': 10.0, 'method': 'GET', 'allow_redirects': True, 'headers': {'User-Agent': 'Cloud-Init/0.7.9'}, 'url': 'http://169.254.169.254/openstack/latest/user_data'} configuration
[18:35] 2017-07-03 18:26:52,514 - url_helper.py[DEBUG]: Please wait 1 seconds while we wait to try again
[18:35] heh, well that looks like it's not using that retry logic I saw for the base metadata crawler. ok thx.
[18:36] looks like it hit that retry about 5 times
[18:36] ok
[18:36] right
[18:37] ok I exited your instance
[18:38] ok it's the wait_for_url call
[18:38] got it.
[18:38] sure I'll pull together a fix
=== MrWatson is now known as NostawRm
[18:45] blackboxsw, file a bug please too
[18:45] and thank you
[18:45] no prob
[19:13] smoser: vendordata and networkdata are both optional metadata; shall we also avoid retries in those cases?
[19:13] i think those work as expected.
[19:13] i think.
[19:14] i think they're both guaranteed
[19:16] ok, I was just trying to queue off of the commonality of networkdata and vendordata also being listed as optional in cloudinit/sources/helpers/openstack.py lines 227-241
[19:17] as in, if optional: don't retry on 404
[19:17] but we can make it specific to just user_data if we want
[19:23] blackboxsw, they may be optional.
[19:23] i was just going from memory
[19:24] you can look in https://github.com/openstack/nova/blob/master/nova/api/metadata/base.py#L449
[19:24] that's the backend
[19:58] If I'm reading things right, it looks like network_data is available on Liberty but not on earlier versions, vendor data is available on Havana, and vendor-data2 is available on Newton. So they too look optional, depending on the version of openstack
[19:59] https://github.com/openstack/nova/blob/master/nova/api/metadata/base.py#L627
[20:03] ahh, that _check_os_version call checks that the version is >=, so vendor_data exists >= Havana and network_data exists >= Liberty
[20:03] does cloud-init need to support < Havana?
[20:05] i didn't know there is a vendor-data2.
[20:07] blackboxsw, https://releases.openstack.org/
[20:08] i'd say we can get away with breaking support for a platform that has been EOL for 2.5 years.
[20:08] i guess the realistic scenario would be someone running 12.04 but wanting to run 16.04 guests... we'd break them.
[20:09] heh, didn't know if cloud-init had a compelling customer base running on pre-Havana (I'd think any vendor running on that version of openstack would have moved to something a bit more recent anyway)
[22:57] gotta head out for the day
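Closing the loop on the 404 discussion above: the agreed fix is to make the user_data fetch treat an HTTP error response as final instead of retrying. A minimal sketch of that retry predicate, assuming a `UrlError`-style exception carrying the HTTP status code; the names mirror `should_retry_cb` from cloudinit/sources/helpers/openstack.py mentioned above, but this is an illustrative stand-in, not cloud-init's actual code.

```python
# Sketch of the retry policy discussed in the log: the server answering
# 404 for an optional resource means "not provided", not "try again".

class UrlError(Exception):
    """Minimal stand-in for cloud-init's url_helper.UrlError."""
    def __init__(self, code=None):
        super().__init__("request failed with HTTP %s" % code)
        self.code = code


def should_retry_cb(exception):
    # Give up on any definitive HTTP error (>= 400), matching the
    # ">= 400" check quoted from openstack.py above; keep retrying
    # when there is no status code at all (e.g. timeouts).
    if isinstance(exception, UrlError) and exception.code is not None:
        return exception.code < 400
    return True


print(should_retry_cb(UrlError(404)))  # → False
```

Wired into the retry loop around the user_data request, a callback like this accepts a 404 as "no user data" on the first attempt rather than burning through five retries as seen in the pasted cloud-init.log.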