[01:41] blackboxsw, we will. none of those are terribly important. the ppc64 breakage isn't present in the SRU'd version.
[01:41] i dont think
[16:42] blackboxsw, i should have noticed / thought of this before, but by doing the json schema doc
[16:42] we lose
[16:42] python3
[16:43] >> from cloudinit.config import cc_bootcmd
[16:43] >> help(bootcmd)
[16:43] er... help(cc_bootcmd)
[16:43] right ?
[17:00] hrm right smoser
[17:00] sorry was in another window
[17:01] good thought, I bet we can fix that
[17:04] yeah I can fix that. smoser WDYT? http://paste.ubuntu.com/25491373/
[17:04] blackboxsw, http://paste.ubuntu.com/25491366/
[17:04] that was mine
[17:04] haha
[17:05] yours prettier
[17:05] yeah we already have get_schema_doc. no need for a 2nd function :)
[17:06] I can fold that into the cc_bootcmd/resizefs branch (and fix runcmd/ntp as well)
[17:06] get_schema_doc modifies schema
[17:07] ahh true, could deepcopy
[17:07] the other issue with your solution there is it is slow. i think
[17:07] i think we then hit usage of yaml on import of cc_bootcmd
[17:08] so i dont know.
[17:08] lemme see how yours outputs
[17:08] not anywhere near as nice.
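The `help(cc_bootcmd)` regression and the "get_schema_doc modifies schema" concern above could be addressed along these lines (a sketch only; the function body, schema layout, and output format here are assumptions, not cloud-init's actual helper):

```python
import copy

def get_schema_doc(schema):
    # Hypothetical sketch of rendering a module docstring from a
    # jsonschema dict. Deep-copying first avoids mutating the caller's
    # schema, the problem raised in the chat above.
    schema = copy.deepcopy(schema)
    lines = [schema.get('name', ''), '']
    for prop, meta in sorted(schema.get('properties', {}).items()):
        lines.append('%s: (%s) %s' % (
            prop, meta.get('type', 'any'), meta.get('description', '')))
    return '\n'.join(lines)

schema = {'name': 'cc_bootcmd', 'properties': {
    'bootcmd': {'type': 'array',
                'description': 'commands to run early in boot'}}}
before = copy.deepcopy(schema)
doc = get_schema_doc(schema)
assert schema == before  # caller's schema left unmodified
```

A module could then set `cc_bootcmd.__doc__ = get_schema_doc(schema)` so that `help(cc_bootcmd)` keeps showing useful text.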
[17:09] and i'd have used yours if i had known of that
[17:09] right, but at least it's something (because I'd like to trim the module docstr to a single line of text in all modules w/ jsonschema)
[17:10] * blackboxsw really wonders how much 'support' we need to worry about for help() callers in python shell
[17:10] I'd be inclined to use your markup to meet the need if it arises (but avoid the cost of templating etc on module load)
[17:15] blackboxsw,
[17:15] yaml_string = yaml.dump(property_dict.get('enum')).strip()
[17:15] property_type = yaml_string[1:-1].replace(', ', '/')
[17:15] is there a reason not :
[17:15] property_type = '/'.join(property_dict['enum']) ?
[17:17] smoser: for docs I wanted the actual strings rendered to match what we expect users to write in a yaml file
[17:17] 'true/false' instead of True/False
[17:17] ah. that does make sense.
[17:18] but doesn't yaml accept both?
[17:18] Regexp:
[17:18] y|Y|yes|Yes|YES|n|N|no|No|NO
[17:18] |true|True|TRUE|false|False|FALSE
[17:18] |on|On|ON|off|Off|OFF
[17:18] meh, so maybe it doesn't matter http://yaml.org/type/bool.html
[17:22] heh not fully supported
[17:22] http://pastebin.ubuntu.com/25491483/
[17:22] some of the regexps don't seem to match.
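The "not fully supported" observation is easy to reproduce with PyYAML (assuming PyYAML is importable, as it is wherever cloud-init runs):

```python
import yaml  # PyYAML; an assumption of this sketch, not stdlib

# PyYAML resolves the YAML 1.1 boolean words from the regexp above...
assert yaml.safe_load('true') is True
assert yaml.safe_load('Yes') is True
assert yaml.safe_load('on') is True
assert yaml.safe_load('off') is False
# ...but despite http://yaml.org/type/bool.html listing them, the
# single-letter y/n forms fall through to plain strings in PyYAML:
assert yaml.safe_load('y') == 'y'
assert yaml.safe_load('n') == 'n'
```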
better to be safe/strict
[17:23] http://paste.ubuntu.com/25491485/
[17:24] +1 simple and efficient
[17:24] I'll add that smoser
[17:24] better not to spend cycles on yaml.load if all we want is a simple dict lookup
[17:25] blackboxsw, name 'ymap' as "_YAML_AS_STRING" or something and put it at top of that
[17:26] yeah w/ a oneline comment explaining the intent
[17:26] will do
[17:27] then, we're fairly close to not even importing yaml
[17:27] except on checking
[17:28] could separate that to a different module
[17:30] hrm, also in some of the other doc generation we have yaml.dump calls
[17:30] like rendering schema['examples']
[17:36] i think that was the only other one
[17:36] right?
[17:52] blackboxsw, ^
[17:52] i'm not sure why we're using yaml to dump the examples, which are already formatted strings
[18:14] oh. i see.
[18:20] sorry was running an errand.
[18:21] smoser: yeah I had allowed for either md formatted strings or python dict objects
[18:21] probably something that we don't really have to support and we could drop the yaml.dumps stuff
[18:22] I had thought we might want to leverage examples in automated unittests, and having examples in dict format already provided would make it easy to pass into some automated handle tests.
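The "_YAML_AS_STRING" dict-lookup idea suggested above might look like this (names follow the chat; the guard and fallback behavior are assumptions):

```python
# Map Python values to the yaml-file spelling a user would actually
# write, so docs can show 'true/false' without importing/calling yaml.
_YAML_AS_STRING = {True: 'true', False: 'false', None: 'null'}

def yaml_repr(value):
    # True == 1 in Python, so guard on type before the dict lookup;
    # anything else is rendered with plain str().
    if isinstance(value, bool) or value is None:
        return _YAML_AS_STRING[value]
    return str(value)

property_type = '/'.join(yaml_repr(v) for v in [True, False])
```

This keeps the enum rendering a constant-time lookup instead of a yaml.dump round-trip.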
[18:23] but, we could just as easily have the unit tests yaml.load(example_content_string)
[18:23] if we ever go down that route in unittests
[18:23] http://paste.ubuntu.com/25491809/
[18:23] I'm glad you agree :) and did the work
[18:24] thanks
[18:24] hardest part was making flake8 happy
[18:25] I like it
[18:26] and then we're not far from having yaml necessary only in the schema path when validating a cloud-config
[18:26] specifically *not* when validating a dictionary
[18:27] (i realize we have yaml everywhere, so not a big deal)
[18:27] I'm not even sure we need the else:
[18:27] example_content = '\n'.join([e + "\n" for e in example])
[18:28] as all examples will now be an instance of string
[18:28] still good to avoid dependency if we don't really need it
[18:29] yeah
[19:07] blackboxsw: rharper thanks for comments, uploaded new version
[19:09] powersj, of kvm ?
[19:10] smoser: yes
[19:34] smoser: thanks for the explanation re: arrays
[20:04] smoser: pushed your changes into https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/330243 + added the __doc__ fixes and an Examples SPACER for documentation where modules have > 1 example
[20:05] onto powersj branch
[20:08] powersj: have you pushed this to latest? https://code.launchpad.net/~powersj/cloud-init/+git/cloud-init/+merge/327646
[20:08] you mentioned uploaded new version, but I don't see new commits
[20:08] * blackboxsw tries a git fetch
[20:08] blackboxsw, he amended / --force i think
[20:08] i think
[20:09] yeah trying to keep it down to a single commit
[20:09] +1
[20:09] thx
[20:09] sorry....
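Returning to the example rendering above: since examples are now always formatted strings, the yaml-free join with an "Examples SPACER" between multiple examples could be sketched like this (the spacer text here is made up, not cloud-init's actual string):

```python
def render_examples(examples):
    # Examples are already formatted yaml strings, so a plain join is
    # enough; a hypothetical spacer labels each example past the first.
    parts = []
    for index, example in enumerate(examples):
        if index > 0:
            parts.append('# --- Example %d ---' % (index + 1))
        parts.append(example.strip())
    return '\n'.join(parts) + '\n'

out = render_examples(
    ['bootcmd:\n - echo hi', 'bootcmd:\n - [cloud-init-per, once, foo]'])
```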
if you wanted to see the diff
[20:10] nah, just "broken" process for me when looking at a review I've commented on is seeing whether commits have come in afterward
[20:10] I need to figure out a better process for finding updates on branches
[20:11] how do you guys normally determine whether a branch you've reviewed has been updated? Just git fetch on the cmdline and watch whether a branch you care about updates?
[20:13] also, as you mentioned powersj, having actual commits of the diffs related to review comments speeds up the re-review because I can see what changed and if some comment was missed or interpreted a different way without hunting for the places where comments were made.
[20:13] yeah...
[20:13] but, review 'speed' is not an issue when we leave a branch of yours floating in the breeze for weeks ;)
[20:14] https://www.youtube.com/watch?v=bcYppAs6ZdI
[20:16] lol
[20:23] powersj: approved https://code.launchpad.net/~powersj/cloud-init/+git/cloud-init/+merge/327646 thanks man
[20:23] blackboxsw: thx, if you do find a run that dies or has issues, please let me know :)
[20:24] powersj: will run it again right now
[20:24] what cmdline did you want?
[20:24] the test run I called or the ssh connect from paramiko that is failing
[20:32] 3 successes in a row on your latest branch
[20:32] powersj, i'm ok with allowing image.execute to take a string or an array
[20:33] and splitting it if it is a string
[20:41] smoser: ok do you want me to go back and revert places where I changed arrays to strings?
[20:41] blackboxsw: thanks for the re-test
[20:54] powersj, just commented there.
[20:54] wont your changes *break* lxd ?
[20:54] ah.
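The "take a string or an array, and split it if it is a string" convenience discussed above can be sketched with shlex (a hedged illustration; the function name and its use inside image.execute are assumptions, not the actual integration-test code):

```python
import shlex

def normalize_command(command):
    # Accept either a shell-style command string or an argv list;
    # strings are split respecting quoting, lists pass through.
    if isinstance(command, str):
        return shlex.split(command)
    return list(command)

assert normalize_command('ls -l "/tmp/my dir"') == ['ls', '-l', '/tmp/my dir']
assert normalize_command(['ls', '-l']) == ['ls', '-l']
```

shlex.split keeps quoted arguments intact, which is why it is preferable to a bare str.split for command lines.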
you changed that one to take a string
[20:55] right and then res = self.pylxd_container.execute(['/bin/bash', '-c'] + [command])
[20:56] tested both backends ;) don't want any regressions
[21:05] http://paste.ubuntu.com/25492439/
[21:05] i'll propose that for merge really quick
[21:09] powersj, https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/330459
[21:11] smoser: +1'ed
[21:11] thank you for that
[21:29] rharper: adapted util.json_loads to handle unserializable content, just pushed 95c1151..c000917
[21:29] thanks for the review, https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/330115 is done
[21:30] blackboxsw: nice!
[21:30] and no instance-data.json is written. ?
[21:30] still true?
[21:30] in descr of commit
[21:30] on MP
[21:31] ahh not true
[21:34] IIUC, we won't throw TypeErrors anymore?
[21:34] rharper: so, I'm pretty certain that no TypeErrors should be raised now during json_loads. But, I did leave the except TypeError: in get_data() just in case there is something I wasn't aware of. I didn't want some datasources to start raising TypeError exceptions on the json_loads call
[21:35] but generally the json.loads(default=our_default_handling_functor) should catch everything unknown to json
[21:35] ok
[21:35] I'm changing the commit msg for this to represent that
[21:35] hrm, I'm still confused
[21:36] we're parsing metadata; then we want to dump the metadata which may have unserializable objects
[21:36] we don't have any json to load before we dump; so we still have a path where the metadata might have something we can't serialize
[21:36] no?
[21:37] what would be needed is for json_dumps() to replace the unserializable object with some replacement string indicating error or something like that ?
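A minimal sketch of that replacement-string approach using json.dumps(default=...) (the helper name and warning text follow the chat; the exact cloud-init implementation may differ):

```python
import json

def json_serialize_default(obj):
    # Instead of letting json.dumps raise TypeError on e.g. a set,
    # substitute a warning string naming the offending type.
    return 'Warning: redacted unserializable type {0}'.format(
        type(obj).__name__)

metadata = {'tags': set(['a', 'b'])}  # sets are not JSON-serializable
out = json.dumps(metadata, default=json_serialize_default)
```

json.dumps only calls `default` for objects it cannot serialize itself, so normal metadata passes through untouched while oddball values get redacted rather than aborting the whole dump.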
[21:37] that may not be possible; however, we could pre-filter the metadata dictionary for content types that aren't json encodable (which is similar to the load IIUC)
[21:39] correct rharper, I added json_serialize_default() which returns the string 'Warning: redacted unserializable type {0}' for any unserializable keys/values
[21:39] blackboxsw: I think you confused me with the json_loads; in dumps you set default=json_serialize_default, which looks like what I was suggesting (replaces with string)
[21:39] nice
[21:39] ahh sorry loads... my bad
[21:39] sorry for the confusion
[21:39] I meant dumps. Sorry for using the wrong words ;)
[21:39] excellent; that's really cool
[21:40] so again, I *think* that should handle all cases of unserializable content, but I left the try/except ValueError & log.warning message in place just in case we run into some other unforeseen issue
[21:41] except TypeError rather
[21:46] yep
[21:50] powersj, i responded in that kvm merge
[21:50] i have to run now... we can just order these a bit different, and some little things i mentioned in line.
[21:52] smoser: ok thanks
[21:52] really appreciate it
[22:01] powersj: do you recall where #cloud-init irc logs are archived? I thought there was an IRC archive url somewhere which had historical #cloud-init logs
[22:01] I wanted to reference them in the meetingology meeting minutes from last meeting
[22:02] blackboxsw: https://irclogs.ubuntu.com/2017/09/
[22:02] that'd be it
[22:02] thanks