=== shardy is now known as shardy_lunch
=== shardy_lunch is now known as shardy
[15:35] blackboxsw: tell me what to do
[15:41] smoser: sure, was making coffee, feel free to do that too
[15:41] blackboxsw: ?
[15:41] ok I'm on puppet/landscape SRU verification now
[15:41] * powersj must have missed a message
[15:41] * smoser too
[15:41] * blackboxsw made coffee
[15:41] :)
[15:41] ah.
[15:42] yeah, let me warm some up. tell me what sru item to work on
[15:42] smoser: if you wanted to grab TEST: [5bba5db2](https://git.launchpad.net/cloud-init/commit/?id=5bba5db2) [#1686485](http://pad.lv/#1686485) cc_ntp: fallback on timesyncd configuration if ntp is not installable
[15:43] or - TEST: [10f067d8](https://git.launchpad.net/cloud-init/commit/?id=10f067d8) [#1717598](http://pad.lv/#1717598) GCE: Fix usage of user-data.
[15:43] rharper: can you look at verifying https://bugs.launchpad.net/cloud-init/+bug/1718287
[15:43] Ubuntu bug 1718287 in cloud-init "systemd mount targets fail due to device busy or already mounted" [High,Fix committed]
[15:43] or the Azure dhcp networkd test. (need to verify xenial/zesty still work)
[15:43] it is really just a ticky mark we're looking for, as I know you made significant effort to diagnose and verify on trunk.
[15:44] or maybe suggest to someone else if they could test with -proposed.
[15:45] wow, smoser, want to look at this? I'm seeing strange behavior from lxc
[15:46] We do need to make sure we get MAAS test results, as well as a Curtin run, with cloud-init in -proposed per our SRU exception.
[15:46] http://paste.ubuntu.com/25760062/ my user-data listed in /var/lib/cloud/instance/user-data.txt
[15:46] inside the lxc... but cloud-init.log says: 2017-10-17 15:21:16,445 - cc_runcmd.py[DEBUG]: Skipping module named runcmd, no 'runcmd' key in configuration
[15:47] blackboxsw: is runcmd indented under puppet?
[15:47] yet it saw the puppet config key and ran that module
[15:47] ahh lemme see
[15:47] bah
[15:47] thanks powersj
[15:47] dumb
[15:47] what's a silly bit of whitespace between friends
[15:48] #timeforstrictschemavalidation
[15:48] I love how often I shoot myself in the foot w/ writing the yaml file
[16:14] ok done with landscape and puppet SRU verification
[16:15] * blackboxsw moves on to - TEST: [f761f2b5](https://git.launchpad.net/cloud-init/commit/?id=f761f2b5) [#1715738](http://pad.lv/#1715738) [#1715690](http://pad.lv/#1715690) cloud-config modules: honor distros definitions in each module
[16:15] smoser: yes I'll look at that one (mounts)
[16:19] I'll also spin up a xenial-daily on aws for the ipv6 test
[16:39] ok done w/ bug #1715738
[16:40] onto aws ipv6
[17:10] blackboxsw: sorry... so on http://pad.lv/#1686485
[17:10] you were saying the test you showed didn't really do much... right?
[17:10] or was i missing something
[17:10] you just tried an invalid 'ntp:' config.
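(An aside on the parsing question behind the exchange that follows: a bare 'ntp:' key in YAML parses to None rather than to an empty mapping, and a plain dict .get() with a default does not cover an explicit None. The one-liners below are a minimal sketch and are not taken from the log.)

```bash
# A bare 'ntp:' key parses to None, not to an empty mapping:
$ python3 -c 'import yaml; print(yaml.safe_load("ntp:\n"))'
{'ntp': None}
$ python3 -c 'import yaml; print(yaml.safe_load("ntp: {}\n"))'
{'ntp': {}}
# dict.get() only falls back to the default when the key is absent,
# so an explicit None value still reaches the isinstance() check:
$ python3 -c 'print({"ntp": None}.get("ntp", {}))'
None
```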
[17:18] smoser: that ntp config is valid and should install only the ntp package, not timesyncd, on xenial/zesty
[17:18] an empty ntp config only installs the '$ntp' package
[17:19] on snappy i'd expect that ntp doesn't get installed
[17:19] with that same config
[17:20] but yeah the test didn't intend to test much other than ntp being installed (not broken by our snappy-related changes)
[17:21] admittedly, that test doc needs work, as I have invalid comments about spacewalk/intent there
[17:24] I have a couple other test docs that I've fixed as I went through the testing, I'll copy them into my sru-info in the next couple mins
[17:25] and smoser please review the bug verification suggestions I had for these bugs as you are already doing, to make sure I am not going crazy (or testing nothing)
[17:26] blackboxsw: thanks.
[17:26] i think you probably gave invalid config there.
[17:27] ntp:
[17:27] oh. guess not.
[17:27] $ python3 -c 'import yaml; print(yaml.load("ntp:\n"))'
[17:27] {'ntp': None}
[17:27] If both pools and servers are empty, 4 default pool servers will be provided of the format ``{0-3}.{distro}.pool.ntp.org``.
[17:27] YYy
[17:27] :)
[17:28] actually.
[17:28] yeah we had a discussion w/ rharper on that when I was doing json schema definition for that module
[17:28] code though does
[17:28] if not isinstance(ntp_cfg, (dict)):
[17:28] so it should default
[17:28] raise runtime
[17:28] i used
[17:28] printf "%s\n%s\n%s\n" "#cloud-config" "ntp:" " pools: []" > my.cfg
[17:28] ntp_cfg = cfg.get('ntp', {}) # at the start though
[17:29] so it'll default to empty dict if None
[17:29] * blackboxsw checks my python
[17:30] umm
[17:30] bah
[17:30] right, if None is actually set for the key, then it falls over as you said, because
[17:30] mine works.
[17:30] and verified.
[17:30] are you putting stuff in the bugs?
[17:32] I was going to put the SRU templates in all the bugs once verification is done (and my test scripts are vetted ;) )
[17:32] I can start putting the templates up on each bug description for the logs I've already handled
[17:32] i'm not rushed on it.
[17:32] i'll update the sru template that I have.
[17:32] but, do you think it's a good idea to put them there?
[17:32] would like to show results too...
[17:33] sounds good.
[17:34] * blackboxsw needs to run an errand, back in a bit
[17:54] powersj: i remember that i was supposed to provide a list of "launch on cloud" commands
[17:54] from our sprint
[17:54] i don't know that i did
[17:54] smoser: correct
[17:54] http://paste.ubuntu.com/25760727/
[17:54] or a link to a gist :)
[17:55] that is what i have.
[17:55] just written down once during an sru when i wanted to validate stuff.
[17:55] smoser: awesome
[17:59] has anyone run into issues with cloud-init and creating multiple directories with mkdir -p? I'm passing in a shell script on AWS that has mkdir -p /opt/path1 /var/log/path1 but when I look at the output of cloud-init it's only showing mkdir -p /opt/path1... I'm really confused as to how this could be happening
[18:02] intheclouddan[m]: can you give the config you're using?
[18:02] you're using runcmd?
[18:02] runcmd:
[18:02] - [sh, -c, 'mkdir /foo//bar/ /zee/wark']
[18:02] - mkdir /foo/bar /zee/wark
[18:03] - ['mkdir', '/foo/bar', '/zee/wark']
[18:03] no, passing in a bash script
[18:03] well, output should be collected in /var/log/cloud-init-output.log
[18:03] i'd look there for errors.
[18:03] and also /var/log/cloud-init.log
[18:03] for WARN
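(A minimal user-data sketch for the mkdir question above, reusing the runcmd forms smoser lists; the paths and the lxc launch step are illustrative and not taken from the log.)

```bash
# write a small cloud-config that creates two directories per entry,
# using both the list form and the plain-string form of runcmd
cat > user-data.yaml <<'EOF'
#cloud-config
runcmd:
  - [sh, -c, 'mkdir -p /opt/path1 /var/log/path1']
  - mkdir -p /opt/path2 /var/log/path2
EOF
# launch a test container with it, then check the output/log files mentioned above
lxc launch ubuntu-daily:xenial test-runcmd --config=user.user-data="$(cat user-data.yaml)"
lxc exec test-runcmd -- cat /var/log/cloud-init-output.log
lxc exec test-runcmd -- grep -i warn /var/log/cloud-init.log
```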
[18:21] nice paste reference smoser
[18:43] * blackboxsw is trying to figure out (remember) again how I set up the ipv6 association on an instance
[18:43] it seems I had it set up on my xenial vm, but not on my zesty vm
[18:43] ... ec2 that is. gotta dig through the docs
[18:44] * blackboxsw is specifically not asking the tome of smoser knowledge for this, because I want to make this painful for me so I learn it 'right'
[18:49] blackboxsw: likely a flag during instance launch under advanced
[18:49] google knows the awscli command to toggle it
[18:50] heh, tome of rharper knowledge.
[18:50] well, possible answers book
[18:50] hah
[18:56] smoser: testing underway on the fstab one; an up-to-date xenial VM falls over right away; -proposed is 8 reboot loops in with no issues; how far do we want to run up the count to declare success?
[18:58] smoser1: testing underway on the fstab one; an up-to-date xenial VM falls over right away; -proposed is 8 reboot loops in with no issues; how far do we want to run up the count to declare success?
[19:11] blackboxsw: where are we updating our test-case and results for the SRU? a single bug, or the various bugs we're validating?
[19:11] rharper: I've only so far been adding a log comment to the megabug. https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1721847
[19:11] Ubuntu bug 1721847 in cloud-init (Ubuntu Zesty) "sru cloud-init 2017-10-06 (17.1-18-gd4f70470-0ubuntu1)" [Medium,Fix committed]
[19:12] but when done, all test verification and log output will be updated on each specific bug description
[19:12] so each separate bug that relates to ubuntu will have its desc updated with verification script && results
[19:14] blackboxsw: ok, so I'll keep my own log for this specific bug, and we can append to both places as needed
[19:16] I think I'm going to call success on this case, 1) the change ensures that DataSourceOVF doesn't poke any block device unless it has an iso9660 filesystem on it; I can verify that current cloud-init pokes *each* block device and -proposed pokes *none*; I've got about 28 reboots with no issue so far.
[19:37] ok sounds good rharper
[19:37] thanks for the reboot run. just attaching output of the reboot script w/ anecdotal evidence of # of successful reboots should be good
[19:38] I just attached updated SRU validation for ec2 ipv6 support to the megabug
[19:38] found the cli params needed
[19:39] aws ec2 assign-ipv6-addresses --network-interface-id eni-3b32d910 --ipv6-address-count 1
[19:39] then clean-reboot (rm -rf /var/log/cloud-init* /var/lib/cloud*);
[19:41] smoser: rharper: anyone working on - TEST in artful: [9d2a87dc](https://git.launchpad.net/cloud-init/commit/?id=9d2a87dc) [#1718029](http://pad.lv/#1718029) Azure, CloudStack: Support reading dhcp options from systemd-networkd?
[19:41] blackboxsw: I'm not, collecting my logs for the fstab one
[19:42] and s-"=frequent IRC quits"-moser is probably dealing with network issues today :)
[19:45] https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1724354
[19:45] Ubuntu bug 1724354 in cloud-init (Ubuntu Zesty) "WARNING in logs due to missing python-jsonschema" [Medium,Confirmed]
[19:45] :-(
[19:47] blackboxsw: that and i am transitioning (i think) to weechat from xchat+bip
[19:48] partially as a result of finicky network... i for some reason when tethering i'd get errors connecting to freenode about needing SASL, and bip does not support that at all
[19:48] so, here i am learning a new irc client.
[19:54] heh, agreed, I figured I'd give this irccloud a try for a year, but yeah I'm compelled to go back to xchat+bip. I'm not a fan of the forced backscroll to pull content from overnight or last week etc.
[19:55] so irccloud you have to keep requesting to download a zipped collection of all IRC back channels so you can grep them etc.
[19:57] smoser: I'm working on cmdline azure test for SRU
[19:57] and I'll update the sru-info/bug for that when I get the magic
[20:13] just ran into this v
[20:13] https://github.com/Azure/azure-cli/issues/4692
[20:13] on azure-cli (artful)
[20:13] trying xenial
[20:15] blackboxsw: fun
[20:16] yeah, tempted to just use the UI for the moment. even the cli install instructions are busticated.
[20:16] blackboxsw: hold on.
[20:17] smoser: can you check your env for where you grabbed "azure" cli
[20:17] https://gist.github.com/smoser/5806147/
[20:17] 'azure' is the node version
[20:17] 'az' is the (newly favored) python client.
[20:19] i think if you use that 'az-ubuntu' it should work
[20:20] worked for me a couple days ago
[20:20] spits a command line like you'd never want to type
[20:20] $ az-ubuntu . --dry-run
[20:21] az vm create --name=smoser1017x --image=Canonical:UbuntuServer:16.04-DAILY-LTS:latest --admin-username=smoser "--ssh-key-value=ssh-rsa AAAAB3...keydata.kerhe...Hw== smoser@brickies"
[20:22] awesome thanks
[20:38] blackboxsw: you can 'snap install azure-cli'. i think i just installed it with pip.
[20:38] npm version works well
[20:39] at least for login and group creation
[20:39] still looking over your az-ubuntu
[20:42] blackboxsw: it's a mess. :) 'azure-ubuntu' (which may well work if you have the npm cli) was what i had originally, but that only works with the old 'asm' mode.
[20:43] looks like cmdline options changed
[20:43] here's what works for me
[20:43] * rharper moves to zesty for the fstab tests
[20:43] azure vm create -n xenial-azure-test -g srugroup1 --image-urn=Canonical:UbuntuServer:17.04-DAILY:latest --ssh-publickey-file=/id_rsa.pub --admin-username=ubuntu
[20:44] hrm looks like it's still prompting me for location even though my group is defined in useast2
[20:45] man naming convention is bogus, eastus2 not useast2
[20:45] improper ordering
[20:46] just to be != EC2
[20:47] blackboxsw: it changed too between ASM and ARM mode
[20:48] what about that launcher that Robert had?
[20:49] Robert?
[20:49] rcj?
[20:50] SUSE
[21:17] blackboxsw: https://gist.github.com/smoser/5806147
[21:17] that is updated, and just "worked for me" here.
[21:26] trying again. I went UI for xenial
[21:26] but will test
[21:26] well, it did just work for me.
[21:26] and i mentioned that you have to add the group
[21:26] that's a pita
[21:26] but.. oh well, and also showed how to get a list of locations.
[21:26] * smoser goes to dinner
[21:42] smoser: here's the diff to your script that works for me http://paste.ubuntu.com/25761944/
[21:46] since I'm also reckless and run as root in my lxc I also specify --admin-username=ubuntu as azure is smart and says (don't use root)
[22:05] ok azure verification done. thx smoser for the az-ubuntu love
[22:13] * rharper kicks off recreate loop on zesty instance and grabs dinner
[22:14] smoser: I'm marking Azure validated for https://bugs.launchpad.net/cloud-init/+bug/1718029 for the cloudstack component. we'll probably just notify the most recent interested party
[22:14] Ubuntu bug 1718029 in cloud-init "cloudstack and azure datasources broken when using netplan/systemd-networkd" [High,Fix committed]
[22:14] rharper: same above
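(For reference, a rough sketch of the "clean-reboot" re-verification step mentioned above for the ec2 ipv6 and Azure checks: wipe cloud-init state so the next boot re-runs every stage, then inspect the logs. The cleanup paths follow the command blackboxsw quoted; the grep is illustrative.)

```bash
# on the instance under test: remove cloud-init state and logs, then reboot
sudo rm -rf /var/log/cloud-init* /var/lib/cloud/*
# (newer cloud-init releases offer 'sudo cloud-init clean --logs' for the same purpose)
sudo reboot
# after the reboot, confirm which datasource ran and look for warnings
grep -iE 'datasource|warn' /var/log/cloud-init.log | head -n 20
```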
[22:21] ok two SRU bugs remain needing verification
[22:21] - TEST: [da6562e2](https://git.launchpad.net/cloud-init/commit/?id=da6562e2) [#1718287](http://pad.lv/#1718287) DataSourceOVF: use util.find_devs_with(TYPE=iso9660)
[22:21] I think rharper is on this one
[22:22] ^
[22:22] and I think smoser is on this one - TEST: [10f067d8](https://git.launchpad.net/cloud-init/commit/?id=10f067d8) [#1717598](http://pad.lv/#1717598) GCE: Fix usage of user-data.
[22:22] for tomorrow of course.
[22:22] but I think that wraps up ubuntu's 17.1 SRU for xenial/zesty
[23:21] ah, that's why we can't recreate on Zesty, it only runs the EC2 datasource since Zesty doesn't have the backwards compat ds-identify configuration; we never run OVF on Zesty EC2 instances
[23:22] smoser: blackboxsw: so, I think in the SRU detail I'll mention that Zesty doesn't run the OVF datasource, so the bug was never present there
[23:41] blackboxsw: smoser: ok, updated fstab test-case log; on zesty, which uses strict-mode ds-identify, OVF never runs so it always passed; I showed that it worked on the current version and that -proposed didn't regress it.
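(Finally, a hedged sketch of the kind of reboot loop rharper describes for the fstab bug: reboot the test VM repeatedly and stop at the first boot where cloud-init logs a WARN or a mount unit fails. The host address, iteration count, and checks are illustrative, not the script actually used.)

```bash
#!/bin/bash
# reboot-loop check: stop as soon as a boot shows a cloud-init WARN or a failed mount unit
VM=ubuntu@203.0.113.10   # illustrative test instance address
for i in $(seq 1 30); do
    if ! ssh "$VM" '! grep -q WARN /var/log/cloud-init.log &&
                    ! systemctl --failed --no-legend | grep -q "\.mount"'; then
        echo "failure on reboot $i"
        break
    fi
    echo "reboot $i ok"
    ssh "$VM" 'sudo reboot' || true
    sleep 90   # give the VM time to come back up before checking again
done
```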