[10:20] once one of you is up, please help me look at bug 1737704
[10:20] bug 1737704 in cloud-init (Ubuntu) "Cloud-init seems not run on today's bionic images (20171211)" [Undecided,New] https://launchpad.net/bugs/1737704
[13:10] I'm not sure when you get online, but around now seems about right, so doing a shotgun ping for the bug above
[13:10] smoser: rharper: blackboxsw: ^^
[14:13] Has anyone successfully managed to make systemd stop using apt at boot? At random my cloud-init scripts will fail because apt is locked
[14:17] cpaelzer: I'll take a look
[14:20] smoser: thank you, let me know if you need any debug data from me
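Re: the 14:13 apt-lock question — a minimal sketch of the usual workaround, assuming the stock apt-daily systemd units are what is grabbing the lock (unit and lock-file names are from stock Ubuntu; verify on your release):

    # keep the daily apt jobs from racing boot-time scripts
    systemctl disable --now apt-daily.timer apt-daily-upgrade.timer
    systemctl mask apt-daily.service apt-daily-upgrade.service

    # or, from inside a script, wait until the locks are free
    while fuser /var/lib/dpkg/lock /var/lib/apt/lists/lock >/dev/null 2>&1; do
        sleep 5
    done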
[15:40] rharper: or blackboxsw when you're in, if you want to review
[15:40] https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1737704
[15:40] Launchpad bug 1737704 in cloud-init (Ubuntu) "Cloud-init fails if iso9660 filesystem on non-cdrom path in 20171211 image." [High,In progress]
[15:40] http://paste.ubuntu.com/26170810/
[15:40] that'd be good.
[15:40] launchpad git is down, so no merge proposal that way right now. but if you can review, then we can do that quicker when LP returns
[16:09] https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/335086
[16:09] blackboxsw: ^
[16:19] powersj: if you kill https://jenkins.ubuntu.com/server/view/cloud-init/job/cloud-init-integration-nocloud-kvm-x/3/console will it collect artifacts?
[16:19] hm..
[16:19] smoser: not normally
[16:38] powersj: fyi, I just ran this successfully:
[16:38] tox-venv citest python3 -m tests.cloud_tests run --platform=nocloud-kvm --preserve-data --data-dir=results --verbose --os-name=xenial --repo ppa:cloud-init-dev/daily --test=tests/cloud_tests/testcases/modules/locale.py
[16:38] 2017-12-12 16:36:12,825 - tests.cloud_tests - DEBUG - after setup complete, installed cloud-init version is: b'17.1-1723-g05b2308-0ubuntu1+1343~trunk~ubuntu16.04.1'
[16:39] (I do have a patch locally to allow it to accept ppa:)
[16:39] smoser: there is a --ppa option ;)
[16:40] I'll go play with torkoal and see if it is something going on locally
[16:40] oh funny. I might submit a patch to remove it then and just support ppa in --repo
[16:40] ok
[16:47] blackboxsw: ^ https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/335086 please
[17:00] powersj: http://paste.ubuntu.com/26171272/
[17:00] that is the "launch stuff on clouds" doc that I have.
[17:00] blackboxsw: where do you think that should go? we said qa-scripts, but I'm not sure if it fits there... maybe in a doc/ dir?
[17:01] smoser: thx
[17:01] smoser: I was thinking qa-scripts/doc
[17:01] I was able to launch a xenial nocloud-kvm test with the built-in cloud-init, so now building the tree and trying that one
[17:01] then we can adapt those docs into scripts as we have cycles
[17:11] can anyone point me to documentation on using cloud-init for deploying windows 10 on bare metal?
[17:12] * blackboxsw guesses that's going to be https://cloudbase.it/cloudbase-init/ not cloud-init
[17:14] smoser: looking at the centos failures, I think that was a temporary connectivity/DNS issue and as a result we were unable to update our yum sources. Wasn't able to reproduce locally or on torkoal, and the re-run seems to be working
[17:14] smoser: now triaging why the nocloud-kvm xenial test is failing with the built deb.
[17:16] blackboxsw: thanks
[17:21] smoser: using the cloud-init-dev/daily ppa with xenial is not going to give you an up-to-date deb. Looks like Xenial hasn't been built in 5 days
[17:21] https://launchpad.net/~cloud-init-dev/+archive/ubuntu/daily
[17:23] here is the xenial build failure: https://launchpadlibrarian.net/348862957/buildlog.txt.gz
[17:24] ds-identify-behavior-xenial.patch does not apply
[17:40] smoser: it looks like your ds-identify fix will unbreak the nocloud-kvm tests
[17:41] applied it and re-ran locally, and tests can run
[17:45] smoser: I see your ds-identify branch properly fixed the issue: https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/335086
[17:50] trivial branch update for dropping an unsupported modules cmdline param: https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/335094
[17:53] ok, merging https://code.launchpad.net/~powersj/cloud-init/+git/cloud-init/+merge/335052 and https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/335086 to master
[18:07] blackboxsw: https://github.com/cloud-init/qa-scripts/blob/master/doc/launching.md
[18:11] good good. almost done w/ a merge tool for cloud-init.
[18:12] wanted to test it on powersj's branch
[18:28] waiting on tox
[18:32] blackboxsw: let me know when you land and I will pull and upload to ubuntu
[18:33] smoser: the script I have needs work. I'm merging now, and will test on the next branch post this release.
[18:33] smoser: do we still want to do a revision bump in cloud-init master prior to the next upload?
[18:34] @blackboxsw, what else is needed to land my PR?
[18:36] dojordan: I think you are good on your branch. in trying to limit deltas I don't think we are landing it for this release, but right afterward. So it should land just after the 17.2 cut. I was able to test and exercise both the marker-file path and the non-marker path and watch expected behavior (like camping on an infinite polling loop for IMDS, and not taking that path on normal installs)
[18:37] dojordan: my expectation is that it will be in ubuntu bionic images within the week. Just not at the official upstream 17.2 cut.
[18:37] it'll be 17.2.X
[18:37] dojordan: you also mentioned it's Xenial-only to start, right?
[18:38] I expect we'll have an Ubuntu SRU into xenial with that shortly into the new year
[18:39] even stuff that's landing right now in master isn't going to make it into xenial until we go through our first SRU of the new year.
[18:40] and our next SRU to xenial will take an upstream snapshot, which would include whatever has landed in master (post 17.2 cut).
[18:41] Correct, but any Azure image will unblock us until we GA
[18:45] dojordan: sorry, that went over my head, or didn't jibe with my understanding of your branch. Do azure xenial images currently ship with the marker file or the ovf PreprovisionedVM content setting?
[18:46] dojordan: ohh, you mean that the marker file or configuration won't exist in azure images until your team GAs the IMDS service?
[18:47] no, sorry, let me try again. As long as my changes land in an azure image, it will help unblock our testing. But when we go GA (middle of next year, hopefully), we will start preprovisioning (using the ovf setting) only 16.04 LTS. The purpose of the marker file is in case a VM reboot occurs in the middle of the process
[18:47] @blackboxsw yes
[18:48] ok, understood. thanks for the clarification, dojordan
[18:48] @blackboxsw, does that make sense?
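To make that flow concrete: a rough sketch of the preprovisioning behavior dojordan describes, not cloud-init's actual Azure datasource code (the marker path, the helper, and the IMDS URL here are all assumptions):

    # hypothetical marker file so a mid-process reboot resumes polling
    MARKER=/var/lib/cloud/data/poll_imds
    # ovf_preprovisioned_vm is a hypothetical helper reading the ovf
    # PreprovisionedVM setting; the IMDS URL is likewise an assumption
    if [ -e "$MARKER" ] || ovf_preprovisioned_vm; then
        touch "$MARKER"
        # camp on IMDS until reprovision data becomes available
        until curl -sf -H Metadata:true http://169.254.169.254/metadata/reprovisiondata; do
            sleep 1
        done
    fi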
[18:49] ok, so from my understanding, your branch not landing today doesn't block your testing currently, right? The cut of the 17.2 upstream release won't, from the ubuntu perspective, change the speed at which xenial images get your branch, because of the following:
[18:50] - ubuntu's policy for cloud-init updates to non-bionic releases (xenial, zesty, artful) is that we take the latest snapshot from master.
[18:50] - I'm currently waiting on the last SRU to land per https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1733653
[18:50] Launchpad bug 1733653 in cloud-init (Ubuntu Artful) "sru cloud-init (17.1-27-geb292c18) update to 17.1.46-g7acc9e86-0ubuntu1" [Medium,Fix committed]
[18:51] which will update xenial, zesty, and artful to the pre-17.2 upstream release
[18:53] - If we perform a 17.2 SRU to xenial, zesty, artful (takes a week or two to test/publish), we can still land new merges into master after that snapshot is taken
[18:53] - anything that lands in master will be available in the ubuntu bionic series within a day
[18:53] sorry, what is SRU?
[18:53] which would unblock testing if you have the ability to use bionic images
[18:53] great, that works for us
[18:53] sorry, acronym nightmares
[18:54] "sru" = stable release update
[18:54] https://wiki.ubuntu.com/StableReleaseUpdates ubuntu stable updates
[18:54] specifically for cloud-init we have to do the following additional verification work: https://wiki.ubuntu.com/CloudinitUpdates
[18:55] so that's the only long pole for publishing to xenial
[18:55] and that process we've gotten down from about 4-6 weeks to about 2 weeks, as we are trying to perform these SRUs more frequently
[18:58] smoser: Sorry about being thick-headed about this, but I still fail to see the point of http://paste.ubuntu.com/26166233/
[18:59] I really have no intention of spending much more time on writing additional tests for the "old network config path"
[19:00] I have no problem with the changes, I am just missing the point
[19:01] is it covered at all?
[19:02] robjo: hm.. well I don't know. I'm not bent on it if you're going to drop that code. But if you're going to do so, then adding a test of it immediately before you do so doesn't help much.
[19:02] right?
[19:02] but if you're going to fix it (as one of your MPs was doing), then it makes sense to add a test and make the code more testable.
[19:03] that patch is fairly simple code motion.
[19:03] yes and no; since the test is already written and covers what was recently broken, it does do the trick
[19:04] I didn't think it *did* cover what was recently broken
[19:04] yes, and I guess the light in my head is not coming on about the advantage of moving the code out of the class
[19:04] smoser: https://code.launchpad.net/~chad.smith/cloud-init/+git/cloud-init/+merge/335099 for the ubuntu/devel snapshot update w/ your and powersj's changes
[19:04] testing via 'distro.apply_network' is just more complexity, which required you to use the filesystem-mocking test. this just seemed easier.
[19:04] blackboxsw: oh nice. thanks. I'll pull
[19:07] two things broke recently when I built and released 17.1 in openSUSE w.r.t. networking: 1.) there is a path where the network was set to "manual" for start up, which is obviously not useful in a cloud environment, and 2.) there was no path from v1 to the translation of the network config
[19:09] The latter was addressed in the "netV1ToTranslate" branch, and if I remember correctly the comment there was "this should be tested"
[19:10] Anyway, it's sufficiently aged that I do not remember all the details
[19:10] :-)
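For readers unfamiliar with the sysconfig side robjo is describing: a "manual" start mode in an openSUSE-style ifcfg file means the interface only comes up via an explicit ifup, which is useless on first boot in a cloud. A minimal sketch of the difference, assuming a single dhcp interface (the broken-vs-fixed rendering is my reading of the bug described, not the branch's actual output):

    # a renderer that emits STARTMODE='manual' leaves the interface down;
    # a cloud image needs something like:
    cat > /etc/sysconfig/network/ifcfg-eth0 <<'EOF'
    BOOTPROTO='dhcp'
    STARTMODE='auto'
    EOF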
[19:11] I think all your MPs are generally in reasonable shape; there were just things we wanted to take care of to make them more maintainable. the state things are in isn't your fault, but we want things to get better.
[19:12] blackboxsw: uploaded.
[19:14] thx smoser
[19:14] I am all for making things better; from my perspective that means, in the not too distant future, ripping out _write_network() and figuring out how to get to the sysconfig renderer
[19:19] Is there a way to banish ijw for the rest of the day? maybe the network trouble on that end will resolve itself by tomorrow
[19:19] I don't think so :)
[19:19] at least not that I'm aware of
[19:32] smoser: well, we've ops, we can kickban
[19:33] robjo: what IRC client do you use?
[19:33] pidgin
[19:34] from xchat I can disable join/departure messages. ahh, used to use pidgin, wasn't sure if that was a config option.
[19:34] wow. pidgin
[19:34] for a moment I was wondering if you were talking about some network software stack ... didn't realize it was the re-join irc messages
[19:35] I used pidgin because it was the nicest chat client with IBM/Lotus Sametime support.
[19:35] yes
[19:35] meanwhile
[19:35] and I think it allowed you to change your user agent
[19:35] sneaky
[19:35] cheat the Sametime servers
[19:35] blamed the clients for crashing the server
[19:35] so that you could avoid being shut off for identifying yourself as using an unapproved client.
[19:35] :)
[19:35] classic IBM
[19:35] https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/335100
[19:36] +1
[19:37] smoser: I think that upload to bionic that bothered cloud-init hit curtin's vmtest on bionic
[19:37] it caused no network, right?
[19:37] * rharper will kickban next join
[19:38] rharper: well, cloud-init disables itself.
[19:38] but it would only hit that path if you had an iso9660 filesystem on something not /dev/cdrom or /dev/sr0
[19:38] right, I think that's how we launch in xkvm
[19:38] but lemme check
[19:39] so it doesn't seem likely, unless we're misunderstanding. no, we seed from a url, don't we?
[19:39] for the boot
[19:39] after install
[19:39] we inject user data for collection
[19:39] via a nocloud seed
[19:39] iso
[19:39] ah. yeah. then that is it.
[19:39] yes.
[19:39] -drive file=/var/lib/jenkins/slaves/torkoal/workspace/curtin-vmtest-devel-amd64/output/BionicTestBonding/boot/seed.img,id=disk02,if=none,format=raw,index=2,media=cdrom
[19:40] -device virtio-blk,drive=disk02,serial=seed.img
[19:40] hm... media=cdrom
[19:40] rharper: thanks
[19:40] so, I think that trips it
[19:40] it's not IDE
[19:40] robjo: sure
[19:40] it's a virtio device, with an iso9660 filesystem
[19:40] so, not /dev/sr*
[19:40] I wondered how virtio cdroms would show up.
[19:40] as a virtio device
[19:40] there are no virtio cdroms; just virtio block
[19:41] they just show up as /dev/vdX
[19:41] right
[19:41] which I think fails the uploaded check
[19:41] right?
[19:41] well, before your fix
[19:42] hm..
[19:46] I'm just thinking. I knew it wasn't a great filter, but we are currently allowing OVF only on a cdrom-style device: /dev/sr[0-9] or /dev/hd[a-z]
[19:51] we do a blkid query for the iso9660 filesystem; do we also check sr?
[19:52] I guess if we get a bug for supporting OVF on a virtio cdrom, then we can just open that up a bit.
[19:52] we do the blkid query, yes, and filter OVF by iso9660 fs based on that.
[19:52] but the check for "is this an OVF iso transport" is
[19:52] grep http://url /dev/
[19:53] ovf supports just iso9660, not where it comes from
[19:53] so, in order to avoid that as far as possible, we will only do that on a cd device
[19:53] it's not perfect
[19:53] hrm
[19:54] it seems to me that ovf is less reliable, but things like labels are better; can we order the higher-quality "yes" answers first?
[19:54] for example, OVF can only be a yes if we fail to find the cidata/config-2 labels?
[19:54] https://www.dmtf.org/sites/default/files/standards/documents/DSP0243_2.1.1.pdf
[19:54] when dealing with iso9660 checks?
[19:55] it pretty clearly says "cd-rom device"
[19:55] well, vmware fails that
[19:55] ?
[19:55] they attach a cdrom
[19:56] no?
[19:56] there is file-in-dir support
[19:56] we tested this via lxd
[19:56] sure. and that will get identified.
[19:56] and that is fine.
[19:56] we only go down to looking for an ovf on a cdrom if other measures fall out.
[19:56] and we explicitly avoid known labels
[19:57] ok, then I'm not sure why you're worried about that grep on sr?
[19:57] grep on sr*
[19:57] http://paste.ubuntu.com/26172208/
[19:57] oh, I see your question w.r.t. the virtio cdrom
[19:57] it's not really a cdrom
[19:57] because if you attach an iso9660 ovf via virtio, it will not be found right now.
[19:57] yeah
[19:58] which is probably "oh well"
[19:58] yeah
[19:58] well, they can always use a virtio-scsi cdrom device, which does show up as an sr0
[19:58] instead of a virtio disk
[19:59] not that they can tell
[19:59] or know to do that
[19:59] ah. ok, yeah.
[19:59] :)
[19:59] yeah, I was looking to do that.
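For reference, a sketch of the kind of filter being discussed — assembled from the conversation, not ds-identify's verbatim code (the OVF namespace URL is an assumption standing in for the "grep http://url" check):

    # list devices carrying an iso9660 filesystem, then accept an OVF
    # transport only on cdrom-style device names
    for dev in $(blkid -t TYPE=iso9660 -o device); do
        case "$dev" in
            /dev/sr[0-9]|/dev/hd[a-z])
                # probe the raw device for an OVF environment marker
                grep -q 'http://schemas.dmtf.org/ovf/environment' "$dev" &&
                    echo "OVF transport: $dev" ;;
            *)  echo "skipping $dev (iso9660, but not a cdrom path)" ;;
        esac
    done

And the virtio-scsi workaround rharper mentions would look roughly like this on the qemu command line (flags assumed from standard qemu usage), making the seed show up as /dev/sr0 in the guest instead of /dev/vdX:

    qemu-system-x86_64 ... \
        -device virtio-scsi-pci,id=scsi0 \
        -drive file=seed.img,id=seed,if=none,format=raw,media=cdrom \
        -device scsi-cd,bus=scsi0.0,drive=seed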
[20:13] smoser: rharper: can you guys remind me again what a "done" entry in the bionic proposed queue (https://launchpad.net/ubuntu/bionic/+queue?queue_state=3&queue_text=cloud-init) means for cloud-init?
[20:13] rmadison still shows cloud-init rev ....58 instead of 60. I wonder how long we wait for updates in bionic once our updates hit the proposed pocket
[20:14] blackboxsw: https://launchpad.net/ubuntu/+source/cloud-init
[20:14] it is in -proposed, or will be pending a publisher run
[20:15] so is the publisher run only daily, or more frequent?
[20:16] hourly-ish?
[20:16] something on that order
[20:19] blackboxsw: when I was testing resolvconf things and was impatient
[20:19] I did this
[20:19] http://paste.ubuntu.com/26172361/
[20:19] oh. /me updates the gist. forgot I put it there
[20:20] * blackboxsw reads that. and nice tracking-gist comment :)
[20:20] I was curious per a discussion in #ubuntu-release. I wanted to tell folks around what time they could expect fixes to be seen
[20:22] ok, the SRU looks unblocked
[20:22] anyway, what you can do is:
[20:22] POCKET=bionic wait-for-package cloud-init 17.1-60-ga30a3bb5-0ubuntu1 && mpg321 ~/Music/22-Andrew_Lloyd_Webber-Joseph_Megamix.mp3
[20:23] I'm not exactly sure how often things get copied. I think probably more frequently for development releases.
[20:29] hehe, will have to spin up the "Amazing Technicolor Dreamcoat mix" as general policy
[21:13] approved https://code.launchpad.net/~powersj/cloud-init/+git/cloud-init/+merge/335051 with an unrelated question about direction for ec2.
[21:15] * powersj looks
[21:19] I'm guessing with EC2, we'd probably also have an ec2.publish_keys method or something in the EC2Instance.start method to ensure we've uploaded the known key to EC2.
[21:22] blackboxsw: for ec2 you can take a sneak peek at https://git.launchpad.net/~powersj/cloud-init/commit/?id=2d5c6d156cb4506260cb1abef54daefb7a0ffe05
[21:22] specifically tests/cloud_tests/platforms/ec2/platform.py
[21:22] and def _upload_public_key(self, config):
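A hedged sketch of what that key-upload step amounts to, expressed with the AWS CLI rather than the branch's actual python (key name and path are placeholders; AWS CLI v2 may want fileb:// for raw key material):

    # import the integration-test public key so launched instances accept our ssh key
    aws ec2 import-key-pair \
        --key-name cloud-init-integration \
        --public-key-material file://~/.ssh/id_rsa.pub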
[21:41] powersj burning down the AWS infra
[21:42] https://code.launchpad.net/~powersj/cloud-init/+git/cloud-init/+merge/335053 questions about where we ultimately should be controlling/documenting integration-test deps
[21:42] thx for the peek, powersj. looking
[21:43] blackboxsw: agreed on the tox discussion
[21:43] I was a little confused why it even came up, given paramiko has been in there for a while. All I thought I did was move it from one file to another
[21:44] powersj: maybe we need to extend tools/read-dependencies to also handle these one-off integration-test deps... as I'd really like to see make ci-deps-ubuntu work for everything (not just unit-test deps)
[21:57] blackboxsw: I'm interested in thoughts on https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/335108
[21:59] checking it out, smoser
[22:01] hrm, I thought the oauth key was based on the user client connecting to maas. accessing a maas instance to peek at the metadata now
[22:02] blackboxsw: well, maas sends curtin the credentials for this node, and those get written into /etc/cloud/cloud.cfg.d
[22:03] and then cloud-init uses those to talk to maas.
[22:03] ohh, these are cloud-init's keys to talk back to maas. ok, so they'd be unique per node
[22:04] well, they are unique per install
[22:04] after https://bugs.launchpad.net/maas/+bug/1507586
[22:04] Launchpad bug 1507586 in MAAS "previous owner of node can use oauth creds to retrieve current owner's user-data" [Critical,Fix released]
[22:05] who is that smoser guy? he sure finds a lot of bugs
[22:05] that bug was really just a sneaky way of me trying (unsuccessfully) to get bug 944325 fixed.
[22:05] bug 944325 in MAAS "no separation of instance id from node id" [Wishlist,Triaged] https://launchpad.net/bugs/944325
[22:05] "maybe if I find a security vulnerability they'll add a feature for me"
[22:23] smoser: just responded on your branch. I know the approach you're taking makes sense for how cloud-init currently works, but I wonder about a couple of things noted in the comments
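For context on the credentials smoser describes landing in /etc/cloud/cloud.cfg.d: a sketch of the kind of drop-in that ends up there, using cloud-init's documented MAAS datasource keys (the file name and all values here are invented):

    cat > /etc/cloud/cloud.cfg.d/90_maas.cfg <<'EOF'
    datasource:
      MAAS:
        metadata_url: http://maas.example.com/MAAS/metadata/
        consumer_key: AbCdEfGhIj
        token_key: KlMnOpQrSt
        token_secret: UvWxYz0123
    EOF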