[00:55] https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1866930 created and https://github.com/canonical/cloud-init/pull/114 pushed for tomorrow rharper
[00:55] Ubuntu bug 1866930 in cloud-init (Ubuntu) "[FFe] ec2 add support for configuring secondary ipv4 and ipv6 addresses" [Undecided,New]
[09:17] hi
[09:18] I'm working on OpenStack Neutron and I want to talk with you about Metadata over IPv6
[09:18] for some time we have had the spec https://review.opendev.org/#/c/315604/ to add support for IPv6 in Neutron
[09:19] *for Metadata over IPv6 in Neutron
[09:19] the proposal in this spec is to use fe80::a9fe:a9fe as the IPv6 address, which is equivalent to 169.254.169.254
[09:20] but as cloud-init is one of the main users of the metadata service, I would like to ask you about it as well: what do you think about it, and will this work for you?
[09:20] any feedback here or in the spec's review will be appreciated :)
[09:22] slaweq: just a note - most of the cloud-init community lives in US timezones, so replies can take a few hours to come.
[09:23] tribaal: thx for that info
[09:23] tribaal: is there any active ML for cloud-init where I can maybe send such a question?
[09:25] slaweq: yes, I think https://launchpad.net/~cloud-init (notice the mailing list link) would be the relevant contact point. Note that you need to be part of the "team" to post, but to my knowledge that's just a matter of clicking "join" :)
[09:26] tribaal: thx a lot
[09:26] I will try that
[09:27] slaweq: from a scan of the openstack-specific DS code in the tree it seems like using ipv6 (or an ipv6 fallback) for the datasource is possible
[09:28] I'm not sure how that is implemented in other datasources however (currently the openstack datasource will check on http://169.254.169.254 unconditionally)
[09:29] ah, my bad. it will check there unless the metadata url is supplied through the config as metadata_urls
[09:29] tribaal: can you point me to the code with that?
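(Editor's note: the metadata_urls override discussed above could look roughly like the drop-in below. The filename and the IPv6 URL are hypothetical illustrations, not cloud-init defaults; in practice a link-local address would also need an interface scope to be reachable.)

```yaml
# Hypothetical drop-in, e.g. /etc/cloud/cloud.cfg.d/99-metadata-ipv6.cfg.
# The IPv6 entry is illustrative only, sketching the fe80::a9fe:a9fe
# proposal from the Neutron spec alongside the existing IPv4 default.
datasource:
  OpenStack:
    metadata_urls:
      - http://169.254.169.254
      - http://[fe80::a9fe:a9fe]
```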
[09:29] sure, sec
[09:30] tribaal: thx a lot
[09:30] slaweq: https://github.com/canonical/cloud-init/blob/master/cloudinit/sources/DataSourceOpenStack.py#L59
[09:31] tribaal: so in fact if we add some IPv6 address, it can be configured in cloud-init's config to be used
[09:32] so in fact no changes in cloud-init code are required
[09:32] slaweq: yes, it looks like that would be the case.
[09:32] tribaal: maybe we can propose to add such an IPv6 address to the default list later
[09:33] but it's not required to make it work
[09:33] slaweq: if the ipv6 option becomes an openstack standard perhaps you could consider adding it to the default URLs (making it a list). But some guidance from the real cloud-init developers would be better than mine here :)
[09:33] yes, it looks like it.
[09:33] tribaal: ok, thx a lot for the help
[09:33] from my uneducated perspective at least :)
[09:33] slaweq: anytime :)
[09:33] I will send an email to the ML to get some more feedback about that, but you helped me a lot with this
[09:34] good to hear!
=== vrubiolo1 is now known as vrubiolo
[12:46] meena: I saw you left a comment on the pull request https://github.com/canonical/cloud-init/pull/204
[12:46] Odd_Bloke: ^
[12:46] I also saw a comment about considering the CVE-2020-8631.
[12:47] meena: Odd_Bloke I'm just a little confused whether this PR actually addressed the CVE or not, because it still uses random.choice
[13:06] otubo: It doesn't use random.choice, it uses random.SystemRandom.choice.
[13:06] Which uses the system's random pool instead of the pseudorandom Mersenne Twister that random.choice uses.
[13:07] Odd_Bloke: I noticed that. And that was the reason I was confused whether it was really fixed or not :) Now it's clear. Thanks!
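(Editor's note: the distinction Odd_Bloke draws above is the entropy source. A short sketch makes it concrete; the password-style usage here is illustrative, not cloud-init's actual code.)

```python
import random
import string

# random.choice draws from the Mersenne Twister PRNG; its output can be
# predicted once enough of it is observed, which is why it is unsuitable
# for secrets (the core of CVE-2020-8631).
weak = "".join(random.choice(string.ascii_letters) for _ in range(16))

# random.SystemRandom routes .choice() through os.urandom(), the OS
# entropy pool, making it suitable for security-sensitive values.
strong = "".join(
    random.SystemRandom().choice(string.ascii_letters) for _ in range(16)
)

print(weak, strong)  # same shape, very different security guarantees
```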
[13:07] :)
[13:42] rharper: blackboxsw: https://github.com/canonical/cloud-init/pull/244/ <-- this adds the capability to store just GitHub usernames as CLA signers (and adds dhensby as the first such person)
[14:24] rharper: Thanks for the review, I've addressed your comment. :)
[14:24] cool, I'll re-review
[14:25] Thanks!
[14:26] (I happened to look at the tab as your review showed up, I wasn't just sitting there waiting for a review to come in. ;)
[14:26] hehe
=== vrubiolo1 is now known as vrubiolo
[17:38] checking out master and running tox gives me the error tox.ConfigError: ConfigError: No support for the posargs substitution type
[17:42] AnhVoMSFT, can you paste the full output? and `tox --version` and `python -V`
[17:42] err python3 -V :D
[17:49] tox --version
[17:49] 2.3.1 imported from /usr/lib/python3/dist-packages/tox/__init__.py
[17:49] python3 -V
[17:49] Python 3.5.2
[17:50] that looks like Ubuntu 16.04?
[17:50] xenial?
[17:52] yes powersj
[18:02] Odd_Bloke, ^ possible regression from yesterday? If I check out master without yesterday's commits all is well
[18:02] powersj full output https://paste.ubuntu.com/p/b4ncHRJhxc/
[18:03] (sorry for the delay, multitasking)
[18:03] Oh, hmm.
[18:03] Let me take a look.
[18:05] Yeah, OK, looks like it was introduced by my pytest branch.
[18:06] AnhVoMSFT: A short-term workaround is to install tox from PyPI into a virtualenv. That tox version should be fine.
[18:06] In the meantime, I'll dig into fixing it properly.
[18:14] blackboxsw: rharper: https://github.com/canonical/cloud-init/pull/245 <-- this fixes running tox on xenial
[18:15] * blackboxsw tries to reproduce that problem in the first place
[18:19] approved Odd_Bloke just waiting on CI
[18:19] .... again
[18:20] Odd_Bloke, thanks!
[18:27] blackboxsw: https://bugs.launchpad.net/cloud-init/+bug/1867029 ; didn't we look at an issue like this? IIRC, the bionic ifupdown package does not include the source /etc/network/interfaces.d/*.cfg line in /etc/network/interfaces, right?
[18:27] Ubuntu bug 1867029 in cloud-init "Package ifupdown breaks network configuration by cloud-init" [Undecided,New]
[18:27] Landed.
[18:27] AnhVoMSFT: master should now be back to working, thanks for letting us know it was broken!
[18:28] blackboxsw: Looking at timestamps, we weren't waiting for Travis on that one, just on our actual CI jobs.
[18:29] rharper: checking
[18:29] Our worst queue times are likely in the mornings, because I think most of the multipass team are in Europe. :p
[18:30] rharper: we did have an issue just like this (internally): someone was installing ubuntu-desktop on cloud-images, which pulled in ifupdown
[18:30] yes
[18:30] and on bionic the package does not include the source line
[18:30] so we render eni
[18:30] but nothing sources it
[18:31] right, presence of ifupdown on bionic is non-standard/not recommended, and cloud-init checks ifupdown first, but stock cloud images have ifupdown includes disabled
[18:31] because we should be using netplan
[18:31] right
[18:32] still feels a bit like it should be a bug in the ifupdown package for bionic. if installing it, it should probably emit the proper include, or cloud-init can emit a warning saying "hey there I see ifupdown and will be rendering that but I see no includes for the interfaces.d file I'm about to write, so I'm going to include it kthxbye"
[18:33] I think it's probably worth considering it as a cloud-init bug, because we should be smart enough to look at the environment config files for the rendering we are emitting to make sure it is valid
[18:33] what do you think?
[18:34] and it seems like a low-hanging-fruit kind of bug too
[18:34] if we handle it that way in the net renderer.
[18:35] I would like to see a clearer description of the problem; what is the gap when ifupdown is installed on bionic?
[18:38] when ifupdown and netplan are both installed, cloud-init's network renderers prioritize writing /etc/network/interfaces.d/*.cfg instead of /etc/netplan/50-cloud-init.yaml
[18:38] the problem on certified cloud-images is that the magic /etc/network/interfaces config file is in a disabled state
[18:39] http://paste.ubuntu.com/p/9yhHVZjqGB/
[18:39] blackboxsw: yes
[18:39] I updated the bug
[18:39] Right. This feels like it could be an ifupdown bug, we should pursue that route first, I think.
[18:39] and the ifupdown package doesn't actually write or update that file if it already exists in images
[18:39] the workaround is to just put the source line in there; we need to sort out if ifupdown packaging or cloud-init (or both) should ensure that's present
[18:39] Because that comment is currently a lie, I think. :p
[18:40] blackboxsw: even if it were missing, it doesn't put the right source line IIRC
[18:40] yeah I think the ifupdown package postinst script should be a bit smarter instead of just a presence check (because we know cpc images intentionally disable that file with comments etc)
[18:40] rharper: if the /etc/network/interfaces file is missing it writes the appropriate include
[18:41] "it" being the ifupdown deb postinst script
[18:41] yeah and that comment is a lie, because just installing ifupdown leaves the same /etc/network/interfaces file in place (disabled)
[18:42] just tested that
[18:42] same ENI content
[18:42] blackboxsw: it's functional, but ISTR some subtlety between source-directory and source *.cfg
[18:42] I forget what at the moment
[18:42] I think there's a built-in regex that ifupdown uses
[18:43] with source-directory vs. source *.cfg
[18:43] and on bionic the ifupdown.postinst source-directories the right dir http://paste.ubuntu.com/p/TKtfmN7Zyt/
[18:43] I think I saw a debian bug related to this since their images used source-directory
[18:43] blackboxsw: yes, it's the correct directory
[18:43] but it restricts what files it reads within it
[18:43] cloud-init's cfg is read
[18:43] but there are subtle regex differences for what files are named
[18:43] and what ifupdown actually reads
[18:44] so, an ubuntu cloud-image with ifupdown and source *.cfg will not always see the same files as an image with source-directory
[18:44] rharper: are you saying "source-directory /etc/network/interfaces.d" is more strict than "source /etc/network/interfaces.d/*.cfg"?
[18:44] possibly the opposite
[18:44] ahh, less strict
[18:44] but I need to find the bug
[18:46] https://github.com/canonical/cloud-init/commit/a6faf3acef02bd8cd4d46ac9efeebf24b3f21d81
[18:46] The current filename, 50-cloud-init.cfg, does not match against the RE
[18:46] that is used to scan the directory for configurations (ASCII upper- and
[18:46] lower-case letters, ASCII digits, ASCII underscores, and ASCII
[18:46] minus-hyphens):
[18:47] ahh interesting. ok
[18:48] Odd_Bloke: just hired someone in Argentina ;)
[18:48] so do we want cloud-init behavior to prescribe that anyone wanting network config in ENI needs to have *.cfg files?
[18:48] blackboxsw: well, the distro class has a say in it
[18:48] Also, one of our devs is in Tennessee, so… ;P
[18:49] Saviq: so annual meetings in Colombia to split the difference?
[18:50] so TN and Argentina have to travel the same distance :)
[18:50] With how things are going, no f2f meetings for a year ;P
[18:50] true story
[18:53] Saviq: Please go to bed (and stop kicking off CI runs). ;)
[19:03] I'm looking at the daily build failures next.
[19:03] (I believe it should just be a case of updating the Build-Depends in each packaging branch.)
[19:18] Odd_Bloke: awesome!
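(Editor's note: the filename restriction quoted from that commit can be checked directly. The regex below is a reconstruction of the source-directory rule as described in the commit message, i.e. ASCII letters, digits, underscores and hyphens only; it is not ifupdown's actual source code.)

```python
import re

# Reconstruction of the source-directory filename rule described above:
# only names made of ASCII letters, digits, underscores and hyphens are
# scanned, so a dotted name like "50-cloud-init.cfg" is skipped, while
# "source /etc/network/interfaces.d/*.cfg" would happily glob it.
RUNPARTS_RE = re.compile(r"^[a-zA-Z0-9_-]+$")

for name in ("50-cloud-init.cfg", "50-cloud-init"):
    matched = bool(RUNPARTS_RE.match(name))
    print(name, "read" if matched else "ignored")
```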
[19:23] Hmm, actually, this is a little more complicated than that. If I update the Build-Depends without pulling in the upstream commit, then builds will fail when we just cherry-pick changes (because nose won't be installed).
[19:24] s/the upstream commit/\0 that introduced pytest/
[19:29] rharper: https://github.com/canonical/cloud-init/pull/114 is up for re-review (ec2 secondary ips). one question I have is how do we want cloud-init
[19:30] finally got my unittest fixes. FWIW assertItemsEqual for dict-type items instead of assertEqual, because python dict key ordering is different on the xenial version of python as opposed to > xenial
[19:30] I keep forgetting that
[19:31] blackboxsw: yeah, I saw that change in your tests; that was new to me w.r.t. having a non-ordered content comparison
[19:32] yeah it bit us on landscape tests all the time, and sometimes I've seen it in cloud-init too
[19:32] most of the time I remember when writing the test... ahh well
[19:32] I get bit by that all the time :)
[19:32] silly former landscapers :)
[19:33] hehe you'd think we'd have learned by now :)
[19:33] hah
[19:33] tribaal: btw, thanks for pointing slaweq to the mailing list and the datasource bits; much appreciated
[19:33] anytime :)
[19:35] seems like a pretty straightforward patch to write (to me), but the multicast point brought up on the list as a followup is an interesting approach
[19:36] assertEqual should work on dicts.
[19:36] yeah, and I'm worried about the cost of either hitting ipv4 when there is none, or ipv6 when there is none
[19:36] even though I'm not directly involved with an openstack cloud anymore, I could see the multicast thing transfer well to other clouds as well, indeed
[19:36] tribaal: I
[19:36] you?
[19:36] :)
[19:37] hehe, I've not responded to that yet; the idea of having a common "IMDS" url is interesting, but I'm not sure it makes anything more common other than the location; the content is still per-platform
[19:37] Yeah, that's true
[19:51] blackboxsw: Just added a comment about assertItemsEqual; you're actually papering over a _list_ being out-of-order, this isn't to do with dict key ordering.
[19:52] (Unless, I suppose, that list is being generated from dict keys.)
[19:52] Odd_Bloke: ahh right, so we could/should sort that list to avoid thrashing on different versions of python
[19:52] right, the source comes ultimately from keys
[19:52] we can sort it in the code proper or in assertItemsEqual
[19:53] I'd prefer to handle it in unit tests because we don't truly need to do the sorting work for what cloud-init renders
[19:53] I think it's better to have the code produce something that's stable across Python versions.
[19:53] but I can be swayed either way
[19:53] rharper: I need a swing vote on that :)
[19:53] Well, I gave an example where the order does make a difference.
[19:53] surely
[19:54] I don't know if it makes a _practical_ difference, but there would be different internal state at least.
[19:54] is it a *value* list or the order of the keys() output?
[19:55] we end up running json.dumps with sort_keys=True, so what we write out is in sorted *key* order, but the value indexed by the key is not sorted AFAICT in our code
[19:55] at some unit-test time cost, it may be better to dump the output via util.json_dumps() and read it back and compare
[19:56] rharper: the list value comes out sorted differently from macs_to_nics.items()
[19:56] the value, right
[19:56] so the value of 'addresses' sorts differently on xenial
[19:56] based on what mac we originally process from IMDS
[19:57] I can make the change to sort the macs_to_nics dict, and then we should not have this problem.
[19:57] easy enough. and minor cpu cost
[19:57] that seems better, and then assertEqual should work as well, IIUC
[19:58] right, then assertEqual will work
[19:58] and we may have need for ordered internal state in the future anyway.
[19:58] blackboxsw: thoughts on whether we can generically walk the dict, and on value types apply sorted?
[19:58] that is, if we add new keys in the future, how do we prevent having more unsorted lists
[20:15] blackboxsw: rharper: What do you think of this approach: https://github.com/canonical/cloud-init/pull/246
[20:15] looking
[20:15] It means our daily builds will needlessly install python3-nose and our archive builds will needlessly install python3-pytest.
[20:16] The alternative would be to cherry-pick the pytest commit onto each release branch, so that we can consistently just test with pytest.
[20:16] hrm
[20:16] (And all of this inconsistency will go away in the next SRU regardless.)
[20:17] leading with that last comment there; I'm in favor of some short-term "needless" package installs to avoid git cherry-pick pain on each release branch
[20:17] Odd_Bloke: I think we ultimately have to cherry-pick into each ubuntu/ anyway, right
[20:17] blackboxsw: on sru, we new-upstream-snapshot *and* update build deps
[20:17] so, no?
[20:17] our new-upstream-snapshot doesn't pull in debian/* file changes
[20:18] rharper: ahh I see, right, deps should get corrected via new-upstream-snapshot
[20:18] ok
[20:18] right, but the issue is that tools/read-deps picks up the new change, but debian/ is stale
[20:18] The deps won't be corrected by new-upstream-snapshot.
[20:18] just ubuntu/xenial/debian doesn't see updates from ubuntu/devel/debian due to new-upstream-snapshot
[20:18] Odd_Bloke: btw, are we actually causing you guys delays? this is Travis, I assume? is it b/c we're under the same canonical/ GH org?
[20:19] We will need to manually drop the python3-nose Build-Depends once it is actually unused.
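(Editor's note: the fix agreed on above, sorting the dict-derived list so that plain assertEqual is stable across Python versions, amounts to something like this. The data shape is a simplified stand-in for the macs_to_nics structure, not the real EC2 datasource code.)

```python
# Simplified stand-in for the macs_to_nics-style data discussed above.
macs_to_nics = {"0a:07:84:3d:00:01": "eth0", "0a:07:84:3e:00:02": "eth1"}

# Iterating a dict directly ties list order to the interpreter's key
# ordering, which differed between xenial's Python 3.5 and later
# releases; sorting first makes the rendered list deterministic, so a
# plain assertEqual works in tests without assertItemsEqual.
addresses = [nic for _mac, nic in sorted(macs_to_nics.items())]

print(addresses)  # -> ['eth0', 'eth1'] regardless of insertion order
```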
[20:19] Saviq: Yeah, we've had the canonical org put onto a paid Travis plan so that will hopefully help.
[20:19] Odd_Bloke: right, we move release branch to master *and* update debian Build-Deps
[20:20] Saviq: Part of the problem is that we aren't (yet) using a landing bot like bors, so we really notice delays.
[20:20] ok Odd_Bloke, is it worth an SRU_BLOCKER comment added to ubuntu/(xenial|bionic|eoan)/debian/control or a trello card on our SRU template board?
[20:20] blackboxsw: This isn't a blocker for SRUs IMO.
[20:20] I guess what I'm wondering is how we avoid losing that work item
[20:21] Odd_Bloke: ah so that's why travis-ci.com got enabled on the org ;)
[20:21] blackboxsw: I was just thinking we might want to do release-branch merging in our CI builds; then we could have seen this fail in CI if we merged the release branch debian/ on top of the PR branch
[20:22] blackboxsw: Perhaps on the SRU template board?
[20:22] (If we do drop it, it's not the end of the world, we'll just be installing nose in our build environments unnecessarily.)
[20:22] s/drop it/forget to make the change/
[20:23] rharper: Chad is +1 on the PR, I take it from your above comments that you are too?
[20:23] yes, I'll mark approve as well
[20:27] OK, I'm going to make sure that focal now builds, then will propose a similar change for each other release.
[20:27] Saviq: Yep, that's why. :)
[20:31] I suppose that was inevitable, and makes sense anyway…
[20:44] ok, I'm reviewing Goneri's netbsd (*bsd) support PR https://github.com/canonical/cloud-init/pull/62 and the network config emitted from the distro is v1 in both cases. I'm thinking since we are touching/reviewing that support, maybe that generate_fallback config could be simple v2 https://github.com/canonical/cloud-init/pull/62/files#diff-1708fc6fbf7cc4ca958a7adab7ad615eR35-R40
[20:45] rharper: Odd_Bloke any opinions generally on whether we should be moving distro network rendering to v2 or leave them all v1?
[20:45] blackboxsw: do you mean datasource?
[20:45] I knew datasource-wise we'd like to make that migration to v2 in the datasource
[20:45] rharper: I do mean distro; in some cases distros construct v1 network config for fallback
[20:46] we could move them to v2 as they are being touched, like the link above
[20:46] so that eventually we can move our internal state off of v1
[20:46] and drop our network config v1 -> v2 translation-layer logic
[20:47] what generates v1 in distro?
[20:47] distro.generate_fallback_config is the method I'm talking about
[20:47] just bsd, no?
[20:47] could this be done in a 2nd iteration? I understand the idea, but I would like to land the PR too :-)
[20:47] it calls net.generate_fallback_config()
[20:47] well, our base Distro class does
[20:47] which emits v2
[20:47] blackboxsw: huh?
[20:47] right rharper, in the existing PR, it did v1.
[20:47] def generate_fallback_config(self):
[20:47]     return net.generate_fallback_config()
[20:47] for netbsd only. and I wanted to steer that to v2
[20:47] rharper: https://github.com/canonical/cloud-init/pull/62/files#diff-1708fc6fbf7cc4ca958a7adab7ad615eR35-R40
[20:48] I want that v2
[20:48] Goneri: it could be done in a followup
[20:48] blackboxsw: I suppose; I'm not too worried about it. the biggest win in moving to v2 in fallback is that if the target distro supports v2, then we can pass it through as-is
[20:49] it's unlikely that v2 will ever be around on bsd, so it's not, IMO, a big issue to emit v1 and convert to network_state()
[20:49] right, which isn't going to happen on *bsd. so it's probably more paperwork to move to v2
[20:49] right
[20:49] ok makes sense, thanks for being the sounding board. Goneri we won't worry about that then
[21:01] https://github.com/canonical/cloud-init/pull/247 <-- eoan
[21:15] https://github.com/canonical/cloud-init/pull/248 <-- bionic
[21:25] https://github.com/canonical/cloud-init/pull/249 <-- xenial
[21:25] blackboxsw: rharper: Review of ^ would be appreciated!
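(Editor's note: for readers following the v1-vs-v2 fallback discussion above, the two shapes look roughly like the sketch below. Both dicts are illustrative trimmed-down examples of cloud-init's network-config versions for a single DHCP interface, not output copied from the code under review.)

```python
# Sketch of a version-1 fallback network config (the shape the BSD
# branch under review emits): a flat list of typed entries.
fallback_v1 = {
    "version": 1,
    "config": [
        {"type": "physical", "name": "eth0", "subnets": [{"type": "dhcp"}]},
    ],
}

# Sketch of the equivalent version-2 (netplan-style) config, keyed by
# device name, which v2-capable distros could pass through as-is.
fallback_v2 = {
    "version": 2,
    "ethernets": {"eth0": {"dhcp4": True}},
}

print(fallback_v1["version"], fallback_v2["version"])
```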
[21:25] ok
[21:26] 247 merged
[21:26] rharper: you on 248 or 249?
[21:27] I object to https://github.com/canonical/cloud-init/pull/248/files formatting, as cloud-init attribution in previous changelog entries generally carries a different 'style'
[21:29] blackboxsw: neither, as you hopped on one already; was thinking it was easier to review all of them together for comparison purposes
[21:29] rharper: +1, though I'd like you and Odd_Bloke to weigh in on 248
[21:29] specifically my unrealistic https://github.com/canonical/cloud-init/pull/248#discussion_r391278819
[21:29] or pedantic
[21:30] it just represents something 'different' and I get all prickly with change ;)
[21:31] same nit/request w.r.t. https://github.com/canonical/cloud-init/pull/249/files
[21:31] I think generally we redacted authorship attribution for the upstream team in ubuntu deb changelog entries, but should that be different w.r.t. specific packaging changes to the debian/* directory?
[21:33] blackboxsw: generally I think it might be useful to keep authorship for the debian dir changes since those are downstream changes rather than upstream code; but I'm fine redacting as well
[21:33] ok /me backs down
[21:33] works for me
[21:34] though when new-upstream-snapshot combines these items I wonder if it's worth us redacting that to make the changelog a bit more readable
[21:34] since it's likely to include a lot of fixes
[21:34] redacting the authorship sections
[21:38] heh I see Odd_Bloke redacted :) thanks man
[21:39] I mean thanks Dan
[21:39] Done for both.
[21:40] approved, waiting on ci
[21:41] FWIW, those names are categorically different from the "New upstream snapshot" lines where we redact the names, and are a standard feature of Debian changelogs.
[21:42] though Odd_Bloke do you think that not bumping the debian/changelog version will actually not allow our builds to succeed?
[21:42] https://github.com/canonical/cloud-init/pull/249 and https://github.com/canonical/cloud-init/pull/248
[21:42] Why?
[21:42] I think we actually need a new dch -i, right?
[21:42] ohh wait, recipe builds failed
[21:43] so there isn't a build of that binary already in repos
[21:43] That version isn't really used anyway, the recipe build produces its own, I think?
[21:43] I think the changelog is only used for proper uploads, and we haven't released these versions yet.
[21:43] Odd_Bloke: I was thinking only in terms of daily repos
[21:43] daily ppa
[21:44] which should already have a successful build pre-pytest
[21:44] maybe??
[21:44] The recipe builds don't use that version string.
[21:44] They look like 20.1-2346-g65a1b90-0ubuntu1+1792~trunk~ubuntu20.04.1
[21:46] ahh right, it's based on something else https://launchpad.net/~cloud-init-dev/+archive/ubuntu/daily
[21:46] ok safe!
[21:46] thanks for that
[21:46] which makes sense, as we might fix an ubuntu/ branch with more commits and not alter the debian/changelog, so we'd need a different unique counter
[21:47] to allow unreleased uploads to get tested.
[21:47] ok I'm back on waiting for CI again :)
[21:47] actually on a netbsd review
[21:49] blackboxsw: I'm watching CI for those packaging branches, you can focus on something else. :)
[22:03] OK, all daily builds are now back to working. \o/
[22:25] rharper, https://github.com/canonical/cloud-init/pull/238 is that ready?
[23:16] powersj: I think I need a 2nd pass there.
[23:16] my irc bouncer died again
[23:16] will be making the switch to thelounge.chat tonight
[23:25] powersj: I would like to have some runtime verification on it before landing