[01:45] <Goneri> meena, hum, let me check
[01:48] <Goneri> meena, my bad, I forgot it during the last rebase. sorry. I just pushed a fix.
[03:05] <rharper> smoser: http://paste.ubuntu.com/p/CQRgW9HQRw/
[03:13] <smoser> rharper: yeah, i verified that works too.  was ripping the ctn_cfg out from other places.
[10:33] <meena> powersj: vendor-data is a bit underdocumented.
[11:02] <meena> I fucked up.
[15:10] <Odd_Bloke> meena: Should we revert https://github.com/canonical/cloud-init/commit/e2840f1771158748780a768f6bfbb117cd7610c6 ?
[15:14] <meena> Odd_Bloke: i guess we can leave the test
[15:14] <meena> I'll blame it on mental overload
[15:17] <Odd_Bloke> These things happen!
[15:17] <Odd_Bloke> Particularly in complex code bases. :p
[15:17] <Odd_Bloke> meena: Do you want to propose the revert, or shall I?
[15:24] <powersj> meena, is that a kind hint that you want me to go write more docs :)
[15:26] <smoser> rharper: https://github.com/cloud-init/qa-scripts/pull/15
[15:26] <meena> Odd_Bloke: im on the go, rn, so if you're in the code, go ahead
[15:27] <meena> powersj: if i knew how it works, i'd do it myself 😅
[15:27] <Odd_Bloke> meena: Cool, will do.
[15:34] <Odd_Bloke> meena: https://github.com/canonical/cloud-init/pull/116
[15:34] <Odd_Bloke> powersj: I don't appear to be able to add non-Canonical folks as reviewers on PRs. :/
[15:34] <Odd_Bloke> meena: (I would have added you as a reviewer if not for ^. :p)
[15:35] <powersj> Odd_Bloke, nor am I
[15:37] <rharper> smoser: thanks!
[15:51] <Odd_Bloke> powersj: I've tried adding meena as a Read permitted collaborator.
[15:51] <Odd_Bloke> To see if that adds them to the list.
[15:54] <Odd_Bloke> smoser: rharper: So is that profile PR a fix for the pstart issue we're seeing, or a simplification so we can reproduce it with less moving parts?
[15:56] <smoser> that should fix it, Odd_Bloke
[15:56] <smoser> the issue was the "cached" (memory) config that we restored on exit was stale.
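(For anyone skimming later, the failure mode smoser describes, restoring an in-memory config snapshot that went stale while held, can be sketched roughly as below. This is an illustrative Python mock-up with hypothetical names, not cloud-init's or the qa-scripts' actual code.)

```python
class ProfileEditor:
    """Hypothetical sketch: snapshot config on entry, restore on exit."""

    def __init__(self, store):
        self.store = store

    def __enter__(self):
        # Snapshot taken here goes stale if the store changes later.
        self._cached = dict(self.store)
        return self.store

    def __exit__(self, *exc):
        # BUG: restoring the (possibly stale) snapshot discards any
        # edits made while the context was open.
        self.store.clear()
        self.store.update(self._cached)


store = {"limits": "old"}
with ProfileEditor(store):
    store["limits"] = "new"   # edit made while the context is open
print(store["limits"])        # the stale snapshot wins: prints "old"
```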
[15:59] <rharper> Odd_Bloke: I used smoser's simplified reproducer to file the issue with lxd yesterday
[15:59] <smoser> and then we got the response expected
[15:59] <rharper> and stgraber pointed out the error and the solution as well
[15:59] <smoser> and used --dump-commands
[15:59] <rharper> ah, yes
[15:59] <smoser> which was helpful.
[15:59] <rharper> and I followed up with a, and what _should_ we do
[16:00] <rharper> so, team effort to extract a solution
[16:00] <smoser> the one thing i'm not sure about is how old 'lxc profile add'
[16:00] <smoser> is
[16:00] <rharper> right, I've not verified it on Xenial
[16:00] <rharper> but we're running bionic + snap-based lxd in CI
[16:00] <smoser> seems old enough.
[16:03] <rharper> ooo, it's 19.4 day
[16:04] <smoser> https://github.com/cloud-init/qa-scripts/pull/16 gets rid of:
[16:05] <smoser>   rcontainer = remote + ":" + name if remote else name
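(For context, that one-liner builds an LXC-style, optionally remote-qualified container reference. A self-contained sketch of the pattern, with a hypothetical helper name:)

```python
def qualified_name(name, remote=None):
    # "remote:name" when a remote is given, plain "name" otherwise,
    # the same conditional expression as the quoted one-liner.
    return remote + ":" + name if remote else name

print(qualified_name("ctn1"))            # ctn1
print(qualified_name("ctn1", "images"))  # images:ctn1
```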
[17:04] <meena> Odd_Bloke: did i approve too early (before accepting your invitation?) or do i still have no write access, like my github message says?
[17:06] <Odd_Bloke> meena: I didn't grant you write access, I just granted you explicit read access, in the hopes that would mean you showed up in the drop-down listing potential reviewers.
[17:06] <Odd_Bloke> And I _think_ that worked.
[17:10] <blackboxsw> Odd_Bloke: that looks good on my side and I see someone was able to set meena as reviewer
[17:10] <blackboxsw> on the PR
[17:16] <Odd_Bloke> I believe meena just reviewed it of their own accord.
[17:16] <Odd_Bloke> Rather than it being requested.
[17:17] <Odd_Bloke> (Obviously it _was_ requested, just not through the GH UI. :p)
[17:17] <blackboxsw> followup suggestion on powersj's set hostname branch https://github.com/canonical/cloud-init/pull/109#discussion_r358543988
[17:20] <Odd_Bloke> blackboxsw: Which platforms base dynamic DNS on the initial DHCP requests?  (I thought those were the platforms that the current behaviour doesn't work well for?)
[17:20] <blackboxsw> and per our conversation yesterday Odd_Bloke I'm in agreement with you I think. If metadata or user-data set a hostname for a platform, it seems correct for cloud-init to assert that the hostname remains set per the opinionated deployment directives.
[17:21] <blackboxsw> Odd_Bloke: juju driving some version of vSphere (so OVF + juju). I believe the interactions of both substrates rely on DDNS; our init-local sandbox DHCP attempts caused the DDNS in that scenario to keep transferring DNS entries for the 'ubuntu' hostname to different vms as they were brought up
[17:24] <blackboxsw> Odd_Bloke: more context on that is here https://bugs.launchpad.net/cloud-init/+bug/1746455/comments/5  and here https://bugs.launchpad.net/cloud-init/+bug/1746455/comments/23
[17:27] <blackboxsw> Odd_Bloke: so juju deploys (or deployed) some magic that adds DHCP hostname request logic to the deployed target (and a lot of cloud-init user-data). It is possible that there is an issue in juju machine deployment logic that needs a bit more investment to make sure to avoid triggering this case
[17:29] <Odd_Bloke> blackboxsw: I'm a little confused.  That's a case where this behaviour causes problems, but your proposed paragraph suggests that that's the motivation for having this behaviour.
[17:30] <blackboxsw> Odd_Bloke: sorry I wasn't clear here. That is the case where cloud-init's behavior of setting hostname from metadata before running sandboxed dhcp client fixes that issue.
[17:30] <blackboxsw> if we didn't set hostname before our sandbox dhclient run, we would break juju deploying on vsphere
[17:31] <Odd_Bloke> Oh, you know what, I'm thinking of Azure-specific behaviour.
[17:31] <Odd_Bloke> Which is handled by the datasource, not through this route.
[17:31] <blackboxsw> I don't really know about platforms on which us setting hostname early and repeatedly resetting is causing problems
[17:31] <blackboxsw> ahh right Odd_Bloke
[17:32] <Odd_Bloke> Which explains why I couldn't understand what was going on. :p
[17:32] <blackboxsw> yet, doesn't explain why I didn't. But, some mysteries are better left unsolved
[18:25]  * powersj looks for 19.4
[19:24] <blackboxsw> Odd_Bloke: powersj rharper I know we are in discussion about best process for reviewing PRs quickly
[19:24] <blackboxsw> can we in the near term label a PR with 'incomplete' if it is waiting on proposer feedback/resolution?
[19:32] <Odd_Bloke> blackboxsw: I have reservations about introducing a stopgap when we plan on having a full process in the next few days.
[19:32] <blackboxsw> thanks Odd_Bloke +1
[19:32] <blackboxsw> will leave PRs undecorated at the moment
[19:33]  * blackboxsw was going through initial upstream 19.4 board tasks 
[19:33] <Odd_Bloke> blackboxsw: Did you cut 19.4 yet?
[19:33] <blackboxsw> Odd_Bloke: nope, doing that in a matter of a few minutes
[19:33] <Odd_Bloke> blackboxsw: Because I think https://github.com/canonical/cloud-init/pull/116 is worth including.
[19:33] <blackboxsw> just reviewing CI right now
[19:33] <blackboxsw> ok Odd_Bloke will review that
[19:34] <blackboxsw> Odd_Bloke: was there a bug associated with that issue?
[19:34] <blackboxsw> that PR I mean
[19:34] <blackboxsw> as in, how did we/you discover the issue?
[19:36] <blackboxsw> just an errant extra changeset that snuck in in https://github.com/canonical/cloud-init/commit/e2840f1771158748780a768f6bfbb117cd7610c6 or something else triggered discovery of the issue?
[19:39] <blackboxsw> maybe here is the context I was missing https://bugs.launchpad.net/cloud-init/+bug/1854594/comments/8
[19:39] <blackboxsw> the last comment being, oops shouldn't have done that
[19:41] <meena> blackboxsw: basically, the thing that hid the issue was system_info/distro being overwritten ➕ me misinterpreting pw(8).
[19:41] <blackboxsw> my question around that PR is that https://bugs.launchpad.net/cloud-init/+bug/1854594 isn't really fixed then right?
[19:41] <blackboxsw> and wasn't really broken either right
[19:41] <meena> blackboxsw: you said something about reading comprehension too this morning, i believe.
[19:41] <meena> blackboxsw: yes, wasn't broken ☹ shouldn't have fixed it.
[19:42] <meena> i'm very sorry about this mishap.
[19:42] <blackboxsw> roger dodger. ok I'm marking that bug as invalid then. no worries
[19:43] <blackboxsw> ok merged
[19:43] <meena> my perfect score is gone
[19:44] <Goneri> you're still perfect <3 :-)
[19:44] <Odd_Bloke> meena: Honestly, don't worry about it, these things happen.
[19:44] <Odd_Bloke> And we caught it!
[19:45] <blackboxsw> heh.
[19:45] <blackboxsw> +1
[19:45] <meena> Odd_Bloke: i just wish we caught it before SRU release xD
[19:45] <Odd_Bloke> meena: The SRU doesn't affect FreeBSD, so no harm there.
[19:46] <Odd_Bloke> blackboxsw: So I've got someone to open up the CLA migration MP and PR.  Do I merge the PR and close the MP out?
[19:48] <blackboxsw> Odd_Bloke: since we can't use review-mps from git@github.com:CanonicalLtd/uss-tableflip.git anymore due to our security config, we'll have to do that manually. Right: merge the PR once you confirm the matching LP branch proposal on the associated LP account, and manually close the LP-side branch proposal with a comment pointing to the upstream github commitish
[19:48] <blackboxsw> and you've probably seen ad-m's next PR that can follow up after this
[19:48] <Odd_Bloke> Yep, I'm working through his things ATM.
[19:48] <blackboxsw> tkx
[19:49] <Odd_Bloke> blackboxsw: Why do I need to merge the PR locally?
[19:49] <Odd_Bloke> (As opposed to using the GH UI.)
[19:50] <blackboxsw> Odd_Bloke: sorry I meant using the GH UI. (I meant 'local' in the sense of in github, not over in launchpad)
[19:50] <Odd_Bloke> Cool
[19:50] <blackboxsw> Odd_Bloke: rharper what do we think? https://github.com/canonical/cloud-init/pull/77
[19:51] <ahosmanMSFT> Hi all, I'm working on this cloud-init (Azure) bug that might be connected to cloud config. Here is the behavior: start a vm, ssh via password, run cloud-init clean, then restart. Now the user can't ssh via password. Looking into DataSourceAzure, the functionality is there, but the behavior doesn't match. Am I missing something? https://git.launchpad.net/cloud-init/tree/cloudinit/sources/DataSourceAzure.py#n1206
[19:51] <blackboxsw> static6 eni support for 19.4?
[19:52] <blackboxsw> I'm thinking hold as it needs a bit more review and we will have a 20.1 early next year.
[19:52] <blackboxsw> hi ahosmanMSFT.
[19:53] <blackboxsw> ahosmanMSFT: could it be related to the byteswapping branch that just landed?
[19:53] <blackboxsw> I didn't see this in SRU testing
[19:53] <blackboxsw> which didn't have your fix
[19:54] <ahosmanMSFT> blackboxsw: This bug has apparently existed for a while
[19:54] <blackboxsw> ahosmanMSFT: a good attempt would be to try reproducing on current xenial vms
[19:54] <blackboxsw> to see if it has the same broken behavior
[19:55] <ahosmanMSFT> I've reproduced the behavior on both xenial and bionic
[19:55] <blackboxsw> generally after a cloud-init clean all cloud-init semaphores should be removed and all modules should get re-run
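(Roughly, the semaphore mechanism gates each module on a marker file, and clean deletes the markers so everything becomes eligible to run again. A simplified illustrative sketch of that marker-file pattern, not cloud-init's actual semaphore code:)

```python
import os
import tempfile

def run_once(sem_dir, module_name, fn):
    """Run fn only if no semaphore marker exists; write one afterwards."""
    marker = os.path.join(sem_dir, module_name)
    if os.path.exists(marker):
        return False          # already ran; skip
    fn()
    open(marker, "w").close()  # record that the module has run
    return True

def clean(sem_dir):
    # 'cloud-init clean' analog: removing the markers makes every
    # module eligible to run again on the next boot.
    for entry in os.listdir(sem_dir):
        os.remove(os.path.join(sem_dir, entry))

sem = tempfile.mkdtemp()
ran = []
run_once(sem, "set_passwords", lambda: ran.append("first"))
run_once(sem, "set_passwords", lambda: ran.append("second"))  # skipped
clean(sem)
run_once(sem, "set_passwords", lambda: ran.append("third"))   # re-runs
print(ran)  # → ['first', 'third']
```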
[19:55] <blackboxsw> ok is there a bug filed ?
[19:56] <ahosmanMSFT> blackboxsw: Not sure, saw this in our internal backlog and picked it up
[19:57] <ahosmanMSFT> It's because the password gets redacted in the ovf-env.xml file
[20:00] <blackboxsw> best bet is to check whether a bug is already filed @ https://bugs.launchpad.net/cloud-init/+bugs. If not, file one at https://bugs.launchpad.net/cloud-init/+filebug with the cloud-config required to show the problem and attach the logs tarfile from cloud-init collect-logs.  Generally we'd expect to see the ssh, password and user_groups config modules running after a clean.  But, due to sourcing
[20:00] <blackboxsw> content from ovf-env.xml this information might not be present anymore after a clean.
[20:00] <blackboxsw> as you said, if it was redacted and only run once
[20:00] <blackboxsw> then new cloud-init boots after clean may not have that metadata available anymore
[20:01] <blackboxsw> TBH, cloud-init clean is a developer tool, and a blunt hammer.  It's not intended for typical use-cases for ongoing system maintenance
[20:02] <blackboxsw> so it is not a typical consumer use pattern that I'd expect you to jump through hoops to support, if the initial provisioning of a vm does properly set up that configuration
[20:22] <ahosmanMSFT> blackboxsw: Is there a way to persist metadata, or to mitigate the issue of cleaning+restarting causing the password lockout? You are right, the typical consumer wouldn't run into this. But I'd like to find some resolution
[20:31] <rharper> ahosmanMSFT: sounds similar to this, https://bugs.launchpad.net/cloud-init/+bug/1849677
[20:33] <rharper> ahosmanMSFT: w.r.t persistent metadata, it's not clear to me what you mean;   for debugging, I suggest creating an additional user on the system after the initial launch with known password and sudo privs;  then after a clean + reboot; login as the secondary user you created, so you can collect cloud-init logs
[20:38] <ahosmanMSFT> rharper: That is the bug I am dealing with. Creating a new user/pass was how I was getting on the vm to collect logs.
[20:39] <rharper> ok, I didn't see any logs from the secondary boot to contrast with the first one;
[20:41] <rharper> ahosmanMSFT: interesting, so it seems to me that it may be a side-effect of redacting the password value in the ovf-env.xml file;  that's not kept in its original state in your boot, clean, reboot scenario, right ?
[20:41] <rharper> in which case, on the secondary boot (after clean) the xml contents are still redacted instead of the original password supplied ?
[20:41] <rharper> does that sound plausible ?
[20:42] <rharper> ahosmanMSFT: on comment #2, https://bugs.launchpad.net/cloud-init/+bug/1849677/comments/2  ; there is mention that the iso only exists on first boot
[20:57] <ahosmanMSFT> rharper: exactly, the ovf is cached via waagent
[20:57] <ahosmanMSFT> Trying the committed fix now...
[20:57] <rharper> ah, ok
[20:58] <rharper> sounds like you've reproduced the issue that we now have a fix for
[21:14] <meena> anyone considered adding CircleCI to Travis?
[21:16] <meena> … instead of just having travis.
[21:32] <ahosmanMSFT> blackboxsw rharper: Has this fix worked for you? https://bugs.launchpad.net/cloud-init/+bug/1849677/comments/5
[21:32] <ahosmanMSFT> I just tried it and it didn't work, still get locked out
[21:34] <rharper> ahosmanMSFT: we did not test it directly
[21:34] <rharper> do you have logs from first and second boot ?
[21:42] <Odd_Bloke> meena: Why would we use both?
[21:42] <meena> Odd_Bloke: for freebsd.
[21:45] <Odd_Bloke> meena: Oh, I didn't know they supported it.
[21:46] <blackboxsw> ok upstream release bug complete for 19.4
[21:46] <blackboxsw> https://bugs.launchpad.net/cloud-init/+bug/1856761
[21:46] <blackboxsw> adding a simple script update PR to make this a bit more automated
[21:54] <meena> Odd_Bloke: at least last time i looked at it
[21:55] <powersj> blackboxsw, thank you!
[21:59] <meena> Odd_Bloke: cirrus ci
[22:00] <meena> https://cirrus-ci.org/guide/FreeBSD/
[22:01] <Odd_Bloke> Aha, OK.
[22:01] <Odd_Bloke> That explains why I couldn't find it in the Circle docs. :p
[22:05] <meena> they're all ci and c and stuff
[22:10] <blackboxsw> https://github.com/CanonicalLtd/uss-tableflip/pull/27
[22:10] <blackboxsw> upstream release process update and simple script
[22:11] <blackboxsw> upstream release commit https://github.com/canonical/cloud-init/pull/121
[22:34] <meena> so, anyone got anything against merging Goneri's pr, so we can go on with life and refactor cloudinit.net? https://md.hecke.rs/z17JGX4HT4emH5jTEhMuTA
[22:39] <blackboxsw> thanks Odd_Bloke . scrubbed changelog (we probably need that change for log2dch too now to clean out merge markers)
[22:48] <Odd_Bloke> blackboxsw: Well, we shouldn't need to any longer, as we've disabled merge commits.
[22:50] <meena> who allowed hetzner to be merged without documentation???
[23:33] <blackboxsw> hrm I think I may have run out of daylight and reviewers on the upstream 19.4 cut for today. rharper or powersj or smoser, if around: I've queued https://github.com/canonical/cloud-init/pull/121 from tip for the cloud-init 19.4 release
[23:34] <blackboxsw> if that lands today, I'll publish 19.4 to focal, and send a release email & discourse post out