[10:50] <meena> absolutely love how the most innocent "do this on *BSD" function turns into a can of worms https://github.com/canonical/cloud-init/pull/298
[13:24] <beezly> Hello all - I'm trying to configure an EFS mount using the mounts module in cloud-init, but I'm hitting a problem. It appears to completely ignore the entry in the mounts yaml.
[13:26] <beezly> Hello all - I'm trying to configure an EFS mount using the mounts module in cloud-init, but I'm hitting a problem. It appears to completely ignore the entry in the mounts yaml.
[13:26] <beezly> oops - sorry about that.
[13:26] <beezly> I can see it reading the YAML ok and it says... mounts configuration is [['fs-10e8d4db:/', '/opt/jenkins_workspace', 'efs', 'defaults,_netdev', 0, 0]]
[13:27] <beezly> but then it says "Attempting to determine the real name of fs-10e8d4db:/" followed by "Ignoring nonexistent named mount fs-10e8d4db:/"
[13:42] <Odd_Bloke> beezly: It looks to me like the code doesn't handle "devices" that don't look like an actual /dev/... device, unfortunately: https://github.com/canonical/cloud-init/blob/master/cloudinit/config/cc_mounts.py#L115
[13:42] <beezly> yeah - i'm just looking through that. Might have a patch soon-ish.
[13:43] <beezly> I'm making an assumption that anything that matches ^.+:/.* should be considered a network device.
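The check beezly proposes can be sketched as follows; `is_network_device` is a hypothetical helper name for illustration, not the actual function in cc_mounts:

```python
import re

# Hypothetical helper illustrating the proposed check: anything of the
# form "remote:/path" (e.g. an NFS share or EFS mount target) is treated
# as a network device rather than a local block device.
NETWORK_DEVICE_RE = re.compile(r"^.+:/.*")

def is_network_device(name):
    return NETWORK_DEVICE_RE.match(name) is not None

print(is_network_device("fs-10e8d4db:/"))        # True: EFS target
print(is_network_device("server:/export/home"))  # True: NFS share
print(is_network_device("/dev/xvdb"))            # False: block device
```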
[13:44] <beezly> what version of python do people develop cloud-init against?
[13:45] <Odd_Bloke> We support 3.4+.
[13:45] <beezly> thanks.
[13:46] <Odd_Bloke> beezly: https://cloudinit.readthedocs.io/en/latest/topics/hacking.html lists the steps to get set up for development (presumably you know what you're doing with git &c., but you will need to sign the CLA).  Thanks for looking at this!
[13:46] <beezly> ok, thanks.
[14:47] <beezly> @Odd_Bloke I'm finding the Launchpad/Github process really confusing. I'm trying to create my launchpad fork at lp:~beezly/cloud-init but each time I push I get "fatal: remote error: Path translation timed out."
[14:49] <Odd_Bloke> beezly: Apologies, we appear to have a doc building error, but there are new docs on CLA signature here: https://github.com/canonical/cloud-init/blob/master/HACKING.rst
[14:49] <Odd_Bloke> beezly: But I can see your GH username in our CLA records, so you're good to go.
[14:49] <beezly> ah great!
[14:50] <Odd_Bloke> The docs I pointed you at (which I fixed in master yesterday) predate the GitHub username field being in the CLA form.
[15:13] <Odd_Bloke> rharper: blackboxsw: If I could get a quick review of https://github.com/canonical/cloud-init/pull/301 to add beezly as a CLA signer, I'd appreciate it.
[15:14] <blackboxsw> Odd_Bloke: approved
[15:16] <Odd_Bloke> Thanks!
[15:56] <beezly> @Odd_Bloke thanks for the super-quick turnaround on that stuff. I just tried out that PR on an EC2 instance of mine and it works as expected.
[16:02] <Odd_Bloke> beezly: Nice, I'm bouncing between a lot of meetings today, but will hopefully get to reviewing it before too long. :)
[16:03] <beezly> no problem - I'll try and respond quickly, but I have a pretty full day tomorrow so it might be the weekend.
[16:27] <Odd_Bloke> rharper: https://github.com/canonical/cloud-init/pull/300/ <-- is r"^.+:/.*" a reasonable regex to match NFS shares (remote:/path/on/server) and no other devices that you're aware of?
[16:30] <rharper> Odd_Bloke: hrm
[16:30] <Odd_Bloke> (Disk devices, that is.)
[16:32] <Goneri> r"^\S+:/\S*"
[16:32] <Goneri> .+ will catch the spaces. This is not what you want.
[16:33] <Odd_Bloke> You can't have spaces in NFS mount targets?
[16:34] <Goneri> not in the hostname, maybe in the mount point.
[16:34] <Odd_Bloke> Hmm, not as we're going to put it in fstab, I think.
[16:35] <Odd_Bloke> They'd have to be escaped to \040.
[16:35] <Goneri> and here come the non-breaking space :-)
[16:39] <rharper> Odd_Bloke: AFAICT, ":" in the spec will be considered NFS,  left of the colon is a hostname or FQDN, and afterward is any "path" spec
[16:40] <rharper> FQDN/hostnames cannot have whitespace IIUC, and paths cannot either without encoding/escaping
[16:41] <Odd_Bloke> I think you _might_ be able to have unencoded/escaped whitespace if you're just running mount yourself, but it definitely won't work in fstab (which is whitespace-separated) regardless.
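The `\040` escaping mentioned above can be sketched minimally; `escape_fstab_field` is an illustrative helper, not anything cloud-init actually ships:

```python
def escape_fstab_field(field):
    # fstab fields are whitespace-separated, so a literal space inside a
    # device or mount-point field must be written as the octal escape \040.
    return field.replace(" ", "\\040")

print(escape_fstab_field("server:/my share"))  # server:/my\040share
```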
[16:46] <Odd_Bloke> So I think what we would want in an ideal world is two checks.  First, we check if something looks like it's intended to be a network share (which the proposed regex is sufficient for, I think).  If it does then, second, we check if it's valid for use in an fstab, and emit a warning if it isn't.
[16:47] <Odd_Bloke> Or, in fact, that second step could perform the encoding and only emit a warning if that also fails.
[16:48] <Odd_Bloke> So I think I've convinced myself that this check is fit for purpose: it allows people to configure NFS shares through cloud-init, and shouldn't match anything other than NFS shares.
[16:49] <Odd_Bloke> We could further improve the experience to not configure broken mounts, but users could also do that themselves by modifying their cloud-config.
[16:49] <Odd_Bloke> Whereas today they can't even do that.
[16:52] <rharper> We don't issue the mount directly; right, we only encode it into fstab entry and we mount -a
[16:52] <rharper> so, it has to be correct enough to have fstab parsable
[16:55] <Odd_Bloke> If `mount -a` will skip unparseable lines, then we don't necessarily have to have it parseable.
[16:56] <rharper> I think we get a unit failed error
[16:56] <rharper> it's not fatal to cloud-init
[16:56] <Odd_Bloke> Arguably it's easier for a user to figure out what's going on if we write out the unparseable line than if we just emit a warning.
[16:56] <rharper> thinking, we don't normally sanitize/fixup cloud-config provided values
[16:56] <rharper> yes
[16:56] <rharper> I was heading just there
[16:56] <Odd_Bloke> Yep, I wasn't disagreeing with you. :)
[16:57] <rharper> and I was just violently agreeing with you as well , haha
[16:58] <Odd_Bloke> OK, so I think that regex is fine, except I'm not sure if there _must_ be a leading slash on the path.
[16:59] <rharper> could we not just check for ":" in the line ?
[16:59] <rharper> since we're not parsing it ?
[17:00] <Odd_Bloke> Yeah, that's what I'm wondering about.
[17:01] <rharper> and we could possibly help with the _netdev option , if that's not present
[17:02] <rharper> https://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html  is useful;
[17:02] <Odd_Bloke> Yeah, I think we could improve the experience further, and I might ask beezly if they're interested in doing more work after this.
[17:02] <rharper> yeah
[17:03] <rharper> I wonder if the efs-utils is in the ubuntu image? or the aws snap ?
[17:03] <Odd_Bloke> There isn't a likely-looking package in the archive, so I don't think it's deb-packaged at least.
[17:04] <Goneri> Odd_Bloke, could you please remove the "wip" flag here https://github.com/canonical/cloud-init/pull/298
[17:04] <blackboxsw> done Goneri
[17:04] <Goneri> thanks!
[17:05] <blackboxsw> I'm assigned to that PR as of today. will get you a review there
[17:05] <Goneri> also, is there anything required here: https://github.com/canonical/cloud-init/pull/289
[17:05] <Goneri> blackboxsw, it's mostly just some overdue cleanup/refactoring.
[17:05] <blackboxsw> sounds good
[17:05] <Goneri> blackboxsw, ultimately I would like to move these functions in distros/
[17:06] <Odd_Bloke> rharper: OK, so I'm thinking I'll ask beezly to drop the "/" from the regex, and call that good.  OK with you?
[17:20] <rharper> Odd_Bloke: any reason for the regex over just the ':'  ?
[18:51] <blackboxsw> Odd_Bloke: what git commits should I be cherry-picking into ubuntu/devel for dropping invalid underscores from hostnames?
[18:54] <blackboxsw> I was thinking the following in this order: 4f825b3e6d8fde5c239d29639b04d2bea6d95d0e  c478d0bff412c67280dfe8f08568de733f9425a1.... but I think I might need another before c478
[18:55] <vasartori> Hi all, I'm getting a strange behaviour in cloud-init with https://github.com/vmware/cloud-init-vmware-guestinfo. When the vm is customized by open-vm-tools, it works; after reboot, cloud-init (from user-data) fails because a directory exists (/var/lib/cloud/instance). If I manually remove this directory, it works. Any tip?
[18:58] <rharper> Odd_Bloke: so, I just rebased Goneri 's set-passwd change, and now the squash-and-merge wants to put in a Co-Authored by me (which it is not) ... how do I merge the PR and ensure that Goneri is in the author field ?
[19:00] <rharper> vasartori: there are known issues with vmware workflows and cloud-init, specifically around changing the instance id.  There's upstream work on trying to sort out how to prevent changing the instance-id but still running for additional "customizations" ;  it's somewhat a misunderstanding of how cloud-init is designed (it performs most customizations just once) where the vmware workflows involve multiple rounds of customization against the same
[19:00] <rharper> instance;
[19:01] <Goneri> rharper, I can rebase it myself if you prefer
[19:03] <blackboxsw> Goneri: that'd work, but rharper that issue was a github snafu. they broke something on the backend when I had merged. I think just make sure that your squashed commit message does not contain Co-Authored-By: ryan.harper as a footer
[19:03] <rharper> Goneri: I think if you do and force push, that should reset it to you as last author
[19:03] <vasartori> rharper So,  should i give up using it?
[19:04] <Goneri> done
[19:04] <rharper> vasartori: I don't know;
[19:04] <Goneri> I've a script for that: https://gist.github.com/goneri/c0fab52388465f5da2b2979ee5166f08
[19:04] <blackboxsw> vasartori: it'd be nice if we could ultimately work with vmware somehow to get that datasource upstream in cloud-init to ease support also it'd be good to file a support ticket against VMWare somehow reporting that issue. But, there may be a bug in cloud-init upstream that still needs resolving as rharper states
[19:08] <rharper> vasartori: as a community, we don't have access to all the different versions of vmware, customizations, platforms; so it's difficult for us to develop a way to 1) provide unique instance-ids to vmware VMs  2) design a workflow that translates the "recustomize" of an already "customized" VM;   we continue to work with the VMware engineers when they submit PRs to cloud-init;
[19:11] <rharper> =(
[19:12] <blackboxsw> yeah it's a strange relationship that we hope can be improved with engagement from VMware and the cloud-init community in general through fix requests against vmware (and our discussions with VMWare reps/developers)
[19:14] <rharper> blackboxsw: they left
[19:15] <blackboxsw> well shucks
[19:20] <vasartori> rharper and blackboxsw Thanks a lot for the answers... I'll try to talk with vmware...
[19:20] <Odd_Bloke> rharper: I believe that they reverted things so that the submitter of the PR is the author of the squashed commit.
[19:20] <Odd_Bloke> s/they/Github/
[19:21] <blackboxsw> thanks vasartori I had worried you got frustrated and took off :)
[19:21] <Odd_Bloke> So you shouldn't need to do anything special, just remove the Co-Authored line.
[19:21] <blackboxsw> vasartori: the more voices, the more likely we can improve engagement thanks
[19:22] <vasartori> blackboxsw my internet had gone away :p . You are 100% correct, I'll try to make some noise there :p
[19:25] <Odd_Bloke> blackboxsw: I believe you'll want (in reverse order): 1bbc4908f c478d0bff 2566fdbee c5e949c02
[19:25] <Odd_Bloke> (`git cherry-pick c5e949c02 2566fdbee c478d0bff 1bbc4908f` onto ubuntu/devel looks right to me, and pytest runs.)
[19:26] <Odd_Bloke> s/runs/passes/ of course. :p
[19:29] <rharper> Odd_Bloke: I see (just remove the co-author bit);
[19:29] <rharper> I hope so, even after a rebase / force-push by Goneri the UI still wants to inject that Co-Authored line...  dropping that and merging
[19:35] <rharper> Odd_Bloke:  yeah, that worked.  merged and Goneri has credit.
[19:35] <Odd_Bloke> \o/
[19:37] <Goneri> Yeah!
[20:03] <blackboxsw> rharper: Odd_Bloke I've put up a cherry-pick branch for review into ubuntu/devel with new tooling
[20:03] <blackboxsw> https://github.com/canonical/cloud-init/pull/303
[20:12] <rharper> k
[20:53] <meena> oh no github down
[20:57] <powersj> doh
[20:57] <powersj> https://www.githubstatus.com/
[20:58] <blackboxsw> farrr
[21:01] <blackboxsw> well at least github has a near 500 error splashscreen
[21:01] <blackboxsw> *neat*
[21:05] <meena> well, time to sleep then!
[21:05] <meena> Good night.
[21:14] <blackboxsw> Odd_Bloke: rharper, the big question with https://github.com/canonical/cloud-init/pull/303 is do we want to support --force push of dropped commits in ubuntu/devel per cherry-pick -t in https://github.com/CanonicalLtd/uss-tableflip/pull/45
[21:15] <blackboxsw> or do we want to re-add cherry picks that we dropped to support fixing our daily recipes?
[21:15]  * rharper has to shift mental gears .. 
[21:16] <rharper> IIUC, we shouldn't have to force push, if we cherry pick, when we merge, the merge will see we've already landed the cherry picks ... no ?  /me 's git fu is weak-sauce
[21:17] <rharper> we have a list of cherry picks;  we need to cherry pick each one, refresh patches, update changelog.   then post release, we'll do a new-upstream-snapshot, the merge should see the cherry picks, I thought it will replay the commits that are interleaved automatically ?  or does it not do it that way ?
[21:18] <rharper> https://stackoverflow.com/questions/14486122/how-does-git-merge-after-cherry-pick-work
[21:20] <blackboxsw> rharper: we'll have to --force if we drop the 'dropped cherry-picks to make daily recipe build'
[21:20] <rharper> blackboxsw: I thought daily was master with merged release branch ... and IIRC, the steps post cherry-pick and release is to remove the cherry picks on the release branch, right ?
[21:20] <rharper> we only need the cherry picks *in* for a release
[21:20] <rharper> once we've uploaded, we can un do them
[21:20] <rharper> right ?
[21:20] <rharper> which is what the refresh --patches only was to be ?
[21:20] <blackboxsw> rharper: correct. we only need cpicks 'in' for a release
[21:21] <blackboxsw> in this circumstance, we've already released a cherry pick once and applied a followup couple commits to remove that cpick so dialies work
[21:21] <blackboxsw> *dailies*
[21:21] <blackboxsw> rharper: right but refresh --patches is fairly shallow in that it actually imports all commits via new-snapshot, but only uses that to drop cpicks, so the code in ubuntu/devel will now officially match tip
[21:22] <blackboxsw> the intent of --refresh-patches was that our next release would perform the full new-upstream-snapshot and end up listing all the commits it had already pulled into ubuntu/devel in the debian/changelog
[21:23] <blackboxsw> during feature-freeze, we can't import all commits from tip to properly refresh/drop patches, because we also can't release those other non-cherry commits
[21:23] <rharper> then I'm not sure what problem we're fixing ... we manually cherry pick, release/upload,  and commit changes to drop cpick patches right ?
[21:23] <rharper> at that point, daily still works;
[21:23] <blackboxsw> rharper: right I'll draw up a hackmd doc with the scenarios we are handling.
[21:24] <blackboxsw> then we can comment on best plan for a second cherry-pick release
[21:24] <rharper> the new case is, we'd like to release a second cpick only release ?
[21:24] <rharper> and what does't work there ?
[21:43] <blackboxsw> rharper: here are the use-cases https://hackmd.io/VbmtcZLyR4650aqqmfMMYg?both
[21:43] <blackboxsw> and yes this new case is second cpick release instead of new-upstream-snapshot
[21:44] <blackboxsw> so we already have dropped the earlier released cpick from ubuntu/devel so we either need to unwind that dropped cpick to add new cpicks, or re-add all cpicks and re-release
[21:45] <blackboxsw> I'm not sure about the best approach. maybe the best approach is not to unwind/pop the dropped cpick commits. and just re-add the original cpicks that we dropped from ubuntu/devel which allowed us to fix daily builds
[21:50] <blackboxsw> also, maybe I'm misunderstanding what we did with new-upstream-snapshot --update-patches-only. I thought it actually pulled in all commits. rerunning now to confirm.
[21:52] <rharper> blackboxsw: so, I was thinking for cpick releases, we'd have a file with the cpick hashes we wanted ... then we could easily revert those to get daily working, and then if we did a second cpick release, the list is just longer this time (we update it with new cpicks) and then pull them out afterwards
[21:53] <Odd_Bloke> Found a gap in our pytest testing vs non-pytest testing, which I've addressed in https://github.com/canonical/cloud-init/pull/304
[21:54] <Odd_Bloke> (As it happens, no pytest tests are using util.subp, but this will catch any future additions.)
[21:57] <blackboxsw> rharper: the problem is if we are reverting the original cpick hashes from ubuntu/<series> then we've botched the commits that we ended up releasing already
[21:57] <blackboxsw> so we don't have an exact point anymore in ubuntu/<series> that was used to build/dput
[21:57] <rharper> do you want to do hangout?  I might be faster
[21:57] <blackboxsw> so maybe the best bet is to just re-cpick
[21:57] <blackboxsw> yeah I'd like to
[21:58] <rharper> k
[21:58] <rharper> standup ?
[21:58] <blackboxsw> yessir
[22:00] <Odd_Bloke> blackboxsw: I'm not on the call so this may not be helpful, but reverts in git create a new commit (whereas in bzr they reset the history to before the commit in question).  Just in case there's a confusion in nomenclature, a git revert would not mean we'd lose the commit that was previously uploaded.
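Odd_Bloke's point can be verified with a throwaway repo; the sketch below (requiring `git` on `PATH`) drives git via subprocess and shows that a revert adds a commit rather than rewinding history:

```python
import os
import subprocess
import tempfile

def run(*cmd):
    # Run a git command in the current directory and return its stdout.
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Throwaway repo: `git revert` creates a *new* commit undoing the change,
# and the reverted commit stays reachable in history (unlike bzr, which
# resets history to before the commit in question).
repo = tempfile.mkdtemp()
os.chdir(repo)
run("git", "init", "-q")
run("git", "config", "user.email", "dev@example.com")
run("git", "config", "user.name", "dev")
with open("f", "w") as fp:
    fp.write("one\n")
run("git", "add", "f")
run("git", "commit", "-qm", "first")
with open("f", "w") as fp:
    fp.write("two\n")
run("git", "commit", "-qam", "second")
run("git", "revert", "--no-edit", "HEAD")

log = run("git", "log", "--oneline").splitlines()
print(len(log))          # 3 commits: the revert, "second", "first"
print(open("f").read())  # file content is back to "one"
```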
[22:00] <Odd_Bloke> (Apologies to you and rharper if I've just stepped all over your conversation unhelpfully. :p)
[22:10] <blackboxsw> Odd_Bloke: always a help. will read now
[22:45] <blackboxsw> rharper: Odd_Bloke ok PR 303 is in good shape I think for cloud-init
[22:45] <blackboxsw> updated description with the manual steps I performed
[22:45] <blackboxsw> and shouldn't involve a --force push due to mangled commitishes
[22:48]  * blackboxsw starts reworking the uss-tableflip script changes for handling --pop-all
[22:57] <rharper> blackboxsw: +1, lemme look
[23:00] <rharper> blackboxsw: "I manually deleted the 'drop ec2 secondary nics cpick' changelog" ; you just opened the file and deleted content ? or is there some dch command that will drop it ?
[23:01] <blackboxsw> rharper: hrm my dch command dropped me into that (maybe because I actually wasn't running that as a bash script but individual commands on the commandline)
[23:02] <blackboxsw> if you try that separate cherry-pick 6600c64 on the commandline directly (instead of inside a bash script) it might interactively prompt you for the changelog changes that were added. I always have to :wq after each step
[23:04] <blackboxsw> maybe it's because I have EDITOR=vi in my env?
[23:08] <rharper> blackboxsw: I see, you manually removed it after running cherry pick, during the dch -i interactive session
[23:08] <rharper> I've reproduced your branch;  have you build-package and test-built it yet ?
[23:09] <blackboxsw> rharper: will do that now. was testing tooling (which I think is almost there for --pop-all)
[23:11] <rharper> blackboxsw: I'll hold off approving; but I've commented that I can reproduce your branch
[23:11] <rharper> I'll check back later
[23:12] <blackboxsw> +1 thanks rharper
[23:27] <blackboxsw> rharper: ok, tooling isn't going to change this manual process, I forgot
[23:27] <blackboxsw> so 303 is good
[23:28] <blackboxsw> if +1'd then I can upload that.
[23:28] <blackboxsw> tomorrow we can sort the uss-tableflip and my followup PR to --pop-all for daily recipe builds
[23:28]  * blackboxsw needs to make dinner
[23:47] <blackboxsw> build-package works on my branch
[23:47] <blackboxsw> can build-and-push if agreed