=== frickler_ is now known as frickler
[10:50] absolutely love how the most innocent "do this on *BSD" functions turn into a can of worms https://github.com/canonical/cloud-init/pull/298
=== beezly is now known as beezly[away]
=== beezly[away] is now known as beezly
[13:24] Hello all - I'm trying to configure an EFS mount using the mounts module in cloud-init, but I'm hitting a problem. It appears to completely ignore the entry in the mounts yaml.
[13:26] Hello all - I'm trying to configure an EFS mount using the mounts module in cloud-init, but I'm hitting a problem. It appears to completely ignore the entry in the mounts yaml.
[13:26] oops - sorry about that.
[13:26] I can see it reading the YAML ok and it says... mounts configuration is [['fs-10e8d4db:/', '/opt/jenkins_workspace', 'efs', 'defaults,_netdev', 0, 0]]
[13:27] but then it says "Attempting to determine the real name of fs-10e8d4db:/" followed by "Ignoring nonexistant named mount fs-10e8d4db:/"
[13:42] beezly: It looks to me like the code doesn't handle "devices" that don't look like an actual /dev/... device, unfortunately: https://github.com/canonical/cloud-init/blob/master/cloudinit/config/cc_mounts.py#L115
[13:42] yeah - I'm just looking through that. Might have a patch soon-ish.
[13:43] I'm making an assumption that anything that matches ^.+:/.* should be considered a network device.
[13:44] what version of python do people develop cloud-init against?
[13:45] We support 3.4+.
[13:45] thanks.
[13:46] beezly: https://cloudinit.readthedocs.io/en/latest/topics/hacking.html lists the steps to get set up for development (presumably you know what you're doing with git &c., but you will need to sign the CLA). Thanks for looking at this!
[13:46] ok, thanks.
[14:47] @Odd_Bloke I'm finding the Launchpad/Github process really confusing. I'm trying to create my launchpad fork at lp:~beezly/cloud-init but each time I push I get "fatal: remote error: Path translation timed out."
[14:49] beezly: Apologies, we appear to have a doc building error, but there are new docs on CLA signature here: https://github.com/canonical/cloud-init/blob/master/HACKING.rst
[14:49] beezly: But I can see your GH username in our CLA records, so you're good to go.
[14:49] ah great!
[14:50] The docs I pointed you at (which I fixed in master yesterday) predate the GitHub username field being in the CLA form.
[15:13] rharper: blackboxsw: If I could get a quick review of https://github.com/canonical/cloud-init/pull/301 to add beezly as a CLA signer, I'd appreciate it.
[15:14] Odd_Bloke: approved
[15:16] Thanks!
=== powersj changed the topic of #cloud-init to: pull-requests https://git.io/JeVed | Meeting minutes: https://goo.gl/mrHdaj | Next status meeting April 14 16:15 UTC | 20.1 (Feb 18) | 20.2 (TBD) | https://bugs.launchpad.net/cloud-init/+filebug
=== hjensas is now known as hjensas|afk
[15:56] @Odd_Bloke thanks for the super-quick turnaround on that stuff. I just tried out that PR on an EC2 instance of mine and it works as expected.
[16:02] beezly: Nice, I'm bouncing between a lot of meetings today, but will hopefully get to reviewing it before too long. :)
[16:03] no problem - I'll try and respond quickly, but I have a pretty full day tomorrow so it might be the weekend.
[16:27] rharper: https://github.com/canonical/cloud-init/pull/300/ <-- is r"^.+:/.*" a reasonable regex to match NFS shares (remote:/path/on/server) and no other devices that you're aware of?
[16:30] Odd_Bloke: hrm
[16:30] (Disk devices, that is.)
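
For readers following the cc_mounts discussion above, here is a minimal Python sketch of the kind of check being proposed: treat anything that matches the suggested r"^.+:/.*" pattern as a network share and pass it through, instead of trying to resolve it as a /dev/... block device. The helper name and fallback behaviour are illustrative assumptions, not cloud-init's actual cc_mounts.py code.

    import re

    # Pattern proposed in the discussion above; anything "remote:/path"-shaped
    # is treated as a network share (NFS/EFS) rather than a local disk.
    NETWORK_SHARE_RE = re.compile(r"^.+:/.*")

    def resolve_devname(spec):
        """Illustrative device-field resolution for an fstab entry
        (an assumption for this sketch, not cloud-init's real logic)."""
        if NETWORK_SHARE_RE.match(spec):
            # host:/export style spec; pass it through untouched.
            return spec
        if spec.startswith("/dev/"):
            return spec
        # A real implementation would try to translate short names
        # (e.g. "ephemeral0") to an actual device here.
        return None

    print(resolve_devname("fs-10e8d4db:/"))  # passed through as a network share
    print(resolve_devname("/dev/xvdb"))      # kept as a block device path
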
[16:32] r"^\S+:/\S*" [16:32] .+ will catch the spaces. This is not what you want. [16:33] You can't have spaces in NFS mount targets? [16:34] not in the hostname, maybe in the mount point. [16:34] Hmm, not as we're going to put it in fstab, I think. [16:35] They'd have to be escaped to \040. [16:35] and here come the non-breaking space :-) [16:39] Odd_Bloke: AFACIT, ":" in the spec will be considered NFS, left of the colon is a hostname or FQDN, and afterward is any "path" spec [16:40] FQDN/hostnames cannot have whitespace IIUC, and paths cannot either without encoding/escaping [16:41] I think you _might_ be able to have unencoded/escaped whitespace if you're just running mount yourself, but it definitely won't work in fstab (which is whitespace-separated) regardless. [16:46] So I think what we would want in an ideal world is two checks. First, we check if something looks like it's intended to be a network share (which the proposed regex is sufficient for, I think). If it does then, second, we check if it's valid for use in an fstab, and emit a warning if it isn't. [16:47] Or, in fact, that second step could perform the encoding and only emit a warning if that also fails. [16:48] So I think I've convinced myself that this check is fit for purpose: it allows people to configure NFS shares through cloud-init, and shouldn't match anything other than NFS shares. [16:49] We could further improve the experience to not configure broken mounts, but users could also do that themselves by modifying their cloud-config. [16:49] Whereas today they can't even do that. [16:52] We don't issue the mount directly; right, we only encode it into fstab entry and we mount -a [16:52] so, it has to be correct enough to have fstab parsable [16:55] If `mount -a` will skip unparseable lines, then we don't necessarily have to have it parseable. [16:56] I think we get a unit failed error [16:56] it's not fatal to cloud-init [16:56] Arguably it's easier for a user to figure out what's going on if we write out the unparseable line than if we just emit a warning. [16:56] thinking, we don't normally sanitize/fixup cloud-config provided values [16:56] yes [16:56] I was heading just there [16:56] Yep, I wasn't disagreeing with you. :) [16:57] and I was just violently agreeing with you as well , haha [16:58] OK, so I think that regex is fine, except I'm not sure if there _must_ be a leading slash on the path. [16:59] could we not just check for ":" in the line ? [16:59] since we're not parsing it ? [17:00] Yeah, that's what I'm wondering about. [17:01] and we could possibly help with the _netdev option , if that's not present [17:02] https://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html is useful; [17:02] Yeah, I think we could improve the experience further, and I might ask beezly if they're interested in doing more work after this. [17:02] yeah [17:03] I wonder if the efs-utils is in the ubuntu image? or the aws snap ? [17:03] There isn't a likely-looking package in the archive, so I don't think it's deb-packaged at least. [17:04] Odd_Bloke, could you please remove the "wip" flag here https://github.com/canonical/cloud-init/pull/298 [17:04] done Goneri [17:04] thanks! [17:05] I'm assigned to that PR as of today. will get you a review there [17:05] also, is there anything required here: https://github.com/canonical/cloud-init/pull/289 [17:05] blackboxsw, I'm mostly just some over due cleanup/refactoring. 
[17:06] rharper: OK, so I'm thinking I'll ask beezly to drop the "/" from the regex, and call that good. OK with you?
[17:20] Odd_Bloke: any reason for the regex over just the ':' ?
[18:51] Odd_Bloke: what git commits should I be cherry-picking into ubuntu/devel for dropping invalid underscores from hostnames
[18:54] I was thinking the following in this order: 4f825b3e6d8fde5c239d29639b04d2bea6d95d0e c478d0bff412c67280dfe8f08568de733f9425a1.... but I think I might need another before c478
[18:55] Hi all, I'm getting strange behaviour from cloud-init with https://github.com/vmware/cloud-init-vmware-guestinfo. When the VM is customized by open-vm-tools, it works; after reboot, cloud-init (from user-data) fails because a directory exists (/var/lib/cloud/instance). If I remove this directory manually, it works. Any tip?
[18:58] Odd_Bloke: so, I just rebased Goneri's set-passwd change, and now the squash-and-merge wants to put in a Co-Authored-By for me (which it is not) ... how do I merge the PR and ensure that Goneri is in the author field?
[19:00] vasartori: there are known issues with vmware workflows and cloud-init, specifically around changing the instance id. There's upstream work on trying to sort out how to prevent changing the instance-id but still running for additional "customizations"; it's somewhat a misunderstanding of how cloud-init is designed (it performs most customizations just once) where the vmware workflows involve multiple rounds of customization against the same instance;
[19:01] rharper, I can rebase it myself if you prefer
[19:03] Goneri: that'd work, but rharper that issue was a github snafu. they broke something on the backend when I had merged. I think make sure that your squashed commit message does not contain Co-Authored-By: ryan.harper as a footer
[19:03] Goneri: I think if you do and force push, that should reset it to you as last author
[19:03] rharper So, should I give up using it?
[19:04] done
[19:04] vasartori: I don't know;
[19:04] I've a script for that: https://gist.github.com/goneri/c0fab52388465f5da2b2979ee5166f08
[19:04] vasartori: it'd be nice if we could ultimately work with vmware somehow to get that datasource upstream in cloud-init to ease support; also it'd be good to file a support ticket against VMWare somehow reporting that issue. But, there may be a bug in cloud-init upstream that still needs resolving as rharper states
[19:08] vasartori: as a community, we don't have access to all the different versions of vmware, customizations, platforms; so it's difficult for us to develop a way to 1) provide unique instance-ids to vmware VMs 2) design a workflow that translates the "recustomize" and already "customized" VM; we continue to work with the VMware engineers when they submit PRs to cloud-init;
[19:11] =(
[19:12] yeah it's a strange relationship that we hope can be improved with engagement from VMware and the cloud-init community in general through fix requests against vmware (and our discussions with VMWare reps/developers)
[19:14] blackboxsw: they left
[19:15] well shucks
[19:20] rharper and blackboxsw Thanks a lot for the answers... I'll try to talk with vmware...
[19:20] rharper: I believe that they reverted things so that the submitter of the PR is the author of the squashed commit.
[19:20] s/they/Github/
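
As background for the VMware discussion above: cloud-init decides whether a boot belongs to a "new" instance by comparing the datasource's instance-id with the per-instance state it keeps under /var/lib/cloud (the /var/lib/cloud/instance symlink points at /var/lib/cloud/instances/<instance-id>). The sketch below is a conceptual illustration of that check only, not cloud-init's actual code.

    import os

    CLOUD_DIR = "/var/lib/cloud"

    def is_new_instance(current_instance_id):
        """Rough illustration: has this instance-id been seen before?"""
        link = os.path.join(CLOUD_DIR, "instance")
        if not os.path.islink(link):
            return True  # no per-instance state yet: treat as a fresh instance
        previous = os.path.basename(os.path.realpath(link))
        return previous != current_instance_id

    # If a platform re-customizes a VM without changing its instance-id (the
    # VMware workflow described above), this check reports "same instance" and
    # per-instance config modules are skipped -- which is why removing
    # /var/lib/cloud/instance by hand forces a re-run.
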
[19:21] thanks vasartori I had worried you got frustrated and took off :)
[19:21] So you shouldn't need to do anything special, just remove the Co-Authored line.
[19:21] vasartori: the more voices, the more likely we can improve engagement thanks
[19:22] blackboxsw my internet has gone away :p . You are 100% correct, I'll try to make some noise there :p
[19:25] blackboxsw: I believe you'll want (in reverse order): 1bbc4908f c478d0bff 2566fdbee c5e949c02
[19:25] (`git cherry-pick c5e949c02 2566fdbee c478d0bff 1bbc4908f` onto ubuntu/devel looks right to me, and pytest runs.)
[19:26] s/runs/passes/ of course. :p
[19:29] Odd_Bloke: I see (just remove the co-author bit);
[19:29] I hope so, even after a rebase / force-push by Goneri the UI still wants to inject that Co-Authored line... dropping that and merging
[19:35] Odd_Bloke: yeah, that worked. merged and Goneri has credit.
[19:35] \o/
[19:37] Yeah!
[20:03] rharper: Odd_Bloke I've put up a cherry-pick branch for review into ubuntu/devel with new tooling
[20:03] https://github.com/canonical/cloud-init/pull/303
[20:12] k
[20:53] oh no github down
[20:57] doh
[20:57] https://www.githubstatus.com/
[20:58] farrr
[21:01] well at least github has a near 500 error splashscreen
[21:01] *neat*
[21:05] well, time to sleep then!
[21:05] Good night.
[21:14] Odd_Bloke: rharper, the big question with https://github.com/canonical/cloud-init/pull/303 is do we want to support --force push of dropped commits in ubuntu/devel per cherry-pick -t in https://github.com/CanonicalLtd/uss-tableflip/pull/45
[21:15] or do we want to re-add cherry picks that we dropped to support fixing our daily recipes?
[21:15] * rharper has to shift mental gears ..
[21:16] IIUC, we shouldn't have to force push, if we cherry pick, when we merge, the merge will see we've already landed the cherry picks ... no ? /me 's git fu is weak-sauce
[21:17] we have a list of cherry picks; we need to cherry pick each one, refresh patches, update changelog. then post release, we'll do a new-upstream-snapshot, the merge should see the cherry picks, I thought it will replay the commits that are interleaved automatically ? or does it not do it that way ?
[21:18] https://stackoverflow.com/questions/14486122/how-does-git-merge-after-cherry-pick-work
[21:20] rharper: we'll have to --force if we drop the 'dropped cherry-picks to make daily recipe build'
[21:20] blackboxsw: I thought daily was master with merged release branch ... and IIRC, the steps post cherry-pick and release are to remove the cherry picks on the release branch, right ?
[21:20] we only need the cherry picks *in* for a release
[21:20] once we've uploaded, we can undo them
[21:20] right ?
[21:20] which is what the refresh --patches only was to be ?
[21:20] rharper: correct. we only need cpicks 'in' for a release
[21:21] in this circumstance, we've already released a cherry pick once and applied a couple of followup commits to remove that cpick so dialies work
[21:21] *dailies*
[21:21] rharper: right but refresh --patches is fairly shallow in that it actually imports all commits via new-snapshot, but only uses that to drop cpicks, so the code in ubuntu/devel will now officially match tip
[21:22] the intent of --refresh-patches was that our next release would perform the full new-upstream-snapshot and end up listing all the commits it had already pulled into ubuntu/devel in the debian/changelog
[21:23] during feature-freeze, we can't import all commits from tip to properly refresh/drop patches, because we also can't release those other non-cherry commits
[21:23] then I'm not sure what problem we're fixing ... we manually cherry pick, release/upload, and commit changes to drop cpick patches right ?
[21:23] at that point, daily still works;
[21:23] rharper: right I'll draw up a hackmd doc with the scenarios we are handling.
[21:24] then we can comment on best plan for a second cherry-pick release
[21:24] the new case is, we'd like to release a second cpick-only release ?
[21:24] and what doesn't work there ?
[21:43] rharper: here are the use-cases https://hackmd.io/VbmtcZLyR4650aqqmfMMYg?both
[21:43] and yes this new case is a second cpick release instead of new-upstream-snapshot
[21:44] so we already have dropped the earlier released cpick from ubuntu/devel so we either need to unwind that dropped cpick to add new cpicks, or re-add all cpicks and re-release
[21:45] I'm not sure about the best approach. maybe the best approach is not to unwind/pop the dropped cpick commits. and just re-add the original cpicks that we dropped from ubuntu/devel which allowed us to fix daily builds
[21:50] also, maybe I'm misunderstanding what we did with new-upstream-snapshot --update-patches-only. I thought it actually pulled in all commits. rerunning now to confirm.
[21:52] blackboxsw: so, I was thinking for cpick releases, we'd have a file with the cpick hashes we wanted ... then we could easily revert those to get daily working, and then if we did a second cpick release, the list is just longer this time (we update it with new cpicks) and then pull them out afterwards
[21:53] Found a gap in our pytest testing vs non-pytest testing, which I've addressed in https://github.com/canonical/cloud-init/pull/304
[21:54] (As it happens, no pytest tests are using util.subp, but this will catch any future additions.)
[21:57] rharper: the problem is if we are reverting the original cpick hashes from ubuntu/ then we've botched the commits that we ended up releasing already
[21:57] so we don't have an exact point anymore in ubuntu/ that was used to build/dput
[21:57] do you want to do a hangout? I might be faster
[21:57] so maybe the best bet is to just re-cpick
[21:57] yeah I'd like to
[21:58] k
[21:58] standup ?
[21:58] yessir
[22:00] blackboxsw: I'm not on the call so this may not be helpful, but reverts in git create a new commit (whereas in bzr they reset the history to before the commit in question). Just in case there's a confusion in nomenclature, a git revert would not mean we'd lose the commit that was previously uploaded.
[22:00] (Apologies to you and rharper if I've just stepped all over your conversation unhelpfully. :p)
[22:10] Odd_Bloke: always a help. will read now
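
Regarding the pytest gap mentioned above (PR 304): one way such a guard could look is an autouse fixture that fails any test calling util.subp without mocking it. This is a sketch under that assumption only -- the marker name and error message are made up, and it is not necessarily what PR 304 actually implements.

    # conftest.py (sketch)
    import pytest

    from cloudinit import util

    @pytest.fixture(autouse=True)
    def disallow_subp(request, monkeypatch):
        """Fail tests that call util.subp unless explicitly marked to allow it."""
        if "allow_subp" in request.keywords:  # hypothetical opt-out marker
            return

        def _fail(*args, **kwargs):
            raise AssertionError(
                "util.subp called in a pytest test without mocking: %r" % (args,)
            )

        monkeypatch.setattr(util, "subp", _fail)
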
[22:45] rharper: Odd_Bloke ok PR 303 is in good shape I think for cloud-init
[22:45] updated description with the manual steps I performed
[22:45] and shouldn't involve a --force push due to mangled commitishes
[22:48] * blackboxsw starts reworking the uss-tableflip script changes for handling --pop-all
[22:57] blackboxsw: +1, lemme look
[23:00] blackboxsw: "I manually deleted the 'drop ec2 secondary nics cpick changelog" ; you just opened the file and deleted content ? or is there some dch command that will drop it ?
[23:01] rharper: hrm my dch command dropped me into that (maybe because I actually wasn't running that as a bash script but individual commands on the command line)
[23:02] if you try that separate cherry-pick 6600c64 on the commandline directly (instead of inside a bash script) it might interactively prompt you for the changelog changes that were added. I always have to :wq after each step
[23:04] maybe it's because I have EDITOR=vi in my env?
[23:08] blackboxsw: I see, you manually removed it after running cherry pick, during the dch -i interactive session
[23:08] I've reproduced your branch; have you run build-package and test-built it yet ?
[23:09] rharper: will do that now. was testing tooling (which I think is almost there for --pop-all)
[23:11] blackboxsw: I'll hold off approving; but I've commented that I can reproduce your branch
[23:11] I'll check back later
[23:12] +1 thanks rharper
[23:27] rharper: ok, tooling isn't going to change this manual process, I forgot
[23:27] so 303 is good
[23:28] if +1'd then I can upload that.
[23:28] tomorrow we can sort the uss-tableflip and my followup PR to --pop-all for daily recipe builds
[23:28] * blackboxsw needs to make dinner
[23:47] build-package works on my branch
[23:47] can build-and-push if agreed