[00:11] <holmanb> blackboxsw: I got some initial comments up on the A-D schema PR. More to come tomorrow
[08:39] <meena> somebody reminded me that i need to submit a fix from almost a year ago: https://bsd.network/@brd/107678887598348834 / https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=254339
[16:22] <holmanb> meena: Do you know of any docs describing running cloud-init with freebsd? I wouldn't know how to test such a PR.
[19:43] <meena> holmanb: you install it, and enable the service 
[19:44] <meena> oh, and enable logging, or else the whole thing is going to be useless 
[19:45] <meena> pkg install net/cloud-init
[19:45] <meena> sysrc cloudinit_enable="YES"
[19:48] <meena> in /usr/local/etc/cloud-init/ subdir make it so the logging config is included, so you're not flying blind 
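The steps above can be collected into one provisioning fragment. This is a sketch, not FreeBSD port documentation: the config path is taken from the message above, and the drop-in filename and its contents are assumptions based on cloud-init's stock `05_logging.cfg` example.

```shell
# Install cloud-init from packages and enable the rc service.
pkg install -y net/cloud-init
sysrc cloudinit_enable="YES"

# Enable logging so you're not flying blind: drop a logging config
# under /usr/local/etc/cloud-init/ (drop-in name is a placeholder;
# the "output" stanza mirrors cloud-init's shipped logging example).
mkdir -p /usr/local/etc/cloud-init/cloud.cfg.d
cat > /usr/local/etc/cloud-init/cloud.cfg.d/05_logging.cfg <<'EOF'
output: {all: '| tee -a /var/log/cloud-init-output.log'}
EOF
```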
[19:55] <holmanb> meena: thanks!
[21:44] <blackboxsw> holmanb: I mentioned at standup today as far as setup of AWS Nitro-based systems with IPv6 IMDS I followed this guide (+ an initial pycloudlib Ec2 instance launch with dual-stack via ec2.get_or_create_vpc() call). https://aws.amazon.com/blogs/networking-and-content-delivery/introducing-ipv6-only-subnets-and-ec2-instances/
[21:45] <blackboxsw> I'm now able to confirm access to Ec2's IPv6 IMDS on my end. will run through a couple of tests locally and see if I can't shape up a PR for pycloudlib to make this easier. Right now to get access to IPv6 only I'm doing (probably a silly thing) ssh IPv4 into my dual-stack instance as a bastion host that can then ssh into the IPv6 only instance.
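The bastion hop can be done in one command with ssh's `-J` (ProxyJump) flag; the hostname and IPv6 address below are placeholders, and `-J` for scp requires a reasonably recent OpenSSH.

```shell
# One-shot hop: reach the IPv6-only instance through the dual-stack
# instance acting as a jump host ("dualstack.example.com" is made up).
ssh -J ubuntu@dualstack.example.com ubuntu@2600:1f18:aaaa::1

# scp accepts -J too; note the brackets around the IPv6 literal
# in the remote target.
scp -J ubuntu@dualstack.example.com ./file ubuntu@[2600:1f18:aaaa::1]:/tmp/
```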
[22:40] <holmanb> blackboxsw: thanks for the ptr
[22:42] <blackboxsw> what remains to be understood is why I can't seem to scp into the instance, but can ssh in.
[22:56] <echeadle> I am trying to use cloud-init on Ubuntu server 21.10.   I am using packer. I start the process and watch the progress on a virtualbox display.   The graphical installer always runs and stops the process.  This did not happen on 20.04.  Is there an easy way to disable the graphical installer so cloud-init can finish?
[22:57] <blackboxsw> one interesting "cost" I'm seeing on these systems with Nitro IPv6 IMDS is a really slow crawl of metadata..   IPv6(+03.58600s) versus ipv4(+00.19400s)
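That gap can be measured directly with curl against both IMDS endpoints ([fd00:ec2::254] is EC2's IPv6 IMDS address); the token step follows the IMDSv2 flow. A sketch, meant to run on the instance itself:

```shell
# Fetch an IMDSv2 session token, then time the same metadata
# request over IPv4 and IPv6 to compare latencies.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
curl -s -o /dev/null -w 'ipv4: %{time_total}s\n' \
  -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/"
curl -s -o /dev/null -w 'ipv6: %{time_total}s\n' \
  -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://[fd00:ec2::254]/latest/meta-data/"
```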
[22:59] <holmanb> @blackboxsw: are you using a commit with ipv4 before ipv6?
[22:59] <blackboxsw> holmanb: no I was just trying to reorder ipv6 first intentionally to see how it'd behave
[22:59] <holmanb> ahh, huh
[22:59] <holmanb> interesting 
[23:00] <blackboxsw> yeah holmanb not a logic problem in your implementation, but an IMDS IPv6 implementation prob I think
[23:00] <holmanb> curious
[23:01] <holmanb> curious if aws devs are aware of that
[23:01] <blackboxsw> echeadle: thanks for the question, I'm presuming you might be using the Ubuntu Server live ISO, which is subiquity-based? Can you walk us through steps to reproduce the problem?
[23:01] <holmanb> 3.5s is significant
[23:01] <blackboxsw> holmanb: yeah 
[23:02] <blackboxsw> holmanb: yeah I need to get more data on that to see where the hangup is. it might be a misconfiguration of routes, not sure. haven't gotten more than a high-level smell there so far
[23:03] <echeadle> Sorry, yes it is the live server: https://releases.ubuntu.com/21.10/ubuntu-21.10-live-server-amd64.iso
[23:03] <echeadle> In the boot_command I have "autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/
[23:04] <minimal> blackboxsw: on a related IPv6 note, the fallback DHCP in cloud-init is IPv4 only currently
[23:04] <blackboxsw> echeadle: for a "headless" deployment you might want to use stock ubuntu server images (not server-live ISOs): https://cloud-images.ubuntu.com/impish/current/
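As a sketch of that stock-image route (no installer involved): boot the cloud image directly with a NoCloud seed built by `cloud-localds` from the cloud-image-utils package. The qemu invocation is illustrative, not a tested recipe:

```shell
# Grab the pre-installed impish cloud image (no subiquity installer
# runs; cloud-init configures the system on first boot).
wget https://cloud-images.ubuntu.com/impish/current/impish-server-cloudimg-amd64.img

# Pack local user-data and meta-data files into a NoCloud seed.
cloud-localds seed.img user-data meta-data

# Boot the image with the seed attached as a second disk.
qemu-system-x86_64 -m 1024 \
  -drive file=impish-server-cloudimg-amd64.img,format=qcow2 \
  -drive file=seed.img,format=raw \
  -nographic
```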
[23:04] <echeadle> It seems to find the user-data file and seems to run. But the Ubuntu installer always runs after cloud-init runs.  On 20.04 things ran fine
[23:05] <echeadle> Ok thanks.  I am new to Ubuntu and in the examples I was following, they used the live server.
[23:06] <echeadle> I appreciate the help.
[23:06] <blackboxsw> echeadle: somewhat related, I think some of the interactions with cloud-init and server-live images are not exactly desirable for folks trying to roll their own images. See comment here for details
[23:06] <blackboxsw> echeadle: https://bugs.launchpad.net/subiquity/+bug/1958377/comments/11
[23:06] <echeadle> fantastic
[23:06] <blackboxsw> echeadle: to better educate me, are you following any public packer procedure/docs that you could reference for us?
[23:07] <blackboxsw> then I can better understand use-cases folks run into
[23:09] <echeadle> I started with Jeff Geerling's example repo found on github at  geerlingguy/packer-boxes.
[23:09] <echeadle> https://imagineer.in/blog/packer-build-for-ubuntu-20-04/
[23:09] <echeadle> https://github.com/Praseetha-KR/packer-ubuntu
[23:09] <blackboxsw> minimal: +1 and yes thanks, I'm going through the paces of exercising holmanb's https://github.com/canonical/cloud-init/pull/1160
[23:10] <echeadle> https://www.golinuxcloud.com/generate-user-data-file-ubuntu-20-04/
[23:10] <blackboxsw> ... per ec2/ipv6: and testing our failure paths, including our dhcp4-only sandboxed dhclient action on an IPv6 only box.  some rough edges for us to understand/sort there
[23:10] <blackboxsw> thanks echeadle 
[23:10] <echeadle> Those were three URLs I found. Then just looking around Stack Overflow and comments on Ask Ubuntu
[23:11] <minimal> blackboxsw: yeah I know. I was recently trying to do IPv6-only with NoCloud via seed-urls and hit the problem of no IPv6 DHCP which defeated the IPv6-only aspect
[23:12] <blackboxsw> minimal: yeah I totally recall  seeing your conversation here back and forth on that. That is definitely a gap we hope to shore up in the short term here.
[23:12]  * blackboxsw needs to re-read that in logs again for context 
[23:13] <minimal> blackboxsw: it was to do with there (obviously) being no way to provide network-config and so the fallback kicked in
[23:15] <minimal> blackboxsw: unrelated question: would there be any objection to writing cloud-init code that extracts data from /etc/shadow? I was going to write a PR for the issue I flagged a while ago about needing to prefix user passwords with a "*" rather than using the lock_passwd option, but obviously code cannot prefix an existing hashed password without accessing /etc/shadow
[23:15] <blackboxsw> ahh, ok. I was hoping we could grow both dhcp4/6 support for fallback as well as EphemeralDHCPv4/EphemeralDHCPv6 for pre-networking discovery
[23:20] <blackboxsw> minimal: my security bells are going off.  I vaguely recall seeing discussion on the /etc/shadow * character. but can't place the orig problem.   this was due to passwd -l or something?
[23:22] <minimal> blackboxsw: it's to do with openssh specifically - if a password is locked AND PAM is not enabled for openssh, then it refuses to let key-based ssh work for that account
[23:24] <minimal> I assume most distros have PAM compiled into openssh and also have PAM enabled by default in sshd_config - in my case Alpine provides two distinct packages: openssh (with no PAM compiled in) and openssh-pam. Even with PAM support present, someone may not wish to have it enabled for openssh. So I wanted to write functionality to prefix the user's password with "*", rather than "!" (which signifies a locked password)
[23:24] <minimal> however there seems to be no utility that will do the "*" prefix (unlike "passwd -l") and so I'd need to read /etc/shadow in order to get any existing hash to prefix it
[23:26] <blackboxsw> +1 and interesting condition there. I can't claim to understand the implications fully yet, and I need to read some more man pages to feel comfortable about writing directly to /etc/shadow. I'll take that question to our security team too to see if there are adverse concerns there. The only thing I'd like to understand more fully is whether there's a preferred utility that could do that for you, rather than going
[23:26] <blackboxsw> into /etc/shadow directly to make that change (in case the /etc/shadow format changes in the future, etc.)
[23:28] <minimal> blackboxsw: that's my point, I have found no util that provides a means to prefix a hashed password
[23:29] <minimal> without this I have no means to "lock" an account to prevent password access (in general, not just via SSH) yet permit key-based SSH access to the same account
[23:32] <blackboxsw> minimal: +1, I've almost understood. Strange to me that a util doesn't exist to do this, but I admit being in the shallow end of the pool here.
[23:33] <minimal> blackboxsw: I guess there are not many use cases for wanting to add an arbitrary prefix to a hashed password entry
[23:35] <minimal> usermod -p "*" is no use as that loses any existing password (and so this change can't be reverted later)
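One way to preserve the hash is to read the existing entry first and hand the prefixed value back to `usermod -p`. A sketch of the string handling on a sample shadow entry (no root needed; "alice" and the hash are made up - on a real system the entry would come from `getent shadow "$user"` and be written back with `usermod -p "*${hash}" "$user"`):

```shell
# Demonstrate the "*" prefix on a sample /etc/shadow line.
entry='alice:$6$saltsalt$fakehash:19000:0:99999:7:::'

hash=$(printf '%s' "$entry" | cut -d: -f2)   # field 2 is the hash
locked="*${hash}"                            # "*" disables password auth
echo "$locked"

# Reverting later just strips the leading "*", recovering the hash:
echo "${locked#\*}"
```

Unlike `passwd -l`'s "!" prefix, the "*" form is not treated as "locked" by openssh's auth.c check, which is the whole point of the exercise.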
[23:38] <blackboxsw> I can promise to make sure the right eyes see a PR like this to vet any concerns, and I'll spend a bit more time understanding the potential impact of direct modification. Given that chage, passwd, usermod, and their related tools already play in that space, writing/updating the file is fairly well-trodden ground. I'll spend some time reading through man pages there to see if anything else could
[23:38] <blackboxsw> suffice.
[23:38] <minimal> blackboxsw: I believe this is the relevant line in openssh: https://github.com/openssh/openssh-portable/blob/master/auth.c#L136
[23:39] <minimal> note the "!options.use_pam" part - its treatment is conditional on whether PAM is enabled/disabled
[23:40] <minimal> in general it seems that BSDs and Linux treat locked accounts differently - according to the Linux manpages a locked account is locked for password access, not for all access (i.e. SSH key). I think BSDs treat locked to mean locked in all/most cases
[23:41] <minimal> I sat down yesterday to write a c-i PR for this and then it dawned on me that I didn't know of any util to add the "*" prefix....doh!
[23:42] <blackboxsw> ahh ok. I'm basically at your square one yesterday :) 
[23:42] <blackboxsw> just trying to vet that it is not the case so I can understand context
[23:46] <minimal> blackboxsw: the openssh "problem" is easy to see - edit /etc/ssh/sshd_config and change UsePAM to "no", restart sshd, and then try and ssh in and see it refused
[23:46] <minimal> for an account with a locked password that is....
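The reproduction minimal describes can be scripted roughly as follows (run as root on a disposable test box; the account name, key path, and service-manager command are placeholders that vary by distro):

```shell
# 1. Lock the test account's password (adds the "!" prefix).
passwd -l testuser

# 2. Disable PAM for sshd and restart it.
sed -i.bak 's/^#\?UsePAM.*/UsePAM no/' /etc/ssh/sshd_config
systemctl restart sshd    # or: service sshd restart

# 3. Key-based login is now refused for the locked account.
ssh -i /path/to/testuser_key testuser@localhost
```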