[07:48] <beantaxi> Can I use lxc config set b1 user.user-data "$(cat my-user-data.cfg)", as @rharper suggested above, but with an existing bash file I use for user-data in an EC2 launch template? Or do I need to convert it to something more cloud-config friendly?
[10:20] <meena> beantaxi: what does your bash script look like / do, what should it do? is it used to create stuff from the outside, or the inside?
[12:25] <beantaxi> meena: Typical initialization stuff at the moment ... a lot of apt installs, a few mounts ... also downloads a few things from S3.
[12:49] <beantaxi> I wrote the script, and others like it, before I'd heard of cloud-init. I like rharper's idea of using lxd to develop init scripts, instead of firing up full-blown EC2 instances all the time, but not if I need to add in a 'convert all existing bash scripts to cloud-config yaml' task at this time.
[14:04] <rharper> beantaxi: to make a multi-part mime message as your user-data, you can use cloud-init's tools/make-mime.py ;  it's in the cloud-init source repo
[14:04] <rharper> https://paste.ubuntu.com/p/N9HHZ9rQZ2/
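[Editor's note: make-mime.py essentially builds a multipart MIME message from its input files, tagging each part with a cloud-init content type. A minimal sketch of that idea using Python's standard email library; the payload strings stand in for hypothetical input files such as a bash script and a #cloud-config file, and are not from the actual tool.]

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Hypothetical payloads standing in for files you would pass to make-mime.py.
parts = [
    ("#!/bin/bash\napt-get update\n", "x-shellscript"),        # a plain bash script
    ("#cloud-config\npackages:\n  - curl\n", "cloud-config"),  # cloud-config YAML
]

combined = MIMEMultipart()  # multipart/mixed container
for content, subtype in parts:
    part = MIMEText(content, subtype)  # sets Content-Type: text/<subtype>
    part.add_header("Content-Disposition", "attachment")
    combined.attach(part)

# This string is the kind of thing you would hand to lxc config set as user-data.
user_data = combined.as_string()
```

The point is only that each part carries a content type (text/x-shellscript, text/cloud-config) telling cloud-init how to handle it; the real tool is driven from the command line.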
[14:26] <beantaxi> rharper: Thanks. I saw a reference to make-mime.py ... I probably could have phrased my question as "can I use my bash-script as is, or will I have to use something like make-mime.py to create a full multipart MIME file." & I couldn't find any docs which said one way or the other.
[14:27] <rharper> yeah; it's under-documented
[14:27] <rharper> s/under/un
[14:38] <beantaxi> haha! well played. make-mime.py seems fairly useful and painless. Btw, fuzzy question - how does cloud-init view supporting pure-bash initialization? I could understand if it was seen as an annoying legacy necessary evil
[14:56] <Odd_Bloke> beantaxi: It's not considered legacy by any means; it's a totally valid way of specifying user-data, and will continue to be so.
[14:58] <rharper> beantaxi: I'm not sure I understand what "pure bash initialization" means?  in terms of the contents of the payloads, cloud-init does not have an opinion;
[14:58] <Odd_Bloke> I took it to mean "passing in a shell script as user-data", but maybe I misunderstood.
[15:00] <beantaxi> Odd_Bloke: That's exactly right. rharper: I suppose I meant generally; eg if I have issues scripting my startup in bash, will those be seen as valid questions, or will I be seen as annoying everyone by stubbornly not switching to cloud-config yaml (apologies if I'm using the wrong terms of art)
[15:02] <rharper> beantaxi: definitely valid;  there are always trade-offs between writing things yourself in a script vs. leaning on the cloud-config modules;  and you can always mix both;  with the multi-part mime archive, you can include any number of shell scripts and #cloud-config files
[15:03] <beantaxi> rharper: That's definitely the answer that's best for me :) so no arguments here
[15:07] <beantaxi> Wow guys ... I just went from yesterday's complete ignorance, to just now firing up an lxd Ubuntu 18.04 container, using lxc config set user.user-data with my make-mime'd bash script ... and I appear to have a working system, with my systemd-based web scrapers happily running and downloading?
[15:07] <beantaxi> Did this all just happen?!?
[15:08] <Odd_Bloke> ^_^
[15:10] <beantaxi> Ok, here's something I like to see. It's an error message so that might seem odd. + mkfs -t xfs /dev/xvdf
[15:10] <beantaxi> Error accessing specified device /dev/xvdf: No such file or directory
[15:11] <beantaxi> This is on an EC2 instance, where I have attached volumes at /dev/xvdf through /dev/xvdh, so on the host system those devices are valid. But in my lxd container, those appear to be invalid ... which is what I was hoping for but had no idea if that'd work.
[15:13] <beantaxi> What I wrote is a little unclear ... what I was hoping for was that my container would have no idea of its parent's devices, and that's what I'm seeing. So this is pretty cool.
[15:24] <rharper> beantaxi: block device manipulation won't work inside a container;
[15:24] <rharper> beantaxi: even launching as a VM, the virtualization layer on EC2 exposes their virtual disks as xvdX (this is a Xen thing);
[15:28] <blackboxsw_> Odd_Bloke: if you get a chance today I think I need an upstream  +1 review on https://github.com/canonical/cloud-init/pull/516
[15:28] <blackboxsw_> have an approval from lucas
[15:28] <beantaxi> rharper: Ah, so _that's_ where the x in xvd* comes from. If I really wanted my container to have access to host devices, it looks like lxc config device add <something> <something> disk <etc> could allow that
[20:13] <AnhVoMSFT> @rharper @Odd_Bloke is there any way to tell a systemd service to "start After A, but if A does not exist, start After B"? More specifically, we're dealing with a scenario where network.target isn't reliable enough, so we want the service to start After=NetworkManager-wait-online.target, but if NetworkManager-wait-online.target does not exist, it should start After=network-online.target
[20:13] <rharper> you can include more than one After; if a unit specified does not exist, it's ignored
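[Editor's note: a drop-in like the following illustrates rharper's point; the path and service name are hypothetical. Note also that NetworkManager's wait-online unit is a .service, not a .target.]

```ini
# /etc/systemd/system/my-app.service.d/10-network-wait.conf  (hypothetical drop-in)
[Unit]
# After=/Before= are ordering-only dependencies; units listed here that do not
# exist on the system are silently ignored, so both can be listed unconditionally.
After=NetworkManager-wait-online.service network-online.target
```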
[20:14] <rharper> so network.target and network-online.target are two very different things ...   can you just start after network-online.target (which is reached after the networkd or network-manager wait-online units are reached)?
[20:15] <rharper> or are you trying to adjust cloud-init.service (which runs after network.target and after the $service-online.target but *before* network-online.target)?
[20:16] <rharper> I know there have been issues getting NM-wait-online.target to work in the same spot as systemd-networkd-wait-online.service ;  specifically around when dbus starts and things like that;
[20:24] <AnhVoMSFT> no, this is for rpc-statd.service and rpc-statd-notify.service. in RHEL these services are configured to start after network-online, which causes a conflict with cloud-init's init phase when it runs mount -a, which causes NFS mounts to lock up due to cyclic dependencies
[20:25] <AnhVoMSFT> @rharper there're more details in this bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1858930
[20:27] <rharper> hrm, I thought mount -a would ignore any of the _netdev mounts;
[20:27] <rharper> is _netdev not on the option list in fstab for the nfs mounts?
[20:58] <AnhVoMSFT> no, in the cases I looked at they don't have _netdev in the option list
[20:59] <AnhVoMSFT> should they be on there?
[20:59] <AnhVoMSFT> note that this isn't an issue for NFSv4 mounts, only NFSv3 (NFSv4 doesn't require the rpc-statd* services)
[21:00] <rharper> yes
[21:00] <rharper> all network mounts require _netdev to prevent mount from bringing up the mount before networking is present
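[Editor's note: an /etc/fstab entry with the option rharper describes would look like the following; the host and paths are hypothetical.]

```
# <fs_spec>            <fs_file>   <type>  <options>          <dump> <pass>
nfshost:/export/data   /mnt/data   nfs     defaults,_netdev   0      0
```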
[21:01] <rharper> AnhVoMSFT: ^
[21:07] <AnhVoMSFT> Thanks @rharper, let me try this and add that to our documentation. Is this documented in cloud-init doc? If not we can contribute a PR there
[21:08] <rharper> documented in mount(8);  and I believe we mention _netdev as well but not 100% sure
[21:12] <rharper> AnhVoMSFT: looks like we could use a doc/example update for nfs;  and we do have code in cc_mounts which checks if the device is "network" related;  so possibly we could update the defmounts to append a ,_netdev in the opts for nfs mounts
[21:17] <AnhVoMSFT> @rharper "update the defmounts to append a ,_netdev in the opts for nfs mounts" : how does this work?
[21:18] <rharper> cloud-init in cc_mounts updates/generates /etc/fstab for any of the provided mounts;  if one of the devices was: nfshost:/mypath /localmount  ....  we could ensure that the options field will also get a ,_netdev on it;
[21:19] <rharper> it's a bit tricky since, if a user provided their own mount options, clobbering that might interfere;   I think a good step 1 is to test and then document that for nfs mounts, users should provide mount options, like defaults,_netdev ;
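[Editor's note: in cloud-config terms, that step-1 guidance would look something like the following; the host and mount point are hypothetical, and the mounts module takes fstab-style fields.]

```yaml
#cloud-config
mounts:
  - [ "nfshost:/mypath", "/localmount", "nfs", "defaults,_netdev", "0", "0" ]
```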
[21:20] <rharper> later on, one could see if cloud-init can safely append the _netdev automatically if it's not present in mounts that are network devices
[21:21] <rharper> AnhVoMSFT: see cloudinit/config/cc_mounts.py:is_network_device()  ; which is just a regex checking for : in the device name;   using that later on when composing the updates to fstab, we could check whether network device mounts have _netdev in the opts, and if not, append it
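[Editor's note: a sketch of the logic rharper describes. The helper names and regex are illustrative, not the actual cc_mounts implementation, which differs in detail.]

```python
import re

# A device that looks like host:/path is treated as a network mount,
# roughly what cc_mounts's is_network_device() check does with a regex.
NETWORK_DEVICE_RE = re.compile(r".+:.*")

def is_network_device(device: str) -> bool:
    """Return True if the device name contains a host:path separator."""
    return NETWORK_DEVICE_RE.match(device) is not None

def ensure_netdev(device: str, opts: str) -> str:
    """Append _netdev to the fstab options for network devices, if missing."""
    if is_network_device(device) and "_netdev" not in opts.split(","):
        return opts + ",_netdev"
    return opts

print(ensure_netdev("nfshost:/mypath", "defaults"))  # defaults,_netdev
print(ensure_netdev("/dev/xvdf", "defaults"))        # defaults
```

The guard against appending when _netdev is already present is what avoids clobbering user-provided mount options, the concern raised above.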
[21:24] <AnhVoMSFT> @rharper does mount -a actually ignore _netdev?
[21:24] <AnhVoMSFT> -a, --all
[21:24] <AnhVoMSFT>     Mount all filesystems (of the given types) mentioned in fstab (except for those whose line contains the noauto keyword). The filesystems are mounted following their order in fstab.
[21:25] <AnhVoMSFT> looking at "man mount" it only mentions skipping those with the noauto keyword on them
[21:33] <rharper> hrm, even if we ignored the nfs mounts; there's nothing to actually mount things;
[21:34] <rharper> I know I played with this before; it looks to me like on systemd systems, we shouldn't use mount -a at all, but rather the daemon-reload we have, which will re-parse fstab; then (with the _netdev options) those mount units won't run until after networking is up, and will happen automatically;
[21:35] <rharper> in the mount man page there's mount -a -O no_netdev, which would exclude any mount that has the _netdev option set;  then the daemon-reload would create the mount units for the nfs entries ...;  for non-systemd systems, there's nothing to trigger a mount after the network comes up though
[21:36] <rharper> this would be an existing issue that they've already solved (either by not using cloud-init cc_mounts to handle nfs mounts, or appending some final script which runs mount -a again)