=== Daniel is now known as toolz
[13:44] Hi all, does someone have a reference `user-data` file for partitioning Ubuntu 20/22 cloud images? I wish to allocate 30% to `/` and the remainder to `/home`. I am unable to partition the QEMU image (via Packer). Any help would be appreciated.
[13:46] tarruda: Yeah, like frickler wrote: you can install cloud-init from the official bullseye-backports.
[16:44] samip537: I don't think so :( contributions are definitely welcome
[16:48] SDes91: a 30/70 split makes sense in an install scenario, but in a cloud scenario repartitioning the image doesn't really jive, does it?
[16:49] holmanb: so maybe it is best to stick to a live server (Ubuntu) and go with Subiquity / Autoinstall then?
[16:50] SDes91: Yeah, if I had specific partition/disk requirements I would probably bake that into an image, and then use the image with cloud-init.
[16:53] holmanb: that would make sense, but would `disk_setup` and `fs_setup` actually work with the live server? I have a `curtin` storage config and would want to stick with cloud-init, because subiquity / autoinstall takes an eternity to set everything up
[16:54] SDes91: During an install you have lots of flexibility, since you typically install from a livecd/usb/whatever and can slice up your disks however you like before copying data. In a running cloud-init instance the disk modifications happen to the already-installed instance - much less flexible
[16:54] SDes91: what do you mean by live server?
[16:55] holmanb: I mean Ubuntu live server images, as opposed to cloud images.
[16:56] SDes91: ah, I think I see what you mean. There are some guides out there about how to use server images as a base for building your instance (the guides typically involve installing cloud-init, sometimes running an initial module followed by some combination of `cloud-init clean`)
[16:57] Unfortunately this isn't something I've personally done, otherwise I'd point you to one.
[17:01] holmanb: Well, I wish to create a custom image so I can load it onto a hardware device. The only possibility is to configure `user-data`, with either a subiquity or a cloud-init config, to get the partitioning right.
[17:08] SDes91: I see. It sounds like subiquity is your best option then, if you're limited to cloud-init or subiquity.
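On the `disk_setup` / `fs_setup` question above: those modules partition and format disks attached to the running instance, so they fit a spare data disk rather than re-slicing the disk that `/` already lives on (hence the advice to bake the root layout into the image or use autoinstall). A minimal sketch, assuming a second disk that shows up as /dev/vdb (the device name is an assumption):

```yaml
#cloud-config
# Sketch only: /dev/vdb is an assumed spare data disk attached to the
# instance. disk_setup/fs_setup cannot re-slice the disk holding /.
disk_setup:
  /dev/vdb:
    table_type: gpt
    layout:
      - 30          # first partition: 30% of the disk
      - 70          # second partition: remaining 70%
    overwrite: false

fs_setup:
  - label: data1
    filesystem: ext4
    device: /dev/vdb1
  - label: home
    filesystem: ext4
    device: /dev/vdb2

mounts:
  - [ /dev/vdb2, /home, ext4, "defaults,nofail", "0", "2" ]
```

The layout values are percentages of the disk, and `overwrite: false` is meant to keep cloud-init from clobbering a disk that already has a layout on later boots.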
[18:17] I'm reading through the docs and I'm not quite sure how to figure something out: where does cloud-init get its network interface names when the datasource is DataSourceEc2Local? I can't find them in the metadata provided by AWS, etc.
[18:18] the issue is simple: depending on the instance type, I get different NIC names: ethX vs ensX. I see cloud-init write out netplan config with those values, so I'm wondering where cloud-init gets them from, and ultimately I want to figure out how to control that
[18:24] ananke: with network v2 you could match on e* and then use set-name to set it to whatever you want. Should work unless you have an interface that starts with a character other than `e`, I think
[18:26] ananke: I would have to dig a little to find the exact source of the names, but it might just be whatever the kernel originally named them.
[18:30] holmanb: the issue is, cloud-init's netplan config is automatically generated: '# This file is generated from information provided by the datasource.', so I'm not sure how I would even attempt to use set-name in this context
[18:31] I'm poking around right now trying to figure out what's responsible for the changes in the network names. AWS's own documentation says it is expected, but I'm wondering what's responsible for it, and whether it can be controlled
[18:41] ananke: the list of ethernet device names is simply obtained from /sys/class/net/
[18:41] looks like this may be related to the network driver: vif vs ena
[18:41] minimal: thanks
[18:42] ananke: depending on your kernel config it may name interfaces using old-style (ethX) or new-style (ensX) names
[18:45] ananke: it is controlled by the kernel cmdline setting "net.ifnames=" (defaults to using predictable names; set it to "0" to disable)
[18:47] so AFAIK cloud-init takes the interface names from /sys/class/net/ and matches them against the interfaces mentioned in the network config (obtained from the IMDS on AWS)
[20:08] minimal: the kernel config is the same; it's the identical image in both cases. The difference is the EC2 instance type, which dictates which network driver is used
[21:53] ananke: which file are you referring to?
[21:56] ananke: oh, you're referring to manually setting the netplan config? why not use cloud-init network v2? https://cloudinit.readthedocs.io/en/latest/topics/network-config.html
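A sketch of the network v2 match/set-name idea suggested above, assuming a single NIC. One way to have cloud-init render it instead of the netplan generated from the EC2 datasource is to ship it as system-level network config under /etc/cloud/cloud.cfg.d/ (the filename below is made up), since system config should take precedence over datasource-provided network config:

```yaml
# Sketch: e.g. /etc/cloud/cloud.cfg.d/99-stable-nic-name.cfg (filename is an
# assumption). System-level network config like this should be used instead
# of the netplan the EC2 datasource would otherwise generate.
network:
  version: 2
  ethernets:
    primary-nic:            # arbitrary config key, not the device name
      match:
        name: "e*"          # matches eth0 as well as ens5, etc.
      set-name: eth0        # rename to one stable name (assumes a single NIC)
      dhcp4: true
```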
[23:42] ananke: the ethernet network driver used should not make a difference to the ethernet device naming...
[23:42] holmanb: the netplan config is automatically populated by cloud-init. I'm talking about booting the identical VM image on AWS and getting different interface names depending on the EC2 instance type.
[23:44] ananke: have you tried enabling debugging for cloud-init? I'd expect you would then see entries in cloud-init.log showing cloud-init looking up the device names in /sys/class/net
[23:44] minimal: it sure does, because they are essentially two different types of networking devices. Not quite the difference between, say, ethernet and IB, but there clearly is one. The ena driver appears to use 'ens' device names, while ivf uses traditional device names
[23:45] minimal: the issue isn't cloud-init; it was something I suspected at first because of the cloud-init-generated netplan config
[23:45] ananke: what is the kernel module for this ivf interface?
[23:46] vif interface, that was a transposition. To be honest, I haven't looked at which driver is used; it appears to be one built into the kernel
[23:47] ananke: well, that's my question: what make/model of network device is it? Knowing the kernel module will indicate this
[23:48] minimal: likely whatever xen provides
[23:49] ananke: something like "lshw" or "hwinfo" should indicate the driver used by that interface
[23:50] I suspect that this driver does not support the "new" ethernet interface naming scheme, whereas the "ena" one does
[23:52] presumably. I'm booting a sample instance right now
[23:52] hah, driver: vif
[23:53] and the AWS docs state: Amazon EC2 instances have three different virtual network adapters: VIF, Intel 82599 VF, and Elastic Network Adapter (ENA).
[23:54] ananke: yeah, I found that, but from a quick search of the kernel source I can't find a VIF driver
[23:56] minimal: it's not a module, that's for sure. It looks like official Ubuntu AMIs don't have /proc/config.gz enabled, so I'd have to dig deeper
[23:56] ananke: is the vif driver a loadable module ("lsmod") or compiled into the kernel?
[23:56] ^
[23:57] comparing lsmod output on a t2.x instance and a t3.x instance shows that the vif one doesn't have any additional modules loaded, while the t3 has ena and a few others
[23:57] does "dmesg" give any useful info re the VIF?
[23:58] anyway, the point is that I've been able to narrow it down to the driver, and it's not a cloud-init-related issue. I wanted to eliminate cloud-init as a possible culprit, thinking it was a network config handed down from the metadata service
[23:58] normally it would give some info re: the vendor
[23:58] also perhaps "lspci"
[23:58] closest is: [ 2.728564] xen_netfront: Initialising Xen virtual ethernet driver
[23:59] there is nothing in lspci
[23:59] like I mentioned earlier, my guess is it's part of xen support
[23:59] anyway, what exactly is the actual problem? cloud-init gets the network config from the AWS IMDS, so it should then set things up appropriately
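If the goal is simply consistent ethX names across the xen/vif and ena instance types, the "net.ifnames=" kernel parameter mentioned at 18:45 can be pinned when the image is built (e.g. in the Packer stage mentioned at the top). A hedged sketch using standard cloud-config modules; the GRUB drop-in path is normal Ubuntu practice, but the exact filename is an assumption and the setting only takes effect on the next boot:

```yaml
#cloud-config
# Sketch: pin old-style ethX names by adding net.ifnames=0 to the kernel
# cmdline. Best run while building the image, since it only applies after
# a reboot. Drop-ins under /etc/default/grub.d/ are sourced by Ubuntu's
# grub-mkconfig; treat the exact filename as an assumption.
write_files:
  - path: /etc/default/grub.d/99-ifnames.cfg
    content: |
      GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT net.ifnames=0"

runcmd:
  - update-grub
```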