[00:04] smoser: I want it to, i set it to true right now, getting ready to test
[00:07] would genisoimage execute the meta-data (containing #cloud-config) and user-data in the order specified?
[00:07] so for example I should probably set the meta-data first, then the user-data (which can also be #cloud-config but supplied by the user)
[00:07] genisoimage -output cloud-init.iso -volid cidata -joliet -rock meta-data user-data ?
[00:09] https://cloudinit.readthedocs.io/en/latest/topics/modules.html#set-passwords Also, is that top-level password by default the root password?
[00:09] For example even on ubuntu, which seems to say the default user is ubuntu
[01:27] vans163, not sure what you mean by 'execute'
[01:28] password is for the default user.
[01:29] meta-data does not get executed. and the order of arguments to genisoimage does not matter. you're mastering a filesystem.
[01:33] smoser: i see, okay, I think i need to use chpasswd then to change the root password
[09:21] j/close
=== shardy is now known as shardy_lunch
=== shardy_lunch is now known as shardy
[14:46] is the correct way to mount the cloud-init iso for qemu to set it as -hda ?
[14:46] also, do we need to remove it before restarting the vm? or does cloud-init guarantee it won't execute twice?
[14:57] vans163: I don't know what config you have - but e.g. user-scripts usually only run once
[14:58] vans163: on those you'll find a bit more, maybe the first slightly outdated http://stackoverflow.com/questions/6475374/how-do-i-make-cloud-init-startup-scripts-run-every-time-my-ec2-instance-boots http://cloudinit.readthedocs.io/en/latest/topics/examples.html#run-commands-on-first-boot
[14:58] and I must admit I should know better, but I don't
[14:58] gut feeling tells me the actual config is only done once, but I'm not feeling confident enough to state any hard facts
[14:58] highlighting smoser for some real insight for you
[14:58] smoser: ^^
[15:00] vans163, most user-data is handled once "per instance"
[15:00] meaning per-instance-id. if you want to run it again, you'd need to change the instance id that is on the config disk.
[15:00] mostly, it should "do the right thing".
[15:01] if you want to detach that disk, then you will need to tell cloud-init "manual_cache_clean: True". And you have to write that (probably via user-data) into /etc/cloud/cloud.cfg.d, or on the next reboot it will go looking for a datasource again.
[15:02] -hda doesn't matter.
[15:02] any block device will suffice. cloud-init finds disks by the disk label, so as long as the guest sees the block device it's fine.
[15:02] vans163: alternatively, if you're just re-testing that same user-data, I usually blow away the instance data (sudo rm -rf /var/lib/cloud/* /var/log/cloud-init* ) and reboot
[15:02] i would not suggest using '-hda' generally, as virtio is superior in just about every way
[15:03] other than having a nice shorthand cli name
[15:03] smoser: im thinking ahead to dynamically adding disks and things. in that case it would be good to assign that cloud-init disk to something that would not get in the way.
[15:03] for example one instance may need cloudinit.iso mycustomiso.iso disk.qcow2 disk2.qcow2
[15:04] where should cloudinit.iso go so it does not get in the way?
[15:04] i dont know what "get in the way" means.
[15:04] because afaik using -cdrom automatically assigns it a hdx
[15:04] there's only one cdrom
[15:04] it doesnt have to be a cdrom
[15:04] any block device.
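To pull the genisoimage and chpasswd pieces above together, here is a minimal sketch of a NoCloud seed; the hostname, instance-id, and password are placeholder values, and note that the #cloud-config belongs in user-data while meta-data only carries instance metadata:

    # write the two files the NoCloud datasource expects (placeholder values)
    cat > meta-data <<'EOF'
    instance-id: iid-local01
    local-hostname: testvm
    EOF

    cat > user-data <<'EOF'
    #cloud-config
    chpasswd:
      list: |
        root:changeme
      expire: false
    EOF

    # argument order does not matter; the volume label "cidata" is what cloud-init looks for
    genisoimage -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data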
[15:04] vans163: yeah, if you use virtio to host the iso file, that works fine too
[15:05] pretty sure you *could* attach multiple cdroms to a guest if you wanted.
[15:05] the IDE devices (hd[abcd]) support only 4 devices, but virtio-blk or scsi support many more
[15:05] but i'd just do it as a virtio disk
[15:05] surely you could with a virtio cdrom
[15:05] smoser: only two, IIRC
[15:05] well, virtio block
[15:05] with an ISO payload
[15:05] ah so -drive id=disk0,file=/cloud/cloudinit/100.iso,if=virtio for example?
[15:05] there is a virtio cdrom i think, no?
[15:05] that's not quite the same as a cdrom device
[15:05] vans163: yeah
[15:05] then disk1 could be the qcow2?
[15:06] smoser: no, just virtio-blk, or virtio-scsi (which can have cdrom properties)
[15:06] ah. sure.
[15:06] vans163: sure; Ubuntu cloud-images find the root devices via label
[15:06] and cloud-init finds the datasource disk via label as well
[15:06] disk1=...disk1.qcow2 disk2=...disk2.qcow2 disk5=...Another.iso
[15:06] root=cloudimg-rootfs userdata=cidata
[15:06] vans163, cloud-localds --help shows:
[15:06]  * kvm -net nic -net user,hostfwd=tcp::2222-:22 \
[15:06]    -drive file=disk1.img,if=virtio -drive file=my-seed.img,if=virtio
[15:07] that works fine.
[15:07] ah so i can drop the id then, im not using it elsewhere in the cmdline
[15:37] drop the id ?
[15:37] smoser: -drive id=disk0,..
[15:38] oh. well, that is a more verbose way of doing things. you do need the more verbose way in some cases.
[15:39] qemu is not fun to interact with from a command line.
[15:39] i use xkvm. http://bazaar.launchpad.net/~curtin-dev/curtin/trunk/view/head:/tools/xkvm
[15:39] which generates those id= and such.
[15:40] its not perfect, but it makes at least network devices and block devices easier
[15:40] xkvm --disk=/tmp/asdf --dry-run --netdev=user
[15:40] qemu-system-x86_64 -enable-kvm -device virtio-scsi-pci,id=virtio-scsi-xkvm -device virtio-net-pci,netdev=net00 -netdev type=user,id=net00 -drive file=/tmp/asdf,id=disk00,if=none,index=0 -device virtio-blk,drive=disk00,serial=asdf
[15:40] http://bazaar.launchpad.net/~curtin-dev/curtin/trunk/view/head:/tools/xkvm
[15:43] smoser: that looks sweet, ty
[15:43] gonna check it out
=== rtheis_ is now known as rtheis
[16:21] so im just seeing a black screen with a blinking cursor
[16:21] i tried to use a live iso and it works fine; when i use the cloud-init iso i made with genisoimage
[16:21] black screen, blinking cursor at bottom left
[16:23] I used genisoimage -output 100.iso -volid cidata -joliet -rock user-data meta-data ; user-data was an empty file, meta-data had the #cloud-config
[16:23] catting the 100.iso i can see the #cloud-config is there
[16:25] could this mean my #cloud-config is incorrect, or could it mean the iso is not correctly generated?
[17:27] vans163: if you suspect issues with your cloud-config you could use lxd to test it, if the config is not too complex.
[17:28] if the config is bad you can view /var/log/cloud-init-output.log and it will have a message about failing to read the user data
[17:32] i think its to do with qemu and the image i am using, i cant figure this out atm
[17:32] I downloaded xenial-server-cloudimg-amd64-disk1.img and set that as the first disk via -disk
[17:32] but even that does not boot. afaik without cloud-init the cloud image should boot, no?
[17:33] i can boot using a live iso just fine though, say if i insert a Fedora_24_workstation.iso
[17:33] Qemu just says booting harddisk.. failed to boot, not a bootable disk.
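Following the cloud-localds --help example quoted above, a sketch of attaching both the root disk and the seed as plain virtio drives (filenames are placeholders):

    kvm -m 512 -net nic -net user,hostfwd=tcp::2222-:22 \
        -drive file=disk1.img,format=qcow2,if=virtio \
        -drive file=cloud-init.iso,format=raw,if=virtio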
I set virtio off
[17:34] gave the .img 777 even
[17:35] file on it reveals xenial.img: QEMU QCOW Image (v2), 2361393152 bytes
[17:35] Are you using qemu via the cli? or via something like virt-manager?
[17:35] cli for now, i can paste my config
[17:35] if you could
[17:39] so 2 cases here, i tried both: 1 with that -cdrom, 1 without. With cdrom, errorcode 0004; without cdrom it hangs indefinitely on Reading harddrive (i forgot to change format=raw to format=qcow2) https://gist.github.com/anonymous/5049a93256c69f4a2a3dd93bc163c826
[17:40] "Could not read from CDROM (errorcode 0004)"
[17:40] 100.iso is the genisoimage iso with the #cloud-config
[17:43] my thoughts are even if the #cloud-config is wrong, the CDROM should still boot/read and give an error
[17:44] or maybe not..
[17:45] ahhh, missing a \ at the end of the -cdrom line. added it and it's stuck indefinitely at booting from harddrive
[17:45] iotop reveals 0 disk activity
[17:47] http://imgur.com/a/FrBJV
[17:52] looking - nice catch on the \
[17:55] powersj: ty. note if I change that -cdrom to -cdrom /root/iso/fedora_24.iso i boot immediately to the live installer
[17:55] my best guess right now is an improperly generated genisoimage iso?
[17:57] well, your qemu command line says boot order=d which means to boot the cdrom first
[17:57] order=c will boot the drive first
[17:59] when using -drive I think you have to use order=c as well, but I'm not certai
[17:59] certain, rather
[18:09] qemu-system-x86_64 -enable-kvm -boot order=c -drive format=qcow2,file=ubuntu-16.04-server-cloudimg-amd64-disk1.img
[18:10] is what I used as a very simple example to get the drive going. Otherwise others may chime in with better suggestions
[18:16] let me try
[18:18] it just boots from the harddrive first now and gets stuck indefinitely. I think a good question would be: can an ubuntu cloud image (qcow2 format) boot by itself?
[18:18] without a cloud-init iso supplied
[18:19] vans163, you have to boot the cloud image.
[18:20] i'm confused as to what you were trying to boot.
[18:21] does not matter what; if I boot the CDROM it fails and boots the harddrive, if i boot the harddrive it skips booting the cdrom
[18:21] in both cases the problem is the same, an infinite hang while booting the harddrive. seems there are issues https://bugs.launchpad.net/cloud-images/+bug/1573095
[18:21] going to try another cloud image
[18:24] vans163, can you just show what you're doing ?
[18:24] you're using qemu (or kvm) right ?
[18:28] this is qemu. and i think the problem is im not using the right cloud image
[18:28] it seems most cloud images are for openstack or ec2
[18:30] going to try the generic centos image
[18:30] 1 moment
[18:34] okay so the centos generic cloud image boots to grub at least
[18:34] then the infinite hang; guessing this is where cloud-init comes into play
[18:34] so after grub it needs to do cloud-init?
[18:34] *after grub boots the kernel
[18:37] http://paste.ubuntu.com/23486774/
[18:37] ^ that works. i dont know about centos images, or what they have.
[18:38] but quite possibly you can substitute the centos cloud image for this.
[18:38] smoser: wow, let me try that!
[18:50] smoser: ty so much, it works!
[18:51] smoser: 1 key point that i missed was the xenial image is compressed. had to qemu-img convert -O qcow2 to uncompressed
[18:51] you dont have to do that.
[18:51] strange
[18:51] you can, but you dont have to
[18:51] let me try again
[18:51] the first time i tried, it hung at Reading harddrive..
[18:51] if you try with compressed, it will be slower.
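The working paste above is not reproduced in the log; based on the options discussed so far, it presumably looked roughly like the sketch below (filenames are placeholders, and the decompression step is optional since a compressed qcow2 also boots, just more slowly):

    # optional: rewrite the compressed cloud image as a flat qcow2
    qemu-img convert -O qcow2 xenial-server-cloudimg-amd64-disk1.img xenial-flat.qcow2

    # boot the cloud image from disk, with the NoCloud seed as a second virtio drive
    qemu-system-x86_64 -enable-kvm -m 512 -boot order=c \
        -drive file=xenial-flat.qcow2,format=qcow2,if=virtio \
        -drive file=100.iso,format=raw,if=virtio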
[18:51] as you're doing cpu decompression on every read
[18:51] but it will work
[18:52] i did exactly the above and it worked.
[18:52] you're right
[18:52] it works in both cases
[18:52] strange, i think im just confused as i have like 10 of these configs
[18:52] must have confused something, but compressed and not compressed both work
[18:53] going to try that same seed with the openstack debian image
[18:54] works
[18:57] I found the problem. I was using -cpu host
[18:57] when I add that to your config, i get the hang on boot
[18:57] just like with my config
[18:57] but as far as im aware -cpu host gives you better performance?
[19:01] it seems it might not be the case
=== shardy is now known as shardy_afk
[19:12] vans163: -cpu host just exports host cpu features into the guest, but the default qemu64 cpu is usually sufficient unless you have a highly tuned application looking for specific recent cpuflag features and the app would otherwise turn things off
[19:13] for example, ffmpeg does per-cpu tuning, using sse2 vs sse4; there are ways to expose just those features instead of -cpu host, which can cause issues for migration (if that's a use-case): you can do -cpu qemu64,+sse4 etc. or use the cpu classes like Nehalem, etc.
[19:13] vans163, where are you running this ?
[19:13] adding -cpu host doesn't change anything for me.
[19:15] you probably need -m 512 (or more). default memory is 128, and with that i saw an OOM on boot. i'd just not go there.
[19:15] smoser: i get "warning: host doesn't support requested feature: CPUID.0DH:EAX [bit 0]" (bits 0, 1, 2) - 3 of these errors
[19:15] yeah, i see that.
[19:15] just noise
[19:15] Maybe -cpu host is passing all features of the cpu but this particular cpu is buggy?
[19:15] hum
[19:16] rharper: good to know about -cpu qemu64,+sse4, I will note that, as I would like the guest to take advantage of things like that and AES-NI
=== shardy_afk is now known as shardy
=== shardy is now known as shardy_afk
[19:19] also note that i never had an issue using -cpu host before, but here it's refusing to boot the qcow2 image
[19:19] and on this node i can boot a qcow2 image with fedora that i installed from an iso with no problems. and win10 works
[19:20] vans163: if you have the boot log from the failed scenario, I can look and see if something obvious comes up
[19:20] i built qemu from source, hence its version 2.7.5
[19:20] maybe something to do with that?
[19:20] any idea how to fetch that?
[19:21] you can use -serial stdio
[19:21] and it will spit serial console data to stdout
[19:21] sec
[19:21] Ubuntu cloud images boot with console=ttyS0 by default, so boot messages will show up there
[19:22] nothing out of the ordinary, those same warning msgs
[19:22] then vnc shows Booting from hard disk... MBR and stuck
[19:23] remove -cpu host, and it shows the same thing except it continues
[19:23] then boots
[19:23] maybe theres a way to get more logs?
[19:24] it does not even get to boot. http://imgur.com/a/FrBJV this is the screen it gets stuck on, except it also prints MBR now
[19:24] maybe.. its in some kind of uefi mode and looks for a GPT?
[19:25] It also prints "SYSLINUX 6.03 EDD 20150820 Copyright (C) 1994-2014 H. Peter Anvin et al" after the MBR on a new line, and thats it
[19:26] cpu usage is 100%, io seems 0
[19:27] what host cpu do you have?
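A sketch of rharper's suggestion of exposing selected CPU features instead of -cpu host; the feature-flag names accepted vary by QEMU version, so check qemu-system-x86_64 -cpu help before relying on these (filenames are the placeholders used earlier):

    # expose SSE4.x and AES-NI on top of the default qemu64 model (flag names are illustrative)
    qemu-system-x86_64 -enable-kvm -m 512 \
        -cpu qemu64,+sse4.1,+sse4.2,+aes \
        -drive file=xenial-flat.qcow2,format=qcow2,if=virtio \
        -drive file=100.iso,format=raw,if=virtio \
        -serial stdio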
[19:27] I wonder if the 16-bit emulation in in-kernel kvm is failing
[19:27] Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz stepping 7, microcode 0x710
[19:28] newer intel cpus have a feature to provide support for 16-bit "big real mode" instead of having kvm do software 16-bit emulation
[19:28] ohh, i enabled 1 new thing: on the motherboard i enabled power saving mode
[19:29] before, it had all powersaving and C states disabled, and i passed a kernel parameter to disable it
[19:29] power states shouldn't matter
[19:29] okay, so thats off the table
[19:29] can you show your full commandline ? with the -cpu host ?
[19:29] i could run it in valgrind and print a report
[19:29] it's in-guest
[19:29] so valgrind won't show you anything
[19:29] ah
[19:29] it's stuck in bios
[19:30] sec
[19:30] which is early bootstrap of the vm, running the 16-bit asm of SeaBIOS
[19:30] https://gist.github.com/anonymous/5cd8e1110161adba6a597b1494ea458d remove that -cpu host and everything works
[19:31] this may help see the difference in bios boot with and without -cpu host; -chardev stdio,id=seabios -device isa-debugcon,iobase=0x402,chardev=seabios
[19:31] and just the debian image? or all images?
[19:31] all
[19:31] ok
[19:31] (ubuntu deb fedora)
[19:32] i could boot non-cloud images, but I did not test that once i enabled power saving
[19:32] let me try..
[19:32] so its not an oversight
[19:33] gonna try your extra param first
[19:35] ok, on Yakkety qemu 2.6.1, that boots fine with a Xenial VM
[19:35] on my Intel NUC (i5)
[19:35] can you try a different qemu as well? I can't quite see why -cpu would affect the bios boot
[19:35] if you drop -enable-kvm
[19:36] and it boots, that might suggest some emulation of 16-bit boot code being an issue ... what host kernel ?
[19:36] https://www.diffchecker.com/JOdZ8vQk left fails, right works. (no diff)
[19:36] very strange
[19:37] what about leaving -cpu host, but dropping the -smp ?
[19:37] Linux x7 4.7.6-200.fc24.x86_64
[19:38] I'm just poking at things; there's nothing obviously wrong with your command, and I see no reason why host cpu flags would affect it; you could try to extract the kernel/initrd from the image and boot those directly with -kernel and -initrd; that would take the bios bits out of the picture
[19:38] basically we need to hunt and peck to find out which code path is failing; something is sensitive to the cpu flags; just not sure what or why; nothing obvious
[19:39] dropping smp: still stuck. dropping kvm: gives an error (host cpu model requires kvm). Do you think I should try the QEMU that is in fedora 24 (the current host system)?
[19:39] yeah, that's another data point
[19:40] it would be 2.6.2
[19:43] 2.6.2 works
[19:43] o.o
[19:43] does not get stuck
[19:44] Also 2.6.2 does not print the warnings that CPU extension bits are missing
[19:45] maybe it means 2.7.5 got better support for other cpu extensions, and -cpu host is enabling them but the CPU doesn't really have them, and QEMU gets stuck trying to use them?
[19:48] lemme see what that's about
[19:48] it did sorta stick out
[19:49] i tried googling for those errors but there are close to 0 details
[19:49] *warnings
[19:52] https://lists.gnu.org/archive/html/qemu-devel/2016-09/msg07278.html
[19:52] that looks relevant
[19:52] the xsave cpu flag, it seems
[19:54] rharper: let me check if that landed
[19:56] It does not seem so. https://github.com/qemu/qemu/blob/83c83f9a5266ff113060f887f106a47920fa6974/target-i386/cpu.c#L2264
[19:56] but i386? aren't I using x86_64?
[19:56] or is i386 still used?
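A hedged sketch of the -kernel/-initrd idea above, which bypasses SeaBIOS entirely; the kernel and initrd must first be copied out of the image (for example with libguestfs tools), and the paths and root= label here are assumptions based on the Ubuntu cloud image conventions mentioned earlier, not taken from the log:

    # boot the extracted kernel directly, skipping the BIOS/bootloader path (paths are placeholders)
    qemu-system-x86_64 -enable-kvm -m 512 -cpu host \
        -kernel vmlinuz -initrd initrd.img \
        -append 'root=LABEL=cloudimg-rootfs ro console=ttyS0' \
        -drive file=xenial-flat.qcow2,format=qcow2,if=virtio \
        -drive file=100.iso,format=raw,if=virtio \
        -serial stdio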
[19:56] in an emulated way or something
[19:57] it's not i386 specific
[19:57] it's just how the cpu targets are organized
[19:57] the target-i386 code produces both a 32-bit and a 64-bit cpu model
[19:58] they have 2.8 out now; let me try to patch that manually first, then stash and try 2.8
[19:59] Im not sure if I even need 2.7.5, i just wanted -device virtio-input-host-pci support
[20:03] funny, the latest commit on 2.7.5 is sept 28. so 2.8 might fix things
[20:11] commit 453ac8835b002263a6 is very nice, its a doc update on pcie passthrough best practices
[21:31] rharper: the snappy module is primarily for snap based systems I take it? If for example I wanted it to install packages from the store on a cloud image, could I use it for that?
[21:32] I was hoping https://paste.ubuntu.com/23487411/ would install those snaps, not just install snap
[21:33] are those snaps in the stable channel ?
[21:33] yes
[21:33] which cloud-init are you testing?
[21:33] those certainly should get installed
[21:33] I'm testing with the 'hello' snap
[21:33] but I can switch to those
[21:34] powersj: it will work on any system that has snapd installed (so classic, or all-snap)
[21:34] rharper: using the lxd cloud image of xenial with the cloud config linked above
[21:34] powersj: snap has to be installed, which it is in xenial+
[21:35] snap list after boot works, but shows nothing installed
[21:35] which cloud-init are you running ?
[21:35] we're just now SRUing the
[21:35] 'snap' command into xenial cloud-init; it's not there yet
[21:35] so, yakkety would work
[21:36] on default lxc, snap install fails
[21:36] 0.7.8-1-g3705bb5-0ubuntu1~16.04.3
[21:36] so there may be some sort of lxd update or config needed to enable snap install to work
[21:36] # snap install hello
[21:36] error: cannot perform the following tasks:
[21:36] - Mount snap "ubuntu-core" (423) ([start snap-ubuntu\x2dcore-423.mount] failed with exit status 1: Job for snap-ubuntu\x2dcore-423.mount failed.
[21:36] maybe there's an error like that in cloud-init-output.log
[21:37] or an error in cloud-init.log around installing snaps
[21:37] might pop into #lxd to check about snap installing
[21:38] ok, I'll mark this down as something to circle back to then
[21:38] "modules-final/config-snappy: SUCCESS: config-snappy ran successfully"
[21:38] hrm
[21:39] there should be some subp on the command in the log
[21:40] if not, we definitely need a task for util.subp to create events so they're trackable via cloudinit-analyze
[21:40] magicalChicken: smoser ^^ item for roadmap
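The paste with the snap user-data above is not reproduced in the log; one fallback sketch that avoids depending on the snappy module's exact config keys is to drive snap directly from runcmd (the 'hello' snap is the example from the discussion, and this assumes snapd is already present in the image):

    #cloud-config
    # fallback: install a snap on first boot via runcmd rather than the snappy module
    runcmd:
      - [snap, install, hello]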