[16:34] <plm> Hi all
[16:34] <plm> people, I would like to run Ubuntu ARM 18.04 in a VM; what is the best choice?
[16:34] <blackflow> !crosspost
[16:36] <plm> blackflow: I don't need a GUI, just a server, which is why I posted here too. Sorry
[16:37] <blackflow> plm: so you want ARM on non-ARM hardware?
[16:37] <blackflow> qemu. in fact, always qemu. whether it uses kvm or not is a different question. :)
[16:37] <plm> blackflow: yes.
[16:38] <plm> blackflow: actually I have 14.04 running on qemu
[16:38] <plm> blackflow: I don't know how that image was made. But I need a newer version of python
[16:39] <plm> blackflow: I tried a PPA; installing a new python version (python3.6) works, but it errors when installing python3.6-venv
[16:39] <plm> blackflow: I also tried to upgrade from 14.04 directly to 18.04, but got errors
[16:40] <blackflow> yeah go through 16.04 first
[16:40] <blackflow> but be aware there's plenty of radically different stuff now, since 14.04
[16:41] <blackflow> as for your actual problem, not sure I understand it, and personally I have zero exp. with ARM. Was just saying that for any virtualization solution, same arch or cross arch, qemu is your tool of choice.
[16:41] <plm> blackflow: ok. I will try going to 16.04 and then to 18.04. Will try that now.
[16:41] <plm> blackflow: all right, using qemu =D
[16:44] <plm> blackflow: hey
[16:44] <plm> blackflow: sorry. I gave you wrong information.
[16:44] <plm> I tried updating 14.04 to 16.04 and got an error too, but...
[16:45] <plm> blackflow: I tried again, without apt-get -f dist-upgrade, just installing the new python version (python3.5), and it works
[16:45] <plm> blackflow: 16.04 doesn't have python 3.6
[16:45] <plm> so I will try changing sources.list to install 3.5
[16:46] <plm> and after it's installed I will change sources.list to use bionic, and try apt-get install python3.6
[16:47] <blackflow> plm: yeah that's totally not advisable. don't install packages from different releases.
[16:47] <plm> blackflow: it was the only way to have success
[16:47] <plm> upgrading 14.04 to 16.04 I get an error
[16:47] <blackflow> what error?
[16:48] <plm> blackflow: moment, I will try again and post here the error.
[16:57] <plm> blackflow: qemu is very slow, just wait a moment more =D
[17:01] <blackflow> you can post it, someone might help, I have to step out for a while
[17:05] <plm> blackflow: :(
[17:05] <plm> blackflow: all right
[17:05] <plm> blackflow: anyway, thanks
[17:32] <plm> blackflow: are you there? This is the error doing the upgrade from 14.04 to 16.04:
[17:32] <plm> /var/lib/dpkg/info/debconf.prerm: 12: /var/lib/dpkg/info/debconf.prerm: pyclean: not found
[17:32] <plm> dpkg: warning: subprocess old pre-removal script returned error exit status 127
[17:32] <plm> dpkg: trying script from the new package instead ...
[17:34] <plm> blackflow: complete log error: http://dpaste.com/0XQF5PZ
[17:36] <plm> anyone more can help me with this upgrade error?
[17:42] <_KaszpiR_> pycompile: not found
[17:42] <plm> _KaszpiR_: yes, I see that.
[17:42] <_KaszpiR_> https://stackoverflow.com/questions/30962402/dpkg-error-pycompile-not-found
[17:43] <plm> _KaszpiR_: I did *exactly* what that thread recommends, and it doesn't work
[17:44] <plm> _KaszpiR_: sudo apt-get -f install; sudo dpkg --configure -a; sudo apt install -f --reinstall python3-minimal
[17:44] <plm> _KaszpiR_: I tried that alternative above there too
[17:44] <TJ-> plm: see if there is a file in the system already with "dpkg -S pyclean" and "dpkg -S pycompile"
[17:46] <plm> TJ-: all right. Please give me a moment, I'm creating that scenario again.
[17:47] <TJ-> plm: it's possible the files are there but they have a shebang line that refers to an executable that is missing (such as the python 2.x vs 3.x issue)
[17:50] <plm> TJ-: all right. If necessary, and it's easier to upgrade from 14.04 to 16.04, I can remove some packages and reinstall them after upgrading to 16.04. Anyway, please wait, I'm creating that scenario again to run the commands you pasted me.
[18:21] <plm> TJ-: a few more minutes =D
[18:26] <plm> TJ-: done
[18:27] <plm> TJ-: root@mintboxa:~# dpkg -S pyclean
[18:27] <plm> dh-python: /usr/share/debhelper/autoscripts/prerm-pypyclean
[18:27] <plm> TJ-: python-minimal: /usr/bin/pycompile
[18:27] <plm> python: /usr/share/debhelper/autoscripts/postinst-pycompile
[18:28] <plm> TJ-: there are more lines. Complete log is here: http://dpaste.com/174STZ6
[18:28] <plm> TJ-: and now, what I need to do?
[18:30] <TJ-> plm: what does "head /usr/bin/pyclean" report?
[18:30] <TJ-> plm: first line only - the shebang
[18:31] <plm> TJ-: http://dpaste.com/37MF6BP
[18:31] <plm> TJ-: root@mintboxa:~# head /usr/bin/pyclean
[18:31] <plm> #! /usr/bin/python
[18:31] <plm> # -*- coding: UTF-8 -*- vim: et ts=4 sw=4
[18:34] <plm> TJ-: I need to change that "#! /usr/bin/python"?
[18:36] <TJ-> plm: no, I'm trying to check on what to expect. What does "readlink -e /usr/bin/python" report?
[18:38] <plm> TJ-: root@mintboxa:~# readlink -e /usr/bin/python
[18:38] <plm> root@mintboxa:~#
[18:38] <TJ-> plm: also check for "head /usr/bin/pycompile" and use readlink -e on what it shows, too. Ensure both point to executables
[18:38] <plm> root@mintboxa:~# head /usr/bin/pycompile
[18:38] <plm> #! /usr/bin/python
[18:38] <plm> # -*- coding: utf-8 -*- vim: et ts=4 sw=4
[18:38] <TJ-> plm: what does "file /usr/bin/python" report ?
[18:38] <plm> root@mintboxa:~# readlink -e /usr/bin/pycompile
[18:38] <plm> /usr/bin/pycompile
[18:38] <plm> root@mintboxa:~#
[18:39] <plm> root@mintboxa:~# file /usr/bin/python
[18:39] <plm> /usr/bin/python: ERROR: cannot open `/usr/bin/python' (No such file or directory)
[18:39] <plm> root@mintboxa:~#
[18:39] <TJ-> plm: there is your problem
[18:39] <TJ-> plm: what does "ls -l /usr/bin/python*" report? It might just be a missing symlink
[18:39] <plm> TJ-: there is no python.
[18:40] <TJ-> plm: there should be!
[18:40] <plm> TJ-: http://dpaste.com/2JFK0GH
[18:40] <plm> TJ-: many lines, I paste ^
[18:41] <TJ-> plm: so there is both python2.7 and python3.4 installed but nothing is symlinking from /usr/bin/python. Let's fix that manually with "sudo ln -s python2.7 /usr/bin/python"
[18:41] <TJ-> plm: now your upgrade problems should be solved
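TJ-'s fix can be wrapped in a small guard script — a sketch, assuming python2.7 is the interpreter actually installed (as plm's `ls -l /usr/bin/python*` output showed):

```shell
#!/bin/sh
# If /usr/bin/python is missing or a dangling symlink, point it at
# the installed python2.7 (the interpreter pyclean/pycompile expect).
if ! readlink -e /usr/bin/python >/dev/null 2>&1; then
    echo "restoring /usr/bin/python -> python2.7"
    sudo ln -sf python2.7 /usr/bin/python
fi
file /usr/bin/python    # sanity check: should no longer report ENOENT
```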
[18:42] <plm> TJ-: =D Trying again apt-get -f dist-upgrade
[18:45] <plm> TJ-: debconf pass =D  upgrading...
[18:53] <plm> TJ-: still upgrading. Is it possible to give more power to qemu? Qemu is very slow :(
[18:57] <TJ-> plm: shouldn't be if you're using KVM hardware acceleration
[18:58] <TJ-> plm: unless it needs more cores or more RAM of course
[18:59] <plm> TJ-: this is my qemu start.sh http://dpaste.com/3GRCB9Z
[18:59] <plm> TJ-: Is there anything I can put there to dedicate more RAM/cores to qemu, or does qemu get it automatically from my system? My system has 12GB RAM and many cores
[19:02] <plm> I think it is that "-m 1024", which is 1024MB, right?
[19:03] <TJ-> plm: yes
[19:04] <plm> "-smp <NUMBER> - Specify the number of cores the guest is permitted to use. The number can be higher than the available cores on the host system"
[19:04] <plm> I will try using "-smp 4" after the upgrade =D
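For reference, the two knobs discussed above slot into plm's existing command line like this — a sketch only; `realview-pb-a8` models a single-core board, so `-smp` values above 1 may simply be ignored, and the board model may cap usable RAM:

```shell
# -m: guest RAM in MiB; -smp: number of virtual CPUs requested.
qemu-system-arm -M realview-pb-a8 -cpu cortex-a8 \
    -m 1024 -smp 4 \
    -kernel uImage.realview-vm.kernel \
    -initrd initrd.img -sd rootfs.img \
    -append "root=/dev/mmcblk0p1 rootwait rw"
```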
[19:04] <TJ-> plm: it's an ARM cortex, so can't expect it to be fast. But, is it I/O bound to a slow device such as SDcard?
[19:05] <plm> TJ-: no, it is on MY SATA HD, very fast
[19:06] <TJ-> plm: ok, what's the device ?
[19:06] <plm> TJ-: I think the problem is just the 1024 of RAM, and maybe without the -smp param it gets just one core.
[19:06] <plm> TJ-: Disk /dev/sda: 931,5 GiB, 1000204886016 bytes, 1953525168 sectors
[19:07] <plm> TJ-: Model Family:     Western Digital Blue
[19:07] <plm> Device Model:     WDC WD10EZEX-00BN5A0
[19:07] <TJ-> plm: it's not SATA, the VM says root=/dev/mmcblk0p1 -- mmc is likely an SD card
[19:08] <plm> TJ-: I don't know why that "root=/dev/mmcblk0p1"; I have just one disk in my PC
[19:08] <TJ-> plm: is it mapping the boot files inside the VM guest as an SD card then?
[19:09] <TJ-> plm: Ah, further on I see "-sd rootfs.img"
[19:10] <TJ-> plm: you can use 'iotop' on the host to see if there's a bottleneck there.
[19:11] <plm> 13391 be/4 root        3.47 K/s  114.65 K/s  0.00 %  1.00 % qemu-system-arm -M realview-pb-a8 -cpu cortex-a8 -m 1024 -ke~.1.1:255.0.0.0:armqemu -sd rootfs.img -initrd initrd.img -usb
[19:11] <plm> TJ-: ^
[19:13] <TJ-> plm: maybe it is doing full emulation - not providing any hardware support like KVM does on x86 ?
[19:13] <plm> pi@deskdev-pi:~$ cat /proc/cpuinfo  | grep -i kvm
[19:13] <plm> pi@deskdev-pi:~$
[19:13] <plm> TJ-: ^
[19:17] <TJ-> plm: there you go then; I know there is kvm support for HYP on ARM - not sure if it is baked in though or you need external (self-built) binaries for that
[19:19] <TJ-> plm: i see 'kvmtool' is in the Ubuntu archives, so you could install that and use "lkvm" to run the guests
[19:21] <TJ-> plm: you might also check the kernel was built with CONFIG_KVM
[19:21] <plm> TJ-: in the host: root@deskdev-pi:~# apt-cache search kvmtool
[19:21] <plm> kvmtool - Native Linux KVM TOOL
[19:21] <plm> just install kvmtool on the host?
[19:21] <plm> and how do I call "lkvm" for the qemu guests?
[19:21] <TJ-> plm: well not 'just' - that's the userspace handler. You need to ensure the CPU/kernel support HYP mode and can load the kvm module
[19:22] <plm> is there an easy way (cat) to check for "CONFIG_KVM"?
[19:22] <TJ-> plm: you'd need to install kvmtool and "man lkvm"
[19:22] <plm> ok
[19:22] <TJ-> plm: "grep KVM /boot/config-$(uname -r)" usually
[19:23] <plm> CONFIG_KVM_GUEST=y
[19:23] <plm> many lines, will put on dpaste
[19:23] <plm> TJ-: http://dpaste.com/0Z4WB3H
[19:25] <TJ-> plm: the host is ARM yes?
[19:25] <plm> TJ-: no, is a x86, intel
[19:26] <plm> model name	: Intel(R) Core(TM) i5-3330 CPU @ 3.00GHz
[19:26] <TJ-> Oh! I thought you said it was an ARM cortex
[19:26] <plm> 3 cores
[19:26] <plm> sorry if I said it wrong
[19:26] <plm> the guest (qemu) is what runs the ARM
[19:26] <TJ-> so everything is as expected. you cannot get hardware acceleration (KVM) support for another foreign architecture like ARM
[19:27] <plm> ohh
[19:27] <TJ-> so qemu is running in software emulation mode, which is why it is slow
[19:27] <plm> but if my qemu was running x86, I could get acceleration, right?
[19:27] <plm> but maybe configuring more RAM and -smp 3 will help =D
[19:28] <TJ-> when you run a guest of the same architecture as the host, qemu can use KVM to allow the host CPU to safely execute most instructions with 0 delay. With a different arch, the host has to simulate every machine instruction
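TJ-'s point can be checked from the host shell — a minimal sketch; the presence of /dev/kvm only ever helps guests of the host's own architecture:

```shell
#!/bin/sh
# KVM accelerates only same-architecture guests; a cross-arch guest
# (ARM on x86) always runs under QEMU's TCG software emulation.
if [ -e /dev/kvm ]; then
    echo "KVM present: can accelerate $(uname -m) guests"
else
    echo "no /dev/kvm: every guest runs under software emulation"
fi
```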
[19:28] <plm> TJ-: Understood.
[19:29] <plm> TJ-: well, I think the upgrade to 16.04 is finishing. After, I will upgrade to 18.04 =D
[19:29] <plm> TJ-: as the python problem was fixed, 18.04 will have no problems, right?
[19:30] <TJ-> That's one reason I like the PC Engines APU series - based on x86 AMD CPUs - and designed to be routers but can support much more. I've got SSD in one, as well as SD-card, with 4 gigabit ports, and 2 spare mini-pcie slots
[19:30] <TJ-> plm: I'd hope so :)
[19:42] <plm> TJ-: please, which model did you buy? I would like to check the price on the internet =D
[19:42] <plm> TJ-: finished the upgrade. Doing a reboot to check if it boots ok
[19:47] <plm> TJ-:
[19:48] <plm> TJ-: error after reboot mounting the filesystem
[19:49] <plm> TJ-: I will paste a picture because it is not possible to copy from the qemu window
[19:50] <plm> TJ-: https://paste.pics/3V0PK
[19:50] <plm> TJ-: are you there
[19:50] <plm> ?
[19:57] <plm> TJ-: ping =D
[20:05] <TJ-> plm: sorry, was at dinner. APU2C4  https://pcengines.ch/apu2c4.htm
[20:06] <plm> TJ-: hey =D
[20:07] <TJ-> plm: with this case https://pcengines.ch/case1d2bluu.htm
[20:07] <plm> I will check later =D Did you see my picture with the errors?
[20:08] <plm> TJ-: I just did a reboot after upgrading
[20:15] <TJ-> plm: was that when running the guest with qemu?
[20:18] <plm> TJ-: yes
[20:18] <plm> TJ-: that picture is just of guest. I have the qemu running on host x86 linux
[20:18] <TJ-> plm: the only causes of those messages are in containers, not virtual machines
[20:18] <plm> TJ-: look at a new picture
[20:19] <TJ-> plm ^^^ that I could find, so I'm not sure what is going on there.
[20:19] <plm> TJ-: https://i.paste.pics/3V0TM.png
[20:20] <TJ-> plm: "failed to mount tmpfs at /sys/fs/cgroup" suggests sysfs isn't mounted at /sys/
[20:21] <TJ-> plm: so I suspect something wrong with how systemd is starting up, or on some config it is relying on
[20:21] <plm> TJ-: but before the upgrade it booted ok
[20:21] <plm> the upgrade caused that problem, right?
[20:22] <plm> maybe something needs doing before reboot, after upgrading?
[20:22] <TJ-> plm: I'd doubt it - this should all happen automatically
[20:22] <plm> shit :(
[20:23] <plm> that was my hope =D
[20:23] <plm> Are there something to do to change that, maybe before booting?
[20:38] <TJ-> plm: not sure what is going on. systemd (the init daemon that runs as PID 1) has internal logic to mount sysfs, so maybe that is there but it fails to create cgroup part - there isn't enough info. Can you boot it with "systemd.log_level=debug" on the guest kernel's command line ?
[20:41] <plm> TJ-: yes
[20:42] <plm> TJ-: qemu-system-arm: -usb: Could not open 'systemd.log_level=debug': No such file or directory
[20:43] <plm> TJ-: I did:
[20:43] <TJ-> plm: you might want to add a serial console to the guest so you can capture the text output, rather than trying to screenshot what may be a LOT of output
[20:43] <plm> function run_qemu ()
[20:43] <plm> {
[20:43] <plm>         qemu-system-arm -M realview-pb-a8 -cpu cortex-a8 -m 1024 -kernel uImage.realview-vm.kernel -net  nic -net tap,ifname=tap0 -append "root=/dev/mmcblk0p1 rootwait rw ip=${QEMU_IP}:${IP}:${IP}:255.0.0.0:armqemu" -sd rootfs.img -initrd initrd.img -usb systemd.log_level=debug
[20:43] <plm> }
[20:43] <TJ-> plm: !!!! silly
[20:43] <TJ-> plm: you have to add inside the quotes for the -append "..." !!
[20:43] <plm> ohh
[20:43] <plm> right
[20:44] <plm> TJ-:         qemu-system-arm -M realview-pb-a8 -cpu cortex-a8 -m 1024 -kernel uImage.realview-vm.kernel -net  nic -net tap,ifname=tap0 -append "root=/dev/mmcblk0p1 rootwait rw ip=${QEMU_IP}:${IP}:${IP}:255.0.0.0:armqemu systemd.log_level=debug" -sd rootfs.img -initrd initrd.img -usb
[20:45] <plm> TJ-: I tried running with that conf above ^, but it doesn't show more errors than last time
[20:46] <TJ-> plm: so it is failing VERY early then
[20:47] <plm> TJ-: :( I was taking a look at this: https://github.com/nongiach/arm_now
[20:47] <plm> TJ-: what do you think ^?
[20:48] <plm> ubuntu 18.04 on that ^
[20:56] <plm> arm_now works here, but it's not possible to install python and other apps :(
[20:56] <TJ-> plm: are you doing exploit testing? It doesn't seem ideal for much else
[20:56] <plm> TJ-: you are right, I need a full system
[20:56] <plm> TJ-: look this :
[20:57] <plm> TJ-: https://gist.github.com/Liryna/10710751
[20:57] <plm> That ^ will work, I think, right? I see arm_now in that url, but as a comment at the bottom of the page
[20:58] <TJ-> plm: you could boot the guest as far as the end of the initial ramdisk init script, before it calls the systemd init on the real rootfs - that should give you a busybox shell to investigate from. in that -append="..." add "break=init"
[21:00] <plm> TJ-: doing now, moment
[21:01] <plm> TJ-: qemu-system-arm -M realview-pb-a8 -cpu cortex-a8 -m 1024 -kernel uImage.realview-vm.kernel -net  nic -net tap,ifname=tap0 -append "root=/dev/mmcblk0p1 rootwait rw ip=${QEMU_IP}:${IP}:${IP}:255.0.0.0:armqemu systemd.log_level=debug break=init" -sd rootfs.img -initrd initrd.img -usb
[21:02] <plm> TJ-: pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ sudo ./qemu_start.sh
[21:02] <plm> Configuring NAT
[21:02] <plm> TUNSETIFF: Device or resource busy
[21:02] <plm> qemu-system-arm: -net tap,ifname=tap0: could not configure /dev/net/tun (tap0): Device or resource busy
[21:03] <plm> TJ-: sorry
[21:03] <plm> two qemu sessions
[21:03] <TJ-> plm: I was about to say that :)
[21:04] <plm> TJ-:
[21:04] <plm> works
[21:04] <plm> BusyBox prompt
[21:04] <plm> TJ-: (initramfs)
[21:05] <TJ-> plm: right, so "mount" - is there a sysfs at /sys ?
[21:06] <plm> https://paste.pics/3V0YL
[21:06] <plm> no
[21:07] <plm> "mount" shows just "mount: no /proc/mounts"
[21:07] <plm> TJ-: ^
[21:07] <TJ-> plm: hmmm
[21:07] <TJ-> plm: "cat /proc/mounts" ?
[21:08] <plm> TJ-: https://paste.pics/3V0YW
[21:09] <TJ-> plm: so it's not pivoted yet.. so we need to track down where the rootfs is right now. do "ls -la" and show me
[21:10] <plm> TJ-: https://paste.pics/3V101
[21:11] <TJ-> plm: ahhh, under /root/ I think. Show me "ls -la /root/" please
[21:11] <plm> TJ-: https://paste.pics/3V108
[21:12] <TJ-> plm: aha! there's the rootfs! OK "cat /root/proc/mounts"
[21:13] <plm> TJ-: https://paste.pics/3V10G
[21:15] <TJ-> plm: we're making progress, there's the sysfs. lets see what is in it "ls -l /root/sys/fs/"
[21:17] <plm> TJ-: https://paste.pics/3V10T
[21:18] <TJ-> plm: so the node is there but nothing else in it yet. So it must be systemd's job to mount the cgroup fs there, and then add its other file-systems.
[21:18] <TJ-> plm: I'm not sure why it isn't doing that then - the initrd is preparing the ground correctly
[21:19] <plm> hmm
[21:19] <TJ-> plm: try getting the init to continue to boot to systemd with "exit"
[21:19] <plm> ok
[21:19] <plm> TJ-: https://paste.pics/3V112
[21:20] <TJ-> plm: I wonder if this "autofs4" is the root cause?
[21:20] <plm> I dont know :(
[21:21] <TJ-> plm: restart the guest - let it drop to the shell again, then try "modprobe autofs4"
[21:21] <plm> ok
[21:21] <TJ-> plm: I suspect that will fail with the same error and so we'll need to search the file-system for it. I'm wondering if it is missing
[21:21] <plm> TJ-: done, and now?
[21:22] <plm> "modprobe autofs4" just pass to next line
[21:22] <TJ-> that suggests it loaded!
[21:22] <TJ-> try "lsmod | grep autofs"
[21:22] <plm> "exit"?
[21:22] <plm> ok
[21:22] <TJ-> plm: that ought to show the module is loaded
[21:22] <plm> I don't have lsmod
[21:22] <plm> "lsmod: not found"
[21:22] <TJ-> oh phooey of course!
[21:23] <TJ-> because we're not in the real rootfs yet
[21:23] <plm> TJ-: "/bin/sh: lsmod: not found"
[21:23] <plm> ok
[21:23] <TJ-> plm: this may work: "/root/sbin/lsmod | grep autofs"
[21:24] <plm> TJ-: https://paste.pics/3V11T
[21:28] <TJ-> plm: lol - so we got the command to work but it expects /proc/modules, and we're at /root/proc/modules! let's do it manually: "grep autofs4 /root/proc/modules"
[21:29] <plm> TJ-: "grep autofs4 /root/proc/modules" just pass to next line
[21:29] <TJ-> plm: so no match then.
[21:30] <TJ-> plm: ok, lets try "find /root/lib/modules -name 'autofs4.ko' "
[21:30] <plm> "/root/lib/modules" does not exist
[21:30] <plm> "/root/lib/modprobe.d" exists
[21:31] <TJ-> plm: eeek, that'd cause the problems alright!
[21:31] <plm> hmmm
[21:31] <TJ-> plm: show me "ls /root/lib/modules/"
[21:31] <plm> :)
[21:31] <TJ-> plm: all the modules for each kernel version should be under that path
[21:32] <TJ-> plm: which suggests the dist-upgrade didn't complete correctly
[21:32] <plm> "ls /root/lib/modules/" shows "no such file or directory"
[21:32] <plm> TJ-: I did "apt-get -f dist-upgrade"
[21:32] <TJ-> plm: now I'm really concerned; that is very broken
[21:33] <plm> TJ-: after that no error show for me
[21:33] <TJ-> plm how about "find /root/lib/"
[21:33] <TJ-> plm: I want to see what IS found
[21:33] <plm> "find /root/lib/" shows many, many lines; do I need to paste it?
[21:33] <TJ-> plm: I think you need to fix this using qemu-static and a chroot, not a virtual machine.
[21:34] <TJ-> plm: screenshot what you can see, that'll give me an idea
[21:34] <plm> TJ-: "find /root/lib/" shows more than a page, and I can't copy more than a page, because it scrolls past the qemu window
[21:34] <plm> ok
[21:34] <TJ-> plm: show me the last page
[21:34] <TJ-> plm: actually no, hang on
[21:34] <TJ-> plm: show me this instead: "find /root/lib -type d"
[21:34] <plm> TJ-: https://paste.pics/3V136
[21:35] <plm> TJ-: https://paste.pics/3V139 "find /root/lib -type d"
[21:36] <plm> TJ-: https://paste.pics/3V13G "find /root/lib -type d" - page 1
[21:36] <TJ-> plm: can you remind me of the content of the script on the host that starts qemu?
[21:37] <plm> TJ-: 	qemu-system-arm -M realview-pb-a8 -cpu cortex-a8 -m 1024 -kernel uImage.realview-vm.kernel -net  nic -net tap,ifname=tap0 -append "root=/dev/mmcblk0p1 rootwait rw ip=${QEMU_IP}:${IP}:${IP}:255.0.0.0:armqemu systemd.log_level=debug break=init" -sd rootfs.img -initrd initrd.img -usb
[21:37] <plm> TJ-: function run_qemu ()
[21:37] <plm> {
[21:37] <plm> 	qemu-system-arm -M realview-pb-a8 -cpu cortex-a8 -m 1024 -kernel uImage.realview-vm.kernel -net  nic -net tap,ifname=tap0 -append "root=/dev/mmcblk0p1 rootwait rw ip=${QEMU_IP}:${IP}:${IP}:255.0.0.0:armqemu systemd.log_level=debug break=init" -sd rootfs.img -initrd initrd.img -usb
[21:37] <TJ-> plm: the problem seems to be there are no kernel (modules) installed, which means the linux-image-$VERSION-generic packages are missing
[21:37] <plm> hmm
[21:37] <plm> Is it possible to copy from the original qemu image?
[21:37] <TJ-> plm: ahhh, there is the problem! you're loading the kernel image external to the rootfs, so you've not installed the matching kernel modules
[21:38] <TJ-> plm: you need the modules that were built with that kernel ("uImage.realview-vm.kernel") so the versions match
[21:38] <plm> TJ-: http://dpaste.com/0MCCHK1
[21:38] <TJ-> plm: it's not a 'true' virtual machine
[21:38] <plm> this is complete start.sh qemu ^
[21:39] <plm> hmm
[21:39] <TJ-> plm: as in you're providing the kernel and initrd so the OS in the rootfs doesn't have them, and so it likely doesn't even have access to the correct packages that contain the modules
[21:39] <plm> TJ-: but how did that work before the upgrade?
[21:40] <TJ-> plm: when the system was 14.04 this didn't matter because the init system was upstart and it didn't try to insmod any kernel modules. But systemd expects to be able to
[21:40] <plm> TJ-: pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ ls
[21:40] <plm> initrd.img  qemu_start.sh  rootfs.img  uImage.realview-vm.kernel
[21:40] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$
[21:41] <plm> understood
[21:41] <TJ-> plm: the OS inside the rootfs has no control over its boot device, its kernel, or its initial ramdisk
[21:41] <plm> what can we do to fix that?
[21:42] <plm> TJ-: how is it possible to fix that?
[21:42] <TJ-> plm: usually the OS installs the linux-image-$VERSION-generic (contains kernel + modules) and linux-headers-$VERSION-generic (contains files required to build other modules for that kernel) and the modules are stored at /lib/modules/$VERSION/ - but your rootfs is missing all those
[21:43] <plm> TJ-: is there something to do in the qemu config? Like, I will start again from 14.04 and upgrade again to 16.04
[21:43] <TJ-> plm: you need to find the built modules matching that kernel you are using, then install them into the rootfs at /lib/modules/$VERSION/  and then run "depmod --all" to fill the module cache
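TJ-'s recipe, written out as a sketch. `$VERSION` and the source path for the built modules are hypothetical placeholders — they must come from whatever build produced `uImage.realview-vm.kernel`:

```shell
# Install the module tree matching the external kernel into the rootfs,
# then rebuild the module dependency cache with depmod.
VERSION=3.16.0-realview            # hypothetical: must match the kernel build
MODSRC=/path/to/built/modules      # hypothetical: output of that kernel's build
sudo mkdir -p /target/lib/modules/"$VERSION"
sudo cp -a "$MODSRC"/. /target/lib/modules/"$VERSION"/
sudo chroot /target depmod --all "$VERSION"
```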
[21:44] <TJ-> plm: with the external kernel/initrd there's not a lot you can do - they are not Ubuntu kernels
[21:44] <plm> TJ-: now, or before upgrading to 16.04?
[21:44] <plm> hmm
[21:44] <plm> TJ-: so, is there no solution for this case?
[21:44] <TJ-> plm: is there not an Ubuntu kernel image that you can use?
[21:44] <plm> TJ-: I can use any kernel.
[21:45] <plm> but that is what I have
[21:45] <TJ-> plm: where did you get that uImage.realview-vm.kernel ?
[21:46] <TJ-> plm: because that's where you would need to get the matching modules / headers
[21:46] <plm> TJ-: I don't know, it has been here many years, I think :(
[21:48] <TJ-> plm: is that guest image ever run on real hardware, or always as a virtual machine?
[21:49] <plm> hmm
[21:49] <plm> TJ-: always in a VM
[21:50] <plm> TJ-: just after I complete my python app, I generate a package in this ARM VM and then send it to run on a real armv7
[21:50] <TJ-> plm: ok, so you could change things then. I'd suggest installing qemu-user-static and binfmt-support on the host, loop-mounting that rootfs and using qemu-user-static to chroot into it
[21:51] <plm> all right, doing now
[21:52] <TJ-> that package contains "/usr/bin/qemu-arm-static" which you should be able to call on (automatically if you install binfmt-support)
[21:52] <plm> TJ-: binfmt-support already installed and now I installed the "qemu-user-static"
[21:52] <plm> TJ-:  and now?
[21:52] <plm> "loop-mounting that rootfs and using qemu-user-static to chroot into it" how to do that?
[21:52] <TJ-> plm: hmm, it is a while since I had to do this, can't recall all the steps now
[21:53] <plm> TJ-: "mount -o loop rootfs.img /mnt/rootfs"
[21:53] <plm> ?
[21:53] <plm> mount -o loop rootfs.img /mnt/rootfs
[21:53] <plm> https://stackoverflow.com/questions/75862/mount-rootfs-on-loopback
[21:53] <TJ-> plm: "sudo mkdir /target" then "sudo chroot /target /bin/bash" - if that works you'll be running the *ARM* binaries
[21:53] <plm> ok
[21:53] <TJ-> plm: oh yeah, mount loop first!!!
[21:54] <plm> TJ-: I will stop that last VM busybox, ok?
[21:54] <TJ-> plm: "sudo mkdir /target", "sudo mount -o loop rootfs.img /target", then  "sudo chroot /target /bin/bash" - if that works you'll be running the *ARM* binaries
[21:54] <TJ-> plm: yes
[21:54] <plm> ok
[21:55] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ sudo mount -o loop rootfs.img /target
[21:55] <plm> mount: wrong fs type, bad option, bad superblock on /dev/loop0,
[21:55] <plm>        missing codepage or helper program, or other error
[21:55] <plm> TJ-: ^
[21:55] <plm> oh
[21:56] <TJ-> plm: hmm, it is ext4 isn't it?
[21:56] <TJ-> plm: maybe it's due to the OMAP - is it big-endian or little-endian?
[21:56] <plm> TJ-: good question
[21:56] <plm> :)
[21:56] <plm> I think it is little endian
[21:57] <plm> cortex a8 (armv7)
[21:57] <TJ-> plm: "file rootfs.img"
[21:57] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ file rootfs.img
[21:57] <plm> rootfs.img: DOS/MBR boot sector; partition 1 : ID=0x83, active, start-CHS (0x0,32,33), end-CHS (0x17e,113,51), startsector 2048, 6141952 sectors
[21:57] <plm> TJ-: ^
[21:57] <TJ-> plm: hmmm, not much help was it!
[21:58] <TJ-> plm: it looks like a raw disk image though, mentioning partitions, so you need to do "sudo losetup --partscan /dev/loop0 rootfs.img"
[21:58] <TJ-> plm: then you should have some /dev/loop0pX nodes, probably just loop0p1 ?
[21:59] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ sudo losetup --partscan /dev/loop0 rootfs.img
[21:59] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$
[21:59] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ ls /dev/loop0p1
[21:59] <plm> /dev/loop0p1
[21:59] <plm> Yes
[21:59] <TJ-> plm: yay! so then "mount /dev/loop0p1 /target"
[21:59] <TJ-> with 'sudo' of course
[22:00] <plm> done
[22:00] <plm> =D
[22:00] <plm> mounted
[22:00] <plm> TJ-: ^
[22:02] <TJ-> plm: OK, acid test now. "sudo chroot /target /bin/bash" - if this works and you get a root shell (# not $ prompt) then it worked
[22:02] <plm> pi@deskdev-pi:~$ sudo chroot /target /bin/bash
[22:02] <plm> chroot: failed to run command ‘/bin/bash’: No such file or directory
[22:03] <plm> TJ-: ^
[22:03] <plm> pi@deskdev-pi:~$ sudo chroot /target
[22:03] <plm> chroot: failed to run command ‘/bin/bash’: No such file or directory
[22:03] <TJ-> plm: OK, check if one exists first: "ls /target/bin/bash"
[22:03] <plm> pi@deskdev-pi:~$ ls /target/bin/bash
[22:03] <plm> /target/bin/bash
[22:03] <plm> TJ-: yes, exists
[22:03] <TJ-> plm: now we'll check it is ARM format: "file /target/bin/bash"
[22:04] <plm> TJ-: pi@deskdev-pi:~$ file /target/bin/bash
[22:04] <plm> /target/bin/bash: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 2.6.32, BuildID[sha1]=3b053d97ab6f0b39d06f6a7dc204f5398b29cb68, stripped
[22:05] <TJ-> plm: good, so the problem here then must be qemu-arm-static not being called
[22:07] <plm> TJ-: "qemu-user-static" was what you told me to install
[22:07] <plm> "qemu-arm-static" does not exist
[22:07] <TJ-> plm: "pastebinit <( update-binfmts --display | grep -C 6 qemu-arm )"
[22:08] <plm> TJ-: pi@deskdev-pi:~$ pastebinit <( update-binfmts --display | grep -C 6 qemu-arm )
[22:08] <plm> http://paste.ubuntu.com/p/VfCfMQRFFH/
[22:08] <TJ-> so they're there and installed
[22:09] <plm> TJ-: root@deskdev-pi:~# apt-cache search qemu-arm-static
[22:09] <plm> root@deskdev-pi:~#
[22:09] <plm> root@deskdev-pi:~# apt-cache search qemu-user-static
[22:09] <plm> qemu-user - QEMU user mode emulation binaries
[22:09] <plm> qemu-user-static - QEMU user mode emulation binaries (static version)
[22:09] <plm> "qemu-user-static" already installed
[22:10] <TJ-> it's so long since I needed to do this I can't recall all the nuances, but everything looks correct
[22:12] <plm> TJ-: maybe a change in /target is needed to have a bash
[22:12] <plm> TJ-: pi@deskdev-pi:/target$ ls
[22:12] <plm> bin  boot  dev  etc  home  lib  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[22:12] <plm> pi@deskdev-pi:/target$
[22:13] <TJ-> plm: show me "ls /proc/sys/fs/binfmt_misc"
[22:13] <plm> TJ-: on the target or host?
[22:13] <plm> pi@deskdev-pi:~$ ls /proc/sys/fs/binfmt_misc
[22:13] <plm> cli  jarwrapper  python3.5  qemu-aarch64  qemu-arm    qemu-cris  qemu-microblaze  qemu-mips64    qemu-mipsel  qemu-ppc64       qemu-ppc64le  qemu-sh4    qemu-sparc        qemu-sparc64  status
[22:13] <plm> jar  python2.7   python3.6  qemu-alpha    qemu-armeb  qemu-m68k  qemu-mips        qemu-mips64el  qemu-ppc     qemu-ppc64abi32  qemu-s390x    qemu-sh4eb  qemu-sparc32plus  register      wine
[22:13] <plm> pi@deskdev-pi:~$
[22:13] <plm> TJ-: host ^
[22:14] <TJ-> plm: goood
[22:15] <TJ-> aha!
[22:15] <plm> what? =D
[22:15] <TJ-> plm: "sudo cp /usr/bin/qemu-armeb-static /target/usr/bin/"
[22:15] <TJ-> plm: "sudo cp /usr/bin/qemu-arm-static /target/usr/bin/"
[22:15] <TJ-> plm: then "sudo chroot /target /bin/bash"
[22:16] <plm> TJ-: works
[22:16] <plm> =D
[22:16] <TJ-> plm: want me to explain why?
[22:17] <plm> TJ-: because the guest does not know that it is a static arm?
[22:17] <TJ-> well no.. you recall we did "update-binfmts --display" and part of that showed "interpreter = /usr/bin/qemu-arm-static"
[22:18] <plm>  interpreter = /usr/bin/qemu-arm-static
[22:18] <plm> line 33
[22:18] <computa_mike> Hi - I'm running festival on a virtual Ubuntu Server - Is there a virtual soundcard device I can use that will allow me to render text to speech on the server? I've seen references to snd_dummy. Is this the right way to go?
[22:18] <plm> so instead of /bin/bash, it is /usr/bin/qemu-arm-static?
[22:18] <TJ-> so, when we chroot into /target/ that becomes the new root dir, so when the kernel recognises the 'magic' bytes of /bin/bash as being for ARM it tries to execute /usr/bin/qemu-arm-static *inside* the chroot - so we had to copy it in
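Collected together, the working chroot sequence from this session looks like this — a sketch assuming the image's first partition holds the rootfs and /dev/loop0 is free:

```shell
# Attach the raw disk image and scan its partition table into /dev/loop0pX.
sudo losetup --partscan /dev/loop0 rootfs.img
sudo mkdir -p /target
sudo mount /dev/loop0p1 /target
# binfmt_misc names /usr/bin/qemu-arm-static as the ARM interpreter,
# and that path is resolved *inside* the chroot - so copy it in first.
sudo cp /usr/bin/qemu-arm-static /target/usr/bin/
sudo chroot /target /bin/bash     # ARM userspace, emulated per-binary
```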
[22:19] <plm> TJ-: ohh, now in a real "system"
[22:19] <TJ-> plm: you still don't have the proc, sys, and dev file-systems, so I doubt the guest has any network right now; we'll need to do that now we have the chroot working
[22:19] <plm> ok
[22:20] <TJ-> plm: try "ping 1.1.1.1" - I expect it to fail :)
[22:20] <plm> root@deskdev-pi:/# ping 1.1.1.1
[22:20] <plm> PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
[22:20] <plm> Unsupported ioctl: cmd=0x8906
[22:20] <plm> 64 bytes from 1.1.1.1: icmp_seq=1 ttl=55 time=17.9 ms
[22:20] <plm> TJ-: ^
[22:20] <TJ-> plm: eeek! I didn't expect that :D
[22:20] <TJ-> plm: let's test DNS. "ping iam.tj"
[22:20] <plm> TJ-: but a ioctl problem
[22:21] <plm> root@deskdev-pi:/# ping iam.tj
[22:21] <plm> ping: unknown host iam.tj
[22:21] <TJ-> plm: right, no DNS as yet
[22:21] <plm> TJ-: change the resolv.conf?
[22:21] <TJ-> without DNS we can't use apt to install packages
[22:21] <plm> TJ-: ok, I will put "nameserver 8.8.8.8" on /etc/resolv.conf, right?
[22:22] <TJ-> no, do "exit" to return to the host
[22:22] <plm> TJ-: ok
[22:22] <plm> TJ-: after exit, /target is still mounted
[22:22] <TJ-> then do "sudo mount --bind /etc/resolv.conf /target/etc/resolv.conf" - this puts the host's dns config inside the guest.
[22:22] <plm> TJ-: df shows it mounted
[22:22] <TJ-> then do "chroot /target /bin/bash" again, and try "ping iam.tj"
[22:23] <plm> ok
[22:23] <plm> TJ-: pi@deskdev-pi:~$ sudo mount --bind /etc/resolv.conf /target/etc/resolv.conf
[22:23] <plm> pi@deskdev-pi:~$ chroot /target /bin/bash
[22:23] <plm> chroot: cannot change root directory to '/target': Operation not permitted
[22:23] <plm> pi@deskdev-pi:~$
[22:23] <TJ-> 'sudo'
[22:23] <plm> ohh
[22:23] <plm> TJ-: done
[22:23] <TJ-> sorry, I forget it sometimes
[22:23] <TJ-> test the ping
[22:24] <plm> TJ-: root@deskdev-pi:/# ping iam.tj
[22:24] <plm> PING iam.tj (109.74.197.122) 56(84) bytes of data.
[22:24] <plm> Unsupported ioctl: cmd=0x8906
[22:24] <plm> 64 bytes from astute.ly (109.74.197.122): icmp_seq=1 ttl=50 time=210 ms
[22:26] <TJ-> plm: OK we'll ignore that ioctl for now
[22:26] <TJ-> we have DNS
[22:26] <TJ-> so you can do "apt update"
[22:26] <plm> TJ-: doing "apt update"
[22:26] <plm> TJ-: done
[22:26] <TJ-> which means you can then do "apt install linux-image-generic linux-headers-generic"
[22:26] <plm> TJ-: ok, doing
[22:27] <plm> TJ-: downloading
[22:27] <plm> 0 upgraded, 8 newly installed, 0 to remove and 0 not upgraded.
[22:27] <plm> After this operation, 384 MB of additional disk space will be used.
[22:27] <TJ-> this may break because it'll want to install the GRUB boot-loader I think - which may mean we need the devtmpfs mounting (from outside)
[22:27] <TJ-> cancel it for now with Ctrl+C if you haven't already said "yes"
[22:27] <plm> ok
[22:27] <plm> ok, canceled
[22:28] <plm> yes, I had already said yes
[22:28] <TJ-> because the VM was using mmcblk device, and now we're using a loop, we may have problems configuring this
[22:29] <plm> TJ-: is the problem just with grub?
[22:29] <TJ-> plm: but.. now you can use the rootfs in this way (with qemu-arm-static) do you need to run this as a VM any more?
[22:29] <plm> TJ-: any more?
[22:30] <plm> TJ-: I don't understand. We can do anything with this VM?
[22:30] <plm> TJ-: I have original saved
[22:30] <plm> TJ-: original with 14.4
[22:31] <TJ-> plm: no, I mean something different.
[22:32] <TJ-> What I mean is, now you are able to directly run programs (using this chroot method) in the ARM rootfs, will you still need to run it as a virtual machine with QEMU ?
[22:32] <plm> TJ-: oh, you mean whether I need to run it like before, starting start.sh on qemu?
[22:32] <plm> TJ-: no, this way is fine if I can run everything like in a qemu MACHINE
[22:33] <plm> TJ-: maybe just fix that ioctl =D
[22:33] <TJ-> plm: right now the only difference is, entering via chroot, the init system doesn't run, so it doesn't start like a 'real' PC or virtual machine does
[22:33] <TJ-> plm: if you need it to start system services we'd need to do some more work for that.
[22:34] <plm> TJ-: all right, and it doesn't have an IP address either.
[22:34] <TJ-> plm: correct, it's part of the host, not a separate 'pc'
[22:34] <plm> TJ-: no, this is fine. I can use full apt-get etc. etc., right?
[22:34] <plm> Thumpxr: no problem about the missing IP address, I copy the files via cp from guest to host and vice versa
[22:34] <TJ-> plm: yes, as long as you do that mount --bind for /etc/resolv.conf
[22:35] <plm> TJ-: so, now can I upgrade to 18.4?
[22:35] <TJ-> plm: errrrrr! you really like trying to break things don't you!? :D
[22:35] <plm> TJ-: hahaah
[22:35] <TJ-> plm: do you have a snapshot of it as it is now, in case it goes wrong ?
[22:35] <plm> TJ-: I need python3.6
[22:36] <plm> 16.4 has just python3.5
[22:36] <TJ-> plm: if you have a snapshot/copy of the current rootfs then sure, try a do-release-upgrade
[22:36] <plm> "do you have a snapshot of it as it is now, in case it goes wrong ?" not yet. But I can save this VM that we are working on.
[22:37] <TJ-> plm: if you need to make a snapshot you'll need to exit the chroot and unmount and close the loop device first
[22:37] <plm> I'll do a copy and upgrade to 18.04 in the new copy
[22:37] <plm> TJ-: allright, trying that now
[22:38] <TJ-> plm: so you'd do "exit"  then "sudo umount /target/etc/resolv.conf; sudo umount /target; sudo losetup -d /dev/loop0"
[22:38] <TJ-> plm: if all that works you can safely make a copy of the rootfs.img
[22:38] <plm> pi@deskdev-pi:~$ sudo umount /target/etc/resolv.conf
[22:38] <plm> pi@deskdev-pi:~$ sudo umount /target
[22:38] <plm> pi@deskdev-pi:~$ sudo losetup -d /dev/loop0
[22:38] <plm> pi@deskdev-pi:~$
[22:38] <plm> TJ-: doing a copy
[22:39] <plm> TJ-: done, now I go to the chroot again, right?
[22:39] <TJ-> plm: once you've made a copy, then "sudo losetup -P /dev/loop0 rootfs.img; sudo mount /dev/loop0p1 /target; sudo mount --bind /etc/resolv.conf /target/etc/resolv.conf" then "sudo  chroot /target /bin/bash"
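Put together, the teardown / snapshot / rebuild cycle from the last few messages looks like this as a dry run. rootfs.img, /dev/loop0 and /target are the session's names, rootfs-snapshot.img is a hypothetical name for the copy; clear DRYRUN and run as root to execute.

```shell
#!/bin/sh
# Dry-run sketch of the snapshot cycle: detach everything, copy, rebuild.
DRYRUN=1
PLAN=""
run() {
    if [ -n "$DRYRUN" ]; then
        PLAN="${PLAN}+ $*
"
    else
        "$@"
    fi
}

# 1. tear down (after `exit`-ing the chroot); verify with `losetup -a`
run umount /target/etc/resolv.conf
run umount /target
run losetup -d /dev/loop0

# 2. copy the image while nothing holds it open
#    (rootfs-snapshot.img is a hypothetical name)
run cp rootfs.img rootfs-snapshot.img

# 3. rebuild: -P makes the kernel scan the partition table into loop0p1...
run losetup -P /dev/loop0 rootfs.img
run mount /dev/loop0p1 /target
run mount --bind /etc/resolv.conf /target/etc/resolv.conf   # DNS inside the chroot
run chroot /target /bin/bash
printf '%s' "$PLAN"
```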
[22:40] <plm> TJ-: pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ sudo losetup -P /dev/loop0 rootfs.img
[22:40] <plm> losetup: rootfs.img: failed to set up loop device: Device or resource busy
[22:40] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$
[22:41] <TJ-> plm: did you check it was correctly detached before you did the copy? use "losetup -a" to list the active loops
[22:41] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ losetup -a
[22:41] <plm> /dev/loop0: []: (/home/pi/tmp/tmp/vm-cortex_a8_omap3/rootfs.img)
[22:41] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$
[22:41] <plm> TJ-: ^
[22:42] <TJ-> plm: right so it wasn't detached earlier. OK. best to make sure it gets detached, and redo the copy
[22:42] <plm> TJ-: ok
[22:42] <TJ-> try "sudo losetup -v -d /dev/loop0" then check it has gone with "losetup -a"
[22:43] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ sudo losetup -v -d /dev/loop0
[22:43] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ losetup -a
[22:43] <plm> /dev/loop0: []: (/home/pi/tmp/tmp/vm-cortex_a8_omap3/rootfs.img)
[22:43] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$
[22:43] <plm> TJ-: ^
[22:43] <TJ-> plm: something still has a handle to it, we need to find out what.
[22:43] <TJ-> plm: try "mount | grep target"
[22:43] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ mount | grep target
[22:43] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$
[22:44] <TJ-> plm: strange! how about "sudo lsof /dev/loop0p1"
[22:44] <TJ-> plm: actually, does "ls /dev/loop0p1" list the node?
[22:44] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ sudo lsof /dev/loop0p1
[22:44] <plm> lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
[22:44] <plm>       Output information may be incomplete.
[22:44] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$
[22:45] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ ls /dev/loop0p1
[22:45] <plm> /dev/loop0p1
[22:45] <TJ-> plm: right, so "losetup -d" didn't remove it - meaning something has it open
[22:45] <TJ-> plm:  try "sudo lsof | grep target"
[22:45] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ sudo lsof | grep target
[22:45] <plm> lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
[22:46] <plm>       Output information may be incomplete.
[22:46] <plm> TJ-: ^
[22:46] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$
[22:46] <plm> ohh
[22:46] <plm> my GUI
[22:46] <TJ-> plm: oh grrrr!
[22:46] <TJ-> plm: *shoot* it :D
[22:46] <plm> TJ-: when that command ran, the GUI showed the mount point
[22:46] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$ losetup -a
[22:46] <TJ-> plm: of course, those interfering GUIs
[22:46] <plm> pi@deskdev-pi:~/tmp/tmp/vm-cortex_a8_omap3$
[22:46] <plm> doing backup again
[22:46] <TJ-> plm: yay! so copy then rebuild
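For reference, the checks used above to find what was still holding the loop device, plus fuser as an alternative (fuser wasn't used in this session):

```shell
#!/bin/sh
# List active loop devices and their backing files (empty if none).
losetup -a 2>/dev/null || true
# Count mounts still mentioning /target; a GUI automounter (gvfs here)
# can quietly re-mount the partition behind your back.
HELD=$(mount 2>/dev/null | grep -c '/target' || true)
echo "mounts mentioning /target: $HELD"
# For open file handles on the partition node:
#   sudo lsof /dev/loop0p1
#   sudo fuser -vm /target    # alternative tool, not used in the session
```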
[22:47] <plm> *snapshot
[22:47] <TJ-> plm: once you've made a copy, then "sudo losetup -P /dev/loop0 rootfs.img; sudo mount /dev/loop0p1 /target; sudo mount --bind /etc/resolv.conf /target/etc/resolv.conf" then "sudo  chroot /target /bin/bash"
[22:49] <plm> TJ-: done =D
[22:49] <plm> root@deskdev-pi:/# uname -a
[22:49] <plm> Linux deskdev-pi 4.4.0-101-generic #124-Ubuntu SMP Fri Nov 10 18:29:59 UTC 2017 armv7l armv7l armv7l GNU/Linux
[22:49] <plm> root@deskdev-pi:/#
[22:49] <plm> root@deskdev-pi:/# ping 8.8.8.8
[22:49] <plm> PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
[22:49] <plm> Unsupported ioctl: cmd=0x8906
[22:49] <plm> that ioctl is a problem?
[22:49] <TJ-> not sure, but is the ping not working now?
[22:49] <plm> root@deskdev-pi:/# cat /etc/apt/sources.list
[22:49] <plm> deb http://ports.ubuntu.com/ubuntu-ports/ xenial main universe
[22:49] <plm> deb http://ports.ubuntu.com/ xenial main restricted universe multiverse
[22:49] <plm> ok, changing to bionic and upgrade
[22:50] <plm> all right?
[22:50] <plm> TJ-: ^?
[22:51] <TJ-> plm: that ioctl warning - could be 16.04 specific. This issue seems to suggest 18.04 will be OK: https://github.com/ryankurte/docker-rpi-emu/issues/11
[22:51] <TJ-> plm: yes. do "do-release-upgrade"
[22:51] <plm> "do-release-upgrade" instead of "apt-get update; apt-get -f dist-upgrade"?
[22:52] <TJ-> plm: yes
[22:52] <plm> "/etc/apt/sources.list" 3L, 136C written
[22:52] <plm> E138: Can't write viminfo file /home/pi/.viminfo!
[22:52] <plm> Press ENTER or type command to continue
[22:52] <TJ-> d-r-u looks for the release files correctly
[22:52] <TJ-> plm: did you alter /etc/apt/sources.list ? You shouldn't - d-r-u will do that correctly
[22:52] <plm> TJ-: it saved the file, but shows that error, why?
[22:53] <plm> root@deskdev-pi:/# ls -l /etc/apt/sources.list
[22:53] <TJ-> plm: because there is no user 'pi' inside the chroot?
[22:53] <plm> -rw-r--r-- 1 root root 136 Oct  6 22:52 /etc/apt/sources.list
[22:53] <plm> ohh
[22:54] <TJ-> plm: I suspect your host environment copied something over about the 'pi' user. You can check with "env |  grep pi"
[22:54] <plm> TJ-: http://dpaste.com/253DD9W
[22:54] <plm> TJ-: I copy just that qemu-static.. to bash remember?
[22:55] <TJ-> right, that was why
[22:55] <TJ-> you can do "export HOME=/root/" to fix that up whilst inside the chroot
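chroot changes the root directory but keeps the caller's environment, so $HOME still pointed at /home/pi, a path that doesn't exist inside the ARM rootfs; that's what broke viminfo. TJ-'s fix, plus an env(1) variant that was not used in the session:

```shell
#!/bin/sh
# Inside the chroot, point HOME at a directory that exists there:
export HOME=/root/
echo "$HOME"    # prints /root/
# Alternative (hypothetical, not used above): set it while entering,
#   sudo chroot /target /usr/bin/env HOME=/root /bin/bash
```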
[22:55] <plm> TJ-: with "export HOME=/root/" it doesn't show the error any more, yes
[22:56] <plm> TJ-: doing "do-release-upgrade"
[22:56] <plm> TJ-: root@deskdev-pi:/# do-release-upgrade
[22:56] <plm> bash: do-release-upgrade: command not found
[22:57] <plm> root@deskdev-pi:/# apt-get install do-release-upgrade
[22:57] <plm> E: Unable to locate package do-release-upgrade
[22:57] <plm> TJ-: are you sure that is "do-release-upgrade"?
[22:57] <TJ-> hang on
[22:58] <TJ-> plm: "apt install ubuntu-release-upgrader-core"
[22:58] <plm> TJ-: ohh, my host, which is 16.04, has "do-release-upgrade", but the host is x86
[22:58] <plm> ok
[22:58] <plm> E: Unable to locate package ubuntu-release-upgrader-core
[22:58] <plm> TJ-: ^
[22:58] <plm> ohh
[22:58] <plm> TJ-: do I need to set sources.list back to xenial?
[22:58] <TJ-> plm: grrr, on the host do "dpkg -S do-release-upgrade" and see what package it is
[22:59] <TJ-> plm: YES!!!!
[22:59] <plm> root@deskdev-pi:~# dpkg -S do-release-upgrade
[22:59] <plm> ubuntu-release-upgrader-core: /usr/share/man/man8/do-release-upgrade.8.gz
[22:59] <plm> ubuntu-release-upgrader-core: /usr/bin/do-release-upgrade
[23:00] <plm> TJ-: the guest (ARM) doesn't have do-release-upgrade
[23:00] <TJ-> plm: right, so correct sources.list and try again
[23:01] <plm> TJ-: maybe it doesn't exist for ARM?
[23:01] <TJ-> plm: it's python code I think, so it should be :)
[23:01] <plm> root@deskdev-pi:/# cat /etc/apt/sources.list
[23:01] <plm> deb http://ports.ubuntu.com/ubuntu-ports/ xenial main universe
[23:01] <plm> deb http://ports.ubuntu.com/ xenial main restricted universe multiverse
[23:01] <plm> "apt-get update" done
[23:01] <plm> root@deskdev-pi:/# apt-get install do-release-upgrade
[23:01] <plm> E: Unable to locate package do-release-upgrade
[23:01] <TJ-> plm: "apt install ubuntu-release-upgrader-core"
[23:01] <plm> TJ-: ^
[23:01] <plm> works
[23:01] <plm> =D
[23:02] <TJ-> plm finally :D
[23:02] <TJ-> plm: you sure know how to break things :p
[23:02] <plm> TJ-: hahaha, 16.04 to 18.04 can't break, ubuntu rocks
[23:03] <plm> TJ-: ubuntu can't break from one LTS to another, think about stable servers
[23:03] <plm> TJ-: installed
[23:03] <plm> doing "do-release-upgrade"
[23:04] <plm> what is the difference between "do-release-upgrade" and "apt-get -f dist-upgrade"?
[23:04] <TJ-> I'm going to go soon, it is past midnight here and my eyes are dying
[23:04] <TJ-> plm: d-r-u takes care of some corner-case issues
[23:04] <plm> TJ-: ohh, what time is it?
[23:04] <TJ-> plm: but other than that it calls dist-upgrade under the hood
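Per TJ-'s description, do-release-upgrade is, very roughly, a wrapper that retargets the sources and runs a dist-upgrade, with corner-case handling on top. A heavily simplified dry-run sketch of that idea (the real tool does much more, e.g. disabling third-party sources):

```shell
#!/bin/sh
# Very rough dry-run approximation of do-release-upgrade, 16.04 -> 18.04.
DRYRUN=1
PLAN=""
run() {
    if [ -n "$DRYRUN" ]; then
        PLAN="${PLAN}+ $*
"
    else
        "$@"
    fi
}

run sed -i 's/xenial/bionic/g' /etc/apt/sources.list   # retarget the release
run apt-get update
run apt-get dist-upgrade                               # the part d-r-u calls under the hood
printf '%s' "$PLAN"
```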
[23:04] <plm> Here it is 8 PM
[23:05] <TJ-> I'm in England
[23:05] <plm> "d-r-u", I don't understand this, where did you see this?
[23:05] <plm> TJ-: I'm in Brazil.
[23:05] <TJ-> plm: d-r-u == do-release-upgrade - we often shorten these long names to single-letters like that
[23:05] <plm> TJ-: ohh
[23:05] <plm> TJ-: now it is very fast
[23:06] <plm> TJ-: now it is faster than inside the VM
[23:06] <TJ-> plm: your ARM image should be faster now it isn't doing virtual machine emulation
[23:06] <TJ-> plm: with a VM it has to pretend to have all that hardware too and simulate the way it behaves
[23:07] <plm> TJ-: but I do have ARMv7 running now?
[23:07] <plm> doesn't qemu-arm-static do the same emulation under chroot?
[23:07] <TJ-> plm: I suspect it may be possible to use LXD (the container technology) to run a full proper ARM container on your x86 host - but I've never seen anyone do that. I'll investigate that tomorrow
[23:08] <TJ-> plm: yes, but it only has to simulate the ARM instructions in the binaries, not a whole load of hardware
[23:08] <plm> TJ-: hmm
[23:09] <plm> TJ-: now, after upgrading (if no errors) to 18.04, I don't need a "reboot", right? =D
[23:09] <plm> TJ-: http://dpaste.com/2QQDZBK
[23:09] <plm> TJ-: look this error ^
[23:10] <TJ-> plm, no reboot required, correct
[23:11] <plm> TJ-: yesterday I tried an upgrade from 14.04 to 18.04 and it downloaded. Done, of course, with apt-get -f dist-upgrade.
[23:11] <TJ-> plm: and you CAN run an LXD  ARM container on X86. See  https://askubuntu.com/questions/816886/how-do-run-an-arm-lxd-container-on-my-intel-host#816887
[23:11] <TJ-> plm: Answer "no" to that question whilst we investigate it
[23:12] <TJ-> plm: show me the content of /etc/apt/sources.list (from the 'guest' chroot)
[23:12] <plm> TJ-: ok, said "no"
[23:12] <plm> Aborting
[23:12] <plm> Reading package lists... Done
[23:12] <plm> Building dependency tree
[23:12] <plm> Reading state information... Done
[23:12] <plm> root@deskdev-pi:/# cat /etc/apt/sources.list
[23:12] <plm> deb http://ports.ubuntu.com/ubuntu-ports/ bionic main universe
[23:12] <plm> deb http://ports.ubuntu.com/ bionic main restricted universe multiverse
[23:13] <TJ-> plm: did you change it back to xenial earlier?
[23:13] <plm> TJ-: yes, just to install the d-r-u
[23:13] <plm> after that I went back to bionic
[23:13] <TJ-> plm: "sed -i 's/bionic/xenial/g' /etc/apt/sources.list "
[23:14] <plm> root@deskdev-pi:/# sed -i 's/bionic/xenial/g' /etc/apt/sources.list
[23:14] <plm> root@deskdev-pi:/#
[23:14] <plm> TJ-: ^
[23:14] <TJ-> plm: then "cat /etc/apt/sources.list" you should see it is now xenial
[23:14] <plm> root@deskdev-pi:/# cat /etc/apt/sources.list
[23:14] <plm> deb http://ports.ubuntu.com/ubuntu-ports/ xenial main universe
[23:14] <plm> deb http://ports.ubuntu.com/ xenial main restricted universe multiverse
[23:14] <TJ-> plm: OK, now retry "do-release-upgrade"
[23:14] <plm> TJ-: doing
[23:18] <plm> TJ-: it stopped at "Reading state information... Done", but it paused on this line for a while before, too.
[23:19] <TJ-> give it a chance, it should be figuring out what needs doing
[23:19] <plm> working
[23:23] <plm> TJ-: done
[23:23] <plm> TJ-: http://dpaste.com/3JJT5HR
[23:26] <TJ-> plm: it's not done any package upgrades!
[23:26] <plm> TJ-: yes, because it was already on xenial since the last upgrade
[23:27] <plm> the last one, when I did it from 14.04
[23:28] <TJ-> but it's supposed to be installing the 18.04 bionic packages
[23:28] <TJ-> what does "dpkg --print-architecture" report?
[23:29] <plm> TJ-: root@deskdev-pi:/# dpkg --print-architecture
[23:29] <plm> armhf
[23:29] <TJ-> that's available in the ports archive too
[23:30] <TJ-> try "sudo apt update && sudo apt full-upgrade" - see if that suggests package upgrades
[23:31] <plm> TJ-: http://dpaste.com/2H6PH5B
[23:31] <TJ-> that's more like it! go ahead :)
[23:31] <TJ-> looks like d-r-u is broken for armhf
[23:32] <plm> TJ-: d-r-u changes the sources.list automatically, right?
[23:32] <TJ-> Yes, which is why you've got that long list of new packages
[23:33] <plm> TJ-: ok, so "sudo apt update && sudo apt full-upgrade" will be fine to get to 18.04?
[23:33] <plm> =D
[23:33] <TJ-> yes
[23:33] <plm> TJ-: so, doing =D
[23:34] <plm> TJ-: uhull, going to 18.4 ARMv7 =D
[23:34] <plm> 3 minutes =D
[23:35] <TJ-> I'm going to leave now before any more errors pop up - I need sleep!
[23:35] <TJ-> good luck with it. I should be around tomorrow if you need any more help
[23:37] <plm> TJ-: hey! Thank you so much for today!
[23:37] <plm> TJ-: you helped me so, so much!
[23:37] <plm> TJ-: Good night, and tomorrow I'll ping you if I need more help =D
[23:38] <TJ-> g'night :)