[00:20] <jayjo> I noticed that the ubuntu server docs has virtualization info for GUIs: https://ubuntu.com/server/docs/virtualization-virt-tools .. mainly saying to use a workstation to connect to a server. I do have a workstation that I can use to connect to a local server, but what
[00:20] <jayjo> is the recommendation for on the server itself?
[01:35] <mybalzitch> you can install x and run virt-manager locally
[01:35] <mybalzitch> proxmox is worth looking at if all you are doing is virtualization
[05:30] <cpaelzer__> Soni: yes s390x fully works
[05:31] <cpaelzer> Soni: a friend once wrote this blog which shows various ways to run it (and there would be more obviously) and all of them have network :-) http://ubuntu-on-big-iron.blogspot.com/2020/08/12-different-ways-of-running-ubuntu-server-on-kvm.html
[05:33] <cpaelzer> Soni: s390x has some special quirks, unrelated to KVM but at the Host/HW Level e.g. that you need to set the OSA card to level2+primary_router for most "bridge-through" use cases
[05:33] <cpaelzer> Soni: if you have troubles feel free to reach out if it doesn't become a day long support session :-)
[06:13] <jayjo_> I think I've got the bridge networking figured out for my two-NIC VM (pfSense). But I'm installing on a headless ubuntu server. Is there a way to connect to the console of pfSense through KVM on the server machine itself, as opposed to using something like `virt-manager -c qemu+ssh://virtnode1.mydomain.com/system` on a workstation?
[06:29] <jayjo_> this was my attempt: "virt-install --virt-type kvm --name pfsense --ram 2048 --vcpus 2 --cdrom=/home/jayjo/pfSense-CE-2.4.5-RELEASE-p1-amd64.iso --disk /home/jayjo/kvm/images/pfsense.qcow2,bus=virtio,size=10,format=qcow2 --network default --network bridge=virbr0 --graphics none --console pty,target_type=serial --os-type=linux --os-variant=freebsd10.0" but that gives me "connected to domain pfsense;
[06:29] <jayjo_> Escape character is ^]"
[06:40] <cpaelzer__> jayjo_: that means it connected you to the (virtual) console of the system
[06:40] <cpaelzer> if nothing appears there at all then your ISO is not booting in there
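Once the domain exists you can detach from and reattach to its serial console at any time; a minimal sketch, assuming the domain name `pfsense` from the virt-install command above and a working libvirt setup:

```shell
# Reattach to the guest's serial console
# (press Ctrl-] to detach again without stopping the guest)
virsh console pfsense

# If nothing ever appears on the console, first check
# whether the domain is actually running
virsh list --all
```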
[06:47] <jayjo_> hmm. Does that mean that no serial console is available in that iso? There are two memstick installations available at https://docs.netgate.com/pfsense/en/latest/install/download-installer-image.html, one VGA and one serial, and then the ISO.
[06:48] <cpaelzer> I don't know the pfSense iso well enough, but I'd at least expect the kernel messages to fly by initially
[06:48] <mgedmin> iirc qemu supports emulating vga via an ncurses-based text UI
[06:48] <mgedmin> the problem with serial consoles is the OS has to be configured to use it
[06:49] <cpaelzer> mgedmin: that would be "--curses" as qemu arg, the config jayjo_ uses presents no graphics (--graphics none)
[06:50] <jayjo_> that was just my attempt at installing and interacting on a server. I think I should read more into "--curses" and see if that could help with the image I have
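For reference, qemu's text-mode VGA emulation mentioned above is selected with `-display curses` (older qemu versions also accept the shorthand `-curses`). This is a raw-qemu sketch rather than a virt-install one; memory size is arbitrary and the ISO path is taken from the transcript:

```shell
# Boot the ISO with the emulated VGA card rendered as text
# in the current terminal via ncurses
qemu-system-x86_64 -m 2048 -display curses \
  -cdrom /home/jayjo/pfSense-CE-2.4.5-RELEASE-p1-amd64.iso
```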
[06:56] <cpaelzer> jayjo_: FYI the first 3 search engine results I've seen on this use --graphics vnc,listen=0.0.0.0 so maybe the guest needs a graphics device presented to it
[06:57] <cpaelzer> one was different and mentioned that there the cpu passed to the guest was "too new" https://forum.netgate.com/topic/148138/stuck-at-booting-kvm-ubuntu18-04-server but I can't be sure from here which case you are hitting
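A sketch of the same virt-install attempt, switched to the VNC graphics the search results suggest (paths, names, and sizes are copied from the command above; afterwards you would point a VNC client at the host's port 5900):

```shell
# Same install as before, but presenting a VNC-backed graphics
# device to the guest instead of --graphics none
virt-install --virt-type kvm --name pfsense --ram 2048 --vcpus 2 \
  --cdrom /home/jayjo/pfSense-CE-2.4.5-RELEASE-p1-amd64.iso \
  --disk /home/jayjo/kvm/images/pfsense.qcow2,bus=virtio,size=10,format=qcow2 \
  --network default --network bridge=virbr0 \
  --graphics vnc,listen=0.0.0.0 \
  --os-variant freebsd10.0
```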
[07:04] <lordievader> Good morning
[09:15] <xnox> RoyK:  quite out of date. Soni: yeap, we do have OpenStack on s390x and everything it needs. thus yes, there is qemu kvm with networking.
[11:45] <Soni> xnox: cpaelzer: https://paste.ubuntu.com/p/pCxshZq35p/ ?
[13:23] <xnox> Soni:  using debian, not ubuntu? debian compiles for a much older cpu-arch, and doesn't have toolchain vectorization on, nor hw compression enabled, nor encryption accelerated at all. Thus io, network performance, http, tls will all be a lot slower compared to e.g. using Ubuntu 20.04 focal.
[13:23] <xnox> Soni:  also Ubuntu has support together with IBM, thus one can escalate Ubuntu issues to either Canonical or IBM. Not with debian.
[13:24] <Soni> xnox: this is qemu-system-s390x on x86_64, for what it's worth
[13:24] <xnox> Soni:  i generally don't do things quite the same, but i guess that's taste. For example, virtio ccw scsi instead of plain ccw; ccw is very new.
[13:25] <xnox> Soni:  right, but qemu-system-s390x on x86_64 ubuntu? or debian?
[13:25] <Soni> on arch :v
[13:25] <Soni> but sure let's try ubuntu server on qemu
[13:25] <Soni> because you can never try enough distros
[13:25] <xnox> Soni:  please use ubuntu, i cannot speak for qemu on non-ubuntu. We had to build qemu correctly, with backported s390x features, and with the correct s390x firmware build as well to support ipl.
[13:26] <xnox> Soni:  when we say that s390x qemu/kvm works correctly, we do mean ubuntu host.
[13:26] <Soni> oh
[13:26] <Soni> hm
[13:26] <xnox> (hint, channel name ;-) )
[13:26] <xnox> because we did a _lot_ of work to make it work.
[13:26] <Soni> so, ubuntu on qemu on ubuntu on qemu on arch?
[13:26] <xnox> hahahahhahahhahahhaa
[13:27] <xnox> i wish one could load a co-processor kernel, on a subset of cpus, and ask to launch a process using that kernel. Such that i.e. one could do true hw isolation for containers.
[13:34] <Soni> wait, how do you download ubuntu server for s390x?
[13:37] <Soni> oh, found it
[13:41] <Soni> so uh, this is the only big-endian ubuntu, yeah?
[13:50] <ahasenack> yes
[14:07] <xnox> Soni:  with current and future releases => yes
[14:07] <xnox> Soni:  for qemu, we have cloud-images which one can launch in place; just add a cloud config drive to enable ssh authentication on first boot (or install the things you need, etc)
[14:07] <xnox> Soni:  http://cloud-images.ubuntu.com/releases/focal/release/
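A sketch of launching such a cloud image under qemu with a config drive. It assumes `cloud-localds` from the cloud-image-utils package; the image filename follows the naming pattern at the URL above, and the user-data content (password login) is an illustrative assumption:

```shell
# Fetch the focal s390x cloud image
wget http://cloud-images.ubuntu.com/releases/focal/release/ubuntu-20.04-server-cloudimg-s390x.img

# Minimal cloud-config: set a password so first-boot login works
cat > user-data <<'EOF'
#cloud-config
password: changeme
chpasswd: { expire: false }
ssh_pwauth: true
EOF
cloud-localds seed.img user-data

# Boot the image with the config drive attached
qemu-system-s390x -M s390-ccw-virtio -m 2048 -nographic \
  -drive file=ubuntu-20.04-server-cloudimg-s390x.img,if=virtio \
  -drive file=seed.img,if=virtio,format=raw
```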
[14:08] <Soni> okay
[14:08] <xnox> Soni:  we also used to have powerpc (big endian 32bit), but it's all about ppc64el these days (64bit, little endian, openpower)
[14:08] <Soni> yeah it seems nobody wants to break (and then fix) little-endian software
[15:03] <Soni> xnox: oh okay so ubuntu server has network
[15:04] <Soni> thanks :3
[15:06] <Soni> (didn't need to run qemu on qemu)
[19:48] <anton> Hello, I tried enabling livepatch on a VPS running Ubuntu 20.04 and it's saying "2020/10/05 19:41:31 error executing enable: cannot enable machine: bad temporary server status 500 (URL: https://livepatch.canonical.com/api/machine-tokens) server response: machine token already exists". It shouldn't be enabled or exist on this machine. Is there something I have to set to get it to work, since
[19:48] <anton> it shouldn't already exist on the machine?
[20:01] <icey> ddstreet: regarding the OpenStack package set, I can propose an update to it in the morning (assuming I find the right repo ;-) ) - I'd appreciate your thoughts on updating it with the list of subscribed packages from https://bugs.launchpad.net/~ubuntu-openstack/+packagebugs
[20:02] <rbasak> rafaeldtinoco: ^
[20:03] <icey> ah - wondered who would be better as I saw rafaeldtinoco in some commit logs, thanks rbasak!
[20:03] <rafaeldtinoco> icey: yes, please, let me know
[20:03] <tomreyn> anton: this message sounds to me like this vps has the same machine id as another system which livepatch was already enabled on
[20:03] <tomreyn> anton: maybe you or the host cloned it and didn't regenerate the machine id
[20:04] <icey> rafaeldtinoco: easier to email you a list, create a merge proposal, something else entirely?
[20:04] <rafaeldtinoco> rbasak: are we going for a seed update <-> pkgset automatic sync ?
[20:04] <rbasak> rafaeldtinoco: based on https://git.launchpad.net/~developer-membership-board/+git/packageset/tree/pkgset-report.py, looks like openstack is still a manually managed packageset and openstack seeded packages end up in the ubuntu-server packageset?
[20:04] <rafaeldtinoco> yep
[20:05] <rbasak> I don't mind how you approach it
[20:05] <rafaeldtinoco> that's where I was going (and I'm in a meeting now so I might be unresponsive soon)
[20:05] <rafaeldtinoco> icey: lets start with an email to myself ?
[20:05] <rbasak> Between you and icey I guess
[20:05] <rafaeldtinoco> and let me tackle it
[20:05] <icey> rafaeldtinoco: I'm going to bed soon so will also be unresponsive! I'll send you an email tomorrow to start the conversation
[20:05] <rbasak> It might turn out to be tricky: doing it automatically doesn't work so well when there are multiple seeds "claiming" a package, and I think there might be quite a few of those in the openstack team managed packages
[20:06] <rafaeldtinoco> icey: take your time, tomorrow morning we're in a sprint .. can be the afternoon
[20:06] <rafaeldtinoco> icey: =) thanks a lot!
[20:06] <tomreyn> anton: see also: machine-id man page, section 5
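The usual fix for a cloned machine id, per machine-id(5), is to clear it and let systemd regenerate it. A sketch to run as root on the clone (on many systems `/var/lib/dbus/machine-id` is already a symlink, in which case the last step is unnecessary):

```shell
# Remove the stale id copied from the clone source
rm -f /etc/machine-id /var/lib/dbus/machine-id

# Generate a fresh machine id
systemd-machine-id-setup

# Keep the dbus copy pointing at the canonical one
ln -s /etc/machine-id /var/lib/dbus/machine-id
```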
[20:06] <rafaeldtinoco> rbasak: yep, we're stuck there right now (in the automation)
[20:06] <icey> rafaeldtinoco: no worries :) it's 22h here right now, so might be a late start in the morning!
[20:06] <rafaeldtinoco> but its been hard to find time to take this and fix
[20:06] <rafaeldtinoco> ill have to take a PTO to fix DMB related stuff =)
[20:06] <rbasak> Yeah it's far from trivial
[20:06] <rbasak> Maybe in the short term doing it manually would be easier
[20:06] <rafaeldtinoco> yep
[20:06] <rafaeldtinoco> ill do it manually based on the list for now
[20:07] <rafaeldtinoco> and come up with a way to update it
[20:07] <rafaeldtinoco> (perhaps an email to dmb list or something)
[20:07] <anton> ok
[20:11] <rbasak> rafaeldtinoco: as long as you're happy that the list all meet the packageset definition "Upstream OpenStack components" - if they don't then they shouldn't be in that packageset.
[20:11] <rafaeldtinoco> rbasak: and Ill try to cross check with other seeds
[20:11] <rafaeldtinoco> just to make sure we're good in responsibilities
[20:11] <rbasak> +1
[22:39] <znf> Is there a way to find packages that are installed, but not part of any currently enabled repository?
[22:40] <znf> I just did a 14.04 -> 20.04 upgrade
[22:40] <znf> and I feel like there's SO much stuff left behind
[22:46] <TJ-> znf yes, "apt list --installed | grep ,local"
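For context, apt tags packages that have no installation candidate in any enabled repository with `local`, which is what the `,local` match picks up. A quick demonstration on fabricated sample lines mimicking `apt list --installed` output:

```shell
# Two fabricated lines in `apt list --installed` format;
# only the one tagged ",local" survives the filter
printf '%s\n' \
  'foo/now 1.0 amd64 [installed,local]' \
  'bar/focal 2.0 amd64 [installed,automatic]' \
  | grep ',local'
```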
[22:47] <znf> thanks
[23:35] <Soni> how long does it take to install ubuntu server s390x on qemu? it's been like 8 hours...
[23:39] <znf> uhm
[23:39] <znf> sounds like your qemu is very slow :)
[23:48] <JanC> it's translating s390x code to amd64 code
[23:48] <JanC> I assume
[23:49] <JanC> but 8 hours seems like a lot
[23:50] <Soni> uh hm
[23:51] <Soni> wait, how many cores does qemu emulate by default?
[23:51] <Soni> sorry, CPUs
[23:51] <Soni> (actually we have no idea, we don't know anything about mainframes lol)
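For what it's worth, qemu gives a guest a single virtual CPU unless told otherwise; more can be requested with `-smp`. A sketch (machine type, memory size, and image filename are assumptions carried over from the cloud-image discussion above):

```shell
# Give the emulated s390x guest 4 vCPUs and 4 GiB of RAM
qemu-system-s390x -M s390-ccw-virtio -smp 4 -m 4096 -nographic \
  -drive file=ubuntu-20.04-server-cloudimg-s390x.img,if=virtio
```

Note that under pure emulation (s390x on x86_64, no KVM) every guest instruction is translated in software, so extra vCPUs help far less than they would on a KVM host, which is consistent with the very slow install reported above.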