[01:01] I am reading through this guide to set up passthrough for guest OSes: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Isolating_the_GPU , when it says "You can then add those vendor-device ID pairs to the default parameters passed to vfio-pci whenever it is inserted into the kernel." is this implying create a file or add it to the grub kernel config?
[01:02] I am using ubuntu-server, that just happens to be a recommended guide
[01:27] Howdy, folks. I'm hoping someone might be able to point me in the right direction for this. I have an Ubuntu 16.04 machine that I just upgraded from 14.04. It is a KVM/libvirt hypervisor, serving a manually configured (outside of libvirt, that is) routed network to its VMs. This much is fine; my VMs get addresses, and successfully are able to reach and be reached from the internet.
[01:27] The challenge is that the VMs can no longer initiate any traffic to my intranet machines.
[01:31] This means that while my router and hypervisor can successfully communicate bidirectionally on port 22/80/443/etc., none of my other physical systems can. My workstation/laptop (this is my home network) can successfully determine via netcat that ports are open on the VMs, but it cannot receive traffic from those ports.
[01:31] ( examples: http://lpaste.net/8120205894421053440 ) ... any suggestions on where I might look to determine what has become misconfigured as a result of the upgrade on the hypervisor would be appreciated.
[01:37] This is confusing to me because the VMs can all reach one another; and they can reach the router. It's just everything *ELSE* on the physical network they can't reach.
[01:39] Logos01: are you using a firewall or doing something else with that bridge?
[01:41] Logos01: also have these been rebooted etc after the upgrade and got on a new kernel?
[01:41] drab: Well, I *do* have an haproxy instance acting as a loadbalancer for ports 80/443 from the outside world to my machines for the openconnect daemon and webserver stack; but I also have an independent VM that's running a Katello instance to act as management for the machines. Mostly it's a lab for me to practice/sandbox/experiment my sysadmin-skills on my own recognizance.
[01:41] drab: Yes. This started on Friday and I've rebooted a couple of times.
[01:41] I updated on Friday, it's persisted over the weekend. Granted I didn't really investigate it on Saturday.
[01:42] I've pretty much narrowed it down to the VMs not getting routing information from the VM-net gateway onwards (mtr is a lovely thing)
[01:42] ok, if you get on the hypervisor and tcpdump, do you see the netcat traffic from the www-node1 going to the laptop?
[01:42] ok
[01:43] are the vms on a diff subnet/network than the physical stuff on your lan?
[01:43] but it sounds like you got it already :)
[01:44] Yeah, the VMs are all on 192.168.121.0/24 ; the physical systems are all on 192.168.1.0/24
[01:45] ok
[01:45] My router (192.168.1.1) has a routing table entry to the hypervisor -- 192.168.1.3.
[01:46] http://lpaste.net/354253 <-- mtr output example
[01:46] you mean an entry to direct .121/24 to the HV?
[01:47] Logos01: I suggest trying 'ip route get ....' commands on all the different computers (real and virtual) with IPs from all the real and virtual computers..
[01:47] what's ip route ls on the VMs?
[01:47] yeah, or that, try the get
[01:48] sarnold: That all looks correct.
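A minimal sketch of those 'ip route get' checks, using the subnets from this conversation (192.168.1.0/24 physical, 192.168.121.0/24 VMs, hypervisor at 192.168.1.3); the .10 and .50 host addresses are made-up examples, substitute real ones:

    # On the laptop (192.168.1.x): how would I reach a VM?
    ip route get 192.168.121.10
    # "via 192.168.1.3 dev eth0" means a direct route to the hypervisor exists;
    # "via 192.168.1.1" means packets detour through the router instead

    # On a VM (192.168.121.x): how would I reach the laptop?
    ip route get 192.168.1.50

    # On the hypervisor: confirm it routes both ways and actually forwards
    ip route get 192.168.1.50
    ip route get 192.168.121.10
    sysctl net.ipv4.ip_forward    # should print 1 on a routing hypervisor

Comparing the next hop and source address reported on each machine is what exposes the kind of asymmetric return path discussed next.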
[01:48] http://lpaste.net/354254
[01:49] drab: And yes, the router has a static routing table entry using 192.168.1.3 (the hypervisor's physical address) as the gateway for 192.168.121.0/24
[01:50] Logos01: if you tcpdump traffic on the bridge, do you see the replies on the br interface?
[01:51] I'm guessing they are getting lost on the HV and not going back to the destination
[01:51] maybe something funny with asymmetric routing, maybe they are taking a diff path on the way back and getting dropped
[01:51] I assume you tcpdump'ed on your laptop, yes?
[01:51] and don't see that traffic coming back at all
[01:52] I'm wondering if the laptop is sending traffic to the router, but it gets it back directly from the VM
[01:52] doesn't recognize it and drops it
[01:53] http://lpaste.net/354255 <-- not necessarily useful but
[01:54] Hrm... interesting ... laptop1 is in fact showing icmp from katello
[01:54] * Logos01 tries adding the routing table entry on the laptop locally
[01:54] :)
[01:55] * Logos01 facepalms
[01:55] Why did I not need to do this before, I wonder ...
[01:55] You know what? I may have had to and it's just been so long I don't remember it.
[01:55] maybe you did and forgot? :)
[01:55] yeah, I do that all the time, that's why I use ansible now :P
[01:55] or whatever, just don't do changes by hand
[01:55] been bitten by it far too many times
[01:56] drab: ... my ansible setup is on my laptop and was what was inspiring me to work on this.
[01:56] <_<
[01:56] even if it doesn't work after an upgrade I see stuff failing and I know I have to change something
[01:56] Because I couldn't ssh to the VMs.
[01:56] lol :D
[01:56] good inspiration
[01:56] I mean it's only 17.04 and I'm finally migrating my physicals from 14.04 to 16.04. You can tell I am suuuper on it about latest-and-greatest.
[01:57] Anyhow, I appreciate it.
[01:57] latest and greatest is overrated :)
[01:58] Logos01: btw, maybe there was a point in this all... :) any chance you can share libvirt setup? I'm trying to get started on KVM
[01:59] Logos01: I have my own bridge and stuff, so I want none of the automagic
[01:59] at least until I understand where the magic comes from
[02:00] drab: Oh. I ripped out the libvirt networking component and am instead running my own manually initialized dnsmasq instance (it's not starting anymore but my VMs are all statically configured now anyhow)
[02:00] Also, the upgrade to 16.04 overwrote my /etc/iptables/rules.v4 file so it's a mess until I rewrite it.
[02:00] But...
[02:00] k, care to share how to rip that out? I have a centralized dnsmasq, don't want any additional dnsmasq or bridge set up
[02:01] just use the bridge I tell it to
[02:01] You just set the default network it defines to not autostart
[02:01] (And then never start it)
[02:01] drab: depending upon how little magic you want you may prefer a different tool entirely; libvirt after all is just a wrapper around qemu and iptables and so on glued together with an xml parser
[02:02] sarnold: I'd love that, but I couldn't find much documentation on that and I'm already quite behind to figure it all out
[02:02] so trying to find a compromise between magic and starting from scratch
[02:03] sarnold: Lots of things work with libvirt as the backend for their hypervisor management though
[02:03] Like in my case I was actually using Katello to spin-up / spin-down VMs
[02:03] http://lpaste.net/354256 <-- current state of my hypervisor's iptables. (I'm not thrilled with this.)
[02:03] Logos01: that's very true.
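A sketch of that "rip it out" step: disabling libvirt's default NAT network (and its dnsmasq) and pointing a guest at an already-configured bridge instead. The bridge name br0 is an assumption here; use whatever is already defined in /etc/network/interfaces:

    # Stop the automagic "default" network and keep it from starting at boot
    virsh net-destroy default
    virsh net-autostart default --disable
    # virsh net-undefine default   # optional: remove the definition entirely

    # Then give each domain an interface on the existing bridge
    # (virsh edit <vm>), something along the lines of:
    #   <interface type='bridge'>
    #     <source bridge='br0'/>
    #     <model type='virtio'/>
    #   </interface>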
[02:03] It's fugly and I know it.
[02:03] but before I need to get a container setup for kvm
[02:04] Used to be a loooot prettier.
[02:04] so I can experiment without trashing the host
[02:04] Logos01: also you don't happen to have tried libvirt with lxc, do you?
[02:05] I was playing around with the notion a while back.
[02:05] But I never went anywhere with it.
[02:05] Honestly I'm starting to look at rkt right now -- especially with the asshattery that Docker is pulling now.
[02:05] (Monthly releases with each new monthly release marking the end-of-life of the previous month.)
[02:05] Of the docker engine itself, that is. (Oh, they'll have LTS too. Quarterly instead of monthly.)
[02:07] drab: But yeah, once you *HAVE* a bridge device manually created and configured to allow traffic in/out via iptables forwarding rules, you can just define libvirt domains (VMs) to use that bridge-device for their networking.
[02:09] drab: a few similar things are listed here http://www.linux-kvm.org/page/Management_Tools
[02:09] I just added mine to /etc/network/config
[02:09] sarnold: iirc you have a zfs nas, don't you? do you happen to have looked into sanoid/znapzend for backups?
[02:09] ZFS ... :D
[02:10] drab: I've only got the one zfs system so far, so I haven't looked at sending snapshots anywhere yet
[02:10] k
[02:10] http://lpaste.net/354257
[02:10] I've narrowed it down to those two solutions, need to test them and figure out how to work with ZVOLs since I'll need those for KVM
[02:11] drab: I've never actually heard of either. I should really start doing zfs send/recv for my snapshots
[02:11] whoa, r00t? crazy man :)
[02:11] sole filesystem
[02:11] Was that way back in 12.04 too
[02:11] O_O
[02:12] Yeah, the laptop's made a few migrations with me. I even once used zfs send/recv to migrate the OS from one laptop to another.
[02:12] so how did you put / on zfs?
[02:12] zfs-native PPA
[02:12] And, at the time, zfs-grub PPA
[02:13] u blogged about it? or any links?
[02:13] heh those days it felt even hairier than today
[02:13] drab: I basically followed the howto/walkthroughs for this from the zfs-native ppa peeps
[02:13] k
[02:13] will google that out, thank you
[02:14] https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer
[02:14] I ended up putting two drives in mdadm for root
[02:14] drab: stick to rlaager's guide for today's stuff
[02:14] and the rest on zfs
[02:14] sarnold: Heheh, hard to find now though
[02:15] sarnold: He's actually merged it into the page I linked to
[02:15] Well. https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS
[02:15] yeah then we're looking at the same thing
[02:16] not for me then, seen that before
[02:16] ... I have to figure out how to get zfSnap to honor the com.sun:auto-snapshot flag
[02:17] drab: I have historically had a habit of moving from one company to the next once every year to year-and-a-half. I pretty much always wind up using zfs as sole filesystem on my personal linux machines when doing so
[02:18] So ... I've done that process a few times.
[02:18] Sadly, on my *current* work laptop, they gave me an encrypted disk drive so I can't reinstall the OS. :-(
=== xlogik_ is now known as xlogik
=== lfrlucas_ is now known as lfrlucas
=== skarface is now known as antix
=== fr0st| is now known as fr0st
=== Raboo_ is now known as Raboo
=== thebwt_ is now known as thebwt
=== eshlox_ is now known as eshlox
[05:26] Hey there! I'm running 16.04 server. I'd like to run several commands on shutdown/reboot. I'm looking for something like rc.local, but for the shutdown process. In which file would I put those commands?
[05:27] faekjarz, do you know about @reboot in cron
[05:27] oops I do not think I read your question properly
[05:27] so you want when you shutdown run these commands
[05:27] aye
[05:37] i think i'll see what i can do with this → https://www.freedesktop.org/software/systemd/man/systemd-halt.service.html
[05:38] faekjarz: Best I can think of in your case would be to start up a dummy service that has a series of ExecPost steps
[05:39] And never directly interact with the service otherwise; shutting down the host would cause it to shut down that dummy service and thus execute those commands as part of the shutdown process.
[05:40] Err, they'd be ExecStopPost commands
[05:40] Just have the actual startup command be something tiny and silly like a simple script with a "while true ; do sleep 10 ; done" command inside it.
[05:43] Logos01: interesting, thanks
[05:44] You'd want to make them part of the sysinit.target.wants, I *think*
[05:45] Or actually hey -- there's a shutdown.target.wants
[05:45] haha -- forgot all about this.
[05:47] https://unix.stackexchange.com/questions/39226/how-to-run-a-script-with-systemd-right-before-shutdown/39604#39604
[05:47] There ya go
[05:49] yea, looks about right, thanks Logos01! Once again, i can avoid actually understanding systemd ;D
[05:57] (oh, there's a #systemd channel …of course)
[05:58] jamespage: did the tempest run return good results on bug 1672367 to mark it v-done?
[05:58] bug 1672367 in libvirt (Ubuntu) "libvirt uses password-secret on old style drive_add syntax" [Undecided,New] https://launchpad.net/bugs/1672367
[06:03] faekjarz: I approve of that sentiment.
=== Isla_de_Muerte is now known as NwS
[07:28] cpaelzer: lemme check
[07:34] cpaelzer: I swear I triggered the test run but apparently not - have done so now
=== smb` is now known as smb
=== smb is now known as Guest44125
=== wyre_ is now known as wyre
[08:27] I'm getting an error while trying to restart sshd. (Tried /etc/init.d/ssh restart and the systemd version)
[08:27] ssh.service failed because the control process exited with error code. See "systemctl status ssh.service" and "journalctl -xe" for details.
[08:28] FilipNortic_: and did you do as instructed?
[08:29] yes
[08:30] was no real info in either case
[08:31] well status says: failed to start. UNIT entered failed state
[08:31] FilipNortic_: then we can't help you :) but just in case, please do pastebin the logs
[08:32] FilipNortic_: in the status, there's an excerpt from the log below, anything in there?
[08:34] blackflow: http://lpaste.net/354264
[08:36] FilipNortic_: there should be more, please check with journalctl -xe, or journalctl -p err or journalctl -u ssh.service -n 40
[08:39] ok i'll try to extract the relevant parts
[08:41] error: Bind to port 22 on 0.0.0.0 failed: Address already in use. and fatal: safely_chroot: stat("/home/ftpuser"): No such file or directory
[08:42] there you go :) So first, is there an ssh daemon already running? Is this in a container bound to the host's IP?
[08:44] can there be multiple ssh daemons
[08:44] yes, but each bound to its own port
[08:44] (though not with the default set up in Ubuntu, you'd have to run additional ssh daemons either in a container, or manually / with a custom unit file)
[08:46] root 23097 1 0 Apr03 ? 00:00:01 /usr/sbin/sshd
[08:46] this is the only sshd process i find
[08:49] FilipNortic_: ss -4lp | grep ssh
[08:49] this will give you the port used and pid of the process named ssh
[08:49] if you have that, then you can't run additional daemons on the same port
[08:49] but.... sounds to me like you're doing something wrong here. What exactly do you wish to achieve?
[08:50] tcp LISTEN 0 128 *:ssh *:* users:(("sshd",pid=23097,fd=3))
[08:50] we were trying to configure sftp
[08:51] right, so configure it within the existing ssh daemon, you don't need to run an additional one (and how do you even run it btw)
[08:53] there wasn't supposed to be an additional one; the first time we restarted sshd it worked fine and ftp worked, then we tried to give access to the group instead and upon that restart we got the bind error
[08:53] is it trying to start itself twice or something like that?
[08:59] FilipNortic_: can you pastebin your sshd_config file?
[08:59] http://lpaste.net/5543173436047622144
[09:05] can't see anything weird in it
[09:06] FilipNortic_: yeah, looks okay, except I don't think you need any options for internal-sftp.
[09:07] FilipNortic_: also, this setup is very unsafe, you allow password auth and use default port 22. Just a matter of time until a bot breaks in.
[09:09] yeah I know that much. so far they all try as root but i will change it, just need to resolve this first
[09:10] I still have no clue what is wrong
[09:15] FilipNortic_: you can't log in as root, you have PermitRootLogin no
[09:16] yeah but i still see bots trying
[09:16] was kind of my point (though a sort of moot one)
[09:19] ah you mean the bots try as root.... yeah.... sorry, my mind was in the context of your sftp group users
[09:25] but what i really wish to know is why i can't restart sshd, if there's another process blocking why can't I see it
[09:26] To enable sftp on my host I needed to add 'Subsystem sftp /usr/lib64/misc/sftp-server'.
[09:26] That path might be a bit different on Ubuntu.
[09:29] internal-sftp is needed with that group match stanza to chroot sftp users, otherwise they could roam freely on the system
[09:30] and by forcing the command it blocks regular ssh login, allows only sftp
[09:30] as for why it's behaving like FilipNortic_ says, I don't know. it's not normal behavior for ssh
[09:33] any idea how to get back to a normal state... should i try and kill the sshd process
[09:44] FilipNortic_: first, when you "systemctl restart ssh.service", does it log an error about binding to port 22 again?
[09:46] error: Bind to port 22 on 0.0.0.0 failed: Address already in use.
[09:46] sshd[30276]: error: Bind to port 22 on :: failed: Address already in use.
[09:46] yeah
[09:51] weird.
=== rizonz_ is now known as rizonz
[10:03] zul: urllib3 and requests are still wedged in zesty-proposed - something you have time for?
[10:08] cpaelzer_: testing OK - marked bug 1672367 as requested
[10:08] bug 1672367 in libvirt (Ubuntu) "libvirt uses password-secret on old style drive_add syntax" [Undecided,New] https://launchpad.net/bugs/1672367
[10:11] thank you so much jamespage!
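A sketch of the chrooted-sftp setup being described above, with assumed names (group "sftponly", chroot under /srv/sftp); this is not FilipNortic_'s actual config, just the usual shape of it. The earlier safely_chroot error about /home/ftpuser is what sshd prints when the ChrootDirectory target does not exist:

    # /etc/ssh/sshd_config (excerpt)
    Subsystem sftp internal-sftp

    Match Group sftponly
        ChrootDirectory /srv/sftp/%u
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no

    # The chroot target must exist, be owned by root, and not be group/world
    # writable, otherwise sshd rejects the login:
    #   sudo mkdir -p /srv/sftp/ftpuser/upload
    #   sudo chown root:root /srv/sftp /srv/sftp/ftpuser
    #   sudo chown ftpuser:sftponly /srv/sftp/ftpuser/upload
    # Validate before restarting, so a typo can't lock you out of ssh:
    #   sudo /usr/sbin/sshd -t && sudo systemctl restart ssh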
[10:11] the next bunch of SRUs are waiting, so this should help to clear the queue
[10:11] cpaelzer_: yw
[10:12] well waiting is too muhc, I need to code them up first :-/
[10:12] ah yes the relentless queue of SRUs
[10:12] if you are not having them you either own "hello" or your package isn't used a lot :-)
=== thib_ is now known as thib
[10:46] rbasak: given my frequent typos, could I ask you to re-release uvt as uvtoool
[10:46] it would be nicer and auto-supports triple-o that way right :-)
[11:02] :-)
[11:16] when I run: netstat -tapn | grep ssh
[11:16] i get: tcp6 0 0 :::22 :::* LISTEN 23097/sshd and tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 23097/sshd
[11:17] this is one for ipv4 and one for ipv6 ?
[11:17] it's the same pid as you can see
[11:18] ahh right
[11:19] also I think 'ss' is preferred to netstat these days
[11:20] yeah i ran that first
[11:20] "ss -tap" is nice
[11:20] hateball: It is.
[11:20] kind of hoped it missed something
[11:20] needs sudo to show which pid uses ports <1024 iirc
[11:20] Just like the ipconfig -> ip
[11:20] ifconfig*
[11:22] is ip the new one?
[11:25] FilipNortic_: Yes.
[11:26] FilipNortic_: Do you still have the problem of starting ssh? If you are not connected via ssh you could kill that remaining process and start the service again.
[11:28] well ssh is my only method of connection right now
[11:29] but does killing the sshd service stop the established connections
[11:39] any other recommendations? change port and see if i can start another daemon there?
=== tsimonq2alt is now known as tsimonq2
[11:56] You could do that as a detour to restart ssh on the original port.
[11:57] Though, I am not sure how ssh behaves with running multiple daemons.
[11:59] not sure if this suits but "sshd has had support for multiple ListenAddress directives for a good while"
[12:05] so it might still try and restart the old one
[12:17] FilipNortic_: There is no way of access of another kind?
[12:18] there should be a vnc point set up by the server provider but it comes up blank when i try it
[12:19] guess i have to call their support
[12:26] Or you run the commands in a screen/tmux and hope for the best :P (bad advice, I know)
=== cpaelzer_ is now known as cpaelzer
[12:44] is it safe to remove package "landscape-common" if I don't plan to use landscape?
[12:45] Check the reverse dependencies.
[12:45] fnordahl: probably start doing SRU processing again later this week
[12:45] ronator: If nothing (important) requires it, I'd say it is safe to remove.
[12:46] jamespage: sure...
[12:46] lordievader: that's exactly where my question was aiming at :)
[12:47] zul: that would be great. just an update of the package to be based on horizon-9.1.2 would suffice as the necessary patches have been upstreamed
[12:48] ronator: apt-cache can tell you the reverse dependencies.
[12:48] thx lemme check that
[12:49] lordievader: like so? $ apt-cache rdepends landscape-common
[12:49] shows only landscape-common and -client so should be fine thx
[12:50] ronator: Indeed, apt will also show you if it has to remove more due to a dependency.
[12:52] lordievader: yes I know. We tested landscape for a short period of time, I removed it and now I was unsure if landscape-common was always there. removing didn't raise any dependencies, but you never know, so I asked and learned something new :)
[12:55] jamespage: my old nemesis dogtag-pki
[15:17] hello
[15:17] I have 4 network devices
[15:17] enp5s0, enp6s0, rename4, rename5
[15:17] why the hell are two of them called rename*
[15:21] Sounds like they got stuck halfway through the rename, possibly due to a conflict.
=== JanC is now known as Guest85044
=== JanC_ is now known as JanC
[15:21] Do you have four NICs in reality? And can you reproduce this eg. on a live USB boot?
[15:21] Also, which release?
[15:22] rbasak, no, there is a dual 82571EB and a dual 82574L controller
[15:22] one is onboard, one is pcie
[15:22] iirc, dmesg should have some indication of what is going on (or syslog)
[15:23] Perhaps it's trying to rename each of the two NICs on each controller to the same enpXs0 name?
[15:24] this is my dmesg: http://paste.ubuntu.com/24313876/
[15:24] I'll try to find something :)
[15:26] [ 4.009571] e1000e 0000:02:00.0 rename4: renamed from eth2
[15:26] [ 4.022635] e1000e 0000:02:00.1 rename5: renamed from eth3
[15:27] Which release?
[15:27] looks to be 16.04 with 16.04.1 kernel
[15:27] yes
[15:28] that rename is happening much earlier than the other
[15:29] you of course, if not concerned with hotplug, could use net.ifnames=0 (iirc)
[15:30] I wonder if this is a bug. If so, it'd be nice to fix it properly.
[15:31] i think it would require some systemd bugging -- may be worth filing regardless
[15:34] btw: the pcie LAN card is the new device
[15:34] before I just used the onboard
[15:35] but the same card was used in another 16.04 server before without any problems
[17:17] evening ,, i have like 10 drives lying around and heard of zsf , silly question i need to first format them all to the same filesystem format ? they are mostly ntfs
[17:17] Henster: you mean zfs?
[17:18] yes sorry
[17:24] Henster: nah, it won't care, zfs utils will just take care of that
[17:24] Henster: aiui, what drab said
[17:24] Henster: you just need to tell zfs what disks to use
[17:25] wow ok cool
[17:25] and is it easy just to add extra drives ?
[17:25] yes and no
[17:25] yes as in it's easy, no as in it probably doesn't work as you think it does
[17:26] do all the drives have to be the same size ?
[17:26] Henster: please read through this at the very least: https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
[17:26] zfs is great, but also not particularly forgiving
[17:26] heh
[17:27] it's closer to what linux used to be: friendly but chooses its friends wisely
[17:27] ok cool thanks was looking for more content ..
[17:27] is there a newer or better version than zfs now ?
[17:28] https://wiki.ubuntu.com/ZFS
[17:28] follow that to get going
[17:28] read the other thing past the first chapter about Debian to understand more about the concepts
[17:28] it's still the best walkthrough around
[17:28] along with the other one I'm about to paste... sec
[17:29] this is the best resource on zfs I've found, explaining the concepts in enough detail that you won't shoot yourself in the foot while not being overwhelming (and avoid cargo culting some of the many misunderstandings spread on the internet):
[17:29] cool
[17:29] https://forums.freenas.org/index.php?threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/
[17:32] thank you so much , new toys for my server :)
[17:32] there's a few more good docs I bookmarked, but don't want to overwhelm you, that should keep you busy for a while :)
[17:33] as you probably heard for raid, raidz is not a backup, so backup your stuff!
[17:41] anyone know of a clamav/clamd *alternative* that works fine with amavisd-new? ClamAV eats over 780MB RAM running on even an idle mail server, so it causes some... problems.
[18:07] re "raid is no backup" https://twitter.com/leyrer/status/847816162557689857
[18:08] lol, and the prize goes to https://twitter.com/nuintari/status/848249592609202179
[18:08] but I 'spose only for MP fans :D
[18:12] hehehe
[18:14] Ancient machines
[18:15] sarnold: ohai
[18:15] gutenabend dasjoe :)
[18:15] hallo teward :)
[18:15] Hi sarnold :)
[18:16] sarnold: what do you know about clamav being a memory-consuming resource whore on servers and if there's any solution for it? Or should I be bothering the server team to add a warning to the server guide about ClamAV taking up massive resource usage and have minimum reqs. of 2GB RAM or more to use it on the server
[18:16] since you've got some security team insights :P
[18:16] (clamav for mailservers == resource hog)
[18:17] heh? clamav doesn't consume a lot of memory
[18:18] teward: heh, 2gigs feels smallish today..
[18:20] patdk-wk: well, running clamav ate 750MB of RAM on a VPS where i'm setting up a test mailserver with amavis+clamav
[18:20] and it actually swapped so much I had to force-restart the VPS
[18:20] so............
[18:20] clamav did? or your av libs for clamav did?
[18:20] i'll let you restate your question (E: Unclear what's being asked)
[18:20] my clamav with a LOT of 3rd party libs added to it, is using 710megs of ram
[18:21] is clamav using all that memory? or is your clamav-virus-definitions using it all
[18:21] patdk-wk: stock ClamAV from the repos. 650MB RAM + the rest was swap.
[18:21] patdk-wk: looked to be the clamav process on htop
[18:21] what clamav process? clamavd?
[18:22] i'd have to relaunch it to check. I'm currently away from my SSH console, but will get back to you :)
[18:22] odd though
[18:22] mine is using 710megs exactly, no swap
[18:22] unless it's a leaky version in Xenial
[18:22] using clamav libs, securite, bofhland, foxhole, ...
[18:23] but then the stock clamav libs are 250megs
[18:23] well, i have a trial of Avast's solution for antivirus, giving that a test go, otherwise Postfix + Dovecot + Amavis + SPF + DKIM + DMARC all works heh
[18:23] I use bitdefender also, but that is slow, cause it won't run in daemon mode and uses lots of ram also
[18:24] but then, my mailservers have 30gigs of ram
[18:24] clamav uses only a little ram, spamassassin uses a lot more
[19:48] hi all. i want to add a xubuntu environment to my server for use over NX. i am running 16.04 amd64
[19:56] if that is what you want to use
[19:56] you should probably ask xubuntu though
[19:57] it is
[19:58] both what i want to use, and the correct package :D
[21:22] patdk-wk: spamassassin eats most of my RAM currently on the box, next big user is Amavis, but the problem is on a small email server (1GB RAM is low, yes), clamav's RAM usage is actually an issue. Avast's solution seems to behave better in terms of resource usage
[21:45] teward: clamd (note the d) eating up a lot of RAM?
[21:55] Has anyone had any problems with the recent qemu update? My VMs on top of Xen are not starting anymore.
[21:55] libvirtd gives this error: invalid argument: could not find capabilities for arch=x86_64
[21:56] cpaelzer: --^
[21:56] blackflow: yep.
[22:03] Is anyone here running VMs on Xen and has restarted a server after applying upgrades today?
[22:29] queeq: please file a bug report against whatever it is that actually does your vms, whether that's qemu, libvirt, or xen. Of the three the most recently changed was five days ago, so it'd be best to be more specific than "today's updates" -- dpkg -l output of the affected packages, etc., would be helpful
[22:30] Thanks sarnold. I'm not sure it's a bug. I now tried downgrading qemu and it didn't help. Neither libvirt nor xen were upgraded recently
[22:31] The upgrade that I suspected caused the issue included the following...
[22:32] Upgrade: landscape-common:amd64 (16.03-0ubuntu2, 16.03-0ubuntu2.16.04.1), grub-common:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), makedev:amd64 (2.3.1-93ubuntu1, 2.3.1-93ubuntu2~ubuntu16.04.1), grub-xen-bin:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), qemu-system-x86:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10), grub2-common:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9),
[22:32] grub-pc:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), libapparmor1:amd64 (2.10.95-0ubuntu2.5, 2.10.95-0ubuntu2.6), grub-pc-bin:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), libapparmor-perl:amd64 (2.10.95-0ubuntu2.5, 2.10.95-0ubuntu2.6), qemu-utils:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10), apparmor:amd64 (2.10.95-0ubuntu2.5, 2.10.95-0ubuntu2.6), wget:amd64 (1.17.1-1ubuntu1.1, 1.17.1-1ubuntu1.2),
[22:32] grub-xen-host:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), qemu-block-extra:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10), qemu-system-common:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10)
[22:34] I don't think it could be caused by grub. I'm now trying to downgrade apparmor, but not sure it could have caused this either
[22:38] Nah, the apparmor downgrade hasn't helped
[22:39] pfew ;)
[22:39] not a surprise
[22:39] but still
[22:41] Don't know what else to try... Something went wrong. And I haven't found recent information on this error. There were some bugs with qemu capabilities caching back in 2015, but that's it...
[22:41] pfew times two
[22:41] libvirtd verbose logging doesn't give any additional clues either
[22:44] The only error is this: error : virCapabilitiesDomainDataLookupInternal:699 : invalid argument: could not find capabilities for arch=x86_64
[22:46] queeq: skim this mail and see if it rings any bells https://lists.ubuntu.com/archives/ubuntu-devel/2016-September/039492.html
[22:47] Thanks, will do
[22:48] queeq: I'm guessing that `virsh cpu-models x86_64` returns an error?
[22:49] tyhicks: "this function is not supported by the connection driver: virConnectGetCPUModelNames"
[22:49] queeq: is a libvirtd process even running?
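A few quick checks for this kind of "could not find capabilities" error; the virsh subcommands are standard, the VM name is a placeholder:

    # Is libvirtd up, and which driver/hypervisor is it talking to?
    systemctl status libvirtd
    virsh version

    # List the guest architectures libvirt believes the hypervisor supports;
    # an x86_64 <guest> block needs to show up here for the domain to start.
    virsh capabilities | grep -A3 '<guest>'

    # Compare against what the failing domain actually asks for:
    virsh dumpxml some-vm | grep -E 'arch|machine|type'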
[22:50] Yes it is
[22:50] odd
[22:50] I'm no help here
[22:50] thanks anyway :)
[22:50] oh
[22:50] I guess virConnectGetCPUModelNames could be a qemu/kvm thing
[22:51] Maybe, but there's xen as a hypervisor, no kvm
[22:52] That's why I suspected the qemu upgrade to be the cause
[22:56] sarnold: That mail hasn't rung any bells. It's mostly migration-related between major versions.
[22:57] In my case this was a very minor upgrade without any migration. This setup has been working fine for a long time. Until today, lol :D
[22:59] queeq, I have vms running on kvm, and installed the recent qemu updates, but haven't rebooted yet. how do you manage the vms? I'm guessing it's not virt-manager
[23:00] It is virt-manager usually
[23:00] But bridging is manual
[23:01] why do you mention bridging?
[23:01] I'm rebooting my host. let's see what happens
[23:01] Because this is part of VM management :)
[23:01] I define bridges in /etc/network/interfaces
[23:02] Me too, I turned off libvirt's networking because it conflicts with another bridge I have on the host
[23:03] one of the guests is Windows Server 2008, which provides dns, dhcp, and is the domain controller. so until it finishes booting, I can't browse
[23:04] You seem to have a more complex setup. I've only Linux guests
[23:04] both guests are running. the other guest is ubuntu server running bacula
[23:04] So you had no problems
[23:05] you should save the xml file for the guests, and search for refernece to x86_64, or whatever the error is
[23:05] reference
[23:05] There is a reference for it, but I think it's a very standard file
[23:06] I've had to cut out sections in the past, that were supported on centos kvm, for example, but not in ubuntu's kvm
[23:07] then just save and import the xml file
[23:07] Oh, I thought you're talking about the qemu capabilities cache
[23:08] I mean using virsh to save the xml definition, edit it, then import it back
[23:09] I think they're stored in xml anyway, /etc/libvirt/libxl/vmname.xml
[23:11] Also accessible via virsh edit vmname
[23:14] Haha, when I tried to edit it, it gives me the same error again
[23:14] compdoc: what arch do you have set in those xml files?
[23:15] hvm
[23:15] Hey everyone. I asked in the Ubuntu channel but I figure this is worth a shot too. I have four disks in my PC. Two are a raid0 array for Windows, and the other two are just standard use for data and whatnot. I'd like to dualboot Windows 10 and Ubuntu (or others) safely, but I don't know how to properly install along the raid or install to one of the
[23:15] other disks and where to put the bootloader for the latter idea. I was urged to try asking here, but I'm looking for desktop use. anyone have suggestions? Thanks.
[23:16] not sure how recent that is
[23:16] Straaaange, same arch as I have
[23:16] Could you show dpkg -l | grep qemu?
[23:18] sorry, that's an old backup file. this is what I sue now, for more modern chipset features: hvm
[23:18] *use
[23:18] Boulevard: your BIOS would try to read the MBR from a single disk first, anyway, so the way to do it would be to install grub on the one you are booting from
[23:19] compdoc: arch is the same....
[23:19] I have hvm
[23:19] https://pastebin.com/uN01w5nD
[23:20] So I could safely drop linux into the 300gb or so I cleaned up on one of my other disks and then drop the loader on my raid?
[23:20] I apologize, I haven't cut my teeth on these hardcore installs before
[23:21] queeq, so you're booting a xen kernel? is it a standard ubuntu package?
[23:22] the host, I mean
[23:24] Thanks compdoc, looks similar to mine, but I don't have qemu for archs like ppc or sparc. I use a custom kernel.
[23:26] Boulevard: you can drop the loader on any disk, the `update-grub` utility should be able to find both windows and linux installations
[23:27] You would then just need to point the BIOS to the disk where grub is installed
[23:27] By the disk I mean physical drive
[23:28] So it'd see windows from the raid and linux from my other non raid
[23:29] Oh, sorry, I missed there's RAID. What kind of RAID is that?
[23:29] Yes, sorry.
[23:29] Raid0. Bios controlled, not hardware
[23:30] The CPU is a bit old, so I'm using raid to squeeze some speed out of the whole thing.
[23:30] compdoc: what emulator do you have set in the xml?
[23:30] Boulevard: fyi, bios raid is fakeraid and usually does not actually help
[23:31] Eh? :o
[23:31] Must be placebo effect then. I thought it helped a bit at least
[23:31] Boulevard: not sure if it would work in this case. I remember a long time ago I was trying to set up something like this with no success. Ended up using Linux mdraid or zfs
[23:31] Boulevard: it might help a bit, but it's not real raid and isn't really accelerated
[23:32] Neither mdraid nor zfs raids are cpu intensive
[23:32] right
[23:33] I suppose I'll just get a couple new hard disks soon and dedicate an os to either of them then. I just reinstalled windows this weekend so I don't really feel like fiddling around with too much so soon
[23:33] Boulevard: so the issue is just deciding where to put the bootloader? where is it now?
[23:33] Hell they're 50 bucks on egg right now for 7.2k 1TB's. (I'll take two dozen on the double :P)
[23:34] The Linux bootloader? Nowhere. I'm running a live usb session right now
[23:35] Boulevard: some suggested reading before you build your 24-disk storage machine https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/
[23:35] the whole blog series is wonderful
[23:35] Nah I don't need that much XD. Just being cheeky :P
[23:36] Seems that I've lost hardware virtualization on my host
[23:36] I see a 1TB volume under /dev/mapper/ and then sda and sdb, which are my two 500GB raid disk members
[23:37] Tried creating a new VM and it only has an arch option of xen (paravirt)
[23:37] sdc is my 1tb media drive, which would be a candidate for install if it would work smoothly. sdd is the linux usb and sde is my external
[23:38] wtf really, how's it possible?
[23:38] Well. Whatever. I'll just order a couple disks and set it up like a normie. Thanks for the help guys.
[23:38] Have a good afternoon :)
[23:39] queeq: do you see the flag in /proc/cpuinfo?
[23:40] Which one should it be?
[23:40] vmx
[23:40] oh well
[23:40] I know
[23:40] or you can run `kvm-ok`
[23:41] I had a power outage today and there were some issues with the BIOS setup. I guess I lost the vmx option there
[23:41] It seemed to have reset to defaults. As this is a remote machine I wasn't able to check thoroughly
[23:41] strange
[23:41] Thanks everyone!
[23:42] The clue is solved :)
[23:42] nacc: you don't happen to have tried to run kvm/libvirtd in a container, have you?
[23:42] and when I say tried I mean succeeded :)
[23:42] This is a home server and there were some nasty outages lately. Everything started with it being unable to boot.
[23:43] drab: no, i haven't tried that
[23:43] drab: i assume it would only work in a privileged container, but even then ... maybe not :0
[23:44] nacc: yeah, doesn't look like it...
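Roughly the check being suggested here (on Ubuntu, kvm-ok comes from the cpu-checker package):

    # Count vmx (Intel) / svm (AMD) CPU flags; 0 usually means virtualization
    # is disabled in the BIOS/firmware rather than missing from the CPU,
    # which fits a BIOS that reset itself after a power outage.
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # A more explicit verdict:
    sudo apt install cpu-checker
    sudo kvm-ok    # reports whether KVM acceleration can be used, and why not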
[23:44] When a screen was attached, it was sitting on a BIOS warning about a faulty setup. I instructed the remote person to enter the BIOS, and upon entering, Linux started to boot and I thought it's all fine
[23:44] Shite, killed half of the night trying to troubleshoot this
[23:44] queeq: sorry :/
[23:45] np :) Without your help I would have killed another couple of hours
[23:45] Thanks all, good night
[23:45] nn
[23:47] is it ok to rant about xml in here? I don't want to offer anybody :)
[23:47] offend*
[23:47] it's too late for that :/
[23:47] jk
[23:47] heh
[23:48] queeq, if I remember right, the bit I had to cut out of the xml file was at the bottom
[23:49] compdoc: thanks, the issue is resolved now. 99.9% probability is that my BIOS settings got screwed after the power outage
[23:50] ah, cool
[23:50] I have a dual-BIOS Gigabyte MB there which seems to have rewritten the main BIOS with the backup settings, and virtualization was turned off there.
[23:50] queeq: so you just had to turn on hardware accelerated vms in the bios?
[23:50] There's no vmx flag in /proc/cpuinfo
[23:51] sarnold: I dunno yet, will ask a person who has physical access to the computer to do it tomorrow
[23:51] ugh
[23:51] But considering everything I've seen today I'm pretty sure this is the issue
[23:52] is anybody aware of any nefarious consequence if I take out the 127.0.1.1 hostname entry from /etc/hosts ?
[23:52] it's getting in the way quite a bit, and the reason it was added seems to be some old bug
[23:52] I've never once heard of it getting in the way
[23:53] sarnold: ok, let me tell you about it, then you will have :)
[23:53] 127.0.0.1 is still there?
[23:53] yes
[23:53] with localhost
[23:54] some systems have it, some don't.
[23:54] but 127.0.1.1 with the hostname is added at install time
[23:54] yup
[23:54] sarnold: what happens is that people, me included, use "localhost" to refer, to, well, the local host
[23:54] they otherwise use "hostname" to refer to the ip/interface that hostname should resolve to
[23:54] however because of the 127.0.1.1 entry, using the hostname still refers to localhost
[23:55] when configuring certain daemons, if you use the hostname meaning a certain public ip/interface (could be lan), you get screwed because the daemon starts to listen on localhost
[23:55] sounds like bugs in those daemons
[23:55] if I want something to listen on localhost I will say, you guessed it, localhost
[23:55] because a hostname does not define an interface
[23:56] if you want to listen on a public ip, use the public ip?
[23:56] or specify the interface to use
[23:56] the daemons are doing the right thing, they are calling gethostbyname
[23:56] what happens when a hostname resolves to multiple IPs?
[23:56] which depending on how nss is configured will likely go hosts, dns
[23:58] drab: thanks for the explanation. I've never seen anyone use hostnames quite like that before :) normally people either want wildcard binds or they want to bind to specific IPs or interfaces.
[23:58] nacc: each ip would likely have its own hostname (plus a cname to all of those), or if it doesn't, specifying the ip itself makes sense then
[23:59] nacc: I'm not saying it's a general situation where specifying the hostname is always the right thing to do
[23:59] i don't think a 'hostname' uniquely identifies an interface
[23:59] I'm pointing to what seems a logical assumption: localhost means localhost, hostname means "something else". if the ip it points to resolves to the local machine, that's fine
[23:59] nacc: sure, I'm not saying it should be
[23:59] and it seems like the daemons you are using make that assumption
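To make the 127.0.1.1 point concrete, a sketch of the installer-written /etc/hosts and how resolution behaves because of it (myhost is a placeholder hostname):

    # /etc/hosts as the Ubuntu/Debian installer writes it:
    #   127.0.0.1   localhost
    #   127.0.1.1   myhost

    # With files listed before dns in /etc/nsswitch.conf (the default), the
    # files entry wins, so resolving the machine's own name returns loopback:
    getent hosts "$(hostname)"    # -> 127.0.1.1   myhost

    # which is why a daemon told to bind to "myhost" ends up listening on
    # 127.0.1.1. Remove the 127.0.1.1 line and the lookup falls through to
    # DNS instead (assuming the hostname has a record there).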