[01:01] <Mead> I am reading through this guide to set up passthrough for guest OSes: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Isolating_the_GPU , when it says "You can then add those vendor-device ID pairs to the default parameters passed to vfio-pci whenever it is inserted into the kernel." is this implying create a file or add it to the GRUB kernel config?
[01:02] <Mead> I am using ubuntu-server, that just happens to be a recommended guide
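In practice the wiki's phrasing maps to a modprobe options file rather than the GRUB config (though the kernel command line works too). A sketch; the vendor:device IDs below are placeholders, substitute the pairs `lspci -nn` reports for your own GPU:

```
# /etc/modprobe.d/vfio.conf -- example IDs only, replace with your own
options vfio-pci ids=10de:13c2,10de:0fbb
```

On Ubuntu, follow up with `sudo update-initramfs -u` so the option is in place when vfio-pci loads from the initramfs; the same pairs could instead go on the kernel command line as `vfio-pci.ids=...`.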
[01:27] <Logos01> Howdy, folks. I'm hoping someone might be able to point me in the right direction for this. I have an Ubuntu 16.04 machine that I just upgraded from 14.04.  It is a KVM/libvirt hypervisor, serving a manually configured (outside of libvirt, that is) routed network to its VMs. This much is fine; my VMs get addresses, and are successfully able to reach and be reached from the internet.
[01:27] <Logos01> The challenge is that the VMs can no longer initiate any traffic to my intranet machines.
[01:31] <Logos01> This means that while my router and hypervisor can successfully communicate bidirectionally on port 22/80/443/etc., none of my other physical systems can.  My workstation/laptop (this is my home network) can successfully determine via netcat that ports are open on the VMs, but it cannot receive traffic from those ports.
[01:31] <Logos01> ( examples: http://lpaste.net/8120205894421053440 )    ... any suggestions on where I might look to determine what has become misconfigured as a result of the upgrade on the hypervisor would be appreciated.
[01:37] <Logos01> This is confusing to me because the VMs can all reach one another; and they can reach the router. It's just everything *ELSE* on the physical network they can't reach.
[01:39] <drab> Logos01: are you using a firewall or doing something else with that bridge?
[01:41] <drab> Logos01: also have these been rebooted etc after the upgrade and got on a new kernel?
[01:41] <Logos01> drab: Well, I *do* have an haproxy instance acting as a loadbalancer for ports 80/443 from the outside world to my machines for the openconnect daemon and webserver stack; but I also have an independent VM that's running a Katello instance to act as management for the machines.  Mostly it's a lab for me to practice/sandbox/experiment my sysadmin-skills on my own recognizance.
[01:41] <Logos01> drab: Yes. This started on Friday and I've rebooted a couple of times.
[01:41] <Logos01> I updated on Friday, it's persisted over the weekend. Granted I didn't really investigate it on Saturday.
[01:42] <Logos01> I've pretty much narrowed it down to the VMs not getting routing information from the VM-net gateway onwards (mtr is a lovely thing)
[01:42] <drab> ok, if you get on the hypervisor and tcpdump, do you see the netcat traffic from the www-node1 going to the laptop?
[01:42] <drab> ok
[01:43] <drab> are the vms on a diff subnet/network than the physical stuff on your lan?
[01:43] <drab> but it sounds like you got it already :)
[01:44] <Logos01> Yeah, the VMs are all on 192.168.121.0/24 ; the physical systems are all on 192.168.1.0/24
[01:45] <drab> ok
[01:45] <Logos01> My router (192.168.1.1) has a routing table entry to the hypervisor -- 192.168.1.3.
[01:46] <Logos01> http://lpaste.net/354253  <-- mtr output example
[01:46] <drab> you mean an entry to direct .121/24 to the HV?
[01:47] <sarnold> Logos01: I suggest trying 'ip route get ....' commands on all the different computers (real and virtual) with IPs from all the real and virtual computers..
[01:47] <drab> what's ip route ls on the VMs?
[01:47] <drab> yeah, or that, try the get
[01:48] <Logos01> sarnold: That all looks correct.
[01:48] <Logos01> http://lpaste.net/354254
[01:49] <Logos01> drab: And yes, the router has a static routing table entry using 192.168.1.3 (the hypervisor's physical address) as the gateway for 192.168.121.0/24
[01:50] <drab> Logos01: if you tcpdump traffic on the bridge, do you see the replies on the br interface?
[01:51] <drab> I'm guessing they are getting lost on the HV and not going back to destination
[01:51] <drab> maybe something funny with asymmetric routing, maybe they are taking a diff path on the way back and getting dropped
[01:51] <drab> I assume you tcpdump'ed on your laptop, yes?
[01:51] <drab> and don't see that traffic coming back at all
[01:52] <drab> I'm wondering if the laptop is sending traffic to the router, but it gets it back directly from the VM
[01:52] <drab> doesn't recognize it and drops it
[01:53] <Logos01> http://lpaste.net/354255  <-- not necessarily useful but
[01:54] <Logos01> Hrm... interesting ... laptop1 is in fact showing icmp from katello
[01:54]  * Logos01 tries adding the routing table entry on the laptop locally
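What Logos01 is trying here, roughly sketched (the VM address is a placeholder; the gateway 192.168.1.3 and the subnets come from the conversation above):

```shell
# On the laptop: check which route would be chosen for a VM address
ip route get 192.168.121.10

# Add a static route so traffic to the VM subnet goes via the hypervisor
sudo ip route add 192.168.121.0/24 via 192.168.1.3
```

Without that entry the laptop sends VM-bound packets to its default gateway (the router), and replies coming straight back from the hypervisor look asymmetric, which is what drab suspected above.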
[01:54] <drab> :)
[01:55]  * Logos01 facepalms
[01:55] <Logos01> Why did I not need to do this before, I wonder ...
[01:55] <Logos01> You know what? I may have had to and it's just been so long I don't remember it.
[01:55] <drab> maybe you did and forgot? :)
[01:55] <drab> yeah, I do that all the time, that's why I use ansible now :P
[01:55] <drab> or whatever, just don't make changes by hand
[01:55] <drab> been bitten by it far too many times
[01:56] <Logos01> drab: ... my ansible setup is on my laptop and was what was inspiring me to work on this.
[01:56] <Logos01> <_<
[01:56] <drab> even if it doesn't work after an upgrade I see stuff failing and I know I have to change something
[01:56] <Logos01> Because I couldn't ssh to the VMs.
[01:56] <drab> lol :D
[01:56] <drab> good inspiration
[01:56] <Logos01> I mean it's only 17.04 and I'm finally migrating my physicals from 14.04 to 16.04. You can tell I am suuuper on it about latest-and-greatest.
[01:57] <Logos01> Anyhow, I appreciate it.
[01:57] <drab> latest and greatest is overrated :)
[01:58] <drab> Logos01: btw, maybe there was a point in this all... :) any chance you can share libvirt setup? I'm trying to get started on KVM
[01:59] <drab> Logos01: I have my own bridge and stuff, so I want none of the automagic
[01:59] <drab> at least until I understand where the magic comes from
[02:00] <Logos01> drab: Oh. I ripped out the libvirt networking component and am instead running my own manually initialized dnsmasq instance (it's not starting anymore but my VMs are all statically configured now anyhow)
[02:00] <Logos01> Also, the upgrade to 16.04 overwrote my /etc/iptables/rules.v4 file so it's a mess until I rewrite it.
[02:00] <Logos01> But...
[02:00] <drab> k, care to share how to rip that out? I have a centralized dnsmasq, don't want any additional dnsmasq or bridge set up
[02:01] <drab> just use the bridge I tell it to
[02:01] <Logos01> You just set the default network it defines to not autostart
[02:01] <Logos01> (And then never start it)
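Concretely, assuming the stock network is named `default` (libvirt's usual name for it):

```shell
# Stop libvirt's default NAT network and prevent it from autostarting
virsh net-destroy default
virsh net-autostart default --disable
```

After that, libvirt creates no bridge or dnsmasq of its own, and guests can be pointed at a manually managed bridge instead.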
[02:01] <sarnold> drab: depending upon how little magic you want you may prefer a different tool entirely; libvirt after all is just a wrapper around qemu and iptables and so on, glued together with an xml parser
[02:02] <drab> sarnold: I'd love that, but I couldn't find much documentation on that and I'm already quite behind in figuring it all out
[02:02] <drab> so trying to find a compromise between magic and starting from scratch
[02:03] <Logos01> sarnold: Lots of things work with libvirt as the backend for their hypervisor management though
[02:03] <Logos01> Like in my case I was actually using Katello to spin-up / spin-down VMs
[02:03] <Logos01> http://lpaste.net/354256  <-- current state of my hypervisor's iptables. (I'm not thrilled with this.)
[02:03] <sarnold> Logos01: that's very true.
[02:03] <Logos01> It's fugly and I know it.
[02:03] <drab> but first I need to get a container setup for kvm
[02:04] <Logos01> Used to be a loooot prettier.
[02:04] <drab> so I can experiment without trashing the host
[02:04] <drab> Logos01: also you don't happen to have tried libvirt with lxc, do you?
[02:05] <Logos01> I was playing around with the notion a while back.
[02:05] <Logos01> But I never went anywhere with it.
[02:05] <Logos01> Honestly I'm starting to look at rkt right now -- especially with the asshattery that Docker is pulling now.
[02:05] <Logos01> (Monthly releases with each new monthly release marking the end-of-life of the previous month.)
[02:05] <Logos01> Of the docker engine itself, that is.  (Oh, they'll have LTS too. Quarterly instead of monthly.)
[02:07] <Logos01> drab: But yeah, once you *HAVE* a bridge device manually created and configured to allow traffic in/out via iptables forwarding rules, you can just define libvirt domains (VMs) to use that bridge-device for their networking.
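In the domain definition that comes down to a snippet like this (the bridge name `br0` is an assumption):

```xml
<!-- guest NIC attached to a pre-existing host bridge -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```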
[02:09] <sarnold> drab: a few similar things are listed here http://www.linux-kvm.org/page/Management_Tools
[02:09] <Logos01> I just added mine to /etc/network/interfaces
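For a routed setup like the one described above, the stanza might look roughly like this (addresses taken from the conversation; `bridge_ports none` because no physical NIC is enslaved; needs the bridge-utils package):

```
# /etc/network/interfaces -- VM-side bridge, hypervisor acts as gateway
auto br0
iface br0 inet static
    address 192.168.121.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
```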
[02:09] <drab> sarnold: iirc you have a zfs nas, don't you? do you happen to have looked into sanoid/znapzend for backups?
[02:09] <Logos01> ZFS ... :D
[02:10] <sarnold> drab: I've only got the one zfs system so far, so I haven't looked at sending snapshots anywhere yet
[02:10] <drab> k
[02:10] <Logos01> http://lpaste.net/354257
[02:10] <drab> I've narrowed it down to those two solutions, need to test them and figure out how to work with ZVOLs since I'll need those for KVM
[02:11] <Logos01> drab: I've never actually heard of either. I should really start doing zfs send/recv for my snapshots
[02:11] <drab> whoa, r00t? crazy man :)
[02:11] <Logos01> sole filesystem
[02:11] <Logos01> Was that way back in 12.04 too
[02:11] <drab> O_O
[02:12] <Logos01> Yeah, the laptop's made a few migrations with me.  I even once used zfs send/recv to migrate the OS from one laptop to another.
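A minimal send/recv sketch of that kind of migration; the pool name `rpool`, the host `backuphost`, and the target dataset are all placeholders:

```shell
# Recursive snapshot of the whole pool
sudo zfs snapshot -r rpool@migrate-1

# Replicate it, with all child datasets and properties, to another machine
sudo zfs send -R rpool@migrate-1 | ssh backuphost sudo zfs recv -Fdu tank/laptop
```

Later runs can pass `-i previous-snapshot` to `zfs send` so only the incremental delta goes over the wire.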
[02:12] <drab> so how did you put / on zfs?
[02:12] <Logos01> zfs-native PPA
[02:12] <Logos01> And, at the time, zfs-grub PPA
[02:13] <drab> u blogged about it? or any links?
[02:13] <sarnold> heh those days it felt even hairier than today
[02:13] <Logos01> drab: I basically followed the howto/walkthroughs for this from the zfs-native ppa peeps
[02:13] <drab> k
[02:13] <drab> will google that out, thank you
[02:14] <Logos01> https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-16.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer
[02:14] <drab> I ended up putting two drives in mdadm for root
[02:14] <sarnold> drab: stick to rlaager's guide for today's stuff
[02:14] <drab> and the rest on zfs
[02:14] <Logos01> sarnold: Heheh, hard to find now though
[02:15] <Logos01> sarnold: He's actually merged it into the page I linked to
[02:15] <Logos01> Well. https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS
[02:15] <drab> yeah then we're looking at the same thing
[02:16] <drab> not for me then, seen that before
[02:16] <Logos01> ... I have to figure out how to get zfSnap to honor the com.sun:auto-snapshot flag
[02:17] <Logos01> drab: I have historically had a habit of moving from one company to the next once every year to year-and-a-half. I pretty much always wind up using zfs as sole filesystem on my personal linux machines when doing so
[02:18] <Logos01> So ... I've done that process a few times.
[02:18] <Logos01> Sadly, on my *current* work laptop, they gave me an encrypted disk drive so I can't reinstall the OS. :-(
[05:26] <faekjarz> Hey there! I'm running 16.04 server. I'd like to run several commands on shutdown/reboot. I'm looking for something like rc.local, but for the shutdown process. In which file would i put those commands?
[05:27] <lynorian> faekjarz, do you know about @reboot in cron
[05:27] <lynorian> oops I do not think I read your question properly
[05:27] <lynorian> so you want when you shutdown run these commands
[05:27] <faekjarz> aye
[05:37] <faekjarz> i think i'll see what i can do with this → https://www.freedesktop.org/software/systemd/man/systemd-halt.service.html
[05:38] <Logos01> faekjarz: Best I can think of in your case would be to start up a dummy service that has a series of ExecPost steps
[05:39] <Logos01> And never directly interact with the service otherwise; shutting down the host would cause it to shut down that dummy service and thus execute those commands as part of the shutdown process.
[05:40] <Logos01> Err, they'd be ExecStopPost commands
[05:40] <Logos01> Just have the actual startup command be something tiny and silly like a simple script with a "while true ; do sleep 10 ; done" command inside it.
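A sketch of that idea; with `Type=oneshot` plus `RemainAfterExit`, the sleep loop isn't even needed (unit and script names here are made up):

```ini
# /etc/systemd/system/at-shutdown.service
[Unit]
Description=Run commands at shutdown
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/usr/local/sbin/at-shutdown.sh

[Install]
WantedBy=multi-user.target
```

Enable and start it once; at shutdown systemd stops the unit and runs ExecStop. Since stop order is the reverse of start order, `After=network.target` means the script runs while the network is still up.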
[05:43] <faekjarz> Logos01: interesting, thanks
[05:44] <Logos01> You'd want to make them part of the sysinit.target.wants, I *think*
[05:45] <Logos01> Or actually hey -- there's a shutdown.target.wants
[05:45] <Logos01> haha -- forgot all about this.
[05:47] <Logos01> https://unix.stackexchange.com/questions/39226/how-to-run-a-script-with-systemd-right-before-shutdown/39604#39604
[05:47] <Logos01> There ya go
[05:49] <faekjarz> yea, looks about right, thanks Logos01! Once again, i can avoid actually understanding systemd ;D
[05:57] <faekjarz> (oh, there's a #systemd channel …of course)
[05:58] <cpaelzer> jamespage: did the tempest run return good results on bug 1672367 to mark it v-done?
[06:03] <Logos01> faekjarz: I approve of that sentiment.
[07:28] <jamespage> cpaelzer: lemme check
[07:34] <jamespage> cpaelzer: I swear I triggered the test run but apparently not - have done so now
[08:27] <FilipNortic_> I'm getting an error while trying to restart sshd. (Tried /etc/init.d/ssh restart and the systemd version)
[08:27] <FilipNortic_> ssh.service failed because the control process exited with error code. See "systemctl status ssh.service" and "journalctl -xe" for details.
[08:28] <blackflow> FilipNortic_: and did you do as instructed?
[08:29] <FilipNortic_> yes
[08:30] <FilipNortic_> was no real info in either case
[08:31] <FilipNortic_> well status says: failed to start. Unit entered failed state
[08:31] <blackflow> FilipNortic_: then we can't help you :)   but just in case, please do pastebin the logs
[08:32] <blackflow> FilipNortic_: in the status, there's excerpt from the log below, anything in there?
[08:34] <FilipNortic_> blackflow: http://lpaste.net/354264
[08:36] <blackflow> FilipNortic_: there should be more, please check with journalctl -xe, or journalctl -p err or journalctl -u ssh.service -n 40
[08:39] <FilipNortic_> ok i'll try to extract the relevant parts
[08:41] <FilipNortic_> error: Bind to port 22 on 0.0.0.0 failed: Address already in use. and fatal: safely_chroot: stat("/home/ftpuser"): No such file or directory
[08:42] <blackflow> there you go :)  So first, is there an ssh daemon already running? Is this in a container bound to host's IP?
[08:44] <FilipNortic_> can there be multiple ssh daemons
[08:44] <blackflow> yes, but each bound to its own port
[08:44] <blackflow> (though not with default set up in Ubuntu, you'd have to run additional ssh daemons either in a container, or manually / with a custom unit file)
[08:46] <FilipNortic_> root     23097     1  0 Apr03 ?        00:00:01 /usr/sbin/sshd
[08:46] <FilipNortic_> this is the only sshd process i find
[08:49] <blackflow> FilipNortic_: ss -4lp | grep ssh
[08:49] <blackflow> this will give you port used and pid of the process named ssh
[08:49] <blackflow> if you have that, then you can't run additional daemons on the same port
[08:49] <blackflow> but.... sounds to me like you're doing something wrong here. What exactly do you wish to achieve?
[08:50] <FilipNortic_> tcp    LISTEN     0      128     *:ssh                   *:*                     users:(("sshd",pid=23097,fd=3))
[08:50] <FilipNortic_> we were trying to configure sftp
[08:51] <blackflow> right, so configure it within the existing ssh daemon, you don't need to run an additional (and how do you even run it btw)
[08:53] <FilipNortic_> there wasn't supposed to be an additional one. the first time we restarted sshd it worked fine and sftp worked; then we tried to give access to the group instead, and upon that restart we got the bind error
[08:53] <FilipNortic_> is it trying to start itself twice or something like that?
[08:59] <blackflow> FilipNortic_: can you pastebin your sshd_config file?
[08:59] <FilipNortic_> http://lpaste.net/5543173436047622144
[09:05] <FilipNortic_> can't see anything weird in it
[09:06] <blackflow> FilipNortic_: yeah, looks okay, except I don't think you need any options for internal-sftp.
[09:07] <blackflow> FilipNortic_: also, this setup is very unsafe, you allow password auth and use default port 22. Just a matter of time until a bot breaks in.
[09:09] <FilipNortic_> yeah I know that much. so far they all try as root but i will change it, just need to resolve this first
[09:10] <FilipNortic_> I still have no clue what is wrong
[09:15] <blackflow> FilipNortic_: you can't log in as root, you have PermitRootLogin no
[09:16] <FilipNortic_> yeah but i still see bots trying
[09:16] <FilipNortic_> was kind of my point (though a sort of moot one)
[09:19] <blackflow> ah you mean the bots try as root.... yeah.... sorry, my mind was in the context of your sftp group users
[09:25] <FilipNortic_> but what i really wish to know is why i can't restart sshd; if there's another process blocking it, why can't I see it?
[09:26] <lordievader> To enable sftp on my host I needed to add 'Subsystem sftp /usr/lib64/misc/sftp-server'.
[09:26] <lordievader> That path might be a bit different on Ubuntu.
[09:29] <blackflow> internal-sftp is needed with that group match stanza to chroot sftp users, otherwise they could roam freely on the system
[09:30] <blackflow> and by forcing the command it blocks regular ssh login, allows only sftp
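The stanza under discussion looks roughly like this (the group name `ftpusers` is an assumption; `%h` expands to the user's home directory, which must be root-owned and not group/world-writable for the chroot to be accepted):

```
# sshd_config: SFTP-only, chrooted access for one group
Subsystem sftp internal-sftp

Match Group ftpusers
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```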
[09:30] <blackflow> as for why it's behaving like FilipNortic_ says, I don't know. it's not normal behavior for ssh
[09:33] <FilipNortic_> any idea how to get back to a normal state... should i try and kill the sshd process?
[09:44] <blackflow> FilipNortic_: first, when you "systemctl restart ssh.service", does it log an error about binding to port 22 again?
[09:46] <FilipNortic_>  error: Bind to port 22 on 0.0.0.0 failed: Address already in use.
[09:46] <FilipNortic_> sshd[30276]: error: Bind to port 22 on :: failed: Address already in use.
[09:46] <FilipNortic_> yeah
[09:51] <blackflow> weird.
[10:03] <jamespage> zul: urllib3 and requests are still wedged in zesty-proposed - something you have time for?
[10:08] <jamespage> cpaelzer_: testing OK - marked bug 1672367 as requested
[10:11] <cpaelzer_> thank you so much jamespage!
[10:11] <cpaelzer_> the next bunch of SRUs are waiting, so this should help to clear the queue
[10:11] <jamespage> cpaelzer_: yw
[10:12] <cpaelzer_> well, waiting is too much; I need to code them up first :-/
[10:12] <jamespage> ah yes the relentless queue of SRU's
[10:12] <cpaelzer_> if you are not having them you either own "hello" or your package isn't used a lot :-)
[10:46] <cpaelzer_> rbasak: given my frequent typos, could I ask you to re-release uvt as uvtoool
[10:46] <cpaelzer_> it would be nicer and auto-supports triple-o that way right :-)
[11:02] <rbasak> :-)
[11:16] <FilipNortic_> when I run: netstat -tapn | grep ssh
[11:16] <FilipNortic_> i get: tcp6       0      0 :::22                   :::*                    LISTEN      23097/sshd and tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      23097/sshd
[11:17] <FilipNortic_> this is one for ipv4 and one for ipv6 ?
[11:17] <hateball> it's the same pid as you can see
[11:18] <FilipNortic_> ahh right
[11:19] <hateball> also I think 'ss' is preferred to netstat these days
[11:20] <FilipNortic_> yeah i ran that first
[11:20] <hateball> "ss -tap" is nice
[11:20] <lordievader> hateball: It is.
[11:20] <FilipNortic_> kind of hoped it missed something
[11:20] <hateball> needs sudo to show which pid uses ports <1024 iirc
[11:20] <lordievader> Just like the ipconfig -> ip
[11:20] <lordievader> ifconfig*
[11:22] <FilipNortic_> is ip the new one?
[11:25] <lordievader> FilipNortic_: Yes.
[11:26] <lordievader> FilipNortic_: Do you still have the problem of starting ssh? If you are not connected via ssh you could kill that remaining process and start the service again.
[11:28] <FilipNortic_> well ssh is my only method of connection right now
[11:29] <FilipNortic_> but does killing the sshd service stop the established connections
[11:39] <FilipNortic_> any other recommendations ? change port and see if i can start another daemon there?
[11:56] <lordievader> You could do that as a detour to restart ssh on the original port.
[11:57] <lordievader> Though, I am not sure how ssh behaves with running multiple daemons.
[11:59] <ronator> not sure if this suits but "sshd has had support for multiple ListenAddress directives for a good while"
[12:05] <FilipNortic_> so it might still try and restart the old one
[12:17] <lordievader> FilipNortic_: There is no way of access of another kind?
[12:18] <FilipNortic_> there should be a vnc point set up by the server provider but it comes up blank when i try it
[12:19] <FilipNortic_> guess i have to call their support
[12:26] <lordievader> Or you run the commands in a screen/tmux and hope for the best :P (bad advice, I know)
[12:44] <ronator> is it safe to remove package "landscape-common" if I don't plan to use landscape?
[12:45] <lordievader> Check the reverse dependencies.
[12:45] <zul> fnordahl: probably start doing SRU processing again later this week
[12:45] <lordievader> ronator: If nothing (important) requires it, I'd say it is safe to remove.
[12:46] <zul> jamespage: sure...
[12:46] <ronator> lordievader: thats exactly where my question was aiming at :)
[12:47] <fnordahl> zul: that would be great. just an update of the package to be based on horizon 9.1.2 would suffice, as the necessary patches have been upstreamed
[12:48] <lordievader> ronator: apt-cache can tell you the reverse dependencies.
[12:48] <ronator> thx lemme check that
[12:49] <ronator> lordievader: like so?  $ apt-cache rdepends landscape-common
[12:49] <ronator> shows only landscape-common and -client so should be fine thx
[12:50] <lordievader> ronator: Indeed, apt will also show you if it has to remove more due to a dependency.
[12:52] <ronator> lordievader: yes I know. We tested landscape for a short period of time, I removed it, and now I was unsure if landscape-common had always been there. removing didn't raise any dependency issues, but you never know, so I asked and learned something new :)
[12:55] <zul> jamespage: my old nemesis dogtag-pki
[15:17] <Aison> hello
[15:17] <Aison> I have 4 network devices
[15:17] <Aison> enp5s0, enp6s0, rename4, rename5
[15:17] <Aison> why the hell are two of them called rename*
[15:21] <rbasak> Sounds like they got stuck halfway through the rename, possibly due to a conflict.
[15:21] <rbasak> Do you have four NICs in reality? And can you reproduce this eg. on a live USB boot?
[15:21] <rbasak> Also, which release?
[15:22] <Aison> rbasak, no, there is a dual 82571EB and a dual 82574L controller
[15:22] <Aison> one is onboard, one is pcie
[15:22] <nacc> iirc, dmesg should have some indication of what is going on (or syslog)
[15:23] <rbasak> Perhaps it's trying to rename each of the two NICs on each controller to the same enpXs0 name?
[15:24] <Aison> this is my dmesg: http://paste.ubuntu.com/24313876/
[15:24] <Aison> I try to find something :)
[15:26] <nacc> [    4.009571] e1000e 0000:02:00.0 rename4: renamed from eth2
[15:26] <nacc> [    4.022635] e1000e 0000:02:00.1 rename5: renamed from eth3
[15:27] <rbasak> Which release?
[15:27] <nacc> looks to be 16.04 with 16.04.1 kernel
[15:27] <Aison> yes
[15:28] <nacc> that rename is happening much earlier than the other
[15:29] <nacc> you of course, if not concerned with hotplug could use net.ifnames=0 (iirc)
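For reference, that kernel parameter goes in `/etc/default/grub` (this disables predictable interface names entirely, reverting to eth0/eth1/...):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="net.ifnames=0"
```

Then run `sudo update-grub` and reboot.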
[15:30] <rbasak> I wonder if this is a bug. If so, it'd be nice to fix it properly.
[15:31] <nacc> i think it would require some systemd bugging -- may be worth filing regardless
[15:34] <Aison> btw: the pcie LAN card is the new device
[15:34] <Aison> before I just used the on board
[15:35] <Aison> but the same card was used in another 16.04 server before without any problems
[17:17] <Henster> evening ,, i have like 10 drives lying around and heard of zfs , silly question: do i need to first format them all to the same filesystem format ? they are mostly ntfs
[17:17] <nacc> Henster: you mean zfs?
[17:18] <Henster> yes sorry
[17:24] <drab> Henster: nah, it won't care, zfs utils will just take care of that
[17:24] <nacc> Henster: aiui, what drab said
[17:24] <nacc> Henster: you just need to tell zfs what disks to use
[17:25] <Henster> wow ok cool
[17:25] <Henster> and is it easy just to add extra drives ?
[17:25] <drab> yes and no
[17:25] <drab> yes as in it's easy, no as in it probably doesn't work as you think it does
[17:26] <Henster> do all the drives have to be the same size ?
[17:26] <drab> Henster: please read through this at the very least: https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/
[17:26] <drab> zfs is great, but also not particularly forgiving
[17:26] <nacc> heh
[17:27] <drab> it's closer to what linux used to be: friendly but chooses its friends wisely
[17:27] <Henster> ok cool thanks, was looking for more content ..
[17:27] <Henster> is there a newer or better version than zfs now ?
[17:28] <drab> https://wiki.ubuntu.com/ZFS
[17:28] <drab> follow that to get going
[17:28] <drab> read the other thing past the first chapter about Debian to understand more about the concepts
[17:28] <drab> it's still the best walkthrough around
[17:28] <drab> along with the other one I'm about to paste... sec
[17:29] <drab> this is the best resource on zfs I've found, explaining the concepts in enough detail that you won't shoot yourself in the foot, while not being overwhelming (and avoiding cargo-culting some of the many misunderstandings spread on the internet):
[17:29] <Henster> cool
[17:29] <drab> https://forums.freenas.org/index.php?threads/slideshow-explaining-vdev-zpool-zil-and-l2arc-for-noobs.7775/
[17:32] <Henster> thank you so much , new toys for my server :)
[17:32] <drab> there's a few more good docs I bookmarked, but I don't want to overwhelm you; that should keep you busy for a while :)
[17:33] <drab> as you probably heard for RAID, raidz is not a backup, so back up your stuff!
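To make the "yes and no" above concrete, a hedged sketch of creating a single-parity pool from three disks. Device names are placeholders; zpool will happily clobber the existing NTFS data, so double-check them (stable /dev/disk/by-id names are generally preferred over sdX):

```shell
# One raidz vdev from three whole disks; ashift=12 assumes 4K-sector drives
sudo zpool create -o ashift=12 tank raidz \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3
zpool status tank
```

The "no" part of the answer: you can add another whole vdev to the pool later, but you cannot grow an existing raidz vdev by a single disk.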
[17:41] <teward> anyone know of a clamav/clamd *alternative* that works fine with amavisd-new?  ClamAV eats over 780MB RAM when running on even an idle mail server, so it causes some... problems.
[18:07] <sarnold> re "raid is no backup" https://twitter.com/leyrer/status/847816162557689857
[18:08] <drab> lol, and the prize goes to the https://twitter.com/nuintari/status/848249592609202179
[18:08] <drab> but I 'spose only for MP fans :D
[18:12] <sarnold> hehehe
[18:14] <dasjoe> Ancient machines
[18:15] <teward> sarnold: ohai
[18:15] <sarnold> gutenabend dasjoe :)
[18:15] <sarnold> hallo teward :)
[18:15] <dasjoe> Hi sarnold :)
[18:16] <teward> sarnold: what do you know about clamav being a memory-consuming resource hog on servers, and is there any solution for it?  Or should I be bothering the server team to add a warning to the server guide about ClamAV's massive resource usage, with minimum reqs. of 2GB RAM or more to use it on the server
[18:16] <teward> since you've got some security team insights :P
[18:16] <teward> (clamav for mailservers == resource hog)
[18:17] <patdk-wk> heh? clamav doesn't consume a lot of memory
[18:18] <sarnold> teward: heh, 2gigs feels smallish today..
[18:20] <teward> patdk-wk: well, running clamav ate 750MB of RAM on a VPS where i'm setting up a test mailserver with amavis+clamav
[18:20] <teward> and it actually swapped so much I had to force-restart the VPS
[18:20] <teward> so............
[18:20] <patdk-wk> clamav did? or your av libs for clamav did?
[18:20] <teward> i'll let you restate your question (E: Unclear what's being asked)
[18:20] <patdk-wk> my clamav with a LOT of 3rd party libs added to it, is using 710megs of ram
[18:21] <patdk-wk> is clamav using all that memory? or is your clamav-virus-definitions using it all
[18:21] <teward> patdk-wk: stock ClamAV from the repos.  650MB RAM + the rest was swap.
[18:21] <teward> patdk-wk: looked to be the clamav process on htop
[18:21] <patdk-wk> what clamav process? clamavd?
[18:22] <teward> i'd have to relaunch it to check.  I'm currently away from my SSH console, but will get back to you :)
[18:22] <patdk-wk> odd though
[18:22] <patdk-wk> mine is using 710megs exactly, no swap
[18:22] <teward> unless it's a leaky version in Xenial
[18:22] <patdk-wk> using clamav libs, securite, bofhland, foxhole, ...
[18:23] <patdk-wk> but then the stock clamav libs are 250megs
[18:23] <teward> well, i have a trial of Avast's solution for antivirus, giving that a test go; otherwise Postfix + Dovecot + Amavis + SPF + DKIM + DMARC all works heh
[18:23] <patdk-wk> I use bitdefender also, but that is slow, cause it won't run in daemon mode and uses lots of ram also
[18:24] <patdk-wk> but then, my mailservers have 30gigs of ram
[18:24] <patdk-wk> clamav uses only a little ram, spamassassin uses a lot more
[19:48] <SineDeviance> hi all. i want to add a xubuntu environment to my server for use over NX. i am running 16.04 amd64
[19:48] <SineDeviance> is xubuntu-desktop still the correct metapackage?
[19:56] <patdk-wk> if that is what you want to use
[19:56] <patdk-wk> you should probably ask xubuntu though
[19:57] <SineDeviance> it is
[19:58] <SineDeviance> both what i want to use, and the correct package :D
[21:22] <teward> patdk-wk: spamassassin eats most of my RAM currently on the box; next big user is Amavis. but the problem is, on a small email server (1GB RAM is low, yes), clamav's RAM usage is actually an issue.  Avast's solution seems to behave better in terms of resource usage
[21:45] <blackflow> teward: clamd (note the d) eating up a lot of RAM?
[21:55] <queeq> Has anyone got any problems with recent qemu update? My VMs on top of Xen are not starting anymore.
[21:55] <queeq> libvirtd gives this error: invalid argument: could not find capabilities for arch=x86_64
[21:56] <nacc> cpaelzer: --^
[21:56] <teward> blackflow: yep.
[22:03] <queeq> Is anyone here running VMs on Xen and have restarted a server after applying upgrades today?
[22:29] <sarnold> queeq: please file a bug report against whatever it is that actually does your vms, whether that's qemu, libvirt, or xen. Of the three the most recently changed was five days ago, so it'd be best to be more specific than "today's updates" -- dpkg -l output of the affected packages, etc., would be helpful
[22:30] <queeq> Thanks sarnold. I'm not sure it's a bug. I now tried downgrading qemu and it didn't help. Neither libvirt nor xen were upgraded recently
[22:31] <queeq> Upgrade that I suspected caused the issue included the following...
[22:32] <queeq> Upgrade: landscape-common:amd64 (16.03-0ubuntu2, 16.03-0ubuntu2.16.04.1), grub-common:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), makedev:amd64 (2.3.1-93ubuntu1, 2.3.1-93ubuntu2~ubuntu16.04.1), grub-xen-bin:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), qemu-system-x86:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10), grub2-common:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9),
[22:32] <queeq> grub-pc:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), libapparmor1:amd64 (2.10.95-0ubuntu2.5, 2.10.95-0ubuntu2.6), grub-pc-bin:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), libapparmor-perl:amd64 (2.10.95-0ubuntu2.5, 2.10.95-0ubuntu2.6), qemu-utils:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10), apparmor:amd64 (2.10.95-0ubuntu2.5, 2.10.95-0ubuntu2.6), wget:amd64 (1.17.1-1ubuntu1.1, 1.17.1-1ubuntu1.2),
[22:32] <queeq> grub-xen-host:amd64 (2.02~beta2-36ubuntu3.8, 2.02~beta2-36ubuntu3.9), qemu-block-extra:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10), qemu-system-common:amd64 (1:2.5+dfsg-5ubuntu10.9, 1:2.5+dfsg-5ubuntu10.10)
[22:34] <queeq> I don't think it could be caused by grub. I'm now trying to downgrade apparmor, but not sure it could have caused this either
[22:38] <queeq> Nah, apparmor downgrade hasn't helped
[22:39] <sarnold> pfew ;)
[22:39] <sarnold> not a surprise
[22:39] <sarnold> but still
[22:41] <queeq> Don't know what else to try... Something went wrong. And I haven't found recent information on this error. There were some bugs with qemu capabilities caching back in 2015, but that's it...
[22:41] <tyhicks> pfew times two
[22:41] <queeq> libvirtd verbose logging doesn't give any additional clues either
[22:44] <queeq> The only error is this: error : virCapabilitiesDomainDataLookupInternal:699 : invalid argument: could not find capabilities for arch=x86_64
[22:46] <sarnold> queeq: skim this mail and see if it rings any bells https://lists.ubuntu.com/archives/ubuntu-devel/2016-September/039492.html
[22:47] <queeq> Thanks, will do
[22:48] <tyhicks> queeq: I'm guessing that `virsh cpu-models x86_64` returns an error?
[22:49] <queeq> tyhicks: "this function is not supported by the connection driver: virConnectGetCPUModelNames"
[22:49] <tyhicks> queeq: is a libvirtd process even running?
[22:50] <queeq> Yes it is
[22:50] <tyhicks> odd
[22:50] <tyhicks> I'm no help here
[22:50] <queeq> thanks anyway :)
[22:50] <tyhicks> oh
[22:50] <tyhicks> I guess virConnectGetCPUModelNames could be a qemu/kvm thing
[22:51] <queeq> Maybe, but there's xen as a hypervisor, no kvm
[22:52] <queeq> That's why I suspected qemu upgrade to be the cause
[22:56] <queeq> sarnold: That mail hasn't rung any bells. It's mostly about migration between major versions.
[22:57] <queeq> In my case this was a very minor upgrade without any migration. This setup has been working fine for a long time. Until today, lol :D
[22:59] <compdoc> queeq, I have vms running on kvm, and installed the recent qemu updates, but haven't rebooted yet. how do you manage the vms? I'm guessing it's not virt-manager
[23:00] <queeq> It is virt-manager usually
[23:00] <queeq> But bridging is manual
[23:01] <compdoc> why do you mention bridging?
[23:01] <compdoc> I'm rebooting my host. let's see what happens
[23:01] <queeq> Because this is part of VM management :)
[23:01] <compdoc> I define bridges in /etc/network/interfaces
[23:02] <queeq> Me too, I turned off libvirt's networking because it conflicts with another bridge I have on the host
[23:03] <compdoc> one of the guests is Windows Server 2008, which provides dns, dhcp, and is the domain controller. so until it finishes booting, I can't browse
[23:04] <queeq> You seem to have a more complex setup. I've only got Linux guests
[23:04] <compdoc> both guests are running. the other guest is ubuntu server running bacula
[23:04] <queeq> So you had no problems
[23:05] <compdoc> you should save the xml file for the guests, and search for references to x86_64, or whatever the error is
[23:05] <queeq> There is a reference to it, but I think it's a very standard file
[23:06] <compdoc> I've had to cut out sections in the past, that were supported on centos kvm, for example, but not in ubuntu's kvm
[23:07] <compdoc> then just save and import the xml file
[23:07] <queeq> Oh, I thought you were talking about the qemu capabilities cache
[23:08] <compdoc> I mean using virsh to save the xml definition, edit it, then import it back
[23:09] <queeq> I think they're stored in xml anyway, /etc/libvirt/libxl/vmname.xml
[23:11] <queeq> Also accessible via virsh edit vmname
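The save/edit/import round-trip compdoc describes can be sketched as below. This is a dry-run illustration ("vmname" and the temp path are placeholders, and each command is printed rather than executed so the sketch is safe to run anywhere); on a real host you would drop the `run` wrapper:

```shell
# Dry-run sketch of the virsh round-trip ("vmname" is a placeholder VM name).
run() { echo "+ $*"; }                         # print instead of executing
run virsh dumpxml vmname '>' /tmp/vmname.xml   # save the current definition
run "${EDITOR:-vi}" /tmp/vmname.xml            # trim the unsupported section
run virsh define /tmp/vmname.xml               # import the edited definition
```

Note that `virsh edit vmname` collapses the three steps into one, but as queeq finds below, it fails when libvirt itself cannot look up the domain's capabilities, in which case editing the dumped XML out of band is the fallback.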
[23:14] <queeq> Haha, when I tried to edit it, it gives me the same error again
[23:14] <queeq> compdoc: what arch do you have set in those xml files?
[23:15] <compdoc> <type arch="x86_64" machine="pc-i440fx-trusty">hvm</type>
[23:15] <Boulevard> Hey everyone. I asked in the Ubuntu channel but I figure this is worth a shot too. I have four disks in my PC. Two are a raid0 array for Windows, and the other two are just standard use for data and whatnot. I'd like to dualboot Windows 10 and Ubuntu (or others) safely, but I don't know how to properly install alongside the raid, or install to one of the
[23:15] <Boulevard> other disks and where to put the bootloader for the latter idea. I was urged to try asking here, but I'm looking for desktop use. anyone have suggestions? Thanks.
[23:16] <compdoc> not sure how recent that is
[23:16] <queeq> Straaaange, same arch as I have
[23:16] <queeq> Could you show dpkg -l | grep qemu?
[23:18] <compdoc> sorry, that's an old backup file. this is what I use now, for more modern chipset features:   <type arch='x86_64' machine='pc-q35-2.5'>hvm</type>
[23:18] <queeq> Boulevard: your BIOS would try to read the MBR from a single disk first, anyway, so the way to do it would be to install grub on the one you are booting from
[23:19] <queeq> compdoc: arch is the same....
[23:19] <queeq> I have <type arch='x86_64' machine='xenfv'>hvm</type>
[23:19] <compdoc> https://pastebin.com/uN01w5nD
[23:20] <Boulevard> So I could safely drop linux into the 300gb or so I cleaned up on one of my other disks and then drop the loader on my raid?
[23:20] <Boulevard> I apologize, I haven't cut my teeth on these hardcore installs before
[23:21] <compdoc> queeq, so you're booting a xen kernel? is it a standard ubuntu package?
[23:22] <compdoc> the host, I mean
[23:24] <queeq> Thanks compdoc, looks similar to mine, but I don't have qemu for archs like ppc or sparc. I use a custom kernel.
[23:26] <queeq> Boulevard: you can drop the loader on any disk, `update-grub` utility should be able to find both windows and linux installations
[23:27] <queeq> You would then just need to point BIOS to the disk where grub is installed
[23:27] <queeq> By the disk I mean physical drive
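queeq's suggestion amounts to two commands, shown here as a dry-run sketch ("/dev/sdX" stands in for whichever physical drive the BIOS is pointed at, and each command is printed rather than executed); the real commands would be run from the installed Linux system, not the live USB:

```shell
# Dry-run sketch of installing GRUB for dual boot ("/dev/sdX" is a placeholder).
run() { echo "+ $*"; }        # print instead of executing
run grub-install /dev/sdX     # put GRUB in that drive's boot sector
run update-grub               # os-prober should then detect Windows as well
```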
[23:28] <Boulevard> So it'd see windows from the raid and linux from my other non raid
[23:29] <queeq> Oh, sorry, I missed there's RAID. What kind of RAID is that?
[23:29] <Boulevard> Yes, sorry.
[23:29] <Boulevard> Raid0. Bios controlled, not hardware
[23:30] <Boulevard> The CPU is a bit old, so I'm using raid to squeeze some speed out of the whole thing.
[23:30] <queeq> compdoc: what emulator do you have set in the xml?
[23:30] <nacc> Boulevard: fyi, bios raid is fakeraid and usually does not actually help
[23:31] <Boulevard> Eh? :o
[23:31] <Boulevard> Must be placebo effect then. I thought it helped a bit at least
[23:31] <queeq> Boulevard: not sure if it would work in this case. I remember long time ago I was trying to set up something like this with no success. Ended up using Linux mdraid or zfs
[23:31] <nacc> Boulevard: it might help a bit, but it's not real raid and isn't really accelerated
[23:32] <queeq> Neither mdraid nor zfs raids are cpu intensive
[23:32] <nacc> right
[23:33] <Boulevard> I suppose I'll just get a couple of new hard disks soon and dedicate an os to each of them then. I just reinstalled windows this weekend so I don't really feel like fiddling around with too much so soon
[23:33] <nacc> Boulevard: so the issue is just deciding where to put the bootloader? where is it now?
[23:33] <Boulevard> Hell they're 50 bucks on egg right now for 7.2 1TB's. (I'll take two dozen on the double :P)
[23:34] <Boulevard> The Linux bootloader? Nowhere. I'm running a live usb session right now
[23:35] <sarnold> Boulevard: some suggested reading before you build your 24-disk storage machine https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/
[23:35] <sarnold> the whole blog series is wonderful
[23:35] <Boulevard> Nah I don't need that much XD. Just being cheeky :P
[23:36] <queeq> Seems that I've lost hardware virtualization on my host
[23:36] <Boulevard> I see a 1TB volume under /dev/mapper/ and then sda and sdb, which are my two 500GB raid disk members
[23:37] <queeq> Tried creating new VM and it only has an arch option of xen (paravirt)
[23:37] <Boulevard> SDC is my 1tb media drive, which would be a candidate for install if it would work smoothly. sdd is the linux usb and sde is my external
[23:38] <queeq> wtf really, how's it possible?
[23:38] <Boulevard> Well. Whatever. I'll just order a couple disks and set it up like a normie. Thanks for the help guys.
[23:38] <Boulevard> Have a good afternoon :)
[23:39] <nacc> queeq: do you see the flag in /proc/cpuinfo?
[23:40] <queeq> Which one should it be?
[23:40] <drab> vmx
[23:40] <queeq> oh well
[23:40] <queeq> I know
[23:40] <nacc> or you can run `kvm-ok`
[23:41] <queeq> I had a power outage today and there were some issues with the BIOS setup. I guess I lost the vmx option there
[23:41] <queeq> It seems to have reset to defaults. Since this is a remote machine, I wasn't able to check thoroughly
[23:41] <nacc> strange
[23:41] <queeq> Thanks everyone!
[23:42] <queeq> The mystery is solved :)
[23:42] <drab> nacc: you don't happen to have tried to run kvm/libvirtd in a container, have you?
[23:42] <drab> and when I say tried I mean succeeded :)
[23:42] <queeq> This is a home server and there were some nasty outages lately. Everything started with it being unable to boot.
[23:43] <nacc> drab: no, i haven't tried that
[23:43] <nacc> drab: i assume it would only work in a privileged container, but even then ... maybe not :0
[23:44] <drab> nacc: yeah, doesn't look like it...
[23:44] <queeq> When a screen was attached, it was sitting on a BIOS warning about a faulty setup. I instructed the remote person to enter the BIOS, and upon entering it, Linux started to boot, so I thought it was all fine
[23:44] <queeq> Shite, killed half the night trying to troubleshoot this
[23:44] <nacc> queeq: sorry :/
[23:45] <queeq> np :) Without your help I would kill another couple of hours
[23:45] <queeq> Thanks all, good night
[23:45] <drab> nn
[23:47] <drab> is it ok to rant about xml in here? I don't want to offend anybody :)
[23:47] <compdoc> its too late for that :/
[23:47] <compdoc> jk
[23:47] <drab> heh
[23:48] <compdoc> queeq, if I remember right, the bit I had to cut out of the xml file was at the bottom
[23:49] <queeq> compdoc: thanks, the issue is resolved now. 99.9% probability is that my BIOS settings got screwed after power outage
[23:50] <compdoc> ah, cool
[23:50] <queeq> I have a dual-BIOS Gigabyte motherboard there which seems to have overwritten the main BIOS with the backup settings, and virtualization was turned off there.
[23:50] <sarnold> queeq: so you just had to turn on hardware accelerated vms in the bios?
[23:50] <queeq> There's no vmx flag in /proc/cpuinfo
[23:51] <queeq> sarnold: I dunno yet, will ask a person who has physical access to the computer to do it tomorrow
[23:51] <sarnold> ugh
[23:51] <queeq> But considering everything I've seen today I'm pretty sure this is the issue
[23:52] <drab> is anybody aware of any nefarious consequence if I take out the 127.0.1.1 hostname entry from /etc/hosts ?
[23:52] <drab> it's getting in the way quite a bit, and the reason it was added seems to be some old bug
[23:52] <sarnold> I've never once heard of it getting in the way
[23:53] <drab> sarnold: ok, let me tell you about it, then you will have :)
[23:53] <compdoc> 127.0.0.1 is still there?
[23:53] <drab> yes
[23:53] <drab> with localhost
[23:54] <compdoc> some systems have it, some dont. <shrug>
[23:54] <drab> but 127.0.1.1 with the hostname is added at install time
[23:54] <compdoc> yup
[23:54] <drab> sarnold: what happens is that people, me included, use "localhost" to refer to, well, the local host
[23:54] <drab> they otherwise use "hostname" to refer to the ip/interface that hostname should resolve to
[23:54] <drab> however, because of the 127.0.1.1 entry, using the hostname still refers to localhost
[23:55] <drab> when configuring certain daemons, if you use the hostname meaning a certain public ip/interface (could be lan), you get screwed because the daemon starts to listen on localhost
[23:55] <nacc> sounds like bugs in those daemons
[23:55] <drab> if I want something to listen on localhost I will say, you guessed it, localhost
[23:55] <nacc> because a hostname does not define an interface
[23:56] <nacc> if you want to listen on a public ip, use the public ip?
[23:56] <nacc> or specify the interface to use
[23:56] <drab> the daemons are doing the right thing, they are calling gethostbyname
[23:56] <nacc> what happens when hostname resolves to multiple IPs?
[23:56] <drab> which depending on how nss is configured will likely go hosts, dns
[23:58] <sarnold> drab: thanks for the explanation. I've never seen anyone use hostnames quite like that before :) normally people either want wildcard binds or they want to bind to specific IPs or interfaces.
[23:58] <drab> nacc: each ip would likely have its own hostname (plus a cname to all of those), or if it doesn't specifying the ip itself makes sense then
[23:59] <drab> nacc: I'm not saying it's a general situation where specifying the hostname is always the right thing to do
[23:59] <nacc> i don't think a 'hostname' uniquely identifies an interface
[23:59] <drab> I'm pointing to what seems a logical assumption: localhost means localhost, hostname means "something else". if the ip it points to resolves to the local machine, that's fine
[23:59] <drab> nacc: sure, I'm not saying it should be
[23:59] <nacc> and it seems like the daemons you are using make that assumption
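The behaviour drab describes can be reproduced against a throwaway hosts file rather than the live /etc/hosts (the hostname "myhost" is a stand-in): a resolver that consults the hosts file first hands a daemon 127.0.1.1 when it asks for the machine's own name, so the daemon binds to loopback instead of the LAN address.

```shell
# Demonstrate the installer-added 127.0.1.1 entry resolving the machine's own
# hostname to loopback, using a temporary file instead of the real /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 myhost\n' > "$hosts"
# Emulate a hosts-file lookup: the first entry whose name field matches wins.
awk '$2 == "myhost" { print $1; exit }' "$hosts"   # -> 127.0.1.1
rm -f "$hosts"
```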