[00:04] <genii> Probably has symlinks
[00:04] <genii> Or recursion
[00:05] <genii> You should probably try rsync instead
[00:42] <PCatinean> genii, it certainly does
[00:42] <PCatinean> can I do rsync on an ftp server?
[00:48] <sdeziel> PCatinean: yeah, rsync can operate over SSH, like SCP
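To illustrate sdeziel's point: rsync cannot speak FTP itself; it needs an rsync daemon or an SSH login on the remote end. A minimal sketch, with host, user, and paths as placeholders:

```shell
# Mirror a local directory to a remote host over SSH (placeholders throughout):
rsync -avz --partial --progress /local/data/ user@remote.example.com:/backup/data/

# If the remote side only offers FTP, an FTP-aware mirroring tool such as
# lftp is needed instead (mirror -R uploads local -> remote):
lftp -e 'mirror -R /local/data /remote/data; quit' ftp://user@ftp.example.com
```

The trailing slash on the rsync source copies the directory's contents rather than the directory itself.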
[05:17] <maxagaz> Hi
[08:03] <lordievader> Good morning
[10:33] <peetaur2> any bcache users here? since systemd, any machine with bcache (not even as anything required for booting or any services) ends up as either https://brockmann-consult.de/peter2/2017-08-14%20ubuntu%2016.04%20emergency%20mode%20fail.png  or https://brockmann-consult.de/peter2/2017-11-24%20node101%20systemd%20bcache%20fail.png
[10:34] <peetaur2> first is as normal... and 2nd is with noauto in all fstab entries, including swap and boot; I don't believe it matters whether root has it, so I didn't set it there
[10:34] <peetaur2> I'll test it on root too... but it shouldn't change anything, since root is mounted in the initramfs stage
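A common workaround for the emergency-mode drop described above is to mark the bcache-backed entries in /etc/fstab so systemd tolerates a missing device. A sketch, with device, mountpoint, and timeout as placeholder choices:

```shell
# /etc/fstab fragment: "nofail" keeps systemd from entering emergency mode
# when the device is absent; the timeout bounds how long boot waits for it.
/dev/bcache0  /data  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
```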
[11:17] <albech> Good morning all you smart people. I have noticed that btrfs is available and am wondering if anyone has experience with it, regarding both performance and stability. I will be using it to merge (no raid) 6x2TB volumes from our SAN. Unfortunately Xen cannot allocate more than 2TB at a time. This 12TB volume will not be running as any kind of raid, since the underlying hardware raid of our SAN will provide that. The volume will primarily be used as file storage
[11:18] <albech> I have always been a fan of EXT and XFS from my old IRIX days.
[11:19] <albech> But it really seems like btrfs is providing some interesting and simplified features, that would be harder to achieve with ext4
[11:31] <JanC> ZFS is also available
[11:34] <albech> JanC: how does it compare to btrfs in the above scenario?
[11:37] <peetaur2> zfs is mature, but not native or properly integrated
[11:37] <JanC> right now I think it's more reliable than btrfs (but it's been some time since I used btrfs); I can't say much about performance
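For the scenario albech describes (concatenating SAN volumes with no filesystem-level redundancy), the ZFS route could look roughly like this; the pool name and device paths are placeholders, and redundancy is deliberately left to the SAN:

```shell
# A pool of plain top-level vdevs stripes data across them with no
# ZFS-level redundancy (losing any one device loses the pool):
zpool create tank /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
zfs set compression=lz4 tank
zfs create tank/storage
```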
[11:37] <peetaur2> btrfs is not mature and said not to be stable... the btrfs wiki says raid5/6 is not to be used (preview only), and others say single devices work great but raid can corrupt
[11:37] <albech> I only played around with it on a couple of Sun systems..
[11:38] <albech> peetaur2: I will not be using raids
[11:38] <peetaur2> how will you "merge" then?
[11:38] <JanC> technically it's somewhat like a RAID 0
[11:38] <JanC> I suppose?
[11:38] <albech> more like jbod
[11:39] <peetaur2> and I'm not sure if a jbod setup exists in btrfs (I'd imagine it would, as "single" with multiple disks), but the people saying raid is less stable than single mean multiple devices rather than which raid mode the devices are in.
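The multi-device "single" mode peetaur2 guesses at does exist: the btrfs data profile `single` concatenates devices jbod-style. A hedged sketch, with device paths as placeholders (mkfs destroys their contents):

```shell
# Data in "single" (no striping or redundancy), metadata mirrored as raid1,
# which is the mkfs.btrfs default for multiple devices:
mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt/storage
btrfs filesystem usage /mnt/storage
```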
[11:40] <peetaur2> and personally, I have used raid10 in the past, and had issues but not so bad... and probably fixed ages ago. And currently I just use it as a single device (just using snapshots), and never had corruption, just some out-of-space issues after deleting snapshots when it doesn't clear the space
[11:41] <albech> i will be using the underlying raid of the SAN, so relying on the SAN's ability to recover from disk failure rather than btrfs'
[11:43] <albech> so annoyed that in 2017 volumes larger than 2TB are not supported by Xenserver.. :/
[11:45] <albech> JanC: are you sure it's like a raid0? because from what I can read they have a raid0 and something that looks like jbod.
[11:48] <peetaur2> is it citrix? I didn't believe that xen only supported 2TB and looked it up and only found one citrix post where they just said using GPT solved it I think
[11:50] <albech> peetaur2: yeah citrix
[11:52] <albech> peetaur2: I didn't believe it either, until I dug around a little.. it's going to be a dealbreaker next time we review our infrastructure setup.
[11:52] <peetaur2> use ceph and kvm :)
[11:52] <peetaur2> for large scale, something with openstack
[11:53] <albech> peetaur2: unfortunately i think most of the organization is leaning towards vmware :(
[11:53] <peetaur2> hahah "2TB minus 4GB"  https://docs.citrix.com/content/dam/docs/en-us/xenserver/xenserver-7-0/downloads/xenserver-7-0-config-limits.pdf
[11:53] <albech> peetaur2: the management has been bombarded with VMware ads in their fancy magazines and apparently it is working
[11:53] <peetaur2> years ago it was more vmware...many switched to openstack
[11:53] <peetaur2> openstack costs nothing for license, but more for labor, overall less
[11:54] <peetaur2> and of course you can get support contracts and stuff too
[11:54] <albech> seems like a pain to admin
[11:54] <peetaur2> yeah, as I said, I think it costs more labor
[11:55] <albech> doubt they will want to commit to that with the price of manpower in Denmark :/
[11:55] <peetaur2> why does this pdf just say nfs and lvm...that's not all that's available is it? can you use ceph rbd and iSCSI?
[11:55] <albech> yes
[11:55] <albech> iscsi
[17:07] <samba35> if i am using rndis/usb (mobile and tethering) to use the internet, and i have two such devices/phones, can i aggregate internet speed using ovs (openvswitch) in ubuntu?
[17:29] <metastable> samba35: Best you can do is load-balance. You won't get double the speed on a single connection, but you can handle double the connections at those speeds.
[17:30] <samba35> i have two usb connections / one isp but two connections
[17:31] <samba35> using two mobiles over usb
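The load balancing metastable describes can be done without openvswitch, using a multipath default route. Interface names and gateway addresses below are placeholders, and, as noted above, any single TCP connection still travels over only one link:

```shell
# Per-flow balancing across two USB-tethered links (run as root):
ip route replace default \
    nexthop via 192.168.42.1 dev usb0 weight 1 \
    nexthop via 192.168.43.1 dev usb1 weight 1
```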
[18:22] <nafallo> eexxiitt
[22:44] <keithzg> Hrmm, so I guess these days (17.10) systemd-resolve has taken over from resolvconf in the role of "reason why one ends up just hand-editing /etc/resolv.conf"? :P
[22:54] <metastable> keithzg: What issues are you having that require manual editing of resolv.conf?
[23:11] <keithzg> metastable: Inability to resolve any addresses
[23:12] <metastable> keithzg: That would seem to be a problem with the configuration you're handing resolved.
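On 17.10, a couple of quick checks help confirm whether systemd-resolved actually knows about any DNS servers (output varies per system):

```shell
# Show the DNS servers resolved has learned, per link:
systemd-resolve --status

# /etc/resolv.conf should normally be a symlink into /run/systemd/resolve/:
ls -l /etc/resolv.conf
```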
[23:13] <keithzg> metastable: *shrug* nothing in the configuration of the server in question has changed, but it just stopped resolving properly after recent updates.
[23:14] <keithzg> The computer in question has a static IP configured in /etc/network/interfaces and it seems that for whatever reason systemd-resolve is no longer getting the DNS server information from that configuration anymore.
[23:14] <keithzg> Easy enough to just add it manually to a manually-created /etc/resolv.conf, but obviously I shouldn't *have* to do that.
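For context, an ifupdown stanza that both sets a static IP and hands DNS servers to the resolver stack looks roughly like this; the interface name and addresses are hypothetical, and the dns-* lines are what resolvconf picks up:

```shell
# /etc/network/interfaces sketch (hypothetical values):
auto enp3s0
iface enp3s0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1 192.168.1.2
```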
[23:14] <metastable> Would help to see the content of /e/n/i.
[23:16] <keithzg> metastable: Here it is: https://paste.kde.org/p2yz0gaxb  (but again, this configuration is unchanged, apparently since May 26th)