[03:51] <axisys> with zfs all bets are off.. I am from solaris and I loved zfs.. but I am not sure ubuntu will win the fight with oracle on the zfs license.. so maybe btrfs is a better route?
[03:52] <axisys> lots of ubuntu servers using zfs, or rather btrfs?
[03:55] <genii> openzfs is pretty mature
[04:00] <axisys> genii: ah.. I forgot about that.. so people are using it?
[04:00] <genii> Yes
[04:00] <genii> ..more than btrfs, anyhow
[04:01] <axisys> genii: which package offers openzfs ?
[04:02] <genii> zfsutils
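[editor's note: on Ubuntu 16.04 and later the OpenZFS userland ships as the zfsutils-linux package; a minimal install sketch (zpool status only lists pools if any already exist):]

```shell
# Install the OpenZFS userland tools on Ubuntu (16.04 or later)
sudo apt install zfsutils-linux
# Confirm the kernel module and tools are usable; lists pools, if any
sudo zpool status
```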
[04:02] <axisys> I see it https://wiki.ubuntu.com/ZFS
[04:02] <axisys> genii: thanks a lot
[04:02] <genii> Glad to assist
[04:05] <axisys> so I guess, if I use /home as a zfs partition then I cannot use quota per user
[04:05] <drab> yeah when I said zfs I meant openzfs. ppl definitely use it, ZOL list is pretty active and community very competent afaics
[04:06] <drab> I've been building new infra nodes with it for 6 months now and converting more stuff to it
[04:06] <drab> so far I'm very happy with what I've seen and snapshotting/send/receive has made a whole lot of things easier
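[editor's note: a minimal sketch of the snapshot/send/receive workflow mentioned above; the pool, dataset, and host names (tank/data, backuphost, backup/data) are illustrative assumptions:]

```shell
# Take a point-in-time snapshot of a dataset
zfs snapshot tank/data@nightly-1
# Replicate the full snapshot to another machine over ssh
zfs send tank/data@nightly-1 | ssh backuphost zfs receive backup/data
# Later, send only the delta between two snapshots (incremental)
zfs snapshot tank/data@nightly-2
zfs send -i tank/data@nightly-1 tank/data@nightly-2 | ssh backuphost zfs receive backup/data
```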
[04:07] <axisys> drab: so user quota will not work the same way as ext4 unless I create one zfs partition per user
[04:07] <drab> plus I use a lot of lxd and zfs is the default backend since it's, like genii said, actually more mature than btrfs at this point
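[editor's note: pointing lxd at a zfs pool can be sketched like this; the pool name tank and dataset tank/lxd are assumptions, and the lxc storage command applies to lxd 2.9+:]

```shell
# Create an lxd storage pool backed by an existing zfs dataset
lxc storage create zpool zfs source=tank/lxd
# Launch a container on that pool; each container gets its own dataset,
# so snapshots and copies are cheap clones
lxc launch ubuntu:16.04 web1 -s zpool
```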
[04:08] <axisys> also the /boot partition will still have to be ext* .. unless boot works too with zfs like solaris does?
[04:08] <drab> axisys: I don't know how it works on solaris, but on linux datasets aren't really partitions even tho they kinda look like it
[04:08] <drab> but they are completely flexible, kinda like the lvm/vg stuff, but much much easier
[04:08] <drab> axisys: boot can work, but that's where I haven't pushed the issue yet
[04:08] <axisys> drab: right .. datasets.. it has been 5+ yrs since I played with it in solaris.. we are mostly a linux shop now
[04:09] <drab> I use a small raid1 for that or a sataDOM
[04:10] <axisys> drab: I won't use a sataDOM, I like disk hot swaps .. in any case.. so a raid1 and then the rest of the 10 disks as single disk luns and then create zpools ?
[04:10] <axisys> raid1 will be used for ext2 for /boot
[04:11] <drab> fair enough, we don't have large budgets and the bays are too precious for us
[04:11] <drab> that's what I do, yeah, minus the lun thing if you mean using the onboard raid
[04:11] <drab> generally the recommendation is to not mix hw raid with zfs
[04:11] <axisys> drab: yes single disk raid0 logical units
[04:12] <axisys> drab: but then I have to bypass raid .. hmm
[04:12] <drab> up to you, everything I read when I picked it up 6 months ago said to avoid hw raid and just do passthrough
[04:12] <drab> if you google this stuff out you'll see most folks flash their raid controller in IT mode/passthrough
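[editor's note: with the controller in IT/passthrough mode, the 10-disk layout discussed above might look like this; the raidz2 level and the by-id device names are assumptions (whole-disk /dev/disk/by-id paths are preferred over sdX names, which can reorder across boots):]

```shell
# The small raid1 for /boot stays outside zfs (mdadm or similar);
# the remaining 10 passthrough disks form one pool
sudo zpool create tank raidz2 \
    /dev/disk/by-id/ata-DISK00 /dev/disk/by-id/ata-DISK01 \
    /dev/disk/by-id/ata-DISK02 /dev/disk/by-id/ata-DISK03 \
    /dev/disk/by-id/ata-DISK04 /dev/disk/by-id/ata-DISK05 \
    /dev/disk/by-id/ata-DISK06 /dev/disk/by-id/ata-DISK07 \
    /dev/disk/by-id/ata-DISK08 /dev/disk/by-id/ata-DISK09
sudo zpool status tank
```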
[04:12] <axisys> of course I won't have to worry about taking the server down to replace the raid controller battery :-)
[04:13] <drab> fwiw root on zfs is possible and there's tutorials about it and "it works"
[04:13] <drab> it just didn't seem stable enough to me when I looked at it
[04:13] <drab> too much tinkering for a server
[04:14] <axisys> I won't do it now, 1000s of users rely on this server :-)
[04:14] <drab> :)
[04:14] <axisys> drab: how about user quota ?
[04:14] <axisys> drab: short of a dataset per user :-)
[04:16] <drab> that's how I do it, create a /homes DS, then one DS per user, works very well for me, but I'm not doing 1000s of users, maybe at that scale something works differently
[04:16] <drab> still, on paper that's the design I've seen implemented and it makes sense to me, it's a single instance command including quota setting if you use properties and inheritance
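[editor's note: the per-user dataset design drab describes can be sketched as follows; the pool name tank and the 10G quota are assumptions. Note that quota is set per dataset (it is not an inherited property), while properties like compression do inherit from the parent:]

```shell
# Parent dataset mounted at /home; children inherit its inheritable
# properties (e.g. compression), but each gets its own quota
zfs create -o mountpoint=/home tank/homes
zfs set compression=lz4 tank/homes
# One dataset (and one command) per user, quota included
zfs create -o quota=10G tank/homes/alice
zfs create -o quota=10G tank/homes/bob
zfs get -r quota tank/homes
```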
[04:17] <axisys> drab: yes I am familiar with zfs.. built solaris containers with zpool and all the other magic.. but just out of practice, so need a refresher..
[04:18] <axisys> they call it zones / containers depending on the context.. :-)
[04:19] <drab> cool, never used solaris
[04:19] <drab> gtg, ttyl, best of luck with setting the new box up
[04:19] <axisys> drab: thanks for your help!
[08:00] <lordievader> Good morning
[17:07] <IShavedForThis_> what's the best program for automatic plex renaming on ubuntu server?
[17:09] <tomreyn> IShavedForThis_: what is "plex renaming"?
[17:23] <hehehe> hi
[17:52] <hehehe> who here used scaleway bare metal servers?
[17:52] <hehehe> any good?
[17:52] <hehehe> https://www.scaleway.com/baremetal-cloud-servers/
[17:52] <hehehe> haha
[17:56] <hehehe> The C1 server has a 4-cores ARMv7 CPU with 2GB of RAM and a 1 Gbit/s network card.
[17:56] <hehehe> :D
[17:59] <hehehe> All of these servers are designed for the cloud and for horizontal scaling.
[17:59] <hehehe> ...
[17:59] <hehehe> so nice
[18:04] <hehehe> also
[18:04] <hehehe> how to prevent an admin from planting a logic bomb etc?
[18:05] <hehehe> I think all commands should be sent in text file reviewed by admin 2
[18:06] <hehehe> I wonder if verelox girl chats here
[18:06] <hehehe> why did u delete all customers' data ? :D