[04:32] <ruben23> hi guys, can anyone help with how to add a user or a user email on my postfix mail server? anyone have an idea?
[08:30] <lordievader> Good morning
[15:56] <vlm> i made a raid 1 from two raid 0 arrays with mdadm, how do i go about scrubbing the raid? is it enough to scrub the main raid 1?
[16:00] <sdeziel> vlm: did you kind of re-implement raid10 by stacking multiple md devices?
[16:28] <vlm> sdeziel: dunno, maybe. i've never made a raid before though, thought this was how i could make a raid 10, or could i do it right away?
[16:29] <sdeziel> vlm: I got confused, what you built is called raid0+1. It seems that mdadm requires stacking like you did for this raid level. raid10 is similar (but inverted) and has native support in mdadm, no stacking required
[16:30] <vlm> sdeziel: ah ok, i'll head over to kernel.org and try to read the guide there, and read up on the levels as well. thanks for the answer
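On the scrubbing question above: with raid1 stacked over two raid0 legs, the legs have no redundancy of their own, so the consistency check is triggered on the top-level raid1. A sketch, assuming the top-level array is md2 (device names are placeholders):

```shell
# Kick off a consistency check (scrub) on the top-level raid1;
# md2 is an assumed name for the raid1 built on the two raid0 legs.
echo check | sudo tee /sys/block/md2/md/sync_action

# Watch progress, then inspect the mismatch counter afterwards.
cat /proc/mdstat
cat /sys/block/md2/md/mismatch_cnt
```

On Ubuntu, the mdadm package also ships /usr/share/mdadm/checkarray with a cron job that runs this check monthly, so manual scrubs are often unnecessary.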
[16:31] <sdeziel> vlm: you should probably take a look at ZFS too. I've switched away from mdadm to ZFS and wouldn't look back
[16:32] <sdeziel> there are some special edge cases where mdadm is a better solution but for pretty much everything else, I feel ZFS nails it
[16:32] <vlm> sdeziel: yeah, i always wanted to give it a go, it's just that i keep reading here and there that it's not as stable as it is in freebsd and such?
[16:33] <sdeziel> vlm: I seriously question those claims. ZFS is in 'main' on Ubuntu, so you get good support for it. Been working well since 16.04 IIRC
[16:33] <vlm> sdeziel: also i think with raidz, which is what i wanted, i couldn't easily expand it without requiring much more hardware, or so i think; it got expensive
[16:34] <vlm> sdeziel: oh, that's nice to hear though. yeah, i always wanted a raidz or so
[16:34] <sdeziel> true, growing a zpool requires some thinking. That's why I usually go with simple mirrors that are all tied up into a stripe at the zpool level
[16:35] <sdeziel> this gets you a kind of raid10 on steroids that is easy to grow
[16:36] <vlm> sdeziel: i'll look into that, though this is just a simple raid and i don't need snapshots for it, so maybe i'll be ok with mdadm
[16:37] <sdeziel> maybe you don't need snapshots but you surely want checksums to protect your data ;)
[16:37] <sdeziel> and transparent compression, and ...
[16:37] <vlm> sdeziel: though don't the mdadm scrubs do that?
[16:37] <sdeziel> not really
[16:38] <sdeziel> or I should say, really not ;)
[16:38] <sdeziel> I don't have time to go into details ATM but if you care about your data, ZFS has more to offer natively than mdadm does
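To make the checksum point concrete: every ZFS block carries a checksum, so a scrub verifies data against it and can repair the bad half of a mirror, whereas an mdadm check can only notice that the halves disagree. A hedged sketch, with `tank` as a placeholder pool name:

```shell
# A scrub reads everything and verifies it against the stored
# checksums; on a mirror, a bad copy is rewritten from the good one.
sudo zpool scrub tank
zpool status tank            # CKSUM column counts checksum errors found

# Transparent compression, as mentioned above:
sudo zfs set compression=lz4 tank
zfs get compressratio tank   # achieved compression ratio
```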
[16:40] <vlm> sdeziel: alright, i think i'll consider trying it then, if i can achieve the same 0+1 setup
[17:38] <vlm> sdeziel: i now made one vdev, i believe, consisting of 2 mirrors; that should be the same as 0+1, shouldn't it?
[17:38] <sdeziel> vlm: that's more akin to raid10
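The layout described above (a pool striped across two mirror vdevs) is created in a single command; the disk names below are placeholders, and /dev/disk/by-id/ paths are safer in practice:

```shell
# One pool, two mirror vdevs; ZFS stripes across the vdevs, which is
# what makes this raid10-like. sda..sdd are placeholder device names.
sudo zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd

sudo zpool status tank   # shows both mirror vdevs
zpool iostat -v 1        # per-vdev throughput, handy while benchmarking
```

Growing it later is one command too: `sudo zpool add tank mirror /dev/sde /dev/sdf` appends a third mirror to the stripe, which is the easy-to-grow property mentioned earlier.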
[17:41] <vlm> sdeziel: aren't those in effect the same or so?
[17:41] <vlm> sdeziel: they seem so similar to me -_-
[17:42] <sdeziel> vlm: I'm not an expert by any means, I'm basically looking at https://en.wikipedia.org/wiki/Nested_RAID_levels
[17:42] <vlm> sdeziel: i'm seeing a noticeable improvement in throughput now though, up to 200MB/s+, up from 184MB/s or something
[17:42] <sdeziel> vlm: but ZFS is a different beast
[17:44] <sdeziel> vlm: while you benchmark your new array, check 'zpool iostat -v 1'
[17:47] <vlm> sdeziel: my initial reading tells me they are the same in performance and redundancy, just a different layout; also raid 10 might not be supported on ancient hardware, maybe
[17:48] <vlm> sdeziel: my system is very low on ram, so that might get challenging. i tuned arc_max and min for starters, now i need to read up on how to track down if there are memory issues
[17:50] <sdeziel> vlm: you can reduce the ARC usage if you really cannot afford the RAM cost... it will harm the performance of course
[17:51] <sdeziel> you can tune the ARC usage by tweaking the 'primarycache' setting on any ZFS dataset or ZVOL
[17:51] <vlm> sdeziel: just a random search on the web says about 1G of ram per 1TB of storage
[17:51] <sdeziel> on tightly constrained machines (512M of available RAM), I ran with just metadata caching
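The two knobs discussed above can be sketched like this; the 1 GiB cap and the dataset name are assumptions, and runtime behaviour can vary between ZFS-on-Linux versions:

```shell
# Cap the ARC at 1 GiB (the parameter takes bytes).
ARC_MAX=$((1 * 1024 * 1024 * 1024))
echo "$ARC_MAX" | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Persist the cap across reboots via a modprobe option.
echo "options zfs zfs_arc_max=$ARC_MAX" | sudo tee /etc/modprobe.d/zfs.conf

# Cache only metadata (not file data) for one dataset, as described
# for the 512M machine; 'tank/data' is a placeholder dataset name.
sudo zfs set primarycache=metadata tank/data
```

`arc_summary`, mentioned later in the discussion, shows the current ARC size and hit rates, so you can see whether the cap is being respected.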
[17:52] <vlm> sdeziel: what size of pool?
[18:30] <sdeziel> vlm: it was a mirror of 2x 750GB
[18:39] <vlm> sdeziel: nice to be able to run zfs in such little memory. though i saw someone mention something regarding writes and block size, so if i turn off primarycache=all and use just metadata, would it result in much heavier write operations on disk?
[18:40] <sdeziel> vlm: the ARC is mostly (exclusively?) for reads
[18:42] <sdeziel> vlm: I'm not advising to tweak primarycache blindly. This should only be done if you feel the ARC doesn't shrink enough when there is memory pressure
[18:43] <vlm> sdeziel: considering adding some ram maybe, i'll see how it turns out first
[18:43] <sdeziel> vlm: I would suggest you run with stock default params for a while and then only try to fine tune a thing or two. The ZFS folks have worked hard to get good defaults and they know their stuff way better than I do
[18:44] <sdeziel> vlm: ZFS will happily take all the RAM you throw at it, but it can also manage surprisingly well with very little.
[18:45] <sdeziel> vlm: how much RAM do you have and how much data are we talking about?
[18:45] <sdeziel> BTW, the arc_summary script will show you a lot of nice stats if that interests you
[18:46] <vlm> sdeziel: i'm at about 4GB, though i've got a lot going on, so i think i'm free about 1-2GB
[18:49] <vlm> sdeziel: i'm off for a bit, thanks for all the useful information. that script is also very informative. i'll do some reading on the zpool info as well, whole other world this is compared to mdadm -_-
[18:49] <sdeziel> vlm: you are welcome
[20:24] <sveinse> I'm running 18.04 server and I'm setting up ipv6 with SLAAC, and I need the server to generate a stable IP address. I notice that my ubuntu 18.04 and 20.04 do this out of the box, but the server does not. Where might I find more info about this?
[20:25] <sveinse> *my ubuntu 18.04 and 20.04 desktop
[20:59] <RoyK> sveinse: you can turn off ipv6 privacy extensions - that'll make the SLAAC address based on the MAC address
[20:59] <RoyK> sveinse: very useful if privacy isn't relevant
[21:00] <RoyK> sveinse: I used that for a bunch of infoscreens running off raspberry pi machines, the host part of their ip defining the machine and the webserver using that to serve the correct data
[21:00] <RoyK> easy peasy :D
[21:01] <sarnold> RoyK: ooo that sounds neat
[21:03] <RoyK> sarnold: scales well too ;)
[21:04] <sveinse> RoyK: my ubuntu desktop creates two ips, one "temporary dynamic" and one "dynamic mngtmpaddr". And I believe the latter is a stable IP but still using the priv extensions. This is what I want for my server too.
[21:08] <RoyK> sveinse: for the server, I'd use a static IP, not SLAAC, but then, that's perhaps just me (although I doubt it's "just" me)
[21:10] <RoyK> remember that if you choose a SLAAC address for a server and the server is replaced, it'll get a new IP. Just use a static address for those. It's easier unless you have a *lot* of servers to manage, in which case there are other ways to sort it out
[21:12] <sveinse> RoyK: yes. I want to skip setting up DHCPv6, nor do I really want to set static IPs on each server. So I would have hoped SLAAC could save me the trouble, but I do indeed see the contradiction in what I want to achieve
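One hedged way to get the stable SLAAC address discussed here on an 18.04 server: netplan's networkd backend leaves IPv6 privacy extensions off unless `ipv6-privacy: true` is set, and without temporary addresses the SLAAC address is derived from the MAC (EUI-64), so it stays stable as long as the NIC does. A sketch; the interface name is an assumption:

```yaml
# /etc/netplan/01-netcfg.yaml (sketch; 'eno1' is an assumed interface name)
network:
  version: 2
  ethernets:
    eno1:
      accept-ra: true       # take the prefix from router advertisements (SLAAC)
      dhcp6: false          # no DHCPv6
      ipv6-privacy: false   # no temporary addresses; EUI-64 address is stable
```

Apply with `sudo netplan apply` and inspect with `ip -6 addr show eno1`; a similar effect can be had with the sysctl `net.ipv6.conf.eno1.use_tempaddr=0`.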
[21:14] <RoyK> sveinse: may I ask what sort of environment this is?
[21:15] <sveinse> I wonder which is least troublesome: DHCPv6 for central config of IP addresses, or static config on each of the servers :D For ipv4 it's central DHCP today
[21:15] <RoyK> (probably up north, so that dhcpv6 will freeze during winter)
[21:16] <sveinse> RoyK: A small office network. Around 20-ish servers with fixed ip
[21:16] <RoyK> ok
[21:17] <RoyK> then it might perhaps be just as much work setting up dhcpv6 as setting the ipv6 addresses manually? ;)
[21:18] <sveinse> For some unexplained reason I have a mental impression that SLAAC is better than DHCPv6 due to it being stateless. Thus for most clients and users, I do want SLAAC for them.
[21:18] <RoyK> not as fancy, but if set manually, it'll work regardless of the dhcp server running or not
[21:18] <sveinse> The reason for having ipv4 DHCP assignment of static resources is because then the dhcp server is the one stop shop for assignments
[21:19] <RoyK> we still stick to static ip addresses on servers at work, and we have a wee bit more than 20 servers
[21:20] <sveinse> RoyK: yeah, another company with 1400 employees does that too on its servers
[21:20] <RoyK> sveinse: I work for oslomet
[21:21] <sveinse> RoyK: oh, cool, this is in Stavanger
[21:21] <RoyK> I guessed somewhere in .no by your name and ip :)
[21:23] <sveinse> May I ask if you use any of the link-local ips, fe80::, for anything in the network infrastructure? Or only the global scoped ips?
[21:24] <sveinse> I noticed that windows tends to use fe80:: as the default GW, while linux uses the global address
[21:24] <RoyK> I'm not sure about that, as for default gateway or similar. there might be something out there. I haven't worked too much with the networking details
[21:24] <sveinse> ok, thanks