=== Napsterbater is now known as Guest96179
=== Napsterbater_ is now known as Napsterbater
=== Napsterbater is now known as Guest11261
=== Napsterbater_ is now known as Napsterbater
[04:32] hi guys, can anyone help with how to add a user or a user email on my postfix mail server? anyone have an idea?
[08:30] Good morning
=== denningsrogue6 is now known as denningsrogue
[15:56] i made a raid 1 from two raid 0 arrays with mdadm. how do i go about scrubbing the raid? is it enough to scrub the main raid 1?
[16:00] vlm: did you kind of re-implement raid10 by stacking multiple md devices?
[16:28] sdeziel: dunno, maybe. i've never made a raid before though, thought this was how i could make a raid 10. or could i do it right away?
[16:29] vlm: I got confused, what you built is called raid0+1. It seems that mdadm requires stacking like you did for this raid level. raid10 is similar (but inverted) and has native support in mdadm, no stacking required
[16:30] sdeziel: ah ok, i'll head over to kernel.org and try to read the guide there, and read up on the levels as well. thanks for the answer
[16:31] vlm: you should probably take a look at ZFS too. I've switched away from mdadm to ZFS and wouldn't look back
[16:32] there are some special edge cases where mdadm is a better solution but for pretty much everything else, I feel ZFS nails it
[16:32] sdeziel: yeah, i always wanted to give it a go, it's just that i keep reading here and there that it's not as stable as it is on freebsd and such?
[16:33] vlm: I seriously question those claims. ZFS is in 'main' on Ubuntu, so you get good support for it. Been working well since 16.04 IIRC
[16:33] sdeziel: also i think with raidz, which i wanted, i couldn't easily expand it without requiring much more hardware, or so i think, it got expensive
[16:34] sdeziel: oh that's nice to hear though, yeah i always wanted a raidz or so
[16:34] true, growing a zpool requires some thinking. That's why I usually go with simple mirrors that are all tied up into a stripe at the zpool level
[16:35] this gets you a kind of raid10 on steroids that is easy to grow
[16:36] sdeziel: i'll look into that, though this is just a simple raid and i don't need snapshots for it, so maybe i'll be ok with mdadm
[16:37] maybe you don't need snapshots but you surely want checksums to protect your data ;)
[16:37] and transparent compression, and ...
[16:37] sdeziel: though don't the mdadm scrubs do that?
[16:37] not really
[16:38] or I should say, really not ;)
[16:38] I don't have time to go into details ATM but if you care about your data, ZFS has more to offer natively than mdadm can do
[16:40] sdeziel: alright, i think i'll consider trying it then, if i can achieve the same setup, 0+1
[17:38] sdeziel: i now made one pool consisting of 2 mirror vdevs, i believe. that should be the same as 0+1, shouldn't it?
[17:38] vlm: that's more akin to raid10
[17:41] sdeziel: aren't those in effect the same or so?
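(For reference, a rough sketch of the stacked mdadm layout discussed above and of how an md scrub is triggered. Device and array names (/dev/sd[b-e], /dev/md0-2) are placeholders, not taken from the chat; adjust to your own disks.)

    # two raid0 legs (device names are placeholders)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd /dev/sde
    # mirror the two legs -> raid0+1
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1
    # native raid10, no stacking needed:
    #   mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # a scrub ("check") is requested per array via sysfs; only the raid1 layer
    # carries redundancy in this stack, so that is the array to check
    echo check > /sys/block/md2/md/sync_action
    cat /proc/mdstat                      # progress of the running check
    cat /sys/block/md2/md/mismatch_cnt    # mismatches found by the last check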
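(Likewise, a minimal sketch of the "mirrors striped at the zpool level" layout sdeziel describes; the pool name and disks are made up.)

    # pool of two mirror vdevs; writes are striped across the mirrors
    zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
    zpool status tank      # shows the mirror-0 / mirror-1 layout
    zpool scrub tank       # checksum-verifying scrub, ZFS's counterpart to md "check"
    # growing later is just another mirror pair:
    zpool add tank mirror /dev/sdf /dev/sdg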
[17:41] sdeziel: they seem so similar to me -_-
[17:42] vlm: I'm not an expert by any means, I'm basically looking at https://en.wikipedia.org/wiki/Nested_RAID_levels
[17:42] sdeziel: im seeing a noticeable improvement in throughput now though, up to 200MB/s+, up from 184MB/s or something
[17:42] vlm: but ZFS is a different beast
[17:44] vlm: while you benchmark your new array, check 'zpool iostat -v 1'
[17:47] sdeziel: my initial reading tells me they are the same in performance and redundancy, just a different layout. also raid 10 might not be supported on ancient hardware maybe
[17:48] sdeziel: my system is very low on ram so that might get challenging. i tuned arc_max and min for starters, now i need to read up on how to track down if there are memory issues
[17:50] vlm: you can reduce the ARC usage if you really cannot afford the RAM cost... it will harm the performance of course
[17:51] you can tune the ARC usage by tweaking the 'primarycache' setting on any ZFS dataset or ZVOL
[17:51] sdeziel: just a random search on the web says about 1G of ram per 1TB of storage
[17:51] on tightly constrained machines (512M of available RAM), I ran with just metadata caching
[17:52] sdeziel: what size of pool?
=== ijohnson is now known as ijohnson|lunch
[18:30] vlm: it was a mirror of 2x 750GB
[18:39] sdeziel: nice to be able to run zfs on such small memory, though i saw someone mention something regarding writes and block size, so if i turn off primarycache=all and go with just metadata, would it result in much heavier write operations on disk?
[18:40] vlm: the ARC is mostly (exclusively?) for reads
[18:42] vlm: I'm not advising to tweak primarycache blindly. This should only be done if you feel the ARC doesn't shrink enough when there is memory pressure
[18:43] sdeziel: considering adding some maybe, i'll see how it turns out first
[18:43] vlm: I would suggest you run with stock default params for a while and then only try to fine tune a thing or two. The ZFS folks have worked hard to get good defaults and they know their stuff way better than I do
[18:44] vlm: ZFS will happily take all the RAM you throw at it but it can accommodate surprisingly well with very little too.
[18:45] vlm: how much RAM do you have and how much data are we talking about?
[18:45] BTW, the arc_summary script will show you a lot of nice stats if that interests you
[18:46] sdeziel: i'm at about 4GB, though i've got a lot going on so i think i'm free about 1-2GB
[18:49] sdeziel: i'm off in a bit, thanks for all the useful information. that script is also very informative. i'll do some reading on zpool as well, whole other world this compared to mdadm -_-
[18:49] vlm: you are welcome
=== ijohnson|lunch is now known as ijohnson
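(Not from the chat, but roughly what the ARC-related knobs mentioned above look like on Ubuntu. The 1 GiB cap and the dataset name are made-up examples; persisting module options via /etc/modprobe.d plus update-initramfs is the usual Ubuntu approach, so double-check against your release's ZFS docs.)

    arc_summary | less                     # ARC size, hit rates and tuning stats
    zpool iostat -v 1                      # per-vdev I/O while benchmarking
    # cap the ARC at runtime (value in bytes; 1 GiB here as an example)
    echo 1073741824 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
    # make the cap persistent across reboots
    echo "options zfs zfs_arc_max=1073741824" | sudo tee /etc/modprobe.d/zfs.conf
    sudo update-initramfs -u
    # per-dataset: cache only metadata instead of data+metadata
    sudo zfs set primarycache=metadata tank/somedataset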
[20:24] I'm running 18.04 server and I'm setting up ipv6 with SLAAC, and I need the server to generate a stable IP address. I notice that my ubuntu 18.04 and 20.04 do this out of the box, but the server does not. Where might I find more info about this?
[20:25] *my ubuntu 18.04 and 20.04 desktop
[20:59] sveinse: you can turn off ipv6 privacy extensions - that'll make the SLAAC address based on the MAC address
[20:59] sveinse: very useful if privacy isn't relevant
[21:00] sveinse: I used that for a bunch of infoscreens running off raspberry pi machines, the host part of their ip identifying the machine and the webserver using that to serve the correct data
[21:00] easy peasy :D
[21:01] RoyK: ooo that sounds neat
[21:03] sarnold: scales well too ;)
[21:04] RoyK: my ubuntu desktop creates two ips, one "temporary dynamic" and one "dynamic mngtmpaddr". And I believe the latter is a stable IP but still using the priv extensions. This is what I want for my server too.
[21:08] sveinse: for the server, I'd use a static IP, not SLAAC, but then, that's perhaps just me (although I doubt it's "just" me)
[21:10] remember that if you choose a SLAAC address for a server and the server is replaced, it'll get a new IP. Just use a static address for those. It's easier unless you have a *lot* of servers to manage, in which case there are other ways to sort it out
[21:12] RoyK: yes. I want to skip setting up DHCPv6, and I don't really want to set static IPs on each server either. So I had hoped SLAAC could save me the trouble, but I do indeed see the contradiction in what I want to achieve
[21:14] sveinse: may I ask what sort of environment this is?
[21:15] I wonder which is less troublesome: DHCPv6 for central config of IP addresses, or configuring them on each of the servers :D For ipv4 it's central DHCP today
[21:15] (probably up north, so that dhcpv6 will freeze during winter)
[21:16] RoyK: A small office network. Around 20-ish servers with fixed ip
[21:16] ok
[21:17] then it might perhaps be just as much work setting up dhcpv6 as setting ipv6 addresses manually? ;)
[21:18] For some unexplained reason I have a mental impression that SLAAC is better than DHCPv6 due to it being stateless. Thus for most clients and users, I do want SLAAC.
[21:18] not as fancy, but if set manually, it'll work regardless of whether the dhcp server is running or not
[21:18] The reason for having ipv4 DHCP assignment of static resources is that the dhcp server is then the one-stop shop for assignments
[21:19] we still stick to static ip addresses on servers at work, and we have a wee bit more than 20 servers
[21:20] RoyK: yeah, another company with 1400 employees does that too on its servers
[21:20] sveinse: I work for oslomet
[21:21] RoyK: oh, cool, this is in Stavanger
[21:21] I guessed somewhere in .no by your name and ip :)
[21:23] May I ask if you use any of the link-local ips, fe80::, for anything in the network infrastructure? Or only the global scoped ips?
[21:24] I noticed that windows tends to use fe80:: as the default GW, while linux uses the global one
[21:24] I'm not sure about that, as for the default gateway or similar. there might be something out there. I haven't worked too much with the networking details
[21:24] ok, thanks
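(Not from the chat, but roughly what RoyK's suggestion looks like with netplan on an Ubuntu server. The file and interface names are made up, and the ipv6-address-generation key needs a reasonably recent netplan, so check 'man netplan' on your release. eui64 derives the host part from the MAC; stable-privacy gives an opaque but stable address per RFC 7217, which is closer to what sveinse described on the desktop.)

    # /etc/netplan/60-ipv6-slaac.yaml  (hypothetical file name)
    network:
      version: 2
      ethernets:
        eno1:                              # interface name is a placeholder
          dhcp4: true
          dhcp6: false
          accept-ra: true                  # take prefix/routes from router advertisements
          ipv6-privacy: false              # no temporary/privacy addresses
          ipv6-address-generation: eui64   # or: stable-privacy (RFC 7217)
    # then: sudo netplan apply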