[00:25] <berglh> can anyone tell me how to update the ulimit for nofile (soft/hard) for a non-root user on ubuntu 17.10 without restarting the whole server? when i edit /etc/security/limits.conf and /etc/systemd/user|system.conf and set the nofile limits, then log out of the box and back on, the new settings for the user don't take effect. am i missing something?
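[No one answered berglh in this log; for context, the usual recipe on a systemd-based Ubuntu looks roughly like the sketch below. The username and limit value are placeholders, and the exact steps can vary by release.]

```shell
# A hedged sketch, not a verified fix. PAM-based logins (ssh, console)
# read /etc/security/limits.conf:
#     someuser  soft  nofile  65536
#     someuser  hard  nofile  65536
# systemd-managed user sessions instead take their ceiling from
# DefaultLimitNOFILE= in /etc/systemd/system.conf and user.conf.
# Re-execute the manager so it rereads those files (no reboot needed):
sudo systemctl daemon-reexec
# The user@<uid>.service instance keeps its old limits until restarted,
# so all of the user's sessions must end (or restart it explicitly):
sudo systemctl restart user@"$(id -u someuser)".service
# Verify from a *fresh* login:
ulimit -Sn && ulimit -Hn
```

Logging out and back in isn't enough if the `user@.service` instance survives the logout, which would explain the behaviour berglh describes.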
[03:32] <chamar> join #go-nuts
[07:15] <lordievader> Good morning
[11:38] <jancoow> Hi. I've been running an ubuntu server for several years now. I have a feeling that the performance of my fileserver is going down. I've got ~10 hard disks inside it and I'm using greyhole. I wanna know from each disk if they have some problems etc.
[11:38] <jancoow> how can I do this the best
[11:38] <jancoow> I don't get all the SMART information and I've no clue if the tests are up-to-date
[11:38] <hateball> jancoow: if they support S.M.A.R.T, query with smartctl
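[hateball's suggestion in shell form, as a minimal sketch; the device names are placeholders for illustration:]

```shell
# Device names are hypothetical -- list yours with: lsblk -d
for dev in /dev/sda /dev/sdb; do
    sudo smartctl -H "$dev"    # overall SMART health verdict
    sudo smartctl -A "$dev"    # vendor-specific attribute table (error rates etc.)
done
```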
[11:38] <jancoow> yeah all of them have smart
[11:41] <jancoow> hateball: could you maybe help me a bit further? :)
[11:41] <jancoow> I just wanna make sure everything is all right :)
[11:42] <jancoow> Maybe my samba configuration is just wrong;
[11:44] <hateball> jancoow: pastebin your smartctl output and I can have a look
[11:51] <jancoow> hateball: do you want individual pastebins for each drive?
[11:54] <jancoow> hateball: hdd1: https://jancokock.me/f/49c53 hdd2: https://jancokock.me/f/1377e hdd3: https://jancokock.me/f/de675 hdd4: https://jancokock.me/f/9b81b hdd5: https://jancokock.me/f/70c46 hdd6:  https://jancokock.me/f/e4ef3 hdd7: https://jancokock.me/f/8a5e5
[11:57] <rbasak> jancoow: if ext4 then "e4defrag -c ..." is useful
[11:57] <jancoow> rbasak: defrag for ext4  ? :O
[11:57] <rbasak> It'll tell you if it's required.
[11:57] <rbasak> Hopefully not :)
[11:58] <jancoow> i'm also running badblocks now
[11:58] <rbasak> But pathological cases will be able to fragment any filesystem I think.
[11:58] <rbasak> And it's easy and quick enough so you might as well eliminate that.
[12:01] <jancoow> how long does it take for the command to finish?
[12:02] <hateball> jancoow: look at the error rates on hdd2
[12:03] <hateball> .. and hdd3
[12:03] <hateball> 4 and 5 also have some, tho not nearly as bad
[12:04] <jancoow> what are these rates?
[12:04] <jancoow> again these stupid seagates..
[12:04] <jancoow> 2 years ago they both failed at the same time
[12:05] <jancoow> lost a lot of data (which was redundantly stored on both..)
[12:05] <jancoow> I did get 2 new ones because I was still under RMA
[12:05] <jancoow> but now they are failing again??
[12:07] <hateball> well it can be the controller as well
[12:07] <jancoow> rbasak: http://jancokock.me/f/6045f/ still waiting on hdd3
[12:07] <jancoow> hateball: should I run badblocks on them?
[12:08] <jancoow> so I can check if there are any bad blocks?
[12:08] <jancoow> I think I will move the landing disk from greyhole to hdd1. I don't trust these seagates anymore
[12:09] <hateball> oh I didn't even notice they were seagate and not WD. seagate report smart data differently
[12:09] <jancoow> Yeah that's one thing I hate about smart. There is no actual standard
[12:09] <jancoow> And the raw values are sometimes encoded..
[12:09] <jancoow> Why not just one standard which makes it easy for everyone
[12:11] <hateball> Because https://xkcd.com/927/
[12:11] <jancoow> yeah exactly
[12:12] <rbasak> jancoow: I think you actually need to read the output of -c.
[12:12] <rbasak> -c tells you whether it's needed. It doesn't actually do it.
[12:13] <jancoow> It says "Done."
[12:13] <jancoow> I will try without running the command in the background
[12:13] <rbasak> standard> SMART is a standard. The individual parameters checked can be manufacturer-specific. But if the disk thinks there's something wrong, smartctl will tell you that.
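[Illustrating rbasak's point: the table layout from `smartctl -A` is standard even where the raw values are vendor-specific, so pulling out the attribute names and raw counters is mechanical. The sample line below is made up:]

```shell
# One hypothetical line from `smartctl -A`; column 2 is the attribute
# name, the last column is the vendor-specific raw value.
line='  1 Raw_Read_Error_Rate     0x000f   081   064   044    Pre-fail  Always       -       117173280'
echo "$line" | awk '{print $2, $NF}'
# prints: Raw_Read_Error_Rate 117173280
```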
[12:13] <rbasak> Yeah it should give you more output than that.
[12:13] <rbasak> (IIRC)
[12:14] <jancoow> yeah I know, smart is the standard
[12:14] <jancoow> but indeed the values are specific
[12:14] <jancoow> and that's what I hate
[12:16] <jancoow> rbasak: https://jancokock.me/f/33e32
[12:20] <rbasak> The "I think I'm about to fail" indication isn't specific though.
[12:21] <rbasak> jancoow: sorry. The manpage says to use -v as well.
[12:21] <jancoow> oh I needed root permissions
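[Putting rbasak's two hints together, the working invocation is along these lines; the mount point is a placeholder:]

```shell
# -c only *reports* fragmentation (it changes nothing); -v adds per-file
# detail. Needs root to read all the inodes.
sudo e4defrag -c -v /mnt/hdd1
```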
[12:52] <jancoow> rbasak: after I used the sudo command I do get some more information
[12:52] <jancoow> It has 5 fragmented files
[12:53] <jancoow> so that's not a big issue
[12:54] <rbasak> Great!
[12:54] <jancoow> rbasak: now checking the other 6 disks ;p
[13:10] <jancoow> rbasak: yay. None of them need defrag
[13:18] <theGoat> so i have some nfs exports on my ubuntu 14.04 server, but the transfer rates seem slow.  only able to write at about 50 MB/s or so.  SCP i am able to push close to 90.   is there any tuning i can do on the nfs server?
[14:35] <andreas> theGoat: try rsize and wsize
[14:53] <theGoat> andreas: yeah i have them set to 65536
[15:03] <joelio> theGoat: udp mode any quicker?
[15:03] <theGoat> let me give it a whack
[15:03] <joelio> should be udp default iirc
[15:04] <joelio> if it is udp, you might want to wrangle your rmem kernel params etc
[15:05] <theGoat> joelio: udp doesn't seem to be any better.  kernel params on the client or server side
[15:06] <joelio> perhaps check the rmem settings then, only other thing I can suggest
[15:06] <theGoat> joelio: on the server or client side?
[15:06] <joelio> both really, check out http://www.tldp.org/HOWTO/NFS-HOWTO/performance.html or something similar
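[The two knobs discussed so far, sketched out; server name, export path, and buffer values are examples only:]

```shell
# Client side: explicit read/write block sizes as mount options:
sudo mount -t nfs -o rsize=65536,wsize=65536 server:/export /mnt/nfs

# Both sides: inspect and, if needed, raise the socket buffer ceilings
# (mainly relevant for UDP transport):
sysctl net.core.rmem_max net.core.wmem_max
sudo sysctl -w net.core.rmem_max=262144 net.core.wmem_max=262144
```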
[15:07] <joelio> (may be a bit old that, but still is valid) - also what version of NFS, 3 or 4?
[15:17] <theGoat> figured it out.  had the export set to sync, and not async.  once i changed it to async:
[15:17] <theGoat> dd if=/dev/zero of=/nfs/software/zerofile bs=1024k count=500
[15:17] <theGoat> 500+0 records in
[15:17] <theGoat> 500+0 records out
[15:17] <theGoat> 524288000 bytes transferred in 7.002229 secs (74874441 bytes/sec)
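[A quick sanity check on those dd numbers; pure arithmetic, so it's reproducible anywhere:]

```shell
# 524288000 bytes over 7.002229 s, converted to MB/s
awk -v b=524288000 -v s=7.002229 'BEGIN { printf "%.1f MB/s\n", b / s / 1000000 }'
```

which matches the ~74.9 MB/s dd reported, i.e. roughly the wire speed theGoat was getting over SCP.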
[15:32] <joelio> cool, got there in the end
[15:42] <theGoat> yep
[15:42] <theGoat> thanks for the help
[16:03] <joelio> np dude
[18:04] <albech1> anyone know of an interface for administering websites on a shared webserver. the different departments in our company need this for wikis etc.
[18:42] <andreas> nacc: what's the preferred way to use the git workflow to update the version of a package? That doesn't come from debian. It's a plain new upstream version
[18:43] <nacc> andreas: uupdate/uscan, probably
[18:43] <nacc> andreas: i can tell you how I have done it, if you want. HO?
[18:43] <andreas> and then git commit/delete as necessary?
[18:43] <andreas> add/delete I meant
[18:43] <nacc> andreas: right, uupdate will create a new directory
[18:43] <andreas> yes
[18:43] <nacc> andreas: you'll need to effectively move it in place over your git repo
[18:43] <andreas> ok
[18:44] <nacc> andreas: it's something i want to wrap better, as that can be error prone :)
[18:44] <andreas> I was just wondering if there was such a wrapper already :)
[18:44] <andreas> thx
[18:46] <nacc> andreas: `git status --ignored` can help, to see what has updated
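[The workflow nacc describes, sketched end to end; the package name and version are hypothetical:]

```shell
# Fetch the new upstream tarball per debian/watch:
uscan --download --destdir ..
# Unpack it alongside and merge the debian/ packaging into it
# (creates a new ../foo-2.0 directory, as nacc says):
uupdate ../foo_2.0.orig.tar.gz
# Copy the result over the git checkout, then review what changed,
# including files git would otherwise hide:
git status --ignored
```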
[20:43] <blizzow> Does anyone here know if the nagios-nrpe-server package is fixed to honor the allow arguments flag in the config? Last I checked it was broken and wouldn't read the allow arguments flag.
[20:45] <nacc> blizzow: is there a bug?
[20:49] <blizzow> nacc, there have been some firefights with the maintainer saying that the args option is a security hole. The point of the feature is to let the user choose. Having the option is not a security hole, setting the option is.
[20:51] <blizzow> It's like disabling the ability to set the listening address in mysql via the conf file.  Yes, if you set it to 0.0.0.0, you could be in for a bad time. But the designers intended the behavior to be configurable. Intentionally breaking the ability to configure certain parts is asinine.
[20:52] <nacc> blizzow: i meant is there a bug #, not your opinion on the bug :)
[20:52] <blizzow> nacc: there have been multiple bugs filed, the maintainer consistently closes them citing "security hole"
[20:52] <blizzow> like a TSA agent.
[20:53] <nacc> blizzow: ... give me the bug numbers?
[20:53] <nacc> blizzow: and more importantly, is there an ubuntu bug filed?
[20:54] <blizzow> https://bugs.launchpad.net/ubuntu/+source/nagios-nrpe/+bug/1555258
[20:55] <nacc> blizzow: well, that's fix released in all ubuntu releases.
[20:55] <nacc> blizzow: so not sure what you were just talking about?
[20:57] <blizzow> It's not clear to me which way the package maintainers have decided to swing, allow command args to be configured, or disable them entirely.
[20:58] <blizzow> I was hoping to get an answer here before installing, testing, and possibly rebuilding.
[20:58] <nacc> blizzow: it is fixed in Ubuntu.
[20:58] <nacc> blizzow: as that bug says, a few times.
[20:59] <nacc> blizzow: you will need to modify your configuration to allow it locally.
[20:59] <nacc> blizzow: the functionality is there by default, but disabled.
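[For reference, enabling it locally is a one-line config change; this is NRPE's documented knob for argument passing, but check your package's docs before relying on it:]

```
# /etc/nagios/nrpe.cfg -- enable command argument passing explicitly.
# Only set this if you accept the risk the Debian maintainer objected to.
dont_blame_nrpe=1
```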
[21:00] <blizzow> I guess I'll try the latest.
[21:00] <nacc> blizzow: latest what?
[21:00] <nacc> blizzow: you don't need to run Artful to get the fix.
[21:01] <blizzow> package. like I said, the maintainer for a long time was marking the bug "fixed" because args are a security hole if not used properly.
[21:02] <nacc> blizzow: well, yes, you'd try the latest package. Not sure what else you'd try? Sorry, this feels like a rather circular conversation.
[21:02] <nacc> blizzow: also, I think you mean the Debian maintainer? As the LP bug says, Ubuntu has decided to diverge from Debian on this issue.
[21:03] <sarnold> have you tried since may?
[21:05] <blizzow> I was just hoping to get an answer here before testing the current package, that's all. I spent a bunch of time building a system to build a custom deb with it enabled to distribute among my systems.
[21:05] <nacc> blizzow: why wouldn't you use a PPA ?
[21:06] <nacc> blizzow: but regardless, yes, fixed.
[22:17] <rh10> guys, which version of python do you mostly use for system administration tasks? or devops?
[22:17] <rh10> 2 or 3?
[22:17] <teward> nacc: rbasak: dpb1: et. al.: I may have a meeting conflict tomorrow with a client, so I might not be able to chair the meeting this week.  Sucks that clients are slow at responding to me.
[22:19] <dpb1> teward: hah
[22:19] <rh10> and how long will python 2 be supported? will the next LTS ubuntu version ship with python 2 onboard?
[22:19]  * dpb1 looks at schedule
[22:19] <teward> dpb1: i'mma pull myself off the chair list, bother me later :P
[22:19] <teward> rh10: I use Py3 because Py2 dies in 2020
[22:19] <teward> and Py3 is becoming the standard
[22:19] <dpb1> teward: ok, sounds good
[22:19] <rh10> teward, thanks
[22:19] <teward> rh10: i think both will be in the repositories, but Py2 is *dead* in 2020 by upstream
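[If in doubt about what a given box actually ships, a quick check (assumes python3 is installed, which it is by default on recent Ubuntu):]

```shell
command -v python2 python3 || true                    # which interpreters exist on PATH
python3 -c 'import sys; print(sys.version_info[0])'   # prints the major version
```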
[22:20] <dpb1> teward: let us know when/if your schedule is more "predictable". :)
[22:20] <teward> dpb1: who should I put in for it, rbasak, you, or TBD?
[22:20] <teward> dpb1: will do.
[22:20] <dpb1> teward: who is next up?
[22:20] <teward> rbasak
[22:20] <teward> after me
[22:20] <dpb1> put him in for next, he can slide the next person up if he wants (his schedule is a bit weird now too).
[22:22] <teward> done
[23:12] <drab> hi, anybody around using netdata, possibly with influxdb as a backend storage?
[23:14] <sarnold> "auto-detects everything, it can collect up to 5000 metrics per server out of the box" mmm nice for the very lazy admin like me :)
[23:14] <drab> the other interesting thing is that it's very much lxc aware
[23:14] <drab> so it plays really nicely with all the containers I have
[23:16] <sarnold> some of these screen capture/video/image things are really cool
[23:16] <drab> I've had this sort of problem forever: being able to look at what's happening "right now" in high def and also keep historical data
[23:17] <drab> that's how I used to use collectd
[23:17] <sarnold> everyone has :) e.g. pcp is ~twenty years old now..
[23:17] <drab> but the frontend stuff is non-existent/a pain for high res
[23:17] <drab> so netdata + influx may finally be a way to fix this
[23:18] <drab> especially since at that high rate the data doesn't need to leave the box, which is the other issue: even if you're only sending stuff to a statsd somewhere close by, that's still lots of stuff leaving the host
[23:21] <sarnold> zfs support :)
[23:23] <drab> yep, only caveat is that with zfs being at kernel level, and all containers sharing the same kernel, those numbers are repeated
[23:23] <drab> I'm still trying to figure out exactly what to do with that part
[23:23] <drab> maybe there's something I'm misunderstanding
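[No one answered drab's original netdata+InfluxDB question; for reference, the wiring is usually done through netdata's backend support speaking a line protocol InfluxDB can ingest. A hedged sketch; the destination and the choice of opentsdb are assumptions, and InfluxDB would need a matching input plugin listening on that port:]

```
# /etc/netdata/netdata.conf
[backend]
    enabled = yes
    type = opentsdb
    destination = localhost:4242
    update every = 1
```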