[00:00] <geigerCounter> Well, I uhm... there's options. There's scp, ftps, https. If you're accessing the webserver via a browser, then https is probably the way you want to go.
[00:00] <Henster> 1. Cost of data on a cloud server is expensive 2. I don't have fast uncapped internet here 3. not all the files will be active
[00:01] <geigerCounter> What cloud service are you using Henster and what's your monthly budget?
[00:01] <Henster> ok cool, I learned how to use PuTTYgen. Can I use PuTTY keys in the terminal without using a PuTTY-like terminal?
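(Editor's note: PuTTY's .ppk keys aren't directly usable by OpenSSH, but the command-line `puttygen` from the putty-tools package can convert them. A sketch, with made-up file names:)

```shell
# convert a PuTTY .ppk private key to OpenSSH format
# (mykey.ppk and the output paths are hypothetical)
puttygen mykey.ppk -O private-openssh -o ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
# and the public half, if needed:
puttygen mykey.ppk -O public-openssh -o ~/.ssh/id_rsa.pub
```

After that, plain `ssh -i ~/.ssh/id_rsa user@host` works from any terminal, no PuTTY needed.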
[00:01] <Henster> I'm at DigitalOcean
[00:02] <Henster> god my spelling tonight
[00:02] <Henster> also its 2am here lol
[00:02] <geigerCounter> Ahh.
[00:02] <Henster> dude dont hack me lol
[00:02] <geigerCounter> Pssh.
[00:03] <geigerCounter> Don't worry, I don't want to.
[00:03] <geigerCounter> I'd hate it if that happened to me, so I'm not gonna do it to anyone else.
[00:03] <Henster> ha ha cool man
[00:15] <geigerCounter> sarnold: Do you have any suggestions on what I might do to fix this issue?
[00:16] <sarnold> geigerCounter: you're still stuck at the unexpected packet?
[00:16] <geigerCounter> Yeah
[00:17] <sarnold> geigerCounter: you could try connecting to it with openssl s_client's -starttls smtp support and see what output you get..
[00:18] <geigerCounter> How does that work exactly? I'm not too familiar with how s_client works...
[00:19] <sarnold> I have to look it up every time I use it; something like openssl s_client -CApath /etc/ssl/certs/ -verify 2 -connect servername:port -starttls smtp ... but it might require more
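(Editor's note: cleaned up, the probe sarnold describes looks like this; `servername:port` is a placeholder for the actual mail server:)

```shell
# negotiate STARTTLS against an SMTP server and show the certificate
# chain; -verify 2 caps the chain-verification depth
openssl s_client -CApath /etc/ssl/certs/ -verify 2 \
    -connect servername:port -starttls smtp
# once connected you can type raw SMTP commands, e.g. "EHLO test"
```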
[00:22] <geigerCounter> Well.. that seems right?
[00:24] <geigerCounter> Well the connection holds up, but when I try to login it says "Invalid base64 data"
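(Editor's note: "Invalid base64 data" on login usually means the AUTH string wasn't encoded the way the server expects. For AUTH PLAIN the payload is base64 of a NUL-separated string; `alice`/`secret` below are made-up credentials:)

```shell
# AUTH PLAIN takes base64("\0username\0password") on one line
# (alice/secret are made-up; substitute real credentials)
printf '\0alice\0secret' | base64
# → AGFsaWNlAHNlY3JldA==
```

That output is what you'd paste after `AUTH PLAIN` in the SMTP dialog.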
[00:31] <nacc> jgrimm: fyi, i think puppet should migrate now and i'm SRUing the longstanding bug with 16.04 + systemd
[00:41] <sarnold> geigerCounter: ugh when another fellow has trouble logging in via base64 decode errors it looks a bit hard to decipher -why- https://lists.gt.net/exim/users/57020
[00:42] <sarnold> geigerCounter: but 11 years ago philip said he'd make the authenticators log alongside the smtp dialog in whatever debug mode is used.. maybe he did? :)
[00:42] <geigerCounter> ...
[00:44] <geigerCounter> Well let's give this a read.
[00:53]  * geigerCounter facepalms
[00:53] <geigerCounter> I just ran tcpdump without any qualifiers and without piping it to less
[00:54] <geigerCounter> And now to wait fifteen years.
[00:54] <geigerCounter> I'm jk of course, I just interrupted it
[00:56] <sarnold> hehe
[01:09] <fishcooker> I have a log of about 28G; how do I reduce the file size to keep the latest 7G?
[01:09] <fishcooker> I have an issue when I try to gzip it, because the CPU usage would increase
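(Editor's note: if the goal is just to shed the oldest part of the file without gzip's CPU cost, `tail -c` can keep the newest N bytes. A sketch; the path is hypothetical, and the writing service must reopen its log afterwards:)

```shell
# keep roughly the newest 7 GiB of an oversized log (path is hypothetical)
tail -c 7G /var/log/app.log > /var/log/app.log.trimmed
mv /var/log/app.log.trimmed /var/log/app.log
# caveat: the writing process still holds the old inode open, so make it
# reopen its log (often via SIGHUP) or the disk space won't be freed
```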
[01:10] <sarnold> fishcooker: do you need to keep the logs?
[01:10] <tarpman> fishcooker: rotate more often so the individual files don't get that big ?
[01:12] <patdk-lap> set logrotate to daily, or maybe to a file size (1day or longer then)
[01:13] <patdk-lap> wonder what it would take to modify logrotate to nice gzip
[01:14] <patdk-lap> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=652600
[01:15] <sarnold> hah I didn't expect it to be that easy
[01:15] <sarnold> a one-liner but I didn't think of just fiddling with the crontab :)
[01:15] <patdk-lap> I wouldn't want to myself
[01:15] <patdk-lap> since logrotate can restart services
[01:15] <sarnold> I'd rather switch to lz4 or something quicker anyway
[01:16] <patdk-lap> do those restarts also get the new nice? I believe so, maybe
[01:16] <sarnold> heh
[01:16] <sarnold> I bet you're right
[01:16] <patdk-lap> probably have to play with compresscmd and compressopt to get it
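(Editor's note: `compresscmd` takes a single program and `compressoptions` — the directive patdk-lap is reaching for — takes its flags, so nicing gzip needs a small wrapper script. A sketch; the wrapper path is hypothetical:)

```shell
# wrapper so logrotate compresses at low CPU priority (path hypothetical)
cat > /usr/local/bin/nice-gzip <<'EOF'
#!/bin/sh
exec nice -n 19 gzip "$@"
EOF
chmod +x /usr/local/bin/nice-gzip
# then in the logrotate config:
#   compresscmd /usr/local/bin/nice-gzip
#   compressext .gz
```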
[01:17] <fishcooker> looks like logrotate with this conf http://vpaste.net/cBJwO causes the log to be big
[01:17] <patdk-lap> that doesn't *cause* a big log
[01:17] <fishcooker> i think it's delaycompress, sarnold
[01:17] <patdk-lap> the fact your program is LOGGING a lot causes a big log :)
[01:18] <sarnold> why is gluster logging so much?
[01:18] <sarnold> should it? do you need the logs?
[01:18] <fishcooker> the question is why the compress doesn't work
[01:18] <patdk-lap> how do you mean, doesn't work?
[01:18] <fishcooker> because I noticed the log is the raw log, not the gz one
[01:18] <patdk-lap> you have delaycompress and compress in there
[01:19] <fishcooker> i think it should be compress only patdk-lap
[01:19] <patdk-lap> if you don't want delaycompress, why have it?
[01:19] <fishcooker> noted
[01:19] <sarnold> the manpage notes that delaycompress is needed if the program can't be told to close the log file
[01:21] <fishcooker> how about the postrotate ... does it force the service to close the log file and create a new one, sarnold?
[01:24] <sarnold> fishcooker: I think you're right, this looks like it should log rotate when it gets SIGHUP https://github.com/gluster/glusterfs/blob/master/glusterfsd/src/glusterfsd.c#L1387
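(Editor's note: given that SIGHUP handler, a `postrotate` hook can tell glusterfsd to reopen its logs after rotation. A sketch of the logrotate stanza; the paths and retention values are guesses:)

```conf
# /etc/logrotate.d/glusterfs (sketch; paths and values are guesses)
/var/log/glusterfs/*.log {
    daily
    rotate 7
    compress
    missingok
    sharedscripts
    postrotate
        # glusterfsd reopens its log files on SIGHUP
        /usr/bin/killall -HUP glusterfsd > /dev/null 2>&1 || true
    endscript
}
```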
[01:24] <fishcooker> nice reference sarnold... i will do without delaycompress
[01:29]  * patdk-lap has never thought of this, I just never use delaycompress :)
[01:29] <fishcooker> the history of the big log is that we set debug mode, so the file became so big
[01:30] <fishcooker> it became bigger when we couldn't access the mount point from the glusterfs server
[01:39] <patdk-lap> now to just fix systemd journal filling up /run
[02:41] <geigerCounter> Hey sarnold, what do you think I should do about not being able to remotely connect to exim over port 25?
[02:42] <sarnold> geigerCounter: perhaps your ISP is blocking it? I think most ISPs do block it to knock back spam, and you've got to ask them to allow it..
[02:44] <geigerCounter> sarnold: Oh huh
[02:44] <geigerCounter> Could you try connecting to my server then?
[02:49] <sarnold> geigerCounter: sure
[03:53] <geigerCounter> sarnold: Did it work?
[04:07] <patdk-lap> geigerCounter, connecting to your server on port 25 is normally not blocked
[04:07] <patdk-lap> you connecting to other peoples port 25 is normally ALWAYS blocked
[04:22] <geigerCounter> patdk-lap: Well I can't connect to my server on port 25, I get no response until the connection times out
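(Editor's note: a few checks that separate "exim isn't listening" from "a firewall or the ISP is dropping it"; `mail.example.com` is a placeholder hostname:)

```shell
# on the server: is anything actually listening on port 25, and where?
sudo ss -ltnp 'sport = :25'
# on the server: any firewall rules touching port 25?
sudo iptables -L -n | grep -w 25
# from a remote machine: does the TCP handshake complete at all?
nc -vz -w 5 mail.example.com 25
```

A silent timeout (rather than "connection refused") usually points at a drop rule or upstream block, not at exim.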
[08:36] <lordievader> Good morning
[12:25] <El_Presidente> hello
[12:26] <El_Presidente> what is the correct installation command to get the new hwe kernel installed in a proper way? i only found the "desktop" version : apt-get install --install-recommends xserver-xorg-hwe-16.04
[12:26] <El_Presidente> is it something like apt-get install --install-recommends linux-image-generic-hwe-16.04 ?
[12:26] <El_Presidente> or linux-generic-hwe-16.04
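(Editor's note: for a 16.04 server, the second form is the usual one — `linux-generic-hwe-16.04` pulls in both the HWE kernel image and matching headers, while the `xserver-xorg-*` package only covers the desktop graphics stack. A sketch:)

```shell
# install the 16.04 HWE kernel meta-package (image + headers)
sudo apt-get update
sudo apt-get install --install-recommends linux-generic-hwe-16.04
```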
[15:43] <rc-is-me> Can someone please assist? I have a problem with server 16.04 on a VPS. It's using venet0:0, and when I try to ping 8.8.8.8 I get no reply and 100% packet loss. I have tried this on a different OS, CentOS 6, and get the same problem. Could someone please point me in the right direction?
[15:43] <rc-is-me> I can't connect to or from it
[17:28] <samba35> how do I fix this error: "Feb  libvirtd[6040]: unsupported configuration: Security driver apparmor not enabled"
[17:33] <rbasak> Are you using libvirt and kernel packages from Ubuntu?
[17:39] <samba35> yes
[18:21] <tomreyn> enable apparmor, i would guess
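(Editor's note: one way to confirm AppArmor is actually loaded before libvirt tries to use it. A sketch; commands assume Ubuntu defaults:)

```shell
# is the AppArmor LSM enabled in the running kernel? prints Y when it is
cat /sys/module/apparmor/parameters/enabled
# make sure the userspace service is on, then restart libvirtd
sudo systemctl enable --now apparmor
sudo systemctl restart libvirtd
```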
[19:02] <nerfed> does anyone know of a good place to get help with performance issues? I've exhausted everything I can think of and I cannot track down the source of random intermittent performance spikes that are destroying my real-time processes
[19:05] <nerfed> I have intermittent performance spikes where some or all processes stop receiving CPU time for anywhere from a few hundred milliseconds to over 8 seconds, randomly anywhere from every few minutes to every few hours. my processes are scheduled as real-time round-robin, CPU hyperthreading and frequency scaling are disabled, the hard disks are barely ever touched at all as most writes and reads are from a ram disk, CPU load is barely
[19:06] <nerfed> I'm not sure if it's the linux kernel, something in Ubuntu server 16.04 or perhaps caused by something hardware related
[19:06] <nerfed> the only hints that I can see when the spikes happen and certain processes freeze for x amount of time, is if netdata's collections don't freeze up during that period, they show a large drop in interrupts and softirqs, which is probably just a result of the processes not being scheduled by the CPU during that period
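(Editor's note: for gaps where runnable tasks simply don't get the CPU, the scheduler itself can be traced with perf (package `linux-tools-generic` on Ubuntu). A sketch; the 30-second capture window is arbitrary:)

```shell
# record scheduler events for 30s, then show the worst wakeup latencies
sudo perf sched record -- sleep 30
sudo perf sched latency --sort max | head -n 20
```

A multi-second maximum latency on one task would pinpoint when (and on which CPU) the stall happened.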
[23:54] <zzz_> are croned shell commands logged anywhere?
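(Editor's note: on Ubuntu, yes — cron logs each job start to syslog under the CRON tag by default:)

```shell
# cron job invocations (user, PID, command line) in the default syslog
grep CRON /var/log/syslog
# or, via the journal on systemd machines:
journalctl -u cron
```

Note this records that the command ran, not its output; output goes to mail or wherever the crontab redirects it.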