[09:50] <yurtesen> utkarsh2102 I do not know why you got a response right away but not me. :) maybe you should have sent e-mail.
[10:09] <arkanoid> hello! I was happily using Ubuntu Server headless when I was asked to add desktop capability for a specific demonstration
[10:10] <arkanoid> I've installed it and I have gnome up and running, but the desktop is limited to 1024x768
[10:10] <arkanoid> this is lshw https://termbin.com/k1k1
[10:10] <arkanoid> gnome says that graphics is llvmpipe
[10:11] <arkanoid> I'd like to enable normal resolution and intel iGPU acceleration
[10:41] <kazaaakas> Hello, guys, I'm facing a problem: how do I use certbot so that it turns 000-default.conf into an SSL vhost? I want that when someone opens my server's IP in a browser, it serves the index.html from the /var/www/html/ folder with SSL (https) support. My question concerns Ubuntu 20.04, Apache2, and Certbot.
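(The usual certbot flow on Ubuntu 20.04 with Apache looks roughly like this — a sketch, not a full answer; `example.com` is a placeholder, and note that Let's Encrypt issues certificates for domain names, not bare IP addresses, so a DNS name pointing at the server is required:)

```
# Install certbot with its Apache plugin:
sudo apt install certbot python3-certbot-apache
# Obtain a certificate and let certbot generate the SSL vhost from the
# existing config (it will also offer an HTTP -> HTTPS redirect):
sudo certbot --apache -d example.com
```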
[12:22] <arkanoid> solved by using HWE kernel. The iGPU is too modern for default 20.04 kernel
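(The fix arkanoid describes, assuming the standard hardware-enablement metapackage name for 20.04, which pulls in a newer kernel and graphics stack for recent hardware:)

```
sudo apt install --install-recommends linux-generic-hwe-20.04
sudo reboot
```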
[15:28] <bittin> https://www.brighttalk.com/webcast/6793/541159 Ubuntu server talk soon :)
[16:01] <bittin> seems to not start in time :(
[16:01] <bittin> ah now just a minute late
[16:08]  * bittin is watching
[16:24] <foo> I've got a 9GB log file. I need to split it up, and compress them / gunzip them... any tools/suggestions to help with this?
[16:24] <foo> logrotate, perhaps? /me investigates 
[16:30] <coke> yeah, that's what logrotate does, but systemd is moving away from text log files
[16:34] <foo> coke: ok, thanks
[16:34] <foo> looks like split can do what I want in the short immediate term 
[16:34] <foo> I think I can split files, specify a filesize, then gunzip it - how does that sound? 
[16:34] <foo> I'm trying to figure out the size to split on
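(The split-then-compress approach foo is describing can be sketched like this, shown on a small throwaway file; for a real 9 GB log you'd use something like `-b 500M`:)

```shell
# Create a throwaway sample to stand in for the big log:
tmp=$(mktemp -d) && cd "$tmp"
printf 'line %s\n' $(seq 1 1000) > sample.log
# Split into fixed-size pieces with numeric suffixes (.00, .01, ...):
split -b 4K -d sample.log sample.log.part.
# Compress each piece (gzip, that is -- gunzip is for decompressing):
gzip sample.log.part.*
ls sample.log.part.*
```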
[16:35] <coke> just look at the log rotate configs of other services? 
[16:36] <coke> in /etc/logrotate.d/
[16:36] <foo> coke: ah, good idea, thanks
[16:37] <foo> rotate 14 ... might be 14 days... *checks help
[16:37] <coke> you usually have to tell the daemon to reload so it starts using the new file
[16:37] <coke> maybe you can find one for the service you want to work with already somewhere?
[16:38] <coke> keeping log files around is usually the backup server's job
[16:39] <foo> coke: thank you for the help, really appreciate it 
[16:40] <foo> coke: this is a service we wrote, we append text to the .log file - it's 9GB now, been running for 6 months 
[16:40] <foo> aha, rotate 14 says "keep 14 archived log files" 
[16:44] <foo> coke: if logrotate is moving away from text files, should I be looking at something else?
[16:44] <coke> log rotate will stay log rotate 
[16:44] <foo> coke: aha, ok. 
[16:45] <coke> but less services use it these days cause they use journalctl
[16:52] <foo> coke: ohhhh. do you recommend one over the other? Still learning the ropes here
[16:53] <coke> if you want to log into a file log rotate is the way to go 
[16:54] <coke> but on very busy systems and if your service is running on more than one server there are better options 
[16:54] <foo> coke: cool. thanks. 
[16:55] <coke> but since you just found out about a 6m old log file I think that's not your issue :)
[16:56] <foo> coke: I don't think logrotate will retroactively do things, so I probably need to manually split that file 
[16:57] <coke> nah you should move the large file away and compress it elsewhere 
[16:58] <foo> coke: surprised gunzip app.txt doesn't work: gunzip: app.txt: unknown suffix -- ignored
[16:58] <foo> also tried gunzip app.log 
[17:02] <sdeziel> foo: `gzip app.txt` will give you a compressed file named `app.txt.gz`. `gunzip` is to decompress
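(In other words, a throwaway demonstration of the two directions:)

```shell
# gzip compresses in place (app.txt -> app.txt.gz); gunzip reverses it.
tmp=$(mktemp -d) && cd "$tmp"
echo 'hello log line' > app.txt
gzip app.txt          # produces app.txt.gz and removes app.txt
gunzip app.txt.gz     # restores app.txt and removes app.txt.gz
cat app.txt           # -> hello log line
```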
[17:03] <foo> sdeziel: facepalm, this guide https://www.geeksforgeeks.org/gunzip-command-in-linux-with-examples/ is wrong. thank you.
[17:10] <foo> sdeziel: if I run gzip file.log on a 9GB file, and if I only have 2GB free space, will that be an issue? *thinks
[17:11] <sdeziel> foo: possibly :/
[17:11] <coke> depends on the log file and how well it compresses 
[17:11] <sdeziel> foo: but usually, text compresses very well
[17:12] <foo> I suppose gzip will throw an error if it happens? I simply didn't want it to throw an error
[17:12] <coke> it will write the .gz until it runs out of space and only delete the uncompressed file if it succeeds 
[17:12] <foo> I mean, I simply didn't want it to crash the server
[17:13] <sdeziel> foo: before you compress it, I'd make sure the service no longer writes to it
[17:13] <sdeziel> foo: otherwise, you'll get a "phantom" file of 9G and growing + the gzip'ed copy
[17:14] <coke> if the service keeps the file open 
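(When a service does keep the file open, the trick logrotate calls `copytruncate` can be sketched like this: compress a copy, then truncate the original in place so the open file descriptor stays valid. Lines written between the copy and the truncate are lost, and the copy still needs free space somewhere, e.g. on another disk. Demonstrated on a stand-in file:)

```shell
tmp=$(mktemp -d) && cd "$tmp"
printf 'old log data\n' > app.log
gzip -c app.log > app.log.1.gz   # compressed copy; original untouched
: > app.log                      # truncate to zero length, same inode
ls -l app.log app.log.1.gz
```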
[17:14] <foo> err, this is now throwing a production error
[17:14] <foo> since 2.5G is so low...
[17:14] <foo> my options: A) run gzip on the large file now (it is not being written to)
[17:15] <foo> will gzip crash the server
[17:17] <coke> if the server crashes if the disk gzip is writing on is full: yes 
[17:17] <foo> ok, I'm transferring this to another system now.
[17:17] <foo> Will gzip this on another system, thanks y'all.
[17:18] <coke> logging to / or the same partition/volume/disk your service needs space on is a very bad idea 
[17:19] <foo> coke: oh? I can change that, never thought about that.
[17:19] <foo> coke: why?
[17:19] <foo> please excuse the possibly noob question
[17:19] <foo> 35% done downloading, so, close.
[17:27] <foo> What, this is odd. I cleared the space, still getting OSError: [Errno 28] No space left on device: '/dev/shm/tmpb8ywrww2'
[17:28] <coke> reload the service writing the log 
[17:28] <sdeziel> foo: are you 100% the service no longer holds the old log file open?
[17:28] <foo> sdeziel: yes, I've restarted the server, I think that should have done it
[17:28] <foo> also restarted nginx and postgresql
[17:29] <foo> unless it is hanging *checks
[17:29] <foo> nope, uh, about to do a reboot unless someone has another thought. odd.
[17:30] <sdeziel> foo: you can check with `grep deleted /proc/$pid/maps` and see if the old log is there ($pid is the PID of your service)
[17:30] <foo> sdeziel: nothing
[17:30] <foo> I can also see from ps aux the timestamp shows the service was rebooted within the past 2 minutes
[17:31] <sdeziel> foo: typically /dev/shm is its own mountpoint (not using the rootfs)
[17:31] <sdeziel> a tmpfs IIRC
[17:31] <sdeziel> which has 50% of the RAM as size limit
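(Two read-only checks for the situation being described, neither needing root:)

```shell
# /dev/shm is a tmpfs, sized by default at 50% of RAM.
df -h /dev/shm
# List the largest entries; the names belong to whatever process
# created them (often POSIX shared memory or temp files):
ls -lS /dev/shm | head
```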
[17:31] <foo> df -h shows I have 12G available 
[17:32] <sdeziel> is that ^ for "/" or "/dev/shm" ?
[17:32] <foo> sdeziel: https://bpa.st/6WWQ
[17:32] <foo> sdeziel: /dev/vda1 
[17:33] <sdeziel> foo: => tmpfs           992M  992M     0 100% /dev/shm
[17:33] <foo> sdeziel: how do I reset that? I did see that
[17:33] <foo> (but we have 4G RAM)
[17:33] <foo> *thinks
[17:33] <sdeziel> foo: `free -mt` will tell you
[17:33] <foo> Maybe stuff got stuck when I ctrl+c 'd gzip file.txt 
[17:33] <foo> sdeziel: https://bpa.st/JL4A
[17:34] <sdeziel> foo: as for freeing space from /dev/shm, you can go in there and `rm` what's unneeded
[17:34] <sdeziel> foo: so the box has ~2G of RAM hence the ~1G-ish /dev/shm
[17:34] <foo> sdeziel: can I rm * ? 
[17:34] <foo> /dev/shm# ls|wc -l
[17:34] <foo> 253876
[17:34] <sdeziel> foo: it's your server/your files, you know better than I do what's of value ;)
[17:35] <foo> sdeziel: there is a ton of random filenames... no idea what these are :)
[17:35] <foo> Would a reboot reset this 
[17:35] <foo> ?
[17:35] <foo> Likely.
[17:35] <sdeziel> foo: yes because /dev/shm is a tmpfs so it won't persist through a reboot
[17:35] <foo> sdeziel: if a reboot would reset that, I'm likely fine to rm * , agree?
[17:36] <sdeziel> foo: possibly but I don't know what put those files there
[17:37] <foo> sdeziel: I just did a reboot, super appreciate your help. Haven't had something like this break in production in a long time, heh.
[17:37] <foo> yup, that fixed it
[17:37] <sdeziel> foo: good :)
[17:43] <coke> the cleansing relief of a reboot 
[17:44] <coke> foo: make sure you check back tomorrow to find out if your logrotate config worked 
[17:49] <foo> coke: thanks, haven't set that up yet. I got to add more space to this system, no reason to be dealing with this.
[17:49] <foo> FWIW, the gzip of the 9.2 GB log file shrunk it down to 4.6GB
[17:50] <coke> that's not very much 
[17:50] <foo> Only a text file FWIW
[17:50] <coke> yea but normal log files go down way more cause they always repeat the same words 
[17:52] <foo> ohh, interesting
[17:52] <foo> is it a word repeat only, or is an exact line match needed? 
[17:53] <coke> you could try bzip2 and see if it works better 
[17:53] <coke> takes way longer but is more sophisticated than gzip
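(The point about repetition is easy to see with gzip alone; bzip2 and xz trade more CPU time for a better ratio:)

```shell
# Repetition is what compressors exploit: the same line repeated
# many times shrinks to a tiny fraction of its size.
tmp=$(mktemp -d) && cd "$tmp"
yes 'GET /health 200 0.8ms' | head -n 100000 > repetitive.log
gzip -k repetitive.log                  # -k keeps the original
wc -c repetitive.log repetitive.log.gz  # compare byte counts
```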
[17:54] <foo> aha, ok, good to know. I may try that, for now, the log file is off production server and I have it locally
[17:54] <foo> Which is likely fine, I can inspect what I need to locally
[17:54] <coke> there is no real reason to keep 6 months old logs 
[17:55] <foo> agree
[17:55] <coke> especially if there is user data in it and your users are from the EU
[17:55] <foo> I got to set up logrotate, likely really easy... looks like I just define the block, stick the file in there, restart logrotate and we're good 
[17:55] <foo> nahh, all US
[18:27] <athos> Could anyone please import php-symfony-polyfill and php-twig into git ubuntu? :)
[18:46] <rbasak> athos: done
[18:56] <athos> thanks!
[19:14] <kanashiro> athos, since you grabbed the crmsh merge for this cycle this is a bug we need to tackle: https://bugs.launchpad.net/ubuntu/+source/crmsh/+bug/1972730
[19:15] <kanashiro> we should try to reach out to the debian maintainer to get 4.4.0 in unstable
[19:15] <kanashiro> I can do that if you want
[19:15] <kanashiro> I've been in touch with him before
[20:27] <athos> kanashiro: sounds good! thank you :)