[09:50] utkarsh2102 I do not know why you got a response right away but not me. :) maybe you should have sent e-mail.
[10:09] hello! I was happily using ubuntu server headless when I was asked to add desktop capability for a specific demonstration
[10:10] I've installed it and I have gnome up and running, but the desktop is limited to 1024x768
[10:10] this is lshw https://termbin.com/k1k1
[10:10] gnome says that graphics is llvmpipe
[10:11] I'd like to enable normal resolution and intel iGPU acceleration
[10:41] Hello, I have a problem... how do I use certbot so that it adds SSL to 000-default.conf? I want that when someone opens my server IP in a browser, it serves the index.html file from the /var/www/html/ folder with SSL (https) support. My question relates to Ubuntu 20.04, Apache2, Certbot.
[12:22] solved by using the HWE kernel. The iGPU is too modern for the default 20.04 kernel
[15:28] https://www.brighttalk.com/webcast/6793/541159 Ubuntu server talk soon :)
[16:01] seems to not start in time :(
[16:01] ah now just a minute late
[16:08] * bittin is watching
[16:24] I've got a 9GB log file. I need to split it up, and compress them / gunzip them... any tools/suggestions to help with this?
[16:24] logrotate, perhaps? /me investigates
[16:30] yea that's what logrotate does, but systemd is moving away from text log files
[16:34] coke: ok, thanks
[16:34] looks like split can do what I want in the short term
[16:34] I think I can split the file, specify a filesize, then gunzip it - how does that sound?
[16:34] I'm trying to figure out the size to split on
[16:35] just look at the logrotate configs of other services?
[16:36] in /etc/logrotate.d/
[16:36] coke: ah, good idea, thanks
[16:37] rotate 14 ... might be 14 days... *checks help
[16:37] you usually have to tell the daemon to reload so it starts using the new file
[16:37] maybe you can find one for the service you want to work with already somewhere?
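The /etc/logrotate.d/ advice above can be sketched as a minimal drop-in config. This is an assumption-laden sketch, not taken from the channel: the service name `myapp`, the log path, and the reliance on SIGHUP to make the daemon reopen its file are all hypothetical and depend on how the service handles logging:

```
# /etc/logrotate.d/myapp — hypothetical service name and log path
/var/log/myapp/app.log {
    daily
    rotate 14          # keep 14 archived log files
    compress
    delaycompress      # leave the newest rotation uncompressed
    missingok
    notifempty
    postrotate
        # tell the daemon to reopen its log file (assumes it handles SIGHUP)
        systemctl kill -s HUP myapp.service
    endscript
}
```

Without the postrotate reload, the service keeps writing to the renamed (and eventually deleted) file, which is the "phantom file" failure mode discussed later in this log.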
[16:38] keeping log files around is usually the backup server's job
[16:39] coke: thank you for the help, really appreciate it
[16:40] coke: this is a service we wrote; we append text to the .log file - it's 9GB now, been running for 6 months
[16:40] aha, rotate 14 says "keep 14 archived log files"
[16:44] coke: if logrotate is moving away from text files, should I be looking at something else?
[16:44] logrotate will stay logrotate
[16:44] coke: aha, ok.
[16:45] but fewer services use it these days because they use journalctl
[16:52] coke: ohhhh. do you recommend one over the other? Still learning the ropes here
[16:53] if you want to log into a file, logrotate is the way to go
[16:54] but on very busy systems, and if your service is running on more than one server, there are better options
[16:54] coke: cool. thanks.
[16:55] but since you just found out about a 6-month-old log file, I think that's not your issue :)
[16:56] coke: I don't think logrotate will retroactively do things, so I probably need to manually split that file
[16:57] nah, you should move the large file away and compress it elsewhere
[16:58] coke: surprised gunzip app.txt doesn't work: gunzip: app.txt: unknown suffix -- ignored
[16:58] also tried gunzip app.log
[17:02] foo: `gzip app.txt` will give you a compressed file named `app.txt.gz`. `gunzip` is to decompress
[17:03] sdeziel: facepalm, this guide https://www.geeksforgeeks.org/gunzip-command-in-linux-with-examples/ is wrong. thank you.
[17:10] sdeziel: if I run gzip file.log on a 9GB file, and I only have 2GB free space, will that be an issue? *thinks
[17:11] foo: possibly :/
[17:11] depends on the log file and how well it compresses
[17:11] foo: but usually, text compresses very well
[17:12] I suppose gzip will throw an error if it happens?
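The split-then-compress idea and the gzip/gunzip distinction from the exchange above can be tried end to end on throwaway files. The file names and the 2000-byte chunk size here are made up for illustration:

```shell
# work in a scratch directory
cd "$(mktemp -d)"

# fake "big" log file (~9 KB of numbered lines)
printf 'line %d\n' $(seq 1 1000) > app.txt

# split into 2000-byte pieces: app.txt.part-aa, app.txt.part-ab, ...
split -b 2000 app.txt app.txt.part-

# gzip compresses each piece in place, appending .gz
gzip app.txt.part-*

# gunzip is the inverse; it only accepts files with a known suffix like .gz,
# which is why `gunzip app.txt` was rejected with "unknown suffix"
gunzip app.txt.part-aa.gz
```

For a real 9 GB log, the same commands apply; `split -b 500M` or similar would be a more realistic chunk size.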
I simply didn't want it to throw an error
[17:12] it will write the .gz until it runs out of space and only delete the uncompressed file if it succeeds
[17:12] I mean, I simply didn't want it to crash the server
[17:13] foo: before you compress it, I'd make sure the service no longer writes to it
[17:13] foo: otherwise, you'll get a "phantom" file of 9G and growing + the gzip'ed copy
[17:14] if the service keeps the file open
[17:14] err, this is now throwing a production error
[17:14] since 2.5G is so low...
[17:14] my options: A) run gzip on the large file now (it is not being written to)
[17:15] will gzip crash the server
[17:17] if the server crashes when the disk gzip is writing on is full: yes
[17:17] ok, I'm transferring this to another system now.
[17:17] Will gzip this on another system, thanks y'all.
[17:18] logging to / or the same partition/volume/disk your service needs space on is a very bad idea
[17:19] coke: oh? I can change that, never thought about that.
[17:19] coke: why?
[17:19] please excuse the possibly noob question
[17:19] 35% done downloading, so, close.
[17:27] What, this is odd. I cleared the space, still getting OSError: [Errno 28] No space left on device: '/dev/shm/tmpb8ywrww2'
[17:28] reload the service writing the log
[17:28] foo: are you 100% sure the service no longer holds the old log file open?
[17:28] sdeziel: yes, I've restarted the server, I think that should have done it
[17:28] also restarted nginx and postgersql
[17:28] postgresql
[17:29] unless it is hanging *checks
[17:29] nope, uh, about to do a reboot unless someone has another thought. odd.
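The "compress it elsewhere" advice above works because `gzip -c` writes the compressed stream to stdout, so the output can land on a different filesystem without needing free space next to the original. A small sketch with stand-in directories (the real-world equivalents would be the nearly full partition and a roomier mount or remote host):

```shell
cd "$(mktemp -d)"
mkdir src dest                # stand-ins for the full and the roomy filesystems
seq 1 100000 > src/app.log    # fake log on the "full" disk

# compress to stdout and write the .gz on the other filesystem;
# src/app.log itself is left untouched
gzip -c src/app.log > dest/app.log.gz

# round-trip check: decompressing reproduces the original bytes
gunzip -c dest/app.log.gz | cmp - src/app.log
```

The same pattern streams to another machine with no local disk use at all, e.g. `gzip -c app.log | ssh backuphost 'cat > app.log.gz'` (hostname hypothetical).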
[17:30] foo: you can check with `grep deleted /proc/$pid/maps` and see if the old log is there ($pid is the PID of your service)
[17:30] sdeziel: nothing
[17:30] I can also see from ps aux the timestamp shows the service was restarted within the past 2 minutes
[17:31] foo: typically /dev/shm is its own mountpoint (not using the rootfs)
[17:31] a tmpfs IIRC
[17:31] which has 50% of the RAM as size limit
[17:31] df -h shows I have 12G available
[17:32] is that ^ for "/" or "/dev/shm" ?
[17:32] sdeziel: https://bpa.st/6WWQ
[17:32] sdeziel: /dev/vda1
[17:33] foo: => tmpfs 992M 992M 0 100% /dev/shm
[17:33] sdeziel: how do I reset that? I did see that
[17:33] (but we have 4G RAM)
[17:33] *thinks
[17:33] foo: `free -mt` will tell you
[17:33] Maybe stuff got stuck when I ctrl+c'd gzip file.txt
[17:33] sdeziel: https://bpa.st/JL4A
[17:34] foo: as for freeing space from /dev/shm, you can go in there and `rm` what's unneeded
[17:34] foo: so the box has ~2G of RAM, hence the ~1G-ish /dev/shm
[17:34] sdeziel: can I rm * ?
[17:34] /dev/shm# ls|wc -l
[17:34] 253876
[17:34] foo: it's your server/your files, you know better than I do what's of value ;)
[17:35] sdeziel: there is a ton of random filenames... no idea what these are :)
[17:35] Would a reboot reset this?
[17:35] Likely.
[17:35] foo: yes, because /dev/shm is a tmpfs, it won't persist through a reboot
[17:35] sdeziel: if a reboot would reset that, I'm likely fine to rm *, agree?
[17:36] foo: possibly, but I don't know what put those files there
[17:37] sdeziel: I just did a reboot, super appreciate your help. Haven't had something like this break in production in a long time, heh.
[17:37] yup, that fixed it
[17:37] foo: good :)
[17:43] the cleansing relief of a reboot
[17:44] foo: make sure you check back tomorrow to find out if your logrotate config worked
[17:49] coke: thanks, haven't set that up yet. I've got to add more space to this system, no reason to be dealing with this.
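The /dev/shm diagnosis above boils down to a few read-only commands, collected here as a sketch (the `$pid` in the last step is a placeholder for the service's PID, as in the advice above):

```shell
# /dev/shm is its own tmpfs mountpoint, sized at ~50% of RAM by default
# and emptied on every reboot
df -h /dev/shm

# how full it is, and how many files are in it
du -sh /dev/shm
ls /dev/shm | wc -l

# a deleted-but-still-open log keeps its space allocated; for a given
# service PID, look for mappings marked "(deleted)":
#   grep deleted /proc/"$pid"/maps
```

If `df` reports the tmpfs at 100% while `/` still has room, the "No space left on device" error for a `/dev/shm/...` path comes from the tmpfs limit, not the root disk.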
[17:49] FWIW, the gzip of the 9.2 GB log file shrunk it down to 4.6GB
[17:50] that's not very much
[17:50] Only a text file FWIW
[17:50] yea, but normal log files go down way more because they always repeat the same words
[17:52] ohh, interesting
[17:52] is it word repeats only, or is an exact line match needed?
[17:53] you could try bzip2 and see if it works better
[17:53] takes way longer but is more sophisticated than gzip
[17:54] aha, ok, good to know. I may try that; for now, the log file is off the production server and I have it locally
[17:54] Which is likely fine, I can inspect what I need to locally
[17:54] there is no real reason to keep 6-month-old logs
[17:55] garee
[17:55] agree*
[17:55] especially if there is user data in it and your users are from the EU
[17:55] I've got to set up logrotate, likely really easy... looks like I just define the block, stick the file in there, restart logrotate and we're good
[17:55] nahh, all US
[18:27] Could anyone please import php-symfony-polyfill and php-twig into git ubuntu? :)
[18:46] athos: done
[18:56] thanks!
[19:14] athos, since you grabbed the crmsh merge for this cycle, this is a bug we need to tackle: https://bugs.launchpad.net/ubuntu/+source/crmsh/+bug/1972730
[19:14] Launchpad bug 1972730 in crmsh (Ubuntu Jammy) "WARNING: crmadmin -S unexpected output" [Undecided, New]
[19:15] we should try to reach out to the debian maintainer to get 4.4.0 in unstable
[19:15] I can do that if you want
[19:15] I've been in touch with him before
[20:27] kanashiro: sounds good! thank you :)
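The repeated-words point from the compression discussion above can be checked empirically: both gzip and bzip2 exploit redundancy (repeated byte sequences, not only exact duplicate lines), so repetitive log-like text compresses far better than varied text. A sketch with synthetic data; the log format here is invented:

```shell
cd "$(mktemp -d)"

# synthetic log: the same few words repeated, like a typical service log
for i in $(seq 1 5000); do
    echo "INFO request handled status=200 user=$((i % 7))"
done > repeat.log

cp repeat.log bz.log
gzip -k repeat.log   # -k keeps repeat.log and writes repeat.log.gz
bzip2 bz.log         # replaces bz.log with bz.log.bz2

# compare the original against the two compressed sizes
ls -l repeat.log repeat.log.gz bz.log.bz2
```

bzip2 typically wins on highly repetitive text at the cost of speed, which matches the "takes way longer but is more sophisticated" remark above; for a real 9 GB file the ratio will depend entirely on how repetitive the entries are.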