[00:24] what's the correct way to sync an ubuntu server with a local ntp server? I'm having trouble turning off the systemd timedatectl stuff to get it to sync with an ntp server. Is chrony the right tool? Or is there just a setting I can change with default tools?
[00:25] on the server docs page I'm finding stuff about making ubuntu the ntp server, not the client
[00:28] noarb-: you can use chrony if you'd rather, I think you just systemctl disable systemd-timesyncd.service ; systemctl stop systemd-timesyncd.service ; apt install chrony ; then do whatever is necessary to configure chrony
[00:28] noarb-: https://vitux.com/how-to-install-ntp-server-and-client-on-ubuntu/
[00:29] noarb-: start at: "Configure NTP Client to be Time Synced with the NTP Server"
[00:38] leftyfb: I used the tutorial you'd posted. Is there a way to verify at the end that it's working? I was able to use sudo ntpdate NTP-server-host to get the offset of my NTP server, but now that command says "the NTP socket is in use, exiting". ntpq -p shows NTP-server-host .INIT. 16 u 1 64 0 0.000 0.000 0.000, but not sure how else to verify
[00:39] my initial offset was about -0.500s, so it's hard to tell by eyeballing it if it's working
[00:45] timedatectl still says System clock synchronized: yes; NTP service: n/a
[01:06] Having issues downloading/installing VirtualBox on ubuntu 20.04
[01:11] !info virtualbox focal | Guest9029
[01:11] Guest9029: virtualbox (6.1.16-dfsg-6~ubuntu1.20.04.2, focal): x86 virtualization solution - base binaries. In component multiverse, is optional. Built by virtualbox. Size 20,964 kB / 109,033 kB. (Only available for amd64.)
=== M4he is now known as mahe
[01:17] Guest9029: you will have to be way more specific than that
[01:18] in the terminal it was giving me an error waiting for cache lock: could not get lock.
[01:18] do you still get the error?
[01:20] yes
[01:20] put the command and error message into a pastebin site and share the link, someone can probably take a look and make suggestions
[01:21] guest, you're trying to update in the terminal and it tells you it is locked; well, the way I do it is power off the vm, then power on and try again to update and upgrade
[01:22] and for me it works that way, if that is what you're trying to do
[01:23] okay i will try this thank you
[01:23] if it's what I think it is, the unattended-upgrades script running, then you're powering off potentially in the middle of updates. That's not a great idea.
[01:24] Guest9029: Be aware that there can be only one instance of a package manager active at a given time. What shows - ps -efly | egrep '(dpkg|apt)' - ?
[01:30] the lock sounds like unattended updates are already running
[01:30] always run proper updates first, then install
[01:30] if guest* manages to come back, I wouldn't be surprised if the next step is to debug why apt is broken..
[01:31] maybe his mirror is in california, it has been shaking there for some time now
[01:32] "mirror mirror on the floor..."
[02:07] Is it practical to make a custom user who has mostly read-only rights for installing the tarballs, if I am paranoid about them affecting system files (i.e. even if I chose to modify the prefix in 'configure')? Or will just installing without root be fine, and it shouldn't be able to touch system folders even if it mistakenly tries to?
[02:15] pikapika: use lxd
[02:16] leftyfb, isn't a container a rather "overpowered" tool for that?
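To verify the chrony setup asked about above, a minimal sketch of pointing chrony at a single local NTP server and checking that it actually syncs; "ntp.local.lan" is a placeholder for the local server's name or address, and the default pool lines in /etc/chrony/chrony.conf would normally be commented out first.

# stop systemd-timesyncd so it does not compete with chrony
sudo systemctl disable --now systemd-timesyncd.service
sudo apt install chrony

# point chrony at the local server only (placeholder hostname)
echo 'server ntp.local.lan iburst prefer' | sudo tee -a /etc/chrony/chrony.conf
sudo systemctl restart chrony

# verify: the local server should gain a '*' once selected as the sync source,
# and tracking should report a small/shrinking offset
chronyc sources -v
chronyc tracking

Note that timedatectl only reports on systemd-timesyncd, so the "NTP service: n/a" seen above is expected once chrony has taken over rather than a sign of failure; chronyc is the tool to trust here.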
[02:16] pikapika: I use it all the time for testing things like that, no
[02:16] How much storage does a blank lxd take?
[02:16] do you get to set the directory, or is it installed in some system location by default?
[02:18] pikapika: depends on the distro you choose, and you can pick dir mode or btrfs, the former allowing you to pick the location only by setting configs first. Pretty sure btrfs allows you to change the location of the pool as well. Neither of which I'm familiar with
[05:04] Oh, I noticed a small bug on recent versions of firefox. It seems the cache doesn't migrate properly, so if anyone is experiencing crashing, clear your cache directory (~/.cache/mozilla/firefox)
[05:05] I should see if cruddy cache is also causing my issues on Android.
[05:06] i didn't have crashing ruedii but i noticed some 100% cpu on some FF sessions on latest updates
[05:06] maybe related
[05:07] check out htop while playing with FF
[05:08] Could be worth trying deleting the cache. Such bugs can show up in different ways depending on people's system setup.
[05:09] No harm in deleting that cache directory, it's all volatile stuff. I'm thinking of making that directory a tmpfs on my system to speed things up.
[05:09] Maybe make it tmpfs with backing store, though.
[05:09] i tested it on a clean FF reinstall with all dirs removed
[05:10] Did you remember the cache directory? It's separate for some reason (likely in case someone wants to make the cache directory no-sync and/or tmpfs)
[05:11] yeah we used to set the cache to 0 and combine with an ssd
[05:11] but i think my issue is caused by something else
[05:12] afk a bit first
[05:19] Certain websites waste a LOT of processor time and on slower processors will use 100%. (Cough, facebook, cough).
=== Ricardus_ is now known as Ricardus
=== kostkon_ is now known as kostkon
[06:12] hey
[06:12] I am searching for the Nautilus source code git that was used in ubuntu 18.04
=== arif-ali_ is now known as arif-ali
[06:36] i need to compile an extra module from the kernel, how can i do this in ubuntu?
[06:47] docmax, this looks like it might help https://serverfault.com/questions/674415/compiling-an-individual-kernel-module-debian-ubuntu
[07:45] Does `apt update` fail for everyone or is it just my ISP?
=== EriC^ is now known as EriC^^
[08:02] alkisg: works for me; are you root, or did you add sudo before apt?
[08:04] toddc, it appears that us.archive.ubuntu.com maps to either 91.189.91.38 or 91.189.91.39; the first one is broken, but if I force .39, then it works
[08:04] What do you get with `host us.archive.ubuntu.com` ?
[08:05] (thank you btw)
[08:09] toddc: never mind, both seem to be working now. Maybe it was some server syncing glitch
[08:48] Hi, I installed gnome-calendar through snap on Ubuntu 20.04. It doesn't start. Is there a way to see why it's not starting? Which is the recommended way to install the calendar, apt or snap?
[08:49] is "snap" something like flatpak?
[08:49] kinda
[08:49] ok, it came up yesterday and that reminded me.
[08:57] isgj: how are you starting calendar? maybe you can start it from the terminal using the command and see any output it gives
[09:00] https://paste.ubuntu.com/p/hxR2DJJ7tF/
[09:02] isgj: try without sudo
[09:05] that was run with sudo https://paste.ubuntu.com/p/9gtCY6FT43/
=== pseudocrat is now known as Pseudocrat
[09:08] isgj: give this a shot https://github.com/bitwarden/desktop/issues/215#issuecomment-623436919
[09:08] Issue 215 in bitwarden/desktop "Errors from snap install on Ubuntu...." [Closed]
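For the gnome-calendar snap that refuses to start: a hedged sketch of the usual first checks, run as the normal user rather than with sudo; interface names and output vary per snap, so treat this as a starting point rather than a fix.

# run it from a terminal to capture its own error output
snap run gnome-calendar

# list which interfaces the snap wants and which are actually connected
snap connections gnome-calendar

# watch for AppArmor / seccomp denials while reproducing the failure
sudo dmesg -w | grep -iE 'apparmor|denied'

# check the installed revision and whether another channel is available
snap info gnome-calendar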
[09:13] isgj: seems like it might be a bug in the snap; it needs to connect to dbus and can't, because it's missing some part in the config, just a guess
[09:15] thanks, all the permissions are on, but it doesn't work
[09:21] isgj: is there anything showing up in "dmesg -w" when you try running gnome-calendar from the terminal?
[09:26] when I run it without sudo https://paste.ubuntu.com/p/bGcB44vSfj/
[09:28] no idea, maybe you can allow it via apparmor somehow, I've no experience with it though or how to use it safely
[09:28] isgj: maybe you could use a trusted ppa or another trusted snap that doesn't have the issue
[09:31] hmm, this snap is from canonical
[09:45] is ports.ubuntu.com down for anyone else?
[09:46] hugotty: I had the same issue an hour ago with us.archive.ubuntu.com
[09:46] It got resolved automatically after 10 minutes
[09:46] I think they introduced a new server syncing script that is somehow broken
[09:52] Ah alright, I'll wait a bit then, thanks alkisg
=== Europa is now known as Guest4201
=== Europa_ is now known as Europa
[10:02] I had to log out and in again, but it doesn't sync the calendars 😫️
[10:04] hugotty: does it work now? I've reported the issue to #ubuntu-devel and they're asking for feedback
[10:11] *.ubuntu.com is down?
[10:14] TR1950X, no issues from here..
[10:45] Hi everyone, I'm running into a bit of a strange situation. We have a backup server (20.04) that is running out of free space, however, I cannot determine what is growing as there is about 3tb of storage unaccounted for. The server has 18tb and is currently almost 95% full (17tb used), however, using du -h I can only account for about 13tb
[10:45] it has to be something in our /home directory but it's as if du -h only returns 13tb max
[10:46] can this be possible?
[10:46] or is there another way for me to get more accurate numbers when looking at storage usage?
[10:47] mauzilla: I'm using bleachbit and stacer usually to clean up space and tmp files, not sure what's most recommended for a server
[10:47] bleachbit can be used in the cli
[10:47] let me install and have a look
[10:54] !info ncdu | mauzilla perhaps also?
[10:54] mauzilla perhaps also?: ncdu (1.15.1-1, hirsute): ncurses disk usage viewer. In component universe, is optional. Built by ncdu. Size 43 kB / 104 kB
[11:05] TRX1950X: same here, can't reach any of canonical's servers
[11:05] TR1950X *
[11:06] hugotty: we have #ubuntu-mirrors to ask about known issues on ubuntu repos if you like
[11:07] alright I'll hop in there. Think this might have something to do with ISPs because traceroute just stops at one of my ISP's switches
[11:11] huh, the problem just now resolved itself!
[11:11] the magic of ubuntu hugotty :p
[11:18] it certainly felt magical ;)
[11:18] : )
=== BlessJah_ is now known as BlessJah
=== rhymeswithmogul is now known as signofzeta
[13:20] Hi there!
[13:21] Any ideas on how to solve this issue? https://pastebin.com/2vLjCi2z
[13:22] adac: try sudo apt --fix-broken install
=== Marco_Polo is now known as Guest9248
[13:26] Saxle, hmm no luck, what I get is this: https://pastebin.com/jt66rkuu
[13:29] adac: what about sudo apt autoremove and sudo apt update && sudo apt full-upgrade ?
[13:30] adac: Are you on LTS 20.04? Did you upgrade from 18.04?
=== Abrax- is now known as Abrax
[13:36] adac: try to reconfigure the packages which need configuration. sudo dpkg-reconfigure
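On the earlier backup-server question (du accounting for roughly 13tb while df reports 17tb used): a sketch of the usual ways to narrow that gap; the /home path is taken from the conversation and the rest is generic.

# per-directory totals on one filesystem only (-x stops du crossing mount points)
sudo du -xh --max-depth=1 /home | sort -h

# the same data, browsable interactively
sudo ncdu -x /home

# files that were deleted but are still held open consume space du cannot see
sudo lsof +L1

# compare against what the filesystem itself reports
df -h /home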
[13:37] Saxle, yes I did an upgrade from 18.04 to 20.04 some time ago
[13:38] lotuspsychje, I'll try
[13:41] lotuspsychje, Saxle actually autoremove does indeed remove some packages, but one still has an issue
[13:41] https://pastebin.com/Lc1rwmCs
[13:42] sudo dpkg-reconfigure linux-image-5.8.0-59-generic
[13:42] /usr/sbin/dpkg-reconfigure: linux-image-5.8.0-59-generic is broken or not fully installed
[13:42] !info linux-image-generic focal
[13:42] linux-image-generic (5.4.0.77.80, focal): Generic Linux kernel image. In component main, is optional. Built by linux-meta. Size 3 kB / 18 kB. (Only available for amd64, armhf, arm64, powerpc, ppc64el, s390x.)
[13:43] !info linux-image-generic-hwe-20.04
[13:43] linux-image-generic-hwe-20.04 (5.11.0.22.23, hirsute): Generic Linux kernel image. In component main, is optional. Built by linux-meta. Size 3 kB / 18 kB. (Only available for amd64, armhf, arm64, powerpc, ppc64el, s390x.)
[13:43] adac: try sudo apt install --reinstall linux-image-5.8.0-59-generic
[13:44] Saxle, it says https://pastebin.com/P0Ht0yGM
[13:47] If I want to remove this package I get:
[13:47] https://pastebin.com/ZsGYkhvC
[13:47] wow that looks weird
[13:47] strange that it wants to remove all that
[13:47] uh yes
[13:47] LD
[13:47] why would hwe depend on 59
[13:48] well maybe you should do it, and try reinstalling hwe after
[13:48] yep I just did. let's see what happens :)
[13:50] Maybe it's not fully installed? adac: Try sudo apt install linux-image-generic-hwe-20.04 linux-hwe-5.8-headers-5.8.0-59 linux-generic-hwe-20.04 linux-headers-generic-hwe-20.04
[13:52] Saxle, will now run my ansible again, should re-install all that is needed. Let's see
[13:52] Good luck
[13:52] Hehehe thanks :-)
=== Abrax is now known as Azriel
=== Azriel is now known as Abrax
[13:57] Hmm nope no luck
[13:57] One more question: Is that correct:
[13:57] https://pastebin.com/3ujPP8QJ
[13:58] Yes, to install the hwe kernel for 20.04
[13:58] Looks like it didn't install before when you upgraded adac
[13:59] Saxle, when I run this manually on the machine I get this:
[13:59] https://pastebin.com/cp2kDMBU
[14:00] adac: i would always trust what apt says it wants to do
[14:00] it's a self-fixing mechanism
[14:01] and get rid of its complaints systematically
[14:02] there are some exceptions too ofc
[14:02] kk I will check the error messages
[14:03] df -h | grep boot
[14:03] /dev/md1 487M 453M 8.6M 99% /boot
[14:03] Dependency issues can be annoying; idk if your package linux-image-5.8.0-59-generic has issues
[14:03] looks pretty full
[14:03] hi
[14:03] !rootirc | root__
[14:03] root__: It's not technically our business, but we'd like to tell you that IRC'ing as root is a Very Bad Idea (tm). After all, doing anything as root when root is not needed is bad, and especially bad with software that connects to the Internet.
[14:12] Saxle, lotuspsychje I had to purge some older kernels manually. Not sure why autoremove kept them. Afterwards all was fine
[14:13] !yay | adac
[14:13] adac: Glad you made it! :-)
[14:14] Well also thanks to you guys :-)
[14:14] welcome adac
[14:15] Feels good when all is running fine again. And it is Friday noon as well here. So have a nice WE guys!
[14:15] same to you mate
[14:16] adac great
=== jpe__ is now known as sharperguy
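For the broken linux-image-5.8.0-59-generic thread that ended with purging old kernels by hand, a rough sketch of that cleanup; the version string is the one from the conversation, so substitute whatever dpkg actually lists, and never purge the kernel reported by uname -r.

# which kernel is running, and which image/header packages are installed
uname -r
dpkg -l 'linux-image-*' 'linux-headers-*' | grep ^ii

# a nearly full /boot is a common reason kernels end up half-installed
df -h /boot

# purge one specific old kernel (not the running one), then put the meta packages back
sudo apt purge linux-image-5.8.0-59-generic linux-headers-5.8.0-59-generic
sudo apt autoremove --purge
sudo apt install linux-generic-hwe-20.04
sudo update-grub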
[15:37] When it comes to securing my Ubuntu server, I use a firewall behind a reverse proxy with ports 80 and 443 open and 1 more port open for ssh. Are there any other suggestions to secure my server?
[15:47] firewall is a great start but what is running on those ports is usually what gets you screwed
[15:47] especially when it's some web thing you don't really know much about and don't run updates for very often
[15:51] You could use fail2ban to block some attacks devslash
[15:52] coke it's not some web thing I don't know much about, it's Traefik
[15:53] if you use traefik then the web apps are probably in containers
[15:53] so should be way better already
[16:00] Saxle: fail2ban only makes sense if you even users to log in with passwords (and you shouldn't)
[16:03] coke: not true. fail2ban can be used for a lot more than remote authentication attempts
[16:03] even allow* users
[16:04] yea anything that is vulnerable to brute forcing
[16:04] but ssh with key-only auth isn't
[16:04] coke: for instance, I wrote a filter that checks for wordpress login attempts or sql injection on the site or bots
[16:05] coke: it's also useful to just for a thing in a log and then do a thing, not related to ip addresses or banning (firewall)
[16:05] look for*
[16:09] ip blocking is something that you get strange support calls about, so I don't know how broadly I'd want to use it
[16:09] if you know certain http requests are bad just error them?
=== TheHoliestRoger is now known as mentholtissueguy
[16:33] Hello. I'm looking for advice on Ubuntu Server 21.04 for raspberry pi. I haven't found any IRC channel about this kind of release. Is there one?
[16:34] webchat54: it depends on the advice you're looking for. First off, why would you put a non-LTS release on something you consider a server?
[16:36] Because I want to use GPIO and only versions since 21.04 provide a good way to do that.
[16:37] webchat54: what kind of advice do you need?
=== mentholtissueguy is now known as TheHoliestRoger
[16:42] The existing python3-rpi-gpio in ubuntu raspberry releases before 21.04 seems to be deprecated and replaced by python3-lgpio. In my case I'm looking for help and advice to install an RTC DS1302 module.
[16:42] My DS1302 RTC module is documented to use: https://github.com/sourceperl/rpi.rtc with python3-rpi.gpio. But since ubuntu 21.04 this package seems deprecated and the correct way is to use python3-lgpio (https://discourse.ubuntu.com/t/how-to-use-raspberry-pi-gpio-pins-with-ubuntu/22009 & lgpio doc is http://abyz.me.uk/lg/py_lgpio.html).
[16:42] So I don't find any python implementation for this DS1302 RTC module that uses lgpio.... If someone has more information about that, thanks in advance
[16:43] I thought .04 were LTS releases?
[16:43] donofrio: every 2 years
[16:50] I don't have the skill to develop a python lgpio implementation at all. I'm looking for some advice on who to contact... Probably the team that develops py_lgpio. But I have no idea who to contact in the ubuntu team...
[16:54] donofrio: 20.04 is an LTS release and LTS releases come out every two years, the next LTS release will be 22.04, then 24.04 and so on and so forth
[16:55] everything in between is an interim release which is only supported for 9 months
[16:57] donofrio: also only Ubuntu LTS receives 5 years of support, all other LTS flavours Xubuntu, Kubuntu, Ubuntu MATE etc are only supported for 3 years
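On the point above that fail2ban can watch any log, not just sshd: a rough sketch of a custom jail for wp-login brute forcing; the filter name, log path and regex here are illustrative only and would need adjusting to the actual web server's log format.

# hypothetical filter: match POSTs to wp-login.php in an access log
sudo tee /etc/fail2ban/filter.d/wp-login.conf <<'EOF'
[Definition]
failregex = ^<HOST> .* "POST /wp-login\.php
ignoreregex =
EOF

# hypothetical jail using that filter (log path is a placeholder)
sudo tee -a /etc/fail2ban/jail.local <<'EOF'
[wp-login]
enabled  = true
port     = http,https
filter   = wp-login
logpath  = /var/log/nginx/access.log
maxretry = 5
EOF

sudo systemctl restart fail2ban
sudo fail2ban-client status wp-login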
[17:02] webchat54: you may need to look through the options there and see where the best place to ask your question is :-) [17:04] webchat54: it also looks like that maintainer has an irc channel: #ubuntu-motu [17:04] Ok. Thanks davidkrauser [17:04] webchat54: sorry I couldn't be more helpful directly :-) [17:05] The keyboard is the Logitech 613. Has both wireless (with dongle) and bluetooth connectivity. [17:05] My fault. No problem :) [18:45] anybody know if NVIDIA has a feed or a consistent naming scheme for releasing new drivers? [18:45] I'd like to setup a cron job to mirror the latest .run driver files [18:46] whenever there is a new release [18:48] don't use such run file, use the ones ubuntu provides [18:48] What's wrong with using the .run file? [18:48] maybe they give a .rss feed, dunno [18:49] patiencezero: for one, we cannot support the .run files here [18:49] if you want to be a beta tester, fine, but that is beyond ubuntu support. [18:49] yeah, I pinged one of the guys from the Ubuntu "graphics drivers" team to see how they check [18:50] I imagine there's a pipeline of some sort for y'all to update the ppa, just hoping that pipeline isn't a manual check. lol [18:51] I've always assumed it was an nvidia engineering telling us they'd like us to make a new build of their new driver [18:51] you mean https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa ?? [18:51] just run updates to spot a new version [18:52] there is also !hwe [18:52] !hwe [18:52] The Ubuntu LTS enablement stacks provide newer kernel and X support for existing LTS releases, see https://wiki.ubuntu.com/Kernel/LTSEnablementStack [18:53] I'll be attempting to support several distros that don't host proprietary drivers [18:53] I could see this turning into a nightmare though, with the DWM / x-server breaking every time there's a kernel update. [18:54] I guess I'd have to write a hook to have it reinstall the latest nvidia driver everytime there's a kernel update... [18:54] patiencezero: if this is going to be a cross distro effort, maybe #linux is a better place to discuss it. and note there's https://www.nvidia.com/en-us/drivers/unix/ [18:55] thanks Tom, good idea. I'll check there! === Lutine_ is now known as Lutin === ghostcube__ is now known as ghostcube [19:04] Mekaneck, oh thank you for the background on LTS versions [19:08] donofrio: no problem, you're welcome :) [19:29] Does anyone know where I can find an updated list of patches that are applied to Ubuntu's kernels? There used to be one at https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.11/SOURCES, but that's 404ing now. I had a kernel build script that depended on it. [19:34] thismachine: in the git repository for each release [19:35] thismachine: e.g. git://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/hirsute [19:38] TJ-: https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/hirsute/tree/ has no SOURCES file, though? [19:42] Thanks for the links. Is there a git repo for the latest kernel itself? I'm trying to build the latest kernel releases, eg 5.13.x. There used to a repo that stayed up to date with mainline releases found at ~kernel-ppa/mainline [19:44] Hi y'all. I'm back from earlier (the guy who's trying to rescue some files). I found some discrepancy between df -h /home/$USER and the output of gddrescue. Is it because I used --sparse with ddrescue? 
[19:44] Hi y'all. I'm back from earlier (the guy who's trying to rescue some files). I found some discrepancy between df -h /home/$USER and the output of gddrescue. Is it because I used --sparse with ddrescue? If I run ls -lah from /home/$USER it says my sbd1.img from ddrescue is 466G, however, when I run df -h /home/$USER it returns "/dev/sda6 441G 187G 231G 45% /home"
[19:44] How is it possible for a 466G file to be in /home when /home is 441G?
[19:44] Or even worse, how come /home is only 45% used? lol
[19:44] Once again, is it because of the sparse file deal? How do I know its true size?
[19:44] https://kernel.ubuntu.com/~kernel-ppa/mainline/?C=N;O=D
[19:44] thismachine, ^
[19:44] thismachine: https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.13/ says "To obtain the source from which they are built fetch the commit below: git://git.launchpad.net/~ubuntu-kernel-test/ubuntu/+source/linux/+git/mainline-crack cod/mainline/v5.13"
[19:44] Guest14, you had a faulty disk, didn't you?
[19:45] so a discrepancy would be expected..
[19:45] oerheks yes. I was able to back it up with ddrescue
[19:45] oerheks wdym? this is a good disk though
[19:45] How is it possible for a 466G file to be in /home when /home is 441G? .. errors in the filetable?
[19:45] Guest14, could be reflinks or compression, too. You can use compsize to figure out if that's the cause of the discrepancy
[19:46] tohreyn, oerheks thanks
[19:46] So I plugged my faulty disk into my good computer and used ddrescue to back it up, though I used the sparse flag
[19:46] I thought that might explain why a '466G' file can be in a 441G partition
[19:47] I don't see any other way
[19:47] Guest14: is it owned by root?
[19:48] Guest14: root gets 5% more disk space than is really shown, you can have a disk at over 100% capacity
[19:49] gordonjcp, no way of telling when he copied..
[19:49] only the original disk can show restored bad blocks
[19:49] Guest14: 105% of 441GB is 463GB...
[19:50] i like gordonjcp's theory, which i think refers to ext2-4 only.
[19:50] gordonjcp yes, it's owned by root but as you said 105% of 441GB is 463, though not only that
[19:51] df -h /home/$USER returns "/dev/sda6 441G 187G 231G 45% /home"
[19:51] because i have other files under /home/$USER
[19:51] how come it's 45% free?
[19:51] thismachine: tomreyn since v5.6.18 the build scripts changed and as the HEADER tells, the commit of the build is shown - there are no longer patches on top
[19:52] TJ-: ah, i didn't notice that the HEADER message changed
[19:53] the git commit now includes the packaging patches
[19:55] Okay. I guess my google-fu is better today. Apparently one can tell a sparse file's size using `du -h --apparent-size sba1.img` to see how much it could expand to
[19:55] and `du -h sba1.img` to check the _actual_ size
[19:56] In this case, the actual size is 187G. That explains why it is sitting on /dev/sda6 which is only 441G and is only 45% used
[20:00] TJ- you mentioned that once ddrescue was done recovering the data (I created a backup of that image with cp --sparse=always) I should "try using the fsck tool (initially in the do-not-change-anything mode!)". Do I have to mount the image first? I'm not sure if this question is stupid
[20:01] since one mounts disks not images, right?
[20:02] Guest14: no, it isn't and no you don't; fsck can operate on file-systems inside regular files
[20:03] Guest14: but you could also attach the file to a loopdev (/dev/loopX) so it appears as a block device
[20:04] TJ- are there any advantages to doing the latter?
[20:04] Guest14: e.g. "sudo losetup -f -P --show path/to/file.img" and that will report which /dev/loopX it is attached to
[20:04] Guest14: yes; more tooling options that would expect the file-system in a block device
[20:05] hello
[20:05] How do I authenticate scripts from ubuntu-archive-tools with my UbuntuOne/launchpad account? Is it possible to use an exported ssh key? I can't find any documentation on this online.
[20:06] so I guess before I run "sudo losetup -f -P --show path/to/file.img" I have to attach file.img to a loopdev?
[20:06] Would I pass /dev/loopX to fsck or file.img?
[20:07] Guest14: no, the command I gave you would do that for you
[20:07] Guest14: aside from correcting the path, that command is complete
[20:08] Guest14: losetup 'mounts' a file as a block device (confusing use of 'mount' in this context, I'm afraid!)
[20:09] Guest14: if it reports /dev/loop8, then you can do things like "fsck -N -V /dev/loop8"
[20:09] Indeed. I'm still trying to wrap my head around the concept
[20:09] TJ- Guest14: if it reports /dev/loop8, then you can do things like "fsck -N -V /dev/loop8" << this makes sense
[20:10] loopdev isn't bothered whether what is in the file makes sense as a block device; it just presents whatever is in the file as a block-addressable device that behaves somewhat like a physical disk/SSD
[20:11] I'm curious though: to 'mount' this file as a block device, would I need 187G (the _actual_ size used on disk by the sparse file) free, or 441G (the maximum size to which file.img may expand)?
[20:11] doesn't enter into it
[20:12] I see. I'm curious, what is file.mapfile for? We haven't needed it so far
[20:12] no space/memory is being used. All a loopdev does is translate sector-based access (e.g. read sector 8) to an fseek(path/to/file, 512 * 8, SEEK_SET)
[20:12] Does linux know 'this (sparse) file img.file has a matching file.mapfile'?
[20:13] Guest14: the mapfile was created by ddrescue to record which sectors it managed to extract and which had problems
[20:13] Guest14: the mapfile has nothing to do with anything but the ddrescue tool (it can use the mapfile to restart/resume a recovery)
[20:14] Got it. That makes sense. For some reason I thought it was a 'metadata' sort of thing in general and not only for ddrescue
[20:15] By the way, journalctl reported 6 errors that day. You mentioned "If that suggests there aren't too many faults"
[20:15] Guest14: that sounds like a lucky/good result!
[20:16] Guest14: I've done rescues where 80% of sectors had failed
[20:17] Guest14: those jobs can often take a week... generally I put special coolers on the devices to stop them failing thermally due to the prolonged aggressive access
[20:18] Cool. I hope I can recover my files; though, to be honest, this has been quite an insightful experience so far
[20:20] A bit off topic, I'm glad #ubuntu has logs. Can refer back to this conversation at any time
[20:21] I often wonder how my brain retains and can produce all this arcane stuff - I often have to refer back to my own logs for guidance on things I need to do!
[20:23] I'd pictured you going through your great Zettelkasten lol
[20:24] :D
[20:24] I am generally the 'low-level' and 'go to the source-code' guy around here :)
[20:26] Guest14: TJ-: How'd you know that?
[20:26] TJ-: I just quickly looked at the kernel code before answering your question
[20:26] lol
[20:28] :)
[20:28] "Feel the source, Luke"
[20:33] I guess. Step by step. I'm slowly dabbling in C
[20:33] I was going to ask you if you would recommend any books to learn this much about Linux right before you told me you were the 'go to the source-code' guy around here
=== war is now known as Guest7976
[20:49] anyone tried gnome40? is it rough or usable by this point?
[20:50] tried fedora34 with gnome40, seems stable to me
[20:50] Where in gnome can I edit the width of the scroll bars?
[20:51] but not in ubuntu. haven't tried it in ubuntu
[20:54] I just re-installed grub on my partition after having reinstalled Windows. Unfortunately, it did not work, in the sense that GRUB does not load on machine boot.
=== kevr is now known as tiny-kevr
[20:55] rapid16 what comes up, rather than grub?
=== tiny-kevr is now known as kevr
[21:00] Well it just boots straight into Windows 10
[21:02] rapid16 can you boot into a live ubuntu?
[21:03] rapid16: it is possible the firmware is in fastboot mode where it'll boot the default OS directly
[21:06] Okay. I'll check that
[21:07] Guest14: I'd recommend reading well-written source. Kernel is generally well written, easy to reason about if you skim it and don't try to resolve every pre-processor macro or struct.
[21:20] I discovered I can access Ubuntu by booting using the UEFI boot menu. But I need to install GRUB somehow so that I get the option between Ubuntu and Windows.
[21:22] rapid16: Might see what resukts when booted into ubuntu with terminal command ' sudo update-grub '. See if this picks up Windows too.
[21:22] results*
[21:23] Yes, it saw Windows too
[21:24] rapid16: Reboot to see the effect :D
=== genii is now known as genii-core
[21:43] TJ- well, I haven't tried to read the kernel code yet, though I reckon one would need more than just C knowledge to grok it
[21:46] Guest14: if you don't try to understand every word, but skim for the meaning, you'll quickly pick up C language structure and style
[21:49] Hey guys, what is the ~/.ssh/known_hosts used for? Can one access root via SSH using only an ssh key?
[21:50] supremekai: the known_hosts file keeps track of the remote ssh host keys you've said are legitimate; that way, if you contact them again and they've got a different key, you'll get a warning about it
[21:51] sarnold, how do I know the last login made by those keys?
[21:51] supremekai: you can access root via only an ssh key, that's pretty common, the ubuntu cloud images set up an 'ubuntu' user with passwordless sudo access; so you just ssh in to the ubuntu account, run sudo, and no passwords are involved anywhere
[21:51] supremekai: I got 2-3 freelancers working on this server, can one use known_hosts to get illegitimate access?
[21:51] supremekai: no
[21:51] supremekai: known_hosts only keeps track of the servers you've ssh'd to
[21:52] oh
[21:52] supremekai: ~/.ssh/authorized_keys sounds more like the file you're thinking about
[21:52] that one I don't have, sarnold
[21:52] so, known_hosts is like from SERVER to another HOST via ssh
[21:52] and authorized_keys from another HOST to SERVER via ssh?
[21:53] supremekai: known_hosts is the ssh *client* keeping track of the various servers you've contacted; authorized_keys is consulted by the server when a client connects
[21:53] oh ok
[21:53] how can I "decrypt" or "dehash" it?
[21:53] 'cause I don't know "who" those hosts are
[21:55] supremekai: often that info is obscured or removed deliberately
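On the earlier unanswered question about telling which keys logged in: a sketch of the usual places to look; sshd records the fingerprint of the key that authenticated, which can be matched against the entries in authorized_keys, and the hostname below is a placeholder.

# fingerprints of the keys allowed to log in to this account
ssh-keygen -lf ~/.ssh/authorized_keys

# successful key logins recorded by the ssh server, including key fingerprints
sudo journalctl -u ssh | grep 'Accepted publickey'
sudo grep 'Accepted publickey' /var/log/auth.log

# known_hosts housekeeping: look up or remove the entry for one remote host
ssh-keygen -F some.remote.host
ssh-keygen -R some.remote.host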
[21:57] TJ-, can I remove those "known_hosts"?
[21:57] what is the worst case scenario of doing so?
[21:59] supremekai: sure you can; each time your ssh client connects to a remote host, that remote's pubkey is recorded in known_hosts once you've agreed to connect to it, so you don't get asked every time you connect
[21:59] ohhhhh
[21:59] now I get it!!
[21:59] thanks!!!
[21:59] That's the warning it usually prompts on putty or such
[21:59] oh ok ok
[22:01] yes
=== ircuser7 is now known as AppXPrt
=== AppXPrt is now known as AppXprt
[22:08] firefox notifications don't use notify-send in a wayland session, any solution to that?
[22:11] ice9: so 'notify-send' on its own works?
=== ubuntu is now known as Guest5210
[22:33] Hey, what would be the easiest way to do custom partitioning? I want to use btrfs with subvolumes, which the ubuntu installer doesn't allow
[22:39] Guest5210: it allows btrfs without subvolumes?
[22:40] matsaman: sure
[22:40] Do this from a live session, setting up btrfs, then run the installer again
[22:40] so i can edit the disk while the installer is running and during disk setup just click next?
[22:41] no. get a guide to do this manually.
[22:42] hmm, I thought it sounded a bit promising, but with the caveat that it ought to be done before you get to the partitioning screen
[22:42] i know how to partition the disk, i just don't know how the installer reacts if you don't want to use its disk setup
[22:42] but if a guide already exists it will probably be helpful :)
[22:43] some guide with nice pics https://www.addictivetips.com/ubuntu-linux-tips/ubuntu-btrfs/
[22:45] oerheks: thanks, so I assume I first have to set up the disks with the disk setup, and then edit it before it tries to install the system
[22:45] ?
[22:46] well what is your plan, encrypted?
[22:46] encrypted + 4 subvolumes
[22:46] https://mutschler.eu/linux/install-guides/ubuntu-btrfs/ and tons more to find ..
[22:47] hmm that first guide doesn't seem to mention anything about subvolumes
[22:49] and the second also doesn't really go into subvolumes + it uses the normal ubiquity disk partitioning it seems, as it's using ubiquity to set up everything (except grub)
[22:49] can't you install onto an existing filesystem hierarchy?
[22:50] "Note that the subvolume @ is mounted to /, whereas the subvolume @home is mounted to /home." .. the second one kinda does, in passing..
[22:50] does it really automount @ to / oO that would be new to me
[22:50] with zfs, I have like a dozen or two datasets, and ubuntu got installed into it
[22:51] yeah, same here, but I used the debootstrap approach and it wasn't super fun :)
[22:51] and a bit later, I copied the whole installation somewhere else, redid the pool, and copied it back, and it still worked
[22:52] sarnold: with debootstrap, is there an easy way to install ubuntu-minimal? i tried that, but the install was borked, as gnome didn't even have gnome-terminal, networkmanager or the likes
[22:52] i wouldn't mind going that route
[22:53] Guest5210: yeah, apt install ubuntu-minimal etc
[22:53] hm interesting, because I used ubuntu-minimal^
[22:54] let's try it again I guess
[22:54] ah, it may not be a task, too
[22:54] I don't know off-hand how to list the tasks that are available, it's not the easiest thing to learn, heh
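For the btrfs-with-subvolumes install being discussed: a rough sketch of the pre-install step from a live session, assuming /dev/sdX2 is the target partition; an encrypted setup would wrap this in LUKS first, and the guides linked above cover pointing the installer at the result and fixing up /etc/fstab afterwards.

# format and create the subvolume layout ('@' and '@home' are the names Ubuntu conventionally uses)
sudo mkfs.btrfs /dev/sdX2
sudo mount /dev/sdX2 /mnt
sudo btrfs subvolume create /mnt/@
sudo btrfs subvolume create /mnt/@home
sudo btrfs subvolume create /mnt/@log
sudo btrfs subvolume create /mnt/@snapshots
sudo umount /mnt

# later the subvolumes are mounted like this (typically via /etc/fstab)
sudo mount -o subvol=@,compress=zstd /dev/sdX2 /mnt
sudo mount -o subvol=@home,compress=zstd /dev/sdX2 /mnt/home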
[23:24] forgive me for being blunt, but is snap no longer a hot piece of garbage? I've had nothing but horrible experiences with it and that's putting it lightly
[23:24] no it's still hot garbage....
[23:25] well that's a shame
[23:25] bancroft: My take on snaps: has its place - but, on the desktop, use judiciously :(
[23:26] bancroft: it seems to work
[23:27] I ran into permission issues when installing apps with snap
[23:27] at least to get lxd, slack, zoom,
[23:27] yes, that's working as intended I think
[23:27] due to the improved isolation of such programs
[23:27] (that's why I like it for crap like slack or zoom)
[23:28] cbreak: do you use bash-completion with lxd? i have issues with the latest lxd version
[23:28] not that I'm aware of, no
[23:28] I use zsh
[23:28] and no intentionally installed completion
[23:29] I use bash completion with lxd delivered by the snap; the completion is glacial
[23:29] but it works
[23:29] sarnold: i always have to source .bashrc so that lxd completions work. but the latest also gives me an error when I start gnome-terminal: _have command not found
[23:30] Guest5210: oh weird. I've never seen anything like that
[23:30] weirdly, multipass works fine. so it has to be lxd imo
[23:30] Guest5210: Nice intro for LCD: https://popey.com/blog/2021/07/lxd-container-manager/ .
[23:31] yup already read it ;). have him in my rss feed
[23:31] LXD**
[23:31] nah i am for LCD :)
[23:31] managed to get firefox working inside lxd
[23:31] along with wireguard
[23:31] without completion :)
[23:31] ha :)
[23:53] sarnold: thanks, got mostly everything working, except I can't get network manager to manage my Ethernet device :/
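For the last message (NetworkManager not managing the Ethernet device in the new install): a hedged sketch of handing all interfaces to NetworkManager via netplan; the file name is the one Ubuntu desktop usually ships, but any .yaml under /etc/netplan works.

sudo tee /etc/netplan/01-network-manager-all.yaml <<'EOF'
# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager
EOF

sudo netplan apply
nmcli device status   # interfaces should no longer show as 'unmanaged'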