=== beaver is now known as beaver-idle === mIk3_09 is now known as mIk3_08 [02:01] hi :) does Ubuntu focal have drivers for the RTL8152B network adapter? are they available on apt? [02:05] also, how do I discover that? my search engine only finds tutorials for how to compile it. OTOH, Realtek network adapters usually work out of the box, which would explain why no-one blogs about them. [02:05] (except for compile help) === mIk3_09 is now known as mIk3_08 [02:06] The description of firmware-realtek includes * Realtek RTL8152/RTL8153 firmware (rtl_nic/rtl8153b-2.fw) [02:08] that's really close [02:09] fixed in the 5.9 kernel [02:09] !hwe [02:09] The Ubuntu LTS enablement stacks provide newer kernel and X support for existing LTS releases, see https://wiki.ubuntu.com/Kernel/LTSEnablementStack [02:09] hwe gives 5.11 .. [02:09] https://itectec.com/ubuntu/ubuntu-cant-get-rtl8125b-working-on-20-04/ [02:10] does that mean the upcoming LTS will ship with the drivers? [02:10] interesting how you would update.. [02:10] oh, sure [02:10] the current LTS ships with the drivers [02:10] or install 21.10 [02:11] that sounds good. thanks JackFrost, oerheks and ravage. :) [02:11] 20.04 HWE is 5.11 or with edge even 5.13 [02:11] so it should just work [02:13] and apparently the firmware is only needed for half duplex [02:13] ...seeing the commit message here https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/drivers/net/ethernet/realtek?id=0439297be95111cf9ef5ece2091af16d140ce2ef [02:13] Commit 0439297 in kernel/git/next/linux-next.git "r8169: add support for RTL8125B" [02:15] tomreyn: So yet another mark against "wifi always needs non-free firmware". [02:18] jhutchins: i assume you're not quoting me? [02:18] "always" is not true. [02:19] RTL8125B is wired, 2.5 GbE === M4he is now known as mahe [02:28] I wonder if that's what the Intel firmware is for. === genii is now known as genii-core === nacj_2 is now known as Mibix === beaver|idle is now known as beaver [06:14] film [06:15] yes === diskin_ is now known as diskin [07:36] Hello. Running 20, is it to be expected that checkarray triggers a rebuildArray/RebuildFinished event when it runs (regularly) for no apparent reason (the raid is just fine)? [08:01] talking about PXE boot, i understand that one image can be deployed to multiple PCs over the network, my question is how is that achieved when the clients are not in total sync? One PC asks for the image now, another asks after 5 seconds, the server sends the data to client1, but client2 is lagging; how is that solved? [08:36] greetings, many of my launchpad repos read "Cannot initiate the connection to ppa.launchpad.net:80 (2001:67c:1560:8008::19). - connect (101: Network is unreachable) [IP: 91.189.95.85 80]". Are there alternative repos for launchpad? [08:49] correction: I can't connect to any launchpad repo. === Bilge- is now known as Bilge [10:30] I have a raid controller, can someone help me install it [10:42] hello. I have a catch-22 situation. I can't start apache2 because it is missing an SSL certificate from letsencrypt, but I can't fix the letsencrypt certificate because I don't have a live webserver [10:47] any suggestions to point me in the right direction? Otherwise, I am thinking I might have to remove apache2 and reinstall...
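A minimal sketch of the HWE route discussed above, for 20.04 (the metapackage names below are the standard ones from the linked wiki page; the -edge variant is the newer kernel mentioned at 02:11):

    # enable the 20.04 hardware enablement (HWE) kernel, 5.11 at the time of this log
    sudo apt update
    sudo apt install --install-recommends linux-generic-hwe-20.04
    # or, for the newer "edge" kernel (5.13 at the time):
    # sudo apt install --install-recommends linux-generic-hwe-20.04-edge
    sudo reboot
    uname -r    # confirm which kernel is now running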
[10:47] Jan 19 03:09:47 BELV-SERV4-NVR systemd[1]: Starting The Apache HTTP Server... [10:47] Jan 19 03:09:48 BELV-SERV4-NVR apachectl[2173]: Action 'start' failed. [10:47] Jan 19 03:09:48 BELV-SERV4-NVR apachectl[2173]: The Apache error log may have more information. [10:47] Jan 19 03:09:48 BELV-SERV4-NVR systemd[1]: apache2.service: Control process exited, code=exited status=1 [10:47] Jan 19 03:09:48 BELV-SERV4-NVR systemd[1]: apache2.service: Failed with result 'exit-code'. [10:47] !paste | webchat10 [10:47] webchat10: For posting multi-line texts into the channel, please use https://paste.ubuntu.com | To post !screenshots use https://imgur.com/ !pastebinit to paste directly from command line | Make sure you give us the URL for your paste - see also the channel topic. [10:50] Jan 19 03:09:47 BELV-SERV4-NVR systemd[1]: Starting The Apache HTTP Server... [10:50] Jan 19 03:09:48 BELV-SERV4-NVR apachectl[2173]: Action 'start' failed. [10:50] Jan 19 03:09:48 BELV-SERV4-NVR apachectl[2173]: The Apache error log may have more information. [10:50] Jan 19 03:09:48 BELV-SERV4-NVR systemd[1]: apache2.service: Control process exited, code=exited status=1 [10:50] Jan 19 03:09:48 BELV-SERV4-NVR systemd[1]: apache2.service: Failed with result 'exit-code'. [10:50] Jan 19 03:09:48 BELV-SERV4-NVR systemd[1]: Failed to start The Apache HTTP Server. [10:50] sigh [10:52] webchat10: use https://paste.ubuntu.com [10:52] [Wed Jan 19 03:09:48.060678 2022] [ssl:emerg] [pid 2343:tid 139773782346688] AH02572: Failed to configure at least one certificate and key for www.example.com:443 [10:52] [Wed Jan 19 03:09:48.060976 2022] [ssl:emerg] [pid 2343:tid 139773782346688] SSL Library Error: error:140A80B1:SSL routines:SSL_CTX_check_private_key:no certificate assigned [10:52] [Wed Jan 19 03:09:48.060997 2022] [ssl:emerg] [pid 2343:tid 139773782346688] AH02311: Fatal error initialising mod_ssl, exiting. See /var/log/apache2/error.log for more information [10:52] AH00016: Configuration Failed [10:52] dude [10:52] what we asked you to do? [10:53] webchat10: or are you just fooling around, flooding the channel...? [10:53] no, first time trying this site to ask for some help since my website is donw [10:54] down [10:54] i told you twice to use https://paste.ubuntu.com [10:54] I tried copying some output to that paste.ubuntu.com and then I copied from there. [10:54] Doesn't look right yet? you can't see my output? [10:54] you need to post the link.... [10:55] https://paste.ubuntu.com/p/VR57fspZ58/ [10:57] https://paste.ubuntu.com/p/zTBDkMB3R3/ [10:58] I changed my real domain and put example.com [10:58] So, I'm asking how to remove references to my certificate and then i can try reissuing the certificate [10:59] that's all beyond my knowledge, wait for someone who knows how to assist you further [10:59] or ask in #linux maybe [10:59] I messed up and only created my certificate via letsencrypt for i.e. example.com but not for www.example.com and I'm trying to fix it [10:59] edit /etc/apache2/sites-available/yoursitename.conf (BACKUP FIRST!) [10:59] oh [10:59] oh [10:59] I had used certbot [10:59] in that case, just ask LE for the "www." as well [10:59] um [11:04] webchat10: certbot certonly -d comma,separated,domain,list --dry-run [11:05] webchat10: note the "--dry-run" option -- that will let you verify the command will do what you expect. If all looks well, re-run the command without that option. [11:05] right. 
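Filled in with the placeholder domain used elsewhere in this log (example.com standing in for the real one), the certonly suggestion above comes out roughly as:

    # dry run first; if it reports success, repeat without --dry-run
    sudo certbot certonly -d example.com,www.example.com --dry-run
    sudo certbot certonly -d example.com,www.example.com
    # the resulting files land under /etc/letsencrypt/live/example.com/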
but I already removed my certs so I have to start over with certbot [11:05] so, the error relates to my missing certificates (not found) [11:07] Either (1) remove the symlink in /etc/apache2/sites-enabled/site-https so that apache doesn't try to spin up that webserver [11:08] or (2) comment out the entire host:443 configuration directive in the file (if one config file is handling both 80 and 443) [11:08] then restart apache. [11:09] I'll try that [11:22] I tried commenting out the two separate directive files for 80 and 443, and then I remembered I probably want to keep the directive file for 80, so I restored its backup (for the port 80 directive file). But it's not working. I tried restarting apache2 but it still won't start. The log file says it is still missing the certificate file *.pem [11:23] why did you touch ANYTHING in the :80 directive file === user_ is now known as shantanuo [11:23] it's all ok, I restored it [11:23] also, why is your :80 file referencing a cert at all? [11:23] It's not [11:24] how many sites are in sites-enabled? [11:24] wait [11:24] also [11:24] EVERYTHING in /etc/apache2/sites-enabled is a SYMLINK back to /etc/apache2/sites-available, right? [11:25] 2 virtual hosts. my site I'm trying to restore and one called example.com to see if I could get 2 different web sites to show on localhost [11:26] not sure about that [11:26] webchat10: okay, check that [11:29] https://paste.ubuntu.com/p/W7gYWdsg4v/ [11:29] It appears it is symlinked [11:31] indeed [11:31] so then just remove the symlink 000-default-le-ssl.conf [11:31] in enabled or available? [11:32] the SYMLINK is in sites-enabled [11:32] leave the file in sites-available alone [11:32] do you know the syntax to remove a symlink? [11:33] "rm" [11:33] well, "sudo rm thefilename" [11:40] ok. If I go to the ip address the site works, but if I go to http://sitename.com it tries to go to https://sitename.com and won't work yet [11:41] so there is some file telling it to force to the https version of the site. [11:44] RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent] [11:44] yes, that configuration directive is in your default:80 file [11:44] I'll try commenting out that? [11:44] comment that out for the time being, then restart apache. [11:45] although, as i recall, that doesn't actually mess with certbot [11:48] yeah. commenting it out didn't work, the browser is still trying to redirect to https [11:50] oh, it was a permanent redirect. The browser remembers. Use a different browser / forget the site from the one you're using [11:51] I'll try [11:52] note that "forgetting" a site will also tend to remove stored passwords, so be careful there ;) [11:53] another browser works fine for port 80 so I can now try to fix the certificate [11:53] I thank you so much. I'll advise if certbot gives me issues [11:54] I can see how to do the sitename.com and www.sitename.com I think with certbot, I just missed it the first time [12:03] https://paste.ubuntu.com/p/dwczKYSGRh/ [12:03] I know this isn't letsencrypt support, but I'm getting closer, just not quite right yet [12:04] how do I fix a cancelled snap refresh?
[12:04] ❯ snap refresh telegram-desktop --edge [12:04] error: cannot refresh "telegram-desktop": refreshing disabled snap "telegram-desktop" not supported [12:04] ❯ snap enable telegram-desktop [12:04] error: snap "telegram-desktop" has "refresh-snap" change in progress [12:06] nvm, `snap abort --last=refresh` fixed it [12:10] webchat10: See the logfile /var/log/letsencrypt/letsencrypt.log [12:14] There is a bunch of information in the letsencrypt log about snap and the python script [12:16] and I would imagine also the error it encountered [12:16] it just says StopIteration, an internal error occurred [12:17] ... this is why I use 'certonly' ... [12:17] actually, try that [12:17] certbot certonly -d yourdomain.tld,www.yourdomain.tld --dry-run [12:18] obviously, replace "yourdomain.tld" with the actual domain. [12:18] right. Ok. [12:18] so dry run to see if it will work, and then if it is successful, try again without dry run? [12:18] yes [12:19] basically "certonly" tells certbot that you only want it to get you the certificate(s) and that you will do the work yourself to link them in the relevant application(s) [12:22] dry run says successful [12:24] OK, so then run it without "dry run" (and PAY ATTENTION to the filenames / paths) [12:24] er without "--dry-run" [12:28] it just says no renewal required. I did not choose to create new certs [12:29] renewal not required [12:29] Hiyas all [12:30] I also restored my backup 443 directive file [12:30] then certbot successfully created / downloaded your certs before. Did it give you the Certificate Path? [12:30] should I try creating new certs [12:30] it didn't ask [12:30] it doesn't _ask_ ; it _tells_ you [12:30] yes, it says where the certs were saved [12:31] okay, so edit your /etc/apache2/sites-available/domain-443 file, and make sure the apache directive for the certificate and key file is correct [12:32] ok [12:33] heyo BluesKaj [12:33] hey djph [12:34] webchat10: IIRC, it'll be SSLCertificateFile /etc/letsencrypt/live/[site]/fullchain.pem and SSLCertificateKeyFile /etc/letsencrypt/live/[site]/privkey.pem [12:35] those pem file names should be listed in /etc/apache2/sites-available/000-default-le-ssl.conf? [12:35] whatever file controls your HTTPS site [12:36] right, that is the file name that controls the 443 directive [12:36] if it's "000-default-le-ssl.conf", then yes. === fredix_ is now known as fredi === fredi is now known as fredix__ [12:45] I added those lines and now it works!
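The "those lines" in question are the two SSL directives djph quoted; a sketch of how they sit inside the :443 vhost file (000-default-le-ssl.conf here, with example.com as the placeholder domain and the stock DocumentRoot assumed):

    <VirtualHost *:443>
        ServerName example.com
        ServerAlias www.example.com
        DocumentRoot /var/www/html
        SSLEngine on
        SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    </VirtualHost>

After editing, a quick sudo apachectl configtest and an apache restart complete the loop, as done above.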
thianks so much for your help === |avril is now known as avril === JeremyD is now known as SleePy === mauved_ is now known as mauved === LjL^ is now known as LjL === five643333 is now known as five64333 === chrfle_ is now known as chrfle === SirScott8 is now known as SirScott === Amaranth5 is now known as Amaranth === jgee9 is now known as jgee === y0sh- is now known as y0sh_ === Olfodr_ is now known as Olfodr === dstein64- is now known as dstein64 === russjr089 is now known as russjr08 === Speed2u_ is now known as Speed2u === greyblue9_ is now known as greyblue9 === terrorjack0 is now known as terrorjack === Fossil_ is now known as Fossil === Voeid4 is now known as Voeid === relipse_ is now known as relipse === Irrelevant2 is now known as Irrelevant === Raqbit3 is now known as Raqbit === Jubes7 is now known as Jubes === madhens_ is now known as madhens === loonybin is now known as lagbox === fredix__ is now known as fredix === Kamilion|ZNC is now known as Kamilion === ubot3 is now known as ubottu [12:51] dearest creatures in creation, I have a question for your attention: does Ubuntu 21.10 use systemd-resolved, and/or will Ubuntu 22.04? [12:52] (I'm currently using 20.04) [12:52] badp: I'm pretty sure -resolved has been used since like 18.04 === Pokey is now known as [Pokey] === [Pokey] is now known as Pokey === Pokey is now known as _Pokey_ === _Pokey_ is now known as __Pokey__ === __Pokey__ is now known as [Pokey] === kostkon_ is now known as kostkon [13:03] My web site came up but it didn't show as secure. I had deleted the symbolic link previously. I re-added it here: [13:03] sudo ln -s /etc/apache2/sites-available/000-default-le-ssl.conf /etc/apache2/sites-enabled/000-default-le-ssl.conf [13:03] sudo service apache2 restart. now it shows with the padlock [13:24] iomari891: if the launchpad issue still affects you, and you think it's not your end but the server side, check the /topic in #launchpad for how to report such issues. [13:30] webchat10: then you're good. if you want, re-edit the default.conf so that it sends the https redirect [13:36] webchat10: there are the a2ensite and a2dissite commands for managing the symlinks in sites-enabled. and similarily a2enmod + a2dismod for modules. [13:37] djph: you're right, I had misdiagnosed things. 20.04 does ship with resolved, it's just misbehaving for me [13:38] for context, I'm the victim of Enterprise Grade Closed Source VPN Software™, wherein version 5.2 and 5.3 of global protect® take completely different routes in setting DNS things with... version 5.2 modifies resolv.conf directly (which is dirty but works well enough), whereas version 5.3 configures systemd-resolved (which is probably more elegant but only works for a few minutes before breaking). Hopefully resolved is better behaved in 22 [13:38] .04 [13:39] or Those Responsible will fix their Enterprise Grade Closed Source VPN Software™ :/ [13:40] can't you just use openconnect? [13:43] apparently there's a way to do that and still go through Okta®, yes, but I've already burned on this topic about 200% of the spoons I had budgeted :) [13:44] thanks for the help~ [13:44] badp: wouldn't know. I don't use either :) [13:44] wouldn't wish it on you either === yano1 is now known as yano === Kow_ is now known as Kow === onepict is now known as tisiphone [15:14] hi :) I'm using Ubuntu focal with NetworkManager for wifi. I want it to reconnect to the wifi whenever the access point's MAC disappears from the ARP list (arp -n). 
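A minimal sketch of the kind of watchdog script being described here, with the AP MAC and connection name as obvious placeholders (this is the "own bash script" option, not a built-in NetworkManager feature):

    #!/bin/bash
    # reconnect the wifi whenever the AP's MAC drops out of the ARP table
    AP_MAC="aa:bb:cc:dd:ee:ff"    # placeholder: take the real value from iwconfig output
    WIFI_NAME="mywifi"            # placeholder: NetworkManager connection id
    while sleep 30; do
        if ! arp -n | grep -qi "$AP_MAC"; then
            nmcli connection up id "$WIFI_NAME"
        fi
    done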
is there an easier way than writing my own bash script to periodically $(arp -n | grep …)? [15:17] well yes, use the return value of grep -q … :D but really I meant just "than my own script". [15:20] Sven_vB: as in NM loses the connection and doesn't automatically reconnect? [15:31] Sven_vB: why are you specifically mentioning "whenever the access point's MAC disappears from the ARP list"? Sure that happens when an AP is offline for some amount of time, but why exactly have you dug that deep? [16:14] djph, according to NM the connection is still up. it seems dead though, yes. [16:16] leftyfb, the AP's MAC was easy to find in iwconfig output, so arp -n was my first idea to check if it's online [16:18] Sven_vB: I don't understand using the client's arp table to determine if an AP is up. Doesn't the AP software/controller handle this sort of thing? Or just a ping? [16:18] originally my plan was to look up the IP in arp -n and then ping the AP, but it turns out that it vanishes from the list soon enough so pinging wasn't even necessary. [16:19] my assumption is that the AP is always up, so if it becomes invisible, it's a local problem. only one of 8 Ubuntu focal machines is affected. [16:20] Sven_vB: I can't think of any case where a single AP would "become invisible" to 1 client and not others. Not unless it was done on purpose [16:21] since "nmcli connection up id $wifiname" solves the problem, I assume it's a software problem. [16:21] I doubt it [16:21] just masks the problem [16:22] probably some weird driver glitch for that one antenna chipset [16:22] leftyfb: see it all the time when my mom takes her tablet outside :D [16:22] djph: yeah, that's a loss of signal and expected [16:22] my attempts to fix the underlying problem have exhausted my spare time, so for now I'm ok with just reconnecting quickly enough [16:23] leftyfb: yeah, you tell your luddite mom that and see what happens :D [16:24] oh, I see now your doubt was referring to "solve" [16:25] agreed then. === Leoneof|2 is now known as Leoneof [16:29] hi, just wondering. what is the best way to stream internet radio on ubuntu? especially tune-in radio [16:29] jsbach, send or receive? [16:30] Sven_vB, just receive as an audio player [16:30] I like qmmp for that [16:31] want to be able to log in to tunein-radio.com for example [16:31] Sven_vB, thanks. is that not the new xmms? [16:31] I think it is [16:32] ok [16:32] if they use HTTP Basic Auth via HTTPS, you can probably just insert "user:pass@" in front of the hostname [16:33] ah! ok! gotta google it. no time now. I thought ubuntu had already ported in some project which has a comfortable gui for that. [16:33] https://matrix.to/#/#thefreedomcellnetwork:halogen.city [16:33] anyways nevermind [16:33] jsbach, there may be more specialized projects indeed === ArcherGodson is now known as ArcherGosdon [16:41] jsbach uses ubuntu, my life is complete [16:45] Anyway I've got a question, I have a small compute cluster on a university network with five ubuntu 20.04 machines. They are all just dhcp. For a few years DNS seemed to work, you can ping the hostname of any of them and it resolves correctly. Yesterday I needed to do some maintenance on two of the five. The procedure was identical and they both came back up; however, for one of them, the dns server seems to still have the old address cached, so using the hostname times out. But the other came up perfectly.
I've tried running $ systemd-resolve --flush-caches on both the server itself as well as other machines on the network that try to connect, and I've of course waited overnight. Why is the dns server still giving the incorrect address? [16:46] octav1a, even i have to make some compromises [16:46] lol [16:46] I'm glad Brandenburg 5 wasn't one of them. [16:47] octav1a, hostname as in DHCP assigns fixed IPs by MAC, or do you mean avahi/mDNS hostnames? [16:47] octav1a: weird that the host didn't just get the same IP address back. [16:47] octav1a, how does the DNS server know about DHCP changes? [16:50] Sven_vB: network admin is not completely my area of expertise, so I'm open to getting a better idea of what's actually going on. I assumed that by giving a hostname to the machines during the setup process, this name was transmitted to the network sometime after it was first connected. Therefore I'm assuming there should be some similar mechanism to update the address. [16:50] I'm not sure how the IPs are distributed but obviously it's not static to MAC long term. [16:51] (which I would assume for any DHCP really) [16:52] maybe this is more networking but I'm thinking there should be a suite of tools in ubuntu that would help to manage or configure these things at a high level. [16:54] in your scenario it would seem useful to assign static IPs. that way you can give longer validity periods for the DNS entries. [16:55] you'll need some way to notify the DNS about IP changes. if you use dynamic IPs, a good way would be to make the DHCP server notify your DNS. [16:57] Are these things all controlled by the DHCP and/or DNS server administrators? There would not be a way to 'push' the changes from the client machines? I don't manage those parts of the network. [16:57] What is running your DNS server? Your DHCP server? Do they talk with each other? === jhutchin1 is now known as jhutchins [16:58] where is the focal server guide source? I can't find it in https://launchpad.net/serverguide, only bionic and xenial [16:58] mncheckm, probably a question for #ubuntu-server ... [16:59] ogra, for me that channel has been empty for some time [16:59] ogra, never mind, it was a typo [16:59] 232 people there 🙂 [16:59] https://ubuntu.com/server/docs [17:00] Maik, the source of it 😉 [17:01] oops, my bad [17:01] https://code.launchpad.net/serverguide [17:02] https://discourse.ubuntu.com/t/ubuntu-server-guide/12504 actually [17:02] IIRC all official docs moved to discourse as input/source === icedtea is now known as BobTheTomato === ledeni_ is now known as ledeni [17:34] ogra: Perhaps there are some old links that we could clean up. [17:36] jhutchins, for sure [17:41] hello [17:42] have you managed to port PKGSRC to Ubuntu? I asked about 2 years ago. is it done already? [17:43] whom did you ask ? [17:43] jason1237: Why do you want it? [17:43] perhaps he/she is still around to answer 🙂 [17:44] isn't that like asking about porting yum? [17:44] Isn't pkgsrc a package management system? It would seem more likely for individual packages to be ported. [17:44] leftyfb: Yeah. [17:44] one could create a snap of it 😛 [17:44] leftyfb: Then again, Yellowdog Update Manager... [17:45] jhutchins: to make a package from source automatically, using a user account. [17:45] jason1237: I see zero reason for any linux distro to port pkgsrc to linux [17:45] jhutchins: pkgsrc is a beautiful, innovative tool. it's about time to port it, after so many years.
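For comparison with the pkgsrc "cd category/pkg; make" workflow mentioned in this exchange, the closest stock Ubuntu equivalent is roughly the following sketch (it assumes deb-src lines are enabled in sources.list and build-essential/dpkg-dev are installed; firefox is only an example package name):

    sudo apt update
    sudo apt build-dep firefox      # pull in the build dependencies (needs root)
    apt source firefox              # fetch and unpack the packaged source (no root needed)
    cd firefox-*/
    dpkg-buildpackage -us -uc -b    # build unsigned binary packages as a normal user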
[17:45] jason1237: There are several ways to do that in native Ubuntu. [17:45] why root account? [17:46] jhutchins: i believe that native ubuntu can do that. [17:47] jhutchins: pkgsrc : cd www/firefox-esr ; make <-- ubuntu certainly do that. [17:47] jason1237: yes, you can compile applications on ubuntu [17:48] I found this jason1237 https://www.reddit.com/r/bashonubuntuonwindows/comments/eui9od/unprotip_using_netbsds_pkgsrc_within_ubuntu_lts/ [17:48] you need to add libs manually, in ubuntu. [17:48] here it works in a single command and well: "make" [17:48] Is that an issue? getting libs ? [17:49] oh god [17:49] i'm wasting my time. [17:49] I feel like we disappointed them :) [17:49] now I'm not going to be able to sleep tonight [17:49] I am so sorry, leftyfb [18:12] how would i go about making ssh logins go faster? 'markm {~} kobaz$ time ssh root@vbox-markm-64 ls' ---> real 0m0.265s 'markm {~} kobaz$ time ssh root@ch-dh.client ls' ---> real 0m5.803s [18:12] I was thinking it could be dns lookups... but i have my ip in /etc/hosts on the ch-dh box [18:13] vbox-markm-64 is debian/buster and ch-dh is ubuntu/bionic which is much-much slower [18:14] also... UseDNS no on ubuntu/bionic, doesn't make ssh any faster [18:14] what's the latest leafpad that was shipped as a deb package, and which Ubuntu had it? [18:15] https://dpaste.com/2YYMEZN7L does this have anything to do with sshd slowdowns? livepatch? [18:16] from the man page search I found that bionic had leafpad 0.8.18.something, ... and then I found that it's a link to the package. nice. [18:17] Kobaz: use "time nc -z IP PORT" to measure the time spent on setting up the tcp connection [18:17] tomreyn: 0.03 seconds [18:17] for both? [18:18] vbox-markm-64 is .006 [18:18] it's local on the lan... whereas ch-dh is on a vpn, with 30ms ping [18:18] so the network does not seem to pose an issue in either case. now try the same for both using hostnames rather than ip addresses [18:19] correct [18:19] tomreyn: that's using hostnames actually [18:19] both hostnames are in local dns zones. so it's doing lookups as well, included in the time [18:19] okay, not what i suggested but this means it's neither the network nor the name resolution - on a quick glance [18:19] correct [18:20] so do the ssh connection on localhost on both systems [18:20] i knew it wasn't a network issue [18:20] we did not, though [18:20] yeah, that's fine [18:21] what does "yeah, that's fine" respond to? [18:21] localhost ssh is slow, yeah [18:21] ... and then I found I don't even need to know which Ubuntu shipped it, and can instead just browse http://archive.ubuntu.com/ubuntu/pool/universe/l/leafpad/ . [18:21] that's fine: that you didn't know, and needed to find out [18:22] it's nice when you share relevant info, but it's also good that we established that. [18:22] so ssh localhost is slow somewhere. where? [18:22] if I were to hazard a guess, it's this snap.canonical-livepatch that kicks off when you ssh [18:22] well I guess I need to still guess a version for the signature checks though. [18:22] tomreyn: on the ch-dh box [18:23] so, login prompt comes up quick, and then when you authenticate, then there's a delay [18:23] always wondered what caused that [18:23] Kobaz: should i then ask whether it is also slow on the other system or will you check on your own? [18:23] tomreyn: it's not..
vbox-markm-64 box is completely 'normal' [18:24] so basically i'm comparing another box with a pretty typical ssh setup on debian, to a typical out-of-the-box setup for ssh on ubuntu, and ubuntu is considerably slower for authenticating ssh [18:24] Kobaz: so compare authentication mechanisms used on both systems, sshd (server) and ssh (client) configurations, and sshd versions. [18:24] both are using local pam with local user accounts [18:25] just wondering if there's like 'normal fixes' for that type of thing [18:25] like if you're doing a lot of local ssh without reverse dns, then setting UseDNS no is a big speedup... that sort of thing [18:26] tomreyn: so basically i guess i'll strace sshd for profiling and see what's taking so long === atol is now known as atol71 === atol71 is now known as atol [18:27] Kobaz: you can do this, or you can compare configurations, or you can ssh -vvv [18:27] yeah [18:29] Sven_vB: you can check which ubuntu release provides a package, and version of that, on https://packages.ubuntu.com [18:29] tomreyn, thanks! === unixlab is now known as nicoz- [18:30] Sven_vB: note this site is community maintained and not guaranteed to always have the very latest info - though it usually does. [18:31] so your method of accessing archive mirrors directly is more reliable, but also more cumbersome [18:32] yeah, currently trying to figure out how I can get apt to check the signature from http://archive.ubuntu.com/ubuntu/pool/universe/l/leafpad/leafpad_0.8.18.1-5.dsc [18:34] tomreyn: if I were to hazard a guess, it's related to this: Started snap.canonical-livepatch.canonical-livepatch.1128b499-1b1c-4efa-9ed6-65c3ef4e42f1.scope [18:34] * Sven_vB found debsig-verify [18:34] which goes to syslog any time an ssh session is opened [18:37] Kobaz, do you have livepatch running without a valid token? [18:37] Kobaz: i can't comment on that. if that's an option (production system?) you could try rebooting and see whether it makes a difference. === Tristam_ is now known as Tristam [18:42] I inherited this box... not sure how to check for a valid token [18:42] dscverify for said leafpad dsc reports "gpg: Signature made Tue Mar 15 18:34:58 2016 CET using RSA key ID 04EBE9EF" "gpg: Can't check signature: public key not found", odd, shouldn't it be signed with the Ubuntu Archive key? [18:43] Hi, I'm using a dvorak keyboard layout. At installation I selected the closest match for my keyboard, which is Spanish Latam Dvorak, but this isn't quite right, so I set my layout manually with: setxkbmap -model pc68 -layout us -variant dvorak-alt-intl. It works, but at random times it is reverted. How can I make this permanent? [18:46] mtellez, xkb changes seem to be reverted lots of times, especially when input devices are connected or disconnected. I'd try /etc/default/keyboard [18:49] Sven_vB: thanks for the quick response. I'm gonna try this. [18:56] I'm running 20.04 LTS on a M$ Surface Book 3. I'm running kernel 5.14.16-surface. I want to know how to utilize the "dtx" system to detach the tablet from the keyboard. [19:04] Firefishe: you're not using an ubuntu kernel -> you're not running ubuntu. [19:05] you could try installing a supported ubuntu version incl. kernel on this device (i do not know whether this can work), and get support here, or you could try asking in #linux [19:11] Firefishe: You can also try #linux-surface - it's unfortunate that Ubuntu on Surface is not supported here.
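For the dvorak layout question above, the persistent counterpart of that setxkbmap call lives in /etc/default/keyboard; a sketch that simply mirrors the values mtellez used (whether "pc68" is a valid model name is not verified here):

    # /etc/default/keyboard
    XKBMODEL="pc68"
    XKBLAYOUT="us"
    XKBVARIANT="dvorak-alt-intl"
    XKBOPTIONS=""

A re-login, or sudo dpkg-reconfigure keyboard-configuration (which rewrites the same file interactively), should make the setting stick across sessions.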
[19:12] Ubuntu on * is supported here, but it consists of specific kernel and userland, not something someone else put together and calls "Ubuntu". [19:17] jhutchins: Umm... Thank you. That's where I should be. [19:27] !livepatch | Kobaz [19:27] Kobaz: Canonical Livepatch is a service offered by Canonical for 64 bit 14.04 and higher installs that modifies the currently running kernel for updates without the need to restart. More information can be found at https://ubottu.com/y/livepatch and https://www.ubuntu.com/server/livepatch === sarnold_ is now known as sarnold [19:44] I'm on 20.4.3 LTS, I want to automatically upgrade and reboot my OS (even kernel) so I can leave it unattended. I've tried set this up as follows - https://paste.ubuntu.com/p/w4QpjWpfx8/ but when I logged in it says `2 updates can be applied immediately. To see these additional updates run: apt list --upgradable`. So I think it might not be [19:44] working. Am I missing something? [19:45] rautor: i think you'd usually just install the "unattended-upgrades" package [19:46] !info unattended-upgrades focal [19:46] unattended-upgrades (2.3ubuntu0.1, focal): automatic installation of security upgrades. In component main, is optional. Built by unattended-upgrades. Size 48 kB / 440 kB [19:46] tomreyn: sorry that isn't in the paste, i have done that as part of this setup (the paste, i think, shows me configuring it) [19:47] I'm following this guide: https://help.ubuntu.com/community/AutomaticSecurityUpdates [19:47] rautor: by default, it installs *security* updates automatically. you can reconfigure it to install all available updates [19:48] normally its configuration file should be /etc/apt/apt.conf.d/50-unattended-upgrades.conf (from memory) [19:51] and in there, you can uncomment additional apt sources from which updates should be installed automatically [19:51] tomreyn: ah, i thought i'd done that but perhaps not then! https://help.ubuntu.com/community/AutomaticSecurityUpdates <- with reference to this, is it `Unattended-Upgrade::Allowed-Origins` I want to change? [19:51] yes [20:19] tomreyn: thank you! that did the job! [20:31] I want to apt remove --purge all php* packages to prepare a clean install of php 8.0 . Only a few php-related packages are meant to stay. But sudo apt remove --purge php-* offers me libapache2-mod-php* libapache2-mod-php7.4* libapache2-mod-php8.0* phpmyadmin* pkg-php-tools* as well. I must be understanding the wildcard wrong. Can anyone help? [20:34] agopo: probably libapache2-mod-php and libapache2-mod-php7.4 etc all Depend: upon some of the packages that you're removing [20:34] agopo: that's probably best, different tools require different php versions [20:34] agopo: most likely the libapache- packages rely... ^^ yeah, that [20:34] agopo: so you should just remove all those other packages and provide them yourself, just like you're about to do with php [20:38] sarnold, djph But if I --purge phpmyadmin, won't my installation get destroyed, because it's removing config files as well? [20:38] reinstall php 8 pulls them back in, no? [20:38] agopo: you could save them aside, or use apt remove without the --purge.. [20:39] I'll backup /etc/phpmyadmin === not_phunyguy is now known as phunyguy [21:27] I've now done a bit more reading about Secure Boot and come to the conclusion that it's worth at least experimenting with, but not in its default configuration. 
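For rautor's question above, the relevant settings are split across two files; a sketch of the pieces that matter on focal (note the shipped file is named 50unattended-upgrades, without a .conf suffix, rather than the from-memory name given above):

    # /etc/apt/apt.conf.d/20auto-upgrades -- turns the periodic runs on
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

    # /etc/apt/apt.conf.d/50unattended-upgrades -- which pockets get installed automatically;
    # adding the "-updates" origin installs all updates, not just security ones
    Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}";
            "${distro_id}:${distro_codename}-security";
            "${distro_id}:${distro_codename}-updates";
    };
    # for the unattended reboot part of the question:
    Unattended-Upgrade::Automatic-Reboot "true";
    Unattended-Upgrade::Automatic-Reboot-Time "02:00";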
https://safeboot.dev/ seems to have the most comprehensive writeup on the subject I've found yet, though I'm going to have to do some of my own research on the AMD PSP side of things. Has [21:27] anyone here tried what safeboot.dev is suggesting? I'm particularly happy that they take the re-signing steps seriously and store keys in hardware. I have no illusions, even with SIP enabled, that this will be as good as a pixel phone or an iOS device's secure boot chain (and those took many years and many bugs to get close to good). But I do think [21:27] pushing these limits seems interesting and may have value for some threat models. [21:29] Hey all. All good? [21:29] ubercube: Do you understand what the original reason for developing secure boot was? [21:31] I think that the nuance might depend on who you ask, but the general idea of controlling what runs on a given piece of hardware would be my answer. What that means to various parties certainly differs wildly. (Queue repeats of the TCG discussions form early on) [21:31] ubercube: hopefully helpful to you https://wiki.ubuntu.com/UEFI/SecureBoot/Signing https://wiki.ubuntu.com/UEFI/SecureBoot/DKMS https://wiki.ubuntu.com/UEFI/SecureBoot [21:33] ubercube: It wasn't for the user to control. [21:33] octnun: if you don't have a ubuntu support question, offtopic chatter is in #ubuntu-offtopic ;) [21:34] sarnold: thanks. I read all those yesterday before I decided that going shim-less is the way to go in order to avoid arbitrary things being booted. [21:35] jhutchins: hence queue repeats of TCG discussions:)  or the walled garden discussions about apple. But in this case, I do seem to have some control and would like to use it :) [21:35] I'll leave aside the debate about the secret hardware/software in the PSP for now. [21:36] ubercube: I've never looked at replacing the keys in my own bios, but I thought that the shim would just use that same trust store [21:37] ubercube: do you know off-hand if I'm wrong here? [21:37] octnun: okay, ye? [21:38] sarnold: as I understand it, the microsoft shim is only needed if you use the default keys. no microsoft shim is needed if you use your own keys. it's all spelled out in quite some depth at https://safeboot.dev/ in a way that's more comprehensive than anywhere else I've found. Doesn't look like it's for the faint of heart, though. [21:39] ubercube: heh yeah.. certainly I tend to lose focus every time I look into secureboot anything [21:41] sarnold: if you /just/ want secureboot, that seems to be fairly easy depending on your mainboard. If you want it to be less wide open, you need to do much more. [21:41] sarnold, djph Returning with news concerning the php update. The cleanup was successful, only php8.0 is installed. Ampache music server, Wordpress and Nextcloud are working, too. Phpmyadmin doesn't, but I'll fix that tomorrow [21:41] Thanks for your help [21:42] agopo: woot :) [21:42] ubercube: heh, well, "standard secureboot" is dead simple, it's basically the default behaviour.. no big deal, there. it's the "I want to require the system to boot only things I sign" that I get fuzzy on the details. thanks for the link. [21:43] Hey all, I need some help. I'm getting an error trying to do an apt full-upgrade right now. It's complaining that my boot disk is full. I removed old kernels using autoremove, but the complaint remains. [21:43] My boot disk was sized when I installed Ubuntu, I just used the automatic settings for a full-disk installation on my 256GB nvme drive. 
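Putting sarnold's truncation trick and the follow-up steps together, a sketch (the 5.10.0-1057-oem version string comes from the paste above; the initrd filename is assumed, and only ever truncate a kernel you are not currently running and are about to remove anyway):

    # inside "sudo -s": zero out one old kernel's files to make room in /boot
    > /boot/System.map-5.10.0-1057-oem
    > /boot/vmlinuz-5.10.0-1057-oem
    > /boot/initrd.img-5.10.0-1057-oem   # typically the largest file; name assumed, not in the paste
    exit
    # then let apt finish the upgrade and remove the old kernel packages cleanly
    sudo apt -f install
    sudo apt full-upgrade
    sudo apt autoremove --purge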
[21:44] How can I clear up enough space to finish up this update? [21:44] sarnold: yeah that's far less straightforward, but the above seems to be steps in the right direction. [21:44] Partitioning makes sure that the space you need will be on the wrong partition. [21:44] I have an MSI laptop where I have an HDMI output. The monitor I have connected takes a while for the image to appear and sometimes it has no signal. Drivers are well installed. Can anyone give me a tip on this subject? [21:45] dakotakae: A full install takes less than 40G. [21:45] dakotakae: So you should be ok, the question is what's taking up the space on your /boot partition. [21:46] dakotakae: something like ls -lh /boot [21:47] https://pastebin.com/vKFcnb2F [21:47] dakotakae: if you've just started deleting files in /boot that will complicate uninstalling the packages [21:47] dakotakae: the best way to free up space in /boot is to truncate the files with bash's > redirections -- sudo -s , then > System.map-5.10.0-1057-oem [21:48] dakotakae: and > vmlinuz-5.10.0-1057-oem and > so on, for a specific old kernel that you will remove soon [21:48] yeah, no, I know not to touch boot directly :p [21:48] I removed old kernels the proper way. [21:50] Got it, truncating those two files helped. Thanks! [21:50] octnun: unless there are messages on this printed to dmesg -w (keep that running while conecting the external monitor) i'd guess on a firmware / hardware issue (try a bios upgrade?) [21:50] I just don't see why the boot partition is so small by default. [21:55] ok, going to try your tips. thanks === px_ is now known as px [22:01] I've got a virtualhost setup for a directory but for some reason its showing default apache2 page [22:15] robertparkerx: apache httpd virtualhosts work based on hostnames or ip address/port combinations, not directories. [22:16] robertparkerx: What do the logs say? === veemo is now known as ballfricker96 === ballfricker96 is now known as orbdoer98 === orbdoer98 is now known as veemo [22:57] Hi! What is the link to live cd .iso nly for i386? [22:57] webchat83: there isn't one for the latest version of ubuntu [22:57] For the last ver? [22:57] webchat83: https://ubuntu.com/blog/statement-on-32-bit-i386-packages-for-ubuntu-19-10-and-20-04-lts [23:00] What is the link for the old downloads? [23:02] I meant old versions? [23:03] xubuntu lubuntu and mate 18.04 i386 are EOL [23:03] webchat83: only the ubuntu releases listed in the channel topic are supported here, and none of them is both providing i386 installers and still getting security updates [23:03] Thanks  for the help! === guntbert_ is now known as guntbert [23:58] 1958046 [23:58] bug 1958046 [23:58] Bug 1958046 in apport (Ubuntu) "Cant get bug report again, if i didnt send it in 1st 2 times that i came up. Could come back all the time so latest bug reports can be easy sent." [Undecided, New] https://launchpad.net/bugs/1958046
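For the earlier "virtualhost shows the default apache2 page" question, a quick way to see how apache matched the request (a sketch; sitename.example is a placeholder):

    sudo apache2ctl -S    # lists the loaded vhosts plus the default vhost per port
    curl -H "Host: sitename.example" http://127.0.0.1/    # test name-based matching without touching DNS
    # a name-based vhost is only chosen when the Host: header matches its ServerName/ServerAlias;
    # anything else falls back to the first (default) vhost, i.e. the stock apache2 page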