=== chris14_ is now known as chris14
[01:43] arraybolt3: Ping
[01:51] Liver_K: Pong
[01:51] You never answered
[01:52] Sorry, probably got distracted.
[01:52] What was the question?
[01:52] Search up in scrollback or logs for the last time I spoke XD
[01:52] (Bah, I don't even have it in my scrollback, I switched up how I was doing IRC.)
[01:53] Oh
[01:54] 2023-02-06 21:45:35 Liver_K arraybolt3: Do you think this bug will actually get fixed?
[01:54] 2023-02-06 21:45:41 Liver_K Like in the foreseeable future?
[01:54] Was that it?
[01:54] (Found it in the logs of my old setup.)
[01:54] Yep
[01:55] In the foreseeable future, that will depend on what exactly the bug is. If it turns out that, for instance, NVIDIA dropped nvidia-uvm from the 390 driver, then no, it will probably not get fixed. If it is fixable, though, then it may take some time, but I would guess it would get fixed, especially if the change is simple enough.
[01:55] I fear that it's probably NVIDIA's fault, though, since the NVIDIA closed-source drivers are proprietary, and so there might not be anything Ubuntu can do about it.
[01:56] Hrm
[01:56] Then I will be needing a new card or a new distro
[01:56] Cards are expensive
[01:56] If it's NVIDIA's fault, then it will have to be a new card, since a new distro probably won't solve the problem.
[01:57] What you could do is *try* a new distro, and if it does work, let us know, since that will give us an important hint on whether moving forward is possible or not.
[01:57] (FWIW I'm using CUDA over here on a 1050 Ti I got off eBay for I think less than $200.)
[01:57] (So depending on your needs, it might not be that expensive.)
[02:10] arraybolt3: Nvidia keeps all of its driver versions archived, so it doesn't actually matter whether Nvidia removed it or not; for the Ubuntu package, it could easily be fixed by just packaging an older version that still works
[02:11] But I seriously doubt Nvidia would break their own driver like that
[02:12] Ubuntu probably would not be willing to ship an older driver for security reasons.
[02:12] Old drivers have old code, and old complex code usually has security vulnerabilities.
[02:12] Even if the new version completely broke functionality with its supported cards?
[02:13] new more complex code has more security vulnerabilities :)
[02:15] Liver_K: This is a tough question to answer. Usually, a regression like this (assuming it is a regression) would be taken very seriously in a component this major. But sadly, because NVIDIA's code is closed-source, it's a black box. There's no safe thing for Ubuntu to do except take NVIDIA's driver as-is.
[02:15] I think we've been shipping these ancient things unchanged for years https://launchpad.net/ubuntu/+source/nvidia-graphics-drivers-340
[02:15] I still don't think Nvidia just "removed" the nvidia-uvm module from a driver that might not even have had it in earlier versions. Even if they did, they can't be stupid enough to let all of their other kernel modules depend on it while still keeping everything technically supported and maintained
=== alkisg1 is now known as alkisg
[02:19] sarnold: That's 340...
[02:19] sure, I was just saying that we ship some ancient nvidia stuff
[02:25] Wait a second!
[02:26] Nvidia is listing that driver version for my card
[02:26] Maybe 340 will work
[02:28] I sure hope they haven't ducked that one up too
[02:29] Liver_K: oh, that's one ducked up differently https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1853977
[02:29] -ubottu:#ubuntu-server- Launchpad bug 1853977 in nvidia-graphics-drivers-340 (Ubuntu) "nvidia-340 dpkg: error: version '-' has bad syntax: revision number is empty" [Undecided, Confirmed]
[02:29] I got tired of nvidia games and went amd
[02:30] I'm not using it for gaming
[02:30] It's a headless machine
[02:36] I'm not using it for gaming either
=== chris14_ is now known as chris14
[02:56] Oh lol gotcha
[02:57] sarnold: I don't see 340 when searching for nvidia-headless
[02:58] Or any other nvidia driver package
[02:59] Liver_K: this page lists the binary packages built from that source package https://launchpad.net/ubuntu/+source/nvidia-graphics-drivers-340/340.108-0ubuntu8
=== mckl2 is now known as mckl
[14:51] Hello, I'm trying to add my public key to authorized_keys on a DigitalOcean Ubuntu 18.04 droplet, but whenever I paste my key in, it's getting truncated and not showing the full key
[14:51] is there a workaround for this? I cannot ssh into my server, I just have console access at the moment
[14:53] kevindank: I think you normally add the ssh key when provisioning the droplet. You add it somewhere on the web interface and they add it through cloud-init.
[14:54] with ed25519 the public key is much shorter - you could try that, at least as a temporary solution, and then add your own key over ssh. In the worst-case scenario, you could type in the rest, if that also gets truncated.
[14:54] effendy[m]: yes, that is correct that they allow you to add it when provisioning. However, I'm taking over someone else's droplet to assist in migrating it to a new host. I've added the key through the web interface > Settings > Security > Add key, but when I attempt to manually add it to authorized_keys, I notice it's truncated when I paste
[14:56] I used ssh-keygen on my local machine to generate the key; it's in my local .ssh directory and on the control panel, but attempting to ssh in via PuTTY or even to ftp I get a permission denied error... so in doing research I'm trying to add it to authorized_keys manually
[14:56] I don't know if it matters that the local terminal I'm using is PowerShell (to generate the ssh key)
[15:02] It shouldn't matter, as long as you use ssh-keygen.
[15:02] PuTTY needs a different format.
[15:03] okay thanks, I'm trying to just break up the paste into smaller batches
[15:03] looks like it's not truncated after doing that.
[15:03] I'm not sure why you're simply ignoring what I said about ed25519
[15:03] it's just a fraction of the length of an RSA key
[15:04] for example: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDh4wtdkqjfjKTq5HCDdExmpssdBdgL05Nl0NusIY7LN root@local.com
[15:05] But ok, if you can paste one piece at a time, I guess that's ok too...
[15:05] sorry, I missed that!
[15:05] Thanks again for your help, I'm good.
[15:10] @all Looking for somebody still using Xen on Ubuntu. I have an Ubuntu server with 20.04 LTS running as Dom0 and using openvswitch as a bridge. Up to grub 2.04 and kernel 5.15.0.57 everything runs fine. Once I upgrade the kernel and grub packages to 2.06 and 5.15.0.58 or .60, networking gets broken. Any idea?
[15:16] yosamite9999: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2002889
[15:16] -ubottu:#ubuntu-server- Launchpad bug 2002889 in linux (Ubuntu Xenial) "5.15.0-58.64 breaks xen bridge networking (pvh domU)" [High, In Progress]
[15:22] yosamite9999: so it seems you can help on that bug ^ by testing the -proposed kernel and reporting whether it works
[15:25] @sdeziel I can try even though I'm not a developer. Since the server has quite a few DomUs serving my household I need to find a slot, but tomorrow and on Sunday, ideally before noon CET, I can help.
[15:28] yosamite9999: yeah, no need to be a developer. FYI the instructions on how to enable the -proposed repo are mentioned in the bug, but for convenience: https://wiki.ubuntu.com/Testing/EnableProposed
[15:35] @sdeziel Read the instructions already and will test the 5.15.0-66.73 kernel by tomorrow. If it's working, how can or should I confirm that?
[15:36] add a comment to that bug report
[15:37] yosamite9999: yeah, adding a comment to the bug reporting your success/failure and replacing the `verification-needed-jammy` tag with either `verification-done-jammy` or `verification-failed-jammy`
[15:39] @sdeziel jammy? The server is still on focal.
[15:40] s/jammy/focal then :)
[15:41] focal HWE and jammy have the same kernel version
[15:41] but in theory one fix for both. but edit the focal tag
[15:41] ok, thx., lacking experience with that
[15:46] hmm, yeah, that's a good question re HWE kernel verification. AFAIK, the focal HWE kernel isn't yet available in focal-proposed, only the 5.4 one
[15:47] ravage: while both focal HWE and jammy GA are the same, AFAIK they are built using the target distro's toolchain. The build artifacts are not copied from jammy to focal, so in theory they are slightly different
[15:48] probably not in a meaningful way for this bug though :)
[16:38] @sdeziel well, I still have a 5.4 installed on the machine; if I understand you correctly, this one can be tested with focal, right? If so, I can do at least that.
[16:39] yosamite9999: I asked for some clarification in the bug on how to verify the focal HWE kernel, but if you can test 5.4 from focal-proposed, that would also help
[16:41] that's what I meant, I can already test the 5.4 and then, once we have clarity about the focal HWE, I can test that as well
[16:43] yosamite9999: awesome, thanks for doing all that!
[19:18] is there a way to point to an NTP source vs. installing Chrony? I don't want to run NTPD, I want my system to check in with an existing NTP server running Chrony
[19:19] ah, it's under timedatectl
[19:32] timesyncd.conf under systemd
[19:32] cool
[20:21] I ran strace -p on a project and I see this: epoll_wait(5, - it's a system service. That appears to have stopped working. I don't know what or why this is happening. This system checks email via IMAP. It happens once a month.
[20:21] The issue is I don't know how to catch when this happens
[21:12] yosamite9999: dunno if you are tracking the bug, but there is a 5.15 kernel in focal-proposed (`rmadison linux-modules-5.15.0-66-generic` confirmed it), so it should be easy for you to install and test it
[21:23] foo: a program hanging on epoll_wait() is probably just waiting for more data from the remote peer
[21:24] foo: this is jumping to conclusions very prematurely, but one common problem is stateful firewalls that discard 'stale' TCP sessions: if they don't see any traffic on a connection for ten minutes or two hours or four days or something, they'll throw away the state they've accumulated for the TCP session -- but not send RST packets to both peers. This just looks like the connection *hangs* to both peers
[21:25] foo: are these long-lived imap sessions that might have no data for ten minutes or twenty minutes or something?
[22:25] Anyone around using snmpd in an IPv6-only/-mostly environment? I have my snmpds set up to listen on IPv6 only, and there seems to be a race condition with the interface coming up.
[22:25] I have the snmpd.service set with After=network-online.target, but this doesn't seem to wait correctly for the DHCPv6 address to be active on the interface, and I get the error:
[22:25] Feb 11 06:53:52 dns snmpd[492]: Error opening specified endpoint "udp6:2001:db8:100:1::1"
[22:27] I know I could do something like add a 10-second sleep before it starts, or tell it to retry a few times with a gap in between, but this seems ugly. Why doesn't network-online.target wait until all the addresses specified in netplan are online?
[22:28] yeah, it's barely adequate :/
[22:37] blahdeblah_: I'm having trouble tracking down the references I thought I remembered :( I can't recall now if it was IP_FREEBIND or ip_nonlocal_bind that might be useful
[23:11] sarnold: Where are those options used? Can't say I've heard of them before.
[23:12] blahdeblah_: IIRC, `sysctl net.ipv4.ip_nonlocal_bind=1` also works for IPv6
[23:13] Thanks - I'll read up on that
[23:13] I mean, this whole situation wouldn't be a problem if net-snmp weren't stupid.
[23:14] If it would just reply from the address it was contacted on rather than from its default outgoing address, I wouldn't need all of this messing around.
[23:14] *ow*
[23:15] that *is* stupid :)
[23:15] I intend to write up a bit of a rant about it.
[23:17] "code from the 80s isn't always brilliant"? :)
[23:18] "35 things programmers think they know about the network?"
[23:48] I'm running Ubuntu 20.04, server version, in KVM. I have installed cloud-init via apt-get install cloud-init (no snap). Is it safe to assume that cloud-init will run at every boot?
[23:59] lemoi: why don't you use our cloud image that includes cloud-init already prepared for you?
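[Editor's note] A minimal sketch of the nonlocal-bind suggestion from the snmpd discussion above, persisted so it survives a reboot. This assumes the failure is only that the DHCPv6 address has not been assigned yet when snmpd tries to bind; the net.ipv6.ip_nonlocal_bind knob is an assumption here (only the IPv4 name appears in the log) and exists only on reasonably recent kernels. The file name 90-nonlocal-bind.conf is arbitrary.

    # Allow daemons to bind to addresses not (yet) configured on any interface.
    echo 'net.ipv4.ip_nonlocal_bind = 1' | sudo tee /etc/sysctl.d/90-nonlocal-bind.conf
    # IPv6 variant (assumed; not mentioned in the conversation above):
    echo 'net.ipv6.ip_nonlocal_bind = 1' | sudo tee -a /etc/sysctl.d/90-nonlocal-bind.conf
    # Apply all sysctl.d settings now, without rebooting.
    sudo sysctl --system

Note this only lets the bind succeed early; the listener still becomes reachable only once the address is actually assigned to the interface.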