[00:00] which screen mode is it? [00:00] there's a handful of consoles.. vga text, vesafb, something else I think.. they're all pretty limited. using a terminal emulator will almost certainly give you better results: faster, way more glyphs, more features, fewer bugs, etc [00:00] is there a command that I can run to tell me the screen mode? I will try and google the answer. [00:00] the text console is *insanely* limited [00:12] henninb: are you using the console to avoid X/Wayland overhead, or because the hardware doesn't support graphics? [00:16] yes, I was trying to avoid x/Wayland. [00:17] my hardware is 1 year old [00:25] how does one determine which device is "scsi_eh_10"? -> https://gist.github.com/doug65536/2c23a82bd7bef9fd72e1d12ce287c87d [00:26] guessing it's a USB-UAS drive? which one though? [00:28] doug16k: I don't think it's 1:1 with drives [00:29] doug16k: my laptop has five of those threads but only two drives; my Big Machine with .. uh .. fifteen?ish? drives in it has seven of those threads [00:29] ah, they are worker thread names? [00:32] ya, must be, bottom of call stack is kthread_create_worker_on_cpu. thanks [00:32] I've never taken a look at them before, either :) I've been content to see there's loads of worker threads of various sorts.. [00:34] this looks like a reeeeally old kernel image. [00:36] tomreyn: you sure? the 4.15.60 in there feels like it came from this proposed kernel https://launchpad.net/ubuntu/+source/linux/4.15.0-60.67 [00:36] tomreyn: what suggests to you it's old? [00:37] hmm, you're right, i was thinking 4.15 was replaced by 4.18 [00:37] but i guess 4.15 is GA, and i mixed it up [00:37] https://launchpad.net/ubuntu/+source/linux/4.15.0-60.67 in proposed [00:38] my bad, sorry [00:38] aha then that makes sense :) [00:39] why proposed? and not hwe? [01:09] do the .la files need to stay with the .so files, or can I delete them? [01:11] Betal: it's probably best to leave those .la files there; without them, you'll have trouble building software [01:11] sarnold: but what if I am not going to build [01:13] Betal: well, if you're *never* going to build, then that's probably fine; why bother though? === zbenjamin is now known as Guest28293 === zbenjamin_ is now known as zbenjamin [01:28] sarnold i was able to get ter-powerline-v16b.psf to work in the console, tty1. [01:28] henninb: sweet! [01:29] henninb: is that packaged for ubuntu? I don't spot it with apt-file search [01:29] no, I found it on a git repo. [01:29] bummer [01:29] anyway, nice to know there's choices :) [01:29] thanks especially for reporting back [01:30] i always appreciate getting ideas and thoughts from folks. [01:30] same here [01:30] thanks for your responses. [01:37] Hi - can someone please help me fix my audio? I rebooted because I heard some "cpu" noises in the background (very soft, but normally I don't hear that). After the reboot I have no audio at all anymore. [01:38] The "reason" as far as I can tell is that there is no longer a 'system:playback_1' and system:playback_2 in JACK. Aka, my soundcard no longer connects to JACK :/ [01:42] how to minimise shell? [01:42] opposite of ctrl shift + [01:42] ah ctrl - [01:42] ty [01:46] Hmmpf - I'm now using alsa_out to recreate a JACK client and that works!
So apparently something broke reading my .asoundrc since the last apt upgrade (this the first reboot since that) [01:51] I'm trying to install Ubuntu on a partition on one of my drives, but I keep getting the following error: "The ext4 file system creation in partition #1of SCSI7 (0,0,0) (sdc) failed." --- What can I do about this? [01:54] khanred: are there any details in a log file? another terminal or console? are there any messages in dmesg? [01:54] sarnold: i'll take a look [01:57] sarnold: these are my "important" logs - https://paste.ubuntu.com/p/Q7gF9f6p3D/ [02:02] khanred: hmm, I'm not sure those actually point out the error; the usb one *might*, if you're trying to install to usb.. [02:02] I'm not trying to install usb [02:02] khanred: the couldn't get size, and UEFI db list looks like it's probably harmless https://forums.opensuse.org/showthread.php/535324-MODSIGN-Couldn-t-get-UEFI-db-list [02:02] ok [02:03] let me send the whole logs [02:03] did you boot ubuntu in uefi mode? [02:03] yes [02:04] no partitions on sdc? [02:06] http://paste.ubuntu.com/p/2zB2w943vF/ [02:06] Full logs [02:10] "/dev/sdc1 already mounted or mount point busy" seems kind of interesting [02:12] is this in xorg or wayland? [02:14] I think xorg [02:14] it's 18.04 [02:17] khanred: what's mounted there? [02:18] oh what the hell [02:19] sdc is the flash drive i'm booting from to do the installation..... [02:19] and it's also where i've been trying to install the OS for some reason [02:19] Let me try _not_ doing that.... [02:20] khanred: that's probably worth a bug report, if you have the time and inclination :) it could probably try harder to communicate about that one :) [02:20] Alright, i'll do that [02:20] thanks! [02:23] ok so [02:23] I can see the partition I want in disks, but not when I run the installer [02:29] khanred, is it lvm [02:29] no [02:30] btw, im trying to create this partition/install on a separate disk from the one I have Windows on [02:49] this is frustrating [02:58] khanred, what are our partitiosn look like? sudo fdisk -l on paste.ubuntu.com [03:00] is sdc nvme? [03:00] its a hybrid drive [03:01] oh oke, not worth to mention immediatly [03:02] OerHeks: http://paste.ubuntu.com/p/5p5f2jwzcK/ [03:03] nvmes ought ot show up on /dev/nvme* [03:05] that's the last dime sandisk gets out of me. got a 64GB USB flash drive, dead in 2.5 months [03:06] barely used [03:06] works for maybe 20 minutes then goes dead, then all I/Os to it hang === mnemonic is now known as Guest69482 [03:11] OerHeks: Did you find anything of interest? [03:12] think for nvme you need to advance partitioning [03:12] doug16k: ouch :( [03:13] I don't have an nvme drive.. === mmidgett is now known as mTeK [04:23] hi all [04:24] so currently I'm on the adventure of trying to define my own sound/speaker configurations [04:51] de-facto: i found how to get dns proper working with network manager [04:51] https://askubuntu.com/questions/233222/how-can-i-disable-the-dns-that-network-manager-uses [04:51] ;) [05:31] Hey is there a chan for ubuntu-based distro makers? [05:31] just making something noob for our teams use [05:34] !alis [05:34] Alis is an IRC service to help you find channels. For help on using it, see "/msg Alis help list" or ask in #freenode. Example usage: "/msg Alis list http" [05:34] i have no clue about fork channels, good luck === coffeeguy is now known as zenguy [05:58] what's up? [08:24] Hi. I'm trying to ssh log in as root from a debian to a ubuntu with pubkey auth. I can do so as regular user, it works well. 
Then I copied /home/user/.ssh/authorized_key to /root/.ssh/authorized_key, I have 'PermitRootLogin prohibit-password', 'PubkeyAuthentication yes' and 'AuthorizedKeysFile .ssh/authorized_keys' in /etc/ssh/sshd_config. The permissions are good (700 for the .ssh folder, 664 for [08:24] the authorized_keys file). I did systemctl restart ssh. I activated the root account (set password, then 'passwd -u root'). No matter what, 'ssh root@m.y.i.p' still gives me 'Permission denied (publickey).' [08:26] What am I missing? [08:26] no need to put 700 on the authorized_keys file… did you chown the file? did you restart the ssh service? [08:26] MaxLanar: 664 is not good for a keyfile, it means anybody in the group can modify it, that is blocking [08:28] MaxLanar, ^ try 644; if that doesn't help, it's log reading time [08:31] Habbie: lblume: No more luck with 644 :/. Shall I pastebin the result of ssh -v ? [08:31] MaxLanar, yes, please pastebin ssh -v, and also paste logging from the -server- during your attempt [08:31] for example, if 664 was the issue, ssh -v would not tell you that [08:34] hello, i broke my network interface file by mistake. i think i broke the lo interface, now i can't get any networking [08:35] For the sshd log on the server I check /var/log/auth.log ? [08:35] how can i fix this? [08:36] MaxLanar, that would be my guess [08:36] Ool: authorized_keys is owned by root. Yes I restarted the ssh service (systemctl restart ssh) [08:37] The .ssh directory is also owned by root? [08:37] lblume: yes [08:38] Well, time for the server logs, they'll likely be interesting :) [08:40] ssh -v root@192.168.1.100 : https://paste.debian.net/1097973/ [08:41] try the username, not root? [08:41] OerHeks: the username ? [08:44] 'cat /var/log/auth |grep -v cron:session' on the server : https://paste.debian.net/1097974/ [08:45] err, i am wrong [08:46] AuthorizedKeysFile .ssh/authorized_keys, this might need a full path? [08:47] no, that's fine [08:52] MaxLanar: Permissions/ownership on the directory above .ssh (/root ?) also good? Else raise sshd's LogLevel to debug to get more details on what it does. [08:54] MaxLanar PermitRootLogin without-password [08:54] lblume: /root is 700 and owned by root [08:54] cluelessperson: That is good [08:56] MaxLanar chown root:root -R ~/.ssh && chmod 600 -R ~/.ssh [08:56] I think [08:57] Hey all [08:57] Is it possible to change root password? [08:58] Of mysql root* [08:58] Already tried using mysql documentation and mysql_secure* script. No result. [08:58] V7, are you able to log in to mysql? [08:58] Yes, with an empty password like "sudo mysql" [08:58] All in all, root persists in asking for an empty password [08:59] I thought mysql has an internal password you have to manage [09:00] i learned this yesterday - the default now is to check what user is connecting [09:00] so 'sudo mysql' just works [09:00] and then you can add a passworded account, via GRANT i suppose [09:00] Habbie: So, is it possible to change mysql root's password? [09:02] yes [09:02] V7, it is possible to make another root account with a password, or to remove the socket auth from the account you have and give it a password [09:02] V7, i recommend the first option [09:02] I don't recall how off the top of my head though [09:02] something like this https://stackoverflow.com/questions/41846000/mariadb-password-and-unix-socket-authentication-for-root [09:04] Thank you Habbie, but now it's important to create a root password for mysql [09:05] For now it has an empty password [09:05] OK, that was the dumbest error.
The log 'Could not open authorized keys '/root/.ssh/authorized_keys': No such file or directory' made me aware that my file lacked an 's' at the end.... Sorry for the disturbance and thank you all for the help. [09:06] V7, for now you have an empty password that only works if the person connecting is root [09:06] V7, that's not a problem [09:07] (I had 'authorized_keys' and 'authorized_key' in my /home/user/.ssh/ (don't remember why), then I copied the file with the wrong name to /root/.ssh :/) [09:09] good find [09:09] You can merge them into the *keys one [09:10] people, ubuntu 18.04 has PYMOL for python2 in apt, how can i install the newest PYMOL for python3 ? [09:11] no, see https://packages.ubuntu.com/search?keywords=python3-pymol [09:11] upgrade to disco? [09:12] with pip3 install ? [09:13] Hi there. [09:14] is it possible to install or have google calendar and set it up with notifications on ubuntu? [09:14] rhoks, Evolution can do that [09:14] OerHeks: apt search python3-pymol -> nothing [09:14] OerHeks: upgrade to disco? what is disco? [09:15] !disco [09:15] Ubuntu 19.04 (Disco Dingo) is the 30th release of Ubuntu, supported until January 2020. Release Notes: http://ubottu.com/y/dingo [09:15] !discoball [09:15] OerHeks: oh, so i can't install it in my current OS ha? [09:15] maybe if you build it yourself? [09:16] or see the answer of Ool [09:16] OerHeks: hmmm what file should i look for to attempt to build it myself? [09:16] perhaps asking in #python [09:17] Ool: ok [09:17] Habbie, I see. Because several other solutions on the web want you to install their own repositories and stuff. [09:17] rhoks, in general, try to avoid that indeed [09:18] Hello! On 18.04, I have the following problem with my secondary SSD: Initially, it was ext4. [09:18] I wanted to reformat it as xfs (no backup required, it's just for testing), but running [09:18] mkfs -t xfs /dev/sdb1 always gives me a kernel panic. The machine has the latest updates as of yesterday. [09:19] How can I have an XFS on that disk, please? [09:20] rhoks, i note that in my gnome preferences, i can also add a google account, but i don't know what that does [09:20] brokencycle, do you have details of the kernel panic? [09:21] I prefer not to have my OS in any way attached to various social media crap [09:22] Yes Habbie I kinda read about that, but I was unsure of giving google access to my ubuntu machine, they already monitor everything we do on our browsers and emails.. But I guess if I'm gonna start scheduling my whole life on their Calendar service I guess I could just do it and connect my local user to my google account :S [09:23] rhoks, i don't think that gives google access to your machine [09:24] Habbie, perhaps, but in the world we live in I wouldn't be surprised if a whistleblower defects from google and shows us that they somehow hack their way in when people login like that [09:25] rhoks, then, by all means, limit your exposure, and configure just the calendar in one app [09:26] rhoks by all the accounts and dealings I've had with google employees, it seems that google currently understands the responsibility of consumer data, and avoids abusing it as much as possible. [09:27] @Habbie: I don't know how to capture it. I basically only have IPMI access. [09:27] also, the types of people that develop those systems are often the type of people that can't be easily manipulated into thinking it's okay to abuse people.
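A rough sketch of one way brokencycle could capture such a panic for later analysis, assuming the stock crash-dump tooling on 18.04 (the linux-crashdump package that comes up again later in the channel); exact output wording may differ:
    sudo apt install linux-crashdump   # pulls in kdump-tools and reserves a crash kernel
    sudo reboot                        # the crashkernel= reservation only takes effect on the next boot
    kdump-config show                  # should report something like "ready to kdump"
    # after the next panic, the dump and the panic-time dmesg land under /var/crash/<timestamp>/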
[09:27] brokencycle, ok [09:27] brokencycle, anyway, a panic indicates a serious hardware or software problem [09:27] brokencycle, so, without wanting to be rude, the first question is not 'how can i please use XFS here' [09:28] the machine is brand new, and the software is just a generic Ubuntu 18.04, now 18.04.3 [09:28] It worked fine when I had ext4 on that disk, it just craps out when I try to reformat that as XFS. [09:29] brokencycle You'd have to get the machine into a state where you can reasonably run another tool or debug or trace while running that function/command [09:29] so perhaps boot a liveboot, or ram based configuration, and perform the format using that, streaming the trace data somewhere. [09:30] I am open to suggestions: I can ssh into the machine, but when the kernel panics, I'm obviously out of luck doing anything else on it. [09:30] What do you suggest? [09:31] If I do a hard reset, the machine comes up just fine, but I don't know a way forward from there. [09:32] If the kernel panic was written to disk somewhere, that would be great. Then I'd just collect that after reboot. [09:32] brokencycle So, partition the disk, write the log of the operation/trace to another partition while you format the test partition. [09:34] Running the mkfs command does not produce any output. You mean, I should strace it? [09:35] I mean, it works for ~20 seconds or so before the kernel panics. [09:36] brokencycle I'm not familiar with general kernel debugging, but I would start googling how to debug, log, dump, etc. [09:38] ok... thank you! [09:40] Yeah, I was gonna try to login to google via gnome Habbie but gnome asked for permission to everything basically (to see, edit, delete contacts and emails and whatnot)... So I'm gonna go with the evolution route. I suppose sudo apt install evolution is all thats needed to install it? [09:41] probably [09:41] some website wanted me to first install this repo ppa:gnome3-team/gnome3-staging [09:41] for some reason before installing evolution Habbie [09:42] that would give you a newer gnome and evolution i suppose [09:43] alright this is taking far too long I will get back to it tomorrow maybe, I'll use a chrome tab to schedule the day and print it for now :) [09:43] thanks for the help Habbie [09:43] np, good luck :) [09:44] Hi [09:44] is it possible for me to create multiple interfaces with different ip addresses and mac addresses on my pc towards the same vlan? [09:44] I need to "spoof" 500+ units [09:45] So instead of 500 VM i though I could use 500 "virtual nic's" ? [09:45] toffe, yes, make a bridge and tie a bunch of virtual interfaces with different MACs to it [09:45] toffe, brctl is key [09:45] Thanks [09:45] I'll look into it. Just gonna find a server who can handle this software .P [09:46] Hi [09:47] This is what i have for my firewall rule https://pastebin.com/DfJ7JrZE, i can run that script and do ufw enable and everything is okay but when i reboot the server i cannot access it via ssh!! [09:47] in those rules i tried to drop everything but ssh connections [09:54] Hi [09:54] This is what i have for my firewall rule https://pastebin.com/DfJ7JrZE, i can run that script and do ufw enable and everything is okay but when i reboot the server i cannot access it via ssh!! [09:54] in those rules i tried to drop everything but ssh connections [09:55] show iptables -L after applying those rules. [09:59] opios: did you get tomreyn's message? [10:00] no [10:01] show iptables -L after applying those rules. 
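A minimal sketch of making rules like opios's survive a reboot, assuming the iptables-persistent route that comes up just below; the firewall script path is hypothetical since the pastebin contents aren't shown here:
    sudo apt install iptables-persistent         # also installs the netfilter-persistent service
    sudo sh /root/firewall.sh                    # (hypothetical path) apply the rules once
    sudo netfilter-persistent save               # writes /etc/iptables/rules.v4 and rules.v6
    sudo systemctl enable netfilter-persistent   # reload the saved rules at every boot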
[10:03] opios: i dont think those rules are persistent if you used iptables to add them [10:04] opios: https://askubuntu.com/questions/119393/how-to-save-rules-of-the-iptables [10:27] EriC^^, damn i didnt know that i need to run the iptables commands every time [10:27] so my iptables rules are good, i just need to make sure they are saved and running after each boot [10:29] hi [10:30] opios: dunno about the rules tbh, but yeah you have to run it every time after rebooting [10:30] https://pastebin.com/DfJ7JrZE [10:30] i still get the error report popup but no crash report in /var/crash [10:38] opios: apt show iptables-persistent [10:39] B|ack0p: does the error popup provide any details? do you see any related record in journalctl -b | nc termbin.com 9999 ? [10:40] tomreyn: unfortunately no detail is shown on the error report popup [10:40] that's why it is annoying, every time i boot into ubuntu it pops up [10:40] let me check [10:40] tomreyn, yeah i went with iptables-persistent [10:40] thanks [10:41] tomreyn: https://termbin.com/7fmt [10:41] B|ack0p: what about it? [10:43] tomreyn: that's the output of your command [10:43] journalctl -b | nc termbin.com 9999 [10:44] B|ack0p: yes, right. please look for errors yourself first, then point to them on this output. [10:45] ok [10:49] Error calling StartServiceByName for org.gnome.ScreenSaver: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.gnome.ScreenSaver exited with status === tinoco is now known as rafaeldtinoco [10:50] Aug 30 13:26:40 uthink-x61 gnome-shell[1340]: Error looking up permission: GDBus.Error:org.freedesktop.portal.Error.NotFound: No entry for geolocation [10:51] there are many errors related to DBus and GDBus [10:51] what are they about? [10:53] gnome / gtk related errors are often, but not always, insignificant [10:53] are they critical errors? [10:54] i have found 1 hardware error: [10:54] Aug 30 13:26:30 uthink-x61 kernel: tpm tpm0: [Hardware Error]: Adjusting reported timeouts: A 10000->10000us B 10000->10000us C 0->752000us D 0->752000us [10:54] tpm might be thinkpad power manager? [10:54] more likely trusted platform module [10:54] do you have gnome-screensaver installed? [10:55] tomreyn: i installed gnome tweak extensions the other day in a package [10:55] i dont know if it contains the screensaver [10:55] i dont know if i have it or not [10:55] is the package "gnome-screensaver" installed? [10:55] i disabled all extensions but let me check [10:56] apt list gnome-screensaver [10:56] it either says [installed] or not [10:56] ok [10:56] gnome-screensaver/bionic,now 3.6.1-8ubuntu3 amd64 [installed,automatic] [10:57] it seems installed but i dont know how and when [10:57] maybe it was installed inside a package somehow [10:57] which desktop manager do you use? which desktop? [10:57] gnome [10:57] gnome-shell is mentioned in your logs, so i guess you use that as a desktop environment. do you use gdm as a login manager? [10:57] ubuntu 18.04 default desktop [10:58] but just to try i installed gnome session flashback [10:58] to try gnome classic on 18.04 [10:58] but i am on the default gnome at the moment [10:58] ok, this will be why you have gnome-screensaver installed. [10:58] gnome-shell doesn't use it, but -flashback may [10:58] i logged in a few times after i installed it, in gnome classic [10:59] does it cause the error popup and system freeze?
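A quick, non-destructive way to check the suspicion being developed here — whether gnome-screensaver is what the error popup is about — assuming a stock 18.04 journal; the grep patterns are just examples:
    apt list --installed 2>/dev/null | grep screensaver
    journalctl -b | grep -i 'org.gnome.ScreenSaver'   # look for the StartServiceByName failure
    ls -l /var/crash/                                 # any apport crash records written around the same time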
[10:59] i suspect this may cause the error reporting popup: Error calling StartServiceByName for org.gnome.ScreenSaver: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.gnome.ScreenSaver exited with status [10:59] which is probably due to gnome-screensaver [11:00] ...starting and failing under gnome-shell [11:00] earlier as i mentioned i faced system freeze 4 times [11:00] you did not mention "freeze" before [11:00] it didnt happen recently but it may [11:00] tomreyn: i did few days ago [11:00] and you suggested me tty [11:00] hah, well, my memory is not perfect [11:01] not happening recently but it may happen [11:01] i "suggested you tty"? [11:01] if freeze happens you suggested me sysrq and tty [11:01] to check what happens [11:01] and collect some reports etc [11:02] oh i think i'm recalling, i suggested you try blindly switching to a tty and try to ctrl-alt-del to see whether it is just a graphics issue. [11:02] but didnt happen yet [11:02] yes exactly [11:03] Just FYI: It seems to be a kernel bug in 4.15.0-58.64 because the problem went away with 5.0.0-25.26~18.04.1 [11:03] brokencycle: what bug? [11:03] The XFS kernel panic bug. [11:03] hm [11:05] I have asked about it earlier today. Short version: "mkfs.xfs /dev/sdb1" results in a kernel panic on "my" machine. [11:05] tomreyn: if i uninstall gnome flashback will i get rid of those errors related with gnome shell? [11:05] And reproducible so, I've tried almost 10 times, with different disks [11:06] Hi, what's the best way to avoid dynamic IPv6 addresses? I have configured it statically through netplan, I just want it to use the static address. I do *not* want to disable RA as it's needed for redundancy on the uplink (the router advertisements) - only need to get rid of the dynamic addresses. Happens now that bind for examples sends notifies from the dynamic address and that's really undesirable, it [11:06] brokencycle: did you report a bug? [11:06] should always send from it's static address [11:07] not yet, but it's on the list. someone pointed me to the very useful package 'linux-crashdump' [11:07] I've collected the output of that [11:07] B|ack0p: you could then uninstall gnome-screensaver (maybe you already can do so now), which should prevent the failure of gnome-screensaver starting on the gnome-shell session. which may get rid of the popup message you see. [11:08] i did apt purge gnome-session-flashback but only some kbs removed. [11:08] brokencycle: feel free to show the output so maybe we can tell whether it can be specific to your system or configuration somehow (and not a generic bug) [11:09] even screensaver not removed . i now removed screensaver manually [11:09] package was more than 100mbs how comes it only removes just some kbs [11:09] tomreyn: I need to talk to someone first, and it's several megabytes. the system is a fairly generic dell r440. [11:09] brand new, too [11:09] B|ack0p: the "gnome-flashback" package is a meta package, you'll need to remove all the packages it depends on (which are not required by other packages you wish to keep) [11:10] ... if you want it completely gone [11:10] i want complete uninstall if possible [11:10] but i cant find one by one installed packs [11:11] B|ack0p: /var/log/apt/history.log* lists the packages which were recently requested to be installed [11:11] i am not sure if DBus and GDBus errors related to flashback [11:11] me neither, but probably not [11:11] tomreyn: do you know what? 
i just got popup error report after i purged screensaver :p [11:12] without detail [11:12] Hiyas all [11:12] it usually pops up once when i boot to ubuntu [11:12] what does "popup error report" say then? [11:12] no details.. [11:12] just popup [11:12] what does it look like? [11:12] take a screen shot next time [11:12] looks like same as usual [11:12] ok if it happens after reboot i will do [11:13] and note down the time it happens [11:13] then companre to journalctl -b | nc termbin.com 9999 [11:13] ok [11:14] thx a lot [11:14] good luck [11:23] Habbie: I've set up the raspberry pi now to fake those interfaces is it veth you thought about when telling me about virtual interfaces? [11:24] toffe, i'm not sure what they would be called, sorry [11:24] Ok thanks :) [11:54] Hi [11:54] welcome MRD365 [11:54] MRD365: what can we do for you today? [11:55] I need lots of money [11:56] Hey! For some reason my VPS clock stays behind the real clock. VPS host said that their host's clock is working fine.. so now I have no idea what might be wrong. Any ideas? [11:56] which virtualization is it, which ubuntuversion is it? [11:56] Right now the time difference is like 30 minutes [11:57] is the VM's clock bound to that of the host? [11:57] Can you all hack? === Etua_ is now known as Etua [11:57] VMware I think and it's ubuntu 14.04 (yes it's very old, but we are running some legacy stuff). I'm thinking about updating to 18.04 but I'm not 100% sure yet [11:57] !ot | MRD365 [11:57] MRD365: #ubuntu is the Ubuntu support channel, for all Ubuntu-related support questions. Please register with NickServ (see /msg ubottu !register) and use #ubuntu-offtopic for other topics (though our !guidelines apply there too). Thanks! [11:57] ...as you very well know [11:58] Tomreyn: I have absolutely no idea.. but I know they are using that VMware datacenter software [11:58] Do you know METASPLOIT? [11:58] MRD365: stop [11:59] While I playin' a game via Steam, probably my graphical card crashes. [11:59] Ok I'm sorry [11:59] I don't know how it gets triggered but i got a black screen during the game and monitor says no cable connected. [11:59] howdy, how do I type a double prime using compose key on 18.04? [11:59] Diplomat: virt-what can tell you which virtualization is in use. [11:59] Since hdmi cable directly connected to graphical card, I suspect a driver crash. [11:59] Can someone help to troubleshoot it please? [11:59] !details | debouncer [11:59] debouncer: Please elaborate; your question or issue may not seem clear or detailed enough for people to help you. Please give more detailed information; for example, we might need errors, steps, relevant configuration files, Ubuntu version, and hardware information. Use a !pastebin to avoid flooding the channel. [11:59] Tomreyn: it's vmware [12:00] Am from Indonesia [12:00] are you? [12:00] MRD365: if you like to chitchat join #ubuntu-offtopic [12:00] MRD365: only ubuntu support questions here please [12:00] Ok, I just found out [12:01] Diplomat: what does this report? cat /sys/devices/system/clocksource/clocksource*/current_clocksource [12:02] tsc [12:02] Diplomat: oh i missed that you're running an !EOL version, no support here, sorry. [12:03] Lol, it's very eol [12:04] lotus|i5: https://pastebin.com/PVJcbXyG [12:05] debouncer: for gtx cards, you might try a bit higher driver version [12:05] debouncer: what shows in: ubuntu-drivers list plz? [12:06] 430 [12:06] debouncer: try to switch to 430 and reboot plz [12:07] alright [12:07] where is system backlog for graphical card? 
do you know? [12:08] journalctl -kb -1 | nc termbin.com 9999 to check + share your last kernel session's dmesg; decrease -1 further for earlier logs. --list-boots to list all boots. [12:08] i want to check while installing the new driver [12:08] dmesg -w or journalctl -kf in a separate window to run a logwatch on kernel messages [12:09] (ctrl-c to end it) [12:09] thanks === phd is now known as pav [12:10] omit the k to journalctl if you want to watch all logs [12:22] ^hello, is there anything that needs to be taken care of when using rsync with ntfs drives? [12:22] i get this error: ERROR: Warning! /bin/rm failed. [12:31] Surfer2011, that does not look like a message from rsync [12:32] rsnapshot, my bad, it uses rsync [12:33] Surfer2011, are you talking about Ubuntu standard or some non-standard ntfs drivers? ntfs-3g? [12:33] /dev/disk/by-label/LaCie /srv/dev-disk-by-label-LaCie ntfs defaults,nofail 0 2 [12:33] is the fstab entry [12:37] Hey I have one of those BIOS mobo raids that I think is just software raid, I have two identical 8TB drives I'm wanting to raid, but when I set it up with the BIOS correctly (it was seen normally by windows, so I know an OS can see all 14.7TB of space when I do a raid0 on it), my issue is when I do fdisk -l I see the raid but it's only 1.8TB in available space, any ideas or tips? [12:40] Malgorath: without multiboot it's better to not use fake raid but real soft raid: mdadm . https://help.ubuntu.com/community/FakeRaidHowto [12:40] with multiboot, I don't know [12:41] Ool, was wondering that, wondering if a raid is even worth the hassle [12:41] btw not multibooting, just had windows on it and replaced it with ubuntu 19.04 [12:42] Decided if I need windows my laptop can run that horrible OS for gaming [12:42] Surfer2011: do you have write permissions on the ntfs as your user? [12:43] (that you're running rsync as) [12:44] brb going to turn off the bios raid stuff [12:44] yes i should [12:47] i do [12:53] guys, what on earth is juju? https://packages.ubuntu.com/xenial/juju-mongo-tools3.2 [12:54] !juju [12:54] Juju is an open source devops platform created to allow rapid deployment of applications in the cloud. More info at https://juju.ubuntu.com/ [12:54] In newer versions of ubuntu it's called mongo-tools, what is this juju thing for Xenial? [12:54] aaaah [12:54] It sounded so dodgy I thought it was a scam :D [12:54] thanks lotus|i5 [12:55] awesome :) [12:55] thank you mr. juju [12:55] welcome blip99 [13:00] lotus|i5, do juju packages require extra config? I installed this package I linked, but no mongo tools show up. no mongodump etc.. [13:01] `juju-mongo-tools3.2/xenial,now 3.2.4+ds-0ubuntu1 amd64 [installed]` [13:03] anyone know of a decent one-liner to tell if I've already run `pam-auth-update --enable mkhomedir`? [13:04] Mechanismus, why? looks safe to run twice [13:05] because I have it in a salt state and I don't want that state to run every time I apply the high state [13:07] I'm trying to get my salt states set up such that I can easily tell if a node has changes to apply. I need to add an 'unless' to the mkhomedir state so that it detects whether it needs to be run.
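A sketch of the guard Mechanismus is after, assuming the salt 'unless' only needs a plain shell test (the same grep idea that comes up just below):
    # do nothing if pam_mkhomedir is already referenced in common-session
    grep -q pam_mkhomedir.so /etc/pam.d/common-session || sudo pam-auth-update --enable mkhomedir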
[13:07] diff --git a/pam.d/common-session b/pam.d/common-session [13:07] +session optional pam_mkhomedir.so [13:07] that is what i observe as a change in /etc when i run it [13:08] i trust you can 'unless' on that [13:08] yeah I was thinking about grepping for "pam_mkhomedir.so" in common-session but I wonder if there's a better way [13:09] pam-auth-update really doesn't do much more than that either [13:54] Hi, I have a laptop with an onboard intel video card and a nvidia card. I think it's not an uncommon setup. I can't get it to display two external 4k monitors. So I'm thinking of getting an external graphics card that would connect via usb -c or thunderwhatever. [13:54] Two questions 1) does this sound reasonable? [13:54] 2) what external graphics card is most likely to not give me trouble with ubuntu? [13:58] I have a directory of alias files (each file defines an alias), can't seem to get them to load automatically using bashrc something like . /path/to/dir/* [13:58] anyone know how to get them all to load? [13:58] for f in path/to/dir/* ; do . $f ; done [13:59] the example I provided loads the first found in the directory [13:59] yes [13:59] yeah loop them eh? ok..thanks! [13:59] use . "$f" [13:59] in case there's whitespace in one of the names [13:59] iffraff: https://hackernoon.com/recipe-nvidia-titan-x-as-external-gpu-on-ubuntu-laptop-9df2dfc02fc6 pretty straight forward [14:02] trench: but isn't nvidia notorious for having crappy linux drivers? [14:03] Habbie: thanks! [14:06] iffraff: try and check if it works, if it doesn't send the stuff back? [14:08] can you return video cards? I guess I kind of assumed you could not [14:10] iffraff, that's not a ubuntu support question in any shape or form :) [14:11] iffraff: When researching anything with Linux, always pay attention to the date of publication. If it is more than a 3 years old, it is probably way out-dated as the Linux community is continuously evolving and changing faster than articles can be published. As for your 4k issue, are you certain that there isn't a limitation of the graphics card on how many displays it can drive at 4k... in a laptop form factor space is a premium [14:11] and heat dissipation is a problem. You would need to consult the laptop specifications to find out what your computer is capable of. You can use an external graphics card, though you will want to verify that your laptop has the right USB Type C support for that feature. Some USB-C ports do not have the required bandwidth for the data transfer needed. [14:11] Habbie: what video cards work best with ubuntu is not relevant to ubuntu? [14:11] iffraff, i meant the 'return' question, not the rest, sorry [14:12] iffraff: As what "works best" is not a support question. It is a polling question, and it is preferred that you ask those types of questions in #ubuntu-offtopic [14:14] pragmaticenigma: so the laptop is supposed to support multiple 4k monitorns, but it is also supposed to run windows, and i believe windows has some magic driver that bridges the two video cards. Hence I'm thinking of external video card. Is the main concern when using an external video card the connection ( I see that mentioned the most )? my system does have thunderbolt 3. so that should have the bandwidth [14:19] iffraff: Nvidia is the maker of the driver for windows, and they offer a driver for Linux as well. 
It is true that sometimes the Nvidia Linux driver is a little behind in feature parity with the Windows, but that usually is seen more in the CUDA availability and prsently some of the RTX capabilities. There is no magic driver, just that system architecture can switch from the lower powered Intel graphics chip for graphics [14:19] processing to the Nivida chipset. That is a feature that I haven't seen fully implemented at this time. The volunteers here usually recommend that a user chooses either Nvidia or the Intel from the BIOS/UEFI setup and stick with one or the other. [14:21] Yes, however to get the advertised dual 4k output you have to use both. I believe [14:21] iffraff: I would start with making sure the computer is setup to use the Nvidia chipset, full time. The inability of the unit to drive both of the external monitors makes me believe the issue might be that you're running on the Intel chipset, and not the Nvidia === lotus|i5 is now known as lotuspsychje [14:22] I did switch the chipset ( or check the chipset ) both manually via cli, and via the nvidia gui. Logged off and back on etc. So I'm fairly sure I was on nvidia. [14:23] iffraff: Are you trying to drive two external monitors and the laptop screen at the same time (giving you 3 displays?) [14:25] no, I don't need the laptop monitor. That said it's possible that while testing the laptop was open. What would happen is I had a thunderbolt to hdmi splitter and it would only ever display one monitor. However, which monitor depended on the order of hookup. so I know both monitors and cables did work. [14:26] iffraff: That's a bandwidth limitation of the splitter [14:27] iffraff: to drive two displays, they'd both need to be connected independently to the laptops graphics ports [14:28] I actually tried a number of splitters, including these sort of all in one laptop docking things, like the pluggable https://www.amazon.com/Plugable-Charging-Specific-Thunderbolt-DisplayPort/dp/B0779K9DG2/ref=sr_1_5?crid=PUASATCMPDOK&keywords=pluggable+usb+3.0+docking+station&qid=1567175265&s=gateway&sprefix=pluggable%2Caps%2C275&sr=8-5 [14:28] that one is usb-c but I also tried one that was thunderbolt [14:29] yo guys, given a path to an executable on disk -- what's the best way to find out all instances of it (i.e PIDs) ? [14:29] banisterfiend: look at "man ps" [14:29] banisterfiend, 'lsof /bin/foo' might do it [14:30] pragmaticenigma lol, i mean in the C API sorry [14:41] hey guys. I bought a Dell R710 server with no OS. Downloaded Ubuntu 18.04 LTS and put it on a USB stick. Every time I try to boot the installer, after the grub menu, I get an "out of range" error on my monitor. I googled for solutions, but the vga switches in the grub menu have not worked. Does anyone have any idea what I can do? My monitor's [14:41] native res is 1920x1080 and supports 59/60/120/144hz. [14:42] Laserburn, have you tried 'nomodeset' ? [14:42] I have not [14:42] will try [14:43] Laserburn: see also the #ubuntu-server channel for likeminded server volunteers [14:43] thanks! [14:49] hello, i just installed ubuntu, and i would like to see the grub at startup, how can i do that? [14:49] Elodin: hold shift at boot [14:49] thanks [14:50] i was thinking i was crazy... 
it was booting too fast for me to see the grub [14:50] or set it into /etc/default/grub: https://help.ubuntu.com/community/Grub2 === juboxi is now known as jubo2 [14:53] https://askubuntu.com/questions/16042/how-to-get-to-the-grub-menu-at-boot-time === saint__ is now known as saint_ [15:48] I upgraded to Disco and now icons seem like windows 95. Where did the better icons go? [16:11] i dont understand this, i create header file for dm-crypt (dd if=/dev/zero bs=1049600 count=1 of=myheader) - then i encrypt sda (cryptsetup luksFormat --hash=sha512 --key-size=512 --header myheader /dev/sda) - after that command, the header file (myheader) is now 16.0M in size - why did that happen? i want the header file to stay at the minimum ~1. [16:11] 1M size why it grow to 16.0M?? [16:18] try specifying --keyfile-size [16:19] oh actually, ignore me [16:22] brutser: looks like you'll need --align-payload 0 [16:23] at least this example says so https://superuser.com/questions/823922/dm-cryptluks-can-i-have-a-separate-header-without-storing-it-on-the-luks-encry [16:30] tomreyn: i thought so too, but i tried that already [16:31] tomreyn: actually had the exact same post in front of me [16:33] Command 'tailf' not found [16:33] in my ubuntu VM machine [16:35] tailf is depreciated [16:36] tomreyn: it's easy to reproduce > create test.txt with Hello World! inside NEXT dd if=/dev/zero bs=1049600 count=1 of=myheader NEXT cryptsetup luksFormat test.txt --header myheader --align-payload 0 --cipher twofish [16:36] you tell me if myheader grew to 16.0M or not [16:37] rexwin_: Tailf is deprecated, so replace it with tail -f. Explain what you want to do so we can helpmyou further. [16:37] help you* [16:38] I got in now [16:42] usb camera /dev/video0: fd=open(/dev/video0); unplug camera; /dev/video0 file gone immediately;can I still safely close the now missing device file via close(fd)? [16:44] ausjke, yes [16:45] ausjke, what you can't do is read or write from it - that would error (but it would still be safe, just need to handle the errors) [16:53] Habbie: now I insert the usb-camera back, and a new /dev/video0 is created, so I just need open/close again. [16:54] i have a race condition, when camera unplug, before I detect that and close that fd, usb-camera is re-inserted, but /dev/video0 is still held due to not-closed-in-time, so kernel create /dev/video1 instead for the camera, and my code breaks [16:55] my program will show /dev/video0(deleted) under /proc/PID/fd for this hotplug usb device [17:04] ausjke, your code should not expect the camera to always be on 0, indeed [17:05] hii, what's the best way to find out all the PIDS that were started from a specific path? i.e how do i find all the PIDs for instances of /usr/bin/hexchat [17:06] banisterfiend, you can do a bunch of things but they will all, inside, check /proc/*/exe and /proc/*/maps and /proc/*/fd [17:07] Habbie thanks do you know how to list all PIDs in system so i can iterate through them with /proc/ banisterfiend, yes, list /proc and filter out all non-numbers [17:08] Habbie hm, that's the only way? i was hoping there'd be something like: /proc/pids or some such would just return the pids [17:09] banisterfiend, you mean something like 'pstree -p ' ? or what ? [17:09] banisterfiend, not that i know of [17:09] banisterfiend, ps just reads /proc [17:09] pstree -p also just scans /proc [17:16] Habbie alrighty thanks [17:55] I'm using gstrunner to stream video on port 5000, the command runs with no problem however the port doesn't open. 
how can I troubleshoot it. (I'm on Raspbian, by default iptables seems to be disabled) [17:56] hello [18:03] I have a PC with 16GB of RAM, it has a 2GB swap file, should I augment the size to 4GB? Is any other size preferable? I don't plan to use hibernation at the moment [18:04] mithrison: router could block port? [18:04] hmm good point [18:04] mithrison: you're having this issue on raspbian? [18:05] molinot: got an ssd or spinner? [18:06] lotuspsychje, ssd [18:06] molinot: if you're not going to use hibernation, then disable swap completely. It's pretty useless with 16G of memory [18:06] molinot: ^ [18:06] molinot: was that a manual partitioning, or did you let ubuntu setup choose? [18:07] leftyfb, I open many tabs at once, and the PC becomes slower [18:07] molinot: swap isn't going to help you === dionysus70 is now known as dionysus69 [18:08] molinot: wich ubuntu are you running? [18:08] lotuspsychje, I don't remember, I think I let it choose, but several versions ago. I have the latest 19.04 [18:09] leftyfb yes. it's an issue on raspbian [18:09] mithrison: then why are you asking in #ubuntu? [18:09] molinot: if 19.04 with ssd & 16g ram going slow, then there's a bottleneck somewhere [18:10] lotuspsychje, I think the Chrome tabs eat more and more memory [18:11] brutser: maybe that's a matter of luks 1 vs luks 2 [18:12] molinot: i always tweak a bit more with installing preload & haveged, disabled unwanted services/systemd units, clean system with bleachbit, uninstall unneeded packages [18:14] lotuspsychje, alas, I use htop and it is the tabs who eat the memory, alas, when there is few left, it doesn't eat from the wwap file... [18:14] molinot: did you compare with other browsers, lets say chromium [18:15] well, I have it installed, I could give it a tray [18:15] try [18:15] I strongly recommend keeping swap enabled; the kernel can make better memory management decisions if it has some place to stuff data that processes don't appear to be using [18:15] you only need a gigabyte or two of the stuff [18:16] perhaps I have it disabled or something [18:17] molinot: I have 75 tabs open on a laptop with 16G. Among other applications open. I'm only using 3.4G of memory [18:18] leftyfb, chrome or chromium? [18:18] chrome [18:18] and I have a ton of extensions running as well [18:19] I have 8GB eaten with around 18 tabs open [18:20] https://www.cisecurity.org/advisory/a-vulnerability-in-google-chrome-could-allow-for-arbitrary-code-execution_2019-086/ [18:20] can ubuntu be updated? [18:21] I mean can chromium maintainer update chromium browser [18:21] kokokon: You can request the maintainers to update it. [18:21] molinot: does shift-escape in your browser bring up a tool to show you which tabs are consuming memory and cpu and so on? [18:23] kokokon: chromium-browser seems to already have this fix [18:24] tomreyn mine is not the latest [18:24] tomreyn: hmm I'm not sure we do -- I don't see CVE-2019-5869 on https://launchpad.net/ubuntu/+source/chromium-browser/+changelog [18:24] i was inspecting version numbers https://packages.ubuntu.com/search?keywords=chromium-browser&exact=1 [18:24] kokokon: your ubuntu version and chromium version plz? 
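A quick way to answer the version question and compare against the advisory, assuming the deb-packaged browser (the snap has its own update channel):
    lsb_release -ds                      # Ubuntu release
    apt policy chromium-browser          # installed vs. candidate version in the archive
    chromium-browser --version
    snap info chromium 2>/dev/null | grep -E 'installed:|latest/stable'   # if the snap is in use instead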
[18:24] https://www.cisecurity.org/advisory/a-vulnerability-in-google-chrome-could-allow-for-arbitrary-code-execution_2019-086/ states "Google Chrome versions prior to 76.0.3809.132" [18:24] sarnold, 5869 isn't even present in your CVE tracker [18:24] it could be chrome only, of course [18:25] 76.0.3809.132 ubuntu 18.04 [18:25] sarnold, I have a lot of subframes in strage sites [18:25] Habbie: yeah, but that's less surprising.. we often release the browsers before we triage the cves [18:25] kokokon: ok tnx [18:25] sarnold, ah [18:25] kokokon: You can request them to update it [18:25] them who? [18:26] sarnold are you the chromium maintainer? [18:26] kokokon: no [18:26] Maintainer: Ubuntu Developers. [18:26] alas our browser guy is on vacation [18:27] so I should update it manually right? [18:29] https://chromium.googlesource.com/chromium/src/+/51abe396b9580d43d53046180a4f95fdfe5140d9 is the fixed commit [18:32] tomreyn so I just find impact file on a system, change it and restart browser? [18:33] so i did not compare version numbers properly. https://chromereleases.googleblog.com/2019/08/stable-channel-update-for-desktop_26.html states that 76.0.3809.132 is the fixed version, but according to https://launchpad.net/ubuntu/+source/chromium-browser/+changelog and https://packages.ubuntu.com/search?keywords=chromium-browser&exact=1 ubuntu has 76.0.3809.100 based builds, which are probably still affected [18:36] yes [18:36] whats the easiest way to upgrade to 132? [18:40] Hi guys, sometimes when I move my mouse pointer to the right side of my screen all my windows kind of shift to the left side (as if they were trying to hide) how does one stop this from happening? [18:40] <_KaszpiR_> google-chrome-stable/stable,now 76.0.3809.132-1 amd64 [installed] [18:40] <_KaszpiR_> The web browser from Google [18:41] <_KaszpiR_> it's available [18:41] yes chrome however chromium? [18:41] there is an arch linux package [18:41] so there must be a source code somewhere [18:44] kokokon: you could switch from the .deb packaged chromium-browser to the snap packaged browser: https://snapcraft.io/chromium [18:45] or temporarily use https://download-chromium.appspot.com/ [18:46] <_KaszpiR_> chromium beta ppa [18:47] there's also https://launchpad.net/~canonical-chromium-builds/+archive/ubuntu/stage [18:47] tomreyn: https://cdn.kernel.org/pub/linux/utils/cryptsetup/v2.1/v2.1.0-ReleaseNotes < at the top it's explained [18:48] it's some space they reserve for future use, but it's insane, 16M for a header [18:48] brutser: so it's LUKS1 vs LUKS2 indeed i see [18:48] i feel 1M is already sufficient [18:48] it's been a while that i've seen someone sainyg 15 MB extra was "insane" [18:49] yea :) [18:49] tomreyn: yes it's only for luks2 yes, do you know what are the downsides for using luks1? [18:49] security in mind ^ [18:50] i assume the release notes you pointed to will discuss it. one difference is that luks 1 only has a single copy of the header, whereas luks 2 has two. [18:51] ok === nshireTimeout is now known as nshire [19:29] on my LTS 16 server, some programs stopped accepting valid SSL certificates unless I explicitly set /etc/ssl/certs/ca-certificates.crt as the CA file. what might have gone wrong? [19:37] thaway: entirely depends on the cert(s) and the software. Any chance they're they're EV certs? Because I think those all got recently deprecated. [19:37] Same goes for just extra long certs. [19:38] what are EV certs? [19:39] the programs that had issues were RoundCube (i.e. 
PHP) and wget [19:39] at least those are the ones with problems I noticed so far [19:40] ok reading this now: https://en.wikipedia.org/wiki/Extended_Validation_Certificate [19:40] example of one of the wget calls that's giving you issues? [19:41] wget https://github.com/wikimedia/mediawiki-extensions-YouTube/archive/master.tar.gz [19:42] and RoundCube failed to connect to my own LetsEncrypt-certed host, so I don't think it's EV-related [19:42] Run this for me: dpkg -l ca-certificates [19:42] (the LE cert was certainly fresh; Thunderbird would still connect to it) [19:42] Version: 20170717~16.04.2 [19:43] (well, that's the package name on 18.04) [19:43] sarnold i have installed latest with snap however it is yet to appear in the menu [19:43] Err..... I don't have a 16.04 box in front of me, but that seems rather old... when's the last time you updated? [19:43] Command 'chromium' is available in '/snap/bin/chromium'. The command could not be located because '/snap/bin' is not included in the PATH environment variable. [19:44] kokokon: you may need to log out and log in again to get a new PATH environment variable set up [19:44] thunderbird is a mozilla project app. I think all the moz stuff uses internal ssl systems with an internal certificate store. [19:44] ok [19:44] I update regularly... [19:45] hmm, my web host might have screwed this one [19:45] sources.list contains their addresses (strato) [19:45] ca-certificates 20170717~16.04.2 is the latest on xenial [19:45] Damn. That's just ... weird. [19:45] Thanks tomreyn. [19:45] tomreyn: oh ok [19:46] OK. Hrumm. You can use a -k in the wget or curl call, if you really want, but I wonder if maybe they're mitm'ing the stuff for a load balancer or something ugly. [19:47] wget's -k seems irrelevant. installing curl to test... [19:49] curl doesn't complain about the cert but downloads an HTML file saying "you're being redirected" -_- one point for wget on this one :P [19:49] thaway: so you're saying that wget -qO /dev/null https://github.com gives an error message -which? - but wget -qO /dev/null --ca-certificate /etc/ssl/certs/ca-certificates.crt https://github.com does not? [19:51] tomreyn: correct. the error is: cannot verify github.com's certificate, issued by ‘CN=DigiCert SHA2 Extended Validation Server CA,OU=www.digicert.com,O=DigiCert Inc,C=US’: Unable to locally verify the issuer's authority. [19:51] wget is for downloading, curl is for interacting directly with REST apis. At least, that's how I think of it. [19:52] makes sense [19:52] so this suggests that the compiled-in CA path of this wget build you're using differs from /etc/ssl/certs/ca-certificates.crt [19:52] readlink -f $(which wget) [19:52] dpkg -S $(readlink -f $(which wget)) [19:52] what do those return? [19:54] tomreyn: /usr/bin/wget and package wget [19:54] wget version: 1.17.1-1ubuntu1.5 [19:54] and it's not only happening with wget you said, right? [19:55] still, can you show sha256sum /usr/bin/wget [19:55] c31d3e52ddcc0d9c32c79f43febf5e1609cce5ae60546e112163c4329f52cbd9 [19:56] I had a similar problem with RoundCube, which is a web-based email client written in PHP [19:56] i see a matching hash on a system i manage [19:57] (just disabled cert checking for that one as it connects to localhost anyway) [19:57] how do I download all files WITHOUT subdirectories using wget? [19:57] hi. i am having these errors and fails: https://paste.ubuntu.com/p/XXjrQMQHmk/ [19:57] wget -r -l1 I think. Recurse with a depth of 1 [19:57] how can i fix them?
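For gonf's question a few lines up, one combination that often does the trick, assuming the server exposes a plain index page (the URL below is a placeholder); -np and -nd are the pieces -r -l1 alone doesn't cover:
    wget -r -l1 -np -nd -A '*.tar.gz' https://example.com/dir/
    # -np: never ascend to the parent directory, -nd: don't recreate the remote directory tree locally,
    # -A: keep only files matching the pattern (drop it to take everything reachable at depth 1)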
[19:58] one or some of them is causing a system error popup every time i boot into ubuntu 18.04 [19:58] system error report popup* [19:58] recently i got that popup at 10.54pm [19:58] kyle__, that's what I thought too but it keeps downloading the subdirs anyway:| [19:59] thaway: do you have /etc/wgetrc or ~/.wgetrc or /etc/netrc or ~/.netrc ? [19:59] in the paste you can see possible fails/errors related to that error [19:59] gonf: maybe -l 0? [19:59] gonf: Ooh. Humm. I know -l0 will recurse indefinitely. Maybe I have to re-read the man page [19:59] * kyle__ hasn't read the wget one in probably 5 years [20:00] related to that time* [20:00] -l 0 means inf indeed [20:00] thaway: and is the system time set correctly? [20:00] B|ack0p: pop into #snappy -- hopefully someone will know how to track down which snap icons are missing [20:00] Wow. I forgot how bad that manpage is. [20:01] tomreyn: only /etc/wgetrc and the only uncommented line is passive_ftp=on [20:01] hmm [20:01] sarnold: is it related to snap apps? [20:02] thaway: but none of the other three files is present? [20:02] I'm gonna say it! [20:03] tomreyn: correct [20:04] sarnold: they seem to be sleeping.. [20:05] i have 7 of the same error, logged like this: [20:05] Aug 30 22:54:19 uthink-x61 gnome-software[3571]: Failed to load snap icon: local snap has no icon [20:05] and another one: Aug 30 22:54:17 uthink-x61 nautilus[3933]: Called "net usershare info" but it failed: Failed to execute child process “net” (No such file or directory) [20:06] thaway: i tested the same command i provided you earlier, the one you reported fails with "cannot verify github.com's certificate, issued by ‘CN=DigiCert SHA2 Extended Validation Server CA,OU=www.digicert.com,O=DigiCert Inc,C=US’: Unable to locally verify the issuer's authority." unless you specify --ca-certificate=/etc/ssl/certs/ca-certificates.crt . it works without errors on my test system. which puzzles me somewhat. [20:06] B|ack0p: yeah, it's a friday before a US weekend, and after the end of usual office hours in europe.. it might not be real busy [20:06] B|ack0p: that net usershare info would be easy to fix by installing samba-common-bin but if you don't have it now then you probably don't care about doing SMB networking [20:07] sarnold: i am not sure about samba [20:08] and if it is necessary i can install? [20:08] if you look at the full log here: https://termbin.com/xlo9 [20:09] at time 22.54.02 there are about 5 fails about fwupd [20:09] Aug 30 22:54:02 uthink-x61 fwupd[3593]: disabling plugin because: failed to.... [20:09] is it firewire? and why is it failing? is it an error or normal? [20:11] B|ack0p: fwupd is a firmware update tool -- I think those messages are probably normal enough [20:12] sarnold: would you have any idea as to why wget on xenial (proper latest deb, but have not checked the libs) would start throwing cert validation errors ("cannot verify github.com's certificate, [..] Unable to locally verify the issuer's authority.") unless it's run with --ca-certificate /etc/ssl/certs/ca-certificates.crt ? [20:12] this is thaway's issue, i can't think of much other than libs being replaced by third parties' now.
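One way to test the "someone else's libraries" theory before digging further — check whether the files on disk still match what the archive shipped; debsums is an extra package and an assumption here:
    sudo apt install debsums
    debsums -s wget libssl1.0.0 ca-certificates    # -s: only print files whose checksums changed
    dpkg -V wget libssl1.0.0                       # similar check using dpkg's own md5sums records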
[20:12] tomreyn: quite often an unhashed /etc/ssl/certs directory will make it hard to do validation -- I believe update-ca-certificates should reforce the process [20:13] tomreyn: indeed using someone else's libraries could give that trouble.. [20:13] but would it not use the /etc/ssl/certs/ca-certificates.crt file either way? [20:13] i mean in case the rest of /etc/ssl/certs/ is not up to date [20:14] hmm not sure there [20:14] okay, i'm not certain either really [20:14] tomreyn: but perhaps if whoever built the hypothetical other libraries didn't use a similar enough set of configure flags.. [20:14] so I couldn't download only the files, but I used the index.html to grep only files..:| not sure if there is other way.. but using this for now:x [20:14] yes [20:15] tomreyn: did you find out if the crypto libs have been replaced? [20:15] now how do we tell which libs are being used [20:15] not yet, that's next [20:15] ldd or something better [20:15] I use one third-party package source for php/mysql stuff, let me check what it was called [20:15] ldd's easy. It's not great but it's easy. :) [20:15] i just always thnk of https://catonmat.net/ldd-arbitrary-code-execution [20:16] ondrej's apache2 and php sources for xenial [20:16] tomreyn: yes :( [20:16] there was an alternative, was it readelf or objdump? [20:17] btw strace on wget indicates it doesn't touch /etc/ssl/certs ... BUT! open("/usr/lib/ssl/cert.pem", O_RDONLY) [20:17] not sure if relevant [20:18] oh and the result of the open() call is ENOENT [20:18] tomreyn: readelf -d will show the libraries that an object file wants to use, but since it's not *executing* the thing, the way ldd does, you don't get to find out what the loader will actually load for them. just the names. [20:18] ah right, this was the problem with this approach, thanks sarnold! [20:19] so we have readelf -d $(which wget) | grep NEEDED [20:19] or just ldd $(which wget) [20:20] I decided to trust the wget executable and ran the ldd one. among the results: libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f52e7279000) [20:21] thaway: ^ can you show the full output of ldd $(which wget) on a pastebin? [20:21] thaway: and then sha256sum /lib/x86_64-linux-gnu/libssl.so.1.0.0 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 and any other libs if you like [20:24] here's the full ldd result: http://dpaste.com/3NQS9G9 [20:25] and the shas: http://dpaste.com/3J542FR [20:25] thaway: does /usr/lib/ssl/cert.pem exist? [20:25] tomreyn: nope [20:25] oh you said ENOENT, sorry [20:26] np [20:26] hmm those checksums match as well here [20:29] so by default wget doesn't use the CA file (ca-certificates.crt), it uses certificates directly based on their serials. [20:29] it looks them up in /usr/lib/ssl/certs which is actually a symlink to /etc/ssl/certs [20:30] thaway: did you maybe disable some CAs you don't trust lately? [20:30] tomreyn: didn't disable any CAs. by the way /usr/lib/ssl/certs isn't a symlink on my system. [20:31] contents of the dir: http://dpaste.com/3H50XPT [20:31] thaway: interesting. can you readlink -f /usr/lib/ssl/certs/244b5494.0 [20:32] no actual certs there it seems [20:33] is ca-certificates installed? [20:33] oh yes, we had this initially [20:33] yup [20:33] btw I ran update-ca-certificates but it made no difference [20:35] thaway: according to dpkg -S /usr/lib/ssl/certs/* which package do thee files belong to? 
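Some background on the 8-hex-character names that follow, hedged since the details vary between OpenSSL versions: they are subject hashes of the PEM files, and the hashed symlinks can be regenerated if they go missing:
    # prints the hash a cert is looked up under, e.g. 244b5494 for the DigiCert EV root discussed below
    openssl x509 -noout -subject_hash -in /etc/ssl/certs/DigiCert_High_Assurance_EV_Root_CA.pem
    sudo c_rehash /etc/ssl/certs      # rebuild the <hash>.N symlinks (update-ca-certificates normally maintains them)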
[20:37] tomreyn: interesting: while the directory itself is from openssl, the contents don't belong to any package (dpkg -S says "no path found that matches pattern")
[20:37] do you have gitlab-ce installed?
[20:37] nope
[20:38] the timestamps of the files in the dir are from 2017 and 2018 though
[20:38] i don't know what it is, but something you ran or installed on this system thought it'd be a good idea to remove the symlink and place these files there
[20:39] do you have /etc/ssl/certs/244b5494.0 ?
[20:39] tomreyn: nope, the contents of the dir are these: http://dpaste.com/3H50XPT
[20:39] tomreyn: oh nvm, wait
[20:40] I thought you meant /usr/lib/ssl/certs. yes, I have that file in /etc/ssl/certs/
[20:40] and is it a symlink to DigiCert_High_Assurance_EV_Root_CA.pem in the same directory?
[20:41] tomreyn: yes
[20:42] some of those files you have (but should not have) in /usr/lib/ssl/certs would be available at this location on CentOS systems according to https://blog.hazrulnizam.com/renewing-ssl-certificates/
[20:43] i really don't know what caused things to be overwritten on your system, but personally i'd make sure i'm the only admin, and that i know what i'm doing then.
[20:43] or that all admins are legitimate and know what they are doing
[20:43] I'm the only admin, and ssh login requires a private key, i.e. password-only auth is disabled
[20:43] it should be easy to fix now that we know what the problem is. unfortunately we do not know what caused it, or whether it will happen again.
[20:45] hmm, so the problem is that these files were created in 2017/2018 and now they're out of date? that at least makes sense
[20:45] especially since I have no clue what I might have done back then :-)
=== Roy_Mustang is now known as A_D
[20:45] no, the problem is that /usr/lib/ssl/certs is a directory (not a symlink to /etc/ssl/certs), contains files of unknown origin, and
[20:46] ...that's it
[20:46] well yeah, but they were created in 2017/2018, so I was wondering why the problem only really surfaced now
[20:47] you don't know that, all you see is the timestamps of the files.
[20:47] so far it does not look like anyone falsified those, but they may just have been extracted from some tarball and moved there.
[20:47] hmm, indeed, they can be forged. but if they were really created back then by mistake, it could be that they're simply outdated now, right?
[20:48] you call it "outdated", i call it "should never exist". they're not part of any package you have installed.
[20:49] and there are no such files at this location in xenial
[20:50] tomreyn: I understand what you mean, but I mean in terms of cert validation. in other words, is it a possibility that they've been there since 2017/2018, and wget/RoundCube didn't cause problems because the certs there were "valid" until now?
[20:50] maybe.
[20:52] thanks a ton for the help btw. to be honest the server is for kind of a political project and I do expect hacking attempts, but nothing CIA-level, just script kiddies :P
[20:52] i'd not be happy if i found out that files of unknown origin defined the CAs my system is configured to trust.
[20:53] indeed. there shouldn't be a problem with deleting them and making /usr/lib/ssl/certs a symlink to /etc/ssl/certs again, right?
[20:53] (guess I'll move them away just in case rather than deleting)
[20:54] this should make wget -qO /dev/null https://github.com run without error again
[20:57] huh, can't move the directory: http://dpaste.com/0S7Y4RS
[20:57] is this a VPS? which kind of?
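The repair the two are converging on could look like the sketch below, assuming /usr/lib/ssl/certs is supposed to be a symlink to /etc/ssl/certs as it is on a stock xenial install (the .suspect name is arbitrary; moving rather than deleting keeps the files around for later inspection, as thaway suggests):

    # Set the unexplained directory aside instead of deleting it:
    sudo mv /usr/lib/ssl/certs /usr/lib/ssl/certs.suspect

    # Restore the stock layout: a symlink into the managed CA directory:
    sudo ln -s /etc/ssl/certs /usr/lib/ssl/certs

    # Regenerate the hashed links and the bundle, then re-test:
    sudo update-ca-certificates --fresh
    wget -qO /dev/null https://github.com && echo OK

As the rest of the exchange shows, the mv itself failed on this particular openvz/vzfs container, which is why the hosting provider had to get involved.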
[20:58] So I tried enabling SELinux on Ubuntu 18.04.3, and it locked everything out; my SSH connection dropped, and when I tried rebooting, it failed with a bunch of "Permission Denied" errors. Why?
[20:58] tomreyn: it's a VPS from the company STRATO AG. don't know what kinds there are.
[20:58] I used the "default" SELinux type
[20:59] thaway: virt-what (package and command of the same name) can tell you
[21:00] hmm, / is a "vzfs" mount, which apparently is a virtual fs type that allows sharing between containers
[21:00] tomreyn: openvz
[21:00] thaway: blame strato then
[21:00] I'll contact their support :)
[21:00] thanks a lot again!
[21:00] this is pretty bad really, but they have made mistakes like this before
[21:01] Is there a lot of 19.10 alpha/daily discussion on, well, pretty much anywhere, such as Ubuntuforums? I'm guessing not, but then again, I'm asking rather than guessing.
[21:02] !ubuntu+1 | thaurwylth
[21:02] thaurwylth: Eoan Ermine is the codename for Ubuntu 19.10 - Support only in #ubuntu+1
[21:03] Hobadee: so you installed and enabled selinux and assumed it would just work?
[21:04] tomreyn: Yup. I assumed the default policy would at least allow the system to boot....
[21:04] did you also disable apparmor?
[21:04] tomreyn: Yes, disabled apparmor
[21:05] purged it completely
[21:05] by default policy, do you mean you installed the package "selinux-policy-default"?
[21:05] yes, and in /etc/selinux/config I have SELINUXTYPE=default
[21:06] i see. well, i never tried, am not sure what to expect in this situation
[21:07] I would expect it to work somewhat... :-/
[21:07] there's also selinux-basics, which looks like it's meant to create a workable configuration
[21:07] but generally i wouldn't expect selinux on ubuntu to work, definitely not out of the box.
[21:08] i.e. i'd expect what you reported
[21:21] How do I create a folder matching a zip's filename, like: filename.zip -> filename/ ?
[21:23] zip a -r filename.zip filename/ i think, check the man page to be sure
[21:25] pyex: actually it's just zip -r filename.zip filename/
[21:26] hello, which file should i cat to get the cpu temperature?
[21:27] you'd probably run a command instead, such as "sensors"
[21:28] for i in *.zip; do unzip "$i" -d "$i"; done gives "error: checkdir: cannot create extraction directory: File exists"
[21:28] Elodin: actually, this might work: cat /sys/class/thermal/thermal_zone*/temp
[21:28] I have multiple zip files, and I need to extract each one into a folder with the same name
[21:28] tomreyn: there isn't such a folder, thermal_zone
[21:29] Elodin: also none which starts with this name?
[21:29] how do I extract each file into a folder with the same name?
[21:29] pyex: basename or dirname
[21:29] tomreyn: i have thermal/ but not the child
[21:30] depending on what your actual question is
[21:30] Elodin: the proper module is probably not loaded yet. install lm-sensors, run sensors-detect. if this is very recent hardware, download the "sensors-detect" perl script from lm-sensors' github instead.
[21:31] tomreyn: already did this
[21:31] Elodin: and "sensors" now reports what?
[21:32] tomreyn: it reports something, but it's probably wrong
[21:32] cluelessperson: basename
[21:32] tomreyn: lemme backref the story
[21:33] tomreyn: i just bought this cpu and it was idling at 40C in the BIOS. i decided to stress it with stress-test software and see how the temp would behave. I have been running a 100% workload on all cpus and the temperature doesn't change
[21:35] Elodin: that would also make me think you're not looking at the right sensor.
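The "File exists" error from pyex's loop comes from passing the zip's own name to -d, which collides with the zip file itself; stripping the .zip suffix for the directory name avoids that. A minimal sketch:

    # Extract every zip in the current directory into a folder named after it,
    # e.g. photos.zip -> photos/
    for i in *.zip; do
        unzip -d "${i%.zip}" "$i"
    done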
[21:36] tomreyn: there aren't any other sensors to look at
[21:37] maybe i don't have the kernel modules
[21:37] fuck, ill try another ubuntu
[21:38] sensors-detect would print which modules need to be loaded
[21:38] please watch the language
[21:38] Elodin: please mind your language
[21:38] oh sorry
[21:39] not all mainboards, chipsets, cpus / architectures may be supported out of the box, or at all. it can take some research.
[21:40] tomreyn: okay, i'm running sensors-detect again. it asks me to allow it to scan things on my motherboard
[21:40] i'm saying yes to all, right?
[21:40] i would not recommend that, better to go with the defaults
[21:40] there is --auto
[21:40] oh
[21:41] okay, it told me something about the lm78 driver and asked me to add it to /etc/modules
[21:42] so how do i load this module
[21:43] What is the name of the Ubuntu channel for other stuff (not support)?
[21:44] !ot | ava
[21:44] ava: #ubuntu is the Ubuntu support channel, for all Ubuntu-related support questions. Please register with NickServ (see /msg ubottu !register) and use #ubuntu-offtopic for other topics (though our !guidelines apply there too). Thanks!
[21:44] Elodin: like any other module: modprobe lm78
[21:44] Bashing-om, wnx
[21:45] tnx
[21:48] Hello
[21:48] speeder39_, hello
[21:49] Hi Aavar, are you in the USA?
[21:49] speeder39_, why do you ask?
=== Aavar is now known as Aavar_
=== Aavar_ is now known as Aavar
[22:02] had to reinstall ubuntu after a boot menu mess-up.. but i've forgotten how to install veracrypt so i can't open my encrypted folder
[22:03] does anyone know how?
[22:04] think i found it, copy and paste in terminal, should work
[22:34] hi all, I am testing out my applications and I am installing a lot of deb packages on specific hardware. right now I am reinstalling everything (clean install of ubuntu) whenever I want to test the whole system. Is there a way to restore the system to the original install or somehow clean the system? I can't use a VM because I need access to specific hardware. I tried KVM but I am not able to get the NIC passthrough to work. Any advice?
[22:36] ironpillow: you could probably use dpkg --get-selections and --set-selections to standardize the set of initial packages.. iirc apt install -f will then make those happen
[22:38] you could also make a smallish filesystem and use dd to just blat around the disk image
[22:41] hello, everything is saying [FAILED] during the boot process
[22:41] why would that be
[22:42] The services failed to start
[22:42] that much i know. i was wondering what the reasons for it to happen would be
[22:42] Elodin: check journalctl once you're up, hopefully you'll be able to spot the trouble
[22:42] it's happening on usb boots and disk boots
[22:42] i can't boot up
[22:43] everything FAILED
[22:43] sarnold: I will also be installing custom applications and will be creating numerous directories. This is outside of dpkg
[22:43] Elodin: Even a liveCD?
[22:43] saor: live usb, yes
[22:43] ironpillow: rsync perhaps?
[22:45] sarnold: sorry for being unclear. I am testing the install process of the application. It will install a kernel module and create directories. I want to test when and why the application might fail during its install stage
[22:46] ironpillow: oh :) and here I was suggesting the thing to let you sidestep the installer, because those are usually brittle and slow :)
[22:47] sarnold: :). My current process: 1. install new ubuntu on the hardware. 2. Install dpkg packages. 3. Install custom applications and kernel modules. 4. Make sure that custom applications and kernel modules are installed correctly each time a code change is made.
=== lborda is now known as lborda_afk
[23:59] Is there any way to add a folder to the genome sidebar in 19.04?
[23:59] *Gnome
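One way to read sarnold's --get-selections/--set-selections suggestion from the 22:34 reinstall-and-test discussion is to snapshot the package state right after a clean install and reapply it before each test run instead of reinstalling Ubuntu. A sketch, with the baseline.txt filename being arbitrary; note it only covers dpkg-managed packages, not the custom directories ironpillow mentioned, and the usual companion command is apt-get dselect-upgrade rather than apt install -f:

    # On the freshly installed reference system, record the selection state:
    dpkg --get-selections > baseline.txt

    # Later, feed that state back in and let apt act on it:
    sudo dpkg --set-selections < baseline.txt
    sudo apt-get dselect-upgrade

    # Packages added after the snapshot are not removed by this alone; they
    # would have to be marked "deinstall" first (or cleaned up by hand).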