[00:00] <deltab> which screen mode is it?
[00:00] <sarnold> there's a handful of consoles.. vga text, vesafb, something else I think.. they're all pretty limited. using a terminal emulator will almost certainly give you better results: faster, way more glyphs, more features, fewer bugs, etc
[00:00] <henninb> is there a command that I can run to tell me the screen mode? I will try and google the answer.
[00:00] <sarnold> the text console is *insanely* limited
[00:12] <deltab> henninb: are you using the console to avoid X/Wayland overhead, or because the hardware doesn't support graphics?
[00:16] <henninb> yes, I was trying to avoid x/Wayland.
[00:17] <henninb> my hardware is 1 year old
[00:25] <doug16k> how does one determine which device is "scsi_eh_10"? -> https://gist.github.com/doug65536/2c23a82bd7bef9fd72e1d12ce287c87d
[00:26] <doug16k> guessing it's a USB-UAS drive? which one though?
[00:28] <sarnold> doug16k: I don't think it's 1:1 with drives
[00:29] <sarnold> doug16k: my laptop has five of those threads but only two drives; my Big Machine with .. uh .. fifteen?ish? drives in it has seven of those threads
[00:29] <doug16k> ah, they are worker thread names?
[00:32] <doug16k> ya, must be, bottom of call stack is kthread_create_worker_on_cpu. thanks
[00:32] <sarnold> I've never taken a look at them before, either :) I've been content to see there's loads of worker threads of various sorts..
[00:34] <tomreyn> this looks like a reeeeally old kernel image.
[00:36] <sarnold> tomreyn: you sure? the 4.15.60 in there feels like it came from this proposed kernel https://launchpad.net/ubuntu/+source/linux/4.15.0-60.67
[00:36] <sarnold> tomreyn: what suggests to you it's old?
[00:37] <tomreyn> hmm, you're right, i was thinking 4.15 was replaced by 4.18
[00:37] <tomreyn> but i guess 4.15 is GA, and i mixed it up
[00:37] <OerHeks> https://launchpad.net/ubuntu/+source/linux/4.15.0-60.67 in proposed
[00:38] <tomreyn> my bad, sorry
[00:38] <sarnold> aha then that makes sense :)
[00:39] <OerHeks> why proposed? and not hwe?
[01:09] <Betal> do I need to keep the .la files alongside the .so files, or can I delete them?
[01:11] <sarnold> Betal: it's probably best to leave those .la files there; without them, you'll have trouble building software
[01:11] <Betal> sarnold: but what if I am not going to build?
[01:13] <sarnold> Betal: well, if you're *never* going to build, then that's probably fine; why bother though?
[01:28] <henninb> sarnold i was able to get ter-powerline-v16b.psf to work in the console, tty1.
[01:28] <sarnold> henninb: sweet!
[01:29] <sarnold> henninb: is that packaged for ubuntu? I don't spot it with apt-file search
[01:29] <henninb> no, I found it on a git repo.
[01:29] <sarnold> bummer
[01:29] <sarnold> anyway, nice to know there's choices :)
[01:29] <sarnold> thanks especially for reporting back
[01:30] <henninb> i always appreciate getting ideas and thoughts from folks.
[01:30] <sarnold> same here
[01:30] <henninb> thanks for your responses.
[01:37] <Aleric> Hi - can someone please help me fix my audio?  I rebooted because I heard some "cpu" noises in the background (very soft, but normally I don't hear that). After the reboot I have no audio at all anymore.
[01:38] <Aleric> The "reason" as far as I can tell is that there is no longer a 'system:playback_1' and system:playback_2 in JACK.  Aka, my soundcard no longer connects to JACK :/
[01:42] <MrPlayfair> how to minimise shell?
[01:42] <MrPlayfair> opposite of ctrl shift +
[01:42] <MrPlayfair> ah ctrl -
[01:42] <MrPlayfair> ty
[01:46] <Aleric> Hmmpf - I'm now using alsa_out to recreate a JACK client and that works! So apparently something broke reading my .asoundrc since the last apt upgrade (this is the first reboot since then)
[01:51] <khanred> I'm trying to install Ubuntu on a partition on one of my drives, but I keep getting the following error: "The ext4 file system creation in partition #1 of SCSI7 (0,0,0) (sdc) failed." --- What can I do about this?
[01:54] <sarnold> khanred: are there any details in a log file? another terminal or console? are there any messages in dmesg?
[01:54] <khanred> sarnold: i'll take a look
[01:57] <khanred> sarnold: these are my "important" logs - https://paste.ubuntu.com/p/Q7gF9f6p3D/
[02:02] <sarnold> khanred: hmm, I'm not sure those actually point out the error; the usb one *might*, if you're trying to install to usb..
[02:02] <khanred> I'm not trying to install usb
[02:02] <sarnold> khanred: the couldn't get size, and UEFI db list looks like it's probably harmless https://forums.opensuse.org/showthread.php/535324-MODSIGN-Couldn-t-get-UEFI-db-list
[02:02] <khanred> ok
[02:03] <khanred> let me send the whole logs
[02:03] <OerHeks> did you boot ubuntu in uefi mode?
[02:03] <khanred> yes
[02:04] <OerHeks> no partitions on sdc?
[02:06] <khanred> http://paste.ubuntu.com/p/2zB2w943vF/
[02:06] <khanred> Full logs
[02:10] <khanred> "/dev/sdc1 already mounted or mount point busy" seems kind of interesting
[02:12] <OerHeks> is this in xorg or wayland?
[02:14] <khanred> I think xorg
[02:14] <khanred> it's 18.04
[02:17] <sarnold> khanred: what's mounted there?
[02:18] <khanred> oh what the hell
[02:19] <khanred> sdc is the flash drive i'm booting from to do the installation.....
[02:19] <khanred> and it's also where i've been trying to install the OS for some reason
[02:19] <khanred> Let me try _not_ doing that....
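In hindsight, a quick pre-flight check would have shown that /dev/sdc was the live medium. Illustrative diagnostic commands only (the device names are taken from this session, and these weren't run in the channel):

```shell
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdc   # is the install target actually the live USB?
findmnt --source /dev/sdc1                    # shows where sdc1 is mounted, if anywhere
```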
[02:20] <sarnold> khanred: that's probably worth a bug report, if you have the time and inclination :) it could probably try harder to communicate about that one :)
[02:20] <khanred> Alright, i'll do that
[02:20] <sarnold> thanks!
[02:23] <khanred> ok so
[02:23] <khanred> I can see the partition I want in disks, but not when I run the installer
[02:29] <magic_ninja_work> khanred, is it lvm
[02:29] <khanred> no
[02:30] <khanred> btw, im trying to create this partition/install on a separate disk from the one I have Windows on
[02:49] <khanred> this is frustrating
[02:58] <OerHeks> khanred, what do your partitions look like? sudo fdisk -l  on paste.ubuntu.com
[03:00] <OerHeks> is sdc nvme?
[03:00] <khanred> its a hybrid drive
[03:01] <OerHeks> oh okay, not worth mentioning immediately
[03:02] <khanred> OerHeks: http://paste.ubuntu.com/p/5p5f2jwzcK/
[03:03] <sarnold> nvmes ought to show up as /dev/nvme*
[03:05] <doug16k> that's the last dime sandisk gets out of me. got a 64GB USB flash drive, dead in 2.5 months
[03:06] <doug16k> barely used
[03:06] <doug16k> works for maybe 20 minutes then goes dead, then all I/Os to it hang
[03:11] <khanred> OerHeks: Did you find anything of interest?
[03:12] <lotuspsychje> think for nvme you need advanced partitioning
[03:12] <sarnold> doug16k: ouch :(
[03:13] <khanred> I don't have an nvme drive..
[04:23] <cluelessperson> hi all
[04:24] <cluelessperson> so currently I'm on the adventure of trying to define my own sound/speaker configurations
[04:51] <nexiu> de-facto: i found how to get dns working properly with network manager
[04:51] <nexiu> https://askubuntu.com/questions/233222/how-can-i-disable-the-dns-that-network-manager-uses
[04:51] <nexiu> ;)
[05:31] <MJCD> Hey is there a chan for ubuntu-based distro makers?
[05:31] <MJCD> just making something noob for our teams use
[05:34] <OerHeks> !alis
[05:34] <OerHeks> i have no clue about fork channels, good luck
[05:58] <cluelessperson> what's up?
[08:24] <MaxLanar> Hi. I'm trying to ssh log in as root from a debian to a ubuntu with pubkey auth. I can do so as regular user, it works well. Then I copied /home/user/.ssh/authorized_key in /root/.ssh/authorized_key, I have 'PermitRootLogin prohibit-password', 'PubkeyAuthentication yes' and 'AuthorizedKeysFile      .ssh/authorized_keys' in /etc/ssh/sshd_config. The permissions are good (700 for .ssh folder, 664 for
[08:24] <MaxLanar> authorizedkeysfile). I did systemctl restart ssh. I activated the root account (set password, then 'passwd -u root').  No matter what 'ssh root@m.y.i.p' still gives me 'Permission denied (publickey).'
[08:26] <MaxLanar> What do I miss ?
[08:26] <Ool> no need to put 700 on the authorized_keys file… did you chown the file? did you restart the ssh service?
[08:26] <lblume> MaxLanar: 664 is not good for a keyfile, it means anybody in the group can modify it, that is blocking
[08:28] <Habbie> MaxLanar, ^ try 644; if that doesn't help, it's log reading time
[08:31] <MaxLanar> Habbie: lblume: No more luck with 644 :/. I pastebin the result with ssh -v ?
[08:31] <Habbie> MaxLanar, yes, please pastebin ssh -v, and also paste logging from the -server- during your attempt
[08:31] <Habbie> for example, if 664 was the issue, ssh -v would not tell you that
[08:34] <avernos> hello, i broke my network interface file by mistake. i think i broke lo interface now cant get any networking
[08:35] <MaxLanar> For the sshd log on the server I check /var/log/auth.log ?
[08:35] <avernos> how can i fix this?
[08:36] <Habbie> MaxLanar, that would be my guess
[08:36] <MaxLanar> Ool: authorized_keys is own by root. Yes I restarted ssh service (systemctl restart ssh)
[08:37] <lblume> The .ssh directory is also owned by root?
[08:37] <MaxLanar> lblume: yes
[08:38] <lblume> Well, time for the server logs, they'll likely be interesting :)
[08:40] <MaxLanar> ssh -v root@192.168.1.100 : https://paste.debian.net/1097973/
[08:41] <OerHeks> try the username, not root?
[08:41] <MaxLanar> OerHeks: the username ?
[08:44] <MaxLanar> 'cat /var/log/auth |grep -v cron:session' on the server : https://paste.debian.net/1097974/
[08:45] <OerHeks> err,  i am wrong
[08:46] <OerHeks> AuthorizedKeysFile      .ssh/authorized_keys, this might need a full path?
[08:47] <Habbie> no, that's fine
[08:52] <lblume> MaxLanar: Permissions/ownership on the directory above .ssh (/root ?) also good? Else raise sshd's LogLevel to debug to get more details what it does.
[08:54] <cluelessperson> MaxLanar      PermitRootLogin without-password
[08:54] <MaxLanar> lblume: /root is 700 and owned by root
[08:54] <MaxLanar> cluelessperson: That is good
[08:56] <cluelessperson> MaxLanar    chown root:root  -R ~/.ssh && chmod 600 -R ~/.ssh
[08:56] <cluelessperson> I think
[08:57] <V7> Hey all
[08:57] <V7> Is it possible to change root password?
[08:58] <V7> Of mysql root*
[08:58] <V7> Already tried using mysql documentation and mysql_secure* script. No result.
[08:58] <Habbie> V7, are you able to log in to mysql?
[08:58] <V7> Yes, with an empty password like "sudo mysql"
[08:58] <V7> All in all, root still has an empty password
[08:59] <cluelessperson> I thought mysql has an internal password you have to manage
[09:00] <Habbie> i learned this yesterday - the default now is to check what user is connecting
[09:00] <Habbie> so 'sudo mysql' just works
[09:00] <Habbie> and then you can add a passworded account, via GRANT i suppose
[09:00] <V7> Habbie: So, is it possible to change mysql root's password?
[09:02] <cluelessperson> yes
[09:02] <Habbie> V7, it is possible to make another root account with a password, or to remove the socket auth from the account you have and give it a password
[09:02] <Habbie> V7, i recommend the first option
[09:02] <cluelessperson> I don't recall how off the top of my head though
[09:02] <Habbie> something like this https://stackoverflow.com/questions/41846000/mariadb-password-and-unix-socket-authentication-for-root
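For reference, the "separate passworded root-like account" route Habbie recommends looks roughly like this in the mysql shell. A sketch only: the account name and password are placeholders, not anything from the channel or the linked answer:

```sql
-- Run inside 'sudo mysql'. Creates a passworded admin account and leaves
-- the socket-authenticated root account untouched.
CREATE USER 'admin'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```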
[09:04] <V7> Thank you Habbie, but now it's important to create a root password for mysql
[09:05] <V7> For now it has an empty password
[09:05] <MaxLanar> OK, that was the dumbest error. The log 'Could not open authorized keys '/root/.ssh/authorized_keys': No such file or directory' made me aware that my file lacked a 's' in the end.... Sorry for the disturbance and thank you all for the help.
[09:06] <Habbie> V7, for now you have an empty password that only works if the person connecting is root
[09:06] <Habbie> V7, that's not a problem
[09:07] <MaxLanar> (I had 'authorized_keys' and 'authorized_key' in my /home/user/.ssh/ (don't remember why), then I copied the file with the wrong name to /root/.ssh :/)
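Given how this thread resolved (a misnamed authorized_keys file), a small check can catch both the filename and the permission pitfalls discussed above. A minimal sketch; the `check_keys` helper and its messages are illustrative, not an official tool:

```shell
#!/bin/sh
# check_keys DIR: verify DIR contains an authorized_keys file (not a stray
# "authorized_key" missing the s) with permissions sshd will accept.
check_keys() {
  d=$1
  if [ ! -f "$d/authorized_keys" ]; then
    echo "missing $d/authorized_keys (check for a misspelled filename)"
    return 1
  fi
  perms=$(stat -c '%a' "$d/authorized_keys")
  case $perms in
    600|644) echo "perms ok ($perms)" ;;
    *) echo "bad perms: $perms (group/other-writable keys are rejected)"; return 1 ;;
  esac
}

# e.g. run as root on the server being debugged:
#   check_keys /root/.ssh
```

If this passes and login still fails, the server-side log (/var/log/auth.log, as found above) is the next stop.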
[09:09] <OerHeks> good find
[09:09] <V7> You can merge them into the *keys one
[09:10] <acresearch1> people, ubuntu 18.04 has PYMOL python2 in apt. how can i install the newest PYMOL python3?
[09:11] <OerHeks> not, see https://packages.ubuntu.com/search?keywords=python3-pymol
[09:11] <OerHeks> upgrade to disco?
[09:12] <Ool> with pip3 install ?
[09:13] <rhoks> Hi there.
[09:14] <rhoks> is it possible to install or have google calendar and set it up with notifications on ubuntu?
[09:14] <Habbie> rhoks, Evolution can do that
[09:14] <acresearch1> OerHeks: apt search python3-pymol   -> nothing
[09:14] <acresearch1> OerHeks: upgrade to disco?   what is disco?
[09:15] <OerHeks> !disco
[09:15] <V7> !discoball
[09:15] <acresearch1> OerHeks: oh, so i can't install it in my current OS ha?
[09:15] <OerHeks> maybe if you build it yourself?
[09:16] <OerHeks> or see the answer of Ool
[09:16] <acresearch1> OerHeks: hmmm    what file should i look for to attempt to build it myself?
[09:16] <Ool> perhaps asking in #python
[09:17] <acresearch1> Ool: ok
[09:17] <rhoks> Habbie, I see. Because several other solutions on the web want you to install their own repositories and stuff.
[09:17] <Habbie> rhoks, in general, try to avoid that indeed
[09:18] <brokencycle> Hello! On 18.04, I have the following problem with my secondary SSD: Initially, it was ext4.
[09:18] <brokencycle> I wanted to reformat it as xfs (no backup required, it's just for testing), but running
[09:18] <brokencycle> mkfs -t xfs /dev/sdb1 always gives me a kernel panic. The machine has the latest updates as of yesterday.
[09:19] <brokencycle> How can I have an XFS on that disk, please?
[09:20] <Habbie> rhoks, i note that in my gnome preferences, i can also add a google account, but i don't know what that does
[09:20] <Habbie> brokencycle, do you have details of the kernel panic?
[09:21] <cluelessperson> I prefer not to have my OS in anyway attached to various social media crap
[09:22] <rhoks> Yes Habbie I kinda read about that, but I was unsure of giving google access to my ubuntu machine, they already monitor everything we do on our browsers and emails.. But I guess if I'm gonna start scheduling my whole life on their Calendar service I guess I could just do it and connect my local user to my google account :S
[09:23] <Habbie> rhoks, i don't think that gives google access to your machine
[09:24] <rhoks> Habbie, perhaps, but in the world we live in I wouldn't be surprised if a whistleblower defects from google and shows us that they somehow hack their way in when people login like that
[09:25] <Habbie> rhoks, then, by all means, limit your exposure, and configure just the calendar in one app
[09:26] <cluelessperson> rhoks   by all the accounts and dealings I've had with google employees, it seems that google currently understands the responsibility of consumer data, and avoids abusing it as much as possible.
[09:27] <brokencycle> @Habbie: I don't know how to capture it. I basically only IPMI access.
[09:27] <cluelessperson> also, the types of people that develop those systems, are often the type of people that can't be easily manipulated into thinking it's okay to abuse people.
[09:27] <Habbie> brokencycle, ok
[09:27] <Habbie> brokencycle, anyway, a panic indicates a serious hardware or software problem
[09:27] <Habbie> brokencycle, so, without wanting to be rude, the first question is not 'how can i please use XFS here'
[09:28] <brokencycle> the machine is brand new, and the software is just a generic Ubuntu 18.04, now 18.04.3
[09:28] <brokencycle> It worked fine when I had ext4 on that disk, it just craps out when I try to reformat that as XFS.
[09:29] <cluelessperson> brokencycle   You'd have to get the machine into a state where you can reasonably run another tool or debug or trace while running that function/command
[09:29] <cluelessperson> so perhaps boot a liveboot, or ram based configuration, and perform the format using that, streaming the trace data somewhere.
[09:30] <brokencycle> I am open to suggestions: I can ssh into the machine, but when the kernel panics, I'm obviously out of luck doing anything else on it.
[09:30] <brokencycle> What do you suggest?
[09:31] <brokencycle> If I do a hard reset, the machine comes up just fine, but I don't know a way forward from there.
[09:32] <brokencycle> If the kernel panic was written to disk somewhere, that would be great. Then I'd just collect that after reboot.
[09:32] <cluelessperson> brokencycle   So, partition the disk, write the log of the operation/trace to another partition while you format the test partition.
[09:34] <brokencycle> Running the mkfs command does not produce any output. You mean, I should strace it?
[09:35] <brokencycle> I mean, it works for ~20 seconds or so before the kernel panics.
[09:36] <cluelessperson> brokencycle  I'm not familiar with general kernel debugging, but I would start googling how to debug, log, dump, etc.
[09:38] <brokencycle> ok... thank you!
[09:40] <rhoks> Yeah, I was gonna try to login to google via gnome Habbie but gnome asked for permission to everything basically (to see, edit, delete contacts and emails and whatnot)... So I'm gonna go with the evolution route. I suppose sudo apt install evolution is all thats needed to install it?
[09:41] <Habbie> probably
[09:41] <rhoks> some website wanted me to first install this repo ppa:gnome3-team/gnome3-staging
[09:41] <rhoks> for some reason before installing evolution Habbie
[09:42] <Habbie> that would give you a newer gnome and evolution i suppose
[09:43] <rhoks> alright this is taking far too long I will get back to it tomorrow maybe, I'll use a chrome tab to schedule the day and print it for now :)
[09:43] <rhoks> thanks for the help Habbie
[09:43] <Habbie> np, good luck :)
[09:44] <toffe> Hi
[09:44] <toffe> is it possible for me to create multiple interfaces with different ip addresses and mac addresses on my pc towards the same vlan?
[09:44] <toffe> I need to "spoof" 500+ units
[09:45] <toffe> So instead of 500 VMs i thought I could use 500 "virtual nics"?
[09:45] <Habbie> toffe, yes, make a bridge and tie a bunch of virtual interfaces with different MACs to it
[09:45] <Habbie> toffe, brctl is key
[09:45] <toffe> Thanks
[09:45] <toffe> I'll look into it. Just gonna find a server that can handle this software :P
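One way to script Habbie's suggestion without creating 500 interfaces by hand is to generate the `ip link` commands. A sketch under stated assumptions: the parent interface name (`eth0`), the `mvN` naming, and the locally-administered `02:` MAC prefix are all mine, and macvlan is one mechanism that fits, though the channel didn't settle on a name for the interface type:

```shell
#!/bin/sh
# gen_links N: print 'ip link' commands that would create N macvlan
# interfaces with distinct locally-administered MAC addresses.
# Pipe the output to 'sudo sh' to actually create them.
gen_links() {
  n=$1
  i=1
  while [ "$i" -le "$n" ]; do
    printf 'ip link add mv%d link eth0 address 02:00:00:00:%02x:%02x type macvlan mode bridge\n' \
      "$i" "$((i / 256))" "$((i % 256))"
    i=$((i + 1))
  done
}

gen_links 3
```

Generating commands rather than running them directly makes it easy to review the 500-line plan before touching the network.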
[09:46] <opios> Hi
[09:47] <opios> This is what i have for my firewall rule https://pastebin.com/DfJ7JrZE, i can run that script and do ufw enable and everything is okay but when i reboot the server i cannot access it via ssh!!
[09:47] <opios> in those rules i tried to drop everything but ssh connections
[09:55] <tomreyn> show iptables -L after applying those rules.
[09:59] <geirha> opios: did you get tomreyn's message?
[10:00] <opios> no
[10:03] <EriC^^> opios: i dont think those rules are persistent if you used iptables to add them
[10:04] <EriC^^> opios: https://askubuntu.com/questions/119393/how-to-save-rules-of-the-iptables
[10:27] <opios> EriC^^, damn i didnt know that i need to run the iptables commands every time
[10:27] <opios> so my iptables rules are good i just need to make sure they are saved and running after each boot
[10:29] <B|ack0p> hi
[10:30] <EriC^^> opios: dunno about the rules tbh, but yeah you have to run it every time after rebooting
[10:30] <opios> https://pastebin.com/DfJ7JrZE
[10:30] <B|ack0p> i still get error report popup but no crash report in /var/crash
[10:38] <tomreyn> opios: apt show iptables-persistent
[10:39] <tomreyn> B|ack0p: does the error popup provide any details? do you see any related record in   journalctl -b | nc termbin.com 9999  ?
[10:40] <B|ack0p> tomreyn: unfortunately no detail shown on the error report popup
[10:40] <B|ack0p> that's why it's annoying: every time i boot into ubuntu it pops up
[10:40] <B|ack0p> let me check
[10:40] <opios> tomreyn, yeah i went with iptables-persistent
[10:40] <opios> thanks
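To make a ruleset survive reboots, the iptables-persistent route settled on above goes roughly like this. A command sketch (assuming Debian/Ubuntu paths), not something run in the session:

```shell
sudo apt install iptables-persistent    # offers to save the current rules during install
# after later changes to the ruleset, re-save with either:
sudo netfilter-persistent save
# or write the dump yourself:
sudo iptables-save | sudo tee /etc/iptables/rules.v4
```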
[10:41] <B|ack0p> tomreyn: https://termbin.com/7fmt
[10:41] <tomreyn> B|ack0p: what about it?
[10:43] <B|ack0p> tomreyn: that's the output of your command
[10:43] <B|ack0p> journalctl -b | nc termbin.com 9999
[10:44] <tomreyn> B|ack0p: yes, right. please look for errors yourself first, then point to them on this output.
[10:45] <B|ack0p> ok
[10:49] <B|ack0p> Error calling StartServiceByName for org.gnome.ScreenSaver: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.gnome.ScreenSaver exited with status
[10:50] <B|ack0p> Aug 30 13:26:40 uthink-x61 gnome-shell[1340]: Error looking up permission: GDBus.Error:org.freedesktop.portal.Error.NotFound: No entry for geolocation
[10:51] <B|ack0p> there are many errors related to DBus and GDBus
[10:51] <B|ack0p> what are they about?
[10:53] <tomreyn> gnome / gtk related errors are often, but not always, insignificant
[10:53] <B|ack0p> are they critical errors?
[10:54] <B|ack0p> i have found 1 hardware error:
[10:54] <B|ack0p> Aug 30 13:26:30 uthink-x61 kernel: tpm tpm0: [Hardware Error]: Adjusting reported timeouts: A 10000->10000us B 10000->10000us C 0->752000us D 0->752000us
[10:54] <B|ack0p> tpm might be thinkpad power manager?
[10:54] <tomreyn> more likley trusted platform module
[10:54] <tomreyn> do you have gnome-screensaver installed?
[10:55] <B|ack0p> tomreyn: i installed gnome tweak extensions the other day in a package
[10:55] <B|ack0p> i dont know if it contains screensaver
[10:55] <B|ack0p> i dont know if i have or not
[10:55] <tomreyn> is the package "gnome-screensaver" installed?
[10:55] <B|ack0p> i disabled all extensions but let me check
[10:56] <tomreyn> apt list gnome-screensaver
[10:56] <tomreyn> it either says [installed] or not
[10:56] <B|ack0p> ok
[10:56] <B|ack0p> gnome-screensaver/bionic,now 3.6.1-8ubuntu3 amd64 [installed,automatic]
[10:57] <B|ack0p> it seems installed but i dont know how and when
[10:57] <B|ack0p> maybe installed inside package somehow
[10:57] <tomreyn> which desktop manager do you use? which desktop?
[10:57] <B|ack0p> gnome
[10:57] <tomreyn> gnome-shell is mentioned in your logs, so i guess you use that as a desktop environment. do you use gdm as a login manager?
[10:57] <B|ack0p> ubuntu 18.04 default desktop
[10:58] <B|ack0p> but just to try i installed gnome session flashback
[10:58] <B|ack0p> to try gnome classic on 18.04
[10:58] <B|ack0p> but i am on default gnome at the moment
[10:58] <tomreyn> ok, this will be why you have gnome-screensaver installed.
[10:58] <tomreyn> gnome-shell doesn't use it, but -flashback may
[10:58] <B|ack0p> i logged in few times after i installed in gnome classic
[10:59] <B|ack0p> does it cause error popup and system freeze?
[10:59] <tomreyn> i suspect this may cause the error reporting popup: <B|ack0p> Error calling StartServiceByName for org.gnome.ScreenSaver: GDBus.Error:org.freedesktop.DBus.Error.Spawn.ChildExited: Process org.gnome.ScreenSaver exited with status
[10:59] <tomreyn> which is probably due to gnome-screensaver
[11:00] <tomreyn> ...starting and failing under gnome-shell
[11:00] <B|ack0p> earlier as i mentioned i faced system freeze 4 times
[11:00] <tomreyn> you did not mention "freeze" before
[11:00] <B|ack0p> it didnt happen recently but it may
[11:00] <B|ack0p> tomreyn: i did few days ago
[11:00] <B|ack0p> and you suggested me tty
[11:00] <tomreyn> hah, well, my memory is not perfect
[11:01] <B|ack0p> not happening recently but it may happen
[11:01] <tomreyn> i "suggested you tty"?
[11:01] <B|ack0p> if freeze happens you suggested me sysrq and tty
[11:01] <B|ack0p> to check what happens
[11:01] <B|ack0p> and collect some reports etc
[11:02] <tomreyn> oh i think i'm recalling, i suggested you try blindly switching to a tty and try to ctrl-alt-del to see whether it is just a graphics issue.
[11:02] <B|ack0p> but didnt happen yet
[11:02] <B|ack0p> yes exactly
[11:03] <brokencycle> Just FYI: It seems to be a kernel bug in 4.15.0-58.64 because the problem went away with 5.0.0-25.26~18.04.1
[11:03] <B|ack0p> brokencycle: what bug?
[11:03] <brokencycle> The XFS kernel panic bug.
[11:03] <B|ack0p> hm
[11:05] <brokencycle> I have asked about it earlier today. Short version: "mkfs.xfs /dev/sdb1" results in a kernel panic on "my" machine.
[11:05] <B|ack0p> tomreyn: if i uninstall gnome flashback will i get rid of those errors related with gnome shell?
[11:05] <brokencycle> And reproducible so, I've tried almost 10 times, with different disks
[11:06] <freakynl> Hi, what's the best way to avoid dynamic IPv6 addresses? I have configured it statically through netplan, I just want it to use the static address. I do *not* want to disable RA as it's needed for redundancy on the uplink (the router advertisements) - only need to get rid of the dynamic addresses. Happens now that bind for example sends notifies from the dynamic address and that's really undesirable, it
[11:06] <tomreyn> brokencycle: did you report a bug?
[11:06] <freakynl> should always send from its static address
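freakynl's question goes unanswered in the log. For what it's worth, one kernel-level approach that matches the requirement, keeping RA-learned routes while not forming SLAAC addresses, is the per-interface `autoconf` sysctl. A sketch; the interface name is an assumption and this was not confirmed in the channel:

```shell
# accept_ra=1 keeps processing router advertisements (default routes),
# autoconf=0 stops forming SLAAC addresses from the advertised prefix.
sudo sysctl -w net.ipv6.conf.eth0.accept_ra=1
sudo sysctl -w net.ipv6.conf.eth0.autoconf=0
# persist in a file under /etc/sysctl.d/ once verified
```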
[11:07] <brokencycle> not yet, but it's on the list. someone pointed me to the very useful package 'linux-crashdump'
[11:07] <brokencycle> I've collected the output of that
[11:07] <tomreyn> B|ack0p: you could then uninstall gnome-screensaver (maybe you already can do so now), which should prevent the failure of gnome-screensaver starting on the gnome-shell session. which may get rid of the popup message you see.
[11:08] <B|ack0p> i did apt purge gnome-session-flashback but only a few KB were removed.
[11:08] <tomreyn> brokencycle: feel free to show the output so maybe we can tell whether it can be specific to your system or configuration somehow (and not a generic bug)
[11:09] <B|ack0p> even the screensaver wasn't removed. i've now removed it manually
[11:09] <B|ack0p> the package was more than 100 MB, how come it only removed a few KB?
[11:09] <brokencycle> tomreyn: I need to talk to someone first, and it's several megabytes. the system is a fairly generic dell r440.
[11:09] <brokencycle> brand new, too
[11:09] <tomreyn> B|ack0p: the "gnome-flashback" package is a meta package, you'll need to remove all the packages it depends on (which are not required by other packages you wish to keep)
[11:10] <tomreyn> ... if you want it completely gone
[11:10] <B|ack0p> i want complete uninstall if possible
[11:10] <B|ack0p> but i cant find one by one installed packs
[11:11] <tomreyn> B|ack0p: /var/log/apt/history.log* lists the packages which were recently requested to be installed
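Those history logs can be mined for the package names with a short pipeline. A minimal sketch; the `list_installed` helper name is mine, and it only assumes the `Install: pkg (version), pkg (version), ...` stanza format of apt's history log:

```shell
#!/bin/sh
# list_installed LOGFILE: print one package per line from the 'Install:'
# lines of an apt history log (the format used by /var/log/apt/history.log).
list_installed() {
  sed -n 's/^Install: //p' "$1" \
    | sed 's/([^)]*)//g' \
    | tr ',' '\n' \
    | awk '{print $1}' \
    | sed '/^$/d'
}

# e.g. list_installed /var/log/apt/history.log
```

The resulting list can then be fed to `apt purge` (after reviewing it for packages you want to keep).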
[11:11] <B|ack0p> i am not sure if DBus and GDBus errors related to flashback
[11:11] <tomreyn> me neither, but probably not
[11:11] <B|ack0p> tomreyn: do you know what? i just got popup error report after i purged screensaver :p
[11:12] <B|ack0p> without detail
[11:12] <BluesKaj> Hiyas all
[11:12] <B|ack0p> it usually pops up once when i boot to ubuntu
[11:12] <tomreyn> what does "popup error report" say then?
[11:12] <B|ack0p> no details..
[11:12] <B|ack0p> just popup
[11:12] <tomreyn> what does it look like?
[11:12] <tomreyn> take a screen shot next time
[11:12] <B|ack0p> looks like same as usual
[11:12] <B|ack0p> ok if it happens after reboot i will do
[11:13] <tomreyn> and note down the time it happens
[11:13] <tomreyn> then compare to    journalctl -b | nc termbin.com 9999
[11:13] <B|ack0p> ok
[11:14] <B|ack0p> thx a lot
[11:14] <tomreyn> good luck
[11:23] <toffe> Habbie: I've set up the raspberry pi now to fake those interfaces. is it veth you thought of when telling me about virtual interfaces?
[11:24] <Habbie> toffe, i'm not sure what they would be called, sorry
[11:24] <toffe> Ok thanks :)
[11:54] <MRD365> Hi
[11:54] <lotuspsychje> welcome MRD365
[11:54] <lotuspsychje> MRD365: what can we do for you today?
[11:55] <MRD365> I need lots of money
[11:56] <Diplomat> Hey! For some reason my VPS clock stays behind the real clock. VPS host said that their host's clock is working fine.. so now I have no idea what might be wrong. Any ideas?
[11:56] <tomreyn> which virtualization is it, which ubuntu version is it?
[11:56] <Diplomat> Right now the time difference is like 30 minutes
[11:57] <tomreyn> is the VM's clock bound to that of the host?
[11:57] <MRD365> Can you all hack?
[11:57] <Diplomat> VMware I think and it's ubuntu 14.04 (yes it's very old, but we are running some legacy stuff). I'm thinking about updating to 18.04 but I'm not 100% sure yet
[11:57] <tomreyn> !ot | MRD365
[11:57] <tomreyn> ...as you very well know
[11:58] <Diplomat> Tomreyn: I have absolutely no idea.. but I know they are using that VMware datacenter software
[11:58] <MRD365> Do you know METASPLOIT?
[11:58] <tomreyn> MRD365: stop
[11:59] <debouncer> While I was playing a game via Steam, my graphics card probably crashed.
[11:59] <MRD365> Ok I'm sorry
[11:59] <debouncer> I don't know how it gets triggered but i got a black screen during the game and monitor says no cable connected.
[11:59] <anddam> howdy, how do I type a double prime using compose key on 18.04?
[11:59] <tomreyn> Diplomat: virt-what can tell you which virtualization is in use.
[11:59] <debouncer> Since the hdmi cable is directly connected to the graphics card, I suspect a driver crash.
[11:59] <debouncer> Can someone help to troubleshoot it please?
[11:59] <lotus|i5> !details | debouncer
[11:59] <Diplomat> Tomreyn: it's vmware
[12:00] <MRD365> Am from Indonesia
[12:00] <MRD365> are you?
[12:00] <M_aD> MRD365: if you like to chitchat join #ubuntu-offtopic
[12:00] <lotus|i5> MRD365: only ubuntu support questions here please
[12:00] <MRD365> Ok, I just found out
[12:01] <tomreyn> Diplomat: what does this report?   cat /sys/devices/system/clocksource/clocksource*/current_clocksource
[12:02] <Diplomat> tsc
[12:02] <tomreyn> Diplomat: oh i missed that you're running an !EOL version, no support here, sorry.
[12:03] <Diplomat> Lol, it's very eol
[12:04] <debouncer> lotus|i5: https://pastebin.com/PVJcbXyG
[12:05] <lotus|i5> debouncer: for gtx cards, you might try a bit higher driver version
[12:05] <lotus|i5> debouncer: what shows in: ubuntu-drivers list plz?
[12:06] <debouncer> 430
[12:06] <lotus|i5> debouncer: try to switch to 430 and reboot plz
[12:07] <debouncer> alright
[12:07] <debouncer> where is system backlog for graphical card? do you know?
[12:08] <tomreyn> journalctl -kb -1 | nc termbin.com 9999    to check + share your last kernel session's dmesg; decrease -1 further for earlier logs. --list-boots  to list all boots.
[12:08] <debouncer> i want to check while installing the new driver
[12:08] <tomreyn> dmesg -w   or  journalctl -kf   in a separate window to run a logwatch on kernel messages
[12:09] <tomreyn> (ctrl-c to end it)
[12:09] <debouncer> thanks
[12:10] <tomreyn> omit the k to journalctl if you want to watch all logs
[12:22] <Surfer2011> hello, is there anything that needs to be taken care of when using rsync with ntfs drives?
[12:22] <Surfer2011> i get this error:  ERROR: Warning! /bin/rm failed.
[12:31] <Habbie> Surfer2011, that does not look like a message from rsync
[12:32] <Surfer2011> rsnapshot, my bad, it uses rsync
[12:33] <pav> Surfer2011, are you talking about the Ubuntu standard driver or some non-standard ntfs driver? ntfs-3g?
[12:33] <Surfer2011> /dev/disk/by-label/LaCie /srv/dev-disk-by-label-LaCie ntfs defaults,nofail 0 2
[12:33] <Surfer2011> is the fstab entry
[12:37] <Malgorath> Hey, I have one of those BIOS mobo raids that I think is just software raid. I have two identical 8TB drives I'm wanting to raid, but when I set it up with the BIOS correctly (it was seen normally by windows, so I know an OS can see all 14.7TB of space when I do a raid0 on it), my issue is when I do fdisk -l I see the raid but it's only 1.8TB in available space. any ideas or tips?
[12:40] <Ool> Malgorath: without multiboot it's better to not use fake raid but real soft raid: mdadm. https://help.ubuntu.com/community/FakeRaidHowto
[12:40] <Ool> with multiboot, I don't know
[12:41] <Malgorath> Ool, was wondering that, wondering if a raid is even worth the hassle
[12:41] <Malgorath> btw not multibooting, just had windows on it and replaced it with ubuntu 19.04
[12:42] <Malgorath> Decided if I need windows my laptop can run that horrible OS for gaming
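A minimal sketch of the mdadm route Ool points at, wrapped in a function that is defined but never invoked here. The device names /dev/sda and /dev/sdb are hypothetical (check lsblk first), and every command inside destroys data on those disks. As an aside: 1.8TB visible out of a 14.7TB array is also the classic 2TiB display/partition cap of MBR and of older fdisk builds, so checking that the array gets a GPT label is worthwhile too.

```shell
# Sketch only, NOT invoked: plain mdadm RAID0 in place of BIOS fake RAID.
# ASSUMPTIONS: disks are /dev/sda and /dev/sdb (hypothetical), fake-RAID
# metadata already disabled in the BIOS. Destroys all data on both disks.
mdadm_raid0_sketch() {
  sudo mdadm --zero-superblock /dev/sda /dev/sdb   # wipe leftover RAID metadata
  sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
  sudo mkfs.ext4 /dev/md0                          # filesystem straight on the array
  sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
  sudo update-initramfs -u                         # so the array assembles at boot
}
```

Putting the filesystem directly on /dev/md0 sidesteps partition-table size limits entirely; if a partition table is wanted, it should be GPT.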
[12:42] <EriC^^> Surfer2011: do you have write permissions on the ntfs as your user?
[12:43] <EriC^^> (that you're running rsync as)
[12:44] <Malgorath> brb going to turn off the bios raid stuff
[12:44] <Surfer2011> yes i should
[12:47] <Surfer2011> i do
[12:53] <blip99> guys, what on earth is juju? https://packages.ubuntu.com/xenial/juju-mongo-tools3.2
[12:54] <lotus|i5> !juju
[12:54] <blip99> In newer versions of ubuntu it's called mongo-tools, what is this juju thing for Xenial?
[12:54] <blip99> aaaah
[12:54] <blip99> It sounded so dodgy I thought it was a scam :D
[12:54] <blip99> thanks lotus|i5
[12:55] <blip99> awesome :)
[12:55] <blip99> thank you mr. juju
[12:55] <lotus|i5> welcome blip99
[13:00] <blip99> lotus|i5, do juju packages require extra config?  I installed this package I linked, but no mongo tools show up.  no mongodump etc..
[13:01] <blip99> `juju-mongo-tools3.2/xenial,now 3.2.4+ds-0ubuntu1 amd64 [installed]`
[13:03] <Mechanismus> anyone know of a decent one-liner to tell if I've already run `pam-auth-update --enable mkhomedir`?
[13:04] <Habbie> Mechanismus, why? looks safe to run twice
[13:05] <Mechanismus> because I have it in a salt state and I don't want that state to run every time I apply the high state
[13:07] <Mechanismus> I'm trying to get my salt states setup such that I can easily tell if a node has changes to apply.  I need to add an 'unless' to the mkhomedir state so that it detects whether it needs to be run.
[13:07] <Habbie> diff --git a/pam.d/common-session b/pam.d/common-session
[13:07] <Habbie> +session        optional                        pam_mkhomedir.so
[13:07] <Habbie> that is what i observe as a change in /etc when i run it
[13:08] <Habbie> i trust you can 'unless' on that
[13:08] <Mechanismus> yeah I was thinking about grepping for "pam_mkhomedir.so" in common-session but I wonder if there's a better way
[13:09] <Habbie> pam-auth-update really doesn't do much more than that either
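A hedged sketch of the 'unless' test Mechanismus and Habbie converge on: succeed only when pam_mkhomedir is already present in common-session. It is demonstrated against a temp file so it runs anywhere; on a real node the path would be /etc/pam.d/common-session.

```shell
# Exit-status-based check for "is mkhomedir already enabled?".
# Demo uses a temp file standing in for /etc/pam.d/common-session.
pamfile=$(mktemp)
printf 'session optional\t\t\tpam_mkhomedir.so\n' > "$pamfile"  # simulate an enabled node
if grep -q 'pam_mkhomedir\.so' "$pamfile"; then
  state=enabled
else
  state=disabled
fi
echo "$state"
rm -f "$pamfile"
```

In a salt state this would be the `unless: grep -q pam_mkhomedir.so /etc/pam.d/common-session` guard on the pam-auth-update command.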
[13:54] <iffraff> Hi, I have a laptop with an onboard intel video card and a nvidia card.  I think it's not an uncommon setup.  I can't get it to display two external 4k monitors.  So I'm thinking of getting an external graphics card that would connect via usb -c or thunderwhatever.
[13:54] <iffraff> Two questions 1) does this sound reasonable?
[13:54] <iffraff> 2) what external graphics card is most likely to not give me trouble with ubuntu?
[13:58] <v0lksman> I have a directory of alias files (each file defines an alias), can't seem to get them to load automatically using bashrc something like . /path/to/dir/*
[13:58] <v0lksman> anyone know how to get them all to load?
[13:58] <Habbie> for f in path/to/dir/* ; do . $f ; done
[13:59] <v0lksman> the example I provided loads the first found in the directory
[13:59] <Habbie> yes
[13:59] <v0lksman> yeah loop them eh?  ok..thanks!
[13:59] <Habbie> use . "$f"
[13:59] <Habbie> in case there's whitespace in one of the names
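A self-contained demo of the loop Habbie suggests, using a temp dir in place of the real alias directory. The quoted "$f" is what protects filenames with whitespace; in .bashrc the temp dir would be replaced with the real path.

```shell
# Source every file in a directory, one by one, quoting to survive spaces.
aliasdir=$(mktemp -d)
printf 'alias ll="ls -l"\n'     > "$aliasdir/01-ll"
printf 'alias gs="git status"\n' > "$aliasdir/02 git"   # name with a space on purpose
shopt -s expand_aliases 2>/dev/null || true  # bash scripts need this to *use* aliases
loaded=0
for f in "$aliasdir"/*; do
  . "$f" && loaded=$((loaded+1))
done
echo "loaded $loaded alias files"
rm -rf "$aliasdir"
```

An unquoted `. $f` would try to source `02` and `git` as two separate files and fail on both.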
[13:59] <trench> iffraff: https://hackernoon.com/recipe-nvidia-titan-x-as-external-gpu-on-ubuntu-laptop-9df2dfc02fc6 pretty straight forward
[14:02] <iffraff> trench: but isn't nvidia notorious for having crappy linux drivers?
[14:03] <v0lksman> Habbie: thanks!
[14:06] <trench> iffraff: try and check if it works, if it doesn't send the stuff back?
[14:08] <iffraff> can you return video cards?  I guess I kind of assumed you could not
[14:10] <Habbie> iffraff, that's not a ubuntu support question in any shape or form :)
[14:11] <pragmaticenigma> iffraff: When researching anything with Linux, always pay attention to the date of publication. If it is more than 3 years old, it is probably way outdated, as the Linux community is continuously evolving and changing faster than articles can be published. As for your 4k issue, are you certain that there isn't a limitation of the graphics card on how many displays it can drive at 4k? In a laptop form factor, space is at a premium
[14:11] <pragmaticenigma> and heat dissipation is a problem. You would need to consult the laptop specifications to find out what your computer is capable of. You can use an external graphics card, though you will want to verify that your laptop has the right USB Type-C support for that feature. Some USB-C ports do not have the required bandwidth for the data transfer needed.
[14:11] <iffraff> Habbie:  what video cards work best with ubuntu is not relevant to ubuntu?
[14:11] <Habbie> iffraff, i meant the 'return' question, not the rest, sorry
[14:12] <pragmaticenigma> iffraff: As what "works best" is not a support question. It is a polling question, and it is preferred that you ask those types of questions in #ubuntu-offtopic
[14:14] <iffraff> pragmaticenigma: so the laptop is supposed to support multiple 4k monitors, but it is also supposed to run windows, and i believe windows has some magic driver that bridges the two video cards.  Hence I'm thinking of external video card. Is the main concern when using an external video card the connection (I see that mentioned the most)? my system does have thunderbolt 3, so that should have the bandwidth
[14:19] <pragmaticenigma> iffraff: Nvidia is the maker of the driver for windows, and they offer a driver for Linux as well. It is true that sometimes the Nvidia Linux driver is a little behind in feature parity with the Windows one, but that is usually seen more in the CUDA availability and presently some of the RTX capabilities. There is no magic driver, just that the system architecture can switch from the lower-powered Intel graphics chip for graphics
[14:19] <pragmaticenigma> processing to the Nvidia chipset. That is a feature that I haven't seen fully implemented at this time. The volunteers here usually recommend that a user choose either Nvidia or Intel from the BIOS/UEFI setup and stick with one or the other.
[14:21] <iffraff> Yes, however to get the advertised dual 4k output you have to use both. I believe
[14:21] <pragmaticenigma> iffraff: I would start with making sure the computer is setup to use the Nvidia chipset, full time. The inability of the unit to drive both of the external monitors makes me believe the issue might be that you're running on the Intel chipset, and not the Nvidia
[14:22] <iffraff> I did switch the chipset ( or check the chipset ) both manually via cli, and via the nvidia gui.  Logged off and back on etc. So I'm fairly sure I was on nvidia.
[14:23] <pragmaticenigma> iffraff: Are you trying to drive two external monitors and the laptop screen at the same time (giving you 3 displays?)
[14:25] <iffraff> no, I don't need the laptop monitor.  That said it's possible that while testing the laptop was open. What would happen is I had a thunderbolt to hdmi splitter and it would only ever display one monitor.  However, which monitor depended on the order of hookup.  so I know both monitors and cables did work.
[14:26] <pragmaticenigma> iffraff: That's a bandwidth limitation of the splitter
[14:27] <pragmaticenigma> iffraff: to drive two displays, they'd both need to be connected independently to the laptops graphics ports
[14:28] <iffraff> I actually tried a number of splitters, including these sort of all in one laptop docking things, like the pluggable https://www.amazon.com/Plugable-Charging-Specific-Thunderbolt-DisplayPort/dp/B0779K9DG2/ref=sr_1_5?crid=PUASATCMPDOK&keywords=pluggable+usb+3.0+docking+station&qid=1567175265&s=gateway&sprefix=pluggable%2Caps%2C275&sr=8-5
[14:28] <iffraff> that one is usb-c but I also tried one that was thunderbolt
[14:29] <banisterfiend> yo guys, given a path to an executable on disk -- what's the best way to find out all instances of it (i.e PIDs) ?
[14:29] <pragmaticenigma> banisterfiend: look at "man ps"
[14:29] <lordcirth_> banisterfiend, 'lsof /bin/foo' might do it
[14:30] <banisterfiend> pragmaticenigma lol, i mean in the C API sorry
[14:41] <Laserburn> hey guys.  I bought a Dell R710 server with no OS.  Downloaded Ubuntu 18.04 LTS and put it on a USB stick.  Every time I try to boot the installer, after the grub menu, I get an "out of range" error on my monitor.  I googled for solutions, but the vga switches in the grub menu have not worked.  Does anyone have any idea what I can do?  My monitor's native res is 1920x1080 and supports 59/60/120/144hz.
[14:42] <ioria> Laserburn, have you tried 'nomodeset' ?
[14:42] <Laserburn> I have not
[14:42] <Laserburn> will try
[14:43] <lotuspsychje> Laserburn: see also the #ubuntu-server channel for likeminded server volunteers
[14:43] <Laserburn> thanks!
[14:49] <Elodin> hello, i just installed ubuntu, and i would like to see the grub at startup, how can i do that?
[14:49] <lotuspsychje> Elodin: hold shift at boot
[14:49] <Elodin> thanks
[14:50] <Elodin> i was thinking i was crazy... it was booting too fast for me to see the grub
[14:50] <Ool> or set it into /etc/default/grub: https://help.ubuntu.com/community/Grub2
[14:53] <Ool> https://askubuntu.com/questions/16042/how-to-get-to-the-grub-menu-at-boot-time
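A sketch of the /etc/default/grub route Ool mentions, as an uninvoked function. It assumes the stock Ubuntu file where GRUB_TIMEOUT_STYLE and GRUB_TIMEOUT lines already exist (if GRUB_TIMEOUT_STYLE is absent it would need to be appended instead).

```shell
# Sketch, not invoked: make the GRUB menu visible for 5s on every boot.
show_grub_menu() {
  sudo sed -i \
    -e 's/^GRUB_TIMEOUT_STYLE=.*/GRUB_TIMEOUT_STYLE=menu/' \
    -e 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=5/' \
    /etc/default/grub
  sudo update-grub   # regenerate /boot/grub/grub.cfg from the new defaults
}
```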
[15:48] <rapidwave> I upgraded to Disco and now icons seem like windows 95. Where did the better icons go?
[16:11] <brutser> i dont understand this, i create header file for dm-crypt (dd if=/dev/zero bs=1049600 count=1 of=myheader) - then i encrypt sda (cryptsetup luksFormat --hash=sha512 --key-size=512 --header myheader /dev/sda) - after that command, the header file (myheader) is now 16.0M in size - why did that happen? i want the header file to stay at the minimum ~1.1M size - why did it grow to 16.0M??
[16:18] <tomreyn> try specifying --keyfile-size
[16:19] <tomreyn> oh actually, ignore me
[16:22] <tomreyn> brutser: looks like you'll need --align-payload 0
[16:23] <tomreyn> at least this example says so https://superuser.com/questions/823922/dm-cryptluks-can-i-have-a-separate-header-without-storing-it-on-the-luks-encry
[16:30] <brutser> tomreyn: i thought so too, but i tried that already
[16:31] <brutser> tomreyn: actually had the exact same post in front of me
[16:33] <rexwin_> Command 'tailf' not found
[16:33] <rexwin_> in my ubuntu VM machine
[16:35] <Ben64> tailf is deprecated
[16:36] <brutser> tomreyn: it's easy to reproduce > create test.txt with Hello World! inside NEXT dd if=/dev/zero bs=1049600 count=1 of=myheader NEXT cryptsetup luksFormat test.txt --header myheader --align-payload 0 --cipher twofish
[16:36] <brutser> you tell me if myheader grew to 16.0M or not
[16:37] <EoflaOE> rexwin_: Tailf is deprecated, so replace it with tail -f. Explain what you want to do so we can help you further.
[16:38] <rexwin_> I got in now
[16:42] <ausjke> usb camera /dev/video0: fd=open(/dev/video0); unplug camera; /dev/video0 file gone immediately;can I still safely close the now missing device file via close(fd)?
[16:44] <Habbie> ausjke, yes
[16:45] <Habbie> ausjke, what you can't do is read or write from it - that would error (but it would still be safe, just need to handle the errors)
[16:53] <ausjke> Habbie: now I insert the usb-camera back, and a new /dev/video0 is created, so I just need open/close again.
[16:54] <ausjke> i have a race condition: when the camera unplugs, before I detect that and close the fd, the usb-camera is re-inserted; /dev/video0 is still held because it wasn't closed in time, so the kernel creates /dev/video1 for the camera instead, and my code breaks
[16:55] <ausjke> my program will show /dev/video0(deleted) under /proc/PID/fd for this hotplug usb device
[17:04] <Habbie> ausjke, your code should not expect the camera to always be on 0, indeed
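One hedged way to follow Habbie's advice and stop depending on the bare index: udev's stable by-id symlinks (when udev provides them for the device) encode model/serial and point at the right /dev/videoN across replug events, so open() can target the same camera every time.

```shell
# Show the stable v4l symlinks if any exist; harmless on machines without cameras.
links=$(ls /dev/v4l/by-id/ 2>/dev/null || echo "no v4l devices on this machine")
echo "$links"
```

Opening /dev/v4l/by-id/usb-...-video-index0 (a hypothetical example name) instead of /dev/video0 makes the video0/video1 race irrelevant to the application.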
[17:05] <banisterfiend> hii, what's the best way to find out all the PIDS that were started from a specific path? i.e how do i find all the PIDs for instances of /usr/bin/hexchat
[17:06] <Habbie> banisterfiend, you can do a bunch of things but they will all, inside, check /proc/*/exe and /proc/*/maps and /proc/*/fd
[17:07] <banisterfiend> Habbie thanks, do you know how to list all PIDs in the system so i can iterate through them with /proc/<PID>/exe ?
[17:07] <Habbie> banisterfiend, yes, list /proc and filter out all non-numbers
[17:08] <banisterfiend> Habbie hm, that's the only way? i was hoping there'd be something like: /proc/pids or some such would just return the pids
[17:09] <ioria> banisterfiend, you mean something like 'pstree -p ' ? or what ?
[17:09] <Habbie> banisterfiend, not that i know of
[17:09] <Habbie> banisterfiend, ps just reads /proc
[17:09] <Habbie> pstree -p also just scans /proc
[17:16] <banisterfiend> Habbie alrighty thanks
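A shell sketch of the /proc walk Habbie describes (a C version would do the same with opendir/readdir plus readlink on the same paths): list /proc, keep the numeric entries, and print every PID whose resolved exe symlink matches the given executable path.

```shell
# Print PIDs whose /proc/<pid>/exe resolves to the same file as $1.
find_pids() {
  target=$(readlink -f "$1") || return 1
  for d in /proc/[0-9]*; do
    exe=$(readlink -f "$d/exe" 2>/dev/null) || continue   # skip unreadable/kernel threads
    [ "$exe" = "$target" ] && echo "${d#/proc/}"
  done
}
# demo: PIDs running the same binary as this shell
find_pids "/proc/$$/exe"
```

Note that readlink on /proc/<pid>/exe needs permission over that process, so an unprivileged run only sees its own user's processes.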
[17:55] <mithrison> I'm using gstrunner to stream video on port 5000; the command runs with no problem, however the port doesn't open. how can I troubleshoot it? (I'm on Raspbian; by default iptables seems to be disabled)
[17:56] <molinot> hello
[18:03] <molinot> I have a PC with 16GB of RAM, it has a 2GB swap file, should I augment the size to 4GB? Is any other size preferable? I don't plan to use hibernation at the moment
[18:04] <lotuspsychje> mithrison: router could block port?
[18:04] <mithrison> hmm good point
[18:04] <leftyfb> mithrison: you're having this issue on raspbian?
[18:05] <lotuspsychje> molinot: got an ssd or spinner?
[18:06] <molinot> lotuspsychje, ssd
[18:06] <leftyfb> molinot: if you're not going to use hibernation, then disable swap completely. It's pretty useless with 16G of memory
[18:06] <lotuspsychje> molinot: ^
[18:06] <lotuspsychje> molinot: was that a manual partitioning, or did you let ubuntu setup choose?
[18:07] <molinot> leftyfb, I open many tabs at once, and the PC becomes slower
[18:07] <leftyfb> molinot: swap isn't going to help you
[18:08] <lotuspsychje> molinot: which ubuntu are you running?
[18:08] <molinot> lotuspsychje, I don't remember, I think I let it choose, but several versions ago. I have the latest 19.04
[18:09] <mithrison> leftyfb yes. it's an issue on raspbian
[18:09] <leftyfb> mithrison: then why are you asking in #ubuntu?
[18:09] <lotuspsychje> molinot: if 19.04 with ssd & 16g ram going slow, then there's a bottleneck somewhere
[18:10] <molinot> lotuspsychje, I think the Chrome tabs eat more and more memory
[18:11] <tomreyn> brutser: maybe that's a matter of luks 1 vs luks 2
[18:12] <lotuspsychje> molinot: i always tweak a bit more: installing preload & haveged, disabling unwanted services/systemd units, cleaning the system with bleachbit, uninstalling unneeded packages
[18:14] <molinot> lotuspsychje, alas, I use htop and it is the tabs that eat the memory; and when there is little left, it doesn't use the swap file...
[18:14] <lotuspsychje> molinot: did you compare with other browsers, lets say chromium
[18:15] <molinot> well, I have it installed, I could give it a try
[18:15] <sarnold> I strongly recommend keeping swap enabled; the kernel can make better memory management decisions if it has some place to stuff data that processes don't appear to be using
[18:15] <sarnold> you only need a gigabyte or two of the stuff
[18:16] <molinot> perhaps I have it disabled or something
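A sketch, as an uninvoked function, of the gigabyte-or-two swap file sarnold recommends. It assumes an ext4 root filesystem (swap files on btrfs/zfs need extra steps) and that /swapfile doesn't already exist; the last line checks what is active right now.

```shell
# Sketch, not invoked: create and enable a 2G swap file, persisted in fstab.
make_swapfile() {
  sudo fallocate -l 2G /swapfile
  sudo chmod 600 /swapfile                    # swap must not be world-readable
  sudo mkswap /swapfile
  sudo swapon /swapfile
  echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
}
# harmless check of currently active swap:
cat /proc/swaps
```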
[18:17] <leftyfb> molinot: I have 75 tabs open on a laptop with 16G. Among other applications open. I'm only using 3.4G of memory
[18:18] <molinot> leftyfb, chrome or chromium?
[18:18] <leftyfb> chrome
[18:18] <leftyfb> and I have a ton of extensions running as well
[18:19] <molinot> I have 8GB eaten with around 18 tabs open
[18:20] <kokokon> https://www.cisecurity.org/advisory/a-vulnerability-in-google-chrome-could-allow-for-arbitrary-code-execution_2019-086/
[18:20] <kokokon> can ubuntu be updated?
[18:21] <kokokon> I mean can chromium maintainer update chromium browser
[18:21] <EoflaOE> kokokon: You can request the maintainers to update it.
[18:21] <sarnold> molinot: does shift-escape in your browser bring up a tool to show you which tabs are consuming memory and cpu and so on?
[18:23] <tomreyn> kokokon: chromium-browser seems to already have this fix
[18:24] <kokokon> tomreyn mine is not the latest
[18:24] <sarnold> tomreyn: hmm I'm not sure we do -- I don't see CVE-2019-5869 on https://launchpad.net/ubuntu/+source/chromium-browser/+changelog
[18:24] <tomreyn> i was inspecting version numbers https://packages.ubuntu.com/search?keywords=chromium-browser&exact=1
[18:24] <lotuspsychje> kokokon: your ubuntu version and chromium version plz?
[18:24] <tomreyn> https://www.cisecurity.org/advisory/a-vulnerability-in-google-chrome-could-allow-for-arbitrary-code-execution_2019-086/ states "Google Chrome versions prior to 76.0.3809.132"
[18:24] <Habbie> sarnold, 5869 isn't even present in your CVE tracker
[18:24] <tomreyn> it could be chrome only, of course
[18:25] <kokokon>  76.0.3809.132  ubuntu 18.04
[18:25] <molinot> sarnold, I have a lot of subframes in strange sites
[18:25] <sarnold> Habbie: yeah, but that's less surprising.. we often release the browsers before we triage the cves
[18:25] <lotuspsychje> kokokon: ok tnx
[18:25] <Habbie> sarnold, ah
[18:25] <EoflaOE> kokokon: You can request them to update it
[18:25] <kokokon> them who?
[18:26] <kokokon> sarnold are you the chromium maintainer?
[18:26] <sarnold> kokokon: no
[18:26] <kokokon> Maintainer: Ubuntu Developers.
[18:26] <sarnold> alas our browser guy is on vacation
[18:27] <kokokon> so I should update it manually right?
[18:29] <tomreyn> https://chromium.googlesource.com/chromium/src/+/51abe396b9580d43d53046180a4f95fdfe5140d9 is the fixed commit
[18:32] <kokokon> tomreyn so I just find impact file on a system, change it and restart browser?
[18:33] <tomreyn> so i did not compare version numbers properly. https://chromereleases.googleblog.com/2019/08/stable-channel-update-for-desktop_26.html states that 76.0.3809.132 is the fixed version, but according to https://launchpad.net/ubuntu/+source/chromium-browser/+changelog and https://packages.ubuntu.com/search?keywords=chromium-browser&exact=1 ubuntu has 76.0.3809.100 based builds, which are probably still affected
[18:36] <kokokon> yes
[18:36] <kokokon> whats the easiest way to upgrade to 132?
[18:40] <pennTeller> Hi guys, sometimes when I move my mouse pointer to the right side of my screen all my windows kind of shift to the left side (as if they were trying to hide) how does one stop this from happening?
[18:40] <_KaszpiR_> google-chrome-stable/stable,now 76.0.3809.132-1 amd64 [installed]
[18:40] <_KaszpiR_>   The web browser from Google
[18:41] <_KaszpiR_> it's available
[18:41] <kokokon> yes chrome however chromium?
[18:41] <kokokon> there is an arch linux package
[18:41] <kokokon> so there must be a source code somewhere
[18:44] <sarnold> kokokon: you could switch from the .deb packaged chromium-browser to the snap packaged browser: https://snapcraft.io/chromium
[18:45] <tomreyn> or temporarily use https://download-chromium.appspot.com/
[18:46] <_KaszpiR_> chromium beta ppa
[18:47] <tomreyn> there's also https://launchpad.net/~canonical-chromium-builds/+archive/ubuntu/stage
[18:47] <brutser> tomreyn: https://cdn.kernel.org/pub/linux/utils/cryptsetup/v2.1/v2.1.0-ReleaseNotes < at the top it's explained
[18:48] <brutser> it's some space they reserve for future use, but it's insane, 16M for a header
[18:48] <tomreyn> brutser: so it's LUKS1 vs LUKS2 indeed i see
[18:48] <brutser> i feel 1M is already sufficient
[18:48] <tomreyn> it's been a while since i've seen someone saying 15 MB extra was "insane"
[18:49] <brutser> yea :)
[18:49] <brutser> tomreyn: yes it's only for luks2, do you know what the downsides are of using luks1?
[18:49] <brutser> security in mind ^
[18:50] <tomreyn> i assume the release notes you pointed to will discuss it. one difference is that luks 1 only has a single copy of the header, whereas luks 2 has two.
[18:51] <brutser> ok
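For completeness, a sketch (uninvoked function, illustrative file/device names) of the other way out: with cryptsetup 2.x, --type luks1 forces the old header format, which keeps a detached header around the ~1-2M brutser wants instead of the 16M LUKS2 metadata area, at the cost of LUKS2 features such as the argon2 KDF and the redundant header copy tomreyn mentions.

```shell
# Sketch, not invoked: detached LUKS1 header, staying near the 1M minimum.
luks1_detached_header() {
  dd if=/dev/zero bs=1049600 count=1 of=myheader
  cryptsetup luksFormat --type luks1 --hash sha512 --key-size 512 \
    --header myheader --align-payload 0 /dev/sda
}
```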
[19:29] <thaway> on my LTS 16 server, some programs stopped accepting valid SSL certificates unless I explicitly set /etc/ssl/certs/ca-certificates.crt as the CA file.  what might have gone wrong?
[19:37] <kyle__> thaway: entirely depends on the cert(s) and the software.  Any chance they're EV certs?  Because I think those all got recently deprecated.
[19:37] <kyle__> Same goes for just extra long certs.
[19:38] <thaway> what are EV certs?
[19:39] <thaway> the programs that had issues were RoundCube (i.e. PHP) and wget
[19:39] <thaway> at least those are the ones with problems I noticed so far
[19:40] <thaway> ok reading this now: https://en.wikipedia.org/wiki/Extended_Validation_Certificate
[19:40] <kyle__> example of one of the wget calls that's giving you issues?
[19:41] <thaway> wget https://github.com/wikimedia/mediawiki-extensions-YouTube/archive/master.tar.gz
[19:42] <thaway> and RoundCube failed to connect to my own LetsEncrypt-certed host, so I don't think it's EV-related
[19:42] <kyle__> Run this for me: dpkg -l ca-certificates
[19:42] <thaway> (the LE cert was certainly fresh; Thunderbird would still connect to it)
[19:42] <thaway> Version: 20170717~16.04.2
[19:42] <kyle__> (well, that's the package name on 18.04)
[19:43] <kokokon> sarnold i have installed latest with snap however it is yet to appear in the menu
[19:43] <kyle__> Err..... I don't have a 16.04 box in front of me, but that seems rather old... when's the last time you updated?
[19:43] <kokokon> Command 'chromium' is available in '/snap/bin/chromium'The command could not be located because '/snap/bin' is not included in the PATH environment variable.
[19:44] <sarnold> kokokon: you may need to log out and log in again to get a new PATH environment variable set up
[19:44] <kyle__> thunderbird is a mozilla project app.  I think all the moz stuff uses internal ssl systems with an internal certificate store.
[19:44] <kokokon> ok
[19:44] <thaway> I update regularly...
[19:45] <thaway> hmm, my web host might have screwed this one
[19:45] <thaway> sources.list contains their addresses (strato)
[19:45] <tomreyn> ca-certificates 20170717~16.04.2 is the latest on xenial
[19:45] <kyle__> Damn.  That's just ... weird.
[19:45] <kyle__> Thanks tomreyn.
[19:45] <thaway> tomreyn: oh ok
[19:46] <kyle__> OK.  Hrumm.  You can use a -k in the wget or curl call, if you really want, but I wonder if maybe they're mitm'ing the stuff for a load balancer or something ugly.
[19:47] <thaway> wget's -k seems irrelevant.  installing curl to test...
[19:49] <thaway> curl doesn't complain about the cert but downloads an HTML file saying "you're being redirected" -_-  one point for wget on this one :P
[19:49] <tomreyn> thaway: so you're saying that    wget -qO /dev/null https://github.com    gives an error message -which? - but    wget -qO /dev/null --ca-certificate /etc/ssl/certs/ca-certificates.crt https://github.com   does not?
[19:51] <thaway> tomreyn: correct.  the error is:  cannot verify github.com's certificate, issued by ‘CN=DigiCert SHA2 Extended Validation Server CA,OU=www.digicert.com,O=DigiCert Inc,C=US’:  Unable to locally verify the issuer's authority.
[19:51] <kyle__> wget is for downloading, curl is for interacting directly with REST apis.  At least, that's how I think of it.
[19:52] <thaway> makes sense
[19:52] <tomreyn> so this suggests that the compiled in CA path of this wget build you're using differs from /etc/ssl/certs/ca-certificates.crt
[19:52] <tomreyn> readlink -f $(which wget)
[19:52] <tomreyn> dpkg -S $(readlink -f $(which wget))
[19:52] <tomreyn> what do those return?
[19:54] <thaway> tomreyn: /usr/bin/wget and package wget
[19:54] <thaway> wget version:  1.17.1-1ubuntu1.5
[19:54] <tomreyn> and it's not only happening with wget you said, right?
[19:55] <tomreyn> still, can you show    sha256sum /usr/bin/wget
[19:55] <thaway> c31d3e52ddcc0d9c32c79f43febf5e1609cce5ae60546e112163c4329f52cbd9
[19:56] <thaway> I had a similar problem with RoundCube, which is a web-based email client written in PHP
[19:56] <tomreyn> i see a matching hash on a system i manage
[19:57] <thaway> (just disabled cert checking for that one as it connects to localhost anyway)
[19:57] <gonf> how do I download all files WITHOUT subdirectories using wget?
[19:57] <B|ack0p> hi. i am having these errors and fails: https://paste.ubuntu.com/p/XXjrQMQHmk/
[19:57] <kyle__> wget -r -l1 I think.  Recurse with a depth of 1
[19:57] <B|ack0p> how can i fix them?
[19:58] <B|ack0p> one or some of them is causing system error popup everytime i boot into ubuntu 18.04
[19:58] <B|ack0p> system error report popup*
[19:58] <B|ack0p> recently i faced that popup at 10.54pm
[19:58] <gonf> kyle__, thats what I thought too but it keeps downloading the subdirs anyway:|
[19:59] <tomreyn> thaway: do you have /etc/wgetrc or ~/.wgetrc or /etc/netrc or ~/.netrc ?
[19:59] <B|ack0p> in the paste you can see possible fails/errors related to that error
[19:59] <thaway> gonf: maybe -l 0?
[19:59] <kyle__> gonf: Ooh.  Humm.  I know -l0 will recurse indefinitely.  Maybe I have to re-read the man page
[19:59]  * kyle__ hasn't read the wget one in probably 5 years
[20:00] <B|ack0p> related to that time*
[20:00] <gonf> -l 0 means indeed infinite
[20:00] <tomreyn> thaway: and is the system time set correctly?
[20:00] <sarnold> B|ack0p: pop into #snappy -- hopefully someone will know how to track down which snap icons are missing
[20:00] <kyle__> Wow.  I forgot how bad that manpage is.
[20:01] <thaway> tomreyn: only /etc/wgetrc and the only uncommented line is passive_ftp=on
[20:01] <B|ack0p> hmm
[20:01] <B|ack0p> sarnold: is it related to snap apps?
[20:01] <sarnold> B|ack0p: yes
[20:02] <tomreyn> thaway: but none of the other three files is present?
[20:02] <SignalsOut> I'm gonna say it!
[20:03] <thaway> tomreyn: correct
[20:04] <B|ack0p> sarnold: they seem to be sleeping..
[20:05] <B|ack0p> i have 7 same error log this:
[20:05] <B|ack0p> Aug 30 22:54:19 uthink-x61 gnome-software[3571]: Failed to load snap icon: local snap has no icon
[20:05] <B|ack0p> and other one: Aug 30 22:54:17 uthink-x61 nautilus[3933]: Called "net usershare info" but it failed: Failed to execute child process “net” (No such file or directory)
[20:06] <tomreyn> thaway: i tested the same command i provided you earlier, the one you reported fails with "cannot verify github.com's certificate, issued by ‘CN=DigiCert SHA2 Extended Validation Server CA,OU=www.digicert.com,O=DigiCert Inc,C=US’:  Unable to locally verify the issuer's authority." unless you specify --ca-certificate=/etc/ssl/certs/ca-certificates.crt . it works without errors on my test system, which puzzles me somewhat.
[20:06] <sarnold> B|ack0p: yeah, it's a friday before a US weekend, and after end of usual office hours in europe.. it might not be real busy
[20:06] <sarnold> B|ack0p: that net usershare info would be easy to fix by installing samba-common-bin but if you don't have it now then you probably don't care about doing SMB networking
[20:07] <B|ack0p> sarnold: i am not sure about samba
[20:08] <B|ack0p> and if it is necessary i can install?
[20:08] <B|ack0p> if you look at full log here: https://termbin.com/xlo9
[20:08] <thaway> tomreyn: thanks.  yes, it's very weird.  I'm sure I haven't fiddled with anything that could have caused it.  the RoundCube problem came "out of nowhere".  got a report from a user one day when I hadn't done anything on the server in a while...
[20:09] <B|ack0p> at time 22.54.02 there are about 5 fails about fwupd
[20:09] <B|ack0p> Aug 30 22:54:02 uthink-x61 fwupd[3593]: disabling plugin because: failed to....
[20:09] <B|ack0p> is it firewire? and why is it failing? is it error or normal?
[20:11] <sarnold> B|ack0p: fwupd is a firmware update tool -- I think those messages are probably normal enough
[20:12] <tomreyn> sarnold: would you have any idea as to why wget on xenial (proper latest deb, but have not checked the libs) would start throwing cert validation errors ("cannot verify github.com's certificate, [..] Unable to locally verify the issuer's authority.") unless it's run with --ca-certificate /etc/ssl/certs/ca-certificates.crt ?
[20:12] <tomreyn> this is thaway's issue, i can't think of much other than libs being replaced by third parties now.
[20:12] <sarnold> tomreyn: quite often an unhashed /etc/ssl/certs directory will make it hard to do validation -- I believe update-ca-certificates should redo the hashing process
[20:13] <sarnold> tomreyn: indeed using someone else's libraries could give that trouble..
[20:13] <tomreyn> but would it not use the /etc/ssl/certs/ca-certificates.crt file either way?
[20:13] <tomreyn> i mean in case the rest of /etc/ssl/certs/ is not up to date
[20:14] <sarnold> hmm not sure there
[20:14] <tomreyn> okay, i'm not certain either really
[20:14] <sarnold> tomreyn: but perhaps if whoever built the hypothetical other libraries didn't use a similar enough set of configure flags..
[20:14] <gonf> so I couldn't download only the files, but I used the index.html to grep only the files :| not sure if there is another way.. but using this for now
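A hedged sketch (uninvoked, since it needs network) of the flag set that usually gets wget to grab just the files from one directory listing: recurse a single level (-r -l1), never ascend (-np), don't recreate the remote directory tree locally (-nd), and reject the index pages themselves.

```shell
# Sketch, not invoked: fetch only the files in one listing, flat, no subdirs.
fetch_flat() {
  wget -r -l1 -np -nd -R 'index.html*' "$1"
}
```

Subdirectory index pages may still be requested and then discarded by -R, so this avoids the local clutter rather than the HTTP hits.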
[20:14] <tomreyn> yes
[20:15] <sarnold> tomreyn: did you find out if the crypto libs have been replaced?
[20:15] <tomreyn> now how do we tell which libs are being used
[20:15] <tomreyn> not yet, that's next
[20:15] <tomreyn> ldd or something better
[20:15] <thaway> I use one third-party package source for php/mysql stuff, let me check what it was called
[20:15] <sarnold> ldd's easy. It's not great but it's easy. :)
[20:15] <tomreyn> i just always think of https://catonmat.net/ldd-arbitrary-code-execution
[20:16] <thaway> ondrej's apache2 and php sources for xenial
[20:16] <sarnold> tomreyn: yes :(
[20:16] <tomreyn> there was an alternative, was it readelf or objdump?
[20:17] <thaway> btw strace on wget indicates it doesn't touch /etc/ssl/certs ... BUT!  open("/usr/lib/ssl/cert.pem", O_RDONLY)
[20:17] <thaway> not sure if relevant
[20:18] <thaway> oh and the result of the open() call is ENOENT
[20:18] <sarnold> tomreyn: readelf -d will show the libraries that an object file wants to use, but since it's not *executing* the thing, the way ldd does, you don't get to find out what the loader will actually load for them. just the names.
[20:18] <tomreyn> ah right, this was the problem with this approach, thanks sarnold!
[20:19] <tomreyn> so we have    readelf -d $(which wget) | grep NEEDED
[20:19] <tomreyn> or just    ldd $(which wget)
[20:20] <thaway> I decided to trust the wget executable and ran the ldd one.  among the results: libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f52e7279000)
[20:21] <tomreyn> thaway: ^ can you show the full output of    ldd $(which wget)   on a pastebin?
[20:21] <tomreyn> thaway: and then   sha256sum /lib/x86_64-linux-gnu/libssl.so.1.0.0 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0    and any other libs if you like
[20:24] <thaway> here's the full ldd result: http://dpaste.com/3NQS9G9
[20:25] <thaway> and the shas: http://dpaste.com/3J542FR
[20:25] <tomreyn> thaway: does /usr/lib/ssl/cert.pem exist?
[20:25] <thaway> tomreyn: nope
[20:25] <tomreyn> oh you said ENOENT, sorry
[20:26] <thaway> np
[20:26] <tomreyn> hmm those checksums match as well here
[20:29] <tomreyn> so by default wget doesn't use the CA file (ca-certificates.crt), it uses certificates directly based on their serials.
[20:29] <tomreyn> it looks them up in /usr/lib/ssl/certs which is actually a symlink to /etc/ssl/certs
[20:30] <tomreyn> thaway: did you maybe disable some CAs you don't trust lately?
[20:30] <thaway> tomreyn: didn't disable any CAs.  by the way /usr/lib/ssl/certs isn't a symlink on my system.
[20:31] <thaway> contents of the dir: http://dpaste.com/3H50XPT
[20:31] <tomreyn> thaway: interesting. can you     readlink -f /usr/lib/ssl/certs/244b5494.0
[20:32] <thaway> no actual certs there it seems
[20:33] <tomreyn> is ca-certificates installed?
[20:33] <tomreyn> oh yes, we had this initially
[20:33] <thaway> yup
[20:33] <thaway> btw I ran update-ca-certificates but it made no difference
[20:35] <tomreyn> thaway: according to    dpkg -S  /usr/lib/ssl/certs/*   which package do these files belong to?
[20:37] <thaway> tomreyn: interesting: while the directory itself is from openssl, the contents don't belong to any package  (dpkg -S says "no path found that matches pattern")
[20:37] <tomreyn> do you have gitlab-ce installed?
[20:37] <thaway> nope
[20:38] <thaway> the timestamps of the files in the dir are from 2017 and 2018 though
[20:38] <tomreyn> i don't know what it is, but something you ran or installed on this system thought it'd be a good idea to remove the symlink and place these files there
[20:39] <tomreyn> do you have /etc/ssl/certs/244b5494.0 ?
[20:39] <thaway> tomreyn: nope, the contents of the dir are these: http://dpaste.com/3H50XPT
[20:39] <thaway> tomreyn: oh nvm, wait
[20:40] <thaway> I thought you meant /usr/lib/ssl/certs.  yes I have that file in /etc/ssl/certs/
[20:40] <tomreyn> and is it a symlink to DigiCert_High_Assurance_EV_Root_CA.pem in the same directory?
[20:41] <thaway> tomreyn: yes
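A side note on those hash-named files, assuming the paths from the log and that the openssl CLI is installed: a file like `244b5494.0` in `/etc/ssl/certs` is a symlink whose name is OpenSSL's "subject hash" of the certificate it points to, which is how the library finds a CA quickly without parsing every file. A sketch of how to verify one:

```shell
# Resolve the hash-named symlink to the actual certificate file
target=$(readlink -f /etc/ssl/certs/244b5494.0)

# Recompute the subject hash; the symlink's name should be this
# value followed by ".0"
openssl x509 -in "$target" -noout -subject_hash
```

This is also why stale files dumped into a hashed cert directory break lookups: the names only mean something if they match the current hashes.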
[20:42] <tomreyn> some of those files you have (but should not have) in  /usr/lib/ssl/certs would be available at this location on Centos systems according to https://blog.hazrulnizam.com/renewing-ssl-certificates/
[20:42] <tomreyn> i really don't know what caused things to be overwritten on your system, but personally i'd make sure i'm the only admin, and that i know what i'm doing.
[20:43] <tomreyn> or that all admins are legitimate and know what they are doing
[20:43] <thaway> I'm the only admin and ssh login requires a private key i.e. password-only auth is disabled
[20:43] <tomreyn> it should be easy to fix now that we know what the problem is. unfortunately we do not know what caused it, or whether it will happen again.
[20:45] <thaway> hmm, so the problem is that these files were created in 2017/2018 and now they're out of date?  that at least makes sense
[20:45] <thaway> especially since I have no clue what I might have done back then :-)
[20:45] <tomreyn> no, the problem is that /usr/lib/ssl/certs is a directory (not a symlink to /etc/ssl/certs), contains files of unknown origin, and
[20:46] <tomreyn> ...that's it
[20:46] <thaway> well yeah, but they were created in 2017/2018, so I was wondering why the problem only really surfaced now
[20:47] <tomreyn> you don't know that, all you see are the timestamps of the files.
[20:47] <tomreyn> so far it does not look like anyone falsified those, but they may just have been extracted from some tarball and moved there.
[20:47] <thaway> hmm, indeed, can be forged.  but if they were really created then by mistake, it could be that they're simply outdated now, right?
[20:48] <tomreyn> you call it "outdated", i call it "should never exist". they're not part of any package you have installed.
[20:49] <tomreyn> and there are no such files at this location in xenial
[20:50] <thaway> tomreyn: I understand what you mean, but I mean in terms of cert validation.  in other words, is it possible that they've been there since 2017/2018, and wget/RoundCube didn't have problems because the certs there were "valid" until now?
[20:50] <tomreyn> maybe.
[20:52] <thaway> thanks a ton for the help btw.  to be honest the server is for kind of a political project and I do expect hacking attempts, but nothing CIA-level, just script kiddies :P
[20:52] <tomreyn> i'd not be happy if i found out that files of unknown origin defined the CAs my system is configured to trust.
[20:53] <thaway> indeed.  there shouldn't be a problem with deleting them and making /usr/lib/ssl/certs a symlink to /etc/ssl/certs again, right?
[20:53] <thaway> (guess I'll move them away just in case rather than deleting)
[20:54] <tomreyn> this should make     wget -qO /dev/null https://github.com   run without error again
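The repair thaway describes, as a sketch (the `.suspect` backup name is my choice; moving aside rather than deleting preserves the files for later inspection):

```shell
# Move the rogue directory aside rather than deleting it, then
# restore the symlink the openssl package normally ships
sudo mv /usr/lib/ssl/certs /usr/lib/ssl/certs.suspect
sudo ln -s /etc/ssl/certs /usr/lib/ssl/certs

# Verify certificate validation works again
wget -qO /dev/null https://github.com && echo "TLS OK"
```

(As the log goes on to show, this failed on that particular VPS because the OpenVZ `vzfs` filesystem refused the move.)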
[20:57] <thaway> huh, can't move the directory: http://dpaste.com/0S7Y4RS
[20:57] <tomreyn> is this a VPS? which kind of?
[20:58] <Hobadee> So I tried enabling SELinux on Ubuntu 18.04.3, and it locked everything out; my SSH connection dropped and when I tried rebooting, it failed with a bunch of "Permission Denied" errors.  Why?
[20:58] <thaway> tomreyn: it's a VPS from the company STRATO AG.  don't know what kinds there are.
[20:58] <Hobadee> I used the "default" SELinux type
[20:59] <tomreyn> thaway: virt-what (package and same name command) can tell you
[21:00] <thaway> hmm, / is a "vzfs" mount which apparently is a virtual fs type that allows sharing between containers
[21:00] <thaway> tomreyn: openvz
[21:00] <tomreyn> thaway: blame strato then
[21:00] <thaway> I'll contact their support :)
[21:00] <thaway> thanks a lot again!
[21:00] <tomreyn> this is pretty bad really, but they have made mistakes like this before
[21:01] <thaurwylth> Is there a lot of 19.10 alpha/daily discussion on, well, pretty much anywhere, such as Ubuntuforums? I'm guessing not, but then again, I'm asking rather than guessing.
[21:02] <tomreyn> !ubuntu+1 | thaurwylth
[21:03] <tomreyn> Hobadee: so you installed and enabled selinux and assumed it would just work?
[21:04] <Hobadee> tomreyn: Yup.  I assumed the default policy would at least allow the system to boot....
[21:04] <tomreyn> did you also disable apparmor?
[21:04] <Hobadee> tomreyn: Yes, disabled apparmor
[21:05] <Hobadee> purged it completely
[21:05] <tomreyn> by default policy, do you mean you installed package "selinux-policy-default"?
[21:05] <Hobadee> yes, and in /etc/selinux/config I have SELINUXTYPE=default
[21:06] <tomreyn> i see. well, i never tried, am not sure what to expect in this situation
[21:07] <Hobadee> I would expect it to work somewhat... :-/
[21:07] <tomreyn> there's also selinux-basics, which looks like it's meant to create a workable configuration
[21:07] <tomreyn> but generally i wouldn't expect selinux on ubuntu to work, definitely not out of the box.
[21:08] <tomreyn> i.e. i'd expect what you reported
[21:21] <pyex> How do I create a folder from a zipped filename, like: filename.zip -> filename/
[21:23] <tomreyn> zip a -r filename.zip filename/    i think, check the man page to be sure
[21:25] <tomreyn> pyex: actually it's just    zip -r filename.zip filename/
[21:26] <Elodin> hello, which file should i cat to get the cpu temperature?
[21:27] <tomreyn> you'd probably run a command instead, such as "sensors"
[21:28] <pyex> for i in *.zip; do unzip "$i" -d "$i"; done "error:checkdir:  cannot create extraction directory: File exists
[21:28] <tomreyn> Elodin: actually, this might work: cat /sys/class/thermal/thermal_zone*/temp
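The sysfs files tomreyn points at report temperature in millidegrees Celsius. A small sketch that prints them readably, assuming the `thermal_zone*` nodes exist on the machine (Elodin's apparently did not until the right driver was loaded):

```shell
# Print each thermal zone's type and temperature in degrees C
# (sysfs reports millidegrees, so divide by 1000)
for z in /sys/class/thermal/thermal_zone*; do
  [ -r "$z/temp" ] || continue
  awk -v t="$(cat "$z/type")" '{ printf "%s: %.1f C\n", t, $1/1000 }' "$z/temp"
done
```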
[21:28] <pyex> I have multiple zip files, and I need to extract each one into a folder named after it
[21:28] <Elodin> tomreyn: there isnt such a folder thermal_zone
[21:29] <tomreyn> Elodin: also none which starts with this name?
[21:29] <pyex> how do I extract each zip into a folder with the same filename?
[21:29] <cluelessperson> pyex   basename or dirname
[21:29] <Elodin> tomreyn: i have thermal/ but not the child
[21:30] <cluelessperson> depending on what your actual question is
[21:30] <tomreyn> Elodin: the proper module is probably not loaded yet. install lm-sensors, run sensors-detect. if this is very recent hardware, download the "sensors-detect" perl script from lm-sensors' github instead.
[21:31] <Elodin> tomreyn: already did this
[21:31] <tomreyn> Elodin: and "sensors" now reports what?
[21:32] <Elodin> tomreyn: it reports something, but it's probably wrong
[21:32] <pyex> cluelessperson, : basename
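pyex's loop fails because `-d "$i"` names the extraction directory after the zip file itself, and a file with that name already exists. Stripping the suffix with the `${i%.zip}` parameter expansion avoids the collision; a sketch:

```shell
# Extract every zip in the current directory into a folder named
# after the archive: photos.zip -> photos/
for f in *.zip; do
  unzip -q "$f" -d "${f%.zip}"
done
```

`${f%.zip}` removes the shortest trailing match of `.zip`, so `unzip` creates a fresh directory instead of colliding with the archive.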
[21:32] <Elodin> tomreyn: lemme backref the story
[21:33] <Elodin> tomreyn: i just bought this cpu and it was idling at 40C in the BIOS, so i decided to stress it with stress-test software and see how the temp would behave. I have been running a 100% workload on all cpus and the temperature doesn't change
[21:35] <tomreyn> Elodin: that would also make me think you're not looking at the right sensor.
[21:36] <Elodin> tomreyn: there aren't any other sensors to look at
[21:37] <Elodin> maybe i dont have the kernel modules
[21:37] <Elodin> fuck ill try another ubuntu
[21:38] <tomreyn> sensors-detect would print which modules need to be loaded
[21:38] <tomreyn> please watch the language
[21:38] <hggdh> Elodin: please mind your language
[21:38] <Elodin> oh sorry
[21:39] <tomreyn> not all mainboards, chipsets, cpus / architectures may be supported out of the box, or at all. it can take some research.
[21:40] <Elodin> tomreyn: okay, i'm running sensors-detect again; it asks me to allow it to scan things on my motherboard
[21:40] <Elodin> i'm saying yes to all, right?
[21:40] <tomreyn> i would not recommend that, better go with defaults
[21:40] <tomreyn> there is --auto
[21:40] <Elodin> oh
[21:41] <Elodin> okay it told me something about the lm78 driver and asked me to add it to /etc/modules
[21:42] <Elodin> so how do i load this module
[21:43] <Aavar> What is the name of the Ubuntu channel for other stuff (not support)?
[21:44] <Bashing-om> !ot | Aavar
[21:44] <tomreyn> Elodin: like any other module: modprobe lm78
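Putting tomreyn's answer together with what sensors-detect asked for: `modprobe` loads the driver now, while `/etc/modules` makes it load on every boot. The `grep` guard is my addition to avoid appending a duplicate line:

```shell
# Load the driver for the current session
sudo modprobe lm78

# Make it load on every boot, unless it's already listed
grep -qx lm78 /etc/modules || echo lm78 | sudo tee -a /etc/modules

# Re-read the sensors with the new driver present
sensors
```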
[21:44] <Aavar> Bashing-om, wnx
[21:45] <Aavar> tnx
[21:48] <speeder39_> Hello
[21:48] <Aavar> speeder39_, hello
[21:49] <speeder39_> Hi Aavar are you in the USA
[21:49] <Aavar> speeder39_, why do you ask?
[22:02] <mystic> had to reinstall ubuntu after a boot menu mess up..  but forgotten how to install veracrypt so cant open my encrypted folder
[22:03] <mystic> does anyone know how?
[22:04] <mystic> think i found it,  copy and paste in terminal, should work
[22:34] <ironpillow> hi all, I am testing out my applications and I am installing a lot of deb packages on specific hardware. right now I am reinstalling (clean install of ubuntu) every time I want to test the whole system. Is there a way to restore the system to the original install or somehow clean the system? I can't use a VM because I need access to specific hardware. I tried KVM but I am not able to get the NIC passthrough to work. Any advice?
[22:36] <sarnold> ironpillow: you could probably use dpkg --get-selections and --set-selections to standardize the set of initial packages.. iirc apt install -f will then make those happen
[22:38] <sarnold> you could also make a smallish filesystem and use dd to just blat around the disk image
[22:41] <Elodin> hello, everything is saying [FAILED] during the boot process
[22:41] <Elodin> why would that be
[22:42] <saor> The services failed to start
[22:42] <Elodin> that much i know. i was wondering what would be reasons for it to happen
[22:42] <sarnold> Elodin: check journalctl once you're up, hopefully you'll be able to spot the trouble
[22:42] <Elodin> it's happening on USB boots and disk boots
[22:42] <Elodin> i cant bootup
[22:43] <Elodin> everything FAILED
[22:43] <ironpillow> sarnold: I will also be installing custom applications and creating numerous directories. This is outside of dpkg
[22:43] <saor> Elodin: Even liveCD?
[22:43] <Elodin> saor: live usb yes
[22:43] <sarnold> ironpillow: rsync perhaps?
[22:45] <ironpillow> sarnold: sorry for being unclear. I am testing the install process of the application. It will install a kernel module and create directories. I want to test when and why the application might fail during its install stage
[22:46] <sarnold> ironpillow: oh :) and here I was suggesting the thing to let you sidestep the installer, because those are usually brittle and slow :)
[22:47] <ironpillow> sarnold: :). My current process: 1. install new ubuntu on the hardware. 2. Install dpkg packages. 3. Install custom applications and kernel modules. 4. Make sure that custom applications and kernel modules are installed correctly each time a code change is made.
[23:59] <bray90820> Is there any way to add a folder to the genome sidebar in 19.04
[23:59] <bray90820> *Gnome