[00:37] sdeziel: I just did an apt-get dist-upgrade that ran probably 15 or 20 times faster than it might have, thanks to removing os-prober :)
[00:37] sarnold: hehe :)
[00:37] granted I still haven't rebooted.. but at least the upgrades are going way faster :)
[00:40] sarnold: if you'd like to make "apt-get update" faster too: https://paste.ubuntu.com/p/kMh2JXW4jC/
[00:51] sdeziel: heh, I did the acquire-languages one once and apt-cache search didn't work quite right for ages, and it took me a lot longer than I'd care to admit to figure out that I'd done it to myself :)
[01:01] apt-cache search works well here for some reason, go figure :)
[01:02] hmmmm
[01:02] I "inflict" those settings on every machine I manage, so I hope they don't cause problems
[01:07] good evening
[08:46] Good morning
[09:32] is there any way to enable a network interface that was disabled at installation?
[09:32] i am asking this because i can't see them using ifconfig -a
[09:36] If ifconfig -a can't see it then I think the NIC isn't visible to userspace at all (unless you're in a different namespace, which I presume you're not)
[09:36] Sounds like a driver or BIOS issue.
[09:37] this should be related to the ubuntu 18 installation
[09:37] Module not loaded/available?
[09:37] at install it saw all of them and i can see them using lspci
[09:37] but i disabled some of them using the disable option
[09:45] ok, i can see them now after reboot
[09:45] i don't know why i was not able to see them before
[10:10] is it possible to specify a custom non-53 dns port in "nmcli con mod"?
[10:42] Hi, i'm not getting much help over on ubuntu so I thought I'd try here
[10:42] I'm getting this error when trying to do a release upgrade: https://paste.ubuntu.com/p/JJkzN4Kdby/
[10:42] any ideas?
[10:47] Hackerpcs: may I ask why you would want that? And is it for the client side or the server (dnsmasq/systemd-resolved) side?
[10:48] I wanted to direct it to the dnscrypt-proxy I run via docker on localhost:53535
[10:49] dnsmasq runs on 53
[10:49] Hackerpcs: ideally, since that's localhost, you'd want to turn off dnsmasq and have the docker listen on port 53
[10:50] Meanwhile, I think you can run port forwarding without NAT via iptables, but I'm not sure of the exact iptables syntax OTOH
[10:51] actually I think it's regular DNAT, just for the same IP but a different port.
[10:51] I don't want it to act like a server for any other endpoint, just locally
[10:52] yes, that doesn't conflict with anything I said. :)
[10:54] you have several options really, none of which require you to alter the port number for the client/lookup side. You can do DNAT, you can disable all but the docker as the resolver, or you can run the docker on a separate IP, but port 53, and simply adjust your /etc/resolv.conf
[10:54] (directly or via whichever tool you use that manages resolv.conf)
[10:54] I think I'll go the route of disabling dnsmasq in NM and systemd-resolved
[10:55] and have dnscrypt act directly on 53
[10:55] that's what I'd do. Keep it simple.
[11:07] http://releases.ubuntu.com/18.04/ubuntu-18.04.2.0-live-server-amd64.iso.torrent , as found at https://www.ubuntu.com/download/alternative-downloads#bittorrents is a 404
[11:07] who could fix this?
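For reference, a minimal sketch of the DNAT/REDIRECT idea from 10:50-10:51, assuming the dnscrypt-proxy really does listen on 127.0.0.1:53535; the rules are untested here and only cover locally generated lookups, which traverse the nat OUTPUT chain:

    # rewrite local lookups aimed at 127.0.0.1:53 to the dnscrypt-proxy port
    iptables -t nat -A OUTPUT -d 127.0.0.1 -p udp --dport 53 -j REDIRECT --to-ports 53535
    iptables -t nat -A OUTPUT -d 127.0.0.1 -p tcp --dport 53 -j REDIRECT --to-ports 53535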
[11:10] hm, dnscrypt via docker doesn't seem to like being run on 53: "[FATAL] listen udp 0.0.0.0:53: bind: permission denied"
[11:12] tomreyn: I think it should be http://releases.ubuntu.com/18.04/ubuntu-18.04.2-live-server-amd64.iso.torrent
[11:13] let us highlight a few people on #ubuntu-devel about it
[11:13] Hackerpcs: I don't know docker, but ports <1024 require CAP_NET_BIND_SERVICE for unprivileged users
[11:13] thanks cpaelzer
[11:29] I'm trying it directly outside docker first to see how to configure it, and then I'll try it in docker. I've commented out dnsmasq in the NetworkManager config and stopped/disabled the systemd-resolved service, but /etc/resolv.conf keeps getting reset
[11:29] shouldn't it be used directly after those two are stopped?
[11:34] Hackerpcs: I think you'll need dns=none in NetworkManager.conf rather than just commenting it out. systemd-resolved you'll have to "mask", not just disable
[11:34] Hackerpcs: then restart NetworkManager, unlink resolv.conf, and install a static one under /etc (because the default is a symlink into /run/)
[11:35] unlink /etc/resolv.conf, that's what I mean
[11:36] Hackerpcs: btw, if it runs outside of docker.... why do you need docker in the first place?
[11:38] I had a ready-made compose with pihole/dnscrypt/openvpn from home https://pastebin.com/vvGp56tL and on the vps I'm working on I just wanted the openvpn and dnscrypt part, so I just tried whether it works directly
[11:40] Hackerpcs: my question still stands. if it runs normally, why use docker at all. docker is primarily useful when you need environments that can't exist natively (eg. library conflicts or lack of natively packaged solutions). everything else is just abuse of docker and complicating things for no benefit.
[11:40] if it's isolation/security that you need, systemd service unit files have plenty of options to containerize a service with far less complexity.
[11:41] the main thing I wanted was to run openvpn on the vps, and the docker image used, kylemanna/openvpn, makes it very easy. after that I want dnscrypt as the dns resolver; at home it is used by pihole, here I don't want pihole, just dnscrypt directly
[11:42] the reason to use docker in the first place was the ease of setting up openvpn with the image
[11:42] so dnscrypt is part of the openvpn dock(er)?
[11:43] yeah, no need for it to be in docker here though, so it can run alone directly outside of docker
[11:43] Sure. simplify as much as possible.
[11:45] use of docker at home (irrelevant to the vps) is because with docker setting up pihole and openvpn is very easy in contrast to installing them directly, and it doesn't touch the main system files with their settings, files, etc
[11:46] that's a rather vague reason. openvpn is packaged in Ubuntu and thus its files are perfectly under control. the only problem with clobbering "main system files" is when you install outside of the package manager, which is what docker was primarily invented for: deployment of software alien to the environment where it's installed.
[11:48] the docker container mentioned above has scripts in it to automate the creation of the CA, client certs, etc. it takes 6 commands to set up the openvpn server compared to setting it up yourself
[11:48] yes, and when it breaks you'll be left with no knowledge of how to fix it, because you never understood the process. ;)
[11:50] Why would it break? It's a docker image that's regularly maintained
[11:50] because software breaks. sooner or later.
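A minimal sketch of the 11:34 suggestion, assuming a stock NetworkManager setup; the drop-in filename and the choice of 127.0.0.1 as the resolver are only illustrative:

    # tell NetworkManager to leave resolv.conf alone
    printf '[main]\ndns=none\n' | sudo tee /etc/NetworkManager/conf.d/90-dns-none.conf
    # mask systemd-resolved, not just disable it
    sudo systemctl disable --now systemd-resolved
    sudo systemctl mask systemd-resolved
    # replace the /run symlink with a static file pointing at the local resolver
    sudo unlink /etc/resolv.conf
    printf 'nameserver 127.0.0.1\n' | sudo tee /etc/resolv.conf
    sudo systemctl restart NetworkManager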
[11:50] even that one: https://github.com/kylemanna/docker-openvpn/issues
[11:51] everything has issues, if I encounter one I'll deal with it :)
[11:51] _anyway_ ... keep in mind docker is probably running it unprivileged by default, so your tests outside of it should reflect that (ie don't run as root), for which it'll need the CAP_NET_BIND_SERVICE capability, easily set via a systemd unit file
[11:53] I won't try to run it from docker any more, I want to run dnscrypt on two more VPSes that don't have docker nor need it, so I'll take the direct route on all three
[11:53] (set via CapabilityBoundingSet AND AmbientCapabilities)
[13:27] Hello. I had a wordpress site which was deleted by mistake (the www folder). Now I reinstalled wordpress and worked on it. The apache error log says /var/www/kamanacademy.com/public_html/runtime/wp-content/plugins/akismet/.htaccess: Require not allowed here, referer:
[13:28] I investigated the matter and read that AuthConfig should be set, and I did that, but the problem was still there
[13:29] Seems more like an apache/nginx or wordpress issue to me
[13:32] siavoshkc: read http://httpd.apache.org/docs/2.4/howto/htaccess.html#when
[13:33] Is there a way in systemd to make an ExecStart line that, if an env var is set to "no", returns 0 rather than running the normal command?
[13:33] short version: if you manage the server configuration yourself and can move instructions from the .htaccess file into the vhost configuration, then do it, and disable htaccess. If you cannot (why? should only be the case on shared hosting) then loosen AllowOverride.
[13:34] Some kind of ExecStart=if [ "$SHOULD_BE_TRUE" != "true" ]; then return 0; else start_my_daemon; fi
[13:43] tomreyn: Well. Nothing there is done by me. This is one of the default plug-ins of WP. It also creates that .htaccess file by itself
[13:45] I am afraid I may not be able to fully discover the reason this plug-in is using .htaccess there. akismet (the plug-in) even creates that file on a Win machine
[13:46] siavoshkc: if you instruct apache httpd to ignore htaccess files, then it will also ignore this one. you'd just need to ensure you keep the configuration you imported into the vhost up to date.
[13:46] https://termbin.com/f3b0
[13:51] siavoshkc: did you read the full "When (not) to use .htaccess files" section of the Apache HTTPd how-to i pointed you to? did you understand it?
[13:53] tomreyn: No. I will. But for now I know that for some reason wordpress uses .htaccess files by design.
[13:54] I cannot configure the server to ignore them. I added AuthConfig to the server conf, this time in the correct ssl conf file. Seems it is fixed.
[13:55] I will check if it is WP general policy or not.
[13:55] yes, wordpress provides .htaccess files out of the box. this is because it is often used in shared hosting, where .htaccess files may be the only way (if with a performance impact) to add webserver directives.
[13:56] AllowOverride None would make apache httpd ignore .htaccess files (IIRC)
[13:59] i'm trying to help you make your website perform a little better, at the expense of having to review / monitor configuration changes on wordpress upgrades. you can choose not to want this, in which case you should loosen AllowOverride to no longer disallow Require directives in .htaccess files (the alternative i also suggested initially).
[14:00] this would provide more automation but, on the downside, less oversight / understanding of how wordpress operates.
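A rough sketch of the CapabilityBoundingSet/AmbientCapabilities settings mentioned at 11:51 and 11:53, for running dnscrypt-proxy on port 53 as an unprivileged user; the unit name, user, binary path and flags below are placeholders, not the packaged unit:

    [Unit]
    Description=dnscrypt-proxy (local DNS forwarder)
    After=network.target

    [Service]
    User=dnscrypt
    # path and flags are assumptions; adjust to wherever the binary and config live
    ExecStart=/usr/local/bin/dnscrypt-proxy -config /etc/dnscrypt-proxy/dnscrypt-proxy.toml
    # both lines are needed so the unprivileged user can bind port 53
    CapabilityBoundingSet=CAP_NET_BIND_SERVICE
    AmbientCapabilities=CAP_NET_BIND_SERVICE

    [Install]
    WantedBy=multi-user.target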
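As for the 13:33/13:34 question: ExecStart= does not run a shell by itself, so one way to sketch an env-var-gated start is to wrap the test in sh -c; SHOULD_RUN, the defaults file and the daemon path are hypothetical:

    [Service]
    EnvironmentFile=-/etc/default/mydaemon
    # exit 0 (skip the daemon) when SHOULD_RUN=no, otherwise exec the real command
    ExecStart=/bin/sh -c '[ "$SHOULD_RUN" = "no" ] && exit 0; exec /usr/sbin/mydaemon'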
[14:01] general webserver help is also available in #httpd
[14:01] (for apache httpd only)
[14:27] I'm looking at a package that has tests disabled for armhf due to crashes, but that was back in 2017. If I wanted to check whether the tests are alright now, how would I do so? Do I need an armhf machine?
[14:41] kstenerud: I wonder if tests run in PPAs; if yes, that might fit the bill
[14:45] tomreyn: I understood that .htaccess files should be avoided in my case as I have root access to it (well proven :-). But I am not capable of reconfiguring apache right now. I should study more to move configurations from .htaccess files to the main conf file.
[15:01] siavoshkc: i don't blame you - just moving the configs is really simple though
[15:02] this said, it's a good approach to make conscious decisions about what one wants to handle and what seems like too much.
[15:03] the performance impact is probably one you can handle differently, or may not actually need to handle at all.
[15:05] tomreyn: You know, I just understood what is going on here while I was chatting with you. Now I need time to digest this knowledge and investigate the .htaccess files to see why they use so many rewrite rules, how many of them exist, and how I should move them to .conf in a controlled manner.
[15:09] :) makes sense.
=== BlackDex_ is now known as BlackDex
[15:55] rbasak: curious if you'd ever seen this https://git.centos.org/
[15:56] i had not. it looks pretty impressive / workalike
[16:06] Interesting, thanks!
[17:02] I'm trying to use NRPE to monitor Xenial machines from a Bionic Nagios. The NRPE communication fails with "CHECK_NRPE: (ssl_err != 5) Error - Could not complete SSL handshake with 172.22.30.2: 1". Any idea how to work around it?
[17:12] sdeziel: handshake fails - incompatible ssl library versions? you could test with "openssl s_client -connect ip:port" and add extra parameters to limit the ssl / tls protocol version and cipher suites.
[17:12] or you use a script which automates it, such as testssl, sslyze or the like.
[17:13] tomreyn: the pcap shows that it wants to use an anon cipher with TLS 1.2
[17:13] sounds like you should be filing a bug report
[17:13] and tag it "security", that's unless you modified the defaults
[17:14] but the README mentions communication problems between 2.15 (Xenial) and Bionic (3.2.1)
[17:15] do you have a link, more info on where to find this?
[17:15] I always knew NRPE's SSL implementation was shitty, but this interop problem I cannot work around
[17:15] tomreyn: /usr/share/doc/nagios-nrpe-server/NEWS.Debian.gz on Bionic
[17:21] "Because the default SSL support without certificates configured in nrpe.cfg uses pre-generated key data, configuring SSL certificates is strongly advised when STunnel is not used."
[17:21] ...sounds dodgy.
[17:22] yeah, it is
[17:22] telling the agent side (Xenial) to not use SSL makes it work
[17:22] but i'm not sure what "pre-generated key data" refers to specifically. if this is the snakeoil certificates which are generated individually per system, maybe it's ok.
[17:22] but I'd rather not have to reconfigure all my agents
[17:22] tomreyn: I think it means a compile-time generated key
[17:23] i'm afraid so, too. which i'd consider a security bug, if it is a default config.
[17:24] however, this should be mostly unrelated to the issue you're running into
[17:25] right
[17:25] however shitty the TLS implementation is, I'd like it to work ;)
[17:26] i.e. i don't think this explains why anonymous cipher suites are used when TLS *is* used, nor why your handshake fails (unless that's because the smarter end, probably the bionic server, points out that it is not going to connect with an anon cipher)
[17:27] with SSL/TLS there are usually two things you can choose between (though this is now finally changing): (1) make it work, (2) make it work in a way that it provides reasonable security
[17:28] maybe start with a bug report and we explore it further from there?
[17:28] tomreyn: the client side is Bionic and I see the server hello from Xenial just fine. The Bionic client then kills the connection with an alert: handshake failure
[17:29] yeah, I'll probably open one if there isn't already, I should have checked before asking here
[17:29] and how did you tell it's using an anon cipher?
[17:30] LP: #1782650
[17:30] Launchpad bug 1782650 in nagios-nrpe (Ubuntu) "nrpe plugin in bionic fails with "Error - Could not complete SSL handshake"" [Undecided,New] https://launchpad.net/bugs/1782650
[17:30] tomreyn: wireshark
[17:30] oh right, you mentioned this
[17:31] do you see this, too? "My syslog from the nagios server reports errors such as: check_nrpe: Error: (!log_opts) Could not complete SSL handshake with xxx.xxx.xxx.xxx: dh key too small"
[17:32] quote from comment 4
[17:32] tomreyn: yes!
[17:33] well, 512-bit DH keys were a good thing back when Tim Berners-Lee was doing something useful.
[17:34] don't we love export grade ciphers :)
[17:37] so i guess the nrpe package in xenial needs a fix which allows it to use longer keys (but maybe doesn't default to it?).
[17:39] the server gets to pick the DH key AFAICT, so it seems the client side (Bionic) would need to accept such weak keys
[17:42] the 'server' here runs on xenial, right? i.e. my current understanding is that the connection is initiated by an ssl client on bionic to something ("agent") running on the xenial system. i'm assuming this because you said "the client side is Bionic and I see the server hello from Xenial just fine."
[17:44] correct
[17:44] but in TLS, the server picks the DH group and the client agrees or not (kills the connection)
[17:45] tomreyn: here's the cipherscan output of the Xenial server: https://paste.ubuntu.com/p/V8ZgrhKgQW/
[17:45] so this agent running on xenial, which is the server in the TLS connection, needs to be updated / patched so that it can support, and can be configured to use, 2048-bit DH.
[17:46] that would likely introduce regressions if the server were to pick bigger DH params
[17:46] which is why I _think_ the new client should be told to support weak DH params to interop with old servers (agents)
[17:47] tomreyn: do you know if there is some param to pass to openssl to tell it to not bother with too small DH params?
[17:49] you can specify a cipher suite
[17:49] but as your scan suggests, there is a larger problem, which is that there is *only* support for ADH cipher suites
[17:51] which basically means you're happy with MitM and don't need to know about it.
[17:51] I don't mind the ADH thing
[17:51] yeah, I knew that from the get go
[17:52] so we should separate fixing your immediate issue - monitoring not working - from fixing the actual issue - nagios using terrible TLS.
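A concrete form of the 17:12 suggestion, probing the Xenial agent's TLS directly; 172.22.30.2 comes from the error message above, 5666 is assumed to be NRPE's usual port, and on newer openssl the anonymous suites may additionally need @SECLEVEL=0 to be offered at all (whether that also makes the client accept the 512-bit DH is untested):

    # basic probe, limited to TLS 1.2
    openssl s_client -connect 172.22.30.2:5666 -tls1_2
    # old nrpe only offers anonymous-DH suites, so ask for those explicitly
    openssl s_client -connect 172.22.30.2:5666 -cipher 'aNULL:@SECLEVEL=0'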
[17:53] Hey guys, i'm looking to learn more about linux server management, so i thought the best way to do this would be to set up my own server from scratch and use it to stream my content to my phone, tv, etc
[17:54] debating whether i should buy my own vps or set one up on my home network with a spare pc or something
[17:54] tomreyn: nrpe supports proper TLS apparently, but I don't want to set up a PKI for this :/
[17:54] sdeziel: i think the reason you can't convince bionic to use 512-bit DH will be that its libssl (i'm blindly assuming it uses this for now) will not allow that / no longer supports it.
[17:55] tomreyn: yes, that's my understanding of it
[17:55] sdeziel: do you have a workaround which works for you yet?
[17:56] my understanding is that you can do clear text, but not sure you want this.
[17:56] tomreyn: correct, clear text works but requires reconfiguring all the agents
[17:58] tomreyn: I've attached my pcap to the LP if you'd like to take a peek (although we already discussed most of it)
[18:03] openssl seems to now reject DH < 768 bits
[18:03] i don't think you'll get around replacing + reconfiguring all xenial agents if you'd like to actually fix this.
[18:06] yes, current libssl sets some slightly elevated, still much too low, new standards, and things break (now obviously, while already broken before really).
[18:07] https://git.openssl.org/gitweb/?p=openssl.git;a=blob;f=ssl/s3_clnt.c;h=218534734dd7e7467a14d1d0914207a77343e218;hb=refs/heads/OpenSSL_1_0_2-stable#l3611
[18:07] so maybe if I can make it use an export grade cipher, the client won't reject the 512-bit DH
[18:11] but EXPORT ciphers are not available on Bionic
[18:11] I'll bite the bullet and reconfig all my agents I guess
[18:13] you could also rebuild the bionic 'client' against an older libssl. but... HBD
[18:15] tomreyn: yeah, no way ;)
[18:15] btw line 20 of https://paste.ubuntu.com/p/V8ZgrhKgQW/ is also a problem, if true.
[18:16] so whether nagios does TLS properly has yet to be seen. i'm a bit happy right now that i no longer depend on it.
[18:18] I was wondering about switching to running checks over SSH and ditching NRPE, this might push me in that direction
[18:21] sounds error-prone and complex, i guess i'd rather do the PKI, even if annoying.
[18:22] sdeziel: what i don't yet understand is why this popped up for you today
[18:22] did you deploy updates on either end?
[18:22] tomreyn: I started monitoring my legacy (Xenial) machines
[18:22] oh ok
[18:23] before, those were monitored by an even older NMS (Trusty)
[18:23] by the way there is this https://github.com/NagiosEnterprises/nrpe/issues/30
[18:25] the last post refers to a branch adding a "backward-compatible update for SSL/TLS" in enterprise nrpe. maybe this is just what we were looking at initially.
[18:26] well, this is old, ignore it
[18:28] well, if only the old nrpe allowed configuring the cipher list...
[18:28] case closed ;)
[18:28] tomreyn: many thanks!
[18:28] have a good weekend (leaving early :) )
[18:28] i'm not sure i added much, but it was 'fun'.
[20:07] Hi guys
[20:10] Please help me with a wireless issue
[20:11] !details | zzlatev
[20:11] zzlatev: Please elaborate; your question or issue may not seem clear or detailed enough for people to help you. Please give more detailed information; for example, we might need errors, steps, relevant configuration files, Ubuntu version, and hardware information. Use a !pastebin to avoid flooding the channel.
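A quick sketch of the checks-over-SSH idea from 18:18, using check_by_ssh from the standard monitoring plugins; the hostname and the remote check are placeholders, and key-based SSH access for the nagios user is assumed:

    /usr/lib/nagios/plugins/check_by_ssh -H xenial-host \
        -C '/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6'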
[20:11] lordcirth__: I use ubuntu server 16, but I don't have wireless
[20:11] Intel(R) Pentium(R) Dual CPU T2310 @ 1.46GHz
[20:11] network controller: Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 02)
[20:12] I have only a wired connection
[20:13] lordcirth__: should I install firmware-b43-installer?
[20:14] zzlatev, this seems to be a complete guide: https://askubuntu.com/questions/55868/installing-broadcom-wireless-drivers
[20:14] when I type: lspci -nn -d 14e4:
[20:15] it shows: Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 02)
[20:15] without "wireless network adapter"
[20:17] лор
[20:17] lordcirth__:
[20:17] лор == floor?
[20:17] WLAN means wireless
[20:18] dang. google says "lore". not what I expected, by a lot ;)
[20:18] nope, this is the bulgarian language, sorry about that
[20:18] лорд - lord
[20:19] lordcirth__: after this, how can I configure the wireless settings?
[20:20] zzlatev, What Ubuntu server version?
[20:21] 16.04
[20:21] https://askubuntu.com/questions/464507/ubuntu-14-04-server-wifi-wpa2-personal/464552#464552
[20:22] I will try
[20:28] lordcirth__:
[20:28] [ 24.265305] b43-phy0: Broadcom 4311 WLAN found (core revision 13)
[20:29] how can I check whether wifi works
[20:29] zzlatev, did you set it up in /etc/network/interfaces?
[20:30] # The loopback network interface
[20:30] auto lo
[20:30] iface lo inet loopback
[20:30] # The primary network interface
[20:30] auto enp2s1
[20:30] iface enp2s1 inet dhcp
[20:31] I have two network interfaces
[20:34] lordcirth__:
[20:34] So no, you didn't
[20:35] 'ip addr' will show you what interfaces are available to be used
[20:36] lordcirth__: https://pastebin.com/s25E5A6a
[20:36] after ip addr
[20:36] zzlatev, so, do the setup I linked, using 'wlan0' as the interface name
[20:36] like this:
[20:36] auto wlan0
[20:36] iface wlan0 inet static
[20:36] address 192.168.1.150
[20:36] netmask 255.255.255.0
[20:36] gateway 192.168.1.1
[20:36] wpa-ssid <ssid>
[20:36] wpa-psk <psk>
[20:37] dns-nameservers 8.8.8.8 192.168.1.1
[20:37] ?
[20:37] Yes. But please don't paste so many lines to the channel, use a pastebin
[20:37] OK, sorry
[20:38] lordcirth__: after the two already existing blocks?
[20:38] yes
[20:39] like this:
[20:39] https://pastebin.com/ReDs1u3J
[20:40] be sure to set the ssid and psk to whatever you actually need
[20:41] wpa-ssid Zlatev wpa-psk siriussam
[20:41] should I use < > ?
[20:41] No, <> is used to indicate "substitute here"
[20:41] OK, now I save
[20:41] how could I try?
[20:42] I think: ifup wlan0
[20:42] wpa-ssid is the network name, right?
[20:42] yes
[20:43] RTNETLINK answers: File exists
[20:43] Failed to bring up wlan0.
[20:44] sarnold: after ifup wlan0
[20:44] is there anything near the end of dmesg output?
[20:44] no, only RTNETLINK answers: File exists
[20:44] Failed to bring up wlan0.
[20:44] ifdown, then try again?
[20:45] lordcirth__:
[20:45] ifdown: interface wlan0 not configured
[20:45] zzlatev, ip link set wlan0 down
[20:45] then ifup again
[20:45] ifdown: interface wlan0 not configured
[20:46] it shows that the interface is not configured..
[20:46] after ifup: RTNETLINK answers: File exists
[20:46] Failed to bring up wlan0.
[20:49] lordcirth__: sarnold
[20:49] https://pastebin.com/qC6ENgxh
[20:50] after: sudo ifdown wlan0 && sudo ifup -v wlan0
[20:50] seems like I'm going somewhere
[20:50] zzlatev, sometimes it says file exists, but still does something. 'ip addr show wlan0'
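To make the 20:13 and 20:29 questions a bit more concrete: the b43 firmware comes from the firmware-b43-installer package (in multiverse), and once the interface is configured, one way to check whether the card actually associated is iw; the interface name assumes wlan0 and iw may need installing first:

    sudo apt install firmware-b43-installer
    # after configuring and bringing the interface up:
    iw dev wlan0 link        # prints the associated AP, or "Not connected."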
[20:50] hmm, I wonder if it's that ip route add command that fails
[20:51] Ah, yeah, could be that the default route exists already on the ethernet
[20:51] You might want to remove the 'gateway' line
[20:51] gateway 192.168.0.1 ?
[20:51] or try ifdown enp2s1 ?
[20:53] zzlatev, you should only have one gateway line in the file
[20:53] after I removed the gateway I got the same error
[20:53] I have only one gateway line?
[20:55] lordcirth__:
[20:56] zzlatev, not sure what's up. what's 'ip route' say?
[20:56] ./bin/ip addr add 192.168.0.110/255.255.255.0 broadcast 192.168.0.255 dev wlan0 label wlan0
[20:56] ./bin/ip link set dev wlan0 up
[20:56] https://pastebin.com/dZmiVbuD
[21:02] lordcirth__: sarnold
[21:02] After I reboot, it doesn't show up in the dhcp client list
[21:02] in my router
[21:02] so... i get facked
[21:03] you configured it for static
[21:09] yes, but it doesn't turn on without a lan cable
[21:15] sarnold: what do you mean
[21:15] zzlatev: you gave the wlan0 interface IP address 192.168.0.110 -- so there's no reason for the router's dhcp client list to list this NIC
[21:16] sarnold: so what should I set
[21:17] zzlatev: heh, it's not my network, I don't know what to suggest :)
[21:17] zzlatev: if you want it static, set it static.. if you want it dhcp, set it to use dhcp :)
[21:17] ok
[23:03] Is there an option I can pass to an install to not bother with creating a user?
[23:26] Epx998: since root is disabled by default, you don't have a choice but to create a user regardless of the method of installation (whether live server or alternate or preseeded)
[23:26] so 'not really' is the answer here.
[23:28] i guess if it was a desktop installation you could do an oem install
[23:32] ^
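For completeness, a minimal /etc/network/interfaces sketch of the DHCP variant suggested at 21:17; the SSID and passphrase are placeholders:

    auto wlan0
    iface wlan0 inet dhcp
        wpa-ssid <your-ssid>
        wpa-psk <your-passphrase>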