[00:37] <sarnold> sdeziel: I just did an apt-get dist-upgrade that ran probably 15 or 20 times faster than it might have, thanks to removing os-prober :)
[00:37] <sdeziel> sarnold: hehe :)
[00:37] <sarnold> granted I still haven't rebooted..but at least the upgrades are going way faster :)
[00:40] <sdeziel> sarnold: if you'd like to make "apt-get update" faster too: https://paste.ubuntu.com/p/kMh2JXW4jC/
[00:51] <sarnold> sdeziel: heh, I did the acquire-languages one once and apt-cache search didn't work quite right for ages and it took me a lot longer than I'd care to admit to figure out that I'd done it to myself :)
[01:01] <sdeziel> apt-cache search works well here for some reason, go figure :)
[01:02] <sarnold> hmmmm
[01:02] <sdeziel> I "inflict" those settings on every machine I manage so I hope it doesn't cause problems </BOFH>
[01:07] <sdeziel> good evening
[08:46] <lordievader> Good morning
[09:32] <patstoms> is there any way to enable disabled network interface at installation?
[09:32] <patstoms> i am asking this because i can't see them using ifconfig -a
[09:36] <rbasak> If ifconfig -a can't see it then I think the NIC isn't visible to userspace at all (unless you're in different namespace which I presume you're not)
[09:36] <rbasak> Sounds like a driver or BIOS issue.
[09:37] <patstoms> this should be related to the ubuntu 18.04 installation
[09:37] <lordievader> Module not loaded/available?
[09:37] <patstoms> at install it saw all of them and i can see them using lspci
[09:37] <patstoms> but i disabled some of them using disable option
[09:45] <patstoms> ok, i can see them now after reboot
[09:45] <patstoms> i don't know why i was not able to see them before
[10:10] <Hackerpcs> is it possible to specify a custom non-53 dns port in "nmcli con mod"?
[10:42] <ren0v0> Hi, i'm not getting much help over on ubuntu so thought would try here
[10:42] <ren0v0> I'm getting this error when trying to do a release upgrade:  https://paste.ubuntu.com/p/JJkzN4Kdby/
[10:42] <ren0v0> any ideas?
[10:47] <blackflow> Hackerpcs: may I ask why would you want that? And is it for client side or server (dnsmasq/systemd-resolved) side?
[10:48] <Hackerpcs> I wanted to direct it to localhost:53535 dnscrypt-proxy I run via docker
[10:49] <Hackerpcs> dnsmasq runs on 53
[10:49] <blackflow> Hackerpcs: ideally, since that's localhost, you'd want to turn off dnsmasq and have the docker container listen on port 53
[10:50] <blackflow> Meanwhile, I think you can run port forwarding without NAT via iptables, but I'm not sure of the exact iptables syntax offhand
[10:51] <blackflow> actually I think it's regular DNAT just for the same IP but different port.
[10:51] <Hackerpcs> I don't want it to act like a server for any other endpoint, just locally
[10:52] <blackflow> yes that doesn't conflict anything I said. :)
[10:54] <blackflow> you have several options really, none of which require you to alter the port number for the client/lookup side. You can do DNAT, you can disable all but the docker as the resolver, or you can run the docker on a separate IP, but port 53, and simply adjust your /etc/resolv.conf
[10:54] <blackflow> (directly or whichever tool you use that manages resolv.conf)
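For the DNAT option blackflow mentions, a sketch of what the rules might look like (assuming dnscrypt-proxy listens on 127.0.0.1:53535, as stated above; rules must run as root and are illustrative, not tested here):

```shell
# Redirect locally generated DNS lookups to the dnscrypt-proxy port (assumed 53535).
# The nat OUTPUT chain handles traffic originating on this host.
iptables -t nat -A OUTPUT -p udp -d 127.0.0.1 --dport 53 -j DNAT --to-destination 127.0.0.1:53535
iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 53 -j DNAT --to-destination 127.0.0.1:53535
```

For same-host redirection, `-j REDIRECT --to-ports 53535` is an equivalent shorthand.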
[10:54] <Hackerpcs> I think I'll go the route of disabling dnsmasq in NM and systemd-resolved
[10:55] <Hackerpcs> and have dnscrypt act directly on 53
[10:55] <blackflow> that's what I'd do. Keep it simple.
[11:07] <tomreyn> http://releases.ubuntu.com/18.04/ubuntu-18.04.2.0-live-server-amd64.iso.torrent , as found at https://www.ubuntu.com/download/alternative-downloads#bittorrents is a 404
[11:07] <tomreyn> who could fix this?
[11:10] <Hackerpcs> hm dnscrypt via docker doesn't seem to like being run on 53, "[FATAL] listen udp 0.0.0.0:53: bind: permission denied"
[11:12] <cpaelzer> tomreyn: I think it should be http://releases.ubuntu.com/18.04/ubuntu-18.04.2-live-server-amd64.iso.torrent
[11:13] <cpaelzer> let us highlight a few people on #ubuntu-devel about it
[11:13] <blackflow> Hackerpcs: I don't know docker, but ports <1024 require CAP_NET_BIND_SERVICE for unprivileged users
[11:13] <tomreyn> thanks cpaelzer
[11:29] <Hackerpcs> I'm trying first directly outside docker to see how to configure it and then try in docker. I've commented out dnsmasq in networkmanager config and stopped/disabled systemd-resolved service but /etc/resolv.conf keeps getting reset
[11:29] <Hackerpcs> shouldn't it be used directly after those two are stopped?
[11:34] <blackflow> Hackerpcs: I think you'll need dns=none rather than just commenting it out, in NetworkManager.conf. systemd-resolved you'll have to "mask", not just disable
[11:34] <blackflow> Hackerpcs: then restart NetworkManager, and unlink resolv.conf, install a static one under /etc (because the default is a symlink into /run)
[11:35] <blackflow> unlink /etc/resolv.conf that's what I mean
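The steps blackflow outlines, sketched as commands (paths are the Ubuntu defaults; run as root; the nameserver line assumes the resolver ends up on localhost):

```shell
# 1. Tell NetworkManager to stop managing resolv.conf:
#    add "dns=none" under [main] in /etc/NetworkManager/NetworkManager.conf

# 2. Mask (not just disable) systemd-resolved, then restart NM:
systemctl mask --now systemd-resolved
systemctl restart NetworkManager

# 3. Replace the symlink into /run with a static file:
unlink /etc/resolv.conf
printf 'nameserver 127.0.0.1\n' > /etc/resolv.conf
```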
[11:36] <blackflow> Hackerpcs: btw, if it runs outside of docker.... why do you need docker in the first place?
[11:38] <Hackerpcs> I had a ready made compose with pihole/dnscrypt/openvpn from home https://pastebin.com/vvGp56tL and on the vps I'm working on I just wanted the openvpn and dnscrypt part so I just tried whether it works directly
[11:40] <blackflow> Hackerpcs: my question still stands. if it runs normally, why use docker at all. docker is primarily useful when you need environments that can't exist natively (eg. library conflicts or lack of native packaged solutions). everything else is just abuse of docker and complicating things for no benefit.
[11:40] <blackflow> if it's isolation/security that you need, systemd service unit files have plenty of options to containerize a service with far less complexity.
[11:41] <Hackerpcs> the main thing I wanted was to run openvpn on the vps and the docker image used kylemanna/openvpn makes it very easy. after that I want dnscrypt as the dns resolver, on home it is used by pihole, here I don't want pihole, just dnscrypt directly
[11:42] <Hackerpcs> reason to use docker in the first place was the ease of setting up openvpn with the image
[11:42] <blackflow> so dnscrypt is part of the openvpn dock(er)?
[11:43] <Hackerpcs> yeah, no need to be in docker here though so it can run alone directly outside of docker
[11:43] <blackflow> Sure. simplify as much as possible.
[11:45] <Hackerpcs> use of docker at home (irrelevant to the vps) is because with docker setting up pihole and openvpn is very easy in contrast to installing them directly, and it doesn't touch the main system files with their settings, files, etc
[11:46] <blackflow> that's a rather vague reason. openvpn is packaged in Ubuntu and thus with files perfectly under control. the only problem with clobbering "main system files" is when you install outside of the package manager, for which docker was primarily invented: deployment of software alien in the environment where it's installed.
[11:48] <Hackerpcs> the docker container mentioned above has scripts in it to automate the creation of CA, client certs, etc. it takes 6 commands to setup the openvpn server compared to setting it up yourself
[11:48] <blackflow> yes and when it breaks you'll be left with no knowledge of how to fix it, because you never understood the process. ;)
[11:50] <Hackerpcs> Why would it break? It's a docker image regularly maintained
[11:50] <blackflow> because software breaks. sooner or later.
[11:50] <blackflow> even that one:  https://github.com/kylemanna/docker-openvpn/issues
[11:51] <Hackerpcs> everything has issues, if I encounter one I'll deal with it :)
[11:51] <blackflow> _anyway_ ... keep in mind docker is probably running it unprivileged by default, so your tests outside of it should reflect that (ie don't run as root). for which it'll need CAP_NET_BIND_SERVICE capability, easily set via systemd unit file
[11:53] <Hackerpcs> I won't try to run it more from docker, I want to run dnscrypt on two more VPS that don't have docker nor need it so I'll take the direct route on all three
[11:53] <blackflow> (set via CapabilityBoundingSet   AND  AmbientCapabilities)
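Put together, a minimal unit-file sketch of what blackflow describes (the binary path, user, and unit name are hypothetical placeholders):

```ini
# /etc/systemd/system/dnscrypt-proxy.service  (sketch; paths are assumptions)
[Unit]
Description=dnscrypt-proxy listening on port 53

[Service]
User=dnscrypt
ExecStart=/usr/local/bin/dnscrypt-proxy
# Allow binding to a port <1024 as an unprivileged user:
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target
```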
[13:27] <siavoshkc> Hello. I had a wordpress site which was deleted by a mistake. The www folder. Now I reinstalled wordpress and worked on it. Error log of apache says /var/www/kamanacademy.com/public_html/runtime/wp-content/plugins/akismet/.htaccess: Require not allowed here, referer:
[13:28] <siavoshkc> I investigated the matter and read that AuthConfig should be set and I did it. But problem was still there
[13:29] <Hackerpcs> Seems more like apache/nginx or wordpress issue to me
[13:32] <tomreyn> siavoshkc: read http://httpd.apache.org/docs/2.4/howto/htaccess.html#when
[13:33] <kstenerud> Is there a way in systemd to make an ExecStart line that, if an env var is set to "no", returns 0 rather than running the normal command?
[13:33] <tomreyn> short version: if you manage the server configuration yourself and can move instructions from the .htaccess file into the vhost configuration, then do it, and disable htaccess. If you cannot (why? should only be the case on shared hosting) then loosen AllowOverride.
[13:34] <kstenerud> Some kind of ExecStart=if [ "$SHOULD_BE_TRUE" != "true" ]; then return 0; else start_my_daemon; fi
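One common workaround for the pattern kstenerud sketches (hedged: `SHOULD_START` and `/usr/bin/my-daemon` are hypothetical names) is to wrap the test in `sh -c`, since `ExecStart=` does not interpret shell syntax itself:

```shell
# In the unit file (sketch -- my-daemon is a placeholder):
#   ExecStart=/bin/sh -c '[ "$SHOULD_START" = "true" ] || exit 0; exec /usr/bin/my-daemon'

# The same logic, runnable standalone: exits 0 silently when the variable
# is not "true", execs the daemon (here: echo) otherwise.
SHOULD_START=no sh -c '[ "$SHOULD_START" = "true" ] || exit 0; exec echo "daemon started"'
echo "exit status: $?"
```

The variable would come from an `Environment=` or `EnvironmentFile=` line; exiting 0 makes systemd consider the start successful, so `RemainAfterExit=` semantics may be worth a look for oneshot-style units.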
[13:43] <siavoshkc> tomreyn: Well. Nothing there is done by me. This is one of default plug-ins of WP. It also creates that .htaccess file by itself
[13:45] <siavoshkc> I am afraid I may not be able to fully discover the reason this plug-in is using .htaccess there. akismet (the plug-in) even creates that file on a Win machine
[13:46] <tomreyn> siavoshkc: if you instruct apache httpd to ignore htaccess files, then it will also ignore this one. you'd just need to ensure you keep the configuration you imported into the vhost up to date.
[13:46] <siavoshkc> https://termbin.com/f3b0
[13:51] <tomreyn> siavoshkc: did you read the full "When (not) to use .htaccess files" section of the Apache HTTPd how-to i pointed you to? did you understand it?
[13:53] <siavoshkc> tomreyn: No. I will. But for now I know that for some reason wordpress uses .htaccess files by design.
[13:54] <siavoshkc> Cannot config server to ignore them. I added AuthConfig to server conf. This time in the correct ssl conf file. Seems it is fixed.
[13:55] <siavoshkc> I will check if it is WP general policy or not.
[13:55] <tomreyn> yes, wordpress provides .htaccess files out of the box. this is because it is often used in shared hosting, where .htaccess files may be the only way (if with a performance impact) to add webserver directives.
[13:56] <tomreyn> AllowOverride=None would make apache httpd ignore .htaccess files (IIRC)
[13:59] <tomreyn> i'm trying to help you make your website perform a little better, at the expense of having to review / monitor configuration changes on wordpress upgrades. you can choose not to want this, in which case you should loosen AllowOverride to no longer disallow Require directives in .htaccess files (the alternative i also suggested initially).
[14:00] <tomreyn> this would provide more automation but, on the downside, less oversight / understanding of how wordpress operates.
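In vhost terms, the two options tomreyn describes look roughly like this (a sketch using the document-root path from the error message above; the moved directives themselves depend on what the .htaccess files contain):

```apache
# Option 1: ignore .htaccess entirely and carry its rules in the vhost:
<Directory /var/www/kamanacademy.com/public_html>
    AllowOverride None
    # ...directives copied here from the .htaccess files...
</Directory>

# Option 2: keep .htaccess but permit Require and related directives:
<Directory /var/www/kamanacademy.com/public_html>
    AllowOverride AuthConfig
</Directory>
```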
[14:01] <tomreyn> general webserver help is also available in #httpd
[14:01] <tomreyn> (for apache httpd only)
[14:27] <kstenerud> I'm looking at a package that has tests disabled for armhf due to crashes, but that was back in 2017. If I wanted to check to see if the tests are alright now, how would I do so? Do I need an armhf machine?
[14:41] <sdeziel> kstenerud: I wonder if tests run in PPAs, if yes, that might fit the bill
[14:45] <siavoshkc> tomreyn: I understood that .htaccess files should be avoided in my case as I have root access to it (Well proven :-). But I am not capable of reconfig apache right now. I should study more to move configurations from .htaccess files to main conf file.
[15:01] <tomreyn> siavoshkc: i don't blame you - just moving the configs is really simple though
[15:02] <tomreyn> this said, it's a good approach to make conscious decisions about what one wants to handle and what seems like too much.
[15:03] <tomreyn> the performance impact is probably one you can handle differently, or may not actually need to handle at all.
[15:05] <siavoshkc> tomreyn: You know I just understood what is going on here while I was chatting with you. Now I need time to digest this knowledge and investigate .htaccess files to see why they are using too many rewrite rules, how many of them exist and how should I move them to .conf in a controlled manner.
[15:09] <tomreyn> :) makes sense.
[15:55] <smoser> rbasak: curious if you'd ever seen this https://git.centos.org/
[15:56] <smoser> i had not. it looks pretty impressive / workalike
[16:06] <rbasak> Interesting, thanks!
[17:02] <sdeziel> I'm trying to use NRPE to monitor Xenial machines from a Bionic Nagios. The NRPE communication fails with "CHECK_NRPE: (ssl_err != 5) Error - Could not complete SSL handshake with 172.22.30.2: 1". Any idea how to workaround?
[17:12] <tomreyn> sdeziel: handshake fails - incompatible ssl library versions? you could test with "openssl s_client -connect ip:port" and add extra parameters to limit ssl / tls protocol version and cipher suites.
[17:12] <tomreyn> or you use a script which automates it, such as testssl, sslyze or the like.
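For example, probing the agent directly (the IP is the one from the error above; 5666 is NRPE's usual port; `-tls1_2` and `-cipher` narrow what the client offers, here to the anonymous-DH suites the scan later shows):

```shell
# Handshake test against the NRPE agent, restricted to TLS 1.2 + anon DH.
# Requires network access to the agent; output shows the negotiated
# protocol, cipher, and any handshake alerts.
openssl s_client -connect 172.22.30.2:5666 -tls1_2 -cipher 'ADH'
```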
[17:13] <sdeziel> tomreyn: the pcap shows that it wants to use an anon cipher with TLS 1.2
[17:13] <tomreyn> sounds like you should be filing a bug report
[17:13] <tomreyn> and tag it "security", that's unless you modified the defaults
[17:14] <sdeziel> but the README mentions communication problems between 2.15 (Xenial) and Bionic (3.2.1)
[17:15] <tomreyn> do you have a link, more info on where to find this?
[17:15] <sdeziel> I always knew NRPE SSL implementation was shitty but this interop problem I cannot workaround
[17:15] <sdeziel> tomreyn: /usr/share/doc/nagios-nrpe-server/NEWS.Debian.gz on Bionic
[17:21] <tomreyn> "Because the default SSL support without certificates configured in nrpe.cfg uses pre-generated key data, configuring SSL certificates is strongly advised when STunnel is not used."
[17:21] <tomreyn> ...sounds dodgy.
[17:22] <sdeziel> yeah, it is
[17:22] <sdeziel> telling the agent side (Xenial) to not use SSL makes it work
[17:22] <tomreyn> but i'm not sure what "pre-generated key data" refers to specifically. if this is the snakeoil certificates which are generated individually per system, maybe it's ok.
[17:22] <sdeziel> but I'd rather not have to reconfigure all my agents
[17:22] <sdeziel> tomreyn: I think it means compile-time generated key
[17:23] <tomreyn> i'm afraid so, too. which i'd consider a security bug, if it is a default config.
[17:24] <tomreyn> however, this should be mostly unrelated to the issue you're running into
[17:25] <sdeziel> right
[17:25] <sdeziel> however shitty the TLS implementation is, I'd like it to work ;)
[17:26] <tomreyn> i.e. i don't think this explains why anonymous cipher suites are used when TLS *is* used, nor why your handshake fails (unless that's because the smarter end, probably the bionic server, points out that it is not going to connect with an anon cipher)
[17:27] <tomreyn> with SSL/TLS there are usually two things you can choose between (though this is now finally changing): (1) make it work, (2) make it work in a way that it provides reasonable security
[17:28] <tomreyn> maybe start with a bug report and we explore it further from there?
[17:28] <sdeziel> tomreyn: the client side is Bionic and I see the server hello from Xenial just fine. The Bionic client then kills the connection with an alert: handshake failure
[17:29] <sdeziel> yeah, I'll probably open one if there isn't already, I should have checked before asking here
[17:29] <tomreyn> and how did you tell it's using an anon cipher?
[17:30] <sdeziel> LP: #1782650
[17:30] <sdeziel> tomreyn: wireshark
[17:30] <tomreyn> oh right you mentioned this
[17:31] <tomreyn> do you see this, too?  "My syslog from the nagios server reports errors such as: check_nrpe: Error: (!log_opts) Could not complete SSL handshake with xxx.xxx.xxx.xxx: dh key too small"
[17:32] <tomreyn> quote from comment 4
[17:32] <sdeziel> tomreyn: yes!
[17:33] <tomreyn> well, 512 bit DH keys were a good thing when Tim Berners-Lee was doing something useful.
[17:34] <sdeziel> don't we love export grade ciphers :)
[17:37] <tomreyn> so i guess the nrpe package in xenial needs a fix which allows it to use longer keys (but maybe doesn't default to it?).
[17:39] <sdeziel> the server gets to pick the DH key AFAICT so it seems the client side (Bionic) would need to accept such weak keys
[17:42] <tomreyn> the 'server' here runs on xenial, right? i.e. my current understanding is that the connection is initiated by an ssl client on bionic to something ("agent") running on the xenial system. i'm assuming this because you said "the client side is Bionic and I see the server hello from Xenial just fine."
[17:44] <sdeziel> correct
[17:44] <sdeziel> but in TLS, the server picks the DH group and the client agrees or not (kills the connection)
[17:45] <sdeziel> tomreyn: here's the cipherscan output of the Xenial server: https://paste.ubuntu.com/p/V8ZgrhKgQW/
[17:45] <tomreyn> so this agent running on xenial, which is the server in the TLS connection, needs to be updated / patched so that it can support, and can be configured to use, 2048-bit DH.
[17:46] <sdeziel> that would likely introduce regressions if the server was to pick bigger DH params
[17:46] <sdeziel> which is why I _think_ the new client should be told to support weak DH params to interop with old servers (agents)
[17:47] <sdeziel> tomreyn: do you know if there is some param to pass to openssl to tell it to not bother with too small DH params?
[17:49] <tomreyn> you can specify a cipher suite
[17:49] <tomreyn> but as your scan suggests there is a larger problem which is that there is *only* support for ADH cipher suites
[17:51] <tomreyn> which basically means you're happy with MitM and don't need to know about it.
[17:51] <sdeziel> I don't mind the ADH thing
[17:51] <sdeziel> yeah, I knew that from the get go
[17:52] <tomreyn> so we should separate between fixing your immediate issue - monitoring not working - and fixing the actual issue - nagios using terrible TLS.
[17:53] <azx> Hey guys, i'm looking to learn more about linux server management , so i thought the best way to do this would be to set up my own server from scratch and use it to stream my content to my phone, tv, etc
[17:54] <azx> debating whether i should buy my own vps or set up one on my home network with a spare pc or something
[17:54] <sdeziel> tomreyn: nrpe supports proper TLS apparently but I don't want to setup a PKI for this :/
[17:54] <tomreyn> sdeziel: i think the reason you can't convince bionic to use 512 DH will be that its libssl (i'm blindly assuming it uses this for now) will not allow that / no longer supports it.
[17:55] <sdeziel> tomreyn: yes, that's my understanding of it
[17:55] <tomreyn> sdeziel: do you have a workaround which works for you, yet?
[17:56] <tomreyn> my understanding is that you can do clear text, but not sure you want this.
[17:56] <sdeziel> tomreyn: correct, clear text works but requires reconfiguring all the agents
[17:58] <sdeziel> tomreyn: I've attached my pcap to the LP if you'd like to take a peek (although we already discussed most of it)
[18:03] <sdeziel> openssl seems to now reject DH < 768 bits
[18:03] <tomreyn> i don't think you'll get around replacing + reconfiguring all xenial agents if you'd like to actually fix this.
[18:06] <tomreyn> yes, current libssl sets some slightly elevated, still much too low, new standards, and things break (now obviously, while already broken before really).
[18:07] <sdeziel> https://git.openssl.org/gitweb/?p=openssl.git;a=blob;f=ssl/s3_clnt.c;h=218534734dd7e7467a14d1d0914207a77343e218;hb=refs/heads/OpenSSL_1_0_2-stable#l3611
[18:07] <sdeziel> so maybe if I can make it use an export grade cipher the client won't reject the 512 bits DH
[18:11] <sdeziel> but EXPORT ciphers are not available on Bionic
[18:11] <sdeziel> I'll bite the bullet and reconfig all my agents I guess
[18:13] <tomreyn> you could also rebuild the bionic 'client' against an older libssl. but... HBD
[18:15] <sdeziel> tomreyn: yeah, no way ;)
[18:15] <tomreyn> btw line 20 of https://paste.ubuntu.com/p/V8ZgrhKgQW/ is also a problem, if true.
[18:16] <tomreyn> so whether nagios does TLS properly would yet have to be seen. i'm a bit happy right now i no longer depend on it.
[18:18] <sdeziel> I was wondering about switching to running checks over SSH and ditching NRPE, this might push me in that direction
[18:21] <tomreyn> sounds error prone and complex, i guess i'd do the PKI rather, if annoying.
[18:22] <tomreyn> sdeziel: what i don't yet understand is why this popped up for you today
[18:22] <tomreyn> did you deploy updates on either end?
[18:22] <sdeziel> tomreyn: I started monitoring my legacy (Xenial) machines
[18:22] <tomreyn> oh ok
[18:23] <sdeziel> before those were monitored by an even older NMS (Trusty)
[18:23] <tomreyn> by the way there is this https://github.com/NagiosEnterprises/nrpe/issues/30
[18:25] <tomreyn> last post refers to a branch adding "backward-compatible update for SSL/TLS" in enterprise nrpe. maybe this is just what we were looking at initially.
[18:26] <tomreyn> well this is old, ignore it
[18:28] <sdeziel> well, if only the old nrpe allowed configuring the cipher list...
[18:28] <sdeziel> case closed ;)
[18:28] <sdeziel> tomreyn: many thanks!
[18:28] <sdeziel> have a good weekend (leaving early :) )
[18:28] <tomreyn> i'm not sure i added much, but it was 'fun'.
[20:07] <zzlatev> Hi guys
[20:10] <zzlatev> Please help me with wireless issue
[20:11] <lordcirth__> !details | zzlatev
[20:11] <zzlatev> lordcirth__: I use ubuntu server 16, but I don't have wireless
[20:11] <zzlatev> Intel(R) Pentium(R) Dual  CPU  T2310  @ 1.46GHz
[20:11] <zzlatev> network controller: Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 02)
[20:12] <zzlatev> I have only wired connection
[20:13] <zzlatev> lordcirth__: I should install firmware-b43-installer ?
[20:14] <lordcirth__> zzlatev, this seems to be a complete guide: https://askubuntu.com/questions/55868/installing-broadcom-wireless-drivers
[20:14] <zzlatev> when I type : lspci -nn -d 14e4:
[20:15] <zzlatev> it shows: Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 02)
[20:15] <zzlatev> without "wireless network adapter"
[20:17] <zzlatev> лор
[20:17] <zzlatev> lordcirth__:
[20:17] <sarnold> лор == floor?
[20:17] <lordcirth__> WLAN means wireless
[20:18] <sarnold> dang. google says "lore". not what I expected, by a lot ;)
[20:18] <zzlatev> nope, this is bulgarian language, sorry about that
[20:18] <zzlatev> лорд - lord
[20:19] <zzlatev> lordcirth__: after this, how can I configure wireless settings?
[20:20] <lordcirth__> zzlatev, What Ubuntu server version?
[20:21] <zzlatev> 16.04
[20:21] <lordcirth__> https://askubuntu.com/questions/464507/ubuntu-14-04-server-wifi-wpa2-personal/464552#464552
[20:22] <zzlatev> I will try
[20:28] <zzlatev> lordcirth__:
[20:28] <zzlatev> [   24.265305] b43-phy0: Broadcom 4311 WLAN found (core revision 13)
[20:29] <zzlatev> how can I check whether wifi works
[20:29] <lordcirth__> zzlatev, did you set it up in /etc/network/interfaces?
[20:30] <zzlatev> # The loopback network interface
[20:30] <zzlatev> auto lo
[20:30] <zzlatev> iface lo inet loopback
[20:30] <zzlatev> # The primary network interface
[20:30] <zzlatev> auto enp2s1
[20:30] <zzlatev> iface enp2s1 inet dhcp
[20:31] <zzlatev> I have two network interfaces
[20:34] <zzlatev> lordcirth__:
[20:34] <lordcirth__> So no, you didn't
[20:35] <lordcirth__> 'ip addr' will show you what interfaces are available to be used
[20:36] <zzlatev> lordcirth__: https://pastebin.com/s25E5A6a
[20:36] <zzlatev> after ip addr
[20:36] <lordcirth__> zzlatev, so, do the setup I linked, using 'wlan0' as the interface name
[20:36] <zzlatev> like this:
[20:36] <zzlatev> auto wlan0
[20:36] <zzlatev> iface wlan0 inet static
[20:36] <zzlatev> address 192.168.1.150
[20:36] <zzlatev> netmask 255.255.255.0
[20:36] <zzlatev> gateway 192.168.1.1
[20:36] <zzlatev> wpa-ssid <your_router>
[20:36] <zzlatev> wpa-psk <your_wpa_key>
[20:37] <zzlatev> dns-nameservers 8.8.8.8 192.168.1.1
[20:37] <zzlatev> ?
[20:37] <lordcirth__> Yes. But please don't paste so many lines to channel, use a pastebin
[20:37] <zzlatev> OK, sorry
[20:38] <zzlatev> lordcirth__: after the two already existing blocks?
[20:38] <sarnold> yes
[20:39] <zzlatev> like this:
[20:39] <zzlatev> https://pastebin.com/ReDs1u3J
[20:40] <sarnold> be sure to set the ssid and psk to whatever you actually need
[20:41] <zzlatev> wpa-ssid Zlatev wpa-psk siriussam
[20:41] <zzlatev> should I use < > ?
[20:41] <lordcirth__> No, <> is used to indicate "substitute here"
[20:41] <zzlatev> OK, now I save
[20:41] <zzlatev> how could I try?
[20:42] <sarnold> I think: ifup wlan0
[20:42] <zzlatev> wpa-ssid is network name, right?
[20:42] <lordcirth__> yes
[20:43] <zzlatev> RTNETLINK answers: File exists
[20:43] <zzlatev> Failed to bring up wlan0.
[20:44] <zzlatev> sarnold: after ifup wlan0
[20:44] <sarnold> is there anything near the end of dmesg output?
[20:44] <zzlatev> no, only RTNETLINK answers: File exists
[20:44] <zzlatev> Failed to bring up wlan0.
[20:44] <lordcirth__> ifdown, then try again?
[20:45] <zzlatev> lordcirth__:
[20:45] <zzlatev> ifdown: interface wlan0 not configured
[20:45] <lordcirth__> zzlatev, ip link set wlan0 down
[20:45] <lordcirth__> then ifup again
[20:45] <zzlatev> ifdown: interface wlan0 not configured
[20:46] <zzlatev> it shows that interface is not configured..
[20:46] <zzlatev> after ifup : RTNETLINK answers: File exists
[20:46] <zzlatev> Failed to bring up wlan0.
[20:49] <zzlatev> lordcirth__: sarnold
[20:49] <zzlatev> https://pastebin.com/qC6ENgxh
[20:50] <zzlatev> after : sudo ifdown wlan0 && sudo ifup -v wlan0
[20:50] <zzlatev> seems like I'm going somewhere
[20:50] <lordcirth__> zzlatev, sometimes it says file exists, but still does something. 'ip addr show wlan0;
[20:50] <sarnold> hmm I wonder if it's that ip route add command that fails
[20:50] <lordcirth__> '*
[20:51] <lordcirth__> Ah, yeah, could be that the default route exists already on the ethernet
[20:51] <lordcirth__> You might want to remove the 'gateway' line
[20:51] <zzlatev> gateway 192.168.0.1 ?
[20:51] <sarnold> or try ifdown enp2s1 ?
[20:53] <lordcirth__> zzlatev, you should only have one gateway line in the file
[20:53] <zzlatev> after I removed gateway I got the same error
[20:53] <zzlatev> I have only one gateway line?
[20:55] <zzlatev> lordcirth__:
[20:56] <lordcirth__> zzlatev, not sure what's up. what's 'ip route' say?
[20:56] <zzlatev> ./bin/ip addr add 192.168.0.110/255.255.255.0 broadcast 192.168.0.255 	  dev wlan0 label wlan0
[20:56] <zzlatev> ./bin/ip link set dev wlan0   up
[20:56] <zzlatev> https://pastebin.com/dZmiVbuD
[21:02] <zzlatev> lordcirth__: sarnold
[21:02] <zzlatev> After I reboot, it doesn't show in dhcp client list
[21:02] <zzlatev> in my router
[21:02] <zzlatev> so... i get facked
[21:03] <sarnold> you configured it for static
[21:09] <zzlatev> yes, but it doesn't turn on without a lan cable
[21:15] <zzlatev> sarnold: what do you mean
[21:15] <sarnold> zzlatev: you gave the wlan0 interface IP address 192.168.0.110 -- so there's no reason for the router's dhcp client list to list this NIC
[21:16] <zzlatev> sarnold: so what should I set
[21:17] <sarnold> zzlatev: heh, it's not my network, I don't know what to suggest :)
[21:17] <sarnold> zzlatev: if you want it static, set it static.. if you want it dhcp, set it to use dhcp :)
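If DHCP is what's wanted, the earlier static stanza would become something like this (a sketch; ssid/psk placeholders as in the guide linked above):

```shell
# /etc/network/interfaces -- DHCP variant of the wlan0 stanza
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid <your_router>
    wpa-psk <your_wpa_key>
```

With `inet dhcp` the address, gateway, and dns-nameservers lines are dropped, and the machine should then appear in the router's DHCP client list.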
[21:17] <zzlatev> ok
[23:03] <Epx998> Is there an option I can pass a install to not bother with creating a user?
[23:26] <teward> Epx998: since root is disabled by default, you don't have a choice but to create a user regardless of the method of installation (whether live server or alternate or preseeded)
[23:26] <teward> so 'not really' is the answer here.
[23:28] <tomreyn> i guess if it was a desktop installation you could do an oem install
[23:32] <teward> ^