[01:29] I'm having a mild panic over the whole ransomware shenanigans.
[01:30] Which doesn't apply to Ubuntu machines, of course. But I'm tasked to migrate an Ubuntu file server to Windows for future-proofed service, which leaves me with that glaring caveat.
[02:15] Just say 'No'
[02:17] once the system has been replaced by a windows machine, the original machine could be turned into a handy zfs storage target, and store snapshots from the windows machine
[02:17] that way when the botnets invade and encrypt all the things, you'll have snapshots to roll back to
[02:19] then again the main reason MS still ships SMBv1 enabled by default is due to crappy linux based devices requiring it ;-)
[02:20] lol
[05:37] good morning
[05:37] cpaelzer: isn't it too early? are you trying to take over pitti's spot of being awake entirely too soon? :)
[05:38] oh I'm actually here a while already sarnold
[05:38] oh my :)
[05:38] sarnold: it isn't that bad - waking up 5am (homoe work), 6.15am (kids), ~7.xx (start) - that is not too early
[05:39] homoe sounds interesting, but is just cleaning up and prepping kids' food :-)
[05:39] lol
[05:40] poor kids, that's also too early :)
[05:40] sarnold: unless I want to shove them from bed to school instantly, that is the time they need
[05:41] let's hope the first hour or two is classes that don't matter much, hehe
[07:46] help me? https://bpaste.net/show/0a5a7299eea8
[07:46] I am trying to allow remote access for my postgresql server
[07:49] geek: looks like it's bound to localhost rather than 192.168.0.100 or 0.0.0.0
[07:49] grep "listen_addresses" /etc/postgresql/*/main/postgresql.conf
[07:50] I did hostssl all all 0.0.0.0/0 md5
[07:50] also listen_addresses = "*"
[07:50] maybe the hostssl is the problem, let me try host
[07:51] nmap only shows port 22
[07:51] I did ufw allow 5432 already
[07:52] ufw allow from 192.168.1.102, which is the ip that I am trying to connect from
[07:53] I don't think the firewall rules would influence which IP addresses it listens to
[07:57] my conf https://bpaste.net/show/4e9f2edcd375
[07:59] maybe throw a gigantic error into the file and make sure that postgresql refuses to start at all? just something to double-check that the two of you agree on which file to use to configure it :)
[09:19] Hi guys, is it possible to reserve memory for the OS itself? If yes, how?
[09:20] What do you mean by "the OS itself"?
[09:21] rbasak, there is one application that eats up all memory
[09:22] hmm but maybe the better approach would be to limit the memory usage of that app
[09:22] instead of "reserve stuff for the OS itself", which is a rather broad term as you say
[10:26] mwhahaha: I'm just pushing fixed versions of openstackclient + deps to pike-updates
[10:27] should sync out in the next hour to the UCA
[13:47] jamespage: ok since we use the openstack infra mirrors it'll probably be a bit longer. I'll let you know (they are still failing at the moment)
[13:47] mwhahaha: ack
[14:48] hey all good morning, I have an Ubuntu server with two nics facing two different networks, I'm setting up a default route for one of them and for the other I'm doing a route after the system boots up, is this the correct way
[14:48] ?
[14:49] adding* a route
[14:56] anyone recommend me a good tutorial for setting up ssl in postgresql?
[14:57] jge: is there a reason you can't set up default routes for both at boot?
[15:00] nacc: I could do that but I thought configuring multiple gateways was bad practice..
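On the earlier "reserve memory for the OS" question: rather than reserving memory, capping the hungry application with a cgroup is usually the simpler route. A minimal sketch using systemd, where the unit name, command and 2G cap are placeholders (the property is spelled MemoryLimit= on Ubuntu 16.04's systemd 229, MemoryMax= on newer releases):

    # one-off run with a hard memory cap
    systemd-run --scope -p MemoryLimit=2G /usr/local/bin/hungry-app

    # or, for an existing service, a drop-in at
    # /etc/systemd/system/hungry-app.service.d/memory.conf:
    [Service]
    MemoryLimit=2G

    # then reload and restart
    sudo systemctl daemon-reload
    sudo systemctl restart hungry-app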
[15:03] multiple gateways are fine but they won't "just work", there are a few ways to do it, setting up metrics, or configuring the kernel, etc
[15:03] if you just set two gateways, you will have a bad time
[15:05] for a specific route, the way I do it is in /etc/network/interfaces, I add "up ip route add ..." and "down ip route del ..." for the interface the route is on
[15:08] jge: ^
[15:08] yikes, I see what might be going on.. this is a remote server with two NICs, one facing the management LAN and the other an upstream provider (internet).. the way we log into this box is through a VPN which lands on the management network, problem is that all VPN traffic is seen as coming from a public IP so return traffic is probably being forced through my default gateway instead of the
[15:09] management lan
[15:12] this public IP being our office IP
[15:13] so I can add a static route but this will create problems as this is also a webserver, so whenever anyone tries to access it from the office (no VPN) the box will route all return traffic through this static route out to the management LAN..
[15:14] damn :(
[15:30] qman__: if I specify a metric in a default route, would it try the lowest metric first then move on to the second? any other way I can have both responses (return traffic) go out to both gateways?
[15:30] or that's a bad idea..
[15:33] or I'm wondering if there's a way to tell the box to return traffic through the same interface it was requested from..
[15:34] you have to tweak the kernel settings for that
[15:34] https://unix.stackexchange.com/questions/4420/reply-on-same-interface-as-incoming
=== dpawlik is now known as _danpawlik
=== _danpawlik is now known as danpawlik_absent
[15:41] it sounds like the real issue you have there is that requests tunnelling through your site-to-site vpn end up as coming from a public ip address
[15:43] qman__: that could be a solution, thanks for that.. how would I make that rule and route persistent though?
[15:44] tomreyn: I know.. our server is colocated and the company that does it asks all clients to NAT interesting traffic to either a subnet they give you or your public IP
[15:45] I guess we can change it but geez that will take days for them to do..
[15:47] Ubuntu Server Bug Squashing Day #4 will be tmrw, here
[15:47] just sent an e-mail to the server list re: the same
[16:09] Hi, where is the option to select the custom adapter? Here I want to select eth0, eth1 and eth2 for separate bridges vmnet1, 2 and 3. https://i.imgur.com/KR6bwYK.png
[16:12] I am able to connect through pgadmin3 but not with psql http://sprunge.us/UfOC https://i.imgur.com/TMJegsm.png http://sprunge.us/QOYO?c++ I am setting up ssl in postgres on ubuntu server
=== JanC_ is now known as JanC
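For the recurring PostgreSQL questions above, a minimal sketch of the pieces needed for remote SSL connections on Ubuntu; the 9.5 version path, addresses and database/user names are placeholders, and Ubuntu's snakeoil certificate pair is only suitable for testing:

    # /etc/postgresql/9.5/main/postgresql.conf
    listen_addresses = '*'        # single quotes; changing this requires a restart
    ssl = on
    ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
    ssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key'

    # /etc/postgresql/9.5/main/pg_hba.conf
    hostssl  all  all  192.168.1.0/24  md5

    # apply, then test from the client
    sudo systemctl restart postgresql
    psql "host=192.168.0.100 dbname=mydb user=myuser sslmode=require"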
[17:14] (NGINX rolls their own implementation separate from nghttp2)
[17:16] rbasak: any chance that upload to proposed can be NACK'd and rejected because of the MIR and nghttp2 contention?
[17:17] teward: let's see what nacc thinks. If we did decide to reverse this, we'd upload a revert to artful-proposed, and that'd make any MIR moot.
[17:18] mmkay. Just thought I'd ask :)
[17:31] rbasak: yes, this was the plan with the security team
[17:32] rbasak: as in, we want it in 18.04 presumably (another 2 years of no http/2 support seems less than ideal)
[17:32] rbasak: and 17.10 is an appropriate place to start staging it
[17:32] rbasak: if you want me to upload a version that again drops it, I can, but the whole point is for the MIR to be processed, one way or another
[17:33] rbasak: i guess i don't understand what you mean by "to my knowledge we've never added and then removed things"
[17:33] rbasak: nacc summed it up
[17:41] nacc: I think one big concern will be https://bugs.launchpad.net/ubuntu/+source/nghttp2/+bug/1677958 unless we've confirmed it's been fixed
[17:41] Launchpad bug 1677958 in nghttp2 (Ubuntu) "no SSL certificate verify" [Undecided,Confirmed]
[17:41] last update was the 3rd, code maintainer saying they confirmed the bug
[17:41] teward: i think i addressed that
[17:42] and large text in the source code making a note that it's insecure for production use
[17:42] let me find the e-mail
[17:42] the reporter e-mailed me offline
[17:42] teward: that particular chunk of code is an example in their docs, iirc
[17:42] teward: and the automated 'static analysis' tool that person is using is dumb and doesn't know that
[17:43] teward: it's not part of nghttp2's shipped libraries or binaries, afaict
[17:44] teward: the upstream response was: http://paste.ubuntu.com/24588157/
[17:44] nacc: then i think bug triage needs to be done on that bug as "It's not part of the standard code, so not a bug"
[17:44] just saying :)
[17:44] teward: i know
[17:44] teward: it's on my todo
[17:44] in any case, if that's a non-issue then the MIR reviewer will get the next say heh
[17:44] I'll just do it now
[17:44] ... oh dear, i found a vulnerable system on my network...
[17:44] *disappears to fix it*
[17:50] If one were to have two updated 16.04 servers, one being in use with some basic installs and a few extras, and the other being a shiny, new, better server: is there a built-in migration functionality or a package that can be used to gather all installed data (installed packages, db files, installed web apps, etc.) and transfer or mirror exactly what is on server A (old) to server B (new)?
[17:51] I have heard about third-party tools for migrations like that, I do not know how well they work
[17:52] Skittishtrigger: o dpm
[17:52] oops
[17:52] Skittishtrigger: I don't think there's any guaranteed-to-work tool for that
[17:52] But with both machines being somewhat alike I would try rsyncing / over to the new one's disks by using a live system, chrooting into the cloned system and letting grub reinstall itself
[17:52] normally I just back up the configs on server A, copy to Server B after backing up the distribution-default configs, and test from there to make things work
[17:52] Pay attention to fstab and mdadm
[17:53] Dang. I have been reading up and it seems everyone has an opinion on how to do it and what's best but no definitive answer for it.
[17:57] Skittishtrigger: another popular answer would be "destroy old server, restore on new one from backup". Alternatively "have your configuration management reprovision everything on the new hardware"
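A rough sketch of the "back up the configs and package list, restore on the new box" route suggested above, with example hostnames and paths; databases still need their own dump/restore (pg_dump, mysqldump) rather than a file copy:

    # on the old server: record the package set and save configs/data
    dpkg --get-selections > packages.list
    sudo tar czf etc-backup.tar.gz /etc
    sudo tar czf www-backup.tar.gz /var/www

    # on the new server: reinstall the same packages, then restore selectively
    sudo dpkg --set-selections < packages.list
    sudo apt-get dselect-upgrade
    rsync -av root@old-server:/var/www/ /var/www/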
Alternatively "have your configuration management reprovision everything on the new hardware" [18:02] I was hoping there would be something using a script where you do movethisbox.sh and in the script it finds(greps, ls, w/e) all files, makes a list of all packages and versions, copies all databases, copies all configs, copies all user/www/srv files, and puts all this into a tar or zip with another script that auto installs it all. Just cause I have hopes and don't want to do it manually. lmao [18:04] Oh well, guess I have to be all proper abou it. lol. Thank you both for your time and help. [18:09] rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} root@old-server:/ root@new-server:/mnt/rootfs/ [18:13] nacc: what if upstream don't declare it ready by 18.04? [18:16] rbasak: we'll hold back the internet by not supporting http2 :) [18:17] nacc: I think the core issue is that it's still considered "experimental" by Apache [18:18] not whether we're holding back the Internet or not [18:18] IIRC that was the original issue too [18:18] right [18:18] but experimental in this case means the spec can change [18:18] in any case, isn't this what MIR would address? [18:18] amongst other issues [18:18] Has anyone seen this SSL error on Ubuntu 14.04LTS - error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block type is not 01; error:04067072:rsa routines:RSA_EAY_PUBLIC_DECRYPT:padding check failed [18:18] also, people have been asking in the apache page about that status with no response [18:18] They verison of OpenSSL that ships with 14.04 has left official support [18:19] i cant figure out how the error is occuring, but the openssl library seems to be the logical choifce [18:19] No, it's more likely a problem in your key file [18:20] dasjoe: if i decrypt manually everything works [18:20] Manually, as in with pen and paper? [18:20] dasjoe: decryption also works for ~1 hour or more and then i just get pelted with that error [18:20] sadjoe: no; im doing DKIM [18:20] dasjoe, sorry [18:21] if i use online tools (any of them) all the decryption works fine [18:21] teward: i'll ask apache2 folks [18:21] manually on the same system? [18:22] JanC: no, third party sites. on the host system it works perfectly for a while. then something happens on the system & it just starts failing everyone [18:23] that's really weird; memory issue or something? [18:23] im googling anything i can think of but nothing is getting me closer to an answer [18:23] thatstevecena: memory gone bad? [18:23] JanC: now that i think of it maybe it is [18:23] or could be rather [18:24] we were looking to move to 16.04LTS anyways. i can grab fresh hardware & see if that fixed it [18:24] fixes* [18:25] (we're on 14.04LTS now) [18:25] disk issue could in theory also be the issue, if it's flaky and sometimes returns corrupted data [18:26] JanC: very good points i hadnt though od [18:26] of* [18:27] I wouldn't consider disk issue if the process keeps running, I doubt it'd unload the key from RAM [18:28] dasjoe: process yeah, keeps trucking right along [18:28] depends on whether it forks workers or something like that [18:28] it just goes from working to failure [18:29] sadly its my only MTA so taking it offline to check anything isnt possible [18:31] this is all good though. ive been banging my head against this for about a week. its given me some new angles to think from [18:33] oh; something else this made me think of: [18:33] we're currently using Untangle for antispam. 
[18:35] nacc: I think the HTTP/2 spec is pretty solid at this point, unless you're saying it's still under massive changes and revision (which it used to be)
[18:35] teward: no, the 'experimental' part of mod_http2 is that mod_http2 itself isn't fixed yet
[18:35] teward: per their own text
[18:35] teward: it's not about http/2, it's about their implementation of what mod_http2 does
[18:36] ah, i mention because:
[18:36] [2017-05-16 14:18:26] <nacc> but experimental in this case means the spec can change
[18:36] ambiguity :p
[18:36] yeah, spec of mod_http2
[18:36] sorry about that
[18:36] nacc: ambiguity is the death of developers :)
[18:36] no problem.
[18:36] teward: yep :)
[18:36] nacc: if they don't even know what they all want to do with it yet, I'd consider it unfit for LTS, but it's ultimately not my call
[18:36] (nginx is more cared about than Apache by me heh)
[18:37] teward: right, but i can't know if they will stabilize it by 18.04
[18:37] teward: so that's what i'm trying to figure out
[18:37] nacc: if they don't respond back, it may be safe to assume they don't even know heh
[18:37] teward: i think they're just reserving the right to change behavior still
[18:40] teward: also, the 'experimental' status in apache2 i think means that between releases, interfaces/directives might change
[18:40] it's basically an out so that they don't have to stay BC
[18:49] thank you everyone. i'm going to go down the server rebuild path on newer hardware. i appreciate the help!
=== vamiry_ is now known as vamiry
[22:16] Hi people!
[22:16] I've a question
[22:16] how do you protect against rnswr attack?
[22:23] what's "rnswr"? google's no help
[22:24] sorry... ransomware!
[22:25] federicoaguirre: backups, updated antivirus, patch all your systems, don't open suspicious links, sites, emails, attachments.
[22:25] common sense protections
[22:26] and backups on the off chance you *do* get hit so you don't lose all the data
[22:30] disable password logins on ssh
[22:30] don't use web-based control panels
[22:34] disable 'root' logon via SSH
[22:34] beat yourself against the wall when you do get crypto'd
[22:34] wait... that's not a protection.
[22:36] teward: well, maybe it protects against a recurrence. one hopes
[22:42] Thanks to all, guys!
[22:45] tarpman: :P
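A compact sketch of the SSH and patching advice above, plus the snapshot-rollback idea from earlier in the day; dataset and snapshot names are placeholders, and snapshots are a complement to offsite backups, not a replacement:

    # /etc/ssh/sshd_config (then: sudo service ssh restart)
    PasswordAuthentication no
    PermitRootLogin no

    # keep security updates applied automatically
    sudo apt-get install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # periodic ZFS snapshots of the share, and rollback after an incident
    sudo zfs snapshot tank/share@daily-$(date +%F)
    sudo zfs rollback -r tank/share@daily-2017-05-15   # -r discards any newer snapshots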