[01:22] <drab> GLaDER: that was a long time ago and you probably already figured it out, but no, more is needed: you need to install zfsutils-linux, zfs is not default in ubuntu
[01:22] <drab> bbl
[04:48] <GLaDER> drab: It's not? I thought it was since a while back.
[06:07] <riz0n> Hello, I have a scenario I want to discuss so I can determine the best configuration to deploy... Currently I have two ubuntu servers online. They are both running Ubuntu 16.04.2 with ISPConfig 3.1.3. The first server is the primary web server, which also has the primary DNS. My ISP blocks SMTP port 25, which cripples my ability to send and receive mail from other mail servers using the SMTP port 25. Because of this, the second server (from a
[06:07] <riz0n> second location) acts as a mail server as well as secondary DNS. Currently mail that originates from the web server is sent over to the second server using port 587, and that server takes care of the business end of delivering it to the correct destination. The second server also contains all the inboxes, and through Dovecot, allows download by IMAP and POP3. I want to make some changes. I want to bring all the "email" over to the primary
[06:07] <riz0n> web server, including all inboxes and their mail. The web server already has Dovecot and Postfix installed, so implementing that should be a breeze. I want the primary web server to provide IMAP, POP3, SMTP (using port 587), and Webmail (using Roundcube). So what I would like to do is continue to use the second server strictly as the "MX". Any incoming mail on the MX that is destined to any of my domains would get pushed on to the primary web
[06:07] <riz0n> server on 587 (or whatever), which would then place them in the appropriate inboxes or return them to sender. When any mail destined for the outside world makes its way into the Postfix queue on the primary web server, it would continue its current trend of bouncing those messages back through the MX (secondary server), which does not have port 25 blocked and can complete message delivery. What would be the best way to go about implementing
[06:07] <riz0n> this configuration, and what kind of limitations would I be looking at with this type of configuration?
[07:00] <genii> riz0n: Maybe instead move all the inboxes to the primary, share them over nfs with kerberos to the other one which mounts them
[07:11] <riz0n> Well here is the issue. The plan is to eventually decommission the second system, it is almost 10 years old and only x86, no RAID or anything fancy. The primary web server is pretty new (set up within the last few weeks), runs 64-bit Ubuntu as a VM, and has RAID10 + hot spare. Soon, this system will have a new happy home in a datacenter somewhere, and the secondary server will no longer be needed as an MX (or anything mail related). I want
[07:11] <riz0n> to prepare as much as possible by having the new server handle inboxes, webmail, IMAP, POP3, and SMTP (587) for clients. I want to reduce the responsibilities and roles of the secondary server down to simply being the "Primary MX" that mail from the outside world first hits, which would then be handed over to the new server as if it had originally come from the outside world. Any mail generated by the new server would go back to the "Primary MX"
[07:11] <riz0n> which would deliver it on to its rightful destination (THIS PART is already in place and functional, as any mail generated by the new server must go through another SMTP, such as the one my ISP provides (no thanks) or one of my own on a port other than 25...
[07:13] <riz0n> My research says I need to configure Postfix on the second old server as a "primary MX host for a remote site"
[07:15] <genii> Yes
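For reference, the "primary MX host for a remote site" setup described above maps onto a handful of standard Postfix parameters. A minimal sketch, with placeholder domain names and hostnames (the real values and the choice of relay port would differ):

```
# /etc/postfix/main.cf on the old server (the public MX) -- placeholders
relay_domains = example.com
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport on the old server: hand accepted mail onward
# to the internal server instead of delivering it locally
example.com    smtp:[newserver.example.com]:587

# /etc/postfix/main.cf on the new server: route all outbound mail back
# through the MX, since port 25 is blocked at this location
relayhost = [oldserver.example.com]:587
```

After editing the transport map, `postmap /etc/postfix/transport` and `postfix reload` are needed. Note that relaying inbound mail to port 587 assumes the receiving server accepts unauthenticated relayed mail there; 587 normally expects authenticated submission, so a dedicated port or mutual-trust restriction may be cleaner.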
[07:20] <Poster> you may also consider keeping a secondary DNS server somewhere
[07:21] <riz0n> Plus I want the second older server to no longer house any of the accounts, inboxes, or emails. That way, if the system is physically compromised, sabotaged, stolen, or suffers failure, all the inbox accounts and their contents remain intact and secure on the new server, and there's no private email for the server's new owner to pilfer. Also, the plan is to incorporate into the NEW server a method of inbox messages being encrypted using
[07:21] <riz0n> something such as GPG..., but that's another story worthy of discussion on another day. :)
[07:21] <riz0n> Poster, yes. The new server acts as the primary DNS server. The old server is the secondary/slave server.
[07:22] <Poster> you could probably use an inexpensive VPS for the secondary system
[07:26] <riz0n> Once my new rack server finds its way to a datacenter, I can run a second VM for the secondary system. My new server has enough disk space, cores, and RAM installed for me to be able to run a few VPS. 2x 4-core 2GHz Intel Xeons (8 cores total), 32GB RAM, 5x 300GB 15K SAS (RAID10, two striped "mirrored pairs" which provides 600GB usable space, and a hot-spare). The old server is only a 1-core P4 2.4GHz, 2GB HyperX DDR, 120GB IDE hard disk... yes, I
[07:26] <riz0n> know, I'm living dangerously on the edge with that old relic!
[07:27] <Poster> oh, well that's good and bad
[07:28] <Poster> if something happens to the physical system, any guests will go too
[07:28] <Poster> or to the datacenter in which it resides
[07:28] <riz0n> I would like to entertain the thought of keeping this "old" system online as a secondary DNS, and also setting up another system here in the new server's place to also act as a secondary DNS.
[07:28] <Poster> I was referring to a light weight VPS somewhere else, possibly hundreds or thousands of miles away
[07:34] <riz0n> You sure are right about that. That's why I have the new server set up here at this time, so I can stress test it. I also acquired the "little brother" to this server, which has two 4-core 1.6GHz Xeons, but can only take two hard disks, which I have 2x 600GB 10K SAS. I thought about also deploying it to the same datacenter as the big brother, simply to sit and run as a spare that I could log into and bring to life if SHTF with the primary
[07:34] <riz0n> server. But that really wouldn't make a lot of sense, and I think it would be better suited to go to ANOTHER datacenter in another geographical region, connected to a different power grid and different ISP... But, at this time, my needs and the demand on what I currently have now do not merit the added cost. But if I was going to pay for a VPS somewhere else, I would rather put that money towards having a second full-blown server online.
[07:44] <riz0n> In the past, and currently, I've run the web server from my home on a "small business" cable account. Issues with the provider left me no choice but to move the "old" server to my partner's residence, which operates from the same cable carrier. Eventually the phone company came and installed DSL, which provides a faster uplink to the web than the cable company (but not by much, I only get 2Mbps up). So I set up new equipment
[07:44] <riz0n> here, and eventually transferred all the sites back home, leaving the old server to operate only as a mail server and DNS... it still does Apache2 HTTP/HTTPS, but only for Webmail access... Through the "transferring all the sites back home" I discovered the DSL company had blocked port 25. Their reasoning is to stop the spread of spam from infected hosts... Oh please, that sounds so early 2000's lol.... but of course they could unblock port 25
[07:44] <riz0n> for me IF I signed up for THEIR small business service, which costs about 8x more than what I am currently paying for the same bandwidth. Our monthly service is around $45 before "fees", and their small business is $300 to $350 a month. We could get full duplex fiber with an entire IP block from the electric company for that amount of money (IF'n they had their fiber up my street, AND I had $300 to $350 a month to blow.....). So the cheapest
[07:44] <riz0n> option was to use the two servers from two locations, and split the roles between them both, at least for now.....
[07:57] <Poster> If you're just looking for MX and secondary DNS you really don't need much
[07:57] <Poster> I'm assuming you're not real high volume mail of course
[07:57] <Poster> I understand the appeal of a true colo, I had one for many years, but the cost in doing so is quite high in comparison to going the VPS route
[08:50] <genii> Having your own rack space in the datacentre is nice.
[09:55] <lordievader> Good morning
[10:29] <PresidentTrump> is there any reason why I shouldn't make my sql backups accessible by www-data?
[10:37] <lordievader> Is there any reason you should?
[10:37] <lordievader> Do you want your webserver which may be compromised to be able to mess with sql backups?
[11:04] <PresidentTrump> lordievader, I don't really but my openswift storage can only be mounted as one user
[11:04] <lordievader> Make a backup user and give that access to the storage?
[11:06] <PresidentTrump> lordievader, I am using the openswift storage for other files that need to be accessed by www-data
[11:07] <lordievader> In the end it is your own decision. But I wouldn't want my webserver anywhere near sql backups or any other backups.
[11:08] <PresidentTrump> I naturally would want the same
[11:08] <PresidentTrump> but I was thinking... database credentials are already visible by www-data
[11:09] <PresidentTrump> lordievader, so if www-data got compromised then they already have access to the database
[11:09] <lordievader> I would hope that's a very restricted db user.
[11:09] <tomreyn> either have multiple mounts of the openswift storage as different users, or have a dedicated user for the openswift mount and add a cron job or incron to copy changes taking place on openswift to locations where they are needed, owned by users who should be able to read/write them.
[11:10] <lordievader> Indeed, something like that.
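A rough sketch of the cron-copy approach suggested above -- a dedicated backup user periodically pulls the dumps off the shared mount into a local directory www-data cannot read. The function name and all paths here are hypothetical:

```shell
# pull_backups SRC DST (hypothetical helper): copy SQL dumps from the
# shared mount (readable only by the dedicated mount user) into a local
# directory, then strip group/other permissions so the web server user
# cannot read the copies.
pull_backups() {
    src="$1"
    dst="$2"
    mkdir -p "$dst"
    # -p preserves timestamps so repeated runs keep sensible mtimes
    cp -p "$src"/*.sql "$dst"/
    chmod -R go-rwx "$dst"
}

# Real usage would point at the openswift mount (placeholder paths):
# pull_backups /mnt/swift/sql-backups /var/backups/sql
```

A matching crontab entry for the backup user could then run this every half hour, e.g. `*/30 * * * * /usr/local/sbin/pull-sql-backups.sh` (hypothetical script path wrapping the function).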
[11:14] <PresidentTrump> lordievader, what permissions should I restrict the db user to?
[11:14] <PresidentTrump> this is a crud application
[11:15] <lordievader> PresidentTrump: The bare essentials.
[11:18] <PresidentTrump> lordievader, what privileges are not bare essential?
[11:18] <lordievader> Depends on the application. Make a list of what your application requires, allow that on the database as required, and deny all else.
[11:52] <PresidentTrump> lordievader, after talking to my colleagues we determined we need everything including drop
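If DROP is really only needed for schema migrations, a common pattern is to split privileges: a minimal runtime user for the CRUD paths and a separate, more privileged user that only the migration tooling uses. A hypothetical MySQL sketch (user, database, and password names are placeholders):

```sql
-- Runtime user: bare essentials for a CRUD application
CREATE USER 'app'@'localhost' IDENTIFIED BY 'change-me';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'app'@'localhost';

-- Separate migration user holds the destructive/DDL privileges
CREATE USER 'app_migrate'@'localhost' IDENTIFIED BY 'change-me-too';
GRANT CREATE, ALTER, DROP, INDEX ON appdb.* TO 'app_migrate'@'localhost';
```

That way a compromised www-data only exposes the credentials of the restricted runtime user, not the ability to drop tables.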
[16:14] <ren0v0> Hi, I'm not able to update my ulimits and make them stick, for some reason /etc/security/limits.conf is changing "file size -f" instead of "open files"  >  https://pastebin.com/USrYRxJ6
[16:14] <ren0v0> can anyone help ?
[16:40] <fallentree> ren0v0: limits.conf is ignored under systemd afaik
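For context on that last point: /etc/security/limits.conf is applied by pam_limits to login sessions, while services started by systemd take their limits from the unit itself. A sketch of a drop-in override for a hypothetical service (created with `systemctl edit myservice`):

```
# /etc/systemd/system/myservice.service.d/override.conf
# (hypothetical unit name)
[Service]
LimitNOFILE=65536
```

This takes effect after `systemctl daemon-reload` and a restart of the unit; a system-wide default can instead be set with `DefaultLimitNOFILE=` in /etc/systemd/system.conf.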
[20:11] <linuxn00b> hi all
[20:11] <linuxn00b> anybody here?
[20:11] <linuxn00b> i have a question about linux
[20:12] <tarpman> !ask | linuxn00b
[20:20] <tomreyn> https://lists.debian.org/debian-devel/2017/06/msg00308.html
[20:20] <tomreyn> [WARNING] Intel Skylake/Kaby Lake processors: broken hyper-threading
[20:27] <gheorghe_> oh damn, i have i7-6700K
[20:27] <gheorghe_> skylake
[20:34] <gheorghe_> seems it's easy to solve. apt update; apt-get install intel-microcode
[20:34] <tomreyn> and reboot. if you have a matching cpu
[20:35] <gheorghe_> why reboot? it will ruin my uptime
[20:35] <gheorghe_> i have 7 days uptime on debian stretch
[20:35] <gheorghe_> this is horrible. my life is over.
[20:35] <gheorghe_> i want to try now
[20:39] <gheorghe_> seems i got lucky here guys: https://paste.debian.net/973276/
[20:39] <gheorghe_> i enabled contrib and nonfree right after installing debian, cause i needed the drivers for GTX 970. i also got the microcode and it has the latest version :D
[20:39] <gheorghe_> 3.20170511.1 is good, right ?
[20:42] <tomreyn> i think this is #ubuntu-server
[20:47] <gheorghe_> yes, this is #ubuntu-server . the question was regarding the microcode that is also available on ubuntu. also, this will affect all my #ubuntu-server VMs that I am running with KVM on my debian desktop ;)
[20:57] <tomreyn> i only know what's written in this mailing list post i pointed to.
[22:02] <odc> too bad the ubuntu package for intel-microcode is out of date :/
[23:59] <IShavedForThis_> hey guys, do you know of any good tutorial for setting up a VM on Ubuntu? Security in regards to keeping people out of my main server is important, and I feel like you guys are the ones to ask / would know of a truly good tutorial