[00:05] I skipped all the bits about serial cables, but the rest looked decent :) [00:28] sarnold: thanks, yeah. I know serial cabling is old and busted, but I'm stoked about having console access. It means I can really fuzz with it in my lab without worrying about losing network connectivity breaking my SSH access. [00:29] Mead: yeah, and it's often the only way to get a handle on some specific kernel problems [00:29] Mead: quite often server gear will have BMCs on board that can do serial over network and save you the hassle of the serial cable itself :) [00:34] * Mead googles BMC serial over network [00:37] looks like a potential security problem [01:02] Mead: yeah, the usual implementation of BMC devices is poor enough that they are almost always given their own networks [01:18] also because ipmi sol is based on udp so it's really fun to use on a congested network :) [04:56] what kinds of things would I need to worry about breaking if I upgrade a system I've been running for 5 years on 14.04 to 16.04? [04:57] Ham62: i would say a clean apt without issues, and no PPAs enabled [04:57] Ham62: for services you're running, best to ask specifically so volunteers can think along [04:57] a backup is also a good idea [04:58] well the most important things I have running right now are nginx, apache for some CGI stuff, and a gopher server [04:59] the CGI stuff was mostly done with FreeBASIC and x86 assembly [04:59] nasm [04:59] I'm mostly worried a bunch of packages won't support my CPU properly [04:59] it's running on an Athlon XP [05:01] and I have a couple of services that are started using the rc.local file [05:01] what about init Ham62 [05:01] are those going to break? [05:01] from 15.04 and higher it's systemd now yeah [05:01] oh darn [05:02] yeah the gopher server is launched with socat at boot and I have a custom remote compiler server which is started through there on one of the user accounts [05:02] !systemd [05:02] systemd is the default init system for Ubuntu 15.04 onwards. For information on transitioning from upstart to systemd, see https://wiki.ubuntu.com/SystemdForUpstartUsers For a guide to basic service management with systemd, see https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units [05:03] neither of those are real services though [05:03] they're just processes I have running as a user in the background constantly [05:05] Ham62: you might also wanna read up https://wiki.ubuntu.com/XenialXerus/ReleaseNotes [09:58] Ham62: anything alright there with the upgrade plan? [10:15] Greetings everyone [10:15] In a pickle and need some help ^_^. I hope this is the right channel to ask, seeing it's nginx/www-data user related, but also linux related. What is the best way (with vsftp) to give access to a specific /var/www directory, with it running under www-data user/group? As I assume nginx requires both user/group to be www-data. [10:15] Skyrider: so as I was saying, I'd make the dirs owned by the v/s/ftp user, and then put nginx in that user's group [10:16] That I saw, wanted to reply to that :D === DerRaiden is now known as DerRaiden`afk [10:16] by default dirs and files are group readable so nginx's www-data user, being in that separate user's group, will have read access. [10:17] Wouldn't that mess up file/directory permissions though, seeing the owner/group gets changed? [10:17] Ah [10:17] I assume the owner/group differs then though?
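Going back to the rc.local discussion above: on 16.04 the simplest replacement for an rc.local-launched background process is usually a small systemd unit. A minimal sketch for the socat-based gopher server Ham62 mentions, where the unit name, the "gopher" account and the handler path are illustrative assumptions rather than anything from the log:

    # /etc/systemd/system/gopher-socat.service  (hypothetical name and paths)
    [Unit]
    Description=Gopher server via socat (example)
    After=network.target

    [Service]
    User=gopher                                  # assumed unprivileged account
    AmbientCapabilities=CAP_NET_BIND_SERVICE     # lets a non-root user bind port 70
    ExecStart=/usr/bin/socat TCP-LISTEN:70,fork,reuseaddr EXEC:/home/gopher/bin/gopher-handler
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Enable it with "systemctl daemon-reload && systemctl enable --now gopher-socat.service"; the same pattern would cover the custom remote compiler server. /etc/rc.local itself still runs on 16.04 via rc-local.service if the file exists and is executable, but a real unit is easier to manage and restart.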
[10:17] as long as you keep the dirs and files readable to their group, and www-data is in that group, all is fine [10:17] owned by whoever creates the files, while the group remains www-data. [10:18] no, you don't change file/dir ownership to www-data [10:19] example: chown myuser:myuser /var/www/some-website-dir ; chmod -R g+r /var/www/some-website-dir ; usermod -a -G myuser www-data ; [10:19] I know the last part adds the user to the group, what does g+r do? [10:19] the chmod is just for example here, g+r is default [10:19] makes the files and dirs readable to the group they belong to (in this case, myuser's, if that user is used to place the files via v/s/ftp) [10:20] Should I use mount though? Currently their FTP is set to the home directory. [10:20] so as you upload files they will be owned by myuser:myuser assuming that's the user you logged into v/s/ftp with [10:21] no need for mounts, you can symlink the website dir under myuser's home somewhere [10:21] Better to symlink, or directly set their directory to the specific /var/www/xxx directory? [10:21] I'm assuming this is for the v/s/ftp access? those daemons running on the same machine, and this is not some nfs export [10:22] Just FTP access to give a specific user access to a specific site/sub-domain, ya. [10:22] Skyrider: I'm old school, and I'd symlink under myuser's ~/public_html/somesite.com [10:23] though really... uh depending on what this is exactly, you can omit /var/www completely and use only the home dirs? [10:23] I mean, if it's some packaged web app that installs under /var/www/ then yeah. if it's not, then just keep it all under ~/ [10:24] oki, that's set... now to symlink it :D [10:24] or whatever you want. point is, if I understood your problem correctly, you want v/s/ftp uploadable files to be readable to nginx? [10:25] Merely creating an ftp so users can edit, add, etc web files through the ftp. [10:25] Seeing I have multiple domains/sub-domains, need to create multiple FTP users for that. [10:26] then you don't want users to log in as www-data. you want this instead, nginx in those users' groups so it can read their files. separation of concerns and least privilege principle. and this covers only static sites. [10:26] They don't need a /home/ directory though [10:26] as the current ftp server requires that ( 500 OOPS: cannot change directory:/home/xxxx ) [10:26] they DO need A "home" directory. why not /home/ [10:26] that doesn't sound right [10:26] The directory does not exist, hence the error :p [10:27] ah so you created users without -m ? [10:27] Indeed. I like being organized. [10:27] If I have users like test-web, test-forums, test-dev .. looks weird in my eyes :p [10:27] well having users have a home is wise. you contain them there. you can also configure sftp with chroot to their homes, instead of insecure (vs)ftp [10:27] For having a home directory that is. [10:28] I tried SFTP before, bit weird. They kept having access to the / directory, even though it wasn't for writing, they had read access. [10:28] yeah because you need to explicitly chroot them for sftp access [10:29] Is it that bad though to use FTP over SFTP? Even though it's not secure, I can make it more secure by altering the port / whitelist. [10:29] google up "chrooting sftp users". in essence, you have, say, /home// as their home and thus chroot. that dir needs to be owned by root and not group writable. dirs _under_ it (like, say, ~/public_html) can be normal user owned dirs they write into.
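A minimal sketch of the sftp chroot setup being described here. The group name "sftponly" and the user name "myuser" are illustrative assumptions; Match blocks go at the end of sshd_config, and the stock "Subsystem sftp /usr/lib/openssh/sftp-server" line gets replaced by the internal-sftp one:

    # /etc/ssh/sshd_config (fragment)
    Subsystem sftp internal-sftp
    Match Group sftponly
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no

    # per-user setup, roughly the chown+chmod mentioned later in the discussion
    usermod -a -G sftponly myuser
    chown root:myuser /home/myuser                              # chroot dir must be root-owned...
    chmod 750 /home/myuser                                      # ...and not group/other writable
    install -d -o myuser -g myuser /home/myuser/public_html     # dir the user can actually write into

After "systemctl restart ssh" the user gets sftp-only access, chrooted to their home, with uploads going into public_html.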
[10:30] Skyrider: problem with FTP is that the data channel is never encrypted. while the control channel is (where log-in happens), data isn't. [10:30] What about FTP TLS? [10:31] I never really had to create multiple users over (S)ftp before, hence I'm asking all this :D [10:31] anyway, in addition to root owned chroot, you need to set up sshd for them to force the chroot and internal-sftp command only so they can't ssh in [10:31] FTPS (FTP over TLS/SSL) is exactly what I was talking about. control channel encrypted, data channel isn't. [10:31] with pure FTP, not even control channel is, so you get plaintext passwords over the wire [10:32] FTPS = FTP over TLS/SSL; SFTP = file transfer over SSH. [10:34] sftp is really superior. it's well supported by programs like filezilla if your users are windowsites. you can force keys instead of passwords, and everything is nicely encrypted. [10:35] Is it a hassle to properly set up sftp with proper rights / access only to specific /var/www/ directories, with no ssh access, just sftp? :P [10:36] no. three lines of config in sshd_config, and a chown+chmod when you create their home dirs. [10:36] but then you make the /var/www/ dirs as their homes. personally I'd just go with /home/ . /var/www/ is primarily for packaged web applications. [10:37] and at the end symlinks, got it. [10:37] Guess I'll remove this ftp package and go with sftp [10:37] no, I'd go with /home/ period. no symlinks [10:37] /home//public_html/website-dir.com [10:38] I thought you said you were oldskool [10:38] yes, and this is it with public_html [10:38] .. /var/www/xxx -> symlink /home/user [10:38] the old school part was about public_html :) [10:38] ah ^^ [10:38] I do prefer having things in /var/www though for all web related stuff. [10:39] well you can do it in reverse. symlink from /var/www/ to their /home/user/public_html/site.com dirs :) look, that part is really whatever you feel most comfortable with. the important thing is that the dirs/files are _user_ owned and that nginx is in their groups for read access. [10:40] but again, that's for static sites. with php it becomes a bit more tricky if you want it properly secured. [10:41] All php stuff :p [10:41] the rabbit hole deepens then :) [10:42] what I do in this case is run a php-fpm pool per user, as that user, with apparmor policy that prevents _writing_ except in specified dirs. that I can do because we control the application and know exactly what those dirs are. [10:43] if that's not an option for you, then php-fpm pool per user, as that user, is the best you can do, but then php can change its own code and you're vulnerable. [10:45] Interesting. [10:45] at any rate, you _will_ want to chmod o-rwx their homedirs -- forbid listing and read access to other users, otherwise they can upload PHP code that will scan and sniff other users' files [10:45] Good to know, thanks. I'll look into that. As for your last line, I do trust this user :) [10:46] I have to go for now, but I'll stick in this channel until I set up my irc bouncer again. I appreciate your time in helping me out, gotta pick up my wife. I'll be back soon :), again, thanks! [10:46] so combined with sftp chroots, you have chown root:myuser /home/user-home-dir ; chmod 750 /home/user-home-dir/ and precreate a "public_html" (or whatever the name) dir in which they will have write access in their home (as their home roots aren't writable to them) [10:47] "I do trust this user" -- famous last words, aka.
"pics taken 5 seconds before disaster" :) [11:28] when is "apt-get update" run automatically? [11:33] "depends" [11:34] on several factors, such as your ubuntu version [11:40] bionic [11:41] no..wait.. xenial [11:50] oskie: systemctl list-timers apt-daily* [11:52] tomreyn: awesome, that's what i've been looking for for a while [11:58] it's been there all the time, whispering your name! [12:11] good morning [13:48] sarnold: which logs? I believe the error has to do with azure really long dns names maxing out. https://feedback.azure.com/forums/216843-virtual-machines/suggestions/10197480-the-azure-vm-internal-dns-domain-names-are-too-lon === Eickmeyer is now known as Eickmeyer___ [14:37] blackflow: Back [14:41] Wow this is confusing. [14:42] No matter what I try to add in sshd config, I always get "Network error: Software caused connection abort" when trying to connect with the user. [14:43] Skyrider: check auth.log [14:43] Skyrider: another deeper approach is to run sshd manually in debugging mode on a high port. [14:45] " bad ownership or modes for chroot directory component "/var/www/"" that explains it. [14:49] well, I did mention a few times the chroot dir needs to be owned by root :) [14:49] and mustn't be group/other writable [14:51] That's the odd thing, it is owned by root. [14:51] And I believe the chmod is set to .. 755? Somewhere according to the internet. [14:51] Do you have a tutorial I can follow by any chance? [14:51] you'd have to paste the full sshd_config in question [14:51] that statement scares me --> Somewhere according to the internet [14:51] indeed. [14:52] I'm running the cmds in a test directory Ussat [14:52] still [14:52] And I know what the cmds do :p, plus.. I have a backup ready just in cae. [14:52] ***case [14:52] fair nuff, still [14:52] I'd like to stay and help, but there's a mtb trail with my name on it. bbl. [14:52] For example, I ran the tutorial: https://45squared.com/setting-sftp-ubuntu-16-04/ - Yet doesn't work properly. [14:53] No worries bf :) [15:20] There is something I noticed. [15:21] Whenever a directory's role is not www-data, I get "errors" / warnings like "The $cfg['TempDir'] (./tmp/) is not accessible." [15:22] I could of course alter the permissions of the directory to fix that, but how would user/group stay as www-data, regardless the user adding/altering files? [15:38] anyone ever run into a case where even if you have installed everything and disabled cloud.cfg's preserve_hostname module by setting it to false, the system still resets its hostname every time? [15:38] 18.04.2 server from the SUbiquity installer [15:41] I give up >_> [15:42] Skyrider: you wouldn't be able to set the user of the file [15:42] but you COULD set the group with stickybit on the directories [15:42] anything created in a directory with the group stickybit would get group www-data [15:42] but that's just a 'hack' [15:43] Yea, I'm familiar with the group cmd. Though how would I best fix this issue? The only thing I want is to create a sftp user under a specific /var/www/ directory, but if the owner www-user is changed to someone else, file permissions on the web application will start to appear unless file permissions is changed. [15:43] Apparently the www-data has the "proper" rights to auto solve that issue right away. [15:44] Skyrider: what are you trying to do with sftp in /var/www? [15:45] teward: during the past two or three days there were two people around on irc who reported that the system hostname they configured got reset. 
i don't think they knew about "cloud.cfg's preserve_hostname" (i don't, or didn't), though. one of them determined cloud-init to be the source of this issue. [15:45] also i think there's a related open bug report [15:45] tomreyn: yeah i had preserve_hostname set to false though [15:45] tomreyn: and it STILL reset it [15:45] i just got angry at it and yoinked cloud-init out of the equation [15:45] apt-get remove'd it and it worked [15:45] tomreyn: got a link to the bug per chance? [15:45] that's what the other user did, too [15:46] i was afraid you'd ask this [15:46] tomreyn: i also think it's intermittent [15:46] because two servers were both deployed with the same ISO [15:46] one had this happen [15:46] the other didn't [15:46] only difference was a really short hostname for the one [15:46] and that's the one where cloud-init was being derp [15:46] cryptodan: Setting up 3 different SFTP users to access 3 sub-domains. [15:49] teward: ugly. :-/ ok, i'll look for this bug report, but no promises [15:50] tomreyn: never expect any promises :P [15:51] I saw somewhere last week on the internet that there's a package that checks a directory at all times and changes the user/group if it has changed. [15:52] Any idea what it might be called? [16:07] teward: bug 1780867 [16:07] bug 1780867 in subiquity "hostname unchangeable / some daemon changes and resets /etc/hostname" [Critical,Fix committed] https://launchpad.net/bugs/1780867 [16:08] also bug 1770451 might be related (but that's just a random find while searching for the other) [16:08] bug 1770451 in cloud-init (Ubuntu) "hostname not set: Failed to create bus connection: No such file or directory" [Undecided,Incomplete] https://launchpad.net/bugs/1770451 === setuid_ is now known as setuid [16:16] hmm 1780867 isn't really new though (nor its dupe), nor was it updated during the past 3 days. but i think this is what i had in mind. [18:04] Skyrider: nginx won't create such dirs, so that must be some PHP app. of course, the dir must be writable to the user php-fpm is running as. for such random dirs you can't know in advance, you should run php-fpm as the user owning the dir, not as www-data. [18:05] blackflow: though... in a default setup, php-fpm *is* running as www-data [18:05] yes but the dirs must be owned by the sftp user in order to freely upload php apps [18:05] that's the use case Skyrider has [18:06] so, nginx as www-data, in a supplementary group of the sftp user. php-fpm running as that user in full. [18:06] (if there's need for both PHP and sftp user to write files) [18:18] sftp chroot is weird. [18:19] The main directory has to be owned by root, I get that. But in that main directory, no one can create directories or files, because that specific directory is owned by root. [18:19] All the other sub-directories inside the root directory can be altered. [18:19] Skyrider: it's not weird once you understand why the (ch)root must be root owned. one way to escape chroot is to double-chroot with symlinks, so openssh enforces no-write, no-ownership of the (ch)root [18:20] So instead of /var/www/testwebsite/subdomain I need to have /var/www/testwebsite/domain/subdomain [18:20] Because root is messing up the main directory's permissions [18:20] Skyrider: well see, that's why I recommended you to use /home/user/ as (ch)root, and then have ~/public_html/ for all their sites. [18:21] I'll consider it ^^ [18:21] but you wanted your way, so... :) you'll just have to do the same.
a "base" chroot dir/home for the sftp user, and a dir they can upload their sites to [18:21] the base is the var/ww :D [18:22] just for one user? [18:22] Each domain/subdomain its own user. [18:22] Fake the sake that not a single user has access to all. [18:23] well I really recommend you to use /home/ . /var/www was never meant to be used by random sftp user accounts. it's default place to put packaged web applications, root owned, www-data accessible, and not via sftp. [18:24] standard for decades has been ~/public_html, from early apache years, carried over by shared hosting industry, all the commercial and non-commercial hosting panels, etc... /home// as home dir, sftp chroot, and then public_html aka htdocs on some platforms, as "docroot" for apache [18:26] I'm actually using icron :D [18:27] and if you _do_ insist on /var/www/ you will _still_ need to replicate the structure. /var/www//sites/www.somesite.com/ [18:27] name the "sites" subdir as you wish. [18:28] replicate the structure? [18:28] you need one extra user-owned dir under chroot [18:29] IF you want to allow them to create subdirs for sites. I don't know how you intend to configure nginx to run with that, you'd still need root to add a server {} stanza for each domain [18:31] do yourself a favor and don't reinvent the wheel, do what the industry has been doing for many years now. /home//public_html-or-sites-or-htdocs-orwhatever/{somesite.com,anothersite.com,foobarbaz.com} [18:46] I have locked my docker-ce packages. But now I want to do a release upgrade to 18.04 [18:47] but I do get: Please install all available updates for your release before upgrading [18:47] But I do not want to upgrade this package [18:47] it should stay on the same version [18:49] blackflow: : To make things simple.. dont really want to bother with it much anymore, I've made a single user to access a single domain with all its subdomains. [18:49] I got it to work, though all sub directories appear to be empty when I log into the user. [18:51] setfacl appears to be the cause [18:53] I give up -_- [18:53] Not sure why it's such a bothersome to simply add a sftp user access to a specific directory, having the ability to read/write and maintain the original user/owner.. [18:55] I'm not even sure why it's displaying 0 directories/files right now. [18:58] adac: chances are you will have to upgrade the docker packages anyways because of new libraries/dependencies for build and runtime that Docker has to build against (outdated ones won't work) [18:59] teward, I have running this docker version also already with bionic (I think need to check really) [19:00] maybe the same *version* of the Docker codebase but it still has to build against *newer* libraries in Binoic vs. Xenial. [19:00] so it's *different* at the binary level [19:00] teward, thing is the newest docker version wll not work with my kubernetes version [19:00] but not the code level [19:00] kk I see [19:02] for do-release-upgrade it'll still complain, yes. You might have to disable your Docker repository you're using to get it and do a full `apt-get update && apt-get dist-upgrade` afterwards then install the newer docker package version for the newer Ubuntu. However I can't guarantee this'll work [19:02] Kubernetes is a little tricky. [19:02] adac: the other thing is, what are you on now, 16.04? [19:02] why upgrade to 18.04 if things're just working? 
[19:03] teward, I'm re-setting up my whole infrastructure and I have this one single host that is part of this new infrastructure already but still has 16.04 [19:03] and I would really like to have the same versions everywhere [19:04] i'm assuming you don't want to upgrade kubernetes then :P [19:04] which you'd probably end up having to do [19:06] teward, actually I'm using Rancher. Rancher supports 1.13.5 which at most supports docker 18.06.3 [19:06] yes at some point I will upgrade kubernetes anyway that is true [19:06] :) [19:09] teward, I'm trying out now what you have suggested [19:09] this host can be down a bit no problem if something is not working [19:09] backup first [19:09] just in case :P [19:09] :) [19:16] teward, backup is running [19:39] Skyrider: I'm sorry but it's not bothersome at all. it's very, very simple. User owns files. nginx's www-data is in user's group (supplemental!). php-fpm runs as user (one pool per user). User's home dir is root owned and there's a subdir (or more) where the user can upload stuff. Very, very simple. [19:40] Skyrider: no idea why you invoked ACLs, that will just unnecessarily complicate matters to no end. [19:40] it's amazing the flexibility you can have with the simplicity of unix acls [19:41] yes but it's rather hard to maintain, the ACLs are not immediately obvious, not visible in ls, you have to know they're there. personally I prefer to put a nice, auditable apparmor profile, instead of fiddling with ACLs [19:41] MAC > DAC and ACLs are DAC on steroids. [19:41] still DAC tho. [19:42] blackflow: I've yet to try php-fpm Apparmor hat support, do you have some experience with it? [19:42] Skyrider: btw, just so we're on the same page, the approach I'm preaching here, I've been doing that for many years and currently I have that very setup for hundreds of clients and their websites. [19:44] sdeziel: nope. My setup is simple enough where "owner" is the only variation among pools. for more complex stuff I intend to have custom named profiles via systemd units, and one pool master per user, per unit, per profile. [19:44] in other words, I wouldn't go with one process changing hats, but statically fix processes to profiles. [19:45] blackflow: OK. The hat thing is a per-pool thing [19:45] yeah. I disliked hats even with apache and selinux, years ago. I preferred MLS instead [19:46] oh I see, you want multiple masters [19:46] uhuh. [19:46] the masters themselves don't really add any overhead and I can individually restart pools, unlike with one master [19:48] interesting idea and side effect. Too bad, I was looking for a reason to try Apparmor hats [19:51] blackflow: Can I undo the acls? [19:52] Skyrider: sure, -b flag to setfacl [19:53] Thanks :) [19:53] Just curious.. You say to set it to the user's home directory instead. [19:54] How does the user/group work exactly with the web files? [19:54] web files owned by the user, and as you mentioned, www-data in their group? [19:58] Skyrider: web files must be owned by the user so that the default permissions (755 dirs, 644 files) allow them to write. with PHP in the game, you need to drop access to "others", so 750 and 640. In that case nginx (www-data) loses access, so you need to add www-data to the users' groups, so nginx can read the files. [20:00] that's also the least privilege principle in action. and of course, the php-fpm process must run as the user, in order to have exclusive access to user/site files, and in order to write them (uploads).
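A minimal sketch of the per-user php-fpm pool plus the ownership scheme blackflow describes above; the pool name, user name, PHP version path and socket path are examples only, not taken from the log:

    ; /etc/php/7.2/fpm/pool.d/myuser.conf
    [myuser]
    user = myuser
    group = myuser
    listen = /run/php/php7.2-fpm-myuser.sock
    listen.owner = www-data        ; nginx connects to this socket
    listen.group = www-data
    pm = ondemand
    pm.max_children = 5

    # ownership/permissions as described: user-owned, group-readable, nothing for "others"
    usermod -a -G myuser www-data
    chown -R myuser:myuser /home/myuser/public_html
    find /home/myuser/public_html -type d -exec chmod 750 {} +
    find /home/myuser/public_html -type f -exec chmod 640 {} +

The nginx server block for that site would then point fastcgi_pass at the per-user socket, and www-data only ever gets read access through the group.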
=== keithzg_ is now known as keithzg [20:37] Hmmm, so postfix rejects mail if the domain is in the virtual_alias_domains but the TO address isn't listed in the virtual alias map, even if there are CC's on that email that are? That is surprising to me! [20:39] postfix doesn't care about CC. and the alias map is really the authoritative one here, just the domain won't work [20:39] envelope to or message to? [20:40] has to be envelope, postfix doesn't care about the To header either [20:40] yeah, that's where i was leading to [20:40] Hmm I wonder how it was working *before* I set up the virtual_alias stuff, that certainly seems to have been when these emails started getting rejected rather than passed on. [20:41] anybody happen to know how long the Ubuntu 16.04 EC2 images will support new hardware? The first graphic on this page: https://www.ubuntu.com/about/release-cycle makes it look like hardware support has already stopped for 16.04 but I'm not sure if that also applies to the AWS specific kernels which are also based off of the 4.4.0 GA kernel, not the 4.15 HWE kernel [20:41] * keithzg is having a hard time digging through it all and figuring out what's going on since the verbosity for amavis is set so high, heh, still failing to understand why it dies from time to time [20:42] keithzg: postfix has its own rather verbose logs, you shouldn't consult amavis at all. there's #postfix here on freenode if you need more help with it. [20:43] blackflow: Well it's /var/log/mail.log I'm looking at, and `journalctl -u postfix` doesn't have anything [20:44] keithzg: wrong unit, postfix.service. you need postfix@ for the instance running iirc? [20:45] anyway, can you pastebin the problem entries? though really, I recommend #postfix for this particular issue, probably isn't specific to ubuntu defaults [20:46] paulatx: so did you read this? https://wiki.ubuntu.com/Kernel/LTSEnablementStack#Ubuntu_16.04_LTS_-_Xenial_Xerus [20:46] paulatx: oh yes, according to what you wrote you probably did. [20:47] so the aws images can't be used with the HWE kernel? [20:47] i mean, you can't just install it like on regular ubuntu? [20:49] blackflow: Literally none of the postfix@ instances show anything, went through them one by one, but bizarrely postfix@* works, so apparently tab completion for journalctl leaves out whatever one I actually need to use? [20:52] keithzg: postfix@-.service [20:52] tomreyn: well the AWS images obtained from https://cloud-images.ubuntu.com/locator/ec2/ have the AWS tuned kernel enabled by default as detailed here: https://blog.ubuntu.com/2017/04/05/ubuntu-on-aws-gets-serious-performance-boost-with-aws-tuned-kernel. I'm trying to figure out when the support for new hardware will stop on those AWS tuned kernels [20:52] keithzg: that's the default template instance [20:52] unless of course you have something else set up, this should be the default [20:53] blackflow: Huh. Yeah, that works. Just weirdly isn't one of the many things listed when I try and tab-autocomplete `journalctl -u postfix@` [20:54] keithzg: WorksForMe(tm) :) [20:54] blackflow: Hah! [20:55] I'm pleasantly surprised the wildcard approach worked, too; that's a bit more user-friendliness and standard unsurprising handling than I normally expect from the systemd gang [20:55] paulatx: hmm, sorry, that's indeed beyond my horizon. i suggest you ask the same question here again tomorrow during UK business hours [20:55] blackflow: Funny enough, on another server it *does* work fine!
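For reference, the unit names being discussed: postfix on Ubuntu ships as a template unit, and the default running instance is postfix@-, so the log queries look roughly like this:

    journalctl -u postfix@-.service      # logs of the default instance
    journalctl -u 'postfix@*'            # wildcard across all instances
    systemctl list-units 'postfix*'      # shows which postfix units actually exist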
[20:56] keithzg: patience and they will find a way to disappoint ;) [20:56] blackflow: haha, true that [20:56] keithzg: so, can you pastebin the error, reason for NOQUEUE? [20:57] tomreyn: ok will do, thanks [20:59] blackflow: https://paste.ubuntu.com/p/2mSSGcyNfY/ is the paste [21:02] * keithzg is now trying heeding that warning and changing the relay_domains to be more strict [21:03] keithzg: also address that warning in lines 2 and 3 [21:03] blackflow: Yeah that's the warning I'm talking about, heh [21:03] ah k [21:03] There was an overlap, with the emails coming from phabricator.gmcl.internal, and the relay_domains being gmcl.com and gmcl.internal [21:04] 'sfine :) [21:04] (The central issue here is, some emails, but not all emails, from our Phabricator instance aren't making it to users) [21:08] well you'll have to investigate on a per-case basis. in this case, if you have a domain in virtual_alias_domains, you need to have the address in the virtual_alias_maps too. the postfix virtual readme has a full explanation with examples. [21:08] keithzg: http://www.postfix.org/VIRTUAL_README.html [21:09] blackflow: Yeah, the weird thing is, in theory the emails are being sent to multiple users, and *one* of them is noreply@phabricator.gmcl.internal. But others are normal users, and they receive email from Phabricator fine in most circumstances. It's just this one subclass of email that's being rejected this way. [21:10] normal users as in they're "mailboxes" (virtual_mailbox_*) ? [21:10] aka destined for the virtual transport [21:11] Well, as in they're someguy@gmcl.com [21:11] (which this mailserver is the endpoint for, and they have "local" accounts (actually LDAP, but valid as real users on the system)) [21:32] Hmm. Everything's still being rejected. [21:33] Time to specify an alias for noreply@phabricator.gmcl.internal and get some of these emails, see what they're actually trying to do [21:36] guys when rebooting my ubuntu server 16.04 I get: [21:36] I thought we established that first. if you want to treat this envelope recipient as an alias, you need it in the map. alias = forwarder, btw, so it has to forward to (alias for) a valid address too, which can be a virtual mailbox, or an external relay'd transport [21:36] device not accepting address 36, -71 [21:37] any ideas what that problem might be and how to solve it? [21:38] actually there should be no USB dongle on that server. it's a hosted server [21:38] blackflow: Yeah but the thing is, noreply@phabricator.gmcl.internal is in theory only *one* of the recipients; the others are all valid. And merely receiving emails for noreply wouldn't be terribly helpful, since that wouldn't then get the emails to the actual users. [21:39] keithzg: there's always just one recipient. if your sending MUA had CC, then it ran a RCPT TO for each of them. CC has no meaning for postfix. [21:39] one recipient as in one RCPT TO envelope recipient. [21:39] blackflow: Well exactly, which is why I'm wondering if Phabricator is doing something terribly silly in this case.
[21:39] in _this_ case, you simply don't have the address in virtual alias maps, as the error is stating [21:40] and it's not checking anything else it seems which means you do have the domain, hence the expectation to consult the map [21:40] Particularly because of the very suspicious nature of there theoretically being three recipients (noreply, and two cc's) and the postfix log shows three copies being sent to noreply [21:40] Receiving noreply's emails wouldn't actually solve anything per se [21:40] irrelevant. your postfix has no idea where noreply@phabricator.gmcl.internal is, and how to deliver to it. [21:41] (according to the log you pastebin'd) [21:43] blackflow: Sure? But if as you say Postfix has no idea of "TO", then it shouldn't be seeing three copies sent to noreply; and of course any emails to noreply go nowhere, that's actually desired. [21:43] Hence I'm thinking maybe Phabricator is doing something wrong. [21:44] keithzg: I have no idea what your setup is. I'd really recommend you to pop into #postfix. read the /topic and prepare the logs and configs as specified by the !getting_help factoid. [21:45] but at face value, from that log entry, it's very simple. postfix has no idea how to deliver to that address. it's not defined in the virtual_alias_maps (to have an alias'd destination), but the domain is, hence postfix looking for it there. [21:46] keithzg: where do you want mail RCPT TO that address to be sent instead? [21:47] blackflow: Nowhere! [21:47] The emails shouldn't be going to noreply anyways, and emails to noreply should indeed be rejected. It should be seeing emails to actual users, and most of the time that's how Phabricator sends email, but for some reason here it's sending all three copies to a single TO, which is noreply, instead of the actual Phabricator users with their valid @gmcl.com addresses [21:47] well it IS going nowhere. postfix will either accept and deliver, or respond with NOQUEUE like it is now [21:47] Sure, exactly. [21:48] And hence why I'm thinking the problem at the very least involves the Phabricator side of things too, since it shouldn't be just sending to noreply [21:48] uhm, so why are we chasing the postfix red herring then :) you should check the MUA that's apparently trying to send to that address [21:48] "23:32 < keithzg> Hmm. Everything's still being rejected." <-- implies you don't want it rejected.... you should really get your story straight and start at the beginning, but in #postfix :) [21:48] Well that's why I'm trying to receive the emails, so I can be sure of their exact actual headers :) [21:49] yeah, no, sorry. please pop into #postfix and prepare all the details as explained by the !getting_help factoid there. thanks :) [21:49] I mean, but as you say it's looking like Postfix is probably a red herring [21:49] (you'll get better postfix support there, and this isn't an ubuntu issue per se ;) [21:50] keithzg: well it's rejecting which is apparently what you do want it to do. [21:50] blackflow: Yeah exactly, *postfix* seems to be acting according to design and intention, it's just somewhere beforehand where something's going wrong. [21:50] (Probably Phabricator, maybe nullmailer) [21:52] keithzg: "trying to receive emails" -- then just create the alias entry for a local or any other address. [21:52] receive it, see what's in it.
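A minimal sketch of what that alias entry could look like in postfix terms, using the domain from the log; the destination address is just an example of "a local or any other address":

    # /etc/postfix/main.cf (fragment)
    virtual_alias_domains = phabricator.gmcl.internal
    virtual_alias_maps = hash:/etc/postfix/virtual

    # /etc/postfix/virtual
    noreply@phabricator.gmcl.internal    someguy@gmcl.com

    # rebuild the lookup table and reload
    postmap /etc/postfix/virtual
    postfix reload

With the entry in place, mail addressed (RCPT TO) to noreply@phabricator.gmcl.internal gets forwarded instead of being rejected with NOQUEUE, so the full headers can be inspected.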
[21:53] blackflow: Yup, that is *precisely* what I've done [21:54] * keithzg now waits on the automated stuff that finds some files and commits a record of them via git-annex, which is then noticed by Phabricator's Diffusion, and then Herald rules send out the emails . . . it's all very Rube Goldbergian ;) [21:55] held by Canonical's duct tape :) [21:56] One of my favourite brands of duct tape :D [22:16] Things seem to be working well enough now and I have enough info to dive in and try and unpick the specifics myself; many thanks, blackflow :) And apologies for broadly ignoring your entreaties to bring my problem over to #postfix instead :D [22:42] keithzg: np, it's just that it looked all along as if you wanted the alias to actually work :) [22:45] blackflow: Yeah naw, as usual for me it's a subtly weirder problem with more moving parts involved, haha [23:24] anyone still using local mirrors these days ? [23:25] yeah I've got one [23:27] so much install traffic ? [23:27] will be pretty big I guess ? [23:29] gislaved: ~1.5 terabytes would probably do; mine's at 1.27 TB used at the moment: http://paste.ubuntu.com/p/KdKfBDMbts/ [23:29] sarnold heh, shared storage I believe ? [23:30] or single disk ? [23:30] or VM disk ? [23:30] ah that zfs pool is on a nine-disk array: three vdevs of triple-mirror spinning metal drives [23:31] :)
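Purely illustrative: a pool shaped like the one sarnold describes (three vdevs, each a three-way mirror) could be created along these lines; the pool and device names are placeholders rather than the actual layout, and /dev/disk/by-id names would normally be used instead of sdX:

    zpool create tank \
      mirror sda sdb sdc \
      mirror sdd sde sdf \
      mirror sdg sdh sdi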