sarnoldI skipped all the bits about serial cables, but the rest looked decent :)00:05
Meadsarnold: thanks, yeah. I know serial cabling is old and busted, but I'm stoked about having console access. It means I can really fuzz with it in my lab without worrying about losing network connectivity breaking my SSH access.00:28
sarnoldMead: yeah, and it's often the only way to get a handle on some specific kernel problems00:29
sarnoldMead: quite often server gear will have BMCs on board that can do serial over network and save you the hassle of the serial cable itself :)00:29
* Mead googles BMC serial over network00:34
Meadlooks like a potential security problem00:37
sarnoldMead: yeah, the usual implementation of BMC devices is poor enough that they are almost always given their own networks01:02
mwhudsonalso because ipmi sol is based on udp so is really fun to use on a congested network :)01:18
Ham62what kinds of things would I need to worry about breaking if I upgrade a system I've been running for 5 years on 14.04 to 16?04:56
lotuspsychjeHam62: i would say a clean apt without issues, and no ppa's enabled04:57
lotuspsychjeHam62: for services you're running, best to ask specifically so volunteers can think along04:57
lotuspsychjea backup is also a good idea04:57
Ham62well the most important things I have running right now are nginx, apache for some CGI stuff, and a gopher server04:58
Ham62the CGI stuff was mostly done with FreeBASIC and x86 assembly04:59
Ham62I'm mostly worried a bunch of packages won't support my CPU properly04:59
Ham62it's running on an Athlon XP04:59
Ham62and I have a couple of services that are started using the rc.local file05:01
lotuspsychjewhat about init Ham6205:01
Ham62are those going to break?05:01
lotuspsychjefrom 15.04 and higher its systemd now yeah05:01
Ham62oh darn05:01
Ham62yeah the gopher server is launched with socat at boot and I have a custom remote compiler server which is started through there on one of the user accounts05:02
ubottusystemd is the default init system for Ubuntu 15.04 onwards. For information on transitioning from upstart to systemd, see https://wiki.ubuntu.com/SystemdForUpstartUsers For a guide to basic service management with systemd, see https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units05:02
Ham62neither of those are real services though05:03
Ham62they're just processes I have running as a user in the background constantly05:03
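For reference: a process launched from rc.local (like the socat-based gopher server above) can be carried over to systemd as a small unit file. A minimal sketch; the file path, user name, port, and gopherd path are all hypothetical, not from the log:

```ini
# /etc/systemd/system/gopher.service  (all names/paths here are assumptions)
[Unit]
Description=Gopher server via socat
After=network.target

[Service]
User=ham62
ExecStart=/usr/bin/socat TCP-LISTEN:70,fork,reuseaddr EXEC:/usr/local/bin/gopherd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now gopher.service`; the SystemdForUpstartUsers wiki page covers the upstart-to-systemd mapping in detail.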
lotuspsychjeHam62: you might also wanna read up https://wiki.ubuntu.com/XenialXerus/ReleaseNotes05:05
lotuspsychjeHam62: anything alright there with the upgrade plan?09:58
SkyriderGreetings everyone10:15
SkyriderIn a pickle and need some help ^_^. I hope this is the right channel to ask, seeing it's nginx/www-data user related, but also linux related. What is the best way (with vsftp) to give access to a specific /var/www directory, with it running under www-data user/group? As I assume nginx requires both user/group to be www-data.10:15
blackflowSkyrider: so as I was saying, I'd make the dirs owned by the v/s/ftp user, and then put nginx in that user's group10:15
SkyriderThat I saw, wanted to reply to that :D10:16
=== DerRaiden is now known as DerRaiden`afk
blackflowby default dirs and files are group readable so nginx's www-data user, being in that separate user's group, will have read access.10:16
SkyriderWouldn't that mess with file/directory permissions though, seeing the owner/group gets changed?10:17
SkyriderI assume the owner/group differs then though?10:17
blackflowas long as you keep the dirs and files readable to their group, and www-data is in that group, all is fine10:17
Skyriderowned by whoever creates the files, while the group remains www-data.10:17
blackflowno, you don't change file/dir ownership to www-data10:18
blackflowexample:    chown myuser:myuser /var/www/some-website-dir ;           chmod -R g+r /var/www/some-website-dir ;       usermod -a -G myuser www-data ;10:19
SkyriderI know the last part adds the user to the group, what does g+r do.10:19
blackflowthe chmod is just for example here, g+r is default10:19
blackflowmakes the files and dirs readable to the group they belong to (in this case, myuser's, if that user is used to place the files via v/s/ftp)10:19
SkyriderShould I use mount though? Currently their FTP is set to the home directory.10:20
blackflowso as you upload files they will be owned by myuser:myuser  assuming that's the user you logged into v/s/ftp with10:20
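blackflow's chown/chmod/usermod recipe can be sanity-checked without root by watching the mode bits on a throwaway directory. A sketch; the root-only steps are left as comments, and "myuser" is a placeholder:

```shell
# Demo of the group-readable layout described above; runs as any user.
site=$(mktemp -d)/some-website-dir
mkdir -p "$site"
# chown -R myuser:myuser "$site"    # root only: files belong to the sftp user
# usermod -a -G myuser www-data     # root only: nginx's user joins that group
chmod 750 "$site"                   # dir: owner rwx, group r-x, others nothing
touch "$site/index.html"
chmod 640 "$site/index.html"        # file: owner rw, group r, others nothing
stat -c '%a %n' "$site" "$site/index.html"
```

With www-data in myuser's group, the g+r/g+x bits are what give nginx read access while "others" get nothing.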
blackflowno need for mounts, you can symlink the website dir under myuser's home somewhere10:21
SkyriderBetter to symlink, or directly set their directory to the specific /var/www/xxx directory.10:21
blackflowI'm assuming this is for the v/s/ftp access? those daemons running on the same machine, and this is not some nfs export10:21
SkyriderJust FTP access to give specific access to a specific site/sub-domain, ya.10:22
Skyrider**specific user10:22
blackflowSkyrider: I'm old school, and I'd symlink under myuser's  ~/public_html/somesite.com10:22
blackflowthough really... uh depending on what this is exactly, you can omit /var/www completely and use only the home dirs?10:23
blackflowI mean, if it's some packaged web app that installs under /var/www/ then yeah. if it's not, then just keep it all under ~/10:23
Skyrideroki, that's set... now to symlink it :D10:24
blackflowor whatever you want. point is, if I understood your problem correctly, you want v/s/ftp uploadable files to be readable to nginx?10:24
SkyriderMerely creating an ftp so users can edit, add, etc web files through the ftp.10:25
SkyriderSeeing I have multiple domains/sub-domains, need to create multiple FTP users for that.10:25
blackflowthen you don't want users to log in as www-data. you want this instead, nginx in those users' groups so it can read their files. separation of concerns and least privilege principle.   and this covers only static sites.10:26
SkyriderThey don't need a /home/ directory though10:26
Skyrideras the current ftp server requires that ( 500 OOPS: cannot change directory:/home/xxxx )10:26
blackflowthey DO need A "home" directory. why not /home/<username>10:26
blackflowthat doesn't sound right10:26
SkyriderThe directory does not exist, hence the error :p10:26
blackflowah so you created users without -m ?10:27
SkyriderIndeed. I like being organized.10:27
SkyriderIf I have users like test-web, test-forums, test-dev .. it looks weird in my eyes :p10:27
blackflowwell having users have a home is wise. you contain them there. you can also configure sftp with chroot to their homes, instead of insecure (vs)ftp10:27
SkyriderFor having a home directory that is.10:27
SkyriderI tried SFTP before, bit weird. They kept having access to the / directory; even though it wasn't writable for them, they had read access.10:28
blackflowyeah because you need to explicitly chroot them for sftp access10:28
SkyriderIs it that bad to use FTP instead of SFTP? Even though it's not secure, I can make it more secure by altering the port / whitelisting.10:29
blackflowgoogle up "chrooting sftp users". in essence, you have, say, /home/<username>/   as their home and thus chroot. that dir needs to be owned by root and not group writable. dirs _under_ it (like, say, ~/public_html) can be normal user owned dirs they write into.10:29
blackflowSkyrider: problem with FTP is that the data channel is never encrypted. while the control channel is (where log-in happens), data isn't.10:30
SkyriderWhat about FTP TSL.10:30
SkyriderI never really had to create multiple users over (S)ftp before, hence I'm asking all this :D10:31
blackflowanyway, in addition to root owned chroot, you need to set up sshd for them to force the chroot and internal-sftp command only so they can't ssh in10:31
blackflowFTPS (FTP over TLS/SSL) is exactly what I was talking about. control channel encrypted, data channel isn't.10:31
blackflowwith pure FTP, not even control channel is, so you get plaintext passwords over the wire10:31
blackflowFTPS = FTP over TLS/SSL;   SFTP = a separate file-transfer protocol that runs over SSH.10:32
blackflowsftp is really superior. it's well supported by programs like filezilla if your users are windowsites. you can force keys instead of passwords, and everything is nicely encrypted.10:34
SkyriderIs it a hassle to properly set up sftp with proper rights / access only to specific /var/www/ directories with no ssh access over ftp? :P10:35
blackflowno. three lines of config in sshd_config, and a chown+chmod when you create their home dirs.10:36
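The "three lines" are roughly the following sshd_config stanza; the group name "sftponly" is an assumption (match on whatever group your sftp users share):

```
# sshd_config sketch ("sftponly" is a hypothetical group name)
Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
```

`%h` expands to the user's home directory; `ForceCommand internal-sftp` is what prevents them from getting a shell.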
blackflowbut then you make the /var/www/ dirs as their homes. personally I'd just go with /home/ . /var/www/ is primarily for packaged web applications.10:36
Skyriderand at the end symlinks, got it.10:37
SkyriderGuess I'll remove this ftp package and go with sftp10:37
blackflowno, I'd go with /home/ period. no symlinks10:37
blackflow /home/<username>/public_html/website-dir.com10:37
SkyriderI thought you said you were oldskool10:38
blackflowyes, and this is it with public_html10:38
Skyrider.. /var/www/xxx -> symlink /home/user10:38
blackflowthe old school part was about public_html :)10:38
Skyriderah ^^10:38
SkyriderI do prefer having things in /var/www though for all web related stuff.10:38
blackflowwell you can do in reverse. symlink from /var/www/ to their /home/user/public_html/site.com   dirs :)   look, that part is really whatever you feel most comfortable with. the important thing is that the dirs/files are _user_ owned and that nginx is in their groups for read access.10:39
blackflowbut again, that's for static sites. with php it becomes a bit more tricky if you want it properly secured.10:40
SkyriderAll php stuff :p10:41
blackflowthe rabbit hole deepens then :)10:41
blackflowwhat I do in this case is run a php-fpm pool per user, as that user, with an apparmor policy that prevents _writing_ except in specified dirs. that I can do because we control the application and know exactly what those dirs are.10:42
blackflowif that's not an option for you, then php-fpm pool per user, as that user, is the best you can do, but then php can change its own code and you're vulnerable.10:43
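A per-user pool is a single file under php-fpm's pool.d. A sketch, where the user name, socket path, and PHP version in the path are all hypothetical:

```ini
; /etc/php/7.2/fpm/pool.d/myuser.conf  (all names here are assumptions)
[myuser]
user = myuser
group = myuser
listen = /run/php/php-fpm-myuser.sock
listen.owner = www-data          ; nginx connects to the socket as www-data
listen.group = www-data
pm = ondemand
pm.max_children = 5
```

nginx's fastcgi_pass for that user's sites then points at the per-user socket, so PHP runs with that user's privileges only.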
blackflowat any rate, you _will_ want to chmod o-rwx their homedirs -- forbid listing and read access to other users, otherwise they can upload PHP code that will scan and sniff other users files10:45
SkyriderGood to know, thanks. I'll look into that. As for your last line, I do trust this user :)10:45
SkyriderI have to go for now, but I'll stick in this channel until I set up my irc bouncer again. I appreciate your time in helping me out, gotta pick up my wife. I'll be back soon :), again, thanks!10:46
blackflowso combined with sftp chroots, you have   chown root:myuser  /home/user-home-dir ; chmod 750 /home/user-home-dir/   and precreate a "public_html" (or whatever name) dir in which they will have write access in their home (as their home roots aren't writable to them)10:46
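That layout can be sketched against a temp directory; the chowns need root and are left as comments, and "myuser" is a placeholder:

```shell
home=$(mktemp -d)              # stands in for /home/<username>
# chown root:myuser "$home"    # root only: the chroot itself must be root-owned
chmod 750 "$home"              # user can enter and read, but not write, the chroot
mkdir "$home/public_html"      # the one dir the user actually writes into
# chown myuser:myuser "$home/public_html"   # root only
chmod 755 "$home/public_html"
stat -c '%a %n' "$home" "$home/public_html"
```

The root-owned, non-writable chroot satisfies sshd's ChrootDirectory ownership check; the user-owned public_html underneath is where uploads land.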
blackflow"I do trust this user"   -- famous last words, aka. "pics taken 5 seconds before disaster"  :)10:47
oskiewhen is "apt-get update" run automatically?11:28
tomreynit depends on several factors, such as your ubuntu version11:34
oskieno..wait.. xenial11:41
tomreynoskie: systemctl list-timers apt-daily*11:50
oskietomreyn: awesome, that's what i've been looking for for a while11:52
tomreynit's been there all the time, whispering your name!11:58
ahasenackgood morning12:11
AvidWolf43sarnold: which logs? I believe the error has to do with azure really long dns names maxing out. https://feedback.azure.com/forums/216843-virtual-machines/suggestions/10197480-the-azure-vm-internal-dns-domain-names-are-too-lon13:48
=== Eickmeyer is now known as Eickmeyer___
Skyriderblackflow: Back14:37
SkyriderWow this is confusing.14:41
SkyriderNo matter what I try to add in sshd config, I always get "Network error: Software caused connection abort" when trying to connect with the user.14:42
rbasakSkyrider: check auth.log14:43
rbasakSkyrider: another deeper approach is to run sshd manually in debugging mode on a high port.14:43
Skyrider" bad ownership or modes for chroot directory component "/var/www/"" that explains it.14:45
blackflowwell, I did mention a few times the chroot dir needs to be owned by root :)14:49
blackflowand mustn't be group/other writable14:49
SkyriderThat's the odd thing, it is owned by root.14:51
SkyriderAnd I believe the chmod is set to .. 755? Somewhere according to the internet.14:51
SkyriderDo you have a tutorial I can follow by any chance?14:51
blackflowyou'd have to paste the full sshd_config in question14:51
Ussatthat statement scares me --> Somewhere according to the internet14:51
SkyriderI'm running the cmds in a test directory Ussat14:52
SkyriderAnd I know what the cmds do :p, plus.. I have a backup ready just in case.14:52
Ussatfair nuff, still14:52
blackflowI'd like to stay and help, but there's a mtb trail with my name on it. bbl.14:52
SkyriderFor example, I ran the tutorial: https://45squared.com/setting-sftp-ubuntu-16-04/ - Yet doesn't work properly.14:52
SkyriderNo worries bf :)14:53
SkyriderThere is something I noticed.15:20
SkyriderWhenever a directory's owner is not www-data, I get "errors" / warnings like "The $cfg['TempDir'] (./tmp/) is not accessible."15:21
SkyriderI could of course alter the permissions of the directory to fix that, but how would user/group stay as www-data, regardless the user adding/altering files?15:22
tewardanyone ever run into a case where even if you have installed everything and disabled cloud.cfg's preserve_hostname module by setting it to false, the system still resets its hostname every time?15:38
teward18.04.2 server from the SUbiquity installer15:38
SkyriderI give up >_>15:41
tewardSkyrider: you wouldn't be able to set the user of the file15:42
tewardbut you COULD set the group with stickybit on the directories15:42
tewardanything created in a directory with the group stickybit would get group www-data15:42
tewardbut that's just a 'hack'15:42
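A note: the "group stickybit" teward describes is the setgid bit (a leading 2 in the mode); with it set on a directory, new files inside inherit the directory's group rather than the creator's primary group. Demonstrable without root (the chgrp to www-data needs group membership and is commented out):

```shell
d=$(mktemp -d)/shared
mkdir "$d"
# chgrp www-data "$d"   # root/member only: give the dir the web group
chmod 2775 "$d"         # leading 2 = setgid on the directory
stat -c '%a' "$d"       # prints 2775
```

Once the dir is group www-data with setgid set, anything created under it gets group www-data automatically, which is exactly the "hack" described.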
SkyriderYea, I'm familiar with the group cmd. Though how would I best fix this issue? The only thing I want is to create a sftp user under a specific /var/www/ directory, but if the owner www-data is changed to someone else, file permission errors on the web application will start to appear unless the file permissions are changed.15:43
SkyriderApparently the www-data has the "proper" rights to auto solve that issue right away.15:43
cryptodanSkyrider: what are you trying to do with sftp in /var/www?15:44
tomreynteward: during the past two or three days there were two people around on irc who reported that the system hostname they configured got reset. i don't think they knew about "cloud.cfg's preserve_hostname"  (i don't, or didn't), though. one of them determined cloud-init to be the source of this issue.15:45
tomreynalso i think there's a related open bug report15:45
tewardtomreyn: yeah i had preserve_hostname set to false though15:45
tewardtomreyn: and it STILL reset it15:45
tewardi just got angry at it and yoinked cloud-init out of the equation15:45
tewardapt-get remove'd it and it worked15:45
tewardtomreyn: got a link to the bug per chance?15:45
tomreynthat's what the other user did, too15:45
tomreyni was afraid you'd ask this15:46
tewardtomreyn: i also think it's intermittent15:46
tewardbecause two servers were both deployed with the same ISO15:46
tewardone had this happen15:46
tewardthe other didn't15:46
tewardonly difference was a really short hostname for the one15:46
tewardand that's the one where cloud-init was being derp15:46
Skyridercryptodan: Setting up 3 different SFTP users to access 3 sub-domains.15:46
tomreynteward: ugly. :-/ ok, i'll look for this bug report, but no promises15:49
tewardtomreyn: never expect any promises :P15:50
SkyriderI saw somewhere last week on the internet that there's a package that checks a directory at all times and changes the user/group if it has changed.15:51
SkyriderAny idea what it might be called?15:52
tomreynteward: bug 178086716:07
ubottubug 1780867 in subiquity "hostname unchangeable / some daemon changes and resets /etc/hostname" [Critical,Fix committed] https://launchpad.net/bugs/178086716:07
tomreynalso bug 1770451 might be related (but that's just a random find while searching for the other)16:08
ubottubug 1770451 in cloud-init (Ubuntu) "hostname not set: Failed to create bus connection: No such file or directory" [Undecided,Incomplete] https://launchpad.net/bugs/177045116:08
=== setuid_ is now known as setuid
tomreynhmm 1780867 isn't really new though (nor its dupe), nor was it updated during the past 3 days. but i think this is what i had in mind.16:16
blackflowSkyrider: nginx won't create such dirs, so that must be some PHP app. of course, the dir must be writable to the user php-fpm is running as. for such random dirs you can't know in advance, you should run php-fpm as the user owning the dir, not as www-data.18:04
tewardblackflow: though... in a default setup, php-fpm *is* running as www-data18:05
blackflowyes but the dirs must be owned by the sftp user in order to freely upload php apps18:05
blackflowthat's the use case Skyrider has18:05
blackflowso, nginx as www-data, in the supplemental group of the sftp user. php-fpm running as that user in full.18:06
blackflow(if there's need for both PHP and sftp user to write files)18:06
Skyridersftp chroot is weird.18:18
SkyriderThe main directory has to be root, I get that. but in that main directory, no one can create directories or files, because that specific directory is owned by root.18:19
SkyriderAll the other sub-directories inside the root directory can be altered.18:19
blackflowSkyrider: it's not, once you understand why the (ch)root must be root owned. one way to escape a chroot is to double-chroot with symlinks, so openssh enforces no-write, no-ownership of the (ch)root10:19
SkyriderSo instead of /var/www/testwebsite/subdomain I need to have /var/www/testwebsite/domain/subdomain18:20
SkyriderBecause root is messing up the main directory's permissions18:20
blackflowSkyrider: well see, that's why I recommended you to use /home/user/ as (ch)root, and then have ~/public_html/ for all their sites.18:20
SkyriderI'll consider it ^^18:21
blackflowbut you wanted your way, so... :)   you'll just have to do the same. a "base" chroot dir/home for the sftp user, and a dir they can upload their sites to18:21
Skyriderthe base is /var/www :D10:21
blackflowjust for one user?18:22
SkyriderEach domain/subdomain its own user.18:22
SkyriderFor the sake that not a single user has access to all.10:22
blackflowwell I really recommend you to use /home/ . /var/www was never meant to be used by random sftp user accounts. it's the default place to put packaged web applications, root owned, www-data accessible, and not via sftp.18:23
blackflowstandard for decades has been ~/public_html, from early apache years, carried over by shared hosting industry, all the commercial and non-commercial hosting panels, etc...   /home/<username>/ as home dir, sftp chroot, and then public_html   aka htdocs on some platforms, as "docroot" for apache18:24
SkyriderI'm actually using incron :D18:26
blackflowand if you _do_ insist on /var/www/ you will _still_ need to replicate the structure.   /var/www/<user>/sites/www.somesite.com/18:27
blackflowname the "sites" subdir as you wish.18:27
Skyriderreplicate the structure?18:28
blackflowyou need one extra user-owned dir under chroot18:28
blackflowIF you want to allow them to create subdirs for sites. I don't know how you intend to configure nginx to run with that, you'd still need root to add a server {} stanza for each domain18:29
blackflowdo yourself a favor and don't reinvent the wheel, do what the industry has been doing for many years now.   /home/<username>/public_html-or-sites-or-htdocs-orwhatever/{somesite.com,anothersite.com,foobarbaz.com}18:31
adacI have locked my docker-ce packages. But now I want to do a release upgrade to 18.0418:46
adacbut I do get: Please install all available updates for your release before upgrading18:47
adacBut I do not want to upgrade this package18:47
adacit should stay on the same version18:47
Skyriderblackflow: To make things simple.. I don't really want to bother with it much anymore; I've made a single user to access a single domain with all its subdomains.18:49
SkyriderI got it to work, though all sub directories appear to be empty when I log into the user.18:49
Skyridersetfacl appears to be the cause18:51
SkyriderI give up -_-18:53
SkyriderNot sure why it's such a bother to simply add a sftp user with access to a specific directory, having the ability to read/write and maintain the original user/owner..18:53
SkyriderI'm not even sure why it's displaying 0 directories/files right now.18:55
tewardadac: chances are you will have to upgrade the docker packages anyways because of new libraries/dependencies for build and runtime that Docker has to build against (outdated ones won't work)18:58
adacteward, I already have this docker version running with bionic (I think, need to really check)18:59
tewardmaybe the same *version* of the Docker codebase but it still has to build against *newer* libraries in Bionic vs. Xenial.19:00
tewardso it's *different* at the binary level19:00
adacteward, thing is the newest docker version will not work with my kubernetes version19:00
tewardbut not the code level19:00
adackk I see19:00
tewardfor do-release-upgrade it'll still complain, yes.  You might have to disable your Docker repository you're using to get it and do a full `apt-get update && apt-get dist-upgrade` afterwards then install the newer docker package version for the newer Ubuntu.  However I can't guarantee this'll work19:02
tewardKubernetes is a little tricky.19:02
tewardadac: the other thing is, what are you on now, 16.04?19:02
tewardwhy upgrade to 18.04 if things're just working?19:02
adacteward, I'm re-setting up my whole infrastructure and I have this one single host that is part of this new infrastructure already but still has 16.0419:03
adacand I would really like to have the same versions everywhere19:03
tewardi'm assuming you don't want to upgrade kubernetes then :P19:04
tewardwhich you'd probably end up having to do19:04
adacteward, actually I'm using Rancher. Rancher supports 1.13.5 which at most supports docker 18.06.319:06
adacyes at some point I will upgrade kubernetes anyway that is true19:06
adacteward, I'm now trying out what you suggested19:09
adacthis host can be down a bit, no problem if something is not working19:09
tewardbackup first19:09
tewardjust in case :P19:09
adacteward, backup is running19:16
blackflowSkyrider: I'm sorry but it's not bothersome at all. it's very, very simple. User owns files. nginx's www-data is in user's group (supplemental!). php-fpm runs as user (one pool per user). User's home dir is root owned and there's a subdir (or more) where the user can upload stuff. Very, very simple.19:39
blackflowSkyrider: no idea why you invoked ACLs, that will just unnecessarily complicate matters to no end.19:40
sarnoldit's amazing the flexibility you can have with the simplicity of unix acls19:40
blackflowyes but it's rather hard to maintain, the ACLs are not immediately obvious, not visible in ls, you have to know they're there. personally I prefer to put a nice, auditable apparmor profile instead of fiddling with ACLs19:41
blackflowMAC > DAC, and ACLs are DAC on steroids. still DAC tho.19:41
sdezielblackflow: I've yet to try php-fpm Apparmor hat support, do you have some experience with it?19:42
blackflowSkyrider: btw, just so we're on the same page: the approach I'm preaching here, I've been doing for many years, and currently I have that very setup for hundreds of clients and their websites.19:42
blackflowsdeziel: nope. My setup is simple enough where "owner" is the only variation among pools. for more complex stuff I intend to have custom named profiles via systemd units, and one pool master per user, per unit, per profile.19:44
blackflowin other words, I wouldn't go with one process changing hats, but statically fix processes to profiles.19:44
sdezielblackflow: OK. The hat thing is a per-pool thing19:45
blackflowyeah. I disliked hats even with apache and selinux, years ago. I prefered MLS instead19:45
sdezieloh I see, you want multiple masters19:46
blackflowthe masters themselves don't really add any overhead and I can individually restart pools, unlike with one master19:46
sdezielinteresting idea and side effect. Too bad I was looking for a reason to try Apparmor hats19:48
Skyriderblackflow: Can I undo the acls?19:51
blackflowSkyrider: sure, -b flag to setfacl19:52
SkyriderThanks :)19:53
SkyriderJust curious.. you say to set it to the user's home directory instead.19:53
SkyriderHow does the user/group work out with the web files, exactly?19:54
Skyriderweb files owned by the user, and as you mentioned, www-data in their group?19:54
blackflowSkyrider: web files must be owned by the user so that the default permissions (755 dirs, 644 files) allow them to write. with PHP in the game, you need to drop access to "others", so 750 and 640. In that case nginx (www-data) loses access, so you need to add www-data to the users' groups, so nginx can read the files.19:58
blackflowthat's also the least privilege principle in action. and of course, php-fpm process must run as the user, in order to have exclusive access to user/site files, and in order to write them (uploads).20:00
=== keithzg_ is now known as keithzg
keithzgHmmm, so postfix rejects mail if the domain is in the virtual_alias_domains but the TO address isn't listed in the virtual alias map, even if there's CC's on that email that are? That is surprising to me!20:37
blackflowpostfix doesn't care about CC. and the alias map is really the authoritative here, just the domain won't work20:39
dlloydenvelope to or message to?20:39
blackflowhas to be envelope, postfix doesn't care about To header either20:40
dlloydyeah, thats where i was leading to20:40
keithzgHmm I wonder how it was working *before* I set up the virtual_alias stuff, that certainly seems to have been when these emails started getting rejected rather than passed on.20:40
paulatxanybody happen to know how long the Ubuntu 16.04 EC2 images will support new hardware? The first graphic on this page: https://www.ubuntu.com/about/release-cycle makes it look like hardware support has already stopped for 16.04 but I'm not sure if that also applies to the AWS specific kernels which are also based off of the 4.4.0 GA kernel, not the 4.15 HWE kernel20:41
* keithzg is having a hard time digging through it all and figuring out what's going on since the verbosity for amavis is set so high, heh, still failing to understand why it dies from time to time20:41
blackflowkeithzg: postfix has its own rather verbose logs, you shouldn't consult amavis at all. there's #postfix here on freenode if you need more help with it.20:42
keithzgblackflow: Well it's /var/log/mail.log I'm looking at, and `journalctl -u postfix` doesn't have anything20:43
blackflowkeithzg: wrong unit, postfix.service. you need postfix@ for the instance running iirc?20:44
blackflowanyway, can you pastebin the problem entries? though really, I recommend #postfix for this particular issue, it probably isn't specific to ubuntu defaults20:45
tomreynpaulatx: so did you read this?  https://wiki.ubuntu.com/Kernel/LTSEnablementStack#Ubuntu_16.04_LTS_-_Xenial_Xerus20:46
tomreynpaulatx: oh yes, according to what you wrote you probably did.20:46
tomreynso the aws images can't be used with the HWE kernel?20:47
tomreyni mean, you can't just install it like on regular ubuntu?20:47
keithzgblackflow: Literally none of the postfix@ instances show anything, went through them one by one, but bizarrely postfix@* works, so apparently tab completion for journalctl leaves out whatever one I actually need to use?20:49
blackflowkeithzg: postfix@-.service20:52
paulatxtomreyn: well the AWS images obtained from https://cloud-images.ubuntu.com/locator/ec2/ have the AWS tuned kernel enabled by default as detailed here: https://blog.ubuntu.com/2017/04/05/ubuntu-on-aws-gets-serious-performance-boost-with-aws-tuned-kernel.  I'm trying to figure out when the support for new hardware will stop on those AWS tuned kernels20:52
blackflowkeithzg: that's the default template instance20:52
blackflowunless of course you have something else set up, this should be the default20:52
keithzgblackflow: Huh. Yeah, that works. Just weirdly isn't one of the many things listed when I try and tab-autocomplete `journalctl -u postfix@`20:53
blackflowkeithzg: WorksForMe(tm) :)20:54
keithzgblackflow: Hah!20:54
keithzgI'm pleasantly surprised the wildcard approach worked, too; that's a bit more user-friendliness and standard unsurprising handling than I normally expect from the systemd gang20:55
tomreynpaulatx: hmm, sorry, that's indeed beyond my horizon. i suggest you ask the same question here again tomorrow during UK business hours20:55
keithzgblackflow: Funny enough, on another server it *does* work fine!20:55
blackflowkeithzg: patience and they will find a way to disappoint ;)20:56
keithzgblackflow: haha, true that20:56
blackflowkeithzg: so, can you pastebin the error, reason for NOQUEUE?20:56
paulatxtomreyn: ok will do, thanks20:57
keithzgblackflow: https://paste.ubuntu.com/p/2mSSGcyNfY/ is the paste20:59
* keithzg is now trying to heed that warning and change the relay_domains to be more strict21:02
blackflowkeithzg: also address that warning in lines 2 and 321:03
keithzgblackflow: Yeah that's the warning I'm talking about, heh21:03
blackflowah k21:03
keithzgThere was an overlap, with the emails coming from phabricator.gmcl.internal, and the relay_domains being gmcl.com and gmcl.internal21:03
blackflow'sfine :)21:04
keithzg(The central issue here is, some emails, but not all emails, from our Phabricator instance aren't making it to users)21:04
blackflowwell you'll have to investigate on a per-case basis. in this case, if you have a domain in virtual_alias_domains, you need to have the address in the virtual_alias_maps too. the postfix virtual readme has full explanation with examples.21:08
blackflowkeithzg: http://www.postfix.org/VIRTUAL_README.html21:08
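The rule that README spells out: once a domain is listed in virtual_alias_domains, every deliverable address in it must appear in the alias map. A sketch using keithzg's domain; the right-hand-side address is a made-up example, not from the log:

```
# main.cf
virtual_alias_domains = phabricator.gmcl.internal
virtual_alias_maps = hash:/etc/postfix/virtual

# /etc/postfix/virtual  (the forwarding target is a hypothetical example)
noreply@phabricator.gmcl.internal    some-real-user@gmcl.com
# rebuild the lookup table after editing:  postmap /etc/postfix/virtual
```

Any RCPT TO address in that domain that has no entry in the map is rejected with exactly the NOQUEUE error seen in the paste.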
keithzgblackflow: Yeah, the weird thing is, in theory the emails are being sent to multiple users, and *one* of them is noreply@phabricator.gmcl.internal. But others are normal users, and they receive email from Phabricator fine in most circumstances. It's just this one subclass of email that's being rejected this way.21:09
blackflownormal users as in they're "mailboxes" (virtual_mailbox_*) ?21:10
blackflowaka destined for the virtual transport21:10
keithzgWell, as in they're someguy@gmcl.com21:11
keithzg(which this mailserver is the endpoint for, and they have "local" accounts (actually LDAP, but valid as real users on the system))21:11
keithzgHmm. Everything's still being rejected.21:32
keithzgTime to specify an alias for noreply@phabricator.gmcl.internal and get some of these emails, see what they're actually trying to do21:33
adacguys when rebooting my ubuntu server 16.04 I get:21:36
blackflowI thought we established that first. if you want to treat this envelope recipient as alias, you need it in the map. alias = forwarder, btw, so it has to forward to (alias for) a valid address too, which can be a virtual mailbx, or an external relay'd transport21:36
adacdevice not accepting address 36 7121:36
adac-71 actually21:37
adacany ideas what that problem might be and how to solve it?21:37
adacactually there should be no USB dongle on that server. is a hosted server21:38
keithzgblackflow: Yeah but the thing is, noreply@phabricator.gmcl.internal is in theory only *one* of the recipients; the others are all valid. And merely receiving emails for noreply wouldn't be terribly helpful, since that wouldn't then get the emails to the actual users.21:38
blackflowkeithzg: there's always just one recipient. if your sending MUA had CC, then it ran a RCPT TO for each of them. CC has no meaning for postfix.21:39
blackflowone recipient as in one RCPT TO envelope recipient.21:39
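(Editor's note: a small illustration, not from the log, of blackflow's point that "CC has no meaning for postfix" — the sending client flattens the To and Cc headers into one envelope recipient list and issues a separate RCPT TO for each address; Postfix only ever sees those envelope recipients. Addresses are hypothetical, modeled on the ones in this conversation.)

```python
from email.message import EmailMessage
from email.utils import getaddresses

msg = EmailMessage()
msg["From"] = "phabricator@gmcl.internal"          # hypothetical sender
msg["To"] = "noreply@phabricator.gmcl.internal"
msg["Cc"] = "alice@gmcl.com, bob@gmcl.com"
msg.set_content("Herald notification")

# This mirrors what smtplib.SMTP.send_message does before issuing RCPT TO:
# gather every address from the To and Cc headers into one flat list.
envelope_rcpts = [addr for _, addr in
                  getaddresses(msg.get_all("To", []) + msg.get_all("Cc", []))]
print(envelope_rcpts)
# -> ['noreply@phabricator.gmcl.internal', 'alice@gmcl.com', 'bob@gmcl.com']
```

Each address in that list becomes its own RCPT TO command, so the MTA accepts or rejects every recipient independently; the Cc header itself is just message content by that point.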
keithzgblackflow: Well exactly, which is why I'm wondering if Phabricator is doing something terribly silly in this case.21:39
blackflowin _this_ case, you simply don't have the address in virtual alias maps, as the error is stating21:39
blackflowand it's not checking anything else it seems which means you do have the domain, hence the expectation to consult the map21:40
keithzgParticularly because of the very suspicious nature of there theoretically being three recipients (noreply, and two cc's) and the postfix log shows three copies being sent to noreply21:40
keithzgReceiving noreply's emails wouldn't actually solve anything per se21:40
blackflowirrelevant. your postfix has no idea where noreply@phabricator.gmcl.internal is, and how to deliver to it.21:40
blackflow(according to the log you pastebin'd)21:41
keithzgblackflow: Sure? But if as you say Postfix has no idea of "TO", then it shouldn't be seeing three copies sent to noreply; and of course any emails to noreply go nowhere, that's actually desired.21:43
keithzgHence I'm thinking maybe Phabricator is doing something wrong.21:43
blackflowkeithzg: I have no idea what your setup is. I'd really recommend you pop into #postfix. read the /topic and prepare the logs and configs as specified by the !getting_help factoid.21:44
blackflowbut at face value, from that log entry, it's very simple. postfix has no idea how to deliver to that address. it's not defined in the virtual_alias_maps (to have an alias'd destination), but the domain is, hence postfix looking for it there.21:45
blackflowkeithzg: where do you want mail RCPT TO that address, be sent instead?21:46
keithzgblackflow: Nowhere!21:47
keithzgThe emails shouldn't be going to noreply anyways, and emails to noreply should indeed be rejected. It should be seeing emails to actual users, and most of the time that's how Phabricator sends email, but for some reason here it's sending all three copies to a single TO, which is noreply, instead of the actual Phabricator users with their valid @gmcl.com addresses21:47
blackflowwell it IS going nowhere. postfix will either accept and deliver, or respond with NOQUEUE like it is now21:47
keithzgSure, exactly.21:47
keithzgAnd hence why I'm thinking the problem at very least involves the Phabricator side of things too, since it shouldn't be just sending to noreply21:48
blackflowuhm, so why are we chasing the postfix red herring then :)  you should check the MUA that's apparently trying to send to that address21:48
blackflow"23:32 < keithzg> Hmm. Everything's still being rejected."  <-- implies you don't want it rejected.... you should really get your story straight and start at the beginning, but in #postfix :)21:48
keithzgWell that's why I'm trying to receive the emails, so I can be sure of their exact actual headers :)21:48
blackflowyeah, no, sorry. please pop into #postfix and prepare all the details as explained by the !getting_help factoid there. thanks :)21:49
keithzgI mean, but as you say it's looking like Postfix is probably a red herring21:49
blackflow(you'll get better postfix support there, and this isn't ubuntu issue per se ;)21:49
blackflowkeithzg: well it's rejecting which is apparently what you do want it to do.21:50
keithzgblackflow: Yeah exactly, *postfix* seems to be acting according to design and intention, it's just somewhere beforehand where something's going wrong.21:50
keithzg(Probably Phabricator, maybe nullmailer)21:50
blackflowkeithzg: "trying to receive emails" -- then just create the alias entry for a local or any other address.21:52
blackflowreceive it, see what's in it.21:52
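(Editor's note: a sketch of the debugging step blackflow suggests — temporarily aliasing the rejected address to a readable mailbox so the message, with its full headers, can be captured and inspected. The destination mailbox here is hypothetical; the alias file path is the conventional default.)

```
# /etc/postfix/virtual -- temporary debug entry: capture mail for the
# otherwise-rejected address in a mailbox you can read
noreply@phabricator.gmcl.internal    keithzg@gmcl.com

# apply the change:
#   postmap /etc/postfix/virtual && postfix reload
# remove the entry again once the captured headers have been inspected
```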
keithzgblackflow: Yup, that is *precisely* what I've done21:53
* keithzg now waits on the automated stuff that finds some files and commits a record of them via git-annex, which is then noticed by Phabricator's Diffusion, and then Herald rules send out the emails . . . it's all very Rube Goldbergian ;)21:54
blackflowheld by Canonical's duct tape :)21:55
keithzgOne of my favourite brands of duct tape :D21:56
keithzgThings seem to be working well enough now and I have enough info to dive in and try and unpick the specifics myself; many thanks, blackflow :) And apologies for broadly ignoring your entreaties to bring my problem over to #postfix instead :D22:16
blackflowkeithzg: np, it's just it looked all the way as if you wanted the alias to actually work :)22:42
keithzgblackflow: Yeah naw, as usual for me it's a subtly weirder problem with more moving parts involved, haha22:45
gislavedanyone still using local mirrors these days ?23:24
sarnoldyeah I've got one23:25
gislavedso much install traffic ?23:27
gislavedwill be pretty big I guess ?23:27
sarnoldgislaved: ~1.5 terabytes would probably do; mine's at 1.27 TB used at the moment: http://paste.ubuntu.com/p/KdKfBDMbts/23:29
gislavedsarnold heh, shared storage I believe ?23:29
gislavedor single disk ?23:30
gislavedor VM disk ?23:30
sarnoldah that zfs pool is on a nine-disk array: three vdevs of triple-mirror spinning metal drives23:30
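(Editor's note: a hypothetical sketch of the nine-disk layout sarnold describes — three vdevs, each a three-way mirror; with triple mirroring, roughly a third of raw capacity is usable, consistent with a ~1.5 TB mirror on such a pool. Pool and device names are invented for illustration; real setups should use stable /dev/disk/by-id names.)

```
# three vdevs of triple-mirror spinning drives, as described above
zpool create tank \
    mirror sda sdb sdc \
    mirror sdd sde sdf \
    mirror sdg sdh sdi
```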