[06:03] <lordievader> Good morning
[06:58] <cpaelzer> good morning lordievader
[07:35] <lordievader> Hey cpaelzer , how are you doing?
[07:37] <cpaelzer> pretty well for a monday
[07:45] <necrophcodr> I've got a server where my users do not have a /run/user/$UID directory
[07:45] <necrophcodr> Does that need manual creation now?
[11:15] <andreas> ...and here it goes :) http://reqorts.qa.ubuntu.com/reports/ubuntu-server/merges.html
[12:16] <cpaelzer> andreas: well no BB yet to start right?
[12:16] <andreas> cpaelzer: right, I was just curious about what was piling up
[12:16] <andreas> since I got some emails about bugs being fixed in debian
[12:16] <cpaelzer> as usual, everything :-)
[12:17] <cpaelzer> honestly early in the cycle we likely pick the few extra complex ones we know to do the transitions right
[12:17] <cpaelzer> those that "just" need a bump will come in a trice or only later
[12:17] <cpaelzer> (opinion)
[12:17] <necrophcodr> Is it possible to tell dpkg not to run any of the {pre,post}inst scripts?
[12:18] <necrophcodr> That is, when installing packages using apt
[12:21] <cpaelzer> necrophcodr: I don't know a good way to globally disable them, but you could modify (exit 0 in line 1) them in /var/lib/dpkg/info as needed
[12:22] <cpaelzer> for dpkg install that should be fine, as it only unpacks them again if not there (so I thought)
[12:22] <necrophcodr> cpaelzer: are all package scripts in /var/lib/dpkg/info before packages are downloaded and installed?
[12:22] <cpaelzer> not sure if apt refreshes the files in any case
[12:22] <cpaelzer> necrophcodr: no they are part of the download
[12:22] <cpaelzer> necrophcodr: you could run apt until it fails
[12:22] <cpaelzer> necrophcodr: then modify the file as needed
[12:22] <cpaelzer> necrophcodr: and then dpkg install those that you modified
[12:23] <cpaelzer> to continue with apt afterwards
[12:23] <necrophcodr> it doesn't have to be a good way either, i'm okay with hacky bullshit. i guess i'll have to do it multi-stage: one stage downloading the packages and "fixing" the scripts, and one actually installing them
[12:23] <necrophcodr> unless downloading them doesn't install the scripts, which is probably the case
[12:29] <cpaelzer> necrophcodr: apt is meant to do all-in-one nicely, maybe not the thing for your special case
[12:30] <cpaelzer> necrophcodr: but dpkg being the lower level tool certainly can help you
[12:30] <cpaelzer> necrophcodr: you can even set up --pre-invoke=command and such to do (whatever you need to do) regularly
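The "exit 0 in line 1" trick cpaelzer describes can be sketched like this. This demo runs on a throwaway copy so it needs no root; on a real system the scripts live under `/var/lib/dpkg/info/` and the package name is whatever failed:

```shell
# Demo of neutering a maintainer script on a throwaway copy
# (real files live in /var/lib/dpkg/info/<package>.postinst).
SCRIPT=./demo.postinst
printf '#!/bin/sh\nexit 1\n' > "$SCRIPT"   # stand-in for a failing postinst

# Insert "exit 0" right after the shebang so the script becomes a no-op.
sed -i '1a exit 0' "$SCRIPT"

sh "$SCRIPT"
echo "exit status: $?"                     # prints "exit status: 0"
# on the real system, afterwards: dpkg --configure <package>
```

After the edit, `dpkg --configure` re-runs the now-harmless script and the apt run can continue.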
[14:54] <MacroMan> Can I get a sanity check? These UFW rules should block port 3000 right?: https://paste.ngx.cc/504bdbc1f51f1495
[14:55] <TJ-> MacroMan: the default deny on INPUT will
[14:56] <MacroMan> Weird. I'm running grafana and I can access the mini-http server over port 3000, when clearly I shouldn't be able to
[14:57] <MacroMan> Are there any other ways through the firewall that aren't covered by my ufw status output?
[15:44] <TJ-> MacroMan: is UFW applying those rules to *all* interfaces? I prefer using iptables/ip6tables to inspect rules rather than some reduced front-end
[15:49] <MacroMan> TJ-: I only have one interface on this machine
[15:56] <TJ-> MacroMan: are the connections coming in over IPv4 or IPv6?
[15:56] <TJ-> Where are you testing it *from*? Not the same machine?
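A few checks along the lines TJ- suggests (the first two need root on the server; the hostname is a placeholder):

```shell
# ufw's summary can hide details; inspect the real tables, v4 *and* v6,
# since ufw keeps IPv6 rules separately (needs root):
iptables -L INPUT -nv
ip6tables -L INPUT -nv

# is grafana listening on IPv6 as well? look for ":::3000" or "*:3000"
ss -tln | grep ':3000' || echo "nothing listening on :3000"

# and test from a *different* host -- connections from the server itself
# arrive via loopback, which the default ufw policy allows:
#   curl -4 http://server.example.com:3000/
#   curl -6 http://server.example.com:3000/
```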
[16:58] <rh10> guys, which is the best way to send email notifications using SMTP AUTH and TLS through an external mail service? i need it for scripts' notifications.
[17:01] <rh10> from scripts actually
[17:01] <Seveas> rh10: local exim with a smarthost transport that's configured properly.
[17:02] <rh10> Seveas, thanks, but that's not suitable in my case. i already use the mail server for other purposes.
[17:02] <sdeziel> rh10: a sendmail provider like msmtp-mta or ssmtp would do then
[17:02] <Seveas> rh10: well, then configure that mailserver to do the relaying properly :)
[17:03] <sdeziel> rh10: with those, you configure your relay host, username/password and that's it
[17:04] <sdeziel> it's similar to running exim or postfix minus the permanently running daemons
[17:04] <rh10> sdeziel, got it. seems exactly what i need https://wiki.archlinux.org/index.php/Msmtp
[17:04] <rh10> sdeziel, thanks!
[17:05] <sdeziel> rh10: np
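A minimal `~/.msmtprc` along the lines sdeziel describes might look like this (host, account name and addresses are placeholders; the file should be chmod 600):

```ini
# ~/.msmtprc -- relay through an external provider with SMTP AUTH + TLS
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        ~/.msmtp.log

account        default
host           smtp.example.com
port           587
from           notify@example.com
user           notify@example.com
passwordeval   "cat ~/.msmtp-password"
```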
[17:08] <sdeziel> rh10: one word of caution though, if msmtp/ssmtp cannot relay your email right away, this email will be lost for good (no delivery retry). With msmtp, I think you get an error code on submission failure at least. That's why exim/postfix have daemons running
[17:10] <rh10> sdeziel, got it, thanks for warning
[17:11] <rh10> sdeziel, maybe it's possible to check in the script itself whether the mail was sent correctly? like via the command's exit status or so?
[17:16] <sdeziel> the sendmail command should return non-zero on relaying failure
[17:17] <rh10> sdeziel, nope. i mean in msmtp
[17:17] <rh10> to prevent loss of mails
[17:18] <rh10> smth like that
[17:19] <sdeziel> rh10: well, many MTAs provide a sendmail command implementation for compat with existing software. Installing msmtp-mta will provide you with msmtp's sendmail compat shim
[17:19] <rh10> sdeziel, got it, thanks!
[17:19] <sdeziel> rh10: that said, with msmtp (or its sendmail compat shim), you will only know whether the email was relayed (return 0) or not (non-zero)
[17:20] <rh10> sdeziel, got it
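The failure handling rh10 asks about can be sketched like this. The recipient, message, and spool directory are invented; msmtp-mta installs the sendmail shim at `/usr/sbin/sendmail`, and a non-zero exit means the relay refused or was unreachable:

```shell
# Save the mail on relaying failure instead of losing it for good.
SENDMAIL=${SENDMAIL:-/usr/sbin/sendmail}
SPOOL=./failed-notifications
mkdir -p "$SPOOL"

printf 'Subject: backup finished\n\nAll good.\n' > msg.txt

if ! "$SENDMAIL" admin@example.com < msg.txt 2>/dev/null; then
    # relaying failed (or no MTA at all): keep the mail for a manual retry
    cp msg.txt "$SPOOL/msg.$(date +%s)"
fi
```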
[17:57] <andreas> hi, can someone please accept the trusty nomination I just made in this bug: https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/1719671
[17:59] <sarnold> andreas: done
[17:59] <andreas> thanks
[18:56] <gunix> any way to download a ca cert so that curl doesn't need --insecure flag any more? https://bpaste.net/show/b9dd27607487
[18:57] <gunix> curl is run from a python framework and i am trying to bypass that error at linux level, making it somehow ignore the cert
[18:59] <sarnold> gunix: --cacert if there's a single CA you want to trust; --capath if there's several
[19:00] <gunix> sarnold: will --cacert accept the cert per user or per system?
[19:00] <sarnold> gunix: I don't understand.. what do you mean?
[19:01] <gunix> sarnold: if i run with user david "curl --cacert link", and after that run with user martin "curl link", will it also work for martin?
[19:02] <sarnold> gunix: note that both --cacert and --capath take an argument that is a pathname to a certificate or a directory of hashed certificates
[19:02] <sarnold> gunix: so if martin and david want to trust the same certificate, they both need read access to the file
[19:02] <gunix> sarnold: do you have some examples with this? like a blog or something?
[19:03] <sarnold> gunix: no, but the curl manpage has good details
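A small demonstration of the `--cacert` mechanics sarnold describes. The hostname and file names are made up, and the openssl step just generates a throwaway self-signed cert to stand in for the real CA cert (which you would obtain over a trusted channel):

```shell
# Throwaway self-signed cert standing in for the server's CA:
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=internal.example.com" -keyout key.pem -out ca.pem

# Per invocation: any user with read access to ca.pem can verify with it;
# nothing is remembered per-user or per-system:
#   curl --cacert ca.pem https://internal.example.com/

# System-wide on Ubuntu, so a plain "curl" works for every user:
#   sudo cp ca.pem /usr/local/share/ca-certificates/internal-ca.crt
#   sudo update-ca-certificates
ls -l ca.pem
```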
[19:03] <gunix> sarnold: is there public log to this chat?
[19:04] <sarnold> gunix: yes https://irclogs.ubuntu.com/2017/
[19:04] <gunix> sarnold: oh, only from yesterday. got it. thank you
[19:05] <gunix> hmm it's from today too
[19:05] <sarnold> gunix: logs are written every half hour or hour or something
[20:13] <andreas> nacc: I believe I'm done with the ubuntu-advantage sru bug and it's ready for sponsoring
[20:13] <andreas> https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/1719671
[20:18] <nacc> andreas: thank you for letting me know, i'll take a look shortly
[20:18] <andreas> nacc: the bug description is huge I'm afraid. I kept the same structure that joy started and added livepatch. Since it's for 3 releases of ubuntu, it got big
[20:18] <nacc> np, i've been reading the updates as they come inn
[20:19] <andreas> nacc: the more interesting bits are in the beginning, and at the very end (other info, regression potential)
[20:21] <nacc> andreas: ok
[21:48] <jge> hey all, curious whether any of you here have ever configured a reverse proxy to talk to the backend over SSL? so Client > (HTTPS) > Reverse proxy > (HTTPS) > Backend Server
[21:49] <jge> using apache
[21:49] <jge> as the reverse proxy
[21:51] <jge> is this all that will suffice for the config: https://paste.ee/p/LqHSo
[21:52] <jge> ^^ does not include first leg https connections from clients
[22:04] <sarnold> jge: I think I hear of more people using nginx as a proxy/frontend, so if this doesn't work out keep in mind that you've got options
[22:05] <sarnold> haproxy links against libssl, it might do the job too
[22:07] <jge> that's true sarnold, thank you
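For reference, the key piece for the second HTTPS leg in Apache is mod_ssl's `SSLProxyEngine`. A minimal sketch of such a vhost (hostnames and cert paths are placeholders, and this is not jge's actual paste; requires mod_ssl, mod_proxy and mod_proxy_http):

```apache
<VirtualHost *:443>
    ServerName proxy.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/proxy.example.com.pem
    SSLCertificateKeyFile /etc/ssl/private/proxy.example.com.key

    # Re-encrypt traffic to the backend
    SSLProxyEngine on
    # If the backend uses a self-signed cert, verification may need tuning:
    #   SSLProxyVerify none
    #   SSLProxyCheckPeerName off

    ProxyPass        / https://backend.example.com/
    ProxyPassReverse / https://backend.example.com/
</VirtualHost>
```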
[22:12] <drab> sarnold: ftr I ended up ditching everything and figuring out a reasonable way to put nfs in a container
[22:12] <drab> that samba setup was a never ending world of pain even after I figured out all the pam cifs stuff
[22:13] <sarnold> drab: damn :/ what a journey
[22:13] <drab> it boggles my mind how complicated the whole password management business is... need to add new schemas to ldap, change the way you manage pwds... really not worth it unless you actually need to support MS stuff
[22:13] <sarnold> drab: what's the config like now?
[22:13] <drab> sarnold: well I don't think I could have known until I tried... even with all the upfront research it wasn't obvious
[22:13] <sarnold> right
[22:14] <drab> sarnold: privileged container on a locked down host with zvol formatted as ext4
[22:14] <drab> this allows the use of quota and all other things without having to touch the host and it's all considered relatively safe
[22:14] <sarnold> drab: aha
[22:14] <drab> I still have the problem I wanted to avoid, of a container mucking with the host's kernel, but that could not be avoided at this point
[22:14] <drab> since samba was not an option and neither is nfs userspace
[22:15] <drab> but truth be told it's mostly the clients that have had bad times with nfs on some occasions, so we should be ok and it's still all relatively containerized and isolated from the host
[22:15] <drab> plus that system offers no services other than nfs, so no logins or shells on it from anybody except IT team
[22:15] <drab> so I'm ok to live with that
[22:16] <drab> brb
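The layout drab describes could look roughly like this (pool name, size and container name are invented; kernel NFS inside a privileged LXD container typically also needs AppArmor adjustments on the host):

```
# on the host
zfs create -V 200G tank/nfs-data          # zvol = a block device, not a dataset
mkfs.ext4 /dev/zvol/tank/nfs-data         # ext4 so quotas work as usual
mkdir -p /mnt/nfs-data
mount /dev/zvol/tank/nfs-data /mnt/nfs-data

lxc launch ubuntu:16.04 nfs -c security.privileged=true
lxc config device add nfs export disk source=/mnt/nfs-data path=/srv/export
# inside the container: install nfs-kernel-server and export /srv/export
```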