[06:03] Good morning
[06:58] good morning lordievader
=== JanC is now known as Guest95859
=== JanC_ is now known as JanC
[07:35] Hey cpaelzer, how are you doing?
[07:37] pretty well for a monday
[07:45] I've got a server where my users do not have a /run/user/$UID directory
[07:45] Does that need manual creation now?
[11:15] ...and here it goes :) http://reqorts.qa.ubuntu.com/reports/ubuntu-server/merges.html
[12:16] andreas: well, no BB yet to start, right?
[12:16] cpaelzer: right, I was just curious about what was piling up
[12:16] since I got some emails about bugs being fixed in debian
[12:16] as usual, everything :-)
[12:17] honestly, early in the cycle we likely pick the few extra-complex ones we know, to do the transitions right
[12:17] those that "just" need a bump will come in a trice or only later
[12:17] (opinion)
[12:17] Is it possible to tell dpkg not to run any of the {pre,post}inst scripts?
[12:18] That is, when installing packages using apt
[12:21] necrophcodr: I don't know a good way to globally disable them, but you could modify them (add an exit 0 at the top) in /var/lib/dpkg/info as needed
[12:22] for dpkg install that should be fine, as it only unpacks them again if not there (so I thought)
[12:22] cpaelzer: are all package scripts in /var/lib/dpkg/info before packages are downloaded and installed?
[12:22] not sure if apt refreshes the files in any case
[12:22] necrophcodr: no, they are part of the download
[12:22] necrophcodr: you could run apt until it fails
[12:22] necrophcodr: then modify the file as needed
[12:22] necrophcodr: and then dpkg install those that you modified
[12:23] to continue with apt afterwards
[12:23] it doesn't have to be a good way either, i'm okay with hacky bullshit. i guess i'll have to do multi-stage: one step downloading the packages and "fixing" the scripts, and one actually installing them
[12:23] unless downloading doesn't install the script, which is probably the case
[12:29] necrophcodr: apt is meant to do all-in-one nicely, maybe not the thing for your special case
[12:30] necrophcodr: but dpkg, being the lower-level tool, certainly can help you
[12:30] necrophcodr: you can even set up --pre-invoke=command and such to do (whatever you need to do) regularly
=== jstevewhite is now known as stwhite
=== stwhite is now known as jstevewhite
[14:54] Can I get a sanity check? These UFW rules should block port 3000, right?: https://paste.ngx.cc/504bdbc1f51f1495
[14:55] MacroMan: the default deny on INPUT will
[14:56] Weird. I'm running grafana and I can access the mini-http server over port 3000, when clearly I shouldn't be able to
[14:57] Are there any other ways through the firewall that aren't covered by my ufw status output?
[15:44] MacroMan: is UFW applying those rules to *all* interfaces? I prefer using iptables/ip6tables to inspect rules rather than some reduced front-end
[15:49] TJ-: I only have one interface on this machine
[15:56] MacroMan: are the connections coming in over IPv4 or IPv6?
[15:56] Where are you testing it *from*? not the same machine?
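Following TJ-'s suggestion to inspect the real netfilter rules rather than the ufw summary, a minimal sketch of the checks that usually help in a case like MacroMan's. The port number 3000 comes from the discussion above; everything else is generic:

    # list the IPv4 and IPv6 filter rules with packet counters, not just ufw's view
    sudo iptables -L INPUT -v -n --line-numbers
    sudo ip6tables -L INPUT -v -n --line-numbers
    # ufw keeps most of its logic in its own chains; dump everything to see them
    sudo iptables-save
    # check what is actually listening on port 3000, and on which address/protocol
    sudo ss -tlnp | grep ':3000'

If the service turns out to be reached over IPv6 only, or the test is run from the same machine (loopback traffic is normally allowed), that could explain the observation without any hole in the rules.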
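Going back to the maintainer-script thread earlier in the log: a rough sketch of the hack cpaelzer described (run apt until a script fails, neuter the script, then let dpkg finish). The package name somepkg is a placeholder and the sed invocation is GNU sed syntax; treat this as a package-specific workaround, not a general mechanism:

    # install normally; assume the postinst of somepkg fails part way through
    sudo apt-get install somepkg
    # add an early "exit 0" right after the shebang so the script becomes a no-op
    sudo sed -i '1a exit 0' /var/lib/dpkg/info/somepkg.postinst
    # let dpkg finish configuring the half-installed package using the neutered script
    sudo dpkg --configure somepkg
    # then carry on with apt as usual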
[16:58] guys, what's the best way to send email notifications using SMTP AUTH and TLS through an external mail service? i need it for script notifications.
[17:01] from scripts actually
[17:01] rh10: local exim with a smarthost transport that's configured properly.
[17:02] Seveas, thanks, but that doesn't suit my case. i already use that mail server for other purposes.
[17:02] rh10: a sendmail provider like msmtp-mta or ssmtp would do then
[17:02] rh10: well, then configure that mailserver to do the relaying properly :)
[17:03] rh10: with those, you configure your relay host, username/password and that's it
[17:04] it's similar to running exim or postfix, minus the permanently running daemons
[17:04] sdeziel, got it. seems exactly what i need https://wiki.archlinux.org/index.php/Msmtp
[17:04] sdeziel, thanks!
[17:05] rh10: np
[17:08] rh10: one word of caution though: if msmtp/ssmtp cannot relay your email right away, that email is lost for good (no delivery retry). With msmtp, I think you at least get an error code on submission failure. That's why exim/postfix have daemons running
[17:10] sdeziel, got it, thanks for the warning
[17:11] sdeziel, is it possible to handle in the script itself whether the mail was sent correctly? like the exit status of the command or so on?
[17:16] the sendmail command should return non-zero on relaying failure
[17:17] sdeziel, nope. i mean in msmtp
[17:17] to prevent losing mail
[17:18] something like that
[17:19] rh10: well, many MTAs provide a sendmail command implementation for compat with existing software. Installing msmtp-mta will provide you msmtp's sendmail compat shim
[17:19] sdeziel, got it, thanks!
[17:19] rh10: that said, with msmtp (or its sendmail compat shim), you will only know whether the email was relayed (return 0) or not (non-zero)
[17:20] sdeziel, got it
=== jstevewhite is now known as stwhite
=== JanC_ is now known as JanC
[17:57] hi, can someone please accept the trusty nomination I just made in this bug: https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/1719671
[17:57] Launchpad bug 1719671 in ubuntu-advantage-tools (Ubuntu Zesty) "[SRU] include recent version containing fips and livepatch" [Undecided,New]
[17:59] andreas: done
[17:59] thanks
=== stwhite is now known as jstevewhite
[18:56] any way to download a CA cert so that curl doesn't need the --insecure flag any more? https://bpaste.net/show/b9dd27607487
[18:57] curl is run from a python framework and i am trying to bypass that error at the linux level, making it somehow ignore the cert error
[18:59] gunix: --cacert if there's a single CA you want to trust; --capath if there are several
[19:00] sarnold: will --cacert accept the cert per user or per system?
[19:00] gunix: I don't understand.. what do you mean?
[19:01] sarnold: if i run "curl --cacert link" as user david, and after that run "curl link" as user martin, will it also work for martin?
[19:02] gunix: note that both --cacert and --capath take an argument that is a pathname to a certificate or a directory of hashed certificates
[19:02] gunix: so if martin and david want to trust the same certificate, they both need read access to the file
[19:02] sarnold: do you have some examples of this? like a blog or something?
[19:03] gunix: no, but the curl manpage has good details
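Since gunix asked for an example: a short sketch of the --cacert approach described above, plus the system-wide alternative that the per-user question points at. The file names and URL are placeholders, and the system-wide step assumes Ubuntu's ca-certificates package:

    # per-invocation: point curl at the CA certificate
    # (any user with read access to the file can do the same)
    curl --cacert /usr/local/share/ca-certificates/internal-ca.crt https://example.internal/
    # system-wide: install the CA into the trust store so plain "curl" works for every user
    sudo cp internal-ca.crt /usr/local/share/ca-certificates/
    sudo update-ca-certificates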
[19:03] sarnold: is there a public log of this chat?
[19:04] gunix: yes https://irclogs.ubuntu.com/2017/
[19:04] sarnold: oh, only from yesterday. got it. thank you
[19:05] hmm, it's from today too
[19:05] gunix: logs are written every half hour or hour or something
[20:13] nacc: I believe I'm done with the ubuntu-advantage sru bug and it's ready for sponsoring
[20:13] https://bugs.launchpad.net/ubuntu/+source/ubuntu-advantage-tools/+bug/1719671
[20:13] Launchpad bug 1719671 in ubuntu-advantage-tools (Ubuntu Zesty) "[SRU] include recent version containing fips and livepatch" [Undecided,In progress]
[20:18] andreas: thank you for letting me know, i'll take a look shortly
[20:18] nacc: the bug description is huge, I'm afraid. I kept the same structure that joy started and added livepatch. Since it's for 3 releases of ubuntu, it got big
[20:18] np, i've been reading the updates as they come in
[20:19] nacc: the more interesting bits are at the beginning and at the very end (other info, regression potential)
[20:21] andreas: ok
[21:48] hey all, curious whether any of you here have ever configured a reverse proxy to talk to the backend over SSL? so Client > (HTTPS) > Reverse proxy > (HTTPS) > Backend Server
[21:49] using apache
[21:49] as the reverse proxy
[21:51] is this all that will suffice for the config: https://paste.ee/p/LqHSo
[21:52] ^^ does not include the first leg, https connections from clients
[22:04] jge: I think I hear of more people using nginx as a proxy/frontend, so if this doesn't work out keep in mind that you've got options
[22:05] haproxy links against libssl, it might do the job too
[22:07] that's true sarnold, thank you
[22:12] sarnold: ftr I ended up ditching everything and figuring out a reasonable way to put nfs in a container
[22:12] that samba setup was a never-ending world of pain even after I figured out all the pam cifs stuff
[22:13] drab: damn :/ what a journey
[22:13] it boggles my mind how complicated the whole password management business is... need to add new schemas to ldap, change the way you manage pwds... really not worth it unless you have to and need to support MS stuff
[22:13] drab: what's the config like now?
[22:13] sarnold: well, I don't think I could have known until I tried... even with all the upfront research it wasn't obvious
[22:13] right
[22:14] sarnold: privileged container on a locked-down host with a zvol formatted as ext4
[22:14] this allows the use of quota and all the other things without having to touch the host, and it's all considered relatively safe
[22:14] drab: aha
[22:14] I still have the problem I wanted to avoid of a container mucking with the host's kernel, but that could not be avoided at this point
[22:14] since samba was not an option and neither is userspace nfs
[22:15] but truth be told it's mostly the clients that have had bad times with nfs on some occasions, so we should be ok, and it's still all relatively containerized and isolated from the host
[22:15] plus that system offers no services other than nfs, so no logins or shells on it from anybody except the IT team
[22:15] so I'm ok to live with that
[22:16] brb
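On jge's reverse-proxy question above: the paste itself isn't reproduced in the log, so as a point of comparison, here is a minimal sketch of the second leg (proxy to backend over HTTPS) in Apache. It assumes mod_ssl, mod_proxy and mod_proxy_http are enabled; hostnames and certificate paths are placeholders, and the client-facing TLS settings are shown without any hardening:

    <VirtualHost *:443>
        ServerName proxy.example.com
        # first leg: TLS from clients to the proxy
        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/proxy.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/proxy.example.com.key
        # second leg: allow mod_proxy to open TLS connections to the backend
        SSLProxyEngine on
        ProxyPreserveHost On
        ProxyPass / https://backend.example.com/
        ProxyPassReverse / https://backend.example.com/
    </VirtualHost>

The directive that is easy to miss is SSLProxyEngine on; without it Apache refuses to proxy to an https:// backend.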
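And on rh10's question earlier about detecting relay failures from a script: a small sketch of checking the exit status of the sendmail shim, assuming msmtp-mta is installed and an account is configured (for example as in the Arch wiki page linked above). The recipient address and message are placeholders, and as sdeziel noted, a non-zero status only tells you submission failed; there is no queue or retry:

    #!/bin/sh
    # send a notification through the msmtp-mta sendmail shim and check the result
    if printf 'Subject: nightly job failed\n\nsee /var/log/nightly.log\n' | sendmail admin@example.com; then
        echo "notification relayed"
    else
        # msmtp exits non-zero when submission to the relay fails
        echo "notification NOT relayed" >&2
        exit 1
    fi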