[05:02] <jamespage> coreycb: hey - have you seen anything like this - https://launchpadlibrarian.net/377079438/buildlog_ubuntu-cosmic-amd64.horizon_3%3A14.0.0~b2-0ubuntu2~ubuntu18.10.1~ppa201807040545_BUILDING.txt.gz
[05:02] <jamespage> for the life of me I can't see why those py3 package install failures are happening
[05:06] <jamespage> coreycb: hmm that's a py3.7 issue
[05:07] <jamespage> coreycb: reference - https://github.com/pypa/pipenv/issues/956
[05:07] <jamespage> that impacts
[05:07] <jamespage> https://www.irccloud.com/pastebin/wpe16Y9V/
[05:11] <jamespage> hurrah
[06:07] <lordievader> Good morning
[08:28] <manticorpus> Hello, I'd like your advice about a series of operations:
[08:28] <manticorpus> 1. I create a tar file (containing medium-size files, ~50 MB each)
[08:28] <manticorpus> 2. An rsync process comes and downloads the tar
[08:28] <manticorpus> 3. The rsync process removes the file
[08:28] <manticorpus> I want to be sure the tar is completely written before step 2 happens. Any advice?
[08:29] <manticorpus> Justice: You need to set up special routing rules for that; generally I follow this guide: https://www.thomas-krenn.com/en/wiki/Two_Default_Gateways_on_One_System
[08:44] <lordievader> Create a script that does those steps sequentially?
[08:45] <manticorpus> The rsync process isn't under my control; it's a client of my server
[08:45] <manticorpus> My best guess is that mv is atomic, so I could create my tar in another directory and then mv it
[09:11] <blackflow> manticorpus: yes, that's usually how such atomic ops are done. It doesn't even have to be another dir; it could be the same dir with a different name, e.g. .tmp_somename.tar that you rename to somename.tar. Keep in mind, if you create it somewhere else, that you're still on the same filesystem, otherwise mv will take much longer because it has to copy, not just rename.
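A minimal sketch of the write-then-rename pattern described above (all paths are illustrative, not from the discussion):

```shell
set -e
work=$(mktemp -d)
mkdir -p "$work/outbox"
echo payload > "$work/file.txt"
# 1. Write the archive under a temporary dotted name, on the SAME filesystem
tar -cf "$work/outbox/.tmp_archive.tar" -C "$work" file.txt
# 2. rename(2) is atomic, so the final name only ever refers to a complete file
mv "$work/outbox/.tmp_archive.tar" "$work/outbox/archive.tar"
tar -tf "$work/outbox/archive.tar"
```

Since the rsync client only scans for the final name, it can never pick up a half-written archive; rsync also skips dotfiles only if told to, which is why moving the temp file out of the synced directory (or excluding the temp pattern) matters.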
[09:55] <manticorpus> blackflow: Thank you, as the rsync takes the whole dir I will do it in another dir. Thanks for your feedback
[10:00] <blackflow> manticorpus: just make sure it's on the same filesystem, or at least know what the consequences are if it's not.
[10:40] <coreycb> jamespage: i hadn't seen those issues yet but i think py3.7 just came out. i can dig into that tomorrow.
[10:45] <manticorpus> blackflow: It is, thank you
[11:59] <Gargoyle> OK. I've run out of Google Foo. I'm lost in a proxy terminology maze of transparent, forward, reverse, anonymous, ssl_bump, intercept... and on and on.
[12:01] <Gargoyle> What I am trying to do is enable certain https requests from servers in a autoscale cluster to go out via a proxy server so that the receiving end gets the request from 1 or 2 fixed IP addresses and not all the random ephemeral ones that the autoscaling servers will have.
[12:02] <Gargoyle> I'm not looking to intercept and decrypt, MITM style - I just want the destination to get the proxy server IPs.
[12:06] <blackflow> Gargoyle: maybe with a tcp proxy? irrc nginx can do that too
[12:06] <blackflow> *iirc
[12:07] <sdeziel> Gargoyle: haproxy can do that too
[12:07] <Gargoyle> I think that's what I'm going to have to do. I hit a bit of a wall with nginx and streams, so I've tried squid, but that seems to focus on intercepting.
[12:08] <blackflow> Gargoyle: what kind of wall? What was the problem with nginx?
[12:09] <sdeziel> Gargoyle: haproxy can be easily configured to terminate TCP or TLS or HTTP(S) and then hit a list of backends using TCP or TLS or HTTP(S)
[12:09] <Gargoyle> blackflow: Most likely me going code-blind. Going to retry.
[12:10] <Gargoyle> sdeziel: Not looking to terminate the ssl - just the opposite.
[12:10] <sdeziel> Gargoyle: then operate in TCP mode and it will be load balanced between healthy backend
[12:10] <sdeziel> s/backend/backends/
[12:11] <sdeziel> Gargoyle: haproxy is nice because it supports doing fancy health checks on the backends. IIRC, the same requires NGINX Plus
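A sketch of what sdeziel describes, TCP-mode passthrough with health-checked backends (section names and addresses are illustrative, not from the discussion):

```haproxy
frontend tls_in
    mode tcp
    bind :443
    default_backend app

backend app
    mode tcp
    option tcp-check            # plain TCP health check per backend
    server app1 10.0.0.11:443 check
    server app2 10.0.0.12:443 check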
[12:12] <Gargoyle> not doing reverse proxying. doing forward proxying - don't know what the destination is.
[12:13] <Gargoyle> So it will be requests to external apis like google, etc.
[12:14] <Gargoyle> back to nginx... vagrant destroy, vagrant up for about the 50th time already today! :P
[12:14] <blackflow> consider also some iptables routing magick on the "proxy node"
[12:15] <blackflow> s/routing/NAT/
[12:16] <Gargoyle> How would that work, blackflow? I originally came up with a NAT solution which required updating routing tables for the destination IP addresses so that traffic went out via the NAT box. Hit a hurdle with one of the 3rd parties not having fixed IP addresses.
[12:17] <sdeziel> Gargoyle: oh, sorry, I missed the forward part
[12:18] <Gargoyle> sdeziel: No worries - it's a bit of an oddball problem!
[12:18] <ahasenack> morning
[12:19] <blackflow> Gargoyle: depends on your network layout: whether you have some wan/lan boundary through a router, or if you just have to solve it at the DNS level by designating a "proxy node"'s IP for all outbound domains. Then it should be relatively straightforward to NAT, on that node, between the LAN subnet and the !LAN subnet
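A minimal sketch of that NAT idea, assuming a hypothetical 10.0.0.0/24 LAN and an outbound interface named eth0 (both are assumptions; needs root, so it is shown as commands rather than something to paste blindly):

```shell
# Assumed: LAN is 10.0.0.0/24, outbound interface is eth0 (illustrative).
# Allow the proxy node to forward packets at all:
sudo sysctl -w net.ipv4.ip_forward=1
# Masquerade LAN traffic whose destination is NOT the LAN (the "!LAN" part),
# so remote endpoints only ever see the proxy node's address:
sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 \
     -o eth0 -j MASQUERADE
```

This only helps if the autoscaling servers actually route via this node, which is the routing-table hurdle Gargoyle mentions above.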
[12:21] <Gargoyle> OK. So I think nginx is working now... X-)
[12:21] <blackflow> :)
[12:21] <Gargoyle> I had missed the "resolver" directive...
[12:22] <Gargoyle> https://gist.github.com/gargoyle/851b8628099307581485e181cd5898c0
[12:23] <Gargoyle> TIL: nginx only does dns lookups on start/restart/reload
[12:23] <blackflow> huh, TIL ssl_preread
[12:23] <Gargoyle> Unless you have "resolver"
[12:23] <Gargoyle> Yeah... grabs the host from the SNI header.
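A minimal sketch of the stream setup being discussed (the gist itself is not reproduced; the resolver address and listen port are illustrative):

```nginx
stream {
    resolver 8.8.8.8;   # required so proxy_pass can resolve names at runtime

    server {
        listen 443;
        ssl_preread on;                          # read SNI without decrypting
        proxy_pass $ssl_preread_server_name:443; # forward to the requested host
    }
}
```

Because `$ssl_preread_server_name` comes from the TLS ClientHello, nginx never terminates the TLS session; it just relays bytes, so the destination sees the proxy's IP while the client still verifies the real certificate.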
[12:23] <blackflow> yah
[12:24] <blackflow> but eh.... 8.8.8.8?   eeeew. :)
[12:31] <Gargoyle> he he. It's easy to remember though. :D
[12:31] <Gargoyle> 1.1.1.1 ?
[12:34] <sdeziel> Gargoyle: beware that nginx's DNS resolver is vulnerable to DNS poisoning so you may want to use a closer resolver
[12:35] <blackflow> Gargoyle: I was hinting at "run your own caching resolver " :)
[12:35] <Gargoyle> Good to know, thanks.
[12:41] <blackflow> Bind9 works for me nicely, though Unbound is not bad either. Supposedly less vulnerable, but I suspect it's just a consequence of it being used less (and attempted against less)
[12:43] <RoyK> blackflow: and 8.8.4.4
[12:50] <blackflow> RoyK: hmm?
[15:04] <Ubu-1604> question: I was wondering about the SIZE of my 'snap' directories. This notepad program I am using seems to have made 3 snaps now (60 MB each), and I was wondering, can I delete these? They all show as 100% full.
[15:14] <Ubu-1604> guess I'll just try the unmount and remove the older snaps :)
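For reference, the usual way to reclaim that space (the snap name below is a guess; squashfs mounts always report 100% use, which is normal) is to remove disabled revisions rather than unmounting /snap/* images by hand:

```shell
# Old revisions show up as "disabled" in the full list:
snap list --all
# Then, per disabled revision (name and revision number are illustrative):
#   sudo snap remove notepad-plus-plus --revision=123
# Newer snapd can also cap retained revisions globally:
#   sudo snap set system refresh.retain=2
```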
[15:14] <Gargoyle> So I've pinged server.xyz and got 1.2.3.4 - great. But now I have added 4.5.6.7 server.xyz to /etc/hosts and I am still pinging the old IP address. systemd-resolve --flush-caches doesn't seem to do anything (18.04). Any ideas?
[15:16] <Ubu-1604> Gargoyle: could try disconnecting/reconnecting from the net, or even a full reboot. At least in Windows the dns cache is loaded and the only real way to reload it is a reboot (works best) or a disconnect/reconnect.
[15:17] <Ubu-1604> Gargoyle: or stop/start the dns service .. just a thought.
[15:17] <Gargoyle> But this is linux!
[15:17] <Gargoyle> There is no separate service - i think it's all systemd
[15:17] <Ubu-1604> Gargoyle: ya I know .. I was just trying to impress upon you that this dns cache is sometimes hard to FORCE to reload itself.
[15:18] <blackflow> one more reason to ditch it.
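Two quick checks for the situation above (the real hostname isn't shown in the log, so localhost stands in): glibc consults /etc/hosts via the "files" entry in nsswitch.conf before it ever asks systemd-resolved, so getent shows what ping will actually use.

```shell
# What glibc (and therefore ping) resolves right now:
getent hosts localhost
# Flushing systemd-resolved's DNS cache on 18.04 (note: systemd-resolve,
# one word; shown commented as it needs the daemon running):
#   sudo systemd-resolve --flush-caches
```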
[15:19] <Ubu-1604> sorry I asked my snap question in the wrong channel ;)
[19:00] <mystic> i installed wine but it wont run
[20:21] <ahasenack> rbasak: for tomorrow likely, could you please check if the importer is still running? I'm seeing sssd is behind debian:
[20:22] <ahasenack> https://code.launchpad.net/~usd-import-team/ubuntu/+source/sssd/+git/sssd/+ref/debian/sid is 1.16.1-1
[20:22] <ahasenack> but rmadison shows 1.16.2-1 to be in debian's testing and unstable
[20:22] <ahasenack> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/merges.html agrees that sssd 1.16.2-1 has been uploaded on jun 27th
[20:32] <ahasenack> rbasak: I also don't see ubuntu/devel updated with our recent samba upload, that no-change rebuild one