[05:02] coreycb: hey - have you seen anything like this - https://launchpadlibrarian.net/377079438/buildlog_ubuntu-cosmic-amd64.horizon_3%3A14.0.0~b2-0ubuntu2~ubuntu18.10.1~ppa201807040545_BUILDING.txt.gz
[05:02] for the life of me I can't see why those py3 package install failures are happening
[05:06] coreycb: hmm, that's a py3.7 issue
[05:07] coreycb: reference - https://github.com/pypa/pipenv/issues/956
[05:07] that impacts
[05:07] https://www.irccloud.com/pastebin/wpe16Y9V/
[05:11] hurrah
[06:07] Good morning
[08:28] Hello, I'd like your advice about a series of operations:
[08:28] 1. I create a tar file (from medium-sized files, roughly 50 MB)
[08:28] 2. An rsync process comes and downloads the tar
[08:28] 3. The rsync process removes the file
[08:28] I want to be sure the tar is completely written before step 2 happens. Any advice?
[08:29] Justice: You need to set up special rules for that; generally I follow this guide: https://www.thomas-krenn.com/en/wiki/Two_Default_Gateways_on_One_System
[08:44] Create a script that does those steps sequentially?
[08:45] The rsync process is not under my control, it's a client of my server
[08:45] My best guess is that mv is atomic. So I create my tar in another directory and then do a mv
[09:11] manticorpus: yes, that's usually how such atomic ops are done. It doesn't even have to be another dir, it could be the same one with a different name, e.g. .tmp_somename.tar that you rename to somename.tar. Keep in mind, if you create it somewhere else, that you're still on the same filesystem, otherwise mv will take much longer because it has to copy, not just rename.
[09:55] blackflow: Thank you, since the rsync takes the whole dir I will do it in another dir. Thanks for your feedback
[10:00] manticorpus: just make sure it's on the same filesystem, or at least know what the consequences are if it's not.
[10:40] jamespage: i hadn't seen those issues yet but i think py3.7 just came out. i can dig into that tomorrow.
[10:45] blackflow: It is, thank you
[11:59] OK. I've run out of Google-fu. I'm lost in a proxy terminology maze of transparent, forward, reverse, anonymous, ssl_bump, intercept... and on and on.
[12:01] What I am trying to do is enable certain https requests from servers in an autoscale cluster to go out via a proxy server, so that the receiving end gets the request from 1 or 2 fixed IP addresses and not all the random ephemeral ones that the autoscaling servers will have.
[12:02] I'm not looking to intercept and decrypt, MITM style - I just want the destination to see the proxy server IPs.
[12:06] Gargoyle: maybe with a tcp proxy? irrc nginx can do that too
[12:06] *iirc
[12:07] Gargoyle: haproxy can do that too
[12:07] I think that's what I'm going to have to do. I hit a bit of a wall with nginx and streams, so I've tried squid, but that seems to focus on intercepting.
[12:08] Gargoyle: what kind of wall? What was the problem with nginx?
[12:09] Gargoyle: haproxy can easily be configured to terminate TCP or TLS or HTTP(S) and then hit a list of backends using TCP or TLS or HTTP(S)
[12:09] blackflow: Most likely me going code-blind. Going to retry.
[12:10] sdeziel: Not looking to terminate the ssl - just the opposite.
[12:10] Gargoyle: then operate in TCP mode and it will be load balanced between healthy backend
[12:10] s/backend/backends/
[12:11] Gargoyle: haproxy is nice because it supports doing fancy health checks on the backends. IIRC, the same requires NGINX Plus
[12:12] not doing reverse proxying, doing forward proxying - I don't know what the destination is.
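(Editor's aside: a minimal shell sketch of the atomic-rename approach discussed at [09:11], where the archive is built under a temporary name and only renamed into the directory the rsync client pulls from once tar has finished. All paths and filenames here are hypothetical; the only requirement is that the staging location is on the same filesystem as the export directory, so mv is a rename(2) rather than a copy.)

    #!/bin/sh
    set -e
    EXPORT_DIR=/srv/export      # directory the rsync client pulls from (hypothetical path)
    STAGING_DIR=/srv/staging    # outside the exported tree, but on the same filesystem
    mkdir -p "$STAGING_DIR"
    # Build the archive under a temporary name first; a client listing the
    # export dir at this point simply does not see it yet.
    tar -cf "$STAGING_DIR/backup.tar" /path/to/source/files
    # rename(2) within one filesystem is atomic: the client sees either no
    # file at all or the completely written archive, never a partial one.
    mv "$STAGING_DIR/backup.tar" "$EXPORT_DIR/backup.tar"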
[12:13] So it will be requests to external APIs like Google, etc.
[12:14] back to nginx... vagrant destroy, vagrant up for about the 50th time already today! :P
[12:14] consider also some iptables routing magick on the "proxy node"
[12:15] s/routing/NAT/
[12:16] How would that work, blackflow? I originally came up with a NAT solution which required updating routing tables for the destination IP addresses so that traffic went out via the NAT box. Hit a hurdle with one of the 3rd parties not having fixed IP addresses.
[12:17] Gargoyle: oh, sorry, I missed the forward part
[12:18] sdeziel: No worries - it's a bit of an oddball problem!
[12:18] morning
[12:19] Gargoyle: depends on your network layout, whether you have some wan/lan boundary through a router, or if you just have to solve it at the dns level, designating a "proxy node"'s IP for all outbound domains; then it should be relatively straightforward to NAT, on that node, between the LAN subnet and the non-LAN subnet
[12:21] OK. So I think nginx is working now... X-)
[12:21] :)
[12:21] I had missed the "resolver" directive...
[12:22] https://gist.github.com/gargoyle/851b8628099307581485e181cd5898c0
[12:23] TIL: nginx only does dns lookups on start/restart/reload
[12:23] huh, TIL ssl_preread
[12:23] Unless you have "resolver"
[12:23] Yeah... grabs the host from the SNI header.
[12:23] yah
[12:24] but eh.... 8.8.8.8? eeeew. :)
[12:31] he he. It's easy to remember though. :D
[12:31] 1.1.1.1 ?
[12:34] Gargoyle: beware that nginx's DNS resolver is vulnerable to DNS poisoning, so you may want to use a closer resolver
[12:35] Gargoyle: I was hinting at "run your own caching resolver" :)
[12:35] Good to know, thanks.
[12:41] Bind9 works nicely for me, though Unbound is not bad either. Supposedly less vulnerable, but I suspect that's just a consequence of it being used less (and attacked less)
[12:43] blackflow: and 8.8.4.4
[12:50] RoyK: hmm?
=== Jare__ is now known as Jare
[15:04] question: I was wondering about the SIZE of my 'snap' directories that have been made. This notepad program I am using seems to have made 3 snaps now (60 MB each), and I was wondering, can I delete these? They all show to be 100% full.
[15:14] guess I'll just try to unmount and remove the older snaps :)
[15:14] So I've pinged server.xyz and got 1.2.3.4 - great. But now I have added 4.5.6.7 server.xyz to /etc/hosts and I am still pinging the old IP address. systemd-resolver --flush-caches doesn't seem to do anything (18.04), any ideas?
[15:16] Gargoyle: could try disconnect / reconnect to the net ... or even a full reboot .. at least in windowz the dns cache is loaded and the only real way to reload it is a reboot (works best) or try disconnect/reconnect.
[15:17] Gargoyle: or stop/start the dns service .. just a thought.
[15:17] But this is Linux!
[15:17] There is no separate service - I think it's all systemd
[15:17] Gargoyle: ya I know .. I was just trying to impress upon you that this dns cache is sometimes hard to FORCE to reload itself.
[15:18] one more reason to ditch it.
[15:19] sorry, I asked my snap question in the wrong channel ;)
[19:00] i installed wine but it won't run
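(Editor's aside: a rough sketch of the nginx setup being described around [12:21]-[12:23], not the contents of the linked gist. It forwards HTTPS by SNI without terminating TLS, so the destination only ever sees the proxy node's IP. The listen port and resolver address are illustrative; ssl_preread, resolver, proxy_pass, and $ssl_preread_server_name are standard nginx stream-module directives and variables.)

    # nginx.conf fragment: forward HTTPS by SNI without decrypting
    stream {
        # resolver lets nginx look up destinations at runtime instead of only
        # at start/restart/reload; prefer your own caching resolver over 8.8.8.8
        resolver 127.0.0.1 valid=300s;

        server {
            listen 443;
            # peek at the server name in the TLS ClientHello (SNI) without terminating TLS
            ssl_preread on;
            # pass the raw TCP stream on to whichever host the client asked for
            proxy_pass $ssl_preread_server_name:443;
        }
    }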
[20:21] rbasak: for tomorrow likely, could you please check if the importer is still running? I'm seeing sssd is behind debian:
[20:22] https://code.launchpad.net/~usd-import-team/ubuntu/+source/sssd/+git/sssd/+ref/debian/sid is 1.16.1-1
[20:22] but rmadison shows 1.16.2-1 to be in debian's testing and unstable
[20:22] http://reqorts.qa.ubuntu.com/reports/ubuntu-server/merges.html agrees that sssd 1.16.2-1 was uploaded on jun 27th
[20:32] rbasak: I also don't see ubuntu/devel updated with our recent samba upload, that no-change rebuild one
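(Editor's aside: a hedged example of the kind of archive check referenced above. rmadison, from devscripts, queries an archive for the published versions of a source package; the output can then be compared against the importer's debian/sid and ubuntu/devel git branches. Exact output is omitted here.)

    # versions of sssd published in Debian's suites (stable/testing/unstable/...)
    rmadison -u debian sssd
    # versions published in Ubuntu, for comparison with the imported branches
    rmadison -u ubuntu sssd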