[05:58] <lordievader> Good morning
[07:11] <foo> err, odd issue. Just spun up a new instance with ssh. I can ssh into the system but I cannot scp or git push ... it says system isn't responding 
[07:19] <foo> No firewall, I can ssh in fine, just cannot scp in or push to myhost:/some/git/repo.git
[07:19] <foo> So. odd.
[07:23] <lordievader> Configuration to restrict certain types of activity?
[07:24] <foo> lordievader: this is a brand new digital ocean droplet, I can't see any config to restrict certain activity
[07:24] <foo> I've done this process dozens of times
[07:24] <foo> https://bpa.st/5JOQ
[07:24] <foo> so odd.
[07:24]  * foo baffled.
[07:32] <lordievader> What do you get when you make that `scp -v` instead?
[07:33] <foo> lordievader: great question, I've tried it with -vvv ... nothing helpful.
[07:33] <foo> will paste output
[07:37] <foo> this is scp -vvv file.conf myhost:/home/path/here : https://bpa.st/HUFA
[07:37] <foo> in case anyone sees anything weird
[07:44] <foo> just tried with another host, scp worked fine... so it's not my network 
[07:45] <foo> this. is. so. odd.
[08:19] <tuxick> so nothing in ~/.ssh/config ?
[08:39] <mwhudson> foo: mtu issues?
[08:54] <TJ-> foo: X.Y.Z.Z not responding ... means you've no route, or firewall is blocking. try "ip route get X.Y.Z.Z"
[08:59] <tuxick> must be a rather clever firewall to tell ssh from scp?
[09:00] <TJ-> tuxick: the problem is likely on the /client/ not the target 'server'
[09:00] <tuxick> yeah spose so
[09:01] <TJ-> tuxick: and my suspicion is there's a Host entry affecting it
[16:11] <foo> back. 
[16:11] <foo> tuxick: well, I got the config for the host in .ssh/config
[16:12] <foo> TJ-: but if I can ssh into it, does that make sense? 
[16:12] <foo> tuxick: this is a brand new digital ocean droplet. I've gone through this process dozens of times and I'm truly baffled.
[16:12] <foo> I checked in digital ocean and there's no firewall from what I can see there; how can I check on the server itself if there are any firewall rules?
[16:13] <foo> I've also checked my network to make sure I have no outbound issues here and I can scp to another host fine
[16:13] <foo> iptables -L is showing nothing, so I doubt this is a firewall issue
[16:13] <TJ-> foo: it's certainly weird but the fact scp reports timeout server not responding suggests some kind of block - on the target have you monitored the sshd log for connection attempts?
[16:14] <TJ-> foo: and are you using hostnames or IP addresses, and if address, IPv4 or IPv6 ?
[16:14] <foo> TJ-: IPv4
[16:14] <sdeziel> foo: when logged in over SSH, can you try calling `dmesg` and see if the long output hangs the session or not? This is what I use to rule out PMTU issues
[16:15] <foo> TJ-: dmesg looks like it works fine
[16:15] <foo> my latency is around 50-500ms but I can't imagine that's an issue, FWIW
[16:16] <foo> I have spent about 2 hours on this otherwise odd issue, I'm open to trying anything. 
[16:18] <sdeziel> foo: could you try scp'ing a tiny small file (like /etc/hostname)?
[16:19] <tuxick> try different users on either end :)
[16:20] <foo> sdeziel: the file I've been trying with is 4K... but I can try with a different one
[16:20] <foo> tuxick: I've tried to scp to another host it works fine
[16:20] <foo> I've tried to scp to another user on this specific host giving me issues, no go
[16:21] <TJ-> foo: "echo hello > /tmp/hello; scp -v /tmp/hello user@target"
[16:21] <sdeziel> foo: 4K is enough to trip on a PMTU blackhole
[16:21] <foo> uh, wtf. 
[16:21] <foo> touch testfile and tried to send testfile...
[16:21] <foo> it worked.
[16:21] <tuxick> aliens\
[16:21] <foo> I was trying to send conf.py before.
[16:22] <sdeziel> foo: try lowering the MTU of the sending machine as a potential workaround/confirmation
[16:22] <TJ-> foo: "tracepath host"
[16:23] <foo> sdeziel: how do I do that? 
[16:24] <sdeziel> foo: `sudo ip link set eth0 mtu 1400` but replace eth0 with whatever NIC name you have
[16:24] <foo> sdeziel: is there a way to see what it currently is? 
[16:24]  * foo is trying this with a few more files
[16:24] <foo> I'm still not convinced. This is too weird.
[16:24] <sdeziel> foo: you can check the current MTU with `ip link`
[16:25] <foo> bah, this other file worked fine... ok, now let me try to push a git repo (which was the original problem I had
[16:25] <TJ-> foo: the problem happens if the Don't Fragment (DF) bit is set on a packet that is larger than the PMTU
[16:25] <foo> TJ-: how can we be certain that is the issue? 
[16:26] <TJ-> packets above a certain size, with DF set, don't arrive
[16:27] <TJ-> is there any VPN involved in the path?
[16:27] <TJ-> if any tunneling is used that'll reduce the MTU/MSS 
[16:28] <TJ-> foo: as I said above: "tracepath host"
[16:28] <TJ-> that discovers, as best it can, the maximum MTU on each router along the path
[16:29] <foo> TJ-: no, all really basic
[16:30] <sdeziel> TJ-: thanks for that, I always thought `mtr` was a better `traceroute` but the PMTU estimate the latter gives is handy!
[16:32] <foo> sdeziel: so it looks like mtu 1500 is what's set
[16:33] <foo> sdeziel: so you're suggesting I set it to 1400? 
[16:33] <foo> I've been working in linux for 20 years and have done this process probably 50+ times and I've never had this issue. I'm open to having missed something, I'm just really confused right now.
[16:33] <sdeziel> foo: yeah, that's the default Ethernet MTU value. 1400 is a good starting point but you'll need to do a binary search to find the optimal value ;)
[16:34] <TJ-> foo: do a bisect to figure out the optimum - but if only the single target is affected this suggests the 'issue' is at the other end. Your altering the local setting affects everything leaving the local interface
[16:34] <TJ-> foo: if the route is passing through a tunnel at any point that'll reduce the max MTU
[16:36] <TJ-> foo: for TCP connections it is possible to set the Maximum Segment Size (MSS) rather than altering the interface MTU 
[16:36] <TJ-> foo: e.g. "ip route add 1.2.3.4/32 dev enp2s0 advmss 1400"
[16:39] <patdk-lap> that assumes everything you care about uses tcp
[16:43] <TJ-> we're just dealing with an scp session here
[16:55] <foo> TJ- / patdk-lap -thank you
[16:55] <foo> What's odd is I seem to just be having this issue with this one server. 
[16:56] <patdk-lap> that isn't really *odd*
[16:56] <patdk-lap> normal causes is a vpn connection, or someone blocking icmp
[16:57] <patdk-lap> or has a misconfigured mtu
[16:57] <foo> patdk-lap: I have 2 systems, both at digital ocean, both ubuntu, possibly same data center. One of them I can scp and git push to, the other one I cannot. 
[16:57] <foo> Both relatively stock systems 
[16:58] <foo> ok, I just switched network...
[16:58] <foo> And it looks like this is working now. 
[16:58] <foo> So maybe it is the network here.
[16:58] <foo> (hotspot)
[16:59] <patdk-lap> ya, cell carriers are horrible about it
[16:59] <patdk-lap> but they normally do pmtu correctly
[16:59] <patdk-lap> they normally run on atm networks, and have smaller than 1500byte mtu on them
[17:00] <patdk-lap> if they were to do 1500 mtu, they would have an almost half-empty atm packet, causing a lot of waste on their cell network
[17:01] <foo> ok, update: I switched to my verizon hotspot, it works fine. I'm in an area with non-traditional internet, there are repeaters on homes and things are being bounced. So I wonder if the ISP here is doing something that is causing issues. Although I still find this odd because with 2 systems both MTU 1500... I can scp/git push to one but not the other.
[17:03] <patdk-lap> well, did you use tracepath to find out why?
[17:03] <foo> patdk-lap: is that an os x package? I'm not showing that's available. (I appreciate your help, thanks for sharing that earlier)
[17:04] <patdk-lap> oh well this is ubuntu
[17:04] <patdk-lap> it's a stock ubuntu package
[17:04] <patdk-lap> should be any unix
[17:04] <foo> patdk-lap: ok, maybe I misunderstood - I assume you want me to run that command going from my client to the server, right?
[17:04] <patdk-lap> well, it can be run either way
[17:05] <patdk-lap> really it should be done in both directions
[17:05] <patdk-lap> but one direction will get a reasonable result
[17:05] <patdk-lap> but it should show the paths each server is using and where it breaks 1500mtu
[17:06] <patdk-lap> the thing is, the path from you to the server, and the server to you, don't have to be the same, which is why you have to do it both ways to be 100% accurate
[17:09] <foo> patdk-lap: ohh. 
[17:09] <foo> I see.
[17:11] <foo> patdk-lap: thank you, I am trying to get that installed here in OS X so I can see what the issue is and have some certainty here.
[17:14] <patdk-lap> I'm guessing the outgoing paths are different
[17:14] <patdk-lap> I have had this case before at isp's
[17:14] <patdk-lap> where they loadbalance two routers
[17:14] <patdk-lap> one router goes crazy, and suddenly every odd IP goes offline
[17:15] <patdk-lap> but in your case, likely the mtu is messed up on that router
[17:15] <foo> patdk-lap: coincidentally, I'm on the phone with the WISP right now..
[17:15] <foo> who is seeing a broadcast storm
[17:15] <foo> on my network and they shut down my internet
[17:15] <foo> I don't understand the networking dynamics here... but does that make sense to you? 
[17:16] <patdk-lap> it makes sense, but I cannot say as to why
[17:16] <patdk-lap> normally generated by a bad/improper route
[17:23] <foo> Ok, let's recap - for anyone with WISP understanding/networking understanding: I deployed a new ubuntu server, I could ssh to it, but I could not scp files to it or git push a repo to it. It is an issue with my local network which is on a WISP. When I was doing scp or git push, they were seeing a spike in my network which they described as a "broadcast storm" which forced them to shut down the 
[17:23] <foo> network... they cannot explain what happened, and I don't know enough to understand this. Can anyone explain what might have happened here? (They're a small WISP, I'm remote)
[17:45] <usrnamewastaken> trying to get nextcloud (installed from initial installer) because its a snap doesnt appear to recognize smbclient being installed, anyone know how to fix this?
[17:46] <patdk-lap> sounds like their fault
[17:46] <patdk-lap> something wasn't handling the larger mtu right and looped it
[17:52] <tomreyn> usrnamewastaken: doesn't seem to be supported https://github.com/nextcloud/nextcloud-snap/issues/60
[17:53] <usrnamewastaken> well thats.. fun
[18:09] <foo> patdk-lap: err, I wish I could get tracepath working
[18:09]  * foo googles
[18:09] <foo> It sounds like that would answer this 
[18:10] <foo> https://apple.stackexchange.com/questions/125068/is-there-an-equivalent-utility-to-linuxs-tracepath-for-os-x
[19:39] <tuxick> oooh
[19:56] <patdk-lap> it's crazy those people are saying traceroute does everything and more than tracepath
[19:56] <patdk-lap> tracepath and traceroute are the same in how they are talking, but tracepath deals with mtu sizes, where traceroute only deals with if you can talk from one side to the other
[20:38] <Walex> foo: "broadcast storm" is silly. What probably is happening is that your wireless "modem" is sending packets faster than they can receive.
[20:39] <foo> Walex: thank you for circling back. I'm scp'ing a 4K file FWIW.
[20:39] <foo> It says connection lost and connection timed out
[20:39] <Walex> foo: your WISP probably are not entirely clueful
[20:40] <foo> Walex: I'm open to that, I finished that specific work I was doing and likely won't deal with it - and if I do I can switch to my verizon hotspot
[20:40] <foo> I might try to call them one more time
[20:40] <Walex> foo: they probably misconfigured the upwards connection because most people just download from the net.
[20:40] <Walex> foo: that used to happen in a different form also with ADSL ISPs.
[20:40] <foo> Walex: what is odd to me is I have another server I can scp data to just fine
[20:40] <Walex> foo:  and especially with modem ISPs.
[20:41] <Walex> foo:  the server that works is probably slower than the one that does not work.
[20:41] <Walex> foo: anyhow there is a simple but annoying fix.
[20:41] <Walex> foo: which is to use a tool that uses "rate limiting".
[20:42] <Walex> foo: I happen to have written one
[20:43] <foo> Walex: what was the value for you writing one within the context of your workflow?
[20:43] <foo> This specific nuance I experienced was the first time I experienced that in 20 years
[20:43] <Walex> http://www.sabi.co.uk/#sourcesSabishape
[20:44] <Walex> foo: you can find others on the web, the key words are "rate limiter" or "traffic control".
[20:44] <Walex> foo: the reason I wrote it and why most such scripts exist is called "buffer bloat"
[20:45] <Walex> foo: which is sort of the opposite of what is happening to you
[20:47] <Walex> foo: are you vaguely familiar with networking terms like "congestion control"?
[20:48] <foo> Walex: unfortunately, no
[20:58] <Walex> foo: you can search the web for the keywords to get the details
[20:59] <foo> Walex: thank you
[20:59] <Walex> foo: what is happening is that your radio transmitter is sending too many chunks of data too fast, and their radio receiver cannot handle how closely spaced they are. So they need to be sent at a certain cadence/rate that does not overwhelm their receiver, to avoid the "spike".
[21:00] <Walex> that's the big picture as simple as I can make it.
[21:00] <foo> Walex: and that makes sense even though it's a 4K file? 
[21:00] <Walex> foo: when you open a connection there is whole flow of other "packets" underneath.
[21:01] <Walex> foo: typically also the radio communicates in bursts 250-500 bytes long.
[21:01] <Walex> but yes with 4KiB it is a bit extreme.
[21:03] <foo> Walex: ok, I might be seeing this issue with a regular git push... in which case, this is an issue. If I was going to attempt to help the WISP see this so they can fix it, how might I communicate this to them?
[21:03] <Walex> foo: part of the problem is that ISP, both wired and wireless, configure links so that the upward direction has a lot less capacity than the downwards, because most customers fetch stuff from the Internet, they don't send to it.
[21:03] <patdk-lap> what wisp is using 250-500byte packets? normal is 24xx bytes
[21:04] <patdk-lap> but we do know the issue is with mtu
[21:04] <patdk-lap> and his ssh login works cause a single packet doesn't go over 1000bytes or so
[21:04] <patdk-lap> but the scp will fill that 1500bytes easily
[21:04] <patdk-lap> I have had that issue before
[21:05] <Walex> foo: first try the rate limiting tool to see if cutting down on the upload rate makes things better.
[21:05] <foo> patdk-lap: I don't think I asked you - if I want to communicate this with the WISP, do I simply ask about their upstream and what their limits are?
[21:05] <Walex> patdk-lap: MTU is another possibility, but seems quite unlikely because it works with the other server.
[21:05] <Walex> patdk-lap: but it could well be.
[21:06] <Walex> foo: did you actually do the 'tracepath' test?
[21:06] <patdk-lap> mac seems not to have tracepath or any reasonable way to install it :(
[21:06] <patdk-lap> only thing you can really do is ask the wisp what mtu setting you need to use to talk to their equipment
[21:06] <Walex> what is this guy doing in #Ubuntu-server then?
[21:07] <patdk-lap> other than that you cannot really do anything, except set an mss clamp
[21:07] <Walex> patdk-lap: even on MacOS X he can send PING packets of various sizes...
[21:07] <patdk-lap> yes, but he did that and I was sure it showed the mtu issue
[21:07] <patdk-lap> but dunno *where* the mtu issue is
[21:08] <Walex> patdk-lap: if it is MTU it is likely at the "modem" exit, but then the WISP have configured a max MTU on their client side "modem" that is not compatible with their service side stuff.
[21:08] <foo> thanks patdk-lap for chiming in
[21:09] <foo> Walex: I'm in ubuntu server because I had issues scp'ing a file to my ubuntu server...
[21:09] <foo> from os x
[21:09] <foo> and I didn't realize this was a network issue.
[21:09] <Walex> foo: ahhhhhhh
[21:09] <foo> (I haven't seen this issue in ~25 years)
[21:09] <patdk-lap> I would say login to the ubuntu server, and tracepath from there
[21:09] <patdk-lap> it won't show as nicely, but will tell us which side the issue is on
[21:09] <foo> patdk-lap: err, duh, of course. I didn't think to do it that way despite you sharing
[21:10] <foo> patdk-lap: I think I simply want to tracepath my public IP
[21:10] <patdk-lap> ya
[21:10] <patdk-lap> it might not even show anything though :(
[21:10] <patdk-lap> depends on how uniform the issue is
[21:10] <Walex> patdk-lap: yes I was thinking that too, worth a try.
[21:11] <Walex> foo: on MacOS X you can set the origin MTU to a small value, something like 'ifconfig en0 mtu 576' (not sure that is the right syntax).
[21:11] <foo> Walex: https://bpa.st/4Q7Q
[21:11] <foo> patdk-lap: ^
[21:11] <foo> from server to my IP 
[21:12] <foo> (not my actual IP)
[21:12] <Walex> foo:  it can go smaller. If it works with a small MTU it is likely MTU.
[21:14] <patdk-lap> ya, that says the path the server took to you is clean
[21:14]  * foo set mtu and can see 576 took effect
[21:14] <Walex> foo: actually you can use the GUI
[21:14]  * foo tries to scp a 4K py file 
[21:14] <Walex> https://support.zen.co.uk/kb/Knowledgebase/Changing-the-MTU-size-in-Mac-OS-X-10.6-to-10.9
[21:14] <patdk-lap> no way to install a linux docker in your mac or something?
[21:14] <foo> it does seem to work
[21:14] <foo> patdk-lap: can you stop being so smart.
[21:14] <foo> patdk-lap: ... yes, I have docker right here. 
[21:14] <Walex> foo: try with 1006
[21:15]  * foo tries with 1006
[21:15] <foo> Walex: worked
[21:15] <Walex> foo: note that the issue could still be sending too fast.
[21:15] <Walex> foo:  try 1280
[21:16] <foo> Walex: worked
[21:16] <Walex> foo: which reminds me: is your service IPv6 by any chance? (but it should not be, because IPv6 in theory has a minimum MTU of 1280)
[21:16] <Walex> foo: try 1400
[21:16] <foo> Walex: what's an easy way to tell if I'm ipv6? 
[21:17] <Walex> foo: huge numerical addresses with colons instead of dots
[21:17] <Walex> foo: but from your paste I can see it…
[21:17] <foo> Walex: I really really appreciate this, this issue baffled me for 3 hours. While I am curious, I don't have a ton of time to figure this out so I appreciate the help
[21:17] <foo> Walex: 1400 worked
[21:17] <foo> Walex: my paste would indicate I'm ipv4?
[21:18] <Walex> foo:  yes, because the 'tracepath' reports IPv4 addresses.
[21:18] <foo> Walex: should I try 1500 to make sure it errors out?
[21:18] <foo> or something between 1400 and 1500?
[21:18] <Walex> foo: if it is MTU, I think I know what your WISP have done; let's get closer to 1500
[21:18] <Walex> foo: try 1460
[21:18] <foo> ok, what next after 1400? 
[21:19] <patdk-lap> 1440, 1472
[21:19] <patdk-lap> 1496
[21:19] <Walex> patdk-lap: yes....
[21:19] <patdk-lap> 1496 is dsl issue (what I was saying about atm earlier)
[21:19] <Walex> patdk-lap: I think it is going to be 1472. I can't believe it is 1496.
[21:19]  * foo waits for 1460
[21:20] <foo> so MTU is how big a packet can be. The Internet runs at 1500, LAN usually runs around 1500. 
[21:20] <patdk-lap> ya, I sometimes hit 1472-1476, but kindof rare
[21:20] <foo> ok, 1460 failed
[21:20] <patdk-lap> well, the internet runs at random
[21:20] <foo> 1400 worked
[21:20] <patdk-lap> ethernet runs at 1500
[21:20] <Walex> foo: 1440?
[21:20] <patdk-lap> amazon runs at what 9001
[21:20]  * foo tries 1440
[21:21] <Walex> patdk-lap: yes, 9,000 is typical of "better" NICs.
[21:21] <patdk-lap> ya, all my servers are using 9000
[21:21] <foo> At this point, I'm doing this to A) satiate my curiosity and learn and B) to tell the WISP, as specifically as I can, WTF happened since they didn't seem to know
[21:21] <Walex> patdk-lap: I once had a cheap NIC that had different MTU limits on send and receive...
[21:21] <patdk-lap> I'm sure someone at the wisp knows, just not the support guys
[21:22] <patdk-lap> ya, older nics had a 7000 limit
[21:22] <Walex> my current WiFi has 2304. Just because.
[21:22] <patdk-lap> intel mostly has a 16k limit, but it's not normally worth going over 9000, since most switches cannot
[21:22] <foo> I won't be surprised if they call me and shut me down again... like they did last time
[21:22] <foo> ok, 1440 failed
[21:23] <Walex> foo: very odd.
[21:23] <foo> patdk-lap: yeah, I tried to get their most senior person. this is a small WISP
[21:23] <Walex> foo: 1420? 
[21:23] <Walex> foo:  I am curious because this is "encapsulation overhead" and one that big is strange.
[21:24] <foo> 1420 failed
[21:24] <Walex> 1410?
[21:24] <foo> oh the suspense. 
[21:25] <patdk-lap> I find it odd the full size made it in, but it's so much smaller going out
[21:25] <foo> 1410 worked
[21:25] <Walex> patdk-lap: my laptop's wired NIC max MTU is 9194
[21:26] <Walex> foo: just for curiosity try some values 1411-1429
[21:26] <foo> so, the MTU of anything above 1410 is too large for this network upstream - is it that simple?
[21:26] <Walex> foo: halving
[21:26] <patdk-lap> yep
[21:26] <foo> Walex: ok, will do
[21:26] <foo> patdk-lap: was that "yep" for me? 
[21:26] <patdk-lap> but if that was your issue, you likely would have noticed this sooner
[21:26] <foo> Walex: wait, halving, what do you mean?
[21:27] <Walex> foo: first the middle of the range, then the middle of the new range, binary search, triangulation
[21:27] <patdk-lap> try 1415, then 1412 or 1417
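[editor's note] The manual bisect being done here can be sketched as a small script. This is an illustration, not something from the channel: the `probe` function below is a stand-in that pretends the path MTU is 1412; in practice you would replace it with a DF-bit ping such as `ping -c1 -M do -s $((size - 28)) host` on Linux or `ping -c1 -D -s $((size - 28)) host` on macOS (the 28 accounts for the 20-byte IP and 8-byte ICMP headers).

```shell
# Binary search for the largest MTU that still gets through.
# Stand-in probe: pretend packets up to 1412 bytes succeed.
probe() { [ "$1" -le 1412 ]; }

lo=576    # known-good size (traditional minimum MTU)
hi=1500   # known-bad size (standard Ethernet MTU, fails here)
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if probe "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "largest working MTU: $lo"
```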
[21:27] <foo> is it possible this issue only occurs with latency above 500ms+ or such? 
[21:27] <patdk-lap> foo, doubtful
[21:27] <Walex> foo: it is weird, hard to say
[21:28] <patdk-lap> they would have had to get very fancy to do something like that
[21:28] <Walex> it could still be a rate issue, as they say "spiked", and a smaller MTU makes the sending just a bit slower.
[21:28] <foo> Why would a WISP want to limit MTU like this?
[21:28] <patdk-lap> some kind of strange bandwidth rate policy, where you recover fast enough for a smaller mtu packet
[21:28] <patdk-lap> but it drops a large one because it's over the limit
[21:28] <Walex> foo: it is usually encapsulation overheads.
[21:28] <patdk-lap> would be very silly
[21:29] <patdk-lap> normally the wisp would run say 2k or 9k mtu
[21:29] <patdk-lap> then they take your packet and wrap it in kindof like a vpn
[21:29] <foo> Walex: what does "encapsulation overheads" mean in layman's terms? :)
[21:29] <foo> ahh
[21:29] <patdk-lap> so they know it comes from you
[21:29] <Walex> patdk-lap: but they seem to have configured their "modems" to be incompatible with their POPs.
[21:29] <foo> POP?
[21:29] <patdk-lap> ya, issues with smaller wisps
[21:29] <patdk-lap> point of presence
[21:29] <patdk-lap> basically the hubs of their network
[21:29] <Walex> foo: their "masts", wireless access points
[21:30] <foo> 1412 works
[21:30] <Walex> foo: network data is encapsulated with "headers", so an ethernet frame has a header, then there is an IP header, then there is a TCP header
[21:30] <foo> 1415 fails
[21:30] <foo> Walex: I see
[21:31] <patdk-lap> the point of encapsulation for a wisp is to make sure one customer doesn't spoof another customer and get free service and the like
[21:31] <Walex> foo: for things like wireless there can be another header
[21:31] <patdk-lap> I'm unable to process sentences today it seems
[21:31] <foo> ohhhhh
[21:31] <foo> ok, ok, I think I'm getting it
[21:32] <Walex> and the difference between 1500 and what you can do is probably the size of the headers that they put around the IP packet
[21:32] <foo> There is an MTU limit of 1412 (we'll say) because they're adding onto the packet, eg. moving my traffic over the network via a VPN or such which uses the overhead or gap (1500-1412) 
[21:32] <foo> for security reasons and/or privacy and/or to not allow someone to spoof someone else
[21:32] <Walex> foo: yes, pretty much, that difference is the header/envelope of *their* protocol.
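[editor's note] The arithmetic behind that header/envelope explanation, as a small sketch (the 1412 figure is the value found experimentally above; the header sizes are the standard IPv4/TCP ones):

```shell
# A plain IPv4+TCP packet spends 40 bytes on headers, so the TCP
# payload (MSS) on a standard 1500-byte Ethernet MTU is:
mtu=1500
mss=$((mtu - 20 - 20))   # 20 B IPv4 header + 20 B TCP header
echo "mss=$mss"          # mss=1460

# The largest working MTU found here was 1412, so the WISP's extra
# encapsulation appears to consume:
overhead=$((1500 - 1412))
echo "encapsulation overhead=$overhead bytes"
```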
[21:33] <patdk-lap> https://stubarea51.net/wp-content/uploads/2021/07/image-13.png
[21:33] <patdk-lap> there are a lot of ones to pick from, the largest one I see is 50bytes
[21:33] <Walex> foo: in theory MacOS X and/or ubuntu should be doing "adaptive" MTU checking.
[21:33] <patdk-lap> but they could be using two stacked on each other, say one to the customer, and one between their routers
[21:34] <Walex> patdk-lap: yes, an overhead that large is unusual.
[21:34] <foo> Walex: very interesting, is it possible scp/git does not do adaptive MTU checking?
[21:34] <foo> but say, uploading a video in safari does?
[21:34] <patdk-lap> they wouldn't; it would be done lower, at the tcp level or ip level
[21:34] <patdk-lap> in linux it's called pmtu
[21:34] <foo> ahh
[21:35] <Walex> foo: 'scp' is way above that kind of stuff, same for safari.
[21:35] <Walex> foo:  but some programs can disable it.
[21:35] <Walex> foo: also in some case adaptive MTU can cause trouble, or at least it could 15-20 years ago.
[21:35] <patdk-lap> also why most people don't notice these issues with tcp due to it checking
[21:35] <patdk-lap> it can become more of an issue for udp
[21:36] <patdk-lap> I have large blocks of verizon dsl address space fixed at 1440 mss due to issues with pmtu
[21:37] <Walex> patdk-lap: I do the same also to avoid IPv4 fragmentation.
[21:37] <Walex> patdk-lap: also on WiFi smaller MTUs cause smaller losses on noisy channels.
[21:37] <patdk-lap> ya, moving into a block home, wifi interference is a non-issue anymore
[21:37] <Walex> foo: https://www.quora.com/Do-ISPs-regularly-use-MSS-clamping-to-prevent-MTU-black-holing-of-customer-traffic-If-so-does-this-break-any-RFCs-or-net-neutrality-legislation a bit technical
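[editor's note] The MSS clamping discussed here is usually done on a Linux gateway with iptables; a sketch (this is a router-side config fragment, not something run on foo's Mac, and it requires root):

```shell
# Clamp the MSS of forwarded TCP SYNs to the discovered path MTU,
# so endpoints never emit segments the path cannot carry:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --clamp-mss-to-pmtu

# Or pin an explicit value (MTU 1412 minus 40 bytes of IPv4+TCP
# headers gives an MSS of 1372):
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
  -j TCPMSS --set-mss 1372
```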
[21:38] <patdk-lap> getting cellservice inside the home is also a not-working thing
[21:38] <Walex> foo: have you tried a much larger file?
[21:39] <Walex> foo: BTW if you remember the first we tried was 576
[21:39] <Walex> foo: that was the "traditional" MTU many years ago, and it is 576 because it is 512 bytes of data plus encapsulation overheads.
[21:40] <patdk-lap> these days, ipv6 pretty much says min is 1280, so should be safe to use that at the smallest
[21:40] <patdk-lap> unless your doing something really strange
[21:40] <Walex> patdk-lap: I am a traditionalist :-)
[21:41] <Walex> patdk-lap: actually in IPv6 the min MTU is much smaller than 576, but 576 is traditional.
[21:41] <patdk-lap> no
[21:41] <patdk-lap> 1280 in the rfc
[21:41] <patdk-lap> unless one was released that updated it lately
[21:41] <Walex> sorry I meant IPv4. 1280 is indeed the smallest with IPv6.
[21:42] <Walex> 1006 was the MTU for SLiRP, I doubt anybody remembers that.
[21:42] <patdk-lap> ya, ipv4 dunno what min was, but most people stuck with 512 as the smallest
[21:42] <patdk-lap> used it a lot, still use it
[21:43] <Walex> ping -s 0 8.8.4.4 still works
[21:44] <Walex> foo: anyhow the "sabishape" and "wondershaper" tools and others may still help you
[21:45] <foo> Walex: haven't tried larger file, only tried 4K files
[21:45] <Walex> foo: especially with a WISP and radio issues and a hugely different max upload and download capacities.
[21:45] <foo> Walex: I see
[21:45] <foo> This is all very informative, I learned something new
[21:45] <Walex> foo: those shapers are useful if downloading at full speed makes uploading very very slow.
[21:46] <Walex> foo: like you download an ISO file and have an SSH session at the same time and the link is not well configured by the ISP.
[21:47] <Walex> foo: it used to be terrible with modems and older ADSL lines.
[21:47] <foo> is there any harm in leaving my en0 mtu 1412 while I'm on this network for the next week? 
[21:48] <Walex> you can always leave it set to 1412
[21:48] <foo> ok, cool. I'll leave it at that while I'm here
[21:49] <Walex> foo: for cases where it could be the full 1500, the difference between 1412 and 1500 is really small in terms of speed etc.
[21:49] <foo> Walex: ok, so all this will do is very slightly limit my upload
[21:50] <foo> patdk-lap: now, about docker tracepath...
[21:50]  * foo tries
[21:50] <Walex> foo: a tiny tiny limit on both. The shaping/rate limiting tools instead mostly limit the download to prevent download flows from swamping the upload channel.
[21:52] <foo> Walex: is there a reason why my computer wasn't smart enough to figure this out? :) 
[21:53] <patdk-lap> mostly, it requires your wisp device or router connected to it to have a properly configured mtu
[21:53] <patdk-lap> then it should have
[21:53] <Walex> foo: we were mentioning that before with "adaptive mtu"
[21:54] <Walex> foo: "patdk-lap" mentioned the Linux 'pmtu' setting, and IIRC MacOS X has got something equivalent. But often they are disabled by default because with ISPs that configure routers not optimally it can cause "blackholing"
[21:54] <foo> Walex / patdk-lap  - is it possible the misconfiguration is that whatever is providing DHCP isn't telling my computer the correct mtu?
[21:55] <patdk-lap> it shouldn't be, cause that should be on your local network at 1500
[21:55] <Walex> foo: can't remember if there is an MTU option in DHCP, but if there is your WISP should have set it right.
[21:55] <patdk-lap> but the other side of the router or wisp device should know it's not outputting a full 1500, so when it sees one come in, it will know it cannot send it, and tell your mac that
[21:56] <patdk-lap> there is one for dhcp, but it *shouldn't* be needed
[21:56] <Walex> patdk-lap: in an ideal world... :-)
[21:56] <patdk-lap> ya, you're trading one issue for another
[21:56] <patdk-lap> no idea how well it will work
[21:57] <patdk-lap> many things *like printers* normally don't support changing the mtu
[21:57] <Walex> foo: also in theory it is possible to set different MTUs for different routes, so one can set a large MTU for the local network and a smaller one for the Internet. But often it is not worth it.
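[editor's note] A sketch of that per-route approach on Linux (a config fragment requiring root; `203.0.113.10` and `192.0.2.1` are hypothetical stand-ins for the problem server and the local gateway):

```shell
# Leave the interface MTU at 1500 for the LAN, but cap the MTU only
# for traffic to the one problematic destination:
ip route add 203.0.113.10/32 via 192.0.2.1 mtu 1412
```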
[21:59] <Walex> BTW public service announcement: having a small MTU like 576 or smaller is a good idea if accessing the internet using a cellphone as a gateway. I think.
[21:59] <foo> said differently, while I feel good in knowing we figured it out together - it's the MTU - I still wonder if there is something improperly configured with this WISP.
[21:59] <foo> Walex: I do that often, I camp off the grid with my hotspot hoisted in a tree... haha. Good to know.
[21:59] <Walex> foo: most likely the WISP have not thought it through.
[22:00] <Walex> foo: BTW there are ways to support 1500B MTUs on a link that can only do 1412B, but they are wasteful.
[22:01] <foo> patdk-lap: here ya go, this is way delayed - haha  https://bpa.st/FQ5A
[22:01] <foo> patdk-lap: anything in there that tells you my network only supports 1412 ?
[22:01] <Walex> foo: it claims it is 1452 instead of 1412
[22:02] <Walex> but the 40 byte difference is likely the 40 bytes of header.
[22:02] <Walex> foo: wait
[22:03] <Walex> foo: to do a proper 'tracepath' you must raise the 'en0' MTU
[22:03] <foo> Walex: beyond 1500?
[22:03] <Walex> foo:  then 'tracepath' will do automatically what we did manually.
[22:03] <foo> Walex: ohhh, I see
[22:04] <foo> Walex: so according to this tracepath, theoretically, 1452 should work
[22:04] <patdk-lap> ya, issue with the wisp
[22:05] <patdk-lap> that is a total mess
[22:05] <patdk-lap> private ip to public ip to private ip back to public
[22:05] <patdk-lap> 3 different private address ranges
[22:05] <Walex> foo: had you already set the 'en0' MTU to 1500 before the tracepath?
[22:05] <patdk-lap> just all over the place
[22:06] <foo> Walex: negative, this was in docker 
[22:06] <patdk-lap> oh ya, forgot, docker would be the 172.18.x.x network
[22:06] <patdk-lap> reset to 1500 to test
[22:06] <Walex> foo: then set it to something like 4000 both inside and outside docker if you can, or at least 1500
[22:10] <foo> ah yeah it was set to 1500 at start
[22:26] <mwhudson> wait so it was mtu?
[22:27] <patdk-lap> well, as soon as he said ssh was fine and scp wasn't, knew it was mtu, the question was why, and where
[22:32] <Walex> patdk-lap: it could still have been rate: SSH sends usually a lot more slowly than 'scp'. But I think it is relatively likely it was MTU
[22:33] <patdk-lap> ya, lots of esoteric things it could have been, just not nearly as likely
[22:33] <Walex> I was thinking that if the WISP said "spiked" they had already seen a wireless frame flood sort of issue
[22:34] <patdk-lap> ya, but that shouldn't mess with mtu sizes, and scp shouldn't have failed
[22:34] <patdk-lap> just been *slower*
[22:34] <patdk-lap> or lots of dropped packets
[22:34] <Walex> most wireless stuff does not do well with rapid back-to-back frame sending.
[22:35] <Walex> patdk-lap: the claim from the WISP was that on seeing a flood their receiver reset the connection...
[22:36] <patdk-lap> yes, but that wouldn't cause any of the *mtu* issues
[22:36] <patdk-lap> scp would have instantly failed, not hung
[22:36] <Walex> maybe they did see that in other cases so they assumed it was that case.
[23:40] <foo> Walex_away / patdk-lap - it seems something else is going on 
[23:40] <foo> these guys are claiming that they've seen something saturate this network for the past week
[23:40] <foo> *shrug*
[23:40]  * foo on the phone with him now
[23:40] <foo> Only change in network here is me, my laptop, and 3 alexa devices in the past week
[23:43] <foo> I have 10 down 5 up here, nothing fancy
[23:59] <foo> wow, something on my phone is doing this