Diagnose intermittent connectivity where small traffic succeeds but larger transfers fail due to a mismatched MTU.
Prove the failure with a DF “large ping”, correct the MTU on eth0, then re-test to confirm stability.
Users report that connections work sometimes but fail on larger transfers. Small pings succeed, but some traffic hangs or drops,
and you suspect a misconfigured MTU on eth0.
MTU problems often look like “random network issues” because small packets pass while larger ones fragment or blackhole. Your job is to reproduce the failure, make a safe targeted change, then prove the fix with the same test.
MTU (Maximum Transmission Unit) is the largest frame size your interface will send without needing fragmentation. When MTUs are mismatched along a path and ICMP “fragmentation needed” messages are blocked or ignored, you can get a blackhole: small packets work, larger packets fail.
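As a rule of thumb, the largest DF ping payload an interface can carry is its MTU minus 28 bytes of IPv4 and ICMP header overhead. A minimal shell sketch of that arithmetic:

```shell
# Largest unfragmented ICMP payload for a given MTU:
# IPv4 header (20 bytes) + ICMP header (8 bytes) = 28 bytes of overhead.
mtu=1500
max_payload=$((mtu - 28))
echo "largest DF ping payload at MTU $mtu: $max_payload bytes"   # prints 1472
```

This is why 1472 is the classic test size for a standard 1500-byte Ethernet MTU.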
Re-run the exact test that failed before you declare recovery. If you cannot reproduce the failure and then make it go away, you do not really know what you fixed.
First, inspect the current link state of eth0.
ip link show eth0
This confirms the interface is up and shows the current MTU. If the MTU is too low, or simply mismatched with the rest of the network path, you see “works sometimes” behavior that depends on packet size and fragmentation handling.
# Example failure state:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1200 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
ping -c 1 -M do -s 1400 1.1.1.1
-M do sets “do not fragment.” If the packet exceeds the path MTU, the kernel refuses to send it and reports the MTU problem.
This is a clean way to confirm the issue without relying on application symptoms.
ping: local error: message too long, mtu=1200
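The size math behind that error can be sketched in shell; the 28-byte figure is the IPv4 header (20 bytes) plus the ICMP header (8 bytes):

```shell
payload=1400
overhead=28                       # IPv4 header (20) + ICMP header (8)
packet=$((payload + overhead))
echo "on-wire packet: $packet bytes vs MTU 1200"   # 1428 > 1200, so the DF send is refused
```

With DF set, the kernel refuses to send the 1428-byte packet locally rather than fragmenting it, which is exactly the error shown above.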
Next, correct the MTU on eth0.
sudo ip link set dev eth0 mtu 1500
This applies the corrected MTU immediately. In production, you would also persist the change in your network configuration so it survives reboot. The lab is incident-style: restore stable behavior first, then handle persistence as follow-up work.
ip link show eth0
Always re-check live interface state after making a change. This prevents you from “testing the old configuration” by mistake.
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
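If you script this re-check, sysfs exposes the live MTU as a plain number. The sketch below reads lo so it runs anywhere; substitute eth0 on the affected host:

```shell
iface=lo                          # use eth0 on the affected host
cat "/sys/class/net/$iface/mtu"   # prints the live MTU as a bare integer
```

This is handy in monitoring checks where parsing the full `ip link show` output would be overkill.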
ping -c 1 -M do -s 1400 1.1.1.1
This is the exact test that failed earlier. If it succeeds now, you have strong evidence the MTU mismatch was the root cause.
ping -c 1 1.1.1.1
A normal ping confirms baseline reachability. Combined with the DF test above, you’ve validated both “small” and “large” packet behavior.
The issue may not be MTU. Confirm the symptom is truly size-related (try multiple payload sizes) and check for packet loss, duplex issues, congestion, or firewall/NAT path problems.
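To confirm the symptom really is size-related, a simple sweep of DF ping payloads (network access assumed; 1.1.1.1 as in the earlier tests) shows where delivery stops:

```shell
# Probe several payload sizes with DF set; the failure boundary
# should line up with the suspected MTU if size is the real issue.
for size in 1000 1200 1400 1472; do
  if ping -c 1 -W 1 -M do -s "$size" 1.1.1.1 >/dev/null 2>&1; then
    echo "payload $size: delivered"
  else
    echo "payload $size: refused or lost"
  fi
done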
ip link set is runtime state. Persist the MTU in your network manager (NetworkManager, systemd-networkd, or distro config files)
so it survives reboot.
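As one example of persistence, a systemd-networkd unit can pin the MTU; the file path and the Name= match below are assumptions for this lab host:

```ini
# /etc/systemd/network/10-eth0.network (hypothetical path)
[Match]
Name=eth0

[Link]
MTUBytes=1500
```

With NetworkManager, the equivalent is `nmcli connection modify <profile> 802-3-ethernet.mtu 1500` followed by re-activating the connection.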
The bottleneck may be elsewhere in the path (tunnels, VPNs, overlays). The fix might require adjusting MTU end-to-end and ensuring PMTUD/ICMP is allowed to function.
Follow up with an application-level transfer test, and if needed, confirm the path MTU with tracepath or packet capture.
If you need to revert the MTU after the lab, set it back to the original value and confirm interface state.
sudo ip link set dev eth0 mtu 1200
ip link show eth0
The MTU matches the intended value on eth0, and your DF test behaves consistently with that configuration.
ip link show <iface>: display link state, MTU, and interface flags.
ip link set dev <iface> mtu <value>: change the interface MTU at runtime.
ping -M do -s <bytes> <host>: test packet size limits with DF behavior.
-M do: do not fragment; fails fast if the packet exceeds the path MTU.
-s <bytes>: ICMP payload size (headers are added on top).
-c 1: send one packet for quick verification.
tracepath <host>: discover the path MTU and hop behavior without manual sizing.