💻 TCP Performance Tools
Available link bandwidth Enter bandwidth greater than 0.
Ping/RTT in milliseconds Enter RTT greater than 0 ms.
Current TCP receive window (default 64 KB without scaling) Enter window size greater than 0 KB.
Max TCP Throughput
⚠️ Disclaimer: Calculations use BDP formula per RFC 7323. Actual throughput depends on TCP congestion control algorithm, network path characteristics, and intermediate device behavior. Test with iperf3 to verify real-world performance.
RTT in milliseconds Enter RTT greater than 0 ms.
e.g. 0.01 = very good, 0.1 = acceptable, 1 = poor Enter packet loss between 0.0001% and 50%.
MSS = MTU minus 40 bytes (IP + TCP headers)
Mathis Throughput

Sources & Methodology

BDP formula from RFC 7323 TCP Extensions for High Performance. Mathis equation from Mathis, Semke, Mahdavi & Ott (1997). Throughput = Window/RTT (BDP limit) and Throughput = MSS/RTT x 1.22/sqrt(loss) (Mathis limit).
📘
RFC 7323 — TCP Extensions for High Performance (IETF)
Defines TCP window scaling, timestamps, and the PAWS mechanism. The authoritative source for BDP-based throughput limits and the requirement for buffer sizes to match bandwidth-delay product on high-performance links.
📙
Mathis et al. (1997) — The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm (ACM)
The seminal 1997 paper by Mathis, Semke, Mahdavi & Ott that derived the Mathis equation: Throughput = (MSS/RTT) x (C/sqrt(p)), where p is packet loss probability and C is approximately 1.22. The standard formula for estimating TCP throughput under packet loss.
📊
RFC 6349 — Framework for TCP Throughput Testing (IETF)
IETF standard for TCP throughput testing methodology including BDP measurement procedures, buffer sizing requirements, and recommended test parameters for network performance validation.

TCP Throughput Guide: BDP, Mathis Equation & Buffer Tuning

TCP throughput is limited by physics and protocol design, not just available bandwidth. Understanding the bandwidth-delay product, the Mathis equation, and how to tune OS TCP buffers is essential for any network engineer trying to squeeze maximum performance out of high-speed WAN circuits, satellite connections, or high-latency cloud links. This guide covers every formula you need, with worked examples and complete Linux tuning commands.

BDP Throughput = Window Size (bytes) / RTT (seconds)
BDP = Bandwidth (bytes/sec) x RTT (seconds)
Mathis Throughput = (MSS / RTT) x (1.22 / sqrt(packet_loss_rate))
Required Buffer = 2 x BDP (for full-duplex, safe margin)
BDP example: 1 Gbps link, 50ms RTT, 64 KB window
BDP = 125,000,000 x 0.050 = 6.25 MB needed in flight
Current limit = 65,536 / 0.050 = 1,310,720 bytes/sec = 10.5 Mbps (not 1 Gbps!)
Solution: increase socket buffer to at least 6.25 MB
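The arithmetic above is easy to script. A minimal Python sketch (the function names are illustrative, not from any library):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the link."""
    return bandwidth_bps / 8 * rtt_s

def window_limited_bps(window_bytes: float, rtt_s: float) -> float:
    """Throughput ceiling (bits/sec) imposed by the TCP window."""
    return window_bytes / rtt_s * 8

# 1 Gbps link, 50 ms RTT, 64 KB default window:
print(f"BDP: {bdp_bytes(1e9, 0.050)/1e6:.2f} MB")                      # 6.25 MB
print(f"64 KB window limit: {window_limited_bps(65536, 0.050)/1e6:.1f} Mbps")  # 10.5 Mbps
```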

Mathis example: 1% packet loss, 1460 MSS, 100ms RTT
= (1460 / 0.100) x (1.22 / sqrt(0.01)) = 14,600 x 12.2 = 178,120 bytes/sec = 1.4 Mbps
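The same check in Python (again a sketch; the helper name is invented):

```python
import math

def mathis_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. (1997) loss-limited throughput estimate, in bits/sec."""
    return (mss_bytes / rtt_s) * (1.22 / math.sqrt(loss_rate)) * 8

# 1% loss, 1460-byte MSS, 100 ms RTT:
print(f"{mathis_bps(1460, 0.100, 0.01)/1e6:.2f} Mbps")  # 1.42 Mbps
```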

TCP Throughput vs Packet Loss: Reference Table

Packet loss has a devastating effect on TCP throughput: per the Mathis equation, throughput scales with the inverse square root of the loss rate, so every 10x increase in loss cuts throughput by roughly 3.16x (sqrt(10)), regardless of available bandwidth.

Packet Loss | Throughput @ 50 ms RTT | Throughput @ 100 ms RTT | Assessment
0.001%      | 90 Mbps                | 45 Mbps                 | Excellent
0.01%       | 28.5 Mbps              | 14.2 Mbps               | Very good
0.1%        | 9 Mbps                 | 4.5 Mbps                | Acceptable
0.5%        | 4 Mbps                 | 2 Mbps                  | Degraded
1%          | 2.9 Mbps               | 1.4 Mbps                | Poor
5%          | 1.3 Mbps               | 0.6 Mbps                | Severely degraded
(Mathis-equation limits, assuming a 1460-byte MSS)
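These ceilings follow directly from the Mathis equation. A short Python sketch, assuming a 1460-byte MSS, that computes the loss-limited ceiling for any loss rate and RTT:

```python
import math

MSS = 1460   # bytes: standard Ethernet MTU 1500 minus 40 bytes of headers
C = 1.22     # Mathis constant

def mathis_mbps(loss_rate: float, rtt_s: float) -> float:
    """Loss-limited TCP throughput ceiling in Mbps."""
    return (MSS / rtt_s) * (C / math.sqrt(loss_rate)) * 8 / 1e6

for loss_pct in (0.001, 0.01, 0.1, 0.5, 1, 5):
    p = loss_pct / 100
    print(f"{loss_pct:>6}% loss: {mathis_mbps(p, 0.050):5.1f} Mbps @ 50 ms, "
          f"{mathis_mbps(p, 0.100):5.1f} Mbps @ 100 ms")
```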

TCP Buffer Tuning: Required Linux sysctl Settings

On high-bandwidth or high-latency links, the default OS TCP buffer sizes are far too small to achieve full throughput. The buffer must be at least as large as the BDP. For a 10 Gbps link with 10ms RTT, BDP = 12.5 MB. For a 1 Gbps link with 100ms RTT, BDP = 12.5 MB. Both require the same buffer size despite different link speeds because the product is identical.
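Because only the bandwidth x RTT product matters, the required buffer can be derived for any link in a few lines. A minimal Python sketch (the helper name is illustrative):

```python
def buffer_for(bandwidth_bps: float, rtt_s: float, margin: int = 2) -> int:
    """Socket buffer in bytes needed to keep the pipe full: margin x BDP."""
    bdp = int(bandwidth_bps / 8 * rtt_s)
    return margin * bdp

# Two very different links with the same BDP need the same buffer:
for bw, rtt in ((10e9, 0.010), (1e9, 0.100)):
    print(f"{bw/1e9:.0f} Gbps @ {rtt*1e3:.0f} ms -> buffer {buffer_for(bw, rtt)/1e6:.0f} MB")
# both cases print: buffer 25 MB
```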

💻 Linux TCP Tuning Commands (1 Gbps, 50ms RTT, BDP = 6.25 MB):
sysctl -w net.core.rmem_max=13107200
sysctl -w net.core.wmem_max=13107200
sysctl -w net.ipv4.tcp_rmem="4096 87380 13107200"
sysctl -w net.ipv4.tcp_wmem="4096 87380 13107200"
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.ipv4.tcp_congestion_control=bbr

Set maximum to 2 times BDP for a safety margin. Use BBR for high-latency or lossy links.

BBR vs CUBIC: Which Congestion Control to Use

CUBIC is the default Linux TCP congestion control. It is loss-based: it grows the window until packet loss occurs, then backs off aggressively. On links with frequent or bursty loss, CUBIC performs poorly because it cannot distinguish between congestion loss and random loss. BBR (Bottleneck Bandwidth and RTT), developed by Google, is a model-based algorithm. BBR estimates the actual bottleneck bandwidth and minimum RTT to find the optimal operating point. It does not react to packet loss by reducing its window, making it far more effective on lossy, high-latency, or satellite links. For satellite internet, intercontinental WAN links, or any path with more than 0.1% random packet loss, BBR typically achieves significantly higher throughput than CUBIC.

Diagnosing TCP Throughput Problems: Three Root Causes

When measured TCP throughput is far below link speed, the cause falls into one of three categories: (1) Window-limited: throughput equals window size divided by RTT and is below link speed. Solution: increase TCP socket buffer sizes to exceed the BDP. (2) Loss-limited: packet loss is reducing throughput per the Mathis equation regardless of window size. Solution: identify and fix packet loss using continuous ping or iperf3 statistics. (3) Bandwidth-limited: throughput equals or approaches link speed but is below expected. Solution: verify physical layer, check interface utilization, review QoS policies.
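The three-way triage above can be expressed as a small heuristic. The thresholds and function name in this sketch are illustrative assumptions, not a standard procedure:

```python
import math

def diagnose(measured_bps, link_bps, window_bytes, rtt_s, loss_rate, mss=1460):
    """Rough triage of a low TCP throughput measurement (heuristic sketch)."""
    window_limit = window_bytes / rtt_s * 8                    # window/RTT ceiling
    loss_limit = ((mss / rtt_s) * (1.22 / math.sqrt(loss_rate)) * 8
                  if loss_rate > 0 else float("inf"))          # Mathis ceiling
    if measured_bps >= 0.9 * link_bps:
        return "bandwidth-limited: near link speed; check physical layer and QoS"
    if loss_limit < window_limit and measured_bps <= 1.1 * loss_limit:
        return "loss-limited: find and fix packet loss"
    if measured_bps <= 1.1 * window_limit:
        return "window-limited: raise socket buffers above the BDP"
    return "inconclusive: re-test with iperf3"

# 64 KB window, 50 ms RTT, negligible loss on a 1 Gbps link:
print(diagnose(10.4e6, 1e9, 65536, 0.050, 1e-6))  # prints the window-limited diagnosis
```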

Frequently Asked Questions
How do you calculate maximum TCP throughput?
TCP maximum throughput = Window Size / RTT. For a 65,535 byte window with 50ms RTT: 65,535 / 0.050 = 1,310,700 bytes/sec = 10.5 Mbps. This is the window-limited maximum regardless of link speed. To achieve 1 Gbps throughput with 50ms RTT requires a window of at least 6.25 MB. Use this calculator to find the exact limit for your link.
What is the bandwidth-delay product (BDP)?
BDP = bandwidth (bytes/sec) x RTT (seconds). It represents the maximum data in transit simultaneously. For a 1 Gbps link with 50ms RTT: BDP = 125,000,000 x 0.050 = 6.25 MB. TCP needs a receive window at least this large to fully utilize the link. If the window is smaller, throughput is capped at window/RTT regardless of available bandwidth.
What is the Mathis equation?
The Mathis equation (1997) estimates TCP throughput under packet loss: Throughput = (MSS/RTT) x (1.22/sqrt(loss_rate)). With 1% loss, 1460 MSS, 100ms RTT: (1460/0.100) x (1.22/sqrt(0.01)) = 14,600 x 12.2 = 178,120 bytes/sec = 1.4 Mbps. This shows how even 1% packet loss can reduce gigabit TCP throughput to single-digit Mbps on high-latency links.
Why is my TCP throughput so much lower than my bandwidth?
Three causes: (1) Window-limited: TCP window is smaller than BDP. Increase OS socket buffers. (2) Loss-limited: packet loss is capping throughput per the Mathis equation. Find and fix packet loss. (3) Bandwidth-limited: actual link congestion or physical layer issue. Compare your measured throughput against both the BDP limit and Mathis limit to identify which applies.
How do I increase TCP throughput on a high-latency link?
Increase Linux TCP buffer sizes to match the BDP. Set net.ipv4.tcp_rmem maximum to at least 2x BDP. Enable BBR congestion control for high-latency or lossy links. Ensure TCP window scaling is enabled (default on modern OS). Test before and after with iperf3 to verify improvement. The calculator above shows exact required buffer sizes for your link parameters.
What is TCP window scaling?
TCP window scaling (RFC 7323) extends the 16-bit TCP window field beyond 65,535 bytes. Without it, max throughput is 65,535/RTT. For 100ms RTT: 5.2 Mbps limit regardless of bandwidth. Window scaling allows windows up to 1 GB and is negotiated in the TCP handshake. All modern OS enable it by default. Verify with: sysctl net.ipv4.tcp_window_scaling (Linux) or netsh int tcp show global (Windows).
What is BBR congestion control?
BBR (Bottleneck Bandwidth and RTT), developed by Google, is a model-based TCP congestion control algorithm. Unlike CUBIC, which reacts to packet loss by reducing its window, BBR estimates the actual bottleneck bandwidth and minimum RTT to probe the optimal operating point. BBR handles high-latency, lossy, and satellite links much better than CUBIC. Enable on Linux: sysctl -w net.ipv4.tcp_congestion_control=bbr.
How does packet loss affect TCP throughput?
Throughput is inversely proportional to the square root of packet loss rate. Going from 0.01% to 0.1% loss (10x more loss) reduces throughput by about 3x (sqrt(10)). Going from 0.01% to 1% (100x more loss) reduces throughput by 10x. Even 0.01% packet loss on a 100ms RTT link limits throughput to about 14 Mbps (at 1460-byte MSS) per the Mathis equation, regardless of available bandwidth.
How do I tune TCP buffers for a 10 Gbps link?
For a 10 Gbps link with 10ms RTT: BDP = 1,250,000,000 x 0.010 = 12.5 MB. For 50ms RTT: BDP = 62.5 MB. Set Linux buffers to 2x BDP. For the 50ms case: net.core.rmem_max=134217728 (128 MB). For the 10ms case: net.core.rmem_max=33554432 (32 MB). Also ensure your NIC supports jumbo frames and that MTU is set consistently across all path devices.
How do I test TCP throughput with iperf3?
Server: iperf3 -s. Client test: iperf3 -c [server-ip] -t 30 for 30 seconds. For high-bandwidth links: iperf3 -c [server-ip] -P 4 (4 parallel streams). For bidirectional: iperf3 -c [server-ip] --bidir. Check statistics with --json for packet loss and retransmission data. Compare the result to the BDP-limited maximum to determine whether window size or bandwidth is the bottleneck.
What is the difference between bandwidth and throughput?
Bandwidth is the maximum capacity of a link (like a pipe's width). Throughput is the actual data transferred per second (like water flowing). TCP throughput is always less than or equal to bandwidth, further limited by window size divided by RTT and by packet loss. A 1 Gbps bandwidth link may deliver only 100 Mbps TCP throughput if the window is too small or packet loss is present.
What MSS value should I use in the Mathis equation?
Use the MSS negotiated for your connection: MSS = MTU minus 40 bytes (20 IP headers + 20 TCP headers). Standard Ethernet MTU 1500: MSS = 1460 bytes. PPPoE (DSL): MTU 1492, MSS = 1452 bytes. Jumbo frames MTU 9000: MSS = 8960 bytes. The MSS is visible in the TCP SYN packets in any packet capture. If unsure, use 1460 bytes for internet-connected links.
Why use parallel streams when testing TCP throughput?
A single TCP stream is limited by one window/RTT constraint and one congestion control path. On links with a large bandwidth-delay product, a single stream may struggle to fill the pipe due to TCP slow start and congestion avoidance. Multiple parallel streams (iperf3 -P 4 to -P 8) each independently probe the available bandwidth and together can saturate links that a single stream cannot. Use parallel streams whenever a single stream measures significantly below expected throughput.
Related Networking Calculators

Missing a Networking Calculator?

Can’t find the tool you need? Tell us — we build new ones every week.