TCP (Transmission Control Protocol) is a connection-oriented transport protocol that guarantees reliable, ordered delivery of data between applications. TCP establishes connections before data transfer and confirms successful delivery through acknowledgments.
Last updated: 2026-03-25
How TCP Works
TCP operates at the transport layer (OSI Layer 4) and provides reliable data delivery through connection establishment, segmentation, retransmission, and flow control.
TCP Connection Lifecycle:
1. Three-Way Handshake (Connection Establishment):
- SYN: Client sends SYNchronize packet with sequence number
- SYN-ACK: Server acknowledges SYN and sends its own SYN
- ACK: Client acknowledges server’s SYN
- Connection established, data transfer begins
2. Data Transfer:
- Data divided into segments with sequence numbers
- Each segment acknowledged (ACK) by receiver
- Missing segments retransmitted automatically
- Flow control prevents overwhelming receiver
3. Four-Way Handshake (Connection Termination):
- FIN: Initiator sends FINish packet
- ACK: Receiver acknowledges FIN
- FIN: Receiver sends its own FIN
- ACK: Initiator acknowledges, connection closed
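The lifecycle above is driven entirely by the operating system; an application only calls `connect()`, `accept()`, and `close()`. A minimal sketch in Python (localhost echo, illustrative only — the handshakes happen inside the marked calls):

```python
import socket
import threading

# Server side: bind and listen, then accept() completes the handshake.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
srv.listen(1)                    # state: LISTEN

def serve() -> None:
    conn, _ = srv.accept()       # SYN in, SYN-ACK out, ACK in -> ESTABLISHED
    conn.sendall(conn.recv(1024).upper())
    conn.close()                 # sends FIN, starting the four-way close

t = threading.Thread(target=serve)
t.start()

# Client side: connect() performs the three-way handshake.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
cli.sendall(b"hello tcp")
reply = cli.recv(1024)
cli.close()                      # FIN/ACK exchange on this side
t.join()
srv.close()
print(reply)                     # b'HELLO TCP'
```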
TCP Segment Structure:
- Source Port: Sending application port (16 bits)
- Destination Port: Receiving application port (16 bits)
- Sequence Number: Position in data stream (32 bits)
- Acknowledgment Number: Next expected sequence number (32 bits)
- Flags: Control bits (SYN, ACK, FIN, RST, PSH, URG)
- Window Size: Flow control (16 bits)
- Checksum: Error detection (16 bits)
- Data: Application payload (variable)
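The fixed 20-byte header layout above can be decoded in a few lines. This sketch assumes no TCP options and uses a hand-crafted SYN segment (ports and sequence numbers are illustrative):

```python
import struct

TCP_FLAGS = ["FIN", "SYN", "RST", "PSH", "ACK", "URG"]  # bit 0 .. bit 5

def parse_tcp_header(raw: bytes) -> dict:
    """Parse the 20-byte fixed TCP header (RFC 793 layout, no options)."""
    src, dst, seq, ack, off_flags, window, checksum, urgent = \
        struct.unpack("!HHIIHHHH", raw[:20])
    header_len = (off_flags >> 12) * 4  # data offset is in 32-bit words
    flags = [name for i, name in enumerate(TCP_FLAGS) if off_flags & (1 << i)]
    return {"src_port": src, "dst_port": dst, "seq": seq, "ack": ack,
            "header_len": header_len, "flags": flags,
            "window": window, "checksum": checksum, "urgent": urgent}

# A hand-crafted SYN: port 51000 -> 80, seq 1000, offset 5 (20 bytes), SYN set.
syn = struct.pack("!HHIIHHHH", 51000, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
hdr = parse_tcp_header(syn)
print(hdr["flags"], hdr["header_len"])  # ['SYN'] 20
```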
When to Use TCP
Use TCP when you need:
- Reliable data delivery without loss
- Ordered data transmission (packets arrive in sequence)
- Flow control to prevent overwhelming receivers
- Congestion control to avoid network overload
- Applications requiring guaranteed delivery (web, email, file transfer)
- Long-lived connections with bidirectional communication
- Data integrity critical (financial transactions, database replication)
Do not use TCP when you need:
- Real-time speed over reliability (use UDP)
- Minimal latency overhead (use UDP)
- Broadcasting or multicasting (use UDP)
- Simple request-response with small payloads (use UDP)
- Streaming media where some packet loss acceptable (use UDP)
Signals You Need TCP
- Application requires 100% data delivery guarantee
- Data must arrive in exact order sent
- Connections span unreliable networks (internet)
- File transfers where corruption unacceptable
- User authentication and session management
- Email delivery requiring confirmation
- Database synchronization requiring consistency
TCP Features
Reliable Data Delivery
Mechanisms:
Acknowledgments (ACKs):
- Receiver sends ACK for each segment received
- ACK contains next expected sequence number
- Cumulative ACKs confirm all data up to sequence number
Retransmission:
- Sender starts timer when segment transmitted
- If ACK not received before timeout, segment retransmitted
- Adaptive timeout based on network conditions (RTT estimation)
Duplicate ACKs:
- Receiver sends duplicate ACKs for missing segments
- Three duplicate ACKs trigger fast retransmit
- Faster recovery than waiting for timeout
Sequencing:
- Each byte numbered with sequence number
- Segments carry sequence numbers of first byte
- Receiver reorders segments if received out of order
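The receiver-side behavior described above — buffering out-of-order segments and acknowledging cumulatively — can be sketched as a toy model (simplified; real stacks also manage windows and SACK blocks):

```python
class Reassembler:
    """Toy receiver: buffers out-of-order segments and releases bytes to the
    application strictly in order. The cumulative ACK is the sequence number
    of the next byte expected."""
    def __init__(self, initial_seq: int = 0):
        self.next_seq = initial_seq    # next byte we expect
        self.buffer = {}               # seq -> payload, held until the gap fills
        self.delivered = b""

    def receive(self, seq: int, payload: bytes) -> int:
        self.buffer[seq] = payload
        # Drain every segment that is now contiguous with the stream.
        while self.next_seq in self.buffer:
            chunk = self.buffer.pop(self.next_seq)
            self.delivered += chunk
            self.next_seq += len(chunk)
        return self.next_seq           # cumulative ACK value

r = Reassembler()
print(r.receive(0, b"AB"))    # 2: in order, delivered immediately
print(r.receive(5, b"FG"))    # 2: gap at byte 2, buffered -> duplicate ACK
print(r.receive(2, b"CDE"))   # 7: gap filled, both segments delivered
print(r.delivered)            # b'ABCDEFG'
```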
Flow Control
Purpose: Prevent sender from overwhelming receiver.
Mechanism: Sliding Window Protocol
- Receiver advertises available buffer space (window size)
- Sender transmits only up to advertised window
- Window size updates dynamically as data processed
- Zero window: Receiver buffer full, sender pauses
Window Sizes:
- Without Window Scale option: maximum 64KB (16-bit field)
- With Window Scale option (shift up to 14): maximum ~1GB
- Advertised in each TCP segment
Implementation:
- Sender tracks: Last byte sent, last byte acknowledged, window size
- Send window = Window size - (Last byte sent - Last byte acknowledged)
- When window reaches zero, sender stops transmitting
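The send-window arithmetic above reduces to one formula; a small sketch (values are illustrative):

```python
def usable_window(last_sent: int, last_acked: int, advertised: int) -> int:
    """Bytes the sender may still transmit: the advertised window minus
    data already in flight (sent but not yet acknowledged)."""
    in_flight = last_sent - last_acked
    return max(advertised - in_flight, 0)

# 2000 bytes in flight against a 4096-byte advertised window:
print(usable_window(last_sent=3000, last_acked=1000, advertised=4096))  # 2096
# Window fully consumed: the sender must pause until new ACKs arrive.
print(usable_window(last_sent=5096, last_acked=1000, advertised=4096))  # 0
```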
Congestion Control
Purpose: Prevent overwhelming network capacity.
Algorithms:
Slow Start:
- Begin with congestion window (cwnd) = 1 segment
- Double cwnd each round-trip time (RTT)
- Exponential growth until threshold or packet loss
- Transition to congestion avoidance at threshold
Congestion Avoidance:
- Linear growth: Add 1 segment per RTT
- Continues until packet loss detected
- Conservative increase to probe available bandwidth
Fast Retransmit:
- Triggered by 3 duplicate ACKs
- Immediate retransmission without waiting for timeout
- Assumes packet loss, not congestion collapse
Fast Recovery:
- After fast retransmit, reduce cwnd to 50%
- Perform congestion avoidance (linear growth)
- Avoid slow start to maintain throughput
Congestion Signals:
- Timeout: Severe congestion, reset to slow start
- 3 Duplicate ACKs: Moderate congestion, fast recovery
- ECN (Explicit Congestion Notification): Network signals congestion proactively
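The interplay of slow start, congestion avoidance, and fast recovery can be traced with a toy Reno-style simulation. This is a simplification (real stacks grow per ACK, not per RTT, and Linux defaults to CUBIC); window units are segments and the event sequence is invented for illustration:

```python
def simulate_cwnd(events, ssthresh=64, cwnd=1):
    """Toy Reno-style congestion window trace. Each event models one RTT:
    'ack' grows the window; 'dupack3' is three duplicate ACKs (fast
    recovery: halve); 'timeout' resets to slow start."""
    trace = []
    for ev in events:
        if ev == "ack":
            if cwnd < ssthresh:
                cwnd *= 2                 # slow start: exponential growth
            else:
                cwnd += 1                 # congestion avoidance: linear growth
        elif ev == "dupack3":
            ssthresh = max(cwnd // 2, 2)  # fast recovery: halve the window
            cwnd = ssthresh
        elif ev == "timeout":
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1                      # severe congestion: back to slow start
        trace.append(cwnd)
    return trace

print(simulate_cwnd(["ack"] * 5))                       # [2, 4, 8, 16, 32]
# After loss, the window halves to 16, then grows linearly:
print(simulate_cwnd(["ack"] * 5 + ["dupack3", "ack"]))  # [2, 4, 8, 16, 32, 16, 17]
```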
Connection Management
State Machine:
- LISTEN: Server waiting for connection
- SYN_SENT: Client sent SYN, waiting for SYN-ACK
- SYN_RECEIVED: Server received SYN, sent SYN-ACK
- ESTABLISHED: Connection active, data transfer
- FIN_WAIT_1: Initiator sent FIN, waiting for ACK
- FIN_WAIT_2: Initiator's FIN acknowledged, waiting for peer's FIN
- CLOSE_WAIT: Receiver got FIN, waiting for application close
- LAST_ACK: Receiver sent FIN, waiting for final ACK
- TIME_WAIT: Initiator sent final ACK, waiting before closing
TIME_WAIT State:
- Duration: 2 * Maximum Segment Lifetime (MSL)
- Typical: 60 seconds
- Purpose: Ensure final ACK reaches peer, allow old segments to expire
TCP Header Fields
| Field | Size | Purpose |
|---|---|---|
| Source Port | 16 bits | Sending application |
| Destination Port | 16 bits | Receiving application |
| Sequence Number | 32 bits | Byte position in stream |
| Acknowledgment Number | 32 bits | Next expected byte |
| Data Offset | 4 bits | Header length (20-60 bytes) |
| Reserved | 6 bits | Unused (set to zero) |
| Flags | 6 bits | Control bits |
| Window | 16 bits | Flow control |
| Checksum | 16 bits | Error detection |
| Urgent Pointer | 16 bits | Urgent data position |
| Options | Variable | Extensions |
TCP Flags:
- SYN: Synchronize sequence numbers (connection establishment)
- ACK: Acknowledgment field valid
- FIN: Finish (connection termination)
- RST: Reset (abort connection)
- PSH: Push function (deliver immediately)
- URG: Urgent pointer valid (prioritized data)
TCP Options
Maximum Segment Size (MSS):
- Largest segment payload size
- Typical: 1460 bytes (Ethernet MTU 1500 - 20 IP header - 20 TCP header)
- Negotiated during connection establishment
Window Scale:
- Extends window size field beyond 16 bits
- Shift count: 0-14
- Maximum window: 65,535 × 2^14 = 1GB
- Required for high-bandwidth, high-latency networks
Selective Acknowledgments (SACK):
- Acknowledge non-contiguous blocks of data
- Reduce retransmissions when multiple segments lost
- SACK-permitted option in SYN
- SACK blocks in ACK segments
Timestamps:
- Include timestamp in each segment
- Better RTT estimation
- Protection Against Wrapped Sequence Numbers (PAWS)
- Required for high-speed networks
Metrics and Measurement
Connection Performance:
Round-Trip Time (RTT):
- Time from sending segment to receiving ACK
- Typical internet: 20-100ms
- Same data center: <1ms
- TCP uses RTT for retransmission timeout calculation
Throughput:
- Maximum: determined by window size and RTT
- Formula: Throughput = Window Size / RTT
- Example: 64KB window, 50ms RTT = 1.28 MB/s = 10.24 Mbps
- With window scaling (1GB window): 160 Gbps theoretical
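The throughput formula above is easy to verify numerically (path values match the example; the function is a sketch of the upper bound, ignoring congestion control and protocol overhead):

```python
def max_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput: one full window per round trip."""
    return window_bytes * 8 / rtt_seconds

# 64 KB window over a 50 ms path, as in the example above:
print(round(max_throughput_bps(64_000, 0.050) / 1e6, 2))          # 10.24 Mbps
# 1 GB window (maximum with window scaling) over the same path:
print(round(max_throughput_bps(1_000_000_000, 0.050) / 1e9, 1))   # 160.0 Gbps
```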
Connection Establishment:
- Three-way handshake: 1.5 RTT for the full SYN/SYN-ACK/ACK exchange (the client can begin sending data after 1 RTT)
- TLS handshake adds: 1-2 RTT
- Total: 2.5-3.5 RTT for HTTPS connection
Reliability Metrics:
- Retransmission rate: Percentage of segments retransmitted
- Target: <1% in normal conditions
- High rate indicates congestion or network issues
- Out-of-order delivery: Segments received not in sequence
- Handled automatically by TCP
- Indicates network path changes or packet reordering
Flow Control:
- Window utilization: Percentage of advertised window used
- Target: 80-95%
- Low utilization indicates sender or receiver limitations
- Zero window events: Receiver buffer full
- Frequency indicates under-provisioned receiver
According to Cloudflare, TCP connections average 3-5 round trips for full page load on modern web applications. HTTP/3 (QUIC) reduces this to 0-1 round trips after initial connection.
Common Mistakes and Fixes
Mistake: Not tuning TCP buffer sizes for high-bandwidth connections Fix: Increase socket buffer sizes. Use window scaling option. Calculate optimal buffer: bandwidth × RTT.
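The bandwidth × RTT calculation above is the bandwidth-delay product (BDP). A sketch of sizing and applying it (the 1 Gbps / 50 ms figures are illustrative, and the kernel may clamp the requested buffer to its configured maximum):

```python
import socket

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> int:
    """Bandwidth-delay product: the buffer needed to keep the pipe full."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# A 1 Gbps path with 50 ms RTT needs about 6.25 MB of socket buffer.
buf = bdp_bytes(1_000_000_000, 0.050)
print(buf)  # 6250000

# Request matching send/receive buffers on a socket:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf)
s.close()
```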
Mistake: Ignoring TIME_WAIT state accumulation Fix: Enable TIME_WAIT socket reuse for outgoing connections (e.g. Linux tcp_tw_reuse); avoid the deprecated and unsafe tcp_tw_recycle. Implement connection pooling to reduce connection churn.
Mistake: Disabling Nagle’s algorithm indiscriminately Fix: Keep Nagle’s algorithm enabled for bulk transfers. Disable only for latency-sensitive small messages (interactive applications).
Mistake: Not handling TCP backpressure Fix: Implement application-level flow control. Monitor socket send buffer. Throttle upstream when buffer fills.
Mistake: Using short timeout values Fix: Use adaptive timeouts. Let TCP calculate retransmission timeout based on RTT estimates. Avoid manual timeout configuration.
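The adaptive timeout TCP computes follows the RFC 6298 smoothing scheme (SRTT/RTTVAR). A simplified sketch of that estimator, with invented RTT samples (real stacks also apply exponential backoff and per-connection state):

```python
def rto_estimator(samples, alpha=0.125, beta=0.25):
    """RFC 6298-style retransmission timeout from RTT samples (in seconds).
    The first sample initializes SRTT and RTTVAR; later samples are smoothed."""
    srtt = rttvar = rto = None
    for r in samples:
        if srtt is None:
            srtt, rttvar = r, r / 2
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
            srtt = (1 - alpha) * srtt + alpha * r
        rto = srtt + max(4 * rttvar, 0.001)  # clock-granularity floor
    return max(rto, 1.0)                      # RFC 6298 minimum RTO of 1 second

# On a stable ~100 ms path the computed RTO stays under the 1 s floor:
print(rto_estimator([0.100, 0.110, 0.095, 0.105]))  # 1.0
```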
Mistake: Not enabling TCP Fast Open Fix: Enable TCP Fast Open for known servers. Reduces connection establishment by 1 RTT. Requires application support.
Mistake: Ignoring TCP keepalives for long-idle connections Fix: Configure TCP keepalive for connections that must persist. Detect dead peers. Default: 2 hours idle before probe.
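Keepalive is enabled per socket; a sketch using Python's socket API (the idle/interval/count values are illustrative, and the TCP_KEEPIDLE family of constants is platform-specific, so the tuning is guarded):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)  # enable keepalive probes

# Linux-specific tuning (guarded: these constants do not exist everywhere):
if hasattr(socket, "TCP_KEEPIDLE"):
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle secs before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # secs between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before reset

enabled = s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
print(enabled)  # non-zero when keepalive is on
s.close()
```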
Frequently Asked Questions
What is the difference between TCP and UDP? TCP provides reliable, ordered delivery with flow control and congestion control. UDP provides best-effort delivery without guarantees. TCP is connection-oriented; UDP is connectionless. TCP has higher overhead; UDP is lightweight.
Why does TCP use a three-way handshake? Three-way handshake ensures both parties can send and receive, synchronizes sequence numbers, and prevents duplicate connections from delayed SYN packets. Both sides exchange initial sequence numbers.
What happens if a TCP segment is lost? Sender’s retransmission timer expires, segment retransmitted. Receiver may send duplicate ACKs for missing segment. Fast retransmit may trigger after 3 duplicate ACKs. Lost segment retransmitted without closing connection.
How does TCP handle out-of-order packets? TCP receiver buffers out-of-order segments. When missing segment arrives, receiver reassembles data stream. Application receives data in order. Duplicate ACKs signal missing segments to sender.
What is the maximum TCP throughput? Theoretical: limited by window size and RTT. Maximum sequence number: 4GB (32 bits). Window scaling extends to 1GB window. Practical limits: network bandwidth, congestion control, implementation overhead. 10-100 Gbps achievable with tuning.
How many concurrent TCP connections can a server handle? Depends on resources: memory per connection (8-64KB typical), file descriptors (default limits: 1024), CPU for context switching. Tuned servers: 100K-1M+ concurrent connections. Use connection pooling for efficiency.
What is Nagle’s algorithm? Nagle’s algorithm coalesces small writes into larger segments, reducing the overhead of sending many tiny packets. It delays small sends until an outstanding ACK is received or a full segment accumulates. Disable it (TCP_NODELAY) for latency-sensitive interactive applications (real-time gaming, Telnet).
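Disabling Nagle is a per-socket option; a minimal sketch with Python's socket API:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle on this socket
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay)  # non-zero when Nagle is disabled
s.close()
```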
How does TCP differ from TLS? TCP is transport protocol ensuring reliable delivery. TLS is security layer on top of TCP providing encryption and authentication. TLS uses TCP for transport. HTTPS = HTTP over TLS over TCP.
What causes TCP connection resets (RST)? Sending to closed port, application crashes, firewall blocking connection, one side crashes, invalid TCP segments. RST immediately terminates connection without graceful shutdown.
How does TCP fast retransmit work? When receiver gets out-of-order segment, sends duplicate ACK for expected sequence. After 3 duplicate ACKs, sender fast retransmits missing segment without waiting for timeout. Faster recovery than timeout-based retransmission.
How This Applies in Practice
TCP forms the foundation of reliable internet communication:
Web Applications:
- HTTP/1.1 and HTTP/2 use TCP connections
- Persistent connections reduce handshake overhead
- Keep-alive allows connection reuse
- TLS adds security layer over TCP
File Transfer:
- FTP uses separate TCP connections for control and data
- Large files benefit from TCP reliability
- Resume capability through TCP’s sequence numbering
- Flow control prevents overwhelming receivers
Email:
- SMTP uses TCP for reliable email delivery
- POP3/IMAP use TCP for retrieval
- Message integrity guaranteed
- Delivery confirmation through TCP acknowledgments
Database Replication:
- Master-slave replication over TCP
- Transaction integrity requires reliable delivery
- Long-lived connections minimize overhead
- Flow control prevents replica lag
Microservices:
- Service-to-service communication over TCP
- gRPC uses HTTP/2 over TCP
- Connection pooling reduces overhead
- Load balancing distributes TCP connections
TCP on Azion
Azion optimizes TCP performance at the edge:
- TCP optimization through your Application
- TCP Fast Open supported for reduced latency
- Connection pooling to origin servers
- Load balancing distributes TCP connections
- Edge termination reduces TCP RTT for users
- Real-Time Metrics monitor TCP performance
Azion’s global network reduces TCP RTT by serving content closer to users, improving throughput and reducing latency.
Learn more about Application Acceleration and Edge Application.
Related Resources
Sources:
- RFC 793. “Transmission Control Protocol.” https://tools.ietf.org/html/rfc793
- RFC 7323. “TCP Extensions for High Performance.” https://tools.ietf.org/html/rfc7323
- RFC 2018. “TCP Selective Acknowledgment Options.” https://tools.ietf.org/html/rfc2018
- Stevens, W. Richard. “TCP/IP Illustrated, Volume 1.” Addison-Wesley, 1994.