TCP vs UDP — Choosing the Right Transport


A direct comparison of TCP and UDP across reliability, latency, overhead, and real-world application fit — and when the right answer is neither, and you should build on UDP with a custom protocol like QUIC.

Tags: layer4, tcp, udp, comparison, transport, quic, latency, reliability

Overview

Every application that sends data over an IP network must choose a transport protocol. In practice, that choice is almost always TCP or UDP. The choice is not about which protocol is better — it is about which protocol’s trade-offs match the application’s requirements.

TCP provides reliability, ordering, and flow control at the cost of connection overhead, latency, and head-of-line blocking. UDP provides none of that structure, but delivers datagrams with the absolute minimum overhead and delay. Neither is universally correct. An application doing file transfer would be foolish to build on raw UDP; one streaming live audio would be foolish to build on TCP.

This article examines the trade-offs side by side, looks at the real-world consequences of each property, and explains why a third path — custom transport built on UDP — is increasingly common for performance-critical modern applications.


Side-by-Side Comparison

| Property                 | TCP                             | UDP                                        |
|--------------------------|---------------------------------|--------------------------------------------|
| Connection               | Required (3-way handshake)      | None                                       |
| Reliability              | Guaranteed (retransmission)     | Best-effort (no retransmission)            |
| Ordering                 | Guaranteed (sequenced delivery) | Not guaranteed                             |
| Flow control             | Yes (receive window)            | No                                         |
| Congestion control       | Yes (Reno, CUBIC, BBR, etc.)    | No                                         |
| Error detection          | Mandatory checksum              | Optional checksum (IPv4); mandatory (IPv6) |
| Message framing          | Byte stream (no boundaries)     | Datagram (boundaries preserved)            |
| Header size              | 20–60 bytes                     | 8 bytes                                    |
| Connection setup latency | 1 RTT minimum                   | None                                       |
| Multiplexing streams     | No (one stream per connection)  | Application-defined                        |
| Broadcast/multicast      | No                              | Yes                                        |

Reliability: When It Matters and When It Doesn’t

TCP retransmission ensures that every byte sent by the application is eventually received by the peer. The mechanism: every segment is acknowledged. Unacknowledged segments are retransmitted after a timeout (RTO). The application sees a continuous, gapless byte stream.

This is essential for:

  - File transfer and software distribution, where every byte must arrive intact
  - Web pages, API responses, and other request/response payloads
  - Database replication and anything else where a silent gap would corrupt state

UDP provides no retransmission. Lost datagrams are gone. The application does not learn that a datagram was lost.

This is acceptable (or even preferred) for:

  - Live audio and video, where a retransmitted frame would arrive too late to play
  - Game state updates, where the next update supersedes the lost one
  - DNS queries and IoT telemetry, where the application retries or tolerates the gap


Head-of-Line Blocking

This is one of TCP’s most significant limitations for real-time applications.

TCP delivers data to the application in order. If segment N is lost, segments N+1, N+2, N+3 (which may have arrived successfully) are held in the TCP receive buffer until segment N is retransmitted and received. The application is blocked waiting.

For applications with independent data streams, this is catastrophic. If you have video, audio, and control messages all multiplexed over a single TCP connection (as HTTP/2 does), a single lost packet can block all three streams while waiting for the retransmission.

UDP has no head-of-line blocking. Each datagram is independent. A lost datagram does not block the delivery of subsequent datagrams. If you receive datagrams 1, 2, 4, 5 (with 3 missing), all of 1, 2, 4, 5 are immediately delivered to the application. The application decides what to do about the missing 3.

This is why QUIC (which runs over UDP) can multiplex multiple independent streams over a single connection without head-of-line blocking between streams. A lost packet in one stream does not block other streams.
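
The datagram-independence point can be sketched with plain sockets. This is a minimal illustration under stated assumptions: loopback, port 9999 free, and loss simulated by simply never sending datagram 3 (loopback delivery is effectively reliable, so real loss has to be faked):

```python
import socket

# Receiver: plain UDP socket on loopback (port 9999 is an arbitrary choice).
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 9999))
recv_sock.settimeout(1.0)

# Sender: transmit sequence numbers 1, 2, 4, 5 -- datagram 3 is never sent,
# simulating a loss in the network.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in (1, 2, 4, 5):
    send_sock.sendto(seq.to_bytes(4, "big"), ("127.0.0.1", 9999))

# Everything that arrived is delivered immediately -- nothing waits for 3.
received = []
try:
    while len(received) < 4:
        data, _ = recv_sock.recvfrom(64)
        received.append(int.from_bytes(data, "big"))
except socket.timeout:
    pass

missing = set(range(1, 6)) - set(received)
print(received, sorted(missing))  # the application decides what to do about 3
recv_sock.close()
send_sock.close()
```

A TCP receiver in the same situation would hold back everything after the gap until the retransmission of 3 arrived.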


Connection Overhead and Latency

TCP requires a 3-way handshake before any application data can flow:

  1. Client → Server: SYN
  2. Server → Client: SYN-ACK
  3. Client → Server: ACK + (optionally) first data

This is a minimum of 1 round trip of latency before data can be sent. For short-lived interactions (a DNS query, a game state update), this 1 RTT overhead is significant.
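
The contrast is visible directly in the socket API. A loopback sketch (ports 9090/9091 are arbitrary): TCP's connect()/accept() pair performs the 3-way handshake before any application byte moves, while UDP's very first packet on the wire is the data itself:

```python
import socket
import threading

# TCP: the kernel completes SYN / SYN-ACK / ACK inside connect()/accept()
# before a single application byte can flow.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 9090))
listener.listen(1)

def server():
    conn, _ = listener.accept()       # handshake completes here
    conn.sendall(b"hello over tcp")
    conn.close()

t = threading.Thread(target=server)
t.start()
client = socket.create_connection(("127.0.0.1", 9090))  # blocks ~1 RTT
tcp_msg = client.recv(64)
t.join()
client.close()
listener.close()

# UDP: no handshake -- the first packet on the wire is application data.
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 9091))
udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"hello over udp", ("127.0.0.1", 9091))  # zero setup RTTs
udp_msg, _ = udp_recv.recvfrom(64)
print(tcp_msg, udp_msg)
udp_recv.close()
udp_send.close()
```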

With TLS (which nearly all web traffic uses), the overhead is even higher:

  - TLS 1.2 adds 2 more round trips for its handshake (3 RTTs total before application data)
  - TLS 1.3 adds 1 more round trip (2 RTTs total)
  - QUIC folds the transport and cryptographic handshakes together into a single round trip

UDP has zero connection overhead. The application sends data immediately. For protocols where the total round trip must be minimized — DNS, game updates, IoT telemetry — this matters enormously.


Flow Control and Congestion Control

TCP’s flow control (via the receive window) prevents the sender from overwhelming a slow receiver. TCP’s congestion control (CUBIC, BBR, Reno) prevents the sender from overwhelming the network.

These mechanisms are why TCP is a “good citizen” on the internet. TCP connections compete fairly with each other, back off when there is congestion, and do not indefinitely saturate network links.

UDP has no built-in flow control or congestion control. An application that sends 1 Gbps of UDP can saturate a 100 Mbps link, causing packet loss that affects everyone else on that link. This is a real concern:

Uncontrolled UDP flooding is the basis of UDP flood DDoS attacks — an attacker sends maximum-rate UDP traffic from many sources to exhaust the target’s bandwidth. This is UDP’s lack of congestion control weaponized.
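
A responsible UDP sender therefore has to impose its own sending discipline. One common approach is a token bucket; the sketch below is illustrative (the class name, the ~1 Mbit/s rate, and the 1200-byte datagram size are all invented for the example, and a production sender would also react to observed loss):

```python
import time

class TokenBucket:
    """Pace sends to a target rate -- UDP itself will never do this for you."""
    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_send(self, nbytes: int) -> bool:
        """Return True if nbytes may be sent now, consuming that many tokens."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False              # caller should delay or drop this datagram

# ~1 Mbit/s with a 10 KB burst allowance: only the burst's worth of
# 1200-byte datagrams passes immediately; the rest must wait or be dropped.
bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=10_000)
sent = sum(1 for _ in range(100) if bucket.try_send(1_200))
print(sent)
```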


Message Framing

TCP is a byte stream. There are no message boundaries. If an application calls send(data, 1000) and send(data, 500), the receiver may receive 1500 bytes in one recv() call, or 600 and 900, or 1500 bytes in three separate calls. The application must implement its own framing (e.g., length-prefixed messages, delimiter-based parsing) to reconstruct application-level messages from the byte stream.
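
A common framing convention (a convention, not part of TCP itself) is a 4-byte length prefix per message. A sketch, with the function names invented for illustration:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its length as a 4-byte big-endian integer."""
    return struct.pack("!I", len(payload)) + payload

def unframe(stream: bytes) -> list[bytes]:
    """Recover message boundaries from a byte stream, however it was chunked."""
    messages, offset = [], 0
    while offset + 4 <= len(stream):
        (length,) = struct.unpack_from("!I", stream, offset)
        if offset + 4 + length > len(stream):
            break                          # partial message: wait for more bytes
        messages.append(stream[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return messages

# TCP may coalesce a send of 1000 bytes and a send of 500 bytes into one
# recv() of 1500 bytes; the length prefixes recover the original boundaries.
wire = frame(b"x" * 1000) + frame(b"y" * 500)
msgs = unframe(wire)
print([len(m) for m in msgs])
```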

UDP preserves message boundaries. One sendto() of 1,000 bytes on the sender produces exactly one recvfrom() of 1,000 bytes on the receiver (assuming the datagram is neither fragmented nor lost). The application does not need to implement framing — each send is one atomic message.

For protocols with natural message boundaries (DNS requests, SNMP polls, game state updates), UDP’s datagram model is a better fit. For protocols with continuous data streams (file transfer, HTTP response bodies), TCP’s byte stream is a better fit.


Broadcast and Multicast

TCP requires a unicast connection between exactly two endpoints. You cannot TCP-broadcast to a subnet or TCP-multicast to a group — TCP has no concept of multiple receivers.

UDP supports broadcast and multicast. A single UDP send to a broadcast or multicast address delivers the datagram to all interested receivers simultaneously. This is essential for:

  - Service discovery (mDNS, SSDP), where a device announces itself to every host on the link
  - DHCP, where a client must reach a server whose address it does not yet know
  - Live media distribution and market-data feeds, where one stream fans out to many receivers

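Broadcast must be requested explicitly on a socket; there is no TCP equivalent, because TCP is strictly point-to-point. A sketch (the discovery payload and port are illustrative, and the send itself is left commented out because it needs a routable network):

```python
import socket

# Enabling broadcast is an explicit opt-in on a UDP socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

# One sendto() would now reach every listener on the subnet:
discovery = b"WHO-IS-THERE"           # illustrative discovery payload
# sock.sendto(discovery, ("<broadcast>", 9999))   # needs a routable network

enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST)
print(enabled)   # non-zero: the socket may address the whole subnet
sock.close()
```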

The Third Way: Custom Transport on UDP

Some applications need more structure than raw UDP provides, but find that TCP's one-size-fits-all behavior doesn't fit:

  - QUIC reimplements reliability, ordering, and congestion control per stream, so one stream's loss never stalls another
  - WebRTC media channels want congestion control but not retransmission of frames that would arrive too late to play
  - Game networking stacks mix reliable channels (chat, inventory) with unreliable ones (position updates)

The pattern: when TCP’s one-size-fits-all reliability model doesn’t match the application, implement a tailored reliability model on UDP rather than fighting TCP’s behavior.
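
As a toy version of this pattern, here is a stop-and-wait sketch that builds just enough reliability on UDP: a 2-byte sequence number and a fixed retransmission timeout, with loss simulated by a receiver that ignores the first copy of each packet. Everything here (ports, framing, the 200 ms timeout) is invented for illustration; real stacks estimate RTT and pipeline many packets:

```python
import socket
import threading

def send_reliable(sock, dest, seq: int, payload: bytes, retries: int = 5) -> bool:
    """Retransmit until the peer ACKs this sequence number (stop-and-wait ARQ)."""
    packet = seq.to_bytes(2, "big") + payload
    sock.settimeout(0.2)                  # crude fixed RTO; real stacks estimate RTT
    for _ in range(retries):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(2)
            if int.from_bytes(ack, "big") == seq:
                return True               # peer confirmed delivery
        except socket.timeout:
            continue                      # lost data or lost ACK: try again
    return False

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 9202))
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.bind(("127.0.0.1", 9203))

result = {}

def receiver():
    seen = set()
    while True:
        data, addr = recv_sock.recvfrom(1500)
        seq = int.from_bytes(data[:2], "big")
        if seq not in seen:
            seen.add(seq)                 # simulate loss: drop the first copy
            continue
        recv_sock.sendto(data[:2], addr)  # ACK the retransmission
        result["payload"] = data[2:]
        return

t = threading.Thread(target=receiver)
t.start()
ok = send_reliable(send_sock, ("127.0.0.1", 9202), seq=7, payload=b"state update")
t.join()
print(ok, result["payload"])              # the "lost" first copy was recovered
```

A tailored design can stop exactly here — per-message reliability, no ordering, no head-of-line blocking — which is precisely the flexibility TCP cannot offer.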


Decision Framework

Use TCP when:

  - Every byte must arrive intact and in order (file transfer, web content, database traffic)
  - The data is a continuous stream without natural message boundaries
  - You want the kernel to handle retransmission, flow control, and congestion control for you

Use UDP when:

  - Stale data is worthless, so retransmission hurts more than loss (live audio/video, game state)
  - The exchange is a single small request and response (DNS) and connection setup would dominate the latency
  - You need broadcast or multicast delivery

Consider UDP with custom transport (QUIC, etc.) when:

  - You need multiple independent streams without cross-stream head-of-line blocking
  - You need reliability for some messages but not for others
  - You need faster connection setup or connection migration, which TCP cannot offer


Key Concepts

The wrong transport creates unfixable problems

If you use TCP for live audio, you will inevitably have audio glitches when retransmissions cause jitter. If you use raw UDP for file transfer, you will eventually see incomplete or corrupted transfers. The transport choice is architectural — it is very difficult to change later. Match the transport to the application’s fundamental requirements.

Modern “TCP replacement” protocols are UDP underneath

QUIC, WebRTC, and custom game protocols are all carried in UDP datagrams. This is not because UDP is fundamentally better than TCP — it is because UDP gives application developers a blank canvas to implement exactly the transport semantics they need, without the kernel’s TCP implementation getting in the way.

