OSI Layer 4: Core Functions, Protocols, and Security Best Practices
What Is OSI Model Layer 4 (The Transport Layer)?
Layer 4 of the OSI model is the Transport Layer, which is responsible for managing end-to-end communication between applications, ensuring the complete and reliable delivery of data. It performs segmentation, flow control, and error checking to guarantee data reaches the correct application on the receiving host.
Key functions of the transport layer include:
- Segmentation and reassembly: Breaks down large data chunks from the session layer into smaller segments for transmission and reassembles them at the receiving end.
- End-to-end delivery: Provides logical communication between applications running on different devices, ensuring data is delivered to the intended service.
- Flow control: Regulates the rate of data transmission between a sender and receiver to prevent congestion and ensure the receiving device can keep up with the data flow.
- Error control: Detects and corrects errors in data transmission, retransmitting data segments that fail to arrive correctly to ensure data integrity.
- Port numbering: Uses port numbers to identify applications on a host, allowing multiple applications to run and communicate simultaneously.
Common transport layer protocols include:
- TCP (Transmission Control Protocol): A connection-oriented protocol that provides reliable, error-checked communication, suitable for applications like web browsing (HTTP).
- UDP (User Datagram Protocol): A faster, connectionless protocol with minimal overhead, ideal for time-sensitive applications such as real-time video and gaming.
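To make the contrast concrete, here is a minimal Python sketch: a UDP datagram crosses the loopback interface with no handshake or session state at all, whereas TCP would require a full `connect()`/`accept()` handshake first. The addresses and payload are illustrative.

```python
import socket

# UDP is connectionless: bind a receiver and fire a datagram at it directly.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", addr)         # no handshake, no session state

data, _ = receiver.recvfrom(1024)
assert data == b"ping"

# TCP, by contrast, would need connect()/accept() -- a full three-way
# handshake -- before any application data could flow.
sender.close()
receiver.close()
```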
This is part of a series of articles about OSI layers.
The Role of the Transport Layer in the OSI Model
The transport layer plays a key role in ensuring reliable communication between applications on different hosts. While the network layer (Layer 3) is concerned with routing packets across networks, Layer 4 is focused on complete data delivery between two endpoints, regardless of how the data travels through the network.
This layer provides two primary types of service: connection-oriented communication and connectionless communication. Connection-oriented protocols like TCP establish a session, confirm data receipt, and retransmit lost packets. Connectionless protocols like UDP send data without establishing a connection, offering faster transmission at the cost of reliability.
Transport layer protocols also manage flow control and congestion avoidance. The former ensures that a sender does not overwhelm a receiver with too much data at once, while the latter helps minimize packet loss during periods of heavy traffic. These features are critical for maintaining performance and stability in complex network environments.
By abstracting these responsibilities from the application layer, the transport layer allows software developers to build networked applications without needing to manage the intricacies of data transport.
Core Responsibilities of OSI Layer 4
End-to-End Delivery
End-to-end communication is the main responsibility of Layer 4, enabling applications on source and destination machines to exchange data regardless of the nature of the underlying network infrastructure. The transport layer creates a logical link between the communicating endpoints, maintaining this connection even as the data traverses multiple routers, switches, and potentially different network technologies.
This abstraction assures higher layers, such as applications, that when they send data through the stack, it will reliably reach its intended peer process. Reliability is achieved using sequence numbers, acknowledgments, and retransmission strategies, particularly in protocols like TCP.
The sender segments the data stream, and the receiver acknowledges each segment, confirming receipt.
If an acknowledgment is not received within a certain time, the sender retransmits the original data. These mechanisms ensure that even in the event of lost or corrupted packets, the message can be reconstructed end-to-end, achieving reliable transport between nodes.
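The acknowledge-and-retransmit loop described above can be sketched as a stop-and-wait simulation in Python. The `transmit` callback and its loss pattern are hypothetical stand-ins for a real network; real TCP pipelines many segments at once rather than waiting on each one.

```python
def send_reliable(segments, transmit, max_retries=5):
    """Stop-and-wait: resend each segment until it is acknowledged."""
    for seq, data in enumerate(segments):
        for _ in range(max_retries):
            if transmit(seq, data):      # True means an ACK came back
                break
        else:
            raise TimeoutError(f"segment {seq} never acknowledged")

received = []
drops = iter([True])                     # deterministic: lose the 1st transmission

def lossy_transmit(seq, data):
    if next(drops, False):               # simulated loss: no ACK returned
        return False
    received.append((seq, data))
    return True

send_reliable([b"hello", b"world"], lossy_transmit)
assert received == [(0, b"hello"), (1, b"world")]
```

Even though the first transmission is lost, the retry loop delivers every segment exactly once and in order.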
Segmentation, Reassembly, and Data Flow Control
Segmentation is the process by which the transport layer breaks down large chunks of data from higher layers into smaller, manageable segments for transmission. Each segment is assigned a sequence number, enabling the receiving end to reassemble the data in the correct order, even if the network delivers segments out of sequence.
Data flow management occurs simultaneously with segmentation and reassembly. Flow control mechanisms regulate how much data can be sent at a time, preventing the sender from overwhelming the receiver or congesting the network. Techniques such as sliding windows allow the sender to transmit multiple segments before needing an acknowledgment, optimizing throughput while still maintaining reliability.
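A toy Python sketch of segmentation and reassembly, with a deliberately tiny segment size and a shuffled delivery order to mimic out-of-order arrival:

```python
import random

MSS = 4  # maximum segment size in bytes -- tiny, purely for illustration

def segment(data, mss=MSS):
    """Split a byte stream into (sequence_number, chunk) segments."""
    return [(i, data[i:i + mss]) for i in range(0, len(data), mss)]

def reassemble(segments):
    """Reorder by sequence number and rebuild the original stream."""
    return b"".join(chunk for _, chunk in sorted(segments))

segs = segment(b"transport layer")
random.shuffle(segs)                 # the network may deliver out of order
assert reassemble(segs) == b"transport layer"
```

Because every segment carries its offset, the receiver can rebuild the stream no matter what order the segments arrive in.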
Error Control Mechanisms
Error detection at Layer 4 is implemented using checksums attached to each data segment. Upon receipt, the destination host recalculates the checksum and compares it with the sent value.
If a mismatch is detected, this indicates that an error occurred in transit, prompting either a request for retransmission (in reliable protocols like TCP) or discarding the faulty segment (in protocols like UDP). This level of integrity checking ensures that applications receive valid, uncorrupted data, shielding higher layers from the complexities of network errors.
Correction mechanisms are closely tied to the protocol in use. TCP, for example, uses acknowledgments and timeouts to retransmit lost or corrupted data until it is successfully delivered. This guarantees that network imperfections do not propagate upwards, maintaining reliable communication and application stability.
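TCP and UDP checksums follow the 16-bit one's-complement scheme of RFC 1071. A minimal Python version (simplified: real TCP/UDP checksums also cover a pseudo-header with the IP addresses):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 16-bit one's-complement checksum, as used by TCP and UDP."""
    if len(data) % 2:
        data += b"\x00"                          # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

payload = b"hello world"
checksum = internet_checksum(payload)

# Receiver recomputes over the received bytes; a corrupted copy mismatches.
assert internet_checksum(b"hello world") == checksum
assert internet_checksum(b"hellp world") != checksum
```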
Port Numbering (Addressing and Multiplexing)
Port addressing is a critical function of the transport layer, enabling the multiplexing and demultiplexing of data streams headed to or coming from various applications on the same device. Ports serve as logical endpoints, with each service or application bound to a specific port number. For example, web servers typically use port 80 for HTTP and port 443 for HTTPS, ensuring that traffic is correctly routed to the appropriate process upon arrival at the host.
Multiplexing, managed by Layer 4, allows multiple applications to use the network connection simultaneously. The transport protocol tags each segment with the relevant port numbers, ensuring data is correctly associated with the intended application process. This allows one host to support multiple simultaneous communications, with different services operating concurrently.
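Demultiplexing by destination port can be illustrated with a hypothetical dispatch table in Python; the handlers and port bindings here are invented for the example:

```python
# Hypothetical demultiplexer: map destination ports to application handlers.
handlers = {
    80:  lambda payload: f"HTTP handler got {len(payload)} bytes",
    443: lambda payload: f"HTTPS handler got {len(payload)} bytes",
}

def demultiplex(dst_port, payload):
    """Deliver a segment's payload to the application bound to dst_port."""
    handler = handlers.get(dst_port)
    if handler is None:
        # A real stack would send a TCP RST or an ICMP "port unreachable".
        return "no listener on this port"
    return handler(payload)

assert demultiplex(80, b"GET /") == "HTTP handler got 5 bytes"
assert demultiplex(9999, b"x") == "no listener on this port"
```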
Tips from the expert

Steve Moore is Vice President and Chief Security Strategist at Exabeam, helping drive solutions for threat detection and advising customers on security programs and breach response. He is the host of “The New CISO Podcast,” a Forbes Tech Council member, and Co-founder of TEN18 at Exabeam.
In my experience, here are tips that can help you get more out of transport layer (Layer 4) performance and security:
- Use TCP Fast Open (TFO) to reduce handshake latency: TFO allows data to be sent in the initial SYN packet, speeding up repeated connections between the same client and server. It’s highly beneficial for latency-sensitive applications and mobile networks.
- Enable Explicit Congestion Notification (ECN) in TCP stacks: ECN marks packets instead of dropping them when congestion is detected, improving performance in high-throughput or lossy environments by allowing endpoints to reduce their rate proactively without packet loss.
- Implement per-service port randomization to harden against scanning: Move beyond static port assignments for sensitive services. Randomizing ephemeral port ranges and service ports raises the bar for automated reconnaissance and exploits.
- Use SYN proxying to offload and protect back-end resources: Deploy Layer 4 proxies that terminate and validate incoming TCP handshakes before establishing full sessions to origin servers. This mitigates SYN flood attacks and preserves server resources.
- Introduce application-layer pacing when using UDP-heavy workloads: If the application relies on UDP (e.g., VoIP, streaming), implement pacing mechanisms like token buckets at the application layer to avoid burst-induced jitter and packet loss.
Key Protocols at the Transport Layer
Transmission Control Protocol (TCP)
The Transmission Control Protocol (TCP) is the predominant Layer 4 protocol for reliable, connection-oriented communication between hosts on IP networks. It establishes a virtual connection through a handshake process, segments application data, and provides error checking, sequencing, and flow control. This ensures accurate, in-order delivery of data from sender to receiver, regardless of network fluctuations or congestion.
TCP’s ability to retransmit lost segments and manage network congestion makes it the protocol of choice for applications requiring reliability, such as web browsing, email, and file transfer. TCP also features flow control and congestion control algorithms, which dynamically adjust the rate of data transmission based on network capacity and receiver readiness.
User Datagram Protocol (UDP)
The User Datagram Protocol (UDP) is a connectionless Layer 4 protocol for lightweight, low-latency transmissions. Unlike TCP, UDP does not establish a persistent connection or provide guarantees for delivery, ordering, or error correction beyond basic checksums. This makes it suitable for applications that prioritize speed over reliability, such as real-time video streaming, VoIP, or online gaming, where occasional data loss is acceptable and retransmissions could cause unacceptable delays.
UDP’s low overhead derives from its minimal protocol structure, consisting of headers for source and destination ports, length, and checksum. Because it lacks flow control and acknowledgment mechanisms, UDP allows applications direct and unfettered access to the network.
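The four fields of the 8-byte UDP header can be packed and unpacked in a few lines of Python; the port numbers and payload length here are illustrative:

```python
import struct

# UDP header layout: source port, destination port, length, checksum,
# each a 16-bit big-endian field -- 8 bytes in total.
payload_len = 4
header = struct.pack("!HHHH", 51234, 53, 8 + payload_len, 0)

src, dst, length, checksum = struct.unpack("!HHHH", header)
assert (src, dst, length) == (51234, 53, 12)
assert len(header) == 8    # the entire protocol overhead per datagram
```

Eight bytes of overhead per datagram, versus a minimum of 20 for TCP, is a large part of why UDP is the default choice for latency-sensitive traffic.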
Stream Control Transmission Protocol (SCTP)
The Stream Control Transmission Protocol (SCTP) merges concepts from both TCP and UDP to address the needs of telecommunication networks and newer applications. It is connection-oriented like TCP but supports multi-streaming, which enables the transmission of multiple, independent data streams within a single session. This reduces the risk of head-of-line blocking, where a single lost segment in one stream delays others.
SCTP also introduces enhanced reliability features, including built-in support for message boundaries and multi-homing, where a single SCTP session can span multiple IP addresses for redundancy and failover. These capabilities make SCTP particularly beneficial in signaling transport for telecommunication networks and applications demanding high reliability.
Datagram Congestion Control Protocol (DCCP)
The Datagram Congestion Control Protocol (DCCP) addresses a gap between TCP and UDP by providing a connection-oriented protocol with congestion control, but without TCP’s strict ordering or guaranteed delivery. DCCP is optimized for applications like streaming media or voice, where data timeliness is more important than perfect reliability.
DCCP’s core advantage is its modular congestion control, which can be tuned to application requirements. By providing facilities for quickly detecting and responding to network congestion, DCCP ensures fair use of bandwidth without overwhelming links or creating excessive packet loss.
The Interaction Between Layer 3 and Layer 4
The transport layer (Layer 4) and the network layer (Layer 3) operate closely together to enable reliable data delivery across interconnected networks. While Layer 3 is responsible for routing packets from the source to the destination across potentially multiple networks, Layer 4 ensures that once those packets reach the correct host, they are delivered accurately and in order to the correct process or application.
Layer 4 relies on Layer 3 for addressing and routing. The network layer provides logical addressing using IP addresses and determines the best path for data through the network. The transport layer then adds port numbers to distinguish between concurrent communications on the same device. Together, IP addresses and port numbers form a socket pair (source IP, source port, destination IP, destination port), uniquely identifying each communication session.
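A small Python sketch of why the 4-tuple matters: two sessions from the same client to the same service differ only in their ephemeral source port, yet they are entirely distinct connections. The addresses are documentation-range examples.

```python
from collections import namedtuple

# A socket pair uniquely identifies one transport-layer session.
SocketPair = namedtuple("SocketPair", "src_ip src_port dst_ip dst_port")

a = SocketPair("192.0.2.10", 51234, "203.0.113.5", 443)
b = SocketPair("192.0.2.10", 51235, "203.0.113.5", 443)  # new ephemeral port

# Same hosts, same service -- but two distinct sessions:
assert a != b
sessions = {a: "browser tab 1", b: "browser tab 2"}
assert len(sessions) == 2
```

This is exactly the key a stateful firewall or NAT device uses to track each flow independently.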
When sending data, the transport layer passes segments to the network layer, which encapsulates them into packets and handles their routing through the network. On receipt, the network layer removes its headers and forwards the payload to the transport layer, which verifies integrity, reorders out-of-sequence data, and delivers it to the appropriate application.
This clear separation of duties allows each layer to focus on different responsibilities (routing for Layer 3 and reliability and session management for Layer 4) while maintaining modularity in network design.
Layer 4 and Load Balancing
Layer 4 load balancing distributes traffic based on the information available at the transport level: IP addresses from the network-layer header and port numbers from the transport-layer header. Load balancers operating at Layer 4 can direct incoming network connections to different backend servers by inspecting this information, ensuring even distribution of client workloads and maximizing resource utilization without needing to parse application data.
The advantage of Layer 4 load balancing lies in its speed and efficiency. By managing network connections at the transport level, load balancers act quickly to establish, relay, or terminate sessions without introducing the latency that deeper packet inspection brings. This enables the implementation of scalable, high-availability architectures where server resources are used optimally and downtime is minimized.
Common Attack Vectors at Layer 4 and How to Mitigate Them
SYN Flood / TCP SYN DoS
SYN flood attacks target the TCP three-way handshake by sending large volumes of SYN packets without completing the connection process. Each SYN packet causes the target to allocate memory and tracking state for a half-open connection. When these incomplete connections accumulate faster than they can be cleared, the server’s connection queue fills up. Legitimate clients are then unable to establish new connections because the server cannot accept additional handshake requests.
How to mitigate:
- Enable SYN cookies to avoid allocating state until the handshake completes
- Increase backlog queue sizes and tune TCP timeout values
- Apply rate limiting on incoming SYN packets
- Use load balancers or DDoS protection services that absorb handshake floods
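The SYN-cookie idea from the first bullet can be sketched in Python: the server derives its initial sequence number from the connection 4-tuple, a coarse timestamp, and a secret, so it stores no state for half-open connections. This is a simplification; real SYN cookies (as implemented in e.g. Linux) also encode the negotiated MSS, and the secret and counter names here are assumptions.

```python
import hashlib
import time

SECRET = b"rotate-me"   # assumption: per-server secret, rotated regularly

def syn_cookie(src_ip, src_port, dst_ip, dst_port, counter=None):
    """Derive the server's initial sequence number from the 4-tuple,
    a slowly incrementing counter, and a secret -- no state stored."""
    if counter is None:
        counter = int(time.time()) // 64          # ~64-second validity window
    material = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{counter}".encode()
    digest = hashlib.sha256(material + SECRET).digest()
    return int.from_bytes(digest[:4], "big")

# On the final ACK, the server recomputes the cookie and compares it with
# (acknowledgment number - 1); a match proves a real handshake completed.
c = syn_cookie("192.0.2.10", 51234, "203.0.113.5", 80, counter=1000)
assert c == syn_cookie("192.0.2.10", 51234, "203.0.113.5", 80, counter=1000)
assert c != syn_cookie("192.0.2.10", 51235, "203.0.113.5", 80, counter=1000)
```

Because the cookie is recomputable, a flood of spoofed SYNs consumes no connection-table memory at all.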
ACK Floods
ACK floods overwhelm a target by sending large volumes of TCP ACK packets that appear to belong to established sessions. Stateful firewalls and servers must inspect these packets and attempt to match them against existing connection state. This forces excessive CPU and memory usage even though no valid data transfer is occurring. The attack degrades performance by exhausting processing capacity rather than connection slots.
How to mitigate:
- Implement stateful firewall rules that validate ACK packet legitimacy
- Drop unsolicited ACK packets that do not match known sessions
- Apply rate limiting for TCP packets with the ACK flag set
- Offload connection tracking to specialized network appliances
UDP Flood
UDP floods exploit the lack of session setup in the UDP protocol by sending high volumes of packets to random or targeted ports. Each packet must be processed individually, often triggering error responses or application-level handling. This consumes CPU resources and saturates network bandwidth. Services relying on UDP are particularly vulnerable because traffic cannot be throttled based on connection state.
How to mitigate:
- Filter unnecessary UDP traffic at the network edge
- Rate-limit UDP packets per source or destination port
- Disable unused UDP services and ports
- Use traffic scrubbing or DDoS mitigation platforms for volumetric attacks
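Per-source rate limiting, as in the second bullet, is commonly built from a token bucket. A minimal Python sketch with an injectable clock so the behavior shown is deterministic; the rate and burst values are illustrative:

```python
import time

class TokenBucket:
    """Admit up to `rate` packets/sec with bursts of `burst` -- a common
    building block for per-source UDP rate limiting."""
    def __init__(self, rate, burst, now=None):
        self.rate = rate
        self.burst = float(burst)
        self.tokens = float(burst)
        self.last = now if now is not None else time.monotonic()

    def allow(self, now=None):
        now = now if now is not None else time.monotonic()
        # Refill tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False          # over quota: drop the packet

bucket = TokenBucket(rate=2, burst=3, now=100.0)
burst = [bucket.allow(now=100.0) for _ in range(5)]
assert burst == [True, True, True, False, False]   # burst spent, excess dropped
assert bucket.allow(now=101.0)                     # tokens refill over time
```

In practice one bucket is kept per source IP (or per destination port), so a flooding source throttles only itself.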
Connection Table Exhaustion
Connection table exhaustion occurs when a firewall, load balancer, or server runs out of space to track active network sessions. Attackers generate large numbers of new or incomplete connections that rapidly fill the table. Once the limit is reached, new legitimate connections are dropped regardless of intent. This attack affects both TCP and UDP when stateful inspection is enabled.
How to mitigate:
- Increase connection table size where hardware allows
- Reduce idle and half-open connection timeouts
- Enforce per-IP or per-subnet connection limits
- Monitor connection table utilization and trigger automated defenses
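The per-IP connection limit from the third bullet can be sketched as a simple admission counter in Python; the limit and addresses are illustrative:

```python
from collections import Counter

MAX_PER_IP = 3   # assumption: tune this limit to your environment

active = Counter()

def try_open(src_ip):
    """Admit a new connection only if the per-source limit allows it."""
    if active[src_ip] >= MAX_PER_IP:
        return False          # drop: this source already holds its quota
    active[src_ip] += 1
    return True

def close_conn(src_ip):
    active[src_ip] = max(0, active[src_ip] - 1)

attacker = [try_open("198.51.100.7") for _ in range(5)]
assert attacker == [True, True, True, False, False]
assert try_open("192.0.2.10")       # other sources are unaffected
```

One greedy source can no longer consume the whole table; it exhausts only its own quota.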
Transport Reconnaissance
Transport reconnaissance focuses on mapping exposed services by probing Layer 4 behavior. Attackers send carefully crafted TCP or UDP packets to observe responses such as resets, acknowledgments, or silence. These responses reveal which ports are open, filtered, or closed, and may expose operating system characteristics. The collected data is used to identify attack surfaces and prioritize exploits.
How to mitigate:
- Block unnecessary ports and services at the firewall
- Use port-knocking or service authentication mechanisms
- Limit response information returned by network devices
- Monitor and alert on anomalous scanning patterns
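A naive scan detector for the last bullet: alert when one source probes many distinct ports. The threshold and addresses are invented for the example; production tools like Zeek or Suricata apply far more robust heuristics (time windows, closed-port ratios, whitelists):

```python
from collections import defaultdict

SCAN_THRESHOLD = 10   # assumption: distinct ports per source before alerting

ports_seen = defaultdict(set)

def observe(src_ip, dst_port):
    """Flag a source that touches many distinct ports -- a classic scan signal."""
    ports_seen[src_ip].add(dst_port)
    return len(ports_seen[src_ip]) >= SCAN_THRESHOLD

# A source sweeping 15 consecutive ports trips the alert from port 10 onward.
alerts = [observe("203.0.113.9", p) for p in range(20, 35)]
assert alerts.count(True) == 6
```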
Related content: Read our guide to OSI layers attacks (coming soon)
Best Practices for Working with Layer 4
Here are some important practices to keep in mind when setting up the transport layer.
1. Understand Protocol Behavior Under Load
Understanding how Layer 4 protocols behave under high network load is essential for maintaining performance and reliability. TCP dynamically adjusts its transmission rate using congestion control algorithms such as Reno, Cubic, or BBR. Under heavy traffic, these algorithms reduce the sending rate to prevent packet loss and network collapse, but excessive tuning or poor configuration can cause throughput degradation.
UDP offers no built-in congestion control: applications using it must implement their own rate-limiting or packet pacing to avoid overwhelming the network. Testing and observing protocol behavior in controlled load scenarios help identify bottlenecks and optimize configurations. Network engineers should use tools like iPerf or Wireshark to analyze retransmission rates, latency variations, and window scaling behavior.
2. Tune TCP Parameters for Optimal Performance
TCP’s performance depends heavily on tuning system and protocol parameters to match network conditions. The most influential settings include the TCP window size, buffer limits, and congestion control algorithm. Increasing the window size allows more data to be sent before requiring an acknowledgment, improving throughput over high-latency or high-bandwidth links. However, this must be balanced against memory availability and receiver capabilities.
Advanced tuning involves enabling selective acknowledgments (SACK), window scaling, and adjusting retransmission timeouts (RTO) for specific environments. For data centers, low latency and small buffers are optimal; for long-distance WAN links, larger windows and adaptive congestion control improve efficiency.
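Buffer sizing can also be requested per socket from the application. A short Python sketch; note the kernel is free to clamp the requested value (and Linux typically doubles it for bookkeeping), so the effective size is read back rather than assumed:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Request a ~1 MiB receive buffer; the kernel may clamp or double this.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

assert effective > 0
print(f"effective receive buffer: {effective} bytes")
sock.close()
```

System-wide equivalents live in OS tunables (e.g. `net.ipv4.tcp_rmem` on Linux), which set the defaults and ceilings these per-socket requests operate within.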
3. Use Proper Load Balancing Strategies
Effective Layer 4 load balancing requires selecting an approach that matches the application’s performance and reliability needs. Common strategies include round-robin, least-connections, and IP-hash methods. Round-robin distributes requests evenly across servers, while least-connections routes new sessions to the server with the fewest active connections. IP-hash ensures session persistence by directing clients with the same IP to the same backend, which is useful for stateful applications.
Administrators should monitor backend server health and use connection draining during maintenance to prevent disruption. For scalable architectures, load balancers should also support TCP connection reuse and SYN proxying to mitigate attacks. Combining Layer 4 load balancing with Layer 7 (application-level) routing can further optimize traffic distribution while minimizing latency and resource consumption.
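The three strategies above can be sketched in a few lines of Python; the backend addresses are invented, and a real load balancer would also track backend health and drain connections:

```python
import hashlib
import itertools

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round-robin: cycle through the backends in a fixed order.
rr = itertools.cycle(SERVERS)

# Least-connections: pick the backend with the fewest active sessions.
def least_connections(active):
    return min(SERVERS, key=lambda s: active.get(s, 0))

# IP-hash: the same client IP always maps to the same backend (stickiness).
def ip_hash(client_ip):
    digest = hashlib.sha256(client_ip.encode()).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

assert [next(rr) for _ in range(4)] == ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.1"]
assert least_connections({"10.0.0.1": 5, "10.0.0.2": 1, "10.0.0.3": 9}) == "10.0.0.2"
assert ip_hash("192.0.2.10") == ip_hash("192.0.2.10")   # session persistence
```

Note the trade-off visible even here: IP-hash buys stickiness at the cost of potentially uneven load, which round-robin and least-connections avoid.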
4. Monitor and Analyze Port Activity for Behavioral Anomalies
Monitoring port activity at Layer 4 helps detect unusual behavior that could signal security threats, misconfigurations, or performance issues. Unexpected spikes in traffic to uncommon ports, unusually high numbers of connection attempts, or changes in typical port usage patterns can indicate reconnaissance efforts, compromised systems, or malfunctioning applications. Regularly logging and analyzing port-level traffic—both inbound and outbound—enables early detection of anomalies before they escalate into incidents.
Tools like Zeek, Suricata, or commercial firewalls with deep transport-layer visibility can flag suspicious behaviors such as port scans, lateral movement attempts, or unauthorized services becoming active. Enforcing strict firewall rules, limiting open ports to only required services, and setting up alerts for abnormal connection patterns are foundational practices for maintaining transport-layer security and operational hygiene.
5. Leverage Layer 4 Metrics for Network Optimization
Layer 4 generates valuable metrics that reveal the health and performance of a network. Key indicators include round-trip time (RTT), retransmission rates, congestion window size, and segment loss. By analyzing these metrics, administrators can identify issues such as excessive latency, bandwidth saturation, or faulty links.
Integrating these measurements into monitoring platforms like Prometheus, Grafana, or NetFlow collectors provides visibility into end-to-end performance trends. Metrics-driven adjustments, such as modifying buffer sizes or congestion algorithms, enable continuous optimization.
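RTT can be approximated from userspace by timing a TCP handshake. A self-contained Python sketch that measures against a local listener; in practice you would point it at a real service endpoint:

```python
import socket
import time

def tcp_connect_rtt(addr, timeout=3.0):
    """Approximate RTT by timing how long the TCP handshake takes."""
    start = time.monotonic()
    with socket.create_connection(addr, timeout=timeout):
        pass
    return time.monotonic() - start

# Local listener so the example is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

rtt = tcp_connect_rtt(server.getsockname())
assert 0.0 <= rtt < 3.0      # loopback handshakes complete almost instantly
server.close()
```

This measures connection setup time rather than steady-state RTT; kernel counters (e.g. retransmission statistics) and packet captures remain the authoritative sources for the other metrics listed above.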
Network Security with Exabeam
A security operations platform strengthens network security, specifically focusing on OSI Layer 4, the transport layer. The platform gathers and analyzes various data sources, including connection states, segment flow, and port activity. This comprehensive data collection offers insight into end-to-end communication between applications, including segmentation, reassembly, and flow control mechanisms. By establishing baselines for typical Layer 4 operations, the platform can pinpoint deviations that may signal a security breach or an attack vector.
When suspicious events emerge, such as uncharacteristic SYN floods, ACK floods, UDP floods, connection table exhaustion, or transport reconnaissance, the platform integrates these findings with broader security intelligence. This integration helps contextualize Layer 4 anomalies within the larger threat environment, enabling security teams to understand the potential ramifications and source of an attack. The system’s capability to monitor protocol behavior and application port usage across layers assists in linking malicious Layer 4 actions to particular users or devices.
Through advanced analytics and behavioral modeling, the platform facilitates the discovery of intricate Layer 4 attacks that might bypass conventional signature-based defenses. The objective is to provide security teams with practical insights to effectively investigate and react to threats. This strategy cultivates a more resilient security stance by addressing vulnerabilities and malicious activities specifically targeting the fundamental data transport and connection management mechanisms.
Learn More About Exabeam
Learn about the Exabeam platform and expand your knowledge of information security with our collection of white papers, podcasts, webinars, and more.