Part 4-Transport Layer
1. Introduction & Physical Layer - Introduction to the Internet - Services and Protocols - Edge -
Protocol Layers and Service Models - OSI and TCP/IP models.
2. Data link Layer - Link Layer – Services - Error Detection and Correction; Multiple Access
protocols Channel partitioning - Random access - Taking-Turns protocols - Switched LANs ARP - Ethernet
- Link layer switching – VLANs – MPLS.
3. Network Layer - Data plane forwarding vs. Control plane routing - Software Defined Networking
(SDN) approach - Network Services - Router architecture - Switching fabrics - Input and output queueing-
Core, Packet Switching vs. Circuit Switching - Performance Metrics Delay - Loss – Throughput - IPv4 and
IPv6 addressing DHCP -NAT - IPv4 and IPv6 fragmentation – SDN-based generalized forwarding -
Routing and Supporting Algorithms - Link State vs. Distance Vector - RIP - OSPF – BGP – ICMP - SNMP
- SDN Control Plane.
4. Transport Layer - Unreliable Connectionless vs. Reliable Connection-Oriented Services -
Multiplexing; Stop-and-Wait - Go-Back-N and Selective-Repeat - UDP vs. TCP - Flow and Congestion
Control.
5. Application Layer - Client-Server and Peer-to-Peer architectures - Application Layer protocols
6. Introduction to Wireless and Mobile Networks - Link characteristics - CDMA - 802.11 WiFi -
Bluetooth and Zigbee - Cellular Networks - GSM – UMTS – LTE - Mobility management and handoff -
Mobile IP.
Part-4 (Lecture Flow & Book Details)
Overview of the transport layer protocol in the Internet: James Kurose and Keith Ross, “Computer
Networking: A Top-down Approach” 6th edition, Addison Wesley 2010. Chapter-3- Only 3.1.2.
Multiplexing and Demultiplexing: James Kurose and Keith Ross, “Computer Networking: A Top-down
Approach” 6th edition, Addison Wesley 2010. Chapter-3- Only 3.2(Excluding connectionless and
connection-oriented Multiplexing and De-Multiplexing).
Connectionless transport: UDP: James Kurose and Keith Ross, “Computer Networking: A Top-down
Approach” 6th edition, Addison Wesley 2010. Chapter-3- Only 3.3- Only from Pg. No.: 198 to 200.
Connection-oriented transport: TCP: James Kurose and Keith Ross, “Computer Networking: A Top-down
Approach” 6th edition, Addison Wesley 2010. Chapter-3- Only 3.5, 3.5.1, 3.5.2 and 3.5.6.
TCP congestion control: James Kurose and Keith Ross, “Computer Networking: A Top-down Approach” 6th
edition, Addison Wesley 2010. Chapter-3- Only 3.7.
Flow Control- Stop & Wait, Go Back N and Selective Repeat: Behrouz Forouzan, “Data Communication
and Networking”, Tata McGraw Hill, 3rd edition. Chapter-11- only 11.1, 11.2, 11.3 and 11.4.
Overview of the transport layer protocol in the Internet:
UDP (User Datagram Protocol):
• Unreliable, connectionless service
• Extends IP’s delivery service between two end systems to a delivery service between two
processes running on the end systems
• Provides integrity checking by including an error-detection field (the checksum) in its segment headers
• Sends data at any rate the application desires, since UDP has no congestion control
No connection state:
• UDP does not maintain connection state and does not track any of these parameters. A server devoted to a particular application can typically support many more active clients when the application runs over UDP.
• TCP, in contrast, maintains connection state in the end systems. This connection state includes receive and send buffers, congestion-control parameters, and sequence and acknowledgment number parameters. This state information is needed to implement TCP’s reliable data transfer service and to provide congestion control.
Connectionless transport: UDP
Small packet header overhead.
• The TCP segment has 20 bytes of header overhead in every segment, whereas UDP has only 8
bytes of overhead.
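The error detection that both UDP and TCP headers carry is the 16-bit Internet checksum. As a minimal sketch (this is the standard one's-complement algorithm, not code from the textbook):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum used by UDP/TCP for error detection."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF            # one's complement of the sum
```

The receiver recomputes the sum over the segment including the checksum field; an error-free segment yields all 1s.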
Connection-oriented transport: TCP
• TCP is the Internet’s connection-oriented, reliable transport-layer protocol
The TCP Connection:
• TCP is said to be connection-oriented because before one application process can begin to send data
to another, the two processes must first “handshake” with each other
• Both sides of the connection will initialize many TCP state variables associated with the TCP
connection.
• The TCP “connection” is not an end-to-end TDM or FDM circuit as in a circuit switched network.
• Instead the “connection” is a logical one with common state residing only in the TCPs in the two
communicating end systems.
• The TCP protocol runs only in the end systems and not in the intermediate network elements (routers and link-layer switches); the intermediate network elements do not maintain TCP connection state.
• A TCP connection provides a full-duplex service: if there is a TCP connection between Process A on one host and Process B on another host, then application-layer data can flow from Process A to Process B at the same time as application-layer data flows from Process B to Process A.
The TCP Connection:
• A TCP connection is always point-to-point i.e. between a single sender and a single receiver.
• Suppose a process running in one host wants to initiate a connection with another process in
another host.
• Process that is initiating the connection is called the client process, while the other process is
called the server process.
• The client application process first informs the client transport layer that it wants to establish a
connection to a process in the server.
• clientSocket.connect((serverName,serverPort))
Where serverName is the name of the server and serverPort identifies the process on the server.
• TCP in the client then proceeds to establish a TCP connection with TCP in the server.
• The client first sends a special TCP segment; the server responds with a second special TCP
segment and finally the client responds again with a third special segment.
• The first two segments carry no payload, i.e. no application-layer data; the third of these
segments may carry a payload. Because three segments are sent between the two hosts, this
connection-establishment procedure is often referred to as a three-way handshake.
The TCP Connection:
• Once a TCP connection is established, the two application processes can send data to each
other.
• Let’s consider sending data from the client process to the server process.
• The client process passes a stream of data through the socket (the “door” between the process and TCP).
• Once the data passes through this door, it is in the hands of the TCP running in the client.
• TCP directs this data to the connection’s send buffer, which is one of the buffers that is set
aside during the initial three-way handshake. From time to time, TCP will grab chunks of data
from the send buffer and pass the data to the network layer.
The TCP Connection:
• The maximum amount of data that can be grabbed and placed in a segment is limited by the
maximum segment size (MSS).
• The MSS is set by first determining the length of the largest link-layer frame that can be sent
by the local sending host (maximum transmission unit, MTU) and then setting the MSS to
ensure that a TCP segment (when encapsulated in an IP datagram) plus the TCP/IP header
length (typically 40 bytes) will fit into a single link-layer frame.
• Both Ethernet and PPP link-layer protocols have an MTU of 1,500 bytes.
• Typical value of MSS is 1460 bytes.
• MSS is the maximum amount of application-layer data in the segment, not the maximum size of the TCP segment including headers.
• TCP pairs each chunk of client data with a TCP header thereby forming TCP segments.
• The segments are passed down to the network layer where they are separately encapsulated
within network-layer IP datagrams. The IP datagrams are then sent into the network.
• When TCP receives a segment at the other end, the segment’s data is placed in the TCP connection’s receive buffer.
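The MSS computation and segmentation described above can be sketched as follows; the MTU and header sizes are the typical values quoted in the text:

```python
MTU = 1500                  # largest link-layer payload (Ethernet and PPP)
TCPIP_HEADERS = 40          # typical TCP header (20) + IP header (20)
MSS = MTU - TCPIP_HEADERS   # 1460 bytes of application data per segment

def segment(stream: bytes, mss: int = MSS) -> list:
    """Split an application byte stream into chunks of at most `mss` bytes."""
    return [stream[i:i + mss] for i in range(0, len(stream), mss)]

# A 3,000-byte stream becomes three segments of 1460, 1460 and 80 bytes:
chunks = segment(b"x" * 3000)
```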
The TCP Connection:
• The application reads the stream of data from this buffer.
• Each side of the connection has its own send buffer and its own receive buffer.
• TCP connection consists of buffers, variables and a socket connection to a process in one host
and another set of buffers, variables and a socket connection to a process in another host.
• No buffers or variables are allocated to the connection in the network elements (routers,
switches and repeaters) between the hosts.
TCP Segment Structure:
• The TCP segment consists of header fields and a data field.
• The TCP header is typically 20 bytes (12 bytes more than the UDP header).
• As with UDP, the header includes source and destination port numbers, which are used for
multiplexing / demultiplexing data from/to upper-layer applications.
• As with UDP, the header includes a checksum field.
• A TCP segment header also contains the following fields:
• The 32-bit sequence number field and the 32-bit acknowledgment number field are used by
the TCP sender and receiver in implementing a reliable data transfer service
TCP Segment Structure:
• The 16-bit receive window field is used for flow control; it indicates the number of bytes that a receiver is willing to accept.
• The 4-bit header length field specifies the length of the TCP header in 32-bit words.
• The TCP header can be of variable length due to the TCP options field.
• The optional and variable-length options field is used when a sender and receiver negotiate the maximum segment size (MSS).
TCP Segment Structure:
• The flag field contains 6 bits.
• The ACK bit is used to indicate that the value carried in the acknowledgment field is valid
• The RST, SYN and FIN bits are used for connection setup and teardown
• The CWR and ECE bits are used in explicit congestion notification.
• Setting the PSH bit indicates that the receiver should pass the data to the upper layer
immediately.
• The URG bit is used to indicate that there is data in this segment that the sending-side upper
layer entity has marked as “urgent.”
• The location of the last byte of this urgent data is indicated by the 16-bit urgent data pointer
field.
• TCP must inform the receiving-side upper-layer entity when urgent data exists and pass it a
pointer to the end of the urgent data. (In practice, the PSH, URG, and the urgent data pointer
are not used)
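The fixed 20-byte header laid out above can be packed with Python's struct module. This is an illustrative sketch, not a working TCP stack; the field order follows the description above, and the flag-bit positions are the standard RFC 793 values (the CWR and ECE bits are omitted here):

```python
import struct

def pack_tcp_header(src_port, dst_port, seq, ack, flags, window,
                    checksum=0, urg_ptr=0):
    """Pack the fixed 20-byte TCP header (no options)."""
    offset = 5                              # header length: 5 x 32-bit words
    offset_flags = (offset << 12) | (flags & 0x3F)   # 6 flag bits
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, checksum, urg_ptr)

# Flag-bit positions: FIN=0x01, SYN=0x02, RST=0x04, PSH=0x08, ACK=0x10, URG=0x20
SYN = 0x02
hdr = pack_tcp_header(12345, 80, seq=0, ack=0, flags=SYN, window=65535)
```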
Sequence Numbers and Acknowledgment Numbers:
• Two of the most important fields in the TCP segment header are the sequence number field
and the acknowledgment number field.
• The sequence number for a segment is the byte-stream number of the first byte in the
segment.
Sequence Numbers and Acknowledgment Numbers:
• Suppose that the data stream consists of a file of 500,000 bytes, that the MSS is 1,000 bytes, and that the first byte of the data stream is numbered 0.
• TCP implicitly numbers each byte in the stream, so it constructs 500 segments whose sequence numbers are 0, 1,000, 2,000, and so on.
• Recall that TCP is full-duplex, so that Host A may be receiving data from Host B while it
sends data to Host B
• Each of the segments that arrive from Host B has a sequence number for the data flowing
from B to A.
• The acknowledgment number that Host A puts in its segment is the sequence number of the
next byte Host A is expecting from Host B.
• Suppose that Host A has received all bytes numbered 0 through 535 from B and suppose that it
is about to send a segment to Host B. Host A is waiting for byte 536 and all the subsequent
bytes in Host B’s data stream. So Host A puts 536 in the acknowledgment number field of the
segment it sends to B.
• Suppose that Host A has received one segment from Host B containing bytes 0 through 535
and another segment containing bytes 900 through 1,000.
• For some reason Host A has not yet received bytes 536 through 899.
Sequence Numbers and Acknowledgment Numbers:
• Host A is still waiting for byte 536 (and beyond) in order to re-create B’s data stream.
• A’s next segment to B will contain 536 in the acknowledgment number field.
• Because TCP only acknowledges bytes up to the first missing byte in the stream, TCP is said
to provide cumulative acknowledgments.
• Host A received the third segment (bytes 900 through 1,000) before receiving the second
segment (bytes 536 through 899).
• Thus, the third segment arrived out of order.
• There are basically two choices: either
(1) the receiver immediately discards out-of-order segments, or
(2) the receiver keeps the out-of-order bytes and waits for the missing bytes to fill in the gaps.
• In this example the initial sequence number was assumed to be zero; in practice, each side of a TCP connection randomly chooses an initial sequence number.
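The cumulative-acknowledgment rule in the example above can be traced with a small sketch; the function name and the byte-range representation are illustrative:

```python
def next_expected(received_ranges, isn=0):
    """Return the sequence number of the first byte not yet received.

    received_ranges: list of (first_byte, last_byte) pairs already held,
    possibly with gaps (out-of-order segments that were buffered).
    """
    expected = isn
    for first, last in sorted(received_ranges):
        if first > expected:            # gap found: ACK up to the missing byte
            break
        expected = max(expected, last + 1)
    return expected

# Host A holds bytes 0-535 and 900-1000 but is missing 536-899,
# so its next ACK to B carries 536:
ack = next_expected([(0, 535), (900, 1000)])
```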
TCP Connection Management:
• Suppose a process running in one host (client) wants to initiate a connection with another
process in another host (server).
• The client application process first informs the client TCP that it wants to establish a
connection to a process in the server.
• The TCP in the client then proceeds to establish a TCP connection with the TCP in the server
in the following manner:
Step 1. The client-side TCP first sends a special TCP segment to the server-side TCP.
• This special segment contains no application-layer data. But one of the flag bits in the
segment’s header, the SYN bit is set to 1. This special segment is referred to as a SYN
segment.
• The client randomly chooses an initial sequence number (client_isn) and puts this number in
the sequence number field of the initial TCP SYN segment.
• This segment is encapsulated within an IP datagram and sent to the server.
TCP Connection Management:
Step 2. Once the IP datagram containing the TCP SYN segment arrives at the server host, the server
extracts the TCP SYN segment from the datagram, allocates the TCP buffers and variables to the
connection and sends a connection-granted segment to the client TCP.
• This connection-granted segment also contains no application-layer data.
• It contains three important pieces of information in the segment header.
First, the SYN bit is set to 1.
Second, the acknowledgment field of the TCP segment header is set to client_isn+1.
Third, the server chooses its own initial sequence number (server_isn) and puts this value in the sequence number field of the TCP segment header.
• The connection-granted segment is referred to as a SYNACK segment.
Step 3. Upon receiving the SYNACK segment, the client also allocates buffers and variables to the
connection.
• The client host then sends the server yet another segment; this last segment acknowledges the server’s
connection-granted segment
• The SYN bit is set to zero, since the connection is established.
• This third stage of the three-way handshake may carry client-to server data in the segment payload.
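The sequence and acknowledgment numbers exchanged in the three steps above can be traced with a toy sketch; no real sockets are involved, and the dictionaries just stand in for segment headers:

```python
import random

# Each side randomly chooses its initial sequence number:
client_isn = random.randrange(2**32)
server_isn = random.randrange(2**32)

# Step 1: client sends a SYN segment carrying client_isn.
syn = {"SYN": 1, "seq": client_isn}

# Step 2: server replies with a SYNACK acknowledging client_isn + 1
# and carrying its own initial sequence number.
synack = {"SYN": 1, "seq": server_isn, "ack": client_isn + 1}

# Step 3: client acknowledges server_isn + 1; SYN is now 0 and this
# segment may already carry application data.
ack = {"SYN": 0, "seq": client_isn + 1, "ack": server_isn + 1}
```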
TCP Connection Management:
TCP Connection Management (TCP states visited by Client & Server TCP):
TCP Congestion control:
• TCP provides a reliable transport service between two processes running on different hosts.
• TCP also provides a congestion-control mechanism.
• TCP must use end-to-end congestion control rather than network- assisted congestion control,
since the IP layer provides no explicit feedback to the end systems regarding network
congestion.
• The approach taken by TCP is to have each sender limit the rate at which it sends traffic into
its connection as a function of perceived network congestion.
• If a TCP sender perceives that there is little congestion on the path between itself and the
destination, then the TCP sender increases its send rate
• If the sender perceives that there is congestion along the path, then the sender reduces its send
rate.
• But this approach raises three questions.
• First, how does a TCP sender limit the rate at which it sends traffic into its connection?
Second, how does a TCP sender perceive that there is congestion on the path between itself
and the destination?
• Third, what algorithm should the sender use to change its send rate as a function of perceived end-to-end congestion?
TCP Congestion control:
• The TCP congestion-control mechanism operating at the sender keeps track of an additional
variable, the congestion window.
• The congestion window, denoted cwnd, imposes a constraint on the rate at which a TCP
sender can send traffic into the network.
• The amount of unacknowledged data at a sender may not exceed the minimum of cwnd and rwnd:
LastByteSent – LastByteAcked <= min{cwnd, rwnd}
• Assume that the TCP receive buffer is so large that the receive-window constraint can be
ignored; thus the amount of unacknowledged data at the sender is solely limited by cwnd.
• Assume that the sender always has data to send, i.e., that all segments in the congestion
window are sent.
• The constraint above limits the amount of unacknowledged data at the sender and therefore
indirectly limits the sender’s send rate.
• Consider a connection for which loss and packet transmission delays are negligible.
• At the beginning of every RTT, the constraint permits the sender to send cwnd bytes of data
into the connection; at the end of the RTT the sender receives acknowledgments for the data.
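The sending constraint and the resulting rate can be expressed directly; the names are illustrative, not a real TCP implementation:

```python
def can_send(last_byte_sent, last_byte_acked, cwnd, rwnd):
    """True while the amount of unacknowledged data stays below min(cwnd, rwnd)."""
    return (last_byte_sent - last_byte_acked) < min(cwnd, rwnd)

def send_rate(cwnd_bytes, rtt_seconds):
    """Roughly cwnd bytes per round-trip time, in bytes/sec."""
    return cwnd_bytes / rtt_seconds
```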
TCP Congestion control:
• The sender’s send rate is roughly cwnd/RTT bytes/sec.
• Consider how a TCP sender perceives that there is congestion on the path between itself and
the destination.
• Let us define a “loss event” at a TCP sender as the occurrence of either a timeout or the receipt
of three duplicate ACKs from the receiver.
• When there is excessive congestion, then one (or more) router buffers along the path
overflows, causing a datagram (containing a TCP segment) to be dropped.
• The dropped datagram, in turn, results in a loss event at the sender—either a timeout or the
receipt of three duplicate ACKs—which is taken by the sender to be an indication of
congestion on the sender-to-receiver path.
• Let’s next consider when the network is congestion-free, that is, when a loss event doesn’t
occur.
• Acknowledgments for previously unacknowledged segments will be received at the TCP
sender.
• If acknowledgments arrive at a relatively slow rate (e.g., if the end-end path has high delay or
contains a low-bandwidth link), then the congestion window will be increased at a relatively
slow rate.
TCP Congestion control:
• If acknowledgments arrive at a high rate, then the congestion window will be increased more
quickly.
• Because TCP uses acknowledgments to trigger (or clock) its increase in congestion window
size, TCP is said to be self-clocking.
Principles of TCP:
1. A lost segment implies congestion and hence the TCP sender’s rate should be decreased
when a segment is lost.
2. An acknowledged segment indicates that the network is delivering the sender’s segments to
the receiver and hence the sender’s rate can be increased when an ACK arrives for a
previously unacknowledged segment.
3. Bandwidth probing: the sender increases its rate in response to arriving ACKs until a loss event occurs, backs off, and then begins probing again.
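Together these principles yield TCP's additive-increase, multiplicative-decrease behavior. A toy sketch, in which the event sequence and per-ACK growth step are illustrative:

```python
MSS = 1460  # typical maximum segment size, in bytes

def aimd(cwnd, events):
    """Adjust cwnd for a sequence of events: 'ack' grows it, 'loss' halves it."""
    for e in events:
        if e == "loss":
            cwnd = max(MSS, cwnd // 2)   # principle 1: cut the rate on loss
        else:
            cwnd += MSS                  # principles 2 and 3: grow and keep probing
    return cwnd

# Starting at 4 MSS: two ACKs grow cwnd, a loss halves it, an ACK grows it again.
final = aimd(cwnd=4 * MSS, events=["ack", "ack", "loss", "ack"])
```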
STOP AND WAIT ARQ:
Four scenarios:
1. Normal operation
2. Frame is lost
3. Acknowledgement is lost
4. Acknowledgement is delayed
Bidirectional transmission:
• Stop-and-Wait ARQ provides unidirectional data transfer
• Bidirectional transmission is possible using full-duplex or half-duplex mode
• Each sender or receiver needs both S and R variables to track the frames
Piggybacking:
• A method to combine a data frame with an acknowledgement frame
• Saves bandwidth
• For example, the I-frame of HDLC carries data and a piggybacked acknowledgment together
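The basic stop-and-wait behavior (send one frame, wait for its acknowledgement, resend on loss) can be sketched with a stubbed channel; the channel function here is an illustrative stand-in, not a real link:

```python
def stop_and_wait_send(frames, channel, max_tries=5):
    """Send frames one at a time, resending until each is acknowledged."""
    delivered = []
    for seq, frame in enumerate(frames):
        for attempt in range(max_tries):
            ack = channel(seq, frame)     # returns seq on success, None on loss
            if ack == seq:                # positive ACK received: next frame
                delivered.append(seq)
                break
        else:
            raise TimeoutError(f"frame {seq} never acknowledged")
    return delivered

# A lossy stub that drops every first transmission of each frame:
seen = set()
def lossy(seq, frame):
    if seq not in seen:
        seen.add(seq)
        return None                       # simulate a lost frame or lost ACK
    return seq

delivered = stop_and_wait_send([b"a", b"b"], lossy)
```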
STOP AND WAIT ARQ:
Disadvantages:
• The sender transmits only one outstanding frame at a time and waits for its acknowledgement, which results in low efficiency
• The transmission medium is not utilized properly
Acknowledgement:
• The receiver sends a positive acknowledgement when it receives a frame correctly
• For damaged or out-of-order frames, the receiver remains silent
• The receiver can send a cumulative acknowledgement
Resending frames:
• When a frame is damaged, the sender goes back and resends all frames, starting from the damaged one up to the last one sent
Go-Back-N ARQ:
Sender window size:
• The size of the sender window must be less than 2^m, where m is the number of bits in the sequence number field
Disadvantages:
• Inefficient for a noisy link: the damage is amplified because multiple frames are retransmitted for a single error
• More bandwidth is required, which slows down the transmission
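The window-size rule and the go-back resend behavior above can be expressed as two small helper sketches; the names are illustrative:

```python
def max_gbn_window(m_bits: int) -> int:
    """Largest valid Go-Back-N sender window: must be less than 2^m."""
    return 2**m_bits - 1

def resend_range(damaged: int, last_sent: int) -> list:
    """On damage, Go-Back-N resends every frame from the damaged one
    up to the last one sent."""
    return list(range(damaged, last_sent + 1))

# With 3 sequence-number bits the window may hold at most 7 frames,
# and damage to frame 4 after sending up to frame 7 forces a resend of 4-7.
```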