
Module 4

This document discusses transport layer services in computer networks. It covers key topics like connection-oriented and connectionless protocols, transport layer protocols, TCP and UDP, addressing using port numbers, encapsulation and demultiplexing, flow control, error control using sequence numbers, and sliding windows. The transport layer provides process-to-process communication between applications on different hosts using protocols like TCP and UDP.

Uploaded by

Tejaswini Ml

COMPUTER COMMUNICATION

NETWORKS

MODULE-4

MODULE-4

CO4: RECOGNIZE TRANSPORT LAYER SERVICES IN A COMPUTER COMMUNICATION NETWORK
Transport Layer: Introduction: Transport Layer Services, Connectionless and
Connection-Oriented Protocols, Transport Layer Protocols: Simple Protocol, Stop-and-Wait
Protocol, Go-Back-N Protocol, Selective-Repeat Protocol, Piggybacking.
Transport-Layer Protocols in the Internet: User Datagram Protocol: User Datagram,
UDP Services, UDP Applications, Transmission Control Protocol: TCP Services, TCP
Features, Segment, Connection, State Transition Diagram, Windows in TCP, Error Control,
TCP Congestion Control.

Total lecture hours-08


INTRODUCTION
• The transport layer is located between the application layer and the network layer.
• It provides a process-to-process communication between two application layers,
one at the local host and the other at the remote host.

Transport-Layer Services
• The transport layer is located between the network layer and the application layer.
• It is responsible for providing services to the application layer; it receives services
from the network layer.

Process-to-Process Communication
• The first duty of a transport-layer protocol is to provide process-to-process communication.
• A process is an application-layer entity (running program) that uses the services of
the transport layer.

Addressing: Port Numbers
• Although there are a few ways to achieve process-to-process communication,
the most common is through the client-server paradigm.
• A process on the local host, called a client, needs services from a process,
called a server, usually on the remote host. Operating systems today support both
multiuser and multiprogramming environments.

ICANN Ranges
ICANN has divided the port numbers into three ranges: well-known, registered,
and dynamic (or private).
❑ Well-known ports. The ports ranging from 0 to 1023 are assigned and controlled
by ICANN. These are the well-known ports.
❑ Registered ports. The ports ranging from 1024 to 49,151 are not assigned or
controlled by ICANN. They can only be registered with ICANN to prevent
duplication.
❑Dynamic ports. The ports ranging from 49,152 to 65,535 are neither
controlled nor registered. They can be used as temporary or private port numbers.
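As a quick illustration, the three ICANN ranges can be checked with a small helper (the function name `classify_port` is ours, not part of any standard API):

```python
def classify_port(port):
    """Classify a 16-bit port number into the three ICANN ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("a port number must fit in 16 bits")
    if port <= 1023:
        return "well-known"   # assigned and controlled by ICANN
    if port <= 49151:
        return "registered"   # can be registered to prevent duplication
    return "dynamic"          # temporary or private port numbers
```

For example, HTTP's port 80 falls in the well-known range, while an ephemeral client port such as 50,000 is dynamic.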

Socket Addresses
• A transport-layer protocol in the TCP/IP suite needs both the IP address and the port
number, at each end, to make a connection.
• The combination of an IP address and a port number is called a socket address.
• The client socket address defines the client process uniquely just as the server
socket address defines the server process uniquely (see Figure 23.6).
• To use the services of the transport layer in the Internet, we need a pair of socket
addresses: the client socket address and the server socket address.
• These four pieces of information are part of the network-layer packet header and
the transport-layer packet header.
• The first header contains the IP addresses; the second header contains the port
numbers.
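A minimal sketch of these four pieces of information, with hypothetical addresses chosen only for illustration:

```python
# A socket address is the combination of an IP address and a port number.
client_socket_addr = ("192.168.1.10", 52000)  # client side: dynamic (ephemeral) port
server_socket_addr = ("203.0.113.5", 80)      # server side: well-known port

# The pair of socket addresses identifies the communication uniquely;
# the IP addresses travel in the network-layer header, the port numbers
# in the transport-layer header.
connection = (client_socket_addr, server_socket_addr)
```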

Encapsulation and Decapsulation
• To send a message from one process to another, the transport-layer protocol
encapsulates and decapsulates messages (Figure 23.7).
• Encapsulation happens at the sender site.
• When a process has a message to send, it passes the message to the transport layer
along with a pair of socket addresses and some other pieces of information, which
depend on the transport-layer protocol.
• The transport layer receives the data and adds the transport-layer header. The packets
at the transport layer in the Internet are called user datagrams, segments, or packets,
depending on what transport-layer protocol we use.

Multiplexing and Demultiplexing
• Whenever an entity accepts items from more than one source, this is referred to as
multiplexing (many to one); whenever an entity delivers items to more than one
destination, this is referred to as demultiplexing (one to many).
• The transport layer at the source performs multiplexing; the transport layer at the
destination performs demultiplexing (Figure 23.8).
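Demultiplexing at the destination can be pictured as a lookup from destination port to the waiting process; the port-to-process table below is hypothetical:

```python
# Hypothetical table of processes listening on ports at the destination host.
listeners = {53: "dns_server", 80: "http_server"}

def demultiplex(dest_port):
    """Deliver arriving data to the process bound to dest_port (one to many)."""
    return listeners.get(dest_port, "no listener: discard")
```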

Flow Control
• Whenever an entity produces items and another entity consumes them, there
should be a balance between production and consumption rates.
• If the items are produced faster than they can be consumed, the consumer can be
overwhelmed and may need to discard some items.
• If the items are produced more slowly than they can be consumed, the consumer
must wait, and the system becomes less efficient. Flow control is related to the
first issue. We need to prevent losing the data items at the consumer site.

Pushing or Pulling
• Delivery of items from a producer to a consumer can occur in one of two ways:
pushing or pulling.
• If the sender delivers items whenever they are produced, without a prior request
from the consumer, the delivery is referred to as pushing.
• If the producer delivers the items after the consumer has requested them, the
delivery is referred to as pulling.

Flow Control at Transport Layer
• In communication at the transport layer, we are dealing with four entities: sender
process, sender transport layer, receiver transport layer, and receiver process.
• The sending process at the application layer is only a producer. It produces message
chunks and pushes them to the transport layer.
• The sending transport layer has a double role: it is both a consumer and a producer.
It consumes the messages pushed by the producer.
• It encapsulates the messages in packets and pushes them to the receiving transport
layer.

Buffers

• Although flow control can be implemented in several ways, one of the solutions
is normally to use two buffers: one at the sending transport layer and the other at
the receiving transport layer.
• A buffer is a set of memory locations that can hold packets at the sender and
receiver.
• When the buffer of the sending transport layer is full, it informs the application
layer to stop passing chunks of messages; when there are some vacancies, it
informs the application layer that it can pass message chunks again.
• When the buffer of the receiving transport layer is full, it informs the sending
transport layer to stop sending packets.

Error Control
• In the Internet, since the underlying network layer (IP) is unreliable, we need to
make the transport layer reliable if the application requires reliability.
• Reliability can be achieved by adding error-control services to the transport layer.
Error control at the transport layer is responsible for
1. Detecting and discarding corrupted packets.
2. Keeping track of lost and discarded packets and resending them.
3. Recognizing duplicate packets and discarding them.
4. Buffering out-of-order packets until the missing packets arrive.

Sequence Numbers
• Error control requires that the sending transport layer knows which packet is to be
resent and the receiving transport layer knows which packet is a duplicate, or which
packet has arrived out of order. This can be done if the packets are numbered.
• We can add a field to the transport-layer packet to hold the sequence number of the
packet.
• When a packet is corrupted or lost, the receiving transport layer can somehow
inform the sending transport layer to resend that packet using the sequence number.

Combination of Flow and Error Control
• The error control requires the use of sequence and acknowledgment numbers by both
sides.
• These two requirements can be combined if we use two numbered buffers, one at the
sender, one at the receiver.
• At the sender, when a packet is prepared to be sent, we use the number of the next free
location, x, in the buffer as the sequence number of the packet.
• When the packet is sent, a copy is stored at memory location x, awaiting the
acknowledgment from the other end.
• When an acknowledgment related to a sent packet arrives, the packet is purged and the
memory location becomes free.
• At the receiver, when a packet with sequence number y arrives, it is stored at the
memory location y until the application layer is ready to receive it.
• An acknowledgment can be sent to announce the arrival of packet y.

Sliding Window
• Since the sequence numbers use modulo 2^m arithmetic, a circle can represent the
sequence numbers from 0 to 2^m − 1 (Figure 23.12).
• The buffer is represented as a set of slices, called the sliding window, that occupies
part of the circle at any time.
• At the sender site, when a packet is sent, the corresponding slice is marked. When
all the slices are marked, it means that the buffer is full and no further messages can
be accepted from the application layer.
• When an acknowledgment arrives, the corresponding slice is unmarked.
• If some consecutive slices from the beginning of the window are unmarked, the
window slides over the range of the corresponding sequence numbers to allow more
free slices at the end of the window.
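The marking and sliding described above can be sketched as a toy class (our own construction, not from the text) that tracks marked slices over the circle of 2^m sequence numbers:

```python
class SlidingWindow:
    """Toy sender window over a circle of 2**m sequence numbers.
    A marked slice holds a packet that was sent but not yet acknowledged."""
    def __init__(self, m):
        self.modulus = 2 ** m
        self.front = 0          # oldest outstanding slice of the window
        self.next = 0           # next sequence number to assign
        self.marked = set()

    def send(self):
        seq = self.next
        self.marked.add(seq)                       # mark the slice
        self.next = (self.next + 1) % self.modulus
        return seq

    def acknowledge(self, seq):
        self.marked.discard(seq)                   # unmark the slice
        # Slide over consecutive unmarked slices at the front of the window.
        while self.front != self.next and self.front not in self.marked:
            self.front = (self.front + 1) % self.modulus
```

Acknowledging the oldest outstanding packet slides the window forward; acknowledging a later packet first leaves the front in place until the gap closes.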

Connectionless Service
• In a connectionless service, the source process (application program) needs to divide
its message into chunks of data of the size acceptable by the transport layer and
deliver them to the transport layer one by one.
• The transport layer treats each chunk as a single unit without any relation between the
chunks.

Connection-Oriented Service
• In a connection-oriented service, the client and the server first need to establish a
logical connection between themselves.
• The data exchange can only happen after the connection establishment. After data
exchange, the connection needs to be torn down (Figure 23.15).
• As we mentioned before, the connection-oriented service at the transport layer is
different from the same service at the network layer. In the network layer,
connection-oriented service means a coordination between the two end hosts and all
the routers in between.
• At the transport layer, connection-oriented service involves only the two hosts; the
service is end to end.
• This means that we should be able to make a connection-oriented protocol at the
transport layer over either a connectionless or connection-oriented protocol at the
network layer.
• Flow control, error control, and congestion control can be implemented using a
connection-oriented protocol.

Finite State Machine
• The behavior of a transport-layer protocol, both when it provides a connectionless
and when it provides a connection-oriented protocol, can be better shown as a finite
state machine (FSM).

TRANSPORT-LAYER PROTOCOLS
• The TCP/IP protocol suite uses transport-layer protocols that are either a modification
or a combination of some of these protocols.
• We first discuss all of these protocols as unidirectional (i.e., simplex) protocols in
which the data packets move in one direction. At the end of the chapter, we briefly
discuss how they can be changed to bidirectional protocols where data can be
moved in two directions (i.e., full duplex).

Simple Protocol
• Our first protocol is a simple connectionless protocol with neither flow nor error
control.
• We assume that the receiver can immediately handle any packet it receives.
• In other words, the receiver can never be overwhelmed with incoming packets.
Figure 23.17 shows the layout for this protocol.

Stop-and-Wait Protocol
• Our second protocol is a connection-oriented protocol called the Stop-and-Wait
protocol, which uses both flow and error control.
• Both the sender and the receiver use a sliding window of size 1.
• The sender sends one packet at a time and waits for an acknowledgment before
sending the next one.
• To detect corrupted packets, we need to add a checksum to each data packet.
When a packet arrives at the receiver site, it is checked.
• If its checksum is incorrect, the packet is corrupted and silently discarded.
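The receiver's check can be sketched as follows; `checksum16` is a generic 16-bit one's-complement sum, one plausible choice for the checksum the text mentions rather than a prescribed algorithm:

```python
def checksum16(data: bytes) -> int:
    """16-bit one's-complement checksum over the packet bytes."""
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

def receiver_accepts(packet: bytes, chksum: int) -> bool:
    """Return False (silently discard) when the checksum does not verify."""
    return checksum16(packet) == chksum
```

A packet whose contents were altered in transit fails the check and is dropped without any response, which is what forces the Stop-and-Wait sender's timer to expire and trigger a resend.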

Pipelining
• In networking and in other areas, a task is often begun before the previous task
has ended. This is known as pipelining.
• There is no pipelining in the Stop-and-Wait protocol because a sender must
wait for a packet to reach the destination and be acknowledged before the next
packet can be sent.
• However, pipelining does apply to our next two protocols because several
packets can be sent before a sender receives feedback about the previous
packets.
• Pipelining improves the efficiency of the transmission if the number of bits in
transit is large with respect to the bandwidth-delay product.

Go-Back-N Protocol (GBN)
• To improve the efficiency of transmission (to fill the pipe), multiple packets
must be in transit while the sender is waiting for acknowledgment.
• In other words, we need to let more than one packet be outstanding to keep the
channel busy while the sender is waiting for acknowledgment.
• In this section, we discuss one protocol that can achieve this goal; in the next
section, we discuss a second. The first is called Go-Back-N (GBN) (the rationale
for the name will become clear later).
• The key to Go-Back-N is that we can send several packets before receiving
acknowledgments, but the receiver can only buffer one packet.
• We keep a copy of the sent packets until the acknowledgments arrive.
• Figure 23.23 shows the outline of the protocol. Note that several data packets
and acknowledgments can be in the channel at the same time.

The send window can slide one or more slots when an error-free ACK
with ackNo greater than or equal to Sf and less than Sn (in modular
arithmetic) arrives.
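The modular-arithmetic condition Sf ≤ ackNo < Sn can be sketched as a small predicate (parameter names follow the text; the default modulus is only an example):

```python
def window_can_slide(ack_no, s_f, s_n, modulus=2**4):
    """True when an error-free ACK with Sf <= ackNo < Sn (in modular
    arithmetic) arrives, so the Go-Back-N send window may slide."""
    return (ack_no - s_f) % modulus < (s_n - s_f) % modulus
```

With modulus 16, a window with Sf = 14 and Sn = 2 wraps around the circle, and ACKs 14, 15, 0, and 1 all allow sliding, while 2 and anything outside the window do not.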

The receive window is an abstract concept defining an imaginary
box of size 1 with a single variable Rn. The window slides
when a correct packet has arrived; sliding occurs one slot at a
time.

Selective-Repeat Protocol
• The Go-Back-N protocol simplifies the process at the receiver.
• The receiver keeps track of only one variable, and there is no need to buffer
out-of-order packets; they are simply discarded.
• However, this protocol is inefficient if the underlying network protocol loses a
lot of packets.
• Each time a single packet is lost or corrupted, the sender resends all outstanding
packets, even though some of these packets may have been received safe and
sound but out of order.
• If the network layer is losing many packets because of congestion in the
network, the resending of all of these outstanding packets makes the congestion
worse, and eventually more packets are lost.
• This has an avalanche effect that may result in the total collapse of the network.
• Another protocol, called the Selective-Repeat (SR) protocol, has been devised
which, as the name implies, resends only selected packets, those that are
actually lost.
• The outline of this protocol is shown in Figure 23.31.
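The receiver-side difference from Go-Back-N can be sketched with a toy class (our own simplification: sequence numbers grow without wraparound, and only buffering and in-order delivery are modeled):

```python
class SRReceiver:
    """Toy Selective-Repeat receiver: out-of-order packets are buffered
    instead of discarded; in-order data is delivered to the application."""
    def __init__(self):
        self.rn = 0          # next expected sequence number
        self.buffer = {}     # out-of-order packets keyed by sequence number

    def receive(self, seq, data):
        delivered = []
        if seq == self.rn:
            delivered.append(data)
            self.rn += 1
            while self.rn in self.buffer:       # flush packets now in order
                delivered.append(self.buffer.pop(self.rn))
                self.rn += 1
        elif seq > self.rn:
            self.buffer[seq] = data             # buffer an out-of-order packet
        return delivered                        # what goes up to the process
```

Packet 1 arriving before packet 0 is held; when packet 0 finally arrives, both are delivered together, so only packet 0 would ever need resending.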

USER DATAGRAM PROTOCOL
• The User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol.
• It does not add anything to the services of IP except for providing process-to-process
communication instead of host-to-host communication.
• If UDP is so powerless, why would a process want to use it? With the disadvantages
come some advantages.
• UDP is a very simple protocol using a minimum of overhead. If a process wants to
send a small message and does not care much about reliability, it can use UDP.
Sending a small message using UDP takes much less interaction between the sender
and receiver than using TCP.

User Datagram
• UDP packets, called user datagrams, have a fixed-size header of 8 bytes made of
four fields, each of 2 bytes (16 bits). Figure 24.2 shows the format of a user
datagram.
• The first two fields define the source and destination port numbers.
• The third field defines the total length of the user datagram, header plus data.
• The 16 bits can define a total length of 0 to 65,535 bytes.
• However, the total length needs to be less because a UDP user datagram is stored
in an IP datagram with the total length of 65,535 bytes.
• The last field can carry the optional checksum.
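The fixed 8-byte header can be packed and unpacked with Python's `struct` module; the sample field values below are arbitrary:

```python
import struct

def parse_udp_header(datagram: bytes):
    """Unpack the four 16-bit fields of the fixed 8-byte UDP header."""
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum}

# A sample datagram: 8-byte header (checksum left at 0 here) plus 4 data
# bytes, so the total-length field is 12.
sample = struct.pack("!HHHH", 53, 52000, 12, 0) + b"data"
```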

UDP Services
• Flow Control
• Error Control
• Congestion Control
• Encapsulation and Decapsulation
• Queuing
• Multiplexing and Demultiplexing

UDP Applications
• Although UDP meets almost none of the criteria we mentioned earlier for a reliable
transport-layer protocol, UDP is preferable for some applications.
• The reason is that some services may have some side effects that are either
unacceptable or not preferable.
• An application designer sometimes needs to compromise to get the optimum.
UDP Features

• Connectionless Service
• Lack of Error Control
• Lack of Congestion Control
• Full-Duplex Communication

TRANSMISSION CONTROL PROTOCOL
• Transmission Control Protocol (TCP) is a connection-oriented, reliable protocol.
• TCP explicitly defines connection establishment, data transfer, and connection
teardown phases to provide a connection-oriented service.
• TCP uses a combination of GBN and SR protocols to provide reliability.
• To achieve this goal, TCP uses a checksum (for error detection), retransmission of
lost or corrupted packets, cumulative and selective acknowledgments, and timers.
TCP is the most common transport-layer protocol in the Internet.

TCP Services

• Process-to-Process Communication
• Stream Delivery Service
• Sending and Receiving Buffers
• Segments
• Multiplexing and Demultiplexing

 TCP Features
 Numbering System
• Although the TCP software keeps track of the segments being transmitted or
received, there is no field for a segment number value in the segment header.
• Instead, there are two fields, called the sequence number and the acknowledgment
number.
• These two fields refer to a byte number and not a segment number.
 Byte Number
• TCP numbers all data bytes (octets) that are transmitted in a connection.
• Numbering is independent in each direction.
• When TCP receives bytes of data from a process, TCP stores them in the sending
buffer and numbers them.
• The numbering does not necessarily start from 0.

The bytes of data being transferred in each connection are numbered by TCP.
The numbering starts with an arbitrarily generated number.

 Sequence Number
After the bytes have been numbered, TCP assigns a sequence number to each
segment that is being sent.
The sequence number, in each direction, is defined as follows:
1. The sequence number of the first segment is the ISN (initial sequence number),
which is a random number.
2. The sequence number of any other segment is the sequence number of the previous
segment plus the number of bytes (real or imaginary) carried by the previous
segment. Later, we show that some control segments are thought of as carrying
one imaginary byte.

The value in the sequence number field of a segment defines the number
assigned to the first data byte contained in that segment.
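The rule can be sketched numerically (the ISN and segment sizes below are arbitrary examples):

```python
def segment_sequence_numbers(isn, segment_sizes):
    """Sequence number of each segment = ISN plus the bytes carried
    by all earlier segments."""
    seqs, seq = [], isn
    for size in segment_sizes:
        seqs.append(seq)   # number assigned to the segment's first data byte
        seq += size
    return seqs
```

A connection with ISN 10001 sending three segments of 1000, 1000, and 500 bytes numbers them 10001, 11001, and 12001.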

Segment
• A packet in TCP is called a segment.
• The segment consists of a header of 20 to 60 bytes, followed by data from the
application program.
• The header is 20 bytes if there are no options and up to 60 bytes if it contains
options.

The use of the checksum in TCP is mandatory.
A TCP Connection
 Connection Establishment
• TCP transmits data in full-duplex mode.
• When two TCPs in two machines are connected, they are able to send segments to
each other simultaneously. This implies that each party must initialize communication
and get approval from the other party before any data are transferred.

 Data Transfer

 Connection Termination

 State Transition Diagram
• To keep track of all the different events happening during connection
establishment, connection termination, and data transfer, TCP is specified as a
finite state machine (FSM).

The state marked ESTABLISHED in the FSM is in fact two different sets of states
that the client and server undergo to transfer data.
 Windows in TCP
• TCP uses two windows (send window and receive window) for each direction of
data transfer, which means four windows for a bidirectional communication.
• To make the discussion simple, we make an unrealistic assumption that
communication is only unidirectional (say from client to server); the bidirectional
communication can be inferred using two unidirectional communications with
piggybacking.

 Send Window
• The window size is 100 bytes, but later we see that the send window size is dictated
by the receiver (flow control) and the congestion in the underlying network
(congestion control).
• The figure shows how a send window opens, closes, or shrinks.

 Receive Window

Nagle’s algorithm is simple:
1. The sending TCP sends the first piece of data it receives from the sending
application program even if it is only 1 byte.
2. After sending the first segment, the sending TCP accumulates data in the output
buffer and waits until either the receiving TCP sends an acknowledgment or enough
data have accumulated to fill a maximum-size segment. At this time, the sending
TCP can send the segment.
3. Step 2 is repeated for the rest of the transmission: segment 3 is sent immediately
if an acknowledgment is received for segment 2, or if enough data have accumulated
to fill a maximum-size segment.
✓The elegance of Nagle’s algorithm is in its simplicity and in the fact that it
considers the speed of the application program that creates the data and the speed of
the network that transports the data.
✓If the application program is faster than the network, the segments are
larger (maximum-size segments).
✓ If the application program is slower than the network, the segments are smaller (less
than the maximum segment size).
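The three steps above can be sketched as a toy sender (our own simplification: transmission is simulated by appending to a list, and only the ACK-or-full-MSS conditions are modeled):

```python
class NagleSender:
    """Toy model of Nagle's algorithm: the first piece of data goes out
    immediately; afterwards data accumulates until an ACK arrives or a
    full maximum-size segment (MSS) is buffered."""
    def __init__(self, mss):
        self.mss = mss
        self.buffer = b""
        self.awaiting_ack = False
        self.sent = []            # segments "transmitted" so far

    def _try_send(self):
        # Send while allowed: no ACK outstanding, or a full MSS is ready.
        while self.buffer and (not self.awaiting_ack
                               or len(self.buffer) >= self.mss):
            take = min(self.mss, len(self.buffer))
            self.sent.append(self.buffer[:take])
            self.buffer = self.buffer[take:]
            self.awaiting_ack = True

    def write(self, data):        # data handed down by the application
        self.buffer += data
        self._try_send()

    def ack(self):                # acknowledgment from the receiving TCP
        self.awaiting_ack = False
        self._try_send()
```

A fast application fills the buffer to a full MSS before any ACK returns, so it sends maximum-size segments; a slow application's small writes go out as soon as each ACK arrives.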

Some Scenarios:
• In these scenarios, a segment is shown by a rectangle.
• If the segment carries data, we show the range of byte numbers and the value of the
acknowledgment field.
• If it carries only an acknowledgment, we show only the acknowledgment number in a
smaller box.

1. Normal Operation
2. Lost Segment
3. Fast Retransmission
4. Delayed Segment
5. Duplicate Segment
6. Automatically Corrected Lost ACK
7. Lost Acknowledgment Corrected by Resending a Segment
8. Deadlock Created by Lost Acknowledgment

TCP Congestion Control

• Congestion Window
• Congestion Detection
• Congestion Policies
• Congestion Avoidance

Policy Transition
The three versions of TCP: Tahoe TCP, Reno TCP, and NewReno TCP.

1. Tahoe TCP

2. Reno TCP

3. NewReno TCP
➢ A later version of TCP, called NewReno TCP, made an extra optimization to
Reno TCP.
➢ In this version, when three duplicate ACKs arrive, TCP checks to see whether
more than one segment is lost in the current window.
➢ When TCP receives three duplicate ACKs, it retransmits the lost segment and
waits until a new ACK (not a duplicate) arrives.
➢ If the new ACK defines the end of the window when the congestion was
detected, TCP is certain that only one segment was lost.
➢ However, if the ACK number defines a position between the retransmitted segment
and the end of the window, it is possible that the segment defined by the ACK is
also lost.
➢ NewReno TCP retransmits this segment to avoid receiving more and more duplicate
ACKs for it.

Additive Increase, Multiplicative Decrease

TCP Throughput
➢ The throughput for TCP, which is based on the congestion window behaviour, can
be easily found if the cwnd is a constant (flat-line) function of RTT.
➢ The throughput with this unrealistic assumption is throughput = cwnd / RTT. In
this assumption, TCP sends cwnd bytes of data and receives acknowledgments for
them in one RTT.
➢ The actual behaviour of TCP is not a flat line; it is like saw teeth, with many
minimum and maximum values.
➢ If each tooth were exactly the same, we could say that
throughput = [(maximum + minimum) / 2] / RTT.
➢ However, we know that the value of the maximum is twice the value of the
minimum, because at each congestion detection the value of cwnd is set to half of
its previous value.
➢ So the throughput can be better calculated as
throughput = (0.75) Wmax / RTT
in which Wmax is the average of the window sizes when congestion occurs.
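The final formula is simple enough to check numerically:

```python
def tcp_throughput(w_max, rtt):
    """Sawtooth approximation: cwnd oscillates between Wmax/2 and Wmax,
    so the average window is (Wmax + Wmax/2) / 2 = 0.75 * Wmax."""
    return 0.75 * w_max / rtt
```

With Wmax = 100,000 bytes and an RTT of 0.5 s, the estimated throughput is 150,000 bytes/s.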

Thank You

