
Part 4-Transport Layer


SYLLABUS:

1. Introduction & Physical Layer - Introduction to the Internet - Services and Protocols, Edge - Protocol Layers and Service Models - OSI and TCP/IP models.
2. Data Link Layer - Link Layer - Services - Error Detection and Correction; Multiple Access protocols - Channel partitioning - Random access - Taking-turns protocols - Switched LANs - ARP - Ethernet - Link layer switching - VLANs - MPLS.
3. Network Layer - Data plane forwarding vs. Control plane routing - Software Defined Networking (SDN) approach - Network Services - Router architecture - Switching fabrics - Input and output queueing - Core: Packet Switching vs. Circuit Switching - Performance Metrics: Delay - Loss - Throughput - IPv4 and IPv6 addressing - DHCP - NAT - IPv4 and IPv6 fragmentation - SDN-based generalized forwarding - Routing and Supporting Algorithms - Link State vs. Distance Vector - RIP - OSPF - BGP - ICMP - SNMP - SDN Control Plane.
4. Transport Layer - Unreliable Connectionless vs. Reliable Connection-Oriented Services - Multiplexing - Stop-and-Wait - Go-Back-N and Selective-Repeat - UDP vs. TCP - Flow and Congestion Control.
5. Application Layer - Client-Server and Peer-to-Peer architectures - Application Layer protocols.
6. Introduction to Wireless and Mobile Networks - Link characteristics - CDMA - 802.11 WiFi - Bluetooth and Zigbee - Cellular Networks - GSM - UMTS - LTE - Mobility management and handoff - Mobile IP.
Part-4 (Lecture Flow & Book Details)

Overview of the transport layer protocol in the Internet: James Kurose and Keith Ross, “Computer Networking: A Top-Down Approach”, 6th edition, Addison-Wesley. Chapter 3, only 3.1.2.
Multiplexing and Demultiplexing: James Kurose and Keith Ross, “Computer Networking: A Top-Down Approach”, 6th edition, Addison-Wesley. Chapter 3, only 3.2 (excluding connectionless and connection-oriented multiplexing and demultiplexing).
Connectionless transport: UDP: James Kurose and Keith Ross, “Computer Networking: A Top-Down Approach”, 6th edition, Addison-Wesley. Chapter 3, only 3.3, pp. 198 to 200.
Connection-oriented transport: TCP: James Kurose and Keith Ross, “Computer Networking: A Top-Down Approach”, 6th edition, Addison-Wesley. Chapter 3, only 3.5, 3.5.1, 3.5.2 and 3.5.6.
TCP congestion control: James Kurose and Keith Ross, “Computer Networking: A Top-Down Approach”, 6th edition, Addison-Wesley. Chapter 3, only 3.7.
Flow Control - Stop & Wait, Go-Back-N and Selective Repeat: Behrouz Forouzan, “Data Communications and Networking”, 3rd edition, Tata McGraw-Hill. Chapter 11, only 11.1, 11.2, 11.3 and 11.4.
Overview of the transport layer protocol in the Internet:
UDP (User Datagram Protocol):
• Unreliable, connectionless service
• Extends IP’s delivery service between two end systems to a delivery service between two
processes running on the end systems
• Provides integrity checking by including error-detection fields in its segments’ headers
• Sends data at whatever rate the application desires (no congestion control)

TCP (Transmission Control Protocol):
• Reliable, connection-oriented service
• Extends IP’s delivery service between two end systems to a delivery service between two
processes running on the end systems
• Provides integrity checking by including error-detection fields in its segments’ headers
• Provides flow control
• Provides congestion control by regulating the rate at which the sending sides of TCP
connections can send traffic into the network
Multiplexing and Demultiplexing:

• Extending host-to-host delivery to process-to-process delivery is called transport-layer multiplexing and demultiplexing.
• The host-to-host delivery service provided by the network layer has to be changed to a process-to-process delivery service for the applications running on the hosts.
• When the transport layer in a host receives data from the network layer below, it needs to direct the received data to one of the application processes running on that host.
• A process can have one or more sockets, through which data passes from the network to the process and through which data passes from the process to the network.
• The transport layer in the receiving host does not actually deliver data directly to a process, but instead to an intermediary socket. Because at any given time there can be more than one socket in the receiving host, each socket has a unique identifier, whose format depends on whether it is a TCP or a UDP socket.
• The job of delivering the data in a transport-layer segment to the correct socket is called demultiplexing.
• The job of gathering data chunks at the source host from different sockets, encapsulating each data chunk with header information to create segments, and passing the segments to the network layer is called multiplexing.
Multiplexing and Demultiplexing:
• The transport layer in the sending host must gather outgoing data from its sockets, form
transport-layer segments, and pass these segments down to the network layer.
Port address:
• Transport-layer multiplexing requires that
(1) sockets have unique identifiers, and
(2) each segment has special fields that indicate the socket to which the segment is to be
delivered.
• These special fields are the source port number field and the destination port number
field.
• Each port number is a 16-bit number ranging from 0 to 65535.
• The port numbers ranging from 0 to 1023 are called well-known port numbers and are
restricted and they are reserved for use by application protocols such as
• HTTP (which uses port number 80)
• FTP (which uses port number 21)
• Each socket in the host can be assigned a port number, and when a segment arrives at the
host, the transport layer examines the destination port number in the segment and directs the
segment to the corresponding socket.
• The segment’s data then passes through the socket into the attached process.
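As a concrete illustration of port-based demultiplexing, the Python sketch below binds two UDP sockets to different ports on the same host; the loopback address and ports 9001/9002 are hypothetical values chosen for the example. The OS delivers each arriving segment to the socket whose bound port matches the segment’s destination port field:

# Minimal demultiplexing sketch (hypothetical ports 9001 and 9002).
# Each socket is identified by its bound port; an arriving segment is
# delivered to the socket matching the segment's destination port number.
import socket

sock_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_a.bind(("127.0.0.1", 9001))   # process/socket A listens on port 9001

sock_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_b.bind(("127.0.0.1", 9002))   # process/socket B listens on port 9002

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for A", ("127.0.0.1", 9001))  # destination port 9001 -> sock_a
sender.sendto(b"for B", ("127.0.0.1", 9002))  # destination port 9002 -> sock_b

print(sock_a.recvfrom(2048))  # (b'for A', (sender's address, source port))
print(sock_b.recvfrom(2048))  # (b'for B', ...)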
Connectionless transport: UDP
• UDP takes messages from the application process, attaches source and destination port number
fields for the multiplexing/demultiplexing service, adds two other small fields (length and
checksum), and passes the resulting segment to the network layer.
• The network layer encapsulates the transport-layer segment into an IP datagram and then
makes a best-effort attempt to deliver the segment to the receiving host.
• If the segment arrives at the receiving host, UDP uses the destination port number to deliver
the segment’s data to the correct application process.
• There is no handshaking between the sending and receiving transport-layer entities before a
segment is sent; for this reason, UDP is said to be connectionless.
• DNS is an example of an application-layer protocol that uses UDP.
• When the DNS application in a host wants to make a query, it constructs a DNS query message
and passes the message to UDP.
• Without performing any handshaking with the UDP entity running on the destination end
system, the host-side UDP adds header fields to the message and passes the resulting segment
to the network layer.
Connectionless transport: UDP
• The network layer encapsulates the UDP segment into a datagram and sends the datagram to a
name server. The DNS application at the querying host then waits for a reply to its query. If it
doesn’t receive a reply (possibly because the underlying network lost the query or the reply), it
might try resending the query, try sending the query to another name server, or inform the
invoking application that it can’t get a reply.
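A minimal sketch of this send-and-retry pattern in Python; the server address and the plain-text payload are placeholders for the example, not a real DNS message:

# Sketch of the UDP query/retry behaviour described above: send a
# datagram, wait for a reply, and resend on timeout.
import socket

SERVER = ("127.0.0.1", 9999)   # hypothetical name-server address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)           # how long to wait for a reply

for attempt in range(3):       # resend the query up to 3 times
    sock.sendto(b"query", SERVER)    # no handshake: just send the segment
    try:
        reply, addr = sock.recvfrom(2048)
        print("reply:", reply)
        break
    except socket.timeout:
        print("no reply, retrying...")
else:
    print("giving up: could not get a reply")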
Even though TCP provides reliable data transfer, UDP is preferred by many applications for the
following reasons:
Finer application-level control over what data is sent, and when:
UDP:
• As soon as an application process passes data to UDP, UDP packages the data inside a UDP segment and immediately passes the segment to the network layer.
• Real-time applications often require a minimum sending rate, do not want to overly delay segment transmission, and can tolerate some data loss.
TCP:
• TCP has a congestion-control mechanism that throttles the transport-layer TCP sender when one or more links between the source and destination hosts become excessively congested.
• TCP will continue to resend a segment until the receipt of the segment has been acknowledged by the destination, regardless of how long reliable delivery takes.
Connectionless transport: UDP
No connection establishment
UDP:
• UDP does not introduce any delay to establish a connection.
• DNS would be much slower if it ran over TCP.
• The QUIC protocol (Quick UDP Internet Connection), used in Google’s Chrome browser, uses UDP as its underlying transport protocol and implements reliability in an application-layer protocol on top of UDP.
TCP:
• TCP uses a three-way handshake before it starts to transfer data.
• HTTP uses TCP rather than UDP since reliability is critical for Web pages with text.
• The TCP connection-establishment delay in HTTP is an important contributor to the delays associated with downloading Web documents.

No connection state
UDP:
• UDP does not maintain connection state and does not track any such parameters. A server devoted to a particular application can typically support many more active clients when the application runs over UDP.
TCP:
• TCP maintains connection state in the end systems. This connection state includes receive and send buffers, congestion-control parameters, and sequence and acknowledgment number parameters. This state information is needed to implement TCP’s reliable data transfer service and to provide congestion control.
Connectionless transport: UDP
Small packet header overhead.
• The TCP segment has 20 bytes of header overhead in every segment, whereas UDP has only 8
bytes of overhead.
Connection-oriented transport: TCP
• TCP is the Internet’s connection-oriented, reliable transport-layer protocol.
The TCP Connection:
• TCP is said to be connection-oriented because before one application process can begin to send data
to another, the two processes must first “handshake” with each other
• Both sides of the connection will initialize many TCP state variables associated with the TCP
connection.
• The TCP “connection” is not an end-to-end TDM or FDM circuit as in a circuit switched network.
• Instead the “connection” is a logical one with common state residing only in the TCPs in the two
communicating end systems.
• The TCP protocol runs only in the end systems and not in the intermediate network elements (routers
and link-layer switches); the intermediate network elements do not maintain TCP connection state.
• A TCP connection provides a full-duplex service: if there is a TCP connection between Process A on
one host and Process B on another host, then application-layer data can flow from Process A to Process
B at the same time as application-layer data flows from Process B to Process A.
The TCP Connection:
• A TCP connection is always point-to-point, i.e., between a single sender and a single receiver.
• Suppose a process running in one host wants to initiate a connection with another process in
another host.
• Process that is initiating the connection is called the client process, while the other process is
called the server process.
• The client application process first informs the client transport layer that it wants to establish a
connection to a process in the server.
• clientSocket.connect((serverName,serverPort))
where serverName is the name of the server and serverPort identifies the process on the server.
• TCP in the client then proceeds to establish a TCP connection with TCP in the server.
• The client first sends a special TCP segment; the server responds with a second special TCP
segment; and finally the client responds again with a third special segment.
• The first two segments carry no payload, i.e., no application-layer data; the third of these
segments may carry a payload. Because three segments are sent between the two hosts, this
connection-establishment procedure is often referred to as a three-way handshake.
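A minimal client-side sketch in Python: the serverName and serverPort values are placeholders, and the connect() call is where the three-way handshake described above takes place:

# Sketch of the client side of connection establishment. connect()
# triggers the three-way handshake (SYN, SYNACK, ACK) before any
# application data is exchanged. Server name/port are placeholders.
import socket

serverName, serverPort = "example.com", 80   # hypothetical server

clientSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
clientSocket.connect((serverName, serverPort))  # three-way handshake happens here

clientSocket.sendall(b"hello")   # data flows only after the handshake
clientSocket.close()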
The TCP Connection:
• Once a TCP connection is established, the two application processes can send data to each
other.
• Let’s consider sending data from the client process to the server process.
• The client process passes a stream of data through the socket
• Once the data passes through the socket, it is in the hands of TCP running in the client.
• TCP directs this data to the connection’s send buffer, which is one of the buffers that is set
aside during the initial three-way handshake. From time to time, TCP will grab chunks of data
from the send buffer and pass the data to the network layer.
The TCP Connection:
• The maximum amount of data that can be grabbed and placed in a segment is limited by the
maximum segment size (MSS).
• The MSS is set by first determining the length of the largest link-layer frame that can be sent
by the local sending host (maximum transmission unit, MTU) and then setting the MSS to
ensure that a TCP segment (when encapsulated in an IP datagram) plus the TCP/IP header
length (typically 40 bytes) will fit into a single link-layer frame.
• Both Ethernet and PPP link-layer protocols have an MTU of 1,500 bytes.
• Typical value of MSS is 1460 bytes.
• The MSS is the maximum amount of application-layer data in the segment, not the maximum size
of the TCP segment including headers.
• TCP pairs each chunk of client data with a TCP header thereby forming TCP segments.
• The segments are passed down to the network layer where they are separately encapsulated
within network-layer IP datagrams. The IP datagrams are then sent into the network.
• When TCP receives a segment at the other end the segment’s data is placed in the TCP
connection’s receive buffer.
The TCP Connection:
• The application reads the stream of data from this buffer.
• Each side of the connection has its own send buffer and its own receive buffer.
• TCP connection consists of buffers, variables and a socket connection to a process in one host
and another set of buffers, variables and a socket connection to a process in another host.
• No buffers or variables are allocated to the connection in the network elements (routers,
switches and repeaters) between the hosts.
TCP Segment Structure:
• The TCP segment consists of header fields and a data field.
• The TCP header is typically 20 bytes (12 bytes more than the UDP header).
• As with UDP, the header includes source and destination port numbers, which are used for
multiplexing / demultiplexing data from/to upper-layer applications.
• As with UDP, the header includes a checksum field.
• A TCP segment header also contains the following fields:
• The 32-bit sequence number field and the 32-bit acknowledgment number field are used by
the TCP sender and receiver in implementing a reliable data transfer service
TCP Segment Structure:
• The 16-bit receive window field is used for flow control; it indicates the number of bytes that a receiver is willing to accept.
• The 4-bit header length field specifies the length of the TCP header in 32-bit words.
• The TCP header can be of variable length due to the TCP options field.
• The optional and variable-length options field is used when a sender and receiver negotiate the maximum segment size (MSS).
TCP Segment Structure:
• The flag field contains 8 bits.
• The ACK bit is used to indicate that the value carried in the acknowledgment field is valid
• The RST, SYN and FIN bits are used for connection setup and teardown
• The CWR and ECE bits are used in explicit congestion notification.
• Setting the PSH bit indicates that the receiver should pass the data to the upper layer
immediately.
• The URG bit is used to indicate that there is data in this segment that the sending-side upper
layer entity has marked as “urgent.”
• The location of the last byte of this urgent data is indicated by the 16-bit urgent data pointer
field.
• TCP must inform the receiving-side upper-layer entity when urgent data exists and pass it a
pointer to the end of the urgent data. (In practice, the PSH, URG, and the urgent data pointer
are not used)
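For illustration, the sketch below unpacks the fixed 20-byte TCP header layout described above using Python’s struct module; the all-zero byte string is a stand-in for a real captured segment:

# Sketch: unpack the fixed 20-byte TCP header. The byte string here is
# a fabricated placeholder, not a captured segment.
import struct

header = bytes(20)  # placeholder: 20 zero bytes standing in for a real header

(src_port, dst_port, seq_num, ack_num,
 offset_reserved, flags, recv_window,
 checksum, urgent_ptr) = struct.unpack("!HHLLBBHHH", header)

header_len = (offset_reserved >> 4) * 4   # 4-bit field, counted in 32-bit words
ack_bit = (flags >> 4) & 1                # ACK flag
syn_bit = (flags >> 1) & 1                # SYN flag
fin_bit = flags & 1                       # FIN flag
print(src_port, dst_port, seq_num, ack_num, header_len, recv_window)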
Sequence Numbers and Acknowledgment Numbers:

• Two of the most important fields in the TCP segment header are the sequence number field
and the acknowledgment number field.
• The sequence number for a segment is the byte-stream number of the first byte in the
segment.
Sequence Numbers and Acknowledgment Numbers:
• Suppose that the data stream consists of a file of 500,000 bytes, that the MSS is 1,000 bytes,
and that the first byte of the data stream is numbered 0. TCP then builds 500 segments out of
the data stream: the first segment is assigned sequence number 0, the second sequence number
1,000, the third sequence number 2,000, and so on.
• Recall that TCP is full-duplex, so that Host A may be receiving data from Host B while it
sends data to Host B
• Each of the segments that arrive from Host B has a sequence number for the data flowing
from B to A.
• The acknowledgment number that Host A puts in its segment is the sequence number of the
next byte Host A is expecting from Host B.
• Suppose that Host A has received all bytes numbered 0 through 535 from B and suppose that it
is about to send a segment to Host B. Host A is waiting for byte 536 and all the subsequent
bytes in Host B’s data stream. So Host A puts 536 in the acknowledgment number field of the
segment it sends to B.
• Suppose that Host A has received one segment from Host B containing bytes 0 through 535
and another segment containing bytes 900 through 1,000.
• For some reason Host A has not yet received bytes 536 through 899.
Sequence Numbers and Acknowledgment Numbers:
• Host A is still waiting for byte 536 (and beyond) in order to re-create B’s data stream.
• A’s next segment to B will contain 536 in the acknowledgment number field.
• Because TCP only acknowledges bytes up to the first missing byte in the stream, TCP is said
to provide cumulative acknowledgments.
• Host A received the third segment (bytes 900 through 1,000) before receiving the second
segment (bytes 536 through 899).
• Thus, the third segment arrived out of order.
• There are basically two choices: either
(1) the receiver immediately discards out-of-order segments, or
(2) the receiver keeps the out-of-order bytes and waits for the missing bytes to fill in the gaps.
• In this example the initial sequence number was zero; in practice, both sides of a TCP connection randomly choose an initial sequence number.
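The numbering and the cumulative-acknowledgment logic of this example can be checked with a short sketch (values taken from the example above):

# Worked version of the example: a 500,000-byte file, MSS = 1,000 bytes,
# initial sequence number 0. Each segment's sequence number is the
# byte-stream number of its first byte.
FILE_SIZE = 500_000
MSS = 1_000

seq_numbers = [seg * MSS for seg in range(FILE_SIZE // MSS)]
print(len(seq_numbers))   # 500 segments
print(seq_numbers[:3])    # [0, 1000, 2000]

# Cumulative ACK: bytes 0..535 and 900..1000 received, 536..899 missing.
# The receiver acknowledges 536, the first byte it is still waiting for.
received = set(range(0, 536)) | set(range(900, 1_001))
next_expected = 0
while next_expected in received:
    next_expected += 1
print(next_expected)      # 536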
TCP Connection Management:
• Suppose a process running in one host (client) wants to initiate a connection with another
process in another host (server).
• The client application process first informs the client TCP that it wants to establish a
connection to a process in the server.
• The TCP in the client then proceeds to establish a TCP connection with the TCP in the server
in the following manner:
Step 1. The client-side TCP first sends a special TCP segment to the server-side TCP.
• This special segment contains no application-layer data. But one of the flag bits in the
segment’s header, the SYN bit is set to 1. This special segment is referred to as a SYN
segment.
• The client randomly chooses an initial sequence number (client_isn) and puts this number in
the sequence number field of the initial TCP SYN segment.
• This segment is encapsulated within an IP datagram and sent to the server.
TCP Connection Management:
Step 2. Once the IP datagram containing the TCP SYN segment arrives at the server host, the server
extracts the TCP SYN segment from the datagram, allocates the TCP buffers and variables to the
connection and sends a connection-granted segment to the client TCP.
• This connection-granted segment also contains no application-layer data.
• It contains three important pieces of information in the segment header.
First, the SYN bit is set to 1.
Second, the acknowledgment field of the TCP segment header is set to client_isn+1.
Third, the server chooses its own initial sequence number (server_isn) and puts this value in the
sequence number field of the TCP segment header.
• The connection-granted segment is referred to as a SYNACK segment.
Step 3. Upon receiving the SYNACK segment, the client also allocates buffers and variables to the
connection.
• The client host then sends the server yet another segment; this last segment acknowledges the server’s
connection-granted segment
• The SYN bit is set to zero, since the connection is established.
• This third stage of the three-way handshake may carry client-to-server data in the segment payload.
TCP Connection Management (TCP states visited by the client and server TCP):
TCP Congestion control:
• TCP provides a reliable transport service between two processes running on different hosts.
• TCP also provides a congestion-control mechanism.
• TCP must use end-to-end congestion control rather than network-assisted congestion control,
since the IP layer provides no explicit feedback to the end systems regarding network
congestion.
• The approach taken by TCP is to have each sender limit the rate at which it sends traffic into
its connection as a function of perceived network congestion.
• If a TCP sender perceives that there is little congestion on the path between itself and the
destination, then the TCP sender increases its send rate
• If the sender perceives that there is congestion along the path, then the sender reduces its send
rate.
• But this approach raises three questions:
(1) How does a TCP sender limit the rate at which it sends traffic into its connection?
(2) How does a TCP sender perceive that there is congestion on the path between itself and the destination?
(3) What algorithm should the sender use to change its send rate as a function of perceived end-to-end congestion?
TCP Congestion control:
• The TCP congestion-control mechanism operating at the sender keeps track of an additional
variable, the congestion window.
• The congestion window, denoted cwnd, imposes a constraint on the rate at which a TCP
sender can send traffic into the network.
• The amount of unacknowledged data at a sender may not exceed the minimum of cwnd and rwnd:
LastByteSent − LastByteAcked ≤ min{cwnd, rwnd}
• Assume that the TCP receive buffer is so large that the receive-window constraint can be
ignored; thus the amount of unacknowledged data at the sender is solely limited by cwnd.
• Assume that the sender always has data to send, i.e., that all segments in the congestion
window are sent.
• The constraint above limits the amount of unacknowledged data at the sender and therefore
indirectly limits the sender’s send rate.
• Consider a connection for which loss and packet transmission delays are negligible.
• At the beginning of every RTT, the constraint permits the sender to send cwnd bytes of data
into the connection; at the end of the RTT the sender receives acknowledgments for the data.
TCP Congestion control:
• The sender’s send rate is roughly cwnd/RTT bytes/sec.
• Consider how a TCP sender perceives that there is congestion on the path between itself and
the destination.
• Let us define a “loss event” at a TCP sender as the occurrence of either a timeout or the receipt
of three duplicate ACKs from the receiver.
• When there is excessive congestion, then one (or more) router buffers along the path
overflows, causing a datagram (containing a TCP segment) to be dropped.
• The dropped datagram, in turn, results in a loss event at the sender—either a timeout or the
receipt of three duplicate ACKs—which is taken by the sender to be an indication of
congestion on the sender-to-receiver path.
• Let’s next consider when the network is congestion-free, that is, when a loss event doesn’t
occur.
• Acknowledgments for previously unacknowledged segments will be received at the TCP
sender.
• If acknowledgments arrive at a relatively slow rate (e.g., if the end-end path has high delay or
contains a low-bandwidth link), then the congestion window will be increased at a relatively
slow rate.
TCP Congestion control:
• If acknowledgments arrive at a high rate, then the congestion window will be increased more
quickly.
• Because TCP uses acknowledgments to trigger (or clock) its increase in congestion window
size, TCP is said to be self-clocking.
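For example (illustrative numbers, not from the text): with cwnd = 14,600 bytes (ten 1,460-byte segments) and RTT = 100 ms, the send rate is roughly 14,600 / 0.1 = 146,000 bytes/sec, or about 1.17 Mbps.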
Principles of TCP:
1. A lost segment implies congestion and hence the TCP sender’s rate should be decreased
when a segment is lost.
2. An acknowledged segment indicates that the network is delivering the sender’s segments to
the receiver and hence the sender’s rate can be increased when an ACK arrives for a
previously unacknowledged segment.
3. Bandwidth probing: the sender increases its rate as ACKs arrive, probing for additional usable
bandwidth, until a loss event occurs, at which point it backs off and begins probing again.

TCP congestion-control algorithm:
• Three major components:
(1) slow start
(2) congestion avoidance
(3) fast recovery
TCP Congestion control algorithms:
• Slow start and congestion avoidance are mandatory components of TCP, differing in how they
increase the size of cwnd in response to received ACKs.
• Fast recovery is recommended but not required for TCP senders.
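A simplified sketch of how these three components adjust cwnd, loosely following the behaviour described above (units are MSS-sized segments; a real TCP implementation tracks far more state, so treat this as illustrative only):

# Simplified sketch of TCP congestion control: slow start, congestion
# avoidance, and fast recovery. cwnd and ssthresh are in MSS units.
MSS = 1

class TcpSender:
    def __init__(self):
        self.cwnd = 1 * MSS        # congestion window
        self.ssthresh = 64 * MSS   # slow-start threshold
        self.state = "slow_start"

    def on_new_ack(self):
        if self.state == "slow_start":
            self.cwnd += MSS                    # exponential growth per ACK
            if self.cwnd >= self.ssthresh:
                self.state = "congestion_avoidance"
        elif self.state == "congestion_avoidance":
            self.cwnd += MSS * MSS / self.cwnd  # roughly +1 MSS per RTT
        else:                                   # fast recovery ends on a new ACK
            self.cwnd = self.ssthresh
            self.state = "congestion_avoidance"

    def on_triple_dup_ack(self):   # loss inferred, but network still delivering
        self.ssthresh = max(self.cwnd / 2, 2 * MSS)
        self.cwnd = self.ssthresh + 3 * MSS
        self.state = "fast_recovery"

    def on_timeout(self):          # severe congestion signal: restart slow start
        self.ssthresh = max(self.cwnd / 2, 2 * MSS)
        self.cwnd = 1 * MSS
        self.state = "slow_start"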

FLOW Control Algorithms:
• Stop & Wait ARQ
• Go-Back-N ARQ
• Selective Repeat ARQ
STOP AND WAIT ARQ:
• Simplest among all types
• Control variable of the sender: S (0 or 1)
• Control variable of the receiver: R (0 or 1); it is the sequence number of the next frame expected
• The sender transmits a frame and starts a timer
• If an acknowledgement is not received before the timer expires, the frame is retransmitted
• The sending device keeps a copy of the last frame transmitted until it receives its acknowledgement
• Keeping a copy allows retransmission of a lost or damaged frame
• If a data frame is numbered 0, the acknowledgement frame is numbered 1 (the number of the next
frame expected); this helps in identifying duplicate frames
STOP AND WAIT ARQ:
Functions of the receiver:
• The receiver sends only positive acknowledgements
• If the receiver detects an error, it discards the frame and does not send any acknowledgement
• If an out-of-order frame is received, it is discarded
• Frame 0 – ACK 1
• Frame 1 – ACK 0

Four situations are discussed:

1. Normal operation
2. Frame is lost
3. Acknowledgement is lost
4. Acknowledgement is delayed
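The sender-side behaviour covering these situations can be sketched in a few lines of Python over UDP; the receiver address, the one-byte sequence-number framing, and the timeout value are assumptions made for this example:

# Stop-and-wait sender sketch: send a frame with sequence number S (0 or 1),
# start a timer, retransmit on timeout, and flip S only once the matching
# acknowledgement arrives.
import socket

PEER = ("127.0.0.1", 9999)   # hypothetical receiver address
TIMEOUT = 1.0

def send_stop_and_wait(frames):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)
    s = 0                                   # sender control variable S
    for payload in frames:
        frame = bytes([s]) + payload        # keep a copy until it is ACKed
        while True:
            sock.sendto(frame, PEER)        # (re)transmit and start the timer
            try:
                ack, _ = sock.recvfrom(16)
                if ack[0] == 1 - s:         # frame 0 expects ACK 1, and vice versa
                    break                   # acknowledged: stop retransmitting
            except socket.timeout:
                pass                        # timer expired: loop and resend
        s = 1 - s                           # the next frame alternates 0/1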
STOP AND WAIT ARQ:

Bidirectional transmission:
• Stop-and-wait ARQ as described above is unidirectional data transfer
• Bidirectional transmission is possible using full-duplex or half-duplex mode
• Each sender or receiver then needs both S and R variables to track the frames

Piggybacking:
• A method to combine a data frame with an acknowledgement frame
• Saves bandwidth
• For example: the I-frame of HDLC
STOP AND WAIT ARQ:
Disadvantages:
• The sender transmits only one outstanding frame at a time and waits for its acknowledgement,
which results in low efficiency
• The transmission medium is not utilized properly

• To improve efficiency, multiple frames are transmitted; two ARQs are used:
1. Go-Back-N ARQ
2. Selective Repeat ARQ
Go-Back-N ARQ:
Features:
• Up to W frames can be transmitted before obtaining an acknowledgement
• A copy of every transmitted frame is kept until its acknowledgement is received
• The sending station’s frames are numbered sequentially
• The sequence number is added in the header
• Sequence numbers range from 0 to 2^m − 1, where m is the number of bits in the sequence number

Sender sliding window:
• Windowing is used
• All frames are stored in a buffer
• Outstanding frames are enclosed in a window
• Frames to the left of the window are already acknowledged
• Frames to the right of the window cannot be sent
• The size of the sender window is 2^m − 1
• The window size is fixed
• The window is a sliding window
Go-Back-N ARQ:
Receiver sliding window:
• The size of the receiver window is 1
• The receiver expects frames to arrive in order
• Out-of-order frames are discarded
• The window is a sliding window

Control variables:
W = S_L − S_F + 1
S = sequence number of the most recently sent frame
S_F = sequence number of the first frame in the window
S_L = sequence number of the last frame in the window
W = size of the window
R = sequence number of the next frame expected to be received
Go-Back-N ARQ:
Timers:
• The sender sets a timer for each frame sent
• The receiver has no timer

Acknowledgement:
• The receiver sends a positive acknowledgement when it receives an expected frame
• For damaged or out-of-order frames, the receiver remains silent
• The receiver can send cumulative acknowledgements

Resending frames:
• When a frame is damaged or lost, the sender goes back and resends the set of frames starting
from that frame up to the last one sent
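A bookkeeping sketch of a Go-Back-N sender in Python (sequence numbers are kept unwrapped internally for clarity; the transmit callback and m = 3 are assumptions for the example):

# Go-Back-N sender sketch with m sequence-number bits: window size
# W = 2^m - 1; on timeout, resend every outstanding frame from S_F
# (the first unacknowledged frame) through the last frame sent.
m = 3
WINDOW = 2**m - 1          # at most 7 outstanding frames

class GbnSender:
    def __init__(self):
        self.base = 0          # S_F: oldest unacknowledged frame (unwrapped)
        self.next_seq = 0      # next frame to send (unwrapped)
        self.buffer = {}       # copies of unacknowledged frames

    def can_send(self):
        return self.next_seq - self.base < WINDOW

    def send(self, payload, transmit):
        assert self.can_send()
        self.buffer[self.next_seq] = payload
        transmit(self.next_seq % (2**m), payload)   # m-bit sequence number on the wire
        self.next_seq += 1

    def on_cumulative_ack(self, ack):   # ack = next frame the receiver expects
        while self.base < ack:
            self.buffer.pop(self.base)  # slide the window forward
            self.base += 1

    def on_timeout(self, transmit):     # go back N: resend base..next_seq-1
        for seq in range(self.base, self.next_seq):
            transmit(seq % (2**m), self.buffer[seq])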
Go-Back-N ARQ: Sender window size
• The size of the sender window must be less than 2^m

• The size of the receiver window is always 1

Bidirectional transmission and piggybacking:
• Used to improve efficiency
• Both directions require a sender window and a receiver window

Disadvantages:
• Inefficient on a noisy link: a single damaged frame forces retransmission of multiple frames
• More bandwidth is required, which slows down the transmission

• The above disadvantages can be resolved using Selective Repeat ARQ


Selective repeat ARQ:
Features:
• If one frame is damaged, there is no need to resend multiple frames; only the damaged frame
is resent
• Efficient for noisy links
• But processing at the receiver is more complex
• The receiver sends a negative acknowledgement (NAK) for a damaged frame before the
sender’s timer expires

Sender and receiver windows:
• Size of the sender window = 2^m / 2, i.e., 2^(m−1)
• Size of the receiver window = 2^m / 2, i.e., 2^(m−1)
• The range of sequence numbers accepted at the receiver side runs from R_F to R_L, where
R_F = sequence number of the first frame in the receiver window
R_L = sequence number of the last frame in the receiver window
Selective repeat ARQ: Lost frame
• The size of the sender and receiver windows must be at most one-half of 2^m
Selective repeat ARQ: Sender window size

Bidirectional transmission and piggybacking:
• Used to improve efficiency
• Both directions require a sender window and a receiver window
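A receiver-side sketch of Selective Repeat buffering in Python (unwrapped sequence numbers and m = 3 are assumptions for the example):

# Selective Repeat receiver sketch: out-of-order frames inside the window
# are buffered rather than discarded, and the window slides over each
# in-order frame delivered. Window size is 2^(m-1).
m = 3
WINDOW = 2**(m - 1)        # at most one-half of 2^m

class SrReceiver:
    def __init__(self):
        self.rf = 0            # R_F: first frame in the receiver window (unwrapped)
        self.buffer = {}       # out-of-order frames held for later delivery
        self.delivered = []

    def on_frame(self, seq, payload):
        if not (self.rf <= seq < self.rf + WINDOW):
            return                        # outside the window: ignore
        self.buffer[seq] = payload        # accept even if out of order
        while self.rf in self.buffer:     # deliver any in-order run
            self.delivered.append(self.buffer.pop(self.rf))
            self.rf += 1                  # slide the window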
Two neighboring nodes A and B use a 3-bit sequence number. Assume that A is
transmitting and B is receiving for an ARQ. Draw the diagram for the given
scenario.
i. Before A sends any frames.
ii. After A sends frames 0, 1, 2 and receives acknowledgements from B for 0 and 1.
iii. Frame 2 is lost in the channel and B sends a NAK to A for frame 2.
iv. Retransmission happens and A gets back the acknowledgement.
v. After A sends frames 3, 4 and 5, B acknowledges up to frame 4, and the
acknowledgement is received by A.
vi. Identify the type of ARQ and find the window size at the sender and the
receiver side.
Give a pictorial representation for the scenarios below, with their sender and receiver sliding windows:

Scenario 1: Go-Back-N protocol
i) Frames 0, 1 & 2 are transmitted and acknowledged.
ii) Frame 3 is transmitted and acknowledged.
iii) Frame 0 is lost, frames 1 & 2 are received, and a timeout occurs waiting for the acknowledgement.

Scenario 2: Selective Repeat protocol
i) Frames 0 & 1 are transmitted and acknowledged.
ii) Frame 2 is lost & frame 3 is received.