Unit 3

This document provides details about the Computer Networks syllabus under the CBCS pattern effective from 2021-2022. It includes information about the subject code, title, type, credits, units, and contents. The contents will cover topics such as introduction to networks, the OSI and TCP/IP models, physical layer, data link layer, network layer, transport layer, and network security. The recommended textbooks and online resources for the subject are also listed.

BCA Syllabus under CBCS Pattern with effect from 2021-2022 Onwards

Subject Title: COMPUTER NETWORKS
Semester: IV
Subject Code: 21UCA07
Specialization: NA
Type: Core: Theory
L:T:P:C: 71:5:0:4

Unit I (Level K1, 10 sessions): Introduction - Network Hardware - Software - Reference Models - OSI and TCP/IP Models - Example Networks: Internet, ATM, Ethernet and Wireless LANs - Physical Layer - Theoretical Basis for Data Communication - Guided Transmission Media.

Unit II (Level K2, 15 sessions): Wireless Transmission - Communication Satellites - Telephone System: Structure, Local Loop, Trunks, Multiplexing and Switching. Data Link Layer: Design Issues - Error Detection and Correction.

Unit III (Level K3, 15 sessions): Elementary Data Link Protocols - Sliding Window Protocols - Data Link Layer in the Internet - Medium Access Layer - Channel Allocation Problem - Multiple Access Protocols - Bluetooth.

Unit IV (Levels K3, K4, 15 sessions): Network Layer - Design Issues - Routing Algorithms - Congestion Control Algorithms - IP Protocol - IP Addresses - Internet Control Protocols.

Unit V (Level K5, 16 sessions): Transport Layer - Services - Connection Management - Addressing, Establishing and Releasing a Connection - Simple Transport Protocol - Internet Transport Protocols (ITP) - Network Security: Cryptography.

Learning Resources
Text Books: A. S. Tanenbaum, "Computer Networks", Prentice-Hall of India, 2008, 4th Edition.
Reference Books:
1. Stallings, "Data and Computer Communications", Pearson Education, 2012, 7th Edition.
2. B. A. Forouzan, "Data Communications and Networking", Tata McGraw Hill, 2007, 4th Edition.
3. F. Halsall, "Data Communications, Computer Networks and Open Systems", Pearson Education, 2008.
Website / Link: NPTEL & MOOC courses titled Computer Networks - https://nptel.ac.in/courses/106106091/

UNIT- III

ELEMENTARY DATA LINK PROTOCOLS

Simplest Protocol

It is very simple. The sender sends a sequence of frames without waiting for any response from the receiver. Data are transmitted in one direction only. The protocol assumes that both sender and receiver are always ready, that processing time can be ignored, that infinite buffer space is available, and, best of all, that the communication channel between the data link layers never damages or loses frames. This thoroughly unrealistic protocol, which we will nickname "Utopia," is unrealistic because it handles neither flow control nor error control.
Stop-and-wait Protocol

It is still very simple. The sender sends one frame and waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame.
It is called the Stop-and-Wait Protocol because the sender sends one frame, stops until it receives confirmation from the receiver (okay to go ahead), and then sends the next frame. We still have unidirectional communication for data frames, but auxiliary ACK frames (simple tokens of acknowledgment) travel in the other direction. We thus add flow control to our previous protocol.
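The stop-and-wait rule can be sketched in a few lines of Python. This is an illustrative sketch over an assumed error-free, in-memory "channel" (the receiver is just a local function), not a real link; the function names are made up for the example. The point is only that the sender never transmits the next frame until the ACK for the current one has come back.

```python
# A minimal sketch of Stop-and-Wait: the sender transmits one frame,
# then blocks until the receiver's ACK arrives before sending the next.

def stop_and_wait_send(frames):
    """Deliver frames one at a time, waiting for an ACK after each."""
    received = []

    def receiver(frame):
        received.append(frame)   # receiver consumes the frame...
        return "ACK"             # ...and returns an acknowledgment

    for frame in frames:
        ack = receiver(frame)    # send one frame, wait for feedback
        assert ack == "ACK"      # only then may the next frame go out
    return received

print(stop_and_wait_send(["F0", "F1", "F2"]))
```

With a noiseless channel this is all the flow control the protocol needs; error control comes only with the ARQ variants below.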

NOISY CHANNELS
Although the Stop-and-Wait Protocol gives us an idea of how to add flow control to its predecessor, noiseless channels are nonexistent. We can either ignore the error (as we sometimes do), or we need to add error control to our protocols. We discuss three protocols in this section that use error control.

Sliding Window Protocols:


1. Stop-and-Wait Automatic Repeat Request
2. Go-Back-N Automatic Repeat Request
3. Selective Repeat Automatic Repeat Request

1 Stop-and-Wait Automatic Repeat Request


To detect and correct corrupted frames, we need to add redundancy bits to our data frame. When the frame arrives at the receiver site, it is checked, and if it is corrupted, it is silently discarded. The detection of errors in this protocol is manifested by the silence of the receiver.
Lost frames are more difficult to handle than corrupted ones. In our previous protocols, there was no way to identify a frame. The received frame could be the correct one, or a duplicate, or a frame out of order. The solution is to number the frames. When the receiver receives a data frame that is out of order, this means that frames were either lost or duplicated.
The lost frames need to be resent in this protocol. If the receiver does not respond when there is an error, how can the sender know which frame to resend? To remedy this problem, the sender keeps a copy of the sent frame. At the same time, it starts a timer. If the timer expires and there is no ACK for the sent frame, the frame is resent, the copy is held, and the timer is restarted. Since the protocol uses the stop-and-wait mechanism, there is only one specific frame that needs an ACK.
Error correction in Stop-and-Wait ARQ is therefore done by keeping a copy of the sent frame and retransmitting the frame when the timer expires.

In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The sequence numbers are
based on modulo-2 arithmetic.

In Stop-and-Wait ARQ, the acknowledgment number always announces in modulo-2 arithmetic the
sequence number of the next frame expected.
Bandwidth Delay Product:
Assume that, in a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1 bit takes 20 ms to make a round trip. What is the bandwidth-delay product? If the system data frames are 1000 bits in length, what is the utilization percentage of the link?
The bandwidth-delay product is 1 Mbps x 20 ms = 20,000 bits: the number of bits that could be in flight on the link. The system, however, sends only 1000 bits per round trip, so the link utilization is 1000/20,000, or 5 percent. For this reason, for a link with a high bandwidth or long delay, the use of Stop-and-Wait ARQ wastes the capacity of the link.
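The arithmetic in this example can be checked directly; the variable names below are illustrative.

```python
# Worked numbers from the example above: 1 Mbps link, 20 ms round trip,
# 1000-bit frames. Bandwidth-delay product = bits that could fill the pipe.

bandwidth_bps = 1_000_000      # 1 Mbps
rtt_s = 0.020                  # 20 ms round trip
frame_bits = 1000

bdp_bits = bandwidth_bps * rtt_s       # 20,000 bits
utilization = frame_bits / bdp_bits    # frame bits / pipe capacity

print(bdp_bits)        # 20000.0
print(utilization)     # 0.05, i.e. 5 percent
```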

2. Go-Back-N Automatic Repeat Request

To improve the efficiency of transmission (filling the pipe), multiple frames must be in transition while waiting for acknowledgment. In other words, we need to let more than one frame be outstanding to keep the channel busy while the sender is waiting for acknowledgment.
The first such protocol is called Go-Back-N Automatic Repeat Request. In this protocol we can send several frames before receiving acknowledgments; we keep a copy of these frames until the acknowledgments arrive.
In the Go-Back-N Protocol, the sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits. The sequence numbers range from 0 to 2^m - 1. For example, if m is 4, the only sequence numbers are 0 through 15 inclusive.
The sender window at any time divides the possible sequence numbers into four regions.
The first region, from the far left to the left wall of the window, defines the sequence numbers belonging to frames that are already acknowledged. The sender does not worry about these frames and keeps no copies of them.
The second region, colored in Figure (a), defines the range of sequence numbers belonging to the frames that are sent and have an unknown status. The sender needs to wait to find out if these frames have been received or were lost. We call these outstanding frames.
The third region, white in the figure, defines the range of sequence numbers for frames that can be sent; however, the corresponding data packets have not yet been received from the network layer.
Finally, the fourth region defines sequence numbers that cannot be used until the window slides.
The send window is an abstract concept defining an imaginary box of size 2^m - 1 with three variables: Sf, Sn, and Ssize. The variable Sf defines the sequence number of the first (oldest) outstanding frame. The variable Sn holds the sequence number that will be assigned to the next frame to be sent. Finally, the variable Ssize defines the size of the window.
Figure (b) shows how a send window can slide one or more slots to the right when an acknowledgment arrives from the other end. The acknowledgments in this protocol are cumulative, meaning that more than one frame can be acknowledged by an ACK frame. In the figure, frames 0, 1, and 2 are acknowledged, so the window has slid to the right three slots. Note that the value of Sf is 3 because frame 3 is now the first outstanding frame. The send window can slide one or more slots when a valid acknowledgment arrives.
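The send-window bookkeeping described above can be sketched as a small Python class. The class and method names are invented for illustration, but the variables Sf, Sn, and Ssize follow the text: a cumulative ACK carrying the next expected sequence number slides Sf forward.

```python
# A sketch of the Go-Back-N send window. Sequence numbers are modulo 2^m;
# the window size is at most 2^m - 1; cumulative ACKs slide the window.

class GoBackNSender:
    def __init__(self, m):
        self.m = m
        self.Sf = 0                    # first (oldest) outstanding frame
        self.Sn = 0                    # next sequence number to assign
        self.Ssize = 2 ** m - 1        # maximum window size

    def outstanding(self):
        return (self.Sn - self.Sf) % (2 ** self.m)

    def send(self):
        if self.outstanding() >= self.Ssize:
            return None                # window full: must wait for an ACK
        seq, self.Sn = self.Sn, (self.Sn + 1) % (2 ** self.m)
        return seq

    def ack(self, ack_no):
        # Cumulative ACK: ack_no is the next frame the receiver expects,
        # so every frame before it is acknowledged and the window slides.
        self.Sf = ack_no % (2 ** self.m)

s = GoBackNSender(m=4)                 # sequence numbers 0..15, window 15
sent = [s.send() for _ in range(4)]    # frames 0, 1, 2, 3 go out
s.ack(3)                               # ACK 3 acknowledges frames 0, 1, 2
print(sent, s.Sf)                      # [0, 1, 2, 3] 3
```

After the ACK, frame 3 is the only outstanding frame, matching the figure described in the text.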

Receiver window: variable Rn (receive window, next frame expected).


The sequence numbers to the left of the window belong to the frames already received and acknowledged; the sequence numbers to the right of this window define the frames that cannot be received. Any received frame with a sequence number in these two regions is discarded. Only a frame with a sequence number matching the value of Rn is accepted and acknowledged. The receive window also slides, but only one slot at a time. When a correct frame is received (and frames are received only one at a time), the window slides (see the below figure for the receive window).

The receive window is an abstract concept defining an imaginary box of size 1 with one single variable Rn. The window slides when a correct frame has arrived; sliding occurs one slot at a time.
Fig: Receiver window (before sliding (a), after sliding (b))
Timers
Although there can be a timer for each frame that is sent, in our protocol we use only one. The reason is that the timer for the first outstanding frame always expires first; we send all outstanding frames when this timer expires.
Acknowledgment
The receiver sends a positive acknowledgment if a frame has arrived safe and sound and in order. If a frame is damaged or is received out of order, the receiver is silent and will discard all subsequent frames until it receives the one it is expecting. The silence of the receiver causes the timer of the unacknowledged frame at the sender side to expire. This, in turn, causes the sender to go back and resend all frames, beginning with the one with the expired timer. The receiver does not have to acknowledge each frame received. It can send one cumulative acknowledgment for several frames.
Resending a Frame
When the timer expires, the sender resends all outstanding frames. For example, suppose the sender has already sent frame 6, but the timer for frame 3 expires. This means that frame 3 has not been acknowledged; the sender goes back and sends frames 3, 4, 5, and 6 again. That is why the protocol is called Go-Back-N ARQ.

The below figure is an example (ACK lost) of a case where the forward channel is reliable, but the reverse is not. No data frames are lost, but some ACKs are delayed and one is lost. The example also shows how cumulative acknowledgments can help if acknowledgments are delayed or lost.
The next figure is an example of the case where a frame is lost.

Stop-and-Wait ARQ is a special case of Go-Back-N ARQ in which the size of the send window is 1.

3 Selective Repeat Automatic Repeat Request


In Go-Back-N ARQ, the receiver keeps track of only one variable, and there is no need to buffer out-of-order frames; they are simply discarded. However, this protocol is very inefficient for a noisy link.
On a noisy link a frame has a higher probability of damage, which means the resending of multiple frames. This resending uses up the bandwidth and slows down the transmission.
For noisy links, there is another mechanism that does not resend N frames when just one frame is damaged; only the damaged frame is resent. This mechanism is called Selective Repeat ARQ. It is more efficient for noisy links, but the processing at the receiver is more complex.

Sender Window (the send window works as in Go-Back-N, before and after sliding; the only difference in the sender window between Go-Back-N and Selective Repeat is the window size)

Receiver window

The receiver window in Selective Repeat is totally different from the one in Go-Back-N. First, the size of the receive window is the same as the size of the send window, 2^(m-1).
The Selective Repeat Protocol allows as many frames as the size of the receive window to arrive out of order and be kept until there is a set of in-order frames to be delivered to the network layer. Because the sizes of the send window and receive window are the same, all the frames in the send window can arrive out of order and be stored until they can be delivered. However, the receiver never delivers packets out of order to the network layer. The above figure shows the receive window. Those slots inside the window that are colored define frames that have arrived out of order and are waiting for their neighbors to arrive before delivery to the network layer.

In Selective Repeat ARQ, the size of the sender and receiver window must be at most one-half of 2^m.
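The receive-window behavior just described (buffer out-of-order frames inside the window, deliver consecutive runs starting at Rn, slide the window) can be sketched as follows. This is an illustrative model with invented names, not a full protocol implementation; ACK/NAK traffic is omitted.

```python
# A sketch of the Selective Repeat receive window: out-of-order frames
# inside the window are buffered; frames are delivered to the network
# layer only once a consecutive run starting at Rn has arrived.

class SelectiveRepeatReceiver:
    def __init__(self, m):
        self.mod = 2 ** m
        self.size = 2 ** (m - 1)       # window size is 2^(m-1)
        self.Rn = 0                    # next frame expected, in order
        self.buffer = {}               # out-of-order frames kept here
        self.delivered = []            # what the network layer has seen

    def receive(self, seq):
        offset = (seq - self.Rn) % self.mod
        if offset >= self.size:
            return                     # outside the window: discard
        self.buffer[seq] = True
        # Deliver the in-order run starting at Rn, sliding the window.
        while self.Rn in self.buffer:
            del self.buffer[self.Rn]
            self.delivered.append(self.Rn)
            self.Rn = (self.Rn + 1) % self.mod

r = SelectiveRepeatReceiver(m=3)       # window size 4, seq numbers 0..7
for seq in [0, 2, 3, 1]:               # frame 1 arrives late
    r.receive(seq)
print(r.delivered)                     # [0, 1, 2, 3]
```

Frames 2 and 3 wait in the buffer until frame 1 arrives; only then are frames 1, 2, and 3 delivered together, in order.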

Delivery of Data in Selective Repeat ARQ:

Flow Diagram
Differences between Go-Back N & Selective Repeat

One main difference is the number of timers. Here, each frame sent or resent needs a timer, which means that the timers need to be numbered (0, 1, 2, and 3). The timer for frame 0 starts at the first request, but stops when the ACK for this frame arrives.
There are two conditions for the delivery of frames to the network layer: First, a set of consecutive frames must have arrived. Second, the set starts from the beginning of the window. After the first arrival, there was only one frame and it started from the beginning of the window. After the last arrival, there are three frames and the first one starts from the beginning of the window.
Another important point is that a NAK is sent.
The next point is about the ACKs. Notice that only two ACKs are sent here. The first one acknowledges only the first frame; the second one acknowledges three frames. In Selective Repeat, ACKs are sent when data are delivered to the network layer. If the data belonging to n frames are delivered in one shot, only one ACK is sent for all of them.

Piggybacking
A technique called piggybacking is used to improve the efficiency of bidirectional protocols. When a frame is carrying data from A to B, it can also carry control information about arrived (or lost) frames from B; when a frame is carrying data from B to A, it can also carry control information about arrived (or lost) frames from A.

DATA LINK LAYER FUNCTIONS (SERVICES)


1. Providing services to the network layer:
a. Unacknowledged connectionless service: appropriate for channels with a low error rate and for real-time traffic. Example: Ethernet.
b. Acknowledged connectionless service: useful over unreliable channels such as WiFi; relies on ACKs, timers, and resending.
c. Acknowledged connection-oriented service: guarantees that frames are received exactly once and in the right order. Appropriate over long, unreliable links such as a satellite channel or a long-distance telephone circuit.
2. Framing: the data link layer divides the stream of bits received from the network layer into manageable data units called frames.
3. Physical Addressing: the data link layer adds a header to the frame in order to define the physical address of the sender or receiver of the frame, if the frames are to be distributed to different systems on the network.
4. Flow Control: a receiving node can receive frames at a faster rate than it can process them. Without flow control, the receiver's buffer can overflow and frames can get lost. To overcome this problem, the data link layer uses flow control to prevent the sending node on one side of the link from overwhelming the receiving node on the other side. This prevents a traffic jam at the receiver side.
5. Error Control: error control is achieved by adding a trailer at the end of the frame. Duplication of frames is also prevented by this mechanism.
Error detection: errors can be introduced by signal attenuation and noise. The data link layer protocol provides a mechanism to detect one or more errors. This is achieved by adding error detection bits to the frame; the receiving node can then perform an error check.
Error correction: error correction is similar to error detection, except that the receiving node not only detects the errors but also determines where in the frame the errors have occurred.
6. Access Control: when two or more devices are connected to the same link, protocols of this layer determine which device has control over the link at any given time.
7. Reliable delivery: the data link layer provides a reliable delivery service, i.e., it transmits the network layer datagram without error. Reliable delivery is accomplished with retransmissions and acknowledgments. The data link layer mainly provides reliable delivery over links that have higher error rates, so that errors can be corrected locally, at the link where they occur, rather than forcing a retransmission of the data end to end.
8. Half-Duplex & Full-Duplex: in full-duplex mode, both nodes can transmit data at the same time. In half-duplex mode, only one node can transmit at a time.

FRAMING:
To provide service to the network layer, the data link layer must use the service provided to it by the physical layer. What the physical layer does is accept a raw bit stream and attempt to deliver it to the destination. This bit stream is not guaranteed to be error free. The number of bits received may be less than, equal to, or more than the number of bits transmitted, and they may have different values. It is up to the data link layer to detect and, if necessary, correct errors. The usual approach is for the data link layer to break the bit stream up into discrete frames and compute a checksum for each frame (framing). When a frame arrives at the destination, the checksum is recomputed. If the newly computed checksum is different from the one contained in the frame, the data link layer knows that an error has occurred and takes steps to deal with it (e.g., discarding the bad frame and possibly also sending back an error report). We will look at four framing methods:
1. Character count.
2. Flag bytes with byte stuffing.
3. Starting and ending flags, with bit stuffing.
4. Physical layer coding violations.
The character count method uses a field in the header to specify the number of characters in the frame. When the data link layer at the destination sees the character count, it knows how many characters follow and hence where the end of the frame is. This technique is shown in Fig. (a) for four frames of sizes 5, 5, 8, and 8 characters, respectively.

A character stream. (a) Without errors. (b) With one error


The trouble with this algorithm is that the count can be garbled by a transmission error. For example, if the character count of 5 in the second frame of Fig. (b) becomes a 7, the destination will get out of synchronization and will be unable to locate the start of the next frame. Even if the checksum is incorrect, so the destination knows that the frame is bad, it still has no way of telling where the next frame starts. Sending a frame back to the source asking for a retransmission does not help either, since the destination does not know how many characters to skip over to get to the start of the retransmission. For this reason, the character count method is rarely used anymore.
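The character count scheme, and the resynchronization failure just described, can be demonstrated with a short sketch; the frame sizes and byte values here are made up for illustration.

```python
# A sketch of character-count framing: each frame starts with a one-byte
# count that includes itself, so the parser knows where the frame ends.
# A single corrupted count desynchronizes every frame that follows.

def split_frames(stream):
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]              # first byte = total frame length
        frames.append(stream[i:i + count])
        i += count                     # jump to the start of the next frame
    return frames

# Two frames of 5 bytes each: count byte + 4 data bytes.
good = bytes([5, 65, 66, 67, 68, 5, 69, 70, 71, 72])
print(split_frames(good))   # [b'\x05ABCD', b'\x05EFGH']

# Corrupt the second count (5 -> 7): the parser runs past the real
# frame boundary and loses synchronization.
bad = bytes([5, 65, 66, 67, 68, 7, 69, 70, 71, 72])
print(split_frames(bad))
```

The corrupted count makes the second "frame" absorb bytes that belong to whatever follows, which is exactly why a retransmission request cannot help the receiver find the next frame boundary.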
Flag bytes with byte stuffing method gets around the problem of resynchronization after an error by
having each frame start and end with special bytes. In the past, the starting and ending bytes were
different, but in recent years most protocols have used the same byte, called a flag byte, as both the
starting and ending delimiter, as shown in Fig. (a) as FLAG. In this way, if the receiver ever loses
synchronization, it can just search for the flag byte to find the end of the current frame. Two consecutive
flag bytes indicate the end of one frame and start of the next one.

(a) A frame delimited by flag bytes (b) Four examples of byte sequences
before and after byte stuffing

It may easily happen that the flag byte's bit pattern occurs in the data. This situation will usually interfere with the framing. One way to solve this problem is to have the sender's data link layer insert a special escape byte (ESC) just before each "accidental" flag byte in the data. The data link layer on the receiving end removes the escape byte before the data are given to the network layer. This technique is called byte stuffing or character stuffing. Thus, a framing flag byte can be distinguished from one in the data by the absence or presence of an escape byte before it.
What happens if an escape byte occurs in the middle of the data? The answer is that it, too, is stuffed with an escape byte. Thus, any single escape byte is part of an escape sequence, whereas a doubled one indicates that a single escape occurred naturally in the data. Some examples are shown in Fig. (b). In all cases, the byte sequence delivered after de-stuffing is exactly the same as the original byte sequence.
A major disadvantage of using this framing method is that it is closely tied to the use of 8-bit characters. Not all character codes use 8-bit characters. For example, UNICODE uses 16-bit characters, so a new technique had to be developed to allow arbitrary-sized characters.
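Byte stuffing and de-stuffing can be sketched directly. The FLAG and ESC values below are illustrative choices (the text does not fix particular values; PPP, for instance, uses 0x7E and 0x7D):

```python
# A sketch of byte stuffing: FLAG delimits the frame, and any FLAG or
# ESC byte occurring inside the payload is preceded by an ESC.

FLAG, ESC = 0x7E, 0x7D   # illustrative values

def stuff(payload: bytes) -> bytes:
    out = [FLAG]                       # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)            # escape accidental flag/escape bytes
        out.append(b)
    out.append(FLAG)                   # closing flag
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    body = frame[1:-1]                 # drop the delimiting flags
    out, escaped = [], False
    for b in body:
        if escaped:
            out.append(b)              # byte after ESC is literal data
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            out.append(b)
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert unstuff(stuff(data)) == data    # de-stuffing restores the original
```

As the text notes, the byte sequence delivered after de-stuffing is exactly the original payload, even when it contains flag or escape bytes.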
Starting and ending flags, with bit stuffing, allows data frames to contain an arbitrary number of bits and allows character codes with an arbitrary number of bits per character. It works like this. Each frame begins and ends with a special bit pattern, 01111110 (in fact, a flag byte). Whenever the sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream. This bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the outgoing character stream before a flag byte in the data.
When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically de-stuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so is bit stuffing. If the user data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but stored in the receiver's memory as 01111110.

Fig:Bit stuffing. (a) The original data. (b) The data as they appear on the line.
(c) The data as they are stored in the receiver's memory after destuffing.
With bit stuffing, the boundary between two frames can be unambiguously recognized by the flag pattern. Thus, if the receiver loses track of where it is, all it has to do is scan the input for flag sequences, since they can occur only at frame boundaries and never within the data.
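The bit-stuffing rule, including the 01111110 to 011111010 example from the text, can be verified with a short sketch operating on strings of '0'/'1' characters (a convenient stand-in for a real bit stream):

```python
# A sketch of bit stuffing: after five consecutive 1s the sender inserts
# a 0; the receiver deletes any 0 that follows five consecutive 1s.

def bit_stuff(bits: str) -> str:
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            out.append("0")            # stuff a 0 after five 1s
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False               # drop the stuffed 0
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            skip = True                # next bit is a stuffed 0
            ones = 0
    return "".join(out)

print(bit_stuff("01111110"))           # 011111010, as in the text
assert bit_unstuff(bit_stuff("01111110")) == "01111110"
```

Because stuffing is reversed exactly at the receiver, it is transparent to the network layer, just like byte stuffing.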
The physical layer coding violations method of framing is only applicable to networks in which the encoding on the physical medium contains some redundancy. For example, some LANs encode 1 bit of data by using 2 physical bits. Normally, a 1 bit is a high-low pair and a 0 bit is a low-high pair. The scheme means that every data bit has a transition in the middle, making it easy for the receiver to locate the bit boundaries. The combinations high-high and low-low are not used for data but are used for delimiting frames in some protocols.
As a final note on framing, many data link protocols use a combination of a character count with one of the other methods for extra safety. When a frame arrives, the count field is used to locate the end of the frame. Only if the appropriate delimiter is present at that position and the checksum is correct is the frame accepted as valid. Otherwise, the input stream is scanned for the next delimiter.

RANDOM ACCESS PROTOCOLS


We can consider the data link layer as two sublayers. The upper sublayer is responsible for data link control, and the lower sublayer is responsible for resolving access to the shared media.

The upper sublayer that is responsible for flow and error control is called the logical link control (LLC) layer; the lower sublayer that is mostly responsible for multiple access resolution is called the media access control (MAC) layer. When nodes or stations are connected and use a common link, called a multipoint or broadcast link, we need a multiple-access protocol to coordinate access to the link.

Taxonomy of multiple-access protocols


RANDOM ACCESS
In random access or contention methods, no station is superior to another station and none is assigned control over another.
Two features give this method its name. First, there is no scheduled time for a station to transmit. Transmission is random among the stations. That is why these methods are called random access. Second, no rules specify which station should send next. Stations compete with one another to access the medium. That is why these methods are also called contention methods.
ALOHA
1 Pure ALOHA
The original ALOHA protocol is called pure ALOHA. This is a simple but elegant protocol. The idea is that each station sends a frame whenever it has a frame to send. However, since there is only one channel to share, there is the possibility of collision between frames from different stations. The below figure shows an example of frame collisions in pure ALOHA.

Frames in a pure ALOHA network


In pure ALOHA, the stations transmit frames whenever they have data to send. When two or more stations transmit simultaneously, there is a collision and the frames are destroyed.
In pure ALOHA, whenever any station transmits a frame, it expects an acknowledgement from the receiver. If the acknowledgement is not received within a specified time, the station assumes that the frame (or the acknowledgement) has been destroyed.
If the frame is destroyed because of a collision, the station waits for a random amount of time and sends it again. This waiting time must be random; otherwise the same frames will collide again and again. Therefore pure ALOHA dictates that when the time-out period passes, each station must wait for a random amount of time before resending its frame. This randomness will help avoid more collisions.
Vulnerable time: let us find the length of time, the vulnerable time, in which there is a possibility of collision. We assume that the stations send fixed-length frames, with each frame taking Tfr s to send. The below figure shows the vulnerable time for station A.
Station A sends a frame at time t. Now imagine station B has already sent a frame between t - Tfr and t. This leads to a collision between the frames from station A and station B: the end of B's frame collides with the beginning of A's frame. On the other hand, suppose that station C sends a frame between t and t + Tfr. Here, there is a collision between frames from station A and station C: the beginning of C's frame collides with the end of A's frame.
Looking at the figure, we see that the vulnerable time, during which a collision may occur in pure ALOHA, is 2 times the frame transmission time: pure ALOHA vulnerable time = 2 x Tfr.

Procedure for pure ALOHA protocol


Example
A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the requirement to make this frame collision-free?
Solution
The average frame transmission time Tfr is 200 bits / 200 kbps, or 1 ms. The vulnerable time is 2 x 1 ms = 2 ms. This means no station should send later than 1 ms before this station starts transmission, and no station should start sending during the 1-ms period that this station is sending.
The throughput for pure ALOHA is S = G x e^(-2G). The maximum throughput Smax = 0.184 when G = 1/2.
PROBLEM
A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the throughput if the system (all stations together) produces a. 1000 frames per second, b. 500 frames per second, c. 250 frames per second?
The frame transmission time is 200 bits / 200 kbps, or 1 ms.
a. If the system creates 1000 frames per second, this is 1 frame per millisecond. The load G is 1. In this case S = G x e^(-2G), or S = 0.135 (13.5 percent). This means that the throughput is 1000 x 0.135 = 135 frames. Only 135 frames out of 1000 will probably survive.
b. If the system creates 500 frames per second, this is 1/2 frame per millisecond. The load G is 1/2. In this case S = G x e^(-2G), or S = 0.184 (18.4 percent). This means that the throughput is 500 x 0.184 = 92, and that only 92 frames out of 500 will probably survive. Note that this is the maximum throughput case, percentage-wise.
c. If the system creates 250 frames per second, this is 1/4 frame per millisecond. The load G is 1/4. In this case S = G x e^(-2G), or S = 0.152 (15.2 percent). This means that the throughput is 250 x 0.152 = 38. Only 38 frames out of 250 will probably survive.
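The three answers can be reproduced numerically; this is just a quick check of the arithmetic, with an invented helper function name.

```python
# Pure ALOHA throughput: S = G * e^(-2G); surviving frames per second
# = offered frames per second * S.

import math

def pure_aloha_S(G):
    return G * math.exp(-2 * G)

for fps, G in [(1000, 1.0), (500, 0.5), (250, 0.25)]:
    S = pure_aloha_S(G)
    print(fps, round(S, 3), round(fps * S))
# 1000 0.135 135
# 500 0.184 92
# 250 0.152 38
```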
2 Slotted ALOHA

Pure ALOHA has a vulnerable time of 2 x Tfr. This is so because there is no rule that defines when a station can send. A station may send soon after another station has started or soon before another station has finished. Slotted ALOHA was invented to improve the efficiency of pure ALOHA.

In slotted ALOHA we divide the time into slots of Tfr s and force the station to send only at the beginning of a time slot. Figure 3 shows an example of frame collisions in slotted ALOHA.

FIG: 3
Because a station is allowed to send only at the beginning of the synchronized time slot, if a station misses this moment, it must wait until the beginning of the next time slot. This means that the station which started at the beginning of this slot has already finished sending its frame. Of course, there is still the possibility of collision if two stations try to send at the beginning of the same time slot. However, the vulnerable time is now reduced to one-half, equal to Tfr, as Figure 4 shows.
The below figure shows that the vulnerable time for slotted ALOHA is one-half that of pure ALOHA: slotted ALOHA vulnerable time = Tfr.

The throughput for slotted ALOHA is S = G x e^(-G). The maximum throughput Smax = 0.368 when G = 1.

A slotted ALOHA network transmits 200-bit frames using a shared channel with a 200-kbps bandwidth. Find the throughput if the system (all stations together) produces
a. 1000 frames per second, b. 500 frames per second, c. 250 frames per second.
Solution
This situation is similar to the previous exercise except that the network is using slotted ALOHA instead of pure ALOHA. The frame transmission time is 200 bits / 200 kbps, or 1 ms.
a. In this case G is 1. So S = G x e^(-G), or S = 0.368 (36.8 percent). This means that the throughput is 1000 x 0.368 = 368 frames. Only 368 out of 1000 frames will probably survive. Note that this is the maximum throughput case, percentage-wise.
b. Here G is 1/2. In this case S = G x e^(-G), or S = 0.303 (30.3 percent). This means that the throughput is 500 x 0.303 = 151. Only 151 frames out of 500 will probably survive.
c. Now G is 1/4. In this case S = G x e^(-G), or S = 0.195 (19.5 percent). This means that the throughput is 250 x 0.195 = 49. Only 49 frames out of 250 will probably survive.
Comparison between Pure Aloha & Slotted Aloha
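The two throughput formulas can be compared numerically; a coarse scan over the offered load G is enough to locate each maximum. The function names and the scan grid here are illustrative.

```python
# Comparing the two throughput curves: S_pure = G*e^(-2G) peaks at
# 0.184 when G = 1/2, while S_slotted = G*e^(-G) peaks at 0.368 when
# G = 1, i.e. slotted ALOHA doubles the maximum throughput.

import math

def S_pure(G):
    return G * math.exp(-2 * G)

def S_slotted(G):
    return G * math.exp(-G)

# Scan offered loads to locate each maximum.
loads = [i / 100 for i in range(1, 301)]
G_pure = max(loads, key=S_pure)
G_slot = max(loads, key=S_slotted)

print(G_pure, round(S_pure(G_pure), 3))      # 0.5 0.184
print(G_slot, round(S_slotted(G_slot), 3))   # 1.0 0.368
```

The scan confirms the Smax values quoted in the text: halving the vulnerable time shifts the optimal load from G = 1/2 to G = 1 and doubles the peak throughput.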
Carrier Sense Multiple Access (CSMA)
To minimize the chance of collision and, therefore, increase the performance, the CSMA method was developed. The chance
of collision can be reduced if a station senses the medium before trying to use it. Carrier sense multiple access (CSMA) requires that
each station first listen to the medium (or check the state of the medium) before sending. In other words, CSMA is based on the
principle "sense before transmit" or "listen before talk."
CSMA can reduce the possibility of collision, but it cannot eliminate it. The reason for this is shown in the figure below. Stations are connected to a shared channel (usually a dedicated medium).
The possibility of collision still exists because of propagation delay; a station may sense the medium and find it idle only because the first bit sent by another station has not yet been received.
At time t1, station B senses the medium and finds it idle, so it sends a frame. At time t2 (t2 > t1), station C senses the medium and finds it idle because, at this time, the first bits from station B have not reached station C. Station C also sends a frame. The two signals collide and both frames are destroyed.
Space/time model of the collision in CSMA

Vulnerable Time
The vulnerable time for CSMA is the propagation time Tp. This is the time needed for a signal to propagate from one end of the medium to the other. When a station sends a frame, and any other station tries to send a frame during this time, a collision will result. But if the first bit of the frame reaches the end of the medium, every station will already have heard the bit and will refrain from sending.
Vulnerable time in CSMA

Persistence Methods
What should a station do if the channel is busy? What should a station do if the channel is idle? Three methods have been devised to answer these questions: the 1-persistent method, the non-persistent method, and the p-persistent method.

1-Persistent: In this method, after the station finds the line idle, it sends its frame immediately (with probability 1). This method has
the highest chance of collision because two or more stations may find the line idle and send their frames immediately.
Non-persistent: a station that has a frame to send senses the line. If the line is idle, it sends immediately. If the line is not idle, it waits a random amount of time and then senses the line again. This approach reduces the chance of collision because it is unlikely that two or more stations will wait the same amount of time and retry to send simultaneously. However, this method reduces the efficiency of the network because the medium remains idle when there may be stations with frames to send.
p-Persistent: This is used if the channel has time slots with a slot duration equal to or greater than the maximum propagation time.
The p-persistent approach combines the advantages of the other two strategies. It reduces the chance of collision and improves
efficiency.
In this method, after the station finds the line idle it follows these steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 - p, the station waits for the beginning of the next time slot and checks the line again.
a. If the line is idle, it goes to step 1.
b. If the line is busy, it acts as though a collision has occurred and uses the backoff procedure.
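The steps above can be sketched as a small decision loop. This is illustrative only: `channel_idle(slot)` is a hypothetical hook standing in for sensing the real medium, and the back-off procedure itself is elided.

```python
import random

def p_persistent_send(p, channel_idle, rng=random.random, max_slots=1000):
    """Sketch of the p-persistent procedure.

    channel_idle(slot) -> bool is a hypothetical hook that reports whether
    the medium is idle in the given time slot.
    """
    slot = 0
    while slot < max_slots:
        if not channel_idle(slot):
            return ("backoff", slot)   # line turned busy while waiting: step 2b
        if rng() < p:
            return ("send", slot)      # with probability p: transmit (step 1)
        slot += 1                      # with probability q = 1 - p: wait a slot
    return ("gave_up", slot)

# With p = 1 the station behaves like 1-persistent and sends immediately.
print(p_persistent_send(1.0, lambda s: True))
```

With a small p the station tends to spread transmissions over more slots, which is exactly how p-persistence trades delay for a lower collision probability.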

Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
The CSMA method does not specify the procedure following a collision. Carrier sense multiple access with collision
detection (CSMA/CD) augments the algorithm to handle the collision.
In this method, a station monitors the medium after it sends a frame to see if the transmission was successful. If so, the station is finished. If, however, there is a collision, the frame is sent again.
To better understand CSMA/CD, let us look at the first bits transmitted by the two stations involved in the collision. Although each station continues to send bits in the frame until it detects the collision, we show what happens as the first bits collide. In the figure below, stations A and C are involved in the collision.
Collision of the first bit in CSMA/CD
At time t1, station A has executed its persistence procedure and starts sending the bits of its frame. At time t2, station C has not yet sensed the first bit sent by A. Station C executes its persistence procedure and starts sending the bits in its frame, which propagate both to the left and to the right. The collision occurs sometime after time t2. Station C detects a collision at time t3 when it receives the first bit of A's frame. Station C immediately (or after a short time, but we assume immediately) aborts transmission.
Station A detects the collision at time t4 when it receives the first bit of C's frame; it also immediately aborts transmission. Looking at the figure, we see that A transmits for the duration t4 − t1; C transmits for the duration t3 − t2.

Minimum Frame Size


For CSMA/CD to work, we need a restriction on the frame size. Before sending the last bit of the frame, the sending station must detect a collision, if any, and abort the transmission. This is so because the station, once the entire frame is sent, does not keep a copy of the frame and does not monitor the line for collision detection. Therefore, the frame transmission time Tfr must be at least two times the maximum propagation time Tp. To understand the reason, let us think about the worst-case scenario. If the two stations involved in a collision are the maximum distance apart, the signal from the first takes time Tp to reach the second, and the effect of the collision takes another time Tp to reach the first. So the requirement is that the first station must still be transmitting after 2Tp.
Collision and abortion in CSMA/CD

Flow diagram for the CSMA/CD


PROBLEM
A network using CSMA/CD has a bandwidth of 10 Mbps. If the maximum propagation time (including the delays in
the devices and ignoring the time needed to send a jamming signal, as we see later) is 25.6 μs, what is the minimum size of the frame?
Solution
The frame transmission time is Tfr = 2 × Tp = 51.2 μs. This means that, in the worst case, a station needs to transmit for a period of 51.2 μs to detect the collision. The minimum size of the frame is 10 Mbps × 51.2 μs = 512 bits, or 64 bytes. This is actually the minimum frame size for Standard Ethernet.
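The same Tfr ≥ 2 × Tp reasoning can be written as a tiny calculation (a sketch of the arithmetic above, nothing more):

```python
def min_frame_size_bits(bandwidth_bps, max_prop_time_s):
    """Minimum CSMA/CD frame size: the station must still be transmitting
    after a full round trip, so Tfr >= 2 * Tp and size >= bandwidth * 2 * Tp."""
    return bandwidth_bps * 2 * max_prop_time_s

bits = min_frame_size_bits(10_000_000, 25.6e-6)
print(bits, "bits =", bits / 8, "bytes")   # the problem's 512 bits / 64 bytes
```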

DIFFERENCES BETWEEN ALOHA & CSMA/CD


The first difference is the addition of the persistence process. We need to sense the channel before we start sending the frame by using one of the persistence processes.
The second difference is the frame transmission. In ALOHA, we first transmit the entire frame and then wait for an acknowledgment. In CSMA/CD, transmission and collision detection form a continuous process. We do not send the entire frame and then look for a collision; the station transmits and receives continuously and simultaneously.
The third difference is the sending of a short jamming signal that enforces the collision in case other stations have not yet sensed the collision.

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)

We need to avoid collisions on wireless networks because they cannot be detected. Carrier sense multiple access with collision avoidance (CSMA/CA) was invented for wireless networks. Collisions are avoided through the use of CSMA/CA's three strategies: the inter frame space, the contention window, and acknowledgments, as shown in the figure.

Timing in CSMA/CA
Inter frame Space (IFS)
First, collisions are avoided by deferring transmission even if the channel is found idle. When an idle channel is found, the station does not send immediately. It waits for a period of time called the inter frame space, or IFS.
Even though the channel may appear idle when it is sensed, a distant station may have already started transmitting. The distant station's signal has not yet reached this station. The IFS time allows the front of the signal transmitted by the distant station to reach this station. If after the IFS time the channel is still idle, the station can send, but it still needs to wait a time equal to the contention time. The IFS value can also be used to prioritize stations or frame types. For example, a station that is assigned a shorter IFS has a higher priority.
In CSMA/CA, the IFS can also be used to define the priority of a station or a frame.

Contention Window
The contention window is an amount of time divided into slots. A station that is ready to send chooses a random number of
slots as its wait time. The number of slots in the window changes according to the binary exponential back-off strategy. This means
that it is set to one slot the first time and then doubles each time the station cannot detect an idle channel after the IFS time. This is
very similar to the p-persistent method except that a random outcome defines the number of slots taken by the waiting station.
One interesting point about the contention window is that the station needs to sense the channel after each time slot. However, if the station finds the channel busy, it does not restart the process; it just stops the timer and restarts it when the channel is sensed as idle. This gives priority to the station with the longest waiting time.
In CSMA/CA, if the station finds the channel busy, it does not restart the timer of the contention window; it stops the timer
and restarts it when the channel becomes idle.
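The doubling of the contention window can be sketched in a few lines. This is illustrative: the cap `max_exp` is an assumption for the sketch, not a value given in the text.

```python
import random

def contention_slots(failed_attempts, max_exp=10):
    """Binary exponential back-off for the contention window: the window
    starts at one slot and doubles after each failure, so after k failures
    the station picks one of 2**k slots (capped at 2**max_exp)."""
    window = 2 ** min(failed_attempts, max_exp)
    return random.randrange(window)   # index of the slot to wait for
```

With a one-slot window the only possible choice is slot 0; after three failures the station picks among 8 slots, spreading retries out in time.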
Acknowledgment
With all these precautions, there still may be a collision resulting in destroyed data. In addition, the data may be corrupted
during the transmission. The positive acknowledgment and the time-out timer can help guarantee that the receiver has received the
frame.
This is the CSMA protocol with collision avoidance.
 The station ready to transmit senses the line by using one of the persistent strategies.
 As soon as it finds the line to be idle, the station waits for an IFS (inter frame space) amount of time.
 It then waits for some random time and sends the frame.
 After sending the frame, it sets a timer and waits for the acknowledgement from the receiver.
 If the acknowledgement is received before expiry of the timer, then the transmission is successful.
 But if the transmitting station does not receive the expected acknowledgement before the timer expires, then it increments the back-off parameter, waits for the back-off time, and re-senses the line.
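Putting the steps above together gives a loop like the following. This is an illustrative sketch only: `channel_idle`, `send_frame`, and `ack_received` are hypothetical hooks, not a real 802.11 API, and the actual waiting of slots is elided.

```python
import random

def csma_ca_send(channel_idle, send_frame, ack_received,
                 ifs_slots=1, max_attempts=5):
    """Sketch of the CSMA/CA sender loop described above."""
    for attempt in range(max_attempts):
        while not channel_idle():            # persistence: wait for an idle line
            pass
        # IFS plus a random number of contention slots (window doubles per retry)
        wait = ifs_slots + random.randrange(2 ** attempt)
        send_frame(wait)                     # waiting the slots is elided here
        if ack_received():                   # ACK arrived before the timer expired
            return True
        # no ACK: increment the back-off parameter and re-sense the line
    return False
```

On the first attempt the contention window is a single slot, so the frame goes out right after the IFS; each missed acknowledgement doubles the window before the retry.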
Flow Control or TCP Sliding Window
TCP uses a sliding window to handle flow control. The sliding window protocol used by TCP, however, is something between the Go-Back-N and Selective Repeat sliding windows.

The sliding window protocol in TCP looks like the Go-Back-N protocol because it does not use NAKs;
it looks like Selective Repeat because the receiver holds the out-of-order segments until the missing ones arrive.

There are two big differences between this sliding window and the one we used at the data link layer.
1. The sliding window of TCP is byte-oriented; the one we discussed in the data link layer is frame-oriented.
2. TCP's sliding window is of variable size; the one we discussed in the data link layer was of fixed size.
Sliding window

The window is opened, closed, or shrunk. These three activities, as we will see, are under the control of the receiver (and depend on congestion in the network), not the sender.
The sender must obey the commands of the receiver in this matter. Opening a window means moving the right wall to the right. This allows more new bytes in the buffer to become eligible for sending.
Closing the window means moving the left wall to the right. This means that some bytes have been acknowledged and the sender need not worry about them anymore.
Shrinking the window means moving the right wall to the left.
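The three wall movements can be modeled directly. This is a toy model of the idea, not a real TCP implementation; the byte offsets are purely illustrative.

```python
class SendWindow:
    """Toy model of the TCP send window as two walls (byte offsets).
    Bytes in [left, right) are eligible for sending."""
    def __init__(self, left=0, right=0):
        self.left, self.right = left, right

    def open(self, n):     # right wall moves right: more bytes eligible
        self.right += n

    def close(self, n):    # left wall moves right: n bytes acknowledged
        self.left += n

    def shrink(self, n):   # right wall moves left (strongly discouraged)
        self.right -= n

    def size(self):
        return self.right - self.left
```

For example, a 10-byte window that has 4 bytes acknowledged (close) and 6 more bytes advertised (open) ends up with 12 sendable bytes.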

The size of the window at one end is determined by the lesser of two values: the receiver window (rwnd) or the congestion window (cwnd).
The receiver window is the value advertised by the opposite end in a segment containing an acknowledgment. It is the number of bytes the other end can accept before its buffer overflows and data are discarded.
The congestion window is a value determined by the network to avoid congestion
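In code, the rule that the usable window is the lesser of the two values is a one-liner:

```python
def effective_window(rwnd, cwnd):
    """The usable send window is the lesser of the receiver-advertised
    window (rwnd) and the congestion window (cwnd)."""
    return min(rwnd, cwnd)

# e.g. the receiver advertises 8000 bytes but congestion control allows 4000
print(effective_window(8000, 4000))   # 4000
```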
Window management in TCP
When the window is 0, the sender may not normally send segments, with two exceptions.
1) Urgent data may be sent, for example, to allow the user to kill the process running on the remote machine.
2) The sender may send a 1-byte segment to force the receiver to reannounce the next byte expected and the window size. This packet is called a window probe.
The TCP standard explicitly provides this option to prevent deadlock if a window update ever gets lost.

Senders are not required to transmit data as soon as they come in from the application. Neither are receivers required to send acknowledgements as soon as possible.
For example, in the figure, when the first 2 KB of data came in, TCP, knowing that it had a 4-KB window, would have been completely correct in just buffering the data until another 2 KB came in, to be able to transmit a segment with a 4-KB payload. This freedom can be used to improve performance.

[Figure: a remote terminal application. 1. The Telnet client sends the typed character to the host with the Telnet server; 2. the server interprets the character; 3. the server sends back an echo of the character and/or output.]
Remote terminal applications (e.g., Telnet) send characters to a server. The server interprets each character and sends the output produced at the server back to the client.

For each character typed, you see three packets:


Client → Server: send typed character
Server → Client: echo of character (or user output) and acknowledgement for first packet
Client → Server: acknowledgement for second packet
Delayed Acknowledgement

• TCP delays transmission of ACKs for up to 500 ms.

• This avoids sending ACK packets that do not carry data.

– The hope is that, within the delay, the receiver will have data of its own ready to send back. Then, the ACK can be piggybacked with a data segment.
Exceptions:

• An ACK should be sent for every full-sized segment.

• Delayed ACK is not used when packets arrive out of order

Although delayed acknowledgements reduce the load placed on the network by the receiver, a sender that sends multiple short
packets (e.g., 41-byte packets containing 1 byte of data) is still operating inefficiently. A way to reduce this usage is known as Nagle’s
algorithm (Nagle, 1984).
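A minimal sketch of Nagle's idea follows. It is illustrative only: real TCP stacks interleave this with the MSS, delayed ACKs, and socket options such as TCP_NODELAY.

```python
class NagleSender:
    """Toy model of Nagle's algorithm: a small segment is sent immediately
    only if no previously sent data is unacknowledged; otherwise bytes are
    buffered until an ACK arrives or a full segment (mss bytes) accumulates."""
    def __init__(self, mss=1460):
        self.mss = mss
        self.buffer = b""
        self.unacked = False
        self.sent = []                 # segments "put on the wire"

    def write(self, data):
        self.buffer += data
        self._try_send()

    def ack(self):                     # ACK for all outstanding data arrives
        self.unacked = False
        self._try_send()

    def _try_send(self):
        # Send while we have a full segment, or nothing is in flight.
        while self.buffer and (len(self.buffer) >= self.mss or not self.unacked):
            seg, self.buffer = self.buffer[:self.mss], self.buffer[self.mss:]
            self.sent.append(seg)
            self.unacked = True
```

Three one-byte writes then produce only two segments: the first byte goes out at once, and the next two coalesce and leave together once the ACK returns, which is exactly how Nagle's algorithm avoids streams of tiny packets.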
