Lecture-13-17_DC

The document discusses the functions of the data link layer in data communications, focusing on data link control and media access control. It explains framing techniques, flow control methods, and error control protocols such as Stop-and-Wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ. The content emphasizes the importance of efficient data transmission and error correction mechanisms to ensure reliable communication between nodes.

LECTURE 13 - 17

Course Name: Data Communications (DC)

Course Code : IT204

Credits: 4

1
Task of Data Link Layer
The two main functions of the data link layer are data link control and media access
control.
Data link control deals with the design and procedures for communication between two
adjacent nodes.
Media access control deals with how the nodes share the link.
Data link control functions include framing, flow and error control, and software
implemented protocols that provide smooth and reliable transmission of frames between
nodes.
A protocol is a set of rules, implemented in software and run by the two nodes, for data exchange
at the data link layer.

2
Framing
Data transmission in the physical layer means moving bits in the form of a signal from the
source to the destination.
The physical layer provides bit synchronization to ensure that the sender and receiver use
the same bit durations and timing.
The data link layer needs to pack bits into frames, so that each frame is distinguishable from
another.
Our postal system practices a type of framing.
The simple act of inserting a letter into an envelope separates one piece of information from
another; the envelope serves as the delimiter.

3
Framing
Framing in the data link layer separates a message from one source to a destination, or from
other messages to other destinations, by adding a sender address and a destination
address.
Although the whole message could be packed into one frame, this is not normally done.
In that case the frame would be very large, making flow and error control very inefficient.
When a message is carried in one very large frame, even a single-bit error would require the
retransmission of the whole message.
When a message is divided into smaller frames, a single-bit error affects only that small
frame.

4
Framing
Fixed-Size Framing
In fixed-size framing, there is no need for defining the boundaries of the frames; the size itself can
be used as a delimiter.
An example of this type of framing is the ATM wide-area network.

Variable-Size Framing
Variable-size framing is prevalent in local-area networks.
In variable-size framing, we need a way to define the end of the frame and the beginning of the
next.
Historically, two approaches were used for this purpose:
(a) Character-oriented approach
(b) Bit-oriented approach

5
A frame in a character-oriented protocol
In a character-oriented protocol, data to be carried are 8-bit characters from a coding system
such as ASCII.
The header normally carries the source and destination addresses and other control
information.
The trailer carries error detection or error correction redundant bits, which are also multiples of 8
bits.
To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the
end of a frame.

6
A frame in a character-oriented protocol
Character-oriented framing was popular when only text was exchanged by the data link layers.
The flag could be selected to be any character not used for text communication.
Now, however, we send other types of information such as graphs, audio, and video.
Any pattern used for the flag could also be part of the information.
If the receiver encounters this pattern in the middle of the data, it assumes it has reached the end of the
frame.
To fix this problem, an escape character (ESC), which has a predefined bit pattern, is stuffed into
the data section.
Whenever the receiver encounters the ESC character, it removes it from the data section and
treats the next character as data, not a delimiting flag.

7
Byte stuffing and unstuffing

Byte stuffing is the process of adding 1 extra byte whenever there is a flag or escape (ESC) character in the text.

Character-oriented protocols present another problem in data communications: the universal coding systems in use today, such as Unicode, have 16-bit and 32-bit characters that conflict with 8-bit characters.
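
To make the mechanism concrete, here is a minimal Python sketch of byte stuffing and unstuffing. The FLAG and ESC values below are placeholders chosen for illustration; they are not values fixed by the lecture.

FLAG = 0x7E   # assumed 1-byte flag delimiter
ESC = 0x7D    # assumed 1-byte escape character

def byte_stuff(data: bytes) -> bytes:
    """Insert an ESC byte before every FLAG or ESC byte found in the payload."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)        # the stuffed byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Drop stuffed ESC bytes; the byte following an ESC is always data."""
    out = bytearray()
    escaped = False
    for b in stuffed:
        if escaped:
            out.append(b)
            escaped = False
        elif b == ESC:
            escaped = True         # drop the stuffed ESC itself
        else:
            out.append(b)
    return bytes(out)

payload = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert byte_unstuff(byte_stuff(payload)) == payload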

8
We can say that in general, the tendency is moving toward the bit-oriented protocols.
A frame in a Bit-oriented protocol
In a bit-oriented protocol, the data section of a frame is a sequence of bits to be interpreted by
the upper layer as text, graphic, audio, video, and so on.
However, in addition to headers (and possible trailers), we still need a delimiter to separate one
frame from the other.
Most protocols use a special 8-bit pattern flag 01111110 as the delimiter to define the
beginning and the end of the frame.
This flag can create the same type of problem we saw in the byte-oriented protocols.
If the flag pattern appears in the data, we need to somehow inform the receiver that this is not the
end of the frame.
We do this by stuffing 1 single bit (instead of 1 byte) to prevent the pattern from looking like a
flag.

9
A frame in a Bit-oriented protocol
The strategy is called bit stuffing.
In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added.
This extra stuffed bit is eventually removed from the data by the receiver.

Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in the data, so that the receiver does not mistake the pattern 0111110 for a flag.

[Figure: bit stuffing and unstuffing]
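
A companion Python sketch of bit stuffing and unstuffing; the string of '0'/'1' characters is used purely for readability and is an assumption of this illustration, not a wire format.

def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the '0' that the sender inserted after five consecutive '1's."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this is the stuffed 0; discard it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

data = "011111101111110"      # contains flag-like bit patterns
assert bit_unstuff(bit_stuff(data)) == data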

10
Outline of the Lecture
Why flow and error control?
Flow control techniques:
▪ Stop-and-wait flow control
▪ Sliding-window flow control
Performance of the flow control techniques
Backward Error Correction approaches:
▪ Stop-and-wait ARQ
▪ Go-back-N ARQ
▪ Selective-repeat ARQ

11
Why Flow and Error Control?
For reliable and efficient data communication a great deal of coordination is necessary
between at least two machines.
Constraints:
▪ Both Sender and receiver have limited speed
▪ Both Sender and Receiver have limited memory (Storage or Buffer)
Requirements:
▪ A fast sender should not overwhelm a slow receiver, which must perform a certain amount of
processing before passing the data on to the higher-level software.
▪ If errors occur during transmission, it is necessary to devise mechanisms to correct them

12
Stop-and-Wait Flow Control
The simplest form of flow control.
The source transmits a data frame.
After receiving the frame, the destination indicates its willingness to accept another frame by
sending back an ACK frame acknowledging the frame just received.
The source must wait until it receives the ACK frame before sending the next data frame.

A transmitter sends a frame, then stops and waits for an acknowledgment.
• If a positive acknowledgment (ACK) is received, the next frame is sent.
• If no acknowledgment is received, the same frame is transmitted again.

[Figure: stop-and-wait timing diagram showing the wait time after each frame]

13
Link Utilization in Stop-and-Wait

[Figure: frame propagation between Tx and Rx, illustrating the effect of the parameter a]

a > 1: the sender completes transmission of the entire frame before the leading bits of the frame arrive at the receiver.

14
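
For reference, the standard utilization expression for stop-and-wait flow control (not printed on the slide, but consistent with the definition of a used later) is:

U = Transmission Time / (Transmission Time + 2 x Propagation Time) = 1 / (1 + 2a), where a = Propagation Time / Transmission Time

So for a > 1 the line is idle most of the time, which motivates the sliding-window scheme on the next slides.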
Sliding Window Flow Control
With the use of multiple frames for a single message, the stop-and-wait protocol does not
perform well.

✔ Only one frame at a time can be in transit.

In stop-and-wait flow control, if a>1, serious inefficiencies result.

Efficiency can be greatly improved by allowing multiple frames to be in transit at the same
time.

Efficiency can also be improved by making use of a full-duplex line.

[Figure: full-duplex link between stations A and B]

An ACK from the receiver can be sent as part of an information frame; this is called piggybacking.
15
Sliding Window (Sender and Receiver)

[Figure: the sender's sliding window and the receiver's sliding window]

The receiver also maintains a window of size 1.
The receiver acknowledges a frame by sending an ACK frame that includes the sequence number of the next frame expected. This also explicitly announces that it is prepared to receive the next N frames, beginning with the number specified.
The scheme can be used to acknowledge multiple frames. The receiver could receive frames 2, 3, and 4 but withhold the ACK until frame 4 has arrived. By returning an ACK with sequence number 5, it acknowledges frames 2, 3, and 4 at one time.
The receiver needs a buffer of size 1.

[Figure: exchange of Frame 0 and ACK 1]

16
Sliding Window Flow Control

[Figure: sliding-window example with k = 3 (sequence numbers 0 through 7); the sender and receiver windows advance as frames are sent and acknowledgments are returned]
17
Piggybacking
The actual window size need not be the maximum possible size for a given sequence number
length.
▪ For a 3-bit sequence number, a window size of 4 can be configured.
If two stations exchange data, each needs to maintain two windows. To save communication
capacity, a technique called piggybacking is used.
▪ Each data frame includes a field that holds the sequence number of that frame plus a
field that holds the sequence number used for the ACK.
▪ If a station has an ACK but no data to send, it sends a separate ACK frame.

[Figure: frame format carrying both a frame sequence number field and an ACK number field]
18
Link Utilization in Sliding-Window
The link utilization is

U = 1,              for N >= 2a + 1
U = N / (1 + 2a),   for N < 2a + 1

where N = the window size, and a = Propagation Time / Transmission Time.

Example: k = 5, so N = 2^5 - 1 = 31

a = 10:  N >= 2a + 1 = 21, so U = 1
a = 200: N < 2a + 1 = 401, so U = 31 / (1 + 400) = 31/401 ≈ 0.077
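
A small Python check of these figures (a sketch; the function names are mine, not the slide's):

def stop_and_wait_utilization(a: float) -> float:
    """U = 1 / (1 + 2a) for stop-and-wait flow control."""
    return 1.0 / (1.0 + 2.0 * a)

def sliding_window_utilization(N: int, a: float) -> float:
    """U = 1 if N >= 2a + 1, otherwise N / (1 + 2a)."""
    return 1.0 if N >= 2 * a + 1 else N / (1.0 + 2.0 * a)

N = 2 ** 5 - 1                                   # k = 5 bits -> N = 31
print(sliding_window_utilization(N, a=10))       # 1.0
print(sliding_window_utilization(N, a=200))      # 31/401, about 0.077
print(stop_and_wait_utilization(a=200))          # about 0.0025, for comparison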

19
Protocols
Now let us see how the data link layer can combine framing, flow control, and error control to
achieve the delivery of data from one node to another.
The protocols are normally implemented in software by using one of the common
programming languages.
To make our discussions language-free, we have written in pseudocode a version of each
protocol that concentrates mostly on the procedure instead of delving into the details of language
rules.

Stop-and-wait     ->  Stop-and-Wait ARQ
Sliding window    ->  Go-Back-N ARQ
                      Selective Repeat ARQ

Taxonomy of protocols
20
Backward Error Control
Model of frame transmission:
▪ Data are sent as a sequence of frames
▪ Frames arrive in the same order as they are sent
▪ Each transmitted frame suffers an arbitrary and variable amount of delay before reception.
In addition to above, following two types of error may occur:
Lost frame: A frame fails to arrive at the other side.
▪ A noise burst may damage a frame to such an extent that it is not recognizable at the receiving end
Damaged frame: A recognizable frame does arrive, but some of the bits are in error.
Most common techniques for error control are based on some or all of the following:
▪ Error Detection: We have already discussed.
▪ Positive Acknowledgement: The destination returns a positive acknowledgement to successfully
received error-free frames.
▪ Retransmission after timeout: The source retransmits a frame that has not been acknowledged after a
predetermined amount of time.
▪ Negative acknowledgement and retransmission: The destination returns a negative
acknowledgement to frames in which an error is detected. The source retransmits such frames.
21
Error Control Techniques
Collectively, the mechanisms are referred to as Automatic Repeat Request (ARQ).
▪ Objective is to turn an unreliable data link into a reliable one.
Three versions of ARQ are:
- Stop-and-wait - Go-back-N - Selective-repeat

Although the Stop-and-Wait Protocol gives us an idea of how to add flow control to its
predecessor, noiseless channels are nonexistent. We discuss three protocols in this section
that use error control.

Error Control
▪ Stop-and-wait ARQ
▪ Sliding window ARQ
   • Go-back-N
   • Selective-repeat
22
Stop-and-Wait ARQ
Based on the stop-and-wait flow control technique.
▪ The source station transmits a single frame and then waits for an acknowledgement (ACK).
▪ No other data frame can be sent until the destination station’s reply arrives at the source
station.
To take care of lost and damaged frames, the stations are equipped with:
▪ A timer: If no recognizable ACK is received when the timer expires at the end of the time-out
interval, then the same frame is sent again.
▪ Requires that the transmitter maintain a copy of a transmitted frame until an ACK is received for
it.

23
Stop-and-Wait ARQ
The ACK frame may be damaged.
The sender will time out and resend the same frame.
The receiver then receives the same frame twice.

How to identify duplicate frames?
A modulo-2 numbering scheme is used: frames are alternately labelled 0 or 1, and positive acknowledgements are of the form ACK0 or ACK1.

[Figure: exchange of Frame 0 / Frame 1 and ACK 0 / ACK 1 between sender and receiver]

Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting the frame when the timer expires.

In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The sequence numbers are based on modulo-2 arithmetic.
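
A minimal, single-process Python sketch of the sender-side logic (my own simplification with a toy unreliable channel; it is not the lecture's pseudocode):

import random

def unreliable_send(frame, loss_prob=0.3):
    """Toy channel: returns the ACK number (next expected frame), or None on loss."""
    if random.random() < loss_prob:
        return None                       # frame or ACK lost; the sender will time out
    return (frame["seq"] + 1) % 2         # receiver acknowledges with the next expected number

def stop_and_wait_send(messages):
    seq = 0
    for payload in messages:
        frame = {"seq": seq, "data": payload}          # keep a copy for retransmission
        while unreliable_send(frame) != (seq + 1) % 2:
            pass                          # timeout or damaged/duplicate ACK: resend the same frame
        seq = (seq + 1) % 2               # modulo-2 sequence numbering
    return "all frames delivered"

print(stop_and_wait_send(["frame A", "frame B", "frame C"]))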


24
Efficiency of Stop-and-Wait ARQ
The Stop-and-Wait ARQ protocol is very inefficient if our channel is thick and long.
By thick, we mean that our channel has a large bandwidth.
By long, we mean the round-trip delay is long.
The product of these two is called the bandwidth-delay product.
The bandwidth-delay product is a measure of the number of bits we can send out of our system while waiting for news
from the receiver.
(Q. 1) Assume that, in a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1 bit takes 20 ms to make a
round trip. What is the bandwidth-delay product? If the system data frames are 1000 bits in length, what is the utilization
percentage of the link?
Solution
The bandwidth-delay product is (1 x 10^6 bps) x (20 x 10^-3 s) = 20,000 bits. The system could send 20,000 bits during one round trip, but each cycle sends only one 1000-bit frame, so the link utilization is 1000/20,000 = 5 percent.
Advantages:
The main advantage of stop-and-wait ARQ is its simplicity.
It requires minimum buffer size.
Disadvantage:
It makes highly inefficient use of communication links, particularly when “a” is large
25
Go-back-N ARQ
Based on the sliding window protocol.
▪ Most commonly used.
Basic Concept:
▪ A station may send a series of frames sequentially, up to a maximum number.
▪ The number of unacknowledged frames outstanding is determined by the window size, using the sliding-window flow control technique.
▪ In case of no error, the destination will acknowledge incoming frames as usual.
▪ If the destination detects an error in a frame, or it receives a frame out of order, it sends a NAK for that frame (using a reject or REJ frame).
▪ The destination will discard the frame in error and all future frames until the frame in error is correctly received.
▪ The source station, on receiving a REJ, must retransmit the frame in error plus all succeeding frames.
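
A compact Python sketch of the Go-Back-N sender's window bookkeeping (a simplification; the class and method names are my own, not from the lecture):

from collections import deque

class GoBackNSender:
    def __init__(self, window_size: int, seq_bits: int = 3):
        self.N = window_size
        self.modulus = 2 ** seq_bits
        self.next_seq = 0                 # next sequence number to use
        self.outstanding = deque()        # copies kept for possible retransmission

    def can_send(self) -> bool:
        return len(self.outstanding) < self.N

    def send(self, payload):
        assert self.can_send(), "window full: must wait for an ACK"
        frame = (self.next_seq, payload)
        self.outstanding.append(frame)
        self.next_seq = (self.next_seq + 1) % self.modulus
        return frame                      # would be handed to the physical layer

    def ack(self, next_expected: int):
        """Cumulative ACK: slide the window up to (but not including) next_expected."""
        while self.outstanding and self.outstanding[0][0] != next_expected:
            self.outstanding.popleft()

    def timeout_or_rej(self):
        """Go back N: retransmit every outstanding frame, oldest first."""
        return list(self.outstanding)

sender = GoBackNSender(window_size=7)
for i in range(5):
    sender.send(f"data-{i}")
sender.ack(3)                                          # frames 0-2 acknowledged
print([seq for seq, _ in sender.timeout_or_rej()])     # [3, 4] remain to be resent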

26
Selective Repeat ARQ
In this case, only those frames are retransmitted for which a negative acknowledgement (in this case called SREJ) has been received, or for which a timeout has occurred.
More efficient than Go-back-N.
The receiver requires storage buffers to hold out-of-order frames until the frame in error is correctly received.
The receiver must also have the appropriate logic circuitry needed for reinserting the frames in the correct order.
The transmitter is also more complex, because it must be capable of sending frames out of sequence.
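
A Python sketch of the receiver-side buffering that selective repeat requires (my own illustration; the frame representation is an assumption):

class SelectiveRepeatReceiver:
    def __init__(self, seq_bits: int = 3):
        self.modulus = 2 ** seq_bits
        self.expected = 0            # next in-order sequence number to deliver
        self.buffer = {}             # out-of-order frames held until the gap is filled

    def receive(self, seq: int, payload) -> list:
        """Accept a frame; return whatever can now be delivered in order."""
        delivered = []
        self.buffer[seq] = payload           # buffer even if out of order
        while self.expected in self.buffer:  # deliver any contiguous run
            delivered.append(self.buffer.pop(self.expected))
            self.expected = (self.expected + 1) % self.modulus
        return delivered

rx = SelectiveRepeatReceiver()
print(rx.receive(0, "f0"))   # ['f0']
print(rx.receive(2, "f2"))   # []          frame 1 is missing, so f2 is buffered
print(rx.receive(1, "f1"))   # ['f1', 'f2']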

27
Window Size Limit
Consider the following scenario with k =3, and a window size of 7:

▪ Sender sends frames 0 through 6.

▪ Receiver sends RR7 after receiving all the 7 frames.

▪ RR7 gets lost in transit.

▪ Sender times out and retransmits frame 0.

▪ The receiver has already advanced its receive window to receive frames 7, 0, 1, 2, 3, 4, 5.

▪ The receiver wrongly assumes that frame 7 has been lost and frame 0 is accepted as a new
frame.

▪ The problem can be alleviated by using a window size of no more than half the range of
sequence numbers.
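
In summary (standard results, stated here for completeness): with k-bit sequence numbers, Go-Back-N can use a send window of up to 2^k - 1, while Selective Repeat must keep the window to at most 2^(k-1), i.e. half the sequence-number space.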

28
Outline of the Lecture
Why Circuit Switching?
Switched Communication Network
Circuit Switching Fundamentals
▪ Advantages and Disadvantages
Switching Concepts
▪ Space division switching
• Crossbar Switches
▪ Time division Switching
Routing in Circuit-Switched Networks
Signaling in Circuit-Switched Networks

29
Introduction
How do two devices communicate when there are many devices?
One alternative is to establish point-to-point communication between each pair of devices using
mesh topology

Mesh topology is impractical for a large number of devices.


A better alternative is to use switching techniques leading to Switched Communication Network

Mesh Topology

30
Switched Communication Network
The end devices that wish to communicate are called Stations. The switching devices are called
Nodes. Some nodes connect to other nodes and some to attached stations.
The network topology is not regular.
Uses FDM or TDM for node-to-node communication.
Multiple paths exist between a source-destination pair for better network reliability.
The switching nodes are not concerned with the contents of the data.
Their purpose is to provide a switching facility that will move data from node to node until they reach the destination.

[Figure: switched communication network with stations attached to switching nodes]

31
Switching Techniques
Possible Switching Techniques:
Circuit Switching
Message Switching
Packet Switching

32
Switching Techniques
Switching at Physical Layer
At the physical layer, we can have only circuit switching. There are no packets exchanged at the
physical layer.
The switches at the physical layer allow signals to travel in one path or another.
Switching at Data-Link Layer
At the data-link layer, we can have packet switching. However, the term packet in this case
means frames or cells.
Packet switching at the data-link layer is normally done using a virtual-circuit approach.
Switching at Network Layer
At the network layer, we can have packet switching. In this case, either a virtual-circuit approach
or a datagram approach can be used.
Currently the Internet uses a datagram approach, but the tendency is to move to a virtual-circuit
approach.
Switching at Application Layer
At the application layer, we can have only message switching. The communication at the
application layer occurs by exchanging messages.
33
Switching Techniques
Communication via circuit switching implies that there is a dedicated communication path
between two stations.
The path is connected sequence of links between network nodes.
On each physical link, a logical channel is dedicated to the connection.
Circuit Switching Phases
Circuit Establishment
▪ To establish an end-to-end connection before any transfer of data.
▪ Some segments of the circuit may be a dedicated link, while some other segments may be shared.
Data Transfer
▪ Data are transferred from the source to the destination.
▪ The data may be analog or digital, depending on the nature of the network.
▪ The connection is generally full-duplex.
Circuit Disconnect
▪ Terminate connection at the end of data transfer
▪ Signals must be propagated to deallocate the dedicated resources.
34
Circuit Switching
Originally developed for handling voice traffic, but is now also used for data traffic.
Once the circuit is established, the network is transparent to the users.
Information is transmitted at a fixed rate with no delay other than that required for propagation through the
communication medium.
Best known example is the Public Switched Telephone Network (PSTN)

Advantages
Fixed bandwidth, guaranteed capacity (no congestion)
Low variance in end-to-end delay (constant delay)
35
Circuit Switching Disadvantages
Circuit establishment and circuit disconnect introduces extra overhead and delay
Constant data rate from source to destination
Channel capacity is dedicated for the duration of the connection, even if no data is being
transferred.
For voice connection, utilization is typically high
(Statistics: 64-73% time one speaker speaking, 3-7% time both are speaking, 20-30% time both
are silent)
Inefficient for bursty data traffic. In a typical user/host data connection, line utilization is poor.
Other users cannot use the channel even if it is free of traffic.

36
Switching Node
Let us consider the operation of a single circuit switched node comprising a collection of
stations attached to a central switching unit, which establishes a dedicated path between any
two devices that wish to communicate.
Major elements of a single-node network.
Digital Switch: Provides a transparent (full-duplex) signal path between any pair of attached
devices.
Network Interface: represents the functions and hardware needed to connect digital devices
to the network (like telephones).
Control Unit: establishes, maintains, and tears down a connection.

[Figure: single-node network, attached devices connected by full-duplex links to a digital switch, with a control unit and an interface to the network]

37
Space Division Switching
Originally developed for the analog environment, and has been carried over to the digital
domain
In a space-division switch, the signal paths are physically separate from one another (divided in
space).
Essentially a crossbar matrix

The basic building block of the switch is a metallic crosspoint or semiconductor gate that can be enabled or disabled by a control unit.
Xilinx crossbar switches using FPGAs:
▪ Based on a reconfigurable routing infrastructure
▪ High-speed, high-capacity non-blocking switches
▪ Sizes varying from 64x64 to 1024x1024, with data rates of 200 Mbps
38
Blocking and Non-Blocking Networks
An important characteristic of a circuit-switch node is whether it is blocking or non-blocking
A blocking network is one which may be unable to connect two stations because all possible
paths between them are already in use
A non-blocking network permits all stations to be connected (in pairs) at once and grants all
possible connection requests as long as the called party is free
For a network which supports only voice traffic, a blocking configuration may be acceptable,
since most phone calls are of short duration
For data applications, where a connection may remain active for hours, non-blocking
configuration is desirable.

39
Limitations of Crossbar Switch
The number of crosspoints grows with the square of the number of attached stations.

Costly for a large switch

The failure of a crosspoint prevents connection between the two devices whose lines intersect at that crosspoint

The crosspoints are inefficiently utilized

Only a small fraction are engaged even if all of the attached devices are active

The solution is to build multistage space-division switches.


By splitting the crossbar switch into smaller units and interconnecting them, it is possible to build multistage switches with fewer
crosspoints

[Figure: 8x8 three-stage switch built from 4x2 units in the outer stages and 2x2 units in the middle stage]
40
Three-stage Space Division Switch

We can calculate the total number of crosspoints as follows:
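The slide leaves the expression implicit; for an N x N three-stage switch whose first-stage units are n x k (so there are N/n units in the first and third stages and k middle units of size (N/n) x (N/n)), the standard count, consistent with the 8x8 example on the next slide, is:

Total crosspoints = 2 x (N/n) x (n x k) + k x (N/n)^2 = 2Nk + k(N/n)^2

For N = 8, n = 4, k = 2 this gives 2(8)(2) + 2(8/4)^2 = 32 + 8 = 40 crosspoints, compared with 8^2 = 64 for a single-stage crossbar.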

41
Blocking in Multistage Switches
8x8 Switch using 4x2 and 2x2 crosspoint switch
The number of crosspoints needed goes down from 64 to 40
There is more than one path through the network to connect two endpoints, thereby increasing reliability
Multistage switches may lead to Blocking
The problem may be tackled by increasing the number or size of the intermediate switches, which also increases the cost.

[Figure: the same 8x8 multistage switch (4x2 and 2x2 units) with established connections occupying the intermediate links, illustrating how a new request can be blocked]
42
Time Division Switching
Both voice and data can be transmitted using digital signals.
All modern circuit switches use the digital time-division multiplexing (TDM) technique for
establishing and maintaining circuits.
▪ Synchronous TDM allows multiple low-speed bit streams to share a high-speed line.
▪ A set of inputs is sampled in a round robin manner. The samples are organized serially into
slots (channels) to form a recurring frame of slots.
▪ During successive time slots, different I/O pairings are enabled, allowing a number of
connections to be carried over the shared bus.
To keep up with the input lines, the data rate on the bus must be high enough so that the slots
recur sufficiently frequently.
For 100 full-duplex lines at 19.2 kbps, the data rate on the bus must be greater than 1.92
Mbps.
▪ The source destination pairs corresponding to all active connections are stored in the
control memory.
✔ Thus the slots need not specify the source and destination addresses.
43
TDM with and Without TSI
[Figure: TDM without TSI versus TDM with TSI; the time-slot interchange remaps the slots, here 1->3, 2->4, 3->1, 4->2]

Time-Slot Interchange (TSI):
▪ Writing into the TSI memory is sequential.
▪ Reading out of the TSI memory is selective.
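
A toy Python illustration of the time-slot interchange idea (sequential write, selective read; the mapping used is the example one from the figure, not a prescribed value):

def time_slot_interchange(frame, output_order):
    """frame: slot contents written sequentially (index = input slot).
    output_order: for each output slot, which input slot to read (selective read)."""
    return [frame[src] for src in output_order]

incoming = ["slot-1", "slot-2", "slot-3", "slot-4"]   # written sequentially into RAM
mapping = [2, 3, 0, 1]                                # read order (0-based): slots 3, 4, 1, 2
print(time_slot_interchange(incoming, mapping))
# ['slot-3', 'slot-4', 'slot-1', 'slot-2'], i.e. input 1 -> output 3, 2 -> 4, 3 -> 1, 4 -> 2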

44
Time and Space Switch

A simple TST switch that consists of two time stages and one space stage and has 12 inputs and 12
outputs.
Instead of one time-division switch, it divides the input into three groups (of four inputs each) and directs
them to three timeslot interchanges.
The result is that the average delay is one-third of what would result from using one time-slot interchange to handle all 12 inputs.
45
Routing in Circuit-Switched Network
In large circuit-switched network, connections often require a path through more than one switches.
Basic objective: Efficiency and Resilience
Two basic approaches: Static and Dynamic
Static Routing
Routing function in public switched telecommunication networks (PSTN) has been traditionally quite simple and static.
Switches are organized as a tree structure.
To add some resilience to the network, additional high-usage trunks are added that cut across the tree structure to
connect exchanges with high volumes of traffic between them.
Cannot adapt to changing conditions
Leads to congestion in case of failure

Dynamic Routing
To cope with growing demands, all providers presently use a dynamic approach.
▪ Routing decisions are influenced by current traffic conditions.
▪ Switching nodes have a peer relationship with each other rather than a hierarchical one.
▪ Routing is more complex and more flexible.
▪ Two techniques:
✔ Alternate and Adaptive routing
46
Alternate Routing Approach
The possible routes to be used between two end offices are predetermined.
▪ It is the responsibility of the originating switch to select the appropriate route for each call.
In practice, usually a different set of pre-planned routes is used for different time periods.
▪ Take advantage of different traffic patterns in different time zones and different times of day.

47
Outline of the Lecture
Limitations of Circuit Switching
Message Switching Concepts
Packet Switching Concepts
Packet Switching Techniques:
▪ Virtual Circuit
▪ Datagram
Datagram versus virtual circuit
Internal and External Operations
Circuit Switching versus Packet Switching

48
Problems with Circuit Switching
Network resources are dedicated to a particular connection.
Two shortcomings for data communication:
▪ In a typical user/host data connection, line utilization is very low.
▪ Provides facility for data transmission at a constant rate.
✔ Data transmission pattern may not ensure this.
✔ Limits the utility of the method.

49
Message Switching
Basic Idea:
Each network node receives and stores the message
Determines the next leg of the route, and
Queues the message to go out of that link.

Advantages:
Line efficiency is greater (Sharing of links).
Data rate conversion is possible.
Even under heavy traffic, packets are accepted,
possibly with a greater delivery delay.
Message priorities can be used.

Disadvantages:
A message of large size monopolizes the link and storage.
50
Packet Switching
New form of architecture for long-distance data communication (1970).
Packet switching technology has evolved over time.
Basic technology has not changed
Packet switching remains one of the few effective technologies for long-distance data communication.

Packet Switching: Basic Idea

Data are transmitted in short packets (a few kilobytes).
A longer message is broken up into a series of packets.
Every packet contains some control information in its header (required for routing and other purposes).

[Figure: a message divided into packets, each carrying a header H]

As mentioned earlier, a packet-switching network breaks up a message into packets.
Two approaches are commonly used for handling these packets:
▪ Virtual Circuit
▪ Datagram
51
Advantages of Packet Switching Techniques
Line Efficiency
Line efficiency is greater, because a single node-to-node link can be dynamically shared by many
packets over time. The packets are queued up and transmitted as rapidly as possible over the
link. By contrast, with circuit switching, time on a node-to-node link is preallocated using
synchronous time-division multiplexing. Much of the time, such a link may be idle because a
portion of its time is dedicated to a connection that is idle.
Data-Rate Conversion
A packet-switching network can perform data-rate conversion. Two stations of different data rates can
exchange packets because each connects to its node at its proper data rate.
Blocking
When traffic becomes heavy on a circuit-switching network, some calls are blocked; that is, the network
refuses to accept additional connection requests until the load on the network decreases. On a
packet-switching network, packets are still accepted, but delivery delay increases.

Packet Priorities
Priorities can be used. If a node has a number of packets queued for transmission, it can transmit the
higher-priority packets first. These packets will therefore experience less delay than lower-priority packets.

52
Datagram Approach
Each packet is treated independently, with no reference to packets that have gone before.
Every intermediate node has to take routing decisions.
Every packet contains source and destination addresses.
Intermediate nodes maintain routing tables.
Advantages:
Call setup phase is avoided (for transmission of a
few packets, datagram will be faster).
Because it is more primitive, it is more flexible.
Congestion/failed link can be avoided (more
reliable).

Problems:
Packets may be delivered out of order.
If a node crashes momentarily, all of its queued
packets are lost.
53
Routing Table in Datagram Packet Switching

Delay in Datagram Packet Switching

54
Datagram Approach

55
Virtual Circuit Approach
A preplanned route is established before any packets are sent.
Call Request and Call Accept packets are used to establish the connection.
Route is fixed for the duration of the logical connection (like circuit switching).
▪ Each packet contains a virtual circuit identifier as well as data.
▪ Each node on the route knows where to forward packets.
A Clear Request packet issued by one of the two stations terminates the
connection.

Virtual Circuit Approach Main Characteristics

Route between stations is set up prior to data transfer.
A packet is buffered at each node and queued for output over a line.
A data packet needs to carry only the virtual circuit identifier for effecting routing decisions.
Intermediate nodes take no routing decisions.
Often provides sequencing and error control.
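
A small Python sketch of the per-node table lookup implied here (the table entries are invented for illustration and would normally be installed during call setup):

# Per-switch table: (incoming port, incoming VCI) -> (outgoing port, outgoing VCI).
vc_table = {
    (1, 14): (3, 22),
    (2, 71): (4, 41),
}

def forward(in_port: int, packet: dict) -> tuple:
    """Forward a packet using only its VCI; no destination address lookup is needed."""
    out_port, out_vci = vc_table[(in_port, packet["vci"])]
    packet["vci"] = out_vci          # the VCI is rewritten hop by hop
    return out_port, packet

print(forward(1, {"vci": 14, "data": "hello"}))   # (3, {'vci': 22, 'data': 'hello'})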
56
Switch and Tables in Virtual Circuit Packet Switching

[Figure: setup request and setup acknowledgement packets establishing the virtual-circuit tables]

Delay in Virtual Circuit Packet Switching

[Figure: timing diagram showing the call-setup delay followed by per-hop packet transmission delay]

57
Switch and Tables in Virtual Circuit Packet Switching

58
Datagram Approach

59
Datagram Vs Virtual Circuit
In datagram service
▪ Each packet is treated independently
▪ Call setup phase is avoided
▪ Inherently more flexible and reliable

In virtual circuit service
▪ Nodes need not make routing decisions
▪ More difficult to adapt to congestion
▪ Sequence order is maintained
▪ All packets are sent through the same preplanned route

60
Packet Size
As the packet size is decreased, the transmission time decreases, until the data in each packet
become comparable in size to the control information.
There is a close relationship between packet size and transmission time.
Example: Assume that there is a virtual circuit from station A through nodes 1, 2, 4, and 5, to
station C. Message size is 32 bytes, packet header is 4 bytes.

No. of Packets | Total No. of Bytes | Transmission Time
       1       |        108         |        108
       2       |        120         |         80
       4       |        144         |         72
       8       |        192         |         80
      16       |        288         |        120

In spite of the increase in overhead, the transmission time at first decreases because of the parallelism in transmission; beyond a point, the growing header overhead makes it increase again (see the 8- and 16-packet rows).
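
A rough model behind these numbers (my reading of the example, ignoring propagation and processing delay): if the message is split into p packets of L bytes each (data plus header) that must cross h store-and-forward hops, then

Total transfer time ≈ (h + p - 1) x L   (in byte-times)

The tabulated values correspond to h = 3 hops; for example, the 4-packet row gives (3 + 4 - 1) x 12 = 72.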

61
Circuit and Packet Switching Comparison
Circuit Switching                        | Datagram Packet Switching         | Virtual Circuit Packet Switching
Dedicated path                           | No dedicated path                 | No dedicated path
Path established for entire conversation | Route established for each packet | Route established for entire conversation
Call set up delay                        | Packet transmission delay         | Call set up delay, packet transmission delay
Overload may block call set up           | Overload increases packet delay   | Overload may block call set up and increases packet delay
No speed or code conversion              | Speed or code conversion          | Speed or code conversion
Fixed bandwidth                          | Dynamic bandwidth                 | Dynamic bandwidth
No overhead bits after call set up       | Overhead bits in each packet      | Overhead bits in each packet

62
Circuit and Packet Switching Comparison

63
Outline of the Lecture
Introduction
Broadcast Networks
Issues in MAC
Goals of MAC
MAC Techniques
Random Access MAC Techniques
▪ ALOHA
▪ CSMA
▪ CSMA/CD
▪ CSMA/CA

64
Outline of the Lecture
Previously we discussed data link control to provide a link with reliable communication.
We assumed that there is an available dedicated link between the sender and the receiver. This
assumption is not always true.
If we have a dedicated link, as when we connect to the Internet using PPP as the data link control protocol,
then the assumption is true and we do not need anything else.
On the other hand, if we use our cellular phone to connect to another cellular phone, the channel is not
dedicated.
We can consider the data link layer as two sublayers.
The upper sublayer is responsible for data link control, and the lower sublayer is responsible for
resolving access to the shared media.

• IEEE actually made this division for LANs.


• The upper sublayer that is responsible for flow and error control is called the logical link control (LLC)
layer.
• The lower sublayer that is mostly responsible for multiple access resolution is called the media access
control (MAC) layer.
65
Introduction
Types of Networks:
▪ Switched Communication Networks (Peer-to-Peer)
Users are interconnected by means of transmission lines, multiplexers and switches.
Ex. Telephone Networks, ATM, X.25, SONET
▪ Broadcast Networks (shared medium)
A single transmission medium is shared by all the users, and information is broadcast by a user into the medium.

Broadcast Networks Examples:


▪ Multi-tapped bus
▪ Ring networks sharing a medium
▪ Satellite communication using sharing of uplink and downlink
frequency bands
▪ Packet Radio network
▪ Wireless communication stations sharing a frequency band

Broadcast networks require a protocol to orchestrate the transmissions from the users.
66
Issues in MAC
The question is “who goes next?”
The protocols used for this purpose are known as Medium Access Control (MAC) techniques.
The key issues involved here are where and how the control is exercised.

Where?
Centralized: a designated station has an authority to grant access to the network.
▪ Simple logic at each station
▪ Greater control to provide features like priority, overrides and guaranteed bandwidth
▪ Easy coordination
▪ Lower reliability
Distributed: Stations can dynamically determine transmission order.
▪ Complex, reliable and scalable

67
Random Access
In random access or contention methods, no station is superior to another station and none is
assigned the control over another.
No station permits, or does not permit, another station to send.
At each instance, a station that has data to send uses a procedure defined by the protocol to
make a decision on whether or not to send.

In a random access method, each station has the right to the medium without being controlled by
any other station.
However, if more than one station tries to send, there is an access conflict (a collision) and the
frames will be either destroyed or modified.
To avoid access conflict or to resolve it when it happens, each station follows a procedure that
answers the following questions:
When can the station access the medium?
What can the station do if the medium is busy?
How can the station determine the success or failure of the transmission?
What can the station do if there is an access conflict?

68
MAC Techniques

ALOHA can be Pure ALOHA or Slotted ALOHA.
Reservation can be Centralized or Distributed.
69
Pure ALOHA
Vulnerable time = 2 x frame transmission time
Throughput: S = G x e^(-2G)
Maximum throughput: S_max = 1/(2e) ≈ 0.184, at G = 0.5
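
A quick numeric check of the maximum, using the throughput expression above (a sketch):

import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * exp(-2G) for pure ALOHA with offered load G."""
    return G * math.exp(-2 * G)

print(pure_aloha_throughput(0.5))                                     # about 0.1839, the maximum
print(max(pure_aloha_throughput(g / 1000) for g in range(1, 3000)))   # about 0.1839 again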

70
