Lecture-13-17_DC
Credits: 4
1
Task of Data Link Layer
The two main functions of the data link layer are data link control and media access
control.
Data link control deals with the design and procedures for communication between two
adjacent nodes.
Media access control deals with how the link is shared.
Data link control functions include framing, flow and error control, and software
implemented protocols that provide smooth and reliable transmission of frames between
nodes.
A protocol is a set of rules, implemented in software and run by the two nodes, for data exchange at the data link layer.
2
Framing
Data transmission in the physical layer means moving bits in the form of a signal from the
source to the destination.
The physical layer provides bit synchronization to ensure that the sender and receiver use
the same bit durations and timing.
The data link layer needs to pack bits into frames, so that each frame is distinguishable from
another.
Our postal system practices a type of framing.
The simple act of inserting a letter into an envelope separates one piece of information from
another; the envelope serves as the delimiter.
3
Framing
Framing in the data link layer separates a message from one source to a destination, or from
other messages to other destinations, by adding a sender address and a destination
address.
Although the whole message could be packed into one frame, this is not normally done: the frame would be very large, making flow and error control very inefficient.
When a message is carried in one very large frame, even a single-bit error would require the
retransmission of the whole message.
When a message is divided into smaller frames, a single-bit error affects only that small
frame.
4
Framing
Fixed-Size Framing
In fixed-size framing, there is no need for defining the boundaries of the frames; the size itself can
be used as a delimiter.
An example of this type of framing is the ATM wide-area network.
Variable-Size Framing
Variable-size framing is prevalent in local-area networks.
In variable-size framing, we need a way to define the end of the frame and the beginning of the
next.
Historically, two approaches were used for this purpose:
(a) Character-oriented approach
(b) Bit-oriented approach
5
A frame in a character-oriented protocol
In a character-oriented protocol, data to be carried are 8-bit characters from a coding system
such as ASCII.
The header normally carries the source and destination addresses and other control
information.
The trailer, which carries error detection or error correction redundant bits, is also a multiple of 8 bits.
To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the
end of a frame.
6
A frame in a character-oriented protocol
Character-oriented framing was popular when only text was exchanged by the data link layers.
The flag could be selected to be any character not used for text communication.
Now, however, we send other types of information such as graphs, audio, and video.
Any pattern used for the flag could also be part of the information.
When the receiver encounters this pattern in the middle of the data, it mistakes it for the end of the frame.
To fix this problem, the data section is stuffed with an escape character (ESC), which has a predefined bit pattern.
Whenever the receiver encounters the ESC character, it removes it from the data section and
treats the next character as data, not a delimiting flag.
7
Byte stuffing and unstuffing
Byte stuffing is the process of adding one extra byte (ESC) whenever there is a flag or escape character in the data.
The universal coding systems in use today, such as Unicode, have 16-bit and 32-bit characters that conflict with 8-bit characters.
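As a minimal sketch (not taken from the lecture), byte stuffing and unstuffing can be written as follows; the FLAG and ESC byte values here are illustrative assumptions, not values mandated by a specific protocol.

    FLAG = 0x7E  # assumed 1-byte flag value
    ESC = 0x7D   # assumed 1-byte escape value

    def byte_stuff(data: bytes) -> bytes:
        """Insert ESC before every FLAG or ESC byte found in the data section."""
        out = bytearray()
        for b in data:
            if b in (FLAG, ESC):
                out.append(ESC)      # stuff one extra byte
            out.append(b)
        return bytes(out)

    def byte_unstuff(stuffed: bytes) -> bytes:
        """Remove stuffed ESC bytes; the byte following an ESC is always data."""
        out = bytearray()
        it = iter(stuffed)
        for b in it:
            if b == ESC:
                b = next(it)         # treat the next byte as data, not a delimiter
            out.append(b)
        return bytes(out)

    payload = bytes([0x41, FLAG, 0x42, ESC, 0x43])
    assert byte_unstuff(byte_stuff(payload)) == payload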
8
We can say that in general, the tendency is moving toward the bit-oriented protocols.
A frame in a Bit-oriented protocol
In a bit-oriented protocol, the data section of a frame is a sequence of bits to be interpreted by
the upper layer as text, graphic, audio, video, and so on.
However, in addition to headers (and possible trailers), we still need a delimiter to separate one
frame from the other.
Most protocols use a special 8-bit pattern flag 01111110 as the delimiter to define the
beginning and the end of the frame.
This flag can create the same type of problem we saw in the byte-oriented protocols.
If the flag pattern appears in the data, we need to somehow inform the receiver that this is not the
end of the frame.
We do this by stuffing 1 single bit (instead of 1 byte) to prevent the pattern from looking like a
flag.
9
A frame in a Bit-oriented protocol
The strategy is called bit stuffing.
In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added.
This extra stuffed bit is eventually removed from the data by the receiver.
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in the
data, so that the receiver does not mistake the pattern 0111110 for a flag.
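A minimal sketch of bit stuffing and unstuffing, operating on a string of '0'/'1' characters purely for readability (an assumption for illustration; a real implementation works on raw bits and follows the usual HDLC-style rule of stuffing after every run of five 1s).

    def bit_stuff(bits: str) -> str:
        """Insert a 0 after every run of five consecutive 1s so the data can
        never contain the flag pattern 01111110."""
        out, ones = [], 0
        for b in bits:
            out.append(b)
            ones = ones + 1 if b == '1' else 0
            if ones == 5:
                out.append('0')  # stuffed bit
                ones = 0
        return ''.join(out)

    def bit_unstuff(bits: str) -> str:
        """Remove the 0 that follows every run of five consecutive 1s."""
        out, ones, skip = [], 0, False
        for b in bits:
            if skip:             # this is the stuffed 0; drop it
                skip = False
                ones = 0
                continue
            out.append(b)
            ones = ones + 1 if b == '1' else 0
            if ones == 5:
                skip = True
                ones = 0
        return ''.join(out)

    data = '0111111111100'      # contains a run of more than five 1s
    assert bit_unstuff(bit_stuff(data)) == data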
10
Outline of the Lecture
Why flow and error control?
Flow control techniques:
▪ Stop-and-wait flow control
▪ Sliding-window flow control
Performance of the flow control techniques
Backward Error Correction approaches:
▪ Stop-and-wait ARQ
▪ Go-back-N ARQ
▪ Selective-repeat ARQ
11
Why Flow and Error Control?
For reliable and efficient data communication a great deal of coordination is necessary
between at least two machines.
Constraints:
▪ Both Sender and receiver have limited speed
▪ Both Sender and Receiver have limited memory (Storage or Buffer)
Requirements:
▪ A fast sender should not overwhelm a slow receiver, which must perform a certain amount of
processing before passing the data on to the higher-level software.
▪ If errors occur during transmission, it is necessary to devise a mechanism to correct them
12
Stop-and-Wait Flow Control
The simplest form of flow control.
The source transmits a data frame.
After receiving the frame, the destination indicates its willingness to accept another frame by
sending back an ACK frame acknowledging the frame just received.
The source must wait until it receives the ACK frame before sending the next data frame.
13
Link Utilization in Stop-and-Wait
[Figure: frame exchange between transmitter (Tx) and receiver (Rx) in stop-and-wait]
a > 1: the sender completes transmission of the entire frame before the leading bits of the frame arrive at the receiver.
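For reference, with a defined as the ratio of propagation time to frame transmission time, the link utilization of stop-and-wait is the standard expression (stated here, not derived on the slide):

\[
U \;=\; \frac{T_{\text{frame}}}{T_{\text{frame}} + 2\,T_{\text{prop}}} \;=\; \frac{1}{1 + 2a}
\]

so utilization falls off quickly once a exceeds 1, which is why stop-and-wait performs poorly on long or fast links.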
Sliding Window Flow Control
With the use of multiple frames for a single message, the stop-and-wait protocol does not
perform well.
Efficiency can be greatly improved by allowing multiple frames to be in transit at the same
time.
[Figure: multiple frames in transit between stations A and B]
An ACK from the receiver can be sent as part of an information frame; this is called piggybacking.
15
Sliding Window (Sender and Receiver)
[Figure: sender and receiver sliding windows for k = 3 (sequence numbers 0 through 7), shown advancing as frames are sent and acknowledged]
17
Piggybacking
The actual window size need not be the maximum possible size for a given sequence number
length.
▪ For a 3-bit sequence number, a window size of 4 can be configured.
If two stations exchange data, each needs to maintain two windows (one for sending, one for receiving). To save communication
capacity, a technique called piggybacking is used.
▪ Each data frame includes a field that holds the sequence number of that frame plus a
field that holds the sequence number used for ACK.
▪ If a station has an ACK but no data to send, it sends a separate ACK frame.
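A minimal sketch of the idea (the field names are assumptions): each data frame carries its own sequence number plus a field for the piggybacked ACK.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Frame:
        seq: int                   # sequence number of this data frame
        ack: Optional[int] = None  # piggybacked ACK: sequence number being acknowledged
        payload: bytes = b""       # empty payload means a pure (separate) ACK frame

    piggybacked = Frame(seq=2, ack=4, payload=b"data")  # data and ACK in one frame
    separate_ack = Frame(seq=0, ack=4)                  # ACK pending but no data to send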
18
Link Utilization in Sliding-Window
With a window of W frames, the link utilization is U = W/(1 + 2a) when W < 2a + 1, and U = 1 otherwise.
Example: for a = 200 and W = 31, U = 31/(1 + 400) = 31/401 ≈ 7.7%.
19
Protocols
Now let us see how the data link layer can combine framing, flow control, and error control to
achieve the delivery of data from one node to another.
The protocols are normally implemented in software by using one of the common
programming languages.
To make our discussions language-free, we have written in pseudocode a version of each
protocol that concentrates mostly on the procedure instead of delving into the details of language
rules.
Taxonomy of protocols
20
Backward Error Control
Model of frame transmission:
▪ Data are sent as a sequence of frames
▪ Frames arrive in the same order as they are sent
▪ Each transmitted frame suffers an arbitrary and variable amount of delay before reception.
In addition to the above, the following two types of errors may occur:
Lost frame: A frame fails to arrive at the other side.
▪ A noise burst may damage a frame to such an extent that it is not recognizable at the receiving end
Damaged frame: A recognizable frame does arrive, but some of the bits are in error.
Most common techniques for error control are based on some or all of the following:
▪ Error Detection: We have already discussed.
▪ Positive Acknowledgement: The destination returns a positive acknowledgement to successfully
received error-free frames.
▪ Retransmission after timeout: The source retransmits a frame that has not been acknowledged after a
predetermined amount of time.
▪ Negative acknowledgement and retransmission: The destination returns a negative acknowledgement for frames in which an error is detected. The source retransmits such frames.
21
Error Control Techniques
Collectively, the mechanisms are referred to as Automatic Repeat Request (ARQ).
▪ Objective is to turn an unreliable data link into a reliable one.
Three versions of ARQ are:
▪ Stop-and-wait
▪ Go-back-N
▪ Selective-repeat
Although the Stop-and-Wait Protocol gives us an idea of how to add flow control to its
predecessor, noiseless channels are nonexistent. We discuss three protocols in this section
that use error control.
[Figure: error control taxonomy: Stop-and-Wait ARQ, Go-Back-N ARQ, Selective-Repeat ARQ]
22
Stop-and-Wait ARQ
Based on the stop-and-wait flow control technique.
▪ The source station transmits a single frame and then waits for an acknowledgement (ACK).
▪ No other data frame can be sent until the destination station’s reply arrives at the source
station.
To take care of lost and damaged frames, the stations are equipped with:
▪ A timer: If no recognizable ACK is received when the timer expires at the end of the time-out
interval, then the same frame is sent again.
▪ Requires that the transmitter maintain a copy of a transmitted frame until an ACK is received for
it.
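A minimal sketch of the sender side under these rules; the send/recv_ack channel interface and the timeout value are assumptions used for illustration only.

    TIMEOUT = 2.0   # assumed timeout value, in seconds

    def stop_and_wait_send(frame: bytes, send, recv_ack) -> None:
        """send(frame) transmits the frame; recv_ack(timeout) returns True if an
        error-free ACK arrives before the timer expires, otherwise False."""
        while True:
            send(frame)              # transmit (or retransmit) the stored copy
            if recv_ack(TIMEOUT):    # wait for the ACK until the timer expires
                return               # acknowledged: the copy can now be discarded
            # no recognizable ACK before the timeout: resend the same frame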
23
Stop-and-Wait ARQ
The ACK frame may be damaged.
Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting the frame when the timer expires.
Advantages:
The main advantage of stop-and-wait ARQ is its simplicity.
It requires minimum buffer size.
Disadvantage:
It makes highly inefficient use of communication links, particularly when “a” is large
25
Go-back-N ARQ
Based on sliding window protocol.
Basic Concept:
▪ The destination will discard the frame in error and all subsequent frames until the frame in error is correctly received.
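A minimal sketch of the receiver-side rule (bookkeeping only; the names and the modulo convention are assumptions):

    def go_back_n_receive(expected: int, seq: int, frame_ok: bool, k: int = 3):
        """Accept only the in-order, error-free frame; discard all others.
        Returns (new expected sequence number, cumulative ACK to send or None)."""
        if frame_ok and seq == expected:
            # deliver the frame to the upper layer here
            expected = (expected + 1) % (2 ** k)
            return expected, expected    # cumulative ACK names the next frame expected
        # damaged or out-of-order frame: discard it; the sender will eventually
        # time out, go back, and retransmit from the first unacknowledged frame
        return expected, None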
26
Selective Repeat ARQ
Only those frames are retransmitted for which a negative acknowledgement (called SREJ in this case) has been received or a timeout has occurred.
More efficient than Go-back-N.
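For contrast, a minimal sketch of the selective-repeat receiver (the dictionary buffer is an illustrative assumption): error-free frames are buffered even out of order, and only the damaged or missing frame is ever retransmitted.

    def selective_repeat_receive(buffer: dict, expected: int, seq: int,
                                 payload: bytes, frame_ok: bool, k: int = 3) -> int:
        """Buffer error-free frames even if they arrive out of order, then deliver
        every frame that is now in order to the upper layer.
        Returns the new expected sequence number."""
        if frame_ok:
            buffer[seq] = payload            # accept and hold out-of-order frames
            while expected in buffer:        # deliver any in-order run
                buffer.pop(expected)         # (deliver to the upper layer here)
                expected = (expected + 1) % (2 ** k)
        return expected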
27
Window Size Limit
Consider the following scenario with k = 3 and a window size of 7:
▪ The source sends frames 0 through 6; the receiver accepts them all and advances its receive window to accept frames 7, 0, 1, 2, 3, 4, 5.
▪ If the acknowledgements are lost and the source times out and retransmits frame 0, the receiver wrongly assumes that frame 7 has been lost and accepts the old frame 0 as a new frame.
▪ The problem is avoided by using a window size of no more than half the sequence-number space, i.e., at most 2^(k-1) (a window of 4 for k = 3).
28
Outline of the Lecture
Why Circuit Switching?
Switched Communication Network
Circuit Switching Fundamentals
▪ Advantages and Disadvantages
Switching Concepts
▪ Space division switching
• Crossbar Switches
▪ Time division Switching
Routing in Circuit-Switched Networks
Signaling in Circuit-Switched Networks
29
Introduction
How do two devices communicate when there are many devices?
One alternative is to establish a point-to-point link between each pair of devices, using a mesh topology.
Mesh Topology
30
Switched Communication Network
The end devices that wish to communicate are called Stations. The switching devices are called
Nodes. Some nodes connect to other nodes and some to attached stations.
The network topology is not regular.
FDM or TDM is used for node-to-node communication.
Multiple paths exist between a source-destination pair for better network reliability.
The switching nodes are not concerned with the contents of data
Their purpose is to provide a switching facility that will move data from node to node until they
reach the destination
[Figure: a switched communication network, with stations attached to switching nodes]
31
Switching Techniques
Possible Switching Techniques:
Circuit Switching
Message Switching
Packet Switching
32
Switching Techniques
Switching at Physical Layer
At the physical layer, we can have only circuit switching. There are no packets exchanged at the
physical layer.
The switches at the physical layer allow signals to travel in one path or another.
Switching at Data-Link Layer
At the data-link layer, we can have packet switching. However, the term packet in this case
means frames or cells.
Packet switching at the data-link layer is normally done using a virtual-circuit approach.
Switching at Network Layer
At the network layer, we can have packet switching. In this case, either a virtual-circuit approach
or a datagram approach can be used.
Currently the Internet uses a datagram approach, but the tendency is to move to a virtual-circuit
approach.
Switching at Application Layer
At the application layer, we can have only message switching. The communication at the
application layer occurs by exchanging messages.
33
Switching Techniques
Communication via circuit switching implies that there is a dedicated communication path
between two stations.
The path is connected sequence of links between network nodes.
On each physical link, a logical channel is dedicated to the connection.
Circuit Switching Phases
Circuit Establishment
▪ To establish an end-to-end connection before any transfer of data.
▪ Some segments of the circuit may be a dedicated link, while some other segments may be shared.
Data Transfer
▪ Data is transferred from the source to the destination
▪ The data may be analog or digital, depending on the nature of the network.
▪ The connection is generally full-duplex.
Circuit Disconnect
▪ Terminate connection at the end of data transfer
▪ Signals must be propagated to deallocate the dedicated resources.
34
Circuit Switching
Originally developed for handling voice traffic, but is now also used for data traffic.
Once the circuit is established, the network is transparent to the users.
Information is transmitted at a fixed rate with no delay other than that required for propagation through the
communication medium.
Best known example is the Public Switched Telephone Network (PSTN)
Advantages
Fixed bandwidth, guaranteed capacity (no congestion)
Low variance in end-to-end delay (constant delay)
35
Circuit Switching Disadvantages
Circuit establishment and circuit disconnect introduce extra overhead and delay
Constant data rate from source to destination
Channel capacity is dedicated for the duration of the connection, even if no data is being
transferred.
For a voice connection, utilization is typically high
(statistics: 64-73% of the time one speaker is speaking, 3-7% of the time both are speaking, 20-30% of the time both are silent).
Inefficient for bursty data traffic: in a typical user/host data connection, line utilization is poor.
Other users cannot use the channel even when it is carrying no traffic.
36
Switching Node
Let us consider the operation of a single circuit switched node comprising a collection of
stations attached to a central switching unit, which establishes a dedicated path between any
two devices that wish to communicate.
Major elements of a single-node network.
Digital Switch: Provides a transparent (full-duplex) signal path between any pair of attached
devices.
Network Interface: represents the functions and hardware needed to connect digital devices
to the network (like telephones).
Control Unit: establishes, maintains, and tears down a connection.
[Figure: a single switching node: attached devices connected over full-duplex links, through the network interface, to the digital switch and its control unit]
37
Space Division Switching
Originally developed for the analog environment, and has been carried over to the digital
domain
In a space division switch, the signal paths are physically separate from one another (divided in space).
Essentially a crossbar matrix
39
Limitations of Crossbar Switch
The number of crosspoints grows with the square of the number of attached stations.
The failure of a crosspoint prevents connection between the two devices whose lines intersect at that crosspoint
Only a small fraction of the crosspoints are engaged even when all of the attached devices are active
[Figure: an 8 x 8 multistage switch built from 4 x 2 and 2 x 2 crossbar modules]
40
Three-stage Space Division Switch
41
Blocking in Multistage Switches
8x8 Switch using 4x2 and 2x2 crosspoint switch
The number of crosspoints needed goes down from 64 to 40
There is more than one path through the network to connect two endpoints, thereby increasing reliability
Multistage switches may lead to Blocking
The problem may be tackled by increasing the number or size of the intermediate switches, which also increases the cost
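The figures quoted above can be checked directly; each 4 x 2 (or 2 x 4) module has 8 crosspoints and each 2 x 2 module has 4, so

\[
8 \times 8 = 64 \ \text{(single crossbar)} \qquad \text{versus} \qquad 4 \times 8 + 2 \times 4 = 40 \ \text{(three-stage switch)}.
\]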
[Figure: an 8 x 8 three-stage switch built from 4 x 2 and 2 x 2 modules, showing multiple intermediate paths between endpoints]
42
Time Division Switching
Both voice and data can be transmitted using digital signals.
All modern circuit switches use the digital time-division multiplexing (TDM) technique for
establishing and maintaining circuits.
▪ Synchronous TDM allows multiple low-speed bit streams to share a high-speed line.
▪ A set of inputs is sampled in a round robin manner. The samples are organized serially into
slots (channels) to form a recurring frame of slots.
▪ During successive time slots, different I/O pairings are enabled, allowing a number of
connections to be carried over the shared bus.
To keep up with the input lines, the data rate on the bus must be high enough so that the slots
recur sufficiently frequently.
For 100 full-duplex lines at 19.2 kbps each, the data rate on the bus must be greater than 100 × 19.2 kbps = 1.92 Mbps.
▪ The source destination pairs corresponding to all active connections are stored in the
control memory.
✔ Thus the slots need not specify the source and destination addresses.
43
TDM with and Without TSI
[Figure: TDM with and without a time-slot interchange (TSI); the TSI reorders the input slots, e.g. 1, 2, 3, 4 -> 3, 4, 1, 2]
In the TSI, writing into memory is sequential while reading is selective.
44
Time and Space Switch
A simple TST switch consists of two time stages and one space stage and has 12 inputs and 12 outputs.
Instead of using one time-division switch, it divides the inputs into three groups (of four inputs each) and directs them to three time-slot interchanges.
The result is that the average delay is one-third of what would result from using a single time-slot interchange to handle all 12 inputs.
45
Routing in Circuit-Switched Network
In a large circuit-switched network, connections often require a path through more than one switch.
Basic objective: Efficiency and Resilience
Two basic approaches: Static and Dynamic
Static Routing
The routing function in public switched telecommunication networks (PSTN) has traditionally been quite simple and static.
Switches are organized as a tree structure.
To add some resilience to the network, additional high-usage trunks are added that cut across the tree structure to
connect exchanges with high volumes of traffic between them.
Cannot adapt to changing conditions
Leads to congestion in case of failure
Dynamic Routing
To cope with growing demands, all providers presently use a dynamic approach.
▪ Routing decisions are influenced by current traffic conditions.
▪ Switching nodes have a peer relationship with each other rather than a hierarchical one.
▪ Routing is more complex and more flexible.
▪ Two techniques:
✔ Alternate and adaptive routing
46
Alternate Routing Approach
The possible routes to be used between two end offices are predetermined.
▪ It is the responsibility of the originating switch to select the appropriate route for each call.
In practice, usually a different set of pre-planned routes is used for different time periods.
▪ This takes advantage of different traffic patterns in different time zones and at different times of day.
47
Outline of the Lecture
Limitations of Circuit Switching
Message Switching Concepts
Packet Switching Concepts
Packet Switching Techniques:
▪ Virtual Circuit
▪ Datagram
Datagram versus virtual circuit
Internal and External Operations
Circuit Switching versus Packet Switching
48
Problems with Circuit Switching
Network resources are dedicated to a particular connection.
Two shortcomings for data communication:
▪ In a typical user/host data connection, line utilization is very low.
▪ It provides for data transmission at a constant rate.
✔ The actual data traffic pattern may not match this constant rate.
✔ This limits the utility of the method.
49
Message Switching
Basic Idea:
Each network node receives and stores the message
Determines the next leg of the route, and
Queues the message to go out of that link.
Advantages:
Line efficiency is greater (Sharing of links).
Data rate conversion is possible.
Even under heavy traffic, messages are still accepted, possibly with a greater delivery delay.
Message priorities can be used.
Disadvantages:
A message of large size monopolizes the links and the storage at the nodes.
50
Packet Switching
New form of architecture for long-distance data communication (1970).
Packet switching technology has evolved over time.
Basic technology has not changed
Packet switching remains one of the few effective technologies for long-distance data communication.
Packet Priorities
Priorities can be used. If a node has a number of packets queued for transmission, it can transmit the
higher-priority packets first. These packets will therefore experience less delay than lower-priority packets.
52
Datagram Approach
Each packet is treated independently, with no reference to packets that have gone before.
Every intermediate node has to take routing decisions.
Every packet contains source and destination addresses.
Intermediate nodes maintain routing tables.
Advantages:
Call setup phase is avoided (for transmission of a
few packets, datagram will be faster).
Because it is more primitive, it is more flexible.
Congestion/failed link can be avoided (more
reliable).
Problems:
Packets may be delivered out of order.
If a node crashes momentarily, all of its queued
packets are lost.
53
Routing Table in Datagram Packet Switching
54
Datagram Approach
55
Virtual Circuit Approach
A preplanned route is established before any packets are sent.
Call Request and Call Accept packets are used to establish the connection.
Route is fixed for the duration of the logical connection (like circuit switching).
▪ Each packet contains a virtual circuit identifier as well as data.
▪ Each node on the route knows where to forward packets.
A Clear Request packet issued by one of the two stations terminates the
connection.
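A minimal sketch of the per-node table lookup (the port numbers and VCIs are invented for illustration): each node maps (incoming port, incoming VCI) to (outgoing port, outgoing VCI) and rewrites the identifier as it forwards the packet.

    # (incoming port, incoming VCI) -> (outgoing port, outgoing VCI); illustrative values
    vc_table = {
        (1, 14): (3, 22),
        (2, 71): (4, 41),
    }

    def forward(in_port: int, packet: dict):
        """Look up the table, rewrite the packet's VCI, and return the outgoing port."""
        out_port, out_vci = vc_table[(in_port, packet["vci"])]
        packet["vci"] = out_vci      # the VCI is local to each link, so it is rewritten
        return out_port, packet

    print(forward(1, {"vci": 14, "data": b"..."}))   # -> (3, {'vci': 22, 'data': b'...'})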
[Figure: delay in virtual-circuit packet switching: a setup request and a setup acknowledgement establish the circuit before the data packets flow]
57
Switch and Tables in Virtual Circuit Packet Switching
58
Datagram Approach
59
Datagram Vs Virtual Circuit
In datagram Service
▪ Each packet is treated independently
▪ Call set up phase is avoided
▪ Inherently more flexible and reliable
In virtual Circuit
▪ Nodes need not make a routing decision for each packet
▪ More difficult to adapt to congestion
▪ Maintain sequence order
▪ All packets are sent through the same preplanned route
60
Packet Size
As the packet size is decreased, the total transmission time is reduced, until the packet size becomes comparable to the size of the control information (header).
There is a close relationship between packet size and transmission time.
Example: Assume that there is a virtual circuit from station A through nodes 1, 2, 4, and 5, to
station C. Message size is 32 bytes, packet header is 4 bytes.
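A small sketch of the trade-off under stated assumptions: five store-and-forward hops on the path A-1-2-4-5-C, time measured in byte-transmission times, and propagation and processing delays ignored.

    import math

    MESSAGE = 32   # bytes of user data (from the example)
    HEADER = 4     # bytes of control information per packet (from the example)
    HOPS = 5       # links on the path A-1, 1-2, 2-4, 4-5, 5-C (store-and-forward)

    def total_time(payload_per_packet: int) -> int:
        """Total transfer time in byte-times: the first packet crosses all hops,
        and each additional packet adds one more packet-time (pipelining)."""
        n_packets = math.ceil(MESSAGE / payload_per_packet)
        packet_len = payload_per_packet + HEADER
        return packet_len * (HOPS + n_packets - 1)

    for payload in (32, 16, 8, 4, 2):
        print(payload, total_time(payload))   # 180, 120, 96, 96, 120 byte-times

Under these assumptions the total falls from 180 byte-times for one large packet to 96 for 8-byte or 4-byte payloads and then rises again, because the fixed 4-byte header starts to dominate.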
61
Circuit and Packet Switching Comparison
Circuit Switching | Datagram Packet Switching | Virtual Circuit Packet Switching
Dedicated path | No dedicated path | No dedicated path
Path established for entire conversation | Route established for each packet | Route established for entire conversation
Call setup delay | Packet transmission delay | Call setup delay, packet transmission delay
Overload may block call setup | Overload increases packet delay | Overload may block call setup and increases packet delay
No speed or code conversion | Speed or code conversion | Speed or code conversion
Fixed bandwidth | Dynamic bandwidth | Dynamic bandwidth
No overhead bits after call setup | Overhead bits in each packet | Overhead bits in each packet
62
Circuit and Packet Switching Comparison
63
Outline of the Lecture
Introduction
Broadcast Networks
Issues in MAC
Goals of MAC
MAC Techniques
Random Access MAC Techniques
▪ ALOHA
▪ CSMA
▪ CSMA/CD
▪ CSMA/CA
64
Outline of the Lecture
Previously we discussed data link control to provide a link with reliable communication.
We assumed that there is an available dedicated link between the sender and the receiver. This
assumption is not always true.
If we have a dedicated link, as when we connect to the Internet using PPP as the data link control protocol,
then the assumption is true and we do not need anything else.
On the other hand, if we use our cellular phone to connect to another cellular phone, the channel is not
dedicated.
We can consider the data link layer as two sublayers.
The upper sublayer is responsible for data link control, and the lower sublayer is responsible for
resolving access to the shared media.
Where is the access control exercised?
Centralized: a designated station has an authority to grant access to the network.
▪ Simple logic at each station
▪ Greater control to provide features like priority, overrides and guaranteed bandwidth
▪ Easy coordination
▪ Lower reliability
Distributed: Stations can dynamically determine transmission order.
▪ Complex, reliable and scalable
67
Random Access
In random access or contention methods, no station is superior to another station and none is
assigned the control over another.
No station permits, or does not permit, another station to send.
At each instance, a station that has data to send uses a procedure defined by the protocol to
make a decision on whether or not to send.
In a random access method, each station has the right to the medium without being controlled by
any other station.
However, if more than one station tries to send, there is an access conflict (collision) and the frames will be either destroyed or modified.
To avoid access conflict or to resolve it when it happens, each station follows a procedure that
answers the following questions:
When can the station access the medium?
What can the station do if the medium is busy?
How can the station determine the success or failure of the transmission?
What can the station do if there is an access conflict?
68
MAC Techniques
70