
Computer Communication Networks

CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 1 – 1
Introduction

Spring Semester 2021 1


Course Outline
TOPICS
Introduction to Data Communication & Network Topologies, Subnets, Circuit and Packet Switching, Layers of Communication Protocol, Connection Oriented and Connection Less Services
Data Link Layer: Framing, Error Detection Techniques
Data Link Layer: Flow and Error Control, Sliding Window Protocols
Data Link Layer: High Level Data Link Control Protocol, Point to Point Protocol
Medium Access Layer: Queuing Theory
Local Area Networks
Network Layer: IP, Flooding & Routing Algorithms
Network Layer: Dijkstra Algorithm
Network Layer: Distance Vector Routing (RIP)
Network Layer: Routing Loops and Count to Infinity Problem, RIP Timers
Transport Layer: Quality of Service, Transport Protocol Mechanisms, Flow Control and Congestion Control in TCP, Examples of Transport Protocols (UDP, TCP)

Spring Semester 2021 2


Learning Resources

❖Text Books:
• Data Communications & Networking (4th Edition), Behrouz A. Forouzan - McGraw-Hill
• Data & Computer Communications (8th Edition), William Stallings - Prentice Hall

❖Reference Book:
• Computer Networking: A Top-Down Approach (7th Edition), James Kurose, Keith Ross - Pearson

Spring Semester 2021 3


Evaluation Criteria

Assessment Type         Marks   Schedule (Week No.)
Midterm                 20      7*
Assignment              10      2, 6
Quiz                    05      5, 9
Class Performance       05      1-16
Total Sessional Marks   40
Spring Semester 2021 4
Course Objectives
❖To develop an understanding of the fundamental concepts of computer networking.

❖To develop an understanding of the different components of computer networks, various protocols, modern technologies and their applications.

Spring Semester 2021 5


Course Learning Outcomes
❖On successful completion of this course, the student must be able to:
• Understand basic computer network technology.
• Understand and explain Data Communications System and its
components.
• Identify the different types of network topologies and protocols.
• Explain the function(s) of each layer of OSI and TCP/IP reference model.
• Identify the different types of network devices and their functions within
a network.
• Understand sub-netting and routing mechanisms.
• Become familiar with the basic protocols of computer networks and how they can be used to assist in network design and implementation.

Spring Semester 2021 6


A Communication Model

• Exchange of data between two parties.

• Key Elements:
• Source – generates data to be transferred
• Transmitter – converts data into transmittable signals
• Transmission system – carries the data
• Receiver – converts received signal into data
• Destination – takes incoming data

Spring Semester 2021 8


Simplified Communication Model

Spring Semester 2021 9


What is a Network?

• Set of devices (often referred to as nodes) connected by communication links, capable of sending and/or receiving data generated by other nodes on the network.

• Fundamental aim of networks:


• Resource sharing (computing, printers, peripherals, information)
• Services (Email, video conferencing, DB access, Client/server
applications)

Spring Semester 2021 10


Important Tasks in Networking

• Routing – identify suitable routes subject to constraints on capacity and allowable delays.
• Congestion control – avoid traffic overload situations in specific network areas, or at least react properly to them.
• Flow Control – avoid overflowing the receiver with data from the sender.
• Error Control – deal with errors that occur during transmission.


Spring Semester 2021 11
Common Communication Patterns

• Unicast
• Only two nodes in the network are involved.
• One of the nodes is the sender and the other is the receiver.
• Nodes can have both roles.
• E.g. phone connections, viewing a webpage.
[Figure: a sender node and a receiver node joined by a communication link]

Spring Semester 2021 12


Common Communication Patterns

• Broadcast
• One node acts as sender, all other nodes as receivers.
• E.g. radio, TV
[Figure: one sender node with a communication link to every other node, all of which act as receivers]

Spring Semester 2021 13


Common Communication Patterns

• Multicast
• Group communication.
• One node is the sender; several, but not all, of the other nodes act as receivers.
• In a multicast group, all nodes can act as sender.
• E.g. Internet chat, phone conferences
[Figure: a sender node with communication links to the receiver nodes in its multicast group]

Spring Semester 2021 14


Network Topologies

• Network Topology – arrangement of elements in a communication network.
• A simple model of a network is a communication graph:
• Nodes represent stations/switching elements
• Edges represent direct communication links
• Four basic topologies are:
• Mesh
• Star
• Bus
• Ring

Spring Semester 2021 15


Mesh Topology

• Mesh Topology – every device has a dedicated link to every other device.
• A dedicated link carries traffic only between the two devices it connects.
• Total no. of links = n(n − 1)/2
• Advantages:
• Robust
• Privacy or security
• Disadvantages:
• Increased cost of installation
• Poor scalability
[Figure: a set of stations, each with a dedicated link to every other station]

Spring Semester 2021 16


Star Topology

• Star Topology – every device has a dedicated link to a central controller (hub).
• No direct traffic between devices; the hub acts as an exchange.
• Total no. of links = n
• Advantages:
• Robust
• Less expensive
• Disadvantages:
• Hub is the single point of failure
[Figure: a hub with a dedicated link to each station]

Spring Semester 2021 17
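As a quick numeric check of the two link-count formulas quoted on the mesh and star slides, here is a small Python sketch (the function names are illustrative, not from the slides):

```python
# Link counts for n stations in the two topologies discussed above.
def mesh_links(n: int) -> int:
    return n * (n - 1) // 2   # a dedicated link between every pair of n stations

def star_links(n: int) -> int:
    return n                  # one dedicated link from each station to the hub

for n in (4, 5, 10):
    print(n, "stations:", mesh_links(n), "mesh links,", star_links(n), "star links")
# 10 stations already need 45 links in a mesh but only 10 in a star,
# which illustrates the mesh topology's poor scalability.
```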


Bus Topology

• Bus Topology – every device is connected to a common bus.


• Bus is a broadcast medium.
• Advantages:
• Ease of installation
• Disadvantages:
• Difficult to scale
• Bus is a common point of failure
[Figure: stations tapped onto a common bus]

Spring Semester 2021 18


Ring Topology

• Ring Topology – every device has a dedicated connection to the two devices on either side of it.
• The signal is passed along the ring in one direction only, from one device to the next, until it reaches the destination.
• Advantages:
• Ease of installation
• Disadvantages:
• Difficult to scale
• A single break in the link can bring the entire network down.
[Figure: stations connected in a closed ring]

Spring Semester 2021 19


Network Coverage Areas

• Local Area Networks (LANs)
• Have limited geographical extension, usually ≤ 1 km (spans an office or building)
• Controlled by only one owner/administrative entity
• Offer a shared transmission medium to multiple stations
• Most common LANs are switched (Ethernet) LANs and wireless LANs.
• E.g.
• Connect desktop computers to share files, emails
• Allow several computers to share printers, file servers

Spring Semester 2021 20


Network Coverage Areas

• Wide Area Networks (WANs)
• Span large areas (countries, continents, the world)
• Controlled by several administrative entities
• The Internet is an example of a WAN
• In the Internet, LANs are an elementary unit.
• Internet = Network of Networks
• LANs are attached to routers; routers are interconnected via other LANs
• WANs can be implemented using one of two technologies:
• Circuit Switching
• Packet Switching

Spring Semester 2021 21


Circuit Switching

• Circuit Switching Networks – a dedicated connection or circuit is established between nodes for the duration of the connection.
• The lifetime of connection has three phases:
• Connection setup : identify the routes , set aside resources so that they are
guaranteed.
• Connection usage : use the established connection to transmit the data. The pre-
reserved resources guarantee that this connection is not influenced by other
connections
• Connection teardown: free the reserved resources
• Data generated by the source station are transmitted along the dedicated
path as rapidly as possible.
• At each node, incoming data are routed or switched to the appropriate
outgoing channel without delay.
• Switching elements are called switches.
• Example: The telephone network
Spring Semester 2021 22
Circuit Switching

• A routing decision is made only once (at connection setup) and never/rarely modified.
• A connection has its resources guaranteed.
• Any bandwidth not used by a connection cannot be reused by other
connections, this can result in poor utilization.
• Connection setup takes time, if messages are much shorter than the
connection setup time then circuit switching is not economical.
• Connection setup may fail when no route or not enough resources
are available in the network.
• Admission Control: Switching elements check whether enough
resources are available for the new connection without
compromising the resources already granted to existing connections.

Spring Semester 2021 23


Event Timing of Circuit Switching
[Figure: message sequence between Source, two Switches, and Destination: connection setup request, connection accept response, data transfer, connection teardown request, connection teardown response]

Spring Semester 2021 24


Packet Switching

• Packet Switching Networks - Data flows are segmented into small chunks called packets.
• Packets are basic unit of transmission.
• A packet consists of:
• A packet header containing meta-information about the packet, e.g. address fields
• The packet payload
• a packet trailer for error detection / correction
• Packets are transmitted individually and independently from one node to the other.
• At each node, the entire packet is received, stored briefly, and then transmitted to the
next node.
• There is no concept of a connection, packets can be sent immediately without having to
set up resource reservation in the network.
• Switching elements are called routers.
• Analogy: letter transfer in postal network, envelopes correspond to packet headers.
• Example: The Internet.

Spring Semester 2021 25


Packet Switching

• A sequence of packets between the same source-destination pair is called a flow.
• Each packet is routed individually, different packets in the same flow
can take different routes.
• Each router makes a routing decision for each packet.
• Each packet must include information facilitating routing, e.g. header
fields for source and destination addresses.
• Packets do not necessarily arrive in the same order as they have
been sent. Packets are reordered at the destination.
• Many flows can share a link, bandwidth not utilized by one flow can
be used by others.

Spring Semester 2021 26


Packet Switching

• No guarantee for packet delivery – lack of resource reservation.


• Internet/IP “best effort” service: packet is delivered – maybe
• Since flow data rates and routes often cannot be predicted in
advance, routers buffer some packets to prevent packet
dropping in temporary overload situations.
• Routers only have a finite amount of memory, and when
overload situation sustains, packet dropping is inevitable, this is
called congestion.
• Which packets to drop?
• Congestion control schemes either try to avoid congestion or to deal
with it.

Spring Semester 2021 27


Event Timing of Packet Switching
[Figure: packets Pk 1, Pk 2 and Pk 3 forwarded from source through two routers to destination; each node receives a packet completely before transmitting it on the next hop]

Spring Semester 2021 28


Circuit Switching Vs Packet Switching

• Circuit-switching:
• Can give guaranteed bandwidths
• No reuse of resources
• Data forwarding is low-complexity operation for switches
• Routing is done only once
• Packet-switching:
• Cannot give any guarantees
• Allows reuse of resources
• Data forwarding is higher-complexity operation for routers
• Routing is done for every packet

Spring Semester 2021 29


Summary

• Communication Model
• Key Elements:
• Source – generates data to be transferred
• Transmitter – converts data into transmittable signals
• Transmission system – carries the data
• Receiver – converts received signal into data
• Destination – takes incoming data
• What is a Network?
• Set of nodes connected by communication links, capable of sending
and/or receiving data generated by other nodes on the network.
• Fundamental aim of networks: Resource sharing , Services

Spring Semester 2021 30


Summary

• Important tasks in Networking


• Routing
• Congestion control
• Flow Control
• Error Control
• Common Communication Patterns
• Unicast
• Broadcast
• Multicast

Spring Semester 2021 31


Summary

• Network Topologies
• Mesh
• Star
• Bus
• Ring
• Network Coverage Areas
• Local Area Networks(LANs)
• Wide Area Networks(WANs)
• Switching Techniques
• Circuit Switching
• Packet Switching

Spring Semester 2021 32


Reading Assignment

• History of Networking & The Internet

Spring Semester 2021 33


Questions?

Spring Semester 2021 34


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 1 – 2
Network Architectures & Protocol Basics

Spring Semester 2021 1


Recap

• Communication Model
• Key Elements:
• Source – generates data to be transferred
• Transmitter – converts data into transmittable signals
• Transmission system – carries the data
• Receiver – converts received signal into data
• Destination – takes incoming data
• What is a Network?
• Set of nodes connected by communication links, capable of sending
and/or receiving data generated by other nodes on the network.
• Fundamental aim of networks: Resource sharing , Services

Spring Semester 2021 2


Recap

• Important tasks in Networking


• Routing
• Congestion control
• Flow Control
• Error Control
• Common Communication Patterns
• Unicast
• Broadcast
• Multicast

Spring Semester 2021 3


Recap

• Network Topologies
• Mesh
• Star
• Bus
• Ring
• Network Coverage Areas
• Local Area Networks(LANs)
• Wide Area Networks(WANs)
• Switching Techniques
• Circuit Switching
• Packet Switching

Spring Semester 2021 4


The Need for Protocol Architecture
• Consider two friends communicating through postal mail.
• Instead of implementing the entire logic as a single module, the entire process is divided into a hierarchy of tasks.
• Each layer is responsible for a set of tasks.

Spring Semester 2021


5
The Need for Protocol Architecture

• A network is a combination of hardware and software that sends data


from one location to another.
• The hardware consists of the physical equipment that carries signals
from one point of the network to another.
• The software consists of instruction sets that provide services that we
expect from a network.
• To reduce design complexity, most networks are designed as a stack of layers, each one built upon the one below it.
• A key principle for networking software is layering: the functionality is
decomposed into a chain of layers so that layer N offers services to
layer N + 1 and itself is only allowed to use services offered by layer
N − 1.

Spring Semester 2021 6


The Need for Protocol Architecture

• It takes two to communicate, so the same set of layered functions must exist in both systems.
• Layer N on one machine carries on a conversation with layer N on another machine.
• The rules and conventions used in this conversation are collectively
called layer N protocol.
• A protocol is an agreement between the communicating parties on
how communication is to proceed.
• The key features of a protocol are as follows:
• Syntax: Describes the format of the data blocks
• Semantics: Includes control information for coordination and error handling
• Timing: Includes speed matching and sequencing
Spring Semester 2021 7
The Need for Protocol Architecture
• No data are directly transferred from layer N on one machine to layer N on another machine.
• Instead, each layer passes data & control information to the layer immediately below it, until the lowest layer is reached.
• Below layer 1 is the physical layer/medium through which actual communication occurs.
[Figure: a five-layer stack on HOST A and HOST B; peer layers converse via the layer-N protocol, adjacent layers via the N/N+1 interface, and the two hosts are joined by the physical medium]

Spring Semester 2021 8


The Need for Protocol Architecture
• Between each pair of adjacent layers is an interface, called the N-service interface.
• The N-interface offers, at service access points (SAPs), the services the lower layer makes available to the upper one.
• The N-interface can offer several SAPs; this allows multiplexing between different layer N + 1 connections or sessions.
• The layer N-service is implemented through an N-protocol.
[Figure: the same five-layer stack on HOST A and HOST B as on the previous slide]
Spring Semester 2021 9
The Need for Protocol Architecture
• A layer exchanges protocol data units (PDUs) with a peer N-protocol entity.
• It constructs these PDUs itself and hands them over to its local N − 1 layer to deliver them to the peer N-protocol entity.
• An N-PDU is treated as payload / user data by the N − 1 layer.
• Each layer adds its own header and trailer before handing down to the lower layer.
• The receiving layer removes its header / trailer before handing the payload to the upper layer.
[Figure: at each step down the stack, the N+1 data is wrapped in an N header and N trailer, which in turn become payload between an N−1 header and N−1 trailer, and so on]

Spring Semester 2021


10
Design Issues for the Layers
• Some of the key design issues that occur in computer networks are
present in several layers. They include:
• Addressing - Networks have many communicating devices and some of them
have multiple processes, therefore a mechanism is needed for a process on
one machine to specify with whom it wants to talk i.e. addressing is needed
to specify a specific destination in case of multiple destinations.
• No of logical channels - Many networks provide at least two channels per
connection, one for normal data transfer and one for urgent data.
• Error Control - Both sender and receiver must agree on same set of error
detecting and error correcting codes.
• Sequencing - Reassembling of packets at receiver end that arrive out of order.
• Flow Control - Avoid overflowing receiver with data from sender.
• Routing - A suitable route must be chosen when there are multiple paths
between source and destination.
Spring Semester 2021 11
Types of Service
• Layers can offer two types of services to the layers above
them.
• Connection Oriented Service – The service user first establishes the connection, uses the connection and then releases the connection. Order is preserved and the data arrives in order. Connection oriented service is modeled after the telephone system.

• Connection Less Service – Each message carries a full destination address and is routed through the system independently of all others. Data sent may arrive out of order. Connection less service is modeled after the postal system.

Spring Semester 2021 12


Reference Models
• A reference model is a conceptual blueprint of how
communications should take place.
• It addresses all the processes required for effective
communication and divides these processes into logical
groupings called layers.
• Two popular reference models are:
• OSI Reference Model
• TCP/IP Reference Model

Spring Semester 2021 13


OSI Reference Model
• The OSI (Open System Interconnection) reference model was developed by the International Organization for Standardization (ISO) as a model for a computer protocol architecture.
• The model was not commercially successful, but helped greatly to clarify networking architectures and to provide a framework for developing protocol standards.
• The OSI model consists of seven layers:
• Layer 7: Application
• Layer 6: Presentation
• Layer 5: Session
• Layer 4: Transport
• Layer 3: Network
• Layer 2: Data Link
• Layer 1: Physical
Spring Semester 2021 14
OSI Reference Model
• Layers 1, 2 & 3 exchange PDUs between physically connected hosts (hop by hop).
• The upper four layers exchange protocol messages between end hosts, over several intermediate nodes called routers (end to end).
• Name of the unit exchanged at each layer:
Application – APDU
Presentation – PPDU
Session – SPDU
Transport – TPDU
Network – Packet
Data Link – Frame
Physical – Bit

Spring Semester 2021 15


OSI RM - Physical Layer
• Responsible for transmission of bits over a physical medium.
• The protocols in this layer are link dependent and further depend on the actual transmission medium of the link (for example, twisted-pair copper wire or fiber optics).
• Often involves specification of:
• Cable types (wired or wireless)
• Connectors
• Electrical specifications

Spring Semester 2021 16


OSI RM – Data Link Layer
• (Reliable) transfer of messages over physical link .
• The Data Link layer will ensure that messages are delivered to the proper
device on a LAN using hardware addresses and will translate messages
from the Network layer into bits for the Physical layer to transmit.
• The Data Link layer formats the message into pieces, each called a data
frame, and adds a customized header containing the hardware destination
and source address.
• Involves specification of:
• Framing - determining the frame start and end , choice of frame size
• Error Detection and Correction - coding or retransmission-based
• Medium access control – control access to shared channel , often considered as a
separate “sub-layer” of link layer
• Flow control - Avoid overwhelming a slow receiver with too much data
Spring Semester 2021 17
OSI RM – Network Layer
• Concerned with:
• Addressing and routing
• End-to-end delivery of messages (transmission of data packets
between devices that are not locally connected)
• Network layer messages are called packets
• Involves specification of:
• Addressing formats
• Exchange of routing information and route computation
• Depending on technology: establishment, maintenance and teardown
of connections

Spring Semester 2021 18


OSI RM – Transport Layer
• Concerned with reliable, in-sequence, transparent end-to-end
data transfer.
• Transport layer data packet is called segment.
• The functions of the transport layer are:
• Break messages into packets and reassemble packets of size suitable
to network layer
• Multiplex sessions with same source/destination nodes
• Resequencing packets at destination
• Error Control
• Provide end-to-end flow control

Spring Semester 2021 19


OSI RM – Presentation & Session Layer
• Session layer:
• Concerned with establishing communication sessions between
applications
• A session can involve several transport layer connections in parallel or
sequentially
• The Session layer basically keeps different applications’ data separate
from other applications’ data.

• Presentation layer:
• Translates between different representations of data types used on
different end hosts
• Example: host A uses little-endian, host B big-endian
Spring Semester 2021 20
OSI RM – Application Layer
• Contains variety of protocols that are commonly needed by
users.
• Examples:
• HTTP(HyperText Transfer Protocol) – for Web document request and
transfer
• SMTP(Simple Mail Transfer Protocol) – for transfer of email messages
• FTP(File Transfer Protocol) – for transfer of files between two end
systems
• Packet of information at the application layer is called a
message.
• Focus: Transport Layer and Lower.
Spring Semester 2021 21
TCP/IP Reference Model
• The TCP/IP protocol architecture is a result of protocol research and development conducted on the experimental packet-switched network, ARPANET, funded by the Defense Advanced Research Projects Agency (DARPA).
• It is generally referred to as the TCP/IP protocol suite.
• This model is used in the Internet.
• The TCP/IP model consists of five layers:
• Layer 5: Application
• Layer 4: Transport
• Layer 3: Internet
• Layer 2: Network Access
• Layer 1: Physical

Spring Semester 2021 22


TCP/IP Reference Model
• The Application and Transport layers communicate end to end between HOST A and HOST B.
• The Internet, Network Access and Physical layers also run in intermediate nodes, so their communication is hop by hop.
[Figure: HOST A and HOST B protocol stacks with an intermediate node that implements only the Internet, Network and Physical layers]

Spring Semester 2021 23


TCP/IP RM – Application Layer
• A vast array of protocols combine at the Application Layer to
integrate the various activities and duties spanning the focus of the
OSI’s corresponding top three layers (Application, Presentation, and
Session).
• Accesses transport layer through socket interface
• Well known application layer protocols are:
• Telnet
• FTP
• SMTP
• HTTP/HTTPS
• DNS
• RTP

Spring Semester 2021 24


TCP/IP RM – Transport Layer
• Provides end-to-end communications to applications
• Offers its services through socket interface
• Standard transport layer protocols:
• TCP: reliable, in-sequence byte-stream transfer
• UDP: unreliable, un-ordered message transfer
• SAPs are called ports, used for application multiplexing
• Ports are identified by numbers
• Several applications / processes can use transport service
• One application is bound to one port
• The PDUs generated by TCP / UDP are called segments

Spring Semester 2021 25


TCP/IP RM – Internet Layer
• This is a key part of the TCP/IP reference model.
• Uses IP (Internet Protocol), its PDUs are called datagrams.
• IP looks at each packet’s address. Then, using a routing table, it
decides where a packet is to be sent next, choosing the best
path.
• Other protocols found here are:
• ARP
• ICMP
• IGMP
• RARP
Spring Semester 2021 26
TCP/IP RM – Physical & Network Access Layer

• The physical layer is similar to the PHY of the OSI RM

• The Network Access Layer:


• Accepts IP datagrams and delivers them over physical link
• Receives IP datagrams and delivers them to local IP layer
• Includes medium access control, framing, address resolution
• May include link layer error and flow control

Spring Semester 2021 27


Encapsulation of Data
• The process of placing data behind headers (& before trailers) of
data packet is called encapsulation.
• Application Layer(APDU): Creates application header and places the data (created
by application) after the header.
• Presentation Layer(PPDU):Creates presentation header and places the data
(received from application layer) after the header.
• Session Layer(SPDU):Creates session header and places the data (received from
presentation layer) after the header.
• Transport Layer(Segment):Creates the header and places the data (received from
session layer) after the header.
• Network Layer(Packet):Creates the header and places the data (received from
transport layer) after the header.
• Data Link Layer(Frame):Creates the header and places the data (received from
network layer) after the header and also adds the trailer.
• Physical Layer(Bits):Encodes the signal to transmit the data.
Spring Semester 2021 28
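A minimal Python sketch of the encapsulation idea described above; the layer names and bracketed header strings are illustrative placeholders, not any real protocol format:

```python
# Each layer wraps what it receives from the layer above with its own header;
# the data link layer also adds a trailer. The receiver strips them in reverse order.
LAYERS = ["application", "presentation", "session", "transport", "network", "data-link"]

def encapsulate(message: str) -> str:
    pdu = message
    for layer in LAYERS:
        pdu = f"[{layer}-hdr]" + pdu          # prepend this layer's header
    return pdu + "[data-link-trl]"            # the data link layer also appends a trailer

def decapsulate(frame: str) -> str:
    pdu = frame.removesuffix("[data-link-trl]")
    for layer in reversed(LAYERS):
        pdu = pdu.removeprefix(f"[{layer}-hdr]")  # each peer layer removes its own header
    return pdu

frame = encapsulate("hello")
print(frame)                # outermost header = data link, innermost = application
print(decapsulate(frame))   # -> "hello"
```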
Encapsulation of Data

Spring Semester 2021 29


Summary
• The need for Protocol Architecture
• Design Issues for Layers
• Addressing
• No of logical channels
• Error Control
• Sequencing
• Flow Control
• Routing
• Types of Services
• Connection Oriented Services
• Connection Less Services

Spring Semester 2021 30


Summary
• Reference Models
• OSI
• TCP/IP
• OSI Reference Model
• Application
• Presentation
• Session
• Transport
• Network
• Data Link
• Physical
• TCP/IP Reference Model
• Application
• Transport
• Internet
• Network Access
• Physical
• Encapsulation of Data

Spring Semester 2021 31


Questions?

Spring Semester 2021 32


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 2 – 1
Data Link Layer – Framing Techniques

Spring Semester 2021 1


Introduction to the Link Layer

• We will refer to any device that runs a link layer (i.e., layer 2) protocol as a node.
• Nodes include hosts, routers or switches, and Wi-Fi access
points.
• We will also refer to the communication channels that connect
adjacent nodes along the communication path as links.
• In order for a datagram to be transferred from source host to
destination host, it must be moved over each of the individual
links in the end-to-end path.
• Over a given link, a transmitting node encapsulates the datagram
in a link layer frame and transmits the frame into the link.
Spring Semester 2021 4
Introduction to the Link Layer

Spring Semester 2021 5


Services Provided by the Link Layer

• The Data Link layer has a number of specific functions it can carry out. These include:
• Framing:
• Almost all link layer protocols encapsulate each network
layer datagram within a link layer frame before
transmission over the link.
• A frame consists of a data field, in which the network layer
datagram is inserted, and a number of header fields.
• The structure of the frame is specified by the link layer
protocol.
Spring Semester 2021 6
Services Provided by the Link Layer

• The Data Link layer has a number of specific functions it can carry out. These include:
• Reliable delivery:
• When a link layer protocol provides reliable delivery service,
it guarantees to move each network layer datagram across
the link without error.
• A link layer reliable delivery service is often used for links
that are prone to high error rates, such as a wireless link.
• A link layer reliable delivery service can be achieved with
acknowledgments and retransmissions.
Spring Semester 2021 7
Services Provided by the Link Layer

• The Data Link layer has a number of specific functions it can carry out. These include:
• Link access:
• A medium access control (MAC) protocol specifies the rules by which
a frame is transmitted onto the link.
• For point to point links that have a single sender at one end of the
link and a single receiver at the other end of the link, the MAC
protocol is simple – the sender can send a frame whenever the link is
idle.
• When multiple nodes share a single broadcast link the MAC protocol
serves to coordinate the frame transmissions of the many nodes.

Spring Semester 2021 8


Services Provided by the Link Layer

• The Data Link layer has a number of specific functions it can carry out. These include:
• Error detection and correction:
• The link layer hardware in a receiving node can incorrectly decide that a bit in a
frame is zero when it was transmitted as a one, and vice versa.
• Such bit errors are introduced by signal attenuation and electromagnetic noise.
Because there is no need to forward a datagram that has an error, many link
layer protocols provide a mechanism to detect such bit errors.
• This is done by having the transmitting node include error detection bits in the
frame, and having the receiving node perform an error check.
• Error detection in the link layer is usually implemented in hardware.
• Error correction is similar to error detection, except that a receiver not only
detects when bit errors have occurred in the frame but also determines exactly
where in the frame the errors have occurred (and then corrects these errors).
Spring Semester 2021 9
Services Provided by the Link Layer

• The Data Link layer has a number of specific functions it can carry out. These include:
• Flow Control:
• The nodes on each side of a link have a limited amount of buffering
capacity.
• This is a potential problem, as a receiving node may receive frames
at a rate faster than it can process the frames (over some time
interval).
• Without flow control, the receiver's buffer can overflow and frames
can get lost.
• Similar to the transport layer, a link layer protocol can provide flow
control in order to prevent the sending node on one side of a link
from overwhelming the receiving node on the other side of the link.
Spring Semester 2021 10
Where is Link Layer Implemented
• It is implemented in a
network adapter, or
sometimes known as a
network interface card (NIC).
• The controller in the NIC , is
usually a single, special-
purpose chip that implements
many of the link-layer services
(framing, link access, error
detection, etc.).

Spring Semester 2021 11


Framing

• The data link layer takes packets from the network layer and encapsulates them in frames for transmission.
• Each frame contains a frame header, a payload field for holding
the packet, and a frame trailer.

Spring Semester 2021 12


Types of Frames

• Frames can be of two types.


• Fixed size frames:
• No need for defining the boundaries of the frames.
• The size itself can be used as a delimiter.
• Example: the ATM wide area network uses frames of fixed size called cells.
• Variable size frames:
• Need to define the end of one frame and the beginning of the next.
• Prevalent in local area networks.

Spring Semester 2021 13


How it is done?

• The data link layer breaks the bit stream up into discrete frames and computes the checksum for each frame.
• When a frame arrives at the destination, the checksum is recomputed.
• If the newly computed checksum is different from the one contained in the
frame, the data link layer knows that an error has occurred and takes steps
to deal with it.
• One way to achieve this framing is to insert time gaps between frames,
much like the spaces between words in ordinary text.
• However, networks rarely make any guarantees about timing, so it is
possible these gaps might be squeezed out or other gaps might be inserted
during transmission.

Spring Semester 2021 14


Framing Techniques

• Common methods used are:


• Character Count
• Flag bytes with byte stuffing – Character Oriented Protocol
• Starting and ending flags, with bit stuffing - Bit Oriented
Approach

Spring Semester 2021 15


Character Count

• The number of characters in the frame is specified in a header field.
• When the data link layer at the destination sees the
character count, it knows how many characters follow
and hence where the end of the frame is.

Spring Semester 2021 16
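A minimal Python sketch of the character-count method, following the common textbook convention that the count field covers itself plus the data (the helper names are illustrative):

```python
def frame_char_count(messages):
    """Prefix each message with a one-byte count covering the count byte plus the data."""
    out = bytearray()
    for m in messages:
        out.append(len(m) + 1)
        out += m
    return bytes(out)

def deframe_char_count(stream: bytes):
    """Split a received byte stream back into messages using the count fields."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i + 1:i + count])
        i += count              # jump to where the next count byte should be
    return frames

stream = frame_char_count([b"abc", b"de"])
print(stream)                        # b'\x04abc\x03de'
print(deframe_char_count(stream))    # [b'abc', b'de']
# If one count byte is corrupted, every subsequent frame boundary is lost,
# which is exactly the weakness discussed on the next slide.
```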


Character Count

Spring Semester 2021 17


Character Count
• Problem: the count can be garbled by a transmission error.
• Even if the checksum is incorrect so the destination knows that
the frame is bad, it still has no way of telling where the next
frame starts.
• Sending a frame back to the source asking for a retransmission
does not help either, since the destination does not know how
many characters to skip over to get to the start of the
retransmission.
• For this reason, the character count method is rarely used.

Spring Semester 2021 18


Flag Bytes with byte stuffing
• Each frame starts and ends with special byte called a flag byte,
as both the starting and ending delimiter.

FLAG | HEADER | PAYLOAD | TRAILER | FLAG

• If the receiver ever loses synchronization, it can just search for the flag byte to find the end of the current frame.
• Two consecutive flag bytes indicate the end of one frame and
start of the next one.
Spring Semester 2021 19
Flag Bytes with byte stuffing

• Problem: the flag byte's bit pattern may occur in the data.
• Insert a special escape byte (ESC) just before each "accidental" flag byte in the data at the sender side.
• The data link layer on the receiving end removes the
escape byte before the data are given to the network
layer.
• This technique is called byte stuffing or character
stuffing.

Spring Semester 2021 20


Flag Bytes with byte stuffing

Original characters → After stuffing
A FLAG B → A ESC FLAG B
A ESC B → A ESC ESC B

Spring Semester 2021 21


Flag Bytes with byte stuffing

Original characters → After stuffing
A ESC FLAG B → A ESC ESC ESC FLAG B
A ESC ESC B → A ESC ESC ESC ESC B

• In all cases, the byte sequence delivered after destuffing is exactly the same as the original byte sequence.
Spring Semester 2021 22
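A minimal Python sketch of byte stuffing and destuffing; the FLAG and ESC byte values (0x7E, 0x7D) are just illustrative choices here:

```python
FLAG, ESC = 0x7E, 0x7D

def byte_stuff(payload: bytes) -> bytes:
    """Escape any FLAG or ESC byte in the payload, then delimit the frame with FLAGs."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)        # stuff an escape byte before the special byte
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_destuff(frame: bytes) -> bytes:
    """Drop the delimiting FLAGs and remove each stuffed ESC byte."""
    out, escaped = bytearray(), False
    for b in frame[1:-1]:
        if not escaped and b == ESC:
            escaped = True         # the next byte is data even if it looks like FLAG or ESC
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])      # "A FLAG B ESC C", as in the tables above
assert byte_destuff(byte_stuff(data)) == data
print(byte_stuff(data).hex())
```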
Flag Bytes with byte stuffing

Spring Semester 2021 23


Flag Bytes with byte stuffing

• This method is tied to the use of 8-bit characters only.


• Not all character codes use 8-bit characters. For example, Unicode uses 16-bit characters.
• There is need of a new technique to allow arbitrary
sized characters.

Spring Semester 2021 24


Bit Stuffing
• Bit stuffing allows data frames to contain an arbitrary number of bits and
allows character codes with an arbitrary number of bits per character.
• Each frame begins and ends with a special bit pattern, 01111110 (in fact,
a flag byte).
• Whenever the sender's data link layer encounters five consecutive 1s in
the data, it automatically stuffs a 0 bit into the outgoing bit stream.
• This is called bit stuffing.
• When the receiver sees five consecutive incoming 1 bits, followed by a 0
bit, it automatically destuffs (i.e., deletes) the 0 bit.
• If the user data contain the flag pattern, 01111110, this flag is transmitted
as 011111010 but stored in the receiver's memory as 01111110.
Spring Semester 2021 25
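A minimal Python sketch of bit stuffing and destuffing over '0'/'1' strings, using the 01111110 flag quoted above (the helper names are illustrative):

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    """Stuff a 0 after every run of five consecutive 1s, then add the flags."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")       # the stuffed bit
            run = 0
    return FLAG + "".join(out) + FLAG

def bit_destuff(frame: str) -> str:
    """Strip the flags and delete the 0 that follows every run of five 1s."""
    body = frame[len(FLAG):-len(FLAG)]
    out, run, i = [], 0, 0
    while i < len(body):
        out.append(body[i])
        run = run + 1 if body[i] == "1" else 0
        if run == 5:
            i += 1                # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

payload = "01111110"                       # payload that happens to look like the flag
print(bit_stuff(payload))                  # flag + 011111010 + flag, as on the slide
assert bit_destuff(bit_stuff(payload)) == payload
```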
Bit Stuffing

[Figure: an example bit stream shown as the original data, after stuffing, and at the receiver after destuffing]

Spring Semester 2021 26


Bit Stuffing

Spring Semester 2021 27


Bit Stuffing
• With bit stuffing, the boundary between two frames can be
recognized by the flag pattern.
• Thus, if the receiver loses track of where it is, all it has to do is
scan the input for flag sequences, since they can only occur at
frame boundaries and never within the data.
• As a final note on framing, many data link protocols use a
combination of a character count with one of the other methods
for extra safety.

Spring Semester 2021 28


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 2 – 2
Data Link Layer – Error Detection
Techniques
Spring Semester 2021 1
Synchronization

• The transmission of a stream of bits from one device to another across a transmission link involves cooperation and agreement between the two sides. This is the most fundamental requirement and is called synchronization.
• The receiver must know the rate at which bits are being received so
that it can sample the line at appropriate intervals to determine the
value of each received bit.
• The timing (rate, duration, spacing) of these bits must be the same for
transmitter and receiver.
• Two techniques are in common use for this purpose.
• Asynchronous transmission
• Synchronous transmission

Spring Semester 2021 4


Asynchronous Transmission

• Data are transmitted one character at a time, where each character is 5 to 8 bits in length.
• Timing or synchronization is maintained within each character.
• When no character is being transmitted, the line between transmitter and receiver is in an
idle state. The definition of idle is equivalent to the signaling element for binary 1.
• The beginning of a character is signaled by a start bit with a value of binary 0.
• This is followed by the 5 to 8 bits that actually make up the character. The bits of the
character are transmitted beginning with the least significant bit.
• The parity bit is set by the transmitter such that the total number of ones in the character,
including the parity bit, is even (even parity) or odd (odd parity), depending on the
convention being used. The receiver uses this bit for error detection.
• The final element is a stop element, which is a binary 1.
• A minimum length for the stop element is specified, and this is usually 1, 1.5, or 2 times the
duration of an ordinary bit.
• Because the stop element is the same as the idle state, the transmitter will continue to
transmit the stop element until it is ready to send the next character.

Spring Semester 2021 5


Asynchronous Transmission

Spring Semester 2021 6


Asynchronous Transmission

• Asynchronous transmission is simple and cheap but requires an overhead of two to three bits per character.
• For example, for an 8 bit character with no parity bit, using a 1
bit long stop element, two out of every ten bits convey no
information but are there merely for synchronization; thus the
overhead is 20%.

Spring Semester 2021 7


Synchronous Transmission

• With synchronous transmission, a block of bits is transmitted in a steady stream without start and stop codes. The block may be many bits in length.
• With synchronous transmission, the receiver is required to determine
the beginning and end of a block of data.
• To achieve this, each block begins with a preamble bit pattern and
generally ends with a postamble bit pattern.
• In addition, other bits are added to the block that convey control
information used in the data link control procedures.
• The data plus preamble, postamble, and control information are called
a frame.

Spring Semester 2021 8


Synchronous Transmission

• The frame starts with a preamble called a flag, which is 8 bits long.
The same flag is used as a postamble.
• The receiver looks for the occurrence of the flag pattern to signal the
start of a frame.
• This is followed by some number of control fields (containing data
link control protocol information), then a data field (variable length
for most protocols), more control fields, and finally the flag is
repeated.
• Synchronous transmission is far more efficient than asynchronous.
Spring Semester 2021 9
Types of Errors
• In digital transmission systems, an error occurs when a bit is
altered between transmission and reception; that is, a binary 1
is transmitted and a binary 0 is received, or a binary 0 is
transmitted and a binary 1 is received.
• Two general types of errors can occur:
• Single bit errors - A single bit error is an isolated error condition that
alters one bit but does not affect nearby bits.
• Burst errors - A burst error of length B is a contiguous sequence of B
bits in which the first and last bits and any number of intermediate bits
are received in error.

Spring Semester 2021 10


Types of Errors
Single bit error: sent 00000010, received 00001010 (a single 0 changed to 1).

Burst error (length of burst = 8 bits):
Sent:     0100010001000011
Received: 0101110101100011
The corrupted bits all lie within the 8-bit burst; some bits inside the burst may still be correct.
Spring Semester 2021 11
Types of Errors
• Single bit errors are the least likely type of error in serial data
transmission.
• A burst error does not necessarily mean that the errors occur in
consecutive bits.
• The length of the burst is measured from the first corrupted bit to the last
corrupted bit. Some bits in between may not have been corrupted.
• A burst error is more likely to occur than a single bit error. The duration of
noise is normally longer than the duration of 1 bit, which means that
when noise affects data, it affects a set of bits.
• The number of bits affected depends on the data rate and duration of
noise.
• Example: when a wireless transmitter transmits at 11 Mbps and an
interference burst of 200 μs occurs, 2200 bits are affected by the burst.

Spring Semester 2021 12


Detection Vs Correction
• In error detection, we are looking only to see if any error has
occurred. The answer is a simple yes or no. We are not
interested in the number of errors. A single bit error is the same
for us as a burst error.
• In error correction, we need to know the exact number of bits
that are corrupted and more importantly, their location in the
message. The number of the errors and the size of the message
are important factors.

Spring Semester 2021 13


Redundancy
• The central concept in detecting or correcting errors is
redundancy.
• To be able to detect or correct errors, we need to send some
extra(redundant) bits with our data.
• These redundant bits are added by the sender and removed by
the receiver.
• Their presence allows the receiver to detect or correct
corrupted bits.

Spring Semester 2021 14


Coding
• Redundancy is achieved through various coding schemes.
• The sender adds redundant bits through a process that creates
a relationship between the redundant bits and the actual data
bits.
• The receiver checks the relationships between the two sets of
bits to detect or correct the errors.
• The ratio of redundant bits to the data bits is important factor
in any coding scheme.

Spring Semester 2021 15


Forward Error Correction Vs Retransmission

• Two main methods of error correction are:


• Forward error correction(FEC) is the process in which
the receiver tries to guess the message by using
redundant bits.
• Correction by retransmission is a technique in which
the receiver detects the occurrence of an error and
asks the sender to resend the message. Resending is
repeated until a message arrives that the receiver
believes is error free.
Spring Semester 2021
16
How it is done?
• At the sending node, data, D, to be
protected against bit errors is
augmented with error detection
and correction bits (EDC).
• EDC is the function of transmitted
bits.
• Typically, for a data block of 𝑘 bits,
the error detecting algorithm yields
an error detecting code of 𝑛 − 𝑘
bits, where (𝑛 − 𝑘) < 𝑘.
• The error detecting code, also
referred to as the check bits, is
appended to the data block to
produce a frame of 𝑛 bits, which is
then transmitted.
Spring Semester 2021 17
How it is done?
• Both 𝐷(𝑘 bits) and 𝐸𝐷𝐶(𝑛 − 𝑘
bits) are sent to the receiving
node in a link level frame.
• At the receiving node, a
sequence of bits, 𝐷ሖ ( 𝑘ሖ bits)
and 𝐸𝐷𝐶 ሖ ( (𝑛 −ሖ 𝑘) bits)is
received.
• 𝐷ሖ and 𝐸𝐷𝐶 ሖ may differ from
the original 𝐷 and 𝐸𝐷𝐶 as a
result of in transit bit flips.
Spring Semester 2021 18
How it is done?
• The receiver performs the
same error detecting
calculation on the data bits
and compares this value with
the value of the incoming
error detecting code.
• A detected error occurs if and
only if there is a mismatch.

Spring Semester 2021 19


Error Detection Techniques
• Three techniques for detecting errors in the transmitted data
are:
• Parity checks
• Checksum methods
• Cyclic redundancy checks (CRC)

Spring Semester 2021


20
Parity Checks
• The simplest error detecting scheme is to append a single
parity bit to the end of a block of data.
• In an even parity scheme, the sender simply includes one
additional bit and chooses its value such that the total number
of 1s in the 𝑑 + 1 bits (the original information plus a parity
bit) is even.
• For odd parity schemes, the parity bit value is chosen such that
there is an odd number of 1s.

Spring Semester 2021


21
Parity Checks
Example (even parity scheme): d data bits = 0111000110101011, parity bit = 1, giving d + 1 bits in total.

• Receiver operation is also simple with a single parity bit.


• The receiver need only count the number of 1s in the received 𝑑 +
1 bits.
• If an odd number of 1 valued bits are found with an even parity
scheme, the receiver knows that at least one bit error has occurred.
More precisely, it knows that some odd number of bit errors have
occurred.
• But when an even number of bit errors occurs, the error is undetectable.
Spring Semester 2021
22
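A minimal even-parity sketch in Python, reproducing the slide's 16-bit example (the helper names are illustrative):

```python
def add_even_parity(bits: str) -> str:
    parity = str(bits.count("1") % 2)    # 1 if the data already has an odd number of 1s
    return bits + parity                 # the total number of 1s becomes even

def even_parity_ok(codeword: str) -> bool:
    return codeword.count("1") % 2 == 0  # True means no error detected

cw = add_even_parity("0111000110101011")
print(cw)                                # parity bit 1 is appended, as on the slide
print(even_parity_ok(cw))                # True
print(even_parity_ok("1" + cw[1:]))      # False: a single bit error is detected
print(even_parity_ok(cw[:2] + "00" + cw[4:]))  # True: an even number of bit errors goes undetected
```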
Properties of Parity Checks
• Properties of parity check codes are:
• All odd numbers of bit errors are detected.
• All even numbers of bit errors are not detected.
• Parity check codes have traditionally been used on serial
interfaces, e.g. RS-232, where a parity bit has been appended
to each byte.

Spring Semester 2021


23
Two Dimensional Parity Checks
• The 𝑑 bits in 𝐷 are divided into 𝑖 rows and 𝑗 columns.
• A parity value is computed for each row and for each column.
• The resulting 𝑖 + 𝑗 + 1 parity bits comprise the link layer
frame’s error detection bits.

Spring Semester 2021


24
Two Dimensional Parity Checks
Original data: 1100111 1011101 0111001 0101001

Sender side (even parity scheme):
1100111 | 1
1011101 | 1
0111001 | 0
0101001 | 1
0101010 | 1   ← column parities

Transmitted frame: 11001111 10111011 01110010 01010011 01010101

Spring Semester 2021


25
Two Dimensional Parity Checks
Receiver side (bit in position (2,2) switched to 1):
1100111 | 1
1111101 | 1   ← row parity error
0111001 | 0
0101001 | 1
0101010 | 1
Column 2 also shows a parity error, so the flipped bit can be located and corrected.
*A single error in the parity bits is also detectable and correctable.

Spring Semester 2021


26
Two Dimensional Parity Checks
Receiver side (bit in position (2,2) switched to 1 and bit in position (2,5) switched to 0):
1100111 | 1
1111001 | 1
0111001 | 0
0101001 | 1
0101010 | 1
Parity errors appear in columns 2 and 5.
*Two-dimensional parity can also detect (but not correct!) any combination of two errors in a packet.

Spring Semester 2021


27
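A minimal Python sketch of the two-dimensional even-parity example above (the helper names and list-of-lists representation are my own):

```python
def add_2d_parity(rows):
    """Append a parity bit to each row, then append a row of column parities."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    col_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [col_parity]

def check_2d_parity(block):
    """Return the indices of rows and columns whose parity no longer checks."""
    bad_rows = [i for i, r in enumerate(block) if sum(r) % 2 != 0]
    bad_cols = [j for j, col in enumerate(zip(*block)) if sum(col) % 2 != 0]
    return bad_rows, bad_cols

data = [[1,1,0,0,1,1,1],
        [1,0,1,1,1,0,1],
        [0,1,1,1,0,0,1],
        [0,1,0,1,0,0,1]]
block = add_2d_parity(data)        # same parity bits as in the sender-side slide
block[1][1] ^= 1                   # flip the bit in position (2,2)
print(check_2d_parity(block))      # ([1], [1]): the bad row and column pinpoint the bit
```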
Checksum Methods
• On the sender side: the checksum generator subdivides the data unit into equal segments of n bits (usually 16).
• These segments are added using 1’s complement
arithmetic in such a way that total is also ‘n’ bits.
• The total or sum is then complemented and appended to
the end of the data as redundancy bits.
• These redundancy bits are called checksum field.
• The extended data unit is transmitted across the network.

Spring Semester 2021 28


Checksum Methods
• The receiver subdivides the data unit, adds all the segments and complements the result.
• If the encoded data is intact, the total values found by
adding the data segments and the checksum field should
be zero.
• If the result is not zero , the packet contains error and
receiver rejects it.
• The Internet checksum is based on this approach.

Spring Semester 2021 29


Checksum Methods

Original data: 10101001 00111001   (length of checksum = 8 bits)

Sender side:
  10101001
+ 00111001   (binary addition)
= 11100010
1's complement → checksum = 00011101

Encoded word: 10101001 00111001 00011101


Spring Semester 2021 30
Checksum Methods
Received word: 10101001 00111001 00011101

Receiver side:
  10101001
+ 00111001
+ 00011101
= 11111111
1's complement → 00000000
The result is zero, so the data is intact and there is no error.

Spring Semester 2021 31
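A minimal Python sketch of the 1's-complement checksum used in the worked example; 8-bit segments are used here, while the Internet checksum applies the same idea to 16-bit words (the helper names are my own):

```python
def ones_complement_sum(words, bits=8):
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # wrap any carry back into the sum
    return total

def make_checksum(words, bits=8):
    return (~ones_complement_sum(words, bits)) & ((1 << bits) - 1)

def checksum_ok(words_with_checksum, bits=8):
    return ones_complement_sum(words_with_checksum, bits) == (1 << bits) - 1

data = [0b10101001, 0b00111001]
csum = make_checksum(data)
print(f"{csum:08b}")                                # 00011101, as in the example
print(checksum_ok(data + [csum]))                   # True: the sum is all 1s, its complement 0
print(checksum_ok([0b00101001, 0b00111001, csum]))  # False: the error in the first bit is detected
```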


Checksum Methods
Now let us introduce an error in the first bit.

Receiver side:
  00101001
+ 00111001
+ 00011101
= 01111111
1's complement → 10000000
The result is non-zero, so the data has an error and is discarded.

Spring Semester 2021 32


Checksum Method - Properties
• It detects all errors involving an odd number of bits as well as most errors
involving an even number of bits.
• If one or more bits of a segment are damaged and the corresponding bit or bits of opposite value in a second segment are also damaged, the sum of those columns will not change and the receiver will not detect the error.
Transmitted word: 10101001 00111001 00011101
Received word:    00101001 10111001 00011101

Receiver side:
  00101001
+ 10111001
+ 00011101
= 11111111
1's complement → 00000000
The result is zero and no error is detected, even though errors were present.
Spring Semester 2021 33
Cyclic Redundancy Check
• Most powerful and widely used error detection technique and it is based on binary
division.
• CRC codes are also known as polynomial codes.
• Consider the 𝑑 bit piece of data, 𝐷 , that the sending node wants to send to the
receiving node.
• The sender and receiver must first agree on an 𝑟 + 1 bit pattern, known as a
generator, which we will denote as 𝐺.
• For a given piece of data, 𝐷, the sender will choose 𝑟 additional bits(also known as
frame check sequence), 𝑅, and append them to 𝐷 such that the resulting 𝑑 + 𝑟 bit
pattern is exactly divisible by 𝐺 (i.e., has no remainder) using modulo 2 arithmetic.
• All CRC calculations are done in modulo 2 arithmetic without carries in addition or
borrows in subtraction. This means that addition and subtraction are identical, and
both are equivalent to the bitwise exclusive or (XOR) of the operands.
Spring Semester 2021 34
Cyclic Redundancy Check
• The process of error checking with CRCs is thus simple: The receiver divides
the 𝑑 + 𝑟 received bits by 𝐺.
• If the remainder is nonzero, the receiver knows that an error has occurred;
otherwise the data is accepted as being correct.
Frame layout: D (d data bits) followed by R (the r-bit FCS), giving d + r bits in total.

• Mathematically, the transmitted frame is T = D · 2^r XOR R: left-shifting D by r bits yields the d + r bit pattern, and R fills the low-order r bits.
• In order to calculate R, we divide D · 2^r by G and take the remainder.


Spring Semester 2021 35
Cyclic Redundancy Check
• Given:
Message D = 1010001101 (10 bits), so d = 10
Generator G = 110101 (6 bits); G(X) = X^5 + X^4 + X^2 + 1, so its degree is r = 5
FCS R = ? (r = 5 bits)
d + r = 10 + 5 = 15 bits
The message is multiplied by 2^5, yielding 101000110100000.
Frame to be sent: 1010001101 followed by the 5-bit FCS (15 bits in total).

Spring Semester 2021 36


Cyclic Redundancy Check
[Figure: modulo-2 long division of 2^5 · D = 101000110100000 by G = 110101, giving quotient Q and remainder R = 01110]

Spring Semester 2021 37
Cyclic Redundancy Check
• The remainder is added to 2^5 · D to give T = 101000110101110, which is transmitted.
T = 1010001101 01110 (10 data bits + 5-bit FCS = 15 bits)
• If there are no errors, the receiver receives T intact. The received frame is divided by G.

Spring Semester 2021 38


Cyclic Redundancy Check
[Figure: modulo-2 long division of the received frame T = 101000110101110 by G = 110101]
Because there is no remainder, it is assumed that there have been no errors.

Spring Semester 2021 39
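A minimal Python sketch of the modulo-2 division used in this example (D = 1010001101, G = 110101); the helper name is my own:

```python
def mod2_remainder(bits: str, generator: str) -> str:
    """Return the r-bit remainder of dividing `bits` by `generator` in modulo-2 arithmetic."""
    r = len(generator) - 1
    work = [int(b) for b in bits]
    gen = [int(b) for b in generator]
    for i in range(len(work) - r):
        if work[i]:                      # XOR the generator in wherever the leading bit is 1
            for j, g in enumerate(gen):
                work[i + j] ^= g
    return "".join(map(str, work[-r:]))

D, G = "1010001101", "110101"
R = mod2_remainder(D + "0" * (len(G) - 1), G)   # divide 2^r * D by G
print(R)                                        # 01110, the FCS computed on the slide
T = D + R                                       # transmitted frame
print(mod2_remainder(T, G))                     # 00000: zero remainder, frame accepted
print(mod2_remainder("1110001101" + R, G))      # non-zero: a corrupted frame is rejected
```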
CRC – Some Standard Polynomials
• A second way of viewing the CRC process is to express all values as
polynomials in a dummy variable 𝑋, with binary coefficients. The coefficients
correspond to the bits in the binary number.
• Thus for D = 110011 we have D(X) = X^5 + X^4 + X + 1, and for G = 11001 we have G(X) = X^4 + X^3 + 1.
• Arithmetic operations are again modulo 2.
• International standards have been defined for 8-, 12-, 16-, and 32-bit
generators, G.
• The CRC-32 32 bit standard, which has been adopted in a number of link level
IEEE protocols, uses a generator of:
CRC-32 = X^32 + X^26 + X^23 + X^22 + X^16 + X^12 + X^11 + X^10 + X^8 + X^7 + X^5 + X^4 + X^2 + X + 1
Spring Semester 2021 40
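For reference, Python's standard library ships a CRC-32 routine based on this generator (using the bit-reversed convention common in practice), so a quick check is possible without writing the division by hand:

```python
import zlib

# zlib.crc32 computes a 32-bit CRC derived from the CRC-32 generator polynomial above.
frame = b"example payload"
fcs = zlib.crc32(frame)
print(hex(fcs))

# The receiver recomputes the CRC over the received bytes and compares it with the
# transmitted FCS; a mismatch means an error was detected.
print(zlib.crc32(frame) == fcs)               # True for an intact frame
print(zlib.crc32(b"examplE payload") == fcs)  # False: a single corrupted byte is detected
```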
Cyclic Redundancy Check - Properties
• Errors are not detected when their bit pattern (taken as a polynomial) is evenly divisible by G(X).
• The code detects:
• All single bit errors, if G(X) has more than one nonzero term.
• Any odd number of errors, as long as G(X) contains the factor (X + 1).
• Any burst error for which the length of the burst is less than or equal to r, that is, less than or equal to the length of the FCS.

Spring Semester 2021 41


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 3 – 1
Data Link Layer – Error Control

Spring Semester 2021 1


Introduction

• Once the data is dispatched over the transmission medium, it may be altered in various ways, so that the signals received at the remote end of a link differ from the transmitted signals.
remote end of a link differs from the transmitted signals.
• The effects of these adverse characteristics of a medium are
known as transmission impairments and they often reduce
transmission efficiency.
• In the case of binary data they may lead to errors, in that some
binary zeros are transformed into binary ones and vice versa.
• The three main impairments are attenuation, distortion and noise; noise is the main factor and constrains the operation of any communications system.
Spring Semester 2021 4
Introduction

• To overcome the effects of such impairments it is necessary to introduce some form of error control.
• The first step in any form of error control is to detect whether any errors are
present in the received data, a process which has been explored in some
detail in previous lecture.
• Having detected the presence of errors there are two strategies commonly
used to correct them:
• Either further computations are carried out at the receiver to correct the
errors, a process known as forward error control (correction)
• Or a message is returned to the transmitter indicating that errors have
occurred and requesting a retransmission of the data, which is known as
feedback error control.
Spring Semester 2021 5
Introduction
• It is possible to use codes that not only detect the presence of errors but also enable
errors to be corrected.
• On channels that are highly reliable, such as fiber optics, it is cheaper to use an error detection code and just retransmit the occasional block found to be faulty.
• However, on channels that make many errors, it is better to add enough redundancy to
each block for the receiver to be able to figure out what the original block was.
• At first glance, it would seem that correction is always better, since with detection we are
forced to discard the message and ask for another copy to be retransmitted. This uses
bandwidth and may introduce latency while waiting for retransmission.
• Error correction tends to be more useful when:
• Errors are quite probable
• The cost of retransmission is too high such as latency involved in retransmitting a
packet over a satellite.

Spring Semester 2021 6


Channel Coding
• Error control is also known as channel coding.
• Channel coding is the process of coding data prior to transmission over a
communications channel so that if errors do occur during transmission it is possible
to detect and possibly even to correct those errors once the data has been received.
• In order to achieve this error detection/correction some bit patterns need to be
identified as error free at the receiver, whereas other bit patterns will be identified as
erroneous.
• To increase the number of identifiable bit patterns at the receiver, additional bits,
known as redundant bits, are added to the data or information bits prior to
transmission.
• code rate = k/n; a measure of how much additional bandwidth is required to carry data at the same data rate as without the code.
• redundancy = (n − k)/k; the ratio of redundant bits to data bits.
Spring Semester 2021 7
Channel Coding
• Figure shows in general how
coding is done.
• On the transmission end,
each 𝑘 bit block of data is
mapped into an 𝑛 bit (𝑛 >
𝑘) block called a codeword,
using an FEC (forward error
correction) encoder.
• The codeword is then
transmitted.

Spring Semester 2021 8


Channel Coding
• During transmission, the
signal is subject to
impairments, which may
produce bit errors in the
signal.
• At the receiver, the incoming
signal is demodulated to
produce a bit string that is
similar to the original
codeword but may contain
errors.
Spring Semester 2021 9
Channel Coding
• This block is passed through
an FEC decoder, with one of
four possible outcomes:
1. If there are no bit errors, the
input to the FEC decoder is
identical to the original
codeword, and the decoder
produces the original data
block as output.

Spring Semester 2021 10


Channel Coding
• This block is passed through an
FEC decoder, with one of four
possible outcomes:
2. For certain error patterns, it is
possible for the decoder to detect
and correct those errors. Thus,
even though the incoming data
block differs from the transmitted
codeword, the FEC decoder is
able to map this block into the
original data block.

Spring Semester 2021 11


Channel Coding
• This block is passed through
an FEC decoder, with one of
four possible outcomes:
3. For certain error patterns,
the decoder can detect but not
correct the errors. In this case,
the decoder simply reports an
uncorrectable error.

Spring Semester 2021 12


Channel Coding
• This block is passed through
an FEC decoder, with one of
four possible outcomes:
4. For certain, typically rare,
error patterns, the decoder
does not detect that any errors
have occurred and maps the
incoming 𝑛 bit data block into a
𝑘 bit block that differs from the
original 𝑘 bit block.

Spring Semester 2021 13


Linear Block Codes

• Various different types of code are available for use in channel coding but
the simplest and most commonly used are called linear block codes.
• In a block code:
• The user data stream is segmented into blocks of 𝑘 bits
• Each 𝑘 bit block is encoded independently of other blocks to an 𝑛 bit
codeword (𝑛 > 𝑘)
• The code rate is 𝑘/𝑛
• The set of all possible source words has size 2𝑘
• The set of all possible words in the code space has size 2𝑛
• Out of these the code uses only 2𝑘 out of 2𝑛 elements
• These are called valid codewords

Spring Semester 2021 14


Linear Block Codes

• Channel errors can turn:


• a valid codeword into a word from the set of 2𝑛 − 2𝑘 unused codewords,
then the decoder must guess which was the transmitted codeword.
• When facing an unused codeword 𝑦, decoders essentially look for
the valid codeword that is “closest” to 𝑦.

Spring Semester 2021 15


The Hamming Codes

• This is an important group of error correcting codes pioneered by


R.W. Hamming in the 1950s.
• They involve the production of check(redundant) bits by adding
together different groups of data bits.
• Number of redundant bits: r must satisfy 2^r ≥ k + r + 1 (‘k’ data bits, ‘r’ redundant bits).
• The type of addition used is known as modulo 2 addition and is
equivalent to normal binary addition without any carries.
• Hamming codes can detect up to two bit errors or correct one bit
error.
Spring Semester 2021 16
The Hamming Codes

• We shall consider a Hamming (7,4) code, in which three check bits (𝑐1, 𝑐2 and 𝑐3) are
combined with four information bits (𝑘1, 𝑘2, 𝑘3 and 𝑘4) to produce a block of data of
length 𝑛 = 7.
• Three check equations are used to obtain the three check bits of this Hamming (7,4) code
as follows:
𝑐1 = 𝑘1 ⊕ 𝑘2 ⊕ 𝑘4
𝑐2 = 𝑘1 ⊕ 𝑘3 ⊕ 𝑘4
𝑐3 = 𝑘2 ⊕ 𝑘3 ⊕ 𝑘4
• Given data bits = 1010 then 𝑘1 = 1 , 𝑘2 = 0 , 𝑘3 = 1 and 𝑘4 = 0 and the check bits
obtained from the three check equations above are as follows:
𝑐1 = 𝑘1 ⊕ 𝑘2 ⊕ 𝑘4 = 1 ⊕ 0 ⊕ 0 = 1
𝑐2 = 𝑘1 ⊕ 𝑘3 ⊕ 𝑘4 = 1 ⊕ 1 ⊕ 0 = 0
𝑐3 = 𝑘2 ⊕ 𝑘3 ⊕ 𝑘4 = 0 ⊕ 1 ⊕ 0 = 1
• The codeword is obtained by adding the check bits to the end of the information bits and
therefore the data 1010101 will be transmitted (data bits first).
Spring Semester 2021 17
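As a quick cross-check of the three check equations, here is a minimal Python sketch (the function name encode74 is just illustrative):

```python
def encode74(k1, k2, k3, k4):
    """Hamming (7,4) encoder built from the three check equations above."""
    c1 = k1 ^ k2 ^ k4      # ^ is XOR, i.e. modulo-2 addition
    c2 = k1 ^ k3 ^ k4
    c3 = k2 ^ k3 ^ k4
    return (k1, k2, k3, k4, c1, c2, c3)

print(encode74(1, 0, 1, 0))   # -> (1, 0, 1, 0, 1, 0, 1), the codeword 1010101
```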
The Hamming Codes

• A complete set of codewords can be obtained in a similar way:


Codeword #   K1 K2 K3 K4 C1 C2 C3      Codeword #   K1 K2 K3 K4 C1 C2 C3
0 0 0 0 0 0 0 0 8 1 0 0 0 1 1 0
1 0 0 0 1 1 1 1 9 1 0 0 1 0 0 1
2 0 0 1 0 0 1 1 10 1 0 1 0 1 0 1
3 0 0 1 1 1 0 0 11 1 0 1 1 0 1 0
4 0 1 0 0 1 0 1 12 1 1 0 0 0 1 1
5 0 1 0 1 0 1 0 13 1 1 0 1 1 0 0
6 0 1 1 0 1 1 0 14 1 1 1 0 0 0 0
7 0 1 1 1 0 0 1 15 1 1 1 1 1 1 1
Spring Semester 2021 18
Hamming Distance

• An error that occurs in a transmitted codeword can be detected


only if the error changes the codeword into some other bit
pattern that does not appear in the code.
• The number of positions by which any two codewords in a code
differ is known as the Hamming distance(𝑑𝐻 ).
• Taking codewords 3 and 8 as an example, we have:
• Codeword 3 0 0 1 1 1 0 0
• Codeword 8 1 0 0 0 1 1 0
• These two codewords differ in positions 1, 3, 4 and 6 , so that
the distance between these two words is four.
Spring Semester 2021 19
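Counting differing positions is easy to express in code; a small sketch, assuming the two words are given as equal-length bit strings:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("0011100", "1000110"))   # codewords 3 and 8 -> 4
```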
Hamming Distance

• Since all linear block codes contain the all zeros codeword, then
an easy way to find the minimum distance(𝑑𝑚𝑖𝑛 ) of a code is to
compare a non zero codeword which has the minimum number
of ones with the all zeros codeword.
• Thus the minimum distance of a code is equal to the smallest
number of ones in any non zero codeword, which in the case of
this Hamming (7,4) code is three.
• If the codewords of a code differ in three or more positions then
error correction is possible since an erroneous bit pattern will be
‘closer’ to one codeword than another.

Spring Semester 2021 20


Hamming Codes - Properties

• If we take codewords 8 and 10 as an example, we have:
• Codeword 8:   1 0 0 0 1 1 0
• Codeword 10:  1 0 1 0 1 0 1
• The distance between these two codewords is three.
• If codeword 8 is transmitted and an error occurs in bit 3 then the received
data will be: 1 0 1 0 1 1 0
• This is not one of the other 15 Hamming (7,4) codewords since an error has
occurred.
• Furthermore, the most likely codeword to have been transmitted is
codeword 8 since this is the nearest(hamming distance of 1) to the received
bit pattern. Thus, it should also be possible to correct the received data by
making the assumption that the transmitted codeword was number 8.
Spring Semester 2021 21
Hamming Codes - Properties

• If, however, a second error occurs in bit 7 then the received bit
pattern will be 1 0 1 0 1 1 1
• It should still be possible to detect that an error has occurred
since this is not one of the 16 codewords.
• However, it is no longer possible to correct the errors since the
received bit pattern has changed in two places.
• Thus, this Hamming (7,4) code is able to detect two errors but
correct only one error.
• In general, if the minimum distance of a code is d_min, then d_min − 1 errors can normally be detected using a linear block code and ⌊(d_min − 1)/2⌋ (the integer part of (d_min − 1)/2) can be corrected.
Spring Semester 2021 22
Hamming Codes - Properties
• All Hamming codes (indeed, all linear block codes) possess the mathematical property that
if we add any two codewords together (modulo 2 addition) then the resulting sum is also a
codeword.
• For example, if we add codewords 1 and 2:
   0001111
⊕ 0010011
   0011100   which is codeword 3

• This allows us to represent a whole code by means of a subset of codewords, since further
codewords can simply be obtained by modulo 2 addition.
• The subset of codewords is often expressed as a matrix known as a generator matrix, G.
• The codewords chosen are normally powers of 2, that is codewords 1, 2, 4, 8.

Spring Semester 2021 23


Hamming Codes – Generator Matrix
• A suitable generator matrix for the Hamming (7,4) code consists of the
following four codewords:
        1000110
G  =    0100101
        0010011
        0001111
• The matrix has four rows and seven columns, that is it has dimensions
4 × 7 (𝑘 × 𝑛).
• The whole code can be generated from this matrix just by adding together
rows, and it is for this reason that it is called a generator matrix.
• A further reason for the generator matrix being so named is that it can be
used to generate codewords directly.
Spring Semester 2021 24
Hamming Codes – Encoding
• Information consisting of the bits 1010 is to be encoded using the Hamming (7,4) code.
• Use the generator matrix to obtain the codeword to be transmitted.
• The codeword is obtained by multiplying the four information bits (expressed as a row vector) by the
generator matrix as follows:
                    1000110
[1 0 1 0]    ×      0100101
                    0010011
                    0001111
• The multiplication is achieved by multiplying each column of the generator matrix in turn by the row
vector as follows:
[(1×1 ⊕ 0×0 ⊕ 1×0 ⊕ 0×0), (1×0 ⊕ 0×1 ⊕ 1×0 ⊕ 0×0), (1×0 ⊕ 0×0 ⊕ 1×1 ⊕ 0×0), (1×0 ⊕ 0×0 ⊕ 1×0 ⊕ 0×1),
 (1×1 ⊕ 0×1 ⊕ 1×0 ⊕ 0×1), (1×1 ⊕ 0×0 ⊕ 1×1 ⊕ 0×1), (1×0 ⊕ 0×1 ⊕ 1×1 ⊕ 0×1)] = 1 0 1 0 1 0 1
• Note that this process is, in fact, the same as using the check equations to obtain the check bits and
then add the check bits to the end of the information bits.
Spring Semester 2021 25
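The same row-vector-times-G multiplication can be written as a short Python sketch (plain lists over GF(2); the names are illustrative):

```python
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]

def encode(data, G):
    """Multiply a k-bit data row vector by G, modulo 2, to get the n-bit codeword."""
    n = len(G[0])
    return [sum(d & G[i][j] for i, d in enumerate(data)) % 2 for j in range(n)]

print(encode([1, 0, 1, 0], G))   # -> [1, 0, 1, 0, 1, 0, 1]
```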
Hamming Codes – Check Matrix
• It is also possible to express the three check equations of the Hamming (7,4) code in the form of a
matrix known as the check matrix 𝐻, as follows:
Check equations
𝑐1 = 𝑘1 ⊕ 𝑘2 ⊕ 𝑘4
𝑐2 = 𝑘1 ⊕ 𝑘3 ⊕ 𝑘4
𝑐3 = 𝑘2 ⊕ 𝑘3 ⊕ 𝑘4
Check Matrix
        1101100
H  =    1011010
        0111001
        (columns correspond to k1 k2 k3 k4 c1 c2 c3)

• The check matrix is obtained by having each row of the matrix correspond to one of the check
equations in that if a particular bit is present in an equation, then that bit is marked by a one in the
matrix. This results in a matrix with dimensions 3 × 7 (𝑐 × 𝑛).
Spring Semester 2021 26
Hamming Codes – Check Matrix
• If we now compare the two types of matrix we note that the generator matrix has an
identity matrix consisting of a diagonal of ones to its left and the check matrix has this
identity matrix to its right.
        1000110                    1101100
G  =    0100101          H  =      1011010
        0010011                    0111001
        0001111
• When a generator or check matrix conforms to this pattern, it is in standard echelon form.
• A further point to note is that if the echelons (identity submatrices) are removed from the two matrices, then what remains in each is the transpose of the other.
• In the case of the Hamming (7,4) code:
   1101                                110
   1011   (from the check matrix)      101   (from the generator matrix)
   0111   is the transpose of          011
                                       111
Spring Semester 2021 27
Hamming Codes – Check Matrix
Example: The generator matrix for a Hamming (15,11) code is as follows:
100000000001100
010000000000110
001000000000011
000100000001101
000010000001010
𝐺 = 000001000000101
000000100001110
000000010000111
000000001001111
000000000101011
000000000011001
Obtain the check matrix.

Spring Semester 2021 28


Hamming Codes – Check Matrix
• The code has a block length 𝑛 = 15, consisting of 𝑘 = 11 information bits
and 𝑐 = 4 check bits.
• The generator matrix has dimensions 11 × 15 , and includes an 11 ×
11 identity matrix (an echelon) to the left.
• The check matrix has dimensions 4 × 15 and contains a 4 × 4 identity
matrix to its right hand side.
• The rest of the check matrix is obtained by removing the identity matrix
from the generator matrix and transposing what is left. The check matrix is
as follows:
        100110101111000
H  =    110101111000100
        011010111100010
        001101011110001
Spring Semester 2021 29
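Because both matrices are in standard echelon form, deriving H from G is purely mechanical. Below is a small Python sketch of that derivation, assuming a systematic G = [I_k | P] so that H = [Pᵀ | I_(n−k)]; the (7,4) matrices are used for the printout, but the same function reproduces the (15,11) check matrix above.

```python
def check_matrix(G):
    """Given a systematic generator matrix G = [I_k | P], build H = [P-transpose | I_(n-k)]."""
    k, n = len(G), len(G[0])
    r = n - k
    P = [row[k:] for row in G]                            # k x r parity part of G
    H = []
    for i in range(r):
        row = [P[j][i] for j in range(k)]                 # i-th column of P becomes a row
        row += [1 if c == i else 0 for c in range(r)]     # append the r x r identity
        H.append(row)
    return H

G74 = [[1,0,0,0,1,1,0],
       [0,1,0,0,1,0,1],
       [0,0,1,0,0,1,1],
       [0,0,0,1,1,1,1]]
for row in check_matrix(G74):
    print(''.join(map(str, row)))   # 1101100 / 1011010 / 0111001
```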
Hamming Codes – Decoding
• To determine whether received data is error free or not, it is necessary for all of the check
equations to be verified.
• This can be done by recalculating the check bits from the received data or, alternatively,
received data can be checked by using the check matrix.
• As its name implies, the check matrix can be used to check the received data for errors in a
similar way to using the generator matrix to generate a code word.
• The check matrix, 𝐻, is multiplied by the received data expressed as a column vector:

Using the Hamming (7,4) code:

              1101100
  H matrix:   1011010     ∗     [1 0 0 0 1 1 0]ᵀ   (received data as a column vector)
              0111001
Spring Semester 2021 30
Hamming Codes – Decoding

• This time the multiplication is achieved by multiplying each row


of the check matrix in turn by the received data vector, as
follows:
(1×1 ⊕ 1×0 ⊕ 0×0 ⊕ 1×0 ⊕ 1×1 ⊕ 0×1 ⊕ 0×0)       0
(1×1 ⊕ 0×0 ⊕ 1×0 ⊕ 1×0 ⊕ 0×1 ⊕ 1×1 ⊕ 0×0)   =   0
(0×1 ⊕ 1×0 ⊕ 1×0 ⊕ 1×0 ⊕ 0×1 ⊕ 0×1 ⊕ 1×0)       0
• If, as is the case here, the received data is error free then the
result of this multiplication is zero.
• This result, which in this case is a 3 bit vector, is known as the
syndrome.
Spring Semester 2021 31
Hamming Codes – Decoding
Example: Information consisting of four ones is to be transmitted using the Hamming (7,4)
code. (a) Determine the transmitted codeword.
(b) If an error occurs in bit 4 during transmission, determine the syndrome.
(c) Show how the syndrome can be used to correct the error.

To determine the transmitted codeword we can use the three check equations to determine
the check bits:
𝑐1 = 𝑘1 ⊕ 𝑘2 ⊕ 𝑘4 = 1 ⊕ 1 ⊕ 1 = 1
𝑐2 = 𝑘1 ⊕ 𝑘3 ⊕ 𝑘4 = 1 ⊕ 1 ⊕ 1 = 1
𝑐3 = 𝑘2 ⊕ 𝑘3 ⊕ 𝑘4 = 1 ⊕ 1 ⊕ 1 = 1
• The transmitted codeword is therefore seven ones as follows:
𝑘1 𝑘2 𝑘3 𝑘4 𝑐1 𝑐2 𝑐3 = 1111111
Spring Semester 2021 32
Hamming Codes – Decoding
(b) If an error occurs in bit 4 during transmission, determine the syndrome.
If an error occurs in bit 4 then the received data is 1110111. To check whether there is an
error in the received data, we multiply by the check matrix:
      1101100                                        1
H  =  1011010     ∗     [1 1 1 0 1 1 1]ᵀ    =        1      (syndrome)
      0111001                                        1

The syndrome (1 1 1)ᵀ is identical to column 4 of the check matrix.
• The error that occurred has caused the syndrome to be nonzero. Furthermore, the position of the error can be located by comparing the syndrome with the columns of the check matrix. In this case the syndrome provides us with all we need to know about the error, since its value equates with column 4 of the check matrix, thus indicating an error in bit 4.
Spring Semester 2021 33
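The whole decode step (compute the syndrome, match it against a column of H, flip that bit) fits in a few lines of Python. This is a sketch for the single-error case only; the helper names are illustrative:

```python
H = [[1,1,0,1,1,0,0],
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]

def syndrome(H, r):
    """Multiply H by the received column vector, modulo 2."""
    return [sum(h & b for h, b in zip(row, r)) % 2 for row in H]

def correct_single_error(H, r):
    """If the syndrome is non-zero, flip the bit whose column of H equals it."""
    s = syndrome(H, r)
    r = r[:]                                   # work on a copy
    if any(s):
        for col in range(len(r)):
            if [H[i][col] for i in range(len(H))] == s:
                r[col] ^= 1
                break
    return r

received = [1, 1, 1, 0, 1, 1, 1]               # all-ones codeword with an error in bit 4
print(syndrome(H, received))                   # -> [1, 1, 1], which is column 4 of H
print(correct_single_error(H, received))       # -> [1, 1, 1, 1, 1, 1, 1]
```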
Some Other Codes

• Cyclic Codes - A cyclic code is one in which all the code words are
related by the fact that if a codeword is rotated, it becomes
another codeword – Cyclic Redundancy Check (CRC) – Already
discussed in previous lecture.
• Convolutional Codes - Convolutional codes work in a
fundamentally different manner in that they operate on data
continuously as it is received (or transmitted) by a network node.

Spring Semester 2021 34


Questions?

Spring Semester 2021 37


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 3 – 2
Data Link Layer – Flow Control

Spring Semester 2021 1


Feedback Error Control
• Error control refers to mechanisms to detect
and correct errors that occur in the
transmission of frames.
• Data are sent as a sequence of frames;
frames arrive in the same order in which they
are sent. There possibility of two types of
errors:
• Lost frame: A frame fails to arrive at the other
side. For example, a noise burst may damage
a frame to the extent that the receiver is not
aware that a frame has been transmitted.
• Damaged frame: A recognizable frame does
arrive, but some of the bits are in error (have
been altered during transmission).

Spring Semester 2021 4


Feedback Error Control
• The most common techniques for error control are based on some or all of the following
ingredients:
• Error detection: As discussed earlier
• Positive acknowledgment: The destination returns a positive acknowledgment to
successfully received, error free frames.
• Retransmission after timeout: The source retransmits a frame that has not been
acknowledged after a predetermined amount of time.
• Negative acknowledgment and retransmission: The destination returns a negative
acknowledgment to frames in which an error is detected. The source retransmits such
frames. This procedure is known as feedback error control.
• The process of retransmitting the data has traditionally been known as Automatic Repeat
Request (ARQ). There are three types of ARQ that have been used: namely,
• Stop and Wait,
• Go back N
• Selective Repeat.
Spring Semester 2021 5
ARQ Protocols
• Some important points:
• Transmitter must buffer packets for possible retransmission (here packet refers to the packet sent
by error control protocol, includes the message data and header information).
• Feedback channel needs bandwidth as well.
• Even for very few bit errors whole packet is retransmitted.
• ARQ protocols differ:
• in the number of allowed outstanding frames /unacknowledged frames
• in the buffering requirements at receiver / transmitter
• in the way feedback is provided (positive / negative acknowledgement frames, timers)
• ARQ protocols also use acknowledgement packets, called ACKs, which is a small control frame that a
protocol sends back to its peer(transmitter) saying that it has received an earlier frame.
• A control frame is a header without any data. The receipt of an acknowledgment indicates to the sender
of the original frame that its frame was successfully delivered.
• If the sender does not receive an acknowledgment after a reasonable amount of time, then it
retransmits the original frame. This action of waiting a reasonable amount of time is called a timeout.
Spring Semester 2021 6
Stop and Wait ARQ
• The simplest ARQ scheme is the stop and wait algorithm.
• The idea of stop and wait is straightforward: After transmitting one frame, the sender waits for an
acknowledgment before transmitting the next frame.
• If the acknowledgment does not arrive after a certain period of time, the sender times out and
retransmits the original frame.
• At the receiver, the data is checked for errors and if it is error free an acknowledgement (ACK) is
sent back to the transmitter.
• If errors are detected at the receiver a negative acknowledgement (NAK) is returned.
• Since errors could equally occur in the ACK or NAK signals, they should also be checked for errors.
• Thus, only if each frame is received error free and an ACK is returned error free can the next
frame be transmitted.
• Stop and wait ARQ guarantees in sequence delivery and is simple to implement.
• It requires one buffer at transmitter and one buffer at receiver.

Spring Semester 2021 7


Stop and Wait ARQ
• This figure is a timeline,
which is a common way to
depict a protocol’s behavior.
• The sending side is
represented on the left, the
receiving side is depicted on
the right, and time flows from
top to bottom.
• Figure shows the situation in
which the ACK is received
before the timer expires.
Spring Semester 2021 8
Stop and Wait ARQ
• This figure is a timeline, which is a
common way to depict a protocol’s
behavior.
• The sending side is represented on the
left, the receiving side is depicted on the
right, and time flows from top to bottom.
• Figure shows the situation in which the
original frame is lost.
• By “lost” we mean that the frame was
corrupted while in transit, that this
corruption was detected by an error code
on the receiver, and that the frame was
subsequently discarded.

Spring Semester 2021 9


Stop and Wait ARQ
• This figure is a timeline,
which is a common way to
depict a protocol’s behavior.
• The sending side is
represented on the left, the
receiving side is depicted on
the right, and time flows from
top to bottom.
• Figure shows the situation in
which the ACK is lost.
Spring Semester 2021 10
Stop and Wait ARQ
• This figure is a timeline,
which is a common way to
depict a protocol’s behavior.
• The sending side is
represented on the left, the
receiving side is depicted on
the right, and time flows from
top to bottom.
• Figure shows the situation in
which the timeout fires too
soon.
Spring Semester 2021 11
Stop and Wait ARQ
• Suppose the sender sends a frame and the receiver acknowledges it, but the acknowledgment is
either lost or delayed in arriving.
• This situation is illustrated in timelines on slide # 10 and 11.
• In both cases, the sender times out and retransmits the original frame, but the receiver will think
that it is the next frame, since it correctly received and acknowledged the first frame.
• This has the potential to cause duplicate copies of a frame to be delivered.
• To address this problem, the header for a stop and wait protocol usually includes a 1 bit sequence
number - that is, the sequence number can take on the values 0 and 1 - and the sequence
numbers used for each frame alternate.
• Thus, when the sender retransmits frame 0, the receiver can determine that it is seeing a second
copy of frame 0 rather than the first copy of frame 1 and therefore can ignore it (the receiver still
acknowledges it, in case the first ACK was lost).

Spring Semester 2021 12
Spring Semester 2021 13
Stop and Wait ARQ
• The main shortcoming of the stop and wait algorithm is that it allows the sender to have
only one outstanding frame on the link at a time, and this may be far below the link’s
capacity.
• Consider, for example, a 1.5 Mbps link with a 45 ms round trip time.
• This link has a: delay × bandwidth = 1.5 Mbps × 45 ms ≈ 8 KB.
(The significance of the delay × bandwidth product is that it represents the amount of data that could be in transit.)
• Since the sender can send only one frame per RTT, and assuming a frame size of 1 KB,
about one eighth of the link’s capacity is used. To use the link fully, then, we’d like the
sender to be able to transmit up to eight frames before having to wait for an
acknowledgment.

Spring Semester 2021 14
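A quick back-of-the-envelope check of these numbers in Python, following the slide's approximation that the sender gets only one frame into the pipe per RTT (the 1 KB frame size is the slide's assumption):

```python
bandwidth_bps = 1.5e6          # 1.5 Mbps link
rtt_s = 0.045                  # 45 ms round-trip time
frame_bits = 8 * 1024          # 1 KB frames

pipe_bits = bandwidth_bps * rtt_s              # 67,500 bits in flight, roughly 8 KB
frames_to_fill_pipe = pipe_bits / frame_bits   # ~8 frames needed to keep the pipe full
utilisation = frame_bits / pipe_bits           # stop-and-wait uses roughly 1/8 of capacity
print(round(pipe_bits / 8 / 1024, 2), round(frames_to_fill_pipe, 1), round(utilisation, 2))
# -> 8.24 (KB), 8.2 (frames), 0.12
```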
Sliding Window ARQ
• The sliding window algorithm works as follows.
• First, the sender assigns a sequence number, denoted SeqNum, to each frame.
• The sender maintains three variables:
• The send window size, denoted SWS, gives the upper bound on the number of
outstanding (unacknowledged) frames that the sender can transmit;
• LAR denotes the sequence number of the last acknowledgment received;
• and LFS denotes the sequence number of the last frame sent.
• The sender also maintains the following invariant:
𝐿𝐹𝑆 − 𝐿𝐴𝑅 ≤ 𝑆𝑊𝑆

Spring Semester 2021 15


Sliding Window ARQ
• When an acknowledgment arrives, the sender moves LAR to the right,
thereby allowing the sender to transmit another frame.
• Also, the sender associates a timer with each frame it transmits, and it
retransmits the frame if the timer expires before an ACK is received.
• Sender has to buffer up to SWS frames since it must be prepared to
retransmit them until they are acknowledged.

Spring Semester 2021 16


Sliding Window ARQ
• The receiver maintains the following three variables:
• The receive window size, denoted RWS, gives the upper bound on the
number of out of order frames that the receiver is willing to accept;
• LAF denotes the sequence number of the largest acceptable frame;
• LFR denotes the sequence number of the last frame received.
• The receiver also maintains the following invariant:
𝐿𝐴𝐹 − 𝐿𝐹𝑅 ≤ 𝑅𝑊𝑆

Spring Semester 2021 17


Sliding Window ARQ
• When a frame with sequence number SeqNum arrives, the receiver takes the following action.
• If 𝑆𝑒𝑞𝑁𝑢𝑚 ≤ 𝐿𝐹𝑅 𝑜𝑟 𝑆𝑒𝑞𝑁𝑢𝑚 > 𝐿𝐴𝐹, then the frame is outside the receiver’s window and it is
discarded.
• If 𝐿𝐹𝑅 < 𝑆𝑒𝑞𝑁𝑢𝑚 ≤ 𝐿𝐴𝐹, then the frame is within the receiver’s window and it is accepted.
• Now the receiver needs to decide whether or not to send an ACK.
• Let SeqNumToAck denote the largest sequence number not yet acknowledged, such that all frames
with sequence numbers less than or equal to SeqNumToAck have been received.
• The receiver acknowledges the receipt of SeqNumToAck, even if higher numbered packets have
been received.
• This acknowledgment is said to be cumulative.
• It then sets 𝐿𝐹𝑅 = 𝑆𝑒𝑞𝑁𝑢𝑚𝑇𝑜𝐴𝑐𝑘 and adjusts 𝐿𝐴𝐹 = 𝐿𝐹𝑅 + 𝑅𝑊𝑆.

Spring Semester 2021 18
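The receiver-side rules above translate almost directly into code. A minimal Python sketch, assuming unbounded integer sequence numbers and ignoring frame contents (class and variable names are illustrative):

```python
class SlidingWindowReceiver:
    """Receiver bookkeeping: RWS, LFR, LAF and cumulative acknowledgments."""

    def __init__(self, rws):
        self.RWS = rws
        self.LFR = -1                      # last frame received (in order); -1 = none yet
        self.LAF = self.LFR + self.RWS     # largest acceptable frame
        self.held = set()                  # out-of-order frames held back

    def receive(self, seq_num):
        if seq_num <= self.LFR or seq_num > self.LAF:
            return None                    # outside the window: discard, no new ACK
        self.held.add(seq_num)
        while self.LFR + 1 in self.held:   # slide forward over any contiguous run
            self.LFR += 1
            self.held.discard(self.LFR)
        self.LAF = self.LFR + self.RWS
        return self.LFR                    # cumulative ACK: all frames <= LFR received

rx = SlidingWindowReceiver(rws=4)
print(rx.receive(0))   # -> 0
print(rx.receive(2))   # out of order: buffered, cumulative ACK stays at 0
print(rx.receive(1))   # fills the gap -> cumulative ACK 2
```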


Go Back N ARQ
• Go back N combats the inefficiency of Stop and Wait by allowing N outstanding frames, where N is the window size and ‘outstanding’ means not yet acknowledged.
• While no errors occur, the destination will acknowledge incoming frames
as usual (RR = receive ready).
• If the destination station detects an error in a frame, it may send a
negative acknowledgment (REJ = reject)for that frame.
• The destination station will discard that frame and all future incoming
frames until the frame in error is correctly received.
• Thus, the source station, when it receives a REJ, must retransmit the
frame in error plus all succeeding frames that were transmitted in the
interim.
Spring Semester 2021 19
Go Back N ARQ
Scenario
• Suppose that station A is sending frames to station B.
• After each transmission, A sets an acknowledgment timer for the frame just transmitted.
• Suppose that B has previously successfully received frame (𝑖 − 1) and A has just transmitted frame 𝑖.
• The go back N technique takes into account the following steps:
1. Damaged frame. If the received frame is invalid (i.e., B detects an error, or the frame is so damaged
that B does not even perceive that it has received a frame), B discards the frame and takes no
further action as the result of that frame. There are two subcases:
(a) Within a reasonable period of time, A subsequently sends frame (𝑖 + 1). B receives frame (𝑖 + 1) out of order and sends a REJ 𝑖. A must retransmit frame 𝑖 and all subsequent frames.
(b) A does not soon send additional frames. B receives nothing and returns neither an RR nor a
REJ. When A’s timer expires, it transmits an RR frame that includes a bit known as the P bit, which is set
to 1. B interprets the RR frame with a P bit of 1 as a command that must be acknowledged by sending
an RR indicating the next frame that it expects, which is frame 𝑖. When A receives the RR, it retransmits
frame 𝑖. Alternatively, A could just retransmit frame 𝑖 when its timer expires.
Spring Semester 2021 20
Go Back N ARQ
Scenario
2. Damaged RR. There are two subcases:
(a) B receives frame 𝑖 and sends RR which suffers an error in transit. Because
acknowledgments are cumulative (e.g., RR 6 means that all frames through 5 are acknowledged), it may
be that A will receive a subsequent RR to a subsequent frame and that it will arrive before the timer
associated with frame 𝑖 expires.
(b) If A’s timer expires, it transmits an RR command. It sets another timer, called the P bit
timer. If B fails to respond to the RR command, or if its response suffers an error in transit, then A’s P bit
timer will expire. At this point, A will try again by issuing a new RR command and restarting the P bit
timer. This procedure is tried for a number of iterations. If A fails to obtain an acknowledgment after
some maximum number of attempts, it initiates a reset procedure.
3. Damaged REJ. If a REJ is lost, this is equivalent to Case 1b as discussed in previous slide.

Spring Semester 2021 21


Spring Semester 2021 22
Go Back N Properties
• The transmitter needs 𝑁 buffers, the receiver only a single buffer to accept
an incoming packet
• If a packet fails, this packet and all subsequent packets are retransmitted,
even if the latter were correctly received.
• If the packet error rate (PER) is small, Go back N is reasonably efficient, but
for higher PERs the protocol retransmits many correctly received packets and
becomes inefficient.

Spring Semester 2021 23


Selective Repeat ARQ
• With selective repeat ARQ, the only frames retransmitted are those that
receive a negative acknowledgment, in this case called SREJ, or those that
time out.
• It would appear to be more efficient than Go back N, because it minimizes
the amount of retransmission.
• On the other hand, the receiver must maintain a buffer large enough to save
post SREJ frames until the frame in error is retransmitted and must contain
logic for reinserting that frame in the proper sequence.
• The transmitter, too, requires more complex logic to be able to send a frame
out of sequence.

Spring Semester 2021 24


Spring Semester 2021 25
Flow Control
• Flow control is a technique for assuring that a transmitting entity does not
overwhelm a receiving entity with data.
• The receiving entity typically allocates a data buffer of some maximum length for a
transfer.
• When data are received, the receiver must do a certain amount of processing
before passing the data to the higher layers.
• In the absence of flow control, the receiver’s buffer may fill up and overflow while
it is processing old data.
• For this section, we assume that all frames that are transmitted are successfully
received; no frames are lost and none arrive with errors.
• Flow control Mechanism used:
• Stop and wait Flow Control
• Sliding Window Flow Control
Spring Semester 2021 26
Stop and Wait Flow Control
• The simplest form of flow control, known as stop and wait flow control.
• It works as follows:
• A source entity transmits a frame. After the destination entity receives
the frame, it indicates its willingness to accept another frame by sending
back an acknowledgment to the frame just received.
• The source must wait until it receives the acknowledgment before
sending the next frame.
• The destination can thus stop the flow of data simply by withholding
acknowledgment.
• The main shortcoming of the stop and wait algorithm is that it allows the
sender to have only one outstanding frame on the link at a time, and this
may be far below the link’s capacity.
Spring Semester 2021 27
Sliding Window Flow Control
• The essence of the problem in stop and wait flow control is that only one frame at a time can be in transit.
• In situations where the bit length of the link is greater than the frame length serious inefficiencies result. Efficiency
can be greatly improved by allowing multiple frames to be in transit at the same time.
• Let us assume two stations, A and B, connected via a full duplex link. Station B allocates buffer space for 𝑊 frames.
Thus, B can accept 𝑊 frames, and A is allowed to send 𝑊 frames without waiting for any acknowledgments.
• To keep track of which frames have been acknowledged, each is labeled with a sequence number. B acknowledges a
frame by sending an acknowledgment that includes the sequence number of the next frame expected.
• This acknowledgment also implicitly announces that B is prepared to receive the next 𝑊 frames, beginning with the
number specified.
• This scheme can also be used to acknowledge multiple frames. For example, B could receive frames 2, 3, and 4 but
withhold acknowledgment until frame 4 has arrived. By then returning an acknowledgment with sequence number 5,
B acknowledges frames 2, 3, and 4 at one time.
• A maintains a list of sequence numbers that it is allowed to send, and B maintains a list of sequence numbers that it
is prepared to receive.
• Each of these lists can be thought of as a window of frames. The operation is referred to as sliding window flow
control.

Spring Semester 2021 28


Sliding Window Flow Control
• Some more comments on sliding window flow control.
• Sequence number to be used occupies a field in the frame, therefore , it is limited to a range of values.
• For example, for a 3 bit field, the sequence number can range from 0 to 7. Accordingly, frames are
numbered modulo 8; that is, after sequence number 7, the next number is 0.
• In general, for a 𝑘 bit field the range of sequence numbers is 0 through 2𝑘 −1 and frames are numbered
modulo2𝑘 .
• The maximum window size is 2𝑘 − 1.
• Most data link control protocols also allow a station to cut off the flow of frames from the other side by
sending a Receive Not Ready (RNR) message, which acknowledges former frames but forbids transfer of
future frames.
• Thus, RNR 5 means “I have received all frames up through number 4 but am unable to accept any more
at this time.” At some subsequent point, the station must send a normal acknowledgment to reopen the
window.

Spring Semester 2021 29


Spring Semester 2021 30
Spring Semester 2021 31
Piggybacking
• So far, transmission in one direction only. If two stations exchange data, each needs to
maintain two windows, one for transmit and one for receive, and each side needs to send the
data and acknowledgments to the other.
• To provide efficient support for this requirement, a feature known as piggybacking is typically
provided.
• Each data frame includes a field that holds the sequence number of that frame plus a field
that holds the sequence number used for acknowledgment.
• When a data frame arrives, instead of immediately sending a separate control frame, the
receiver restrains itself and waits until the network layer passes it the next packet.
• The acknowledgement is attached to the outgoing data frame (using the ack field in the
frame header).
• In effect, the acknowledgement gets a free ride on the next outgoing data frame.
• The technique of temporarily delaying outgoing acknowledgements so that they can be
hooked onto the next outgoing data frame is known as piggybacking.
Spring Semester 2021 32
Piggybacking
• The principal advantage of using piggybacking over having distinct acknowledgement frames is a better
use of the available channel bandwidth.
• The ack field in the frame header costs only a few bits, whereas a separate frame would need a header,
the acknowledgement, and a checksum.
• Thus, if a station has data to send and an acknowledgment to send, it sends both together in one
frame, saving communication capacity.
• If a station has an acknowledgment but no data to send, it sends a separate acknowledgment frame,
such as RR or RNR.
• If a station has data to send but no new acknowledgment to send, it must repeat the last
acknowledgment sequence number that it sent.
• This is because the data frame includes a field for the acknowledgment number, and some value must
be put into that field. When a station receives a duplicate acknowledgment, it simply ignores it.
• Sliding window flow control is potentially much more efficient than stop and wait flow control. The
reason is that, with sliding window flow control, the transmission link is treated as a pipeline that may
be filled with frames in transit. In contrast, with stop and wait flow control, only one frame may be in
the pipe at a time.
Spring Semester 2021 33
Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 4 – 1
Data Link Layer Protocols - HDLC

Spring Semester 2021 1


Introduction
• In the following slides we will examine several widely used data
link protocols.
• The first one, HDLC, is a classical bit oriented protocol whose
variants have been in use for decades in many applications.

Spring Semester 2021 3


HDLC – High Level Data Link Control
• HDLC is a commonly used protocol developed by the ISO and used
to control data transfer over a link.
• It is derived from the data link protocol first used in the IBM
mainframe world: SDLC (Synchronous Data Link Control) protocol.
• It includes functions of flow control, link management and error
control.
• It thus serves as a good practical illustration of the principles
discussed in data link layer lectures.
• Not only is HDLC widely used, but it is the basis for many other
important data link control protocols.
Spring Semester 2021 4
HDLC - Basic Characteristics
• The protocol allows for a variety of different types of link.
• The two nodes at either end of the link are referred to as stations.
• To satisfy the requirements of variety of applications, HDLC defines
three types of stations, two link configurations, and three data
transfer modes of operation.

Spring Semester 2021 5


HDLC - Station Types
• A data link involves two or more
participating stations. The three station
types are:
• Primary station: Acts as a master and
responsible for controlling the operation of the
link. Frames issued by the primary are called
commands.
• Secondary station: Operates under the control
of the primary station. Frames issued by a
secondary are called responses. The primary
maintains a separate logical link with each
secondary station on the line.
• Combined station: Combines the features of
primary and secondary. A combined station
may issue both commands and responses.

Spring Semester 2021 6


HDLC - Link Configurations
• Two types of link configuration are:
• Unbalanced configuration: This is the situation in which a single
primary station has control over the operation of one or more
secondary stations and supports both full duplex and half
duplex transmission. Primary station establishes and maintains
the link and is responsible to error recovery.

Spring Semester 2021 7


HDLC - Link Configurations
• Two types of link configuration are:
• Balanced configuration: This refers to a point to point link
which has two combined stations, each capable of issuing a
command. Balanced configuration supports both full duplex and
half duplex transmission. Stations are peers on the link and are
and share equal responsibility for error recovery and line
management.

Spring Semester 2021 8


HDLC - Transfer Modes
• The three data transfer modes are:
• Normal response mode (NRM): Used with an unbalanced configuration. The primary may
initiate data transfer to a secondary, but a secondary may only transmit data in response
to a command from the primary.

Spring Semester 2021 9


HDLC - Transfer Modes
• The three data transfer modes are
• Asynchronous balanced mode (ABM): Used with a balanced configuration. Either combined station
may initiate transmission without receiving permission from the other combined station. Both
stations are equally responsible for error recovery and connection establishment.

• Asynchronous response mode (ARM): Used with an unbalanced configuration. The secondary may
initiate transmission without explicit permission of the primary. The primary still retains
responsibility for the line, including initialization, error recovery, and logical disconnection.

Spring Semester 2021 10


HDLC - Transfer Modes
• Normal Response Mode is used on multidrop lines, in which a
number of terminals are connected to a host computer. The
computer polls each terminal for input. Normal Response Mode
is also sometimes used on point to point links, particularly if the
link connects a terminal or other peripheral to a computer.
• Asynchronous Balance Mode is the most widely used of the
three modes; it makes more efficient use of a full duplex point
to point link because there is no polling overhead.
• Asynchronous Response Mode is rarely used; it is applicable to
some special situations in which a secondary may need to
initiate transmission.
Spring Semester 2021 11
HDLC - Frame Format
• HDLC uses synchronous transmission.
• All transmissions are in the form of frames, and a single frame format suffices for all
types of data and control exchanges.
• HDLC frame consists of the following:
• Flag field,
• Address field, Header
• Control field,
• Information field,
• Frame Check Sequence (FCS)
• Flag field Trailer

Spring Semester 2021 12


HDLC - Flag Fields(8 Bit)
• Flag fields delimit the frame at both ends with the unique pattern of
01111110.
• A single flag may be used as the closing flag for one frame and the
opening flag for the next.
• On both sides of the user network interface, receivers are
continuously checking for the flag sequence to synchronize on the
start of a frame.
• Because the protocol allows the presence of arbitrary bit patterns
there is no assurance that the pattern 01111110 will not appear
somewhere inside the frame, thus destroying synchronization.
• To avoid this problem, a procedure known as bit stuffing is used. For
all bits between the starting and ending flags, the transmitter inserts
an extra 0 bit after each occurrence of five 1s in the frame.
Spring Semester 2021 13
HDLC - Flag Fields(8 Bit)
• After detecting a starting flag, the receiver monitors the bit stream.
• When a pattern of five 1s appears, the sixth bit is examined. If this bit is 0, it is
deleted. If the sixth bit is a 1 and the seventh bit is a 0, the combination is
accepted as a flag.
• If the sixth and seventh bits are both 1, the sender is indicating an abort
condition.
• With the use of bit stuffing, arbitrary bit patterns can be inserted into the data field
of the frame. This property is known as data transparency.

Original pattern:
111111111111011111101111110

After bit stuffing:
1111101111101101111101011111010

Spring Semester 2021 14
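The transmitter-side stuffing rule is simple to sketch in Python (the receiver does the reverse, deleting a 0 that follows five consecutive 1s):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s (HDLC transmitter side)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')      # stuffed bit, removed again by the receiver
            run = 0
    return ''.join(out)

print(bit_stuff("111111111111011111101111110"))
# -> 1111101111101101111101011111010, as in the example above
```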


HDLC - Address Field(8 Bits or extendable)

• The address field identifies the secondary station that transmitted or is to receive the
frame.
• This field is not needed for point to point links but is always included for the sake of
uniformity.
• The address field is usually 8 bits long but, by prior agreement, an extended format may be
used in which the actual address length is a multiple of 7 bits.
• The leftmost bit of each octet is 1 or 0 according as it is or is not the last octet of the
address field. The remaining 7 bits of each octet form part of the address.
• The single octet address of 11111111 is interpreted as the all stations address in both basic
and extended formats. It is used to allow the primary to broadcast a frame for reception by
all secondaries.

Spring Semester 2021 15


HDLC - Control Field(8 or 16 Bits)
• The control field distinguishes between the three different types of frame
used in HDLC, namely:
• Information Frame (I) ,
• Supervisory Frame (S) and
• Unnumbered Frame (U).
• The first one or two bits of the field determine the type of frame.
• The field also contains control information which is used for flow control and
link management.

Spring Semester 2021 16


HDLC - Control Field(8 or 16 Bits)

Spring Semester 2021 17


HDLC - Control Field For Information Frames

• Information frames (I-frames) carry the data to be transmitted for the user . Additionally, flow and error control
data, using the ARQ mechanism, are piggybacked on an information frame.
• An I-frame is distinguished by the first bit of the control field being a binary 0.
• The control field of an I-frame contains both a send sequence number, N(S), and a receive sequence number,
N(R), which are used to facilitate flow control.
• N(S) is the sequence number of frames sent and N(R) the sequence number of frames successfully received by
the sending node prior to the present frame being sent.
• Thus the first frame transmitted in a data transfer has send and receive sequence numbers 0,0.
• Since 3 bits are available for each of the sequence numbers N(S) and N(R), they can have values only between 0
and 7, that is they use modulo-8 numbering. This imposes a limit on the size of the windows used for flow control.
• I-frames also contain a poll/final (P/F) bit. This acts as a poll bit when used by a primary station and a final bit by
a secondary.
• A poll bit is set when a primary is transmitting to a secondary and requires a frame or frames to be returned in
response, and the final bit is set in the final frame of a response.

Spring Semester 2021 18


HDLC - Control Field For Supervisor Frames

• Supervisory frames (S-frames) provide the ARQ mechanism when piggybacking is


not used.
• S-frames are distinguished by the first 2 bits of the control field being 10. These
frames are used as acknowledgements for flow and error control. HDLC allows for
both go back n and selective repeat ARQ.
• They also contain two function bits which allow for four functions which lists the
supervisory commands/responses.
• The S-frames contain only a receive sequence number since they relate to the
acknowledgement of I-frames and not to their transmission.
• The three bits forming N(R) indicate that all I-frames numbered up to 𝑁(𝑅) − 1 have been correctly received and that the receiver is expecting the I-frame numbered 𝑁(𝑅).

Spring Semester 2021 19


HDLC - Control Field For Supervisor Frames

• Four types of S-frames are:


• Receive Ready (RR) - An RR frame
confirms receipt of frames numbered
up to 𝑁(𝑅) − 1 and indicates that it
is ready to receive frame number
𝑁(𝑅).
• An RR frame with the poll bit set to
1 may be used by a primary station
to poll a secondary station.

Spring Semester 2021 20


HDLC - Control Field For Supervisor Frames

• Four types of S-frames are:


• Reject (REJ)- An REJ frame is used to
indicate that a transmission error has
been detected by the primary or
secondary station.
• A station requests retransmission of
information frame number 𝑁(𝑅) and
those that follow it.
• It implies that I-frames numbered
𝑁(𝑅) − 1 and below have been correctly
received.
• This strategy is similar to Go Back N
protocol.
Spring Semester 2021 21
HDLC - Control Field For Supervisor Frames

• Four types of S-frames are:


• Receive Not Ready (RNR) - An RNR frame
indicates a temporary busy condition.
• The station which sends the RNR frame
acknowledges I-frames up to 𝑁(𝑅) − 1 ,
expects I-frame number 𝑁(𝑅) , and
indicates that it cannot accept any more I-
frames.
• When the condition has been repaired, the
receiver sends a RECEIVE READY, REJECT,
or certain control frames.

Spring Semester 2021 22


HDLC - Control Field For Supervisor Frames

• Four types of S-frames are:


• Selective Reject (SREJ) - An SREJ frame is
used by the primary or secondary to
request retransmission of the single I-
frame numbered 𝑁(𝑅).
• All frames up to 𝑁 𝑅 − 1 have been
received correctly but 𝑁(𝑅) has not.
• Once SREJ has been transmitted, the only
I-frames accepted are the frame N(R) and
those that follow it.
• This strategy is similar to Selective Repeat
protocol.
Spring Semester 2021 23
HDLC - Control Field For Unnumbered Frames

• Unnumbered frames do not contain any sequence numbers and are used for various control functions.
• They have five function bits which allow for the fairly large number of commands and responses , but not all 32
possibilities are used.

24
Spring Semester 2021
HDLC - Information & FCS Field

• The information field is present only in I-frames and some U-


frames. The field can contain any sequence of bits but must
consist of an integral number of octets. The length of the
information field is variable up to some system defined
maximum.
• The frame check sequence (FCS) is an error detecting code
calculated from the remaining bits of the frame, exclusive of
flags. The normal code is the 16-bit CRC-CCITT defined as
follows: CRC-CCITT = X^16 + X^12 + X^5 + 1

25
Spring Semester 2021
HDLC Operation
• HDLC operation consists of the exchange of I-frames, S-frames,
and U-frames between two stations.
• The operation of HDLC involves three phases.
• First, one side or another initializes the data link so that
frames may be exchanged in an orderly fashion. During this
phase, the options that are to be used are agreed upon.
• After initialization, the two sides exchange user data and the
control information to exercise flow and error control.
• Finally, one of the two sides signals the termination of the
operation.
26
Spring Semester 2021
HDLC Operation - Initialization
• Either side may request initialization by issuing one of the set mode
commands. This command serves three purposes:
• It signals the other side that initialization is requested.
• It specifies which of the three modes (NRM,ABM,ARM) is
requested.
• It specifies whether 3- or 7-bit sequence numbers are to be used.
• If the other side accepts this request, then the HDLC module on that
end transmits an unnumbered acknowledged (UA) frame back to the
initiating side.
• If the request is rejected, then a disconnected mode (DM) frame is
sent.
27
Spring Semester 2021
• Figure shows the frames involved in link setup and
disconnect.
• The HDLC protocol entity for one side issues an SABM
command to the other side and starts a timer.
• The other side, upon receiving the SABM, returns a UA
response and sets local variables and counters to their initial
values.
• The initiating entity receives the UA response, sets its
variables and counters, and stops the timer.
• The logical connection is now active, and both sides may
begin transmitting frames.
• If the timer expires without a response to an SABM, the
originator will repeat the SABM.
• This would be repeated until a UA or DM is received or until,
after a given number of tries, the entity attempting initiation
gives up and reports failure.
• The figure shows the disconnect procedure. One side issues
a DISC command, and the other responds with a UA
response.
Spring Semester 2021 28
HDLC Operation – Data Transfer
• When the initialization has been requested and accepted, then a logical
connection is established.
• Both sides may begin to send user data in I-frames, starting with
sequence number 0.
• The 𝑁(𝑆) and 𝑁(𝑅) fields of the I-frame are sequence numbers that
support flow control and error control.
• An HDLC module sending a sequence of I-frames will number them
sequentially, modulo 8 or 128, depending on whether 3- or 7-bit sequence
numbers are used, and place the sequence number in 𝑁(𝑆).
• 𝑁(𝑅) is the acknowledgment for I-frames received; it enables the HDLC
module to indicate which number I-frame it expects to receive next.

29
Spring Semester 2021
• Figure shows full duplex exchange of I-frames.
• When an entity sends a number of I-frames in a
row with no incoming data, then the receive
sequence number is simply repeated.
• When an entity receives a number of I-frames in a
row with no outgoing frames, then the receive
sequence number in the next outgoing frame
must reflect the cumulative activity.
• Note that, in addition to I-frames, data exchange
may involve supervisory frames.

Spring Semester 2021 30


• Figure shows an operation involving a busy condition. Such
a condition may arise because an HDLC entity is not able to
process I-frames as fast as they are arriving, or the
intended user is not able to accept data as fast as they
arrive in I-frames. – Flow Control
• In either case, the entity’s receive buffer fills up and it must
halt the incoming flow of I-frames, using an RNR command.
• In this example, A issues an RNR, which requires B to halt
transmission of I-frames.
• The station receiving the RNR will usually poll the busy
station at some periodic interval by sending an RR with the
P bit set.
• This requires the other side to respond with either an RR or
an RNR. When the busy condition has cleared, A returns an
RR, and I-frame transmission from B can resume.

Spring Semester 2021 31


• Figure shows error recovery using the REJ
command. – Go Back N Protocol
• A transmits I-frames numbered 3, 4, and 5.
• Number 4 suffers an error and is lost.
When B receives I-frame number 5, it
discards this frame because it is out of
order and sends an REJ with an 𝑁(𝑅) of 4.
• This causes A to initiate retransmission of
I-frames previously sent, beginning with
frame 4.
• A may continue to send additional frames
after the retransmitted frames.

Spring Semester 2021 32


• Figure shows error recovery using a timeout.
• A transmits I-frame number 3 as the last in a sequence of I-
frames. The frame suffers an error.
• B detects the error and discards it. However, B cannot send
an REJ, because there is no way to know if this was an I-
frame. If an error is detected in a frame, all of the bits of
that frame are suspect, and the receiver has no way to act
upon it.
• A, however, would have started a timer as the frame was
transmitted. This timer has a duration long enough to span
the expected response time. When the timer expires, A
initiates recovery action. This is usually done by polling the
other side with an RR command with the P bit set, to
determine the status of the other side.
• Because the poll demands a response, the entity will receive
a frame containing an 𝑁(𝑅) field and be able to proceed.
• In this case, the response indicates that frame 3 was lost,
which A retransmits.

Spring Semester 2021 33


HDLC Operation – Disconnect
• Either HDLC module can initiate a disconnect.
• HDLC issues a disconnect by sending a disconnect (DISC)
frame.
• The remote entity must accept the disconnect by replying with
a UA.

34
Spring Semester 2021
• The figure shows the disconnect procedure. One side issues
a DISC command, and the other responds with a UA
response.

Spring Semester 2021 35


Questions?

Spring Semester 2021 37


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 4 – 2
Data Link Layer Protocols - PPP

Spring Semester 2021 1


Introduction
• In the following slides we will examine several widely used data
link protocols.
• The second one, PPP, is the data link protocol used to connect
home computers to the Internet.

Spring Semester 2021 3


PPP – Point-to-Point Protocol
• Although HDLC is a general protocol that can be used for both
point to point and multipoint configurations, one of the most
common protocols for point to point access is the Point-to-Point
Protocol (PPP).
• Today, millions of Internet users who need to connect their home
computers to the server of an Internet service provider use PPP.
• The Point-to-Point Protocol uses the principles, terminology, and
frame structure of the International Organization For
Standardization's (ISO) High level Data Link Control (HDLC)
procedures.

Spring Semester 2021 4


PPP – Basic Characteristics
• The Point-to-Point Protocol consists of three main components:
• A method for Encapsulating datagrams over serial links.
• A Link Control Protocol (LCP) for establishing, configuring, and testing the
data link connection.
• A family of Network Control Protocols (NCPs) for establishing and
configuring different network layer protocols.
• The mechanism that PPP uses to carry network traffic is to open a link with a
short exchange of packets. Once the link is open, network traffic is carried
with very little overhead. Frames are sent as unnumbered information frames,
so no data link acknowledgement is required and no retransmissions are
carried out. Once the link is established, PPP acts as a straight data pipe for
protocols.

Spring Semester 2021 5


PPP – Basic Characteristics
• PPP does not offer several services which are:
• Flow control - A sender can send several frames one after another
with no concern about overwhelming the receiver.
• Error Control – A CRC field is used to detect errors. If the frame is
corrupted, it is silently discarded. Lack of error control and
sequence numbering may cause a packet to be received out of
order.
• PPP does not provide a sophisticated addressing mechanism to
handle frames in a multipoint configuration.

Spring Semester 2021 6


PPP – Encapsulation
• The PPP frame format was chosen to closely resemble the HDLC
frame format.
• The major difference between PPP and HDLC is that PPP is
character oriented i.e. the frame always has an integral number of
bytes(octets).
• Data comes in frames, delimited by special characters called flags.
• When a frame is not being sent, the sender transmits flags
continually. This means that there is constant activity on any
synchronous line that is running properly.

Spring Semester 2021 7


PPP – Encapsulation
• All PPP frames begin and end with the standard HDLC flag byte 01111110 or
0x7E.
• When the payload (user data) contains flags, an escape byte 01111101 or
0x7D is inserted (byte stuffing).
• Next comes the Address field, which is always set to 11111111 or 0xFF to
indicate that all stations are to accept the frame. PPP does not assign
individual station addresses.
• Frames with unrecognized Addresses should be silently discarded.

Spring Semester 2021 8
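For reference, the byte-stuffing (escaping) rule used by PPP's HDLC-like framing comes from RFC 1662 rather than from the slide itself: an escaped byte is sent as 0x7D followed by the original byte XOR 0x20. A small Python sketch of that rule:

```python
FLAG, ESC = 0x7E, 0x7D

def ppp_byte_stuff(payload: bytes) -> bytes:
    """Escape any flag (0x7E) or escape (0x7D) bytes inside the payload."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # 0x7E -> 0x7D 0x5E, 0x7D -> 0x7D 0x5D
        else:
            out.append(b)
    return bytes(out)

print(ppp_byte_stuff(b"\x01\x7e\x02\x7d\x03").hex())   # -> 017d5e027d5d03
```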


PPP – Encapsulation
• The Address field is followed by the Control field, the default value
of which is 00000011 or 0x03. This value indicates an unnumbered
frame.
• In other words, PPP does not provide reliable transmission using
sequence numbers and acknowledgements as the default.
• Since the Address and Control fields are always constant in the
default configuration, LCP provides the necessary mechanism for
the two parties to negotiate an option to just omit them altogether
and save 2 bytes per frame.

Spring Semester 2021 9


PPP – Encapsulation
• The control field is followed
by the Protocol field.
• The Protocol field is two
octets and its value
identifies the protocol
encapsulated in the Payload
field of the frame.
• This Protocol field is defined
by PPP and is not a field
defined by HDLC.

Spring Semester 2021 10


PPP – Encapsulation
• The protocol field is followed by the Payload/Information Field.
Payload/Information field is zero or more octets(variable). The
Information field contains the datagram for the protocol specified
in the Protocol field. The default maximum length of the
Information field is 1500 octets.
• The Payload/Information field is followed by Frame Check
Sequence(FCS) which is normally 16 bits (two octets). The FCS
field is calculated over all bits of the Address , Control, Protocol
and Information fields.

Spring Semester 2021 11


PPP – Phase Transition Diagram

• This diagram shows the phases


that a line goes through when it
is brought up, used, and taken
down again.
• The phases include:
• Dead
• Establish
• Authenticate
• Network
• Open
• Terminate

Spring Semester 2021 12


PPP – Phase Transition Diagram

• The protocol starts with the line in the DEAD state, which means
that no physical layer carrier is present and no physical layer
connection exists.
• After physical connection is established, the line moves to
ESTABLISH.
• At that point LCP option negotiation begins, which, if successful,
leads to AUTHENTICATE. Now the two parties can check on each
other's identities if desired.
• When the NETWORK phase is entered, the appropriate NCP
protocol is invoked to configure the network layer.
• If the configuration is successful, OPEN is reached and data
transport can take place.
• When data transport is finished, the line moves into the
TERMINATE phase, and from there, back to DEAD when the
carrier is dropped.

Spring Semester 2021 13


PPP – Link Control Protocol
• PPP uses a Link Control Protocol (LCP) to establish, configure and test the data link connection that goes
through four distinct phases.
• Firstly, link establishment and configuration negotiation occur. Before any network layer packets (e.g.
IP) can be exchanged, LCP first must open the connection and negotiate configuration parameters.
• This phase is complete when a configuration acknowledgement frame has been both sent and received.
• This is followed by a link maintenance phase. In this phase, the link is tested to determine whether the
link quality is sufficient to support the network layer protocols. Transmission of network layer protocol
information is delayed until this phase is complete.
• At this point, a network layer protocol configuration negotiation occurs. PPP is designed to allow the
simultaneous use of multiple network layer protocols and network layer protocols can be configured
separately and can be brought into use and taken down at any time.
• Finally, link termination can occur. This is usually carried out at the request of a user but can happen
because of a physical event, such as the loss of line signals or the expiration of an idle period timer.

Spring Semester 2021 14


PPP – Link Control Protocol
• Each of these functions corresponds to one of the “life phases” of
a PPP link.
• Link configuration is performed during the initial Link
Establishment phase of a link;
• Link maintenance occurs while the link is open, and
• Link termination happens in the Link Termination phase.
• Three classes of LCP frames are used:
• Link configuration frames are used to establish and configure a link;
• Link termination frames are used to terminate a link; and
• Link maintenance frames are used to manage and debug a link.
Spring Semester 2021 15
PPP – LCP Frame Format
• All LCP packets(frames) are carried in the payload field of the PPP frame with the protocol
field set to 0xC021 in hexadecimal.
• The code field is one byte in length and defines the type of LCP packet.
• The id field is one byte in length and carries an identifier that is used to match associated
requests and replies.
• The length is two bytes in length and indicates the total length of the LCP frame including the
Code, Id, length, and data fields.
• Data field is variable in length and contains information specific to the message type.

LCP packet:  | Code | ID | Length | Information |

carried in the payload of a PPP frame:

| Flag     | Address  | Control  | Protocol                  | Payload (LCP packet) | FCS | Flag     |
| 01111110 | 11111111 | 00000011 | 1100000000100001 (0xC021) |                      |     | 01111110 |
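A minimal sketch of how these fields could be packed in Python, assuming network (big-endian) byte order. The field values (0xFF, 0x03, 0xC021) come from the slides; the helper names are illustrative, byte stuffing is ignored and the FCS is left as a placeholder rather than computed.

```python
import struct

def build_lcp_packet(code: int, ident: int, data: bytes = b"") -> bytes:
    """LCP packet: Code (1 byte) | ID (1 byte) | Length (2 bytes) | Information."""
    length = 4 + len(data)          # length covers the code, id, length and data fields
    return struct.pack("!BBH", code, ident, length) + data

def wrap_in_ppp(lcp_packet: bytes) -> bytes:
    """Place an LCP packet in a PPP frame: Flag | Address | Control | Protocol | Payload | FCS | Flag.
    The 2-byte FCS is left as a placeholder here and not actually calculated."""
    header = struct.pack("!BBBH", 0x7E, 0xFF, 0x03, 0xC021)  # flag, address, control, protocol
    fcs = b"\x00\x00"                                        # placeholder only
    return header + lcp_packet + fcs + b"\x7e"

frame = wrap_in_ppp(build_lcp_packet(code=0x01, ident=1))    # Configure-Request with no options
print(frame.hex())
```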
Spring Semester 2021 16
PPP – LCP Link Configuration Frames
• Link configuration frames are transmitted during the link establishment phase.
• The information field of the LCP frame carries information used to negotiate
the configuration options for the link.
• The Link configuration frames are:
• Configure-Request – Code 0x01. Request the establishment of a link with a particular configuration.
Represents the start of the link establishment phase. (Direction: Initiator → Responder)
• Configure-Ack – Code 0x02. Acknowledge the receipt of a recognizable Configure-request frame, and
accept the requested configuration. Represents the end of the link establishment phase. (Direction:
Initiator ← Responder)
• Configure-Nak – Code 0x03. Acknowledge the receipt of a recognizable Configure-request frame, but
reject some or all of the requested configuration. (Direction: Initiator ← Responder)
• Configure-Reject - Code 0x04. Reject a Configure-request frame because it is not recognizable or
because the requested configuration is not acceptable. (Direction: Initiator ← Responder)

Spring Semester 2021 17


PPP – LCP Link Termination Frames
• Link termination frames are transmitted during the link termination phase.
• The link termination frames are:
• Terminate-request - Code 0x05. Request the termination of a link. Represents the start of the
link termination phase. (Direction: Initiator → Responder)
• Terminate-ack - Code 0x06. Acknowledge the receipt of a recognizable Terminate-
request frame, and accept the termination request. Represents the end of the link termination
phase. (Direction: Initiator ← Responder)

Spring Semester 2021 18


PPP – LCP Link Maintenance Frames
• Link maintenance frames are transmitted periodically to test and maintain the
link.
• The link maintenance frames are:
• Code-reject – Code 0x07. Rejects an LCP frame that has an invalid code field. (Direction:
Initiator ↔ Responder)
• Protocol-reject - Code 0x08. Rejects a PPP frame that has an invalid protocol id. (Direction:
Initiator ↔ Responder)
• Echo-request - Code 0x09. Requests a response, in the form of an Echo-reply frame, from the
remote end point. Used to test that the link is still up. (Direction: Initiator ↔ Responder)
• Echo-reply - Code 0x0A. Responds to a valid Echo-request frame. Used to test that the link is
still up. (Direction: Initiator ↔ Responder)
• Discard-request - Code 0x0B. Sends a frame which is silently discarded at the remote
endpoint. Used as a debugging mechanism. (Direction: Initiator ↔ Responder)

Spring Semester 2021 19


This diagram provides an overview of
message exchanges performed by LCP during
different phases of a PPP connection.
• Link Configuration is shown as a simple
exchange of a Configure-Request and
Configure-Ack.
• After subsequent exchanges using other
PPP protocols to authenticate and
configure one or more NCPs, the link
enters the Link Open phase.
• An Echo-Request and Echo-Reply message
are first used to test the link, followed by
the sending and receiving of data by both
devices.
• One Data message is shown being
rejected due to an invalid Code field.
• Finally, the link is terminated using
Terminate-Request and Terminate-Ack
messages.

Spring Semester 2021 20


PPP – LCP Configuration Options
• Link configuration is the most important job that LCP does in PPP. During the Link
Establishment phase, LCP frames are exchanged that enable the two physically connected
devices to negotiate the conditions under which the link will operate.
• The process starts with the initiating device (let's call it initiator) creating a Configure-
Request frame that contains a variable number of configuration options that it wants to use to
set up on the link.
• A number of different configuration options that the initiator can specify in this request are:
• Maximum Receive Unit (MRU)
• This configuration option may be sent to inform the peer that the implementation can
receive larger frames, or to request that the peer send smaller frames.
• The maximum receive unit covers only the data link layer Information field.
• It does not include the header, FCS, or any other bytes.
• By default, it is ‘1500’.

Spring Semester 2021 21


PPP – LCP Configuration Options
• A number of different configuration options that the initiator can specify in this request are:
• Authentication Protocol
• On some links it may be desirable to require a peer to authenticate itself before allowing
network layer protocol packets to be exchanged.
• This configuration option provides a way to negotiate the use of a specific authentication
protocol.
• An implementation should not include multiple authentication protocol configuration
options in its configure request packet.
• Instead, it should attempt to configure the most desirable protocol first.
• If that protocol is rejected , then the implementation could attempt the next most
desirable protocol in the next configure request packet.
• By default authentication protocol is ‘authentication is not necessary’.

Spring Semester 2021 22


PPP – LCP Configuration Options
• A number of different configuration options that the initiator can specify in this request are:
• Quality Protocol
• On some links it may be desirable to determine when, and how often, the link is dropping
data.
• This process is called link quality monitoring.
• This field shows whether the initiator wants to enable quality monitoring on the link.
• It is defined by Link Quality Report (LQR) packet which is transmitted down the link by
the router at regular intervals.
• This LQR packet contains information which is used to determine how many packets are
being lost on the link.
• By default quality protocol is ‘None’.

Spring Semester 2021 23


PPP – LCP Configuration Options
• A number of different configuration options that the initiator can specify in this request are:
• Protocol Field Compression
• This configuration option provides a way to negotiate the compression of the data link layer protocol
field.
• By default, all implementations must transmit standard PPP frames with two octet Protocol fields.
• However, PPP protocol field numbers are chosen such that some values may be compressed into a
single octet form which is clearly distinguishable from the two octet form.
• This configuration option is sent to inform the peer that the implementation can receive such single
octet protocol fields.
• Compressed protocol fields must not be transmitted unless this configuration option has been
negotiated.
• When a protocol field is compressed, the data link layer FCS field is calculated on the compressed
frame, not the original uncompressed frame.
• This provides a small savings (one byte) on each PPP frame.
• By default , it is ‘disabled’.

Spring Semester 2021 24


PPP – LCP Configuration Options
• A number of different configuration options that the initiator can specify in this request are:
• Address and Control Field Compression (ACFC)
• This configuration option provides a way to negotiate the compression of the data link
layer address and control fields.
• By default, all implementations must transmit frames with address and control fields and
must use the hexadecimal values 0xff and 0x03 respectively.
• Since these fields have constant values, they are easily compressed.
• This configuration option is sent to inform the peer that the implementation can receive
compressed address and control fields.
• Compressed address and control fields are formed by simply omitting them.
• By default , it is ‘not compressed’.

Spring Semester 2021 25


PPP – Network Control Protocol
• One of the reasons why PPP is such a powerful technology is that it is flexible i.e. PPP could easily carry
data from many types of network layer protocols.
• If only LCP is used for link configuration, it would need to know all the unique requirements of each
layer three protocol.
• This would also require that LCP be constantly updated as new layer three protocols were defined and
as new parameters were defined for existing ones.
• Instead of this, PPP takes a modular approach to link establishment. LCP performs the basic link setup,
and after authentication, invokes a Network Control Protocol (NCP) that is specific to each layer three
protocol that is to be carried over the link.
• The NCP conducts a negotiation of any parameters that are unique to the particular network layer
protocol.
• It is important to note that only configuration options which are independent of particular network layer
protocols are configured by LCP. Configuration of individual network layer protocols is handled by
separate Network Control Protocols (NCPs) during the Network Layer Protocol phase (see PPP Phase
transition diagram)

Spring Semester 2021 26


PPP – Network Control Protocol
• Like LCP, each NCP performs functions for link setup, maintenance and termination but it only
deals with its particular type of NCP link and not the overall LCP link.
• Each NCP uses a subset of seven of the message types defined in LCP, and uses them in very
much the same way as the message type of the same name is used in LCP:
• Link Configuration: The process of setting up and negotiating the parameters of the particular
NCP link (once an LCP link is established) is accomplished using Configure-
Request, Configure-Ack, Configure-Nak and Configure-Reject messages. The configuration
options are network layer protocol parameter being negotiated.
• Link Maintenance: Code-Reject messages can be sent to indicate invalid code values (NCP
frame types).
• Link Termination: An NCP link can be terminated using Terminate-Request and Terminate-Ack.
NCP links are set up within an LCP link and there can be more than one NCP link open.
Closing NCP links doesn't terminate the LCP link.

Spring Semester 2021 27


PPP – NCP Internet Protocol Control Protocol

• One example of NCP protocol is the Internet Protocol Control


Protocol (IPCP).
• This protocol configures the link used to carry IP packets in the
Internet.
• The value of the protocol field in hexadecimal is 0x8021(see table
on slide 10).

Spring Semester 2021 28


PPP – NCP Internet Protocol Control Protocol

• After the network layer configuration is completed by one of the


NCP protocols, the users can exchange data packets from the
network layer.
• There are different protocol fields for different network layers. For
example, if PPP is carrying data from the IP network layer, the field
value is 0x0021.

Spring Semester 2021 29


The overall operation of the NCPs, such as
IPCP is very similar to that of LCP.
• After LCP configuration(including
authentication) is complete, IPCP
Configure-Request and Configure-Ack
messages are used to establish an IPCP
link.
• IP Data can then be sent over the link.
• If the IPCP connection is no longer
needed it may be terminated, after which
the LCP link remains open for other types
of data to be transmitted.

Spring Semester 2021 30


PPP – Authentication Protocols
• The PPP Link Control Protocol (LCP) is responsible for establishing,
configuring and maintaining data link connections.
• Part of the process of configuring a link is the negotiation of
various options, including an authentication protocol, which is
performed before allowing Network Layer protocols to transmit
data over the link.
• The router supports two authentication protocols:
• The Password Authentication Protocol (PAP) and
• The Challenge-Handshake Authentication Protocol (CHAP).

Spring Semester 2021 31


PPP – Password Authentication Protocol
• Password Authentication Protocol (PAP) is a very straightforward authentication scheme,
consisting of only two basic steps.
• Authentication Request: The initiating device sends an Authenticate-Request message that
contains a name and a password.
• Authentication Reply: The responding device looks at the name and password and decides
whether to accept the initiating device and continue in setting up the link. If so, it sends back
an Authenticate-Ack. Otherwise, it sends an Authenticate-Nak.
• PAP transmits the user name and password in clear text across the link i.e. they are not
encrypted. This is a big no in security protocols, as it means any eavesdropper can get the
password and use it in the future.
• PAP also provides no protection against various security attacks. For example, an
unauthorized user could simply try different passwords indefinitely and hope he or she
eventually found one that worked.

Spring Semester 2021 32


• PAP frames are exchanged during the peer
authentication phase.
• The protocol id is 0xC023 for PAP frames.
• The code field identifies the type of PAP frame,
based on the following codes:
• PAP Authenticate-request frames (code 0x01)
are transmitted to start the authentication
phase, and contain the PAP id(username) and
PAP password sent for authentication.
• A PAP Authenticate-ack frame (code 0x02) is
transmitted by the authenticator when it
receives a recognizable PAP Authenticate-
request frame that contains an acceptable PAP
id(username) and PAP password.
• A PAP Authenticate-nak frame (code 0x03) is
transmitted by the authenticator when it
receives a PAP Authenticate-request frame that
is not recognizable, or that contains an
unacceptable PAP id(username) and PAP
password pair.

Spring Semester 2021 33


PPP – Challenge Handshake Authentication Protocol
• The Challenge Handshake Authentication Protocol (CHAP) is a more robust protocol which provides for
both authentication during the Link Establishment phase and periodic verification during the Network
Layer Protocol phase.
• The most important difference between PAP and CHAP is that CHAP doesn't transmit the password
across the link. It is a three way hand shaking authentication protocol that provides greater security
than PAP.
• The three-way handshake steps are as follows:
• Challenge: The authenticator generates a frame called a Challenge and sends it to the initiator. This
frame contains a simple text message(often called Challenge text). The message has no inherent
special meaning so it doesn't matter if anyone intercepts it. The important thing is that after receipt
of the Challenge both devices have the same challenge message.
• Response: The initiator uses its password to encrypt the challenge text. It then sends the encrypted
challenge text as a Response back to the authenticator.
• Success or Failure: The authenticator performs the same encryption on the challenge text that the
initiator did. If the authenticator gets the same result that the initiator sent it in the Response, the
authenticator knows that the initiator had the right password when it did its encryption, so the
authenticator sends back a Success message. Otherwise, it sends a Failure message.
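A compact sketch of the three-way handshake in Python. The MD5 hash over (identifier + shared secret + challenge) is the hash commonly used with CHAP and is an assumption here; the slides only say that a one-way hash of the secret and the challenge is used (see the next slide), and the real frame layout is omitted.

```python
import hashlib, os

def chap_response(ident: bytes, secret: bytes, challenge: bytes) -> bytes:
    """Response value = one-way hash over (identifier + shared secret + challenge)."""
    return hashlib.md5(ident + secret + challenge).digest()

secret = b"shared-secret"                 # known to both ends, never sent on the link

# 1. Challenge: the authenticator sends a random challenge value.
ident, challenge = b"\x01", os.urandom(16)

# 2. Response: the initiator hashes the challenge with the shared secret.
response = chap_response(ident, secret, challenge)

# 3. Success or Failure: the authenticator repeats the hash and compares the results.
print("Success" if response == chap_response(ident, secret, challenge) else "Failure")
```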
Spring Semester 2021 34
• CHAP frames are exchanged during the peer authentication
phase.
• The protocol id is 0xC223 for CHAP frames.
• The code field identifies the type of CHAP frame, based on the
following codes:
• CHAP Challenge frames (code 0x01) are used to start
the authentication negotiation, and are transmitted by the
authenticator. They contain the CHAP name and a
challenge value, which is calculated from the CHAP secret
using a one way hash algorithm.
• A CHAP Response frame (code 0x02) is sent on receipt of
a recognized CHAP Challenge frame. It contains a
response value, which is calculated using the CHAP secret,
the challenge value received, and the same one way hash
algorithm.
• A CHAP Success frame (code 0x03) is transmitted by the
authenticator when it receives a recognizable
CHAP response frame that contains an acceptable CHAP
name and response value.
• A CHAP Failure frame (code 0x04) is transmitted by the
authenticator when it receives a CHAP response frame that
is not recognizable, or that contains an unacceptable CHAP
name and response value.

Spring Semester 2021 35


PPP – CHAP Vs PAP
• CHAP verifies that the two devices have the same “shared secret”
but doesn't require that the secret be sent over the link.
• The Response is calculated based on the password, but the
content of the Response is encrypted and thus, much harder to
derive the password from.
• CHAP also provides protection against replay attacks, where an
unauthorized user captures a message and tries to send it again
later on. This is done by changing an identifier in each message
and varying the challenge text.
• Also, in CHAP the server controls the authentication process, not
the client that is initiating the link.
Spring Semester 2021 36
Questions?

Spring Semester 2021 38


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 5 – 1
Queuing Theory Concepts

Spring Semester 2021 1


Queuing Systems - Introduction
• How much time is spent in one’s daily activities waiting in some form of a queue?
• For example:
• stopped at a traffic light;
• delayed at a supermarket checkout stand;
• standing in line for a ticket at a bus stand;
• holding the telephone as it rings, and so on.
• One thing common to all the systems is the flow of customers requiring service and there being some
restriction on the service that can be provided.
• For example, patients arriving at an out-patient’s clinic to see a doctor, the restriction on service is that
only one patient can be served at a time. This is a case of single server queue. An example of multi
server queue is a queue for having goods checked at a supermarket.
• In the above examples , the restriction on service is that not more than a limited number of customers
can be served at a time, and congestion arises because the unserved customers must queue up and
await their turn for service.
• Queueing Theory provides a mathematical basis for understanding and predicting the behavior of a
system in general and communication network in particular.
Spring Semester 2021 3
Queuing Systems - Introduction
• Queuing systems are used for analyzing systems performance(qualitative measure) and
estimating average packet delay(quantitative measure).
• Queuing arises naturally in both packet-switched and circuit-switched networks.
• Much of the theory of queuing was developed from the study of telephone traffic at
Copenhagen telephone exchange in 1910 by A. K. Erlang.
• In networking, a single buffer forms a queue of packets.
• A single queue of packets is an accumulation of packets at a certain router or across an entire network.
• A queuing buffer is a physical system that stores incoming packets and a server which can be
viewed as a mechanism to process and route packets to desired destination.
• A queuing system consists of a queuing buffer of various sizes and one or more servers.

Spring Semester 2021 4


What is a Queue?
• The simplest queue is one in which customers arrive randomly at an average rate of λ customers
per second.
• The customers are held in a queue while a server deals with them at a rate of 𝜇 per second
and then they leave the system.
• This type of system is known as a single-server queue, although there may be more than one
server in a system.
• It is important that the arrival rate 𝜆 is not allowed to exceed the service rate 𝜇, or the queue
will build up.

Queue
Customers Arriving Customers Departing
𝜆 Server 𝜇

Spring Semester 2021 5


Components of Queuing Systems
• Generally a queueing system can be characterized by the following components:
• Customers – Entities that receive service from server e.g. a process, a transaction , a packet
or a message etc.
• Server – Hardware or software providing service e.g. a CPU , an I/O device , software routine
or a router etc.
• Major parameters are:
• Interarrival Time Distribution – arrival pattern of packets
• Service Time Distribution - length of time that a packet spends in the service facility.
• No. of Servers - m
• Queuing Disciplines - order in which packets are taken from the queue
• No. of buffers – amount of buffer space present in the queue.

Customers Arriving Queue


Customers Departing
𝜆 Server 𝜇
Spring Semester 2021 6
Queuing Disciplines
• Queueing systems may not only differ in their distributions of the interarrival and service
times, but also in the number of servers, the size of the waiting line (infinite or finite), the
service discipline.
• Some common service disciplines are:
• FIFO: (First in, First out): Customers in the queue are served in the order they arrive.
• LIFO: (Last in, First out): Customers in the queue are served in reverse order of their arrival.
• Random Service: Customers in the queue are served in random order.
• Round Robin: Every customer gets a time slice and if the service is not completed, the
customer will re-enter the queue.
• Priority Disciplines: Every customer has a (static or dynamic) priority, the server selects
always the customers with the highest priority.
• Preemption: The customer currently being served can be interrupted and preempted if the
new customer in the queue has a higher priority.
Spring Semester 2021 7
Kendall’s Notation
• Queuing systems are described by Kendall’s notation as A/B/m/K where :
• A – distribution of interarrival time of customers
• B – distribution of service time
• m – No. of servers
• K – total capacity of system
• If a system reaches its full capacity, the arriving customer K+1 is blocked.
• A and B are represented with following symbols:
• M – Exponential Distribution (M = Markov)
• D – Deterministic Distribution (constant)
• G – General or arbitrary Distribution

Queue
Customers Arriving Customers Departing
𝜆 Server 𝜇
Spring Semester 2021 8
The Poisson Arrival Pattern
• Random arrivals or Poisson arrivals - The simplest arrival pattern mathematically, and the most
commonly used in all applications of queueing theory is the random or Poisson arrival process.
• If the interarrival times are exponentially distributed, number of arrivals in any given interval are Poisson
distributed.
𝜆 = arrival rate ; Δ𝑡 = time interval
• The probability that one arrival occurs between 𝑡 and 𝑡 + Δ𝑡 is independent of arrivals in earlier
intervals.
• The probability of exactly n customers arriving during an interval of length t is given by:
P(n arrivals in t) = (λt)^n e^(−λt) / n!
• Interarrival time τ ~ exponential; number of arrivals n ~ Poisson.
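A quick numeric check of this formula (the values of λ and t below are arbitrary example inputs):

```python
from math import exp, factorial

def poisson_pmf(n: int, lam: float, t: float) -> float:
    """P(n arrivals in an interval of length t) for arrival rate lam."""
    return (lam * t) ** n * exp(-lam * t) / factorial(n)

lam = 2.0   # arrivals per second (example value)
t = 1.5     # interval length in seconds
print(sum(poisson_pmf(n, lam, t) for n in range(50)))  # close to 1.0: probabilities sum to one
print(poisson_pmf(3, lam, t))                          # probability of exactly 3 arrivals in t
```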
Spring Semester 2021 9
Properties of Poisson
• M (Markov property) - memoryless arrival or Poisson arrival i.e. Previous history does not help
in predicting the future.
• Distribution of the time until the next arrival is independent of when the last arrival occurred.
• Merging - Let A1, A2, … Ak be independent Poisson processes of rates λ1, λ2, … λk. Then A = ∑ Ai is also
a Poisson process of rate λ = ∑ λi.
(Figure: streams of rates λ1, λ2, λ3 merging into a single stream of rate λ.)
• Splitting - Suppose that every arrival is randomly routed with probability P to stream 1 and (1 − P) to stream 2.
Then streams 1 and 2 are Poisson of rates Pλ and (1 − P)λ respectively.
(Figure: a stream of rate λ split into streams of rates λP and λ(1 − P).)
Spring Semester 2021 10
Key Variables in Queuing Systems
𝜆 = Arrival rate (customers/sec) ; 𝐶𝑛 = Nth customer enters the system
𝜇 = Service rate (customers/sec)
𝑁 = Average no. of customers in the system
𝑁𝑞 = Average no. of customers waiting in queue
𝑇 = Average customer time in system(includes queueing delay plus service
time)
𝑊𝑞 = Average customer waiting time in the queue (does not include service
time)
Ws = Service time; reciprocal of service rate = 1/μ
τ = Interarrival time; reciprocal of arrival rate = 1/λ
Spring Semester 2021 11
Key Variables in Queuing Systems
ρ = Traffic intensity or utilization of server; fraction of time the server is busy
  = mean service time / mean interarrival time = (1/μ) / (1/λ) = λ/μ  (for a single server)
  = λ/(mμ)  (for 'm' servers)
For equilibrium or steady state;
𝜌<1
i.e. number of customers arriving in a finite time is equal to the number of
customers leaving the system. Otherwise, system will be unstable.

Spring Semester 2021 12


Little’s Theorem
• Provides basis for queuing
• Also known as Little’s Formula
• For a network to reach steady state, the average number of packets in a system (𝑁) is equal to the
product of the average arrival rate 𝜆, and the average time (𝑇) spent in queueing system.
𝑁 = 𝜆𝑇
• The usefulness of this theorem is that it applies to almost every queuing system.
• For example, slow moving traffic (large 𝑇) produces crowded streets (large 𝑁);
• The theorem can also be used to find the average number of packets in a queue rather than the overall
system.
• If we define the following:
𝑊𝑞 = the average time spent waiting in the queue
𝑁𝑞 = the average number of packets found waiting in the queue
• Then Little’s theorem leads to:
𝑁𝑞 = 𝜆𝑊𝑞
Spring Semester 2021 13
Little’s Theorem
Example: A fast-food restaurant is operating with a single person serving customers who arrive at an
average rate of two per minute and wait to receive their order for an average of 3 minutes. On average,
half of the customers eat in the restaurant and the other half eat take-away. A meal takes an average of
20 minutes to eat. Determine the average number of customers queuing and the average number in the
restaurant.
Solution :
Arrival rate = 𝜆 = 2/min
Customers who eat in the restaurant stay on average for 23 minutes (3 minutes to receive their order plus
20 minutes to eat); customers who take away stay only for the 3 minutes needed to receive their order.
Average customer time in restaurant = 𝑇 = 0.5 × 23 + 0.5 × 3 = 13 𝑚𝑖𝑛𝑢𝑡𝑒𝑠
Average time in queue = 𝑊𝑞 = 3 𝑚𝑖𝑛𝑢𝑡𝑒𝑠
From Little’s Theorem;
Average number of customers queuing = 𝑁𝑞 = 𝜆𝑊𝑞 = 2 × 3 = 6
Average number in restaurant = 𝑁 = 𝜆𝑇 = 2 × 13 = 26
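The same calculation written as a few lines of Python, with the values taken directly from the example:

```python
lam = 2.0                    # arrival rate, customers per minute
T = 0.5 * 23 + 0.5 * 3       # average time spent in the restaurant (minutes) = 13
W_q = 3.0                    # average time spent waiting in the queue (minutes)

N_q = lam * W_q              # Little's theorem applied to the queue alone
N = lam * T                  # Little's theorem applied to the whole restaurant
print(N_q, N)                # 6.0 26.0
```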
Spring Semester 2021 14
Markovian Queuing Systems
• The common characteristic of all markovian systems is that the distribution of the interarrival
times and the distribution of the service times are exponential distributions and thus exhibit
the Markov (memoryless) property. Examples are : M/M/1, M/M/1/b and M/M/∞.
• The M/M/1 queue has interarrival times which are exponentially distributed with parameter λ and
service times which are exponentially distributed with parameter μ. The system has only a
single server and uses the FIFO service discipline. The waiting buffer is of infinite size.
• The M/M/1 system is a pure birth/death process, where at any point in time at most one event occurs,
with an event either being the arrival of a new customer(birth) or the completion of a customer’s
service(death).
• In a birth/death process, any given state k can connect only to state k − 1 (with rate μk) or to state
k + 1 (with rate λk).

(Figure: state transition diagram; transitions occur only between adjacent states.)

Spring Semester 2021 15


M/M/1 Queue Formulas
In this system, λk = λ for k = 0, 1, 2, … and μk = μ for k = 0, 1, 2, …
We say that state k is occupied if there are k customers in the system, including the one being served.
Utilization factor: ρ = λ/μ
Probability of 'k' customers in the system: P(k) = ρ^k (1 − ρ)
Average number of customers in the system: N = ρ/(1 − ρ) = (λ/μ)/(1 − λ/μ) = λ/(μ − λ)
The average amount of time that a customer spends in the system can be obtained from Little's formula: T = N/λ = 1/(μ − λ)
Average waiting time in the queue: Wq = T − Ws = T − 1/μ
The average number of customers in the queue can be obtained from Little's formula: Nq = λWq = ρ²/(1 − ρ)
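These formulas translate directly into a small helper function (a sketch; the function and variable names are illustrative):

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Basic M/M/1 performance measures; requires rho = lam/mu < 1 for a stable queue."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("Queue is unstable: arrival rate must be below service rate")
    N = rho / (1 - rho)          # average number of customers in the system
    T = 1 / (mu - lam)           # average time in the system (Little: T = N / lam)
    W_q = T - 1 / mu             # average waiting time in the queue
    N_q = lam * W_q              # average number waiting (= rho**2 / (1 - rho))
    return {"rho": rho, "N": N, "T": T, "W_q": W_q, "N_q": N_q}
```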

Spring Semester 2021 16


M/M/1 Queue
Example:
• Consider a single server queue where the interarrival time is exponentially distributed with an average of 10
minutes and the service time is also exponentially distributed with an average of 8 minutes, find the following:
• Average wait time in the queue,
• Average number of customers in the queue,
• Average wait time in the system,
• Average number of customers in the system and
• Proportion of time the server is idle.

Queue
Customers Arriving Customers Departing
𝜆 Server 𝜇

Spring Semester 2021 17


M/M/1 Queue
Solution:
Arrival rate = λ = 1/10 per minute
Service rate = μ = 1/8 per minute
Utilization of server = ρ = λ/μ = (1/10)/(1/8) = 8/10 = 0.8
Average number of customers in the queue = Nq = ρ²/(1 − ρ) = 0.8²/(1 − 0.8) = 3.2
Average wait time in the queue = Wq = Nq/λ = 3.2/(1/10) = 32 mins
Average wait time in the system = T = 1/(μ − λ) = 1/(1/8 − 1/10) = 40 mins
Average number of customers in the system = N = λT = (1/10) × 40 = 4
Proportion of time the server is idle = 1 − ρ = 1 − 0.8 = 0.2 = 20% of the time.
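Plugging the example's rates into Python reproduces the same figures (times in minutes):

```python
lam = 1 / 10          # one arrival every 10 minutes on average
mu = 1 / 8            # one service completion every 8 minutes on average

rho = lam / mu                    # 0.8
N_q = rho**2 / (1 - rho)          # 3.2 customers waiting
W_q = N_q / lam                   # 32 minutes in the queue
T = 1 / (mu - lam)                # 40 minutes in the system
N = lam * T                       # 4 customers in the system
print(rho, N_q, W_q, T, N, 1 - rho)
```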
Spring Semester 2021 18
Effect of Errors on Delay
• If errors occur in a system and Automatic Repeat Request (ARQ) is used for
error correction to retransmit erroneous packets, then the average
transmission time will increase.
• This will result in increased delays and queue lengths.
• In determining an average transmission time, both the error rate and the
type of ARQ strategy need to be taken into account.
• The greater the error rate, the more packets will need to be retransmitted,
and the greater will be transmission and queuing times and queue lengths.
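As a minimal illustration only (not taken from the slides), assume an idealized stop-and-wait style ARQ in which each frame is independently corrupted with probability p and corrupted frames are retransmitted until received correctly. The expected number of transmissions per frame is then 1/(1 − p), which stretches the effective service time and, through the M/M/1 relations above, increases queue lengths and delay:

```python
def effective_service_time(frame_time: float, p_error: float) -> float:
    """Average time to deliver a frame when each attempt fails with probability p_error
    and failed frames are retransmitted (expected attempts = 1 / (1 - p_error))."""
    return frame_time / (1 - p_error)

# Example: a 10% frame error rate stretches an 8 ms frame time to about 8.9 ms,
# which lowers the service rate and therefore raises utilization, queue length and delay.
print(effective_service_time(8e-3, 0.1))
```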

Spring Semester 2021 19


Questions?

Spring Semester 2021 23


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 6 – 2
LAN Interconnection Devices

Spring Semester 2021 1


Internetworking - Introduction
• There are a large variety of types of both local and wide area networks and the need to connect from
one network to another is inevitable.
• This may be to access services on another network not available on a local network, to extend the
physical range of a network or to form a larger, or global, network.
• Alternatively, one may want to split a single network into two or more separate smaller networks (each
individual network a termed as subnet).
• One reason for this might be to provide a distinct boundary for management and control to enable
security features to be implemented to control access between a pair of subnetworks.
• A network may be split into a number of smaller networks to distribute a heavy load on a single network
into a number of smaller, less heavily loaded, networks.
• Splitting may also be done for administrative convenience in order to separate out business functions
which can then be mapped to a discrete network. This may eliminate internetwork traffic thus making it
easier to administer.

Spring Semester 2021 3


Internetworking - Introduction
• One of the most important internetworking requirements is LAN–WAN interconnection where a WAN is
required in order to interconnect two distant LANs. The LANs themselves may also be running different
protocols. For example, an Ethernet in one location may be required to be internetworked with a Token
Ring in another location.
• Thus Internetworking must physically and logically interconnect networks; successfully route data, which
is usually in packet format, across one or more intermediate networks and provide efficient management
of resources.
• Some of the main issues to consider in internetworking are:
• Protocol conversion between dissimilar networks.
• Some form of address translation may be required if the two distant networks use different network
address strategies.
• Transmission speed may vary across a number of networks.
• Networks have a maximum packet size. Where a large packet is required to be passed over a
network that uses a small maximum packet size, fragmentation, also known as segmentation, is
required. This simply breaks a large packet into a number of smaller packets. Where fragmentation
occurs, defragmentation must also occur at some point before packets are passed to a destination
end system.
Spring Semester 2021 4
Internetworking Devices
• Types of coupling devices:
• PHY layer: repeaters, hubs
• MAC layer: bridges, layer-2 switches
• Network layer: router
• Application layer: gateway

Spring Semester 2021 5


Repeaters
• Repeater operates at layer 1 of the OSI Reference Model. It is primarily concerned with the
transmission of electrical signals at bit level.
• A repeater simply reshapes and retimes data and then retransmits them. Its purpose is to
restore the integrity of the signals in regard to pulse shape, amplitude and timing, all of which
deteriorate with distance.
• A repeater is unintelligent, merely repeating each bit that it receives, and can therefore be
regarded as transparent. The repeater function may also be thought of as regeneration.
• A repeater is used for interconnection of LANs which are of similar type at the physical layer.
• Repeaters are therefore mainly found in bus based networks where they are used to
interconnect bus segments to extend the effective medium length beyond the basic
specification.

Spring Semester 2021 6


Hubs
• A hub is a multiple-port repeater. Each station is
connected to the hub by two lines (transmit and
receive).
• Any digital signal received from a segment on a
hub port is regenerated or reamplified and
transmitted out all other ports on the hub.
• This means all devices plugged into a hub are in
the same collision domain as well as in the same
broadcast domain.
• Hubs, like repeaters, don’t examine any of the
traffic as it enters and is then transmitted out to
the other parts of the physical media.
• A physical star network—where the hub is a central
device and cables extend in all directions out from
it—is the type of topology a hub creates.
• Hubs can be cascaded.
7
Spring Semester 2021
Bridges
• Bridges interconnect LANs on the MAC layer.
• Bridges are intelligent devices and examine each frame that they receive to
perform a ‘filtering’ function.
• A bridge extracts destination address from the frames, looks up the
destination in the table and forwards the frame to appropriate LAN segment.
Hence each LAN segment carries its own traffic.
• Bridges mostly connect LANs of the same type (i.e. Ethernet – Ethernet), but
bridges connecting LANs of different types (e.g. Ethernet – Token Ring) also
exist.
• Bridges can be cascaded.

Spring Semester 2021 8


Bridge Operation
• When bridge receives frame from
LAN A, it checks the frame for
correctness, buffers it and checks
the MAC destination address (𝑑𝑠𝑡)
• If 𝑑𝑠𝑡 on LAN B or 𝑑𝑠𝑡 unknown:
• bridge transmits frame on LAN B,
following the rules of the MAC
protocol.
• Do the same for B-to-A traffic.

Spring Semester 2021 9


Bridge Operation
• The bridge makes no modification to the content or
format of the frames it receives, nor does it encapsulate
them with an additional header.
• Each frame to be transferred is simply copied from one
LAN and repeated with exactly the same bit pattern on
the other LAN.
• Because the two LANs use the same LAN protocols, it is
permissible to do this.
• The bridge should contain enough buffer space to meet
peak demands.
• The bridge must contain addressing and routing
intelligence. At a minimum, the bridge must know which
addresses are on each network to know which frames to
pass.
• Further, there may be more than two LANs
interconnected by a number of bridges. In that case, a
frame may have to be routed through several bridges in
its journey from source to destination.
• A bridge may connect more than two LANs.
Spring Semester 2021 10
Some Reasons to Use Bridges
• Reliability: By keeping LANs separated and only interconnected by
a bridge, failures in one LAN do not affect others.
• Performance: By carefully evaluating addresses, bridges can
confine traffic local to one LAN to that very LAN, enabling parallel
local transmissions in different LANs.
• Security: The establishment of multiple LANs may improve security
of communications. It is desirable to keep different types of traffic
(e.g., accounting, personnel, strategic planning) that have different
security needs on physically separate media.

Spring Semester 2021 11


A Larger Network
• Suppose that station 1 transmits a frame on LAN A intended for
station 5.The frame will be read by bridges 101, 102,and 107.
• For each bridge, the addressed station is not on a LAN to which
the bridge is attached. Therefore, each bridge must make a
decision whether or not to retransmit the frame on its other LAN,
in order to move it closer to its intended destination.
• If bridge 101 repeats the frame on LAN B, whereas bridges 102
and 107 refrain from retransmitting the frame. Once the frame
has been transmitted on LAN B, it will be picked up by both
bridges 103 and 104.
• Again, each must decide whether or not to forward the frame. If
again bridge 104 retransmits the frame on LAN E it will be
received by the destination, station 5.
• There are two routes between LAN A and LAN E. Providing
different paths is useful to provide fault-tolerance and load-
balancing.
• Routing problem? When a bridge receives a frame, it must decide
whether or not to forward it. If the bridge is attached to two or
more networks, then it must decide whether or not to forward the
frame and, if so, on which LAN the frame should be transmitted.
Spring Semester 2021 12
A Larger Network
• Fixed Routing - A route is selected for each source–destination pair of
LANs in the configuration.
• When there are alternate routes, the route with the least number of hops is
chosen.
• Each bridge possesses a table for each incoming interface. This table
indicates whether a frame should be forwarded and to which outgoing
interface.
• Table is stored at each bridge and each bridge needs one table for each
LAN to which it attaches.
• For example, bridge 105 has two tables, one for frames arriving from LAN
C and one for frames arriving from LAN F. The table shows, for each
possible destination MAC address, the identity of the LAN to which the
bridge should forward the frame.
• Once the table have been established, routing is a simple matter. A
bridge copies each incoming frame on each of its LANs. If the destination
MAC address corresponds to an entry in its routing table, the frame is
retransmitted on the appropriate LAN.
• Table needs to be recomputed and redistributed upon every change in
topology and does not scale well to large installations.

Spring Semester 2021 13


The Spanning Tree Approach
• The spanning tree approach is a mechanism in which bridges automatically
develop a routing table and update that table in response to changing
topology.
• It is specified in IEEE 802.1D specification which defines the protocol
architecture for MAC bridges.
• The algorithm consists of three mechanisms:
• Frame forwarding,
• Address learning, and
• Loop resolution.

Spring Semester 2021 14


Spanning Tree Approach – Frame Forwarding

• For each port / attached LAN, a bridge


maintains two information:
• A forwarding table (MAC database),
• A flag indicating if port is in blocking or
forwarding state.
• Forwarding table contains:
• All MAC addresses which can be reached
(directly or indirectly) by sending to this
port

Spring Semester 2021 15


Spanning Tree Approach – Frame Forwarding

• When a frame arrives at any port(interface) , the


destination hardware address is compared to the
forward MAC database.
• If the destination hardware address is known and
listed in the database, the frame is only sent out the
correct exit interface. This preserves bandwidth on
the other network segments and is called frame
filtering.
• But if the destination hardware address is not listed
in the MAC database, then the frame is flooded out
all active interfaces except the interface the frame
was received on.
• If a device answers the flooded frame, the MAC
database is updated with the device’s location
(interface).

Spring Semester 2021 16


Spanning Tree Approach – Address Learning

• Learning is based on the use of the source address field in each MAC frame.
• When a frame arrives on a particular port(interface),the source address field of the frame indicates
the source station.
• Thus, a bridge can update its forwarding database for that port on the basis of the source address
field of each incoming frame.
• To allow for changes in topology, each element in the database is equipped with a timer. When a
new element is added to the database, its timer is set.
• If the timer expires, then the element is eliminated from the database, since the corresponding
direction information may no longer be valid.
• Each time a frame is received, its source address is checked against the database. If the element is
already in the database, the entry is updated and the timer is reset.
• If the element is not in the database, a new entry is created, with its own timer.
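The frame forwarding and address learning behaviour described on this and the previous slide can be modelled with a small toy class (a sketch; the ageing timer is reduced to a timestamp check and all names are illustrative):

```python
import time

class LearningBridge:
    def __init__(self, ports, ageing_time=300.0):
        self.ports = ports                  # e.g. ["p1", "p2", "p3"]
        self.table = {}                     # MAC address -> (port, last_seen)
        self.ageing_time = ageing_time      # seconds before an entry expires

    def receive(self, frame_src, frame_dst, in_port):
        now = time.time()
        # Address learning: remember which port the source address was seen on.
        self.table[frame_src] = (in_port, now)
        # Drop entries whose timer has expired.
        self.table = {mac: (p, t) for mac, (p, t) in self.table.items()
                      if now - t < self.ageing_time}
        entry = self.table.get(frame_dst)
        if entry and entry[0] == in_port:
            return []                       # filter: destination is on the arrival segment
        if entry:
            return [entry[0]]               # forward out the known port only
        return [p for p in self.ports if p != in_port]   # unknown destination: flood

bridge = LearningBridge(["p1", "p2", "p3"])
print(bridge.receive("AA", "BB", "p1"))     # BB unknown, so the frame is flooded to p2, p3
print(bridge.receive("BB", "AA", "p2"))     # AA was learned on p1, so forward to p1 only
```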

Spring Semester 2021 17


Spanning Tree Approach – Loop Resolution
• The address learning mechanism is effective if there are
no alternate routes in the network. The existence of
alternate routes means that there is a closed loop. For
example in Figure on slide no 13, the following is a closed
loop: LAN A, bridge 101, LAN B, bridge 104, LAN E, bridge
107, LAN A.
• Consider in this figure, at time t0, that station A transmits a
frame addressed to station B. The frame is captured by
both bridges. Each bridge updates its database to indicate
that station A is in the direction of LAN X, and retransmits
the frame on LAN Y.
• Say that bridge α retransmits at time t1 and bridge β at
time t2.
• Thus B will receive two copies of the frame. Furthermore,
each bridge will receive the other’s transmission on LAN Y.
Thus each bridge will update its database to indicate that
station A is in the direction of LAN Y. Neither bridge is now
capable of forwarding a frame addressed to station A.
• To overcome this problem, a simple result from graph
theory is used: For any connected graph, consisting of
nodes and edges connecting pairs of nodes, there is a
spanning tree of edges that maintains the connectivity of
the graph but contains no closed loops.

19
Spring Semester 2021
Spanning Tree Approach – Loop Resolution

• To avoid closed loops in a network, IEEE 802.1D specifies the spanning


tree protocol
• How it is done?
• Each bridge is equipped with an individual MAC address.
• A cost value is administratively assigned to each bridge.
• Bridges run a dedicated protocol among each other, exchanging information about
network topology (known as bridge protocol data unit)
• When topology is fully discovered, a minimum weight (related to per bridge costs)
spanning tree is computed.
• This algorithm is dynamic i.e. tree is recalculated upon changes in topology.

Spring Semester 2021 20


Spanning Tree Terms
• Root Bridge – It is the bridge with the best bridge ID. All the bridges in the network elect a root bridge that becomes the focal point in the
network. All other decisions in the network—such as which port is to be blocked and which port is to be put in forwarding mode—are made from the
perspective of this root bridge. Once a root bridge is elected on the network, all other bridges must make a single path to this root bridge. The port
with the best path to the root bridge is called the root port.
• Bridge ID - It is determined by a combination of the bridge priority and the base MAC address. The bridge with the lowest bridge ID becomes the
root bridge in the network.
• BPDU - All the bridges exchange information to use in the selection of the root bridge as well as in subsequent configuration of the network.
• Non-root bridges - These are all bridges that are not the root bridge. Non-root bridges exchange BPDUs with all bridges and update the STP
topology database on all bridges, preventing loops.
• Port cost - Determines the best path when multiple links are used between two bridges. The cost of a link is determined by the bandwidth of a link.
• Root port – It is the link directly connected to the root bridge, or the lowest path cost to the root bridge. If more than one link connects to the root
bridge, then a port cost is determined by checking the bandwidth of each link. The lowest-cost port becomes the root port. If multiple bridges have
the same cost, the bridge with the lower bridge ID is used.
• Designated port - It is the one that has been determined as having the best (lowest) cost to the root bridge via its root port. A designated port
will be marked as a forwarding port.
• Non-designated port - Is one with a higher cost than the designated port and they are put in blocking mode.
• Forwarding port – Forwards frames and can be a root port or a designated port.
• Blocked port – It is the port that, in order to prevent loops, will not forward frames. However, a blocked port will always listen to BPDU frames but
drop all other frames.
• Convergence - Occurs when all ports on bridges and switches have transitioned to either forwarding or blocking modes. No data will be forwarded
until convergence is complete.
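Root bridge election reduces to "lowest bridge ID wins", where the bridge ID is the (priority, MAC) pair. The sketch below applies that rule to the bridge priorities and MAC addresses used in the example topology on the next slides; the code itself is illustrative only:

```python
bridges = {
    "A": (100,   "0000 1111 2222"),
    "B": (32768, "0000 2222 3333"),
    "C": (32768, "0000 6666 7777"),
    "D": (100,   "0000 4444 5555"),
    "E": (32768, "0000 8888 9999"),
}

# Lowest (priority, MAC) pair is the best bridge ID; tuple comparison handles the tie-break.
root = min(bridges, key=lambda name: bridges[name])
print(root)   # "A": ties with D on priority 100, then wins on the lower MAC address
```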

Spring Semester 2021 21


(Figure, slide 22: example topology of five bridges interconnected by links, each of cost 4.
Bridge A: priority 100, MAC 0000 1111 2222; Bridge B: priority 32768, MAC 0000 2222 3333;
Bridge C: priority 32768, MAC 0000 6666 7777; Bridge D: priority 100, MAC 0000 4444 5555;
Bridge E: priority 32768, MAC 0000 8888 9999.)

Spring Semester 2021 22
(Figure, slide 23: the same topology after spanning tree convergence. Bridge A, holding the best
bridge ID, is the root bridge; on each non-root bridge the port toward the root is its root port,
and the forwarding ports facing away from the root are designated ports.)

Spring Semester 2021 23
Layer 2 Switches
• A layer 2 switch does pretty much the same job as a bridge. However, some important differences are:
• Bridges are software based, while switches are hardware based because they use ASIC chips to help
make filtering decisions.
• A switch can be viewed as a multiport bridge.
• Most switches have a higher number of ports than most bridges.
• Stations are attached to switch via point-to-point links with separate transmit/receive lines.
• With a layer 2 switch, an incoming frame from a particular station is switched to the appropriate output line
to be delivered to the intended destination. At the same time, other unused lines can be used for switching
other traffic.
• A switch is able to process frames to distinct destinations in parallel, switches can therefore increase
network capacity.
• Frames arriving in parallel for the same destination are buffered at switches’ output interface (output
buffering).
• A switch is transparent to the hosts and routers in the subnet; that is, a host/router addresses a frame to
another host/router (rather than addressing the frame to the switch) and is unaware that a switch will be
receiving the frame and forwarding it.
• Nowadays almost all Ethernet installations use switches.
Spring Semester 2021 24
Layer 2 Switch Operation
• Forward/Filter Decisions - Filtering is the switch function that determines whether a frame should
be forwarded to some interface or should just be dropped. Forwarding is the switch function that
determines the interfaces to which a frame should be directed, and then moves the frame to
those interfaces. Switch filtering and forwarding are done with a switch table.
• Self-Learning – A switch table is built automatically, dynamically, and autonomously—without any
intervention from a network administrator or from a configuration protocol. In other words,
switches are self-learning.
• Loop Avoidance - Switches incorporate the same loop avoidance technique as bridges.
• Operation modes of switches:
• Store and forward switch: The layer 2 switch accepts a frame on an input line, buffers it
briefly, and then routes it to the appropriate output line.
• Cut through switch: The layer 2 switch takes advantage of the fact that the destination
address appears at the beginning of the MAC (medium access control) frame. The layer 2
switch begins repeating the incoming frame onto the appropriate output line as soon as the
layer 2 switch recognizes the destination address.

Spring Semester 2021 25


Collision & Broadcast Domains
• Collision Domain – set of stations for which their frames could collide.
• Broadcast Domain – set of stations for which a broadcast frame sent by one device will be received by all
other stations.
• A hub is a single collision and single broadcast domain. All ports on a hub are in the same collision domain
and same broadcast domain.
• Switches create separate collision domains but a single broadcast domain. Every port on a switch is in a
different collision domain i.e. a switch segments the collision domain. All ports on a switch are in single
broadcast domain.
• Routers provide a separate collision and separate broadcast domain for each interface i.e. a router
segments the broadcast domain. The most important thing to understand is that by default a router will not
pass broadcasts on to other networks. If routers did pass broadcasts then the entire internet would be in a
giant broadcast storm and would not function.

Spring Semester 2021 26


Collision Domain
Broadcast Domain

27
Spring Semester 2021
Collision Domain
Broadcast Domain

28
Spring Semester 2021
Collision Domain
Broadcast Domain

29
Spring Semester 2021
30
Spring Semester 2021
Virtual LANs
• Modern institutional LANs are often configured hierarchically, with each
workgroup (department) having its own switched LAN connected to the switched
LANs of other groups via a switch hierarchy. But this configuration will still be a
flat network (because of a single broadcast domain). Three drawbacks of this
configuration can be:
• Lack of traffic isolation - Although the hierarchy localizes group traffic to within a single switch,
broadcast traffic must still traverse the entire institutional network. Limiting the scope of such
broadcast traffic would improve LAN performance. Perhaps more importantly, it also may be desirable
to limit LAN broadcast traffic for security/privacy reasons.
• Inefficient use of switches - if there are less people in one workgroup.
• Managing users - If an employee moves between groups, the physical cabling must be changed to
connect the employee to a different switch. Employees belonging to two groups make the problem
even harder.
• These problems can be handled by a switch that supports virtual local area
networks (VLANs).

Spring Semester 2021 31


Virtual LANs
• VLANs allows multiple virtual local area networks to be defined over a single physical local area network
infrastructure.
• Hosts within a VLAN communicate with each other as if they (and no other hosts) were connected to the
switch.
• In a port based VLAN, the switch’s ports (interfaces) are divided into groups by the network manager. Each
group constitutes a VLAN, with the ports in each VLAN forming a broadcast domain i.e. VLANs create
broadcast domains.
• VLANs simplify network management:
• Network adds, moves, and changes are achieved with ease by just configuring a port into the appropriate VLAN.
• A group of users that need an unusually high level of security can be put into its own VLAN so that users outside of the VLAN can’t
communicate with them.
• As a logical grouping of users by function, VLANs can be considered independent from their physical or geographic locations.
• VLANs greatly enhance network security.
• VLANs increase the number of broadcast domains while decreasing their size.

• Since each VLAN is a different broadcast domain, so what about Spanning Tree
Protocol?
• How to send traffic to a different VLAN?
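A port-based VLAN can be pictured as a mapping from switch ports to VLAN IDs, with broadcasts confined to ports of the same VLAN (a sketch with made-up port names and VLAN numbers):

```python
port_vlan = {"p1": 10, "p2": 10, "p3": 20, "p4": 20}   # VLAN 10 and VLAN 20

def broadcast_ports(in_port: str):
    """Ports that receive a broadcast arriving on in_port: same VLAN, excluding itself."""
    vlan = port_vlan[in_port]
    return [p for p, v in port_vlan.items() if v == vlan and p != in_port]

print(broadcast_ports("p1"))   # ['p2']: the broadcast stays inside VLAN 10
```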
Spring Semester 2021 32
Questions?

Spring Semester 2021 35


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 7 – 1
Network Layer Protocols

Spring Semester 2021 1


Network Layer - Introduction
• The role of the network layer is to move packets from a sending host to
a receiving host. To do so, two important network layer functions can be
identified as:
• Forwarding - When a packet arrives at a router’s input link, the router
must move the packet to the appropriate output link.
• Routing - The network layer must determine the route or path taken by
packets as they flow from a sender to a receiver. The algorithms that
calculate these paths are referred to as routing algorithms.

Spring Semester 2021 3


Network Layer - Introduction
• Packets transmitted by sending host may pass through several networks and require many
hops at intermediate routers before reaching the destination host.
• Forwarding is router’s local action of transferring a packet from an input link interface to the
appropriate output link interface. Routing refers to the network wide process that determines
the end-to-end paths that packets take from source to destination.
• Every router has a forwarding table. A router forwards a packet by examining the value of a
field in the arriving packet’s header, and then using this header value to index into the router’s
forwarding table. The value stored in the forwarding table entry for that header indicates the
router’s outgoing link interface to which that packet is to be forwarded.
• The values of the forwarding table are determined by routing algorithm. The routing algorithm
can be centralized (e.g., executing on a central server) or decentralized (i.e., a distributed
routing algorithm running in each router).
• In either case, a router receives routing protocol messages, which are used to configure its
forwarding table.

Spring Semester 2021 4


The Internet Protocol (IP)
• The Internet Protocol (IP) is the core of the TCP/IP protocol suite and its main protocol at the network
layer.
• IP has four basic functions:
• Addressing - In order to deliver datagrams, IP must know where to deliver them to. For this reason, IP
includes a mechanism for host addressing.
• Data Encapsulation - IP accepts data from the transport layer protocols(UDP and TCP). It then
encapsulates this data into an IP datagram using a special format prior to transmission.
• Fragmentation and Reassembly - IP datagrams are passed down to the data link layer for transmission
on the local network. However, the maximum frame size of each data link network may be different. For
this reason, IP includes the ability to fragment IP datagrams into pieces so they can each be carried on
the local network. The receiving device uses the reassembly function to recreate the whole IP datagram
again.
• Routing - When an IP datagram is sent to a destination on the same local network, this can be done
easily using the network's underlying LAN protocol. However, when the destination is on a distant
network not directly attached to the source, the datagram must be delivered by routing the datagram
through intermediate devices (called routers). IP accomplishes this with support routing protocols.
Spring Semester 2021 5
IP Addressing
• IP address has following functions:
• Network Interface Identification - The IP address provides unique
identification of the interface between a device and the network.
This is required to ensure that the datagram is delivered to the
correct recipients.
• Routing - When the source and destination of an IP datagram are
not on the same network, the datagram must be delivered using
intermediate nodes. The IP address is an essential part of the
system used to route datagrams.

Spring Semester 2021 6


IP Addressing – What is an Interface?
• A host has only a single link into the network; when IP in the
host wants to send a datagram, it does so over this link. The
boundary between the host and the physical link is called an
interface.
• A router’s job is to receive a datagram on one link and
forward the datagram on some other link, a router
necessarily has two or more links to which it is connected.
The boundary between the router and any one of its links is
also called an interface.
• A router thus has multiple interfaces, one for each of its
links.
• Because every host and router is capable of sending and
receiving IP datagrams, IP requires each host and router
interface to have its own IP address. Thus, an IP address is
technically associated with an interface, rather than with the
host or router containing that interface.

Spring Semester 2021 7


IP Addressing
• An IP address consists of 32 bits of information. These addresses are referred to as IPv4 (IP
version 4) addresses.
• These addresses are unique i.e. each address defines one and only one connection to the
internet.
• An IP address can be represented using one of the following methods:
• Dotted decimal notation
• Binary notation
• Hexadecimal notation
• IP addresses are most commonly expressed in dotted decimal with each octet of 8 bits
converted to a decimal number and the octets separated by a period (a “dot”). Each of the
octets in an IP address can take on the values from 0 to 255 so the lowest value is
theoretically 0.0.0.0 and the highest is 255.255.255.255.
• Since the IP address is 32 bits wide, this provides a theoretical address space of 2^32, or
4,294,967,296 addresses.
Spring Semester 2021 8
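To make the notations above concrete, the following Python sketch (an addition, not part of the slides) converts a dotted-decimal address to its 32-bit value and binary form, and back again:

def to_int(dotted):
    octets = [int(o) for o in dotted.split('.')]          # four octets, each 0..255
    return (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]

def to_dotted(value):
    return '.'.join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

addr = to_int('192.168.10.1')
print(addr)                 # 3232238081 (decimal value of the 32-bit address)
print(f'{addr:032b}')       # binary notation
print(to_dotted(addr))      # back to 192.168.10.1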
IP Address Structure
• The 32-bit IP address is a hierarchical address i.e. structured by network and host (two level hierarchy).
32 bits

Network ID Host ID

• Network Identifier (Network ID) - No. of bits, starting from the leftmost bit, used to uniquely identify a
network where the host is located. Also called the network prefix. Every machine on the same network
shares the same network address.
• Host Identifier (Host ID) – No of bits used to uniquely identify a host on the network.
• Hierarchical addressing was chosen because, if every address were unique and flat addressing was used,
all routers on the Internet would need to store the address of each and every machine on the Internet.
This would make efficient routing impossible.
• Routers look at the network portion of the IP address to determine if the destination IP address is on
the same network as the host IP address. Then routing decisions are made based on information the
routers keep about where various networks are located. The host portion of the address is used by
devices on the local portion of the network.
Spring Semester 2021 9
IP Address Classes
• IP addressing supports five different address classes: A, B, C, D, and E. Only classes A, B, and
C are available for commercial use.
• Each class occupies some part of the address space.
• This architecture is called classful addressing.
8 bits 8 bits 8 bits 8 bits
Class A Network ID Host ID Host ID Host ID

8 bits 8 bits 8 bits 8 bits


Class B Network ID Network ID Host ID Host ID

8 bits 8 bits 8 bits 8 bits


Class C Network ID Network ID Network ID Host ID

Spring Semester 2021 10


Network Address Range – Class A
• The first bit of the first byte in a Class A network address must always be off (0).
• If 0xxxxxxx then network address range will be:
00000000 = 0
01111111 = 127
• This means a Class A address must be between 0 and 127 in the first byte.

8 bits 8 bits 8 bits 8 bits


Class A 0 Network ID Host ID Host ID Host ID

Spring Semester 2021 11


Network Address Range – Class B
• In a Class B network, the first bit of the first byte must always be turned on (1) but the
second bit must always be turned off (0).
• If 10xxxxxx then network address range will be:
10000000 = 128
10111111 = 191
• This means a Class B network is defined when the first byte is configured from 128 to 191.

8 bits 8 bits 8 bits 8 bits


Class B 1 0 Network ID Network ID Host ID Host ID

Spring Semester 2021 12


Network Address Range – Class C
• For Class C networks, the first 2 bits of the first octet are always turned on (1), but the third
bit is off (0).
• If 110xxxxx then network address range will be:
11000000 = 192
11011111 = 223
• So, if you see an IP address that starts at 192 and goes to 223, then it is a Class C IP
address.

8 bits 8 bits 8 bits 8 bits


Class C 1 1 0 Network ID Network ID Network ID Host ID

Spring Semester 2021 13
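A small helper function, added here as a sketch, that applies the first-octet ranges from the last three slides to report an address's class:

def address_class(dotted):
    first = int(dotted.split('.')[0])       # the class is decided by the first byte
    if first <= 127: return 'A'             # leading bit 0
    if first <= 191: return 'B'             # leading bits 10
    if first <= 223: return 'C'             # leading bits 110
    if first <= 239: return 'D'             # multicast (not for commercial host addressing)
    return 'E'                              # reserved

print(address_class('10.1.2.3'), address_class('172.16.0.5'), address_class('192.168.10.1'))   # A B C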


Reserved IP Addresses
• Some IP addresses are reserved for special purposes, so network administrators can’t assign these
addresses to nodes.
Address                                              Function
Network Address of all zeros (0.0.0.160)             Specified Host On This Network - addresses a host on the current or default network.
Network Address 127.0.0.1                            Reserved for loopback tests - designates the local node and allows that node to send a test packet to itself without generating network traffic.
Node Address of all zeros (77.0.0.0)                 The Specified Network - this notation, with a “0” at the end of the address, refers to an entire network.
Node Address of all ones (77.255.255.255)            All Hosts On The Specified Network - used for broadcasting to all hosts on the local network.
Entire IP address set to all 0s (0.0.0.0)            Me - used by a device to refer to itself when it does not know its own IP address. The most common use is when a device attempts to determine its address using a host configuration protocol like DHCP.
Entire IP address set to all 1s (255.255.255.255)    All Hosts On The Network - broadcast to all nodes on the current network.
Spring Semester 2021 14
Class A Addresses
For Network ID
• 7 bits are used for Network ID as first bit is reserved as ‘0’ for Class A.
✓ No. of possible Network IDs = 2^7 − 2 = 126 (Network address 0 and 127 are reserved and cannot
be assigned to any network)
For Host ID
• 24 bits are used for Host ID
✓ No of Host IDs per network ID = 2^24 − 2 = 16,777,214 (Host address of all 0’s and all 1’s is
reserved)
E.g. All Host ID bits off is a network address 10.0.0.0
All Host ID bits on is a broadcast address 10.255.255.255
• Valid Host IDs are numbers between the network address and broadcast address i.e. 10.0.0.1
through 10.255.255.254
8 bits 8 bits 8 bits 8 bits
Class A 0 Network ID Host ID Host ID Host ID
Spring Semester 2021 15
Class B Addresses
For Network ID
• 14 bits are used for Network ID as first two bits are reserved as ‘10’ for Class B.
✓ No. of possible Network IDs = 2^14 = 16,384
For Host ID
• 16 bits are used for Host ID
✓ No of Host IDs per network ID = 2^16 − 2 = 65,534 (Host address of all 0’s and all 1’s is reserved)
E.g. All Host ID bits off is a network address 172.16.0.0
All Host ID bits on is a broadcast address 172.16.255.255
• Valid Host IDs are numbers between the network address and broadcast address i.e.
172.16.0.1 through 172.16.255.254

8 bits 8 bits 8 bits 8 bits


Class B 1 0 Network ID Network ID Host ID Host ID

Spring Semester 2021 16


Class C Addresses
For Network ID
• 21 bits are used for Network ID as first three bits are reserved as ‘110’ for Class C.
✓ No. of possible Network IDs = 2^21 = 2,097,152
For Host ID
• 8 bits are used for Host ID
✓ No of Host IDs per network ID = 2^8 − 2 = 254 (Host address of all 0’s and all 1’s is reserved)
E.g. All Host ID bits off is a network address 192.168.100.0
All Host ID bits on is a broadcast address 192.168.100.255
• Valid Host IDs are numbers between the network address and broadcast address i.e.
192.168.100.1 through 192.168.100.254

8 bits 8 bits 8 bits 8 bits


Class C 1 1 0 Network ID Network ID Network ID Host ID
Spring Semester 2021 17
Private IP Addresses
• Private IP addresses can be used on a private network, but they are not routable through the
Internet.
• If every host on every network had to have real routable IP addresses then we would have
run out of IP addresses.
• But by using private IP addresses, ISPs, corporations, and home users only need a relatively
tiny group of IP addresses to connect their networks to the Internet. They can use private IP
addresses on their inside networks and get along just fine.
• To accomplish this task, the ISP and the corporation use Network Address Translation (NAT),
which basically takes a private IP address and converts it for use on the Internet.

Address Class Reserved Address Space


Class A 10.0.0.0 through 10.255.255.255
Class B 172.16.0.0 through 172.31.255.255
Class C 192.168.0.0 through 192.168.255.255

Spring Semester 2021 18
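As a quick check of the reserved ranges above, Python's standard ipaddress module reports whether an address is private; a small sketch added for illustration:

import ipaddress

for ip in ['10.20.30.40', '172.31.0.1', '192.168.5.9', '8.8.8.8']:
    print(ip, ipaddress.ip_address(ip).is_private)
# The first three fall inside the private Class A/B/C blocks (True); 8.8.8.8 does not (False).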


IP Addresses Configuration
• There are two basic ways that IP addresses can be configured.
• Static configuration - each device is manually configured with an IP
address that doesn't change. This is fine for small networks but
becomes an administrative nightmare in larger networks when
changes are required.
• Dynamic configuration - IP addresses are assigned to devices and
changed under software control.

Spring Semester 2021 19


Dynamic Host Configuration Protocol
• DHCP allows a host to obtain an IP address automatically from a shared pool of IP addresses
managed by DHCP server.
• A network administrator can configure DHCP so that a given host receives the same IP
address each time it connects to the network (automatic allocation), or a host may be
assigned a temporary IP address that will be different each time the host connects to the
network(dynamic allocation).
• In addition to host IP address assignment, DHCP also allows a host to learn additional
information, such as:
• subnet mask,
• the address of its first hop router (default gateway), and
• the address of its local DNS server.
• DHCP is a client server protocol. A DHCP server is a network device that has been
programmed to provide DHCP services to clients. They manage address information and other
parameters and respond to client configuration requests. A DHCP client is any device that
sends DHCP requests to a server to obtain an IP address or other configuration information.

Spring Semester 2021 20


Dynamic Host Configuration Protocol
• For a newly arriving host, the DHCP protocol is a four step process. The four steps are:
• DHCP server discovery - The first task of a newly arriving host is to find a DHCP server with which to
interact. This is done using a DHCP discover message. The DHCP client creates an IP datagram
containing its DHCP discover message along with the broadcast destination IP address of
255.255.255.255 and a “this host” source IP address of 0.0.0.0. The DHCP client passes the IP
datagram to the link layer, which then broadcasts this frame to all nodes attached to the subnet.
• DHCP server offer - A DHCP server receiving a DHCP discover message responds to the client with a
DHCP offer message that is broadcast to all nodes on the subnet, again using the IP broadcast address
of 255.255.255.255. Each server offer message contains the transaction ID of the received discover
message, the proposed IP address for the client, the network mask, and an IP address lease time which
is the amount of time for which the IP address will be valid.
• DHCP request - The newly arriving client will choose from among one or more server offers and respond
to its selected offer with a DHCP request message, echoing back the configuration parameters.
• DHCP ACK - The server responds to the DHCP request message with a DHCP ACK message, confirming
the requested parameters.
• Once the client receives the DHCP ACK, the interaction is complete and the client can use the DHCP
allocated IP address for the lease duration. DHCP also provides a mechanism that allows a client to
renew its lease on an IP address in case of lease expiration.
Spring Semester 2021 21
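The four-step exchange can be summarised as data; the listing below is purely illustrative (an addition to the slides, not a working DHCP client or server, and real messages carry many more fields over UDP):

dora = [
    # (message,       source IP,           destination IP)
    ('DHCPDISCOVER', '0.0.0.0',            '255.255.255.255'),  # client does not yet have an address
    ('DHCPOFFER',    '<DHCP server addr>', '255.255.255.255'),  # proposed IP, mask, lease time
    ('DHCPREQUEST',  '0.0.0.0',            '255.255.255.255'),  # client echoes the chosen offer
    ('DHCPACK',      '<DHCP server addr>', '255.255.255.255'),  # server confirms the lease
]
for msg, src, dst in dora:
    print(f'{msg:13s} {src:19s} -> {dst}')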
Questions?

Spring Semester 2021 23


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 7 – 2
Network Layer – Subnetting & VLSM

Spring Semester 2021 1


Problems with Classful IP Addressing
• Several problems with classful addressing are:
• Not having a network class that can efficiently support a medium sized domain/organization. If an
organization is assigned Class B or Class A block of addresses, it will be considered as a single network
with one network ID by the Internet router. A Class B network supports 65,534 hosts which is too large
for a medium sized organization with the requirement of few hundred hosts. Thus wasting several
thousand potential host addresses.
• There are few choices in the sizes of networks available. Class C has 254 hosts and class B has 65534
hosts. There are many companies that need more than 254 IP address but a lot fewer than 65,000.
• Consider a company with 5,000 computers: what class of network should be used? If it is assigned a Class
B network, over 90% of the IP addresses are wasted. To avoid wasting all these IP addresses, the same
company could instead be given Class C addresses, but 20 Class C network addresses would be required to meet the
company’s need. Thus every router on the Internet replaces the single Class B router table entry with 20
Class C router entries. This method would add to the size of router tables. Router tables have already
been growing quickly as the Internet has expanded. The larger these tables, the more time it takes for
routers to make routing decisions.

Spring Semester 2021 3


IP Subnet Addressing
• In order to address the problem of classful addressing a new addressing procedure called
subnet addressing or subnetting was proposed.
• The basic idea behind subnet addressing is to add an additional hierarchical level in the way
IP addresses are interpreted. The concept of a network remains unchanged, but instead of
having just hosts within a network, a new hierarchy is created - subnets and hosts.
• Each subnet is a subnetwork, and functions the way a full network does. A three level
hierarchy is thus created: networks, which contain subnets, each of which then has a number
of hosts.
• Subnetting provides numerous advantages:
• Hosts can be grouped into subnets that reflect the way they are actually structured in the
organization's physical network.
• The number of subnets and number of hosts per subnet can be customized for each organization.
• Since the subnet structure exists only within the organization, routers outside that organization
know nothing about it. To the outside, the organization still appears as a single routing table entry covering all of its
devices. Only routers inside the organization need to worry about routing between subnets.
Spring Semester 2021 4
Subnet Mask
• To create subnetworks, take bits from the host portion of the IP address and reserve them to define the subnet
address.
• In a classful environment, routers use the first octet of the IP address to determine the class of the address, and
from this they know which bits are the network ID and which are the host ID.
• When subnetting is used, these routers also need to know how that host ID is divided into subnet ID and host ID.
However, this division can be arbitrary for each network. Furthermore, there is no way to tell how many bits belong to
each simply by looking at the IP address.
• So additional information about which bits are for the subnet ID and which for the host ID must be communicated to
devices that interpret IP addresses. This information is given in the form of a 32 bit binary number called a subnet
mask.
• The network administrator creates a 32 bit subnet mask composed of 1s and 0s. The 1s in the subnet mask represent
the positions that refer to the network or subnet addresses.

32 bits

Network ID Subnet ID Host ID


Spring Semester 2021 5
How Subnet Masks are Used to Determine the Network
Number?
• The router extracts the IP destination address from the incoming packet and retrieves the internal subnet mask. It
then performs a logical AND operation to obtain the network number. This causes the host portion of the IP
destination address to be removed, while the destination network number remains.
• Mask Bit is 1 - If the IP address bit is a 0, the result of the AND will be 0, and if it is a 1, the AND will be 1. In other
words, where the subnet bit is a 1, the IP address is preserved unchanged.
• Mask Bit is 0 – If we AND with a 0, so the result is always 0 regardless of what the IP address is. Thus, when the
subnet bit is a 0, the IP address bit is always cleared to 0.
• Any address bits which have corresponding mask bit set to ‘1’ represent the Network ID. Any address bits that have
corresponding mask bits set to ‘0’ represent a Host ID.
• So when we use the subnet mask on an IP address, the bits in the network ID and subnet ID are left intact, while the
host ID bits are removed.
• Default masks of classes are:
• Class A → 255.0.0.0
• Class B → 255.255.0.0
• Class C→ 255.255.255.0

Spring Semester 2021 6
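A short Python sketch (added for illustration) of the AND operation described above; the host address and mask below are arbitrary examples:

def to_int(dotted):
    return int.from_bytes(bytes(int(o) for o in dotted.split('.')), 'big')

def to_dotted(value):
    return '.'.join(str((value >> s) & 0xFF) for s in (24, 16, 8, 0))

ip, mask = to_int('172.16.35.7'), to_int('255.255.240.0')
network = ip & mask                       # host bits cleared, network/subnet bits kept
print(to_dotted(network))                 # 172.16.32.0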


Classless Interdomain Routing
• Classless Inter-Domain Routing (CIDR) is a method that ISPs use to allocate a number of addresses to a
company, a home or a customer. For example 192.168.10.32/28.
• The slash notation (/) means how many bits are turned on (1s).
• The maximum could be /32 but the largest subnet mask available can only be a /30 because at least 2
bits have to be reserved for host bits.

Subnet Mask        CIDR Value
255.0.0.0          /8
255.128.0.0        /9
255.192.0.0        /10
255.224.0.0        /11
255.240.0.0        /12
255.248.0.0        /13
255.252.0.0        /14
255.254.0.0        /15
255.255.0.0        /16
255.255.128.0      /17
255.255.192.0      /18
255.255.224.0      /19
255.255.240.0      /20
255.255.248.0      /21
255.255.252.0      /22
255.255.254.0      /23
255.255.255.0      /24
255.255.255.128    /25
255.255.255.192    /26
255.255.255.224    /27
255.255.255.240    /28
255.255.255.248    /29
255.255.255.252    /30

Spring Semester 2021 7
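The table can be reproduced in a few lines of Python; the sketch below (an addition to the slides) derives the dotted-decimal mask from a prefix length:

def cidr_to_mask(prefix):
    value = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF    # set the top 'prefix' bits
    return '.'.join(str((value >> s) & 0xFF) for s in (24, 16, 8, 0))

for prefix in (8, 17, 26, 30):
    print(f'/{prefix} -> {cidr_to_mask(prefix)}')
# /8 -> 255.0.0.0   /17 -> 255.255.128.0   /26 -> 255.255.255.192   /30 -> 255.255.255.252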
Subnetting Class C Addresses
• In a Class C address, only 8 bits are available for defining the hosts.
• The subnet bits start at the left and move to the right, without skipping bits.
• We need to answer five simple questions:
• How many subnets does the chosen subnet mask produce?
Answer : 2𝑥 where 𝑥 is the number of masked bits or 1’s.
• How many valid hosts per subnet are available?
Answer : 2𝑦 − 2 where 𝑦 is the number of unmasked bits or 0’s.
• What are the valid subnets?
Answer : 256 – 𝑠𝑢𝑏𝑛𝑒𝑡 𝑚𝑎𝑠𝑘 = 𝑏𝑙𝑜𝑐𝑘 𝑠𝑖𝑧𝑒
• What’s the broadcast address of each subnet?
Answer : 𝑇ℎ𝑒 𝑏𝑟𝑜𝑎𝑑𝑐𝑎𝑠𝑡 𝑎𝑑𝑑𝑟𝑒𝑠𝑠 𝑖𝑠 𝑎𝑙𝑤𝑎𝑦𝑠 𝑡ℎ𝑒 𝑛𝑢𝑚𝑏𝑒𝑟 𝑟𝑖𝑔ℎ𝑡 𝑏𝑒𝑓𝑜𝑟𝑒 𝑡ℎ𝑒 𝑛𝑒𝑥𝑡 𝑠𝑢𝑏𝑛𝑒𝑡
• What are the valid hosts in each subnet?
Answer : 𝑉𝑎𝑙𝑖𝑑 ℎ𝑜𝑠𝑡𝑠 𝑎𝑟𝑒 𝑡ℎ𝑒 𝑛𝑢𝑚𝑏𝑒𝑟𝑠 𝑏𝑒𝑡𝑤𝑒𝑒𝑛 𝑡ℎ𝑒 𝑠𝑢𝑏𝑛𝑒𝑡𝑠, 𝑒𝑥𝑐𝑒𝑝𝑡 𝑎𝑙𝑙 0𝑠 𝑎𝑛𝑑 𝑎𝑙𝑙 1𝑠.

Spring Semester 2021 8
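The five questions can be answered mechanically; the sketch below (an addition to the slides) does so for a Class C network, taking the mask as the value of its last octet (e.g. 192 for /26):

def class_c_summary(mask_last_octet):
    x = bin(mask_last_octet).count('1')       # masked (subnet) bits in the last octet
    y = 8 - x                                 # unmasked (host) bits
    block = 256 - mask_last_octet             # block size
    subnets = [i * block for i in range(2 ** x)]
    return 2 ** x, 2 ** y - 2, block, subnets

count, hosts, block, subnets = class_c_summary(192)   # 255.255.255.192 (/26)
print(count, hosts, block)   # 4 subnets, 62 valid hosts per subnet, block size 64
print(subnets)               # [0, 64, 128, 192]; broadcast of each is the next subnet minus 1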


Subnetting Class C Addresses
Subnet the following address Network address 192.168.10.0 ; Subnet Mask 255.255.255.128 (/25)
No. of subnets = 2^x = 2^1 = 2
No. of valid hosts per subnet = 2^y − 2 = 2^7 − 2 = 126
Valid subnets = 256 − 𝑠𝑢𝑏𝑛𝑒𝑡 𝑚𝑎𝑠𝑘 = 256 − 128 = 128 𝑖𝑠 𝑏𝑙𝑜𝑐𝑘 𝑠𝑖𝑧𝑒
∴ 𝑇𝑤𝑜 𝑠𝑢𝑏𝑛𝑒𝑡𝑠 𝑎𝑟𝑒 𝑠𝑢𝑏𝑛𝑒𝑡 0 𝑎𝑛𝑑 𝑠𝑢𝑏𝑛𝑒𝑡 128
Broadcast address for each subnet is the number right before the value of the next subnet
∴ 𝐹𝑜𝑟 𝑠𝑢𝑏𝑛𝑒𝑡 0 𝑏𝑟𝑜𝑎𝑑𝑐𝑎𝑠𝑡 𝑎𝑑𝑑𝑟𝑒𝑠𝑠 𝑖𝑠 127
𝐹𝑜𝑟 𝑠𝑢𝑏𝑛𝑒𝑡 128 𝑡ℎ𝑒 𝑏𝑟𝑜𝑎𝑑𝑐𝑎𝑠𝑡 𝑎𝑑𝑑𝑟𝑒𝑠𝑠 𝑖𝑠 255
Valid Hosts per subnet are the numbers between subnet address and broadcast address
∴ 𝐹𝑜𝑟 𝑆𝑢𝑏𝑛𝑒𝑡 0 → 1 − 126
𝐹𝑜𝑟 𝑆𝑢𝑏𝑛𝑒𝑡 128 → 129 − 254
Subnet 0 128
Network Address 192.168.10.0 192.168.10.128
First Host Address 192.168.10.1 192.168.10.129
Last Host Address 192.168.10.126 192.168.10.254
Broadcast Address 192.168.10.127 192.168.10.255
Spring Semester 2021 9
Subnetting Class C Addresses
Subnet the following address Network address 192.168.10.0 ; Subnet Mask 255.255.255.192 (/26)
No. of subnets = 2^x = 2^2 = 4 ; No. of valid hosts per subnet = 2^y − 2 = 2^6 − 2 = 62
Valid subnets = 256 − 𝑠𝑢𝑏𝑛𝑒𝑡 𝑚𝑎𝑠𝑘 = 256 − 192 = 64 𝑖𝑠 𝑏𝑙𝑜𝑐𝑘 𝑠𝑖𝑧𝑒
∴ 𝐹𝑜𝑢𝑟 𝑠𝑢𝑏𝑛𝑒𝑡𝑠 𝑎𝑟𝑒 𝑠𝑢𝑏𝑛𝑒𝑡 0, 𝑠𝑢𝑏𝑛𝑒𝑡 64, 𝑠𝑢𝑏𝑛𝑒𝑡 128 𝑎𝑛𝑑 𝑠𝑢𝑏𝑛𝑒𝑡 192
Broadcast address for each subnet is the number right before the value of the next subnet
∴ 𝐹𝑜𝑟 𝑠𝑢𝑏𝑛𝑒𝑡 0 𝑏𝑟𝑜𝑎𝑑𝑐𝑎𝑠𝑡 𝑎𝑑𝑑𝑟𝑒𝑠𝑠 𝑖𝑠 63
𝐹𝑜𝑟 𝑠𝑢𝑏𝑛𝑒𝑡 64 𝑡ℎ𝑒 𝑏𝑟𝑜𝑎𝑑𝑐𝑎𝑠𝑡 𝑎𝑑𝑑𝑟𝑒𝑠𝑠 𝑖𝑠 127
𝐹𝑜𝑟 𝑠𝑢𝑏𝑛𝑒𝑡 128 𝑏𝑟𝑜𝑎𝑑𝑐𝑎𝑠𝑡 𝑎𝑑𝑑𝑟𝑒𝑠𝑠 𝑖𝑠 191
𝐹𝑜𝑟 𝑠𝑢𝑏𝑛𝑒𝑡 192 𝑏𝑟𝑜𝑎𝑑𝑐𝑎𝑠𝑡 𝑎𝑑𝑑𝑟𝑒𝑠𝑠 𝑖𝑠 255
Valid Hosts per subnet are the numbers between subnet address and broadcast address
∴ 𝐹𝑜𝑟 𝑠𝑢𝑏𝑛𝑒𝑡 0 → 1 − 62
𝐹𝑜𝑟 𝑠𝑢𝑏𝑛𝑒𝑡 64 → 65 − 126
𝐹𝑜𝑟 𝑠𝑢𝑏𝑛𝑒𝑡 128 → 129 − 190
𝐹𝑜𝑟 𝑠𝑢𝑏𝑛𝑒𝑡 192 → 193 − 254

Subnet 0 64 128 192


Network Address 192.168.10.0 192.168.10.64 192.168.10.128 192.168.10.192
First Host Address 192.168.10.1 192.168.10.65 192.168.10.129 192.168.10.193
Last Host Address 192.168.10.62 192.168.10.126 192.168.10.190 192.168.10.254
Broadcast Address 192.168.10.63 192.168.10.127 192.168.10.191 192.168.10.255
Spring Semester 2021 10
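The same /26 example can be cross-checked with Python's standard ipaddress module; a short sketch (not part of the slides):

import ipaddress

for subnet in ipaddress.ip_network('192.168.10.0/24').subnets(new_prefix=26):
    hosts = list(subnet.hosts())
    print(subnet.network_address, hosts[0], hosts[-1], subnet.broadcast_address)
# Each line gives the network, first host, last host and broadcast address of one
# of the four /26 subnets, matching the table above.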
Subnetting Class B Addresses
• The process of subnetting a Class B network is same as it is for a Class C, except that we have more host bits and we start
in the third octet. Class B network address has 16 bits available for host addressing. This means we can use up to 14 bits
for subnetting.
Subnet the following address Network address 172.16.0.0 ; Subnet Mask 255.255.128.0 (/17)
No. of subnets = 2^x = 2^1 = 2
No. of valid hosts per subnet = 2^y − 2 = 2^15 − 2 = 32,766
Valid subnets = 256 − 𝑠𝑢𝑏𝑛𝑒𝑡 𝑚𝑎𝑠𝑘 = 256 − 128 = 128 𝑖𝑠 𝑏𝑙𝑜𝑐𝑘 𝑠𝑖𝑧𝑒
∴ 𝑇𝑤𝑜 𝑠𝑢𝑏𝑛𝑒𝑡𝑠 𝑎𝑟𝑒 𝑠𝑢𝑏𝑛𝑒𝑡 0.0 𝑎𝑛𝑑 𝑠𝑢𝑏𝑛𝑒𝑡 128.0
Broadcast address for each subnet is the number right before the value of the next subnet.
Valid Hosts per subnet are the numbers between subnet address and broadcast address.

Subnet 0.0 128.0


Network Address 172.16.0.0 172.16.128.0
First Host Address 172.16.0.1 172.16.128.1
Last Host Address 172.16.127.254 172.16.255.254
Broadcast Address 172.16.127.255 172.16.255.255

Spring Semester 2021 11


Subnetting Class B Addresses
Subnet the following address Network address 172.16.0.0 ; Subnet Mask 255.255.240.0 (/20)
No. of subnets = 2^x = 2^4 = 16
No. of valid hosts per subnet = 2^y − 2 = 2^12 − 2 = 4094
Valid subnets = 256 − 𝑠𝑢𝑏𝑛𝑒𝑡 𝑚𝑎𝑠𝑘 = 256 − 240 = 16 𝑖𝑠 𝑏𝑙𝑜𝑐𝑘 𝑠𝑖𝑧𝑒
Broadcast address for each subnet is the number right before the value of the next subnet
Valid Hosts per subnet are the numbers between subnet address and broadcast address

Subnet 0.0 16.0 … 208.0 224.0 240.0


Network Address 172.16.0.0 172.16.16.0 … 172.16.208.0 172.16.224.0 172.16.240.0
First Host Address 172.16.0.1 172.16.16.1 … 172.16.208.1 172.16.224.1 172.16.240.1
Last Host Address 172.16.15.254 172.16.31.254 … 172.16.223.254 172.16.239.254 172.16.255.254
Broadcast Address 172.16.15.255 172.16.31.255 … 172.16.223.255 172.16.239.255 172.16.255.255

Spring Semester 2021 12


Subnetting Class A Addresses
• The process of subnetting a Class A network is same as it is for a Class B and Class C, except that we start in the second
octet. Class A network address has 24 bits available for host addressing. This means we can use up to 22 bits for
subnetting.
Subnet the following address Network address 10.0.0.0 ; Subnet Mask 255.255.0.0 (/16)
No. of subnets = 2^x = 2^8 = 256
No. of valid hosts per subnet = 2^y − 2 = 2^16 − 2 = 65,534
Valid subnets = 256 − 𝑠𝑢𝑏𝑛𝑒𝑡 𝑚𝑎𝑠𝑘 = 256 − 255 = 1 𝑖𝑠 𝑏𝑙𝑜𝑐𝑘 𝑠𝑖𝑧𝑒(𝑎𝑙𝑙 𝑖𝑛 𝑡ℎ𝑒 𝑠𝑒𝑐𝑜𝑛𝑑 𝑜𝑐𝑡𝑒𝑡)
Broadcast address for each subnet is the number right before the value of the next subnet.
Valid Hosts per subnet are the numbers between subnet address and broadcast address.

Subnet 0.0.0 1.0.0 … 254.0.0 255.0.0


Network Address 10.0.0.0 10.1.0.0 … 10.254.0.0 10.255.0.0
First Host Address 10.0.0.1 10.1.0.1 … 10.254.0.1 10.255.0.1
Last Host Address 10.0.255.254 10.1.255.254 … 10.254.255.254 10.255.255.254
Broadcast Address 10.0.255.255 10.1.255.255 … 10.254.255.255 10.255.255.255

Spring Semester 2021 13


VLSM
• Subnetting replaces the two level IP addressing scheme with a three level hierarchy and subnet ID is the same length
throughout the network(Fixed Length Subnet Mask).
• This can be a problem if we have subnetworks with very different numbers of hosts on them. If we choose the
subnet ID based on whichever subnet has the greatest number of hosts, even if most of subnets have far fewer host
requirement.
• This will be inefficient as much of the valuable IP address space will be wasted in subnets with fewer host
requirements.
• VLSM allows designers to reduce number of wasted IP addresses in each subnet by creating many subnetworks using
subnet mask of different lengths.
• The idea is to subnet the network, and then subnet the subnets just the way the original network was sub-netted.
Also called sub – subnetting.

Spring Semester 2021 14
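For illustration only, here is a minimal VLSM allocator sketch in Python. It is not the procedure used in the worked example that follows: it packs subnets from the start of the block, so the subnet numbers differ from the example (which places the first /26 at .64), but the prefix lengths come out the same. Only network # 1's requirement of 60 hosts appears in the example text; the other host counts used here (30, 14, 2, 2) are assumptions matching the masks eventually chosen.

import math

def vlsm(base_octets, host_counts):
    # Largest requirement first; each subnet gets the smallest block that fits it.
    base = int.from_bytes(bytes(base_octets), 'big')
    cursor, allocations = base, []
    for hosts in sorted(host_counts, reverse=True):
        bits = math.ceil(math.log2(hosts + 2))        # host bits, incl. network and broadcast
        size = 2 ** bits
        cursor = (cursor + size - 1) // size * size   # align the start to the block size
        net = '.'.join(str((cursor >> s) & 0xFF) for s in (24, 16, 8, 0))
        allocations.append((net, 32 - bits))
        cursor += size
    return allocations

print(vlsm([192, 168, 10, 0], [60, 30, 14, 2, 2]))
# [('192.168.10.0', 26), ('192.168.10.64', 27), ('192.168.10.96', 28),
#  ('192.168.10.112', 30), ('192.168.10.116', 30)]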


VLSM
• Example

• Network Address 192.168.10.0 Default Mask 255.255.255.0(/24)


• Start with network having maximum host requirement.
• In our case , it is Network # 1
• If we subnet using 2 bits then,
• Total Number of subnets = 2^2 = 4 ; Valid Hosts per subnet = 2^y − 2 = 2^6 − 2 = 62 Hosts
• Our requirement for network # 1 is 60 hosts, so this subnet mask fulfils the requirement.
• Subnet Mask will be 255.255.255.192(/26)
Spring Semester 2021 15
VLSM
Four subnets will be:
Subnet 0: 192.168.10.00000000 (192.168.10.0)
Subnet 64: 192.168.10.01000000 (192.168.10.64)
Subnet 128: 192.168.10.10000000 (192.168.10.128)
Subnet 192: 192.168.10.11000000 (192.168.10.192)
We choose subnet 64 and assign it to network # 1

192.168.10.64 (/26): Possible Hosts = 62, No of wasted IP Addresses = 2

Network Address 192.168.10.64(/26)
Subnet Mask 255.255.255.192(/26)


First Host Address 192.168.10.65
Last Host Address 192.168.10.126
Broadcast Address 192.168.10.127
Spring Semester 2021 16
VLSM
For Network # 2
We choose subnet 128 for sub-subnetting
We choose to further subnet on 1 bit ;
Total Number of sub-subnets = 2^1 = 2
Valid Hosts per sub-subnet = 2^y − 2 = 2^5 − 2 = 30 Hosts
Two sub-subnets:
Subnet 128: 192.168.10.10000000 (192.168.10.128) (Assign it to Network # 2)
Subnet 160: 192.168.10.10100000 (192.168.10.160)

Network Address 192.168.10.128(/27)
Subnet Mask 255.255.255.224(/27)
First Host Address 192.168.10.129
Last Host Address 192.168.10.158
Broadcast Address 192.168.10.159

192.168.10.64 (/26): Possible Hosts = 62, No of wasted IP Addresses = 2
192.168.10.128 (/27): Possible Hosts = 30, No of wasted IP Addresses = 0

Spring Semester 2021 17
VLSM
For Network # 3
We choose subnet 160 for sub-subnetting
We choose to further subnet on 1 bit ;
Total Number of sub-subnets = 2^1 = 2
Valid Hosts per sub-subnet = 2^y − 2 = 2^4 − 2 = 14 Hosts
Two sub-subnets:
Subnet 160: 192.168.10.10100000 (192.168.10.160) (Assign it to Network # 3)
Subnet 176: 192.168.10.10110000 (192.168.10.176)

Network Address 192.168.10.160(/28)
Subnet Mask 255.255.255.240(/28)
First Host Address 192.168.10.161
Last Host Address 192.168.10.174
Broadcast Address 192.168.10.175

192.168.10.64 (/26): Possible Hosts = 62, No of wasted IP Addresses = 2
192.168.10.128 (/27): Possible Hosts = 30, No of wasted IP Addresses = 0
192.168.10.160 (/28): Possible Hosts = 14, No of wasted IP Addresses = 2

Spring Semester 2021 18
VLSM
For Network # 4 & 5
We choose subnet 176 for sub-subnetting
We choose to further subnet on 2 bits ;
Total Number of sub-subnets = 2^2 = 4
Valid Hosts per sub-subnet = 2^y − 2 = 2^2 − 2 = 2 Hosts
Four sub-subnets:
Subnet 176: 192.168.10.10110000 (192.168.10.176)
Subnet 180: 192.168.10.10110100 (192.168.10.180) (Assign it to Network # 4)
Subnet 184: 192.168.10.10111000 (192.168.10.184) (Assign it to Network # 5)
Subnet 188: 192.168.10.10111100 (192.168.10.188)

Network Address (4) 192.168.10.180(/30)
Subnet Mask 255.255.255.252(/30)
First Host Address 192.168.10.181
Last Host Address 192.168.10.182
Broadcast Address 192.168.10.183
Network Address (5) 192.168.10.184(/30)
Subnet Mask 255.255.255.252(/30)
First Host Address 192.168.10.185
Last Host Address 192.168.10.186
Broadcast Address 192.168.10.187

Spring Semester 2021 19


192.168.10.64 (/26): Possible Hosts = 62, No of wasted IP Addresses = 2
192.168.10.128 (/27): Possible Hosts = 30, No of wasted IP Addresses = 0
192.168.10.160 (/28): Possible Hosts = 14, No of wasted IP Addresses = 2
192.168.10.180 (/30): Possible Hosts = 2, No of wasted IP Addresses = 0
192.168.10.184 (/30): Possible Hosts = 2, No of wasted IP Addresses = 0

Spring Semester 2021 20


Questions?

Spring Semester 2021 22


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 8 – 1
Network Layer – Routing Protocols

Spring Semester 2021 1


The Internet Protocol (IP)
• The Internet Protocol (IP) is the core of the TCP/IP protocol suite and its main protocol at the network
layer.
• IP has four basic functions:
• Addressing - In order to deliver datagrams, IP must know where to deliver them to. For this reason, IP
includes a mechanism for host addressing.
• Data Encapsulation - IP accepts data from the transport layer protocols(UDP and TCP). It then
encapsulates this data into an IP datagram using a special format prior to transmission.
• Fragmentation and Reassembly - IP datagrams are passed down to the data link layer for transmission
on the local network. However, the maximum frame size of each data link network may be different. For
this reason, IP includes the ability to fragment IP datagrams into pieces so they can each be carried on
the local network. The receiving device uses the reassembly function to recreate the whole IP datagram
again.
• Routing - When an IP datagram is sent to a destination on the same local network, this can be done
easily using the network's underlying LAN protocol. However, when the destination is on a distant
network not directly attached to the source, the datagram must be delivered by routing the datagram
through intermediate devices (called routers). IP accomplishes this with support routing protocols.
Spring Semester 2021 3
IPv4 Packet Format
• Data transmitted over an internet using IP is carried in messages called IP datagrams.
• IP datagram consists of following fields:
Version (4) | IHL (4) | DS (6) | ECN (2) | Total Length (16)
Identification (16) | Flags (3) | Fragment Offset (13)
Time to Live (8) | Protocol (8) | Header Checksum (16)
Source Address (32)
Destination Address (32)
Options + Padding (0 or 32 if any)
Data (Variable)
(The first five 32-bit words form the 20-octet fixed header; field widths are given in bits.)

Spring Semester 2021 4


IPv4 Packet Format
• Version (4 bits) - Indicates version number, the value is 4. By looking at the version number, the router can
determine how to interpret the remainder of the IP datagram.
• Internet Header Length (IHL) (4 bits) - Because an IPv4 datagram can contain a variable number of options
these 4 bits are needed to determine where in the IP datagram the data begins. Most IP datagrams do not
contain options, so the typical IP datagram has a 20 byte header.
• Differentiated Service(DS)/Explicit Congestion Notification(ECN)(8 bits) - Allows to mark packets for
differentiated treatment to achieve Quality-Of-Service (QoS), e.g. express priorities. The ECN field provides for
explicit signaling of congestion.
• Total Length (16 bits) - Total datagram length, including header plus data, in bytes. Since this field is 16 bits
long, the theoretical maximum size of the IP datagram is 65,535 bytes.
• Identification (16 bits) - A sequence number that, together with the source address, destination address, and
user protocol, is intended to identify a datagram uniquely. Thus, this number should be unique for the
datagram’s source address, destination address, and user protocol for the time during which the datagram will
remain in the internet.
• Flags (3 bits) - Only two of the bits are currently defined. MF is ‘more fragments’ and is used for fragmentation
and reassembly. The DF ‘Don’t Fragment’ bit prohibits fragmentation when set.
• Fragment Offset (13 bits) - When fragmentation of a message occurs, this field specifies the offset, or position,
in the overall message where the data in this fragment goes. It is specified in units of 8 bytes (64 bits).

Spring Semester 2021 5


IPv4 Packet Format
• Time to Live (8 bits) - The time to live (TTL) field is included to ensure that datagrams do not circulate forever in the
network. It specifies how long the datagram is allowed to ‘live’ on the network, in terms of router hops .This field is
decremented by one each time the datagram is processed by a router. If the TTL field reaches 0, the datagram must be
dropped.
• Protocol (8 bits) - Indicates the next higher level protocol(either transport layer protocol or encapsulated network layer
protocol) that is to receive the data field at the destination. Example values are ICMP = 0x01 , TCP = 0x06 , UDP = 0x11.
• Header Checksum (16 bits) - An error detecting code applied to the header only. The header checksum is computed by
treating each 2 bytes in the header as a number and adding these numbers using 1s complement arithmetic. The 1s
complement of this sum, known as the Internet checksum, is stored in the checksum field. A router computes the header
checksum for each received IP datagram and detects an error condition if the checksum carried in the datagram header does
not equal the computed checksum. Routers typically discard datagrams for which an error has been detected. The checksum
must be recomputed and stored again at each router, as the TTL field, and possibly the options field as well, may change.
• Source and Destination Addresses (32 bits each) - When a source creates a datagram, it inserts its IP address into the source
IP address field and inserts the address of the ultimate destination into the destination IP address field.
• Options/Padding (variable) - Contains header field for optional IP feature requested by the sending user. If one or more
options are included, and the number of bits used for them is not a multiple of 32, enough zero bits are added to make the
header to a multiple of 32 bits (4 bytes).
• Data/Payload (variable) - The data field of the IP datagram contains the transport layer segment (TCP or UDP) to be
delivered to the destination. However, the data field can carry other types of data, such as ICMP messages. The data field
must be an integer multiple of 8 bits in length. The maximum length of the datagram (data field plus header) is 65,535
octets.

Spring Semester 2021 6
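The Header Checksum computation described above can be sketched in a few lines of Python (an illustration added here, not production code). The 20 header bytes below are arbitrary example values with the checksum field initially set to zero.

# Internet checksum: 16-bit one's-complement sum, then one's complement of the result.
def internet_checksum(data):
    # Assumes an even number of bytes (IPv4 headers always are).
    total = 0
    for i in range(0, len(data), 2):               # treat each 2 bytes as one 16-bit word
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in (end-around carry)
    return ~total & 0xFFFF

# 20-byte example header (version/IHL ... destination address) with checksum bytes 10-11 zeroed.
header = bytes.fromhex('4500003c1c464000400600000a0000010a000002')
csum = internet_checksum(header)
filled = header[:10] + csum.to_bytes(2, 'big') + header[12:]
print(hex(csum), internet_checksum(filled))        # recomputing over the filled header yields 0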


IP Fragmentation & Reassembly
• The maximum amount of data that a link layer frame can carry is called the maximum transmission unit (MTU). For
example, Ethernet frame can carry up to 1500 bytes of data.
• Because each IP datagram is encapsulated within the link layer frame for transport from one router to the next router,
the MTU of the link layer protocol places a hard limit on the length of an IP datagram. Another issue is that each of
the links along the route between sender and destination can use different link layer protocols, and each of these
protocols can have different MTUs.
• The solution is to fragment the data in the IP datagram into two or more smaller IP datagrams, encapsulate each of
these smaller IP datagrams in a separate link layer frame; and send these frames over the outgoing link. Each of
these smaller datagrams is referred to as a fragment.
• In addition, every intermediate router can either fragment a full message or further fragment a fragment when
necessary for transmission on next hop.
• Fragments need to be reassembled before they reach the transport layer at the destination. The question is: where
should they be reassembled, at the network routers or at the destination end system?
• Reassembly at intermediate routers can have following disadvantages:
• Large buffers are required at routers, and there is the risk that all of the buffer space will be used up storing
partial datagrams.
• All fragments of a datagram must pass through the same router which can prevent the use of dynamic routing.
• Thus, datagram fragments are reassembled at the destination end system.

Spring Semester 2021 7


IP Fragmentation & Reassembly
• All fragment datagrams belonging to same message have:
• A full IP header
• Identification field(ID) – same for all fragments.
• Total Length field - reflecting the fragment size.
• Fragment Offset field – different for all fragments, reflecting the start of the present fragment within the whole message,
specifies offset in multiples of 64 bits.
• MF Flag (more fragments) bit – set for all fragments except for the last fragment.
• When a datagram is created, the sending host stamps the datagram with an identification number, source and destination addresses.
When a router needs to fragment a datagram, each resulting datagram (that is, fragment) is stamped with the source address,
destination address, and identification number of the original datagram.
• Let’s assume, datagram of 4000 bytes wide (including the 20 byte IP header) needs to be sent over a link with an MTU of 1500 bytes.
Suppose original datagram has an identification number of 123. The following steps are taken at the router:
• Create First Fragment – Data (bytes) = 1480 (Total Length field = 1500 including the 20 byte header) , ID = 123 , Fragment offset = 0 (data should be inserted beginning at byte 0) , MF = 1 (more fragments follow).
• Create Second Fragment – Data (bytes) = 1480 , ID = 123 , Fragment offset = 185 (data should be inserted beginning at byte 1480) , MF = 1 (more fragments follow).
• Create Third Fragment – Data (bytes) = 1020 (3980 − 1480 − 1480) , ID = 123 , Fragment offset = 370 (data should be inserted beginning at byte 2960) , MF = 0 (this is the last fragment).

Spring Semester 2021 8
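The arithmetic in this example can be reproduced with a short Python sketch (an addition to the slides); it assumes a 20-byte header with no IP options, as in the example.

def fragments(total_len, mtu, ident):
    # Split the payload of a total_len-byte datagram for links with the given MTU.
    payload = total_len - 20                   # data bytes in the original datagram
    max_data = (mtu - 20) // 8 * 8             # data per fragment, rounded down to a multiple of 8
    frags, offset = [], 0
    while offset < payload:
        data = min(max_data, payload - offset)
        more = 1 if offset + data < payload else 0
        frags.append({'id': ident, 'data_bytes': data, 'offset': offset // 8, 'MF': more})
        offset += data
    return frags

for f in fragments(4000, 1500, 123):
    print(f)
# {'id': 123, 'data_bytes': 1480, 'offset': 0, 'MF': 1}
# {'id': 123, 'data_bytes': 1480, 'offset': 185, 'MF': 1}
# {'id': 123, 'data_bytes': 1020, 'offset': 370, 'MF': 0}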


IP Fragmentation & Reassembly
• To reassemble a datagram following steps are taken:
• The receiving device initializes a buffer where it can store the fragments of the message as they are
received.
• The receiving device sets up a timer for reassembly of the message.
• As fragments with the same ID arrive, their data fields are inserted in the proper position in the buffer
until the entire data field is reassembled, which is achieved when a contiguous set of data exists starting
with an Offset of zero and ending with data from a fragment with a false More Flag.
• The IP service does not guarantee delivery. If the timer for the reassembly expires with any of the
fragments missing, the message cannot be reconstructed. The already received fragments are discarded,
and an ICMP message is generated for the source host.
• Fragmentation/Reassembly creates significant overhead:
• Several datagrams transmitted per message, each one having full IP header.
• Complicates router and end systems which need to be designed to accommodate
fragmentation/reassembly.
• Upon loss of single fragment the whole message is possibly retransmitted by higher layers.

Spring Semester 2021 10


Internet Control Message Protocol
• IP's datagram delivery is connectionless, unreliable and unacknowledged i.e. datagrams are just sent
over the internetwork with no prior connection established, no assurance of their delivery, and no
acknowledgement sent to the sender that they arrived.
• ICMP provides a means for transferring messages from routers and other hosts to a host and provides
feedback about problems in the communication environment.
• ICMP lies above IP, as ICMP messages are carried inside IP datagrams. That is, ICMP messages are
carried as IP payload.
• ICMP message format is as follows:
• Type(8 bits) – specifies the type of ICMP message.
• Code(8 bits) - Used to specify parameters of the message.
• Checksum (16 bits) - Checksum of the entire ICMP message. This is the same checksum algorithm used
for IP.
• Parameters (32 bits) - Used to specify more lengthy parameters.

Type (8 ) Code (8) Checksum (16)


Data
Spring Semester 2021 11
Some ICMP Messages
ICMP Type Code Description
0 0 Echo Reply (to ping)
3 0 Destination network unreachable

3 1 Destination host unreachable


3 2 Destination protocol unreachable
3 3 Destination port unreachable
3 4 Fragment Required
3 6 Destination network unknown
3 7 Destination host unknown
4 0 Source Quench
8 0 Echo Request
9 0 Router Advertisement
10 0 Router Discovery
11 0 TTL expired
11 1 Fragment Reassembly Time Exceeded
Spring Semester 2021 12
Some ICMP Messages
• Source Quench - Generated by an IP router when it has to drop a packet because of congestion. This
message is sent to a source host, requesting that it reduce the rate at which it is sending traffic to the
internet destination(flow control)
• TTL expiration - generated by an IP router when it drops a packet because its TTL value reached zero.
• Fragment reassembly time exceed - Generated by destination when not all fragments of a message have
been received within timeout.
• Destination unreachable messages - These messages are generated when:
• Router finds that the cost to reach a non directly connected host is infinity (e.g. are link failure),
• Router could not deliver datagram to directly connected host.
• If the user protocol or some higher level service access point is unreachable.
• Router could not determine a next hop to a non directly connected host or network.
• Fragment Required - If a router finds that the MTU of the outgoing link for this packet is too small, so the
datagram must be fragmented, but the Don’t Fragment flag is set, the datagram is
discarded and this message is returned to the source.
• Echo Request & Echo Reply - provide a mechanism for testing that communication is possible between
entities.

Spring Semester 2021 13


IP Routing
• IP routing is the process of moving packets from one network to another network using routers.
• To accomplish this, a path or route through the network must be determined.
• It is possible that more than one route is available. Thus, a routing function must be
performed.
• Following requirements are imposed on the routing function:
• Correctness - computed routes should be valid paths that contain no loops.
• Simplicity - routing algorithms / protocols should be computationally simple and require only little
information exchange among routers.
• Robustness - a routing protocol must be able to cope with: link or station failures, newly established
links or stations, changes in link metrics , congestion situations by establishing new routes when old
ones become infeasible or are no longer optimal.
• Stability - a routing protocol should not recompute everything upon minor changes in the network.
• Fairness – all users should be treated in equal manner.
• Optimality – It is judged differently depending on the criteria. From the user's perspective, generated
routes should be short, fast and offer good throughput. From the provider's perspective, the network should
carry as many packets as possible.
• Efficiency - when a route between two nodes exists, the routing algorithm / protocol should be able to
find it.
Spring Semester 2021 14
IP Routing – Performance Criteria
• The selection of a route is based on following criterion:
• Number of Hops or Least Cost Criterion - Choose the minimum hop route (one that passes through the least
number of nodes) through the network. In least cost routing, a cost is associated with each link, and, for
any pair of attached stations, the route through the network that accumulates the least cost is chosen.
• Shortest path(fewest hops) from N1 to N6:
Nodes visited = 𝑁1 → 𝑁3 → 𝑁6
Cost = 5 + 5 = 10
• Least Cost path from N1 to N6:
Nodes visited = 𝑁1 → 𝑁4 → 𝑁5 → 𝑁6
Cost = 1 + 1 + 2 = 4

• Decision Time – refers to when the routing decisions are made, i.e. per individual packet, per session, or at
network configuration time.

Spring Semester 2021 15


IP Routing – Performance Criteria
• Decision Place - refers to which node or nodes in the network are responsible for the routing
decision. In distributed routing, each node has the responsibility of selecting an output link for
routing packets as they arrive. In centralized routing , the decision is made by some
designated node, such as a network control center. In source routing the routing decision is
made by the source station rather than by a network node and is then communicated to the
network.
• Network Information Source – refers to the information used for making routing decision such
as knowledge of the topology of the network, traffic load, and link cost. Some strategies use
no such information and manage to get packets through flooding. In distributed routing, the
individual node may make use of only local information, such as the cost of each outgoing
link. Each node might also collect information from adjacent (directly connected) nodes, such
as the amount of congestion experienced at that node. In centralized routing, the central
node typically makes use of information obtained from all nodes.
• Information update timing – refers to when information used in routing decision is updated
i.e. the information is never updated or it is updated periodically to enable the routing
decision to adapt to changing conditions. Thus the more frequently it is updated, the more
likely the network is to make good routing decisions.
Spring Semester 2021 16
Routing Strategies
• Four key strategies are:
• Fixed Routing - A single, permanent route is configured for each source destination pair of nodes in the
network. Either of the least cost routing algorithms can be used. The routes are fixed, or at least only
change when there is a change in the topology of the network. The advantage of fixed routing is its
simplicity, and it should work well in a reliable network with a stable load. Its disadvantage is its lack of
flexibility. It does not react to network congestion or failures.
• Flooding - This technique requires no network information. A packet is sent by a source node to every
one of its neighbors. At each node, an incoming packet is retransmitted on all outgoing links except for
the link on which it arrived. Flooding technique is highly robust and could be used to send emergency
messages. The principal disadvantage of flooding is the high traffic load that it generates, which is
directly proportional to the connectivity of the network.
• Random Routing - With random routing, a node selects only one outgoing path for retransmission of an
incoming packet. The outgoing link is chosen at random, excluding the link on which the packet arrived.
If all links are equally likely to be chosen, then a node may simply utilize outgoing links in a round robin
fashion. A probability can also be assigned to each outgoing link and link is selected based on that
probability. Like flooding, random routing requires the use of no network information.
• Adaptive Routing - The routing decisions are made change as conditions on the network change. The
principal conditions that influence routing decisions are node or link failure and congestion. For adaptive
routing to be possible, information about the state of the network must be exchanged among the nodes.
Spring Semester 2021 17
Routing Algorithm & Routing Protocols
• A network is modelled as a graph 𝐺 = (𝑁, 𝐸) which is a set 𝑁 of nodes and a collection E of
edges. In the context of network layer routing, the nodes in the graph represent routers and
the edges connecting these nodes represent the physical links between these routers.
• A host is directly attached to one router called as the default router or the first hop router.
Whenever a host sends a packet, the packet is transferred to its default router. The default
router of the source host is referred as the source router and the default router of the
destination host is referred as the destination router.
• The routers in an internet are responsible for receiving and forwarding packets through the
interconnected set of networks. Each router makes routing decision based on knowledge of
the topology and traffic/delay conditions of the network.
• The purpose of a routing algorithm is simple: given a set of routers, with links connecting the
routers, a routing algorithm finds a good path from source router to destination router.
• A routing protocol specifies how routers communicate with each other to distribute routing
information that enables them to select routes between two nodes.

Spring Semester 2021 18


Routing Tables
• Each router maintains a set of information that provides a mapping between different network IDs and the other
routers to which it is connected. This information is contained in a routing table.
• Each entry in the table is called a routing entry which provides information about one network.
• Each time a packet is received, the router checks its destination IP address against the routing entries in its table to
decide where to send the packet, and then sends it on its next hop.
• The routing table contains information not only about the networks directly connected to the router, but also
information that the router has learned about more distant networks.
• Common fields in routing table are:
• Destination IP address – it can be a host address or network address to which the packet is finally delivered.
• Next hop address – it is the address of the next hop router to which the packet is delivered.
• Outgoing Interface - used when forwarding the packet to the next hop or final destination.
• Flags - A flag telling whether destination IP is host or network; A flag telling whether next hop is a router or directly
attached network
• A routing table can be static or dynamic. Static routing table contains information that is entered manually. The
administrator enters the route for each destination into the table. When a table is created, it cannot update
automatically when there is a change in the network. The table must be manually altered by the administrator. A
dynamic routing table is updated periodically by using one of the dynamic routing protocols whenever there is a
change in the network.
Spring Semester 2021 19
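As a toy illustration of how such a table is used (an addition to the slides), the sketch below looks up a destination against a few entries; the addresses, next hops and interface names are made up, and the lookup uses a longest-prefix match, which generalises the classful lookup described here.

import ipaddress

routing_table = [
    # (destination network, next hop, outgoing interface) -- made-up example entries
    ('192.168.10.0/24', 'directly connected', 'eth0'),
    ('172.16.0.0/16',   '192.168.10.254',     'eth0'),
    ('0.0.0.0/0',       '192.168.10.1',       'eth1'),   # default route
]

def lookup(dest):
    ip = ipaddress.ip_address(dest)
    matches = [e for e in routing_table if ip in ipaddress.ip_network(e[0])]
    return max(matches, key=lambda e: ipaddress.ip_network(e[0]).prefixlen)

print(lookup('172.16.5.9'))   # ('172.16.0.0/16', '192.168.10.254', 'eth0')
print(lookup('8.8.8.8'))      # no more specific match, so the default route is chosen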
Questions?

Spring Semester 2021 21


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 8 – 2
Network Layer – Shortest Path Routing
Algorithms
Spring Semester 2021 1
Interior and Exterior Gateway Routing Protocols

• One routing protocol cannot handle the task of updating routing tables of all routers on the internet. For
this reason, an internet is divided into autonomous systems.
• An autonomous system (AS) exhibits the following characteristics:
• AS is a set of routers and networks managed by a single organization.
• An AS consists of a group of routers exchanging information via a common routing protocol.
• AS is connected that is, there is a path between any pair of nodes , except in the times of failure.
• Interior gateway routing protocol is used for passing routing information between routers within an AS.
A protocol used to pass routing information between routers in different ASs is referred to as an exterior
gateway routing protocol.
• AS are connected by linking a router in one AS to a router in another AS. An AS consists of a set of
routers with two different types of connectivity:
• Internal Routers - These routers in an AS connect only to other routers in the same AS. These run
interior gateway routing protocols.
• Border Routers – These routers in an AS connect both to routers within an AS and to routers in one or
more AS. These devices are responsible for passing traffic between the AS and the rest of the
internetwork. They run both interior and exterior routing protocols.

Spring Semester 2021 3


Internal Routing
External Routing

Spring Semester 2021 4


Shortest Path Routing Algorithms
• In shortest path routing, a path between the source and destination node is
chosen that has the least cost.
• These algorithms are also called least cost path routing algorithms. In such
algorithms, a cost is associated with each link.
• This link cost is a usually non-negative and proportional to link’s current traffic
load.
• The link cost is defined on both directions between each pair of nodes.
• Several least cost path routing algorithms have been developed for packet
switched networks. In particular, following two algorithms have been most
effective and widely used. They are:
• Dijkstra’s Algorithm
• Bellman Ford Algorithm
Spring Semester 2021 5
Shortest Path Routing Algorithms
• A network is modelled as a graph 𝐺 = (𝑁, 𝐸) which is a set 𝑁 of nodes and a
collection 𝐸 of edges.
• 𝑖 ∈ 𝑁 and 𝑗 ∈ 𝑁 refer to nodes / stations in the network.
• 𝑑𝑖,𝑗 is the direct link cost / metric between 𝑖 and 𝑗 , with:
• 0 ≤ 𝑑𝑖,𝑗 < ∞ when 𝑖 and 𝑗 are adjacent nodes.
• 𝑑𝑖,𝑗 = ∞ when 𝑖 and 𝑗 are non adjacent nodes.
• 𝐷𝑖,𝑗 represents the total cost of the least cost path from 𝑖 to 𝑗.
• 𝑁𝑖 represents the set of nodes adjacent to node 𝑖.

Spring Semester 2021 6


Dijkstra Algorithm
• Dijkstra’s algorithm can be stated as: Find the shortest paths from a given source node to all
other nodes by developing the paths in order of increasing path length.
• Dijkstra's algorithm is a centralized routing algorithm; that is, it takes the connectivity between
all nodes and all link costs as inputs, held at a central location.
• Dijkstra’s algorithm is iterative and has the property that after the 𝑘𝑡ℎ iteration of the
algorithm, the least cost paths are known to 𝑘 destination nodes.
• This algorithm consists of an initialization step followed by a loop. The number of times the
loop is executed is equal to the number of nodes in the network.
• Upon termination, the algorithm will have calculated the shortest paths from the source node
to every other node in the network.
• Dijkstra cannot handle negative metrics.
• It is greedy: in every situation it makes the choice that is currently the best, without regard to
future situations.

Spring Semester 2021 7


Dijkstra Algorithm
1. Define
𝑠 = source node
P 𝑣 = predecessor node(neighbor of 𝑣)
𝑁ሖ = set of visited nodes by the algorithm
𝑑𝑖,𝑗 = direct link cost between node 𝑖 to 𝑗
𝐷𝑖,𝑗 = total cost of the least cost path from 𝑖 to 𝑗.
2. Initialization
𝑁’ = {𝑠}
for all nodes 𝑣, if 𝑣 is a neighbor of 𝑠
then 𝐷𝑠,𝑣 =𝑑𝑠,𝑣
else 𝐷𝑠,𝑣 = ∞
3. Loop
find 𝑤 not in 𝑁’ such that 𝐷𝑠,𝑤 is a minimum
add 𝑤 to 𝑁’, update 𝐷𝑠,𝑣 for each neighbor 𝑣 of 𝑤 and not in 𝑁’
𝐷𝑠,𝑣 = min( 𝐷𝑠,𝑣 , 𝐷𝑠,𝑤 + 𝑑𝑤,𝑣 ) /* new cost to 𝑣 is either old cost to 𝑣 or known least path cost to 𝑤 plus cost
from 𝑤 to 𝑣 */
until 𝑁’ = 𝑁
Spring Semester 2021 8
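The pseudocode above can be turned into a short runnable Python sketch (an addition to the slides, keeping the same D, P and N' roles). The graph used here is a small hypothetical example with made-up link costs, not the figure on the following slides; the same function can be applied to that network by entering its link costs.

import math

def dijkstra(graph, s):
    # graph[u][v] = d_u,v (direct link cost); returns least costs D_s,v and predecessors P(v).
    D = {v: graph[s].get(v, math.inf) for v in graph if v != s}   # initialization step
    P = {v: s for v in graph[s]}                                  # predecessor of each neighbor of s
    visited = {s}                                                 # N'
    while len(visited) < len(graph):
        w = min((v for v in D if v not in visited), key=D.get)    # w not in N' with minimum D_s,w
        visited.add(w)                                            # add w to N'
        for v, d_wv in graph[w].items():                          # update each neighbor v of w not in N'
            if v not in visited and D[w] + d_wv < D[v]:
                D[v], P[v] = D[w] + d_wv, w
    return D, P

graph = {                      # hypothetical link costs d[u][v] (symmetric)
    'u': {'v': 2, 'x': 1},
    'v': {'u': 2, 'x': 3, 'w': 3},
    'x': {'u': 1, 'v': 3, 'w': 1},
    'w': {'v': 3, 'x': 1},
}
D, P = dijkstra(graph, 'u')
print(D)   # {'v': 2, 'x': 1, 'w': 2}
print(P)   # {'v': 'u', 'x': 'u', 'w': 'x'}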
Execution of Dijkstra Algorithm
Using Dijkstra’s algorithm, find the least cost path from node A to node B in the given network.

5
C D
1
1
3
2 1
2 2
8
2
2 2
A E B
1 3
3
4
7 1
6
F G
3

Spring Semester 2021 9


Execution of Dijkstra Algorithm
Step 1 & 2 - Define & Initialization
𝑠 = A
P 𝐴 = A
𝑁ሖ = {A}
for all nodes in the network,
𝐷𝐴,𝑐 =𝑑𝐴,𝐶 = 1 (Direct cost since it is a neighbor of A)
𝐷𝐴,𝐹 =𝑑𝐴,𝐹 = 1 (Direct cost since it is a neighbor of A)
𝐷𝐴,𝐸 = ∞ (Not directly connected)
𝐷𝐴,𝐷 = ∞ (Not directly connected)
𝐷𝐴,𝐺 = ∞ (Not directly connected)
𝐷𝐴,𝐵 = ∞ (Not directly connected)
Both nodes ‘C’ and ‘F’ have a cost of 1 from the source node. Either of these
nodes can be chosen; we choose node C for the next iteration.
Spring Semester 2021 10
Execution of Dijkstra Algorithm

Step  N'     D(A,C),P(C)  D(A,F),P(F)  D(A,E),P(E)  D(A,D),P(D)  D(A,G),P(G)  D(A,B),P(B)
1     {A}    1, A (AC)    1, A (AF)    ∞            ∞            ∞            ∞

[Figure: the example network of nodes A–G with link costs, repeated from the earlier slide.]
Spring Semester 2021 11
Execution of Dijkstra Algorithm
Step 3 - Loop
N′ = {A,C}
P 𝐶 = A (predecessor node of C) path : 𝐴 → 𝐶
for each neighbor of 𝐶 and not in 𝑁’ we update the cost
𝐷𝐴,𝐸 = min( 𝐷𝐴,𝐸 , 𝐷𝐴,𝐶 + 𝑑𝐶,𝐸 ) /* new cost to 𝐸 is either old cost to 𝐸 or known
least path cost to C plus cost from 𝐶 to 𝐸 */
𝐷𝐴,𝐸 = min ∞, 1 + 2 = 3
𝐷𝐴,𝐷 = min( 𝐷𝐴,𝐷 , 𝐷𝐴,𝐶 + 𝑑𝐶,𝐷 ) /* new cost to 𝐷 is either old cost to 𝐷 or
known least path cost to 𝐶 plus cost from 𝐶 to 𝐷 */
𝐷𝐴,𝐷 = min ∞, 1 + 5 = 6

Both nodes ‘C’ and ‘F’ have a cost of 1 from the source node. We chose node C
for the second iteration. Now we choose node ‘F’ for the third iteration and
continue step 3 of the algorithm until all the nodes have been visited and
included in N′.
Spring Semester 2021 12
Execution of Dijkstra Algorithm
Step  N'      D(A,C),P(C)  D(A,F),P(F)  D(A,E),P(E)  D(A,D),P(D)  D(A,G),P(G)  D(A,B),P(B)
1     {A}     1, A (AC)    1, A (AF)    ∞            ∞            ∞            ∞
2     {A,C}   1, A (AC)    1, A (AF)    3, C (ACE)   6, C (ACD)   ∞            ∞

[Figure: the example network of nodes A–G with link costs, repeated from the earlier slide.]
Spring Semester 2021 13
Execution of Dijkstra Algorithm
Third Iteration (Loop)
N′ = {A,C,F}
P 𝐹 = A (predecessor node of F) path : 𝐴 → F
for each neighbor of 𝐹 and not in 𝑁’ we update the cost
𝐷𝐴,𝐸 = min( 𝐷𝐴,𝐸 , 𝐷𝐴,𝐹 + 𝑑𝐹,𝐸 ) /* new cost to 𝐸 is either old cost to 𝐸 or known
least path cost to 𝐹 plus cost from 𝐹 to 𝐸 */
𝐷𝐴,𝐸 = min 3, 1 + 3 = 3
𝐷𝐴,𝐺 = min( 𝐷𝐴,𝐺 , 𝐷𝐴,𝐹 + 𝑑𝐹,𝐺 ) /* new cost to 𝐺 is either old cost to 𝐺 or known
least path cost to 𝐹 plus cost from 𝐹 to 𝐺 */
𝐷𝐴,𝐺 = min ∞, 1 + 6 = 7
Nodes ‘C’ and ‘F’ have a cost of 1 from the source node and have been added
to N′. Now we choose node ‘E’, which has a cost of 3 from the source node,
for the fourth iteration, and continue step 3 of the algorithm until all the
nodes have been visited and included in N′.

Spring Semester 2021 14


Execution of Dijkstra Algorithm
Step  N'        D(A,C),P(C)  D(A,F),P(F)  D(A,E),P(E)  D(A,D),P(D)  D(A,G),P(G)  D(A,B),P(B)
1     {A}       1, A (AC)    1, A (AF)    ∞            ∞            ∞            ∞
2     {A,C}     1, A (AC)    1, A (AF)    3, C (ACE)   6, C (ACD)   ∞            ∞
3     {A,C,F}   1, A (AC)    1, A (AF)    3, C (ACE)   6, C (ACD)   7, F (AFG)   ∞

[Figure: the example network of nodes A–G with link costs, repeated from the earlier slide.]

Spring Semester 2021 15
Execution of Dijkstra Algorithm
Fourth Iteration (Loop)
N′ = {A,C,F,E}
P 𝐸 = C (predecessor node of E) path : 𝐴 → C → E
for each neighbor of 𝐸 and not in 𝑁’ we update the cost
𝐷𝐴,𝐷 = min( 𝐷𝐴,𝐷 , 𝐷𝐴,𝐸 + 𝑑𝐸,𝐷 ) /* new cost to 𝐷 is either old cost to 𝐷 or
known least path cost to 𝐸 plus cost from 𝐸 to 𝐷 */
𝐷𝐴,𝐷 = min 6, 3 + 1 = 4
𝐷𝐴,𝐹 = min( 𝐷𝐴,𝐹 , 𝐷𝐴,𝐸 + 𝑑𝐸,𝐹 ) /* new cost to 𝐹 is either old cost to 𝐹 or known
least path cost to 𝐸 plus cost from 𝐸 to 𝐹 */
𝐷𝐴,𝐹 = min 1, 3 + 4 = 1
Nodes ‘C’, ‘F’ and ‘E’ have the minimum costs from the source node and have been
added to N′. Now we choose node ‘D’, which has a cost of 4 from the source
node, for the fifth iteration, and continue step 3 of the algorithm until all
the nodes have been visited and included in N′.

Spring Semester 2021 16


Execution of Dijkstra Algorithm
Step  N'          D(A,C),P(C)  D(A,F),P(F)  D(A,E),P(E)  D(A,D),P(D)   D(A,G),P(G)  D(A,B),P(B)
1     {A}         1, A (AC)    1, A (AF)    ∞            ∞             ∞            ∞
2     {A,C}       1, A (AC)    1, A (AF)    3, C (ACE)   6, C (ACD)    ∞            ∞
3     {A,C,F}     1, A (AC)    1, A (AF)    3, C (ACE)   6, C (ACD)    7, F (AFG)   ∞
4     {A,C,F,E}   1, A (AC)    1, A (AF)    3, C (ACE)   4, E (ACED)   7, F (AFG)   ∞

[Figure: the example network of nodes A–G with link costs, repeated from the earlier slide.]

Spring Semester 2021 17
Execution of Dijkstra Algorithm
Final Picture (all nodes have been added to N′)

Step  N'                 D(A,C),P(C)  D(A,F),P(F)  D(A,E),P(E)  D(A,D),P(D)   D(A,G),P(G)    D(A,B),P(B)
1     {A}                1, A (AC)    1, A (AF)    ∞            ∞             ∞              ∞
2     {A,C}              1, A (AC)    1, A (AF)    3, C (ACE)   6, C (ACD)    ∞              ∞
3     {A,C,F}            1, A (AC)    1, A (AF)    3, C (ACE)   6, C (ACD)    7, F (AFG)     ∞
4     {A,C,F,E}          1, A (AC)    1, A (AF)    3, C (ACE)   4, E (ACED)   7, F (AFG)     ∞
5     {A,C,F,E,D}        1, A (AC)    1, A (AF)    3, C (ACE)   4, E (ACED)   6, D (ACEDG)   7, D (ACEDB)
6     {A,C,F,E,D,G}      1, A (AC)    1, A (AF)    3, C (ACE)   4, E (ACED)   6, D (ACEDG)   7, D (ACEDB)
7     {A,C,F,E,D,G,B}    1, A (AC)    1, A (AF)    3, C (ACE)   4, E (ACED)   6, D (ACEDG)   7, D (ACEDB)

Spring Semester 2021 18
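For reference, feeding a graph with these link costs to the dijkstra sketch shown after the pseudocode reproduces the last row of the table. The edge list below is reconstructed from the costs used in the worked computations (an approximation of the original figure), and path_to is a hypothetical helper that walks the predecessor map back to the source.

# Link costs reconstructed from the worked computations above (approximate figure).
graph = {
    'A': {'C': 1, 'F': 1},
    'C': {'A': 1, 'D': 5, 'E': 2},
    'D': {'C': 5, 'E': 1, 'G': 2, 'B': 3},
    'E': {'C': 2, 'D': 1, 'F': 3},
    'F': {'A': 1, 'E': 3, 'G': 6},
    'G': {'D': 2, 'F': 6},
    'B': {'D': 3},
}

def path_to(P, source, dest):
    """Follow the predecessor map P back from dest to source."""
    hops = [dest]
    while hops[-1] != source:
        hops.append(P[hops[-1]])
    return list(reversed(hops))

D, P = dijkstra(graph, 'A')        # uses the dijkstra() sketch shown earlier
print(D['B'])                      # 7
print(path_to(P, 'A', 'B'))        # ['A', 'C', 'E', 'D', 'B']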


Dijkstra’s Algorithm - Performance
• When this algorithm terminates, we have, for each node, its predecessor along the least cost
path from the source node.
• For each predecessor, we also have its predecessor, and so in this manner we can construct
the entire path from the source to all destinations.
• What is the computational complexity of this algorithm?
• If we have 𝑛 nodes (excluding the source), then in the first iteration we need to search
through all 𝑛 nodes (those not in N′) to determine the node that has the minimum cost.
• In the second iteration, we need to check 𝑛 – 1 nodes to determine the minimum cost; in the
third iteration 𝑛 – 2 nodes, and so on.
• Overall, the total number of nodes we need to search through over all the iterations is
𝑛(𝑛 + 1)/2.
• So this algorithm has a worst case complexity of O(n²).

Spring Semester 2021 19


Dijkstra Algorithm - Performance
• Dijkstra’s algorithm requires that each node must have complete topological information about
the network. That is, each node must know the link costs of all links in the network. Thus, for
this algorithm, information must be exchanged with all other nodes.
• The algorithm converges under static conditions of topology, and link costs. If the link costs
change over time, the algorithm will attempt to catch up with these changes.

Spring Semester 2021 20


Questions?

Spring Semester 2021 22


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 9 – 1
Network Layer – Routing Protocols (2)

Spring Semester 2021 1


Routing Protocols - Basics
• The routers in an internet are responsible for receiving and forwarding packets through the interconnected set
of networks.
• Each router makes routing decision based on knowledge of the topology and traffic/delay conditions of the
internet.
• To make dynamic routing decisions, routers exchange routing information using special routing protocols.
• Routing protocols can be either interior routing protocols or exterior routing protocols. Interior routing
protocols are used to share routing information within an autonomous system; each AS may use a different
interior routing protocol because the system is autonomous. Exterior routing protocols convey routing data
between autonomous systems; each AS must use the same exterior protocol to ensure that they can
communicate.
• Another key differentiation of routing protocols is on the basis of the algorithms and metrics they use. An
algorithm refers to a method that the protocol uses for determining the best route between any pair of
networks, and for sharing routing information between routers. A metric is a measure of ‘cost’ that is used to
assess the efficiency of a particular route.
• Routing protocols employ two of the most commonly used routing algorithms for gathering and using
routing information. They are:
• Distance vector routing algorithm , and
• Link state routing algorithm.
Spring Semester 2021 3
Distance Vector Routing
• Distance vector(DV) routing algorithm, also called a Bellman Ford algorithm, is one where routes are
selected based on the distance between networks.
• The distance metric is simple: it is usually the number of ‘hops’, or routers, between the networks.
• Distance vector routing requires that each node (router or host that implements the routing protocol)
exchange information with its neighboring nodes. Two nodes are said to be neighbors if they are both
directly connected to the same network.
• Each router sends a distance vector to all of its neighbors, and that vector contains the estimated path
cost to all networks in the configuration.
• These routers then update their tables and send to their neighbors. This causes distance information to
propagate across the internetwork, so that eventually each router obtains distance information about all
networks on the internet.
• Furthermore, when there is a significant change in a link cost or when a link is unavailable, it may take a
considerable amount of time for this information to propagate through the internet.
• The distance vector algorithm is iterative and distributed. It is distributed in the sense that each node
receives some information from one or more of its directly attached neighbors, performs a calculation,
and then distributes the results of its calculation back to its neighbors. It is iterative in the sense that
the process continues until no more information is exchanged between neighbors.
Spring Semester 2021 4
Distance Vector Routing
• In DV algorithm each node 𝑥 begins with 𝐷𝑥 (𝑦), an estimate of the cost of the least cost path from itself
to node 𝑦, for all nodes in 𝑁.
• Let 𝐷𝑥 = [𝐷𝑥 (𝑦): 𝑦 𝑖𝑛 𝑁] be node 𝑥 distance vector, which is the vector of cost estimates from 𝑥 to all
other nodes, 𝑦, in 𝑁.
• With the DV algorithm, each node 𝑥 maintains the following routing information:
• For each neighbor 𝑣, the cost 𝑐(𝑥, 𝑣) from 𝑥 to directly connected neighbor, 𝑣
• Node 𝑥’𝑠 distance vector, that is, 𝐷𝑥 = [𝐷𝑥 (𝑦): 𝑦 𝑖𝑛 𝑁] , containing 𝑥’𝑠 estimate of its cost to all
destinations, 𝑦, in 𝑁.
• The distance vectors of each of its neighbors, that is, 𝐷𝑣 = [𝐷𝑣 (𝑦): 𝑦 𝑖𝑛 𝑁] for each neighbor 𝑣 of 𝑥.
• In the distributed algorithm, from time to time, each node sends a copy of its distance vector to each of
its neighbors. When a node 𝑥 receives a new distance vector from any of its neighbors 𝑣, it saves 𝑣’𝑠
distance vector, and then uses the Bellman Ford equation to update its own distance vector as follows:
D_x(y) = min_v { c(x,v) + D_v(y) }    for each node y in N

Spring Semester 2021 5


Distance Vector Algorithm
At each node, x:

Initialization:
    for all destinations y in N:
        D_x(y) = c(x,y)            /* if y is not a neighbor then c(x,y) = ∞ */
    for each neighbor w
        D_w(y) = ? for all destinations y in N
    for each neighbor w
        send distance vector D_x = [D_x(y): y in N] to w

loop
    wait (until there is a change in link cost to some neighbor w or until a distance
          vector from some neighbor w is received)
    for each y in N:
        D_x(y) = min_v { c(x,v) + D_v(y) }
    if D_x(y) changed for any destination y
        send distance vector D_x = [D_x(y): y in N] to all neighbors
forever
Spring Semester 2021 6
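The following Python sketch is a synchronous, simplified simulation of this algorithm for the three node example used on the next slides (nodes x, y, z with link costs 2, 1 and 7); the function and variable names are assumptions made for the illustration.

import copy

# Symmetric link costs of the three node example: c(x,y) = 2, c(y,z) = 1, c(x,z) = 7.
nodes = ['x', 'y', 'z']
costs = {('x', 'y'): 2, ('y', 'z'): 1, ('x', 'z'): 7}

def c(a, b):
    """Direct link cost between a and b (0 to itself, infinity if not adjacent)."""
    if a == b:
        return 0
    return costs.get((a, b), costs.get((b, a), float('inf')))

# Initialization: each node knows only the direct costs to the other nodes.
D = {n: {m: c(n, m) for m in nodes} for n in nodes}

def dv_round(D):
    """One synchronous round: every node applies the Bellman Ford equation
    D_x(y) = min over neighbors v of { c(x,v) + D_v(y) }, using the vectors
    its neighbors advertised in the previous round."""
    advertised = copy.deepcopy(D)
    changed = False
    for x in nodes:
        for y in nodes:
            if y == x:
                continue
            best = min(c(x, v) + advertised[v][y] for v in nodes if v != x)
            if best != D[x][y]:
                D[x][y] = best
                changed = True
    return changed

rounds = 0
while dv_round(D):
    rounds += 1
print(rounds, D['x'])   # converges quickly: D_x = {'x': 0, 'y': 2, 'z': 3}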
Execution of Distance Vector Algorithm
Initial State

[Figure: three node network — link costs c(x,y) = 2, c(y,z) = 1, c(x,z) = 7.]

Node x table (D_x):
            x   y   z
   from x   0   2   7
   from y   ∞   ∞   ∞
   from z   ∞   ∞   ∞

Node y table (D_y):
            x   y   z
   from x   ∞   ∞   ∞
   from y   2   0   1
   from z   ∞   ∞   ∞

Node z table (D_z):
            x   y   z
   from x   ∞   ∞   ∞
   from y   ∞   ∞   ∞
   from z   7   1   0

D_x(x) = 0, D_x(y) = 2, D_x(z) = 7
D_y(x) = 2, D_y(y) = 0, D_y(z) = 1
D_z(x) = 7, D_z(y) = 1, D_z(z) = 0
Spring Semester 2021 7
Execution of Distance Vector Algorithm
• In the initial routing tables for each node each row is a distance vector—
specifically, each node’s routing table includes its own distance vector and that
of each of its neighbors.
• Because at initialization node 𝑥 has not received anything from node 𝑦 or 𝑧,
the entries in the second and third rows are initialized to infinity.
• After initialization, each node sends its distance vector to each of its two
neighbors. After receiving the updates, each node recomputes its own
distance vector.

Spring Semester 2021 8


Execution of Distance Vector Algorithm
Nodes y and z send their distance vectors to x; x recomputes:
D_x(x) = 0
D_x(y) = min( c(x,y) + D_y(y), c(x,z) + D_z(y) ) = min(2 + 0, 7 + 1) = 2
D_x(z) = min( c(x,y) + D_y(z), c(x,z) + D_z(z) ) = min(2 + 1, 7 + 0) = 3

Node x table (D_x):
            x   y   z
   from x   0   2   3
   from y   2   0   1
   from z   7   1   0

Nodes x and z send their distance vectors to y; y recomputes:
D_y(x) = min( c(y,x) + D_x(x), c(y,z) + D_z(x) ) = min(2 + 0, 1 + 7) = 2
D_y(y) = 0
D_y(z) = min( c(y,z) + D_z(z), c(y,x) + D_x(z) ) = min(1 + 0, 2 + 7) = 1

Node y table (D_y):
            x   y   z
   from x   0   2   7
   from y   2   0   1
   from z   7   1   0

Nodes x and y send their distance vectors to z; z recomputes:
D_z(x) = min( c(z,x) + D_x(x), c(z,y) + D_y(x) ) = min(7 + 0, 1 + 2) = 3
D_z(y) = min( c(z,y) + D_y(y), c(z,x) + D_x(y) ) = min(1 + 0, 7 + 2) = 1
D_z(z) = 0

Node z table (D_z):
            x   y   z
   from x   0   2   7
   from y   2   0   1
   from z   3   1   0

Spring Semester 2021 9
Execution of Distance Vector Algorithm
• After the nodes recompute their distance vectors, they again send their
updated distance vectors to their neighbors (if there has been a change).
• Only nodes 𝑥 and 𝑧 send updates and node 𝑦’𝑠 distance vector didn’t change
so node 𝑦 doesn’t send an update.
• After receiving the updates, the nodes then recompute their distance vectors
and update their routing tables.

Spring Semester 2021 10


Execution of Distance Vector Algorithm
Node z sends its updated distance vector to x; x recomputes:
D_x(x) = 0
D_x(y) = min( c(x,y) + D_y(y), c(x,z) + D_z(y) ) = min(2 + 0, 7 + 1) = 2
D_x(z) = min( c(x,y) + D_y(z), c(x,z) + D_z(z) ) = min(2 + 1, 7 + 0) = 3

Node x table (D_x):
            x   y   z
   from x   0   2   3
   from y   2   0   1
   from z   3   1   0

Node x sends its updated distance vector to z; z recomputes:
D_z(x) = min( c(z,x) + D_x(x), c(z,y) + D_y(x) ) = min(7 + 0, 1 + 2) = 3
D_z(y) = min( c(z,y) + D_y(y), c(z,x) + D_x(y) ) = min(1 + 0, 7 + 2) = 1
D_z(z) = 0

Node z table (D_z):
            x   y   z
   from x   0   2   3
   from y   2   0   1
   from z   3   1   0

Spring Semester 2021 11


Execution of Distance Vector Algorithm
• The process of receiving updated distance vectors from neighbors,
recomputing routing table entries, and informing neighbors of changed costs
of the least cost path to a destination continues until no update messages are
sent.
• At this point, since no update messages are sent, no further routing table
calculations will occur; the algorithm has converged and each node simply
waits until a link cost changes.
• Routing tables for each node will be:

Node X                        Node Y                        Node Z
Dst  Cost  Outgoing Link      Dst  Cost  Outgoing Link      Dst  Cost  Outgoing Link
X    0     -                  X    2     Y-X                X    3     Z-Y
Y    2     X-Y                Y    0     -                  Y    1     Z-Y
Z    3     X-Y                Z    1     Y-Z                Z    0     -

Spring Semester 2021 12


Distance Vector Algorithm – Link Cost Change
• When a node running the DV algorithm detects a change in the link cost from
itself to a neighbor , it updates its distance vector and, if there’s a change in
the cost of the least cost path, informs its neighbors of its new distance
vector.
• Before the link cost changes, the routing table at each node is:

            x   y   z
   from x   0   4   5
   from y   4   0   1
   from z   5   1   0

• Now the cost of the link between x and y changes from 4 to 1.

[Figure: three node network with link costs c(x,y) = 4 → 1, c(y,z) = 1, c(x,z) = 50.]

Spring Semester 2021 13


Distance Vector Algorithm – Link Cost Change
• At time t0, y detects the link cost change, updates its distance vector,
and informs its neighbors of this change.

Node y table (D_y):
            x   y   z
   from x   0   4   5
   from y   1   0   1
   from z   5   1   0

• At time t1, z and x receive the update from y and update their tables.

D_x(x) = 0
D_x(y) = min( c(x,y) + D_y(y), c(x,z) + D_z(y) ) = min(1 + 0, 50 + 1) = 1
D_x(z) = min( c(x,y) + D_y(z), c(x,z) + D_z(z) ) = min(1 + 1, 50 + 0) = 2

Node x table (D_x):
            x   y   z
   from x   0   1   2
   from y   1   0   1
   from z   5   1   0

D_z(x) = min( c(z,x) + D_x(x), c(z,y) + D_y(x) ) = min(50 + 0, 1 + 1) = 2
D_z(y) = min( c(z,y) + D_y(y), c(z,x) + D_x(y) ) = min(1 + 0, 50 + 1) = 1
D_z(z) = 0

Node z table (D_z):
            x   y   z
   from x   0   4   5
   from y   1   0   1
   from z   2   1   0

Spring Semester 2021 14
Distance Vector Algorithm – Link Cost Change
• At time t2, x and z send their updated distance vectors to their neighbors,
which then recompute their routing table entries.

D_x(x) = 0
D_x(y) = min( c(x,y) + D_y(y), c(x,z) + D_z(y) ) = min(1 + 0, 50 + 1) = 1
D_x(z) = min( c(x,y) + D_y(z), c(x,z) + D_z(z) ) = min(1 + 1, 50 + 0) = 2

Node x table (D_x):
            x   y   z
   from x   0   1   2
   from y   1   0   1
   from z   2   1   0

D_y(x) = min( c(y,x) + D_x(x), c(y,z) + D_z(x) ) = min(1 + 0, 1 + 2) = 1
D_y(y) = 0
D_y(z) = min( c(y,z) + D_z(z), c(y,x) + D_x(z) ) = min(1 + 0, 1 + 2) = 1

Node y table (D_y):
            x   y   z
   from x   0   1   2
   from y   1   0   1
   from z   2   1   0

D_z(x) = min( c(z,x) + D_x(x), c(z,y) + D_y(x) ) = min(50 + 0, 1 + 1) = 2
D_z(y) = min( c(z,y) + D_y(y), c(z,x) + D_x(y) ) = min(1 + 0, 50 + 1) = 1
D_z(z) = 0

Node z table (D_z):
            x   y   z
   from x   0   1   2
   from y   1   0   1
   from z   2   1   0
Spring Semester 2021 15
Distance Vector Algorithm – Link Cost Change
• None of the routing entries have been updated and updates will not be sent to
neighbors. Thus two iterations are required for the DV algorithm to converge.
The good news about the decreased cost between 𝑥 and 𝑦 has propagated
quickly through the network.
• Before the link cost changes, the routing table at each node is:

            x   y   z
   from x   0   4   5
   from y   4   0   1
   from z   5   1   0

• Now consider that the cost of the link between x and y has increased from 4 to 60.

[Figure: three node network with link costs c(x,y) = 4 → 60, c(y,z) = 1, c(x,z) = 50.]

Spring Semester 2021 16


Distance Vector Algorithm – Link Cost Change
• At time t0, y detects the link cost change and updates its distance vector:

D_y(x) = min( c(y,x) + D_x(x), c(y,z) + D_z(x) ) = min(60 + 0, 1 + 5) = 6
D_y(y) = 0
D_y(z) = min( c(y,z) + D_z(z), c(y,x) + D_x(z) ) = min(1 + 0, 60 + 5) = 1

Node y table (D_y):
            x   y   z
   from x   0   4   5
   from y   6   0   1
   from z   5   1   0

• With the global view of the network, we can see that this new cost via z is wrong.
• But the only information node 𝑦 has is that its direct cost to 𝑥 is 60 and that 𝑧 has last told
𝑦 that 𝑧 could get to 𝑥 with a cost of 5.
• So in order to get to 𝑥, 𝑦 would now route through 𝑧, fully expecting that 𝑧 will be able to
get to 𝑥 with a cost of 5.
• As of 𝑡1 we have a routing loop—in order to get to 𝑥, 𝑦 routes through 𝑧, and 𝑧 routes
through 𝑦.
• A packet destined for 𝑥 arriving at 𝑦 or 𝑧 at 𝑡1 will bounce back and forth between these
two nodes forever (or until the forwarding tables are changed). Y
60 1
4
Spring Semester 2021 X 50 17 Z
Distance Vector Algorithm – Link Cost Change
• At time t1, y informs its neighbors of its new distance vector.
• Sometime after t1, z receives y's new distance vector, which indicates that
y's minimum cost to x is 6.
• z knows it can get to y with a cost of 1 and hence computes a new least
cost to x:

D_z(x) = min( c(z,x) + D_x(x), c(z,y) + D_y(x) ) = min(50 + 0, 1 + 6) = 7

Node z table (D_z):
            x   y   z
   from x   0   4   5
   from y   6   0   1
   from z   7   1   0

• Since z's least cost to x has increased, it then informs y of its new distance
vector at t2.
• In a similar manner, after receiving z's new distance vector, y determines:

D_y(x) = min( c(y,x) + D_x(x), c(y,z) + D_z(x) ) = min(60 + 0, 1 + 7) = 8

Node y table (D_y):
            x   y   z
   from x   0   4   5
   from y   8   0   1
   from z   7   1   0

and sends z its distance vector. z then determines D_z(x) = 9 and sends y its
distance vector, and so on.

Spring Semester 2021 18


Distance Vector Algorithm – Link Cost Change
• The loop will persist, i.e., message exchanges between 𝑦 and 𝑧 will continue, until 𝑧 eventually computes the cost of
its path to 𝑥 via 𝑦 to be greater than 50.
• At this point, 𝑧 will determine that its least cost path to 𝑥 is via its direct connection to 𝑥. 𝑦 will then
route to 𝑥 via 𝑧.
• The bad news about the increase in link cost has indeed traveled slowly. This problem is
referred to as the count to infinity problem.

Spring Semester 2021 19
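A sketch of how slowly this bad news spreads, using the same simulation style as the earlier distance vector sketch: the three node example is restarted from its converged state after c(x,y) jumps to 60, and the estimated costs to x are printed round by round (names and structure are assumptions made for the illustration).

import copy

nodes = ['x', 'y', 'z']
costs = {('x', 'y'): 60, ('y', 'z'): 1, ('x', 'z'): 50}   # after the x-y cost jumps to 60

def c(a, b):
    if a == b:
        return 0
    return costs.get((a, b), costs.get((b, a), float('inf')))

# Start from the state that had converged for the old cost c(x,y) = 4.
D = {'x': {'x': 0, 'y': 4, 'z': 5},
     'y': {'x': 4, 'y': 0, 'z': 1},
     'z': {'x': 5, 'y': 1, 'z': 0}}

rounds = 0
while True:
    advertised = copy.deepcopy(D)
    changed = False
    for a in nodes:
        for b in nodes:
            if b == a:
                continue
            best = min(c(a, v) + advertised[v][b] for v in nodes if v != a)
            if best != D[a][b]:
                D[a][b] = best
                changed = True
    if not changed:
        break
    rounds += 1
    print(rounds, D['y']['x'], D['z']['x'])   # watch D_y(x) and D_z(x) creep upward

# The estimates climb in small steps until z's direct link to x (cost 50) finally
# wins, leaving D_z(x) = 50 and D_y(x) = 51 only after dozens of exchanges.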


Link State Algorithm
• Link-state routing is designed to overcome the drawbacks of distance vector routing.
• When a router is initialized, it determines the link cost on each of its network interfaces.
The router then advertises this set of link costs(Link state advertisements) to all other
routers in the internet topology, not just neighboring routers.
• Every router monitors its link costs and whenever there is a significant change (link cost
increases or decreases substantially, a new link is created, an existing link becomes
unavailable), the router again advertises its set of link costs to all other routers in the
configuration.
• Each router receives the link costs of all routers in the configuration and thus can construct
the topology of the entire configuration and then calculate the shortest path to each
destination network. In practice, Dijkstra’s algorithm is used to calculate the shortest path.
The router’s routing table lists the first hop to each destination.
• Because the router has a representation of the entire network, it does not use a distributed
version of a routing algorithm, as is done in distance vector routing.
• Link state algorithms adapt dynamically to changing internetwork conditions, and also allow
routes to be selected based on more realistic metrics of cost than simply the number of
hops between networks.
Spring Semester 2021 20
LS Algorithm VS DV Algorithm
• In the DV algorithm, each node talks to only its directly connected
neighbors, and provides its neighbors with least cost estimates from itself to
all the nodes in the network.
• In the LS algorithm, each node talks with all other nodes , but it tells them
only the costs of its directly connected links.
• LS requires each node to know the cost of each link in the network. Also,
whenever a link cost changes, the new link cost must be sent to all nodes.
The DV algorithm requires message exchanges between directly connected
neighbors at each iteration.
• LS is an O(N²) algorithm, whereas the DV algorithm can converge slowly and
can have routing loops while it is converging. DV also suffers
from the count to infinity problem.

Spring Semester 2021 21


Questions?

Spring Semester 2021 23


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 9 – 2
Network Layer – Interior Gateway
Routing Protocols - RIP
Spring Semester 2021 1
Interior Gateway Routing Protocols
• An Interior gateway routing protocol is used to determine how routing is performed within an
autonomous system (AS). These routing protocols are also known as Intra-AS routing
protocols.
• Two most popular routing protocols that have been used extensively for routing within an
autonomous system in the Internet are :
• Routing Information Protocol (RIP), and
• Open Shortest Path First (OSPF).

Spring Semester 2021 3


Administrative Distance
• The administrative distance (AD) is used to rate the trustworthiness of routing information
received on a router from a neighbor router.
• An administrative distance is an integer from 0 to 255, where 0 is the most trusted and 255
means no traffic will be passed via this route.
• If multiple routing protocols are configured (e.g. static route, RIP, EIGRP, OSPF) on a router's
interface and it receives two updates listing the same remote network, the first thing the
router checks is the AD. If one of the advertised routes has a lower AD than the other, then
the route with the lowest AD will be placed in the routing table.
• If both advertised routes to the same network have the same AD, then routing protocol
metrics (such as hop count or bandwidth of the lines) will be used to find the best path to the
remote network. The advertised route with the lowest metric will be placed in the routing
table. But if both advertised routes have the same AD as well as the same metrics, then the
routing protocol will load balance to the remote network (which means that it sends packets
down each link).

Spring Semester 2021 4
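As a rough sketch of this selection order (not any vendor's actual implementation), the function below compares candidate routes to the same prefix first by AD and then by metric; the AD values are the commonly quoted Cisco defaults.

# Commonly quoted default administrative distances (lower is more trusted).
DEFAULT_AD = {'connected': 0, 'static': 1, 'eigrp': 90, 'ospf': 110, 'rip': 120}

def best_routes(candidates):
    """candidates: list of (protocol, metric, next_hop) tuples for the same prefix.
    Returns all routes tied on (AD, metric); more than one result means the
    router would load balance across them."""
    key = lambda r: (DEFAULT_AD[r[0]], r[1])
    lowest = min(map(key, candidates))
    return [r for r in candidates if key(r) == lowest]

# The same remote network learned via RIP and via OSPF: OSPF wins on AD alone.
print(best_routes([('rip', 3, '10.0.0.1'), ('ospf', 20, '10.0.0.2')]))
# Two RIP routes with equal hop count: both are kept and traffic is load balanced.
print(best_routes([('rip', 2, '10.0.1.1'), ('rip', 2, '10.0.2.1')]))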


Routing Information Protocol - Basics
• The Routing Information Protocol (RIP) was one of the first interior routing protocols used in TCP/IP. RIP
is a distance vector protocol and uses Bellman Ford routing algorithm.
• In distance vector routing algorithm, each router passes complete routing table contents to neighboring
routers, which then combine the received routing table entries with their own routing tables to complete
the router’s routing table.
• This is called routing by rumor because a router receiving an update from a neighbor router believes the
information about remote networks without actually finding out for itself.
• RIP uses only hop count to determine the best path to a network. If RIP finds more than one link with
the same hop count to the same remote network, it will automatically perform round-robin load
balancing.
• RIP has a maximum allowable hop count of 15, meaning that a hop count of 16 is defined as
infinity and considered unreachable. Thus, RIP works well in small networks, but it's inefficient on
large networks with a large number of routers installed.
• RIP version 1 uses only classful routing, which means that all devices in the network must use the same
subnet mask. This is because RIP version 1 doesn’t send updates with subnet mask information. RIP
version 2 provides classless routing and does send subnet mask information with the route updates.
• Default AD of RIP is 120.
Spring Semester 2021 5
RIP Routing Information
• Like any routing protocol, the job of RIP is to provide a mechanism for exchanging information about
routes so routers can keep their routing tables up to date.
• Each router in an RIP internetwork keeps track of all the networks in its routing table. For each network
or host, following information is included:
• The address of the network or host.
• The distance from that router to the network or host.
• The first hop for the route: the device to which datagrams must be sent first to eventually get to the
destination network or host.
• Routing information is propagated between routers in RIP approximately every 30 seconds using a RIP
response message. RIP response messages are also known as RIP advertisements.
• This RIP response message sent by a router specifies what networks it can reach, and how many hops
to reach them. Other routers directly connected to it know that they can then reach those networks
through that router at a cost of one additional hop.
• So if router 𝐴 sends a message saying it can reach network 𝑋 for a cost of 𝑁 hops, each other router
that connects directly to 𝐴 can reach network 𝑋 for a cost of 𝑁 + 1 hops. It will put that information into
its routing table, unless it knows of an alternate route through another router that has a lower cost.

Spring Semester 2021 6
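A minimal sketch of this "cost of one additional hop" rule, assuming a simple in-memory routing table keyed by destination network; RIP's real message format, timers and validation are omitted.

RIP_INFINITY = 16   # a hop count of 16 means unreachable

# routing table: destination network -> (metric in hops, next hop)
table = {'192.168.10.0/24': (1, 'directly connected')}

def process_rip_response(table, advertising_router, advertised_routes):
    """advertised_routes: dict of destination -> hop count as sent by the neighbor.
    A route learned from a neighbor costs one more hop than the neighbor reports."""
    for dest, hops in advertised_routes.items():
        new_metric = min(hops + 1, RIP_INFINITY)
        current = table.get(dest)
        # Install the route if it is new, better, or an update from the current next hop.
        if (current is None or new_metric < current[0]
                or current[1] == advertising_router):
            table[dest] = (new_metric, advertising_router)

process_rip_response(table, '10.1.1.2', {'172.16.0.0/16': 2, '192.168.10.0/24': 5})
print(table)   # 172.16.0.0/16 installed at 3 hops; the 1 hop connected route is kept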


RIP Timers
• RIP uses four different kinds of timers to regulate its performance:
• Route update timer - Sets the interval between periodic routing updates in
which the router sends a complete copy of its routing table out to all
neighbors.
• This process ensures that route information is regularly sent around the
internet, so routers are always kept up to date about routes.
• The default value of update timer is 30 seconds.

Spring Semester 2021 7


RIP Timers
• Route invalid timer – When a router receives routing information and enters it into its routing
table, that information cannot be considered valid indefinitely.
• Route invalid timer determines the length of time that must elapse before a router determines
that a route has become invalid.
• Whenever the router receives a RIP Response with information about that route, the route is
considered refreshed and its invalid timer is reset. As long as the route continues to be
refreshed, the timer will never expire.
• If, however, RIP Responses containing that route stop arriving, the timer will eventually
expire. When this happens, the route is marked for deletion, by setting the distance for the
route to 16 (which indicates an unreachable network).
• It will come to this conclusion if it hasn’t heard any updates about a particular route for that
period. When that happens, the router will send out updates to all its neighbors letting them
know that the route is invalid.
• The default value for the invalid timer is usually 180 seconds. This allows several periodic
updates of a route to be missed before a router will conclude that the route is no longer
reachable.
Spring Semester 2021 8
RIP Timers
• Route flush timer - When a route is marked for deletion, route flush timer is also started.
• This sets the time between a route becoming invalid and its removal from the routing
table. The reason for using this two stage removal method is to give the router (that
declared the route no longer reachable) a chance to propagate this information to other
routers.
• Until the route flush timer expires, the router will include that route, with the unreachable
metric of 16 hops, in its own RIP Responses, so other routers are informed of the
problem with that route. When the timer expires the route is deleted.
• If during the route flush timer period, a new RIP Response for the route is received, then
the deletion process is aborted, route flush timer is cleared, the route is marked as valid
again, and a new invalid timer starts.
• The default value for this timer is 240(180+60) seconds. The value of the route invalid
timer must be less than that of the route flush timer. This gives the router enough time to
tell its neighbors about the invalid route before the local routing table is updated.

Spring Semester 2021 9


RIP Timers
• Hold down timer - The hold down feature works by having each
router start a timer when they first receive information about a
network that is unreachable.
• This sets the amount of time during which routing information is
suppressed. Routes will enter into the hold down state when an
update packet is received that indicates the route is unreachable.
• This continues either until an update packet is received with a
better metric, the original route comes back up, or the hold down
timer expires.
• The default value for this timer is 180 seconds.

Spring Semester 2021 10
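The interplay of these timers can be summarised in a small sketch; the constants are the default values quoted above, and the per route state classification is a simplification of what a real RIP implementation maintains.

import time

UPDATE_INTERVAL = 30     # periodic full-table advertisement
INVALID_AFTER = 180      # no refresh for this long -> route marked invalid (metric 16)
FLUSH_AFTER = 240        # invalid + 60 seconds -> route removed entirely
HOLD_DOWN = 180          # ignore worse news about a route for this long

class Route:
    def __init__(self, metric, next_hop):
        self.metric = metric
        self.next_hop = next_hop
        self.last_refresh = time.time()   # reset whenever a RIP Response refreshes it

    def state(self, now=None):
        """Classify the route according to how long since it was last refreshed."""
        age = (now or time.time()) - self.last_refresh
        if age >= FLUSH_AFTER:
            return 'flushed'              # remove from the routing table
        if age >= INVALID_AFTER:
            return 'invalid'              # keep advertising it with metric 16
        return 'valid'

r = Route(metric=2, next_hop='10.1.1.2')
print(r.state(now=r.last_refresh + 60))    # 'valid'   (refreshed within 180 s)
print(r.state(now=r.last_refresh + 200))   # 'invalid' (marked unreachable, metric 16)
print(r.state(now=r.last_refresh + 250))   # 'flushed' (deleted after 240 s)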


RIP Limitations & Problems
• The simplicity of the Routing Information Protocol is often considered as the main reason for
its popularity but it also has some limitations and weakness. They are:
• Slow Convergence – In distance vector algorithm, all routers share all their routing
information regularly so that all routers eventually end up with the same information about
the location of networks and which are the best routes to use to reach them. This is called
convergence.
• RIP algorithm is rather slow to achieve convergence. It takes a long time for all routers to get
the same information, and in particular, it takes a long time for information about topology
changes to propagate.
• Consider the worst case situation of two networks separated by 15 routers. Since routers send
RIP Response messages only every 30 seconds, a change to one of this pair of networks
might not be seen by the router nearest the other one until many minutes have elapsed.
• The slow convergence problem is even more pronounced when it comes to the propagation of
route failures. Failure of a route is only detected through the expiration of the 180 second
invalid timer, so that adds up to three minutes more delay before convergence can even
begin.
Spring Semester 2021 11
RIP Limitations & Problems
• Consider the given example network
• Assume that all nodes are switched on at the same time t = 0
• Immediately after being switched on, each node informs its neighbors about
its presence.
• Each node transmits its distance vector message every 60 seconds
• After receiving the distance vector messages the shortest path computations
takes one second
• Calculate the time of convergence of this example network.

[Figure: example chain network; the original diagram shows nodes labelled 1, 2, 3 and 6 connected in series, with link costs 1, 2 and 1.]

Spring Semester 2021 12


RIP Limitations & Problems
• The simplicity of the Routing Information Protocol is often considered as the
main reason for its popularity but it also has some limitations and weakness.
They are:
• Routing Loops - A routing loop occurs when in order to get to 𝑥, 𝑦 routes
through 𝑧, and 𝑧 routes through 𝑦. (Ref. See Slide 16-17 Lecture 9-1)
• Larger loops can also exist: Router 𝑥 says to send to 𝑦, which says to send to
𝑧, which says to send to 𝑡.
• Routing loops can occur in DV protocols e.g. after link failure or major
increase in a link metric.
• RIP does not include any specific mechanism to detect or prevent routing
loops; the best it can do is try to avoid them.

Spring Semester 2021 13


RIP Limitations & Problems
• The simplicity of the Routing Information Protocol is often considered as the main
reason for its popularity but it also has some limitations and weakness. They are:
• Count to Infinity – Node 𝑧 informs node 𝑦 about its new cost (which is now 7) and
subsequently node 𝑦 re-calculates its cost to 8. Node 𝑦 informs node 𝑧 about its new
cost (which is now 8) and subsequently node 𝑧 re-calculates its cost to 9 and so on.
• The loop will persist i.e. message exchanges between 𝑦 and 𝑧 until 𝑧 eventually
computes the cost of its path to 𝑥 via 𝑦 to be greater than 50. At this point, 𝑧 will
determine that its least cost path to 𝑥 is via its direct connection to 𝑥. 𝑦 will then
route to 𝑥 via 𝑧.
• The bad news about the increase in link cost has indeed traveled slowly.
This problem is referred to as the count to infinity problem.

[Figure: the three node network with c(x,y) = 60, c(y,z) = 1, c(x,z) = 50.]

Spring Semester 2021 14
Techniques to Resolve RIP Problems
• Four techniques are used as a solution to problems that arise due to RIP. They
are:
• Split Horizon – When a router sends out an RIP Response on any of the
networks to which it is connected, it omits any route information that was
originally learned from that network. This feature is called split horizon.
• This reduces incorrect routing information and routing overhead in a distance
vector network.
• In other words, the routing protocol differentiates which interface a network
route was learned on, and once this is determined, it won’t advertise the route
back out that same interface.
• This would have prevented Router 𝑧 from sending the update information it
received from Router 𝑦 back to Router 𝑦.

Spring Semester 2021 15


Techniques to Resolve RIP Problems
• Split Horizon With Poisoned Reverse - This is an enhancement of the basic split horizon feature.
Instead of omitting routes learned from a particular interface when sending RIP Response messages on
that interface, we include those routes but set their metric to RIP infinity, 16.
• The poisoned reverse refers to the fact that we are poisoning the routes that we want to make sure
routers on that interface don't use.
• For example, when Network 5 goes down, Router E initiates route poisoning by advertising Network 5
with a hop count of 16, or unreachable. This poisoning of the route to Network 5 keeps Router C from
being susceptible to incorrect updates about the route to Network 5. When Router C receives a route
poisoning from Router E, it sends an update, called a poison reverse, back to Router E. This ensures that
all routers on the segment have received the poisoned route information.

Network 3 Network 4 Network 5

Spring Semester 2021 16
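A small sketch of both variants of split horizon, assuming a routing table that records the interface each route was learned on; the destinations and interface names are invented for the example.

RIP_INFINITY = 16

# routing table: destination -> (metric, interface the route was learned on)
table = {
    'net1': (1, 'eth0'),
    'net2': (2, 'eth1'),
    'net3': (3, 'eth1'),
}

def advertisement(table, out_interface, poisoned_reverse=False):
    """Routes to include in a RIP Response sent out of out_interface."""
    adv = {}
    for dest, (metric, learned_on) in table.items():
        if learned_on == out_interface:
            if poisoned_reverse:
                adv[dest] = RIP_INFINITY   # advertise it back, but as unreachable
            # plain split horizon: simply omit the route
        else:
            adv[dest] = metric
    return adv

print(advertisement(table, 'eth1'))                        # {'net1': 1}
print(advertisement(table, 'eth1', poisoned_reverse=True)) # {'net1': 1, 'net2': 16, 'net3': 16}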


Techniques to Resolve RIP Problems
• Triggered Updates – A routing loop occurs when in order to get to 𝑥, 𝑦 routes
through 𝑧, and 𝑧 routes through 𝑦. Another aspect of the problem is that a
router may have to wait up to 30 seconds, until its next scheduled transmission time,
to tell other routers about the failure or the link cost increase.
• For RIP to work well, whenever a router changes the metric for a route it is
required to immediately send out an RIP Response to tell its immediate
neighbor routers about the change. If these routers, seeing this change,
update their routing information, they are in turn required to send out
updates.
• Thus, the change of any network route information causes cascading updates
to be sent throughout the internetwork, significantly reducing the slow
convergence problem.

Spring Semester 2021 17


Techniques to Resolve RIP Problems
• Hold Down - Split horizon tries to solve the counting to infinity problem by suppressing the
transmission of invalid information about routes that fail. For extra insurance, we can
implement a feature that changes how devices receiving route information process it in the
case of a failed route.
• The hold down feature works by having each router start a timer when they first receive
information about a network that is unreachable.
• Until the timer expires, the router will discard any subsequent route messages that indicate
the route is in fact reachable. A typical hold down timer runs for 180 seconds.
• The main advantage of this technique is that a router won't be confused by receiving spurious
information about a route being accessible when it was just recently told that the route was
no longer valid.
• Hold downs prevent routes from changing too rapidly by allowing time for either the downed
route to come back up or the network to stabilize somewhat before changing to the next best
route.

Spring Semester 2021 18


Questions?

Spring Semester 2021 20


Computer Communication Networks
CS-418

Course Teacher : Sumayya Zafar


Class : BE EE

Lecture 10-1
Network Layer – Interior Gateway
Routing Protocols - OSPF
Spring Semester 2021 1
Interior Gateway Routing Protocols
• An Interior gateway routing protocol is used to determine how routing is performed within an
autonomous system (AS). These routing protocols are also known as Intra-AS routing
protocols.
• Two most popular routing protocols that have been used extensively for routing within an
autonomous system in the Internet are :
• Routing Information Protocol (RIP), and
• Open Shortest Path First (OSPF).

Spring Semester 2021 3


Open Shortest Path First (OSPF) - Basics
• Like RIP, OSPF routing is widely used for intra-AS routing in the Internet.
• The ‘Open’ in OSPF indicates that the routing protocol is an open, non-proprietary standard.
• OSPF is a link state protocol that uses flooding of link state information and a Dijkstra least cost
path algorithm.
• The fundamental concept behind OSPF is a data structure called the link state database (LSDB).
• Each router in an autonomous system maintains a copy of this database, which contains
information in the form of a directed graph that describes the current state of the autonomous
system.
• Each link to a network or another router is represented by an entry in the database, and each has
an associated cost (or metric). The cost of an interface in OSPF is an indication of the overhead
required to send packets across a certain interface.
• The cost of an interface is inversely proportional to the bandwidth of that interface. A higher
bandwidth indicates a lower cost.
• Cisco uses a simple equation of 10^8 / bandwidth, where bandwidth is the configured bandwidth for
the interface.

Spring Semester 2021 4
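Applying that formula, a short sketch of typical interface costs (bandwidths in bits per second); the integer truncation and the minimum cost of 1 reflect common router behaviour and are assumptions of this sketch.

REFERENCE_BANDWIDTH = 10**8   # 100 Mbps reference used in the 10^8 / bandwidth formula

def ospf_cost(bandwidth_bps):
    """OSPF interface cost: reference bandwidth divided by interface bandwidth,
    truncated to an integer and never lower than 1."""
    return max(1, REFERENCE_BANDWIDTH // bandwidth_bps)

for name, bw in [('64 kbps serial', 64_000),
                 ('T1 (1.544 Mbps)', 1_544_000),
                 ('10 Mbps Ethernet', 10_000_000),
                 ('Fast Ethernet (100 Mbps)', 100_000_000),
                 ('Gigabit Ethernet', 1_000_000_000)]:
    print(name, ospf_cost(bw))
# 64 kbps -> 1562, T1 -> 64, Ethernet -> 10, Fast Ethernet -> 1, Gigabit -> 1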


Open Shortest Path First (OSPF) - Basics
• Information about the autonomous system moves around the autonomous system in the form
of link state advertisements (LSAs), messages that let each router tell the others what it
currently knows about the state of the AS.
• To determine actual routes, each router uses its link state database to construct a shortest
path tree.
• This tree shows the links from the router to each other router and network, and allows the
lowest cost route to any location to be determined.
• As new information about the state of the internetwork arrives, this tree can be recalculated,
so the best route is dynamically adjusted based on network conditions.
• When more than one route with an equal cost exists, traffic can be shared amongst the routes
(load balancing).

Spring Semester 2021 5


OSPF VS RIP
• RIP has certain limitations that can cause problems in large networks:
• RIP has a limit of 15 hops. A RIP network that spans more than 15 hops (15 routers) is considered unreachable.
• RIP cannot handle Variable Length Subnet Masks (VLSM).
• Periodic broadcasts of the full routing table consume a large amount of bandwidth.
• RIP converges slower than OSPF.
• RIP has no concept of network delays and link costs. Routing decisions are based on hop counts. The path with the
lowest hop count to the destination is always preferred even if the longer path has a better aggregate link bandwidth
and lower delay.
• RIP networks are flat networks. There is no concept of areas or boundaries.
• OSPF, on the other hand, addresses most of the issues previously presented:
• With OSPF, there is no limitation on the hop count.
• The intelligent use of VLSM is very useful in IP address allocation.
• OSPF has better convergence than RIP. This is because routing changes are propagated instantaneously and not
periodically.
• OSPF allows for a logical definition of networks where routers can be divided into areas.

Spring Semester 2021 6


OSPF Terminology
• Link - A link is a network or router interface assigned to any given network. When an interface is added
to the OSPF process, it’s considered to be a link.
• Router ID - The router ID (RID) is an IP address used to identify the router.
• Neighbor - Neighbors are two or more routers that have an interface on a common network, such as two
routers connected on a point-to-point serial link. Neighbors are elected via the Hello protocol. Two
routers will not become neighbors unless they agree on the following:
• Area ID - This represents the area that the originating router interface belongs to.
• Authentication - This is the authentication type and corresponding information.
• Hello and Dead Intervals - The period between Hello packets is the Hello time, which is 10 seconds
by default. The dead time is the length of time allotted for a Hello packet to be received before a
neighbor is considered down. This is usually four times the Hello interval, unless otherwise
configured.
• Adjacency - An adjacency is a relationship between two OSPF routers that permits the direct exchange
of route updates. Adjacent routers are routers that go beyond the simple Hello exchange and proceed
into the database exchange process. In order to minimize the amount of information exchange on a
particular segment, OSPF elects one router to be a designated router (DR), and one router to be a
backup designated router (BDR).

Spring Semester 2021 7
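A minimal sketch of this neighbor compatibility check; the field names are chosen for the example and real Hello processing involves additional fields.

from dataclasses import dataclass

@dataclass
class Hello:
    router_id: str
    area_id: int
    auth_type: int
    hello_interval: int = 10    # seconds, default on broadcast / point-to-point links
    dead_interval: int = 40     # usually four times the hello interval

def can_become_neighbors(local: Hello, received: Hello) -> bool:
    """Two routers agree to become neighbors only if these fields match."""
    return (local.area_id == received.area_id
            and local.auth_type == received.auth_type
            and local.hello_interval == received.hello_interval
            and local.dead_interval == received.dead_interval)

r1 = Hello('1.1.1.1', area_id=0, auth_type=0)
r2 = Hello('2.2.2.2', area_id=0, auth_type=0)
r3 = Hello('3.3.3.3', area_id=1, auth_type=0, hello_interval=30, dead_interval=120)

print(can_become_neighbors(r1, r2))   # True  - all parameters match
print(can_become_neighbors(r1, r3))   # False - different area and timers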


OSPF Terminology
• Designated router - A designated router (DR) is elected whenever OSPF routers are connected to the
same broadcast network to minimize the number of adjacencies formed and to publicize received routing
information to and from the remaining routers on the broadcast network or link. Elections are won based
upon a router’s priority level, with the one having the highest priority becoming the winner. If there’s a
tie, the router ID will be used to break it. All routers on the shared network will establish adjacencies
with the DR and the BDR, which ensures that all routers’ topology tables are synchronized
• Backup designated router - A backup designated router (BDR) is a standby for the DR on broadcast, or
multi-access, links. The BDR receives all routing updates from OSPF adjacent routers but does not
disperse LSA updates.
• Hello protocol - The OSPF Hello protocol provides dynamic neighbor discovery and maintains neighbor
relationships. Hello packets and Link State Advertisements (LSAs) build and maintain the topological
database.
• Neighborship database - The neighborship database is a list of all OSPF routers for which Hello packets
have been seen. A variety of details, including the router ID and state, are maintained on each router in
the neighborship database.
• Topological database - The topological database contains information from all of the Link State
Advertisement packets that have been received for an area. The router uses the information from the
topology database as input into the Dijkstra algorithm that computes the shortest path to every network.
Spring Semester 2021 8
OSPF Adjacency Requirement
• Once neighbors have been identified, adjacencies must be established so that routing
information can be exchanged.
• There are two steps required to change a neighboring OSPF router into an adjacent OSPF
router:
• Two way communication (achieved via the Hello protocol)
• Database synchronization, which consists of three packet types being exchanged between
routers:
• Database Description (DD) packets - These messages contain descriptions of the topology of the AS
or area. That is, they convey the contents of the link state database for the autonomous system or
area from one router to another.
• Link State Request (LSR) packets - After DD packets exchange process, the router may find it does
not have an up-to-date database. These messages are used by one router to request updated
information about a portion of the LSDB from another router. The message specifies exactly which
link(s) about which the requesting device wants more current information.
• Link State Update (LSU) packets - These messages contain updated information about the state of
certain links on the LSDB. They are sent in response to a Link State Request message, and also
broadcast or multicast by routers on a regular basis. Their contents are used to update the
information in the LSDBs of routers that receive them.
Spring Semester 2021 9
OSPF Adjacency Requirement
• Once database synchronization is complete, the two routers are considered adjacent.
• On point-to-point link , the two neighbors will become adjacent if the Hello packet information
for both routers is configured properly.
• On broadcast multi-access networks, adjacencies are formed only between the OSPF routers
on the network and the DR and BDR.

Spring Semester 2021 10


OSPF Link State Advertisements
• A Link State Advertisement (LSA) is an OSPF data packet containing link state
and routing information that is shared among OSPF routers.
• An OSPF router will exchange LSA packets only with routers to which it has
established adjacencies.
• LSA packet contains sufficient information to identify the link. Some of the
important fields are:
• LS Age - The LS Age is the equivalent of a time to live, except that it counts
up and the LSA expires when the age reaches a defined maximum value.
• Link State ID- Identifies the link. This usually is the IP address of either the
router or the network the link represents.
• Advertising Router - The ID of the router originating the LSA.
• LS Sequence Number - A sequence number used to detect old or duplicate
LSAs.
Spring Semester 2021 11
OSPF Link State Advertisements
• LS Type - Indicates the type of link this LSA describes.
LS Type   Description
1         Router Link advertisements - Generated by each router for each area it belongs to.
          They describe the states of the router's links to the area. If a router is connected to
          multiple areas, it sends separate Type 1 LSAs for each of the areas it is connected to.
          These are flooded only within a particular area.
2         Network Link advertisements - Generated by Designated Routers. They describe the
          set of routers attached to a particular network. Flooded in the area that contains the
          network.
3 or 4    Summary Link advertisements - Generated by Area Border Routers. They describe
          inter area routes. Type 3 describes routes to networks and is also used for aggregating
          routes. Type 4 describes routes to ASBRs.
5         AS external link advertisements - Originated by ASBRs. They describe routes to
          destinations external to the AS.

Spring Semester 2021 12


OSPF Areas
• OSPF provides the functionality to divide an autonomous system into sub
autonomous systems, commonly referred to as areas. Every autonomous
system must have a core area, referred to as a backbone area; this is
identified with Area ID 0.
• Areas are identified through a 32 bit area field; thus Area ID 0 is the same as
0.0.0.0.
• Usually, areas (other than the backbone) are sequentially numbered as Area 1
(i.e., 0.0.0.1), Area 2, and so on.
• OSPF allows a hierarchical setup with the backbone area as the top level while
all other areas, connected to the backbone area, are referred to as low level
areas.
• This also means that the backbone area is in charge of summarizing the
topology of one area to another area, and vice versa.
Spring Semester 2021 13
OSPF Areas
• With the functionality provided to divide an OSPF network into areas ,the routers are classified
into four different types:
• Area Border Routers - These are the routers that sit on the border between the backbone and
the low level areas. Each area border router must have at least one interface to the backbone;
it also has at least one interface to each area to which it is connected.
• Internal Routers - These are the routers in each low level area that have interfaces only to
other internal routers in the same area.
• Backbone Routers - These are the routers located in Area 0 with at least one interface to
other routers in the backbone. Area border routers can also be considered as backbone
routers.
• AS Boundary Routers - These routers are located in Area 0 with connectivity to other AS; they
must be able to handle more than one routing protocol. For example, to exchange information
with another AS, they must be able to speak BGP. These routers also have internal interfaces
for connectivity to other backbone routers.

Spring Semester 2021 14


Spring Semester 2021 15
OSPF Network Types
• OSPF is designed to address five different types of networks. They are:
• Point-to-point networks - refers to a type of network topology made up of a direct connection between two
routers that provides a single communication path.
• Broadcast networks - refer to networks such as LANs connected by a technology such as Ethernet. Broadcast
networks, by nature, are multiaccess where all routers in a broadcast network can receive a single transmitted
packet. In such networks, a router is elected as a Designated Router (DR) and another as a Backup
Designated Router (BDR).
• Non–broadcast multiaccess networks - use technologies such as ATM or frame relay where more than two
routers may be connected without broadcast capability. Thus, an OSPF packet is required to be explicitly
transmitted to each router in the network. Such networks require an extra configuration to emulate the
operation of OSPF on a broadcast network. Like broadcast networks, NBMA networks elect a DR and a BDR.
• Point-to-multipoint networks - refers to a type of network topology made up of a series of connections
between a single interface on one router and multiple destination routers. All interfaces on all routers share
the point-to-multipoint connection and belong to the same network.
• Virtual Links - are used to connect an area to the backbone using a non backbone (transit) area. Virtual links
are configured between two area-border routers. Virtual links can also be used if a backbone is partitioned into
two parts due to a link failure; in such a case, virtual links are tunneled through a non backbone area.

Spring Semester 2021 16


OSPF Operation
• OSPF operation is basically divided into these three categories:
• Neighbor and adjacency initialization – When OSPF is initialized on a router, the router allocates memory
for it, as well as for the maintenance of both neighbor and topology tables. Once the router determines
which interfaces have been configured for OSPF, it will then check to see if they are active and begin
sending Hello packets. The Hello protocol is used to discover neighbors, establish adjacencies, and
maintain relationships with other OSPF routers.
• This is done for point-to-point, point-to-multipoint, and virtual link networks. For broadcast and NBMA
networks, not all routers become logically adjacent; here, the hello protocol is used for electing DRs and
BDRs.
• After initialization the hello protocol is used to keep alive connectivity, which ensures bidirectional
communication between neighbors; this means, if the keep alive hello messages are not received within
a certain time interval that was agreed upon during initialization, the link/connectivity between the
routers is assumed to be not available.
• Broadcast and point-to-point networks send Hellos every 10 seconds, whereas non-broadcast and point-
to-multipoint networks send them every 30 seconds.

Spring Semester 2021 17


OSPF Operation
• OSPF operation is basically divided into these three categories:
• LSA flooding - LSA flooding is the method OSPF uses to share routing information. Via Link
State Updates (LSU’s) packets, LSA information containing link state data is shared with all
OSPF routers within an area.
• The network topology is created from the LSA updates, and flooding is used so that all OSPF
routers have the same topology map to make SPF calculations with.
• On point-to-point networks, updates use the IP multicast address 224.0.0.5, referred to as
AllSPFRouters. A router on receiving an update forwards it to other routers, again using the
same multicast address.
• On broadcast networks, all non DR and non BDR routers send link state update and LSA
packets using the IP multicast address 224.0.0.6, referred to as AllDRouters. Any OSPF
packets that originates from a DR or a BDR, use the IP multicast address 224.0.0.5.
• OSPF flooding must be reliable. OSPF addresses reliable delivery of packets through use of
acknowledgments.

Spring Semester 2021 18


OSPF Operation
• OSPF operation is basically divided into these three categories:
• SPF tree calculation - Within an area, each router calculates the best/shortest path to every network in
that same area.
• This calculation is based upon the information collected in the topology database and an algorithm called
shortest path first (SPF).
• Each router in an area running the SPF algorithm constructs a tree where the router is the root and all
other networks are arranged along the branches and leaves.
• This is the shortest path tree used by the router to insert OSPF routes into the routing table. This tree
contains only networks that exist in the same area as the router itself does.
• If a router has interfaces in multiple areas, then separate trees will be constructed for each area.

Spring Semester 2021 19


OSPF Authentication
• OSPF packets can be authenticated such that routers can participate in routing
domains based on predefined passwords.
• By default, a router uses a Null authentication which means that routing
exchanges over a network are not authenticated.
• Two other authentication methods exist. They are:
• Simple password authentication allows a password (key) to be configured per
area. Routers in the same area that want to participate in the routing domain
will have to be configured with the same key. The drawback of this method is
that it is vulnerable to passive attacks.
• Message Digest authentication is a cryptographic authentication. A key
(password) and key id are configured on each router. The router uses an
algorithm(MD5) based on the OSPF packet, the key, and the key id to
generate a "message digest" that gets appended to the packet.
Spring Semester 2021 20
Questions?

Spring Semester 2021 22
