
MSCCS 1.

M.Sc.
(Computer Science)
SEMESTER - I

(REVISED SYLLABUS
AS PER NEP 2020)

SOFTWARE DEFINED
NETWORKING
© UNIVERSITY OF MUMBAI
Prof. Ravindra Kulkarni
Vice-Chancellor,
University of Mumbai,

Prin. Dr. Ajay Bhamare
Pro Vice-Chancellor,
University of Mumbai,

Prof. Shivaji Sargar
Director,
CDOE, University of Mumbai,

Programme Co-ordinator : Shri. Mandar Bhanushe


Head, Faculty of Science and Technology,
CDOE, University of Mumbai, Mumbai
Course Co-ordinator : Mr. Sumedh Shejole
& Editor Asst. Professor,
CDOE, University of Mumbai, Mumbai

Course Writers : Dr. V.I.Pujari


Assistant Professor,
D. Y. Patil School of Engineering and
Management, Kolhapur

: Dr. Mitali Shewale


Doctoral Researcher,
Veermata Jijabai Technological Institute, Mumbai

: Trupti Kulkarni Kaujalgi


Head of Department,
ICLES' M J College, Vashi

: Amit Tikamdas Kukreja


Assistant Professor,
K. J. Somaiya Institute of Technology, Sion

December 2024, Print - I

Published by : Director,
Centre for Distance and Online Education,
University of Mumbai,
Vidyanagari, Mumbai - 400 098.

DTP Composed : Mumbai University Press,
Vidyanagari, Santacruz (E), Mumbai - 400 098

Printed by : ipin Enterprises,
Tantia Jogani Industrial Estate, Unit No. 2,
Ground Floor, Sitaram Mill Compound,
J.R. Boricha Marg, Mumbai - 400 011
CONTENTS
Unit No. Title Page No.

1. Introduction to Computer Networking 01

2. Concepts and Implementation of IPV4 and IPV6 24

3. Routing 38

4. Software Defined Networking 50

5. Network Functions Virtualization Concepts and Architecture 62

6. Modern Network Architecture: Clouds and Fog 75

7. Design and Implementation of Network 86

8. Implementation of Routing 117


Programme Name: M.Sc. Computer Science (Semester I)
Course Name: Software Defined Networking
Total Credits: 04
Total Marks: 100
University assessment: 50
College assessment: 50

Prerequisite: Basic Networking concepts.


Course Outcome:
● Understand computer networking concepts, OSI/TCP-IP models, and routing protocols.
● Gain knowledge and skills in Software Defined Networking (SDN) architecture,
OpenFlow, and application development.
● Comprehend Network Functions Virtualization (NFV), cloud computing, and IoT
integration in modern network architectures.
● Design and implement switching techniques, routing protocols, multicast, MPLS, traffic
filtering, and routing redistribution.
● Develop network design and deployment skills for efficient and secure routing, traffic
management, and integration of network components.

Course Code   Course Title                    Total Credits
PSCS503       Software Defined Networking     04
MODULE - I 02
Unit 1: Introduction to Computer Networking
Basic Concepts and Definitions: LAN, MAN, WAN, AD-Hoc, Wireless Network,
Understanding the layered architecture of OSI/RM and TCP-IP Model, Concepts
and implementation of IPV4 and IPV6, Study of various network Routing protocols,
Introduction to Transport layer and Application layer protocols.

Unit 2: Software Defined Networking


Elements of Modern Networking, Requirements and Technology, SDN: Background
and Motivation, SDN Data Plane and OpenFlow, SDN
Control Plane, SDN Application Plane

MODULE - II 02
Unit 3: Network Functions Virtualization
Concepts and Architecture, NFV Functionality, Network Virtualization Quality of
Service, Modern Network Architecture: Clouds and Fog, Cloud Computing, The
Internet of Things: Components

Unit 4: Design and implementation of Network


Understand and implement Layer 2/3 switching techniques (VLAN /TRUNKING/
Managing Spanning Tree), Implementation of OSPF V2 and V3, Implementation
BGP, Implementation Multicast Routing, Implementation of MPLS, Implementation
of Traffic Filtering by using Standard and Extended Access Control List,
Implementation of Routing
redistribution.

Text Books:
1. TCPIP Protocol Suite, Behrouz A Forouzan , McGraw Hill Education; 4th edition, Fourth
Edition, 2017
2. Foundations of Modern Networking: SDN, NFV, QoE, IoT, and Cloud, William Stallings,
Addison-Wesley Professional, 2016.
3. Software Defined Networks: A Comprehensive Approach, Paul Goransson and Chuck
Black, Morgan Kaufmann Publications, 2014
4. SDN - Software Defined Networks by Thomas D. Nadeau & Ken Gray, O'Reilly, 2013

Programme Name: M.Sc. Computer Science (Semester I)
Course Name: Software Defined Networking Practical
Total Credits: 02
Total Marks: 50
University assessment: 50

Prerequisite: Basic Networking concepts, Knowledge of Cisco Packet Tracer.


Course Outcome:
● Implement various network protocols and technologies, including IP SLA, IPv4 ACLs,
SPAN, SNMP, and Net Flow.
● Configure network connectivity and address translation using GRE tunnels, VTP, NAT,
and inter-VLAN routing.
● Understand and optimize network spanning tree operation through STP topology
changes, RSTP, and advanced STP mechanisms.
● Establish and manage advanced networking features such as Ether Channel, OSPF,
BGP, and IPsec VPNs.
● Simulate and analyze Software-Defined Networking (SDN) environments using Open
Daylight and Mininet/OpenFlow.

Course Code   Course Title                              Credits
PSCSP504      Software Defined Networking Practical     02


Note: All the practicals should be implemented using GNS3/EVE-NG/CISCO VIRL
Link: GNS3: https://www.gns3.com/software/download
EVE-NG: https://www.eve-ng.net/index.php/download/
CISCO VIRL: https://learningnetwork.cisco.com/s/question/0D53i00000Kswpr/virl15-download
1 Implement IP SLA (IP Service Level Agreement)
2 Implement IPv4 ACLs
a) Standard ACL
b) Extended ACL
3 a) Implement SPAN Technologies (Switch Port Analyzer)
b) Implement SNMP and Syslog
c) Implement Flexible NetFlow
4 a) Implement a GRE Tunnel
b) Implement VTP
c) Implement NAT
5 Implement Inter-VLAN Routing
6 Observe STP Topology Changes and Implement RSTP
a) Implement Advanced STP Modifications and Mechanisms
b) Implement MST
7 a) Implement Ether Channel
b) Tune and Optimize Ether Channel Operations
8 OSPF Implementation
a) Implement Single-Area OSPFv2
b) Implement Multi-Area OSPFv2
c) OSPFv2 Route Summarization and Filtering
d) Implement Multi area OSPFv3
9 a) Implement BGP Communities
b) Implement MP-BGP
c) Implement eBGP for IPv4
d) Implement BGP Path Manipulation
10 a) Implement IPsec Site-to-Site VPNs
b) Implement GRE over IPsec Site-to-Site VPNs
c) Implement VRF Lite
11 Simulating SDN with
a) OpenDaylight SDN Controller with the Mininet Network Emulator
b) OFNet SDN network emulator
12 Simulating OpenFlow Using MININET

1
INTRODUCTION TO COMPUTER
NETWORKING
Unit Structure :
1.0 Objective
1.1 Basic Concepts and Definitions
1.2 Local Area Network (LAN)
1.3 Metropolitan Area Network (MAN)
1.4 Wireless AD-Hoc network (WANET)
1.5 Understanding the layered architecture of OSI/RM and TCP-IP
Model
1.6 Summary
1.7 Questions

1.0 OBJECTIVE

1. To Understand the basic concept of computer networks.


2. To Understand the various types, examples, features, and advantages of
computer networks.
3. To Understand the layered architecture of OSI/RM and TCP-IP
Models.

1.1 BASIC CONCEPTS AND DEFINITIONS


A computer network is created by establishing communication links
between two or more PCs as well as additional hardware components. It
makes it possible for computers to speak with one another and to exchange
information, including hardware and software resources, commands, and
data.
In a network, every computing device is referred to as a node or station.
Servers, PCs, and routers are examples of nodes. The network is used to
transfer data by applying rules referred to as protocols. The rules that each
network node must abide by in order to send data via a wired or wireless
connection are known as protocols.

Working of a Computer Network


The sources of creating and sending data are called nodes, and they
include devices like computers, switches, and modems. Next, the nodes
are connected by means of the link, which is a transmission medium.
The nodes will use connections to send and receive data if they adhere to
the standards. The architecture of computer networks specifies how these
logical and physical components are connected. It gives definitions for the
protocols, processes, functional structure, and physical elements of the
network.

Uses of Computer Network:


o It allows you to share resources such as printers, scanners, etc.
o You can share expensive software and databases among network users.
o It facilitates communications from one computer to another computer.
o It allows the exchange of data and information among users through a
network.

Popular Computer Networks:


o Local Area Network (LAN)
o Metropolitan Area Network (MAN)
o Wide Area Network (WAN)
The local area network, as its name implies, is a type of computer network
that links computers in a constrained geographic region, such as an office,
business, school, or other establishment. Thus, it is limited to a certain
space, such as a home, workplace, or school network.

1.2 LOCAL AREA NETWORK (LAN)


A wired, wireless, or hybrid network can be a part of a local area network.
An Ethernet cable, which provides an interface to link various devices
including routers, switches, and computers, is typically used to connect the
devices in a local area network (LAN). For instance, you may set up a
LAN at your house, place of business, etc. with just a single router, a few
Ethernet connections, and few PCs. Within this network, a single
computer may function as a server, while other computers connected to
the network could operate as clients.

Features of LAN
o The network size is small, which consists of only a few kilometres.
o The data transmission rate is high, ranging from 100 Mbps to 1000
Mbps.
o LAN is included in bus, ring, mesh, and star topologies.
o Some network devices connected to the LAN will be limited.
o If more devices are added than prescribed, the network may fail.

Benefits of LAN:
o It offers a higher operating speed than WAN and MAN.
o It is less expensive and easy to install and maintain.
o It perfectly fulfills the requirement of a specific organization, such as
an office, school, etc.
o It can be wired or wireless or a combination of both.
o It is more secure than other networks as it is a small set up that can be
easily taken care of.

Primary Functions of LAN:


o File sharing: It enables the sharing or transfer of files across LAN-
connected computers. It can be used, for instance, to transport files
containing a customer's transaction data from the server to clients in a
bank.
o Printer sharing: It also permits file servers, printers, and other shared
access. For instance, a single printer, file server, fax machine, etc. may
be used by 10 computers linked over LAN.
o Sharing of Computational Capabilities: Some programs that operate
on clients in a local area network (LAN) may demand more
computational capacity. This enables the clients to use the
computational power of a server, such as an application server.
o Services pertaining to mail and messages: It permits mail
transmission and reception between LAN computers. This requires
that you have a mail server.
o Database services: With the aid of a database server, data may also be
stored and retrieved.

1.3 METROPOLITAN AREA NETWORK (MAN)

A metro area or town is an example of a big geographical region covered


by a high-speed network, or MAN. Local area networks are connected by
routers and local phone exchange connections during setup. It could be run
by a private business or it might be a service offered by an organization
like a neighborhood phone company.
When individuals in a sizable region wish to exchange data or
information, MAN is perfect. Via high-speed carriers or transmission
medium like fiber optics, copper, and microwaves, it offers quick
communication. X.25, Frame Relay, Asynchronous Transfer Mode
(ATM), xDSL (Digital Subscriber Line), ISDN (Integrated Services
Digital Network), ADSL (Asymmetrical Digital Subscriber Line), and
other protocols are frequently used for MAN.
The area that a MAN covers is greater than a LAN's but less than a WAN's.
The network's range is between 5 and 50 kilometers. In addition, it offers
uplinks to link LANs to WANs and the internet. A MAN can be used by a
company to link all of the LANs in each of its several city offices.

Examples of MAN:
o Cable TV Network
o Telephone service providers that provide high-speed DSL lines
o IEEE 802.16 or WiMAX
o Connected fire stations in a city
o Connected branches of a school in a city
Features of MAN
o The size of the MAN is in the range of 5km to 50km.
o The MAN ranges from the campus to the entire city.
o The MAN is maintained and managed by either the user group or the
Network provider.
o Users can achieve the sharing of regional resources by using MAN.
o The data transmission rates can be medium to high

Advantages of MAN:
o Less Expensive: It is less expensive to set up a MAN and to connect it
to a WAN.
o High Speed: The speed of data transfer is more than WAN.
o Local Emails: It can send local emails fast.
o Access to the Internet: It allows you to share your internet
connection, and thus multiple users can have access to high-speed
internet.

o Easy to set up: You can easily set up a MAN by connecting multiple
LANs.

o High Security: It is more secure than WAN.

Wide Area Network (WAN)

WAN covers a wide geographic region. It is mostly set up using phone


lines, fiber optic, or satellite links and is not restricted to a workplace,
school, city, or municipality. Large institutions, such as banks and
international corporations, mostly utilize it to connect with their global
branches and clientele. While sharing structural similarities with MAN,
WAM differs from MAN in that it can cover distances more than 50 km,
such as 1000 km or more, while MAN can only cover up to 50 km.
TCP/IP protocol is used by networking equipment such as switches, routers,
firewalls, and modems to operate WANs. They are made to connect local
networks, such as LANs and MANs, to form larger networks, not
individual computers. Because it links several LANs and MANs via ISPs,
the internet is regarded as the world's biggest wide area network (WAN).
Using public networks like satellites, leased lines, or phone systems, the
PCs are linked to the wide area network. Because a wide area network
(WAN) connects distant computer systems in a vast arrangement, its users
do not own the network. To utilize this network, though, customers must
pay for a service offered by a telecommunications company.

Features of WAN
o Has a much larger capacity.
o We can share the regional resources by using WAN.
o They have a higher bit-error rate.
o The transmission delay is higher, and hence they need higher communication
speed.

Advantages of a WAN:
o Large Network Range: It spans a large geographical area of 2000 km
or more, e.g., from one country to another country.
o Centralized data: It allows your different office branches to use your
head office server for retrieving and sharing data. Thus, you don’t
need to buy email servers, files server and back up servers, etc.
o Get updated files and data: It provides an ideal platform for
companies who need a live server for their employees to exchange
updated files within seconds.
o High bandwidth: It offers high bandwidth than a normal broadband
connection. Thus, it can increase the productivity of your company by
offering uninterrupted data transfer and communication.
o Workload Distribution: It helps distribute your workload to other
locations. You can hire employees in different countries and assign
them to work from your office.

Examples of WAN:
Internet US defense department Stock exchanges network Railway
reservation system Big Banks' cash dispensers' network Satellite systems

1.4 WIRELESS AD-HOC NETWORK (WANET)


An autonomously constructed local area network (LAN) known as a
wireless ad hoc network (WANET) allows two or more wireless devices
to connect to one another without the need for standard network
infrastructure components like wireless routers or access points.
Typically, an ad hoc network is constructed via the Wi-Fi interface of a
PC, laptop, or smartphone. In other scenarios, gadgets like wireless
sensors are made particularly to function in an ad hoc manner.

Central servers are not required for tasks like file sharing or printing since
devices in an ad hoc network may directly access each other's resources
via simple point-to-multipoint or peer-to-peer (P2P) protocols. Routing,
security, addressing, and key management are just a few of the network
functions that are handled by a group of devices, or nodes, in a wireless
ad hoc network (WANET), such as a smartphone or PC with wireless
capabilities.

1.5 UNDERSTANDING THE LAYERED


ARCHITECTURE OF OSI/RM AND TCP-IP
MODEL
A complicated combination of hardware and software makes up a
communication subsystem. The software for these subsystems was first
implemented using a single, complex, unstructured program with several
interdependent parts. It was exceedingly tough to test and alter the
resulting program. To address this issue, the ISO has created a multi-tiered
strategy. A layered method breaks down the networking notion into many
levels, with a specific purpose given to each layer. As a result, we may
state that networking jobs rely on the layers.

Layered Architecture

 The layered architecture's primary goal is to break the design up into


smaller components.

 To offer a complete set of services for managing communications and


operating the applications, each lower layer provides its services to the
layer above it.

 It allows for subsystem interaction by offering modularity and distinct


interfaces.

 By offering services from a lower layer to a higher layer without


specifying how the services are to be implemented, it maintains the
independence between layers. As a result, changing one layer won't
have an impact on the others.

 Each network will have a different number of levels, each with its own
purposes and contents. But each layer's job is to deliver a service from
a lower layer to a higher tier while keeping the specifics of the
services' implementation hidden from view.

 The basic elements of layered architecture are services, protocols, and


interfaces.

 Service: It is a set of actions that a layer provides to the higher layer.

 Protocol: It defines a set of rules that a layer uses to exchange the


information with peer entity. These rules mainly concern about both
the contents and order of the messages used.

 Interface: It is a way through which the message is transferred from


one layer to another layer.
 In a layer n architecture, layer n on one machine will have a
communication with the layer n on another machine and the rules used
in a conversation are known as a layer-n protocol.

Let's take an example of the five-layered architecture.

 No data is moved from layer n of one machine to layer n of another in


a layered architecture. Rather, until the lowest layer is reached, each
layer transfers the data to the layer that is directly below it.

 The physical media that is used for real communication is situated


underneath layer 1.

 Unmanageable jobs are split up into several smaller, doable tasks in a


layered architecture.

 An interface is used to transfer data from the higher layer to the


bottom layer. The clean interface of a layered architecture ensures that
the least amount of information is transferred between levels. It also
makes sure that a different implementation of a layer may simply take
its place.

 Network architecture is a collection of protocols and layers.
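
The encapsulation implied by this layering—each layer on the sender adds its own header on the way down, and each layer on the receiver removes it on the way up—can be sketched in a few lines of Python. This is only a toy illustration: the layer names and header strings are invented for the example and do not correspond to any real protocol stack.

Python code (illustrative sketch):

# A toy illustration of layered encapsulation: each layer adds its own
# header on the way down and removes it on the way up.
layers = ["application", "transport", "network", "data-link", "physical"]

def send(message):
    data = message
    for layer in layers:                   # top layer first, bottom layer last
        data = f"[{layer}-header]" + data  # each layer encapsulates the data
    return data                            # this is what goes on the wire

def receive(frame):
    data = frame
    for layer in reversed(layers):         # bottom layer strips its header first
        data = data.removeprefix(f"[{layer}-header]")
    return data

wire = send("hello")
print(wire)            # all five headers wrapped around the payload
print(receive(wire))   # original message recovered: "hello"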


Why do we require Layered architecture?

 Divide – and - conquer approach : Divide-and-conquer approach


makes a design process in such a way that the unmanageable tasks are
divided into small and manageable tasks. In short, we can say that this
approach reduces the complexity of the design.

 Modularity: Layered architecture is more modular. Modularity
provides the independence of layers, which is easier to understand and
implement.

 Easy to modify: It ensures the independence of layers so that


implementation in one layer can be changed without affecting other
layers.

 Easy to test: Each layer of the layered architecture can be analyzed


and tested individually.

OSI Model

 Open System Interconnection, or OSI, is a reference model that


explains how data travels via a physical media from a computer
program on one machine to another computer program.

 The seven levels of OSI each carry out a specific network function.

 The OSI model, which was created in 1984 by the International


Organization for Standardization (ISO), is currently regarded as an
architectural model for communications between computers.

 The OSI model breaks the work out into seven more manageable,
smaller jobs. Every layer has a certain duty assigned to it.

 Because each layer is self-contained, tasks assigned to it can be


completed on its own.

Characteristics of OSI Model:

 There are two levels in the OSI model: upper layers and lower layers.

 The application-related problems that are mostly handled by the higher


layer of the OSI model are limited to software implementation. The
layer nearest to the user is the application layer. Software applications
are interacted with by both the application layer and the end user. The
layer just above another layer is referred to as an upper layer.

 The OSI model's lowest layer handles data transport-related problems.


Hardware and software are used to implement the data link
layer and the physical layer. In the OSI model, the physical layer is the
lowest layer and the one nearest to the physical media. The physical
layer is mostly in charge of positioning the information on the physical
medium.

7 Layers of OSI Model


There are seven OSI layers, each with different functions. A list
of the seven layers is given below:
1. Physical Layer
2. Data-Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer

1) Physical layer

 The main functionality of the physical layer is to transmit the
individual bits from one node to another node.

 It is the lowest layer of the OSI model.

 It establishes, maintains and deactivates the physical connection.

 It specifies the mechanical, electrical and procedural network interface


specifications.

Functions of a Physical layer :

 Line Configuration: It defines the way how two or more devices can
be connected physically.

 Data Transmission: It defines the transmission mode whether it is a


simplex, half-duplex or full-duplex mode between the two devices on
the network.

 Topology: It defines the way how network devices are arranged.

 Signals: It determines the type of signal used for transmitting the


information.

2) Data-Link Layer

 This layer is responsible for the error-free transfer of data frames.

 It defines the format of the data on the network.

 It provides a reliable and efficient communication between two or


more devices.

 It is mainly responsible for the unique identification of each device


that resides on a local network.

 It contains two sub-layers:

Logical Link Control Layer
 It is responsible for transferring the packets to the Network layer of the
receiving device.

 It identifies the address of the network layer protocol from the header.

 It also provides flow control.

Media Access Control Layer

 A Media access control layer is a link between the Logical Link


Control layer and the network's physical layer.

 It is used for transferring the packets over the network.

Functions of the Data-link layer

 Framing: The data link layer translates the physical layer's raw bit stream
into units known as frames. The Data link layer adds the header and
trailer to the frame. The header which is added to the frame contains
the hardware destination and source address.

 Physical Addressing: The Data link layer adds a header to the frame
that contains a destination address. The frame is transmitted to the
destination address mentioned in the header.

 Flow Control: Flow control is the main functionality of the Data-link


layer. It is the technique through which the constant data rate is
maintained on both sides so that no data gets corrupted. It ensures
that a transmitting station with higher processing speed, such as a
server, does not overwhelm a receiving station with lower processing
speed.

 Error Control: Error control is achieved by adding a calculated value


CRC (Cyclic Redundancy Check) that is placed in the Data link layer's
trailer, which is added to the message frame before it is sent to the
physical layer. If any error occurs, then the receiver sends an
acknowledgment requesting retransmission of the corrupted frames.

 Access Control: When two or more devices are connected to the same
communication channel, then the data link layer protocols are used to
determine which device has control over the link at a given time.

3) Network Layer

 It is layer 3; it manages device addressing and tracks the location of


devices on the network.

 It determines the best path to move data from source to the destination
based on the network conditions, the priority of service, and other
factors.

 The Network layer is responsible for routing and forwarding the
packets.

 Routers are layer 3 devices; they are specified in this layer and
used to provide the routing services within an internetwork.

 The protocols used to route the network traffic are known as Network
layer protocols. Examples of protocols are IPv4 and IPv6.

Functions of Network Layer :

 Internetworking: Internetworking is the main responsibility of the


network layer. It provides a logical connection between different
devices.

 Addressing: A Network layer adds the source and destination address


to the header of the packet. Addressing is used to identify the device on
the internet.

 Routing: Routing is the major component of the network layer, and it


determines the best optimal path out of the multiple paths from source
to the destination.

 Packetizing: A Network Layer receives the segments from the upper


layer and converts them into packets. This process is known as
Packetizing. It is achieved by internet protocol (IP).

4) Transport Layer

 The Transport layer (Layer 4) ensures that messages are transmitted


in the order in which they are sent and there is no duplication of data.

 The main responsibility of the transport layer is to transfer the data


completely.

 It receives the data from the upper layer and converts them into
smaller units known as segments.

 This layer can be termed as an end-to-end layer as it provides a point-


to-point connection between source and destination to deliver the data
reliably.

The two protocols used in this layer are:

 Transmission Control Protocol


o It is a standard protocol that allows the systems to communicate
over the internet.
o It establishes and maintains a connection between hosts.
o When data is sent over the TCP connection, then the TCP protocol
divides the data into smaller units known as segments. Each
segment travels over the internet using multiple routes, and they
arrive in different orders at the destination. The transmission
control protocol reorders the packets in the correct order at the
receiving end.

 User Datagram Protocol


o User Datagram Protocol is a transport layer protocol.
o It is an unreliable transport protocol: the receiver does
not send any acknowledgment when a packet is received, and the
sender does not wait for any acknowledgment, which
makes the protocol unreliable.

Functions of Transport Layer:
Service-point addressing: Because of this, computers are able to execute
many programs at once. This allows data to be sent from one computer to
another as well as from one process to another. The header with the
address known as a service-point address or port address is added by the
transport layer. The transport layer is in charge of sending the message to
the appropriate process, whereas the network layer is in charge of sending
data from one computer to another.
Segmentation and reassembly: The message is split up into many
segments by the transport layer once it gets it from the top layer, and each
segment is given a sequence number that allows it to be uniquely
identified. The transport layer reassembles the message based on sequence
numbers after it has reached its destination.
Connection control: Two services are offered by the transport layer. Both
connectionless and connection-oriented services are available. Every
segment is handled as a separate packet by a connectionless service, and
they all take distinct paths to get there. Before sending the packets, a
connection-oriented service establishes a connection with the target
machine's transport layer. Every packet in a connection-oriented service
follows the same path.
Flow control: The transport layer is also responsible for flow control, but it
is performed end-to-end rather than across a single link.
Error control: Error control is another duty of the transport layer. Error
control is not carried out over the single connection, but rather end-to-end.
The sender's transport layer guarantees error-free message delivery to the
intended recipient.

5) Session Layer

o It is layer 5 in the OSI model.


o The Session layer is used to establish, maintain, and synchronize the
interaction between communicating devices.

Functions of Session layer:
 Dialog control: The session layer serves as a dialog controller,
facilitating the creation of a dialogue or, more accurately, enabling
half-duplex or full-duplex communication between two processes.

 Synchronization: When transferring data sequentially, the session


layer adds a few checkpoints. The data transfer will restart from the
checkpoint if an error arises during the transmission process. We call
this procedure "recovery and synchronization."

6) Presentation Layer

 A Presentation layer is mainly concerned with the syntax and


semantics of the information exchanged between the two systems.

 It acts as a data translator for a network.

 This layer is a part of the operating system that converts the data from
one presentation format to another format.

 The Presentation layer is also known as the syntax layer.

Functions of Presentation layer :


Translation: Two systems' processes exchange data in the form of
character strings, integers, and other data. The presentation layer manages
the compatibility between the various encoding techniques used by
different computers. At the receiving end, it transforms the common
format into receiver-dependent format after converting the data from
sender-dependent format into a common format.

 Encryption: To preserve privacy, encryption is required. The act of


transforming information sent by the sender into a different format and
sending the resultant message via a network is known as encryption.

 Compression: The process of compressing data lowers the amount of


bits that need to be transferred. Text, audio, and video are examples of
multimedia where data compression is crucial.
7) Application Layer

 Users and application processes can access network services through


an application layer.

 It deals with matters like resource allocation and network


transparency.

 Although it carries out application layer tasks, an application layer is


not an application itself.

 This layer gives end users access to the network services.

Functions of Application layer:

 File transfer, access, and management (FTAM): With the help of an


application layer, a user may access, retrieve, and manage files stored
on a distant computer.

 Email services: Email forwarding and storage are made possible by an


application layer.

 Directory services: An application serves as a worldwide source of


information about different objects by supplying distributed database
sources.

TCP/IP model

 The OSI model was created after the TCP/IP model.


 The OSI model and the TCP/IP model are not precisely the same.
 The application, transport, network, data link, and physical
layers make up the five levels of the TCP/IP paradigm.
 The first four layers of the TCP/IP model offer physical
standards, network interface, internetworking, and transport services
that correspond to the first four layers of the OSI model; the top three
OSI layers are represented in TCP/IP by a single application layer.

 TCP/IP is a hierarchical protocol made up of interactive modules, and


each of them provides specific functionality.

Here, hierarchical means that each upper-layer protocol is supported by
two or more lower-level protocols.

Functions of TCP/IP layers:

Network Access Layer

 In the TCP/IP paradigm, the network access layer is the lowest layer.

 In terms of the OSI reference model, the network access layer consists of the


Physical layer and Data Link layer.

 It establishes the physical protocol for data transmission throughout


the network.

 The data transfer between two devices connected to the same network
is mostly the responsibility of this layer.

 Encapsulating the IP datagram into network-transmitted frames and


translating IP addresses into physical addresses are the tasks
performed by this layer.

 This layer uses the Ethernet, Token Ring, FDDI, X.25, and frame relay
protocols.

Internet Layer

 The second layer in the TCP/IP paradigm is called an internet layer.

 The network layer is another name for an internet layer.

 The internet layer's primary duty is to transmit packets from any


network; regardless of the path they take, the packets reach their
destination.

Following are the protocols used in this layer:
IP Protocol: IP protocol is used in this layer, and it is the most significant
part of the entire TCP/IP suite.

Following are the responsibilities of this protocol:

 IP Addressing: This protocol puts IP addresses—also referred to as


logical host addresses—into practice. The internet and higher levels
employ IP addresses to identify devices and enable routing for
internetwork.

 Host-to-host communication: It chooses the channel that will be used


to send the data.

 Data Encapsulation and Formatting: The transport layer protocol


transfers data to an IP protocol for acceptance. Data is encapsulated
into messages known as IP datagrams by the IP protocol before
transmission over the network.

 Fragmentation and Reassembly: Maximum Transmission Unit


(MTU) is the maximum size IP datagram that the data link
layer protocol allows. The IP protocol divides an IP datagram into
smaller parts so that it can traverse a local network if the size of the
datagram exceeds the maximum transmission unit (MTU). Either the
sender or the intermediary router might fragment data. All of the
pieces are put back together at the recipient end to create the original
message (a worked example of this arithmetic follows this list).

 Routing: Direct delivery is the term for sending an IP datagram across


a local network, such as a LAN, MAN, or WAN. The IP datagram is
transferred indirectly when the source and destination are on different
networks. This may be achieved byrouting the IP datagram through
various devices such as routers.
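
A minimal sketch of the fragmentation arithmetic described in the list above, assuming a 20-byte IPv4 header and an example 4,000-byte datagram sent over a link whose MTU is 1,500 bytes (fragment payloads must be multiples of 8 bytes, and offsets are recorded in 8-byte units):

Python code (illustrative sketch):

# Fragmentation arithmetic sketch (example values only, not a packet builder).
MTU = 1500         # maximum transmission unit of the outgoing link, in bytes
IP_HEADER = 20     # minimal IPv4 header size, in bytes
total_payload = 4000 - IP_HEADER   # payload of the original 4000-byte datagram

# Each fragment's payload must fit in MTU - header and be a multiple of 8.
max_fragment_payload = (MTU - IP_HEADER) // 8 * 8   # 1480 bytes

offset = 0
while offset < total_payload:
    payload = min(max_fragment_payload, total_payload - offset)
    more_fragments = offset + payload < total_payload
    print(f"fragment: offset={offset // 8} (8-byte units), "
          f"payload={payload} bytes, MF={int(more_fragments)}")
    offset += payload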

ARP Protocol

 ARP stands for Address Resolution Protocol.

 ARP is a network layer protocol which is used to find the physical


address from the IP address.

 The two terms are mainly associated with the ARP Protocol:

 ARP request: ARP requests are broadcast to the network by


senders who wish to know the physical address of a device.

 ARP reply: All network-connected devices will acknowledge and


execute an ARP request; however, only the receiver will be able to
identify the IP address and respond with the physical address of the
device. The recipient appends the physical address to the datagram
header and cache memory.

ICMP Protocol
The Internet Control Message Protocol is referred to as ICMP.

 The hosts or routers utilize this technique to notify the sender of any
datagram issues.

 A datagram moves from one router to the next until it arrives at its
final location. The ICMP protocol is used to alert the sender when a
router is unable to transport data due to unexpected circumstances,
such as disabled connections, a device that is on fire, or network
congestion.

An ICMP protocol mainly uses two terms:

 ICMP Test: ICMP Test is used to test whether the destination is


reachable or not.

 ICMP Reply: ICMP Reply is used to check whether the destination


device is responding or not.

 Reporting issues, not fixing them, is the main duty of the ICMP
protocol. The sender has the obligation for making the adjustment.

 Because the IP datagram only contains the addresses of the source and
destination—not the router to whom it is passed—ICMP can only send
messages to the source and cannot transmit them to the intermediate
routers.

Transport Layer
The transport layer is responsible for the reliability, flow control, and
correction of data which is being sent over the network.
The two protocols used in the transport layer are User Datagram
protocol and Transmission control protocol.

o User Datagram Protocol (UDP)

 It offers end-to-end transmission delivery and


connectionless service.
 The protocol is unreliable since it identifies faults but does not
explain them.
 The problem is found by User Datagram Protocol, and the ICMP
protocol notifies the sender that a user datagram has been
corrupted.

o The following fields make up UDP:

o Source port address: The address of the application program that
generated the message.
o Destination port address: The address of the application program
that receives the message.
o Total length: It specifies the total number of bytes in the user
datagram.
o Checksum: A 16-bit field used for error detection.
o UDP does not specify which packet is lost. UDP contains only
checksum; it does not contain any ID of a data segment.

o Transmission Control Protocol (TCP)

 It offers apps complete transport layer services.


 It establishes a virtual circuit that is active during the transmission
between the sender and the recipient.
 TCP is a dependable protocol since it recognizes errors and sends the
broken frames again. Consequently, it guarantees that before the
transmission is deemed complete and a virtual circuit is deleted, each
segment must be received and acknowledged.
 TCP splits the message into smaller units called segments at the
sending end. Each segment has a sequence number that is needed to
rearrange the frames to produce the original message.
 TCP gathers all of the segments at the receiving end and rearranges
them according to sequence numbers.
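
The contrast between the two transport protocols shows up directly in the standard socket API. The sketch below only demonstrates how the two kinds of sockets are created and used; the loopback address and port numbers are arbitrary example values, and the TCP part assumes some service is already listening on 127.0.0.1:8080.

Python code (illustrative sketch):

import socket

# UDP: connectionless -- datagrams are sent without any handshake and
# without delivery or ordering guarantees.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", ("127.0.0.1", 9999))
udp.close()

# TCP: connection-oriented -- a connection is established first, and the
# protocol itself handles acknowledgements, ordering and retransmission.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 8080))   # three-way handshake; needs a listener here
tcp.sendall(b"hello over TCP")
tcp.close()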

Application Layer

 In the TCP/IP paradigm, an application layer is the highest layer.

 It is in charge of managing representational concerns and high-level


procedures.

 The user can communicate with the program through this layer.

 An application layer protocol passes its data to the transport layer in
order to connect with another application layer.
There is some ambiguity about the application layer: only programs
that interact with the communication system run inside it. For instance, even
though a web browser uses the HTTP protocol—which is an
application layer protocol—to communicate with the network, a text
editor cannot be regarded as an application layer application.

Following are the main protocols used in the application layer:


o HTTP: HTTP stands for Hypertext transfer protocol. This protocol
allows us to access the data over the world wide web. It transfers the
data in the form of plain text, audio, video. It is known as a Hypertext
transfer protocol as it has the efficiency to use in a hypertext
environment where there are rapid jumps from one document to
another.
o SNMP: The Simple Network Management Protocol is known by this
acronym. It is a framework for leveraging the TCP/IP protocol stack to
manage devices connected to the internet.
o SMTP: Simple Mail Transfer Protocol is what SMTP stands for. The
Simple Mail Transfer Protocol (SMTP) is the TCP/IP protocol that
enables email. The data can be sent to a different email address using
this protocol.
o DNS: Domain Name System is what DNS stands for. A host's unique
connection to the internet is identified by its IP address. However,
many would rather use names than addresses. Thus, the Domain Name
System refers to the mechanism that associates a name with an
address.
o TELNET: TELNET stands for terminal network. By
connecting the local and distant computers, it creates the illusion that
the local terminal is a terminal at the remote system.
o FTP: File Transfer Protocol is what FTP stands for. The common
internet protocol known as FTP is used to transfer files from one
computer to another.
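
As a small illustration of the DNS function described above, the Python standard library can resolve a host name to its addresses; example.com is used here purely as a sample domain, and the calls need working internet access.

Python code (illustrative sketch):

import socket

# Resolve a host name to an IPv4 address (a simple DNS lookup).
print(socket.gethostbyname("example.com"))

# getaddrinfo returns both IPv4 (A) and IPv6 (AAAA) results where available.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 80):
    print(family.name, sockaddr[0])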

1.6 SUMMARY
A computer network connects multiple PCs and hardware, enabling
communication and resource sharing. Each device in the network,
called a node (e.g., servers, PCs, routers), follows protocols to
exchange data. Networks include various topologies and can be wired
or wireless. The main network types are the Local Area Network (LAN), Metropolitan Area
Network (MAN), Wide Area Network (WAN), and Wireless Ad-Hoc
Network (WANET).

Networking subsystems, a combination of hardware and software,
were initially complex and unstructured. To manage this complexity,
the ISO developed a layered approach, breaking networking tasks into
distinct layers, each with a specific role, providing services to higher
layers without revealing implementation details.

Layered Architecture Benefits


Modularity: Divides the design into smaller, manageable
components.
Subsystem Interaction: Provides clear interfaces for interaction,
maintaining independence between layers.
Ease of Modification: Allows changes in one layer without affecting
others.
Testing: Facilitates individual layer testing and analysis.
The OSI (Open System Interconnection) model, created in 1984 by the
ISO, consists of seven layers, each performing specific network
functions, ensuring data transfer from a computer program on one
machine to another.
The TCP/IP model, developed after the OSI model, comprises five
layers, aligning with OSI’s layers but combining some into broader
categories. It's a hierarchical protocol suite supporting internet and
network communication.
Both the OSI and TCP/IP models provide a structured approach to
networking, breaking down complex tasks into layers, each with
specific functions and protocols. This modularity enhances system
design, testing, and maintenance, ensuring efficient and reliable
communication across networks.

1.7 QUESTIONS
1. Explain the working of a Computer Network.
2. Write a short note on MAN in detail.
3. Write a short note on Wireless AD-Hoc network (WANET).
4. Explain TCP/IP Model.
5. Write a short note on OSI Model.




2
CONCEPTS AND IMPLEMENTATION OF
IPV4 AND IPV6
Unit Structure:
2.0 Objectives
2.1 Introduction
2.2 IPV4: Internet Protocol Version 4
2.3 IPV6: Internet Protocol Version 6
2.4 Comparison of IPV4 and IPV6
2.5 Subnetting Techniques in IPV4 and IPV6
2.6 Transition Mechanisms From IPV4 to IPV6
2.7 Implementation Examples
2.8 Testing and Verification Tools
2.9 Summary
2.10 Glossary
2.11 Further Readings
2.12 Model Questions

2.0 OBJECTIVES
1. Understand the fundamental principles of IP addressing.
2. Differentiate between IPv4 and IPv6 features.
3. Learn subnetting techniques and their applications.
4. Explore transition mechanisms for migration to IPv6.
5. Implement IPv4 and IPv6 addressing schemes practically.

2.1 INTRODUCTION
A brief overview of IP addressing, the need for IPv4, and its limitations,
leading to the development of IPv6.

Study Guidance:
Suggestions to focus on practical examples and use tools like Wireshark,
Cisco Packet Tracer, or GNS3 for better understanding.

What is IP Addressing?
IP addressing is a fundamental concept in computer networking that
allows devices to identify and communicate with each other over a
network. It works at the network layer (Layer 3) of the OSI model.
2.2 IPV4: INTERNET PROTOCOL VERSION 4
IPv4 is the fourth version of the Internet Protocol and the first widely used
version. It forms the foundation of modern networking.

Key Features of IPv4:


1. Address Format:
o IPv4 addresses are 32-bit numbers divided into four octets,
separated by dots (e.g., 192.168.1.1).
o Each octet is represented in decimal and ranges from 0 to 255.
2. Address Space:
o Total addressable space: 2^32 ≈ 4.3 billion unique
addresses.
o Due to the rapid growth of internet-connected devices, IPv4 address
exhaustion became a problem.
3. Classes of IPv4: IPv4 addresses are divided into five classes (A, B, C,
D, E):
o Class A: 1.0.0.0 to 126.255.255.255 (Large networks).
o Class B: 128.0.0.0 to 191.255.255.255 (Medium-sized networks).
o Class C: 192.0.0.0 to 223.255.255.255 (Small networks).
o Class D: 224.0.0.0 to 239.255.255.255 (Multicasting).
o Class E: 240.0.0.0 to 255.255.255.255 (Reserved).
4. Address Types:
o Unicast: One-to-one communication.
o Broadcast: One-to-all communication (e.g., 255.255.255.255).
o Multicast: One-to-many communication.
5. Subnetting:
o Subnets divide a large network into smaller, more manageable
segments.
o Subnet masks are used to define the network and host portions (e.g.,
/24 corresponds to a subnet mask of 255.255.255.0).
6. Protocol: IPv4 supports transport layer protocols like TCP
(Transmission Control Protocol) and UDP (User Datagram Protocol).
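
The 32-bit structure and the network/host split described above can be explored with Python's standard ipaddress module; the address and mask below are the example values used in the text. This is a small exploratory sketch, not part of the prescribed practicals.

Python code (illustrative sketch):

import ipaddress

addr = ipaddress.IPv4Address("192.168.1.1")
print(int(addr))              # the same address as a single 32-bit integer
print(addr.packed.hex())      # its four octets in hexadecimal

net = ipaddress.IPv4Network("192.168.1.0/24")
print(net.netmask)            # 255.255.255.0 -- the /24 subnet mask
print(net.num_addresses - 2)  # 254 usable host addresses
print(addr in net)            # True: 192.168.1.1 belongs to this network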

2.3 IPV6: INTERNET PROTOCOL VERSION 6
IPv6 was introduced to overcome the limitations of IPv4, including
address exhaustion.

Key Features of IPv6:


1. Address Format:
o IPv6 addresses are 128-bit numbers represented in hexadecimal,
separated by colons (e.g.,
2001:0db8:85a3:0000:0000:8a2e:0370:7334).
o Leading zeroes can be omitted, and consecutive zeroes can be
compressed using :: (e.g., 2001:db8::8a2e:370:7334).
2. Address Space:
o Total addressable space: 2^128 ≈ 340 undecillion
addresses, which is practically unlimited.
3. Address Types:
o Unicast: One-to-one communication.
o Multicast: One-to-many communication.
o Anycast: One-to-nearest communication.
4. No Broadcasts:
o IPv6 does not support broadcasting. Instead, it uses multicasting
for similar purposes.
5. Hierarchy:
o IPv6 simplifies address allocation and routing using hierarchical
structures, reducing the size of routing tables.
6. Autoconfiguration:
o IPv6 supports stateful (using DHCPv6) and stateless (using
SLAAC - Stateless Address Autoconfiguration) address
configuration.
7. Integrated Security:
o IPv6 has built-in support for IPsec (Internet Protocol Security) for
encryption and authentication.
8. Improved QoS:
o IPv6 includes a Flow Label field to improve Quality of Service
(QoS) for real-time applications like voice and video.
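
The compression rules above (dropping leading zeroes and collapsing consecutive zero groups with ::) can be checked with the same ipaddress module, using the example address from the text; this is only an illustrative sketch.

Python code (illustrative sketch):

import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(addr.compressed)    # 2001:db8:85a3::8a2e:370:7334  (zeroes collapsed)
print(addr.exploded)      # full 8-group form with leading zeroes restored
print(addr.packed.hex())  # the raw 128-bit value as 32 hex digits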

2.4 COMPARISON OF IPV4 AND IPV6
Feature          IPv4                          IPv6
Address Size     32 bits (4 octets)            128 bits
Address Format   Decimal (e.g., 192.168.1.1)   Hexadecimal (e.g., 2001:db8::1)
Address Space    ~4.3 billion addresses        Virtually unlimited
Configuration    Manual/DHCP                   SLAAC/DHCPv6
Broadcast        Supported                     Not supported (uses multicast)
Routing Tables   Larger                        Smaller
Security         Optional (add-on IPsec)       Built-in IPsec
Header Size      20 bytes                      40 bytes
Fragmentation    Routers and hosts             Hosts only

Challenges and Migration from IPv4 to IPv6


1. Coexistence:
o IPv4 and IPv6 operate in parallel due to the vast existing IPv4
infrastructure.
o Techniques like dual-stack, tunneling, and translation (e.g.,
NAT64) help in the transition.
2. Adoption:
o While IPv6 adoption is increasing, IPv4 still dominates due to
legacy systems and slow migration.
3. Costs:
o Upgrading hardware, software, and expertise for IPv6
compatibility involves significant costs.

Practical Use Cases


 IPv4: Still widely used in legacy systems, small networks, and for
compatibility purposes.
 IPv6: Growing adoption in IoT, cloud services, and modern networks
requiring scalability and security.
2.5 SUBNETTING TECHNIQUES IN IPV4 AND IPV6
Subnetting in IPv4
What is Subnetting?
Subnetting is the process of dividing a larger network into smaller, more
manageable sub-networks (subnets). This improves network efficiency
and reduces congestion by limiting the scope of broadcasts.

Key Concepts of Subnetting:


1. IP Address and Subnet Mask:
o An IP address is divided into two parts:
 Network portion: Identifies the network.
 Host portion: Identifies devices (hosts) within the network.
o A subnet mask is used to determine the boundary between the
network and host portions. For example:
 IP Address: 192.168.1.10
 Subnet Mask: 255.255.255.0 (or /24)
2. CIDR Notation:
o Classless Inter-Domain Routing (CIDR) represents the subnet
mask as a suffix, such as /24 for 255.255.255.0.
o Example:
 /24: 24 bits for the network, leaving 8 bits for hosts.
 Number of hosts = 2^8 − 2 = 254
(subtracting 2 for the network and broadcast addresses).
3. Subnetting Example: Suppose you have a network 192.168.1.0/24
and want to divide it into four subnets:
o Each subnet will have a subnet mask of /26 (i.e., 64 IPs per
subnet).
o Subnet ranges:
 Subnet 1: 192.168.1.0 - 192.168.1.63
 Subnet 2: 192.168.1.64 - 192.168.1.127
 Subnet 3: 192.168.1.128 - 192.168.1.191
 Subnet 4: 192.168.1.192 - 192.168.1.255
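
The four /26 subnets listed above can be reproduced programmatically; this is a small verification sketch using Python's ipaddress module, not part of the prescribed practicals.

Python code (illustrative sketch):

import ipaddress

net = ipaddress.IPv4Network("192.168.1.0/24")
for subnet in net.subnets(new_prefix=26):   # splits the /24 into four /26s
    hosts = list(subnet.hosts())            # usable host addresses only
    print(subnet, "->", hosts[0], "-", hosts[-1],
          f"(broadcast {subnet.broadcast_address})")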

IPv6 Addressing and Subnetting
IPv6 simplifies subnetting by using a fixed-length subnet prefix.

Structure of an IPv6 Address:


 Global Routing Prefix: The first 48 bits (assigned by ISPs).
 Subnet ID: 16 bits used by organizations to define subnets.
 Interface ID: The last 64 bits, typically derived from the device's
MAC address or generated randomly.

Subnetting in IPv6:
 IPv6 does not use classes like IPv4.
 The standard subnet prefix is /64, meaning the first 64 bits represent
the network and the remaining 64 bits represent the host.
Example:
 Address: 2001:0db8:abcd:0012::/64
o Network Portion: 2001:0db8:abcd:0012
o Host Portion: ::

Why /64?
 IPv6 reserves a large space for hosts within a subnet to support
advanced features like SLAAC (Stateless Address Autoconfiguration).
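
A short sketch of the 64/64 split described above, using the example prefix from the text; the interface ID shown is an arbitrary illustrative value.

Python code (illustrative sketch):

import ipaddress

net = ipaddress.IPv6Network("2001:0db8:abcd:0012::/64")
print(net.network_address)    # network portion: 2001:db8:abcd:12::

# An address inside this subnet: the upper 64 bits identify the network,
# the lower 64 bits are the interface ID (here an arbitrary example value).
addr = ipaddress.IPv6Address("2001:db8:abcd:12::1a2b")
interface_id = int(addr) & ((1 << 64) - 1)   # keep only the low 64 bits
print(hex(interface_id))                     # 0x1a2b
print(addr in net)                           # True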

2.6 TRANSITION MECHANISMS FROM IPV4 TO IPV6


IPv4 to IPv6 Transition Mechanisms
Transitioning from IPv4 to IPv6 is challenging due to the differences in
addressing schemes and the widespread use of IPv4. The following
techniques facilitate coexistence and migration:

1. Dual-Stack
 Devices run both IPv4 and IPv6 simultaneously.
 Both protocols operate independently, allowing communication over
either.
 Pros:
o No need for translation between IPv4 and IPv6.
o Backward compatibility with IPv4 systems.

 Cons:
o Increased resource usage on devices and networks.
o Complexity in network management.
2. Tunneling
 Encapsulates IPv6 packets within IPv4 packets, allowing IPv6 traffic
to travel over IPv4 networks.
 Common tunneling methods:
o 6to4: Automatically assigns an IPv6 address to an IPv4 network.
o Teredo: Allows IPv6 connectivity for devices behind NAT.
o IPsec Tunnel Mode: Provides secure tunneling between
networks.
3. Translation (NAT64)
 Converts IPv6 addresses to IPv4 and vice versa, enabling
communication between IPv4 and IPv6 devices.
 NAT64: Translates IPv6 packets to IPv4 using a special IPv6 prefix
(64:ff9b::/96).
 DNS64: Resolves DNS queries to support NAT64.
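
As an illustration of the NAT64 well-known prefix mentioned above, an IPv4 address can be embedded in the low-order 32 bits of 64:ff9b::/96 to synthesize the corresponding IPv6 address; 192.0.2.33 is just a documentation example address.

Python code (illustrative sketch):

import ipaddress

NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")   # well-known NAT64 prefix
ipv4 = ipaddress.IPv4Address("192.0.2.33")             # example IPv4 host

# Place the 32-bit IPv4 address in the low-order bits of the /96 prefix.
synthesized = ipaddress.IPv6Address(
    int(NAT64_PREFIX.network_address) | int(ipv4)
)
print(synthesized)   # 64:ff9b::c000:221, i.e. 192.0.2.33 embedded in the prefix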
Detailed Subnetting Exercise in IPv4
Example:
You have a network 192.168.1.0/24 and need 6 subnets.
1. Determine the Number of Subnets:
o Subnetting increases the number of networks by borrowing bits
from the host portion.
o 2^n ≥ Number of subnets, where n is the number of bits
borrowed.
o 2^3 = 8 ≥ 6, so borrow 3 bits.
2. New Subnet Mask:
o Original mask: /24 → 255.255.255.0
o Borrow 3 bits → New mask: /27 → 255.255.255.224
3. Calculate Hosts per Subnet:
o Remaining host bits: 32 − 27 = 5
o Hosts per subnet: 2^5 − 2 = 30 (subtracting 2 for
network and broadcast).

4. Subnet Ranges:
o Subnet 1: 192.168.1.0 - 192.168.1.31

o Subnet 2: 192.168.1.32 - 192.168.1.63


o Subnet 3: 192.168.1.64 - 192.168.1.95
o Subnet 4: 192.168.1.96 - 192.168.1.127
o Subnet 5: 192.168.1.128 - 192.168.1.159
o Subnet 6: 192.168.1.160 - 192.168.1.191
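
The bit-borrowing arithmetic in steps 1-3 can also be written out directly; 6 is the number of required subnets taken from this exercise.

Python code (illustrative sketch):

import math

required_subnets = 6
borrowed_bits = math.ceil(math.log2(required_subnets))   # 3 bits -> 8 subnets
new_prefix = 24 + borrowed_bits                           # /27
host_bits = 32 - new_prefix                               # 5 bits
usable_hosts = 2 ** host_bits - 2                         # 30 hosts per subnet
print(borrowed_bits, new_prefix, usable_hosts)            # 3 27 30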

IPv6 Address Assignment Example


Given a global prefix of 2001:db8::/48, divide the address into 4 subnets.
1. Subnetting:
o Original Prefix: /48 → Borrow 2 bits for 4 subnets.
o New Prefix: /50
2. Subnet Ranges:
o Subnet 1: 2001:db8:0:0::/50
o Subnet 2: 2001:db8:0:4000::/50
o Subnet 3: 2001:db8:0:8000::/50
o Subnet 4: 2001:db8:0:c000::/50
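
The same kind of sketch verifies the four /50 ranges listed above.

Python code (illustrative sketch):

import ipaddress

for subnet in ipaddress.IPv6Network("2001:db8::/48").subnets(new_prefix=50):
    print(subnet)   # 2001:db8::/50, 2001:db8:0:4000::/50, ...:8000::/50, ...:c000::/50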

Advanced Subnetting Tips


 Use subnet calculators for complex scenarios.
 Ensure you account for overheads like router and gateway IPs.
 In IPv6, focus on planning hierarchical structures to simplify routing.

2.7 IMPLEMENTATION EXAMPLES


Implementation of IPv4 Subnetting
Scenario:
You are tasked with dividing the network 192.168.1.0/24 into 4 subnets.

Steps to Implement:
1. Subnet Design:
o Calculate the new subnet mask:
 Original prefix: /24 (255.255.255.0).
 Need 4 subnets → Borrow 2 bits → New prefix: /26
(255.255.255.192).
o Hosts per subnet:
 Host bits = 32 − 26 = 6, so 2^6 − 2 = 62 usable hosts.
2. Define Subnets:
o Subnet 1: 192.168.1.0/26 → Range: 192.168.1.1 - 192.168.1.62
(Broadcast: 192.168.1.63).
o Subnet 2: 192.168.1.64/26 → Range: 192.168.1.65 - 192.168.1.126
(Broadcast: 192.168.1.127).
o Subnet 3: 192.168.1.128/26 → Range: 192.168.1.129 - 192.168.1.190
(Broadcast: 192.168.1.191).
o Subnet 4: 192.168.1.192/26 → Range: 192.168.1.193 - 192.168.1.254
(Broadcast: 192.168.1.255).
3. Configuration on a Router (Cisco Example):
Bash code:
Router> enable
Router# configure terminal
Router(config)# interface FastEthernet0/0
Router(config-if)#ip address 192.168.1.1 255.255.255.192
Router(config-if)# no shutdown
Router(config-if)# exit
Router(config)# interface FastEthernet0/1
Router(config-if)#ip address 192.168.1.65 255.255.255.192
Router(config-if)# no shutdown
Router(config-if)# exit
Router(config)# interface FastEthernet0/2
Router(config-if)# ip address 192.168.1.129 255.255.255.192
Router(config-if)# no shutdown
4. Client Configuration: Assign IP addresses to clients within the range
of each subnet.

Example for a Windows machine:
o Go to Control Panel → Network and Sharing Center → Change Adapter Settings.
o Right-click the network adapter → Properties.
o Select IPv4 → Properties → Assign:
 IP Address: 192.168.1.2
 Subnet Mask: 255.255.255.192
 Gateway: 192.168.1.1
5. Verification:
o Use the ping command to verify connectivity between devices.
Bash code:
ping 192.168.1.2

Implementation of IPv6 Subnetting


Scenario:
You are given the global IPv6 prefix 2001:db8::/48 and must divide it into
4 subnets.
Steps to Implement:
1. Subnet Design:
o Original prefix: /48.
o Need 4 subnets → Borrow 2 bits → New prefix: /50.
2. Define Subnets:
o Subnet 1: 2001:db8:0:0::/50
o Subnet 2: 2001:db8:0:4000::/50
o Subnet 3: 2001:db8:0:8000::/50
o Subnet 4: 2001:db8:0:c000::/50
3. Configuration on a Router (Cisco Example):
Bash code:
Router> enable
Router# configure terminal
Router(config)# interface GigabitEthernet0/0

Router(config-if)# ipv6 address 2001:db8:0:0::1/50
Router(config-if)# no shutdown
Router(config-if)# exit
Router(config)# interface GigabitEthernet0/1
Router(config-if)# ipv6 address 2001:db8:0:4000::1/50
Router(config-if)# no shutdown
4. Client Configuration: For a Linux client, edit the network
configuration:
o File: /etc/network/interfaces or /etc/netplan/*.yaml.

YAML code (Netplan example):
network:
  version: 2
  ethernets:
    enp0s3:
      addresses: [2001:db8:0:0::2/50]
      gateway6: 2001:db8:0:0::1
      nameservers:
        addresses: [2001:4860:4860::8888, 2001:4860:4860::8844]
5. Verification: Use ping6 or traceroute6 to test IPv6 connectivity.
Bash code:
ping6 2001:db8:0:4000::1

Transition Mechanism: Dual-Stack Implementation


Scenario:
Configure a dual-stack environment where a server supports both IPv4 and
IPv6.
1. Server Configuration (Linux Example):
o Edit the network configuration file:
Bash code:
sudo nano /etc/network/interfaces
Add:
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

iface eth0 inet6 static
    address 2001:db8:0:0::10
    netmask 64
    gateway 2001:db8:0:0::1
o Restart the networking service:
Bash code:
sudo systemctl restart networking
2. Router Configuration:
o Enable dual-stack on the router.
Bash code:
Router> enable
Router# configure terminal
Router(config)# interface FastEthernet0/0
Router(config-if)# ip address 192.168.1.1 255.255.255.0
Router(config-if)# ipv6 address 2001:db8:0:0::1/64
Router(config-if)# no shutdown
3. Verification:
o Test IPv4 and IPv6 connectivity from a client:
Bash code:
ping 192.168.1.10
ping6 2001:db8:0:0::10
2.8 TESTING AND VERIFICATION TOOLS
1. IPv4 and IPv6 Calculators:
o Use online tools (e.g., Subnet Calculator) to design subnets.
2. Ping Tools:
o ping (IPv4) and ping6 (IPv6) for connectivity testing.
3. Traceroute Tools:
o traceroute (IPv4) and traceroute6 (IPv6) for path analysis.
4. Network Monitoring:
o Tools like Wireshark to inspect IPv4 and IPv6 traffic.
5. Router Simulators:
o Use Cisco Packet Tracer or GNS3 for testing configurations in a
virtual environment.
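The ping-based checks listed above can also be scripted. The sketch below is a minimal example, assuming a Linux host where the ping and ping6 commands are available (on some distributions "ping -6" replaces ping6); the target addresses are the dual-stack server addresses used earlier in this unit, and should be adjusted to your own addressing plan.
Python code:
# Minimal automation of the IPv4/IPv6 connectivity checks from this unit.
import subprocess

TARGETS_V4 = ["192.168.1.10"]            # dual-stack server from Section 2.7
TARGETS_V6 = ["2001:db8:0:0::10"]

def reachable(command, host):
    """Send three echo requests and report True if the command succeeds."""
    result = subprocess.run([command, "-c", "3", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

for host in TARGETS_V4:
    print(host, "reachable" if reachable("ping", host) else "unreachable")
for host in TARGETS_V6:
    print(host, "reachable" if reachable("ping6", host) else "unreachable")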

2.9 SUMMARY
This unit explored the core concepts of IP addressing, focusing on IPv4
and IPv6. It highlighted the features, limitations, and use cases of both
protocols. We delved into practical subnetting techniques for both IPv4
and IPv6, emphasizing their role in network segmentation and efficiency.
Additionally, the unit covered transition mechanisms essential for
migrating from IPv4 to IPv6 and provided real-world implementation
examples to bridge theoretical knowledge with practical application. This
comprehensive overview equips learners with the foundational and
advanced knowledge required to manage modern IP-based networks
effectively.
Possible Answers
Subnetting Exercise Solutions:
1. IPv4 Example:
o Network: 192.168.1.0/24
o Subnet 1: 192.168.1.0 - 192.168.1.63
o Subnet 2: 192.168.1.64 - 192.168.1.127
2. IPv6 Example:
o Global Prefix: 2001:db8::/48
o Subnet 1: 2001:db8:0:0::/50
o Subnet 2: 2001:db8:0:4000::/50
Router Configuration Steps:
 Assign IPv4 and IPv6 addresses to router interfaces.
 Verify connectivity using ping and ping6 commands.
List of References/Bibliography
1. RFC 791 - Internet Protocol (IPv4 Specification).
2. RFC 8200 - Internet Protocol Version 6 (IPv6) Specification.
3. Tanenbaum, A. S., "Computer Networks."
4. Online resources:
o IETF IPv6 Standards
o Cisco Networking Tutorials

2.10 GLOSSARY
 IP Address: A unique identifier for devices on a network.
 SLAAC: Stateless Address Autoconfiguration for IPv6.
 CIDR: Classless Inter-Domain Routing for efficient IP address
allocation.
 NAT64: A translation mechanism to enable IPv6 devices to
communicate with IPv4 devices.
 IPsec: Internet Protocol Security for encryption and authentication.

2.11 FURTHER READINGS


1. "Understanding IPv6" by Joseph Davies.
2. "IPv6 Essentials" by Silvia Hagen.
3. Advanced network security topics with IPsec in IPv6.
4. IPv6 deployment strategies for enterprises.

2.12 MODEL QUESTIONS


1. What are the primary differences between IPv4 and IPv6?
2. Explain the process of subnetting in IPv4 and provide an example.
3. How does the IPv6 address space solve the issue of address
exhaustion?
4. What are the key transition mechanisms for migrating from IPv4 to
IPv6?
5. Configure a dual-stack network and verify its connectivity.



3
ROUTING
Chapter Structure :
3.0 Objective
3.1 Routing
3.2 Introduction to Transport layer and Application layer protocols
3.3 Summary
3.4 Questions

3.0 OBJECTIVE

1. To understand what routing is.


2. To understand Transport Layer and Application Layer Protocols.

3.1 ROUTING

 The process of choosing a path for data transfer from a source to a destination is known as routing. A dedicated device called a router is responsible for routing.
 A router operates at the TCP/IP model's internet layer and the OSI
model's network layer.
 A router is a networking device that forwards a packet according to the
forwarding table and packet header contents.
 The packets are routed using routing algorithms. A routing algorithm is simply a piece of software that determines the best route along which a packet should be transferred.
 The metric is used by the routing protocols to identify the optimal path
for packet delivery. The routing algorithm uses the metric—a standard
of measurement—to identify the best route to the destination.
Examples of metrics include hop count, bandwidth, latency, and
current load on the path.
 The routing algorithm sets up and keeps track of the routing table used
in the path decision process.

Routing Metrics and Costs


The optimal route to the destination is determined using routing metrics and costs. A metric is the name given to the parameters that the protocols use to find the shortest path.

The network characteristics called metrics are used to identify the optimal path to the destination. Some routing protocols employ static metrics, whose value cannot be altered, while others use dynamic metrics, whose value can be assigned by the system administrator.

The most common metric values are given below:

 Hop count: A measure called hop count indicates how many times a
packet must traverse through an internet working device, such a router,
in order to go from its source to its destination. The path with the
fewest hops will be chosen as the optimal route to go from the source
to the destination if the routing protocol uses hops as its primary
statistic.

 Delay: This is the amount of time the router needs to receive, process,
and send a datagram to an interface. This measure is used by the
protocols to calculate the end-to-end delay values for each connection
along the path. The optimal path will be determined by calculating the
delay value of each path.

 Bandwidth: The bandwidth of a connection refers to its capacity. The


units of measurement for bandwidth are bits per second. A gigabit
connection, which has a faster transfer rate, is preferable over a 56 kb
link, which has a smaller capacity. Every connection in the way will
have its bandwidth capacity determined by the protocol, and the path
with the highest total bandwidth will be deemed optimal.

 Load: The term "load" describes how busy a network resource, like a
router or network link, is. Numerous metrics, including CPU usage
and packets processed per second, can be used to compute a load. The
load value will rise in tandem with an increase in traffic.

 Reliability: This metric may have a fixed value, or its value may be determined dynamically depending on the state of the network links. Network outages are more common on some links than others, and certain links are easier to restore after a failure than others. Reliability ratings are usually assigned by the system administrator as numeric values, although any reliability factor can be taken into account.
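To make the idea of a metric concrete, the following illustrative Python sketch compares two hypothetical paths. The per-link delay and bandwidth values are invented for the example, and the "best" path differs depending on which metric the protocol uses.
Python code:
# Illustrative comparison of two candidate paths under different metrics.
# The per-link delay (ms) and bandwidth (Mbps) values are invented, not
# taken from any real network.
paths = {
    "Path A": [{"delay": 5, "bandwidth": 100},
               {"delay": 5, "bandwidth": 100}],                 # 2 hops
    "Path B": [{"delay": 2, "bandwidth": 1000},
               {"delay": 2, "bandwidth": 1000},
               {"delay": 2, "bandwidth": 1000}],                # 3 hops
}

for name, links in paths.items():
    hop_count = len(links)
    total_delay = sum(link["delay"] for link in links)
    bottleneck_bw = min(link["bandwidth"] for link in links)
    print(f"{name}: hops={hop_count}, delay={total_delay} ms, "
          f"bandwidth={bottleneck_bw} Mbps")

# Hop count prefers Path A, while delay (and bandwidth) prefer Path B.
best_by_hops = min(paths, key=lambda p: len(paths[p]))
best_by_delay = min(paths, key=lambda p: sum(l["delay"] for l in paths[p]))
print("Best path by hop count:", best_by_hops)
print("Best path by delay:", best_by_delay)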

Types of Routing
Routing can be classified into three categories:
o Static Routing
o Default Routing
o Dynamic Routing

Static Routing
o Static Routing is also known as Nonadaptive Routing.
o It is a technique in which the administrator manually adds the routes in
a routing table.
o A Router can send the packets for the destination along the route
defined by the administrator.
o In this technique, routing decisions are not made based on the condition or topology of the network.

Advantages Of Static Routing


Following are the advantages of Static Routing:
o No Overhead: It has no overhead on the CPU usage of the router. Therefore, a cheaper router can be used for static routing.
o Bandwidth: There is no bandwidth usage between the routers for exchanging routing updates.
o Security: It provides security as the system administrator is allowed
only to have control over the routing to a particular network.

Disadvantages of Static Routing:


Following are the disadvantages of Static Routing:
o For a large network, it becomes a very difficult task to add each route
manually to the routing table.
o The system administrator should have a good knowledge of a topology
as he has to add each route manually.

Default Routing
o Default Routing is a technique in which a router is configured to send
all the packets to the same hop device, and it doesn't matter whether it
belongs to a particular network or not. A Packet is transmitted to the
device for which it is configured in default routing.
o Default Routing is used when networks deal with a single exit point.
o It is also useful when the bulk of transmission networks have to transmit the data to the same hop device.

o When a specific route is mentioned in the routing table, the router will
choose the specific route rather than the default route. The default
route is chosen only when a specific route is not mentioned in the
routing table.
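The behaviour just described, where a specific route wins over the default route, can be illustrated with a small longest-prefix-match lookup. The routing table entries and next-hop addresses in the sketch below are purely illustrative, not taken from any particular router configuration.
Python code:
# A minimal sketch of looking up a static routing table that contains a
# default route. Prefixes and next hops are illustrative placeholders.
import ipaddress

static_routes = [
    (ipaddress.ip_network("192.168.1.0/24"), "10.0.0.1"),    # specific route
    (ipaddress.ip_network("192.168.2.0/24"), "10.0.0.2"),    # specific route
    (ipaddress.ip_network("0.0.0.0/0"),      "10.0.0.254"),  # default route
]

def next_hop(destination):
    """Return the next hop of the longest matching prefix."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in static_routes if dest in net]
    best = max(matches, key=lambda entry: entry[0].prefixlen)
    return best[1]

print(next_hop("192.168.2.25"))   # 10.0.0.2  (the specific route wins)
print(next_hop("8.8.8.8"))        # 10.0.0.254 (falls back to the default route)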
Dynamic Routing
o It is also known as Adaptive Routing.
o It is a technique in which a router adds a new route in the routing table
for each packet in response to the changes in the condition or topology
of the network.
o Dynamic protocols are used to discover the new routes to reach the
destination.
o In Dynamic Routing, RIP and OSPF are the protocols used to discover
the new routes.
o If any route goes down, then the automatic adjustment will be made to
reach the destination.
The Dynamic protocol should have the following features:
o All the routers must have the same dynamic routing protocol in order
to exchange the routes.
o If the router discovers any change in the condition or topology, then the router broadcasts this information to all other routers.
Advantages of Dynamic Routing:
o It is easier to configure.
o It is more effective in selecting the best route in response to the
changes in the condition or topology.

Disadvantages of Dynamic Routing:


o It is more expensive in terms of CPU and bandwidth usage.
o It is less secure as compared to default and static routing.

3.2 INTRODUCTION TO TRANSPORT LAYER AND APPLICATION LAYER PROTOCOLS
Transport Layer
o The transport layer is a 4th layer from the top.
o The main role of the transport layer is to provide the communication
services directly to the application processes running on different
hosts.
o The transport layer provides a logical communication between
application processes running on different hosts. Although the
application processes on different hosts are not physically connected,
application processes use the logical communication provided by the
transport layer to send the messages to each other.
o The transport layer protocols are implemented in the end systems but not in the network routers.
o A computer network provides more than one protocol to the network applications. For example, TCP and UDP are two transport layer protocols, each providing a different set of services to the application layer.
o All transport layer protocols provide multiplexing/demultiplexing
service. It also provides other services such as reliable data transfer,
bandwidth guarantees, and delay guarantees.
o Each of the applications in the application layer has the ability to send
a message by using TCP or UDP. The application communicates by
using either of these two protocols. Both TCP and UDP will then
communicate with the internet protocol in the internet layer. The
applications can read and write to the transport layer. Therefore, we
can say that communication is a two-way process.

Services provided by the Transport Layer


The services provided by the transport layer are similar to those of the data
link layer. The data link layer provides the services within a single
network while the transport layer provides the services across an
internetwork made up of many networks. The data link layer controls the
physical layer while the transport layer controls all the lower layers.
The services provided by the transport layer protocols can be divided
into five categories:
o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing

End-to-end delivery:
The transport layer transmits the entire message to the destination.
Therefore, it ensures the end-to-end delivery of an entire message from a
source to the destination.

Reliable delivery:
The transport layer provides reliability services by retransmitting the lost
and damaged packets.

The reliable delivery has four aspects:


o Error control
o Sequence control
o Loss control
o Duplication control

Error Control
o The primary role of reliability is Error Control. In reality, no
transmission will be 100 percent error-free delivery. Therefore,
transport layer protocols are designed to provide error-free
transmission.
o The data link layer also provides the error handling mechanism, but it
ensures only node-to-node error-free delivery. However, node-to-node
reliability does not ensure the end-to-end reliability.
o The data link layer checks for the error between each network. If an
error is introduced inside one of the routers, then this error will not be
caught by the data link layer. It only detects those errors that have been
introduced between the beginning and end of the link. Therefore, the
transport layer performs the checking for the errors end-to-end to ensure that the packet has arrived correctly.

Sequence Control
o The second aspect of the reliability is sequence control which is
implemented at the transport layer.
o On the sending end, the transport layer is responsible for ensuring that
the packets received from the upper layers can be used by the lower
layers. On the receiving end, it ensures that the various segments of a
transmission can be correctly reassembled.

Loss Control
Loss Control is a third aspect of reliability. The transport layer ensures
that all the fragments of a transmission arrive at the destination, not some
of them. On the sending end, all the fragments of transmission are given
sequence numbers by a transport layer. These sequence numbers allow the
receiver's transport layer to identify the missing segment.

Duplication Control
Duplication Control is the fourth aspect of reliability. The transport layer
guarantees that no duplicate data arrive at the destination. Sequence
numbers are used to identify the lost packets; similarly, it allows the
receiver to identify and discard duplicate segments.

Flow Control
Flow control is used to prevent the sender from overwhelming the
receiver. If the receiver is overloaded with too much data, it discards the packets and asks for their retransmission. This increases network congestion and thus reduces system performance. The transport layer is responsible for flow control. It uses the sliding window protocol, which makes data transmission more efficient and controls the flow of data so that the receiver does not become overwhelmed. The sliding window protocol is byte oriented rather than frame oriented.
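The following highly simplified Python sketch illustrates the sliding-window idea in its most basic form: the sender never has more than one window of unacknowledged bytes outstanding. Real implementations also handle loss, retransmission, and window updates, which are deliberately omitted here.
Python code:
# A simplified sliding-window sketch: the sender keeps at most WINDOW
# unacknowledged bytes in flight, and every acknowledgement slides the
# window forward.
DATA = b"abcdefghijklmnopqrstuvwxyz"
WINDOW = 8                       # receiver-advertised window size, in bytes

base = 0                         # oldest unacknowledged byte
next_to_send = 0                 # next byte to transmit

while base < len(DATA):
    # Transmit while the window is not full.
    while next_to_send < len(DATA) and next_to_send - base < WINDOW:
        print(f"send byte {next_to_send}: {DATA[next_to_send:next_to_send + 1]!r}")
        next_to_send += 1
    # Assume the receiver acknowledges everything sent so far.
    print(f"ACK received up to byte {next_to_send - 1}; window slides forward")
    base = next_to_send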

Multiplexing
The transport layer uses the multiplexing to improve transmission
efficiency.

Multiplexing can occur in two ways:


o Upward multiplexing: Upward multiplexing means multiple transport
layer connections use the same network connection. To make more
cost-effective, the transport layer sends several transmissions bound
for the same destination along the same path; this is achieved through
upward multiplexing.

o Downward multiplexing: Downward multiplexing means one


transport layer connection uses the multiple network connections.
Downward multiplexing allows the transport layer to split a
connection among several paths to improve the throughput. This type
of multiplexing is used when networks have a low or slow capacity.

Addressing
o According to the layered model, the transport layer interacts with the
functions of the session layer. Many protocols combine session,
presentation, and application layer protocols into a single layer known
as the application layer. In these cases, delivery to the session layer
means the delivery to the application layer. Data generated by an
application on one machine must be transmitted to the correct
application on another machine. In this case, addressing is provided by
the transport layer.
o The transport layer provides the user address which is specified as a
station or port. The port variable represents a particular TS user of a
specified station known as a Transport Service access point (TSAP).
Each station has only one transport entity.
o The transport layer protocols need to know which upper-layer
protocols are communicating.
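Port-based addressing can be demonstrated with two UDP sockets on one host: the transport layer delivers each datagram to the socket bound to the destination port carried in the UDP header. The port numbers and loopback address in the sketch below are arbitrary choices for illustration.
Python code:
# Port-based demultiplexing with two UDP sockets on the same host.
# The loopback address and the port numbers 5001/5002 are arbitrary choices.
import socket

app1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app1.bind(("127.0.0.1", 5001))        # "application 1" listens on port 5001

app2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app2.bind(("127.0.0.1", 5002))        # "application 2" listens on port 5002

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"to application 1", ("127.0.0.1", 5001))
sender.sendto(b"to application 2", ("127.0.0.1", 5002))

# The transport layer delivers each datagram to the socket bound to the
# destination port carried in the UDP header.
print(app1.recvfrom(1024)[0])         # b'to application 1'
print(app2.recvfrom(1024)[0])         # b'to application 2'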

Application Layer
The application layer in the OSI model is the closest layer to the end user
which means that the application layer and end user can interact directly
with the software application. The application layer programs are based on
client and servers.
The Application layer includes the following functions:
o Identifying communication partners: The application layer
identifies the availability of communication partners for an application
with data to transmit.
o Determining resource availability: The application layer determines
whether sufficient network resources are available for the requested
communication.
o Synchronizing communication: All the communications occur
between the applications requires cooperation which is managed by an
application layer.
Services of Application Layer

o Network Virtual terminal: An application layer allows a user to log


on to a remote host. To do so, the application creates a software
emulation of a terminal at the remote host. The user's computer talks to
the software terminal, which in turn, talks to the host. The remote host
thinks that it is communicating with one of its own terminals, so it
allows the user to log on.
o File Transfer, Access, and Management (FTAM): An application
allows a user to access files in a remote computer, to retrieve files
from a computer and to manage files in a remote computer. FTAM
defines a hierarchical virtual file in terms of file structure, file
attributes and the kind of operations performed on the files and their
attributes.
o Addressing: To obtain communication between client and server, there is a need for addressing. When a client makes a request to the server, the request contains the server's address and its own address. When the server responds, the response contains the destination address, i.e., the client's address. To achieve this kind of addressing, DNS is used.
o Mail Services: An application layer provides Email forwarding and
storage.
o Directory Services: An application contains a distributed database
that provides access for global information about various objects and
services.
Authentication: It authenticates the sender or receiver's message or both.

Network Application Architecture


Application architecture is different from the network architecture. The
network architecture is fixed and provides a set of services to applications.
The application architecture, on the other hand, is designed by the
application developer and defines how the application should be structured
over the various end systems.

Application architecture is of two types:


o Client-server architecture: An application program running on the local machine that sends a request to another application program is known as a client, and a program that serves the request is known as a server. For example, when a web server receives a request from the client host, it sends a response back to the client host.

Characteristics of Client-server architecture:


o In Client-server architecture, clients do not directly communicate with
each other. For example, in a web application, two browsers do not
directly communicate with each other.
o A server has a fixed, well-known address known as an IP address. Because the server is always on, the client can always contact the server by sending a packet to the server's IP address.

Disadvantage Of Client-server architecture:


It is a single-server based architecture which is incapable of handling all the requests from the clients. For example, a social networking site can become overwhelmed when only one server exists.
o P2P (peer-to-peer) architecture: It has no dedicated server in a data
center. The peers are the computers which are not owned by the
service provider. Most of the peers reside in the homes, offices,
schools, and universities. The peers communicate with each other
without passing the information through a dedicated server, this
architecture is known as peer-to-peer architecture. The applications
based on P2P architecture includes file sharing and internet telephony.

Features of P2P architecture


o Self-scalability: In a file sharing system, although each peer generates a workload by requesting files, each peer also adds service capacity by distributing files to other peers.
o Cost-effective: It is cost-effective as it does not require significant
server infrastructure and server bandwidth.

Client and Server processes


o A network application consists of a pair of processes that send the
messages to each other over a network.
o In P2P file-sharing system, a file is transferred from a process in one
peer to a process in another peer. We label one of the two processes as
the client and another process as the server.
o With P2P file sharing, the peer which is downloading the file is known as a client, and the peer which is uploading the file is known as a server. However, in some applications such as P2P file sharing, a process can act as both a client and a server. Therefore, we can say that a process can both download and upload files.
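A minimal client and server process pair can be sketched with Python sockets, assuming the arbitrary port 6000 is free on the local machine; the server thread waits for a request and returns a response, mirroring the description above.
Python code:
# A minimal client/server pair on one machine. Port 6000 is an arbitrary
# choice; real servers would of course run on another host.
import socket
import threading

ready = threading.Event()

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 6000))
    srv.listen(1)
    ready.set()                            # signal that the server is listening
    conn, _addr = srv.accept()             # wait for a client request
    request = conn.recv(1024)
    conn.sendall(b"response to: " + request)   # serve the request
    conn.close()
    srv.close()

threading.Thread(target=server, daemon=True).start()
ready.wait()                               # do not connect before the server is up

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 6000))        # the client contacts the server's address
client.sendall(b"request from client")
print(client.recv(1024))                   # b'response to: request from client'
client.close()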

3.3 SUMMARY
Routing is the process of selecting a path for data transfer from a source
to a destination. This task is performed by a device known as a router,
which operates at the internet layer of the TCP/IP model and the network
layer of the OSI model. A router forwards packets based on the contents of
the forwarding table and packet headers, utilizing routing algorithms to
determine the best path for packet delivery.

Routing protocols use metrics to identify the optimal path for packet
delivery. Metrics are network characteristics that help in determining the
best route.

3.4 QUESTIONS
1. Write a short note on Routing.
2. Explain Routing Metrics and Costs.
3. Describe the working of the Transport Layer.
4. Explain Application Layer Protocols.



4
SOFTWARE DEFINED NETWORKING
Unit Structure :
4.0 Objectives
4.1 Introduction
4.2 Elements of Modern Networking
4.3 Requirements and Technology
4.4 SDN: Background and Motivation
4.5 SDN Data Plane and OpenFlow
4.6 SDN Control Plane
4.7 SDN Application Plane
4.8 Summary
4.9 List of References
4.10 Unit End Exercises

4.0 OBJECTIVES
 To get familiar with the elements of networking
 To understand and get acquaint with the requirements of technology
 To understand the key requirements of SDN

4.1 INTRODUCTION
Software Defined Networking (SDN) is a paradigm shift in the way
computer networks are designed, deployed, and managed. Traditionally,
network devices like routers and switches are controlled by their
proprietary firmware or software, with limited flexibility for dynamic
changes or optimizations. SDN, on the other hand, decouples the control
plane (decision-making logic) from the data plane (forwarding of packets).
This decoupling allows for centralized control and programmability of the
network through software.
Here's a breakdown of key components and concepts in SDN:
1. Control Plane : In SDN, the control plane is centralized in a software
controller. This controller communicates with network devices using
protocols like OpenFlow, providing a global view of the network and
making decisions on how data packets should be forwarded.

2. Data Plane : The data plane consists of network devices such as switches and routers. These devices forward packets according to
instructions received from the controller. They are typically simpler
and more focused on packet forwarding, as the intelligence resides in
the controller.
3. Software Controller : This is the brain of the SDN architecture. It's
responsible for gathering information about the network topology,
traffic patterns, and making decisions on how to route traffic based on
defined policies and rules. Examples of SDN controllers include
OpenDaylight, ONOS, and Ryu.
4. Southbound APIs : These are the interfaces through which the SDN
controller communicates with network devices in the data plane.
OpenFlow is the most common southbound API, but there are others
such as NETCONF and P4.
5. Northbound APIs : These are the interfaces through which the SDN
controller exposes its capabilities to higher-level applications and
services. Northbound APIs enable integration with orchestration
systems, management tools, and other network services, allowing for
automation and programmability.
6. Network Virtualization : SDN enables network virtualization by
abstracting the underlying physical network infrastructure. This allows
for the creation of multiple logical networks (also known as overlays)
that can be customized, managed, and provisioned independently from
the physical infrastructure.
7. Programmability and Automation : One of the key advantages of
SDN is its programmability. Network administrators can write
software applications that interact with the SDN controller through its
northbound API, automating tasks such as network provisioning,
configuration management, and traffic engineering.
8. Dynamic Traffic Management : SDN enables dynamic traffic
management by providing real-time visibility into network conditions
and the ability to adapt network policies and configurations
accordingly. This allows for better traffic engineering, load balancing,
and Quality of Service (QoS) enforcement.

4.2 ELEMENTS OF MODERN NETWORKING


Modern networking encompasses a wide range of technologies and
concepts that facilitate the efficient and reliable transfer of data across
digital networks. Here's a detailed description of some key elements:
1. Cloud Computing : Cloud computing has revolutionized how
networks are designed and utilized. It involves the delivery of
computing services over the internet, including storage, servers,
databases, networking, software, and more. Cloud-based networking
enables scalable and flexible infrastructure, allowing organizations to
rapidly deploy and manage applications and services with minimal upfront investment.
2. Virtualization : Virtualization technology abstracts computing
resources such as servers, storage, and networks, allowing multiple
virtual instances to run on a single physical machine. Network
virtualization, in particular, enables the creation of multiple logical
networks on top of a shared physical infrastructure. This improves
resource utilization, scalability, and flexibility, while also enabling
easier management and automation.
3. Software Defined Networking (SDN) : SDN decouples the control
plane from the data plane in networking devices, allowing centralized
control and programmability of the network. It enables dynamic
configuration and management of network resources through software,
leading to improved agility, scalability, and automation. SDN
architectures typically involve a centralized controller, southbound
APIs for communication with network devices, and northbound APIs
for integration with higher-level applications and services.
4. Network Function Virtualization (NFV) : NFV virtualizes network
functions such as firewalls, load balancers, and routers, running them
as software instances on commodity hardware. This replaces dedicated
hardware appliances with flexible and scalable virtualized functions,
reducing costs, simplifying management, and enabling more agile
service deployment.
5. Software-Defined WAN (SD-WAN) : SD-WAN is an approach to
wide-area networking that utilizes software-defined networking
principles to dynamically route traffic across multiple connection types
(such as MPLS, broadband, and LTE) based on application
requirements and network conditions. SD-WAN solutions provide
centralized management, improved application performance, and cost-
effective connectivity for distributed organizations.
6. Network Automation and Orchestration : Automation and
orchestration tools streamline network management tasks by
automating repetitive processes and coordinating the deployment and
configuration of network resources. These tools leverage APIs and
scripting languages to integrate with network devices, orchestration
platforms, and management systems, enabling faster provisioning,
troubleshooting, and optimization of network services.
7. Intent-Based Networking (IBN) : IBN is an emerging networking
paradigm that focuses on translating high-level business requirements
or "intent" into automated network configurations. By abstracting
network complexity and automating policy enforcement, IBN aims to
simplify network operations, improve security, and align network
behavior with business objectives.

8. 5G Networking : The fifth generation of mobile networking technology, 5G promises significant advancements in data rates,
latency, reliability, and connectivity for both consumer and enterprise
applications. 5G networks leverage technologies such as millimeter-
wave spectrum, massive MIMO (Multiple Input, Multiple Output),
network slicing, and edge computing to deliver high-performance,
low-latency connectivity for a wide range of use cases, including IoT,
augmented reality, and autonomous vehicles.
These elements collectively represent the modern networking landscape,
characterized by flexibility, scalability, automation, and agility, all of
which are essential for supporting the evolving demands of digital
businesses and applications.

4.3 REQUIREMENTS AND TECHNOLOGY


The requirements and technologies in modern networking are deeply
intertwined, as advancements in technology often arise in response to
evolving demands and challenges. Here's a detailed breakdown:
1. High Performance : With the exponential growth of data traffic
driven by trends like video streaming, cloud computing, and IoT,
modern networks must deliver high performance in terms of
bandwidth, throughput, and low latency. Technologies like fiber-optic
communication, high-speed Ethernet, and advanced routing and
switching protocols (e.g., OSPF, BGP) are essential for achieving and
maintaining high-performance networks.
2. Scalability : Networks need to scale gracefully to accommodate
increasing numbers of devices, users, and applications without
compromising performance or reliability. Scalability is achieved
through technologies such as virtualization, which allows for the
efficient allocation and management of resources, and cloud
computing, which provides elastic scalability by dynamically
provisioning and deprovisioning resources as needed.
3. Reliability and Resilience : Networks must be highly reliable to
ensure uninterrupted access to critical services and applications.
Redundancy, fault tolerance, and resilience are achieved through
technologies like network redundancy protocols (e.g., Spanning Tree
Protocol, Virtual Router Redundancy Protocol), link aggregation, load
balancing, and automatic failover mechanisms.
4. Security : With the increasing prevalence of cyber threats and data
breaches, network security is paramount. Modern networks employ a
variety of security technologies and protocols, including firewalls,
intrusion detection and prevention systems (IDPS), VPNs, encryption,
authentication mechanisms (e.g., 802.1X), and security policies to
protect against unauthorized access, data theft, and other security
threats.

5. Flexibility and Agility : Networks need to be flexible and agile to adapt to changing business requirements, user demands, and
technological advancements. Technologies such as Software Defined
Networking (SDN), Network Function Virtualization (NFV), and
intent-based networking (IBN) enable dynamic configuration,
automation, and orchestration of network resources, allowing for rapid
deployment, scaling, and optimization of network services.
6. Interoperability : In heterogeneous network environments composed
of diverse hardware and software components from multiple vendors,
interoperability is essential to ensure seamless communication and
integration. Standards-based protocols and APIs facilitate interoperability between different network devices, systems, and applications, enabling smooth integration and consistent operation across multi-vendor environments.
7. Quality of Service (QoS) : To meet the diverse needs of different
applications and users, networks must provide differentiated levels of
service based on factors such as bandwidth, latency, and packet loss.
Quality of Service (QoS) mechanisms prioritize and manage network
traffic to ensure that critical applications receive the necessary
resources and performance guarantees, using technologies like traffic
shaping, prioritization, and congestion management.
8. Manageability and Monitoring : Effective network management and
monitoring are essential for ensuring optimal performance,
troubleshooting issues, and enforcing security policies. Network
management tools and protocols, such as SNMP (Simple Network
Management Protocol), NetFlow, and Syslog, provide visibility into
network traffic, performance metrics, and device status, enabling
proactive monitoring, troubleshooting, and optimization of network
resources.
These requirements and technologies collectively shape the design,
deployment, and operation of modern networking infrastructures, enabling
organizations to build robust, scalable, secure, and agile networks that
meet the evolving demands of digital businesses and applications.

4.4 SDN: BACKGROUND AND MOTIVATION


Software Defined Networking (SDN) represents a fundamental shift in the
way computer networks are designed, operated, and managed. The
concept emerged in response to the limitations of traditional networking
architectures, which were characterized by complex and inflexible
hardware-centric designs. Here's a detailed exploration of the background
and motivation behind SDN:
1) Traditional Networking Challenges: Traditional network
architectures, such as those based on the OSI model, rely on
distributed control mechanisms embedded within individual network
devices (e.g., routers, switches). This distributed control model leads to several challenges:

 Lack of Centralized Control : Each network device makes


independent forwarding decisions based on locally stored routing
tables, leading to suboptimal traffic management and inefficient
resource utilization.

 Limited Programmability : Traditional network devices have fixed,


vendor-specific firmware or software that lacks programmability and
flexibility, making it difficult to adapt to changing network
requirements or deploy new services.

 Manual Configuration and Management : Network configuration


and management tasks are often labor-intensive, error-prone, and time-
consuming, requiring skilled administrators to manually configure
each device and manage complex routing protocols.
2) Emergence of SDN : SDN emerged as a response to these challenges,
aiming to introduce greater flexibility, programmability, and agility
into network architectures. The key motivation behind SDN includes:

 Centralized Control : SDN decouples the control plane (decision-


making logic) from the data plane (packet forwarding) and centralizes
control within a software-based controller. By consolidating network
intelligence in a centralized controller, SDN enables a global view of
the network topology and centralized decision-making, leading to
more efficient traffic management and optimization.

 Programmability and Flexibility : Unlike traditional networking


devices, which have fixed functionality, SDN allows for
programmable control of network behavior through software. This
programmability enables network administrators to dynamically
configure and customize network policies, protocols, and services
using high-level programming languages or APIs, without needing to
modify individual network devices.

 Automation and Orchestration : SDN facilitates automation and


orchestration of network provisioning, configuration, and management
tasks. By exposing programmable interfaces (northbound APIs), SDN
controllers enable integration with higher-level orchestration systems
and management platforms, allowing for automated service
deployment, scaling, and optimization based on application
requirements or business policies.

 Dynamic Adaptation to Changing Requirements : SDN enables


networks to adapt dynamically to changing traffic patterns, application
demands, and network conditions. By leveraging real-time network
telemetry and analytics, SDN controllers can adjust network
configurations and policies on the fly, optimizing resource allocation,
load balancing, and Quality of Service (QoS) enforcement in response
to changing conditions.
 Ecosystem Innovation and Openness : SDN fosters innovation and interoperability by promoting open standards, open APIs, and
ecosystem collaboration. OpenFlow, an open standard for
communication between the SDN controller and network devices, has
gained widespread adoption, enabling interoperability between SDN
controllers and switches from different vendors and fostering an
ecosystem of third-party applications and services.
Overall, SDN represents a paradigm shift in networking, offering greater
flexibility, programmability, and automation compared to traditional
networking architectures. By centralizing control, enabling programmable
network behavior, and facilitating automation, SDN empowers
organizations to build more agile, efficient, and scalable networks that can
adapt to the evolving demands of modern applications and services.

4.5 SDN DATA PLANE AND OPENFLOW


SDN Data Plane:

 In Software Defined Networking (SDN), the data plane is responsible


for the actual forwarding of data packets through the network.

 Unlike traditional networking architectures where the data plane and


control plane are tightly integrated within individual network devices
(e.g., switches, routers), SDN decouples these planes, with the control
plane centralized in a software-based controller.

 The SDN data plane typically consists of network devices such as


switches, routers, and other forwarding elements. These devices
forward packets based on instructions received from the SDN
controller, without possessing any inherent intelligence or decision-
making capabilities.

 The primary role of the data plane is to execute forwarding actions


specified by the SDN controller, such as forwarding packets to specific
ports, applying traffic policies, or encapsulating packets for network
virtualization.

 Data plane devices in SDN architectures are often simpler and more
focused on packet forwarding, as the intelligence and decision-making
logic reside in the centralized controller.

Open Flow Protocol :

 OpenFlow is a widely adopted protocol used to communicate between


the SDN controller and network devices in the data plane.

 It serves as a standardized southbound API (Application Programming


Interface) that enables the controller to program and control the
behavior of network switches and routers.

 OpenFlow defines a set of messages and message formats that allow the controller to query the state of the network, modify forwarding
tables, and instruct switches on how to handle incoming packets.

 The OpenFlow protocol operates on a switch-based model, where


network switches are referred to as "OpenFlow switches." These
switches consist of a flow table, which stores flow entries defining
packet forwarding behavior, and a secure channel (typically TCP/IP)
for communication with the SDN controller.

 When a packet arrives at an OpenFlow switch, the switch consults its


flow table to determine how to handle the packet. If there is no
matching flow entry, the switch forwards the packet to the controller
for further instructions.

 The controller processes packet-in messages from switches, determines


the appropriate actions based on network policies or routing algorithms,
and sends corresponding flow-mod messages to the switches to update
their flow tables accordingly.

 OpenFlow allows for granular control over packet forwarding, enabling


dynamic configuration and management of network resources, traffic
engineering, and implementation of network policies.

 In summary, the SDN data plane is responsible for forwarding data


packets through the network based on instructions received from the
centralized controller. The OpenFlow protocol serves as a standardized
communication interface between the controller and data plane devices,
enabling dynamic control and programmability of network behavior in
SDN architectures.

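The table-hit and table-miss behaviour described above can be illustrated conceptually. The sketch below is not OpenFlow itself (no real protocol messages are exchanged); it only mimics the interaction in which a miss triggers a packet-in to the controller, which then installs a flow entry in the switch.
Python code:
# A conceptual sketch of the table-hit / table-miss behaviour. This is not
# the OpenFlow protocol; it only mimics the packet-in / flow-mod interaction.
flow_table = {}              # match (destination address) -> action (output port)

def controller_decide(dst):
    """Stand-in for the controller's policy: choose a port for this destination."""
    port = 1 if dst.startswith("10.0.1.") else 2
    flow_table[dst] = port                 # "flow-mod": install the new entry
    return port

def switch_forward(dst):
    if dst in flow_table:                  # table hit: handled in the data plane
        return f"forward to port {flow_table[dst]} (table hit)"
    port = controller_decide(dst)          # table miss: "packet-in" to controller
    return f"forward to port {port} (entry installed by controller)"

print(switch_forward("10.0.1.5"))   # miss: controller installs an entry
print(switch_forward("10.0.1.5"))   # hit: forwarded without controller involvement
print(switch_forward("10.0.2.7"))   # another destination triggers a new entry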
4.6 SDN CONTROL PLANE


The SDN (Software Defined Networking) control plane is a critical
component of SDN architecture, responsible for making decisions about
how data traffic should be forwarded throughout the network. Here's a
detailed breakdown of the SDN control plane:

1. Centralized Decision-Making:

 One of the fundamental principles of SDN is the centralization of


network control. Unlike traditional networking architectures where the
control plane logic is distributed across individual network devices, in
SDN, control is centralized within a software-based controller.

 Centralization allows for a global view of the network topology and


traffic patterns, enabling more informed decision-making and
optimization of network resources.

2. Network State Abstraction:
 The SDN controller maintains a comprehensive view of the network
state, which includes information about network topology, device
configurations, traffic flows, and performance metrics.

 Network state abstraction allows the controller to make intelligent


decisions about how to route traffic, allocate resources, and enforce
network policies based on real-time information.

3. Policy Definition and Enforcement:

 The SDN control plane defines network policies and rules based on
high-level objectives or business requirements. These policies specify
how traffic should be handled, such as Quality of Service (QoS)
guarantees, access control, traffic prioritization, and routing
preferences.

 Policies are implemented as software-defined rules that dictate how


packets are forwarded through the network. The controller
communicates these rules to data plane devices using a standardized
southbound API (e.g., OpenFlow) or vendor-specific protocols.

4. Dynamic Network Control:

 SDN enables dynamic control and adaptation of network behavior in


response to changing conditions. The controller continuously monitors
network state and performance metrics, adjusting policies and
forwarding decisions as needed to optimize resource utilization,
minimize latency, and ensure reliable packet delivery.

 Dynamic network control allows SDN to support a wide range of use


cases, including load balancing, traffic engineering, fault tolerance,
and security enforcement, with greater agility and responsiveness
compared to traditional networking architectures.
5. Integration with Higher-Level Services:

 The SDN control plane exposes programmable interfaces (northbound


APIs) that allow integration with higher-level services, applications,
and management systems. These APIs enable orchestration platforms,
network management tools, and application developers to interact with
the SDN controller, automate network provisioning, and implement
custom network services.

 Integration with higher-level services fosters ecosystem collaboration,


innovation, and interoperability, enabling the development of diverse
SDN applications and use cases tailored to specific business needs.
In summary, the SDN control plane centralizes decision-making and
policy enforcement in a software-based controller, providing a holistic
view of the network and enabling dynamic control and programmability of
network behavior. By abstracting network complexity, SDN facilitates
greater agility, flexibility, and automation in network management and operation.

SDN Applications:

 SDN applications are software programs or modules that run on top of


the SDN controller, leveraging its centralized control and network visibility to implement specific network functionalities or services.

 Examples of SDN applications include traffic engineering


applications, security applications, Quality of Service (QoS)
management applications, and network monitoring and analytics
applications.

4.7 SDN APPLICATION PLANE


In the context of Software Defined Networking (SDN), the application
plane refers to the layer where various SDN applications and services are
implemented to provide specific functionalities and services tailored to the
needs of the network and its users. Here's a detailed breakdown of the
SDN application plane:

Purpose of the Application Plane:

 The application plane in SDN is where higher-level services,


applications, and management functions reside. These applications
leverage the programmable nature of SDN to deliver customized
network services, automate network operations, and implement
advanced functionalities.

SDN Applications:

 SDN applications are software programs or modules that run on top of


the SDN controller or within the network infrastructure. These
applications utilize the capabilities exposed by the SDN controller to
implement specific network functionalities or services.

 SDN applications can be developed by network administrators, third-


party vendors, or open-source communities to address a wide range of
use cases and requirements.
Types of SDN Applications:
 There are various types of SDN applications that can be deployed in
the application plane, including:
 Traffic Engineering: Applications for optimizing network traffic flows,
improving network performance, and maximizing resource utilization.
 Network Monitoring and Analytics: Applications for collecting,
analyzing, and visualizing network data to identify performance issues,
security threats, and anomalies.

 Security and Access Control: Applications for enforcing security policies, access control, and threat detection and mitigation.
 Quality of Service (QoS) Management: Applications for prioritizing
and guaranteeing network bandwidth, latency, and reliability for
critical applications or services.
 Load Balancing: Applications for distributing network traffic across
multiple paths or resources to avoid congestion and optimize resource
usage.
 Virtual Network Management: Applications for creating, provisioning,
and managing virtual networks or network slices for specific tenants,
applications, or services.
 Policy-Based Routing: Applications for implementing network
policies and routing decisions based on business requirements, security
policies, or regulatory compliance.
 Service Chaining: Applications for chaining together multiple network
services (e.g., firewalls, load balancers, WAN accelerators) to create
complex service delivery chains.
 These are just a few examples, and the possibilities are virtually
limitless, depending on the specific needs and objectives of the
network deployment.
Northbound API
 The SDN controller exposes a northbound API that allows SDN
applications to interact with the controller and utilize its capabilities.
 The northbound API provides a standardized interface for SDN
applications to query network state, subscribe to event notifications,
install forwarding rules, and invoke controller functionalities.
 SDN applications communicate with the controller through the
northbound API to request information, make decisions, and take
actions based on network conditions and user requirements.
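As a hedged illustration, an SDN application might consume a controller's northbound REST API roughly as follows. The controller address, URL paths, credentials, and JSON fields are hypothetical placeholders, and the third-party requests library is assumed to be installed; consult the documentation of the specific controller (OpenDaylight, ONOS, Ryu, etc.) for its actual API.
Python code:
# Hypothetical northbound REST interaction; endpoints and fields are
# placeholders, not a real controller API.
import requests

CONTROLLER = "http://controller.example.com:8181"      # hypothetical address

# Query network state exposed by the controller (hypothetical endpoint).
topology = requests.get(f"{CONTROLLER}/api/topology",
                        auth=("admin", "admin")).json()
print("Switches reported by the controller:", topology.get("switches", []))

# Ask the controller to install a forwarding policy (hypothetical endpoint).
policy = {"match": {"dst": "10.0.1.5"}, "action": {"output_port": 1}}
response = requests.post(f"{CONTROLLER}/api/flows", json=policy,
                         auth=("admin", "admin"))
print("Policy install status:", response.status_code)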
Integration with External Systems
 SDN applications can integrate with external systems, such as
orchestration platforms, cloud management systems, network
management tools, and business applications, to enable end-to-end
automation, service orchestration, and business process integration.
 Integration with external systems allows SDN applications to leverage
contextual information, automate cross-domain workflows, and align
network operations with broader business objectives.
In summary, the application plane in SDN is where SDN applications
reside, providing customized network services, automation, and
management functionalities tailored to the needs of the network
deployment. By leveraging the programmable capabilities of SDN
controllers and integrating with external systems, SDN applications enable

organizations to optimize network operations, improve network performance, and deliver innovative services to users.

4.8 SUMMARY
We saw how SDN represents a shift in the way computer networks are designed, deployed, and managed, and how it offers greater flexibility, agility, and scalability compared to traditional networking approaches. It empowers organizations to build and manage networks that are more adaptable to changing business requirements and traffic patterns, ultimately leading to improved efficiency and cost-effectiveness.
We also discussed how the requirements and technologies collectively shape the design, deployment, and operation of modern networking infrastructures, enabling organizations to build robust, scalable, secure, and agile networks that meet the evolving demands of digital businesses and applications.
SDN represents a paradigm shift in networking, offering greater
flexibility, programmability, automation, and agility compared to
traditional networking architectures. By centralizing control, enabling
programmable data forwarding, and providing a platform for developing
custom network applications, SDN empowers organizations to build more
efficient, scalable, and innovative networks that can adapt to the evolving
demands of modern applications.

4.9 LIST OF REFERENCES


1. TCPIP Protocol Suite, Behrouz A Forouzan, McGraw Hill Education;
4th edition, Fourth Edition, 2017
2. Foundations of Modern Networking: SDN, NFV, QoE, IoT, and
Cloud, William Stallings, Addison-Wesley Professional, 2016.
3. Software Defined Networks: A Comprehensive Approach, Paul
Goransson and Chuck Black, Morgan Kaufmann Publications, 2014
4. SDN - Software Defined Networks by Thomas D. Nadeau & Ken
Gray, O'Reilly, 2013

4.10 UNIT END EXERCISES


1) Discuss the elements of modern networking.
2) Write a note on Requirements and Technology.
3) Explain SDN: Background and Motivation.
4) What do you mean by SDN Data Plane and OpenFlow?
5) Explain SDN Control Plane.
6) Explain SDN Application Plane.


5
NETWORK FUNCTIONS
VIRTUALIZATION CONCEPTS AND
ARCHITECTURE
Unit Structure :
5.0 Objectives
5.1 Introduction
5.2 An Overview
5.2.1 What is Network Functions Virtualization
5.2.2 Concepts of NFV
5.2.3 NFV Architecture
5.2.4 Benefits of NFV
5.3 NFV Functionality
5.3.1 Virtualization of Network Functions
5.3.2 Dynamic Service Deployment and Scaling
5.3.3 Service Chaining and Network Slicing
5.3.4 Orchestration and Management
5.3.5 Cost Efficiency and Resource Optimization
5.3.6 Implementation, Evaluation and Maintenance
5.4 Network Virtualization Quality of Service
5.4.1 Resource Allocation and Management
5.4.2 Traffic Prioritization
5.4.3 Traffic Shaping and Policing
5.5 Let us Sum Up
5.6 List of References
5.7 Bibliography
5.8 Unit End Exercises

5.0 OBJECTIVES
After going through this unit, you will be able to:
 Define Network Functions Virtualization
 understand Network Functions Virtualization Architecture

 describe the Benefits of NFV
 classify different types of systems
 explain NFV Functionality
 illustrate the Quality of Service

5.1 INTRODUCTION
Network Functions Virtualization (NFV) is a concept in networking where
traditional network functions that were previously implemented using
dedicated hardware appliances are virtualized. This means they are
decoupled from the physical infrastructure and run as software on standard
computing hardware.
Network functions virtualization (NFV) is the replacement of network
appliance hardware with virtual machines. The virtual machines use a
hypervisor to run networking software and processes such as routing and
load balancing.
It is a network architecture concept that uses the technologies of IT
virtualization to virtualize entire classes of network node functions into
building blocks that may connect, or chain together, to create
communication services.
It is a way to reduce costs and accelerate service deployment for network
operators by decoupling functions like a firewall or encryption from
dedicated hardware and moving them to virtual servers.
Network Virtualization (NV) refers to abstracting network resources that
were traditionally delivered in hardware to software. NV can combine
multiple physical networks to one virtual, software-based network, or it
can divide one physical network into separate, independent virtual
networks.

63
Software Defined
Networking

5.2 OVERVIEW
5.2.1 Network Functions Virtualization (NFV)
Network Functions Virtualization (NFV) is a concept in networking where
traditional network functions that were previously implemented using
dedicated hardware appliances are virtualized.
NFV is a fundamental shift in how network services are deployed and
managed, offering significant advantages in terms of flexibility,
scalability, and cost-efficiency for modern networking environments.
5.2.2 Concepts of NFV
1. Virtualization : NFV leverages virtualization technologies (such as
hypervisors and virtual machines) to run network functions as software
instances on standard servers, storage, and networking resources.
2. Decoupling : It decouples network functions from proprietary
hardware appliances, allowing them to run on any hardware that meets
the performance and capacity requirements.
3. Abstraction : NFV abstracts network functions from the underlying
hardware, providing flexibility, scalability, and easier management
compared to traditional hardware-based approaches.
4. Orchestration : NFV requires orchestration frameworks to manage
and automate the deployment, configuration, scaling, and monitoring
of virtualized network functions (VNFs).
5. Service Chaining : NFV enables the creation of service chains, where
multiple VNFs are interconnected to deliver complex network
services, such as firewalls, load balancers, and intrusion detection
systems.
5.2.3 NFV Architecture
NFV architecture typically involves several key components and layers:
1. Infrastructure Layer:
- Compute: Standard servers (physical or virtual) that host VNFs.
- Storage: Storage resources for VNFs and data.

- Networking: Physical and virtual networking components for
interconnecting VNFs and external networks.
2. Virtualization Layer:
- Hypervisors or virtual machine monitors (VMMs) that create and
manage virtual machines (VMs) where VNFs run.
- Container-based virtualization technologies may also be used for
lightweight isolation of VNFs.
3. Management and Orchestration (MANO):
- NFV Orchestrator (NFVO): Coordinates and manages the lifecycle of
VNFs and network services. It interfaces with higher-level orchestration
systems.
- Virtual Infrastructure Manager (VIM): Manages the underlying
compute, storage, and networking resources. It provides APIs to the
NFVO for resource allocation and management.
4. VNFs and VNF Managers:
- VNFs : Virtualized instances of network functions, such as routers,
firewalls, NAT (Network Address Translation) devices, etc.
- VNF Managers (VNFM) : Manage the lifecycle of VNF instances,
including instantiation, scaling, healing, and termination.
5. Orchestration Layer:
- Coordinates and automates the deployment and operation of VNFs and
service chains.
- Implements policies and rules for service assurance, scaling, and fault
management.
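
To make the division of responsibilities among these layers concrete, the following Python sketch is a toy model of how an NFV Orchestrator might coordinate a VIM and a VNF Manager. It is illustrative only: the class and method names (VIM, VNFManager, NFVOrchestrator, allocate, instantiate, deploy) are invented for the example and do not correspond to any real MANO product or API.

# Minimal toy model of NFV MANO interactions (illustrative only).
class VIM:
    """Virtual Infrastructure Manager: hands out compute resources."""
    def __init__(self, total_vcpus):
        self.free_vcpus = total_vcpus

    def allocate(self, vcpus):
        if self.free_vcpus >= vcpus:
            self.free_vcpus -= vcpus
            return True
        return False

class VNFManager:
    """VNF Manager: handles the lifecycle of individual VNF instances."""
    def __init__(self):
        self.instances = []

    def instantiate(self, name):
        self.instances.append(name)
        print(f"VNFM: instantiated {name}")

class NFVOrchestrator:
    """NFV Orchestrator: coordinates VIM and VNFM to deploy a service."""
    def __init__(self, vim, vnfm):
        self.vim, self.vnfm = vim, vnfm

    def deploy(self, vnf_name, vcpus):
        if self.vim.allocate(vcpus):          # ask the VIM for resources
            self.vnfm.instantiate(vnf_name)   # ask the VNFM to start the VNF
        else:
            print(f"NFVO: not enough resources for {vnf_name}")

nfvo = NFVOrchestrator(VIM(total_vcpus=8), VNFManager())
nfvo.deploy("virtual-firewall", vcpus=4)
nfvo.deploy("virtual-load-balancer", vcpus=2)

In a real deployment the interfaces between these components are standardized (for example by ETSI NFV MANO) and are far richer than the two calls shown here.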

5.2.4 Benefits of NFV


 Cost Efficiency : Reduces costs associated with proprietary hardware
and enables resource sharing.
 Flexibility and Scalability : Allows rapid deployment and scaling of
network services.
 Agility : Enables faster service innovation and deployment through
automation.

 Easier Management : Simplifies operations through centralized management and orchestration.

5.3 NFV FUNCTIONALITY

Network Functions Virtualization (NFV) provides a wide range of


functionalities that transform traditional networking by virtualizing
network services.

5.3.1 Virtualization of Network Functions

 NFV enables the virtualization of various network functions that


traditionally required dedicated hardware appliances. These include
functions like routers, firewalls, load balancers, NAT (Network
Address Translation), WAN accelerators, and more.

 Virtualization allows these functions to run as software instances on


standard server hardware, making them more flexible, scalable, and
easier to manage.

 The virtual machines use a hypervisor to run networking software and


processes such as routing and load balancing.

 NFV allows for the separation of communication services from


dedicated hardware, such as routers and firewalls. This separation
means network operations can provide new services dynamically and
without installing new hardware.

 Deploying network components with network functions virtualization


takes hours instead of months like with traditional networking.

 Also, virtualized services can run on less expensive, generic servers


instead of proprietary hardware.

 Essentially, network functions virtualization replaces the functionality


provided by individual hardware networking components. This means
that virtual machines run software that accomplishes the same
networking functions as traditional hardware.

 Load balancing, routing and firewall security are all performed by


software instead of hardware components. A hypervisor or software-
defined networking controller allows network engineers to program all
of the different segments of the virtual network, and even automate the
provisioning of the network.

 IT managers can configure various aspects of the network


functionality through one pane of glass, in minutes.


5.3.2 Dynamic Service Deployment and Scaling


NFV enables dynamic deployment and scaling of network services.
Virtualized Network Functions (VNFs) can be instantiated, scaled up or
down, and terminated based on demand, without the need for physical
hardware changes.
This flexibility allows service providers to respond quickly to changing
traffic patterns and service demands.
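
As a rough illustration of how demand-driven scaling decisions can be made, the sketch below adjusts the number of VNF instances when average load crosses simple thresholds. The thresholds and function name are assumptions made for this example, not values from any particular orchestrator.

# Illustrative threshold-based scaling of VNF instances (not a real orchestrator).
def scale_vnf(current_instances, load_per_instance,
              scale_out_at=0.8, scale_in_at=0.3):
    """Return the new instance count for a given average load (0.0 to 1.0)."""
    if load_per_instance > scale_out_at:
        return current_instances + 1          # scale out: add an instance
    if load_per_instance < scale_in_at and current_instances > 1:
        return current_instances - 1          # scale in: remove an instance
    return current_instances                  # load within bounds: no change

instances = 2
for load in [0.5, 0.9, 0.95, 0.4, 0.2]:       # simulated load samples
    instances = scale_vnf(instances, load)
    print(f"load={load:.2f} -> {instances} instance(s)")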

5.3.3 Service Chaining and Network Slicing

 NFV facilitates service chaining, where multiple VNFs are


interconnected in a specific order to deliver end-to-end network
services. Service chaining enables the creation of complex network
service architectures tailored to specific use cases.

 Network slicing, a related concept, allows virtual networks to be


created on shared physical infrastructure, providing isolated and
customized network environments for different applications or
customers.

 A ‘service chain’ is a set of network services which are performed in a


specific order and ‘service chaining’ refers to steering the traffic
through such a “chain”. It’s like a recipe where actions are performed
in a preordained order.

 Services can be performed in parallel or in serial, depending on the


situation. The chain can be implemented by cabling individual devices
together or, increasingly, by using software provisioning to control the
flow of data through the selected services.

 Monitoring tools that are linked together in this way are sometimes
referred to as a daisy-chain.

 The use of service chains is linked to the automation of functions that


have been either embedded in single purpose hardware devices,
dictated by physical topologies, or performed manually--which are
increasingly perceived as too costly and inflexible in our fast-moving
digital economy.

 Service chaining is a useful concept that can help you organize


operational tasks into more manageable groups. As programmability
becomes the norm in network management, organizations will find
more ways to use service chaining to increase network visibility,
improve security monitoring, and increase the speed and quality of
applications.

 Advantages of Service Chaining:


 Enable Network Function Virtualization (NFV): Once upon a time,
specialized network appliances ruled the data center and in many
places they still do. When you consider their purpose, however, you
can identify multiple functions taking place inside each appliance. For
instance, a firewall might perform network address translation, deep
packet inspection, and access control. The hardware appliance was
designed to perform these functions at wire speed. But in recent years,
many of the functions once performed by expensive hardware
appliances are being redesigned as software functions that can be run
on any generic and low-cost CPUs. This process is called network
function virtualization, and the goal is to achieve the same results as
the appliance, but at greater efficiency and less cost.
 Reduce Latency: To get acceptable performance in a virtualized
environment, however, services that run as software on a generic CPU
must be chained together, to accelerate total processing speed or
latency. Any time services are grouped together in a way that forces
processing to proceed from step-to-step, latency can be reduced and
speed accelerated.
 Reduce Redundant Inspections:

 Without the ability to chain together certain functions, a particular


packet may need to pass through a particular service more than once to
meet the qualifications for other types of inspection tools.

 For instance, in the case of security monitoring, SSL traffic can pass
through a powerful decryption tool and the exposed content can be
sent through a series of additional inspection tools.

 This avoids the need to send the traffic through decryption for each
tool, which would increase latency and multiply the cycles being
consumed on the decryption tool.

 A more efficient and more cost-effective result is achieved by sending


decrypted traffic through multiple tools before passing it through to the
trusted network.

 Apply Consistent Policies: Pre-set service chains help ensure that
actions are taken in a specific sequence, and nothing is overlooked.
 This reduces errors and increases the chance that abnormalities will be
identified in time to prevent damage to an organization’s data or other
resources.
 Increase Flexibility:

 The ability to define service chains dynamically, based on the user,


device, location, service level, or other characteristic is a powerful
capability in the fast-moving digital economy.

 Well defined rules and policies can help decrease the time to deliver a
service and increase the quality of the user experience.
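
A minimal sketch of the service chaining idea described above: a packet is steered through an ordered list of functions, each standing in for a VNF such as a firewall, NAT, or monitoring probe. The functions and packet fields are invented placeholders, not real network services.

# Illustrative service chain: a packet flows through VNFs in a fixed order.
def firewall(packet):
    if packet.get("port") == 23:              # drop telnet as an example policy
        return None
    return packet

def nat(packet):
    packet["src"] = "203.0.113.10"            # rewrite source address (documentation prefix)
    return packet

def monitor(packet):
    print("monitor saw:", packet)
    return packet

service_chain = [firewall, nat, monitor]      # the "recipe": order matters

def apply_chain(packet, chain):
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:                    # a VNF dropped the packet
            return None
    return packet

apply_chain({"src": "10.0.0.5", "dst": "198.51.100.7", "port": 80}, service_chain)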
 Network slicing:

 It is the operators’ best answer on how to build and manage a network,


that meets and exceeds the emerging requirements from a wide range
of users.

 The way to achieve a sliced network is to transform it into a set of


logical networks on top of a shared infrastructure.

 Each logical network is designed to serve a defined business purpose


and comprises all the required network resources, configured and
connected end-to-end.

 The network slice is a logically separated, self-contained, independent


and secured part of the network, targeting different services with
different requirements on speed, latency and reliability.

 Network slice characteristics are for example low latency, high


bandwidth and ultra-reliability for a critical IoT use case or higher
latency and lower bandwidth for a massive IoT use case.

 To efficiently manage the network slices and to maximize revenues, a


modern OSS and BSS providing automated business and operational
processes is a must.

 With programmable and flexible 5G networks and advanced AI


(Artificial Intelligence) and Service Level Agreement (SLA) driven
orchestration, the required network functions can be flexibly created,
quickly deployed and automatically managed throughout the life cycle.

 A network slice can be dedicated to one enterprise customer or shared


by multiple tenants. For example, a slice may consist of dedicated
radio, transport and core resources including a dedicated user plane
function at the edge.

 Another slice shares radio & transport resources between tenants but
provides dedicated core network functions per tenant.
 End-to-end network slicing enables new business model innovation
and use cases across all verticals and creates new revenue
opportunities for communication service providers.

 It provides service flexibility and the ability to deliver services faster


with high security, isolation, and applicable characteristics to meet the
contracted SLA.

 Network Slicing enables operators to maximize the return on


investment via efficient usage and management of the network
resources and provide differentiated services at scale.

 With network slicing, communication service providers can meet all


the needs from their enterprise customers.

5.3.4 Orchestration and Management

 NFV relies on orchestration frameworks to automate the lifecycle


management of VNFs and network services. This includes
provisioning, configuration, monitoring, scaling, and optimization.

 Orchestration systems coordinate with Virtual Infrastructure Managers


(VIMs) to allocate resources, manage network connectivity, and
ensure service continuity.

 Orchestration is the coordination and management of multiple


computer systems, applications and/or services, stringing together
multiple tasks to execute a larger workflow or process.

 These processes can consist of multiple tasks that are automated and
can involve multiple systems.

 The goal of orchestration is to streamline and optimize the execution


of frequent, repeatable processes and thus to help data teams more
easily manage complex tasks and workflows.

 Anytime a process is repeatable, and its tasks can be automated,


orchestration can be used to save time, increase efficiency, and
eliminate redundancies.

5.3.5 Cost Efficiency and Resource Optimization


 By optimizing network traffic and resource allocation, organizations
can improve throughput and minimize downtime. Network monitoring
tools enable proactive issue detection, reducing downtime.

 Optimizing network traffic and resource allocation improves


performance and cost efficiency.

 Network Optimization refers to the tools, techniques, and best


practices used to monitor and improve network performance.

 It involves analyzing the network infrastructure, identifying
bottlenecks and other performance issues, and implementing solutions
to eliminate or mitigate them.

5.3.6 Implementation, Evaluation and Maintenance

 Maintenance changes the existing system, enhancement adds features


to the existing system, and development replaces the existing system.

 It is an important part of system development that includes the


activities which corrects errors in system design and implementation,
updates the documents, and tests the data.
Service Agility and Innovation:

 NFV accelerates service deployment and innovation by reducing time-


to-market for new services and features. Service providers can rapidly
deploy and update network services through software updates rather
than hardware replacements.

 This agility enables service providers to differentiate themselves in the


market by offering innovative and customizable network services.

Enhanced Network Management and Service Assurance:


 NFV enhances network management capabilities by providing
centralized visibility and control over virtualized network resources.

 Service assurance tools monitor and manage service performance,


ensuring high availability and quality of service (QoS).

 Automated fault detection and recovery mechanisms improve network


resilience and reduce downtime.

 NFV transforms traditional networking architectures by virtualizing


network functions, enabling dynamic service deployment and scaling,
supporting complex service architectures through service chaining and
network slicing, and enhancing operational efficiency and service
agility.

 These functionalities are crucial for modernizing network


infrastructures to meet the evolving demands of digital services and
applications.

5.4 NETWORK VIRTUALIZATION QUALITY OF SERVICE
Network virtualization Quality of Service (QoS) refers to the mechanisms
and techniques used to ensure and manage the quality and performance of
virtualized network services.
In the context of network virtualization, where multiple virtual networks
(VNets) or virtualized network functions (VNFs) share the same physical
infrastructure, QoS becomes essential to guarantee that each virtualized
entity receives adequate resources and performance levels according to
defined service-level agreements (SLAs) or policies.

5.4.1 Resource Allocation and Management

 QoS in network virtualization involves allocating and managing


resources such as bandwidth, CPU, memory, and storage among
different virtual networks or VNF instances.

 Resource allocation can be dynamic and flexible, allowing adjustments


based on traffic patterns, application requirements, or business
priorities.

 Resource allocation strategies aim to effectively maximize


performance, system utilization, and profit by considering
virtualization technologies, heterogeneous resources, context
awareness, and other features.

5.4.2 Traffic Prioritization

 Traffic prioritization is a concept of Quality of Service (QoS).

 QoS enables network administrators to provide minimum bandwidth


for less time-critical applications and maximum bandwidth for real-time
traffic like voice and video, where delay is not tolerated.

 QoS parameter will be configured on switches and routers.

 QoS mechanisms prioritize traffic flows within virtualized networks to


ensure critical applications or services receive sufficient bandwidth
and latency requirements.

 Prioritization may be based on application type, user class, or specific


service requirements defined in SLAs.

 Quality of Service (QoS) is a networking mechanism that helps control


and prioritize traffic so that more critical traffic can be sent first on the
network.

 This feature helps ensure performance for critical network traffic.

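A conceptual sketch of strict-priority scheduling, assuming a software model in which each packet carries a numeric priority (lower number = higher priority). Real switches and routers implement this in hardware with multiple queues; the heap-based version below only illustrates the ordering behaviour.

# Conceptual strict-priority scheduler: lower number = higher priority.
import heapq
import itertools

queue = []
order = itertools.count()                     # tie-breaker to keep FIFO order per class

def enqueue(priority, packet):
    heapq.heappush(queue, (priority, next(order), packet))

def dequeue():
    return heapq.heappop(queue)[2] if queue else None

enqueue(2, "bulk file transfer")
enqueue(0, "voice frame")                     # real-time traffic gets priority 0
enqueue(1, "video frame")
print(dequeue())                              # -> "voice frame" is sent first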
5.4.3 Traffic Shaping and Policing

 Traffic shaping is used to control bandwidth of the network to ensure


quality of service to business-critical applications.

 Traffic shaping regulates the rate of traffic flow to prevent congestion


and ensure smooth delivery of services. It can involve buffering, rate
limiting, or scheduling mechanisms.

 Traffic policing enforces traffic limits and controls to prevent any


single virtual entity from consuming excessive resources or violating
QoS policies.
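
One common way to realize shaping and policing is the token bucket. The sketch below is a minimal, assumed implementation: tokens accumulate at the configured rate up to a burst limit, and a packet is admitted only if enough tokens are available (a policer would drop the packet otherwise, while a shaper would buffer it).

# Minimal token-bucket sketch for traffic shaping/policing (illustrative).
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens according to the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes       # enough tokens: send the packet
            return True
        return False                          # policing would drop; shaping would buffer

bucket = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)  # roughly 1 Mbps
print(bucket.allow(1500), bucket.allow(20_000))   # True, False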

Service Differentiation:
 QoS supports differentiation of services based on performance
requirements and priorities. It allows service providers to offer tiered
services with varying levels of performance guarantees (e.g., gold,
silver, bronze levels).

 Differentiated services ensure that critical applications or premium


customers receive higher QoS levels compared to less critical or
standard services.
Monitoring and Management:

 QoS in network virtualization requires monitoring tools and


management systems to continuously assess network performance,
detect anomalies, and enforce QoS policies.

 Real-time monitoring helps identify potential bottlenecks, congestion


points, or performance degradation issues that could impact QoS.

SLA Compliance:

 QoS mechanisms ensure that virtualized network services meet


predefined SLAs or contractual obligations regarding performance
metrics such as latency, throughput, jitter, and availability.

 SLA compliance monitoring and reporting are essential to validate


QoS levels and demonstrate service quality to customers or
stakeholders.

Fault Tolerance and Resilience:

 QoS strategies include fault tolerance mechanisms to maintain service


continuity and resilience against network failures or disruptions.

 Redundancy, failover mechanisms, and dynamic rerouting capabilities


help minimize downtime and ensure uninterrupted service delivery.

5.5 LET US SUM UP
Implementing effective QoS in network virtualization environments can
pose several challenges:
 Complexity : Managing QoS across virtualized infrastructures with
diverse service requirements and traffic patterns requires sophisticated
policies and coordination.
 Performance Overhead : QoS mechanisms may introduce overhead
in terms of processing resources and latency, impacting overall system
performance.
 Scalability : Ensuring consistent QoS as the scale of virtualized
networks grows requires scalable architectures and efficient resource
management algorithms.
 Interoperability : QoS solutions must be compatible with existing
network management frameworks and virtualization platforms to
facilitate seamless integration and operation.
In conclusion, network virtualization QoS plays a crucial role in ensuring
predictable performance, efficient resource utilization, and service
differentiation within virtualized network environments.
Effective implementation requires careful planning, robust monitoring,
and adaptive management strategies to meet the diverse needs of modern
digital services and applications.

5.6 LIST OF REFERENCES


 https://ieeexplore.ieee.org/document/9148479
 https://www.vmware.com/topics/glossary/content/network-functions-virtualization-nfv.html

5.7 BIBLIOGRAPHY
1. TCP/IP Protocol Suite, Behrouz A. Forouzan, McGraw Hill Education,
Fourth Edition, 2017
2. Foundations of Modern Networking: SDN, NFV, QoE, IoT, and
Cloud, William Stallings, Addison-Wesley Professional, 2016.
3. Software Defined Networks: A Comprehensive Approach, Paul
Goransson and Chuck Black, Morgan Kaufmann Publications, 2014
4. SDN - Software Defined Networks by Thomas D. Nadeau & Ken
Gray, O'Reilly, 2013

5.8 UNIT END EXERCISES


1. What are the benefits of NFV?
2. Where is the best place to host virtual network functions for NFV?
3. How can virtual services meet the required carrier-class performance?
4. Explain NFV architecture.
5. Explain Network Virtualization Quality of Service.



6
MODERN NETWORK ARCHITECTURE:
CLOUDS AND FOG
Unit Structure:
6.0 Objectives
6.1 Introduction
6.2 Summary
6.3 Possible Answers
6.4 List of References
6.5 Bibliography
6.6 Glossary
6.7 Further Readings
6.8 Model Questions

6.0 OBJECTIVES
 Understand the key characteristics and benefits of Cloud Computing
and Fog Computing.
 Learn the differences between Cloud and Fog architectures.
 Explore the technologies and use cases driving modern network
systems.

6.1 INTRODUCTION
Modern network architecture addresses the demands of scalability, low
latency, and efficient resource utilization in an interconnected digital
world. Cloud Computing and Fog Computing represent two critical
paradigms in this evolution, offering distinct approaches to handling data
processing, storage, and analytics. This unit introduces these concepts,
detailing their characteristics, applications, and emerging trends.
Study Guidance:
To make the most of this chapter, focus on understanding the fundamental
differences between Cloud and Fog Computing. Pay attention to real-
world use cases and technological enablers, such as virtualization,
containers, and edge computing. Diagrams and tables included in the
chapter will help clarify key concepts, so refer to them closely.

Modern Network Architecture: Clouds and Fog
Modern network architecture leverages advanced computing paradigms to
meet the demands of scalability, low latency, and efficient resource
utilization. Two key components in this architecture are Cloud
Computing and Fog Computing.

Cloud Computing
Cloud computing refers to the delivery of on-demand computing services
over the internet, including storage, processing power, and software
applications. These resources are housed in remote data centers
maintained by service providers.
Key Characteristics:
1. On-Demand Self-Service: Users can provision computing resources
automatically without human intervention.
2. Broad Network Access: Resources are available over the internet and
accessible from a wide range of devices.
3. Resource Pooling: Resources are shared among multiple users
through multi-tenancy.
4. Scalability and Elasticity: Resources can scale up or down
dynamically based on demand.
5. Pay-as-You-Go: Users pay only for what they use, enabling cost-
effectiveness.
Types of Cloud Services:
 IaaS (Infrastructure as a Service): Virtualized computing resources
like VMs, storage, and networks (e.g., AWS EC2, Google Compute
Engine).
 PaaS (Platform as a Service): Development platforms and tools for
building and deploying applications (e.g., Heroku, AWS Elastic
Beanstalk).
 SaaS (Software as a Service): Ready-to-use applications hosted on
the cloud (e.g., Google Workspace, Salesforce).
Deployment Models:
 Public Cloud: Open for public use (e.g., AWS, Azure).
 Private Cloud: Dedicated to a single organization for more control
and security.
 Hybrid Cloud: Combines public and private clouds, offering
flexibility and optimization.

Advantages:
 High scalability
 Cost-efficiency
 Easy collaboration and accessibility
 Enhanced disaster recovery and backup options
Challenges:
 Security and compliance concerns
 Dependency on internet connectivity
 Data transfer costs

Fog Computing
Fog computing is an extension of cloud computing, designed to bring
computing, storage, and networking resources closer to end devices (IoT,
edge devices). Unlike centralized cloud systems, fog nodes are distributed
geographically to process data closer to its source.
Key Features:
1. Low Latency: By processing data locally, fog computing minimizes
latency.
2. Decentralization: Resources are distributed across multiple nodes
located near the data sources.
3. Real-Time Processing: Supports real-time applications like
autonomous vehicles, industrial automation, and smart cities.
Architecture:

 Edge Devices: IoT devices that generate and sometimes pre-process data.
 Fog Nodes: Local computing devices or mini data centers that process
and store data near the source.
 Cloud: Acts as a central repository for long-term storage and global
processing.
Use Cases:
 Smart cities and traffic management
 Industrial IoT (IIoT) for predictive maintenance
 Healthcare for wearable devices and remote patient monitoring
Advantages:
 Reduces bandwidth usage and costs
 Enhances data privacy by processing sensitive data locally
 Supports time-sensitive applications
Challenges:
 Complex management of distributed nodes
 Interoperability between fog and cloud systems
 Higher initial deployment costs

Cloud Computing in Detail


Key Technologies Driving Cloud Computing
1. Virtualization:
o Abstracts hardware resources, allowing multiple virtual machines
(VMs) to run on a single physical server.
o Example: VMware, Hyper-V.
2. Containers:
o Lightweight, portable units of software that package applications
with their dependencies.
o Example: Docker, Kubernetes.
3. Microservices:
o Applications are broken into smaller, independent services for
better scalability and maintainability.

4. Serverless Computing:
o Applications run in stateless compute containers triggered by
events without managing infrastructure.
o Example: AWS Lambda, Azure Functions.
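
For example, a serverless function is usually just an event handler. The snippet below follows the general shape of a Python handler in the AWS Lambda style; the event field used here is a made-up example, not a specific AWS event format.

# Shape of a typical serverless (AWS Lambda style) Python handler (illustrative).
import json

def handler(event, context):
    # 'event' carries the trigger payload; 'context' carries runtime metadata.
    name = event.get("name", "world")         # hypothetical field for this example
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"})
    }

# Local test call (no cloud infrastructure involved):
print(handler({"name": "IoT sensor"}, None))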

Cloud Security:
1. Data Encryption:
o Encrypts data in transit and at rest to protect sensitive information.
2. Identity and Access Management (IAM):
o Ensures only authorized users can access resources.
3. Compliance:
o Adhering to regulations like GDPR, HIPAA, and SOC 2.

Emerging Trends:
1. Edge-Cloud Integration:
o Combining edge computing with cloud infrastructure to support
applications requiring both local processing and centralized
resources.
2. Multi-Cloud Strategy:
o Organizations use multiple cloud providers to avoid vendor lock-
in and improve resilience.
3. AI and Machine Learning in Cloud:
o AI models are trained and deployed at scale using cloud resources.
o Example: Google AI Platform, AWS SageMaker.

Examples of Cloud Applications:


1. Media Streaming: Netflix, YouTube
2. E-Commerce: Amazon, Shopify
3. Collaboration Tools: Microsoft Teams, Zoom
4. Cloud Storage: Dropbox, Google Drive
By combining the scalability of cloud computing with the proximity and
speed of fog computing, modern network architectures address the
demands of today’s interconnected digital world. This fusion supports a
range of applications, from IoT ecosystems to large-scale enterprise
operations.

The Internet of Things (IoT): Components in Detail
The Internet of Things (IoT) refers to a network of interconnected devices
that collect, exchange, and act on data through the internet. These "things"
can range from simple sensors to complex machinery, all embedded with
electronics, software, and connectivity.
IoT systems are built on several key components, each playing a crucial
role in enabling communication, data processing, and decision-making.

1. Sensors and Devices


Sensors and devices form the foundation of any IoT system. They are
responsible for collecting data from the physical world.
 Types of Sensors:
o Environmental Sensors: Measure temperature, humidity, air
quality, etc.
o Motion Sensors: Detect movement or acceleration.
o Proximity Sensors: Detect the presence of nearby objects.
o Optical Sensors: Capture images or measure light intensity.
o Biometric Sensors: Monitor heart rate, blood pressure, or other
biological signals.
 Smart Devices:
o Devices that not only collect data but also have built-in processing
capabilities (e.g., smart thermostats, smartwatches).

2. Connectivity
Connectivity links IoT devices to gateways, servers, and the cloud,
enabling data exchange.
 Communication Protocols:
o Wi-Fi: Ideal for high-bandwidth, short-range applications.
o Bluetooth/BLE: Low power, suitable for personal area networks.
o Zigbee/Z-Wave: Low-power protocols for smart home devices.
o LoRaWAN: Long-range, low-power protocol for IoT in agriculture
and logistics.
o Cellular (4G/5G): Wide-area connectivity for IoT applications like
autonomous vehicles and smart cities.
o Ethernet: Reliable, high-speed connection for industrial IoT.

 Key Features:
o Low latency
o Scalability
o Energy efficiency

3. IoT Gateways
Gateways act as intermediaries between IoT devices and the cloud or
centralized servers. They aggregate, preprocess, and securely transmit
data.
 Functions:
o Protocol translation (e.g., converting Zigbee data to Wi-Fi)
o Local processing and filtering to reduce bandwidth usage
o Data encryption for secure transmission
 Examples:
o Home automation hubs (e.g., Amazon Echo, Google Nest Hub)
o Industrial gateways (e.g., Cisco IoT Gateway)

4. Cloud Computing and Data Storage


The cloud provides the infrastructure to store, process, and analyze vast
amounts of IoT data.
 Data Processing:
o IoT data is processed using machine learning, analytics tools, and
big data frameworks.
 Data Storage:
o Distributed storage systems (e.g., Amazon S3, Google Cloud
Storage) ensure scalability and reliability.
 Advantages:
o Centralized data access
o High computational power
o Integration with analytics and AI services

5. Edge Computing
Edge computing processes data locally, near the source, rather than relying
on centralized cloud systems.

 Use Cases:
o Real-time applications like autonomous vehicles or industrial
automation.
o Scenarios requiring low latency and high security.
 Benefits:
o Reduced latency
o Lower bandwidth costs
o Enhanced data privacy

6. Data Analytics and Machine Learning


IoT data becomes valuable when analyzed for insights, trends, and
predictions.
 Data Analytics:
o Real-time analytics for immediate decision-making (e.g., predictive
maintenance).
o Historical analytics for trend analysis.
 Machine Learning:
o Models trained to detect anomalies, forecast demand, or automate
tasks.
o Example: AI-powered smart thermostats that learn user preferences.

7. User Interface (UI)


The UI allows users to interact with IoT systems, visualize data, and
control devices.
 Forms of UI:
o Mobile apps (e.g., controlling smart lights with a smartphone
app).
o Web dashboards (e.g., monitoring energy usage in a smart grid).
o Voice-controlled interfaces (e.g., Amazon Alexa, Google
Assistant).

8. Security and Privacy


IoT systems are vulnerable to security threats due to their interconnected
nature.
 Security Components:
o Encryption: Protects data in transit and at rest.
o Authentication: Ensures only authorized devices and users can
access the system.

o Firewalls and Intrusion Detection Systems (IDS): Defend


against cyberattacks.
 Challenges:
o Limited processing power on IoT devices may constrain security
features.
o Ensuring compliance with privacy regulations like GDPR.

9. Power Management
IoT devices, particularly those in remote or battery-powered applications,
require efficient power management.
 Technologies:
o Energy-harvesting devices that use solar, thermal, or kinetic
energy.
o Low-power communication protocols (e.g., BLE, Zigbee).

10. Applications
IoT is deployed across various industries, each with unique requirements
and architectures.
 Smart Homes:
o Devices like smart thermostats, smart lighting, and security
cameras.
 Industrial IoT (IIoT):
o Sensors monitoring machinery for predictive maintenance.
 Healthcare:
o Wearables and remote monitoring devices.
 Smart Cities:
o Connected infrastructure like traffic lights, parking systems, and
waste management.
 Agriculture:
o IoT-enabled irrigation systems and soil monitoring.

IoT Ecosystem in Action


Consider a smart home system:
1. Sensors detect room temperature and motion.
2. Data is sent via Wi-Fi to a gateway.
3. The gateway relays data to the cloud for analysis.
4. Insights (e.g., optimal thermostat settings) are processed in real-
time.
5. The user can monitor and control devices through a mobile app.
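
The flow just listed can be sketched in a few lines of Python. The sensor, gateway, and cloud below are simple simulated stand-ins, and the threshold values and field names are invented for illustration.

# Illustrative smart-home data flow: sensor -> gateway -> cloud (all simulated).
import random

def read_sensor():
    return {"room": "living", "temp_c": round(random.uniform(18, 30), 1)}

def gateway(reading):
    # Gateways often filter or aggregate locally to save bandwidth.
    if reading["temp_c"] > 26:                 # only forward "interesting" readings
        return reading
    return None

def cloud_analytics(reading):
    action = "turn on cooling" if reading["temp_c"] > 28 else "log only"
    print(f"cloud: {reading} -> {action}")

for _ in range(5):
    r = gateway(read_sensor())
    if r:
        cloud_analytics(r)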
IoT is a dynamic ecosystem, where seamless integration of these
components ensures efficient operation, real-time decision-making, and
enhanced user experiences. Its potential continues to grow with
advancements in connectivity, computing, and analytics technologies.

6.2 SUMMARY
This chapter explored the critical components of modern network
architectures, focusing on Cloud and Fog Computing. Key features like
scalability, low latency, and real-time processing were discussed alongside
their technological enablers. Use cases ranging from IoT to industrial
automation highlighted their relevance in diverse industries. Together,
Cloud and Fog Computing shape the future of efficient and adaptable
network systems.

6.3 POSSIBLE ANSWERS


1. Define Cloud Computing and list its characteristics.
2. What is Fog Computing, and how does it differ from Cloud
Computing?
3. Explain the role of edge devices in Fog Computing.

6.4 LIST OF REFERENCES


 NIST Cloud Computing Standards
 Cisco Fog Computing White Papers
 AWS Documentation on Cloud Architectures

6.5 BIBLIOGRAPHY
1. Mell, P., & Grance, T. (2011). The NIST Definition of Cloud
Computing. NIST Special Publication 800-145.
2. Cisco Systems. (2020). Fog Computing and the Internet of Things:
Extend the Cloud to Where the Things Are.

6.6 GLOSSARY
 Cloud Computing: Delivery of on-demand computing resources over
the internet.
 Fog Computing: A distributed computing paradigm bringing
resources closer to the data source.
 Virtualization: Technology that allows multiple virtual machines to
run on a single physical server.

6.7 FURTHER READINGS


1. "Mastering Cloud Computing" by RajkumarBuyya et al.
2. "Fog Computing in the IoT Era" by Amir M. Rahmani et al.

6.8 MODEL QUESTIONS


1. Discuss the advantages and challenges of Cloud Computing.
2. Describe the architecture of Fog Computing with an example.
3. What are the emerging trends in Cloud and Fog Computing?



7
DESIGN AND IMPLEMENTATION OF
NETWORK
Unit Structure :
7.0 Objectives
7.1 Introduction
7.2 OSI Seven Layer Model
7.3 Switching at Different Layers
7.3.1 Layer 2 Switching
7.3.2 Layer 3 Switching and Routing
7.4 VLAN
7.5 Trunking
7.5.1 Types of network trunking
7.6 Spanning Tree
7.7 Introduction to OSPF
7.7.1 Basic OSPF Configuration
7.8 Introduction to BGP
7.8.1 How to configure EBGP (External BGP)
7.9 List Of References
7.10 Unit End Exercises

7.0 OBJECTIVES
 Understand and Implement Layer 2 and Layer 3 switching techniques.

 Understand and implement VLAN and trunking and its types.

 Describe and manage spanning tree

 Understand and implement OSPF and BGP

7.1 INTRODUCTION
The OSI Seven Layer Model—What Is a Layer?
Established in 1947, the
International Organization for Standardization (ISO) was formed to bring
together the standards bodies from countries around the world. Their
definition of the model for Open Systems Interconnection, or OSI, is used
to define modes of interconnection between different components in a
networking system. This means that the physical method of transport can
be designed independently of the protocols and applications running over
it. For example, TCP/IP can be run over both Ethernet and FDDI
networks, and Novell’s IPX and Apple’s AppleTalk protocols can both be
run over Token Ring networks. These are examples of having
independence between the physical network type and the upper layer
protocols running across them. Consider also, two TCP/IP-enabled end
systems communicating across a multitude of different network types,
such as Ethernet, Frame Relay, and ATM.

7.2 OSI SEVEN LAYER MODEL


When we talk about Layer 2 and Layer 3 networking, it is these layers that
we’re referring to, and logically the further up the OSI model we move,
the greater intelligence we can use in networking decisions. Each layer
plays its part in moving data from one device to another across a network
infrastructure by providing a standard interface to the surrounding layers.
The Application Layer (Layer 7)
The top layer in the stack, the Application layer is where the end-user
application resides. Think of the
Application layer as the browser application or email client for a user
surfing the Web or sending email. Many protocols are defined for use at
the Application layer, such as HTTP, FTP, SMTP, and Telnet.
In content switching terms, Layer 7 refers to the ability to parse
information directly generated by the user or application in decision
making, such as the URL typed by the user in the Web browser. For
example, http://www.foocorp.com is an example of Application layer data.
The Presentation Layer (Layer 6)
The Presentation layer is used to provide a common way for applications
(residing at the Application layer) to translate between data formats or
perform encryption and decryption. Mechanisms to convert between text
formats such as ASCII and Unicode may be considered part of the
Presentation layer, along with compression techniques for image files such
as GIF and JPEG.

Figure 5–1 The OSI Seven Layer Model.

The Session Layer (Layer 5)


The Session layer coordinates multiple Presentation layer processes
communicating between end devices. The Session layer is used by
applications at either end of the communication between end devices to tie
together multiple Transport layer sessions and provide synchronization
between them.
The HTTP protocol can use multiple TCP connections to retrieve objects
that make up a single Web page. The Session layer provides application
coordination between these separate TCP connections.

The Transport Layer (Layer 4)


The Transport layer is responsible for providing an identifiable and
sometimes reliable transport mechanism between two communicating
devices. User or application data, having passed through the Presentation
and Session layers, will typically be sequenced and checked before being
passed down to the Network layer for addressing.
The Transport layer is the first at which we see the concept of packets or
datagrams of information that will be transported across the network. TCP,
UDP, and ICMP are examples of Layer 4 protocols used to provide a
delivery mechanism between end stations. It is also at this layer in the
model that applications will be distinguished by information in the Layer 4
headers within the packets. Content switching operates most commonly at
this layer by using this information to distinguish between different
applications and different users using the same application.
The Network Layer (Layer 3)
Whereas Layer 4 is concerned with
transport of the packets within a communication channel, the Network
layer is concerned with the delivery of the packets. This layer defines the
addressing structure of the internetwork and how packets should be routed
between end systems. The Network layer typically provides information
about which Transport layer protocol is being used, as well as local
checksums to ensure data integrity. Internet Protocol (IP) and Internet
Packet Exchange (IPX) are examples of Network layer protocols.
Traditional Internet routers operate at the Network layer by examining
Layer 3 addressing information before making a decision on where a
packet should be forwarded. Hardware-based Layer 3 switches also use
Layer 3 information in forwarding decisions. Layer 3 routers and switches
are not concerned whether the packets contain HTTP, FTP, or SMTP data,
but simply where the packet is flowing to and from.

The Data Link Layer (Layer 2)


The Data Link layer also defines a lower level addressing structure to be
used between end systems as well as the lower level framing and
checksums being used to transmit onto the physical medium. Ethernet,
Token Ring, and Frame Relay are all examples of Data Link layer or
Layer 2 protocols.
Traditional Ethernet switches operate at the Data Link layer and are
concerned with forwarding packets based on the Layer 2 addressing
scheme. Layer 2 Ethernet switches are not concerned with whether the
packet contains IP, IPX, or AppleTalk, but only with where the MAC
address of the recipient end system resides.
The Physical Layer (Layer 1) Design and Implementation
of Network
As with all computer systems, networking is ultimately about making,
moving, and storing 1s and 0s. In networking terms, the Physical layer
defines how the user’s browser application data is turned into 1s and 0s to
be transmitted onto the physical medium. The Physical layer defines the
physical medium such as cabling and interface specifications. AUI,
10Base-T, and RJ45 are all examples of Layer 1 specifications.

7.3 SWITCHING AT DIFFERENT LAYERS


Now that we’ve seen examples of different information available within
different layers of the OSI model, let’s look at how this information can be
used to make intelligent traffic forwarding decisions. Before the
development of switching, Ethernet relied on broadcast or flooding of
packets to all end stations within a network to forward traffic. Ethernet is
effectively a shared medium with only one Ethernet end station able to
transmit at any time. Combine this with early implementation techniques
relying on every end station in an Ethernet network seeing every packet,
even if it was not addressed to it, and issues of scalability quickly surface.

Figure 5–2 Passing data through the seven OSI layers.

7.3.1 Layer 2 Switching


The first implementation of Ethernet or Layer 2 switching uses
information in the Ethernet headers to make traffic forwarding decisions.
Intelligent switches learn which ports have which end stations attached by
recording the Ethernet MAC addresses of packets ingressing the switch.
Using this information along with the ability to parse the Layer 2 headers
of all packets means that a Layer 2 switch need only forward frames out of
ports where it knows the end station to be. For end station addresses that
have not yet been learned, frames with unknown destination MAC
addresses are flooded out of every port in the switch to force the recipient
to reply. This will allow the switch to learn the relevant MAC address, as
it will be the source address on the reply frame.
Layer 2 switching is implemented alongside Layer 3 routing for local area
networks to facilitate communication between devices in a common IP
subnet. As the information at this layer is relatively limited, the
opportunity to configure Layer 2 switches to interpret address information
and act upon it in any way other than described previously is generally not
required. Many Layer 2 switches will offer the ability to configure
intelligent services such as Quality of Service (QoS), bandwidth shaping,
or VLAN membership based on the Layer 2 information. Figure 2–3
shows a simplified Layer 2 frame with examples of information that might
be used to make switching decisions.

Figure 5–3 Example Layer 2 headers for switching
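
The MAC learning behaviour described above can be modelled in a few lines. The sketch below is a toy model of a Layer 2 switch's forwarding table, not real switching code: it learns source addresses per ingress port and floods frames whose destination is not yet known.

# Toy model of Layer 2 MAC learning and forwarding (illustrative only).
mac_table = {}                                 # MAC address -> port

def handle_frame(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port               # learn where the source lives
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]            # known destination: forward on one port
    return [p for p in all_ports if p != in_port]   # unknown: flood everywhere else

ports = [1, 2, 3, 4]
print(handle_frame("aa:aa", "bb:bb", 1, ports))   # destination unknown: flood [2, 3, 4]
print(handle_frame("bb:bb", "aa:aa", 3, ports))   # reply: "aa:aa" already learned -> [1]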

7.3.2 Layer 3 Switching and Routing


Traditional protocol routers work by using information in the Layer 3
headers of Ethernet frames. While routing platforms exist for many
different protocols (e.g., IPX, AppleTalk, and DECNet), in TCP/IP terms a
router or routing device will typically use the destination IP address in the
Layer 3 header to make a forwarding decision. The main advantage of
Layer 3 routing in its earliest guises was that it gave the network designer
the ability to segregate the network into distinct IP networks and carefully
control the traffic and reachability between each.
Many of the early implementers and pioneers of Layer 3 routing devices
used software-based devices as platforms that, while offering a flexible
platform for development of the technology, often provided limitations in
terms of performance. As Layer 2 switching became more commonplace
and the price per port of Ethernet switching systems dropped,
manufacturers looked to combine the performance of ASIC-based Layer 2
switching with the functionality and flexibility of Layer 3 routing. Step
forward the Layer 3 switch. Layer 3 switches work by examining the
destination IP address and making a forwarding decision based on the
routing configuration implemented. The destination subnet might be
learned via a connected interface, a static route, or a dynamic routing
protocol such as RIP, OSPF, or BGP. In all instances, once the Layer 3
switch has examined the frame and compared the destination IP address
against the information in its routing database, the destination MAC
address is changed and the frame is forwarded through the relevant egress
port. For IP frames traversing a Layer 3 device, such as a router or Layer 3
switch, the TTL field in the IP header is also decremented to indicate to
end stations and intermediaries that a routing hop has occurred. It is once
we reach the Layer 3 switching environment that configuration for devices
becomes inherently more complex. The administrator must configure the
correct routing information to enable basic traffic flow along with the
interface IP addresses in each of the subnets to which the Layer 3 switch is
attached. Figure 5–4 shows the typical information used by a Layer 3
switch in making a forwarding decision.

Figure 5–4 Example Layer 3 headers for switching and routing
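
A minimal sketch of the lookup a Layer 3 switch performs: the destination IP address is compared against the routing table and the most specific (longest-prefix) match wins. The routes and port names below are example values only; Python's standard ipaddress module is used for the prefix matching.

# Longest-prefix-match lookup, as a Layer 3 switch or router would perform (sketch).
import ipaddress

routing_table = {
    "10.0.0.0/8":  "port 1",
    "10.1.0.0/16": "port 2",
    "0.0.0.0/0":   "port 3 (default route)",
}

def lookup(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(prefix), hop)
               for prefix, hop in routing_table.items()
               if dst in ipaddress.ip_network(prefix)]
    # The most specific route (largest prefix length) wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.1.2.3"))      # -> port 2 (the /16 is more specific than the /8)
print(lookup("192.0.2.9"))     # -> port 3 (default route)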

7.4 VLAN
Virtual networks have two important benefits: they enable the user to
construct and manage networks independent of the underlying physical
network, with assurance of isolation from other virtual networks using the
same physical network; and they enable network providers to efficiently
use network resources to support a wide range of user requirements.
Figure 7.5 shows a relatively common type of hierarchical LAN
configuration. In this example, the devices on the LAN are organized into
four segments, each served by a LAN switch. The LAN switch is a store-
and-forward device used to interconnect a number of end systems to form
a LAN segment. The switch can forward a media access control (MAC)
frame from a source-attached device to a destination-attached device. It
can also broadcast a frame from a source-attached device to all other
attached devices. Multiple switches can be interconnected so that multiple
LAN segments form a larger LAN. A LAN switch can also connect to a
transmission link or a router or other network device to provide
connectivity to the Internet or other WANs.


FIGURE 7.5 A LAN Configuration


Traditionally, a LAN switch operated exclusively at the MAC level.
Contemporary LAN switches generally provide greater functionality,
including multilayer awareness (Layers 3, 4, application), quality of
service (QoS) support, and trunking for wide-area networking.
The three lower groups in Figure 7.5 might correspond to different
departments, which are physically separated, and the upper group could
correspond to a centralized server farm that is used by all the departments.
Consider the transmission of a single MAC frame from workstation X.
Suppose the destination MAC address in the frame is workstation Y. This
frame is transmitted from X to the local switch, which then directs the
frame along the link to Y. If X transmits a frame addressed to Z or W, its
local switch forwards the MAC frame through the appropriate switches to
the intended destination. All these are examples of unicast addressing, in
which the destination address in the MAC frame designates a unique
destination. A MAC frame may also contain a broadcast address, in which
case the destination MAC address indicates that all devices on the LAN
should receive a copy of the frame. Thus, if X transmits a frame with a
broadcast destination address, all the devices on all the switches in Figure
1 receive a copy of the frame. The total collection of devices that receive
broadcast frames from each other is referred to as a broadcast domain.
In many situations, a broadcast frame is used for a purpose, such as
network management or the transmission of some type of alert, with a
relatively local significance. Thus, in Figure 7.5, if a broadcast frame has
information that is useful only to a particular department, transmission
capacity is wasted on the other portions of the LAN and on the other
switches.

The Use of Virtual LANs:


A more effective alternative is the creation of VLANs. In essence, a
virtual local-area network (VLAN): is a logical subgroup within a LAN
that is created by software rather than by physically moving and
separating devices. It combines user stations and network devices into a
single broadcast domain regardless of the physical LAN segment they are
attached to and allows traffic to flow more efficiently within populations
of mutual interest. The VLAN logic is implemented in LAN switches and
functions at the MAC layer. Because the objective is to isolate traffic
within the VLAN, a router is required to link from one VLAN to another.
Routers can be implemented as separate devices, so that traffic from one
VLAN to another is directed to a router, or the router logic can be
implemented as part of the LAN switch, as shown in Figure 7.6
VLANs enable any organization to be physically dispersed throughout the
company while maintaining its group identity. For example, accounting
personnel can be located on the shop floor, in the research and
development center, in the cash disbursement office, and in other physical
locations, while remaining members of the same VLAN.
Figure 7.6 shows five defined VLANs. A transmission
from workstation X to server Z is within the same VLAN, so it is
efficiently switched at the MAC level. A broadcast MAC frame from X is
transmitted to all devices in all portions of the same VLAN. But a
transmission from X to printer Y goes from one VLAN to another.
Accordingly, router logic at the IP level is required to move the IP packet
from X to Y. Figure 7.6 shows that logic integrated into the switch, so that the switch
determines whether the incoming MAC frame is destined for another
device on the same VLAN. If not, the switch routes the enclosed IP packet
at the IP level.

Figure 7.6 VLAN

Defining VLANs:
A VLAN is a broadcast domain consisting of a group of end stations,
perhaps on multiple physical LAN segments, that are not constrained by
their physical location and can communicate as if they
were on a common LAN. A number of different approaches have been used
defining membership, including the following:

 Membership by port group : Each switch in the LAN


configuration contains two types of ports: a trunk port, which connects
two switches; and an end port, which connects the switch to an end
system. A VLAN can be defined by assigning each end port to a specific
VLAN. This approach has the advantage that it is relatively easy to
configure. The principal disadvantage is that the network manager must
reconfigure VLAN membership when an end system moves from one
port to another.

 Membership by MAC address : Because MAC layer addresses are


hardwired into the workstation’s network interface card (NIC), VLANs
based on MAC addresses enable network managers to move a
workstation to a different physical location on the network and have
that workstation automatically retain its VLAN membership. The
main problem with this method is that VLAN membership must be
assigned initially. In networks with thousands of users, this is no easy
task. Also, in environments where notebook PCs are used, the MAC
address is associated with the docking station and not with the
notebook PC. Consequently, when a notebook PC is moved to a
different docking station, its VLAN membership must be
reconfigured.

 Membership based on protocol information : VLAN membership


can be assigned based on IP address, transport protocol information,
or even higher-layer protocol information. This is a quite flexible
approach, but it does require switches to examine portions of the
MAC frame above the MAC layer, which may have a performance
impact.
Communicating VLAN Membership: Switches must have a way of
understanding VLAN membership (that is, which stations belong to which
VLAN) when network traffic arrives from other switches; otherwise,
VLANs would be limited to a single switch. One possibility is to
configure the information manually or with some type of network
management signaling protocol, so that switches can associate incoming
frames with the appropriate VLAN.
A more common approach is frame tagging, in which a header is typically
inserted into each frame on inter-switch trunks to uniquely identify to
which VLAN a particular MAC-layer frame belongs.
IEEE 802.1Q VLAN Standard: The IEEE 802.1Q standard defines the
operation of VLAN bridges and switches that permits the definition,
operation, and administration of VLAN topologies within a
bridged/switched LAN infrastructure.
Recall that a VLAN is an administratively configured broadcast domain,
consisting of a subset of end stations attached to a LAN. A VLAN is not
limited to one switch but can span multiple interconnected switches. In
that case, traffic between switches must indicate VLAN membership. This
is accomplished in 802.1Q by inserting a tag with a VLAN identifier
(VID) with a value in the range from 1 to 4094. Each VLAN in a LAN
configuration is assigned a globally unique VID. By assigning the same
VID to end systems on many switches, one or more VLAN broadcast
domains can be extended across a large network.

Figure 7.7 shows the position and content of the 802.1Q tag, referred to as
Tag Control Information (TCI). The presence of the two-octet TCI field is
indicated by inserting a Length/Type field in the 802.3 MAC frame with a
value of 8100 hex. The TCI consists of three subfields, as described in the
list that follows.
User priority (3 bits): The priority level for this frame.
Canonical format indicator (1 bit): Is always set to 0 for Ethernet
switches. CFI is used for compatibility between Ethernet type networks
and Token Ring type networks. If a frame received at an Ethernet port has
a CFI set to 1, that frame should not be forwarded as it is to an untagged
port.
VLAN identifier (12 bits): The identification of the VLAN. Of the 4096
possible VIDs, a VID of 0 is used to identify that the TCI contains only a
priority value, and 4095 (0xFFF) is reserved, so the maximum possible
number of VLAN configurations is 4094.

Figure 7.7 : Tagged 802.3 MAC Frame Format
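
The TCI layout described above (3-bit priority, 1-bit CFI, 12-bit VID) can be reproduced directly with bit operations. The sketch below packs and unpacks the 16-bit field; it is an illustration of the field layout only, not a frame-building library.

# Packing/unpacking the 802.1Q Tag Control Information (TCI) field (illustrative).
def pack_tci(priority, cfi, vid):
    assert 0 <= priority <= 7 and cfi in (0, 1) and 1 <= vid <= 4094
    return (priority << 13) | (cfi << 12) | vid      # 3 + 1 + 12 = 16 bits

def unpack_tci(tci):
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF

tci = pack_tci(priority=5, cfi=0, vid=100)
print(hex(tci))            # 0xa064
print(unpack_tci(tci))     # (5, 0, 100)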

Nested VLANs:
The original 802.1Q specification allowed for a single VLAN tag field to
be inserted into an Ethernet MAC frame. More recent versions of the
standard allow for the insertion of two VLAN tag fields, allowing the
definition of multiple sub-VLAN levels. For example, a single VLAN level
suffices for an Ethernet configuration entirely on a single premises.
However, it is not uncommon for an enterprise to make use of a network
service provider to interconnect multiple LAN locations, and to use metro
Ethernet links to connect to the
provider. Multiple customers of the service provider may wish to use the
802.1Q tagging facility across the service provider network (SPN).
One possible approach is for the customer’s VLANs to be visible to the
service provider. In that case, the service provider could support a total of
only 4094 VLANs for all its customers. Instead, the service provider
inserts a second VLAN tag into Ethernet frames. For example, consider
two customers with multiple sites interconnected through an SPN (refer to
part a of Figure 7.8).
Customer A has configured VLANs 1 to 100 at their sites, and similarly
Customer B has configured VLANs 1 to 50 at their sites. The tagged data
frames belonging to the customers must be kept separate while they
traverse the service provider's network. The customer's data frame can be
identified and kept separate by associating another VLAN for that
customer's traffic. This results in the tagged customer data frame being
tagged again with a second VLAN tag as it traverses the SPN (see part b of
Figure 7.8).
The additional tag is removed at the edge of the SPN when the data enters
the customer’s network again.
Stacked VLAN tagging is known as VLAN stacking or as Q-in-Q tagging.

b) Position of tags in Ethernet frame


FIGURE 7.8 Use of Stacked VLAN Tags
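As a rough sketch of how a service provider might push the second (outer) tag described above, the following Cisco IOS-style configuration uses 802.1Q tunneling on a provider-edge port; the interface number and outer VLAN 100 are assumptions for illustration, and not every switch platform supports the dot1q-tunnel mode.

! Outer (service) VLAN 100 identifies Customer A inside the SPN (values are hypothetical)
PE-Switch(config)# vlan 100
PE-Switch(config)# interface gigabitethernet0/10
PE-Switch(config-if)# switchport access vlan 100
PE-Switch(config-if)# switchport mode dot1q-tunnel
! Customer frames keep their own 802.1Q tag; the switch adds VLAN 100 as the outer tag
! The outer tag is removed at the far edge of the SPN before the frame re-enters the customer network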

7.5 TRUNKING
What is a trunk and what is trunking in networking?
A network trunk is a communications line or link designed to carry
multiple signals simultaneously to provide network access between two
points. Trunks typically connect switching centers in a communications
system. The signals can convey any type of communications data.
A networking trunk can consist of several wires, cables or fiber
optic strands bundled together in a single physical cable to maximize the
available bandwidth. Or it can consist of a single high-capacity link over
which many signals are multiplexed.

7.5.1 Types of network trunking


There are several ways to use trunking in networking, including in
broadcasting, phone systems and data networks.
Trunking in telephone systems
The term trunking in networking dates from the days of analog phone
systems. With those systems, many landline users shared a few
communication paths extending from a main trunk line, like the branches
of a tree.
Today, trunks interconnect switching network nodes, such as private
branch exchanges (PBXs) and central offices. Session Initiation Protocol
(SIP) trunk links enable voice over Internet Protocol (VoIP) to connect a
PBX to the internet. In enterprise telephony, the transition from traditional
time-division multiplexing trunks to SIP trunks began around 2009.
VoIP, also known as IP trunking, is a technology that converts human
speech to data for digital transmission via the internet. By contrast, analog
telephone lines send electrical signals across cables to convey changes in
voice.
A SIP trunk is a virtualized instance of analog telephone lines. It connects
an unlimited number of channels to a PBX system for long-distance and
international calling over the internet. The SIP trunk router must be set
to quality of service to ensure voice traffic takes priority over data-
intensive activities, such as downloading or content streaming.

Trunking in broadcasting
A trunk can also consist of a cluster of broadcast frequencies, as in a
trunked radio system that enables the sharing of a few radio
frequency channels among a large group of users. Trunked radio systems
were developed in the 1990s. They provide more efficient use of the radio
spectrum. Rather than assigning a frequency to one group, users are placed
in logical groups. All the frequencies are pooled, and computers
automatically allocate broadcast channels as users request
them. Repeaters retransmit signals and extend the coverage to a wider
area.

Computerized data networks


Data networks use the following two types of trunks:
1. trunks that carry data from multiple local area networks or virtual local
area networks (VLANs) across a single interconnect between network
switches or routers, called a trunk port.
2. trunks that bond or aggregate multiple physical links to create a single,
higher-capacity, more reliable logical link, which is called port
trunking.

Trunking and VLAN configuration


Trunking is a key architectural component of VLANs. In VLANS, a
physical network is virtualized to create several logical networks that are
independent broadcast domains. The main physical communications link
is the trunk; switches connected to the trunk provide the branches out to
support many client devices.
VLANs were an improvement over shared network hubs. A VLAN groups
client devices that frequently communicate with one another. On busy
networks, this reduces broadcast traffic congestion. It also segments data
as it goes through the switches. Trunk links pass packets of data from each
of the VLANs. This connects switches together so that each port can be
independently configured to a dedicated VLAN.

Trunk ports vs. access ports


An Ethernet interface can be configured as an access port or a trunk port
by switching the port's mode setting:
 Access port. In the switch port mode access setting, a port provides a
dedicated link to servers, routers or terminals within a single VLAN.
Access ports convey only the data traffic that matches the access value
of its pre-assigned VLAN. When configured as an access port, the
switch connects to a network host. The host presumes the arriving data
frames to be part of that VLAN. A common use case for access ports is
to connect a personal computer or peripheral device to a switch.
 Trunk ports. In switch port mode trunk setting, a port will
concurrently carry traffic between several VLAN switches on the same
physical link. A trunk port adds special identifying tags to isolate traffic
on the different switches. IEEE (Institute of Electrical and Electronics
Engineers) open standard 802.1Q describes the vendor-agnostic
encapsulation protocol for VLAN tagging. A tag gets placed
on Ethernet frames as they pass between switches. This ensures
each frame is routed to its intended VLAN at the other end of the
trunked link. A trunk port is commonly used for connecting two
switches, connecting switches to servers and routers, and connecting
hypervisors to switches.
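To make the two port roles concrete, here is a minimal Cisco IOS-style sketch of one access port and one 802.1Q trunk port; interface numbers and VLAN IDs are assumptions for this example, and some platforms do not require the encapsulation command.

! Access port: carries untagged traffic for a single VLAN (VLAN 10 here)
Switch(config)# interface fastethernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
! Trunk port: carries tagged traffic for several VLANs toward another switch
Switch(config)# interface gigabitethernet0/24
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 10,20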

Trunking to extend VLAN access


Using port trunking extends VLAN access across an entire network. This
practice is also known as link aggregation. Numerous Ethernet links are
bunched together to behave as a single, logical link. The method to
aggregate links is defined by the IEEE 802.1AX standard (formerly IEEE
802.3ad) for LANs and metropolitan area networks, as well as by various
vendor-proprietary methods. The trunking function must be
activated via parallel commands on both the sending and receiving ends.
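A hedged sketch of such parallel configuration on a Cisco IOS-style switch is shown below, bundling two links into one logical interface with LACP; interface and channel-group numbers are arbitrary, and the same commands would have to be applied on the switch at the other end of the links.

! Bundle two physical links into logical Port-channel 1 using LACP (mode active)
Switch(config)# interface range gigabitethernet0/1 - 2
Switch(config-if-range)# channel-group 1 mode active
! The logical interface can then be treated like any other port, e.g. as a VLAN trunk
Switch(config)# interface port-channel 1
Switch(config-if)# switchport mode trunk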
Before open standards existed, adding VLAN tags required the use of
proprietary protocols developed by switch makers. Cisco's VLAN tagging protocol,
known as Inter-Switch Link (ISL), encapsulates frames in a header and
trailer. ISL only works with Cisco switches. Not all switches support ISL,
and Cisco has since deprecated ISL in favor of newer switches that also
support the IEEE tagging protocol.

Trunking in networking differs from trunking in software development. In
software, a trunk refers to the primary branch of code that developers use
to iterate and make version changes. Code changes are made to the trunk,
rather than secondary branches of the code, a process that enables new
features to be added and deployed more rapidly.

7.6 SPANNING TREE


Why do we need spanning tree?
What is a loop, and how do we get one? Let me show you an example:

In the picture above, we have two switches. These switches are connected
with a single cable, so there is a single point of failure. To get rid of this
single point of failure, we will add another cable:

With the extra cable, we now have redundancy. Unfortunately for us,
redundancy also brings loops. Why do we have a loop in the scenario
above? Let me describe it to you:
1. H1 sends an ARP request because it’s looking for the MAC address of
H2. An ARP request is a broadcast frame.
2. SW1 will forward this broadcast frame on all its interfaces, except the
interface on which it received the frame.
3. SW2 will receive both broadcast frames.
Now, what does SW2 do with those broadcast frames?

1. It will forward it from every interface except the interface where it
received the frame.
2. This means that the frame that was received on interface Fa0/0 will be
forwarded on Interface Fa1/0.
3. The frame that was received on Interface Fa1/0 will be forwarded on
Interface Fa0/0.
Do you see where this is going? We have a loop! Both switches will keep
forwarding over and over again until the following happens:
 You fix the loop by disconnecting one of the cables.
 One of your switches will crash because they are overburdened with
traffic.
Ethernet frames don’t have a TTL (Time to Live) value, so they will loop
around forever. Besides ARP requests, many frames are broadcasted. For
example, whenever the switch doesn’t know about a destination MAC
address, it will be flooded.
How spanning tree solves loops
Spanning tree will help us to create a loop-free topology by blocking
certain interfaces. Let’s take a look at how spanning tree works! Here’s an
example:

We have three switches, and as you can see, we have added redundancy
by connecting the switches in a triangle, this also means we have a loop
here. I have added the MAC addresses but simplified them for this
example:
 SW1: MAC AAA
 SW2: MAC BBB
 SW3: MAC CCC
Since spanning tree is enabled, all our switches will send a special frame
to each other called a BPDU (Bridge Protocol Data Unit). In this BPDU,
there are two pieces of information that spanning tree requires:

 MAC address
 Priority
The MAC address and the priority together make up the bridge ID. The
BPDU is sent between switches as shown in the following picture:

Spanning tree requires the bridge ID for its calculation. Let me explain
how it works:
 First of all, spanning tree will elect a root bridge; this root bridge will
be the one that has the best “bridge ID”.
 The switch with the lowest bridge ID is the best one.
 By default, the priority is 32768, but we can change this value if we
want.
So who will become the root bridge? In our example, SW1 will become
the root bridge! Priority and MAC address make up the bridge ID. Since
the priority is the same on all switches, it will be the MAC address that is
the tiebreaker. SW1 has the lowest MAC address thus the best bridge ID
and will become the root bridge.
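If we did not want to rely on the MAC address as the tiebreaker, the priority could be lowered on the switch we want to win the election. A minimal Cisco IOS-style sketch is shown below; the VLAN number and priority value (which must be a multiple of 4096 on most platforms) are assumptions for this example.

! Give SW1 a lower (better) priority than the default 32768 so it is elected root for VLAN 1
SW1(config)# spanning-tree vlan 1 priority 4096
! Verify the root bridge, port roles, and costs
SW1# show spanning-tree vlan 1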
The ports on our root bridge are always designated, which means they are
in a forwarding state. Take a look at the following picture:

Above, you see that SW1 has been elected as the root bridge and the “D”
on the interfaces stands for designated.
Now that we have agreed on the root bridge, the next step is for all our
“non-root” bridges (that is, every switch that is not the root) to find
the shortest path to our root bridge! The shortest path to the root bridge
is called the “root port”. Take a look at my example:

I’ve put an “R” for “root port” on SW2 and SW3. Their Fa0/0 interface
is the shortest path to get to the root bridge. In my example, I’ve kept
things simple, but “shortest path” in spanning tree means it will actually
look at the speed of the interface. Each interface has a certain cost, and
the path with the lowest cost will be used. Here’s an overview of the
interfaces and their cost:
 10 Mbit = Cost 100
 100 Mbit = Cost 19
 1000 Mbit = Cost 4
Excellent! We have designated ports on our root bridge and root ports on
our non-root bridges. However, we still have a loop, so we need to shut
down a port between SW2 and SW3 to break that loop. So which port are
we going to shut down? The one on SW2 or the one on SW3? We’ll look
again at the best bridge ID:
 Bridge ID = Priority + MAC address.
Lower is better, both switches have the same priority, but the MAC
address of SW2 is lower, which means that SW2 will “win this battle”.
SW3 is our loser here which means it will have to block its port,
effectively breaking our loop! Take a look at my example:


7.7 INTRODUCTION TO OSPF


Link-state routing protocols are like your navigation system, they have a
complete map of the network. If you have a full map of the network you
can calculate the shortest path to all the different destinations out there.
This is cool because if you know about all the different paths, it’s
impossible to get a loop since you know everything! The downside is that
this is more CPU intensive than a distance vector routing protocol. It’s just
like your navigation system…if you calculate a route from New York to
Los Angeles, it’s going to take a bit longer than when you calculate a
route from one street to another street in the same city.

 Link: That’s the interface of our router.


 State: Description of the interface and how it’s connected to
neighbor routers.

Link-state routing protocols operate by sending link-state advertisements (LSA) to all other link-state routers.
All the routers need to have these link-state advertisements so they can
build their link-state database or LSDB. Basically, all the link-state
advertisements are a piece of the puzzle that builds the LSDB.
If you have a lot of OSPF routers, it might not be very efficient that each
OSPF router floods its LSAs to all other OSPF routers. Let me show you
an example:

Above, we have a network with 8 OSPF routers connected on a switch.


Each of those routers is going to become OSPF neighbors with all of the
other routers…sending hello packets, flooding LSAs, and building the
LSDB. This is what will happen:

7.7.1 Basic OSPF Configuration


This is the topology that we’ll use. All routers are in OSPF Area 0. Note
that the link between R2 and R1 is an Ethernet (10Mbit) link. All other
links are FastEthernet (100Mbit) interfaces.
We’ll start with the configuration between R2 and R3:

R2(config)#router ospf 1
R2(config-router)#network 192.168.23.0 0.0.0.255 area 0
R3(config)#router ospf 1

R3(config-router)#network 192.168.23.0 0.0.0.255 area 0

I need to use the router ospf command to get into the OSPF configuration.
The number “1” is a process ID and you can choose any number you like.
It doesn’t matter and if you want you can use a different number on each
router.
The second step is to use the network command. It works similarly to RIP
but is slightly different; let me break it down for you:

network 192.168.23.0 0.0.0.255

Just like RIP the network command does two things:


 Advertise the networks that fall within this range in OSPF.
 Activate OSPF on the interface(s) that fall within this range. This
means that OSPF will send hello packets on the interface.
Behind 192.168.23.0 you can see it says 0.0.0.255. This is not a subnet
mask but a wildcard mask. A wildcard mask is a reverse subnet mask.
Let me give you an example:

When I say reverse subnet mask I mean that the binary 1s and 0s of the
wildcard mask are flipped compared to the subnet mask:

Subnet mask    255.255.255.0  =  11111111 11111111 11111111 00000000
Wildcard mask  0.0.0.255      =  00000000 00000000 00000000 11111111

A subnet mask of 255.255.255.0 is the same as wildcard mask 0.0.0.255. Don’t worry about
this too much for now as I’ll explain wildcard masks to you when we talk
about access-lists!
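As one more quick illustration (using a hypothetical 192.168.24.0/30 point-to-point link that is not part of the topology above), a /30 subnet mask of 255.255.255.252 flips into the wildcard mask 0.0.0.3:

! Hypothetical example: activate OSPF only on the interface in 192.168.24.0/30
R2(config)# router ospf 1
R2(config-router)# network 192.168.24.0 0.0.0.3 area 0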
OSPF uses areas so you need to specify the area:

area 0

In our example we have configured single area OSPF. All routers belong
to area 0.
After typing in my network command you’ll see this message in the
console:

R3# %OSPF-5-ADJCHG: Process 1, Nbr 192.168.23.2 on FastEthernet0/0 from LOADING to FULL, Loading Done
R2# %OSPF-5-ADJCHG: Process 1, Nbr 192.168.23.3 on FastEthernet1/0 from LOADING to FULL, Loading Done

It seems that R3 and R2 have become neighbors. There’s another command we can use to verify that we have become neighbors:

R3#show ip ospf neighbor


Neighbor ID Pri State Dead Time Address Interface
192.168.23.2 1 FULL/BDR 00:00:36 192.168.23.2 FastEthernet0/0
R2#show ip ospf neighbor
Neighbor ID Pri State Dead Time Address Interface

192.168.23.3 1 FULL/DR 00:00:32 192.168.23.3 FastEthernet1/0

Show ip ospf neighbor is a great command to see if your router has OSPF
neighbors. When the state is full you know that the routers have
successfully become neighbors.

Each OSPF router has a router ID and we check it with the show ip
protocols command:

R2#show ip protocols
Routing Protocol is "ospf 1"
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set

Router ID 192.168.23.2
R3#show ip protocols
Routing Protocol is "ospf 1"
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set

Router ID 192.168.23.3

Above you see the router ID of R2 and R3. They used their highest active
IP address as the router ID. Let’s create a loopback on R2 to see if the
router ID changes…

R2(config)#interface loopback 0

R2(config-if)#ip address 2.2.2.2 255.255.255.0

This is how you create a loopback interface. You can pick any number that
you like; it really doesn’t matter.

R2#show ip protocols
Routing Protocol is "ospf 1"
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set

Router ID 192.168.23.2

The router ID is still the same. We need to reset the OSPF process before the
change will take effect, this is how you do it:

R2#clear ip ospf process

Reset ALL OSPF processes? [no]: yes

Use clear ip ospf process to reset OSPF. Let’s see if there is a difference:
R2#show ip protocols
Routing Protocol is "ospf 1"
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set

Router ID 2.2.2.2

We can also change the router ID manually. Let me demonstrate this on R3:

R3#show ip protocols
Routing Protocol is "ospf 1"
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set

Router ID 192.168.23.3

Right now it’s 192.168.23.3…

R3(config-router)#router-id 3.3.3.3

Reload or use "clear ip ospf process" command, for this to take effect
R3#clear ip ospf process
Reset ALL OSPF processes? [no]: yes

The router is friendly enough to warn me to reload or clear the OSPF process. Let’s verify our configuration:

R3#show ip protocols
Routing Protocol is "ospf 1"
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set

Router ID 3.3.3.3

As you can see above the router ID is now 3.3.3.3.

Right now we have an OSPF neighbor adjacency between R2 and R3.
Let’s configure our routers so that R2/R1 and R1/R3 also become OSPF
neighbors:

R2(config)#router ospf 1
R2(config-router)#network 192.168.12.0 0.0.0.255 area 0
R1(config)#router ospf 1
R1(config-router)#network 192.168.12.0 0.0.0.255 area 0
R1(config-router)#network 192.168.13.0 0.0.0.255 area 0
R3(config)#router ospf 1
R3(config-router)#network 192.168.13.0 0.0.0.255 area 0

I’ll advertise all networks in OSPF. Before we check the routing table it’s
a good idea to see if our routers have become OSPF neighbors:

R2#show ip ospf neighbor


Neighbor ID Pri State Dead Time Address Interface
192.168.13.1 1 FULL/BDR 00:00:31 192.168.12.1 Ethernet0/0
3.3.3.3 1 FULL/DR 00:00:38 192.168.23.3 FastEthernet1/0

R1#show ip ospf neighbor


Neighbor ID Pri State Dead Time Address Interface
3.3.3.3 1 FULL/BDR 00:00:33 192.168.13.3 FastEthernet1/0
2.2.2.2 1 FULL/DR 00:00:30 192.168.12.2 Ethernet0/0
R3#show ip ospf neighbor
Neighbor ID Pri State Dead Time Address Interface
192.168.13.1 1 FULL/DR 00:00:37 192.168.13.1 FastEthernet1/0
2.2.2.2 1 FULL/BDR 00:00:30 192.168.23.2 FastEthernet0/0

Excellent! Our routers have become OSPF neighbors and the state is full
which means they are done exchanging information. Let’s check the
routing tables:

R2#show ip route ospf

O   192.168.13.0/24 [110/2] via 192.168.23.3, 00:09:45, FastEthernet1/0

7.8 INTRODUCTION TO BGP
Why do we need BGP?
Let’s start by looking at some scenarios so you can understand why and
when we need BGP:

Nowadays almost everything is connected to the Internet. In the picture
above we have a customer network connected to an ISP (Internet Service
Provider). Our ISP is making sure we have Internet access. Our ISP has
given us a single public IP address we can use to access the Internet. To
make sure everyone on our LAN at the customer side can access the
Internet we are using NAT/PAT (Network / Port address translation) to
translate our internal private IP addresses to this single public IP address.
This scenario is excellent when you only have clients that need Internet
access. On our customer LAN we only need a default route pointing to the
ISP router and we are done. For this scenario we don’t need BGP…

Maybe the customer has a couple of servers that need to be reachable from
the Internet…perhaps a mail- or webserver. We could use port forwarding
and forward the correct ports to these servers so we still only need a single
IP address. Another option would be to get more public IP addresses from
our ISP and use these to configure the different servers. For this scenario
we still don’t need BGP…


What if I want a bit more redundancy? Having a single point of failure
isn’t a good idea. We could add another router at the customer side and
connect it to the ISP. You can use the primary link for all traffic and have
another link as the backup. We still don’t require BGP in this situation, it
can be solved with default routing:
 Advertise a default route in your IGP on the primary customer router
with a low metric.
 Advertise a default route in your IGP on the secondary customer router
with a high metric.
This will make sure that your IGP sends all traffic using the primary link.
Once the link fails your IGP will make sure all traffic is sent down the
backup link. Let me ask you something to think about…can we do any
load balancing across those two links? It’ll be difficult right?
Your IGP will send all traffic down the primary link and nothing down the
backup link unless there is a failure. You could advertise a default route
with the same metric but you’d still have something like a 50/50% load
share. What if I wanted to send 80% of the outgoing traffic on the primary
link and 20% down the backup link? That’s not going to happen here but
with BGP it’s possible.
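For reference, the default-route approach described above could look roughly like the following on the two customer routers, assuming OSPF as the IGP and that each router already has a default route pointing to its ISP link; the router names and metric values are made up for this sketch.

! Primary customer router: inject the default route with a low (preferred) metric
R-PRIMARY(config)# router ospf 1
R-PRIMARY(config-router)# default-information originate metric 10
! Backup customer router: the same default route, but with a worse metric
R-BACKUP(config)# router ospf 1
R-BACKUP(config-router)# default-information originate metric 100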

This scenario is a bit more interesting. Instead of being connected to a
single ISP we now have two different ISPs. For redundancy reasons it’s
important to have two different ISPs, in case one fails you will always

have a backup ISP to use. What about our Customer network? We still
have two servers that need to be reachable from the Internet.
In my previous examples we got public IP addresses from our ISP. Now
I’m connected to two different ISPs so what public IP addresses should I
use? From ISP1 or ISP2? If we use public IP addresses from ISP1 (or
ISP2) then these servers will be unreachable once the ISP has connectivity
issues.
Instead of using public IP addresses from the ISP we will get our own
public IP addresses. The IP address space is maintained by IANA (Internet
Assigned Numbers Authority – http://www.iana.org/). IANA is assigning
IP address space to a number of large Regional Internet Registries
like RIPE or ARIN. Each of these assign IP address space to ISPs or large
organizations.
When we receive our public IP address space then we will advertise this to
our ISPs. Advertising is done with a routing protocol and that will be
BGP.
If you are interested here’s an overview of the IPv4 space that has been
allocated by IANA:
IANA IPv4 address space
Autonomous Systems
Besides getting public IP address space we also have to think about an AS
(Autonomous System):

An AS is a collection of networks under a single administrative domain.


The Internet is nothing more than a bunch of autonomous systems that are
connected to each other. Within an autonomous system we use an IGP like
OSPF or EIGRP.
For routing between the different autonomous systems we use an EGP
(external gateway protocol). The only EGP we use nowadays is BGP.
How do we get an autonomous system number? Just like public IP address
space you’ll need to register one.
Autonomous system numbers are 16-bit which means we have 65535
numbers to choose from. Just like private and public IP addresses, we have
a range of public and private AS numbers.
The range 1 – 64511 contains globally unique AS numbers, and the range
64512 – 65535 contains private autonomous system numbers.
BGP has two flavors:
 External BGP: used between autonomous systems
 Internal BGP: used within the autonomous system.
External BGP is to exchange routing information between the different
autonomous systems. In this lesson I explain why we need internal BGP. I
would recommend reading it after finishing this lesson and learning
about external BGP first.
BGP Advertisements
You now have an idea of why we require BGP and what autonomous
systems are. The Internet is a big place; as I am writing this, there are more
than 500,000 prefixes in a complete Internet routing table. If you are
curious, you can find the size of the Internet routing table here:

CIDR Report
On the internet there are a number of looking glass servers. These are
routers that have public view access and you can use them to look at the
Internet routing table. If you want to see what it looks like check out:

Looking glass servers


Scroll down all the way to “Category 2 – IPv4 and IPv6 BGP Route
Servers by region (TELNET access)”. You can telnet to these devices and
use show ip route and show ip bgp to check the BGP or routing table.
When we run BGP, does this mean we have to learn more than 500,000
prefixes? It depends…let’s look at some examples:

Above in our picture our customer network has an autonomous system
number (AS 1) and some IP address space (10.0.0.0 /8), let’s pretend that
these are public IP addresses. We are connected to two different ISPs and
you can see their AS number (AS2 and AS3) and IP address space
(20.0.0.0/8 and 30.0.0.0/8). We can reach the rest of the internet through
both ISPs.
We can use BGP to advertise our address space to the ISPs but what are
the ISPs going to advertise to our customer through BGP? There are a
number of options:
 They advertise only a default route.
 They advertise a default route and a partial routing table.
 They advertise the full Internet routing table.

7.8.1 How to configure EBGP (External BGP)


In this lesson I will show you how to configure EBGP (External BGP) and
how to advertise networks. I will be using the following topology:

Let’s start with a simple topology. Just two routers and two autonomous
systems. Each router has a network on a loopback interface, which we will
advertise in BGP.

R1(config)#router bgp 1
R1(config-router)#neighbor 192.168.12.2 remote-as 2
R2(config)#router bgp 2

R2(config-router)#neighbor 192.168.12.1 remote-as 1

Use the router bgp command with the AS number to start BGP.
Neighbors are not configured automatically. This is something you’ll have
to do yourself with the neighbor x.x.x.x remote-as command. This is how
we configure external BGP.

R1# %BGP-5-ADJCHANGE: neighbor 192.168.12.2 Up

R2# %BGP-5-ADJCHANGE: neighbor 192.168.12.1 Up

If everything goes ok, you should see a message that we have a new BGP
neighbor adjacency.

R1(config)#router bgp 1
R1(config-router)#neighbor 192.168.12.2 password MYPASS
R2(config)#router bgp 2
R2(config-router)#neighbor 192.168.12.1 password MYPASS

If you like, you can enable MD5 authentication by using the neighbor
password command. Your router will calculate an MD5 digest of every
TCP segment sent.

R1#show ip bgp summary


BGP router identifier 1.1.1.1, local AS number 1
BGP table version is 1, main routing table version 1
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down
State/PfxRcd
192.168.12.2 4 2 10 10 1 0 0 00:07:12 0
R2#show ip bgp summary
BGP router identifier 2.2.2.2, local AS number 2
BGP table version is 1, main routing table version 1
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down
State/PfxRcd
192.168.12.1 4 1 11 11 1 0 0 00:08:33 0

Show ip bgp summary is an excellent command to check if you have
BGP neighbors. You also see how many prefixes you received from each
neighbor.
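The loopback networks mentioned at the start of this example have not been advertised yet, which is why PfxRcd still shows 0. A possible way to advertise them, assuming the loopbacks are 1.1.1.1/32 on R1 and 2.2.2.2/32 on R2 (an assumption based on the router identifiers shown above), is the BGP network command:

R1(config)# router bgp 1
R1(config-router)# network 1.1.1.1 mask 255.255.255.255
R2(config)# router bgp 2
R2(config-router)# network 2.2.2.2 mask 255.255.255.255
! After this, show ip bgp summary should report one prefix received from each neighbor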

7.9 LIST OF REFERENCES


1. TCP/IP Protocol Suite, Behrouz A. Forouzan, McGraw Hill Education,
4th Edition, 2017
2. Foundations of Modern Networking: SDN, NFV, QoE, IoT, and
Cloud, William Stallings, Addison-Wesley Professional, 2016.
3. Software Defined Networks: A Comprehensive Approach, Paul
Goransson and Chuck Black, Morgan Kaufmann Publications, 2014
4. SDN - Software Defined Networks by Thomas D. Nadeau & Ken
Gray, O'Reilly, 2013

7.10 UNIT END EXERCISES


1) Explain Switching at Different Layers?
2) Explain Layer 2 Switching in detail?
3) Explain Layer 3 Switching in detail?
4) Write short note on “VLAN”.
5) What is trunk and what is trunking in networking?
6) Explain the types of network trunking?
7) Why do we need spanning tree?
8) How spanning tree solves loops?
9) Write short note on “OSPF”.
10) Write short note on “BGP”.




8
IMPLEMENTATION OF ROUTING
Unit Structure :
8.0 Objectives
8.1 Introduction
8.2 Multicast Routing
8.2.1 Multicast in Datacenter
8.2.2 Multicast Routing in SDN
8.2.3 Multicast Tree Packing in SDN
8.3 MPLS
8.4 Implementation of Traffic Filtering by using Standard and Extended
Access Control List
8.4.1 Access-list (ACL)
8.4.2 Standard Access-list
8.4.3 Extended Access-list
8.5 Introduction to Routing Redistribution
8.6 Redistribution between EIGRP and OSPF
8.6.1 Redistribute OSPF into EIGRP
8.6.2 Verification
8.6.3 Redistribute EIGRP into OSPF
8.6.4 Verification
8.7 Verification

8.0 OBJECTIVES
 Understand and Implement Multicast Routing

 Understand and implement MPLS

 Describe and implement traffic filtering using standard and extended
access list

 Understand and implement Redistribution between EIGRP and OSPF

8.1 INTRODUCTION
Multicast is an important communication way, which addresses how to
distribute the data from one or many sources to a group of destination
computers simultaneously. The typical application examples include video
conference, video-on-demand and file distribution. Multicast can be
classified into two main types, i.e. IP multicast and application layer
multicast (ALM). IP multicast is a technique for one-to-many
communication over an IP infrastructure in a network. The nodes in the
network (switches and routers) take care of replicating the packet to reach
multiple receivers such that messages are sent over each link of the
network only once. Therefore the forwarding efficiency of IP multicast is
very high. However, it has not been widely deployed in current Internet
because of some limitations such as dependence on the supports of
network infrastructures and rapid resource-consuming of routers. As an
alternative of IP multicast, ALM is implemented at the application layer,
using only end-systems. Participating peers organize themselves into an
overlay topology, where each edge in this topology corresponds to a
unicast path between two end-systems or peers.
Recently emerging software-defined networking separates the network
control plane from the data forwarding plane with the promise to
dramatically improve network resource utilization, simplify network
management, reduce operating cost, and promote innovation and
evolution. In SDN, the controller can collect information from network
devices and change the traffic flow settings. With the full knowledge of
network condition, the SDN controller can adaptively set up different
routes for different flows to maximize the service utility. In this chapter, we
present a survey of multicast in software-defined networks.

8.2 MULTICAST ROUTING


Typical SDN Architecture
In this section, we review two well-known
SDN architectures, i.e. ONF OpenFlow-based SDN and IETF ForCES.
The above two architectures each follow the basic principle of separation
between the control and data planes, and each standardize information
exchange between planes. However, there exist some differences on the
architecture design.
The ONF SDN architecture comprises three layers, i.e. data plane, control
plane and application plane, as Figure 8.1 shows. The network elements in
the data plane expose their capabilities toward the control layer. In the
Controller Plane, the SDN controller translates the applications’
requirements and exerts more granular control over the network elements.
Services are offered to applications via the application-controller plane
interface. An SDN controller may orchestrate competing application
demands for limited network resources. SDN applications reside in the
Application Plane, and communicate their network requirements toward
the Controller Plane.


Figure 8.1 :- ONF SDN


The IETF ForCES (Forwarding and Control Element Separation)
redefines the network device’s internal architecture, in which the control
element separates from the forwarding element, as Figure 8.2 explains.
However, the network device is still represented as a single entity. Unlike
OpenFlow-based SDN, the control and data planes are kept within close
proximity (e.g., same box or room). ForCES defines two logic entities
called the Forwarding Element (FE) and the Control Element (CE), which
each implement the communication protocol. The FE uses the underlying
hardware to provide per-packet handling. The CE executes control and
signaling functions and uses the ForCES protocol to instruct FEs on how
to handle packets.

Figure 8.2. ForCES

8.2.1 Multicast in Datacenter

Figure 8.3. Architecture of the Avalanche OpenDaylight implementation


Iyer et al. presented an SDN based system, called Avalanche, which
enables multicast in commodity switches used in data centers. Avalanche
adopts a new multicast routing algorithm called Avalanche Routing
Algorithm (AvRA) that attempts to minimize the size of the routing tree.
In typical data center topologies like Tree and FatTree, AvRA tends to
solve the Steiner Tree problem. This solution uses the SDN technique to
take advantage of the rich path diversity commonly available in data
centers networks, and thereby achieves high bandwidth utilization. Figure
8.3 presents the architecture of the implementation of Avalanche based on
OpenDaylight.
The multicast addressing and routing scale to much larger numbers of
multicast groups than in previous designs, while providing greater
robustness to switch and link failures. This literature presents a general
method for scaling out the number of supported multicast groups. Rather
than treating each switch as an independent entity, it leverages ideas from
scale-out storage systems to partition the multicast address space and
distribute address partitions across cooperating switches. This literature
also introduces a novel indirection and rewriting mechanism that
aggregates local groups into virtual meta-groups for addressing and
routing, and uses local multicast address aggregation to increase the
network’s group capacity. In addition, this literature provides some
mechanisms based on a fast failover through local multicast rerouting,
which are resilient and adapt quickly to switch and link failures. The
authors implement the above methods using Open Flow-compliant

120
switches, which support prefix forwarding, multicast addresses, packet Implementation of
rewriting, and a remotely configurable forwarding plane. Routing

8.2.2 Multicast Routing in SDN


The scalability problem in SDN is more serious than that in traditional
network because the network traffic is more difficult to be aggregated. To
attempt to address the above problem, a new multicast tree for SDN,
named Branch-aware Steiner Tree (BST), the BST problem is NP-Hard. It
presents an approximation algorithm, called Branch Aware Edge
Reduction Algorithm (BAERA). BAERA includes two phases, Edge
Optimization Phase and Branch Optimization Phase, to effectively
minimize the number of edges and branch nodes. In the first phase,
BAERA iteratively chooses and adds a terminal node in K (the set of
destination nodes) to the solution tree T(VT, ET) for constructing a basic
BST, where VT and ET denote the nodes and edges currently in T,
respectively, at each iteration. Branch
Optimization Phase re-routes the tree T to reduce the number of branch
nodes. Branch Optimization Phase includes two steps: 1) Deletion Step
and 2) Alternation Step. Deletion Step first tries to remove some branch
nodes in T obtained from Edge Optimization Phase, and then Alternation
Step tries to iteratively move each of remaining branch nodes to its
neighbor node.
Jiang et al. employed an Extended Dijkstra’s Algorithm to implement
load balancing and multicast in SDN. The extended Dijkstra’s algorithm
considers not only the edge weights but also the node weights for finding
shortest paths from a source node to all other nodes in a given graph. Jiang
et al. adopt the concept of virtual IP (VIP) for achieving load-balancing.
The client just sends a request to the VIP, and the request will be deflected
to one of the multiple servers. The basic idea of proposed load balance
algorithm is to forward each request to the nearest server with the link
load lower than a pre-specified threshold. If all the servers have link loads
larger than the threshold, the algorithm chooses the nearest server, which
can prevent congestion on the servers. The proposed multicast algorithm is
based on the multicast tree construction algorithm using the extended
Dijkstra’s algorithm for a multicast group publisher to send data packets to
all members in the corresponding multicast group.

8.2.3 Multicast Tree Packing in SDN


The network might carry many concurrent multicast sessions. Considering
each multicast session in isolation may cause congestion on some links
and reduce network utilization. The optimized sharing of network
resources (i.e. nodes and links) among multiple coexisting multicast trees
is formulated as a packing problem in which the network tries to
accommodate all the multicast groups by optimizing the utilization of
resources . Most existing multicast tree packing solutions attempt to
minimize the total link cost based the least cost tree (Steiner tree) whereby
they can use limited reserved bandwidth to accommodate as many as
possible coexisting multicast sessions. Packing multicast trees fully using
available network resource can provide effective user-oriented
121
Software Defined optimization. However, it is difficult to monitor links' practical traffic and
Networking make a global adjustment on the running routing scheme to accommodate
new group members and multicast groups.
The SDN technique provides new power for the multicast tree packing
because it can monitor links' practical traffic and make a global adjustment
on the running routing scheme to accommodate new group members and
multicast groups.

8.3 MPLS
MPLS was invented in the late 1990s, at a time when Asynchronous
Transfer Mode (ATM) was a widespread WAN technology.
ATM had some virtues: multiservice, asynchronous transport, class of
service, reduced forwarding state, predictability, and so on. But it had at
least as many defects: no tolerance to data loss or reordering, a forwarding
overhead that made it unsuitable to high speeds, no decent multipoint, lack
of a native integration with IP, and so forth.
MPLS learned from the instructive ATM experience, taking advantage of
its virtues while solving its defects. Modern MPLS is an asynchronous
packet-based forwarding technology. In that sense, it is similar to IP, but
MPLS has a much lighter forwarding plane and it greatly reduces the
amount of state that needs to be signaled and programmed on the devices.

MPLS in Action
Probably the best way to understand MPLS is by looking at a real
example, such as that
shown in figure 8.4

Figure 8.4. MPLS in action


Figure 8.4 shows two unidirectional MPLS Label-Switched Paths (LSPs)
named PE1→PE4 and PE4→PE2. Let’s begin with the first one. An

IPv4 H1→H3 (10.1.12.10→10.2.34.30) packet arrives at PE1, which
leads to the following:

1. H3 is reachable through PE4, so PE1 places the packet in the


PE1→PE4 LSP. It does so by inserting a new MPLS header between
the IPv4 and the Ethernet headers of the H1→H3 packet. This header
contains MPLS label 1000001, which is locally significant to P1. In
MPLS terminology, this operation is a label push. Finally, PE1 sends
the packet to P1.
2. P1 receives the packet and inspects and removes the original MPLS
header. Then, P1 adds a new MPLS header with label 1000002, which
is locally significant to P2, and sends the packet to P2. This MPLS
operation is called a label swap.
3. P2 receives the packet, inspects and removes the MPLS header, and
then sends the plain IPv4 packet to PE4. This MPLS operation is called
a label pop.
4. PE4 receives the IPv4 packet without any MPLS headers. This is fine
because PE4 speaks BGP and is aware of all the IPv4 routes, so it
knows how to forward the packet toward its destination.
The H3→H1 packet travels from PE4 to PE2 in a shorter LSP where only
two MPLS operations take place: label push at PE4 and label pop at P2.
There is no label swap.
Note: These LSPs happen to follow the shortest IGP path between their
endpoints. This is not mandatory and it is often not the case.

Router roles in an LSP


Looking back at figure 8.4, the PE1→PE4 LSP starts at PE1, traverses P1
and P2, and ends... at P2 or at PE4? Let’s see. By placing the packet in the
LSP, PE1 is basically sending it to PE4. Indeed, when P2 receives a packet
with label 1000002, the forwarding instruction is clear: pop the label and
send the packet out of the interface Gi 0/0/0/5. So the LSP ends at PE4.
Note: The H1→H3 packet arrives unlabeled to PE4 by virtue of a
mechanism called Penultimate Hop Popping (PHP) executed by P2.
Following are the different router roles from the point of view of the
PE1→PE4 LSP. For each of these roles, there are many terms and
acronyms:
 PE1 Ingress PE, Ingress Label Edge Router (LER), LSP Head-End,
LSP Upstream Endpoint. The term ingress comes from the fact that
user packets like H1→H3 enter the LSP at PE1, which acts as an
entrance or ingress point.
 P1 (or P2) Transit P, P-Router, Label Switching Router (LSR), or
simply P.

 PE4 Egress PE, Egress Label Edge Router (LER), LSP Tail-End,
LSP Downstream Endpoint. The term egress comes from the fact that
user packets such as H1→H3 exit the LSP at this PE.

The MPLS Header


Paraphrasing Ivan Pepelnjak, technical director of NIL Data
Communications, in his www.ipspace.net blog:
MPLS is not tunneling, it’s a virtual-circuits-based technology, and the
difference between the two is a major one. You can talk about tunneling
when a protocol that should be lower in the protocol stack gets
encapsulated in a protocol that you’d usually find above or next to it.
MAC-in-IP, IPv6-in-IPv4, IP-over-GRE-over-IP... these are tunnels. IP-
over-MPLS-over-Ethernet is not tunneling.
It is true, however, that MPLS uses virtual circuits, but they are not
identical to tunnels. Just because all packets between two endpoints follow
the same path and the switches in the middle don’t inspect their IP
headers, doesn’t mean you use a tunneling technology.
MPLS headers are elegantly inserted in the packets. Their size is only 4
bytes. Example 6-1 presents a capture of the H1→H3 packet as it traverses
the P1-P2 link.
Example 6-1. MPLS packet on-the-wire

1 Ethernet II, Src: MAC_P1_ge-2/0/3, Dst: MAC_P2_gi0/0/0/2


2 Type: MPLS label switched packet (0x8847)
3 MultiProtocol Label Switching Header
4 1111 0100 0010 0100 0010 .... .... .... = Label: 1000002
5 .... .... .... .... .... 000. .... .... = Traffic Class: 0
6 .... .... .... .... .... ...1 .... .... = Bottom of Stack: 1
7 .... .... .... .... .... .... 1111 1100 = MPLS TTL: 252
8 Internet Protocol Version 4, Src: 10.1.12.10, Dst: 10.2.34.30
9 Version: 4
10 Header Length: 20 bytes
11 Differentiated Services Field: 0x00
12 # IPv4 Packet Header Details and IPv4 Packet Payload

Here is a description of the 32 bits that compose an MPLS header:


1. The first 20 bits (line 4) are the MPLS label.
2. The next 3 bits (line 5) are the Traffic Class. In the past, they were
called the experimental bits. This field is semantically similar to the
first 3 bits of the IPv4 header’s Differentiated Services Code Point
(DSCP) field (line 11).
3. The next 1 bit (line 6) is the Bottom of Stack (BoS) bit. It is set to value
1 only if this is the MPLS header in contact with the next protocol (in
this case, IPv4) header. Otherwise, it is set to zero. This bit is important
because the MPLS header does not have a type field, so it needs the
BoS bit to indicate that it is the last header before the MPLS payload.
4. The next 8 bits (line 7) are the MPLS Time-to-Live (TTL). Like the IP
TTL, the MPLS TTL implements a mechanism to discard packets in
the event of a forwarding loop. Typically the ingress PE decrements the
IP TTL by one and then copies its value into the MPLS TTL. Transit P-
routers decrement the MPLS TTL by one at each hop. Finally, the
egress PE copies the MPLS TTL into the IP TTL and then decrements
its value by one. You can tune this default implementation in both
Junos and IOS XR.
Figure 8.5 shows two other label operations that have not been described
so far:
 The first incoming packet has a two-label stack. You can see the usage
of the BoS bit. The swap operation only affects the topmost (outermost)
label.
 The second incoming packet initially has a one-label stack, and it is
processed by a composite label operation: swap and push. The result is
a two-label stack.

Figure 8.5 . Other MPLS operations


MPLS Configuration and Forwarding Plane
MPLS interface configuration
The first step is to enable MPLS on the interfaces on which you want to
forward MPLS packets. Example 6-2 shows the Junos configuration of
one interface at PE1.
Example 6-2. MPLS interface configuration—PE1 (Junos)

1 interfaces {
2 ge-2/0/4 {
3 unit 0 {
4 family mpls;
5 }}}
6 protocols {
7 mpls {
8 interface ge-2/0/4.0;
9 }}

Lines 1 through 4 enable the MPLS encapsulation on the interface, and
lines 6 through 8 enable the interface for MPLS protocols. Strictly
speaking, the latter configuration block is not always needed, but it is a
good practice to systematically add it.
In IOS XR, there is no generic MPLS configuration. You need to enable
the interface for each of the MPLS flavors that you need to use. This
chapter features the simplest of all the MPLS flavors: static
MPLS. Example 6-3 presents the configuration of one interface at PE4.
Example 6-3. MPLS interface configuration—PE4 (IOS XR)

mpls static
interface GigabitEthernet0/0/0/0
!

Label-switched path PE1→PE4—configuration


Remember that H1→H3 packets go through PE1 and PE4. You need an
LSP that takes these packets from PE1 to PE4. Let’s make the LSP follow
the path PE1-P1-P2-PE4 that we saw in figure 8.4
Example 6-4 gives the full configuration along the path.
Example 6-4. LSP PE1→PE4 configuration—Junos and IOS XR

#PE1 (Junos)


protocols {
mpls {
static-label-switched-path PE1--->PE4 {
ingress {
next-hop 10.0.0.3;
to 172.18.0.44;
push 1000001;
}}}}

#P1 (Junos)

protocols {
mpls {
icmp-tunneling;
static-label-switched-path PE1--->PE4 {
transit 1000001 {
next-hop 10.0.0.7;
swap 1000002;
}}}}

#P2 (IOS XR)

mpls static
address-family ipv4 unicast

local-label 1000002 allocate


forward
path 1 nexthop GigabitEthernet0/0/0/5 10.0.0.11 out-label pop

!

PE4 receives plain IPv4 packets from P2, so it does not require any LSP-
specific configuration.
Labels 1000001 and 1000002 are locally significant to P1 and P2,
respectively. Their numerical values could have been identical and they
would still correspond to different instructions because they are not
interpreted by the same LSR.
LSP PE1→PE4—forwarding plane
It’s time to inspect the forwarding instructions that steer the H1→H3 IPv4
packet through the PE1→PE4 LSP. Let’s begin at PE1, which is shown
in Example 6-5.
Example 6-5. Routing and forwarding state at the ingress PE—PE1
(Junos)

1 juniper@PE1> show route receive-protocol bgp 172.18.0.201


2 10.2.34.30 active-path
3
4 inet.0: 36 destinations, 45 routes (36 active, ...)
5 Prefix Nexthop MED Lclpref AS path
6 * 10.2.34.0/24 172.18.0.44 100 100 65002 I
7
8 juniper@PE1> show route 172.18.0.44
9
10 inet.0: 36 destinations, 45 routes (36 active, ...)
11 + = Active Route, - = Last Active, * = Both
12
13 172.18.0.44/32 *[IS-IS/18] 1d 11:22:00, metric 30
14 > to 10.0.0.3 via ge-2/0/4.0
15
16 inet.3: 1 destinations, 1 routes (1 active, ...)
17 + = Active Route, - = Last Active, * = Both

18

19 172.18.0.44/32 *[MPLS/6/1] 05:00:00, metric 0


20 > to 10.0.0.3 via ge-2/0/4.0, Push 1000001
21
22 juniper@PE1> show route 10.2.34.30 active-path
23
24 inet.0: 36 destinations, 45 routes (36 active...)
25 + = Active Route, - = Last Active, * = Both
26
27 10.2.34.0/24 *[BGP/170] 06:37:28, MED 100, localpref 100,
28 from 172.18.0.201, AS path: 65002 I
29 > to 10.0.0.3 via ge-2/0/4.0, Push 1000001
30
31 juniper@PE1> show route forwarding-table destination
10.2.34.30
32 Routing table: default.inet
33 Internet:
34 Destination Next hop Type Index NhRef Netif
35 10.2.34.0/24 indr 1048575 3
36 10.0.0.3 Push 1000001 513 2 ge-2/0/4.0
37
38 juniper@PE1> show mpls static-lsp statistics name PE1--->PE4
39 Ingress LSPs:
40 LSPname To State Packets Bytes
41 PE1--->PE4 172.18.0.44 Up 27694 2768320

The best BGP route to the destination 10.2.34.30 (H3) has a BGP next-hop
attribute (line 6) equal to 172.18.0.44. There are two routes toward
172.18.0.44 (PE4’s loopback):
 An IS-IS route in the global IPv4 routing table inet.0 (lines 10 through
14).
 An MPLS route in the inet.3 auxiliary table (lines 16 through 20). The
static LSP configured in Example 6-4 automatically installs this MPLS
route.
The goal of the inet.3 auxiliary table is to resolve BGP next hops (line 6)
into forwarding next hops (line 20). Indeed, the BGP route 10.2.34.0/24 is
installed in inet.0 with a labeled forwarding next hop (line 29) that is
copied from inet.3 (line 20). Finally, the BGP route is installed in the
forwarding table (lines 31 through 36) and pushed to the forwarding
engines.
The fact that Junos has an auxiliary table (inet.3) to resolve BGP next hops
is quite relevant. Keep in mind that Junos uses inet.0 and not inet.3 to
program the forwarding table.

8.4 IMPLEMENTATION OF TRAFFIC FILTERING BY


USING STANDARD AND EXTENDED ACCESS
CONTROL LIST
8.4.1 Access-list (ACL) is a set of rules defined for controlling network
traffic and reducing network attacks. ACLs are used to filter traffic based
on the set of rules defined for the incoming or outgoing of the network.
ACL features –
1. The set of rules defined are matched serially, i.e., matching starts
with the first line, then the 2nd, then the 3rd, and so on.
2. The packets are matched only until it matches the rule. Once a rule is
matched then no further comparison takes place and that rule will be
performed.
3. There is an implicit denial at the end of every ACL, i.e., if no
condition or rule matches then the packet will be discarded.

Once the access-list is built, it should be applied to the inbound or outbound direction of the interface:
 Inbound access lists – When an access list is applied on inbound
packets of the interface then first the packets will be processed
according to the access list and then routed to the outbound interface.

 Outbound access lists – When an access list is applied on outbound


packets of the interface then first the packet will be routed and then
processed at the outbound interface.

Types of ACL – There are two main types of Access-list, namely:

1. Standard Access-list – These are the Access-list that are made using
the source IP address only. These ACLs permit or deny the entire
protocol suite. They don’t distinguish between the IP traffic such as
TCP, UDP, HTTPS, etc. By using numbers 1-99 or 1300-1999, the
router will understand it as a standard ACL and the specified address
as the source IP address.
2. Extended Access-list – These are the ACL that uses source IP,
Destination IP, source port, and Destination port. These types of ACL,
we can also mention which IP traffic should be allowed or denied.
These use range 100-199 and 2000-2699.
Also, there are two categories of access-list:
1. Numbered access-list – These are the access list that cannot be
deleted specifically once created i.e if we want to remove any rule
from an Access-list then this is not permitted in the case of the
numbered access list. If we try to delete a rule from the access list
then the whole access list will be deleted. The numbered access-list
can be used with both standard and extended access lists.
2. Named access list – In this type of access list, a name is assigned to
identify an access list. It is allowed to delete a named access list,
unlike numbered access list. Like numbered access lists, these can be
used with both standards and extended access lists.

Rules for ACL –


1. The standard Access-list is generally applied close to the destination
(but not always).
2. The extended Access-list is generally applied close to the source (but
not always).
3. We can assign only one ACL per interface per protocol per direction,
i.e., only one inbound and outbound ACL is permitted per interface.
4. We can’t remove a rule from an Access-list if we are using numbered
Access-list. If we try to remove a rule then the whole ACL will be
removed. If we are using named access lists then we can delete a
specific rule.
5. Every new rule which is added to the access list will be placed at the
bottom of the access list therefore before implementing the access
lists, analyses the whole scenario carefully.
6. As there is an implicit deny at the end of every access list, we should
have at least a permit statement in our Access-list otherwise all traffic
will be denied.
7. Standard access lists and extended access lists cannot have the same
name.

8.4.2 Standard Access-list – These are the Access-list which are made
using the source IP address only. These ACLs permit or deny the entire
protocol suite. They don’t distinguish between the IP traffic such as
TCP, UDP, HTTPS, etc. By using numbers 1-99 or 1300-1999, the
router will understand it as a standard ACL and the specified address as
the source IP address.
Features –
1. Standard Access-list is generally applied close to destination (but not
always).
2. In a standard access list, the whole network or sub-network is denied.
3. Standard access-list uses the range 1-99 and extended range 1300-
1999.
4. Standard access-list is implemented using source IP address only.
5. If numbered with standard Access-list is used then remember rules
can’t be deleted. If one of the rules is deleted then the whole access
list will be deleted.
6. If named with standard Access-list is used then you have the
flexibility to delete a rule from the access list.

Note – Standard Access-lists are less used as compared to extended
access-lists as the entire IP protocol suite will be allowed or denied for the
traffic as it can’t distinguish between the different IP protocol traffic.
Configuration –

Here is a small topology in which there are 3 departments namely sales,
finance, and marketing. The sales department has a network of
172.18.40.0/24, the Finance department has a network of 172.18.50.0/24,
and the marketing department has a network of 172.18.60.0/24. Now,
want to deny connection from the sales department to the finance
department and allow others to reach that network.

Now, first we configure a numbered standard access-list to deny any IP
connection from the sales department to the finance department.

R1# config terminal


R1(config)# access-list 10 deny 172.18.40.0 0.0.0.255
Here, unlike an extended access-list, you cannot specify the particular IP
traffic to be permitted or denied. Also, note that a wildcard mask has been
used (0.0.0.255, which corresponds to the subnet mask 255.255.255.0). The
number 10 is taken from the numbered standard access-list range.
R1(config)# access-list 10 permit any
Now, as you already know there is an implicit deny at the end of every
access list which means that if the traffic doesn’t match any of the rules
of the access list then the traffic will be dropped.
Specifying any means that traffic from any source IP address will be
permitted to reach the finance department, except the traffic that matches
the deny rule above.
Now, you have to apply the access list on the interface of the router:
R1(config)# int fa0/1
R1(config-if)# ip access-group 10 out
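To verify the result, something like the following can be used (a sketch; the exact output format depends on the IOS version):

R1# show access-lists 10
R1# show ip interface fa0/1

The first command lists the entries of ACL 10 together with their match counters, and the second should show ACL 10 as the outgoing access list on fa0/1.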
Named standard Access-list example –

Now, considering the same topology, you will make a named standard
access list.
R1(config)# ip access-list standard blockacl
By using this command you have made an access-list named blockacl.
R1(config-std-nacl)# deny 172.18.40.0 0.0.0.255
R1(config-std-nacl)# permit any
And then the same configuration you have done in numbered access-list.
R1(config)# int fa0/1
R1(config-if)# ip access-group blockacl out
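Because this is a named access list, a single entry can be removed later without deleting the whole list, which is not possible with the numbered list above; a minimal sketch:

R1(config)# ip access-list standard blockacl
R1(config-std-nacl)# no deny 172.18.40.0 0.0.0.255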
8.4.3 Extended Access-list – This is the most commonly used type of
access-list because it can distinguish between different kinds of IP
traffic, so the whole traffic flow is not permitted or denied as it is with
a standard access-list. These ACLs use both the source and destination IP
addresses as well as port numbers to distinguish IP traffic. In this type
of ACL, we can also specify which IP traffic should be allowed or denied.
They use the range 100-199 and the expanded range 2000-2699.
Features –
1. Extended access-list is generally applied close to the source (but not
always).
2. In an extended access list, packet filtering takes place on the basis of
the source IP address, destination IP address, and port numbers.
3. In an extended access list, particular services can be permitted or
denied.
4. Extended ACLs are created from the range 100-199 and the expanded
range 2000-2699.
5. If a numbered extended access-list is used, remember that individual
rules cannot be deleted; if one rule is deleted, the whole access list is
deleted.
6. If a named extended access-list is used, we have the flexibility to
delete a specific rule from the access list.

Configuration –

Here is a small topology in which there are 3 departments, namely sales,
finance, and marketing. The sales department has the network
172.18.40.0/24, the finance department has the network 172.18.50.0/24,
and the marketing department has the network 172.18.60.0/24. Now, we want
to deny FTP connections from the sales department to the finance
department, and deny telnet to the finance department from both the sales
and marketing departments.
Now, first we configure a numbered extended access-list to deny FTP
connections from the sales department to the finance department.
R1# config terminal
R1(config)# access-list 110 deny tcp 172.18.40.0 0.0.0.255 172.18.50.0 0.0.0.255 eq 21
Here, we first create a numbered Access-list in which we use 110 (used
from extended access-list range) and deny the sales network
(172.18.40.0) to make an FTP connection to the finance network
(172.18.50.0).
Note – FTP uses TCP and port number 21; therefore, we have to specify the
permit or deny condition according to the need. Also, after eq, we have to
use the port number of the specified application layer protocol.
Now, we have to deny telnet connections to the finance department from
both the sales and marketing departments, which means no one should be
able to telnet to the finance department. The configuration for this is:
R1(config)# access-list 110 deny tcp any 172.18.50.0 0.0.0.255 eq 23
Here, we have used the keyword any, which means 0.0.0.0 0.0.0.0, i.e.,
any IP address with any wildcard mask. As telnet uses port number 23, we
specify 23 after eq.
R1(config)# access-list 110 permit ip any any
Now, this is the most important part. As we already know there is an
implicit deny at the end of every access list which means that if the
traffic doesn’t match any of the rules of Access-list then the traffic will
be dropped.
Specifying any any means that all remaining IP traffic will be permitted
to reach the finance department, except the traffic that matches the deny
rules above. Now, we have to apply the access-list on the interface of the
router:
R1(config)# int fa0/1
R1(config-if)# ip access-group 110 out

As we remember, the extended access-list should generally be applied as
close as possible to the source, but here we have applied it close to the
destination because we have to block traffic coming from both the sales
and marketing departments; otherwise, we would have to create separate
access-lists and apply them inbound on fa0/0 and fa1/0 (a sketch of that
alternative follows below).
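As a rough illustration of that alternative (the interface roles are assumed from the topology: fa0/0 towards sales and fa1/0 towards marketing, and ACL 111 is a hypothetical second list written for the marketing network), inbound filtering would look something like this:

R1(config)# int fa0/0
R1(config-if)# ip access-group 110 in
R1(config-if)# exit
R1(config)# int fa1/0
R1(config-if)# ip access-group 111 in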

Named access-list example –

Now, considering the same topology, we will make a named extended
access list.
R1(config)# ip access-list extended blockacl
By using this command we have made an access-list named blockacl.
R1(config-ext-nacl)# deny tcp 172.18.40.0 0.0.0.255 172.18.50.0 0.0.0.255 eq 21
R1(config-ext-nacl)# deny tcp any 172.18.50.0 0.0.0.255 eq 23
R1(config-ext-nacl)# permit ip any any
And then we apply it in the same way as we did with the numbered access-list:
R1(config)# int fa0/1
R1(config-if)# ip access-group blockacl out
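On IOS releases that support ACL sequence numbers, a new rule can also be inserted at a specific position in a named list instead of being appended at the bottom. A hypothetical sketch that would slot an extra FTP deny for the marketing network between the existing entries (which are typically numbered 10, 20, 30 by default):

R1(config)# ip access-list extended blockacl
R1(config-ext-nacl)# 15 deny tcp 172.18.60.0 0.0.0.255 172.18.50.0 0.0.0.255 eq 21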

8.5 INTRODUCTION TO ROUTING REDISTRIBUTION


Most networks you encounter will probably run only a single routing
protocol like OSPF or EIGRP. Maybe you will find some old, small networks
still running RIP that need migration to OSPF or EIGRP. But what if your
company is running OSPF and you just bought another company whose network
is running EIGRP?

It is possible that we have multiple routing protocols on our network, and
we will need some method to exchange routing information between the
different protocols. This is called redistribution. We will look into some
of the issues that we encounter. What are we going to do with our metrics?
OSPF uses cost, EIGRP uses a composite metric based on its K-values, and
RIP uses hop count; these metrics are not compatible with each other.
Redistribution also adds another problem. If you “import” routing
information from one routing protocol into another, it’s possible to create
routing loops.
If you don’t feel 100% confident about your knowledge of OSPF and
EIGRP, then I suggest you stop reading now and read more about OSPF /
EIGRP or do some labs. One routing protocol can be difficult but when
you mix a couple of them the fun really starts…
Having said that, let’s take a look at a possible redistribution scenario:

Look at the topology picture above. We have routers running EIGRP in AS 1
with the 10.0.0.0/8 network. OSPF has multiple areas, and we have
20.0.0.0/8 there. At the bottom, there are two RIP routers in the
30.0.0.0/8 network. If we want full connectivity in this network, we'll
have to do some redistribution.
Redistribution is not just for between routing protocols. We have multiple
options:
 Between routing protocols (RIP, OSPF, EIGRP, BGP).
 Static routes can be redistributed into a routing protocol.
 Directly connected routes can be redistributed into a routing protocol.
Normally you use the network command to advertise directly connected
routes into your routing protocol. You can also use the redistribute
connected command, which will redistribute them into the routing protocol.
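For example, a minimal sketch of the last two options, assuming an OSPF process 1 is already running:

R2(config)# router ospf 1
R2(config-router)# redistribute static subnets
R2(config-router)# redistribute connected subnets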
Let's take a look at some real routers:

In the topology picture above, I have three routers. R1 is running EIGRP,
and R3 is running RIP. R2 is in the middle and is running EIGRP and RIP.
If we want to do redistribution, we'll have to do it on R2. Let's take a
look, shall we?

R1(config)#router eigrp 12
R1(config-router)#no auto-summary
R1(config-router)#network 192.168.12.0
R1(config-router)#network 1.1.1.0 0.0.0.255
R2(config)#router eigrp 12
R2(config-router)#no auto-summary
R2(config-router)#network 192.168.12.0
R2(config-router)#exit
R2(config)#router rip
R2(config-router)#version 2
R2(config-router)#no auto-summary
R2(config-router)#network 192.168.23.0
R3(config)#router rip
R3(config-router)#version 2
R3(config-router)#no auto-summary
R3(config-router)#network 192.168.23.0

R3(config-router)#network 3.3.3.0

Here are the router configurations, nothing special…I only advertised the
links to get EIGRP and RIP up and running.

R1#show ip route


Gateway of last resort is not set

C 192.168.12.0/24 is directly connected, FastEthernet0/0


1.0.0.0/24 is subnetted, 1 subnets
C 1.1.1.0 is directly connected, Loopback0
R2#show ip route

Gateway of last resort is not set

C 192.168.12.0/24 is directly connected, FastEthernet0/0


1.0.0.0/24 is subnetted, 1 subnets
D 1.1.1.0 [90/156160] via 192.168.12.1, 00:05:01, FastEthernet0/0
R 3.0.0.0/8 [120/1] via 192.168.23.3, 00:00:12, FastEthernet1/0
C 192.168.23.0/24 is directly connected, FastEthernet1/0
R3#show ip route

Gateway of last resort is not set

3.0.0.0/24 is subnetted, 1 subnets


C 3.3.3.0 is directly connected, Loopback0
C 192.168.23.0/24 is directly connected, FastEthernet0/0

Here is the routing table of all three routers after configuring RIP and
EIGRP. You can see R2 has learned the loopback interfaces of R3 and R1.
R1 and R3 have not learned each other's networks because R2 is not
redistributing anything between the two protocols. As you can see,
redistribution is not done automatically.
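For this particular RIP/EIGRP topology, mutual redistribution on R2 could be configured roughly as follows (a sketch; the seed metric values are illustrative only). The five EIGRP values are the seed bandwidth, delay, reliability, load, and MTU, while the RIP seed metric is a hop count.

R2(config)# router eigrp 12
R2(config-router)# redistribute rip metric 1544 2000 255 1 1500
R2(config-router)# exit
R2(config)# router rip
R2(config-router)# redistribute eigrp 12 metric 3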

8.6 REDISTRIBUTION BETWEEN EIGRP AND OSPF


We can now continue with redistribution between EIGRP and OSPF. For this
example, assume the same three-router setup, but with R3 now running OSPF
(process 1, area 0) towards R2 instead of RIP, as the routing tables below
show.
8.6.1 Redistribute OSPF into EIGRP
First, we’ll redistribute OSPF into EIGRP. We do this under the EIGRP
process:

R2(config)#router eigrp 12

Let’s take a look at the redistribute ospf options:

R2(config-router)#redistribute ospf ?
<1-65535> Process ID

We need to select the correct OSPF process. In our example, that’s process
ID 1. There are three options you can choose from:

R2(config-router)#redistribute ospf 1 ?
match Redistribution of OSPF routes
metric Metric for redistributed routes
route-map Route map reference

With the match option, we can choose to redistribute only specific OSPF
routes, such as external or internal routes. The route-map option is
another way to redistribute only specific OSPF routes, for example by
using an access-list.
We'll keep it simple for now and just redistribute all OSPF routes into
EIGRP. We have to specify a metric; if we don't, redistribution fails.
EIGRP and OSPF use different metrics, and there is no way to convert one
metric into the other, so we have to configure the metric ourselves.
EIGRP uses a metric that is based on bandwidth, delay, reliability, load,
and MTU (even though MTU is not actually used in the calculation). Let’s
check what options we have under the metric statement:

R2(config-router)#redistribute ospf 1 metric ?


<1-4294967295> Bandwidth metric in Kbits per second

First, I have to specify a bandwidth metric. In our topology, R2 is the
only router doing redistribution. R1 and R3 can only reach each other by
going through R2, so it doesn't matter whether the metric is high or low.
We'll keep it simple and use 1 for all metric values:

R2(config-router)#redistribute ospf 1 metric 1 ?

<0-4294967295> EIGRP delay metric, in 10 microsecond units


R2(config-router)#redistribute ospf 1 metric 1 1 ?
<0-255> EIGRP reliability metric where 255 is 100% reliable
R2(config-router)#redistribute ospf 1 metric 1 1 1 ?
<1-255> EIGRP Effective bandwidth metric (Loading) where 255 is
100% loaded
R2(config-router)#redistribute ospf 1 metric 1 1 1 1 ?
<1-65535> EIGRP MTU of the path
R2(config-router)#redistribute ospf 1 metric 1 1 1 1 1

Redistribution from OSPF into EIGRP is now configured.


Instead of specifying the metric as I did above, you can also use
the default-metric command to set a default seed metric. EIGRP will then
use these values for everything you redistribute into EIGRP unless you
specify the metric with the redistribute command.
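A minimal sketch of that alternative, with illustrative seed values:

R2(config)# router eigrp 12
R2(config-router)# default-metric 10000 100 255 1 1500
R2(config-router)# redistribute ospf 1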

8.6.2 Verification
Let’s verify our work. Redistribution doesn’t affect the routing table of the
router doing redistribution:

R2#show ip route

1.0.0.0/32 is subnetted, 1 subnets


D 1.1.1.1 [90/130816] via 192.168.12.1, 00:42:39, GigabitEthernet0/1
3.0.0.0/32 is subnetted, 1 subnets
O 3.3.3.3 [110/2] via 192.168.23.3, 00:41:25, GigabitEthernet0/2
192.168.12.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.12.0/24 is directly connected, GigabitEthernet0/1
L 192.168.12.2/32 is directly connected, GigabitEthernet0/1
192.168.23.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.23.0/24 is directly connected, GigabitEthernet0/2
L 192.168.23.2/32 is directly connected, GigabitEthernet0/2

Something changed on R1, however:
R1#show ip route

1.0.0.0/32 is subnetted, 1 subnets


C 1.1.1.1 is directly connected, Loopback0
3.0.0.0/32 is subnetted, 1 subnets
D EX 3.3.3.3
[170/2560000512] via 192.168.12.2, 00:00:07, GigabitEthernet0/1
192.168.12.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.12.0/24 is directly connected, GigabitEthernet0/1
L 192.168.12.1/32 is directly connected, GigabitEthernet0/1
D EX 192.168.23.0/24
[170/2560000512] via 192.168.12.2, 00:00:07, GigabitEthernet0/1

Excellent. As you can see above, we have two external routes:


 The loopback 0 interface of R3.
 The network in between R2 and R3.
The metric (2560000512) is calculated based on the redistribution metric
values we specified.

8.6.3 Redistribute EIGRP into OSPF


We are halfway there. We still need to redistribute EIGRP into OSPF.
Let’s go to the OSPF process:

R2(config)#router ospf 1

And take a look at the redistribute eigrp options. Make sure you select the
correct EIGRP AS number (12 in our example):

R2(config-router)#redistribute eigrp 12 ?
metric Metric for redistributed routes
metric-type OSPF/IS-IS exterior metric type for redistributed routes
nssa-only Limit redistributed routes to NSSA areas

route-map Route map reference

subnets Consider subnets for redistribution into OSPF


tag Set tag for routes redistributed into OSPF

There are a number of (advanced) options which we'll ignore for now.
Unlike EIGRP, we don't have to specify a metric value here; OSPF assigns
redistributed routes a default seed metric of 20. The following command is
all you need:

R2(config-router)#redistribute eigrp 12

The command above redistributes all EIGRP routes into OSPF.
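Depending on the IOS release, OSPF may only redistribute classful networks unless the subnets keyword is added, so it is common to include it explicitly. A hedged sketch that also sets an explicit seed metric and metric type (values illustrative):

R2(config-router)# redistribute eigrp 12 subnets metric 50 metric-type 1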

8.6.4 Verification
The routing table of R2 remains the same:

R2#show ip route

1.0.0.0/32 is subnetted, 1 subnets


D 1.1.1.1 [90/130816] via 192.168.12.1, 00:12:07, GigabitEthernet0/1
3.0.0.0/32 is subnetted, 1 subnets
O 3.3.3.3 [110/2] via 192.168.23.3, 00:11:05, GigabitEthernet0/2
192.168.12.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.12.0/24 is directly connected, GigabitEthernet0/1
L 192.168.12.2/32 is directly connected, GigabitEthernet0/1
192.168.23.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.23.0/24 is directly connected, GigabitEthernet0/2

L 192.168.23.2/32 is directly connected, GigabitEthernet0/2

But something happens on R3:

R3#show ip route

1.0.0.0/32 is subnetted, 1 subnets

O E2 1.1.1.1 [110/20] via 192.168.23.2, 00:05:40, GigabitEthernet0/1
3.0.0.0/32 is subnetted, 1 subnets
C 3.3.3.3 is directly connected, Loopback0
O E2 192.168.12.0/24 [110/20] via 192.168.23.2, 00:05:40,
GigabitEthernet0/1
192.168.23.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.23.0/24 is directly connected, GigabitEthernet0/1

L 192.168.23.3/32 is directly connected, GigabitEthernet0/1

Above, you see two OSPF E2 routes with a metric of 20. The default
metric-type for redistributed routes in OSPF is E2 which means that the
metric remains the same throughout the OSPF network. If you had another
router behind R3 running OSPF, you would still see the redistributed
routes with a metric of 20.
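If you prefer a metric that grows as the route is propagated through the OSPF domain, the redistributed routes can be marked as E1 instead; a minimal sketch:

R2(config)# router ospf 1
R2(config-router)# redistribute eigrp 12 metric-type 1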

8.7 VERIFICATION
We successfully redistributed OSPF into EIGRP and vice versa but just to
be sure, let’s see if we have connectivity between R1 and R3. We can test
this with a quick ping between the loopback interfaces:

R1#ping 3.3.3.3 source 1.1.1.1


Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 3.3.3.3, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 2/2/5 ms

Great, we have full reachability.
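A quick traceroute can additionally confirm that the traffic flows through R2 (a sketch; the exact output depends on the platform):

R1# traceroute 3.3.3.3 source 1.1.1.1

The hops should show 192.168.12.2 (R2) before the final reply from R3.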

8.8 LIST OF REFERENCES


1. TCP/IP Protocol Suite, Behrouz A. Forouzan, McGraw Hill Education, 4th edition, 2017
2. Foundations of Modern Networking: SDN, NFV, QoE, IoT, and
Cloud, William Stallings, Addison-Wesley Professional, 2018.
3. Software Defined Networks: A Comprehensive Approach, Paul
Goransson and Chuck Black, Morgan Kaufmann Publications, 2014

4. SDN: Software Defined Networks, Thomas D. Nadeau & Ken Gray, O'Reilly, 2013

8.9 UNIT END EXERCISES


1) Explain multicast routing with a suitable example.
2) Write a short note on "Multicast Routing".
3) Explain multicast in the datacenter in detail.
4) Write a short note on "MPLS".
5) Explain traffic filtering using a standard access control list.
6) Explain traffic filtering using an extended access control list.
7) Write a short note on "Routing Redistribution".
8) Write a short note on "Access control list".
9) Explain redistribution between EIGRP and OSPF.


