
SPCI 102

POSTGRADUATE COURSE
M.Sc., Cyber Forensics and Information Security

FIRST YEAR
FIRST SEMESTER

CORE PAPER - II

NETWORKING AND
COMMUNICATION PROTOCOLS

INSTITUTE OF DISTANCE EDUCATION


UNIVERSITY OF MADRAS
M.Sc., Cyber Forensics and Information Security CORE PAPER - II
FIRST YEAR - FIRST SEMESTER NETWORKING AND
COMMUNICATION PROTOCOLS

WELCOME
Warm Greetings.

It is with great pleasure that we welcome you as a student of the Institute of Distance
Education, University of Madras. It is a proud moment for the Institute of Distance Education
as you are entering a cafeteria system of learning, as envisaged by the University Grants
Commission. Yes, we have framed and introduced the Choice Based Credit System (CBCS)
in semester pattern from the academic year 2018-19. You are free to choose courses, as per
the Regulations, to attain the target total number of credits set for each course and for each
degree programme. What is a credit? To earn one credit in a semester you have to spend 30
hours in the learning process. Each course has a weightage in terms of credits. Credits are
assigned by taking into account the level of the subject content. For instance, if a particular
course or paper has 4 credits, then you have to spend 120 hours of self-learning in a semester.
You are advised to plan a strategy for devoting hours of self-study to the learning process.
You will be assessed periodically by means of tests, assignments and quizzes, whether in the
classroom, the laboratory or field work. In the case of PG (UG) programmes, Continuous
Internal Assessment counts for 20 (25) percent and the End Semester University Examination
for 80 (75) percent of the maximum score for a course / paper. The theory paper in the end
semester examination will bring out your various skills: namely, basic knowledge of the subject,
memory recall, application, analysis, comprehension and descriptive writing. While training
you, we always keep in mind conducting experiments, analysing performance during laboratory
work, and observing the outcomes to bring out the truth from the experiment, and we measure
these skills in the end semester examination. You will be guided by well experienced faculty.

I invite you to join the CBCS in the semester system to gain rich knowledge leisurely at
your own will and wish. Choose the right courses at the right times so as to raise your flag of
success. We always encourage and enlighten you to excel and empower yourself. We are the
cross bearers who make you a torch bearer with a bright future.

With best wishes from mind and heart,

DIRECTOR


COURSE WRITERS

Mr. R.P. Arjun
&
Mr. Ramachandran S.

COORDINATION AND EDITING

Dr. N. Kala
Director i/c
CCFIS, University of Madras

Dr. S. Thenmozhi
Associate Professor
Department of Psychology
Institute of Distance Education
University of Madras
Chepauk, Chennai - 600 005.

© UNIVERSITY OF MADRAS, CHENNAI 600 005.

M.Sc., Cyber Forensics and Information Security

FIRST YEAR

FIRST SEMESTER

Core Paper - II

NETWORKING AND COMMUNICATION PROTOCOLS

SYLLABUS

Unit 1

Networking models- OSI Layered model - TCP/IP Model - MAC Address representation -
Organisationally Unique Identifier - Internet Protocol - Versions and Header lengths - IP
Identification - IP Flags - IP fragmentation and reassembly structure - Transport Layer
protocols - Port numbers - TCP Flags - Segmentation - TCP 3 way handshake and Options
- encapsulation and De-encapsulation - Payload.

Unit 2

Static and Dynamic Routing - IP Routing Protocols - Classful and Classless Routing -
RIPv1 - RIPv2, Broadcast and Multicast domains - OSPF, EIGRP - Network Address
Translation - IP Classes - Private IP - Public IP - Reserved IP - APIPA.

Unit 3

Subnetting IP network - Class A, B, C subnetting - Classless Inter-domain Routing (CIDR)
- Subnet mask - Wildcard mask - WAN Technologies - Frame Relay - Data Link Connection
Identifiers (DLCI) - Committed Information Rate (CIR) - Permanent Virtual Circuits (PVCs)
- Multiprotocol Label Switching (MPLS) - Edge Routers - Label Switching - CE and PE
Routers - Data Terminal Equipment (DTE) - Data Communication Equipment (DCE) -
Clock speed.

Unit 4

Virtual LANs - Access links and Trunk links - Switchport modes - VLAN Trunking - Server,
Client and Transparent modes - VTP Domain - Configuration Revision numbers - Inter-VLAN
Communication - Broadcast domain - Collision Domain

Unit 5

Communication protocols - Address Resolution Protocol (ARP) - Reverse Address
Resolution Protocol (RARP) - Internet Control Message Protocol (ICMP) - Internet Protocol
(IP) - Transmission Control Protocol (TCP) - User Datagram Protocol (UDP) - American
Standard Code for Information Interchange (ASCII) - Hypertext Transfer Protocol (HTTP) -
File Transfer Protocol (FTP) - Simple Mail Transfer Protocol (SMTP) - Telnet - Trivial File
Transfer Protocol (TFTP) - Post Office Protocol version 3 (POP3) - Internet Message
Access Protocol (IMAP) - Simple Network Management Protocol (SNMP) - Domain Name
System (DNS) - DNS Flags - Dynamic Host Configuration Protocol (DHCP).



SCHEME OF LESSONS

Sl.No. Title

1 Introduction to Internetworking

2 IP Routing

3 Subnetting IP network

4 Virtual LANs

5 Communication Protocols


UNIT - 1
INTRODUCTION TO INTERNETWORKING
Learning Objectives
 This chapter will act as a foundation for the technology discussions that follow. In
this chapter, some fundamental concepts and terms used in the evolving language
of internetworking are addressed.

 A learner will be able to understand the various models of networking and the basic
differences between them.

 In a protocol-based model, the learner will be able to understand the functions of
each layer and the structure of a TCP packet.

Structure
1.1 Introduction to Internetworking

1.2 Networking Models

1.3 Addressing System in Internetworking

1.4 Transport Layer Protocols

1.5 User Datagram Protocol (UDP)

1.6 TCP

1.1 Introduction to Internetworking


1.1.1 What is an Internetwork?

An internetwork is a collection of individual networks, connected by intermediate networking
devices, that functions as a single large network. Internetworking refers to the industry, products,
and procedures that meet the challenge of creating and administering internetworks.
Figure 1.1 illustrates a basic network topology interconnected by routers and other networking
devices to create an internetwork:

Figure 1.1: Sample Network Topology

1.1.2 History of Internetworking

The first networks were time-sharing networks that used mainframes and attached
terminals. Both IBM’s Systems Network Architecture (SNA) and Digital’s network architecture
(DECnet) implemented such environments.

Local area networks (LANs) evolved around the PC revolution. LANs enabled multiple
users in a relatively small geographical area to exchange files and messages, as well as access
shared resources such as file servers. Wide area networks (WANs) interconnect LANs across
normal telephone lines (and other media), thereby interconnecting geographically dispersed
users.

Today, high-speed LANs and switched internetworks are becoming widely used, largely
because they operate at very high speeds and support such high-bandwidth applications as
voice and video conferencing.

Internetworking evolved as a solution to three key problems: isolated LANs, duplication
of resources, and a lack of network management. Isolated LANs made electronic communication
between different offices or departments impossible. Duplication of resources meant that the
same hardware and software had to be supplied to each office or department, as did a separate
support staff. This lack of network management meant that no centralized method of managing
and troubleshooting networks existed.

1.1.3 Internetworking Challenges

Implementing a functional internetwork is no simple task. Many challenges must be faced,
especially in the areas of connectivity, reliability, network management, and flexibility. Each
area is key in establishing an efficient and effective internetwork. The challenge when connecting
various systems is to support communication between disparate technologies. Different sites,
for example, may use different types of media, or they might operate at varying speeds.

Another essential consideration, reliable service, must be maintained in any internetwork.
Individual users and entire organizations depend on consistent, reliable access to network
resources. Furthermore, network management must provide centralized support and
troubleshooting capabilities in an internetwork. Configuration, security, performance, and other
issues must be adequately addressed for the internetwork to function smoothly.

Flexibility, the final concern, is necessary for network expansion and new applications
and services, among other factors.

1.2 Networking Models


Basic network architecture and construction is a good starting point when trying to
understand how communication systems function. Architectures are typically based on a model
showing how protocols and functions fit together. Historically, there have been many models
used for this purpose, including, but not limited to, Systems Network Architecture (SNA-IBM),
AppleTalk, Novell Netware (IPX/SPX), Open System Interconnection (OSI) model and TCP/IP
Model or Internet Model. However, OSI Model and TCP/IP Model are considered as base
reference models for the current networking technologies. We will discuss about these two
models in coming sections.

1.2.1 What Is a Model?

A model is a way to organize a system’s functions and features to define its structural
design. A design can help us understand how a communication system accomplishes tasks to
form a protocol suite. To help us wrap our heads around models, communication systems are
often compared to the postal system (Figure 1.2). Imagine writing a letter and taking it to the
post office. At some point, the mail is sorted and then delivered via some transport system to
another post office. From there, it is sorted and given to a mail carrier for delivery to the destination.
The letter is handled at several points along the way. Each part of the system is trying to
accomplish the same thing—delivering the mail. But each section has a particular set of rules to
obey. While in transit, the truck follows the rules of the road as the letter is delivered to the next
point for processing. Inspectors and sorters ensure the mail is metered and safe, without much
concern for traffic lights or turn signals.

Figure 1.2: Postal System

A communication system is not much different, since messages created on a computer
are processed and delivered, with each piece of equipment involved performing some function
and obeying certain rules for transmission. Figure 1.3 depicts a typical scenario in which two
computers are connected by their network cards via a networking device. Two people are
communicating using an application such as an instant messaging or email program.

At some point, we have to decide exactly how to handle this communication. After all,
when we mail that letter, we cannot address the envelope in some arbitrary language or ignore
zip codes, just as the mail truck driver cannot drive on the wrong side of the road.

Figure 1.3: Small Communication Network

Models are routinely organized in a hierarchical or layered structure. Each layer has a set
of functions to perform. Protocols are created to handle these functions, and therefore, protocols
are also associated with each layer. The protocols are collectively referred to as a protocol
suite. The lower layers are often linked with hardware, and the upper layers with software. For
example, Ethernet operates at Layers 1 and 2, while the File Transfer Protocol (FTP) operates
at the very top of the model.

1.2.2 Why Use a Model?

A model describes the entire structure. Even a simple communication system is a
complicated environment in which thousands or even millions of transactions occur daily.
Interconnected systems are considerably more complex. A single electrical disturbance or
software configuration error can prevent completion of these transactions. Models provide a
starting point for determining what must be done to enable communication or to figure out how
systems using different protocols might connect to each other. They also help in troubleshooting
problems.

1.2.3 Open Systems Interconnection (OSI) model

The Open Systems Interconnection (OSI) model is a reference model developed by ISO
(the International Organization for Standardization) in 1984 as a conceptual framework of
standards for communication in the network across different equipment and applications from
different vendors. It is now considered the primary architectural model for inter-computer and
internetworking communications. Most of the network communication protocols used today
have a structure based on the OSI model. The OSI model divides the communications process
into seven layers, splitting the tasks involved with moving information between networked
computers into smaller, more manageable task groups. Each of the seven layers has clear
characteristics and is capable of carrying out its assigned tasks independently.
The OSI model is divided into two groups:

(a) Upper layers (layers 7, 6 & 5) - deal with application issues and generally are
implemented only in software.

(i) Layer 7: Application Layer

In simple terms, the function of the application layer is to take requests and data from the
users and pass them to the lower layers of the OSI model. Incoming information is passed to
the application layer, which then displays the information to the users. Some of the most basic
application-layer services include file and print capabilities. The most common misconception
about the application layer is that it represents applications that are used on a system such as
a web browser, word processor, or spreadsheet. Instead, the application layer defines the
processes that enable applications to use network services. For example, if an application
needs to open a file from a network drive, the functionality is provided by components that
reside at the application layer.

 Defines interface to user processes for communication and data transfer in network.

 Provides standardized services such as virtual terminal, file and job transfer and
operations.

(ii) Layer 6: Presentation Layer

The presentation layer’s basic function is to convert the data intended for or received
from the application layer into another format. Such conversion is necessary because data
must be formatted in a particular way to travel across the network, and that format is not
necessarily one the application itself can read. Some common data formats handled by the
presentation layer include the following:

Graphics files: JPEG, TIFF, GIF, and so on are graphics file formats that require the data
to be formatted in a certain way.

Text and data: The presentation layer can translate data into different formats, such as
American Standard Code for Information Interchange (ASCII) and Extended Binary Coded
Decimal Interchange Code (EBCDIC).

Sound/video: MPEG, MP3, and MIDI files all have their own data formats to and from
which data must be converted.

 Masks the differences of data formats between dissimilar systems.

 Specifies architecture-independent data transfer format.

 Encodes/decodes data; encrypts/decrypts data; compresses/decompresses data.
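The encode/compress steps listed above can be sketched in a few lines. This is an illustrative sketch only, using Python's standard library; the ASCII encoding and zlib compression stand in for the presentation-layer conversions described in the text.

```python
import zlib

# Presentation-layer style conversions, sketched: text is encoded into an
# architecture-independent byte format (ASCII), then compressed before
# "transmission"; the receiving side reverses both steps.
message = "Hello, network!"

encoded = message.encode("ascii")                       # text -> bytes
compressed = zlib.compress(encoded)                     # compress for transport
restored = zlib.decompress(compressed).decode("ascii")  # reverse on receipt

assert restored == message
```

Encryption would slot into the same pipeline, applied after compression on the sending side and reversed first on the receiving side.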

(iii) Layer 5: Session Layer

The session layer is responsible for managing and controlling the synchronization of data
between applications on two devices. It does this by establishing, maintaining, and breaking
sessions. Whereas the transport layer is responsible for setting up and maintaining the connection
between the two nodes, the session layer performs the same function on behalf of the application.

 Manages user sessions and dialogues.

 Controls establishment and termination of logic links between users.

 Reports upper layer errors.


7

(b) Lower layers (layers 4, 3, 2, 1) – deal with data transport issues.

(i) Layer 4: Transport Layer

 Manages end-to-end message delivery in network.

 Provides connection-oriented, reliable and sequential packet delivery through
error recovery and flow control mechanisms.

 Error checking: Protocols at the transport layer ensure that data is sent or received
correctly.

 Service addressing: Protocols such as TCP/IP support many network services.
The transport layer makes sure that data is passed to the right service at the upper
layers of the OSI model.

 Segmentation: To traverse the network, blocks of data need to be broken into
packets that are of a manageable size for the lower layers to handle. This process,
called segmentation, is the responsibility of the transport layer.

(ii) Layer 3: Network Layer

The primary responsibility of the network layer is routing—providing mechanisms by which
data can be passed from one network system to another. The network layer does not specify
how the data is passed, but rather provides the mechanisms to do so. Functionality at the
network layer is provided through routing protocols, which are software components.

Protocols at the network layer are also responsible for route selection, which refers to
determining the best path for the data to take throughout the network. In contrast to the data
link layer, which uses MAC addresses to communicate on the LAN, network layer protocols use
software configured addresses and special routing protocols to communicate on the network.
The term packet is used to describe the logical grouping of data at the network layer.

 Determines how data are transferred between network devices.

 Routes packets according to unique network device addresses.

 Provides flow and congestion control to prevent network resource depletion.
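Route selection, the "best path" decision mentioned above, commonly follows a longest-prefix-match rule: the most specific route that contains the destination wins. The sketch below illustrates that rule with Python's `ipaddress` module; the routing table prefixes and next-hop addresses are made up for the example.

```python
import ipaddress

# Hypothetical routing table: destination prefix -> next-hop address.
routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.1.254",  # default route
}

def next_hop(dst: str) -> str:
    """Pick the most specific (longest-prefix) route containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

assert next_hop("10.1.2.3") == "192.168.1.2"   # longest prefix wins
assert next_hop("8.8.8.8") == "192.168.1.254"  # falls to the default route
```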

(iii) Layer 2: Data Link Layer


 Defines procedures for operating the communication links.

 Frames packets.

 Detects and corrects packet transmission errors.

 Media Access Control (MAC) layer: The MAC address is defined at this layer. The
MAC address is the physical or hardware address burned into each network interface
card (NIC). The MAC sub-layer also controls access to network media. The MAC
sub-layer is specified in the media-specific IEEE 802 standards, such as IEEE 802.3
for Ethernet.

 Logical Link Control (LLC) layer: The LLC layer is responsible for the error and
flow-control mechanisms of the data link layer. The LLC layer is specified in the
IEEE 802.2 standard.
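A MAC address such as 00:1A:2B:3C:4D:5E splits into two halves: the first three octets are the Organizationally Unique Identifier (OUI) assigned to the vendor, and the last three identify the individual card. A small illustrative helper (the address used is arbitrary):

```python
def parse_mac(mac: str):
    """Split a MAC address into its OUI (vendor) half and device half."""
    octets = mac.lower().split(":")
    if len(octets) != 6:
        raise ValueError("expected six colon-separated octets")
    oui = ":".join(octets[:3])      # Organizationally Unique Identifier
    device = ":".join(octets[3:])   # vendor-assigned device identifier
    return oui, device

oui, device = parse_mac("00:1A:2B:3C:4D:5E")
assert oui == "00:1a:2b" and device == "3c:4d:5e"
```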

(iv) Layer 1: Physical Layer

 Defines physical means of sending data over network devices.

 Interfaces between network medium and devices.

 Defines optical, electrical and mechanical characteristics.

 Hardware: The type of media used on the network, such as type of cable, type of
connector, and pinout format for cables.

 Topology: The physical layer identifies the topology to be used in the network.
Common topologies include ring, mesh, star, and bus.

 In addition to these characteristics, the physical layer defines the voltage used on a
given medium and the frequency at which the signals that carry the data operate.
These characteristics dictate the speed and bandwidth of a given medium, as well
as the maximum distance over which a certain media type can be used.

Basically, layers 7 through 4 deal with end-to-end communications between data sources
and destinations, while layers 3 to 1 deal with communications between network devices.

Information being transferred from a software application in one computer to an application
in another moves through the OSI layers. For example, if a software application in computer A
has information to pass to a software application in computer B, the application program in
computer A needs to pass the information to the application layer (Layer 7) of computer A, which
then passes the information to the presentation layer (Layer 6), which relays the data to the
session layer (Layer 5), and so on all the way down to the physical layer (Layer 1). At the
physical layer, the data is placed on the physical network medium and is sent across the medium
to computer B. The physical layer of computer B receives the data from the physical medium,
and then passes the information up to the data link layer (Layer 2), which relays it to the network
layer (Layer 3), and so on, until it reaches the application layer (Layer 7) of computer B. Finally,
the application layer of computer B passes the information to the recipient application program
to complete the communication process. Figure 1.4 illustrates this process.

Figure 1.4: End to end communications between data source and destinations

Each layer adds a Header (and possibly a Trailer) to the Data it receives from the upper layer,
which itself consists of the upper layer’s Header, Trailer and Data, as the unit proceeds down the layers.
The Headers contain information that specifically addresses layer-to-layer communication.
Headers, trailers and data are relative concepts, depending on the layer that analyzes the
information unit. For example, the Transport Header (TH) contains information that only the
Transport layer sees. All other layers below the Transport layer pass the Transport Header as
part of their Data. At the network layer, an information unit consists of a Layer 3 header (NH)
and data. At the data link layer, however, all the information passed down by the network layer
(the Layer 3 header and the data) is treated as data. In other words, the data portion of an
information unit at a given OSI layer potentially can contain headers, trailers, and data from all
the higher layers. This is known as encapsulation.
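Encapsulation can be sketched directly: each layer treats everything handed down from above as opaque data and wraps it in its own header (and, at the data link layer, a trailer). The bracketed header labels here are placeholders, not real protocol formats.

```python
def encapsulate(payload: bytes, header: bytes, trailer: bytes = b"") -> bytes:
    """A layer wraps the unit it receives; everything from above is just data."""
    return header + payload + trailer

message = b"GET /index.html"
segment = encapsulate(message, b"[TH]")         # transport header
packet = encapsulate(segment, b"[NH]")          # network header
frame = encapsulate(packet, b"[DH]", b"[DT]")   # data link header + trailer

assert frame == b"[DH][NH][TH]GET /index.html[DT]"
```

Note how the transport header has become part of the "data" by the time the frame is built, exactly as described above.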

Figure 1.5 OSI Layers with functions

1.2.4 TCP/IP Reference Model

TCP/IP stands for Transmission Control Protocol/Internet Protocol. The TCP/IP reference
model is the network model used in the current Internet architecture. It has its origins back in
the 1960s with the ARPANET, a research network sponsored by the Department of Defense
in the United States. The following were seen as major design goals:

 Ability to connect multiple networks together seamlessly.

 Ability for connections to remain intact as long as the source and destination machines
were functioning.

 To be built on flexible architecture

TCP/IP is a suite of protocols. Like most network protocols, TCP/IP is a layered protocol.
Each layer builds upon the layer below it, adding new functionality. The lowest level protocol is
concerned purely with the business of sending and receiving data - any data - using specific
network hardware. At the top are protocols designed specifically for tasks like transferring files
or delivering email. In between are levels which are concerned with issues like routing and
reliability. The benefit that the layered protocol stack gives you is that, if you invent a new
network application or a new type of hardware, you only need to create a protocol for that
application or that hardware: you don’t have to rewrite the whole stack. TCP/IP is a layered
model but it is generally agreed that there are fewer layers than the seven layers of the OSI
model. The TCP/IP 4-layer model and the key functions of each layer is described below in
figure 1.6.

Figure 1.6 Key Functions of TCP/IP Model

a) Application Layer

The Application Layer in TCP/IP groups the functions of the OSI Application, Presentation
and Session Layers. Therefore, any process above the transport layer is called an Application
in the TCP/IP architecture. In TCP/IP, sockets and ports are used to describe the path over
which applications communicate. Most application-level protocols are associated with one or
more port numbers.
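The association between well-known application protocols and port numbers is recorded in the operating system's services database, which Python can query (assuming a host with a standard services file):

```python
import socket

# Well-known application protocols map to fixed port numbers,
# looked up in the host's services database (e.g. /etc/services).
assert socket.getservbyname("http", "tcp") == 80
assert socket.getservbyname("smtp", "tcp") == 25
assert socket.getservbyname("ftp", "tcp") == 21
```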

b) Transport Layer

In TCP/IP architecture, there are two Transport Layer protocols. The Transmission Control
Protocol (TCP) guarantees information transmission. The User Datagram Protocol (UDP)
transports datagram without end-to-end reliability checking. Both protocols are useful for different
applications.

c) Internet layer

The Internet Protocol (IP) is the primary protocol in the TCP/IP Internet Layer. All upper
and lower layer communications must travel through IP as they are passed through the TCP/IP
protocol stack. In addition, there are many supporting protocols in this layer, such as
ICMP, to facilitate and manage the routing process.

d) Network Access Layer

In the TCP/IP architecture, the Data Link Layer and Physical Layer are normally grouped
together to become the Network Access layer. TCP/IP makes use of existing Data Link and
Physical Layer standards rather than defining its own. Many RFCs describe how IP utilizes and
interfaces with the existing data link protocols such as Ethernet, Token Ring, FDDI, HSSI, and
ATM. The physical layer, which defines the hardware communication properties, is not often
directly interfaced with the TCP/IP protocols in the network layer and above.

1.2.5 Comparison: OSI and TCP/IP Model

TCP/IP architecture does not exactly match the OSI model. Unfortunately, there is no
universal agreement regarding how to describe TCP/IP with a layered model. It is generally
agreed that TCP/IP has fewer levels than the seven layers of the OSI model.

Figure 1.7 Comparison: OSI and TCP/IP Model

Here we force TCP/IP protocols into the OSI 7 layers structure for comparison purpose.
The TCP/IP suite’s core functions are addressing and routing (IP/IPv6 in the networking layer)
and transportation control (TCP, UDP in the transport layer). The main differences identified
between the OSI and TCP/IP model can be summarized as under:

Table 1.1 Comparison between OSI and TCP/IP Model

1.2.6 Message Transmission in TCP/IP Model

Each computer in the network has software that operates at each of the layers and performs
the functions required by those layers (the physical layer is hardware not software). Each layer
in the network uses a formal language, or protocol, that is simply a set of rules that define what
the layer will do and that provides a clearly defined set of messages that software at the layer
needs to understand. For example, the protocol used for Web applications is HTTP. In general,
all messages sent in a network pass through all layers. All layers except the Physical layer add
a Protocol Data Unit (PDU) to the message as it passes through them. The PDU contains
information that is needed to transmit the message through the network. Some experts use the
word “Packet” to mean a PDU. Figure 1.8 shows how a message requesting a Web page would
be sent on the Internet.

Figure 1.8 : HTTP transaction utilising various layers of network Model

Application Layer: First, the user creates a message at the application layer using a
Web browser by clicking on a link (e.g., get the home page at www.somebody.com). The browser
translates the user’s message (the click on the Web link) into HTTP. The rules of HTTP define
a specific PDU—called an HTTP packet—that all Web browsers must use when they request a
Web page. For now, we can think of the HTTP packet as an envelope into which the user’s
message (get the Web page) is placed. In the same way that an envelope placed in the mail
needs certain information written in certain places (e.g., return address, destination address),
so too does the HTTP packet. The Web browser fills in the necessary information in the HTTP
packet, drops the user’s request inside the packet and then passes the HTTP packet (containing
the Web page request) to the transport layer.

Transport Layer The transport layer on the Internet uses a protocol called TCP
(Transmission Control Protocol), and it, too, has its own rules and its own PDUs. TCP is
responsible for breaking large files into smaller packets and for opening a connection to the
server for the transfer of a large set of packets. The transport layer places the HTTP packet
inside a TCP PDU (which is called a TCP segment), fills in the information needed by the TCP
segment, and passes the TCP segment (which contains the HTTP packet, which, in turn, contains
the message) to the network layer.

Network Layer The network layer on the Internet uses a protocol called IP (Internet
Protocol), which has its rules and PDUs. IP selects the next stop on the message’s route through
the network. It places the TCP segment inside an IP PDU, which is called an IP packet, and
passes the IP packet, which contains the TCP segment, which, in turn, contains the HTTP
packet, which, in turn, contains the message, to the data link layer.

Data Link Layer If we are connecting to the Internet using a LAN, the data link layer may use
a protocol called Ethernet, which also has its own rules and PDUs. The data link layer formats
the message with start and stop markers, adds error-checking information, places the IP packet
inside an Ethernet PDU, which is called an Ethernet frame, and instructs the physical hardware
to transmit the Ethernet frame, which contains the IP packet, which contains the TCP segment,
which contains the HTTP packet, which contains the message.

Physical Layer The physical layer in this case is the network cable connecting the computer
to the rest of the network. The computer takes the Ethernet frame (complete with the IP packet,
the TCP segment, the HTTP packet, and the message) and sends it as a series of electrical
pulses through the cable to the server.
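The whole walk down the stack can be seen from a program's point of view: the application constructs only the HTTP request, and everything below the socket interface (TCP segmentation, IP routing, Ethernet framing, the electrical pulses) is handled for it. A sketch, with `example.com` as a placeholder host:

```python
import socket

def build_request(host: str, path: str = "/") -> bytes:
    """Application layer: the only PDU we build by hand is the HTTP request."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n").encode("ascii")

def fetch(host: str, port: int = 80) -> bytes:
    """TCP segments, IP packets and Ethernet frames are all created
    below this socket call; the program never sees them."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_request(host))
        return sock.recv(4096)

# fetch("example.com") would return a reply beginning with an HTTP status line.
```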

Figure 1.9: Encapsulation



When the server gets the message, this process is performed in reverse. The physical
hardware translates the electrical pulses into computer data and passes the message to the
data link layer. The data link layer uses the start and stop markers in the Ethernet frame to
identify the message. The data link layer checks for errors and, if it discovers one, requests that
the message be resent. If a message is received without error, the data link layer will strip off
the Ethernet frame and pass the IP packet (which contains the TCP segment, the HTTP packet,
and the message) to the network layer. The network layer checks the IP address and, if it is
destined for this computer, strips off the IP packet and passes the TCP segment, which contains
the HTTP packet and the message to the transport layer. The transport layer processes the
message, strips off the TCP segment, and passes the HTTP packet to the application layer for
processing. The application layer (i.e., the Web server) reads the HTTP packet and the message
it contains (the request for the Web page) and processes it by generating an HTTP packet
containing the Web page we requested. This reverse process is called de-encapsulation. Then
the process starts again as the page is sent back to us.

Figure 1.10: De-encapsulation
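The nesting and un-nesting described above can be sketched in a few lines of Python. This is illustrative only: real PDUs are binary headers and trailers, not dictionaries, and the layer names below merely stand in for the HTTP, TCP, IP and Ethernet formats.

```python
# Illustrative sketch of encapsulation: each layer wraps the PDU it
# receives from the layer above inside its own PDU.

def encapsulate(message):
    http_packet = {"layer": "HTTP", "payload": message}
    tcp_segment = {"layer": "TCP", "payload": http_packet}
    ip_packet = {"layer": "IP", "payload": tcp_segment}
    ethernet_frame = {"layer": "Ethernet", "payload": ip_packet}
    return ethernet_frame      # frame contains packet contains segment ...

def de_encapsulate(frame):
    # The receiver strips one PDU per layer, in reverse order.
    pdu = frame
    while isinstance(pdu, dict):
        pdu = pdu["payload"]
    return pdu

frame = encapsulate("GET /index.html")
print(de_encapsulate(frame))   # GET /index.html
```

Note how the sender only ever wraps and the receiver only ever unwraps: each layer touches its own PDU and treats everything inside as an opaque payload.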

There are three important points in this example. First, there are many different software
packages and many different PDUs that operate at different layers to successfully transfer a
message. Networking is in some ways similar to the Russian Matryoshka, nested dolls that fit
neatly inside each other. This is called encapsulation, because the PDU at a higher level is
placed inside the PDU at a lower level so that the lower level PDU encapsulates the higher-level
one. The major advantage of using different software and protocols is that it is easy to develop
new software, because all one has to do is write software for one level at a time. The developers
of Web applications, for example, do not need to write software to perform error checking or
routing, because those are performed by the data link and network layers. Developers can
simply assume those functions are performed and just focus on the application layer. Likewise,
it is simple to change the software at any level (or add new application protocols), as long as the
interface between that layer and the ones around it remains unchanged.

Second, it is important to note that for communication to be successful, each layer in one
computer must be able to communicate with its matching layer in the other computer. For
example, the physical layer connecting the client and server must use the same type of electrical
signals to enable each to understand the other (or there must be a device to translate between
them). Ensuring that the software used at the different layers is the same is accomplished by
using standards. A standard defines a set of rules, called protocols, which explain exactly how
hardware and software that conform to the standard are required to operate. Any hardware and
software that conform to a standard can communicate with any other hardware and software
that conform to the same standard. Without standards, it would be virtually impossible for
computers to communicate.

Third, the major disadvantage of using a layered network model is that it is somewhat
inefficient. Because there are several layers, each with its own software and PDUs, sending a
message involves many software programs (one for each protocol) and many PDUs. The PDUs
add to the total amount of data that must be sent (thus increasing the time it takes to transmit),
and the different software packages increase the processing power needed in computers.
Because the protocols are used at different layers and are stacked on top of one another (refer
figure 1.8), the set of software used to understand the different protocols is often called a
protocol stack.

1.3 Addressing System in Internetworking


Internetwork addresses identify devices separately or as members of a group. Addressing
schemes vary depending on the protocol family and the OSI layer. Three types of internetwork
addresses are commonly used: data-link layer addresses, Media Access Control (MAC)
addresses, and network-layer addresses.

1.3.1 Data Link Layer

A data-link layer address uniquely identifies each physical network connection of a network
device. Data-link addresses sometimes are referred to as physical or hardware addresses.
Data-link addresses usually exist within a flat address space and have a pre-established and
typically fixed relationship to a specific device. End systems generally have only one physical
network connection, and thus have only one data-link address. Routers and other internetworking
devices typically have multiple physical network connections and therefore have multiple data-
link addresses. In simple terms, a device has the same number of data-link addresses as it
has physical interfaces (Figure 1.11).

(a) MAC Address representation

A media access control address (MAC address) is a unique binary number assigned to a
network interface controller (NIC) and used to identify the device for communications at the
data link layer of a network segment. MAC addresses, sometimes called “hardware addresses”
or “physical addresses”, are embedded into the network hardware during the manufacturing
process, or stored in firmware, and are designed not to be modified. MAC addresses are used
as a network address for most IEEE 802 network technologies, including Ethernet and Wi-Fi. In
this context, MAC addresses are used in the medium access control protocol sub-layer.

A MAC address is given to a network adapter when it is manufactured. It is hardwired or
hard-coded onto your computer’s network interface card (NIC) and is unique to it. Something
called the ARP (Address Resolution Protocol) translates an IP address into a MAC address.
The ARP is like a passport that takes data from an IP address through an actual piece of
computer hardware. For this reason, the MAC address is sometimes referred to as a networking
hardware address, the burned-in address (BIA), or the physical address. It may also be known
as an Ethernet hardware address (EHA), hardware address or physical address (not to be
confused with a memory physical address).

Figure 1.11: Each interface on a device is uniquely identified by a data-link address

Figure 1.12: MAC & Data-link addresses and IEEE sub layers of Data-link layers

A network node may have multiple NICs and in such cases each NIC must have a unique
MAC address. Sophisticated network equipment such as a multilayer switch or router may
require one or more permanently assigned MAC addresses.

MAC addresses are most often assigned by the manufacturer of a NIC and are stored in
its hardware, such as the card’s read-only memory or some other firmware mechanism. MAC
addresses are formed according to the rules of one of two numbering name spaces managed
by the Institute of Electrical and Electronics Engineers (IEEE): EUI-48 (which replaces the obsolete
term MAC-48) and EUI-64. EUI is an abbreviation for Extended Unique Identifier.

In IEEE 802 networks, the Data Link Control (DLC) layer of the OSI Reference Model is
divided into two sub-layers: the Logical Link Control (LLC) layer and the Media Access Control
(MAC) layer. The MAC layer interfaces directly with the network medium. Consequently, each
different type of network medium requires a different MAC layer.

Figure 1.13: MAC address contains a unique format of hexadecimal digit

On networks that do not conform to the IEEE 802 standards but do conform to the OSI
Reference Model, the node address is called the Data Link Control (DLC) address.

Finding a MAC Address

To display your MAC address on a Windows NT/2000/2003/XP/Vista computer:

 Click START

 Go to ACCESSORIES

 Select Command Prompt

 Type: ipconfig /all (without quotes)

In the “ipconfig /all” results look for the adapter you want to find the MAC address of. The
MAC address is the number located next to “Physical Address” in the list.
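As an alternative to ipconfig, Python’s standard library can report the host’s hardware address. This is a best-effort sketch: uuid.getnode() returns the MAC as a 48-bit integer, and substitutes a random number if no hardware address can be determined.

```python
import uuid

# uuid.getnode() returns the hardware address as a 48-bit integer; if
# no hardware address can be determined, Python substitutes a random
# 48-bit number, so treat the result as best-effort.
node = uuid.getnode()
mac = ":".join(f"{(node >> shift) & 0xFF:02X}" for shift in range(40, -8, -8))
print(mac)   # e.g. 00:25:96:12:34:56
```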

Format of a MAC Address

Traditional MAC addresses are 12-digit (6 bytes or 48 bits) hexadecimal numbers. By
convention, they are usually written in one of the following three formats:

MM:MM:MM:SS:SS:SS
MM-MM-MM-SS-SS-SS
MMMM.MMSS.SSSS

The leftmost 6 digits (24 bits), called the “prefix”, are associated with the adapter manufacturer.
Each vendor registers and obtains MAC prefixes as assigned by the IEEE. Vendors often possess
many prefix numbers associated with their different products. For example, the prefixes 00:13:10,
00:25:9C and 68:7F:74 (plus many others) all belong to Linksys (Cisco Systems).

The rightmost digits of a MAC address represent an identification number for the specific
device. Among all devices manufactured with the same vendor prefix, each is given their own
unique 24-bit number. Note that hardware from different vendors may happen to share the
same device portion of the address.
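A small sketch of how the prefix and device portions just described can be separated. The function name is ours, and the sample address is invented apart from the Linksys prefix 00:25:9C mentioned above.

```python
def split_mac(mac):
    """Split a 12-hex-digit MAC address into its vendor prefix (OUI)
    and device identifier, accepting ':' or '-' separated notation."""
    digits = mac.replace(":", "").replace("-", "").upper()
    if len(digits) != 12 or any(c not in "0123456789ABCDEF" for c in digits):
        raise ValueError(f"not a 48-bit MAC address: {mac!r}")
    return digits[:6], digits[6:]

# Example using the Linksys (Cisco Systems) prefix from the text;
# the device portion is invented.
prefix, device = split_mac("00:25:9C:12:34:56")
print(prefix, device)   # 00259C 123456
```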

64-bit MAC Addresses

While traditional MAC addresses are all 48 bits in length, a few types of networks require
64-bit addresses instead. ZigBee wireless home automation and other similar networks based
on IEEE 802.15.4, for example, require 64-bit MAC addresses be configured on their hardware
devices.

TCP/IP networks based on IPv6 also implement a different approach to communicating
MAC addresses compared to mainstream IPv4. Instead of using 64-bit hardware addresses,
IPv6 automatically translates a 48-bit MAC address to a 64-bit address by inserting the fixed
(hardcoded) 16-bit value FFFE between the vendor prefix and the device identifier. IPv6 calls
these numbers “identifiers” to distinguish them from true 64-bit hardware addresses. For example,
a 48-bit MAC address 00:25:96:12:34:56 appears on an IPv6 network as (commonly written in
either of these two forms):

00:25:96:FF:FE:12:34:56
0025:96FF:FE12:3456
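The translation just described (inserting the fixed value FF:FE between vendor prefix and device identifier) can be sketched as follows; the helper name is ours.

```python
def mac_to_eui64(mac):
    """Insert the fixed 16-bit value FF:FE between the 3-octet vendor
    prefix and the 3-octet device identifier, as described above.
    (The Modified EUI-64 interface IDs actually used on IPv6 networks
    also flip the U/L bit of the first octet; that step is omitted
    here to match the book's example.)"""
    octets = mac.split(":")
    if len(octets) != 6:
        raise ValueError(f"expected six ':'-separated octets: {mac!r}")
    return ":".join(octets[:3] + ["FF", "FE"] + octets[3:])

print(mac_to_eui64("00:25:96:12:34:56"))   # 00:25:96:FF:FE:12:34:56
```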

MAC and IP Address Relationship

TCP/IP networks use both MAC addresses and IP addresses but for separate purposes.
A MAC address remains fixed to the device’s hardware while the IP address for that same
device can be changed depending on its TCP/IP network configuration. Media Access Control
operates at Layer 2 of the OSI model while Internet Protocol operates at Layer 3. This allows
MAC addressing to support other kinds of networks besides TCP/IP. IP networks manage the
conversion between IP and MAC addresses using Address Resolution Protocol (ARP). The
Dynamic Host Configuration Protocol (DHCP) relies on ARP to manage the unique assignment
of IP addresses to devices.

Universally and Locally Administered Address

Addresses can either be universally administered addresses (UAA) or locally administered
addresses (LAA). A universally administered address is uniquely assigned to a device by its
manufacturer. The first three octets (in transmission order) identify the organization that issued
the identifier and are known as the organizationally unique identifier (OUI). The remainder of
the address (three octets for EUI-48 or five for EUI-64) are assigned by that organization in
nearly any manner they please, subject to the constraint of uniqueness. A locally administered
address is assigned to a device by a network administrator, overriding the burned-in address.

Universally administered and locally administered addresses are distinguished by setting
the second-least-significant bit of the first octet of the address. This bit is also referred to as the
U/L bit, short for Universal/Local, which identifies how the address is administered. If the bit is
0, the address is universally administered. If it is 1, the address is locally administered. In the
example address 06-00-00-00-00-00 the first octet is 06 (hex), the binary form of which is
00000110, where the second-least-significant bit is 1. Therefore, it is a locally administered
address.

Unicast and Multicast

When the least significant bit of an address’s first octet is 0 (zero), the frame is meant to
reach only one receiving NIC. This type of transmission is called unicast. A unicast frame is
transmitted to all nodes within the collision domain. In a modern wired setting the collision
domain usually is the length of the Ethernet cable between two network cards. In a wireless
setting, the collision domain is all receivers that can detect a given wireless signal. If a switch
does not know which port leads to a given MAC address, the switch will forward a unicast frame
to all of its ports (except the originating port), an action known as unicast flood. Only the node
with the matching hardware MAC address will accept the frame; network frames with non-
matching MAC-addresses are ignored, unless the device is in promiscuous mode.

If the least significant bit of the first octet is set to 1, the frame will still be sent only once;
however, NICs will choose to accept it based on criteria other than the matching of a MAC
address: for example, based on a configurable list of accepted multicast MAC addresses. This
is called multicast addressing. The IEEE has built in several special address types to allow
more than one network interface card to be addressed at one time:

Packets sent to the broadcast address, all one bits, are received by all stations on a local
area network. In hexadecimal the broadcast address would be FF:FF:FF:FF:FF:FF. A broadcast
frame is flooded and is forwarded to and accepted by all other nodes.

Packets sent to a multicast address are received by all stations on a LAN that have been
configured to receive packets sent to that address.

Functional addresses identify one or more Token Ring NICs that provide a particular
service, defined in IEEE 802.5.

These are all examples of group addresses, as opposed to individual addresses; the
least significant bit of the first octet of a MAC address distinguishes individual addresses from
group addresses. That bit is set to 0 in individual addresses and set to 1 in group addresses.
Group addresses, like individual addresses, can be universally administered or locally
administered.
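The two low-order bits of the first octet described above can be checked programmatically. This small sketch (the function name is ours) reproduces the book’s examples.

```python
def classify_mac(mac):
    """Classify a MAC address by the two low-order bits of its first octet:
    bit 0 (I/G): 0 = individual (unicast), 1 = group (multicast/broadcast)
    bit 1 (U/L): 0 = universally administered, 1 = locally administered"""
    first_octet = int(mac.replace("-", ":").split(":")[0], 16)
    cast = "group" if first_octet & 0b01 else "individual"
    admin = "locally administered" if first_octet & 0b10 else "universally administered"
    return cast, admin

print(classify_mac("06-00-00-00-00-00"))   # ('individual', 'locally administered')
print(classify_mac("FF:FF:FF:FF:FF:FF"))   # the broadcast address is a group address
```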

Organizationally Unique Identifier (OUI)

A MAC address may include the manufacturer’s organizationally unique identifier (OUI).
An OUI is a 24-bit number that uniquely identifies a vendor or manufacturer; OUIs are purchased
from and assigned by the IEEE. The OUI forms the first three octets (the first 24 bits) of a MAC
address and indicates the specific vendor of the network-connected device. (The last 24 bits of
the MAC address are the device’s unique serial number, assigned by the manufacturer.) The
OUI is sometimes referred to as the Vendor ID.

MAC Address Cloning

Some Internet Service Providers link each of their residential customer accounts to the
MAC addresses of the home network router (or another gateway device). The address seen by
the provider doesn’t change until the customer replaces their gateway, such as by installing a
new router. When a residential gateway is changed, the Internet provider now sees a different
MAC address being reported and blocks that network from going online. A process called “cloning”
solves this problem by enabling the router (gateway) to keep reporting the old MAC address to
the provider even though its own hardware address is different. Administrators can configure
their router (assuming it supports this feature, as many do) to use the cloning option and enter
the MAC address of the old gateway into the configuration screen. When cloning isn’t available,
the customer must contact the service provider to register their new gateway device instead.

MAC Address Filtering

Most broadband routers and other wireless access points include an optional feature
called MAC address filtering, or hardware address filtering. It is supposed to improve security
by limiting the devices that can join the network.

How MAC Address Filtering Works

On a typical wireless network, any device that has the proper credentials (knows the
SSID and password) can authenticate with the router and join the network, getting an IP address
and access to the internet and any shared resources. MAC address filtering adds an extra layer
to this process. Before letting any device join the network, the router checks the device’s MAC
address against a list of approved addresses. If the client’s address matches one on the router’s
list, access is granted as usual; otherwise, it’s blocked from joining.
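The router-side check just described amounts to an allowlist lookup, sketched below. This is a hypothetical illustration: the addresses are invented, and a real router performs the check in firmware, not in Python.

```python
# Hypothetical sketch of MAC address filtering: compare the client's
# address against a list of approved addresses before letting it join.

def normalize(mac):
    return mac.replace("-", ":").upper()

approved = {normalize(m) for m in ["00:25:9C:12:34:56", "68-7F-74-AB-CD-EF"]}

def may_join(client_mac):
    """Return True only if the client's MAC is on the approved list."""
    return normalize(client_mac) in approved

print(may_join("00:25:9c:12:34:56"))   # True  - on the approved list
print(may_join("00:11:22:33:44:55"))   # False - blocked from joining
```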

Configuring MAC Address Filtering

To set up MAC filtering on a router, the administrator must configure a list of devices that
should be allowed to join. The physical address of each approved device must be found and
then those addresses need to be entered into the router, and the MAC address filtering option
turned on. Most routers let you see the MAC address of connected devices from the admin
console. If not, you can use your operating system to do it. Once you have the list of MAC
addresses, go into your router’s settings and put them in their proper places. For example, you
can enable the MAC filter on a Linksys Wireless-N router through the Wireless > Wireless MAC
Filter page. The same can be done on NETGEAR routers through ADVANCED > Security >
Access Control, and some D-Link routers in ADVANCED > NETWORK FILTER.

MAC Address Filtering and Network Security

In theory, having a router perform this connection check before accepting devices increases
the chances of preventing malicious network activity. The MAC addresses of wireless clients
can’t truly be changed because they’re encoded in the hardware. However, it must be noted
that MAC addresses can be faked, and determined attackers know how to exploit this fact. An
attacker still needs to know one of the valid addresses for that network in order to break in, but
this too is not difficult for anyone experienced in using network sniffer tools. MAC filtering will
only prevent average hackers from gaining network access. Most computer users don’t know
how to spoof their MAC address let alone find a router’s list of approved addresses.

1.3.2 Network-Layer Address

A network-layer address identifies an entity at the network layer of the OSI model. Network
addresses usually exist within a hierarchical address space and sometimes are called virtual or
logical addresses. The relationship between a network address and a device is logical and
unfixed; it typically is based either on physical network characteristics (the device is on a particular
network segment) or on groupings that have no physical basis (the device is part of an AppleTalk
zone). End systems require one network-layer address for each network-layer protocol they
support. (This assumes that the device has only one physical network connection.) Routers
and other internetworking devices require one network-layer address per physical network
connection for each network-layer protocol supported. A router, for example, with three interfaces
each running AppleTalk, TCP/IP, and OSI must have three network-layer addresses for each
interface. The router therefore has nine network-layer addresses. Figure 1.14 illustrates how
each network interface must be assigned a network address for each protocol supported

Figure 1.14: Each network interface must be assigned a network address for each protocol supported

1.3.2.1 Internet Protocol

In order to send somebody information over the internet, you need the correct address –
just like sending a regular letter through the mail. In this case however, it is the IP address. Just
as a letter receives a stamp to ensure it arrives at the correct recipient, data packets get an IP
address. The difference between an IP address and a postal address is that they do not correlate
with a specific location per se: instead, they are automatically or manually assigned to networked
devices during connection setup. The “Internet Protocol” plays an important role in this process.
The Internet Protocol (IP) is the principal communications protocol in the Internet protocol suite
for relaying datagrams across network boundaries. Its routing function enables internetworking,
and essentially establishes the Internet.

Internet Protocol (IP) is a connectionless protocol that is an integral part of the Internet
protocol suite (a collection of around 500 network protocols) and is responsible for the addressing
and fragmentation of data packets in digital networks. Together with the transport layer TCP
(Transmission Control Protocol), IP makes up the basis of the internet. To be able to send a
packet from sender to addressee, the Internet Protocol creates a packet structure which
summarizes the sent information. The protocol determines how information about the source
and destination of the data is described, and separates this metadata from the user
data in the IP header. This kind of packet format is also known as an IP datagram. IP has the
task of delivering packets from the source host to the destination host solely based on the IP
addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate
the data to be delivered. It also defines addressing methods that are used to label the datagram
with source and destination information.

In 1974 the Institute of Electrical and Electronics Engineers (IEEE) published a research
paper by the American computer scientists Robert Kahn and Vint Cerf, who described a protocol
model for interconnecting packet networks, building on the internet predecessor ARPANET. In
addition to the TCP transmission control protocol, the primary component of this model was the
IP protocol which (aside from a special abstraction layer) allowed for communication across
different physical networks. After this, more and more research networks were consolidated on
the basis of the “TCP/IP” protocol combination, which in 1981 was definitively specified as a standard
in RFC 791.

IPv4 and IPv6

Today, those who are concerned with the characteristics of a particular IP address e.g.,
one that would make computers addressable in a local network, will no doubt encounter the two
variants IPv4 and IPv6. However, despite undergoing extensive changes in the past, these are
in no way the fourth and sixth generations of the IP protocol. IPv4 is actually the first official
version of the Internet Protocol; its version number relates to the fact that it was used with the
fourth version of the TCP protocol. IPv6 is the direct successor of IPv4 – the development of
IPv5 was abandoned prematurely for economic reasons.

Even though there have been no further releases since IPv4 and IPv6, the Internet Protocol
has been revised since its first mention in 1974 (before this it was just a part of TCP and did not
exist independently). The focus was essentially on optimizing connection set-up and addressing.
For example, the bit length of host addresses was increased from 16 to 32 bits, thereby
extending the address space to approximately four billion possible addresses. The visionary IPv6
has 128-bit address fields and allows for about 3.4 × 10^38 (a number with 39 digits) different
addresses, thus meeting the long-term need for Internet addresses.

The Internet Protocol is responsible for addressing hosts, encapsulating data into
datagrams (including fragmentation and reassembly) and routing datagrams from a source
host to a destination host across one or more IP networks. For these purposes, the Internet
Protocol defines the format of packets and provides an addressing system.

Each datagram has two components: a header and a payload. The IP header includes
source IP address, destination IP address, and other metadata needed to route and deliver the
datagram. The payload is the data that is transported. This method of nesting the data payload
in a packet with a header is called encapsulation.

IP addressing entails the assignment of IP addresses and associated parameters to host
interfaces. The address space is divided into subnetworks, involving the designation of network
prefixes. IP routing is performed by all hosts, as well as routers, whose main function is to
transport packets across network boundaries. Routers communicate with one another via
specially designed routing protocols, either interior gateway protocols or exterior gateway
protocols, as needed for the topology of the network.

IP header of a datagram

As previously mentioned, the Internet Protocol ensures that each data packet is preceded
by the important structural features in the header and is assigned to the appropriate transport
protocol (usually TCP). The header data area has been fundamentally revised for version 6,
which is why it is necessary to distinguish between the IPv4 and IPv6 headers.

Construction of IPv4 headers

Every IP header always begins with a 4-bit specification of the Internet Protocol
version number – either IPv4 or IPv6. Then there are a further 4 bits, which contain information
about the length of the IP header (IP header length), as this does not always remain constant.


Figure 1.15: Construction of IPv4 headers

The total length of the header is always calculated from this value, multiplied by 32 bits.
Thus, the smallest possible header length is 160 bits (equivalent to 20 bytes) when no options
are added; the maximum value of 15 corresponds to 480 bits (equivalent to 60 bytes). Bits 8 to 15 (type of
service) include instructions for handling and prioritizing the datagram. Here the host can specify
the importance of points such as reliability, throughput and delay in data transmission, for
example.

The total length specifies the total size of the data packet – in other words, it adds the size
of the useful data to the header length. Since the field has a length of 16 bits, the maximum limit
is 65,535 bytes. It is stipulated in RFC 791 that each host has to be able to process at least 576
bytes. An IP datagram can be fragmented on its way from the host to routers or other devices
if desired, but the fragments should not be smaller than the 576 bytes mentioned. The other
fields on the IPv4 header have the following meanings:

Identification: All fragments of a datagram have the same identification number that
they receive from the sender. By matching this 16 bit field, the target host can assign individual
fragments to a particular datagram.

Flags: Every IP header contains 3 flag bits, which contain information and guidelines for
fragmentation. The first bit is reserved and always has the value 0. The second bit, called “Don’t
Fragment”, indicates whether the packet may be fragmented (0) or not (1). The last “More
Fragments” bit indicates whether further fragments follow (1) or whether the packet is complete
or will be completed with the current fragment (0).

Fragment offset: This field informs the target host where a single fragment belongs, so that
the entire datagram can easily be reassembled. The 13-bit length means that the datagram
can be split into up to 8192 fragments.

Lifespan (Time to Live, TTL): To ensure that a packet on the network cannot migrate
from node to node indefinitely, it is sent with a maximum lifespan (Time to Live). The RFC
standard provides the unit of seconds for this 8 bit field, while the maximum lifetime is 255
seconds. The TTL is reduced by at least 1 for each network node that has passed. If the value
0 is reached, the data packet is automatically discarded.

Protocol: The protocol field (8 bit) assigns the respective transport protocol to the data
packet, for example the value 6 for TCP or the value 17 for the UDP protocol. The official list of
all possible protocols has been managed and maintained by IANA (Internet Assigned Numbers
Authority) since 2002.

Header checksum: The 16-bit “checksum” field contains the checksum for the header.
This has to be recalculated at every network node, because the TTL is decremented at each hop.
The accuracy of the user data remains unverified for efficiency reasons.

Source address and destination address: 32 bits (4 bytes) each are reserved for the
assigned IP addresses of the originating and target hosts. These IP addresses are usually written
in the form of 4 decimal numbers separated by dots. The lowest address is 0.0.0.0, and the
highest is 255.255.255.255.

Options: The options field expands the IP protocol with additional information which is
not provided in the standard design. Since these are just optional additions, the field has a
variable length, which is limited by the maximum header length. Examples of possible options
include: “Security” (indicates how secret a datagram is), “RecordRoute” (indicates all network
nodes that have passed, their IP address to follow the packet route), and “Time Stamp” (adds
the time at which a particular node was passed).
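As an illustration of the fields just described, the following sketch packs a minimal 20-byte IPv4 header (IHL 5, no options) and computes the header checksum as the one's complement of the one's-complement sum of its 16-bit words. The addresses, identification value and parameter defaults are arbitrary examples.

```python
import socket
import struct

def ipv4_checksum(header):
    """One's complement of the one's-complement sum of the header's
    16-bit words (the checksum field is zero while it is computed)."""
    total = 0
    for i in range(0, len(header), 2):
        total += int.from_bytes(header[i:i + 2], "big")
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src, dst, payload_len, ttl=64, proto=6):
    version_ihl = (4 << 4) | 5              # version 4, IHL 5 -> 5 * 32 bits = 20 bytes
    tos = 0                                 # type of service
    total_length = 20 + payload_len         # header plus user data
    ident = 0x1234                          # arbitrary example identification
    flags_frag = 0                          # no flags, fragment offset 0
    header = struct.pack("!BBHHHBBH4s4s", version_ihl, tos, total_length,
                         ident, flags_frag, ttl, proto, 0,
                         socket.inet_aton(src), socket.inet_aton(dst))
    checksum = ipv4_checksum(header)        # computed with checksum field zeroed
    return header[:10] + checksum.to_bytes(2, "big") + header[12:]

hdr = build_ipv4_header("192.168.0.1", "10.0.0.1", payload_len=100)
print(len(hdr))                 # 20 - the minimum header length
print(ipv4_checksum(hdr))       # 0  - a valid header sums back to zero
```

A receiver validates the header by recomputing the same sum over all 20 bytes; the result is zero exactly when the stored checksum is correct, which is why every router must recompute the field after decrementing the TTL.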

Construction of IPv6 Headers

Figure 1.16: Construction of IPv6 headers

Unlike its predecessor’s header, the IPv6 header has a fixed size of 320 bits (40 bytes).
Less frequently required information can be attached separately between the standard header
and the user data. These extension headers can be compared to the option field of the IPv4
protocol and can be adapted at any time without having to change the actual header. Amongst
other things, you can determine packet routes, specify fragmentation information, or initiate
encrypted communication via IPSec. To optimize performance, a header checksum does not
exist.

Like IPv4, the actual IP header begins with the 4-bit version number of the Internet Protocol.
The following field called “Traffic Class” is equivalent to the “Type of Service” entry in the older
protocol variant. The same rules apply to these 8 bits as in the previous version: they inform the
target host about the qualitative processing of the datagram. A new feature of IPv6 is the
Flow Label (20 bits), which makes it possible to identify continuous streams of data packets.
This allows for the reservation of bandwidth and the optimization of routing. The following list
explains the additional header information for the improved IP protocol:

Size of user data: IPv6 transmits a 16-bit value for the size of the transported user data,
including the extension headers. In the previous version, this value had to be calculated
as the total length minus the header length.

Next Header: The 8-bit “Next Header” field is the counterpart of the protocol specification
in IPv4 and therefore has also assumed its function – the assignment of the desired transport
protocol.

Hop-Limit: The Hop limit (8 bit) defines the maximum number of intermediate stations
that a packet can pass through before it is discarded. Just like the TTL in IPv4, the value is
reduced by at least 1 with each node.

Source and destination address: Most of the IPv6 header is taken up by the addresses of
sender and addressee. As previously mentioned, these have a length of 128 bits (four times the
length of IPv4 addresses). There are also significant differences in the standard notation. The newer
version of the Internet Protocol uses hexadecimal numbers and divides them into 8 blocks of 16
bits each. Colons are used instead of simple dots to separate them. For example, a full
IPv6 address looks something like

2001:0db8:85a3:08d3:1319:8a2e:0370:7344.
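Python’s standard ipaddress module can be used to experiment with this notation: when printing, it drops leading zeros within each 16-bit block (and would compress a run of all-zero blocks to “::”), while the exploded form restores the full eight zero-padded blocks.

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:85a3:08d3:1319:8a2e:0370:7344")
print(addr)           # 2001:db8:85a3:8d3:1319:8a2e:370:7344
print(addr.exploded)  # 2001:0db8:85a3:08d3:1319:8a2e:0370:7344
```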

Internet Protocol addressing

Before datagrams can carry their source and destination addresses in the header, those
addresses must first be assigned to the network subscribers. A distinction is usually made
between internal (private) and external, or public, IP addresses. Three address ranges are
reserved for the former, which are used for communication in local networks:

 10.0.0.0 to 10.255.255.255
 172.16.0.0 to 172.31.255.255
 192.168.0.0 to 192.168.255.255

The prefix “fc00::/7” is provided for IPv6 networks. Addresses in these networks are not
routed in the internet and can therefore be freely selected and used in private or company
networks. Addresses are assigned either by manual input or automatically as soon
as the device connects to the network, as long as the automatic address assignment is activated
and a DHCP server is in use. With the help of a subnet mask, this type of local network can also
be selectively segmented into other areas.
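The three reserved IPv4 ranges listed above can be checked with the standard ipaddress module, whose is_private flag covers them (along with a few other special-purpose blocks such as loopback and link-local):

```python
import ipaddress

# True for addresses inside the reserved private ranges, False for
# ordinary public addresses.
for ip in ["10.0.0.1", "172.16.5.5", "192.168.1.1", "8.8.8.8"]:
    print(ip, ipaddress.ip_address(ip).is_private)
```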

External IP addresses are assigned automatically by the respective internet provider when
a device connects to the internet. All devices that access the internet via a common router share
the same external IP address. Typically, the providers assign a new internet address every 24 hours
from an address range which was assigned to them by the IANA. This also applies to the almost
inexhaustible arsenal of IPv6 addresses, which are only partly released for normal use.
Furthermore, it is not just divided into private and public addresses, but it can be distinguished
by much more versatile classification possibilities in so-called “address scopes”:

 Host Scope: The loopback address ::1 (0:0:0:0:0:0:0:1) allows a host to send IPv6
datagrams to itself.

 Link Local Scope: For IPv6 connectivity it is essential that each host has its own
address, even if it is only valid on a local network. This link local address is identified
by the prefix “fe80::/10” and is used for example, for communication with the standard
gateway (router) in order to generate a public IP address.

 Unique Local Scope: This is the aforementioned address range “fc00::/7”, which
is exclusively reserved for the configuration of local networks.

 Site Local Scope: The site local scope uses the now-outdated prefix “fec0::/10”, which
was also defined for local networks. However, as soon as different networks were
connected, or VPN connections were made between networks that were numbered
with site-local addresses, ambiguities arose, and the standard was deprecated.

 Global Scope: Any host that wants to connect to the internet at least needs its own
public address. This is obtained by auto-configuration, either by accessing the SLAAC
(stateless address Auto configuration) or DHCPv6 (state-oriented address
configuration).

 Multicast Scope: Network nodes, routers, servers and other network services can
be grouped into multicast groups using IPv6. Each of these groups has its own

address, which allows a single packet to reach all the hosts involved. The prefix
“ff00::/8” indicates that a multicast address follows.

1.3.2.2 Regulation of IP Protocol fragmentation

Whenever a data packet needs to be sent via TCP/IP, the overall size is automatically
checked. If the size is above the maximum transmission unit of the respective network interface,
the information becomes fragmented, i.e., broken down into smaller data blocks.

The sending host (IPv6) or an intermediate router (IPv4) takes over this task. By default,
the packet is reassembled by the recipient, which accesses the fragmentation information stored in
the IP header or in the extension header. In exceptional cases, the reassembly can also be
taken over by a firewall, as long as it is configured accordingly.

When a router receives a packet, it examines the destination address and determines the
outgoing interface to use that interface’s Maximum Transmission Unit (MTU). If the packet size
is bigger than the MTU, and the Do not Fragment (DF) bit in the packet’s header is set to 0, then
the router may fragment the packet.

The router divides the packet into fragments. The maximum data size of each fragment is the MTU
minus the IP header size (20 bytes minimum; 60 bytes maximum). The router puts each fragment
into its own packet, with each fragment packet having the following changes:

 The total length field is the fragment size.

 The more fragments (MF) flag is set for all fragments except the last one, which is
set to 0.

 The fragment offset field is set, based on the offset of the fragment in the original
data payload. This is measured in units of eight-byte blocks.

 The header checksum field is recomputed.

For example, for an MTU of 1,500 bytes and a header size of 20 bytes, the fragment
offsets would be multiples of (1500–20)/8 = 185. These multiples are 0, 185, 370, 555, 740,...
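The offset arithmetic above can be reproduced in a few lines of Python (an added illustration; the MTU and header values are taken from the example in the text):

```python
# The fragment offset field counts 8-byte blocks, so each full-sized
# fragment advances the offset by (MTU - IP header size) / 8.
mtu = 1500
ip_header = 20

data_per_fragment = mtu - ip_header      # 1480 bytes (already a multiple of 8)
offset_step = data_per_fragment // 8     # 185 blocks of eight bytes each

offsets = [i * offset_step for i in range(5)]
print(offsets)  # [0, 185, 370, 555, 740]
```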

It is possible that a packet is fragmented at one router, and that the fragments are further
fragmented at another router. For example, a packet of 4,520 bytes, including the 20 bytes of
the IP header (without options) is fragmented to two packets on a link with an MTU of 2,500
bytes:

The total data size is preserved: 2480 bytes + 2020 bytes = 4500 bytes. The offsets are 0
and 0 + 2480/8 = 310.

On a link with an MTU of 1,500 bytes, each fragment results in two fragments:

Again, the data size is preserved: 1480 + 1000 = 2480, and 1480 + 540 = 2020.

In this case too, the More Fragments (MF) bit remains set to 1 in every fragment derived
from a fragment that already had it set; as usual, only the very last fragment of the original
packet carries MF = 0. And of course, the Identification field continues to have the same value in
all re-fragmented fragments. This way, even if fragments are re-fragmented, the receiver knows
they have initially all started from the same packet.

The last offset and last data size are used to calculate the total data size: 495*8 + 540 =
3960 + 540 = 4500.

Since IPv6 no longer allows routers to fragment packets, an IP packet must already have a
suitable size before it is sent. If IPv6 datagrams larger than the maximum transmission unit
arrive at a router, the router discards them and informs the sender with an ICMPv6 type 2
“Packet Too Big” message. The data-sending application can then either create smaller,
unfragmented packets or initiate fragmentation itself. In that case, the appropriate extension
header is added to the IP packet, so that the target host can reassemble the individual
fragments after reception.

1.3.2.3 Reassembly

A receiver knows that a packet is a fragment if at least one of the following conditions is
true:

 The “more fragments” flag is set. (This is true for all fragments except the last.)

 The “fragment offset” field is nonzero. (This is true for all fragments except the
first.)

The receiver identifies matching fragments using the foreign and local address, the protocol
ID, and the identification field. The receiver reassembles the data from fragments with the
same ID using both the fragment offset and the more fragments flag. When the receiver receives
the last fragment (which has the “more fragments” flag set to 0), it can calculate the length of
the original data payload, by multiplying the last fragment’s offset by eight, and adding the last
fragment’s data size. In the example above, this calculation was 495*8 + 540 = 4500 bytes.

When the receiver has all fragments, they can be correctly ordered by using the offsets,
and reassembled to yield the original data segment.
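The reassembly arithmetic above can be sketched in Python (an added illustration; the fragment tuples are reconstructed from the worked re-fragmentation example, where a 4500-byte payload ends up as four fragments):

```python
# Each fragment is a (fragment_offset, data_size, more_fragments) tuple;
# offsets are in 8-byte blocks, as in the IPv4 header.
fragments = [
    (0,   1480, 1),
    (185, 1000, 1),
    (310, 1480, 1),
    (495, 540,  0),   # last fragment: MF flag cleared
]

# Order fragments by offset, then derive the original payload length
# from the last fragment: offset * 8 + its data size.
fragments.sort(key=lambda f: f[0])
last_offset, last_size, mf = fragments[-1]
assert mf == 0, "the last fragment must carry MF = 0"
total_length = last_offset * 8 + last_size
print(total_length)  # 4500
```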

1.4 Transport Layer Protocols


The most important and common protocols of the TCP/IP transport layer include:

 User Datagram Protocol (UDP)

 Transmission Control Protocol (TCP)

By building on the functionality provided by the Internet Protocol (IP), the transport protocols
deliver data to applications executing in the IP host. This is done by making use of ports. The
transport protocols can provide additional functionality such as congestion control, reliable data
delivery, duplicate data suppression, and flow control as is done by TCP.

1.4.1 Ports

The concept of ports provides a way to uniformly and uniquely identify connections and
the programs and hosts that are engaged in them, irrespective of specific process IDs.

Each process that wants to communicate with another process identifies itself to the
TCP/IP protocol suite by one or more ports. A port is a 16-bit number, used by the host-to-host

protocol to identify to which higher level protocol or application program (process) it must deliver
incoming messages. There are two types of port:

 Well-known: Well-known ports belong to standard servers; for example, Telnet uses
port 23. Well-known port numbers range between 1 and 1023 (prior to 1992, the
range between 256 and 1023 was used for UNIX-specific servers). Well-known
port numbers are typically odd, because early systems using the port concept
required an odd/even pair of ports for duplex operations. Most servers require only
a single port; exceptions are the BOOTP server, which uses two (67 and 68), and
the FTP server, which also uses two (20 and 21).

The well-known ports are controlled and assigned by the Internet Assigned Numbers
Authority (IANA) and on most systems can only be used by system processes or by
programs executed by privileged users. The reason for well-known ports is to allow
clients to be able to find servers without configuration information. The well-known
port numbers are defined in STD 2 - Assigned Internet Numbers.

 Ephemeral: Clients do not need well-known port numbers because they initiate
communication with servers and the port number they are using is contained in the
UDP datagrams sent to the server. Each client process is allocated a port number
for as long as it needs it by the host it is running on. Ephemeral port numbers have
values greater than 1023, normally in the range 1024 to 65535. A client can use any
number allocated to it, as long as the combination of <transport protocol, IP address,
port number> is unique.

Ephemeral ports are not controlled by IANA and can be used by ordinary user-
developed programs on most systems. Confusion, due to two different applications
trying to use the same port numbers on one host, is avoided by writing those
applications to request an available port from TCP/IP. Because this port number is
dynamically assigned, it may differ from one invocation of an application to the
next.

UDP and TCP use the same port principle. To the best possible extent, the same
port numbers are used for the same services on top of both UDP and TCP.
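The two port ranges described above can be captured in a trivial classifier (an added sketch, purely for illustration):

```python
def port_class(port: int) -> str:
    """Classify a 16-bit port number per the ranges described above."""
    if not 0 <= port <= 65535:
        raise ValueError("a port is a 16-bit number")
    if 1 <= port <= 1023:
        return "well-known"
    return "ephemeral"

print(port_class(23))     # well-known (Telnet)
print(port_class(49152))  # ephemeral
```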

1.5 User Datagram Protocol (UDP)


The User Datagram Protocol (UDP) is a connectionless transport layer protocol (Layer 4)
that belongs to the Internet protocol family. UDP is basically an interface between IP and
upper-layer processes. UDP protocol ports distinguish multiple applications running on a single device
from one another.

Unlike the TCP, UDP adds no reliability, flow control, or error-recovery functions to IP.
Because of UDP’s simplicity, UDP headers contain fewer bytes and consume less network
overhead than TCP. UDP is useful in situations in which the reliability mechanisms of TCP are
not necessary, such as in cases where a higher-layer protocol might provide error and flow
control.

UDP is the transport protocol for several well-known application layer protocols, including
Network File System (NFS), Simple Network Management Protocol (SNMP), Domain Name
System (DNS), and Trivial File Transfer Protocol (TFTP).

The UDP packet format contains four fields, as shown in Figure 1.17. These include
Source Port, Destination Port, Length, and Checksum fields.

Figure 1.17: UDP Packet Structure

The Source and Destination port fields contain the 16-bit UDP protocol port numbers
used to de-multiplex datagrams for receiving application layer processes. A Length field specifies
the length of the UDP header and data. The Checksum field provides an (optional) integrity
check on the UDP header and data.
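A minimal sketch of this four-field layout using Python's struct module (an added illustration; the port, length, and checksum values are assumptions, not from a real capture):

```python
import struct

# The 8-byte UDP header from Figure 1.17: source port, destination port,
# length, checksum - four 16-bit fields in network byte order.
header = struct.pack("!HHHH", 50000, 53, 8 + 33, 0x1C46)

src, dst, length, checksum = struct.unpack("!HHHH", header)
# Length covers the 8-byte header plus the data (here an assumed 33 bytes).
print(src, dst, length, hex(checksum))  # 50000 53 41 0x1c46
```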

1.6 Transmission Control Protocol (TCP)


The Transmission Control Protocol (TCP) provides reliable transmission of data in an IP
environment. TCP corresponds to the transport layer (Layer 4) of the OSI reference model.
Among the services that TCP provides are stream data transfer, reliability, efficient flow control,

full-duplex operation, and multiplexing. With stream data transfer, TCP delivers an unstructured
stream of bytes identified by sequence numbers. This service benefits applications because
they do not have to chop data into blocks before handing it off to TCP. Instead, TCP groups
bytes into segments and passes them to IP for delivery.

TCP offers reliability by providing connection-oriented, end-to-end reliable packet delivery
through an internetwork. It does this by sequencing bytes with a forward acknowledgment
number that indicates to the destination the next byte that the source expects to receive. Bytes
not acknowledged within a specified time period are retransmitted. The reliability mechanism of
TCP allows devices to deal with lost, delayed, duplicate, or misread packets. A timeout mechanism
allows devices to detect lost packets and request retransmission.

TCP offers efficient flow control, which means that, when sending acknowledgments back
to the source, the receiving TCP process indicates the highest sequence number that it can
receive without overflowing its internal buffers. Full-duplex operation means that TCP processes
can both send and receive at the same time. Finally, TCP’s multiplexing means that numerous
simultaneous upper-layer conversations can be multiplexed over a single connection.

1.6.1 TCP Connection Establishment

To use reliable transport services, TCP hosts must establish a connection-oriented session
with one another. Connection establishment is performed by using a three-way handshake
mechanism.

A three-way handshake synchronizes both ends of a connection by allowing both sides to
agree upon initial sequence numbers. This mechanism also guarantees that both sides are
ready to transmit data and know that the other side is ready to transmit as well. This is necessary
so that packets are not transmitted or retransmitted during session establishment or after session
termination.

Each host randomly chooses a sequence number used to track bytes within the stream
that it is sending and receiving. Then, the three-way handshake proceeds in the following manner:

(a) The first host (Host A) initiates a connection by sending a packet with the initial
sequence number (X) and SYN bit set to indicate a connection request.

(b) The second host (Host B) receives the SYN, records the sequence number X, and
replies by acknowledging the SYN (with an ACK =X + 1). Host B includes its own

initial sequence number (SEQ = Y). An ACK of 20 means that the host has received
bytes 0 through 19, and expects byte 20 next. This technique is called forward
acknowledgment.

(c) Host A then acknowledges all bytes that Host B sent with a forward acknowledgment
indicating the next byte Host A expects to receive (ACK = Y + 1).

Data transfer then can begin.

Figure 1.8: TCP Three-way Handshake
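The sequence-number arithmetic of steps (a) through (c) can be sketched as follows (an added illustration; segment contents are simplified to dictionaries, and the modulo keeps values within the 32-bit sequence space):

```python
import random

x = random.randrange(2**32)   # Host A's initial sequence number (ISN)
y = random.randrange(2**32)   # Host B's initial sequence number

syn     = {"flags": "SYN",     "seq": x}
syn_ack = {"flags": "SYN,ACK", "seq": y, "ack": (x + 1) % 2**32}
ack     = {"flags": "ACK",     "seq": (x + 1) % 2**32, "ack": (y + 1) % 2**32}

# Each side acknowledges the other's ISN plus one (forward acknowledgment).
print(syn_ack["ack"] == (x + 1) % 2**32)  # True
print(ack["ack"] == (y + 1) % 2**32)      # True
```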

1.6.2 TCP Segmentation

TCP is often called a “connection-oriented” protocol because it ensures the successful
delivery of data to the receiving host. TCP divides the data received from the application layer
into segments and attaches a header to each segment.

Segment headers contain sender and recipient ports, segment ordering information, and
a checksum. The TCP protocols on both hosts use the checksum to
determine whether data has transferred without error. We will see more about these headers in
TCP Packet Structure section.

1.6.3 Positive Acknowledgment and Retransmission

A simple transport protocol might implement a reliability and flow control technique in
which the source sends one packet, starts a timer, and waits for an acknowledgment before
sending a new packet. If the acknowledgment is not received before the timer expires, the
source retransmits the packet. Such a technique is called positive acknowledgment and
retransmission (PAR).

By assigning each packet a sequence number, PAR enables hosts to track lost or duplicate
packets caused by network delays that result in premature retransmission. The sequence
numbers are sent back in the acknowledgments so that the acknowledgments can be tracked.

PAR is an inefficient use of bandwidth, however, because a host must wait for an
acknowledgment before sending a new packet, and only one packet can be sent at a time.

1.6.4 TCP Sliding Window

A TCP sliding window provides more efficient use of network bandwidth than PAR because
it enables hosts to send multiple bytes or packets before waiting for an acknowledgment.

In TCP, the receiver specifies the current window size in every packet. Because TCP
provides a byte-stream connection, window sizes are expressed in bytes. This means that a
window is the number of data bytes that the sender is allowed to send before waiting for an
acknowledgment. Initial window sizes are indicated at connection setup but might vary throughout
the data transfer to provide flow control. A window size of zero, for instance, means “Send no
data.”

In a TCP sliding-window operation, for example, the sender might have a sequence of
bytes to send (numbered 1 to 10) to a receiver who has a window size of 5. The sender then
would place a window around the first 5 bytes and transmit them together. It would then wait for
an acknowledgment.

The receiver would respond with an ACK of 6, indicating that it has received bytes 1 to 5
and is expecting byte 6 next. In the same packet, the receiver would indicate that its window
size is 5. The sender then would move the sliding window 5 bytes to the right and transmit bytes
6 to 10. The receiver would respond with an ACK of 11, indicating that it is expecting sequenced
byte 11 next. In this packet, the receiver might indicate that its window size is 0 (because, for
example, its internal buffers are full). At this point, the sender cannot send any more bytes until
the receiver sends another packet with a window size greater than 0.
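The exchange described above can be replayed with a toy simulation (an added sketch; real TCP tracks bytes, timers, and retransmissions far more carefully):

```python
# Ten data bytes (numbered 1-10) and a receiver window of 5 bytes,
# as in the sliding-window example above.
data = list(range(1, 11))
window = 5
next_byte = 1
acks = []

while next_byte <= len(data):
    if window == 0:
        break  # "Send no data" - wait for a window update
    sent = data[next_byte - 1 : next_byte - 1 + window]
    # The receiver ACKs with the number of the next byte it expects.
    next_byte = sent[-1] + 1
    acks.append(next_byte)

print(acks)  # [6, 11]
```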

1.6.5 TCP Packet Structure

The following descriptions summarize the TCP packet fields illustrated in Figure 1.18 (A)

 Source port and destination port — Identifies points at which upper-layer source
and destination processes receive TCP services.

 Sequence number — Usually specifies the number assigned to the first byte of
data in the current message. In the connection-establishment phase, this field also
can be used to identify an initial sequence number to be used in an upcoming
transmission.

 Acknowledgment number — Contains the sequence number of the next byte of
data that the sender of the packet expects to receive.

 Data offset — Indicates the number of 32-bit words in the TCP header.

 Reserved— Remains reserved for future use.

 Flags — Carries a variety of control information. There are six TCP flags:

 SYN - Initiates a connection

 ACK - Acknowledges received data

 FIN - Closes a connection

 RST - Aborts a connection in response to an error

 PSH - Informs the receiving host that the data should be pushed up to the receiving
application immediately, instead of waiting for additional data to be gathered in the
receive buffer.

 URG – When data needs to be sent urgently, TCP creates a special segment in which
it sets the URG flag and also the urgent pointer field. This causes the receiving
TCP to forward the urgent data on a separate channel to the application, which allows
the application to process the data out of band.

 Window — Specifies the size of the sender’s receive window (that is, the buffer
space available for incoming data).

 Checksum — Indicates whether the header was damaged in transit.

 Urgent pointer — Points to the first urgent data byte in the packet.

 Options — Specifies various TCP options. This will be discussed in detail in next
section “TCP Header Options”.

 Data / Payload — Contains upper-layer information.



Figure 1.18 (A) : TCP Packet Structure
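As an added sketch of the layout in Figure 1.18 (A), the fixed 20-byte header can be built and parsed with Python's struct module (the field values here are illustrative assumptions, not from a real capture):

```python
import struct

# Ports, sequence, acknowledgment, data offset/reserved, flags, window,
# checksum, urgent pointer - all in network byte order.
TCP_FMT = "!HHIIBBHHH"

segment = struct.pack(TCP_FMT, 49152, 80, 1000, 0, (5 << 4), 0x02, 65535, 0, 0)

sport, dport, seq, ack, off_res, flags, window, checksum, urg = \
    struct.unpack(TCP_FMT, segment)
data_offset = off_res >> 4   # number of 32-bit words in the header (5 = 20 bytes)
flag_names = {0x01: "FIN", 0x02: "SYN", 0x04: "RST",
              0x08: "PSH", 0x10: "ACK", 0x20: "URG"}
set_flags = [name for bit, name in flag_names.items() if flags & bit]
print(data_offset, set_flags)  # 5 ['SYN']
```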

1.6.6 TCP Header – Options

The TCP options in common use as of the date of compilation of this material are:

 End of Options List

 No Operation (NOP)

 Maximum Segment Size (MSS)

 Window Scaling

 Selective Acknowledgement Permitted (SACK Permitted)

 Selective Acknowledgement (SACK)

 Timestamps

Options may occupy space at the end of the TCP header and are a multiple of 8 bits in
length. All options are included in the checksum. An option may begin on any octet boundary.
There are two cases for the format of an option:

 Case 1: A single octet of option-kind.

 Case 2: An octet of option-kind, an octet of option-length, and the actual option-data octets.

The option-length counts the two octets of option-kind and option-length as well as the
option-data octets. Note that the list of options may be shorter than the data offset field might
imply. The content of the header beyond the End-of-Option option must be header padding
(i.e., zero).

Table 1.2: TCP Options along with Kind and Length
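The two option formats can be illustrated with a small parser (an added sketch; the option bytes below encode a hypothetical MSS of 1460, followed by NOP padding and End of Option List):

```python
# Kind 2 (MSS), length 4, value 0x05B4 = 1460; two NOPs (kind 1); EOL (kind 0).
raw = bytes([2, 4, 0x05, 0xB4, 1, 1, 0])

options = []
i = 0
while i < len(raw):
    kind = raw[i]
    if kind == 0:                 # End of Option List: single octet, stop
        options.append(("EOL",))
        break
    if kind == 1:                 # No Operation: single octet (Case 1)
        options.append(("NOP",))
        i += 1
        continue
    length = raw[i + 1]           # counts kind + length + data octets (Case 2)
    data = raw[i + 2 : i + length]
    options.append((kind, length, data))
    i += length

print(options)  # [(2, 4, b'\x05\xb4'), ('NOP',), ('NOP',), ('EOL',)]
```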

End of Option List

This option code indicates the end of the option list. This might not coincide with the end
of the TCP header according to the Data Offset field. This is used at the end of all options, not
the end of each option, and need only be used if the end of the options would not otherwise
coincide with the end of the TCP header.

No Operation

This option code may be used between options, for example, to align the beginning of a
subsequent option on a word boundary. There is no guarantee that senders will use this option,
so receivers must be prepared to process options even if they do not begin on a word boundary.

Maximum Segment Size

If this option is present, then it communicates the maximum received segment size at the
TCP which sends this segment. This field must only be sent in the initial connection request
(i.e., in segments with the SYN control bit set). If this option is not used, any segment size is

allowed.

Window Scaling

The three-byte Window Scale option MAY be sent in a <SYN> segment by a TCP. It has
two purposes: (1) indicate that the TCP is prepared to both send and receive window scaling,
and (2) communicate the exponent of a scale factor to be applied to its receive window. Thus,
a TCP that is prepared to scale windows SHOULD send the option, even if its own scale factor
is 1 and the exponent 0. The scale factor is limited to a power of two and encoded logarithmically,
so it may be implemented by binary shift operations. The maximum scale exponent is limited to
14, for a maximum permissible receive window size of 1 GiB (2^(14+16)).
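A small sketch of how the scale exponent is applied (an added illustration): the advertised 16-bit window field is left-shifted by the negotiated exponent.

```python
def effective_window(window_field: int, scale_exponent: int) -> int:
    """Apply the Window Scale exponent to the advertised 16-bit window."""
    if not 0 <= scale_exponent <= 14:
        raise ValueError("the scale exponent is capped at 14")
    return window_field << scale_exponent

# With the maximum exponent, a full 16-bit window approaches 1 GiB:
print(effective_window(65535, 14))  # 1073725440 bytes, just under 2**30
print(2 ** (14 + 16))               # 1073741824 (the 1 GiB ceiling)
```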

SACK Permitted

This two-byte option may be sent in a SYN by a TCP that has been extended to receive
(and presumably process) the SACK option once the connection has opened. It MUST NOT be
sent on non-SYN segments.

SACK

The SACK option is to be used to convey extended acknowledgment information from
the receiver to the sender over an established TCP connection.

Timestamps

The Timestamps option is introduced to address some of the issues such as latency and
error checking. The Timestamps option is specified in a symmetrical manner, so that Timestamp
Value (TSval) timestamps are carried in both data and <ACK> segments and are echoed in
Timestamp Echo Reply (TSecr) fields carried in returning <ACK> or data segments. Originally
used primarily for timestamping individual segments, the properties of the Timestamps option
allow for taking time measurements as well as additional uses.

The Timestamps option is important when large receive windows are used to allow the
use of the Protection Against Wrapped Sequence Number (PAWS) mechanism.

TCP is a symmetric protocol, allowing data to be sent at any time in either direction, and
therefore timestamp echoing may occur in either direction. For simplicity and symmetry, we
specify that timestamps always be sent and echoed in both directions. For efficiency, we combine

the timestamp and timestamp reply fields into a single TCP Timestamps option.

Summary

In this chapter, you learned:

 Data networks are systems of end devices, intermediary devices, and the media
connecting the devices. For communication to occur, these devices must know how
to communicate.

 These devices must comply with communication rules and protocols. TCP/IP is an
example of a protocol suite.

 Most protocols are created by a standards organization such as the IETF or IEEE.

 The most widely-used networking models are the OSI and TCP/IP models.

 Data that passes down the stack of the OSI model is segmented into pieces and
encapsulated with addresses and other labels. The process is reversed as the pieces
are de-encapsulated and passed up the destination protocol stack.

 The OSI model describes the processes of encoding, formatting, segmenting, and
encapsulating data for transmission over the network.

 The TCP/IP protocol suite is an open standard protocol that has been endorsed by
the networking industry and ratified, or approved, by a standards organization.

 The Internet Protocol Suite is a suite of protocols required for transmitting and
receiving information using the Internet.

 Protocol Data Units (PDUs) are named according to the protocols of the TCP/IP
suite: data, segment, packet, frame, and bits.

 Applying models allows individuals, companies, and trade associations to analyze
current networks and plan the networks of the future.

Review Questions
 Describe the structure of a TCP packet.

 What do you understand by the Timestamps option?

 Explain the three-way handshake and its importance in communication.



 Write short notes on the structure of IPv4 & IPv6 headers.

 Explain MAC Addressing.

 Enumerate the main differences identified between the OSI and TCP/IP model.

 Explain Open Systems Interconnection (OSI) model.

References
 Internetworking Technologies Handbook - by Cisco Systems Inc.
 Packet Guide to Core Network Protocols by Bruce Hartpence
 TCP/IP Tutorial and Technical Overview (IBM Redbooks)

Online Sources
 http://what-when-how.com/data-communications-and-networking/networkmodels-data-communications-and-networking/
 https://en.wikipedia.org/
 http://www.iana.org/go/rfc793
 http://www.iana.org/go/rfc7323
 https://tools.ietf.org/html/rfc2018

UNIT - 2
IP ROUTING
Learning Objectives

This chapter acts as a foundation for the technology discussions that follow. The topic of
routing has been covered to make sure you understand the basics of routing and routing protocol.

At the end of the chapter a learner should be able to clearly understand the process of
routing and the key concepts of the following:

 Static routing and Dynamic routing.

 RIP, Broadcast and Multicast domains

 OSPF and EIGRP

 Network Address Translation

 Classes of IP Addressing

Structure
2.1 Introduction

2.2 IP Routing Process

2.3 Routing Components

2.4 Administrative Distance & Metric

2.5 IP Routing Protocols

2.6 Classful & Classless Routing

2.7 Routing Information Protocol (RIP)

2.8 Network Address Translation (NAT)

2.9 Classes of IP addresses

2.10 Classful Addressing

2.11 Automatic Private IP Addressing (APIPA)



2.1 Introduction
In this chapter, some fundamental concepts and terms used in the evolving language of
internetworking are addressed. The topic of routing has been covered in computer science
literature for over two decades, but routing only achieved commercial popularity in the mid-
1980s. The primary reason for this time lag is the nature of networks in the 1970s. During this
time, networks were fairly simple, homogeneous environments. Only recently has large-scale
internetworking become popular.

Routing is moving information across an internetwork from source to destination. Along
the way, at least one intermediate node is typically encountered. Routing is often contrasted
with bridging, which seems to accomplish precisely the same thing. The primary difference
between the two is that bridging occurs at Layer 2 (the link layer) of the OSI reference model,
while routing occurs at Layer 3 (the network layer). This distinction provides routing and bridging
with different information to use in the process of moving information from source to destination.
As a result, routing and bridging accomplish their tasks in different ways and, in fact, there are
several different kinds of routing and bridging.

2.1.1 IP Routing

IP routing is the process of sending packets from a host on one network to another
host on a different remote network. This process is usually done by routers. Routers examine
the destination IP address of a packet, determine the next-hop address, and forward the packet.
Routers use routing tables to determine a next hop address to which the packet should be
forwarded. Consider the following example of IP routing:

Figure 2.1: IP Routing

Host A wants to communicate with host B, but host B is on another network. Host A is
configured to send all packets destined for remote networks to router R1. Router R1 receives
the packets, examines the destination IP address and forwards the packet to the outgoing
interface associated with the destination network.

2.1.2 Default gateway

A default gateway is a router that hosts use to communicate with other hosts on remote
networks. A default gateway is used when a host doesn’t have a route entry for the specific
remote network and doesn’t know how to reach that network. Hosts can be configured to send
all packets destined to remote networks to a default gateway, which has a route to reach that
network. The following example explains the concept of a default gateway more thoroughly.

Figure 2.2: Default Gateway

Host A has an IP address of the router R1 configured as the default gateway address.
Host A is trying to communicate with host B, a host on another, remote network. Host A looks up
in its routing table to check if there is an entry for that destination network. If the entry is not
found, the host sends all data to the router R1. Router R1 receives the packets and forwards
them to host B.

2.1.3 Routing table

Each router maintains a routing table and stores it in RAM. A routing table is used by
routers to determine the path to the destination network. Each routing table consists of the
following entries:

 network destination and subnet mask – specifies a range of IP addresses.

 remote router – IP address of the router used to reach that network.

 outgoing interface – outgoing interface the packet should go out to reach the
destination network.

There are three different methods for populating a routing table:

 directly connected subnets

 using static routing

 using dynamic routing



Each of these methods will be described in the following chapters.

Consider the following example. Host A wants to communicate with host B, but host B is
on another network. Host A is configured to send all packets destined for remote networks to
the router. The router receives the packets, checks the routing table to see if it has an entry for
the destination address. If it does, the router forwards the packet out the appropriate interface
port. If the router doesn’t find the entry, it discards the packet.

Figure 2.3: Connected Routes

We can use the show ip route command from the enabled mode to display the router’s
routing table.

Figure 2.4: Show IP Route Command

As you can see from the output above, this router has two directly connected routes to the
subnets 10.0.0.0/8 and 192.168.0.0/24. The character C in the routing table indicates that a
route is a directly connected route. So when host A sends the packet to host B, the router will
look up into its routing table and find the route to the 10.0.0.0/8 network on which host B
resides. The router will then use that route to route packets received from host A to host B.
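As an added sketch, the lookup described above can be imitated with Python's ipaddress module (the interface names are illustrative, and a real router performs longest-prefix matching exactly as modeled here):

```python
import ipaddress

# A toy routing table mirroring the example: two directly connected routes.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),     "FastEthernet0/1"),
    (ipaddress.ip_network("192.168.0.0/24"), "FastEthernet0/0"),
]

def lookup(destination: str):
    """Return the outgoing interface for a destination, or None (drop)."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, iface) for net, iface in routing_table if dest in net]
    if not matches:
        return None  # no entry: the router discards the packet
    # Longest-prefix match: the most specific matching network wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.0.2"))    # FastEthernet0/1
print(lookup("172.16.0.1"))  # None
```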

2.2 IP Routing Process


When a user communicates with other users over a network (or the Internet), it does not
take too much time to receive the reply from the remote user. However, this communication

goes through a complex and lengthy process called IP Routing Process. If you have the basic
idea of the network, network devices, network protocols, and the OSI model, you can easily
understand and describe the IP Routing Process. Step by step IP routing process is discussed
in detail below.

The IP routing process is quite simple and remains unchanged regardless of the number
of connected devices and the size of the network being used. We will use the following simple
network design to explain the step by step IP routing process.

2.2.1 IP Routing Process Topology

In order to understand the IP routing process, we will explain the communication between
PC1 and PC2 that are interconnected to each other using a router. We will use the Internet
Control Message Protocol (ICMP) protocol (used by the ping utility) to test and explain the IP
routing process between PC1 and PC2. Let’s see what happens when PC1 communicates to
PC2 on a different network using a router.

Figure 2.5: IP Routing Process

2.2.2 IP Routing Process Steps

When a user executes the ping 192.168.1.1 command, a packet is generated on PC1
with the help of the IP and ICMP protocols.

The IP protocol will determine whether the destination is local or remote by comparing the
destination IP address against PC1's own IP address and subnet mask (192.168.0.1/24). Since
the packet is destined for a remote network (192.168.1.0/24), it must be sent to the router
using the gateway address (192.168.0.100).
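This local-versus-remote decision can be sketched in Python with the standard ipaddress module (the addresses are the ones from the topology above):

```python
import ipaddress

def is_local(src_ip_with_mask, dst_ip):
    """Decide whether dst_ip is on the sender's own subnet."""
    local_net = ipaddress.ip_interface(src_ip_with_mask).network
    return ipaddress.ip_address(dst_ip) in local_net

# PC1 is 192.168.0.1/24; the ping target is 192.168.1.1
if is_local("192.168.0.1/24", "192.168.1.1"):
    print("deliver directly on the local segment")
else:
    print("send to the default gateway (192.168.0.100)")
```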

In order to send the packet to the router, the hardware address of the router's interface
(Fa0/0) is required. To get the hardware address, the ARP cache is checked. If the IP
address has not already been resolved to a hardware address, it will not be present in the ARP
cache, so the first time the host will send an ARP broadcast asking for the hardware
address of IP address 192.168.0.100.

The router will respond with the hardware address of the Fa0/0 interface connected to PC1
(the local network), and the packet will be handed over to the Data Link layer.

The Data Link layer will create a frame that includes the source and destination hardware
addresses, the Type field, and Frame Check Sequence (FCS).

The Type field specifies the network layer protocol, and the FCS field carries
the Cyclic Redundancy Check (CRC) value used for error detection.

Now, the Data Link layer will hand the frame over to the Physical layer. The Physical layer
will encode the binary bit stream (1s and 0s) into a signal (analog or digital, depending on
the type of media being used). The signal will then be transmitted on the local
physical network.

The router's interface (Fa0/0) will receive the signal and decode it back into the
binary bit stream. Next, the router's interface will rebuild the frame and check the FCS field
at the end of the frame, running a CRC to ensure that the frame arrived without errors.

Now, the destination hardware address and the Type field will be checked to determine
what the router should do with this frame. Since IP is in the Type field, the router will hand over
the packet to the IP protocol running on the router.

Now, the packet’s destination IP address will be checked in the routing table to determine
where the packet should be forwarded. Since the destination IP address is 192.168.1.1, the
router will see in the routing table that 192.168.1.0 network is directly connected through the
Fa0/1 interface.

If the routing table does not contain routing information for the destination network
(192.168.1.0), the packet will be discarded and a destination-unreachable message will
be sent to the source device (PC1).

Next, the router will place the packet in the buffer memory of the Fa0/1 interface. Now, the
router will create a frame to send the packet to the destination host. First, the ARP cache will be
checked to determine whether the hardware address has already been resolved or not. If the
hardware address is not in the ARP cache, the router will send an ARP broadcast out to Fa0/1
interface to find the hardware address of 192.168.1.1.

PC2 will respond with the hardware address of its Network Interface Card (NIC), in this
case, Ethernet 0, with an ARP reply. The router’s Fa0/1 interface now has everything that is
required to send the packet to the final destination. Now, the router will send the frame to PC2.

Once the frame is received by PC2, the CRC value will be calculated. If everything is OK,
the packet will be handed over to IP to check the destination IP address. The IP destination
address will match with the IP address of PC2. Next, the protocol field of the packet will be
checked to determine what the purpose of the packet is.

Since the packet is an ICMP echo request (ping), PC2 will generate a new ICMP echo-reply
packet with PC2 as the source IP address and PC1 as the destination IP address. The
process will then start all over again, this time in the reverse direction (PC2 to
PC1). Since the hardware addresses of each device have already been resolved, each
device only needs to look in its ARP cache to find the hardware address of the next interface.

Finally, PC1 receives the ICMP echo-reply from PC2. That is everything that happens when
one device communicates with another device through a router.

2.3 Routing Components


Routing involves two basic activities: determination of optimal routing paths and the
transport of information groups (typically called packets) through an internetwork. In this
publication, the latter of these is referred to as switching. Switching is relatively straightforward.
Path determination, on the other hand, can be very complex.

2.3.1 Path Determination

A metric is a standard of measurement—for example, path length—that is used by routing


algorithms to determine the optimal path to a destination. To aid the process of path determination,
routing algorithms initialize and maintain routing tables, which contain route information. Route
information varies depending on the routing algorithm used.

Routing algorithms fill routing tables with a variety of information. Destination/next hop
associations tell a router that a particular destination can be reached optimally by sending the
packet to a particular router representing the “next hop” on the way to the final destination.
When a router receives an incoming packet, it checks the destination address and attempts to
associate this address with a next hop. Figure 2.6 shows an example of a destination/next hop
routing table.

Routing tables can also contain other information, such as information about the desirability
of a path. Routers compare metrics to determine optimal routes. Metrics differ depending on
the design of the routing algorithm being used.

Routers communicate with one another (and maintain their routing tables) through the
transmission of a variety of messages. The routing update message is one such message.
Routing updates generally consist of all or a portion of a routing table.

By analyzing routing updates from all routers, a router can build a detailed picture of
network topology. A link-state advertisement is another example of a message sent between
routers. Link-state advertisements inform other routers of the state of the sender’s links. Link
information can also be used to build a complete picture of network topology. Once the network
topology is understood, routers can determine optimal routes to network destinations.

Figure 2.6: Destination / Next Hop Routing Table



2.3.2 Routing Algorithms

Routing algorithms can be differentiated based on several key characteristics. First, the
particular goals of the algorithm designer affect the operation of the resulting routing protocol.
Second, various types of routing algorithms exist, and each algorithm has a different impact on
network and router resources. Finally, routing algorithms use a variety of metrics that affect
calculation of optimal routes.

Routing algorithms often have one or more of the following design goals:

 Optimality

 Simplicity and low overhead

 Robustness and stability

 Rapid convergence

 Flexibility

Routing algorithms can be classified by type. Key differentiators include these:

 Static versus dynamic


 Single-path versus multipath
 Flat versus hierarchical
 Host-intelligent versus router-intelligent
 Intra domain versus inter domain
 Link-state versus distance vector

However, in the coming sections we will examine the various routing protocols based on the
static versus dynamic classification, as that is how routing algorithms are most widely classified.

2.3.3 Connected, Static and Dynamic Routes


2.3.3.1 Connected routes

Subnets directly connected to a router's interface are added to the router's routing table.
The interface has to have an IP address configured, and both interface status codes must be in
the up/up state. A router can then route all packets destined for hosts in subnets
directly connected to its active interfaces.

Consider the following example. The router has two active interfaces, Fa0/0 and Fa0/1.
Each interface has been configured with an IP address and is currently in the up-up state, so
the router adds these subnets to its routing table.

Figure 2.7: Connected Routes

As you can see from the output above, the router has two directly connected routes to the
subnets 10.0.0.0/8 and 192.168.0.0/24. The character C in the routing table indicates that a
route is a directly connected route.

NOTE:- You can view only the connected routes in a router's routing table by typing the show
ip route connected command.

2.3.3.2 Static routes

By adding static routes, a router can learn a route to a remote network that is not directly
connected to one of its interfaces. Static routes are configured manually by typing the global
configuration mode command ip route DESTINATION_NETWORK SUBNET_MASK
NEXT_HOP_IP_ADDRESS. This type of configuration is usually used in smaller networks
because of scalability reasons (you have to configure each route on each router).

A simple example will help you understand the concept of static routes.

Figure 2.8: Static Routes



Router A is directly connected to Router B. Router B is directly connected to the subnet
10.0.1.0/24. Since that subnet is not directly connected to Router A, Router A doesn't know
how to route packets destined for it. However, you can configure that route manually
on Router A.

First, consider the router A’s routing table before we add the static route:

Now, we'll use the ip route command to configure Router A to reach the subnet
10.0.1.0/24. The router now has the route to reach the subnet.

The character S in the routing table indicates that a route is a statically configured route.

Another version of the ip route command exists. Instead of specifying the next-hop IP
address, you can specify the exit interface of the local router: ip route DEST_NETWORK
SUBNET_MASK EXIT_INTERFACE. This instructs Router A to send all traffic destined for the
subnet out the specified interface. In our case, the command would be
ip route 10.0.1.0 255.255.255.0 Fa0/0.

2.3.3.3 Dynamic routes

A router can learn dynamic routes if a routing protocol is enabled. A routing protocol is
used by routers to exchange routing information with each other. Every router in the network
can then use that information to build its routing table. A routing protocol can dynamically choose a
different route if a link goes down, so this type of routing is fault-tolerant. Also, unlike with static
routing, there is no need to manually configure every route on every router, which greatly reduces
the administrative overhead. You only need to define which routes will be advertised on the routers
that connect directly to the corresponding subnets – the routing protocols take care of the rest.

The disadvantage of dynamic routing is that it increases memory and CPU usage on a
router, because every router has to process received routing information and calculate its routing
table. To better understand the advantages that dynamic routing protocols bring, consider the
following example:

Figure 2.8: Dynamic Routes

Both routers are running a routing protocol, namely EIGRP. There are no static routes on
Router A, so Router A doesn't know how to reach the subnet 10.0.0.0/24 that is directly connected to
Router B. Router B therefore advertises the subnet to Router A using EIGRP. Now Router A has the
route to reach the subnet. This can be verified by typing the show ip route command:

You can see that Router A has learned the subnet from EIGRP. The letter D in front of the
route indicates that the route has been learned through EIGRP. If the subnet 10.0.0.0/24 fails,
Router B can immediately inform Router A that the subnet is no longer reachable.

2.4 Administrative distance and Metric


2.4.1 Administrative distance

A network can use more than one routing protocol, and routers on the network can learn
about a route from multiple sources. Routers need a way to select the better path when
multiple paths are available. The administrative distance number is used by routers to decide
which route is better (a lower number is better). For example, if the same route is learned from
both RIP and EIGRP, a Cisco router will choose the EIGRP route and store it in the routing table.
This is because EIGRP routes have (by default) an administrative distance of 90, while RIP
routes have a higher administrative distance of 120.

You can display the administrative distance of all routes on your router by typing the show
ip route command:

In the above case, the router has only a single route in its routing table learned from a
dynamic routing protocol – the EIGRP route. The following table lists the default administrative
distance values:

Table 2.1: Administrative Distance Default Values
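As a sketch, the selection between routing sources can be modeled in Python using the standard Cisco default values (the lower administrative distance wins):

```python
# Default administrative distances on Cisco routers (lower is better)
ADMIN_DISTANCE = {
    "connected": 0,
    "static": 1,
    "eigrp": 90,
    "ospf": 110,
    "rip": 120,
}

def best_source(candidate_sources):
    """Pick the routing source whose route gets installed for a prefix
    learned from several sources at once."""
    return min(candidate_sources, key=lambda s: ADMIN_DISTANCE[s])

# The same route learned from both RIP and EIGRP: EIGRP wins (90 < 120)
print(best_source(["rip", "eigrp"]))  # eigrp
```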

2.4.2 Metric

If a router learns two different paths for the same network from the same routing protocol,
it has to decide which route is better and will be placed in the routing table. Metric is the measure
used to decide which route is better (lower number is better). Each routing protocol uses its own
metric. For example, RIP uses hop counts as a metric, while OSPF uses cost.

The following example explains the way RIP calculates its metric and why it chooses one
path over another.

Figure 2.9: Metric Rip

RIP has been configured on all routers. Router 1 has two paths to reach the subnet
10.0.0.0/24. One path goes through Router 2, while the other path goes through Router 3
and then Router 4. Because RIP uses the hop count as its metric, the path through Router 2 will
be used to reach the 10.0.0.0/24 subnet, because the subnet is only one router away on
that path. The other path will have a higher metric of 2, because the subnet is two routers away.
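The hop-count comparison above can be sketched in Python (the router names follow the figure; the path contents are illustrative):

```python
# Candidate paths to 10.0.0.0/24, each as the list of routers traversed
paths = {
    "via R2": ["R2"],           # 1 hop
    "via R3-R4": ["R3", "R4"],  # 2 hops
}

def rip_best_path(paths):
    """RIP's metric is the hop count: the path with the fewest hops wins,
    regardless of link speed."""
    return min(paths, key=lambda name: len(paths[name]))

print(rip_best_path(paths))  # via R2
```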

NOTE: - The example above can be used to illustrate a disadvantage of using RIP as a
routing protocol. Imagine that the first path, through R2, is a 56k modem link, while the other
path (R3-R4) is a high-speed WAN link. Router R1 would still choose the path through R2 as the
best route, because RIP uses only the hop count as its metric. The following table lists the
parameters that various routing protocols use to calculate the metric:

Table 2.2: Routing Protocol Parameters

Routing tables contain information used by switching software to select the best route.
But how, specifically, are routing tables built? What is the specific nature of the information that
they contain? How do routing algorithms determine that one route is preferable to others?

Routing algorithms have used many different metrics to determine the best route.
Sophisticated routing algorithms can base route selection on multiple metrics, combining them
in a single (hybrid) metric. All the following metrics have been used:

 Path length

 Reliability

 Delay

 Bandwidth

 Load

 Communication cost

Path length is the most common routing metric. Some routing protocols allow network
administrators to assign arbitrary costs to each network link. In this case, path length is the sum
of the costs associated with each link traversed. Other routing protocols define hop count, a
metric that specifies the number of passes through internetworking products, such as routers,
that a packet must take en route from a source to a destination.

Reliability, in the context of routing algorithms, refers to the dependability (usually


described in terms of the bit-error rate) of each network link. Some network links might go down

more often than others. After a network fails, certain network links might be repaired more
easily or more quickly than other links. Any reliability factors can be taken into account in the
assignment of the reliability ratings, which are arbitrary numeric values usually assigned to
network links by network administrators.

Routing delay refers to the length of time required to move a packet from source to
destination through the internetwork. Delay depends on many factors, including the bandwidth
of intermediate network links, the port queues at each router along the way, network congestion
on all intermediate network links, and the physical distance to be travelled. Because delay is a
conglomeration of several important variables, it is a common and useful metric.

Bandwidth refers to the available traffic capacity of a link. All other things being equal, a
10-Mbps Ethernet link would be preferable to a 64-kbps leased line. Although bandwidth is a
rating of the maximum attainable throughput on a link, routes through links with greater bandwidth
do not necessarily provide better routes than routes through slower links. For example, if a
faster link is busier, the actual time required to send a packet to the destination could be greater.

Load refers to the degree to which a network resource, such as a router, is busy. Load
can be calculated in a variety of ways, including CPU utilization and packets processed per
second. Monitoring these parameters on a continual basis can be resource-intensive itself.

Communication cost is another important metric, especially because some companies


may not care about performance as much as they care about operating expenditures. Although
line delay may be longer, they will send packets over their own lines rather than through the
public lines that cost money for usage time.

2.5 IP Routing Protocols


Dynamic routes are routes learned via routing protocols. Routing protocols are configured
on routers with the purpose of exchanging routing information. There are many benefits of
using routing protocols in your network, such as:

 Unlike static routing, you don’t need to manually configure every route on each
router in the network. You just need to configure the networks to be advertised on a
router directly connected to them.

 If a link fails and the network topology changes, routers can advertise that some
routes have failed and pick a new route to that network.

2.5.1 Types of routing protocols

There are two types of routing protocols:

 Distance vector

o Routing Information Protocol (RIP)

o Interior Gateway Routing Protocol (IGRP)

 Link state

o Open Shortest Path First (OSPF)

o Intermediate System to Intermediate System (IS-IS)

Cisco has created its own routing protocol – Enhanced Interior Gateway Routing Protocol
(EIGRP). EIGRP is considered to be an advanced distance vector protocol, although some
materials erroneously state that EIGRP is a hybrid routing protocol, a combination of distance
vector and link state.

All of the routing protocols mentioned above are interior gateway protocols (IGPs), which
means that they are used to exchange routing information within one autonomous system.
BGP (Border Gateway Protocol) is an example of an exterior gateway protocol (EGP), which is
used to exchange routing information between autonomous systems on the Internet.

2.5.1.1 Distance vector protocols

As the name implies, distance vector routing protocols use distance to determine the best
path to a remote network. The distance is something like the number of hops (routers) to the
destination network.

Distance vector protocols usually send the complete routing table to each neighbor (a
neighbor is a directly connected router that runs the same routing protocol). They employ some
version of the Bellman-Ford algorithm to calculate the best routes. Compared with link state routing
protocols, distance vector protocols are easier to configure and require little management, but
are susceptible to routing loops and converge more slowly than link state routing protocols.
Distance vector protocols also use more bandwidth because they send the complete routing table,
while link state protocols send specific updates only when topology changes occur.

RIP and EIGRP are examples of distance vector routing protocols.



2.5.1.2 Link state protocols

Link state routing protocols are the second type of routing protocols. They have the same
basic purpose as distance vector protocols – to find the best path to a destination – but use different
methods to do so. Unlike distance vector protocols, link state protocols don't advertise the
entire routing table. Instead, they advertise information about the network topology (directly
connected links, neighboring routers, and so on), so that in the end all routers running a link state protocol
have the same topology database. Link state routing protocols converge much faster than
distance vector routing protocols, support classless routing, send updates using multicast
addresses and use triggered routing updates. They also require more router CPU and memory
usage than distance vector routing protocols and can be harder to configure.

Each router running a link state routing protocol creates three different tables:

 Neighbor table – the table of neighboring routers running the same link state routing
protocol.

 Topology table – the table that stores the topology of the entire network.

 Routing table – the table that stores the best routes.

The Shortest Path First (SPF) algorithm is used to calculate the best routes. OSPF and IS-IS are
examples of link state routing protocols.

2.5.2 Difference between distance vector and link state routing protocols

The following table summarizes the differences:

Table 2.3: Difference-distance vector and link state routing protocols



2.6 Classful and Classless Routing

This section requires an understanding of the different classes of IP addresses and of IP
subnetting, which are explained in detail as part of Unit 3. This section has been placed in this
Unit due to its relevance to the topic.

Classful routing protocols do not send subnet mask information when a route update is
sent out, so all devices in the network must use the same subnet mask (e.g., RIPv1).

Classless routing is performed by protocols that do send subnet mask information in their
routing updates. Classless routing allows VLSM (Variable Length Subnet Masking) (e.g., RIPv2,
EIGRP, and OSPF). The table below clearly differentiates classful and classless routing:

Table 2.4: Difference between Classful and Classless Routing

2.7 RIP (Routing Information Protocol)


RIP (Routing Information Protocol) is one of the oldest distance vector routing protocols.
It is usually used on small networks because it is very simple to configure and maintain, but

lacks some of the advanced features of routing protocols like OSPF or EIGRP. Two versions of the
protocol exist: version 1 and version 2. Both versions use hop count as the metric and have an
administrative distance of 120. RIP version 2 is capable of advertising subnet masks and uses
multicast to send routing updates, while version 1 doesn't advertise subnet masks and uses
broadcast for updates. Version 2 is backwards compatible with version 1.

2.7.1 RIPv1

RIPv1 is a classful routing protocol and does not support VLSM (Variable Length Subnet
Masking). RIPv1 uses local broadcasts to share routing information. These updates are periodic
in nature, occurring, by default, every 30 seconds. To prevent routes from circling around a
loop forever, both versions of RIP solve counting to infinity by placing a hop count limit of 15
hops on routes; any route that reaches the sixteenth hop is considered unreachable and will be
dropped. RIP supports up to six equal-cost paths to a single destination, where equal-cost paths
are paths with the same metric (hop count).

2.7.2 RIPv2

Figure 2.10: RIP

RIPv2 is a classless routing protocol and supports VLSM (Variable Length Subnet
Masking). RIPv2 includes the network mask in its updates to allow classless routing
advertisements. RIPv2 sends the entire routing table every 30 seconds, which can consume a
lot of bandwidth. RIPv2 uses the multicast address 224.0.0.9 to send routing updates, and
supports authentication and triggered updates (updates that are sent when a change in the
network occurs).

RIPv2 is a distance vector routing protocol with routing enhancements built into it, and it
is based on RIPv1; it is therefore sometimes called a hybrid routing protocol. RIPv2 uses
multicasts instead of broadcasts. RIPv2 supports triggered updates: when a change occurs, a
RIPv2 router will immediately propagate its routing information to its connected neighbors.

Router R1 directly connects to the subnet 10.0.0.0/24. Network engineer has configured
RIP on R1 to advertise the route to this subnet. R1 sends routing updates to R2 and R3. The
routing updates list the subnet, subnet mask and metric for this route. Each router, R2 and R3,
receives this update and adds the route to their respective routing tables. Both routers list the
metric of 1 because the network is only one hop away.

NOTE:- Maximum hop count for a RIP route is 15. Any route with a higher hop count is
considered to be unreachable.

2.7.3 EIGRP (Enhanced Interior Gateway Routing Protocol)

EIGRP (Enhanced Interior Gateway Routing Protocol) is an advanced distance vector
routing protocol. This protocol is an evolution of an earlier Cisco protocol called IGRP, which is
now considered obsolete. EIGRP supports classless routing and VLSM, route summarization,
incremental updates, load balancing and many other useful features. It is a Cisco proprietary
protocol, so all routers in a network that is running EIGRP must be Cisco routers. Routers
running EIGRP must become neighbors before exchanging routing information. To dynamically
discover neighbors, EIGRP routers use the multicast address 224.0.0.10. Each EIGRP router
stores routing and topology information in three tables:

 Neighbor table – stores information about EIGRP neighbors

 Topology table – stores routing information learned from neighboring routers

 Routing table – stores the best routes

The administrative distance of EIGRP is 90, which is less than both the administrative distance
of RIP and the administrative distance of OSPF, so EIGRP routes will be preferred over those
routes. EIGRP uses the Reliable Transport Protocol (RTP) for sending messages.

EIGRP calculates its metric using bandwidth, delay, reliability and load. By default,
only bandwidth and delay are used when calculating the metric, while the reliability and load
coefficients are set to zero.
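With the default K values (only bandwidth and delay contributing), the classic EIGRP composite metric reduces to the form sketched below. The formula is the widely documented default; the bandwidth and delay inputs are illustrative:

```python
def eigrp_metric(min_bandwidth_kbps, total_delay_usec):
    """Default EIGRP composite metric (K1 = K3 = 1, K2 = K4 = K5 = 0):
    256 * (10^7 / slowest-link bandwidth in kbps + sum of delays / 10),
    with the delay expressed in tens of microseconds."""
    return 256 * (10**7 // min_bandwidth_kbps + total_delay_usec // 10)

# A FastEthernet link: 100,000 kbps bandwidth, 100 microseconds delay
print(eigrp_metric(100_000, 100))  # 28160
```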

EIGRP uses the concept of autonomous systems. An autonomous system is a set of
EIGRP-enabled routers that should become EIGRP neighbors. Each router inside an autonomous
system must have the same autonomous system number configured, otherwise the routers will not
become neighbors.

EIGRP Neighbors

EIGRP must establish neighbor relationships with neighboring EIGRP routers before
exchanging routing information. To establish a neighbor relationship, routers send hello packets
every couple of seconds. Hello packets are sent to the multicast address 224.0.0.10.

NOTE:- On LAN interfaces hellos are sent every 5 seconds. On WAN interfaces every 60
seconds.

The following fields in a hello packet must be identical in order for routers to become
neighbors:

 ASN (autonomous system number)

 Subnet number

 K values (components of metric)

Routers send hello packets every couple of seconds to ensure that the neighbor relationship
is still active. By default, a router considers a neighbor to be down after the hold-down timer has
expired. The hold-down timer is, by default, three times the hello interval, so on a LAN network
the hold-down timer is 15 seconds.

Feasible and reported distance

Two terms that you will often encounter when working with EIGRP are feasible and reported
distance. Let’s clarify these terms:

Feasible distance (FD) – the metric of the best route to reach a network. That route will be
listed in the routing table.

Reported distance (RD) – the metric advertised by a neighboring router for a specific
route. In other words, it is the metric of the route used by the neighboring router to reach the
network.

To better understand the concept, consider the following example.

Figure 2.11: Feasible and reported distance

EIGRP has been configured on R1 and R2. R2 is directly connected to the subnet 10.0.1.0/
24 and advertises that subnet into EIGRP. Let's say that R2's metric to reach that subnet is
28160. When the subnet is advertised to R1, R2 informs R1 that its own metric to reach
10.0.1.0/24 is 28160. From R1's perspective that metric is the reported distance for the
route. R1 receives the update and adds the metric of the link to the neighbor to the reported
distance. The resulting metric is called the feasible distance and is stored in R1's routing table
(30720 in our case). The feasible and reported distances are displayed in R1's EIGRP topology
table:

Successor and feasible successor

Another two terms that appear often in the EIGRP world are successor and feasible
successor. A successor is the route with the best metric to reach a destination. That route is
stored in the routing table. A feasible successor is a backup path to reach that same destination
that can be used immediately if the successor route fails. These backup routes are stored in the
topology table.

For a route to be chosen as a feasible successor, one condition must be met:

The neighbor’s advertised distance (AD) for the route must be less than the
successor’s feasible distance (FD).

The following example explains the concept of a successor and a feasible successor.

Figure 2.12: Successor and feasible successor

R1 has two paths to reach the subnet 10.0.0.0/24. The path through R2 has the best
metric (20) and is stored in R1's routing table. The other route, through R3, is a feasible
successor route, because the feasibility condition has been met (R3's advertised distance of
15 is less than R1's feasible distance of 20). R1 stores that route in the topology table, and it
can be used immediately if the primary route fails.
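The feasibility condition can be expressed directly in Python; the numbers below are the ones from the example:

```python
def is_feasible_successor(neighbor_reported_distance, successor_feasible_distance):
    """A backup route qualifies as a feasible successor only if the
    neighbor's reported (advertised) distance is strictly less than the
    feasible distance of the current successor route."""
    return neighbor_reported_distance < successor_feasible_distance

# Successor FD = 20; R3 advertises the route with RD = 15
print(is_feasible_successor(15, 20))  # True  - loop-free backup, kept in topology table
print(is_feasible_successor(25, 20))  # False - could form a loop, not a feasible successor
```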

EIGRP topology table

The EIGRP topology table contains all learned routes to a destination. The table holds all routes
received from neighbors, the successors and feasible successors for every route, and the interfaces
on which updates were received. The table also holds all locally connected subnets included in
an EIGRP process.

Best routes (the successors) from the topology table are stored in the routing table. Feasible
successors are only stored in the topology table and can be used immediately if the primary
route fails.

Consider the following network topology.

Figure 2.13: EIGRP Topology

EIGRP is running on all three routers. Routers R2 and R3 both connect to the subnet
10.0.1.0/24 and advertise that subnet to R1. R1 receives both updates and calculates the best
route. The best path goes through R2, so R1 stores that route in the routing table. Router R1
also calculates the metric of the route through R3. Let's say that the advertised distance of that
route is less than the feasible distance of the best route. The feasibility condition is met, and
router R1 stores that route in the topology table as a feasible successor route. The route can be
used immediately if the primary route fails.

2.7.4 OSPF (Open Shortest Path First)

OSPF (Open Shortest Path First) is a link state routing protocol. Because it is an open
standard, it is implemented by a variety of network vendors. OSPF will run on most routers,
which don't necessarily have to be Cisco routers (unlike EIGRP, which can run only on Cisco
routers).

Here are the most important features of OSPF:

 A classless routing protocol

 Supports VLSM, CIDR, manual route summarization, equal cost load balancing

 Incremental updates are supported

 Uses only one parameter as the metric – the interface cost.



 The administrative distance of OSPF routes is, by default, 110.

 Uses multicast addresses 224.0.0.5 and 224.0.0.6 for routing updates.

Routers running OSPF have to establish neighbor relationships before exchanging routes.
Because OSPF is a link state routing protocol, neighbors don’t exchange routing tables. Instead,
they exchange information about network topology. Each OSPF router then runs the SPF algorithm
to calculate the best routes and adds those to the routing table. Because each router knows the
entire topology of a network, the chance for a routing loop to occur is minimal.

Each OSPF router stores routing and topology information in three tables:

 Neighbor table – stores information about OSPF neighbors

 Topology table – stores the topology structure of a network

 Routing table – stores the best routes

2.7.4.1 OSPF neighbors

OSPF routers need to establish a neighbor relationship before exchanging routing updates.
OSPF neighbors are dynamically discovered by sending Hello packets out each OSPF-enabled
interface on a router. Hello packets are sent to the multicast IP address of 224.0.0.5. The
process is explained in the following figure:

Figure 2.14: OSPF hellos

Routers R1 and R2 are directly connected. After OSPF is enabled, both routers send
Hellos to each other to establish a neighbor relationship. You can verify that the neighbor
relationship has indeed been established by typing the show ip ospf neighbor command.
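As a sketch, OSPF could be enabled on the two routers as follows; the process ID (1), router ID of R1, and the network statement are assumptions chosen for this illustration:

```
R1(config)# router ospf 1
R1(config-router)# router-id 1.1.1.1
R1(config-router)# network 10.0.12.0 0.0.0.255 area 0

R2(config)# router ospf 1
R2(config-router)# router-id 2.2.2.2
R2(config-router)# network 10.0.12.0 0.0.0.255 area 0
```

Once Hellos are exchanged, show ip ospf neighbor on R1 should list R2 by its router ID.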

In the example above, you can see that the router-id of R2 is 2.2.2.2. Each OSPF router
is assigned a router ID. A router ID is determined by using one of the following:

 Using the router-id command under the OSPF process.

 Using the highest IP address of the router’s loopback interfaces.

 Using the highest IP address of the router’s physical interfaces.

The following fields in the Hello packets must be the same on both routers in order for
routers to become neighbors:

 subnet

 area id

 hello and dead interval timers

 authentication

 area stub flag

 MTU

By default, OSPF sends hello packets every 10 seconds on an Ethernet network (the Hello
interval). The dead interval is four times the value of the hello interval, so if a router on an Ethernet
network doesn’t receive at least one Hello packet from an OSPF neighbor for 40 seconds, the
router declares that neighbor to be down.
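These timers can be tuned per interface; a minimal sketch follows (the interface name is an assumption), shown with the Ethernet defaults of 10 and 40 seconds:

```
R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip ospf hello-interval 10
R1(config-if)# ip ospf dead-interval 40
```

Remember that both routers on the link must use matching timer values, or they will not become neighbors.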

OSPF neighbor states

Before establishing a neighbor relationship, OSPF routers need to go through several
state changes. These states are explained below.

1. Init state – a router has received a Hello message from the other OSPF router

2. 2-way state – the neighbor has received the Hello message and replied with a Hello
message of its own

3. Exstart state – beginning of the LSDB exchange between both routers. Routers are
starting to exchange link state information.

4. Exchange state – DBD (Database Descriptor) packets are exchanged. DBDs contain
LSAs headers. Routers will use this information to see what LSAs need to be exchanged.

5. Loading state – one neighbor sends LSRs (Link State Requests) for every network it
doesn’t know about. The other neighbor replies with LSUs (Link State Updates), which
contain information about the requested networks. After all the requested information has been
received, the other neighbor goes through the same process

6. Full state – both routers have synchronized databases and are fully adjacent with
each other.

2.7.4.2 OSPF areas

OSPF uses the concept of areas. An area is a logical grouping of contiguous networks
and routers. All routers in the same area have the same topology table, but they don’t know
about routers in the other areas. The main benefits of creating areas are that the size of the
topology table and the routing table on a router is reduced, less time is required to run the SPF
algorithm, and routing updates are also reduced.

Each area in the OSPF network has to connect to the backbone area (area 0). All routers
inside an area must have the same area ID to become OSPF neighbors. A router that has
interfaces in more than one area (area 0 and area 1, for example) is called Area Border Router
(ABR). A router that connects an OSPF network to other routing domains (EIGRP network, for
example) is called Autonomous System Border Router (ASBR).

NOTE:- In OSPF, manual route summarization is possible only on ABRs and ASBRs.
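For example, manual summarization on an ABR is done with the area range command; the process ID, area number, and summary prefix below are assumptions for illustration:

```
R3(config)# router ospf 1
R3(config-router)# area 1 range 10.1.0.0 255.255.0.0
```

This advertises one summary route for area 1’s 10.1.x.x networks into the other areas instead of the individual prefixes.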

To better understand the concept of areas, consider the following example.

Figure 2.15 OSPF Areas



All routers are running OSPF. Routers R1 and R2 are inside the backbone area (area 0).
Router R3 is an ABR, because it has interfaces in two areas, namely area 0 and area 1. Routers
R4 and R5 are inside area 1. Router R6 is an ASBR, because it connects the OSPF network to
another routing domain (an EIGRP domain in this case). If R1’s directly connected subnet
fails, router R1 sends the routing update only to R2 and R3, because all routing updates are
localized inside the area.

NOTE:- The role of an ABR is to advertise address summaries to neighboring areas. The
role of an ASBR is to connect an OSPF routing domain to another external network (e.g. Internet,
EIGRP network…).

2.7.4.3 LSA, LSU and LSR

The LSAs (Link-State Advertisements) are used by OSPF routers to exchange topology
information. Each LSA contains routing and topology information to describe a part of an OSPF
network. When two neighbors decide to exchange routes, they send each other a list of all
LSAs in their respective topology databases. Each router then checks its topology database and
sends a Link State Request (LSR) message requesting all LSAs not found in its topology table.
The other router responds with a Link State Update (LSU) that contains all LSAs requested by the
other neighbor. The concept is explained in the following example:

Figure 2.16: LSA, LSR & LSU

After configuring OSPF on both routers, the routers exchange LSAs to describe their respective
topology databases. Router R1 sends an LSA header for its directly connected network 10.0.1.0/
24. Router R2 checks its topology database and determines that it doesn’t have information
about that network. Router R2 then sends a Link State Request message requesting further
information about that network. Router R1 responds with a Link State Update which contains
information about subnet 10.0.1.0/24 (next hop address, cost…).

2.8 Network Address Translation (NAT)


Network Address Translation (NAT) is the process where a network device, usually a
firewall, assigns a public address to a computer (or group of computers) inside a private network.
The main use of NAT is to limit the number of public IP addresses an organization or company
must use, for both economy and security purposes.

Network address translation (NAT) is a method of remapping one IP address space into
another by modifying network address information in the IP header of packets while they are in
transit across a traffic routing device. The technique was originally used as a shortcut to avoid
the need to readdress every host when a network was moved. It has become a popular and
essential tool in conserving global address space in the face of IPv4 address exhaustion. One
Internet-routable IP address of a NAT gateway can be used for an entire private network.

IP masquerading is a technique that hides an entire IP address space, usually consisting
of private IP addresses, behind a single IP address in another, usually public, address space.
The address that has to be hidden is changed into a single (public) IP address as the “new” source
address of the outgoing IP packet, so it appears to originate not from the hidden host but from
the routing device itself. Because of the popularity of this technique to conserve IPv4 address
space, the term NAT has become virtually synonymous with IP masquerading.

As network address translation modifies the IP address information in packets, it has
serious consequences on the quality of Internet connectivity and requires careful attention to
the details of its implementation. NAT implementations vary widely in their specific behavior in
various addressing cases and their effect on network traffic. The specifics of NAT behavior are
not commonly documented by vendors of equipment containing NAT implementations.

The most common form of network translation involves a large private network using
addresses in a private range (10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, or
192.168.0.0 to 192.168.255.255). The private addressing scheme works well for computers
that only have to access resources inside the network, like workstations needing access to file
servers and printers. Routers inside the private network can route traffic between private
addresses with no trouble. However, to access resources outside the network, like the Internet,
these computers have to have a public address in order for responses to their requests to
return to them. This is where NAT comes into play.

Internet requests that require Network Address Translation (NAT) are quite complex but
happen so rapidly that the end user rarely knows it has occurred. A workstation inside a network
makes a request to a computer on the Internet. Routers within the network recognize that the
request is not for a resource inside the network, so they send the request to the firewall. The
firewall sees the request from the computer with the internal IP. It then makes the same request
to the Internet using its own public address, and returns the response from the Internet resource
to the computer inside the private network. From the perspective of the resource on the Internet,
it is sending information to the address of the firewall. From the perspective of the workstation,
it appears that communication is directly with the site on the Internet. When NAT is used in this
way, all users inside the private network share the same public IP address
when they access the Internet. That means only one public address is needed for hundreds or
even thousands of users.

Most modern firewalls are stateful - that is, they are able to set up the connection between
the internal workstation and the Internet resource. They can keep track of the details of the
connection, like ports, packet order, and the IP addresses involved. This is called keeping track
of the state of the connection. In this way, they are able to keep track of the session composed
of communication between the workstation and the firewall, and the firewall with the Internet.
When the session ends, the firewall discards all of the information about the connection.

There are other uses for Network Address Translation (NAT) beyond simply allowing
workstations with internal IP addresses to access the Internet. In large networks, some servers
may act as Web servers and require access from the Internet. These servers are assigned
public IP addresses on the firewall, allowing the public to access the servers only through that
IP address. However, as an additional layer of security, the firewall acts as the intermediary
between the outside world and the protected internal network. Additional rules can be added,
including which ports can be accessed at that IP address. Using NAT in this way allows network
engineers to more efficiently route internal network traffic to the same resources, and allow
access to more ports, while restricting access at the firewall. It also allows detailed logging of
communications between the network and the outside world.

Additionally, NAT can be used to allow selective access to the outside of the network, too.
Workstations or other computers requiring special access outside the network can be assigned
specific external IPs using NAT, allowing them to communicate with computers and applications
that require a unique public IP address. Again, the firewall acts as the intermediary, and can
control the session in both directions, restricting port access and protocols.

NAT is a very important aspect of firewall security. It conserves the number of public
addresses used within an organization, and it allows for stricter control of access to resources
on both sides of the firewall.

There are three types of address translation:

 Static NAT – translates one private IP address to a public one. The public IP address
is always the same.

 Dynamic NAT – private IP addresses are mapped to the pool of public IP addresses.

 Port Address Translation (PAT) – one public IP address is used for all internal devices,
but a different port is assigned to each private IP address. Also known as NAT
Overload.

An example will help you understand the concept.

Figure 2.17: NAT

Computer A requests a web page from an Internet server. Because Computer A uses
private IP addressing, the source address of the request has to be changed by the router
because private IP addresses are not routable on the Internet. Router R1 receives the request,
changes the source IP address to its public IP address and sends the packet to server S1.
Server S1 receives the packet and replies to router R1. Router R1 receives the packet, changes
the destination IP addresses to the private IP address of Computer A and sends the packet to
Computer A.

2.8.1 Static NAT

With static NAT, routers or firewalls translate one private IP address to a single public IP
address. Each private IP address is mapped to a single public IP address. Static NAT is not
often used because it requires one public IP address for each private IP address.

To configure static NAT, three steps are required:

 Configure private/public IP address mapping by using the ip nat inside source static
PRIVATE_IP PUBLIC_IP command

 Configure the router’s inside interface using the ip nat inside command

 Configure the router’s outside interface using the ip nat outside command

Here is an example.

Figure 2.18: Static NAT

Computer A requests a web resource from S1. Computer A uses its private IP address
when sending the request to router R1. Router R1 receives the request, changes the private IP
address to the public one and sends the request to S1. S1 responds to R1. R1 receives the
response, looks up in its NAT table and changes the destination IP address to the private IP
address of Computer A. In the example above, we need to configure static NAT. To do that, the
following commands are required on R1:
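A sketch of those commands follows, using the addresses given in this example; the interface names Gi0/0 (inside) and Gi0/1 (outside) are assumptions:

```
R1(config)# interface Gi0/0
R1(config-if)# ip nat inside
R1(config-if)# interface Gi0/1
R1(config-if)# ip nat outside
R1(config-if)# exit
R1(config)# ip nat inside source static 10.0.0.2 59.50.50.1
```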

Using the commands above, we have configured a static mapping between Computer
A’s private IP address of 10.0.0.2 and router’s R1 public IP address of 59.50.50.1. To check
NAT, you can use the show ip nat translations command:

2.8.2 Dynamic NAT

With dynamic NAT, you specify two sets of addresses on your Cisco router:

 Inside addresses that will be translated.

 A pool of global addresses.

Unlike with static NAT, where you had to manually define a static mapping between a
private and a public address, with dynamic NAT the mapping of a local address to a global
address happens dynamically. This means that the router dynamically picks an address from
the global address pool that is not currently assigned. It can be any address from the pool of
global addresses. The dynamic entry stays in the NAT translations table as long as the traffic is
exchanged. The entry times out after a period of inactivity and the global IP address can be
used for new translations.

To configure dynamic NAT, the following steps are required:

 Configure the router’s inside interface using the ip nat inside command.

 Configure the router’s outside interface using the ip nat outside command.

 Configure an ACL that has a list of the inside source addresses that will be translated.

 Configure the pool of global IP addresses using the ip nat pool NAME
FIRST_IP_ADDRESS LAST_IP_ADDRESS netmask SUBNET_MASK command.

 Enable dynamic NAT with the ip nat inside source list ACL_NUMBER pool
NAME global configuration command.

Figure 2.19: Dynamic NAT

Computer A requests a web resource from S1. Computer A uses its private IP address
when sending the request to router R1. Router R1 receives the request, changes the private IP
address to one of the available global addresses in the pool and sends the request to S1. S1
responds to R1. R1 receives the response, looks up in its NAT table and changes the destination
IP address to the private IP address of Computer A. In the example above we need to configure
dynamic NAT. To do that, the following commands are required on R1:

 To configure the router’s inside interface:

 To configure the router’s outside interface:

 To configure an ACL that has a list of the inside source addresses that will be
translated:

NOTE:- The access list configured above matches all hosts from the 192.168.0.0/24
subnet.

 To configure the pool of global IP addresses:

The pool configured above consists of 5 addresses: 4.4.4.1, 4.4.4.2, 4.4.4.3, 4.4.4.4, and
4.4.4.5.

 To enable dynamic NAT:



The command above instructs the router to translate all addresses specified in the access
list 1 to the pool of global addresses called MY_POOL.
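Putting the five steps together, the configuration could look like this; the interface names Gi0/0 (inside) and Gi0/1 (outside) and the pool’s netmask are assumptions, while the ACL subnet, the pool name MY_POOL, and the addresses 4.4.4.1–4.4.4.5 come from the example:

```
R1(config)# interface Gi0/0
R1(config-if)# ip nat inside
R1(config-if)# interface Gi0/1
R1(config-if)# ip nat outside
R1(config-if)# exit
R1(config)# access-list 1 permit 192.168.0.0 0.0.0.255
R1(config)# ip nat pool MY_POOL 4.4.4.1 4.4.4.5 netmask 255.255.255.0
R1(config)# ip nat inside source list 1 pool MY_POOL
```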

You can list all NAT translations using the show ip nat translations command:

In the example above, you can see that the private IP address of Computer A (192.168.0.2)
has been translated to the first available global address (4.4.4.1).

NOTE:- You can remove all NAT translations from the table by using the clear ip nat
translation * command.

2.8.3 Port Address Translation (PAT)

With Port Address Translation (PAT), a single public IP address is used for all internal
private IP addresses, but a different port is assigned to each private IP address. This type of
NAT is also known as NAT Overload and is the typical form of NAT used in today’s networks. It
is even supported by most consumer-grade routers.

PAT allows you to support many hosts with only a few public IP addresses. It works by
creating dynamic NAT mapping, in which a global (public) IP address and a unique port number
are selected. The router keeps a NAT table entry for every unique combination of the private IP
address and port, with translation to the global address and a unique port number. We will use
the following example network to explain the benefits of using PAT:

Figure 2.20: Port Address translation (PAT)



As you can see in the picture above, PAT uses unique source port numbers on the inside
global (public) IP address to distinguish between translations. For example, if the host with the
IP address of 10.0.0.101 wants to access the server S1 on the Internet, the host’s private IP
address will be translated by R1 to 155.4.12.1:1056 and the request will be sent to S1. S1 will
respond to 155.4.12.1:1056. R1 will receive that response, look up in its NAT translation table,
and forward the request to the host.

To configure PAT, the following commands are required:

 Configure the router’s inside interface using the ip nat inside command.

 Configure the router’s outside interface using the ip nat outside command.

 Configure an access list that includes a list of the inside source addresses that
should be translated.

 Enable PAT with the ip nat inside source list ACL_NUMBER interface TYPE
overload global configuration command.

Here is how we would configure PAT for the network picture above.

First, we will define the outside and inside interfaces on R1:

R1(config)#int Gi0/0

R1(config-if)#ip nat inside

R1(config-if)#int Gi0/1

R1(config-if)#ip nat outside

Next, we will define an access list that will include all private IP addresses we would like
to translate:

R1(config-if)#access-list 1 permit 10.0.0.0 0.0.0.255

The access list defined above includes all IP addresses from the 10.0.0.0 – 10.0.0.255
range.

Now we need to enable NAT and refer to the ACL created in the previous step and to the
interface whose IP address will be used for translations:

R1(config)#ip nat inside source list 1 interface Gi0/1 overload

To verify the NAT translations, we can use the show ip nat translations command after
hosts request a web resource from S1:

R1#show ip nat translations

Pro Inside global Inside local Outside local Outside global

tcp 155.4.12.1:1024 10.0.0.100:1025 155.4.12.5:80 155.4.12.5:80

tcp 155.4.12.1:1025 10.0.0.101:1025 155.4.12.5:80 155.4.12.5:80

tcp 155.4.12.1:1026 10.0.0.102:1025 155.4.12.5:80 155.4.12.5:80

Notice that the same IP address (155.4.12.1) has been used to translate three private IP
addresses (10.0.0.100, 10.0.0.101, and 10.0.0.102). The port number of the public IP address
is unique for each connection. So when S1 responds to 155.4.12.1:1026, R1 looks into its NAT
translations table and forwards the response to 10.0.0.102:1025.

2.9 Classes of IP addresses


TCP/IP defines five classes of IP addresses: class A, B, C, D, and E. Each class has a
range of valid IP addresses. The value of the first octet determines the class. IP addresses from
the first three classes (A, B and C) can be used for host addresses. The other two classes are
used for other purposes (class D for multicast and class E for experimental purposes).

The system of IP address classes was developed for the purpose of Internet IP address
assignment. The classes were created based on network size. For example, Class A was created
for the small number of networks with a very large number of hosts, while Class C was created
for the numerous networks with a small number of hosts.

Classes of IP addresses are:



Figure 2.17: Classes of IP Addresses

For the IP addresses from Class A, the first 8 bits (the first decimal number) represent the
network part, while the remaining 24 bits represent the host part. For Class B, the first 16 bits
(the first two numbers) represent the network part, while the remaining 16 bits represent the
host part. For Class C, the first 24 bits represent the network part, while the remaining 8 bits
represent the host part.

Consider the following IP addresses:

 10.50.120.7 – because this is a Class A address, the first number (10) represents
the network part, while the remainder of the address represents the host part
(50.120.7). This means that, in order for devices to be on the same network, the
first number of their IP addresses has to be the same for both devices. In this case,
a device with the IP address of 10.47.8.4 is on the same network as the device with
the IP address listed above. The device with the IP address 11.5.4.3 is not on the
same network, because the first number of its IP address is different.

 172.16.55.13 – because this is a Class B address, the first two numbers (172.16)
represent the network part, while the remainder of the address represents the host
part (55.13). The device with the IP address of 172.16.254.3 is on the same network,
while a device with the IP address of 172.55.54.74 isn’t.

NOTE:- The system of network address ranges described here is generally bypassed
today by the use of Classless Inter-Domain Routing (CIDR) addressing.

Special IP address ranges that are used for special purposes are:

 0.0.0.0/8 – addresses used to communicate with the current network

 127.0.0.0/8 – loopback addresses



 169.254.0.0/16 – link-local addresses (APIPA)

Table 2.5: IP address classes

Note: Class A addresses 127.0.0.0 to 127.255.255.255 cannot be used and are reserved
for loopback and diagnostic functions.

Private IP Addresses

Table 2.6: Private IP addresses

2.10 Classful Addressing


The 32 bit IP address is divided into five sub-classes. These are:

 Class A

 Class B

 Class C

 Class D

 Class E

Each of these classes has a valid range of IP addresses. Classes D and E are reserved
for multicast and experimental purposes respectively. The order of bits in the first octet determines
the class of an IP address.

IPv4 address is divided into two parts:

 Network ID

 Host ID

The class of an IP address determines the bits used for network ID and host ID, and
the total number of networks and hosts possible in that particular class. Each ISP or network
administrator assigns an IP address to each device that is connected to its network.

Figure 2.18: Classes of IP Addresses

Note: IP addresses are globally managed by the Internet Assigned Numbers Authority (IANA)
and regional Internet registries (RIRs).

Note: When finding the total number of host IP addresses, 2 IP addresses are not counted
and are therefore subtracted from the total count, because the first IP address of any network
is the network number and the last IP address is reserved for broadcast.

2.10.1 Class A

IP addresses belonging to class A are assigned to networks that contain a large number
of hosts.

 The network ID is 8 bits long.

 The host ID is 24 bits long.



The higher-order bit of the first octet in class A is always set to 0. The remaining 7 bits in the
first octet are used to determine the network ID. The 24 bits of host ID are used to determine the
host in any network. The default sub-net mask for class A is 255.x.x.x. Therefore, class A has a
total of:

 2^7 = 128 network IDs

 2^24 – 2 = 16,777,214 host IDs

IP addresses belonging to class A ranges from 1.x.x.x – 126.x.x.x

Figure 2.19: Class A IP Addresses

2.10.2 Class B

IP addresses belonging to class B are assigned to networks that range from medium-
sized to large-sized.

 The network ID is 16 bits long.

 The host ID is 16 bits long.

The higher order bits of the first octet of IP addresses of class B are always set to 10. The
remaining 14 bits are used to determine the network ID. The 16 bits of host ID are used to determine
the host in any network. The default sub-net mask for class B is 255.255.x.x. Class B has a total
of:

 2^14 = 16384 network address

 2^16 – 2 = 65534 host address

IP addresses belonging to class B ranges from 128.0.x.x – 191.255.x.x.



Figure 2.20: Class B IP Addresses

2.10.3 Class C

IP addresses belonging to class C are assigned to small-sized networks.

 The network ID is 24 bits long.


 The host ID is 8 bits long.

The higher order bits of the first octet of IP addresses of class C are always set to 110.
The remaining 21 bits are used to determine the network ID. The 8 bits of host ID are used to
determine the host in any network. The default sub-net mask for class C is 255.255.255.x.
Class C has a total of:

 2^21 = 2097152 network address


 2^8 – 2 = 254 host address

IP addresses belonging to class C ranges from 192.0.0.x – 223.255.255.x.

Figure 2.21: Class C IP Addresses

2.10.4 Class D

IP addresses belonging to class D are reserved for multicasting. The higher-order bits of
the first octet of IP addresses belonging to class D are always set to 1110. The remaining bits
are for the address that interested hosts recognize.

Class D does not possess any sub-net mask. IP addresses belonging to class D range
from 224.0.0.0 – 239.255.255.255.

Figure 2.22: Class D IP Addresses

2.10.5 Class E

IP addresses belonging to class E are reserved for experimental and research purposes.
IP addresses of class E range from 240.0.0.0 – 255.255.255.254. This class doesn’t have any
sub-net mask. The higher-order bits of the first octet of class E are always set to 1111.

Figure 2.23: Class E IP Addresses

2.10.6 Range of special IP addresses

169.254.0.0 – 169.254.255.255 : link-local addresses (APIPA)

127.0.0.0 – 127.255.255.255 : loopback addresses

0.0.0.0 – 0.255.255.255 : used to communicate within the current network

2.10.7 Rules for assigning Host ID

Host IDs are used to identify a host within a network. Host IDs are assigned based on
the following rules:

 Within any network, the host ID must be unique to that network.



 Host ID in which all bits are set to 0 cannot be assigned because this host ID is used
to represent the network ID of the IP address.

 Host ID in which all bits are set to 1 cannot be assigned because this host ID is
reserved as a broadcast address to send packets to all the hosts present on that
particular network.

2.10.8 Rules for assigning Network ID

Hosts that are located on the same physical network are identified by the network ID, as
all hosts on the same physical network are assigned the same network ID. The network ID is
assigned based on the following rules:

 The network ID cannot start with 127 because 127 belongs to class A address and
is reserved for internal loop-back functions.

 All bits of network ID set to 1 are reserved for use as an IP broadcast address and
therefore, cannot be used.

 All bits of network ID set to 0 are used to denote a specific host on the local network
and are not routed and therefore, aren’t used.

Figure 2.24: Summary of Classful Addressing

2.11 APIPA (Automatic Private IP Addressing)


APIPA (Automatic Private IP Addressing) is the Windows function that provides DHCP
autoconfiguration addressing. APIPA assigns a class B IP address from 169.254.0.0 to
169.254.255.255 to the client when a DHCP server is either permanently or temporarily
unavailable.


Automatic Private IP Addressing (APIPA) is a DHCP fail-safe that protects a computer
system from failure by invoking a standby mechanism for local Internet Protocol version 4
(IPv4) networks supported by Microsoft Windows. With APIPA, DHCP clients can obtain IP
addresses even when DHCP servers are not functional. APIPA exists in all modern versions of
Windows, including Windows 10.

2.11.1 How APIPA Works?

Networks that are set up for dynamic addressing rely on a DHCP server to manage the
pool of available local IP addresses. When a Windows client device attempts to join the local
network, it contacts the DHCP server to request its IP address. If the DHCP server stops
functioning, a network glitch interferes with the request, or some issue occurs on the Windows
device, this process can fail.

When the DHCP process fails, Windows automatically assigns an IP address from the
private range, which is 169.254.0.1 to 169.254.255.254. Using Address Resolution Protocol
(ARP), clients verify that the chosen APIPA address is unique on the network before they use it.
Clients then check back with the DHCP server at periodic intervals—usually every five minutes—
and update their addresses automatically when the DHCP server is able to service requests.

When you start a computer running Windows Vista, for example, it waits for only six
seconds for a DHCP server before using an IP from the APIPA range. Earlier versions of Windows
look for a DHCP server for as long as three minutes.

All APIPA devices use the default network mask 255.255.0.0, and all reside on the same
subnet.

APIPA is enabled by default in Windows whenever the PC network interface is configured
for DHCP. This option is called autoconfiguration in Windows utilities such as ipconfig. A computer
administrator can disable the feature by editing the Windows Registry and setting the
corresponding registry key value to 0.

Network administrators and experienced computer users recognize that failures in the
DHCP process indicate network troubleshooting is needed to identify and resolve the issues
that are preventing DHCP from working properly.

2.11.2 Limitations of APIPA

APIPA addresses do not fall into any of the private IP address ranges defined by the
Internet Protocol standard and are restricted for use on local networks only. As with private IP
addresses, ping tests and other connection requests from the Internet and other outside
networks cannot be made to APIPA devices directly.

APIPA-configured devices can communicate with peer devices on their local network but
cannot communicate outside of it. While APIPA provides Windows clients a usable IP address,
it does not provide the client with nameserver (DNS or WINS) and network gateway addresses
as DHCP does.

Local networks should not attempt to manually assign addresses in the APIPA range
because IP address conflicts will result. To maintain the benefit APIPA has of indicating DHCP
failures, administrators should avoid using those addresses for any other purpose and instead
limit their networks to use the standard IP address ranges.
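Troubleshooting scripts often just test whether an interface address landed in the APIPA range to detect a failed DHCP lease. A minimal sketch using Python's standard ipaddress module (the helper name is ours):

```python
import ipaddress

# APIPA addresses all come from the link-local block 169.254.0.0/16.
APIPA_RANGE = ipaddress.ip_network("169.254.0.0/16")

def is_apipa(addr: str) -> bool:
    """Return True if addr is a self-assigned APIPA address,
    which usually means the DHCP process failed."""
    return ipaddress.ip_address(addr) in APIPA_RANGE

print(is_apipa("169.254.10.5"))   # True: self-assigned, DHCP likely failed
print(is_apipa("192.168.1.10"))   # False: an ordinary private address
```

The same check is also available as the is_link_local property on ipaddress address objects.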

Summary

In this chapter, you learnt:

 IP Routing process, components of routing and routing algorithms.

 Difference between distance vector and link state routing protocols.

 Classful Routing protocols do not send subnet mask information when a route update
is sent out. All devices in the network must use the same subnet mask.

 Classless Routing is performed by protocols that send subnet mask information in the
routing updates.

 Network Address Translation (NAT) is the process where a network device, usually
a firewall, assigns a public address to a computer (or group of computers) inside a
private network.

 The main use of NAT is to limit the number of public IP addresses an organization
or company must use, for both economy and security purposes.

 TCP/IP defines five classes of IP addresses: class A, B, C, D, and E. Each class
has a range of valid IP addresses.

Review Questions
 Explain IP Routing and its algorithm.

 What is Classful and Classless Routing?

 Explain Network Address Translation (NAT).

 What is OSPF?

 Explain EIGRP.

References

Books
 Internetworking Technologies Handbook - by Cisco Systems Inc.

 Packet Guide to Core Network Protocols by Bruce Hartpence

 TCP/IP Tutorial and Technical Overview (IBM Redbooks)

 CCNA Book 6th edition

Online Sources
 http://what-when-how.com/data-communications-and-networking/network-models-data-communications-and-networking/

 https://en.wikipedia.org/

 http://www.iana.org/go/rfc793

 http://www.iana.org/go/rfc7323

 https://tools.ietf.org/html/rfc2018

UNIT - 3
SUBNETTING IP NETWORK
Learning Objectives
 In this chapter, some fundamental concepts and terms used in subnetting will be
discussed with the aim to make the students understand the following

o The need for Subnetting

o Relevance of Subnetting.

 A learner will be able to understand the various subnetting models of networking and the
basic difference between them by practicing the examples discussed in this study material.

Structure
3.1 Introduction

3.2 Understand Subnetting

3.3 Classless inter-Domain Routing (CIDR)

3.4 Wildcard Masks

3.5 WAN Technologies

3.1 Introduction
There comes a time when a network becomes too big and performance begins to suffer
as a result of too much traffic. When that happens, one of the ways that you can solve the
problem is by breaking the network into smaller pieces. There are several techniques for splitting
a network, but one of the most effective techniques is called subnetting. In this unit, we explain
what subnetting is, and how it works.

3.1.1 What is a subnet?

Subnetting is basically just a way of splitting a TCP/IP network into smaller, more
manageable pieces. The basic idea is that if you have an excessive amount of traffic flowing
across your network, then that traffic can cause your network to run slowly. When you subnet
your network, you are splitting the network into separate, but interconnected, networks. That
way, most of the network traffic will be isolated to the subnet in which it originated. Of course
you can still communicate across subnets, but the only time that traffic will cross subnet
boundaries is when it is specifically destined for a host residing in an alternate subnet.

3.1.2 Relevance of subnetting

The main purpose of subnetting is to help relieve network congestion. Congestion used
to be a bigger problem than it is today because it was more common for networks to use hubs
than switches. When nodes on a network are connected through a hub, the entire network acts
as a single collision domain. What this means is that if one PC sends a packet to another PC,
every PC on the entire network sees the packet. Each machine looks at the packet header, but
ignores the packet if it isn’t the intended recipient.

The problem with this type of network is that if any two machines on the network happen
to send packets simultaneously, then the packets collide and are destroyed in the collision. The
two machines then wait a random amount of time and resend the packets. The point is that an
occasional collision is no big deal, but excessive collisions can slow a network way down.

Switches solve the excessive collision problem by directing packets directly from the
source machine to the destination machine. Using this technique combined with caching
practically eliminates collisions and allows a network to perform much better than it ever could
if it were using a hub. So let’s go back to the original question. Are subnets still relevant for
switched networks?

The answer is that it really just depends on how the network is laid out and how it is
performing. Keep in mind that a switch only helps performance when a packet is destined for a
specific PC. Broadcast traffic is still sent to every machine on the network. If you’re running a
switched network, then subnetting will help you if you have a lot of broadcast traffic. Subnetting
is also important if you have branch offices that are connected by a slow WAN link.

A subnet mask is used to divide an IP address into two parts. One part identifies the host
(computer), the other part identifies the network to which it belongs. To better understand how
IP addresses and subnet masks work, look at an IP (Internet Protocol) address and see how it
is organized.


3.2 Understand Subnetting


Subnetting allows you to create multiple logical networks that exist within a single Class
A, B, or C network. If you do not subnet, you are only able to use one network from your Class
A, B, or C network, which is unrealistic.

Each data link on a network must have a unique network ID, with every node on that link
being a member of the same network. If you break a major network (Class A, B, or C) into
smaller subnetworks, it allows you to create a network of interconnecting subnetworks. Each
data link on this network would then have a unique network/subnetwork ID. Any device, or
gateway, that connects n networks/subnetworks has n distinct IP addresses, one for each network
/ subnetwork that it interconnects.

In order to subnet a network, extend the natural mask with some of the bits from the host
ID portion of the address in order to create a subnetwork ID. For example, given a Class C
network of 204.17.5.0 which has a natural mask of 255.255.255.0, you can create subnets in
this manner:

204.17.5.0 - 11001100.00010001.00000101.00000000

255.255.255.224 - 11111111.11111111.11111111.11100000

—————————————|sub|——

By extending the mask to be 255.255.255.224, you have taken three bits (indicated by
“sub”) from the original host portion of the address and used them to make subnets. With these
three bits, it is possible to create eight subnets. With the remaining five host ID bits, each
subnet can have up to 32 host addresses, 30 of which can actually be assigned to a device
since host ids of all zeros or all ones are not allowed (it is very important to remember this). So,
with this in mind, these subnets have been created.

204.17.5.0 255.255.255.224 host address range 1 to 30


204.17.5.32 255.255.255.224 host address range 33 to 62
204.17.5.64 255.255.255.224 host address range 65 to 94

204.17.5.96 255.255.255.224 host address range 97 to 126


204.17.5.128 255.255.255.224 host address range 129 to 158
204.17.5.160 255.255.255.224 host address range 161 to 190
204.17.5.192 255.255.255.224 host address range 193 to 222
204.17.5.224 255.255.255.224 host address range 225 to 254

Note: There are two ways to denote these masks. First, since you use three bits
more than the “natural” Class C mask, you can denote these addresses as having a 3-bit
subnet mask. Or, secondly, the mask of 255.255.255.224 can also be denoted as /27 as
there are 27 bits that are set in the mask. This second method is used with CIDR. With
this method, one of these networks can be described with the notation prefix/length. For
example, 204.17.5.32/27 denotes the network 204.17.5.32 255.255.255.224. When
appropriate, the prefix/length notation is used to denote the mask throughout the rest of
this document.
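The prefix/length notation maps directly onto Python's standard ipaddress module, which can reproduce the eight /27 subnets listed above:

```python
import ipaddress

# Enumerate the /27 subnets of the Class C network 204.17.5.0/24.
network = ipaddress.ip_network("204.17.5.0/24")
subnets = list(network.subnets(new_prefix=27))   # eight /27 blocks

for subnet in subnets:
    # num_addresses counts every address in the block; subtracting 2
    # drops the all-zeros (network) and all-ones (broadcast) addresses.
    print(subnet, "mask", subnet.netmask,
          "usable hosts:", subnet.num_addresses - 2)
```

Each line of output shows one subnet in prefix/length form alongside its dotted mask, e.g. 204.17.5.32/27 with mask 255.255.255.224 and 30 usable hosts.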

The network subnetting scheme in this section allows for eight subnets, and the network
might appear as:

Figure 3.1: Subnetting

Notice that each of the routers in Figure 3.1 is attached to four subnetworks; one subnetwork
is common to both routers. Also, each router has an IP address for each subnetwork to which it
is attached. Each subnetwork could potentially support up to 30 host addresses.

This brings up an interesting point. The more host bits you use for a subnet mask, the
more subnets you have available. However, the more subnets available, the fewer host addresses
available per subnet. For example, a Class C network of 204.17.5.0 and a mask of
255.255.255.224 (/27) allows you to have eight subnets, each with 32 host addresses (30 of
which could be assigned to devices). If you use a mask of 255.255.255.240 (/28), the breakdown is:

204.17.5.0 - 11001100.00010001.00000101.00000000

255.255.255.240 - 11111111.11111111.11111111.11110000

——————————-——|sub |—

Since you now have four bits to make subnets with, you only have four bits left for host
addresses. So in this case you can have up to 16 subnets, each of which can have up to 16 host
addresses (14 of which can be assigned to devices).

Take a look at how a Class B network might be subnetted. If you have network 172.16.0.0
,then you know that its natural mask is 255.255.0.0 or 172.16.0.0/16. Extending the mask to
anything beyond 255.255.0.0 means you are subnetting. You can quickly see that you have the
ability to create a lot more subnets than with the Class C network. If you use a mask of
255.255.248.0 (/21), how many subnets and hosts per subnet does this allow for?

172.16.0.0 - 10101100.00010000.00000000.00000000

255.255.248.0 - 11111111.11111111.11111000.00000000

————————| sub |—————

You use five bits from the original host bits for subnets. This allows you to have 32 subnets
(2^5). After using the five bits for subnetting, you are left with 11 bits for host addresses. This
allows each subnet to have 2048 host addresses (2^11), 2046 of which could be assigned to
devices.
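The same /21 arithmetic can be checked with Python's standard ipaddress module:

```python
import ipaddress

# Extending the natural /16 mask of 172.16.0.0 by five bits gives a /21.
class_b = ipaddress.ip_network("172.16.0.0/16")
subnets = list(class_b.subnets(new_prefix=21))

print(len(subnets))                  # 32 subnets (2^5)
print(subnets[0].num_addresses)      # 2048 addresses per subnet (2^11)
print(subnets[0].num_addresses - 2)  # 2046 assignable to devices
```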

Note: In the past, there were limitations to the use of a subnet 0 (all subnet bits are set to
zero) and all ones subnet (all subnet bits set to one). Some devices would not allow the use of
these subnets. Cisco Systems devices allow the use of these subnets when the ip subnet zero
command is configured.

Example-1

Now that you have an understanding of subnetting, put this knowledge to use. In this
example, you are given two address / mask combinations, written with the prefix/length notation,
which have been assigned to two devices. Your task is to determine if these devices are on the
same subnet or different subnets. You can use the address and mask of each device in order to
determine to which subnet each address belongs.

DeviceA: 172.16.17.30/20

DeviceB: 172.16.28.15/20

Determine the Subnet for DeviceA:

172.16.17.30 - 10101100.00010000.00010001.00011110

255.255.240.0 - 11111111.11111111.11110000.00000000

————————| sub|——————

subnet = 10101100.00010000.00010000.00000000 = 172.16.16.0

Looking at the address bits that have a corresponding mask bit set to one, and setting all
the other address bits to zero (this is equivalent to performing a logical “AND” between the
mask and address), shows you to which subnet this address belongs. In this case, DeviceA
belongs to subnet 172.16.16.0.

Determine the Subnet for DeviceB:

172.16.28.15 - 10101100.00010000.00011100.00001111

255.255.240.0 - 11111111.11111111.11110000.00000000

————————| sub|——————

subnet = 10101100.00010000.00010000.00000000 = 172.16.16.0

From these determinations, DeviceA and DeviceB have addresses that are part of the
same subnet.
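The logical AND between mask and address used in both determinations is exactly what ipaddress performs when it derives the network of an interface; a short sketch (the helper name subnet_of is ours):

```python
import ipaddress

def subnet_of(addr: str, prefix: int) -> ipaddress.IPv4Network:
    """ANDing the mask with the address yields the subnet the host is on."""
    return ipaddress.ip_interface(f"{addr}/{prefix}").network

device_a = subnet_of("172.16.17.30", 20)
device_b = subnet_of("172.16.28.15", 20)

print(device_a)               # 172.16.16.0/20
print(device_b)               # 172.16.16.0/20
print(device_a == device_b)   # True: same subnet
```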

Example-2

Given the Class C network of 204.15.5.0/24, subnet the network in order to create the
network in Figure 3.2 with the host requirements shown.

Figure 3.2: Subnetting

Looking at the network shown in Figure 3.2, you can see that you are required to create
five subnets. The largest subnet must support 28 host addresses. Is this possible with a Class
C network? And if so, how?

You can start by looking at the subnet requirement. In order to create the five needed
subnets you would need to use three bits from the Class C host bits. Two bits would only allow
you four subnets (2^2). Since you need three subnet bits, that leaves you with five bits for the
host portion of the address. How many hosts does this support? 2^5 = 32 (30 usable). This
meets the requirement. Therefore you have determined that it is possible to create this network
with a Class C network. An example of how you might assign the subnetworks is:

netA: 204.15.5.0/27 host address range 1 to 30

netB: 204.15.5.32/27 host address range 33 to 62

netC: 204.15.5.64/27 host address range 65 to 94

netD: 204.15.5.96/27 host address range 97 to 126

netE: 204.15.5.128/27 host address range 129 to 158



Example-3 (VLSM) (Variable Length Subnet Mask)

In all of the previous examples of subnetting, notice that the same subnet mask was
applied for all the subnets. This means that each subnet has the same number of available host
addresses. You may need this in some cases, but, in most cases, having the same subnet mask
for all subnets ends up wasting address space. In example-2, a Class C network was split
into eight equal-size subnets; however, each subnet did not utilize all available host addresses,
which results in wasted address space. Figure 3.3 illustrates this wasted address space. It also
illustrates that of the subnets that are being used, NetA, NetC, and NetD have a lot of unused
host address space.

Figure 3.3: VLSM

It is possible that this was a deliberate design accounting for future growth, but in many
cases this is just wasted address space due to the fact that the same subnet mask is used for
all the subnets.

Variable Length Subnet Masks (VLSM) allows you to use different masks for each subnet,
thereby using address space efficiently. Given the same network and requirements as in
example-2, develop a subnetting scheme with the use of VLSM, given:

netA: must support 14 hosts


netB: must support 28 hosts
netC: must support 2 hosts
netD: must support 7 hosts
netE: must support 28 hosts

Determine what mask allows the required number of hosts.


netA: requires a /28 (255.255.255.240) mask to support 14 hosts
netB: requires a /27 (255.255.255.224) mask to support 28 hosts
netC: requires a /30 (255.255.255.252) mask to support 2 hosts
netD*: requires a /28 (255.255.255.240) mask to support 7 hosts
netE: requires a /27 (255.255.255.224) mask to support 28 hosts

* a /29 (255.255.255.248) would only allow 6 usable host addresses; therefore netD
requires a /28 mask.

The easiest way to assign the subnets is to assign the largest first. For example, you can
assign in this manner:

netB: 204.15.5.0/27 host address range 1 to 30


netE: 204.15.5.32/27 host address range 33 to 62
netA: 204.15.5.64/28 host address range 65 to 78
netD: 204.15.5.80/28 host address range 81 to 94
netC: 204.15.5.96/30 host address range 97 to 98

This can be graphically represented as shown in Figure 3.4:



Figure 3.4: VLSM
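The largest-first assignment above can be sketched as a small allocator. Prefix lengths come from the host requirements, and sorting by prefix length keeps every block aligned on its own boundary (this assumes Python 3.5+ for the tuple form of ip_network):

```python
import ipaddress

# Host requirements translated to prefix lengths (28 hosts -> /27, etc.).
requests = {"netB": 27, "netE": 27, "netA": 28, "netD": 28, "netC": 30}

cursor = int(ipaddress.IPv4Address("204.15.5.0"))
plan = {}
# Sorting by prefix length ascending assigns the largest blocks first.
for name, prefix in sorted(requests.items(), key=lambda kv: kv[1]):
    net = ipaddress.ip_network((cursor, prefix))
    plan[name] = net
    cursor += net.num_addresses   # next free address after this block

for name, net in plan.items():
    print(name, net)
```

This reproduces the plan above, from netB 204.15.5.0/27 down to netC 204.15.5.96/30.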

3.2.1 Subnetting a Class C network address

If we use the default subnet mask with a Class C network address, then we already know
that three bytes are used to define the network and only one byte is used to define the hosts on
each network.

The default Class C mask is: 255.255.255.0. To make smaller networks, called
subnetworks, we will borrow bits from the host portion of the mask. Since the Class C mask
only uses the last octet for host addressing, we only have 8 bits at our disposal. Therefore, only
the following masks can be used with Class C networks (Table 3.1).

Subnet zero: Take note that in the table below I do not assume subnet zero. Cisco does
teach a subnet zero assumption but they do not test that way. I have chosen to follow the exam.

Table 3.1: Class C Subnet Masks



You can see in Table 3.1 that the bits that are turned on (1s) are used for subnetting, while
the bits that are turned off (0s) are used for addressing of hosts. You can use some easy math
to determine the number of subnets and hosts per subnet for each different mask.

To determine the number of subnets, use the formula 2^x-2, where the x exponent is the
number of subnet bits in the mask.

To determine the number of hosts, use the formula 2^x-2, where the x exponent is the
number of host bits in the mask.

To determine the mask you need for your network, you must first determine your business
requirements. Count the number of networks and the number of hosts per network that you
need. Then determine the mask by using the equations shown above—and don’t forget to
factor for growth.

For example, if you have eight networks and each requires 10 hosts, you would use the
Class C mask of 255.255.255.240 because 240 in binary is 11110000, which means you have
four subnet bits and four host bits. Using our math, we’d get the following:

2^4-2=14 subnets

2^4-2=14 hosts

Many people find it easy to memorize the Class C information because Class C networks
have few bits to manipulate. However, there is an easier way to subnet.

Instead of memorizing the entire table (Table 3.1), it’s possible to glance at a host address
and quickly determine the necessary information if you’ve memorized key parts of the table.
First, you need to know your binary-to-decimal conversion. Memorize the number of bits used
with each mask that are shown in Table 3.1. Second, you need to remember the following:

256-192=64

256-224=32

256-240=16

256-248=8

256-252=4

Once you have the two steps memorized, you can begin subnetting.

Example-1

We will use the Class C mask of 255.255.255.192. Ask five simple questions to gather all
the facts:

 How many subnet bits are used in this mask?

 How many host bits are available per subnet?

 What are the subnet addresses?

 What is the broadcast address of each subnet?

 What is the valid host range of each subnet?

You already know how to answer questions one and two. To answer question three, use
the formula 256-subnetmask to get the first subnet and your variable. Keep adding this number
to itself until you get to the subnet mask value to determine the valid subnets. Once you verify
all of the subnets, you can determine the broadcast address by looking at the next subnet’s
value. The broadcast address is the number just before the next subnet number. Once you
have the subnet number and broadcast address, the valid hosts are the numbers in between.
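The procedure for answering questions three through five can be condensed into a hypothetical helper (quick_subnet is our name, not a standard function); it applies only when the fourth octet is the one being subnetted:

```python
def quick_subnet(last_octet: int, mask_octet: int):
    """Apply the 256-minus-the-mask shortcut to the interesting octet."""
    block = 256 - mask_octet                 # first subnet and block size
    subnet = (last_octet // block) * block   # subnet this host falls in
    broadcast = subnet + block - 1           # number before the next subnet
    hosts = (subnet + 1, broadcast - 1)      # valid hosts are in between
    return subnet, broadcast, hosts

# 192.168.10.17 with mask 255.255.255.248:
print(quick_subnet(17, 248))   # (16, 23, (17, 22))
```

The Class B examples later in this unit subnet the third octet as well, so there the shortcut must be applied octet by octet.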

Here are the answers using 255.255.255.192:

 How many subnet bits are used in this mask? Answer: 2 bits; 2^2-2=2 subnets

 How many host bits are available per subnet? Answer: 6 bits; 2^6-2=62 hosts per subnet

 What are the subnet addresses? Answer: 256-192=64 (the first subnet); 64+64=128
(the second subnet); 64+128=192. However, although 192 is the subnet mask value,
it's not a valid subnet. The valid subnets are 64 and 128.

 What is the broadcast address of each subnet?Answer: 64 is the first subnet and
128 is the second subnet. The broadcast address is always the number before the
next subnet. The broadcast address of the 64 subnet is 127. The broadcast address
of the 128 subnet is 191.

 What is the valid host range of each subnet? Answer: The valid hosts are the numbers
between the subnet number and the broadcast address. For the 64 subnet, the valid host range
is 65-126. For the 128 subnet, the valid host range is 129-190.

Example-2
Using the Class C mask of 255.255.255.224. Here are the answers:

 How many subnet bits are used in this mask?Answer: 3 bits or 2^3-2=6 subnets

 How many host bits are available per subnet?Answer: 5 bits or 2^5-2=30 hosts per
subnet

 What are the subnet addresses?Answer: 256-224 =32, 64, 96, 128, 160 and 192
(Six subnets found by continuing to add 32 to itself.)

 What is the broadcast address of each subnet? Answer: The broadcast address for
the 32 subnet is 63. The broadcast address for the 64 subnet is 95. The broadcast
address for the 96 subnet is 127. The broadcast address for the 128 subnet is 159.
The broadcast address for the 160 subnet is 191. The broadcast address for the 192
subnet is 223 (since 224 is the mask).

 What is the valid host range of each subnet?Answer: The valid hosts are the numbers
in between the subnet and broadcast addresses. For example, the 32 subnet valid
hosts are 33-62.

Example-3

Using the Class C mask of 255.255.255.240. Here are the answers:

 How many subnet bits are used in this mask?Answer: 4 bits or 2^4-2=14 subnets

 How many host bits are available per subnet?Answer: 4 bits or 2^4-2=14 hosts per
subnet

 What are the subnet addresses? Answer: 256-240=16, 32, 48, 64, 80, 96, 112,
128, 144, 160, 176, 192, 208 and 224 (14 subnets found by continuing to add 16 to
itself.)

 What is the broadcast address of each subnet?Answer: Here are some examples
of the broadcast address: The broadcast address for the 16 subnet is 31. The
broadcast address for the 32 subnet is 47. The broadcast address for the 64 subnet
is 79. The broadcast address for the 96 subnet is 111. The broadcast address for
the 160 subnet is 175. The broadcast address for the 192 subnet is 207.

 What is the valid host range of each subnet?Answer: The valid hosts are the numbers
in between the subnet and broadcast addresses. The 32 subnet valid hosts are 33-
46.

Example-4

Using the Class C mask of 255.255.255.248. Here are the answers:

 How many subnet bits are used in this mask?Answer: 5 bits or 2^5-2=30 subnets

 How many host bits are available per subnet?Answer: 3 bits or 2^3-2=6 hosts per
subnet

 What are the subnet addresses?Answer 256-248 =8, 16, 24, 32, 40, 48, and so
forth. The last subnet is 240 (30 subnets found by continuing to add 8 to itself).

 What is the broadcast address of each subnet?Answer: The broadcast address for
the 8 subnet is 15. The broadcast address for the 16 subnet is 23. The broadcast
address for the 48 subnet is 55.

 What is the valid host range of each subnet?Answer: The valid hosts are the numbers
in between the subnet and broadcast addresses. For example, the 32 subnet valid
hosts are 33-38.

Example-5

Using the Class C mask of 255.255.255.252. Here are the answers:

 How many subnet bits are used in this mask?Answer: 6 bits or 2^6-2=62 subnets

 How many host bits are available per subnet?Answer: 2 bits or 2^2-2=2 hosts per
subnet

 What are the subnet addresses?Answer: 256-252 =4, 8, 12, 16, 20, and so forth.
The last subnet is 248 (62 subnets found by continuing to add 4 to itself).

 What is the broadcast address of each subnet?Answer: The broadcast address for
the 4 subnet is 7. The broadcast address for the 8 subnet is 11. The broadcast
address for the 12 subnet is 15. The broadcast address for the 20 subnet is 23.

 What is the valid host range of each subnet?Answer: The valid hosts are the numbers
in between the subnet and broadcast addresses. For example, the 16 subnet valid
hosts are 17 and 18.

How to use this information?

Let’s take a look at an example that will highlight how the above information is applied.

A host configuration has an IP configuration of 192.168.10.17 255.255.255.248. What


are the subnet, broadcast address, and host range that this host is a member of? The answer
is: 256-248=8, 16, 24. This host is in the 16 subnet, the broadcast address of the 16 subnet is
23, and the valid host range is 17-22. Pretty easy!

Here is an explanation of this example: First, we used 256-subnetmask to get the variable
and first subnet. Then we kept adding this number to itself until we passed the host address. The
subnet is the number before the host address, and the broadcast address is the number right
before the next subnet. The valid hosts are the numbers in between the subnet and broadcast
address.

Let’s examine a second example. A host configuration has an IP configuration of


192.168.10.37 255.255.255.240. What are the subnet, broadcast address, and host range this
host is a member of? The answer is: 256-240=16, 32, 48. This host is in the 32 subnet, the
broadcast address of the 32 subnet is 47, and the valid host range is 33-46.

Let’s go through a third example: A host configuration has an IP configuration of


192.168.10.44 255.255.255.224. What are the subnet, broadcast address, and host range this
host is a member of? The answer is: 256-224=32, 64. This host is in the 32 subnet, the broadcast
address of the 32 subnet is 63, and the valid host range is 33-62.

Here’s a fourth example: A host configuration has an IP configuration of 192.168.10.17


255.255.255.252. What are the subnet, broadcast address, and host range this host is a member
of? The answer is: 256-252=4, 8, 12, 16, 20. This host is in the 16 subnet, the broadcast
address of the 16 subnet is 19, and the valid host range is 17-18.

Let’s go through a final example. A host configuration has an IP configuration of


192.168.10.88 255.255.255.192. What are the subnet, broadcast address and host range this
host is a member of? The answer is: 256-192=64, 128. This host is in the 64 subnet, the
broadcast address of the 64 subnet is 127, and the valid host range must be 65-126.

Conclusion: It is important to be able to subnet quickly and efficiently. After studying the
examples presented, one should be familiar with this process with Class C addresses.

3.2.2 Subnetting a Class B network address

There are quite a few more masks we can use with a Class B network address than we
can with a Class C network address. Remember that this is not harder than subnetting with
Class C, but it can get confusing if you don’t pay attention to where the subnet bits and host bits
are in a mask.

We will use the same techniques as used in the Class C to subnet a network. We’ll start
with the Class B subnet mask of 255.255.192.0 and figure out the subnets, broadcast address,
and valid host range. We will answer the same five questions we answered for the Class C
subnet masks:

 How many subnets does this mask provide?

 How many hosts per subnet does this mask provide?

 What are the valid subnets?

 What is the broadcast address for each subnet?

 What is the host range of each subnet?

Before we answer these questions, there is one difference you need to be aware of when
subnetting a Class B network address. When subnetting in the third octet, you need to add the
fourth octet. For example, on the 255.255.192.0 mask, the subnetting will be done in the third
octet. To create a valid subnet, you must add the fourth octet of all 0s and all 1s for the network
and broadcast address (0 for all 0s and 255 for all 1s).

Example-1: Answers for the 255.255.192.0 mask


 2^2-2=2 subnets

 2^14-2=16,382 hosts per subnet

 256-192=64.0, 128.0

 Broadcast for the 64.0 subnet is 127.255. Broadcast for the 128.0 subnet is 191.255.

 The valid hosts are:



Notice that the numbers in the third octet are the same numbers we used in the fourth
octet when subnetting the 192 mask. The only difference is that we add 0 and 255 in the fourth
octet.

For the 64.0 subnet, all the hosts between 64.1 and 127.254 are in the 64 subnet. In the
128.0 subnet, the hosts are 128.1 through 191.254.

Example-2: 255.255.240.0
 2^4-2=14 subnets

 2^12-2=4094 hosts per subnet

 256-240=16.0, 32.0, 48.0, 64.0, etc.

 Broadcast for the 16.0 subnet is 31.255. Broadcast for the 32.0 subnet is 47.255,
etc.

 The valid hosts are:

Example-3: 255.255.248.0
 2^5-2=30 subnets

 2^11-2=2,046 hosts per subnet

 256-248=8.0, 16.0, 24.0, 32.0, 40.0, 48.0, 56.0, 64.0, etc.



 Broadcast for the 8.0 subnet is 15.255. Broadcast for the 16.0 subnet is 23.255,
etc.

The valid hosts are:

Example-4: 255.255.252.0
 2^6-2=62 subnets

 2^10-2=1,022 hosts per subnet

 256-252=4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 28.0, 32.0, etc.

 Broadcast for the 4.0 subnet is 7.255. Broadcast for the 8.0 subnet is 11.255, etc.

The valid hosts are:

Example-5: 255.255.255.0

 2^8-2=254 subnets

 2^8-2=254 hosts per subnet

 256-255=1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, etc.

 Broadcast for the 1.0 subnet is 1.255. Broadcast for the 2.0 subnet is 2.255, etc.

 The valid hosts are:



The more difficult process of subnetting a Class B network address is when you start
using bits in the fourth octet for subnetting. For example, what happens when you use this
mask with a Class B network address: 255.255.255.128? Is that valid? Absolutely! There are
nine bits for subnetting and seven bits for hosts. That is 510 subnets, each with 126 hosts.
However, it is the most difficult mask to figure out the valid hosts for.

Example-6: The Class B 255.255.255.128 subnet mask:


 2^9-2=510 subnets

 2^7-2=126 hosts per subnet

 For the third octet, the mask would be 256-255=1, 2, 3, 4, 5, 6, etc.

 For the fourth octet, the mask would be 256-128=128, which is one subnet if it is
used. However, if you turn the subnet bit off, the value is 0. This means that for
every subnet in the third octet, the fourth octet has two subnets: 0 and 128, for
example 1.0 and 1.128.

 Broadcast for the 0.128 subnet is 0.255; the broadcast for the 1.0 subnet is
1.127. Broadcast for the 1.128 subnet is 1.255, etc.

 The valid hosts are:



The thing to remember is that for every subnet in the third octet, there are two in the
fourth octet: 0 and 128. For the 0 subnet, the broadcast address is always 127. For the 128
subnet, the broadcast address is always 255.

Example-7: Class B network 255.255.255.192

 2^10-2=1022 subnets

 2^6-2=62 hosts per subnet

 256-255=1.0, 2.0, 3.0, etc. for the third octet. 256-192=64, 128, 192 for the fourth
octet. For every valid subnet in the third octet, we get four subnets in the fourth
octet: 0, 64, 128, and 192.

 Broadcast for the 1.0 subnet is 1.63, since the next subnet is 1.64. Broadcast for
the 1.64 subnet is 1.127, since the next subnet is 1.128. Broadcast for the 1.128
subnet is 1.191, since the next subnet is 1.192. Broadcast for the 1.192 subnet is
1.255.

 The valid hosts are as follows:

On this one, the 0 and 192 subnets are valid, since we are using the third octet as well.
The subnet range is 0.64 through 255.128. 0.0 is not valid since no subnet bits are on. 255.192
is not valid because then all subnet bits would be on.

Example-8: Class B network 255.255.255.224


 2^11-2=2046 subnets

 2^5-2=30 hosts per subnet

 256-255=1.0, 2.0, 3.0, etc. for the third octet. 256-224=32, 64, 96, 128, 160, 192 for
the subnet value. (For every value in the third octet, we get eight subnets in the
fourth octet: 0, 32, 64, 96, 128, 160, 192, 224.)

 Broadcast for the 1.0 subnet is 1.31, since the next subnet is 1.32. Broadcast for
the 1.32 subnet is 1.63, since the next subnet is 1.64. Broadcast for the 1.64 subnet
is 1.95, since the next subnet is 1.96. Broadcast for the 1.224 subnet is 1.255.

 The valid hosts are:

For this subnet mask, the 0 and 224 subnets are valid as long as not all subnet bits in the
third octet are off or all subnet bits in the fourth octet are on.

3.2.3 Subnetting a Class A network address

The Class A networking address scheme is designed for the government and large
institutions needing a great deal of unique nodes. Although the Class A scheme has only 126
unique network addresses, each can contain approximately 17 million unique nodes, which can
make subnetting such a network a nightmare.

Subnetting Class A addresses requires a little forethought, some basic information, and a
lot of practice. Here, I will explain Class A subnet masks and how to assign valid subnets and
host addresses to provide flexibility in configuring your network.

Class A subnet masks must start with 255.0.0.0 at a minimum, because the whole first
octet of an IP address (the IP address describes the specific location on the network) is used to
define the network portion. Routers use the network portion to send packets through an
internetwork. Routers aren’t concerned about host addresses. They need to know only where
the hosts are located; the MAC address is used to find a host on a LAN. The last three
octets of a Class A subnet mask are used to address hosts on a LAN; these 24 bits you can
manipulate however you wish.

If you wanted to create smaller networks (subnetworks) out of a Class A network ID, you’d
borrow bits from the host portion of the mask. The more bits you borrow, the more subnets you
can have, but this means fewer hosts per subnet. However, with a Class A mask, you have 24
bits to manipulate, so this isn’t typically a problem.

Once you have the mask assigned to each network, you must assign the valid subnet
addresses and host ranges to each network. To determine the valid subnets and host addresses
for each network, you need to answer three easy questions:

 What is the valid subnet address?

 What is the broadcast address?

 What is the valid host range?

Here are some tips for finding the answers:

Valid subnet address: To figure out the valid subnet addresses, simply subtract the
subnetted octet of the mask from 256. For example, if you had a Class A mask of 255.240.0.0, the equation would be
256-240=16. The number 16 is the first subnet and also your block size. Keep adding the block
size (in this case 16) to itself until you reach the subnet mask value. The valid subnets in this
example are 16, 32, 48, 64, 80, 96, 112, 128, 144, 160, 176, 192, 208, 224. As another example,
if you had a Class A subnet mask of 255.255.240.0, you’d apply the calculation to both the second and third
octets: the second octet is 256-255=1, giving 1, 2, 3, etc., all the way to 254; the third
octet is 256-240=16, giving 16, 32, 48, etc.

Table 3.2: Class A Subnet Masks

Broadcast address: To determine the broadcast address of each subnet, just subtract 1
from the next subnet value. For example, within the 16 subnet, the next subnet is 32, so the
broadcast address of the 16 subnet is 31. The broadcast address for the 32 subnet is 47,
because the next subnet is 48. The broadcast address for the 48 subnet is 63, because the next
subnet is 64.

Valid host range: The valid hosts are the numbers between the subnet address and the
broadcast address. For the 16 subnet, the valid host range you can assign on a network is 17-
30 because the subnet number is 16 and the broadcast address is 31. For the 32 subnet, the
valid host range is 33 to 46 because the subnet number is 32 and the broadcast address is 47.
You can’t use the subnet number and broadcast addresses as valid host addresses.
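The three answers above can be produced mechanically. Here is a minimal sketch of the 256-minus-mask method for the 255.240.0.0 example:

```python
# Block-size method for a Class A mask of 255.240.0.0 (a sketch).
mask_octet = 240
block = 256 - mask_octet                 # 16: the first subnet and the block size

# Valid subnets: keep adding the block size up to (but not including) the mask value.
subnets = list(range(block, mask_octet, block))
print(subnets)        # [16, 32, 48, 64, 80, 96, 112, 128, 144, 160, 176, 192, 208, 224]

# Broadcast address of each subnet: one less than the next subnet value.
broadcasts = {s: s + block - 1 for s in subnets}
print(broadcasts[16]) # 31

# Valid host range: everything between the subnet and broadcast addresses.
host_range_16 = (16 + 1, broadcasts[16] - 1)
print(host_range_16)  # (17, 30)
```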

Note

 Subnet zero: It is assumed you can use subnet zero. If you’re not using subnet
zero, subtract two from each number in the Subnets column in Table 3.2 above.

 Once you have an idea what your network will look like, write down the number of
physical subnets you have and the number of hosts needed for each subnet. For
example, on a WAN point-to-point link, you need only two IP addresses, so you can
use a /30 mask.

 /30: The slash (/) indicates the number of mask bits turned on. It saves you from
typing, or pronouncing, the whole mask. For example, /8 means 255.0.0.0, /16 is
255.255.0.0, and /24 is 255.255.255.0. You pronounce it as “configure a slash 24
mask on that network.” It’s just an easier way of saying “configure a 255.255.255.0
mask on that network.”
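The slash-to-dotted-mask translation can be sketched in a few lines of Python; prefix_to_mask is a hypothetical helper name used for illustration:

```python
def prefix_to_mask(prefix: int) -> str:
    """Turn a /prefix length into its dotted-decimal subnet mask."""
    bits = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF  # set the top `prefix` bits
    return ".".join(str((bits >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(prefix_to_mask(8))    # 255.0.0.0
print(prefix_to_mask(16))   # 255.255.0.0
print(prefix_to_mask(24))   # 255.255.255.0
print(prefix_to_mask(30))   # 255.255.255.252
```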

Example-1: Class A mask 255.240.0.0 (/12)

This mask provides you with only four subnet bits, or 16 subnets (14 if you’re not using
subnet zero) with 1,048,574 hosts each. The valid subnets are 256-240=16, 32, 48, 64, 80, etc.,
all the way to 224. (Subnets 0 and 240 are available if you’re using subnet zero.)

The first subnet, assuming subnet zero, is:


Subnet: 10.0.0.0
Broadcast: 10.15.255.255
Valid host range: 10.0.0.1 through 10.15.255.254

The last subnet, assuming subnet zero, is:


Subnet: 10.240.0.0
Broadcast: 10.255.255.255
Valid host range: 10.240.0.1 through 10.255.255.254

Example-2: Class A mask 255.255.128.0

This mask provides you with nine bits of subnetting and 15 host bits (/17). This gives you
512 subnets with 32,766 hosts each. The second octet is 256-255=1, 2, 3, etc., all the way to
255. Zero is available in the second octet if you have either a subnet bit on in the third octet or
are, of course, using subnet zero.

The first available subnet is:


Subnet: 10.0.0.0
Broadcast: 10.0.127.255
Valid host range: 10.0.0.1 through 10.0.127.254

You must remember that the third octet is using only one subnet bit. This bit can be either
off or on; if it is off, the subnet is 0. If it is on, the subnet is 128.

An example of the 10.0.128.0 subnet is:


Subnet: 10.0.128.0
Broadcast: 10.0.255.255
Valid host range: 10.0.128.1 through 10.0.255.254

The last available subnet is:


Subnet: 10.255.128.0
Broadcast: 10.255.255.255
Valid host range: 10.255.128.1 through 10.255.255.254

Example-4: Class A mask 255.255.255.252

This mask is the easiest to subnet. Even if you used it with a Class B or Class C network,
you’d always have only two available host IDs per subnet. The
reason you would use it with a Class A network is that it can give you up to 4,194,304
subnets with two hosts each. This is a perfect mask for a point-to-point link, so I suggest always
saving a few block sizes of four (/30) masks for use on WANs and point-to-point LAN connections.

If you use the 10.2.3.0 network, your subnets are always 2.3 in the second and third
octets, respectively. But the fourth octet is where it changes, as in 256-252=4, 8, 12, 16, 20, 24,
28, etc., all the way to 248. If you use subnet zero, your first subnet is 0, and your last subnet
is 252.

An example of the 10.2.3.0 subnet is:


Subnet: 10.2.3.0
Broadcast: 10.2.3.3
Valid hosts: 10.2.3.1 and 10.2.3.2

An example of the 10.2.3.252 subnet is:


Subnet: 10.2.3.252
Broadcast: 10.2.3.255
Valid hosts: 10.2.3.253 and 10.2.3.254
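These /30 figures are easy to verify with Python’s standard ipaddress module; a minimal check of the last example:

```python
import ipaddress

# Verify the 10.2.3.252/30 example: network, broadcast, and the two usable hosts.
net = ipaddress.ip_network("10.2.3.252/30")
print(net.network_address)             # 10.2.3.252
print(net.broadcast_address)           # 10.2.3.255
print([str(h) for h in net.hosts()])   # ['10.2.3.253', '10.2.3.254']

# A /30 mask carves a Class A network (/8) into 2^22 subnets.
print(2 ** (30 - 8))                   # 4194304
```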

Using Class A network IDs

You should probably use private Class A network IDs in most networks these days. Why? Because
the 10.0.0.0 network ID cannot be routed on the Internet. Private IP address ranges
allow you to create a more secure network and use port address translation on your router to
the Internet to do the translation for you. We suggest 10.0.0.0 to address your network because
it provides the most flexibility for configuring networks.

3.3 Classless inter-domain routing (CIDR)


3.3.1 About CIDR

Classless inter-domain routing (CIDR) is a set of Internet protocol (IP) standards that is
used to create unique identifiers for networks and individual devices. The IP addresses allow
particular information packets to be sent to specific computers. CIDR is an IP addressing scheme
that improves the allocation of IP addresses. CIDR, sometimes called supernetting, is a way to
allow more flexible allocation of Internet Protocol (IP) addresses than was possible with the
original system of IP address classes. As a result, the number of available Internet addresses
was greatly increased, which along with widespread use of network address translation (NAT),
has significantly extended the useful life of IPv4.

Originally, IP addresses were assigned in four major address classes, A through D. The first
three of these classes allocate one portion of the 32-bit IP address format to identify a network
gateway: the first 8 bits for class A, the first 16 for class B, and the first 24 for class C. The
remainder identifies hosts on that network: more than 16 million in class A, 65,534 in class B
and 254 in class C. (Class D addresses identify multicast domains.)

To illustrate the problems with the class system, consider that one of the most commonly
used classes was Class B. An organization that needed more than 254 host machines would
often get a Class B license, even though it would have far fewer than 65,534 hosts. This resulted
in most of the block of addresses allocated going unused. The inflexibility of the class system
accelerated IPv4 address pool exhaustion. With IPv6, addresses grow to 128 bits, greatly
expanding the number of possible addresses on the Internet. The transition to IPv6 is slow,
however, so IPv4 address exhaustion continues to be a significant issue.

CIDR reduced the problem of wasted address space by providing a new and more flexible
way to specify network addresses in routers. CIDR lets one routing table entry represent an
aggregation of networks that exist in the forward path that don’t need to be specified on that
particular gateway. This is much like how the public telephone system uses area codes to
channel calls toward a certain part of the network. This aggregation of networks in a single
address is sometimes referred to as a supernet.
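As a small sketch of this aggregation, Python’s ipaddress module can collapse eight contiguous /24 networks (illustrative addresses) into the single supernet entry a router would advertise:

```python
import ipaddress

# Eight contiguous /24 networks: 192.168.0.0/24 through 192.168.7.0/24...
routes = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(8)]

# ...collapse into one /21 supernet: a single routing-table entry.
summary = list(ipaddress.collapse_addresses(routes))
print(summary)   # [IPv4Network('192.168.0.0/21')]
```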

Using CIDR, each IP address has a network prefix that identifies either one or several
network gateways. The length of the network prefix in IPv4 CIDR is also specified as part of the
IP address and varies depending on the number of bits needed, rather than any arbitrary class
assignment structure. A destination IP address or route that describes many possible destinations
has a shorter prefix and is said to be less specific. A longer prefix describes a destination
gateway more specifically. Routers are required to use the most specific, or longest, network
prefix in the routing table when forwarding packets. (In IPv6, a CIDR block always gets 64 bits
for specifying network addresses). A CIDR network address looks like this under IPv4:

192.30.250.0/18

The “192.30.250.0” is the network address itself and the “18” says that the first 18 bits are
the network part of the address, leaving the last 14 bits for specific host addresses.

Figure 3.5: CIDR

CIDR is now the routing system used by virtually all gateway routers on the Internet’s
backbone network. The Internet’s regulating authorities expect every Internet service provider
(ISP) to use it for routing. CIDR is supported by the Border Gateway Protocol, the prevailing
exterior (interdomain) gateway protocol and by the OSPF interior (or intradomain) gateway
protocol. Older gateway protocols like Exterior Gateway Protocol and Routing Information
Protocol do not support CIDR.

Advantages of CIDR

CIDR provides numerous advantages over the “classful” addressing scheme, whether or
not subnetting is used:

(a) Efficient Address Space Allocation:

 Instead of allocating addresses in fixed-size blocks of low granularity, under CIDR
addresses are allocated in sizes of any binary multiple.

 So, a company that needs 5,000 addresses can be assigned a block of 8,190 instead
of 65,534. Or, to think of it another way, the equivalent of a single Class B network
can be shared amongst 8 companies that each need 8,190 or fewer IP addresses.

(b) Elimination of Class Imbalances:


 There are no more class A, B and C networks, so there is no problem with some
portions of the address space being widely used while others are neglected.

(c) Efficient Routing Entries:


 CIDR’s multiple-level hierarchical structure allows a small number of routing entries
to represent a large number of networks.

 Network descriptions can be “aggregated” and represented by a single entry.

 Since CIDR is hierarchical, the detail of lower-level, smaller networks can be hidden
from routers that move traffic between large groups of networks.

(d) No Separate Subnetting Method:


 CIDR implements the concepts of subnetting within the internet itself.

 An organization can use the same method used on the Internet to subdivide its
internal network into subnets of arbitrary complexity without needing a separate
subnetting mechanism.
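The 5,000-address example under (a) can be sketched as a small calculation; smallest_prefix is a hypothetical helper name:

```python
import math

def smallest_prefix(hosts_needed: int) -> int:
    """Smallest CIDR prefix whose block holds the requested hosts,
    allowing for the network and broadcast addresses."""
    host_bits = math.ceil(math.log2(hosts_needed + 2))
    return 32 - host_bits

p = smallest_prefix(5000)
print(p)                  # 19
print(2 ** (32 - p) - 2)  # 8190 usable hosts, instead of a whole Class B
```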

3.4 Wildcard masks


Wildcard masks are used to specify a range of network addresses. They are commonly
used with routing protocols (like OSPF) and access lists.

Just like a subnet mask, a wildcard mask is 32 bits long. It acts as an inverted subnet
mask: the zero bits indicate that the corresponding bit position must
match the same bit position in the IP address, while the one bits indicate that the corresponding bit
position doesn’t have to match.

A wildcard mask is a mask of bits that indicates which parts of an IP address are available
for examination. In the Cisco IOS, they are used in several places, for example:

 To indicate the size of a network or subnet for some routing protocols, such as
OSPF.

 To indicate what IP addresses should be permitted or denied in access control lists
(ACLs).

At a simplistic level a wildcard mask can be thought of as an inverted subnet mask.
For example, a subnet mask of 255.255.255.0 (binary equivalent =
11111111.11111111.11111111.00000000) inverts to a wildcard mask of 0.0.0.255.

A wildcard mask is a matching rule. The rule for a wildcard mask is:

 0 means that the equivalent bit must match

 1 means that the equivalent bit does not matter

Any wildcard bit-pattern can be masked for examination: For example, a wildcard mask of
0.0.0.254 (binary equivalent = 00000000.00000000.00000000.11111110) applied to IP address
10.10.10.2 (00001010.00001010.00001010.00000010) will match even-numbered IP addresses
10.10.10.0, 10.10.10.2, 10.10.10.4, 10.10.10.6, etc. The same mask applied to 10.10.10.1
(00001010.00001010.00001010.00000001) will match odd-numbered IP addresses 10.10.10.1,
10.10.10.3, 10.10.10.5, etc.

A network and wildcard mask combination of 1.1.1.1 0.0.0.0 would match an interface
configured exactly with 1.1.1.1 only, and nothing else. This is really useful if you want to activate
OSPF on a specific interface in a very clear and simple way.

If you insist on matching a range of networks, the network and wildcard mask combination
of 1.1.0.0 0.0.255.255 would match any interface in the range of 1.1.0.0 to 1.1.255.255. Because
of this, it’s simpler and safer to stick to using wildcard masks of 0.0.0.0 and identify each OSPF
interface individually, but once configured, they function exactly the same — one way is not
better than the other.

Wildcard masks are used in situations where subnet masks may not apply. For example,
when two affected hosts fall in different subnets, the use of a wildcard mask will group them
together.

Here is an example of using a wildcard mask to include only the desired interfaces in the
OSPF routing process:

Figure 3.6: Wildcard Mask

Router R1 has three networks directly connected. To include only the 10.0.1.0 subnet in
the OSPF routing process, a network command of the following form can be used (the OSPF
process ID and area number shown here are illustrative):

router ospf 1
 network 10.0.1.0 0.0.0.255 area 0

Let’s break down the wildcard part of the command. To do that, we need to use binary
numbers instead of decimal notation.

10.0.1.0 = 00001010.00000000.00000001.00000000
0.0.0.255 = 00000000.00000000.00000000.11111111

The theory says that the zero bits of the wildcard mask have to match the same position
in the IP address. So, let’s write the wildcard mask below the IP address:

00001010.00000000.00000001.00000000
00000000.00000000.00000000.11111111

As you can see from the output above, the last octet doesn’t have to match, because the
wildcard mask bits are all ones. The first 24 bits have to match, because the wildcard mask
bits are all zeros. So, in this case, the wildcard mask will match all addresses that begin with 10.0.1.X.
In our case, only one network will be matched, 10.0.1.0/24.

What if we want to match both 10.0.0.0/24 and 10.0.1.0/24? Then we will have to use
a different wildcard mask. We need to use the wildcard mask of 0.0.1.255. Why is that? Well, we
again need to write down the addresses in binary:

00001010.00000000.00000000.00000000 = 10.0.0.0
00001010.00000000.00000001.00000000 = 10.0.1.0
00000000.00000000.00000001.11111111 = 0.0.1.255

From the output above, we can see that only the first 23 bits have to match (notice that
the third octet of the wildcard mask has a 1 at the end). That means that all addresses in the
range of 10.0.0.0 – 10.0.1.255 will be matched. So, in our case, we have successfully matched
both addresses, 10.0.0.0 and 10.0.1.0.
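The matching rule described above can be sketched as a small Python function (wildcard_match is a hypothetical name) that reproduces both the even/odd example and the 0.0.1.255 case:

```python
import ipaddress

def wildcard_match(address: str, pattern: str, wildcard: str) -> bool:
    """Wildcard rule: where a mask bit is 0 the address bit must match the
    pattern; where it is 1 the bit is ignored."""
    a = int(ipaddress.ip_address(address))
    p = int(ipaddress.ip_address(pattern))
    w = int(ipaddress.ip_address(wildcard))
    return (a & ~w & 0xFFFFFFFF) == (p & ~w & 0xFFFFFFFF)

# 0.0.0.254 ignores every bit except the lowest: matches even addresses only.
print(wildcard_match("10.10.10.4", "10.10.10.0", "0.0.0.254"))  # True
print(wildcard_match("10.10.10.5", "10.10.10.0", "0.0.0.254"))  # False

# 0.0.1.255 matches the whole 10.0.0.0 - 10.0.1.255 range.
print(wildcard_match("10.0.1.77", "10.0.0.0", "0.0.1.255"))     # True
print(wildcard_match("10.0.2.1",  "10.0.0.0", "0.0.1.255"))     # False
```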

NOTE

A wildcard mask of all zeros (0.0.0.0) means that the entire IP address has to match in
order for a statement to execute. For example, if we want to match only the IP address
192.168.0.1, the command used would be 192.168.0.1 0.0.0.0.

A wildcard mask of all ones (255.255.255.255) means that no bits have to match. This
basically means that all addresses will be matched.

3.5 WAN Technologies


3.5.1 Frame Relay

Frame Relay is a standardized wide area network technology that specifies the physical
and data link layers of digital telecommunications channels using a packet switching methodology.
Frame Relay is a high-performance WAN protocol that operates at the physical and data link
layers of the OSI reference model. Frame Relay originally was designed for use across Integrated
Services Digital Network (ISDN) interfaces. Today, it is used over a variety of other network
interfaces as well.

Frame relay is a packet-switching telecommunication service designed for cost-efficient
data transmission for intermittent traffic between local area networks (LANs) and between
endpoints in wide area networks (WANs). The service, once widely available and implemented,
is in the process of being discontinued by major Internet service providers. Sprint ended its

frame relay service in 2007, while Verizon said it plans to phase out the service in 2015. AT&T
stopped offering frame relay in 2012 but said it would support existing customers until 2016.

Frame relay puts data in a variable-size unit called a frame and leaves any necessary
error correction (retransmission of data) up to the endpoints, which speeds up overall data
transmission. For most services, the network provides a permanent virtual circuit (PVC), which
means that the customer sees a continuous, dedicated connection without having to pay for a
full-time leased line, while the service provider figures out the route each frame travels to its
destination and can charge based on usage. Switched virtual circuits (SVC), by contrast, are
temporary connections that are destroyed after a specific data transfer is completed.

An enterprise can select a level of service quality, prioritizing some frames and making
others less important. A number of service providers, including AT&T, offer frame relay, and it’s
available on fractional T-1 or full T-carrier systems. Frame relay complements and provides
a mid-range service between ISDN, which offers bandwidth at 128 Kbps, and Asynchronous
Transfer Mode (ATM), which operates in somewhat similar fashion to frame relay but at speeds
of 155.520 Mbps or 622.080 Mbps.

Devices for Frame Relay

In order for a frame relay WAN to transmit data, data terminal equipment (DTE) and data
circuit-terminating equipment (DCE) are required. DTEs are typically located on the customer’s
premises and can encompass terminals, routers, bridges and personal computers. DCEs are
managed by the carriers and provide switching and associated services.

Frame relay is based on the older X.25 packet-switching technology, which was designed
for use over slower, error-prone analog transmission links. Unlike X.25, frame
relay is a fast packet technology, which means that the protocol does not
attempt to correct errors. When an error is detected in a frame, it is simply dropped (that is,
thrown away). The end points are responsible for detecting and retransmitting dropped frames
(though the incidence of error in digital networks is extraordinarily small relative to analog
networks).

Frame relay is often used to connect LANs with major backbones as well as on public
wide area networks and also in private network environments with leased T-1 lines. It requires
a dedicated connection during the transmission period and is not ideal for voice or video, which
require a steady flow of transmissions. Frame relay transmits packets at the data link layer of

the Open Systems Interconnection (OSI) model rather than at the network layer. A frame can
incorporate packets from different protocols such as Ethernet and X.25. It is variable in size and
can be as large as a thousand bytes or more.

Configuring user equipment in a Frame Relay network is extremely simple. The connection-
oriented link-layer service provided by Frame Relay has properties like non-duplication of frames,
preservation of the frame transfer order and small probability of frame loss. The features provided
by Frame Relay make it one of the best choices for interconnecting local area networks using
a wide area network. However the drawback in this method is that it becomes prohibitively
expensive with growth of the network.

There are certain benefits which are associated with Frame Relay. First of all, it helps in
reducing the cost of internetworking, as there is considerable reduction in the number of circuits
required and the associated bandwidths. Second, it helps in increasing the performance due to
reduced network complexity. Third, it increases the interoperability with the help of international
standards. Fourth, Frame Relay is protocol independent and can easily be used to combine
traffic from other networking protocols such as IPX, SNA and IP. The reduction of network
management and unification of hardware used for the protocols can help in cost savings due to
Frame Relay.

In business scenarios, where there is unpredictable and high-volume traffic, Frame Relay
is one of the best choices. It also remains a great choice for medium- to large-sized networks,
which makes use of star or mesh connectivity.

In business scenarios, where there is a slow connection or continuous traffic flow due to
applications like multimedia, Frame Relay is not a recommended choice.

Network providers commonly implement Frame Relay for voice (VoFR) and data as an
encapsulation technique used between local area networks (LANs) over a wide area network
(WAN). Each end-user gets a private line (or leased line) to a Frame Relay node. The Frame
Relay network handles the transmission over a frequently changing path transparent to all end-users,
and it has become one of the most extensively used WAN protocols. It is less expensive than leased lines and that is one
reason for its popularity. The extreme simplicity of configuring user equipment in a Frame Relay
network offers another reason for Frame Relay’s popularity.

With the advent of Ethernet over fiber optics, MPLS, VPN and dedicated broadband
services such as cable modem and DSL, the end may loom for the Frame Relay protocol and
encapsulation. However, many rural areas still lack DSL and cable modem
services. In such cases, the least expensive type of non-dial-up connection remains a 64-kbit/
s Frame Relay line. Thus a retail chain, for instance, may use Frame Relay for connecting rural
stores into its corporate WAN.

Figure 3.7: Frame Relay Network

3.5.2 Data Link Connection Identifier (DLCI)

A data link connection identifier (DLCI) is a Frame Relay 10-bit-wide link-local virtual
circuit identifier used to assign frames to a specific PVC or SVC. Frame Relay networks use
DLCIs to statistically multiplex frames. DLCIs are preloaded into each switch and act as road
signs to the traveling frames.

The standard allows the existence of 1024 DLCIs. Under the ANSI/Q.933a LMI standard,
DLCI 0 is reserved and only numbers 16 to 976 are usable for end-user equipment. Under
Cisco LMI, DLCI 1023 is reserved and numbers 16 to 1007 are usable.

In summary, if using Cisco LMI, numbers from 16 to 1007 are available for end-user
equipment; the rest are reserved for various management purposes. DLCIs are Layer 2 addresses
that are locally significant: no two devices have the same DLCI mapped to an interface in one
frame relay cloud.

3.5.3 Committed Information Rate (CIR)

In a Frame Relay network, committed information rate is the bandwidth for a virtual circuit
guaranteed by an internet service provider to work under normal conditions. Committed data
rate is the payload portion of the CIR. At any given time, the available bandwidth should not fall
below this committed figure.

In frame relay networks, a committed information rate (CIR) is a bandwidth (expressed in
bits per second) associated with a logical connection in a permanent virtual circuit (PVC). Frame
relay networks are digital networks in which different logical connections share the same physical
path and some logical connections are given higher bandwidths than others. For example, a
connection conveying a high proportion of video signals (which require a high bandwidth) could
be set up for certain workstations in a company (or on a larger network) and other connections
requiring less bandwidth could be set up for all other workstations. Using statistical multiplexing,
frame relay assemblers and dissemblers (FRADs), the devices that interconnect to the frame
relay network, manage the logical connections so that, for example, those with the video signals
(and higher CIRs) get more use of the paths. Because the CIR is defined in software, the
network’s mix of traffic bandwidths can be redefined in a relatively short amount of time.

With frame relay networks, multiple customers can share the same physical wires using
virtual circuits. Since different customers have different bandwidth needs, providers can designate
faster connections to those who need them with a committed information rate. A streaming
video provider running a content delivery network needs more throughput than a customer
primarily sending text data, for example. Under a CIR, a customer is guaranteed a certain
bandwidth under a service level agreement. Frame relay connections are also usually burstable
with an excess information rate (EIR) or peak information rate (PIR).

Committed Information Rate is a way of guaranteeing that, even though you share a
bandwidth pool with many other users, you will receive at least a portion of that bandwidth,
no matter how busy the link gets.

As we know, the cost of providing bandwidth to satellite users (driven by the development,
launch, maintenance and ground segment of a satellite) is extremely high. The only way to
make this affordable is to share the bandwidth among several users.

Having a small portion of dedicated bandwidth becomes important when you are using
the internet for something that has a critical minimum bandwidth, like a VoIP phone call that can

require 30 kbps or 20 kbps, and sometimes more or less, depending on the type of call you are
making. A free Skype call typically requires about 30-40 kbps, but a call through the network
provider’s VoIP router could use considerably less. If the bandwidth is not available at the time of
your phone call, you could have very disappointing results with the call dropping or parts of the
conversation dropping out.

Guaranteeing that minimum bandwidth costs money, as this is bandwidth that the satellite
provider cannot sell to other users. So expect to pay a premium for CIR.

To ensure quality voice calls, you only need to guarantee the bandwidth to support as
many voice calls as your VoIP router will support, typically 10 kbps - 20 kbps per simultaneous
call.

3.5.4 BIR - Burst Information Rate / MIR - Maximum Information Rate

BIR - Burst Information Rate, or MIR - Maximum Information Rate is the theoretical
maximum to which your bandwidth can increase as bandwidth becomes available. This is the
size of the complete data pipe that you are sharing with others, and the rate normally advertised
by providers.

In practice, it is rare that you will ever actually achieve this rate, even when you are the
only subscriber to the service. There are overheads and bottlenecks both on the vessel and on
shore. Typically one would achieve bandwidths of between 50% and 90% of the advertised
rate.

If this does not meet your requirements, you will need to subscribe to a higher level of
service, or increase your CIR, both of which will cost more money.

3.5.5 Contention Ratio

Contention Ratio is the number of other subscribers on the same network competing for
the same bandwidth.

Generally speaking, the contention ratio is the MIR (Maximum Information Rate) divided
by the CIR (Committed Information Rate). So with a contention ratio of 4:1 on a 1 Mbps downlink,
the 4 subscribers would each have 256kbps guaranteed, while they can each burst up to the full
1 Mbps as it is available and not in use by their peers.
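That arithmetic can be sketched directly; the 1024 kbps figure below follows the example’s treatment of a 1 Mbps downlink as 1024 kbps:

```python
# Contention ratio arithmetic: guaranteed rate (CIR) = MIR / contention ratio.
mir_kbps = 1024        # a 1 Mbps downlink
contention = 4         # a 4:1 contention ratio
cir_kbps = mir_kbps // contention
print(cir_kbps)        # 256 -> each of the 4 subscribers is guaranteed 256 kbps
```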

A quality, marine shared network may have a contention ratio of 5:1 and this generally will
provide good results. Theoretically this means that in the assigned MIR bandwidth that there
are a total of 5 subscribers using the bandwidth simultaneously. Due to the sporadic nature that
we use the internet, not everyone is downloading at the same time. Not everyone is even using
the internet all the time, and when they do, it is in fits and starts with many idle times in between,
like while you are reading the web page that you have downloaded. With voice calls, hardly any
bandwidth is used while you are not speaking, between sentences, between words, and even
between syllables. So there is plenty of free bandwidth for others to use.

Some providers offer a choice of an entry-level package with an 8:1 contention ratio, which will
work fine for most internet applications, and a premium package with a 4:1 contention ratio, which
should provide excellent throughput near to the contracted rate. Most providers also advertise
a 1:1 contention ratio, but this is a very expensive proposition and should only be considered if
your applications demand the full bandwidth all of the time. It is often possible to temporarily
bump up the bandwidth to 1:1 if you have that requirement for short periods of time and then
drop back to a shared service during idle times.

3.5.6 Oversubscription

Some of the low budget providers (perhaps not marine) have contention ratios of many-to-one
(20:1 or 40:1). Some terrestrial internet providers use contention ratios of 50:1. Depending
on your usage requirements, this is often fine for casual use and provides offshore VSAT service
to vessels that would not normally have this in their budget.

There is talk that some providers will take license with the contention ratios and
oversubscribe the network, where they put more subscribers than the defined contention ratio
on the same bandwidth. They justify this by saying that the contention ratio is the average
number of subscribers using the network at any given time, not the total number of subscribers
assigned to that network. They count only the users that are actually online at that time. They
monitor the usage and dynamically adjust the bandwidth to keep a satisfactory user experience
for all. As long as this is done diligently this should not be an issue, and if the user experience
is not effected, hopefully this economy will be reflected in reduced monthly fees to the subscribers.

The bottom line is the user experience and the monthly communication budget of the
vessel. If you are not getting satisfactory results, you may need to increase your service
agreement and pay for more MIR or CIR. If you have an absolute minimum bandwidth requirement

(like VoIP), increase your CIR (or subscribe to a lower contention ratio). If you want more
speed, then you will want to increase your MIR.

3.5.7 Permanent Virtual Circuit (PVC)

A permanent virtual circuit (PVC) is a connection that is permanently established between
two or more nodes in frame relay and asynchronous transfer mode (ATM) based networks. It
enables the creation of a logical connection on top of a physical connection between nodes that
communicate frequently or continuously.

A PVC is designed to eliminate the need to set up a call connection on frame relay, ATM
or X.25 networks. Typically, the physical connection of a frame relay or similar network is divided
into various virtual circuits (VCs), allowing a single physical connection to support multiple VCs
simultaneously. Each connection is permanent and transfers data by utilizing the underlying
bandwidth capacity and infrastructure. For example, a bank’s headquarters often sets up a PVC
between branch offices for continuous data exchange and transfer.

A permanent virtual circuit (PVC) is a software-defined logical connection in a network
such as a frame relay network. A feature of frame relay that makes it a highly flexible network
technology is that users (companies or clients of network providers) can define logical connections
and required bandwidth between end points and let the frame relay network technology worry
about how the physical network is used to achieve the defined connections and manage the
traffic. In frame relay, the end points and a stated bandwidth called a Committed Information
Rate (CIR) constitute a PVC, which is defined to the frame relay network devices. The bandwidth
may not exceed the possible physical bandwidth. Typically, multiple PVCs share the same
physical paths at the same time. To manage the variation in bandwidth requirements expressed
in the CIRs, the frame relay devices use a technique called statistical multiplexing.
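As a rough sketch of the provisioning constraint described above (the link and CIR figures are hypothetical, and the function name is invented for illustration), a cautious provider could check that no PVC's CIR exceeds the physical rate, and that the CIRs together fit within the link:

```python
def can_provision(pvc_cirs_kbps, link_kbps):
    """Check PVC provisioning on one physical link: no single CIR may
    exceed the physical rate, and a cautious provider also keeps the
    sum of all CIRs within the link capacity."""
    return (all(c <= link_kbps for c in pvc_cirs_kbps)
            and sum(pvc_cirs_kbps) <= link_kbps)

print(can_provision([256, 512, 128], 1544))  # -> True: fits a T1
print(can_provision([1024, 1024], 1544))     # -> False: CIRs exceed the link
```

Statistical multiplexing works because PVCs rarely all burst at once, so real deployments may relax the sum check; the single-CIR check reflects the hard limit mentioned in the text.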

3.5.8 Label Switching and Multiprotocol Label Switching (MPLS)

Label switching is a technique of network relaying to overcome the problems perceived
by traditional IP-table switching (also known as traditional layer 3 hop-by-hop routing). Here,
the switching of network packets occurs at a lower level, namely the data link layer rather than
the traditional network layer.

Each packet is assigned a label number and the switching takes place after examination
of the label assigned to each packet. The switching is much faster than IP-routing. New

technologies such as Multiprotocol Label Switching (MPLS) use label switching. The established
ATM protocol also uses label switching at its core.

Note: Asynchronous Transfer Mode (ATM) is, according to the ATM Forum, “a
telecommunications concept defined by ANSI and ITU standards for carriage of a complete
range of user traffic, including voice, data, and video signals”. ATM was developed to meet the
needs of the Broadband Integrated Services Digital Network, as defined in the late 1980s, and
designed to integrate telecommunication networks. Additionally, it was designed for networks
that must handle both traditional high-throughput data traffic, and real-time, low-latency content
such as voice and video. The reference model for ATM approximately maps to the three lowest
layers of the ISO-OSI reference model: network layer, data link layer, and physical layer. ATM is
a core protocol used over the SONET/SDH backbone of the public switched telephone network
(PSTN) and Integrated Services Digital Network (ISDN), but its use is declining in favour of
all-IP networks.

According to RFC 2475 (An Architecture for Differentiated Services, December 1998):
“Examples of the label switching (or virtual circuit) model include Frame Relay, ATM, and MPLS.
In this model path forwarding state and traffic management or Quality of Service (QoS) state is
established for traffic streams on each hop along a network path. Traffic aggregates of varying
granularity are associated with a label switched path at an ingress node, and packets/cells
within each label switched path are marked with a forwarding label that is used to look up the
next-hop node, the per-hop forwarding behavior, and the replacement label at each hop. This
model permits finer granularity resource allocation to traffic streams, since label values are not
globally significant but are only significant on a single link; therefore resources can be reserved
for the aggregate of packets/cells received on a link with a particular label, and the label switching
semantics govern the next-hop selection, allowing a traffic stream to follow a specially engineered
path through the network.”

Multiprotocol Label Switching (MPLS) is a type of data-carrying technique for high-
performance telecommunications networks. MPLS directs data from one network node to the
next based on short path labels rather than long network addresses, avoiding complex lookups
in a routing table. The labels identify virtual links (paths) between distant nodes rather than
endpoints. MPLS can encapsulate packets of various network protocols, hence its name
“multiprotocol”. MPLS supports a range of access technologies, including T1/E1, ATM, Frame
Relay, and DSL.

Multiprotocol Label Switching (MPLS) is a protocol-agnostic routing technique designed
to speed up and shape traffic flows across enterprise wide area and service provider networks.
MPLS allows most data packets to be forwarded at Layer 2 — the switching level — rather than
having to be passed up to Layer 3 — the routing level. For this reason, it is often informally
described as operating at Layer 2.5.

MPLS was created in the late 1990s as a more efficient alternative to traditional IP routing,
which requires each router to independently determine a packet’s next hop by inspecting the
packet’s destination IP address before consulting its own routing table. This process consumes
time and hardware resources, potentially resulting in degraded performance for real-time
applications such as voice and video.

In an MPLS network, the very first router to receive a packet determines the packet’s
entire route upfront, the identity of which is quickly conveyed to subsequent routers using a
label in the packet header.

While router hardware has improved exponentially since MPLS was first developed —
somewhat diminishing its significance as a more efficient traffic management technology — it
remains important and popular due to its various other benefits, particularly security, flexibility
and traffic engineering.

a) Components of MPLS

One of the defining features of MPLS is its use of labels — the L in MPLS. Sandwiched
between Layers 2 and 3, a label is a four-byte — 32-bit — identifier that conveys the packet’s
predetermined forwarding path in an MPLS network. Labels can also contain information related
to quality of service (QoS), indicating a packet’s priority level.

MPLS labels consist of four parts:


 Label value: 20 bits
 Experimental: 3 bits
 Bottom of stack: 1 bit
 Time to live: 8 bits
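These four fields pack into a single 32-bit word, in the order listed above. A small Python sketch (illustrative only, not production code) shows the bit layout defined by the MPLS label stack encoding:

```python
def pack_label(value, exp=0, bos=1, ttl=64):
    """Pack the four MPLS label fields into one 32-bit word:
    20-bit label value, 3-bit experimental (EXP) bits,
    1-bit bottom-of-stack flag, 8-bit time-to-live."""
    assert value < 2**20 and exp < 8 and bos < 2 and ttl < 256
    return (value << 12) | (exp << 9) | (bos << 8) | ttl

def unpack_label(word):
    """Recover (value, exp, bos, ttl) from a 32-bit label word."""
    return (word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF)

w = pack_label(100, exp=5, bos=1, ttl=64)
print(unpack_label(w))  # -> (100, 5, 1, 64)
```

The 20-bit value field explains why label lookups are fast: the forwarding table is indexed by a short, link-local integer rather than a variable-length IP prefix.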

The paths, which are called label-switched paths (LSPs), enable service providers to
decide ahead of time the best way for certain types of traffic to flow within a private or public
network.

b) How an MPLS network works?

In an MPLS network, each packet gets labeled on entry into the service provider’s network
by the ingress router, also known as the label edge router (LER). This is also the router that
decides the LSP the packet will take until it reaches its destination address.

All the subsequent label-switching routers (LSRs) perform packet forwarding based only
on those MPLS labels — they never look as far as the IP header. Finally, the egress router
removes the labels and forwards the original IP packet toward its final destination.

When an LSR receives a packet, it performs one or more of the following actions:

 Push: Adds a label. This is typically performed by the ingress router.

 Swap: Replaces a label. This is usually performed by LSRs between the ingress
and egress routers.

 Pop: Removes a label. This is most often done by the egress router.
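The three actions can be modelled as simple operations on a packet's label stack. The sketch below is a toy illustration: the label numbers are hypothetical, and a real LSR would choose them by consulting its forwarding table rather than taking them as arguments:

```python
def lsr_process(stack, action, label=None):
    """Apply one LSR action to a packet's label stack
    (top of stack at index 0)."""
    stack = list(stack)
    if action == "push":    # ingress LER adds a label
        stack.insert(0, label)
    elif action == "swap":  # transit LSR replaces the top label
        stack[0] = label
    elif action == "pop":   # egress LER removes the top label
        stack.pop(0)
    return stack

path = lsr_process([], "push", 16)    # ingress:  [16]
path = lsr_process(path, "swap", 17)  # transit:  [17]
path = lsr_process(path, "pop")       # egress:   []
print(path)  # -> []
```

The sequence mirrors a packet's trip along an LSP: labelled on entry, relabelled per hop, and restored to a plain IP packet on exit.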

Figure 3.8: Basic MPLS Network

c) MPLS: Pros and Cons

The benefits of MPLS are scalability, performance, better bandwidth utilization, reduced
network congestion and a better end-user experience.

MPLS itself does not provide encryption, but it is a virtual private network and, as such, is
partitioned off from the public Internet. Therefore, MPLS is considered a secure transport mode.
And it is not vulnerable to denial of service attacks, which might impact pure-IP-based networks.

Service providers and enterprises can use MPLS to implement QoS by defining LSPs
that can meet specific service-level agreements on traffic latency, jitter, packet loss and downtime.
For example, a network might have three service levels that prioritize different types of traffic —
e.g., one level for voice, one level for time-sensitive traffic and one level for best effort traffic.

On the negative side, MPLS is a service that must be purchased from a carrier and is far
more expensive than sending traffic over the public Internet.

As companies expand into new markets, they may find it difficult to find an MPLS service
provider who can deliver global coverage. Typically, service providers piece together global
coverage through partnerships with other service providers, which can be costly. MPLS was
designed in an era when branch offices sent traffic back to a main headquarters or data center,
not for today’s world where branch office workers want direct access to the cloud.

3.5.9 Edge Router

Figure 3.9(A): Edge Router Deployment



An edge device is a device which provides an entry point into enterprise or service provider
core networks. Examples include routers, routing switches, integrated access devices (IADs),
multiplexers, and a variety of metropolitan area network (MAN) and wide area network (WAN)
access devices.

An edge router is a specialized router located at a network boundary that enables a
campus network to connect to external networks. They are primarily used at two demarcation
points: the wide area network (WAN) and the internet.

Typically, the edge router sends or receives data directly to or from other organizations’
networks, using either static or dynamic routing capabilities. Handoffs between the campus
network and the internet or WAN edge primarily use Ethernet, usually Gigabit Ethernet copper
or Gigabit Ethernet over single or multimode fiber optics.

In some instances, an organization maintains multiple isolated networks of its own and
uses edge routers to link them together instead of using a core router.

Edge routers are often hardware devices, but their functions can also be performed by
software running on a standard x86 server.

At its most essential level, the internet can be viewed as the sum of all the interconnections
of edge routers across all participating organizations, from its periphery — small business and
home broadband routers, for example — all the way to its core, where major telecom provider
networks connect to each other via massive edge routers.

Edge routers play a fundamental role as more services and applications begin to be
managed on an organization’s network edge rather than in its data center or in the cloud.
Services considered suitable for edge router management include wireless capabilities often
built into network edge devices, Dynamic Host Configuration Protocol (DHCP) services and
domain name system (DNS) services, among others.

Figure 3.9(B): Edge Router Deployment

In general, edge devices are normally routers that provide authenticated access (most
commonly PPPoA and PPPoE) to faster, more efficient backbone and core networks. The trend
is to make the edge device smart and the core device(s) “dumb and fast”, so edge routers often
include Quality of Service (QoS) and multi-service functions to manage different types of traffic.
Consequently, core networks are often designed with switches that use protocols such
as Open Shortest Path First (OSPF) or Multiprotocol Label Switching (MPLS) for reliability and
scalability, allowing edge routers to have redundant links to the core network. Links between
core networks are different; for example, Border Gateway Protocol (BGP) routers are often used
for peering exchanges.

a) Types of edge routers and how they work

Edge routers are divided into two different types: subscriber edge routers and label edge
routers.

Subscriber edge routers function in two ways:


 As external Border Gateway Protocol (BGP) routers that connect one autonomous
system (AS) to other ASes, which includes connecting an enterprise network to the
network edge of its internet service provider (ISP).

 As small or midsize business (SMB) or consumer broadband routers connecting a
home network or small office to an ISP’s network edge.

Label edge routers

Label edge routers are used at the edge of Multiprotocol Label Switching (MPLS) networks,
act as gateways between a local network and a WAN or the internet and assign labels to
outbound data transmissions. Edge routers are not internal routers that partition a given AS
network into separate subnets. To connect to external networks, routers use the internet protocol
(IP) and the Open Shortest Path First (OSPF) protocol to route packets efficiently.

b) Difference between edge routers and core routers

In general, edge routers accept inbound customer traffic into the network. These edge
devices characterize and secure IP traffic from other edge routers, as well as core routers.
They provide security for the core.

By comparison, core routers offer packet forwarding between other core and edge routers
and manage traffic to prevent congestion and packet loss. To improve efficiency, core routers
often employ multiplexing.

c) Security considerations

Because edge routers serve as a connection point between external networks, security is
an issue, since enterprises can’t control who might try to access the corporate network.

3.5.10 Customer Edge (CE) and Provider Edge (PE) Routers

A Customer Edge Router (CE router) is a router located on the customer premises that
provides an Ethernet interface between the customer’s LAN and the provider’s core network.
CE routers, P (provider) routers and PE (provider edge) routers are components in an MPLS
(multiprotocol label switching) architecture. Provider routers are located in the core of the provider
or carrier’s network. Provider edge routers sit at the edge of the network. CE routers connect to
PE routers and PE routers connect to other PE routers over P routers.

The customer edge (CE) is the router at the customer premises that is connected to the
provider edge of a service provider IP/MPLS network. CE peers with the Provider Edge (PE)
and exchanges routes with the corresponding VRF inside the PE. The routing protocol used
could be static or dynamic (an interior gateway protocol like OSPF or an exterior gateway
protocol like BGP).

 Customer Edge (CE) Router connects to Provider Edge (PE) Router.

 Customer Edge Router is often owned by the service provider.

A Provider Edge router (PE router) is a router between one network service provider’s
area and areas administered by other network providers. A network provider is usually an Internet
service provider as well (or only that). The term PE router covers equipment capable of a broad
range of routing protocols, notably:

 Border Gateway Protocol (BGP) (PE to PE or PE to CE communication)

 Open Shortest Path First (OSPF) (PE to CE Router communication)

 Multiprotocol Label Switching (MPLS) (PE to P Router communication)

PE routers do not need to be aware of what kind of traffic is coming from the provider’s
network, as opposed to a P Router that functions as a transit within the service provider’s
network. However, some PE routers also do labelling.

3.5.11 Data terminal equipment (DTE)

Data terminal equipment (DTE) is an end instrument that converts user information into
signals or reconverts received signals. These can also be called tail circuits. A DTE device
communicates with the data circuit-terminating equipment (DCE). The DTE/DCE classification
was introduced by IBM. In computer data transmission, DTE (Data Terminal Equipment) is the
RS-232C interface that a computer uses to exchange data with a modem or other serial device.

A DTE is the functional unit of a data station that serves as a data source or a data sink
and provides for the data communication control function to be performed in accordance with
the link protocol.

The data terminal equipment may be a single piece of equipment or an interconnected
subsystem of multiple pieces of equipment that perform all the required functions necessary to
permit users to communicate. A user interacts with the DTE (e.g. through a human-machine

interface), or the DTE may be the user. Usually, the DTE device is the terminal (or a computer
emulating a terminal), and the DCE is a modem or another carrier-owned device.

A general rule is that DCE devices provide the clock signal (internal clocking) and the
DTE device synchronizes on the provided clock (external clocking). D-sub connectors follow
another rule for pin assignment.

 25 pin DTE devices transmit on pin 2 and receive on pin 3.

 25 pin DCE devices transmit on pin 3 and receive on pin 2.

 9 pin DTE devices transmit on pin 3 and receive on pin 2.

 9 pin DCE devices transmit on pin 2 and receive on pin 3.
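These complementary pin assignments are why a straight-through cable works between a DTE and a DCE, while two devices of the same type need a crossover (null-modem) cable. A small Python sketch of the rule (the table encodes the four bullet points above; the function name is invented for illustration):

```python
# RS-232 transmit/receive pin assignments from the rules above.
PINS = {
    ("DTE", 25): {"tx": 2, "rx": 3},
    ("DCE", 25): {"tx": 3, "rx": 2},
    ("DTE", 9):  {"tx": 3, "rx": 2},
    ("DCE", 9):  {"tx": 2, "rx": 3},
}

def straight_cable_works(a, b, connector_pins):
    """A straight-through cable connects pin N to pin N, so one end's
    transmit pin must land on the other end's receive pin."""
    return PINS[(a, connector_pins)]["tx"] == PINS[(b, connector_pins)]["rx"]

print(straight_cable_works("DTE", "DCE", 25))  # -> True: complementary pinouts
print(straight_cable_works("DTE", "DTE", 25))  # -> False: needs a null modem
```

The same logic carries over to the Ethernet case mentioned below: like devices need a crossover, unlike devices a straight cable.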

This term is also generally used in the Telco and Cisco equipment context to designate a
network device, such as terminals, personal computers but also routers and bridges, that’s
unable or configured not to generate clock signals. Hence a direct PC to PC Ethernet connection
can also be called a DTE to DTE communication. This communication is done via an Ethernet
crossover cable as opposed to a PC to DCE (hub, switch, or bridge) communication which is
done via an Ethernet straight cable.

Data Terminal Equipment (DTE) is any equipment that is either a source or destination for
digital data. DTEs do not generally communicate with each other directly; to do so, they need to
use DCE to carry out the communication. DTE does not need to know how data is sent or received;
the communications details are left to the DCE. A typical example of DTE is a computer. Other
common DTE examples include:

 Printers

 File and application servers

 PCs

 Dumb Terminals

 Routers

3.5.12 Data Communications Equipment (DCE)

Data Communications Equipment (DCE) can be classified as equipment that transmits or
receives analogue or digital signals through a network. DCE works at the physical layer of the
OSI model, taking data generated by Data Terminal Equipment (DTE) and converting it into a
signal that can then be transmitted over a communications link. A common DCE example is a
modem, which works as a translator of digital and analogue signals.

Data communications equipment (DCE) refers to computer hardware devices used to
establish, maintain and terminate communication network sessions between a data source and
its destination. DCE is connected to the data terminal equipment (DTE) and data transmission
circuit (DTC) to convert transmission signals. IT vendors may also refer to data communications
equipment as data circuit-terminating equipment or data carrier equipment.

A modem is a typical example of data communications equipment. In general, data
communications equipment is used to perform signal exchange, coding and line clocking tasks
as a part of intermediate equipment or DTE.

Some additional interfacing electronic equipment may also be needed to pair the DTE
with a transmission channel or to connect a circuit to the DTE. DCE and DTE are often confused
with each other, but these are two different device types that are interlinked with an RS-232
serial line.

DCE may also be responsible for providing timing over a serial link. In a complex network
which uses directly connected routers to provide serial links, one serial interface of each
connection must be configured with a clock rate to provide synchronization. DCE is sometimes
said to stand for Data Circuit-terminating Equipment.

Other common DCE examples include:

 ISDN adapters

 Satellites (including base stations)

 Microwave stations

 NIC (network interface cards)

3.5.13 Clock Speed

In a computer, clock speed refers to the number of pulses per second generated by an
oscillator that sets the tempo for the processor. Clock speed is usually measured in MHz
(megahertz, or millions of pulses per second) or GHz (gigahertz, or billions of pulses per second).
Modern personal computers run at clock speeds of several gigahertz. The clock speed is
determined by a quartz-crystal circuit, similar to those used in radio communications equipment.

Also called clock rate, it is the speed at which a microprocessor executes instructions.
Every computer contains an internal clock that regulates the rate at which instructions are
executed and synchronizes all the various computer components. The CPU requires a fixed
number of clock ticks (or clock cycles) to execute each instruction. The faster the clock, the
more instructions the CPU can execute per second. Clock speeds are expressed in megahertz
(MHz) or gigahertz (GHz).

The internal architecture of a CPU has as much to do with a CPU’s performance as the
clock speed, so two CPUs with the same clock speed will not necessarily perform equally.
Whereas an Intel 80286 microprocessor requires 20 cycles to multiply two numbers, an Intel
80486 or later processor can perform the same calculation in a single clock tick. (Note that
clock tick here refers to one cycle of the processor’s clock.) These newer
processors, therefore, would be 20 times faster than the older processors even if their clock
speeds were the same. In addition, some microprocessors are superscalar, which means that
they can execute more than one instruction per clock cycle.

For decades, computer clock speed roughly doubled every few years. The Intel 8088, common in
computers around the year 1980, ran at 4.77 MHz. The 1 GHz mark was passed in the year
2000.

Clock speed is one measure of computer “power,” but it is not always directly proportional
to the performance level. If you double the speed of the clock, leaving all other hardware
unchanged, you will not necessarily double the processing speed. The type of microprocessor,
the bus architecture, and the nature of the instruction set all make a difference. In some
applications, the amount of random access memory (RAM) is important, too.

Some processors execute only one instruction per clock pulse. More advanced processors
can perform more than one instruction per clock pulse. The latter type of processor will work
faster at a given clock speed than the former type. Similarly, a computer with a 32-bit bus will
work faster at a given clock speed than a computer with a 16-bit bus. For these reasons, there
is no simplistic, universal relation among clock speed, “bus speed,” and millions of instructions
per second (MIPS).
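The relationship between clock rate and instruction throughput can be illustrated with a toy calculation; the clock rates and per-cycle figures below are hypothetical, chosen only to show why equal clocks do not imply equal performance:

```python
def mips(clock_mhz, instructions_per_cycle):
    """Rough instruction throughput in millions of instructions per
    second: clock rate times average instructions per cycle (IPC)."""
    return clock_mhz * instructions_per_cycle

# Two hypothetical CPUs at the same 100 MHz clock:
print(mips(100, 0.5))  # -> 50.0 MIPS: an average of 2 cycles per instruction
print(mips(100, 2.0))  # -> 200.0 MIPS: superscalar, 2 instructions per cycle
```

At identical clock speeds, the second (superscalar) design executes four times as many instructions, which is the point the paragraph above makes about architecture mattering as much as frequency.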

Excessive clock speed can be detrimental to the operation of a computer. As the clock
speed in a computer rises without upgrades in any of the other components, a point will be
reached beyond which a further increase in frequency will render the processor unstable. Some
computer users deliberately increase the clock speed, hoping this alone will result in a proportional
improvement in performance, and are disappointed when things don’t work out that way.

Summary

In this chapter, you learnt:

 Subnetting is basically just a way of splitting a TCP/IP network into smaller, more
manageable pieces. When you subnet your network, you are splitting the network
into separate but interconnected networks. The main purpose of subnetting is to
help relieve network congestion.

 Subnetting allows you to create multiple logical networks that exist within a single
Class A, B, or C network. If you do not subnet, you are only able to use one network
from your Class A, B, or C network, which is unrealistic.

 Classless inter-domain routing (CIDR) is a set of Internet protocol (IP) standards


that is used to create unique identifiers for networks and individual devices. The IP
addresses allow particular information packets to be sent to specific computers.

 CIDR is an IP addressing scheme that improves the allocation of IP addresses.


CIDR, sometimes called supernetting, is a way to allow more flexible allocation of
Internet Protocol (IP) addresses than was possible with the original system of IP
address classes.

 CIDR reduced the problem of wasted address space by providing a new and more
flexible way to specify network addresses in routers. CIDR lets one routing table
entry represent an aggregation of networks that exist in the forward path that don’t
need to be specified on that particular gateway. CIDR provides numerous advantages
over the “classful” addressing scheme, whether or not subnetting is used.

 Wildcard masks are used to specify a range of network addresses. They are
commonly used with routing protocols (like OSPF) and access lists.

 Frame relay is a packet-switching telecommunication service designed for cost-


efficient data transmission for intermittent traffic between local area networks (LANs)
and between endpoints in wide area networks (WANs).

 Label switching is a technique of network relaying to overcome the problems


perceived by traditional IP-table switching (also known as traditional layer 3 hop-
by-hop routing).

 An edge router is a specialized router located at a network boundary that enables a


campus network to connect to external networks. They are primarily used at two
demarcation points: the wide area network (WAN) and the internet.

Review Questions
 What is subnetting and what are the advantages of subnetting?

 Explain CIDR.

 Why is frame relay useful?

 What are DTE and DCE?

 How are edge routers used in a network? What are its advantages?

References
CCNA Routing and Switching 200-125 Official Cert Guide Library

https://searchsdn.techtarget.com

https://www.quora.com

https://en.wikipedia.org/

https://whatis.techtarget.com

UNIT - 4
VIRTUAL LANs
Learning Objectives
 Learners will be able to understand the concept of Virtual LAN and also configure a
VLAN on their own.

 In this chapter we also explain the concept of VLAN Trunking Protocol and its
advantages as well as disadvantages.

 We will also learn about Collision and Broadcast domain.

Structure
4.1 Introduction

4.2 Uses of VLAN

4.3 Designing of VLAN

4.4 Differences between Physical and Virtual VLAN

4.5 Advantages of VLANs

4.6 Types of VLAN Connections

4.7 Access Links

4.8 Trunk Links

4.9 Switchport Modes

4.10 VLAN Trunking Protocol (VTP)

4.11 Inter VLAN Communications

4.12 Collision and Broadcast Domain

4.1 Introduction
A virtual LAN (VLAN) is any broadcast domain that is partitioned and isolated in a computer
network at the data link layer (OSI layer 2). LAN is the abbreviation for local area network and
in this context virtual refers to a physical object recreated and altered by additional logic. VLANs
work by applying tags to network packets and handling these tags in networking systems –

creating the appearance and functionality of network traffic that is physically on a single network
but acts as if it is split between separate networks. In this way, VLANs can keep network
applications separate despite being connected to the same physical network, and without
requiring multiple sets of cabling and networking devices to be deployed.

VLANs allow network administrators to group hosts together even if the hosts are not
directly connected to the same network switch. Because VLAN membership can be configured
through software, this can greatly simplify network design and deployment. Without VLANs,
grouping hosts according to their resource needs necessitates the labor of relocating nodes or
rewiring data links. VLANs allow networks and devices that must be kept separate to share the
same physical cabling without interacting, improving simplicity, security, traffic management, or
economy. For example, a VLAN could be used to separate traffic within a business by user group
(e.g., ordinary users versus network administrators), or by type of traffic, so that users or low-
priority traffic cannot directly affect the rest of the network’s functioning. Many Internet hosting
services use VLANs to separate their customers’ private zones from each other, allowing each
customer’s servers to be grouped together in a single network segment while being located
anywhere in their data center. Some precautions are needed to prevent traffic “escaping” from
a given VLAN, an exploit known as VLAN hopping.

To subdivide a network into VLANs, one configures network equipment. Simpler equipment
can partition only per physical port (if at all), in which case each VLAN is connected with a
dedicated network cable. More sophisticated devices can mark frames through VLAN tagging,
so that a single interconnect (trunk) may be used to transport data for multiple VLANs. Since
VLANs share bandwidth, a VLAN trunk can use link aggregation, quality-of-service prioritization,
or both to route data efficiently.
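The VLAN tagging mentioned above is standardized as IEEE 802.1Q, which inserts a four-byte tag into the Ethernet frame header. A Python sketch of the tag layout (illustrative only; the function name is invented here):

```python
import struct

def dot1q_tag(vlan_id, priority=0, dei=0):
    """Build the 4-byte IEEE 802.1Q tag inserted into an Ethernet frame:
    16-bit TPID (0x8100), then a 16-bit TCI holding a 3-bit priority,
    1-bit drop-eligible indicator (DEI), and 12-bit VLAN ID."""
    assert 0 < vlan_id < 4095 and priority < 8 and dei < 2
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

print(dot1q_tag(10).hex())  # -> '8100000a'
```

The 12-bit VLAN ID field is why a trunk can carry up to 4094 distinct VLANs (IDs 0 and 4095 are reserved), and the priority bits are one place where quality-of-service marking can ride along with the tag.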

4.2 Uses of VLAN


VLANs address issues such as scalability, security, and network management. Network
architects set up VLANs to provide network segmentation. Routers between VLANs filter
broadcast traffic, enhance network security, perform address summarization, and mitigate
network congestion.

In a network utilizing broadcasts for service discovery, address assignment and resolution
and other services, as the number of peers on a network grows, the frequency of broadcasts
also increases. VLANs can help manage broadcast traffic by forming multiple broadcast domains.

Breaking up a large network into smaller independent segments reduces the amount of broadcast
traffic each network device and network segment has to bear. Switches may not bridge network
traffic between VLANs, as doing so would violate the integrity of the VLAN broadcast domain.

VLANs can also help create multiple layer 3 networks on a single physical infrastructure.
VLANs are data link layer (OSI layer 2) constructs, analogous to Internet Protocol (IP) subnets,
which are network layer (OSI layer 3) constructs. In an environment employing VLANs, a one-
to-one relationship often exists between VLANs and IP subnets, although it is possible to have
multiple subnets on one VLAN.

Without VLAN capability, users are assigned to networks based on geography and are
limited by physical topologies and distances. VLANs can logically group networks to decouple
the users’ network location from their physical location. By using VLANs, one can control traffic
patterns and react quickly to employee or equipment relocations. VLANs provide the flexibility
to adapt to changes in network requirements and allow for simplified administration.

VLANs can be used to partition a local network into several distinctive segments, for
instance:

 Production

 Voice over IP

 Network management

 Storage area network (SAN)

 Guest Internet access

 Demilitarized zone (DMZ)

A common infrastructure shared across VLAN trunks can provide a measure of security
with great flexibility for a comparatively low cost. Quality of service schemes can optimize traffic
on trunk links for real-time (e.g. VoIP) or low-latency requirements (e.g. SAN). However, VLANs
as a security solution should be implemented with great care as they can be defeated unless
implemented carefully.

In cloud computing VLANs, IP addresses, and MAC addresses in the cloud are resources
that end users can manage. To help mitigate security issues, placing cloud-based virtual machines
on VLANs may be preferable to placing them directly on the Internet.

Figure 4.1: VLAN

4.3 Designing a VLAN


Early network designers often segmented physical LANs with the aim of reducing the size
of the Ethernet collision domain—thus improving performance. When Ethernet switches made
this a non-issue (because each switch port is a collision domain), attention turned to reducing
the size of the broadcast domain at the MAC layer. VLANs were first employed to separate
several broadcast domains across one physical medium.

A VLAN can also serve to restrict access to network resources without regard to physical
topology of the network, although the strength of this method remains debatable as VLAN
hopping is a means of bypassing such security measures if not prevented. VLAN hopping can
be mitigated with proper switchport configuration.
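As a rough sketch (interface number and VLAN ID are hypothetical), an access port on a Cisco IOS
switch can be locked down so it never negotiates a trunk, which is the usual first step against
VLAN hopping:

```
! Harden an access port: force access mode and suppress DTP negotiation
Switch(config)# interface fastethernet 0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport nonegotiate
Switch(config-if)# switchport access vlan 10
```

Assigning a non-default VLAN (rather than leaving the port in VLAN 1) is also commonly
recommended as part of such hardening.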

VLANs operate at Layer 2 (the data link layer) of the OSI model. Administrators often
configure a VLAN to map directly to an IP network, or subnet, which gives the appearance of
involving Layer 3 (the network layer). In the context of VLANs, the term “trunk” denotes a
network link carrying multiple VLANs, which are identified by labels (or “tags”) inserted into their
packets. Such trunks must run between “tagged ports” of VLAN-aware devices, so they are
often switch-to-switch or switch-to-router links rather than links to hosts. (Note that the term
‘trunk’ is also used for what Cisco calls “channels”: Link Aggregation or Port Trunking). A router
(Layer 3 device) serves as the backbone for network traffic going across different VLANs.

A basic switch not configured for VLANs either has VLAN functionality disabled or has it
permanently enabled with a default VLAN that contains all ports on the device as members. The
default VLAN typically has the ID “1”. Every device connected to one of its ports can send packets
to any of the others. Separating ports into VLAN groups isolates their traffic, much as if each
group were connected to its own distinct switch.

It is only when the VLAN port group is to extend to another device that tagging is used.
Since communications between ports on two different switches travel via the uplink ports of
each switch involved, every VLAN containing such ports must also contain the uplink port of
each switch involved, and traffic through these ports must be tagged.

Management of the switch requires that the administrative functions be associated with
one or more of the configured VLANs. If the default VLAN were deleted or renumbered without
first moving the management connection to a different VLAN, it is possible for the administrator
to be locked out of the switch configuration, normally requiring physical access to the switch to
regain management by either a forced clearing of the device configuration (possibly to the
factory default), or by connecting through a console port or similar means of direct management.

Switches typically have no built-in method to indicate VLAN port members to someone
working in a wiring closet. It is necessary for a technician to either have administrative access
to the device to view its configuration, or for VLAN port assignment charts or diagrams to be
kept next to the switches in each wiring closet. These charts must be manually updated by the
technical staff whenever port membership changes are made to the VLANs.

Generally, VLANs within the same organization will be assigned different non-overlapping
network address ranges. This is not a requirement of VLANs. There is no issue with separate
VLANs using identical overlapping address ranges (e.g. two VLANs each use the private network
192.168.0.0/16). However, it is not possible to route data between two networks with overlapping
addresses without delicate IP remapping, so if the goal of VLANs is segmentation of a larger
overall organizational network, non-overlapping addresses must be used in each separate VLAN.
Network technologies with VLAN capabilities include:
Asynchronous Transfer Mode (ATM)
Fiber Distributed Data Interface (FDDI)
Ethernet
HiperSockets
InfiniBand

4.4 Differences between Physical and Virtual LANs


It is important to understand that a VLAN does not create new devices or attempt to
virtually represent new devices. A lot of attention is currently focused on virtualization and the
abstraction of services; however, for the purposes of this discussion, we will ignore those
technologies and how they operate.

The purpose of a VLAN is simple: It removes the limitation of physically switched LANs
with all devices automatically connected to each other. With a VLAN, it is possible to have hosts
that are connected together on the same physical LAN but not allowed to communicate directly.
This restriction gives us the ability to organize a network without requiring that the physical LAN
mirror the logical connection requirements of any specific organization.

To make this concept a bit clearer, let’s use the analogy of a telephone system. Imagine
that a company has 500 employees, each with his or her own telephone and dedicated phone
number. If the telephones are connected like a traditional residential phone system, anyone
has the ability to call any direct phone number within the company, regardless of whether that
employee needs to receive direct business phone calls. This arrangement presents a number
of problems, from potential wrong number calls to prank or malicious calls that are intended to
reduce the organization’s productivity.

Now suppose a more efficient and secure option is offered, allowing the business to
install and configure a separate internal phone system. This phone system forces external calls
to go through a separate switchboard or operator—in a more modern phone network, an
Integrated Voice Response (IVR) system. This new phone system lets internal users connect
directly to each other via extensions (typically using shorter numbers), while it limits what the
internal user’s phones can do and where/who the user can call. This internal phone system
allows the organization to virtually separate the internal phones. This is essentially what a
VLAN does on a network. To take this analogy into the networking world, consider the network
shown in Figure 4.2.

Suppose that hosts A and B are together in one department, and hosts C and D are
together in another department. With physical LANs, they could be connected in only two ways:
either all of the devices are connected together on the same LAN (hoping that the users of the
other department hosts will not attempt to communicate), or each of the department hosts
could be connected together on separate physical switches. Neither of these is a good solution.
The first option opens up many potential security holes, and the second option would become
expensive very quickly.

Figure 4.2: Basic switched network

To solve this sort of problem, the concept of a VLAN was developed. With a VLAN, each
port on a switch can be configured into a specific VLAN, and then the switch will only allow
devices that are configured into the same VLAN to communicate. Using the network in Figure
4.2, if A and B were grouped together and separated from the C and D group, you could place
A and B into VLAN 10 and C and D into VLAN 20. This way, their traffic would be kept isolated
on the switch. In this configuration, the traffic between groups would be prevented at Layer 2
because of the difference in assigned VLANs.
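On a Cisco IOS switch, the scenario above could be configured roughly as follows (port numbers
and VLAN names are assumed for illustration):

```
! Create the two VLANs
Switch(config)# vlan 10
Switch(config-vlan)# name DEPT-AB
Switch(config-vlan)# vlan 20
Switch(config-vlan)# name DEPT-CD
Switch(config-vlan)# exit
! Place hosts A and B (ports 0/1 and 0/2) in VLAN 10
Switch(config)# interface range fastethernet 0/1 - 2
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 10
! Place hosts C and D (ports 0/3 and 0/4) in VLAN 20
Switch(config)# interface range fastethernet 0/3 - 4
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 20
```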

4.5 Advantages of VLANs


VLANs provide a number of advantages, such as ease of administration, confinement of
broadcast domains, reduced broadcast traffic, and enforcement of security policies. Some other
advantages are:

 VLANs enable logical grouping of end-stations that are physically dispersed on a
network.

 When users on a VLAN move to a new physical location but continue to perform the
same job function, the end-stations of those users do not need to be reconfigured.
Similarly, if users change their job functions, they need not physically move: changing
the VLAN membership of the end-stations to that of the new team makes the users’
end-stations local to the resources of the new team.

 VLANs reduce the need to have routers deployed on a network to contain broadcast
traffic.

 Flooding of a packet is limited to the switch ports that belong to a VLAN.

 Confinement of broadcast domains on a network significantly reduces traffic. By
confining the broadcast domains, end-stations on a VLAN are prevented from
listening to or receiving broadcasts not intended for them. Moreover, if a router is
not connected between the VLANs, the end-stations of a VLAN cannot communicate
with the end-stations of the other VLANs.

4.6 Types of VLAN Connection


There are two types of VLAN connection links: access links and trunk links. The differences
between an access link and a trunk link are given below.

 Access link: An access link is a link that is part of only one VLAN, and normally
access links are for end devices. Any device attached to an access link is unaware
of a VLAN membership. An access-link connection can understand only standard
Ethernet frames. Switches remove any VLAN information from the frame before it
is sent to an access-link device.

 Trunk link: A trunk link can carry the traffic of multiple VLANs, and normally a trunk
link is used to connect switches to other switches or to routers. To identify the VLAN
that a frame belongs to, Cisco switches support different identification techniques
(VLAN frame tagging). Our focus for the CCNA Routing and Switching examination is
on IEEE 802.1Q. A trunk link is not assigned to a specific VLAN; traffic for many
VLANs can be transported between switches using a single physical trunk link.

Each port on a Cisco switch can be configured as either an access or a trunk port. The
type of a port specifies how the switch determines the incoming frame’s VLAN. Here is a
description of these two port types:

 Access port – a port that can be assigned to a single VLAN. The frames that arrive
on an access port are assumed to be part of the access VLAN. This port type is
configured on switch ports that are connected to devices with a normal network
card, for example a host on a network.

 Trunk port – a port that is connected to another switch. This port type can carry
traffic of multiple VLANs, thus allowing you to extend VLANs across your entire
network. Frames are tagged by assigning a VLAN ID to each frame as they traverse
between switches.

The following picture illustrates the difference between access and trunk ports:

Figure 4.3: Access and Link Ports

As you can see from the picture above, the ports on the switches that connect to hosts
are configured as access ports. The ports between switches are configured as trunk ports.

4.7 Access links


An access link is part of only one VLAN, which is referred to as the native VLAN of the port.
A device connected to an access link is unaware of any VLAN membership; it has no view of the
rest of the physical network and simply presumes that it is part of a single broadcast domain.
All VLAN information is removed by the switch from the frame before it reaches an access-link
device. Access-link devices cannot communicate with devices outside their VLAN; such
communication is possible only when the packet is routed through a router.

Access Links are the most common type of links on any VLAN switch. All network hosts
connect to the switch’s Access Links in order to gain access to the local network. These links
are your ordinary ports found on every switch, but configured in a special way, so you are able
to plug a computer into them and access your network.

Figure 4.4: Cisco Catalyst 3550 series switch (Access Links (ports) marked in Green)

We must note that the ‘Access Link’ term describes a configured port - this means that
the ports above can also be configured as the second type of VLAN link - Trunk Links. What we
are showing here is what is usually configured as an Access Link port in 95% of all switches.
Depending on your needs, you might need to configure the first port (top left corner) as a
Trunk Link, in which case it is no longer called an Access Link port, but a Trunk Link!

When configuring ports on a switch to act as Access Links, we usually configure only one
VLAN per port, that is, the VLAN our device will be allowed to access. If you recall the diagram
below which was also present during the introduction of the VLAN concept, you’ll see that each
PC is assigned to a specific port:

Figure 4.5: Access Link Port Assigned on VLAN

In this case, each of the 6 ports used has been configured for a specific VLAN. Ports 1,
2 and 3 have been assigned to VLAN 1, while ports 4, 5 and 6 have been assigned to VLAN 2.

In the above diagram, this translates to allowing only VLAN 1 traffic in and out of ports 1,
2 and 3, while ports 4, 5 and 6 will carry VLAN 2 traffic. As you would remember, these two
VLANs do not exchange any traffic between each other, unless we are using a layer 3 switch (or
router) and we have explicitly configured the switch to route traffic between the two VLANs.

It is equally important to note at this point that any device connected to an Access Link
(port) is totally unaware of the VLAN assigned to the port. The device simply assumes it is part
of a single broadcast domain, just as it happens with any normal switch. During data transfers,
any VLAN information or data from other VLANs is removed so the recipient has no information
about them. The following diagram illustrates this to help you get the picture:

Figure 4.6: Access Link Port do not contain VLAN info

As shown, all packets entering or exiting the port are standard Ethernet II type
packets which are understood by the network device connected to the port. There is nothing
special about these packets, other than the fact that they belong only to the VLAN the port is
configured for. If, for example, we configured the port shown above for VLAN 1, then any
packets entering/exiting this port would be for that VLAN only. In addition, if we decided to use
a logical network such as 192.168.0.0 with a default subnet mask of 255.255.255.0 (/24), then
all network devices connecting to ports assigned to VLAN 1 must be configured with an
appropriate network address so they may communicate with all other hosts in the same VLAN.

4.8 Trunk links


The term trunk is borrowed from the telephone system, where trunks carry a number of
conversations at once. Similarly, trunk links can carry the traffic of multiple VLANs. A trunk
link is a 100 Mbps or 1000 Mbps link between a switch and a router, between two switches, or
between a server and a switch, and at one time it can carry the traffic of as many as 1 to 1005
VLANs. Trunking cannot run on 10 Mbps links. Trunking permits one port to be part of many
VLANs simultaneously, which can be really beneficial. For example, you can set up a server in
two broadcast domains at the same time, so users in either domain can log in and access it
without crossing a layer-3 device (router). There is one more advantage to trunking when you
are attaching switches: a trunk link can carry some or all VLAN information across the link,
whereas if the link between switches is not trunked then only VLAN 1 information will be
carried across the link by default. For this reason, all VLANs are carried on a trunked link
unless an administrator manually removes them. In the figure you can see the utilization of
various links in a switched network. It is the trunk link between the two switches that makes
communication possible for all VLANs. By contrast, an access link between switches permits
the use of a single VLAN. Notice that the hosts make use of access links to connect to the
switch, which means that they can communicate in a single VLAN only.

Figure 4.7: Trunk Link in VLAN

A Trunk Link, or ‘Trunk’, is a port configured to carry packets for any VLAN. These ports
are usually found in connections between switches. Such links must be able to carry packets
from all available VLANs because VLANs span multiple switches.

The diagram below shows multiple switches connected throughout a network and the
Trunk Links are marked in purple colour to help you identify them:

Figure 4.8: Trunk Link between switches in VLAN

As you can see in our diagram, our switches connect to the network backbone via the
Trunk Links. This allows all VLANs created in our network to propagate throughout the whole
network. Now in the unlikely event of Trunk Link failure on one of our switches, the devices
connected to that switch’s ports would be isolated from the rest of the network, allowing only
ports on that switch, belonging to the same VLAN, to communicate with each other.

So now that we have an idea of what Trunk Links are and their purpose, let’s take a look
at an actual switch to identify a possible Trunk Link:

Figure 4.9: VLAN interfaces- Trunk Link

As we noted with the explanation of Access Link ports, the term ‘Trunk Link’ describes a
configured port. In this case, the Gigabit ports are usually configured as Trunk Links, connecting
the switch to the network backbone at the speed of 1 Gigabit, while the Access Link ports
connect at 100 Mbps. In addition, we should note that for a port or link to operate as a Trunk
Link, it is imperative that it runs at speeds of 100 Mbps or greater. A port running at 10 Mbps
cannot operate as a Trunk Link; this is logical because a Trunk Link is always used to connect
to the network backbone, which must operate at speeds greater than most Access Links.

4.9 Switchport Modes


There are two types of switchports: trunk and access. The default switchport mode for
newer Cisco switch Ethernet interfaces is dynamic auto. Note that if two Cisco switches are
both left at the common default setting of dynamic auto, a trunk will never form; switchport
mode dynamic desirable makes the interface actively attempt to convert the link to a trunk link.

An access port is typically used for a switch-to-host connection, and such a port is assigned
to only one VLAN. This can be done with the following commands:

 # interface fastethernet [interface number]

 # switchport mode access

 # switchport access vlan [vlan number]

A trunk is typically a link between two switches or a switch and a router. This allows
multiple VLANs to traverse the interface/link. This can be configured in a few different ways but
will achieve the same result.

 # interface fastethernet [interface number]

 # switchport mode [select mode]

The mode you select could be any of the below:

 Dynamic auto: The ‘dynamic auto’ will configure the port to accept incoming
negotiation and will accept becoming either a trunk or an access port.

 Dynamic desirable: The ‘dynamic desirable’ will configure the port to try and become
a trunk link by sending a request to the other end of the wire requesting to become
a trunk port.

 Trunk: The ‘trunk’ command will force the port to become a trunk.
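Once ports have been configured, the resulting modes can be checked with standard show
commands such as the following (exact output varies by platform and IOS version):

```
Switch# show interfaces fastethernet 0/1 switchport
Switch# show interfaces trunk
Switch# show vlan brief
```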

4.10 VLAN Trunking Protocol (VTP)


4.10.1 What is VTP?

VLAN Trunking Protocol (VTP) is a Cisco proprietary protocol that propagates the definition
of Virtual Local Area Networks (VLAN) on the whole local area network. To do this, VTP carries
VLAN information to all the switches in a VTP domain. VTP advertisements can be sent
over 802.1Q and ISL trunks. VTP is available on most of the Cisco Catalyst Family products.
There are three versions of VTP, namely versions 1, 2, and 3. The purpose of VTP is
to provide a way to manage Cisco switches as a single group for VLAN configuration purposes.
VTP is a protocol used to distribute and synchronize identifying information about VLANs
configured throughout a switched network. Configuration changes made to the VLANs on a
single VTP server switch are propagated across Trunk links to all trunk-connected switches in
the network. Using VTP, each Catalyst Family Switch advertises the following on its trunk ports:

 Management domain

 Configuration revision number

 Known VLANs and their specific parameters

On Cisco devices, VTP (VLAN Trunking Protocol) maintains VLAN configuration consistency
across a single Layer 2 network. VTP uses Layer 2 frames to manage the addition, deletion,
and renaming of VLANs on switches throughout the VTP domain. VTP is responsible for
synchronizing VLAN information within a VTP domain, which reduces the need to configure the
same VLAN information on each switch and thereby minimizes the possibility of configuration
inconsistencies that arise when changes are made.

By default, all switches are configured to be VTP servers. This configuration is suitable
for small-scale networks in which the size of the VLAN information is small and the information
is easily stored in NVRAM on all switches. In a large network, a point is reached where the
NVRAM storage required becomes wasteful because it is duplicated on every switch. At this
point, the network administrator should choose a few well-equipped switches and keep them
as VTP servers; everything else that participates in VTP can be turned into a client. The number
of VTP servers should be chosen so as to provide the degree of redundancy desired in the
network.

Notes:
 If a switch is configured as a VTP server without a VTP domain name, you cannot
configure a VLAN on the switch. This applies only to CatOS; on a switch that runs
IOS, you can configure VLANs without having a VTP domain name.

 If a new Catalyst is attached at the border of two VTP domains, the new Catalyst
keeps the domain name of the first switch that sends it a summary advertisement.
The only way to attach this switch to another VTP domain is to manually set a
different VTP domain name.

 Dynamic Trunking Protocol (DTP) sends the VTP domain name in a DTP packet.
Therefore, if you have two ends of a link that belong to different VTP domains, the
trunk does not come up if you use DTP. In this special case, you must configure the
trunk mode as on or nonegotiate, on both sides, in order to allow the trunk to come
up without DTP negotiation agreement.

 If the domain has a single VTP server and it crashes, the best and easiest way to
restore the operation is to change any of the VTP clients in that domain to a VTP
server. The configuration revision is still the same in the rest of the clients, even if
the server crashes. Therefore, VTP works properly in the domain.
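For the DTP case described in the notes, both ends of the link between the two VTP domains
could be forced up without negotiation, for example (interface name is illustrative; apply on
both switches):

```
Switch(config)# interface gigabitethernet 0/2
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport nonegotiate
```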

4.10.2 VTP Modes

You can configure a switch to operate in any one of these VTP modes:

 Server: In VTP server mode, you can create, modify, and delete VLANs and specify
other configuration parameters, such as VTP version and VTP pruning, for the
entire VTP domain. VTP servers advertise their VLAN configuration to other switches
in the same VTP domain and synchronize their VLAN configuration with other
switches based on advertisements received over trunk links. VTP server is the
default mode.

 Client: VTP clients behave the same way as VTP servers, but you cannot create,
change, or delete VLANs on a VTP client.

 Transparent: VTP transparent switches do not participate in VTP. A VTP transparent
switch does not advertise its VLAN configuration and does not synchronize its VLAN
configuration based on received advertisements, but transparent switches do forward
VTP advertisements that they receive out their trunk ports in VTP Version 2.

 Off (configurable only in CatOS switches): In the three described modes, VTP
advertisements are received and transmitted as soon as the switch enters the
management domain state. In the VTP off mode, switches behave the same as in
VTP transparent mode with the exception that VTP advertisements are not forwarded.

4.10.3 VTP V2

VTP V2 is not much different from VTP V1. The major difference is that VTP V2 introduces
support for Token Ring VLANs. If you use Token Ring VLANs, you must enable VTP V2;
otherwise, there is no reason to use it. Changing the VTP version from 1 to 2 will not cause a
switch to reload.

4.10.4 VTP: Advantages and Disadvantages

VTP provides the following benefits:

 VLAN configuration consistency across the layer 2 network

 Dynamic distribution of added VLANs across the network

 Plug-and-play configuration when adding new VLANs



The disadvantage is that when a new switch is added to the network, it is configured by
default with no VTP domain name or password, but in VTP server mode. If no VTP domain
name has been configured, it assumes the one from the first VTP packet it receives. Since a
new switch has a VTP configuration revision of 0, it will accept any revision number as newer
and overwrite its VLAN information if the VTP passwords match. However, if you were to
accidentally connect a switch to the network with the correct VTP domain name and password
but a higher VTP revision number than the network currently has (such as a switch that had
been removed from the network for maintenance and returned with its VLAN information
deleted), the entire VTP domain would adopt the VLAN configuration of the new switch. This is
likely to cause loss of VLAN information on all switches in the VTP domain, leading to failures
on the network. Since Cisco switches maintain VTP configuration information separately from
the normal configuration, and since this particular issue occurs so frequently, it has become
known colloquially as the “VTP Bomb”.

Before creating VLANs on the switch that will propagate via VTP, a VTP domain must first
be set up. A VTP domain for a network is a set of all contiguously trunked switches with the
matching VTP settings (domain name, password and VTP version). All switches in the same
VTP domain share their VLAN information with each other, and a switch can participate in only
one VTP management domain. Switches in different domains do not share VTP information.
Non-matching VTP settings might result in issues in negotiating VLAN trunks, port-channels or
Virtual Port Channels.

4.11 Inter VLAN Communications


VLANs divide broadcast domains in a LAN environment. Whenever hosts in one VLAN
need to communicate with hosts in another VLAN, the traffic must be routed between them.
This is known as inter-VLAN routing. On Catalyst switches it is accomplished by the creation of
Layer 3 interfaces, called switch virtual interfaces (SVIs).
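On a multilayer Catalyst switch, inter-VLAN routing via SVIs can be sketched as follows (the
IP addresses are illustrative, and VLANs 10 and 20 are assumed to exist already):

```
Switch(config)# ip routing
! Create one SVI per VLAN to act as that VLAN's default gateway
Switch(config)# interface vlan 10
Switch(config-if)# ip address 192.168.10.1 255.255.255.0
Switch(config-if)# no shutdown
Switch(config-if)# interface vlan 20
Switch(config-if)# ip address 192.168.20.1 255.255.255.0
Switch(config-if)# no shutdown
```

Hosts in each VLAN would then use the matching SVI address as their default gateway.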

Two separate VLANs must communicate through a layer-3 device, like a router. Devices
on a VLAN communicate with each other using layer-2. Layer-3 must be used to communicate
between separate layer-2 domains. Assuming the most common case (layer-2 is Ethernet and
layer-3 is IP), when a host on a VLAN wants to communicate with another host on the same
VLAN, it discovers the other host’s layer-2 (e.g. MAC) address with something like ARP, and it
sends the frame to that MAC address.

When a host on one VLAN wants to send something to a host on another VLAN, it must
use a layer-3 (e.g. IP) address. The host will use layer-2 to send the frames to its defined
gateway (router). The router will strip off the layer-2 frame and inspect the layer-3 packet for the
destination layer-3 address. The router will then look up the next hop for the layer-3 address. It
will then create a new layer-2 frame for the layer-3 packet based on the layer-2 LAN on the
interface where it needs to send the packet for the next hop. Other routers which may be in the
path to the end LAN will repeat this process until the frame is placed on the final VLAN, where
the receiving host gets the frame.

You should study the OSI model and learn how it works. Just remember that it is a
model, and some things in the real world do not necessarily work exactly as the model would
predict, but it will give you a general understanding of how data travel from an application on
one host to an application on another host.

4.12 Collision and Broadcast domain


4.12.1 Collision domain

A collision domain is, as the name implies, the part of a network where packet collisions
can occur. A collision occurs when two devices send a packet at the same time on the shared
network segment. The packets collide and both devices must send the packets again, which
reduces network efficiency. Collisions often occur in a hub environment, because each port on a
hub is in the same collision domain. By contrast, each port on a bridge, a switch or a router is in
a separate collision domain. The following example illustrates 6 collision domains:

Figure 4.10: Collision Domain



Note: Remember, each port on a hub is in the same collision domain. Each port on a
bridge, a switch or router is in a separate collision domain.

The term collision describes an event that usually happens on an Ethernet network
when we use a “Shared Media” to connect the devices. A “Shared Media” is a type of
connecting media which is used to connect different network devices, where every device
shares the same media. Examples: 1) Ethernet Hubs, 2) Bus Topology.

In a “Shared Media” there are no separate channels for sending and receiving the data
signals, but only one channel to send and receive the data signals. We call the media shared
media when the devices are connected together using a bus topology, or by using an Ethernet
hub. Both are half-duplex, meaning that a device can either send or receive data signals at a
given time; sending and receiving data signals at the same time is not supported.

Collisions will happen in an Ethernet network when two devices simultaneously try to
send data on the shared media, since shared media is half-duplex and sending and receiving
at the same time is not supported. Refer to CSMA/CD to learn how Ethernet deals with
collisions. Collisions are a normal part of life in an Ethernet network when Ethernet operates in
half-duplex and under most circumstances should not be considered a problem.

A Collision Domain is any network segment in which collisions can happen (usually in
Ethernet networks). In other words, a Collision Domain consists of all the devices connected
using a Shared Media (Bus Topology or using Ethernet Hubs) where a Collision can happen
between any devices at any time.

As the number of devices in a collision domain increases, the chances of collisions also
increase. Likewise, the more traffic there is in a collision domain, the greater the chance of
collisions.

Increased collisions result in a low-quality network where hosts spend more and more
time on packet retransmission and packet processing. Usually switches are used to segment
(divide) a big collision domain into many small collision domains. Each port of an Ethernet
switch operates in a separate collision domain.

In other words, Collision cannot happen between two devices which are connected to
different ports of a Switch.

4.12.2 Broadcast domain

A broadcast domain is the domain in which a broadcast is forwarded. A broadcast domain
contains all devices that can reach each other at the data link layer (OSI layer 2) by using
broadcast. All ports on a hub or a switch are by default in the same broadcast domain. All ports
on a router are in different broadcast domains, and routers do not forward broadcasts from
one broadcast domain to another. The following example clarifies the concept:

Figure 4.11: Broadcast Domain

Broadcast is a type of communication, where the sending device sends a single copy of
data and that copy of data will be delivered to every device in the network segment. Broadcast
is a required type of communication and we cannot avoid Broadcasts, because many protocols
(Example: ARP and DHCP) and applications are dependent on Broadcast to function.

A Broadcast Domain consists of all the devices that will receive any broadcast packet
originating from any device within the network segment. As the number of devices in the
Broadcast Domain increases, the number of Broadcasts also increases and the quality of the
network will come down because of the following reasons.

 Decrease in available Bandwidth: A large number of Broadcasts will reduce the
available bandwidth of network links for normal traffic, because broadcast traffic
is forwarded to all the ports in a switch.

 Decrease in processing power of computers: Since computers must process every
broadcast packet they receive, a portion of their CPU power is spent on processing
broadcast packets. Normally a broadcast packet is relevant only to one particular
computer and irrelevant to the rest (for example, a DHCPDISCOVER message is
relevant only to a DHCP Server; other computers will drop the packet after
processing it). This drains the processing power of computers across a Broadcast
domain.

By design, Routers will not allow broadcasts from one of its connected network segment
to cross the router and reach another network segment. The primary function of a Router is to
segment (divide) a big broadcast domain into multiple smaller broadcast domains.
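The segmentation rules above can be summarised in a toy calculation. This is only an illustrative sketch (the device counts are hypothetical): every hub shares one collision domain, every active switch port is its own collision domain, and only router interfaces bound broadcast domains.

```python
# Toy model of the segmentation rules described above.
# Rules from the text: all devices on a hub share one collision domain;
# each active switch port is a separate collision domain; hubs and
# switches extend a broadcast domain, while each router interface in
# use bounds its own broadcast domain.

def collision_domains(hubs, switch_ports_in_use):
    # One collision domain per hub, plus one per active switch port.
    return hubs + switch_ports_in_use

def broadcast_domains(router_interfaces_in_use):
    # Only routers split broadcast domains.
    return router_interfaces_in_use

# Example: one hub with 5 PCs, one switch with 6 ports in use,
# the two segments joined by a router using 2 interfaces.
print(collision_domains(hubs=1, switch_ports_in_use=6))   # 7
print(broadcast_domains(router_interfaces_in_use=2))      # 2
```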

Summary

In this chapter we have learnt that:

 A virtual LAN (VLAN) is any broadcast domain that is partitioned and isolated in a
computer network at the data link layer (OSI layer 2)

 VLANs allow network administrators to group hosts together even if the hosts are
not directly connected to the same network switch. Because VLAN membership
can be configured through software, this can greatly simplify network design and
deployment.

 VLANs address issues such as scalability, security, and network management.


Network architects set up VLANs to provide network segmentation. Routers between
VLANs filter broadcast traffic, enhance network security, perform address
summarization, and mitigate network congestion.

 VLANs can be used to partition a local network into several distinctive segments.

 The purpose of a VLAN is simple: It removes the limitation of physically switched


LANs with all devices automatically connected to each other. With a VLAN, it is
possible to have hosts that are connected together on the same physical LAN but
not allowed to communicate directly. This restriction gives us the ability to organize
a network without requiring that the physical LAN mirror the logical connection
requirements of any specific organization.

 VLANs provide a number of advantages, such as ease of administration, confinement


of broadcast domains, reduced broadcast traffic, and enforcement of security policies.

 VTP is a protocol used to distribute and synchronize identifying information about


VLANs configured throughout a switched network. Configuration changes made to
the VLANs on a single VTP server switch are propagated across Trunk links to all
trunk-connected switches in the network.

Review Questions
 Explain the concept of VLANs with their advantages and disadvantages.

 What do you understand by VTP?

 Write short notes on Collision and Broadcast Domain.

 Explain Access and Trunk Links

References
 http://www.pearsonitcertification.com

 http://www.firewall.cx/networking-topics/vlan-networks/218-vlan-access-trunk-links.html

 https://www.cisco.com/c/en/us/support/docs/lan-switching/vtp/10558-21.html

UNIT – 5
COMMUNICATION PROTOCOLS
Learning Objectives
The main objectives of learning in this unit are

 Understanding the basics of communication in a networked environment and the


various protocols that govern these communication modes.

 Learn the standard email protocols and be ready to configure a mail server.

 Understand how a Domain Name System (DNS) and a DHCP works.

 Learn about ASCII.

Structure
5.1 Introduction to Protocols

5.1 Introduction to Protocols


When computers were used only as standalone systems, there was practically no need
for any mechanism for them to communicate. As connecting computers to share files and
printers became a necessity, establishing communication between network devices required a
set of rules to decide how systems would communicate. Protocols provide that method. A
number of protocols can be used on a network, each of which has its own features, advantages,
and disadvantages. The protocol you choose will have a significant impact on the network’s
functioning and performance. Some of the protocols most commonly used by a network
administrator are discussed in the succeeding paragraphs.

Figure 5.1: TCP/IP Model Protocols



5.1.1 Application Layer Protocols


(a) TELNET

TELNET is the terminal emulation protocol in a TCP/IP environment. Telnet, which is


defined in RFC 854, is a virtual terminal protocol. It allows sessions to be opened on a remote
host, and then commands can be executed on that remote host. For many years, Telnet was
the method by which clients accessed multiuser systems such as mainframes and minicomputers.
It also was the connection method of choice for UNIX systems. Today, Telnet is still commonly
used to access routers and other managed network devices. One of the problems with Telnet is
that it is not secure. As a result, remote session functionality is now almost always achieved by
using alternatives such as SSH.

TELNET uses TCP as the transport protocol to establish a connection between server
and client. The TELNET server and client enter a phase of option negotiation that determines the
options that each side can support for the connection. Each connected system can negotiate
new options or renegotiate old options at any time. In general, each end of the TELNET
connection attempts to implement all options that maximize performance for the systems involved.

When a TELNET connection is first established, each end is assumed to originate and
terminate at a “Network Virtual Terminal”, or NVT. An NVT is an imaginary device which provides
a standard, network-wide, intermediate representation of a canonical terminal. This eliminates
the need for “server” and “user” hosts to keep information about the characteristics of each
other’s terminals and terminal handling conventions.

The principle of negotiated options takes cognizance of the fact that many hosts will wish
to provide additional services over and above those available within an NVT and many users
will have sophisticated terminals and would like to have elegant, rather than minimal, services.
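Option negotiation travels in-band as three-byte sequences: the IAC byte (255), a verb (WILL, WONT, DO, or DONT), and an option code. The following is a minimal sketch of a decoder for such sequences; the sample bytes are hypothetical (option 1 is ECHO, option 24 is TERMINAL-TYPE).

```python
# Decode Telnet option-negotiation triples (RFC 854/855).
IAC = 255  # "Interpret As Command" escape byte
VERBS = {251: "WILL", 252: "WONT", 253: "DO", 254: "DONT"}

def decode_negotiation(data: bytes):
    """Return (verb, option) pairs for each IAC negotiation triple."""
    out = []
    i = 0
    while i + 3 <= len(data):
        if data[i] == IAC and data[i + 1] in VERBS:
            out.append((VERBS[data[i + 1]], data[i + 2]))
            i += 3
        else:
            i += 1  # skip ordinary data bytes
    return out

# A server offering "WILL ECHO" and asking "DO TERMINAL-TYPE":
sample = bytes([255, 251, 1, 255, 253, 24])
print(decode_negotiation(sample))  # [('WILL', 1), ('DO', 24)]
```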

(b) File Transfer Protocol (FTP)

FTP provides for the uploading and downloading of files from a remote host running FTP
server software. FTP allows you to view the contents of folders on an FTP server and rename
and delete files and directories if you have the necessary permissions. FTP, which is defined in
RFC 959, uses TCP as a transport protocol to guarantee delivery of packets.

FTP has security mechanisms used to authenticate users. However, rather than create a
user account for every user, you can configure FTP server software to accept anonymous

logons. When you do this, the username is anonymous, and the password normally is the
user’s email address. Most FTP servers that offer files to the general public operate in this way.

All the common network operating systems offer FTP server capabilities and all popular
workstation operating systems offer FTP client functionality. FTP assumes that files being
uploaded or downloaded are straight text (that is, ASCII) files. If the files are not text, which is
likely, the transfer mode has to be changed to binary. With sophisticated FTP clients, such as
CuteFTP, the transition between transfer modes is automatic. With more basic utilities, you
have to perform the mode switch manually.

In addition to being a popular mechanism for distributing files to the general public over
networks such as the Internet, FTP is also popular with organizations that need to frequently
exchange large files with other people or organizations. The key functions of FTP are:

 To promote sharing of files (computer programs and/or data);

 To encourage indirect or implicit (via programs) use of remote computers;

 To shield a user from variations in file storage systems among hosts; and

 To transfer data reliably and efficiently.
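On the control connection, an FTP server answers every command with a three-digit reply code whose first digit indicates the reply class (RFC 959). A minimal sketch of classifying such replies follows; the reply texts shown are illustrative examples.

```python
# Classify FTP reply codes by their first digit (RFC 959, section 4.2).
CLASSES = {
    "1": "positive preliminary",
    "2": "positive completion",
    "3": "positive intermediate",
    "4": "transient negative",
    "5": "permanent negative",
}

def classify_reply(line: str):
    """Return (numeric code, reply class) for an FTP reply line."""
    code = line[:3]
    return int(code), CLASSES[code[0]]

print(classify_reply("220 Service ready for new user"))
print(classify_reply("230 User logged in, proceed"))
print(classify_reply("530 Not logged in"))
```

A client such as Python's standard `ftplib` performs this classification internally, raising an error for 4xx and 5xx replies.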

(c) Trivial File Transfer Protocol (TFTP)

Trivial File Transfer Protocol (TFTP) is a simple protocol to transfer files. It has been
implemented on top of the Internet User Datagram protocol (UDP). TFTP is designed to be
small and easy to implement and, therefore, lacks most of the features of a regular FTP. TFTP
only reads and writes files (or mail) from/ to a remote server. It cannot list directories, and
currently has no provisions for user authentication.

Three modes of transfer are currently supported by TFTP: netascii, that is, 8-bit ASCII;
octet, raw 8-bit bytes (this replaces the “binary” mode of previous versions of the specification);
and mail, netascii characters sent to a user rather than to a file. Additional modes can be
defined by pairs of cooperating hosts.

In TFTP, any transfer begins with a request to read or write a file, which also serves to
request a connection. If the server grants the request, the connection is opened and the file is
sent in fixed length blocks of 512 bytes. Each data packet contains one block of data and must
be acknowledged by an acknowledgment packet before the next packet can be sent. A data
packet of less than 512 bytes signals termination of a transfer. If a packet gets lost in the

network, the intended recipient will time out and may retransmit its last packet (which may be
data or an acknowledgment), thus causing the sender of the lost packet to retransmit that lost
packet. The sender has to keep just one packet on hand for retransmission, since the lock step
acknowledgment guarantees that all older packets have been received. Notice that both machines
involved in a transfer are considered senders and receivers. One sends data and receives
acknowledgments, the other sends acknowledgments and receives data.
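The request and acknowledgment packets described above are small enough to construct by hand. The following is a minimal sketch following RFC 1350; the filename is hypothetical.

```python
import struct

def build_rrq(filename: str, mode: str = "octet") -> bytes:
    # Read Request (RRQ): opcode 1 | filename | NUL | mode | NUL
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

def build_ack(block: int) -> bytes:
    # Acknowledgment (ACK): opcode 4 | 16-bit block number
    return struct.pack("!HH", 4, block)

pkt = build_rrq("boot.cfg")
print(pkt)       # b'\x00\x01boot.cfg\x00octet\x00'
print(build_ack(1))  # b'\x00\x04\x00\x01'
```

Each 512-byte DATA block (opcode 3) the server sends would be answered with the matching `build_ack(block)`, in the lock-step fashion described above.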

The current version of TFTP is version 2. The basic TFTP header structure will be as
under:

The basic TFTP commands are given below:

(d) Simple Mail Transfer Protocol (SMTP)

SMTP, which is defined in RFC 821, is a protocol that defines how mail messages are
sent between hosts. SMTP uses TCP connections to guarantee error-free delivery of messages.
SMTP is not overly sophisticated, and it requires that the destination host always be available.
Therefore, mail systems commonly spool incoming mail so that users can read it later. How the
user then reads the mail depends on how the client accesses the SMTP server. SMTP can be
used to both send and receive mail; Post Office Protocol (POP) and Internet Message Access
Protocol (IMAP) can be used only to receive mail.
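A message handed to SMTP is simply structured text with headers and a body. The following is a minimal sketch using Python's standard email module; the addresses and server name are hypothetical, and the actual network send is shown only as a comment.

```python
from email.message import EmailMessage

# Compose an RFC 5322 message; all addresses here are hypothetical.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Test"
msg.set_content("Hello over SMTP")

# Delivery would use smtplib over TCP (port 25, or 587 with STARTTLS):
# import smtplib
# with smtplib.SMTP("mail.example.com") as server:
#     server.send_message(msg)

print(msg["Subject"])  # Test
```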

(e) Domain Name System (DNS)

DNS performs an important function on TCP/IP-based networks. It resolves hostnames,


such as www.google.com, to IP addresses, such as 64.233.160.0. Such a resolution system
makes it possible for people to remember the names of and refer to frequently used hosts using
easy-to-remember hostnames rather than hard-to-remember IP addresses.

DNS solves the problem of name resolution by offering resolution through servers
configured to act as name servers. The name servers run DNS server software, which allows
them to receive, process, and reply to requests from systems that want to resolve hostnames to
IP addresses. Systems that ask DNS servers for a hostname-to-IP address mapping are called
resolvers or DNS clients. One of the problems with DNS is that, despite all its automatic resolution
capabilities, entries and changes to those entries must still be performed manually. A strategy
to solve this problem is to use Dynamic DNS (DDNS), a newer system that allows hosts to be
dynamically registered with the DNS server.

DNS operates in the DNS namespace. This space has logical divisions organized
hierarchically. At the top level are domains such as .com (commercial) and .edu (education), as
well as domains for countries, such as .uk (United Kingdom) and .de (Germany). Below the top
level are sub-domains or second-level domains associated with organizations or commercial
companies, such as Microsoft, IBM. Within these domains, hosts or other sub-domains can be
assigned. The domain name, along with any sub-domains, is called the fully qualified domain
name (FQDN) because it includes all the components from the top of the DNS namespace to
the host. For this reason, many people refer to DNS as resolving FQDNs to IP addresses.

Although the most common entry in a DNS database is an A (address) record, which
maps a hostname to an IP address, DNS can hold numerous other types of entries as well.
Some of particular note are the MX record, which is used to map entries that correspond to mail
exchanger systems, and CNAME (canonical record name), which can be used to create alias
records for a system. A system can have an A record and then multiple CNAME entries for its
aliases.

The importance of DNS, particularly in environments where the Internet is heavily used,
cannot be overstated. If DNS facilities are not accessible, the Internet effectively becomes
unusable, unless you can remember the IP addresses of all your favorite sites.

Each DNS name server maintains information about its zone, or domain, in a series of
records known as DNS resource records. There are several types of DNS resource records,
each containing information about the DNS domain and the systems within it. These records
are text entries stored on the DNS server. Some of the DNS resource records include:

 Start of Authority (SOA) is a record of information containing data on DNS zones


and other DNS records. A DNS zone is the part of a domain for which an individual
DNS server is responsible. Each zone contains a single SOA record.

 Name Server (NS) record stores information that identifies the name servers in the
domain that store information for that domain.

 Canonical Name (CNAME) record stores additional host names, or aliases,


for hosts in the domain. A CNAME specifies an alias or nickname for a canonical
host name record in a domain name system (DNS) database. CNAME records are
used to give a single computer multiple names (aliases).

 Mail Exchange (MX) record stores information about where mail for the domain
should be delivered.

Figure 5.2: Typical DNS entries on a Windows system
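A DNS query itself is a small binary message: a 12-byte header followed by the question, with the hostname encoded as length-prefixed labels. The following sketch builds an A-record query; the transaction ID is arbitrary.

```python
import struct

def encode_qname(hostname: str) -> bytes:
    # Each label is prefixed by its length; the name ends with a zero byte.
    out = b""
    for label in hostname.split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def build_query(hostname: str, txid: int = 0x1234) -> bytes:
    # Header: ID, flags (0x0100 = recursion desired), QDCOUNT=1,
    # ANCOUNT=0, NSCOUNT=0, ARCOUNT=0.
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: QNAME, QTYPE=1 (A record), QCLASS=1 (Internet).
    return header + encode_qname(hostname) + struct.pack("!HH", 1, 1)

q = build_query("www.google.com")
print(len(q))  # 32: 12-byte header + 16-byte QNAME + 4-byte type/class
```

A resolver would send these bytes in a UDP datagram to port 53 on its configured name server and parse the answer records from the reply.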

(f) Routing Information Protocol (RIP)

Routing Information Protocol (RIP) is a standard for exchange of routing information


among gateways and hosts. This protocol is most useful as an “interior gateway protocol”. In a
nationwide network such as the current Internet, there are many routing protocols used for the
whole network. The network will be organized as a collection of “autonomous systems”. Each

autonomous system will have its own routing technology, which may well be different for different
autonomous systems. The routing protocol used within an autonomous system is referred to as
an interior gateway protocol, or “IGP”. A separate protocol is used to interface among the
autonomous systems. The earliest such protocol, still used in the Internet, is “EGP” (exterior
gateway protocol). Such protocols are now usually referred to as inter-AS routing protocols.
RIP is designed to work with moderate-size networks using reasonably homogeneous technology.
Thus it is suitable as an IGP for many campuses and for regional networks using serial lines
whose speeds do not vary widely. It is not intended for use in more complex environments.

RIP2, derived from RIP, is an extension of the Routing Information Protocol (RIP) intended
to expand the amount of useful information carried in RIP2 messages and to add a measure
of security. RIP2 is a UDP-based protocol. Each host that uses RIP2 has a routing process that
sends and receives datagrams on UDP port number 520. RIP and RIP2 are for the IPv4 network,
while RIPng is designed for the IPv6 network.

(g) Simple Network Management Protocol (SNMP)

SNMP enables network devices to communicate information about their state to a central
system. It also allows the central system to pass configuration parameters to those devices.
SNMP is a protocol that facilitates network management functionality; it is not a network
management system (NMS) by itself.

In an SNMP configuration, a central system known as a manager functions as the central


communication point for all the SNMP enabled devices on the network. On each device that is
to be managed and monitored via SNMP, software called an SNMP agent is set up and configured
with the manager’s IP address, which allows the SNMP manager to communicate with and retrieve
information from the devices running the SNMP agent software. Using SNMP and NMS, you
can monitor all the devices on a network, including switches, hubs, routers, servers, and printers,
as well as any device that supports SNMP, from a single location.

(h) POP and POP3: Post Office Protocol (version 3)

The Post Office Protocol is designed to allow a workstation to dynamically access a mail
drop on a server host. POP3 is version 3 (the latest version) of the Post Office Protocol.
POP3 allows a workstation to retrieve mail that the server is holding for it. POP3 transmissions
appear as data messages between stations. The messages are either command or reply
messages.

There are several different technologies and approaches to building a distributed electronic
mail infrastructure: POP (Post Office Protocol), DMSP (Distributed Mail System Protocol) and
IMAP (Internet Message Access Protocol) among them. Of the three, POP is the oldest and
consequently the best known. DMSP is largely limited to a single application, PCMAIL, and is
known primarily for its excellent support of “disconnected” operation. IMAP offers a superset of
POP and DMSP capabilities, and provides good support for all three modes of remote mailbox
access: offline, online, and disconnected.

POP was designed to support “offline” mail processing, in which mail is delivered to a
server, and a personal computer user periodically invokes a mail “client” program that connects
to the server and downloads all of the pending mail to the user’s own machine. The offline
access mode is a kind of store-and-forward service, intended to move mail (on demand) from
the mail server (drop point) to a single destination machine, usually a PC or Mac. Once delivered
to the PC or Mac, the messages are then deleted from the mail server.

POP3 is not designed to provide extensive manipulation of mail on the server; those
operations are handled by a more advanced (and complex) protocol, IMAP4. POP3 uses TCP
as the transport protocol.

In computing, the Post Office Protocol (POP) is an application layer Internet standard
protocol used by e-mail clients to retrieve e-mail from a server in an Internet Protocol (IP)
network. POP version 3 (POP3) is the most recent level of development in common use. POP
has largely been superseded by the Internet Message Access Protocol (IMAP).

POP supports download-and-delete requirements for access to remote mailboxes. Although


most POP clients have an option to leave mail on server after download, e-mail clients using
POP generally connect, retrieve all messages, store them on the client system, and delete
them from the server. Other protocols, notably the Internet Message Access Protocol (IMAP)
provide more features of message management to typical mailbox operations. A POP3 server
listens on well-known port number 110 for service requests. Encrypted communication for POP3
is either requested after protocol initiation, using the STLS command, if supported, or by POP3S,
which connects to the server using Transport Layer Security (TLS) or Secure Sockets Layer
(SSL) on well-known TCP port number 995.

Available messages to the client are fixed when a POP session opens the maildrop, and
are identified by message-number local to that session or, optionally, by a unique identifier

assigned to the message by the POP server. This unique identifier is permanent and unique to
the maildrop and allows a client to access the same message in different POP sessions. Mail is
retrieved and marked for deletion by message-number. When the client exits the session, the
mail marked for deletion is removed from the maildrop.
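Every POP3 reply begins with a +OK or -ERR status indicator, and a LIST reply pairs each message number with its size in octets. The following is a minimal sketch of parsing such replies (the sample lines are illustrative); a real client would use a library such as Python's standard poplib over port 110.

```python
# Minimal parsing of POP3 reply lines (RFC 1939).

def is_ok(line: str) -> bool:
    # Every POP3 status reply starts with "+OK" or "-ERR".
    return line.startswith("+OK")

def parse_list_entry(line: str):
    # A LIST entry pairs a message number with its size in octets.
    number, size = line.split()
    return int(number), int(size)

print(is_ok("+OK 2 messages (340 octets)"))   # True
print(is_ok("-ERR no such message"))          # False
print(parse_list_entry("1 120"))              # (1, 120)
```

After LIST, a client would issue RETR by message number and, in the classic download-and-delete pattern described above, mark each retrieved message with DELE before QUIT.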

(j) Internet Message Access Protocol (IMAP)

IMAP (Internet Message Access Protocol) is a standard email protocol that stores email
messages on a mail server, but allows the end user to view and manipulate the messages as
though they were stored locally on the end user’s computing device(s). This allows users to
organize messages into folders, have multiple client applications know which messages have
been read, flag messages for urgency or follow-up and save draft messages on the server.

IMAP can be contrasted with another client/server email protocol, Post Office Protocol 3
(POP3). With POP3, mail is saved for the end user in a single mailbox on the server and moved
to the end user’s device when the mail client opens. While POP3 can be thought of as a “store-
and-forward” service, IMAP can be thought of as a remote file server.

Most implementations of IMAP support multiple logins; this allows the end user to
simultaneously connect to the email server with different devices. For example, the end user
could connect to the mail server with his Outlook iPhone app and his Outlook desktop client at
the same time. The details for how to handle multiple connections are not specified by the
protocol but are instead left to the developers of the mail client.

Even though IMAP has an authentication mechanism, the authentication process can
easily be circumvented by anyone who knows how to steal a password by using a protocol
analyzer because the client’s username and password are transmitted as clear text. In an
Exchange Server environment, administrators can work around this security flaw by using Secure
Sockets Layer (SSL) encryption for IMAP.

In computing, the Internet Message Access Protocol (IMAP) is an Internet standard protocol
used by email clients to retrieve email messages from a mail server over a TCP/IP connection.
IMAP was designed with the goal of permitting complete management of an email box by
multiple email clients, therefore clients generally leave messages on the server until the user
explicitly deletes them. An IMAP server typically listens on port number 143. IMAP over SSL
(IMAPS) is assigned the port number 993.

Virtually all modern e-mail clients and servers support IMAP. IMAP and the earlier POP3
(Post Office Protocol) are the two most prevalent standard protocols for email retrieval, with
many webmail service providers such as Gmail, Outlook.com and Yahoo! Mail also providing
support for either IMAP or POP3.

POP Comparison with IMAP

 POP is a much simpler protocol, implementation is easier.

 POP mail moves the message from the email server onto your local computer,
although there is usually an option to leave the messages on the email server as
well. IMAP defaults to leaving the message on the email server, simply downloading
a local copy.

 POP treats the mailbox as one store, and has no concept of folders

 An IMAP client performs complex queries, asking the server for headers, or the
bodies of specified messages, or to search for messages meeting certain criteria.
Messages in the mail repository can be marked with various status flags (e.g.
“deleted” or “answered”) and they stay in the repository until explicitly removed by
the user—which may not be until a later session. In short: IMAP is designed to
permit manipulation of remote mailboxes as if they were local. Depending on the
IMAP client implementation and the mail architecture desired by the system manager,
the user may save messages directly on the client machine, or save them on the
server, or be given the choice of doing either.

 The POP protocol requires the currently connected client to be the only client
connected to the mailbox. In contrast, the IMAP protocol specifically allows
simultaneous access by multiple clients and provides mechanisms for clients to
detect changes made to the mailbox by other, concurrently connected, clients.

 When POP retrieves a message, it receives all parts of it, whereas the IMAP4
protocol allows clients to retrieve any of the individual MIME parts separately - for
example retrieving the plain text without retrieving attached files.

 IMAP supports flags on the server to keep track of message state: for example,
whether or not the message has been read, replied to, or deleted.

(k) Address Resolution Protocol (ARP), Inverse Address Resolution Protocol


(InARP) and Reverse Address Resolution Protocol (RARP)

Address Resolution Protocol (ARP) performs mapping of an IP address to a physical


machine address (MAC address for Ethernet) that is recognized in the local network. For example,
in IP Version 4, an address is 32 bits long. In an Ethernet local area network, however, addresses
for attached devices are 48 bits long. A table, usually called the ARP cache, is used to maintain
a correlation between each MAC address and its corresponding IP address. ARP provides the
rules for making this correlation and providing address conversion in both directions. Since
protocol details differ for each type of local area network, there are separate ARP specifications
for Ethernet, Frame Relay, ATM, Fiber Distributed-Data Interface, HIPPI, and other protocols.
InARP is an addition to ARP that handles address resolution in Frame Relay environments.
ARP and InARP have the same structure.
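For Ethernet and IPv4, that shared structure is a fixed 28-byte packet. The following is a minimal sketch of building an ARP request; the MAC and IP addresses are hypothetical.

```python
import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes,
                      target_ip: bytes) -> bytes:
    # Fixed part for Ethernet/IPv4: hardware type 1 (Ethernet),
    # protocol type 0x0800 (IPv4), address lengths 6 and 4,
    # operation 1 = request.
    header = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
    target_mac = b"\x00" * 6  # unknown - this is what we are asking for
    return header + sender_mac + sender_ip + target_mac + target_ip

pkt = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff",
                        bytes([192, 168, 1, 10]),   # sender IP
                        bytes([192, 168, 1, 1]))    # IP to resolve
print(len(pkt))  # 28: the fixed size of an Ethernet/IPv4 ARP packet
```

The reply would carry operation 2 and the resolved MAC in the sender-hardware-address field, which the requester then stores in its ARP cache.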

There is a Reverse ARP (RARP) for host machines that don’t know their IP address.
RARP enables them to request their IP address from the gateway’s ARP cache. Reverse Address
Resolution Protocol (RARP) allows a physical machine in a local area network to request its IP
address from a gateway server’s Address Resolution Protocol (ARP) table or cache. A network

administrator creates a table in a local area network’s gateway router that maps the physical
machines’ (or Media Access Control - MAC) addresses to corresponding Internet Protocol
addresses. When a new machine is set up, its RARP client program requests its IP address
from the RARP server on the router. Assuming that an entry has been set up in the router table,
the RARP server will return the IP address to the machine, which can store it for future use.
RARP is available for Ethernet, Fiber Distributed-Data Interface, and Token Ring LANs. The
structure of the RARP header is the same as for ARP.

5.1.2 Transport Layer Protocols


(a) Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) is the transport layer protocol in the TCP/IP suite,
which provides a reliable stream delivery and virtual connection service to applications through
the use of sequenced acknowledgment with retransmission of packets when necessary. Along
with the Internet Protocol (IP), TCP represents the heart of the Internet protocols.

Since many network applications may be running on the same machine, computers need
to make sure that the correct software application on the destination computer gets the data
packets from the source machine, and to make sure replies get routed to the correct application
on the source computer. This is accomplished through the use of the TCP “port numbers”. The
combination of IP address of a network station and its port number is known as a “socket” or an
“endpoint”. TCP establishes connections or virtual circuits between two “endpoints” for reliable
communications.

Among the services TCP provides are stream data transfer, reliability, efficient flow control,
full-duplex operation, and multiplexing. With stream data transfer, TCP delivers an unstructured
stream of bytes identified by sequence numbers. This service benefits applications because
the application does not have to break data into blocks before handing it off to TCP. TCP can
group bytes into segments and pass them to IP for delivery.

TCP offers reliability by providing connection-oriented, end-to- end reliable packet delivery.
It does this by sequencing bytes with a forwarding acknowledgment number that indicates to
the destination the next byte the source expects to receive. Bytes not acknowledged within a
specified time period are retransmitted. The reliability mechanism of TCP allows devices to
deal with lost, delayed, duplicate, or misread packets. A time-out mechanism allows devices to
detect lost packets and request retransmission. TCP offers efficient flow control: when sending

acknowledgments back to the source, the receiving TCP process indicates the highest sequence
number it can receive without overflowing its internal buffers.

TCP processes can both send and receive packets at the same time (Full-duplex operation).
Numerous simultaneous upper-layer conversations can be multiplexed over a single connection
(Multiplexing in TCP).

Figure 5.4: TCP Protocol Structure

 Source port — Identifies points at which upper-layer source process receives TCP
services.

 Destination port — Identifies points at which upper-layer Destination process receives


TCP services.

 Sequence number — Usually specifies the number assigned to the first byte of
data in the current message. In the connection-establishment phase, this field also
can be used to identify an initial sequence number to be used in an upcoming
transmission.

 Acknowledgment number – Contains the sequence number of the next byte of data
the sender of the packet expects to receive. Once a connection is established, this
value is always sent.

 Data offset — 4 bits. The number of 32-bit words in the TCP header indicates where
the data begins.

 Reserved — 6 bits. Reserved for future use. Must be zero.

 Control bits (Flags) — 6 bits. Carry a variety of control information. The control bits
may be:

o U (URG) Urgent pointer field significant.
o A (ACK) Acknowledgment field significant.
o P (PSH) Push function.
o R (RST) Reset the connection.
o S (SYN) Synchronize sequence numbers.
o F (FIN) No more data from sender.

 Window — 16 bits. Specifies the size of the sender’s receive window, that is, the
buffer space available in octets for incoming data.

 Checksum — 16 bits. Indicates whether the header was damaged in transit.

 Urgent Pointer — 16 bits. Points to the first urgent data byte in the packet.

 Options + Padding – Specifies various TCP options. There are two possible formats
for an option: a single octet of option type; or an octet of option type, an octet of
option length, and the actual option data octets.

 Data – contains upper-layer information.
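The fields listed above can be packed into the 20-byte minimum TCP header with Python's struct module. This sketch builds a SYN segment; the port, sequence, and window values are hypothetical, and the checksum is left at zero rather than computed.

```python
import struct

SYN = 0x02  # control bit for "Synchronize sequence numbers"

def build_tcp_header(src_port, dst_port, seq, ack, flags, window):
    # Data offset is 5 (five 32-bit words, i.e. no options); it shares a
    # 16-bit field with the reserved bits and control flags. The checksum
    # and urgent pointer are left at zero here - a real stack computes the
    # checksum over a pseudo header plus the segment.
    offset_flags = (5 << 12) | flags
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, 0, 0)

hdr = build_tcp_header(49152, 80, seq=1000, ack=0, flags=SYN, window=65535)
print(len(hdr))  # 20: the minimum TCP header length
```

Unpacking the same 20 bytes with `struct.unpack("!HHIIHHHH", hdr)` recovers each field, which is essentially what a protocol analyzer does when it dissects a captured segment.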

(b) User Datagram Protocol (UDP)

UDP is a connectionless transport layer (layer 4) protocol in the OSI model which provides
a simple and unreliable message service for transaction-oriented services. UDP is basically an
interface between IP and upper-layer processes. UDP protocol ports distinguish multiple
applications running on a single device from one another.

Since many network applications may be running on the same machine, computers need
something to make sure the correct software application on the destination computer gets the
data packets from the source machine and some way to make sure replies get routed to the
correct application on the source computer. This is accomplished through the use of UDP
“port numbers”. For example, if a network station wished to query a Domain Name System (DNS)
server on the station 128.1.123.1, it would address the packet to station 128.1.123.1 and insert
destination port number 53 in the UDP header. The source port number identifies the application
on the local station that requested the name lookup, and all response packets generated by the
destination station should be addressed to that port number on the source station. Details of
UDP port numbers can be found in the reference.
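The port-number mechanics above can be observed with Python's standard socket module: binding a UDP socket to port 0 asks the operating system for an ephemeral source port. This is a local sketch only; no DNS query is actually sent:

```python
import socket

# Create a UDP socket and let the OS assign an ephemeral source port;
# replies would be addressed back to this port, as described above.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))        # port 0 means "pick any free port"
src_port = sock.getsockname()[1]   # the assigned source port
sock.close()

DNS_PORT = 53                      # well-known destination port for DNS
print(src_port, DNS_PORT)
```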

Unlike TCP, UDP adds no reliability, flow-control, or error recovery functions to IP. Because
of UDP’s simplicity, UDP headers contain fewer bytes and consume less network overhead

than TCP. UDP is useful in situations where the reliability mechanisms of TCP are not necessary,
such as in cases where a higher-layer protocol or application might provide error and flow
control.

UDP is the transport protocol for several well-known application- layer protocols, including
Network File System (NFS), Simple Network Management Protocol (SNMP), Domain Name
System (DNS), and Trivial File Transfer Protocol (TFTP).

Figure 5.5: UDP Protocol Structure


 Source port – 16 bits. Source port is an optional field. When used, it indicates the
port of the sending process and may be assumed to be the port to which a reply
should be addressed in the absence of any other information. If not used, a value of
zero is inserted.

 Destination port – 16 bits. Destination port has a meaning within the context of a
particular Internet destination address.

 Length – 16 bits. The length in octets of this user datagram, including this header
and the data. The minimum value of the length is eight.

 Checksum — 16 bits. Computed over a pseudo header of information from the IP
header, the UDP header and the data, padded with zero octets at the end, if
necessary, to make a multiple of two octets.

 Data – Contains upper-level data information.
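The checksum rule above can be sketched as follows: the one's-complement Internet checksum is computed over a pseudo header (source IP, destination IP, protocol 17, and UDP length) plus the UDP segment, padded with a zero octet if needed. The addresses and ports below are made up for illustration; a correctly checksummed datagram re-sums to zero:

```python
import struct

# One's-complement Internet checksum over 16-bit words, with the
# zero-octet padding described above.
def inet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                      # pad to a multiple of two octets
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    # Pseudo header: source IP, destination IP, zero, protocol 17, UDP length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return inet_checksum(pseudo + udp_segment)

src, dst = bytes([192, 168, 1, 1]), bytes([192, 168, 1, 2])
seg = struct.pack("!HHHH", 1234, 53, 9, 0) + b"x"     # 8-byte header + 1 data octet
csum = udp_checksum(src, dst, seg)
seg_ok = seg[:6] + struct.pack("!H", csum) + seg[8:]  # patch the checksum field
print(udp_checksum(src, dst, seg_ok))                 # 0 for a valid datagram
```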

(c) Internet Group Management Protocol (IGMP)

IGMP is the protocol within the TCP/IP protocol suite that manages multicast groups. It
allows one computer on the Internet to target content to a specific group of computers that will
receive content from the sending system. This is in contrast to unicast messaging, in which
data is sent to a single computer or network device and not to a group, and to broadcast
messaging, in which a message goes to all systems.

Multicasting is a mechanism by which groups of network devices can send and receive
data between the members of the group at one time, instead of sending messages to each
device in the group separately. The multicast grouping is established by each device being
configured with the same multicast IP address. These multicast IP addresses come from the IPv4
Class D range, 224.0.0.0 through 239.255.255.255.
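The Class D range above is the single 224.0.0.0/4 block; Python's ipaddress module can confirm whether an address is a multicast group address:

```python
import ipaddress

# The IPv4 multicast (Class D) range, 224.0.0.0 through 239.255.255.255,
# is the single 224.0.0.0/4 block.
MULTICAST = ipaddress.ip_network("224.0.0.0/4")

print(ipaddress.ip_address("224.0.0.1") in MULTICAST)        # True (all-hosts group)
print(ipaddress.ip_address("239.255.255.255") in MULTICAST)  # True (top of the range)
print(ipaddress.ip_address("192.168.1.1") in MULTICAST)      # False (unicast)
print(ipaddress.ip_address("230.1.2.3").is_multicast)        # True, built-in check
```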

IGMP is used to register devices into a multicast group, as well as to discover what other
devices on the network are members of the same multicast group. Common applications for
multicasting include groups of routers on an internetwork and videoconferencing clients.

Internet Group Management Protocol (IGMP), a multicasting protocol in the Internet
protocol family, is used by IP hosts to report their host group memberships to any immediately
neighboring multicast routers. IGMP messages are encapsulated in IP datagrams, with an IP
protocol number of 2. IGMP has versions IGMPv1, v2 and v3.

 IGMPv1: Hosts can join multicast groups. There are no leave messages. Routers
use a time-out based mechanism to discover the groups that are of no interest to
the members.

 IGMPv2: Leave messages were added to the protocol, allowing group membership
termination to be quickly reported to the routing protocol, which is important for
high-bandwidth multicast groups and/or subnets with highly volatile group
membership.

 IGMPv3: A major revision of the protocol that allows hosts to specify the list of
sources from which they want to receive traffic. Traffic from other sources is blocked
inside the network. It also allows hosts to block packets that come from sources
sending unwanted traffic.

 The variant protocols of IGMP are:

o DVMRP: Distance Vector Multicast Routing Protocol.

o IGAP: IGMP for user Authentication Protocol.

o RGMP: Router-port Group Management Protocol.

(d) Internet Control Message Protocol (ICMP)

ICMP, which is defined in RFC 792, is a protocol that works with the IP layer to provide
error checking and reporting functionality. In effect, ICMP is a tool that IP uses in its quest to

provide best-effort delivery. ICMP can be used for a number of functions. Its most common
function is probably the widely used and incredibly useful ping utility. Ping sends a stream of
ICMP echo requests to a remote host. If the host can respond, it does so by sending echo reply
messages back to the sending host. In that one simple process, ICMP enables the verification
of the protocol suite configuration of both the sending and receiving nodes and any intermediate
networking devices.

However, ICMP’s functionality is not limited to the use of the ping utility. ICMP also can
return error messages such as Destination unreachable and Time exceeded. In addition to
these and other functions, ICMP performs source quench. In a source quench scenario, the
receiving host cannot handle the influx of data at the same rate as the data is being sent. To
slow down the sending host, the receiving host sends ICMP source quench messages, telling
the sender to slow down. This action prevents packets from being dropped and having to be
resent. ICMP is a useful protocol. Although ICMP operates largely in the background, the ping
utility alone makes it one of the most valuable of the protocols.
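A ping utility builds ICMP echo request messages like the sketch below, which only constructs the packet bytes (actually sending them needs a raw socket and elevated privileges). The identifier, sequence number, and payload are arbitrary:

```python
import struct

# One's-complement Internet checksum, as used in the ICMP header.
def inet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"") -> bytes:
    ECHO_REQUEST = 8                     # ICMP type 8, code 0: echo request
    header = struct.pack("!BBHHH", ECHO_REQUEST, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", ECHO_REQUEST, 0, csum, ident, seq) + payload

pkt = echo_request(0x1234, 1, b"ping")
print(pkt[0], inet_checksum(pkt))        # 8 0  (type 8; checksum verifies to zero)
```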

Internet Control Message Protocol (ICMP) is an integrated part of the IP suite. ICMP
messages, delivered in IP packets, are used for out-of-band messages related to network
operation. ICMP packet delivery is unreliable, so hosts can’t count on receiving ICMP packets
for any network problems. The key ICMP functions are:

 Announce network errors, such as a host or entire portion of the network being
unreachable, due to some type of failure. A TCP or UDP packet directed at a port
number with no receiver attached is also reported via ICMP.

 Announce network congestion. When a router begins buffering too many packets,
due to an inability to transmit them as fast as they are being received, it will generate
ICMP Source Quench messages. Directed at the sender, these messages should
cause the rate of packet transmission to be slowed. Of course, generating too
many Source Quench messages would cause even more network congestion, so
they are used sparingly.

 Assist Troubleshooting. ICMP supports an Echo function, which just sends a packet
on a round trip between two hosts. Ping, a common network management tool, is
based on this feature. Ping will transmit a series of packets, measuring average
round-trip times and computing loss percentages.

 Announce Timeouts. If an IP packet’s TTL field drops to zero, the router discarding
the packet will often generate an ICMP packet announcing this fact. Traceroute is
a tool which maps network routes by sending packets with small TTL values and
watching the ICMP timeout announcements.

The Internet Control Message Protocol (ICMP) was revised during the definition of IPv6.
In addition, the multicast control functions of the IPv4 Internet Group Management Protocol
(IGMP) are now incorporated in ICMPv6.

5.1.3 Network Layer Protocols


(a) The Internet Protocol (IP)

The Internet Protocol (IP) is a network-layer (Layer 3 in the OSI model) protocol that
contains addressing information and some control information to enable packets to be routed in
a network. IP is the primary network-layer protocol in the TCP/IP protocol suite. Along with the
Transmission Control Protocol (TCP), IP represents the heart of the Internet protocols. IP is
equally well suited for both LAN and WAN communications.

IP has two primary responsibilities: providing connectionless, best-effort delivery of
datagrams through a network; and providing fragmentation and reassembly of datagrams to
support data links with different maximum transmission unit (MTU) sizes. The IP addressing
scheme is integral to the process of routing IP datagrams through an internetwork. Each IP
address has specific components and follows a basic format. These IP addresses can be
subdivided and used to create addresses for subnetworks. Each computer (referred to as a host)
on a TCP/IP network is assigned a unique 32-bit logical address that is divided into two parts:
the network number and the host number. The network number identifies a network and must
be assigned by the Internet Network Information Center (InterNIC) if the network is to be part
of the Internet. An Internet Service Provider (ISP) can obtain blocks of network addresses from
the InterNIC and can itself assign address space as necessary. The host number identifies a
host on a network and is assigned by the local network administrator.
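The network-number/host-number split can be illustrated with Python's ipaddress module. The address below reuses the earlier 128.1.123.1 example, and the /16 prefix is an assumption chosen for the sketch:

```python
import ipaddress

# Split a 32-bit IPv4 address into network and host parts under a /16 prefix.
iface = ipaddress.ip_interface("128.1.123.1/16")
network = iface.network                          # the network number: 128.1.0.0/16
host = int(iface.ip) & int(network.hostmask)     # the low 16 bits: host number
print(network, host)                             # 128.1.0.0/16 31489
```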

When you send or receive data (for example, an e-mail note or a Web page), the message
gets divided into little chunks called packets. Each of these packets contains both the sender’s
Internet address and the receiver’s address. Because a message is divided into a number of
packets, each packet can, if necessary, be sent by a different route across the Internet. Packets
can arrive in a different order than the order they were sent in. The Internet Protocol just delivers

them. It’s up to another protocol, the Transmission Control Protocol (TCP) to put them back in
the right order. All other protocols within the TCP/IP suite, except ARP and RARP, use IP to
route frames from host to host. There are two basic IP versions, IPv4 and IPv6.

IPv6 is the new version of Internet Protocol (IP) based on IPv4, a network-layer (Layer 3)
protocol that contains addressing information and some control information enabling packets to
be routed in the network. IPv6 increases the IP address size from 32 bits to 128 bits, to support
more levels of addressing hierarchy, a much greater number of addressable nodes and simpler
auto-configuration of addresses. IPv6 addresses are expressed in hexadecimal format (base
16), which allows not only numerals (0-9) but also the letters a-f. A sample IPv6
address looks like: 3ffe:ffff:100:f101:210:a4ff:fee3:9566. Scalability of multicast addresses is
introduced. A new type of address called an anycast address is also defined, to send a packet
to any one of a group of nodes.
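The sample address from the text can be handled with Python's ipaddress module, which shows both the compressed and the fully expanded 128-bit hexadecimal forms:

```python
import ipaddress

# The sample IPv6 address from the text, in its two textual forms.
addr = ipaddress.ip_address("3ffe:ffff:100:f101:210:a4ff:fee3:9566")
print(addr.exploded)       # 3ffe:ffff:0100:f101:0210:a4ff:fee3:9566
print(addr.compressed)     # shortest form (zero groups collapsed where possible)
print(addr.max_prefixlen)  # 128 bits, versus 32 for IPv4
```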

Major improvements in IPv6 vs. v4

 Improved support for extensions and options - IPv6 options are placed in separate
headers that are located between the IPv6 header and the transport layer header.
Changes in the way IP header options are encoded allow more efficient forwarding,
less stringent limits on the length of options, and greater flexibility for introducing
new options in the future. The extension headers are: Hop-by-Hop Options, Routing
(Type 0), Fragment, Destination Options, Authentication, and Encapsulating Security
Payload.

 Flow labeling capability - A new capability has been added to enable the labeling of
packets belonging to particular traffic flows for which the sender requests special
handling, such as non-default Quality of Service or real-time service.

Internet Security architecture (IPsec)

Internet Security architecture (IPsec) defines the security services at the IP layer by enabling
a system to select required security protocols, determine the algorithm(s) to use for the service(s),
and put in place any cryptographic keys required to provide the requested services. IPsec can
be used to protect one or more “paths” between a pair of hosts, between a pair of security
gateways, or between a security gateway and a host.

The set of security services that IPsec can provide includes access control, connectionless
integrity, data origin authentication, rejection of replayed packets (a form of partial sequence
integrity), confidentiality (encryption), and limited traffic flow confidentiality. Because these

services are provided at the IP layer, they can be used by any higher layer protocol, e.g., TCP,
UDP, ICMP, BGP, etc.

These objectives are met through the use of two traffic security protocols, the Authentication
Header (AH) and the Encapsulating Security Payload (ESP), and through the use of cryptographic
key management procedures and protocols. The set of IPsec protocols employed in any context,
and the ways in which they are employed, will be determined by the security and system
requirements of users, applications, and/or sites/organizations. When these mechanisms are
correctly implemented and deployed, they ought not to adversely affect users, hosts, and other
Internet components that do not employ these security mechanisms for protection of their traffic.
These mechanisms also are designed to be algorithm-independent. This modularity permits
selection of different sets of algorithms without affecting the other parts of the implementation.

A standard set of default algorithms is specified to facilitate interoperability in the global


Internet. The use of these algorithms, in conjunction with IPsec traffic protection and key
management protocols, is intended to permit system and application developers to deploy high
quality, Internet layer, cryptographic security technology. IPsec Architecture includes many
protocols and algorithms.

(b) Dynamic Host Configuration Protocol (DHCP)

DHCP, which is defined in RFC 2131, allows ranges of IP addresses, known as scopes,
to be defined on a system running a DHCP server application. When another system configured
as a DHCP client is initialized, it asks the server for an address. If all things are as they should
be, the server assigns an address from the scope to the client for a predetermined amount of
time, known as the lease. At various points during the lease (normally the 50% and 87.5% points),
the client attempts to renew the lease from the server. If the server cannot perform a renewal,
the lease expires at 100%, and the client stops using the address. In addition to an IP address
and the subnet mask, the DHCP server can supply many other pieces of information depending
on how the DHCP server implementation has been configured. In addition to the address
information, the default gateway is often supplied, along with DNS information.
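The renewal points mentioned above are RFC 2131's default T1 (renewing) and T2 (rebinding) timers, at 50% and 87.5% of the lease duration. A small sketch of the arithmetic:

```python
# Default DHCP timers relative to the lease duration, per RFC 2131:
# T1 (renew with the original server) at 50%, T2 (rebind to any server) at 87.5%.
def dhcp_timers(lease_seconds: int):
    t1 = lease_seconds * 0.5
    t2 = lease_seconds * 0.875
    return t1, t2

t1, t2 = dhcp_timers(86400)   # a one-day lease
print(t1, t2)                 # 43200.0 75600.0
```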

DHCP is a protocol-based service and is not platform-dependent. This means that
you can use, say, a Linux DHCP server for a network with Windows clients or a Novell DHCP
server with Linux clients. Although the DHCP server offerings in the various network operating
systems might differ slightly, the basic functionality is the same across the board. Likewise, the

client configuration for DHCP servers running on a different operating system platform is the
same as for DHCP servers running on the same base operating system platform.

(c) Hypertext Transfer Protocol (HTTP)

The Hypertext Transfer Protocol (HTTP) is an application-level protocol with the lightness
and speed necessary for distributed, collaborative, hypermedia information systems. HTTP
has been in use by the World-Wide Web global information initiative since 1990. HTTP allows
an open-ended set of methods to be used to indicate the purpose of a request. It builds on the
discipline of reference provided by the Uniform Resource Identifier (URI), as a location (URL)
or name (URN), for indicating the resource on which a method is to be applied. Messages are
passed in a format similar to that used by Internet Mail and the Multipurpose Internet Mail
Extensions (MIME). HTTP is also used as a generic protocol for communication between user
agents and proxies/gateways to other Internet protocols, such as SMTP, NNTP, FTP, Gopher
and WAIS, allowing basic hypermedia access to resources available from diverse applications
and simplifying the implementation of user agents.

The HTTP protocol is a request/response protocol. A client sends a request to the server
in the form of a request method, URI, and protocol version, followed by a MIME-like message
containing request modifiers, client information, and possible body content over a connection
with a server. The server responds with a status line, including the message’s protocol version
and a success or error code, followed by a MIME-like message containing server information,
entity metainformation, and possible entity-body content.
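The request/response exchange described above can be written out as raw message text. The host and path are illustrative, and nothing is transmitted in this sketch:

```python
# A minimal HTTP/1.1 request: request line, headers, then a blank line.
request = (
    "GET /index.html HTTP/1.1\r\n"   # method, request URI, protocol version
    "Host: www.example.com\r\n"      # Host header is mandatory in HTTP/1.1
    "\r\n"                           # empty line terminates the header section
)

# The server's status line: protocol version, status code, reason phrase.
status_line = "HTTP/1.1 200 OK"
version, code, reason = status_line.split(" ", 2)
print(version, code, reason)         # HTTP/1.1 200 OK
```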

The first version of HTTP, referred to as HTTP/0.9, was a simple protocol for raw data
transfer across the Internet. HTTP/1.0, as defined by RFC 1945, improved the protocol by
allowing messages to be in the format of MIME-like messages, containing meta information
about the data transferred and modifiers on the request/response semantics. However,
HTTP/1.0 does not sufficiently take into consideration the effects of hierarchical proxies, caching, the
need for persistent connections, or virtual hosts. “HTTP/1.1” includes more stringent requirements
than HTTP/1.0 in order to ensure reliable implementation of its features. HTTP messages consist
of requests from client to server and responses from server to client.

S-HTTP (Secure HTTP) is one specification for securing HTTP. A more widely used technology
for secured web communication is HTTPS, which is HTTP running on top of TLS or SSL for
secured web transactions. For HTTPS to be used, both the client and server must support it. All
popular browsers now support HTTPS, as do web server products, such as Microsoft Internet

Information Server (IIS), Apache, and almost all other web server applications that serve
sensitive applications. When you are accessing an application that uses HTTPS, the URL starts
with https rather than http; for example, https://www.mycollegeonline.com.
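The scheme in the URL also determines the default port a client connects to: 80 for http, 443 for https. A sketch with the standard urllib.parse module; the URL is illustrative, and the port table is defined here for the example rather than taken from the library:

```python
from urllib.parse import urlsplit

# Map a URL's scheme to its default TCP port (table defined for this sketch).
DEFAULT_PORTS = {"http": 80, "https": 443}

url = urlsplit("https://www.mycollegeonline.com/login")
print(url.scheme, url.hostname, DEFAULT_PORTS[url.scheme])  # https www.mycollegeonline.com 443
```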

American Standard Code for Information Interchange (ASCII)

ASCII (American Standard Code for Information Interchange) is the most common format
for text files in computers and on the Internet. In an ASCII file, each alphabetic, numeric, or
special character is represented with a 7-bit binary number (a string of seven 0s or 1s). 128
possible characters are defined. ASCII was developed by the American National Standards
Institute (ANSI).

UNIX and DOS-based operating systems use ASCII for text files. Windows NT and 2000
use a newer code, Unicode. IBM’s S/390 systems use a proprietary 8-bit code called EBCDIC.
Conversion programs allow different operating systems to change a file from one code to another.

ASCII was developed from telegraph code. Its first commercial use was as a seven-bit
teleprinter code promoted by Bell data services. Work on the ASCII standard began on October
6, 1960, with the first meeting of the American Standards Association’s (ASA) (now the American
National Standards Institute or ANSI) X3.2 subcommittee. The first edition of the standard was
published in 1963, underwent a major revision during 1967, and experienced its most recent
update during 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII
were both ordered for more convenient sorting (i.e., alphabetization) of lists, and added features
for devices other than teleprinters.

Originally based on the English alphabet, ASCII encodes 128 specified characters into
seven-bit integers, as shown by the ASCII chart below. Ninety-five of the encoded characters
are printable: these include the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z,
and punctuation symbols. In addition, the original ASCII specification included 33 non-printing
control codes which originated with Teletype machines; most of these are now obsolete, although
a few are still commonly used, such as the carriage return, line feed and tab codes.

For example, lowercase i would be represented in the ASCII encoding by binary 1101001
= hexadecimal 69 (i is the ninth letter) = decimal 105.
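The worked example above can be checked with Python's built-in ord, chr, and format:

```python
# Verify the encoding of lowercase "i" given in the text.
code = ord("i")
print(code)                  # 105 (decimal)
print(format(code, "x"))     # 69 (hexadecimal)
print(format(code, "b"))     # 1101001 (7-bit binary)
print(chr(0x69))             # i
```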

The ASCII table is divided into three different sections.

 Non-printable, system codes between 0 and 31.

 Lower ASCII, between 32 and 127. This table originates from the older, American
systems, which worked on 7-bit character tables.

 Higher ASCII, between 128 and 255. This portion is programmable; characters are
based on the language of your operating system or program you are using. Foreign
letters are also placed in this section.

Figure 5.6: ASCII Code



Extended ASCII uses eight instead of seven bits, which adds 128 additional characters.
This gives extended ASCII the ability for extra characters, such as special symbols, foreign
language letters, and drawing characters as shown below.

Figure 5.7: Extended ASCII Code



Summary
 TELNET is the terminal emulation protocol in a TCP/IP environment. Telnet, which
is defined in RFC 854, is a virtual terminal protocol. It allows sessions to be opened
on a remote host, and then commands can be executed on that remote host.

 FTP provides for the uploading and downloading of files from a remote host running
FTP server software. FTP allows you to view the contents of folders on an FTP
server and rename and delete files and directories if you have the necessary
permissions. FTP, which is defined in RFC 959, uses TCP as a transport protocol to
guarantee delivery of packets.

 Trivial File Transfer Protocol (TFTP) is a simple protocol to transfer files. It has
been implemented on top of the Internet User Datagram Protocol (UDP). TFTP is
designed to be small and easy to implement and, therefore, lacks most of the features
of a regular FTP. TFTP only reads and writes files (or mail) from/to a remote server.
It cannot list directories, and currently has no provisions for user authentication.


 SMTP, which is defined in RFC 821, is a protocol that defines how mail messages
are sent between hosts. SMTP uses TCP connections to guarantee error-free delivery
of messages.

 DNS performs an important function on TCP/IP-based networks. It resolves
hostnames, such as www.google.com, to IP addresses, such as 64.233.160.0. Such
a resolution system makes it possible for people to remember the names of and
refer to frequently used hosts using easy-to-remember hostnames rather than
hard-to-remember IP addresses.

 Routing Information Protocol (RIP) is a standard for exchange of routing information
among gateways and hosts. This protocol is most useful as an “interior gateway
protocol”. In a nationwide network such as the current Internet, there are many
routing protocols used for the whole network.

 SNMP enables network devices to communicate information about their state to a
central system. It also allows the central system to pass configuration parameters
to the network devices. SNMP is a protocol that only facilitates network management
functionality; it is not the network management system (NMS) by itself.

 The Post Office Protocol is designed to allow a workstation to dynamically access a
mail drop on a server host. POP3 is version 3 (the latest version) of the Post
Office Protocol. POP3 allows a workstation to retrieve mail that the server is holding
for it. POP3 transmissions appear as data messages between stations. The
messages are either command or reply messages.

 IMAP (Internet Message Access Protocol) is a standard email protocol that stores
email messages on a mail server, but allows the end user to view and manipulate
the messages as though they were stored locally on the end user’s computing
device(s). This allows users to organize messages into folders, have multiple client
applications know which messages have been read, flag messages for urgency or
follow-up and save draft messages on the server.

 Address Resolution Protocol (ARP) performs mapping of an IP address to a physical
machine address (MAC address for Ethernet) that is recognized in the local network.
For example, in IP Version 4, an address is 32 bits long.

 Transmission Control Protocol (TCP) is the transport layer protocol in the TCP/IP
suite, which provides a reliable stream delivery and virtual connection service to
applications through the use of sequenced acknowledgment with retransmission of
packets when necessary. Along with the Internet Protocol (IP), TCP represents the
heart of the Internet protocols.

 UDP is a connectionless transport layer (layer 4) protocol in the OSI model which
provides a simple and unreliable message service for transaction-oriented services.
UDP is basically an interface between IP and upper-layer processes. UDP protocol
ports distinguish multiple applications running on a single device from one another.

 ICMP, which is defined in RFC 792, is a protocol that works with the IP layer to
provide error checking and reporting functionality. In effect, ICMP is a tool that IP
uses in its quest to provide best-effort delivery.

 The Internet Protocol (IP) is a network-layer (Layer 3 in the OSI model) protocol
that contains addressing information and some control information to enable packets

to be routed in a network. IP is the primary network-layer protocol in the TCP/IP
protocol suite. Along with the Transmission Control Protocol (TCP), IP represents
the heart of the Internet protocols. IP is equally well suited for both LAN and WAN
communications.

 DHCP, which is defined in RFC 2131, allows ranges of IP addresses, known as
scopes, to be defined on a system running a DHCP server application. When another
system configured as a DHCP client is initialized, it asks the server for an address.

 The Hypertext Transfer Protocol (HTTP) is an application-level protocol with the
lightness and speed necessary for distributed, collaborative, hypermedia information
systems. HTTP has been in use by the World-Wide Web global information initiative
since 1990.

Review Questions

Write short notes the following:

 Protocol

 Telnet

 FTP

 TFTP

 SMTP

 DNS

 RIP

 SNMP

 POP & POP3

 IMAP

 ARP

 TCP/IP

 UDP

 IGMP

 ICMP

 DHCP

 HTTP

Reference
 Radia Perlman: Interconnections: Bridges, Routers, Switches, and Internetworking
Protocols. 2nd Edition. Addison-Wesley 1999, ISBN 0-201-63448-1. In particular
Ch. 18 on “network design folklore”, which is also available online at
http://www.informit.com/articles/article.aspx?p=20482

 Gerard J. Holzmann: Design and Validation of Computer Protocols. Prentice Hall,
1991, ISBN 0-13-539925-4. Also available online at http://spinroot.com/spin/Doc/Book91.html

 Douglas E. Comer (2000). Internetworking with TCP/IP - Principles, Protocols and
Architecture (4th ed.). Prentice Hall. ISBN 0-13-018380-6. In particular Ch. 11,
Protocol layering. Also has an RFC guide and a Glossary of Internetworking Terms
and Abbreviations.

 Internet Engineering Task Force abbr. IETF (1989): RFC 1122, Requirements for
Internet Hosts — Communication Layers, R. Braden (ed.). Available online at
http://tools.ietf.org/html/rfc1122. Describes TCP/IP to the implementors of
protocol software. In particular the introduction gives an overview of the design goals
of the suite.

 M. Ben-Ari (1982): Principles of concurrent programming 10th Print. Prentice Hall
International, ISBN 0-13-701078-8.

 C.A.R. Hoare (1985): Communicating sequential processes 10th Print. Prentice Hall
International, ISBN 0-13-153271-5. Available online via http://www.usingcsp.com

 R.D. Tennent (1981): Principles of programming languages 10th Print. Prentice Hall
International, ISBN 0-13-709873-1.

 Brian W Marsden (1986): Communication network protocols 2nd Edition. Chartwell-Bratt,
ISBN 0-86238-106-1.

 Andrew S. Tanenbaum (1984): Structured computer organization 10th Print. Prentice
Hall International, ISBN 0-13-854605-3.

MODEL QUESTION PAPER

M.SC CYBER FORENSICS AND INFORMATION SECURITY

FIRST YEAR- FIRST SEMESTER

CORE PAPER-II

NETWORKING AND COMMUNICATION PROTOCOLS

Time: 3 hours Maximum: 80

Section-A

Answer any 10 of the following in 50 words each (10 x 2 = 20)

1. What do you understand by timestamp?

2. Explain three-way handshake?

3. What is OSPF? Explain.

4. What is NAT? Explain.

5. What is CIDR?

6. What are DTE and DCE?

7. What is VTP? Explain.

8. Explain Access and Trunk Links?

9. What is subnetting?

10. Explain LSU, LSA and LSR?

Section-B

Answer any five of the following in 250 words each (5 x 6 = 30)

1. Write short notes on IPV4 and IPV6 headers.

2. How does APIPA work? Explain its limitations.

3. What is subnetting? Subnet the Class C IP address 205.11.2.0 so that you have 30
subnets. What is the subnet mask for the maximum number of hosts? How many
hosts can each subnet have? What is the IP address of host 3 on subnet 2?

4. What is VLAN? What are the advantages and disadvantages of VLAN?



5. Briefly explain the following protocols

a. ARP

b. IGMP

c. POP & POP3

SECTION – C

Answer any THREE questions in about 500 words each (3 x 10 = 30)

1. Explain OSI Model. Compare with TCP/IP Model.

2. Your company has the network ID 165.121.0.0. You are responsible for creating
subnets on the network, and each subnet must provide at least 900 host IDs. What
subnet mask meets the requirement for the minimum number of host IDs and
provides the greatest number of subnets?

3. Write brief notes on

a. What is frame relay?

b. How are edge routers used in a network? What are its advantages?
