Networking and Communication
POSTGRADUATE COURSE
M.Sc., Cyber Forensics and Information Security
FIRST YEAR
FIRST SEMESTER
CORE PAPER - II
NETWORKING AND
COMMUNICATION PROTOCOLS
WELCOME
Warm Greetings.
I invite you to join the CBCS Semester System to gain rich knowledge leisurely, at your own
will and wish. Choose the right courses at the right times so as to raise your flag of
success. We always encourage and enlighten you to excel and to empower yourself. We are the
cross-bearers who will make you a torch-bearer with a bright future.
DIRECTOR
COURSE WRITERS
Dr. N. Kala
Director i/c
CCFIS, UNOM

Dr. S. Thenmozhi
Associate Professor
Department of Psychology
Institute of Distance Education
University of Madras
Chepauk, Chennai - 600 005.
Core Paper - II
SYLLABUS
Unit 1
Networking models- OSI Layered model - TCP/IP Model - MAC Address representation -
Organisationally Unique Identifier - Internet Protocol - Versions and Header lengths - IP
Identification - IP Flags - IP fragmentation and reassembly structure - Transport Layer
protocols - Port numbers - TCP Flags - Segmentation - TCP 3 way handshake and Options
- encapsulation and De-encapsulation - Payload.
Unit 2
Static and Dynamic Routing - IP Routing Protocols - Classful and Classless Routing -
RIPv1 - RIPv2, Broadcast and Multicast domains - OSPF, EIGRP - Network Address
Translation - IP Classes - Private IP - Public IP - Reserved IP - APIPA.
Unit 3
Unit 4
Virtual LANs - Access links and Trunk links - Switchport modes - VLAN Trunking - Server,
Client and Transparent modes - VTP Domain - Configuration Revision numbers - Inter-VLAN
Communications - Broadcast domain - Collision Domain
Unit 5
CONTENTS

1. Introduction to Internetworking
2. IP Routing
3. Subnetting IP Networks
UNIT - 1
INTRODUCTION TO INTERNETWORKING
Learning Objectives
This chapter will act as a foundation for the technology discussions that follow. In
this chapter, some fundamental concepts and terms used in the evolving language
of internetworking are addressed.
A learner will be able to understand the various models of networking and the basic
differences between them.
In a protocol-based model, the learner will be able to understand the functions of
each layer and the structure of a TCP packet.
Structure
1.1 Introduction to Internetworking
1.6 TCP
The first networks were time-sharing networks that used mainframes and attached
terminals. Both IBM’s System Network Architecture (SNA) and Digital’s network architecture
implemented such environments.
Local area networks (LANs) evolved around the PC revolution. LANs enabled multiple
users in a relatively small geographical area to exchange files and messages, as well as
access shared resources such as file servers. Wide area networks (WANs) interconnect LANs across
normal telephone lines (and other media), thereby interconnecting geographically dispersed
users.
Today, high-speed LANs and switched internetworks are becoming widely used, largely
because they operate at very high speeds and support such high-bandwidth applications as
voice and video conferencing.
Flexibility, the final concern, is necessary for network expansion and new applications
and services, among other factors.
A model is a way to organize a system’s functions and features to define its structural
design. A design can help us understand how a communication system accomplishes tasks to
form a protocol suite. To help us wrap our heads around models, communication systems are
often compared to the postal system (Figure 1.2). Imagine writing a letter and taking it to the
post office. At some point, the mail is sorted and then delivered via some transport system to
another post office. From there, it is sorted and given to a mail carrier for delivery to the destination.
The letter is handled at several points along the way. Each part of the system is trying to
accomplish the same thing—delivering the mail. But each section has a particular set of rules to
obey. While in transit, the truck follows the rules of the road as the letter is delivered to the next
point for processing. Inspectors and sorters ensure the mail is metered and safe, without much
concern for traffic lights or turn signals.
At some point, we have to decide exactly how to handle this communication. After all,
when we mail that letter, we cannot address the envelope in some arbitrary language or ignore
zip codes, just as the mail truck driver cannot drive on the wrong side of the road.
Models are routinely organized in a hierarchical or layered structure. Each layer has a set
of functions to perform. Protocols are created to handle these functions, and therefore, protocols
are also associated with each layer. The protocols are collectively referred to as a protocol
suite. The lower layers are often linked with hardware, and the upper layers with software. For
example, Ethernet operates at Layers 1 and 2, while the File Transfer Protocol (FTP) operates
at the very top of the model.
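The layered organization described above can be sketched as a simple lookup table. The protocol placements follow the text where it names them; the rest are illustrative examples, not an exhaustive or authoritative mapping:

```python
# Illustrative mapping of OSI layers to example protocols (a sketch, not
# an official assignment; Ethernet spans Layers 1-2, FTP sits at Layer 7).
OSI_LAYERS = {
    7: ("Application",  ["HTTP", "FTP", "SMTP"]),
    6: ("Presentation", ["JPEG", "ASCII/EBCDIC translation"]),
    5: ("Session",      ["NetBIOS", "RPC"]),
    4: ("Transport",    ["TCP", "UDP"]),
    3: ("Network",      ["IP", "ICMP"]),
    2: ("Data Link",    ["Ethernet MAC", "LLC (IEEE 802.2)"]),
    1: ("Physical",     ["Ethernet PHY", "cabling and voltages"]),
}

def describe(layer: int) -> str:
    """Return a one-line summary of a layer and its example protocols."""
    name, protocols = OSI_LAYERS[layer]
    return f"Layer {layer} ({name}): " + ", ".join(protocols)

# Walk the stack from the top (Layer 7) down to the bottom (Layer 1).
for n in sorted(OSI_LAYERS, reverse=True):
    print(describe(n))
```

Lower layers (hardware-oriented) print last, matching the text's point that Ethernet lives at the bottom of the model while FTP sits at the very top.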
(a) Upper layers (layers 7, 6 & 5) - deal with application issues and generally are
implemented only in software.
In simple terms, the function of the application layer is to take requests and data from the
users and pass them to the lower layers of the OSI model. Incoming information is passed to
the application layer, which then displays the information to the users. Some of the most basic
application-layer services include file and print capabilities. The most common misconception
about the application layer is that it represents applications that are used on a system such as
a web browser, word processor, or spreadsheet. Instead, the application layer defines the
processes that enable applications to use network services. For example, if an application
needs to open a file from a network drive, the functionality is provided by components that
reside at the application layer.
Defines interface to user processes for communication and data transfer in network.
Provides standardized services such as virtual terminal, file and job transfer and
operations.
The presentation layer's basic function is to convert data intended for or received
from the application layer into another format. Such conversion is necessary because data
must be formatted in particular ways so that it can be transported across the network, and
applications cannot necessarily read data in that converted form. Some common data formats
handled by the presentation layer include the following:
Graphics files: JPEG, TIFF, GIF, and so on are graphics file formats that require the data
to be formatted in a certain way.
Text and data: The presentation layer can translate data into different formats, such as
American Standard Code for Information Interchange (ASCII) and Extended Binary Coded
Decimal Interchange Code (EBCDIC).
Sound/video: MPEG, MP3, and MIDI files all have their own data formats to and from
which data must be converted.
The session layer is responsible for managing and controlling the synchronization of data
between applications on two devices. It does this by establishing, maintaining, and breaking
sessions. Whereas the transport layer is responsible for setting up and maintaining the connection
between the two nodes, the session layer performs the same function on behalf of the application.
Error checking: Protocols at the transport layer ensure that data is sent or received
correctly.
Protocols at the network layer are also responsible for route selection, which refers to
determining the best path for the data to take throughout the network. In contrast to the data
link layer, which uses MAC addresses to communicate on the LAN, network layer protocols use
software configured addresses and special routing protocols to communicate on the network.
The term packet is used to describe the logical grouping of data at the network layer,
just as the term frame describes the grouping of data at the data link layer.
Media Access Control (MAC) layer: The MAC address is defined at this layer. The
MAC address is the physical or hardware address burned into each network interface
card (NIC). The MAC sub-layer also controls access to the network media. The MAC
layer is specified in the media-specific IEEE 802 standards, such as IEEE 802.3 for Ethernet.
Logical Link Control (LLC) layer: The LLC layer is responsible for the error and
flow-control mechanisms of the data link layer. The LLC layer is specified in the
IEEE 802.2 standard.
Hardware: The type of media used on the network, such as type of cable, type of
connector, and pinout format for cables.
Topology: The physical layer identifies the topology to be used in the network.
Common topologies include ring, mesh, star, and bus.
In addition to these characteristics, the physical layer defines the voltage used on a
given medium and the frequency at which the signals that carry the data operate.
These characteristics dictate the speed and bandwidth of a given medium, as well
as the maximum distance over which a certain media type can be used.
Basically, layers 7 through 4 deal with end to end communications between data source
and destinations, while layers 3 to 1 deal with communications between network devices.
In communication, information travels down the layers of the sending computer (computer A),
crosses the physical medium to computer B, and then passes up to the data link layer (Layer 2), which relays it to the network
layer (Layer 3), and so on, until it reaches the application layer (Layer 7) of computer B. Finally,
the application layer of computer B passes the information to the recipient application program
to complete the communication process. Figure 1.4 illustrates this process.
Figure 1.4: End to end communications between data source and destinations
Each layer adds a Header, and in some cases a Trailer, to the Data it receives from the upper layer,
which in turn consists of the upper layer's Header, Trailer and Data, as the information proceeds down through the layers.
The Headers contain information that specifically addresses layer-to-layer communication.
Headers, trailers and data are relative concepts, depending on the layer that analyzes the
information unit. For example, the Transport Header (TH) contains information that only the
Transport layer sees. All other layers below the Transport layer pass the Transport Header as
part of their Data. At the network layer, an information unit consists of a Layer 3 header (NH)
and data. At the data link layer, however, all the information passed down by the network layer
(the Layer 3 header and the data) is treated as data. In other words, the data portion of an
information unit at a given OSI layer potentially can contain headers, trailers, and data from all
the higher layers. This is known as encapsulation.
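Encapsulation as described here can be sketched in a few lines: each layer treats everything handed down from above as opaque data and wraps it with its own header (and, at the data link layer, a trailer). The header and trailer values below are placeholders for illustration, not real protocol fields:

```python
def encapsulate(payload: bytes, header: bytes, trailer: bytes = b"") -> bytes:
    """Wrap the upper layer's unit; everything received is treated as data."""
    return header + payload + trailer

# Hypothetical markers standing in for real headers (TH = transport header,
# NH = network header, DH/DT = data-link header and trailer).
app_data = b"hello"
segment  = encapsulate(app_data, b"TH|")        # transport layer
packet   = encapsulate(segment,  b"NH|")        # network layer
frame    = encapsulate(packet,   b"DH|", b"|DT")  # data link layer

print(frame)  # b'DH|NH|TH|hello|DT'
```

Note how the network layer's "data" already contains the transport header, exactly as the text describes: each lower layer's data portion carries all the headers from the layers above it.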
TCP/ IP stands for Transmission Control Protocol/ Internet Protocol. The TCP/IP reference
model is the network model used in the current Internet architecture. It has its origins back in
the 1960s with the ARPANET. This was a research network sponsored by the Department of
Defense in the United States. The following were seen as major design goals:
Ability for connections to remain intact as long as the source and destination machines
were functioning.
TCP/ IP is a suite of protocols. Like most network protocols, TCP/ IP is a layered protocol.
Each layer builds upon the layer below it, adding new functionality. The lowest level protocol is
concerned purely with the business of sending and receiving data - any data - using specific
network hardware. At the top are protocols designed specifically for tasks like transferring files
or delivering email. In between are levels which are concerned with issues like routing and
reliability. The benefit that the layered protocol stack gives you is that, if you invent a new
network application or a new type of hardware, you only need to create a protocol for that
application or that hardware: you don’t have to rewrite the whole stack. TCP/IP is a layered
model but it is generally agreed that there are fewer layers than the seven layers of the OSI
model. The TCP/IP 4-layer model and the key functions of each layer are described below in
figure 1.6.
a) Application Layer
The Application Layer in TCP/IP groups the functions of the OSI Application, Presentation
and Session Layers. Therefore any process above the transport layer is called an Application
in the TCP/IP architecture. In TCP/IP, sockets and ports are used to describe the path over which
applications communicate. Most application-level protocols are associated with one or more
port numbers.
b) Transport Layer
In the TCP/IP architecture, there are two Transport Layer protocols. The Transmission Control
Protocol (TCP) guarantees reliable delivery of information. The User Datagram Protocol (UDP)
transports datagrams without end-to-end reliability checking. Both protocols are useful for different
applications.
c) Internet layer
The Internet Protocol (IP) is the primary protocol in the TCP/IP Network Layer. All upper
and lower layer communications must travel through IP as they are passed through the TCP/IP
protocol stack. In addition, there are many supporting protocols in the Network Layer, such as
ICMP, to facilitate and manage the routing process.
In the TCP/IP architecture, the Data Link Layer and Physical Layer are normally grouped
together to become the Network Access layer. TCP/IP makes use of existing Data Link and
Physical Layer standards rather than defining its own. Many RFCs describe how IP utilizes and
interfaces with the existing data link protocols such as Ethernet, Token Ring, FDDI, HSSI, and
ATM. The physical layer, which defines the hardware communication properties, is not often
directly interfaced with the TCP/IP protocols in the network layer and above.
TCP/IP architecture does not exactly match the OSI model. Unfortunately, there is no
universal agreement regarding how to describe TCP/IP with a layered model. It is generally
agreed that TCP/IP has fewer levels than the seven layers of the OSI model.
Here we force the TCP/IP protocols into the OSI 7-layer structure for comparison purposes.
The TCP/IP suite's core functions are addressing and routing (IP/IPv6 in the network layer)
and transport control (TCP and UDP in the transport layer). The main differences identified
between the OSI and TCP/IP models can be summarized as follows:
Each computer in the network has software that operates at each of the layers and performs
the functions required by those layers (the physical layer is hardware not software). Each layer
in the network uses a formal language, or protocol, that is simply a set of rules that define what
the layer will do and that provides a clearly defined set of messages that software at the layer
needs to understand. For example, the protocol used for Web applications is HTTP. In general,
all messages sent in a network pass through all layers. All layers except the Physical layer add
a Protocol Data Unit (PDU) to the message as it passes through them. The PDU contains
information that is needed to transmit the message through the network. Some experts use the
word “Packet” to mean a PDU. Figure 1.8 shows how a message requesting a Web page would
be sent on the Internet.
Application Layer: First, the user creates a message at the application layer using a
Web browser by clicking on a link (e.g., get the home page at www.somebody.com). The browser
translates the user’s message (the click on the Web link) into HTTP. The rules of HTTP define
a specific PDU—called an HTTP packet—that all Web browsers must use when they request a
Web page. For now, we can think of the HTTP packet as an envelope into which the user’s
message (get the Web page) is placed. In the same way that an envelope placed in the mail
needs certain information written in certain places (e.g., return address, destination address),
so too does the HTTP packet. The Web browser fills in the necessary information in the HTTP
packet, drops the user’s request inside the packet and then passes the HTTP packet (containing
the Web page request) to the transport layer.
Transport Layer The transport layer on the Internet uses a protocol called TCP
(Transmission Control Protocol), and it, too, has its own rules and its own PDUs. TCP is
responsible for breaking large files into smaller packets and for opening a connection to the
server for the transfer of a large set of packets. The transport layer places the HTTP packet
inside a TCP PDU (which is called a TCP segment), fills in the information needed by the TCP
segment, and passes the TCP segment (which contains the HTTP packet, which, in turn, contains
the message) to the network layer.
Network Layer The network layer on the Internet uses a protocol called IP (Internet
Protocol), which has its rules and PDUs. IP selects the next stop on the message’s route through
the network. It places the TCP segment inside an IP PDU, which is called an IP packet, and
passes the IP packet, which contains the TCP segment, which, in turn, contains the HTTP
packet, which, in turn, contains the message, to the data link layer.
Data Link Layer If we are connecting to the Internet using a LAN, the data link layer may use
a protocol called Ethernet, which also has its own rules and PDUs. The data link layer formats
the message with start and stop markers, adds error-checking information, places the IP packet
inside an Ethernet PDU, which is called an Ethernet frame, and instructs the physical hardware
to transmit the Ethernet frame, which contains the IP packet, which contains the TCP segment,
which contains the HTTP packet, which contains the message.
Physical Layer The physical layer in this case is the network cable connecting the computer to
the rest of the network. The computer takes the Ethernet frame (complete with the IP packet,
the TCP segment, the HTTP packet, and the message) and sends it as a series of electrical
pulses through the cable to the server.
When the server gets the message, this process is performed in reverse. The physical
hardware translates the electrical pulses into computer data and passes the message to the
data link layer. The data link layer uses the start and stop markers in the Ethernet frame to
identify the message. The data link layer checks for errors and, if it discovers one, requests that
the message be resent. If a message is received without error, the data link layer will strip off
the Ethernet frame and pass the IP packet (which contains the TCP segment, the HTTP packet,
and the message) to the network layer. The network layer checks the IP address and, if it is
destined for this computer, strips off the IP packet and passes the TCP segment, which contains
the HTTP packet and the message to the transport layer. The transport layer processes the
message, strips off the TCP segment, and passes the HTTP packet to the application layer for
processing. The application layer (i.e., the Web server) reads the HTTP packet and the message
it contains (the request for the Web page) and processes it by generating an HTTP packet
containing the Web page we requested. This reverse process is called De-encapsulation. Then
the process starts again as the page is sent back to us.
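The nesting on the way down and the stripping on the way up can be sketched with placeholder PDUs. The field names, port, and addresses below are illustrative stand-ins, not real protocol formats:

```python
# Each PDU wraps the one above it: the Ethernet frame contains the IP packet,
# which contains the TCP segment, which contains the HTTP packet and message.
message = "GET /index.html"

http_packet    = {"proto": "HTTP",     "data": message}
tcp_segment    = {"proto": "TCP",      "dst_port": 80, "data": http_packet}
ip_packet      = {"proto": "IP",       "dst_ip": "203.0.113.10", "data": tcp_segment}
ethernet_frame = {"proto": "Ethernet", "dst_mac": "00:25:96:12:34:56", "data": ip_packet}

# The receiving server de-encapsulates: each layer strips its own PDU and
# hands the remaining data to the layer above.
pdu, path = ethernet_frame, []
while isinstance(pdu, dict):
    path.append(pdu["proto"])
    pdu = pdu["data"]

print(path)  # ['Ethernet', 'IP', 'TCP', 'HTTP']
print(pdu)   # 'GET /index.html'
```

The order in `path` mirrors the example: the frame is opened first and the original message emerges last, at the application layer.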
There are three important points in this example. First, there are many different software
packages and many different PDUs that operate at different layers to successfully transfer a
message. Networking is in some ways similar to the Russian Matryoshka, nested dolls that fit
neatly inside each other. This is called encapsulation, because the PDU at a higher level is
placed inside the PDU at a lower level so that the lower level PDU encapsulates the higher-level
one. The major advantage of using different software and protocols is that it is easy to develop
new software, because all one has to do is write software for one level at a time. The developers
of Web applications, for example, do not need to write software to perform error checking or
routing, because those are performed by the data link and network layers. Developers can
simply assume those functions are performed and just focus on the application layer. Likewise,
it is simple to change the software at any level (or add new application protocols), as long as the
interface between that layer and the ones around it remains unchanged.
Second, it is important to note that for communication to be successful, each layer in one
computer must be able to communicate with its matching layer in the other computer. For
example, the physical layer connecting the client and server must use the same type of electrical
signals to enable each to understand the other (or there must be a device to translate between
them). Ensuring that the software used at the different layers is the same is accomplished by
using standards. A standard defines a set of rules, called protocols, which explain exactly how
hardware and software that conform to the standard are required to operate. Any hardware and
software that conform to a standard can communicate with any other hardware and software
that conform to the same standard. Without standards, it would be virtually impossible for
computers to communicate.
Third, the major disadvantage of using a layered network model is that it is somewhat
inefficient. Because there are several layers, each with its own software and PDUs, sending a
message involves many software programs (one for each protocol) and many PDUs. The PDUs
add to the total amount of data that must be sent (thus increasing the time it takes to transmit),
and the different software packages increase the processing power needed in computers.
Because the protocols are used at different layers and are stacked on top of one another (refer
figure 1.8), the set of software used to understand the different protocols is often called a
protocol stack.
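The overhead point can be made concrete with a rough calculation. The header sizes below are typical minimums assumed for illustration (an 18-byte Ethernet header plus trailer, a 20-byte IP header, a 20-byte TCP header); real sizes vary with options:

```python
# Rough PDU overhead for a small application payload (assumed sizes, bytes).
payload_len = 100
overhead = {"Ethernet": 18, "IP": 20, "TCP": 20}

total_on_wire = payload_len + sum(overhead.values())
efficiency = payload_len / total_on_wire

print(total_on_wire)          # 158
print(round(efficiency, 3))   # 0.633
```

For a 100-byte message, roughly a third of what crosses the wire is layering overhead; for large payloads the same fixed headers become a much smaller fraction.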
A data-link layer address uniquely identifies each physical network connection of a network
device. Data-link addresses sometimes are referred to as physical or hardware addresses.
Data-link addresses usually exist within a flat address space and have a pre-established and
typically fixed relationship to a specific device. End systems generally have only one physical
network connection, and thus have only one data-link address. Routers and other internetworking
devices typically have multiple physical network connections and therefore have multiple data-
link addresses. In simple terms, a device has the same number of data-link addresses as it
has physical interfaces (Figure 1.11).
A media access control address (MAC address) is a binary number used to uniquely
identify computer network adapters of a device that is assigned to a network interface
controller (NIC) for communications at the data link layer of a network segment. The Media
Access Control (MAC) address, sometimes called “hardware addresses” or “physical addresses”
are embedded into the network hardware during the manufacturing process, or stored in firmware,
and designed in a way not to be modified. MAC addresses are used as a network address for
most IEEE 802 network technologies, including Ethernet and Wi-Fi. In this context, MAC
addresses are used in the medium access control protocol sub-layer.
Figure 1.12: MAC & Data-link addresses and IEEE sub layers of Data-link layers
A network node may have multiple NICs and in such cases each NIC must have a unique
MAC address. Sophisticated network equipment such as a multilayer switch or router may
require one or more permanently assigned MAC addresses.
MAC addresses are most often assigned by the manufacturer of a NIC and are stored in
its hardware, such as the card’s read-only memory or some other firmware mechanism. MAC
addresses are formed according to the rules of numbering name spaces managed by the
Institute of Electrical and Electronics Engineers (IEEE): EUI-48 (which replaces the obsolete
term MAC-48) and EUI-64. EUI is an abbreviation for Extended Unique Identifier.
In IEEE 802 networks, the Data Link Control (DLC) layer of the OSI Reference Model is
divided into two sub-layers: the Logical Link Control (LLC) layer and the Media Access Control
(MAC) layer. The MAC layer interfaces directly with the network medium. Consequently, each
different type of network medium requires a different MAC layer.
On networks that do not conform to the IEEE 802 standards but do conform to the OSI
Reference Model, the node address is called the Data Link Control (DLC) address.
To find the MAC address on a Windows system, click START, go to ACCESSORIES, and open
the Command Prompt. Type "ipconfig /all" and press Enter.
In the "ipconfig /all" results, look for the adapter whose MAC address you want to find. The
MAC address is the number located next to "Physical Address" in the list.
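Alternatively, a host's MAC address can be read programmatically. A minimal Python sketch using only the standard library (note that `uuid.getnode()` may fall back to a random stand-in value, with the multicast bit set, if no hardware address can be read):

```python
import uuid

# uuid.getnode() returns one of the host's MAC addresses as a 48-bit integer.
node = uuid.getnode()

# Format the 48-bit integer as six colon-separated octets, most significant first.
mac = ":".join(f"{(node >> shift) & 0xFF:02X}" for shift in range(40, -8, -8))
print(mac)  # e.g. '00:25:96:12:34:56'
```

This produces the same value "ipconfig /all" shows next to "Physical Address", just with colons instead of hyphens.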
A 48-bit MAC address is conventionally written in one of the following formats, where the
M digits identify the manufacturer and the S digits identify the specific device:
MM:MM:MM:SS:SS:SS
MM-MM-MM-SS-SS-SS
MMM.MMM.SSS.SSS
The leftmost 6 digits (24 bits) called a “prefix” is associated with the adapter manufacturer.
Each vendor registers and obtains MAC prefixes as assigned by the IEEE. Vendors often possess
many prefix numbers associated with their different products. For example, the prefixes 00:13:10,
00:25:9C and 68:7F:74 (plus many others) all belong to Linksys (Cisco Systems).
The rightmost digits of a MAC address represent an identification number for the specific
device. Among all devices manufactured with the same vendor prefix, each is given their own
unique 24-bit number. Note that hardware from different vendors may happen to share the
same device portion of the address.
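Splitting an address into its vendor prefix and device portion is a simple 24/24-bit split. A small sketch (the sample address reuses a Linksys prefix mentioned above; the device digits are made up for illustration):

```python
def split_mac(mac: str):
    """Split a 48-bit MAC into its OUI (vendor prefix) and device portion."""
    octets = mac.replace("-", ":").split(":")
    assert len(octets) == 6, "expected a 48-bit MAC written as six octets"
    oui    = ":".join(o.upper() for o in octets[:3])  # leftmost 24 bits
    device = ":".join(o.upper() for o in octets[3:])  # rightmost 24 bits
    return oui, device

oui, device = split_mac("68:7f:74:12:34:56")
print(oui)     # '68:7F:74' (a Linksys/Cisco prefix, per the text)
print(device)  # '12:34:56'
```

Both colon and hyphen notations are accepted, since they differ only in the separator.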
While traditional MAC addresses are all 48 bits in length, a few types of networks require
64-bit addresses instead. ZigBee wireless home automation and other similar networks based
on IEEE 802.15.4, for example, require 64-bit MAC addresses be configured on their hardware
devices.
A 64-bit address may be written as:
00:25:96:FF:FE:12:34:56
0025:96FF:FE12:3456
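The 64-bit example above is the common FF:FE expansion of the 48-bit address 00:25:96:12:34:56. A sketch of that mapping (IPv6 interface identifiers use the same insertion, along with a flipped universal/local bit not shown here):

```python
def eui48_to_64bit(mac: str) -> str:
    """Expand a 48-bit address to 64 bits by inserting FF:FE between the
    vendor prefix and the device portion, as in the example above."""
    octets = mac.upper().split(":")
    assert len(octets) == 6, "expected a 48-bit MAC written as six octets"
    return ":".join(octets[:3] + ["FF", "FE"] + octets[3:])

print(eui48_to_64bit("00:25:96:12:34:56"))  # '00:25:96:FF:FE:12:34:56'
```

Networks based on IEEE 802.15.4 (such as ZigBee) use native 64-bit identifiers instead of expanding 48-bit ones, so this mapping is only one way a 64-bit address can arise.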
TCP/IP networks use both MAC addresses and IP addresses but for separate purposes.
A MAC address remains fixed to the device’s hardware while the IP address for that same
device can be changed depending on its TCP/IP network configuration. Media Access Control
operates at Layer 2 of the OSI model while Internet Protocol operates at Layer 3. This allows
MAC addressing to support other kinds of networks besides TCP/IP. IP networks manage the
conversion between IP and MAC addresses using Address Resolution Protocol (ARP). The
Dynamic Host Configuration Protocol (DHCP) relies on ARP to manage the unique assignment
of IP addresses to devices.
When the least significant bit of an address’s first octet is 0 (zero), the frame is meant to
reach only one receiving NIC. This type of transmission is called unicast. A unicast frame is
transmitted to all nodes within the collision domain. In a modern wired setting the collision
domain usually is the length of the Ethernet cable between two network cards. In a wireless
setting, the collision domain is all receivers that can detect a given wireless signal. If a switch
does not know which port leads to a given MAC address, the switch will forward a unicast frame
to all of its ports (except the originating port), an action known as unicast flood. Only the node
with the matching hardware MAC address will accept the frame; network frames with non-
matching MAC-addresses are ignored, unless the device is in promiscuous mode.
If the least significant bit of the first octet is set to 1, the frame will still be sent only once;
however, NICs will choose to accept it based on criteria other than the matching of a MAC
address: for example, based on a configurable list of accepted multicast MAC addresses. This
is called multicast addressing. The IEEE has built in several special address types to allow
more than one network interface card to be addressed at one time:
Packets sent to the broadcast address, all one bits, are received by all stations on a local
area network. In hexadecimal the broadcast address would be FF:FF:FF:FF:FF:FF. A broadcast
frame is flooded and is forwarded to and accepted by all other nodes.
Packets sent to a multicast address are received by all stations on a LAN that have been
configured to receive packets sent to that address.
Functional addresses identify one or more Token Ring NICs that provide a particular
service, defined in IEEE 802.5.
These are all examples of group addresses, as opposed to individual addresses; the
least significant bit of the first octet of a MAC address distinguishes individual addresses from
group addresses. That bit is set to 0 in individual addresses and set to 1 in group addresses.
Group addresses, like individual addresses, can be universally administered or locally
administered.
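The individual/group distinction described above comes down to a single bit test on the first octet, which can be sketched as:

```python
BROADCAST = "FF:FF:FF:FF:FF:FF"

def classify(mac: str) -> str:
    """Classify a MAC address: the least significant bit of the first octet
    is 0 for an individual (unicast) address and 1 for a group address."""
    if mac.upper() == BROADCAST:
        return "broadcast"          # all-ones special case of a group address
    first_octet = int(mac.split(":")[0], 16)
    return "multicast" if first_octet & 0x01 else "unicast"

print(classify("00:25:96:12:34:56"))  # unicast
print(classify("01:00:5E:00:00:FB"))  # multicast (01 has the group bit set)
print(classify("FF:FF:FF:FF:FF:FF"))  # broadcast
```

The multicast example uses the 01:00:5E prefix, which is how IPv4 multicast groups are mapped onto Ethernet; the specific trailing octets are illustrative.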
A MAC address may include the manufacturer’s organizationally unique identifier (OUI).
An OUI {Organizationally Unique Identifier} is a 24-bit number that uniquely identifies a vendor
or manufacturer. They are purchased and assigned by the IEEE. The OUI is basically the first
three octets of a MAC address, the first 24 bits of a MAC address for a network-connected
device, which indicate the specific vendor for that device. The IEEE assigns OUIs to vendors.
(The last 24 bits of the MAC address are the device unique serial number, assigned to the
device by the manufacturer). The OUI sometimes is referred to as the Vendor ID.
Some Internet Service Providers link each of their residential customer accounts to the
MAC addresses of the home network router (or another gateway device). The address seen by
the provider doesn’t change until the customer replaces their gateway, such as by installing a
new router. When a residential gateway is changed, the Internet provider now sees a different
MAC address being reported and blocks that network from going online. A process called “cloning”
solves this problem by enabling the router (gateway) to keep reporting the old MAC address to
the provider even though its own hardware address is different. Administrators can configure
their router (assuming it supports this feature, as many do) to use the cloning option and enter
the MAC address of the old gateway into the configuration screen. When cloning isn’t available,
the customer must contact the service provider to register their new gateway device instead.
Most broadband routers and other wireless access points include an optional feature
called MAC address filtering, or hardware address filtering. It is supposed to improve security
by limiting the devices that can join the network.
On a typical wireless network, any device that has the proper credentials (knows the
SSID and password) can authenticate with the router and join the network, getting an IP address
and access to the internet and any shared resources. MAC address filtering adds an extra layer
to this process. Before letting any device join the network, the router checks the device’s MAC
address against a list of approved addresses. If the client’s address matches one on the router’s
list, access is granted as usual; otherwise, it’s blocked from joining.
To set up MAC filtering on a router, the administrator must configure a list of devices that
should be allowed to join. The physical address of each approved device must be found and
then those addresses need to be entered into the router, and the MAC address filtering option
turned on. Most routers let you see the MAC address of connected devices from the admin
console. If not, you can use your operating system to do it. Once you have the list of MAC
addresses, go into your router's settings and put them in their proper places. For example, you
can enable the MAC filter on a Linksys Wireless-N router through the Wireless > Wireless MAC
Filter page. The same can be done on NETGEAR routers through ADVANCED > Security >
Access Control, and some D-Link routers in ADVANCED > NETWORK FILTER.
In theory, having a router perform this connection check before accepting devices increases
the chances of preventing malicious network activity. The MAC addresses of wireless clients
can’t truly be changed because they’re encoded in the hardware. However, it must be noted
that MAC addresses can be faked, and determined attackers know how to exploit this fact. An
attacker still needs to know one of the valid addresses for that network in order to break in, but
this too is not difficult for anyone experienced in using network sniffer tools. MAC filtering will
only prevent average hackers from gaining network access. Most computer users don’t know
how to spoof their MAC address, let alone find a router’s list of approved addresses.
A network-layer address identifies an entity at the network layer of the OSI model. Network
addresses usually exist within a hierarchical address space and sometimes are called virtual or
logical addresses. The relationship between a network address and a device is logical and
unfixed; it typically is based either on physical network characteristics (the device is on a particular
network segment) or on groupings that have no physical basis (the device is part of an AppleTalk
zone). End systems require one network-layer address for each network-layer protocol they
support. (This assumes that the device has only one physical network connection.) Routers
and other internetworking devices require one network-layer address per physical network
connection for each network-layer protocol supported. A router, for example, with three interfaces
each running AppleTalk, TCP/IP, and OSI must have three network-layer addresses for each
interface. The router therefore has nine network-layer addresses. Figure 1.14 illustrates how
each network interface must be assigned a network address for each protocol supported.
In order to send somebody information over the internet, you need the correct address –
just like sending a regular letter through the mail. In this case, however, it is the IP address. Just
as a letter receives a stamp to ensure it arrives at the correct recipient, data packets get an IP
address. The difference between an IP address and a postal address is that they do not correlate
with a specific location per se: instead, they are automatically or manually assigned to networked
devices during connection setup. The “Internet Protocol” plays an important role in this process.
The Internet Protocol (IP) is the principal communications protocol in the Internet protocol suite
for relaying datagrams across network boundaries. Its routing function enables internetworking,
and essentially establishes the Internet.
Internet Protocol (IP) is a connectionless protocol that is an integral part of the Internet
protocol suite (a collection of around 500 network protocols) and is responsible for the addressing
and fragmentation of data packets in digital networks. Together with the transport layer TCP
(Transmission Control Protocol), IP makes up the basis of the internet. To be able to send a
packet from sender to addressee, the Internet Protocol creates a packet structure which
summarizes the sent information. So, the protocol determines how information about the source
and destination of the data is described and separates this information from the informative
data in the IP header. This kind of packet format is also known as an IP-Datagram. IP has the
task of delivering packets from the source host to the destination host solely based on the IP
addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate
the data to be delivered. It also defines addressing methods that are used to label the datagram
with source and destination information.
In 1974 the Institute of Electrical and Electronics Engineers (IEEE) published a research
paper by the American computer scientists Robert Kahn and Vint Cerf, who described a protocol
model for a mutual packet network connection based on the internet predecessor ARPANET. In
addition to the TCP transmission control protocol, the primary component of this model was the
IP protocol which (aside from a special abstraction layer) allowed for communication across
different physical networks. After this, more and more research networks were consolidated on
the basis of the “TCP/IP” protocol combination, which in 1981 was definitively specified as a standard
in RFC 791.
Today, those who are concerned with the characteristics of a particular IP address, e.g.,
one that would make computers addressable in a local network, will no doubt encounter the two
variants IPv4 and IPv6. However, despite the extensive changes the protocol has undergone, these
are by no means the fourth and sixth generations of the IP protocol. IPv4 is actually the first official
version of the Internet Protocol; its version number reflects the fact that it was specified alongside
the fourth version of TCP. IPv6 is the direct successor of IPv4 – the version number 5 had already
been assigned to the experimental Internet Stream Protocol (ST), so it was never used for a successor.
Even though there have been no further releases since IPv4 and IPv6, the Internet Protocol
has been revised since its first mention in 1974 (before this it was just a part of TCP and did not
exist independently). The focus was essentially on optimizing connection set-up and addressing.
For example, the bit length of host addresses was increased from 16 to 32 bits, extending the
address space to approximately four billion possible addresses. The visionary IPv6 has 128-bit
address fields and allows for about 3.4 × 10^38 different addresses, thus meeting the long-term
need for internet addresses.
The Internet Protocol is responsible for addressing hosts, encapsulating data into
datagrams (including fragmentation and reassembly) and routing datagrams from a source
host to a destination host across one or more IP networks. For these purposes, the Internet
Protocol defines the format of packets and provides an addressing system.
Each datagram has two components: a header and a payload. The IP header includes
source IP address, destination IP address, and other metadata needed to route and deliver the
datagram. The payload is the data that is transported. This method of nesting the data payload
in a packet with a header is called encapsulation.
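The nesting described above can be illustrated with a short Python sketch. The header format here is a deliberately simplified toy (4-byte source, 4-byte destination, 2-byte payload length), not the real IP header layout discussed below.

```python
import struct

def encapsulate(src: bytes, dst: bytes, payload: bytes) -> bytes:
    """Prepend a toy fixed-size header: 4-byte src, 4-byte dst, 2-byte length."""
    header = struct.pack("!4s4sH", src, dst, len(payload))
    return header + payload

def decapsulate(datagram: bytes):
    """Split a datagram back into (src, dst, payload)."""
    src, dst, length = struct.unpack("!4s4sH", datagram[:10])
    return src, dst, datagram[10:10 + length]

# 192.168.0.1 -> 192.168.0.199 carrying a 5-byte payload
packet = encapsulate(b"\xc0\xa8\x00\x01", b"\xc0\xa8\x00\xc7", b"hello")
src, dst, data = decapsulate(packet)
```

The receiver strips the header to recover the payload unchanged, which is exactly what de-encapsulation means.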
IP header of a datagram
As previously mentioned, the Internet Protocol ensures that each data packet is preceded
by the important structural features in the header and is assigned to the appropriate transport
protocol (usually TCP). The header data area was fundamentally revised for version 6,
which is why it is necessary to distinguish between the IPv4 and IPv6 headers.
Every IP header always begins with a 4-bit specification of the Internet Protocol
version number – either IPv4 or IPv6. This is followed by a further 4 bits, which contain information
about the length of the IP header (IP header length, IHL), as this does not always remain constant.
The header length is calculated from this value multiplied by 32 bits.
Thus, the smallest possible header length is 160 bits (equivalent to 20 bytes) when no options
are added. The maximum field value of 15 yields 480 bits (equivalent to 60 bytes). Bits 8 to 15 (type of
service) include instructions for handling and prioritizing the datagram. Here the host can specify
the importance of points such as reliability, throughput and delay in data transmission, for
example.
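The interplay of the version and header-length fields can be sketched in a few lines of Python, assuming only the standard rule that the IHL counts 32-bit words:

```python
def parse_first_header_byte(byte: int):
    """Split the first IP header octet into version (high nibble) and IHL (low nibble)."""
    version = byte >> 4       # upper 4 bits: protocol version
    ihl = byte & 0x0F         # lower 4 bits: header length in 32-bit words
    header_bytes = ihl * 4    # IHL * 32 bits = IHL * 4 bytes
    return version, ihl, header_bytes

# 0x45 is the most common value: IPv4 with the minimal 20-byte header
version, ihl, length = parse_first_header_byte(0x45)
```

Feeding in the maximum IHL of 15 (first octet 0x4F) yields the 60-byte upper bound mentioned above.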
The total length specifies the total size of the data packet- in other words, it adds the size
of the useful data to the header length. Since the field has a length of 16 bits, the maximum limit
is 65,535 bytes. It is stipulated in RFC 791 that each host has to be able to process at least 576
bytes. An IP datagram can be fragmented on its way from the host to routers or other devices
if desired, but the fragments should not be smaller than the 576 bytes mentioned. The other
fields on the IPv4 header have the following meanings:
Identification: All fragments of a datagram have the same identification number that
they receive from the sender. By matching this 16 bit field, the target host can assign individual
fragments to a particular datagram.
Flags: Every IP header contains 3 flag bits, which contain information and guidelines for
fragmentation. The first bit is reserved and always has the value 0. The second bit, called “Don’t
Fragment”, indicates whether the packet may be fragmented (0) or not (1). The last “More
Fragments” bit indicates whether further fragments follow (1) or whether the packet is complete
with the current fragment (0).
Fragment offset: This field informs the target host where a single fragment
belongs, so that the entire datagram can be reassembled easily. The 13-bit length means that
the datagram can be split into up to 8,192 fragments.
Lifespan (Time to Live, TTL): To ensure that a packet on the network cannot migrate
from node to node indefinitely, it is sent with a maximum lifespan (Time to Live). The RFC
standard provides the unit of seconds for this 8-bit field, with a maximum lifetime of 255
seconds. In practice, the TTL is reduced by at least 1 at each network node it passes. If the value
0 is reached, the data packet is automatically discarded.
Protocol: The protocol field (8 bit) assigns the respective transport protocol to the data
packet, for example the value 6 for TCP or the value 17 for the UDP protocol. The official list of
all possible protocols has been managed and maintained by IANA (Internet Assigned Numbers
Authority) since 2002.
Header checksum: The 16-bit “Checksum” field contains the checksum for the header.
This has to be recalculated at every network node, because the TTL changes at each hop. The
accuracy of the user information remains unverified for efficiency reasons.
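The checksum algorithm itself (one’s-complement sum of the header’s 16-bit words, as specified for IP) is simple enough to sketch in Python. The sample header bytes below are illustrative values for a UDP datagram from 192.168.0.1 to 192.168.0.199, chosen for this example rather than taken from the text:

```python
def ipv4_header_checksum(header: bytes) -> int:
    """One's-complement checksum over the header, with the checksum field zeroed."""
    # Zero out the checksum field (bytes 10-11) before summing
    data = header[:10] + b"\x00\x00" + header[12:]
    total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while total > 0xFFFF:                  # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                 # one's complement of the sum

# Sample 20-byte header: TTL 64, protocol 17 (UDP), checksum field 0xB861 embedded
header = bytes.fromhex("45000073000040004011b861c0a80001c0a800c7")
checksum = ipv4_header_checksum(header)    # recomputes to 0xB861
```

Because the TTL field participates in the sum, every hop that decrements the TTL must also recompute this value.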
Source address and destination address: 32 bits (4 bytes) each are reserved for the
IP addresses of the originating and target hosts. These IP addresses are usually written
in the form of 4 decimal numbers separated by dots. The lowest address is 0.0.0.0, and the
highest is 255.255.255.255.
Options: The options field expands the IP protocol with additional information which is
not provided in the standard design. Since these are just optional additions, the field has a
variable length, which is limited by the maximum header length. Examples of possible options
include: “Security” (indicates how secret a datagram is), “Record Route” (records every network
node passed, with its IP address, so the packet route can be traced), and “Time Stamp” (adds
the time at which a particular node was passed).
Unlike its predecessor’s header, the IPv6 protocol has a fixed size of 320 bits (40 bytes).
Less frequently required information can be attached separately between the standard header
and the user data. These extension headers can be compared to the option field of the IPv4
protocol and can be adapted at any time without having to change the actual header. Amongst
other things, you can determine packet routes, specify fragmentation information, or initiate
encrypted communication via IPSec. To optimize performance, a header checksum does not
exist.
Like IPv4, the actual IP header begins with the 4-bit version number of the Internet Protocol.
The following field called “Traffic Class” is equivalent to the “Type of Service” entry in the older
protocol variant. The same rules apply to these 8 bits as in the previous version: they inform the
target host about the qualitative processing of the datagram. A new feature of IPv6 is the
FlowLabel (20 bit), which makes it possible to identify data streams from continuous data packets.
This allows for the reservation of bandwidth and the optimization of routing. The following list
explains the additional header information for the improved IP protocol:
Size of user data (payload length): IPv6 transmits a value for the size of the transported user data,
including the extension headers (16 bits in total). In the previous version, this value had to be
calculated as the total length minus the header length.
Next Header: The 8-bit “Next Header” field is the counterpart of the protocol specification
in IPv4 and therefore has also assumed its function – the assignment of the desired transport
protocol.
Hop-Limit: The Hop limit (8 bit) defines the maximum number of intermediate stations
that a packet can pass through before it is discarded. Just like the TTL in IPv4, the value is
reduced by at least 1 with each node.
Source and destination address: Most of the IPv6 header consists of the addresses of
sender and addressee. As previously mentioned, these have a length of 128 bits (four times
that of IPv4 addresses). There are also significant differences in the standard notation. The newer
version of the Internet Protocol uses hexadecimal numbers and divides them into 8 blocks of 16
bits each. Colons are used instead of dots to separate them. For example, a full
IPv6 address looks something like
2001:0db8:85a3:08d3:1319:8a2e:0370:7344.
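Python’s standard `ipaddress` module can convert between the full and the abbreviated IPv6 notation, which is a convenient way to verify the rules by hand:

```python
import ipaddress

full = "2001:0db8:85a3:08d3:1319:8a2e:0370:7344"
addr = ipaddress.IPv6Address(full)

print(addr.compressed)   # leading zeros in each block are dropped
print(addr.exploded)     # full eight-block notation restored
```

Here no block is entirely zero, so no `::` abbreviation appears; only the leading zeros within blocks are removed.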
Before datagrams can carry the source and destination addresses in their headers, those
addresses must first be assigned to the network participants. A distinction is usually made
between internal (private) and external (public) IP addresses. Three address ranges are
reserved for the former, which are used for communication in local networks:
10.0.0.0 to 10.255.255.255
172.16.0.0 to 172.31.255.255
192.168.0.0 to 192.168.255.255
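The `ipaddress` module also knows these reserved ranges. A quick sketch (note that `is_private` additionally covers ranges such as loopback and link-local, not only the three listed above):

```python
import ipaddress

# One address from each of the three reserved IPv4 ranges for local networks
for ip in ("10.255.0.1", "172.16.0.1", "192.168.1.100"):
    assert ipaddress.ip_address(ip).is_private

# A public address falls outside all reserved ranges
assert not ipaddress.ip_address("93.184.216.34").is_private
```

Such checks are useful, for example, when log analysis has to separate internal hosts from external ones.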
The prefix “fc00::/7” is provided for IPv6 networks. Addresses in these networks are not
routed in the internet and can therefore be freely selected and used in private or company
networks. Addresses are assigned either by manual input or automatically as soon
as the device connects to the network, as long as the automatic address assignment is activated
and a DHCP server is in use. With the help of a subnet mask, this type of local network can also
be selectively segmented into other areas.
External IP addresses are assigned automatically by the respective internet provider when
a connection to the internet is established. All devices that access the internet via a common
router share the same external IP.
external IP. Typically, the providers assign a new internet address every 24 hours from an
address range, which was assigned to them by the IANA. This also applies to the almost
inexhaustible arsenal of IPv6 addresses, which are only partly released for normal use.
Furthermore, it is not just divided into private and public addresses, but it can be distinguished
by much more versatile classification possibilities in so-called “address scopes”:
Host Scope: The loopback address ::1 (0:0:0:0:0:0:0:1) lets a host send IPv6
datagrams to itself.
Link Local Scope: For IPv6 connectivity it is essential that each host has its own
address, even if it is only valid on a local network. This link local address is identified
by the prefix “fe80::/10” and is used for example, for communication with the standard
gateway (router) in order to generate a public IP address.
Unique Local Scope: This is the aforementioned address range “fc00::/7”, which
is exclusively reserved for the configuration of local networks.
Site Local Scope: The site local scope is a now-outdated prefix “fec0::/10”, which
was also defined for local networks. However, as soon as different networks were
connected, or VPN connections were made between networks numbered with
site-local addresses, address conflicts arose, and the standard was deprecated.
Global Scope: Any host that wants to connect to the internet needs at least one
public address of its own. This is obtained by auto-configuration, either via SLAAC
(stateless address autoconfiguration) or DHCPv6 (stateful address
configuration).
Multicast Scope: Network nodes, routers, servers and other network services can
be grouped into multicast groups using IPv6. Each of these groups has its own
address, which allows a single packet to reach all the hosts involved. The prefix
“ff00::/8” indicates that a multicast address follows.
Whenever a data packet needs to be sent via TCP/IP, the overall size is automatically
checked. If the size is above the maximum transmission unit of the respective network interface,
the information is fragmented, i.e., broken down into smaller data blocks.
The sending host (IPv6) or an intermediate router (IPv4) takes over this task. By default,
the packet is reassembled by the recipient, which accesses the fragmentation information stored in
the IP header or in the extension header. In exceptional cases, reassembly can also be
taken over by a firewall, as long as it is configured accordingly.
When a router receives a packet, it examines the destination address, determines the
outgoing interface, and looks up that interface’s Maximum Transmission Unit (MTU). If the packet size
is bigger than the MTU, and the Don’t Fragment (DF) bit in the packet’s header is set to 0, then
the router may fragment the packet.
The router divides the packet into fragments. The max size of each fragment is the MTU
minus the IP header size (20 bytes minimum; 60 bytes maximum). The router puts each fragment
into its own packet, each fragment packet having the following changes:
The More Fragments (MF) flag is set for all fragments except the last one, in which it is
set to 0.
The fragment offset field is set, based on the offset of the fragment in the original
data payload. This is measured in units of eight-byte blocks.
For example, for an MTU of 1,500 bytes and a header size of 20 bytes, the fragment
offsets would be multiples of (1500–20)/8 = 185. These multiples are 0, 185, 370, 555, 740,...
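The offset arithmetic above can be sketched as a small Python function. This is a simplified model that assumes a fixed header length and ignores options:

```python
def fragment_offsets(payload_len: int, mtu: int, header_len: int = 20):
    """Return (offset, size) pairs for fragmenting payload_len bytes of data.

    Each fragment carries at most mtu - header_len bytes, rounded down to a
    multiple of 8, because offsets are counted in 8-byte blocks.
    """
    max_data = (mtu - header_len) // 8 * 8   # 1480 for MTU 1500, 20-byte header
    fragments = []
    offset = 0                               # in 8-byte units, as in the header
    while offset * 8 < payload_len:
        size = min(max_data, payload_len - offset * 8)
        fragments.append((offset, size))
        offset += max_data // 8              # advance by 185 units here
    return fragments

frags = fragment_offsets(4500, 1500)         # offsets 0, 185, 370, plus a tail
```

For 4,500 bytes of data the sketch produces three full 1,480-byte fragments at offsets 0, 185 and 370, and a final 60-byte fragment at offset 555, matching the multiples listed above.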
It is possible that a packet is fragmented at one router, and that the fragments are further
fragmented at another router. For example, a packet of 4,520 bytes, including the 20 bytes of
the IP header (without options), is fragmented into two packets on a link with an MTU of 2,500
bytes:
The total data size is preserved: 2480 bytes + 2020 bytes = 4500 bytes. The offsets are 0
and 0 + 2480/8 = 310.
On a link with an MTU of 1,500 bytes, each fragment results in two fragments:
Again, the data size is preserved: 1480 + 1000 = 2480, and 1480 + 540 = 2020.
Also in this case, the More Fragments (MF) bit is carried over: fragments produced from a
fragment that had MF = 1 keep MF = 1, and only the very last fragment of the original packet
has MF = 0. And of course, the Identification field continues to have the same value in
all re-fragmented fragments. This way, even if fragments are re-fragmented, the receiver knows
they all initially started from the same packet.
The last offset and last data size are used to calculate the total data size: 495*8 + 540 =
3960 + 540 = 4500.
Since IPv6 no longer allows routers to fragment packets, an IP packet must already have a
suitable size before sending. If a router receives IPv6 datagrams that are larger than the
maximum transmission unit, it discards them and informs the sender with an ICMPv6 type 2
“Packet Too Big” message. The data
sending application can now either create smaller, non-fragmented packets, or initiate
fragmentation. Subsequently, the appropriate extension header is added to the IP packet, so
that the target host can also reassemble the individual fragments after reception.
1.3.2.3 Reassembly
A receiver knows that a packet is a fragment if at least one of the following conditions is
true:
The “more fragments” flag is set. (This is true for all fragments except the last.)
The “fragment offset” field is nonzero. (This is true for all fragments except the
first.)
The receiver identifies matching fragments using the foreign and local address, the protocol
ID, and the identification field. The receiver reassembles the data from fragments with the
same ID using both the fragment offset and the more fragments flag. When the receiver receives
the last fragment (which has the “more fragments” flag set to 0), it can calculate the length of
the original data payload, by multiplying the last fragment’s offset by eight, and adding the last
fragment’s data size. In the example above, this calculation was 495*8 + 540 = 4500 bytes.
When the receiver has all fragments, they can be correctly ordered by using the offsets,
and reassembled to yield the original data segment.
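The reassembly procedure can be sketched in Python. The sketch assumes fragments are represented as (offset, more-fragments flag, data) tuples, with offsets in 8-byte units exactly as carried in the header:

```python
def reassemble(fragments):
    """Rebuild the original payload from (offset, mf, data) tuples."""
    fragments = sorted(fragments, key=lambda f: f[0])
    last_offset, last_mf, last_data = fragments[-1]
    assert last_mf == 0, "last fragment must have More Fragments = 0"
    # Total length = last offset * 8 + last fragment's data size
    total = last_offset * 8 + len(last_data)
    buffer = bytearray(total)
    for offset, _mf, data in fragments:
        buffer[offset * 8:offset * 8 + len(data)] = data
    return bytes(buffer)

payload = bytes(range(256)) * 10             # 2,560 bytes of test data
frags = [(0, 1, payload[:1480]),             # first fragment, MF = 1
         (185, 0, payload[1480:])]           # last fragment, MF = 0
assert reassemble(frags) == payload
```

The length formula in the sketch is the same one used in the example above: 495 * 8 + 540 = 4,500 bytes.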
By building on the functionality provided by the Internet Protocol (IP), the transport protocols
deliver data to applications executing in the IP host. This is done by making use of ports. The
transport protocols can provide additional functionality such as congestion control, reliable data
delivery, duplicate data suppression, and flow control as is done by TCP.
1.4.1 Ports
The concept of ports provides a way to uniformly and uniquely identify connections and
the programs and hosts that are engaged in them, irrespective of specific process IDs.
Each process that wants to communicate with another process identifies itself to the
TCP/IP protocol suite by one or more ports. A port is a 16-bit number, used by the host-to-host
protocol to identify to which higher level protocol or application program (process) it must deliver
incoming messages. There are two types of port:
Well-known: Well-known ports belong to standard servers, for example, Telnet uses
port 23. Well-known port numbers range between 1 and 1023 (prior to 1992, the
range between 256 and 1023 was used for UNIX-specific servers). Well-known
port numbers are typically odd, because early systems using the port concept
required an odd/even pair of ports for duplex operations. Most servers require only
a single port. Exceptions are the BOOTP server, which uses two (67 and 68), and
the FTP server, which uses two (20 and 21).
The well-known ports are controlled and assigned by the Internet Assigned Numbers
Authority (IANA) and on most systems can only be used by system processes or by
programs executed by privileged users. The reason for well-known ports is to allow
clients to be able to find servers without configuration information. The well-known
port numbers are defined in STD 2 - Assigned Internet Numbers.
Ephemeral: Clients do not need well-known port numbers because they initiate
communication with servers and the port number they are using is contained in the
UDP datagrams sent to the server. Each client process is allocated a port number
for as long as it needs it by the host it is running on. Ephemeral port numbers have
values greater than 1023, normally in the range 1024 to 65535. A client can use any
number allocated to it, as long as the combination of <transport protocol, IP address,
port number> is unique.
Ephemeral ports are not controlled by IANA and can be used by ordinary user-
developed programs on most systems. Confusion, due to two different applications
trying to use the same port numbers on one host, is avoided by writing those
applications to request an available port from TCP/IP. Because this port number is
dynamically assigned, it may differ from one invocation of an application to the
next.
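Ephemeral port allocation can be observed directly with Python’s `socket` module: binding to port 0 asks the operating system to allocate an ephemeral port, which typically differs from one run to the next.

```python
import socket

# Binding to port 0 asks the OS to allocate an ephemeral port for us
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.bind(("127.0.0.1", 0))
    _, port = sock.getsockname()
    print(f"OS-assigned ephemeral port: {port}")
```

Client applications generally rely on exactly this mechanism rather than choosing a port number themselves.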
UDP and TCP use the same port principle. To the best possible extent, the same
port numbers are used for the same services on top of UDP and TCP.
UDP protocol ports distinguish multiple applications running on a single device
from one another.
Unlike the TCP, UDP adds no reliability, flow control, or error-recovery functions to IP.
Because of UDP’s simplicity, UDP headers contain fewer bytes and consume less network
overhead than TCP. UDP is useful in situations in which the reliability mechanisms of TCP are
not necessary, such as in cases where a higher-layer protocol might provide error and flow
control.
UDP is the transport protocol for several well-known application layer protocols, including
Network File System (NFS), Simple Network Management Protocol (SNMP), Domain Name
System (DNS), and Trivial File Transfer Protocol (TFTP).
The UDP packet format contains four fields, as shown in Figure 1-17. These include
Source Port, Destination Port, Length, and Checksum fields.
The Source and Destination port fields contain the 16-bit UDP protocol port numbers
used to de-multiplex datagrams for receiving application layer processes. A Length field specifies
the length of the UDP header and data. The Checksum field provides an (optional) integrity
check on the UDP header and data.
TCP provides reliable stream data transfer, efficient flow control, full-duplex operation,
and multiplexing. With stream data transfer, TCP delivers an unstructured
stream of bytes identified by sequence numbers. This service benefits applications because
they do not have to chop data into blocks before handing it off to TCP. Instead, TCP groups
bytes into segments and passes them to IP for delivery.
TCP offers efficient flow control, which means that, when sending acknowledgments back
to the source, the receiving TCP process indicates the highest sequence number that it can
receive without overflowing its internal buffers. Full-duplex operation means that TCP processes
can both send and receive at the same time. Finally, TCP’s multiplexing means that numerous
simultaneous upper-layer conversations can be multiplexed over a single connection.
To use reliable transport services, TCP hosts must establish a connection-oriented session
with one another. Connection establishment is performed by using a three-way handshake
mechanism.
Each host randomly chooses a sequence number used to track bytes within the stream
that it is sending and receiving. Then, the three-way handshake proceeds in the following manner:
(a) The first host (Host A) initiates a connection by sending a packet with the initial
sequence number (X) and SYN bit set to indicate a connection request.
(b) The second host (Host B) receives the SYN, records the sequence number X, and
replies by acknowledging the SYN (with an ACK =X + 1). Host B includes its own
initial sequence number (SEQ = Y). An ACK of 20 means that the host has received
bytes 0 through 19, and expects byte 20 next. This technique is called forward
acknowledgment.
(c) Host A then acknowledges all bytes that Host B sent with a forward acknowledgment
indicating the next byte Host A expects to receive (ACK = Y + 1).
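The sequence-number bookkeeping of steps (a) to (c) can be modelled in a few lines of Python. This is a toy simulation of the numbers exchanged, not of a real TCP stack (which chooses initial sequence numbers more carefully than plain randomness):

```python
import random

def three_way_handshake():
    """Simulate the sequence/acknowledgment numbers of a TCP connection setup."""
    x = random.randrange(2**32)  # Host A's initial sequence number
    y = random.randrange(2**32)  # Host B's initial sequence number

    syn = {"flags": "SYN", "seq": x}                                  # step (a)
    syn_ack = {"flags": "SYN-ACK", "seq": y, "ack": (x + 1) % 2**32}  # step (b)
    ack = {"flags": "ACK", "ack": (y + 1) % 2**32}                    # step (c)
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
```

Each acknowledgment is a forward acknowledgment: it carries the next sequence number the host expects, i.e. the peer’s initial number plus one.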
Segment headers contain sender and recipient ports, segment ordering information, and
a checksum. The TCP protocols on both hosts use the checksum to
determine whether data has transferred without error. We will see more about these headers in
TCP Packet Structure section.
A simple transport protocol might implement a reliability and flow control technique in
which the source sends one packet, starts a timer, and waits for an acknowledgment before
sending a new packet. If the acknowledgment is not received before the timer expires, the
source retransmits the packet. Such a technique is called positive acknowledgment and
retransmission (PAR).
By assigning each packet a sequence number, PAR enables hosts to track lost or duplicate
packets caused by network delays that result in premature retransmission. The sequence
numbers are sent back in the acknowledgments so that the acknowledgments can be tracked.
PAR is an inefficient use of bandwidth, however, because a host must wait for an
acknowledgment before sending a new packet, and only one packet can be sent at a time.
A TCP sliding window provides more efficient use of network bandwidth than PAR because
it enables hosts to send multiple bytes or packets before waiting for an acknowledgment.
In TCP, the receiver specifies the current window size in every packet. Because TCP
provides a byte-stream connection, window sizes are expressed in bytes. This means that a
window is the number of data bytes that the sender is allowed to send before waiting for an
acknowledgment. Initial window sizes are indicated at connection setup but might vary throughout
the data transfer to provide flow control. A window size of zero, for instance, means “Send no
data.”
In a TCP sliding-window operation, for example, the sender might have a sequence of
bytes to send (numbered 1 to 10) to a receiver who has a window size of 5. The sender then
would place a window around the first 5 bytes and transmit them together. It would then wait for
an acknowledgment.
The receiver would respond with an ACK of 6, indicating that it has received bytes 1 to 5
and is expecting byte 6 next. In the same packet, the receiver would indicate that its window
size is 5. The sender then would move the sliding window 5 bytes to the right and transmit bytes
6 to 10. The receiver would respond with an ACK of 11, indicating that it is expecting sequenced
byte 11 next. In this packet, the receiver might indicate that its window size is 0 (because, for
example, its internal buffers are full). At this point, the sender cannot send any more bytes until
the receiver sends another packet with a window size greater than 0.
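The sender’s side of the example above can be sketched as a simple loop. This simulation is deliberately simplified: acknowledgments are assumed to arrive immediately and without loss, and each advertised window is consumed in one burst.

```python
def sliding_window_transfer(data: bytes, window_sizes):
    """Simulate the sender: transmit up to `window` bytes, then wait for the ACK.

    window_sizes yields the receiver's advertised window after each burst.
    Returns the (first_byte, last_byte) ranges sent, 1-indexed as in the text.
    """
    sent_ranges = []
    next_byte = 1
    for window in window_sizes:
        if window == 0 or next_byte > len(data):
            break                            # window closed: stop sending
        last = min(next_byte + window - 1, len(data))
        sent_ranges.append((next_byte, last))
        next_byte = last + 1                 # the ACK advances the window edge
    return sent_ranges

# Ten bytes; the receiver advertises a window of 5 twice, then closes it with 0
bursts = sliding_window_transfer(b"ABCDEFGHIJ", [5, 5, 0])
```

The result reproduces the example: bytes 1 to 5 in the first burst, bytes 6 to 10 in the second, and nothing more once the advertised window drops to zero.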
The following descriptions summarize the TCP packet fields illustrated in Figure 1.16
Source port and destination port — Identifies points at which upper-layer source
and destination processes receive TCP services.
Sequence number — Usually specifies the number assigned to the first byte of
data in the current message. In the connection-establishment phase, this field also
can be used to identify an initial sequence number to be used in an upcoming
transmission.
Data offset — Indicates the number of 32-bit words in the TCP header.
Flags — Carries a variety of control information. There are 6 types of TCP flags:-
SYN - Synchronizes sequence numbers; set in the first segments of the three-way
handshake to initiate a connection.
ACK - Indicates that the acknowledgment number field is valid; set in almost every
segment after connection establishment.
FIN - Signals that the sender has finished sending data and wants to close its side
of the connection.
RST - Resets (aborts) the connection in response to an error or an unexpected
segment.
PSH - Informs the receiving host that the data should be pushed up to the receiving
application immediately instead of waiting for additional data to be gathered.
URG – TCP creates a special segment in which it sets the URG flag and also the
urgent pointer field in case data needs to be sent urgently. This causes the receiving
TCP to forward the urgent data on a separate channel to the application. This allows
the application to process the data out of band.
Window — Specifies the size of the sender’s receive window (that is, the buffer
space available for incoming data).
Urgent pointer — Points to the first urgent data byte in the packet.
Options — Specifies various TCP options. These are discussed in detail in the next
section, “TCP Header Options”.
The TCP options discussed in this material are:
End of Option List
No Operation (NOP)
Maximum Segment Size (MSS)
Window Scaling
SACK Permitted / SACK
Timestamps
Options may occupy space at the end of the TCP header and are a multiple of 8 bits in
length. All options are included in the checksum. An option may begin on any octet boundary.
There are two cases for the format of an option:
Case 1: A single octet of option-kind.
Case 2: An octet of option-kind, an octet of option-length, and the actual option-
data octets.
The option-length counts the two octets of option-kind and option-length as well as the
option-data octets. Note that the list of options may be shorter than the data offset field might
imply. The content of the header beyond the End-of-Option option must be header padding
(i.e., zero).
End of Option List
This option code indicates the end of the option list. This might not coincide with the end
of the TCP header according to the Data Offset field. This is used at the end of all options, not
the end of each option, and need only be used if the end of the options would not otherwise
coincide with the end of the TCP header.
No Operation
This option code may be used between options, for example, to align the beginning of a
subsequent option on a word boundary. There is no guarantee that senders will use this option,
so receivers must be prepared to process options even if they do not begin on a word boundary.
Maximum Segment Size
If this option is present, then it communicates the maximum receive segment size at the
TCP which sends this segment. This field must only be sent in the initial connection request
(i.e., in segments with the SYN control bit set). If this option is not used, any segment size is
allowed.
Window Scaling
The three-byte Window Scale option MAY be sent in a <SYN> segment by a TCP. It has
two purposes: (1) indicate that the TCP is prepared to both send and receive window scaling,
and (2) communicate the exponent of a scale factor to be applied to its receive window. Thus,
a TCP that is prepared to scale windows SHOULD send the option, even if its own scale factor
is 1 and the exponent 0. The scale factor is limited to a power of two and encoded logarithmically,
so it may be implemented by binary shift operations. The maximum scale exponent is limited to
14, for a maximum permissible receive window size of 1 GiB (2^(14+16)).
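The shift-based scaling described above can be illustrated with a short sketch. The function name is my own; it simply applies the Window Scale exponent to the 16-bit Window field value.

```python
# Sketch: the effective receive window under TCP window scaling.
# The 16-bit Window field is left-shifted by the scale exponent
# carried in the Window Scale option; the exponent is capped at 14.

def effective_window(window_field: int, shift: int) -> int:
    if not 0 <= window_field <= 0xFFFF:
        raise ValueError("Window field is 16 bits")
    if not 0 <= shift <= 14:
        raise ValueError("scale exponent is limited to 14")
    # The scale factor is a power of two, so a binary shift suffices.
    return window_field << shift

# Largest possible window: 65535 << 14, just under 1 GiB (2**30 bytes)
print(effective_window(0xFFFF, 14))  # 1073725440
```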
SACK Permitted
This two-byte option may be sent in a SYN by a TCP that has been extended to receive
(and presumably process) the SACK option once the connection has opened. It MUST NOT be
sent on non-SYN segments.
SACK
The SACK option itself is sent by the data receiver, once the connection has opened, to
inform the sender of non-contiguous blocks of data that have been received and queued; the
sender can then retransmit only the segments that are actually missing (RFC 2018).
Timestamps
The Timestamps option is introduced to address some of the issues such as latency and
error checking. The Timestamps option is specified in a symmetrical manner, so that Timestamp
Value (TSval) timestamps are carried in both data and <ACK> segments and are echoed in
Timestamp Echo Reply (TSecr) fields carried in returning <ACK> or data segments. Originally
used primarily for timestamping individual segments, the properties of the Timestamps option
allow for taking time measurements as well as additional uses.
The Timestamps option is important when large receive windows are used to allow the
use of the Protection Against Wrapped Sequence Number (PAWS) mechanism.
TCP is a symmetric protocol, allowing data to be sent at any time in either direction, and
therefore timestamp echoing may occur in either direction. For simplicity and symmetry, we
specify that timestamps always be sent and echoed in both directions. For efficiency, we combine
the timestamp and timestamp reply fields into a single TCP Timestamps option.
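The PAWS check mentioned above can be sketched as a modular comparison. This is a simplified illustration of the idea (assuming RFC 7323 semantics), not a complete implementation: a segment is rejected as an old duplicate when its TSval is "before" the most recently seen timestamp, compared in 32-bit sequence space so that wraparound is handled.

```python
# Sketch of the PAWS acceptance test: True when tsval >= ts_recent
# in modulo-2**32 arithmetic, so wrapped timestamps compare correctly.

def paws_acceptable(tsval: int, ts_recent: int) -> bool:
    return ((tsval - ts_recent) & 0xFFFFFFFF) < 0x80000000

print(paws_acceptable(1000, 900))       # True: newer timestamp
print(paws_acceptable(900, 1000))       # False: old duplicate, discard
print(paws_acceptable(5, 0xFFFFFFF0))   # True: timestamp has wrapped
```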
Summary
Data networks are systems of end devices, intermediary devices, and the media
connecting the devices. For communication to occur, these devices must know how
to communicate.
These devices must comply with communication rules and protocols. TCP/IP is an
example of a protocol suite.
Most protocols are created by a standards organization such as the IETF or IEEE.
The most widely-used networking models are the OSI and TCP/IP models.
Data that passes down the stack of the OSI model is segmented into pieces and
encapsulated with addresses and other labels. The process is reversed as the pieces
are de-encapsulated and passed up the destination protocol stack.
The OSI model describes the processes of encoding, formatting, segmenting, and
encapsulating data for transmission over the network.
The TCP/IP protocol suite is an open standard protocol that has been endorsed by
the networking industry and ratified, or approved, by a standards organization.
The Internet Protocol Suite is a suite of protocols required for transmitting and
receiving information using the Internet.
Protocol Data Units (PDUs) are named according to the protocols of the TCP/IP
suite: data, segment, packet, frame, and bits.
Review Questions
Describe the structure of a TCP packet.
Enumerate the main differences between the OSI and TCP/IP models.
References
Internetworking Technologies Handbook - by Cisco Systems Inc.
Packet Guide to Core Network Protocols by Bruce Hartpence
TCP/IP Tutorial and Technical Overview (IBM Redbooks)
Online Sources
http://what-when-how.com/data-communications-and-networking/networkmodels-data-communications-and-networking/
https://en.wikipedia.org/
http://www.iana.org/go/rfc793
http://www.iana.org/go/rfc7323
https://tools.ietf.org/html/rfc2018
UNIT - 2
IP ROUTING
Learning Objectives
This chapter acts as a foundation for the technology discussions that follow. The topic of
routing is covered to make sure you understand the basics of routing and routing protocols.
At the end of the chapter a learner should be able to clearly understand the process of
routing and the key concepts of the following:
Classes of IP Addressing
Structure
2.1 Introduction
In this chapter, some fundamental concepts and terms used in the evolving language of
internetworking are addressed. The topic of routing has been covered in computer science
literature for over two decades, but routing only achieved commercial popularity in the
mid-1980s. The primary reason for this time lag is the nature of networks in the 1970s. During this
time, networks were fairly simple, homogeneous environments. Only recently has large-scale
internetworking become popular.
2.1.1 IP Routing
IP routing is the process of sending packets from a host on one network to another
host on a different, remote network. This process is usually done by routers. Routers examine
the destination IP address of a packet, determine the next-hop address, and forward the packet.
Routers use routing tables to determine a next hop address to which the packet should be
forwarded. Consider the following example of IP routing:
Host A wants to communicate with host B, but host B is on another network. Host A is
configured to send all packets destined for remote networks to router R1. Router R1 receives
the packets, examines the destination IP address and forwards the packet to the outgoing
interface associated with the destination network.
A default gateway is a router that hosts use to communicate with other hosts on remote
networks. A default gateway is used when a host doesn’t have a route entry for the specific
remote network and doesn’t know how to reach that network. Hosts can be configured to send
all packets destined to remote networks to a default gateway, which has a route to reach that
network. The following example explains the concept of a default gateway more thoroughly.
Host A has an IP address of the router R1 configured as the default gateway address.
Host A is trying to communicate with host B, a host on another, remote network. Host A looks up
in its routing table to check if there is an entry for that destination network. If the entry is not
found, the host sends all data to the router R1. Router R1 receives the packets and forwards
them to host B.
Each router maintains a routing table and stores it in RAM. A routing table is used by
routers to determine the path to the destination network. Each routing table entry consists of the
following fields:
destination network – the remote network the route leads to, together with its subnet
mask.
next-hop address – the IP address of the neighboring router to which matching packets
are forwarded.
outgoing interface – the outgoing interface the packet should go out to reach the
destination network.
Consider the following example. Host A wants to communicate with host B, but host B is
on another network. Host A is configured to send all packets destined for remote networks to
the router. The router receives the packets, checks the routing table to see if it has an entry for
the destination address. If it does, the router forwards the packet out the appropriate interface
port. If the router doesn’t find the entry, it discards the packet.
We can use the show ip route command from privileged EXEC (enable) mode to display
the router’s routing table.
In the sample output of this command, this router has two directly connected routes to the
subnets 10.0.0.0/8 and 192.168.0.0/24. The character C in the routing table indicates that a
route is a directly connected route. So when host A sends the packet to host B, the router will
look up into its routing table and find the route to the 10.0.0.0/8 network on which host B
resides. The router will then use that route to route packets received from host A to host B.
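The lookup the router performs can be sketched in a few lines. This is an illustrative model only, using the two connected routes from the example; the interface names are assumptions. When several entries cover a destination, the most specific (longest-prefix) match wins.

```python
import ipaddress

# Hypothetical routing table mirroring the example above.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "Fa0/1"),
    (ipaddress.ip_network("192.168.0.0/24"), "Fa0/0"),
]

def lookup(destination: str):
    dst = ipaddress.ip_address(destination)
    # Collect all routes covering the destination, then prefer the
    # longest-prefix (most specific) match.
    matches = [entry for entry in routing_table if dst in entry[0]]
    if not matches:
        return None  # no matching entry: the packet is discarded
    return max(matches, key=lambda entry: entry[0].prefixlen)

print(lookup("10.0.0.25"))    # matches 10.0.0.0/8, forwarded out Fa0/1
print(lookup("172.16.0.1"))   # None: no route, packet discarded
```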
When a packet travels from a source host to a destination host on a remote network, it
goes through a complex and lengthy process called the IP routing process. If you have a basic
idea of networks, network devices, network protocols, and the OSI model, you can easily
understand and describe the IP routing process. The step-by-step IP routing process is discussed
in detail below.
The IP routing process is quite simple and remains unchanged regardless of the number
of connected devices and the size of the network being used. We will use the following simple
network design to explain the step by step IP routing process.
In order to understand the IP routing process, we will explain the communication between
PC1 and PC2 that are interconnected to each other using a router. We will use the Internet
Control Message Protocol (ICMP) protocol (used by the ping utility) to test and explain the IP
routing process between PC1 and PC2. Let’s see what happens when PC1 communicates to
PC2 on a different network using a router.
When a user executes the ping 192.168.1.1 command, a packet is generated on PC1
with the help of the IP and ICMP protocols.
The IP protocol will determine whether the destination of this packet is local or remote by
comparing the destination address against PC1’s own IP address and subnet mask
(192.168.0.1/24). Since the packet is destined for a remote network (192.168.1.0/24), it must be
sent to the router using the gateway address (192.168.0.100).
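This local-or-remote decision can be sketched as follows, using the addresses from the example (PC1 = 192.168.0.1/24, gateway = 192.168.0.100). The function name is illustrative.

```python
import ipaddress

# PC1's configured interface and default gateway, from the example.
pc1 = ipaddress.ip_interface("192.168.0.1/24")
default_gateway = "192.168.0.100"

def next_hop(destination: str) -> str:
    # Same subnet: deliver directly; otherwise hand off to the gateway.
    if ipaddress.ip_address(destination) in pc1.network:
        return destination
    return default_gateway

print(next_hop("192.168.0.50"))  # 192.168.0.50 (local delivery)
print(next_hop("192.168.1.1"))   # 192.168.0.100 (sent via the router)
```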
In order to send the packet to the router, the hardware address of the router’s interface
(Fa0/0) is required. To get the hardware address, the ARP cache will be checked. If the IP
address has not already been resolved to a hardware address, it will not be present in the ARP
cache. In that case, the host will send an ARP broadcast looking for the hardware
address of IP address 192.168.0.100.
The router will respond with the hardware address of the Fa0/0 interface connected to
PC1’s local network, and the packet will be handed over to the Data Link layer.
The Data Link layer will create a frame that includes the source and destination hardware
addresses, the Type field, and Frame Check Sequence (FCS).
The Type field is used to specify the network layer protocol, and the FCS field is
used to carry the Cyclic Redundancy Check (CRC) value for error detection.
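The FCS check can be illustrated with Python's zlib.crc32, which uses the same CRC-32 polynomial as the Ethernet FCS. The frame bytes here are a stand-in, not a real Ethernet frame layout.

```python
import zlib

# Sender computes a CRC-32 over the frame and appends it as the FCS.
frame = b"dst-mac|src-mac|type|payload"
fcs = zlib.crc32(frame)

# Receiver recomputes the CRC and compares it with the received FCS.
ok = zlib.crc32(frame) == fcs
print(ok)  # True: no bit errors detected

# A single flipped bit changes the CRC, so the frame would be discarded.
corrupted = b"dst-mac|src-mac|type|pAyload"
print(zlib.crc32(corrupted) == fcs)  # False
```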
Now, the Data Link layer will hand over the frame to the Physical layer. The Physical layer
will encode the binary bit stream (1s and 0s) into a digital signal (signaling, analog or digital,
depends on the type of media being used). The signals then will be transmitted on the local
physical network.
The router’s interface (Fa0/0) will receive the signals and decode them back into the
binary bit stream. Next, the router’s interface will rebuild the frame and run a CRC check,
comparing the result with the FCS field at the end of the frame to ensure that there is no error in
the frame.
Now, the destination hardware address and the Type field will be checked to determine
what the router should do with this frame. Since IP is in the Type field, the router will hand over
the packet to the IP protocol running on the router.
Now, the packet’s destination IP address will be checked in the routing table to determine
where the packet should be forwarded. Since the destination IP address is 192.168.1.1, the
router will see in the routing table that 192.168.1.0 network is directly connected through the
Fa0/1 interface.
If the routing table does not contain the routing information about the destination network
(192.168.1.0), the packet will be discarded and the destination host unreachable message will
be sent out to source device (PC1).
Next, the router will place the packet in the buffer memory of the Fa0/1 interface. Now, the
router will create a frame to send the packet to the destination host. First, the ARP cache will be
checked to determine whether the hardware address has already been resolved or not. If the
hardware address is not in the ARP cache, the router will send an ARP broadcast out to Fa0/1
interface to find the hardware address of 192.168.1.1.
PC2 will respond with the hardware address of its Network Interface Card (NIC), in this
case, Ethernet 0, with an ARP reply. The router’s Fa0/1 interface now has everything that is
required to send the packet to the final destination. Now, the router will send the frame to PC2.
Once the frame is received by PC2, the CRC value will be calculated. If everything is OK,
the packet will be handed over to IP to check the destination IP address. The IP destination
address will match with the IP address of PC2. Next, the protocol field of the packet will be
checked to determine what the purpose of the packet is.
Since the packet is an ICMP echo request (ping), PC2 will generate a new ICMP echo-
reply packet containing a source IP address of PC2 and a destination IP address of PC1. The
process will start all over again; however, this time, it will go in the reverse direction (PC2 to
PC1). Since the hardware addresses of each device have already been resolved, each
device will only look in its ARP cache to determine the hardware address of each interface.
Finally, an ICMP echo-reply will be received by PC1 from PC2. That is all that happens
when one device communicates with another device through a router.
Routing algorithms fill routing tables with a variety of information. Destination/next hop
associations tell a router that a particular destination can be gained optimally by sending the
packet to a particular router representing the “next hop” on the way to the final destination.
When a router receives an incoming packet, it checks the destination address and attempts to
associate this address with a next hop. Figure 2.6 shows an example of a destination/next hop
routing table.
Routing tables can also contain other information, such as information about the desirability
of a path. Routers compare metrics to determine optimal routes. Metrics differ depending on
the design of the routing algorithm being used.
Routers communicate with one another (and maintain their routing tables) through the
transmission of a variety of messages. The routing update message is one such message.
Routing updates generally consist of all or a portion of a routing table.
By analyzing routing updates from all routers, a router can build a detailed picture of
network topology. A link-state advertisement is another example of a message sent between
routers. Link-state advertisements inform other routers of the state of the sender’s links. Link
information can also be used to build a complete picture of network topology. Once the network
topology is understood, routers can determine optimal routes to network destinations.
Routing algorithms can be differentiated based on several key characteristics. First, the
particular goals of the algorithm designer affect the operation of the resulting routing protocol.
Second, various types of routing algorithms exist, and each algorithm has a different impact on
network and router resources. Finally, routing algorithms use a variety of metrics that affect
calculation of optimal routes.
Routing algorithms often have one or more of the following design goals:
Optimality
Rapid convergence
Flexibility
However, we will be understanding various routing protocols in the coming sections based
on static or dynamic routing classification as that is how routing algorithms are widely classified.
Subnets directly connected to a router’s interface are added to the router’s routing table.
The interface has to have an IP address configured, and both interface status codes must be in
the up/up state. A router will be able to route all packets destined for hosts in subnets
directly connected to its active interfaces.
Consider the following example. The router has two active interfaces, Fa0/0 and Fa0/1.
Each interface has been configured with an IP address and is currently in the up-up state, so
the router adds these subnets to its routing table.
In the sample output, the router has two directly connected routes to the
subnets 10.0.0.0/8 and 192.168.0.0/24. The character C in the routing table indicates that a
route is a directly connected route.
NOTE:- You can display only the connected routes in a router’s routing table by typing the
show ip route connected command.
By adding static routes, a router can learn a route to a remote network that is not directly
connected to one of its interfaces. Static routes are configured manually by typing the global
configuration mode command ip route DESTINATION_NETWORK SUBNET_MASK
NEXT_HOP_IP_ADDRESS. This type of configuration is usually used only in smaller networks,
because it does not scale well (you have to configure each route on each router).
A simple example will help you understand the concept of static routes.
First, consider the router A’s routing table before we add the static route:
Now, we’ll use the ip route command to configure router A to reach the subnet
10.0.0.0/24. The router now has the route to reach the subnet.
The character S in the routing table indicates that a route is a statically configured route.
Another version of the ip route command exists: instead of specifying the next-hop IP
address, you can specify the exit interface of the local router.
A router can learn dynamic routes if a routing protocol is enabled. A routing protocol is
used by routers to exchange routing information with each other. Every router in the network
can then use that information to build its routing table. A routing protocol can dynamically choose
a different route if a link goes down, so this type of routing is fault-tolerant. Also, unlike with static
routing, there is no need to manually configure every route on every router, which greatly reduces
the administrative overhead. You only need to define which routes will be advertised on the routers
that connect directly to the corresponding subnets – the routing protocols take care of the rest.
The disadvantage of dynamic routing is that it increases memory and CPU usage on a
router, because every router has to process received routing information and calculate its routing
table. To better understand the advantages that dynamic routing protocols bring, consider the
following example:
Both routers are running a routing protocol, namely EIGRP. There are no static routes on
Router A, so it doesn’t know how to reach the subnet 10.0.0.0/24 that is directly connected to
Router B. Router B then advertises the subnet to Router A using EIGRP. Now Router A has the
route to reach the subnet. This can be verified by typing the show ip route command:
You can see that Router A has learned the subnet from EIGRP. The letter D in front of the
route indicates that the route has been learned through EIGRP. If the subnet 10.0.0.0/24 fails,
Router B can immediately inform Router A that the subnet is no longer reachable.
A network can use more than one routing protocol, and routers on the network can learn
about a route from multiple sources. Routers need to find a way to select a better path when
there are multiple paths available. The administrative distance is used by routers to decide
which route is better (a lower number is better). For example, if the same route is learned from
both RIP and EIGRP, a Cisco router will choose the EIGRP route and store it in the routing table.
This is because EIGRP routes have (by default) an administrative distance of 90, while RIP
routes have a higher administrative distance of 120.
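The selection rule is simple enough to sketch directly, using the Cisco default administrative distances mentioned in the text (lower wins). The dictionary and function names are illustrative.

```python
# Default administrative distances on Cisco routers (lower is preferred).
DEFAULT_AD = {
    "connected": 0,
    "static": 1,
    "eigrp": 90,
    "ospf": 110,
    "rip": 120,
}

def preferred_source(sources):
    # sources: routing-information sources that offered the same prefix
    return min(sources, key=DEFAULT_AD.__getitem__)

print(preferred_source(["rip", "eigrp"]))  # eigrp (AD 90 beats AD 120)
```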
You can display the administrative distance of all routes on your router by typing the show
ip route command:
In the above case, the router has only a single route in its routing table learned from a
dynamic routing protocol – the EIGRP route. The following table lists the default administrative
distance values:
2.4.2 Metric
If a router learns two different paths to the same network from the same routing protocol,
it has to decide which route is better and will be placed in the routing table. The metric is the
measure used to decide which route is better (a lower number is better). Each routing protocol
uses its own metric. For example, RIP uses hop count as its metric, while OSPF uses cost.
The following example explains the way RIP calculates its metric and why it chooses one
path over another.
RIP has been configured on all routers. Router 1 has two paths to reach the subnet
10.0.0.0/24. One path goes through Router 2, while the other path goes through Router 3
and then Router 4. Because RIP uses the hop count as its metric, the path through Router 2 will
be used to reach the 10.0.0.0/24 subnet. This is because the subnet is only one router away on
that path. The other path will have a higher metric of 2, because the subnet is two routers away.
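The hop-count comparison from this example can be sketched as follows. The path labels are my own; the point is that RIP counts routers, nothing else.

```python
# RIP's metric is simply the number of routers (hops) on the path,
# so the shorter path wins regardless of link bandwidth.
paths_to_subnet = {
    "via R2": ["R2"],            # one hop
    "via R3-R4": ["R3", "R4"],   # two hops
}

metrics = {name: len(hops) for name, hops in paths_to_subnet.items()}
best = min(metrics, key=metrics.get)
print(best, metrics[best])  # via R2 1
```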
NOTE: - The example above can be used to illustrate a disadvantage of using RIP as a
routing protocol. Imagine if the first path, through R2, were a 56k modem link, while the other
path (R3-R4) were a high-speed WAN link. Router R1 would still choose the path through R2 as
the best route, because RIP uses only the hop count as its metric. The following table lists the
parameters that various routing protocols use to calculate the metric:
Routing tables contain information used by switching software to select the best route.
But how, specifically, are routing tables built? What is the specific nature of the information that
they contain? How do routing algorithms determine that one route is preferable to others?
Routing algorithms have used many different metrics to determine the best route.
Sophisticated routing algorithms can base route selection on multiple metrics, combining them
in a single (hybrid) metric. All the following metrics have been used:
Path length
Reliability
Delay
Bandwidth
Load
Communication cost
Path length is the most common routing metric. Some routing protocols allow network
administrators to assign arbitrary costs to each network link. In this case, path length is the sum
of the costs associated with each link traversed. Other routing protocols define hop count, a
metric that specifies the number of passes through internetworking products, such as routers,
that a packet must take en route from a source to a destination.
Reliability, in the context of routing algorithms, refers to the dependability of each network
link. Some network links might go down more often than others. After a network fails, certain
network links might be repaired more easily or more quickly than other links. Any reliability
factors can be taken into account in the assignment of the reliability ratings, which are arbitrary
numeric values usually assigned to network links by network administrators.
Routing delay refers to the length of time required to move a packet from source to
destination through the internetwork. Delay depends on many factors, including the bandwidth
of intermediate network links, the port queues at each router along the way, network congestion
on all intermediate network links, and the physical distance to be travelled. Because delay is a
conglomeration of several important variables, it is a common and useful metric.
Bandwidth refers to the available traffic capacity of a link. All other things being equal, a
10-Mbps Ethernet link would be preferable to a 64-kbps leased line. Although bandwidth is a
rating of the maximum attainable throughput on a link, routes through links with greater bandwidth
do not necessarily provide better routes than routes through slower links. For example, if a
faster link is busier, the actual time required to send a packet to the destination could be greater.
Load refers to the degree to which a network resource, such as a router, is busy. Load
can be calculated in a variety of ways, including CPU utilization and packets processed per
second. Monitoring these parameters on a continual basis can be resource-intensive itself.
Dynamic routing has the following advantages over static routing:
Unlike static routing, you don’t need to manually configure every route on each
router in the network. You just need to configure the networks to be advertised on a
router directly connected to them.
If a link fails and the network topology changes, routers can advertise that some
routes have failed and pick a new route to that network.
Dynamic routing protocols are broadly classified into two types:
Distance vector
Link state
Cisco has created its own routing protocol – Enhanced Interior Gateway Routing Protocol
(EIGRP). EIGRP is considered to be an advanced distance vector protocol, although some
materials erroneously state that EIGRP is a hybrid routing protocol, a combination of distance
vector and link state.
All of the routing protocols mentioned above are interior routing protocols (IGP), which
means that they are used to exchange routing information within one autonomous system.
BGP (Border Gateway Protocol) is an example of an exterior routing protocol (EGP) which is
used to exchange routing information between autonomous systems on the Internet.
As the name implies, distance vector routing protocols use distance to determine the best
path to a remote network. The distance is something like the number of hops (routers) to the
destination network.
Distance vector protocols usually send the complete routing table to each neighbor (a
neighbor is a directly connected router that runs the same routing protocol). They employ some
version of the Bellman-Ford algorithm to calculate the best routes. Compared with link state routing
protocols, distance vector protocols are easier to configure and require little management, but
are susceptible to routing loops and converge more slowly than link state routing protocols.
Distance vector protocols also use more bandwidth, because they send complete routing tables,
while the link state protocols send specific updates only when topology changes occur.
Link state routing protocols are the second type of routing protocols. They have the same
basic purpose as distance vector protocols, to find a best path to a destination, but use different
methods to do so. Unlike distance vector protocols, link state protocols don’t advertise the
entire routing table. Instead, they advertise information about the network topology (directly
connected links, neighboring routers, and so on), so that in the end all routers running a link state protocol
have the same topology database. Link state routing protocols converge much faster than
distance vector routing protocols, support classless routing, send updates using multicast
addresses and use triggered routing updates. They also require more router CPU and memory
usage than distance-vector routing protocols and can be harder to configure.
Each router running a link state routing protocol creates three different tables:
Neighbor table – the table of neighboring routers running the same link state routing
protocol.
Topology table – the table that stores the topology of the entire network.
Routing table – the table that stores the best routes to each destination.
The Shortest Path First (SPF) algorithm is used to calculate the best routes. OSPF and IS-IS are
examples of link state routing protocols.
2.5.2 Difference between distance vector and link state routing protocols
This section requires understanding of different classes of IPs and IP Subnetting. This
has been explained in detail as part of Unit 3. This section has been placed in this Unit due to its
relevance with the topic.
Classful routing protocols do not send subnet mask information when a route update is
sent out, so all devices in the network must use the same subnet mask (e.g., RIPv1).
Classless routing is performed by protocols that send subnet mask information in the
routing updates. Classless routing allows VLSM (Variable Length Subnet Masking); examples
are RIPv2, EIGRP, and OSPF. The table below clearly differentiates classful and classless routing:
RIP (Routing Information Protocol) is a simple distance vector routing protocol that
lacks some advanced features of routing protocols like OSPF or EIGRP. Two versions of the
protocol exist: version 1 and version 2. Both versions use hop count as a metric and have an
administrative distance of 120. RIP version 2 is capable of advertising subnet masks and uses
multicast to send routing updates, while version 1 doesn’t advertise subnet masks and uses
broadcast for updates. Version 2 is backwards compatible with version 1.
2.7.1 RIPv1
RIPv1 is a classful routing protocol and it does not support VLSM (Variable Length Subnet
Masking). RIPv1 uses local broadcasts to share routing information. These updates are periodic
in nature, occurring, by default, every 30 seconds. To prevent packets from circling around a
loop forever, both versions of RIP solve counting to infinity by placing a hop count limit of 15
hops on routes; a route at the sixteenth hop is considered unreachable, and such packets are
dropped. RIP supports up to six equal-cost paths to a single destination. Equal-cost paths are
paths where the metric (hop count) is the same.
2.7.2 RIPv2
RIPv2 is a distance vector routing protocol with routing enhancements built into it, and it
is based on RIPv1. Therefore, it is commonly called a hybrid routing protocol. RIPv2 uses
multicasts instead of broadcasts. RIPv2 also supports triggered updates: when a change occurs, a
RIPv2 router will immediately propagate its routing information to its connected neighbours.
Router R1 directly connects to the subnet 10.0.0.0/24. A network engineer has configured
RIP on R1 to advertise the route to this subnet. R1 sends routing updates to R2 and R3. The
routing updates list the subnet, subnet mask and metric for this route. Each router, R2 and R3,
receives this update and adds the route to their respective routing tables. Both routers list the
metric of 1 because the network is only one hop away.
NOTE:- Maximum hop count for a RIP route is 15. Any route with a higher hop count is
considered to be unreachable.
EIGRP (Enhanced Interior Gateway Routing Protocol) has an administrative distance of 90,
which is less than both the administrative distance of RIP and the administrative distance of
OSPF, so EIGRP routes will be preferred over those routes. EIGRP uses the Reliable Transport
Protocol (RTP) for sending messages.
EIGRP calculates its metric by using bandwidth, delay, reliability and load. By default,
only bandwidth and delay are used when calculating the metric, while reliability and load are set
to zero.
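With the default K values (K1 = K3 = 1, K2 = K4 = K5 = 0), the classic EIGRP composite metric reduces to a function of the slowest link bandwidth and the cumulative delay. The sketch below shows this reduced formula; the FastEthernet figures in the usage line are standard interface defaults, and the result matches the 28160 value seen later in this unit.

```python
# Classic EIGRP composite metric with default K values, where only
# bandwidth and delay contribute:
#   metric = 256 * (10**7 / min_bandwidth_kbps + total_delay_usec / 10)

def eigrp_metric(min_bandwidth_kbps: int, total_delay_usec: int) -> int:
    return 256 * (10**7 // min_bandwidth_kbps + total_delay_usec // 10)

# A directly connected FastEthernet segment: 100,000 kbps bandwidth,
# 100 microseconds delay -> 256 * (100 + 10) = 28160
print(eigrp_metric(100_000, 100))  # 28160
```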
EIGRP Neighbors
EIGRP must establish neighbor relationships with other EIGRP neighboring routers before
exchanging routing information. To establish a neighbor relationship, routers send hello packets
every couple of seconds. Hello packets are sent to the multicast address 224.0.0.10.
NOTE:- On LAN interfaces hellos are sent every 5 seconds. On WAN interfaces every 60
seconds.
The following fields in a hello packet must be identical in order for routers to become
neighbors:
Autonomous system number
K values (the metric weights)
Subnet number
Routers send hello packets every couple of seconds to ensure that the neighbor relationship
is still active. By default, a router considers the neighbor to be down after the hold-down timer has
expired. The hold-down timer is, by default, three times the hello interval, so on a LAN network
the hold-down timer is 15 seconds.
Two terms that you will often encounter when working with EIGRP are feasible and reported
distance. Let’s clarify these terms:
Feasible distance (FD) – the metric of the best route to reach a network. That route will be
listed in the routing table.
Reported distance (RD) – the metric advertised by a neighboring router for a specific
route. In other words, it is the metric of the route as seen from the neighboring router.
EIGRP has been configured on R1 and R2. R2 is directly connected to the subnet
10.0.1.0/24 and advertises that subnet into EIGRP. Let’s say that R2’s metric to reach that subnet
is 28160. When the subnet is advertised to R1, R2 informs R1 of that metric. From R1’s
perspective, that metric is considered to be the reported distance for the route. R1 receives the
update and adds the metric of the link to the neighbor to the reported distance. The resulting
metric is called the feasible distance and is stored in R1’s routing table (30720 in our case).
The feasible and reported distance are displayed in R1’s EIGRP topology table.
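The arithmetic behind the two values can be sketched directly. The link cost of 2560 is an assumption inferred from the figures in the example (30720 - 28160); the variable names are my own.

```python
# Feasible distance = reported distance heard from the neighbor, plus
# the metric of the link to that neighbor.
reported_distance = 28160   # R2's own metric to reach 10.0.1.0/24
link_to_neighbor = 2560     # assumed cost of the R1-R2 link

feasible_distance = reported_distance + link_to_neighbor
print(feasible_distance)  # 30720, the value stored in R1's routing table
```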
Another two terms that appear often in the EIGRP world are successor and feasible
successor. A successor is the route with the best metric to reach a destination. That route is
stored in the routing table. A feasible successor is a backup path to reach that same destination
that can be used immediately if the successor route fails. These backup routes are stored in the
topology table.
A backup route qualifies as a feasible successor only if it meets the feasibility condition:
the neighbor’s advertised distance (AD) for the route must be less than the
successor’s feasible distance (FD).
The following example explains the concept of a successor and a feasible successor.
R1 has two paths to reach the subnet 10.0.0.0/24. The path through R2 has the best
metric (20) and is stored in R1’s routing table. The other route, through R3, is a feasible
successor route, because the feasibility condition has been met (R3’s advertised distance of
15 is less than R1’s feasible distance of 20). R1 stores that route in the topology table. This
route can be used immediately if the primary route fails.
The EIGRP topology table contains all learned routes to a destination. The table holds all routes
received from neighbors, the successors and feasible successors for every route, and the interfaces
on which updates were received. The table also holds all locally connected subnets included in
an EIGRP process.
Best routes (the successors) from the topology table are stored in the routing table. Feasible
successors are only stored in the topology table and can be used immediately if the primary
route fails.
EIGRP is running on all three routers. Routers R2 and R3 both connect to the subnet
10.0.1.0/24 and advertise that subnet to R1. R1 receives both updates and calculates the best
route. The best path goes through R2, so R1 stores that route in the routing table. Router R1
also calculates the metric of the route through R3. Let's say that the advertised distance of that
route is less than the feasible distance of the best route. The feasibility condition is met, and router
R1 stores that route in the topology table as a feasible successor route. The route can be used
immediately if the primary route fails.
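The selection of a successor and feasible successors can be sketched as follows. This is a simplified model: routes are plain tuples with hypothetical metric values, not real EIGRP composite metrics.

```python
def classify_routes(routes):
    """routes: list of (next_hop, reported_distance, feasible_distance) tuples.
    Returns the successor (the route with the lowest FD) and the feasible
    successors: backup routes whose RD is less than the successor's FD."""
    successor = min(routes, key=lambda r: r[2])           # best (lowest) FD
    feasible = [r for r in routes
                if r is not successor and r[1] < successor[2]]  # feasibility condition
    return successor, feasible

# Two paths to the same subnet: via R2 (RD 10, FD 20) and via R3 (RD 15, FD 25).
succ, backups = classify_routes([("R2", 10, 20), ("R3", 15, 25)])
print(succ[0], [b[0] for b in backups])  # R2 ['R3']
```

Because R3's reported distance (15) is less than the successor's feasible distance (20), the route through R3 qualifies as a feasible successor; had its RD been 25, it would be kept in the topology table but not usable as an instant backup.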
OSPF (Open Shortest Path First) is a link state routing protocol. Because it is an open
standard, it is implemented by a variety of network vendors. OSPF runs on routers from most
vendors, not just on Cisco routers (unlike EIGRP, which was originally Cisco-proprietary).
OSPF supports VLSM, CIDR, manual route summarization, and equal-cost load balancing.
Routers running OSPF have to establish neighbor relationships before exchanging routes.
Because OSPF is a link state routing protocol, neighbors don't exchange routing tables. Instead,
they exchange information about the network topology. Each OSPF router then runs the SPF algorithm
to calculate the best routes and adds those to the routing table. Because each router knows the
entire topology of the network, the chance for a routing loop to occur is minimal.
Each OSPF router stores routing and topology information in three tables: the neighbor table (a list of all discovered OSPF neighbors), the topology table (the link state database describing the network topology), and the routing table (the best routes calculated by the SPF algorithm).
OSPF routers need to establish a neighbor relationship before exchanging routing updates.
OSPF neighbors are dynamically discovered by sending Hello packets out each OSPF-enabled
interface on a router. Hello packets are sent to the multicast IP address of 224.0.0.5. The
process is explained in the following figure:
Routers R1 and R2 are directly connected. After OSPF is enabled, both routers send
Hellos to each other to establish a neighbor relationship. You can verify that the neighbor
relationship has indeed been established by typing the show ip ospf neighbor command.
In the example above, you can see that the router ID of R2 is 2.2.2.2. Each OSPF router
is assigned a router ID. A router ID is determined by using one of the following (in order of
preference): the ID manually configured with the router-id command, the highest IP address
among the router's loopback interfaces, or the highest IP address among its active physical
interfaces.
The following fields in the Hello packets must be the same on both routers in order for
the routers to become neighbors:
subnet
area ID
authentication
MTU
By default, OSPF sends Hello packets every 10 seconds on an Ethernet network (the hello
interval). The dead timer is four times the value of the hello interval, so if a router on an Ethernet
network doesn't receive at least one Hello packet from an OSPF neighbor for 40 seconds, the
router declares that neighbor to be down.
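The timer logic can be sketched as a small helper; this is a simplified model of the check a router performs, not actual router code:

```python
def neighbor_down(seconds_since_last_hello: float, hello_interval: int = 10) -> bool:
    """The dead interval is four times the hello interval; a neighbor is
    declared down once no Hello has been seen for that long."""
    return seconds_since_last_hello >= 4 * hello_interval

print(neighbor_down(39))  # False - still within the 40-second dead interval
print(neighbor_down(40))  # True  - dead timer expired, neighbor declared down
```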
1. Init state – a router has received a Hello message from the other OSPF router
2. 2-way state – the neighbor has received the Hello message and replied with a Hello
message of its own
3. Exstart state – beginning of the LSDB exchange between both routers. Routers are
starting to exchange link state information.
4. Exchange state – DBD (Database Descriptor) packets are exchanged. DBDs contain
LSAs headers. Routers will use this information to see what LSAs need to be exchanged.
5. Loading state – one neighbor sends LSRs (Link State Requests) for every network it
doesn't know about. The other neighbor replies with LSUs (Link State Updates) which
contain information about the requested networks. After all the requested information has been
received, the other neighbor goes through the same process.
6. Full state – both routers have synchronized databases and are fully adjacent with
each other.
OSPF uses the concept of areas. An area is a logical grouping of contiguous networks
and routers. All routers in the same area have the same topology table, but they don't know
about routers in the other areas. The main benefits of creating areas are that the size of the
topology table and the routing table on a router is reduced, less time is required to run the SPF
algorithm, and routing updates are also reduced.
Each area in an OSPF network has to connect to the backbone area (area 0). All routers
inside an area must have the same area ID to become OSPF neighbors. A router that has
interfaces in more than one area (area 0 and area 1, for example) is called an Area Border Router
(ABR). A router that connects an OSPF network to other routing domains (an EIGRP network, for
example) is called an Autonomous System Border Router (ASBR).
NOTE:- In OSPF, manual route summarization is possible only on ABRs and ASBRs.
All routers are running OSPF. Routers R1 and R2 are inside the backbone area (area 0).
Router R3 is an ABR, because it has interfaces in two areas, namely area 0 and area 1. Routers
R4 and R5 are inside area 1. Router R6 is an ASBR, because it connects the OSPF network to
another routing domain (an EIGRP domain in this case). If R1's directly connected subnet
fails, router R1 sends the routing update only to R2 and R3, because all routing updates are
localized inside the area.
NOTE:- The role of an ABR is to advertise address summaries to neighboring areas. The
role of an ASBR is to connect an OSPF routing domain to another external network (e.g. Internet,
EIGRP network…).
LSAs (Link-State Advertisements) are used by OSPF routers to exchange topology
information. Each LSA contains routing and topology information that describes a part of an OSPF
network. When two neighbors decide to exchange routes, they send each other a list of all
LSAs in their respective topology databases. Each router then checks its topology database and
sends a Link State Request (LSR) message requesting all LSAs not found in its topology table.
The other router responds with a Link State Update (LSU) that contains all the LSAs requested by
its neighbor. The concept is explained in the following example:
After configuring OSPF on both routers, the routers exchange LSAs to describe their respective
topology databases. Router R1 sends an LSA header for its directly connected network 10.0.1.0/24.
Router R2 checks its topology database and determines that it doesn't have information
about that network. Router R2 then sends a Link State Request message requesting further
information about that network. Router R1 responds with a Link State Update which contains
information about the subnet 10.0.1.0/24 (next hop address, cost…).
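At its core, the Exchange/Loading phase is a set difference over LSA identifiers. A rough sketch, with LSAs reduced to plain strings for illustration:

```python
def lsas_to_request(local_db: set, neighbor_headers: set) -> set:
    """DBD packets carry only LSA headers; a router sends an LSR for
    every header missing from its own topology database."""
    return neighbor_headers - local_db

# R2 lacks the LSA for 10.0.1.0/24, so it requests it from R1:
missing = lsas_to_request({"10.0.2.0/24"}, {"10.0.1.0/24", "10.0.2.0/24"})
print(missing)  # {'10.0.1.0/24'}
```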
Network address translation (NAT) is a method of remapping one IP address space into
another by modifying network address information in the IP header of packets while they are in
transit across a traffic routing device. The technique was originally used as a shortcut to avoid
the need to readdress every host when a network was moved. It has become a popular and
essential tool in conserving global address space in the face of IPv4 address exhaustion. One
Internet-routable IP address of a NAT gateway can be used for an entire private network.
The most common form of network translation involves a large private network using
addresses in a private range (10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, or
192.168.0.0 to 192.168.255.255). The private addressing scheme works well for computers
that only have to access resources inside the network, like workstations needing access to file
servers and printers. Routers inside the private network can route traffic between private
addresses with no trouble. However, to access resources outside the network, like the Internet,
these computers have to have a public address in order for responses to their requests to
return to them. This is where NAT comes into play.
Internet requests that require Network Address Translation (NAT) are quite complex but
happen so rapidly that the end user rarely knows it has occurred. A workstation inside a network
makes a request to a computer on the Internet. Routers within the network recognize that the
request is not for a resource inside the network, so they send the request to the firewall. The
firewall sees the request from the computer with the internal IP. It then makes the same request
to the Internet using its own public address, and returns the response from the Internet resource
to the computer inside the private network. From the perspective of the resource on the Internet,
it is sending information to the address of the firewall. From the perspective of the workstation,
it appears that communication is directly with the site on the Internet. When NAT is used in this
way, all users inside the private network who access the Internet share the same public IP address.
That means only one public address is needed for hundreds or even thousands of users.
Most modern firewalls are stateful - that is, they are able to set up the connection between
the internal workstation and the Internet resource. They can keep track of the details of the
connection, like ports, packet order, and the IP addresses involved. This is called keeping track
of the state of the connection. In this way, they are able to keep track of the session composed
of communication between the workstation and the firewall, and the firewall with the Internet.
When the session ends, the firewall discards all of the information about the connection.
There are other uses for Network Address Translation (NAT) beyond simply allowing
workstations with internal IP addresses to access the Internet. In large networks, some servers
may act as Web servers and require access from the Internet. These servers are assigned
public IP addresses on the firewall, allowing the public to access the servers only through that
IP address. However, as an additional layer of security, the firewall acts as the intermediary
between the outside world and the protected internal network. Additional rules can be added,
including which ports can be accessed at that IP address. Using NAT in this way allows network
engineers to more efficiently route internal network traffic to the same resources, and allow
access to more ports, while restricting access at the firewall. It also allows detailed logging of
communications between the network and the outside world.
Additionally, NAT can be used to allow selective access to the outside of the network, too.
Workstations or other computers requiring special access outside the network can be assigned
specific external IPs using NAT, allowing them to communicate with computers and applications
that require a unique public IP address. Again, the firewall acts as the intermediary, and can
control the session in both directions, restricting port access and protocols.
NAT is a very important aspect of firewall security. It conserves the number of public
addresses used within an organization, and it allows for stricter control of access to resources
on both sides of the firewall.
Static NAT – translates one private IP address to a public one. The public IP address
is always the same.
Dynamic NAT – private IP addresses are mapped to the pool of public IP addresses.
Port Address Translation (PAT) – one public IP address is used for all internal devices,
but a different port is assigned to each private IP address. Also known as NAT
Overload.
Computer A requests a web page from an Internet server. Because Computer A uses
private IP addressing, the source address of the request has to be changed by the router,
because private IP addresses are not routable on the Internet. Router R1 receives the request,
changes the source IP address to its public IP address and sends the packet to server S1.
Server S1 receives the packet and replies to router R1. Router R1 receives the packet, changes
the destination IP address to the private IP address of Computer A and sends the packet to
Computer A.
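This two-way address rewrite can be modelled as a fixed mapping. The addresses below are illustrative only; in static NAT each private address has exactly one fixed public counterpart:

```python
# One fixed private-to-public binding per host (static NAT).
NAT_TABLE = {"10.0.0.2": "59.50.50.1"}
REVERSE = {pub: priv for priv, pub in NAT_TABLE.items()}

def outbound(src_ip: str) -> str:
    """Rewrite the source address of a packet leaving the private network."""
    return NAT_TABLE[src_ip]

def inbound(dst_ip: str) -> str:
    """Rewrite the destination address of a returning packet."""
    return REVERSE[dst_ip]

print(outbound("10.0.0.2"))   # 59.50.50.1
print(inbound("59.50.50.1"))  # 10.0.0.2
```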
With static NAT, routers or firewalls translate one private IP address to a single public IP
address. Each private IP address is mapped to a single public IP address. Static NAT is not
often used because it requires one public IP address for each private IP address.
Configure private/public IP address mapping by using the ip nat inside source static
PRIVATE_IP PUBLIC_IP command
Configure the router’s inside interface using the ip nat inside command
Configure the router’s outside interface using the ip nat outside command
Here is an example.
Computer A requests a web resource from S1. Computer A uses its private IP address
when sending the request to router R1. Router R1 receives the request, changes the private IP
address to the public one and sends the request to S1. S1 responds to R1. R1 receives the
response, looks up in its NAT table and changes the destination IP address to the private IP
address of Computer A. In the example above, we need to configure static NAT. To do that, the
following commands are required on R1:
Using the commands above, we have configured a static mapping between Computer
A's private IP address of 10.0.0.2 and router R1's public IP address of 59.50.50.1. To check
NAT, you can use the show ip nat translations command:
With dynamic NAT, you specify two sets of addresses on your Cisco router: the set of inside (private) addresses that should be translated, and the pool of global (public) addresses they are translated to.
Unlike with static NAT, where you had to manually define a static mapping between a
private and a public address, with dynamic NAT the mapping of a local address to a global
address happens dynamically. This means that the router dynamically picks an address from
the global address pool that is not currently assigned. It can be any address from the pool of
global addresses. The dynamic entry stays in the NAT translations table as long as the traffic is
exchanged. The entry times out after a period of inactivity and the global IP address can be
used for new translations.
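The allocation behaviour can be sketched as follows. This is a simplified model: a new flow grabs the first free address from the global pool, a known flow reuses its binding, and the inactivity timeout that real routers apply is omitted. The pool addresses match this unit's example (4.4.4.1 onwards).

```python
class DynamicNAT:
    """Minimal sketch of dynamic NAT address allocation."""
    def __init__(self, pool):
        self.free = list(pool)   # global addresses not yet assigned
        self.table = {}          # private IP -> global IP bindings

    def translate(self, private_ip):
        if private_ip not in self.table:
            self.free.pop  # noqa: next line takes the first unassigned address
            self.table[private_ip] = self.free.pop(0)
        return self.table[private_ip]

nat = DynamicNAT(["4.4.4.1", "4.4.4.2", "4.4.4.3"])
print(nat.translate("192.168.0.2"))  # 4.4.4.1 - first available global address
print(nat.translate("192.168.0.3"))  # 4.4.4.2 - next free address for a new host
```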
Configure the router’s inside interface using the ip nat inside command.
Configure the router’s outside interface using the ip nat outside command.
Configure an ACL that has a list of the inside source addresses that will be translated.
Configure the pool of global IP addresses using the ip nat pool NAME
FIRST_IP_ADDRESS LAST_IP_ADDRESS netmask SUBNET_MASK command.
Enable dynamic NAT with the ip nat inside source list ACL_NUMBER pool
NAME global configuration command.
Computer A requests a web resource from S1. Computer A uses its private IP address
when sending the request to router R1. Router R1 receives the request, changes the private IP
address to one of the available global addresses in the pool and sends the request to S1. S1
responds to R1. R1 receives the response, looks up in its NAT table and changes the destination
IP address to the private IP address of Computer A. In the example above we need to configure
dynamic NAT. To do that, the following commands are required on R1:
To configure an ACL that has a list of the inside source addresses that will be
translated:
NOTE:- The access list configured above matches all hosts from the 192.168.0.0/24
subnet.
The pool configured above consists of 5 addresses: 4.4.4.1, 4.4.4.2, 4.4.4.3, 4.4.4.4, and
4.4.4.5.
The command above instructs the router to translate all addresses specified in the access
list 1 to the pool of global addresses called MY_POOL.
You can list all NAT translations using the show ip nat translations command:
In the example above, you can see that the private IP address of Computer A (192.168.0.2)
has been translated to the first available global address (4.4.4.1).
NOTE:- You can remove all NAT translations from the table by using the clear ip nat
translation * command.
With Port Address Translation (PAT), a single public IP address is used for all internal
private IP addresses, but a different port is assigned to each private IP address. This type of
NAT is also known as NAT Overload and is the typical form of NAT used in today’s networks. It
is even supported by most consumer-grade routers.
PAT allows you to support many hosts with only a few public IP addresses. It works by
creating dynamic NAT mappings, in which a global (public) IP address and a unique port number
are selected. The router keeps a NAT table entry for every unique combination of private IP
address and port, with a translation to the global address and a unique port number. We will use
the following example network to explain the benefits of using PAT:
As you can see in the picture above, PAT uses unique source port numbers on the inside
global (public) IP address to distinguish between translations. For example, if the host with the
IP address of 10.0.0.101 wants to access the server S1 on the Internet, the host’s private IP
address will be translated by R1 to 155.4.12.1:1056 and the request will be sent to S1. S1 will
respond to 155.4.12.1:1056. R1 will receive that response, look up in its NAT translation table,
and forward the request to the host.
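The port-based bookkeeping can be sketched like this. The starting port 1056 mirrors the example above; how a real router actually chooses ports differs, so treat the numbers as illustrative:

```python
class PAT:
    """Sketch of NAT overload: one public IP, a unique public port for
    every (private IP, private port) flow."""
    def __init__(self, public_ip, first_port=1056):
        self.public_ip = public_ip
        self.next_port = first_port
        self.table = {}   # (private_ip, private_port) -> (public_ip, public_port)

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = (self.public_ip, self.next_port)
            self.next_port += 1   # each new flow gets the next unique port
        return self.table[key]

pat = PAT("155.4.12.1")
print(pat.translate("10.0.0.101", 1056))  # ('155.4.12.1', 1056)
```

When the server replies to 155.4.12.1 on a given port, the router looks the port up in this table to recover the original private address and port.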
Configure the router’s inside interface using the ip nat inside command.
Configure the router’s outside interface using the ip nat outside command.
Configure an access list that includes a list of the inside source addresses that
should be translated.
Enable PAT with the ip nat inside source list ACL_NUMBER interface TYPE
overload global configuration command.
Here is how we would configure PAT for the network picture above.
R1(config)#int Gi0/0
R1(config-if)#ip nat inside
R1(config-if)#int Gi0/1
R1(config-if)#ip nat outside
Next, we will define an access list that will include all private IP addresses we would like
to translate:
The access list defined above includes all IP addresses from the 10.0.0.0 – 10.0.0.255
range.
Now we need to enable NAT and refer to the ACL created in the previous step and to the
interface whose IP address will be used for translations:
To verify the NAT translations, we can use the show ip nat translations command after
hosts request a web resource from S1:
Notice that the same IP address (155.4.12.1) has been used to translate three private IP
addresses (10.0.0.100, 10.0.0.101, and 10.0.0.102). The port number of the public IP address
is unique for each connection. So when S1 responds to 155.4.12.1:1026, R1 looks into its NAT
translations table and forwards the response to 10.0.0.102:1025.
The system of IP address classes was developed for the purpose of Internet IP address
assignment. The classes created were based on network size. For example, Class A was
created for the small number of networks with a very large number of hosts, while Class C
was created for the numerous networks with a small number of hosts.
For the IP addresses from Class A, the first 8 bits (the first decimal number) represent the
network part, while the remaining 24 bits represent the host part. For Class B, the first 16 bits
(the first two numbers) represent the network part, while the remaining 16 bits represent the
host part. For Class C, the first 24 bits represent the network part, while the remaining 8 bits
represent the host part.
10.50.120.7 – because this is a Class A address, the first number (10) represents
the network part, while the remainder of the address represents the host part
(50.120.7). This means that, in order for devices to be on the same network, the
first number of their IP addresses has to be the same for both devices. In this case,
a device with the IP address of 10.47.8.4 is on the same network as the device with
the IP address listed above. The device with the IP address 11.5.4.3 is not on the
same network, because the first number of its IP address is different.
172.16.55.13 – because this is a Class B address, the first two numbers (172.16)
represent the network part, while the remainder of the address represents the host
part (55.13). The device with the IP address of 172.16.254.3 is on the same network,
while a device with the IP address of 172.55.54.74 isn't.
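The classful comparison used in both examples can be sketched as a small helper:

```python
def same_classful_network(ip1: str, ip2: str) -> bool:
    """Compare the classful network part of two addresses: the first octet
    for Class A, the first two for Class B, the first three for Class C."""
    a, b = ip1.split("."), ip2.split(".")
    first = int(a[0])
    n = 1 if first < 128 else 2 if first < 192 else 3  # octets in network part
    return a[:n] == b[:n]

print(same_classful_network("10.50.120.7", "10.47.8.4"))      # True  - same Class A net
print(same_classful_network("172.16.55.13", "172.55.54.74"))  # False - Class B nets differ
```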
NOTE:- The system of network address ranges described here is generally bypassed
today by use of the Classless Inter-Domain Routing (CIDR) addressing
Special IP address ranges that are used for special purposes are:
Private IP Addresses
Class A
Class B
Class C
Class D
Class E
Each of these classes has a valid range of IP addresses. Classes D and E are reserved
for multicast and experimental purposes respectively. The order of bits in the first octet determines
the class of an IP address.
Network ID
Host ID
The class of an IP address is used to determine the bits used for the network ID and host ID and
the total number of networks and hosts possible in that particular class. Each ISP or network
administrator assigns an IP address to each device that is connected to its network.
Note: When finding the total number of host IP addresses, 2 IP addresses are subtracted
from the total count, because the first IP address of any network is the network number,
whereas the last IP address is reserved for the broadcast IP.
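That subtraction gives the familiar usable-host formula, which applies to every class discussed below:

```python
def usable_hosts(host_bits: int) -> int:
    """2^h addresses minus the network number and the broadcast address."""
    return 2 ** host_bits - 2

print(usable_hosts(8))   # 254 - a Class C network (8 host bits)
print(usable_hosts(24))  # 16777214 - a Class A network (24 host bits)
```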
2.10.1 Class A
IP addresses belonging to class A are assigned to networks that contain a large number
of hosts.
The higher order bit of the first octet in class A is always set to 0. The remaining 7 bits in the
first octet are used to determine the network ID. The 24 bits of host ID are used to determine the
host in any network. The default subnet mask for class A is 255.0.0.0. Therefore, class A has a
total of 2^7 = 128 network IDs and 2^24 – 2 = 16,777,214 host addresses per network.
2.10.2 Class B
IP addresses belonging to class B are assigned to networks that range from medium-sized
to large-sized.
The higher order bits of the first octet of class B addresses are always set to 10. The
remaining 14 bits are used to determine the network ID. The 16 bits of host ID are used to determine
the host in any network. The default subnet mask for class B is 255.255.0.0. Class B has a total
of 2^14 = 16,384 network IDs and 2^16 – 2 = 65,534 host addresses per network.
2.10.3 Class C
IP addresses belonging to class C are assigned to small networks.
The higher order bits of the first octet of class C addresses are always set to 110.
The remaining 21 bits are used to determine the network ID. The 8 bits of host ID are used to
determine the host in any network. The default subnet mask for class C is 255.255.255.0.
Class C has a total of 2^21 = 2,097,152 network IDs and 2^8 – 2 = 254 host addresses per network.
2.10.4 Class D
IP addresses belonging to class D are reserved for multicasting. The higher order bits of
the first octet of class D addresses are always set to 1110. The remaining bits are used for the
addresses that interested hosts recognize.
Class D does not possess any subnet mask. IP addresses belonging to class D range
from 224.0.0.0 to 239.255.255.255.
2.10.5 Class E
IP addresses belonging to class E are reserved for experimental and research purposes.
IP addresses of class E range from 240.0.0.0 to 255.255.255.254. This class doesn't have any
subnet mask. The higher order bits of the first octet of class E are always set to 1111.
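The first-octet bit patterns described for the five classes can be summarised in a small classifier:

```python
def ip_class(first_octet: int) -> str:
    """Classify an IPv4 address by its first octet, per the classful rules."""
    if first_octet < 128:
        return "A"   # leading bit 0
    if first_octet < 192:
        return "B"   # leading bits 10
    if first_octet < 224:
        return "C"   # leading bits 110
    if first_octet < 240:
        return "D"   # leading bits 1110 - multicast
    return "E"       # leading bits 1111 - experimental

print(ip_class(10), ip_class(172), ip_class(192), ip_class(224), ip_class(240))  # A B C D E
```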
Host ID’s are used to identify a host within a network. The host ID are assigned based on
the following rules:
Host ID in which all bits are set to 0 cannot be assigned because this host ID is used
to represent the network ID of the IP address.
Host ID in which all bits are set to 1 cannot be assigned because this host ID is
reserved as a broadcast address to send packets to all the hosts present on that
particular network.
Hosts that are located on the same physical network are identified by the network ID, as
all hosts on the same physical network are assigned the same network ID. The network ID is
assigned based on the following rules:
The network ID cannot start with 127, because 127 belongs to the Class A range and
is reserved for internal loopback functions.
All bits of network ID set to 1 are reserved for use as an IP broadcast address and
therefore, cannot be used.
All bits of network ID set to 0 are used to denote a specific host on the local network
and are not routed and therefore, aren’t used.
Networks that are set up for dynamic addressing rely on a DHCP server to manage the
pool of available local IP addresses. When a Windows client device attempts to join the local
network, it contacts the DHCP server to request its IP address. If the DHCP server stops
functioning, a network glitch interferes with the request, or some issue occurs on the Windows
device, this process can fail.
When the DHCP process fails, Windows automatically assigns an IP address from the
private range, which is 169.254.0.1 to 169.254.255.254. Using Address Resolution Protocol
(ARP), clients verify that the chosen APIPA address is unique on the network before they use it.
Clients then check back with the DHCP server at periodic intervals—usually every five minutes—
and update their addresses automatically when the DHCP server is able to service requests.
When you start a computer running Windows Vista, for example, it waits for only six
seconds for a DHCP server before using an IP from the APIPA range. Earlier versions of Windows
look for a DHCP server for as long as three minutes.
All APIPA devices use the default network mask 255.255.0.0, and all reside on the same
subnet.
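Checking whether an address came from APIPA is a one-liner with Python's standard ipaddress module:

```python
import ipaddress

APIPA_NET = ipaddress.ip_network("169.254.0.0/16")  # mask 255.255.0.0

def is_apipa(ip: str) -> bool:
    """True if the address falls in the automatic private (link-local) range."""
    return ipaddress.ip_address(ip) in APIPA_NET

print(is_apipa("169.254.33.7"))  # True
print(is_apipa("192.168.1.10"))  # False
```

A True result on a workstation is a strong hint that the DHCP process failed and the network needs troubleshooting.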
Network administrators and experienced computer users recognize that failures in the
DHCP process indicate network troubleshooting is needed to identify and resolve the issues
that are preventing DHCP from working properly.
APIPA addresses do not fall into any of the private IP address ranges defined by the
Internet Protocol standard and are restricted for use on local networks only. As with private IP
addresses, ping tests or any other connection requests from the Internet and other outside
networks cannot be made to APIPA devices directly.
APIPA-configured devices can communicate with peer devices on their local network but
cannot communicate outside of it. While APIPA provides Windows clients a usable IP address,
it does not provide the client with nameserver (DNS or WINS) and network gateway addresses
as DHCP does.
Local networks should not attempt to manually assign addresses in the APIPA range
because IP address conflicts will result. To maintain the benefit APIPA has of indicating DHCP
failures, administrators should avoid using those addresses for any other purpose and instead
limit their networks to use the standard IP address ranges.
Summary
Classful Routing protocols do not send subnet mask information when a route update
is sent out. All devices in the network must use the same subnet mask.
Network Address Translation (NAT) is the process where a network device, usually
a firewall, assigns a public address to a computer (or group of computers) inside a
private network.
The main use of NAT is to limit the number of public IP addresses an organization
or company must use, for both economy and security purposes.
Review Questions
Explain IP Routing and its algorithm.
What is OSPF?
Explain EIGRP.
References
Books
Internetworking Technologies Handbook - by Cisco Systems Inc.
Online Sources
http://what-when-how.com/data-communications-and-networking/network
models-data-communications-and-networking/
https://en.wikipedia.org/
http://www.iana.org/go/rfc793
http://www.iana.org/go/rfc7323
https://tools.ietf.org/html/rfc2018
UNIT - 3
SUBNETTING IP NETWORK
Learning Objectives
In this chapter, some fundamental concepts and terms used in subnetting will be
discussed with the aim to make the students understand the following
o Relevance of Subnetting.
Structure
3.1 Introduction
3.1 Introduction
There comes a time when a network becomes too big and performance begins to suffer
as a result of too much traffic. When that happens, one of the ways that you can solve the
problem is by breaking the network into smaller pieces. There are several techniques for splitting
a network, but one of the most effective techniques is called subnetting. In this unit, we explain
what subnetting is, and how it works.
Subnetting is basically just a way of splitting a TCP/IP network into smaller, more
manageable pieces. The basic idea is that if you have an excessive amount of traffic flowing
across your network, that traffic can cause your network to run slowly. When you subnet
your network, you split it into separate but interconnected networks. That
way, most of the network traffic will be isolated to the subnet in which it originated. Of course
you can still communicate across a subnet, but the only time that traffic will cross subnet
boundaries is when it is specifically destined for a host residing in an alternate subnet.
The main purpose of subnetting is to help relieve network congestion. Congestion used
to be a bigger problem than it is today because it was more common for networks to use hubs
than switches. When nodes on a network are connected through a hub, the entire network acts
as a single collision domain. What this means is that if one PC sends a packet to another PC,
every PC on the entire network sees the packet. Each machine looks at the packet header, but
ignores the packet if it isn’t the intended recipient.
The problem with this type of network is that if any two machines on the network happen
to send packets simultaneously, then the packets collide and are destroyed in the collision. The
two machines then wait a random amount of time and resend the packets. The point is that an
occasional collision is no big deal, but excessive collisions can slow a network way down.
Switches solve the excessive collision problem by directing packets directly from the
source machine to the destination machine. Using this technique combined with caching
practically eliminates collisions and allows a network to perform much better than it ever could
if it were using a hub. So let’s go back to the original question. Are subnets still relevant for
switched networks?
The answer is that it really just depends on how the network is laid out and how it is
performing. Keep in mind that a switch only helps performance when a packet is destined for a
specific PC. Broadcast traffic is still sent to every machine on the network. If you're running a
switched network, then subnetting will help you if you have a lot of broadcast traffic. Subnetting
is also important if you have branch offices that are connected by a slow WAN link.
A subnet mask is used to divide an IP address into two parts. One part identifies the host
(computer), the other part identifies the network to which it belongs. To better understand how
IP addresses and subnet masks work, look at an IP (Internet Protocol) address and see how it
is organized.
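Python's standard ipaddress module performs exactly this split. The address below is an arbitrary illustration, not one taken from the text:

```python
import ipaddress

# An address paired with its subnet mask fully determines the two parts.
iface = ipaddress.ip_interface("204.17.5.34/255.255.255.224")
print(iface.network.network_address)                        # 204.17.5.32 - network part
print(int(iface.ip) - int(iface.network.network_address))   # 2 - host part on that network
```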
Each data link on a network must have a unique network ID, with every node on that link
being a member of the same network. If you break a major network (Class A, B, or C) into
smaller subnetworks, it allows you to create a network of interconnecting subnetworks. Each
data link on this network would then have a unique network/subnetwork ID. Any device, or
gateway, that connects n networks/subnetworks has n distinct IP addresses, one for each network
/ subnetwork that it interconnects.
In order to subnet a network, extend the natural mask with some of the bits from the host
ID portion of the address in order to create a subnetwork ID. For example, given a Class C
network of 204.17.5.0 which has a natural mask of 255.255.255.0, you can create subnets in
this manner:
204.17.5.0 - 11001100.00010001.00000101.00000000
255.255.255.224 - 11111111.11111111.11111111.11100000
—————————————|sub|——
By extending the mask to be 255.255.255.224, you have taken three bits (indicated by
“sub”) from the original host portion of the address and used them to make subnets. With these
three bits, it is possible to create eight subnets. With the remaining five host ID bits, each
subnet can have up to 32 host addresses, 30 of which can actually be assigned to a device
since host IDs of all zeros or all ones are not allowed (it is very important to remember this). So,
with this in mind, these subnets have been created.
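As a sketch of this split, Python's standard ipaddress module (used here purely for illustration; it is not part of the original text) can enumerate the eight /27 subnets:

```python
import ipaddress

# Split the Class C network 204.17.5.0/24 into /27 subnets by
# borrowing three bits from the host portion (mask 255.255.255.224).
network = ipaddress.ip_network("204.17.5.0/24")
subnets = list(network.subnets(new_prefix=27))

for subnet in subnets:
    # Each /27 holds 32 addresses; the all-zeros (network) and
    # all-ones (broadcast) host IDs cannot be assigned to devices.
    print(subnet, "usable hosts:", subnet.num_addresses - 2)
```

Running this lists 204.17.5.0/27 through 204.17.5.224/27, each with 30 usable host addresses, matching the arithmetic above.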
Note: There are two ways to denote these masks. First, since you use three bits
more than the “natural” Class C mask, you can denote these addresses as having a 3-bit
subnet mask. Or, secondly, the mask of 255.255.255.224 can also be denoted as /27 as
there are 27 bits that are set in the mask. This second method is used with CIDR. With
this method, one of these networks can be described with the notation prefix/length. For
example, 204.17.5.32/27 denotes the network 204.17.5.32 255.255.255.224. When
appropriate, the prefix/length notation is used to denote the mask throughout the rest of
this document.
The network subnetting scheme in this section allows for eight subnets, and the network
might appear as:
Notice that each of the routers in Figure 3.1 is attached to four subnetworks, one subnetwork
is common to both routers. Also, each router has an IP address for each subnetwork to which it
is attached. Each subnetwork could potentially support up to 30 host addresses.
This brings up an interesting point. The more host bits you use for a subnet mask, the
more subnets you have available. However, the more subnets available, the less host addresses
available per subnet. For example, a Class C network of 204.17.5.0 and a mask of
255.255.255.224 (/27) allows you to have eight subnets, each with 32 host addresses (30 of
which could be assigned to devices). If you use a mask of 255.255.255.240 (/28), the break
down is:
204.17.5.0 - 11001100.00010001.00000101.00000000
255.255.255.240 - 11111111.11111111.11111111.11110000
——————————-——|sub |—
Since you now have four bits to make subnets with, you only have four bits left for host
addresses. So in this case you can have up to 16 subnets, each of which can have up to 16 host
addresses (14 of which can be assigned to devices).
Take a look at how a Class B network might be subnetted. If you have network 172.16.0.0, then you know that its natural mask is 255.255.0.0 or 172.16.0.0/16. Extending the mask to
anything beyond 255.255.0.0 means you are subnetting. You can quickly see that you have the
ability to create a lot more subnets than with the Class C network. If you use a mask of
255.255.248.0 (/21), how many subnets and hosts per subnet does this allow for?
172.16.0.0 - 10101100.00010000.00000000.00000000
255.255.248.0 - 11111111.11111111.11111000.00000000
You use five bits from the original host bits for subnets. This allows you to have 32 subnets (2^5). After using the five bits for subnetting, you are left with 11 bits for host addresses. This allows each subnet to have 2048 host addresses (2^11), 2046 of which could be assigned to devices.
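The /21 arithmetic above can be verified with a short sketch using Python's ipaddress module (an illustration, not part of the original text):

```python
import ipaddress

# Extending 172.16.0.0/16 to /21 borrows five subnet bits
# (2^5 = 32 subnets) and leaves eleven host bits
# (2^11 = 2048 addresses, 2046 assignable per subnet).
network = ipaddress.ip_network("172.16.0.0/16")
subnets = list(network.subnets(new_prefix=21))
print(len(subnets))                  # 32 subnets
print(subnets[0].num_addresses - 2)  # 2046 assignable hosts
```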
Note: In the past, there were limitations to the use of a subnet 0 (all subnet bits are set to
zero) and all ones subnet (all subnet bits set to one). Some devices would not allow the use of
these subnets. Cisco Systems devices allow the use of these subnets when the ip subnet-zero command is configured.
Example-1
Now that you have an understanding of subnetting, put this knowledge to use. In this
example, you are given two address / mask combinations, written with the prefix/length notation,
which have been assigned to two devices. Your task is to determine if these devices are on the
same subnet or different subnets. You can use the address and mask of each device in order to
determine to which subnet each address belongs.
DeviceA: 172.16.17.30/20
DeviceB: 172.16.28.15/20
172.16.17.30 - 10101100.00010000.00010001.00011110
255.255.240.0 - 11111111.11111111.11110000.00000000
————————| sub|——————
Looking at the address bits that have a corresponding mask bit set to one, and setting all
the other address bits to zero (this is equivalent to performing a logical “AND” between the
mask and address), shows you to which subnet this address belongs. In this case, DeviceA
belongs to subnet 172.16.16.0.
172.16.28.15 - 10101100.00010000.00011100.00001111
255.255.240.0 - 11111111.11111111.11110000.00000000
————————| sub|——————
From these determinations, DeviceA and DeviceB have addresses that are part of the
same subnet.
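The AND-the-mask procedure from Example-1 can be sketched in Python; the ipaddress module performs the logical AND for us (this code is illustrative, not from the original text):

```python
import ipaddress

def subnet_of(address: str, prefix: int) -> ipaddress.IPv4Network:
    # ANDing the address with the mask yields the subnet the
    # address belongs to; ip_interface(...).network does exactly that.
    return ipaddress.ip_interface(f"{address}/{prefix}").network

device_a = subnet_of("172.16.17.30", 20)
device_b = subnet_of("172.16.28.15", 20)
print(device_a)               # 172.16.16.0/20
print(device_a == device_b)   # True: both devices share the subnet
```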
Example-2
Given the Class C network of 204.15.5.0/24, subnet the network in order to create the
network in Figure 3.2 with the host requirements shown.
Looking at the network shown in Figure 3.2, you can see that you are required to create
five subnets. The largest subnet must support 28 host addresses. Is this possible with a Class
C network? And if so, how?
You can start by looking at the subnet requirement. In order to create the five needed
subnets you would need to use three bits from the Class C host bits. Two bits would only allow
you four subnets (2^2). Since you need three subnet bits, that leaves you with five bits for the host portion of the address. How many hosts does this support? 2^5 = 32 (30 usable). This
meets the requirement. Therefore you have determined that it is possible to create this network
with a Class C network. An example of how you might assign the subnetworks is:
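One way to sketch Example-2 in code (the specific assignments below are an assumption, since the original shows them only in a figure that is not reproduced here):

```python
import ipaddress

# 204.15.5.0/24 split into /27 blocks yields eight candidate
# subnets of 30 usable hosts each -- enough for the five required
# subnets, the largest of which must support 28 hosts.
network = ipaddress.ip_network("204.15.5.0/24")
candidates = list(network.subnets(new_prefix=27))
assigned = candidates[:5]  # assumption: assign the first five blocks
for subnet in assigned:
    print(subnet, "usable hosts:", subnet.num_addresses - 2)
```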
In all of the previous examples of subnetting, notice that the same subnet mask was
applied for all the subnets. This means that each subnet has the same number of available host
addresses. This may be what you need in some cases but, in most cases, having the same subnet mask for all subnets ends up wasting address space. In Example-2, a Class C network was split
into eight equal-size subnets; however, each subnet did not utilize all available host addresses,
which results in wasted address space. Figure 3.3 illustrates this wasted address space. It also
illustrates that of the subnets that are being used, NetA, NetC, and NetD have a lot of unused
host address space.
It is possible that this was a deliberate design accounting for future growth, but in many
cases this is just wasted address space due to the fact that the same subnet mask is used for
all the subnets.
Variable Length Subnet Masks (VLSM) allows you to use different masks for each subnet,
thereby using address space efficiently. Given the same network and requirements as in Example-2, develop a subnetting scheme with the use of VLSM, given:
* a /29 (255.255.255.248) would only allow 6 usable host addresses therefore netD
requires a /28 mask.
The easiest way to assign the subnets is to assign the largest first. For example, you can
assign in this manner:
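A minimal largest-first VLSM allocator can sketch the idea. The host counts below are hypothetical, since the original requirements appear only in a figure, and the vlsm helper itself is an illustration, not from the text:

```python
import ipaddress
import math

def vlsm(base: str, requirements: dict) -> dict:
    # Largest-first allocation: each network gets the smallest block
    # whose usable-host count (2^h - 2) covers its requirement.
    current = int(ipaddress.ip_network(base).network_address)
    plan = {}
    for name, hosts in sorted(requirements.items(), key=lambda kv: -kv[1]):
        host_bits = math.ceil(math.log2(hosts + 2))  # +2 for net/bcast
        subnet = ipaddress.ip_network((current, 32 - host_bits))
        plan[name] = subnet
        current += subnet.num_addresses
    return plan

# Hypothetical host counts, for illustration only.
plan = vlsm("204.15.5.0/24", {"NetA": 28, "NetB": 14, "NetD": 12,
                              "NetC": 2, "NetE": 2})
for name, subnet in plan.items():
    print(name, subnet)
```

Allocating the largest requirement first keeps every block aligned on its own boundary, which is exactly why the text recommends assigning the largest subnet first.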
If we use the default subnet mask with a Class C network address, then we already know
that three bytes are used to define the network and only one byte is used to define the hosts on
each network.
The default Class C mask is: 255.255.255.0. To make smaller networks, called
subnetworks, we will borrow bits from the host portion of the mask. Since the Class C mask
only uses the last octet for host addressing, we only have 8 bits at our disposal. Therefore, only
the following masks can be used with Class C networks (Table 3.1).
Subnet zero: Note that Table 3.1 does not assume the use of subnet zero. Cisco teaches that subnet zero may be used, but does not test that way, so the table follows the exam convention.
You can see in Table 3.1 that the bits that are turned on (1s) are used for subnetting, while
the bits that are turned off (0s) are used for addressing of hosts. You can use some easy math
to determine the number of subnets and hosts per subnet for each different mask.
To determine the number of subnets, use the formula 2^x - 2, where the exponent x is the number of subnet bits in the mask.
To determine the number of hosts, use the formula 2^x - 2, where the exponent x is the number of host bits in the mask.
To determine the mask you need for your network, you must first determine your business
requirements. Count the number of networks and the number of hosts per network that you
need. Then determine the mask by using the equations shown above—and don’t forget to
factor for growth.
For example, if you have eight networks and each requires 10 hosts, you would use the
Class C mask of 255.255.255.240 because 240 in binary is 11110000, which means you have
four subnet bits and four host bits. Using our math, we’d get the following:
2^4 - 2 = 14 subnets
2^4 - 2 = 14 hosts
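The mask-selection arithmetic can be sketched as a small helper (a hypothetical function, shown only to make the 2^x - 2 reasoning concrete):

```python
def mask_for(required_subnets: int, required_hosts: int):
    # Smallest subnet-bit count x with 2^x - 2 >= required_subnets
    # (the 2^n - 2 convention excludes subnet zero and all-ones).
    subnet_bits = 1
    while 2 ** subnet_bits - 2 < required_subnets:
        subnet_bits += 1
    host_bits = 8 - subnet_bits  # only the last octet of a Class C mask
    if 2 ** host_bits - 2 < required_hosts:
        raise ValueError("requirements do not fit in a Class C network")
    return subnet_bits, host_bits, f"255.255.255.{256 - 2 ** host_bits}"

# Eight networks of 10 hosts each -> four subnet bits, mask .240
print(mask_for(8, 10))  # (4, 4, '255.255.255.240')
```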
Many people find it easy to memorize the Class C information because Class C networks
have few bits to manipulate. However, there is an easier way to subnet.
Instead of memorizing the entire table (Table 3.1), it’s possible to glance at a host address
and quickly determine the necessary information if you’ve memorized key parts of the table.
First, you need to know your binary-to-decimal conversion. Memorize the number of bits used
with each mask that are shown in Table 3.1. Second, you need to remember the following:
256-192=64
256-224=32
256-240=16
256-248=8
256-252=4
Once you have the two steps memorized, you can begin subnetting.
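The "256 minus the mask" shortcut can be expressed as code (an illustrative helper, not from the text):

```python
def subnets_for_mask(last_octet_mask: int) -> list:
    # 256 minus the mask value gives the block size, which is also
    # the first subnet; keep adding it until you reach the mask.
    # Subnet zero and the all-ones subnet are excluded, matching
    # the 2^n - 2 convention used in this chapter.
    block = 256 - last_octet_mask
    return list(range(block, last_octet_mask, block))

print(subnets_for_mask(192))  # [64, 128]
print(subnets_for_mask(224))  # [32, 64, 96, 128, 160, 192]
```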
Example-1
We will use the Class C mask of 255.255.255.192. Ask five simple questions to gather all
the facts:
You already know how to answer questions one and two. To answer question three, use
the formula 256-subnetmask to get the first subnet and your variable. Keep adding this number
to itself until you get to the subnet mask value to determine the valid subnets. Once you verify
all of the subnets, you can determine the broadcast address by looking at the next subnet’s
value. The broadcast address is the number just before the next subnet number. Once you
have the subnet number and broadcast address, the valid hosts are the numbers in between.
How many subnet bits are used in this mask? Answer: 2 bits, so 2^2 - 2 = 2 subnets.
How many host bits are available per subnet? Answer: 6 bits, so 2^6 - 2 = 62 hosts per subnet.
What are the subnet addresses? Answer: 256 - 192 = 64, so 64 is the first subnet and 128 is the second subnet.
What is the broadcast address of each subnet? Answer: The broadcast address is always the number before the next subnet. The broadcast address of the 64 subnet is 127. The broadcast address of the 128 subnet is 191.
What is the valid host range of each subnet? Answer: The valid hosts are the numbers between the subnet number and the broadcast address. For the 64 subnet, the valid host range is 65-126. For the 128 subnet, the valid host range is 129-190.
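The question-and-answer drill can be checked mechanically; this illustrative helper computes the subnet number, broadcast address, and valid host range for each subnet of a last-octet mask:

```python
def subnet_facts(last_octet_mask: int) -> list:
    # Returns (subnet, broadcast, first host, last host) tuples.
    # The broadcast is the number just before the next subnet;
    # the valid hosts are the numbers in between.
    block = 256 - last_octet_mask
    facts = []
    for subnet in range(block, last_octet_mask, block):
        broadcast = subnet + block - 1
        facts.append((subnet, broadcast, subnet + 1, broadcast - 1))
    return facts

for subnet, bcast, first, last in subnet_facts(192):
    print(f"subnet {subnet}: broadcast {bcast}, hosts {first}-{last}")
```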
Example-2
Using the Class C mask of 255.255.255.224. Here are the answers:
How many subnet bits are used in this mask? Answer: 3 bits, so 2^3 - 2 = 6 subnets.
How many host bits are available per subnet? Answer: 5 bits, so 2^5 - 2 = 30 hosts per subnet.
What are the subnet addresses? Answer: 256 - 224 = 32, then 64, 96, 128, 160 and 192 (six subnets found by continuing to add 32).
What is the broadcast address of each subnet? Answer: The broadcast address for the 32 subnet is 63. The broadcast address for the 64 subnet is 95. The broadcast address for the 96 subnet is 127. The broadcast address for the 128 subnet is 159. The broadcast address for the 160 subnet is 191. The broadcast address for the 192 subnet is 223 (since 224 is the mask).
What is the valid host range of each subnet? Answer: The valid hosts are the numbers in between the subnet and broadcast addresses. For example, the 32 subnet valid hosts are 33-62.
Example-3
Using the Class C mask of 255.255.255.240. Here are the answers:
How many subnet bits are used in this mask? Answer: 4 bits, so 2^4 - 2 = 14 subnets.
How many host bits are available per subnet? Answer: 4 bits, so 2^4 - 2 = 14 hosts per subnet.
What are the subnet addresses? Answer: 256 - 240 = 16, then 32, 48, 64, 80, 96, 112, 128, 144, 160, 176, 192, 208 and 224 (14 subnets found by continuing to add 16).
What is the broadcast address of each subnet? Answer: Here are some examples: the broadcast address for the 16 subnet is 31. The broadcast address for the 32 subnet is 47. The broadcast address for the 64 subnet is 79. The broadcast address for the 96 subnet is 111. The broadcast address for the 160 subnet is 175. The broadcast address for the 192 subnet is 207.
What is the valid host range of each subnet? Answer: The valid hosts are the numbers in between the subnet and broadcast addresses. The 32 subnet valid hosts are 33-46.
Example-4
Using the Class C mask of 255.255.255.248. Here are the answers:
How many subnet bits are used in this mask? Answer: 5 bits, so 2^5 - 2 = 30 subnets.
How many host bits are available per subnet? Answer: 3 bits, so 2^3 - 2 = 6 hosts per subnet.
What are the subnet addresses? Answer: 256 - 248 = 8, then 16, 24, 32, 40, 48, and so forth. The last subnet is 240 (30 subnets found by continuing to add 8).
What is the broadcast address of each subnet? Answer: The broadcast address for the 8 subnet is 15. The broadcast address for the 16 subnet is 23. The broadcast address for the 48 subnet is 55.
What is the valid host range of each subnet? Answer: The valid hosts are the numbers in between the subnet and broadcast addresses. For example, the 32 subnet valid hosts are 33-38.
Example-5
Using the Class C mask of 255.255.255.252. Here are the answers:
How many subnet bits are used in this mask? Answer: 6 bits, so 2^6 - 2 = 62 subnets.
How many host bits are available per subnet? Answer: 2 bits, so 2^2 - 2 = 2 hosts per subnet.
What are the subnet addresses? Answer: 256 - 252 = 4, then 8, 12, 16, 20, and so forth. The last subnet is 248 (62 subnets found by continuing to add 4).
What is the broadcast address of each subnet? Answer: The broadcast address for the 4 subnet is 7. The broadcast address for the 8 subnet is 11. The broadcast address for the 12 subnet is 15. The broadcast address for the 20 subnet is 23.
What is the valid host range of each subnet? Answer: The valid hosts are the numbers in between the subnet and broadcast addresses. For example, the 16 subnet valid hosts are 17 and 18.
Let’s take a look at an example that will highlight how the above information is applied.
Here is an explanation of this example: First, we used 256 minus the subnet mask to get the variable and first subnet. Then we kept adding this number to itself until we passed the host address. The
subnet is the number before the host address, and the broadcast address is the number right
before the next subnet. The valid hosts are the numbers in between the subnet and broadcast
address.
Conclusion: It is important to be able to subnet quickly and efficiently. After studying the
examples presented, one should be familiar with this process with Class C addresses.
There are quite a few more masks we can use with a Class B network address than we
can with a Class C network address. Remember that this is not harder than subnetting with
Class C, but it can get confusing if you don’t pay attention to where the subnet bits and host bits
are in a mask.
We will use the same techniques as used in the Class C to subnet a network. We’ll start
with the Class B subnet mask of 255.255.192.0 and figure out the subnets, broadcast address,
and valid host range. We will answer the same five questions we answered for the Class C
subnet masks:
Before we answer these questions, there is one difference you need to be aware of when
subnetting a Class B network address. When subnetting in the third octet, you need to add the
fourth octet. For example, on the 255.255.192.0 mask, the subnetting will be done in the third
octet. To create a valid subnet, you must add the fourth octet of all 0s and all 1s for the network
and broadcast address (0 for all 0s and 255 for all 1s).
256 - 192 = 64, so the subnets are 64.0 and 128.0.
Broadcast for the 64.0 subnet is 127.255. Broadcast for the 128.0 subnet is 191.255.
Notice that the numbers in the third octet are the same numbers we used in the fourth
octet when subnetting the 192 mask. The only difference is that we add 0 and 255 in the fourth
octet.
For the 64.0 subnet, all the hosts between 64.1 and 127.254 are in the 64 subnet. In the
128.0 subnet, the hosts are 128.1 through 191.254.
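The 255.255.192.0 walk-through can be verified with the ipaddress module (illustrative only, not part of the original text):

```python
import ipaddress

# Class B 172.16.0.0/16 with mask 255.255.192.0 (/18). Under the
# 2^n - 2 convention only the middle two blocks (64.0 and 128.0)
# are usable subnets; subnet zero and all-ones are listed too.
network = ipaddress.ip_network("172.16.0.0/16")
subnets = list(network.subnets(new_prefix=18))
for subnet in subnets:
    print(subnet.network_address, "broadcast:", subnet.broadcast_address)
```

The output confirms the text: the broadcast for 172.16.64.0 is 172.16.127.255, and for 172.16.128.0 it is 172.16.191.255.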
Example-2: 255.255.240.0
2^4 - 2 = 14 subnets (256 - 240 = 16, giving subnets 16.0, 32.0, 48.0, etc.)
Broadcast for the 16.0 subnet is 31.255. Broadcast for the 32.0 subnet is 47.255,
etc.
Example-3: 255.255.248.0
2^5 - 2 = 30 subnets (256 - 248 = 8, giving subnets 8.0, 16.0, 24.0, etc.)
Broadcast for the 8.0 subnet is 15.255. Broadcast for the 16.0 subnet is 23.255,
etc.
Example-4: 255.255.252.0
2^6 - 2 = 62 subnets (256 - 252 = 4, giving subnets 4.0, 8.0, 12.0, etc.)
Broadcast for the 4.0 subnet is 7.255. Broadcast for the 8.0 subnet is 11.255, etc.
Example-5: 255.255.255.0
2^8 - 2 = 254 subnets (256 - 255 = 1, giving subnets 1.0, 2.0, 3.0, etc.)
Broadcast for the 1.0 subnet is 1.255. Broadcast for the 2.0 subnet is 2.255, etc.
The more difficult process of subnetting a Class B network address is when you start
using bits in the fourth octet for subnetting. For example, what happens when you use this
mask with a Class B network address: 255.255.255.128? Is that valid? Absolutely! There are
nine bits for subnetting and seven bits for hosts. That is 510 subnets, each with 126 hosts.
However, it is the most difficult mask to figure out the valid hosts for.
For the fourth octet, the mask would be 256-128=128, which is one subnet if it is
used. However, if you turn the subnet bit off, the value is 0. This means that for
every subnet in the third octet, the fourth octet has two subnets: 0 and 128, for
example 1.0 and 1.128.
Broadcast for the 0.128 subnet is 0.255; the broadcast for the 1.0 subnet is
1.127. Broadcast for the 1.128 subnet is 1.255, etc.
The thing to remember is that for every subnet in the third octet, there are two in the
fourth octet: 0 and 128. For the 0 subnet, the broadcast address is always 127. For the 128
subnet, the broadcast address is always 255.
With the mask 255.255.255.192, there are ten subnet bits: 2^10 - 2 = 1022 subnets.
256-255=1.0, 2.0, 3.0, etc. for the third octet. 256-192=64, 128, 192 for the fourth
octet. For every valid subnet in the third octet, we get four subnets in the fourth
octet: 0, 64, 128, and 192.
Broadcast for the 1.0 subnet is 1.63, since the next subnet is 1.64. Broadcast for
the 1.64 subnet is 1.127, since the next subnet is 1.128. Broadcast for the 1.128
subnet is 1.191, since the next subnet is 1.192. Broadcast for the 1.192 subnet is
1.255.
On this one, the 0 and 192 subnets are valid, since we are using the third octet as well.
The subnet range is 0.64 through 255.128. 0.0 is not valid since no subnet bits are on. 255.192
is not valid because then all subnet bits would be on.
256-255=1.0, 2.0, 3.0, etc. for the third octet. 256-224=32, 64, 96, 128, 160, 192 for
the subnet value. (For every value in the third octet, we get eight subnets in the
fourth octet: 0, 32, 64, 96, 128, 160, 192, 224.)
Broadcast for the 1.0 subnet is 1.31, since the next subnet is 1.32. Broadcast for
the 1.32 subnet is 1.63, since the next subnet is 1.64. Broadcast for the 1.64
subnet is 1.95, since the next subnet is 1.96. Broadcast for the 1.224 subnet is
1.255.
For this subnet mask, the 0 and 224 subnets are valid as long as not all subnet bits in the
third octet are off or all subnet bits in the fourth octet are on.
The Class A networking address scheme is designed for governments and large institutions that need a great many unique nodes. Although there are only 126 unique Class A network addresses, each network can contain approximately 17 million unique nodes, which can make subnetting such a network a nightmare.
Subnetting Class A addresses requires a little forethought, some basic information, and a
lot of practice. Here, I will explain Class A subnet masks and how to assign valid subnets and
host addresses to provide flexibility in configuring your network.
Class A subnet masks must start with 255.0.0.0 at a minimum, because the whole first
octet of an IP address (the IP address describes the specific location on the network) is used to
define the network portion. Routers use the network portion to send packets through an
internetwork. Routers aren't concerned with host addresses; they only need to know where the networks are located, and the MAC address is used to find a host on a LAN. The last three octets of a Class A subnet mask are used to address hosts on a LAN; you can manipulate these 24 bits however you wish.
If you wanted to create smaller networks (subnetworks) out of a Class A network ID, you’d
borrow bits from the host portion of the mask. The more bits you borrow, the more subnets you
can have, but this means fewer hosts per subnet. However, with a Class A mask, you have 24
bits to manipulate, so this isn’t typically a problem.
Once you have the mask assigned to each network, you must assign the valid subnet
addresses and host ranges to each network. To determine the valid subnets and host addresses
for each network, you need to answer three easy questions:
Valid subnet address: To figure out the valid subnet address, simply subtract the subnet
mask from 256. For example, if you had a Class A mask of 255.240.0.0, the equation would be
256-240=16. The number 16 is the first subnet and also your block size. Keep adding the block
size (in this case 16) to itself until you reach the subnet mask value. The valid subnets in this
example are 16, 32, 48, 64, 80, 96, 112, 128, 144, 160, 176, 192, 208, 224. As another example,
if you had a Class A subnet mask of 255.255.240.0, you’d use the mask on the second and third
octets minus 256. The second octet would be 256-255=1, 2, 3, etc., all the way to 254; the third
octet would be 256-240=16, 32, 48, etc.
Broadcast address: To determine the broadcast address of each subnet, just subtract 1
from the next subnet value. For example, within the 16 subnet, the next subnet is 32, so the
broadcast address of the 16 subnet is 31. The broadcast address for the 32 subnet is 47,
because the next subnet is 48. The broadcast address for the 48 subnet is 63, because the next
subnet is 64.
Valid host range: The valid hosts are the numbers between the subnet address and the
broadcast address. For the 16 subnet, the valid host range you can assign on a network is 17-
30 because the subnet number is 16 and the broadcast address is 31. For the 32 subnet, the
valid host range is 33 to 46 because the subnet number is 32 and the broadcast address is 47.
You can’t use the subnet number and broadcast addresses as valid host addresses.
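The three questions for a Class A mask can be answered with a short sketch (again using Python's ipaddress module purely for illustration):

```python
import ipaddress

# Class A 10.0.0.0/8 with mask 255.240.0.0 (/12): the block size is
# 256 - 240 = 16 in the second octet, so the first subnet after
# subnet zero is 10.16.0.0.
network = ipaddress.ip_network("10.0.0.0/8")
subnets = list(network.subnets(new_prefix=12))
first = subnets[1]  # 10.16.0.0/12, skipping subnet zero
print("subnet:", first.network_address)          # 10.16.0.0
print("broadcast:", first.broadcast_address)     # 10.31.255.255
print("first valid host:", next(first.hosts()))  # 10.16.0.1
```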
Note
Subnet zero: It is assumed you can use subnet zero. If you’re not using subnet
zero, subtract two from each number in the Subnets column in Table 3.2 above.
Once you have an idea what your network will look like, write down the number of
physical subnets you have and the number of hosts needed for each subnet. For
example, on a WAN point-to-point link, you need only two IP addresses, so you can
use a /30 mask.
/30: The slash (/) indicates the number of mask bits turned on. It saves you from
typing, or pronouncing, the whole mask. For example, /8 means 255.0.0.0, /16 is
255.255.0.0, and /24 is 255.255.255.0. You pronounce it as “configure a slash 24
mask on that network.” It’s just an easier way of saying “configure a 255.255.255.0
mask on that network.”
The 255.240.0.0 (/12) mask provides you with only four subnet bits, or 16 subnets (14 if you're not using
subnet zero) with 1,048,574 hosts each. The valid subnets are 256-240=16, 32, 48, 64, 80, etc.,
all the way to 224. (Subnets 0 and 240 are available if you’re using subnet zero.)
The 255.255.128.0 (/17) mask provides you with nine bits of subnetting and 15 host bits. This gives you 512 subnets with 32,766 hosts each. The second octet is 256-255=1, 2, 3, etc., all the way to
255. Zero is available in the second octet if you have either a subnet bit on in the third octet or
are, of course, using subnet zero.
You must remember that the third octet is using only one subnet bit. This bit can be either
off or on; if it is off, the subnet is 0. If it is on, the subnet is 128.
The 255.255.255.252 (/30) mask is the easiest to subnet. Even if you used this mask with a Class B or Class C network, you'd always have only two available host IDs. The
reason you would use this with a Class A mask is because it can give you up to 4,194,304
subnets with two hosts each. This is a perfect mask for a point-to-point link, so I suggest always
saving a few block sizes of four (/30) masks for use on WANs and point-to-point LAN connections.
If you use the 10.2.3.0 network, your subnets are always 2.3 in the second and third
octets, respectively. But the fourth octet is where it changes, as in 256-252=4, 8, 12, 16, 20, 24,
28, etc., all the way to 248. If you use subnet zero, your first subnet is 0 and your last subnet is 252.
You should probably use the private Class A network ID 10.0.0.0 in most networks these days. Why? Because 10.0.0.0 cannot be routed on the Internet, private address ranges let you build a more secure network and use port address translation (PAT) on your Internet-facing router to perform the translation for you. We suggest 10.0.0.0 for addressing your network because it provides the most flexibility for configuring networks.
Classless inter-domain routing (CIDR) is a set of Internet protocol (IP) standards that is
used to create unique identifiers for networks and individual devices. The IP addresses allow
particular information packets to be sent to specific computers. CIDR is an IP addressing scheme
that improves the allocation of IP addresses. CIDR, sometimes called supernetting, is a way to
allow more flexible allocation of Internet Protocol (IP) addresses than was possible with the
original system of IP address classes. As a result, the number of available Internet addresses
was greatly increased, which along with widespread use of network address translation (NAT),
has significantly extended the useful life of IPv4.
Originally, IP addresses were assigned in four major address classes, A through D. Each
of these classes allocates one portion of the 32-bit IP address format to identify a network
gateway — the first 8 bits for class A, the first 16 for class B, and the first 24 for class C. The
remainder identify hosts on that network: more than 16 million in Class A, 65,534 in Class B and 254 in Class C. (Class D addresses identify multicast domains.)
To illustrate the problems with the class system, consider that one of the most commonly
used classes was Class B. An organization that needed more than 254 host machines would
often get a Class B allocation, even though it would have far fewer than 65,534 hosts. This resulted in most of the allocated block of addresses going unused. The inflexibility of the class system
accelerated IPv4 address pool exhaustion. With IPv6, addresses grow to 128 bits, greatly
expanding the number of possible addresses on the Internet. The transition to IPv6 is slow,
however, so IPv4 address exhaustion continues to be a significant issue.
CIDR reduced the problem of wasted address space by providing a new and more flexible
way to specify network addresses in routers. CIDR lets one routing table entry represent an
aggregation of networks that exist in the forward path that don’t need to be specified on that
particular gateway. This is much like how the public telephone system uses area codes to
channel calls toward a certain part of the network. This aggregation of networks in a single
address is sometimes referred to as a supernet.
Using CIDR, each IP address has a network prefix that identifies either one or several
network gateways. The length of the network prefix in IPv4 CIDR is also specified as part of the
IP address and varies depending on the number of bits needed, rather than any arbitrary class
assignment structure. A destination IP address or route that describes many possible destinations
has a shorter prefix and is said to be less specific. A longer prefix describes a destination
gateway more specifically. Routers are required to use the most specific, or longest, network
prefix in the routing table when forwarding packets. (In IPv6, a CIDR block always gets 64 bits
for specifying network addresses). A CIDR network address looks like this under IPv4:
192.30.250.0/18
The “192.30.250.0” is the network address itself and the “18” says that the first 18 bits are
the network part of the address, leaving the last 14 bits for specific host addresses.
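The prefix/length arithmetic can be checked in code. Note that 192.30.250.0 does not itself sit on a /18 boundary; the ipaddress module (used here for illustration) reports the enclosing /18 block:

```python
import ipaddress

# A /18 prefix means the first 18 bits are network bits, leaving
# 32 - 18 = 14 host bits, i.e. 2^14 = 16384 addresses per block.
iface = ipaddress.ip_interface("192.30.250.0/18")
print("network:", iface.network)                    # 192.30.192.0/18
print("host bits:", 32 - iface.network.prefixlen)   # 14
print("addresses in block:", iface.network.num_addresses)  # 16384
```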
CIDR is now the routing system used by virtually all gateway routers on the Internet’s
backbone network. The Internet’s regulating authorities expect every Internet service provider
(ISP) to use it for routing. CIDR is supported by the Border Gateway Protocol, the prevailing
exterior (interdomain) gateway protocol and by the OSPF interior (or intradomain) gateway
protocol. Older gateway protocols like Exterior Gateway Protocol and Routing Information
Protocol do not support CIDR.
Advantages of CIDR
CIDR provides numerous advantages over the “classful” addressing scheme, whether or
not subnetting is used:
So, a company that needs 5,000 addresses can be assigned a block of 8,190 instead
of 65,534. Or, to think of it another way, the equivalent of a single Class B network
can be shared amongst 8 companies that each need 8,190 or fewer IP addresses.
Since CIDR is hierarchical, the detail of lower-level, smaller networks can be hidden
from routers that move traffic between large groups of networks.
An organization can use the same method used on the Internet to subdivide its
internal network into subnets of arbitrary complexity without needing a separate
subnetting mechanism.
Just like a subnet mask, a wildcard mask is 32 bits long. It acts as an inverted subnet mask: the zero bits indicate that the corresponding bit position must match the same bit position in the IP address, while the one bits indicate that the corresponding bit position does not have to match.
A wildcard mask is a mask of bits that indicates which parts of an IP address are available
for examination. In the Cisco IOS, they are used in several places, for example:
To indicate the size of a network or subnet for some routing protocols, such as
OSPF.
A wildcard mask is a matching rule. The rule for a wildcard mask is: a 0 bit means the corresponding address bit must match, and a 1 bit means it is ignored.
Any wildcard bit-pattern can be masked for examination: For example, a wildcard mask of
0.0.0.254 (binary equivalent = 00000000.00000000.00000000.11111110) applied to IP address
10.10.10.2 (00001010.00001010.00001010.00000010) will match even-numbered IP addresses
10.10.10.0, 10.10.10.2, 10.10.10.4, 10.10.10.6 etc. Same mask applied to 10.10.10.1
(00001010.00001010.00001010.00000001) will match odd-numbered IP addresses 10.10.10.1,
10.10.10.3, 10.10.10.5 etc.
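The even/odd matching rule can be sketched as a small helper (illustrative Python, not a Cisco command or API):

```python
import ipaddress

def wildcard_match(pattern: str, wildcard: str, address: str) -> bool:
    # Zero bits in the wildcard must match; one bits are ignored.
    care = ~int(ipaddress.ip_address(wildcard)) & 0xFFFFFFFF
    p = int(ipaddress.ip_address(pattern))
    a = int(ipaddress.ip_address(address))
    return (p & care) == (a & care)

# 0.0.0.254 ignores every bit of the last octet except the lowest,
# so an even pattern matches all even-numbered addresses.
print(wildcard_match("10.10.10.2", "0.0.0.254", "10.10.10.4"))  # True
print(wildcard_match("10.10.10.2", "0.0.0.254", "10.10.10.5"))  # False
```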
A network and wildcard mask combination of 1.1.1.1 0.0.0.0 would match an interface
configured exactly with 1.1.1.1 only, and nothing else. This is really useful if you want to activate
OSPF on a specific interface in a very clear and simple way.
If you insist on matching a range of networks, the network and wildcard mask combination
of 1.1.0.0 0.0.255.255 would match any interface in the range of 1.1.0.0 to 1.1.255.255. Because
of this, it’s simpler and safer to stick to using wildcard masks of 0.0.0.0 and identify each OSPF
interface individually, but once configured, they function exactly the same — one way is not
better than the other.
Wildcard masks are used in situations where subnet masks may not apply. For example,
when two affected hosts fall in different subnets, the use of a wildcard mask will group them
together.
Here is an example of using a wildcard mask to include only the desired interfaces in the
OSPF routing process:
Router R1 has three networks directly connected. To include only the 10.0.1.0 subnet in
the OSPF routing process, the following network command can be used:
Let’s break down the wildcard part of the command. To do that, we need to use binary
numbers instead of decimal notation.
10.0.1.0 = 00001010.00000000.00000001.00000000
0.0.0.255 = 00000000.00000000.00000000.11111111
The theory says that the zero bits of the wildcard mask have to match the same positions in the IP address. So, let's write the wildcard mask below the IP address:
00001010.00000000.00000001.00000000
00000000.00000000.00000000.11111111
As you can see from the output above, the last octet doesn't have to match, because the wildcard mask bits there are all ones. The first 24 bits do have to match, because the wildcard mask bits there are all zeros. So, in this case, the wildcard mask will match all addresses that begin with 10.0.1. In our case, only one network will be matched, 10.0.1.0/24.
What if we want to match both 10.0.0.0/24 and 10.0.1.0/24? Then we have to use a
different wildcard mask: 0.0.1.255. Why is that? Well, we
again need to write down the addresses in binary:
00001010.00000000.00000000.00000000 = 10.0.0.0
00001010.00000000.00000001.00000000 = 10.0.1.0
00000000.00000000.00000001.11111111 = 0.0.1.255
From the output above, we can see that only the first 23 bits have to match (notice that
the third octet of the wildcard mask has a 1 at the end). That means that all addresses in the
range of 10.0.0.0 – 10.0.1.255 will be matched. So, in our case, we have successfully matched
both addresses, 10.0.0.0 and 10.0.1.0.
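The 23-bit match described above can be checked in Python; `matches` is an illustrative helper written for this example:

```python
import ipaddress

def matches(addr: str, pattern: str, wildcard: str) -> bool:
    """0 bits in the wildcard must match the pattern; 1 bits are ignored."""
    a = int(ipaddress.IPv4Address(addr))
    p = int(ipaddress.IPv4Address(pattern))
    keep = 0xFFFFFFFF ^ int(ipaddress.IPv4Address(wildcard))
    return a & keep == p & keep

# 10.0.0.0 with wildcard 0.0.1.255 fixes only the first 23 bits, so both
# 10.0.0.0/24 and 10.0.1.0/24 fall inside the matched range, but 10.0.2.0/24
# does not.
for host in ("10.0.0.25", "10.0.1.200", "10.0.2.1"):
    print(host, matches(host, "10.0.0.0", "0.0.1.255"))
```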
NOTE
A wildcard mask of all zeros (0.0.0.0) means that the entire IP address has to match for
a statement to execute. For example, to match only the IP address 192.168.0.1, the command
used would be 192.168.0.1 0.0.0.0.
A wildcard mask of all ones (255.255.255.255) means that no bits have to match, so
all addresses are matched.
Frame Relay is a standardized wide area network technology that specifies the physical
and data link layers of digital telecommunications channels using a packet switching methodology.
Frame Relay is a high-performance WAN protocol that operates at the physical and data link
layers of the OSI reference model. Frame Relay originally was designed for use across Integrated
Services Digital Network (ISDN) interfaces. Today, it is used over a variety of other network
interfaces as well.
Major carriers have been retiring the service: one large provider withdrew frame relay service in 2007, Verizon said it planned to phase out the service in 2015, and AT&T
stopped offering frame relay in 2012 but said it would support existing customers until 2016.
Frame relay puts data in a variable-size unit called a frame and leaves any necessary
error correction (retransmission of data) up to the endpoints, which speeds up overall data
transmission. For most services, the network provides a permanent virtual circuit (PVC), which
means that the customer sees a continuous, dedicated connection without having to pay for a
full-time leased line, while the service provider figures out the route each frame travels to its
destination and can charge based on usage. Switched virtual circuits (SVC), by contrast, are
temporary connections that are destroyed after a specific data transfer is completed.
An enterprise can select a level of service quality, prioritizing some frames and making
others less important. A number of service providers, including AT&T, offer frame relay, and it’s
available on fractional T-1 or full T-carrier lines. Frame relay complements and provides
a mid-range service between ISDN, which offers bandwidth at 128 Kbps, and Asynchronous
Transfer Mode (ATM), which operates in somewhat similar fashion to frame relay but at speeds
of 155.520 Mbps or 622.080 Mbps.
In order for a frame relay WAN to transmit data, data terminal equipment (DTE) and data
circuit-terminating equipment (DCE) are required. DTEs are typically located on the customer’s
premises and can encompass terminals, routers, bridges and personal computers. DCEs are
managed by the carriers and provide switching and associated services.
Frame relay is based on the older X.25 packet-switching technology, which was designed
for the error-prone analog transmission links of its era, such as those carrying voice conversations. Unlike X.25,
frame relay is a fast packet technology, which means that the protocol does not
attempt to correct errors. When an error is detected in a frame, it is simply dropped (that is,
thrown away). The end points are responsible for detecting and retransmitting dropped frames
(though the incidence of error in digital networks is extraordinarily small relative to analog
networks).
Frame relay is often used to connect LANs with major backbones as well as on public
wide area networks and also in private network environments with leased T-1 lines. It requires
a dedicated connection during the transmission period and is not ideal for voice or video, which
require a steady flow of transmissions. Frame relay transmits packets at the data link layer of
the Open Systems Interconnection (OSI) model rather than at the network layer. A frame can
incorporate packets from different protocols such as Ethernet and X.25. It is variable in size and
can be as large as a thousand bytes or more.
Configuring user equipment in a Frame Relay network is extremely simple. The connection-
oriented link-layer service provided by Frame Relay has properties like non-duplication of frames,
preservation of the frame transfer order and small probability of frame loss. The features provided
by Frame Relay make it one of the best choices for interconnecting local area networks using
a wide area network. However, the drawback of this method is that it becomes prohibitively
expensive as the network grows.
Frame Relay offers several benefits. First, it helps reduce the cost of internetworking,
since it considerably reduces the number of circuits required and the associated bandwidth.
Second, it improves performance by reducing network complexity. Third, it increases
interoperability through international standards. Fourth, Frame Relay is protocol independent
and can easily combine traffic from other networking protocols such as IPX, SNA and IP;
the resulting reduction in network management effort and unification of hardware also
contribute to cost savings.
In business scenarios, where there is unpredictable and high-volume traffic, Frame Relay
is one of the best choices. It also remains a great choice for medium- to large-sized networks,
which makes use of star or mesh connectivity.
In business scenarios, where there is a slow connection or continuous traffic flow due to
applications like multimedia, Frame Relay is not a recommended choice.
Network providers commonly implement Frame Relay for voice (VoFR) and data as an
encapsulation technique between local area networks (LANs) over a wide area network
(WAN). Each end user gets a private line (or leased line) to a Frame Relay node. The Frame
Relay network handles the transmission over a frequently changing path that is transparent
to the end user. It is less expensive than leased lines, which is one reason for its popularity;
the extreme simplicity of configuring user equipment in a Frame Relay network is another.
With the advent of Ethernet over fiber optics, MPLS, VPNs and dedicated broadband
services such as cable modems and DSL, the Frame Relay protocol and encapsulation may be
nearing the end of its life. However, many rural areas still lack DSL and cable modem
services, and in such cases the least expensive type of non-dial-up connection remains a
64-kbit/s Frame Relay line. Thus a retail chain, for instance, may use Frame Relay to
connect rural stores into its corporate WAN.
A data link connection identifier (DLCI) is a Frame Relay 10-bit-wide link-local virtual
circuit identifier used to assign frames to a specific PVC or SVC. Frame Relay networks use
DLCIs to statistically multiplex frames. DLCIs are preloaded into each switch and act as road
signs to the traveling frames.
The standard allows the existence of 1024 DLCIs. DLCI 0 is reserved for the ANSI/Q.933a
LMI standard, under which only numbers 16 to 976 are usable for end-user equipment. DLCI 1023
is reserved for Cisco LMI, under which numbers 16 to 1007 are usable.
In summary, if using Cisco LMI, numbers from 16 to 1007 are available for end-user
equipment; the rest are reserved for various management purposes. DLCIs are Layer 2 addresses
that are locally significant: the same number can be reused on different links in a frame relay
cloud, but no two virtual circuits on the same access link share a DLCI.
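The usable ranges quoted above can be captured in a small validity check; this sketch uses only the figures given in the text, and `usable_dlci` is an illustrative helper:

```python
def usable_dlci(dlci: int, lmi: str = "cisco") -> bool:
    """Check whether a DLCI is available for end-user PVCs.

    Upper bounds follow the figures quoted in the text:
    16-1007 under Cisco LMI, 16-976 under the ANSI/Q.933a LMI standard.
    """
    upper = {"cisco": 1007, "ansi": 976}[lmi]
    return 16 <= dlci <= upper

print(usable_dlci(100))           # True
print(usable_dlci(0))             # False: DLCI 0 is reserved for LMI signalling
print(usable_dlci(1000, "ansi"))  # False under ANSI, though usable under Cisco LMI
```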
In a Frame Relay network, the committed information rate (CIR) is the bandwidth for a
virtual circuit that a service provider guarantees under normal conditions. The committed
data rate is the payload portion of the CIR. At any given time, the available bandwidth
should not fall below this committed figure.
With frame relay networks, multiple customers can share the same physical wires using
virtual circuits. Since different customers have different bandwidth needs, providers can designate
faster connections to those who need them with a committed information rate. A streaming
video provider running a content delivery network needs more throughput than a customer
primarily sending text data, for example. Under a CIR, a customer is guaranteed a certain
bandwidth under a service level agreement. Frame relay connections are also usually burstable
with an excess information rate (EIR) or peak information rate (PIR).
Committed Information Rate is a way of guaranteeing that, even though you share a
bandwidth pool with many other users, you will receive at least this portion of it, no matter
how busy the link gets.
As we know, the cost of providing bandwidth to satellite users (driven by the development,
launch, maintenance and ground segment of a satellite) is extremely high. The only way to
make this affordable is to share the bandwidth among several users.
Having a small portion of dedicated bandwidth becomes important when you use the
internet for something with a critical minimum bandwidth, like a VoIP phone call, which can
require around 30 kbps, 20 kbps, or sometimes more or less, depending on the type of call you
are making. A free Skype call typically requires about 30-40 kbps, but a call through the
network provider's VoIP router could need considerably less. If the bandwidth is not available
at the time of your phone call, you could have very disappointing results, with the call
dropping or parts of the conversation dropping out.
Guaranteeing that minimum bandwidth costs money, as this is bandwidth that the satellite
provider cannot sell to other users, so expect to pay a premium for CIR.
To ensure quality voice calls, you only need to guarantee the bandwidth to support as
many voice calls as your VoIP router will support, typically 10 kbps - 20 kbps per simultaneous
call.
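The sizing rule above reduces to a one-line calculation; `required_cir_kbps` is an illustrative helper, and the 20 kbps default is the upper end of the per-call figure quoted in the text:

```python
def required_cir_kbps(simultaneous_calls: int, kbps_per_call: int = 20) -> int:
    """Minimum CIR needed to guarantee the quoted per-call bandwidth."""
    return simultaneous_calls * kbps_per_call

# A VoIP router supporting 4 simultaneous calls at 20 kbps each:
print(required_cir_kbps(4))  # 80
```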
BIR - Burst Information Rate, or MIR - Maximum Information Rate is the theoretical
maximum to which your bandwidth can increase as bandwidth becomes available. This is the
size of the complete data pipe that you are sharing with others, and the rate normally advertised
by providers.
In practice, it is rare that you will ever actually achieve this rate, even when you are the
only subscriber to the service. There are overheads and bottlenecks both on the vessel and on
shore. Typically one would achieve bandwidths of between 50% and 90% of the advertised
rate.
If this does not meet your requirements, you will need to subscribe to a higher level of
service, or increase your CIR, both of which will cost more money.
Contention Ratio is the number of other subscribers on the same network competing for
the same bandwidth.
Generally speaking, the contention ratio is the MIR (Maximum Information Rate) divided
by the CIR (Committed Information Rate). So with a contention ratio of 4:1 on a 1 Mbps downlink,
the 4 subscribers would each have 256kbps guaranteed, while they can each burst up to the full
1 Mbps as it is available and not in use by their peers.
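The arithmetic above can be sketched as follows, taking 1 Mbps = 1024 kbps (which is what the 256 kbps figure implies); `guaranteed_kbps` is an illustrative helper:

```python
def guaranteed_kbps(mir_kbps: int, contention_ratio: int) -> float:
    """CIR per subscriber = MIR divided by the contention ratio."""
    return mir_kbps / contention_ratio

# A 1 Mbps (1024 kbps) downlink shared at 4:1 guarantees each subscriber
# 256 kbps, while any one of them may still burst up to the full MIR.
print(guaranteed_kbps(1024, 4))  # 256.0
```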
A quality marine shared network may have a contention ratio of 5:1, which generally
provides good results. Theoretically this means that a total of 5 subscribers are using the
assigned MIR bandwidth simultaneously. Because internet use is sporadic, not everyone is
downloading at the same time. Not everyone is even using
the internet all the time, and when they do, it is in fits and starts with many idle times in between,
like while you are reading the web page that you have downloaded. With voice calls, hardly any
bandwidth is used while you are not speaking, between sentences, between words, and even
between syllables. So there is plenty of free bandwidth for others to use.
Some providers offer a choice of an entry level package of 8 : 1 contention ratio which will
work fine for most internet applications, and a premium package of 4:1 contention ratio which
should provide excellent throughput near to the contracted rate. Most providers also advertise
a 1:1 contention ratio, but this is a very expensive proposition and should only be considered if
your applications demand the full bandwidth all of the time. It is often possible to temporarily
bump up the bandwidth to 1:1 if you have that requirement for short periods of time and then
drop back to a shared service during idle times.
3.5.6 Oversubscription
Some of the low-budget providers (perhaps not marine) have contention ratios of many
to one (20:1 or 40:1), and some terrestrial internet providers use contention ratios of 50:1.
Depending on your usage requirements, this is often fine for casual use and brings offshore
VSAT service to vessels that would not normally have it in their budget.
There is talk that some providers will take license with the contention ratios and
oversubscribe the network, where they put more subscribers than the defined contention ratio
on the same bandwidth. They justify this by saying that the contention ratio is the average
number of subscribers using the network at any given time, not the total number of subscribers
assigned to that network. They count only the users that are actually online at that time. They
monitor the usage and dynamically adjust the bandwidth to keep a satisfactory user experience
for all. As long as this is done diligently, it should not be an issue, and if the user experience
is not affected, hopefully this economy will be reflected in reduced monthly fees to the subscribers.
The bottom line is the user experience and the monthly communication budget of the
vessel. If you are not getting satisfactory results, you may need to increase your service
agreement and pay for more MIR or CIR. If you have an absolute minimum bandwidth requirement
(like VoIP), increase your CIR (or subscribe to a lower contention ratio). If you want more
speed, then you will want to increase your MIR.
A PVC is designed to eliminate the need to set up a call connection on frame relay, ATM
or X.25 networks. Typically, the division of the physical connection of a frame relay or similar
network into multiple virtual circuits (VCs) allows one physical connection to support several
VCs simultaneously. Each connection is permanent and transfers data by utilizing the
underlying bandwidth capacity and infrastructure. For example, a bank's headquarters often
sets up a PVC to each branch office for continuous data exchange and transfer.
Each packet is assigned a label number and the switching takes place after examination
of the label assigned to each packet. The switching is much faster than IP-routing. New
technologies such as Multiprotocol Label Switching (MPLS) use label switching. The established
ATM protocol also uses label switching at its core.
Note: Asynchronous Transfer Mode (ATM) is, according to the ATM Forum, “a
telecommunications concept defined by ANSI and ITU standards for carriage of a complete
range of user traffic, including voice, data, and video signals”. ATM was developed to meet the
needs of the Broadband Integrated Services Digital Network, as defined in the late 1980s, and
designed to integrate telecommunication networks. Additionally, it was designed for networks
that must handle both traditional high-throughput data traffic, and real-time, low-latency content
such as voice and video. The reference model for ATM approximately maps to the three lowest
layers of the ISO-OSI reference model: network layer, data link layer, and physical layer. ATM is
a core protocol used over the SONET/SDH backbone of the public switched telephone network
(PSTN) and Integrated Services Digital Network (ISDN), but its use is declining in favour of
all-IP networks.
According to RFC 2475 (An Architecture for Differentiated Services, December 1998):
“Examples of the label switching (or virtual circuit) model include Frame Relay, ATM, and MPLS.
In this model path forwarding state and traffic management or Quality of Service (QoS) state is
established for traffic streams on each hop along a network path. Traffic aggregates of varying
granularity are associated with a label switched path at an ingress node, and packets/cells
within each label switched path are marked with a forwarding label that is used to look up the
next-hop node, the per-hop forwarding behavior, and the replacement label at each hop. This
model permits finer granularity resource allocation to traffic streams, since label values are not
globally significant but are only significant on a single link; therefore resources can be reserved
for the aggregate of packets/cells received on a link with a particular label, and the label switching
semantics govern the next-hop selection, allowing a traffic stream to follow a specially engineered
path through the network.”
MPLS was created in the late 1990s as a more efficient alternative to traditional IP routing,
which requires each router to independently determine a packet’s next hop by inspecting the
packet’s destination IP address before consulting its own routing table. This process consumes
time and hardware resources, potentially resulting in degraded performance for real-time
applications such as voice and video.
In an MPLS network, the very first router to receive a packet determines the packet’s
entire route upfront, the identity of which is quickly conveyed to subsequent routers using a
label in the packet header.
While router hardware has improved exponentially since MPLS was first developed —
somewhat diminishing its significance as a more efficient traffic management technology— it
remains important and popular due to its various other benefits, particularly security, flexibility
and traffic engineering.
a) Components of MPLS
One of the defining features of MPLS is its use of labels — the L in MPLS. Sandwiched
between Layers 2 and 3, a label is a four-byte — 32-bit — identifier that conveys the packet’s
predetermined forwarding path in an MPLS network. Labels can also contain information related
to quality of service (QoS), indicating a packet’s priority level.
The paths, which are called label-switched paths (LSPs), enable service providers to
decide ahead of time the best way for certain types of traffic to flow within a private or public
network.
In an MPLS network, each packet gets labeled on entry into the service provider’s network
by the ingress router, also known as the label edge router (LER). This is also the router that
decides the LSP the packet will take until it reaches its destination address.
All the subsequent label-switching routers (LSRs) perform packet forwarding based only
on those MPLS labels — they never look as far as the IP header. Finally, the egress router
removes the labels and forwards the original IP packet toward its final destination.
When an LSR receives a packet, it performs one or more of the following actions:
Push: Adds a label. This is typically performed by the ingress router.
Swap: Replaces a label. This is usually performed by LSRs between the ingress
and egress routers.
Pop: Removes a label. This is most often done by the egress router.
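These per-hop label actions can be modelled with simple forwarding tables; all router names and label values below are invented for illustration:

```python
# Each hop's table maps an incoming label to (next hop, action, outgoing label).
# The ingress pushes a label, interior LSRs swap it, and the egress pops it.
TABLES = {
    "ingress": {None: ("lsr1", "push", 100)},
    "lsr1":    {100:  ("lsr2", "swap", 200)},
    "lsr2":    {200:  ("egress", "swap", 300)},
    "egress":  {300:  (None, "pop", None)},
}

def forward(router: str, label):
    """Look up the next hop and outgoing label for a packet at this router."""
    next_hop, action, out_label = TABLES[router][label]
    return next_hop, out_label

# Walk a packet along the label-switched path.
hop, label = "ingress", None
path = []
while hop is not None:
    path.append(hop)
    hop, label = forward(hop, label)
print(path)  # ['ingress', 'lsr1', 'lsr2', 'egress']
```

Note that each label is meaningful only on a single link, matching the RFC 2475 observation quoted earlier that label values are not globally significant.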
The benefits of MPLS are scalability, performance, better bandwidth utilization, reduced
network congestion and a better end-user experience.
MPLS itself does not provide encryption, but it behaves as a virtual private network and, as
such, is partitioned off from the public Internet. MPLS is therefore considered a secure transport
mode, and it is less exposed to the denial-of-service attacks that can affect pure IP-based networks.
Service providers and enterprises can use MPLS to implement QoS by defining LSPs
that can meet specific service-level agreements on traffic latency, jitter, packet loss and downtime.
For example, a network might have three service levels that prioritize different types of traffic —
e.g., one level for voice, one level for time-sensitive traffic and one level for best effort traffic.
On the negative side, MPLS is a service that must be purchased from a carrier and is far
more expensive than sending traffic over the public Internet.
As companies expand into new markets, they may find it difficult to find an MPLS service
provider who can deliver global coverage. Typically, service providers piece together global
coverage through partnerships with other service providers, which can be costly. MPLS was
designed in an era when branch offices sent traffic back to a main headquarters or data center,
not for today’s world where branch office workers want direct access to the cloud.
An edge device is a device which provides an entry point into enterprise or service provider
core networks. Examples include routers, routing switches, integrated access devices (IADs),
multiplexers, and a variety of metropolitan area network (MAN) and wide area network (WAN)
access devices.
Typically, the edge router sends or receives data directly to or from other organizations’
networks, using either static or dynamic routing capabilities. Handoffs between the campus
network and the internet or WAN edge primarily use Ethernet, usually Gigabit Ethernet copper
or Gigabit Ethernet over single or multimode fiber optics.
In some instances, an organization maintains multiple isolated networks of its own and
uses edge routers to link them together instead of using a core router.
Edge routers are often hardware devices, but their functions can also be performed by
software running on a standard x86 server.
At its most essential level, the internet can be viewed as the sum of all the interconnections
of edge routers across all participating organizations, from its periphery — small business and
home broadband routers, for example — all the way to its core, where major telecom provider
networks connect to each other via massive edge routers.
Edge routers play a fundamental role as more services and applications begin to be
managed on an organization’s network edge rather than in its data center or in the cloud.
Services considered suitable for edge router management include wireless capabilities often
built into network edge devices, Dynamic Host Configuration Protocol (DHCP) services and
domain name system (DNS) services, among others.
In general, edge devices are normally routers that provide authenticated access (most
commonly PPPoA and PPPoE) to faster, more efficient backbone and core networks. The trend
is to make the edge device smart and the core device(s) “dumb and fast”, so edge routers often
include Quality of Service (QoS) and multi-service functions to manage different types of traffic.
Consequently, core networks are often designed with switches that use routing protocols such
as Open Shortest Path First (OSPF) or forwarding technologies such as Multiprotocol Label
Switching (MPLS) for reliability and scalability, allowing edge routers to have redundant links
to the core network. Links between core networks are handled differently; for example,
Border Gateway Protocol (BGP) is often used for peering exchanges between providers.
Edge routers are divided into two different types: subscriber edge routers and label edge
routers.
Label edge routers are used at the edge of Multiprotocol Label Switching (MPLS) networks;
they act as gateways between a local network and a WAN or the internet and assign labels to
outbound data transmissions. Edge routers are not internal routers that partition a given AS
network into separate subnets. To connect to external networks, routers use the Internet Protocol
(IP) and routing protocols such as Open Shortest Path First (OSPF) to route packets efficiently.
In general, edge routers accept inbound customer traffic into the network. These edge
devices characterize and secure IP traffic from other edge routers, as well as core routers.
They provide security for the core.
By comparison, core routers offer packet forwarding between other core and edge routers
and manage traffic to prevent congestion and packet loss. To improve efficiency, core routers
often employ multiplexing.
c) Security considerations
Because edge routers serve as a connection point between external networks, security is
an issue, since enterprises can’t control who might try to access the corporate network.
A Customer Edge Router (CE router) is a router located on the customer premises that
provides an Ethernet interface between the customer’s LAN and the provider’s core network.
CE routers, P (provider) routers and PE (provider edge) routers are components in an MPLS
(multiprotocol label switching) architecture. Provider routers are located in the core of the provider
or carrier’s network. Provider edge routers sit at the edge of the network. CE routers connect to
PE routers and PE routers connect to other PE routers over P routers.
The customer edge (CE) is the router at the customer premises that is connected to the
provider edge of a service provider IP/MPLS network. CE peers with the Provider Edge (PE)
and exchanges routes with the corresponding VRF inside the PE. The routing protocol used
could be static or dynamic (an interior gateway protocol like OSPF or an exterior gateway
protocol like BGP).
A Provider Edge router (PE router) is a router between one network service provider’s
area and areas administered by other network providers. A network provider is usually an Internet
service provider as well (or only that). The term PE router covers equipment capable of a broad
range of routing protocols.
PE routers do not need to be aware of what kind of traffic is coming from the provider’s
network, as opposed to a P Router that functions as a transit within the service provider’s
network. However, some PE routers also do labelling.
Data terminal equipment (DTE) is an end instrument that converts user information into
signals or reconverts received signals. These can also be called tail circuits. A DTE device
communicates with the data circuit-terminating equipment (DCE). The DTE/DCE classification
was introduced by IBM. In computer data transmission, DTE (Data Terminal Equipment) is the
RS-232C interface that a computer uses to exchange data with a modem or other serial device.
A DTE is the functional unit of a data station that serves as a data source or a data sink
and provides for the data communication control function to be performed in accordance with
the link protocol.
A user interacts with the DTE (for example, through a human-machine interface), or the DTE may
itself be the user. Usually, the DTE device is the terminal (or a computer emulating a terminal),
and the DCE is a modem or another carrier-owned device.
A general rule is that DCE devices provide the clock signal (internal clocking) and the
DTE device synchronizes on the provided clock (external clocking). D-sub connectors follow
another rule for pin assignment.
This term is also generally used in the Telco and Cisco equipment context to designate a
network device, such as terminals, personal computers but also routers and bridges, that’s
unable or configured not to generate clock signals. Hence a direct PC to PC Ethernet connection
can also be called a DTE to DTE communication. This communication is done via an Ethernet
crossover cable as opposed to a PC to DCE (hub, switch, or bridge) communication which is
done via an Ethernet straight cable.
Data Terminal Equipment (DTE) is any equipment that is either a source or a destination for
digital data. DTEs do not generally communicate directly with each other; to do so, they need
a DCE to carry out the communication. The DTE does not need to know how data is sent or
received; the communication details are left to the DCE. A typical example of a DTE is a
computer. Other common DTE examples include:
Printers
PCs
Dumb Terminals
Routers
Data circuit-terminating equipment (DCE) sits at the physical layer of the
OSI model, taking data generated by Data Terminal Equipment (DTE) and converting it into a
signal that can then be transmitted over a communications link. A common DCE example is a
modem, which works as a translator of digital and analogue signals.
Some additional interfacing electronic equipment may also be needed to pair the DTE
with a transmission channel or to connect a circuit to the DTE. DCE and DTE are often confused
with each other, but these are two different device types that are interlinked with an RS-232
serial line.
DCE may also be responsible for providing timing over a serial link. In a complex network
which uses directly connected routers to provide serial links, one serial interface of each
connection must be configured with a clock rate to provide synchronization. DCE is sometimes
said to stand for Data Circuit-terminating Equipment.
Other common DCE examples include:
ISDN adapters
Microwave stations
In a computer, clock speed refers to the number of pulses per second generated by an
oscillator that sets the tempo for the processor. Clock speed is usually measured in MHz
(megahertz, or millions of pulses per second) or GHz (gigahertz, or billions of pulses per second).
Today's personal computers run at clock speeds of several gigahertz. The clock speed is
determined by a quartz-crystal circuit, similar to those used in radio communications equipment.
Also called clock rate, it is the speed at which a microprocessor executes instructions.
Every computer contains an internal clock that regulates the rate at which instructions are
executed and synchronizes all the various computer components. The CPU requires a fixed
number of clock ticks (or clock cycles) to execute each instruction. The faster the clock, the
more instructions the CPU can execute per second. Clock speeds are expressed in megahertz
(MHz) or gigahertz (GHz).
The internal architecture of a CPU has as much to do with a CPU’s performance as the
clock speed, so two CPUs with the same clock speed will not necessarily perform equally.
Whereas an Intel 80286 microprocessor requires 20 cycles to multiply two numbers, an Intel
80486 or later processor can perform the same calculation in a single clock tick. (Clock tick
here refers to the system clock, which ran at 66 MHz on typical PCs of that era.) These newer
processors, therefore, would be 20 times faster than the older processors even if their clock
speeds were the same. In addition, some microprocessors are superscalar, which means that
they can execute more than one instruction per clock cycle.
For many years, computer clock speed roughly doubled annually. The Intel 8088, common in
computers around the year 1980, ran at 4.77 MHz; the 1 GHz mark was passed in the year
2000.
Clock speed is one measure of computer “power,” but it is not always directly proportional
to the performance level. If you double the speed of the clock, leaving all other hardware
unchanged, you will not necessarily double the processing speed. The type of microprocessor,
the bus architecture, and the nature of the instruction set all make a difference. In some
applications, the amount of random access memory (RAM) is important, too.
Some processors execute only one instruction per clock pulse. More advanced processors
can perform more than one instruction per clock pulse. The latter type of processor will work
faster at a given clock speed than the former type. Similarly, a computer with a 32-bit bus will
work faster at a given clock speed than a computer with a 16-bit bus. For these reasons, there
is no simplistic, universal relation among clock speed, “bus speed,” and millions of instructions
per second (MIPS).
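The relationship between clock speed, cycles per instruction, and throughput can be sketched with a little arithmetic. This reuses the multiply example from the text (20 clock ticks on an 80286 versus 1 tick on an 80486); the 66 MHz clock is purely an illustrative value.

```python
# Rough throughput estimate: MIPS = clock rate (Hz) / cycles per instruction / 1e6.

def mips(clock_hz: float, cycles_per_instruction: float) -> float:
    """Millions of instructions per second for a given clock and CPI."""
    return clock_hz / cycles_per_instruction / 1_000_000

old = mips(66_000_000, 20)   # 80286-style multiply: 20 clock ticks
new = mips(66_000_000, 1)    # 80486-style multiply: 1 clock tick

print(old)             # 3.3
print(new)             # 66.0
print(round(new / old))  # 20 -- same clock speed, 20x the throughput
```

This makes concrete why two CPUs at the same clock speed need not perform equally: the CPI term matters as much as the clock.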
Excessive clock speed can be detrimental to the operation of a computer. As the clock
speed in a computer rises without upgrades in any of the other components, a point will be
reached beyond which a further increase in frequency will render the processor unstable. Some
computer users deliberately increase the clock speed, hoping this alone will result in a proportional
improvement in performance, and are disappointed when things don’t work out that way.
Summary
Subnetting is basically just a way of splitting a TCP/IP network into smaller, more
manageable pieces. When you subnet your network, you split it into separate
but interconnected networks. The main purpose of subnetting is to
help relieve network congestion.
Subnetting allows you to create multiple logical networks that exist within a single
Class A, B, or C network. If you do not subnet, you are only able to use one network
from your Class A, B, or C network, which is unrealistic.
CIDR reduced the problem of wasted address space by providing a new and more
flexible way to specify network addresses in routers. CIDR lets one routing table
entry represent an aggregation of networks that exist in the forward path that don’t
need to be specified on that particular gateway. CIDR provides numerous advantages
over the “classful” addressing scheme, whether or not subnetting is used.
Wildcard masks are used to specify a range of network addresses. They are
commonly used with routing protocols (like OSPF) and access lists.
Review Questions
What is subnetting and what are the advantages of subnetting?
Explain CIDR.
How are edge routers used in a network? What are their advantages?
References
CCNA Routing and Switching 200-125 Official Cert Guide Library
https://searchsdn.techtarget.com
https://www.quora.com
https://en.wikipedia.org/
https://whatis.techtarget.com
UNIT - 4
VIRTUAL LANs
Learning Objectives
Learners will be able to understand the concept of Virtual LAN and also configure a
VLAN on their own.
In this chapter we also explain the concept of VLAN Trunking Protocol and its
advantages as well as disadvantages.
Structure
4.1 Introduction
4.1 Introduction
A virtual LAN (VLAN) is any broadcast domain that is partitioned and isolated in a computer
network at the data link layer (OSI layer 2). LAN is the abbreviation for local area network and
in this context virtual refers to a physical object recreated and altered by additional logic. VLANs
work by applying tags to network packets and handling these tags in networking systems –
creating the appearance and functionality of network traffic that is physically on a single network
but acts as if it is split between separate networks. In this way, VLANs can keep network
applications separate despite being connected to the same physical network, and without
requiring multiple sets of cabling and networking devices to be deployed.
VLANs allow network administrators to group hosts together even if the hosts are not
directly connected to the same network switch. Because VLAN membership can be configured
through software, this can greatly simplify network design and deployment. Without VLANs,
grouping hosts according to their resource needs necessitates the labor of relocating nodes or
rewiring data links. VLANs allow networks and devices that must be kept separate to share the
same physical cabling without interacting, improving simplicity, security, traffic management, or
economy. For example, a VLAN could be used to separate traffic within a business by user
group (say, ordinary users versus network administrators) or by traffic type, so that users or low-
priority traffic cannot directly affect the rest of the network’s functioning. Many Internet hosting
services use VLANs to separate their customers’ private zones from each other, allowing each
customer’s servers to be grouped together in a single network segment while being located
anywhere in their data center. Some precautions are needed to prevent traffic “escaping” from
a given VLAN, an exploit known as VLAN hopping.
To subdivide a network into VLANs, one configures network equipment. Simpler equipment
can partition only per physical port (if at all), in which case each VLAN is connected with a
dedicated network cable. More sophisticated devices can mark frames through VLAN tagging,
so that a single interconnect (trunk) may be used to transport data for multiple VLANs. Since
VLANs share bandwidth, a VLAN trunk can use link aggregation, quality-of-service prioritization,
or both to route data efficiently.
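The tagging mentioned above can be made concrete with a short sketch. In IEEE 802.1Q, a switch inserts a four-byte tag after the source MAC address: a TPID of 0x8100 followed by a Tag Control Information field holding the priority and the 12-bit VLAN ID. This is only an illustration of the frame layout; real switches do this in hardware and also recompute the frame check sequence.

```python
import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the source MAC address (illustrative only)."""
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 1-4094")
    tci = (priority << 13) | vlan_id          # PCP (3 bits), DEI=0, VID (12 bits)
    tag = struct.pack("!HH", 0x8100, tci)     # TPID 0x8100 marks a tagged frame
    return frame[:12] + tag + frame[12:]      # dst MAC (6) + src MAC (6), then tag

# Untagged frame: dst MAC, src MAC, EtherType 0x0800 (IPv4), dummy payload.
frame = bytes.fromhex("ffffffffffff" "aabbccddeeff" "0800") + b"payload"
tagged = tag_frame(frame, vlan_id=10)
print(len(tagged) - len(frame))   # 4 -- the tag adds four bytes
print(tagged[12:14].hex())        # 8100
```

On a trunk link every frame carries such a tag, which is how the receiving switch knows which VLAN the frame belongs to.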
In a network utilizing broadcasts for service discovery, address assignment and resolution
and other services, as the number of peers on a network grows, the frequency of broadcasts
also increases. VLANs can help manage broadcast traffic by forming multiple broadcast domains.
Breaking up a large network into smaller independent segments reduces the amount of broadcast
traffic each network device and network segment has to bear. Switches may not bridge network
traffic between VLANs, as doing so would violate the integrity of the VLAN broadcast domain.
VLANs can also help create multiple layer 3 networks on a single physical infrastructure.
VLANs are data link layer (OSI layer 2) constructs, analogous to Internet Protocol (IP) subnets,
which are network layer (OSI layer 3) constructs. In an environment employing VLANs, a one-
to-one relationship often exists between VLANs and IP subnets, although it is possible to have
multiple subnets on one VLAN.
Without VLAN capability, users are assigned to networks based on geography and are
limited by physical topologies and distances. VLANs can logically group networks to decouple
the users’ network location from their physical location. By using VLANs, one can control traffic
patterns and react quickly to employee or equipment relocations. VLANs provide the flexibility
to adapt to changes in network requirements and allow for simplified administration.
VLANs can be used to partition a local network into several distinctive segments, for
instance:
Production
Voice over IP
Network management
A common infrastructure shared across VLAN trunks can provide a measure of security
with great flexibility for a comparatively low cost. Quality of service schemes can optimize traffic
on trunk links for real-time (e.g. VoIP) or low-latency requirements (e.g. SAN). However, VLANs
as a security solution should be implemented with great care, as they can be defeated if not
configured correctly.
In cloud computing VLANs, IP addresses, and MAC addresses in the cloud are resources
that end users can manage. To help mitigate security issues, placing cloud-based virtual machines
on VLANs may be preferable to placing them directly on the Internet.
A VLAN can also serve to restrict access to network resources without regard to physical
topology of the network, although the strength of this method remains debatable as VLAN
hopping is a means of bypassing such security measures if not prevented. VLAN hopping can
be mitigated with proper switchport configuration.
VLANs operate at Layer 2 (the data link layer) of the OSI model. Administrators often
configure a VLAN to map directly to an IP network, or subnet, which gives the appearance of
involving Layer 3 (the network layer). In the context of VLANs, the term “trunk” denotes a
network link carrying multiple VLANs, which are identified by labels (or “tags”) inserted into their
packets. Such trunks must run between “tagged ports” of VLAN-aware devices, so they are
often switch-to-switch or switch-to-router links rather than links to hosts. (Note that the term
‘trunk’ is also used for what Cisco calls “channels”: Link Aggregation or Port Trunking). A router
(Layer 3 device) serves as the backbone for network traffic going across different VLANs.
A basic switch not configured for VLANs has VLAN functionality disabled or permanently
enabled with a default VLAN that contains all ports on the device as members. The default
VLAN typically has the ID “1”. Every device connected to one of its ports can send packets to
any of the others. Separating ports by VLAN groups separates their traffic much as if each
group were connected to its own switch.
It is only when the VLAN port group is to extend to another device that tagging is used.
Since communications between ports on two different switches travel via the uplink ports of
each switch involved, every VLAN containing such ports must also contain the uplink port of
each switch involved, and traffic through these ports must be tagged.
Management of the switch requires that the administrative functions be associated with
one or more of the configured VLANs. If the default VLAN were deleted or renumbered without
first moving the management connection to a different VLAN, it is possible for the administrator
to be locked out of the switch configuration, normally requiring physical access to the switch to
regain management by either a forced clearing of the device configuration (possibly to the
factory default), or by connecting through a console port or similar means of direct management.
Switches typically have no built-in method to indicate VLAN port members to someone
working in a wiring closet. It is necessary for a technician to either have administrative access
to the device to view its configuration, or for VLAN port assignment charts or diagrams to be
kept next to the switches in each wiring closet. These charts must be manually updated by the
technical staff whenever port membership changes are made to the VLANs.
Generally, VLANs within the same organization will be assigned different non-overlapping
network address ranges. This is not a requirement of VLANs. There is no issue with separate
VLANs using identical overlapping address ranges (e.g. two VLANs each use the private network
192.168.0.0/16). However, it is not possible to route data between two networks with overlapping
addresses without delicate IP remapping, so if the goal of VLANs is segmentation of a larger
overall organizational network, non-overlapping addresses must be used in each separate VLAN.
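The overlap question above is easy to check programmatically. A brief sketch using Python's standard `ipaddress` module (the prefixes are illustrative):

```python
import ipaddress

# Two VLANs may reuse the same private range if they never need to be routed
# to each other; overlapping prefixes cannot be routed between without remapping.
vlan10 = ipaddress.ip_network("192.168.0.0/16")
vlan20 = ipaddress.ip_network("192.168.0.0/16")
vlan30 = ipaddress.ip_network("10.1.0.0/16")

print(vlan10.overlaps(vlan20))   # True  -- routable only with address remapping
print(vlan10.overlaps(vlan30))   # False -- safe to route between
```

Running such a check when planning VLAN address assignments helps guarantee the non-overlapping ranges that segmentation requires.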
Network technologies with VLAN capabilities include:
Asynchronous Transfer Mode (ATM)
Fiber Distributed Data Interface (FDDI)
Ethernet
HiperSockets
InfiniBand
The purpose of a VLAN is simple: It removes the limitation of physically switched LANs
with all devices automatically connected to each other. With a VLAN, it is possible to have hosts
that are connected together on the same physical LAN but not allowed to communicate directly.
This restriction gives us the ability to organize a network without requiring that the physical LAN
mirror the logical connection requirements of any specific organization.
To make this concept a bit clearer, let’s use the analogy of a telephone system. Imagine
that a company has 500 employees, each with his or her own telephone and dedicated phone
number. If the telephones are connected like a traditional residential phone system, anyone
has the ability to call any direct phone number within the company, regardless of whether that
employee needs to receive direct business phone calls. This arrangement presents a number
of problems, from potential wrong number calls to prank or malicious calls that are intended to
reduce the organization’s productivity.
Now suppose a more efficient and secure option is offered, allowing the business to
install and configure a separate internal phone system. This phone system forces external calls
to go through a separate switchboard or operator, or in a more modern phone network, an
Interactive Voice Response (IVR) system. This new phone system lets internal users connect
directly to each other via extensions (typically using shorter numbers), while it limits what the
internal user’s phones can do and where/who the user can call. This internal phone system
allows the organization to virtually separate the internal phones. This is essentially what a
VLAN does on a network. To take this analogy into the networking world, consider the network
shown in Figure 4.2.
Suppose that hosts A and B are together in one department, and hosts C and D are
together in another department. With physical LANs, they could be connected in only two ways:
either all of the devices are connected together on the same LAN (hoping that the users of the
other department hosts will not attempt to communicate), or each of the department hosts
could be connected together on separate physical switches. Neither of these is a good solution.
The first option opens up many potential security holes, and the second option would become
expensive very quickly.
To solve this sort of problem, the concept of a VLAN was developed. With a VLAN, each
port on a switch can be configured into a specific VLAN, and then the switch will only allow
devices that are configured into the same VLAN to communicate. Using the network in Figure
4.2, if A and B were grouped together and separated from the C and D group, you could place
A and B into VLAN 10 and C and D into VLAN 20. This way, their traffic would be kept isolated
on the switch. In this configuration, the traffic between groups would be prevented at Layer 2
because of the difference in assigned VLANs.
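The forwarding rule just described can be captured in a toy model, using the same hosts A/B in VLAN 10 and C/D in VLAN 20 (the port-to-VLAN mapping is the illustrative one from the example, not any real configuration):

```python
# Toy model of per-port VLAN isolation: the switch floods a frame only
# to other ports assigned the same VLAN as the ingress port.

port_vlan = {"A": 10, "B": 10, "C": 20, "D": 20}

def deliver(src: str, frame: str) -> list[str]:
    """Return the ports that receive a flooded frame sent from src."""
    vlan = port_vlan[src]
    return [p for p, v in port_vlan.items() if v == vlan and p != src]

print(deliver("A", "hello"))   # ['B'] -- C and D never see VLAN 10 traffic
print(deliver("C", "hello"))   # ['D']
```

However large the physical switch, traffic from A can reach only B, exactly the Layer 2 isolation the text describes.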
When users on a VLAN move to a new physical location but continue to perform the
same job function, the end-stations of those users do not need to be reconfigured.
Similarly, if users change their job functions, they need not physically move: changing
the VLAN membership of the end-stations to that of the new team makes the users’
end-stations local to the resources of the new team.
VLANs reduce the need to have routers deployed on a network to contain broadcast
traffic.
Access link: An access link is a link that is part of only one VLAN, and normally
access links are for end devices. Any device attached to an access link is unaware
of a VLAN membership. An access-link connection can understand only standard
Ethernet frames. Switches remove any VLAN information from the frame before it
is sent to an access-link device.
Trunk link: A trunk link can carry traffic for multiple VLANs, and normally a trunk link is
used to connect switches to other switches or to routers. To identify the VLAN that
a frame belongs to, Cisco switches support different identification techniques (VLAN
Frame tagging). Our focus for CCNA Routing and Switching examination is on
IEEE 802.1Q. A trunk link is not assigned to a specific VLAN; traffic from many VLANs
can be transported between switches over a single physical trunk link.
Each port on a Cisco switch can be configured as either an access or a trunk port. The
type of a port specifies how the switch determines the incoming frame’s VLAN. Here is a
description of these two port types:
Access port – a port that can be assigned to a single VLAN. The frames that arrive
on an access port are assumed to be part of the access VLAN. This port type is
configured on switch ports that are connected to devices with a normal network
card, for example a host on a network.
Trunk port – a port that is connected to another switch. This port type can carry
traffic of multiple VLANs, thus allowing you to extend VLANs across your entire
network. Frames are tagged by assigning a VLAN ID to each frame as they traverse
between switches.
The following picture illustrates the difference between access and trunk ports:
As you can see from the picture above, the ports on the switches that connect to hosts
are configured as access ports. The ports between switches are configured as trunk ports.
Access Links are the most common type of links on any VLAN switch. All network hosts
connect to the switch’s Access Links in order to gain access to the local network. These links
are your ordinary ports found on every switch, but configured in a special way, so you are able
to plug a computer into them and access your network.
Figure 4.4: Cisco Catalyst 3550 series switch (Access Links (ports) marked in Green)
We must note that the ‘Access Link’ term describes a configured port - this means that
the ports above can be configured as the second type of VLAN links - Trunk Links. What we are
showing here is what’s usually configured as an Access Link port in 95% of all switches.
Depending on your needs, you might need to configure the first port (top left corner) as a
Trunk Link, in which case it is obviously no longer called an Access Link port, but a Trunk
Link!
When configuring ports on a switch to act as Access Links, we usually configure only one
VLAN per port, that is, the VLAN our device will be allowed to access. If you recall the diagram
below which was also present during the introduction of the VLAN concept, you’ll see that each
PC is assigned to a specific port:
In this case, each of the 6 ports used have been configured for a specific VLAN. Ports 1,
2 and 3 have been assigned to VLAN 1 while ports 4, 5 and 6 to VLAN 2.
In the above diagram, this translates to allowing only VLAN 1 traffic in and out of ports 1,
2 and 3, while ports 4, 5 and 6 will carry VLAN 2 traffic. As you would remember, these two
VLANs do not exchange any traffic between each other, unless we are using a layer 3 switch (or
router) and we have explicitly configured the switch to route traffic between the two VLANs.
It is equally important to note at this point that any device connected to an Access Link
(port) is totally unaware of the VLAN assigned to the port. The device simply assumes it is part
of a single broadcast domain, just as it happens with any normal switch. During data transfers,
any VLAN information or data from other VLANs is removed so the recipient has no information
about them. The following diagram illustrates this to help you get the picture:
As shown, all packets arriving, entering or exiting the port are standard Ethernet II type
packets which are understood by the network device connected to the port. There is nothing
special about these packets, other than the fact that they belong only to the VLAN the port is
configured for. If, for example, we configured the port shown above for VLAN 1, then any
packets entering/exiting this port would be for that VLAN only. In addition, if we decided to use
a logical network such as 192.168.0.0 with a default subnet mask of 255.255.255.0 (/24), then
all network devices connecting to ports assigned to VLAN 1 must be configured with the
appropriate network address so they may communicate with all other hosts in the same VLAN.
A Trunk Link, or ‘Trunk’, is a port configured to carry packets for any VLAN. These types of
ports are usually found in connections between switches. These links require the ability to carry
packets from all available VLANs because VLANs span over multiple switches.
The diagram below shows multiple switches connected throughout a network and the
Trunk Links are marked in purple colour to help you identify them:
As you can see in our diagram, our switches connect to the network backbone via the
Trunk Links. This allows all VLANs created in our network to propagate throughout the whole
network. Now in the unlikely event of Trunk Link failure on one of our switches, the devices
connected to that switch’s ports would be isolated from the rest of the network, allowing only
ports on that switch, belonging to the same VLAN, to communicate with each other.
So now that we have an idea of what Trunk Links are and their purpose, let’s take a look
at an actual switch to identify a possible Trunk Link:
As we noted with the explanation of Access Link ports, the term ‘Trunk Link’ describes a
configured port. In this case, the Gigabit ports are usually configured as Trunk Links, connecting
the switch to the network backbone at the speed of 1 Gigabit, while the Access Link ports
connect at 100Mbits. In addition, we should note that for a port or link to operate as a Trunk
Link, it is imperative that it runs at speeds of 100 Mbit/s or greater. A port running at
10 Mbit/s cannot operate as a Trunk Link, and this is logical because a Trunk Link is always used
to connect to the network backbone, which must operate at speeds greater than most Access
Links.
An access port is typically used for a switch-to-host connection, and the port is assigned to only
one VLAN. This can be done with the following commands:
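The commands referred to here appear to have been lost in extraction. A typical Cisco IOS configuration would look like the following (the interface name FastEthernet0/1 and VLAN 10 are placeholder values):

```
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
```

The first command selects the port, the second forces it to act as an access port, and the third assigns it to the VLAN.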
A trunk is typically a link between two switches or a switch and a router. This allows
multiple VLANs to traverse the interface/link. This can be configured in a few different ways but
will achieve the same result.
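As with the access-port commands, the trunk configuration appears to have been lost in extraction. A common Cisco IOS form would be the following (interface name is a placeholder; the `encapsulation` command is only needed on switches that also support ISL):

```
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
```

This forces the port into trunking with 802.1Q tagging, allowing all VLANs to traverse the link by default.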
Dynamic auto: The ‘dynamic auto’ will configure the port to accept incoming
negotiation and will accept becoming either a trunk or an access port.
Dynamic desirable: The ‘dynamic desirable’ will configure the port to try and become
a trunk link by sending a request to the other end of the wire requesting to become
a trunk port.
Trunk: The ‘trunk’ command will force the port to become a trunk.
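How the three modes above negotiate with each other can be summarised as a lookup over unordered mode pairs. The sketch below mirrors the usual Cisco DTP behavior table; it is illustrative, not an exhaustive treatment of every mode combination.

```python
# DTP negotiation outcomes for the modes described above: the pair of port
# modes on the two ends of a link determines whether the link trunks.
DTP = {
    frozenset({"auto"}):               "access",  # auto <-> auto: neither initiates
    frozenset({"auto", "desirable"}):  "trunk",   # desirable asks, auto agrees
    frozenset({"desirable"}):          "trunk",
    frozenset({"trunk", "auto"}):      "trunk",
    frozenset({"trunk", "desirable"}): "trunk",
    frozenset({"trunk"}):              "trunk",
}

def negotiate(side_a: str, side_b: str) -> str:
    """Resulting link type for a pair of switchport modes."""
    return DTP[frozenset({side_a, side_b})]

print(negotiate("auto", "auto"))        # access
print(negotiate("auto", "desirable"))   # trunk
print(negotiate("trunk", "auto"))       # trunk
```

The key point the table captures is that two 'dynamic auto' ports never form a trunk, because neither side initiates the negotiation.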
VLAN Trunking Protocol (VTP) is a Cisco proprietary protocol that propagates the definition
of Virtual Local Area Networks (VLAN) on the whole local area network. To do this, VTP carries
VLAN information to all the switches in a VTP domain. VTP advertisements can be sent
over 802.1Q and ISL trunks. VTP is available on most Cisco Catalyst family products.
There are three versions of VTP: version 1, version 2, and version 3. The purpose of VTP is
to provide a way to manage Cisco switches as a single group for VLAN configuration purposes.
VTP is a protocol used to distribute and synchronize identifying information about VLANs
configured throughout a switched network. Configuration changes made to the VLANs on a
single VTP server switch are propagated across Trunk links to all trunk-connected switches in
the network. Using VTP, each Catalyst Family Switch advertises the following on its trunk ports:
Management domain
Configuration revision number
Known VLANs and their specific parameters
VTP provides a means of synchronizing VLAN information within a VTP domain and reduces the need to configure the
same VLAN information on each switch, thereby minimizing the possibility of configuration
inconsistencies that arise when changes are made.
By default, all switches are configured to be VTP servers. This configuration is suitable
for small-scale networks in which the size of the VLAN information is small and the information
is easily stored in all switches (in NVRAM). In a large network, however, a point is reached
at which the NVRAM storage required becomes wasteful,
because it is duplicated on every switch. At this point, the network administrator should choose a
few well-equipped switches and keep them as VTP servers. Everything else that participates in
VTP can be turned into a client. The number of VTP servers should be chosen in order to
provide the degree of redundancy that is desired in the network.
Notes:
If a switch is configured as a VTP server without a VTP domain name, you cannot
configure a VLAN on the switch.
Note: It is applicable only for CatOS. You can configure VLAN(s) without having the
VTP domain name on the switch which runs on IOS.
If a new Catalyst is attached in the border of two VTP domains, the new Catalyst
keeps the domain name of the first switch that sends it a summary advertisement.
The only way to attach this switch to another VTP domain is to manually set a
different VTP domain name.
Dynamic Trunking Protocol (DTP) sends the VTP domain name in a DTP packet.
Therefore, if you have two ends of a link that belong to different VTP domains, the
trunk does not come up if you use DTP. In this special case, you must configure the
trunk mode as on or nonegotiate, on both sides, in order to allow the trunk to come
up without DTP negotiation agreement.
If the domain has a single VTP server and it crashes, the best and easiest way to
restore the operation is to change any of the VTP clients in that domain to a VTP
server. The configuration revision is still the same in the rest of the clients, even if
the server crashes. Therefore, VTP works properly in the domain.
You can configure a switch to operate in any one of these VTP modes:
Server: In VTP server mode, you can create, modify, and delete VLANs and specify
other configuration parameters, such as VTP version and VTP pruning, for the
entire VTP domain. VTP servers advertise their VLAN configuration to other switches
in the same VTP domain and synchronize their VLAN configuration with other
switches based on advertisements received over trunk links. VTP server is the
default mode.
Client: VTP clients behave the same way as VTP servers, but you cannot create,
change, or delete VLANs on a VTP client.
Transparent: VTP transparent switches do not participate in VTP. A transparent
switch does not advertise its VLAN configuration and does not synchronize its
VLAN database with received advertisements, but it does forward VTP
advertisements received on its trunk ports. VLANs created on a transparent
switch are locally significant only.
Off (configurable only in CatOS switches): In the three described modes, VTP
advertisements are received and transmitted as soon as the switch enters the
management domain state. In the VTP off mode, switches behave the same as in
VTP transparent mode with the exception that VTP advertisements are not forwarded.
4.10.3 VTP V2
VTP V2 is not much different than VTP V1. The major difference is that VTP V2 introduces
support for Token Ring VLANs. If you use Token Ring VLANs, you must enable VTP V2.
Otherwise, there is no reason to use VTP V2. Changing the VTP version from 1 to 2 will not
cause a switch to reload.
A disadvantage of VTP is that a new switch added to the network is, by default,
configured with no VTP domain name or password, but in VTP server mode. If no VTP Domain
Name has been configured, it assumes the one from the first VTP packet it receives. Since a
new switch has a VTP configuration revision of 0, it will accept any revision number as newer
and overwrite its VLAN information if the VTP passwords match. However, if you were to
accidentally connect a switch to the network with the correct VTP domain name and password
but a higher VTP revision number than what the network currently has (such as a switch that
had been removed from the network for maintenance and returned with its VLAN information
deleted) then the entire VTP Domain would adopt the VLAN configuration of the new switch
which is likely to cause loss of VLAN information on all switches in the VTP Domain, leading to
failures on the network. Since Cisco switches maintain VTP configuration information separately
from the normal configuration, and since this particular issue occurs so frequently, it has become
known colloquially as the “VTP Bomb”.
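The mechanism behind the "VTP bomb" is simply revision-number comparison. A toy illustration (the revision numbers and VLAN IDs are made up for the example):

```python
# A VTP participant adopts any advertisement carrying a higher configuration
# revision number, overwriting its VLAN database wholesale.

def apply_advert(local_rev: int, local_vlans: set,
                 advert_rev: int, advert_vlans: set):
    """Return the (revision, vlans) a switch ends up with after an advert."""
    if advert_rev > local_rev:
        return advert_rev, advert_vlans   # newer revision wins, VLANs overwritten
    return local_rev, local_vlans

# Production domain at revision 40; a returning lab switch advertises
# revision 50 with its VLANs deleted, wiping the domain's VLAN database:
rev, vlans = apply_advert(40, {10, 20, 30}, 50, set())
print(rev)     # 50
print(vlans)   # set()
```

This is why resetting a switch's revision number (for example by changing its VTP domain name) before reconnecting it is standard practice.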
Before creating VLANs on the switch that will propagate via VTP, a VTP domain must first
be set up. A VTP domain for a network is a set of all contiguously trunked switches with the
matching VTP settings (domain name, password and VTP version). All switches in the same
VTP domain share their VLAN information with each other, and a switch can participate in only
one VTP management domain. Switches in different domains do not share VTP information.
Non-matching VTP settings might result in issues in negotiating VLAN trunks, port-channels or
Virtual Port Channels.
Two separate VLANs must communicate through a layer-3 device, like a router. Devices
on a VLAN communicate with each other using layer-2. Layer-3 must be used to communicate
between separate layer-2 domains. Assuming the most common communications (layer-2 is
Ethernet and layer-3 is IP), when a host on a VLAN wants to communicate with another host on
the same VLAN, it discovers the other host’s layer-2 (e.g. MAC) address with something like
ARP, and it sends the frame to that MAC address.
When a host on one VLAN wants to send something to a host on another VLAN, it must
use a layer-3 (e.g. IP) address. The host will use layer-2 to send the frames to its defined
gateway (router). The router will strip off the layer-2 frame and inspect the layer-3 packet for the
destination layer-3 address. The router will then look up the next hop for the layer-3 address. It
will then create a new layer-2 frame for the layer-3 packet based on the layer-2 LAN on the
interface where it needs to send the packet for the next hop. Other routers which may be in the
path to the end LAN will repeat this process until the frame is placed on the final VLAN, where
the receiving host gets the frame.
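The sender's first decision in this process, whether to deliver directly at layer 2 or hand the frame to the default gateway, can be sketched with the standard `ipaddress` module (all addresses are illustrative):

```python
import ipaddress

def next_hop(src_ip: str, dst_ip: str, prefix: str, gateway: str) -> str:
    """Where the sender frames the packet to: the destination itself if it is
    on the same subnet/VLAN, otherwise the default gateway (router)."""
    net = ipaddress.ip_network(prefix)
    if ipaddress.ip_address(dst_ip) in net:
        return dst_ip          # same VLAN/subnet: deliver directly at layer 2
    return gateway             # other VLAN: hand the frame to the router

print(next_hop("192.168.10.5", "192.168.10.9", "192.168.10.0/24", "192.168.10.1"))
# 192.168.10.9
print(next_hop("192.168.10.5", "192.168.20.7", "192.168.10.0/24", "192.168.10.1"))
# 192.168.10.1
```

The host then ARPs for whichever address `next_hop` returns, which is why inter-VLAN traffic always passes through a layer-3 device.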
You should look up the OSI model and learn how it works. Just remember that it is a
model, and some things in the real world do not work exactly as the model
predicts, but it will give you a general understanding of how data travels from an application on one
host to an application on another host.
A collision domain is, as the name implies, the part of a network where packet collisions
can occur. A collision occurs when two devices send a packet at the same time on the shared
network segment. The packets collide and both devices must send the packets again, which
reduces network efficiency. Collisions often occur in a hub environment, because each port on a
hub is in the same collision domain. By contrast, each port on a bridge, a switch or a router is in
a separate collision domain. The following example illustrates 6 collision domains:
Note: Remember, each port on a hub is in the same collision domain. Each port on a
bridge, a switch or router is in a separate collision domain.
In a “Shared Media” there are no separate channels for sending and receiving data
signals; there is only one channel to both send and receive. We call the media shared
media when the devices are connected together using a bus topology or an Ethernet
hub. Both are half-duplex, meaning that a device can send or receive data signals, but
cannot do both at the same time.
Collisions happen in an Ethernet network when two devices simultaneously try to
send data on the shared media, since shared media is half-duplex and sending and receiving
at the same time is not supported. Refer to CSMA/CD to learn how Ethernet handles collisions.
Collisions are a normal part of life in an Ethernet network when Ethernet operates in half-
duplex mode and, under most circumstances, should not be considered a problem.
A Collision Domain is any network segment in which collisions can happen (usually in
Ethernet networks). In other words, a Collision Domain consists of all the devices connected
using a Shared Media (bus topology or Ethernet hubs) where a Collision can happen
between any device at any time.
As the number of devices in a collision domain increases, so does the chance of
collisions; the same is true as traffic within the domain grows.
Increased collisions result in a low-quality network in which hosts spend more and
more time on packet retransmission and processing. Switches are usually used to
segment (divide) a big collision domain into many small collision domains. Each port of an Ethernet
switch operates in a separate collision domain.
In other words, Collision cannot happen between two devices which are connected to
different ports of a Switch.
Broadcast is a type of communication, where the sending device sends a single copy of
data and that copy of data will be delivered to every device in the network segment. Broadcast
is a required type of communication and we cannot avoid Broadcasts, because many protocols
(Example: ARP and DHCP) and applications are dependent on Broadcast to function.
A Broadcast Domain consists of all the devices that will receive any broadcast packet
originating from any device within the network segment. As the number of devices in the
broadcast domain increases, the number of broadcasts also increases, and the quality of the network
degrades because every host must receive and process each broadcast.
By design, Routers will not allow broadcasts from one of its connected network segment
to cross the router and reach another network segment. The primary function of a Router is to
segment (divide) a big broadcast domain in to multiple smaller broadcast domains.
Summary
A virtual LAN (VLAN) is any broadcast domain that is partitioned and isolated in a
computer network at the data link layer (OSI layer 2).
VLANs allow network administrators to group hosts together even if the hosts are
not directly connected to the same network switch. Because VLAN membership
can be configured through software, this can greatly simplify network design and
deployment.
VLANs can be used to partition a local network into several distinctive segments.
Review Questions
Explain the concept of VLANs with their advantages and disadvantages.
References
http://www.pearsonitcertification.com
http://www.firewall.cx/networking-topics/vlan-networks/218-vlan-access-trunk-links.html
https://www.cisco.com/c/en/us/support/docs/lan-switching/vtp/10558-21.html
UNIT – 5
COMMUNICATION PROTOCOLS
Learning Objectives
The main objectives of learning in this unit are to:
Learn the standard email protocols and be ready to configure a mail server.
Structure
5.1 Introduction to Protocols
TELNET uses TCP as its transport protocol to establish a connection between server
and client. The TELNET server and client then enter a phase of option negotiation that determines
the options each side can support for the connection. Each connected system can negotiate
new options or renegotiate old options at any time. In general, each end of the TELNET
connection attempts to implement all options that maximize performance for the systems involved.
When a TELNET connection is first established, each end is assumed to originate and
terminate at a “Network Virtual Terminal”, or NVT. An NVT is an imaginary device which provides
a standard, network-wide, intermediate representation of a canonical terminal. This eliminates
the need for “server” and “user” hosts to keep information about the characteristics of each
other’s terminals and terminal handling conventions.
The principle of negotiated options takes cognizance of the fact that many hosts will wish
to provide additional services over and above those available within an NVT and many users
will have sophisticated terminals and would like to have elegant, rather than minimal, services.
FTP provides for the uploading and downloading of files from a remote host running FTP
server software. FTP allows you to view the contents of folders on an FTP server and rename
and delete files and directories if you have the necessary permissions. FTP, which is defined in
RFC 959, uses TCP as a transport protocol to guarantee delivery of packets.
FTP has security mechanisms used to authenticate users. However, rather than create a
user account for every user, you can configure FTP server software to accept anonymous
logons. When you do this, the username is anonymous, and the password normally is the
user’s email address. Most FTP servers that offer files to the general public operate in this way.
All the common network operating systems offer FTP server capabilities and all popular
workstation operating systems offer FTP client functionality. FTP assumes that files being
uploaded or downloaded are straight text (that is, ASCII) files. If the files are not text, which is
likely, the transfer mode has to be changed to binary. With sophisticated FTP clients, such as
CuteFTP, the transition between transfer modes is automatic. With more basic utilities, you
have to perform the mode switch manually.
In addition to being a popular mechanism for exchanging files with the general public over
networks such as the Internet, FTP is also popular with organizations that need to frequently
exchange large files with other people or organizations. The objectives of FTP, as stated in
RFC 959, are: to promote sharing of files; to encourage indirect or implicit use of remote
computers; to shield a user from variations in file storage systems among hosts; and to transfer
data reliably and efficiently.
Trivial File Transfer Protocol (TFTP) is a simple protocol to transfer files. It has been
implemented on top of the Internet User Datagram protocol (UDP). TFTP is designed to be
small and easy to implement and, therefore, lacks most of the features of a regular FTP. TFTP
only reads and writes files (or mail) from/to a remote server. It cannot list directories, and
currently has no provisions for user authentication.
Three modes of transfer are currently supported by TFTP: netascii (8-bit ASCII); octet
(raw 8-bit bytes, replacing the "binary" mode of earlier versions of the specification); and mail
(netascii characters sent to a user rather than to a file). Additional modes can be defined by
pairs of cooperating hosts.
In TFTP, any transfer begins with a request to read or write a file, which also serves to
request a connection. If the server grants the request, the connection is opened and the file is
sent in fixed length blocks of 512 bytes. Each data packet contains one block of data and must
be acknowledged by an acknowledgment packet before the next packet can be sent. A data
packet of less than 512 bytes signals termination of a transfer. If a packet gets lost in the
network, the intended recipient will time out and may retransmit its last packet (which may be
data or an acknowledgment), thus causing the sender of the lost packet to retransmit that lost
packet. The sender has to keep just one packet on hand for retransmission, since the lock step
acknowledgment guarantees that all older packets have been received. Notice that both machines
involved in a transfer are considered senders and receivers. One sends data and receives
acknowledgments, the other sends acknowledgments and receives data.
The current version of TFTP is version 2. Every TFTP packet begins with a 2-byte
opcode identifying the packet type: read request (RRQ), write request (WRQ), data,
acknowledgment, or error.
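As a concrete illustration, the opcode-plus-strings layout of a TFTP read request can be sketched with Python's `struct` module. This is a minimal sketch of the RFC 1350 wire format; the helper names are ours, not part of any library:

```python
import struct

def build_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read request (RRQ): a 2-byte big-endian opcode,
    then the filename and transfer mode as NUL-terminated strings."""
    OP_RRQ = 1
    return (struct.pack("!H", OP_RRQ)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

def parse_opcode(packet: bytes) -> int:
    """Extract the 2-byte opcode that begins every TFTP packet."""
    (opcode,) = struct.unpack("!H", packet[:2])
    return opcode

pkt = build_rrq("notes.txt")
print(pkt)                 # b'\x00\x01notes.txt\x00octet\x00'
print(parse_opcode(pkt))   # 1 (RRQ)
```

A real transfer would send this datagram to UDP port 69 on the server and then exchange 512-byte data blocks and acknowledgments as described above.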
SMTP, which is defined in RFC 821, is a protocol that defines how mail messages are
sent between hosts. SMTP uses TCP connections to guarantee error-free delivery of messages.
SMTP is not overly sophisticated, and it requires that the destination host always be available.
Therefore, mail systems commonly spool incoming mail so that users can read it later. How the
user then reads the mail depends on how the client accesses the SMTP server. SMTP can be
used to both send and receive mail. Post Office Protocol (POP) and Internet Message Access
Protocol (IMAP) can be used only to receive mail.
DNS solves the problem of name resolution by offering resolution through servers
configured to act as name servers. The name servers run DNS server software, which allows
them to receive, process, and reply to requests from systems that want to resolve hostnames to
IP addresses. Systems that ask DNS servers for a hostname-to-IP address mapping are called
resolvers or DNS clients. One of the problems with DNS is that, despite all its automatic resolution
capabilities, entries and changes to those entries must still be performed manually. A strategy
to solve this problem is to use Dynamic DNS (DDNS), a newer system that allows hosts to be
dynamically registered with the DNS server.
DNS operates in the DNS namespace. This space has logical divisions organized
hierarchically. At the top level are domains such as .com (commercial) and .edu (education), as
well as domains for countries, such as .uk (United Kingdom) and .de (Germany). Below the top
level are sub-domains or second-level domains associated with organizations or commercial
companies, such as Microsoft, IBM. Within these domains, hosts or other sub-domains can be
assigned. The domain name, along with any sub-domains, is called the fully qualified domain
name (FQDN) because it includes all the components from the top of the DNS namespace to
the host. For this reason, many people refer to DNS as resolving FQDNs to IP addresses.
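On the wire, a DNS resolver encodes an FQDN as a sequence of labels, each prefixed by its one-byte length and terminated by a zero byte. The following sketch shows that encoding; the function name is illustrative, not a standard library API:

```python
def encode_qname(fqdn: str) -> bytes:
    """Encode an FQDN into DNS wire format: each label is prefixed
    with its one-byte length, and a zero byte terminates the name."""
    out = b""
    for label in fqdn.rstrip(".").split("."):
        if not 1 <= len(label) <= 63:
            raise ValueError("DNS labels must be 1-63 bytes long")
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

print(encode_qname("www.example.com"))
# b'\x03www\x07example\x03com\x00'
```

Reading the output back, the hierarchy of the namespace is visible: the host (`www`), the second-level domain (`example`), and the top-level domain (`com`).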
Although the most common entry in a DNS database is an A (address) record, which
maps a hostname to an IP address, DNS can hold numerous other types of entries as well.
Some of particular note are the MX record, which is used to map entries that correspond to mail
exchanger systems, and CNAME (canonical record name), which can be used to create alias
records for a system. A system can have an A record and then multiple CNAME entries for its
aliases.
The importance of DNS, particularly in environments where the Internet is heavily used,
cannot be overstated. If DNS facilities are not accessible, the Internet effectively becomes
unusable, unless you can remember the IP addresses of all your favorite sites.
Each DNS name server maintains information about its zone, or domain, in a series of
records known as DNS resource records. There are several types of DNS resource record,
each containing information about the DNS domain and the systems within it. These records
are text entries stored on the DNS server. Some of the DNS resource records include:
Name Server (NS) record stores information that identifies the name servers in the
domain that store information for that domain.
Mail Exchange (MX) record stores information about where mail for the domain
should be delivered.
Each autonomous system will have its own routing technology, which may well be different for different
autonomous systems. The routing protocol used within an autonomous system is referred to as
an interior gateway protocol, or “IGP”. A separate protocol is used to interface among the
autonomous systems. The earliest such protocol, still used in the Internet, is “EGP” (exterior
gateway protocol). Such protocols are now usually referred to as inter-AS routing protocols.
RIP is designed to work with moderate-size networks using reasonably homogeneous technology.
Thus it is suitable as an IGP for many campuses and for regional networks using serial lines
whose speeds do not vary widely. It is not intended for use in more complex environments.
RIP2, derived from RIP, is an extension of the Routing Information Protocol (RIP) intended
to expand the amount of useful information carried in RIP2 messages and to add a measure
of security. RIP2 is a UDP-based protocol: each host that uses RIP2 has a routing process that
sends and receives datagrams on UDP port 520. RIP and RIP2 are for IPv4 networks, while
RIPng is designed for IPv6 networks.
SNMP enables network devices to communicate information about their state to a central
system. It also allows the central system to pass configuration parameters to those devices.
SNMP only provides the management protocol; it is not a network management system (NMS)
by itself.
The Post Office Protocol is designed to allow a workstation to dynamically access a mail
drop on a server host. POP3 is the version 3 (the latest version) of the Post Office Protocol.
POP3 allows a workstation to retrieve mail that the server is holding for it. POP3 transmissions
appear as data messages between stations. The messages are either command or reply
messages.
There are several different technologies and approaches to building a distributed electronic
mail infrastructure: POP (Post Office Protocol), DMSP (Distributed Mail System Protocol) and
IMAP (Internet Message Access Protocol) among them. Of the three, POP is the oldest and
consequently the best known. DMSP is largely limited to a single application, PCMAIL, and is
known primarily for its excellent support of “disconnected” operation. IMAP offers a superset of
POP and DMSP capabilities, and provides good support for all three modes of remote mailbox
access: offline, online, and disconnected.
POP was designed to support "offline" mail processing, in which mail is delivered to a
server, and a personal computer user periodically invokes a mail "client" program that connects
to the server and downloads all of the pending mail to the user's own machine. The offline
access mode is a kind of store-and-forward service, intended to move mail (on demand) from
the mail server (drop point) to a single destination machine, usually a PC or Mac. Once delivered
to the PC or Mac, the messages are then deleted from the mail server.
POP3 is not designed to provide extensive manipulation operations on mail held on the
server; such operations are left to a more advanced (and complex) protocol, IMAP4. POP3
uses TCP as the transport protocol.
In computing, the Post Office Protocol (POP) is an application layer Internet standard
protocol used by e-mail clients to retrieve e-mail from a server in an Internet Protocol (IP)
network. POP version 3 (POP3) is the most recent level of development in common use. POP
has largely been superseded by the Internet Message Access Protocol (IMAP).
Available messages to the client are fixed when a POP session opens the maildrop, and
are identified by message-number local to that session or, optionally, by a unique identifier
assigned to the message by the POP server. This unique identifier is permanent and unique to
the maildrop and allows a client to access the same message in different POP sessions. Mail is
retrieved and marked for deletion by message-number. When the client exits the session, the
mail marked for deletion is removed from the maildrop.
IMAP (Internet Message Access Protocol) is a standard email protocol that stores email
messages on a mail server, but allows the end user to view and manipulate the messages as
though they were stored locally on the end user’s computing device(s). This allows users to
organize messages into folders, have multiple client applications know which messages have
been read, flag messages for urgency or follow-up and save draft messages on the server.
IMAP can be contrasted with another client/server email protocol, Post Office Protocol 3
(POP3). With POP3, mail is saved for the end user in a single mailbox on the server and moved
to the end user’s device when the mail client opens. While POP3 can be thought of as a
“store-and-forward” service, IMAP can be thought of as a remote file server.
Most implementations of IMAP support multiple logins; this allows the end user to
simultaneously connect to the email server with different devices. For example, the end user
could connect to the mail server with the Outlook iPhone app and the Outlook desktop client at
the same time. The details for how to handle multiple connections are not specified by the
protocol but are instead left to the developers of the mail client.
Even though IMAP has an authentication mechanism, the authentication process can
easily be circumvented by anyone who knows how to steal a password by using a protocol
analyzer because the client’s username and password are transmitted as clear text. In an
Exchange Server environment, administrators can work around this security flaw by using Secure
Sockets Layer (SSL) encryption for IMAP.
In computing, the Internet Message Access Protocol (IMAP) is an Internet standard protocol
used by email clients to retrieve email messages from a mail server over a TCP/IP connection.
IMAP was designed with the goal of permitting complete management of an email box by
multiple email clients, therefore clients generally leave messages on the server until the user
explicitly deletes them. An IMAP server typically listens on port number 143. IMAP over SSL
(IMAPS) is assigned the port number 993.
Virtually all modern e-mail clients and servers support IMAP. IMAP and the earlier POP3
(Post Office Protocol) are the two most prevalent standard protocols for email retrieval, with
many webmail service providers such as Gmail, Outlook.com and Yahoo! Mail also providing
support for either IMAP or POP3.
POP mail moves the message from the email server onto your local computer,
although there is usually an option to leave the messages on the email server as
well. IMAP defaults to leaving the message on the email server, simply downloading
a local copy.
POP treats the mailbox as one store and has no concept of folders.
An IMAP client performs complex queries, asking the server for headers, or the
bodies of specified messages, or to search for messages meeting certain criteria.
Messages in the mail repository can be marked with various status flags (e.g.
“deleted” or “answered”) and they stay in the repository until explicitly removed by
the user—which may not be until a later session. In short: IMAP is designed to
permit manipulation of remote mailboxes as if they were local. Depending on the
IMAP client implementation and the mail architecture desired by the system manager,
the user may save messages directly on the client machine, or save them on the
server, or be given the choice of doing either.
The POP protocol requires the currently connected client to be the only client
connected to the mailbox. In contrast, the IMAP protocol specifically allows
simultaneous access by multiple clients and provides mechanisms for clients to
detect changes made to the mailbox by other, concurrently connected, clients.
When POP retrieves a message, it receives all parts of it, whereas the IMAP4
protocol allows clients to retrieve any of the individual MIME parts separately - for
example retrieving the plain text without retrieving attached files.
IMAP supports flags on the server to keep track of message state: for example,
whether or not the message has been read, replied to, or deleted.
There is a Reverse ARP (RARP) for host machines that don’t know their IP address.
RARP enables them to request their IP address from the gateway’s ARP cache. Reverse Address
Resolution Protocol (RARP) allows a physical machine in a local area network to request its IP
address from a gateway server’s Address Resolution Protocol (ARP) table or cache. A network
administrator creates a table in a local area network’s gateway router that maps the physical
machines’ (or Media Access Control - MAC) addresses to corresponding Internet Protocol
addresses. When a new machine is set up, its RARP client program requests its IP address
from the RARP server on the router. Assuming that an entry has been set up in the router table,
the RARP server will return the IP address to the machine, which can store it for future use.
RARP is available for Ethernet, Fiber Distributed Data Interface (FDDI), and Token Ring LANs.
The structure of the RARP header is the same as that of the ARP header.
Transmission Control Protocol (TCP) is the transport layer protocol in the TCP/IP suite,
which provides a reliable stream delivery and virtual connection service to applications through
the use of sequenced acknowledgment with retransmission of packets when necessary. Along
with the Internet Protocol (IP), TCP represents the heart of the Internet protocols.
Since many network applications may be running on the same machine, computers need
to make sure that the correct software application on the destination computer gets the data
packets from the source machine, and to make sure replies get routed to the correct application
on the source computer. This is accomplished through the use of the TCP “port numbers”. The
combination of IP address of a network station and its port number is known as a “socket” or an
“endpoint”. TCP establishes connections or virtual circuits between two “endpoints” for reliable
communications.
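The endpoint idea is easy to see with Python's standard `socket` module. In this sketch a server binds to an ephemeral port on the loopback interface, and the (IP address, port) pair returned by `getsockname()` is exactly the "socket" or "endpoint" described above; the echo logic is illustrative only:

```python
import socket
import threading

def echo_once(server: socket.socket) -> None:
    """Accept a single connection and echo back whatever arrives."""
    conn, _addr = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Bind to an ephemeral port on loopback; the OS-assigned (IP, port)
# pair is the server's endpoint.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()

worker = threading.Thread(target=echo_once, args=(server,))
worker.start()

# The client gets its own ephemeral endpoint; the TCP connection is
# identified by the pair of endpoints (client IP:port, server IP:port).
client = socket.create_connection((host, port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
worker.join()
server.close()
print(reply)    # b'hello'
```

Because the connection is identified by both endpoints, many clients can talk to the same server port simultaneously without their data streams mixing.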
Among the services TCP provides are stream data transfer, reliability, efficient flow control,
full-duplex operation, and multiplexing. With stream data transfer, TCP delivers an unstructured
stream of bytes identified by sequence numbers. This service benefits applications because
the application does not have to break data into blocks before handing it off to TCP. TCP can
group bytes into segments and pass them to IP for delivery.
TCP offers reliability by providing connection-oriented, end-to-end reliable packet delivery.
It does this by sequencing bytes with a forwarding acknowledgment number that indicates to
the destination the next byte the source expects to receive. Bytes not acknowledged within a
specified time period are retransmitted. The reliability mechanism of TCP allows devices to
deal with lost, delayed, duplicate, or misread packets. A time-out mechanism allows devices to
detect lost packets and request retransmission. TCP offers efficient flow control: when sending
acknowledgments back to the source, the receiving TCP process indicates the highest sequence
number it can receive without overflowing its internal buffers.
TCP processes can both send and receive packets at the same time (Full-duplex operation).
Numerous simultaneous upper-layer conversations can be multiplexed over a single connection
(Multiplexing in TCP).
Source port and Destination port (16 bits each) - Identify the points at which the
upper-layer source and destination processes receive TCP services.
Sequence number - Usually specifies the number assigned to the first byte of
data in the current message. In the connection-establishment phase, this field also
can be used to identify an initial sequence number to be used in an upcoming
transmission.
Acknowledgment number – Contains the sequence number of the next byte of data
the sender of the packet expects to receive. Once a connection is established, this
value is always sent.
Data offset - 4 bits. The number of 32-bit words in the TCP header; indicates where
the data begins.
Control bits (flags) - 6 bits. Carry a variety of control information: URG, ACK,
PSH, RST, SYN, and FIN.
Window — 16 bits. Specifies the size of the sender’s receive window, that is, the
buffer space available in octets for incoming data.
Urgent Pointer — 16 bits. Points to the first urgent data byte in the packet.
Options + Padding - Specifies various TCP options. There are two possible formats
for an option: a single octet of option kind; or an octet of option kind, an octet of
option length, and the actual option data octets.
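The fixed 20-byte portion of the header described above can be unpacked with Python's `struct` module. This is a minimal sketch of the RFC 793 layout (field and function names are ours), shown here parsing a hand-built SYN segment:

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Parse the fixed 20-byte portion of a TCP header."""
    (src, dst, seq, ack, off_flags, window, checksum, urgent) = struct.unpack(
        "!HHIIHHHH", segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset": off_flags >> 12,   # header length in 32-bit words
        "flags": off_flags & 0x3F,        # URG/ACK/PSH/RST/SYN/FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }

# A hand-built SYN segment: port 54321 -> 80, seq 1000, data offset 5
# (a 20-byte header with no options), SYN flag (0x02) set.
syn = struct.pack("!HHIIHHHH", 54321, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
hdr = parse_tcp_header(syn)
print(hdr["dst_port"], hdr["data_offset"], hdr["flags"])   # 80 5 2
```

Note how the 4-bit data offset and the 6 flag bits share one 16-bit word, which is why bit shifting and masking are needed to separate them.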
UDP is a connectionless transport layer (layer 4) protocol in the OSI model which provides
a simple and unreliable message service for transaction-oriented services. UDP is basically an
interface between IP and upper-layer processes. UDP protocol ports distinguish multiple
applications running on a single device from one another.
Since many network applications may be running on the same machine, computers need
something to make sure the correct software application on the destination computer gets the
data packets from the source machine and some way to make sure replies get routed to the
correct application on the source computer. This is accomplished through the use of the UDP
“port numbers”. For example, if a network station wished to use the Domain Name System
(DNS) server on station 128.1.123.1, it would address the packet to 128.1.123.1 and insert
destination port number 53 in the UDP header. The source port number identifies the application
on the local station that requested the domain name service, and all response packets generated
by the destination station should be addressed to that port number on the source station.
Well-known UDP port numbers are maintained in the IANA port number registry.
Unlike TCP, UDP adds no reliability, flow-control, or error-recovery functions to IP. Because
of UDP's simplicity, UDP headers contain fewer bytes and consume less network overhead
than TCP. UDP is useful in situations where the reliability mechanisms of TCP are not necessary,
such as in cases where a higher-layer protocol or application might provide error and flow
control.
UDP is the transport protocol for several well-known application-layer protocols, including
Network File System (NFS), Simple Network Management Protocol (SNMP), Domain Name
System (DNS), and Trivial File Transfer Protocol (TFTP).
Destination port – 16 bits. Destination port has a meaning within the context of a
particular Internet destination address.
Length – 16 bits. The length in octets of this user datagram, including this header
and the data. The minimum value of the length is eight.
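The entire 8-byte UDP header can be built in a few lines, which also makes the length rule above concrete: the length field counts the header plus the payload. A minimal sketch (the checksum is left as zero, which IPv4 permits; the function name is illustrative):

```python
import struct

def build_udp_datagram(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Prepend an 8-byte UDP header: source port, destination port,
    length (header + data, in octets), and a zero checksum."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, 0) + payload

# A DNS query would use destination port 53.
datagram = build_udp_datagram(40000, 53, b"query")
(src, dst, length, checksum) = struct.unpack("!HHHH", datagram[:8])
print(src, dst, length)    # 40000 53 13
```

With no payload at all, the length field would be exactly eight, matching the minimum stated above.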
IGMP is the protocol within the TCP/IP protocol suite that manages multicast groups. It
allows one computer on the Internet to target content to a specific group of computers that will
receive content from the sending system. This is in contrast to unicast messaging, in which
data is sent to a single computer or network device, and to broadcast messaging, in which a
message goes to all systems.
Multicasting is a mechanism by which groups of network devices can send and receive
data between the members of the group at one time, instead of sending messages to each
device in the group separately. The multicast grouping is established by each device being
configured with the same multicast IP address. Multicast IP addresses come from the IPv4
Class D range, 224.0.0.0 through 239.255.255.255.
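Python's standard `ipaddress` module already knows this range, so membership in the Class D space can be checked directly (the helper name is ours; `is_multicast` is the stdlib property):

```python
import ipaddress

def in_class_d(addr: str) -> bool:
    """True if addr falls in the IPv4 Class D (multicast) range,
    224.0.0.0 through 239.255.255.255."""
    return ipaddress.IPv4Address(addr).is_multicast

print(in_class_d("224.0.0.5"))        # True  (AllSPFRouters, used by OSPF)
print(in_class_d("239.255.255.255"))  # True  (top of the Class D range)
print(in_class_d("192.168.1.1"))      # False (an ordinary unicast address)
```

The 224.0.0.5 example shows one of the router-group applications mentioned below: OSPF routers join that multicast group rather than addressing each neighbor separately.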
IGMP is used to register devices into a multicast group, as well as to discover what other
devices on the network are members of the same multicast group. Common applications for
multicasting include groups of routers on an internetwork and videoconferencing clients.
IGMPv1: Hosts can join multicast groups. There are no leave messages. Routers
use a time-out based mechanism to discover the groups that are of no interest to
the members.
IGMPv2: Leave messages were added to the protocol, allowing group membership
termination to be quickly reported to the routing protocol, which is important for
high-bandwidth multicast groups and/or subnets with highly volatile group
membership.
IGMPv3: A major revision of the protocol allows hosts to specify the list of sources
from which they want to receive traffic; traffic from other sources is blocked inside
the network. It also allows hosts to block packets that come from sources sending
unwanted traffic.
ICMP, which is defined in RFC 792, is a protocol that works with the IP layer to provide
error checking and reporting functionality. In effect, ICMP is a tool that IP uses in its quest to
provide best-effort delivery. ICMP can be used for a number of functions. Its most common
function is probably the widely used and incredibly useful ping utility. Ping sends a stream of
ICMP echo requests to a remote host. If the host can respond, it does so by sending echo reply
messages back to the sending host. In that one simple process, ICMP enables the verification
of the protocol suite configuration of both the sending and receiving nodes and any intermediate
networking devices.
However, ICMP’s functionality is not limited to the use of the ping utility. ICMP also can
return error messages such as Destination unreachable and Time exceeded. In addition to
these and other functions, ICMP performs source quench. In a source quench scenario, the
receiving host cannot handle the influx of data at the same rate as the data is being sent. To
slow down the sending host, the receiving host sends ICMP source quench messages, telling
the sender to slow down. This action prevents packets from being dropped and having to be
resent. ICMP is a useful protocol. Although ICMP operates largely in the background, the ping
utility alone makes it one of the most valuable of the protocols.
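What ping actually transmits is a small ICMP message: type 8 (echo request), code 0, a checksum, an identifier, and a sequence number. The sketch below builds such a packet and computes the RFC 1071 Internet checksum; it does not send anything (sending raw ICMP requires raw sockets and, typically, administrator privileges), and the helper names are ours:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words,
    with carries folded back in, then complemented."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMP echo request: type 8, code 0, checksum, identifier, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=1, seq=1, payload=b"ping")
# A correctly checksummed ICMP message re-checksums to zero.
print(internet_checksum(pkt) == 0)    # True
```

The receiving host copies the payload into a type 0 (echo reply) message, which is how ping measures round-trip time and packet loss.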
Internet Control Message Protocol (ICMP) is an integrated part of the IP suite. ICMP
messages, delivered in IP packets, are used for out-of-band messages related to network
operation. ICMP packet delivery is unreliable, so hosts can’t count on receiving ICMP packets
for any network problems. The key ICMP functions are:
Announce network errors, such as a host or entire portion of the network being
unreachable, due to some type of failure. A TCP or UDP packet directed at a port
number with no receiver attached is also reported via ICMP.
Announce network congestion. When a router begins buffering too many packets,
due to an inability to transmit them as fast as they are being received, it will generate
ICMP Source Quench messages. Directed at the sender, these messages should
cause the rate of packet transmission to be slowed. Of course, generating too
many Source Quench messages would cause even more network congestion, so
they are used sparingly.
Assist troubleshooting. ICMP supports an Echo function, which just sends a packet
on a round-trip between two hosts. Ping, a common network management tool, is
based on this feature. Ping will transmit a series of packets, measuring average
round-trip times and computing loss percentages.
Announce timeouts. If an IP packet's TTL field drops to zero, the router discarding
the packet will often generate an ICMP packet announcing this fact. Traceroute is
a tool which maps network routes by sending packets with small TTL values and
watching for the ICMP timeout announcements.
The Internet Control Message Protocol (ICMP) was revised during the definition of IPv6.
In addition, the multicast control functions of the IPv4 Internet Group Management Protocol
(IGMP) are now incorporated in ICMPv6.
The Internet Protocol (IP) is a network-layer (Layer 3 in the OSI model) protocol that
contains addressing information and some control information to enable packets to be routed in
a network. IP is the primary network-layer protocol in the TCP/IP protocol suite. Along with the
Transmission Control Protocol (TCP), IP represents the heart of the Internet protocols. IP is
equally well suited for both LAN and WAN communications.
When you send or receive data (for example, an e-mail note or a Web page), the message
gets divided into little chunks called packets. Each of these packets contains both the sender's
Internet address and the receiver's address. Because a message is divided into a number of
packets, each packet can, if necessary, be sent by a different route across the Internet. Packets
can arrive in a different order than the order in which they were sent. The Internet Protocol just
delivers
them. It’s up to another protocol, the Transmission Control Protocol (TCP) to put them back in
the right order. All other protocols within the TCP/IP suite, except ARP and RARP, use IP to
route frames from host to host. There are two basic IP versions, IPv4 and IPv6.
IPv6 is the new version of Internet Protocol (IP) based on IPv4, a network-layer (Layer 3)
protocol that contains addressing information and some control information enabling packets to
be routed in the network. IPv6 increases the IP address size from 32 bits to 128 bits, to support
more levels of addressing hierarchy, a much greater number of addressable nodes and simpler
auto-configuration of addresses. IPv6 addresses are expressed in hexadecimal format (base
16), which allows not only numerals (0-9) but the letters a-f as well. A sample IPv6
address looks like 3ffe:ffff:100:f101:210:a4ff:fee3:9566. Scalability of multicast addresses is
introduced. A new type of address called an anycast address is also defined, to send a packet
to any one of a group of nodes.
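Python's standard `ipaddress` module can show both the compressed notation used in the sample address above and the fully zero-padded ("exploded") form, and confirms the 128-bit size:

```python
import ipaddress

# The sample address from the text, parsed by the stdlib.
addr = ipaddress.IPv6Address("3ffe:ffff:100:f101:210:a4ff:fee3:9566")

print(addr.exploded)    # 3ffe:ffff:0100:f101:0210:a4ff:fee3:9566
print(addr.compressed)  # 3ffe:ffff:100:f101:210:a4ff:fee3:9566
print(len(addr.packed) * 8, "bits")    # 128 bits
```

Note that leading zeros within each 16-bit group may be omitted, which is why the compressed and exploded forms differ.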
Improved support for extensions and options - IPv6 options are placed in separate
headers that are located between the IPv6 header and the transport-layer header.
Changes in the way IP header options are encoded allow more efficient forwarding,
less stringent limits on the length of options, and greater flexibility for introducing
new options in the future. The extension headers are: Hop-by-Hop Options, Routing
(Type 0), Fragment, Destination Options, Authentication, and Encapsulating Security Payload.
Flow labeling capability - A new capability has been added to enable the labeling of
packets belonging to particular traffic flows for which the sender requests special
handling, such as non-default Quality of Service or real-time service.
Internet Security architecture (IPsec) defines the security services at the IP layer by enabling
a system to select required security protocols, determine the algorithm(s) to use for the service(s),
and put in place any cryptographic keys required to provide the requested services. IPsec can
be used to protect one or more “paths” between a pair of hosts, between a pair of security
gateways, or between a security gateway and a host.
The set of security services that IPsec can provide includes access control, connectionless
integrity, data origin authentication, rejection of replayed packets (a form of partial sequence
integrity), confidentiality (encryption), and limited traffic flow confidentiality. Because these
services are provided at the IP layer, they can be used by any higher layer protocol, e.g., TCP,
UDP, ICMP, BGP, etc.
These objectives are met through the use of two traffic security protocols, the Authentication
Header (AH) and the Encapsulating Security Payload (ESP), and through the use of cryptographic
key management procedures and protocols. The set of IPsec protocols employed in any context,
and the ways in which they are employed, will be determined by the security and system
requirements of users, applications, and/or sites/organizations. When these mechanisms are
correctly implemented and deployed, they ought not to adversely affect users, hosts, and other
Internet components that do not employ these security mechanisms for protection of their traffic.
These mechanisms are also designed to be algorithm-independent. This modularity permits
selection of different sets of algorithms without affecting the other parts of the implementation.
DHCP, which is defined in RFC 2131, allows ranges of IP addresses, known as scopes,
to be defined on a system running a DHCP server application. When another system configured
as a DHCP client is initialized, it asks the server for an address. If all things are as they should
be, the server assigns an address from the scope to the client for a predetermined amount of
time, known as the lease. At various points during the lease (normally the 50% and 85% points),
the client attempts to renew the lease from the server. If the server cannot perform a renewal,
the lease expires at 100%, and the client stops using the address. In addition to an IP address
and the subnet mask, the DHCP server can supply many other pieces of information depending
on how the DHCP server implementation has been configured. In addition to the address
information, the default gateway is often supplied, along with DNS information.
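The lease timing described above can be sketched as a small helper. The 50% and 85% checkpoints follow the text; the function name lease_checkpoints is illustrative, not part of any DHCP API:

```python
def lease_checkpoints(lease_seconds: int) -> dict:
    """Return the renewal (50%) and rebinding (85%) points of a DHCP
    lease, as described in the text; the lease expires at 100%."""
    return {
        "renew_at": lease_seconds * 0.50,    # first renewal attempt
        "rebind_at": lease_seconds * 0.85,   # later renewal attempt
        "expires_at": float(lease_seconds),  # client stops using the address
    }

# Example: an 8-hour (28800-second) lease.
points = lease_checkpoints(28800)
print(points)  # renew at 14400.0 s, rebind at 24480.0 s, expire at 28800.0 s
```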
DHCP is a protocol-dependent service, not a platform-dependent one. This means that
you can use, say, a Linux DHCP server for a network with Windows clients or a Novell DHCP
server with Linux clients. Although the DHCP server offerings in the various network operating
systems might differ slightly, the basic functionality is the same across the board. Likewise, the
client configuration for DHCP servers running on a different operating system platform is the
same as for DHCP servers running on the same base operating system platform.
The Hypertext Transfer Protocol (HTTP) is an application-level protocol with the lightness
and speed necessary for distributed, collaborative, hypermedia information systems. HTTP
has been in use by the World-Wide Web global information initiative since 1990. HTTP allows
an open-ended set of methods to be used to indicate the purpose of a request. It builds on the
discipline of reference provided by the Uniform Resource Identifier (URI), as a location (URL)
or name (URN), for indicating the resource on which a method is to be applied. Messages are
passed in a format similar to that used by Internet Mail and the Multipurpose Internet Mail
Extensions (MIME). HTTP is also used as a generic protocol for communication between user
agents and proxies/gateways to other Internet protocols, such as SMTP, NNTP, FTP, Gopher
and WAIS, allowing basic hypermedia access to resources available from diverse applications
and simplifying the implementation of user agents.
The HTTP protocol is a request/response protocol. A client sends a request to the server
in the form of a request method, URI, and protocol version, followed by a MIME-like message
containing request modifiers, client information, and possible body content over a connection
with a server. The server responds with a status line, including the message’s protocol version
and a success or error code, followed by a MIME-like message containing server information,
entity metainformation, and possible entity-body content.
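The message structure just described can be sketched by composing the two message types by hand; the host name, resource, and response body here are hypothetical:

```python
# A request: request line (method, URI, version), MIME-like header
# fields, and a blank line that ends the headers.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"  # required in HTTP/1.1 (virtual hosts)
    "Accept: text/html\r\n"
    "\r\n"                       # empty line: end of headers, no body
)
print(request)

# A matching response: status line, headers, blank line, entity body.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 5\r\n"
    "\r\n"
    "hello"
)
status_line = response.split("\r\n", 1)[0]
print(status_line)  # HTTP/1.1 200 OK
```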
The first version of HTTP, referred to as HTTP/0.9, was a simple protocol for raw data
transfer across the Internet. HTTP/1.0, as defined by RFC 1945, improved the protocol by
allowing messages to be in the format of MIME-like messages, containing meta information
about the data transferred and modifiers on the request/response semantics. However, HTTP/
1.0 does not sufficiently take into consideration the effects of hierarchical proxies, caching, the
need for persistent connections, or virtual hosts. “HTTP/1.1” includes more stringent requirements
than HTTP/1.0 in order to ensure reliable implementation of its features. HTTP messages consist
of requests from client to server and responses from server to client.
Secure HTTP (S-HTTP) was an early specification for a secure version of HTTP. The technology
that became popular for secured web communication, however, is HTTPS, which is HTTP running
on top of TLS (or its predecessor, SSL) for secured web transactions. For HTTPS to be used,
both the client and server must support it. All
popular browsers now support HTTPS, as do web server products, such as Microsoft Internet
Information Server (IIS), Apache, and almost all other web server applications that serve
sensitive applications. When you are accessing an application that uses HTTPS, the URL starts
with https rather than http (for example, https://www.mycollegeonline.com).
ASCII (American Standard Code for Information Interchange) is the most common format
for text files in computers and on the Internet. In an ASCII file, each alphabetic, numeric, or
special character is represented by a 7-bit binary number (a string of seven 0s and 1s), so 128
possible characters are defined. ASCII was standardized by the American National Standards
Institute (ANSI).
UNIX and DOS-based operating systems use ASCII for text files. Windows NT and 2000
use a newer code, Unicode. IBM’s S/390 systems use a proprietary 8-bit code called EBCDIC.
Conversion programs allow different operating systems to change a file from one code to another.
ASCII was developed from telegraph code. Its first commercial use was as a seven-bit
teleprinter code promoted by Bell data services. Work on the ASCII standard began on October
6, 1960, with the first meeting of the American Standards Association’s (ASA) (now the American
National Standards Institute or ANSI) X3.2 subcommittee. The first edition of the standard was
published in 1963, underwent a major revision during 1967, and experienced its most recent
update during 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII
were both ordered for more convenient sorting (i.e., alphabetization) of lists, and added features
for devices other than teleprinters.
Originally based on the English alphabet, ASCII encodes 128 specified characters into
seven-bit integers as shown by the ASCII chart above. Ninety-five of the encoded characters
are printable: these include the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z,
and punctuation symbols. In addition, the original ASCII specification included 33 non-printing
control codes which originated with Teletype machines; most of these are now obsolete, although
a few are still commonly used, such as the carriage return, line feed and tab codes.
For example, lowercase i would be represented in the ASCII encoding by binary 1101001
= hexadecimal 69 (i is the ninth letter) = decimal 105.
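The same worked example can be reproduced in Python, whose code points for these characters coincide with ASCII:

```python
# The worked example from the text: lowercase 'i'.
code = ord("i")
print(code)                 # 105 (decimal)
print(hex(code))            # 0x69 (hexadecimal)
print(format(code, "07b"))  # 1101001, as a 7-bit binary string

# The 95 printable ASCII characters occupy codes 32 (space) to 126 ('~').
printable = [chr(c) for c in range(32, 127)]
print(len(printable))       # 95
```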
Lower ASCII, between 32 and 127. This range originates from the older American
systems, which worked with 7-bit character tables.
Higher ASCII, between 128 and 255. This portion is programmable; characters are
based on the language of your operating system or the program you are using. Foreign
letters are also placed in this section.
Extended ASCII uses eight bits instead of seven, which adds 128 additional characters.
This gives extended ASCII room for extra characters, such as special symbols, foreign-language
letters, and drawing characters.
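A short sketch of the programmable upper range, using Latin-1 (ISO 8859-1) as one illustrative code-page assignment for codes 128-255:

```python
# Four byte values: 'A' and 'i' are plain ASCII; 233 and 252 are
# extended codes whose meaning depends on the code page in use.
data = bytes([65, 105, 233, 252])

# The two high codes are not valid ASCII, so they cannot be decoded
# as 7-bit ASCII and are replaced.
print(data.decode("ascii", errors="replace"))

# Under the Latin-1 code page, 233 and 252 are the accented letters
# 'é' and 'ü'.
print(data.decode("latin-1"))  # Aiéü
```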
Summary
TELNET is the terminal emulation protocol in a TCP/IP environment. Telnet, which
is defined in RFC 854, is a virtual terminal protocol. It allows sessions to be opened
on a remote host, and then commands can be executed on that remote host.
FTP provides for the uploading and downloading of files from a remote host running
FTP server software. FTP allows you to view the contents of folders on an FTP
server and rename and delete files and directories if you have the necessary
permissions. FTP, which is defined in RFC 959, uses TCP as a transport protocol to
guarantee delivery of packets.
Trivial File Transfer Protocol (TFTP) is a simple protocol to transfer files. It has
been implemented on top of the Internet User Datagram Protocol (UDP). TFTP is
designed to be small and easy to implement and, therefore, lacks most of the features
of regular FTP. TFTP only reads and writes files (or mail) from/to a remote server.
It cannot list directories, and currently has no provisions for user authentication.
SMTP, which is defined in RFC 821, is a protocol that defines how mail messages
are sent between hosts. SMTP uses TCP connections to guarantee error-free delivery
of messages.
IMAP (Internet Message Access Protocol) is a standard email protocol that stores
email messages on a mail server, but allows the end user to view and manipulate
the messages as though they were stored locally on the end user’s computing
device(s). This allows users to organize messages into folders, have multiple client
applications know which messages have been read, flag messages for urgency or
follow-up and save draft messages on the server.
Transmission Control Protocol (TCP) is the transport layer protocol in the TCP/IP
suite, which provides a reliable stream delivery and virtual connection service to
applications through the use of sequenced acknowledgment with retransmission of
packets when necessary. Along with the Internet Protocol (IP), TCP represents the
heart of the Internet protocols.
UDP is a connectionless transport layer (layer 4) protocol in the OSI model which
provides a simple and unreliable message service for transaction-oriented services.
UDP is basically an interface between IP and upper-layer processes. UDP protocol
ports distinguish multiple applications running on a single device from one another.
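The port demultiplexing mentioned above can be sketched with standard sockets; note that the datagram is sent with no connection setup beforehand:

```python
import socket

# Two UDP sockets on the same host are told apart only by port number.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))      # the OS picks a free port
recv.settimeout(5)
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", port))  # no handshake, no connection

data, addr = recv.recvfrom(1024)
print(data)  # b'hello'
recv.close()
send.close()
```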
ICMP, which is defined in RFC 792, is a protocol that works with the IP layer to
provide error checking and reporting functionality. In effect, ICMP is a tool that IP
uses in its quest to provide best-effort delivery.
The Internet Protocol (IP) is a network-layer (Layer 3 in the OSI model) protocol
that contains addressing information and some control information to enable packets
to be routed in the network.
Review Questions
Protocol
Telnet
FTP
TFTP
SMTP
DNS
RIP
SNMP
IMAP
ARP
TCP/IP
UDP
IGMP
ICMP
DHCP
HTTP
Reference
Radia Perlman (1999): Interconnections: Bridges, Routers, Switches, and Internetworking
Protocols, 2nd Edition. Addison-Wesley, ISBN 0-201-63448-1. In particular Ch. 18 on
“network design folklore”, which is also available online at
http://www.informit.com/articles/article.aspx?p=20482
Internet Engineering Task Force (IETF) (1989): RFC 1122, Requirements for Internet
Hosts — Communication Layers, R. Braden (ed.). Available online at
http://tools.ietf.org/html/rfc1122. Describes TCP/IP to the implementors of protocol
software; in particular, the introduction gives an overview of the design goals of the suite.
C.A.R. Hoare (1985): Communicating Sequential Processes, 10th Print. Prentice Hall
International, ISBN 0-13-153271-5. Available online via http://www.usingcsp.com
R.D. Tennent (1981): Principles of Programming Languages, 10th Print. Prentice Hall
International, ISBN 0-13-709873-1.
CORE PAPER-II
Section-A
5. What is CIDR?
9. What is subnetting?
Section-B
a. ARP
b. IGMP
SECTION – C
2. Your company has the network ID 165.121.0.0. You are responsible for creating
subnets on the network, and each subnet must provide at least 900 host IDs. What
subnet mask meets the requirement for the minimum number of host IDs and
provides the greatest number of subnets?
b. How are edge routers used in a network? What are its advantages?