CCNA Basic Final
Internet:- a global network that provides a variety of information and communication facilities by using
standardized communication protocols.
Servers:- servers are computers that provide shared resources to network users; examples are file
sharing, email, printing and web services.
1.amount of traffic
2.number of nodes
3.size of packets
4.network diameter
Types of network:
1.LAN
2.MAN
3.WAN
OSI is a set of standards/protocols.
MAC sub-layer:- this layer deals with broadcast networks and their protocols.
This post gives a brief overview of the two sub-layers of the data link layer, namely LLC (Logical
Link Control) and MAC (Media Access Control).
The data link layer functionality is usually split into two logical sub-layers: the upper sub-layer, termed
LLC, which interacts with the network layer above, and the lower sub-layer, termed MAC, which
interacts with the physical layer below, as shown in the diagram given below:
While LLC is responsible for handling multiple Layer3 protocols (multiplexing/de-multiplexing) and
link services like reliability and flow control, the MAC is responsible for framing and media access
control for broadcast media. The functional overview of LLC and MAC sub-layers are given in the
diagram below :
Role of LLC and MAC
LLC
The primary responsibilities of LLC are:
LLC can optionally provide reliable frame transmission by the sending node numbering each
transmitted frame (sequence number), the receiving node acknowledging each received frame (
acknowledgment number) and the sending node retransmitting lost frames. It can also optionally
provide flow control by allowing the receivers to control the sender’s rate through control frames like
RECEIVE READY and RECEIVE NOT READY etc.
Based on whether a logical connection is established between the layer 2 peers and based on
whether frames are acknowledged by the peer, LLC can be classified to provide the following types
of service modes:
a) Connectionless Unacknowledged Service: In this mode, data is sent directly between Layer 2
peers without any logical link establishment and without acknowledgments; reliability is left to the
higher layers. This is the common mode on LANs such as Ethernet.
b) Connectionless Acknowledged Service: In this mode, data is directly sent between Layer2
peers without any logical link establishment. But each frame is numbered using sequence numbers
and the peer acknowledges each frame received using an Acknowledgment number field. This
service mode is used in scenarios where the overhead of a connection establishment is costly due to
the extra delay involved, but where data reliability is needed. The sender can track lost or damaged
frames and retransmit such frames to achieve reliability. This type of service is used in wireless
links, where the quality of link is not as good as wired links and so frequent link establishment and
teardown are unnecessary overheads, as these control frames may themselves be corrupted or lost.
c) Connection Oriented Service: In this mode, procedures are laid out for logical link establishment
and disconnection. Before data transfer starts, a logical connection is established between peers
through the exchange of control frames, known as Supervisory Frames. The logical connection is
closed after the data exchange phase is over. Actual data transfer starts after the connection
establishment phase, and frames carrying higher layer data are known as Information Frames. A
third category of frames, known as Unnumbered Acknowledgment frames, is used to acknowledge
received Supervisory frames.
In this mode too, there are two variants that are used, namely one without acknowledgement and
another with acknowledgement.
Here, though a logical link is established before actual data transfer happens, there is no concept of
frames being numbered and acknowledged through Sequence number and acknowledgment
number fields. It is just a best effort service, with reliability left to the higher layer protocol. Many
WAN protocols like HDLC, PPP, LAPB, LAPD etc. use this mode of service.
Here, apart from a logical link being established before data transfer happens, reliability and flow
control services are also provided by the LLC. Reliability is provided through the use of sequence
number, acknowledgment number and retransmission of lost frames using strategies like Go Back
N or Selective Repeat. Flow control is provided by using a sliding window mechanism. This
service mode is rarely used in the Internet, because the Internet uses TCP, which supports reliability and
flow control at the transport layer. This service mode is used in proprietary protocols like Microsoft’s
NetBIOS.
Note that though connection establishment, reliability and flow control are optional services at the
data link layer, error detection is still a basic service provided by the data link layer, through the use
of CRC/checksums in the frame trailer, that is added by the MAC sub-layer framing functionality.
MAC
The MAC sub-layer interacts with the physical layer and is primarily responsible for framing/de-
framing and collision resolution.
Framing/De-Framing and interaction with PHY: On the sending side, the MAC sub-layer is
responsible for creation of frames from network layer packets, by adding the frame header and the
frame trailer. While the frame header consists of layer2 addresses (known as MAC address) and a
few other fields for control purposes, the frame trailer consists of the CRC/checksum of the whole
frame. After creating a frame, the MAC layer is responsible for interacting with the physical layer
processor (PHY) to transmit the frame.
On the receiving side, the MAC sub-layer receives frames from the PHY and is responsible for
accepting each frame, by examining the frame header. It is also responsible for verifying the
checksum to conclude whether the frame has come uncorrupted through the link without bit errors.
Since checksum computation and verification are compute-intensive tasks, the framing/de-framing
functionality is done by a dedicated piece of hardware (e.g. the NIC card on PCs).
Collision Resolution : On shared or broadcast links, where multiple end nodes are connected to
the same link, there has to be a collision resolution protocol running on each node, so that the link is
used cooperatively. The MAC sub-layer is responsible for this task and it is the MAC sub-block that
implements standard collision resolution protocols like CSMA/CD, CSMA etc. For half-duplex links, it
is the MAC sub-layer that makes sure that a node sends data on the link only during its turn. For full-
duplex point-to-point links, the collision resolution functionality of MAC sub-layer is not required.
In end nodes and in intermediate devices like L2 switches and routers, the LLC
functionality is implemented in network device driver software that is part of the
operating system, and the MAC functionality is implemented in a dedicated piece of
hardware.
Protocol types:-
1.IPX/SPX
2.TCP/IP
#IPX/SPX:- IPX/SPX is a local area network communication protocol suite developed by Novell in the
early 1980s. IPX/SPX was created by Novell primarily for Novell NetWare networks in the NetWare OS.
# IPX provides routing and internetwork services similar to IP, and SPX provides transport layer
services similar to TCP. IPX and IP are connectionless datagram protocols, while SPX and TCP
are connection-oriented protocols.
# The SPX layer sits on top of the IPX layer and provides connection-oriented services between two
nodes on the network. SPX is used primarily by client–server applications.
# The IPX protocol has similarities to IP, and SPX has similarities to TCP.
# IPX is similar to the Internet Protocol, but it is not an industry standard and has been largely displaced by IP.
IPX/SPX is a part of the NetWare protocol suite. The NetWare suite of protocols supports
several media-access (Layer 2) protocols, including Ethernet/IEEE 802.3. IPX is similar to the
Internet Protocol from the TCP/IP suite: it is a connectionless Layer 3 (Network layer) protocol
used to transfer datagrams between hosts and networks. SPX is the transport protocol used to
provide reliable transport for IPX datagrams, similar to what TCP does for IP. Although current
versions of Novell NetWare use TCP/IP, before NetWare version 5, IPX was the protocol used in
NetWare networks. It is a small and easy-to-implement routable protocol developed by Novell
and based on the Xerox Network System. The NetWare protocol suite is a suite of several
protocols for different functions, the most important being IPX and SPX.
Communication protocol:-
The information exchanged between devices through a network or other media is governed by rules
and conventions that are set out in technical specifications called communication protocols.
IPX/SPX – Novell
OSI – ISO
Protocols and port numbers:----
ip
4.HTTP (80)
5.HTTPS(443)
6. POP3(110)
7.NTP (123)
8.SSL(443)
9.SSH(22)
TFTP:-----------------
Trivial File Transfer Protocol (TFTP) is a file transfer protocol which is a simplified form of File
Transfer Protocol (FTP). TFTP has a very simple design and requires only a very small amount of
memory. TFTP is mainly used for network booting of computers and network infrastructure
devices such as routers and switches. In a Cisco networking environment, TFTP is used to back
up the Cisco IOS (operating system) image file and configuration files, for network booting, and
for IOS upgrades.
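For example, in a Cisco environment the configuration and IOS image can be backed up to a TFTP server with commands like the ones below (a sketch only; it assumes a reachable TFTP server, and the router will prompt for its address and the file name):
Router# copy running-config tftp: (back up the configuration)
Router# copy flash: tftp: (back up the IOS image)
Router# copy tftp: flash: (restore or upgrade the IOS image)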
# In asynchronous transmission, only one byte is sent at a time, and there is idle time (a gap) between
two data bytes. Each byte is framed by a start bit (0) at the beginning and a stop bit (1) at the end.
[Diagram: the transmitter (Tx) sends framed bytes toward the receiver (Rx); gaps appear between the
data units; the arrow shows the direction of data flow.]
# To help the receiver synchronize, start and stop bits are used, with the data in the middle.
TCP UDP
Reliable Unreliable
Connection-oriented Connection-less
Network devices:-
NIC:-
Bridge:-
1. A bridge is a layer 2 device of the OSI model.
2. A bridge is used to connect two or more network segments.
3. A bridge reduces the traffic on a LAN by dividing it into two segments.
4. A bridge creates a MAC table.
5. A bridge stores the addresses in software.
6. A bridge is a device to connect two similar network segments together.
7. A bridge is also used to connect two LANs working on the same protocol.
8. A bridge filters packets by reading the MAC addresses of source and destination.
9. A bridge is generally used for connecting two different topologies.
10. It restricts transmission onto the other LAN segment if the destination is not found there.
11. A bridge is a store-and-forward device. Store and forward means that the bridge receives a
complete frame, determines which outgoing port to use, prepares the frame for the outgoing port,
calculates a CRC, and transmits the frame once the medium is free on the outgoing port.
12. A bridge has a small number of ports.
13. A bridge operates at a lower speed.
14. Bridges work in full duplex mode, so two-way communication is possible. (If there is only one
device connected to an interface it supports full duplex, and if there are multiple devices connected
to that one interface then it supports half duplex.)
15. Ports: minimum 4, maximum 8.
Switch:
Router:
3. A router does two basic things – a. select the best path from the routing table; b. forward the packet on that
path.
7.manufacturing company:…
transmission mode:
1.simplex ex-radio/TV broadcast (one-way only)
2.half-duplex ex-walkie-talkie
3.full-duplex ex-telephone
Topology:--
1.bus
2.star
3.ring
4.mesh
5.tree
6.hybrid
BUS Topology
Bus topology is a network type in which every computer and network device is
connected to single cable. When it has exactly two endpoints, then it is
called Linear Bus topology.
1. It is cost effective.
4. It is easy to understand.
Disadvantage: if network traffic is heavy or there are many nodes, the performance of the
network decreases.
RING Topology
It is called ring topology because it forms a ring as each computer is
connected to another computer, with the last one connected to the first.
Exactly two neighbors for each device.
1. A number of repeaters are used for ring topology with a large number of nodes, because if
someone wants to send some data to the last node in a ring topology with 100 nodes, then the data
will have to pass through 99 nodes to reach the 100th node. Hence, to prevent data loss, repeaters
are used in the network.
2. The transmission is unidirectional, but it can be made bidirectional by having two connections
between each network node; this is called Dual Ring Topology.
3. In Dual Ring Topology, two ring networks are formed, and data flows in opposite directions in
them. Also, if one ring fails, the second ring can act as a backup to keep the network up.
4. Data being transmitted has to pass through each node of the network until it reaches the
destination node.
STAR Topology
In this type of topology all the computers are connected to a single hub
through a cable. This hub is the central node and all others nodes are
connected to the central node.
Features of Star Topology
3. Easy to troubleshoot.
5. Only that node is affected which has failed, rest of the nodes can work
smoothly.
2. Expensive to use.
3. If the hub fails then the whole network is stopped because all the nodes depend on the hub.
MESH Topology
It is a point-to-point connection to other nodes or devices. All the network
nodes are connected to each other. Mesh has n(n-1)/2 physical channels to
link n devices.
There are two techniques to transmit data over the Mesh topology, they are :
1. Routing
2. Flooding
1. Partial Mesh Topology : Some of the devices are connected in the same fashion as mesh topology,
but some devices are connected to only two or three other devices.
2. Full Mesh Topology : Each and every node or device is connected to every other.
1. Fully connected.
2. Robust.
3. Not flexible.
TREE Topology
It has a root node and all other nodes are connected to it forming a hierarchy.
It is also called hierarchical topology. It should at least have three levels to the
hierarchy.
1. Heavily cabled.
2. Costly.
HYBRID Topology
It is a mixture of two or more different topologies. For example, if in an office ring topology is used
in one department and star topology is used in another, connecting these topologies will result in a
Hybrid Topology (ring topology and star topology).
Features of Hybrid Topology
2. Effective.
1. Complex in design.
2. Costly.
Protocol:- a protocol is a set of rules; according to these rules, networking devices communicate with each other. The key elements of a protocol are:
1.syntax
2.semantics
3.timing
ISP (internet service provider):- An ISP is an entity that provides (usually sells) access to the global
Internet.
Types of ISP:
Transmission media:--
Types of UTP:--
Collision domain:-
1.When two devices send packets at the same time on a shared network segment, a collision occurs.
2.If a collision occurs on one port, it affects the entire hub because the same media is shared across the
hub.
Broadcast domain:--
1.When one device tries to communicate with another device in a hub network, the hub broadcasts the
data to all ports; this is what forms a broadcast domain.
Ex- if a switch has 8 ports then the switch has 8 separate collision domains (same as a bridge).
3.All ports on a hub and on a switch are in the same broadcast domain (same as a bridge).
Access control (CSMA/CD) is a solution for controlling transmissions as soon as a collision is detected on a
half-duplex network.
1.In this technique, a host first senses the medium before trying to use it.
3.A collision occurs when two hosts send data at the same time. CSMA/CD then sends a jam signal over
the Ethernet; the jam signal is an indication to all the other devices on the segment that no further data
should be sent.
4.After the jam signal clears, each sender waits for a random backoff period before beginning the
entire process all over again.
Ethernet cabling:--
1.straight cable
2.crossover cable
3.rolled cable
Straight-through cable: both ends use the same wiring standard – EIA/TIA-568A to EIA/TIA-568A,
or EIA/TIA-568B to EIA/TIA-568B.
This cable is used to connect a host EIA/TIA 232 interface to a router console serial com port
Cable selection (C = crossover, S = straight-through):
          HUB   SWITCH   ROUTER   PC
HUB        C      C        S       S
SWITCH     C      C        S       S
ROUTER     S      S        C       C
PC         S      S        C       C
ADDRESSING
1.size= 48 bit
2.format == hexadecimal
00 A0 CC 23 AF 4A
6.The last 24 bits are a unique serial number assigned by the manufacturer; the first 24 bits are the OUI
(organizationally unique identifier), which identifies the manufacturer.
IP (internet protocol):-
1.An IP address is the identity of a host on the network. In order to communicate over the internet, computers and
devices must have an IP address.
IP FORMAT:---
IPv4 header
An IP header is a prefix to an IP packet that contains information about the IP version,
length of the packet, source and destination IP addresses, etc. It consists of the
following fields:
• Version – the version of the IP protocol. For IPv4, this field has a value of 4.
• Header length – the length of the header in 32-bit words. The minimum value is 20 bytes,
and the maximum value is 60 bytes.
• Priority and Type of Service – specifies how the datagram should be handled. The first 3
bits are the precedence (priority) bits; the remaining bits indicate preferences such as minimum
delay, reliability and cost.
• Total length – the length of the entire packet (header + data). The minimum length is 20
bytes, and the maximum is 65,535 bytes.
• Identification – used to differentiate fragmented packets from different datagrams.
• Flags – used to control or identify fragments.
• Fragmented offset – used for fragmentation and reassembly if the packet is too large to put
in a frame.
• Time to live – limits a datagram’s lifetime (the maximum time for which the IP datagram is alive).
If the packet doesn’t get to its destination before the TTL expires, it is discarded.
• Protocol – defines the higher layer protocol used in the data portion of the IP datagram, such as
TCP, UDP or ICMP. For example, TCP is represented by the number 6 and UDP by 17.
• Header checksum – used for error-checking of the header. If a packet arrives at a router
and the router calculates a different checksum than the one specified in this field, the packet
will be discarded.
• Source IP address – the IP address of the host that sent the packet.
• Destination IP address – the IP address of the host that should receive the packet.
• Options – used for network testing, debugging, security, and more. This field is usually
empty.
3. ICANN Internet Corporation for Assigned Names and Numbers is in charge of making
policy decisions about how the domain name system is run.
A regional Internet registry (RIR) is an organization that manages the allocation and
registration of Internet number resources within a particular region of the world. Internet
number resources include IP addresses and autonomous system (AS) numbers.
…
IP address
IPV4 IPV6
Multicast Multicast
Division of IP address:--
Class A:-
Class C :-
Subnet mask:-
Subnet mask is used to identify the no of bits in network portion and no of bits in the host portion.
Class C :- 255.255.255.0
Wildcard mask:--
1. A mask of bits that indicates which parts of an IP address are available for examination.
3. Used to indicate the size of a network or subnet for some routing protocols, such as OSPF.
/9 255.128.0.0 0.127.255.255
/8 255.0.0.0 0.255.255.255
/7 254.0.0.0 1.255.255.255
/6 252.0.0.0 3.255.255.255
/5 248.0.0.0 7.255.255.255
/4 240.0.0.0 15.255.255.255
/3 224.0.0.0 31.255.255.255
/2 192.0.0.0 63.255.255.255
/1 128.0.0.0 127.255.255.255
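As a worked example for a prefix not shown in the table above, the wildcard mask can be found by
subtracting the subnet mask from 255.255.255.255:
/26   subnet mask 255.255.255.192   wildcard 0.0.0.63
(255.255.255.255 - 255.255.255.192 = 0.0.0.63)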
Loopback IP:- (127.0.0.1)
It is a feature of the operating system's TCP/IP stack (e.g. in Windows), used to test the local host.
Private IP:-
Public IP:-
All IP addresses except the private IP ranges are public.
Subnetting :
CIDR value:-
3. Once bits are borrowed from the host portion into the network portion, the classful nature of the IP address
becomes classless.
Ex--- 192.168.0.0/25
3. The slash notation (/) indicates how many bits of the subnet mask are turned on (set to 1).
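As a worked example of the notation above: 192.168.0.0/25 means the first 25 bits are network bits, so the
subnet mask is 255.255.255.128. This splits the original /24 into two subnets, 192.168.0.0/25 (usable hosts
.1–.126) and 192.168.0.128/25 (usable hosts .129–.254), each with 2^7 - 2 = 126 usable host addresses.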
Classful networks use the 'classful' subnet mask according to the leading bits in the first block of the IP
address:
Classless IP addressing means you can use any subnet mask you want, even assigning partial blocks. For
example, subnet 172.16.0.0 is a class B network.
But suppose you have two physical interfaces on your router that connect to switches with 5 VLANs each. Classless
routing allows me to break up this IP address into more useful segments.
Switch 1 Switch 2
So I have made 10 useful subnets from a single network. Here's the real advantage:
1. When my router is configured for classless routing it can advertise all 10 networks as one summary address,
172.16.0.0 255.255.240.0, or I can advertise each physical interface separately as 172.16.0.0 and 172.16.8.0
using the subnet mask 255.255.248.0.
2. I still have spare room in 172.16.0.0/16 for lots more subnets. If I want to create another group and
do the binary math, I find the next available subnet is 172.16.24.0 255.255.248.0. If I use the no
auto-summary command on my routing protocol, I can place the new 172.16.x.x subnets anywhere in
my network.
The auto-summary command tells my router I only want to use classful subnet masks.
VLSM (Variable Length Subnet Mask):-
1. With VLSM we can vary the subnet mask according to the number of hosts required.
2. VLSM is a process of dividing an IP space into subnets of different sizes without wasting IP addresses.
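A small worked example of VLSM (the address block and host counts are only illustrative): starting from
192.168.1.0/24, a department needing 50 hosts can take 192.168.1.0/26 (62 usable hosts), a department
needing 20 hosts can take 192.168.1.64/27 (30 usable hosts), and a point-to-point link can take
192.168.1.96/30 (2 usable hosts). Each subnet gets a mask sized to its need instead of every subnet using
the same mask.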
IPV6:--------------------
6.It is divided into 8 separate quartets separated by colons (:); each quartet contains 16 bits.
10.It is used in the USA, UK, Russia, China and some parts of India (Bangalore).
11.IPv6 reduces the need for IPv4 subnetting and Network Address Translation (NAT).
Ex-
2001:0db8:3c4d:0015:0000:0000:1a2f:1a2b
Subnet ID:-- A 16-bit ID used for internal subnetting purposes within an organization.
Interface ID (host ID):- A 64-bit interface ID. It is either automatically configured from the MAC address or
manually configured, in EUI-64 format (EXTENDED UNIQUE IDENTIFIER).
Note:-- The IPv6 EUI-64 format address is obtained from the 48-bit MAC address.
The 16-bit value 0xFFFE is inserted between the two 24-bit halves of the MAC address to form the 64-bit
EUI address (and the universal/local bit of the first byte is inverted).
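For example, using the MAC address shown earlier (00-A0-CC-23-AF-4A): split it into 00A0CC and 23AF4A,
insert FFFE between the halves to get 00A0:CCFF:FE23:AF4A, then invert the universal/local (7th) bit of the
first byte (00 becomes 02), giving the interface ID 02A0:CCFF:FE23:AF4A. With the link-local prefix
FE80::/64 this forms the address FE80::2A0:CCFF:FE23:AF4A.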
Ipv6 header Format:------
Version (4-bits) : Indicates version of Internet Protocol which contains bit sequence 0110.
Traffic Class (8-bits) : (distinguishing different payload)The Traffic Class field indicates class
or priority of IPv6 packet which is similar to Service Field in IPv4 packet. It helps routers to
handle the traffic based on priority of the packet. If congestion occurs on router then packets
with least priority will be discarded.
As of now only 4 bits are being used (the remaining bits are under research); values 0 to 7 are
assigned to congestion-controlled traffic and 8 to 15 are assigned to uncontrolled traffic.
Priority assignment of Congestion controlled traffic :
Uncontrolled data traffic is mainly used for Audio/Video data. So we give higher priority to
Uncontrolled data traffic.
The source node is allowed to set the priorities, but routers along the way can change them. Therefore,
the destination should not expect the same priority that was set by the source node.
Flow Label (20-bits) : (provide special handling for particular data flow)Flow Label field is
used by source to label the packets belonging to the same flow in order to request special
handling by intermediate IPv6 routers, such as non-default quality of service or real time service.
In order to distinguish the flow, intermediate router can use source address, destination address
and flow label of the packets. Between a source and destination multiple flows may exist because
many processes might be running at the same time. Routers or Host that do not support the
functionality of flow label field and for default router handling, flow label field is set to 0. While
setting up the flow label, source is also supposed to specify the lifetime of flow.
Payload Length (16-bits)(length of ip datagram – base header) : It is a 16-bit (unsigned
integer) field, indicates total size of the payload which tells routers about amount of information
a particular packet contains in its payload. Payload Length field includes extension headers(if
any) and upper layer packet. In case length of payload is greater than 65,535 bytes (payload up to
65,535 bytes can be indicated with 16-bits), then the payload length field will be set to 0 and
jumbo payload option is used in the Hop-by-Hop options extension header.
Next Header (8-bits) : Next Header indicates type of extension header(if present) immediately
following the IPv6 header. Whereas In some cases it indicates the protocols contained within
upper-layer packet, such as TCP, UDP.
0 --- Hop-by-Hop Options
6 --- TCP
17 --- UDP
58 --- ICMPv6
Hop Limit (8-bits) : Hop Limit field is same as TTL in IPv4 packets. It indicates the maximum
number of intermediate nodes IPv6 packet is allowed to travel. Its value gets decremented by
one, by each node that forwards the packet and packet is discarded if value decrements to 0. This
is used to discard the packets that are stuck in infinite loop because of some routing error.
Source Address (128-bits) : Source Address is 128-bit IPv6 address of the original source of the
packet.
Destination Address (128-bits) : Destination Address field indicates the IPv6 address of the
final destination(in most cases). All the intermediate nodes can use this information in order to
correctly route the packet.
Extension Headers : In order to rectify the limitations of IPv4 Option Field, Extension Headers
are introduced in IPversion 6. The extension header mechanism is very important part of the
IPv6 architecture. Next Header field of IPv6 fixed header points to the first Extension Header
and this first extension header points to the second extension header and so on.
IPv6 packet may contain zero, one or more extension headers but these should be present in their
recommended order:
1.unicast
2.multicast
3.anycast
Unicast address------------------
1.Global unicast:--
Ex---
2000::/3
Here /3 means the first 3 bits (001) are reserved by IANA; we cannot change these first 3 bits, but we can
change the remaining bits.
0010000000000000::/3
2.Unique local:-----
Ex--
FC00::/7
1111110000000000::/7
#Link-local is a default IPv6 address present on every IPv6-enabled interface.
Ex- FE80::/10
1111111010000000::/10
Ex—
FF00::/8
1111111100000000::/8
In anycast, the same IP address is assigned to multiple computers or servers. A client reaches the server by
the shortest route.
Anycast defines a group of nodes and/or computers that all share a single address. A packet with an anycast
address is delivered to only one member of the group, the most reachable one.
Rule 1— Leading zeroes in each quartet can be omitted:
Ex:-
2001:0022:0003:0333:C2C1:1020:0123:0001
2001:22:3:333:C2C1:1020:123:1
Rule 2----
Successive quartets of zeroes are represented by a double colon (::), but only once in the address—
Ex:-
2001:0000:0000:C2C9:0000:0000:0000:0056
2001::C2C9:0:0:0:56 OR 2001:0:0:C2C9::56
1.ipconfig /all
2.getmac
3.ping a.a.a.a
4.arp –a
5.arp –d
7.nslookup www.google.com
8.netsh
PORT addressing:--
1.Well-known ports:---(0--1023)
They are used by system processes that provide widely used types of network service. On Unix-like
operating systems, a process must execute with superuser privileges to be able to bind a network
socket to an IP address using one of the well-known ports.
2.Registered ports:---(1024--49151)
They are assigned by IANA for specific services upon application by a requesting entity.
Now, let’s see these four DHCP Messages and the other DHCP Messages one by one.
For IP allocation, firstly, the DHCP Client sends a broadcast DHCP Discover Message. This DHCP Discover
Message asks this question:
“Are there any DHCP Servers in the Network?”
The DHCP Client asks this question of all the nodes, because the DHCP Discover Message is sent as broadcast. Here,
the destination port is UDP 67 (the DHCP server port). In this message there are some other required fields, but the
most important one is the MAC address of the DHCP Client.
There is also one more important flag in the DHCP Discover Message: the “Broadcast Flag”. If this flag is
set to “1”, it means that the DHCP Client is waiting for broadcast responses from the DHCP Server. If it is set
to “0”, it means that the DHCP Client is waiting for unicast responses from the DHCP Server. Regardless of this
flag, the DHCP Server acts according to its capability. In most cases, the responses of the DHCP Server are unicast.
All the nodes in the network receive the DHCP Discover Message, but only the DHCP Server replies to this
message, with a DHCP Offer Message. This DHCP Offer Message is sent as unicast or broadcast, according to
the Broadcast Flag in the DHCP Discover Message and the DHCP Server's capability.
The meaning of this message is:
“I am a DHCP Server. And I can give you this IP address!”
Here, the destination port is UDP 68 (the DHCP client port). This time, the message includes some more information:
• MAC address of DHCP Client,
• IP address of DHCP Server
• MAC address of DHCP Server
• Offered IP address and Subnet Mask
When the DHCP Client gets the DHCP Offer Messages, it chooses one of the DHCP Servers and sends a
broadcast DHCP Request Message that says it has selected a DHCP Server. In this DHCP session, the DHCP Client can
get many offers from different DHCP Servers, but it selects only one of them. This is generally the first
sender.
This message means that:
“This DHCP Server can be my DHCP Server. Other DHCP Servers! Shut up!”
With this DHCP Request Message, the DHCP Client tells all the nodes that it has determined the DHCP Server of
its dreams. So, the other DHCP Servers stop sending DHCP messages. The accepted DHCP Server also gets this
DHCP Request Message.
At this stage, the accepted DHCP Server sends a DHCP Ack Message. In this message the assigned IP address and
other IP information are sent to the DHCP Client. Again, this message can be unicast or broadcast
according to the Broadcast Flag in the DHCP Discover Message and the DHCP Server's capability.
This message means that:
“Here is your IP information.”
You can find the whole process of a successful DHCP IP Address Allocation Operation in the steps below:
This is the scenario if everything goes well. But what if a DHCP Client requests an IP address and the
requested IP address is not available on the DHCP Server? In that case, the DHCP Server sends a DHCP NAK Message.
This message means:
“I am sorry, I can not give you this IP address.”
The other message is the DHCP Release Message. This message is used when the DHCP Client wants to release the
associated IP address.
This message means:
“I would like to divorce from this IP address.”
The DHCP Decline Message is sent from the DHCP Client to the DHCP Server when the IP that is offered is already in use.
This means that:
“I can not use this IP, it is already in use.”
The DHCP Inform Message is sent from the DHCP Client to the Server when it already has an IP address but needs
additional IP information.
This means that:
“I already have an IP address. Give me additional IP information.”
You can find the detailed DHCP Messages and their meanings below. This list will be your guide for DHCP IP
Allocation Operation.
Basically, DHCP IP Allocation Operation is achieved with the messages above.
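As a related example, a basic DHCP server can be configured on a Cisco router roughly as below (a minimal sketch; the pool name, network, gateway and DNS addresses are only illustrative):
Router(config)# ip dhcp excluded-address 192.168.1.1 192.168.1.10
Router(config)# ip dhcp pool LAN-POOL
Router(dhcp-config)# network 192.168.1.0 255.255.255.0
Router(dhcp-config)# default-router 192.168.1.1
Router(dhcp-config)# dns-server 8.8.8.8
Router(dhcp-config)# lease 7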
Cisco Devices:------------------
The Cisco hierarchical model can help you design, implement, and maintain a scalable, reliable, cost
effective hierarchical internetwork. Cisco defines three layers of hierarchy .
1.access layer
2.distribution layer
3. core layer
Access layer:---
Access layer controls users and workgroup access to internetwork resources. The access layer is
sometimes referred to as desktop layer.
3.Access layer routers have low cost, fewer ports and less processing capability.
Ex- router series—800, 1800, 1900, 2800, 2900, 3800, 4000 (ISR – Integrated Services Routers).
The distribution layer is sometimes referred to as the workgroup layer and is used to make communication
between the core layer and the access layer possible.
The primary function of the distribution layer is to provide routing, filtering, and WAN access and to
determine how packets can access the core, if needed.
The distribution layer must determine the fastest way that network service requests are handled.
Distribution layer is where we want to implement policies for the network because we are allowed a lot
of flexibility in defining network operation here.
1. routing
4. high cost, more port, more processing than access layer routers.
5.router series:-- 1000, 7200,7300, 7500, 7600 series aggregation services routers
Core layer:--------------
The core layer is responsible for transporting large amount of traffic both reliably and quickly.
The only purpose of the network’s core layer is to switch traffic as fast as possible.
Traffic transported across the core is common to a majority of users, but remember that user data is
processed at the distribution layer, which forward the request to the core if needed.
4.Higher cost, more ports and more processing power than distribution layer routers.
5.Router series:----------- 900, 1000, 7200, 7300, 7500, 7600, 9000, 12000 series ASR.
Router interfaces/ports:
1.LAN (e.g. Gigabit Ethernet ports)
2.WAN
Aux port – used to connect to a modem.
Memory component of Cisco routers:-
ROM:----
2.The initial software that runs on a Cisco router is called the bootstrap software and is usually stored in a
ROM chip.
3.ROM stores four components: POST, bootstrap, ROMMON mode and mini IOS.
4.The POST process checks the basic functionality of the router hardware and determines which interfaces are
present.
5.The bootstrap software searches for the IOS on the router; once it has been found, it loads that IOS into RAM.
6.ROMMON mode is a diagnostic tool stored in ROM; it is used to recover passwords and to back up
router data.
7.Mini IOS is called the RXBOOT or boot loader by Cisco. The mini IOS is a small IOS in ROM that can be
used to bring up an interface and load a Cisco IOS from flash memory. It also performs some other
maintenance operations.
FLASH Memory:----
1. Flash memory located on processor board. It is used to store the IOS (internetwork operating system).
RAM:----
1. RAM is a very fast memory which loses its information when the system is restarted.
Step—1:
The router performs a power-on self test (POST). The POST tests the hardware to verify that all
components of the device are present and operational (it checks the interfaces on the router).
Step 2:---
Step 4:-
Check NVRAM (which stores the old configuration files) for the startup-config file.
(9600 bits/sec means that the speed on which CLI mode run)
Working mode in router:---
User mode is the first mode a user has access to after logging into the router. This mode allows the user to
execute only some basic commands (showing the system status); the router cannot be configured or restarted
from this mode.
Privileged mode allows user to view the system configuration, restart the system, and enter router
configuration mode
Global configuration mode allows users to modify the running system configuration.
If your router or access server does not find a valid system image to load, the system will enter read-
only memory (ROM) monitor mode. This mode is also used to recover a forgotten password, so this mode
is also known as recovery mode.
To go to this mode, type reload on the router, confirm with the Enter key, and then during the loading
process press the Break key (Ctrl+Break in most terminal emulators).
3. CDP periodically sends its own information to directly connected devices with the help of a
multicast address. CDP is configured (enabled) by default on Cisco devices.
5. The media types which are supported for Cisco Discovery Protocol (CDP) are Ethernet, Token
Ring, FDDI (FIBER DATA DISTRIBUTED INTERFACE), PPP, HDLC (high-level data link control), ATM,
and Frame Relay.
6.CDP is enabled by default on each interface.
But you can manually set the CDP timer in the range of <5-254>
CDP hold time: ------ CDP hold time defines the amount of time that the device will hold information
received from neighbor devices (the length of time that the receiver must keep this packet).
by default CDP hold time = 180 sec
But you can manually set the CDP hold time in the range of <10-255>
• Hardware platform
• The interface which generated the Cisco Discovery Protocol (CDP) message.
CDP commands:----
#int f0/1
#no cdp enable (to disable CDP on a particular port)
Note:---- when you disable CDP on a particular port, then after the
hold time it will delete the whole neighbor information.
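Some other commonly used CDP commands, sketched below (the timer values are only illustrative):
#show cdp neighbors (brief list of directly connected Cisco devices)
#show cdp neighbors detail (includes IP address, IOS version and platform)
#cdp timer 10 (global: change the CDP advertisement interval)
#cdp holdtime 60 (global: change the CDP hold time)
#no cdp run (global: disable CDP on the whole device)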
4. This protocol can advertise details such as configuration information device capabilities and
device identity.
#port description
#System name
#system description
# System capabilities
# Management address
LLDP has the following configuration guidelines and limitations…
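A minimal sketch of enabling and verifying LLDP (LLDP is disabled by default on most Cisco devices):
Router(config)# lldp run
Router# show lldp neighbors
Router# show lldp neighbors detail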
Router(config-if)# shutdown
Router(config)# line vty 0 4 (By default, there are 5 vtys defined (0-4), therefore 5 terminal sessions
are possible.)
Router(config-line)# password <password-string>
Router(config-line)# login
Here, firstly we enter the line vty mode and then set the password string with the password
keyword. After that, we enter the login command to activate it.
Configuring SSH
SSH is generally used to access a router remotely, because it is more secure than telnet.
Here, we are giving a brief SSH configuration in this basic router security configuration.
Router(config)# crypto key generate rsa modulus 1024 (key length; 1024 gives
higher security)
Router(config)# ip ssh time-out 120 (the time interval that the router waits for the SSH client to
respond)
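A fuller sketch of a typical SSH setup (the hostname, domain name and user credentials are only illustrative; the RSA key cannot be generated until a hostname and domain name are set):
Router(config)# hostname R1
R1(config)# ip domain-name example.local
R1(config)# crypto key generate rsa modulus 1024
R1(config)# username admin secret cisco123
R1(config)# ip ssh version 2
R1(config)# line vty 0 4
R1(config-line)# transport input ssh
R1(config-line)# login local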
The function of Switching is to switch data packets between devices on the same network (or
same LAN - Local Area Network). The function of Routing is to route packets between different
networks (between different LANs - Local Area Networks).
Routed protocol: ------
3. A Routed Protocol is an integral part of a network protocol suite and it is available on every
device which is participating in network communication (for example, routers, switches,
computers etc.).
4. Routed Protocols are used between routers to carry user
traffic.
Routing protocol:----
1. A Routing Protocol learns routes (paths) for a Routed Protocol. IP (Internet Protocol),
IPX (Internetwork Packet Exchange) and AppleTalk are examples of Routed Protocols.
2. Routing Protocols are network protocols used to dynamically advertise and learn the
networks connected, and to learn the routes (network paths) which are available.
3. Routing protocols running in different routers exchange updates between each other and learn the
most efficient routes to a destination. Routing Protocols have the capacity to learn about a network
when a new network is added and to detect when a network is unavailable.
4. Routing Protocols normally run only in routers, layer 3 switches, end devices such as firewalls, or
network servers with network operating systems. Routing Protocols are not available in a
normal computer or a printer.
Routing is the forwarding of a packet from one network to another network by choosing the best path from the routing
table.
1. Static routing
2. Default routing
3. Dynamic routing
Routing
4. Static routing is used for small organizations with a network of 10-15 routers. (For a basic configuration example, see the sketch below.)
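A minimal sketch of static and default routes (the destination network and next-hop addresses are only illustrative):
Router(config)# ip route 10.0.0.0 255.0.0.0 192.168.1.2 (static route: destination network, mask, next hop)
Router(config)# ip route 0.0.0.0 0.0.0.0 192.168.1.2 (default route: used when no other route matches)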
Disadvantage:--
1. Dynamic routing is a networking technique that provides optimal data routing. Unlike static
routing, dynamic routing enables routers to select paths according to real-time logical network
layout changes.
2. In dynamic routing, the routing protocol defines the set of rules used by the router when it
communicates routing information with neighboring routers. The routing protocol operating
on the router is responsible for the creation, maintenance and updating of the dynamic routing
table.
Advantage: ----
1. Work with advertisements of directly connected networks. (Routers automatically learn the
advertisements)
6. Neighbor routers exchange routing information and build the routing table automatically.
IGP (Interior Gateway Protocol): used within a single autonomous system; all routers route within the
same autonomous boundary.
EGP (Exterior Gateway Protocol): used between different autonomous systems; routers in different
autonomous systems need an EGP.
1.distance vector:--- the distance-vector protocols in use today find the best path to a remote
network by judging distance.
Ex-RIP
2. Link state:------- a link state protocol is also called shortest path first (SPF). In a link state
protocol, each router creates three separate tables. One of these tables keeps track of directly
attached neighbors, one determines the topology of the entire internetwork, and one is used as
the routing table. Link state routing tables are not exchanged periodically. Instead, triggered
(incremental) updates containing only specific link-state information are sent.
Ex- OSPF
3.Advanced distance vector:----- advanced distance-vector protocols use aspects of both distance-
vector and link-state protocols.
Ex- EIGRP
1. Administrative Distance (AD) is a value that routers use in order to select the best path when
there are two or more different routes to the same destination from two different routing
protocols.
3. Administrative Distance (AD) is a numeric value which can range from 0 to 255.
4. A smaller Administrative Distance (AD) is more trusted by a router, therefore the best
Administrative Distance (AD) being 0 and the worst, 255.
5. If router learn same routes from different source (RIP, OSPF, EIGRP) then less AD value is
more trusted
0 or 1 Static Route
115 IS-IS
1. A unique number identifying the routing domain (one organization) of the routers.
4. An Interior Gateway Protocol (IGP) refers to a routing protocol that handles routing within a
single autonomous system. IGPs include RIP, IGRP, EIGRP, and OSPF.
5. An Exterior Gateway Protocol (EGP) handles routing between different Autonomous Systems
(AS). Border Gateway Protocol (BGP) is an EGP. BGP is used to route traffic across the Internet
backbone between different Autonomous Systems.
6. The Autonomous System Number (ASN) value 0 is reserved, and the largest ASN value,
65,535, is also reserved.
7. The values from 1 to 65,534 are available for use in Internet routing,
EX-----
In above diagram
1. Both companies get their internet connection from the same ISP, but the ASN of each company is different.
3. The traffic of TCS.COM will not flow into HP.COM due to the different ASNs.
4. If you want to allow traffic flow between these two companies then you will have to
configure an EGP (BGP).
6. Neighbor routers exchange routing information and build the routing table automatically.
7.
3. Hybrid protocol.
Distance vector: works with the Bellman-Ford algorithm; full routing tables are exchanged; classful
routing protocol (classful routing protocols do not carry the subnet mask information along with
updates); updates are sent through broadcast.
Link state: works with the Dijkstra algorithm; only missing/changed routes are exchanged; classless
routing protocol (classless routing protocols carry the subnet mask information along with updates);
updates are sent through multicast.
Hybrid: works with the DUAL algorithm (diffusing update algorithm); only missing/changed routes are
exchanged; classless routing protocol; updates are sent through multicast.
Version 1:=--
6. Metric = hop count (metric is a measurement of unit, which is used in deciding best route.)
10. In this protocol each router sends its full routing table every time, instead of just sending new or
changed routing information to neighboring routers.
RIP timers:-
Update timer (30 sec): by default each router sends its routing information (updates) to
neighboring routers every 30 seconds.
Invalid timer (180 sec): the time a router waits to hear updates for a route. (Suppose a link between
routers fails: after the last update, the invalid timer runs for a further 150 seconds, so 30 + 150 = 180
seconds after the last good update the route is marked invalid.)
The route is marked unreachable if there are no updates during this interval.
Flush timer (240 sec): the time before invalid routes are purged from the routing table.
After a further 60 seconds (beyond the invalid timer) the router flushes the route from its routing table.
Hold-down timer (180 sec): suppose a router has 2 paths to reach a destination. For 180 seconds it holds
down updates from neighboring routers about the failed route, keeps the best-path route in its own
routing table and ignores the other updates.
RIP Version 2:-------------
1. It supports VLSM.
3. It supports authentication.
4. Trigger updates
1. Easy to configure.
2. No complexity.
3. Less overhead
Disadvantage:--
4. Slow convergence.
RIP v1 RIP v2
Uses broadcast (255.255.255.255) to send the Uses multicast (224.0.0.9) to send updates
updates
#router rip
#ver 2
#network 10.0.0.0
#no auto-summary
Show commands:---
#sh ip int br
#sh ip protocols
#distance 50
10. OSPF introduces the concept of Areas to ease management and to control traffic.
The OSPF router identity (Router ID) is a 32-bit number written in dotted decimal, just like an IP address.
1. The highest IP address of the active physical interface of the router is Router ID.
2. If logical interface (loopback interface) is configured, the highest IP address of the logical
interface is router ID.
The reference bandwidth was defined as an arbitrary value in the OSPF documentation (RFC 2328).
Vendors need to use their own reference bandwidth. Cisco uses 100 Mbps (10^8 bps) as the
reference bandwidth. With this reference bandwidth, the equation is:
Cost = Reference bandwidth / Interface bandwidth = 100,000,000 / interface bandwidth in bps
OSPF uses this logic to calculate the cost. Cost is inversely proportional to bandwidth: higher
bandwidth has a lower cost, and lower bandwidth has a higher cost.
Key points
Now we know the equation, let’s do some math and figure out the default cost of some
essential interfaces.
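As a worked example of the formula above, some default costs come out as follows: FastEthernet
(100 Mbps) = 100,000,000 / 100,000,000 = 1; Ethernet (10 Mbps) = 100,000,000 / 10,000,000 = 10;
T1 (1.544 Mbps) = 64; 64 kbps serial = 1562. (OSPF rounds down and never uses a cost lower than 1,
so Gigabit and faster interfaces also get cost 1 unless the reference bandwidth is changed.)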
Best route for routing table = Route which has the lowest cumulative cost
Summary
• OSPF uses SPT tree to calculate the best route for routing table.
• A SPT tree cannot grow beyond the area. So if a router has interfaces in multiple areas, it needs
to build separate tree for each area.
• SPF algorithm calculates all possible routes from source router to destination network.
• Cumulative cost is the sum of the all costs of the outgoing OSPF interfaces in the path.
• While calculating cumulative cost, OSPF consider only outgoing interfaces in path. It does not
add the cost of incoming interfaces in cumulative cost.
• If multiple routes exist, SPF compares the cumulative costs. Route which has the lowest
cumulative cost will be chosen for routing table.
Now we have a basic understanding of SPF algorithm. In remaining part this tutorial we will
learn how SPF algorithm selects the best route from available routes.
Create a practice lab as illustrated in following figure or download this pre-created practice lab
and load in practice tracer.
Download OSPF Practice Topology with OSPF configuration
Run show ip route ospf command from privilege mode to view all learned routes through the
OSPF protocol.
As output shows, Router0 has six routes from OSPF in routing table. We will go through the
each route and find out why it was chosen as the best route for routing table by OSPF.
Route 20.0.0.0
We have three routes to get 20.0.0.0/8 network. Let’s calculate the cumulative cost of each
route.
Via route R0 – R3 – R4 – R6
Via route R0 – R5 – R6
# LSAs are held in the memory of the router, and that database is known as the LSDB (Link State
Database).
# Each LSA has its own LSA sequence number, by which we can differentiate one router's LSA from
another router's LSA.
# Router LSA
# Network LSA
# Each router has its own LSDB, which contains link state information of all routers in the network.
# The LSDB contains information about all the possible routes to the networks within the area.
Routing Table:---
## show ip route
Note ---- in OSPF all routers within an area must have the same database.
OSPF Area’s:--------------------------
#
# OSPF area minimizes size of database.
# Restrict any changes within that area. (Not flood outside area).
# Router within an area must maintain a topological database for the area to which it belongs.
# An OSPF network must contain a backbone area (area 0); any other area is connected to that
backbone area, and such a connected area is known as a non-backbone area.
# The backbone area must not be partitioned or divided into smaller pieces under any failure
conditions, such as link or router down events.
Backbone router: a router that has at least one interface in the backbone area (area 0).
ABR (Area Border Router): a router whose interfaces lie in two (or more) areas.
ASBR (Autonomous System Boundary Router): the type of router which connects our area's
autonomous system routers to another area's autonomous system routers.
1. Router ID
2. Area ID
5. Router priority
7. RID backup DR
# If any new router will be added in the OSPF network then all routers require to reconfigure.
Each OSPF router selects a router ID (RID) that has to be unique on your network. OSPF stores
the topology of the network in its LSDB (Link State Database) and each router is identified with
its unique router ID; if you have duplicate router IDs then you will run into reachability issues.
Because of this, two OSPF routers with the same router ID will not become neighbors, but you
could still have duplicate router IDs in the network on routers that are not directly
connected to each other.
R1(config)#router ospf 1
R1(config-router)#exit
It selected 11.11.11.11 which is the highest IP address on our loopback interfaces. Let’s get rid
of the loopbacks now:
It’s still the same, this is because the router ID selection is only done once. You have to reset
the OSPF process before it will select another one:
There we go, the router ID is now the highest IP address of our physical interfaces. If you want,
we can manually set the router ID. This will overrule everything (a sketch follows):
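A minimal sketch of setting the router ID manually and resetting the OSPF process so it takes effect (the process number and ID value are only illustrative):
R1(config)#router ospf 1
R1(config-router)#router-id 1.1.1.1
R1#clear ip ospf process
Reset ALL OSPF processes? [no]: yes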
#A priority value of 0 means that the router does not participate in the election; that type of
router can never become DR or BDR.
Most CCNA students think that this DR/BDR election is done per area but this is incorrect. I’ll
show you how the election is done and how you can influence it. This is the topology we’ll use:
Here’s an example of a network with 3 OSPF routers on a FastEthernet network. They are
connected to the same switch (multi-access network), so there will be a DR/BDR election. OSPF
has been configured so all routers have become OSPF neighbors; let’s take a look:
When a router is not the DR or BDR it’s called a DROTHER.
Of course we can change which router becomes the DR/BDR by playing with the priority. Let’s
turn R1 into the DR:
You can change the priority if you like by using the ip ospf priority command (a sketch follows the list below):
• The default priority is 1.
• A priority of 0 means you will never be elected as DR or BDR.
• You need to use clear ip ospf process before this change takes effect.
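A minimal sketch of raising the priority on R1's interface toward the segment (the interface name is only illustrative; the value 200 matches the example that follows):
R1(config)#interface FastEthernet 0/0
R1(config-if)#ip ospf priority 200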
As you can see R3 is still the DR, we need to reset the OSPF neighbor adjacencies so that we’ll
elect the new DR and BDR.
Now you can see R1 is the DR because the other routers are DROTHER and BDR.
Or we can confirm it from R3, you’ll see that R1 is the DR and that the priority is 200.
• Configurations
• R1
• R2
• R3
Want to take a look for yourself? Here you will find the configuration of each device.
Something you need to be aware of is that the DR/BDR election is per multi-access
segment…not per area! Let me give you an example:
In the example above we have 2 multi-access segments. Between R2 and R1, and between
R2 and R3. For each segment there will be a DR/BDR election.
# Once connection established between routers then OSPF router send hello message to each
others.
3. Hello interval
4. Dead interval
OSPF Timers
OSPF uses some default timers. The common of this timers are hello timer and dead
timer. And these timer values can be changed for the OSPF network types.
For broadcast networks, hello timer is 10 seconds and the dead timer is 40 seconds. But
in nonbroadcast networks these are changed as 30 and 120.
Hello interval: ---this defines how often we send the hello packet.
Dead interval:---this defines how long we should wait for hello packets before we declare the
neighbor dead.
The hello and dead interval values can be different depending on the OSPF network type. On Ethernet
interfaces you will see a 10 second hello interval and a 40 second dead interval.
Configuration
Let’s enable OSPF:
R1 & R2#
(config)#router ospf 1
(config-router)#network 192.168.12.0 0.0.0.255 area 0
The hello and dead interval can be different for each interface. Above you can see that
the hello interval is 10 seconds and the dead interval is 40 seconds. Let’s try if this is
true:
After shutting the interface on R1 you will see the following message:
R1#
Aug 30 17:57:05.519: %OSPF-5-ADJCHG: Process 1, Nbr 192.168.12.2 on
FastEthernet0/0 from FULL to DOWN, Neighbor Down: Interface down or detached
R1 will know that R2 is unreachable since its interface went down. Now take a look at
R2:
R2#
Aug 30 17:57:40.863: %OSPF-5-ADJCHG: Process 1, Nbr 192.168.12.1 on
FastEthernet0/0 from FULL to DOWN, Neighbor Down: Dead timer expired
R2 is telling us that the dead timer has expired. This took a bit longer. The interface on
R1 went down at 17:57:05 and R2’s dead timer expired at 17:57:40…that’s close to 40
seconds.
40 seconds is a long time…R2 will keep sending traffic to R1 while the dead interval is
expiring. To speed up this process we can play with the timers. Here’s an example:
R1 & R2
(config)#interface FastEthernet 0/0
(config-if)#ip ospf hello-interval 1
(config-if)#ip ospf dead-interval 3
You can use these two commands to change the hello and dead interval. We’ll send a
hello packet every second and the dead interval is 3 seconds. Let’s verify this:
Reducing the dead interval from 40 to 3 seconds is a big improvement but we can do
even better:
IP OSPF transmit-delay:-
To set the estimated time required to send an Open Shortest Path First (OSPF) link-state update
packet on the interface, use the ip ospf transmit-delay command. To return to the default, use
the no form of this command.
ip ospf transmit-delay seconds
no ip ospf transmit-delay
Syntax Description
seconds Time (in seconds) required to send a link-state update. The range is
from 1 to 450 seconds, and the default is 1.
Command Default
1 second
Command Modes
Interface configuration mode
Command History
Release Modification
5.0(3)N1(1) This command was introduced.
Usage Guidelines
Use the ip ospf transmit-delay command to set the estimated time needed to send an LSA
update packet. OSPF increments the LSA age time by the transmit delay amount before
transmitting the LSA update. You should take into account the transmission and propagation
delays for the interface when you set this value.
Examples
This example shows how to set the transmit delay value to 8 seconds:
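A sketch of the command (the interface name is only illustrative):
Router(config)# interface FastEthernet 0/0
Router(config-if)# ip ospf transmit-delay 8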
Related Commands
Command Description
copy running-config Saves the configuration changes to the startup
startup-config configuration file.
ip ospf retransmit-interval Sets the estimated time between LSAs
transmitted from this interface.
show ip ospf Displays OSPF information.
ip ospf retransmit-interval
To specify the time between Open Shortest Path First (OSPF) link-state advertisement (LSA) retransmissions for
adjacencies that belong to the interface, use the ip ospf retransmit-interval command. To return to the default, use
the no form of this command.
ip ospf retransmit-interval seconds
no ip ospf retransmit-interval
Syntax Description
seconds Time (in seconds) between retransmissions. The time must be greater
than the expected round-trip delay between any two routers on the
attached network. The range is from 1 to 65535 seconds. The default is 5
seconds.
Command Default
5 seconds
Command Modes
Interface configuration mode
Command History
Release Modification
5.0(3)N1(1) This command was introduced.
Usage Guidelines
Use the ip ospf retransmit-interval command to set the time between LSA retransmissions. When a router sends
an LSA to its neighbor, it keeps the LSA until it receives an acknowledgment message from the neighbor. If the router
receives no acknowledgment within the retransmit interval, the local router resends the LSA.
Examples
This example shows how to set the retransmit interval value to 8 seconds:
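A sketch of the command (the interface name is only illustrative):
Router(config)# interface FastEthernet 0/0
Router(config-if)# ip ospf retransmit-interval 8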
Related Commands
Command Description
copy running-config startup- Saves the configuration changes to the startup
config configuration file.
ip ospf transmit-delay Sets the estimated time to transmit an LSA to a
neighbor.
show ip ospf Displays OSPF information.
OSPF V3:------
# it distributes ipv6 prefixes.
# it runs directly over ipv6.
# OSPF V3 runs over a link, rather than a subnet.
# OSPF messages are sourced using the link-local address of the exit interface.
# OSPFv3 uses FF02::6 as the DR/BDR multicast address and FF02::5 as the all-OSPF-routers multicast
address.
# Each interface must be enabled using the ipv6 ospf process-id area area-id command in interface
configuration mode (see the sketch below).
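A minimal sketch of enabling OSPFv3 on an interface (the process ID, interface and router ID are only illustrative):
Router(config)# ipv6 unicast-routing
Router(config)# ipv6 router ospf 1
Router(config-rtr)# router-id 1.1.1.1
Router(config)# interface GigabitEthernet 0/0
Router(config-if)# ipv6 ospf 1 area 0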
# EIGRP (Enhanced Interior Gateway Routing Protocol) is Cisco’s IGP (Interior Gateway Protocol)
that was made an “open standard” in 2013.
# In these lessons we start with the basics of EIGRP and end with the most advanced topics.
# EIGRP is a classless, distance vector protocol that uses the concept of an autonomous system
to describe a set of contiguous routers that run the same routing protocol and share routing
information.
# EIGRP sometimes referred to as a hybrid routing protocol and advance distance vector
protocol because it has characteristics of both distance vector and link state.
# EIGRP stands for Enhanced Interior Gateway Routing Protocol and is a routing protocol
created by Cisco. Originally, it was only available on Cisco hardware but since a few years, it’s
now an open standard.
# Max hop count is 255 (100 by default)
# Administrative distance is 90
# Flexible network design.
# Multicast and unicast instead of broadcast address.
# 100 % loop-free classless routing.
# Easy configuration for WANs and LANs.
# Updates are through multicast 224.0.0.10
# Convergence rate is fast.
# Supports IP, IPX and AppleTalk protocols.
# It supports equal cost and unequal cost load balancing.
EIGRP tables:--
1. Neighbor table
# show ip eigrp neighbor
2. Topology table:---
#Show ip eigrp topology
3. Routing table.
# show ip route
# EIGRP routers will start sending hello packets to other routers just like OSPF does, if you send
hello packets and you receive them you will become neighbors. EIGRP neighbors will exchange
routing information which will be saved in the topology table. The best path from the topology
table will be copied in the routing table.
Selecting the best path with EIGRP works a bit different than other routing protocols so let’s see
it in action:
We have three routers named R1, R2 and R3. We are going to calculate the best path to the
destination which is behind R3.
EIGRP uses a rich set of metrics namely bandwidth, delay, load and reliability which we will
cover later. These values will be put into a formula and each link will be assigned a metric. The
lower these metrics the better.
In the picture above I have assigned some values to the interfaces. If you look at a real EIGRP router you'll see the numbers are very high and a bit annoying to work with.
R3 will advertise to R2 its metric towards the destination. Basically R3 is saying to R2: “It costs
me 5 to get there”. This is called the advertised distance. R2 has a topology table and in this
topology table it will save this metric, the advertised distance to reach this destination is 5.
We are not done yet since there is something else that R2 will save in its topology table. We
know the advertised distance is 5 since this is what R3 told us. We also know the metric of the
link between R2 and R3 since this is directly connected. R2 now knows the metric for the total
path to the destination, this total path is called the feasible distance and it will be saved in the
topology table.
You have now learned two important concepts of EIGRP: the advertised distance (your neighbor tells you how far it is for him to reach the destination) and the feasible distance (your total distance to get to the destination).
Let’s continue!
We are not done yet since R1 is also running EIGRP. R2 is sending its feasible distance towards
R1 which is 15. R1 will save this information in the topology table as the advertised distance.
R2 is “telling” R1 the distance is 15.
NO auto summary:--- The no auto-summary command is needed because by default EIGRP behaves like a classful routing protocol, which means it won't advertise the subnet mask along with the routing information. In this case that means that 1.1.1.0/24 and 2.2.2.0/24 will be advertised as 1.0.0.0/8 and 2.0.0.0/8. Disabling auto-summary will ensure EIGRP sends the subnet mask along.
Network 1.1.1.0 0.0.0.255 means that I'm advertising the 1.1.1.0 network with wildcard 0.0.0.255. If I don't specify the wildcard you'll find "network 1.0.0.0" in your configuration. Does it matter? Yes and no. The same thing applies to "network 2.2.2.0". It will work, but it also means that every interface that falls within the 1.0.0.0/8 or 2.0.0.0/8 range is going to run EIGRP. Network 192.168.12.0 without a wildcard mask is fine since I'm using a /24 on this interface, which is Class C.
If you are working on a lab and are lazy (like me) you can also type in network 0.0.0.0 which will activate
EIGRP on all of your interfaces…if that’s what you want of course.
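As a sketch, the configuration discussed above would look roughly like this (interface addressing is assumed):
R1(config)#router eigrp 1
R1(config-router)#no auto-summary
R1(config-router)#network 1.1.1.0 0.0.0.255
R1(config-router)#network 2.2.2.0 0.0.0.255
R1(config-router)#network 192.168.12.0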
EIGRP Packet Types
EIGRP (Enhanced Interior Gateway Routing Protocol) uses EIGRP messages to establish and maintain the
EIGRP neighbourship. EIGRP uses five packet types for these messages. EIGRP doesn't send messages with UDP or TCP; instead, a Cisco protocol called Reliable Transport Protocol (RTP) is used for
communication between EIGRP-speaking routers. As the name implies, reliability is a key
feature of this protocol, and it is designed to enable quick delivery of updates and tracking of
data reception.
These packet types are :
–Hello,
–Query,
–Reply,
–Update,
– ACK.
First I will configure EIGRP on both routers, nothing special I just want to make sure we
have a neighbor adjacency:
R1(config)#router eigrp 12
R1(config-router)#network 192.168.12.0
R2(config)#router eigrp 12
R2(config-router)#network 192.168.12.0
Now I will increase the hold time so it doesn’t drop the neighbor adjacency so quickly. I’ll
set it to 1 hour:
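A sketch of the command used here (the hold time is set per interface and per AS; the interface name is an assumption):
R2(config)#interface FastEthernet 0/0
R2(config-if)#ip hold-time eigrp 12 3600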
When we take a look at R1 you’ll see that it uses 3600 seconds as the hold time for R2:
We have 3597 seconds and counting…now I will set the hello timer to a high value, so that hello packets alone won't keep refreshing the hold time and we can find out if other EIGRP packets will renew it. I'll set it to 5 minutes:
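Again a sketch, on the same interface (300 seconds = 5 minutes):
R2(config-if)#ip hello-interval eigrp 12 300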
Now I will create a new loopback interface on R2 and advertise it in EIGRP. This will
cause some traffic between R1 and R2. Before I do this, let’s take a quick look at the
current state of the hold time again:
Right now we are down to 3504 seconds…let’s advertise that loopback interface in
EIGRP:
Route Summarization
Summary routes can also be produced manually. You can do this with the "ip summary-address" command.
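For example, a sketch of a manual summary on an outgoing interface (the AS number 12 and the summary range are assumptions):
R1(config)#interface FastEthernet 0/0
R1(config-if)#ip summary-address eigrp 12 192.168.0.0 255.255.0.0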
If you want to prevent one interface or all interfaces from exchanging EIGRP update and EIGRP hello packets, you can use the "passive-interface" command. By using this command, no update or hello packets are sent on that interface anymore.
For example, with this command you can make fa0/1 passive, as shown below.
You can also make all interfaces passive and then re-enable only the ones you want. Below, we first make all interfaces passive and then allow fa0/5 and fa0/6.
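A sketch of both variants (AS number 12 is assumed from the earlier examples):
R1(config)#router eigrp 12
R1(config-router)#passive-interface FastEthernet0/1
! or make everything passive first and then open selected interfaces:
R1(config-router)#passive-interface default
R1(config-router)#no passive-interface FastEthernet0/5
R1(config-router)#no passive-interface FastEthernet0/6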
Unicast Neighbours
In EIGRP, the multicast address 224.0.0.10 is used. If you want, you can change this to a specific neighbour address with the "neighbor" command, so that updates are sent as unicast. You can use this command like below:
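A sketch (the neighbour address and outgoing interface are assumptions; on Cisco IOS the command also requires the interface):
R1(config)#router eigrp 12
R1(config-router)#neighbor 192.168.12.2 FastEthernet0/0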
EIGRP Stub
An EIGRP stub router is a router that is not used as a transit router and typically has only one EIGRP neighbourship. EIGRP stub reduces the number of queries.
After the stub command, you can optionally use some parameters. These parameters are receive-only, connected, static, summary and redistributed.
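A sketch of a typical stub configuration (advertising only connected and summary routes):
R1(config)#router eigrp 12
R1(config-router)#eigrp stub connected summary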
Load Balancing is the equalization or adjustment of traffic across the interfaces towards the destinations. There are two types of Load Balancing. These are:
•Equal- Cost Load Balancing
• Unequal-Cost Load Balancing
Equal-Cost Load Balancing is done when there are multiple equal-cost paths to a destination. By default EIGRP automatically uses Equal-Cost Load Balancing and supports up to 4 equal-cost routes.
EIGRP Equal Cost Load Balancing
Unequal-Cost Load Balancing is done when the paths to the destination have different costs or speeds. In EIGRP, Unequal-Cost Load Balancing is done with the help of the "variance" command.
Router(config-router) # variance 2
This tutorial explains EIGRP Composite Metric calculation formula and components (K1–
Bandwidth, K2-Load, K3-Delay, K4-Reliability and K5-MTU) step by step in detail with examples.
Learn how EIGRP Composite Metric Formula calculate and select the shortest path to reach the
destination.
Enhanced Interior Gateway Routing Protocol uses composite metric calculation formula to
calculate and select the best available route for destination. This formula uses five metric
components. These are Bandwidth, Load, Delay, Reliability and MTU. In order to understand how
EIGRP composite metric calculation works, first we have to understand these five key metric
components.
K-Values are the most confusing part of EIGRP. Newbies usually take K-Values to be the EIGRP metric components. K-Values are not the metric components themselves; they are only placeholders or influencers for the actual metric components in the metric calculation formula. So when we enable or disable a K-value, we actually enable or disable its associated metric component.
EIGRP uses four components out of the five to calculate the routing metric.
Bandwidth (K1)
Bandwidth is a static value. It changes only when we make a physical (layer 1) change in the route, such as changing a cable or upgrading the link type. EIGRP picks the lowest bandwidth from all outgoing interfaces of the route to the destination network.
Among the bandwidths of these outgoing interfaces, EIGRP will pick the lowest (56Kbps in the example) for the composite metric calculation formula. You may wonder why it picks the lowest instead of the highest. Picking the highest bandwidth doesn't give us any assurance of equivalent bandwidth throughout the route; it is only a maximum cap, which means we will get that bandwidth or less on this route. Picking the lowest bandwidth, on the other hand, guarantees equivalent or higher bandwidth throughout the route, since the lowest bandwidth is the bottleneck of the route.
EIGRP first looks at the bandwidth command. If the bandwidth is set through this command, EIGRP will use it. If the bandwidth is not set, it will use the interface's default bandwidth.
When we enable an interface, the router automatically assigns a bandwidth value to it based on its type. For example, a serial interface has a default bandwidth of 1544Kbps. Until we change this value with the bandwidth command, it will be used wherever it is required.
Let me clear up one more thing about bandwidth. Changing the default bandwidth with the bandwidth command does not change the actual bandwidth of the interface. Neither the default bandwidth nor the bandwidth set by the bandwidth command has anything to do with the actual layer one link bandwidth.
This command is only used to influence routing protocols which use bandwidth in the route selection process, such as EIGRP and OSPF.
Suppose we have two routes to a single destination: Route1 and Route2. For some reason we want to take Route1 instead of Route2. How will we influence the default metric calculation to select Route1?
At the start of this article we talked about K-Values. K-Values allow us to influence the metric calculation. K1 is associated with bandwidth. K1 gets its weight from the interface's default bandwidth or from the bandwidth set through the bandwidth command. Changing the default bandwidth with the bandwidth command will change K1's value in the metric calculation formula.
So to take Route1, we will have to make its lowest bandwidth higher than Route2's. This can be done in two ways: either raise the lowest bandwidth of Route1 above that of Route2, or reduce the lowest bandwidth of Route2 below that of Route1. Both can be done easily with the bandwidth command.
Let's understand this with a simple example. The following figure illustrates a simple EIGRP network. EIGRP is configured on all routers and all links have the default bandwidth.
A serial link has a default bandwidth of 1544Kbps. Until we change the bandwidth of any route, both routes have the same lowest bandwidth.
Both routes are load balanced with equal cost value 2684416.
Ok, let's change the default bandwidth to see how the bandwidth component influences the route metric.
Set the bandwidth to 64Kbps (lower than the default 1544Kbps) on R3's serial 0/0/0 interface:
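A sketch of the command (the value is in Kbps):
R3(config)#interface Serial 0/0/0
R3(config-if)#bandwidth 64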
Ok, let's change the bandwidth at R3 again, this time increasing it to 2800Kbps:
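Again on the same interface (value in Kbps):
R3(config-if)#bandwidth 2800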
Why does EIGRP still load balance between Route1 and Route2 when Route2 now has better bandwidth?
Because EIGRP uses the lowest bandwidth of the route to calculate the path cost, and that is still 1544Kbps.
Load (K2)
Load is a dynamic value that changes frequently. It is based on the packet rate and the bandwidth of the interface. It measures the volume of traffic passing through the interface in comparison with the interface's maximum capacity. It is expressed on a scale of 255, where 1 represents a nearly empty interface and 255 represents a fully utilized interface.
Since data flows in both directions, the router maintains two separate counters: Txload for outgoing traffic and Rxload for incoming traffic.
If K2 is enabled, the maximum Txload value will be used in the composite metric calculation formula.
Delay (K3)
Delay reflects the time taken by a packet to cross the interface. It is measured in microseconds. Like bandwidth, Cisco has implicit delay values for all interfaces based on the type of interface hardware. For example, a FastEthernet interface has a default delay of 100 microseconds. Since it is a static value, we can override it with the delay command.
The default delay value, or the value set by the delay command, has nothing to do with the actual delay caused by the interface. Just like bandwidth, this value is also only an influencer.
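A sketch of overriding the default delay (note that on Cisco IOS the delay command takes its value in tens of microseconds, so 200 here means 2000 microseconds; the interface is an example):
R1(config)#interface FastEthernet 0/0
R1(config-if)#delay 200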
Total delay = delay received from neighboring router + its own interface delay
EIGRP is an enhanced distance vector routing protocol. It also uses route poisoning, route withdrawal, split horizon and poisoned reverse for a loop-free, optimized network. For all of these techniques EIGRP uses the maximum delay as the indication of an unreachable route. To denote an unreachable route, EIGRP uses a delay of 16,777,215 tens of microseconds.
Reliability (K4)
Just like load, reliability is also a dynamic value. It compares all successfully received frames against all received frames. 100% reliability indicates that all the frames we received were good and that there is no issue with the physical link. If there is an issue with the physical link, this value will decrease.
Reliability is expressed as a fraction of 255: 255 represents 100% reliability while 0 represents 0% reliability. If K4 is enabled in the metric calculation formula, the minimum reliability along the route will be used.
MTU (K5)
MTU stands for maximum transmission unit. It is advertised with the routing update but it does not actively participate in the metric calculation. EIGRP allows us to load balance between equal-cost paths (32 maximum, 4 by default). MTU is used when the number of equal-cost paths for the same destination exceeds the number of allowed paths set with the maximum-paths command. For example, if we set the maximum allowed paths for load balancing to 5 and the metric calculation produces 6 equal-cost paths for a single destination, the path with the lowest MTU will be ignored.
At first glance this formula looks like a complicated equation, but it is not as difficult as it sounds. Let's make it easier.
As we know, MTU (K5) does not actively participate in the formula, so we set its value to zero. When K5 is equal to 0, the term [K5/(K4 + reliability)] is defined to be 1.
By default EIGRP does not use dynamic values in the metric. This disables two more components: load (K2) and reliability (K4).
Cumulative delay = sum of all outgoing interfaces' delay along the route (use the interface default delay if it is not set through the delay command).
Putting these configuration values in will make the formula look like this:
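With K5 = 0, K2 = 0 and K4 = 0 the formula reduces to the form used in the calculations below:
Metric = ((10,000,000 / lowest bandwidth in Kbps) + (cumulative delay in tens of microseconds)) * 256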
Before we move further, let me explain why EIGRP keeps dynamic values disabled by default. Dynamic values change over time. Enabling dynamic values would force EIGRP routers to recalculate the metric all the time and send updates to each other just because the load or reliability of an interface changed. This would create serious performance issues. To avoid such a situation, EIGRP only enables static values for metric calculation.
If we only enable static values, EIGRP will not recalculate the metric unless something changes. Static values change only when a physical change occurs in the network, such as an interface going down or a router failing. This keeps EIGRP nice and clean.
Let's see this formula in action. Earlier in this tutorial we used an example topology to explain the bandwidth component. Load that topology in Packet Tracer and run the show ip route eigrp command from privileged mode. We have four routes for three destination networks. One destination network has two routes.
30.0.0.0/8
For this destination network the metric cost is 2681856. Before we learn how this cost was calculated, we need to understand some key points associated with the formula.
There are only serial links between source and destination, so our first step is to find out the value of bandwidth and delay.
We have two outgoing interfaces between source and destination. Both have a default delay of 20000 microseconds, so the total delay is 40000 microseconds. This delay is in microseconds while the formula uses the unit of "tens of microseconds", so we need to divide 40000 by 10. Our cumulative delay is therefore 40000/10 = 4000.
Okay, now we have the lowest bandwidth (1544Kbps) and the cumulative delay (4000). Let's put them in the formula:
Metric = ((10000000/1544) + 4000) * 256
Any decimal value is rounded down to the nearest integer before the rest of the formula is worked out, so 10000000/1544 = 6476.68 becomes 6476.
Metric = (6476 + 4000) * 256
Metric = 10476 * 256
Metric = 2681856
Great! We have revealed the cost calculation method. Let’s do this calculation again for next
route.
40.0.0.0/8
For this route we have the lowest bandwidth of 1544Kbps and a cumulative delay of 4000 (tens of microseconds).
Let’s put these values in our formula
Metric = ((10000000/1544)+4000)*256
Metric = 2681856
Fine, now we have only one route left. Let's figure out its cost as well.
50.0.0.0/8
For this destination we have two routes. Both routes have the same lowest bandwidth and cumulative delay, so naturally their cost will also be the same. As we know, EIGRP automatically load balances over equal-cost routes, and since these routes have equal cost they both make their way into the routing table.
Metric = ((10000000/1544) + 4010) * 256
Metric = 2684416
This is how EIGRP calculates the route cost. In real life you will rarely need to calculate the route cost manually. Still, I suggest you spend a little extra time learning this process, especially if you are a Cisco exam candidate.
That’s all for this part. In next part I will explain how to configure EIGRP step by step with
examples.
I’m using two routers with a loopback interface each and EIGRP has been configured.
The routers have become EIGRP neighbors as we can see here:
In the output above I’m looking at the EIGRP neighbor table of R1. As you can see we
have one neighbor (192.168.12.2) which happens to be R2 on interface FastEthernet
0/0. What else do we find here?
• H (Handle): Here you will find the order when the neighbor adjacency was established. Your first neighbor will have
a value of 0, the second neighbor a value of 1 and so on.
• Hold (sec): this is the holddown timer per EIGRP neighbor. Once this timer expires we will drop the neighbor adjacency. The default holddown timer is 15 seconds. On older IOS versions only a hello packet would reset the holddown timer, but on newer IOS versions any EIGRP packet after the first hello will reset it.
• Uptime: How long the neighbor has been up.
• SRTT (Smooth round-trip time): The number of milliseconds it takes to send an EIGRP packet to your neighbor and
receive an acknowledgment packet back.
• RTO (Retransmission timeout): The amount of time in milliseconds that EIGRP will wait before retransmitting a
packet from the retransmission queue to this neighbor.
• Q Cnt (Q count): The number of EIGRP packets (Update, Query or Reply) in the queue that are awaiting
transmission. Ideally you want this number to be 0 otherwise it might be an indication of congestion on the
network.
• Seq Num (Sequence number): This shows the sequence number of the last update, query or reply packet that you received from your EIGRP neighbor.
Excellent so that’s how EIGRP stores neighbor information! Our next stop is of course
to take a look at the EIGRP Topology table:
R1#show ip eigrp topology
IP-EIGRP Topology Table for AS(1)/ID(1.1.1.1)
Codes: P - Passive, A - Active, U - Update, Q - Query, R - Reply,
r - reply Status, s - sia Status
Now that’s a lot of information to look at! Let me break it down for you in chunks:
If you look at the red fonts you can see that we are looking at the EIGRP topology table
for AS (Autonomous System) number 1. Keep in mind that the AS number has to match on
EIGRP routers in order to become neighbors!
R1#show ip eigrp topology
IP-EIGRP Topology Table for AS(1)/ID(1.1.1.1)
Look at those codes…Update, Query and Reply should ring a bell since I discussed
them a few pages ago. Let’s focus on those codes that I didn’t explain before…
Redistribution between different protocol:-------------
Redistribution is the term used for injecting routing updates between routers when two or more routing protocols are enabled in a network, or when interconnecting two different networks that run different routing protocols.
Router 1 EIGRP1
R1#configure terminal
R1(config)#router eigrp 1
R1(config-router)#network 10.0.0.0
R1(config-router)#network 60.0.0.0
R1(config-router)#no auto-summary
R1(config-router)#exit
R1(config)#
R1#
R1#
Router 2 EIGRP1
R2#configure terminal
R2(config)#router eigrp 1
R2(config-router)#network 20.0.0.0
R2(config-router)#network 60.0.0.0
R2(config-router)#network 70.0.0.0
R2(config-router)#no auto-summary
R2(config-router)#exit
R2(config)#
R2#
R2#
Router 2 OSPF1
R2#configure terminal
R2(config)#router ospf 1
R2(config-router)#exit
R2(config)#
R2#
Router 3 OSPF1
R3#configure terminal
R3(config)#router ospf 1
R3(config-router)#exit
R3(config)#
R3#
R3#
Router 4 OSPF1
R4#configure terminal
R4(config)#router ospf 1
R4(config-router)#exit
R4(config)#
R4#
R4#
Router 5 OSPF1
R5#configure terminal
R5(config)#router ospf 1
R5(config)#
R5#
R5#
Basic configurations are completed.
Router 1
R1#show ip route
Router 2
R2#show ip route
Router 3
R3#show ip route
Router 4
R4#show ip route
Router 5
R5#show ip route
To redistribute the OSPF database into the EIGRP routing table, follow this configuration command syntax:
#redistribute ospf [process-id] metric [bandwidth delay reliability load MTU]
R2#configure terminal
R2(config)#router eigrp 1
R2(config-router)#redistribute ospf 1 ?
R2(config-router)#exit
R2#
R2#
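A sketch of the full command with seed metric values (the five metric values shown here are common example values, not taken from the original lab):
R2(config)#router eigrp 1
R2(config-router)#redistribute ospf 1 metric 10000 100 255 1 1500
R2(config-router)#exit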
The syntax for injecting the EIGRP database into the OSPF routing table is #redistribute eigrp [AS-number] metric [metric-value] subnets
R2#configure terminal
R2(config)#router ospf 1
R2(config-router)#redistribute eigrp 1 ?
R2(config-router)#exit
R2(config)#
R2#
R2#
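A sketch with an example seed metric (the value 10 is assumed because the redistributed route below shows up as [110/10]):
R2(config)#router ospf 1
R2(config-router)#redistribute eigrp 1 metric 10 subnets
R2(config-router)#exit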
Check the routing table again, now the network is converged. The route O E2 10.0.0.0/8 [110/10] via 90.0.0.1 is learned by mutual
redistribution.
ACLs are a set of rules used most commonly to filter network traffic. They are used on network
devices with packet filtering capabilities (e.g. routers or firewalls). ACLs are applied on the
interface basis to packets leaving or entering an interface.
For example on how ACLs are used, consider the following network topology:-
Let’s say that server S1 holds some important documents that need to be available only to the
company’s management. We could configure an access list on R1 to enable access to S1 only to
users from the management network. All other traffic going to S1 will be blocked. This way, we
can ensure that only authorized users can access the sensitive files on S1.
Types of ACL:-
There are two types of ACL: standard and extended.
STANDARD ACL
• The access-list number range is 1 – 99.
• It can block a network, host or subnet.
• Filtering is done based on the source IP address only.
EXTENDED ACL
• The access-list number range is 100 – 199.
• It can block a network, host, subnet and services.
• It checks source IP, destination IP, protocol and port number.
Great job, we have just created our first ACL with the classic numbered method. Now let's create our second ACL, but this time using the modern named method.
Good going, we have finished our ACL creation task on router R1. Now access the global configuration mode of router R2 and enter the following commands to create ACL 20.
Now our security guards (ACLs) have a list of authorized persons (conditions). Right now they are just sitting in the office (router), where they will do nothing. We need to send them to their job site (interface), where they will perform their jobs (filtering).
Regardless of which method we used to create the ACLs, assigning them to interfaces follows the same steps:-
Commands and parameters are explained in previous part of this article. In this part we will use these commands in
assigning the ACLs.
This tutorial explains Extended Access Control List configuration commands and its parameters in detail with
examples. Learn how to build, enable and delete an extended ACL (Numbered and Named) condition or statement
including how to perform host level and application level filtering with Extended ACL.
An Extended IP ACL can filter a packet based on its source and destination IP address, protocol information, port
number, message type for ICMP and TCP/IP protocol such as FTP, HTTP, SSH, Telnet etc.
Just like Standard ACL we can create Extended ACL in two ways:-
To create an Extended numbered ACL, the following global configuration mode command is used:-
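The generic form of the command, reconstructed here from the parameter descriptions that follow (optional parts in brackets):
Router(config)#access-list ACL_Identifier_number permit|deny IP_protocol source_address source_wildcard destination_address destination_wildcard [operator port] [established] [log]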
access-list
Through this parameter we tell router that we are creating or accessing an access list.
ACL_Identifier_number
With this parameter we specify the type of access list. We have two types of access list; standard and extended. Both
lists have their own unique identifier numbers. Extended ACL uses numbers range 100 to 199 and 2000 to 2699. We
can pick any number from this range to tell the router that we are working with Extended ACL. This number is used in
grouping the conditions under a single ACL. This number is also a unique identifier for this ACL in the router.
permit/deny
As we know an ACL condition has two actions; permit and deny. If we use permit keyword, ACL will allow all packets
that match with parameters specified next in command. If we use deny keyword, ACL will drop all packets which
match with following specified parameters.
IP_protocol
This parameter tells router that what kind of filtering we want. We have two choices here, host level filtering and
application level filtering. Host level filtering is used for generic filtering while application level filtering is used for
more specific filtering. In easy language Host level filtering checks “Whether host A is allowed to access host B or
not” while application level filtering checks “How much host A is allowed to access host B”.
IP
For host level filtering we need to use the IP keyword here. Please note that if you choose IP here, you will not be able to specify a specific application layer protocol later in this statement. The generic command for host level filtering is the following:
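A sketch of that generic host level form, using the parameters described above:
Router(config)#access-list ACL_Identifier_number permit|deny ip source_address source_wildcard destination_address destination_wildcard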
After IP keyword we need to provide source and destination address with wildcard mask. I have already explained
wildcard mask in detail with example in second part of this article.
In a standard ACL, to match a specific host we are allowed to type the IP address alone (the router will automatically add the host keyword to it). But in an extended ACL we have to type the host keyword with the IP address to match a specific host.
Application level filtering
For application level filtering we need to use the appropriate protocol keyword here, such as TCP, UDP or ICMP. Depending on the protocol, we are allowed to use more specific filtering parameters later in the statement.
TCP/UDP
Port numbers are used to distinguish between different applications' data. For example, a server performs a number of functions like email, FTP, DNS, web service, file service, data service etc. TCP/UDP assigns a unique number to each application so that its data doesn't get mixed up with other applications' data in transmission. These unique numbers are called port numbers. An Extended ACL can filter data packets based on port numbers or application names. The following table lists some of the most common port numbers and their associated applications.
Port Number - Application - ACL Keyword
53 (TCP/UDP) - DNS - domain
80 (TCP) - HTTP - www
Operators
Operators are used to match port numbers or application names. There are five operators:
lt - less than
gt - greater than
eq - equal to
neq - not equal to
range - an inclusive range of port numbers
Established
The established keyword is used only with TCP packets. With this keyword we can control the direction of data flow. If we use this keyword, the ACL will only allow TCP packets which have the ACK or RST bit set in their header, i.e. packets belonging to an already established session. The logic behind this keyword is to allow return traffic only for sessions that originated from inside.
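For example, a sketch that allows only return web traffic into the 10.0.0.0/8 network (the addresses are assumptions):
Router(config)#access-list 110 permit tcp any 10.0.0.0 0.255.255.255 established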
Log
The log keyword is used to log every matched packet. It asks the router to log a message every time the ACL is hit. This feature is extremely useful for monitoring inappropriate access attempts.
ICMP
Sending a packet is not a guarantee that the packet will be delivered. Sometimes packets get lost on their way to the destination. In such a situation the nearest device sends an error message back to the sender, so the sender can get an idea about undelivered packets and their possible reasons. Networking devices use the ICMP protocol to send these error messages.
If we do not specify a particular message type, ACL will match all message types.
Beside IP, TCP, UDP and ICMP we can also filter a packet based on ahp (Authentication Header Protocol), eigrp
(Cisco's EIGRP routing protocol), esp (Encapsulation Security Payload), gre (Cisco's GRE tunneling), igmp (Internet Group Management Protocol), ipinip (IP in IP tunneling), nos (KA9Q NOS-compatible IP over IP tunneling), ospf (OSPF
routing protocol), pcp (Payload Compression Protocol) and pim (Protocol Independent Multicast). These options are
not included in any associate (CCNA) level exam syllabus. For CCNA level exams we should focus only on four
protocols IP, TCP, UDP and ICMP.
Starting from Cisco IOS version 11.2, routers support a modern configuration approach. While in the classic style we are not allowed to edit/update/delete a single line from an ACL, in the modern style we can edit/update/delete a single line from an ACL.
Router(config)#ip access-list extended ACL_name_number
Once we enter the above command, we are moved into the ACL sub-configuration mode:
Router(config-ext-nacl)#
Once we are finished, use the exit command to return to global configuration mode.
Router(config)#interface interface_number
Router(config-if)#ip access-group ACL_number_or_name in|out
That’s all for this part. In next part we will practically implement what we have learnt from this part.
Great job, we have just created our first ACL with the classic numbered method. Now let's create our second ACL, but this time using the modern named method.
Good going, we have finished our ACL creation task on router R1. Now access the global configuration mode of router R2 and enter the following commands to create ACL-SecureManagement.
No matter how we created the ACLs, assigning them to interfaces follows the same steps:-
Commands and parameters were explained in the previous part of this article. In this part we will use these commands to assign the ACLs.
Router(config)#interface Fa0/0
Router(config-if)#ip access-group SecureManagement in
Router(config-if)#exit
Router(config)#
Packet Tracer includes several tools to verify our implementation, such as the ping command that can be used to test connectivity. We can use FTP and a web browser to test application level filters.
As per the permissions, PC0 is allowed to access only its own section. It is not allowed to access anything from outside.
Let's do one more test from PC2. As per the permissions, PC2 is allowed to access the development section and only the web service from the server.
NTP (NETWORK TIME PROTOCOL)
# make sure that logging information and timestamps have the accurate time
and date.
# an NTP network usually gets its time from an authoritative time source, such as a radio clock or an atomic clock attached to a time server.
NTP Stratum:
# NTP uses a stratum to describe the distance between a network device and
an authoritative time source.
# a stratum 2 NTP server receives its time through NTP from a stratum 1 time
server.
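A minimal sketch of pointing a router to an NTP server and verifying (the server address is an example):
R1(config)#ntp server 192.168.1.100
R1#show ntp status
R1#show ntp associations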
Ether channel:--
# EtherChannel combines multiple physical links (on switches and multilayer switches) into a single logical link.
# In 2000 the IEEE passed 802.3ad which is an open standard version of Ether
channel.
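A sketch of bundling two switch ports with LACP, the 802.3ad open-standard mode (interface and group numbers are examples):
Switch(config)#interface range fastEthernet 0/1 - 2
Switch(config-if-range)#channel-group 1 mode active
Switch(config-if-range)#exit
Switch#show etherchannel summary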
Authentication - local database:------------
# Local database authentication is not scalable; server-based AAA authentication is scalable.
# With local authentication, the router authenticates the username and password using the local database, and the user is authorized to access the network based on information in the local database.
# TACACS+ or RADIUS protocols are used to communicate between the client and the AAA security servers.
AAA Authentication using TACACS+ :-------
#config t
#aaa new-model
#tacacs-server host 192.168.1.10 key MYKEY (the server address and key here are example values)
#aaa authentication login default group tacacs+ local
#line con 0
#login authentication default
#exit
#end
Network Address Translation (NAT) is a process in which one or more local IP addresses are translated into one or more global IP addresses, and vice versa, in order to provide Internet access to the local hosts. NAT generally operates on a router or firewall.
NAT Terminology
Before we understand NAT in details let’s get familiar with four basic terms used in NAT.
Term: Description
Inside Local IP Address: Before translation, the source IP address located inside the local network.
Inside Global IP Address: After translation, the source IP address as seen outside the local network.
Outside Global IP Address: Before translation, the destination IP address located outside, in the remote network.
Outside Local IP Address: After translation, the destination IP address located inside the remote network.
Let’s understand these terms with an example. Suppose a user is browsing a website from his home computer. The
network which connects his computer with internet is considered as a local network for him. Same as the network
which connects the webserver where the website is located with internet is considered as a local network for
webserver. The network which connects both networks on internet is considered as a global network.
On router the interface which is connected with local network will be configured with inside local IP address and the
interface which is connected with global network will be configured with inside global IP address. Inside and outside
depend on where we are standing right now. For example in above network for user router R1 is inside and router R2
is outside.
So, what about outside global and outside local? Well… these terms are used to explain the NAT process theoretically.
Practically we never need to configure the outside local and outside global as they sound. For example let’s discuss
above example once again.
On R1 we will configure inside local address (10.0.0.1) and inside global address (100.0.0.1) which will become outside
local address (10.0.0.1) and outside global address (100.0.0.1) for R2 respectively.
Same way on R2 we will configure inside local address (192.168.1.1) and inside global address (100.0.0.2) which will
become outside local address (192.168.1.1) and outside global address (100.0.0.2) for R1 respectively.
So practically we only configure inside local and inside global. What is inside for one side is the outside for other side.
Types of NAT
There are three types of NAT; Static NAT, Dynamic NAT and PAT. These types define how inside local IP address will
be mapped with inside global IP address.
Static NAT
In this type we manually map each inside local IP address with an inside global IP address. Since this type uses one-to-one mapping, we need exactly the same number of IP addresses on both sides.
Dynamic NAT
In this type we create a pool of inside global IP addresses and let the NAT device map inside local IP addresses to the available inside global IP addresses from the pool automatically.
PAT
In this type a single inside global IP address is mapped to multiple inside local IP addresses by using the source port number. This is also known as PAT (Port Address Translation) or NAT overload.
There are no hard and fast rules about where we should or should not use NAT. Whether we should use NAT depends purely on the network requirements. For example, NAT is the best solution in the following situations: -
• Our network is built with private IP addresses and we want to connect it with internet. As we know to connect with
internet we require public IP address. In this situation we can use NAT device which will map private IP address with
public IP address.
• Two networks which are using same IP address scheme want to merge. In this situation NAT device is used to avoid IP
overlapping issue.
• We want to connect multiple computers with internet through the single public IP address. In this situation NAT is
used to map the multiple IP addresses with single IP address through the port number.
That’s all for this article. In next part of this tutorial we will learn how to configure static NAT and dynamic NAT in
Cisco router.
Configure Static NAT
Since static NAT uses manual translation, we have to map each inside local IP address (which needs translation) with an inside global IP address. The following command is used to map the inside local IP address with the inside global IP address.
Router(config)#ip nat inside source static [inside local ip address] [inside global IP
address]
For example, in our lab Laptop1 is configured with IP address 10.0.0.10. To map it with the 50.0.0.10 IP address we will use the following command:
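Reconstructed from the addresses given above (R1 is assumed to be the router holding this translation):
R1(config)#ip nat inside source static 10.0.0.10 50.0.0.10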
In the second step we have to define which interface is connected with the local network. On both routers, interface Fa0/0 is connected with the local network which needs IP translation.
In the third step we have to define which interface is connected with the global network. On both routers, the Serial 0/0/0 interface is connected with the global network and is therefore configured as the outside (global) interface.
Let’s implement all these commands together and configure the static NAT.
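A sketch of the complete static NAT configuration on R1, mirroring the R2 configuration shown later in this article:
R1(config)#ip nat inside source static 10.0.0.10 50.0.0.10
R1(config)#interface FastEthernet 0/0
R1(config-if)#ip nat inside
R1(config-if)#exit
R1(config)#interface Serial 0/0/0
R1(config-if)#ip nat outside
R1(config-if)#exit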
For testing purpose I configured only one static translation. You may use following commands to configure the
translation for remaining address.
Before we test this lab we need to configure IP routing. IP routing is the process which allows the router to route packets between different networks. The following tutorial explains routing in detail with examples.
In this lab we configured static NAT on R1 and R2. On R1 we mapped inside local IP address 10.0.0.10 with inside
global address 50.0.0.10 while on R2 we mapped inside local IP address 192.168.1.10 with inside global IP address
200.0.0.10.
To test this setup, click Laptop0, go to the Desktop tab and click Command Prompt.
In the first step we will create a standard access list which defines which inside local addresses are permitted to be mapped to an inside global address.
Router(config)#
access-list
Through this parameter we tell router that we are creating or accessing an access list.
ACL_Identifier_number
With this parameter we specify the type of access list. We have two types of access list; standard and extended. Both
lists have their own unique identifier numbers. Standard ACL uses numbers range 1 to 99 and 1300 to 1999. We can
pick any number from this range to tell the router that we are working with a standard ACL. This number is used in grouping the conditions under a single ACL. This number is also a unique identifier for this ACL in the router.
permit/deny
An ACL condition has two actions; permit and deny. If we use permit keyword, ACL will allow all packets from the
source address specified in next parameter. If we use deny keyword, ACL will drop all packets from the source address
specified in next parameter.
matching-parameters
This parameter allows us to specify the contents of packet that we want to match. In a standard ACL condition it could
be a single source address or a range of addresses. We have three options to specify the source address.
• Any
• host
• A.B.C.D
Any
Any keyword is used to match all sources. Every packet compared against this condition would be matched.
Host
Host keyword is used to match a specific host. To match a particular host, type the keyword host and then the IP
address of host.
A.B.C.D
Through this option we can match a single address or a range of addresses. To match a single address, simply type its
address. To match a range of addresses, we need to use wildcard mask.
Wildcard mask
Just like a subnet mask, a wildcard mask is also used to draw a boundary in an IP address. Where a subnet mask is used to separate the network address from the host address, a wildcard mask is used to distinguish the matching portion from the rest. The wildcard mask is the inverse of the subnet mask. A wildcard mask can be calculated in decimal or in binary from the subnet mask.
We have three hosts in the lab. Let's create a standard access list which allows two hosts and denies one host.
To create a standard numbered ACL, the following global configuration mode command is used:-
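A sketch matching the lab described later, where 10.0.0.10 and 10.0.0.20 are translated (the third host is denied by the implicit deny at the end of the ACL):
R1(config)#access-list 1 permit host 10.0.0.10
R1(config)#access-list 1 permit host 10.0.0.20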
In second step we define a pool of inside global addresses which are available for translation.
Following command is used to define the NAT pool.
Router(config)#ip nat pool [Pool Name] [Start IP address] [End IP address] netmask [Subnet
mask]
This command accepts four options pool name, start IP address, end IP address and Subnet mask.
Pool Name: - This is the name of pool. We can choose any descriptive name here.
Start IP Address: - First IP address from the IP range which is available for translation.
End IP Address: - Last IP address from the IP range which is available for translation. There are no minimum or maximum criteria for the IP range; for example, we can have a range of a single IP address or a range covering all IP addresses of a subnet.
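A sketch of the pool used below (the pool name ccna comes from the text; the address range and mask are assumptions):
R1(config)#ip nat pool ccna 50.0.0.10 50.0.0.20 netmask 255.0.0.0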
In the third step we map the access list to the pool. The following command will map the access list with the pool and configure dynamic NAT.
Router(config)#ip nat inside source list [access list name or number] pool [pool name]
Access list name or number: - The name or number of the access list which we created in the first step.
In the first step we created a standard access list with number 1 and in the second step we created a pool named ccna. To configure dynamic NAT with these options we will use the following command:
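Using the access list number and pool name created above:
R1(config)#ip nat inside source list 1 pool ccna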
Finally we have to define which interface is connected with local network and which interface is connected with global
network.
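A sketch, assuming the same interface roles as in the static NAT lab (Fa0/0 facing the local network and Serial 0/0/0 facing the global network):
R1(config)#interface FastEthernet 0/0
R1(config-if)#ip nat inside
R1(config-if)#exit
R1(config)#interface Serial 0/0/0
R1(config-if)#ip nat outside
R1(config-if)#exit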
For testing purposes I configured dynamic translations for two addresses only.
On R2 we can keep the standard configuration, or configure dynamic NAT as we just did on R1, or configure static NAT as we learnt in the previous part of this article.
Let's do a quick recap of what we learnt in the previous part and configure static NAT on R2.
R2>enable
R2#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#ip nat inside source static 192.168.1.10 200.0.0.10
R2(config)#interface Serial 0/0/0
R2(config-if)#ip nat outside
R2(config-if)#exit
R2(config)#interface FastEthernet 0/0
R2(config-if)#ip nat inside
R2(config-if)#exit
R2(config)#
To understand the above commands in detail please see the second part of this tutorial.
Before we test this lab we need to configure IP routing. IP routing is the process which allows the router to route packets between different networks. The following tutorial explains routing in detail with examples.
In this lab we configured dynamic NAT on R1 for 10.0.0.10 and 10.0.0.20, and static NAT on R2 for 192.168.1.10.
To test this setup, click Laptop0, go to the Desktop tab and click Command Prompt.
➢ HDLC
➢ PPP
➢ FRAME RELAY
➢ ATM
WAN LINKS:-------
➢ DEDICATED (leased lines, typically using HDLC or PPP)
➢ SWITCHED
PPP framing defines how network layer packets are encapsulated in a PPP frame. As we know, PPP can carry multiple Layer 3 protocols over a single link. To support multiple network layer protocols, PPP uses the Protocol Type field in the header. The following figure illustrates PPP framing.
FLAG:--
To separate one frame from the next, an 8-bit flag is added at the beginning and the end of the frame. The problem is that any pattern used for the flag could also appear inside the information. There are two ways to overcome this problem.
1.Byte stuffing:--
A byte (usually the escape character, ESC), which has a predefined bit pattern, is added to the data section of the frame whenever there is a character with the same pattern as the flag. Whenever the receiver encounters the ESC character, it removes it from the data section and treats the next character as data, not as a flag.
A problem arises when the text itself contains one or more escape characters followed by a flag. To solve this problem, the escape characters that are part of the text are marked by another escape character, i.e. if the escape character is part of the text, an extra one is added to show that the second one is part of the text.
Example: Note - Point-to-Point Protocol (PPP) is a byte-oriented protocol.
2. Bit stuffing -
The flag is the special 8-bit pattern 01111110, used to define the beginning and the end of the frame.
The problem with the flag is the same as in byte stuffing. So in this protocol, if we encounter a 0 followed by five consecutive 1 bits, an extra 0 is added after these bits. This extra stuffed bit is removed from the data by the receiver.
The extra bit is added after one 0 followed by five 1 bits regardless of the value of the next bit. Also, since the sender side always knows which sequence is data and which is the flag, it only adds this extra bit in the data sequence, not in the flag sequence.
Example:
LCP (Link Control Protocol) is the second component of PPP. PPP uses it to build and maintain data-link connections. It provides the following options:-
Compression:- Through compression, LCP increases the overall data transmission speed while saving bandwidth at the same time. It compresses data at the sending end and decompresses it at the receiving end.
Error Detection:- LCP uses the LQM (Link Quality Monitoring) tool to detect an interface that exceeds the threshold error percentage. Once the faulty interface is identified, LCP will disable that interface and reroute the traffic over a better route.
Looped Link Detection:- LCP uses magic numbers to detect a looped link. Once a looped link is detected, LCP will disable that interface and reroute the traffic over the working link.
Multilink:- With this option, multiple physical links are combined into a single logical connection. For example, if we have two 64Kbps lines, this option can combine them in such a way that they appear as a single 128Kbps connection at layer 3.
Call Back:- With this option, the remote side router calls back the calling router. For example, we have two routers, R1 and R2, with callback enabled. In this case R1 will connect with R2 and authenticate itself. Once the authentication process is completed, R2 will terminate the connection and then re-initiate the connection from its side. This way R1 is charged only for the data used during the authentication process while R2 is charged for the remaining data transmission.
This is the third component of PPP. PPP uses NCP (Network Control Protocol) to allow multiple
Network layer protocols (such as IPv4, IPv6, IPX) to be used in a single point to point
connection.
PPP is specified at the Physical and Data Link layers only. Don't confuse this with the NCP component. The NCP component is only used to carry multiple Network Layer protocols simultaneously across the single point-to-point link. PPP is neither specified as a layer 3 protocol nor does it work as a layer 3 (network layer) protocol.
PPP Authentication
PPP authentication is the method of identifying the remote device. Through authentication we can find out whether the remote party is genuine or an imposter. For example, there are two routers (R1 and R2) communicating over a serial link. Now R1 has some data for R2. But before sending this data, R1 wants to be sure that the remote device which is claiming to be R2 really is R2. In this case R1 will initiate the authentication process, in which R2 will prove its identity. PPP supports two authentication protocols: PAP and CHAP.
In this protocol the password is sent in clear text format, which makes it less secure in comparison with CHAP. PAP authentication is a two-step process. In step one, the router that wants to be authenticated sends its username and password to the router that will authenticate it. In step two, if the username and password match, the remote router authenticates the originating router; otherwise the authentication process fails. The following figure illustrates this process in detail.
In step one, R1 sends the username and password in clear text format to R2, which will authenticate R1.
In step two, R2 matches the received username and password with the locally stored username and password. If both credentials match, R2 assumes that R1 is the real R1. R2 sends back an acknowledgment to R1 stating that it has passed the authentication process and that R2 is ready for data transmission.
PAP authentication is only performed upon the initial link establishment. Once the link is established, no further authentication is done for that particular session. PAP sends the username and password in clear text format. The username and password are case sensitive.
CHAP is used at the initial startup and, once the link is established, periodic authentications are performed to make sure that the router is still communicating with the same host. If any of these periodic authentications fails, the connection is terminated immediately. CHAP authentication is a three-step process.
Step1
In the first step R1 (source) sends its username (without the password) to R2 (destination).
Step2
• Routers running CHAP need to maintain a local authentication database. This database contains a list of all allowed hosts with their login credentials.
• R2 will scan this database to find out whether R1 is allowed to connect with it or not.
• If no entry for a particular host is found in the database, that host is not allowed to connect. In such a case the connection is terminated at this point.
• A database entry for R1 (with password) confirms that R1 is allowed to connect. R1's password is picked up for the next process.
• At this moment a random key is generated.
• This random key, together with the password, is passed into the MD5 hashing function.
• The MD5 hashing function produces a hashed value from the given input (random key + password).
• This hashed value is known as the Challenge.
• R2 sends this Challenge, along with the random key, back to R1.
Step3
In the third step, the hash is verified: R1 computes the same MD5 hash from the received random key and its locally stored password, and authentication succeeds only if the computed value matches the Challenge.
CHAP therefore uses a three-way handshake process to perform authentication. In the CHAP protocol the actual password is never sent across the link. CHAP uses a hashed value for authentication that is generated by the MD5 hash function. MD5 uses the locally stored password and a random key to generate the hashed value. This hashed value is valid only one time.
Differences between PAP and CHAP authentication protocols
• PAP performs authentication in two steps; CHAP performs authentication in three steps.
• PAP sends the username and password across the link; CHAP sends only the username.
• PAP sends the actual password across the link; CHAP never sends the actual password across the link.
• PAP sends the password in clear text format; CHAP sends a hash of the password and a random key produced by the MD5 hash function.
• PAP is a less secure authentication protocol (anyone tapping the wire can learn the password); CHAP is a secure authentication protocol, since the actual password is never sent across the wire and no one can learn it from wire-tapping.
• PAP authentication is performed only at the initial link establishment; CHAP authentication is performed at the initial startup and, if required, at any time during the session.
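A minimal sketch of enabling PPP with CHAP on a serial link (hostname, password and interface are examples; each router needs a username entry matching the other router's hostname with the same shared password):
R1(config)#username R2 password SECRET123
R1(config)#interface Serial 0/0/0
R1(config-if)#encapsulation ppp
R1(config-if)#ppp authentication chap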
RIP, OSPF and EIGRP are all different, but they have one thing in common: they want to find the shortest path to the destination. When we look at the Internet we don't care as much about finding the shortest path; being able to manipulate traffic paths is far more important. There is only one routing protocol we currently use on the Internet (the public network), and that is BGP.
BGP feature:-----
##With auto-summary enabled, you can advertise a classful network and you don’t have to
add the mask parameter. BGP will automatically advertise the classful network if you have
the classful network or a subnet of this network in your routing table.
Note:- To advertise and carry subnet routes in BGP, use an explicit network command or the no auto-summary command. If you disable automatic summarization and have not entered a network command, you will not advertise network routes for networks with subnet routes unless they contain a summary route.
Note:- To block subnets and create summary subprefixes to the classful network boundary when crossing classful network boundaries, use the auto-summary command.
EBGP Multihop:----
eBGP (external BGP) by default requires two Cisco IOS routers to be directly
connected to each other in order to establish a neighbor adjacency. This is
because eBGP routers use a TTL of one for their BGP packets. When the BGP
neighbor is more than one hop away, the TTL will decrement to 0 and it will be
discarded.
When these two routers are not directly connected then we can still make it work
but we’ll have to use multihop. This requirement does not apply to internal BGP.
Here’s an example:
Above we will try to configure eBGP between R1 and R3. Since R2 is in the middle,
these routers are more than one hop away from each other. Let’s take a look at
the configuration:
R1(config)#router bgp 1
R1(config-router)#neighbor 192.168.23.3 remote-as 3
R3(config)#router bgp 3
R3(config-router)#neighbor 192.168.12.1 remote-as 1
Even though this configuration is correct, BGP will not even try to establish an
eBGP neighbor adjacency. BGP knows that since these routers are on different
subnets, they are not directly connected. We can verify this with the following
command:
The wireshark capture above shows us that R1 is trying to connect to R3. As you
can see the TTL is 1. Once R2 receives this packet it will decrement the TTL by 1
and drop it:
Above you can see that R2 is dropping this packet since the TTL is exceeded. It will
send an ICMP time-to-live exceeded message to R1. Our BGP routers will show a
message like this:
R1#
BGP: 192.168.23.3 open failed: Connection timed out; remote host not
responding, open active delayed 27593ms (35000ms max, 28% jitter)
This is R1 telling us that it couldn’t connect to R3. To fix this issue, we’ll tell eBGP
to increase the TTL. First let’s enable the directly connected check again:
R1(config-router)#no neighbor 192.168.23.3 disable-connected-check
R3(config-router)#no neighbor 192.168.12.1 disable-connected-check
And now we will increase the TTL:
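A sketch of the multihop commands (the TTL value 2 matches the capture mentioned below; it is assumed the routers already have a route, for example a static route, to each other's peering address):
R1(config)#router bgp 1
R1(config-router)#neighbor 192.168.23.3 ebgp-multihop 2
R3(config)#router bgp 3
R3(config-router)#neighbor 192.168.12.1 ebgp-multihop 2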
This capture shows us the TTL of 2. After a few seconds, our routers will become
eBGP neighbors:
R1#
%BGP-5-ADJCHANGE: neighbor 192.168.23.3 Up
R3#
%BGP-5-ADJCHANGE: neighbor 192.168.12.1 Up
That’s it, problem solved!
BGP neighbors:---
# BGP neighbors are routers that form a TCP connection for exchanging BGP updates.
# BGP neighbors are also called BGP peers or BGP speakers.
# There are two types of BGP neighbor relationship:
1. iBGP
2. eBGP
BGP databases (BGP tables)
BGP neighbor table / database:
# a list of all configured BGP neighbors.
# has to be manually configured using the neighbor command.
# show ip bgp summary
# show ip bgp neighbors
BGP forwarding table / database:
# a list of networks known by BGP, along with their paths and attributes.
# show ip bgp
IP routing table:--
# a list of best paths to destination networks.
# show ip route
BGP configuration commands:----
Router(config)#router bgp <as no>
Router(config-router)#network <network id> <subnet mask>
Router(config-router)#neighbor <ip-address> <remote-as no>
BGP split horizon rule:----
# An update received from one iBGP neighbor should not be sent back to another iBGP neighbor.
# BGP split horizon is necessary to ensure that routing loops are not started within an AS. Because of this rule, full-mesh iBGP peering is required within an AS so that all the routers within the AS learn about the BGP routes.
Solution:
1. full mesh neighborship (every router should be a neighbor of every other router within the AS)
2. use a route reflector
Note:---
iBGP neighbors need not be directly connected (but they must be reachable from each other).
iBGP full mesh scalability concerns (a full mesh becomes a very complex network and is not a practical solution):
## administration
Configuration management on an increasingly large number of routers.
## number of TCP sessions
1. Total number of sessions = n(n-1)/2 (for example, 10 routers need 10 x 9 / 2 = 45 iBGP sessions).
2. Maintaining extreme numbers of TCP sessions creates extra overhead.
BGP table size
# a higher number of neighbors generally translates to a higher number of paths for each route.
#memory consumption.
Route reflector:---
1. provides a scalable alternative to an iBGP full mesh.
2. allows a router (the route reflector, RR) to advertise routes received from an iBGP peer to other iBGP peers.
3. clients send their updates to the server (the route reflector).
4. the server reflects updates to all the remaining clients.
5. all clients should establish neighborship only with the servers.
6. clients do not establish neighborship with any other client.
7. if you have 2 servers, each server establishes neighborship with the other server and with the clients.
8. one server behaves as a client of the other server and vice versa; if one server goes down, the other server will handle route update information for the clients.
BGP neighbor with loopback:---
For redundancy, BGP routers can form the neighborship with a loopback interface, which never goes down; even if a physical interface fails, the session stays up as long as the loopback is still reachable over another path.
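A sketch (AS number, loopback address and source interface are examples; both routers must also have a route to each other's loopback):
R1(config)#router bgp 1
R1(config-router)#neighbor 2.2.2.2 remote-as 1
R1(config-router)#neighbor 2.2.2.2 update-source Loopback0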
AS PATH :---
#list of AS through which update has traversed
#path with shortest AS path list is more desirable.
NEXT HOP:---
# the next-hop attribute is the IP address that should be used to reach the advertised network.
# for routes learned from an eBGP neighbor, the next hop is the IP address of that eBGP neighbor, and it is passed unchanged to iBGP peers.
ORIGIN:--
# Origin informs all ASes in the internetwork how the network was introduced into BGP.
1. IGP (i):- advertised into BGP using the network command.
2. EGP (e):- learned via the old EGP protocol, the predecessor of BGP (rarely seen today).
3. Incomplete (?):- redistributed into BGP from an IGP (such as RIP, OSPF or EIGRP) or from a static route.
# Origin is a well-known mandatory attribute.
WEIGHT:---
# It is Cisco's proprietary attribute.
# It tells the router how to exit the AS.
# The path with the highest weight is more desirable.
# Weight is not a standard BGP path attribute and is never advertised to other routers.
# It is local to the router; it does not affect other routers, only the local router.
# Default weight = 0 for learned routes (routes that are not locally injected) and 32768 for locally injected routes (for example, directly connected networks advertised by this router).
# Weight is preferred over AS path in the best-path selection.
LOCAL PREFERENCE:--
# Local preference defines how data traffic should exit an AS.
# The path with the highest local preference value is more desirable.
# It is advertised only to iBGP neighbors within an AS.
# The default value is 100.
Nowadays almost everything is connected to the Internet. In the picture above we have a customer
network connected to an ISP (Internet Service Provider). Our ISP is making sure we have Internet
access. Our ISP has given us a single public IP address we can use to access the Internet. To make
sure everyone on our LAN at the customer side can access the Internet we are using NAT/PAT (Network
/ Port address translation) to translate our internal private IP addresses to this single public IP address.
This scenario is excellent when you only have clients that need Internet access. On our customer LAN we
only need a default route pointing to the ISP router and we are done. For this scenario we don’t need
BGP…
Maybe the customer has a couple of servers that need to be reachable from the Internet…perhaps a
mail- or webserver. We could use port forwarding and forward the correct ports to these servers so we
still only need a single IP address. Another option would be to get more public IP addresses from our ISP
and use these to configure the different servers. For this scenario we still don’t need BGP…
What if I want a bit more redundancy? Having a single point of failure isn’t a good idea. We could add
another router at the customer side and connect it to the ISP. You can use the primary link for all traffic
and have another link as the backup. We still don’t require BGP in this situation, it can be solved with
default routing:
• Advertise a default route in your IGP on the primary customer router with a low metric.
• Advertise a default route in your IGP on the secondary customer router with a high metric.
This will make sure that your IGP sends all traffic using the primary link. Once the link fails your IGP will
make sure all traffic is sent down the backup link. Let me ask you something to think about…can we do
any load balancing across those two links? It’ll be difficult right?
Your IGP will send all traffic down the primary link and nothing down the backup link unless there is a
failure. You could advertise a default route with the same metric but you’d still have something like a
50/50% load share. What if I wanted to send 80% of the outgoing traffic on the primary link and 20%
down the backup link? That’s not going to happen here but with BGP it’s possible
This scenario is a bit more interesting. Instead of being connected to a single ISP we now have two
different ISPs. For redundancy reasons it’s important to have two different ISPs, in case one fails you will
always have a backup ISP to use. What about our Customer network? We still have two servers that
need to be reachable from the Internet.
In my previous examples we got public IP addresses from our ISP. Now I’m connected to two different
ISPs so what public IP addresses should I use? From ISP1 or ISP2? If we use public IP addresses from
ISP1 (or ISP2) then these servers will be unreachable once the ISP has connectivity issues.
Instead of using public IP addresses from the ISP we will get our own public IP addresses. The IP address
space is maintained by IANA (Internet Assigned Numbers Authority – http://www.iana.org/ ). IANA is
assigning IP address space to a number of large Regional Internet Registries like RIPE or ARIN. Each of
these assign IP address space to ISPs or large organizations.
When we receive our public IP address space then we will advertise this to our ISPs. Advertising is done
with a routing protocol and that will be BGP.
If you are interested here’s an overview of the IPv4 space that has been allocated by IANA:
For routing between the different autonomous systems we use an EGP (external gateway
protocol). The only EGP we use nowadays is BGP.
How do we get an autonomous system number? Just like public IP address space you’ll need to register
one.
Traditional autonomous system numbers are 16-bit, which gives 65,536 possible values (0 – 65535). Just like
private and public IP addresses, we have a range of public and private AS numbers.
Range 1 – 64511 is used for globally unique AS numbers and range 64512 – 65535 is reserved for private
autonomous system numbers.
If you are interested, see if you can find the AS number of your ISP:
Single Homed
The single homed design means you have a single connection to a single ISP. With this design, you don’t
need BGP since there is only one exit path in your network. You might as well just use a static default
route that points to the ISP.
The advantage of a single-homed link is that it’s cost effective, the disadvantage is that you don’t have
any redundancy. Your link is a single point of failure but so is using a single ISP.
Dual Homed
The dual homed connection adds some redundancy. You are still only connected to a single ISP, but you
use two links instead of one. There are some variations for this design. Here’s the first one:
With this design, we use a single router on both ends, but we do have redundant links.
The example above offers the most redundancy when you are connected to a single ISP. We have two
links and two routers on both ends. One disadvantage of this design is that we are still using a single ISP.
Single Multi-homed
Multihomed means we are connected to at least two different ISPs. The most simple design looks like
this:
Above you see that we have a single router at the customer, connected to two different ISPs. The single
point of failure in this design is that you only have one router at the customer. When it fails, you won’t be
able to connect to any ISP. We can improve this by adding a second router:
This is a pretty good design, we only use single links, but we are connected to two different ISPs using
different routers.
Dual Multihomed
The dual multihomed designs means we are connected to two different ISPs, and we use redundant links.
There are some variations, here’s the first one:
Above you can see that we are connected to two different ISPs, using one router and two links to each
ISP. We have redundant ISPs and links, but the router is still a single point of failure. We can improve this
by adding a second router:
The design above is better; it has two customer routers. One disadvantage, however, is that once one of
your routers fails, you will lose the connection to one of the ISPs. Using the same number of routers and
links, the following design might be better:
This design has redundant ISPs, routers, and links. Both customer routers are connected to both ISPs.
This design does offer the highest redundancy but it’s also an expensive option.
Conclusion
You have now learned what the different (BGP) connection options to an ISP are:
• Single homed: you are connected to a single ISP using a single link.
• Dual homed: you are connected to a single ISP using dual links.
• Single multi-homed: you are connected to two ISPs using single links.
• Dual multi-homed: you are connected to two ISPs using dual links.
Let’s start with a simple topology. Just two routers and two autonomous systems. Each router has a
network on a loopback interface which we are going to advertise in BGP.
R1(config)#router bgp 1
R1(config-router)#neighbor 192.168.12.2 remote-as 2
R2(config)#router bgp 2
R2(config-router)#neighbor 192.168.12.1 remote-as 1
Use the router bgp command with the AS number to start BGP. Neighbors are not configured
automatically; this is something you'll have to do yourself with the neighbor x.x.x.x remote-as command.
This is how we configure external BGP.
R1# %BGP-5-ADJCHANGE: neighbor 192.168.12.2 Up
R2# %BGP-5-ADJCHANGE: neighbor 192.168.12.1 Up
If everything goes ok you should see a message that we have a new BGP neighbor adjacency.
R1(config)#router bgp 1
R1(config-router)#neighbor 192.168.12.2 password MYPASS
R2(config)#router bgp 2
R2(config-router)#neighbor 192.168.12.1 password MYPASS
If you like you can enable MD5 authentication by using the neighbor password command. Your router
will calculate a MD5 digest of every TCP segment that is being sent.
R1#show ip bgp summary
BGP router identifier 1.1.1.1, local AS number 1
BGP table version is 1, main routing table version 1
Show ip bgp summary is an excellent command to check if you have BGP neighbors. You also see how
many prefixes you received from each neighbor.
SWITCHING :---------------------------------
Layer 2 switching (or Data Link layer switching) is the process of using devices’ MAC addresses
on a LAN to segment a network. Switches and bridges are used for Layer 2 switching. They
break up one large collision domain into multiple smaller ones.
In a typical LAN, all hosts are connected to one central device. In the past, the device was
usually a hub. But hubs had many disadvantages, such as not being aware of traffic that passes
through them, creating one large collision domain, etc. To overcome some of the problems
with hubs, bridges were created. They were better than hubs because they created multiple
collision domains, but they had a limited number of ports. Finally, switches were created and are
still widely used today. Switches have more ports than bridges and can inspect incoming traffic and
make forwarding decisions accordingly. Each port on a switch is a separate collision domain.
Now consider the way switches work. We have the same topology as above, only this time we
are using a switch instead of a hub.
Switches increase the number of collision domains. Each port is one collision domain, which
means that the chances for collisions to occur are minimal. A switch learns which device is
connected to which port and forwards a frame based on the destination MAC address included
in the frame. This reduces traffic on a LAN and enhances security.
How switches work
Each network card has a unique identifier called Media Access Control (MAC) address. This
address is used in LANs for communication between devices on the same network segment.
Devices that want to communicate need to know each other's MAC address before sending out
packets. They use a process called ARP (Address Resolution Protocol) to find out the MAC
address of another device. When the hardware address of the destination host is known, the
sending host has all the required information to communicate with the remote host.
Let’s say that host A wants to communicate with host B for the first time. Host A knows the IP
address of host B, but since this is the first time the two hosts communicate, hardware (MAC)
addresses are not known. Host A uses an ARP process to find out the MAC address of host B.
The switch forwards the ARP request out all ports except the port host A is connected to. Host B
receives the ARP request and responds with its MAC address. Host B also learns the MAC
address of host A (because host A sends its MAC address in the ARP request). The switch learns
which MAC addresses are associated with which port. For example, because host B responded
to the ARP request with a reply that included its MAC address, the switch knows the MAC address of host
B and stores that address in its MAC address table. The same goes for host A: the switch knows
the MAC address of host A because of the ARP request. Now, when host A sends a packet
to host B, the switch looks up in its MAC address table and forwards the frame only out Fa0/1
port, the port on which host B is connected.
You can display the MAC address table of the switch by using the show mac-address-
table command:
This process is vulnerable to layer 2 MAC address spoofing attacks where an attacker
spoofs a certain MAC address to change entries in the MAC address table. A really
simple method to deal with this issue is to manually configure entries in the MAC
address table, a static entry will always overrule dynamic entries. You can either specify
the interface where the MAC address is located or tell the switch to drop the traffic.
Let’s look at an example!
To demonstrate this we only require two devices. A router to generate some traffic and
a switch to look at (and configure) the MAC address table. Here’s the configuration:
We’ll do a quick ping to generate some traffic so SW1 can learn about the mac address
of R1’s FastEthernet 0/0 interface:
R1#ping 192.168.12.2
Here’s the MAC address of R1, learned dynamically. Let’s turn this into a static entry:
There it is, a static entry. No way to overrule this unless you have access to our switch.
This prevents us from moving R1 to another interface on SW1 unless we change the
static entry. Like I mentioned before we can also change a static entry so it will drop all
traffic. Here’s how to do it:
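A minimal sketch of both options (the MAC address, VLAN and interface are example values; on older IOS the command is written mac-address-table, with a hyphen):
SW1(config)#mac address-table static 0011.2233.4455 vlan 1 interface fastEthernet 0/1
SW1(config)#mac address-table static 0011.2233.4455 vlan 1 drop
The first line pins the MAC address to one interface; the second (used instead of the first) tells the switch to drop traffic for that MAC address.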
Switching Methods
Any delay in passing traffic is known as latency. Cisco switches offer three ways to switch the
traffic depending upon how thoroughly you want the frame to be checked before it is passed
on. The more checking you want the more latency you will introduce to the switch.
• Cut through
• Store-and-forward
• Fragment-free
Cut-through
Cut-through switching is the fastest switching method meaning it has the lowest latency. The
incoming frame is read up to the destination MAC address. Once it reaches the destination
MAC address, the switch then checks its CAM table for the correct port to forward the frame
out of and sends it on its way. There is no error checking so this method gives you the lowest
latency. The price however is that the switch will forward any frames containing errors.
You are the security at a club and are asked to make sure that everyone who enters has a
picture ID. You are not asked to make sure the picture matches the person, only that the ID has
a picture. With this method of checking, people are surely going to move quickly to enter the
establishment. This is how cut-through switching works.
Store-and-forward
Here the switch reads the entire frame and copies it into its buffers. A cyclic redundancy check
(CRC) takes place to check the frame for any errors. If errors are found the frame is dropped
otherwise the switching table is examined and the frame forwarded. Store and Forward
ensures that the frame is at least 64 bytes and no larger than 1518 bytes. If smaller than 64
bytes or larger than 1518 bytes then the switch will discard the frame.
Now imagine you are the security at the club, only this time you have to not only make sure
that the picture matches the person, but you must also write down the name and address of
everyone before they can enter. Doing it this way causes a great deal of time and delay and this
is how the store-and-forward method of switching works.
Store-and-forward switching has the highest latency of all switching methods and is the default
setting of the 2900 series switches.
Since cut-through cannot ensure that all frames are good and store-and-forward takes too long, we
need a method that is both quick and reliable. Using our example of the nightclub security,
imagine you are asked to make sure that everyone has an ID and that the picture matches the
person. With this method you have made sure everyone is who they say they are, but you do
not have to take down all the information. In switching we accomplish this by using the
fragment-free method of switching.
This is the default configuration on lower level Cisco switches. Fragment-free, or modified cut-
through, is a modified variety of cut-through switching. The first 64 bytes of a frame are
examined for any errors, and if none are detected, it will pass it. The reason for this is that if
there is an error in the frame it is most likely to be in the first 64 bytes.
The minimum size of an Ethernet frame is 64 bytes; anything less than 64 bytes is called a
“runt” frame. Since every frame must be at least 64 bytes before forwarding, this will eliminate
the runts, and that is why this method is also known as “runt-free” switching.
The figure below shows which method reads how much of a frame before forwarding it.
# An Ethernet frame is preceded by a preamble and start frame delimiter (SFD), which are both
part of the Ethernet packet at the physical layer. Each Ethernet frame starts with an Ethernet
header, which contains destination and source MAC addresses as its first two fields.
# The middle section of the frame is payload data including any headers for other protocols (for
example, Internet Protocol) carried in the frame.
# The frame ends with a frame check sequence (FCS), which is a 32-bit cyclic redundancy
check used to detect any in-transit corruption of data.
# A data packet on the wire and the frame as its payload consist of binary data.
# Ethernet transmits data with the most-significant octet (byte) first; within each octet, however,
the least-significant bit is transmitted first.
# The internal structure of an Ethernet frame is specified in IEEE 802.3. The table below shows
the complete Ethernet packet and the frame inside, as transmitted, for the payload size up to
the MTU of 1500 octets.
Some implementations of Gigabit Ethernet and other higher-speed variants of Ethernet support
larger frames, known as jumbo frames.
Preamble – 7 octets
Start frame delimiter (SFD) – 1 octet
Destination MAC address – 6 octets
Source MAC address – 6 octets
802.1Q tag (optional) – 4 octets
EtherType / length – 2 octets
Payload – 46–1500 octets
Frame check sequence (FCS) – 4 octets
Interpacket gap (IPG) – 12 octets
Layer 2 Ethernet frame = destination MAC address through FCS → 64–1522 octets
Layer 1 Ethernet packet & IPG = preamble through interpacket gap → 72–1530 octets
# An Ethernet packet starts with a seven-octet preamble and one-octet start frame
delimiter (SFD).
# The preamble consists of a 56-bit (seven-byte) pattern of alternating 1 and 0 bits, allowing
devices on the network to easily synchronize their receiver clocks, providing bit-level
synchronization. It provides a 5 MHz clock at the start of each packet, which allows the receiving
devices to clock the incoming bit stream.
# It is followed by the SFD to provide byte-level synchronization and to mark a new incoming
frame. For Ethernet variants transmitting serial bits instead of larger symbols, the (uncoded) on-
the-wire bit pattern for the preamble together with the SFD portion of the frame is 10101010
10101010 10101010 10101010 10101010 10101010 10101010 10101011. The bits are
transmitted in order, from left to right.
# The SFD is the eight-bit (one-byte) value that marks the end of the preamble, which is the first
field of an Ethernet packet, and indicates the beginning of the Ethernet frame.
# The SFD is designed to break the bit pattern of the preamble and signal the start of the actual
frame.
# The SFD is immediately followed by the destination MAC address, which is the first field in an
Ethernet frame. The SFD has the value of 171 (10101011 in binary notation).
Header:--------------
The header features destination and source MAC addresses (each six octets in length),
the EtherType (https://en.m.wikipedia.org/wiki/EtherType) field and, optionally, an IEEE 802.1Q
tag or IEEE 802.1ad tag.
Payload:--------------
The minimum payload is 42 octets when an 802.1Q tag is present and 46 octets when
absent. When the actual payload is less, padding bytes are added accordingly. The
maximum payload is 1500 octets. Non-standard jumbo frames allow for a larger maximum payload
size.
Frame check sequence:-----------------
# The frame check sequence (FCS) is a four-octet cyclic redundancy check (CRC) that allows
detection of corrupted data within the entire frame as received on the receiver side.
# The FCS value is computed as a function of the protected MAC frame fields: source and
destination address, length/type field, MAC client data and padding (that is, all fields except the
FCS).
# Running the CRC algorithm over the received frame data including the CRC code will always
result in a zero value for error-free received data, because the CRC is a remainder of the data
divided by the polynomial. However, this technique can result in "false negatives", in which data
with trailing zeroes will also result in the same zero remainder.
# To avoid this scenario, the FCS is complemented (reversed for each bit) by the sender before it
is attached to the end of the payload data.
# This way, the algorithm result will always be a magic number or CRC32 residue of 0xC704DD7B
when data has been received correctly. This allows for receiving a frame and validating the FCS
without knowing where the FCS field actually starts.
Introduction:
Port security is easy to configure and it allows you to secure access to a port on a
MAC address basis. Port security is configured locally and has no mechanism for
controlling port security in a centralized fashion across distributed switches. Port security is
normally configured on ports that connect servers or fixed devices, because the likelihood of
the MAC address changing on that port is low. A common example of using basic port security
is applying it to a port that is in an area of the physical premises that is publicly accessible. This
could include a meeting room or reception area available for public usage. By restricting the
port to accept only the MAC address of the authorized device, you prevent unauthorized access
if somebody plugged another device into the port.
Configuration Steps:-
By default, the switch port security feature is disabled on all switch ports and must be enabled.
1) Your switch interface must be L2, as "port security" is configured on an access interface. You
can turn an L3 switch port into an access interface by using the "switchport" command.
2) Then you need to enable port security by using the "switchport port-security" command.
This can also be applied in a range of the interfaces on a switch or individual interfaces.
3) This step is optional, but you can specify how many MAC addresses the switch can have on
one interface at a time. If this setting is not applied, the default of one MAC address is used. The
command to configure this is as follows: "switchport port-security maximum N" (where N can
be from 1 to 6272). Keep in mind that the maximum number of MAC addresses supported depends on
the hardware and Cisco IOS you use.
4) This step is also optional, but you can define the action to take when a violation occurs on
that interface or interfaces. The default is to shut down the interface or interfaces. The
command to configure this is as follows: "switchport port-security violation { protect | restrict |
shutdown }"
Protect: discards the offending traffic but keeps the port up and does not send an SNMP
message (it only allows traffic from the secure MAC addresses and drops packets from other MAC addresses).
Restrict: discards the offending traffic and sends an SNMP message but keeps the port up (it alerts the
network administrator).
Shutdown: discards the traffic, sends an SNMP message and disables the port (this is the
default behavior if no setting is specified).
5) You can specify the MAC address that is allowed to access the network resources manually
by using the command "switchport port-security mac-address value". Use this command
multiple times if you want to add more than one MAC address.
6) If you don’t want to configure manually every single MAC address of your organization then
you can have the switch learn the MAC address dynamically using the "switchport port-security
mac-address sticky" command. This command allows the switch to learn the first MAC address that
comes in on the interface.
## This command converts dynamically learned MAC addresses into secure sticky MAC
addresses.
## If you save the sticky MAC addresses in the configuration file, the
interface does not need to relearn the MAC addresses when the switch restarts.
## You can allow the port to dynamically configure secure MAC addresses with the MAC addresses of
connected devices.
## If the port shuts down, all dynamically learned addresses are removed.
Configuration Example:
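A minimal sketch that puts the steps above together on one interface (the interface number and values are example choices):
SW1(config)#interface fastEthernet 0/1
SW1(config-if)#switchport mode access
SW1(config-if)#switchport port-security
SW1(config-if)#switchport port-security maximum 2
SW1(config-if)#switchport port-security violation restrict
SW1(config-if)#switchport port-security mac-address sticky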
Unmanageable switch:----
# these switches are plug and play only; they have no console port or CLI access, so their configuration
cannot be viewed or changed.
Manageable switches:-----------
# these switches are also plug and play.
# it has console port and CLI access.
# we can verify and modify configurations and can implement and test some advance switching
technologies like VLAN, trunking, STP.
Cisco’s Hierarchical Design Model: -----------------------
Access layer:----
1900 & 2900 layer 2 switches.
Now, to take a deeper dive into these switch categories and talk about various options, you can
select the switches based on:
– Speed
– Number of ports
Speed:
You can find Fixed Configuration switches in Fast Ethernet (10/100 Mbps), Gigabit Ethernet
(10/100/1000 Mbps), Ten Gigabit (10/100/1000/10000 Mbps) and even some 40/100 Gbps
speeds. These switches have a number of uplink ports and a number of downlink ports.
Downlinks connect to end users – uplinks connect to other Switches or to the network
infrastructure. Currently, Gigabit is the most popular interface speed though Fast Ethernet is
still widely used, especially in price-sensitive environments. Ten Gigabit has been growing
rapidly, especially in the datacenter and, as the cost comes down, it will continue to expand
into more network applications. With 10GBase-T Ten Gigabit copper interfaces being integrated
into LOM (LAN on the Motherboard) and 10G-Base-T switches becoming available now (see the
Cisco SG500XG-8F8T 16-port 10-Gigabit switch), building a Storage or Server farm with 10
Gigabit interfaces has never been easier or more cost-effective. 40G/100G is still emerging and
will be mainstream in a few years.
Number of ports:
Fixed Configuration Switches typically come in 5, 8, 10, 16, 24, 28, 48, and 52-port
configurations. These ports may be a combination of SFP/SFP+ slots for fiber connectivity, but
more commonly they are copper ports with RJ-45 connectors on the front, allowing for
distances up to 100 meters. With Fiber SFP modules, you can go distances up to 40 kilometers.
Power over Ethernet is a capability that facilitates powering a device (such as an IP phone, IP
Surveillance Camera, or Wireless Access Point) over the same cable as the data traffic. One of
the advantages of PoE is the flexibility it provides in allowing you to easily place endpoints
anywhere in the business, even places where it might be difficult to run a power outlet. One
example is that you can place a Wireless Access Point inside a wall or ceiling.
Switches deliver power according to a few standards – IEEE 802.3af delivers power up to 15.4
Watts on a switch port whereas IEEE 802.3at (also known as POE+) delivers power up to 30
Watts on a switch port. For most endpoints, 802.3af is sufficient but there are devices, such as
Video phones or Access Points with multiple radios, which have higher power needs. It’s
important to point out that there are other PoE standards currently being developed that will
deliver even higher levels of power for future applications. Switches have a power budget set
aside for running the switch itself, and also an amount of power dedicated for POE endpoints.
TYPES OF SWITCHING :---------------------------------
Circuit switching:-------------------------------
# In circuit switching, each data unit knows the entire path address, which is provided by the
source.
# Resource reservation is the feature of circuit switching because path is fixed for data
transmission.
Packet switching:-------------------
# In packet switching, data is processed at all intermediate nodes, including the source system.
# Virtual circuit
Computer networks that provide connection-oriented service are called Virtual Circuits while
those providing connection-less service are called Datagram networks. As background,
the Internet we use is actually based on a Datagram network (connection-less) at the network
level, as all packets from a source to a destination do not follow the same path.
Let us see the key differences between these two approaches:
Virtual Circuits-
1. It is connection-oriented simply meaning that there is a reservation of resources like
buffers, CPU, bandwidth,etc. for the time in which the newly setup VC is going to be used
by a data transfer session.
2. First packet goes and reserves resources for the subsequent packets which as a result
follow the same path for the whole connection time.
3. Since all the packets are going to follow the same path, a global header is required only for
the first packet of the connection and other packets generally don’t require global
headers.
4. Since data follows a particular dedicated path, packets reach the destination in order.
5. From above points, it can be concluded that Virtual Circuits are highly reliable means of
transfer.
6. Since each time a new connection has to be set up with reservation of resources and extra
information handling at routers, it is simply costly to implement Virtual Circuits.
Datagram Networks:
VLAN:---------------------------------
• What happens when a computer connected to the Research switch sends a broadcast like
an ARP request?
• What happens when the Helpdesk switch fails?
• Will our users at the Human Resource switch have fast network connectivity?
• How can we implement security in this network?
Now let me explain why this is a bad network design. If any of our computers sends a
broadcast what will our switches do? They flood it! This means that a single broadcast frame
will be flooded on this entire network. This also happens when a switch hasn’t learned about a
certain MAC address, the frame will be flooded.
If our Helpdesk switch fails, users from Human Resources are “isolated”
from the rest and unable to access other departments or the internet; this applies to other
switches as well. Everyone has to go through the Helpdesk switch in order to reach the Internet
which means we are sharing bandwidth, probably not a very good idea performance-wise.
Last but not least, what about security? We could implement port-security and filter on MAC
addresses but that’s not a very secure method since MAC addresses are very easy to spoof.
VLANs are one way to solve our problems.
One more question I’d like to ask you to refresh your knowledge:
1.STATIC VLAN:------------------------
# Static VLANs are based on port numbers.
# You need to manually assign a port on a switch to a VLAN.
# Also called port-based VLANs.
# One port can be a member of only one VLAN.
Example:--
In this lesson I will show you how to configure VLANs on Cisco Catalyst Switches and how
to assign interfaces to certain VLANs. Let’s start with a simple network topology:
Let’s start with a simple example. H1 and H2 are connected to SW1.
SW1#show vlan
Interesting…VLAN 1 is the default VLAN and you can see that all active interfaces are
assigned to VLAN 1.
VLAN information is not saved in the running-config or startup-config but in a separate file
called vlan.dat on your flash memory. If you want to delete the VLAN information you
should delete this file by typing delete flash:vlan.dat. I configured an IP address on H1 and
H2 so they are in the same subnet.
Let’s see if H1 and H2 can reach each other:
Even with the default switch configuration H1 is able to reach H2. Let’s see if I can create a
new VLAN for H1 and H2:
SW1(config)#vlan 50
SW1(config-vlan)#name Computers
SW1(config-vlan)#exit
This is how you create a new VLAN. If you want you can give it a name but this is optional.
I’m calling my VLAN “Computers”.
SW1#show vlan
VLAN 50 was created on SW1 and you can see that it’s active. However no ports are
currently in VLAN 50. Let’s see if we can change this…
SW1(config)#interface fa0/1
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 50
SW1(config)#interface fa0/2
SW1(config-if)#switchport mode access
SW1(config-if)#switchport access vlan 50
First I will configure the switchport in access mode with the switchport mode access
command. By using the switchport access vlan command we can move our interfaces to
another VLAN.
SW1#show vlan
Excellent! Both computers are now in VLAN 50. Let’s verify our configuration by checking if
they can ping each other:
# if the TTL field reaches zero before the datagram arrives at its destination, then the
datagram is discarded and an ICMP error datagram is sent back to sender.
# the purpose of the TTL field is to avoid a situation in which an undeliverable datagram
keeps circulating on an internet system, eventually swamping the system.
Dynamic vlan:--------
# for Dynamic VLAN configuration a software called VMPS ( VLAN membership policy
server) is needed.
# Trunks are required to carry VLAN traffic from one switch to another.
# the trunk port concept was inspired by telephone system trunks, which carry multiple telephone
conversations at a time.
Types of links/ports
1.access link:-------
# part of one vlan (access port carries traffic of only one VLAN)
# any device attached to an access port is unaware of a VLAN membership—the device
just assumes it’s part of some broadcast domain.
# switches remove any VLAN information from the frame before it is forwarded out to an
access-link device.
# a switch port can be configured as either an access port or a trunk port.
2.trunk links:------------
# a trunk link carries traffic of multiple VLANs, from 1 to 4094 VLANs at a time (but in practice the
amount is really only up to 1001 on many switches).
VLAN Frame Tagging Protocols (ISL and
dot1.q)
There are two types of frame tagging protocols. These are :-----------------------
1.ISL(Inter-SwitchLink)
2.Dot1Q (or IEEE 802.1Q)
ISL encapsulates the frame with a header (26 bytes) and a trailer (4 bytes), so ISL
increases the size of a frame by 30 bytes. It is a Cisco proprietary protocol and
is not supported on newer Cisco devices. ISL supports 1000 VLANs on a trunk port.
You can configure ISL on Cisco switches like below:
Switch(config)# interface fa0/0
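The remaining commands would typically look like this (a sketch, assuming a switch model that still supports ISL; for dot1q you would use "switchport trunk encapsulation dot1q" instead):
Switch(config-if)# switchport trunk encapsulation isl
Switch(config-if)# switchport mode trunk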
Dot1Q (or IEEE 802.1Q) is the industry standard. So with this frame tagging protocol,
you can trunk between different vendors’ switches. Dot1Q modifies the layer-2 header and adds a 4-
byte VLAN tag into it. Because of this process, the frame CRC value is recalculated.
Dot1Q supports 4096 VLANs on a trunk.
The normal maximum frame size is 1514 bytes. With ISL, this value increases by 30 bytes and the frame
becomes a giant to other vendors’ switches. But with dot1q, the frame size only grows from 1514 to
1518 bytes, and this value is supported by all other vendors’ switches.
• 802.1Q: This is the most common trunking protocol. It’s a standard and supported by
many vendors.
• ISL: This is the Cisco trunking protocol. Not all switches support it.
Trunking Modes
Access: puts the interface in a permanent non-trunking mode, even if the other end is in
trunking mode.
# In this mode the switchport does not send or receive negotiation messages; it forces the port
into access mode whether or not the neighboring switch agrees.
Trunk: puts the interface in a permanent trunking mode, even if the other end is not in
trunking mode.
# In this mode the switchport sends and receives negotiation messages; it forces the port
into trunk mode whether or not the neighboring switch agrees.
Dynamic Auto: puts the interface ready to become a trunk if the other end is in trunk or dynamic
desirable mode. It is a passive mode: it does not actively attempt to convert the link to a
trunk. This is the default mode of recent Cisco switches.
# In this mode the switchport does not send negotiation messages; it only receives them.
Switch(config-if)# switchport mode dynamic auto
# For Router on a Stick (Inter-VLAN Routing) configuration, we will create virtual
subinterfaces under the router's physical interface. Then, we will assign each of these virtual
interfaces to a specific VLAN.
# We will also create our VLANs and configure the PCs on that VLAN. For Router on Stick
topology (Inter VLAN Routing), we will use one switch, one router and six PCs in Packet
Tracer. And we will have 3 VLANs.
Let’s start to configure our Router on Stick topology (Inter VLAN Routing).
We will use 10.0.0.0/24, 20.0.0.0/24 and 30.0.0.0/24 blocks for our Packet Tracer Router
on Stick topology example. The first block will be for VLAN 2, the second will be for VLAN 3
and the last one will be for VLAN 4.
Now, let’s configure our VLANs and assign the interfaces to these VLANs. Firstly we will
create VLAN 2,3 and 4. Then, we will enter the interface range and configure the interface
range as access interface. Lastly, we will assign the interface to a specific VLAN with
“switchport access vlan” command.
(output of the show vlan command omitted here)
As you can see, for our Router on Stick topology, interfaces Fa0/2 and Fa0/3 are
members of VLAN 2, interfaces Fa0/4 and Fa0/5 are members of VLAN 3, and interfaces Fa0/6
and Fa0/7 are members of VLAN 4. Interface Fa0/1 is not in the VLAN table because it
is our Trunking port.
It is time to configure the switch’s router-facing interface, interface Fa0/1. We will connect the
switch to the router with this interface, and it will be a trunk port. In our Router on
Stick topology, the trunk interface will pass all the VLANs that we allowed (see the sketch below).
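A minimal sketch of the trunk and subinterface configuration (the gateway addresses ending in .1 and the interface numbers are assumptions, not taken from the text):
Switch(config)#interface fastEthernet 0/1
Switch(config-if)#switchport mode trunk
Router(config)#interface fastEthernet 0/0.2
Router(config-subif)#encapsulation dot1Q 2
Router(config-subif)#ip address 10.0.0.1 255.255.255.0
Router(config-subif)#interface fastEthernet 0/0.3
Router(config-subif)#encapsulation dot1Q 3
Router(config-subif)#ip address 20.0.0.1 255.255.255.0
Router(config-subif)#interface fastEthernet 0/0.4
Router(config-subif)#encapsulation dot1Q 4
Router(config-subif)#ip address 30.0.0.1 255.255.255.0
Each PC in a VLAN then uses the matching subinterface address as its default gateway.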
Native VLAN:---------------------------
# If a frame received on a dot1Q trunk link does not carry a VLAN tag, it is assumed to
belong to the native VLAN.
S1(config)#int f0/1
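The command that typically follows on the trunk interface is shown below (VLAN 99 is just an example value):
S1(config-if)#switchport trunk native vlan 99
Both ends of the trunk should agree on the native VLAN, otherwise the switches will report a native VLAN mismatch.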
# VTP allows a network manager to configure a switch so that it will propagate VLAN
configuration to other switches in the network.
# VTP manages the addition, deletion, and renaming of VLANs on a network-wide basis.
# When you configure a new VLAN on one VTP switch, the VLAN is distributed through all
switches in the domain. Reduces the need to configure the same VLAN everywhere.
VTP modes:--------
1. server mode—this is the default mode. It sends VLAN information to other switches; by
default all switches start out working as servers.
# synchronizes VLAN configuration with latest information received from other switches in
the management domain.
2. client mode--- it receives VLAN information from the server and forwards it to other switches.
# synchronizes VLAN configuration with the latest information received from other switches in
the management domain.
# forwards VTP advertisements received from other switches in the same management
domain.
3. transparent mode--- does not synchronize its VLAN configuration with information received from other
switches in the management domain; it only forwards VTP advertisements.
VTP PRUNING:---------------
# it is a feature used to eliminate (prune) unnecessary flooded traffic for VLANs that are not needed on a
given trunk link.
Commands :---
#int f0/1
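A minimal VTP configuration sketch (the domain name, mode and password are example values):
Switch(config)#vtp domain zee
Switch(config)#vtp mode server
Switch(config)#vtp password MYPASS
Switch(config)#vtp pruning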
Note 2:------
VTP Version reflects the capabilities of the device/code revision.
VTP V2 Mode reflects the vtp version it is currently configured to
operate in.
Your output shows a device that is capable of running Version 2, but is running Version 1
currently.
You can move it into version 2 by entering the "vtp version 2" command in global configuration
mode.
Note 3:--
VTP trap is disabled by default. If you enable this feature, it causes an SNMP message to be
generated every time a new VTP message is sent.
This example shows how to send VTP traps to the NMS:
Switch(config)# snmp-server enable traps vtp
Note 4:--
The general purpose of an MD5 value is to verify the integrity of a received packet and to detect
any changes to the packet or corruption of the packet during transit
• Message Digest 5 (MD5) carries the VTP password, if MD5 is configured and used to authenticate
the validation of a VTP update.
• VTP takes the VTP domain name into account when calculating the VTP MD5 hash
If the receiving switch finds the MD5 does not match, it implies one of two things:
1) the domain name is wrong, or
2) the VTP password does not match.
But that also means hackers could alter the other fields present in a VTP message while keeping the domain
name unaltered.
=================================================================================
Please consider the following example:
sw1----------------------------------------sw2
sw2 receives a vtp summary advertisement with high config number.
sw2 sends a vtp advertisement request.
sw1 first sends vtp summary advertisements listing the number of subset advertisement to follow.
sw1 then sends vtp subset advertisements.
Here is my question. Since the MD5 hash is computed using the domain name and the configured password (if one
is configured), all the VTP advertisements sent by sw1 should have the same MD5 hash, because the
domain name and password are the same.
However, when I perform the lab I find that sw1 always sends VTP messages with a different hash value.
Sw1----------------------------------sw2
sw1#show vtp status
VTP Version :2
Configuration Revision :0
Maximum VLANs supported locally : 36
Number of existing VLANs :5
VTP Operating Mode : Server
VTP Domain Name : zee
VTP Pruning Mode : Disabled
VTP V2 Mode : Disabled
VTP Traps Generation : Disabled
MD5 digest : 0xC2 0x6F 0x90 0xF9 0x75 0x7F 0x92 0x68
Configuration last modified by 0.0.0.0 at 0-0-00 00:00:00
Local updater ID is 0.0.0.0 (no valid interface found)
next I configure vlan2 on sw1 which increase the config revision number to1
R1#show vtp status
VTP Version :2
Configuration Revision :1
Maximum VLANs supported locally : 36
Number of existing VLANs :6
VTP Operating Mode : Server
VTP Domain Name : zee
VTP Pruning Mode : Disabled
VTP V2 Mode : Disabled
VTP Traps Generation : Disabled
MD5 digest : 0xE3 0xC6 0x61 0x30 0x70 0x95 0xBA 0xEC
Configuration last modified by 0.0.0.0 at 3-1-02 00:02:49
Local updater ID is 0.0.0.0 (no valid interface found)
The MD5 hash is different each time a VTP message is transmitted, even though the domain name and password (it
is null) are the same.
# Rapid PVST+ (Rapid Per VLAN Spanning Tree Plus) – Cisco Proprietary
# MST (Multiple Spanning Tree) – 802.1s
Here, when Switch A receives a frame from Segment 1 and sends it to Segment 2,
Switch B also learns it from Segment 2 and sends this frame back to Segment 1 as if it
were being sent for the first time. So frames are doubled and a Layer 2 loop occurs. This
Layer 2 loop causes a broadcast storm: an infinite frame send/receive process.
One loop in a Layer 2 domain can cause further Layer 2 loops, especially when the switched
network is a large network.
STP TERMS:----
1. ROOT Bridge:--the switch with the lowest bridge ID becomes the root bridge.
2. Bridge ID:--the bridge ID is how STP keeps track of all the switches in the network. It is
determined by a combination of the bridge priority, which is 32,768 by default on all Cisco
switches, and the MAC address.
3.BPDU (bridge protocol data unit):- BPDUs contain the bridge ID; switches exchange BPDUs
to compare bridge IDs.
A BPDU contains information regarding ports, switches, port priority and address.
Root port:--
# shortest path to the root bridge.
The port selection process is done in order. First the Root Bridge is selected, secondly the Root Ports
on all the switches, then the Designated Ports are selected, and lastly the remaining ports
become Non-Designated Ports, meaning Blocking Ports.
Every non-root switch in the switched domain has one Root Port, the port with the lowest cost to the Root
Bridge. The Root Ports are in the “Forwarding State”.
On each segment, one port is selected as the Designated Port. This is done by selecting a
Designated switch in that segment: the switch with the lowest cost to the Root Bridge. Its port in that
segment is selected as the Designated Port. All ports of the Root Bridge are also “Designated Ports”.
The port on the other side of a Designated Port is selected as a Non-Designated (Blocking) Port and
is blocked.
Here, if a forwarding port goes offline, a Blocking Port becomes active and goes into the
Forwarding State.
Think about the below topology. Let’s determine the STP Port Roles of this topology.
Here, after the Root Bridge selection, port roles will be determined. The ports of the Root
Bridge will be Designated Ports. The minimum-cost port from each switch to the Root Bridge
will be selected as the Root Port. On each segment there will be one Designated Port
active and the port on the opposite side will be in Blocking.
STP Timers
There are different important timers in Spanning Tree Protocol (STP). These timers, their
definitions and the default values are given below:
• Hello (time between two BPDUs – 2 seconds)
• Forward Delay (time spent in the Listening and Learning states – 15 seconds each)
• Max Age (how long a switch keeps BPDU information before discarding it – 20 seconds)
Command                              Description
show spanning-tree                   Displays information about STP.
show spanning-tree active            Displays information about STP active interfaces only.
show spanning-tree bridge            Displays the bridge ID, timers, and protocol for the local bridge on the switch.
show spanning-tree brief             Displays a brief summary about STP.
show spanning-tree interface         Displays the STP interface status and configuration of specified interfaces.
show spanning-tree mst               Displays information about Multiple Spanning Tree (MST) STP.
show spanning-tree root              Displays the status and configuration of the root bridge for the STP instance to which this switch belongs.
show spanning-tree summary           Displays summary information about STP.
show spanning-tree vlan              Displays STP information for specified VLANs.
# port fast causes a port to enter the spanning-tree forwarding state immediately
bypassing the listening and learning states.
Note :--------
1. Port fast should be used only when connecting a single end station to a switch port.
2. If you enable PortFast on a port connected to another networking device, such as a switch,
you can create network loops.
#spanning-tree portfast
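In practice this command is applied under an access interface, for example (the interface number is just an example):
SW1(config)#interface fastEthernet 0/1
SW1(config-if)#spanning-tree portfast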
As you know, STP (Spanning Tree Protocol) is the key protocol of Switching world. With
STP, link redundancy is provided and switching loops are avoided.
STP has different versions. One of the STP versions is RSTP (Rapid Spanning Tree
Protocol). As its name suggests, RSTP (Rapid Spanning Tree Protocol) is the fastest-converging
version of STP.
In this example, we will configure RSTP (Rapid Spanning Tree Protocol) with Packet
Tracer.
For our RSTP (Rapid Spanning Tree Protocol) example, we will use the below switching
topology.
STP (Spanning Tree Protocol) has four states: Blocking, Listening,
Learning and Forwarding. With RSTP (Rapid Spanning Tree Protocol), the Spanning Tree
states Blocking and Listening are bypassed. The RSTP states start with Discarding and then
go through Learning and Forwarding.
In STP (Spanning Tree Protocol), the Blocking state lasts 20 seconds, the Listening state 15
seconds and the Learning state 15 seconds. So, for STP, reaching the Forwarding state
needs up to 50 seconds. This total time is about 15 seconds in RSTP (Rapid Spanning Tree Protocol),
because RSTP bypasses the Blocking and Listening states.
#in regular STP, BPDUs are originated by the root and relayed by each switch.
#in RSTP, each switch originates its own BPDUs, whether or not it receives a BPDU on its root port.
#per-VLAN rapid spanning tree is implemented as Rapid PVST+ on Catalyst switches.
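A minimal sketch of enabling Rapid PVST+ globally on a Catalyst switch:
SW1(config)#spanning-tree mode rapid-pvst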
STP state     RSTP state
Disabled      Discarding
Blocking      Discarding
Listening     Discarding
Learning      Learning
Forwarding    Forwarding
Discarding:-----
Learning :---
Forwarding:---
RSTP synchronization:----
Alternate port:---
Backup port:---
# the backup port applies only when a single switch has links to the same segment(collision
domain)
Edge port:---
Portfast:-----
#because PortFast effectively disables STP checks on a port, it can create a loop, so we need protection
against this problem (the solutions are BPDU Guard and BPDU Filter).
How BPDU guard and BPDU filter work:-----
BPDU Guard:---
#when BPDU Guard is enabled on the interface and a BPDU is received, it puts the port into the error-disabled state.
BPDU Filter:----
#if a PortFast interface with BPDU Filter receives a BPDU, the port is not shut down; BPDU Filter basically
disables spanning-tree processing on the interface.
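A minimal sketch of enabling these protections on a PortFast access interface (you would normally pick either BPDU Guard or BPDU Filter, not both; the interface number is an example):
SW1(config)#interface fastEthernet 0/1
SW1(config-if)#spanning-tree bpduguard enable
SW1(config-if)#spanning-tree bpdufilter enable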
#uplink fast is for speeding convergence when a direct link to an upstream switch fails.
#when UplinkFast is enabled, it is enabled for the entire switch and for all VLANs.
#PortFast works on access ports for end users, but UplinkFast works on switch-to-switch links (if the direct
uplink fails, the blocking port is moved to forwarding almost immediately).
#in STP, recovery from a direct link failure takes 15 (listening) + 15 (learning) = 30 seconds, while an
indirect failure takes 20 (max age) + 15 (listening) + 15 (learning) = 50 seconds.
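UplinkFast is enabled globally, for example:
SW1(config)#spanning-tree uplinkfast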