ccna_notes_part2

The document explains FTP and TFTP protocols for file transfer, detailing their functionalities, authentication methods, and usage examples. It also covers DHCP and DNS protocols, including how DHCP assigns network parameters and how DNS translates hostnames to IP addresses. Additionally, it describes configuring Cisco routers as DHCP servers and clients, the concept of APIPA for automatic IP addressing, and the Spanning Tree Protocol for preventing network loops.

FTP & TFTP Explained

FTP (File Transfer Protocol)


FTP is a network protocol used to transfer files from one computer to another over a
TCP network. Like Telnet, it uses a client-server architecture, which means that a user
has to have an FTP client installed to access the FTP server running on a remote
machine. After establishing the FTP connection, the user can download or upload files
to and from the FTP server.

Consider the following example:

A user wants to transfer files from Host A to the FTP server. The user will start an FTP
client program (in this example, Filezilla), and initiate the connection:

In the example above, anonymous authentication was used, so the user was not
asked to provide a password. The client can now transfer files to and from the FTP
server using the graphical interface.
NOTE
FTP uses two TCP ports: port 20 for sending data and port 21 for sending control commands. The
protocol supports the use of authentication, but like Telnet, all data is sent in clear text, including
usernames and passwords.
TFTP (Trivial File Transfer Protocol)
TFTP is a network protocol used to transfer files between remote machines. It is a
simpler version of FTP, lacking some of the more advanced features FTP offers, but
requiring fewer resources than FTP.

Because of its simplicity, TFTP can be used only to send and receive files. This
protocol is not widely used today, but it can still be used to save and restore a router
configuration or to back up an IOS image.

Consider the following example:

A user wants to transfer files from Host A to the router R1. R1 is a Cisco device and it
has a TFTP server installed. The user will start a TFTP client program and initiate the
data transfer.
NOTE
TFTP doesn’t support user authentication and sends all data in clear text. It uses UDP port 69 for
communication.
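
As an illustration of the configuration-backup use case mentioned above, copying the running configuration to a TFTP server could look like the following sketch (the server address 10.0.0.100 and the suggested filename are assumptions):

R1#copy running-config tftp:

Address or name of remote host []? 10.0.0.100

Destination filename [r1-confg]?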

Copy Files with FTP


IOS includes a built-in File Transfer Protocol (FTP) client that can be used to transfer
images to and from the Cisco device. Unlike TFTP, FTP supports authentication, and
you will need to provide a valid FTP server username and password.

The following steps are required for FTP transfers:

1. create an FTP username and password by using the ip ftp username USERNAME
and ip ftp password PASSWORD global configuration commands. We need to provide
the username and password that were already created on the FTP server.
2. issue the copy ftp flash exec mode command and follow the wizard.
Here is an example. Let’s say that we want to transfer the image file from the FTP
server to a Cisco switch. We can do this using the following set of commands:

SW1(config)#ip ftp username tuna

SW1(config)#ip ftp password peyo

SW1(config)#end

SW1#

%SYS-5-CONFIG_I: Configured from console by console

SW1#copy ftp flash

Address or name of remote host []? 10.0.0.100

Source filename []? c2960-lanbasek9-mz.150-2.SE4.bin

Destination filename [c2960-lanbasek9-mz.150-2.SE4.bin]?

Accessing ftp://10.0.0.100/c2960-lanbasek9-mz.150-2.SE4.bin...

[OK - 4670455 bytes]

4670455 bytes copied in 10.02 secs (37473 bytes/sec)

To verify that the file has indeed been transferred, we can use the show flash: command:

SW1#show flash:

Directory of flash:/

1 -rw- 4414921 c2960-lanbase-mz.122-25.FX.bin

4 -rw- 4670455 c2960-lanbasek9-mz.150-2.SE4.bin

2 -rw- 1052 config.text

We can also transfer files from the IOS device to the FTP server, for example, to
backup the startup configuration. Here is an example of copying the startup
configuration of a switch to the FTP server:

SW1#copy startup-config ftp


Address or name of remote host []? 10.0.0.100

Destination filename [SW1-confg]?

Writing startup-config...

[OK - 1052 bytes]

1052 bytes copied in 0.09 secs (11000 bytes/sec)

64016384 bytes total (54929956 bytes free)

DHCP & DNS Protocols Explained


DHCP (Dynamic Host Configuration Protocol)
DHCP is a network protocol that is used to assign various network parameters to a
device. This greatly simplifies the administration of a network, since there is no need to
assign static network parameters for each device.

DHCP is a client-server protocol. A client is a device that is configured to use DHCP to
request network parameters from a DHCP server. The DHCP server maintains a pool of
available IP addresses and assigns one of them to the host. A DHCP server can also
provide some other parameters, such as:

 subnet mask
 default gateway
 domain name
 DNS server

Cisco routers can be configured as both DHCP client and DHCP server.

DHCP process explained:

A DHCP client goes through a four-step process:


1: A DHCP client sends a broadcast packet (DHCP Discover) to discover DHCP
servers on the LAN segment.

2: The DHCP servers receive the DHCP Discover packet and respond with DHCP
Offer packets, offering IP addressing information.

3: If the client receives the DHCP Offer packets from multiple DHCP servers, the first
DHCP Offer packet is accepted. The client responds by broadcasting a DHCP
Request packet, requesting the network parameters from the server that responded
first.

4: The DHCP server approves the lease with a DHCP Acknowledgement packet. The
packet includes the lease duration and other configuration information.
NOTE
DHCP uses a well-known UDP port number 67 for the DHCP server and the UDP port number 68 for
the client.

DNS (Domain Name System)


DNS is a network protocol used to translate hostnames into IP addresses. DNS is not
required to establish a network connection, but it is much more user friendly for human
users than the numeric addressing scheme. Consider this example – you can access
the Google homepage by typing 216.58.207.206, but it’s much easier just to
type www.google.com!

To use DNS, you must have a DNS server configured to handle the resolution process.
A DNS server has a special-purpose application installed. The application maintains a
table of dynamic or static hostname-to-IP address mappings. When a user requests
some network resource using a hostname (e.g. by typing www.google.com in a
browser), a DNS request is sent to the DNS server asking for the IP address of the
hostname. The DNS server then replies with the IP address. The user’s browser can
now use that IP address to access www.google.com.
The figure below explains the concept:

Suppose that the DNS Client wants to communicate with the server named Server1.
Since the DNS Client doesn’t know the IP address of Server1, it sends a DNS Request
to the DNS Server, asking for Server1’s IP address. The DNS Server replies with the IP
address of Server1 (DNS Reply).

The picture below shows a sample DNS record, taken from a DNS server:
Here you can see that the host with the hostname APP1 is using the IP address
of 10.0.0.3.
NOTE
DNS uses a well-known UDP port 53.
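
On a Cisco router, a minimal DNS client configuration could look like this sketch (the server address and domain name are assumptions):

R1(config)#ip domain-lookup

R1(config)#ip name-server 10.0.0.53

R1(config)#ip domain-name example.com

With this in place, a command such as ping server1 would trigger a DNS request to 10.0.0.53 to resolve the name before sending the ICMP packets.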

Configure Cisco router as DHCP server


Dynamic Host Configuration Protocol (DHCP) is an application layer protocol used to
distribute various network configuration parameters to devices on a TCP/IP network,
such as IP addresses, subnet masks, default gateways, and DNS servers. DHCP employs a
client-server architecture; a DHCP client is configured to request network parameters
from a DHCP server on the network. A DHCP server is configured with a pool of
available IP addresses and assigns one of them to the DHCP client.

A Cisco router can be configured as a DHCP server. Here are the steps:

1. Exclude IP addresses from being assigned by DHCP by using the ip dhcp
excluded-address FIRST_IP LAST_IP command.
2. Create a new DHCP pool with the ip dhcp pool NAME command.
3. Define a subnet that will be used to assign IP addresses to hosts with
the network SUBNET SUBNET_MASK command.
4. Define the default gateway with the default-router IP command.
5. Define the DNS server with the dns-server IP address command.
6. (Optional) Define the DNS domain name by using the domain-name
NAME command.
7. (Optional) Define the lease duration by using the lease DAYS HOURS
MINUTES command. If you don’t specify this argument, the default lease time of
24 hours will be used.

Here is an example configuration:

Floor1(config)#ip dhcp excluded-address 192.168.0.1 192.168.0.50

Floor1(config)#ip dhcp pool Floor1DHCP

Floor1(dhcp-config)#network 192.168.0.0 255.255.255.0

Floor1(dhcp-config)#default-router 192.168.0.1

Floor1(dhcp-config)#dns-server 192.168.0.1

In the example above you can see that I’ve configured the DHCP server with the
following parameters:

 the IP addresses from the 192.168.0.1 – 192.168.0.50 range will not be assigned
to hosts
 the DHCP pool was created and named Floor1DHCP
 the IP addresses assigned to the hosts will be from the 192.168.0.0/24 range
 the default gateway’s IP address is 192.168.0.1
 the DNS server’s IP address is 192.168.0.1
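
The optional steps 6 and 7 could be added to the same pool; here is a hedged sketch (the domain name and lease length are illustrative values):

Floor1(config)#ip dhcp pool Floor1DHCP

Floor1(dhcp-config)#domain-name example.local

Floor1(dhcp-config)#lease 7 0 0

This would hand out the domain name example.local to the clients and set the lease duration to 7 days.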

To view information about the currently leased addresses, you can use the show ip
dhcp binding command:
Floor1#show ip dhcp binding

IP address       Client-ID/              Lease expiration    Type
                 Hardware address
192.168.0.51     0060.5C2B.3DCC          --                  Automatic

In the output above you can see that there is a single DHCP client that was assigned
the IP address of 192.168.0.51. Since we’ve excluded the IP addresses from
the 192.168.0.1 – 192.168.0.50 range, the device got the first address available
– 192.168.0.51.
To display information about the configured DHCP pools, you can use the show ip dhcp
pool command:

Floor1#show ip dhcp pool

Pool Floor1DHCP :

Utilization mark (high/low) : 100 / 0

Subnet size (first/next) : 0 / 0

Total addresses : 254

Leased addresses : 1

Excluded addresses : 1

Pending event : none

1 subnet is currently in the pool

Current index IP address range Leased/Excluded/Total

192.168.0.1 192.168.0.1 - 192.168.0.254 1 / 1 / 254

This command displays some important information about the DHCP pool(s) configured
on the device – the pool name, total number of IP addresses, the number of leased and
excluded addresses, subnet’s IP range, etc.

DHCP Relay Agent


When a device is configured as a Dynamic Host Configuration Protocol (DHCP) client, it
will send a broadcast packet to discover DHCP servers on the network. Routers do not
forward broadcast packets by default, so if a DHCP server is in a different network than
DHCP clients, it will not receive DHCP requests. Consider the following scenario:

The workstation on the left is configured as a DHCP client. R2 on the right is configured
as a DHCP server. The workstation sends a DHCP Discover packet but receives no
DHCP Offer in reply, since R1 doesn't forward the packet to R2 (broadcast packets stay on
the local subnet).

To rectify this, we can configure R1 to act as a DHCP relay agent and forward the
DHCP requests to the configured DHCP server. This is done by issuing the ‘ip helper-
address DHCP_SERVER_IP_ADDRESS’ command on its Gi0/0 interface. This
command instructs the router to do the following:

1. watch for DHCP messages on the interface


2. when a DHCP packet arrives, set the packet’s source IP address to the IP address of
Gi0/0
3. change the destination IP address of the packet from 255.255.255.255 (the broadcast
address) to the IP address of the DHCP server and send it to R2
4. when the answer from the DHCP server is received, change the packet’s destination IP
to 255.255.255.255 and send it out its Gi0/0 interface so that the workstation (which
does not have an IP address yet) can receive the DHCP message.

Configure DHCP Relay Agents


To configure the interface Gi0/0 on R1 to forward DHCP packets, only a single
command is needed:

R1(config-if)#ip helper-address 172.16.0.2

To make sure that the workstation indeed got its IP parameters, we can issue
the ‘ipconfig’ command:

C:\>ipconfig

FastEthernet0 Connection:(default port)

Link-local IPv6 Address.........: FE80::2E0:B0FF:FEB3:73E

IP Address......................: 10.0.0.104

Subnet Mask.....................: 255.255.255.0

Default Gateway.................: 10.0.0.1

Configure Cisco router as a DHCP client


Cisco routers can be configured as both DHCP servers and DHCP clients. An interface
on a router that connects to the Internet Service Provider (ISP) is often configured as a
DHCP client. This way, the ISP can provide the IP information to the client device.

To configure an interface as a DHCP client, the ip address dhcp interface mode
command is used. Consider the following example:

We have a small network consisting of a router and a DHCP server. We want to
configure the interface Gi0/0 on the router as a DHCP client. This is how this is done:
R1(config)#int Gi0/0

R1(config-if)#ip address dhcp

We can verify that the Gi0/0 interface has indeed got its IP address from the DHCP
server by running the show ip int brief command:

R1#show ip int brief

Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0     192.168.0.1     YES DHCP   up                    up
GigabitEthernet0/1     unassigned      YES unset  administratively down down

The DHCP keyword in the Method column indicates that the IP information was
obtained from the DHCP server.
NOTE
If you want to configure a Cisco switch as a DHCP client, the ip address dhcp command is used
under the VLAN 1 configuration mode.
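
A minimal sketch of that switch configuration (interface VLAN 1 is the default management interface):

SW1(config)#interface vlan 1

SW1(config-if)#ip address dhcp

SW1(config-if)#no shutdown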

What is APIPA (Automatic Private IP Addressing)?


Automatic Private IP Addressing (APIPA) is a feature in operating systems (such as
Windows) that enables computers to automatically self-configure an IP address and
subnet mask when their DHCP server isn’t reachable. The IP address range for APIPA
is 169.254.0.1-169.254.255.254, with the subnet mask of 255.255.0.0.

When a DHCP client boots up, it looks for a DHCP server in order to obtain network
parameters. If the client can’t communicate with the DHCP server, it uses APIPA to
configure itself with an IP address from the APIPA range. This way, the host will still be
able to communicate with other hosts on the local network segment that are also
configured for APIPA.

Consider the following example:

The host on the left is configured as a DHCP client. The host boots up and looks for
DHCP servers on the network. However, the DHCP server is down and can’t respond to
the host. After some time (from a couple of seconds to a couple of minutes, depending
on the operating system) the client auto-configures itself with an address from the
APIPA range (e.g. 169.254.154.22).
NOTE
If your host is using an IP address from the APIPA range, there is usually a problem on the network.
Check the network connectivity of your host and the status of the DHCP server.

The APIPA service also checks regularly for the presence of a DHCP server (every
three minutes). If it detects a DHCP server on the network, the DHCP server replaces
the APIPA networking addresses with dynamically assigned addresses.

What is Spanning Tree Protocol? STP Overview


Spanning Tree Protocol (STP) is a network protocol designed to prevent layer 2 loops.
It is standardized as the IEEE 802.1D protocol. STP blocks some ports on switches with
redundant links to prevent broadcast storms and ensure a loop-free logical topology.
With STP in place, you can have redundant links between switches in order to provide
redundancy.

Broadcast Storm
To better understand the importance of STP and how STP prevents broadcast storms
on a network with redundant links, consider the following example:

SW1 sends a broadcast frame to SW2 and SW3. Both switches receive the frame and
forward the frame out of every port except the port on which the frame was received. So
SW2 forwards the frame to SW3. SW3 receives that frame and forwards it to SW1. SW1
then again forwards the frame to SW2! The same thing also happens in the opposite
direction. Without STP in place, these frames would loop forever. STP prevents loops
by placing one of the switch ports in a blocking state.

Loop-Free Topology
With Spanning Tree Protocol (STP), our network topology above would look like this:
In the topology above, STP has placed one port on SW3 in the blocking state. That port
will no longer process any frames except the STP messages. If SW3 receives a
broadcast frame from SW1, it will not forward it out to the port connected to SW2.

STP uses the spanning tree algorithm to prevent loops. The switches among
themselves send Bridge Protocol Data Units (BPDU), and a root bridge or root switch
will be elected among the switches in the network. This will determine whether a port is
a root port, designated port, or blocked port.

The root bridge is the switch with the most preferred bridge ID. The root port is the port
with the shortest path for forwarding frames toward the root bridge, and the designated port
forwards frames away from the root bridge.

NOTE

Spanning Tree Protocol (STP) enables layer 2 redundancy. In the example above, if the
link between SW3 and SW1 fails, STP will converge and unblock the port on SW3.

There are various STP modes, including Rapid Spanning Tree Protocol (RSTP),
Multiple Spanning Tree Protocol (MSTP), and Per VLAN Spanning Tree Plus (PVST+).

How STP works


STP uses the Spanning Tree Algorithm (STA) to create a topology database of the
network. To prevent loops, the STA places some interfaces in forwarding state and other
interfaces in blocking state. How does STP decide in which state a port will be
placed? A couple of criteria exist:

1. all switches in a network elect a root switch. All working interfaces on the root switch
are placed in forwarding state.
2. all other switches, called nonroot switches, determine the best path to get to the
root switch. The port used to reach the root switch (root port) is placed in forwarding
state.
3. on the shared Ethernet segments, the switch with the best path to reach the root
switch is placed in forwarding state. That switch is called the designated switch and its
port is known as the designated port.
4. all other interfaces are placed in blocking state and will not forward frames.
NOTE
STP considers only working interfaces – shutdown interfaces or interfaces without the cable installed
are placed in an STP disabled state.

An example will help you understand the concept:

Let’s say that SW1 is elected as the root switch. All ports on SW1 are placed into
forwarding state. SW2 and SW3 choose ports with the lowest cost to reach the root
switch to be the root ports. These ports are also placed in forwarding state. On the
shared Ethernet segment between SW2 and SW3, port Fa0/1 on SW2 has the lowest
cost to reach the root switch. This port is placed in forwarding state. To prevent loops,
port Fa0/1 on SW3 is placed in blocking state.
NOTE
A switch with the lowest switch ID will become the root switch. A switch ID consists of two
components: the switch’s priority (by default 32,768 on Cisco switches) and the switch’s MAC
address.

BPDU (Bridge Protocol Data Unit)


BPDUs are messages used by switches to share STP information with each other in
order to elect a root switch and detect loops. The most common messages are Hello
BPDUs which include the following information:

 root switch ID
 sender’s switch ID
 sender’s root cost
 Hello, MaxAge, and forward delay timers

Electing the Root Switch in STP


The STP process works by default on Cisco switches and begins with the root switch
election. The election is based on the bridge IDs (BIDs) sent in the BPDUs. Each
switch that participates in STP will have an 8-byte switch ID that comprises the
following fields:

 2-byte priority field – by default, all switches have the priority of 32768. This
value can be changed using configuration commands.
 6-byte system ID – a value based on the MAC address of each switch.

The switch with the lowest BID will become the root switch, with a lower number meaning
better priority.

Consider the following example:

As mentioned above, the switch with the lower BID wins. Since by default all switches
have the BID priority of 32768, the second comparison has to be made – the lowest
MAC address. In our example SW1 has the lowest MAC address and becomes the root
switch.
NOTE
For simplicity, all ports on switches in the example above are assigned to VLAN 1. Also, note that
STP adds the VLAN number to the priority value, so all switches actually have the BID priority of
32,769.

To influence the election process, you can change the BID priority to a lower value on a
switch you would like to become root. This can be done using the following command:

(config)#spanning-tree vlan ID priority VALUE

The priority must be in increments of 4096, so if you choose any other value, you will
get an error and the possible values will be listed:
(config)#spanning-tree vlan 1 priority 224

% Bridge Priority must be in increments of 4096.

% Allowed values are:

0 4096 8192 12288 16384 20480 24576 28672

32768 36864 40960 45056 49152 53248 57344 61440

(config)#spanning-tree vlan 1 priority 4096

Spanning Tree Priority: Root Primary and Root Secondary

Spanning Tree Protocol is a Layer 2 loop prevention mechanism that will block a port
on a network switch if it detects a loop of broadcast messages within the topology.
Spanning tree is enabled by default on Cisco switches.
Switches send out Bridge Protocol Data Units (BPDUs) on all active interfaces. A BPDU
contains the STP information needed to elect a root switch and detect loops.

STP Root Bridge Election


A root bridge is elected among the interconnected switches. The root bridge is the
central point of all switches and is responsible for forwarding the traffic. The switches
select the root bridge by using the bridge priority and the MAC address. Each switch has
its own bridge ID and a default priority value of 32768. The bridge priority takes
precedence over the MAC address: if a switch has the lowest bridge priority value
among the switches within the LAN, it will be elected as the spanning tree root
bridge.

If all switches have the same bridge priority value, then the MAC address is the
tiebreaker, and the switch with the lowest MAC address will be elected as the root bridge.
Most older switches have a lower MAC address but also lower bandwidth and limited
CPU/memory compared to newer switches. Electing an older switch as the root bridge
will therefore cause suboptimal operation on your network.

Spanning Tree Priority Root Bridge Optimization


We should avoid electing the root bridge based on the MAC address, as this can cause
suboptimal network performance by choosing the oldest switch with the lowest MAC
address in the network. The example spanning tree topology below shows the LAN
switches electing Switch6 as the Root Bridge through the MAC address tiebreaker. Let's
assume that Switch6 is the oldest switch in the group. All traffic will be processed first
on Switch6 before it reaches the destination switch, which results in poor performance
of the network.

To prevent having a suboptimal network, we need to manually choose a root bridge
within the network. We can either manually configure a lower bridge priority value or
assign the switch as the root bridge by using the ‘root primary’ command. This
will set the bridge priority to 24576, which is lower than the default priority.

What if the primary root bridge fails? To optimize further, we need to assign the other
core switch as the secondary root bridge in case the primary root bridge is not
operational. To do that, we enter the ‘root secondary’ command. This will set the
bridge priority to 28672, which is lower than the default priority but higher than the root
primary. When the primary switch fails, the switches will elect a new root bridge, failing
over to the secondary switch.
STP Root Primary and Root Secondary Configuration
Based on the diagram above, we need to manually configure the core switch as the root
bridge, as it has higher bandwidth and better features in general compared to the
other switches in the group. The configuration below shows how we configure the core
switch, Switch0, as the root bridge.

Switch0(config)#spanning-tree vlan 1 root primary

To verify, we can use the ‘show spanning-tree’ command.

Switch0#show spanning-tree vlan 1

VLAN0001

Spanning tree enabled protocol ieee

Root ID Priority 24577

Address 0001.9725.3338

This bridge is the root

Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec

Bridge ID Priority 24577 (priority 24576 sys-id-ext 1)

Address 0001.9725.3338

Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec

Below is the configuration to assign Switch1 as the secondary root.

Switch1(config)#spanning-tree vlan 1 root secondary

Again, we can use the ‘show spanning-tree’ command to verify our configuration.

Switch1#show spanning-tree vlan 1

VLAN0001

Spanning tree enabled protocol ieee


Root ID Priority 24577

Address 0001.9725.3338

Cost 19

Port 1 (FastEthernet0/1)

Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec

Bridge ID Priority 28673 (priority 28672 sys-id-ext 1)

Address 0040.0B2C.E63A

Hello Time 2 sec Max Age 20 sec Forward Delay 15 sec

Aging Time 20

NOTE
The spanning tree port priority value can also be changed to improve the effectiveness of STP.
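
For example, the port priority could be lowered on a specific interface so that it is preferred in tie-breaking; a hedged sketch (the interface and value are illustrative, the default priority is 128, and lower values are preferred):

Switch0(config)#interface FastEthernet0/1

Switch0(config-if)#spanning-tree port-priority 64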

Selecting STP root port


As we’ve mentioned before, all working interfaces on the root switch are placed in
forwarding state. All other switches (called nonroot switches) determine the best path to
get to the root switch and the port used to reach the root switch is placed in forwarding
state. The best path is the one with the lowest cost to reach the root switch. The cost is
calculated by adding the individual port costs along the path from the switch to the root.

Take a look at the following example:


SW1 has won the election process and is the root switch. Consider the SW3’s
perspective for choosing its root port. Two paths are available to reach the root switch,
one direct path over Fa0/1 and the other going out Fa0/2 and through SW2. The direct
path has a cost of 19, while the indirect path has the cost of 38 (19+19). That is why
Fa0/1 will become the root port on SW3.

In case the best root cost ties for two or more paths, the following tiebreakers are
applied:

 the lowest neighbor bridge ID


 the lowest neighbor port priority
 the lowest neighbor internal port number

The default port cost is defined by the operating speed of the interface:

Speed      Cost
10 Mbps    100
100 Mbps   19
1 Gbps     4
10 Gbps    2

You can override the default value on the per-interface basis using the following
command:
(config-if)#spanning-tree cost VALUE
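
For instance, making a link less preferred could look like this sketch (the interface and cost value are illustrative):

SW3(config)#interface FastEthernet0/2

SW3(config-if)#spanning-tree cost 100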

Selecting Spanning Tree Designated Port (DP)


We’ve already learned that with Spanning Tree Protocol (STP) on the shared Ethernet
segments, the switch with the best path to reach the root switch is placed in a
forwarding state. That switch is called the designated switch and its ports are known
as the designated ports. In order to avoid loops, the non-designated port on the
other end of the link is placed in a blocking state.

The designated switch is determined based on the following criteria:

1. The switch with the lowest root path cost becomes the designated switch on that
link.
2. In case of a root path cost tie, the switch with the lowest Bridge ID (BID)
becomes the designated switch.

Consider the following example:


SW1 has the lowest BID and has been selected as the root bridge or switch. SW2 and
SW3 have then determined their own root port based on the lowest port cost. On the
shared network segment between SW2 and SW3, a spanning tree designated port
needs to be selected.

Since SW3 has a lower cost to reach the root switch (4<19), its Fa0/2 port will be the
spanning tree designated port for the segment. The Fa0/2 switch port on SW2 will be
placed in a blocking state.

NOTE

If the link between SW1 and SW3 fails, STP will converge and the Fa0/2 port on SW2
will be placed in the forwarding state to forward traffic.

Spanning Tree Modes: MSTP, PVST+, and RPVST+

STP, Spanning Tree Protocol, is enabled by default on switches from all vendors.
We have different Spanning Tree modes, both Cisco-proprietary and open
standard.
IEEE Open Standard Spanning Tree Modes
We have the following IEEE STP standards, which are used by all other vendors:

Spanning Tree Protocol (STP) IEEE 802.1D – the first and original implementation of
the Spanning Tree Protocol standard. A single instance of spanning tree is allowed in
the Local Area Network (LAN).

Rapid Spanning Tree Protocol (RSTP) IEEE 802.1w – improved version of 802.1D
STP. It is faster for the network to converge. However, just like 802.1D STP, only a
single instance of spanning tree is allowed in the Local Area Network (LAN).

Multiple Spanning Tree Protocol (MSTP) IEEE 802.1s – allows us to create multiple
separate spanning-tree instances, and it enables us to map and allocate multiple
VLANs to the instances.

Cisco Spanning Tree Modes


We have the following Cisco proprietary STP standards which are exclusively used by
Cisco switches:

Per VLAN Spanning Tree Plus (PVST+) Protocol – Cisco-proprietary enhancement to
the IEEE 802.1D STP, and it is the default spanning-tree version for Cisco switches. It
enables us to create one instance of spanning tree per VLAN.

Rapid Per VLAN Spanning Tree Plus (RPVST+) Protocol – Cisco-proprietary
enhancement to the IEEE 802.1w RSTP. Similar to PVST+, it enables us to create
one spanning-tree instance per VLAN as well. Network convergence is also faster with
RPVST+.

Single Spanning Tree vs Multiple Spanning Trees


With the IEEE 802.1D STP and 802.1w RSTP standards, all of the VLANs share one
spanning-tree instance. Therefore, some traffic will take suboptimal paths, just like in the
example topology below:
Since it is a single spanning-tree instance, there will be a single root bridge for all of the
VLANs in the LAN. In this example, let’s say it’s SW1. All of the traffic will then be
forwarded to SW1.

Our multiple spanning-tree modes, IEEE 802.1s MSTP, PVST+, and RPVST+, allow us
to have various spanning-tree instances. These instances can take different paths
through the network by having different root bridges, enabling load balancing to be
possible. The traffic will be taking optimized paths for the same reason as well.

Multiple Spanning Tree Protocol (MSTP) Example


With the MSTP spanning-tree mode, we have one instance of spanning tree for each
group of VLANs. Let’s say we have the following different departments in our office
which are assigned with different VLANs:

 Sales Department – VLAN 10


 Engineering Department – VLAN 20
 Management Department – VLAN 30
 Production Department – VLAN 40
We can map the Sales and the Management departments to SW1 as their root bridge.
For the Engineering and the Production departments, we can make SW2 their root
bridge. Now, we have two instances of spanning tree running.

For the first instance, the traffic for VLAN 10 and VLAN 30 will be forwarded to SW1,
and the links to SW2 will be blocked. In the second instance, the traffic for VLAN 20 and
VLAN 40 will be forwarded to SW2 and will be blocked on SW1.
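
A hedged configuration sketch of that VLAN-to-instance mapping on SW1 (the region name, revision number, and instance numbers are assumptions; SW2 would carry the same mapping with the root roles swapped):

SW1(config)#spanning-tree mode mst

SW1(config)#spanning-tree mst configuration

SW1(config-mst)#name OFFICE

SW1(config-mst)#revision 1

SW1(config-mst)#instance 1 vlan 10,30

SW1(config-mst)#instance 2 vlan 20,40

SW1(config-mst)#exit

SW1(config)#spanning-tree mst 1 root primary

SW1(config)#spanning-tree mst 2 root secondary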

PVST+ and RPVST+ Example


PVST+ and RPVST+ Cisco spanning-tree modes are both Per VLAN spanning tree
protocols. This means that every VLAN has a single instance of spanning tree. We’ll
use this example topology again:
For example, we want the traffic from the Sales and the Management departments to be
forwarded to their root bridge at SW2 and blocked on SW1. Also, the traffic from the
Engineering and the Production departments will be forwarded to their root bridge at
SW1, and SW2 will be in a blocked state.

There will be a total of four spanning-tree instances running, as we have four VLANs in
the network. Assuming that we have 100 VLANs in our network, we will also have 100
spanning-tree instances. This would consume more resources compared to
grouping them as in MSTP.

NOTE
PVST+ uses root ports, designated ports, and alternate ports. The alternate ports are
blocking ports.

The spanning-tree mode Command


We use the spanning-tree mode command to show the supported spanning-tree modes
and to select the mode to use for the spanning tree configuration:
Switch(config)# spanning-tree mode ?
mst Multiple spanning tree mode

pvst Per-Vlan spanning tree mode

rapid-pvst Per-Vlan rapid spanning tree mode

What is RSTP (Rapid Spanning Tree Protocol)?


RSTP (Rapid Spanning Tree Protocol) is an evolution of STP. It was originally
introduced as the IEEE 802.1w standard, and in 2004 the IEEE decided to replace STP with
RSTP in the 802.1D standard. Finally, in 2011, the IEEE decided to move all the RSTP
details into the 802.1Q standard.

RSTP is backwards-compatible with STP and there are many similarities between the
two protocols, such as:

 the root switch is elected using the same set of rules in both protocols
 root ports are selected with the same rules, as well as designated port on LAN
segments
 both STP and RSTP place each port in either forwarding or blocking state. The
blocking state in RSTP is called the discarding state.

However, there are differences between STP and RSTP:

 RSTP enables faster convergence times than STP (usually within just a couple of
seconds)
 STP port states listening, blocking, and disabled are merged into a single state
in RSTP – the discarding state
 STP features two port types – root and designated port. RSTP adds two
additional port types – alternate and backup port.
 with STP, the root switch generates and sends Hellos to all other switches, which
are then relayed by the non-root switches. With RSTP, each switch can generate
its own Hellos.

Consider the following network topology with RSTP turned on:


In order to avoid loops, RSTP has placed one port on SW3 in the alternate state. This
port will not process or forward any frames except the RSTP messages. However, if the
root port on SW3 fails, the alternate port will rapidly become the root port and start
forwarding frames.

How RSTP works


Just like STP, RSTP creates a topology database of the network. To prevent loops,
some interfaces on switches are placed in forwarding state and other interfaces in
discarding state. How does RSTP decide in which state a port will be placed? A
couple of criteria exist:

1. all switches in a network elect a root switch. All working interfaces on the root switch
are placed in forwarding state.
2. all other switches, called nonroot switches, determine the best path to get to the root
switch. The port used to reach the root switch (root port) is placed in forwarding state.
3. on the shared Ethernet segments, the switch with the best path to reach the root
switch is placed in forwarding state. That switch is called the designated switch and its
port is known as the designated port.
4. all other interfaces are placed in discarding state and will not forward frames.
NOTE
RSTP is backwards-compatible with STP and they both can be used in the same network.

Consider the following example:


Let’s say that SW1 is elected as the root switch. All ports on SW1 are placed in
forwarding state. SW2 and SW3 choose ports with the lowest cost to reach the root
switch to be the root ports. These ports are also placed in forwarding state. On the
shared Ethernet segment between SW2 and SW3, port Fa0/1 on SW2 has the lowest
cost to reach the root switch. This port is placed in forwarding state. To prevent loops,
port Fa0/1 on SW3 is placed in discarding state. If the root port on SW3 fails, this
alternate port will quickly take over and become the root port.
NOTE
RSTP also introduces a concept of backup port. This port serves as a replacement for the
designated port inside the same collision domain. This can only occur when using hubs in your
network.

Configuring RSTP
Most newer Cisco switches use RSTP by default. RSTP prevents frame looping out of
the box and no additional configuration is necessary. To check whether a switch runs
RSTP, the show spanning-tree command is used:
SW1#show spanning-tree

VLAN0001

Spanning tree enabled protocol rstp

Root ID Priority 32769

Address 0004.9A47.1039

This bridge is the root

Hello Time 2 sec Max Age 20 sec Forward Delay 15


sec

Bridge ID Priority 32769 (priority 32768 sys-id-ext 1)


Address 0004.9A47.1039

Hello Time 2 sec Max Age 20 sec Forward Delay 15


sec

Aging Time 20

Interface Role Sts Cost Prio.Nbr Type

---------------- ---- --- --------- -------- ----------------------


----------

Fa0/3 Desg FWD 19 128.3 P2p

Fa0/2 Desg FWD 19 128.2 P2p

If RSTP is not being used, the following command will enable it:
SW1(config)#spanning-tree mode rapid-pvst

Most other configuration options (electing root switch, selecting root and designated
ports) are similar to the ones used in STP.

STP PortFast, BPDU Guard, Root Guard Configuration

Spanning Tree Protocol (STP) and Rapid Spanning Tree Protocol (RSTP) are switching
mechanisms that prevent Ethernet frames from looping indefinitely in a LAN with
redundant links. STP and RSTP have features that help the network work better and
more securely, such as PortFast, BPDU Guard, and Root Guard.

What is a Bridge Protocol Data Unit (BPDU)?


A bridge protocol data unit (BPDU) is a data message forwarded across a Local Area
Network (LAN) to detect loops in a spanning tree topology. A BPDU contains
information about ports, switches, port priority, and addresses.

PortFast
PortFast enables a switch port to transition immediately from the blocking state to the
forwarding state, bypassing the listening and learning states.
However, PortFast is recommended only on non-trunking access ports, such as
edge ports, because these ports typically do not send or receive BPDUs.
It is advisable to implement PortFast only on edge ports that connect end stations to the
switches, similar to the example STP topology below.

Configuring PortFast on an Access Port


We can configure the PortFast command on an access switch port interface. See the
configuration example below:
Sw1(config)# interface f0/10

Sw1(config-if)# spanning-tree portfast

Sw1(config)# spanning-tree portfast default

BPDU Guard
Because PortFast can be enabled on non-trunking ports connecting two switches,
spanning-tree loops can occur because Bridge Protocol Data Units (BPDUs) are still
being transmitted and received on those ports.

Layer 2 loops in our network topology can be prevented by enabling another feature
called PortFast BPDU Guard, which prevents the loop from happening by moving
non-trunking switch ports into an errdisable state when a Bridge Protocol Data Unit
(BPDU) is received on that port. Whenever STP BPDU guard is enabled on the switch,
STP shuts down PortFast-configured interfaces that receive a Bridge Protocol Data Unit
(BPDU) instead of putting them into the STP blocking state.
In a correct configuration, PortFast-configured ports do not receive BPDUs. If a PortFast-
configured interface receives a Bridge Protocol Data Unit (BPDU), a misconfiguration
exists. BPDU guard provides a secure response to invalid configurations because the
network engineer needs to manually put the interface back in a forwarding state.

Enabling BPDU Guard


We enable the BPDU guard command in the interface configuration mode. This
configuration example shows how to configure BPDU guard in Switch1’s
FastEthernet0/1 interface.
Switch1(config)# interface fastethernet0/1

Switch1(config-if)# spanning-tree portfast

Switch1(config-if)# spanning-tree bpduguard enable

Switch1(config)# spanning-tree portfast bpduguard default
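
If BPDU guard err-disables a port, it can be re-enabled manually with a shutdown followed by no shutdown on the interface. Optionally, the switch could also be told to recover such ports automatically; a hedged sketch (the 300-second interval is an illustrative value):

Switch1(config)# errdisable recovery cause bpduguard

Switch1(config)# errdisable recovery interval 300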

Root Guard
Any switch in the network can be designated as the root bridge. But to efficiently
forward frames, the positioning of the root bridge should be predetermined in a strategic
location. The standard STP does not ensure that the root bridge can be assigned
permanently by the administrator.

An enhanced feature of STP was developed to address this issue. The root guard feature
provides a way to enforce the root bridge placement in the network.
Root guard ensures that the interface on which it is enabled is set as
the designated port. Normally, the root bridge ports are all set as designated ports
unless two or more root bridge ports are connected. If the bridge receives superior STP
Bridge Protocol Data Units (BPDUs) on a root guard-enabled interface, the root guard
moves this interface to a root-inconsistent STP state. This root-inconsistent state is
effectively equivalent to a listening state. No traffic is forwarded across this interface. In
this process, the root guard enforces the position of the root bridge.

Configuring Root Guard


Configuration on the interface level of root guard for Catalyst 6500/6000 and Catalyst
4500/4000 are shown below:

Switch# configure terminal

Enter configuration commands, one per line. End with CNTL/Z.

Switch(config)# interface fastethernet 3/1

Switch(config-if)# spanning-tree guard root

On the Cisco Switches Catalyst 2900XL, 3500XL, 2950, and 3550, we configure root
guard as shown:

Switch# configure terminal

Enter configuration commands, one per line. End with CNTL/Z.

Switch(config)# interface fastethernet 0/8

Switch(config-if)# spanning-tree rootguard


What is EtherChannel and Why Do We Need It?
EtherChannel is a technology wherein we bundle physical interfaces together to create
a single logical link. It is also known as Link Aggregation. It provides fault-tolerant and
high-speed links between Cisco switches and routers and is often seen in the backbone
network. The approved open standard is called 802.3ad, which works with other
vendors and is often called LAG.

How Does EtherChannel Work?


We can assign up to 16 physical interfaces to an EtherChannel, but only 8 interfaces
can be active at a time. You can form EtherChannel between two, four, or eight active
Fast, Gigabit, or 10-Gigabit Ethernet interfaces, with an additional one to eight inactive
interfaces which can become active as the other interfaces fail.

To create an EtherChannel, all of the interfaces should have:

1. Same Duplex
2. Same Speed
3. Same VLAN Configuration (Ex. native VLAN and allowed VLAN should be same)
4. Switch Port Modes should be the same (Access or Trunk Mode)

EtherChannel can look at the following options to decide which physical link to
send data over:

1. Source MAC Address


2. Destination MAC Address
3. Source and Destination MAC Address
4. Source IP Address
5. Destination IP Address
6. Source and Destination IP Address
7. Source TCP/UDP Port
8. Destination TCP/UDP Port
9. Source and Destination TCP/UDP Port.

These options depend on the hardware and software, and there could be more options
on other models and software versions.
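
On many Cisco switches the hashing method is chosen globally with the port-channel load-balance command; a hedged sketch (the src-dst-mac keyword is one common choice, but the available keywords vary by platform):

Switch(config)#port-channel load-balance src-dst-mac

Switch#show etherchannel load-balance

The show command confirms which load-balancing method is currently in use.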

The 2 Etherchannel Protocols


We also have two protocols that we can use, aside from manually configuring
EtherChannel.
Port Aggregation Protocol (PAgP) – a Cisco proprietary EtherChannel protocol
where we can combine a maximum of 8 physical links into a single virtual link.
Link Aggregation Control Protocol (LACP) – an IEEE 802.3ad standard where we
can combine up to 8 ports that can be active and another 8 ports that can be in standby
mode.
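
Manual configuration, mentioned above, bypasses both negotiation protocols by setting the channel-group mode to on; a minimal sketch (both sides of the bundle would need the same static configuration):

Switch1(config)#interface range fa0/1 - 2

Switch1(config-if-range)#switchport mode trunk

Switch1(config-if-range)#channel-group 1 mode on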

Why Do We Need EtherChannel?


Below are the advantages and benefits of implementing EtherChannel on our networks:

Increased Bandwidth

In our network planning, we always take into account the cost. For example, our
company needs more than 100 Mbps bandwidth, but our hardware only supports Fast
Ethernet (100 Mbps). In this case, we can opt not to upgrade the hardware by
implementing EtherChannel.

Also, we might think that if we have two or more links between two switches, like in our
figure above, then we can utilize the full bandwidth of these links. But, in a traditional
network setup, Spanning Tree Protocol (STP) blocks one redundant link to avoid Layer
2 loops. Our solution to this problem? EtherChannel.

EtherChannel aggregates or combines traffic across all available active links, which
makes it look like one logical cable. So in our example, if we have 8 active links with
100 Mbps each, that will be a total of 800 Mbps. If any of the physical links inside the
EtherChannel go down, STP will not see this and will not recalculate.

Redundancy

Since more than one physical connection is combined into one logical connection,
EtherChannel keeps the logical link available even when one or more member links go
down.

Load Balancing

With load balancing, we are able to balance the traffic load across the links, improving
the efficient use of bandwidth.

NOTE
Cisco does not offer round-robin load balancing for EtherChannel as it could potentially result in out-
of-order frames.
Switched Network Without EtherChannel
In this example, we connected two switches, Switch1 and Switch2, using four links. What
do you think will happen without EtherChannel? You can see from the link states in the
network topology below that only one link is being utilized.

If we issue the ‘show spanning-tree’ command on both switches, we can see that all
interfaces of Switch1 are in forwarding state but only one interface is forwarding in
Switch2, and the other interfaces are in blocking state.

Switch1:

Interface Role Sts Cost Prio.Nbr Type

---------------- ---- --- --------- -------- ----------------------


----------

Fa0/2 Desg FWD 19 128.2 P2p

Fa0/1 Desg FWD 19 128.1 P2p

Fa0/4 Desg FWD 19 128.4 P2p

Fa0/3 Desg FWD 19 128.3 P2p

Switch2:

Interface Role Sts Cost Prio.Nbr Type

---------------- ---- --- --------- -------- ----------------------


----------

Fa0/2 Altn BLK 19 128.2 P2p

Fa0/4 Altn BLK 19 128.4 P2p


Fa0/1 Root FWD 19 128.1 P2p

Fa0/3 Altn BLK 19 128.3 P2p

Switched Network With EtherChannel


If we enable EtherChannel on the links of the switches, you can see that the link states
for all of the links are up. This means we can utilize all four links and reap the benefits
of EtherChannel, namely load balancing, redundancy, and increased bandwidth.

If we enter ‘show running-config interface <interface name>’ on both switches, we’ll see
the ‘switchport mode trunk’ and ‘channel-group 1 mode’ commands issued on the
interfaces. These commands are used to enable EtherChannel.

If we enter the ‘show spanning-tree’ command, we should see a single logical


interface, Port Channel 1 (Po1), instead of four separate physical interfaces.
Interface Role Sts Cost Prio.Nbr Type

---------------- ---- --- --------- -------- ----------------------


----------

Po1 Desg FWD 7 128.27 Shr

EtherChannel Port Aggregation Protocol (PAgP)


Port Aggregation Protocol or PAgP is an EtherChannel technology that is a Cisco
proprietary protocol. It is a form of logical aggregation of Cisco Ethernet switch ports,
and it enables data/traffic load balancing. PAgP EtherChannel can combine a maximum
of 8 physical links into a single virtual link. We also have an IEEE open standard, Link
Aggregation Control Protocol, LACP.
PAgP Initial Configuration Check
Since PAgP is a Cisco proprietary protocol, we need to make sure that all of the
interfaces have the following matching configurations on our Cisco network devices:

1. Speed and Duplex


2. Operational State (all Access or all Trunking)
3. Access VLAN on access interfaces
4. Native VLAN and Allowed VLANs on trunk interfaces
5. STP interface settings

The 2 Port Aggregation Protocol Configuration Modes


There are two Cisco EtherChannel Port Aggregation Protocol modes, which we can
implement as a part of the port configuration:

Auto mode – the interface can respond to PAgP packet negotiation but will never start one
on its own.

Desirable mode – the interface actively initiates PAgP negotiation.

Cisco PAgP Mode Combinations


Here are the combinations of the different PAgP modes (auto mode and desirable
mode) and if link aggregation will work or not between the Cisco devices:

           Auto   Desirable
Auto       No     Yes
Desirable  Yes    Yes

How to Configure Port Aggregation Protocol?


We’ll use the network topology below for our example. We have two Cisco switches to
be configured with PAgP.
Switch 1 Configuration:

Switch 1#conf t

Switch 1(config)#interface range fa0/1 - 2

Switch 1(config-if-range)#speed 100

Switch 1(config-if-range)#duplex full

Switch 1(config-if-range)#switchport mode trunk

Switch 1(config-if-range)#channel-group 1 mode desirable

Switch 1(config-if-range)#end

Switch 2 Configuration:

Switch 2#conf t

Switch 2(config)#interface range fa0/1 - 2

Switch 2(config-if-range)#speed 100

Switch 2(config-if-range)#duplex full

Switch 2(config-if-range)#switchport mode trunk

Switch 2(config-if-range)#channel-group 1 mode auto

Switch 2(config-if-range)#end

Switch logs showing Port-Channel1 comes up:

%LINK-5-CHANGED: Interface Port-channel1, changed state to up


%LINEPROTO-5-UPDOWN: Line protocol on Interface Port-channel1,
changed state to up

Port Aggregation Protocol Verification


We can issue the ‘show etherchannel <channel-group number> port-
channel’ command to check if the Port-channel is active or not. We can verify which of
the two protocols is used, PAgP or LACP, and the interfaces participating in the
aggregation.
Switch1#show etherchannel 1 port-channel

Port-channels in the group:

---------------------------

Port-channel: Po1

------------

Age of the Port-channel = 0d:00h:04m:07s

Logical slot/port = 16/0 Number of ports = 2

GC = 0x00010001 HotStandBy port = null

Port state = Port-channel Ag-Inuse

Protocol = PAgP

Port security = Disabled

Ports in the Port-channel:

Index Load Port EC state No of bits

------+------+------+------------------+-----------

0 00 Fa0/1 Automatic-Sl 0

0 00 Fa0/2 Automatic-Sl 0

Time since last port bundled: 0d:00h:00m:26s Fa0/2

Next, ‘show etherchannel summary’ will show us a quick overview of the


EtherChannel status.
Switch 1#show etherchannel summary

Flags: D - down P - bundled in port-channel

I - stand-alone s - suspended

H - Hot-standby (LACP only)

R - Layer3 S - Layer2

U - in use N - not in use, no aggregation

f - failed to allocate aggregator

M - not in use, minimum links not met

m - not in use, port not aggregated due to minimum links not


met

u - unsuitable for bundling

w - waiting to be aggregated

d - default port

A - formed by Auto LAG

Number of channel-groups in use: 1

Number of aggregators: 1

Group Port-channel Protocol Ports

------+-------------+-----------+----------------------------------
-------------

1 Po1(SU) PAgP Fa0/1(P) Fa0/2(P)

Lastly, we can issue the ‘show interfaces fa0/1 etherchannel’ command. We can see
both local and neighbor interface information, and also the Cisco PAgP mode used.

Switch 1#show interfaces fa0/1 etherchannel

Port state = Up Mstr In-Bndl

Channel group = 1 Mode = Automatic-Sl Gcchange = 0

Port-channel = Po1 GC = 0x00010001 Pseudo port-channel = Po1

Port index = 0 Load = 0x00 Protocol = PAgP

Flags: S - Device is sending Slow hello. C - Device is in


Consistent state.

A - Device is in Auto mode. P - Device learns on


physical port.

d - PAgP is down.

Timers: H - Hello timer is running. Q - Quit timer is


running.

S - Switching timer is running. I - Interface timer is


running.

Local information:

Hello Partner PAgP Learning


Group

Port Flags State Timers Interval Count Priority Method


Ifindex

Fa0/1 SAC U6/S7 HQ 30s 1 128 Any


10

Partner's information:

Partner Partner Partner


Partner Group

Port Name Device ID Port Age


Flags Cap.

Fa0/1 Switch2 5000.0002.8000 Fa0/1 18s SC


10001

Age of the port in the current state: 0d:00h:06m:54s

EtherChannel Link Aggregation Control Protocol (LACP)

Link Aggregation Control Protocol or LACP in networking is an IEEE standard and a
part of the IEEE 802.3ad specification that allows you to combine multiple network
connections or physical links in our network devices to form a single logical link and
enable load balancing in our interfaces. If a link fails, LACP will also fail over
automatically.
We can configure LACP EtherChannel with a maximum of 16 Ethernet links of
the same type. In a LAG or Link Aggregation Group, up to eight member links can be
active, and the other eight links can be on standby.

LACP Configuration Modes


We have two Link Aggregation Control Protocol modes, and these are the following:

Active – The interface actively sends LACP packets in its attempt to form an LACP
connection.

Passive – The interface can respond to LACP negotiation but will never initiate on its
own.

Here’s an overview of the different modes and combinations and whether link
aggregation will work:

          Active  Passive
Active    Yes     Yes
Passive   Yes     No

LACP Initial Configuration Check


We have minimum requirements for Link Aggregation Control Protocol to work. These
are the following:

1. One side should at least be in Active Mode.

2. All port speed and duplex configurations should be the same.

3. Switchport mode is the same (access or trunk).

4. Virtual Local Area Network (VLAN) passing on both sides should match.
EtherChannel LACP Configuration
The first requirement for link aggregation is that at least one side should be in Active mode.
For example, we will configure Switch1 to be in Active Mode and the other network
switch, Switch2, to be in Passive Mode.

Now, using our sample network topology below, let’s configure LACP on our network
switches’ multiple links:

Switch 1 – Active Mode

Switch1#conf t

Switch1(config)#interface range fa0/1 - 2

Switch1(config-if-range)#speed 100

Switch1(config-if-range)#duplex full

Switch1(config-if-range)#switchport mode trunk

Switch1(config-if-range)#channel-group 1 mode active

Switch1(config-if-range)#end

Switch 2 – Passive Mode

Switch2#conf t

Switch2(config)#interface range fa0/1 - 2

Switch2(config-if-range)#speed 100

Switch2(config-if-range)#duplex full

Switch2(config-if-range)#switchport mode trunk

Switch2(config-if-range)#channel-group 1 mode passive


Switch2(config-if-range)#end

The logs on our switch show that Port-Channel1 came up, and the aggregated link is
working:
*Sep 5 15:30:06.378: %LINK-3-UPDOWN: Interface Port-channel1,
changed state to up

*Sep 5 15:30:07.378: %LINEPROTO-5-UPDOWN: Line protocol on


Interface Port-channel1, changed state to up

How to Verify LACP?


We can use the ‘show etherchannel <channel-group number> port-
channel’ command to verify link aggregation and our port channel status:

Switch1#show etherchannel 1 port-channel

Port-channels in the group:

---------------------------

Port-channel: Po1 (Primary Aggregator)

------------

Age of the Port-channel = 0d:00h:01m:05s

Logical slot/port = 16/0 Number of ports = 2

HotStandBy port = null

Port state = Port-channel Ag-Inuse

Protocol = LACP

Port security = Disabled

Ports in the Port-channel:

Index Load Port EC state No of bits

------+------+------+------------------+-----------

0 00 Fa0/1 Active 0
0 00 Fa0/2 Active 0

Time since last port bundled: 0d:00h:00m:50s Fa0/2

Cisco Layer 3 EtherChannel Explained & Configured

You probably learned about Layer 2 EtherChannels and how to configure them. Now,
we are going to discuss EtherChannel on Layer 3 switches and how to configure it.

One use case on why we would want to configure EtherChannel on Layer 3 switches is
when we are forming redundancy between Core and Distribution Layers and
implementing a routing protocol. Instead of learning two IP routes with the same
neighboring switch (but two different next hops), we now can have a single next-hop IP
address of the neighboring switch for each IP route learned.

Another use case is to avoid Spanning Tree Protocol (STP) and use Layer 3 links
between your Core and Distribution Layers instead. We can then run routing protocols,
which give us more control over load balancing and failover and can converge much
faster than STP.

NOTE
1. The port-channel interface number can differ between two neighbor Layer 3 Switches, but we
have to use the same Channel Group Number for all physical interfaces on a Layer 3 Switch.
2. We have to issue the ‘no switchport’ command to make the physical interface a routed
port/interface (Group state = from Layer 2 to Layer 3)

Layer 3 EtherChannel Configuration


For the Layer 3 Etherchannel configuration, we will use the topology below as an
example. We have two Layer 3 switches, Switch1 and Switch2, and we will configure
the two links connecting them as EtherChannels.
Switch1 Configuration:

Switch1(config)#interface range FastEthernet 0/1 - 2

Switch1(config-if-range)#no switchport

Switch1(config-if-range)#channel-group 1 mode desirable

Switch1(config-if-range)#interface port-channel 1

Switch1(config-if)#no switchport

Switch1(config-if)#ip address 192.168.1.1 255.255.255.0

Switch2 Configuration:

Switch2(config)#interface range FastEthernet 0/1 - 2

Switch2(config-if-range)#no switchport

Switch2(config-if-range)#channel-group 2 mode desirable

Switch2(config-if-range)#interface port-channel 2

Switch2(config-if)#no switchport

Switch2(config-if)#ip address 192.168.1.2 255.255.255.0

Layer 3 EtherChannel Verification


We can verify if our layer 3 EtherChannel configuration is working as expected by doing
the following commands:

First, let’s check if we can ping between point-to-point links.

Pings from Switch1 to Switch2

Switch1#ping 192.168.1.2

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 192.168.1.2, timeout is 2 seconds:

.!!!!

Success rate is 80 percent (4/5), round-trip min/avg/max = 2/3/5 ms


Pings from Switch2 to Switch1

Switch2#ping 192.168.1.1

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 192.168.1.1, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 3/3/4 ms

We can also check the Group state using the ‘show etherchannel‘ command in
privileged EXEC mode:
Switch1#show etherchannel

Channel-group listing:

----------------------

Group: 1

----------

Group state = L3

Ports: 2 Maxports = 4

Port-channels: 1 Max Port-channels = 1

Protocol: PAgP

Minimum Links: 0

To check the Port-channel status, we can use the ‘show etherchannel port-channel‘
command:
Switch1#show etherchannel port-channel

Channel-group listing:

----------------------
Group: 1

----------

Port-channels in the group:

---------------------------

Port-channel: Po1

------------

Age of the Port-channel = 0d:00h:40m:39s

Logical slot/port = 16/0 Number of ports = 2

GC = 0x00010001 HotStandBy port = null

Passive port list = Fa0/1 Fa0/2

Port state = Port-channel L3-Ag Ag-Inuse

Protocol = PAgP

Port security = Disabled

Ports in the Port-channel:

Index Load Port EC state No of bits

------+------+------+------------------+-----------

0 00 Fa0/1 Desirable-Sl 0

0 00 Fa0/2 Desirable-Sl 0

Time since last port bundled: 0d:00h:35m:01s Fa0/2

What is DHCP Snooping? – Explanation and Configuration
DHCP Snooping is a security feature on a Layer 2 network switch that can prevent
unauthorized DHCP servers from accessing your network. It protects against untrusted
hosts that try to act as DHCP servers and, in doing so, guards against man-in-the-middle
attacks. DHCP itself operates at Layer 3 of the OSI model, while DHCP snooping runs on
Layer 2 devices to filter the DHCP traffic coming from clients.
Why Do We Need DHCP Snooping?
A Dynamic Host Configuration Protocol (DHCP) server plays a vital role in every
organization's network, as most end-user devices, such as PCs and laptops, use DHCP
to learn their IP addresses automatically. The host leases an IP address from the
DHCP server.

To protect the hosts within the organization's network from establishing a connection
with unauthorized rogue DHCP servers, we need to configure DHCP snooping on the
Layer 2 switch where the unauthorized hosts are connected.

DHCP Snooping Trusted and Untrusted Ports


In Cisco switches, DHCP snooping is enabled manually. Trusted ports must be
configured explicitly, and the remaining unconfigured ports are considered untrusted.
Devices connected to trusted ports are typically routers, switches, and servers, while
DHCP clients such as PCs and laptops are commonly connected to untrusted ports.

The switch allows DHCP server messages like DHCPOFFER and DHCPACK only when
they arrive from a trusted source. If DHCP server messages arrive on untrusted ports,
the switch discards that DHCP traffic. The switch also builds a table called the DHCP
Snooping Binding Database, which records the source MAC address and IP address of
the hosts connected to untrusted ports.

NOTE
DHCP clients connected to an untrusted port are expected to transmit these DHCP
messages: DHCP DISCOVER and DHCP REQUEST. If it transmits DHCPOFFER and DHCPACK,
then the switch discards the DHCP packets. DHCPOFFER and DHCPACK are expected to be
received on the trusted ports of the switch.
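
To make the trusted/untrusted logic concrete, here is a minimal Python sketch (illustration only, not switch code); the function and the message-name set are assumptions made for this example:

# Sketch of the DHCP snooping forwarding decision described above.
SERVER_MESSAGES = {"DHCPOFFER", "DHCPACK"}

def allow_dhcp_frame(message_type, port_is_trusted):
    # Server messages are only legitimate when they arrive on trusted ports.
    if message_type in SERVER_MESSAGES and not port_is_trusted:
        return False   # likely a rogue DHCP server - drop the frame
    return True        # client messages, or any message on a trusted port

print(allow_dhcp_frame("DHCPDISCOVER", port_is_trusted=False))  # True
print(allow_dhcp_frame("DHCPOFFER", port_is_trusted=False))     # False (dropped)
print(allow_dhcp_frame("DHCPOFFER", port_is_trusted=True))      # True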

Dynamic Host Configuration Protocol Snooping Configuration


For our configuration example, we will use the network topology below. There is a rogue
DHCP server trying to connect to our network through a man-in-the-middle attack.
1. To enable DHCP snooping on the switch, we use the following command:

SW(config)#ip dhcp snooping

2. After enabling DHCP snooping, configure FastEthernet 0/1 and FastEthernet 0/2 as
trusted ports.
SW(config)#interface range FastEthernet 0/1 - FastEthernet 0/2

SW(config-if-range)#ip dhcp snooping trust

SW(config-if-range)#no shutdown

SW(config-if-range)#exit

3. Enable DHCP snooping on the VLAN that is currently in use with the following
command.
SW(config)#ip dhcp snooping vlan 1

4. Assign an IP address to the gateway router’s interface gigabitEthernet 0/0.


Router(config)#interface gigabitEthernet 0/0

Router(config-if)#ip address 192.168.1.1 255.255.255.0

Router(config-if)#no shutdown

5. On the legitimate DHCP server, select the Services tab and click on DHCP. Enable
the service, and assign the IP details, subnet mask, and DNS server IP. The
name serverPool cannot be changed because it already exists.

6. Enable DHCP on PC0 and PC1; they will get their IP addresses from the legitimate
DHCP server.
7. Disconnect the legitimate DHCP server and observe that PC0 and PC1 no longer get
an IP address. The snippet below shows PC0 falling back to an APIPA address. The PCs
will not be able to obtain a lease from the rogue DHCP server.

DHCP Snooping Verification Commands


The following are the show commands that we can use on the switch to verify if DHCP
snooping works as expected.
Switch#show ip dhcp snooping

Switch#show ip dhcp snooping binding

Dynamic ARP Inspection (DAI) Explanation & Configuration
Address Resolution Protocol (ARP) is a Layer 2 protocol that maps an IP address
(Layer 3) to a MAC address (Layer 2). So, how does the traditional ARP work? In this
example, PC1 wants to communicate with PC2. PC1 knows the destination IP address
but not the destination MAC address.

The ARP process will be as follows:

1. First, PC1 checks its ARP table for PC2’s IP address (10.10.10.100).

2. If there is no cached entry, PC1 will send an ARP Request, a broadcast message (source:
AAAA.AAAA.AAAA, destination: FFFF.FFFF.FFFF) to all hosts on the same subnet.

3. All hosts will receive the ARP Request, but only PC2 will reply. PC2 will send an ARP
Reply containing its own MAC address (EEEE.EEEE.EEEE).

4. PC1 receives the MAC address and saves it to its ARP Table.
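
The same resolution steps can be sketched in a few lines of Python (purely illustrative; the dictionaries stand in for PC1's ARP cache and for the hosts that would answer the broadcast):

arp_table = {}                      # PC1's ARP cache: IP -> MAC

def resolve(ip, hosts_on_subnet):
    if ip in arp_table:             # step 1: check the cache
        return arp_table[ip]
    mac = hosts_on_subnet.get(ip)   # steps 2-3: broadcast request, owner replies
    if mac is not None:
        arp_table[ip] = mac         # step 4: save the reply in the ARP table
    return mac

hosts = {"10.10.10.100": "eeee.eeee.eeee"}
print(resolve("10.10.10.100", hosts))   # eeee.eeee.eeee
print(arp_table)                        # {'10.10.10.100': 'eeee.eeee.eeee'}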

Why Do We Need Dynamic ARP Inspection (DAI)?


You may be asking why we need Dynamic ARP Inspection (DAI). In our first example, a
rogue peer, PC3, is connected to one of the switch ports. PC3 can send a Gratuitous
ARP or an ARP Reply that was not prompted by an ARP Request to update the ARP
mapping of the other hosts on the network.
Unknowingly, PC2 will update its ARP cache and change the MAC address of PC1 to
the MAC address of PC3. PC3 can spoof PC2 in the same way in the other direction by
lying about its MAC address. This attack, known as ARP spoofing, enables a
man-in-the-middle attack.

PC2’s ARP Cache before spoofing:

C:\>arp -a

Internet Address Physical Address Type

10.10.10.10 aaaa.aaaa.aaaa dynamic

PC2’s ARP Cache after spoofing:

C:\>arp -a

Internet Address Physical Address Type

10.10.10.10 cccc.cccc.cccc dynamic


How Does DAI Prevent a Man-in-the-Middle Attack?
With Dynamic ARP Inspection (DAI), the switch checks each incoming ARP packet
against entries in:

1. DHCP Snooping Binding Table

2. Any configured ARP ACLs (can be used for hosts using static IP instead of DHCP)

If the ARP packet matches neither of the above, the switch discards the ARP message.
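
Conceptually, the check looks like this minimal Python sketch (illustration only; the binding table and ACL are simplified to (IP, MAC) pairs):

def arp_packet_allowed(sender_ip, sender_mac, dhcp_bindings, arp_acl=()):
    # Permit the ARP packet only if its sender IP/MAC pair is known,
    # either from a configured ARP ACL or from the DHCP snooping bindings.
    pair = (sender_ip, sender_mac.lower())
    if pair in {(ip, mac.lower()) for ip, mac in arp_acl}:
        return True
    return pair in {(ip, mac.lower()) for ip, mac in dhcp_bindings}

bindings = [("10.10.10.10", "aaaa.aaaa.aaaa"), ("10.10.10.100", "eeee.eeee.eeee")]
print(arp_packet_allowed("10.10.10.10", "aaaa.aaaa.aaaa", bindings))  # True
print(arp_packet_allowed("10.10.10.10", "cccc.cccc.cccc", bindings))  # False (spoofed)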

Dynamic ARP Inspection (DAI) Configuration


For our Dynamic ARP Inspection (DAI) configuration example, the switch ports are all
under VLAN 100.

To enable Dynamic ARP Inspection (DAI) on VLAN 100:


Switch#conf t

Switch(config)#ip arp inspection vlan 100


To enable Interface Trust State:

Switch(config)#interface FastEthernet 0/4

Switch(config-if)#ip dhcp snooping trust

NOTE
To bypass the Dynamic ARP Inspection (DAI) process, you will usually configure the interface trust
state towards network devices like switches, routers, and servers under your administrative control.

We can also use the ‘show ip arp inspection’ command to verify the number of dropped
ARP packets:

Switch#show ip arp inspection

In our example, if we want to configure PC1 with static IP instead of DHCP, we must
create a static entry using ARP ACL.
Switch(config)#arp access-list PC1-Static

Switch(config-arp-nacl)#permit ip host 10.10.10.10 mac host


aaaa.aaaa.aaaa

Switch(config)#ip arp inspection filter PC1-Static vlan 100

Now, the switch will check the ARP access list first, and then when it doesn’t find a
match, the switch will check the DHCP Snooping Binding Table.

An attacker could also generate a large number of ARP messages, causing CPU
overutilization in the switch (Denial-of-Service or DoS). Note that DAI works in the
switch CPU rather than in the switch ASIC. We can prevent this type of attack by
limiting DAI Message Rates.

Configuring ARP Inspection Message Rate Limits


An untrusted interface allows 15 ARP packets per second by default. Here’s how we
can change it:

Switch(config)#interface FastEthernet 0/1


Switch(config-if)#ip arp inspection limit rate 8 burst interval 4

This interface now only allows 8 ARP packets every 4 seconds.

We can verify this with the following command:


Switch#show ip arp inspection interfaces

Interface Trust State Rate (pps) Burst Interval

--------------- ----------- ---------- --------------

Fa0/1 Untrusted 8 4

Fa0/2 Untrusted 15 1

Fa0/3 Untrusted 15 1

Dynamic ARP Inspection is an excellent security feature, but before configuring DAI, we
need to think and make a few decisions based on our goals, topology, and device roles.
We do not want to block important traffic after enabling it.

What is 802.1x Authentication and How it Works?


IEEE 802.1x is a standard defined by the IEEE 802.1x working group for addressing
port-based access control employing authentication for wired and wireless networks.
There are three main components that we have to take into account, namely
the Supplicant, Authenticator, and the Authentication Server (AS).

The Supplicant is the user or client that wants to access the network. The
Authenticator is a network device such as a wireless access point or an Ethernet
switch. The Authentication Server (AS) is a trusted server that performs the actual
authentication of network access: it receives, processes, and responds to requests
from clients, decides whether to allow or deny access, and tells the authenticator to
enforce that decision and apply various settings to the user.
NOTE:

Authentication servers usually run software supporting RADIUS and the Extensible
Authentication Protocol (EAP). In some cases, the AS software, such as a RADIUS
server, may run on the authenticator hardware itself.

Common EAP Based Authentication Methods


Below is an overview of the commonly used EAP methods:

Lightweight EAP (LEAP) – the client simply provides the AS with its credentials, such
as a username and password. Encrypted challenge messages are exchanged between
the AS and the client to ensure that the client is authorized to access the network.

EAP Flexible Authentication via Secure Tunneling (EAP-FAST) – there are three
phases to gaining access with this EAP method, in which a Protected Access Credential
(PAC) is passed between the AS and the supplicant.

Protected EAP (PEAP) – uses inner and outer authentication. In the outer
authentication, the AS presents a digital certificate to authenticate itself to the
supplicant.

EAP Transport Layer Security (EAP-TLS) – the AS and the supplicant exchange
certificates and can authenticate each other. A TLS tunnel is built afterward to exchange
encryption key material securely. EAP-TLS is considered the most secure wireless
authentication method available; however, implementing it can sometimes be complex.
NOTE:

EAP-TLS is practical only if wireless clients can receive and utilize digital certificates.
Many wireless devices, such as communicators, medical devices, and RFID tags, have
an underlying operating system that can’t interface with a CA or use certificates.

IEEE 802.1x Authentication Process


The authentication process begins when the client device requests to connect to the
network. The authenticator receives the request and creates a virtual port with the
supplicant. The authenticator acts as a proxy for the end user, passing authentication
information to and from the authentication server on its behalf, and allows only
authentication traffic through to the server until the client is authenticated. A
negotiation takes place, which includes:

 The client may send an EAP-start message.


 The access point sends an EAP-request identity message.
 The client’s EAP-response packet with the client’s identity is “proxied” to the
authentication server by the authenticator.
 The authentication server challenges the client to prove itself and may send its
credentials to prove itself to the client (if using mutual authentication).
 The client checks the server’s credentials (if using mutual authentication) and then
sends its credentials to the server to prove itself.
 The authentication server accepts or rejects the client’s request for connection.
 If the end user is accepted, the authenticator changes the virtual port with the end user
to an authorized state allowing full network access to that end user.
 The client’s virtual port is changed back to the unauthorized state at log-off.

Identity-Based Networking Services (IBNS)


IBNS is not the same thing as IEEE 802.1x, but it includes 802.1x functionality. IBNS is a
systems security framework that delivers network authentication methods, and a part of
it uses IEEE 802.1x.

IEEE 802.1x is a standard defined by IEEE 802.1x working group for addressing port-
based access control using authentication. It defines a standard link-layer protocol that
is used for transporting higher-level authentication protocols, and the actual
enforcement is via MAC-based filtering and port state monitoring.

Benefits of Identity-Based Networking


There are quite a number of advantages to using identity-based networking, such as:
 Offers complete visibility, access control, and audit of all interactions based on user
identity, machine identity, and health status
 Secures remote access
 Enables quick resolution of network incidents
 Prevents unauthorized access to network resources
 Controls assets, applications, and data
 Ensures maximum service availability
 Increases user productivity
 Reduces operating costs

Port Security

By default, all interfaces on a Cisco switch are turned on. That means that an attacker
could connect to your network through a wall socket and potentially threaten your
network. If you know which devices will be connected to which ports, you can use the
Cisco security feature called port security. By using port security, a network
administrator can associate specific MAC addresses with the interface, which can
prevent an attacker from connecting his device. This way you can restrict access to an
interface so that only the authorized devices can use it. If an unauthorized device is
connected, you can decide what action the switch will take, for example discarding the
traffic and shutting down the port.

To configure port security, three steps are required:

1. define the interface as an access interface by using the switchport mode access
interface subcommand
2. enable port security by using the switchport port-security interface subcommand
3. define which MAC addresses are allowed to send frames through this interface by
using the switchport port-security mac-address MAC_ADDRESS interface
subcommand, or use the switchport port-security mac-address sticky interface
subcommand to dynamically learn the MAC address of the currently connected host

Two steps are optional:

1. define what action the switch will take when receiving a frame from an unauthorized
device by using the switchport port-security violation {protect | restrict | shutdown}
interface subcommand. All three options discard the traffic from the unauthorized
device. The restrict and shutdown options send log messages when a violation occurs.
Shutdown mode also shuts down the port.
2. define the maximum number of MAC addresses that can be used on the port by
using the switchport port-security maximum NUMBER interface subcommand

The following example shows the configuration of port security on a Cisco switch.

First, we need to enable port security and define which MAC addresses are allowed to
send frames:

SW1(config)#interface fastEthernet0/1

SW1(config-if)#switchport mode access

SW1(config-if)#switchport port-security

SW1(config-if)#switchport port-security mac-address sticky

Next, by using the show port-security interface fa0/1 command we can see that the
switch has learned the MAC address of host A:

SW1#show port-security interface fastEthernet0/1

Port Security : Enabled
Port Status : Secure-up
Violation Mode : Shutdown
Aging Time : 0 mins
Aging Type : Absolute
SecureStatic Address Aging : Disabled
Maximum MAC Addresses : 1
Total MAC Addresses : 1
Configured MAC Addresses : 0
Sticky MAC Addresses : 1
Last Source Address:Vlan : 000A.4188.D0C3:1
Security Violation Count : 0

By default, the maximum number of allowed MAC addresses is one, so if we connect
another host to the same port, the security violation will occur:

SW1#show interfaces fastEthernet0/1

FastEthernet0/1 is down, line protocol is down (err-disabled)
Hardware is Lance, address is 0001.c79a.4501 (bia 0001.c79a.4501)
BW 100000 Kbit, DLY 1000 usec,
reliability 255/255, txload 1/255, rxload 1/255

The status code of err-disabled means that the security violation occurred on the port.

NOTE
To enable the port, we need to use the shutdown and no shutdown interface subcommands.

Cisco Port Security Violation Modes Configuration


The Cisco port security violation mode is a port security feature that restricts input to an
interface when it receives a frame that breaks the port security settings on that
interface. This security mechanism is used in Cisco Catalyst switches to secure their
Ethernet ports from unauthorized users by limiting and identifying the MAC addresses of
the devices that are allowed to access the port.

Enabling the port security violation feature on the switch ports means that each port can
be configured to take advantage of one of the three violation modes that define the
necessary actions to take when a violation happens. These modes cause the switch to
discard the violating frame (the frame whose source MAC address would drive the
number of learned MAC addresses over the limit).

Configuring Port Security on a Switch Port


To limit or discard unwanted frames on a switch interface, we need to limit and
identify the MAC addresses of the devices that are allowed to access the port. We
need to configure port security on these interfaces.

Step 1: Enter interface configuration mode and input the physical interface to configure.
We will be using gigabitEthernet 2/1 as an example.
Switch(config)# interface gigabitEthernet 2/1

Step 2: Set the interface mode to access. The default mode, which is dynamic
desirable, cannot be configured to be a secured port.

Switch(config-if)# switchport mode access

Step 3: Enable port security on the interface.

Switch(config-if)# switchport port-security

Step 4: Set the maximum number of secure MAC addresses for the interface, which
ranges from 1 to 3072, wherein the default value is 1.
Switch(config-if)# switchport port-security maximum {1-3072}

Step 5: Configure the violation mode on the port. Actions that shall be taken when a
security violation is detected. Refer to the table below for the actions to be taken.

Switch(config-if)# switchport port-security violation {protect|


restrict | shutdown}

NOTE:
When a secure port is in an error-disabled state, you can bring it out of the state by issuing the
command ‘errdisable recovery cause psecure-violation’ at the global configuration mode, or you
can manually reenable it by entering the ‘shutdown’ and ‘no shutdown’ commands.

Step 6: Set the rate limit for bad packets.

Switch(config-if)# switchport port-security limit rate invalid-source-mac

Step 7: Input the identified secure MAC addresses for the interface. You can use this
command to define static secure MAC addresses. If you configure fewer secure MAC
addresses than the maximum, the remaining MAC addresses are dynamically learned.
Switch(config-if)# switchport port-security mac-address {mac_address}

Step 8: Verify your configuration by the following commands below.

Switch# show port-security address interface gigabitEthernet 2/1

Switch# show port-security address
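
To summarize how the three violation modes behave, here is a small Python sketch (illustration only, not switch code; the counters and the err-disabled flag are simplified assumptions for the example):

def handle_frame(src_mac, allowed_macs, max_macs, mode, state):
    # Learn the source MAC if there is room; otherwise apply the violation mode.
    if src_mac in allowed_macs or len(allowed_macs) < max_macs:
        allowed_macs.add(src_mac)
        return "forward"
    if mode == "protect":
        return "drop"                       # silently discard the frame
    if mode == "restrict":
        state["violations"] += 1            # discard, log, and count the violation
        return "drop and log"
    state["err_disabled"] = True            # shutdown: discard and err-disable the port
    return "port err-disabled"

state = {"violations": 0, "err_disabled": False}
allowed = set()
print(handle_frame("000a.4188.d0c3", allowed, 1, "shutdown", state))  # forward
print(handle_frame("000b.1111.2222", allowed, 1, "shutdown", state))  # port err-disabled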

What are ACLs (Access Control Lists)?


ACLs are a set of rules used most commonly to filter network traffic. They are used on
network devices with packet filtering capabilities (e.g. routers or firewalls). ACLs are
applied on a per-interface basis to packets leaving or entering an interface.

For example on how ACLs are used, consider the following network topology:

Let’s say that server S1 holds some important documents that need to be available only
to the company’s management. We could configure an access list on R1 to enable
access to S1 only to users from the management network. All other traffic going to S1
will be blocked. This way, we can ensure that only authorized users can access the
sensitive files on S1.

ACL Types: Standard and Extended


There are two Access Control List (ACL) types:

1. Standard Access Control Lists – with standard access lists, you can filter traffic
only on the source IP address of a packet. These types of access lists are not as
powerful as extended access lists, but they are less processor-intensive for the router.
The following example describes the way in which standard access lists can be used for
traffic flow control:

Let’s say that server S1 holds some important documents that need to be available only
to the company’s management. We could configure an access list on R1 to enable
network access to S1 for the users from the management network only. All other traffic
going to S1 will be blocked. This way, we can ensure only authorized users can access
sensitive files on S1.

2. Extended Access Control Lists – with extended access lists, you can be more
precise in your network traffic filtering. You can evaluate the source and destination IP
addresses, type of layer 3 protocol, source and destination port, etc. Extended access
lists are more complex to configure and consume more CPU time than standard access
lists, but they allow a much more granular level of control.

To demonstrate the usefulness of extended ACLs, we will use the following example:
In the example network above, we have used the standard access list to prevent all
users from accessing server S1. But, with that configuration, we also deny access to
S2! To be more specific, we can use extended access lists. Let’s say that we need to
prevent users from accessing server S1. We could place an extended access list on R1
to prevent users only from accessing S1 (we would use an access list to filter the IP
traffic according to the destination IP address). That way, no other traffic is forbidden,
and users can still access the other server, S2:

Configuring standard ACLs


To create a standard access list on a Cisco router, the following command is used
from the router's global configuration mode:

R1(config)# access-list ACL_NUMBER permit|deny IP_ADDRESS WILDCARD_MASK

NOTE
ACL number for the standard ACLs has to be between 1–99 and 1300–1999.

You can also use the host keyword to specify the host you want to permit or deny:
R1(config)# access-list ACL_NUMBER permit|deny host IP_ADDRESS

Once the access list is created, it needs to be applied to an interface. You do that by
using the ip access-group ACL_NUMBER in|out interface subcommand. The in and
out keywords specify in which direction you are activating the ACL: in means that the
ACL is applied to the traffic coming into the interface, while out means that the ACL is
applied to the traffic leaving the interface.

Consider the following network topology:


We want to allow traffic from the management LAN to the server S1. First, we need to
write an ACL to permit traffic from LAN 10.0.0.0/24 to S1. We can use the following
command on R1:
R1(config)#access-list 1 permit 10.0.0.0 0.0.0.255

The command above permits traffic from all IP addresses that begin with 10.0.0. We
could also target the specific host by using the host keyword:
R1(config)#access-list 1 permit host 10.0.0.1

The command above permits traffic only from the host with the IP address of 10.0.0.1.

Next, we will deny traffic from the Users LAN (11.0.0.0/24):

R1(config)#access-list 1 deny 11.0.0.0 0.0.0.255

Next, we need to apply the access list to an interface. It is recommended to place the
standard access lists as close to the destination as possible. In our case, this is
the Fa0/0 interface on R1. Since we want to evaluate all packets trying to exit out Fa0/0,
we will specify the outbound direction with the out keyword:
R1(config-if)#ip access-group 1 out

NOTE
At the end of each ACL there is an implicit deny all statement. This means that all traffic not
specified in earlier ACL statements will be forbidden, so the second ACL statement (access-list 1
deny 11.0.0.0 0.0.0.255) wasn’t even necessary.
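
The way a wildcard mask and the top-down, first-match, implicit-deny logic work can be sketched in Python (illustration only, not router code; a wildcard bit of 1 means "ignore this bit" when comparing):

import ipaddress

def wildcard_match(address, rule_addr, wildcard):
    a = int(ipaddress.IPv4Address(address))
    r = int(ipaddress.IPv4Address(rule_addr))
    w = int(ipaddress.IPv4Address(wildcard))
    return (a | w) == (r | w)          # compare only the bits not covered by the wildcard

def standard_acl(address, rules):
    for action, rule_addr, wildcard in rules:   # statements are checked top-down
        if wildcard_match(address, rule_addr, wildcard):
            return action                       # first match wins
    return "deny"                               # implicit deny all at the end

acl_1 = [("permit", "10.0.0.0", "0.0.0.255"),
         ("deny",   "11.0.0.0", "0.0.0.255")]
print(standard_acl("10.0.0.55", acl_1))    # permit
print(standard_acl("11.0.0.7", acl_1))     # deny
print(standard_acl("172.16.1.1", acl_1))   # deny (implicit)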

Configuring Extended ACLs (Access Lists)


To be more precise when matching a certain network traffic, extended access lists are
used. Extended access lists are more difficult to configure and require more processor
time than the standard access lists, but they enable a much more granular level of
control.
With extended access lists, you can evaluate additional packet information, such as:

 source and destination IP address


 type of TCP/IP protocol (TCP, UDP, IP…)
 source and destination port numbers

Two steps are required to configure an extended access list:

1. configure an extended access list using the following command:


(config) access-list NUMBER permit|deny IP_PROTOCOL SOURCE_ADDRESS
WILDCARD_MASK [PROTOCOL_INFORMATION] DESTINATION_ADDRESS
WILDCARD_MASK [PROTOCOL_INFORMATION]

2. apply an access list to an interface using the following command:


(config-if) ip access-group ACL_NUMBER in|out

NOTE
Extended access lists numbers are in ranges from 100 to 199 and from 2000 to 2699. You should
always place extended ACLs as close to the source of the packets that are being evaluated as
possible.

To better understand the concept of extended access lists, consider the following
example:
We want to enable the administrator’s workstation (10.0.0.1/24) unrestricted access to
Server (192.168.0.1/24). We will also deny any type of access to Server from the user’s
workstation (10.0.0.2/24).

First, we’ll create a statement that will permit the administrator’s workstation access to
Server:
R1(config)#access-list 100 permit ip 10.0.0.1 0.0.0.0 192.168.0.1 0.0.0.0

Next, we need to create a statement that will deny the user’s workstation access to
Server:

R1(config)#access-list 100 deny ip 10.0.0.2 0.0.0.0 192.168.0.1 0.0.0.0

Lastly, we need to apply the access list to the Fa0/0 interface on R1:

R1(config)#int f0/0

R1(config-if)#ip access-group 100 in

This will force the router to evaluate all packets entering Fa0/0. If the administrator tries
to access Server, the traffic will be allowed, because of the first statement. However, if
User tries to access Server, the traffic will be forbidden because of the second ACL
statement.
NOTE
At the end of each access list there is an implicit deny all statement, so the second ACL statement
wasn't really necessary. After applying an access list, all traffic not explicitly permitted will be
denied.

What if we need to allow traffic to Server only for certain services? For example, let’s
say that Server was a web server and users should be able to access the web pages
stored on it. We can allow traffic to Server only to certain ports (in this case, port 80),
and deny any other type of traffic. Consider the following example:

On the right side, we have a Server that serves as a web server, listening on port 80.
We need to permit User to access web sites on S1 (port 80), but we also need to deny
any other type of access.

First, we need to allow traffic from User to the Server port of 80. We can do that using
the following command:

R1(config)#access-list 100 permit tcp 10.0.0.2 0.0.0.0 192.168.0.1 0.0.0.0 eq 80

By using the tcp keyword, we can filter packets by the source and destination ports. In
the example above, we have permitted traffic from 10.0.0.2 (User’s workstation) to
192.168.0.1 (Server) on port 80. The last part of the statement, eq 80, specifies the
destination port of 80.

Since at the end of each access list there is an implicit deny all statement, we don't
need to define any more statements. After applying the access list, all traffic other than
traffic from 10.0.0.2 to 192.168.0.1 on port 80 will be denied.

We need to apply the access list to the interface:

R1(config)#int f0/0
R1(config-if)#ip access-group 100 in

We can verify whether our configuration was successful by trying to access Server from
the User’s workstation using different methods. For example, the ping will fail:
C:\>ping 192.168.0.1

Pinging 192.168.0.1 with 32 bytes of data:

Reply from 10.0.0.1: Destination host unreachable.

Reply from 10.0.0.1: Destination host unreachable.

Reply from 10.0.0.1: Destination host unreachable.

Reply from 10.0.0.1: Destination host unreachable.

Ping statistics for 192.168.0.1:

Packets: Sent = 4, Received = 0, Lost = 4 (100% loss)

Telnetting to port 21 will fail:


C:\>telnet 192.168.0.1 21

Trying 192.168.0.1 ...

% Connection timed out; remote host not responding

However, we will be able to access Server on port 80 using our browser:

Configuring named ACLs


Just like the numbered ACLs we’ve used so far, named ACLs allow you to filter network
traffic according to various criteria. However, they have the following benefits over
numbered ACLs:

 an ACL can be assigned a meaningful name (e.g. filter_traffic_to_server)


 ACL subcommands are used in the ACL configuration mode, and not in the
global configuration mode as with numbered ACLs
 you can reorder statements in a named access list using sequence numbers

NOTE
Just like numbered ACLs, named ACLs can be of two types: standard and extended.
The named ACL name and type is defined using the following syntax:
(config) ip access-list STANDARD|EXTENDED NAME

The command above moves you to the ACL configuration mode, where you can
configure the permit and deny statements. Just like numbered ACLs, named ACLs end
with an implicit deny statement, so any traffic not explicitly permitted will be forbidden.

We will use the following network in our configuration example:


We want to deny the user’s workstation (10.0.0.2/24) any type of access to the Domain
server (192.168.0.1/24). We also want to enable the user unrestricted access to
the File share (192.168.0.2/24).

First, we will create and name our ACL:

R1(config)#ip access-list extended allow_traffic_fileshare

Once inside the ACL config mode, we need to create a statement that will deny the
user’s workstation access to the Domain server:

R1(config-ext-nacl)#20 deny ip 10.0.0.2 0.0.0.0 192.168.0.1 0.0.0.0

The number 20 represents the line in which we want to place this entry in the ACL. This
allows us to reorder statements later if needed.

Now, we will execute a statement that will permit the workstation access to the File
share:

R1(config-ext-nacl)#50 permit ip 10.0.0.2 0.0.0.0 192.168.0.2 0.0.0.0

Lastly, we need to apply the access list to the Gi0/0 interface on R1:
R1(config)#int Gi0/0

R1(config-if)#ip access-group allow_traffic_fileshare in

The commands above will force the router to evaluate all packets trying to enter Gi0/0. If
the workstation tries to access the Domain server, the traffic will be forbidden because
of the first ACL statement. However, if the user tries to access the File share, the
traffic will be allowed because of the second statement.

Our named ACL configuration looks like this:


R1#show ip access-lists

Extended IP access list allow_traffic_fileshare

20 deny ip host 10.0.0.2 host 192.168.0.1

50 permit ip host 10.0.0.2 host 192.168.0.2

Notice the sequence number at the beginning of each entry. If we need to insert a new
entry between these two entries, we can do that by specifying a sequence number in
the range between 20 and 50. If we don't specify the sequence number, the entry will be
added to the bottom of the list.

We can use the ping command on the workstation to verify the traffic is being blocked
properly:
C:\>ping 192.168.0.1

Pinging 192.168.0.1 with 32 bytes of data:

Reply from 10.0.0.1: Destination host unreachable.

Reply from 10.0.0.1: Destination host unreachable.

Reply from 10.0.0.1: Destination host unreachable.

Reply from 10.0.0.1: Destination host unreachable.

Ping statistics for 192.168.0.1:

Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

C:\>

C:\>ping 192.168.0.2

Pinging 192.168.0.2 with 32 bytes of data:

Reply from 192.168.0.2: bytes=32 time<1ms TTL=127

Reply from 192.168.0.2: bytes=32 time<1ms TTL=127

Reply from 192.168.0.2: bytes=32 time<1ms TTL=127

Reply from 192.168.0.2: bytes=32 time<1ms TTL=127

Ping statistics for 192.168.0.2:

Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

Approximate round trip times in milli-seconds:

Minimum = 0ms, Maximum = 0ms, Average = 0ms


As you can see from the ping output above, the traffic is being filtered properly.

What is NAT (Network Address Translation)?


NAT (Network Address Translation) is a process of changing the source and
destination IP addresses and ports. Address translation reduces the need for IPv4
public addresses and hides private network address ranges. This process is usually
done by routers or firewalls.

An example will help you understand the concept:

Host A requests a web page from an Internet server. Because Host A uses private IP
addressing, the source address of the request has to be changed by the router because
private IP addresses are not routable on the Internet. Router R1 receives the request,
changes the source IP address to its public IP address and sends the packet to server
S1. Server S1 receives the packet and replies to router R1. Router R1 receives the
packet, changes the destination IP addresses to the private IP address of Host A and
sends the packet to Host A.

There are three types of address translation:

1. Static NAT – translates one private IP address to a public one. The public IP
address is always the same.
2. Dynamic NAT – private IP addresses are mapped to the pool of public IP
addresses.
3. Port Address Translation (PAT) – one public IP address is used for all internal
devices, but a different port is assigned to each private IP address. Also known
as NAT Overload.

Static NAT
With static NAT, routers or firewalls translate one private IP address to a single public IP
address. Each private IP address is mapped to a single public IP address. Static NAT is
not often used because it requires one public IP address for each private IP address.

To configure static NAT, three steps are required:


1. configure private/public IP address mapping by using the ip nat inside source static
PRIVATE_IP PUBLIC_IP command
2. configure the router’s inside interface using the ip nat inside command
3. configure the router’s outside interface using the ip nat outside command

Here is an example.

Computer A requests a web resource from S1. Computer A uses its private IP address
when sending the request to router R1. Router R1 receives the request, changes the
private IP address to the public one, and sends the request to S1. S1 responds to R1.
R1 receives the response, looks it up in its NAT table, and changes the destination IP
address to the private IP address of Computer A.

In the example above, we need to configure static NAT. To do that, the following
commands are required on R1:
R1(config)#ip nat inside source static 10.0.0.2 59.50.50.1

R1(config)#interface fastEthernet 0/0

R1(config-if)#ip nat inside

R1(config-if)#interface fastEthernet 0/1

R1(config-if)#ip nat outside

Using the commands above, we have configured a static mapping between Computer
A’s private IP address of 10.0.0.2 and the router’s R1 public IP address of 59.50.50.1.
To check NAT, you can use the show ip nat translations command:
R1#show ip nat translations

Pro Inside global Inside local Outside local Outside global

icmp 59.50.50.1:9 10.0.0.2:9 59.50.50.2:9 59.50.50.2:9

--- 59.50.50.1 10.0.0.2 --- ---


Dynamic NAT
Unlike static NAT, where you had to manually define a static mapping between a
private and a public address, with dynamic NAT the mapping of a local address to a
global address happens dynamically. This means that the router dynamically picks an
address from the global address pool that is not currently assigned. The dynamic entry
stays in the NAT translations table as long as the traffic is exchanged. The entry times
out after a period of inactivity and the global IP address can be used for new
translations.

With dynamic NAT, you need to specify two sets of addresses on your Cisco router:

 the inside addresses that will be translated


 a pool of global addresses

To configure dynamic NAT, the following steps are required:

1. configure the router’s inside interface using the ip nat inside command
2. configure the router’s outside interface using the ip nat outside command
3. configure an ACL that has a list of the inside source addresses that will be translated
4. configure a pool of global IP addresses using the ip nat pool NAME
FIRST_IP_ADDRESS LAST_IP_ADDRESS netmask SUBNET_MASK command
5. enable dynamic NAT with the ip nat inside source list ACL_NUMBER pool
NAME global configuration command

Consider the following example:

Host A requests a web resource from the Internet server S1. Host A uses its private IP
address when sending the request to router R1. Router R1 receives the request,
changes the private IP address to one of the available global addresses in the pool and
sends the request to S1. S1 responds to R1. R1 receives the response, looks it up in its
NAT table and changes the destination IP address to the private IP address of Host A.

To configure dynamic NAT, the following commands are required on R1:

1. First we need to configure the router’s inside and outside NAT interfaces:
R1(config)#int f0/0

R1(config-if)#ip nat inside

R1(config-if)#int f0/1

R1(config-if)#ip nat outside

2. Next, we need to configure an ACL that will include a list of the inside source
addresses that will be translated. In this example we want to translate all inside hosts on
the 10.0.0.0/24 network:
R1(config)#access-list 1 permit 10.0.0.0 0.0.0.255

3. We need to configure the pool of global (public) IP addresses available on the outside
interface:
R1(config)#ip nat pool STUDY-CCNA_POOL 155.4.12.1 155.4.12.3
netmask 255.255.255.0

The pool configured above consists of 3 addresses: 155.4.12.1, 155.4.12.2, and
155.4.12.3.

4. Lastly, we need to enable dynamic NAT:

R1(config)#ip nat inside source list 1 pool STUDY-CCNA_POOL

The command above tells the router to translate all addresses specified in access
list 1 to the pool of global addresses named STUDY-CCNA_POOL.

You can list all NAT translations using the show ip nat translations command.

Generate some traffic from the PC to the server first to test:

C:\>ping 155.4.12.5

Pinging 155.4.12.5 with 32 bytes of data:

Reply from 155.4.12.5: bytes=32 time<1ms TTL=127

Reply from 155.4.12.5: bytes=32 time=3ms TTL=127

Reply from 155.4.12.5: bytes=32 time=1ms TTL=127

Reply from 155.4.12.5: bytes=32 time<1ms TTL=127


Ping statistics for 155.4.12.5:

Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

Approximate round trip times in milli-seconds:

Minimum = 0ms, Maximum = 3ms, Average = 1ms

Then enter the show ip nat translations command quickly enough before the translation
has timed out:
R1#show ip nat translations

Pro  Inside global    Inside local     Outside local    Outside global
icmp 155.4.12.1:16    10.0.0.100:16    155.4.12.5:16    155.4.12.5:16

In the output above you can see that a translation has been made between Host A's
private IP address (Inside local, 10.0.0.100) and the first available public IP address
from the pool (Inside global, 155.4.12.1), and that it is connecting to the server on the
outside (Outside local and Outside global, 155.4.12.5).
NOTE
You can remove all NAT translations from the table by using the clear ip nat translation * command.

Port Address Translation (PAT) configuration


With Port Address Translation (PAT), a single public IP address is used for all internal
private IP addresses, but a different port is assigned to each private IP address. This
type of NAT is also known as NAT Overload and is the typical form of NAT used in
today’s networks. It is even supported by most consumer-grade routers.

PAT allows you to support many hosts with only a few public IP addresses. It works by
creating a dynamic NAT mapping, in which a global (public) IP address and a unique port
number are selected. The router keeps a NAT table entry for every unique combination
of private IP address and port, with translation to the global address and a unique
port number.

We will use the following example network to explain the benefits of using PAT:
As you can see in the picture above, PAT uses unique source port numbers on the
inside global (public) IP address to distinguish between translations. For example, if the
host with the IP address of 10.0.0.101 wants to access the server S1 on the Internet,
the host’s private IP address will be translated by R1 to 155.4.12.1:1056 and the
request will be sent to S1. S1 will respond to 155.4.12.1:1056. R1 will receive that
response, look up in its NAT translation table, and forward the request to the host.

To configure PAT, the following commands are required:

 configure the router’s inside interface using the ip nat inside command.
 configure the router’s outside interface using the ip nat outside command.
 configure an access list that includes a list of the inside source addresses that
should be translated.
 enable PAT with the ip nat inside source list ACL_NUMBER interface TYPE
overload global configuration command.

Here is how we would configure PAT for the network picture above.

First, we will define the outside and inside interfaces on R1:


R1(config)#int Gi0/0

R1(config-if)#ip nat inside


R1(config-if)#int Gi0/1

R1(config-if)#ip nat outside

Next, we will define an access list that will include all private IP addresses we would like
to translate:
R1(config-if)#access-list 1 permit 10.0.0.0 0.0.0.255

The access list defined above includes all IP addresses from the 10.0.0.0 – 10.0.0.255
range.

Now we need to enable NAT and refer to the ACL created in the previous step and to
the interface whose IP address will be used for translations:
R1(config)#ip nat inside source list 1 interface Gi0/1 overload

To verify the NAT translations, we can use the show ip nat translations command after
hosts request a web resource from S1:
R1#show ip nat translations

Pro Inside global Inside local Outside local Outside global

tcp 155.4.12.1:1024 10.0.0.100:1025 155.4.12.5:80 155.4.12.5:80

tcp 155.4.12.1:1025 10.0.0.101:1025 155.4.12.5:80 155.4.12.5:80

tcp 155.4.12.1:1026 10.0.0.102:1025 155.4.12.5:80 155.4.12.5:80

Notice that the same IP address (155.4.12.1) has been used to translate three private
IP addresses (10.0.0.100, 10.0.0.101, and 10.0.0.102). The port number of the public IP
address is unique for each connection. So when S1 responds to 155.4.12.1:1026, R1
looks into its NAT translation table and forwards the response to 10.0.0.102:1025.
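
The translation table logic can be illustrated with a short Python sketch (not router code; the starting port number 1024 is an assumption made for the example):

class PatTable:
    def __init__(self, public_ip, first_port=1024):
        self.public_ip = public_ip
        self.next_port = first_port
        self.table = {}          # (inside IP, inside port) -> public source port

    def translate(self, inside_ip, inside_port):
        key = (inside_ip, inside_port)
        if key not in self.table:             # new session: allocate the next free port
            self.table[key] = self.next_port
            self.next_port += 1
        return self.public_ip, self.table[key]

nat = PatTable("155.4.12.1")
print(nat.translate("10.0.0.100", 1025))   # ('155.4.12.1', 1024)
print(nat.translate("10.0.0.101", 1025))   # ('155.4.12.1', 1025)
print(nat.translate("10.0.0.102", 1025))   # ('155.4.12.1', 1026)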

What is IPv6 (Internet Protocol Version 6)?


IPv6 is the newest version of the IP protocol. IPv6 was developed to overcome many
deficiencies of IPv4, most notably the problem of IPv4 address exhaustion. Unlike IPv4,
which has only about 4.3 billion (2^32) available addresses, IPv6 allows for about
3.4 × 10^38 addresses.

IPv6 features
Here is a list of the most important features of IPv6:

 Large address space: IPv6 uses 128-bit addresses, which means that for each
person on the Earth there are
48,000,000,000,000,000,000,000,000,000 addresses!
 Enhanced security: IPSec (Internet Protocol Security) is built into IPv6 as part
of the protocol . This means that two devices can dynamically create a secure
tunnel without user intervention.
 Header improvements: the packet header used in IPv6 is simpler than the one
used in IPv4. The IPv6 header is not protected by a checksum, so routers do not
need to calculate a checksum for every packet.
 No need for NAT: since every device has a globally unique IPv6 address, there
is no need for NAT.
 Stateless address autoconfiguration: IPv6 devices can automatically configure
themselves with an IPv6 address.

IPv6 address format

Unlike IPv4, which uses a dotted-decimal format with each byte ranging from 0 to
255, IPv6 uses eight groups of four hexadecimal digits separated by colons. For
example, this is a valid IPv6 address:

2340:0023:AABA:0A01:0055:5054:9ABC:ABB0

IPv6 address shortening

The IPv6 address given above looks daunting, right? Well, there are two conventions
that can help you shorten what must be typed for an IP address:

1. a leading zero can be omitted

For example, the address listed above
(2340:0023:AABA:0A01:0055:5054:9ABC:ABB0) can be shortened
to 2340:23:AABA:A01:55:5054:9ABC:ABB0

2. successive fields of zeroes can be represented as two colons (::)

For example, 2340:0000:0000:0000:0455:0000:AAAB:1121 can be written
as 2340::0455:0000:AAAB:1121

NOTE
You can shorten an address this way only once. The reason is obvious – if you had more than
one occurrence of the double colon, you wouldn't know how many sets of zeroes were being
omitted from each part.

Here are a couple more examples to help you grasp the concept of IPv6 address shortening:

Long version: 1454:0045:0000:0000:4140:0141:0055:ABBB
Shortened version: 1454:45::4140:141:55:ABBB

Long version: 0000:0000:0001:AAAA:BBBC:A222:BBBA:0001
Shortened version: ::1:AAAA:BBBC:A222:BBBA:1
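
If you want to double-check a shortened address, Python's standard ipaddress module applies the same two rules (illustration only; note that it prints the hex digits in lowercase):

import ipaddress

for long_form in ["2340:0023:AABA:0A01:0055:5054:9ABC:ABB0",
                  "1454:0045:0000:0000:4140:0141:0055:ABBB",
                  "0000:0000:0001:AAAA:BBBC:A222:BBBA:0001"]:
    print(ipaddress.IPv6Address(long_form).compressed)

# 2340:23:aaba:a01:55:5054:9abc:abb0
# 1454:45::4140:141:55:abbb
# ::1:aaaa:bbbc:a222:bbba:1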

IPv6 Interface Identifier


The second part of an IPv6 unicast or anycast address is typically a 64-bit interface
identifier used to identify a host's network interface. A 64-bit interface ID is created by
inserting the hex value FFFE in the middle of the MAC address of the network card.
Also, the 7th bit in the first byte (the universal/local bit) is inverted; a value of 0 in this
bit means the MAC address is a burned-in MAC address, so after the inversion it
becomes 1. When this is done, the interface ID is commonly called the modified
extended unique identifier 64 (EUI-64).

For example, if the MAC address of a network card is 00:BB:CC:DD:11:22 the interface
ID would be 02BBCCFFFEDD1122.

Why is that so?


Well, first we need to flip the seventh bit from 0 to 1. MAC addresses are in hex format.
The binary format of the MAC address looks like this:
hex 00BBCCDD1122

binary 0000 0000 1011 1011 1100 1100 1101 1101 0001 0001 0010 0010

We need to flip the seventh bit:


binary 0000 0010 1011 1011 1100 1100 1101 1101 0001 0001 0010 0010

Now we have this address in hex:


hex 02BBCCDD1122

Next, we need to insert FFFE in the middle of the address listed above:
hex 02BBCCFFFEDD1122

So, the interface ID is now 02BB:CCFF:FEDD:1122.

Another example, this time with the MAC address of 00000C432A35.

1. Convert to binary and flip the seventh bit to one:

binary: 0000 0010 0000 0000 0000 1100 0100 0011 0010 1010 0011 0101

2. Convert back to hex:

hex: 02000C432A35

3. Insert FFFE in the middle:

interface ID: 02000CFFFE432A35
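
The two worked examples above can be reproduced with a short Python sketch of the modified EUI-64 steps (illustration only; the function name is made up for this example):

def eui64_interface_id(mac):
    # Strip separators, flip the 7th bit (universal/local bit) of the first
    # byte, and insert FF:FE in the middle of the 48-bit MAC address.
    raw = bytearray(bytes.fromhex(mac.replace(":", "").replace(".", "")))
    raw[0] ^= 0b00000010
    eui = raw[:3] + b"\xff\xfe" + raw[3:]
    return ":".join(eui[i:i + 2].hex().upper() for i in range(0, 8, 2))

print(eui64_interface_id("00:BB:CC:DD:11:22"))   # 02BB:CCFF:FEDD:1122
print(eui64_interface_id("00:00:0C:43:2A:35"))   # 0200:0CFF:FE43:2A35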


Differences between IPv4 and IPv6
The following table summarizes the major differences between IPv4 and IPv6:

Feature                  IPv4                               IPv6
Address length           32 bits                            128 bits
Address representation   4 decimal numbers from 0-255,      8 groups of 4 hexadecimal digits,
                         separated by periods               separated by colons
Address types            unicast, multicast, broadcast      unicast, multicast, anycast
Packet header            20 bytes long                      40 bytes long, but simpler than the IPv4 header
Configuration            manual, DHCP                       manual, DHCP, autoconfiguration
IPSec support            optional                           built-in

Types of IPv6 Addresses


Three categories of IPv6 addresses exist:

 Unicast – represents a single interface. Packets addressed to a unicast address are


delivered to a single host.
 Anycast – identifies one or more interfaces. For example, servers that support the same
function can use the same unicast IP address. Packets sent to that IP address are
forwarded to the nearest server. Anycast addresses are used for load-balancing. Known
as “one-to-nearest” address.
 Multicast – represents a dynamic group of hosts. Packets sent to this address are
delivered to many interfaces. Multicast addresses in IPv6 have a similar purpose as their
counterparts in IPv4.

NOTE
IPv6 doesn’t use the broadcast method, but multicast to all hosts on the network provides the
functional equivalent.
IPv6 Unicast Addresses
Unicast addresses represent a single interface. Packets addressed to a unicast address
will be delivered to a specific network interface.

There are three types of IPv6 unicast addresses:

 global unicast – similar to IPv4 public IP addresses. These addresses are assigned by
the IANA and used on public networks. They have a prefix of 2000::/3, (all the addresses
that begin with binary 001).
 unique local – similar to IPv4 private addresses. They are used in private networks and
aren’t routable on the Internet. These addresses have a prefix of FD00::/8.
 link local – these addresses are used for sending packets over the local subnet.
Routers do not forward packets with these addresses to other subnets. IPv6 requires a
link-local address to be assigned to every network interface on which the IPv6 protocol is
enabled. These addresses have a prefix of FE80::/10.

IPv6 Global Addresses


IPv6 global addresses are similar to IPv4 public addresses. As the name implies, they
are routable on the internet. Currently IANA has assigned only 2000::/3 addresses to
the global pool.

A global IPv6 address consists of two parts:

 subnet ID – 64 bits long. Contains the site prefix (obtained from a Regional Internet
Registry) and the subnet ID (subnets within the site).
 interface ID – 64 bits long. Typically composed of a part of the MAC address of
the interface.

Here is a graphical representation of the two parts of a global IPv6 address:

IPv6 Unique Local Addresses


Unique local IPv6 addresses have a similar function as IPv4 private addresses. They
are not allocated by an address registry and are not meant to be routed outside their
domain. Unique local IPv6 addresses begin with FD00::/8.
A unique local IPv6 address is constructed by appending a randomly generated 40-bit
hexadecimal string to the FD00::/8 prefix. The subnet field and interface ID are created
the same way as with global IPv6 addresses.

A graphical representation of a unique local IPv6 address:

NOTE
The original IPv6 RFCs defined a private address class called site local. This class has been
deprecated and replaced with unique local addresses.

IPv6 Link-Local Addresses


Link-local IPv6 addresses have a smaller scope of how far they can travel: only within a
network segment to which a host is connected. Routers will not forward packets
destined to a link-local address to other links. A link-local IPv6 address must be
assigned to every network interface on which the IPv6 protocol is enabled. A host can
automatically derive its own link-local IP address, or the address can be manually
configured.

Link-local addresses have a prefix of FE80::/10. They are mostly used for auto-address
configuration and neighbour discovery.

Here is a graphical representation of a link local IPv6 address:

IPv6 Multicast Addresses


Multicast addresses in IPv6 are similar to multicast addresses in IPv4. They are used to
communicate with dynamic groupings of hosts, for example all routers on the link (one-
to-many distribution).

Here is a graphical representation of the IPv6 multicast packet:

IPv6 multicast addresses start with FF00::/8. After the first 8 bits, there are 4 bits that
represent the flag fields that indicate the nature of specific multicast addresses. The
next 4 bits indicate the scope of the IPv6 network for which the multicast traffic is
intended. Routers use the scope field to determine whether multicast traffic can be
forwarded. The remaining 112 bits of the address make up the multicast Group ID.

Some of the possible scope values are:

1 – interface-local
2 – link-local
4 – admin-local
5 – site-local
8 – organization-local
E – global

For example, the addresses that begin with FF02::/16 are multicast addresses intended
to stay on the local link.
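
A small Python sketch (illustration only) shows how the scope nibble can be pulled out of a multicast address and mapped to the names above:

import ipaddress

SCOPES = {0x1: "interface-local", 0x2: "link-local", 0x4: "admin-local",
          0x5: "site-local", 0x8: "organization-local", 0xE: "global"}

def multicast_scope(addr):
    packed = ipaddress.IPv6Address(addr).packed   # 16 raw bytes of the address
    assert packed[0] == 0xFF, "not a multicast address"
    return SCOPES.get(packed[1] & 0x0F, "unassigned/reserved")

print(multicast_scope("FF02::1"))   # link-local
print(multicast_scope("FF05::2"))   # site-local
print(multicast_scope("FF0E::1"))   # global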

The following table lists some of the most common link-local multicast addresses:

FF02::1              All nodes on the local link
FF02::2              All routers on the local link
FF02::1:FF00:0/104   Solicited-node multicast addresses

IPv6 Address Prefixes


Here is a summary of the most common address prefixes in IPv6:

2000::/3    global unicast
FD00::/8    unique local
FE80::/10   link-local
FF00::/8    multicast
::1/128     loopback

How to Enable IPv6 on a Cisco Router?


Cisco routers do not have Internet Protocol version 6 (IPv6) routing enabled by default.
So how do we enable IPv6 on a router?

1. First, enable IPv6 routing on a Cisco router using the ‘ipv6 unicast-routing’ global
configuration command. This command globally enables IPv6 and must be the first
command executed on the router.
2. Configure the IPv6 global unicast address on an interface using the ‘ipv6 address
address/prefix-length [eui-64]’ command. After you enter this command, the link local
address will be automatically derived. If you omit the ‘eui-64’ parameter, you will need
to configure the entire address manually.

IPv6 Configuration and Verification


Here is an IPv6 configuration example:
R1(config)#ipv6 unicast-routing

R1(config)#int Gi0/0

R1(config-if)#ipv6 address 2001:0BB9:AABB:1234::/64 eui-64

We can verify the IP configuration and IP settings using the ‘show ipv6 interface
Gi0/0’ command:

R1#show ipv6 interface Gi0/0

GigabitEthernet0/0 is up, line protocol is up

IPv6 is enabled, link-local address is FE80::201:42FF:FE65:3E01

No Virtual link-local address(es):

Global unicast address(es):

2001:BB9:AABB:1234:201:42FF:FE65:3E01, subnet is
2001:BB9:AABB:1234::/64 [EUI]

Joined group address(es):

FF02::1

FF02::2

FF02::1:FF65:3E01

MTU is 1500 bytes

....

From the output above, we can verify the following:

1. The link local IPv6 address has been automatically configured. Link local addresses
begin with FE80::/10, and the interface ID is used for the rest of the address. Because
the interface's MAC address is 00:01:42:65:3E:01, the calculated address
is FE80::201:42FF:FE65:3E01. IPv6 hosts check that their link local IP addresses are
unique and not in use by reaching out to the local network using the Neighbor Discovery
Protocol (NDP).
2. The global IPv6 address has been created using the modified EUI-64 method.
Remember that IPv6 global addresses begin with 2000::/3. So in our case, the IPv6
global address is 2001:BB9:AABB:1234:201:42FF:FE65:3E01.

We will also create an IPv6 address on another router. This time, we will enter the
whole address:
R2(config-if)#ipv6 address
2001:0BB9:AABB:1234:1111:2222:3333:4444/64

Notice that the IPv6 address is in the same subnet configured on R1
(2001:0BB9:AABB:1234::/64). We can test the connectivity between the devices using
the 'ping' command for IPv6:

R1#ping ipv6 2001:0BB9:AABB:1234:1111:2222:3333:4444

Type escape sequence to abort.

Sending 5, 100-byte ICMP Echos to 2001:0BB9:AABB:1234:1111:2222:3333:4444, timeout is 2 seconds:

!!!!!

Success rate is 100 percent (5/5), round-trip min/avg/max = 0/0/0 ms

As you can see from the output above, the devices can communicate with each other.
So that's how to enable IPv6 on a router. IPv6 addresses and the default gateway can
also be configured on hosts automatically using SLAAC and DHCPv6. DNS server
addresses are still required to be able to reach the Internet.

IPv6 SLAAC – Stateless Address Autoconfiguration
There are three options on how to configure IPv6 addresses on our networks. The first
one is to statically assign a unicast IPv6 address to an interface of the device. Another
option is Stateless Address Autoconfiguration (SLAAC), which uses link-local addresses
and the interface’s MAC address. Lastly, stateful autoconfiguration which uses Dynamic
Host Configuration Protocol version 6 (DHCPv6).
Stateless Autoconfiguration (SLAAC) Process
IPv6 Stateless Address Autoconfiguration or SLAAC allows devices on a network to
automatically configure IPv6 addresses on its interface without managing a DHCP
server.

Here is the command to configure Stateless Autoconfiguration to the device’s interface:


Corp Router(config)#interface fastEthernet 0/0

Corp Router(config-if)#ipv6 address autoconfig default
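
You can verify which addresses the interface actually learned with the 'show ipv6 interface brief' command. As a rough illustration only (using the addresses derived in the example that follows; your output will differ):

Corp Router#show ipv6 interface brief fastEthernet 0/0

FastEthernet0/0            [up/up]

    FE80::A840:11FF:FE34:5531

    2001:1385:A:B:A840:11FF:FE34:5531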

Link-Local Address
The first thing that happens is that the device gives itself a link-local address; this
configuration of its own interface happens automatically. The link-local address is built
by combining the link-local prefix FE80::/64 with an EUI-64 interface identifier, which is
generated from the interface's MAC address by padding it in the middle with 16 bits of
FFFE.

For example, I have a MAC address of AA40:1134:5531. To create a link-local address,
we will insert 0xFFFE in the middle of it:
AA40:11FF:FE34:5531

Then flip the 7th bit of the first byte (the universal/local bit) of the MAC address, which results in:
A840:11FF:FE34:5531

10101010 represents the first 8 bits of the MAC address (AA), which, when inverting the
7th bit becomes 10101000 (A8). Therefore, the resulting IPv6 EUI-64 link-local address
is this:
FE80::A840:11FF:FE34:5531.

NOTE

Most modern operating systems randomize the host portion of the address rather than
using the standard EUI-64 for security and privacy reasons.

Duplicate Address Detection (DAD)


When the client device has obtained the IPv6 link-local address on its interface, the
second step is to send a Duplicate Address Detection (DAD) ICMPv6 message on its
link. This checks whether the target address matches any existing address, ensuring
that the link-local address is unique and not duplicated on the local segment.
NOTE

When the device autoconfigures or receives an IPv6 address, it sends out three DAD probes
via Neighbor Discovery Protocol Neighbor Solicitation (NDP NS) messages, asking if anyone
else has this same address.

Router Solicitation and Router Advertisement


After the device confirms that its IPv6 address is unique and there’s no duplicate link-
local address as its interface becomes enabled, the next step of the SLAAC process is
to send a Router Solicitation message (RS) that requests that routers generate Router
Advertisements (RA). This message aims to query all IPv6-speaking routers attached to
this segment about the global unicast prefix used.

Global Unicast Address


Since our client device has only a link-local unicast address configured on its interface,
which is not meant to be routed, it needs to configure a global unicast address.

When our client device receives the router advertisement sent by the router, it combines
the global unicast prefix (2001:1385:A:B::/64) with its EUI-64 interface
identifier (A840:11FF:FE34:5531), resulting in the global unicast address
2001:1385:A:B:A840:11FF:FE34:5531/64, which can be routed on the Internet. The
default gateway of our client device will be the router that sends Router Advertisements
(RA) to it.
NOTE

Stateless autoconfiguration (SLAAC) assigns IP addresses automatically without the
need for a DHCP server to keep track of the IP address of a host. However, SLAAC
does not provide DNS information, and without DNS, many services, such as surfing the
Internet, are not possible.
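
On a Cisco router, one common way to handle this is to keep SLAAC for addressing but set the "other configuration" flag in the Router Advertisements, so that hosts also query a (stateless) DHCPv6 server for DNS and similar options. A minimal sketch, assuming the LAN interface is GigabitEthernet0/0:

R1(config)#interface GigabitEthernet0/0

R1(config-if)#ipv6 nd other-config-flag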

DHCPv6 – Dynamic Host Configuration Protocol v6


Another way to assign IPv6 addresses to hosts is via DHCPv6. DHCPv6 is an update to
DHCPv4, with its main difference being that it supports IPv6’s new addressing scheme.
Unlike SLAAC, DHCPv6 is called stateful configuration because it keeps track of which
hosts have which IP address and stores information about them.

DHCPv6 Process
The first step is that the DHCPv6 client must discover other devices on the link via the
Neighbor Discovery Protocol (NDP). If the client detects a router, it checks the Router
Advertisement (RA) sent by that router to see whether DHCPv6 should be used. If no
Router Advertisement messages are seen, the client sends a DHCP Solicit message, a
multicast message with a destination of FF02::1:2, which addresses "all DHCP agents,
both servers and relays." The DHCPv6 client uses the link-local scope for the Solicit
message, ensuring that, by default, the message is not forwarded beyond the local link.

NOTE

In practice, a DHCPv6 server is often still needed to hand out information such as DNS
server addresses in order to surf the internet, since SLAAC by itself only configures IPv6
addresses.
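
As an illustration, here is a minimal sketch of a stateful DHCPv6 server configuration on a Cisco router; the pool name, prefix, DNS address, and interface are made-up examples, not values from any topology in this document:

R1(config)#ipv6 unicast-routing

R1(config)#ipv6 dhcp pool LAN-POOL

R1(config-dhcpv6)#address prefix 2001:DB8:1:1::/64

R1(config-dhcpv6)#dns-server 2001:4860:4860::8888

R1(config-dhcpv6)#exit

R1(config)#interface GigabitEthernet0/0

R1(config-if)#ipv6 dhcp server LAN-POOL

R1(config-if)#ipv6 nd managed-config-flag

The 'ipv6 nd managed-config-flag' command sets the M flag in Router Advertisements, telling hosts to obtain their addresses from the DHCPv6 server.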

IPv6 Routing – Static Routes Explained and Configured
With both IPv4 and IPv6 routing, we can use static routing and/or dynamic routing
protocols, either Link State routing protocol or Distance Vector protocol to easily
distribute routes. A static route is typically used when there is a single route or a
preferred route for the traffic to reach a destination. Static routing utilizes small routing
tables with a single entry or route for each destination. It also requires less computation
time than dynamic routing because every route is preconfigured.
IPv4 vs IPv6 Routing
Routing is configured to enable communication between different networks. IPv6 works
in the same way as IPv4 in terms of routing, but it has a separate routing table and
process. If the router receives a packet using the IPv4 scheme, it will route the traffic
based on the IPv4 routing table. On the other hand, if it receives an IPv6 address, it will
route the traffic based on the IPv6 routing table. The mechanism in terms of
configuration syntax and operation of both IPv4 and IPv6 is almost the same. The
notable difference is changing the IPv4 address to IPv6 address on the configuration.

IPv6 Static Route Configuration


By default, IPv6 routing is disabled globally. To enable it, enter the command 'ipv6
unicast-routing' in global configuration mode. Without it, we can still enable IPv6 on
individual interfaces, but the router will only accept IPv6 traffic on those specific
interfaces and will not forward it to other interfaces. That approach can be useful for
security reasons if you want to isolate IPv6 traffic to a specific interface.

In configuring an IPv6 static route, we will use the same procedure and configuration
syntax that we are using in configuring IPv4 static routes. For our example
configuration, we will use the topology below:

1. Enable IPv6 on global configuration.


R1(config)#ipv6 unicast-routing

R2(config)#ipv6 unicast-routing
2. Configure the IPv6 addresses on the interfaces. We will use the network prefix /64.
R1(config)#int g0/0

R1(config-if)#ipv6 add 2001:1:1:1::1/64

R1(config-if)#no shut

R1(config-if)#int g0/1

R1(config-if)#ipv6 add FC00:11:11:11::1/64

R1(config-if)#no shut

R2(config)#int g0/0

R2(config-if)#ipv6 add 2001:1:1:1::2/64

R2(config-if)#no shut

R2(config-if)#int g0/1

R2(config-if)#ipv6 add FC00:12:12:12::1/64

R2(config-if)#no shut

3. Configure the IPv6 static route on each router. We will use the next-hop address
instead of the exit interface, and the network prefix /64.
R1(config)#ipv6 route FC00:12:12:12::/64 2001:1:1:1::2

R2(config)#ipv6 route FC00:11:11:11::/64 2001:1:1:1::1

4. Configure the IPv6 address and gateway on each PC. The link-local address shown below
will be populated automatically. IPv6 link-local addresses can be used to
communicate with nodes (hosts and routers) on the same local link.
5. Verify by using end-to-end ping testing from each PC.
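
Alongside the PC tests, you can also verify from the routers themselves. A rough, illustrative check on R1 (your output will differ slightly):

R1#show ipv6 route static

S   FC00:12:12:12::/64 [1/0]

     via 2001:1:1:1::2

R1#ping FC00:12:12:12::1

!!!!!

Success rate is 100 percent (5/5)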
IPv6 Default Static Route and Summary Route
Did you ever wonder how to connect hosts or workstations to the internet with the least
configuration effort? One option is to use a default static route together with a summary
route. With a summary route, we can cover all the subnets within the LAN with a single
static route, as long as they fall within the same summarizable address block. To connect
the hosts inside the LAN to the internet, we need to configure a default route on the CPE
(Customer Premises Equipment) pointing towards the ISP. In this article, you will learn
the concept and configuration of the IPv6 default static route and summary route.

IPv6 Summary Route


A summary route is a single route whose address range covers the addresses of multiple
more-specific subnets in a router's routing table. If there are many subnets on the network
that would each need a static route on the CPE (R3), we can use one static summary route
instead for ease of configuration. All the routes in the network are summarized into one
prefix.

IPv6 Default Static Route


On the other hand, the default route is the gateway address to which the router sends
all IP packets for which it has no learned or static route. A default static route is simply
a static route with ::/0 as the destination IPv6 prefix.

Connected and Local Routes


Similar to IPv4 routes, we also have Connected and Local routes, which are
automatically added to the routing table. IPv6 local routes define the route for that one
specific IPv6 address configured on the router interface. IPv6 local routes have a /128
prefix length which is used to specify their own local IPv6 addresses.

IPv6 connected routes are also automatically added to the routing table when there is
a directly connected subnet on a router's interface. Both ends of the directly connected
link must have an IPv6 address configured, and both interface status codes must be in
the up state. The command 'ipv6 unicast-routing' must also be configured on the
routers to enable IPv6 routing. An illustrative example follows below.
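
For example, using the addressing from the configuration below, the connected and local entries for R1's G0/0 would look roughly like this (illustrative output only):

R1#show ipv6 route

C   2001:1:0:1::/64 [0/0]

     via GigabitEthernet0/0, directly connected

L   2001:1:0:1::1/128 [0/0]

     via GigabitEthernet0/0, receive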

IPv6 Summary and Default Static Routing Configuration


We will configure an IPv6 default static route on R3 so that the hosts and routers inside the
LAN can connect to the internet (ISP). We will use the diagram below as we proceed with
configuring the IPv6 default static route and summary route.
1. Enable IPv6 on the global configuration of each router.
R1(config)#ipv6 unicast-routing

R2(config)#ipv6 unicast-routing

R3(config)#ipv6 unicast-routing

ISP(config)#ipv6 unicast-routing

2. Configure the IPv6 address on each interface of each router.

R1(config)#int g0/0

R1(config-if)#ipv6 add 2001:1:0:1::1/64

R1(config-if)#no shut

R2(config)#int g0/0

R2(config-if)#ipv6 add 2001:1:0:1::2/64


R2(config-if)#no shut

R2(config-if)#int g0/1

R2(config-if)#ipv6 add 2001:1:0:2::1/64

R2(config-if)#no shut

R3(config)#int g0/0

R3(config-if)#ipv6 add 2001:1:0:2::2/64

R3(config-if)#no shut

R3(config-if)#int g0/1

R3(config-if)#ipv6 add 2001:1:1:1::2/64

R3(config-if)#no shut

ISP(config)#int g0/1

ISP(config-if)#ipv6 add 2001:1:1:1::1/64

ISP(config-if)#no shut

3. Configure IPv6 static routes on R1 for the networks 2001:1:0:2::/64 and 2001:1:1:1::/64
so that R1 will know the routes towards R3 and the ISP. We will use the next-hop address
instead of the exit interface.
R1(config)#ipv6 route 2001:1:0:2::/64 2001:1:0:1::2

R1(config)#ipv6 route 2001:1:1:1::/64 2001:1:0:1::2

4. Configure an IPv6 static route on R2 for the network 2001:1:1:1::/64 so that R2 will know
the route towards the ISP.
R2(config)#ipv6 route 2001:1:1:1::/64 2001:1:0:2::2
5. Configure the IPv6 summary route on R3, covering the networks behind R1 and R2 (2001:1:0:1::/64 and 2001:1:0:2::/64), with R2 as the next hop.
R3(config)#ipv6 route 2001:1:0::/48 2001:1:0:2::1

6. Configure an IPv6 default static route on R3 pointing all traffic for unknown
destination networks to the ISP (next hop 2001:1:1:1::1).
R3(config)#ipv6 route ::/0 2001:1:1:1::1

7. Configure an IPv6 static route on the ISP router pointing back to the internal networks behind R3 (summarized as 2001:1:0::/48), so that return traffic can reach R1, R2, and R3.
ISP(config)#ipv6 route 2001:1:0::/48 2001:1:1:1::2
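
As a rough verification (illustrative output only), R3 should now hold both the summary route and the default route, and R1 should be able to reach the ISP:

R3#show ipv6 route static

S   2001:1:0::/48 [1/0]

     via 2001:1:0:2::1

S   ::/0 [1/0]

     via 2001:1:1:1::1

R1#ping 2001:1:1:1::1

!!!!!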

IPv6 Routing Protocols


Like IPv4, IPv6 also supports routing protocols that enable routers to exchange
information about connected networks. IPv6 routing protocols can be internal (RIPng,
OSPFv3, EIGRP for IPv6) or external (MP-BGP).

As with IPv4, IPv6 routing protocols can be either distance vector or link-state. An
example of a distance vector protocol is RIPng with hop count as the metric. An
example of a link-state routing protocol is OSPF with cost as the metric.

IPv6 supports the following routing protocols:

 RIPng (RIP Next Generation)
 OSPFv3
 EIGRP for IPv6
 IS-IS for IPv6
 MP-BGP4 (Multiprotocol BGP-4)
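
As a simple illustration (separate from the static routing labs above), this is roughly how OSPFv3 is enabled on a Cisco router; the process ID, router ID, and interface are just example values:

R1(config)#ipv6 unicast-routing

R1(config)#ipv6 router ospf 1

R1(config-rtr)#router-id 1.1.1.1

R1(config-rtr)#exit

R1(config)#interface GigabitEthernet0/0

R1(config-if)#ipv6 ospf 1 area 0

Unlike OSPFv2, OSPFv3 is enabled per interface rather than with network statements under the routing process.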

Neighbor Discovery Protocol – NDP Overview


NDP stands for Neighbor Discovery Protocol, an IPv6 protocol responsible for tasks
such as stateless autoconfiguration, address resolution, Neighbor Unreachability
Detection (NUD), and Duplicate Address Detection (DAD). It uses ICMPv6 messages at
the network layer and takes over the role that ARP played at the data link layer in IPv4,
and it was developed to improve the efficiency and consistency of data transmission
across networks.

Unlike IPv4, we no longer use Address Resolution Protocol or ARP in IPv6. IPv6
Neighbor Discovery replaces this function.
Neighbor Discovery Protocol Features
Now, let’s discuss the different functions of NDP:

1. Stateless Address Autoconfiguration (SLAAC) – enables each host on the network to
auto-configure its unique IPv6 link-local address and global unicast address without the
help of a DHCP server. Link-local addresses can be used to talk with other hosts on the
same network, while global unicast addresses are routable on the Internet.
2. Address Resolution – The basic concepts of address resolution in IPv6 are not that
different from those in IPv4 ARP. Resolution is still dynamic and based on using a cache
table that maintains pairings of IPv6 addresses and MAC addresses.
3. Neighbor Unreachability Detection (NUD) – detects when a host is no longer
reachable.
4. Duplicate Address Detection (DAD) – verifies that a unicast IPv6 address is unique
before being assigned to a host interface.

NDP ICMPv6 Message Types


Neighbor Discovery Protocol uses ICMPv6 messages to perform all its functions. Let’s
discuss the five different types of ICMPv6:

1. Router Solicitation – Router Solicitation messages (RS) are sent by hosts when they
boot up to find any routers in a local segment and to request that they advertise their
presence on the network.
2. Router Advertisement – Router Advertisement messages (RA) are used by an IPv6
router to advertise its presence on the network. These Router Advertisements contain
information like the router’s IPv6 address, MAC address, MTU, etc.
3. Neighbor Solicitation Message – Neighbor Solicitation messages (NS) are sent by a
host to determine a neighbor's link-layer (MAC) address. The destination address will
be the solicited-node multicast address of the remote host. NS messages are also used
to verify that a neighbor is still reachable via a cached link-layer address.
4. Neighbor Advertisement Message – A host uses Neighbor Advertisement messages
(NA) to respond to NS messages. If a host receives an NS message, it returns an
NA message to the sender. A host also uses this message to announce a link-layer
address change.
5. Redirect – IPv6 routers use this message to notify an originating host of a better next-
hop address for a specific destination. Only routers can send unicast traffic redirect
messages. Only hosts process redirect messages.
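
On a Cisco router, the result of this address resolution can be seen in the IPv6 neighbor cache, which plays the role the ARP table played in IPv4. A rough, illustrative example (the neighbor entry shown is made up):

R1#show ipv6 neighbors

IPv6 Address                              Age Link-layer Addr State Interface

FE80::201:42FF:FE65:3E02                    0 0001.4265.3e02  REACH Gi0/0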

Cisco VPN – What is VPN (Virtual Private Network)?
Security is essential today because of emerging threats from hackers ready to
compromise your network and resources. There are three states of data that we need
to protect – data at rest, data in use, and data in transit. Data in transit is the most
vulnerable to attacks because it travels outside your protected network. One of the best
and most cost-effective options to protect data in transit is a Virtual Private Network
(VPN), and Cisco offers VPN solutions as well.

Why Do We Need VPN?


A Virtual Private Network (VPN) is an encrypted tunnel between two or more devices,
usually firewalls such as the Cisco Adaptive Security Appliance (Cisco ASA), over an
unsecured network such as the internet. All the network traffic that is sent through the
VPN tunnel is encrypted and kept confidential from attackers on the network or the
internet. A VPN replaces a dedicated point-to-point link with an emulated point-to-point
link, a secure connection that runs over shared infrastructure.

Using a VPN is relatively inexpensive, since most organizations already have firewalls
installed with a built-in VPN feature. A VPN also secures all the traffic that is sent
outside your network through VPN tunnels. Lastly, a VPN is scalable in that you can
add more tunnels and users as needed.

Two Types of VPN


There are two types of VPN that we commonly use; both are secure but are implemented
and used in different ways.

1. Site-to-Site VPN

Organizations are continuously expanding into different branches, and to protect the
data in transit between two branches, we need to implement a site-to-site VPN. The
most common VPN protocol used in site-to-site VPNs is Internet Protocol Security
(IPsec). In implementing this type of VPN, we need to set up the Phase 1 and Phase 2
VPN negotiations. IKE Phase 1 negotiation is where we create a secure, encrypted
channel over which the two firewalls can carry out the Phase 2 negotiation.

In the IKE Phase 2 negotiation, the two firewalls agree on the configured parameters
that define what traffic can go through the VPN tunnel and how to authenticate and
encrypt that traffic. The agreement is called a Security Association (SA). Both peers
must be configured with matching parameters for each phase, such as pre-shared keys,
authentication, encryption, and IKE version.
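
On a Cisco IOS router, the Phase 1 and Phase 2 parameters map to commands roughly like the following minimal sketch; the peer address 203.0.113.2, the pre-shared key, and ACL 101 are made-up examples, and this is not a complete configuration:

R1(config)#crypto isakmp policy 10

R1(config-isakmp)#encryption aes 256

R1(config-isakmp)#authentication pre-share

R1(config-isakmp)#group 14

R1(config-isakmp)#exit

R1(config)#crypto isakmp key MySecretKey address 203.0.113.2

R1(config)#crypto ipsec transform-set TS esp-aes 256 esp-sha256-hmac

R1(cfg-crypto-trans)#exit

R1(config)#crypto map VPN-MAP 10 ipsec-isakmp

R1(config-crypto-map)#set peer 203.0.113.2

R1(config-crypto-map)#set transform-set TS

R1(config-crypto-map)#match address 101

R1(config-crypto-map)#exit

R1(config)#interface GigabitEthernet0/0

R1(config-if)#crypto map VPN-MAP

The ISAKMP policy and pre-shared key correspond to Phase 1, while the transform set and crypto map (with ACL 101 defining the interesting traffic) correspond to Phase 2.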

There are two ways to implement site-to-site VPN:

Intranet VPN – provides secured site-to-site connectivity within the company, i.e., internally.
Extranet VPN – provides secured site-to-site connectivity outside the company. For
example, customers or partners can securely access the shared resources of the
company.

The below image shows the Site-to-Site VPN implementation:

2. Remote Access VPN

Commonly called a mobile VPN, this type of VPN connection permits users anywhere in
the world to connect through the internet and securely access corporate network
resources. It can be used in a work-from-home setup where employees securely access
the company's internal resources through the VPN. To implement this, the employee
must install a VPN client, such as the Cisco AnyConnect Secure Mobility Client or Cisco
AnyConnect VPN Client, on their device, and a virtual IP address is assigned to the
employee's device/PC and used to establish the secured tunnel.

Remote access VPNs can use SSL, IKEv2, L2TP, and IPsec protocols. The most secure
and easiest protocol to implement is IKEv2. However, some internet connections block
the IKEv2 and L2TP protocols, which is why many deployments use SSL VPN instead,
as it uses typical HTTP/HTTPS traffic that is allowed on virtually all internet connection
types. Plain IPsec is used less often for remote access VPNs because of known
weaknesses in some of its implementations.

The system administrator can choose between two modes to implement the remote
access VPN:

Full Tunnel – all the traffic coming out of the employee's device goes through the VPN
to the firewall, and the firewall forwards it to the internet if necessary. This is the most
secure implementation because all the security services of the firewall are applied to all
the traffic leaving the employee's device.
Split Tunnel – ordinary internet traffic, such as HTTP/HTTPS browsing, uses the regular
internet connection (broadband/LTE), while only the traffic that accesses the company's
internal resources uses the VPN tunnel. The traffic is split based on its purpose.

The below image shows the Remote Access VPN implementation:

WAN Connection Types – Explanation and Examples
Organizations are constantly expanding into branches in different locations. To connect
two branches, we leverage the infrastructure of an Internet Service Provider (ISP) to
provide connectivity between them. The network connection outside of the
organization's own network is called a Wide Area Network (WAN).

The ISP provides us with a subscription-based WAN connection so that we can access
our resources remotely. Another notable application of WAN technology is the internet
itself: we can connect to the internet because of the WAN links provided by our local
Internet Service Providers. We'll discuss each WAN connection type below.

Wide Area Network Connectivity


Wide Area Network (WAN) is a telecommunication network that is used to simply
extend a LAN over a large geographical distance. Technically, two or more local area
networks can be connected via WAN with different layer 3 devices like routers or
firewalls.

Most WAN connections to the ISP use a static public IP address, a dynamic public IP
address, or PPPoE (Point-to-Point Protocol over Ethernet), depending on the
subscription. Static public IP addresses are more expensive compared to dynamic
addresses and PPPoE because the pool of unique public IP addresses is limited and a
static address cannot be reused by other customers. Private IP addresses are only
routable on the local network and are not routable over the WAN or the internet. The
below diagram shows the LAN and WAN infrastructure.

WAN Connection Types


There are many WAN connections that we use to provide our connectivity to the
internet. Below are the common options for WAN connectivity from the internet provider.

1. Leased Line

This WAN connection type is a dedicated point-to-point link with a fixed-bandwidth data
connection. By using a leased line, your network gets a secure and reliable connection,
high bandwidth, and superior quality of service. On the other hand, leased lines can be
expensive and are not scalable, since each one is a permanent physical connection.

2. Digital Subscriber Line

DSL is a technology used to transfer digital signals over standard telephone lines. It
uses different frequencies than voice calls, so you can use the internet while making a
phone call. DSL is an older technology that typically provides speeds of around 6 Mbps.
The good thing about DSL is that the bandwidth is not shared, so it provides a constant
speed.
3. Cable Internet

One way to provide a broadband internet connection is by using cable internet from a
local cable TV provider. It is similar to DSL in that it also reuses existing infrastructure,
in this case the cable TV network and a cable modem, to send data. On this connection,
the speed varies with the number of users on the service at any given time.

4. Fiber Internet Access

It is the newest broadband connection type and provides the highest internet speeds to
customers. It is also commonly used in telecommunication backhaul connections
because of the higher speeds it can handle compared to other cable types. DWDM,
SONET, and SDH are ISP backhaul transport technologies that use fiber optic cable.
Fiber optics are also used in telecom packet-switching and circuit-switching networks.

5. Multi-Protocol Label Switching (MPLS)

MPLS is a WAN technology, often used to build VPNs, that forwards packets based on
labels instead of IP addresses or Layer 3 headers. It offers good security and optimized
routing between a customer's sites. With MPLS, the service provider participates in the
customer's routing.

6. Wireless WAN

Most of us use mobile phones that rely on mobile data to connect to the internet. The
commonly known connection types for wireless WAN are 3G, 4G, LTE, and 5G. These
are services offered by local ISPs to provide wireless internet access to mobile devices
via cellular sites, using specific radio frequencies to provide wide coverage and a strong
signal to customers.

Leased Line Definition, Explanation, and Example


A Local Area Network (LAN) is where we connect endpoint devices like servers,
desktops, telephones, and access points. To connect a device within one LAN to a
device in a different LAN, we use a Wide Area Network (WAN). In a WAN, two or more
local area networks are connected through Layer 3 devices like routers or firewalls. A
common type of WAN link is the leased line connection.
Understanding Leased Lines
A leased line, sometimes called a dedicated line, is a dedicated point-to-point link with a
fixed-bandwidth data connection. A leased line is not a single dedicated cable; it is a
reserved circuit (over either copper or fiber optic cable) between two points.

The leased line transfers data in both directions using full-duplex transmission. It uses
two pairs of wires (a full-duplex cable), where each pair carries data in one direction. A
leased line is not one long physical cable stretched between two or more locations, as
some perceive. It uses specialized switching devices that act as signal boosters to make
the connection behave like a point-to-point link and reach the remote destination.

Organizations do not build their own infrastructure to create a dedicated connection to
their other branches, as that would be expensive and difficult to implement. Instead, they
use the infrastructure of an Internet Service Provider for a fixed monthly fee, which is
why it is called a leased line.

The below diagram shows how the leased line connects two branches:

ISP as a Leased Line


A leased line can use any medium, as long as it connects the two branches together,
regardless of whether there is network circuitry in between. It can be MPLS, fiber optic,
DSL, or satellite. The local Internet Service Provider is usually the best way to acquire a
leased line, as ISPs have a huge geographical network infrastructure. It can be either a
monthly or a yearly subscription, depending on the terms with the ISP.

From the Optical Network Terminal (ONT) located at the customer's branch, the traffic
goes to the Optical Line Termination (OLT) located at the ISP premises, where all the
optical signals coming from the customers are multiplexed and processed.

From the OLT, the traffic then goes to the edge routers, which use VRFs and add labels
if MPLS is used. From the edge routers, it goes to the core routers, using BGP as the
overlay protocol and an IGP such as OSPF as the underlay protocol. From the core
routers, the data is transmitted to the other edge router, then to the OLT, and finally to
the ONT located at the customer's other branch. The transmission equipment used
between the core routers, edge routers, OLT, and ONT is either Synchronous Digital
Hierarchy (SDH) or Dense Wavelength Division Multiplexing (DWDM).

Leased Line Advantages and Disadvantages


Most leased lines come with a Service Level Agreement (SLA) from the ISP, which
guarantees a reliable and stable connection. Because the leased line is a dedicated
communication channel, your network gets reliable internet access, continuous data
flow, and bandwidth that can be guaranteed and controlled. You can implement a leased
line when you want a completely secure connection and superior quality of service for
your network.

On the other hand, leased lines can be expensive because they need dedicated cabling
and switching circuitry. In addition, a leased line is not scalable, as it is a permanent
physical connection.

The Different Wide Area Network (WAN) Topologies
Wide Area Network (WAN) topologies address the challenge of connecting sites in
different geographical locations, such as branches and other remote sites, which a LAN
cannot reach. We are going to look at the different network topologies that service
providers use to interconnect their clients' sites, priced according to requirements such
as bandwidth, availability, and speed.

We will discuss popular WAN technologies used today, such as Metro Ethernet
(MetroE), which uses physical Ethernet links to connect the customer's network devices
to the service provider's network, along with other WAN topologies. To connect multiple
sites across a Wide Area Network, the service provider should choose the topology that
best addresses the enterprise client's needs.
Point-to-Point
The customer requires basic point-to-point network connectivity between two
geographically separated sites, allowing them to send and receive Ethernet frames to
each other as if they were directly connected. A point-to-point topology is transparent to
the customer network and acts as if there were a physical connection between the
remote sites.

Dedicated leased line connections, such as an E1 or T1 line, are typically offered with
this kind of setup. But with MetroE, service providers offer what is called an Ethernet
Line Service or E-Line, where two customer premises equipment (CPE) devices can
exchange Ethernet frames, similar in concept to a leased line.

Point-to-Multipoint
This WAN topology has a central site device that can send Layer 2 frames directly to the
remote sites, while the remote sites can only send frames to the central site. This Metro
Ethernet service is known as E-Tree, wherein the central site is the E-Tree root and the
remote sites are the E-Tree leaves. This topology is also known as a hub-and-spoke or
partial mesh topology, but regardless of the name, this type of WAN topology creates a
service that addresses the need to have a hub site with several sites connected to it.

Full Mesh
A full mesh topology allows all the connected devices in the WAN to communicate with
each other. The people who developed MetroE anticipated the need for full-mesh
connectivity, where devices all send and receive Ethernet frames to and from each
other directly without depending on a central hub. This MetroE service is called Ethernet
LAN or E-LAN.

An E-LAN service permits all the devices connected to it to send frames directly to each
other as if they were all connected to one big Ethernet switch.
Ring Topology
This topology is almost like point-to-point, but the sites are connected on both sides to
provide protection in case a fault happens. WANs that use this topology are less
susceptible to failure, since traffic can be routed the other way around the ring if a fault
is detected on the network. However, adding new sites to a ring topology requires more
work and cost compared to a simple point-to-point setup, because each new site
requires twice the connections.

Star Topology
A WAN using the star network configuration requires a central hub. A concentrator
router is used at the hub to ensure that data is properly sent to its destination. This
topology allows easier integration of new network components into the network, which
can be an important consideration for business WANs, as it entails less work and cost.

This topology also makes the network less vulnerable to a single point of failure
affecting traffic on the network, except for the central hub itself. The spokes have
independent links for transmitting data to the concentrator router, which means they do
not depend on other spokes to function properly, and issues can be isolated easily if
something goes wrong.

Cybersecurity Threats and Common Attacks Explained
Cyber security threats are malicious and deliberate attempts to damage or steal data or
disrupt services, and they can even become a serious national security concern. A
cyber threat refers to the possibility of a successful cyber attack that aims to access,
damage, disrupt, or steal valuable, sensitive information from its target. Cyber security
threats can come either from within the organization, from trusted users, or from
unknown parties in remote locations.

Nowadays, modern enterprise networks are usually made up of many parts that all work
with each other, and securing them can become very complex. The larger the network
grows, the harder it is to protect everything unless you have identified the vulnerabilities,
assessed the possible exploits, and determined where the threats might come from.

To address cyber security threats and attacks, we first need to understand their types
and how they may disrupt the network or expose sensitive data.

Malware
Malware is malicious software that performs a task on a targeted device or a whole
network. Once activated, usually by clicking a malicious link or attachment, it can block
access to important network components, install additional harmful software, covertly
obtain sensitive information by exfiltrating data from the hard drive, or disrupt the whole
network, halting services to users. Below are examples of malware that cyber criminals
use to gain unauthorized access to a target and perform a cyber attack:

1. Spyware
2. Ransomware
3. Backdoor
4. Trojans
5. Virus
6. Worms

Phishing
Phishing is a way of obtaining valuable information about the target by tricking the victim
into unknowingly providing their credentials. It can be considered a type of social
engineering that manipulates users into bypassing normal cyber security practices and
giving up personal data.

Phishing attacks can be accomplished in a lot of ways, typically by sending phishing
emails that may seem to come from trusted sources such as your bank, friends,
co-workers, or even your own employer. The cybercriminals try to get users to click on
links in the emails that redirect them to fraudulent websites that request personal
information, or that install malware, such as keyloggers or spyware, on their devices.

Man-in-the-Middle (MITM) Attack


This type of attack happens when an attacker places themselves between two parties
while sensitive information is being exchanged. Once the hacker has gained access,
usually by intercepting traffic, they can filter and steal data. MITM attacks often occur
when a guest uses an unsecured Wi-Fi connection. The attackers insert themselves
between the guest and the network, sometimes by installing malware, and use the
captured data maliciously.

Distributed Denial of Service (DDoS)


A DDoS attack is a cybersecurity threat that maliciously attempts to interrupt the normal
traffic of a targeted server, service, or network by flooding and overwhelming the target
or its surrounding infrastructure with Internet traffic.

DDoS attacks are executed with networks of Internet-connected machines. These
networks consist of computers and other devices that have been infected with malware,
which allows them to be controlled remotely by an attacker. These devices are called
bots or zombies, and a group of bots is called a botnet.
Data Breaches
A data breach is a kind of data theft. Motives for data breaches include identity theft, a
desire to embarrass an institution, and espionage aimed at stealing sensitive data or
gaining access to critical infrastructure for financial gain, which can even threaten
national security.

Domain Name System (DNS) Attack


A DNS attack is a kind of cyberattack in which cybercriminals exploit vulnerabilities in
the Domain Name System (DNS). The attackers use DNS vulnerabilities to divert site
visitors to malicious pages (DNS Hijacking) or to exfiltrate data from compromised
systems (DNS Tunneling).

Structured Query Language (SQL) injection


SQL injection exploits a web security vulnerability in an application. It allows an attacker
to interfere with the database queries that the application makes, so they can view and
retrieve restricted data, including user data or any other sensitive data that the
application can access. In many cases an attacker can also modify or delete this data,
causing persistent changes to the application's content or behavior.

Malware on Mobile Apps


Mobile devices are also vulnerable to malware attacks, and cybercriminals can use this
as a starting point to exploit other devices in the network. Attackers may embed
malware in app downloads, mobile websites, or phishing emails and text messages.
Once compromised, this can give the attacker access to personal information, location,
financial accounts, and more.

Advanced Persistent Threats


An advanced persistent cyber threat is when an unauthorized user gains access to a
system or network and stays there without being detected for an extended time.

The Different Types of Firewalls Explained


This article will dig deeper into the most common types of network firewalls. We will
elaborate on stateful firewalls, stateless or packet-filtering firewalls, application-level
gateway firewalls, and next-generation firewalls. We are going to define them and
describe the main differences, including both their advantages and disadvantages.

A firewall can be a software firewall, a hardware firewall, or a combination of both.


Software firewalls are applications or programs installed on devices. Hardware firewalls,
on the other hand, are physical devices.

Stateful Inspection Firewall


A stateful firewall is located at Layer 3 (source and destination IP addresses) and Layer
4 (Transmission Control Protocol/TCP and User Datagram Protocol/UDP) of the OSI
model. It is a type of firewall that monitors the status of active network connections while
analyzing incoming packets for potential threats.

Essential firewall functions include preventing malicious traffic from entering or leaving
the private network. Monitoring the state and context of network communications is
critical because this information can be used to identify threats based on where they
come from, where they go, or the content of their data packets. Stateful firewalls can
detect unauthorized network access attempts and analyze data within packets to see if
it contains malicious code.

Advantages of Stateful Inspection Firewalls:

 Connection state-aware
 Does not open a large range of ports to permit traffic
 Extensive logging capabilities
 Robust attack prevention

Disadvantages of Stateful Firewalls:

 It can be complex to configure


 Cannot avoid application-level attacks
 Does not have user authentication capability
 Not all protocols have state information
 Additional overhead in maintaining state table

Stateless Firewall
We can also call it a packet-filtering firewall.
It is the oldest and most basic type of firewall. Stateless packet-filtering firewalls
operate inline at the network's perimeter. These firewalls, however, do not route
packets; instead, they compare each packet received to a set of predefined criteria,
such as the allowed IP addresses, packet type, port number, and other aspects of the
packet protocol headers. Packet filters provide a basic level of security that can give
protection against known threats. The packet filter does not maintain a connection state
table (a simple access-list sketch follows the lists below).

Packet-filtering Firewall Advantages:

 A single device can filter traffic for an entire network


 Extremely fast processing of packets
 Inexpensive

Packet-filtering Firewall Disadvantages:

 It can be complex to configure and hard to manage


 Cannot avoid application-level attacks
 Does not have user authentication capability
 Limited logging capabilities
 Prone to certain types of TCP/IP protocol attacks
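
Conceptually, a stateless packet filter works like an extended access list applied to a router interface. A minimal sketch, with the addresses and ports made up for illustration:

R1(config)#access-list 100 permit tcp any host 192.0.2.10 eq 80

R1(config)#access-list 100 permit tcp any host 192.0.2.10 eq 443

R1(config)#access-list 100 deny ip any any

R1(config)#interface GigabitEthernet0/0

R1(config-if)#ip access-group 100 in

Each packet is checked against these rules one by one, with no memory of previous packets, which is exactly the statelessness described above.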
Application-Level Gateway Firewalls
Application-level gateway firewalls work at Layer 7, the application layer, of the OSI
reference model. They inspect and route internet traffic between the user and the
requested web address. Moreover, they also enforce network security and privacy
policies and support regulating internet traffic and usage. Proxy firewalls are the most
common type of application-level gateway firewall. Connections coming from outside
the network are established through the proxy firewall.

Application-Level Gateway Firewalls Advantages:

 Content caching
 Increased network performance
 Easier to log traffic
 Prevents direct connections from outside the network

Application-Level Gateway Firewalls Disadvantages:

 Impact throughput capabilities


 Impact applications

Next-Generation Firewalls
The Next-Generation Firewall (NGFW) is a deep-packet inspection firewall that expands
beyond port/protocol inspection and blocking to include application-level inspection (up
to Layer 7 of the OSI), intrusion prevention, and intelligence from outside the firewall.

An NGFW is a more advanced version of the traditional firewall that provides the same
benefits. Like traditional firewalls, NGFWs employ both static and dynamic packet
filtering and VPN support to secure all connections between the network, the internet,
and the firewall. Both types of firewalls should also be able to do NAT and PAT.

There are also significant differences between traditional and next-generation firewalls.
The ability of an NGFW to filter packets based on applications is the most apparent
distinction between the two. These firewalls have a high level of control and visibility
over the applications they can identify through analysis and signature matching. They
can use whitelists or a signature-based intrusion prevention system to distinguish
between safe and malicious applications, and they can use SSL decryption to identify
applications inside encrypted traffic. Unlike most traditional firewalls, NGFWs also
include a path for receiving future updates.

Next-Generation Firewalls Advantages:

 More secure
 Supports application-level inspection up to Layer 7 of the OSI model
 Capable of user authentication
 Detailed logging

Next-Generation Firewalls Disadvantages:

 Take a lot more system resources


 Can be more expensive than some firewall options
 Requires more fine-tuning to limit false positives and false negatives

Firewalls, IDS, and IPS Explanation and Comparison
This article will discuss the differences between network security devices – firewalls,
Intrusion Prevention Systems (IPS), and Intrusion Detection Systems (IDS). The major
distinction is that a firewall blocks and filters network traffic, but an IDS/IPS detects and
alerts an administrator or prevents the attack, depending on the setup.

A firewall permits traffic depending on a set of rules that have been set up, based on
source and destination addresses and port numbers, and it can deny any traffic that
does not satisfy the specified criteria. An IDS is a passive monitoring device that
watches network traffic as it travels over the network, compares it against signature
patterns, and raises an alarm if suspicious activity or a known security threat is
detected. An IPS, on the other hand, is an active device that prevents attacks by
blocking them.

Firewalls
A firewall employs rules to filter incoming and outgoing network traffic, using IP
addresses and port numbers. It can operate in either routed (Layer 3) mode or
transparent mode. The firewall should be the first line of defense, installed inline at the
network's perimeter.

There are also different types of firewalls like proxy firewall, stateful inspection firewall,
unified threat management (UTM) firewall, next-generation firewall (NGFW), threat-
focused NGFW, and a virtual firewall.

Intrusion Prevention System (IPS)


An IPS is a device that inspects, detects, classifies, and proactively prevents harmful
traffic. It examines communications in real time for attack patterns or signatures and
then blocks attacks when they are detected. It is placed and configured in inline mode,
generally operating at Layer 2 right after the firewall. In inline mode, traffic passes into
one of the device's Ethernet ports and out of the other.

An Intrusion Prevention System must work efficiently to avoid degrading network
performance, and it must be fast because exploits can happen at any time. To eliminate
threats while avoiding false positives, the IPS must detect and respond accurately.

Some of the actions of IPS include:

 Alerting network administrators (anomaly-based detection)
 Dropping the malicious traffic
 Denying traffic from the source address
 Resetting the connection
Intrusion Detection System (IDS)
An IDS is either a hardware device or a software program that analyzes incoming
network traffic for malicious activity or policy violations (network behavior analysis) and
issues alerts when they are detected. It inspects traffic in real time, searches for attack
signatures or suspicious traffic patterns, and then sends out alarms. Unlike an IPS, a
network Intrusion Detection System is not inline with the data path, so it can only alert
and alarm upon detection of anomalies.

Cisco Cryptography: Symmetric vs Asymmetric Encryption
Before tackling symmetric vs. asymmetric encryption and its key differences, let us first
see what cryptography is. Cryptography transforms readable messages into an
unintelligible (impossible to understand) form and later reverses the process. It can
send sensitive data securely over an untrusted network that uses authentication and
encryption methods such as Symmetric and Asymmetric encryption.

Cryptography Services
Cryptography provides the following services to the data:

 Authenticity (Proof of source)


 Confidentiality (Privacy and secrecy)
 Integrity (the data has not changed during transit)
 Non-Repudiation (Non-deniability)
Symmetric Encryption
Symmetric encryption is a type of encryption in which the same secret key both encrypts
and decrypts the data and is known by both parties. Symmetric encryption is faster than
asymmetric encryption, and it is typically used to encrypt large transmissions of data
such as email, secure web traffic, and IPsec. Symmetric encryption algorithms include
DES, 3DES, AES, and SEAL.

Take a look at the symmetric encryption process above. The same key is used to
encrypt the data, and a copy of that same key is used to decrypt it. First, the sender
takes the data and encrypts it into an unreadable format, then forwards it across the
transit network. When the receiver receives the data, it is still encrypted; the receiver
decrypts it using the same key and can then read it. Notice that the data during transit
is jumbled, so that if somebody tries to sniff it, they cannot understand it, because they
would need that one key to decrypt it.

Asymmetric Encryption
Asymmetric encryption uses private and public key pairs. A message encrypted with the
public key can only be decrypted with the corresponding private key, and vice versa.
With asymmetric encryption, only the private key must be kept secret; the public key can
be freely available in the public domain. Asymmetric encryption is slower than
symmetric encryption, and it is used for smaller transmissions such as symmetric key
exchange and digital signatures. Asymmetric encryption algorithms include RSA and
ECDSA.

In the asymmetric encryption process above, you can see that the sender on the left
uses the public key, while the receiver on the right holds the private key. Those are the
keys used for encryption, which makes the data jumbled as it passes across the transit
network. Finally, the data is decrypted with the private key on the receiving end. This
enables anyone to send data securely to the host that holds the private key, and only
the holder of that private key can decrypt the message.

Authenticity and Non-Repudiation


As you can see in the image below, if the host holding the private key sends data that
can be decrypted with the corresponding public key, it is apparent that the message
came from that particular sender, because only that private key could have produced it;
the sender is effectively authenticated. Therefore, there is no point in denying or
repudiating the message.

Hash-Based Message Authentication Codes (HMAC)


HMAC codes provide data integrity and authenticity. The sender computes a hash value
from the data to be sent using a shared symmetric key. If the hash value computed by
the receiver matches the one that was sent, it is an indication that the data has not been
altered in transit. HMAC is usually used for large transmissions such as email, secure
web traffic, and IPsec. HMAC algorithms are based on hash functions such as MD5 and
SHA.

Public Key Infrastructure (PKI)


Cryptography can be used to send sensitive data securely over an untrusted network.
Symmetric key encryption is used for bulk data transmissions. Therefore, each side
needs to know the shared key, which leads to a problem.

For example, when you want to buy something online, you want your credit card details
to be encrypted over the internet. The online store cannot send you the shared key over
the same Internet channel because your connection is not yet encrypted. Anybody that
sniffs your data in real-time will get the shared key as well if it is sent on the same line. It
is not even practical to call a customer every time someone wants to purchase just to
give them the shared key. So, how do we resolve this?

Public Key Infrastructure (PKI) solves this problem. It uses a trusted introducer, a
Certificate Authority (CA), which issues digital certificates to the two parties that need
secure communication. Both parties also need to trust the Certificate Authority.

HTTP and HTTPS explained


HTTP (Hypertext Transfer Protocol)
HTTP is a client-server protocol that allows clients to request web pages from web
servers. It is an application-level protocol widely used on the Internet. Clients are usually
web browsers. When a user wants to access a web page, the browser sends an HTTP
Request message to the web server, and the server responds with the requested web
page. By default, web servers use TCP port 80.

Clients and web servers use a request-response method to communicate with each other,
with clients sending HTTP Requests and servers responding with HTTP
Responses. Clients usually send their requests using the GET or POST methods, for
example GET /homepage.html. The web server responds with a status code (200 if
the request was successful) and sends the requested resource.
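
For illustration, a minimal HTTP/1.1 exchange looks roughly like this on the wire (the host name, resource, and sizes are just examples):

GET /homepage.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1270

<html> ... </html>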

An example will clarify this process:


The client wants to access http://google.com and points the browser to that URL
(the browser then sends an HTTP Request message). The web server hosting
http://google.com receives the request and responds with the content of the web page
(the HTTP Response message).

Web servers usually use the well-known TCP port 80. If the port is not specified in a URL,
browsers will use this port when sending HTTP requests. For example, you will get the
same result when requesting http://google.com and http://google.com:80.
NOTE
The version of HTTP most commonly used today is HTTP/1.1. A newer version, HTTP/2, is available
and supported by most browsers.

HTTPS (Hypertext Transfer Protocol Secure)


Hypertext Transfer Protocol Secure is a secure version of HTTP. This protocol enables
secure communication between a client (e.g. web browser) and a server (e.g. web
server) by using encryption. HTTPS uses Transport Layer Security (TLS) protocol or
its predecessor Secure Sockets Layer (SSL) for encryption.

HTTPS is commonly used to create a secure channel over an insecure network, e.g. the
Internet. A lot of traffic on the Internet is unencrypted and susceptible to sniffing attacks.
HTTPS encrypts sensitive information, which makes the connection secure.

HTTPS URLs begin with https instead of http. In Internet Explorer, you can
immediately recognize that a web site is using HTTPS because a lock appears to the
right of the address bar:
NOTE
HTTPS uses the well-known TCP port 443. If the port is not specified in a URL, browsers will use this
port when sending HTTPS requests. For example, you will get the same result when
requesting https://gmail.com and https://gmail.com:443.

Cisco Cyber Attack Mitigation and Prevention


Cyber threats should not be taken lightly. Cyber attacks can cause blackouts and
military equipment failure and can be a national security concern. A cyber threat can
lead to the loss of valuable and sensitive data, such as medical records, or make
communications unavailable to the public by disrupting the network. That is why it is
essential for companies and organizations to establish cyber attack mitigation and
prevention measures, protecting critical infrastructure from attackers trying to gain
access and steal data.

Sources of Cyber Threats


A cyber attack can come from all directions, whether within the company or from a
remote location on the other side of the planet. Identifying the source of the threat and
having proactive cybersecurity risk mitigation is an important part of preventing a
successful cyber attack from disrupting the network. These malicious threat actors may
include the following:

 Criminal organizations with large numbers of employees develop attack vectors and
execute attacks
 Individuals who create attack vectors
 Nation-states
 Terrorist groups
 Industrial spies
 Organized crime groups
 Insider threats
 Hackers and cyber criminals
 Business competitors

Cyber Defense is a Must


Any organization or business running its networks must establish network access
controls and implement the best cyber defense practice possible, including consistent
cyber risk assessment, installing security systems and security solutions, and cyber risk
mitigation.

For individuals, the best practices are basic and simple. Having anti-virus software
installed on your computer is a good start. Regularly changing your passwords and
using strong alphanumeric passwords goes a long way in cyber defense. Lastly, being
vigilant in identifying phishing attacks can help prevent an individual from becoming the
victim of a cyber attack.

Effective Cyber Defence Tools


Despite the emergence of cybersecurity threats, there are tools that can be used to
prevent or even stop data breaches, unauthorized access, identity theft, and phishing
attacks from happening in our computer network and disrupting digital operations. Many
new services and technologies are coming onto the market, making it easier to mount a
robust defense against cyber threats.

1. Threat Detection Tools – also known as extended detection and response (XDR), these
are an integral part of a company's cybersecurity portfolio and act as a first responder,
detecting any malicious code or malicious software in the network.
2. Vulnerability Testing Tools – well-known cybersecurity companies offer crowdsourced
cybersecurity services composed of professional white hat hackers who can identify your
company's security vulnerabilities and inform your security team.
3. Outsourced Security Providers – various cybersecurity companies have emerged that
offer services to protect your network from advanced persistent threats.
4. Device Management Point Solutions – pinpoint the weakest parts of your network
across the whole organization, with services ranging from device tracking software to
remote wipe and disk encryption.
Telnet & SSH Explained

Telnet
Telnet is a network protocol that allows a user to communicate with a remote device. It
is a virtual terminal protocol used mostly by network administrators to remotely access
and manage devices. An administrator can access the device by telnetting to the IP
address or hostname of a remote device.

To use telnet, you must have software (a Telnet client) installed. On a remote device, a
Telnet server must be installed and running. Telnet uses the TCP port 23 by default.

One of the greatest disadvantages of this protocol is that all data, including usernames
and passwords, is sent in clear text, which is a potential security risk. This is the main
reason why Telnet is rarely used today and is being replaced by a much more secure
protocol called SSH. Here you can find information about setting up Telnet access on
your Cisco device.

NOTE
The word telnet can also refer to the software that implements the telnet protocol.

On Windows, you can start a Telnet session by typing the telnet IP_ADDRESS or
HOSTNAME command:

SSH (Secure Shell)
SSH is a network protocol used to remotely access and manage a device. The key
difference between Telnet and SSH is that SSH uses encryption, which means that all
data transmitted over a network is secure from eavesdropping. SSH uses public key
cryptography for this purpose.

Like Telnet, a user accessing a remote device must have an SSH client installed. On a
remote device, an SSH server must be installed and running. SSH uses the TCP port
22 by default.

Here is an example of creating an SSH session using PuTTY, a free SSH client:

NOTE
SSH is the most common way to remotely access and manage a Cisco device. Here you can
find information about setting up SSH access on your Cisco device.

Setting up Telnet
To access a Cisco device using telnet, you first need to enable remote login. Cisco
devices usually support 16 concurrent virtual terminal sessions, so the first command
usually looks like this:
HOSTNAME(config)#line vty 0 15

To enable remote login, the login command is used from the line configuration
mode:
HOSTNAME(config-line)#login

Next, you need to define a password. This is done using the password command from
the line configuration mode:
HOSTNAME(config-line)#password PASSWORD

Let’s try this on a real router. First, we will try to access the router without enabling
telnet on a device:
R1#telnet 10.0.0.1

Trying 10.0.0.1 ...Open

Password required, but none set

[Connection to 10.0.0.1 closed by foreign host]

As you can see above, we cannot access a Cisco device using telnet before setting up
the password. Let’s do that:
R1(config)#line vty 0 15

R1(config-line)#password cisco

R1(config-line)#login

Now, let’s try to access our device:

R1#telnet 10.0.0.1

Trying 10.0.0.1 ...Open

User Access Verification

Password:

This time, because telnet was configured on the device, we have successfully telnetted
to the device.
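Because Telnet is nothing more than clear text over TCP port 23, you can even watch the router’s login prompt arrive with a raw TCP socket. Below is a minimal Python sketch for a lab check; the address 10.0.0.1 matches the example above, and the script is purely illustrative (it is not a Cisco tool and does not handle full telnet option negotiation):

import socket

ROUTER = "10.0.0.1"   # the router we configured for telnet above

# Open a plain TCP connection to the telnet port (23) and read what the router sends.
# Because telnet is a clear-text protocol, the "User Access Verification" and
# "Password:" prompts arrive as ordinary readable bytes on the wire.
with socket.create_connection((ROUTER, 23), timeout=5) as conn:
    conn.settimeout(5)
    received = b""
    try:
        while b"Password:" not in received:
            chunk = conn.recv(1024)
            if not chunk:          # the router closed the connection
                break
            received += chunk
    except socket.timeout:
        pass

# Telnet option-negotiation bytes may be mixed in, so decode permissively.
print(received.decode(errors="replace"))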

Setting up SSH Secure Shell


To enable secure access to your Cisco device, you can use SSH instead of Telnet. SSH
uses encryption to secure data from eavesdropping.

To enable SSH, the following steps are required:

1. set up a hostname and a domain name.


2. configure local username and password.
3. generate RSA public and private keys.
4. allow only SSH access.

The following example shows the configuration of the first three steps:
Router(config)#hostname R1

R1(config)#ip domain-name cisco

R1(config)#username study password ccna

R1(config)#crypto key generate rsa

The name for the keys will be: R1.cisco

Choose the size of the key modulus in the range of 360 to 2048 for your
General Purpose Keys. Choosing a key modulus greater than 512 may take
a few minutes.

How many bits in the modulus [512]:

% Generating 512 bit RSA keys, keys will be non-exportable...[OK]

R1(config)#

*Jun 8 16:46:45.407: %SSH-5-ENABLED: SSH 1.99 has been enabled

R1(config)#

First, we have defined the device hostname by using the hostname R1 command. Next,
we have defined the domain name by using the ip domain-name cisco command. After
that, the local user is created by using the username study password ccna command.
Next, we need to enable only the SSH access to a device. This is done by using
the transport input ssh command:

R1(config)#line vty 0 15
R1(config-line)#login local

R1(config-line)#transport input ssh

R1(config-line)#

If we use the transport input ssh command, the telnet access to the device is
automatically disabled.
NOTE
You should use the more recent version of the protocol, SSH version 2. This is done by using the ip
ssh version 2 global configuration command.
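
Once SSH is enabled this way, any SSH client can connect: Putty, OpenSSH, or a script. As an illustration only, here is a minimal sketch using the third-party Python library paramiko; the address 10.0.0.1 and the study/ccna credentials are simply the values from the example configuration above, and depending on the IOS version and key size you may need to allow older key-exchange algorithms on the client side:

import paramiko  # third-party SSH library

client = paramiko.SSHClient()
# Automatically accept the router's host key - acceptable in a lab, not in production.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.0.0.1", username="study", password="ccna",
               look_for_keys=False, allow_agent=False)

# Run a command over the encrypted channel and print its output.
stdin, stdout, stderr = client.exec_command("show ip interface brief")
print(stdout.read().decode())

client.close()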

Cisco Console Port Security


Every Cisco router or switch has a single console port that is used to connect it to a
computer directly for configuration and management. A console cable or a rollover cable
is used to connect to the router or switch console port and is typically used during initial
configuration as there is no network connection and remote access, such as Telnet,
SSH, or HTTPS, configured on the device yet.

Terminal emulation software, such as Putty, is also needed to connect to the device. It
is used so that we can use a PC or a laptop as a display device for the router or switch
and have access to the device’s Command Line Interface (CLI). The software is
essential because we need it to configure devices initially, to install routers and
switches onto the network, and to enable remote management. You
should connect your device via the COM port on your computer. You can check your
COM port via the device manager.
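Putty, SecureCRT, or minicom all work as terminal emulators. As a rough sketch of what they do under the hood, the third-party Python library pyserial can open the same COM port; the port name and the 9600 8N1 settings below are the usual defaults for a Cisco console line but should be verified for your device:

import serial  # third-party package: pyserial

# Typical Cisco console settings: 9600 baud, 8 data bits, no parity, 1 stop bit.
# Replace "COM3" with the port shown in Device Manager (or /dev/ttyUSB0 on Linux).
console = serial.Serial(port="COM3",
                        baudrate=9600,
                        bytesize=serial.EIGHTBITS,
                        parity=serial.PARITY_NONE,
                        stopbits=serial.STOPBITS_ONE,
                        timeout=2)

console.write(b"\r\n")                               # like pressing Enter to wake the line
print(console.read(200).decode(errors="replace"))    # should show the device prompt
console.close()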
Console Cable
The rollover cable is an RJ45 on one end and a DB9 on the other. It is the most popular
console cable for older devices. You can still use it, but you’ll need an adapter if you
don’t have a serial port on your device. Usually, you’ll use a DB9 to USB adapter to
connect to your PC or laptop. Nowadays, there’s a direct RJ45 to USB console cable
available in the market. Newer Cisco devices, usually the smaller and portable ones,
have mini USB console ports. The console cable for it has a mini USB for the console
port and a USB on the other end.

Console Port Configuration


With the console port, administrators can access the terminal lines of the Cisco devices’
IOS. However, this can be a potential threat in our networks because, by default, anyone
with physical access can use the console freely, or every user ends up sharing the same
password stored locally on the device – there is no per-user authentication.

A router or switch has one console port only. The console port has a line number of 0,
thus ‘line console 0’. To secure the console port connections to our networking device,
we can set a password by issuing the following commands below. In this way, we can
secure our console port by requiring a password upon logging in.
Router(config)#line console 0

Router(config-line)#password StudyCCNA

Router(config-line)#login

NOTE
The management port is used for remote access only. The console port is used for local and
physical access.

The exec-timeout Command


By default, an IOS device will disconnect a console or VTY user after 10 minutes of
inactivity. You can specify a different inactivity timer using the exec-timeout MINUTES
SECONDS line mode command.

For example, to disconnect a console user after 90 seconds of inactivity, we can use the
following command:

R1(config)#line con 0

R1(config-line)#exec-timeout 1 30
After 90 seconds of inactivity, the session will be disconnected and the user will need to
supply the console password to log back in:

R1(config-line)#

R1 con0 is now available

Press RETURN to get started.

User Access Verification

Password:

NOTE
To disable the timeout, use the value of 0 (not recommended in production environments!)

Encrypt local usernames and passwords


We’ve learned it is possible to configure local usernames and passwords on a Cisco
device and then use them to login to the device. To do this, we’ve used the username
USER password PASSWORD command, like in the example below:
R1(config)#username tuna password peyo

However, there is one problem with this command – the password is stored in clear text
in the configuration:
R1#show running-config

Building configuration...

Current configuration : 635 bytes

version 15.1

....

!
username tuna password 0 peyo

...

We can use the service password-encryption global configuration command to encrypt
the password, but this method does not provide a high level of network security and the
passwords can be cracked.

To rectify this, Cisco introduced a new command – username USER secret PASSWORD.
This command stores an MD5 hash of the password instead of the password itself, which
provides much stronger protection:

R1(config)#username tuna secret peyo

R1(config)#

R1(config)#do show run | include username

username tuna secret 5 $1$mERr$Ux7QsUATkj4kWVORI4.m21

Note that (unlike with the enable password and enable secret commands) you can’t
have both the username password and username secret commands configured at the
same time:

R1(config)#username tuna password peyo

ERROR: Can not have both a user password and a user secret.

Please choose one or the other.
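
The string stored after ‘secret 5’ is not reversible encryption but a salted MD5-crypt hash, the same $1$salt$hash format used on many Unix systems. If you want to convince yourself of that off the router, the sketch below uses the third-party passlib library; the salt and password are made-up placeholder values, not taken from a real device:

from passlib.hash import md5_crypt  # third-party package: passlib

# A Cisco "secret 5" string has the form $1$<salt>$<hash> - standard MD5-crypt.
stored = md5_crypt.using(salt="mERr").hash("peyo")   # hypothetical salt and password
print(stored)                                        # prints something like $1$mERr$...

# Verifying candidate passwords against the stored hash:
print(md5_crypt.verify("peyo", stored))    # True
print(md5_crypt.verify("wrong", stored))   # False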

Cisco Privilege Levels – Explanation and Configuration
It is important to secure your Cisco devices by configuring and implementing username
and password protection and by assigning different Cisco privilege levels to control and
restrict access to the CLI, thus protecting the devices from unauthorized access. In
this article, we will discuss how to configure user accounts and how to associate them
with the different Cisco privilege levels. Then, we’ll take a deep dive into their purposes
and functions, as well as their importance in network security design.

Privilege Level Security


Cisco IOS devices use privilege levels for more granular security and Role-Based
Access Control (RBAC) in addition to usernames and passwords. There are 16 privilege
levels of admin access, 0–15, on a Cisco router or switch that you can configure to
provide customized access control, with 0 being the least privileged and 15 being the
most privileged. These are the three privilege levels that Cisco IOS uses by default:

 Level 0 – Zero-level access only allows five commands: logout, enable, disable, help,
and exit.
 Level 1 – User-level access allows you to enter in User Exec mode that provides very
limited read-only access to the router.
 Level 15 – Privilege level access allows you to enter in Privileged Exec mode and
provides complete control over the router.

NOTE
By default, line-level security has a privilege level of 1 (con, aux, and vty lines).

Cisco Privilege Level Configuration


To assign the specific privilege levels, we include the privilege number when indicating
the username and password of the user.

Router(config)#username admin1 privilege 0 secret Study-CCNA1

Router(config)#username admin2 privilege 15 secret Study-CCNA2

Router(config)#username admin3 secret Study-CCNA3

In this example, we assign user admin1 a privilege level of 0. Then, we assign user
admin2 to privilege level 15, which is the highest level. For admin3, we did not specify
any privilege level, but it will have a privilege level of 1 by default.

Let’s try to verify the output of our configuration by logging in to each user. Enter the
username and the corresponding password, starting with admin1.
User Access Verification

Username: admin1

Password:

Router>?

Exec commands:

disable Turn off privileged commands


enable Turn on privileged commands

exit Exit from the EXEC

help Description of the interactive help system

logout Exit from the EXEC

Router>

Notice in the output above that the user admin1 is under User Exec mode and has only
five commands- logout, enable, disable, help, and exit. Now, let’s log in as admin2.
User Access Verification

Username: admin2

Password:

Router#show privilege

current privilege level is 15

Router#

The output above shows that user admin2 is currently in level 15, and we verified that
by typing the ‘show privilege’ command on the CLI. Notice also that we are in
Privileged Exec mode. Lastly, let’s log in as admin3.
User Access Verification

Username: admin3

Password:

Router>show privilege

current privilege level is 1


Router>

When we logged in as admin3, we verified that it was in level 1 by typing the ‘show
privilege’ command on the CLI. Notice that we are in User Exec mode.

Privilege Levels 2-14


You can increase the security of your network by configuring additional privilege levels from 2
to 14 and associating them with usernames to provide customized access control. This is
suitable when you are designing role-based access control for different users and
allowing only certain commands for them to execute, which restricts them from
unnecessary commands and adds another layer of security on the device.

Let’s now assign privilege level 5 to a user. After that, we will move the ‘show running-
config’ command down to privilege level 5 so that level 5 users are allowed to run it.

Router(config)#username admin4 privilege 5 secret Study-CCNA4

Router(config)#privilege exec level 5 show running-config

All level 5 users will now be placed at privilege level 5 when they log in and can use the
commands assigned to that level, such as ‘show running-config’, on the CLI. Let’s log in
as user admin4 to verify that.
User Access Verification

Username: admin4

Password:

Router#show running-config

Building configuration...

Current configuration : 57 bytes

boot-start-marker

boot-end-marker
!

end

Router#

Enable Secret Command Privilege


We can also assign passwords to different privilege levels. Here, we will use
the ‘enable secret’ command to set the password required to enter a specific privilege
level. Use the ‘enable secret level {level} {password}’ syntax as shown below. The
command sets the enable secret password for privilege level 5.

Router(config)#enable secret level 5 Study-CCNA5

We can verify our configuration as shown below:


User Access Verification

Username: admin5

Password:

Router>show running-config

% Invalid input detected at ‘^’ marker.

Router>enable 5

Password:

Router#show privilege

Current privilege level is 5


Router#show running-config

Building configuration...

Current configuration : 57 bytes

boot-start-marker

boot-end-marker

end

Router#

In our first attempt, notice in the example above that we do not have access to
the ‘show running-config’ command because we are still at the default low privilege
level. However, we can move to privilege level 5 with the ‘enable
{privilege level}’ command (here, enable 5) and supply the level 5 enable secret. From
there, we can access the ‘show running-config’ command.

What is AAA Security? Authentication, Authorization & Accounting

AAA (Authentication, Authorization, and Accounting) is a framework that manages user
activity on the network the user wants to access, using authentication, authorization, and
accounting mechanisms. AAA provides effective identity and access management that enhances
network security by ensuring that only those granted access are allowed in and that their
activities while on the network are monitored and logged.

AAA challenges and handles user requests for network access by
asking users for authorized and authenticated credentials to prove that they
are legitimate before they gain access to the network. AAA is widely used in
network devices such as routers, switches, and firewalls, to name a few, to control
and monitor access within the network.
AAA Server
AAA addresses the limitations of local security configuration and the scalability issues
that come with it. For example, if you need to change or add a password, it has to be
done locally and on all devices, which will require a lot of time and resources.

An external AAA server solves these issues by centralizing such tasks within the
network. Having backup AAA servers in the network ensures redundancy and security
throughout the network.

Authentication
The AAA server receives a user authentication request. It challenges the user’s
credentials by asking for the username and password, for example, which is encrypted
using a hashing algorithm. The AAA server compares the user’s authentication
credentials with the user credentials stored in the database.

Authorization
Once the user’s credentials are authenticated, the authorization process determines
what that specific user is allowed to do and access within the premise of the network.
Users are categorized to know what type of operations they can perform, such as an
Administrator or Guest. The user profiles are configured and controlled from the AAA
server. This centralized approach eliminates the hassle of editing on a “per box” basis.
Accounting
The last process done in the AAA framework is accounting for everything the user is
doing within the network. AAA servers monitor the resources being used during the
network access. Accounting also logs the session statistics and auditing usage
information, usually for authorization control, billing invoice, resource utilization, trend
analysis, and planning the data capacity of the business operations.

AAA Protocols
There are two protocols most commonly used to implement AAA, Authentication,
Authorization, and Accounting, in the network: RADIUS, an open standard supported
by many vendors, and TACACS+, which was developed by Cisco.

Remote Authentication Dial-In User Service (RADIUS) – a protocol that operates on
UDP ports 1645 and 1812 and provides centralized AAA management for users who
connect to and use a Network Access Server (NAS), such as a VPN concentrator, router,
or switch. This client/server protocol and software enables remote access servers to
communicate with a central server to perform AAA operations for remote users. RADIUS
operates at the application layer and uses UDP as its transport protocol.
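
One RADIUS detail worth knowing is that the user’s password is never sent in the clear: per RFC 2865, the client hides it by XOR-ing the password with an MD5 digest of the shared secret and the request authenticator. The short Python sketch below illustrates that calculation for passwords of 16 characters or less; the secret, password, and authenticator values are made up for the example:

import hashlib
import os

def hide_radius_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 User-Password hiding for passwords up to 16 bytes long."""
    padded = password.ljust(16, b"\x00")               # pad the password to a 16-byte block
    b1 = hashlib.md5(secret + authenticator).digest()  # MD5(shared secret + request authenticator)
    return bytes(p ^ b for p, b in zip(padded, b1))    # XOR gives the hidden User-Password value

secret = b"STUDY_CCNA1"          # shared secret configured on the router and the server
authenticator = os.urandom(16)   # 16 random octets chosen by the RADIUS client per request
hidden = hide_radius_password(b"ccna", secret, authenticator)
print(hidden.hex())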

Terminal Access Controller Access-Control System Plus (TACACS+) – is a remote
authentication protocol that allows a remote access server to communicate with an
authentication server to validate user access to the network. TACACS+ permits a client
to accept a username and password and pass a query to a TACACS+ authentication
server.

Configuring AAA on Cisco Devices – RADIUS and TACACS+
Usually, a Cisco IOS device implements authentication based on a line password and
authorization based on a level 15 enable password. This is a problem for any
organization that desires granularity or the ability to track activities back to one of the
multiple users that use the network resources. The solution to this is AAA, which allows
an administrator to configure granular user access and auditability on an IOS device.
We must first use the ‘aaa new-model’ command to enable this more advanced and
granular control in IOS.
Below is the latest configuration guide for a Cisco router or switch using Remote
Authentication Dial-In User Service (RADIUS) and Terminal Access Controller Access-
Control System (TACACS+) in implementing AAA in network devices to allow network
access to trusted users.

RADIUS Configuration
RADIUS is an access server AAA protocol. To configure it, first, we need to define the
IP address of the RADIUS server in our Cisco router.

R1(config)#radius-server host 192.168.1.10

Configure the aaa new-model command on the device in global configuration mode,
which gives us access to the AAA commands.
R1(config)#aaa new-model

Now let us configure the RADIUS servers that you want to use.

R1(config)#radius server RADIUS_SERVER1

R1(config-radius-server)#address ipv4 192.168.1.10

R1(config-radius-server)#key STUDY_CCNA1

R1(config)#radius server RADIUS_SERVER2

R1(config-radius-server)#address ipv4 192.168.1.11

R1(config-radius-server)#key STUDY_CCNA2

Configure the aaa authentication login command with the group group-name method to specify
a subset of RADIUS servers to use as the login authentication method. To specify and
define the group name and the group members, use the aaa group server command.
For example, use the aaa group server command to first define the members
of STUDY_CCNA.

R1(config)#aaa group server radius STUDY_CCNA

R1(config-sg-radius)#server name RADIUS_SERVER1

R1(config-sg-radius)#server name RADIUS_SERVER2


We have two authentication methods. All users are authenticated using the RADIUS
servers (the first method). If the RADIUS servers don’t respond, then the router’s local
database is used (the second method). For the local authentication process, define a
username and password:
R1(config)#aaa authentication login default group STUDY_CCNA local

R1(config)#username AdminBackup secret STUDYCCNA

TACACS+ Configuration
For AAA Cisco TACACS+ configuration, we need to define first the IP address of the
TACACS+ server.
R1(config)#tacacs-server host 192.168.1.10

Configure a local user in case connectivity to the AAA server is lost.


R1(config)#username AdminBackup secret STUDYCCNA

Enable AAA with the aaa new-model command on the device in global configuration
mode, which gives us access to the AAA commands.

R1(config)#aaa new-model

Now let us configure the TACACS+ servers that you want to use.

R1(config)#tacacs server TACACS_SERVER1

R1(config-server-tacacs)#address ipv4 192.168.1.10

R1(config-server-tacacs)#key STUDY_CCNA1

R1(config)#tacacs server TACACS_SERVER2

R1(config-server-tacacs)#address ipv4 192.168.1.11

R1(config-server-tacacs)#key STUDY_CCNA2
Use the aaa authentication login command to configure login authentication. Indicate
it with the group group-name method to specify a subset of TACACS+ servers to use as
the login authentication method. To specify and define the group name and the
members of the group, use the aaa group server command. For example, use
the aaa group server command to first define the members of STUDY_CCNA.

R1(config)#aaa group server tacacs+ STUDY_CCNA

R1(config-sg-tacacs+)#server name TACACS_SERVER1

R1(config-sg-tacacs+)#server name TACACS_SERVER2

R1(config)#aaa authentication login default group STUDY_CCNA local

Configuring a Cisco Banner: MOTD, Login, & Exec Banners
Cisco banners are customized messages displayed on a terminal when a user is trying
to connect to our Cisco IOS devices via Telnet, SSH, the Console port, or the Auxiliary port.
They are most commonly used to display security warnings and informational
messages. There are different types of banner messages, such as the Message of the Day
(MOTD), Login, and Exec banners. These can be displayed in the CLI before
and/or after the user logs in to a Cisco IOS device. These three are the most common
types of banner that can be configured on Cisco switches and routers.

Banner MOTD
The Message of the Day (MOTD) banner will be displayed before the user authenticates
to our devices. It is typically used to display a temporary notice that may change
regularly, such as system availability.

To create a MOTD banner on a Cisco router, the following banner MOTD command is
used from the router’s global config mode:
Router(config)# banner motd $

Attention!

We will be having scheduled system maintenance on this device.

$
Router(config)#
NOTE
Be careful when choosing your delimiter character for the banner: the banner text must not
contain the delimiter character, or else Cisco IOS will interpret it as the indicator to end
the banner message.

In this example, the MOTD banner spans multiple lines of text, and the delimiting
character, which is also called start/stop character, is the dollar sign ($). Now let’s try to
access our devices to see what the MOTD Banner looks like:

Router con0 is now available

Press RETURN to get started.

Attention!

We will be having scheduled system maintenance on this device.

User Access Verification

Username:

% Username: timeout expired!

Username:

The output above shows the MOTD banner displayed before the user logs in to the router.
Banner Login
The Login banner will also be displayed before the user authenticates to our devices. It
will show up after the MOTD banner. Unlike the MOTD Banner, it is designed to
commonly display legal notices, such as security warnings and more permanent
messages to the users.

To create a Login banner on our device, the following command is used from the
router’s global configuration mode:
Router(config)# banner login ?

Warning!

Authorized personnel only.

?
Router(config)#

In this example, we use a question mark (?) as a delimiting character to indicate the
start and stop of the banner configuration.

Now let’s try to access our Cisco device to see what the Login banner looks like:

Router con0 is now available

Press RETURN to get started.

*Mar 1 00:22:33.231: %SYS-5-CONFIG_I: Configured from console by
cisco on console

Attention!

We will be having scheduled system maintenance on this device.


Warning!

Authorized personnel only.

User Access Verification

Username:

As you can see above, the login banner is shown after the MOTD banner before the
user logs in to the router.

Banner Exec
We use the Exec banner to display messages after the users or network administrators
are authenticated to our Cisco IOS devices and before the user enters User Exec mode.
Unlike MOTD, the Exec banner is designed to be more of a permanent message and
would not change frequently.

To create an Exec banner on a Cisco router, the following Exec banner command is
used from the router’s global configuration mode:

Router(config)# banner exec 8

Please log out immediately if you are not an authorized
administrator

8
Router(config)#

In this example, we use the number eight (8) as a delimiting character to indicate the
start and stop of the banner configuration, just to show that any character can be used.

Now let’s try to access our Cisco devices to see what the Exec banner looks like:

Router con0 is now available


Press RETURN to get started.

Attention!

We will be having scheduled system maintenance on this device.

Warning!

Authorized personnel only.

User Access Verification

Username: cisco

Password:

Please log out immediately if you are not an authorized
administrator

Router>

The output above confirms that the MOTD, Login, and Exec banners are all displayed
respectively.

Configure timezone and Daylight Saving Time (DST)
It is recommended to set the correct timezone and adjust the DST setting before
configuring a router as an NTP client. The syntax of the command used to set the
timezone is:
(config)#clock timezone NAME HOURS [MINUTES]

The name of the timezone can be anything you like. After the name parameter, you
need to specify the difference in hours (and optionally minutes) from Coordinated
Universal Time (UTC). For example, to specify the Atlantic Standard Time, which is 4
hours behind UTC, we would use the following command:
R1(config)#clock timezone AST -4

The syntax of the command used to adjust for DST is:

(config)#clock summer-time NAME recurring [week day month hh:mm week day month hh:mm [offset]]

Again, the name parameter can be anything you like. The recurring keyword instructs
the router to update the clock each year. If you press enter after the recurring keyword,
the router will use the U.S. DST rules for the annual time changes (currently the second
Sunday in March and the first Sunday in November).
You can also manually set the date and time for DST according to your location. For
example, to specify the DST that starts on the last Sunday of March and ends on the
last Sunday of October, we would use the following command:

R1(config)#clock summer-time DST recurring last Sunday March 2:00 last Sunday October 2:00
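
The same timezone and DST rules we type in by hand here are maintained automatically by most operating systems through the IANA timezone database. As a quick cross-check of an offset, the standard-library Python sketch below (Python 3.9+ zoneinfo; the zone America/Halifax is just one example of a zone on Atlantic Time) prints the UTC offset in winter and in summer:

from datetime import datetime
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# America/Halifax observes Atlantic Standard Time (UTC-4) and moves to
# Atlantic Daylight Time (UTC-3) while DST is in effect.
tz = ZoneInfo("America/Halifax")

winter = datetime(2024, 1, 15, 12, 0, tzinfo=tz)
summer = datetime(2024, 7, 15, 12, 0, tzinfo=tz)

print("January offset:", winter.utcoffset())   # -1 day, 20:00:00  (i.e. UTC-4)
print("July offset:   ", summer.utcoffset())   # -1 day, 21:00:00  (i.e. UTC-3)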

What is NTP? Network Time Protocol Overview


Network Time Protocol (NTP) is an application layer protocol used for time
synchronization between hosts on a TCP/IP network. Its goal is to ensure that all
computers on a network agree on the system clock since even a small difference can
create problems. For example, if there is more than 5 minutes difference between your
host’s local clock and the Active Directory domain controller, you cannot log into your
AD domain.

NTP uses a hierarchical system of time sources. At the top of the structure are highly
accurate time sources – typically a GPS or atomic clock, providing Coordinated
Universal Time (UTC). Such a reference clock is known as a stratum 0 device. Stratum 1
servers are directly linked to the reference clocks, and they deliver the time to stratum 2
servers, which in turn serve stratum 3 servers, and so on.
NTP Server
NTP uses a client-server architecture; one host is configured as the NTP server and all
other hosts on the network are configured as NTP clients. Consider the following
example:
Host A is our NTP client and it is configured to use a public NTP
server uk.pool.ntp.org. Host A will periodically send an NTP request to the NTP server.
The server will provide the accurate date and time, enabling system clock synchronization
on Host A.

NOTE

NTP servers listen for NTP packets on User Datagram Protocol (UDP) port 123. The
current version is NTPv4, and it is backward compatible with NTPv3.

NTP servers can either be local or public. Public NTP servers are often free to use and
are provided by third-party operators such as Google and Facebook. A local or internal
NTP server is owned by the company itself and is deployed within the network.

Simple Network Time Protocol (SNTP)


Simple Network Time Protocol (SNTP) is a simpler NTP version commonly used by
small networks. SNTP is a client-only architecture, so it can only receive information
from the NTP time servers and cannot provide time for other devices.
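
Because an SNTP client only has to send one request and read the reply, the whole exchange fits in a few lines. The Python sketch below sends a minimal SNTP (version 3, client mode) request over UDP port 123 and decodes the server’s transmit timestamp; pool.ntp.org is a public server used purely as an example:

import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"     # any reachable (S)NTP server
NTP_UNIX_DELTA = 2208988800     # seconds between the NTP epoch (1900) and the Unix epoch (1970)

# 48-byte request; the first byte 0x1b means LI=0, version 3, mode 3 (client).
request = b"\x1b" + 47 * b"\x00"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request, (NTP_SERVER, 123))
    reply, _ = sock.recvfrom(48)

# The transmit timestamp (seconds since 1900) is in bytes 40-43 of the reply.
server_seconds = struct.unpack("!I", reply[40:44])[0]
server_unix_time = server_seconds - NTP_UNIX_DELTA
print("Server time:", time.ctime(server_unix_time))
print("Rough local offset:", server_unix_time - time.time(), "seconds")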

Configure NTP on a Cisco router


NTP (Network Time Protocol) is an application layer protocol used for time
synchronization between hosts on a TCP/IP network. The goal of NTP is to ensure that
all devices on a network agree on the time, since even a small difference can cause
problems. NTP uses a client-server architecture: usually one host is configured
as the NTP server, and the other hosts on the network are configured as NTP clients.

Cisco routers can be configured as both NTP clients and NTP servers. To configure a
Cisco router as an NTP client, we can use the ntp server IP_ADDRESS command:
Floor1(config)#ntp server 192.168.0.100

NOTE
To define a version of NTP, add the version NUMBER keywords at the end of the command (e.g. ntp
server 192.168.0.100 version 3).

To verify NTP status, use the show ntp status command:


Floor1#show ntp status

Clock is synchronized, stratum 2, reference is 192.168.0.100

nominal freq is 250.0000 Hz, actual freq is 249.9990 Hz, precision is 2**19

reference time is DE4AB2B7.0000037A (18:49:27.890 UTC Thu Apr 5 2018)

clock offset is 0.00 msec, root delay is 0.00 msec

root dispersion is 0.02 msec, peer dispersion is 0.02 msec.

To configure your Cisco router as an NTP server, only a single command is needed:
DEVICE(config)#ntp master

After entering this command, you will need to point all the devices in your LAN to use the
router as their NTP server.
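
Once the router is answering as an NTP server, any NTP client should be able to synchronize against it. As a quick check from a host, the third-party Python library ntplib can query the router directly; the address 192.168.0.1 below is only a placeholder for your router’s IP:

import ntplib  # third-party package: ntplib
from time import ctime

client = ntplib.NTPClient()
# Query the router we configured with "ntp master" (placeholder address).
response = client.request("192.168.0.1", version=3)

print("Server time:", ctime(response.tx_time))
print("Stratum    :", response.stratum)
print("Offset     :", response.offset, "seconds")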

Syslog explained
Syslog is a standard for message logging. Syslog messages are generated on Cisco
devices whenever an event takes place – for example, when an interface goes down or
a port security violation occurs.

You’ve probably already encountered syslog messages when you were connected to a
Cisco device through the console – Cisco devices show syslog messages by default to
the console users:
R1#

%LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/0, changed state to down

This is because the logging console global configuration command is enabled by
default. SSH and Telnet users need to execute the terminal monitor EXEC mode
command in order to see the messages:
R1#terminal monitor

R1#

%LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/1, changed state to up

In the example above you can see that the logged in user executed the terminal
monitor command. Because of that, the telnet user was notified via a syslog message
when the Gi0/1 interface went up a couple of moments later.

It is recommended to store the logs generated by Cisco devices on a central syslog
server. To instruct a device to send logs to the syslog server, we can use the logging
IP_ADDRESS command:

R1(config)#logging 10.0.0.10
Now, logs generated on R1 will be sent to the syslog server with the IP address of
10.0.0.10. Of course, you need to have a Syslog server (e.g. Kiwi syslog) installed and
configured.
NOTE
It is also possible (and recommended) to specify a hostname instead of the IP address in
the logging command. The command is logging host HOSTNAME.

Syslog message format


Syslog messages that appear on a Cisco device consist of several parts. Consider the
following message:
*Jan 18 03:02:42: %LINEPROTO-5-UPDOWN: Line protocol on Interface
GigabitEthernet0/0, changed state to down

The message consists of the following parts:

 Jan 18 03:02:42 – the timestamp
 %LINEPROTO – the source that generated the message. It can be a hardware
device (e.g. a router), a protocol, or a module of the system software.
 5 – the severity level, from 0 to 7, with lower numbers being more critical.
 UPDOWN – the unique mnemonic for the message
 Line protocol on Interface GigabitEthernet0/0, changed state to down – the
description of the event

Severity levels are numbered 0 to 7:

 0 – emergency (System unusable)
 1 – alert (Immediate action needed)
 2 – critical events (Critical condition)
 3 – error events (Error condition)
 4 – warning events (Warning condition)
 5 – notification events (Normal but significant condition)
 6 – informational events (Informational message only)
 7 – debug messages (Appears during debugging only)

In our example the message has the severity level of 5, which is a notification event.
The first five levels (0-4) are used by messages that indicate that the functionality of the
device is affected. Levels 5 and 6 are used by notification messages, while the level 7 is
reserved for debug messages.

The severity levels can be used to specify the type of messages that will be logged. For
example, if you think that you are getting too many non-important messages when
logged in through a console, the global configuration command logging console 2 will
instruct the device to only log messages of the severity level 0, 1 and 2 to the console.
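
Because the message format is so regular, Syslog output from IOS devices is also easy to pick apart programmatically. The Python sketch below uses a simple regular expression that mirrors the parts described above (source, severity, mnemonic, description) and then applies the same severity-filtering idea as logging console 2; the pattern is illustrative and only covers this basic %SOURCE-SEVERITY-MNEMONIC form:

import re

# %SOURCE-SEVERITY-MNEMONIC: description
PATTERN = re.compile(r"%(?P<source>\w+)-(?P<severity>\d)-(?P<mnemonic>\w+): (?P<description>.*)")

message = ("*Jan 18 03:02:42: %LINEPROTO-5-UPDOWN: "
           "Line protocol on Interface GigabitEthernet0/0, changed state to down")

match = PATTERN.search(message)
if match:
    severity = int(match.group("severity"))
    print("Source     :", match.group("source"))        # LINEPROTO
    print("Severity   :", severity)                      # 5 (notification)
    print("Mnemonic   :", match.group("mnemonic"))       # UPDOWN
    print("Description:", match.group("description"))
    # Mimic "logging console 2": only treat severities 0-2 as console-worthy.
    if severity <= 2:
        print("-> severity 0-2, would still be logged to the console")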
Cisco IOS Syslog Logging Locations
A Syslog message is a system-generated message produced by routers and switches
used to inform network administrators about useful information regarding the health and
state of the device, along with network events and incidents that occurred at that point
in time. Syslog logging is critical to our network system because it provides easier
troubleshooting and enhances security by providing visibility into the infrastructure
devices and equipment logs. We will discuss the different Cisco logging locations and
how to configure Syslog logging on these locations below.

NOTE
By default, Cisco IOS devices send Syslog messages to the console line and logging buffer.
However, you can also configure it to send Syslog messages into the terminal lines, SNMP traps,
and Syslog servers.

Syslog messages can be logged to various locations. These are the four ways and
locations where we can store and display messages on our Cisco devices:

 Logging Buffer – events saved in the RAM of a router or switch. The buffer
has a fixed size to ensure that the log messages will not use up valuable system
memory. It is enabled by default.
 Console Line – events will be displayed in the CLI when you log in over a console
connection. It is enabled by default.
 Terminal Lines – log messages will be shown in the CLI when you log in over a Telnet or
SSH session. It is disabled by default.
 Syslog Server – log messages are saved in the Syslog server.

Syslog Logging Configuration


Now, we will see the different Syslog configurations you can do in your network devices
depending on the location, preference, and needs.

Logging Buffer Configuration


The first one we will configure is the logging buffer using the ‘logging
buffered’ configuration command. We will also set its buffer size and the logging
severity levels using the following configuration commands:
R1(config)#logging buffered

R1(config)#logging buffered 100000

R1(config)#logging buffered debugging


NOTE
Cisco IOS devices have a default logging buffer size of 4096 bytes, but you can modify it depending on
your preference.

Console Line Configuration


The console line Syslog configuration is enabled by default. However, if you wish to
disable logging to the console line, use the ‘no logging console’ command:

R1(config)#no logging console

NOTE
When you configure logging synchronous on a line (for example, line console 0), log messages sent to
that line will not interrupt the command you are typing; instead, the command you were typing is
reprinted on a new line.

Terminal Line Configuration


You can also have Syslog messages displayed on the VTY terminal lines (Telnet or SSH
sessions) by executing the ‘terminal monitor’ command in EXEC mode:

R1#terminal monitor

NOTE
The reason terminal-line logging is disabled by default in Cisco IOS is that massive amounts of log
messages could congest your VTY terminal sessions if logging to the lines were left enabled or
improperly configured.

Syslog Server Configuration


Adding an external Syslog server to our network is important because it provides
centralized storage and management. It ensures that all network event
messages and incidents are recorded and logged on a server. Using a remote
Syslog server makes handling logs a lot easier because messages can be stored on a
hard drive on the Syslog server instead of on the router itself, thus freeing up the
router’s memory. By default, these messages are sent to the logging host through UDP
port 514.
To enable this, first, we configure the IP address of the Syslog server to be used by
entering the ‘logging’ command. We then specify the Syslog server logging level or the
type of message we want to send.
R1(config)#logging 10.0.0.100

R1(config)#logging trap debugging

Next, we specify the local timestamp for the Syslog messages sent to the Syslog server
because it is not included by default.
R1(config)#service timestamps log datetime msec
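
On the server side, a Syslog server at its core is just a process listening on UDP port 514 that writes whatever datagrams arrive to disk; dedicated products such as Kiwi, rsyslog, or syslog-ng add filtering, rotation, and search on top. A minimal receiver can be sketched in a few lines of Python (binding to port 514 normally requires administrator/root privileges, and the filename below is just an example):

import socket

# Listen for Syslog datagrams on UDP 514, the default port used by IOS devices.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 514))
print("Syslog receiver listening on UDP 514...")

with open("router-logs.txt", "a") as logfile:
    while True:
        data, (sender_ip, _) = sock.recvfrom(4096)
        line = f"{sender_ip} {data.decode(errors='replace').strip()}"
        print(line)
        logfile.write(line + "\n")
        logfile.flush()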

Let’s take a look and verify our configured Syslog logging and log outputs on the
different locations using the ‘show logging’ command.

R1#show logging

Syslog logging: enabled (0 messages dropped, 0 messages rate-limited,
                0 flushes, 0 overruns, xml disabled, filtering disabled)

No Active Message Discriminator.

No Inactive Message Discriminator.

    Console logging: disabled
    Monitor logging: level debugging, 1 messages logged, xml disabled,
                     filtering disabled
    Buffer logging:  level debugging, 0 messages logged, xml disabled,
                     filtering disabled
    Logging Exception size (4096 bytes)
    Count and timestamp logging messages: disabled
    Persistent logging: disabled

No active filter modules.

ESM: 0 messages dropped

    Trap logging: level debugging, 5 message lines logged
        Logging to 10.0.0.100 (udp port 514, audit disabled,
              authentication disabled, encryption disabled, link up),
              2 message lines logged,
              0 message lines rate-limited,
              0 message lines dropped-by-MD,
              xml disabled, sequence number disabled
              filtering disabled

Log Buffer (10000 bytes):

As the output shows, the logging buffer is enabled with a buffer size of 10000 bytes and
a severity level of ‘debugging’. The console line logging is disabled as we configured it,
but the terminal (monitor) line logging is enabled. Lastly, you can see that R1 sends traps
to the Syslog server with an IP address of 10.0.0.100 at a severity level of ‘debugging’.

Security Information and Event Management (SIEM)


A SIEM can be considered a centralized log server since it provides a centralized
location for all logging messages. It will typically provide advanced analysis and
correlation of events, mainly used for security and audit administration.
