ccna_notes_part2
A user wants to transfer files from Host A to the FTP server. The user will start an FTP
client program (in this example, Filezilla), and initiate the connection:
In the example above, the anonymous authentication was used, so the user was not
asked to provide the password. The client can now transfer files from and to the FTP
server using the graphical interface.
NOTE
FTP uses two TCP ports: port 20 for sending data and port 21 for sending control commands. The
protocol supports the use of authentication, but like Telnet, all data is sent in clear text, including
usernames and passwords.
TFTP (Trivial File Transfer Protocol)
TFTP is a network protocol used to transfer files between remote machines. It is a
simple version of FTP, lacking some of the more advanced features FTP offers, but
requiring fewer resources than FTP.
Because of its simplicity, TFTP can only be used to send and receive files. The
protocol is not widely used today, but it can still be used to save and restore a router
configuration or to back up an IOS image.
A user wants to transfer files from Host A to the router R1. R1 is a Cisco device and it
has a TFTP server installed. The user will start a TFTP client program and initiate the
data transfer.
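As a sketch of the configuration-backup use case mentioned above, a router's running
configuration can be copied to a TFTP server with the copy command. The server address
and filename below are assumed for illustration, not taken from the example:
R1#copy running-config tftp://10.0.0.100/r1-backup.cfg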
NOTE
TFTP doesn’t support user authentication and sends all data in clear text. It uses UDP port 69 for
communication.
SW1(config)#end
SW1#
SW1#copy ftp://10.0.0.100/c2960-lanbasek9-mz.150-2.SE4.bin flash:
Accessing ftp://10.0.0.100/c2960-lanbasek9-mz.150-2.SE4.bin...
To verify that the file has indeed been transferred, we can use the show flash: command:
SW1#show flash:
Directory of flash:/
We can also transfer files from the IOS device to the FTP server, for example, to
backup the startup configuration. Here is an example of copying the startup
configuration of a switch to the FTP server:
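The copy command itself is not visible in the captured output; based on the FTP server
used earlier (and assuming anonymous access or previously configured FTP credentials),
it would look something like this, with the destination filename assumed:
SW1#copy startup-config ftp://10.0.0.100/sw1-startup-config.bak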
Writing startup-config...
DHCP (Dynamic Host Configuration Protocol) is used to automatically assign IP addressing
information to hosts. Besides an IP address, a DHCP server can also provide clients with
parameters such as the:
subnet mask
default gateway
domain name
DNS server
Cisco routers can be configured as both DHCP client and DHCP server.
1: The DHCP client broadcasts a DHCP Discover packet to locate DHCP servers on the
network.
2: The DHCP servers receive the DHCP Discover packet and respond with DHCP
Offer packets, offering IP addressing information.
3: If the client receives the DHCP Offer packets from multiple DHCP servers, the first
DHCP Offer packet is accepted. The client responds by broadcasting a DHCP
Request packet, requesting the network parameters from the server that responded
first.
4: The DHCP server approves the lease with a DHCP Acknowledgement packet. The
packet includes the lease duration and other configuration information.
NOTE
DHCP uses a well-known UDP port number 67 for the DHCP server and the UDP port number 68 for
the client.
To use DNS, you must have a DNS server configured to handle the resolution process.
A DNS server has a special-purpose application installed. The application maintains a
table of dynamic or static hostname-to-IP address mappings. When a user requests
some network resource using a hostname (e.g. by typing www.google.com in a
browser), a DNS request is sent to the DNS server asking for the IP address of the
hostname. The DNS server then replies with the IP address. The user’s browser can
now use that IP address to access www.google.com.
The figure below explains the concept:
Suppose that the DNS Client wants to communicate with the server named Server1.
Since the DNS Client doesn’t know the IP address of Server1, it sends a DNS Request
to the DNS Server, asking for Server1’s IP address. The DNS Server replies with the IP
address of Server1 (DNS Reply).
The picture below shows a sample DNS record, taken from a DNS server:
Here you can see that the host with the hostname APP1 is using the IP address
of 10.0.0.3.
NOTE
DNS uses a well-known UDP port 53.
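Although not part of the example above, a Cisco router itself can be pointed at a DNS
server so that it can resolve hostnames (for ping, telnet, and similar commands). A
minimal sketch, with an assumed server address:
R1(config)#ip domain-lookup
R1(config)#ip name-server 10.0.0.53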
A Cisco router can be configured as a DHCP server. Here are the steps:
Floor1(dhcp-config)#default-router 192.168.0.1
Floor1(dhcp-config)#dns-server 192.168.0.1
In the example above you can see that I’ve configured the DHCP server with the
following parameters:
the IP addresses from the 192.168.0.1 – 192.168.0.50 range will not be assigned
to hosts
the DHCP pool was created and named Floor1DHCP
the IP addresses assigned to the hosts will be from the 192.168.0.0/24 range
the default gateway’s IP address is 192.168.0.1
the DNS server’s IP address is 192.168.0.1
To view information about the currently leased addresses, you can use the show ip
dhcp binding command:
Floor1#show ip dhcp binding
(The output lists each binding's IP address, client hardware address, lease expiration, and type.)
In the output above you can see that there is a single DHCP client that was assigned
the IP address of 192.168.0.51. Since we’ve excluded the IP addresses from
the 192.168.0.1 – 192.168.0.50 range, the device got the first address available
– 192.168.0.51.
To display information about the configured DHCP pools, you can use the show ip dhcp
pool command:
Pool Floor1DHCP :
Leased addresses : 1
Excluded addresses : 1
This command displays some important information about the DHCP pool(s) configured
on the device – the pool name, total number of IP addresses, the number of leased and
excluded addresses, subnet’s IP range, etc.
The workstation on the left is configured as a DHCP client. R2 on the right is configured
as a DHCP server. The workstation sends a DHCP Discover packet but receives no
DHCP Offer since R1 doesn't forward the packet to R2 (broadcast packets stay on
the local subnet).
To rectify this, we can configure R1 to act as a DHCP relay agent and forward the
DHCP requests to the configured DHCP server. This is done by issuing the ‘ip helper-
address DHCP_SERVER_IP_ADDRESS’ command on its Gi0/0 interface. This
command instructs the router to take the DHCP broadcasts received on Gi0/0 and
forward them as unicast packets to the specified DHCP server.
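A minimal sketch of the relay configuration, assuming the DHCP server (R2) is reachable
at 192.168.1.2 (this address is not given in the original example):
R1(config)#interface Gi0/0
R1(config-if)#ip helper-address 192.168.1.2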
To make sure that the workstation indeed got its IP parameters, we can issue
the ‘ipconfig’ command:
C:\>ipconfig
IP Address......................: 10.0.0.104
We can verify that the Gi0/0 interface has indeed got its IP address from the DHCP
server by running the show ip int brief command:
The DHCP keyword in the Method column indicates that the IP information was
obtained from the DHCP server.
NOTE
If you want to configure a Cisco switch as a DHCP client, the ip address dhcp command is used
under the VLAN 1 configuration mode.
When a DHCP client boots up, it looks for a DHCP server in order to obtain network
parameters. If the client can’t communicate with the DHCP server, it uses APIPA to
configure itself with an IP address from the APIPA range. This way, the host will still be
able to communicate with other hosts on the local network segment that are also
configured for APIPA.
The host on the left is configured as DHCP client. The host boots up and looks for
DHCP servers on the network. However, the DHCP server is down and can’t respond to
the host. After some time (from a couple of seconds to a couple of minutes, depending
on the operating system) the client auto-configures itself with an address from the
APIPA range (e.g. 169.254.154.22).
NOTE
If your host is using an IP address from the APIPA range, there is usually a problem on the network.
Check the network connectivity of your host and the status of the DHCP server.
The APIPA service also checks regularly for the presence of a DHCP server (every
three minutes). If it detects a DHCP server on the network, the APIPA address is
replaced with an address dynamically assigned by the DHCP server.
Broadcast Storm
To better understand the importance of STP and how STP prevents broadcast storms
on a network with redundant links, consider the following example:
SW1 sends a broadcast frame to SW2 and SW3. Both switches receive the frame and
forward the frame out of every port except the port on which the frame was received. So
SW2 forwards the frame to SW3. SW3 receives that frame and forwards it to SW1. SW1
then again forwards the frame to SW2! The same thing also happens in the opposite
direction. Without STP in place, these frames would loop forever. STP prevents loops
by placing one of the switch ports in a blocking state.
Loop-Free Topology
With Spanning Tree Protocol (STP), our network topology above would look like this:
In the topology above, STP has placed one port on SW3 in the blocking state. That port
will no longer process any frames except the STP messages. If SW3 receives a
broadcast frame from SW1, it will not forward it out to the port connected to SW2.
STP uses the spanning tree algorithm to prevent loops. The switches among
themselves send Bridge Protocol Data Units (BPDU), and a root bridge or root switch
will be elected among the switches in the network. This will determine whether a port is
a root port, designated port, or blocked port.
The root bridge is the switch with the most preferred bridge ID. The root port is the port
with the shortest path toward the root bridge, and the designated port forwards
frames away from the root bridge.
NOTE
Spanning Tree Protocol (STP) enables layer 2 redundancy. In the example above, if the
link between SW3 and SW1 fails, STP will converge and unblock the port on SW3.
There are various STP modes, including Rapid Spanning Tree Protocol (RSTP),
Multiple Spanning Tree Protocol (MSTP), and Per-VLAN Spanning Tree (PVST+).
1. all switches in a network elect a root switch. All working interfaces on the root switch
are placed in forwarding state.
2. all other switches, called nonroot switches, determine the best path to get to the
root switch. The port used to reach the root switch (root port) is placed in forwarding
state.
3. on the shared Ethernet segments, the switch with the best path to reach the root
switch is placed in forwarding state. That switch is called the designated switch and its
port is known as the designated port.
4. all other interfaces are placed in blocking state and will not forward frames.
NOTE
STP considers only working interfaces – shutdown interfaces or interfaces without the cable installed
are placed in an STP disabled state.
Let’s say that SW1 is elected as the root switch. All ports on SW1 are placed into
forwarding state. SW2 and SW3 choose ports with the lowest cost to reach the root
switch to be the root ports. These ports are also placed in forwarding state. On the
shared Ethernet segment between SW2 and SW3, port Fa0/1 on SW2 has the lowest
cost to reach the root switch. This port is placed in forwarding state. To prevent loops,
port Fa0/1 on SW3 is placed in blocking state.
NOTE
A switch with the lowest switch ID will become the root switch. A switch ID consists of two
components: the switch’s priority (by default 32,768 on Cisco switches) and the switch’s MAC
address.
Each BPDU contains the following information:
root switch ID
sender’s switch ID
sender’s root cost
Hello, MaxAge, and forward delay timers
The bridge ID (BID) is an 8-byte value made up of two components:
2-byte priority field – by default, all switches have the priority of 32768. This
value can be changed using configuration commands.
6-byte system ID – a value based on the MAC address of each switch.
A switch with the lowest BID will become a root switch, with lower number meaning
better priority.
As mentioned above, the switch with the lower BID wins. Since by default all switches
have the BID priority of 32768, the second comparison has to be made – the lowest
MAC address. In our example SW1 has the lowest MAC address and becomes the root
switch.
NOTE
For simplicity, all ports on switches in the example above are assigned to VLAN 1. Also, note that
STP adds the VLAN number to the priority value, so all switches actually have the BID priority of
32,769.
To influence the election process, you can change the BID priority to a lower value on a
switch you would like to become root. This can be done using the following command:
The priority must be in increments of 4096, so if you choose any other value, you will
get an error and the possible values will be listed:
(config)#spanning-tree vlan 1 priority 224
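A valid value has to be a multiple of 4096. For example, to make a switch very likely to
win the root election for VLAN 1, you might use:
(config)#spanning-tree vlan 1 priority 4096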
If the bridge priority has the same value on all the switches, then the MAC address
becomes the tiebreaker: the switch with the lowest MAC address will be elected
as the root bridge. Older switches often have lower MAC addresses, but they also have
lower bandwidth and limited CPU/memory compared to newer switches.
Electing an older switch as the root bridge will therefore cause suboptimal operation on
your network.
What if the primary root bridge fails? To optimize further, we need to assign the other
core switch as the secondary root bridge in case the primary root bridge is not
operational. To do that, we enter the ‘root secondary’ command. This will set the
bridge priority to 28672, which is lower than the default priority but higher than the root
primary. When the primary root bridge fails, the switches will elect a new root bridge,
and the secondary switch, having the next lowest priority, becomes the new root bridge.
STP Root Primary and Root Secondary Configuration
Based on the diagram above, we need to manually configure the core switch as the root
bridge, as it has higher bandwidth and better features in general compared to the
other switches in the group. The configuration below shows how we configure the core
switch, Switch0, as the root bridge.
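The configuration commands themselves are not visible in the captured text; a sketch,
with the second core switch’s hostname assumed for illustration:
Switch0(config)#spanning-tree vlan 1 root primary
CoreSW2(config)#spanning-tree vlan 1 root secondary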
VLAN0001
Address 0001.9725.3338
Address 0001.9725.3338
Again, we can use the ‘show spanning-tree’ command to verify our configuration.
VLAN0001
Address 0001.9725.3338
Cost 19
Port 1 (FastEthernet0/1)
Address 0040.0B2C.E63A
Aging Time 20
NOTE
The spanning tree port priority value can also be changed to improve the effectiveness of STP.
In case the best root cost ties for two or more paths, the following tiebreakers are
applied (in order): the lowest neighbor bridge ID, the lowest neighbor port priority, and
the lowest neighbor internal port number.
The default port cost is defined by the operating speed of the interface:
Speed      Cost
10 Mbps    100
100 Mbps   19
1 Gbps     4
10 Gbps    2
You can override the default value on the per-interface basis using the following
command:
(config-if)#spanning-tree cost VALUE
1. The switch with the lowest root path cost becomes the designated switch on that
link.
2. In case of a root path cost tie, the switch with the lowest Bridge ID (BID)
becomes the designated switch.
Since SW3 has a lower cost to reach the root switch (4<19), its Fa0/2 port will be the
spanning tree designated port for the segment. The Fa0/2 switch port on SW2 will be
placed in a blocking state.
NOTE
If the link between SW1 and SW3 fails, STP will converge and the Fa0/2 port on SW2
will be placed in the forwarding state to forward traffic.
Spanning Tree Protocol (STP) IEEE 802.1D – the first and original implementation of
the Spanning Tree Protocol standard. A single instance of spanning tree is allowed in
the Local Area Network (LAN).
Rapid Spanning Tree Protocol (RSTP) IEEE 802.1w – improved version of 802.1D
STP. It is faster for the network to converge. However, just like 802.1D STP, only a
single instance of spanning tree is allowed in the Local Area Network (LAN).
Multiple Spanning Tree Protocol (MSTP) IEEE 802.1s – allows us to create multiple
separate spanning-tree instances, and it enables us to map and allocate multiple
VLANs to the instances.
Our multiple spanning-tree modes, IEEE 802.1s MSTP, PVST+, and RPVST+, allow us
to have various spanning-tree instances. These instances can take different paths
through the network by having different root bridges, enabling load balancing to be
possible. The traffic will be taking optimized paths for the same reason as well.
For the first instance, the traffic for VLAN 10 and VLAN 30 will be forwarded to SW1,
and the links to SW2 will be blocked. In the second instance, the traffic for VLAN 20 and
VLAN 40 will be forwarded to SW2 and will be blocked on SW1.
There will be a total of four spanning-tree instances running, as we have four VLANs in
the network. Assuming that we have 100 VLANs in our network, we will also have 100
spanning-tree instances. It would be consuming more resources as compared to
grouping them like in MSTP.
NOTE
PVST+ uses Root ports, Designated ports, and Alternate ports.
The Alternate Ports are Blocking Ports
RSTP is backwards-compatible with STP and there are many similarities between the
two protocols, such as:
the root switch is elected using the same set of rules in both protocols
root ports are selected with the same rules, as well as designated port on LAN
segments
both STP and RSTP place each port in either forwarding or blocking state. The
blocking state in RSTP is called the discarding state.
RSTP enables faster convergence times than STP (usually within just a couple of
seconds)
the STP port states listening, blocking, and disabled are merged into a single state
in RSTP – the discarding state
STP features two port types – root and designated port. RSTP adds two
additional port types – alternate and backup port.
with STP, the root switch generates and sends Hellos to all other switches, which
are then relayed by the non-root switches. With RSTP, each switch can generate
its own Hellos.
1. all switches in a network elect a root switch. All working interfaces on the root switch
are placed in forwarding state.
2. all other switches, called nonroot switches, determine the best path to get to the root
switch. The port used to reach the root switch (root port) is placed in forwarding state.
3. on the shared Ethernet segments, the switch with the best path to reach the root
switch is placed in forwarding state. That switch is called the designated switch and its
port is known as the designated port.
4. all other interfaces are placed in discarding state and will not forward frames.
NOTE
RSTP is backwards-compatible with STP and they both can be used in the same network.
Configuring RSTP
Most newer Cisco switches use RSTP by default. RSTP prevents frame looping out of
the box and no additional configuration is necessary. To check whether a switch runs
RSTP, the show spanning-tree command is used:
SW1#show spanning-tree
VLAN0001
Address 0004.9A47.1039
Aging Time 20
If RSTP is not being used, the following command will enable it:
SW1(config)#spanning-tree mode rapid-pvst
Most other configuration options (electing root switch, selecting root and designated
ports) are similar to the ones used in STP.
PortFast
PortFast enables a switch port to transition directly from the blocking state to the
forwarding state, bypassing the listening and learning states.
However, PortFast is recommended only on non-trunking access ports, such as
edge ports, because these ports typically do not send or receive BPDUs.
It is advisable to implement PortFast only on edge ports that connect end stations to the
switches, similar to the example STP topology below.
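A minimal sketch of enabling PortFast on an access port (the interface number here is
assumed, not taken from the topology):
SW1(config)#interface fastEthernet 0/5
SW1(config-if)#spanning-tree portfast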
BPDU Guard
Because PortFast can be enabled on non-trunking ports connecting two switches,
spanning-tree loops can occur because Bridge Protocol Data Units (BPDUs) are still
being transmitted and received on those ports.
Layer 2 loops in our network topology can be prevented by enabling another feature
called PortFast BPDU Guard. It prevents the loop from happening by moving
non-trunking switch ports into an errdisable state when a Bridge Protocol Data Unit
(BPDU) is received on that port. When STP BPDU Guard is enabled on the switch,
STP shuts down PortFast-configured interfaces that receive a Bridge Protocol Data
Unit (BPDU) instead of putting them into the STP blocking state.
In a correct configuration, PortFast-configured ports do not receive BPDUs. If a PortFast-
configured interface receives a Bridge Protocol Data Unit (BPDU), a misconfiguration
exists. BPDU Guard provides a secure response to invalid configurations because the
network engineer must manually put the interface back into service.
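A minimal sketch of enabling BPDU Guard on a PortFast edge port (the interface number
is assumed):
SW1(config)#interface fastEthernet 0/5
SW1(config-if)#spanning-tree portfast
SW1(config-if)#spanning-tree bpduguard enable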
Root Guard
Any switch in the network can be designated as the root bridge. But to efficiently
forward frames, the positioning of the root bridge should be predetermined in a strategic
location. The standard STP does not ensure that the root bridge can be assigned
permanently by the administrator.
An enhanced feature of STP is developed to address this issue. The root guard feature
enables a way to implement the root bridge deployment in the network.
The root guard assures that the interface on which the root guard is enabled is set as
the designated port. Normally, the root bridge ports are all set as designated ports
unless two or more root bridge ports are connected. If the bridge receives superior STP
Bridge Protocol Data Units (BPDUs) on a root guard-enabled interface, the root guard
moves this interface to a root-inconsistent STP state. This root-inconsistent state is
effectively equivalent to a listening state. No traffic is forwarded across this interface. In
this process, the root guard enforces the position of the root bridge.
On the Cisco Switches Catalyst 2900XL, 3500XL, 2950, and 3550, we configure root
guard as shown:
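The original configuration listing is not reproduced here; the interface-level command
(interface number assumed for illustration) looks like this:
Switch(config)#interface fastEthernet 0/1
Switch(config-if)#spanning-tree guard root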
For an EtherChannel to form, the member interfaces on both switches must have matching
settings:
1. Same Duplex
2. Same Speed
3. Same VLAN Configuration (e.g. the native VLAN and allowed VLANs should be the same)
4. Switch Port Modes should be the same (Access or Trunk Mode)
EtherChannel can look at the following options to decide which physical link to
send data over: the source MAC address, the destination MAC address, the source and
destination MAC addresses, the source IP address, the destination IP address, or the
source and destination IP addresses.
These options depend on the hardware and software, and there could be more options
on other models and software versions.
Increased Bandwidth
In our network planning, we always take into account the cost. For example, our
company needs more than 100 Mbps bandwidth, but our hardware only supports Fast
Ethernet (100 Mbps). In this case, we can opt not to upgrade the hardware by
implementing EtherChannel.
Also, we might think that if we have two or more links between two switches, like in our
figure above, then we can utilize the full bandwidth of these links. But, in a traditional
network setup, Spanning Tree Protocol (STP) blocks one redundant link to avoid Layer
2 loops. Our solution to this problem? EtherChannel.
EtherChannel aggregates or combines traffic across all available active links, which
makes it look like one logical cable. So in our example, if we have 8 active links with
100 Mbps each, that will be a total of 800 Mbps. If any of the physical links inside the
EtherChannel go down, STP will not see this and will not recalculate.
Redundancy
Since more than one physical connection is combined into one logical connection,
EtherChannel enables more available links in instances where one or more links go
down.
Load Balancing
With load balancing, we are able to balance the traffic load across the links and
improve the efficient use of bandwidth.
NOTE
Cisco does not offer round-robin load balancing for EtherChannel as it could potentially result in out-
of-order frames.
Switched Network Without EtherChannel
In this example, we connected 2 switches, Switch1 and Switch2, using four links. What
do you think will happen without EtherChannel? As you can see from the link states in
the network topology below, only one link is being utilized.
If we issue the ‘show spanning-tree’ command on both switches, we can see that all
interfaces of Switch1 are in forwarding state but only one interface is forwarding in
Switch2, and the other interfaces are in blocking state.
Switch1:
Switch2:
If we enter ‘show running-config interface <interface name>’ on both switches, we’ll see
the ‘switchport mode trunk’ and ‘channel-group 1 mode’ commands issued on the
interfaces. These commands are used to enable EtherChannel.
Auto mode – the interface can respond to PAgP packet negotiation but will never start one
on its own.
Desirable mode – the interface actively attempts a negotiating state for PAgP packet
negotiation.
            Auto    Desirable
Auto        No      Yes
Desirable   Yes     Yes
Switch 1#conf t
Switch 1(config-if-range)#end
Switch 2 Configuration:
Switch 2#conf t
Switch 2(config-if-range)#end
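The channel-group commands themselves are not visible in the captured configuration.
One common combination, consistent with the Automatic/Desirable flags in the outputs
below (the interface range is assumed), is desirable on one side and auto on the other:
Switch 1(config)#interface range fastEthernet 0/1 - 2
Switch 1(config-if-range)#channel-group 1 mode desirable
Switch 2(config)#interface range fastEthernet 0/1 - 2
Switch 2(config-if-range)#channel-group 1 mode auto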
---------------------------
Port-channel: Po1
------------
Protocol = PAgP
------+------+------+------------------+-----------
0 00 Fa0/1 Automatic-Sl 0
0 00 Fa0/2 Automatic-Sl 0
I - stand-alone s - suspended
R - Layer3 S - Layer2
w - waiting to be aggregated
d - default port
Number of aggregators: 1
------+-------------+-----------+----------------------------------
-------------
Lastly, we can issue the ‘show interfaces fa0/1 etherchannel’ command. We can see
both local and neighbor interface information, and also the Cisco PAgP mode used.
d - PAgP is down.
Local information:
Partner's information:
Active – The interface actively sends LACP packets in its attempt to form an LACP
connection.
Passive – The interface can respond to LACP negotiation but will never initiate on its
own.
Here’s an overview of the different modes and combinations and whether link
aggregation will work:
            Active  Passive
Active      Yes     Yes
Passive     Yes     No
4. Virtual Local Area Network (VLAN) passing on both sides should match.
EtherChannel LACP Configuration
The first requirement for link aggregation is that at least one side should be in Active mode.
For example, we will configure Switch1 to be in Active mode and the other network
switch, Switch2, to be in Passive mode.
Now, using our sample network topology below, let’s configure LACP on our network
switches’ multiple links:
Switch1#conf t
Switch1(config-if-range)#speed 100
Switch1(config-if-range)#duplex full
Switch1(config-if-range)#end
Switch2#conf t
Switch2(config-if-range)#speed 100
Switch2(config-if-range)#duplex full
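The channel-group commands are not visible in the captured configuration; based on the
Active/Passive roles described above (the interface range is assumed), they would look
something like this:
Switch1(config)#interface range fastEthernet 0/1 - 2
Switch1(config-if-range)#channel-group 1 mode active
Switch2(config)#interface range fastEthernet 0/1 - 2
Switch2(config-if-range)#channel-group 1 mode passive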
The logs on our switch show that Port-Channel1 came up, and the aggregated link is
working:
*Sep 5 15:30:06.378: %LINK-3-UPDOWN: Interface Port-channel1,
changed state to up
---------------------------
------------
Protocol = LACP
------+------+------+------------------+-----------
0 00 Fa0/1 Active 0
0 00 Fa0/2 Active 0
One use case on why we would want to configure EtherChannel on Layer 3 switches is
when we are forming redundancy between Core and Distribution Layers and
implementing a routing protocol. Instead of learning two IP routes with the same
neighboring switch (but two different next hops), we now can have a single next-hop IP
address of the neighboring switch for each IP route learned.
Another use case is to avoid Spanning Tree Protocol (STP) and use Layer 3 links
between your Core and Distribution Layers instead. We can enable routing protocols,
which give us more control over load balancing and failover and can converge much
faster than STP.
NOTE
1. The port-channel interface number can differ between two neighbor Layer 3 Switches, but we
have to use the same Channel Group Number for all physical interfaces on a Layer 3 Switch.
2. We have to issue the ‘no switchport’ command to make the physical interface a routed
port/interface (Group state = from Layer 2 to Layer 3)
Switch1(config-if-range)#no switchport
Switch1(config-if-range)#interface port-channel 1
Switch1(config-if)#no switchport
Switch2 Configuration:
Switch2(config-if-range)#no switchport
Switch2(config-if-range)#interface port-channel 2
Switch2(config-if)#no switchport
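The IP addressing of the port-channel interfaces is not visible in the capture; based on
the ping tests that follow, it would look like this (the /24 mask is an assumption):
Switch1(config)#interface port-channel 1
Switch1(config-if)#ip address 192.168.1.1 255.255.255.0
Switch2(config)#interface port-channel 2
Switch2(config-if)#ip address 192.168.1.2 255.255.255.0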
Switch1#ping 192.168.1.2
.!!!!
Switch2#ping 192.168.1.1
!!!!!
We can also check the Group state using the ‘show etherchannel‘ command in
privileged EXEC mode:
Switch1#show etherchannel
Channel-group listing:
----------------------
Group: 1
----------
Group state = L3
Ports: 2 Maxports = 4
Protocol: PAgP
Minimum Links: 0
To check the Port-channel status, we can use the ‘show etherchannel port-channel‘
command:
Switch1#show etherchannel port-channel
Channel-group listing:
----------------------
Group: 1
----------
---------------------------
Port-channel: Po1
------------
Protocol = PAgP
------+------+------+------------------+-----------
0 00 Fa0/1 Desirable-Sl 0
0 00 Fa0/2 Desirable-Sl 0
To protect hosts within the organization’s network from connecting to unauthorized
rogue DHCP servers, we need to configure DHCP snooping on the Layer 2 switch
where the untrusted hosts are connected.
DHCP snooping works by allowing DHCP server messages such as DHCPOFFER and
DHCPACK only when they come from a trusted port. If DHCP server messages are
coming from untrusted ports, the switch discards the DHCP traffic. The switch also builds
a table called the DHCP Snooping Binding Database. The DHCP snooping database registers
the source MAC address and IP address of the hosts that are connected to an untrusted
port.
NOTE
DHCP clients connected to an untrusted port are expected to transmit these DHCP
messages: DHCP DISCOVER and DHCP REQUEST. If it transmits DHCPOFFER and DHCPACK,
then the switch discards the DHCP packets. DHCPOFFER and DHCPACK are expected to be
received on the trusted ports of the switch.
2. After enabling DHCP snooping, configure FastEthernet 0/1 and FastEthernet 0/2 as a
trusted port.
SW(config)#interface range FastEthernet 0/1 - FastEthernet 0/2
SW(config-if-range)#ip dhcp snooping trust
SW(config-if-range)#no shutdown
SW(config-if-range)#exit
3. Assign DHCP snooping to the VLAN that is currently in use with the following
command.
SW(config)#ip dhcp snooping vlan 1
Router(config-if)#no shutdown
5. On the legitimate DHCP server, select the Services tab and click on DHCP. Enable
the service, and assign the IP details, subnet mask, and DNS server IP. The
name serverPool cannot be changed as it already exists.
6. Enable DHCP on PC0 and PC1; they will get their IP addresses from the legitimate DHCP
server.
7. Disconnect the legitimate DHCP server and observe that PC0 and PC1 are not
getting any IP. The below snippet shows PC0 is getting an APIPA address. The PCs will
not be able to get connected to the rogue DHCP server.
1. First, PC1 checks its ARP table for PC2’s IP address (10.10.10.100).
2. If there is no cached entry, PC1 will send an ARP Request, a broadcast message (source:
AAAA.AAAA.AAAA, destination: FFFF.FFFF.FFFF) to all hosts on the same subnet.
3. All hosts will receive the ARP Request, but only PC2 will reply. PC2 will send an ARP
Reply containing its own MAC address (EEEE.EEEE.EEEE).
4. PC1 receives the MAC address and saves it to its ARP Table.
C:\>arp -a
C:\>arp -a
Dynamic ARP Inspection checks received ARP messages against:
1. The DHCP Snooping Binding Table
2. Any configured ARP ACLs (can be used for hosts using static IP instead of DHCP)
If the ARP message does not match any of the above, the switch discards the ARP message.
NOTE
To bypass the Dynamic ARP Inspection (DAI) process, you will usually configure the interface trust
state towards network devices like switches, routers, and servers under your administrative control.
We can also use the ‘show ip arp inspection’ command to verify the number of dropped
ARP packets:
In our example, if we want to configure PC1 with static IP instead of DHCP, we must
create a static entry using ARP ACL.
Switch(config)#arp access-list PC1-Static
Switch(config-arp-nacl)#permit ip host 10.10.10.100 mac host AAAA.AAAA.AAAA
Now, the switch will check the ARP access list first, and then when it doesn’t find a
match, the switch will check the DHCP Snooping Binding Table.
An attacker could also generate a large number of ARP messages, causing CPU
overutilization in the switch (Denial-of-Service or DoS). Note that DAI works in the
switch CPU rather than in the switch ASIC. We can prevent this type of attack by
limiting DAI Message Rates.
Interface    Trust State    Rate (pps)    Burst Interval
Fa0/1        Untrusted      8             4
Fa0/2        Untrusted      15            1
Fa0/3        Untrusted      15            1
Dynamic ARP Inspection is an excellent security feature, but before configuring DAI, we
need to think and make a few decisions based on our goals, topology, and device roles.
We do not want to block important traffic after enabling it.
The Supplicant is the user or client that wants to access the network. The
Authenticator is a network device such as a Wireless Access Point or an Ethernet
Switch. The Authentication Server (AS) is a trusted server that performs the actual
authentication of network access: it receives, processes, and responds to the various
requests from clients, decides whether to allow or deny access, and tells the
authenticator which settings to apply to the user.
NOTE:
Authentication servers usually run software supporting the RADIUS and Extensible
Authentication Protocol (EAP). In some cases, the AS software may be running on the
authenticator hardware, such as the RADIUS server.
Lightweight EAP (LEAP) – the authentication process is where the client simply
provides the AS its credentials, such as the username and password. Encrypted
challenge messages are exchanged between the AS and client to ensure that the client
is authorized to access the network.
Protected EAP (PEAP) – uses inner and outer authentication. Nevertheless, the AS
presents a digital certificate to authenticate itself with the supplicant in the outer
authentication.
EAP Transport Layer Security (EAP-TLS) – the AS and the supplicant exchange
certificates and can authenticate each other. A TLS tunnel is built afterward to exchange
encryption key material securely. EAP-TLS is considered the most secure wireless
authentication method available; however, implementing it can sometimes be complex.
NOTE:
EAP-TLS is practical only if wireless clients can receive and utilize digital certificates.
Many wireless devices, such as communicators, medical devices, and RFID tags, have
an underlying operating system that can’t interface with a CA or use certificates.
IEEE 802.1x is a standard defined by IEEE 802.1x working group for addressing port-
based access control using authentication. It defines a standard link-layer protocol that
is used for transporting higher-level authentication protocols, and the actual
enforcement is via MAC-based filtering and port state monitoring.
Enabling the port security violation feature on the switch ports means that each port can
be configured to take advantage of one of the three violation modes that define the
necessary actions to take when a violation happens. These modes cause the switch to
discard the violating frame (the frame whose source MAC address would drive the
number of learned MAC addresses over the limit).
Step 1: Enter interface configuration mode and input the physical interface to configure.
We will be using gigabitEthernet 2/1 as an example.
Switch(config)# interface gigabitEthernet 2/1
Step 2: Set the interface mode to access. The default mode, dynamic desirable, cannot
be configured as a secured port.
Switch(config-if)# switchport mode access
Step 3: Enable port security on the interface.
Switch(config-if)# switchport port-security
Step 4: Set the maximum number of secure MAC addresses for the interface, which
ranges from 1 to 3072, wherein the default value is 1.
Switch(config-if)# switchport port-security maximum {1-3072}
Step 5: Configure the violation mode on the port. Actions that shall be taken when a
security violation is detected. Refer to the table below for the actions to be taken.
NOTE:
When a secure port is in an error-disabled state, you can bring it out of the state by issuing the
command ‘errdisable recovery cause psecure-violation’ at the global configuration mode, or you
can manually reenable it by entering the ‘shutdown’ and ‘no shutdown’ commands.
Step 7: Input the identified secure MAC addresses for the interface. You can use this
command to define static secure MAC addresses. If you configure fewer secure MAC
addresses than the maximum, the remaining MAC addresses are dynamically learned.
Switch(config-if)# switchport port-security mac-address
{mac_address}
For example on how ACLs are used, consider the following network topology:
Let’s say that server S1 holds some important documents that need to be available only
to the company’s management. We could configure an access list on R1 to enable
access to S1 only to users from the management network. All other traffic going to S1
will be blocked. This way, we can ensure that only authorized users can access the
sensitive files on S1.
1. Standard Access Control Lists – with standard access lists, you can filter traffic
only on the source IP address of a packet. These types of access lists are not as
powerful as extended access lists, but they are less processor-intensive for the router.
The following example describes the way in which standard access lists can be used for
traffic flow control:
Let’s say that server S1 holds some important documents that need to be available only
to the company’s management. We could configure an access list on R1 to enable
network access to S1 for the users from the management network only. All other traffic
going to S1 will be blocked. This way, we can ensure only authorized users can access
sensitive files on S1.
2. Extended Access Control Lists – with extended access lists, you can be more
precise in your network traffic filtering. You can evaluate the source and destination IP
addresses, type of layer 3 protocol, source and destination port, etc. Extended access
lists are more complex to configure and consume more CPU time than standard access
lists, but they allow a much more granular level of control.
To demonstrate the usefulness of extended ACLs, we will use the following example:
In the example network above, we have used the standard access list to prevent all
users from accessing server S1. But, with that configuration, we also deny access to
S2! To be more specific, we can use extended access lists. Let’s say that we need to
prevent users from accessing server S1. We could place an extended access list on R1
to prevent users only from accessing S1 (we would use an access list to filter the IP
traffic according to the destination IP address). That way, no other traffic is forbidden,
and users can still access the other server, S2:
NOTE
The ACL number for standard ACLs has to be between 1 and 99 or 1300 and 1999.
You can also use the host keyword to specify the host you want to permit or deny:
R1(config)# access-list ACL_NUMBER permit|deny host IP_ADDRESS
Once the access list is created, it needs to be applied to an interface. You do that by
using the ip access-group ACL_NUMBER in|out interface
subcommand. in and out keywords specify in which direction you are activating the
ACL. in means that ACL is applied to the traffic coming into the interface, while
the out keyword means that the ACL is applied to the traffic leaving the interface.
R1(config)#access-list 1 permit 10.0.0.0 0.0.0.255
The command above permits traffic from all IP addresses that begin with 10.0.0. We
could also target the specific host by using the host keyword:
R1(config)#access-list 1 permit host 10.0.0.1
The command above permits traffic only from the host with the IP address of 10.0.0.1.
Next, we need to apply the access list to an interface. It is recommended to place the
standard access lists as close to the destination as possible. In our case, this is
the Fa0/0 interface on R1. Since we want to evaluate all packets trying to exit out Fa0/0,
we will specify the outbound direction with the out keyword:
R1(config-if)#ip access-group 1 out
NOTE
At the end of each ACL there is an implicit deny all statement. This means that all traffic not
specified in earlier ACL statements will be forbidden, so the second ACL statement (access-list 1
deny 11.0.0.0 0.0.0.255) wasn’t even necessary.
NOTE
Extended access lists numbers are in ranges from 100 to 199 and from 2000 to 2699. You should
always place extended ACLs as close to the source of the packets that are being evaluated as
possible.
To better understand the concept of extended access lists, consider the following
example:
We want to enable the administrator’s workstation (10.0.0.1/24) unrestricted access to
Server (192.168.0.1/24). We will also deny any type of access to Server from the user’s
workstation (10.0.0.2/24).
First, we’ll create a statement that will permit the administrator’s workstation access to
Server:
R1(config)#access-list 100 permit ip 10.0.0.1 0.0.0.0 192.168.0.1 0.0.0.0
Next, we need to create a statement that will deny the user’s workstation access to
Server:
R1(config)#access-list 100 deny ip 10.0.0.2 0.0.0.0 192.168.0.1 0.0.0.0
Lastly, we need to apply the access list to the Fa0/0 interface on R1:
R1(config)#int f0/0
R1(config-if)#ip access-group 100 in
This will force the router to evaluate all packets entering Fa0/0. If the administrator tries
to access Server, the traffic will be allowed, because of the first statement. However, if
User tries to access Server, the traffic will be forbidden because of the second ACL
statement.
NOTE
At the end of each access list there is an implicit deny all statement, so the second ACL statement
wasn’t really necessary. After applying an access list, all traffic not explicitly permitted will be
denied.
What if we need to allow traffic to Server only for certain services? For example, let’s
say that Server was a web server and users should be able to access the web pages
stored on it. We can allow traffic to Server only to certain ports (in this case, port 80),
and deny any other type of traffic. Consider the following example:
On the right side, we have a Server that serves as a web server, listening on port 80.
We need to permit User to access web sites on S1 (port 80), but we also need to deny
other type of access.
First, we need to allow traffic from User to the Server port of 80. We can do that using
the following command:
R1(config)#access-list 100 permit tcp 10.0.0.2 0.0.0.0 192.168.0.1 0.0.0.0 eq 80
By using the tcp keyword, we can filter packets by the source and destination ports. In
the example above, we have permitted traffic from 10.0.0.2 (User’s workstation) to
192.168.0.1 (Server) on port 80. The last part of the statement, eq 80, specifies the
destination port of 80.
Since at the end of each access list there is an implicit deny all statement, we don’t
need to define any more statements. After applying the access list, all traffic not
originating from 10.0.0.2 and going to 192.168.0.1 on port 80 will be denied.
R1(config)#int f0/0
R1(config-if)#ip access-group 100 in
We can verify whether our configuration was successful by trying to access Server from
the User’s workstation using different methods. For example, the ping will fail:
C:\>ping 192.168.0.1
NOTE
Just like numbered ACLs, named ACLs can be of two types: standard and extended.
The named ACL name and type is defined using the following syntax:
(config) ip access-list STANDARD|EXTENDED NAME
The command above moves you to the ACL configuration mode, where you can
configure the permit and deny statements. Just like with numbered ACLs, named ACLs
end with an implicit deny statement, so any traffic not explicitly permitted will be
forbidden.
Once inside the ACL config mode, we need to create a statement that will deny the
user’s workstation access to the Domain server:
The number 20 represents the line in which we want to place this entry in the ACL. This
allows us to reorder statements later if needed.
Now, we will execute a statement that will permit the workstation access to the File
share:
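The individual statements are not visible in the captured text. A reconstruction under
stated assumptions (workstation at 10.0.0.1, Domain server at 192.168.0.1, File server
at 192.168.0.2, and a hypothetical ACL name) might look like this:
R1(config)#ip access-list extended LAN_FILTER
R1(config-ext-nacl)#20 deny ip host 10.0.0.1 host 192.168.0.1
R1(config-ext-nacl)#50 permit ip host 10.0.0.1 host 192.168.0.2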
Lastly, we need to apply the access list to the Gi0/0 interface on R1:
R1(config)#int Gi0/0
The commands above will force the router to evaluate all packets trying to enter Gi0/0. If
the workstation tries to access the Domain server, the traffic will be forbidden because
of the first ACL statement. However, if the user tries to access the File server, the
traffic will be allowed because of the second statement.
Notice the sequence number at the beginning of each entry. If we need to stick a new
entry between these two entries, we can do that by specifying a sequence number in
the range between 20 and 50. If we don’t specify the sequence number, the entry will be
added to the bottom of the list.
We can use the ping command on the workstation to verify the traffic is being blocked
properly:
C:\>ping 192.168.0.1
C:\>
C:\>ping 192.168.0.2
Host A requests a web page from an Internet server. Because Host A uses private IP
addressing, the source address of the request has to be changed by the router because
private IP addresses are not routable on the Internet. Router R1 receives the request,
changes the source IP address to its public IP address and sends the packet to server
S1. Server S1 receives the packet and replies to router R1. Router R1 receives the
packet, changes the destination IP address to the private IP address of Host A and
sends the packet to Host A.
1. Static NAT – translates one private IP address to a public one. The public IP
address is always the same.
2. Dynamic NAT – private IP addresses are mapped to the pool of public IP
addresses.
3. Port Address Translation (PAT) – one public IP address is used for all internal
devices, but a different port is assigned to each private IP address. Also known
as NAT Overload.
Static NAT
With static NAT, routers or firewalls translate one private IP address to a single public IP
address. Each private IP address is mapped to a single public IP address. Static NAT is
not often used because it requires one public IP address for each private IP address.
Here is an example.
Computer A requests a web resource from S1. Computer A uses its private IP address
when sending the request to router R1. Router R1 receives the request, changes the
private IP address to the public one, and sends the request to S1. S1 responds to R1.
R1 receives the response, looks it up in its NAT table, and changes the destination IP
address to the private IP address of Computer A.
In the example above, we need to configure static NAT. To do that, the following
commands are required on R1:
R1(config)#ip nat inside source static 10.0.0.2 59.50.50.1
Using the commands above, we have configured a static mapping between Computer
A’s private IP address of 10.0.0.2 and the router’s R1 public IP address of 59.50.50.1.
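Static NAT also requires the inside and outside interfaces to be marked; the interface
names below are assumed for illustration, not taken from the example:
R1(config)#interface fastEthernet 0/0
R1(config-if)#ip nat inside
R1(config-if)#interface fastEthernet 0/1
R1(config-if)#ip nat outside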
To check NAT, you can use the show ip nat translations command:
R1#show ip nat translations
With dynamic NAT, you need to specify two sets of addresses on your Cisco router:
1. configure the router’s inside interface using the ip nat inside command
2. configure the router’s outside interface using the ip nat outside command
3. configure an ACL that has a list of the inside source addresses that will be translated
4. configure a pool of global IP addresses using the ip nat pool NAME
FIRST_IP_ADDRESS LAST_IP_ADDRESS netmask SUBNET_MASK command
5. enable dynamic NAT with the ip nat inside source list ACL_NUMBER pool
NAME global configuration command
Host A requests a web resource from an Internet server S1. Host A uses its private IP
address when sending the request to router R1. Router R1 receives the request,
changes the private IP address to one of the available global addresses in the pool and
sends the request to S1. S1 responds to R1. R1 receives the response, looks up in its
NAT table and changes the destination IP address to the private IP address of Host A.
1. First we need to configure the router’s inside and outside NAT interfaces:
R1(config)#int f0/0
R1(config-if)#ip nat inside
R1(config-if)#int f0/1
R1(config-if)#ip nat outside
2. Next, we need to configure an ACL that will include a list of the inside source
addresses that will be translated. In this example we want to translate all inside hosts on
the 10.0.0.0/24 network:
R1(config)#access-list 1 permit 10.0.0.0 0.0.0.255
3. We need to configure the pool of global (public) IP addresses available on the outside
interface:
R1(config)#ip nat pool STUDY-CCNA_POOL 155.4.12.1 155.4.12.3
netmask 255.255.255.0
4. Finally, we enable dynamic NAT. The command below tells the router to translate all
addresses specified in access list 1 to the pool of global addresses named STUDY-CCNA_POOL:
R1(config)#ip nat inside source list 1 pool STUDY-CCNA_POOL
You can list all NAT translations using the show ip nat translations command.
To test the configuration, we can ping the server from Host A:
C:\>ping 155.4.12.5
Then enter the show ip nat translations command quickly enough before the translation
has timed out:
R1#show ip nat translations
In the output above you can see that the translation has been made between the Host
A’s private IP address (Inside local, 10.0.0.100) to the first available public IP address
from the pool (Inside global, 155.4.12.1) and it is connecting to the server on the outside
(Outside local and Outside global, 155.4.12.5) .
NOTE
You can remove all NAT translations from the table by using the clear ip nat translation * command.
PAT allows you to support many hosts with only a few public IP addresses. It works by
creating dynamic NAT mapping, in which a global (public) IP address and a unique port
number are selected. The router keeps a NAT table entry for every unique combination
of the private IP address and port, with translation to the global address and a unique
port number.
We will use the following example network to explain the benefits of using PAT:
As you can see in the picture above, PAT uses unique source port numbers on the
inside global (public) IP address to distinguish between translations. For example, if the
host with the IP address of 10.0.0.101 wants to access the server S1 on the Internet,
the host’s private IP address will be translated by R1 to 155.4.12.1:1056 and the
request will be sent to S1. S1 will respond to 155.4.12.1:1056. R1 will receive that
response, look up in its NAT translation table, and forward the request to the host.
configure the router’s inside interface using the ip nat inside command.
configure the router’s outside interface using the ip nat outside command.
configure an access list that includes a list of the inside source addresses that
should be translated.
enable PAT with the ip nat inside source list ACL_NUMBER interface TYPE
overload global configuration command.
Here is how we would configure PAT for the network picture above.
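First, we mark the inside and outside interfaces. The outside interface Gi0/1 is taken
from the overload command used later; the inside interface name is an assumption:
R1(config)#interface Gi0/0
R1(config-if)#ip nat inside
R1(config-if)#interface Gi0/1
R1(config-if)#ip nat outside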
Next, we will define an access list that will include all private IP addresses we would like
to translate:
R1(config-if)#access-list 1 permit 10.0.0.0 0.0.0.255
The access list defined above includes all IP addresses from the 10.0.0.0 – 10.0.0.255
range.
Now we need to enable NAT and refer to the ACL created in the previous step and to
the interface whose IP address will be used for translations:
R1(config)#ip nat inside source list 1 interface Gi0/1 overload
To verify the NAT translations, we can use the show ip nat translations command after
hosts request a web resource from S1:
R1#show ip nat translations
Notice that the same IP address (155.4.12.1) has been used to translate three private
IP addresses (10.0.0.100, 10.0.0.101, and 10.0.0.102). The port number of the public IP
address is unique for each connection. So when S1 responds to 155.4.12.1:1026, R1
looks into its NAT translations table and forwards the response to 10.0.0.102:1025.
IPv6 features
Here is a list of the most important features of IPv6:
Large address space: IPv6 uses 128-bit addresses, which means that for each
person on the Earth there are
48,000,000,000,000,000,000,000,000,000 addresses!
Enhanced security: IPSec (Internet Protocol Security) is built into IPv6 as part
of the protocol. This means that two devices can dynamically create a secure
tunnel without user intervention.
Header improvements: the packet header used in IPv6 is simpler than the one
used in IPv4. The IPv6 header is not protected by a checksum, so routers do not
need to calculate a checksum for every packet.
No need for NAT: since every device has a globally unique IPv6 address, there
is no need for NAT.
Stateless address autoconfiguration: IPv6 devices can automatically configure
themselves with an IPv6 address.
IPv6 address format
Unlike IPv4, which uses a dotted-decimal format with each byte ranging from 0 to
255, IPv6 uses eight groups of four hexadecimal digits separated by colons. For
example, this is a valid IPv6 address:
2340:0023:AABA:0A01:0055:5054:9ABC:ABB0
If you don’t know how to convert a hexadecimal number to binary, remember that each
hexadecimal digit corresponds to four binary bits (for example, A = 1010 and F = 1111).
IPv6 address shortening
The IPv6 address given above looks daunting, right? Well, there are two
conventions that can help you shorten what must be typed for an IP address:
1. leading zeros within a group can be omitted
For example, the address listed above
(2340:0023:AABA:0A01:0055:5054:9ABC:ABB0) can be shortened
to 2340:23:AABA:A01:55:5054:9ABC:ABB0
2. successive fields of zeroes can be represented as two colons (::)
For example, 2340:0000:0000:0000:0455:0000:AAAB:1121 can be written
as 2340::0455:0000:AAAB:1121
NOTE
You can shorten an address this way only for one such occurrence. The reason is obvious –
if you had more than one occurrence of the double colon, you wouldn’t know how many sets of
zeroes were being omitted from each part.
Here are a couple more examples that can help you grasp the concept of IPv6
address shortening:
Long version: 1454:0045:0000:0000:4140:0141:0055:ABBB
Shortened version: 1454:45::4140:141:55:ABBB
Long version: 0000:0000:0001:AAAA:BBBC:A222:BBBA:0001
Shortened version: ::1:AAAA:BBBC:A222:BBBA:1
For example, if the MAC address of a network card is 00:BB:CC:DD:11:22 (binary
0000 0000 1011 1011 1100 1100 1101 1101 0001 0001 0010 0010), the interface
ID would be 02BBCCFFFEDD1122. The interface ID is derived in two steps:
1. insert FFFE in the middle of the 48-bit MAC address, giving 00BBCCFFFEDD1122
2. flip the 7th bit (the universal/local bit) of the first byte, so 00BBCCFFFEDD1122
becomes 02BBCCFFFEDD1122
The same bit flip applied to the MAC address 00:00:0C:43:2A:35 gives 02000C432A35
(binary: 0000 0010 0000 0000 0000 1100 0100 0011 0010 1010 0011 0101).
NOTE
IPv6 doesn’t use the broadcast method, but multicast to all hosts on the network provides the
functional equivalent.
IPv6 Unicast Addresses
Unicast addresses represent a single interface. Packets addressed to a unicast address
will be delivered to a specific network interface.
global unicast – similar to IPv4 public IP addresses. These addresses are assigned by
the IANA and used on public networks. They have a prefix of 2000::/3 (all the addresses
that begin with binary 001).
unique local – similar to IPv4 private addresses. They are used in private networks and
aren’t routable on the Internet. These addresses have a prefix of FD00::/8.
link local – these addresses are used for sending packets over the local subnet.
Routers do not forward packets with these addresses to other subnets. IPv6 requires a
link-local address to be assigned to every network interface on which the IPv6 protocol is
enabled. These addresses have a prefix of FE80::/10.
An IPv6 unicast address typically consists of two parts:
subnet ID – 64 bits long. Contains the site prefix (obtained from a Regional Internet
Registry) and the subnet ID (subnets within the site).
interface ID – 64 bits long. typically composed of a part of the MAC address of
the interface.
NOTE
The original IPv6 RFCs defined a private address class called site local. This class has been
deprecated and replaced with unique local addresses.
Link-local addresses have a prefix of FE80::/10. They are mostly used for auto-address
configuration and neighbour discovery.
IPv6 multicast addresses start with FF00::/8. After the first 8 bits, there are 4 bits that
represent the flag fields that indicate the nature of specific multicast addresses. The
next 4 bits indicate the scope of the IPv6 network for which the multicast traffic is
intended. Routers use the scope field to determine whether multicast traffic can be
forwarded. The remaining 112 bits of the address make up the multicast Group ID.
The most common scope values are:
1 – interface-local
2 – link-local
4 – admin-local
5 – site-local
8 – organization-local
E – global
For example, the addresses that begin with FF02::/16 are multicast addresses intended
to stay on the local link.
The following are some of the most common link-local multicast addresses:
FF02::1 – all nodes on the link
FF02::2 – all routers on the link
FF02::5 – all OSPFv3 routers
FF02::A – all EIGRP (IPv6) routers
FF02::1:2 – all DHCPv6 relay agents and servers
1. First, enable IPv6 routing on a Cisco router using the ‘ipv6 unicast-routing’ global
configuration command. This command globally enables IPv6 and must be the first
command executed on the router.
2. Configure the IPv6 global unicast address on an interface using the ‘ipv6 address
address/prefix-length [eui-64]’ command. After you enter this command, the link local
address will be automatically derived. If you omit the ‘eui-64’ parameter, you will need
to configure the entire address manually.
R1(config)#ipv6 unicast-routing
R1(config)#int Gi0/0
R1(config-if)#ipv6 address 2001:BB9:AABB:1234::/64 eui-64
We can verify the IP configuration and IP settings using the ‘show ipv6 interface
Gi0/0’ command:
2001:BB9:AABB:1234:201:42FF:FE65:3E01, subnet is
2001:BB9:AABB:1234::/64 [EUI]
FF02::1
FF02::2
FF02::1:FF65:3E01
....
1. The link local IPv6 address has been automatically configured. Link local addresses
begin with FE80::/10, and the interface ID is used for the rest of the address. Because
the interface’s MAC address is 00:01:42:65:3E:01, the calculated address
is FE80::201:42FF:FE65:3E01. IPv6 hosts check that their link local IP addresses are
unique and not already in use by reaching out to the local network using the Neighbor
Discovery Process (NDP).
2. The global IPv6 address has been created using the modified EUI-64 method.
Remember that IPv6 global addresses begin with 2000::/3. So in our case, the IPv6
global address is 2001:BB9:AABB:1234:201:42FF:FE65:3E01.
We will also create an IPv6 address on another router. This time, we will enter the
whole address:
R2(config-if)#ipv6 address
2001:0BB9:AABB:1234:1111:2222:3333:4444/64
!!!!!
As you can see from the output above, the devices can communicate with each other.
So that’s how to enable IPv6 on a router. IPv6 addresses and the default gateway can
also be configured on hosts automatically using SLAAC and DHCPv6. DNS servers are
still required to be able to reach the Internet.
Link-Local Address
The first thing that happens is that the device gives itself a link-local address. The
link-local address for an interface is configured automatically: it is formed by combining
the link-local prefix FE80::/64 with the EUI-64 interface identifier generated from the
interface’s MAC address, which is padded in the middle with the 16 bits FFFE.
Then flip the 7th bit of the MAC address, which results in:
A840:11FF:FE34:5531
10101010 represents the first 8 bits of the MAC address (AA), which, when the 7th bit is
inverted, becomes 10101000 (A8). Therefore, the resulting IPv6 EUI-64 link-local address
is this:
FE80::A840:11FF:FE34:5531.
NOTE
Most modern operating systems randomize the host portion of the address rather than
using the standard EUI-64 for security and privacy reasons.
When a device autoconfigures or receives an IPv6 address, it sends Duplicate Address
Detection (DAD) messages out via Neighbor Discovery Protocol Neighbor Solicitation
(NDP NS), asking if any other node on the link already uses the same address.
When our client device received the router advertisement sent by the router, it combines
the global unicast prefix address (2001:1385:A:B:: /64) with its EUI-64 interface
identifier (A840:11FF:FE34:5531), resulting in the global unicast address
2001:1385:A:B:A840:11FF:FE34:5531/64 that can be routed on the Internet. The
default gateway of our client device will be the router that sends Router Advertisements
(RA) to it.
DHCPv6 Process
The first step is for the DHCPv6 client to find other devices on the link via the
Neighbor Discovery Protocol (NDP). If the client detects a router, it checks the
Router Advertisement (RA) sent by that router to see whether DHCPv6 has been set up. If
no Router Advertisement messages are seen, the client sends DHCPv6 Solicit
messages – multicast messages addressed to ff02::1:2, which reaches all DHCP agents,
both servers and relays. The DHCPv6 client uses the link-local scope to send the
Solicit message, which ensures that, by default, the message isn't forwarded beyond
the local link.
NOTE
In practice, a DHCPv6 server is usually still required to hand out information such as
DNS server addresses to be able to surf the internet, unlike SLAAC, which only
configures IPv6 addresses.
In configuring an IPv6 static route, we will use the same procedure and configuration
syntax that we are using in configuring IPv4 static routes. For our example
configuration, we will use the topology below:
R2(config)#ipv6 unicast-routing
2. Configure the IPv6 addresses on the interfaces. We will use the network prefix /64.
R1(config)#int g0/0
R1(config-if)#no shut
R1(config-if)#int g0/1
R1(config-if)#no shut
R2(config)#int g0/0
R2(config-if)#no shut
R2(config-if)#int g0/1
R2(config-if)#no shut
3. Configure the IPv6 static route on each router. We will use the next-hop address
instead of the output interface, and the network prefix /64.
R1(config)#ipv6 route FC00:12:12:12::/64 2001:1:1:1::2
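R2 also needs a corresponding route back to the LAN behind R1. Assuming that LAN uses
the prefix FC00:11:11:11::/64 and that R1 uses the address 2001:1:1:1::1 on the shared
link (both are placeholders, since the original topology diagram is not shown), the
command would be:
R2(config)#ipv6 route FC00:11:11:11::/64 2001:1:1:1::1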
4. Configure the IPv6 address and gateway on each PC. The link-local address below will
be populated automatically. IPv6 link-local addresses can be used to communicate with
nodes (hosts and routers) on the same local link.
5. Verify by using end-to-end ping testing from each PC.
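For example, from R1 we can first confirm that R2's address on the shared link is
reachable (the address is taken from the static route above):
R1#ping 2001:1:1:1::2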
IPv6 Default Static Route and Summary Route
Did you ever wonder how to efficiently connect a host or workstation to the internet in
terms of ease of configuration? One option is to use a default static route and a
summary route. With a summary route, we can configure a single static route that covers
all the subnets within the LAN, as long as they fall within the same summarizable
address block. To connect the hosts inside the LAN, we need to configure a default route
on the CPE (Customer Premises Equipment) pointing towards the ISP.
In this article, you will learn the concept and configuration of the IPv6 default static route
and summary route.
IPv6 connected routes are also automatically added to the routing table when there is
a directly connected subnet on a router's interface. Both ends of the directly connected
link must have an IPv6 address configured, and both interface status codes must be in
the up state. The command 'ipv6 unicast-routing' must also be configured on the
routers to enable IPv6 routing.
R2(config)#ipv6 unicast-routing
R3(config)#ipv6 unicast-routing
ISP(config)#ipv6 unicast-routing
R1(config)#int g0/0
R1(config-if)#no shut
R2(config)#int g0/0
R2(config-if)#int g0/1
R2(config-if)#no shut
R3(config)#int g0/0
R3(config-if)#no shut
ISP(config)#int g0/1
ISP(config-if)#no shut
3. Configure IPv6 static routes on R1 for the networks 2001:1:0:2::/64 and 2001:1:1:1::/64
so that R1 will know the routes to R3 and ISP. We will use the next-hop address instead of
the output interface.
R1(config)#ipv6 route 2001:1:0:2::/64 2001:1:0:1::2
R1(config)#ipv6 route 2001:1:1:1::/64 2001:1:0:1::2
4. Configure an IPv6 static route on R2 for the network 2001:1:1:1::/64 so that R2 will know
the route to ISP.
R2(config)#ipv6 route 2001:1:1:1::/64 2001:1:0:2::2
5. Configure the IPv6 summary route on R3 for the networks behind R1 and R2.
R3(config)#ipv6 route 2001:1:0::/48 2001:1:0:2::1
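This single route works because 2001:1:0:1::/64 and 2001:1:0:2::/64 share the same first
48 bits (2001:0001:0000), so the /48 prefix 2001:1:0::/48 covers both of them and R3
needs only one route instead of two.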
6. Configure IPv6 default static routing on R3 pointing all the unknown traffic
(destination IPv6) to ISP. With this configuration, all the unknown destination networks
will go to ISP.
R3(config)#ipv6 route ::/0 2001:1:1:1::2
7. Configure an IPv6 static route on ISP pointing back towards R3, so that ISP can reach
the networks behind R1, R2, and R3. In this lab a default route is used:
ISP(config)#ipv6 route ::/0 2001:1:1:1::1
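To verify, the static routes can be checked with the show ipv6 route static command. For
example, on R3 the output would look roughly like this (trimmed, and the exact formatting
varies by IOS version):
R3#show ipv6 route static
S   2001:1:0::/48 [1/0]
     via 2001:1:0:2::1
S   ::/0 [1/0]
     via 2001:1:1:1::2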
As with IPv4, IPv6 routing protocols can be either distance vector or link-state. An
example of a distance vector protocol is RIPng, with hop count as the metric. An
example of a link-state routing protocol is OSPFv3, with cost as the metric.
Unlike IPv4, we no longer use Address Resolution Protocol or ARP in IPv6. IPv6
Neighbor Discovery replaces this function.
Neighbor Discovery Protocol Features
Now, let’s discuss the different functions of NDP:
1. Router Solicitation – Router Solicitation messages (RS) are sent by hosts when they
boot up to find any routers in a local segment and to request that they advertise their
presence on the network.
2. Router Advertisement – Router Advertisement messages (RA) are used by an IPv6
router to advertise its presence on the network. These Router Advertisements contain
information like the router’s IPv6 address, MAC address, MTU, etc.
3. Neighbor Solicitation Message – Neighbor Solicitation messages (NS) are sent by a
host to determine a neighboring host's link-layer (MAC) address. The destination address
will be the solicited-node multicast address of the neighbor. NS messages are also used
to verify that a neighbor is still reachable via a cached link-layer address.
4. Neighbor Advertisement Message – A host uses Neighbor Advertisement messages
(NA) to respond to NS messages. If a host receives an NS message, it returns an
NA message to the sender. A host also uses this message to announce a link-layer
address change.
5. Redirect – IPv6 routers use this message to notify an originating host of a better next-
hop address for a specific destination. Only routers can send unicast traffic redirect
messages. Only hosts process redirect messages.
Using a VPN is relatively inexpensive, since most organizations already have firewalls
installed with a built-in VPN feature. A VPN also provides security for all the traffic that
is sent outside your network through VPN tunnels. Lastly, a VPN is scalable, as you can
keep adding tunnels and users.
1. Site-to-Site VPN
Organizations are continuously expanding into different branches, and to protect the
data in transit between two branches, we need to implement a site-to-site VPN.
The most common VPN protocol used in site-to-site VPNs is Internet Protocol Security (IPSec).
In implementing this type of VPN, we need to set up the Phase 1 and Phase 2 VPN
negotiations. IKE Phase 1 negotiation is where we create a secure encrypted channel
so that the two firewalls can start the Phase 2 negotiation.
In the IKE Phase 2 negotiation, the two firewalls agree on the configured parameters
that define what traffic can go via the VPN tunnel and how to authenticate and encrypt
the traffic. The agreement is called a Security Association. For both Phase 1 and Phase 2,
the parameters configured on the two peers must match, including pre-shared keys,
authentication, encryption, and IKE version.
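As a rough sketch of what those negotiated parameters look like on a Cisco IOS router
(the policy number, key, and peer address below are placeholders, not taken from any
topology in this article):
R1(config)#crypto isakmp policy 10
R1(config-isakmp)#encryption aes
R1(config-isakmp)#hash sha
R1(config-isakmp)#authentication pre-share
R1(config-isakmp)#group 14
R1(config-isakmp)#exit
R1(config)#crypto isakmp key STUDY_KEY address 203.0.113.2
R1(config)#crypto ipsec transform-set STUDY_TS esp-aes esp-sha-hmac
The isakmp policy defines the Phase 1 parameters, while the pre-shared key and the
transform set are part of what the peers agree on for protecting the actual traffic.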
2. Remote Access VPN
Commonly called a mobile VPN, this type of VPN connection permits users to connect
through the internet from anywhere in the world and access the corporate network
resources securely. It can be used in a work-from-home setup where employees
securely access the company's internal resources through the VPN. To implement
this, the employee must install a VPN client, such as the Cisco AnyConnect Secure
Mobility Client, on their device, and a virtual IP address will be assigned to the
employee's device that will be used to establish the secured tunnel.
Remote access VPNs can use SSL, IKEv2, L2TP, and IPSec protocols. The most secure
and easiest to implement is IKEv2. Some internet connections block the IKEv2
and L2TP protocols, which is why some deployments use SSL VPN instead, as it uses
the typical HTTP/HTTPS traffic that is allowed on all internet connection types. Plain
IPSec is not usually used for remote access VPNs, as there are known vulnerabilities
associated with some of its legacy configurations.
The system administrator can choose between two modes to implement the remote
access VPN:
Full Tunnel – all the traffic that is coming out from the employee’s device will go directly
to the firewall, and the firewall will forward it to the internet if necessary. This is a
completely secured implementation as all the security services of the firewall will be
applied to all the traffic coming out from the employee’s device.
Split Tunnel – traffic destined for the internet, such as ordinary HTTP/HTTPS traffic,
uses the regular internet connection (for example broadband or LTE), while traffic used
to access the company's internal resources goes through the VPN tunnel. The traffic is
split based on its purpose.
The ISP is providing us with a subscription-based WAN connection type for us to have
access to our resources remotely. Another notable application of WAN is the internet.
We can connect to the internet because of the WAN provided by our local Internet
Service Providers. We’ll discuss each WAN connection type below.
Most WAN connections use a static public IP address, a dynamic public IP address, or
PPPoE (Point-to-Point Protocol over Ethernet) towards the ISP, depending on the
subscription. Static public IP addresses are more expensive compared to dynamic
addresses and PPPoE because the pool of unique public IP addresses is limited and a
static address cannot be reused by other customers. Private IP addresses are only
routable on the local network and are not routable over the WAN or the internet. The
diagram below shows the LAN and WAN infrastructure.
1. Leased Line
This WAN connection type is a dedicated point-to-point link with a fixed-bandwidth data
connection. By using leased lines, your network will have a completely secured and
reliable connection, high bandwidth, and superior quality of service. On the other hand,
leased lines can be expensive and are not scalable, as each one is a permanent
physical connection.
2. DSL (Digital Subscriber Line)
DSL is a medium used to transfer digital signals over standard telephone lines. It uses a
different frequency range than the telephone service, so you can use the internet while
making a call. DSL is an older technology that provides a typical speed of around
6 Mbps. The good thing about DSL is that the bandwidth is not shared, so it provides a
constant speed.
3. Cable Internet
One way to provide a broadband internet connection is by using cable internet from a
local cable TV provider. It is quite similar to DSL, except that it uses the existing cable
TV infrastructure and a cable modem to send data. On this connection, the speed varies
with the number of users on the service at a specific time.
4. Fiber Optics
Fiber is the newest broadband connection type and provides the highest internet speeds
to customers. It is also commonly used in telecommunication backhaul connections
because of the higher speeds it can handle compared to other cables. DWDM,
SONET, and SDH are ISP backhaul transport technologies that use fiber optic cable.
Fiber optics is also used in telecom packet-switching and circuit-switching networks.
5. MPLS
MPLS (Multiprotocol Label Switching) is a type of VPN service that forwards packets
based on labels instead of IP addresses or Layer 3 headers. It offers good security and
optimized routing for the customer's sites. With MPLS, the service provider participates
in the customer's routing.
6. Wireless WAN
Most of us use mobile phones that rely on mobile data to connect to the internet. The
commonly known connection types for wireless WAN are 3G, 4G, LTE, and 5G. These are
services offered by local ISPs to provide wireless internet access to mobile devices via
cellular sites. They use specific frequencies to provide wider coverage and a stronger
signal to customers.
The leased line transfers data in both directions using full-duplex transmission. It uses
two pairs of wires (a full-duplex cable), where each pair carries data in one direction.
A leased line is not one long physical cable stretched between two or more locations,
as some perceive it. It uses specialized switching devices that act as signal boosters to
make the connection behave like a point-to-point link and reach the remote destination.
The below diagram shows how the leased line connects two branches:
From the Optical Network Terminal (ONT) located at the customer's branch, the traffic
goes to the Optical Line Termination (OLT) located on the ISP's premises, where all the
optical signals coming from the customers are multiplexed and processed.
From the OLT, traffic then goes to the edge routers, which use VRFs and add labels if
MPLS is in use. From the edge routers, it goes to the core routers, using BGP as the
overlay protocol and an IGP such as OSPF as the underlay protocol. From the core
routers, the data is transmitted to the other edge router, on to the OLT, and finally to the
ONT located at the customer's other branch. The transmission equipment used between
the core routers, edge routers, OLT, and ONT is either Synchronous Digital Hierarchy
(SDH) or Dense Wavelength Division Multiplexing (DWDM).
On the other hand, leased lines can be expensive because they need a dedicated cable
and switching circuitry. Not only that, the leased line is not scalable as it is a permanent
physical connection.
We will now discuss a popular WAN technology, Metro Ethernet (MetroE), which uses
physical Ethernet links to connect the customer's network devices to the service
provider's network, along with the other WAN topologies in use today.
To connect multiple sites across a Wide Area Network, the service provider should be
well aware of which topology best addresses the enterprise client's needs.
Point-to-Point
The customer basically requires basic point-to-point network connectivity between two
sites separated geographically that will allow them to send and receive Ethernet frames
to each other as if they are connected directly. Point-to-point topology is transparent to
the customer network and acts as if there are physical connections between remote
sites.
Typical dedicated leased line connections, such as an E1 or T1 line, are offered with this
kind of setup. But with MetroE, service providers offer what is called an Ethernet Line
Service, or E-Line, where two customer premises equipment (CPE) devices can exchange
Ethernet frames, similar in concept to a leased line.
Point-to-Multipoint
This WAN topology requires a central site device that directly sends Layer 2 frames to
reach remote sites, but the remote sites can only send frames to the central site. This
Metro Ethernet service is known as E-Tree, wherein the central site is the E-Tree Root
and the remote sites are the E-Tree leaves. This topology is also known as Hub and
Spoke or partial mesh network topologies, but regardless, this type of WAN topology
creates a service that addresses the need to have a hub site with several sites
connected.
Full Mesh
Full mesh topology allows all the connected devices in the WAN topology to
communicate with each other. The people who developed MetroE anticipated the need
for full-mesh connectivity, where all devices can send and receive Ethernet frames to
and from each other directly without depending on a central hub. This MetroE service
is called Ethernet LAN, or E-LAN.
An E-LAN service permits all the devices connected to it to send frames directly to each
other as if they are in one big Ethernet switch.
Ring Topology
This topology is almost like point-to-point, but the ends are connected to each other to
form a ring, providing protection in case a fault happens. WANs that use this topology
are less susceptible to failure, since traffic can be routed the other way around the ring
if a fault is detected on the network. However, adding new sites to a ring topology
requires more work and cost compared to a simple point-to-point setup, because each
new site needs two connections to keep the ring closed.
Star Topology
A WAN topology using the star network configuration requires a central hub. A
concentrator router is used to ensure that data is properly sent to the destination.
This topology allows easier integration of new network components into the network,
which can be an important consideration for business WANs as it entails less work and
cost.
This topology also makes the network less vulnerable to a single point of failure, with
the exception of the central hub itself. The spokes have independent links to the
concentrator router, which means they do not depend on other spokes to function
properly, and it is easy to isolate the issue if something goes wrong.
Nowadays, modern enterprise networks are usually made up of many parts that all work
with each other. Securing them can become very complex work. The larger the network
grows, the harder it is to protect everything unless you have identified the
vulnerabilities, assessed the possible exploits, and determined where the threats might
come from.
To be able to address cyber security threats and attacks, we must first need to
understand these types and how they may disrupt the network by accessing sensitive
data.
Malware
Malware is malicious software that performs a task on a targeted device or a whole
network. Once activated, usually by clicking a malicious link or attachment, it can block
access to important network components, install harmful software, subtly obtain
sensitive information by sending data from the hard drive, and disrupt the whole
network, halting services to users. Below are examples of malware that cyber criminals
use to gain unauthorized access to a target and perform a cyber attack:
1. Spyware
2. Ransomware
3. Backdoor
4. Trojans
5. Virus
6. Worms
Phishing
Phishing is a way of obtaining valuable information about the target by tricking the victim
into unknowingly providing their credentials. It can be considered a type of social
engineering that tricks users into bypassing normal cyber security protocols and giving
up personal data.
Stateful Firewall
Essential firewall functions include preventing malicious traffic from entering or leaving
the private network. Monitoring the state and context of network communications is
critical because this information can be used to identify threats based on where they
come from, where they go, or the content of their data packets. Stateful firewalls can
detect unauthorized network access attempts and analyze data within packets to see if
it contains malicious code.
Stateful firewall advantages include:
Connection state-aware
Does not open a large range of ports to permit traffic
Extensive logging capabilities
Robust attack prevention
Stateless Firewall
We can also call it a packet-filtering firewall. It is the oldest and most basic type of
firewall. Stateless packet-filtering firewalls
operate inline at the network’s perimeter. These firewalls, however, do not route
packets; instead, they compare each packet received to a set of predefined criteria,
such as the allowed IP addresses, packet type, port number, and other aspects of the
packet protocol headers. Packet filters provide a basic level of security that can give
protection against known threats. The packet filter does not maintain a connection state
table.
Proxy firewall advantages include:
Content caching
Increased network performance
Easier to log traffic
Prevents direct connections from outside the network
Next-Generation Firewalls
The Next-Generation Firewall (NGFW) is a deep-packet inspection firewall that expands
beyond port/protocol inspection and blocking to include application-level inspection (up
to Layer 7 of the OSI), intrusion prevention, and intelligence from outside the firewall.
NGFW is a more advanced version of the traditional firewall that provides the same
benefits. Like traditional firewalls, NGFWs employ both static and dynamic packet
filtering and VPN support to secure all connections between the network, the internet,
and the firewall.
Both types of firewalls should be able to do NAT and PAT.
There are also significant differences between traditional and next-generation firewalls.
The ability of an NGFW to filter packets based on applications is the most apparent
distinction between the two. These firewalls have a high level of control and visibility
over the applications that they can identify through analysis and signature matching.
They can use whitelists or a signature-based intrusion prevention system to distinguish
between safe and malicious applications, which are then identified using SSL
decryption. Unlike most traditional firewalls, NGFWs also include a path for receiving
future updates.
More secure
Supports application-level inspection up to Layer 7 of the OSI model
Capable of user authentication
Detailed logging
A firewall permits traffic depending on a set of rules that have been set up. Filtering is
based on source and destination IP addresses and port numbers, and a firewall can deny
any traffic that does not satisfy the specified criteria. An IDS is a passive monitoring
device that watches network traffic as it travels over the network, compares it against
signature patterns, and raises an alarm if suspicious activity or a known security threat is
detected. An IPS, on the other hand, is an active device that prevents attacks by
blocking them.
Firewalls
A firewall employs rules to filter incoming and outgoing network traffic. It uses IP
addresses and port numbers to filter the traffic, and it can be set up in either routed
(Layer 3) or transparent mode. The firewall should be the first line of defense, installed
inline at the network's perimeter.
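As a simple illustration of this kind of rule, here is what permitting HTTPS to a single
server and denying everything else looks like in Cisco IOS extended ACL syntax (the
addresses are placeholders; dedicated firewalls have their own rule syntax):
R1(config)#access-list 110 permit tcp any host 192.0.2.10 eq 443
R1(config)#access-list 110 deny ip any any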
There are also different types of firewalls like proxy firewall, stateful inspection firewall,
unified threat management (UTM) firewall, next-generation firewall (NGFW), threat-
focused NGFW, and a virtual firewall.
anomalies.
Cryptography Services
Cryptography provides the following services to the data:
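Confidentiality – the data is encrypted so that only authorized parties can read it
Integrity – any modification of the data in transit can be detected
Authentication – the identity of the communicating parties can be verified
Non-repudiation – the sender cannot later deny having sent the data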
Symmetric Encryption
Take a look at the symmetric encryption process above. The same shared key is used to
encrypt the data, and a copy of that same key is used to decrypt it. First, the sender
takes the data and encrypts it into an unreadable format, then forwards it into transit.
When the receiver gets the data, it is still encrypted; the receiver decrypts it using the
same key and can then read it. Notice that the data in transit is jumbled, so that if
somebody tries to sniff it, they would not understand it, because they need that one key
to decrypt it.
Asymmetric Encryption
Asymmetric encryption uses a key pair: a secret (private) key and a public key. A
message encrypted with the public key can only be decrypted with the private key, and
vice versa. With asymmetric encryption, only the private key must be kept secret; the
public key can be freely available in the public domain. Asymmetric encryption is slower
than symmetric encryption, so it is used for smaller transmissions such as symmetric key
exchange and digital signatures. Asymmetric encryption algorithms include RSA and
ECDSA.
In the asymmetric encryption process above, you see that the sender on the left has the
public key, while the receiver on the right has the private key. Those are the keys they
use for the encryption and to make the data jumbled as it passes across the transit
network. Lastly, the data is decrypted with the private key on the receiving end. This
enables anyone to send data securely to the host that holds the private key, and only
the holder of the private key can decrypt the message.
For example, when you want to buy something online, you want your credit card details
to be encrypted over the internet. The online store cannot send you the shared key over
the same Internet channel because your connection is not yet encrypted. Anybody that
sniffs your data in real-time will get the shared key as well if it is sent on the same line. It
is not even practical to call a customer every time someone wants to purchase just to
give them the shared key. So, how do we resolve this?
Public Key Infrastructure (PKI) solves this problem. It uses a trusted introducer, a
Certificate Authority (CA), which issues digital certificates to the two parties that need
secure communication. Both parties also need to trust the Certificate Authority.
Clients and web servers use a request-response model to communicate with each other:
clients send HTTP Requests and servers respond with HTTP Responses. Clients usually
send their requests using the GET or POST methods, for example GET /homepage.html.
The web server responds with a status code (200 if the request was successful) and
sends the requested resource.
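As a simplified illustration (the hostname is a placeholder), such an exchange looks
roughly like this:
GET /homepage.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html
...the requested HTML page follows...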
Web servers usually use the well-known TCP port 80. If the port is not specified in a URL,
browsers will use this port when sending HTTP requests. For example, you will get the
same result when requesting http://google.com and http://google.com:80.
NOTE
The version of HTTP most commonly used today is HTTP/1.1. A newer version, HTTP/2, is available
and supported by most browsers.
HTTPS is commonly used to create a secure channel over an insecure network, e.g. the
Internet. A lot of traffic on the Internet is unencrypted and susceptible to sniffing attacks.
HTTPS encrypts sensitive information, which makes a connection secure.
HTTPS URLs begin with https instead of http. In Internet Explorer, you can
immediately recognize that a web site is using HTTPS because a lock appears to the
right of the address bar:
NOTE
HTTPS uses the well-known TCP port 443. If the port is not specified in a URL, browsers will use this
port when sending HTTPS requests. For example, you will get the same result when
requesting https://gmail.com and https://gmail.com:443.
Criminal organizations with large numbers of employees develop attack vectors and
execute attacks
Individuals who create attack vectors
Nation-states
Terrorist groups
Industrial spies
Organized crime groups
Insider threats
Hackers and cyber criminals
Business competitors
For individuals, the best practices are basic and simple. Having anti-virus software
installed on your computer is a good start. Habitually changing your alphanumeric
passwords also goes a long way in cyber defense. Lastly, being vigilant in identifying
phishing attacks can help prevent an individual from becoming the victim of a cyber
attack.
SSH (Secure Shell)
SSH is a network protocol used to remotely access and manage a device. The key
difference between Telnet and SSH is that SSH uses encryption, which means that all
data transmitted over a network is secure from eavesdropping. SSH uses public-key
encryption for this purpose.
Like Telnet, a user accessing a remote device must have an SSH client installed. On the
remote device, an SSH server must be installed and running. SSH uses TCP port 22 by
default.
Here is an example of creating an SSH session using Putty, a free SSH client:
NOTE
SSH is the most common way to remotely access and manage a Cisco device. Here you can
find information about setting up SSH access on your Cisco device.
Setting up Telnet
To access a Cisco device using telnet, you first need to enable remote login. Cisco
devices usually support 16 concurrent virtual terminal sessions, so the first command
usually looks like this:
HOSTNAME(config)#line vty 0 15
To enable remote login, the login command is used from the line configuration mode:
HOSTNAME(config-line)#login
Next, you need to define a password. This is done using the password command from
the line configuration mode:
HOSTNAME(config-line)#password PASSWORD
Let’s try this on a real router. First, we will try to access the router without enabling
telnet on a device:
R1#telnet 10.0.0.1
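The router refuses the session because no vty password has been set. The output is
not captured above, but it typically looks like this:
Trying 10.0.0.1 ... Open
Password required, but none set
[Connection to 10.0.0.1 closed by foreign host]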
As you can see above, we cannot access a Cisco device using telnet before setting up
the password. Let's do that:
R1(config)#line vty 0 15
R1(config-line)#password cisco
R1(config-line)#login
R1#telnet 10.0.0.1
Password:
This time, because telnet was configured on the device, we have successfully telnetted
to the device.
The following example shows the configuration of the first three steps:
Router(config)#hostname R1
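The domain-name, local-user, and RSA key-generation commands are not shown in the
captured output above; based on the description that follows, they would be:
R1(config)#ip domain-name cisco
R1(config)#username study password ccna
R1(config)#crypto key generate rsa
The name for the keys will be: R1.cisco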
Choose the size of the key modulus in the range of 360 to 2048 for your
  General Purpose Keys. Choosing a key modulus greater than 512 may take
  a few minutes.
R1(config)#
R1(config)#
First, we have defined the device hostname by using the hostname R1 command. Next,
we have defined the domain name by using the ip domain-name cisco command. After
that, the local user is created by using the username study password ccna command.
Next, we need to enable only the SSH access to a device. This is done by using
the transport input ssh command:
R1(config)#line vty 0 15
R1(config-line)#transport input ssh
R1(config-line)#login local
R1(config-line)#
If we use the transport input ssh command, the telnet access to the device is
automatically disabled.
NOTE
You should use the more recent version of the protocol, SSH version 2. This is done by using the ip
ssh version 2 global configuration command.
Terminal emulation software, such as Putty, is also needed to connect to the device. It
is used so that we can use a PC or a laptop as a display device for the router or switch
and have access to the device's Command Line Interface (CLI). The software is
essential, as we need it to configure devices initially, to install routers and switches onto
the network, and to enable remote management. You should connect your device via
the COM port on your computer. You can check your COM port via the Device Manager.
Console Cable
The rollover cable is an RJ45 on one end and a DB9 on the other. It is the most popular
console cable for older devices. You can still use it, but you’ll need an adapter if you
don’t have a serial port on your device. Usually, you’ll use a DB9 to USB adapter to
connect to your PC or laptop. Nowadays, there’s a direct RJ45 to USB console cable
available in the market. Newer Cisco devices, usually the smaller and portable ones,
have mini USB console ports. The console cable for it has a mini USB for the console
port and a USB on the other end.
A router or switch has one console port only. The console port has a line number of 0,
thus ‘line console 0’. To secure the console port connections to our networking device,
we can set a password by issuing the following commands below. In this way, we can
secure our console port by requiring a password upon logging in.
Router(config)#line console 0
Router(config-line)#password StudyCCNA
Router(config-line)#login
NOTE
The management port is used for remote access only. The console port is used for local and
physical access.
For example, to disconnect a console user after 90 seconds of inactivity, we can use the
following command:
R1(config)#line con 0
R1(config-line)#exec-timeout 1 30
After 90 seconds of inactivity, the session will be disconnected and the user will need to
supply the console password to log back in:
R1(config-line)#
Password:
NOTE
To disable the timeout, use the value of 0 (not recommended in production environments!)
However, there is one problem with this command – the password is stored in clear text
in the configuration:
R1#show running-config
Building configuration...
version 15.1
....
!
username tuna password 0 peyo
...
R1(config)#
Note that (unlike with the enable password and enable secret commands) you can’t
have both the username password and username secret commands configured at the
same time:
ERROR: Can not have both a user password and a user secret.
Level 0 – Zero-level access only allows five commands – logout, enable, disable, help,
and exit.
Level 1 – User-level access allows you to enter in User Exec mode that provides very
limited read-only access to the router.
Level 15 – Privilege level access allows you to enter in Privileged Exec mode and
provides complete control over the router.
NOTE
By default, line-level security has a privilege level of 1 (con, aux, and vty lines).
In this example, we assign user admin1 a privilege level of 0. Then, we assign user
admin2 to privilege level 15, which is the highest level. For admin3, we did not specify
any privilege level, but it will have a privilege level of 1 by default.
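The username commands that produce this setup are not shown above; based on the
description, they would look like this (the passwords are placeholders):
Router(config)#username admin1 privilege 0 secret PASSWORD1
Router(config)#username admin2 privilege 15 secret PASSWORD2
Router(config)#username admin3 secret PASSWORD3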
Let’s try to verify the output of our configuration by logging in to each user. Enter the
username and the corresponding password, starting with admin1.
User Access Verification
Username: admin1
Password:
Router>?
Exec commands:
Router>
Notice in the output above that the user admin1 is under User Exec mode and has only
five commands- logout, enable, disable, help, and exit. Now, let’s log in as admin2.
User Access Verification
Username: admin2
Password:
Router#show privilege
Current privilege level is 15
Router#
The output above shows that user admin2 is currently in level 15, and we verified that
by typing the ‘show privilege’ command on the CLI. Notice also that we are in
Privileged Exec mode. Lastly, let’s log in as admin3.
User Access Verification
Username: admin3
Password:
Router>show privilege
Current privilege level is 1
When we logged in as admin3, we verified that it was in level 1 by typing the ‘show
privilege’ command on the CLI. Notice that we are in User Exec mode.
Let’s now assign privilege level 5 to a user. After that, we will configure privilege level 5
users to be in User Exec mode and allow them to use the ‘show running-
config’ command.
All level 5 users now will be automatically accessing the User Exec mode and can now
use the User Exec commands such as ‘show running-config’ on the CLI. Let’s log in
as user admin4 to verify that.
User Access Verification
Username: admin4
Password:
Router#show running-config
Building configuration...
boot-start-marker
boot-end-marker
!
end
Router#
Username: admin5
Password:
Router>show running-config
Router>enable 5
Password:
R4#show privilege
Building configuration...
boot-start-marker
boot-end-marker
end
Router#
In our first attempt, notice in the example above that we do not have access to
the 'show running-config' command. That is because we are currently under
privilege level 0. However, we can move up to privilege level 5 with the 'enable
{privilege level}' command, and from there, we can access the 'show running-config'
command.
AAA challenges and handles user requests for network access by asking users for their
credentials to prove that they are legitimate before gaining access to the network. AAA
is widely used in network devices such as routers, switches, and firewalls, to name a
few, to control and monitor access within the network.
AAA Server
AAA addresses the limitations of local security configuration and the scalability issues
that come with it. For example, if you need to change or add a password, it has to be
done locally and on all devices, which will require a lot of time and resources.
An external AAA server solves these issues by centralizing such tasks within the
network. Having backup AAA servers in the network ensures redundancy and security
throughout the network.
Authentication
The AAA server receives a user authentication request. It challenges the user's
credentials by asking, for example, for a username and password, which are typically
protected with a hashing algorithm. The AAA server then compares the user's
authentication credentials with the user credentials stored in its database.
Authorization
Once the user’s credentials are authenticated, the authorization process determines
what that specific user is allowed to do and access within the premise of the network.
Users are categorized to know what type of operations they can perform, such as an
Administrator or Guest. The user profiles are configured and controlled from the AAA
server. This centralized approach eliminates the hassle of editing on a “per box” basis.
Accounting
The last process done in the AAA framework is accounting for everything the user is
doing within the network. AAA servers monitor the resources being used during the
network access. Accounting also logs the session statistics and auditing usage
information, usually for authorization control, billing invoice, resource utilization, trend
analysis, and planning the data capacity of the business operations.
AAA Protocols
The two most commonly used protocols for implementing AAA (Authentication,
Authorization, and Accounting) in the network are RADIUS and TACACS+. RADIUS is an
open standard, while TACACS+ was developed by Cisco; both are widely supported and
used to ensure security within the network.
RADIUS Configuration
RADIUS is an access server AAA protocol. To configure it, first, we need to define the
IP address of the RADIUS server in our Cisco router.
Enable AAA on the device in global configuration mode, which gives us access to the
AAA commands.
R1(config)#aaa new-model
Now let us configure the RADIUS servers that you want to use.
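The server definitions themselves, which open the config-radius-server mode used
below, are not part of the captured output. On a modern IOS release each server is
created like this (the name and address are placeholders):
R1(config)#radius server STUDY_CCNA1
R1(config-radius-server)#address ipv4 192.168.1.10 auth-port 1812 acct-port 1813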
R1(config-radius-server)#key STUDY_CCNA1
R1(config-radius-server)#key STUDY_CCNA2
Configure AAA authentication command with the group group-name method to specify
a subset of RADIUS servers to use as the login authentication method. To specify and
define the group name and the group members, use the aaa group server command.
For example, use the aaa group server command to first define the members
of STUDY_CCNA.
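A minimal sketch of those two commands might look like this (the 'default' method list
and the 'local' fallback are assumptions for illustration):
R1(config)#aaa group server radius STUDY_CCNA
R1(config-sg-radius)#server name STUDY_CCNA1
R1(config-sg-radius)#exit
R1(config)#aaa authentication login default group STUDY_CCNA local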
TACACS+ Configuration
For AAA Cisco TACACS+ configuration, we need to define first the IP address of the
TACACS+ server.
R1(config)#tacacs-server host 192.168.1.10
Enable AAA on the device in global configuration mode, which gives us access to the
AAA commands.
R1(config)#aaa new-model
Now let us configure the TACACS+ servers that you want to use.
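The server definition itself is not shown in the captured output; on a modern IOS
release it would be created like this (the name is a placeholder, and the address is
taken from the tacacs-server host command above):
R1(config)#tacacs server STUDY_CCNA1
R1(config-server-tacacs)#address ipv4 192.168.1.10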
R1(config-server-tacacs)#key STUDY_CCNA1
R1(config-server-tacacs)#key STUDY_CCNA2
Use the aaa authentication login command to configure login authentication. Indicate
it with the group group-name method to specify a subset of TACACS+ servers to use as
the login authentication method. To specify and define the group name and the
members of the group, use the aaa group server command. For example, use
the aaa group server command to first define the members of STUDY_CCNA.
Banner MOTD
The Message of the Day (MOTD) banner will be displayed before the user authenticates
to our devices. It is typically used to display a temporary notice that may change
regularly, such as system availability.
To create a MOTD banner on a Cisco router, the following banner MOTD command is
used from the router’s global config mode:
Router(config)# banner motd $
Enter TEXT message.  End with the character '$'.
Attention!
$
Router(config)#
NOTE
Be careful when choosing your delimiter character for the banner: the banner text must not
contain the delimiter character, or else Cisco IOS will interpret it as the indicator to end
the banner message.
In this example, the MOTD banner spans multiple lines of text, and the delimiting
character, which is also called start/stop character, is the dollar sign ($). Now let’s try to
access our devices to see what the MOTD Banner looks like:
Attention!
Username:
Username:
The figure above shows the MOTD banner before the user logs in to the router.
Banner Login
The Login banner will also be displayed before the user authenticates to our devices. It
will show up after the MOTD banner. Unlike the MOTD Banner, it is designed to
commonly display legal notices, such as security warnings and more permanent
messages to the users.
To create a Login banner on our device, the following command is used from the
router’s global configuration mode:
Router(config)# banner login ?
Warning!
Router(config)#
In this example, we use a question mark (?) as a delimiting character to indicate the
start and stop of the banner configuration.
Now let’s try to access our Cisco device to see what the Login banner looks like:
Attention!
Username:
As you can see above, the login banner is shown after the MOTD banner before the
user logs in to the router.
Banner Exec
We use the Exec banner to display messages after a user or network administrator
authenticates to our Cisco IOS device and before the user enters User EXEC mode.
Unlike the MOTD banner, the Exec banner is designed to be a more permanent message
that does not change frequently.
To create an Exec banner on a Cisco router, the following Exec banner command is
used from the router’s global configuration mode:
Router(config)#
In this example, We use the number eight (8) as a delimiting character to indicate the
start and stop of the banner configuration, just to show that any character can be used.
Now let’s try to access our Cisco devices to see what the Exec banner looks like:
Attention!
Warning!
Username: cisco
Password:
Router>
The image above confirms that the MOTD, Login, and Exec banners are all displayed
respectively.
The name of the timezone can be anything you like. After the name parameter, you
need to specify the difference in hours (and optionally minutes) from Coordinated
Universal Time (UTC). For example, to specify the Atlantic Standard Time, which is 4
hours behind UTC, we would use the following command:
R1(config)#clock timezone AST -4
Again, the name parameter can be anything you like. The recurring keyword instructs
the router to update the clock each year. If you press enter after the recurring keyword,
the router will use the U.S. DST rules for the annual time changes in April and October.
You can also manually set the date and time for DST according to your location. For
example, to specify the DST that starts on the last Sunday of March and ends on the
last Sunday of October, we would use the following command:
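(The command below is a reconstructed example; the zone name and exact change-over
times are placeholders.)
R1(config)#clock summer-time CEST recurring last Sun Mar 2:00 last Sun Oct 3:00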
NTP uses a hierarchical system of time sources. At the top of the structure are highly
accurate time sources – typically a GPS or atomic clock – providing Coordinated
Universal Time (UTC). A reference clock is known as a stratum 0 server. Stratum 1
servers are computers directly linked to the reference (stratum 0) clocks; they run NTP
server software that delivers the time to stratum 2 servers, and so on (image source:
Wikipedia):
NTP Server
NTP uses a client-server architecture; one host is configured as the NTP server and all
other hosts on the network are configured as NTP clients. Consider the following
example:
Host A is our NTP client and it is configured to use a public NTP
server uk.pool.ntp.org. Host A will periodically send an NTP request to the NTP server.
The server will provide accurate data and time, enabling system clock synchronization
on Host A.
NOTE
NTP servers listen for NTP packets on User Datagram Protocol (UDP) port 123. The
current version is NTPv4, and it is backward compatible with NTPv3.
NTP servers can either be local or public. Public NTP servers are often free to use and
are provided by third-party operators such as Google and Facebook. A local or internal
NTP server is owned by the company itself and is deployed within the network.
Cisco routers can be configured as both NTP clients and NTP servers. To configure a
Cisco router as an NTP client, we can use the ntp server IP_ADDRESS command:
Floor1(config)#ntp server 192.168.0.100
NOTE
To define a version of NTP, add the version NUMBER keywords at the end of the command (e.g. ntp
server 192.168.0.100 version 3).
To configure your Cisco router as an NTP server, only a single command is needed:
DEVICE(config)#ntp master
After entering this command, you will need to point all the devices in your LAN to use the
router as the NTP server.
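To verify that a client has synchronized, the show ntp status and show ntp associations
commands can be used (output omitted here):
Floor1#show ntp status
Floor1#show ntp associations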
Syslog explained
Syslog is a standard for message logging. Syslog messages are generated on Cisco
devices whenever an event takes place – for example, when an interface goes down or
a port security violation occurs.
You’ve probably already encountered syslog messages when you were connected to a
Cisco device through the console – Cisco devices show syslog messages by default to
the console users:
R1#
R1#
In the example above you can see that the logged in user executed the terminal
monitor command. Because of that, the telnet user was notified via a syslog message
when the Gi0/1 interface went up a couple of moments later.
R1(config)#logging 10.0.0.10
Now, logs generated on R1 will be sent to the syslog server with the IP address of
10.0.0.10. Of course, you need to have a Syslog server (e.g. Kiwi syslog) installed and
configured.
NOTE
It is also possible (and recommended) to use specify hostname instead of the IP address in
the logging command. The command is logging host HOSTNAME.
In our example the message has the severity level of 5, which is a notification event.
The first five levels (0-4) are used by messages that indicate that the functionality of the
device is affected. Levels 5 and 6 are used by notification and informational messages,
while level 7 is reserved for debug messages.
The severity levels can be used to specify the type of messages that will be logged. For
example, if you think that you are getting too many non-important messages when
logged in through a console, the global configuration command logging console 2 will
instruct the device to only log messages of the severity level 0, 1 and 2 to the console.
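Similarly, the logging trap command controls which severity levels are sent to the Syslog
server. For example, to send levels 0 through 6 (everything except debug messages):
R1(config)#logging trap 6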
Cisco IOS Syslog Logging Locations
A Syslog message is a system-generated message produced by routers and switches
used to inform network administrators about useful information regarding the health and
state of the device, along with network events and incidents that occurred at that point
in time. Syslog logging is critical to our network system because it provides easier
troubleshooting and enhances security by providing visibility into the infrastructure
devices and equipment logs. We will discuss the different Cisco logging locations and
how to configure Syslog logging on these locations below.
NOTE
By default, Cisco IOS devices send Syslog messages to the console line and logging buffer.
However, you can also configure it to send Syslog messages into the terminal lines, SNMP traps,
and Syslog servers.
Syslog messages can be logged to various locations. These are the four ways and
locations where we can store and display messages on our Cisco devices:
Logging Buffer – events are saved in the RAM memory of the router or switch. The buffer
has a fixed size to ensure that the log messages will not use up valuable system
memory. It is enabled by default.
Console Line – events will be displayed in the CLI when you log in over a console
connection. It is enabled by default.
Terminal Lines – log messages will be shown in the CLI when you log in over a Telnet or
SSH session. It is disabled by default.
Syslog Server – log messages are saved in the Syslog server.
NOTE
When you execute logging synchronous on the global configuration mode, log messages sent to the
console line will not interrupt the command you are typing, and the command will be moved to a new
line.
R1#terminal monitor
NOTE
The reason terminal lines are disabled by default in Cisco IOS is that it might make your VTY
terminal connections congested due to massive amounts of log messages being sent into the lines
when it is improperly configured.
Next, we specify the local timestamp for the Syslog messages sent to the Syslog server
because it is not included by default.
R1(config)#service timestamps log datetime msec
Let’s take a look and verify our configured Syslog logging and log outputs on the
different locations using the ‘show logging’ command.
R1#show logging
filtering disabled
filtering disabled
filtering disabled
As the output shows, the logging buffer is enabled with an output size of 1000 bytes and
a severity level of ‘debugging’. The console line logging is disabled as we configured it,
but the terminal line is enabled. Lastly, you can see that R1 uses the Syslog server with
an IP address of 10.0.0.100 and a severity level of ‘debugging’.