Course Transcript Routing
Table of Contents
1. Video: Course Overview (it_csnetp24_08_enus_01)
2. Video: Static Routing (it_csnetp24_08_enus_02)
3. Video: Dynamic Routing (it_csnetp24_08_enus_03)
4. Video: Route Selection (it_csnetp24_08_enus_04)
5. Video: Network Address Translation (NAT) (it_csnetp24_08_enus_05)
6. Video: First Hop Redundancy Protocol (FHRP) (it_csnetp24_08_enus_06)
7. Video: Virtual IP Addresses (it_csnetp24_08_enus_07)
8. Video: Layer 3 Subinterfaces (it_csnetp24_08_enus_08)
9. Video: Virtual Local Area Network (VLAN) (it_csnetp24_08_enus_09)
10. Video: Network and Switch Interface Configuration (it_csnetp24_08_enus_10)
11. Video: Spanning Tree Protocol (it_csnetp24_08_enus_11)
12. Video: Maximum Transmission Unit (MTU) (it_csnetp24_08_enus_12)
13. Video: Course Summary (it_csnetp24_08_enus_13)
In this video, we will discover the key concepts covered in this course.
[Video description begins] Topic title: Course Overview. Presented by: Aaron Sampson. [Video
description ends]
Hi, my name is Aaron Sampson. Routing and switching are fundamental technologies within
computer networking that enable data to be transmitted efficiently between devices. In this
course, we'll explore static and dynamic routing, as well as network address translation and
port address translation.
https://cdn2.percipio.com/secure/c/1749747693.dbb42d16f86d82c84a8104915b17c59057aae3c8/eot/transcripts/3afd124c-befd-40ef-95a2-745e… 1/21
12/06/2025, 10:44 Course Transcript
Next, we'll see how the First Hop Redundancy Protocol is used to provide nearly uninterrupted
network availability and discover use cases and characteristics of virtual IP addressing and
layer 3 subinterfaces.
Lastly, we'll cover Virtual Local Area Networks including VLAN databases and Switch Virtual
Interfaces and discover considerations when configuring network interfaces including 802.1Q
tagging, link aggregation, speed, and duplex. This course is one of a collection that helps
prepare learners for the CompTIA Network+ N10-009 certification exam.
Upon completion of this video, you will be able to outline static routing and explain how it can be
implemented in smaller networks.
[Video description begins] Topic title: Static Routing. Presented by: Aaron Sampson. [Video
description ends]
In this presentation, we'll provide an overview of static routing, which in short means that the
routes between source and destination networks must be manually configured or
preprogrammed into the router.
And this is commonly done in scenarios where changes to those routes are not expected and/or
the number of routes that must be configured is fairly small. Now this is also known as
nonadaptive routing because as we'll see in our next presentation, its counterpart is dynamic or
adaptive routing, whereby routers can exchange information with each other in order to adapt
to changing conditions.
So, with static or nonadaptive routing, if a change does occur then it has to be dealt with
manually. Any static routes will have to be configured in the router before any communication
can occur. But that typically doesn't present much of an administrative overhead because with
static routing there is often only a single or a preferred route for traffic to reach its destination.
But even if there are several, the routing tables, which are the collections of all routes known to
that router, are fairly small. And because there are only a few routes, this requires less overhead
in terms of processing on the router.
And it also means that no routing protocols are required, because routing protocols are only
required when dynamic routing is used. It's the job of a routing protocol to exchange the route
tables with other routers so that simply doesn't happen with static routing.
Some key considerations, however, include that static routing should typically only be used in
small network environments that don't change often because statically configured routers
don't exchange their routing tables with each other, so any change has to be implemented
manually on each unit. But this also means that less network bandwidth is required for the
routers themselves since they aren't exchanging any data.
And this actually also results in a more secure network because routing information is not
traversing the network, so it can't be picked up through network sniffing. Its drawbacks,
however, include that management of your routers can become time-consuming if your
environment begins to grow and more and more routers begin to appear.
So, again, small networks with few changes are best suited to static routing. And if and/or when
changes do occur, static routing can be prone to errors because any incorrectly entered value
will prevent that router from being able to reach its destination.
And if there is an error, static routing does not allow traffic to be re-routed over an alternate
route. But again, for smaller networks that tend not to change, static routing can present a
simple configuration that is easy to manage with minimal overhead, lower bandwidth
requirements, and higher security.
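As a rough sketch of the idea (not modeled on any particular router OS, and with invented networks and next-hop addresses), a static routing table is just a fixed mapping that an administrator maintains by hand:

```python
import ipaddress

# A static routing table: every route is entered by hand and never changes
# unless an administrator edits this mapping. (All addresses are invented.)
STATIC_ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "192.168.0.2",
    ipaddress.ip_network("10.2.0.0/16"): "192.168.0.3",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.0.1",   # default route
}

def next_hop(destination: str) -> str:
    """Return the next hop for a destination, preferring the most
    specific (longest-prefix) matching route."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in STATIC_ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return STATIC_ROUTES[best]

print(next_hop("10.1.5.9"))   # 192.168.0.2
print(next_hop("8.8.8.8"))    # no specific match, so the default route: 192.168.0.1
```

Nothing here adapts on its own: if a link fails or the topology changes, someone has to edit `STATIC_ROUTES` manually, which is exactly the trade-off described above.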
After completing this video, you will be able to identify how dynamic routing is used to determine
the best path for the data to travel over a network.
[Video description begins] Topic title: Dynamic Routing. Presented by: Aaron Sampson. [Video
description ends]
In this video, we'll provide an overview of dynamic routing, which is in essence the opposite of
the static routing we just covered. It's also known as adaptive routing, whereby routers are
able to adapt to changing conditions automatically without the need for manual updates to be
performed by an administrator.
As such, this makes it a better solution for larger and more dynamic or changing network
environments. Now, while it is more adaptable, it's also more complex to implement than static
routing because dynamic routing creates more possible routes between source and
destination.
And to help visualize that, just imagine city streets on a map or GPS. There is almost always
more than one way to get from point A to point B. But because of its more complex structure,
dynamic routing does require more network bandwidth than static routing because routers
will also exchange their routing tables with each other in essence to learn better ways to route
the traffic.
Now coming back to my analogy of a map or a GPS, dynamic routing not only exchanges
information with other routers, it attempts to determine the best possible path to a destination
using algorithms known as distance vector protocols and link state protocols.
In short, distance vector protocols basically only consider the distance between source and
destination, which in routing refers to the number of routers that must be crossed, and those
are also known as hops. So, if, for example, network 1 is directly connected to network 2 on
either side of the same router, then there is only that one router between them, so one hop. So,
the fewer the hops, the closer the network and therefore the shorter the path. But again,
consider the GPS app.
Most of them will only determine the shortest route to your destination, but that route could
include a congested highway, or a road that's closed due to construction, or one where an
accident has occurred. But some of them are connected to live updates from drivers who have
already reported those delays, so the app can reroute you to a less congested path. Link state
protocols are able to take similar types of delays into consideration.
If any given router along any given path is experiencing issues or delays, then a link state
protocol can determine an alternate route that might be longer in terms of distance but shorter
in terms of time.
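The difference between the two metrics can be made concrete with a toy sketch. The topology and costs below are invented, and real distance vector protocols compute routes through neighbor exchanges rather than the algorithm used here; the sketch only contrasts fewest-hops selection against lowest-cost selection:

```python
import heapq

# Invented topology: each link carries a delay "cost". R1 -> R4 directly is
# one hop but congested (cost 10); R1 -> R2 -> R4 is two hops but faster (2+2).
LINKS = {
    "R1": {"R4": 10, "R2": 2},
    "R2": {"R1": 2, "R4": 2},
    "R4": {"R1": 10, "R2": 2},
}

def best_path(graph, src, dst, hop_count_only=False):
    """Cheapest path over the topology. With hop_count_only=True every link
    costs 1 (a distance-vector-style metric); otherwise the link costs are
    used (a link-state-style metric)."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neigh, weight in graph[node].items():
            step = 1 if hop_count_only else weight
            heapq.heappush(queue, (cost + step, neigh, path + [neigh]))
    return None

print(best_path(LINKS, "R1", "R4", hop_count_only=True))  # fewest hops: direct link
print(best_path(LINKS, "R1", "R4"))                       # lowest delay: via R2
```

The hop-count view picks the direct but congested link, while the cost view routes around it, which mirrors the GPS analogy above.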
Now both distance vector and link state protocols will create routing tables within the router
with an entry for each possible destination, and each of those destinations must specify the
interface or the connection to use to send the packets out.
But not only do they construct these routing tables, as mentioned, they also exchange them
with other routers to which they have direct connections so that they can make the choice as
to what the best possible route is. Now I say direct connections intentionally here because it's
not as though any given router can exchange its tables with every other router on the Internet.
But in a simple example, if router 1 is directly connected to router 2, and router 2 is directly
connected to router 3 in a linear fashion, then router 1 can only see router 2, and router 3 can
only see router 2. But since 2 is in the middle, it can see both 1 and 3. So, 1 can exchange data
with 2, 3 can exchange data with 2, but 2 can exchange data with both of them.
So, router 1 can receive the tables of router 3 through router 2. In other words, in this example,
all tables of all routers can be exchanged with each other. So, if we compare that to static
routing, which only uses a single predefined route to a destination network, dynamic routing
can create multiple routes.
In static routing, the routers themselves do not share any routing tables with each other,
whereas dynamic routers share those tables in order to make better choices in terms of the
best path to use. Changes to the environment can still happen, of course, regardless of which
method you use, but in static routing, those changes must be updated manually on all routers.
But with dynamic routing, those algorithms are used to update the routes and static routing
also does not use any routing protocols. But dynamic routing uses those distance vector and/or
link state protocols to determine the routes and share information.
As such, dynamic routing is much better suited to large and more complex network
environments where changes can be frequent. But it is a somewhat less secure approach
because now you do have routing information traversing the network, which does not happen
with static routing. So, this could make it vulnerable to being intercepted.
And because of its more complex nature, it also requires more network bandwidth and higher
computing power to process the routing data. In addition, dynamic routing often has higher
hardware requirements as compared to static routing, and it also has higher maintenance
requirements.
Not in terms of having to manually update the routes, but the protocols themselves do require
configuration, and updates to both the software and the firmware of the routers themselves
will almost certainly be more frequent. Both of which will also translate into higher costs
overall.
Just as a simple example, if you had a very small and simple network with only a few routes,
almost any standard network server could be used for static routing as long as it has at least
two network interfaces. And it could even be a very old server that has been otherwise
decommissioned, because in that type of environment, static routing would not be a
demanding task.
But by contrast, a large and frequently changing environment would need a dedicated router
with its own proprietary operating system, its own processing and memory, and of course, the
protocols. So, its base cost and its administrative requirements would be significantly higher
than that statically configured server. But like most things in networking, the type of routing
you'll need will depend on the circumstances. But in general, dynamic routing is a more flexible
and adaptable solution that tends to be a better choice for larger and more complex networks.
Through this video, you will be able to describe route selection in networking.
[Video description begins] Topic title: Route Selection. Presented by: Aaron Sampson. [Video
description ends]
In this presentation, we'll examine how route selection works, which refers to the process by
which a router determines the best path for data packets being sent through the networks they
connect. Now this is important because just like a map of a city or a GPS app, there are almost
always multiple routes between any two endpoints. But of course, we always want to get to our
destinations in the shortest amount of time. But it's not always just about time.
Other key goals of route selection include maximizing the reliability of a path and ensuring the
integrity of the data while it's in transit. Now at a general level, the decision-making criteria for
selecting a route will consider factors such as the cost, which refers to the number of hops or
routers that must be crossed; the overall network performance in terms of speed and low
congestion; the reliability of the source information for any given route and whether there are
any redundant paths that can be used in the event of failure; and other cost values such as
latency or delay. But as mentioned, in most cases, there is more than one route between the source and
the destination so routers build and maintain tables that store those routes.
But to determine which is the best choice for any given situation requires evaluating more
specific attributes for each routing entry, which include the administrative distance, the prefix
length, the metric, and, if applicable, any routing policies. The administrative distance is
effectively a reliability value in terms of the trustworthiness of the source where that route
came from, and the lower the value, the more reliable the route. But that value itself isn't just
arbitrarily determined, it considers factors such as the routing protocol that was used to obtain
the information. Now we'll see some examples of how various protocols affect the
administrative distance in just a moment, so I'll come back to this point.
The prefix length refers to the CIDR notation used for the destination network, such as /24 or
/28, with longer prefixes or higher values representing more specific routes, because a longer
prefix means that there are fewer bits used to address hosts on the network that matches the
destination address. For example, if a destination address is 192.168.1.98 and there are two
matching routing table entries, let's say 192.168.1.0/24 and
192.168.1.96/28, then the entry with /28 only leaves four bits to address host systems,
which is only 2 to the power of 4 or 16 total addresses, only 14 of which are usable for host
addressing.
So, there are far fewer hosts on that network, meaning it's more specific, so that route is
preferred. The metric refers to the value such as the hop count or the number of routers that
must be crossed, along with the bandwidth and latency. So, lower metric values are favored
over higher values. Now, policy-based routing allows administrators to create these predefined
policies that can sometimes override or at least influence routing decisions, such as by
specifying certain values such as the source address, the destination address, or the type of
service.
In other words, you might let the router make its own decisions most of the time, but if the
factors being considered match those specified in a policy, then the policy will be used to
select the route. Policies can generally be thought of as exceptions for certain situations. So,
coming back to the administrative distance, as mentioned, how trustworthy the source of a
routing entry is considered to be depends primarily on the protocol that was used to obtain
it. Recall that routers exchange information with each other using routing protocols, so the
particular routing protocol that was used to obtain any given routing entry has an effect on the
reliability of that entry.
So, with static routing, there is no routing protocol at all. In other words, the route was
manually entered into the router by an administrator, or to put that another way, it was not
obtained through routing table exchanges. So, these are considered to be the most
trustworthy, and therefore the default administrative distance for statically entered routes is
1, the most reliable. Routes obtained through the Open Shortest Path First (OSPF) protocol have a
default distance of 110, the Routing Information Protocol (RIP) is 120, and internally learned
Border Gateway Protocol (BGP) routes are 200.
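As a small illustration of the tie-break, preference by administrative distance amounts to picking the lowest value among otherwise-identical candidates. The route strings below are invented; the distance values are the defaults quoted above:

```python
# Default administrative distances as quoted in the video (Cisco-style
# defaults; the BGP value shown is the internally learned default).
ADMIN_DISTANCE = {"static": 1, "OSPF": 110, "RIP": 120, "BGP": 200}

# Two otherwise-identical candidate routes learned from different sources
# (hypothetical networks and next hops):
candidates = [
    ("OSPF",   "10.0.0.0/8 via 192.168.0.2"),
    ("static", "10.0.0.0/8 via 192.168.0.3"),
]

# The lower administrative distance wins, so the static route is preferred.
source, route = min(candidates, key=lambda c: ADMIN_DISTANCE[c[0]])
print(source, "wins:", route)   # static wins: 10.0.0.0/8 via 192.168.0.3
```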
Now these are just a few examples; there are several other routing protocols, and in truth, I
don't really know what criteria are used to determine one protocol's reliability over any other.
The values are all preset as a built-in attribute of the protocol, but they can be changed if
you feel that you prefer one protocol over another. But in the end, if there were two routes in
the table that were the same in every other aspect other than their administrative distance, the
lower distance value would be preferred. So, there are certainly a number of factors to
consider when it comes to selecting a route.
But the good news is, those decisions are made automatically by the router, so it's not as
though you as a routing administrator have to go through all of these entries and prioritize
them yourself. Although it's not a bad idea from time to time to at least examine which entries
are present in your routers so that you can possibly remove any entries that you know are no
longer valid.
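Putting those attributes together, the selection process can be sketched in a simplified form: longest matching prefix first, then lowest administrative distance, then lowest metric. The table entries below are invented, and real routers also apply policies and other details omitted here:

```python
import ipaddress

# Candidate routing entries (all values invented):
# (destination network, administrative distance, metric, next hop)
ROUTES = [
    ("192.168.1.0/24",  120, 2, "10.0.0.1"),   # learned via RIP
    ("192.168.1.96/28", 110, 4, "10.0.0.2"),   # learned via OSPF, more specific
    ("0.0.0.0/0",         1, 0, "10.0.0.254"), # static default route
]

def select_route(destination: str):
    """Pick a route the way the video describes: longest matching prefix
    first, then lowest administrative distance, then lowest metric."""
    addr = ipaddress.ip_address(destination)
    matching = [(ipaddress.ip_network(net), ad, metric, hop)
                for net, ad, metric, hop in ROUTES
                if addr in ipaddress.ip_network(net)]
    # Negating the prefix length makes "longer prefix" sort first in min().
    return min(matching, key=lambda r: (-r[0].prefixlen, r[1], r[2]))

net, ad, metric, hop = select_route("192.168.1.98")
print(f"chose {net} (AD {ad}, metric {metric}) via {hop}")  # the /28 entry wins
```

For 192.168.1.98, all three entries match, but the /28 entry is the most specific, so it wins before administrative distance or metric even come into play.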
Upon completion of this video, you will be able to differentiate between network address
translation (NAT) and port address translation (PAT).
[Video description begins] Topic title: Network Address Translation (NAT). Presented by:
Aaron Sampson. [Video description ends]
In this presentation, we'll examine the process of NAT, or network address translation, and PAT,
or port address translation. Now beginning with NAT, this is the process of translating or
mapping a private IP address used in internal networks to the public IP addresses of the
Internet and vice versa. Now this can be done in what's known as a one-to-one or a many-to-
one relationship, whereby one-to-one maps a single private address to a single public address
and many-to-one typically maps many private addresses to a single public address.
Now PAT is essentially a type of NAT, but it refines the process a bit by including the port
numbers. So, it still maps private addresses to public addresses, but typically, with PAT, we're
usually talking about the many-to-one relationship. So, this is what you would generally see
when you have a private internal network with many clients that needs all of those private
addresses mapped to a single public address.
Now before looking at the processes themselves, let's address why this is even needed in the
first place. So, it simply comes down to the fact that the private addresses that we all use in our
internal LANs are never routed to the public Internet.
So, in order for any of us in a private or internal network to get to the public Internet, there has
to be at least one public IP address which is assigned to the outward-facing interface of our
router. But in terms of communications over TCP/IP, this all comes down to the fact that when
we make a request of any other system, included in the packet is the destination address,
which, of course, is the system we want to reach, and the source address so that the
destination system knows where to send the information back.
So, when we open our browsers from an internal network and we want to reach a website, that
site is, of course, on the public Internet. So, the destination address is therefore a public
address. But a client system on the internal LAN is likely using a private address such as
192.168.1.12 or any other private address.
So, if the packet were to specify that as the return address, the web server on the other side
would never be able to route its packets back to you because its router, and every router in the
world, will see 192.168.1.12 as a private address, and it will never forward those packets out
onto the public Internet. So, in other words, the return address cannot be a private address. So,
that's where NAT comes into play. So, over on the left-hand side of this graphic, we see the
internal private network with private addresses being used such as 192.168.1.12 and
192.168.1.10.
It doesn't really matter what those systems are, but those are the private addresses in use on
this network, and every system on that network would have a default gateway of something
like 192.168.1.1, in this case, which is the inward-facing interface of the router. So, every
system on the LAN can reach the router, but the router then essentially strips off those private
addresses and sends the packet out onto the public Internet with a public return address,
which, in this case, is 23.54.50.122.
So, now that web server sees a public address as the return address and it can now send the
packets back to you once they arrive back at our router. The router also has to remember that
the original request came from my system, we'll say, so that it can forward the packets back to
the requesting system. But by using that mapping, it keeps the Internet separate from the internal
networks. It's also worth mentioning that this process came about in the first place due to the
exhaustion of the public Internet address space. In short, there just aren't enough public
addresses in IP version 4 to address every single interface.
But since private addresses are never routed to the public Internet, we can all use those
private address blocks however we like without interfering with each other. So, my internal
network can have the exact same address scheme as your internal network without issue. So,
even with hundreds or even thousands of internal devices, we still only need a single public
address to have Internet access, which, of course, reduces the number of public addresses that
are required. Now that process of translating addresses is consistent in all implementations of
NAT, but there are a few different types.
Static is typically the aforementioned one-to-one relationship whereby you manually create an
entry that says this public address is always translated to this private address. Dynamic then is
typically the many-to-one scenario whereby there are multiple private addresses mapped to a
single public address, which is certainly more common these days.
And then as mentioned, PAT or port address translation, which is in fact the most common
these days, simply includes the port value to refine the service. So, with PAT, we still see the
same private addresses in the internal LAN of 192.168.1.12 and 192.168.1.10. And again, it
doesn't really matter what those values are.
But then the colon after the address indicates the port number or the application protocol
being used. So, we still need to translate the address as it goes through the router and it will
still translate all the private addresses to the single public address of 23.54.50.122, in this case.
But the table maintained by the router also remembers which system made which request and
over which port. So, any one system could, of course, make multiple requests, but for different
services or applications. So, in this table, we do see two entries that use the same IP address
but different port numbers.
But that's perfectly fine as they pass through the router. No matter what the private address is,
it's still translated to the public address, but it remembers the port number as well so that the
particular type of service or application can be returned back accordingly. So, in terms of
comparing them, NAT is still the process of translating local internal addresses to external
public addresses, which is also still done with PAT. But NAT in and of itself is not concerned
with port numbers, whereas PAT is. Now the entry that says single request for NAT doesn't
mean that it can only work for a single address.
In other words, it's not that one-to-one relationship, rather it refers to the fact that since there
is no port number, each request from each system will always generate a single and separate
entry in the NAT table of the router, whereas PAT can accommodate multiple requests in the
same entry. The IP address would be the same, but a different port number would be used for
each request. In short, NAT requests only use the IP address, whereas PAT requests use the IP
address and the port number.
Now all of that said, in day-to-day conversation, NAT is probably still the term that you'll hear
most often. It tends to just be a little more familiar. But officially, the process that's used in
almost every environment these days would be PAT, because it does include the port number.
But ultimately, they both allow internal systems on a LAN to be able to reach the Internet
without needing a public IP address for every LAN system.
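The PAT table described above can be sketched as a small mapping from internal address-and-port pairs to ports on the one shared public address. All addresses here are invented, and a real gateway tracks far more state (protocols, timeouts, and so on):

```python
import itertools

class PatGateway:
    """Toy PAT table: maps (private IP, private port) pairs to unique public
    ports on one shared public address. (All addresses are invented.)"""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = itertools.count(50000)  # arbitrary starting port
        self.table = {}  # (private_ip, private_port) -> public port

    def outbound(self, private_ip: str, private_port: int):
        """Translate an outgoing flow: same shared public IP for everyone,
        distinguished only by the assigned public port."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = next(self.next_port)
        return self.public_ip, self.table[key]

    def inbound(self, public_port: int):
        """Reverse lookup: which internal host and port does a reply belong to?"""
        for key, port in self.table.items():
            if port == public_port:
                return key

gw = PatGateway("23.54.50.122")
print(gw.outbound("192.168.1.12", 51515))  # ('23.54.50.122', 50000)
print(gw.outbound("192.168.1.10", 51515))  # same public IP, different public port
print(gw.inbound(50001))                   # ('192.168.1.10', 51515)
```

Note that both clients can even use the same private source port: the public port assigned by the gateway is what keeps their flows apart.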
In this video, we will outline how first hop redundancy protocol (FHRP) is used to provide nearly
uninterrupted network availability.
[Video description begins] Topic title: First Hop Redundancy Protocol (FHRP). Presented by:
Aaron Sampson. [Video description ends]
In this presentation, we'll take a look at the First Hop Redundancy Protocol, or FHRP, which
provides us with the means to implement a redundant network topology in terms of routing,
but more specifically for client systems to be able to always have a default gateway available,
which, of course, provides those clients with access to external networks, most notably the
Internet. In short, almost every production system on every network will have a default
gateway configured, but in almost every case, there will only be one default gateway. So if it
were to go down, then all external connectivity would be lost.
So FHRP reduces the risk of failure and helps to ensure business continuity by implementing
multiple routers so that in the event of failure of one of the default gateways, another router is
available to continue to provide external network connectivity. So the First Hop Redundancy
Protocol helps to mitigate the risk of losing connectivity by implementing a virtual IP address
as the default gateway for all clients. Now, just quickly, we'll take a closer look at virtual IP
addresses in our very next presentation.
But in short, it's an address that is not actually assigned to any real interface of any real router,
but the protocol recognizes it as being used purely to accept incoming client requests. But
since there are multiple routers in use as backups with FHRP, it means that unless all routers
were to somehow fail at the same time, then at least one router will still be available. So again,
FHRP uses a virtual IP address as well as a virtual MAC address so that client systems can all be
configured with a single IP address as the default gateway.
And in short, if the default gateway being used were to fail, a backup router will become
available, usually within a matter of seconds, to continue to provide external connectivity,
which of course provides us with the redundancy just mentioned. So while client systems
might experience a short delay due to the failed router, services will still continue, so no actual
disruption occurs. So in terms of putting this all together, first off you have to have multiple
actual routers in use.
And for the sake of simplicity let's just say that there are two of them, R1 and R2. Each of those
routers will have an actual IP address that is unique on the network, just like any other system.
So, let's just say that R1 is using 192.168.1.1 and R2 is using 192.168.1.2. But then the
aforementioned virtual IP address and virtual MAC address is also configured on each router.
But since it's just a virtual IP address, it can be the same on both routers. So, let's say the virtual
IP address is 192.168.1.10. So, then one router is configured as the default router.
Let's say it's R1 in this case, and R2 would be configured as a backup. But it's the virtual IP
address that is used as the default gateway for all client systems. So, as long as R1 is healthy,
requests from clients that are sent to the virtual IP address of 192.168.1.10 are translated to
the actual IP address of R1 and processed accordingly. But if R1 should fail, then R2 has the
same virtual IP. So, the requests are simply translated to 192.168.1.2 and all requests are
assumed by R2 and services can continue without disruption. And again, nothing had to be
changed on the clients themselves, they all still point to the virtual IP address.
Now to finish up, there are a few different varieties of FHRP. The first of which is known as Hot
Standby Router Protocol, which is a proprietary implementation of Cisco, and it's pretty much
what I just described, wherein R2 is standing by ready to assume services should R1 fail. So,
you can kind of think of that as the default implementation, so to speak, but the key
point is that it is a proprietary protocol, so it only works on Cisco routers. For environments not
using Cisco routers, the Virtual Router Redundancy Protocol (VRRP) is an open standard
equivalent that is implemented in the same manner as just described. But if you are using
Cisco routers, then a more recent
implementation has been released known as the Gateway Load Balancing Protocol which
simply makes use of the fact that your redundant routers are there ready and able to handle
routing tasks.
So, rather than just having them as standby routers which only become active in the event of
failure of the default router, they can be used in addition to the default router so that all
requests are balanced evenly across all routers, which makes for less work for any single
router, in turn creating a more efficient routing environment. So, again, just for easy
visualization, in my previous example, R1 will handle 100% of all requests, and R2 only
becomes active if R1 fails, so it handles 0% of all requests. Then if R1 does fail, then R2 handles
100% of all requests.
But with GLBP, R1 and R2 would both actively handle approximately 50% of all requests, and
one of them would only have to kick it up to 100% in the event of either one failing. Ultimately
though, the key aspect of FHRP is to simply ensure redundancy so that services can continue in
the event of a failed router. From that point, implementing GLBP would just be a bonus. So,
that's really your call. But by using FHRP, you can certainly help to ensure the business
continuity of your environment.
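Just to make that traffic split concrete, the standby-versus-load-balanced behavior can be sketched in a few lines of Python. The router names and the two dispatch functions here are illustrative stand-ins, not part of any actual FHRP implementation.

```python
# Toy model of FHRP behavior: an active/standby pair (HSRP/VRRP style)
# versus load balancing across all live routers (GLBP style).
# Router names and selection functions are illustrative, not a vendor API.

def active_standby(routers, request_id):
    """All traffic goes to the first healthy router in priority order."""
    for name, healthy in routers:
        if healthy:
            return name
    raise RuntimeError("no healthy router available")

def load_balanced(routers, request_id):
    """Requests rotate across every healthy router (round-robin)."""
    live = [name for name, healthy in routers if healthy]
    if not live:
        raise RuntimeError("no healthy router available")
    return live[request_id % len(live)]

routers = [("R1", True), ("R2", True)]

# HSRP/VRRP style: R1 handles 100% of requests while it is up.
assert all(active_standby(routers, i) == "R1" for i in range(10))

# GLBP style: R1 and R2 each handle roughly 50%.
targets = [load_balanced(routers, i) for i in range(10)]
assert targets.count("R1") == 5 and targets.count("R2") == 5

# If R1 fails, both schemes fall back to R2 for 100% of requests.
routers = [("R1", False), ("R2", True)]
assert active_standby(routers, 0) == "R2"
assert load_balanced(routers, 3) == "R2"
```

The point of the sketch is just the difference in steady-state behavior: the standby scheme leaves one router idle, while the balanced scheme spreads requests and only concentrates them after a failure.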
Through this video, you will be able to recognize when to use virtual IP addresses.
[Video description begins] Topic title: Virtual IP Addresses. Presented by: Aaron Sampson.
[Video description ends]
In this presentation, we'll examine what's known as a virtual IP address, whereby the IP
address that is visible to client systems is not bound to a single node. In fact, it can be used to
access multiple nodes. So, what does that look like? There are in fact several instances where
this can be useful, but I'll give you a common example known as clustering. In clustering, there
are literally multiple servers that all perform the same service.
Now clustering itself came about primarily for the purposes of fault tolerance, which ensures
that the servers that provide us with very important services are available as much as possible.
So, for example, your company database may be very important to your day-to-day operations,
but if it were to be hosted on a single server and that server were to fail, then obviously the
database goes down.
But in a cluster, there are multiple servers that all host identical copies of the database and
they all replicate with each other. But from the perspective of a client, they don't make a
connection to server 1, server 2, or server 3.
There is a single node that they see that represents the multiple devices of the cluster. So, the
virtual IP address essentially represents all of the systems in the cluster. So, that single IP
address that is visible to the clients simply gets redirected to a real address of a real server
behind the virtual address, so to speak. Now, there are a couple of different implementations of
clustering, but for the sake of argument, let's just say that there are 3 database servers and
they're all up and running and they're all servicing client requests.
This by the way, also allows you to distribute the workload across all three servers and it's
referred to as active clustering. But if any one of the servers were to go down, it's not critical
because the request can be redirected to one of the other surviving servers. So, this also
ensures availability. But again, clients do not see the actual IP addresses that are assigned to
each actual server. The virtual IP address is simply configured on the cluster itself, and the
redirection of requests to the actual servers is all handled automatically by the clustering
software.
But this makes it much easier to configure your clients because they all just connect to the
same virtual IP address. Another implementation of virtual IP addresses is known as content
distribution, whereby a content gateway, which is a single system that can represent multiple
IP addresses on the same subnet, can rotate and distribute addresses among nodes that host a
service.
This system would have a primary or a real IP address bound to its interface card, but it can
also serve many more virtual addresses by using DNS round-robin which points to a pool of
virtual IP addresses as opposed to using the real addresses of the content gateway itself.
Because virtual IP addresses are not bound to any particular system, a content gateway can
take addresses from inactive nodes and distribute those addresses among the remaining live
nodes if any failures are detected because nodes communicate their status with their peers. So,
if a node fails, the issue can be remediated by negotiating which of the remaining nodes will
take over the failed node's virtual IP address. Now again, those are just two examples, but the
idea is still that as far as clients are concerned, they're just using a single name or a single IP
address to access a service.
Whether that IP address is virtual or real as far as the client is concerned is irrelevant. They
just want to be able to connect to an address and receive their service or their content. Which
system services the request is handled behind the scenes, but again, it doesn't matter to the
client.
So, with virtual IP addresses, you gain the ability to configure all of these behind the scenes
mechanisms so to speak, but in any implementation, they enable you to deliver services more
efficiently and more reliably, which, of course, is more beneficial to all parties involved.
After completing this video, you will be able to identify the purpose and characteristics of
subinterfaces.
[Video description begins] Topic title: Layer 3 Subinterfaces. Presented by: Aaron Sampson.
[Video description ends]
In this video, let's take a look at using subinterfaces, which is the process of creating a virtual
interface by converting a physical interface into more than one logical interface. Now, all you
have to do to achieve this is to assign more than one IP address to a single physical adapter, and
this is something that is supported by just about every interface in use these days.
Simply put, a single network adapter can have more than one IP address bound to it, and doing
so effectively creates the subinterfaces. So, as an example, let's look at connecting a router
with only a single physical interface to multiple networks.
Now just to quickly clarify, if you were to go out and purchase what we might think of as an
official network router so to speak, it's going to have at least two interfaces. But you can also
create a router out of any network server that might not be in use anymore, which can save you
some money. So, how does a server with only a single network interface route traffic between
two networks? By simply creating multiple subinterfaces with each subinterface having its own
unique IP address. Now this is going to be a very simple example just to make it easier to
visualize.
So, let's just imagine a small network of 10 computers that are all connected to a single switch.
OK, so if you then use the network address of 192.168.1, you can assign each host an address of, let's just say, 192.168.1.1, .2, .3, and so on up to .10. OK, so there are your 10 addresses all
connected to that switch, and all systems can now see each other. So, now for whatever reason,
you decide that you want to divide this network up into two networks. Well, all you'd have to
do is to assign a different network ID to some of the hosts.
So, let's split it right down the middle, and we'll say that five of the systems will continue to use
the original 192.168.1 network, and the hosts will still use 192.168.1.1 through .5. So, that's network 1. Then for the other systems, you can simply change the network address. So, let's go with maybe 192.168.2. Then the hosts can be .1 through .5 again. So, that's network 2.
But since those are different network addresses, even though all 10 systems are still plugged
into the same switch, if a host on network 1 attempts to communicate with a host on network
2, TCP/IP will evaluate the addresses and determine that they are on different networks. So,
the packets will be sent off looking for a default gateway instead of being sent to the other host
directly, and they won't be able to reach their destination. So, now we have to bring a router
into the mix. So, we go get our router and we plug it into the switch.
But if the router only has a single network adapter and a single IP address of let's say
192.168.1.254, then only the systems on network 1 would be able to see that router. So, we
can assign another address to the same interface of 192.168.2.254. Now that router has
addresses in both networks and all systems in each network will be able to find it. So, by
configuring the subinterface, you get that additional address and that single physical interface
is now visible on two different networks. Now another implementation of using subinterfaces
is known as VLAN traffic routing.
Now this is essentially the same process as I just described with respect to enabling a router to
see more than one network, but this is a little more of a practical scenario. So, with VLAN
routing, you still have many physical systems all connected to each other, typically through
switches. But with respect to managing the traffic and optimizing it, you might determine that
there are simply too many systems on the network as it is, and therefore there is too much
traffic. Or perhaps you need to isolate one group of systems from another for security reasons.
Now of course you can physically separate them if you want to, but you really don't have to.
VLAN traffic routing is quite simply just using software to partition off various sections of the
network. So, once again, all you have to do is to configure the router as if it was a physical
interface like any other, but you still just assign multiple addresses so that it's visible to each
VLAN, because it's the address configuration that separates one network from another, not the
physical connections.
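The addressing logic from the earlier 10-computer example can be sketched with Python's standard ipaddress module. The specific addresses mirror that example; the checks just illustrate the network-membership test that TCP/IP performs.

```python
# A sketch of the subinterface addressing described above, using Python's
# ipaddress module. Two /24 networks share one switch; the router's single
# physical interface carries a subinterface address in each network.
import ipaddress

net1 = ipaddress.ip_network("192.168.1.0/24")
net2 = ipaddress.ip_network("192.168.2.0/24")

host_a = ipaddress.ip_address("192.168.1.3")   # a host on network 1
host_b = ipaddress.ip_address("192.168.2.3")   # a host on network 2

# TCP/IP compares the network portions: these hosts are NOT local to
# each other, so traffic between them must go through a gateway.
assert host_a in net1 and host_a not in net2
assert host_b in net2 and host_b not in net1

# The router's one physical port carries two subinterface addresses,
# so it is directly reachable from hosts on both networks.
subinterfaces = [ipaddress.ip_address("192.168.1.254"),
                 ipaddress.ip_address("192.168.2.254")]
assert any(addr in net1 for addr in subinterfaces)
assert any(addr in net2 for addr in subinterfaces)
```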
But once systems have been separated into different networks, whether it's a physical or a
logical separation, you still have to route between those networks. But now in terms of the
physical cables and the physical connections in the switches themselves, nothing actually has
to be changed. So, as an administrator, you don't have to rewire anything. Ultimately all you
need to get from 1 system to another is a physical pathway, but you can use software and
virtual configurations to direct how that traffic will be moved from VLAN to VLAN. And again,
it ultimately comes down to the addressing.
So, the combination of subinterfaces and the software to define the VLANs allows them to be
logically separated even though they are still physically connected. So, in short, by using
subinterfaces you can reconfigure your network environment entirely at a logical level while
leaving the physical network untouched, which can help to save you time, money, and effort.
In this video, we will outline Virtual Local Area Network (VLAN), including VLAN databases and
Switch Virtual Interface (SVI).
[Video description begins] Topic title: Virtual Local Area Network (VLAN). Presented by: Aaron
Sampson. [Video description ends]
In this presentation, we'll provide an overview of Virtual Local Area Networks, or VLANs,
which allow you to abstract the physical connections of standard LANs, enabling you to create
multiple logical networks without having to physically reconfigure the devices such as the
computers, switches, routers, and cabling.
Now, just before we get into the details, a good analogy for a VLAN is a virtual machine.
Because with virtual machines you still only have one physical host computer, but then
software on that host is used to create additional machines that can all share the physical
resources of the host.
So, for example, the physical host can use some of its memory, disk space, and processing time
for itself, but it doesn't need all of it. So, the virtual machines can be configured to use some of
those same resources, and as long as the host system has enough of those resources, it can host
more and more virtual machines.
Similarly with the VLAN, you still have to start with a physical network with all of the standard
host systems, routers, switches, and cabling. But once everything is physically set up and
connected, software can be used to reconfigure the network so that certain groups of systems
can be isolated from other groups, but again, without having to physically disconnect, move,
rewire, and reconnect any of your devices.
This can make overall network management much easier, particularly for very large and very
dynamic network environments such as cloud service providers and other large datacenters.
Now, when it comes to configuration, VLANs are implemented on switches because these are
the devices that physically connect all of the actual devices in the network. And in short,
VLANs operate at layer two of the OSI model, just as switches do. But when you do begin to
implement VLANs, you need to know about the five main types. The default VLAN can basically
be thought of as the switch itself in its default state.
In other words, if we just use a single 24 port switch as an example, when it powers on, all 24
ports belong to the default VLAN, and any device plugged into any of those 24 ports will be
able to communicate with any other device plugged into any other port.
So, it's not really any different than how the switch would operate at a physical level, but since
any port can be moved off into a different VLAN, the default VLAN just represents its home, so
to speak. Until it does get moved, every switch that supports VLANs will have a default VLAN,
and it can't be removed nor renamed.
The management VLAN is configured to access the management capabilities of a switch, such
as traffic logging and monitoring, and it ensures that bandwidth for management will be
available even when user traffic is high.
Any VLAN could be defined as the management VLAN, although many switches come pre-configured with VLAN 1 as the management VLAN. If so, it's generally recommended to move management to a different VLAN for security. The native VLAN identifies traffic coming from each end of the trunk links, which
themselves are the physical connections between switches or between switches and routers.
A native VLAN is allocated only to a trunk port, and it places any untagged traffic, which is
traffic that does not come from any other VLAN or from a device that doesn't support VLANs,
onto the native VLAN. The data VLAN is used to carry user-generated traffic, keeping it separate from the rest of the network. It's also sometimes referred to as the user VLAN because it's used only for user-generated data, not for management traffic or voice data. And on that note, the voice VLAN is configured
to carry voice traffic only and this helps to preserve bandwidth and improve voice-over IP
quality because it's often given a higher transmission priority over other network traffic.
Now, when defining your own VLANs, there are two basic options, static VLANs and dynamic
VLANs. As you might guess, static means that you manually assign any given physical port of a
switch to any given VLAN, and it will remain that way until reassigned. With dynamic VLANs, software or other intelligent tools are used to assign ports to VLANs automatically. Dynamic assignments are also referred to as usage-based because the port assignments are often based on the type of traffic or the device creating the traffic.
For example, a port might be assigned to a VLAN based on the identity of the device as
indicated by a security certificate or by the network protocols in use, and as such, a single port
could be associated with multiple dynamic VLANs depending on its state at any given time.
Static VLANs can of course require a little more manual configuration, but they tend to be
more secure because a port assignment can't be changed without an administrator's
permission or knowledge, whereas dynamic VLANs can require less administrative overhead.
But the chance of ports being changed automatically can result in lower security, particularly if
the primary reason for configuring your VLANs was to isolate systems from each other.
As for their advantages, they offer simplified management and improved security in that they
don't require in depth monitoring and they allow you to divide systems and devices into
multiple isolated LAN segments all through software configuration. They're much more flexible
than physical networks because they can be configured based on port, protocol or other subnet
criteria, and without any concern for the actual location of a device or the physical connections
such as the cabling. Once all devices are physically connected to each other, their VLAN
assignments are all done through software, so changes to the network do not require changing
the location of your devices or rewiring your switches.
And of course, all of this simply helps to save both time and costs, particularly in those very
large and dynamic networks. Now, that all said, because you are configuring the network using
software, there needs to be a means to store all of that configuration data, which is where the
VLAN database comes in. It does exactly that. For instance, it stores the VLAN IDs for every
VLAN you create along with the name for each one and various properties such as the
maximum transmission unit or MTU.
Because each VLAN can be configured differently, the VLAN database is typically stored in a file called vlan.dat, which itself is stored in static, or non-volatile, memory on the
switch so that it will retain its data if the switch is powered off. But this raises the question of
how many actual switches you're using for your VLANs. Because since VLANs are configured
with software and don't rely on the physical ports to determine connectivity, any given VLAN
can span more than one switch.
Now, in a very simple configuration wherein you might only use a single switch to create one or
more smaller VLANs, you would only need a single VLAN database on that switch. But that
would be pretty rare in a production environment. So, on their own, you would have to
configure every VLAN on every switch so that they would all know the same VLAN
configuration. Now, that might seem like a lot of effort, but it can still work effectively if your
environment is still relatively small. But in larger environments, you can avoid this by
implementing what's known as VTP, or the VLAN Trunking Protocol, which implements a
centralized VTP server.
Then VLAN configuration data is distributed to all switches in the network, which reduces the
need to configure the same VLANs everywhere. But that said, VTP is a proprietary protocol of
Cisco, so if you aren't using Cisco switches you might need to still use manual configurations,
although there are alternative protocols used by other vendors, such as the Multiple VLAN Registration Protocol, or MVRP.
Another consideration is that once you configure even 2 separate VLANs, then as far as the
host systems on each VLAN are concerned, they can no longer reach the host of the other
VLAN because they are effectively on different networks.
So, in order to communicate across networks, you need to operate at layer three of the OSI
model or the network layer, which is where we find routers. So, you can imagine then that if
hosts on two separate VLANs do need to communicate with each other, then we would also
have to connect a router to the switch. Now you could do that, but many switches these days
are what's known as layer 3 switches, which simply stated means that they can both switch and
route.
In fact, for many of you at home with any kind of high-speed Internet, it's quite likely that your
home router is in fact a layer 3 switch because it has physical switch ports on it so that multiple
physical devices could be plugged into it and communicate with each other just like any other
LAN.
But it also connects that LAN to your ISP and therefore the Internet, which is of course a
different network. So, it's both switching and routing. So, to enable routing in a VLAN, a
Switched Virtual Interface, or SVI, is the routing interface that represents the IP address space
for any VLAN connected to that interface.
Now recall that at a physical level, we're still dealing with a switch which simply has physical
ports into which you plug your cables. In other words, there is no physical interface to which
this address is assigned. This is all done through internal processing, but at layer 3.
So, for example, if you have VLAN 1 and VLAN 2, you can create SVI 1 and SVI 2, each with
different IP addresses, one for each network, just like the two interfaces of an actual router.
Then enable routing between those SVI addresses so that clients in each VLAN can now
communicate with each other without needing an actual router.
So, there are certainly a number of considerations to factor in when it comes to using VLANs,
but they can certainly reduce the administrative overhead of switching and routing,
particularly in those large and dynamic environments. But if your network is smaller and much
less dynamic, then you might not need to use VLANs at all. So, like many things, the choice of
using VLANs or not comes down to the circumstances.
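A toy model can tie together two of the ideas above: a VLAN database storing per-VLAN properties, and port membership enforcing isolation until layer 3 routing is brought in. The VLAN IDs, names, and port assignments below are made up for illustration and are not the contents of a real vlan.dat file.

```python
# A toy VLAN database plus a port-membership check, modeling the layer 2
# isolation described above. IDs, names, and MTUs are made-up examples.

vlan_database = {
    1:  {"name": "default", "mtu": 1500},
    10: {"name": "users",   "mtu": 1500},
    20: {"name": "voice",   "mtu": 1500},
}

# Each physical port is statically assigned to one VLAN.
port_vlan = {1: 10, 2: 10, 3: 20, 4: 20}

def can_deliver(src_port, dst_port):
    """A frame is switched between ports only within the same VLAN."""
    return port_vlan[src_port] == port_vlan[dst_port]

assert can_deliver(1, 2)      # both on VLAN 10: the frame is delivered
assert not can_deliver(1, 3)  # VLAN 10 -> VLAN 20: needs layer 3 routing
assert vlan_database[10]["name"] == "users"
```

Crossing from VLAN 10 to VLAN 20 is exactly the case where an SVI per VLAN (or an external router) would be needed.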
Upon completion of this video, you will be able to identify considerations for configuring switch
interfaces.
[Video description begins] Topic title: Network and Switch Interface Configuration. Presented
by: Aaron Sampson. [Video description ends]
In this presentation, we'll examine some considerations when configuring switch interfaces,
beginning with what's referred to as 802.1Q port tagging. Now, just quickly, 802.1Q is a standard from the 802 family maintained by the IEEE, which itself is the Institute of Electrical and Electronics Engineers, who've been tasked with overseeing the development of many different internetworking standards. The number after the dot refers to various networking technologies and methods. For example, anything to do with .3 is Ethernet, and anything to do with .11 is wireless. Then the letter after the number typically refines these methods even more. So, 1Q deals with VLANs, and in particular
particular tagging Ethernet frames so that switches can identify and segregate traffic by the
VLAN to which those frames belong. In simpler terms, any single switch could have physical
ports that belong to multiple different VLANs, so the tag simply indicates that this particular
frame belongs to VLAN 1, and this one to VLAN 2, etcetera, etcetera.
Tagging is in essence, what allows VLANs to share the same physical switches while still
isolating traffic to their individual VLANs, which in turn ensures that traffic on VLAN 1 doesn't
find its way onto VLAN 2 and vice versa. Yet we can do this without having to physically
reconfigure the switches themselves, including any cabling. So, as you're configuring your
switch interfaces in terms of your VLAN implementation, you simply assign the VLAN
membership for each port of the switch. So, for example, port 5 of switch one might belong to
VLAN 10, and port 8 of switch one might belong to VLAN 20.
Now, for any traffic that remains within that switch, the tags aren't necessary. So, let's say that
port 12 of switch 1 is also on VLAN 10. If it needs to communicate with port 5, then they're
already both in the same VLAN, and traffic is simply moved from port to port based on the
VLAN membership. But if port 5 of switch 1 needs to communicate with port 5 of switch 2, but
they're still also within the same VLAN, then it's the trunk port that connects the two switches
that applies the tag.
Because switch 2 needs to know the VLAN on that switch to which the frame should be delivered.
The trunk port of switch 1 already knows that it originated in VLAN 10, so it simply applies the
tag as it passes through the trunk port. Then, when it's received by switch 2, it ensures that the
frame is delivered to the appropriate VLAN on that switch. Now, not only does tagging provide
you with the ability to isolate traffic, that isolation itself translates into increased security
because a tagged frame cannot be delivered to any VLAN other than the one identified by the
tag.
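Structurally, that tag is just four extra bytes inserted into the Ethernet header: a 2-byte TPID of 0x8100 that marks the frame as tagged, followed by a 2-byte TCI carrying a 3-bit priority, a drop-eligibility bit, and the 12-bit VLAN ID. The values in this sketch are examples; it shows the byte layout, not vendor switch code.

```python
# A sketch of the four bytes an 802.1Q tag adds to an Ethernet frame:
# a TPID of 0x8100 followed by a TCI carrying priority (PCP), drop
# eligibility (DEI), and the 12-bit VLAN ID. Values here are examples.
import struct

def build_dot1q_tag(vlan_id, priority=0, dei=0):
    """Pack an 802.1Q tag: TPID (0x8100) + 16-bit TCI."""
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

tag = build_dot1q_tag(vlan_id=10, priority=5)
assert len(tag) == 4                     # the tag is exactly four bytes
assert tag[:2] == b"\x81\x00"            # TPID marks the frame as tagged
assert (int.from_bytes(tag[2:], "big") & 0x0FFF) == 10  # VLAN ID recovered
```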
Tags also facilitate segmented or smaller networks, which are always easier to manage and offer
better traffic control. They help to optimize bandwidth and reduce latency. And they can
reduce the complexity of your network management by allowing you to configure consistent
policies and configurations across multiple VLANs. Next is link aggregation, which allows you
to combine multiple Ethernet links into a shared logical link, typically between two network
devices, such as connecting one switch to another.
Once you have multiple connections configured as a single link, they're referred to as a Link
Aggregation Group, or LAG. And you can configure more than one LAG on any given switch.
And the LAG itself can also be included in any VLAN. And any given LAG can have more than
two connections, but the maximum number of connections per LAG will depend on the device
itself. Some devices also support the Link Aggregation Control Protocol, which can help to
prevent errors in the setup process of your link aggregations. But if the switch is unmanaged,
then link aggregation is not supported at all.
As for some key features, link aggregation enables more efficient use of resources by load
balancing the traffic across the aggregated links, which increases availability and reliability if
one of the physical links should go down because there would likely be other members that are
still healthy. Aggregated links can help to optimize and improve bandwidth, and as such it may
help to save on costs as compared to having to purchase new equipment if an upgraded
connection is required. In terms of configuration, each participating device must support link
aggregation and all devices must have the same settings for port speed, duplex mode, flow
control, and MTU size.
If the switches also have VLANs configured, then the member connections of the LAG must
belong to the same VLAN. And you should avoid connecting the two devices with more than
one cable until you've completed the LAG configuration. Because if you do so on a device that
doesn't have loop prevention, such as the Spanning Tree Protocol, then you could end up
creating a network loop. So, ensure that your LAG is configured and functioning properly
before adding any redundant cable connections.
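One common way a LAG load-balances is to hash each flow's addresses so that a given conversation always sticks to one member link, skipping any member that is down. The CRC-based hash and port names below are illustrative choices for a sketch, not any vendor's actual algorithm.

```python
# A sketch of flow-based load balancing across LAG members: hash the
# src/dst pair so each conversation sticks to one link, and skip any
# member that is down. The hash and link names are illustrative choices.
import zlib

def pick_member(members, src, dst):
    """Choose one healthy LAG member for a given src/dst conversation."""
    live = [link for link, healthy in members if healthy]
    if not live:
        raise RuntimeError("all LAG members are down")
    flow_hash = zlib.crc32(f"{src}-{dst}".encode())
    return live[flow_hash % len(live)]

members = [("port1", True), ("port2", True)]

# A given conversation always hashes to the same member link.
first = pick_member(members, "10.0.0.1", "10.0.0.2")
assert all(pick_member(members, "10.0.0.1", "10.0.0.2") == first
           for _ in range(5))

# If a member fails, its flows move to a surviving link automatically.
members = [("port1", False), ("port2", True)]
assert pick_member(members, "10.0.0.1", "10.0.0.2") == "port2"
```

Keeping a conversation on one link avoids reordering its packets while still spreading different conversations across the group.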
If you're using one, then other key configuration options might include specifying a voice VLAN, which allows for a dedicated path for voice packets to reduce contention and improve call quality. But this also relies on using Quality of Service, or QoS, which is able to distinguish
different types of packets and can then assign different priorities based on the type. So, by
prioritizing voice traffic, QoS can reduce latency and improve the clarity of your calls. Using a
voice VLAN also requires that managed switches be used which are able to distinguish
between different types of traffic.
And the aforementioned QoS protocol must also be supported and enabled. And your voice
VLAN must also have a specific VLAN ID to ensure that voice traffic remains isolated on that
VLAN. The speed of an interface can also be configured and should be consistent for both ends
of any given connection. Now, by default, every interface would be set to the fastest speed supported by the switch, so in most cases you likely won't have to change this. But if
there are still legacy devices connected to the switch that run at slower speeds, such as 100
megabits per second, you should adjust accordingly.
And again, ideally, just make sure that each end of any given connection is set to the same
speed, whether it's 100 megabits, 1 Gigabit, 10 Gigabit, etc. But that said, many switches will
also have an auto-negotiation setting whereby devices will automatically poll each other to set
the speed accordingly, reducing or even eliminating the need for manual adjustments. The
duplex mode will also likely be set by default. And duplex itself refers to the ability for traffic to
flow in both directions. But Half Duplex means that each system must take its turn, so to speak.
So, A can only talk to B while B is listening, much like a phone call.
Whereas with Full Duplex mode, A and B can both talk to each other at the same time. So, Full
Duplex mode is preferred because it's more efficient and it's likely set by default. But again,
some legacy equipment might not support full duplex. Now, if either the speed or the duplex
mode don't match on either end, then it's not as though communications won't work at all, but
it will have an impact on overall performance.
A speed mismatch could result in one device flooding the other, so a lot of retransmissions
could occur, which reduces throughput and takes away from available bandwidth. And a duplex
mismatch could result in a high volume of collisions, which can significantly reduce the speed of
the network again due to having to retransmit packets far too often. But collisions can have a
broader effect, because if they do happen too often, they can cause all systems on the VLAN to
stop transmitting until the network quiets down, then everyone can start communicating
again.
But if the collisions just keep reoccurring, then the same thing will happen over and over again.
So, again, ideally each end of any connection should have consistent interface settings. So,
there are certainly a number of settings to bear in mind when configuring your switch
interfaces, but it's worth reiterating that many of them will be set to the correct values by
default. But it's always a good idea to verify and to also review the settings periodically,
particularly if your environment is very large and dynamic, to ensure that mismatches don't
occur.
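The outcome of auto-negotiation can be modeled as picking the best capability both ends have in common. This is a toy model: the capability lists are examples, and real 802.3 auto-negotiation exchanges link code words on the wire rather than comparing Python tuples.

```python
# A toy model of auto-negotiation: each end advertises the speed/duplex
# combinations it supports, and the link settles on the best common one.
# Capability lists are examples, not a real 802.3 negotiation exchange.

def autonegotiate(caps_a, caps_b):
    """Pick the highest common (speed_mbps, full_duplex) capability."""
    common = set(caps_a) & set(caps_b)
    if not common:
        return None  # no common mode: the link won't come up cleanly
    # Prefer the highest speed, then full duplex over half duplex.
    return max(common, key=lambda cap: (cap[0], cap[1]))

modern = [(100, False), (100, True), (1000, True), (10000, True)]
legacy = [(100, False), (100, True)]

# Two modern devices settle on 10 Gigabit, full duplex.
assert autonegotiate(modern, modern) == (10000, True)

# A legacy device drags the link down to 100 megabit, full duplex --
# slower, but both ends agree, so no speed or duplex mismatch occurs.
assert autonegotiate(modern, legacy) == (100, True)
```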
Through this video, you will be able to describe how the spanning tree protocol is used to prevent
looping within a network topology.
[Video description begins] Topic title: Spanning Tree Protocol. Presented by: Aaron Sampson.
[Video description ends]
In this presentation, we'll describe how the Spanning Tree Protocol can be implemented on switches to mitigate a possible issue known as looping, which, in short, means that a packet just loops around endlessly without ever reaching its intended destination. Now, what is a loop in switching, and how might one occur?
Well, it depends on how the switches are implemented, but in the simplest example, let's just
imagine that you have two switches, and they're connected to each other by a single cable.
That will certainly work, but a single cable also means that you have a single point of failure if
that cable should fail. So you can add a second cable for redundancy. Now if one cable fails, no problem, you have the redundant one, so services can continue.
But as long as both cables are healthy, that redundancy also creates a loop. So as an example,
imagine host 1 is plugged into switch 1, switch 1 is plugged into switch 2 with two cables, and
host 2 is plugged into switch 2. So if host 1 needs to reach host 2, it will first send out an ARP or
Address Resolution Protocol request to obtain the MAC address of host 2.
That's how systems find each other in switching. But ARP requests are broadcasts, so like any
broadcast, the traffic is sent out over every other interface other than the one on which it was
received, to ensure that host 2 will hear it.
Now host 2 will hear it, but so will the port where the second cable is attached. So when switch
2 receives the broadcast, it sends it back out over that redundant cable back to switch 1, which
sends it back out over every other port, including the original cable, because the broadcast was
received from switch 2 over the redundant cable.
So since it sends it back out over every port, it sends it out the original cable which connects it
back to switch 2, and the loop is created. Now that's only one example, and ARP requests do
settle down, so to speak, once the switch becomes aware of all of the MAC addresses of the
systems that are connected to it.
But there are many other Ethernet frames that also use broadcasts, so even if ARP requests
stop, other protocols will still generate broadcast traffic. And on top of that, Ethernet frames
also do not have a TTL value or a time to live, meaning that if a loop does occur, it will loop
endlessly and the switches will quickly become flooded. So how then does the Spanning Tree
Protocol mitigate loops when they arise?
Well, in short, it blocks certain interfaces when necessary to create a loop-free topology. Now
obviously that's simple enough to say, but how does it determine which interface to block?
Well, there are a few components to that process, but for starters, each switch with spanning
tree enabled will first send special frames to all other switches to which they're connected,
called a Bridge Protocol Data Unit, or BPDU, which itself contains two values: the MAC address
of the port where the interconnection cables are attached, and a priority.
These two values make up what is then treated as a bridge ID. Now the priority value is actually
the same on all switches by default, but it can be changed. But even if they're left at their
default value, which, by the way, is 32,768, then the MAC address is still used to ensure that
the bridge ID is unique. The switch with the lowest bridge ID, considering both the priority and
the MAC address, becomes what's known as the root, and all other switches are non-root.
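As a rough illustration, the election just described can be sketched in a few lines of Python. This is only the comparison logic, not a real STP implementation, and the switch names and MAC addresses are hypothetical:

```python
# Sketch of STP root bridge election: the bridge ID is the (priority, MAC)
# pair, and the switch with the numerically lowest pair becomes the root.

DEFAULT_PRIORITY = 32768  # default bridge priority on most switches

# Hypothetical switches, matching the example in the text.
switches = [
    {"name": "switch1", "priority": DEFAULT_PRIORITY, "mac": "aa:aa:aa:aa:aa:aa"},
    {"name": "switch2", "priority": DEFAULT_PRIORITY, "mac": "bb:bb:bb:bb:bb:bb"},
    {"name": "switch3", "priority": DEFAULT_PRIORITY, "mac": "cc:cc:cc:cc:cc:cc"},
]

def bridge_id(sw):
    """Compare priority first; the MAC address breaks any tie."""
    return (sw["priority"], sw["mac"])

root = min(switches, key=bridge_id)
print(root["name"])  # prints "switch1": with equal priorities, the lowest MAC wins
```

Lowering a single switch's priority value (for example to 4,096) would let it win the election regardless of its MAC address, which is how administrators typically pin the root to a specific switch.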
So let's imagine now that there are three switches, and to help visualize this, imagine them
arranged in a triangle so each switch is connected to the other two. Now we don't need
redundant cables between all switches because there is still a redundant path from any switch
to any other switch. For example, if switch 1 is at the top of the triangle, switch 2 is on the
bottom left, and switch 3 at the bottom right, then switch 1 can reach switch 2 directly. But if the
connection between them were to fail, it can still reach switch 2 by going through switch 3.
But once again, that redundancy still creates the possibility of a loop. So again, if all priorities
are left the same, then the MAC address breaks the tie. So let's just use simple figures and say
that the MAC address of switch 1 is AA, switch 2 is BB, and switch 3 is CC. As such, switch 1 has
the lowest bridge ID, and it becomes the root while 2 and 3 are non-root. With the root
established, that switch will label its ports that are connected to switch 2 and switch 3 as
designated ports.
And all non-root switches will first determine the shortest path to the root, which in this
example are the direct connections up to the root as opposed to a path that goes through the
other switch, and they will label those connections as root ports. Now I know that this might be a
little tricky to visualize. So in summary, switch 1 is the root at the top of the triangle in this case,
and its two ports that connect to switches 2 and 3 are called designated ports since it's the root.
Switch 2 at the bottom left finds the shortest path to the root which is straight up as opposed
to through switch 3. So that port is labeled as the root port. Then switch 3 does the exact same
thing. So with all of that in place, if a loop is detected, the spanning tree protocol on switches 2
and 3 will first determine that they both have higher bridge IDs than the root, and that the
packets traveling between 2 and 3 are doing so over a connection that isn't a root port.
Because recall that the root ports only go up to switch 1, one of those switches will block that
connection, thereby breaking the loop.
Now, just to finish up, recall that this only happens if a loop is detected. If switch 2 needs to
communicate directly with switch 3, it can certainly still do so. As long as it doesn't generate a
loop, then every switch will always try to take the shortest path. So again, spanning tree is only
there to deal with loops when they occur, otherwise, traffic will pass as directly as possible
through your switches.
After completing this video, you will be able to outline maximum transmission unit (MTU) and
jumbo frames.
[Video description begins] Topic title: Maximum Transmission Unit (MTU). Presented by: Aaron
Sampson. [Video description ends]
In this video, we'll provide an overview of the maximum transmission unit, or MTU value, along
with variations to that unit known as jumbo frames. But to get started, the MTU itself is pretty
much as its name indicates. It's a measurable value that indicates the largest acceptable packet
size that any given network connected device will accept.
However, if packets should arrive that are larger than the specified MTU value, they don't just
get discarded. They can be broken up into smaller pieces so that each sub packet, if you will, can
still be transmitted.
Then those sub packets are reassembled at the destination into their original format, which
helps to ensure greater compatibility in terms of the different types of systems that might be
on any given network. Now the MTU value itself is expressed in bytes, not bits, and in most
cases you'll see a default value of 1500 on most network interfaces.
So, with respect to packets that do exceed the MTU, as mentioned, they can be broken up into
smaller chunks so that they can still be transmitted, which itself is a process known as
fragmentation. Now any sender and receiver that are communicating can check with each
other to determine the MTU setting for each system and implement fragmentation if
necessary. But in many cases it won't just be something like computer 1 being directly
connected to computer 2.
The MTU value also has to be considered for any and all routers, switches, and network servers
that might be in between those two systems. So, let's just say that in order for computer 1 to
reach computer 2, it has to cross two switches and two routers.
Now, if neither computer is aware of the MTU setting of those interconnecting devices, and
assuming each computer is set to the default value of 1500 bytes, then of course their packets
will be sent out using that size. But if for some reason router 2 has an MTU of 1400, then it'll be
router 1 that will fragment the packets before sending them.
Because recall that any two systems can check with each other to verify their MTU values. In
other words, every system along the communication pathway does not need to know the MTU
value of every other system. It can be negotiated whenever needed. That said, there are some
cases where packets can't be fragmented, in which case they will not be delivered if the MTU
value is exceeded.
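To make the fragmentation arithmetic concrete, here's a small Python sketch. It's deliberately simplified: it assumes a 20-byte IPv4 header with no options, and it only computes fragment payload sizes rather than building real packets. The multiple-of-8 constraint comes from how IPv4 encodes fragment offsets in 8-byte units:

```python
# Sketch of IPv4 fragmentation arithmetic (simplified model).
# Every fragment except the last must carry a payload that is a
# multiple of 8 bytes, because the fragment offset field counts
# in 8-byte units.

IP_HEADER = 20  # assumed IPv4 header size, no options

def fragment_sizes(packet_size, mtu):
    """Return the payload sizes a packet would be split into on a link."""
    payload = packet_size - IP_HEADER
    max_payload = (mtu - IP_HEADER) // 8 * 8  # round down to a multiple of 8
    sizes = []
    while payload > 0:
        chunk = min(payload, max_payload)
        sizes.append(chunk)
        payload -= chunk
    return sizes

# A 1500-byte packet crossing a 1400-byte-MTU link becomes two fragments.
print(fragment_sizes(1500, 1400))  # prints [1376, 104]
```

Note that each fragment also gets its own 20-byte IP header on the wire, so fragmentation adds overhead as well as reassembly work at the destination, which is one reason it's best avoided where possible.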
In addition, there might be instances where a "Don't Fragment" flag is set in the IP header of
the packets, in which case of course those packets won't be fragmented either. Now, this is
something that would be set by an application or another protocol. In other words, you aren't
manually applying this value yourself.
But if any packets with the "Don't Fragment" flag are received by a device that has a smaller
MTU value than the size of the packets being received, it will return a message back to the
sender using ICMP, or the Internet Control Message Protocol, informing the sender that the
packets couldn't be accepted because they were too large and couldn't be fragmented.
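The decision a router makes here can be sketched as a simple three-way check. This is an illustrative model only; the function name and return strings are hypothetical, not a real router API:

```python
# Sketch of the per-link decision a router makes for an outgoing packet
# (simplified model, names are illustrative).

def forward_action(packet_size, df_flag, link_mtu):
    """Return what a router would do with a packet on a given link."""
    if packet_size <= link_mtu:
        return "forward"
    if df_flag:
        # Can't fragment: drop the packet and report back to the sender
        # with an ICMP "fragmentation needed" message.
        return "drop + ICMP fragmentation-needed"
    return "fragment"

print(forward_action(1200, True, 1400))   # prints "forward"
print(forward_action(1500, False, 1400))  # prints "fragment"
print(forward_action(1500, True, 1400))   # prints "drop + ICMP fragmentation-needed"
```

That ICMP error is also the feedback mechanism that path MTU discovery relies on: the sender receives it, lowers its packet size, and retries.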
So, you could at least be made aware of the issue. Now, on the opposite side is the jumbo frame,
which is any frame that is larger than the standard MTU of 1,500 bytes. They commonly have
an MTU value of 9,000 bytes, and they came about largely because the 1,500 byte specification
was determined quite some time ago, and modern internetworking devices are simply much
faster in terms of processing.
So, by allowing jumbo frames, much more data can be sent in fewer packets, which can help to
improve performance, particularly on network backbones running at 1 gigabit per second or
higher.
In fact, for the highest speed networks you can even implement super jumbo frames which
allow MTU values even higher than 9,000 bytes with a theoretical value in IP version 4 of up to
65,535 bytes, but you would rarely see a value that high in practice.
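As a back-of-the-envelope illustration of why jumbo frames help, here's a Python sketch comparing packet counts for a one-gibibyte transfer. The per-packet overhead figure of 40 bytes (a 20-byte IP header plus a 20-byte TCP header) is an assumption for illustration, and the numbers are approximate:

```python
# Rough comparison of packet counts at the standard 1500-byte MTU
# versus 9000-byte jumbo frames for a bulk data transfer.
import math

def packets_needed(data_bytes, mtu, header_overhead=40):
    """Assume each packet carries (mtu - header_overhead) bytes of data."""
    payload_per_packet = mtu - header_overhead
    return math.ceil(data_bytes / payload_per_packet)

one_gib = 1024 ** 3
standard = packets_needed(one_gib, 1500)  # roughly 735,000 packets
jumbo = packets_needed(one_gib, 9000)     # roughly 120,000 packets
print(standard, jumbo, round(standard / jumbo, 1))
```

Roughly six times fewer packets means six times fewer headers to build and six times fewer per-packet processing decisions for every device along the path, which is where the performance gain comes from.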
Ultimately, in most scenarios, the default MTU of 1,500 bytes is likely going to be fine for
almost all transmissions, so unless you encounter a specific reason to change it on any portion
of a communications link, you can generally leave it set at its default. But it certainly could be
worth testing out the implementation of jumbo frames if your network is very fast.
In this video, we will summarize the key concepts covered in this course.
[Video description begins] Topic title: Course Summary. Presented by: Aaron Sampson. [Video
description ends]
So, in this course, we've examined characteristics of routing and switching technologies. We
did this by exploring static and dynamic routing, route selection, NAT and PAT, the First Hop
Redundancy Protocol, virtual IP addressing, layer 3 subinterfaces, VLANs, SVIs, network and
switch interface configuration, the spanning tree protocol, and the maximum transmission
unit. In our next course, we'll move on to explore how to select and configure wireless devices
and technologies, as well as important factors of physical installations.