OSI Reference: CCNA Study Notes
OSI Reference
1. Identify and describe the functions of each of the seven layers of the OSI reference model.
Physical Layer
The physical layer defines the electrical, mechanical, procedural, and functional specifications for
activating, maintaining, and deactivating the physical link between communicating network
systems. Physical layer specifications define such characteristics as voltage levels, timing of
voltage changes, physical data rates, maximum transmission distances, and the physical
connectors to be used.
Data Link Layer
The data link layer provides reliable transit of data across a physical network link. Different data
link layer specifications define different network and protocol characteristics, including the
following:
Physical addressing -- Physical addressing (as opposed to network addressing) defines how
devices are addressed at the data link layer.
Network topology -- Data link layer specifications often define how devices are to be physically
connected (such as in a bus or a ring topology).
Error notification -- Error notification involves alerting upper layer protocols that a transmission
error has occurred.
Sequencing of frames -- Sequencing of data frames involves the reordering of frames that are
transmitted out of sequence.
Flow control -- Flow control involves moderating the transmission of data so that the receiving
device is not overwhelmed with more traffic than it can handle at one time.
The Institute of Electrical and Electronics Engineers (IEEE) has subdivided the data link layer into
two sublayers: Logical Link Control (LLC) and Media Access Control (MAC).
The LLC sublayer (defined in the IEEE 802.2 specification) manages communications between
devices over a single link of a network.
The MAC sublayer manages protocol access to the physical network medium.
Network Layer
The network layer provides routing and related functions that allow multiple data links to be
combined into an internetwork. This is accomplished by the logical addressing (as opposed to the
physical addressing) of devices. The network layer supports both connection-oriented and
connectionless service from higher-layer protocols.
Transport Layer
The transport layer implements reliable internetwork data transport services that are transparent
to upper layers. Transport layer functions typically include the following:
Flow control -- Flow control manages data transmission between devices so that the transmitting
device does not send more data than the receiving device can process.
Multiplexing -- Multiplexing allows data from several applications to be transmitted onto a single
physical link.
Virtual circuit management -- Virtual circuits are established, maintained, and terminated by the
transport layer.
Error checking and recovery -- Error checking involves various mechanisms for detecting
transmission errors. Error recovery involves taking an action (such as requesting that data be
retransmitted) to resolve any errors that occur.
Transport layer implementations include the Transmission Control Protocol (TCP), the Name
Binding Protocol (NBP), and the OSI transport protocols.
Session Layer
The session layer establishes, manages, and terminates communication sessions between
presentation layer entities. Communication sessions consist of service requests and service
responses that occur between applications located in different network devices. These requests
and responses are coordinated by protocols implemented at the session layer. Examples of
session layer implementations include the AppleTalk Zone Information Protocol (ZIP) and the
DECnet Session Control Protocol (SCP).
Presentation Layer
The presentation layer provides a variety of coding and conversion functions that are applied to
application layer data. These functions ensure that information sent from the application layer of
one system will be readable by the application layer of another system. Some examples of
presentation layer coding and conversion schemes follow:
Common data representation formats -- The use of standard image, sound, and video formats
(such as JPEG, MPEG, and GIF) allows the interchange of application data between different
types of computer systems.
Common data compression schemes -- The use of standard data compression schemes allows
data that is compressed at the source device to be properly decompressed at the destination.
Common data encryption schemes -- The use of standard data encryption schemes allows data
encrypted at the source device to be properly unencrypted at the destination.
Presentation layer implementations are not typically associated with a particular protocol stack;
well-known standards include the image, video, compression, and encryption schemes noted above.
Application Layer
The application layer interacts with software applications that implement a communicating
component. Application layer functions typically include the following:
Identifying communication partners -- The application layer identifies and determines the
availability of communication partners for an application with data to transmit.
Determining resource availability -- The application layer must determine whether sufficient
network resources for the requested communication are available.
The application layer is the OSI layer closest to the end user. That is, both the OSI application
layer and the user interact directly with the software application. Some examples of application
layer implementations:
TCP/IP applications -- TCP/IP applications are protocols in the Internet Protocol suite, such as
Telnet, File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP).
OSI applications -- OSI applications are protocols in the OSI suite such as File Transfer, Access,
and Management (FTAM), Virtual Terminal Protocol (VTP), and Common Management
Information Protocol (CMIP).
2. Describe connection-oriented network service and connectionless network service and identify
the key differences between them.
Connection-oriented service involves three phases: connection establishment, data transfer, and
connection termination.
Connection establishment -- During the connection establishment phase, a single path between
the source and destination systems is determined. Network resources are typically reserved at
this time to ensure a consistent grade of service (such as a guaranteed throughput rate).
Data transfer -- During the data transfer phase, data is transmitted sequentially over the path that
has been established. Data always arrives at the destination system in the order in which it was
sent.
Connection-oriented service also has drawbacks:
Static path selection -- Because all traffic must travel along the same static path, a failure
anywhere along that path causes the connection to fail.
Connection-oriented services are useful for transmitting data from applications that are intolerant
of delays and packet re-sequencing. Voice and video applications are typically based on
connection-oriented services.
Connectionless network service does not predetermine the path from the source to the
destination system, nor are packet sequencing, data throughput, and other network resources
guaranteed. Each packet must be completely addressed because different paths through the
network might be selected for different packets, based on a variety of influences. Each packet is
transmitted independently by the source system and is handled independently by intermediate
network devices. Connectionless service offers two important advantages over connection-
oriented service:
Dynamic path selection -- Because paths are selected on a packet-by-packet basis, traffic can be
routed around network failures.
Dynamic bandwidth allocation -- Bandwidth is used more efficiently because network resources
are not reserved for connections that may not use them.
Connectionless services are useful for transmitting data from applications that can tolerate some
delay and re-sequencing. Data-based applications are typically based on connectionless service.
3. Describe data link addresses and network addresses and identify the key differences between
them.
A data link layer address uniquely identifies each physical network connection of a network
device. Data link addresses are sometimes referred to as physical or hardware addresses. Data
link addresses usually exist within a flat address space and have a pre-established and typically
fixed relationship to a specific device. End systems typically have only one physical network
connection, and thus have only one data link address. Routers and other internetworking devices
typically have multiple physical network connections. They therefore have multiple data link
addresses.
A network layer address identifies an entity at the network layer of the OSI reference model.
Network addresses usually exist within a hierarchical address space. They are sometimes called
virtual or logical addresses. The relationship of a network address with a device is logical and
unfixed. It is typically based either on physical network characteristics (the device is on a
particular network segment) or on groupings that have no physical basis (the device is part of an
AppleTalk zone). End systems require one network layer address for each network layer protocol
they support. (This assumes that the device has only one physical network connection.) Routers
and other internetworking devices require one network layer address per physical network
connection for each network layer protocol supported. For example, a router with three interfaces,
each running AppleTalk, TCP/IP, and OSI, must have three network layer addresses for each
interface. The router therefore has nine network layer addresses.
6. Define flow control and describe the three basic methods used in networking.
Flow control is a function that prevents network congestion by ensuring that transmitting devices
do not overwhelm receiving devices with data. There are a number of possible causes of network
congestion. For example, a high-speed computer might generate traffic faster than the network
can transfer it, or faster than the destination device can receive and process it. There are three
commonly used methods for handling network congestion:
Buffering - Buffering is used by network devices to temporarily store bursts of excess data in
memory until they can be processed. Occasional data bursts are easily handled by buffering.
However, excess data bursts can exhaust memory, forcing the device to discard any additional
datagrams that arrive.
Source quench messages - Source quench messages are used by receiving devices to help
prevent their buffers from overflowing. The receiving device sends source quench messages to
request that the source reduce its current rate of data transmission, as follows:
1. The receiving device begins discarding received data due to overflowing buffers.
2. The receiving device begins sending source quench messages to the transmitting device,
at the rate of one message for each packet dropped.
3. The source device receives the source quench messages and lowers the data rate until it
stops receiving the messages.
4. The source device then gradually increases the data rate as long as no further source
quench requests are received.
Windowing - Windowing is a flow-control scheme in which the source device requires an
acknowledgment from the destination after a certain number of packets have been transmitted,
and sends the next group of packets only after that acknowledgment is received.
7. List the key internetworking functions of the OSI Network layer and how they are performed in
a router.
The network layer selects the best path through an internetwork, establishes network addresses,
and communicates paths.
Routers use a routing protocol between routers, use a routed protocol to carry user packets, set
up and maintain routing tables, discover networks, adapt to internetwork topology changes, use a
two-part address, and contain broadcasts.
WAN Protocols
8. Differentiate between the following WAN services: Frame Relay, ISDN / LAPD, HDLC, & PPP.
Frame Relay - Industry-standard, switched data link layer protocol that handles multiple virtual
circuits using HDLC encapsulation between connected devices. Frame Relay is more efficient
than X.25, the protocol for which it is generally considered a replacement.
HDLC - High-Level Data Link Control. Bit-oriented synchronous data link layer protocol developed
by ISO. Derived from SDLC, HDLC specifies a data encapsulation method on synchronous serial
links using frame characters and checksums.
PPP - Point-to-Point Protocol. A successor to SLIP, PPP provides router-to-router and host-to-
network connections over synchronous and asynchronous circuits.
ISDN / LAPD - Integrated Services Digital Network. A digital service offered by telephone
companies that carries voice, data, and other traffic over existing telephone lines. LAPD (Link
Access Procedure, D channel) is the ISDN data link layer protocol used on the D channel to carry
control and signaling information.
Frame Relay is a CCITT & ANSI standard for sending data over a public data network. It is a
next-generation protocol to X.25 and is a connection-oriented data-link technology. It relies on
upper-layer protocols for error correction and today's more dependable fiber and digital networks.
Local access rate - the clock speed of the connection to the Frame Relay cloud.
Data-link connection identifier (DLCI) - a number that identifies the logical circuit between the
DTE and the Frame Relay switch. The FR switch maps the DLCIs between each pair of routers to
create a PVC.
Local management interface (LMI) - a signaling standard between the DTE device and the FR
switch that is responsible for managing the connection and maintaining status between the devices.
Committed information rate (CIR) - the average rate (bps) at which the FR switch agrees to
transfer data.
Committed burst - the maximum number of bits that the switch agrees to transfer during any
Committed Rate Measurement Interval.
Excess burst - the maximum number of uncommitted bits that the FR switch will attempt to
transfer beyond the CIR (typically limited to the port speed of the local access loop).
Backward explicit congestion notification (BECN) - when a FR switch recognizes congestion in
the network, it sends a BECN packet to the source router instructing it to reduce its packet
sending rate.
Forward explicit congestion notification (FECN) - when a FR switch recognizes congestion in the
network, it sends a FECN packet to the destination device indicating that congestion has occurred.
Discard eligibility (DE) indicator - when the network becomes congested, the FR switch will drop
packets with the DE bit set first. The DE bit is set on the oversubscribed traffic, that is, traffic that
was received after the CIR was met.
10. List commands to configure Frame Relay LMIs, maps and subinterfaces
router(config-if)# frame-relay lmi-type [ ansi | cisco | q933a ] (autosensed 11.2 and up)
router(config-if)# bandwidth kilobits (configures bandwidth for the link, default is T1)
router(config-if)# ip bandwidth-percent eigrp as-number percent (total bandwidth EIGRP can use)
router(config-if)# frame-relay local-dlci number (to specify DLCI for local interface)
router(config-if)# frame-relay interface-dlci dlci-number (local DLCI number being linked to sub-
interface)
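A minimal sketch tying these commands together, assuming an illustrative DLCI of 102 and
made-up 10.1.1.x addressing (not values from these notes):
router(config)# interface serial 0
router(config-if)# encapsulation frame-relay
router(config-if)# interface serial 0.102 point-to-point
router(config-subif)# ip address 10.1.1.1 255.255.255.252
router(config-subif)# frame-relay interface-dlci 102
On a physical or multipoint interface, a static map can be used instead:
router(config-if)# frame-relay map ip 10.1.1.2 102 broadcast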
Router(config-if)# ppp authentication [chap | chap pap | pap chap | pap ] (pap is clear text)
Router(config-if)# ppp pap sent-username username password password (for router responding
to pap request, 11.1 and up)
Router(config-if)# ppp chap hostname hostname (for same host name on multiple routers)
Router(config-if)# ppp chap password secret (to send to hosts that want to authenticate the
router)
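Pulling the PPP commands together, a minimal CHAP sketch (the hostnames RouterA/RouterB
and the shared secret are illustrative, not from these notes):
RouterA(config)# hostname RouterA
RouterA(config)# username RouterB password samesecret
RouterA(config)# interface serial 0
RouterA(config-if)# encapsulation ppp
RouterA(config-if)# ppp authentication chap
The peer router mirrors this configuration with hostname RouterB and username RouterA, using
the same password.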
ISDN was developed to support applications requiring high-speed voice, video, and data
communications. It is a digital service with fast connection setup and higher bandwidth than
traditional modems.
14. Identify ISDN protocols, function groups, reference points and channels.
ISDN components include terminals, terminal adapters (TAs), network-termination devices, line-
termination equipment, and exchange-termination equipment. ISDN terminals come in two types.
Specialized ISDN terminals are referred to as terminal equipment type 1 (TE1). Non-ISDN
terminals such as DTE that predate the ISDN standards are referred to as terminal equipment
type 2 (TE2). TE1s connect to the ISDN network through a four-wire, twisted-pair digital link.
TE2s connect to the ISDN network through a terminal adapter. The ISDN TA can either be a
stand-alone device or a board inside the TE2. If the TE2 is implemented as a standalone device,
it connects to the TA via a standard physical-layer interface. Examples include EIA/TIA-232-C
(formerly RS-232-C), V.24, and V.35.
Beyond the TE1 and TE2 devices, the next connection point in the ISDN network is the network
termination type 1 (NT1) or network termination type 2 (NT2) device. These are network-
termination devices that connect the four-wire subscriber wiring to the conventional two-wire local
loop. In North America, the NT1 is a customer premises equipment (CPE) device. In most other
parts of the world, the NT1 is part of the network provided by the carrier. The NT2 is a more
complicated device, typically found in digital private branch exchanges (PBXs), that performs
Layer 2 and 3 protocol functions and concentration services. An NT1/2 device also exists; it is a
single device that combines the functions of an NT1 and an NT2.
A number of reference points are specified in ISDN. These reference points define logical
interfaces between functional groupings such as TAs and NT1s. ISDN reference points include
the following:
R--The reference point between non-ISDN equipment (TE2) and a terminal adapter (TA).
S--The reference point between user terminals and the NT2.
T--The reference point between NT1 and NT2 devices.
U--The reference point between NT1 devices and line-termination equipment in the carrier
network. The U reference point is relevant only in North America, where the NT1 function is not
provided by the carrier network.
The ISDN Basic Rate Interface (BRI) service offers two B channels and one D channel (2B+D).
BRI B-channel service operates at 64 kbps and is meant to carry user data; BRI D-channel
service operates at 16 kbps and is meant to carry control and signaling information, although it
can support user data transmission under certain circumstances. The D channel signaling
protocol comprises Layers 1 through 3 of the OSI reference model. BRI also provides for framing
control and other overhead, bringing its total bit rate to 192 kbps. The BRI physical layer
specification is International Telecommunication Union Telecommunication Standardization
Sector (ITU-T) (formerly the Consultative Committee for International Telegraph and Telephone
[CCITT]) I.430.
ISDN Primary Rate Interface (PRI) service offers 23 B channels and one D channel in North
America and Japan, yielding a total bit rate of 1.544 Mbps (the PRI D channel runs at 64 kbps).
ISDN PRI in Europe, Australia, and other parts of the world provides 30 B plus one 64-kbps D
channel and a total interface rate of 2.048 Mbps. The PRI physical-layer specification is ITU-T
I.431.
ISDN physical-layer (Layer 1) frame formats differ depending on whether the frame is outbound
(from terminal to network) or inbound (from network to terminal). The frames are 48 bits long, of
which 36 bits represent data. Layer 2 of the ISDN signaling protocol is Link Access Procedure, D
channel, also known as LAPD. LAPD is similar to High-Level Data Link Control (HDLC) and Link
Access Procedure, Balanced (LAPB). As the expansion of the LAPD acronym indicates, it is used
across the D channel to ensure that control and signaling information flows and is received
properly. The LAPD frame format is very similar to that of HDLC and, like HDLC, LAPD uses
supervisory, information, and unnumbered frames. The LAPD protocol is formally specified in
ITU-T Q.920 and ITU-T Q.921.
Two Layer 3 specifications are used for ISDN signaling: ITU-T (formerly CCITT) I.450 (also
known as ITU-T Q.930) and ITU-T I.451 (also known as ITU-T Q.931). Together, these protocols
support user-to-user, circuit-switched, and packet-switched connections. A variety of call
establishment, call termination, information, and miscellaneous messages are specified, including
SETUP, CONNECT, RELEASE, USER INFORMATION, CANCEL, STATUS, and DISCONNECT.
These messages are functionally similar to those provided by the X.25 protocol.
ITU-T groups and organizes the ISDN protocols according to the following general topic areas:
Protocols that begin with "E" recommend telephone network standards for ISDN.
Protocols that begin with "I" deal with concepts, terminology, and general methods.
Protocols that begin with "Q" cover how switching and signaling should operate.
IOS
User EXEC – User mode entered by logging in. Prompt will be Router>. To exit use the logout
command.
Privileged EXEC – From user EXEC mode, use the enable EXEC command. Prompt will be
Router#.
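A short session sketch showing the mode changes (enable enters privileged EXEC, disable
returns to user EXEC):
Router> enable
Router# disable
Router> logout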
Entering a question mark (?) at the system prompt displays a list of commands available for each
command mode. You can also get a list of any command's associated keywords and arguments
with the context-sensitive help feature. To get help specific to a command mode, a command, a
keyword, or an argument, perform one of the following:
Task / Command
Obtain a brief description of the help system in any command mode.
help
Configure a line or lines to receive help for the full set of user-level commands when a user types
"?".
full-help
Configure a line to receive help for the full set of user-level commands for this exec session.
terminal full-help
Complete a partial command name.
abbreviated-command-entry <Tab>
With the current IOS release, the user interface provides a history or record of commands that
you have entered. This feature is particularly useful for recalling long or complex command
entries including access lists. By default, the system records 10 command lines in its history
buffer. To set the number of command lines recorded during the current terminal session, use the
following EXEC command:
Router# terminal history size number-of-lines
To configure the number of command lines the system records on a line, complete the following
command from line configuration mode:
Router(config-line)# history size number-of-lines
Ctrl-P (or the up arrow key) Recall commands in the history buffer starting with the most recent
command.
Ctrl-N (or the down arrow key) Return to more recent commands in the history buffer after
recalling commands with Ctrl-P.
Ctrl-B (or the left arrow key) Move the cursor back one character.
Ctrl-F (or the right arrow key) Move the cursor forward one character.
Ctrl-A Move the cursor to the beginning of the command line.
ROM - Read Only, Hard Wired, Boot Strap, IOS, ROM Monitor
The Cisco Discovery Protocol (CDP) is a media- and protocol-independent protocol that runs on
all Cisco-manufactured equipment including routers, bridges, access servers, and switches. CDP
runs on all media that support the Subnetwork Access Protocol (SNAP), including LAN,
Frame Relay, and ATM media. CDP runs over the data link layer only.
cdp holdtime seconds (specify the amount of time a receiving device should hold the information
sent by your device before discarding it)
To disable CDP:
no cdp run (disables CDP globally on the router)
no cdp enable (disables CDP on an individual interface)
The show cdp neighbors command displays: Device ID, interface type and number, hold-time
settings, capabilities, platform, and port ID information about neighbors. Using the detail option
displays the following additional neighbor details: network address, enabled protocols, and
software version.
show startup-config to view the configuration in NVRAM (show config = pre 10.3)
show running-config to view the current running configuration (write term = pre 10.3)
show version displays the configuration of the system hardware, the software version, the names
and sources of the configuration files, and the boot images.
show protocols displays the configured protocols and status of any configured Layer 3 protocol.
show mem shows statistics about the router's memory, including memory free pool statistics.
show interfaces displays statistics for all interfaces configured on the router.
Cisco routers have two levels of passwords that can be applied; user and privileged EXEC. The
user EXEC passwords are applied to the console, auxiliary and virtual terminal lines of the Cisco
router. Password authentication can be either on the line, through a local username definition or a
TACACS, extended TACACS, TACACS+ or RADIUS server. To enter privileged EXEC mode,
use the enable command. By default, the password will be compared against the password
entered with the enable secret global command.
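A hedged sketch of setting these passwords (the password strings are placeholders):
Router(config)# enable secret san-fran
Router(config)# line console 0
Router(config-line)# password cisco
Router(config-line)# login
Router(config-line)# exit
Router(config)# line vty 0 4
Router(config-line)# password cisco
Router(config-line)# login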
Banners
banner exec d message d
To display a banner on terminals with an interactive EXEC, use the banner exec global
configuration command. This command specifies a message to be displayed when an EXEC
process is created (a line is activated, or an incoming connection is made to a VTY line). The no
form of this command deletes the EXEC banner.
no banner exec
Syntax Description
d -- Delimiting character of your choice, a pound sign (#) for example. You cannot use the
delimiting character in the banner message.
message -- Message text to be displayed.
banner incoming
To specify a banner used when you have an incoming connection to a line from a host on the
network, use the banner incoming global configuration command. The no form of this command
deletes the incoming connection banner.
no banner incoming
banner motd d message d
To specify a message-of-the-day (MOTD) banner, use the banner motd global configuration
command. The no form of this command deletes the MOTD banner.
no banner motd
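For example, using the pound sign as the delimiting character (the message text is made up):
Router(config)# banner motd # Authorized access only #
Router(config)# no banner motd (removes the MOTD banner)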
An incoming connection is one initiated from the network side of the router. Incoming connections
are also called reverse Telnet sessions. These sessions can display MOTD banners and
INCOMING banners, but they do not display EXEC banners. Use the no motd-banner line
configuration command to disable the MOTD banner for reverse Telnet sessions on
asynchronous lines. When a user connects to the router, the MOTD banner appears before the
login prompt. After the user successfully logs in to the router, the EXEC banner or INCOMING
banner will be displayed, depending on the type of connection. For a reverse Telnet login, the
INCOMING banner will be displayed. For all other connections, the router will display the EXEC
banner. Incoming banners cannot be suppressed. If you do not want the incoming banner to
appear, you must delete it with the no banner incoming command.
22. Identify the main Cisco IOS commands for router startup.
boot system flash [filename] (boot from an image in flash memory, tried first)
boot system tftp filename [ip-address] (boot from a TFTP server, tried next)
boot system rom (boot the ROM image as a last resort)
Boot system commands are tried in the order they appear in the configuration.
Additionally, if you are not familiar with Cisco products and the command parser, the setup
command facility is a particularly valuable tool because it asks you the questions required to
make configuration changes.
Note: If you use setup to modify a configuration because you have added or modified the
hardware, be sure to verify the physical connections using the show version command. Also,
verify the logical port assignments using the show running-config command to ensure that you
configure the proper port.
To enter the setup command facility, enter ‘setup’ in privileged EXEC mode:
When you enter the setup command facility after first-time startup, an interactive dialog called the
System Configuration Dialog appears on the system console screen. The System Configuration
Dialog guides you through the configuration process. It prompts you first for global parameters
and then for interface parameters. The values shown in brackets next to each prompt are the
default values last set using either the setup command facility or the configure command. The
prompts and the order in which they appear on the screen vary depending on the platform and
the interfaces installed in the device.
You must run through the entire System Configuration Dialog until you come to the item that you
intend to change. To accept default settings for items that you do not want to change, press the
Return key.
To return to the privileged EXEC prompt without making changes and without running through the
entire System Configuration Dialog, press Ctrl-C.
When you complete your changes, the setup command facility shows you the configuration
command script that was created during the setup session. It also asks you if you want to use this
configuration. If you answer Yes, the configuration is saved to NVRAM. If you answer No, the
configuration is not saved and the process begins again. There is no default for this prompt; you
must answer either Yes or No.
Router# setup
At any point you may enter a question mark '?' for help.
copy running-config startup-config save config variables to NVRAM (write memory = pre 10.3)
copy running-config tftp save config variables to a remote server on the network. (write network =
pre 10.3)
copy tftp running-config copies a file from a TFTP server to RAM (config network = pre 10.3)
copy tftp startup-config loads a config file from a TFTP server directly into NVRAM (config
overwrite = pre 10.3)
erase startup-config to erase the contents of NVRAM (write erase = pre 10.3)
25. List the commands to load Cisco IOS from: flash memory, a TFTP server or ROM.
To configure a router to automatically boot an image in Flash memory, perform the following
tasks:
Task Command
Step 1 Enter global configuration mode. configure terminal
Step 2 Enter the filename of an image stored in Flash memory. boot system flash [filename]
Step 3 Set the configuration register to enable loading the image from Flash memory (generally
0x2102). config-register value
To configure a router to load a system image from a network server using TFTP, rcp or MOP:
Task Command
Step 1 Enter global configuration mode. configure terminal
Step 2 Specify the system image to be booted from a network server using rcp, TFTP or MOP.
boot system tftp filename [ip-address] (or boot system rcp / boot system mop)
Step 3 Set the configuration register. config-register value
To specify the use of the ROM system image as a backup to other boot instructions in the
configuration file:
Task Command
Step 1 Enter global configuration mode. configure terminal
Step 2 Specify use of the ROM system image as a backup. boot system rom
Step 3 Set the configuration register to enable loading the image from ROM (generally 0x0101).
config-register value
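Putting the three tables together, a hedged example of a boot sequence (the image filename and
TFTP server address are illustrative):
Router# configure terminal
Router(config)# boot system flash c2500-js-l.bin
Router(config)# boot system tftp c2500-js-l.bin 172.16.1.100
Router(config)# boot system rom
Router(config)# config-register 0x2102
Router(config)# exit
Router# copy running-config startup-config
The boot system entries are tried in the order entered, so ROM serves as the final fallback.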
26. Prepare to backup, upgrade and load a backup Cisco IOS software image.
To prepare for a backup, check: access to the server, space available on the server, and naming
conventions.
Router# copy flash tftp ( it will ask you for IP address & source/destination file name )
Router# copy tftp flash ( it will ask you for IP address of TFTP server, file name, & whether to
erase flash )
27. Prepare the initial configuration of your router and enable IP.
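As a minimal sketch of an initial configuration with IP enabled on one interface (the hostname,
secret, and addresses are placeholders, not values from these notes):
Router> enable
Router# configure terminal
Router(config)# hostname Lab_A
Lab_A(config)# enable secret class
Lab_A(config)# interface ethernet 0
Lab_A(config-if)# ip address 172.16.1.1 255.255.255.0
Lab_A(config-if)# no shutdown
Lab_A(config-if)# end
Lab_A# copy running-config startup-config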
Network Protocols
IP addressing supports five different address classes. The left-most (high-order) bits indicate the
network class. The five IP address classes are:
Class A - first bit 0, first octet 1-126, default mask 255.0.0.0 (8 network bits / 24 host bits)
Class B - first bits 10, first octet 128-191, default mask 255.255.0.0 (16 network / 16 host bits)
Class C - first bits 110, first octet 192-223, default mask 255.255.255.0 (24 network / 8 host bits)
Class D - first bits 1110, first octet 224-239, used for multicast
Class E - first bits 1111, first octet 240-255, reserved for experimental use
IP networks can be divided into smaller networks called subnetworks (or subnets). Subnetting
provides extra flexibility, makes more efficient use of the network address space, and contains
broadcast traffic because a broadcast will not cross a router. Subnets are under local
administration. As such, the outside world sees an organization as a single network and has no
detailed knowledge of the organization's internal structure. A given network address can be
broken up into many subnetworks. For example, 172.16.1.0, 172.16.2.0, 172.16.3.0, and
172.16.4.0 are all subnets within network 172.16.0.0. (All 0s in the host portion of an address
specifies the entire network.)
Router(config-if)# ip address ip-address subnet-mask (assigns address & subnet mask, starts IP
processing on an interface)
Router# term ip netmask-format [ bitcount | decimal | hexadecimal ] (sets the format of network
masks displayed for the current session; defaults back to bit count.)
Ping - uses ICMP to verify the hardware connection and the logical address at the network layer.
Trace - uses TTL values to generate messages from each router used along the path.
Cisco encapsulation keyword / Novell frame type:
novell-ether Ethernet_802.3 (default on Ethernet interfaces)
arpa Ethernet_II
sap Ethernet_802.2
snap Ethernet_Snap
snap Token-Ring_Snap
sap Fddi_802.2
Router(config)# ipx routing [node] (If no node is specified, the MAC address of an interface is used.)
Router(config)# ipx maximum-paths paths (configure round-robin load sharing over multiple equal
metric paths.)
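A short sketch of enabling IPX on an interface (the network number 4a is illustrative; the
encapsulation keyword comes from the table above):
Router(config)# ipx routing
Router(config)# interface ethernet 0
Router(config-if)# ipx network 4a encapsulation sap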
IP provides connectionless, best-effort delivery routing of datagrams. It is not concerned with the
content of the datagrams. Instead, it looks for a way to move the datagrams to their destination.
ARP determines the data link layer address for a known IP address.
RARP determines the network address when the data link layer address is known.
The Internet Control Message Protocol (ICMP) is a network layer Internet protocol that provides
message packets to report errors and other information relevant to IP packet processing back to
the source. ICMP is documented in RFC 792. ICMP provides a number of helpful messages
including the following:
Network unreachable -- The router does not have a route to the destination network (less frequent).
Host unreachable -- This message usually implies delivery failures such as a wrong subnet mask.
Protocol unreachable -- This message usually implies that the destination does not support the
upper-layer protocol specified in the packet.
Port unreachable -- This message usually implies that the Transmission Control Protocol (TCP)
socket or port specified in the packet is not available.
Echo Request and Reply - The ICMP echo request message is sent by any host to test node
reachability across an internetwork. It is generated by the ping command. The ICMP echo reply
message indicates that the node can be successfully reached.
Redirect - An ICMP redirect message is sent by the router to the source host to stimulate more
efficient routing. The router still forwards the original packet to the destination. ICMP redirects
allow host routing tables to remain small because knowing the address of only one router is
required (even if that router does not provide the best path). Even after receiving an ICMP
redirect message, some devices might continue using the less efficient route.
Time Exceeded - An ICMP time-exceeded message is sent by the router if an IP packet's Time-
to-Live field (expressed in hops or seconds) reaches zero. The Time-to-Live field prevents
packets from continuously circulating the internetwork if the internetwork contains a routing loop.
The router discards the original packet.
Router Advertisement and Router Solicitation - The ICMP Router Discovery Protocol (IRDP) uses
router advertisement and router solicitation messages to discover the addresses of routers on
directly attached subnets. IRDP works as follows:
1. Each router periodically multicasts router advertisement messages from each of its interfaces.
2. Hosts discover addresses of routers on directly attached subnets by listening for these
messages.
3. Hosts can use router solicitation messages to request immediate advertisements, rather than
waiting for unsolicited messages.
IRDP offers several advantages over other methods of discovering addresses of neighboring
routers. Primarily, it does not require hosts to recognize routing protocols, nor does it require
manual configuration by an administrator. Router advertisement messages allow hosts to
discover the existence of neighboring routers, but not which router is best to reach a particular
destination. If a host uses a poor first-hop router to reach a particular destination, it receives a
redirect message identifying a better choice.
Undeliverable ICMP messages (for whatever reason) do not generate a second ICMP message.
Doing so could create an endless flood of ICMP messages.
38. Configure IPX access lists and SAP filters to control basic Novell traffic
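The commands are not included in these notes; a hedged sketch, assuming illustrative IPX
network numbers 30 and 4a (standard IPX lists use the 800-899 range, SAP filters 1000-1099):
Router(config)# access-list 810 permit 30 4a (standard IPX list: traffic from network 30 to network 4a)
Router(config)# access-list 1000 permit 4a 4 (SAP filter: file services, type 4, from network 4a)
Router(config)# interface ethernet 0
Router(config-if)# ipx access-group 810 out
Router(config-if)# ipx output-sap-filter 1000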
Routing
Separate routing---The ships-in-the-night approach involves the use of a different routing protocol
for each network protocol.
Integrated routing---Integrated routing involves the use of a single routing protocol (for example, a
link state protocol) that determines the least cost path for different routed protocols.
42. List problems that each routing type encounters when dealing with topology changes and
describe techniques to reduce these problems.
Distance vector protocols, like RIP and IGRP, use the Bellman-Ford algorithm. They are slow to
converge in a large internetwork. This can lead to inconsistent routing entries and cause routing loops.
Hop-Count Limit - RIP permits a maximum hop count of 15. Any destination greater than 15 hops
away is tagged as unreachable. RIP's maximum hop count greatly restricts its use in large
internetworks, but prevents a problem called count to infinity from causing endless network
routing loops.
Hold-Downs - Hold-downs are used to prevent regular update messages from inappropriately
reinstating a route that has gone bad. When a route goes down, neighboring routers will detect
this. These routers then calculate new routes and send out routing update messages to inform
their neighbors of the route change. This activity begins a wave of routing updates that filter
through the network.
Triggered updates do not instantly arrive at every network device. It is therefore possible that a
device that has yet to be informed of a network failure may send a regular update message
(indicating that a route that has just gone down is still good) to a device that has just been notified
of the network failure. In this case, the latter device now contains (and potentially advertises)
incorrect routing information.
Hold-downs tell routers to hold down any changes that might affect recently removed routes for
some period of time. The hold-down period is usually calculated to be just greater than the period
of time necessary to update the entire network with a routing change. Hold-down prevents the
count-to-infinity problem.
Split Horizons - Split horizons derive from the fact that it is never useful to send information about
a route back in the direction from which it came. The split-horizon rule helps prevent two-node
routing loops.
Poison Reverse Updates - Whereas split horizons should prevent routing loops between adjacent
routers, poison reverse updates are intended to defeat larger routing loops. The idea is that
increases in routing metrics generally indicate routing loops. Poison reverse updates are then
sent to remove the route and place it in hold-down. Poison reverse updates are updates sent to
other routers with an unreachable metric.
Link State
Link state routing uses the Dijkstra (shortest path first) algorithm to compute the shortest path to
each network.
Link state routing protocols, like OSPF & NLSP, notify other routers of topology changes with
link-state updates. The routers receiving these LSPs recalculate their routing tables. The two link-
state concerns are:
Link state updates can arrive at different times based on bandwidth between routers. To solve
this problem:
Use targeted multicast (not flooding), define router hierarchies (i.e., partition the network)
Manageability - There are explicit protocols operating among routers, giving the network
administrator greater control over path selection; and network routing behavior is more visible.
Functionality - Because routers are visible to the end stations, you can implement mechanisms to
provide flow control, error and congestion control, fragmentation and reassembly services, and
explicit packet lifetime control.
Multiple active paths - With the implementation of a router, you can build a network topology with
more than one path between stations. Operating at the network layer, routers can examine
protocol, destination service access point (DSAP), source service access point (SSAP), and path
metric information before making forwarding or filtering decisions.
Network Security
Standard access lists - check the source address of packets that could be routed. The result
permits or denies output for an entire protocol suite, based on the network/subnet/host address.
Extended access lists - check for both source and destination packet addresses. They also can
check for specific protocols, port numbers, and other parameters, which allows administrators
more flexibility to describe what checking the access list will do. Packets can be permitted or
denied output based on where the packet originated and on its destination.
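A brief sketch, with made-up addresses, of a standard and an extended IP access list and how a
list is applied to an interface:
Router(config)# access-list 1 permit 172.16.1.0 0.0.0.255 (standard: permit the 172.16.1.0 subnet)
Router(config)# access-list 101 permit tcp any host 172.16.1.10 eq telnet (extended: permit Telnet to one host)
Router(config)# interface ethernet 0
Router(config-if)# ip access-group 1 out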
Show ip interface - displays IP interface information and indicates whether any access lists are
set.
Show access-lists [ name/number ] - displays the contents of all or specific access list(s)
LAN Switching
Dividing the network into smaller segments reduces the number of users per segment, thereby
increasing the bandwidth available to each user in the segment.
A bridge is a data link layer device used to connect two segments. It is protocol independent and
transparent to the end user. Bridges "learn" which end stations can be reached through which
port from the source address of a packet. If the destination is on the same segment as the
source, the packet is not forwarded. Bridges introduce a latency penalty due to processing
overhead (a 20-30% loss of throughput for acknowledgment-oriented protocols, and 10-20%
for sliding window protocols). Bridges forward multicast and broadcast packets to other attached
segments (these destinations do not appear in the address tables).
Routers operate at the network layer and are used to extend a network across multiple data links,
finding routes between the source and destination stations on an internetwork. They typically
perform functions associated with bridging, such as making forwarding decisions based on table
look-up. Unlike a bridge, the router is known to the stations using its services, and a well-defined
protocol must be used among the stations and the router. Routers introduce a latency penalty
(associated with examining more fields than a bridge does) of a 30-40% loss of throughput for
acknowledgment-oriented protocols, and 20-30% for sliding window protocols.
A switched Ethernet connection operates like a network with only two nodes. In a switched
Ethernet network, the utilization can reach closer to the 100 % rate. A switch segments a LAN
collision domain into smaller collision domains thus reducing or eliminating station contention for
media access. LAN switches use the data-link layer information to create a direct point-to-point
path across the switch or across several switches between the source and destination. Use of the
MAC layer information for transmitting packets enables a LAN switch to be protocol independent.
Port configuration - allows a port to be assigned to a physical network segment under software
control.
Half-duplex Ethernet uses each circuit for a specific purpose: when a node is transmitting, the
other nodes are receiving.
Efficiency is typically 50-60 percent of the 10 Mbps bandwidth.
Full-duplex Ethernet allows simultaneous transmission and reception. It requires a switched
connection between two nodes. A transmit circuit connection is wired directly to the receiver
circuit at the other end of the connection. Since just two stations are connected in this
arrangement, a collision-free environment exists here. Full-duplex Ethernet offers 100 %
efficiency in both directions. (10 Mbps transmit, and 10 Mbps receive.) This produces a
theoretical 20 Mbps of throughput.
Transmission of graphic files, images, full-motion video, and multimedia applications exceeds the
10 Mbps bandwidth of traditional Ethernet. Also, use of the Internet has increased network
utilization.
Provides more bandwidth per user due to fewer users per segment.
Packets with the destination and source addresses on the same segment are not forwarded.
The IEEE 802.3u 100BaseT Fast Ethernet standard is based on Ethernet's CSMA/CD protocol
but is 10 times faster. Fast Ethernet is well suited for bursty communication such as client/server
applications, centralized server farms or power workgroups, and backbone implementations.
Allows the use of existing cabling and network equipment, thus reducing the overall cost of
implementation and allowing easy integration into existing 10BaseT networks.
Uses the same MAC and shares common circuitry. Dual-speed adapters and switches can be
used for easy migration from 10 Mbps to 100 Mbps.
Based on the proven CSMA/CD technology, which is well specified and exhaustively tested and
verified.
56. Describe the guidelines and distance limitations of Fast Ethernet
100BaseTX : uses Cat 5 UTP, RJ-45 connectors, and has a distance limit of 100 meters.
100BaseFX: uses multimode fiber, SC/ST/MIC connectors, & has a distance limit of 412 meters
(half-duplex) or 2 kilometers (full-duplex)
100BaseT4: uses 4-pair Cat 3, 4, or 5 UTP, RJ-45 connectors, can use voice-grade wire, and has
a distance limit of 100 meters.
Maximum collision domain diameter:
1 Class I repeater - 200 meters (UTP medium), 261 meters (UTP & fiber)
1 Class II repeater - 200 meters (UTP medium), 308 meters (UTP & fiber)
2 Class II repeaters - 205 meters (UTP medium), 216 meters (UTP & fiber)
Cut-through switching forwards the packet as soon as the destination MAC address is known.
Store-and-forward switching forwards only after the entire packet has been received and declared
to be valid. Cut-through is faster, but you may pass "bad" packets.
58. Describe the operation of the Spanning Tree Protocol and its benefit
Spanning-Tree Protocol is a link management protocol that provides path redundancy while
preventing undesirable loops in the network. For an Ethernet network to function properly, only
one active path can exist between two stations. Multiple active paths between stations cause
loops in the network. If a loop exists in the network topology, the potential exists for duplication of
messages. When loops occur, some switches see stations appear on both sides of the switch.
This condition confuses the forwarding algorithm and allows duplicate frames to be forwarded.
To provide path redundancy, Spanning-Tree Protocol defines a tree that spans all switches in an
extended network. Spanning-Tree Protocol forces certain redundant data paths into a standby
(blocked) state. If one network segment in the Spanning-Tree Protocol becomes unreachable, or
if Spanning-Tree Protocol costs change, the spanning-tree algorithm reconfigures the spanning-
tree topology and reestablishes the link by activating the standby path.
Spanning-Tree Protocol operation is transparent to end stations, which are unaware whether they
are connected to a single LAN segment or a switched LAN of multiple segments.
Spanning-Tree Protocol calls for the election of a unique root switch for the stable spanning-tree
network topology.
The Spanning-Tree Protocol root switch is the logical center of the spanning-tree topology in a
switched network. All paths that are not needed to reach the root switch from anywhere in the
switched network are placed in Spanning-Tree Protocol backup mode.
BPDUs contain information about the transmitting switch and its ports, including switch and port
Media Access Control (MAC) addresses, switch priority, port priority, and port cost. The
Spanning-Tree Protocol uses this information to elect the root switch and root port for the
switched network, as well as the root port and designated port for each switched segment.
The shortest distance to the root switch is calculated for each switch.
A designated switch is selected. This is the switch closest to the root switch through which frames
will be forwarded to the root.
A port for each switch is selected. This is the port providing the best path from switch to the root
switch.
If all switches are enabled with default settings, the switch with the lowest MAC address in the
network becomes the root switch. By increasing the priority (lowering the numerical priority
number) of the ideal switch so that it becomes the root switch, you force a Spanning-Tree
Protocol recalculation to form a new, stable topology.
Propagation delays can occur when protocol information is passed through a switched LAN. As a
result, topology changes can take place at different times and at different places in a switched
network. When a switch port transitions directly from non-participation in the stable topology to
the forwarding state, it can create temporary data loops. Ports must wait for new topology
information to propagate through the switched LAN before starting to forward frames. They must
also allow the frame lifetime to expire for frames that have been forwarded using the old topology.
Each port on a switch using Spanning-Tree Protocol exists in one of the following five states:
Blocking State - A port in the blocking state does not participate in frame forwarding. After
initialization, a BPDU is sent to each port in the switch. A switch initially assumes it is the root
until it exchanges BPDUs with other switches. This exchange establishes which switch in the
network is really the root. If only one switch resides in the network, no exchange occurs, the
forward delay timer expires, and the ports move to the listening state. A switch always enters the
blocking state following switch initialization.
Listening State - The listening state is the first transitional state a port enters after the blocking
state, when Spanning-Tree Protocol determines that the port should participate in frame
forwarding. Learning is disabled in the listening state.
Learning State - A port in the learning state is preparing to participate in frame forwarding. This is
the second transitional state through which a port moves in anticipation of frame forwarding.
Forwarding State - A port in the forwarding state forwards frames. The port enters the forwarding
state from the learning state.
Disabled State - A port in the disabled state does not participate in frame forwarding or the
operation of Spanning-Tree Protocol. A port in the disabled state is virtually nonoperational.
Reduced Administration Costs - Moves, adds, and changes are one of the greatest expenses in
managing a network. VLANs provide an effective mechanism to control these changes and
reduce much of the cost of hub and router reconfiguration.
Controlling Broadcast Activity - Similar to routers, VLANs offer an effective mechanism for setting
up firewalls in a switch fabric, protecting the network against broadcast problems that are
potentially dangerous, and maintaining all the performance benefits of switching.
Better Network Security - You can increase security easily and inexpensively by segmenting the
network into distinct broadcast groups. VLANs therefore can be used to provide security firewalls,
restrict individual user access, flag any unwanted intrusion to the network, and control the size
and composition of the broadcast domain.
Leveraging Existing LAN Hub Investments - Organizations have installed many shared hub
chassis, modules, and stackable devices in the past three to five years. You can leverage this
investment by using backplane hub connections. It is the connections between shared hubs and
switches that provide opportunities for VLAN segmentation.
Media Access Control (MAC) addresses are a subset of data link layer addresses. MAC
addresses identify network entities in LANs implementing the IEEE MAC sublayer of the data link
layer. Like most data link addresses, MAC addresses are unique for each LAN interface. MAC
addresses are 48 bits in length and are expressed as 12 hexadecimal digits: The first 6
hexadecimal digits are the manufacturer identification (or vendor code), called the Organizational
Unique Identifier (OUI). These 6 digits are administered by the IEEE. The last 6 hexadecimal
digits are the interface serial number or another value administered by the specific vendor. MAC
addresses are sometimes called burned-in addresses (BIAs) because they are burned into read-
only memory (ROM) and copied into random-access memory (RAM) when the interface card
initializes.