5. UNIT 2 Lecture Notes
TOPIC NUMBER   TOPIC
2.1    Media Access Protocols
2.2    ALOHA
2.3    CSMA/CA/CD
2.4    Ethernet
2.5    Wireless LANs (802.11)
2.6    Switching and bridging
2.7    Basic Internetworking
2.8    Global Addresses
2.9    ARP
2.10   DHCP
2.11   ICMP
RANDOM ACCESS
None of the stations is assigned control over another. No station permits, or does not permit, another station to send. At each instant, a station that has data to send uses a procedure defined by the protocol to decide whether or not to send. This decision depends on the state of the medium (idle or busy). In other words, each station can transmit when it desires, on the condition that it follows the predefined procedure, including testing the state of the medium.
Two features give this method its name. First, there is no scheduled time for a station to
transmit. Transmission is random among the stations. That is why these methods are called
random access. Second, no rules specify which station should send next. Stations compete
with one another to access the medium. That is why these methods are also called contention
methods.
In a random access method, each station has the right to the medium without being controlled by any other station. However, if more than one station tries to send, there is an access conflict (a collision) and the frames will be either destroyed or modified. To avoid an access conflict, or to resolve it when it happens, each station follows a procedure that answers the following questions:
When can the station access the medium?
What can the station do if the medium is busy?
How can the station determine the success or failure of the transmission?
What can the station do if there is an access conflict?
The random access methods we study in this chapter have evolved from a very interesting protocol known as ALOHA, which used a very simple procedure called multiple access (MA). The method was improved with the addition of a procedure that forces the station to sense the medium before transmitting. This was called carrier sense multiple access (CSMA). This method later evolved into two parallel methods: carrier sense multiple access with collision detection (CSMA/CD) and carrier sense multiple access with collision avoidance (CSMA/CA). CSMA/CD tells the station what to do when a collision is detected. CSMA/CA tries to avoid the collision.
Example 1
The stations on a wireless ALOHA network are a maximum of 600 km apart. If we assume that signals propagate at 3 × 10^8 m/s, we find Tp = (600 × 10^3) / (3 × 10^8) = 2 ms. Now we can find the value of TB for different values of K.
a. For K = 1, the range is {0, 1}. The station needs to generate a random number with a value of 0 or 1. This means that TB is either 0 ms (0 × 2) or 2 ms (1 × 2), based on the outcome of the random variable.
b. For K = 2, the range is {0, 1, 2, 3}. This means that TB can be 0, 2, 4, or 6 ms, based on the outcome of the random variable.
c. For K = 3, the range is {0, 1, 2, 3, 4, 5, 6, 7}. This means that TB can be 0, 2, 4, ..., 14 ms, based on the outcome of the random variable.
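In this example the back-off time is TB = R × Tp, where R is a random integer drawn from {0, 1, ..., 2^K - 1}. The short Python sketch below reproduces the three cases above; the function name and the choice of Tp = 2 ms are only illustrative.

import random

def backoff_time_ms(K, Tp_ms=2):
    # Pick a back-off time TB = R * Tp, with R drawn from {0, ..., 2**K - 1}.
    R = random.randint(0, 2**K - 1)
    return R * Tp_ms

# Reproduce Example 1: list every possible TB for K = 1, 2, 3.
for K in (1, 2, 3):
    possible = [R * 2 for R in range(2**K)]
    print(f"K = {K}: TB can be {possible} ms")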
Station A sends a frame at time t. Now imagine station B has already sent a frame between t - Tfr and t. This leads to a collision between the frames from station A and station B. The end of B's frame collides with the beginning of A's frame. On the other hand, suppose that station C sends a frame between t and t + Tfr. Here, there is a collision between frames from station A and station C. The beginning of C's frame collides with the end of A's frame.
Looking at the figure, we see that the vulnerable time, during which a collision may occur in pure ALOHA, is 2 times the frame transmission time: pure ALOHA vulnerable time = 2 × Tfr.
Example 2
A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the requirement to make this frame collision-free?
Solution
Average frame transmission time Tfr is 200 bits / 200 kbps, or 1 ms. The vulnerable time is 2 × 1 ms = 2 ms. This means no station should send later than 1 ms before this station starts transmission, and no station should start sending during the 1-ms period that this station is sending.
Throughput
Let us call G the average number of frames generated by the system during one frame transmission time. Then it can be proved that the average number of successful transmissions for pure ALOHA is S = G × e^(-2G). The maximum throughput Smax is 0.184, for G = 1/2. In other words, if one-half a frame is generated during one frame transmission time (in other words, one frame during two frame transmission times), then 18.4 percent of these frames reach their destination successfully. This is an expected result because the vulnerable time is 2 times the frame transmission time.
Therefore, if a station generates only one frame in this vulnerable time (and no other stations generate a frame during this time), the frame will reach its destination successfully.
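A quick numeric check of the pure ALOHA formula S = G × e^(-2G), confirming that the maximum occurs at G = 1/2 and is about 0.184. A minimal sketch; the function name is our own.

import math

def pure_aloha_throughput(G):
    # Fraction of offered frames that succeed in pure ALOHA: S = G * e^(-2G).
    return G * math.exp(-2 * G)

for G in (0.25, 0.5, 1.0):
    print(f"G = {G}: S = {pure_aloha_throughput(G):.3f}")
# G = 0.5 gives S = 0.184, the maximum throughput of pure ALOHA.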
Example 4
A slotted ALOHA network transmits 200-bit frames using a shared channel with a 200-kbps bandwidth. Find the throughput if the system (all stations together) produces
a. 1000 frames per second
b. 500 frames per second
c. 250 frames per second
Solution
This situation is similar to the previous exercise except that the network is using slotted ALOHA instead of pure ALOHA. The frame transmission time is 200 bits / 200 kbps, or 1 ms.
a. In this case G is 1. So S = G × e^(-G), or S = 0.368 (36.8 percent). This means that the throughput is 1000 × 0.368 = 368 frames. Only 368 out of 1000 frames will probably survive. Note that this is the maximum throughput case, percentagewise.
b. Here G is 1/2. In this case S = G × e^(-G), or S = 0.303 (30.3 percent). This means that the throughput is 500 × 0.303 = 151. Only 151 frames out of 500 will probably survive.
c. Now G is 1/4. In this case S = G × e^(-G), or S = 0.195 (19.5 percent). This means that the throughput is 250 × 0.195 = 49. Only 49 frames out of 250 will probably survive.
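The same check for slotted ALOHA, S = G × e^(-G), reproducing the three cases of Example 4. A minimal sketch; names are our own.

import math

def slotted_aloha_throughput(G):
    # Fraction of offered frames that succeed in slotted ALOHA: S = G * e^(-G).
    return G * math.exp(-G)

frame_time_s = 200 / 200_000          # 200-bit frames on a 200-kbps channel -> 1 ms
for frames_per_second in (1000, 500, 250):
    G = frames_per_second * frame_time_s          # offered load per frame time
    S = slotted_aloha_throughput(G)
    print(f"{frames_per_second} fps: G = {G:.2f}, S = {S:.3f}, "
          f"about {frames_per_second * S:.1f} frames survive")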
To minimize the chance of collision and, therefore, increase the performance, the CSMA
method was developed. The chance of collision can be reduced if a station senses the medium
before trying to use it. Carrier sense multiple access (CSMA) requires that each station first
listen to the medium (or check the state of the medium) before sending. In other words,
CSMA is based on the principle "sense before transmit" or "listen before talk." CSMA can
reduce the possibility of collision, but it cannot eliminate it. The reason for this is shown in the figure below, a space-and-time model of a CSMA network. Stations are connected to a shared channel (usually a dedicated medium).
The possibility of collision still exists because of propagation delay; when a station sends a frame, it still takes time (although very short) for the first bit to reach every station and for every station to sense it. In other words, a station may sense the medium and find it idle, only because the first bit sent by another station has not yet been received.
At time t1, station B senses the medium and finds it idle, so it sends a frame. At time t2 (t2 > t1), station C senses the medium and finds it idle because, at this time, the first bits from station B have not reached station C. Station C also sends a frame. The two signals collide and both frames are destroyed.
Vulnerable Time
The vulnerable time for CSMA is the propagation time Tp. This is the time needed for a signal to propagate from one end of the medium to the other. If a station sends a frame and any other station tries to send a frame during this time, a collision will result. But if the first bit of the frame reaches the end of the medium, every station will already have heard the bit and will refrain from sending.
Persistence Methods
What should a station do if the channel is busy? What should a station do if the channel is idle? Three methods have been devised to answer these questions: the 1-persistent method, the nonpersistent method, and the p-persistent method. The figure below shows the behavior of the three persistence methods when a station finds a channel busy.
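The three persistence rules can be summarized as small channel-sensing loops. The sketch below is our own illustration, using a toy Channel class that is busy for a few sensing attempts and then idle; it is not code from any standard.

import random

# Toy channel model: busy for the first few sensing attempts, then idle.
class Channel:
    def __init__(self, busy_slots=3):
        self.busy_slots = busy_slots
    def is_idle(self):
        if self.busy_slots > 0:
            self.busy_slots -= 1
            return False
        return True

def one_persistent(ch):
    # Sense continuously; transmit as soon as the channel is found idle.
    while not ch.is_idle():
        pass
    return "transmit"

def nonpersistent(ch):
    # If busy, wait a random time before sensing again; transmit when found idle.
    while not ch.is_idle():
        pass  # a real station would wait a random time here before re-sensing
    return "transmit"

def p_persistent(ch, p):
    # When idle, transmit with probability p; otherwise wait one slot and sense again.
    while True:
        while not ch.is_idle():
            pass
        if random.random() < p:
            return "transmit"
        # with probability 1 - p: wait one time slot, then loop and sense again

print(one_persistent(Channel()), nonpersistent(Channel()), p_persistent(Channel(), 0.5))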
The CSMA method does not specify the procedure following a collision. Carrier sense
multiple access with collision detection (CSMA/CD) augments the algorithm to handle the
collision.
In this method, a station monitors the medium after it sends a frame to see if the transmission
was successful. If so, the station is finished. If, however, there is a collision, the frame is sent
again.
To better understand CSMA/CD, let us look at the first bits transmitted by the two stations involved in the collision. Although each station continues to send bits in the frame until it detects the collision, we show what happens as the first bits collide. In the figure, stations A and C are involved in the collision.
At time t1, station A has executed its persistence procedure and starts sending the bits of its frame. At time t2, station C has not yet sensed the first bit sent by A. Station C executes its persistence procedure and starts sending the bits in its frame, which propagate both to the left and to the right. The collision occurs sometime after time t2. Station C detects a collision at time t3 when it receives the first bit of A's frame. Station C immediately (or after a short time, but we assume immediately) aborts transmission. Station A detects the collision at time t4 when it receives the first bit of C's frame; it also immediately aborts transmission. Looking at the figure, we see that A transmits for the duration t4 - t1; C transmits for the duration t3 - t2.
Later we show that, for the protocol to work, the frame transmission time (the length of any frame divided by the bit rate) must be more than either of these durations. At time t4, the transmission of A's frame, though incomplete, is aborted; at time t3, the transmission of C's frame, though incomplete, is aborted. Now that we know the time durations for the two transmissions, we can show a more complete graph in the figure.
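The practical consequence is that the frame transmission time must be at least twice the maximum propagation time, which puts a lower bound on the frame size. A small worked check below uses the classic 10-Mbps Ethernet figures (a 51.2 microsecond round-trip budget, giving the well-known 512-bit / 64-byte minimum frame) purely as an illustration; the function name is our own.

def min_frame_bits(bit_rate_bps, max_prop_time_s):
    # Smallest frame (in bits) whose transmission time is >= 2 * propagation time.
    return 2 * max_prop_time_s * bit_rate_bps

# Classic 10-Mbps Ethernet: worst-case one-way propagation budgeted at 25.6 microseconds.
bits = int(min_frame_bits(10_000_000, 25.6e-6))
print(bits, "bits =", bits // 8, "bytes")   # 512 bits = 64 bytes, the Ethernet minimum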
The basic idea behind CSMA/CD is that a station needs to be able to receive while
transmitting to detect a collision. When there is no collision, the station receives one signal:
its own signal. When there is a collision, the station receives two signals: its own signal and
the signal transmitted by a second station. To distinguish between these two cases, the
received signals in these two cases must be significantly different. In other words, the signal
from the second station needs to add a significant amount of energy to the one created by the
first station.
In a wired network, the received signal has almost the same energy as the sent signal because
either the length of the cable is short or there are repeaters that amplify the energy between
the sender and the receiver. This means that in a collision, the detected energy almost
doubles.
However, in a wireless network, much of the sent energy is lost in transmission. The received
signal has very little energy. Therefore, a collision may add only 5 to 10 percent additional
energy. This is not useful for effective collision detection.
We need to avoid collisions on wireless networks because they cannot be detected. Carrier sense multiple access with collision avoidance (CSMA/CA) was invented for this kind of network. Collisions are avoided through the use of CSMA/CA's three strategies: the interframe space, the contention window, and acknowledgments, as shown in the figure.
Timing in CSMA/CA
Interframe Space (IFS)
First, collisions are avoided by deferring transmission even if the channel is found idle. When an idle channel is found, the station does not send immediately. It waits for a period of time called the interframe space, or IFS. Even though the channel may appear idle when it is sensed, a distant station may have already started transmitting.
The distant station's signal has not yet reached this station. The IFS time allows the front of the signal transmitted by the distant station to reach this station. If after the IFS time the channel is still idle, the station can send, but it still needs to wait a time equal to the contention time (described next). The IFS variable can also be used to prioritize stations or frame types. For example, a station that is assigned a shorter IFS has a higher priority.
In CSMA/CA, the IFS can also be used to define the priority of a station or a frame.
Contention Window
The contention window is an amount of time divided into slots. A station that is ready to send
chooses a random number of slots as its wait time. The number of slots in the window
changes according to the binary exponential back-off strategy. This means that it is set to one
slot the first time and then doubles each time the station cannot detect an idle channel after
the IFS time. This is very similar to the p-persistent method except that a random outcome
defines the number of slots taken by the waiting station. One interesting point about the
contention window is that the station needs to sense the channel after each time slot.
However, if the station finds the channel busy, it does not restart the process; it just stops the
timer and restarts it when the channel is sensed as idle. This gives priority to the station with
the longest waiting time.
Acknowledgment
With all these precautions, there still may be a collision resulting in destroyed data. In
addition, the data may be corrupted during the transmission. The positive acknowledgment
and the time-out timer can help guarantee that the receiver has received the frame.
Procedure
Figure shows the procedure. Note that the channel needs to be sensed before and after the IFS. The channel also needs to be sensed during the contention time. For each time slot of the contention window, the channel is sensed. If it is found idle, the timer continues; if the channel is found busy, the timer is stopped and continues after the channel becomes idle again.
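A sketch of the contention-window behavior described above: the window doubles (binary exponential back-off) each time the channel cannot be used, and the slot countdown pauses while the channel is busy rather than restarting. All names here (sense_idle, CW_MIN, and so on) are our own illustrative choices, not 802.11 identifiers.

import random

CW_MIN = 1          # slots in the first contention window (illustrative)
CW_MAX = 1024       # upper bound on the window size (illustrative)

def contention_backoff(sense_idle, cw=CW_MIN):
    # Count down a randomly chosen number of slots, pausing while the channel is busy.
    slots_left = random.randint(0, cw - 1) if cw > 1 else 0
    while slots_left > 0:
        if sense_idle():
            slots_left -= 1          # timer runs only during idle slots
        # if busy: do nothing -- the timer is frozen, not restarted
    return "transmit"

def next_window(cw):
    # Double the window after an unsuccessful attempt (binary exponential back-off).
    return min(cw * 2, CW_MAX)

# Toy run: a channel that is idle 70 percent of the time.
print(contention_backoff(lambda: random.random() < 0.7, cw=16))
print(next_window(16))   # -> 32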
CSMA/CA and Wireless Networks
CSMA/CA was mostly intended for use in wireless networks. The procedure described above, however, is not sophisticated enough to handle some particular issues related to wireless networks, such as hidden terminals or exposed terminals. We will see how these issues are solved by augmenting the above protocol with handshaking features.
Developed in the mid-1970s by researchers at the Xerox Palo Alto Research Center (PARC), the Ethernet eventually became the dominant local area networking technology, emerging from a pack of competing technologies. Today, it competes mainly with 802.11 wireless networks but remains extremely popular in campus networks and data centers. The more general name for the technology behind the Ethernet is Carrier Sense, Multiple Access with Collision Detect (CSMA/CD).
As indicated by the CSMA name, the Ethernet is a multiple-access network, meaning that a set of nodes sends and receives frames over a shared link. You can, therefore, think of an Ethernet as being like a bus that has multiple stations plugged into it. The "carrier sense" in CSMA/CD means that all the nodes can distinguish between an idle and a busy link, and "collision detect" means that a node listens as it transmits and can therefore detect when a frame it is transmitting has interfered (collided) with a frame transmitted by another node.
The Ethernet has its roots in an early packet radio network, called Aloha, developed at the University
of Hawaii to support computer communication across the Hawaiian Islands. Like the Aloha network,
the fundamental problem faced by the Ethernet is how to mediate access to a shared medium fairly
and efficiently (in Aloha, the medium was the atmosphere, while in the Ethernet the medium is a coax
cable). The core idea in both Aloha and the Ethernet is an algorithm that controls when each node can
transmit. Interestingly, modern Ethernet links are now largely point to point; that is, they connect one host to an Ethernet switch, or they interconnect switches. Hence, "multiple access" techniques are not used much in today's Ethernets. At the same time, wireless networks have become enormously popular, so the multiple access technologies that started in Aloha are today again mostly used in wireless networks such as 802.11 (Wi-Fi) networks.
PHYSICAL PROPERTIES:
Ethernet repeater
Ethernet HUB
Access Protocol
We now turn our attention to the algorithm that controls access to a shared Ethernet link. This algorithm is commonly called the Ethernet's media access control (MAC). It is typically implemented in hardware on the network adaptor. We will not describe the hardware per se, but instead focus on the algorithm it implements. First, however, we describe the Ethernet's frame format and addresses.
Frame Format
Each Ethernet frame is defined by the format given in the figure. The 64-bit preamble allows the receiver to synchronize with the signal; it is a sequence of alternating 0s and 1s. Both the source and destination addresses are 48 bits long.
Fig: Ethernet frame format
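As a memory aid, the classic Ethernet frame layout can be written out field by field. The sketch below lists the commonly cited field widths (preamble, two 48-bit addresses, a 16-bit type field, the body, and a 32-bit CRC); it is our own summary of the usual figure, not a parser for any particular driver.

# Classic Ethernet frame fields and their sizes.
ETHERNET_FRAME_FIELDS = [
    ("preamble",     "64 bits"),       # alternating 0s and 1s, for receiver synchronization
    ("dest address", "48 bits"),
    ("src address",  "48 bits"),
    ("type",         "16 bits"),       # demultiplexing key for the higher-layer protocol
    ("body",         "46-1500 bytes"),
    ("CRC",          "32 bits"),
]

for name, size in ETHERNET_FRAME_FIELDS:
    print(f"{name:13s} {size}")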
STANDARDS:
BROADCAST: a broadcast address is used to send a message to every station on the network; any station that needs the message can accept it.
The most widely used wireless links today are usually asymmetric; that is, the two endpoints are usually different kinds of nodes. One endpoint, the base station, usually has no mobility but has a wired (or at least high-bandwidth) connection to the Internet or other networks.
WI-FI (802.11):
This section takes a closer look at a specific technology centered on the emerging IEEE 802.11 standard, also known as Wi-Fi. Wi-Fi is technically a trademark, owned by a trade group called the Wi-Fi Alliance, that certifies product compliance with 802.11. 802.11 is designed for use in a limited geographical area (homes, office buildings, campuses), and its primary challenge is to mediate access to a shared communication medium, in this case signals propagating through space.
Physical Properties:
802.11 runs over six different physical layer protocols. Five are based on spread spectrum radio, and one on diffused infrared (and is of historical interest only at this point). The fastest runs at a maximum of 54 Mbps. The original 802.11 standard defined two radio-based physical layer standards, one using frequency hopping and the other using direct sequence. Both provide up to 2 Mbps. The physical layer standard 802.11b was added later.
Collision detection, as used on an Ethernet, is not practical on a wireless link, because the power generated by a node's own transmitter is much higher than any received signal is likely to be and so swamps the receiving circuitry. The reason why a node may not receive transmissions from another node is because that node may be too far away or blocked by an obstacle. This situation is a bit more complex than it first appears, as the following discussion will illustrate. Consider the situation depicted in the figure, where A and C are both within range of B but not of each other. Suppose both A and C want to communicate with B, and so they each send it a frame. A and C are unaware of each other since their signals do not carry that far. These two frames collide with each other at B, but unlike on an Ethernet, neither A nor C is aware of this collision. A and C are said to be hidden nodes with respect to each other.
Fig: The hidden node problem. Although A and C are hidden from each other, their signals can collide at B. (B's reach is not shown.)
It would be a mistake, however, for C to conclude that it cannot transmit to anyone just because it can hear B's transmission. For example, suppose C wants to transmit to node D. This is not a problem since C's transmission to D will not interfere with A's ability to receive from B. (It would interfere with A sending to B, but B is transmitting in our example.)
802.11 addresses these problems by using CSMA/CA, where the CA stands for collision
avoidance, in contrast to the collision detection of CSMA/CD used on Ethernets. There are a
few pieces to make this work. The Carrier Sense part seems simple enough: Before sending a
packet, the transmitter checks if it can hear any other transmissions; if not, it sends. However,
because of the hidden terminal problem, just waiting for the absence of signals from other
transmitters does not guarantee that a collision will not occur from the perspective of the
receiver. For this reason, one part of CSMA/CA is an explicit ACK from the receiver to the
sender. If the packet was successfully decoded and passed its CRC at the receiver, the
receiver sends an ACK back to the sender.
Fig: The exposed node problem. Although B and C are exposed to each other's signals, there is no interference if B transmits to A while C transmits to D. (A's and D's reaches are not shown.)
Note that if a collision does occur, it will render the entire packet useless. For this reason, 802.11 adds an optional mechanism called RTS-CTS (Ready to Send / Clear to Send). This goes some way toward addressing the hidden terminal problem. The sender sends an RTS (a short packet) to the intended receiver, and if that packet is received successfully the receiver responds with another short packet, the CTS.
Of course, two nodes might detect an idle link and try to transmit an RTS frame at the same time, causing their RTS frames to collide with each other. The senders realize the collision has happened when they do not receive the CTS frame after a period of time, in which case they each wait a random amount of time before trying again. After a successful RTS-CTS exchange, the sender sends its data packet and, if all goes well, receives an ACK for that packet. In the absence of a timely ACK, the sender will try to request usage of the channel again, using the same process described above. By this time, of course, other nodes may again be trying to get access to the channel as well.
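The sender-side flow described above can be summarized as a small loop: send RTS, wait for CTS, send data, wait for ACK, and back off randomly on any timeout. The sketch below is our own simplified illustration with a simulated, lossy medium; none of the names come from the 802.11 standard.

import random, time

def rts_cts_send(exchange_ok, max_attempts=5):
    # Simplified 802.11-style sender: RTS/CTS, then DATA/ACK, random back-off on failure.
    for attempt in range(1, max_attempts + 1):
        if exchange_ok("RTS/CTS") and exchange_ok("DATA/ACK"):
            return f"delivered on attempt {attempt}"
        # No CTS or no ACK arrived in time: wait a random interval before retrying.
        time.sleep(random.uniform(0, 0.01 * attempt))
    return "gave up"

# Toy medium in which any exchange fails 30 percent of the time.
print(rts_cts_send(lambda step: random.random() > 0.3))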
Distribution System
As described so far, 802.11 would be suitable for a network with a mesh (ad hoc) topology,
and development of an 802.11s standard for mesh networks is nearing completion. At the
current time, however, nearly all 802.11 networks use a base-station-oriented topology.
Instead of all nodes being created equal, some nodes are allowed to roam (e.g., your laptop)
and some are connected to a wired network infrastructure. 802.11 calls these base stations
access points (APs), and they are connected to each other by a so-called distribution system.
Figure 2.32 illustrates a distribution system that connects three access points, each of which
services the nodes in some region. Each access point operates on some channel in the
appropriate frequency range, and each AP will typically be on a different channel than its
neighbors.
The details of the distribution system are not important to this discussion—it could be an
Ethernet, for example. The only important point is that the distribution network operates at
the link layer, the same protocol layer as the wireless links. In other words, it does not depend
on any higher-level protocols (such as the network layer). Although two nodes can communicate directly with each other if they are within reach of each other, the idea behind this configuration is that each node associates itself with one access point. For node A to communicate with node E, for example, A first sends a frame to its access point (AP-1), which forwards the frame across the distribution system to AP-3, which finally transmits the frame to E.
Node Mobility
Consider the situation shown in Figure 2.33, where node C moves from the cell serviced by AP-1 to the cell serviced by AP-2. As it moves, it sends Probe frames, which are eventually answered by a Probe Response from AP-2; C then sends an Association Request to AP-2, which replies with an Association Response.
The mechanism just described is called active scanning, since the node is actively searching for an access point. APs also periodically send a Beacon frame that advertises the capabilities of the access point; these include the transmission rates supported by the AP. This is called passive scanning, and a node can change to this AP based on the Beacon frame simply by sending an Association Request frame back to the access point.
Frame Format
Most of the 802.11 frame format, which is depicted in the figure, is exactly what we would expect. The frame contains the source and destination node addresses, each of which is 48 bits long; up to 2312 bytes of data; and a 32-bit CRC. The Control field contains three subfields of interest (not shown): a 6-bit Type field that indicates whether the frame carries data, is an RTS or CTS frame, or is being used by the scanning algorithm, and a pair of 1-bit fields, called ToDS and FromDS, that are described below.
The peculiar thing about the 802.11 frame format is that it contains four, rather than two, addresses. How these addresses are interpreted depends on the settings of the ToDS and FromDS bits in the frame's Control field. This is to account for the possibility that the frame had to be forwarded across the distribution system, which would mean that the original sender is not necessarily the same as the most recent transmitting node. Similar reasoning applies to the destination address. In the simplest case, when one node is sending directly to another, both the DS bits are 0, Addr1 identifies the target node, and Addr2 identifies the source node. In the most complex case, both DS bits are set to 1, indicating that the message went from a wireless node onto the distribution system, and then from the distribution system to another wireless node. With both bits set, Addr1 identifies the ultimate destination, Addr2 identifies the immediate sender (the one that forwarded the frame from the distribution system to the ultimate destination), Addr3 identifies the intermediate destination (the one that accepted the frame from a wireless node and forwarded it across the distribution system), and Addr4 identifies the original source.
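The address interpretation described above can be condensed into a small lookup keyed on the (ToDS, FromDS) bits. The table below covers only the two cases the notes discuss (both bits 0, and both bits 1); it is our own summary, not an excerpt from the standard.

# (ToDS, FromDS) -> meaning of the address fields, for the two cases covered above.
ADDRESS_ROLES = {
    (0, 0): {"Addr1": "target node", "Addr2": "source node"},
    (1, 1): {"Addr1": "ultimate destination",
             "Addr2": "immediate sender (AP that forwarded from the DS)",
             "Addr3": "intermediate destination (AP that put the frame on the DS)",
             "Addr4": "original source"},
}

for bits, roles in ADDRESS_ROLES.items():
    print(f"ToDS={bits[0]} FromDS={bits[1]}:")
    for field, role in roles.items():
        print(f"  {field}: {role}")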
While cellular telephone technology had its beginnings around voice communication, data services based on cellular standards have become increasingly popular (thanks in part to the increasing capabilities of mobile phones or smartphones).
One drawback compared to the technologies just described has tended to be the cost to users, due in part to cellular's use of licensed spectrum (which has historically been sold off to cellular phone operators for astronomical sums). The frequency bands that are used for cellular telephones (and now for cellular data) vary around the world. In Europe, for example, the main bands for cellular phones are at 900 MHz and 1800 MHz; in North America, 850 MHz and 1900 MHz are common.
The geographic area served by a base station’s antenna is called a cell. A base station could
serve a single cell or use multiple directional antennas to serve multiple cells. Cells don’t
have crisp boundaries, and they overlap. Where they overlap, a mobile phone could
potentially communicate with multiple base stations. This is somewhat similar to the 802.11
picture shown in Figure . At any time, however, the phone is in communication with, and
under the control of, just one base station.
As the phone begins to leave a cell, it moves into an area of overlap with one or more other cells. The current base station senses the weakening signal from the phone and gives control of the phone to whichever base station is receiving the strongest signal from it. If the phone is involved in a call at the time, the call must be transferred to the new base station in what is called a handoff. As we noted above, there is not one unique standard for cellular, but rather a collection of competing technologies that support data traffic in different ways and deliver different speeds. These technologies are loosely categorized by generation.
The first generation (1G) was analog, and thus of limited interest from a data
communications perspective.
CDMA uses a form of spread spectrum to multiplex the traffic from multiple devices into a
common wireless channel. Each transmitter uses a pseudorandom chipping code at a
frequency that is high relative to the data rate and sends the exclusive OR of the data with the
chipping code. Each transmitter’s code follows a sequence that is known to the intended
receiver—for example, a base station in a cellular network assigns a unique code sequence to
each mobile device with which it is currently associated.
When a large number of devices broadcast their signals in the same cell and frequency band,
the sum of all the transmissions looks like random noise. However, a receiver who knows the
code being used by a given transmitter can extract that transmitter’s data from the apparent
noise. Compared to other multiplexing techniques, CDMA has some good properties for bursty data. There is no hard limit on how many users can share a piece of spectrum; you just need to make sure they all have unique chipping codes. The bit error rate does, however, go up with increasing numbers of concurrent transmitters.
This makes it very well suited for applications where many users exist but at any given instant many of them are not transmitting, which pretty well describes many data applications such as web surfing. And, in practical systems where it is hard to achieve very tight synchronization among all the mobile handsets, CDMA achieves better spectral efficiency (i.e., it gets closer to the theoretical limits of the Shannon-Hartley theorem) than other multiplexing schemes like TDMA.
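A toy illustration of the chipping idea described above: each data bit is spread by a fast code (here modelled as multiplying ±1 values, the bipolar equivalent of XOR), and a receiver that knows the code can correlate against it to recover the data while other codes look like noise. This is our own simplified sketch, not an implementation of any cellular standard; real CDMA uses carefully designed, near-orthogonal codes.

def spread(bit, code):
    # Spread one data bit (+1 or -1) over a chip sequence.
    return [bit * c for c in code]

def despread(received, code):
    # Correlate the summed channel signal with a sender's code to recover its bit.
    corr = sum(r * c for r, c in zip(received, code)) / len(code)
    return 1 if corr > 0 else -1

# Two orthogonal (Walsh) chipping codes, one per transmitter.
code_A = [+1, +1, +1, +1]
code_B = [+1, -1, +1, -1]

bit_A, bit_B = +1, -1                     # the bits each station wants to send
channel = [a + b for a, b in zip(spread(bit_A, code_A), spread(bit_B, code_B))]

print(despread(channel, code_A))          # -> +1  (A's bit recovered)
print(despread(channel, code_B))          # -> -1  (B's bit recovered)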
In the simplest terms, a switch is a mechanism that allows us to interconnect links to form a larger network. A switch is a multi-input, multi-output device that transfers packets from an input to one or more outputs. Thus, a switch adds the star topology (see figure) to the point-to-point link, bus (Ethernet), and ring topologies established earlier.
A star topology has several attractive properties:
Even though a switch has a fixed number of inputs and outputs, which limits the
number of hosts that can be connected to a single switch, large networks can be built
by interconnecting a number of switches.
We can connect switches to each other and to hosts using point-to-point links, which
typically means that we can build networks of large geographic scope.
Adding a new host to the network by connecting it to a switch does not necessarily
reduce the performance of the network for other hosts already connected.
A host can send a packet anywhere at any time, since any packet that turns up at a
switch can be immediately forwarded (assuming a correctly populated forwarding
table). For this reason, datagram networks are often called connectionless; this
contrasts with the connection-oriented networks described below, in which some
connection state needs to be established before the first data packet is sent.
When a host sends a packet, it has no way of knowing if the network is capable of
delivering it or if the destination host is even up and running.
Each packet is forwarded independently of previous packets that might have been sent
to the same destination. Thus, two successive packets from host A to host B may
follow completely different paths (perhaps because of a change in the forwarding
table at some switch in the network).
A switch or link failure might not have any serious effect on communication if it is
possible to find an alternate route around the failure and to update the forwarding
table accordingly.
Setup Request
Switch 1 receives connection setup request frame from host A.
It knows that frames for host B should be forwarded on port 3. Creates an entry in its VC
table for the new connection with incoming port=1 and outgoing port=3.
Chooses an unused VCI for frames to host B, say 14 as incoming VCI.
Outgoing VCI is unknown (left blank) and the frame is forwarded to switch 2.
Similarly entries are made at other switches as frame is forwarded to destination.
Destination B accepts the setup request frame. Assigns an unused VCI, say 77, for
frames that come from host A.
Acknowledgment
Host B sends an acknowledgment to switch 3.
The ACK frame carries source & destination addresses and chosen VCI by host B.
Switch 3 uses this VCI, i.e., 77 as outgoing VCI and completes VC table entry.
Similarly other switches fill up outgoing VCI and forward the ACK.
Finally switch 1 sends an acknowledgment to source host A containing VCI as 14.
Source host A uses 14 as its outgoing VCI for data frames to be sent to destination B.
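The result of the setup and acknowledgment phases is one VC-table entry per switch. The sketch below builds the entry at switch 1 from the example above (incoming port 1, incoming VCI 14, outgoing port 3) and fills in the outgoing VCI when the ACK arrives; the dictionary layout, function names, and the downstream VCI value 22 are our own illustrative choices.

# VC table at switch 1, keyed by (incoming port, incoming VCI).
vc_table = {}

def setup(in_port, in_vci, out_port):
    # Setup request seen: record the entry; the outgoing VCI is not yet known.
    vc_table[(in_port, in_vci)] = {"out_port": out_port, "out_vci": None}

def ack(in_port, in_vci, out_vci):
    # The ACK travelling back fills in the outgoing VCI chosen downstream.
    vc_table[(in_port, in_vci)]["out_vci"] = out_vci

setup(in_port=1, in_vci=14, out_port=3)   # host A's setup request; VCI 14 chosen locally
ack(in_port=1, in_vci=14, out_vci=22)     # hypothetical VCI chosen by switch 2
print(vc_table)                           # {(1, 14): {'out_port': 3, 'out_vci': 22}}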
A bridge is a node that forwards frames from one Ethernet to the other. This node operates in promiscuous mode, accepting all frames transmitted on either of the Ethernets, so it can forward them to the other. A bridge connects the LANs through its ports, and the attached LANs are identified by port number. A collection of LANs connected by bridges is known as an extended LAN.
LEARNING BRIDGES:
A bridge maintains a forwarding table that maps each host to a port number. Having a human maintain this table is quite a burden, so a bridge can learn this information for itself. The idea is for each bridge to inspect the source address in all the frames it receives. When a bridge first boots, this table is empty; entries are added over time. Also, a timeout is associated with each entry, and the bridge discards the entry after a specified period of time.
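A minimal learning-bridge sketch: the source address of every arriving frame teaches the bridge which port that host lives on, and a frame is forwarded only out the learned port (or flooded if the destination is still unknown or its entry has timed out). The names and the timeout value are our own choices for illustration.

import time

TIMEOUT = 15 * 60          # seconds before a learned entry is discarded (illustrative)
table = {}                 # host address -> (port, time learned)

def frame_arrived(src, dst, in_port, all_ports=(1, 2)):
    table[src] = (in_port, time.time())                 # learn/refresh the source entry
    entry = table.get(dst)
    if entry and time.time() - entry[1] < TIMEOUT:
        out = entry[0]
        return [] if out == in_port else [out]          # already on the right LAN: drop
    return [p for p in all_ports if p != in_port]       # unknown destination: flood

print(frame_arrived("A", "B", in_port=1))   # B unknown -> flood out port 2
print(frame_arrived("B", "A", in_port=2))   # A learned on port 1 -> forward to port 1 only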
If the extended LAN has loops, then frames can potentially circulate through the extended LAN forever. There are two reasons an extended LAN might have a loop in it.
One possibility is that the network is managed by more than one administrator; no single person knows the entire configuration of the network. Second, loops are sometimes built into the network deliberately, to provide redundancy in case of failure.
Service Model
The main concern in defining a service model for an internetwork is that we can provide a host-to-host service only if this service can somehow be provided over each of the underlying physical networks. For example, it would be no good deciding that our internetwork service model was going to provide guaranteed delivery of every packet in 1 ms or less if there were underlying network technologies that could arbitrarily delay packets. The IP service model can be thought of as having two parts: an addressing scheme, which provides a way to identify all hosts in the internetwork, and a datagram (connectionless) model of data delivery. This service model is sometimes called best effort because, although IP makes every effort to deliver datagrams, it makes no guarantees.
The next field, HLEN, specifies the length of the header in 32-bit words. When there are no options, which is most of the time, the header is 5 words (20 bytes) long. The 8-bit type of service (TOS) field has had a number of different definitions over the years, but its basic function is to allow packets to be treated differently based on application needs. For example, the TOS value might determine whether or not a packet should be placed in a special queue that receives low delay. The next 16 bits of the header contain the Length of the datagram, including the header; unlike HLEN, this field counts bytes rather than words.
The central idea here is that every network type has a maximum transmission unit
(MTU), which is the largest IP datagram that it can carry in a frame.
The unfragmented packet has 1400 bytes of data and a 20-byte IP header. When the packet arrives at R2, which has an MTU of 532 bytes, it has to be fragmented. A 532-byte MTU leaves 512 bytes for data after the 20-byte IP header, so the first fragment contains 512 bytes of data. The router sets the M bit in the Flags field, meaning that there are more fragments to follow, and it sets the Offset to 0, since this fragment contains the first part of the original datagram.
The data carried in the second fragment starts with the 513th byte of the original data, so the Offset field in this header is set to 64, which is 512/8. Why the division by 8? Because the designers of IP decided that fragmentation should always happen on 8-byte boundaries, which means that the Offset field counts 8-byte chunks, not bytes. The third fragment contains the last 376 bytes of data, and the offset is now 2 × 512/8 = 128. Since this is the last fragment, the M bit is not set.
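The fragment sizes and offsets above can be checked with a few lines of arithmetic: each fragment's payload is the largest multiple of 8 bytes that fits in (MTU - header), the Offset counts 8-byte units, and only the last fragment has M = 0. A small sketch with our own function name.

def fragment(data_len, mtu, header=20):
    # Return (payload_bytes, offset_in_8_byte_units, more_fragments_flag) per fragment.
    max_payload = ((mtu - header) // 8) * 8        # payloads must be multiples of 8 bytes
    frags, offset = [], 0
    while offset < data_len:
        payload = min(max_payload, data_len - offset)
        more = 1 if offset + payload < data_len else 0
        frags.append((payload, offset // 8, more))
        offset += payload
    return frags

print(fragment(1400, 532))   # [(512, 0, 1), (512, 64, 1), (376, 128, 0)]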
The class of an IP address is identified in the most significant few bits. If the first bit is 0, it is a class A address. If the first bit is 1 and the second bit is 0, it is a class B address. If the first two bits are 1 and the third bit is 0, it is a class C address.
Class A addresses have 7 bits for the network part and 24 bits for the host part; network numbers 0 and 127 are reserved. Class B addresses have 14 bits for the network part and 16 bits for the host part. Class C addresses have 21 bits for the network part and 8 bits for the host part. There are approximately 4 billion possible IP addresses: one half for class A, one quarter for class B, and one eighth for class C. There are also class D and class E addresses, but class D is for multicast and class E is currently unused. IP addresses are written as four decimal integers separated by dots. Each integer represents the decimal value contained in 1 byte of the address, starting at the most significant.
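The class rules above amount to checking the leading bits of the first byte of the address. A small sketch, with a helper name of our own choosing.

def ip_class(address):
    # Classify a dotted-decimal IPv4 address by its leading bits (classful rules).
    first = int(address.split(".")[0])
    if first < 128:
        return "A"        # leading bit  0
    if first < 192:
        return "B"        # leading bits 10
    if first < 224:
        return "C"        # leading bits 110
    if first < 240:
        return "D"        # leading bits 1110 (multicast)
    return "E"            # leading bits 1111 (currently unused)

for addr in ("10.0.0.1", "128.96.33.81", "192.4.16.1", "224.0.0.5"):
    print(addr, "-> class", ip_class(addr))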
Bitwise analysis shows that the 20 most significant bits (11000000 00000100 0001) are the same. Thus a 20-bit network number is created, i.e., something between a class B and a class C network number.
Higher address efficiency is thus achieved by handing out chunks of addresses smaller than a class B network, and a single network prefix can be used in the forwarding table.
CIDR uses a new type of notation to represent network numbers, or prefixes. A prefix is written as /X, where X is the prefix length in bits; for example, 192.4.16/20.
Addresses in a block must be contiguous, and the number of addresses must be a power of 2.
Example
When the different customers connected to a service provider are assigned prefixes that share a common prefix, further aggregation can be achieved.
Consider an ISP providing Internet connectivity to 8 customers, where all customer prefixes start with the same 21 bits.
Since all customers are reachable through the same provider network, a single route is advertised by the ISP with the common 21-bit prefix that all customers share.
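Aggregation can be demonstrated by checking that a set of customer networks all fall inside one shorter provider prefix, using Python's standard ipaddress module. The specific prefixes below are made-up examples, not the ones from the (unshown) figure.

import ipaddress

provider = ipaddress.ip_network("192.4.16.0/21")          # the common 21-bit prefix
customers = [ipaddress.ip_network(f"192.4.{third}.0/24") for third in range(16, 24)]

# Every /24 customer block is contained in the single /21 the ISP advertises.
print(all(c.subnet_of(provider) for c in customers))       # True
print(provider, "covers", len(customers), "customer /24s")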
IP datagrams contain IP addresses, but the physical interface hardware on the host or router can only understand the addressing scheme of that particular network. So the IP address must be translated to a link-level address.
The simplest way to map an IP address into a physical network address is to encode a host's physical address in the host part of its IP address. For example, a host with physical address 00100001 01001001 (which has the decimal value 33 in the upper byte and 81 in the lower byte) might be given the IP address 128.96.33.81. But a class C address has only 8 bits for the host part, which is not enough for a 48-bit Ethernet address.
A more general solution would be for each host to maintain a table of address pairs; that is, the table would map IP addresses into physical addresses.
While this table could be centrally managed by a system administrator and then copied to each host on the network, a better approach would be for each host to dynamically learn the contents of the table using the network. This can be accomplished by the Address Resolution Protocol (ARP).
The goal of ARP is to enable each host on a network to build up a table of mappings between IP addresses and link-level addresses.
Since these mappings may change over time, the entries are timed out periodically and removed. This happens on the order of every 15 minutes. The set of mappings currently stored in a host is known as the ARP cache or ARP table.
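The ARP cache described above is just a mapping from IP address to MAC address whose entries expire (here, after 15 minutes). A minimal sketch; the class and method names are our own.

import time

class ArpCache:
    # IP -> MAC mappings that expire, roughly every 15 minutes as the notes describe.
    def __init__(self, timeout=15 * 60):
        self.timeout = timeout
        self.entries = {}                      # ip -> (mac, time added)

    def add(self, ip, mac):
        self.entries[ip] = (mac, time.time())

    def lookup(self, ip):
        entry = self.entries.get(ip)
        if entry and time.time() - entry[1] < self.timeout:
            return entry[0]
        return None        # missing or stale: the host must broadcast an ARP request

cache = ArpCache()
cache.add("128.96.33.81", "08:00:2b:e4:b1:02")   # hypothetical MAC for illustration
print(cache.lookup("128.96.33.81"))    # known -> MAC returned
print(cache.lookup("128.96.33.1"))     # unknown -> None, triggers an ARP request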
While communicating, a host needs the Layer-2 (MAC) address of the destination machine that belongs to the same broadcast domain or network. A MAC address is physically burnt into the Network Interface Card (NIC) of a machine and does not change on its own, while an IP address is assigned in software and is only rarely changed. If the NIC is replaced because of some fault, however, the MAC address of the host changes. Either way, for Layer-2 communication to take place, a mapping between the two addresses is required.
Once the host gets the destination MAC address, it can communicate with the remote host using the Layer-2 link protocol. This MAC-to-IP mapping is saved in the ARP cache of both the sending and receiving hosts. The next time they need to communicate, they can directly refer to their respective ARP caches. Reverse ARP is a mechanism where a host knows the MAC address of a remote host but needs to learn its IP address in order to communicate.
The above figure shows the ARP packet format for IP to Ethernet address mappings.
Internet Control Message Protocol (ICMP) is used to report error messages to the source host and diagnose network problems. An ICMP message is encapsulated within an IP packet.
Error Reporting
Destination Unreachable: When a router cannot route a datagram, the datagram is discarded and a destination-unreachable message is sent to the source host.
Source Quench: When a router or host discards a datagram due to congestion, it sends a source-quench message to the source host. This message acts as flow control.
Time Exceeded: A router discards a datagram when the TTL field becomes 0, and a time-exceeded message is sent to the source host.
Parameter Problem: If a router discovers an ambiguous or missing value in any field of the datagram, it discards the datagram and sends a parameter-problem message to the source.
Redirection: Redirect messages are sent by the default router to inform the source host to update its forwarding table when the packet is routed on a wrong path.
Query Messages
Echo Request & Reply: The combination of echo-request and echo-reply messages determines whether two systems can communicate at the IP level.
Timestamp Request & Reply: Two machines can use the timestamp-request and timestamp-reply messages to determine the round-trip time (RTT).
Address Mask Request & Reply: To obtain its subnet mask, a host sends an address-mask-request message to the router, which responds with an address-mask-reply message.
Router Solicitation & Advertisement: A host broadcasts a router-solicitation message to learn about routers; a router broadcasts its routing information with a router-advertisement message.
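Since every ICMP message is carried inside an IP packet, its header is easy to build by hand: a type, a code, a 16-bit one's-complement checksum, and (for echo) an identifier and sequence number. The sketch below constructs an echo-request (ping) header without sending it; actually sending it would require a raw socket and privileges, so we only print the bytes. The function names are our own.

import struct

def icmp_checksum(data):
    # 16-bit one's-complement sum of the message, as required by ICMP.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)      # fold the carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident=1, seq=1, payload=b"ping"):
    # Build an ICMP echo-request: type 8, code 0, checksum, identifier, sequence number.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

print(echo_request().hex())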