
CN Module 3

Datagrams and Virtual Circuits


Introduction
• A packet-switching network is usually represented as a cloud with multiple input
sources and output destinations as shown

• The network can be viewed as a generalization of a physical cable in the sense
of providing connectivity among multiple users.
• Unlike a cable, a packet-switching network is geographically distributed and
consists of a graph of transmission lines (links) interconnected by packet
switches (nodes). These transmission and switching resources are configured to
enable the flow of information among users
• Packet-switching networks provide for the interconnection of sources to
destinations on a dynamic basis.
• Resources are typically allocated to an information flow only when needed. In
this manner the resources are shared among the community of users resulting
in efficiency and lower costs.
• Two approaches to transferring information over a packet-switching network:
o Connection-oriented network: involves setting up a connection across the
network before information can be transferred. The setup procedure
typically involves the exchange of signalling messages and the allocation of
resources along the path from the source to the destination for the
duration of the connection.
o Datagram / connectionless network: does not involve setting
up connections. Instead, a packet of information is routed independently
from node to node until the packet arrives at its destination.
• Both approaches involve the use of packet switches to direct packets across the
network.

Datagram / Connectionless Packet Switching


• Message Switching
o Packet switching has its origin in message switching, where a message is
relayed from one switch to another until the message arrives at its
destination, as shown.

o A message switch typically operates in the store-and-forward fashion
whereby a message has to be completely received (and thus stored) by the
switch before it can be forwarded to the next switch.
o At the source each message has a header attached to it to provide source and
destination addresses. CRC checkbits are attached to detect errors.
o The message is transmitted in its entirety from one switch to the next switch.
Each switch performs an error check, and if no errors are detected, the
switch examines the header to determine the next hop in the path to the
destination. If errors are detected, a retransmission may be requested.
o After the next hop is determined, the message waits for transmission over
the corresponding transmission line. Because the transmission lines are
shared, the message may have to wait until previously queued messages are
transmitted.
o Message switching does not involve a call setup. Message switching can
achieve a high utilization of the transmission line. This increased utilization is
achieved at the expense of queueing delays. In addition, loss of messages
may occur when a switch has insufficient buffering to store the arriving
message.
o The below figure shows the minimum delay that is incurred when a message
is transmitted over a path that involves two intermediate switches.

o The message must first traverse the link that connects the source to the first
switch.
o We assume that each link has a propagation delay of τ seconds. We also
assume that the message has a transmission time of T seconds.
o The message must next traverse the link connecting the two switches, and
then it must traverse the link connecting the second switch and the
destination.
o Hence the minimum end-to-end message delay is 3τ + 3T. More generally, for
n intermediate switches (i.e., n + 1 hops), the minimum delay is
(n + 1)τ + (n + 1)T
o Note that this delay does not take into account any queueing delays that may
be incurred in the various links waiting for prior messages to be transmitted.
It also does not take into account the times required to perform the error
checks or any associated retransmissions.
o Disadvantage of Message Switching: It is not suitable for interactive
applications because it allows the transmission of very long messages that
can impose very long waiting delays on other messages.
o Hence, by placing a maximum length on the size of the blocks that are
transmitted, packet switching limits the maximum delay that can be
imposed by a single packet on other packets. Thus, packet switching is more
suitable than message switching for interactive applications
• Datagram / Connectionless Packet Switching
o In this approach, each packet is routed independently through the network.
Each packet has an attached header that provides all of the information
required to route the packet to its destination.
o When a packet arrives at a packet switch, the destination address in the
header is examined to determine the next hop in the path to the destination.

o The packet is then placed in a queue to wait until the given transmission line
becomes available. By sharing the transmission line among multiple packets,
packet switching can achieve high utilization at the expense of packet
queueing delays.
o Because each packet is routed independently, packets from the same source
to the same destination may traverse different paths through the network as
shown.
o For example, the routes may change in response to a network fault. Thus,
packets may arrive out of order, and re-sequencing may be required at
the destination.
o The below figure shows the minimum delay that is incurred by
transmitting a message that is broken into three separate packets.

o We neglect the overhead due to headers and suppose that each packet
requires P = T/3 seconds to transmit. The three packets are transmitted
successively from the source to the first packet switch.
o The first packet arrives at the first switch after τ + P seconds. It can then
begin transmission over the next hop after a brief processing time. The first
packet is received at the second packet switch at time 2τ + 2P. Then the
packet begins transmission over the final hop. The first packet then arrives at
the destination at time 3τ + 3P.
o Hence, the final packet arrives at the destination at time 3τ + 3P + 2P =
3τ + 5P = 3τ + T + 2P, which is less than the 3τ + 3T delay incurred with
message switching.
o In general, if the path followed by a sequence of packets consists of L hops
(L − 1 intermediate switches), then the minimum delay incurred by a message
that consists of k packets is given by
▪ Lτ + LP + (k − 1)P
o In contrast, the delay incurred using message switching is Lτ + LT = Lτ + LkP
(since T = kP). A small sketch comparing the two expressions follows.
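To make the comparison concrete, here is a minimal Python sketch (not part of the
original notes) that evaluates both closed-form expressions above; the values of
τ, P, L and k are illustrative assumptions.

# Minimal sketch comparing the two minimum-delay formulas above.
# tau, P, L and k are illustrative assumptions, not values from the notes.

def packet_switching_delay(tau, P, L, k):
    """Minimum delay for a k-packet message over L hops (datagram switching)."""
    return L * tau + L * P + (k - 1) * P

def message_switching_delay(tau, P, L, k):
    """Minimum delay for the same message sent as a single block (message switching)."""
    T = k * P                                    # whole-message transmission time
    return L * tau + L * T

tau, P, L, k = 0.001, 0.004, 3, 3                # 1 ms propagation, 4 ms per packet, 3 hops, 3 packets
print(packet_switching_delay(tau, P, L, k))      # 3*tau + 5P ≈ 0.023 s
print(message_switching_delay(tau, P, L, k))     # 3*tau + 3T ≈ 0.039 s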
Virtual-Circuit Packet Switching
• Virtual-circuit packet switching involves the establishment of a fixed path, often
called a virtual circuit or a connection, between a source and a destination prior to
the transfer of packets, as shown.

• The below figure shows the delay that is incurred when a message broken into
three packets is transmitted over a virtual circuit. Observe that the minimum delay
in virtual-circuit packet switching is similar to that in datagram packet switching,
except for an additional delay required to set up the virtual circuit.

• The virtual-circuit setup procedure first determines a path through the network
and then sets parameters in the switches by exchanging connect-request and
connect-confirm messages, as shown

• As in the datagram approach, packets from many other flows share the same
transmission line.
• Unlike the datagram approach, virtual-circuit packet switching guarantees the
order of the packets since packets for the same source-destination pair follow the
same path.
• In datagram packet switching each packet must contain the full address of the
source and destination. In large networks these addresses can require a large
number of bits and result in significant packet overhead and hence wasted
transmission bandwidth. One advantage of virtual-circuit packet switching is that
abbreviated headers can be used.
• The call setup procedure establishes a number of entries in routing tables located
in the various switches along the path. At the input to every switch, the virtual
circuit is identified by a virtual-circuit identifier (VCI). When a packet arrives at an
input port, the VCI in the header is used to access the table.
• The table lookup provides the output port to which the packet is to be forwarded
and the VCI that is to be used at the input port of the next switch. Thus, the call
setup procedure sets up a chain of pointers across the network that direct the flow
of packets in a connection.

• The number of bits required in the header in virtual-circuit switching is reduced to
the number required to represent the maximum number of simultaneous virtual
circuits over an input port. This number is much smaller than the number required
for full destination network addresses. This factor is one of the advantages of
virtual-circuit switching relative to datagram packet switching.
• Another advantage of virtual-circuit packet switching is that resources can be
allocated during call setup. For example, a certain number of buffers may be
reserved for a virtual circuit at every switch along the path, and a certain amount
of bandwidth can be allocated at each link in the path.
• However, virtual-circuit packet switching does have disadvantages relative to the
datagram approach. The switches in the network need to maintain information
about the flows that pass the switches. Thus, the amount of required “state”
information grows very quickly with the number of flows.

Cut-through packet switching


• In a modified form of virtual-circuit packet switching, called cut-through packet
switching, a packet is forwarded as soon as its header is received and the table
lookup is carried out.
• As shown in the below figure, the minimum delay in transmitting the message is
then reduced to approximately the sum of the propagation delays in the various
hops plus the one-message transmission time.

• Cut-through packet switching may be desirable for applications such as speech
transmission, which has a delay requirement but can tolerate some errors. Cut-
through packet switching is also appropriate when the transmission is virtually
error free, as in the case of optical fiber transmission, so that hop-by-hop error
checking is unnecessary.

ROUTING IN PACKET NETWORKS


• Routing is a major component of the network layer and is concerned with the
problem of determining feasible paths (or routes) for packets to follow from each
source to each destination.
• Goals of a routing algorithm:
o Rapid and accurate delivery of packets: A routing algorithm must operate
correctly; i.e., it must be able to find a path to the correct destination if it
exists. In addition, the algorithm should not take an unreasonably long time
to find the path to the destination.
o Adaptability to changes in network topology resulting from node or link
failures: A routing algorithm must be able to adapt and reconfigure the paths
automatically when equipment fails.
o Adaptability to varying source-destination traffic load: Traffic loads change
dynamically. An adaptive routing algorithm would be able to adjust the paths
based on the current traffic loads.
o Ability to route packets away from temporarily congested links: A routing
algorithm should avoid heavily congested links.
o Ability to determine the connectivity of the network: To find optimal paths,
the routing system needs to know the connectivity or reachability
information.
o Ability to avoid routing loops: Inconsistent information in distributed
computation may lead to routing tables that create routing loops. The routing
system should avoid routing loops.
o Low overhead: A routing system typically obtains the connectivity
information by exchanging control messages with other routing systems.
These messages represent an overhead on bandwidth usage that should be
minimized.

Routing Algorithm Classification


Based on Responsiveness:
• Static Routing:
o Static Routing is also known as non-adaptive routing
o It doesn’t change routing table unless the network administrator changes or
modify them manually.
o Static routing does not use complex routing algorithms and provides greater
security than dynamic routing.
• Dynamic Routing:
o Dynamic routing is also known as adaptive routing.
o It changes routing table according to the change in topology.
o Dynamic routing uses complex routing algorithms and it does not provide
high security like static routing.
o In dynamic routing, each node continuously learns the state of the network
by communicating with its neighbours. Thus, a change in a network topology
is eventually propagated to all nodes.

Static Routing | Dynamic Routing
In static routing, routes are user defined. | In dynamic routing, routes are updated according to the topology.
Static routing does not use complex routing algorithms. | Dynamic routing uses complex routing algorithms.
Static routing provides high or more security. | Dynamic routing provides less security.
Static routing is manual. | Dynamic routing is automated.
Static routing is implemented in small networks. | Dynamic routing is implemented in large networks.
In static routing, additional resources are not required. | In dynamic routing, additional resources are required.

Centralized vs Distributed

Centralized Routing | Distributed Routing
The central node computes the routes and uploads the information. | Routes are computed by individual nodes using a distributed algorithm and by communicating with each other.
All state information is sent to the central node. | State information is exchanged by the routers.
Does not scale well. | Scales well.
Consistent results. | Inconsistent results are possible due to loops.
Problems adapting to frequent topology changes. | Adapts to topology and other changes.
There is a single point of failure. | No single point of failure.


Routing Tables
• Once the routing algorithm has determined the set of paths, the path information
is stored in the routing table so that each node (switch or router) knows how to
forward packets
Example of a virtual-circuit packet-switching network: We assume that virtual circuits
are bidirectional and that each direction of a circuit uses the same VCI value

• There are two virtual circuits between node A (host) and node 1 (switch). A packet
sent by node A with VCI 1 in the header will eventually reach node B, while a packet
with VCI 5 from node A will eventually reach node D.
• For each node pair, the VCI has local significance only. In our example VCI 1 from
node A gets translated to 2, and then to 7, and finally to 8 before reaching node B.
When node 1 receives a packet with VCI 1, that node should replace the incoming
VCI with 2 and then forward the packet to node 3.
• In practice, local port numbers are used instead of remote node numbers.
• With datagram packet switching, no virtual circuit has to be set up, since no
connection exists between a source and a destination.

• For the above network topology, the below would be the routing table. In general,
the destination address may be long (32 bits for IPv4), and thus a hash table or
more sophisticated lookup technique may be employed to yield a match quickly.
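The two kinds of tables described above can be pictured with a small Python sketch.
Node 1's VCI translation is taken from the text; every other entry is a hypothetical
placeholder, since the figure with the actual tables is not reproduced here.

# Sketch of the two table types discussed above. Node 1's VCI translation
# (incoming VCI 1 from host A -> outgoing VCI 2 towards node 3) is taken from
# the text; the remaining entries are hypothetical placeholders.

# Virtual-circuit table at node 1: (incoming link, incoming VCI) -> (outgoing link, outgoing VCI)
vc_table_node1 = {
    ("A", 1): ("3", 2),     # chain of pointers: the VCI is rewritten at every hop
    ("A", 5): ("2", 3),     # hypothetical entry for the circuit towards node D
}

def vc_forward(table, in_link, in_vci):
    out_link, out_vci = table[(in_link, in_vci)]    # one exact-match lookup per packet
    return out_link, out_vci

print(vc_forward(vc_table_node1, "A", 1))           # ('3', 2)

# Datagram table at the same node: full destination address -> next hop.
# (Entries assumed; a real router would use a hash table or longest-prefix match.)
datagram_table_node1 = {"B": "3", "D": "2"}

print(datagram_table_node1["B"])                    # '3'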
Hierarchical Routing
• The size of the routing tables that routers need to keep can be reduced if a
hierarchical approach is used in the assignment of addresses
• In this way routers need to examine only part of the address (i.e., the prefix) in
order to decide how a packet should be routed.

• In part (a) i.e., hierarchical routing, the hosts at each of the four sites have the
same prefix. Thus, the two routers need to only maintain tables with four entries
as shown.
• On the other hand, if the addresses are not hierarchical as in part (b) i.e., flat
routing, then the routers need to maintain 16 entries in their routing tables.
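A minimal sketch of the table-size saving, assuming 4-bit site prefixes and 6-bit host
addresses (the exact address format of the figure is not reproduced here):

# Illustration of the table-size saving from hierarchical addressing.
# The 4-bit site prefixes and 6-bit host addresses below are assumptions.

sites = {"0001": 1, "0010": 2, "0011": 3, "0100": 4}   # prefix -> output port

# Hierarchical table: one entry per site (4 entries).
def lookup_hierarchical(addr):
    return sites[addr[:4]]            # only the prefix is examined

# Flat table: one entry per host (4 sites x 4 hosts = 16 entries).
flat_table = {}
for prefix, port in sites.items():
    for host in ("00", "01", "10", "11"):
        flat_table[prefix + host] = port

print(len(sites), "entries vs", len(flat_table), "entries")   # 4 entries vs 16 entries
print(lookup_hierarchical("001010"), flat_table["001010"])    # both route to port 2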

Specialized Routing
Here we examine two simple approaches to routing, called flooding and deflection
routing, which are used in certain network scenarios.
Flooding
• The principle of flooding calls for a packet switch (node) to forward an incoming
packet to all ports except the one the packet was received from.
• If each packet switch performs this flooding process, the packet will eventually reach
the destination as long as at least one path exists between the source and the
destination.
• Flooding may easily swamp the network as one packet creates multiple packets that
in turn create multiples of multiple packets, generating an exponential growth rate
as illustrated below.

• Clearly, flooding needs to be controlled so that packets are not generated


excessively. To reduce resource consumption in the network, one can implement a
number of mechanisms.
• Mechanisms to reduce resource consumption during flooding
o Time-to-live (TTL) field in each packet:
▪ When the source sends a packet, the TTL is initially set to some number.
Each node decrements the TTL by one before flooding the packet. If the
value reaches zero, the node discards the packet.
▪ To avoid unnecessary waste of bandwidth, the TTL should ideally be set
to the minimum hop number between two furthest nodes (called the
diameter of the network).
▪ In the above figure, the diameter of the network is two. To have a
packet reach any destination, it is sufficient to set the TTL to two.
o In the second method:
▪ Each node adds its identifier to the header of the packet before it
floods the packet. When a node receives a packet that contains the
identifier of the node, it discards the packet since it knows that the
packet already visited the node before.
▪ This method effectively prevents a packet from going around a loop.
o The third method:
▪ is similar to the second method in that they both try to discard old
packets. The only difference lies in the implementation.
▪ Here each packet from a given source is identified with a unique
sequence number. When a node receives a packet, the node records
the source address and the sequence number of the packet.
▪ If the node discovers that the packet has already visited the node,
based on the stored source address and sequence number, it will
discard the packet.
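A minimal sketch of controlled flooding that combines the TTL limit (first method) and
the per-source sequence-number check (third method); the topology, whose diameter is
two, is an assumed example.

# Controlled flooding sketch: TTL + (source, sequence number) duplicate detection.
# The topology below is an assumed example whose diameter is 2.

topology = {               # node -> neighbours
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

seen = {n: set() for n in topology}      # per-node record of (source, seq) already handled

def send(src, seq, ttl):
    seen[src].add((src, seq))            # the source never re-floods its own packet
    for neighbour in topology[src]:
        receive(neighbour, src, seq, ttl, came_from=src)

def receive(node, src, seq, ttl, came_from):
    if (src, seq) in seen[node]:
        return                           # third method: duplicate, discard
    seen[node].add((src, seq))
    print(f"{node} accepted packet ({src},{seq}), arriving TTL={ttl}")
    ttl -= 1                             # first method: decrement before flooding
    if ttl == 0:
        return                           # TTL exhausted: accepted here but not re-flooded
    for neighbour in topology[node]:
        if neighbour != came_from:       # flood on all ports except the arrival port
            receive(neighbour, src, seq, ttl, came_from=node)

send("A", seq=1, ttl=2)                  # TTL set to the diameter of this assumed topology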
Deflection Routing
• This approach requires the network to provide multiple paths for each source-
destination pair. Each node first tries to forward a packet to the preferred port.
• If the preferred port is busy or congested, the packet is deflected to another port.
Deflection routing often works well in a regular topology. Ex of regular topology is
Manhattan Street Network.
• If node (0,2) would like to send a packet to node (1,0), the packet could go two left
and one down. However, if the left port of node (0,1) is busy (see Figure 7.29), the
packet will be deflected to node (3,1). Then it can go through nodes (2,1), (1,1),
(1,2), (1,3) and eventually reach the destination node (1,0).
• One advantage of deflection routing is that the node can be bufferless, since
packets do not have to wait for a specific port to become available. If the preferred
port is unavailable, the packet can be deflected to another port, which will
eventually find its own way to the destination.
• Since packets can take alternative paths, deflection routing cannot guarantee in-
sequence delivery of packets.

Shortest Path Routing

Bellman Ford Algorithm

Apply the Bellman-Ford algorithm to find both the minimum cost from each node to
the destination (node 6) and the next node along the shortest path.
Iteration N1 N2 N3 N4 N5
Initial (-1, ∞) (-1, ∞) (-1, ∞) (-1, ∞) (-1, ∞)
1 “ “ (6, 1) “ (6, 2)
2 (N3, N5 flood) (3, 3) (5, 6) “ (3, 3) (5, 5) “
2 (3, 3) (5, 6) (6, 1) (3, 3) (6, 2)
3 (N1, N2, N4 flood) (2, 9) (4, 8) (1, 6) (4, 4) (1, 5) (2, 9) (1, 8) (2, 7) (2, 10)
(4, 5) (4, 6) (4, 6)
3 (3, 3) (4, 4) (6, 1) (3, 3) (6, 2)
4 (N2 flood) (2, 7) “ “ (2, 5) (2, 8)
4 (3, 3) (4, 4) (6, 1) (3, 3) (6, 2)
At iteration 4, no node entries are updated and hence the algorithm has converged.
Now, draw the shortest path from each node to the destination node. (From last row)

Changes in the routing table trigger a node to broadcast the minimum costs to its
neighbours to speed up convergence.
Upon convergence, each node would know the minimum cost to each destination and
the corresponding next node along the shortest path.
Because only cost vectors (or distance vectors) are exchanged among neighbours, the
protocol implementing the distributed Bellman-Ford algorithm is often referred to as
a distance vector protocol.
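A sketch of the distributed Bellman-Ford (distance-vector) computation: each node
repeatedly recomputes its (next hop, cost) entry from the link cost to each neighbour
plus the cost that neighbour advertised. The link costs below are assumptions, chosen
so that the converged entries match the final row of the table above.

# One synchronous round at a time of the distance-vector (distributed Bellman-Ford) update.
# The link costs below are assumed; destination is node 6.

INF = float("inf")
links = {                       # undirected link costs (assumed)
    (1, 2): 4, (1, 3): 2, (2, 4): 1, (3, 4): 2, (3, 6): 1, (4, 5): 3, (5, 6): 2,
}
def cost(a, b):
    return links.get((a, b), links.get((b, a), INF))

nodes, dest = [1, 2, 3, 4, 5], 6
# table[n] = (next hop, cost to dest); initially (-1, INF) as in the worked example
table = {n: (-1, INF) for n in nodes}

def one_round():
    advertised = {n: table[n][1] for n in nodes}        # costs exchanged with neighbours
    advertised[dest] = 0                                # the destination advertises cost 0
    new = {}
    for n in nodes:
        best = min(((m, cost(n, m) + advertised[m]) for m in nodes + [dest] if m != n),
                   key=lambda t: t[1])
        new[n] = best if best[1] < INF else (-1, INF)
    return new

for i in range(4):              # iterate until the table stops changing
    table = one_round()
    print(i + 1, table)         # converges to {1:(3,3), 2:(4,4), 3:(6,1), 4:(3,3), 5:(6,2)}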
Link breakage in above problem
Suppose that after the distributed algorithm stabilizes for the network shown above,
the link connecting node 3 and node 6 breaks. Compute the minimum cost from each
node to the destination node (node 6), assuming that each node immediately
recomputes its cost after detecting changes and broadcasts its routing updates to its
neighbours. The new network topology is as shown:
Iteration N1 N2 N3 N4 N5
Before Break (3, 3) (4, 4) (6, 1) ✖ (3, 3) (6, 2)
1 (N3 recomputes) “ “ (1, 5) (4, 5) “ “
1 (3, 3) (4, 4) (4, 5) (3, 3) (6, 2)
2 (When N3 updates, (2, 7) (4, 8) “ “ (1, 8) (2, 5) “
N1 & N4 recompute) (3, 7) (5, 5)
2 (3, 7) (4, 4) (4, 5) (2, 5) (6, 2)
3 (N3, N4, N2, N1, (3, 7) (2, 7) (4, 6) (5, 6) (1, 9) (4, 7) (1, 12) (3, 7) (6, 2) (4, 8)
N5 recomputes) (4, 10) (1, 10) (5, 5) (2, 5) (2, 8)
3 (3, 7) (4, 6) (4, 7) (2, 5) (6, 2)
4 (N1, N4, N5 (3, 9) (2, 9) “ “ (1, 12) (3, 9) (6, 2) (4, 8)
recomputes) (4, 10) (5, 5) (2, 7) (2, 10)
4 (2, 9) (4, 6) (4, 7) (5, 5) (6, 2)
5 (N3, N4, N2, N1, (3, 9) (2, 9) (4, 6) (5, 6) (1, 11) (1, 14) (3, 9) (6, 2) (4, 8)
N5 recomputes) (4, 10) (1, 12) (4, 7) (2, 7) (5, 5) (2, 10)
5 (no change) (2, 9) (4, 6) (4, 7) (5, 5) (6, 2)
Counting to Infinity Problem
This example shows that the distributed Bellman-Ford algorithm may react very
slowly to a link failure. To see this, consider the topology shown in Figure (a) with
node 4 as the destination. Suppose that after the algorithm stabilizes, link (3,4)
breaks, as shown in Figure (b). Recompute the minimum cost from each node to the
destination node (node 4).
Iteration N1 N2 N3
Before Break (2, 3) (3, 2) (4, 1) ✖
1 (N3 recomputes) (2, 3) (3, 2) (2, 3)
2 (N3 floods, hence N2 recomputes) (2, 3) (3, 4) (1, 4) (2, 3)
2 (2, 3) (3, 4) (2, 3)
3 (N2 floods→N1 & N3 recomputes) (2, 5) (3, 4) (2, 5)
4 (N1, N3 floods→N2 recomputes) (2, 5) (1, 6) (3, 6) (2, 5)
4 (2, 5) (3, 6) (2, 5)
. . . .
. . . .
As the table shows, each node keeps updating its cost (in increments of 2 units). At
each update, node 2 thinks that the shortest path to the destination is through node
3. Likewise, node 3 thinks the best path is through node 2. As a result, a packet in
either of these two nodes bounces back and forth until the algorithm stops updating.
Unfortunately, in this case the algorithm keeps iterating until the minimum cost is
infinite/very large, at which point, the algorithm realizes that the destination node is
unreachable. This problem is often called counting to infinity. It is easy to see that if
link (3,4) is restored, the algorithm will converge very quickly. Therefore: Good news
travels quickly, bad news travels slowly.
Methods to solve Counting to Infinity Problem
i) Split Horizon: Here, the minimum cost to a given destination is not sent to a
neighbour if the neighbour is the next node along the shortest path. For example, if
node X thinks that the best route to node Y is via node Z, then node X should not send
the corresponding minimum cost to node Z.
ii) Split Horizon with poisoned reverse: It allows a node to send the minimum costs
to all its neighbours; however, the minimum cost to a given destination is set to
infinity if the neighbour is the next node along the shortest path. Example, if node X
thinks that the best route to node Y is via node Z, then node X should set the
corresponding minimum cost to infinity before sending it to node Z.
Using Split Horizon with poisoned reverse on previous example: After the link
breaks, node 3 sets the cost to the destination equal to infinity, since the minimum
cost node 3 has received from node 2 is also infinity. When node 2 receives the
update message, it also sets the cost to infinity. Next node 1 also learns that the
destination is unreachable. Thus, split horizon with poisoned reverse speeds up
convergence in this case.
Iteration N1 N2 N3
Before Break (2, 3) (3, 2) (4, 1) ✖
1 (N3 recomputes) (2, 3) (3, 2) (-1, ∞)
2 (N2 recomputes) (2, 3) (-1, ∞) (-1, ∞)
3 (N1 recomputes) (-1, ∞) (-1, ∞) (-1, ∞)
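A minimal sketch of how a node builds its advertisement under split horizon with
poisoned reverse; the routing-table contents are illustrative assumptions.

# Split horizon with poisoned reverse: when advertising to a neighbour, poison
# (set to infinity) every route whose next hop is that neighbour.
# The table contents at node X are assumed for illustration.

INF = float("inf")

# routing_table[destination] = (next_hop, cost) at node X
routing_table = {"Y": ("Z", 3), "W": ("V", 2)}

def advertisement_for(neighbour, table):
    adv = {}
    for dest, (next_hop, cost) in table.items():
        if next_hop == neighbour:
            adv[dest] = INF          # poisoned reverse: "I cannot reach dest via you"
        else:
            adv[dest] = cost         # normal distance-vector advertisement
    return adv

print(advertisement_for("Z", routing_table))   # {'Y': inf, 'W': 2}
print(advertisement_for("V", routing_table))   # {'Y': 3, 'W': inf}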
Dijkstra’s Algorithm
Dijkstra’s algorithm is an alternative algorithm for finding the shortest paths from a
source node to all other nodes in a network. It is generally more efficient than the
Bellman-Ford algorithm but requires each link cost to be positive, which is
fortunately the case in communication networks. Example: Apply Dijkstra’s algorithm
to find the shortest paths from the source node (assumed to be node 1) to all the
other nodes.

Underlined - Selected nodes; Highlighted - Relaxed nodes


Selected N2 N3 N4 N5 N6
N3 3 2 5 ∞ ∞
N2 3 2 4 ∞ 3
N6 3 2 4 7 3
N4 3 2 4 5 3
N5 3 2 4 5 3
If we also keep track of the predecessor node of the next-closest node at each
iteration, we can obtain a shortest-path tree rooted at node 1, as shown.
For a datagram network, the routing table at node 1 looks like:
Destination Next Node Cost
2 2 3
3 3 2
4 3 4
5 3 5
6 3 3
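A compact Dijkstra sketch using Python's heapq. The edge list is an assumption chosen
so that the distances from node 1 reproduce the routing table above; it is not
necessarily the exact graph of the figure.

import heapq

# Dijkstra's algorithm sketch. The undirected edge costs below are assumed,
# chosen so that the distances from node 1 match the routing table above.
edges = {(1, 2): 3, (1, 3): 2, (1, 4): 5, (3, 4): 2, (3, 6): 1, (4, 5): 1, (5, 6): 4}

graph = {}
for (a, b), w in edges.items():
    graph.setdefault(a, []).append((b, w))
    graph.setdefault(b, []).append((a, w))

def dijkstra(source):
    dist, prev = {source: 0}, {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                         # stale heap entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u  # relax edge (u, v)
                heapq.heappush(heap, (d + w, v))
    return dist, prev

def first_hop(prev, source, dest):
    while prev.get(dest) not in (None, source):
        dest = prev[dest]                    # walk back along the shortest-path tree
    return dest

dist, prev = dijkstra(1)
for d in (2, 3, 4, 5, 6):
    print(d, first_hop(prev, 1, d), dist[d]) # matches the routing table at node 1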
Bellman Ford vs Dijkstra’s

Bellman Ford's Algorithm | Dijkstra's Algorithm
Works when there is a negative weight edge; it also detects negative weight cycles. | Does not work when there is a negative weight edge.
It can easily be implemented in a distributed way. | It cannot be implemented easily in a distributed way.
It is relatively less time consuming. | It is more time consuming than Bellman Ford's algorithm.
A Dynamic Programming approach is taken to implement the algorithm. | A Greedy approach is taken to implement the algorithm.
It takes more time to converge. | It takes less time to converge.

Source Routing versus Hop-by-Hop Routing


In the datagram network, typically each node is responsible for determining the next
hop along the shortest path. If each node along the path performs the same process,
a packet traveling from the source is said to follow hop-by-hop routing to the
destination.
Source routing is another routing approach whereby the path to the destination is
determined by the source.
Source routing works in either datagram or virtual-circuit packet switching. Before
the source can send a packet, the source has to know the path to the destination in
order to include the path information in the packet header.
The path information contains the sequence of nodes to traverse and should give the
intermediate node sufficient information to forward the packet to the next node until
the packet reaches the destination. The below figure shows how source routing
works in a datagram network.

Each node examines the header, strips off the address identifying the node, and
forwards the packet to the next node. The source (host A) initially includes the entire
path (1, 3, 6, B) in the packet to be destined to host B. Node 1 strips off its address
and forwards the packet to the next node, which is node 3. The path specified in the
header now contains 3, 6, B. Nodes 3 and 6 perform the same function until the
packet reaches host B, which finally verifies that it is the intended destination.
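A small sketch of the header-stripping behaviour described above, using the path
(1, 3, 6, B) from the example:

# Source-routing sketch: the source puts the full path in the header and each
# node strips its own address before forwarding (path taken from the example).

def forward(node, header, payload):
    assert header[0] == node, "packet arrived at the wrong node"
    remaining = header[1:]                 # strip off the address identifying this node
    if not remaining:
        print(f"{node}: I am the destination, delivering {payload!r}")
        return
    print(f"{node}: forwarding to {remaining[0]}, header now {remaining}")
    forward(remaining[0], remaining, payload)

# Host A includes the entire path (1, 3, 6, B) in the packet destined to host B.
forward("1", ["1", "3", "6", "B"], payload="hello")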

Link-State Routing versus Distance-Vector Routing


In the distance-vector routing approach, neighboring routers exchange routing
tables to other destinations. After neighboring routers exchange this information,
they process it using a Bellman-Ford algorithm to see whether they can find new
better paths through the neighbour that provided the information. If a new better
path is found, the router will send the new vector to its neighbours. Distance-vector
routing adapts to changes in network topology gradually as the information on the
changes percolates through the network.
In the link-state routing approach each router floods information about the state of
the links that connect it to its neighbours. This process allows each router to
construct a map of the entire network and from this map to derive the routing table
using the Dijkstra algorithm. If the state of the link changes, the router detecting the
change will flood the new information throughout the network. Thus link-state
routing typically converges faster than distance-vector routing.
Distance Vector Protocols
• Neighbours exchange list of distances to destinations
• Best next-hop determined for each destination
• Bellman-Ford (distributed) shortest path algorithm
Link State Protocols
• Link state information flooded to all routers
• Routers have complete topology information
• Shortest path (& hence next hop) calculated
• Dijkstra (logically centralized) shortest path algorithm

THE TCP/IP ARCHITECTURE

The TCP/IP protocol suite usually refers not only to the two protocols called the
Transmission Control Protocol (TCP) and the Internet Protocol (IP) but also to other
related protocols such as:

• Internet Control Message Protocol (ICMP)
• User Datagram Protocol (UDP)
• Reverse Address Resolution Protocol (RARP)
• Address Resolution Protocol (ARP)

The basic structure of the TCP/IP protocol suite is shown


Below figure shows the encapsulation of PDUs (Protocol Data Units) in TCP/IP and
addressing information in the headers (with HTTP as the application layer example).

• PDUs exchanged by peer TCP protocols are called segments.


• PDUs exchanged by UDP protocols are called datagrams.
• PDUs exchanged by IP protocols are called packets.
• IP multiplexes segments & datagrams and performs fragmentation if necessary
• Packets are sent to the network-interface for delivery across the physical
network. At the destination, packets are demultiplexed to the appropriate
protocol (IP, ARP or RARP).
• The receiving IP entity determines whether a packet should be sent to TCP or
UDP. Finally, TCP(or UDP) sends each segment(datagram) to the appropriate
application based on the port number.

Each host in the Internet is identified by a globally unique IP address.


An IP address is divided into two parts:
• network ID
• host ID.
The network ID must be obtained from an organization authorized to issue IP
addresses. The Internet layer provides for the transfer of information across multiple
networks through the use of routers, as shown below.
• To enhance the scalability of the routing algorithms and to control the size of the
routing tables, additional levels of hierarchy are introduced in the IP addresses.
• Within a domain the host address is further subdivided into a subnetwork part and
an associated host part.

The Internet Protocol


• IP corresponds to the network layer in the OSI reference model and provides a
connectionless best-effort delivery service to the transport layer.
• Recall that a connectionless service does not require a virtual circuit to be
established before data transfer can begin.
• The term best-effort indicates that IP will try its best to forward packets to the
destination, but does not guarantee that a packet will be delivered to the
destination.
• The term is also used to indicate that IP does not make any guarantee on the QoS

IP Packet
• The Internet Protocol, being a layer-3 (network layer) protocol in the OSI model,
takes data segments from layer 4 (transport) and divides them into packets.
• An IP packet encapsulates the data unit received from the layer above and adds
its own header information.
(check previous-to-previous diagram)
• The header has a fixed-length component of 20 bytes plus a variable-length
component consisting of options that can be up to 40 bytes.
• Below shows the IP header of an IPv4 packet.

• Version: Version no. of Internet Protocol used (e.g., IPv4).


• IHL: Internet Header Length; Length of entire IP header.
• TOS: Type of Service; specifies the priority of the packet based on delay,
throughput, reliability, and cost requirements.
• Total Length: Length of entire IP Packet (including IP header & IP Payload/Data).
• Identification, flags, and fragment offset: These fields are used for fragmentation
and reassembly
• TTL: Time to live; to avoid looping in the network, every packet is sent with some
TTL value set, which tells the network how many routers (hops) this packet can
cross. At each hop, its value is decremented by one and when the value reaches
zero, the packet is discarded.
• Protocol: Tells the network layer at the destination host which protocol this
packet belongs to, i.e., the next-level protocol. For example, the protocol number
of ICMP is 1, TCP is 6, and UDP is 17.
• Header Checksum: This field is used to keep checksum value of entire header
which is then used to check if the packet is received error-free.
• Source Address: 32-bit address of the Sender (or source) of the packet.
• Destination Address: 32-bit address of the Receiver (or destination) of the packet.
• Options: The options field, which is of variable length, allows the packet to request
special features such as security level, route to be taken by the packet, and
timestamp at each router.
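The fixed 20-byte part of the header can be unpacked with Python's struct module. A
minimal sketch follows; the field values packed here are arbitrary illustrative
choices, used only to exercise the layout.

import socket
import struct

# Build a synthetic 20-byte IPv4 header and parse it back, field by field.
version_ihl = (4 << 4) | 5                    # version 4, IHL = 5 words (20 bytes)
header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl,                              # version + IHL
    0,                                        # TOS
    20,                                       # total length (header only, no payload here)
    0x1234,                                   # identification
    0,                                        # flags + fragment offset
    64,                                       # TTL
    6,                                        # protocol (6 = TCP)
    0,                                        # header checksum (left as 0 in this sketch)
    socket.inet_aton("150.100.12.176"),       # source address
    socket.inet_aton("150.32.64.34"),         # destination address
)

(v_ihl, tos, total_len, ident, flags_frag, ttl, proto, csum,
 src, dst) = struct.unpack("!BBHHHBBH4s4s", header)
print("version:", v_ihl >> 4, "IHL:", (v_ihl & 0x0F) * 4, "bytes")
print("TTL:", ttl, "protocol:", proto, "total length:", total_len)
print("src:", socket.inet_ntoa(src), "dst:", socket.inet_ntoa(dst))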
IP Addressing
• An IP address has a fixed length of 32 bits. The address structure was originally
defined to have a two-level hierarchy: network ID and host ID.
• The network ID identifies the network the host is connected to. Consequently, all
hosts connected to the same network have the same network ID. The network ID
for an organization may be assigned by the ISP.
• The host ID identifies the network connection to the host rather than the actual
host. The host ID is assigned by the network administrator at the local site.
• The IP address structure is divided into five address classes: Class A, Class B, Class
C, Class D, and Class E, identified by the most significant bits of the address as
shown in the figure

• Class A addresses have 7 bits for network IDs and 24 bits for host IDs, allowing up
to 126 (2^7 − 2) networks and about 16 million (2^24) hosts per network.
• Class B addresses have 14 bits for network IDs and 16 bits for host IDs, allowing
about 16,000 (2^14) networks and about 64,000 (2^16) hosts per network.
• Class C addresses have 21 bits for network IDs and 8 bits for host IDs, allowing
about 2 million (2^21) networks and 254 (2^8 − 2) hosts per network.
• Class D addresses are used for multicast services that allow a host to send
information to a group of hosts simultaneously. Class E addresses are reserved for
experiments.
• IP addresses are usually written in dotted-decimal notation so that they can be
communicated conveniently by people.
Class Begin Address End Address
A 1.0.0.0 127.255.255.255
B 128.0.0.0 191.255.255.255
C 192.0.0.0 223.255.255.255
D 224.0.0.0 239.255.255.255
E 240.0.0.0 255.255.255.255
• A set of specific ranges of IP addresses have been set aside for use in private
networks. These addresses are considered unregistered and routers in the
Internet must discard packets with these addresses.
o Range 1: 10.0.0.0 to 10.255.255.255
o Range 2: 172.16.0.0 to 172.31.255.255
o Range 3: 192.168.0.0 to 192.168.255.255

Subnet Addressing
• The basic idea of subnetting is to add another hierarchical level called the “subnet”
as shown:

• For the subnet address scheme to work, every machine on the network must know
which part of the host address will be used as the subnet address. This is
accomplished by assigning each machine a subnet mask.
• In a subnet mask, the 1's in the subnet mask represent the positions that refer to
the Network or Subnet IDs and the 0's represent the positions that refer to the
Host ID part of the IP address.
• Note: to find the subnet address, perform an AND operation between the given IP
address and the subnet mask.
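The AND rule can be checked quickly with Python's ipaddress module; the sketch below
uses the values that appear in Q2 of the numericals at the end of this module.

import ipaddress

# The AND rule: subnet address = IP address AND subnet mask.
ip   = int(ipaddress.IPv4Address("150.100.12.176"))
mask = int(ipaddress.IPv4Address("255.255.255.128"))
print(ipaddress.IPv4Address(ip & mask))        # 150.100.12.128

# The ipaddress module can also do this directly:
net = ipaddress.IPv4Interface("150.100.12.176/255.255.255.128").network
print(net)                                     # 150.100.12.128/25
print(net.broadcast_address)                   # 150.100.12.255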

Classless Interdomain Routing (CIDR)


Using a CIDR notation, a prefix 205.100.0.0 of length 22 is written as 205.100.0.0/22.
The /22 notation indicates that the network mask is 22 bits, or 255.255.252.0.
CIDR enables a technique called supernetting to allow a single routing entry to cover a
block of classful addresses.
For example, instead of having four entries for a contiguous set of Class C addresses
(e.g., 205.100.0.0, 205.100.1.0, 205.100.2.0, and 205.100.3.0), CIDR allows a single
routing entry 205.100.0.0/22, which includes all IP addresses from 205.100.0.0 to
205.100.3.255. The original four Class C entries:
Class C address 205.100.0.0 = 11001101 01100100 00000000 00000000
Class C address 205.100.1.0 = 11001101 01100100 00000001 00000000
Class C address 205.100.2.0 = 11001101 01100100 00000010 00000000
Class C address 205.100.3.0 = 11001101 01100100 00000011 00000000
become,
Mask (/22) 255.255.252.0 = 11111111 11111111 11111100 00000000
Supernet address 205.100.0.0 = 11001101 01100100 00000000 00000000
(Supernet is obtained by performing AND between Mask and the Addresses)
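The same aggregation can be reproduced with Python's ipaddress module; a minimal
sketch using the four Class C blocks from the example above.

import ipaddress

# CIDR supernetting sketch with the four contiguous Class C blocks above.
blocks = [ipaddress.IPv4Network(f"205.100.{i}.0/24") for i in range(4)]

# collapse_addresses merges the contiguous /24s into the single /22 routing entry.
print(list(ipaddress.collapse_addresses(blocks)))   # [IPv4Network('205.100.0.0/22')]

# Equivalently, AND each network address with the /22 mask 255.255.252.0:
mask = int(ipaddress.IPv4Address("255.255.252.0"))
for net in blocks:
    print(ipaddress.IPv4Address(int(net.network_address) & mask))  # 205.100.0.0 each time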

Address Resolution
• The source host must know the destination MAC address if the packet is to be
delivered to the destination host successfully.
• How does the host map the IP address to the MAC address? An elegant solution to
find the destination MAC address is to use the Address Resolution Protocol (ARP).

• Suppose H1 wants to send an IP packet to H3 but does not know the MAC address
of H3. H1 first broadcasts an ARP request packet asking the destination host, which
is identified by H3’s IP address, to reply.
• All hosts in the network receive the packet, but only the intended host, which is
H3, responds to H1.
• The ARP response packet contains H3’s MAC and IP addresses. From now on H1
knows how to send packets to H3.
• To avoid having to send an ARP request packet each time H1 wants to send a
packet to H3, H1 caches H3’s IP and MAC addresses in its ARP table.

Reverse Address Resolution


• In some situations, a host may know its MAC address but not its IP address
• The problem of getting an IP address from a MAC address can be handled by the
Reverse ARP (RARP), which works in a fashion similar to ARP.
• To obtain its IP address, the host first broadcasts an RARP request packet
containing its MAC address on the network.
• All hosts on the network receive the packet, but only the server replies to the host
by sending an RARP response packet containing the host’s MAC and IP addresses.
• One limitation with RARP is that the server must be located on the same physical
network as the host.

Fragmentation and Reassembly


• Each physical network usually imposes a certain packet-size limitation on the
packets that can be carried, called the maximum transmission unit (MTU).
• For example, Ethernet specifies an MTU of 1500 bytes, and FDDI specifies an MTU
of 4464 bytes.
• When IP has to send a packet that is larger than the MTU of the physical network,
IP must break the packet into smaller fragments whose size can be no larger than
the MTU.
• Each fragment is sent independently to the destination as though it were an IP
packet.
• If the MTU of some other network downstream is found to be smaller than the
fragment size, the fragment will be broken again into smaller fragments, as shown
(PTO).
• The destination IP is the only entity that is responsible for reassembling the
fragments into the original packet.
• To reassemble the fragments, the destination waits until it has received all the
fragments belonging to the same packet. If one or more fragments are lost in the
network, the destination abandons the reassembly process and discards the rest
of the fragments.
• To detect lost fragments, the destination host sets a timer once the first fragment
of a packet arrives. If the timer expires before all fragments have been received,
the host assumes the missing fragments were lost in the network and discards the
other fragments
• Three fields in the IP header (identification, flags, and fragment offset) have been
assigned to manage fragmentation and reassembly.
• The identification field is used to identify which packet a particular fragment
belongs to so that fragments for different packets do not get mixed up
• The flags field has three bits: one unused bit, one “Don’t Fragment” (DF) bit, and
one “More Fragment” (MF) bit.
o If the DF bit is set to 1, it forces the destination router not to fragment the
packet. If the packet length is greater than the destination MTU, then the
router will have to discard the packet and send an error message to the
source host.
o The MF bit tells the destination host whether or not more fragments follow.
If there are more, the MF bit is set to 1; otherwise, it is set to 0.
• The fragment offset field identifies the location of a fragment in a packet. The
value measures the offset, in units of eight bytes, between the beginning of the
packet to be fragmented and the beginning of the fragment, considering the data
part only.

Numerical on Fragmentation
Suppose a packet arrives at a router and is to be forwarded to an X.25 network having
an MTU of 576 bytes. The packet has an IP header of 20 bytes and a data part of
1484 bytes. Perform fragmentation and include the pertinent values of the IP header
of the original packet and of each fragment.
Ans) The maximum possible data length per fragment = 576 − 20 = 556 bytes
However, 556 is not a multiple of 8. Thus, we need to set the maximum data length to
552 bytes. We can break 1484 into 552 + 552 + 380

Packet | Total Length | ID | MF | Fragment Offset
Original Packet | 1504 (1484 + 20) | X | 0 | 0
Fragment 1 | 572 (552 + 20) | X | 1 | 0
Fragment 2 | 572 (552 + 20) | X | 1 | 69 (552/8)
Fragment 3 | 400 (380 + 20) | X | 0 | 138 ((552 + 552)/8)

Note: Remember that the fragment offset value pertains to the data part only and does
not include the header. That is why we consider (552/8) and NOT (572/8). X denotes a
unique identification value.
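The fragment sizes and offsets in the table above can be computed mechanically. A
minimal Python sketch follows (it ignores the DF bit and IP options):

# Sketch of the fragmentation calculation from the numerical above
# (MTU 576 bytes, 20-byte IP header, 1484 bytes of data).

def fragment(data_len, mtu, header_len=20):
    max_data = (mtu - header_len) // 8 * 8    # data per fragment, rounded down to a multiple of 8
    fragments, offset = [], 0
    while data_len > 0:
        size = min(max_data, data_len)
        more = data_len > size                # MF = 1 unless this is the last fragment
        fragments.append({"total_length": size + header_len,
                          "MF": int(more),
                          "offset": offset // 8})
        offset += size
        data_len -= size
    return fragments

for f in fragment(1484, 576):
    print(f)
# {'total_length': 572, 'MF': 1, 'offset': 0}
# {'total_length': 572, 'MF': 1, 'offset': 69}
# {'total_length': 400, 'MF': 0, 'offset': 138}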

IPv6
• In the early 1990s the Internet Engineering Task Force (IETF) began to work on the
successor of IP version 4 that would solve the address exhaustion problem and
other scalability problems.
• IPv6 was designed to interoperate with IPv4 since it would likely take many years
to complete the transition from version 4 to version 6.
• Thus, IPv6 should retain the most basic service provided by IPv4—a connectionless
delivery service. On the other hand, IPv6 should also change the IPv4 functions
that do not work well and support new emerging applications such as real-time
video conferencing, etc. Some of the changes from IPv4 to IPv6 include:
• Longer address fields: The length of the address field is extended from 32 bits to
128 bits. The address structure also provides more levels of hierarchy.
• Simplified header format: The header format of IPv6 is simpler than that of IPv4.
Some of the header fields in IPv4 such as checksum, IHL, identification, flags, and
fragment offset do not appear in the IPv6 header
• Flow label capability: IPv6 adds a “flow label” to identify a certain packet “flow”
that requires a certain QoS.
• Security: IPv6 supports built-in authentication and confidentiality
• Large packets: IPv6 supports payloads that are longer than 64 K bytes, called
jumbo payloads.
• Fragmentation at source only: Routers do not perform packet fragmentation. If a
packet needs to be fragmented, the source should check the minimum MTU along
the path and perform the necessary fragmentation.
• No checksum field: The checksum field has been removed to reduce packet
processing time in a router.

IPv6 Header

• Version: The version field specifies the version number of the protocol and should
be set to 6 for IPv6
• Traffic class: The traffic class field specifies the traffic class / priority of the packet.
• Flow label: The flow label field can be used to identify the QoS requested by the
packet. In the IPv6 standard, a flow is defined as “a sequence of packets sent from
a particular source to a particular (unicast or multicast) destination for which the
source desires special handling by the intervening routers.”
• Payload length: The payload length indicates the length of the data (excluding
header).
• Next header: The next header field identifies the type of the extension header that
follows the basic header. The extension header is similar to the options field in IPv4
but is more flexible and efficient.
• Hop limit: The hop limit field replaces the TTL field in IPv4. The value specifies the
number of hops the packet can travel before being dropped by a router.
• Source address and destination address: The source address and the destination
address identify the source host and the destination host, respectively.

Network Addressing
• The dotted-decimal notation used for IPv4 would be rather long when applied to the
128-bit IPv6 addresses. A more compact notation that is specified in the standard is
to use a hexadecimal digit for every 4 bits and to separate every 16 bits with a
colon. An example of an IPv6 address is: 4BF5:AA12:0216:FEBC:BA5F:039A:BE9A:2176
• The first shorthand notation can be exploited when the 16-bit field has some
leading zeros. Ex: 4BF5:0000:0000:0000:BA5F:039A:000A:2176 can be shortened
to 4BF5:0:0:0:BA5F:39A:A:2176
• Further shortening is possible where consecutive zero-valued fields appear. These
fields can be shortened with the double-colon notation (::). Continuing with the
preceding example, the address can be written even more compactly as:
4BF5::BA5F:39A:A:2176
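Python's ipaddress module applies exactly these shorthand rules, so the example above
can be checked quickly (note that the module prints in lowercase):

import ipaddress

# IPv6 address shortening, using the example address from the text.
addr = ipaddress.IPv6Address("4BF5:0000:0000:0000:BA5F:039A:000A:2176")

print(addr.exploded)     # 4bf5:0000:0000:0000:ba5f:039a:000a:2176
print(addr.compressed)   # 4bf5::ba5f:39a:a:2176  (leading zeros and the zero run removed)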

Migration Issues from IPv4 to IPv6


• IPv6 was developed to improve the addressing capabilities so that new devices
requiring global addresses can be supported in the future. However, because IPv4
networks and hosts are widely deployed, migration issues need to be solved to
ensure that the transition from IPv4 to IPv6 is as smooth as possible.
• Current solutions are mainly based on the dual-IP-layer (or dual stack) approach
whereby both IPv4 and IPv6 functions are present.
• For example, routers independently run both IPv4 and IPv6 routing protocols and
can forward both types of packets. Recall that the type of a packet can be
identified from the version field.
• When islands of IPv6 networks are separated by IPv4 networks, one approach is
to build a tunnel across an IPv4 network connecting two IPv6 networks, as shown
in Figure (a):

• A tunnel is a path created between two nodes so that the tunnel appears as a
single link to the user, as shown in Figure (b).
• The tunnelling approach essentially hides the route taken by the tunnel from the
user. In our particular example, an IPv4 tunnel allows IPv6 packets to be forwarded
across an IPv4 network without the IPv6 user having to worry about how packets
are actually forwarded in the IPv4 network.
• A tunnel is typically realized by encapsulating each user packet in another packet
that can be forwarded along the tunnel
• In our example, IPv6 packets are first forwarded from the source to the tunnel
head-end in the IPv6 network. At the tunnel head-end packets are encapsulated
into IPv4 packets
• Then IPv4 packets are forwarded in the IPv4 network to the tunnel tail-end where
the reverse process (i.e., decapsulation) is performed. Finally, IPv6 packets are
forwarded from the tunnel tail-end to the destination.
User Datagram Protocol
• The User Datagram Protocol (UDP) is an unreliable, connectionless transport layer
protocol. It is a very simple protocol that provides only two additional services
beyond IP: demultiplexing and error checking on data.
• Applications that do not require zero packet loss such as in packet voice systems
are well suited to UDP
• The format of the UDP datagram is as shown below:

• The destination port allows the UDP module to demultiplex datagrams to the
correct application in a given host.
• The source port identifies the particular application in the source host to receive
replies.
• The UDP length field indicates the number of bytes in the UDP datagram (including
header and data).
• The UDP checksum field detects errors in the datagram, and its use is optional. If a
source host does not want to compute the checksum, the checksum field should
contain all 0s
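A tiny sketch of the 8-byte UDP header layout using Python's struct module; the port
numbers and payload are arbitrary illustrative values.

import struct

# The 8-byte UDP header: source port, destination port, length, checksum.
# A checksum of 0 means "not computed", as noted above.
payload = b"hello"
header = struct.pack("!HHHH", 5000, 53, 8 + len(payload), 0)

src_port, dst_port, length, checksum = struct.unpack("!HHHH", header)
print(src_port, dst_port, length, checksum)   # 5000 53 13 0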

Transmission Control Protocol


• The Transmission Control Protocol (TCP) provides a logical full-duplex (two-way)
connection between two application layer processes across the Internet.
• TCP provides these application processes with a connection-oriented, reliable, in-
sequence, byte-stream service.
• TCP also provides flow control that allows a TCP receiver to control the rate at
which the sender transmits information so that the receiver buffers do not
overflow
• TCP also provides congestion control that induces senders to reduce the rate at
which they send packets when there is congestion in the routers.
TCP Operation and Reliable Stream Service
A TCP connection goes through the three phases of a connection-oriented service.
• The TCP connection establishment phase sets up a connection between the two
application processes by creating and initializing variables that are used in the
protocol. These variables are stored in a connection record that is called the
transmission control block (TCB).
• Once the connection is established, TCP enters the data transfer phase where it
delivers data over each direction in the connection correctly and in sequence.
• When the applications are done exchanging data, TCP enters the connection
termination phase where each direction of the connection is terminated
independently, allowing data to continue flowing in one direction after the other
direction has been closed.
• As shown in the below figure, the application layer writes the data it needs to
transmit into a buffer

• TCP treats the data it gets from the application layer as a byte stream. Thus, when
a source writes a 1000-byte message in a single chunk (one write), the destination
may receive the message in two chunks of 500 bytes each (two reads), in three
chunks of 400, 300, and 300 bytes, or in any other combination.
• The TCP transmitter arranges a consecutive string of bytes into a segment. The
segment contains a header with address information that enables the network to
direct the segment to its destination application process
• The segment contains a sequence number that corresponds to the number of the
first byte in the string that is being transmitted.
• The TCP receiver performs an error check on each segment it receives. If the
segment is error-free and is not a duplicate segment, the receiver inserts the bytes
into the appropriate locations in the receive buffer
TCP Segment

• Source port and destination port: The source and destination ports identify the
sending and receiving applications, respectively
• Sequence number: The 32-bit sequence number field identifies the position of the
first data byte of this segment in the sender’s byte stream during data transfer
• Acknowledgment number: This field identifies the sequence number of the next
data byte that the sender expects to receive if the ACK bit is set.
• Header length: This field specifies the length of the TCP header in 32-bit words
• Reserved: This field is reserved for future use and must be set to 0.
• URG: If this bit is set, the urgent pointer is valid
• ACK: If this bit is set, the acknowledgment number is valid
• PSH: When this bit is set, it tells the receiving TCP module to pass the data to the
application immediately
• RST: When this bit is set, it tells the receiving TCP module to abort the connection
because of some abnormal condition.
• SYN: This bit requests a connection
• FIN: When this bit is set, it tells the receiver that the sender does not have any
more data to send.
• Window size: The window size field specifies the number of bytes the sender is
willing to accept.
• Checksum: This field detects errors on the TCP segment
• Urgent pointer: When the URG bit is set, the value in the urgent pointer field
added to that in the sequence number field points to the last byte of the “urgent
data” (data that needs immediate delivery i.e., without buffering)
• Options: The options field may be used to provide other functions that are not
covered by the header.
TCP CONNECTION ESTABLISHMENT
• Before any host can send data, a connection must be established. TCP establishes
the connection using a three-way handshake procedure as shown:

The handshakes are described in the following steps:


• Host A sends a connection request to host B by setting the SYN bit. Host A also
registers its initial sequence number to use (Seq no = x)
• Host B acknowledges the request by setting the ACK bit and indicating the next
data byte to receive (Ack no = x + 1). The “plus one” is needed because the SYN bit
consumes one sequence number. At the same time, host B also sends a request by
setting the SYN bit and registering its initial sequence number to use (Seq no = y)
• Host A acknowledges the request from B by setting the ACK bit and confirming the
next data byte to receive (Ack no = y + 1). Note that the sequence number is set to
x + 1. On receipt at B the connection is established.
TCP CONNECTION TERMINATION
• TCP provides for a graceful close that involves the independent termination of
each direction of the connection. A termination is initiated when an application
tells TCP that it has no more data to send.
• The TCP entity completes transmission of its data and, upon receiving
acknowledgment from the receiver, issues a segment with the FIN bit set.
• Upon receiving a FIN segment, a TCP entity informs its application that the other
entity has terminated its transmission of data.

Numericals
Q1) A host in an organization has an IP Address 150.32.64.34 and subnet mask
255.255.254.0. What is the address of this network? What is the range of IP
addresses that a host can have on this subnet?
A) The given address is a class B address (128.x.x.x to 191.x.x.x)
IP address = 10010110 00100000 01000000 00100010 (150.32.64.34)
Mask = 11111111 11111111 11111110 00000000 (255.255.254.0)
Subnet address → 10010110 00100000 01000000 00000000 (150.32.64.0)
Hence, the subnet address is 150.32.64.0,
the host range is from 150.32.64.1 to 150.32.65.254, and the
broadcast address is 150.32.65.255
(the mask 255.255.254.0 leaves 9 host bits, so the block spans 150.32.64.0 to 150.32.65.255)

Q2) A host has an IP address of 150.100.12.176 and a subnet mask of 255.255.255.128.
Find i) the address of the subnet ii) the range of addresses that a host can have on
this subnet
A) The given address is a class B address (128.x.x.x to 191.x.x.x)
IP address = 10010110 01100100 00001100 10110000 (150.100.12.176)
Mask = 11111111 11111111 11111111 10000000 (255.255.255.128)
Subnet address → 10010110 01100100 00001100 10000000 (150.100.12.128)
Hence, the subnet address is 150.100.12.128,
host range is from 150.100.12.129 to 150.100.12.254 and the
broadcast address is 150.100.12.255

Q3) What are the possible subnet masks for class C address space?
A) In class C, the first 3 octets contribute to the Net ID and the last octet is used for
the (Subnet ID + Host ID). Hence the default mask is 255.255.255.0 for class C
Hence, possible subnets masks for class C are:
255.255.255.00000000 → 255.255.255.0 (default)
255.255.255.10000000 → 255.255.255.128
255.255.255.11000000 → 255.255.255.192
255.255.255.11100000 → 255.255.255.224
255.255.255.11110000 → 255.255.255.240
255.255.255.11111000 → 255.255.255.248
255.255.255.11111100 → 255.255.255.252
255.255.255.11111110 → 255.255.255.254 (not usable)
255.255.255.11111111 → 255.255.255.255 (not usable)

Q4) A college requires 150 LANs and there should be 100 hosts in each LAN. It has a
single class B address. Design an appropriate subnetting scheme.
A) Host requirement is 100 → we need a minimum of 7 host bits (2^7 = 128, whereas 2^6 = 64 is not enough)
Given a class B IP address, the last 2 octets are used for the (Subnet ID + Host ID). We know that
2^(number of zeroes in the Host ID) - 2 = number of hosts
2^(number of ones in the Host ID) - 2 = number of subnets
Hence, we can use the mask 255.255.255.10000000 → 255.255.255.128, which provides
126 (2^7 - 2) host addresses per subnet and
510 (2^9 - 2) subnet addresses (LANs), satisfying both the 100-host and the 150-LAN requirements.
Another appropriate mask would be 255.255.255.0, which provides 254 (2^8 - 2) host addresses per
subnet and 254 (2^8 - 2) subnets, and greater scalability when host requirements change in the future.
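A small Python sketch of the host/subnet counting used in Q4; the helper names usable_hosts and usable_subnets are invented here, and the classful "-2" convention matches the formulas above:

```python
# A small check of the formulas used in Q4 (classful convention that
# subtracts the all-zeros and all-ones patterns for both hosts and subnets).
def usable_hosts(host_zero_bits):
    return 2 ** host_zero_bits - 2

def usable_subnets(subnet_one_bits):
    return 2 ** subnet_one_bits - 2

# Class B with mask 255.255.255.128: 9 subnet bits, 7 host bits
print(usable_subnets(9), usable_hosts(7))   # 510 subnets, 126 hosts per subnet
# Alternative mask 255.255.255.0: 8 subnet bits, 8 host bits
print(usable_subnets(8), usable_hosts(8))   # 254 subnets, 254 hosts per subnet
```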
Q5) An organization requires 8 subnets and each subnet should have 10 hosts. It
uses a class C addressing. What is the most appropriate subnet mask which can be
used?
A) Host requirement is 10 → we need a minimum of 4 host bits (2^4 = 16)
Given a class C IP address, the last octet is used for the (Subnet ID + Host ID). We know that
2^(number of zeroes in the Host ID) - 2 = number of hosts
2^(number of ones in the Host ID) - 2 = number of subnets
Hence, we can use 255.255.255.11110000 → 255.255.255.240, which provides
14 (2^4 - 2) host addresses per subnet and 14 (2^4 - 2) subnet addresses.

Q6) Perform CIDR aggregation on the following /24 addresses. IP addresses:
200.96.86.0/24, 200.96.87.0/24, 200.96.88.0/24, 200.96.89.0/24
A)
200.96.86.0 = 11001000 01100000 01010110 00000000
200.96.87.0 = 11001000 01100000 01010111 00000000
200.96.88.0 = 11001000 01100000 01011000 00000000
200.96.89.0 = 11001000 01100000 01011001 00000000
The first 20 bits are identical in all four addresses, so the aggregation (supernet) mask has
its first 20 bits set to 1:
Mask = 11111111 11111111 11110000 00000000 (255.255.240.0)
AND → 11001000 01100000 01010000 00000000 (200.96.80.0)
Hence, the aggregated IP address can be written as 200.96.80.0/20
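The aggregation step can also be checked programmatically. The sketch below (one possible approach, not the only one) keeps shortening the prefix until all four /24 blocks map to a single supernet:

```python
import ipaddress

# A sketch of the aggregation in Q6: shorten the prefix until all four
# /24 networks fall inside one covering supernet.
nets = ["200.96.86.0/24", "200.96.87.0/24", "200.96.88.0/24", "200.96.89.0/24"]
addrs = [int(ipaddress.ip_network(n).network_address) for n in nets]

prefix = 24
mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF
while len({a & mask for a in addrs}) > 1:        # more than one block remains
    prefix -= 1
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF

aggregate = ipaddress.ip_network((addrs[0] & mask, prefix))
print(aggregate)    # 200.96.80.0/20
```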

Q7) Suppose a Class C network uses the mask 255.255.255.192. Then,
i. How many subnet bits are used in this mask?
ii. How many host bits are available per subnet?
iii. What are the subnet addresses?
iv. What is the broadcast address of each subnet?
v. What is the valid host range of each subnet?
A) i) 255.255.255.192 = 255.255.255.11000000
In class C, only the last octet represents the (Subnet ID + Host ID). We know that the 1s of the
mask in this octet are subnet bits and the 0s are host bits. Hence, subnet bits = 2
ii) Host bits = 6
iii) Writing the network portion (the first three octets) as X.Y.Z, the subnet addresses are:
X.Y.Z.00000000 → X.Y.Z.0
X.Y.Z.01000000 → X.Y.Z.64
X.Y.Z.10000000 → X.Y.Z.128
X.Y.Z.11000000 → X.Y.Z.192
iv) Broadcast address of each subnet:
X.Y.Z.0 → X.Y.Z.63
X.Y.Z.64 → X.Y.Z.127
X.Y.Z.128 → X.Y.Z.191
X.Y.Z.192 → X.Y.Z.255
v) Valid host range of each subnet:
X.Y.Z.0 → X.Y.Z.1 to X.Y.Z.62
X.Y.Z.64 → X.Y.Z.65 to X.Y.Z.126
X.Y.Z.128 → X.Y.Z.129 to X.Y.Z.190
X.Y.Z.192 → X.Y.Z.193 to X.Y.Z.254

Q8) Perform CIDR aggregation on the following /22 addresses. IP addresses:
128.56.24.0/22, 128.56.25.0/22, 128.56.26.0/22, 128.56.27.0/22
A)
128.56.24.0 = 10000000 00111000 00011000 00000000
128.56.25.0 = 10000000 00111000 00011001 00000000
128.56.26.0 = 10000000 00111000 00011010 00000000
128.56.27.0 = 10000000 00111000 00011011 00000000
Subnet mask = 11111111 11111111 11111100 00000000 (first 22 bits are 1s)
AND → 10000000 00111000 00011000 00000000 (128.56.24.0)
Also, as the first 22 bits are the same for all the given IP addresses, the
aggregated IP address can be written as 128.56.24.0/22
Q9) Given the host address as 199.42.78.133 and the subnet mask 255.255.255.224,
determine:
a) the network address and the subnet address it belongs
b) total number of hosts in each subnet
c) total number of subnets in this network
d) range of addresses in each subnet.
e)broadcast address of each subnet.
A) Given IP address is of Class C. Hence only last octet is used for Host ID.
IP address = 11000111 00101010 01001110 10000101 (199.42.78.133)
Mask = 11111111 11111111 11111111 11100000 (255.255.255.224)
Subnet address → 11000111 00101010 01001110 10000000 (199.42.78.128)
a) Hence, the network address is 199.42.78.0 (first 3 octets constitute the Net ID) and
subnet address is 199.42.78.128
b) Total number of hosts per subnet = 2^5 - 2 = 30
c) Total number of subnets = 2^3 - 2 = 6
d, e) Range of address in each subnet and broadcast address
Subnet Address | Range of Addresses | Broadcast Address
199.42.78.32 | 199.42.78.33 - 199.42.78.62 | 199.42.78.63
199.42.78.64 | 199.42.78.65 - 199.42.78.94 | 199.42.78.95
199.42.78.96 | 199.42.78.97 - 199.42.78.126 | 199.42.78.127
199.42.78.128 | 199.42.78.129 - 199.42.78.158 | 199.42.78.159
199.42.78.160 | 199.42.78.161 - 199.42.78.190 | 199.42.78.191
199.42.78.192 | 199.42.78.193 - 199.42.78.222 | 199.42.78.223
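The table above can be verified with the following Python sketch, which enumerates the /27 subnets of 199.42.78.0/24 and skips the first and last subnets to match the classful "-2" convention used in the answer:

```python
import ipaddress

# Check the Q9 table: enumerate the /27 subnets of the class C network
# 199.42.78.0/24 and print each subnet's usable range and broadcast address.
network = ipaddress.ip_network("199.42.78.0/24")
subnets = list(network.subnets(new_prefix=27))
for sub in subnets[1:-1]:                    # drop all-zeros and all-ones subnets
    hosts = list(sub.hosts())
    print(sub.network_address, hosts[0], "-", hosts[-1], sub.broadcast_address)
```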
Q10) Given a Class B subnet mask of 255.255.192.0 and figure out the subnets,
broadcast address and valid host range.
i. How many subnets does this mask provide?
ii. How many hosts per subnet does this mask provide?
iii. What are the valid subnets?
iv. What is the broadcast address for each subnet?
v. What is the host range of each subnet?
A) i, ii) In class B, the last 2 octets give the (Subnet ID + Host ID)
→ 255.255.192.0 = 255.255.11000000.00000000 ∴ number of subnets = 2^2 - 2 = 2 and
number of hosts per subnet = 2^14 - 2 = 16382
iii) Valid Subnets, broadcast address and range of addresses
X.Y.00000000.00000000 → X.Y.0.0 (not used)
X.Y.01000000.00000000 → X.Y.64.0 (valid)
X.Y.10000000.00000000 → X.Y.128.0 (valid)
X.Y.11000000.00000000 → X.Y.192.0 (not used)
Subnet Address | Range of Addresses | Broadcast Address
X.Y.64.0 | X.Y.64.1 - X.Y.127.254 | X.Y.127.255
X.Y.128.0 | X.Y.128.1 - X.Y.191.254 | X.Y.191.255
Q11) Given the IP address 10.245.131.0/19, find its i) class ii) subnet mask iii)
Network address iv) subnet address v) nos. of subnets vi) nos. of hosts vii)
broadcast address of the subnet.
A) i) It belongs to class A (1.x.x.x to 127.x.x.x)
ii) Here mask is 11111111.11111111.11100000.00000000 (first 19 bits are 1s) →
255.255.224.0
In class A, the Host ID part is the last 3 octets and the first octet is the Net ID, hence
iii) network ID is 10.0.0.0
iv)
IP address = 00001010 11110101 10000011 00000000 (10.245.131.0)
Mask = 11111111 11111111 11100000 00000000 (255.255.224.0)
Subnet address → 00001010 11110101 10000000 00000000 (10.245.128.0)

v) No. of subnets = 2^11 - 2 = 2046    vi) No. of hosts per subnet = 2^13 - 2 = 8190

vii) Broadcast address of the subnet → the next subnet address would be
00001010 11110101 10100000 00000000 → 10.245.160.0 ∴ the address just before it is
the broadcast address of this subnet → 10.245.159.255
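As a quick cross-check of Q11, the ipaddress module can derive the mask, subnet address and broadcast address directly from the /19 prefix (a verification sketch only):

```python
import ipaddress

# Check Q11: derive the mask, subnet address and broadcast address
# of 10.245.131.0/19 directly with the ipaddress module.
net = ipaddress.ip_network("10.245.131.0/19", strict=False)
print(net.netmask)            # 255.255.224.0
print(net.network_address)    # 10.245.128.0  (the subnet address)
print(net.broadcast_address)  # 10.245.159.255

# Classful view: class A network ID and the "-2" subnet/host counts used above
print("network ID:", "10.0.0.0")
print("subnets:", 2 ** 11 - 2, "hosts per subnet:", 2 ** 13 - 2)
```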
Q12) Evaluate how fragmentation and reassembly is done in an FDDI network, if the
original packet size of the IPv4 datagram is 6112 bytes.
A) We know that the MTU of FDDI is 4464 bytes
The maximum possible data length per fragment = 4464 − 20 = 4444 bytes
However, 4444 is not a multiple of 8. Thus, we need to set the maximum data length
to 4440 bytes. We can break 6112 into 4440 + 1672

Packet | Total Length | ID | MF | Fragment Offset
Original Packet | 6132 (6112 + 20) | X | 0 | 0
Fragment 1 | 4460 (4440 + 20) | X | 1 | 0
Fragment 2 | 1692 (1672 + 20) | X | 0 | 555 (4440/8)

Note: Remember that the fragment offset value pertains to the data part only and does not
include the header; that is why we use (4440/8) and NOT (4460/8). Also, MF = 0 for Fragment 2
because it is the last fragment.
X denotes a unique identification value.
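The fragmentation arithmetic above can be reproduced with a short Python sketch; it assumes, as the answer does, that the 6112 bytes are the data (payload) length, the IP header is 20 bytes, and the FDDI MTU is 4464 bytes:

```python
# A sketch of the fragmentation arithmetic in Q12, under the same assumptions
# as the answer above: 6112 bytes of data, 20-byte IP header, FDDI MTU 4464.
MTU = 4464
HEADER = 20
data_len = 6112

max_data = ((MTU - HEADER) // 8) * 8   # data per fragment, rounded down to a multiple of 8
offset = 0
fragments = []
while data_len > 0:
    size = min(max_data, data_len)
    data_len -= size
    more_fragments = 1 if data_len > 0 else 0       # MF = 0 only for the last fragment
    fragments.append((size + HEADER, more_fragments, offset // 8))
    offset += size

for i, (total_len, mf, frag_offset) in enumerate(fragments, 1):
    print(f"Fragment {i}: Total Length={total_len}, MF={mf}, Offset={frag_offset}")
# Fragment 1: Total Length=4460, MF=1, Offset=0
# Fragment 2: Total Length=1692, MF=0, Offset=555
```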
