Computer Networks_Unit IV

Uploaded by Anitha Sakthivel

Unit - IV

Network Layer - Design Issues - Routing Algorithms - Congestion Control


Algorithms - IP Protocol - IP Addresses - Internet Control Protocols.

4.1 Network Layer Design Issues


1. Store-and-forward packet switching
2. Services provided to transport layer
3. Implementation of connectionless service
4. Implementation of connection-oriented service
5. Comparison of virtual-circuit and datagram networks

1. Store-and-forward packet switching

Figure 4.1: Store And Forward


A host with a packet to send transmits it to the nearest router, either on its own LAN or
over a point-to-point link to the ISP. The packet is stored there until it has fully arrived, and the
link has finished its processing by verifying the checksum. Then it is forwarded to the next router
along the path until it reaches the destination host, where it is delivered. This mechanism is
store-and-forward packet switching.

2. Services provided to transport layer


The network layer provides services to the transport layer at the network layer/transport layer
interface. The services need to be carefully designed with the following goals in mind:
1. Services independent of router technology.
2. Transport layer shielded from number, type, topology of routers.
3. Network addresses made available to the transport layer should use a uniform numbering
plan, even across LANs and WANs.

3. Implementation of connectionless service


If connectionless service is offered, packets are injected into the network individually and
routed independently of each other. No advance setup is needed. In this context, the packets are
frequently called datagrams and the network is called a datagram network.

Figure 4.2: Connectionless Service


A’s Table (initially)    A’s Table (later)    C’s Table    E’s Table

Let us assume for this example that the message is four times longer than the maximum
packet size, so the network layer has to break it into four packets, 1, 2, 3, and 4, and send each of
them in turn to router A. Every router has an internal table telling it where to send packets for
each possible destination. Each table entry is a pair (destination, outgoing line).
Only directly connected lines can be used.
A’s initial routing table is shown in the figure under the label ‘‘initially’’. At A, packets 1,
2, and 3 are stored briefly, having arrived on the incoming link. Then each packet is forwarded
according to A’s table, onto the outgoing link to C within a new frame. Packet 1 is then forwarded
to E and then to F.
However, something different happens to packet 4. When it gets to A it is sent to router B,
even though it is also destined for F. For some reason (a traffic jam along the ACE path), A decided
to send packet 4 via a different route than that of the first three packets. Router A updated its
routing table, as shown under the label ‘‘later’’. The algorithm that manages the tables and makes
the routing decisions is called the routing algorithm.
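The per-packet table lookup described above can be sketched as follows. This is a minimal illustration with made-up table entries, not the exact tables of Figure 4.2, though the A to C to E to F path mirrors the one in the text.

```python
# Minimal sketch of datagram forwarding: each router keeps a table
# mapping destination -> outgoing line and consults it per packet.
# Topology and table contents are illustrative.

ROUTING_TABLES = {
    "A": {"B": "B", "C": "C", "D": "B", "E": "C", "F": "C"},
    "C": {"A": "A", "B": "A", "D": "D", "E": "E", "F": "E"},
    "E": {"A": "C", "B": "D", "C": "C", "D": "D", "F": "F"},
}

def forward(router, packet_dest):
    """Look up the outgoing line for a packet at this router."""
    return ROUTING_TABLES[router][packet_dest]

# A packet for F entering at A is relayed hop by hop: A -> C -> E -> F.
hop1 = forward("A", "F")
hop2 = forward(hop1, "F")
hop3 = forward(hop2, "F")
path = ["A", hop1, hop2, hop3]
```

Note that each router decides independently; no per-connection state is kept anywhere.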
4. Implementation of connection-oriented service

Figure 4.3: Connection-oriented Service


A’s Table C’s Table E’s Table

If connection-oriented service is used, a path from the source router all the way to the
destination router must be established before any data packets can be sent. This connection is
called a VC (virtual circuit), and the network is called a virtual-circuit network.
When a connection is established, a route from the source machine to the destination
machine is chosen as part of the connection setup and stored in tables inside the routers. That
route is used for all traffic flowing over the connection, exactly the same way that the telephone
system works. When the connection is released, the virtual circuit is also terminated. With
connection-oriented service, each packet carries an identifier telling which virtual circuit it
belongs to.
As an example, consider the situation shown in Figure. Here, host H1 has established
connection 1 with host H2. This connection is remembered as the first entry in each of the
routing tables. The first line of A’s table says that if a packet bearing connection identifier
1 comes in from H1, it is to be sent to router C and given connection identifier 1. Similarly, the
first entry at C routes the packet to E, also with connection identifier 1.
Now let us consider what happens if H3 also wants to establish a connection to H2. It
chooses connection identifier 1 (because it is initiating the connection and this is its only
connection) and tells the network to establish the virtual circuit.
This leads to the second row in the tables. Note that there is a conflict here because
although A can easily distinguish connection 1 packets from H1 from connection 1 packets from
H3, C cannot do this. For this reason, A assigns a different connection identifier to the outgoing
traffic for the second connection. Avoiding conflicts of this kind is why routers need the ability
to replace connection identifiers in outgoing packets. In some contexts, this process is called
label switching. An example of a connection-oriented network service is MPLS (Multi-
Protocol Label Switching).
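The identifier replacement that A performs can be sketched as a lookup table keyed by (incoming line, incoming VC). The entries below are a hypothetical reconstruction of the H1/H3 conflict described above, not the exact tables of Figure 4.3.

```python
# Sketch of virtual-circuit label switching at one router.
# Table rows map (in_line, in_vc) -> (out_line, out_vc).
# Router A's table: both incoming connections use VC id 1, so A
# rewrites the second one to 2 before forwarding to C.
A_TABLE = {
    ("H1", 1): ("C", 1),
    ("H3", 1): ("C", 2),
}

def switch(table, in_line, in_vc):
    """Rewrite the VC identifier as the packet is forwarded."""
    return table[(in_line, in_vc)]

out1 = switch(A_TABLE, "H1", 1)   # H1's connection keeps id 1
out2 = switch(A_TABLE, "H3", 1)   # H3's connection is relabeled to 2
```

Because A relabels, router C sees two distinct identifiers and needs no knowledge of which host originated each circuit.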

5. Comparison of virtual-circuit and datagram networks

4.2 Routing Algorithms


The main function of NL (Network Layer) is routing packets from the source machine to
the destination machine.
There are two processes inside router:
a) One of them handles each packet as it arrives, looking up the outgoing line to use for it in the
routing table. This process is forwarding.
b) The other process is responsible for filling in and updating the routing tables. That is where the
routing algorithm comes into play. This process is routing.
Regardless of whether routes are chosen independently for each packet or only when new
connections are established, certain properties are desirable in a routing algorithm: correctness,
simplicity, robustness, stability, fairness, and optimality.

Routing algorithms can be grouped into two major classes:


1. Nonadaptive (Static Routing)
Nonadaptive algorithms do not base their routing decisions on measurements or estimates
of the current traffic and topology. Instead, the choice of the route to use to get from I to J is
computed in advance, offline, and downloaded to the routers when the network is booted. This
procedure is sometimes called static routing.
2. Adaptive (Dynamic Routing)
Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the
topology, and usually the traffic as well.
Adaptive algorithms differ in:
1. Where they get their information (e.g., locally, from adjacent routers, or from all routers),
2. When they change the routes (e.g., every ∆T sec, when the load changes, or when the
topology changes), and
3. What metric is used for optimization (e.g., distance, number of hops, or estimated
transit time).
This procedure is called dynamic routing.

Different Routing Algorithms


1. Optimality principle
2. Shortest path algorithm
3. Flooding
4. Distance vector routing
5. Link state routing
6. Hierarchical Routing

1. The Optimality Principle

One can make a general statement about optimal routes without regard to network
topology or traffic. This statement is known as the optimality principle. It states that if router J is
on the optimal path from router I to router K, then the optimal path from J to K also falls along
the same route.
As a direct consequence of the optimality principle, we can see that the set of optimal
routes from all sources to a given destination forms a tree rooted at the destination. Such a tree is
called a sink tree. The goal of all routing algorithms is to discover and use the sink trees for all
routers.
Figure 4.4: (a) A network. (b) A sink tree for router B.

2. Shortest Path Routing (Dijkstra’s)

The idea is to build a graph of the subnet, with each node of the graph representing a
router and each arc of the graph representing a communication line or link. To choose a route
between a given pair of routers, the algorithm just finds the shortest path between them on the
graph.

1. Start with the local node (router) as the root of the tree. Assign a cost of 0 to this
node and make it the first permanent node.

2. Examine each neighbor of the node that was the last permanent node.

3. Assign a cumulative cost to each node and make it tentative.

4. Among the list of tentative nodes:
   a. Find the node with the smallest cost and make it permanent.
   b. If a node can be reached from more than one route, then select the route
      with the shortest cumulative cost.

5. Repeat steps 2 to 4 until every node becomes permanent.


Figure 4.5: Shortest Path Routing
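The five steps above are essentially Dijkstra's algorithm. A compact sketch, using a heap for the tentative set, might look like this; the example graph is made up, not the one in Figure 4.5.

```python
import heapq

def dijkstra(graph, source):
    """Return {node: (cost, previous_node)} for shortest paths."""
    dist = {}
    heap = [(0, source, None)]   # tentative nodes: (cost, node, prev)
    while heap:
        cost, node, prev = heapq.heappop(heap)
        if node in dist:
            continue             # already permanent
        dist[node] = (cost, prev)  # smallest tentative cost: permanent
        for neigh, w in graph[node].items():
            if neigh not in dist:
                # assign a cumulative cost and make the node tentative
                heapq.heappush(heap, (cost + w, neigh, node))
    return dist

GRAPH = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}

paths = dijkstra(GRAPH, "A")
# A->B costs 2; A->C is cheaper via B (3) than directly (5);
# A->D is cheapest via B and C (4).
```

The "previous node" entries let a router read off the next hop toward any destination by walking the tree back to itself.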
3. Flooding
Another static algorithm is flooding, in which every incoming packet is sent out on every
outgoing line except the one it arrived on. Flooding obviously generates vast numbers of
duplicate packets, in fact, an infinite number unless some measures are taken to damp the
process.

One such measure is to have a hop counter contained in the header of each packet, which
is decremented at each hop, with the packet being discarded when the counter reaches zero.
Ideally, the hop counter should be initialized to the length of the path from source to
destination.
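A toy simulation of hop-counter flooding, on a made-up five-node topology, shows both the damping effect and the duplicate packets the text warns about:

```python
# Sketch of flooding with a hop counter: each router re-sends an
# incoming packet on every line except the arrival line, and the
# packet is discarded when its counter reaches zero.
TOPOLOGY = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def flood(node, came_from, hops, delivered):
    """Recursively flood a packet, recording every node that sees it."""
    delivered.append(node)
    if hops == 0:
        return                    # counter exhausted: discard
    for neigh in TOPOLOGY[node]:
        if neigh != came_from:    # every line except the arrival line
            flood(neigh, node, hops - 1, delivered)

seen = []
flood("A", None, 3, seen)
# Duplicates occur: D is reached via both B and C.
```

Without the hop counter, the B-D-C-A cycle would circulate copies forever; with it, the flood dies out after three hops.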

A variation of flooding that is slightly more practical is selective flooding. In this
algorithm the routers do not send every incoming packet out on every line, but only on those
lines that are going approximately in the right direction.

Flooding is not practical in most applications.

Intra- and Interdomain Routing

An autonomous system (AS) is a group of networks and routers under the authority of a single
administration. Routing inside an autonomous system is referred to as intradomain routing
(DISTANCE VECTOR, LINK STATE). Routing between autonomous systems is referred to as
interdomain routing (PATH VECTOR).
Each autonomous system can choose one or more intradomain routing protocols to
handle routing inside the autonomous system. However, only one interdomain routing
protocol handles routing between autonomous systems.
4. Distance Vector Routing
In distance vector routing, the least-cost route between any two nodes is the route
with minimum distance. In this protocol, as the name implies, each node maintains a vector
(table) of minimum distances to every node.
There are mainly three operations in this routing:
i) Initialization
Each node can know only the distance between itself and its immediate neighbors,
those directly connected to it. So for the moment, we assume that each node can send a
message to its immediate neighbors and find the distance between itself and these neighbors.
The figure below shows the initial table for each node. The distance for any entry that is not a
neighbor is marked as infinite (unreachable).

Figure 4.6: Initialization of tables in distance vector routing


ii) Sharing
The whole idea of distance vector routing is the sharing of information between
neighbors. Although node A does not know about node E, node C does. So if node C shares
its routing table with A, node A can also know how to reach node E. On the other hand, node
C does not know how to reach node D, but node A does. If node A shares its routing table
with node C, node C also knows how to reach node D. In other words, nodes A and C, as
immediate neighbors, can improve their routing tables if they help each other.
iii) Updating
When a node receives a two-column table from a neighbor, it needs to update its
routing table.
Updating takes three steps:
1. The receiving node needs to add the cost between itself and the sending node to each value
in the second column. (x + y)
2. If the receiving node uses information from any row, the sending node is the next node in
the route.
3. The receiving node needs to compare each row of its old table with the corresponding row
of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses the row with the
smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new row. For
example, suppose node C has previously advertised a route to node X with distance 3.
Suppose that now there is no path between C and X; node C now advertises this route with a
distance of infinity. Node A must not ignore this value even though its old entry is smaller.
The old route does not exist anymore. The new route has a distance of infinity.
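The three update steps can be sketched as a single function. The costs and node names below are illustrative, but the final call reproduces the C-to-X infinity example from the text:

```python
INF = float("inf")

def dv_update(my_table, sender, link_cost, sender_vector):
    """my_table: {dest: (cost, next_hop)}; sender_vector: {dest: cost}."""
    for dest, cost in sender_vector.items():
        new_cost = cost + link_cost                  # step 1: x + y
        old_cost, old_next = my_table.get(dest, (INF, None))
        if old_next == sender:
            # step 3b: same next hop -> always take the new row,
            # even if it is worse (the old route may be gone)
            my_table[dest] = (new_cost, sender)
        elif new_cost < old_cost:
            # step 3a: different next hop -> keep the smaller cost
            my_table[dest] = (new_cost, sender)

# Node A, link A-C of cost 2; C advertises X at 3, then at infinity.
A = {"X": (6, "B")}
dv_update(A, "C", 2, {"X": 3})      # A now routes X via C at cost 5
dv_update(A, "C", 2, {"X": INF})    # C withdrew: A must accept infinity
```

The second call is the crucial case: because the route's next hop is C, A may not keep its stale cost of 5.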
Updating in distance vector routing

Figure 4.7: Final Diagram


When to Share
The question now is: when does a node send its partial routing table (only two columns) to
all its immediate neighbors?
The table is sent both periodically and when there is a change in the table.
Periodic Update
A node sends its routing table, normally every 30 s, in a periodic update. The period
depends on the protocol that is using distance vector routing.
Triggered Update
A node sends its two-column routing table to its neighbors anytime there is a change
in its routing table. This is called a triggered update.
The change can result from the following.
1. A node receives a table from a neighbor, resulting in changes in its own table after
updating.
2. A node detects some failure in the neighboring links which results in a distance change to
infinity.

Figure 4.8 : Two-node instability

Figure 4.9 : Three-node instability

SOLUTIONS FOR INSTABILITY


i) Defining Infinity: redefine infinity to a smaller number, such as 100. For our previous
scenario, the system will be stable in less than 20 updates. As a matter of fact, most
implementations of the distance vector protocol define the distance between each node to be
1 and define 16 as infinity. However, this means that distance vector routing cannot be
used in large systems. The size of the network, in each direction, cannot exceed 15 hops.
ii) Split Horizon: In this strategy, instead of flooding the table through each interface, each
node sends only part of its table through each interface. If, according to its table, node B
thinks that the optimum route to reach X is via A, it does not need to advertise this piece of
information to A; the information has come from A (A already knows). Taking information
from node A, modifying it, and sending it back to node A creates the confusion. In our
scenario, node B eliminates the last line of its routing table before it sends it to A. In this
case, node A keeps the value of infinity as the distance to X. Later when node A sends its
routing table to B, node B also corrects its routing table. The system becomes stable after the
first update: both nodes A and B know that X is not reachable.

iii) Split Horizon and Poison Reverse: Using the split horizon strategy has one drawback.
Normally, the distance vector protocol uses a timer, and if there is no news about a route, the
node deletes the route from its table. When node B in the previous scenario eliminates the
route to X from its advertisement to A, node A cannot guess that this is due to the split
horizon strategy (the source of information was A) or because B has not received any news
about X recently. The split horizon strategy can be combined with the poison reverse
strategy. Node B can still advertise the value for X, but if the source of information is A, it
can replace the distance with infinity as a warning: "Do not use this value; what I know about
this route comes from you."
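Building the advertisement a node sends to one particular neighbor under split horizon with poison reverse might be sketched like this (the table contents are hypothetical):

```python
INF = float("inf")

def advertisement(table, to_neighbor):
    """table: {dest: (cost, next_hop)} -> {dest: cost} to advertise."""
    adv = {}
    for dest, (cost, next_hop) in table.items():
        if next_hop == to_neighbor:
            adv[dest] = INF   # poison reverse: "do not use this route"
        else:
            adv[dest] = cost
    return adv

# B reaches X via A (cost 2) and reaches Y directly (cost 1).
B = {"X": (2, "A"), "Y": (1, None)}
to_A = advertisement(B, "A")
# B poisons the X route toward A but advertises Y normally.
```

Plain split horizon would omit the X row entirely; poison reverse keeps the row but marks it unusable, so A can distinguish "route withheld" from "no news".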

Figure 4.10: The Count-to-Infinity Problem


5. Link State Routing
Link state routing is based on the assumption that, although the global knowledge
about the topology is not available, each node has partial knowledge: it knows the state (type,
condition, and cost) of its links. In other words, the whole topology can be compiled from
the partial knowledge of each node.
The idea behind link state routing is fairly simple and can be stated as five parts. Each
router must do the following things to make it work:
1. Discover its neighbors and learn their network addresses.
2. Set the distance or cost metric to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to and receive packets from all other routers.
5. Compute the shortest path to every other router.

Figure 4.11: Link State Routing


1. Creation of the states of the links by each node, called the link state packet (LSP).
2. Dissemination of LSPs to every other router, called flooding, in an efficient and reliable way.
3. Formation of a shortest path tree for each node.
4. Calculation of a routing table based on the shortest path tree.

I. Creation of Link State Packet (LSP)

A link state packet can carry a large amount of information. For the moment, however, we
assume that it carries a minimum amount of data: the node identity, the list of links, a
sequence number, and age.

The first two, node identity and the list of links, are needed to make the topology. The
third, sequence number, facilitates flooding and distinguishes new LSPs from old ones. The
fourth, age, prevents old LSPs from remaining in the domain for a long time.

LSPs are generated on two occasions:

1. When there is a change in the topology of the domain.

2. On a periodic basis. The period in this case is much longer compared to distance vector.
The timer set for periodic dissemination is normally in the range of 60 min or 2 h based on
the implementation. A longer period ensures that flooding does not create too much traffic on
the network.

II. Flooding of LSPs:

After a node has prepared an LSP, it must be disseminated to all other nodes, not only
to its neighbors.

The process is called flooding and is based on the following:

1. The creating node sends a copy of the LSP out of each interface

2. A node that receives an LSP compares it with the copy it may already have. If the newly
arrived LSP is older than the one it has (found by checking the sequence number), it discards
the LSP. If it is newer, the node does the following:

a. It discards the old LSP and keeps the new one.

b. It sends a copy of it out of each interface except the one from which the packet arrived.
This guarantees that flooding stops somewhere in the domain (where a node has only one
interface).
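The accept-or-discard rule in step 2 can be sketched with a small function; the interface names and sequence numbers below are illustrative:

```python
# Sketch of the LSP acceptance rule used during flooding: keep the
# packet only if its sequence number is newer than the stored copy,
# then forward it on all interfaces except the arrival one.

def receive_lsp(db, lsp, arrival_iface, interfaces):
    """db: {origin: seq}. Returns the interfaces to forward on."""
    origin, seq = lsp
    if seq <= db.get(origin, -1):
        return []                       # older or duplicate: discard
    db[origin] = seq                    # keep the newer LSP
    return [i for i in interfaces if i != arrival_iface]

db = {}
out1 = receive_lsp(db, ("R1", 7), "if0", ["if0", "if1", "if2"])
out2 = receive_lsp(db, ("R1", 7), "if1", ["if0", "if1", "if2"])
# The duplicate arriving on if1 is discarded; nothing is re-flooded.
```

The sequence-number check is what makes the flood terminate: once every node has stored seq 7 for R1, every further copy is silently dropped.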

III. Formation of Shortest Path Tree:

Dijkstra Algorithm

A shortest path tree is a tree in which the path between the root
and every other node is the shortest. The Dijkstra algorithm creates a shortest path tree from a
graph. The algorithm divides the nodes into two sets: tentative and permanent. It finds the
neighbors of a current node, makes them tentative, examines them, and if they pass the
criteria, makes them permanent.
Figure 4.12: Flow diagram of Shortest path diagram

Figure 4.13: Topology Formation

IV Calculation of a routing table


Routing table for node A
6. Hierarchical Routing
As networks grow in size, the router routing tables grow proportionally. Not only is
router memory consumed by ever-increasing tables, but more CPU time is needed to scan
them and more bandwidth is needed to send status reports about them.
At a certain point, the network may grow to the point where it is no longer feasible for
every router to have an entry for every other router, so the routing will have to be done
hierarchically, as it is in the telephone network.
When hierarchical routing is used, the routers are divided into what we will call
regions. Each router knows all the details about how to route packets to destinations within its
own region but knows nothing about the internal structure of other regions.
For huge networks, a two-level hierarchy may be insufficient; it may be necessary to
group the regions into clusters, the clusters into zones, the zones into groups, and so on, until
we run out of names for aggregations.

Figure 4.14: Hierarchical Routing


When a single network becomes very large, an interesting question is ‘‘how many
levels should the hierarchy have?’’ For example, consider a network with 720 routers. If there
is no hierarchy, each router needs 720 routing table entries.
If the network is partitioned into 24 regions of 30 routers each, each router needs 30
local entries plus 23 remote entries for a total of 53 entries.
If a three-level hierarchy is chosen, with 8 clusters each containing 9 regions of 10
routers, each router needs 10 entries for local routers, 8 entries for routing to other regions
within its own cluster, and 7 entries for distant clusters, for a total of 25 entries.
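The arithmetic behind these three figures follows a simple model: one entry per local router, plus one per other region, plus one per other cluster. A quick check:

```python
# Table sizes for the 720-router example, under the simple model
# used in the text.

def flat_entries(routers):
    """No hierarchy: one entry per router."""
    return routers

def two_level(regions, per_region):
    """Local routers plus one entry for each other region."""
    return per_region + (regions - 1)

def three_level(clusters, regions_per_cluster, per_region):
    """Local routers, other regions in the cluster, other clusters."""
    return per_region + (regions_per_cluster - 1) + (clusters - 1)

flat = flat_entries(720)        # 720 entries
two = two_level(24, 30)         # 30 + 23 = 53 entries
three = three_level(8, 9, 10)   # 10 + 8 + 7 = 25 entries
```

The savings grow with network size, which is why the Internet itself routes on aggregated prefixes rather than individual hosts.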
7. Broadcast Routing

In some applications, hosts need to send messages to many or all other hosts, for
example, weather reports, stock market updates, etc. Sending a packet to all destinations
simultaneously is called broadcasting.
1) One method is for the source to send a distinct packet to each destination. This method
wastes bandwidth and also requires the source to have a complete list of all destinations.
2) The second is to use the flooding technique. This generates too many packets and consumes
too much bandwidth.
3) Another method is multidestination routing. Here each packet contains either a list of
destinations or a bitmap indicating the desired destinations. When a packet arrives at a router,
the router checks all the destinations to determine the set of output lines that will be needed.
The router generates a new copy of the packet for each output line to be used and includes in
each packet only those destinations that are to use the line. This routing is like separately
addressed packets except that several packets must follow the same route.

4) Another algorithm uses the spanning tree. A spanning tree is a subset of the network
that includes all the routers but contains no loops. If each router knows which of its lines
belong to the spanning tree, it can copy an incoming broadcast packet onto all spanning tree
lines except the one it arrived on. This method makes excellent use of bandwidth and
generates the minimum number of packets necessary to do the job. The only disadvantage is
that each router must have knowledge of some spanning tree.

5) One more algorithm is an attempt to approximate the behavior of the previous one, even
when the routers do not know anything at all about spanning trees. The idea is remarkably
simple once it has been pointed out. When a broadcast packet arrives at a router, the router
checks to see if the packet arrived on the line that is normally used for sending packets to the
source of the broadcast. If so, there is an excellent chance that the broadcast packet itself
followed the best route from the router and is therefore the first copy to arrive at the router.
This being the case, the router forwards copies of it onto all lines except the one it arrived on.
If, however, the broadcast packet arrived on a line other than the preferred one for reaching
the source, the packet is discarded as a likely duplicate.
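The reverse path forwarding test at each router reduces to comparing the arrival line with the unicast routing table's entry for the broadcast source. A sketch, with illustrative line names:

```python
# Sketch of the reverse path forwarding check: a broadcast packet is
# re-flooded only if it arrived on the line this router would itself
# use to send unicast traffic toward the broadcast source.

def rpf_forward(unicast_next_hop, source, arrival_line, lines):
    """unicast_next_hop: {dest: line} from the normal routing table."""
    if arrival_line != unicast_next_hop[source]:
        return []                     # likely duplicate: discard
    return [l for l in lines if l != arrival_line]

# Suppose this router's preferred line toward source I is "if2".
table = {"I": "if2"}
good = rpf_forward(table, "I", "if2", ["if0", "if1", "if2"])
dup = rpf_forward(table, "I", "if0", ["if0", "if1", "if2"])
```

No spanning tree needs to be stored: the router reuses its ordinary unicast table, which is the appeal of the scheme.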
Figure 4.15: Broadcast Routing

Figure 4.16: Flow of Broadcast Routing

Part (a) shows a network, part (b) shows a sink tree for router I of that network, and
part (c) shows how the reverse path algorithm works. On the first hop, I sends packets to F,
H, J, and N, as indicated by the second row of the tree. Each of these packets arrives on the
preferred path to I (assuming that the preferred path falls along the sink tree) and is so
indicated by a circle around the letter. On the second hop, eight packets are generated, two by
each of the routers that received a packet on the first hop. As it turns out, all eight of these
arrive at previously unvisited routers, and five of these arrive along the preferred line. Of the
six packets generated on the third hop, only three arrive on the preferred path (at C, E, and
K); the others are duplicates. After five hops and 24 packets, the broadcasting terminates,
compared with four hops and 14 packets had the sink tree been followed exactly.

8. Multicast Routing
For some applications, it is necessary for one process to send a message to all other
members of a group. If the group is small, it can just send each other member a point-to-point
message. If the group is large, this strategy is expensive. Sometimes broadcasting is
used, but using broadcasting to inform 1000 machines on a
million-node network is inefficient because most receivers are not interested in the message.
Thus a way is needed to send messages to well-defined groups. Sending a message to such a
group is called multicasting.
To do multicasting, group management is required. Some way is needed to create and
destroy groups and for processes to join and leave groups. When a process joins a group, it
informs its host of this fact. It is important that routers know which of their hosts belong to
which group. Either hosts must inform their routers about changes in group membership, or
routers must query their hosts periodically. Routers tell their neighbors, so the information
propagates through the subnet.

To do multicast routing, each router computes a spanning tree covering all other
routers in the subnet. When a process sends a multicast packet to a group, the first router
examines its spanning tree and prunes it, removing all lines that do not lead to hosts that are
members of the group. Multicast packets are then forwarded along the pruned tree.

9. Anycast Routing

Another delivery model, called anycast, is sometimes also useful. In anycast, a packet
is delivered to the nearest member of a group. Schemes that find these paths are called
anycast routing.

Figure 4.17: (a) Anycast routes to group 1. (b) Topology seen by the routing protocol

Sometimes nodes provide a service, such as time of day or content distribution, for
which it is getting the right information that matters, not the node that is contacted; any
node will do. Suppose we want to anycast to the members of group 1. They will all be given
the address ‘‘1,’’ instead of different addresses.

Distance vector routing will distribute vectors as usual, and nodes will choose the
shortest path to destination 1. This will result in nodes sending to the nearest instance of
destination 1. The routes are shown in Fig. 4.17(a). This procedure works because the routing
protocol does not realize that there are multiple instances of destination 1. That is, it believes
that all the instances of node 1 are the same node, as in the topology shown in Fig. 4.17(b).

10. Routing for Mobile Hosts


Millions of people use computers while on the go, from truly mobile situations with
wireless devices in moving cars, to nomadic situations in which laptop computers are used in
a series of different locations. The model of the world that we will consider is one in which all
hosts are assumed to have a permanent home location that never changes.

When they are moved to new Internet locations, laptops acquire new network
addresses. There is no association between the old and new addresses; the network does not
know that they belonged to the same laptop. In this model, a laptop can be used to browse the
Web, but other hosts cannot send packets to it (for example, for an incoming call), without
building a higher layer location service, for example, signing into Skype again after moving.
Moreover, connections cannot be maintained while the host is moving; new connections must
be started up instead. Network-layer mobility is useful to fix these problems.

The basic idea used for mobile routing in the Internet and cellular networks is for the
mobile host to tell a host at the home location where it is now. This host, which acts on behalf
of the mobile host, is called the home agent. Once it knows where the mobile host is currently
located, it can forward packets so that they are delivered.

Figure 4.18: Routing for Mobile Hosts

Fig. 4.18 shows mobile routing in action. A sender in the northwest city of Seattle
wants to send a packet to a host normally located across the United States in New York. The
case of interest to us is when the mobile host is not at home. Instead, it is temporarily in San
Diego. The mobile host in San Diego must acquire a local network address before it can use
the network. This happens in the normal way that hosts obtain network addresses; we will
cover how this works for the Internet later in this chapter. The local address is called a care of
address. Once the mobile host has this address, it can tell its home agent where it is now. It
does this by sending a registration message to the home agent (step 1) with the care of
address. The message is shown with a dashed line in Fig. 4.18 to indicate that it is a control
message, not a data message.

Next, the sender sends a data packet to the mobile host using its permanent address
(step 2). This packet is routed by the network to the host’s home location because that is
where the home address belongs. In New York, the home agent intercepts this packet because
the mobile host is away from home. It then wraps or encapsulates the packet with a new
header and sends this bundle to the care of address (step 3). This mechanism is called
tunneling.

When the encapsulated packet arrives at the care of address, the mobile host unwraps
it and retrieves the packet from the sender. The mobile host then sends its reply packet
directly to the sender (step 4). The overall route is called triangle routing because it may be
circuitous if the remote location is far from the home location. As part of step 4, the sender
may learn the current care of address. Subsequent packets can be routed directly to the mobile
host by tunneling them to the care of address (step 5), bypassing the home location entirely. If
connectivity is lost for any reason as the mobile moves, the home address can always be used
to reach the mobile.
11. Routing in Ad Hoc Networks

Possibilities when the routers are mobile:

1. Military vehicles on a battlefield. – No infrastructure.

2. A fleet of ships at sea. – All moving all the time.

3. Emergency workers at an earthquake site. – The infrastructure is destroyed.

4. A gathering of people with notebook computers. – In an area lacking 802.11.

In all these cases, and others, each node consists of a router and a host, usually on the
same computer. Networks of nodes that just happen to be near each other are called ad hoc
networks or MANETs (Mobile Ad hoc NETworks). What makes ad hoc networks different
from wired networks is that all the usual rules about fixed topologies, fixed and known
neighbours, fixed relationship between IP address and location, and more are suddenly tossed
out the window.

Routers can come and go or appear in new places at the drop of a bit. With a wired
network, if a router has a valid path to some destination, that path continues to be valid
indefinitely (barring a failure somewhere in the system). With an ad hoc network, the
topology may be changing all the time. A variety of routing algorithms for ad hoc networks
have been proposed. One of the more interesting ones is the AODV (Ad hoc On-demand
Distance Vector) routing algorithm (Perkins and Royer, 1999).

It takes into account the limited bandwidth and low battery life found in this environment.
Another unusual characteristic is that it is an on-demand algorithm, that is, it determines a
route to some destination only when somebody wants to send a packet to that destination.

Route Discovery

Figure 4.19: (a) Range of A’s broadcast. (b) After B and D receive it. (c) After C, F, and
G receive it. (d) After E, H, and I receive it.

The shaded nodes are new recipients. The dashed lines show possible reverse routes.
The solid lines show the discovered route. To describe the algorithm, consider the newly
formed ad hoc network of Figure above. Suppose that a process at node A wants to send a
packet to node I. The AODV algorithm maintains a distance vector table at each node, keyed
by destination, giving information about that destination, including the neighbor to which to
send packets to reach the destination. First, A looks in its table and does not find an entry for
I. It now has to discover a route to I.

This property of discovering routes only when they are needed is what makes this
algorithm ‘‘on demand’’. To locate I, A constructs a ROUTE REQUEST packet and
broadcasts it using flooding. The transmission from A reaches B and D, as illustrated in Fig.
(b). Each node rebroadcasts the request, which continues to reach nodes F, G, and C in Fig. (c)
and nodes H, E, and I in Fig. (d). A sequence number set at the source is used to weed out
duplicates during the flood. For example, D discards the transmission from B in Fig. (c)
because it has already forwarded the request.

Eventually, the request reaches node I, which constructs a ROUTE REPLY packet.
This packet is unicast to the sender along the reverse of the path followed by the request. For
this to work, each intermediate node must remember the node that sent it the request.
The arrows in Fig. (b)–(d) show the reverse route information that is stored. Each
intermediate node also increments a hop count as it forwards the reply. This tells the nodes
how far they are from the destination.

The replies tell each intermediate node which neighbor to use to reach the destination: it
is the node that sent them the reply. Intermediate nodes G and D put the best route they hear
into their routing tables as they process the reply. When the reply reaches A, a new route,
ADGI, has been created.
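The discovery procedure above can be sketched as a flood that leaves reverse pointers behind. The topology below is an assumption loosely matching Figure 4.19 (the text does not list the exact links), and the code is a simplified model of route discovery, not the full AODV protocol:

```python
from collections import deque

# Hypothetical topology loosely matching Figure 4.19 (an assumption,
# not taken from the text): each node lists its radio neighbours.
topology = {
    'A': ['B', 'D'], 'B': ['A', 'C'], 'C': ['B'],
    'D': ['A', 'F', 'G'], 'F': ['D', 'E'], 'E': ['F'],
    'G': ['D', 'H', 'I'], 'H': ['G'], 'I': ['G'],
}

def discover_route(src, dst):
    """Flood a ROUTE REQUEST from src; each node remembers which
    neighbour it first heard the request from (the reverse route).
    The ROUTE REPLY then retraces those pointers from dst to src."""
    reverse = {src: None}           # node -> neighbour that delivered the request
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for nbr in topology[node]:
            if nbr not in reverse:  # sequence-number check: drop duplicates
                reverse[nbr] = node
                queue.append(nbr)
    # Unicast the reply back along the stored reverse pointers.
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = reverse[node]
    return path[::-1]               # forward route src -> dst

print(discover_route('A', 'I'))     # ['A', 'D', 'G', 'I']
```

With this assumed topology the reply retraces I → G → D → A, yielding the same route ADGI as in the text.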

Route Maintenance

Because nodes can move or be switched off, the topology can change spontaneously.
For example, in Fig., if G is switched off, A will not realize that the route it was using to I
(ADGI) is no longer valid. The algorithm needs to be able to deal with this. Periodically, each
node broadcasts a Hello message. Each of its neighbors is expected to respond to it. If no
response is forthcoming, the broadcaster knows that that neighbor has moved out of range or
failed and is no longer connected to it. Similarly, if it tries to send a packet to a neighbor that
does not respond, it learns that the neighbor is no longer available.
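The Hello-based maintenance described above amounts to tracking when each neighbour was last heard from and purging the silent ones. A minimal sketch (the timeout value and class names are illustrative assumptions):

```python
import time

HELLO_TIMEOUT = 3.0  # seconds without a Hello before a neighbour is dropped

class NeighbourTable:
    """Sketch of AODV route maintenance: record when each neighbour
    was last heard from, and purge those that have gone silent."""
    def __init__(self):
        self.last_heard = {}

    def hello_received(self, neighbour, now=None):
        self.last_heard[neighbour] = time.monotonic() if now is None else now

    def purge(self, now=None):
        now = time.monotonic() if now is None else now
        dead = [n for n, t in self.last_heard.items() if now - t > HELLO_TIMEOUT]
        for n in dead:
            # Routes that use n as next hop must also be invalidated here.
            del self.last_heard[n]
        return dead

table = NeighbourTable()
table.hello_received('G', now=0.0)
table.hello_received('B', now=2.5)
print(table.purge(now=4.0))  # ['G'] -- G has been silent too long
```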
4.3 Congestion Control Algorithms

 When too many packets are present in (a part of) the subnet, performance degrades. This
situation is called congestion.

 Figure below depicts the symptom. When the number of packets dumped into the subnet
by the hosts is within its carrying capacity, they are all delivered (except for a few that are
afflicted with transmission errors) and the number delivered is proportional to the number
sent.

 However, as traffic increases too far, the routers are no longer able to cope and they begin
losing packets. This tends to make matters worse. At very high traffic, performance collapses
completely and almost no packets are delivered.
Figure 4.20: Flow of Congestion

 When too much traffic is offered, congestion sets in and performance degrades
sharply.

 Congestion can be brought on by several factors. If, all of a sudden, streams of packets
begin arriving on three or four input lines and all need the same output line, a queue
will build up.

 If there is insufficient memory to hold all of them, packets will be lost.

 Slow processors can also cause congestion. If the routers' CPUs are slow at
performing the bookkeeping tasks required of them (queueing buffers, updating tables,
etc.), queues can build up, even though there is excess line capacity. Similarly, low-
bandwidth lines can also cause congestion.

1. Approaches to Congestion control

 Many problems in complex systems, such as computer networks, can be viewed from
a control theory point of view. This approach leads to dividing all solutions into two
groups: open loop and closed loop.

Figure 4.21: Time Scales of Approaches to Congestion Control

 Open loop solutions attempt to solve the problem by good design.

 Tools for doing open-loop control include deciding when to accept new traffic,
deciding when to discard packets and which ones, and making scheduling decisions at
various points in the network.

 Closed loop solutions are based on the concept of a feedback loop.

 This approach has three parts when applied to congestion control:

1. Monitor the system to detect when and where congestion occurs.

2. Pass this information to places where action can be taken.

3. Adjust system operation to correct the problem.


 A variety of metrics can be used to monitor the subnet for congestion. Chief among
these are the percentage of all packets discarded for lack of buffer space, the average
queue lengths, the number of packets that time out and are retransmitted, the average
packet delay, and the standard deviation of packet delay. In all cases, rising numbers
indicate growing congestion.

 The second step in the feedback loop is to transfer the information about the
congestion from the point where it is detected to the point where something can be
done about it.

 In all feedback schemes, the hope is that knowledge of congestion will cause the hosts
to take appropriate action to reduce the congestion.

 The presence of congestion means that the load is (temporarily) greater than the
resources (in part of the system) can handle. Two solutions come to mind: increase the
resources or decrease the load.

Congestion Prevention Policies

The methods to control congestion are first examined by looking at open loop systems.
These systems are designed to minimize congestion in the first place, rather than letting it
happen and reacting after the fact. They try to achieve their goal by using appropriate policies
at various levels. The figure below shows different data link, network, and transport policies
that can affect congestion.

Figure 4.22: Policies that affect congestion.

i. The data link layer Policies.

 The retransmission policy is concerned with how fast a sender times out and what it
transmits upon timeout. A jumpy sender that times out quickly and retransmits all outstanding
packets using go back n will put a heavier load on the system than will a leisurely sender that
uses selective repeat.

 Closely related to this is the buffering policy. If receivers routinely discard all out of order
packets, these packets will have to be transmitted again later, creating extra load. With
respect to congestion control, selective repeat is clearly better than go back n.

 Acknowledgement policy also affects congestion. If each packet is acknowledged
immediately, the acknowledgement packets generate extra traffic. However, if
acknowledgements are saved up to piggyback onto reverse traffic, extra timeouts and
retransmissions may result. A tight flow control scheme (e.g., a small window) reduces the
data rate and thus helps fight congestion.

ii. The network layer Policies.

 The choice between using virtual circuits and using datagrams affects congestion since
many congestion control algorithms work only with virtual-circuit subnets.

 Packet queueing and service policy relates to whether routers have one queue per input line,
one queue per output line, or both. It also relates to the order in which packets are processed
(e.g., round robin or priority based).

 Discard policy is the rule telling which packet is dropped when there is no space.

 A good routing algorithm can help avoid congestion by spreading the traffic over all the
lines, whereas a bad one can send too much traffic over already congested lines.

 Packet lifetime management deals with how long a packet may live before being discarded.
If it is too long, lost packets may clog up the works for a long time, but if it is too short,
packets may sometimes time out before reaching their destination, thus inducing
retransmissions.

iii. The transport layer Policies

 The same issues occur as in the data link layer, but in addition, determining the timeout
interval is harder because the transit time across the network is less predictable than the
transit time over a wire between two routers. If the timeout interval is too short, extra packets
will be sent unnecessarily. If it is too long, congestion will be reduced but the response time
will suffer whenever a packet is lost.
2. Traffic Aware Routing
These schemes adapt to changes in topology, but not to changes in load. The goal in
taking load into account when computing routes is to shift traffic away from hotspots that
will be the first places in the network to experience congestion.
The most direct way to do this is to set the link weight to be a function of the (fixed)
link bandwidth and propagation delay plus the (variable) measured load or average queueing
delay. Least-weight paths will then favor paths that are more lightly loaded, all else being
equal.
Consider the network of Fig. below, which is divided into two parts, East and West,
connected by two links, CF and EI. Suppose that most of the traffic between East and West is
using link CF, and, as a result, this link is heavily loaded with long delays. Including
queueing delay in the weight used for the shortest path calculation will make EI more
attractive. After the new routing tables have been installed, most of the East-West traffic will
now go over EI, loading this link. Consequently, in the next update, CF will appear to be the
shortest path. As a result, the routing tables may oscillate wildly, leading to erratic routing

and many potential problems.

Figure 4.23: Traffic Aware Routing

If load is ignored and only bandwidth and propagation delay are considered, this
problem does not occur. Attempts to include load but change weights within a narrow range
only slow down routing oscillations. Two techniques can contribute to a successful solution.
The first is multipath routing, in which there can be multiple paths from a source to a
destination. In our example this means that the traffic can be spread across both of the East to
West links. The second one is for the routing scheme to shift traffic across routes slowly
enough that it is able to converge.
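To illustrate how a queueing-delay term in the link weight shifts traffic between the two East-West links, here is a toy model using plain Dijkstra. The topology and weights are assumptions, a simplified stand-in for Figure 4.23 with one West node W, one East node X, and two bridging links:

```python
import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra; graph[u] is a list of (v, weight) pairs."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def build(queueing_on_CF):
    """Fixed propagation delays of 1 on every link (an assumption),
    plus a measured queueing delay added to the C-F bridge."""
    base = {('W', 'C'): 1, ('W', 'E'): 1, ('C', 'F'): 1, ('E', 'I'): 1,
            ('F', 'X'): 1, ('I', 'X'): 1}
    base[('C', 'F')] += queueing_on_CF
    g = {}
    for (u, v), w in base.items():
        g.setdefault(u, []).append((v, w))
        g.setdefault(v, []).append((u, w))
    return g

print(shortest_path(build(0), 'W', 'X'))  # ['W', 'C', 'F', 'X']
print(shortest_path(build(5), 'W', 'X'))  # ['W', 'E', 'I', 'X']
```

With no queueing delay the C-F bridge is chosen; once its measured delay is added to the weight, the shortest path flips to E-I, which is exactly the feedback that can make real routing tables oscillate.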

3. Admission Control

One technique that is widely used to keep congestion that has already started from
getting worse is admission control.
 Once congestion has been signaled, no more virtual circuits are set up until the problem has
gone away.

 An alternative approach is to allow new virtual circuits but carefully route all new virtual
circuits around problem areas. For example, consider the subnet of below Fig. (a), in which
two routers are congested, as indicated.

Figure 4.24: (a) A congested subnet. (b) A redrawn subnet that eliminates the congestion

A virtual circuit from A to B is also shown

Suppose that a host attached to router A wants to set up a connection to a host
attached to router B. Normally, this connection would pass through one of the congested
routers. To avoid this situation, the subnet can be redrawn as shown in Fig. (b), omitting the
congested routers and all of their lines. The dashed line shows a possible route for the virtual
circuit that avoids the congested routers.

4. Traffic Throttling
 Each router can easily monitor the utilization of its output lines and other resources. For
example, it can associate with each line a real variable, u, whose value, between 0.0 and 1.0,
reflects the recent utilization of that line. To maintain a good estimate of u, a sample of the
instantaneous line utilization, f (either 0 or 1), can be made periodically and u updated
according to

u_new = a u_old + (1 − a) f

where the constant a determines how fast the router forgets recent history.
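The update rule can be sketched directly; the warning threshold and the choice a = 0.5 below are illustrative assumptions:

```python
WARNING_THRESHOLD = 0.8

def update_utilization(u, f, a=0.5):
    """EWMA: u_new = a * u_old + (1 - a) * f, where f is an
    instantaneous 0/1 sample of the line and the constant a controls
    how fast the router forgets recent history."""
    return a * u + (1 - a) * f

u = 0.0
for sample in [1, 1, 1, 1, 0, 1]:   # mostly-busy line with one idle sample
    u = update_utilization(u, sample)
    state = 'WARNING' if u > WARNING_THRESHOLD else 'ok'
    print(f"u = {u:.3f}  {state}")
```

After a few busy samples the estimate crosses the threshold and the line enters the warning state; the single idle sample pulls it back below.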

Whenever u moves above the threshold, the output line enters a ''warning'' state. Each
newly-arriving packet is checked to see if its output line is in warning state. If it is, some
action is taken. The action taken can be one of several alternatives.

i. The Warning Bit

The old DECNET architecture signaled the warning state by setting a special bit in the
packet's header.

 When the packet arrived at its destination, the transport entity copied the bit into the
next acknowledgement sent back to the source. The source then cut back on traffic.

 As long as the router was in the warning state, it continued to set the warning bit,
which meant that the source continued to get acknowledgements with it set.

 The source monitored the fraction of acknowledgements with the bit set and adjusted
its transmission rate accordingly. As long as the warning bits continued to flow in, the source
continued to decrease its transmission rate. When they slowed to a trickle, it increased its
transmission rate. Note that since every router along the path could set the warning bit, traffic
increased only when no router was in trouble.

ii. Choke packets

 In this approach, the router sends a choke packet back to the source host, giving it the
destination found in the packet.

 The original packet is tagged (a header bit is turned on) so that it will not generate
any more choke packets farther along the path and is then forwarded in the usual way.

 When the source host gets the choke packet, it is required to reduce the traffic sent to
the specified destination by X percent. Since other packets aimed at the same destination are
probably already under way and will generate yet more choke packets, the host should ignore
choke packets referring to that destination for a fixed time interval. After that period has
expired, the host listens for more choke packets for another interval. If one arrives, the line is
still congested, so the host reduces the flow still more and begins ignoring choke packets
again. If no choke packets arrive during the listening period, the host may increase the flow
again.

 The feedback implicit in this protocol can help prevent congestion yet not throttle any
flow unless trouble occurs.

 Hosts can reduce traffic by adjusting their policy parameters. Increases are done in
smaller increments to prevent congestion from recurring quickly.

 Routers can maintain several thresholds. Depending on which threshold has been
crossed, the choke packet can contain a mild warning, a stern warning, or an ultimatum.

iii. Hop-by-Hop Backpressure
 At high speeds or over long distances, sending a choke packet to the source hosts does
not work well because the reaction is so slow.

Consider, for example, a host in San Francisco (router A in Fig. below) that is sending
traffic to a host in New York (router D in Fig. below) at 155 Mbps. If the New York host begins
to run out of buffers, it will take about 30 msec for a choke packet to get back to San Francisco
to tell it to slow down. The choke packet propagation is shown as the second, third, and
fourth steps in Fig. below (a). In those 30 msec, another 4.6 megabits will have been sent.
Even if the host in San Francisco completely shuts down immediately, the 4.6 megabits in
the pipe will continue to pour in and have to be dealt with. Only in the seventh diagram in
Fig. below (a) will the New York router notice a slower flow.

An alternative approach is to have the choke packet take effect at every hop it passes
through, as shown in the sequence of Fig. below (b). Here, as soon as the choke packet
reaches F, F is required to reduce the flow to D. Doing so will require F to devote more
buffers to the flow, since the source is still sending away at full blast, but it gives D
immediate relief, like a headache remedy in a television commercial. In the next step, the
choke packet reaches E, which tells E to reduce the flow to F. This action puts a greater
demand on E's buffers but gives F immediate relief. Finally, the choke packet reaches A and
the flow genuinely slows down.
Figure 4.25: (a) A choke packet that affects only the source. (b) A choke packet that
affects each hop it passes through.

The net effect of this hop-by-hop scheme is to provide quick relief at the point of
congestion at the price of using up more buffers upstream. In this way, congestion can be
nipped in the bud without losing any packets.

5. Load Shedding

 When none of the above methods make the congestion disappear, routers can bring out the
heavy artillery: load shedding.

 Load shedding is a fancy way of saying that when routers are being inundated by packets
that they cannot handle, they just throw them away.

 A router drowning in packets can just pick packets at random to drop, but usually it can do
better than that. Which packet to discard may depend on the applications running.

 To implement an intelligent discard policy, applications must mark their packets in priority
classes to indicate how important they are. If they do this, then when packets have to be
discarded, routers can first drop packets from the lowest class, then the next lowest class, and
so on.

Random Early Detection


 It is well known that dealing with congestion after it is first detected is more effective than
letting it gum up the works and then trying to deal with it. This observation leads to the idea
of discarding packets before all the buffer space is really exhausted. A popular algorithm for
doing this is called RED (Random Early Detection).

 In some transport protocols (including TCP), the response to lost packets is for the source
to slow down. The reasoning behind this logic is that TCP was designed for wired networks
and wired networks are very reliable, so lost packets are mostly due to buffer overruns rather
than transmission errors. This fact can be exploited to help reduce congestion.

 By having routers drop packets before the situation has become hopeless (hence the
‘‘early’’ in the name), the idea is that there is time for action to be taken before it is too late. To
determine when to start discarding, routers maintain a running average of their queue lengths.
When the average queue length on some line exceeds a threshold, the line is said to be
congested and action is taken.
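A minimal sketch of the RED decision follows; the thresholds and queue weight are illustrative assumptions, and real implementations also handle idle periods and count packets since the last drop:

```python
import random

MIN_TH, MAX_TH = 5, 15   # queue-length thresholds (assumed values)
MAX_P, WEIGHT = 0.1, 0.002

def red_should_drop(avg_queue, instantaneous_queue):
    """Update the running average queue length, then decide whether to
    drop: no early drops below MIN_TH, certain drop above MAX_TH, and a
    drop probability rising linearly in between."""
    avg = (1 - WEIGHT) * avg_queue + WEIGHT * instantaneous_queue
    if avg < MIN_TH:
        return avg, False
    if avg >= MAX_TH:
        return avg, True
    p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return avg, random.random() < p

avg = 0.0
avg, drop = red_should_drop(avg, 20)   # queue momentarily long
print(round(avg, 3), drop)             # the small WEIGHT keeps the average low,
                                       # so a transient burst causes no early drop
```

The small weight on the instantaneous sample is what lets RED ignore short bursts while still reacting to sustained congestion.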

4.4 Network Layer in the Internet

The following summarizes the top 10 principles of the network layer in the
Internet.

1. Make sure it works. Do not finalize the design or standard until multiple prototypes have
successfully communicated with each other. All too often, designers first write a 1000-page
standard, get it approved, then discover it is deeply flawed and does not work.

2. Keep it simple. When in doubt, use the simplest solution. If a feature is not absolutely
essential, leave it out, especially if the same effect can be achieved by combining other
features.

3. Make clear choices. If there are several ways of doing the same thing, choose one. Having
two or more ways to do the same thing is looking for trouble. Standards often have multiple
options or modes or parameters because several powerful parties insist that their way is best.

4. Exploit modularity. This principle leads directly to the idea of having protocol stacks,
each of whose layers is independent of all the other ones. In this way, if circumstances
require one module or layer to be changed, the other ones will not be affected.

5. Expect heterogeneity. Different types of hardware, transmission facilities, and
applications will occur on any large network. To handle them, the network design must be
simple, general, and flexible.

6. Avoid static options and parameters. If parameters are unavoidable (e.g., maximum
packet size), it is best to have the sender and receiver negotiate a value rather than defining
fixed choices.

7. Look for a good design; it need not be perfect. Often, the designers have a good design
but it cannot handle some weird special case. Rather than messing up the design, the
designers should go with the good design and put the burden of working around it on the
people with the strange requirements.

8. Be strict when sending and tolerant when receiving. In other words, send only packets
that rigorously comply with the standards, but expect incoming packets that may not be fully
conformant and try to deal with them.

9. Think about scalability. If the system is to handle millions of hosts and billions of users
effectively, no centralized databases of any kind are tolerable and load must be spread as
evenly as possible over the available resources.

10. Consider performance and cost. If a network has poor performance or outrageous costs,
nobody will use it.

In the network layer, the Internet can be viewed as a collection of networks or ASes
(Autonomous Systems) that are interconnected. There is no real structure, but several major
backbones exist. These are constructed from high-bandwidth lines and fast routers. The
biggest of these backbones, to which everyone else connects to reach the rest of the Internet,
are called Tier 1 networks. Attached to the backbones are ISPs (Internet Service
Providers) that provide Internet access to homes and businesses, data centers and colocation
facilities full of server machines, and regional (mid-level) networks. The data centers serve
much of the content that is sent over the Internet. Attached to the regional networks are more
ISPs, LANs at many universities and companies, and other edge networks. A sketch of this
quasi hierarchical organization is given in Fig. 4.26.
Figure 4.26: The Internet is an interconnected collection of many networks

The glue that holds the whole Internet together is the network layer protocol, IP
(Internet Protocol). Unlike older network layer protocols, IP was designed from the
beginning with internetworking in mind. A good way to think of the network layer is this: its
job is to provide a best-effort (i.e., not guaranteed) way to transport packets from source to
destination, without regard to whether these machines are on the same network or whether
there are other networks in between them.

Communication in the Internet works as follows. The transport layer takes data
streams and breaks them up so that they may be sent as IP packets. In theory, packets can be
up to 64 KB each, but in practice they are usually not more than 1500 bytes (so they fit in
one Ethernet frame). IP routers forward each packet through the Internet, along a path from
one router to the next, until the destination is reached. At the destination, the network layer
hands the data to the transport layer, which gives it to the receiving process. When all the
pieces finally get to the destination machine, they are reassembled by the network layer into
the original datagram. This datagram is then handed to the transport layer.

In the example of Fig. 4.26, a packet originating at a host on the home network has to
traverse four networks and a large number of IP routers before even getting to the company
network on which the destination host is located. This is not unusual in practice, and there are
many longer paths. There is also much redundant connectivity in the Internet, with
backbones and ISPs connecting to each other in multiple locations. This means that there are
many possible paths between two hosts. It is the job of the IP routing protocols to decide
which paths to use.
1. The IP Version 4 Protocol

An IPv4 datagram consists of a header part and a body or payload part. The header
has a 20-byte fixed part and a variable-length optional part. The header format is shown in
Fig. 4.27. The bits are transmitted from left to right and top to bottom, with the high-order bit
of the Version field going first. (This is a ‘‘big-endian’’ network byte order. On little-endian
machines, such as Intel x86 computers, a software conversion is required on both
transmission and reception.)

Figure 4.27: IP Header Format

Version: The Version field keeps track of which version of the protocol the datagram
belongs to. Version 4 dominates the Internet today. IPv6 is the next version of IP; its use will
eventually be forced as more and more people acquire a desktop PC, a laptop, and an IP
phone. IPv5 was an experimental real-time stream protocol that was never widely used.

IHL: Since the header length is not constant, the IHL field is provided to tell how long the
header is, in 32-bit words. The minimum value is 5, which applies when no options are present. The
maximum value of this 4-bit field is 15, which limits the header to 60 bytes, and thus the
Options field to 40 bytes.

Type of Service: This field, now renamed Differentiated services, is intended
to distinguish between different classes of service. Various combinations of reliability and
speed are possible. For digitized voice, fast delivery beats accurate delivery. For file transfer,
error-free transmission is more important than fast transmission. The Type of service field
provided 3 bits to signal priority and 3 bits to signal whether a host cared more about delay,
throughput, or reliability. However, no one really knew what to do with these bits at routers,
so they were left unused for many years.

Total length: The Total length includes everything in the datagram, both header and data.
The maximum length is 65,535 bytes. At present, this upper limit is tolerable, but with future
networks, larger datagrams may be needed.

Identification: The Identification field lets the destination host determine which packet a
newly arrived fragment belongs to. All the fragments of a packet contain the same
Identification value.

Unused bit: Next comes an unused bit, which is surprising, as available real estate in the IP
header is extremely scarce. One tongue-in-cheek proposal (RFC 3514) suggested using this
bit to detect malicious traffic, which would greatly simplify security, as packets with the
‘‘evil’’ bit set would be known to have been sent by attackers and could just be discarded.

DF and MF: DF stands for Don’t Fragment. It is an order to the routers not to fragment
the packet. Now it is used as part of the process to discover the path MTU, which is the
largest packet that can travel along a path without being fragmented. By marking the
datagram with the DF bit, the sender knows it will either arrive in one piece, or an error
message will be returned to the sender. MF stands for More Fragments. All fragments
except the last one have this bit set. It is needed to know when all fragments of a datagram
have arrived.

Fragment offset: The Fragment offset tells where in the current packet this fragment
belongs. All fragments except the last one in a datagram must carry a payload that is a
multiple of 8 bytes, the elementary fragment unit. Since 13 bits are provided, there is a
maximum of 8192 fragments per datagram, supporting a maximum packet length up to the
limit of the Total length field.
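The fragmentation arithmetic can be illustrated with a small sketch: each non-final fragment carries a payload rounded down to a multiple of 8 bytes, its offset is stored in 8-byte units, and MF is set on all fragments but the last:

```python
def fragment(payload_len, mtu, header_len=20):
    """Split an IP payload into fragments for a link with the given MTU.
    The Fragment offset field counts 8-byte units, so every fragment
    except the last carries a payload that is a multiple of 8 bytes."""
    max_data = (mtu - header_len) // 8 * 8   # round down to 8-byte units
    frags, offset = [], 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        mf = 1 if offset + size < payload_len else 0
        frags.append({'offset': offset // 8, 'len': size, 'MF': mf})
        offset += size
    return frags

for f in fragment(payload_len=3000, mtu=1500):
    print(f)
# {'offset': 0, 'len': 1480, 'MF': 1}
# {'offset': 185, 'len': 1480, 'MF': 1}
# {'offset': 370, 'len': 40, 'MF': 0}
```

For a 1500-byte Ethernet MTU, each fragment carries 1480 data bytes (1500 minus the 20-byte header), which is already a multiple of 8.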

TTL: The TTL (Time to live) field is a counter used to limit packet lifetimes. It was originally
supposed to count time in seconds, allowing a maximum lifetime of 255 sec. It must be
decremented on each hop and is supposed to be decremented multiple times when a packet is
queued for a long time in a router. When it hits zero, the packet is discarded and a warning
packet is sent back to the source host. This feature prevents packets from wandering around
forever, something that otherwise might happen if the routing tables ever become corrupted.

Protocol: The Protocol field tells the network layer which transport process to give the packet to. TCP is one
possibility, but so are UDP and some others. The numbering of protocols is global across the
entire Internet.

Header checksum: The Header checksum verifies the header only; for the purpose of
computing it, the checksum field itself is taken as zero. Such a checksum is useful for
detecting errors while the packet travels through the network. It must be recomputed at each
hop because at least one field always changes (the Time to live field), but tricks can be used
to speed up the computation.
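The checksum algorithm itself is standard (RFC 1071): sum the 16-bit words of the header in one's-complement arithmetic, with the checksum field taken as zero, and transmit the complement of the sum. A sketch, using a commonly cited sample header:

```python
def ip_header_checksum(header: bytes) -> int:
    """One's-complement sum of the header's 16-bit words (checksum
    field taken as zero); the transmitted value is the complement."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# A 20-byte sample header with the checksum field (bytes 10-11) zeroed:
hdr = bytes.fromhex('4500003c1c4640004006' + '0000' + 'ac100a63ac100a0c')
print(hex(ip_header_checksum(hdr)))   # 0xb1e6
```

Recomputing the checksum over a received header, this time including the transmitted checksum bytes, yields zero if the header is intact.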
Source address and Destination address: The Source address and Destination address
indicate the IP address of the source and destination network interfaces.

Options field: The Options field was designed to provide an escape to allow subsequent
versions of the protocol to include information not present in the original design, to permit
experimenters to try out new ideas, and to avoid allocating header bits to information that is
rarely needed.

Option: Description
Security: Specifies how secret the datagram is
Strict source routing: Gives the complete path to be followed
Loose source routing: Gives a list of routers not to be missed
Record route: Makes each router append its IP address
Timestamp: Makes each router append its address and timestamp

Figure 4.28: Some of the IP options.

Security option: The Security option tells how secret the information is. In theory, a military
router might use this field to specify not to route packets through certain countries the
military considers to be ‘‘bad guys.’’

Strict source routing: The Strict source routing option gives the complete path from source
to destination as a sequence of IP addresses. The datagram is required to follow that exact
route. It is most useful for system managers who need to send emergency packets when the
routing tables have been corrupted, or for making timing measurements.

Loose source routing: The Loose source routing option requires the packet to traverse the
list of routers specified, in the order specified, but it is allowed to pass through other routers
on the way.

Record route: The Record route option tells each router along the path to append its IP
address to the Options field.
Timestamp: The Timestamp option is like the Record route option, except that in addition to
recording its 32-bit IP address, each router also records a 32-bit timestamp.

2. IP Addresses

All the computers on the Internet communicate with each other over underground or
underwater cables or wirelessly. If a user wants to download a file from the Internet, load a
web page, or do anything else online, the user's computer must have an address so that other
computers can find and locate it in order to deliver the requested file or web page. In
technical terms, that address is called an IP Address or Internet Protocol Address.

Example: If someone wants to send a letter, the recipient must have a home address.
Similarly, a computer needs an address so that other computers on the Internet can
communicate with it without the confusion of delivering information to someone
else's computer. That is why each computer in this world has a unique IP Address. In
other words, an IP address is a unique address that is used to identify computers or nodes on
the Internet. This address is just a string of numbers written in a certain format, generally
expressed as a set of four numbers, for example 192.155.12.1. Each number in the set is in
the range 0 to 255, so a full IP address ranges from 0.0.0.0 to 255.255.255.255.

Working of IP addresses

IP addresses work much like a language: a set of protocol rules governs how
information is sent. Using these protocols, connected devices can easily send and receive
data or files.

There are several steps behind the scenes.

 The device requests Internet access from the Internet Service Provider, which then grants
the device access to the web.

 An IP Address is assigned to the device from the available range.

 The device's Internet activity goes through the service provider, which routes it back to
the device using its IP address.

 The IP address can change. For example, turning the router on or off can change the IP
Address.
 When the device leaves its home network, its network, and hence its IP address, changes.

Subnetting

Subnetting refers to the concept of dividing a single vast network into more than
one smaller logical sub-network, called subnets. A subnet is related to the IP Address in that
it borrows bits from the host part of the IP Address. Thus the IP Address has three parts:
• Network part. (Higher order bits)

• Subnet part.

• Host part. (Remaining bits)

The subnet is formed by taking the last bit from the network component of the IP
address and used to specify the number of subnets required. Subnetting allows having various
sub-networks within the big network without having a new network number through IPS.
Subnetting reduces network traffic and complexity. The purpose of introducing the concept
of Subnetting was to fulfill the shortage of IP Addresses. The Subnetting process helps in
dividing the class A, class B, and class C network numbers into smaller parts. A subnet can
further be broken down into smaller networks known as sub-subnets.
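The borrowing of host bits can be explored with Python's standard `ipaddress` module; the /24 network below is an arbitrary example, not part of the text:

```python
import ipaddress

# An example /24 network (arbitrary address, for illustration only).
net = ipaddress.ip_network("192.168.1.0/24")

# Borrow 2 bits from the host part: the /24 becomes four /26 subnets.
subnets = list(net.subnets(prefixlen_diff=2))
print(len(subnets))              # 4 subnets
print(subnets[0])                # 192.168.1.0/26
print(subnets[0].num_addresses)  # 64 addresses per subnet
```

Borrowing n bits always yields 2^n subnets, each with 1/2^n of the original host addresses.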

IP address format

• The 32-bit IP address is grouped eight bits at a time, with the groups separated by dots and
each group represented in decimal. This is known as dotted decimal notation, as shown in the figure.

• Each bit in an octet has a binary weight (128, 64, 32, 16, 8, 4, 2, 1).

• The minimum value of an octet is 0, and the maximum value is 255.

Figure 4.29: IP Address Format
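The dotted decimal encoding can be illustrated with a short Python sketch that packs and unpacks the four octets by hand, using the binary weights listed above:

```python
def dotted_to_int(addr: str) -> int:
    """Convert dotted decimal notation to the underlying 32-bit integer."""
    octets = [int(o) for o in addr.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    return (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]

def int_to_dotted(value: int) -> str:
    """Convert a 32-bit integer back to dotted decimal notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(dotted_to_int("192.155.12.1"))   # 3231386625
print(int_to_dotted(3231386625))       # 192.155.12.1
```

The two functions are inverses, so a round trip through either one returns the original value.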


IPv4 Address Classes

An IPv4 class is a division of the address space used in IPv4-based routing. Separate IP
classes are used for different types of networks. They can be explained as follows

CLASSES Range

Class A 1.0.0.0 to 127.255.255.255

Class B 128.0.0.0 to 191.255.255.255

Class C 192.0.0.0 to 223.255.255.255

Class D 224.0.0.0 to 239.255.255.255

Class E 240.0.0.0 to 255.255.255.255

A router has more than one IP address because it connects two or more different
networks, whereas a computer or host normally has a single, unique IP address. A router's
function is to inspect each incoming packet and determine whether it belongs to the local
network or to a remote network: if the packet is local, no routing is needed; if it is destined
for a remote network, the router routes it according to the routing table; otherwise the
packet is discarded.

Types of IP Address

IP Address is of two types:

1. IPv4: Internet Protocol version 4. An IPv4 address consists of four numbers separated by
dots, each number in the decimal range 0-255. Computers, however, work with binary
numbers, which use only 0 and 1, so the range 0-255 is written in binary as
00000000 - 11111111. Since each number can be represented by a group of 8 binary digits,
a whole IPv4 address can be represented by 32 binary digits. In IPv4, a unique sequence of
bits is assigned to each computer, so a total of 2^32, approximately 4,294,967,296 devices,
can be assigned addresses with IPv4.
IPv4 can be written as:

189.123.123.90
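The octet-to-binary correspondence can be checked in Python, using the example address from the text:

```python
addr = "189.123.123.90"

# Each octet fits in 8 bits, so the whole address is 4 x 8 = 32 bits.
binary = ".".join(f"{int(o):08b}" for o in addr.split("."))
print(binary)   # 10111101.01111011.01111011.01011010
print(2 ** 32)  # 4294967296 possible IPv4 addresses
```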

Classes of IPv4 Address: There are around 4.3 billion IPv4 addresses, and managing all
those addresses without any scheme is next to impossible. Finding a word in a language
dictionary takes less than five minutes, because the words in the dictionary are organized
in alphabetical order. Finding the same word in a dictionary that uses no sequence or order
to organize its words would take an eternity. If an unordered dictionary with one billion
words would be that disastrous, imagine the pain of finding one address among 4.3 billion
addresses. For easier management and assignment, IP addresses are organized in numeric
order and divided into the following 5 classes:

IP Class    Address Range (first octet)    Maximum number of networks

Class A     0-126                          126 (2^7 - 2)

Class B     128-191                        16,384 (2^14)

Class C     192-223                        2,097,152 (2^21)

Class D     224-239                        Reserved for multicasting

Class E     240-254                        Reserved for research and development
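The class boundaries in the table can be expressed as a small Python function. This is a sketch following the first-octet ranges above; the first octet 127, which falls between classes A and B, is the loopback range and is handled separately:

```python
def ipv4_class(addr: str) -> str:
    """Return the class of an IPv4 address based on its first octet."""
    first = int(addr.split(".")[0])
    if first == 127:
        return "loopback"  # 127.x.x.x is reserved for loopback
    if 0 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"   # reserved for multicasting
    return "E"       # reserved for research and development

print(ipv4_class("10.0.0.1"))      # A
print(ipv4_class("192.155.12.1"))  # C
```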

2. IPv6: There is, however, a problem with the IPv4 address: with IPv4, only about
4 billion devices can be connected uniquely, and there are far more devices in the world
that need to be connected to the internet. So the Internet is gradually moving to the IPv6
address, which is a 128-bit IP address. In human-friendly form, IPv6 is written as a group of
8 hexadecimal numbers separated by colons (:); in computer-friendly form it is 128 bits
of 0s and 1s. Since a unique sequence of binary digits is given to each computer,
smartphone, and other device connected to the internet, IPv6 can assign unique addresses
to a total of 2^128 devices, which is more than enough for upcoming generations.

IPv6 can be written as:

2011:0bd9:75c5:0000:0000:6b3e:0170:8394
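Python's standard `ipaddress` module can show both the compressed and the full form of the example address above (leading zeros are dropped and the longest run of zero groups collapses to `::`):

```python
import ipaddress

addr = ipaddress.ip_address("2011:0bd9:75c5:0000:0000:6b3e:0170:8394")
print(addr)           # compressed: 2011:bd9:75c5::6b3e:170:8394
print(addr.exploded)  # full form: 2011:0bd9:75c5:0000:0000:6b3e:0170:8394
print(addr.version)   # 6
```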

Classification of IP Address
An IP address is classified into the following types:

1. Public IP Address: This address is publicly reachable. It is assigned by the network
provider to the router, which in turn shares it with the devices behind it. Public IP
addresses are of two types:

Dynamic IP Address: When a smartphone or computer connects to the internet, the Internet
Service Provider provides an IP address from the range of available IP addresses. The
device then has an IP address and can connect to the internet to send and receive data.
The next time the same device connects to the internet, the provider may give it a
different IP address, again from the same available range. Since the IP address keeps
changing every time the device connects to the internet, it is called a dynamic IP address.

Static IP Address: A static address never changes; it serves as a permanent internet address.
Static addresses are typically used by DNS servers. A static IP address provides
information such as the continent, country, and city in which the device is located and
which Internet Service Provider provides its internet connection. Once the ISP is known,
the location of the device connected to the internet can be traced. Static IP addresses
provide less security than dynamic IP addresses because they are easier to track.

2. Private IP Address: This is an internal address of a device. Private addresses are not
routed on the internet, and no exchange of data can take place directly between a private
address and the internet.
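Whether an address falls in a private range or is publicly routable can be checked with Python's standard `ipaddress` module (the addresses below are illustrative):

```python
import ipaddress

# Private ranges include 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16.
print(ipaddress.ip_address("192.168.1.10").is_private)  # True
print(ipaddress.ip_address("8.8.8.8").is_private)       # False
print(ipaddress.ip_address("8.8.8.8").is_global)        # True
```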

3. Shared IP addresses: Many websites whose traffic is not huge and is easily manageable
use shared IP addresses, renting the same address out to other similar websites to make
hosting cost-friendly. Several companies and email-sending servers use the same IP
address (within a single mail server) to cut down costs, saving money for the time the
server would otherwise sit idle.

4. Dedicated IP addresses: A dedicated IP address is an address used by a single company
or an individual. It gives them certain benefits, such as the use of a private Secure Sockets
Layer (SSL) certificate, which is not possible with a shared IP address. It allows accessing
the website, or logging in via File Transfer Protocol (FTP), by IP address instead of its
domain name. It increases the performance of the website when the traffic is high, and it
protects against a shared IP address that is black-listed due to spam.

3. Internet Control Protocols


In addition to IP, which is used for data transfer, the Internet has several companion
control protocols that are used in the network layer. They include ICMP, ARP, and DHCP.
ICMP and DHCP have similar versions for IPv6; the equivalent of ARP is called NDP
(Neighbor Discovery Protocol) for IPv6.

i. ICMP—The Internet Control Message Protocol

ICMP stands for Internet Control Message Protocol. The operation of the Internet is
monitored closely by the routers. When something unexpected occurs, the event is reported
by ICMP, which is also used to test the Internet. About a dozen types of ICMP messages
are defined; the most important ones are listed in Fig. 4.30. Each ICMP message type is
encapsulated in an IP packet.

Figure 4.30: The principal ICMP message types.

The DESTINATION UNREACHABLE message is used when the subnet or a
router cannot locate the destination, or when a packet with the DF (Don't Fragment) bit
set cannot be delivered because a ''small-packet'' network stands in the way.

The TIME EXCEEDED message is sent when a packet is dropped because its
counter has reached zero. This event is a symptom that packets are looping, that there is
enormous congestion, or that the timer values are being set too low.

The PARAMETER PROBLEM message indicates that an illegal value has been
detected in a header field. This problem indicates a bug in the sending host's IP software,
or possibly in the software of a router along the path.

The SOURCE QUENCH message was formerly used to throttle hosts that were
sending too many packets. When a host received this message, it was expected to slow down.

The REDIRECT message is used when a router notices that a packet seems to be
routed wrong. It is used by the router to tell the sending host about the probable error.

The ECHO and ECHO REPLY messages are used to see if a given destination is
reachable and alive. Upon receiving the ECHO message, the destination is expected to send
an ECHO REPLY message back.

The TIMESTAMP REQUEST and TIMESTAMP REPLY messages are similar,


except that the arrival time of the message and the departure time of the reply are recorded in
the reply. This facility is used to measure network performance.
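As a sketch of how such messages are encoded, the ECHO request used by ping can be built by hand. The checksum routine below follows the standard Internet checksum algorithm (RFC 1071); actually transmitting the packet would require a raw socket with administrator privileges, which is omitted here:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)  # fold the carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """ICMP type 8 (ECHO), code 0, with the checksum field filled in."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum = 0 first
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=1, seq=1)
print(pkt[0])                  # 8 = ECHO request type
print(internet_checksum(pkt))  # 0: a valid packet checksums to zero
```

A receiver validates the message by running the same checksum over the whole packet; a result of zero means the packet arrived intact.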

ii. Address Resolution Protocol

The Address Resolution Protocol (ARP) is a protocol used to map an
ever-changing IP address to the fixed physical address of a machine, such as a MAC
(media access control) address, within a LAN.

This mapping process is significant because IP and MAC addresses differ in length
and format, and the conversion between them lets systems identify one another. At present,
the most frequently used IP version is IPv4. An IP address is 32 bits long, whereas a MAC
address is 48 bits long, so ARP maps a 32-bit address to the corresponding 48-bit
address. The address resolution protocol diagram is shown below.

Figure 4.31: Address Resolution Protocol

When the source at the network layer wants to communicate with a destination, the
source must first discover the physical (MAC) address of the destination. To do so, the
source checks its ARP table (or ARP cache) for the destination's MAC address. If the
destination's address is present in the table, the source uses that MAC address for
communication.
Figure 4.32: Address Resolution Protocol

Address Resolution Protocol Working

If the destination's MAC address is not in the ARP table or cache, the source
generates an ARP request message. The ARP table maintains the mapping between each
MAC address and its corresponding IP address; entries can also be added to the table
manually by the user. The request message contains the IP and MAC addresses of the
source and the IP address of the destination. The destination's MAC address field is left
empty, since that is the information being requested.

The source computer broadcasts the address resolution protocol request message onto
the local network, so every device on the LAN receives it. Each device then compares its
own IP address against the destination IP address in the request.

Figure 4.33: Working of Address Resolution Protocol


If a device's IP address matches the destination IP address in the request, that device
sends an ARP reply packet containing its MAC address; devices whose addresses do not
match simply drop the packet. The replying device also updates its own table to store the
source's MAC address, since it will need that address to communicate with the source.

The roles are now reversed: the destination device transmits the ARP reply as a
unicast message back to the source rather than a broadcast. Once the source device receives
the reply, it knows the destination device's MAC address, because the reply packet carries
that address along with the others. The source updates its ARP cache with the destination's
MAC address, and the sender can then communicate with the destination directly.
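The lookup-then-learn behaviour described above can be sketched as a minimal ARP cache in Python (the class name and the addresses below are illustrative, not part of any real protocol stack):

```python
class ArpCache:
    """Minimal IP-to-MAC mapping with ARP's learn-on-reply behaviour."""

    def __init__(self):
        self.table = {}  # IP address -> MAC address

    def lookup(self, ip: str):
        """Return the cached MAC, or None (which would trigger a broadcast request)."""
        return self.table.get(ip)

    def learn(self, ip: str, mac: str):
        """Record a mapping, e.g. from a received ARP reply."""
        self.table[ip] = mac

cache = ArpCache()
print(cache.lookup("192.168.1.7"))               # None -> would broadcast a request
cache.learn("192.168.1.7", "aa:bb:cc:dd:ee:ff")  # unicast reply received
print(cache.lookup("192.168.1.7"))               # aa:bb:cc:dd:ee:ff
```

Real implementations also expire entries after a timeout, which this sketch omits.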

Address Resolution Protocol Types

There are four types of ARP which include the following.


1. Proxy ARP
2. Gratuitous ARP
3. Reverse ARP (RARP)
4. Inverse ARP

Figure 4.34: Address Resolution Protocol Types

1. Proxy ARP

Proxy ARP is a technique in which one system replies to ARP requests on behalf of a
different system. When a request arrives for a host outside the local network, the router
functions as a gateway, answering the request and forwarding the packets out of the
network toward their destination.

2. Reverse ARP (RARP)

Reverse ARP is a protocol used by a client machine on a LAN to obtain its IPv4
address from a table kept in the gateway router. The network administrator prepares this
table in the gateway router; it maps each MAC address to its corresponding IP address.
3. Gratuitous ARP

A gratuitous ARP is an ARP request that a host broadcasts for its own IP address.
It is mainly used when an end system that already has an IP address wants to announce its
MAC address on the local area network and to confirm that the IP address is not in use by
any other system. This protocol is mainly used to update the ARP tables of other devices,
and it verifies whether the host is using a unique IP address or a duplicate one.

4. Inverse ARP

Inverse ARP, the inverse of ARP, is mainly used to find out a system's IP address
on the local area network from its MAC address. It is most frequently used in Frame Relay
and ATM networks, where layer-2 addresses are obtained and the corresponding layer-3
addresses must be discovered.

Advantages

1. By using ARP, a MAC address can be found easily if the same system's IP address
is known.

2. End nodes need not be configured with MAC addresses in advance; an address can be
found whenever it is required.

3. The main goal of this protocol is to allow every host on a network to build up a
mapping between the two kinds of addresses, IP and physical.

4. The set of mappings stored in a host is known as the ARP cache or ARP table.

Disadvantages

1. ARP attacks may occur, such as ARP spoofing and denial of service.

2. ARP spoofing is a method that allows an attacker to attack an Ethernet
network. It may lead to sniffing of data frames on a switched LAN, or the attacker may
halt all traffic altogether, which is known as an ARP denial of service.

iii. Dynamic Host Configuration Protocol

Dynamic Host Configuration Protocol (DHCP) is a network management protocol
used to dynamically assign an IP address to any device, or node, on a network so it can
communicate using IP (Internet Protocol). DHCP automates and centrally manages these
configurations, so there is no need to manually assign IP addresses to new devices and no
user configuration is required to connect to a DHCP-based network.

DHCP can be implemented on local networks as well as large enterprise networks.
It is the default protocol used by most routers and networking equipment, and it is
specified in RFC (Request for Comments) 2131.

DHCP does the following:

• DHCP manages the provisioning of all the nodes or devices added to or dropped from the
network.

• DHCP maintains a unique IP address for each host using a DHCP server.

• Whenever a client/node/device configured to work with DHCP connects to a network,
it sends a request to the DHCP server. The server acknowledges by providing an IP
address to the client/node/device.

• DHCP is also used to configure the proper subnet mask, default gateway, and DNS server
information on the node or device.

• Versions of DHCP are available for use with both IPv4 (Internet Protocol version 4)
and IPv6 (Internet Protocol version 6).

Working Principles of DHCP

DHCP runs at the application layer of the TCP/IP protocol stack to dynamically
assign IP addresses to DHCP clients/nodes and to allocate TCP/IP configuration
information to them. This information includes the subnet mask, default gateway, IP
addresses, and Domain Name System (DNS) server addresses.

DHCP is a client-server protocol in which servers manage a pool of unique IP
addresses, as well as information about client configuration parameters, and assign
addresses out of those address pools.

The DHCP lease process works as follows:

• First of all, a client (network device) must be connected to the network.

• The DHCP client requests an IP address, typically by broadcasting a query for this
information.

• The DHCP server responds to the client's request by providing an IP address and other
configuration information. This configuration information includes a time period, called a
lease, for which the allocation is valid.

• When refreshing an assignment, a DHCP client requests the same parameters, but the
DHCP server may assign a new IP address, based on the policies set by the
administrator.
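The allocation step of the lease process can be sketched as a toy server-side address pool (the class, the network, the lease time, and the MAC addresses below are illustrative; a real server also tracks lease expiry and honours DECLINE/RELEASE messages):

```python
import ipaddress

class DhcpPool:
    """Toy DHCP address pool: hands out addresses sequentially and tracks leases."""

    def __init__(self, network: str, lease_seconds: int = 86400):
        # All usable host addresses of the subnet, lowest first.
        self.free = [str(h) for h in ipaddress.ip_network(network).hosts()]
        self.leases = {}  # client MAC -> assigned IP
        self.lease_seconds = lease_seconds

    def request(self, mac: str) -> str:
        """Assign the lowest free address, or renew the client's existing lease."""
        if mac in self.leases:
            return self.leases[mac]  # renewal: same address in this toy model
        ip = self.free.pop(0)        # lowest-to-highest, as the text describes
        self.leases[mac] = ip
        return ip

pool = DhcpPool("192.168.1.0/29")
print(pool.request("aa:bb:cc:00:00:01"))  # 192.168.1.1
print(pool.request("aa:bb:cc:00:00:02"))  # 192.168.1.2
print(pool.request("aa:bb:cc:00:00:01"))  # 192.168.1.1 (renewal)
```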

Components of DHCP

When working with DHCP, it is important to understand all of the components.


Following are the list of components:

DHCP server: a networked device running the DHCP service that holds IP
addresses and related configuration information. This is typically a server or a router,
but it could be anything that acts as a host, such as an SD-WAN appliance.

DHCP client: the endpoint that receives configuration information from a DHCP
server. This can be any device, such as a computer, laptop, or IoT endpoint, that requires
connectivity to the network. Most devices are configured to receive DHCP information by
default.

IP address pool: IP address pool is the range of addresses that are available to DHCP
clients. IP addresses are typically handed out sequentially from lowest to the highest.

Subnet: Subnet is the partitioned segments of the IP networks. Subnet is used to keep
networks manageable.

Lease: Lease is the length of time for which a DHCP client holds the IP address
information. When a lease expires, the client has to renew it.

DHCP relay: A host or router that listens for client messages being broadcast on that
network and then forwards them to a configured server. The server then sends responses back
to the relay agent that passes them along to the client. DHCP relay can be used to centralize
DHCP servers instead of having a server on each subnet.
