Computer Networks_Unit IV
Let us assume for this example that the message is four times longer than the maximum
packet size, so the network layer has to break it into four packets, 1, 2, 3, and 4, and send each of
them in turn to router A. Every router has an internal table telling it where to send packets for
each of the possible destinations. Each table entry is a pair (destination, outgoing line).
Only directly connected lines can be used.
A’s initial routing table is shown in the figure under the label ‘‘initially’’. At A, packets 1,
2, and 3 are stored briefly, having arrived on the incoming link. Then each packet is forwarded
according to A’s table, onto the outgoing link to C within a new frame. Packet 1 is then forwarded
to E and then to F.
However, something different happens to packet 4. When it gets to A, it is sent to router B,
even though it is also destined for F. For some reason (a traffic jam along the ACE path), A decided
to send packet 4 via a different route than that of the first three packets. Router A updated its
routing table, as shown under the label ‘‘later.’’ The algorithm that manages the tables and makes
the routing decisions is called the routing algorithm.
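The per-destination forwarding described above can be sketched as a simple table lookup. The router names and table contents below are illustrative, following the A-to-F example in the text:

```python
# Datagram forwarding: each router keeps a table mapping destination -> outgoing line,
# and every packet is looked up independently. The "initially" and "later" tables
# below are hypothetical entries for router A, mirroring the example in the text.
initially = {"B": "B", "C": "C", "D": "B", "E": "C", "F": "C"}
later     = {"B": "B", "C": "C", "D": "B", "E": "B", "F": "B"}  # A reroutes via B

def forward(table, destination):
    """Return the outgoing line for a packet, looked up per packet."""
    return table[destination]

# Packets 1-3 leave A on the line to C; packet 4, sent after the update, leaves on B.
print(forward(initially, "F"))  # C
print(forward(later, "F"))      # B
```

Because each packet is looked up independently, packets of the same message can follow different routes, exactly as packet 4 does.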
4. Implementation of connection-oriented service
If connection-oriented service is used, a path from the source router all the way to the
destination router must be established before any data packets can be sent. This connection is
called a VC (virtual circuit), and the network is called a virtual-circuit network.
When a connection is established, a route from the source machine to the destination
machine is chosen as part of the connection setup and stored in tables inside the routers. That
route is used for all traffic flowing over the connection, exactly the same way that the telephone
system works. When the connection is released, the virtual circuit is also terminated. With
connection-oriented service, each packet carries an identifier telling which virtual circuit it
belongs to.
As an example, consider the situation shown in Figure. Here, host H1 has established
connection 1 with host H2. This connection is remembered as the first entry in each of the
routing tables. The first line of A’s table says that if a packet bearing connection identifier
1 comes in from H1, it is to be sent to router C and given connection identifier 1. Similarly, the
first entry at C routes the packet to E, also with connection identifier 1.
Now let us consider what happens if H3 also wants to establish a connection to H2. It
chooses connection identifier 1 (because it is initiating the connection and this is its only
connection) and tells the network to establish the virtual circuit.
This leads to the second row in the tables. Note that there is a conflict here because
although A can easily distinguish connection 1 packets from H1 from connection 1 packets from
H3, C cannot do this. For this reason, A assigns a different connection identifier to the outgoing
traffic for the second connection. Avoiding conflicts of this kind is why routers need the ability
to replace connection identifiers in outgoing packets. In some contexts, this process is called
label switching. An example of a connection-oriented network service is MPLS (Multi-
Protocol Label Switching).
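The label-replacement idea can be sketched as a per-router table keyed by the incoming line and incoming connection identifier. The entries below are hypothetical, mirroring the H1/H3 example:

```python
# Virtual-circuit label switching: each router maps (incoming line, incoming
# connection id) to (outgoing line, outgoing connection id). A relabels H3's
# connection 1 as 2 so that C can tell the two flows apart.
table_A = {
    ("H1", 1): ("C", 1),  # H1's connection, forwarded to C unchanged
    ("H3", 1): ("C", 2),  # H3's connection, relabeled to avoid the conflict
}
table_C = {
    ("A", 1): ("E", 1),
    ("A", 2): ("E", 2),
}

def switch(table, in_line, in_label):
    """Look up the outgoing line and replacement label for an arriving packet."""
    out_line, out_label = table[(in_line, in_label)]
    return out_line, out_label

# H3's packet arrives at A with label 1 and leaves for C with label 2.
print(switch(table_A, "H3", 1))  # ('C', 2)
```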
One can make a general statement about optimal routes without regard to network
topology or traffic. This statement is known as the optimality principle. It states that if router J is
on the optimal path from router I to router K, then the optimal path from J to K also falls along
the same route.
As a direct consequence of the optimality principle, we can see that the set of optimal
routes from all sources to a given destination form a tree rooted at the destination. Such a tree is
called a sink tree. The goal of all routing algorithms is to discover and use the sink trees for all
routers.
Figure 4.4: (a) A network. (b) A sink tree for router B.
The idea is to build a graph of the subnet, with each node of the graph representing a
router and each arc of the graph representing a communication line or link. To choose a route
between a given pair of routers, the algorithm just finds the shortest path between them on the
graph.
1. Start with the local node (router) as the root of the tree. Assign a cost of 0 to this
node and make it the first permanent node.
2. Examine each neighbor of the node that was the last permanent node.
3. Assign a cumulative cost to each neighbor and make it tentative.
4. Among the list of tentative nodes:
a. Find the node with the smallest cumulative cost and make it permanent.
b. If a node can be reached from more than one route, select the route
with the shortest cumulative cost.
One such measure is to have a hop counter contained in the header of each packet, which
is decremented at each hop, with the packet being discarded when the counter reaches zero.
Ideally, the hop counter should be initialized to the length of the path from source to
destination.
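The hop-counter mechanism can be sketched in a few lines. The initial budget below is an assumed path length, since the true length is not always known:

```python
# Damping flooding with a hop counter: the counter in the packet header is
# decremented at each hop, and the packet is discarded when it reaches zero.
# Ideally it starts at the true source-to-destination path length; a safe
# fallback is the network diameter.

def relay(hop_counter):
    """Decrement the counter; return the new value, or None if the packet dies."""
    hop_counter -= 1
    return hop_counter if hop_counter > 0 else None

hops = 3  # assumed path length from source to destination
for _ in range(5):
    if hops is None:
        break
    hops = relay(hops)
print(hops)  # None: the packet was discarded once its budget ran out
```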
Flooding is not practical in most applications.
Intra- and Interdomain Routing
An autonomous system (AS) is a group of networks and routers under the authority of a single
administration. Routing inside an autonomous system is referred to as intradomain routing.
(DISTANCE VECTOR, LINK STATE)
iii) Split Horizon and Poison Reverse: Using the split horizon strategy has one drawback.
Normally, the distance vector protocol uses a timer, and if there is no news about a route, the
node deletes the route from its table. When node B in the previous scenario eliminates the
route to X from its advertisement to A, node A cannot tell whether this is due to the split
horizon strategy (the source of the information was A) or because B has not received any news
about X recently. The split horizon strategy can be combined with the poison reverse
strategy. Node B can still advertise the value for X, but if the source of information is A, it
can replace the distance with infinity as a warning: "Do not use this value; what I know about
this route comes from it."
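The poison reverse rule can be sketched as a filter applied when a node builds the vector it advertises to a neighbor. The table contents below are illustrative:

```python
# Split horizon with poison reverse: when B advertises its vector back to A,
# any route B learned *from* A is reported with cost infinity instead of being
# omitted, so A can tell "don't use me" apart from "route timed out".
INF = float("inf")

# B's table: destination -> (cost, next hop). Route to X was learned via A.
table_B = {"X": (2, "A"), "Y": (1, "C")}

def advertise(table, to_neighbor):
    """Build the distance vector B sends to one neighbor, poisoning as needed."""
    vector = {}
    for dest, (cost, next_hop) in table.items():
        vector[dest] = INF if next_hop == to_neighbor else cost
    return vector

print(advertise(table_B, "A"))  # {'X': inf, 'Y': 1}
```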
A link state packet can carry a large amount of information. For the moment, however,
we assume that it carries a minimum amount of data: the node identity, the list of links, a
sequence number, and age.
The first two, node identity and the list of links, are needed to make the topology. The
third, sequence number, facilitates flooding and distinguishes new LSPs from old ones. The
fourth, age, prevents old LSPs from remaining in the domain for a long time.
2. On a periodic basis: The period in this case is much longer compared to distance vector.
The timer set for periodic dissemination is normally in the range of 60 minutes to 2 hours, based on
the implementation. A longer period ensures that flooding does not create too much traffic on
the network.
After a node has prepared an LSP, it must be disseminated to all other nodes, not only
to its neighbors.
1. The creating node sends a copy of the LSP out of each interface
2. A node that receives an LSP compares it with the copy it may already have. If the newly
arrived LSP is older than the one it has (found by checking the sequence number), it discards
the LSP. If it is newer, the node does the following:
a. It discards the old LSP and keeps the new one.
b. It sends a copy of it out of each interface except the one from which the packet arrived.
This guarantees that flooding stops somewhere in the domain (where a node has only one
interface).
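The sequence-number comparison that stops the flood can be sketched as follows; node and interface names are illustrative:

```python
# LSP flooding with sequence-number comparison: a newer LSP replaces the stored
# copy and is re-sent on every interface except the arrival one; an older or
# duplicate LSP is discarded, which is what makes the flood terminate.
stored = {}  # node identity -> highest sequence number seen so far

def receive_lsp(node_id, seq, arrival_iface, interfaces):
    """Return the interfaces on which to forward the LSP (empty if discarded)."""
    if seq <= stored.get(node_id, -1):
        return []                      # older or duplicate: discard
    stored[node_id] = seq              # keep the new copy
    return [i for i in interfaces if i != arrival_iface]

print(receive_lsp("A", 1, "if0", ["if0", "if1", "if2"]))  # ['if1', 'if2']
print(receive_lsp("A", 1, "if1", ["if0", "if1"]))         # []: duplicate
```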
Dijkstra Algorithm A shortest path tree is a tree in which the path between the root
and every other node is the shortest. The Dijkstra algorithm creates a shortest path tree from a
graph. The algorithm divides the nodes into two sets: tentative and permanent. It finds the
neighbors of a current node, makes them tentative, examines them, and if they pass the
criteria, makes them permanent.
Figure 4.12: Flow diagram of Shortest path diagram
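The tentative/permanent procedure above can be sketched with a priority queue; the example graph is hypothetical:

```python
import heapq

# Dijkstra's shortest-path tree, following the tentative/permanent description:
# the cheapest tentative node is made permanent, then its neighbors' cumulative
# costs are added to the tentative list.
def dijkstra(graph, root):
    permanent = {}                 # node -> (cumulative cost, previous node)
    tentative = [(0, root, None)]  # min-heap of (cumulative cost, node, prev)
    while tentative:
        cost, node, prev = heapq.heappop(tentative)
        if node in permanent:
            continue               # a cheaper route already made it permanent
        permanent[node] = (cost, prev)
        for neighbor, weight in graph[node].items():
            if neighbor not in permanent:
                heapq.heappush(tentative, (cost + weight, neighbor, node))
    return permanent

graph = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
print(dijkstra(graph, "A"))  # {'A': (0, None), 'B': (2, 'A'), 'C': (3, 'B')}
```

Note how the direct A-to-C link of cost 5 loses to the cumulative cost 3 via B, exactly the rule in step 4b.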
In some applications, hosts need to send messages to many or all other hosts, for
example, weather reports, stock market updates, etc. Sending a packet to all destinations
simultaneously is called broadcasting.
1) One method is for the source to send a distinct packet to each destination. This method
wastes bandwidth and also requires the source to have a complete list of all destinations.
2) The second is to use flooding. This generates too many packets and consumes
too much bandwidth.
3) Another method is multidestination routing. In this, each packet contains either a list of
destinations or a bitmap indicating the desired destinations. When a packet arrives at a router,
the router checks all the destinations to determine the set of output lines that will be needed.
The router generates a new copy of the packet for each output line to be used and includes in
each packet only those destinations that are to use the line. This routing is like separately
addressed packets except that several packets must follow the same route.
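The per-line splitting step can be sketched as follows; the line names and routing table are illustrative:

```python
# Multidestination routing sketch: a router splits an arriving packet into one
# copy per output line, each copy carrying only the destinations reached via
# that line. The routing table below is a hypothetical example.
routing_table = {"F": "line1", "G": "line1", "H": "line2"}  # dest -> output line

def split(destinations):
    """Group the packet's destination list by the output line each one needs."""
    copies = {}
    for dest in destinations:
        line = routing_table[dest]
        copies.setdefault(line, []).append(dest)
    return copies

print(split(["F", "G", "H"]))  # {'line1': ['F', 'G'], 'line2': ['H']}
```

After enough hops, each copy carries only destinations reachable on its own line, so downstream routers repeat the same split on ever-smaller lists.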
4) Another algorithm uses the spanning tree. A spanning tree is a subset of the network
that includes all the routers but contains no loops. If each router knows which of its lines
belong to the spanning tree, it can copy an incoming broadcast packet onto all the spanning tree
lines except the one it arrived on. This method makes excellent use of bandwidth and
generates the minimum number of packets necessary to do the job. The only disadvantage is that each
router must have knowledge of some spanning tree.
5) One more algorithm is an attempt to approximate the behavior of the previous one, even
when the routers do not know anything at all about spanning trees. The idea is remarkably
simple once it has been pointed out. When a broadcast packet arrives at a router, the router
checks to see if the packet arrived on the line that is normally used for sending packets to the
source of the broadcast. If so, there is an excellent chance that the broadcast packet itself
followed the best route from the router and is therefore the first copy to arrive at the router.
This being the case, the router forwards copies of it onto all lines except the one it arrived on.
If, however, the broadcast packet arrived on a line other than the preferred one for reaching
the source, the packet is discarded as a likely duplicate.
Figure 4.15: Broadcast Routing
Part (a) shows a network, part (b) shows a sink tree for router I of that network, and
part (c) shows how the reverse path algorithm works. On the first hop, I sends packets to F,
H, J, and N, as indicated by the second row of the tree. Each of these packets arrives on the
preferred path to I (assuming that the preferred path falls along the sink tree) and is so
indicated by a circle around the letter. On the second hop, eight packets are generated, two by
each of the routers that received a packet on the first hop. As it turns out, all eight of these
arrive at previously unvisited routers, and five of these arrive along the preferred line. Of the
six packets generated on the third hop, only three arrive on the preferred path (at C, E, and
K); the others are duplicates. After five hops and 24 packets, the broadcasting terminates,
compared with four hops and 14 packets had the sink tree been followed exactly.
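The reverse path forwarding check described above can be sketched as a single decision per arriving copy. The router, line names, and routing table below are hypothetical:

```python
# Reverse path forwarding: a broadcast packet is flooded onward only if it
# arrived on the line this router would itself use to send toward the broadcast
# source; otherwise it is dropped as a likely duplicate.
def rpf_forward(routing_table, source, arrival_line, all_lines):
    preferred = routing_table[source]   # line normally used to reach the source
    if arrival_line != preferred:
        return []                       # likely duplicate: discard
    return [l for l in all_lines if l != arrival_line]

# Hypothetical router with three lines; line "1" is its route toward source I.
table = {"I": "1"}
print(rpf_forward(table, "I", "1", ["1", "2", "3"]))  # ['2', '3']
print(rpf_forward(table, "I", "2", ["1", "2", "3"]))  # []
```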
8. Multicast Routing
For some applications, it is necessary for one process to send a message to all other
members of a group. If the group is small, it can just send each other member a point-to-
point message. If the group is large, this strategy is expensive. Sometimes broadcasting is
used, but using broadcasting to inform 1000 machines on a
million-node network is inefficient because most receivers are not interested in the message.
Thus, we need a way to send messages to well-defined groups. Sending a message to such a group is
called multicasting.
To do multicasting, group management is required. Some way is needed to create and
destroy groups and for processes to join and leave groups. When a process joins a group, it
informs its host of this fact. It is important that routers know which of their hosts belong to
which group. Either hosts must inform their routers about changes in group membership or
routers must query their hosts periodically. Routers tell their neighbors, so the information
propagates through the subnet.
To do multicast routing, each router computes a spanning tree covering all other
routers in the subnet. When a process sends a multicast packet to a group, the first router
examines its spanning tree and prunes it, removing all lines that do not lead to hosts that are
members of the group. Multicast packets are then forwarded only along the pruned tree.
Another delivery model, called anycast, is sometimes also useful. In anycast, a packet
is delivered to the nearest member of a group. Schemes that find these paths are called
anycast routing.
Figure 4.17: (a) Anycast routes to group 1. (b) Topology seen by the routing protocol
Sometimes nodes provide a service, such as time of day or content distribution, for
which getting the right information is all that matters, not the node that is contacted; any
node will do. Suppose we want to anycast to the members of group 1. They will all be given
the address ‘‘1,’’ instead of different addresses.
Distance vector routing will distribute vectors as usual, and nodes will choose the
shortest path to destination 1. This will result in nodes sending to the nearest instance of
destination 1. The routes are shown in Fig. 4.17(a). This procedure works because the routing
protocol does not realize that there are multiple instances of destination 1. That is, it believes
that all the instances of node 1 are the same node, as in the topology shown in Fig. 4.17(b).
When they are moved to new Internet locations, laptops acquire new network
addresses. There is no association between the old and new addresses; the network does not
know that they belonged to the same laptop. In this model, a laptop can be used to browse the
Web, but other hosts cannot send packets to it (for example, for an incoming call), without
building a higher layer location service, for example, signing into Skype again after moving.
Moreover, connections cannot be maintained while the host is moving; new connections must
be started up instead. Network-layer mobility is useful to fix these problems.
The basic idea used for mobile routing in the Internet and cellular networks is for the
mobile host to tell a host at the home location where it is now. This host, which acts on behalf
of the mobile host, is called the home agent. Once it knows where the mobile host is currently
located, it can forward packets so that they are delivered.
Fig. 4.18 shows mobile routing in action. A sender in the northwest city of Seattle
wants to send a packet to a host normally located across the United States in New York. The
case of interest to us is when the mobile host is not at home. Instead, it is temporarily in San
Diego. The mobile host in San Diego must acquire a local network address before it can use
the network. This happens in the normal way that hosts obtain network addresses; we will
cover how this works for the Internet later in this chapter. The local address is called a care of
address. Once the mobile host has this address, it can tell its home agent where it is now. It
does this by sending a registration message to the home agent (step 1) with the care of
address. The message is shown with a dashed line in Fig. 4.18 to indicate that it is a control
message, not a data message.
Next, the sender sends a data packet to the mobile host using its permanent address
(step 2). This packet is routed by the network to the host’s home location because that is
where the home address belongs. In New York, the home agent intercepts this packet because
the mobile host is away from home. It then wraps or encapsulates the packet with a new
header and sends this bundle to the care of address (step 3). This mechanism is called
tunneling.
When the encapsulated packet arrives at the care of address, the mobile host unwraps
it and retrieves the packet from the sender. The mobile host then sends its reply packet
directly to the sender (step 4). The overall route is called triangle routing because it may be
circuitous if the remote location is far from the home location. As part of step 4, the sender
may learn the current care of address. Subsequent packets can be routed directly to the mobile
host by tunneling them to the care of address (step 5), bypassing the home location entirely. If
connectivity is lost for any reason as the mobile moves, the home address can always be used
to reach the mobile.
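The encapsulation step (step 3) and its unwrapping can be sketched as follows; the address strings are illustrative placeholders, not real addresses:

```python
# Tunneling sketch for mobile routing: the home agent wraps the original packet
# in a new header addressed to the care of address; the mobile host unwraps it
# and recovers the sender's original packet.
def encapsulate(packet, care_of_address):
    """Home agent: wrap the packet in a new header toward the care of address."""
    return {"outer_dst": care_of_address, "payload": packet}

def decapsulate(tunneled):
    """Mobile host: strip the outer header and recover the original packet."""
    return tunneled["payload"]

original = {"dst": "home_address_NY", "data": "hello"}
tunneled = encapsulate(original, "care_of_address_SD")
assert decapsulate(tunneled) == original
print(tunneled["outer_dst"])  # care_of_address_SD
```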
11. Routing in Ad Hoc Networks
In all these cases, and others, each node consists of a router and a host, usually on the
same computer. Networks of nodes that just happen to be near each other are called ad hoc
networks or MANETs (Mobile Ad hoc NETworks). What makes ad hoc networks different
from wired networks is that all the usual rules about fixed topologies, fixed and known
neighbours, fixed relationship between IP address and location, and more are suddenly tossed
out the window.
Routers can come and go or appear in new places at the drop of a bit. With a wired
network, if a router has a valid path to some destination, that path continues to be valid
indefinitely (barring a failure somewhere in the system). With an ad hoc network, the
topology may be changing all the time. A variety of routing algorithms for ad hoc networks
have been proposed. One of the more interesting ones is the AODV (Ad hoc On-demand
Distance Vector) routing algorithm (Perkins and Royer, 1999).
It takes into account the limited bandwidth and low battery life found in these environments.
Another unusual characteristic is that it is an on-demand algorithm, that is, it determines a
route to some destination only when somebody wants to send a packet to that destination.
Route Discovery
Figure 4.19: (a) Range of A’s broadcast. (b) After B and D receive it. (c) After C, F, and
G receive it. (d) After E, H, and I receive it.
The shaded nodes are new recipients. The dashed lines show possible reverse routes.
The solid lines show the discovered route. To describe the algorithm, consider the newly
formed ad hoc network of Figure above. Suppose that a process at node A wants to send a
packet to node I. The AODV algorithm maintains a distance vector table at each node, keyed
by destination, giving information about that destination, including the neighbor to which to
send packets to reach the destination. First, A looks in its table and does not find an entry for
I. It now has to discover a route to I.
This property of discovering routes only when they are needed is what makes this
algorithm ‘‘on demand’’. To locate I, A constructs a ROUTE REQUEST packet and
broadcasts it using flooding. The transmission from A reaches B and D, as illustrated in Fig.
(a). Each node rebroadcasts the request, which continues to reach nodes F, G, and C in Fig.(c)
and nodes H, E, and I in Fig.(d). A sequence number set at the source is used to weed out
duplicates during the flood. For example, D discards the transmission from B in Fig. (c)
because it has already forwarded the request.
Eventually, the request reaches node I, which constructs a ROUTE REPLY packet.
This packet is unicast to the sender along the reverse of the path followed by the request. For
this to work, each intermediate node must remember the node that sent it the request.
The arrows in Fig. (b)–(d) show the reverse route information that is stored. Each
intermediate node also increments a hop count as it forwards the reply. This tells the nodes
how far they are from the destination.
The replies tell each intermediate node which neighbor to use to reach the destination: it
is the node that sent them the reply. Intermediate nodes G and D put the best route they hear
into their routing tables as they process the reply. When the reply reaches A, a new route,
ADGI, has been created.
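The discovery phase can be sketched as a flood that records reverse routes, followed by a reply that retraces them. The adjacency list below is an assumed reading of the figure's topology (the figure itself is not reproduced here), chosen so that discovery yields the route ADGI from the text:

```python
from collections import deque

# AODV route discovery sketch: the ROUTE REQUEST floods outward from the source,
# each node remembers which neighbor handed it the request (the dashed reverse
# routes), and the ROUTE REPLY from the destination walks that chain back.
topology = {  # hypothetical adjacency list for the ad hoc network in the figure
    "A": ["B", "D"], "B": ["A", "C", "D", "F"], "C": ["B", "G"],
    "D": ["A", "B", "F", "G"], "F": ["B", "D", "G", "H"],
    "G": ["C", "D", "F", "H", "I"], "H": ["F", "G", "E", "I"],
    "E": ["H"], "I": ["G", "H"],
}

def discover(source, dest):
    reverse = {source: None}      # node -> neighbor that sent it the request
    queue = deque([source])       # sequence numbers dedupe: each node floods once
    while queue:
        node = queue.popleft()
        for nb in topology[node]:
            if nb not in reverse: # duplicate requests are discarded
                reverse[nb] = node
                queue.append(nb)
    path, node = [], dest         # the reply retraces the reverse pointers
    while node is not None:
        path.append(node)
        node = reverse[node]
    return path[::-1]

print(discover("A", "I"))  # ['A', 'D', 'G', 'I'] -- the route ADGI from the text
```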
Route Maintenance
Because nodes can move or be switched off, the topology can change spontaneously.
For example, in Fig., if G is switched off, A will not realize that the route it was using to I
(ADGI) is no longer valid. The algorithm needs to be able to deal with this. Periodically, each
node broadcasts a Hello message. Each of its neighbors is expected to respond to it. If no
response is forthcoming, the broadcaster knows that that neighbor has moved out of range or
failed and is no longer connected to it. Similarly, if it tries to send a packet to a neighbor that
does not respond, it learns that the neighbor is no longer available.
4.3 Congestion Control Algorithms
When too many packets are present in (a part of) the subnet, performance degrades. This
situation is called congestion.
Figure below depicts the symptom. When the number of packets dumped into the subnet
by the hosts is within its carrying capacity, they are all delivered (except for a few that are
afflicted with transmission errors) and the number delivered is proportional to the number
sent.
However, as traffic increases too far, the routers are no longer able to cope and they begin
losing packets. This tends to make matters worse. At very high traffic, performance collapses
completely and almost no packets are delivered.
Figure 4.20: Flow of Congestion
When too much traffic is offered, congestion sets in and performance degrades
sharply.
Slow processors can also cause congestion. If the routers' CPUs are slow at
performing the bookkeeping tasks required of them (queueing buffers, updating tables,
etc.), queues can build up, even though there is excess line capacity. Similarly, low-bandwidth
lines can also cause congestion.
Many problems in complex systems, such as computer networks, can be viewed from
a control theory point of view. This approach leads to dividing all solutions into two
groups: open loop and closed loop.
Tools for doing open-loop control include deciding when to accept new traffic,
deciding when to discard packets and which ones, and making scheduling decisions at
various points in the network.
The second step in the feedback loop is to transfer the information about the
congestion from the point where it is detected to the point where something can be
done about it.
In all feedback schemes, the hope is that knowledge of congestion will cause the hosts
to take appropriate action to reduce the congestion.
The presence of congestion means that the load is (temporarily) greater than the
resources (in part of the system) can handle. Two solutions come to mind: increase the
resources or decrease the load.
We first look at methods to control congestion in open-loop systems. These systems
are designed to minimize congestion in the first place, rather than letting it happen and
reacting after the fact. They try to achieve their goal by using appropriate policies at various
levels. In the figure below, we see different data link, network, and transport policies that can affect
congestion.
The retransmission policy is concerned with how fast a sender times out and what it
transmits upon timeout. A jumpy sender that times out quickly and retransmits all outstanding
packets using go back n will put a heavier load on the system than will a leisurely sender that
uses selective repeat.
Closely related to this is the buffering policy. If receivers routinely discard all out-of-order
packets, these packets will have to be transmitted again later, creating extra load. With
respect to congestion control, selective repeat is clearly better than go back n.
The choice between using virtual circuits and using datagrams affects congestion since
many congestion control algorithms work only with virtual-circuit subnets.
Packet queueing and service policy relates to whether routers have one queue per input line,
one queue per output line, or both. It also relates to the order in which packets are processed
(e.g., round robin or priority based).
Discard policy is the rule telling which packet is dropped when there is no space.
A good routing algorithm can help avoid congestion by spreading the traffic over all the
lines, whereas a bad one can send too much traffic over already congested lines.
Packet lifetime management deals with how long a packet may live before being discarded.
If it is too long, lost packets may clog up the works for a long time, but if it is too short,
packets may sometimes time out before reaching their destination, thus inducing
retransmissions.
The same issues occur as in the data link layer, but in addition, determining the timeout
interval is harder because the transit time across the network is less predictable than the
transit time over a wire between two routers. If the timeout interval is too short, extra packets
will be sent unnecessarily. If it is too long, congestion will be reduced but the response time
will suffer whenever a packet is lost.
2. Traffic Aware Routing
The routing schemes considered so far adapt to changes in topology, but not to changes in load.
The goal in taking load into account when computing routes is to shift traffic away from hotspots that
will be the first places in the network to experience congestion.
The most direct way to do this is to set the link weight to be a function of the (fixed)
link bandwidth and propagation delay plus the (variable) measured load or average queueing
delay. Least-weight paths will then favor paths that are more lightly loaded, all else being
equal.
Consider the network of Fig. below, which is divided into two parts, East and West,
connected by two links, CF and EI. Suppose that most of the traffic between East and West is
using link CF, and, as a result, this link is heavily loaded with long delays. Including
queueing delay in the weight used for the shortest path calculation will make EI more
attractive. After the new routing tables have been installed, most of the East-West traffic will
now go over EI, loading this link. Consequently, in the next update, CF will appear to be the
shortest path. As a result, the routing tables may oscillate wildly, leading to erratic routing.
If load is ignored and only bandwidth and propagation delay are considered, this
problem does not occur. Attempts to include load but change weights within a narrow range
only slow down routing oscillations. Two techniques can contribute to a successful solution.
The first is multipath routing, in which there can be multiple paths from a source to a
destination. In our example this means that the traffic can be spread across both of the East to
West links. The second one is for the routing scheme to shift traffic across routes slowly
enough that it is able to converge.
3. Admission Control
One technique that is widely used to keep congestion that has already started from
getting worse is admission control.
Once congestion has been signaled, no more virtual circuits are set up until the problem has
gone away.
An alternative approach is to allow new virtual circuits but carefully route all new virtual
circuits around problem areas. For example, consider the subnet of the figure below, part (a), in which
two routers are congested, as indicated.
Figure 4.24: (a) A congested subnet. (b) A redrawn subnet that eliminates the congestion.
4. Traffic Throttling
Each router can easily monitor the utilization of its output lines and other resources. For
example, it can associate with each line a real variable, u, whose value, between 0.0 and 1.0,
reflects the recent utilization of that line. To maintain a good estimate of u, a sample of the
instantaneous line utilization, f (either 0 or 1), can be made periodically and u updated
according to

u_new = a * u_old + (1 - a) * f

where the constant a determines how fast the router forgets recent history.
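The update rule u_new = a * u_old + (1 - a) * f is an exponentially weighted moving average, which can be sketched directly; the value of a below is an illustrative choice:

```python
# Exponentially weighted moving average of line utilization: the constant a
# (between 0 and 1, here assumed 0.9) controls how fast old samples are forgotten.
def update(u, f, a=0.9):
    """One periodic update of the utilization estimate from sample f (0 or 1)."""
    return a * u + (1 - a) * f

u = 0.0
for f in [1, 1, 1, 1]:    # four consecutive busy samples
    u = update(u, f)
print(round(u, 4))  # 0.3439: the estimate creeps up toward 1.0
```

A larger a makes the estimate smoother but slower to react; a smaller a tracks the instantaneous samples more closely.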
Whenever u moves above the threshold, the output line enters a ''warning'' state. Each
newly arriving packet is checked to see if its output line is in the warning state. If it is, some
action is taken. The action taken can be one of several alternatives.
i. The Warning Bit
The old DECNET architecture signaled the warning state by setting a special bit in the
packet's header.
When the packet arrived at its destination, the transport entity copied the bit into the
next acknowledgement sent back to the source. The source then cut back on traffic.
As long as the router was in the warning state, it continued to set the warning bit,
which meant that the source continued to get acknowledgements with it set.
The source monitored the fraction of acknowledgements with the bit set and adjusted
its transmission rate accordingly. As long as the warning bits continued to flow in, the source
continued to decrease its transmission rate. When they slowed to a trickle, it increased its
transmission rate. Note that since every router along the path could set the warning bit, traffic
increased only when no router was in trouble.
ii. Choke Packets
In this approach, the router sends a choke packet back to the source host, giving it the
destination found in the packet.
The original packet is tagged (a header bit is turned on) so that it will not generate
any more choke packets farther along the path and is then forwarded in the usual way.
When the source host gets the choke packet, it is required to reduce the traffic sent to
the specified destination by X percent. Since other packets aimed at the same destination are
probably already under way and will generate yet more choke packets, the host should ignore
choke packets referring to that destination for a fixed time interval. After that period has
expired, the host listens for more choke packets for another interval. If one arrives, the line is
still congested, so the host reduces the flow still more and begins ignoring choke packets
again. If no choke packets arrive during the listening period, the host may increase the flow
again.
The feedback implicit in this protocol can help prevent congestion yet not throttle any
flow unless trouble occurs.
Hosts can reduce traffic by adjusting their policy parameters. Increases are done in
smaller increments to prevent congestion from reoccurring quickly.
Routers can maintain several thresholds. Depending on which threshold has been
crossed, the choke packet can contain a mild warning, a stern warning, or an ultimatum.
iii. Hop-by-Hop Backpressure
At high speeds or over long distances, sending a choke packet to the source host does
not work well because the reaction is so slow.
Consider, for example, a host in San Francisco (router A in Fig. below) that is sending
traffic to a host in New York (router D in Fig. below) at 155 Mbps. If the New York host begins to
run out of buffers, it will take about 30 msec for a choke packet to get back to San Francisco
to tell it to slow down. The choke packet propagation is shown as the second, third, and
fourth steps in Fig. below (a). In those 30 msec, another 4.6 megabits will have been sent.
Even if the host in San Francisco completely shuts down immediately, the 4.6 megabits in
the pipe will continue to pour in and have to be dealt with. Only in the seventh diagram in
Fig. below (a) will the New York router notice a slower flow.
An alternative approach is to have the choke packet take effect at every hop it passes
through, as shown in the sequence of Fig. above (b). Here, as soon as the choke packet
reaches F, F is required to reduce the flow to D. Doing so will require F to devote more
buffers to the flow, since the source is still sending away at full blast, but it gives D
immediate relief, like a headache remedy in a television commercial. In the next step, the
choke packet reaches E, which tells E to reduce the flow to F. This action puts a greater
demand on E's buffers but gives F immediate relief. Finally, the choke packet reaches A and
the flow genuinely slows down.
Figure 4.25: (a) A choke packet that affects only the source. (b) A choke packet that
affects each hop it passes through.
The net effect of this hop-by-hop scheme is to provide quick relief at the point of
congestion at the price of using up more buffers upstream. In this way, congestion can be
nipped in the bud without losing any packets.
5. Load Shedding
When none of the above methods make the congestion disappear, routers can bring out the
heavy artillery: load shedding.
Load shedding is a fancy way of saying that when routers are being inundated by packets
that they cannot handle, they just throw them away.
A router drowning in packets can just pick packets at random to drop, but usually it can do
better than that. Which packet to discard may depend on the applications running.
To implement an intelligent discard policy, applications must mark their packets in priority
classes to indicate how important they are. If they do this, then when packets have to be
discarded, routers can first drop packets from the lowest class, then the next lowest class, and
so on.
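The priority-based discard policy described above can be sketched as follows. This is a toy illustration, not a real router queue: the packet representation and the convention that a higher class number means lower priority are assumptions for the example.

```python
def drop_lowest_priority(queue, capacity):
    """When the queue exceeds capacity, discard packets starting from the
    lowest priority class (here, a larger class number = lower priority)."""
    while len(queue) > capacity:
        victim = max(queue, key=lambda pkt: pkt["priority_class"])
        queue.remove(victim)
    return queue

# Four queued packets, room for only two: the two class-3 packets go first.
q = [{"id": 1, "priority_class": 0},
     {"id": 2, "priority_class": 3},
     {"id": 3, "priority_class": 1},
     {"id": 4, "priority_class": 3}]
print([p["id"] for p in drop_lowest_priority(q, 2)])  # → [1, 3]
```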
In some transport protocols (including TCP), the response to lost packets is for the source
to slow down. The reasoning behind this logic is that TCP was designed for wired networks,
and wired networks are very reliable, so lost packets are mostly due to buffer overruns rather
than transmission errors. This fact can be exploited to help reduce congestion; it is the idea
behind RED (Random Early Detection). By having routers drop packets before the situation
has become hopeless (hence the ''early'' in the name), there is time for the source to take
action before it is too late. To determine when to start discarding, routers maintain a
running average of their queue lengths. When the average queue length on some line
exceeds a threshold, the line is said to be congested and action is taken.
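The running average of the queue length is typically an exponentially weighted moving average. A minimal sketch, where the weight value and the sample queue lengths are made-up illustration numbers (real routers use a much smaller weight so the average tracks long-term congestion rather than momentary bursts):

```python
def update_avg(avg, sample, weight):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - weight) * avg + weight * sample

avg = 0.0
for qlen in [5, 7, 6, 50, 60, 55]:      # instantaneous queue-length samples
    avg = update_avg(avg, qlen, weight=0.5)

THRESHOLD = 40                          # assumed congestion threshold
print(avg > THRESHOLD)                  # → True: the line is congested
```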
The following summarizes the top 10 principles of the network layer in the
Internet.
1. Make sure it works. Do not finalize the design or standard until multiple prototypes have
successfully communicated with each other. All too often, designers first write a 1000-page
standard, get it approved, then discover it is deeply flawed and does not work.
2. Keep it simple. When in doubt, use the simplest solution. If a feature is not absolutely
essential, leave it out, especially if the same effect can be achieved by combining other
features.
3. Make clear choices. If there are several ways of doing the same thing, choose one. Having
two or more ways to do the same thing is looking for trouble. Standards often have multiple
options or modes or parameters because several powerful parties insist that their way is best.
4. Exploit modularity. This principle leads directly to the idea of having protocol stacks,
each of whose layers is independent of all the other ones. In this way, if circumstances
require one module or layer to be changed, the other ones will not be affected.
5. Expect heterogeneity. Different types of hardware, transmission facilities, and
applications will occur on any large network, so the network design must be simple, general,
and flexible.
6. Avoid static options and parameters. If parameters are unavoidable (e.g., maximum
packet size), it is best to have the sender and receiver negotiate a value rather than defining
fixed choices.
7. Look for a good design; it need not be perfect. Often, the designers have a good design
but it cannot handle some weird special case. Rather than messing up the design, the
designers should go with the good design and put the burden of working around it on the
people with the strange requirements.
8. Be strict when sending and tolerant when receiving. In other words, send only packets
that rigorously comply with the standards, but expect incoming packets that may not be fully
conformant and try to deal with them.
9. Think about scalability. If the system is to handle millions of hosts and billions of users
effectively, no centralized databases of any kind are tolerable and load must be spread as
evenly as possible over the available resources.
10. Consider performance and cost. If a network has poor performance or outrageous costs,
nobody will use it.
In the network layer, the Internet can be viewed as a collection of networks or ASes
(Autonomous Systems) that are interconnected. There is no real structure, but several major
backbones exist. These are constructed from high-bandwidth lines and fast routers. The
biggest of these backbones, to which everyone else connects to reach the rest of the Internet,
are called Tier 1 networks. Attached to the backbones are ISPs (Internet Service
Providers) that provide Internet access to homes and businesses, data centers and colocation
facilities full of server machines, and regional (mid-level) networks. The data centers serve
much of the content that is sent over the Internet. Attached to the regional networks are more
ISPs, LANs at many universities and companies, and other edge networks. A sketch of this
quasi hierarchical organization is given in Fig. 4.26.
Figure 4.26: The Internet is an interconnected collection of many networks
The glue that holds the whole Internet together is the network layer protocol, IP
(Internet Protocol). Unlike older network layer protocols, IP was designed from the
beginning with internetworking in mind. A good way to think of the network layer is this: its
job is to provide a best-effort (i.e., not guaranteed) way to transport packets from source to
destination, without regard to whether these machines are on the same network or whether
there are other networks in between them.
Communication in the Internet works as follows. The transport layer takes data
streams and breaks them up so that they may be sent as IP packets. In theory, packets can be
up to 64 KB each, but in practice they are usually not more than 1500 bytes (so they fit in
one Ethernet frame). IP routers forward each packet through the Internet, along a path from
one router to the next, until the destination is reached. At the destination, when all the
pieces finally arrive, the network layer reassembles them into the original datagram. This
datagram is then handed to the transport layer, which gives it to the receiving process.
In the example of Fig. 4.26, a packet originating at a host on the home network has to
traverse four networks and a large number of IP routers before even getting to the company
network on which the destination host is located. This is not unusual in practice, and there are
many longer paths. There is also much redundant connectivity in the Internet, with
backbones and ISPs connecting to each other in multiple locations. This means that there are
many possible paths between two hosts. It is the job of the IP routing protocols to decide
which paths to use.
1. The IP Version 4 Protocol
An IPv4 datagram consists of a header part and a body or payload part. The header
has a 20-byte fixed part and a variable-length optional part. The header format is shown in
Fig. 26. The bits are transmitted from left to right and top to bottom, with the high-order bit
of the Version field going first. (This is a ‘‘big-endian’’ network byte order. On little-endian
machines, such as Intel x86 computers, a software conversion is required on both
transmission and reception.)
Version: The Version field keeps track of which version of the protocol the datagram
belongs to. Version 4 dominates the Internet today. IPv6 is the next version of IP; its use
will eventually be forced as the number of Internet-connected devices (desktop PCs, laptops,
and IP phones) continues to grow. IPv5 was an experimental real-time stream protocol that
was never widely used.
IHL: The header length is not constant, so the IHL field is provided to tell how long the
header is, in 32-bit words. The minimum value is 5, which applies when no options are
present. The maximum value of this 4-bit field is 15, which limits the header to 60 bytes,
and thus the Options field to 40 bytes.
Type of Service: The Type of service field (now called the Differentiated services field) is
intended to distinguish between different classes of service. Various combinations of
reliability and speed are possible. For digitized voice, fast delivery beats accurate delivery;
for file transfer, error-free transmission is more important than fast transmission. Originally,
the field provided 3 bits to signal priority and 3 bits to signal whether a host cared more
about delay, throughput, or reliability. However, no one really knew what to do with these
bits at routers, so they were left unused for many years.
Total length: The Total length includes everything in the datagram—both header and data.
The maximum length is 65,535 bytes. At present, this upper limit is tolerable, but with future
networks, larger datagrams may be needed.
Identification: The Identification field is needed to allow the destination host to determine
which datagram a newly arrived fragment belongs to. All the fragments of a datagram
contain the same Identification value.
Unused bit: Next comes an unused bit, which is surprising, as available real estate in the IP
header is extremely scarce. As an April fools' joke, it was once proposed (RFC 3514) to use
this bit to detect malicious traffic. This would greatly simplify security, as packets with the
''evil'' bit set would be known to have been sent by attackers and could just be discarded.
DF and MF: DF stands for Don’t Fragment. It is an order to the routers not to fragment
the packet. Now it is used as part of the process to discover the path MTU, which is the
largest packet that can travel along a path without being fragmented. By marking the
datagram with the DF bit, the sender knows it will either arrive in one piece, or an error
message will be returned to the sender. MF stands for More Fragments. All fragments
except the last one have this bit set. It is needed to know when all fragments of a datagram
have arrived.
Fragment offset: The Fragment offset tells where in the current packet this fragment
belongs. All fragments except the last one in a datagram must be a multiple of 8 bytes, the
elementary fragment unit. Since 13 bits are provided, there is a maximum of 8192 fragments
per datagram, supporting a maximum packet length up to the limit of the Total length field.
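The interaction of the MTU, the 8-byte elementary fragment unit, the Fragment offset, and the MF bit can be sketched in a few lines. The function below is an illustrative model, not a real IP stack; it returns (offset-in-8-byte-units, data length, MF bit) for each fragment:

```python
def fragment_offsets(payload_len, mtu, header_len=20):
    """Split a payload into fragments. Each fragment's data length (except
    possibly the last) must be a multiple of 8 bytes; the Fragment offset
    field stores the byte offset divided by 8."""
    max_data = (mtu - header_len) // 8 * 8   # largest multiple of 8 that fits
    frags, offset = [], 0
    while offset < payload_len:
        data = min(max_data, payload_len - offset)
        more = offset + data < payload_len   # MF bit: more fragments follow
        frags.append((offset // 8, data, more))
        offset += data
    return frags

# A 4000-byte payload over a link with a 1500-byte MTU:
print(fragment_offsets(4000, 1500))
# → [(0, 1480, True), (185, 1480, True), (370, 1040, False)]
```

Note that only the last fragment has the MF bit clear, which is how the receiver knows all fragments have arrived.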
TTL: The TTL (Time to live) field is a counter used to limit packet lifetimes. It was originally
supposed to count time in seconds, allowing a maximum lifetime of 255 sec. It must be
decremented on each hop and is supposed to be decremented multiple times when a packet is
queued for a long time in a router; in practice, it just counts hops. When it hits zero, the
packet is discarded and a warning packet is sent back to the source host. This feature
prevents packets from wandering around forever, something that otherwise might happen if
the routing tables ever become corrupted.
Protocol: The Protocol field tells the network layer which transport process to give the
packet to. TCP is one possibility, but so are UDP and some others. The numbering of
protocols is global across the entire Internet.
Header checksum: The Header checksum verifies the header only. It is computed by adding
up the header's 16-bit halfwords in one's complement arithmetic and taking the one's
complement of the result; for purposes of this computation, the Header checksum field itself
is assumed to be zero. Such a checksum is useful for detecting errors while the packet
travels through the network. It must be recomputed at each hop because at least one field
always changes (the Time to live field), but tricks can be used to speed up the computation.
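The one's complement algorithm just described is short enough to show in full. This is a sketch in Python; the 20-byte header below is a commonly used test vector with the checksum field zeroed:

```python
def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of the header's 16-bit halfwords; the checksum
    field itself must be zero while computing."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                       # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                   # one's complement of the sum

# Example header (checksum field set to 0000 for the computation):
hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
print(hex(ipv4_checksum(hdr)))  # → 0xb861
```

A handy property of this scheme: summing a *valid* header, checksum included, folds to 0xFFFF, so verification at each hop is the same loop with a compare against zero after complementing.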
Source address and Destination address: The Source address and Destination address
indicate the IP address of the source and destination network interfaces.
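The fixed 20-byte part of the header described above can be unpacked directly with Python's struct module in network (big-endian) byte order. The sample header bytes are illustrative values, not from any real capture:

```python
import struct

def parse_ipv4_header(data: bytes) -> dict:
    # "!BBHHHBBHII" = the 20-byte fixed header, big-endian
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBHII", data[:20])
    dotted = lambda v: ".".join(str((v >> s) & 0xFF) for s in (24, 16, 8, 0))
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,            # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                # e.g., 6 = TCP, 17 = UDP
        "src": dotted(src),
        "dst": dotted(dst),
    }

hdr = bytes.fromhex("45000073000040004011b861c0a80001c0a800c7")
print(parse_ipv4_header(hdr))
# version 4, IHL 5 (no options), TTL 64, protocol 17 (UDP),
# 192.168.0.1 → 192.168.0.199
```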
Options field: The Options field was designed to provide an escape to allow subsequent
versions of the protocol to include information not present in the original design, to permit
experimenters to try out new ideas, and to avoid allocating header bits to information that is
rarely needed.
Option                    Description
Security                  Specifies how secret the datagram is
Strict source routing     Gives the complete path to be followed
Loose source routing      Gives a list of routers not to be missed
Record route              Makes each router append its IP address
Timestamp                 Makes each router append its address and timestamp
Security option: The Security option tells how secret the information is. In theory, a military
router might use this field to specify not to route packets through certain countries the
military considers to be ‘‘bad guys.’’
Strict source routing: The Strict source routing option gives the complete path from source
to destination as a sequence of IP addresses. The datagram is required to follow that exact
route. It is most useful for system managers who need to send emergency packets when the
routing tables have been corrupted, or for making timing measurements.
Loose source routing: The Loose source routing option requires the packet to traverse the
list of routers specified, in the order specified, but it is allowed to pass through other routers
on the way.
Record route: The Record route option tells each router along the path to append its IP
address to the Options field.
Timestamp: The Timestamp option is like the Record route option, except that in addition to
recording its 32-bit IP address, each router also records a 32-bit timestamp.
2. IP Addresses
All the computers on the Internet communicate with each other over underground or
underwater cables or wirelessly. If a user wants to download a file, load a web page, or do
literally anything related to the internet, the user's computer must have an address so that
other computers can find and locate it in order to deliver the requested file or web page. In
technical terms, that address is called an IP Address or Internet Protocol Address.
Example: If someone wants to receive mail, they must have a home address.
Similarly, a computer needs an address so that other computers on the internet can
communicate with it without the confusion of delivering information to someone else's
computer. That is why each computer connected to the Internet has a unique IP Address. In
other words, an IP address is a unique address that is used to identify computers or nodes on
the internet. This address is just a string of numbers written in a certain format. It is generally
expressed as a set of four numbers, for example 192.155.12.1, where each number is in the
range 0 to 255. Thus a full IP address ranges from 0.0.0.0 to 255.255.255.255.
Working of IP addresses
IP addresses work much like a language: devices follow a shared set of rules
(protocols) to send and receive data or files with the devices connected to them.
The device requests access to the web from the Internet Service Provider, which grants it.
An IP address is then assigned to the device from the range of addresses available.
The device's internet activity goes through the service provider, which routes the
responses back to the device using its IP address.
The IP address can change. For example, turning the router on or off can change the IP
address.
When the device leaves its home network (for example, while traveling), the home IP
address no longer applies, because the device is now using a different network.
Subnetting
Subnetting refers to the concept of dividing a single vast network into more than
one smaller logical sub-network, called subnets. A subnet is related to the IP address in that
it borrows bits from the host part of the IP address. Thus the IP address has three parts:
• Network part. (Higher order bits)
• Subnet part.
• Host part. (Lower order bits)
The subnet part is formed by taking bits from the host component of the IP
address; these bits specify the number of subnets required. Subnetting allows having various
sub-networks within the big network without obtaining a new network number from the ISP.
Subnetting reduces network traffic and complexity. The purpose of introducing the concept
of subnetting was to cope with the shortage of IP addresses. The subnetting process helps in
dividing class A, class B, and class C network numbers into smaller parts. A subnet can
further be broken down into smaller networks known as sub-subnets.
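Borrowing host bits to form subnets can be demonstrated with Python's standard ipaddress module. The network 192.168.1.0/24 here is an arbitrary example; borrowing 2 host bits yields four /26 subnets of 64 addresses each:

```python
import ipaddress

# Borrow 2 bits from the host part of a /24 to create four /26 subnets.
net = ipaddress.ip_network("192.168.1.0/24")
for subnet in net.subnets(prefixlen_diff=2):
    print(subnet, "-", subnet.num_addresses, "addresses")
# → 192.168.1.0/26, 192.168.1.64/26, 192.168.1.128/26, 192.168.1.192/26
```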
IP address format
• The 32-bit IP address is grouped eight bits at a time, separated by dots and represented in
decimal format. This is known as dotted decimal notation, as shown in the figure.
• Each bit in the octet has a binary weight (128, 64, 32, 16, 8, 4, 2, 1).
• The minimum value for an octet is 0, and the maximum value for an octet is 255.
IPv4 classes are a way of dividing addresses in IPv4-based routing. Separate IP
classes are used for different types of networks. They can be explained as follows:
CLASSES     Range (first octet)
A           0 – 127
B           128 – 191
C           192 – 223
D           224 – 239
E           240 – 255
A router has more than one IP address because a router connects two or more different
networks, while a computer or host has a single, unique IP address. A router's
function is to inspect each incoming packet and determine whether it belongs to the local
network or to a remote network. If it is a local packet, there is no need for routing; if it is a
remote packet, the router routes it according to the routing table; otherwise the packet is
discarded.
Types of IP Address
1. IPv4: Internet Protocol version 4. It consists of 4 numbers separated by dots. Each
number can be from 0-255 in decimal. But computers do not understand decimal
numbers; they instead work with binary numbers, which use only 0 and 1. Therefore, in
binary, this (0-255) range can be written as (00000000 – 11111111). Since each number
can be represented by a group of 8 binary digits, a whole IPv4 address can be
represented by 32 binary digits. In IPv4, a unique sequence of bits is assigned to each
computer, so a total of 2^32 ≈ 4,294,967,296 devices can be assigned addresses with
IPv4.
IPv4 can be written as:
189.123.123.90
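The correspondence between the dotted-decimal form and the underlying 32 bits can be shown with a few lines of Python. This conversion is a sketch for illustration, using the example address from the text:

```python
def dotted_to_int(addr: str) -> int:
    """Pack four dotted-decimal octets into one 32-bit integer."""
    value = 0
    for octet in addr.split("."):
        value = (value << 8) | int(octet)   # shift in 8 bits per octet
    return value

def int_to_dotted(value: int) -> str:
    """Unpack a 32-bit integer back into dotted-decimal notation."""
    return ".".join(str((value >> s) & 0xFF) for s in (24, 16, 8, 0))

n = dotted_to_int("189.123.123.90")
print(n, "→", int_to_dotted(n))
print(2 ** 32)   # → 4294967296, the total IPv4 address space
```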
Classes of IPv4 Address: There are around 4.3 billion IPv4 addresses, and managing all
those addresses without any scheme is next to impossible. If you have to find a word in a
language dictionary, it takes less than 5 minutes, because the words in the dictionary are
organized in alphabetical order. If you had to find the same word in a dictionary that used
no sequence or order to organize its words, it could take an eternity. If a dictionary with one
billion unordered words would be that disastrous, imagine the pain of finding an address
among 4.3 billion unordered addresses. For easier management and assignment, IP
addresses are organized in numeric order and divided into the following 5 classes:
IP Class    Address Range                    Maximum number of networks
A           0.0.0.0 – 127.255.255.255        128 (2^7)
B           128.0.0.0 – 191.255.255.255      16,384 (2^14)
C           192.0.0.0 – 223.255.255.255      2,097,152 (2^21)
D           224.0.0.0 – 239.255.255.255      (multicast)
E           240.0.0.0 – 255.255.255.255      (reserved)
2. IPv6: There is a problem with the IPv4 address, however. With IPv4, only the
above number of about 4 billion devices can be connected uniquely, and there are far more
devices in the world that need to connect to the internet. So we are gradually moving to the
IPv6 Address, which is a 128-bit IP address. In human-friendly form, IPv6 is written as a
group of 8 hexadecimal numbers separated by colons (:). In computer-friendly form, it is
written as 128 bits of 0s and 1s. Since a unique sequence of binary digits is given to each
computer, smartphone, or other device connected to the internet, IPv6 can assign a total of
2^128 devices unique addresses, which is more than enough for upcoming future
generations.
2011:0bd9:75c5:0000:0000:6b3e:0170:8394
Classification of IP Address
An IP address is classified into the following types:
1. Public IP Address: This address is available publicly and is assigned by the network
provider to the router, which in turn assigns addresses to devices. Public IP addresses are of
two types:
Static IP Address: A static address never changes; it serves as a permanent internet address.
These are used by DNS servers. A static IP address reveals information such as the
continent, country, and city where the device is located, and which Internet Service Provider
serves it. Once the ISP is known, the location of the device connected to the internet can be
traced. Static IP addresses provide less security than dynamic IP addresses because they are
easier to track.
Dynamic IP Address: A dynamic address changes from time to time; it is assigned to the
device by the ISP from a pool of available addresses, typically each time the device connects
to the network.
2. Private IP Address: This is an internal address of the device that is not routed to the
internet; no direct exchange of data can take place between a private address and the internet.
3. Shared IP Address: Websites whose traffic is not huge and is very much controllable
often use shared IP addresses, renting the address to other similar websites to make it cost-
friendly. Several companies and email-sending servers use the same IP address (within a
single mail server) to cut down costs, so that they save money for the time the server would
otherwise be idle.
ICMP stands for Internet Control Message Protocol. The operation of the Internet is
monitored closely by the routers. When something unexpected occurs, the event is reported
by the ICMP, which is also used to test the Internet. About a dozen types of ICMP messages
are defined. The most important ones are listed in Fig. 29. Each ICMP message type is
encapsulated in an IP packet.
The TIME EXCEEDED message is sent when a packet is dropped because its
counter has reached zero. This event is a symptom that packets are looping, that there is
enormous congestion, or that the timer values are being set too low.
The PARAMETER PROBLEM message indicates that an illegal value has been
detected in a header field. This problem indicates a bug in the sending host's IP software or
possibly in the software of a router transited.
The SOURCE QUENCH message was formerly used to throttle hosts that were
sending too many packets. When a host received this message, it was expected to slow down.
The REDIRECT message is used when a router notices that a packet seems to be
routed wrong. It is used by the router to tell the sending host about the probable error.
The ECHO and ECHO REPLY messages are used to see if a given destination is
reachable and alive. Upon receiving the ECHO message, the destination is expected to send
an ECHO REPLY message back.
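The ECHO message is what the ping utility sends. An ICMP Echo Request is simple enough to build by hand; the sketch below constructs one with Python's struct module (the identifier, sequence number, and payload are arbitrary example values, and actually sending it would require a raw socket and root privileges):

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Same one's-complement checksum as IP, over the whole ICMP message."""
    if len(data) % 2:
        data += b"\x00"
    total = sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    # Type 8 (ECHO), code 0; checksum is computed over the whole message
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = build_echo_request(ident=1, seq=1)
print(pkt.hex())
```

The destination would answer with an ECHO REPLY (type 0) carrying the same identifier, sequence number, and payload.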
This mapping process is significant because IP and MAC addresses differ in length and
format and must be converted so that the systems can identify one another. At present, the
most frequently used IP version is IPv4. An IP address is 32 bits long, whereas a MAC
address is 48 bits long. The Address Resolution Protocol (ARP) maps the 32-bit address to
the 48-bit address. The address resolution protocol diagram is shown below.
When the source at the network layer wants to communicate with the destination, the
source first needs to discover the physical address, or MAC address, of the destination. For
this, the source checks the ARP table or ARP cache for the destination's MAC address. If
the destination's address is available in the ARP table or cache, the source uses that MAC
address for communication.
Figure 4.32: Address Resolution Protocol
If the MAC address of the destination is not available in the ARP table or cache,
the source generates an ARP request message. The ARP table is used to maintain a mapping
between every MAC address and its corresponding IP address; the table can also be
populated manually by the user. The request message includes the IP address and MAC
address of the source and the IP address of the destination. The destination's MAC address
field is left empty, since that is the value being requested.
The ARP request message is broadcast onto the local network by the source
computer. All devices on the LAN receive the broadcast. Each device then compares the
destination IP address in the request with its own IP address.
Working of ARP
If the destination IP address in the request matches the device's own IP address, the
device sends an ARP reply message; if it does not match, the device drops the packet
automatically. The ARP reply packet contains the replying device's MAC address. The
destination device also updates its own table automatically to store the MAC address of the
source, as this address will be needed for communicating back to the source.
At this point, the destination device transmits the ARP reply message, which is
unicast rather than broadcast. Once the source device receives the ARP reply, it knows the
destination device's MAC address, because the reply packet includes it along with the other
addresses. The source updates its ARP cache with the destination's MAC address, so the
sender is now able to communicate with the destination directly.
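The cache-then-broadcast behaviour just described can be sketched as a toy resolver. The IP and MAC addresses below are invented, and the LAN "broadcast" is modelled by a simple dictionary lookup:

```python
arp_cache = {}                                  # IP address -> MAC address
lan_devices = {                                 # assumed hosts on this LAN
    "192.168.1.10": "aa:bb:cc:dd:ee:01",
    "192.168.1.20": "aa:bb:cc:dd:ee:02",
}

def resolve(ip):
    if ip in arp_cache:                          # cache hit: no broadcast needed
        return arp_cache[ip]
    mac = lan_devices.get(ip)                    # "broadcast": only the host
    if mac is not None:                          # whose IP matches replies
        arp_cache[ip] = mac                      # learn the mapping
    return mac                                   # None if no host replied

print(resolve("192.168.1.10"))  # learned via a request
print(resolve("192.168.1.10"))  # served from the cache this time
```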
1. Proxy ARP
Proxy ARP is a technique in which one system replies to ARP requests on behalf of a
different system. When the request is for a host outside the local network, the router
functions as a gateway, forwarding the packets out of the network toward their destinations.
2. Reverse ARP
Reverse ARP is a protocol used by a client machine in a LAN to request its IPv4
address from the gateway router's table. The network administrator prepares a table in the
gateway router that maps each MAC address to its corresponding IP address.
3. Gratuitous ARP
A gratuitous ARP is an ARP request that a host broadcasts for its own IP address.
It is mainly used when an end system has just been given an IP address and wishes to
announce its MAC address to the local area network, and to confirm that the IP address is
not in use by any other system. This protocol is mainly used to update the ARP tables of
other devices, and it also verifies whether the host is using the actual IP address or a
duplicate one.
4. Inverse ARP
The inverse of ARP, known as Inverse ARP, is used to find out a system's IP
address on the local area network from its MAC address. It is most frequently used in
Frame Relay and ATM networks, where layer-3 addresses are obtained from layer-2
addresses.
Advantages
1. By using ARP, a MAC address can be found easily if the same system's IP address is
known.
2. End nodes need not be configured to know MAC addresses; they can be found when
required.
3. The main goal of this protocol is to allow every host on a network to build a mapping
between two addresses: IP and physical.
4. The set of mappings stored within the host is known as the ARP cache/table.
Disadvantages
1. ARP attacks may occur, such as ARP spoofing and denial of service.
DHCP (Dynamic Host Configuration Protocol) manages the provisioning of all the nodes
or devices added to or dropped from the network.
DHCP maintains the unique IP address of a host using a DHCP server.
DHCP is also used to configure the proper subnet mask, default gateway, and DNS server
information on the node or device.
Versions of DHCP are available for use in both IPv4 (Internet Protocol version 4) and
IPv6 (Internet Protocol version 6).
DHCP runs at the application layer of the TCP/IP protocol stack to dynamically
assign IP addresses to DHCP clients/nodes and to allocate TCP/IP configuration information
to the DHCP clients. Information includes subnet mask information, default gateway, IP
addresses and domain name system addresses.
DHCP clients request an IP address. Typically, client broadcasts a query for this
information.
DHCP server responds to the client request by providing IP server address and other
configuration information. This configuration information also includes time period, called a
lease, for which the allocation is valid.
When refreshing an assignment, a DHCP client requests the same parameters, but the
DHCP server may assign a new IP address, based on the policies set by the
administrator.
Components of DHCP
DHCP server: A DHCP server is a networked device running the DHCP service that
holds IP addresses and related configuration information. This is typically a server or a router
but could be anything that acts as a host, such as an SD-WAN appliance.
DHCP client: DHCP client is the endpoint that receives configuration information
from a DHCP server. This can be any device like computer, laptop, IoT endpoint or anything
else that requires connectivity to the network. Most of the devices are configured to receive
DHCP information by default.
IP address pool: IP address pool is the range of addresses that are available to DHCP
clients. IP addresses are typically handed out sequentially from lowest to the highest.
Subnet: Subnet is the partitioned segments of the IP networks. Subnet is used to keep
networks manageable.
Lease: Lease is the length of time for which a DHCP client holds the IP address
information. When a lease expires, the client has to renew it.
DHCP relay: A host or router that listens for client messages being broadcast on that
network and then forwards them to a configured server. The server then sends responses back
to the relay agent that passes them along to the client. DHCP relay can be used to centralize
DHCP servers instead of having a server on each subnet.
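The pool, lease, and renewal behaviour described above can be modelled in a few lines. This is a toy allocator, not a real DHCP implementation (there is no DORA message exchange, relay, or lease expiry handling); the address range and lease length are assumed values:

```python
import time

class DhcpServer:
    """Toy DHCP address pool: hands out the lowest free address with a lease."""
    def __init__(self, first, last, lease_seconds=3600):
        prefix, lo = first.rsplit(".", 1)
        hi = int(last.rsplit(".", 1)[1])
        self.free = [f"{prefix}.{i}" for i in range(int(lo), hi + 1)]
        self.leases = {}                 # MAC address -> (IP, lease expiry)
        self.lease_seconds = lease_seconds

    def request(self, mac):
        if mac in self.leases:           # renewal: may keep the same address
            ip, _ = self.leases[mac]
        else:
            ip = self.free.pop(0)        # addresses handed out lowest first
        self.leases[mac] = (ip, time.time() + self.lease_seconds)
        return ip

server = DhcpServer("192.168.1.100", "192.168.1.110")
print(server.request("aa:bb:cc:00:00:01"))  # → 192.168.1.100
print(server.request("aa:bb:cc:00:00:02"))  # → 192.168.1.101
print(server.request("aa:bb:cc:00:00:01"))  # renewal → 192.168.1.100
```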