unit 4
NETWORK LAYER:
Internetworking:
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
The network layer is mainly focused on getting packets from the source to the destination; it also handles
routing, error handling and congestion control.
NETWORK LAYER functions:
Addressing:
Maintains the source and destination addresses in the packet header and uses this addressing to
identify the various devices in the network.
Packetizing:
This is performed by the Internet Protocol (IP). The network layer encapsulates the data received from its
upper layer into packets.
Routing:
This is the most important function. The network layer chooses the best path for data transmission
from source to destination.
Inter-networking:
It provides a logical connection across multiple networks and devices.
The host sends the packet to the nearest router, either on its own LAN or over a point-to-
point link to the ISP.
The packet is stored there until it has fully arrived and the checksum has been verified; it is then
forwarded to the next router, and so on, until it reaches the destination.
This mechanism is called “Store-and-forward packet switching.”
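As a rough illustration of this store-and-forward behaviour, the short Python sketch below (the Packet fields, the toy checksum and the forwarding table are invented for this example, not part of the notes) stores a packet, verifies its checksum and only then looks up the outgoing line:

import dataclasses

@dataclasses.dataclass
class Packet:
    destination: str   # name of the final destination host
    payload: bytes
    checksum: int      # checksum computed by the sender

def compute_checksum(payload):
    # Toy checksum for illustration; real links use CRC or the Internet checksum.
    return sum(payload) % 65536

def store_and_forward(packet, forwarding_table):
    # The packet has been fully received (stored); verify it before forwarding.
    if compute_checksum(packet.payload) != packet.checksum:
        return None                                        # corrupted on the link: discard
    return forwarding_table.get(packet.destination)        # outgoing line toward the destination

table = {"F": "line to router C"}        # this router reaches host F via router C
pkt = Packet("F", b"hello", compute_checksum(b"hello"))
print(store_and_forward(pkt, table))     # -> line to router C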
But before providing services to the transport layer, the following goals must be kept in
mind:
Offering services must not depend on router technology.
The transport layer needs to be shielded from the type, number and topology of the
routers present.
The network addresses made available to the transport layer should use a uniform numbering
plan, even across LAN and WAN connections.
Through the network/transport layer interface, the network layer provides its services to the
transport layer. These services are described below.
Based on the connections there are 2 types of services provided:
Connectionless – The routing and insertion of packets into subnet is done individually. No
added setup is required.
Connection-Oriented – Subnet must offer reliable service and all the packets must be
transmitted over a single route.
To use a connection-oriented service, first we establish a connection, use it and then release
it. In connection-oriented services, the data packets are delivered to the receiver in the same
order in which they have been sent by the sender.
It can be done in either of two ways:
Circuit Switched Connection – A dedicated physical path or a circuit is established between
the communicating nodes and then data stream is transferred.
Virtual Circuit Switched Connection – The data stream is transferred over a packet
switched network, in such a way that it seems to the user that there is a dedicated path from
the sender to the receiver. A virtual path is established here, although other connections may
also be using the same path.
If connectionless service is offered, packets are injected into the network individually and
routed independently of each other.
No advance setup is needed. In this context, the packets are frequently called datagrams
(in analogy with telegrams) and the network is called a datagram network.
[Figure: Routing within a datagram network, showing A’s table (initially), A’s table (later), C’s table and E’s table]
Let us assume for this example that the message is four times longer than the maximum
packet size, so the network layer has to break it into four packets, 1, 2, 3, and 4, and send each
of them in turn to router A.
Every router has an internal table telling it where to send packets for each of the possible
destinations.
Each table entry is a pair (destination, outgoing line); only directly connected lines can
be used.
A’s initial routing table is shown in the figure under the label ‘‘initially.’’
At A, packets 1, 2, and 3 are stored briefly, having arrived on the incoming link. Then each
packet is forwarded according to A’s table, onto the outgoing link to C within a new frame.
Packet 1 is then forwarded to E and then to F. However, something different happens to packet
4. When it gets to A it is sent to router B, even though it is also destined for F. For some reason
(traffic jam along ACE path), A decided to send packet 4 via a different route than that of the
first three packets. Router A updated its routing table, as shown under the label ‘‘later.’’
The algorithm that manages the tables and makes the routing decisions is called the routing
algorithm.
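The forwarding step can be sketched in Python as below; the table contents are only illustrative stand-ins for the figure, with each router mapping a destination to the outgoing line (a directly connected neighbour):

# Per-router forwarding tables for the example: destination -> outgoing line.
tables = {
    "A": {"F": "C"},   # A's table initially: packets for F leave on the line to C
    "C": {"F": "E"},
    "E": {"F": "F"},   # F is directly connected to E
}
a_table_later = {"F": "B"}   # A's table later: packet 4 is sent via B instead (ACE is congested)

def forward(router, destination):
    # Look up the outgoing line this router uses for the destination.
    return tables[router][destination]

# Packet 1 follows A -> C -> E -> F using the initial tables:
hop, path = "A", ["A"]
while hop != "F":
    hop = forward(hop, "F")
    path.append(hop)
print(path)   # ['A', 'C', 'E', 'F']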
Router:
Routers are networking devices operating at layer 3 (the network layer) of the OSI
model. They are responsible for receiving, analyzing, and forwarding data packets among
the connected computer networks.
When a data packet arrives, the router inspects the destination address, consults its
routing tables to decide the optimal route and then transfers the packet along this route.
Types of routers:
Wireless Router
Brouter
Core router
Edge router
Routing Table in Router:
A routing table determines the path for a given packet: using the destination IP address of a
device and the necessary information from the table, the router sends the packet toward the destination
network.
Routers have internal memory known as Random Access Memory (RAM), and all the
information in the routing table is stored in the RAM of the router.
For example, one entry may map the default route to the outgoing interface Eth3 (Default → Eth3). In general, the routing table:
o contains the IP addresses of the routers that are required to decide the way to reach the
destination network;
o includes outgoing interface information;
o also contains the IP address and subnet mask of the destination host.
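A minimal Python sketch of such a lookup, assuming entries of (destination network, outgoing interface) and longest-prefix matching; the addresses and interface names are made up, with Eth3 kept as the default route as in the fragment above:

import ipaddress

routing_table = [
    (ipaddress.ip_network("192.168.1.0/24"), "Eth0"),
    (ipaddress.ip_network("10.0.0.0/8"),     "Eth1"),
    (ipaddress.ip_network("0.0.0.0/0"),      "Eth3"),   # default route
]

def lookup(destination_ip):
    # Pick the most specific (longest prefix) network that contains the destination.
    dest = ipaddress.ip_address(destination_ip)
    matches = [(net, iface) for net, iface in routing_table if dest in net]
    net, iface = max(matches, key=lambda entry: entry[0].prefixlen)
    return iface

print(lookup("192.168.1.42"))   # Eth0
print(lookup("8.8.8.8"))        # Eth3 (falls through to the default route)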
Types of Routing:
o Static Routing
o Default Routing
o Dynamic Routing
Adaptive Routing Algorithms:
These algorithms change their routing decisions to reflect changes in the topology and in the network's traffic.
Also referred to as dynamic routing, they make use of dynamic data like current
topology, load, delay, etc., to pick out routes.
Optimization parameters are distance, number of hops, and estimated transit time.
(1) Isolated – In this technique, every node makes its routing decisions using the information it has,
without seeking information from other nodes. The sending nodes do not have information
concerning the status of a particular link.
The drawback is that packets may be sent through a congested network,
which may result in delay.
(2) Centralized – In this technique, a centralized node has complete information about the
network and makes all the routing decisions. The advantage is that only one node needs
to keep the information of the complete network; the drawback is that if the central node
goes down, the entire network fails.
The link state algorithm is centralized in this sense, since it knows the cost of each link in the network (a minimal sketch follows after this list).
(3) Distributed – In this technique, a node collects information from its neighbors and then makes
its decision about routing the packets.
A drawback is that a packet may be delayed if the network changes between the interval
in which the node collects information and the interval in which it sends the packets.
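Since the link state / centralized approach knows the cost of every link, the least-cost paths can be computed with Dijkstra's algorithm; the Python sketch below uses an invented example graph, not one taken from these notes:

import heapq

def dijkstra(graph, source):
    # Least-cost distance from source to every node, given full knowledge of link costs.
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, cost in graph[u].items():
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}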
Non-adaptive routing algorithms do not base their routing decisions on the current network
topology or traffic.
This is also called static routing: the routes are computed in advance and downloaded to the
routers when the router is booted.
Routing Protocols
A routing protocol is used to select a route within a network to find the best path to forward a
packet.
Routers communicate with each other, gather information regarding the different paths to
a specific network, and store it in their routing tables.
The routing table stores the paths that the router has discovered to other networks and is used
to find the most optimal path when forwarding packets from source to destination.
Intra and Inter domain Routing :
An autonomous system (AS) is a group of networks and routers under the authority of a
single administration. Routing inside an autonomous system is referred to as intra
domain routing. (DISTANCE VECTOR, LINK STATE)
Routing between autonomous systems is referred to as inter domain routing. (PATH
VECTOR)
Distance Vector Routing:
In distance vector routing, the least-cost route between any two nodes is the route with
minimum distance.
In this protocol, as the name implies, each node maintains a vector (table) of minimum
distances to every node. The protocol involves three steps:
Initialization
Sharing
Updating
Initialization :
Each node can know only the distance between itself and its immediate
neighbors, those
directly connected to it.
So for the moment, we assume that each node can send a message to the
immediate neighbors and find the distance between itself and these
neighbors.
Below fig shows the initial tables for each node. The distance for any entry
that is not a neighbor is marked as infinite (unreachable).
Sharing :
The whole idea of distance vector routing is the sharing of information
between neighbors.
Although node A does not know about node E, node C does.
So if node C shares its routing table with A, node A can also know how to
reach node E. On the other hand, node C does not know how to reach node
D, but node A does. If node A shares its routing table with node C, node C
also knows how to reach node D.
In other words, nodes A and C, as immediate neighbors, can improve their
routing tables if they help each other.
NOTE: In distance vector routing, each node shares its routing table with its
immediate neighbors periodically and when there is a change.
Updating:
When a node receives a two-column table from a neighbor, it needs to update its routing
table.
1. The receiving node needs to add the cost between itself and the sending node to each value
in the second column. (x+y)
2. If the receiving node uses information from any row, the sending node is the next node in
the route.
3. The receiving node needs to compare each row of its old table with the corresponding row
of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses the row with the
smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new row.
For example, suppose node C has previously advertised a route to node X with
distance 3.
Suppose that now there is no path between C and X; node C now advertises this
route with a distance of infinity.
Node A must not ignore this value even though its old entry is smaller: the old
route does not exist anymore, and the new route has a distance of infinity.
[Figure: Updating in distance vector routing]
[Figure: Final diagram of the routing tables]
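A minimal Python sketch of these update rules; the node names, link costs and the (cost, next hop) table layout are assumptions for illustration, not the values from the figure:

INF = float("inf")   # "infinite" distance = unreachable

def dv_update(my_table, neighbor, neighbor_table, link_cost):
    # Apply a neighbor's advertised two-column table to this node's table.
    updated = dict(my_table)
    for dest, (adv_cost, _) in neighbor_table.items():
        new_cost = adv_cost + link_cost                     # rule 1: add the cost to the sender
        old_cost, old_next = updated.get(dest, (INF, None))
        if old_next == neighbor:
            # rule 3b: same next hop -> take the new row, even if it is worse
            # (this is how the "distance of infinity" advertisement is accepted).
            updated[dest] = (new_cost, neighbor)
        elif new_cost < old_cost:
            # rule 3a: different next hop -> keep the smaller cost (ties keep the old row);
            # rule 2: the sender becomes the next node on the route.
            updated[dest] = (new_cost, neighbor)
    return updated

# A learns a route to E through C: cost to C is 2, C advertises E at cost 4 -> 6 via C.
a_table = {"A": (0, "A"), "C": (2, "C"), "E": (INF, None)}
c_table = {"C": (0, "C"), "E": (4, "E")}
print(dv_update(a_table, "C", c_table, link_cost=2))
# {'A': (0, 'A'), 'C': (2, 'C'), 'E': (6, 'C')}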
CONGESTION CONTROL ALGORITHMS:
Too many packets present in (a part of) the network causes packet delay
and loss that
degrades performance. This situation is called congestion.
The network and transport layers share the responsibility for handling
congestion.
Since congestion occurs within the network, it is the network layer that
directly experiences it and must ultimately determine what to do with the
excess packets.
However, the most effective way to control congestion is to reduce the load
that the transport layer is placing on the network.
This requires the network and transport layers to work together.
When too much traffic is offered, congestion sets in and performance
degrades sharply.
Token Bucket Algorithm:
1. In contrast to the Leaky Bucket (LB) algorithm, the Token Bucket (TB) Algorithm allows the output rate
to vary, depending on the size of the burst.
2. In the TB algorithm, the bucket holds tokens. To transmit a packet, the
host must capture
and destroy one token.
3. Tokens are generated by a clock at the rate of one token every t sec.
4. Idle hosts can capture and save up tokens (up to the max. size of the
bucket) in order to
send larger bursts later.
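A minimal Python sketch of this token bucket behaviour; the rate and bucket size are arbitrary example values, and one token is spent per packet exactly as described above:

import time

class TokenBucket:
    def __init__(self, rate_per_sec, bucket_size):
        self.rate = rate_per_sec        # tokens generated per second by the clock
        self.capacity = bucket_size     # an idle host can save up at most this many tokens
        self.tokens = bucket_size
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_send(self):
        # To transmit a packet the host must capture and destroy one token.
        self._refill()
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # no token yet: the packet must wait

bucket = TokenBucket(rate_per_sec=2, bucket_size=5)   # burst of up to 5 packets, then 2/sec
sent = sum(bucket.try_send() for _ in range(8))
print(sent, "packets sent immediately; the rest must wait for new tokens")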
Congestion Control techniques in Computer Networks:
Congestion control refers to the techniques used to control or prevent
congestion. Congestion control techniques can be broadly classified into
two categories: open loop congestion control and closed loop congestion control.
Open Loop Congestion Control
Open loop congestion control policies are applied to prevent congestion before it happens. The
congestion control is handled either by the source or the destination.
1. Retransmission Policy:
It is the policy in which retransmission of lost or corrupted packets is taken care of. If the sender feels that
a sent packet is lost or corrupted, the packet needs to be retransmitted. This retransmission may
increase the congestion in the network.
Therefore, retransmission timers must be designed so that they prevent congestion and
still optimize efficiency.
2. Window Policy:
The type of window at the sender’s side may also affect the congestion. Several packets in the
Go-back-n window are re-sent, although some packets may be received successfully at the
receiver side. This duplication may increase the congestion in the network and make it worse.
Therefore, the Selective Repeat window should be adopted, as it resends only the specific packet that may
have been lost.
3. Discarding Policy:
A good discarding policy adopted by the routers is one in which the routers prevent congestion
by partially discarding corrupted or less sensitive packets, while still being
able to maintain the quality of the message.
In case of audio file transmission, routers can discard less sensitive packets to prevent
congestion and also maintain the quality of the audio file.
4. Acknowledgment Policy:
Since acknowledgements are also part of the load in the network, the acknowledgment
policy imposed by the receiver may also affect congestion. Several approaches can be used to
prevent congestion related to acknowledgment.
The receiver should send acknowledgement for N packets rather than sending
acknowledgement for a single packet. The receiver should send an acknowledgment only if it
has to send a packet or a timer expires.
5. Admission Policy:
In the admission policy, a mechanism should be used to prevent congestion. Switches in a flow
should first check the resource requirement of a network flow before transmitting it further. If
there is a chance of congestion, or there is already congestion in the network, the router should deny
establishing a virtual-circuit connection to prevent further congestion.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion after it happens.
1. Backpressure:
In the above diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may
get congested because its output data flow slows down. Similarly, the 1st node may get congested and inform
the source to slow down.
2. Choke Packet:
A choke packet is a separate packet sent by a congested node directly to the source to inform it about the congestion.
3. Implicit Signaling:
In implicit signaling, there is no communication between the congested nodes and the source. The source
guesses that there is congestion in the network. For example, when the sender sends several packets and there is
no acknowledgment for a while, one assumption is that the network is congested.
4. Explicit Signaling:
In explicit signaling, if a node experiences congestion it can explicitly send a packet to the source or
destination to inform it about the congestion. The difference between the choke packet technique and explicit
signaling is that in explicit signaling the signal is included in the packets that carry data, rather than being sent
in a separate packet as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
Forward Signaling: In forward signaling, a signal is sent in the direction of the congestion. The destination
is warned about congestion. The receiver in this case adopts policies to prevent further congestion.
Backward Signaling: In backward signaling, a signal is sent in the opposite direction of the congestion.
The source is warned about congestion and it needs to slow down.