CH 5

Computer network

Q. 1 ) Congestion control is vital in computer networks to prevent network overload, reduce latency, and avoid data packet loss.

Congestion occurs when there is more data being transmitted through a network than it can
handle efficiently, leading to packet loss, increased latency, and decreased network
performance.

The primary goals of congestion control are:

1. Preventing Network Congestion: Congestion control mechanisms aim to prevent congestion in the first place by ensuring that the rate at which data is injected into the network does not exceed the network's capacity to handle it.

2. Fairness: Congestion control seeks to ensure fair allocation of network resources among different users or flows, so that no single user or flow dominates the available bandwidth to the detriment of others.

3. Efficiency: It aims to maximize the utilization of network resources while maintaining good performance, without overwhelming the network with excessive traffic.

4. Stability: Congestion control mechanisms should ensure that the network remains stable even under varying conditions, such as changes in traffic patterns, link failures, or fluctuations in network capacity.

There are various congestion control algorithms and techniques employed at different layers
of the network protocol stack, including:

 Window-based Congestion Control: TCP (Transmission Control Protocol) uses a window-based congestion control mechanism in which the sender adjusts its transmission rate based on feedback received from the receiver and the network.

 Traffic Shaping and Policing: These techniques control the rate of data transmission into the network, preventing bursts of traffic that could lead to congestion.

 Quality of Service (QoS) Management: QoS mechanisms prioritize certain types of traffic over others, ensuring that critical or time-sensitive applications receive preferential treatment to avoid congestion-related performance degradation.

 Explicit Congestion Notification (ECN): ECN allows routers to notify senders of impending congestion before packet loss occurs, enabling the sender to react by reducing its transmission rate.
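The window-based control mentioned above follows an additive-increase/multiplicative-decrease (AIMD) pattern in TCP. A minimal sketch, simplified to ignore slow start and timeout handling:

```python
def aimd_update(cwnd, loss_detected, mss=1):
    """AIMD sketch of a TCP-style congestion window update
    (simplified: no slow start, no timeout handling)."""
    if loss_detected:
        # Multiplicative decrease: halve the window on congestion.
        return max(mss, cwnd / 2)
    # Additive increase: grow by one segment per round trip.
    return cwnd + mss

cwnd = 10.0
cwnd = aimd_update(cwnd, loss_detected=False)  # 11.0
cwnd = aimd_update(cwnd, loss_detected=True)   # 5.5
```

Halving on loss backs off quickly when congestion is signalled, while the slow additive growth probes for spare capacity; this asymmetry is what lets competing flows converge toward a fair share.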

Congestion control and flow control are both mechanisms used in computer
networking, but they serve different purposes and operate at different layers of the
network stack.
1. Congestion Control:

 Congestion control manages the flow of data through a network to prevent congestion, which occurs when the network becomes overloaded with more traffic than it can handle efficiently.

 It is primarily concerned with regulating the rate at which data is transmitted into the network and with ensuring that the network remains stable and operates efficiently under varying conditions.

 Congestion control algorithms dynamically adjust the rate of data transmission based on feedback received from the network, such as packet loss, delay, or explicit congestion signals from routers.

 Congestion control is implemented mainly at the transport layer (e.g., by TCP) in end-hosts (e.g., computers, servers), with support from routers in the network.

2. Flow Control:

 Flow control manages the transmission of data between individual nodes (e.g.,
between a sender and receiver) to ensure that the sender does not overwhelm
the receiver with data faster than it can process.

 It regulates the rate of data transmission at the data link layer or transport layer
to prevent buffer overflow at the receiver and to maintain reliable
communication between sender and receiver.

 Flow control mechanisms include techniques such as sliding window protocols, where the receiver advertises its available buffer space to the sender, allowing the sender to adjust its transmission rate accordingly.

 Flow control is typically implemented in protocols such as TCP (Transmission Control Protocol) at the transport layer, although some data link layer protocols also incorporate flow control mechanisms.
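The sliding-window idea above can be sketched as a simple calculation; the function and variable names here are illustrative, not from any particular TCP implementation:

```python
def sendable_bytes(last_byte_sent, last_byte_acked, advertised_window):
    """Flow control sketch: the sender may keep at most
    `advertised_window` unacknowledged bytes in flight, where the
    window is the buffer space the receiver advertised."""
    in_flight = last_byte_sent - last_byte_acked
    return max(0, advertised_window - in_flight)

# Receiver advertises 4096 bytes of buffer; 1000 bytes are still unacked,
# so the sender may transmit at most 3096 more bytes right now.
print(sendable_bytes(5000, 4000, 4096))  # 3096
```

If the receiver's buffer fills, it advertises a window of 0 and the sender must pause, which is exactly how flow control prevents buffer overflow at the receiver.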

Various congestion prevention techniques are employed in computer networks to manage and alleviate congestion. These techniques aim to prevent network congestion from occurring or to mitigate its effects. Here are some common congestion prevention techniques:

1. Traffic Shaping:

 Traffic shaping regulates the flow of data into the network by smoothing out
bursts of traffic and ensuring a more consistent transmission rate.

 This technique is often used to shape traffic to conform to a desired traffic profile, preventing sudden spikes in traffic that can lead to congestion.

 Traffic shaping can be implemented at routers or switches, where traffic is buffered and scheduled for transmission according to predefined shaping policies.

2. Load Balancing:

 Load balancing distributes network traffic across multiple paths or resources to avoid overloading any single network component.

 By evenly distributing traffic, load balancing helps prevent congestion on individual network links or devices and ensures optimal utilization of network resources.

 Load balancing techniques can be implemented at various layers of the network, including the application layer (e.g., DNS load balancing), the transport layer (e.g., multipath TCP), and the network layer (e.g., Equal-Cost Multi-Path routing).

3. Quality of Service (QoS):

 QoS mechanisms prioritize certain types of traffic over others based on their
importance or requirements.

 By giving preferential treatment to critical or time-sensitive traffic, QoS helps ensure that important applications receive adequate bandwidth and minimal latency, even during periods of congestion.

 QoS mechanisms typically involve classification, marking, policing, and scheduling of network traffic based on predefined service levels or policies.
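One of the simplest QoS scheduling disciplines is strict priority queueing; a toy sketch (queue contents are illustrative placeholders) shows how time-sensitive traffic gets served first:

```python
from collections import deque

def priority_dequeue(queues):
    """Strict-priority scheduling sketch: always serve the highest
    priority non-empty queue (queues[0] is highest priority)."""
    for q in queues:
        if q:
            return q.popleft()
    return None  # nothing to send

high, low = deque(["voice1"]), deque(["bulk1", "bulk2"])
print(priority_dequeue([high, low]))  # 'voice1' served before bulk traffic
print(priority_dequeue([high, low]))  # 'bulk1'
```

Strict priority minimizes latency for the top class but can starve lower classes, which is why production schedulers often use weighted variants instead.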

4. Explicit Congestion Notification (ECN):

 ECN allows routers to notify endpoints of impending congestion before packet loss occurs.

 When a router experiences congestion, it marks packets with an ECN flag instead of dropping them.

 Endpoints can then react to ECN signals by reducing their transmission rates or taking other appropriate actions to alleviate congestion without waiting for packet loss to occur.

5. Packet Dropping Policies:

 Packet dropping policies determine how routers or switches discard packets during periods of congestion.

 Common dropping policies include Random Early Detection (RED), Weighted Random Early Detection (WRED), and Tail Drop.

 These policies aim to selectively drop packets based on factors such as packet arrival rates, queue sizes, and congestion levels, to maintain fairness and prevent congestion collapse.
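The core of RED can be sketched in a few lines: below a minimum queue threshold nothing is dropped, above a maximum threshold everything is dropped, and in between the drop probability rises linearly. The threshold values here are arbitrary examples:

```python
import random

def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Random Early Detection (simplified): drop probability grows
    linearly from 0 at min_th to max_p just below max_th."""
    if avg_queue < min_th:
        return 0.0            # no drops below the minimum threshold
    if avg_queue >= max_th:
        return 1.0            # tail-drop behaviour above the maximum
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def red_should_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Randomly decide whether to drop an arriving packet."""
    return random.random() < red_drop_probability(avg_queue, min_th, max_th, max_p)
```

Dropping a few packets early, at random, nudges different TCP flows to slow down at different times, which avoids the global synchronization that plain Tail Drop causes.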

6. Capacity Planning:

 Capacity planning involves estimating future network traffic demands and provisioning sufficient network resources to accommodate expected growth.

 By regularly monitoring network performance and capacity utilization, network administrators can identify potential congestion hotspots and take proactive measures to upgrade network infrastructure or adjust routing policies to prevent congestion.

Q. 2 ) Differentiate between static routing and dynamic routing. Also explain the working of RIP, and the problems and solutions of RIP, in detail.
Routing Information Protocol (RIP) is a distance vector protocol which is used for updating
the routing tables.

- The routing updates are exchanged between neighbouring routers every 30 seconds with the help of the RIP response message.

- These messages are also known as RIP advertisements.

- These messages are sent by routers or hosts. They contain a list of multiple destinations within an Autonomous System (AS).

- RIP is an interior routing protocol used inside an Autonomous System (AS)

RIP is a very simple intra domain or interior routing protocol which works inside an
Autonomous System (AS).

RIP is one of the most widely used interior routing protocols on the Internet.

It is based on the distance vector routing principle.
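The distance vector principle behind RIP can be sketched as a routing-table update performed on receiving a neighbour's advertisement. The router and network names below are illustrative, and real RIP messages carry more fields than this:

```python
INFINITY = 16  # RIP's "unreachable" metric

def process_advertisement(table, neighbor, neighbor_cost, advertised):
    """Distance-vector update: for each destination the neighbour
    advertises, adopt the route via that neighbour if it is cheaper.
    `table` maps destination -> (metric, next_hop)."""
    for dest, metric in advertised.items():
        new_metric = min(metric + neighbor_cost, INFINITY)
        current_metric = table.get(dest, (INFINITY, None))[0]
        if new_metric < current_metric:
            table[dest] = (new_metric, neighbor)
    return table

table = {"netA": (1, "direct")}
# Neighbour R2 (one hop away) advertises netB at metric 2 and netA at 5.
process_advertisement(table, "R2", 1, {"netB": 2, "netA": 5})
print(table)  # netB is adopted via R2 at metric 3; netA keeps its cheaper route
```

Capping metrics at 16 is what gives RIP its 15-hop limit, and it is also how "counting to infinity" during a routing loop eventually terminates.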

- RIP has many limitations. Some of them are as follows :

1. Width restriction: RIP uses a 4-bit metric to count router hops to the destination. For RIP, infinity is defined as 16, which corresponds to a maximum path length of 15 hops.

2. No direct subnet support: RIP came into existence prior to subnetting and has no direct support for it. It can be used in a subnetted environment only with some restrictions.

3. Bandwidth consumptive: A RIP router broadcasts lists of the networks and subnets it can reach every 30 seconds. This consumes a large amount of bandwidth.

4. Difficult to diagnose faults: Like other distance vector routing protocols, RIP is difficult to debug.

5. Weak security: RIP does not have any security features of its own.

6. Looping problem: Being based on the distance vector principle, RIP faces the looping (routing loop) problem.

Remedies :

- Some of the above mentioned problems are overcome by RIP2, while the looping problem can be overcome by using either a link state routing protocol like OSPF or a path vector routing protocol like BGP.
Q.3 ) Differentiate between connection oriented and connectionless services.
Q.4 ) What is traffic shaping ? Explain Leaky Bucket algorithm for traffic shaping.

Traffic shaping is defined as a mechanism to control the amount and rate of the traffic sent to the network.
The two popularly used traffic shaping techniques are :
1.Leaky bucket
2. Token bucket.

Leaky Bucket algorithm for traffic shaping

The leaky bucket algorithm is used to control congestion in network traffic.

- As the name suggests, its working is similar to a leaky bucket in real life.
- The principle of the leaky bucket algorithm is as follows :
- A leaky bucket is a bucket with a hole at the bottom.
- Water flows out of the bucket at a constant rate (the data rate is constant), independent of the water entering the bucket (incoming data).
- If the bucket is full, any additional water entering the bucket is thrown out (packets are discarded).
- The same technique is applied to control congestion in network traffic.
- Every host in the network has a buffer (equivalent to a bucket) with a finite queue length.
- Packets that arrive when the buffer is full are thrown away.
- The buffer may send some number of packets per unit time onto the subnet.
- The data flow at the input of the bucket is unregulated, but that at the bucket output is regulated.
Algorithm : - The algorithm for variable length packets is as follows :

1. Initialize a counter to a number "n" at the tick of the clock.

2. If "n" is greater than the packet size, then send the packet and decrement the counter by
the packet size.

3. Repeat step 2 until "n" becomes smaller than the packet size.

4. Reset the counter and go back to step 1.
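The four steps above can be sketched directly; the queue holds packet sizes in bytes, and the counter value and packet sizes are illustrative:

```python
from collections import deque

def leaky_bucket_tick(queue, n):
    """One clock tick of the variable-length-packet leaky bucket:
    send packets while the counter n still covers the packet at the
    head of the queue, decrementing n by each packet's size.
    Returns the sizes of the packets sent this tick."""
    sent = []
    while queue and queue[0] <= n:
        size = queue.popleft()
        n -= size
        sent.append(size)
    # The counter is reset to its initial value at the next clock tick.
    return sent

queue = deque([200, 400, 700])
print(leaky_bucket_tick(queue, 1000))  # [200, 400] are sent; 700 must wait
```

With a counter of 1000, the 200- and 400-byte packets fit (leaving n = 400), but the 700-byte packet exceeds the remaining allowance and waits for the next tick, giving the smooth, bounded output rate the bucket analogy describes.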

Q.5 ) Write short note on :RPC .

Remote procedure call (RPC) is when a computer program causes a procedure (subroutine)
to execute in a different address space (commonly on another computer on a shared network),
which is written as if it were a normal (local) procedure call, without the programmer
explicitly writing the details for the remote interaction.

That is, the programmer writes essentially the same code whether the subroutine is local to
the executing program, or remote.

This is a form of client–server interaction (caller is client, executor is server), typically implemented via a request–response message-passing system.

In the object-oriented programming paradigm, RPCs are represented by remote method invocation (RMI). The RPC model implies a level of location transparency, namely that calling procedures are largely the same whether they are local or remote, but usually they are not identical, so local calls can be distinguished from remote calls.

Remote calls are usually orders of magnitude slower and less reliable than local calls, so
distinguishing them is important.
Step 1: Client calls the client stub. This is a local procedure call, and the parameters are pushed onto the stack in the normal way.

Step 2: Client stub encapsulates the parameters into a message, makes a system call, and sends the message. Packing the parameters into a message is called marshaling.

Step 3: The message is sent from client machine to server machine.

Step 4: The received packet by the server is passed to the server stub.

Step 5: Server stub calls the server procedure with the unmarshaled parameters.
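Steps 1–5 can be sketched with a toy stub pair. Here `json` stands in for a real marshaling format, and the "network send" in step 3 is just a local function call; all the names are illustrative:

```python
import json

def server_add(a, b):
    """The remote procedure living on the server."""
    return a + b

def server_stub(message):
    """Steps 4-5: unmarshal the request and call the server procedure."""
    call = json.loads(message)
    result = server_add(*call["args"])
    return json.dumps({"result": result})   # marshal the reply

def client_stub(a, b):
    """Step 2: marshal parameters into a message and 'send' it."""
    message = json.dumps({"proc": "add", "args": [a, b]})
    reply = server_stub(message)            # step 3: stands in for the network
    return json.loads(reply)["result"]      # unmarshal the reply

print(client_stub(2, 3))  # 5 -- to the caller this looks like a local call
```

The caller never sees the marshaling or the transport, which is exactly the location transparency the RPC model promises.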

Q. 6 ) Draw and explain the TCP segment header format in detail. Explain the TCP connection establishment and closing mechanisms.

Every segment begins with a 20-byte fixed format header.

- The fixed header may be followed by header options.

- After the options, if any, up to 65535 - 20 - 20 = 65495 data bytes may follow.

TCP segments without data are used for sending acknowledgements and control messages.

Source port:

- A 16-bit number identifying the application the TCP segment originated from within the
sending host.

- The port numbers are divided into three ranges, well-known ports (0 through 1023),
registered ports (1024 through 49,151) and private ports (49,152 through 65,535).

Port assignments are used by TCP as an interface to the application layer.

Destination port:

- A 16-bit number identifying the application the TCP segment is destined for on the receiving host.

- Destination ports use the same port number assignments as those set aside for source ports.

Sequence number:

- A 32-bit number identifying the position of the first data byte in the segment within the entire byte stream of the TCP connection.
- After reaching 2^32 - 1, this number wraps around to 0.

Acknowledgement number :

- A 32-bit number identifying the next data byte the sender expects from the receiver.

- Therefore, the number will be one greater than the most recently received data byte.

- This field is only used when the ACK control bit is turned on.

Header length or offset :

- A 4-bit field that specifies the total TCP header length in 32-bit words (or in multiples of 4 bytes, if you prefer).

- Without options, a TCP header is always 20 bytes in length. The largest a TCP header may be is 60 bytes.

- This field is required because the size of the options field(s) cannot be determined in advance.

- Note that this field is called "data offset" in the official TCP standard, but "header length" is more commonly used.

Reserved :

- A 6-bit field currently unused and reserved for future use.
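The fixed-header fields described above can be pulled apart with Python's `struct` module. This is only a parsing sketch; real code would also extract the flags and window and validate the checksum:

```python
import struct

def parse_tcp_header(segment):
    """Unpack the leading fields of the 20-byte fixed TCP header:
    ports, sequence and acknowledgement numbers, and the 4-bit
    data offset (header length) field."""
    src, dst, seq, ack, off_flags = struct.unpack("!HHIIH", segment[:14])
    header_len = (off_flags >> 12) * 4   # data offset is in 32-bit words
    return {"src": src, "dst": dst, "seq": seq, "ack": ack,
            "header_len": header_len}

# A hand-built header: src 1234, dst 80, seq 100, ack 200,
# data offset 5 (i.e. 20 bytes, no options), ACK flag set.
hdr = struct.pack("!HHIIHHHH", 1234, 80, 100, 200, (5 << 12) | 0x10,
                  65535, 0, 0)
print(parse_tcp_header(hdr))
```

Note how the data offset shares a 16-bit word with the reserved bits and flags, which is why the parse shifts it out of the top 4 bits.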

TCP connection establishment :-


To make the transport services reliable, TCP hosts must establish a connection-oriented
session with one another.

- Connection establishment is performed by using a three-way handshake mechanism.

- A three-way handshake synchronizes both ends of a connection by allowing both sides to agree upon initial sequence numbers.

- This mechanism also guarantees that both sides are ready to transmit data and know that the other side is ready to transmit as well.

- This is necessary so that packets are not transmitted or re-transmitted during session
establishment or after session termination.

Each host randomly chooses a sequence number used to track bytes within the stream it is sending and receiving.

The three-way handshake then proceeds in the following manner :

The requesting end (HOST A) sends a SYN segment specifying the port number of the server
that the client wants to get connected to, and the client's initial sequence number (x).

- The server (HOST B) responds with its own SYN segment containing the server's initial
sequence number (y).

- The server also acknowledges the client's SYN by acknowledging the client's SYN plus one
(x + 1). A SYN consumes one sequence number.

- The client must acknowledge this SYN from the server by acknowledging the server's SYN
plus one.

(SEQ. = x + 1, ACK = y + 1).

- This is how a TCP connection is established.
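The exchange above can be written out as a small simulation; x and y are the arbitrarily chosen initial sequence numbers from the text:

```python
def three_way_handshake(x, y):
    """Return the three handshake segments as (flags, seq, ack) tuples,
    given the client's ISN x and the server's ISN y."""
    syn = ("SYN", x, None)               # client -> server: request, ISN = x
    syn_ack = ("SYN+ACK", y, x + 1)      # server -> client: its ISN, acks x+1
    ack = ("ACK", x + 1, y + 1)          # client -> server: acks y+1
    return [syn, syn_ack, ack]

for seg in three_way_handshake(x=100, y=300):
    print(seg)
# ('SYN', 100, None)
# ('SYN+ACK', 300, 101)
# ('ACK', 101, 301)
```

Because each SYN consumes one sequence number, both acknowledgements are the peer's ISN plus one, matching (SEQ = x + 1, ACK = y + 1) above.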

TCP Closing Mechanism :-


While it takes three segments to establish a connection, it takes four to terminate a
connection.

Since a TCP connection is full-duplex (that is, data flows in each direction independently of
the other direction), the connection should be terminated in both the directions independently.

The rule is that either side can send a FIN when it has finished sending data (FIN indicates
finished).

When a TCP program on a host receives a FIN, it informs the application that the other end has terminated the data flow.

The receipt of a FIN only means there will be no more data flowing in that direction. A TCP
can still send data after receiving a FIN.

The end that first issues the close (e.g., sends the first FIN) performs the active close and the
other end (that receives this FIN) performs the passive close.

When the server receives the FIN it sends back an ACK of the received sequence number
plus one.

At this point the server's TCP also delivers an end-of-file to the application. The server then closes its connection, and its TCP sends a FIN to the client.

The client's TCP informs the application and sends an ACK to server by incrementing the
received sequence number by one.

Connections are normally initiated by the client, with the first SYN going from the client to
the server.

A client or server can actively close the connection (i.e. send the first FIN).

But in practice generally the client determines when the connection should be terminated,
since client processes are often driven by an interactive user, who enters something like quit
to terminate.

This is how the TCP connection is released.
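The four termination segments described above can be laid out the same way as the handshake; the sequence numbers are illustrative, and each FIN consumes one sequence number just as a SYN does:

```python
def tcp_close(client_seq, server_seq):
    """The four termination segments of a TCP close, as
    (direction, flag, seq, ack) tuples. The client performs the
    active close; the server performs the passive close."""
    fin1 = ("client->server", "FIN", client_seq, None)            # active close
    ack1 = ("server->client", "ACK", server_seq, client_seq + 1)  # acks the FIN
    fin2 = ("server->client", "FIN", server_seq, client_seq + 1)  # passive close
    ack2 = ("client->server", "ACK", client_seq + 1, server_seq + 1)
    return [fin1, ack1, fin2, ack2]

for seg in tcp_close(client_seq=500, server_seq=700):
    print(seg)
```

Each direction is closed independently, which is why termination needs two FIN/ACK pairs (four segments) while establishment needs only three.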
