Computer Network QB - 2
ARP
Address Resolution Protocol (ARP) is a communication protocol used for discovering the physical address associated
with a given network address. ARP is a network-layer-to-data-link-layer mapping process, used to discover
the MAC address for a given Internet Protocol address. In order to send data to a destination,
having the IP address is necessary but not sufficient; we also need the physical address of the destination machine.
ARP is used to obtain the physical address (MAC address) of the destination machine.
Before sending an IP packet, the MAC address of the destination must be known. If it is not, the sender broadcasts
an ARP-discovery packet requesting the MAC address of the intended destination. Since the ARP-discovery is
broadcast, every host on that network receives the message, but the packet is discarded by every host
except the intended receiver, whose IP address matches. That receiver then sends a unicast packet with its
MAC address (ARP-reply) to the sender of the ARP-discovery packet. After the original sender receives the ARP-
reply, it updates its ARP cache and starts sending unicast messages to the destination.
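The request/reply/cache flow above can be sketched as a toy simulation (hosts, IP addresses, and MAC addresses are made-up values; this models the logic, not real ARP frames):

```python
class Host:
    def __init__(self, ip, mac):
        self.ip = ip
        self.mac = mac
        self.arp_cache = {}                      # IP -> MAC mappings learned so far

def arp_resolve(sender, target_ip, network):
    """Return the MAC for target_ip, broadcasting an ARP-discovery if needed."""
    if target_ip in sender.arp_cache:            # cache hit: no broadcast needed
        return sender.arp_cache[target_ip]
    for host in network:                         # broadcast: every host sees it
        if host.ip == target_ip:                 # all others silently discard it
            sender.arp_cache[target_ip] = host.mac   # ARP-reply updates the cache
            return host.mac
    return None                                  # no host with that IP answered

network = [Host("10.0.0.1", "aa:bb:cc:00:00:01"),
           Host("10.0.0.2", "aa:bb:cc:00:00:02")]
a = network[0]
print(arp_resolve(a, "10.0.0.2", network))   # resolved via broadcast
print(a.arp_cache)                           # subsequent sends use the cache
```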
RARP
Reverse ARP (RARP) is a networking protocol used by a client machine in a local area network to request its Internet
Protocol (IPv4) address from the gateway router's ARP table. The network administrator creates a table in the
gateway router that maps each MAC address to its corresponding IP address. When a new machine is
set up, or a machine that has no memory to store its IP address needs an IP address for its own use, the
machine sends a RARP broadcast packet containing its own MAC address in both the sender and receiver
hardware address fields.
A special host configured inside the local area network, called the RARP server, is responsible for replying to such
broadcast packets. The RARP server attempts to find a matching entry in the IP-to-MAC address mapping
table. If an entry matches, the RARP server sends a response packet containing the IP address to the requesting
device.
LAN technologies like Ethernet, Ethernet II, Token Ring and Fiber Distributed Data Interface (FDDI) support
the Address Resolution Protocol.
RARP is no longer used in today's networks, because more capable protocols such as BOOTP
(Bootstrap Protocol) and DHCP (Dynamic Host Configuration Protocol) have replaced it.
1) Retransmission Policy
The sender retransmits a packet if it believes that the packet it sent is lost or corrupted. However, retransmission
in general may increase congestion in the network, so a good retransmission policy is needed
to prevent congestion. The retransmission policy and the retransmission timers must be designed to optimize
efficiency and at the same time prevent congestion.
2) Window Policy
To implement the window policy, the selective reject window method is used for congestion control. Selective reject
is preferred over Go-Back-N because in Go-Back-N, when the timer for a packet expires,
several packets are resent even though some may have arrived safely at the receiver; this duplication can
make congestion worse. Selective reject resends only the specific lost or damaged packets.
3) Acknowledgement Policy
The acknowledgement policy imposed by the receiver may also affect congestion. If the receiver does not
acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
Acknowledgements also add to the traffic load on the network, so sending fewer acknowledgements
reduces the load. Several approaches can be used to implement this:
A receiver may send an acknowledgement only if it has a packet to be sent.
A receiver may send an acknowledgement when a timer expires.
A receiver may decide to acknowledge only N packets at a time.
4) Discarding Policy
A router may discard less sensitive packets when congestion is likely to happen. Such a discarding policy may
prevent congestion and at the same time may not harm the integrity of the transmission.
5) Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual circuit
networks. Switches first check the resource requirements of a flow before admitting it to the network.
A router can deny establishing a virtual circuit connection if there is congestion in the network or if there is a
possibility of future congestion.
List four domain types.
Domain Name System (DNS) Domain
Active Directory Domain
Internet Domain
Broadcast Domain
What do SMTP and FTP stand for?
SMTP: SMTP stands for Simple Mail Transfer Protocol. It is a protocol used for sending email
messages over the Internet. SMTP is primarily responsible for the transmission of outgoing email messages
from a sender's email client or server to the recipient's email server. It's an essential component of email
communication and is used to route email messages through the Internet to their final destination.
FTP: FTP stands for File Transfer Protocol. It is a standard network protocol used for transferring files between
a client computer and a server on a network, typically the Internet. FTP allows users to upload (send) and
download (retrieve) files and directories from a remote server. It's widely used for tasks like uploading website
files to a web server, sharing files, and managing files on remote servers.
What do you mean by ICMP and IGMP?
ICMP: ICMP stands for Internet Control Message Protocol. It is an integral part of the Internet Protocol suite
(specifically, IP) and is primarily used for sending error messages and operational information about network
conditions. ICMP messages are typically generated by network devices, such as routers and switches, to
communicate important information to other devices on the network.
IGMP: IGMP stands for Internet Group Management Protocol. It is a network-layer protocol used by hosts
and adjacent routers in an IP network to manage and communicate information about multicast group
memberships. Multicast is a communication model where data is sent from one source to multiple recipients
who have expressed interest in receiving the data. IGMP helps routers and switches determine which hosts are
interested in receiving multicast traffic on a specific network segment.
Which is better: link state routing or distance vector routing? Justify with reason
In practice, many networks use a combination of routing protocols. For example, a large network may use
link-state routing within its core for efficiency and fast convergence while using distance-vector protocols at
the edge or in simpler segments for ease of management. Ultimately, the choice of routing protocol should
align with the network's requirements and constraints.
Use Link-State Routing When:
The network is large and complex.
Fast convergence is essential.
There are ample hardware resources available.
Fine-grained control over routing is required.
Use Distance-Vector Routing When:
The network is relatively small and simple.
Ease of configuration and management are important.
Hardware resources are limited.
Suboptimal routing and slower convergence are acceptable trade-offs.
List various open loop congestion control policies.
The open loop congestion control policies, applied to prevent congestion before it happens, are:
1) Retransmission Policy
2) Window Policy
3) Acknowledgement Policy
4) Discarding Policy
5) Admission Policy
Each of these policies is explained in detail above.
Explain Three Way Handshake technique in TCP.
SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment's
sequence number to a random value A.
SYN-ACK: In response, the server replies with a SYN-ACK. The acknowledgement number is set to one
more than the received sequence number (A + 1), and the sequence number the server chooses for the
packet is another random number, B.
ACK: Finally, the client sends an ACK back to the server. The sequence number is set to the received
acknowledgement value, i.e. A + 1, and the acknowledgement number is set to one more than the received
sequence number, i.e. B + 1.
At this point, both the client and server have received an acknowledgment of the connection. Steps 1, 2
establish the connection parameter (sequence number) for one direction and it is acknowledged. Steps 2, 3
establish the connection parameter (sequence number) for the other direction and it is acknowledged. With
these, full-duplex communication is established.
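The sequence-number bookkeeping can be sketched as follows (a toy model; A and B stand for the random initial sequence numbers in the text):

```python
import random

def three_way_handshake(seed=0):
    """Return the three segments of the handshake as simple dicts."""
    rng = random.Random(seed)
    A = rng.randrange(2**32)             # client's random initial sequence number
    B = rng.randrange(2**32)             # server's random initial sequence number
    syn     = {"flag": "SYN",     "seq": A}
    syn_ack = {"flag": "SYN-ACK", "seq": B, "ack": A + 1}      # acknowledges A+1
    ack     = {"flag": "ACK",     "seq": A + 1, "ack": B + 1}  # acknowledges B+1
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
print(syn_ack["ack"] - syn["seq"])   # 1: server expects the next client byte
print(ack["ack"] - syn_ack["seq"])   # 1: client expects the next server byte
```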
What is traffic shaping? Explain leaky bucket algorithm and compare it with token bucket algorithm.
Traffic shaping (also referred to as packet shaping) is the technique of delaying and restricting certain packets
traveling through a network to increase the performance of packets that have been given priority.
• Classes are defined to separate packets into groups so that each group can be shaped separately,
allowing some classes to pass through the network more freely than others. Traffic shapers are usually placed at
the boundaries of a network to shape the traffic entering or leaving it.
• Traffic shaping is a mechanism to control the amount and rate of traffic sent to the network.
The two traffic shaping techniques are the Leaky Bucket Algorithm and the Token Bucket Algorithm.
i. Leaky Bucket Algorithm
• A leaky bucket is a bucket with a hole at the bottom. Water flows out of the bucket at a constant rate,
independent of the rate at which water enters the bucket. If the bucket is full, any additional water entering
it is thrown out.
• The same technique is applied to control congestion in network traffic. Every host in the network has a
buffer with a finite queue length.
• Packets arriving when the buffer is full are thrown away. The buffer may drain onto the subnet either by
some number of packets per unit time, or by some total number of bytes per unit time.
• A FIFO queue is used for holding the packets.
• If the arriving packets are of fixed size, then the process removes a fixed number of packets from the queue
at each tick of the clock.
• If the arriving packets are of different sizes, then the fixed output rate is not based on the number of
departing packets; instead, it is based on the number of departing bytes or bits.
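The question also asks for a comparison with the token bucket algorithm: a leaky bucket forces a constant output rate and discards overflow, while a token bucket saves up unused capacity as tokens and therefore permits bursts. The contrast can be sketched as a toy simulation (fixed-size packets, one packet per unit; the parameter values are made up):

```python
def leaky_bucket(arrivals, capacity, rate):
    """Leaky bucket: a queue of at most `capacity` packets drains at a
    constant `rate` per tick, regardless of how bursty the input is."""
    queue, out, dropped = 0, [], 0
    for burst in arrivals:                      # packets arriving in each tick
        admitted = min(burst, capacity - queue)
        dropped += burst - admitted             # bucket full: excess discarded
        queue += admitted
        sent = min(queue, rate)                 # constant drain rate
        queue -= sent
        out.append(sent)
    return out, dropped

def token_bucket(arrivals, capacity, rate):
    """Token bucket: tokens accumulate at `rate` per tick up to `capacity`,
    so saved-up tokens let a burst be sent at once."""
    tokens, queue, out = capacity, 0, []
    for burst in arrivals:
        tokens = min(capacity, tokens + rate)   # refill, capped at capacity
        queue += burst
        sent = min(queue, tokens)               # bursts allowed up to tokens
        tokens -= sent
        queue -= sent
        out.append(sent)
    return out

arrivals = [5, 0, 0, 0]                              # one burst of 5 packets
print(leaky_bucket(arrivals, capacity=10, rate=1)[0])  # smoothed: [1, 1, 1, 1]
print(token_bucket(arrivals, capacity=4, rate=1))      # bursty:   [4, 1, 0, 0]
```

The outputs show the key difference: the leaky bucket emits the burst at a rigid one packet per tick, while the token bucket spends its four saved tokens immediately and then falls back to the refill rate.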
Write a program for client server application using socket programming (TCP).
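One possible answer, sketched as a single Python script using the standard `socket` module (in practice the client and server would be two separate programs; here the server runs in a background thread so both sides fit in one script, and the port and message are arbitrary):

```python
import socket
import threading

def run_server(sock):
    """Accept one connection and echo back whatever the client sends."""
    conn, addr = sock.accept()           # block until a client connects
    with conn:
        data = conn.recv(1024)           # read the client's request
        conn.sendall(data)               # echo it back unchanged

# Server side: SOCKET, BIND, LISTEN on an ephemeral localhost port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_server, args=(server,))
t.start()

# Client side: CONNECT, SEND the request, RECEIVE the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"Hello, TCP!")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())                    # the echoed message
```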
Explain distance vector routing. What are its limitations and how can they be overcome?
Distance vector routing is so named because it involves two factors: the distance, and the vector, i.e. the
direction to take to get there.
• Routing information is exchanged only between directly connected neighbours.
• The protocol is simple, requires little management, and is efficient for small networks, but it has poor
convergence properties.
• The main problem with distance vector routing is its slowness in converging to the correct answer, due to a
problem called the count-to-infinity problem.
• Another problem is that this algorithm does not take the line bandwidth into consideration when choosing
routes.
• Distance vector routing works properly in theory, but in practice it has a serious problem: count to
infinity. Although we get a correct answer, we get it slowly.
• Consider a topology with four nodes connected in a line, where each link has unit cost. When
the link between the 3rd and the 4th node breaks, the algorithm tries to count to infinity. When the link
breaks, node 3 decides that the shortest path to node 4 is through node 2 and sends the data to node 2.
• When node 2 receives the data, it computes the shortest path to node 4 and finds it through node 1. Hence it sends
data to both node 3 and node 1.
• Thus all the nodes believe that a neighbouring node has the shortest possible path and update their
routing tables accordingly. The counting would not stop until the router runs out of memory.
• The problem of count to infinity can be avoided by a few mechanisms, namely:
o Defining a maximum count: the Routing Information Protocol defines a maximum count of 15 hops, so the
count to infinity stops at the 16th iteration. It also means RIP cannot support networks with destinations more
than 15 hops away.
o Split horizon: a router does not send routing information back along the path on which the
packet travelled.
o Poisoned reverse: an improvement over split horizon in which the router does send the route back, but
advertises the cost of the link as infinity.
o Hold-down timers: a timer started when a route is reported as unreachable or its metric worsens; during
the hold-down period, further updates for that route are ignored, giving the network time to converge.
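The table-update step at the heart of distance vector routing can be sketched as follows (a simplified Bellman-Ford relaxation, with RIP's maximum count of 16 standing in for infinity; node names and costs are made up):

```python
INFINITY = 16            # RIP-style "infinity": counts are capped at 16

def dv_update(own_table, neighbor, neighbor_table, link_cost):
    """Apply one neighbor's advertised distance vector to our own table.
    own_table maps destination -> (cost, next_hop)."""
    changed = False
    for dest, cost in neighbor_table.items():
        new_cost = min(link_cost + cost, INFINITY)        # cap at "infinity"
        old_cost, old_hop = own_table.get(dest, (INFINITY, None))
        # Adopt the route if it is cheaper, or if our current route already
        # goes through this neighbor (its advertisement is authoritative).
        if new_cost < old_cost or old_hop == neighbor:
            own_table[dest] = (new_cost, neighbor)
            changed = True
    return changed

# Router A is directly connected to B (cost 1); B advertises C at cost 1.
table_a = {"B": (1, "B")}
dv_update(table_a, "B", {"C": 1, "B": 0}, link_cost=1)
print(table_a["C"])      # C becomes reachable via B at cost 2
```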
What do you mean by congestion control? Explain various closed loop congestion control policies.
Closed Loop Congestion Control
Closed loop congestion control mechanisms try to remove the congestion after it happens. The various methods
used for closed loop congestion control are:
i. Backpressure
Backpressure is a node-to-node congestion control that starts with a node and propagates, in the opposite
direction of data flow.
The backpressure technique can be applied only to virtual circuit networks, where each node
knows the upstream node from which a data flow is coming. In this method of congestion control, the congested
node stops receiving data from its immediate upstream node or nodes. This may cause the upstream node or
nodes to become congested, and they, in turn, reject data from their upstream nodes. As shown in the figure,
node 3 is congested; it stops receiving packets and informs its upstream node 2 to slow down. Node 2 may in
turn become congested and inform node 1 to slow down. Node 1 may then become congested and inform the
source node to slow down. In this way the congestion is alleviated: the pressure on node 3 is moved
backward to the source to remove the congestion.
ii. Choke Packet
In this method of congestion control, the congested router or node sends a special type of packet, called a choke
packet, to the source to inform it about the congestion. Here, the congested node does not inform its upstream
node about the congestion as in the backpressure method; instead, it sends a warning
directly to the source station, i.e. the intermediate nodes through which the packet has travelled are not warned.
iii. Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes and the source.
The source guesses that there is congestion somewhere in the network when it does not receive any
acknowledgment; the delay in receiving an acknowledgment is interpreted as congestion in the
network. On sensing this congestion, the source slows down. This type of congestion control policy is used by
TCP.
iv. Explicit Signaling
In this method, the congested nodes explicitly send a signal to the source or destination to inform them about the
congestion. Explicit signaling is different from the choke packet method: in the choke packet method, a separate
packet is used for this purpose, whereas in explicit signaling the signal is included in the packets that
carry data. Explicit signaling can occur in either the forward direction or the backward direction. In backward
signaling, a bit is set in a packet moving in the direction opposite to the congestion; this bit warns the source
about the congestion and tells it to slow down. In forward signaling, a bit is set in a packet moving
in the direction of the congestion; this bit warns the destination, which can then use policies such as slowing
down the acknowledgements to relieve the congestion.
Connection Oriented Service Primitives
The primitives for a connection-oriented service are:
LISTEN: When a server is ready to accept an incoming connection it executes the LISTEN primitive. It
blocks waiting for an incoming connection.
CONNECT: The client executes CONNECT to establish a connection with the server; a response is awaited.
RECEIVE: The RECEIVE call then blocks the server until a request arrives.
SEND: The client executes the SEND primitive to transmit its request, followed by RECEIVE to get the reply.
DISCONNECT: This primitive is used for terminating the connection; after it, no further messages can be
sent. When the client sends a DISCONNECT packet, the server replies with its own DISCONNECT packet to
acknowledge the client. When the server's packet is received by the client, the connection is terminated.
What are Berkeley Socket Primitives?
Berkeley sockets is an application programming interface (API) for Internet sockets and UNIX domain
sockets.
It is used for inter-process communication (IPC).
It is commonly implemented as a library of linkable modules.
It originated with the 4.2BSD UNIX released in 1983.
Socket Programming:
I) Server side:
At startup the server executes the SOCKET, BIND and LISTEN primitives.
The LISTEN primitive allocates a queue for multiple simultaneous clients.
The server then uses ACCEPT to suspend itself until a request arrives.
When a client request arrives, ACCEPT returns.
A new socket is started (in a thread or process) with the same properties as the original; it handles the
request while the server goes on waiting on the original socket.
If a new request arrives while a thread is being spawned for this one, it is queued.
If the queue is full, the request is refused.
TCP Timers:
In the TCP (Transmission Control Protocol), timers play a crucial role in managing various aspects of
communication. These timers help ensure reliable data transmission and timely recovery from network
issues.
Common TCP timers include:
Retransmission Timer: Used to retransmit data if an acknowledgment is not received within a certain time
frame, helping to recover from packet loss or network congestion.
Round-Trip Time (RTT) Estimation Timer: Used to calculate the estimated round-trip time between the
sender and receiver, influencing the retransmission timeout value.
Keep-Alive Timer: Ensures that an idle TCP connection is maintained by periodically sending small keep-
alive packets.
TIME_WAIT Timer: Controls the time a connection remains in the TIME_WAIT state after it has been
closed to prevent delayed packets from a previous connection interfering with a new one.
Properly configured timers are essential for TCP's reliability and robustness in handling various network
conditions.
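The retransmission and RTT-estimation timers interact through the retransmission timeout (RTO) calculation. A minimal sketch of the standard smoothed estimator (following RFC 6298, with its one-second minimum omitted for clarity; the RTT samples are made-up values):

```python
class RttEstimator:
    """Smoothed RTT and RTO per the standard rules: for each sample R,
    RTTVAR = 3/4*RTTVAR + 1/4*|SRTT - R|, SRTT = 7/8*SRTT + 1/8*R,
    and RTO = SRTT + 4*RTTVAR."""
    ALPHA, BETA = 1 / 8, 1 / 4          # standard smoothing gains

    def __init__(self):
        self.srtt = None                # smoothed round-trip time (seconds)
        self.rttvar = None              # round-trip time variation estimate

    def sample(self, r):
        if self.srtt is None:           # first measurement seeds both estimators
            self.srtt, self.rttvar = r, r / 2
        else:                           # variation is updated before the mean
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - r)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r

    @property
    def rto(self):
        # RFC 6298 also applies a 1-second minimum, omitted here for clarity.
        return self.srtt + 4 * self.rttvar

est = RttEstimator()
for r in [0.10, 0.12, 0.30]:            # RTT samples in seconds; the spike in
    est.sample(r)                       # the last sample inflates the RTO
print(round(est.rto, 3))                # ≈ 0.452
```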
TCP connection management involves the setup, maintenance, and termination of TCP connections between
two devices in a network. This process ensures reliable data exchange:
Three-Way Handshake: To establish a connection, a sender initiates a SYN (synchronize) packet, the
receiver responds with a SYN-ACK (synchronize-acknowledgment) packet, and the sender acknowledges
with an ACK packet.
Data Transfer: Once the connection is established, data is exchanged between sender and receiver.
Connection Termination: To gracefully close a connection, TCP employs a four-step process known as the
four-way handshake, involving FIN (finish) and ACK packets.
Connection States: TCP connections go through various states, including CLOSED, LISTEN, SYN-SENT,
SYN-RECEIVED, ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, LAST-ACK, CLOSING,
and TIME-WAIT.
Proper connection management ensures data integrity, order, and reliability during transmission.
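A few of these state transitions can be sketched as a lookup table (simplified to the common client-side path; the full RFC 793 state diagram has more transitions than shown here):

```python
# (current state, event) -> next state; event names are informal labels.
TRANSITIONS = {
    ("CLOSED",      "active_open/send_SYN"):  "SYN-SENT",
    ("SYN-SENT",    "recv_SYN-ACK/send_ACK"): "ESTABLISHED",
    ("ESTABLISHED", "close/send_FIN"):        "FIN-WAIT-1",
    ("FIN-WAIT-1",  "recv_ACK"):              "FIN-WAIT-2",
    ("FIN-WAIT-2",  "recv_FIN/send_ACK"):     "TIME-WAIT",
    ("TIME-WAIT",   "timeout"):               "CLOSED",
}

def run(events, state="CLOSED"):
    """Feed a sequence of events through the transition table."""
    for ev in events:
        state = TRANSITIONS[(state, ev)]
    return state

open_path = ["active_open/send_SYN", "recv_SYN-ACK/send_ACK"]
print(run(open_path))   # ESTABLISHED: the three-way handshake completed
```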
Slotted ALOHA:
Slotted ALOHA adds synchronization by dividing time into discrete slots; a station may begin transmitting
only at the start of a time slot.
In slotted ALOHA, a frame cannot collide partway through a slot: collisions occur only when multiple
stations transmit at the beginning of the same slot.
The probability of a successful transmission in slotted ALOHA is higher than in pure ALOHA because
the vulnerable period is halved and collisions are reduced.
The maximum achievable channel utilization in slotted ALOHA is approximately 36.8% (1/e), about
double that of pure ALOHA. This means that, at best, around 37% of transmission attempts are
successful on average.
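The quoted figures can be checked numerically: slotted ALOHA's throughput is S = G·e^(−G), maximized at G = 1 with S = 1/e ≈ 36.8%, while pure ALOHA's is S = G·e^(−2G), maximized at G = 0.5 with S = 1/(2e) ≈ 18.4%:

```python
import math

def pure_aloha(G):
    """Throughput of pure ALOHA: the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    """Throughput of slotted ALOHA: the vulnerable period is one slot."""
    return G * math.exp(-G)

print(round(pure_aloha(0.5), 3))     # ≈ 0.184, the pure-ALOHA maximum
print(round(slotted_aloha(1.0), 3))  # ≈ 0.368, the slotted-ALOHA maximum
```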
Short note on DHCP.
The Dynamic Host Configuration Protocol (DHCP) is a client/server protocol designed to provide
four pieces of configuration information (the computer's IP address, its subnet mask, the address of a default
router, and the address of a name server) for a diskless computer or a computer that is booted for the first time.
DHCP operation:
The DHCP client and server can either be on the same network or on different networks.
Same Network :
Although the practice is not very common, the administrator may put the client and the server on the same
network.
Client and server on the same network
Different Networks :
As in other application-layer processes, a client can be in one network and the server in another, separated by
several other networks.
The DHCP request is broadcast because the client does not know the IP address of the server.
A broadcast IP datagram cannot pass through any router. A router receiving such a packet discards it. Recall
that an IP address of all 1s is a limited broadcast address.
To solve the problem, there is a need for an intermediary. One of the hosts (or a router that can be configured
to operate at the application layer) can be used as a relay. The host in this case is called a relay agent.
The relay agent knows the unicast address of a DHCP server and listens for broadcast messages on port 67.
When it receives this type of packet, it encapsulates the message in a unicast datagram and sends the request
to the DHCP server. The packet, carrying a unicast destination address, is routed by any router and reaches
the DHCP server.
The DHCP server knows the message comes from a relay agent because one of the fields in the request
message defines the IP address of the relay agent. The relay agent, after receiving the reply, sends it to the
DHCP client.
What is the need of DNS and explain how DNS works?
Need for DNS:
One identifier for a host is its hostname.
Hostnames are mnemonic and are therefore appreciated by humans, such as:
a. www.booksmountain.com
b. www.Facebook.com
c. www.Google.co.in
d. surf.eurecom.fr
Hostnames provide little information about the location within the Internet of the host.
A hostname such as surf.eurecom.fr, which ends with the country code .fr, tells us that the host is in France,
but doesn't say much more.
Furthermore, because hostnames can consist of variable-length alpha-numeric characters, they would be
difficult to process by routers.
For these reasons, hosts are also identified by so-called IP addresses.
An IP address consists of four bytes and has a rigid hierarchical structure.
An IP address looks like 121.7.106.83, where each period separates one of the bytes, expressed in decimal
notation from 0 to 255.
An IP address is hierarchical because as we scan the address from left to right, we obtain more and more
specific information about where the host is located in the Internet. (Like a postal address)
An IP address is included in the header of each IP datagram.
Internet routers use this IP address to route datagram towards its destination.
The channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs and
MANs, and Dynamic Channel Allocation.
1. Static Channel Allocation in LANs and MANs:
It is the classical or traditional approach: a single channel is allocated among multiple competing users using
Frequency Division Multiplexing (FDM). If there are N users, the frequency channel is divided into N equal-
sized portions (bandwidth), each user being assigned one portion. Since each user has a private frequency
band, there is no interference between users. However, it is not efficient to divide the channel into a fixed
number of chunks, as the mean delay grows, which the following analysis shows.
T = 1/(U*C - L)
T(FDM) = 1/(U*(C/N) - L/N) = N/(U*C - L) = N*T
Where,
T = mean time delay,
C = capacity of the channel (bits/sec),
L = arrival rate of frames (frames/sec),
1/U = bits/frame,
N = number of subchannels,
T(FDM) = mean time delay using FDM
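A quick numerical check of these formulas, with assumed example values (a 100 Mbps channel, 10,000-bit frames, 5,000 frames/sec arriving, and 10 subchannels):

```python
C = 100e6          # channel capacity, bits/sec
U = 1 / 10_000     # 1/U = 10,000 bits/frame, so U*C = frames/sec capacity
L = 5_000          # frame arrival rate, frames/sec
N = 10             # number of FDM subchannels

T = 1 / (U * C - L)                   # mean delay on the single shared channel
T_fdm = 1 / (U * (C / N) - L / N)     # each user gets capacity C/N and load L/N

print(T * 1e6)       # ≈ 200 microseconds
print(T_fdm * 1e6)   # ≈ 2000 microseconds: N times worse under FDM
```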