
Computer Network

Question Bank

Note: Study design issues of each and every layer


Questions marked yellow in Chapter 4 are not included in Unit 2

Chapter 1
1. Explain in detail about the design issues of all the layers in OSI model.
2. Differentiate between OSI and TCP/IP model.
3. Compare between connection oriented and connection less service.
4. What is a topology? Explain in detail the different types of topologies with their advantages and disadvantages.

5.

6.

Chapter2
1. Discuss different types of guided media.

2.

Chapter 3
1. Explain different types of framing techniques.
Character Count :
This method uses a field in the header to record the total number of characters in the frame.
The character count tells the data link layer at the destination how many characters follow, and therefore where the frame ends.
• The disadvantage of this method is that if the character count is corrupted by an error during transmission, the destination loses synchronization.
• The destination is then unable to locate the beginning of the next frame, which is why this method is rarely used.
Character Stuffing :
o Character stuffing, also known as byte stuffing or character-oriented framing, is the same idea as bit stuffing, except that byte stuffing operates on bytes whereas bit stuffing operates on bits.
o In byte stuffing, a special byte known as ESC (escape character), with a predefined pattern, is inserted into the data section of the frame whenever the data contains a character with the same pattern as the flag byte.
• The receiver removes the ESC bytes and keeps only the data. In simple words, character stuffing is the addition of one extra byte whenever an ESC or flag character appears in the text.

Bit Stuffing :
Bit stuffing is also known as bit-oriented framing or the bit-oriented approach.
In bit stuffing, the sender inserts extra bits into the data stream.
Extra bits are added to the transmission unit so that the receiver can separate data from signaling information, and so that unintended control sequences (such as the flag pattern) never appear inside the data.
• It breaks up any bit pattern that could otherwise cause the transmission to go out of synchronization. Bit stuffing is an essential part of the transmission process in network and communication protocols; it is also used in USB.
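The stuffing rule can be sketched in Python. This is a minimal sketch using the HDLC convention (flag byte 01111110; a 0 is stuffed after every run of five consecutive 1s) — the specific flag pattern is an assumption, since the text does not name one:

```python
FLAG = "01111110"  # HDLC-style flag delimiting each frame (assumed convention)

def bit_stuff(data: str) -> str:
    """Insert a 0 after every run of five consecutive 1s in the payload."""
    out, run = [], 0
    for bit in data:
        out.append(bit)
        if bit == "1":
            run += 1
            if run == 5:
                out.append("0")  # the stuffed bit
                run = 0
        else:
            run = 0
    return "".join(out)

def bit_unstuff(stuffed: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(stuffed):
        bit = stuffed[i]
        out.append(bit)
        if bit == "1":
            run += 1
            if run == 5:
                i += 1  # skip the stuffed 0
                run = 0
        else:
            run = 0
        i += 1
    return "".join(out)

data = "0111111011111100"
stuffed = bit_stuff(data)
print(stuffed)  # no run of six 1s survives, so the flag can never appear in data
```

After stuffing, the payload can never contain six consecutive 1s, so the flag pattern is unambiguous and the receiver reverses the process exactly.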
Physical Layer Coding Violations :
This method is used only for networks in which the encoding on the physical medium contains some redundancy, i.e., more than one signal pattern is used to represent a single data value. A pattern that is invalid for data can then be used to mark frame boundaries.

2. Explain CSMA protocols. Explain how collisions are handled in CSMA/CD. (multiple access protocols)
 CSMA stands for Carrier Sense Multiple Access.
 The basic idea behind CSMA/CD is that a station should be able to receive while transmitting, in order to detect a collision. Wireless stations generally cannot do this, so CSMA/CA was designed specifically for wireless networks.
CSMA/CA uses three strategies:
 InterFrame Space (IFS):
 Contention Window:
 Acknowledgments:

1. A starts at t1.
2. C starts at t2.
3. C detects A at t3.
4. A detects C at t4.
5. C's transmission: t3 - t2.
6. A's transmission: t4 - t1.

Types of CSMA Protocols:


1. 1-Persistent: The station senses the shared channel and transmits immediately if the channel is idle. Otherwise it continuously monitors the channel and transmits as soon as it becomes idle. It is an aggressive transmission algorithm.
2. Non-Persistent: The station senses the channel before transmitting; if the channel is idle, it transmits immediately. Otherwise it waits for a random amount of time (it does not monitor continuously), senses again, and transmits when it finds the channel idle.
3. P-Persistent: A combination of the 1-persistent and non-persistent modes. The station monitors the channel as in 1-persistent mode, and when the channel is idle it transmits a frame with probability p. With probability q = 1 − p it defers to the next time slot and repeats the process.
4. O-Persistent: A supervisory node assigns each node a transmission order.

1. Persistent CSMA
2. Non-Persistent CSMA
3. P-Persistent CSMA
4. CSMA/CD
How long will it take a station to realize that a collision has taken place?
1. Let the time for a signal to propagate between the two farthest stations be τ.
2. At time t0, station A starts transmitting.
3. At τ − ε, just before A's signal arrives, B senses the channel idle and begins transmitting.
4. B detects the collision almost instantly and stops.
5. The resulting noise burst does not get back to A until time 2τ.
6. Therefore a station cannot be sure it has seized the channel until 2τ has elapsed without detecting a collision.

3. Explain in detail about stop and wait protocol.(diagrams)


In the one-bit sliding window protocol, the size of the window is 1. The sender transmits a frame and waits for its acknowledgment before transmitting the next frame; thus it uses the stop-and-wait concept. This protocol provides full-duplex communication: the acknowledgment is attached to the next outgoing data frame by piggybacking.

 Stop and wait means that the sender transmits a data frame and then stops and waits until it receives an acknowledgment from the receiver. Stop-and-wait is a flow control protocol, and flow control is one of the services of the data link layer.
 It is a data link layer protocol used for transmitting data over noiseless channels.
 It provides unidirectional data transmission: either sending or receiving of data takes place at a time. It provides a flow control mechanism but no error control mechanism.

 The idea behind the usage of this frame is that when the sender sends the frame then he
waits for the acknowledgment before sending the next frame.
 Primitives of Stop and Wait Protocol
 The primitives of stop and wait protocol are:
 Sender side
o Rule 1: Sender sends one data packet at a time.
o Rule 2: Sender sends the next packet only when it receives the acknowledgment
of the previous packet.
o Therefore, the idea of stop and wait protocol in the sender's side is very simple,
i.e., send one packet at a time, and do not send another packet before receiving
the acknowledgment.

 Receiver side

o Rule 1: Receive and then consume the data packet.


o Rule 2: After the data packet is consumed, the receiver sends an acknowledgment to the sender.
o Therefore, the idea of the stop and wait protocol on the receiver's side is also very simple: consume the packet, and once the packet is consumed, send the acknowledgment. This is known as a flow control mechanism.
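The sender and receiver rules above can be sketched as a single loop. This is a toy model over a perfect channel, matching the noiseless-channel assumption of the protocol:

```python
def stop_and_wait(frames):
    """Toy stop-and-wait sender/receiver over a perfect (noiseless) channel."""
    delivered, acks = [], 0
    for frame in frames:
        # Sender rule 1: send one data packet at a time.
        in_flight = frame
        # Receiver rule 1: receive and then consume the data packet.
        delivered.append(in_flight)
        # Receiver rule 2: after consuming, send the acknowledgment.
        acks += 1
        # Sender rule 2: only now may the next packet be sent (loop repeats).
    return delivered, acks

frames, acks = stop_and_wait(["f0", "f1", "f2"])
print(frames, acks)   # ['f0', 'f1', 'f2'] 3 -> one ACK per frame, strictly alternating
```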
Sliding Window Protocol
The sliding window is a technique for sending multiple frames at a time. It controls the data
packets between the two devices where reliable and gradual delivery of data frames is needed. It
is also used in TCP (Transmission Control Protocol).

In this technique, each frame is assigned a sequence number. Sequence numbers are used to detect missing frames at the receiver end and to avoid accepting duplicate data.

Types of Sliding Window Protocol


Sliding window protocol has two types:

1. Go-Back-N ARQ
2. Selective Repeat ARQ

4. Explain in detail about Go back N protocol.(diagrams)

• The 'N' in Go-Back-N specifies the size of the sender window.

• The number of frames transmitted while the sender waits for an acknowledgment is
also determined by the value of N.

• The sender sends frames in sequential order. The receiver can only buffer one
frame at a time since the size of the receiver window is equal to 1, but the sender
can buffer N frames.

• The Go-Back-N sender uses a retransmission timer to detect the lost segments.

• The Go-Back-N receiver uses cumulative acknowledgments: an ACK for frame n acknowledges all frames up to and including n.

 The sender's window has a size of 3 or Go-Back-3.


 The sender has sent out packets 0, 1, and 2.
 The receiver is now anticipating packet 1 after acknowledging packet 0.
 Packet 1 is lost somewhere in the network.
Packet 1 is lost
 Since it is waiting for the packet with sequence number 1, the receiver discards every packet the sender delivers after packet 1.
 On the sender's side, the retransmission timer eventually expires.
 The sender then goes back and retransmits all packets starting from packet 1, up to its window size N = 3.
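The scenario above can be simulated with a toy Go-Back-N sender. The names and the loss model (packet 1 dropped exactly once) are illustrative assumptions matching the walk-through, not a full implementation:

```python
def go_back_n(n_packets, window, lose_once=(1,)):
    """Toy Go-Back-N: receiver window = 1 with cumulative behaviour; on
    timeout the sender goes back to the first unacknowledged packet."""
    lost = set(lose_once)     # packets that will be dropped exactly once
    base = 0                  # first unacknowledged packet
    expected = 0              # sequence number the receiver expects next
    log = []
    while base < n_packets:
        # send the whole window [base, base + window)
        for seq in range(base, min(base + window, n_packets)):
            if seq in lost:
                lost.discard(seq)             # dropped this one time only
                log.append(f"lost {seq}")
            elif seq == expected:
                expected += 1                 # in order: accept and ACK
                log.append(f"ack {seq}")
            else:
                log.append(f"discard {seq}")  # out of order: ignored
        base = expected       # retransmission timer expires: go back
    return log

print(go_back_n(3, 3))
# ['ack 0', 'lost 1', 'discard 2', 'ack 1', 'ack 2']
```

Packet 2 arrives intact but is discarded because the receiver is still waiting for packet 1 — exactly the "go back" cost the question asks about.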

5. Generate the CRC for a message represented by the polynomial M(x) = x^5 + x^4 + x (110010), given the generator polynomial G(x) = x^3 + x^2 + 1 (1101).
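Question 5 can be worked mechanically with mod-2 (XOR) long division. A short sketch using bit strings for clarity:

```python
def mod2_div(dividend: str, generator: str) -> str:
    """Binary (mod-2) long division; returns the remainder bits."""
    bits = list(dividend)
    for i in range(len(bits) - len(generator) + 1):
        if bits[i] == "1":                    # divisor "goes into" this position
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))  # XOR step
    return "".join(bits[-(len(generator) - 1):])

# Sender: append len(G)-1 = 3 zeros to M = 110010 and divide by G = 1101.
crc = mod2_div("110010" + "000", "1101")
print(crc)                             # 100 -> transmit 110010 + 100 = 110010100

# Receiver: dividing the received codeword by G leaves remainder 000 (no error).
print(mod2_div("110010100", "1101"))   # 000
```

This also answers question 6: the sender appends the remainder, and the receiver accepts the frame exactly when the division leaves an all-zero remainder.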
6. Show how a message is verified on both the sender and receiver side using CRC.

7. Show in detail steps how checksum is calculated on both sender and receiver side.

Why does the data link layer always put the CRC in a trailer rather than in the header? Give the answer in short and simple words.

The data link layer places the CRC (Cyclic Redundancy Check) in a trailer because the CRC is computed over the data as the frame is transmitted: the hardware can calculate it on the fly while the bits stream out, and simply append the result after the data. This way the integrity of the entire frame, including the data, is verified on reception. If the CRC were in the header, the sender would have to process the whole frame to compute the CRC before transmission could even begin, requiring a second pass over the data, which is less efficient.
8.
9.

The Data Link Layer is the second layer of the OSI (Open Systems Interconnection) model and
plays a crucial role in ensuring reliable data communication over a physical network medium.
It primarily deals with the following key duties:

1. Data Framing: One of the primary duties of the Data Link Layer is to divide the bit stream into manageable units known as frames. Frames include the data to be transmitted, control information, and error-checking bits. Framing allows the start and end of each data unit to be identified, aiding proper data transmission.
2. Addressing and Routing: The Data Link Layer assigns unique addresses (such as MAC
addresses in Ethernet) to each device on a network. These addresses are used for local
network communication and help in routing data packets to the correct destination.
The Data Link Layer ensures that data is delivered to the right recipient on a shared
medium.
3. Flow Control: The Data Link Layer manages the flow of data between devices to
prevent congestion and data loss. It employs techniques like buffering and
acknowledgments to ensure that data is transmitted at a rate that the receiving device
can handle. Flow control prevents data overflow and ensures efficient communication.
4. Error Detection and Correction: The Data Link Layer is responsible for detecting
errors that may occur during data transmission due to interference or noise on the
physical medium. It uses techniques like CRC (Cyclic Redundancy Check) to identify
errors in received frames. While it doesn't correct errors directly, it can request
retransmission if errors are detected.
5. Media Access Control (MAC): In shared network environments, the Data Link Layer
manages access to the physical medium. It controls how devices on the same network
contend for transmission rights. This can be done using protocols like CSMA/CD
(Carrier Sense Multiple Access with Collision Detection) in Ethernet.
6. Logical Link Control (LLC): The Data Link Layer can also include a sublayer called LLC,
responsible for providing a link between the Data Link Layer and the Network Layer
(Layer 3). The LLC sublayer manages network protocol-related functions, including
encapsulation and addressing.
7. Duplex Mode Handling: The Data Link Layer handles duplex mode, which determines
whether data can be transmitted in both directions simultaneously (full-duplex) or in
only one direction at a time (half-duplex).
What are the three major duties of the data link layer?
The three main functions of the data link layer are to deal with transmission errors, regulate
the flow of data, and provide a well-defined interface to the network layer.

10. For the pattern 10101001 00111001 00011101, find out whether any transmission errors have occurred using a checksum.
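For question 10, treating the last 8-bit word as the received checksum (an assumption about the question's intent), the receiver adds all three words with end-around carry and complements the result. An all-ones sum (complement zero) means no error was detected:

```python
def ones_complement_sum_8(words):
    """8-bit one's-complement addition: fold any carry back in (end-around)."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFF) + (total >> 8)  # end-around carry
    return total

words = [0b10101001, 0b00111001, 0b00011101]   # data, data, checksum
s = ones_complement_sum_8(words)
complement = (~s) & 0xFF
print(f"{s:08b}", complement)   # 11111111 0 -> no transmission error detected
```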
11. Show how a transmitter sends data and corrects data on the receiver side using
hamming code.
Transmitted word: 011100101010
Received word: 011100101110
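For question 11, assuming even parity with bit positions numbered 1..12 from the left and parity bits at the power-of-two positions 1, 2, 4, and 8 (the numbering convention is an assumption, since the question does not state one), the syndrome points directly at the flipped bit:

```python
def hamming_syndrome(bits: str) -> int:
    """Even-parity syndrome for a codeword numbered 1..n from the left.
    A nonzero syndrome gives the (1-based) position of a single-bit error."""
    n = len(bits)
    syndrome = 0
    p = 1
    while p <= n:
        parity = 0
        for pos in range(1, n + 1):
            if pos & p:                    # positions covered by parity bit p
                parity ^= int(bits[pos - 1])
        if parity:                         # parity check p failed
            syndrome += p
        p <<= 1
    return syndrome

received = "011100101110"
err = hamming_syndrome(received)
print(err)     # 10 -> the 10th bit from the left is wrong

# flip the erroneous bit to recover the transmitted word
corrected = received[:err - 1] + str(1 - int(received[err - 1])) + received[err:]
print(corrected)   # 011100101010
```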

12.
A strong generator polynomial is used to generate checksums in error-detecting codes such as the Cyclic Redundancy Check (CRC) used in computer networks. It is formed by picking a special binary pattern, and the algorithm for computing the checksum with this polynomial is straightforward:
1. Choose Polynomial: Pick a binary pattern that represents the generator polynomial,
with the leftmost bit as '1'.
2. Append Zeroes: Add zeroes at the end of your message.
3. Use Shift Register: Create a shifting window (register) equal to the polynomial's
length.
4. Process the Message: For each bit in your message, shift the register, and if the bit shifted out is 1, XOR the register with the polynomial.
5. Checksum: The value left in the register after processing is your checksum.
6. Attach Checksum: Add the checksum to your message and send it.
7. Receiver Side: At the receiver, use the same polynomial to verify the checksum. If it
matches, the message is likely error-free.
8. Error Correction: If needed, apply error correction techniques.

The strength of the polynomial lies in its specific pattern, which helps detect errors effectively.

Chapter 4
1. Explain with examples the classification of IPv4 addresses.
2. Explain the need of subnet mask in subnetting.
3. What is subnetting? What are the default subnet masks?

4.
5. One of the addresses in a block is 110.23.120.14/20. Find the number of addresses in
the network, the first address, and the last address in the block.
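The block arithmetic in question 5 can be checked with Python's ipaddress module (strict=False accepts a host address in place of the network address):

```python
import ipaddress

net = ipaddress.ip_network("110.23.120.14/20", strict=False)
print(net.num_addresses)       # 4096 = 2^(32-20) addresses in the block
print(net.network_address)     # 110.23.112.0  -> the first address
print(net.broadcast_address)   # 110.23.127.255 -> the last address
```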
6. An organization is granted the block 130.34.12.64/26. The organization needs 4 subnets, each with an equal number of hosts. Design the subnetworks and find the information about each network.
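For question 6, splitting the /26 (64 addresses) into 4 equal subnets means extending the prefix by 2 bits, giving four /28 blocks of 16 addresses each. A quick check with Python's ipaddress module:

```python
import ipaddress

block = ipaddress.ip_network("130.34.12.64/26")    # 64 addresses
subnets = list(block.subnets(prefixlen_diff=2))    # split into 4 equal /28s
for s in subnets:
    print(s, s.num_addresses)
# 130.34.12.64/28  16
# 130.34.12.80/28  16
# 130.34.12.96/28  16
# 130.34.12.112/28 16
```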

7. An organization is granted a block of addresses with the beginning address 14.24.74.0/24.


The organization needs to have 3 subnets as shown below:
one sub block of 120 addresses
one sub block of 60 addresses
one sub block of 10 addresses
Design the subnets.

8. An ISP is granted a block of addresses starting with 190.100.0.0/16. The ISP needs
to distribute these addresses to three groups of customers as follows:

1. The first group has 64 customers; each needs 256 addresses.


2. The second group has 128 customers; each needs 128 addresses.
3. The third group has 128 customers; each needs 64 addresses.
Design the subblocks and give the CIDR notation for each subblock. Find out how many
addresses are still available after these allocations.

9. Assume a company has three offices: Central, East, and West.


o The Central office is connected to the East and West offices via private, point-to-point WAN lines.
o The company is granted a block of 64 addresses with the beginning address 70.12.100.128/26.
o The management has decided to allocate 32 addresses for the Central office and divide the rest of the addresses between the two other offices. Design the network.

10. What is IPv4 Protocol and explain the header with neat diagram.
11. Compare Open Loop Congestion and Closed Loop Congestion.

12. What is ICMP Protocol? explain in detail with header diagram.

13. Explain in detail the working of ARP and RARP with neat diagrams.
14. Differentiate between IPv4 and IPv6.

15. Explain distance vector routing in detail with neat diagrams.


16. Explain in detail about Link state routing.
17. Explain OSPF with an example. Show neat diagrams for finding the shortest path.
18. What is Traffic Shaping.

19. Explain with neat diagrams Leaky bucket and Token bucket algorithms.

20.

21. Calculate the new delay from J with the help of below diagram using distance vector routing.
22.

23.

Chapter 5
1. Differentiate between TCP and UDP.

Basis | Transmission Control Protocol (TCP) | User Datagram Protocol (UDP)

Acknowledgment: An acknowledgment segment is present. | No acknowledgment segment.
Speed: TCP is comparatively slower than UDP. | UDP is faster, simpler, and more efficient than TCP.
Retransmission: Retransmission of lost packets is possible in TCP. | There is no retransmission of lost packets in UDP.
Header Length: TCP has a variable-length (20-60 byte) header. | UDP has a fixed 8-byte header.
Weight: TCP is heavy-weight. | UDP is lightweight.
Handshaking Techniques: Uses handshakes such as SYN, ACK, SYN-ACK. | Connectionless protocol; no handshake.
Broadcasting: TCP doesn't support broadcasting. | UDP supports broadcasting.
Protocols: TCP is used by HTTP, HTTPS, FTP, SMTP, and Telnet. | UDP is used by DNS, DHCP, TFTP, SNMP, RIP, and VoIP.
Stream Type: The TCP connection is a byte stream. | The UDP connection is a message stream.
Overhead: Low, but higher than UDP. | Very low.

TCP
For example, when a user requests a web page on the internet, a server somewhere in the world processes that request and sends an HTML page back to that user. The server uses a protocol called HTTP, and HTTP in turn asks the TCP layer to set up the required connection and deliver the HTML file.
UDP

1. Source Port: Source Port is a 2 Byte long field used to identify the port
number of the source.
2. Destination Port: It is a 2 Byte long field, used to identify the port of the
destined packet.
3. Length: Length is the length of UDP including the header and the data. It is a
16-bits field.
4. Checksum: The checksum is a 2-byte (16-bit) field. It is the 16-bit one's complement of the one's complement sum of the UDP header, the pseudo-header of information from the IP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets.
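The four 16-bit fields above can be packed into the fixed 8-byte header with Python's struct module. The port numbers and payload are illustrative (a query to DNS port 53), and checksum 0 means "not computed", which IPv4 permits:

```python
import struct

src_port, dst_port = 50000, 53
payload = b"example-query"
length = 8 + len(payload)   # the Length field covers the 8-byte header + data
checksum = 0                # 0 = checksum not computed (allowed over IPv4)

# "!HHHH" = four unsigned 16-bit fields in network (big-endian) byte order
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload
print(len(header), len(datagram))   # 8 21 -> the header is always exactly 8 bytes
```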
2. Explain the use of TCP timers in detail.

TCP uses several timers to ensure that excessive delays are not encountered
during communications.
Several of these timers are elegant, handling problems that are not immediately
obvious at first analysis.

TCP implementation uses four timers –

 Retransmission Timer – (TIME OUT TIMER)


1. To retransmit lost segments, TCP uses retransmission timeout (RTO).
2. When TCP sends a segment the timer starts and stops when the
acknowledgment is received.
3. If the timer expires timeout occurs and the segment is retransmitted.
To calculate the retransmission timeout (RTO) we first need to calculate the RTT (round-trip time). Three RTT values are used:
Measured RTT(RTTm) – The measured round-trip time for a segment is the time
required for the segment to reach the destination and be acknowledged, although the
acknowledgement may include other segments.
Smoothed RTT(RTTs) – It is the weighted average of RTTm
Initially -> No value
After the first measurement -> RTTs=RTTm
After each measurement -> RTTs= (1-t)*RTTs + t*RTTm
Note: t=1/8 (default if not given)

Deviated RTT (RTTd) – Most implementations do not use RTTs alone, so the RTT deviation is also calculated in order to find the RTO.

Initially -> No value


After the first measurement -> RTTd=RTTm/2
After each measurement -> RTTd= (1-k)*RTTd + k*(RTTm-RTTs)
Note: k=1/4 (default if not given)
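One update step of these estimators, using the default weights t = 1/8 and k = 1/4 from the text. The final combination RTO = RTTs + 4·RTTd is the standard one from RFC 6298 and is an assumption here, since the text stops at RTTs and RTTd; the deviation update uses the old RTTs and the magnitude of the error, as implementations do:

```python
t, k = 1/8, 1/4

# after the first measurement (RTTm = 10 time units):
rtts, rttd = 10.0, 10.0 / 2                    # RTTs = RTTm, RTTd = RTTm/2

# a second measurement arrives: RTTm = 12
rttm = 12.0
rttd = (1 - k) * rttd + k * abs(rttm - rtts)   # deviation uses the old RTTs
rtts = (1 - t) * rtts + t * rttm               # then smooth the average
rto = rtts + 4 * rttd                          # RFC 6298 combination (assumed)
print(rtts, rttd, rto)                         # 10.25 4.25 27.25
```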

 Persistent Timer –
To deal with a zero-window-size deadlock situation, TCP uses a persistence
timer.
When the sending TCP receives an acknowledgment with a window size of
zero, it starts a persistence timer.
When the persistence timer goes off, the sending TCP sends a special
segment called a probe
. This segment contains only 1 byte of new data.
It has a sequence number, but its sequence number is never acknowledged; it
is even ignored in calculating the sequence number for the rest of the data.
 Keep Alive Timer –
1. A keepalive timer is used to prevent a long idle connection between
two TCPs.
2. Suppose a client opens a TCP connection to a server, transfers some data, and then falls silent, perhaps because the client has crashed. In that case the connection would remain open forever, so a keepalive timer is used.
3. Each time the server hears from a client, it resets this timer.
4. The time-out is usually 2 hours. If the server does not hear from the
client after 2 hours, it sends a probe segment. If there is no response
after 10 probes, each of which is 75 s apart, it assumes that the client is
down and terminates the connection.
 Time Wait Timer – This timer is used during TCP connection termination. It starts after the last ACK (acknowledging the second FIN) is sent, and the connection is fully closed only when the timer expires.

3. Describe in brief the concept of piggybacking.

Piggybacking is a process of attaching the acknowledgment with the data packet to be sent.
Piggybacking concept is explained below:

How Piggybacking is Done?

Suppose there is two-way communication between two devices A and B. When A sends a data frame to B, device B does not send the acknowledgment immediately; instead it waits until it has a data frame of its own to transmit, and attaches the delayed acknowledgment to that frame. This method of attaching a delayed acknowledgment to an outgoing data frame is known as piggybacking.
Why Do We Need Piggybacking?

Protocols such as stop-and-wait and Go-Back-N ARQ, as described so far, provide only a half-duplex style of communication, but real-world situations require full-duplex communication. Piggybacking defines the rules for efficient full-duplex communication. TCP segments are also transmitted in full-duplex mode, so piggybacking is used in TCP transmission as well.

Advantages of Piggybacking

 Efficient use of available channel bandwidth.


 Reduction in usage cost
 Data transfer latency improved

Disadvantages of Piggybacking

 This technique requires additional complexity for its implementation.

4. Illustrate the concept of TCP 3 way handshaking signals with neat diagram.
Transmission Control Protocol (TCP) provides a reliable connection between two devices using the 3-way handshake process.
TCP uses a full-duplex connection in which the two sides synchronize (SYN) and acknowledge (ACK) each other. Three steps are used for both establishing and closing a connection: SYN, SYN-ACK, and ACK.
3-Way Handshake Connection Establishment Process

The following diagram shows how a reliable connection is established using 3-way
handshake. It will support communication between a web browser on the client and
server sides whenever a user navigates the Internet.
Synchronization Sequence Number (SYN) − The client
sends the SYN to the server
 When the client wants to connect to the server, then it sends the message to the server by setting the
SYN flag as 1.
 The message carries some additional information like the sequence number (32-bit random
number).
 The ACK flag is set to 0. The maximum segment size and the window size are also set.
For example, if the window size is 1000 bits and the maximum segment size is 100 bits, then a
maximum of 10 data segments can be transmitted in the connection by dividing (1000/100=10).
Synchronization and Acknowledgement (SYN-ACK) to the
client
 The server acknowledges the client request by setting the ACK flag to 1.
 For example, if the client has sent the SYN with sequence number = 500, then the server will send the ACK using acknowledgment number = 501.
 The server will set the SYN flag to '1' and send it to the client if the server also wants to establish
the connection.
 The sequence number used for SYN will be different from the client's SYN.
 The server also advertises its window size and maximum segment size to the client. And, the
connection is established from the client-side to the server-side.
Acknowledgment (ACK) to the server
 The client sends the acknowledgment (ACK) to the server after receiving the synchronization
(SYN) from the server.
 After getting the (ACK) from the client, the connection is established between the client and the
server.
 Now the data can be transmitted between the client and server sides.
3-Way Handshake Closing Connection Process

To close a 3-way handshake connection,

 First, the client requests the server to terminate the established connection by sending FIN.
 After receiving the client request, the server sends back the FIN and ACK request to the client.
 After receiving the FIN + ACK from the server, the client confirms by sending an ACK to the
server.

5. Selective Repeat

The advantage of Selective Repeat over Go-Back-N in computer network protocols is that it is more precise and efficient:

Selective Repeat: (sliding window)

 If a receiver detects a missing or corrupted packet, it asks for retransmission of that


specific packet only.
 Efficient use of network resources because it doesn't request retransmission of all
subsequent packets.
 It minimizes unnecessary data retransmission, leading to faster data delivery.
In simple terms, Selective Repeat is like requesting exactly what you need, while Go-Back-N
requests more than necessary, making Selective Repeat a more efficient and precise approach for
handling lost or corrupted packets in network communication.
key features include:
 Receiver-based protocol
 Each packet is individually acknowledged by the receiver
 Only lost packets are retransmitted, reducing network congestion
 Maintains a buffer to store out-of-order packets
 Requires more memory and processing power than Go-Back-N
 Provides efficient transmission of packets.

6. Explain in detail about selective Repeat ARQ.

 The Selective Repeat protocol is another sliding window protocol used


for reliable data transfer in computer networks.
 It is a receiver-based protocol that allows the receiver to acknowledge
each packet individually
 The sender sends packets in a window and waits for acknowledgements
for each packet in the window.

 If a packet is lost, the receiver sends a NACK for the lost packet, and the
sender retransmits only that packet.
 The sender also maintains a timer for each packet, and if an
acknowledgement is not received within the timer’s timeout period, the
sender retransmits only that packet.
key features include:
 Receiver-based protocol
 Each packet is individually acknowledged by the receiver
 Only lost packets are retransmitted, reducing network congestion
 Maintains a buffer to store out-of-order packets
 Requires more memory and processing power than Go-Back-N
 Provides efficient transmission of packets.

Selective Repeat protocol:

1. The sender window size of the Selective Repeat protocol is N.
2. The receiver window size of the Selective Repeat protocol is also N.
3. The Selective Repeat protocol is more complex.
4. On the receiver side, frames need to be sorted into order.
5. The type of acknowledgment is individual.
6. The efficiency of both the Selective Repeat protocol and Go-Back-N is N/(1+2a).
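Plugging illustrative numbers into the efficiency formulas, where a = propagation delay / transmission delay (the specific delays below are assumptions for the sake of the example):

```python
t_transmission = 1.0   # ms to clock one frame onto the link (illustrative)
t_propagation = 2.0    # ms one-way propagation delay (illustrative)
a = t_propagation / t_transmission

# stop-and-wait is the N = 1 special case; utilization is capped at 1
stop_and_wait = 1 / (1 + 2 * a)
sliding_window = lambda n: min(1.0, n / (1 + 2 * a))

print(stop_and_wait)        # 0.2 -> the link is idle 80% of the time
print(sliding_window(4))    # 0.8
print(sliding_window(8))    # 1.0 -> the window is large enough to fill the pipe
```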

7. Explain in detail about slowstart algorithm with neat diagrams.


Congestion Control is a mechanism that controls the entry of data packets into
the network, enabling a better use of a shared network infrastructure and
avoiding congestive collapse.
TCP slow start is part of the congestion control algorithms put in place by TCP to help control
the amount of data flowing through to a network.

8. Explain in detail about Fast retransmit and Fast Recovery with neat diagrams.
TCP Slow Start and Congestion Avoidance lower the data throughput drastically when segment loss is detected. Fast Retransmit and Fast Recovery have been designed to speed up the recovery of the connection.
When a packet loss is detected, the TCP sender does four things:

1. Sets ssthresh to 50% of the current cwnd.
2. Reduces the cwnd accordingly.
3. Retransmits the lost packet.
4. Enters the Fast Recovery / Fast Retransmit phase.

Fast Retransmit:

1. Sender transmits packets, and receiver sends ACKs for received packets.
2. If a packet is lost, the receiver detects the gap and sends duplicate ACKs (e.g., if packet 4 is missing, it keeps re-sending the ACK for packet 3).
3. When the sender receives duplicate ACKs, it assumes a packet is lost and retransmits
that packet.
4. This retransmission speeds up recovery by avoiding a timeout, which can be a long
wait.
Fast Recovery:

1. Similar to Fast Retransmit, the sender transmits packets and the receiver sends ACKs.
2. When the sender receives three duplicate ACKs, it knows a packet is lost. Instead of just
retransmitting, it enters Fast Recovery.
3. In Fast Recovery, the sender reduces its sending rate but continues sending new
packets.
4. It keeps track of the congestion window and inflight packets, maintaining a more
efficient use of network resources.
5. Once it receives a non-duplicate ACK (indicating some data has been successfully
received), it exits Fast Recovery and adjusts the congestion window size for more
efficient transmission.

Use Case: Fast Retransmit is suitable for recovering a single lost packet; Fast Recovery is suitable for maintaining network performance during congestion.
9. TCP connection establishment and release

(See answer 4.)

Berkely sockets
Berkeley sockets are part of an application programming interface (API)
that specifies the data structures and function calls that interact with the
operating system's network subsystem.
Transport service primitives

A service is a set of primitives (operations) that a user can invoke to access the service.

The different types of service primitives are explained below –

1. LISTEN : When a server is ready to accept an incoming connection it executes the LISTEN primitive, blocking until a connection request arrives.
2. CONNECT : The client executes CONNECT to establish a connection with the server, then awaits the response.
3. RECEIVE : The server's RECEIVE call blocks until a message arrives.
4. SEND : The client executes the SEND primitive to transmit its request, followed by RECEIVE to get the reply.
5. DISCONNECT : This primitive terminates the connection; after it, no further messages can be sent. When the client sends a DISCONNECT packet, the server sends back its own DISCONNECT packet to acknowledge the client. When the client receives the server's packet, the connection is released.
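These primitives map directly onto the Berkeley sockets API. A minimal Python loopback echo (the port choice and message are illustrative) shows LISTEN, CONNECT, SEND, RECEIVE, and DISCONNECT in action:

```python
import socket
import threading

# LISTEN: the server binds to an address and blocks for incoming connections.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()          # blocks until a CONNECT arrives
    request = conn.recv(1024)          # RECEIVE: block for the client's request
    conn.sendall(b"echo: " + request)  # SEND the reply
    conn.close()                       # DISCONNECT

t = threading.Thread(target=serve)
t.start()

# CONNECT: the client establishes the connection with the server.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")               # SEND the request
reply = client.recv(1024)              # RECEIVE the reply
client.close()                         # DISCONNECT
t.join()
server.close()
print(reply)                           # b'echo: hello'
```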

Connection Oriented Service Primitives

 There are 4 types of primitives for Connection-Oriented Service:

CONNECT – makes a connection
DATA, DATA-ACKNOWLEDGE, EXPEDITED-DATA – send data and information
DISCONNECT – closes the connection
RESET – resets the connection

Connectionless Service Primitives

 There are 2 types of primitives for Connectionless Service:

UNIDATA – sends a packet of data
FACILITY, REPORT – enquire about the performance of the network, e.g. delivery statistics
10.

S.No. | Open-Loop Control System | Closed-Loop Control System

1. It is easier to build. | It is difficult to build.
2. It can perform better if the calibration is properly done. | It can perform better because of the feedback.
3. It is more stable. | It is comparatively less stable.
4. Optimization for the desired output cannot be performed. | Optimization can be done very easily.
5. It does not consist of a feedback mechanism. | A feedback mechanism is present.
6. It requires less maintenance. | Maintenance is difficult.
7. It is less reliable. | It is more reliable.
8. It is comparatively slower. | It is faster.
9. It can be easily installed and is economical. | Complicated installation is required and it is expensive.

Open loop congestion control policies are applied to prevent congestion before it happens. The congestion control is handled either by the source or the destination. The policies are:
 Retransmission Policy
 Window Policy
 Discarding Policy
 Acknowledgment Policy
 Admission Policy

Closed loop congestion control techniques are used to treat or alleviate congestion after it happens. Several techniques are used by different protocols:
 Choke Packet Technique
 Implicit Signaling
 Explicit Signaling (Forward Signaling and Backward Signaling)

Congestion control in datagram subnets:

Some congestion control approaches that can be used in a datagram subnet (and
also in virtual-circuit subnets) are given below.
1. Choke packets
2. Load shedding
3. Jitter control.
Approach-1: Choke Packets :
This congestion control technique assigns a utilization value (between 0 and 1) to each router's output
line.
When utilization crosses a threshold, the line enters a warning state.
Incoming packets are checked, and if a line is in warning, choke packets are sent, varying in severity
based on the threshold.
It can also use queue lengths for congestion detection.
Drawback –
The problem with the choke packet technique is that the action to be taken by the
source host on receiving a choke packet is voluntary and not compulsory.
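The warning-state test can be sketched with an exponentially weighted average of line utilisation, u_new = a·u_old + (1 − a)·f, where f is the instantaneous line usage (0 or 1). The smoothing constant a and the 0.8 threshold below are illustrative assumptions, not standard values.

```python
# Per-output-line utilisation estimate a router might keep for the
# choke-packet scheme. 'a' controls how fast history is forgotten.
def update_utilisation(u_old, f, a=0.9):
    return a * u_old + (1 - a) * f

u, threshold = 0.0, 0.8
for tick in range(1, 31):            # sustained traffic: line busy every tick
    u = update_utilisation(u, 1)
    if u > threshold:
        print(f"tick {tick}: warning state, send choke packets (u = {u:.2f})")
        break
```

With a = 0.9 and a fully busy line, u crosses the 0.8 threshold only after many consecutive busy ticks, so brief bursts do not trigger choke packets.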
Approach-2: Load Shedding :
 When congestion control techniques like admission control, choke packets, and fair queuing
fail to alleviate congestion, load shedding comes into play.
 It involves discarding excess packets to relieve router overload.
 Prioritizing packets based on their importance (e.g., old for file transfer, new for multimedia)
can help decide which packets to drop.
 Cooperation from senders, marking packets with priority, and using header bits is crucial for an
intelligent discard policy.
Approach-3: Jitter control :

Jitter, the variation in packet delay, is critical for real-time audio and video but inconsequential for file
data.
For audio/video, consistent delays (e.g., 24.5 ms to 25.5 ms) are essential.
Chapter 6
1. Explain the need for DNS and functioning of protocol.
DNS Stands for Domain Name System.
DNS is a hierarchical decentralized naming system for computers, services, or any resource
connected to the Internet or a private network.
Need for DNS:
1. One identifier for a host is its hostname.
2. An IP address consists of four bytes and has a rigid hierarchical structure.
3. An IP address is included in the header of each IP datagram.
4. A hostname such as surf.eurecom.fr, which ends with the country code .fr, tells us that the
host is in France, but doesn't say much more.
Functioning of the protocol:

1. User types domain.


2. Browser queries OS.
3. OS contacts resolving server.
4. Resolving server locates root servers.
5. Resolving server queries the top-level domain (TLD) server (e.g., COM).
6. The TLD server refers the query to the authoritative server for 'booksmountain.com'.
7. Authoritative server provides IP.
8. Resolving server caches data.
9. Information returns to browser.
10. User reaches website.
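The chain above starts with a stub-resolver call on the client machine. A minimal sketch using Python's standard library (the commented-out lookup of a real domain needs network access; 'localhost' resolves locally):

```python
import socket

def resolve(hostname):
    """Return the IP addresses the OS's stub resolver finds for hostname."""
    infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    return sorted({sockaddr[0] for _f, _t, _p, _c, sockaddr in infos})

print(resolve("localhost"))        # loopback: answered without leaving the host
# resolve("booksmountain.com")     # a real lookup walks the chain above
```

The OS resolver hides steps 3–9: it consults its cache and, on a miss, forwards the query to the configured resolving server.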
DNS NAMESPACE:

 The DNS name space is the set of all domain names that are registered in the
DNS.
 These domain names are organized into a tree-like structure, with the top of
the tree being the root domain.
 Below the root domain, there are a number of top-level domains, such
as .com, .net, and .org.
 Below the top-level domains, there are second-level domains, and so on.
 Each domain name in the DNS name space corresponds to a set of resource
records, which contain information about that domain name, such as its IP
address, mail servers, and other information.
 The DNS name space is hierarchical, meaning that each domain name can
have subdomains beneath it.
 For example, the domain name "example.com" could have subdomains such
as "www.example.com" and "mail.example.com".
 This allows for a very flexible and scalable naming structure for the Internet.
 The DNS name space is managed by a number of organizations, including the
Internet Corporation for Assigned Names and Numbers (ICANN), which
is responsible for coordinating the allocation of unique domain names and IP
addresses.

DNS record: DNS records (short for "Domain Name System records") are types
of data stored in the DNS database that specify information about a domain,
such as its IP address and the servers that handle its email.
DNS Record Types

There are several different types of DNS records, including −

1. A record (Address Record) − maps a domain or subdomain to an IP address.


2. MX record (Mail Exchange Record) − routes email for a domain to the correct email server.
3. CNAME record (Canonical Name Record) − creates an alias for a domain.
4. TXT record (Text Record) − stores arbitrary text in a domain's DNS record.
5. PTR record (Pointer Record) − maps an IP address to a domain name.
6. NS record (Name Server Record) − specifies the name servers for a domain.
7. SOA record (Start of Authority Record) − specifies the DNS server that is the authority for a
specific domain.
8. SRV record (Service Record) − specifies the hostname and port number for a specific service,
such as a website or email server.
9. AAAA record (Quad-A Record) − maps a domain or subdomain to an IPv6 address.
10. CAA record (Certification Authority Authorization Record) − specifies which certificate
authorities (CAs) are authorized to issue SSL/TLS certificates for a domain.
11. DS record (Delegation Signer Record) − stores a cryptographic hash of a domain's DNSKEY
record, which is used to secure the domain's DNS delegation.
12. DNSKEY record (DNS Key Record) − stores a public key that is used to create a digital signature
for a domain's DNS records.
13. RRSIG record (Resource Record Signature Record) − stores a digital signature for a set of DNS
records.
14. NSEC record (Next Secure Record) − specifies the next DNS record in a domain's DNS zone file,
and also lists the types of records that are present for a domain.
15. NSEC3 record (Next Secure Record version 3) − like NSEC, but uses a hash of the domain name
instead of the plaintext name in order to provide additional security.

TYPES OF NAME SERVER (explain dns)

 DNS recursor - The recursor can be thought of as a librarian who is asked to go find a
particular book somewhere in a library.

The DNS recursor is a server designed to receive queries from client machines through
applications such as web browsers.

Typically the recursor is then responsible for making additional requests in order to satisfy
the client’s DNS query.

 Root nameserver - The root server is the first step in translating (resolving) human
readable host names into IP addresses.

They direct queries to the appropriate Top-Level Domain (TLD) servers.

It can be thought of like an index in a library that points to different racks of books -
typically it serves as a reference to other more specific locations.

 TLD nameserver - The top level domain server (TLD) can be thought of as a specific rack of
books in a library.
This nameserver is the next step in the search for a specific IP address, and it hosts the last
portion of a hostname (In example.com, the TLD server is “com”).

 Authoritative nameserver - This final nameserver can be thought of as a dictionary on a


rack of books, in which a specific name can be translated into its definition.

The authoritative nameserver is the last stop in the nameserver query.

If the authoritative name server has access to the requested record, it will return the IP
address for the requested hostname back to the DNS Recursor (the librarian) that made the
initial request.

 Forwarding DNS Servers: These servers forward DNS queries to other DNS
servers, often provided by ISPs (Internet Service Providers) or companies for
faster resolution or caching purposes.

Each of these servers plays a role in the process of translating domain names to IP
addresses and vice versa. They work together in a hierarchical manner to efficiently
resolve DNS queries across the internet.

What Are the Types of DNS Queries?


There are basically three types of DNS Queries that occur in DNS Lookup. These are stated
below.
 Recursive Query: In this query, the DNS client requires the DNS server to
respond with either the requested resource record or an error message if the
record cannot be found.
 Iterative Query: In this query, the DNS client accepts the best answer the
DNS server has, which may be a referral to another DNS server.
 Non-Recursive Query: This query occurs when a DNS resolver queries a DNS
server for a record the server can answer immediately, either because it is
authoritative for the record or because the record exists in its cache.
What is DNS Caching?
DNS Caching can be simply termed as the process used by DNS Resolvers for storing the
previously resolved information of DNS that contains domain names, and IP Addresses for
some time. The main principle of DNS Caching is to speed up the process of future DNS
lookup and also help in reducing the overall time of DNS Resolution.

2. Explain in detail about HTTP, DHCP, SMTP, FTP with neat diagrams.
APPLICATION LAYERS
HTTP stands for Hyper Text Transfer Protocol, FTP for File Transfer Protocol,
while SMTP stands for Simple Mail Transfer Protocol. All three are used to
transfer information over a computer network and are an integral part of today’s
internet.
We need the three protocols as they all serve different purposes. These are HTTP, FTP,
and SMTP.
1. HTTP is the backbone of the World Wide Web (WWW).

2. FTP is the underlying protocol that is used to, as the name suggests, transfer
files over a communication network.

3. SMTP is what is used by Email servers all over the globe to communicate
with each other

4. Dynamic Host Configuration Protocol (DHCP) is a client/server


protocol that automatically provides an Internet Protocol (IP) host
Parameter | HTTP | FTP | SMTP | DHCP
Port number | 80 | 20 and 21 | 25 | 67 and 68
Type of band transfer | In-band | Out-of-band | In-band | In-band
State | Stateless | Maintains state | - | Stateless
Number of TCP connections | 1 | 2 (Data and Control) | 1 | - (uses UDP)
Type of TCP connection | Both persistent and non-persistent | Persistent for control, non-persistent for data | Persistent | -
Type of protocol | Pull protocol (mainly) | - | Push protocol (primarily) | Push protocol
Type of transfer | Transfers files between web server and client | Transfers files directly between computers | Transfers mail via mail servers | Manages IP allocation
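HTTP's pull-style, stateless request/response can be illustrated with a self-contained exchange: a throwaway local web server and one client GET. The path and payload are made up for the demo.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the web server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):            # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0: OS-assigned free port
threading.Thread(target=server.handle_request).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")               # the client *pulls* the resource
resp = conn.getresponse()
body = resp.read().decode()
print(resp.status, body)                         # 200 hello from the web server
conn.close()
server.server_close()
```

Each request/response pair is independent (statelessness): the server keeps no memory of this client once the exchange completes.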

3.

World wide unique identifiers


1. IP Address (IPv4 and IPv6):
 IPv4 Example: 192.168.1.1
 IPv6 Example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
2. MAC Address:
 Example: 00:1A:2B:3C:4D:5E
3. Domain Name:
 Example: www.example.com
4. URL (Uniform Resource Locator):
 Example: https://www.openai.com/research/
5. Port Number:
 Example: Port 80 for HTTP, Port 21 for FTP
6. IPv6 Interface ID:
 Example (part of an IPv6 address): 2001:0db8:85a3::8a2e:0370:7334
7. ASN (Autonomous System Number):
 Example: AS15169 (Google's Autonomous System)
8. GUID (Globally Unique Identifier):
 Example: {21EC2020-3AEA-1069-A2DD-08002B30309D}

1. Scheme :
https://
The protocol or scheme part of the URL and indicates the set of rules that will decide
the transmission and exchange of data.

2. Subdomain :
https://www.
The subdomain is used to separate different sections of the website as it specifies the
type of resource to be delivered to the client

3. Domain Name :
https://www.example.
Domain name specifies the organization or entity that the URL belongs to

4. Top-level Domain :
https://www.example.co.uk
The TLD (top-level domain) indicates the type of organization the website is
registered as, such as .com.

5. Port Number :
https://www.example.co.uk:443
A port number specifies the type of service that is requested by the client
since servers often deliver multiple services.

6. Path :
https://www.example.co.uk:443/blog/article/search
Path specifies the exact location of the web page, file, or any resource that the user
wants access to.
7. Query String Separator :
https://www.example.co.uk:443/blog/article/search?
The query string which contains specific parameters of the search is preceded by a
question mark (?).

8. Query String :
https://www.example.co.uk:443/blog/article/search?docid=720&hl=en
The query string specifies the parameters of the data that is being queried from a
website’s database.

9. Fragment :
https://www.example.co.uk:443/blog/article/search?docid=720&hl=en#dayone
The fragment identifier of a URL is optional, usually appears at the end, and begins
with a hash (#). It indicates a specific location within a page such as the ‘id’ or
‘name’ attribute for an HTML element.
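The same example URL can be decomposed programmatically with Python's standard library, which recovers each of the components described above:

```python
from urllib.parse import urlparse, parse_qs

url = "https://www.example.co.uk:443/blog/article/search?docid=720&hl=en#dayone"
p = urlparse(url)
print(p.scheme)            # scheme: https
print(p.hostname)          # subdomain + domain + TLD: www.example.co.uk
print(p.port)              # port number: 443
print(p.path)              # path: /blog/article/search
print(parse_qs(p.query))   # query string: {'docid': ['720'], 'hl': ['en']}
print(p.fragment)          # fragment: dayone
```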

RIP

• RIP (Routing Information Protocol) – used for exchanging routing information


between routers in a network.

• RIP (Routing Information Protocol) is a distance-vector routing protocol that is used


to distribute routing information within a network
• It’s one of the earliest routing protocols developed for use in IP (Internet Protocol)
networks, and it’s still widely used in small to medium-sized networks

• RIP has a simple and straightforward operation, which makes it easy to understand
and configure.

• However, it also has some limitations, such as its slow convergence time and limited
scalability. In large networks, RIP can become slow and inefficient, which is why it’s
often replaced by more advanced routing protocols such as OSPF (Open Shortest Path
First)
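The distance-vector update at the heart of RIP can be sketched as a Bellman-Ford style table merge. The topology and hop counts below are made up, and real RIP also handles timeouts, split horizon, and periodic advertisements; metric 16 means unreachable.

```python
# Merge a neighbour's advertised distance vector into this router's table,
# adding the hop to the neighbour itself.
def merge_vector(table, neighbour, advertised, cost_to_neighbour=1):
    """table/advertised map destination -> hop count; 16 = unreachable in RIP."""
    for dest, hops in advertised.items():
        candidate = min(hops + cost_to_neighbour, 16)
        if dest not in table or candidate < table[dest][0]:
            table[dest] = (candidate, neighbour)   # (metric, next hop)
    return table

table = {"A": (0, "-")}                            # this router is A
table = merge_vector(table, "B", {"A": 1, "B": 0, "C": 1})
table = merge_vector(table, "D", {"C": 3, "D": 0})
print(table)   # C is reached via B in 2 hops, not via D in 4
```

The shorter route to C via B survives the later, worse advertisement from D, which is exactly the "keep the minimum metric" rule of distance-vector routing.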

TELNET
TELNET stands for Teletype Network. It is a protocol that enables one
computer to connect to another computer remotely.
It is used as a standard TCP/IP protocol for virtual terminal service which is provided
by ISO.
The computer that starts the connection is known as the local computer.
The computer being connected to, i.e. the one that accepts the connection, is
known as the remote computer.

Logging
The logging process can be further categorized into two parts:
1. Local Login
2. Remote Login
1. Local Login: Whenever a user logs into its local system, it is known as local
login.
2. Remote Login: Remote Login is a process in which users can log in to a remote
site i.e. computer and use services that are available on the remote computer.
With the help of remote login, the results of processing on the remote
computer are transferred back to the local computer.

Network Virtual Terminal(NVT)

NVT (Network Virtual Terminal) is a virtual terminal in TELNET that has a


fundamental structure that is shared by many different types of real terminals. NVT
(Network Virtual Terminal) was created to make communication viable between
different types of terminals with different operating systems.

TELNET Commands
Commands of Telnet are identified by a prefix character, Interpret As Command (IAC)
with code 255. IAC is followed by command and option codes.
The basic format of the command is as shown in the following figure :

Following are some of the important TELNET commands:


Character | Decimal | Binary | Meaning
WILL | 251 | 11111011 | 1. Offering to enable. 2. Accepting a request to enable.
WON'T | 252 | 11111100 | 1. Rejecting a request to enable. 2. Offering to disable. 3. Accepting a request to disable.
DO | 253 | 11111101 | 1. Approving a request to enable. 2. Requesting to enable.
DON'T | 254 | 11111110 | 1. Disapproving a request to enable. 2. Approving an offer to disable. 3. Requesting to disable.
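Since a TELNET command is just the IAC byte (255) followed by a command code and an option code, composing one is straightforward. Option code 1 is ECHO (RFC 857).

```python
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254
NAMES = {WILL: "WILL", WONT: "WON'T", DO: "DO", DONT: "DON'T"}
ECHO = 1                                  # option code 1 = ECHO (RFC 857)

msg = bytes([IAC, DO, ECHO])              # "please enable echoing"
print(msg.hex())                          # fffd01: IAC, DO, option 1
print(NAMES[msg[1]], "option", msg[2])    # DO option 1
```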

Modes of Operation
Most telnet implementations operate in one of the following three modes:
1. Default mode
2. Character mode
3. Line mode
1. Default Mode: If no other modes are invoked then this mode is used. Echoing
is performed in this mode by the client.
In this mode, the user types a character and the client echoes the character on
the screen but it does not send it until the whole line is completed.
2. Character Mode: Each character typed in this mode is sent by the client to the
server. A server in this type of mode normally echoes characters back to be displayed
on the client’s screen.
3. Line Mode: Line editing like echoing, character erasing, etc. is done from the
client side. The client will send the whole line to the server.

TCP STATE TRANSITION DIAGRAM


To keep track of all the different events happening during connection establishment, connection
termination, and data transfer, TCP is specified as the finite state machine shown in Figure.
The figure shows the two FSMs used by the TCP client and server combined in one diagram. The
ovals represent the states. The transition from one state to another is shown using directed lines.

The dotted black lines in the figure represent the transition that a server normally goes through; the
solid black lines show the transitions that a client normally goes through.

The state marked ESTABLISHED in the FSM is in fact two different sets of states that the client and
server undergo to transfer data.

The states for TCP are shown in the figure.

CHAPTER 3
Design issues of DATA link
Data-link layer is the second layer after the physical layer. The data link layer is
responsible for maintaining the data link between two hosts or nodes.
Before going through the design issues in the data link layer. Some of its sub-layers
and their functions are as following below.
The data link layer is divided into two sub-layers :
1. Logical Link Control Sub-layer (LLC) –
Provides the logic for the data link, Thus it controls the synchronization, flow
control, and error checking functions of the data link layer. Functions are –
 (i) Error Recovery.
 (ii) It performs the flow control operations.
 (iii) User addressing.

2. Media Access Control Sub-layer (MAC) –


It is the second sub-layer of data-link layer. It controls the flow and
multiplexing for transmission medium. Transmission of data packets is
controlled by this layer.
Functions are –
 (i) To perform the control of access to media.
 (ii) It performs the unique addressing to stations directly connected
to LAN.
 (iii) Detection of errors.
Design issues with data link layer are :
1. Services provided to the network layer –
The data link layer acts as a service interface to the network layer.
The principal service is transferring data from the network layer on the sending
machine to the network layer on the destination machine. This transfer takes
place via the DLL (data link layer).
It provides three types of services:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.
Unacknowledged connectionless service
 Here the sending machine sends independent frames without any
acknowledgement from the receiver.
 No logical connection is established.
Acknowledged connectionless service
 There is no logical connection between sender and receiver established.
 Each frame is acknowledged by the receiver.
 If the frame didn’t reach the receiver in a specific time interval it has to be
sent again.
 It is very useful in wireless systems.
Acknowledged connection-oriented service
 A logical connection is established between sender and receiver before data
is transmitted.
 Each frame is numbered so the receiver can ensure all frames have arrived
and exactly once.
2. Frame synchronization –
The source machine sends data in the form of blocks called frames to the
destination machine.
The starting and ending of each frame should be identified so that the frame can
be recognized by the destination machine.
3. Flow control –
Flow control prevents the sender from overwhelming the receiver. The source
machine must not send data frames at a rate faster than the destination
machine can accept them.
4. Error control –
Error control ensures reliable delivery: errors introduced during transmission from
the source to the destination machine (lost, corrupted, or duplicated frames) must be
detected and corrected at the destination machine.

Channel allocation problem,


Channel allocation is a process in which a single channel is divided and allotted to
multiple users in order to carry user specific tasks.

If there are N number of users and channel is divided into N equal-sized sub channels,
Each user is assigned one portion.
If the number of users are small and don’t vary at times, then Frequency Division
Multiplexing can be used as it is a simple and efficient channel bandwidth allocating
technique.
Channel allocation problem can be solved by two schemes: Static Channel Allocation
in LANs and MANs, and Dynamic Channel Allocation.
1. Static Channel Allocation in LANs and MANs:
It is the classical or traditional approach of allocating a single channel
among multiple competing users using Frequency Division Multiplexing
(FDM).
If there are N users, the frequency channel is divided into N equal-sized
portions (bandwidth), each user being assigned one portion. Since each
user has a private frequency band, there is no interference between users.
However, dividing the channel into a fixed number of chunks is inefficient when
the number of users is large or varying, or when traffic is bursty.

T = 1 / (UC − L)

T(FDM) = 1 / (U(C/N) − L/N) = N / (UC − L) = N · T

Where,

T = mean time delay,
C = capacity of the channel (bps),
L = arrival rate of frames (frames/sec),
1/U = bits per frame,
N = number of subchannels,
T(FDM) = mean time delay using Frequency Division Multiplexing.

So the mean delay using static FDM is N times worse than if all the frames were queued up for a single shared channel.
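Plugging illustrative numbers into the delay formulas (the values below are made up) shows the FDM penalty directly:

```python
C = 100e6          # channel capacity (bits/sec)
U = 1 / 10_000     # 1/U = 10,000 bits per frame
L = 5000.0         # total arrival rate (frames/sec)
N = 10             # number of FDM subchannels

T = 1 / (U * C - L)                   # mean delay, one shared channel
T_fdm = 1 / (U * (C / N) - L / N)     # mean delay on one FDM subchannel
print(f"T = {T*1000:.2f} ms, T_fdm = {T_fdm*1000:.2f} ms, ratio = {T_fdm/T:.0f}")
```

The ratio T_fdm / T comes out to exactly N: splitting the channel statically multiplies the mean delay by the number of subchannels.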
2. Dynamic Channel Allocation:
Possible assumptions include:

1. Station Model:
Assumes that each of the N stations independently produces frames.
The probability of a frame being generated in an interval of length Δt
is λ·Δt, where λ is the constant arrival rate of new frames.

2. Single Channel Assumption:


In this allocation all stations are equivalent and can send and receive on that
channel.

3. Collision Assumption:
If two frames overlap in time, the result is a collision.
Any collision is an error, and both frames must be retransmitted. Collisions are
the only possible errors.

4. Time can be divided into Slotted or Continuous.

5. Stations can sense a channel is busy before they try it.

Multiple access protocol= aloha,


Data-link layer is the second layer after the physical layer. The data link layer is
responsible for maintaining the data link between two hosts or nodes.

The data link layer is used in a computer network to transmit the data between two devices or
nodes.

Aloha Rules

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.
3. Collision and data frames may be lost during the transmission of data through
multiple stations.
4. Aloha relies on acknowledgment of the frames rather than collision detection.
5. It requires retransmission of data after some random amount of time.

Pure Aloha

 Whenever data is available for sending over a channel at stations, we use Pure Aloha.

 In pure Aloha, when each station transmits data to a channel without checking whether
the channel is idle or not, the chances of collision may occur, and the data frame can
be lost.

 When any station transmits the data frame to a channel, the pure Aloha waits for the
receiver's acknowledgment.

Slotted Aloha

 The slotted Aloha is designed to overcome the pure Aloha's efficiency because pure
Aloha has a very high possibility of frame hitting.

 In slotted Aloha, the shared channel is divided into a fixed time interval called slots

 So that, if a station wants to send a frame to a shared channel, the frame can only be
sent at the beginning of the slot, and only one frame is allowed to be sent to each slot.
 And if a station misses the beginning of a slot, it must wait until the
beginning of the next slot.

 However, the possibility of a collision remains when two or more stations
try to send a frame at the beginning of the same time slot.
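The efficiency difference between the two variants can be quantified with the standard Aloha throughput formulas, S = G·e^(−2G) for pure Aloha and S = G·e^(−G) for slotted Aloha, where G is the offered load in frames per frame time:

```python
import math

def pure_aloha(G):
    return G * math.exp(-2 * G)     # vulnerable period = 2 frame times

def slotted_aloha(G):
    return G * math.exp(-G)         # vulnerable period = 1 slot

print(f"pure    max at G=0.5: {pure_aloha(0.5):.3f}")    # ~0.184
print(f"slotted max at G=1.0: {slotted_aloha(1.0):.3f}") # ~0.368
```

The maxima are the classic results: pure Aloha tops out at about 18.4% channel utilisation (G = 0.5), while slotting doubles it to about 36.8% (G = 1).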

Elementary data link protocols


Data-link layer is the second layer after the physical layer. The data link layer is
responsible for maintaining the data link between two hosts or nodes.

The data link layer is used in a computer network to transmit the data between two devices or
nodes.

Protocols in the data link layer are designed so that this layer can perform its basic
functions: framing, error control and flow control

Types of Data Link Protocols

Data link protocols can be broadly divided into two categories, depending on whether
the transmission channel is noiseless or noisy.
Simplex Protocol

 The Simplex protocol is a hypothetical protocol designed for unidirectional data
transmission over an ideal channel, i.e. a channel through which transmission
can never go wrong.
 It has distinct procedures for sender and receiver.
 The sender simply sends all its data onto the channel as soon as it is
available in its buffer.
 The receiver is assumed to process all incoming data instantly. It is
hypothetical since it does not handle flow control or error control.

Stop – and – Wait Protocol

 Stop – and – Wait protocol is for noiseless channel too. It provides


unidirectional data transmission without any error control facilities.
 However, it provides for flow control so that a fast sender does not drown a
slow receiver.
 The receiver has a finite buffer size with finite processing speed.
 The sender can send a frame only when it has received indication from the
receiver that it is available for further data processing.

Stop – and – Wait ARQ

 Stop – and – wait Automatic Repeat Request (Stop – and – Wait ARQ) is a
variation of the above protocol with added error control mechanisms,
appropriate for noisy channels.
 The sender keeps a copy of the sent frame. It then waits for a finite time to
receive a positive acknowledgement from receiver.
 If the timer expires or a negative acknowledgement is received, the frame is
retransmitted.
 If a positive acknowledgement is received then the next frame is sent.
Go – Back – N ARQ

 Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgement for the first frame.
 It uses the concept of sliding window, and so is also called sliding window
protocol. The frames are sequentially numbered and a finite number of frames
are sent.
 If the acknowledgement of a frame is not received within the time period, all
frames starting from that frame are retransmitted.

Selective Repeat ARQ

 This protocol also provides for sending multiple frames before receiving the
acknowledgement for the first frame.
 However, here only the erroneous or lost frames are retransmitted, while the
good frames are received and buffered.
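The Go-Back-N behaviour can be illustrated with a toy simulation. This is a simplification with made-up frame counts, window size, and loss pattern: the base advances a whole window at a time and a retransmitted frame always succeeds, whereas real GBN slides the window per acknowledgement.

```python
def go_back_n(num_frames, window, lost):
    """Return the sequence of frame transmissions for a given loss pattern."""
    base, log = 0, []
    while base < num_frames:
        sent = list(range(base, min(base + window, num_frames)))
        log.extend(sent)                 # transmit the whole outstanding window
        failed = [s for s in sent if s in lost]
        if failed:
            first = failed[0]
            lost.discard(first)          # assume the retransmission succeeds
            base = first                 # go back N: resend from the first loss
        else:
            base = sent[-1] + 1          # window fully acknowledged
    return log

print(go_back_n(8, window=4, lost={2, 5}))
```

In the printed log, frame 3 is transmitted twice even though it arrived correctly the first time: once frame 2 is lost, everything from frame 2 onwards is resent. Selective Repeat would retransmit only frames 2 and 5.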
