FCN QB PT2
1) Explain simple parity check code process with neat diagram and example.
The Simple Parity-Check Code is an error-detecting code that adds
a parity bit to a k-bit dataword to form an n-bit codeword, where
n = k + 1. The parity bit is chosen so that the total number of 1's in the codeword is
even (odd parity is also possible, but we focus on even parity here). This allows the
detection of single-bit errors: flipping any one bit makes the number of 1's odd, which
reveals the error. However, this code cannot correct errors, because its minimum
Hamming distance is 2, meaning it can only detect, not correct, a single-bit error. For
example, with a 4-bit dataword (k = 4), a parity bit is added to form a 5-bit codeword
whose total number of 1's is even, enabling error detection.
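As a rough illustration of the encoder and checker (the function names and the sample dataword are chosen for this note, not taken from any standard library):

```python
def add_even_parity(dataword: str) -> str:
    """Append one parity bit so the total number of 1's in the codeword is even."""
    parity = dataword.count("1") % 2          # 1 if the dataword has an odd number of 1's
    return dataword + str(parity)

def parity_ok(codeword: str) -> bool:
    """True if the codeword still has an even number of 1's (no error detected)."""
    return codeword.count("1") % 2 == 0

codeword = add_even_parity("1011")            # k = 4 dataword -> n = 5 codeword
print(codeword)                               # 10111
print(parity_ok(codeword))                    # True  (no error)
print(parity_ok("0" + codeword[1:]))          # False (single-bit error detected)
```

Note that flipping two bits leaves the count of 1's even, which is why this code detects only an odd number of bit errors.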
2) Explain different types of errors. Give example.
Error:
During data transmission, data can get corrupted due to noise or interference, causing
bits to change from 0 to 1 or 1 to 0. The two main types of errors are single-bit errors
and burst errors.
Single-Bit Error:
A single-bit error means only one bit in the data is altered. For example, if 00000010
(ASCII STX) is sent and 00001010 (ASCII LF) is received, one bit has changed. This type of
error is rare, as noise must last for just one bit duration.
Burst Error:
A burst error involves two or more bits being changed. The bits may not be next to each
other, but the error is measured from the first to the last changed bit. For example, if
0100010001000011 is sent and 0101110101100011 is received, the burst length is 8.
Burst errors are more common because noise often affects several bits at once.
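The burst length in the example above can be verified with a short check (bit positions are counted from 0 here, purely for illustration):

```python
sent     = "0100010001000011"
received = "0101110101100011"

# positions where the received bit differs from the sent bit
errors = [i for i, (s, r) in enumerate(zip(sent, received)) if s != r]
print(errors)                                  # [3, 4, 7, 10]

# burst length = distance from the first to the last corrupted bit, inclusive
print(errors[-1] - errors[0] + 1)              # 8
```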
3) Describe Cyclic Redundancy check process with neat diagram and example. 8
Cyclic Redundancy Check (CRC)
The Cyclic Redundancy Check (CRC) is an error-detecting technique used in networks
like LANs and WANs to ensure data integrity. It is a type of cyclic code where a fixed-
length check value is computed from the data, using binary division. The process
involves both encoding (at the sender's end) and decoding (at the receiver's end).
CRC Encoder:
In the CRC process, the original dataword is first augmented by adding n – k zeros
(where n is the total codeword length and k is the dataword length). This extended
dataword is then divided (using modulo-2 division) by a predefined divisor or generator
polynomial, which is known to both sender and receiver. The remainder from this
division (called the CRC bits) is appended to the original dataword to form the
codeword. For example, if a 4-bit dataword is used and the generator is 4 bits long, then
3 zeros are added to the dataword before division. If the remainder is 101, the final
codeword becomes the dataword followed by 101.
CRC Decoder:
The decoder at the receiver’s end receives the full codeword. It then performs the same
division operation using the same generator. If the remainder (syndrome) is all zeros
(e.g., 000), it indicates no error, and the leftmost k bits are accepted as the correct
dataword. However, if the remainder is non-zero (e.g., 011), it means an error has
occurred during transmission, and the received codeword is rejected.
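A minimal sketch of the encoder and decoder built on modulo-2 division (the dataword 1001 and generator 1011 are illustrative values chosen so that k = 4 and n = 7):

```python
def xor(a: str, b: str) -> str:
    """Bitwise XOR of two equal-length bit strings."""
    return "".join("0" if x == y else "1" for x, y in zip(a, b))

def mod2_remainder(bits: str, generator: str) -> str:
    """Modulo-2 (binary, no-carry) division; returns the last len(generator)-1 bits."""
    r = len(generator) - 1
    work = list(bits)
    for i in range(len(bits) - r):
        if work[i] == "1":                     # divide only when the leading bit is 1
            work[i:i + r + 1] = xor("".join(work[i:i + r + 1]), generator)
    return "".join(work[-r:])

dataword, generator = "1001", "1011"
crc = mod2_remainder(dataword + "000", generator)   # augment with n - k = 3 zeros
codeword = dataword + crc
print(crc, codeword)                                # 110 1001110

# Decoder: an all-zero syndrome means no error was detected.
print(mod2_remainder(codeword, generator))          # 000
print(mod2_remainder("1011110", generator))         # non-zero -> codeword rejected
```

The same division routine serves both ends: the sender divides the augmented dataword, while the receiver divides the received codeword and checks for an all-zero syndrome.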
4) Justify “Error correction is more complex than error detection.”
Error correction is more complex than error detection because it not only identifies that
an error has occurred, but also requires locating and fixing the exact bit or bits that are
wrong. Error detection simply answers whether an error is present—yes or no—without
concern for how many bits are affected or where the error is located. In contrast, error
correction must determine both the number of errors and their exact positions within
the message. For example, correcting one bit error in an 8-bit message involves checking
8 possible positions, while correcting two bit errors requires evaluating 28 possible
combinations. As the number of bits and errors increases, the complexity grows
significantly. For instance, finding and correcting 10 errors in a 1000-bit message
involves an enormous number of possibilities, making correction far more demanding
than detection.
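The counts used in this argument can be checked directly; a quick illustration using Python's math.comb:

```python
from math import comb

print(comb(8, 1))       # 8    possible positions of a single-bit error in 8 bits
print(comb(8, 2))       # 28   possible two-bit error combinations in 8 bits
print(comb(1000, 10))   # ~2.6e23 combinations for 10 errors in a 1000-bit message
```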
5) Describe character oriented and bit oriented protocols.
In character-oriented protocols, data is transmitted in the form of 8-bit characters,
usually based on a standard coding system such as ASCII. The protocol treats the entire
frame, including the header, data, and trailer, as a series of characters. Special
character-based flags are used to indicate the start and end of each frame. For example,
a predefined 8-bit character (like a start-of-text or end-of-text symbol) serves as a
delimiter. This approach is simple and easy to implement but can face issues if the data
itself contains characters identical to the control flags.
On the other hand, bit-oriented protocols treat the data as a continuous stream of bits,
not limited to any character encoding. The data can represent text, images, audio, or
video. These protocols use a specific bit pattern, usually 01111110, as a flag to mark the
beginning and end of a frame. Since the data is handled at the bit level, bit-oriented
protocols are more flexible and efficient, especially when transmitting non-text data.
However, they require special handling like bit stuffing to ensure that the flag pattern
does not appear within the actual data.
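For the character-oriented case, the usual remedy when the data contains flag-like characters is byte (character) stuffing: an escape character is inserted before any such byte. A simplified sketch follows (the flag and escape values are assumed here, and real protocols such as PPP additionally transform the escaped byte):

```python
FLAG, ESC = 0x7E, 0x7D      # illustrative flag and escape byte values

def byte_stuff(payload: bytes) -> bytes:
    """Escape any flag or escape byte inside the payload, then delimit with flags."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)                  # stuffed escape byte
        out.append(b)
    out.append(FLAG)
    return bytes(out)

print(byte_stuff(b"AB\x7eCD").hex(" "))      # the 7e inside the data is preceded by 7d
```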
6) Explain flow control and error control.
Flow control is a technique used in the data link layer to manage the rate at which data
is sent from the sender to the receiver. It ensures that the sender does not overwhelm
the receiver by sending data faster than it can be processed. Since receiving devices
have limited processing speed and memory (buffer space), flow control helps prevent
data loss by allowing the receiver to pause the sender or limit the flow of data until it is
ready to receive more. This is especially important in networks where the sender is
much faster than the receiver.
Error control, on the other hand, deals with detecting and correcting errors that occur
during data transmission. It enables the receiver to identify frames that are lost or
damaged and request their retransmission. This process is commonly known as
Automatic Repeat Request (ARQ). Error control in the data link layer typically involves
using error detection codes (like CRC or checksums), and if an error is found, the
receiver asks the sender to resend the affected frame to ensure reliable communication.
7) Explain with neat diagram Go Back N ARQ protocol in detail.
Go-Back-N ARQ is a sliding window error control protocol used in data communication
to ensure reliable transmission. It allows the sender to send multiple frames (up to a
certain window size) without waiting for individual acknowledgments for each frame.
This helps improve transmission efficiency by keeping the channel busy. However, if an
error is detected in a frame, the receiver discards that frame and all subsequent frames,
and the sender must go back and retransmit the erroneous frame and all the ones after
it—hence the name Go-Back-N.
Key Features:
• Sliding Window Size (N): controls how many frames can be sent without acknowledgment.
• Cumulative Acknowledgment: the receiver acknowledges the last correctly received frame in order.
• Error Handling: if a frame is lost or damaged, that frame and all subsequent frames are retransmitted.
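A highly simplified sender-side trace of this behavior is sketched below (the window size, number of frames, and the lost frame are assumed values; timers and the ACK channel are abstracted away):

```python
N, total = 4, 8                 # window size and number of frames (assumed)
base = 0                        # oldest unacknowledged frame
lost_once = {2}                 # the channel drops frame 2 on its first transmission

while base < total:
    window = list(range(base, min(base + N, total)))
    print("send", window)
    acked = base                # receiver accepts frames strictly in order
    for seq in window:
        if seq in lost_once:
            lost_once.discard(seq)          # the retransmission will get through
            break                           # everything after the loss is discarded
        acked = seq + 1
    print("cumulative ACK up to", acked)
    base = acked                # slide the window; unacknowledged frames are resent
```

In this trace frame 3 is transmitted twice even though its first copy reached the receiver, which is exactly the inefficiency that Selective Repeat (next question) removes.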
8) Describe with neat diagram Selective repeat ARQ protocol in detail.
Selective Repeat ARQ is an advanced error control protocol designed to improve
efficiency over noisy communication links. Unlike Go-Back-N ARQ, which discards all
frames after an error and retransmits them even if they were received correctly,
Selective Repeat ARQ resends only the specific frames that are lost or corrupted. This
approach makes better use of bandwidth, especially in environments where
transmission errors are frequent.
In Selective Repeat, both the sender and the receiver maintain a sliding window. The
sender can transmit multiple frames without waiting for individual acknowledgments,
and the receiver is capable of accepting and buffering out-of-order frames. When a
frame is found to be missing or damaged, the receiver sends a negative
acknowledgment (NAK) or simply waits for the sender to retransmit the specific frame.
Once the correct frame is received, the receiver reorders the buffered frames and
delivers them in the proper sequence.
9) Explain concept of sliding window with neat diagram.
The sliding window is an abstract concept that defines the range of sequence numbers
that concern the sender and receiver. In other words, the sender and receiver need to
deal with only part of the possible sequence numbers. The range that concerns the
sender is called the send sliding window; the range that concerns the receiver is called
the receive sliding window. The send window is an imaginary box covering the sequence
numbers of the data frames that can be in transit. In each window position, some of
these sequence numbers define frames that have been sent; others define frames that
can still be sent. The maximum size of the window is 2^m - 1 (the reason is discussed
with the ARQ protocols). Here the size is taken as fixed and set to the maximum value,
although some protocols use a variable window size. A sliding window of size 15
(m = 4) divides the possible sequence numbers into four regions at any time: to the left
of the window lie the sequence numbers of frames that have already been acknowledged;
inside the window, some sequence numbers belong to frames that have been sent but not
yet acknowledged (outstanding frames) and the rest to frames that may still be sent; to
the right of the window lie the sequence numbers that cannot be used until the window
slides.
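The wrap-around nature of the window can be sketched with modular arithmetic (the position of the first outstanding frame is assumed for illustration):

```python
m = 4
seq_space = 2 ** m                      # 16 sequence numbers: 0 .. 15
window_size = 2 ** m - 1                # maximum send window = 15

first_outstanding = 13                  # oldest sent-but-unacknowledged frame (assumed)
window = [(first_outstanding + i) % seq_space for i in range(window_size)]
print(window)                           # [13, 14, 15, 0, 1, ..., 11] -- the window wraps around
```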
10) Numericals on CRC and Hamming coding.
11) Define piggybacking and its usefulness.
Piggybacking is used to improve the efficiency of bidirectional protocols. When a
frame is carrying data from A to B, it can also carry control information about arrived (or
lost) frames from B; when a frame is carrying data from B to A, it can also carry control
information about the arrived (or lost) frames from A.
Piggybacking in networking enhances efficiency in data transmission. Here are some key
benefits:
Optimized Bandwidth Usage: Reduces the number of frames sent by combining
acknowledgment with data transmission.
Improved Efficiency: Eliminates the need for separate acknowledgment frames, saving
time and resources.
Minimized Transmission Overhead: Reduces extra bits and control information, leading
to better network performance.
Enhanced Network Throughput: Speeds up communication by avoiding unnecessary
delays in acknowledgment.
Lower Latency: Helps maintain smooth data flow by acknowledging received frames
without extra wait time.
12) Bit-stuff the following data and highlight stuffed bit. (each 2)
1000111111100111110100011111111111000011111
Bit-stuffing is a technique used in data communication to ensure that a specific bit
pattern—usually a sequence of consecutive ones—does not interfere with data
transmission. The rule typically states that whenever five consecutive 1s appear in a
data stream, an extra 0 is inserted to prevent confusion with control flags.
Applying bit stuffing to the given data:
Original data:
1000111111100111110100011111111111000011111
A 0 is inserted after every run of five consecutive 1's.
Stuffed data (each stuffed bit shown in brackets):
100011111[0]110011111[0]0100011111[0]11111[0]1000011111[0]
Five 0's are stuffed in total: one after the first five 1's of the run of seven, one after
the run of five, two within the run of eleven, and one after the final run of five.
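The same result can be produced mechanically; a small sketch of the stuffing rule used above:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")              # stuffed bit
            run = 0
    return "".join(out)

data = "1000111111100111110100011111111111000011111"
print(bit_stuff(data))
# 100011111011001111100100011111011111010000111110  (48 bits: 43 data + 5 stuffed)
```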
13) A sender sends a series of packets to the same destination using 5-bit sequence
numbers. If the sequence number starts with 0, what is the sequence number after
sending 100 packets?
In a system using 5-bit sequence numbers, the available range is 0 to 31 (since 5 bits
can represent 2^5 = 32 values). The sequence numbers cycle back to 0 after reaching 31,
so to determine the sequence number after sending 100 packets, find the remainder when
100 is divided by 32:
100 mod 32 = 100 - (32 × 3) = 100 - 96 = 4
Thus, the sequence number after sending 100 packets is 4. (Counting from 0, the
hundredth packet itself carries sequence number 99 mod 32 = 3; 4 is the number used by
the next packet to be sent.)
14) Using 5-bit sequence numbers, what is the maximum size of the send and receive
windows for each of the following protocols?
a. Stop-and-Wait ARQ
b. Go-Back-N ARQ
c. Selective-Repeat ARQ
When using 5-bit sequence numbers (2^5 = 32 possible sequence values), the window sizes
for the different Automatic Repeat reQuest (ARQ) protocols vary based on their
mechanisms for handling retransmissions and acknowledgments.
Maximum window sizes:
a. Stop-and-Wait ARQ
o Send window: 1 (Only one frame is sent at a time before waiting for
acknowledgment)
o Receive window: 1 (Only one frame is expected at a time)
b. Go-Back-N ARQ
o Send window: 31 (maximum value: 2^5 - 1 = 31, ensuring unique sequence
numbers during transmission)
o Receive window: 1 (Receiver only accepts in-sequence frames; out-of-order
frames are discarded)
c. Selective-Repeat ARQ
o Send window: 16 (maximum value: 2^5 / 2 = 16, preventing ambiguity in
acknowledgment handling)
o Receive window: 16 (Receiver can buffer out-of-order frames until missing ones
arrive)
15) Describe ICMP protocols (any 4 methods in each type)
The Internet Control Message Protocol (ICMP) is used in network diagnostics and error
reporting. ICMP messages can be categorized into Error Reporting and Query Messages.
Here are four common methods in each type:
1. Error Reporting Messages:
These messages notify network devices of problems that occur during data
transmission.
Destination Unreachable: Sent when a packet cannot reach its intended destination due
to network issues.
Time Exceeded: Used when a packet’s TTL (Time to Live) expires before reaching its
destination.
Redirect: Helps optimize routing by informing a sender that there is a better route
available.
Source Quench: Informs the sender to slow down transmission due to congestion
(though rarely used in modern networks).
2. Query Messages:
These messages request information about network connectivity and performance.
Echo Request & Reply (Ping): Used to check if a device is reachable by sending an echo
request and receiving a reply.
Timestamp Request & Reply: Helps synchronize clocks between devices by sending
timestamps in requests and replies.
Address Mask Request & Reply: Used to discover a subnet mask from a network router.
Router Advertisement & Solicitation: Allows routers to advertise their presence to
hosts, helping them identify gateways.
16) Compare TCP and UDP
17) Discuss concept of Classful addressing
IPv4 addressing, at its inception, used the concept of classes. This architecture is called
classful addressing. Although this scheme is becoming obsolete, we briefly discuss it
here to show the rationale behind classless addressing. In classful addressing, the
address space is divided into five classes: A, B, C, D, and E. Each class occupies some part
of the address space.
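The class of an address is determined entirely by its first octet, which a short helper can illustrate (the sample addresses are arbitrary):

```python
def ipv4_class(address: str) -> str:
    """Return the classful-addressing class of a dotted-decimal IPv4 address."""
    first = int(address.split(".")[0])
    if first <= 127: return "A"       # first bit 0
    if first <= 191: return "B"       # first bits 10
    if first <= 223: return "C"       # first bits 110
    if first <= 239: return "D"       # first bits 1110 (multicast)
    return "E"                        # first bits 1111 (reserved)

for ip in ("10.0.0.1", "172.16.5.4", "192.168.1.1", "224.0.0.9"):
    print(ip, "-> class", ipv4_class(ip))
```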
18) Find the error, if any, in the given IPv4 address. 4 (Problem)
1. 192.168.1.300 – ❌ This address is invalid because each octet in an IPv4 address must be
between 0 and 255. The last octet (300) exceeds this range.
2. 256.200.100.50 – ❌ Invalid, because the first octet (256) is out of range (it must be
between 0 and 255).
3. 999.999.999.999 – ❌ Completely invalid, as all octets exceed the allowable range.
4. 10.0.0.256 – ❌ Invalid due to the last octet being outside the valid range.
An IPv4 address consists of four numbers (octets), separated by dots, each ranging from
0 to 255. Anything outside this range is incorrect.
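The same rule can be expressed as a small validity check (a sketch only; it ignores corner cases such as leading zeros):

```python
def is_valid_ipv4(address: str) -> bool:
    """Four dot-separated numeric octets, each between 0 and 255."""
    parts = address.split(".")
    return len(parts) == 4 and all(p.isdigit() and 0 <= int(p) <= 255 for p in parts)

for ip in ("192.168.1.300", "256.200.100.50", "999.999.999.999", "10.0.0.256", "192.168.1.30"):
    print(ip, is_valid_ipv4(ip))      # only the last address is valid
```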
19) Compare IPv4 and IPv6. (8 points)
20) Explain the need of the Network address translation.
The number of home users and small businesses that want to use the Internet is ever
increasing. In the beginning, a user was connected to the Internet with a dial-up line,
which means that she was connected for a specific period of time. An ISP with a block of
addresses could dynamically assign an address to this user. An address was given to a
user when it was needed. But the situation is different today. Home users and small
businesses can be connected by an ADSL line or cable modem. In addition, many are not
happy with one address; many have created small networks with several hosts and need
an IP address for each host. With the shortage of addresses, this is a serious problem.
A quick solution to this problem is called network address translation (NAT).
NAT enables a user to have a large set of addresses internally and one address, or a
small set of addresses, externally. The traffic inside can use the large set; the traffic
outside, the small set.
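A toy sketch of the idea behind a NAT translation table follows (strictly, NAT with port translation); every address and port below is made up for illustration:

```python
PUBLIC_IP = "203.0.113.5"        # the single external address (assumed)
nat_table = {}                   # public port -> (private IP, private port)
next_port = 5000

def outbound(private_ip, private_port):
    """Rewrite the source of an outgoing packet and remember the mapping."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def inbound(public_port):
    """Translate a reply arriving at the public address back to the private host."""
    return nat_table[public_port]

print(outbound("192.168.1.10", 3456))   # ('203.0.113.5', 5000)
print(inbound(5000))                    # ('192.168.1.10', 3456)
```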
21) Draw frame format of datagram.
22) Write advantages of IPv6 over IPv4.
Advantages of IPv6 over IPv4
Larger Address Space: IPv6 uses 128-bit addresses, providing an exponentially larger
pool compared to IPv4’s 32-bit addresses. This expansion effectively eliminates address
exhaustion issues.
Improved Header Format: IPv6 introduces a streamlined header structure, separating
options from the base header. This enhances routing efficiency and speeds up packet
processing by minimizing the need for routers to inspect every option.
Enhanced Functionality: IPv6 includes new options that enable additional
functionalities, improving flexibility for future networking innovations.
Extensibility: The protocol is designed to be adaptable, allowing seamless extensions to
accommodate emerging technologies and applications.
Optimized Resource Allocation: IPv6 replaces the type-of-service field with a flow label
mechanism, enabling sources to request special packet handling. This benefits
applications such as real-time audio and video streaming.
Robust Security: IPv6 has built-in encryption and authentication features, ensuring data
integrity and confidentiality, making network communications more secure.
These advantages make IPv6 the preferred choice for modern networking, supporting
the growing demands of the digital world.
23) Explain frame fields related to the Fragmentation in datagram.
Fields Related to Fragmentation in IPv4 Datagram
Fragmentation allows an IPv4 datagram to be divided into smaller
packets to fit the Maximum Transmission Unit (MTU) of a network. The
key fields responsible for fragmentation and reassembly are:
Identification: A 16-bit field that uniquely labels a datagram from the
source. All fragments of a single datagram share the same
identification number, helping the destination system reassemble them
correctly.
Flags: A 3-bit field that controls fragmentation behavior:
o Reserved Bit: Always set to 0.
o Don't Fragment (DF) Bit: If set to 1, the packet must not be
fragmented; if the packet is too large, it is dropped.
o More Fragments (MF) Bit: If set to 1, it indicates there are
more fragments; if set to 0, it marks the last fragment.
Fragmentation Offset: A 13-bit field that specifies the position of a
fragment within the original datagram. Measured in units of 8 bytes, it
ensures proper sequencing of fragments during reassembly.
These fields ensure fragmented packets arrive correctly, even if they take different
paths, allowing efficient data transmission across networks.
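A worked illustration of how these fields are filled (the datagram size and per-fragment payload are assumed values; the payload of every non-last fragment must be a multiple of 8 bytes):

```python
total_data, per_fragment = 4000, 1480     # data bytes and per-fragment payload (assumed)

offset_bytes = 0
while offset_bytes < total_data:
    payload = min(per_fragment, total_data - offset_bytes)
    mf = 1 if offset_bytes + payload < total_data else 0
    # the fragmentation offset field stores the byte offset divided by 8
    print(f"offset field = {offset_bytes // 8:4d}, payload = {payload}, MF = {mf}")
    offset_bytes += payload
```

Here three fragments result (offset fields 0, 185, and 370); all carry the same identification value, and only the last has MF = 0.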
24) Design a network and assign IP address to host system in a network. Find Last
address, First address and subnet mask of sub networks. 8 (Problem)
To design a network and assign IP addresses to hosts, follow these
steps:
Step 1: Define Network Requirements
Determine the number of hosts needed per subnet.
Choose an appropriate IP address range.
Step 2: Select an IP Addressing Scheme
Let's assume the network 192.168.1.0/24, a common private IPv4
address range.
Step 3: Subnetting
If dividing into subnets, calculate the subnet mask:
o For 8 subnets, we need 3 extra bits (2^3 = 8), modifying the
default /24 mask to /27.
o New subnet mask: 255.255.255.224 (since 27 bits are used for
the network).
Step 4: Identify First & Last Addresses
Each subnet has 2^(32-27) = 32 addresses, with 30 usable for
hosts.
Subnet 1 (192.168.1.0/27)
Network Address: 192.168.1.0
First Usable IP: 192.168.1.1
Last Usable IP: 192.168.1.30
Broadcast Address: 192.168.1.31
Subnet 2 (192.168.1.32/27)
Network Address: 192.168.1.32
First Usable IP: 192.168.1.33
Last Usable IP: 192.168.1.62
Broadcast Address: 192.168.1.63
(Repeat for additional subnets up to 8)
Step 5: Assign IPs to Host Systems
Each host in a subnet gets an address between First Usable IP and
Last Usable IP.
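The same subnet boundaries can be reproduced with Python's standard ipaddress module, which is a convenient way to check such problems:

```python
import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
for subnet in network.subnets(new_prefix=27):        # the eight /27 subnets
    hosts = list(subnet.hosts())
    print(subnet, "mask", subnet.netmask,
          "first", hosts[0], "last", hosts[-1],
          "broadcast", subnet.broadcast_address)
```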
25) Numerical on IP address.
26) Explain different functions of Network Layer.
Functions of the Network Layer
1. Logical Addressing: The network layer assigns unique IP addresses
to devices, allowing them to be identified across different networks.
These logical addresses help distinguish the source and destination
systems.
2. Routing: The network layer determines the best path for data packets
to travel across a network. Using routers and routing algorithms, it
ensures packets reach their intended destination efficiently.
3. Packet Forwarding: Once routing determines the best path,
forwarding ensures that packets are relayed from one network to
another until they reach the recipient.
4. Fragmentation and Reassembly: If a packet is too large to be
transmitted through a network with a smaller Maximum Transmission
Unit (MTU), the network layer fragments it. At the destination, these
fragments are reassembled to reconstruct the original packet.
5. Error Handling and Diagnostics: The network layer uses protocols
such as Internet Control Message Protocol (ICMP) to detect and report
errors. For example, if a packet cannot reach its destination, an ICMP
message is sent to notify the sender.
6. Quality of Service (QoS): The network layer prioritizes certain types
of traffic, such as video and voice data, to ensure optimal performance
and reliability in real-time applications.
27) Explain different functions of Data Link Layer.
Functions of the Data Link Layer
The data link layer plays a crucial role in ensuring reliable communication between
devices on the same network. Its main responsibilities include:
1. Framing: The data link layer organizes raw data into structured units called frames,
making transmission manageable and efficient.
2. Physical Addressing: It assigns unique hardware addresses (MAC addresses) to frames,
enabling proper delivery of data within the local network.
3. Flow Control: To prevent a fast sender from overwhelming a slower receiver, the data
link layer regulates the data transmission rate.
4. Error Control: By detecting and correcting errors in transmitted frames, this layer
ensures data integrity through mechanisms like cyclic redundancy checks (CRC).
5. Access Control: When multiple devices share a single communication medium, the data
link layer manages access to prevent collisions and ensure orderly transmission.
6. Hop-to-Hop Delivery: Unlike the network layer, which handles end-to-end delivery, the
data link layer is responsible for delivering frames between directly connected devices.
28) Explain ARP and RARP with diagram.
Address Resolution Protocol (ARP) is a procedure used to map logical (IP) addresses to
physical (MAC) addresses, enabling network devices to communicate over a physical
network. Here's how it works:
1. Sending ARP Request: When a sender (host or router) knows the IP address of a
receiver but needs its physical address, it sends out an ARP request. This request is
broadcast across the network, asking which device has a specific IP address.
2. Receiving ARP Reply: Only the device that matches the requested IP address responds
with an ARP reply, providing its MAC address. This reply is unicast directly to the sender.
3. Caching: To optimize future communication, the sender stores the mapping of the IP
address to the MAC address in a cache memory for a limited time (usually 20–30
minutes). This prevents repeated broadcasts.
ARP is encapsulated in a data-link layer frame, and its packet format includes fields like
hardware type, protocol type, operation (request or reply), and sender/target
addresses.
Reverse Address Resolution Protocol (RARP) is used by devices to discover their IP
address when they only know their MAC address. Here's how it works:
A device (usually diskless) sends a RARP request, asking the network for its IP address,
as it lacks a configuration file with this information.
The request is broadcast within the local network and received by a RARP server, which
has a mapping of MAC addresses to IP addresses.
The RARP server replies with the corresponding IP address, allowing the device to start
communicating on the network.
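The request/reply/cache behavior of ARP can be mimicked with a toy lookup table (all addresses below are made up for illustration):

```python
owners = {"192.168.1.20": "AA:BB:CC:DD:EE:02",   # IP -> MAC, as known by each owner
          "192.168.1.30": "AA:BB:CC:DD:EE:03"}
arp_cache = {}

def resolve(ip):
    """Return the MAC for an IP, broadcasting a request only on a cache miss."""
    if ip in arp_cache:
        return arp_cache[ip]                      # cache hit: no broadcast needed
    mac = owners.get(ip)                          # broadcast request; only the owner replies
    if mac is not None:
        arp_cache[ip] = mac                       # remember the reply for a while
    return mac

print(resolve("192.168.1.20"))    # learned through an ARP request/reply
print(resolve("192.168.1.20"))    # answered from the ARP cache
```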
29) Describe SMTP protocols with diagram.
Simple Mail Transfer Protocol (SMTP) is the standard protocol used for sending and
receiving emails over the internet. Here's an overview of how it works:
1. Role of SMTP: SMTP facilitates the transfer of email messages between servers. It
handles outgoing email traffic, ensuring that messages are relayed and delivered
correctly.
2. Workflow:
o The user agent (email client) prepares the message and passes it to the SMTP
server.
o The SMTP server at the sender's site communicates with the SMTP server at the
receiver's site.
o The message is stored in the receiver's mail server.
o The receiver retrieves the email using protocols like POP3 or IMAP.
3. Modes of Operation:
o SMTP operates in a client-server model. The sender's mail server acts as the
client, and the receiver's mail server acts as the server.
o It uses TCP port 25 for communication, ensuring reliable message delivery.
4. Communication Sequence:
o The sender's SMTP server establishes a connection with the recipient's SMTP
server.
o SMTP commands and responses are exchanged to transfer the email data.
o Once the message is successfully sent, the connection is terminated.
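As a hedged sketch of this workflow from the client side, Python's standard smtplib can hand a message to an SMTP server; the host name and mailbox addresses below are placeholders, and a real server would require valid accounts and relaying permission:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Test"
msg.set_content("Hello over SMTP")

# TCP port 25: the library opens the connection, exchanges SMTP commands and
# replies (HELO/EHLO, MAIL FROM, RCPT TO, DATA, QUIT), and then disconnects.
with smtplib.SMTP("mail.example.org", 25) as server:
    server.send_message(msg)
```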
© 2025 THE VARDH CHHAJER