
C#final 1

The document outlines the OSI seven-layer model, detailing the functions and responsibilities of each layer, from the Physical Layer to the Application Layer. It also explains the concept of sockets in network communication, including how they facilitate client-server interactions, and discusses multiplexing, specifically Time Division Multiplexing (TDM) and its types. Additionally, it addresses factors affecting reliable data delivery in network communication, such as packet loss and network congestion, along with their impacts and solutions.

Uploaded by

karthikjyekkar

Module-1

a) Identify the function of each layer in the OSI seven-layer model with
graphical representation.

1. Physical Layer

Function:
The Physical layer is in charge of the actual transmission and reception of unstructured raw
data between a device and a physical transmission medium. It converts digital data into
electrical, optical, or radio signals suitable for the transmission medium and vice versa. It
focuses purely on the physical connection and signal transmission, without any awareness of
the data structure or meaning.

Key Responsibilities:

• Defines physical characteristics of interfaces and media (cables, voltages, pin layouts).
• Converts digital bits into electrical, optical, or radio signals.
• Controls data rate, signal modulation, and bit synchronization.
• Specifies network topologies (bus, ring, star, mesh).
• Involves Layer 1 hardware such as hubs, repeaters, and cables; switches operate at Layer 2.

2. Data Link Layer

Function:
The Data Link layer ensures reliable transmission of data frames between two nodes
connected by a physical layer. It packages raw bits from the physical layer into structured
frames and handles error detection, correction, and flow control to ensure proper data
communication over a link.

Key Responsibilities:

• Framing: Divides the bitstream into manageable data units (frames).


• Error detection and correction using CRC or checksums.
• Media Access Control (MAC): Determines how devices use the shared medium
(Ethernet, Wi-Fi).
• Flow control to prevent sender from overwhelming the receiver.
• Assigns and uses physical addressing (MAC addresses).
• Switches and bridges operate at this layer.

3. Network Layer

Function:
The Network layer is responsible for the delivery of packets across networks by determining
the best logical path between the source and the destination. It manages logical addressing
and routing, ensuring that each packet takes the most efficient route through interconnected
networks.

Key Responsibilities:

• Assigns logical addresses (IP addresses).


• Determines the best routing path across networks (via routers).
• Handles packet forwarding and addressing.
• Manages fragmentation and reassembly of packets.
• Provides connectionless (IP) or connection-oriented services.
• Routers and Layer 3 switches operate at this layer.

4. Transport Layer

Function:
The Transport layer ensures reliable data transfer between devices or applications on different
hosts. It provides services such as connection setup, error recovery, data integrity, and flow
control, ensuring complete and accurate data delivery to the correct application processes.
Key Responsibilities:

• Establishes, maintains, and terminates logical connections between hosts.


• Ensures reliable data delivery using error correction and retransmission (TCP).
• Provides flow control and congestion control.
• Supports multiplexing using port numbers.
• Handles segmentation and reassembly of messages.
• Common protocols: TCP (reliable), UDP (unreliable).

5. Session Layer

Function:
The Session layer is responsible for establishing, managing, and terminating sessions
between applications. It provides mechanisms for managing and synchronizing
communication and dialogue between devices, ensuring a stable and continuous session is
maintained throughout the exchange.

Key Responsibilities:

• Establishes, manages, and terminates sessions.


• Provides dialog control (full-duplex or half-duplex communication).
• Synchronization through checkpoints for data recovery.
• Manages multiple connections and data streams per session.
• Useful in scenarios like video conferencing, remote procedure calls.

6. Presentation Layer

Function:
The Presentation layer translates data between the application layer and the lower layers to
ensure that the data sent from one system can be properly understood by another. It handles
differences in data representation, encrypts data for secure transmission, and compresses data
for optimized performance.

Key Responsibilities:

• Translates data between different formats (e.g., EBCDIC to ASCII).


• Handles encryption and decryption for secure data transmission (SSL/TLS).
• Performs data compression and decompression to improve performance.
• Ensures platform-independent data representation.
• Defines standards like JPEG, MPEG, GIF, etc.
7. Application Layer

Function:
The Application layer serves as the interface between the user and the network. It provides
services and protocols that directly support end-user applications, facilitating network access,
communication, and data exchange.

Key Responsibilities:

• Acts as the interface between the user and the network.


• Supports services such as file transfers, email, browsing.
• Provides protocols like HTTP, FTP, SMTP, DNS, SNMP, Telnet.
• Handles user authentication and privacy control.
• Manages data input/output and network resource sharing.

b) What are sockets? Discuss the responsibility of each function involved in establishing communication between a client and server using sockets.

A socket is a software abstraction that acts as an endpoint for communication between two
devices over a network. It allows applications to send and receive data across network
connections using protocols like TCP and UDP. In simpler terms, a socket is like a "door"
that connects an application to the network.

Sockets are part of the Application Programming Interface (API) provided by the
operating system to support network communication. They enable developers to build
network applications like web browsers, chat applications, file transfer tools, and more.

A socket is the point where a local application process attaches to the network. The socket interface defines operations for creating a socket, attaching the socket to the network, sending/receiving messages through the socket, and closing the socket.

How Sockets Help in Implementing Network Software

Sockets play a crucial role in implementing network software by:

1. Providing Abstraction:
Developers can interact with the network without dealing with low-level details like
packet construction or routing.
2. Facilitating Communication:
They allow processes to communicate over a network using standard APIs.
3. Supporting Multiple Protocols:
Sockets support both connection-oriented (TCP) and connectionless (UDP)
communication.
4. Flexibility:
Sockets can be used for client-server applications and peer-to-peer communication.

Key Socket Operations

1. Creating a Socket

int socket(int domain, int type, int protocol);

• domain: Specifies the protocol family (e.g., PF_INET for IPv4, PF_INET6 for IPv6).
o PF_INET denotes the Internet family.
o PF_UNIX denotes the Unix pipe facility.
o PF_PACKET denotes direct access to the network interface.
• type: Specifies the communication type (SOCK_STREAM for TCP, SOCK_DGRAM for
UDP).
• protocol: Specifies the protocol (usually 0, meaning the default protocol for the
domain and type).

• The next step depends on whether you are a client or a server.

• On a server machine, the application process performs a passive open—the server says that
it is prepared to accept connections, but it does not actually establish a connection.

2. Binding the Socket (Server-Side)

int bind(int socket, struct sockaddr *address, int addr_len);

• Associates the socket with a specific IP address and port number on the server.
• The bind operation binds the newly created socket to the specified address.

3. Listening for Connections (Server-Side)

int listen(int socket, int backlog);

• Puts the socket in a state where it can accept incoming connection requests.

4. Accepting a Connection (Server-Side)

int accept(int socket, struct sockaddr *address, int *addr_len);

• Accepts a connection from a client and creates a new socket for this connection.

5. Connecting to a Server (Client-Side)

int connect(int socket, struct sockaddr *address, int addr_len);

• Establishes a connection to a server.


6. Sending and Receiving Data

Once a connection is established, the application processes invoke the following two operations to send and receive data:
int send(int socket, char *message, int msg_len, int flags);
int recv(int socket, char *buffer, int buf_len, int flags);

• send: Sends data over the socket.


• recv: Receives data from the socket.

7. Closing the Socket

int close(int socket);

• Closes the socket and releases resources.
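The sequence of operations above can be exercised end to end. As an illustrative sketch (not part of the original notes), Python's socket module mirrors the C API call for call: socket, bind, listen, accept, connect, send, recv, close. The helper names echo_once and run_echo_server are made up for this example, and the OS picks a free loopback port:

```python
# A minimal echo client/server sketch using Python's socket module, whose
# calls mirror the C API above: socket(), bind(), listen(), accept(),
# connect(), send(), recv(), close(). Port 0 lets the OS pick a free port.
import socket
import threading

def run_echo_server(server_sock):
    conn, _addr = server_sock.accept()   # blocks until a client connects
    data = conn.recv(1024)               # read up to 1024 bytes
    conn.send(data)                      # echo the data back unchanged
    conn.close()
    server_sock.close()

def echo_once(message: bytes) -> bytes:
    # Server side: passive open (bind + listen), then accept in a thread.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=run_echo_server, args=(server,))
    t.start()

    # Client side: active open (connect), then send and receive.
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    client.send(message)
    reply = client.recv(1024)
    client.close()
    t.join()
    return reply

print(echo_once(b"hello"))
```

Here the server performs the passive open (bind, listen, accept) while the client performs the active open (connect), exactly as described for the C functions above.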

a) What is multiplexing? Discuss Time Division Multiplexing with its types.


Multiplexing

Two common multiplexing methods are Time Division Multiplexing (TDM) and Frequency Division Multiplexing (FDM).

Multiplexing is a technique that allows multiple signals or data streams to share the same
physical medium (e.g., a wire, optical fiber) by dividing resources such as time or frequency.
This improves the efficiency of the communication channel.

Multiplexing is a technique used in communication systems to combine multiple signals or data streams into one signal over a shared medium. The main purpose of multiplexing is to efficiently utilize the available bandwidth of a communication channel, enabling simultaneous transmission of multiple signals without interference.

In other words, multiplexing allows multiple users or applications to share a single communication link, thereby reducing infrastructure costs and increasing system efficiency. At the receiver's end, a demultiplexer separates the combined signal back into the original individual signals.

Time Division Multiplexing (TDM)


How It Works:

• TDM divides the available time on a physical link into fixed intervals called time
slots.
• Each data flow is assigned a specific time slot during which it can transmit data.
• Time slots are repeated cyclically, allowing multiple flows to share the same link
sequentially.
• In TDM, the total time available in the channel is divided into several time slots, each
assigned to a different signal or user. These time slots are then transmitted in rapid
succession, one after another. Because the switching happens very quickly, it appears
as though all the signals are transmitted simultaneously.
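The slot-interleaving idea above can be sketched in a few lines. This is a toy illustration (flow names and helper functions are made up): each repeating frame carries one unit from each flow in a fixed slot order, and the receiver recovers each flow purely from its slot position.

```python
# A toy sketch of synchronous TDM: several flows share one link, and each
# repeating frame carries one unit from each flow in a fixed slot order.
def tdm_multiplex(flows):
    """Interleave equal-length flows slot by slot (round robin)."""
    link = []
    for frame in zip(*flows):        # one "frame" = one slot per flow
        link.extend(frame)
    return link

def tdm_demultiplex(link, n_flows):
    """Receiver side: the slot position alone identifies the flow."""
    return [link[i::n_flows] for i in range(n_flows)]

flows = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"]]
line = tdm_multiplex(flows)
print(line)                          # slots in round-robin order
print(tdm_demultiplex(line, 3))     # original flows recovered
```

Note that the demultiplexer needs no addressing information: the fixed slot order is the whole "protocol", which is why synchronous TDM is simple but wastes the slot when a flow has nothing to send.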

Advantages:

1. Efficiency: Works best when flows have similar data transmission rates.
2. Simplicity: Straightforward slot scheduling makes implementation easy.
3. No Interference: No signal interference since each flow transmits only in its assigned
slot.

Disadvantages:

1. Fixed Allocation: If a flow doesn’t use its time slot, the slot remains wasted, reducing
efficiency.
2. Latency: Delays occur if a flow has to wait for its next slot, especially with many
flows.
3. Synchronization Required: Requires precise timing to ensure data is transmitted in
the correct slot.

Telephone Networks (Digital):

Traditional digital telephone systems use TDM to combine multiple voice calls on the same
physical line. Each call is allocated a specific time slot to transmit its voice data sequentially.
Types of Time Division Multiplexing

1. Synchronous TDM:

• Each input source is assigned a fixed time slot, pre-assigned and occurring at regular intervals.
• Time slots are reserved exclusively for a particular sender; if a device has no data to transmit, its slot remains empty, leading to inefficiencies and wasted bandwidth.
• Simple and easy to implement.

Example: Traditional T1 and E1 lines in telecommunication.

2. Asynchronous TDM (or Statistical TDM):

• Asynchronous TDM, also known as statistical TDM, is a more flexible and efficient approach where time slots are not fixed. Instead, they are dynamically assigned to devices based on their demand for bandwidth. Only devices that have data to transmit are allocated a time slot, which leads to better utilization of the available bandwidth, as no time is wasted on idle devices.
• Time slots are assigned dynamically based on demand.
• Only active devices are given time slots, which improves bandwidth efficiency.
• More complex due to the need for a mechanism to identify the data source.
• Reduces idle time and increases utilization.

Example: Modern digital communication systems that adjust to varying data rates.

Advantages of TDM:

• Efficient use of bandwidth in asynchronous TDM.


• Allows multiple users to share the same channel.
• Simple synchronization in synchronous TDM.

Disadvantages of TDM:

• Synchronous TDM may waste bandwidth with idle time slots.


• Requires precise synchronization between sender and receiver.
• Latency may increase if many users are competing for slots.
b) Select any four scenarios that affect reliable data delivery in network
communication. Explain how different factors can impact the reliability of
data transfer and provide the solution.

1. Packet Loss
Impact on Reliability:

Packet loss refers to the failure of transmitted packets to reach their intended destination. This
often occurs in IP networks and can be caused by various issues like hardware faults, buffer
overflows in routers, network congestion, signal degradation, or faulty transmission lines. In
applications that require real-time data (like VoIP or live streaming), even a small amount of
packet loss can lead to noticeable glitches such as missing audio or video frames, while in
data-heavy transfers, it can result in incomplete files or corrupted data.

Solution:

• Transmission Control Protocol (TCP): This protocol ensures reliability through mechanisms like acknowledgment (ACK), retransmissions, and sequence numbering. If the receiver does not acknowledge a packet within a certain timeframe, TCP resends it.
• Quality of Service (QoS): Prioritizes important traffic like voice or video over less
time-sensitive data to prevent packet drops during congestion.
• Network Monitoring Tools: Tools like Wireshark or PingPlotter can help diagnose
where and why packet loss is happening.
• Upgrading Equipment: Replace outdated switches, routers, or cables that may
contribute to packet loss.

2. Network Congestion
Impact on Reliability:

Congestion occurs when too many devices attempt to send data over a network with limited
bandwidth. This results in overloaded routers and switches dropping packets or increasing
delays. In severe cases, it can lead to timeout errors, data retransmissions, or broken
connections. Applications that require consistent throughput, such as cloud storage syncing
or online multiplayer games, are particularly sensitive to congestion.

Solution:

• Congestion Control Algorithms: TCP uses techniques like slow start, congestion
avoidance, and fast recovery to gradually increase the data flow and back off if
congestion is detected.
• Traffic Shaping and Bandwidth Allocation: Techniques like Rate Limiting and
Token Bucket Algorithms can be used to smooth traffic and avoid sudden bursts.
• Load Balancers: Distribute traffic evenly across multiple servers or network paths,
reducing the burden on any single link.
• Network Segmentation: Dividing the network into smaller segments (e.g., using
VLANs) helps localize traffic and reduce the chances of congestion.

3. Transmission Errors
Impact on Reliability:

These errors occur when the data is altered during transmission. The cause can be physical
problems like electrical interference, attenuation over long distances, noise in the
communication channel, or faulty connectors. Bit errors, even in small quantities, can lead to
corrupted files or malfunctioning applications.

Solution:

• Error Detection Techniques:


o Parity Bits: Add an extra bit to indicate whether the number of 1s in the byte
is even or odd.
o Checksum: Adds up the data and sends the total along with the message to
verify integrity.
o Cyclic Redundancy Check (CRC): A more complex method to detect data
corruption, commonly used in network and storage protocols.
• Error Correction Techniques:
o Forward Error Correction (FEC) adds redundant data so the receiver can
detect and correct errors without needing retransmission.
• Physical Media Improvements: Use shielded twisted pair (STP) cables, fiber optics,
or coaxial cables with better insulation to reduce noise.
• Signal Boosters or Repeaters: Used in long-distance communication to maintain
signal strength and reduce distortion.
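Two of the detection techniques above can be shown concretely. The sketch below (helper names are illustrative) computes an even parity bit and a simple additive checksum, then flips a single bit to show that the parity check catches it; neither technique corrects the error, it only detects it.

```python
# A sketch of two error-detection techniques: an even parity bit and a
# simple additive checksum. Both detect the single-bit error introduced
# below; neither corrects it.
def even_parity_bit(bits):
    """Return the bit that makes the total number of 1s even."""
    return sum(bits) % 2

def additive_checksum(data, modulus=256):
    """Sum the bytes modulo 256; the total travels with the message."""
    return sum(data) % modulus

bits = [1, 0, 1, 1, 0, 1, 1]
parity = even_parity_bit(bits)               # appended by the sender
corrupted = bits.copy()
corrupted[2] ^= 1                            # single-bit error in transit
print(even_parity_bit(corrupted) != parity)  # True: error detected

print(additive_checksum(b"network"))
```

Parity catches any odd number of flipped bits but misses an even number; checksums and CRCs trade more redundancy for stronger detection, which is why CRC is preferred in network and storage protocols.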

4. Use of Unreliable Protocols (e.g., UDP)


Impact on Reliability:

UDP is a connectionless protocol that does not guarantee the delivery, order, or integrity of
packets. It is lightweight and fast but not suitable for data that must arrive accurately and
completely, like emails, web page requests, or file transfers. In such cases, using UDP
without additional safeguards can lead to lost messages, duplicated data, or out-of-order
packet arrival, making the communication unreliable.

Solution:
• Use TCP when reliability is essential: TCP handles connection establishment, error
recovery, and data ordering, making it ideal for file transfers, emails, and HTTP.
• Application Layer Reliability in UDP:
o Custom ACK/NACK mechanism: Implementing acknowledgment and
retransmission logic at the application level.
o Sequencing: Numbering packets so that out-of-order delivery can be
corrected.
o Timeouts and retries: Re-sending data if no response is received within a set
time.
• Hybrid Protocols: Some protocols like QUIC (used by Google) are designed to
provide the speed of UDP with reliability features similar to TCP.
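The application-layer sequencing idea above can be sketched in a few lines. This is a toy illustration (function names are made up, and no real UDP traffic is involved): the sender numbers each message, and the receiver uses the numbers to restore order regardless of arrival order.

```python
# A toy sketch of application-level sequencing over an unreliable
# transport: the sender attaches sequence numbers, and the receiver
# reorders messages on arrival.
def sequence(messages):
    """Sender side: attach a sequence number to each message."""
    return list(enumerate(messages))

def reorder(received):
    """Receiver side: sort by sequence number, then strip it."""
    return [msg for _seq, msg in sorted(received)]

packets = sequence(["a", "b", "c"])
arrived = [packets[2], packets[0], packets[1]]   # out-of-order delivery
print(reorder(arrived))                          # ['a', 'b', 'c']
```

A real implementation would add the ACK/NACK and timeout logic described above; sequencing alone fixes ordering but not loss.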

Module-2

1) Stop-and-Wait Protocol


Stop-and-Wait is a data link layer protocol used for reliable communication between a
sender and a receiver. It ensures that each data packet is acknowledged before the next one
is sent.

a) Compare advantages of sliding window protocol over stop-and-wait protocol.

1. Efficient Link Utilization


• Stop-and-Wait: Only one frame is sent at a time. The sender must wait for an
acknowledgment (ACK) before sending the next frame. This leads to under-utilization
of the communication link, especially in networks with high latency or bandwidth.
• Sliding Window: Allows multiple frames to be in transit before needing an
acknowledgment. This ensures the transmission channel remains busy, maximizing
bandwidth usage and improving overall throughput.

Advantage: Sliding Window provides significantly better efficiency and throughput than
Stop-and-Wait.
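The efficiency gap can be quantified with the standard textbook utilization model: the sender spends t_frame clocking a frame onto the link and then, under Stop-and-Wait, idles for one round-trip time (2 × t_prop) awaiting the ACK. The numbers below are illustrative, not from the original notes.

```python
# Back-of-the-envelope link utilization, using the standard model:
# Stop-and-Wait sends one frame per (t_frame + 2 * t_prop) interval,
# while a window of W frames can keep the link busy across the RTT.
def stop_and_wait_utilization(t_frame, t_prop):
    return t_frame / (t_frame + 2 * t_prop)

def sliding_window_utilization(t_frame, t_prop, window):
    # Utilization saturates at 1.0 once W frames cover the round trip.
    return min(1.0, window * t_frame / (t_frame + 2 * t_prop))

t_frame = 1.0   # ms to transmit one frame
t_prop = 10.0   # ms one-way propagation delay
print(round(stop_and_wait_utilization(t_frame, t_prop), 3))
print(round(sliding_window_utilization(t_frame, t_prop, 21), 3))
```

With these illustrative numbers Stop-and-Wait uses under 5% of the link, while a window of 21 frames fills it completely, which is the "bandwidth-delay product" tuning discussed in point 5 below.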

2. Reduced Idle Time

• Stop-and-Wait: The sender remains idle while waiting for each ACK, causing
frequent gaps in transmission.
• Sliding Window: The sender can continue sending new frames up to the window
limit, without having to wait for ACKs for each one.

Advantage: More continuous data transmission with minimal idle time.

3. Supports Out-of-Order Delivery (with buffering)

• Stop-and-Wait: Only one frame is handled at a time. If a frame arrives out of order
or is lost, there's no mechanism to handle it efficiently.
• Sliding Window: With a receive window size greater than 1, the receiver can buffer
out-of-order frames until the missing ones arrive.

Advantage: Provides better fault tolerance and supports flexible data handling in unreliable
networks.

4. Better Handling of Lost or Delayed Packets

• Stop-and-Wait: A lost ACK forces the sender to retransmit the same frame, possibly
leading to duplication and inefficiency.
• Sliding Window: Sequence numbers and cumulative or selective ACKs allow only
the lost or delayed frames to be retransmitted.

Advantage: Retransmission is more efficient and intelligent, reducing unnecessary data duplication.

5. Scalable for High Bandwidth-Delay Products


• Stop-and-Wait: Performs poorly when the bandwidth-delay product is high. The link
may be capable of handling more data, but only one frame is sent at a time.
• Sliding Window: The window size can be tuned to match the bandwidth-delay
product, making full use of the available link capacity.

Advantage: More suitable for long-distance and high-speed networks.

6. Smarter Acknowledgment Mechanisms

• Stop-and-Wait: Uses simple acknowledgment logic with limited feedback to the sender.
• Sliding Window: Can use cumulative acknowledgments, duplicate ACKs, and even
selective acknowledgments (SACK) for more precise communication and error
handling.

Advantage: Enables quicker detection and correction of errors, maintaining better flow and
performance.

Summary Comparison

| Feature | Stop-and-Wait | Sliding Window | Advantage |
| Frames in Transit | 1 | Multiple | Higher throughput |
| Idle Time | High | Low | More efficient transmission |
| Out-of-Order Frame Handling | Not supported | Buffered (if RWS > 1) | Greater flexibility |
| Packet Loss Recovery | Basic | Intelligent | Higher reliability |
| High-Speed Link Compatibility | Not suitable | Well suited | Better scalability |

Discuss the Sliding window protocol with example.


The Sliding Window Protocol is a flow control method used in the data link layer and
transport layer (like in TCP) to ensure reliable and sequential delivery of data frames between
sender and receiver.

It allows the sender to send multiple frames before needing an acknowledgment, improving
throughput and efficiency.
Components of Sliding Window Protocol
1. Sender Window:
o A buffer at the sender’s side that holds the frames that are sent but not yet
acknowledged.
o Defines how many frames the sender can send before stopping to wait for
ACKs.
2. Receiver Window:
o A buffer at the receiver’s side that defines how many frames it can accept and
process.
o Allows the receiver to accept a fixed number of frames even if they arrive out
of order (in Selective Repeat).
3. Acknowledgments (ACKs):
o Sent by the receiver to inform the sender about successful receipt of frames.
o Can be cumulative (e.g., ACK 3 implies frames 0 to 2 are received).
Key Concepts
• Each frame is assigned a sequence number.
• The window size determines the number of frames that can be sent without
acknowledgment.
• The window “slides” forward as acknowledgments are received, allowing new frames
to be sent.

Example (Window Size = 4)


Initial State:

• Sender sends frames 0, 1, 2, 3.


• If all are acknowledged, window slides forward to allow 4, 5, 6, 7 to be sent.

Case: Frame 2 is lost

Go-Back-N:

• Receiver discards frame 3 and waits for frame 2.


• ACK is sent for the last correctly received frame (ACK 1).
• Sender goes back and retransmits frames 2 and 3.

Selective Repeat:

• Receiver stores frame 3 but waits for frame 2.


• Sends NACK for frame 2.
• Sender retransmits only frame 2.

Types of Sliding Window Protocol


1. Go-Back-N ARQ

• The sender continues sending frames up to the window size.


• If an error or lost frame is detected, the receiver discards all subsequent frames.
• The sender goes back and retransmits the frame in error and all after it.

Example:

• Sent: 0, 1, 2, 3, 4
• Frame 2 is lost.
• Receiver sends ACK 1.
• Sender resends 2, 3, 4.
2. Selective Repeat ARQ

• The receiver accepts and buffers out-of-order frames.


• It sends individual ACKs for correctly received frames.
• The sender retransmits only the frame(s) that were lost or corrupted.

Example:

• Sent: 0, 1, 2, 3
• Frame 2 is lost.
• Receiver sends ACKs for 0, 1, and 3; NACK for 2.
• Sender retransmits only frame 2.
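The two retransmission strategies in the examples above can be sketched directly. This toy code (function names are made up) computes what each variant resends for the document's example of frames 0–4 with frame 2 lost:

```python
# Go-Back-N resends the lost frame and everything sent after it;
# Selective Repeat resends only the frames that were actually lost.
def go_back_n_retransmit(sent, lost):
    first_lost = min(lost)
    return [f for f in sent if f >= first_lost]

def selective_repeat_retransmit(sent, lost):
    return [f for f in sent if f in lost]

sent = [0, 1, 2, 3, 4]
print(go_back_n_retransmit(sent, {2}))         # [2, 3, 4]
print(selective_repeat_retransmit(sent, {2}))  # [2]
```

This makes the trade-off concrete: Go-Back-N keeps the receiver simple (no buffering of out-of-order frames) at the cost of retransmitting frames 3 and 4 unnecessarily.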
Use Cases
• TCP (Transmission Control Protocol): Uses a version of the sliding window for
flow control and congestion control.
• Reliable file transfers: Ensures that all packets are delivered without duplication and
in order.
• Streaming services and real-time applications: Where efficient and timely delivery
is essential.

Advantages
• Allows efficient use of bandwidth by avoiding stop-and-wait behavior.
• Ensures reliable, ordered delivery.
• In Selective Repeat, reduces retransmissions compared to Go-Back-N.

Disadvantages
• Go-Back-N may lead to unnecessary retransmissions.
• Selective Repeat requires more buffer space and is more complex to implement.
• More complex error handling logic is required for managing sequence numbers and
ACKs.
b) Consider the information sequence 11010111011 and the generator polynomial g(x) = x^4 + 1.

a. Generate the CRC.

b. If the information sequence has an error in the third bit (from the left), justify how the receiver will detect the error.
a) Convert the bit stream 10111011 to signals using NRZ, NRZI, and Manchester. Compare the disadvantages of NRZI over Manchester.

a) Line Encoding

1. NRZ (Non-Return-to-Zero)

• Logic:
o '1' → High voltage
o '0' → Low voltage
• Bitstream: 1 0 1 1 1 0 1 1
• Signal:

High → Low → High → High → High → Low → High → High

• So, voltage stays constant during bit duration based on the bit value.

2. NRZI (Non-Return-to-Zero Inverted)

• Logic:
o '1' → Toggle the voltage level
o '0' → Keep the previous level
• Initial level: Assume Low
• Bitstream: 1 0 1 1 1 0 1 1
• Signal (starting from Low):
o 1 → toggle → High
o 0 → same → High
o 1 → toggle → Low
o 1 → toggle → High
o 1 → toggle → Low
o 0 → same → Low
o 1 → toggle → High
o 1 → toggle → Low
• Voltage levels:
Low → High → High → Low → High → Low → Low → High → Low

3. Manchester Encoding

• Logic:
o Each bit has a transition in the middle of the bit duration.
o '1' → Low to High transition
o '0' → High to Low transition
• Bitstream: 1 0 1 1 1 0 1 1
• Signal transitions:
o 1 → Low to High
o 0 → High to Low
o 1 → Low to High
o 1 → Low to High
o 1 → Low to High
o 0 → High to Low
o 1 → Low to High
o 1 → Low to High
• Always has a mid-bit transition, which helps with synchronization.
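The three walkthroughs above can be reproduced programmatically. This sketch (helper names are illustrative) uses 'H'/'L' for the voltage levels, assumes NRZI starts from a Low line as in the text, and uses the same Manchester convention as above ('1' = Low-to-High, '0' = High-to-Low):

```python
# Encode the bitstream 10111011 with NRZ, NRZI, and Manchester.
def nrz(bits):
    """'1' -> High, '0' -> Low."""
    return ["H" if b == "1" else "L" for b in bits]

def nrzi(bits, level="L"):
    """'1' toggles the level, '0' keeps the previous level."""
    out = []
    for b in bits:
        if b == "1":
            level = "H" if level == "L" else "L"
        out.append(level)
    return out

def manchester(bits):
    """Each bit is a (first-half, second-half) pair with a mid-bit edge."""
    return [("L", "H") if b == "1" else ("H", "L") for b in bits]

bits = "10111011"
print(nrz(bits))    # ['H', 'L', 'H', 'H', 'H', 'L', 'H', 'H']
print(nrzi(bits))   # ['H', 'H', 'L', 'H', 'L', 'L', 'H', 'L']
print(manchester(bits))
```

Running it confirms the hand-derived levels above, and the nrzi function makes the clock-recovery problem visible: a run of '0' bits produces no transitions at all.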
b) Comparison: NRZI vs. Manchester

| Criteria | NRZI | Manchester |
| Synchronization | No guaranteed transition per bit | Transition in every bit (better sync) |
| Bandwidth | Lower (no mid-bit transitions needed) | Higher (requires double the bandwidth) |
| Complexity | Less complex to implement | More complex due to mid-bit transitions |
| Error Detection | Poor (no transition = hard to detect errors) | Better (missing transition = error) |
| Clock Recovery | Difficult with long sequences of 0s | Easy due to regular transitions |

Disadvantages of NRZI over Manchester

1. Lack of synchronization: Long sequences of 0s in NRZI result in no transitions,


making clock recovery difficult.
2. Harder error detection: In NRZI, missing a transition can go undetected, unlike
Manchester which ensures a transition in each bit.
3. Less robust for noisy environments: NRZI is more susceptible to timing and voltage
errors because of fewer transitions.

b) Identify and discuss any two byte oriented protocol with its header
format.

Byte-oriented protocols treat the frame as a collection of bytes (or characters) rather than a stream
of individual bits. These protocols use specific control characters (also known as sentinels) to define
the start and end of a data frame, and are typically easier to implement in software that was
originally designed for character-based terminals. Two commonly discussed byte-oriented protocols
are BISYNC (Binary Synchronous Communication) and PPP (Point-to-Point Protocol).

1. BISYNC (Binary Synchronous Communication)

Definition:

BISYNC is a byte-oriented communication protocol developed by IBM in the 1960s. It was designed to transfer data between mainframe computers and terminals over serial communication lines. BISYNC is called binary synchronous because it sends data in byte format (8 bits) and uses synchronization characters to keep sender and receiver in sync.
How it works:

BISYNC uses special characters called "sentinels" to mark where a frame (block of data)
starts and ends. These special characters include:

• SYN – Synchronization
• SOH – Start of header
• STX – Start of text
• ETX – End of text
• DLE – Data Link Escape

Steps:

1. The frame begins with one or more SYN characters to synchronize the receiver.
2. A SOH character is used to start the header section (contains control info like
address).
3. Then comes STX, marking the beginning of actual data.
4. The data (also called Body) is then sent.
5. ETX marks the end of data.
6. A CRC (Cyclic Redundancy Check) is added at the end to detect errors.

Important: If ETX or DLE appears inside the data, they are escaped using the DLE character
(this is called character stuffing).

What it does:

• Breaks data into frames


• Helps devices understand where data starts and ends
• Uses control characters to ensure correct communication
• Detects errors using CRC

Where it is used:

• Earlier used to connect mainframe computers to terminals


• Legacy systems in banking and government
• Some old telecommunication systems
BISYNC Header Format:

| SYN | SYN | SOH | Header | STX | Body | ETX | CRC |

• SYN – Synchronization
• SOH – Start of header
• Header – Contains control info
• STX – Start of text
• Body – Actual message
• ETX – End of text
• CRC – Error detection field

2. PPP (Point-to-Point Protocol)

Definition:

PPP is a byte-oriented protocol used to transmit data between two directly connected
computers (point-to-point). It is most commonly used to carry Internet traffic over dial-up
connections, DSL, and serial links.

How it works:

PPP also uses a sentinel-based framing method, where a special byte called Flag
(01111110) marks the start and end of each frame.

Steps:

1. A frame starts with the Flag byte (01111110).


2. Then comes the Address and Control fields (usually set to default values).
3. The Protocol field tells what kind of data is being sent (e.g., IP, IPX).
4. The Payload carries the actual data.
5. A Checksum is added for error checking.
6. The frame ends with another Flag byte.

If the Flag byte appears inside the data, it is escaped using a process called byte stuffing
(similar to BISYNC).
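The byte stuffing described above can be sketched in a few lines. This is an illustrative simplification: any FLAG or ESC byte inside the payload is preceded by ESC, so the receiver never mistakes payload for a frame boundary. (Real PPP additionally XORs the escaped byte with 0x20, per RFC 1662; that detail is omitted here for clarity.)

```python
# Sentinel-based byte stuffing: escape FLAG/ESC bytes in the payload so
# the frame delimiters remain unambiguous.
FLAG, ESC = 0x7E, 0x7D

def stuff(payload: bytes) -> bytes:
    out = bytearray([FLAG])              # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)              # escape the special byte
        out.append(b)
    out.append(FLAG)                     # closing flag
    return bytes(out)

def unstuff(frame: bytes) -> bytes:
    body = frame[1:-1]                   # strip the two flag bytes
    out, i = bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                       # the next byte is literal data
        out.append(body[i])
        i += 1
    return bytes(out)

payload = bytes([0x01, 0x7E, 0x02, 0x7D])   # contains FLAG and ESC
frame = stuff(payload)
print(frame.hex())
print(unstuff(frame) == payload)            # True
```

Stuffing and destuffing are exact inverses, so the payload may contain any byte value; the cost is a slightly larger frame whenever the data happens to contain the sentinel bytes.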
What it does:

• Transfers IP packets and other data over serial links


• Detects and avoids transmission errors
• Supports features like authentication, compression, and encryption
• Can negotiate frame settings using LCP (Link Control Protocol)

Where it is used:

• Dial-up internet connections


• DSL and broadband connections
• Serial communication between routers
• VPNs (in some cases)

PPP Header Format:

| Flag | Address | Control | Protocol | Payload | Checksum | Flag |

• Flag – Frame boundary (01111110)


• Address – Usually set to default (0xFF)
• Control – Usually set to default (0x03)
• Protocol – Identifies upper layer protocol (e.g., IP)
• Payload – Data being transmitted
• Checksum – Error detection
• Flag – End of frame

Summary

| Feature             | BISYNC                                 | PPP                                                           |
| Protocol Type       | Byte-oriented                          | Byte-oriented                                                 |
| Developed By        | IBM                                    | IETF (Internet Engineering Task Force)                        |
| Framing Technique   | Sentinel-based with character stuffing | Sentinel-based with byte stuffing                             |
| Usage               | Mainframe-to-terminal communication    | Internet data over serial/point-to-point links (dial-up, DSL) |
| Control Characters  | SYN, SOH, STX, ETX, DLE                | Flag (01111110), Protocol field, LCP for negotiation          |
| Error Detection     | CRC (Cyclic Redundancy Check)          | Checksum (usually 2 bytes, optionally 4 bytes)                |

| Feature          | BISYNC                   | DDCMP           | PPP                                |
| Framing Type     | Sentinel-based           | Byte-counting   | Sentinel-based                     |
| Frame Delimiters | STX, ETX, DLE            | COUNT field     | Flag (01111110)                    |
| Data Stuffing    | Character stuffing       | Not needed      | Character stuffing                 |
| Error Detection  | CRC                      | CRC             | Checksum (2/4 bytes)               |
| Usage            | Mainframe-terminal links | DECNET networks | Internet over point-to-point links |

Module-3

a) Explain characteristics of datagram and virtual circuit switching.

b) List the importance of routing. Provide the steps and the protocol followed in the Distance Vector
algorithm, with examples.

a) Explain an algorithm that addresses the problem of loops in bridges.

b) With a diagram, discuss the IPv6 header format.


Module-4

a) Discuss state transition diagram in reliable byte stream.

b) Explain FIFO and Fair Queuing disciplines with real-world applications.

a) Provide a scenario for congestion in a network and provide an avoidance mechanism.

b) Justify why UDP is a simple demultiplexer.

Module-5

a) Write a note on cryptographic building blocks.

(a) Note on Cryptographic Building Blocks

Cryptographic building blocks are essential components used in securing communication


across networks. They help achieve confidentiality, data integrity, and authentication. The
main cryptographic building blocks include ciphers, symmetric and asymmetric
encryption, block ciphers, and authenticators.

1. Principles of Ciphers

• Encryption is a process that transforms a message into an unintelligible form so that


unauthorized parties cannot understand it.
• The sender uses an encryption function to convert plaintext into ciphertext, which is
sent over the network.
• The receiver uses a secret decryption function, which is the inverse of the
encryption function, to recover the original plaintext.

The ciphertext is unintelligible to any eavesdropper who does not know the decryption
function.

• A cipher refers to the transformation represented by the encryption and


corresponding decryption functions.
• The basic requirement for an encryption algorithm is that it should convert plaintext
into ciphertext in such a way that only the intended recipient, who has the decryption
key, can recover the original plaintext.

2. Types of Cryptographic Attacks

When an attacker obtains a piece of ciphertext, they may have more information than just the
ciphertext. There are several types of attacks:

• Known Plaintext Attack: Attacker knows both the ciphertext and some part of the
corresponding plaintext.
• Ciphertext-only Attack: Attacker only has access to ciphertext.
• Chosen Plaintext Attack: Attacker can encrypt plaintexts of their choice to study the
resulting ciphertexts.

3. Block Ciphers

A block cipher encrypts fixed-size blocks of plaintext using a key. One common mode of
operation is Cipher Block Chaining (CBC).

• In CBC mode, each plaintext block is


XORed with the previous ciphertext
block before encryption.
• This makes the ciphertext of each
block dependent on the previous
blocks, adding context-based security.
• The first plaintext block, which has
no preceding ciphertext, is XORed
with a random number called the
Initialization Vector (IV).
• The IV is transmitted along with the
ciphertext so the first block can be
properly decrypted.

Example to Illustrate CBC

Let’s assume we have three plaintext blocks:

Plaintext Block 1 → P1
Plaintext Block 2 → P2
Plaintext Block 3 → P3

Encryption process:

1. P1 ⊕ IV → Encrypt → Ciphertext Block 1 (C1)


2. P2 ⊕ C1 → Encrypt → Ciphertext Block 2 (C2)
3. P3 ⊕ C2 → Encrypt → Ciphertext Block 3 (C3)

Decryption process:

1. Decrypt C1 → Get intermediate result → XOR with IV → Recover P1


2. Decrypt C2 → Get intermediate result → XOR with C1 → Recover P2
3. Decrypt C3 → Get intermediate result → XOR with C2 → Recover P3
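The chaining above can be demonstrated with a toy block cipher. Here the "encryption" is just XOR with a fixed key, purely to make the XOR chaining visible; a real system would use a cipher such as AES and a randomly generated IV.

```python
BLOCK = 4
KEY = bytes([0x5A] * BLOCK)  # toy key -- illustration only

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_block(p: bytes) -> bytes:  # stand-in for a real block cipher
    return xor(p, KEY)

def decrypt_block(c: bytes) -> bytes:  # XOR is its own inverse
    return xor(c, KEY)

def cbc_encrypt(blocks, iv):
    prev, out = iv, []
    for p in blocks:
        c = encrypt_block(xor(p, prev))  # Ci = E(Pi XOR C(i-1)), C0 = IV
        out.append(c)
        prev = c
    return out

def cbc_decrypt(blocks, iv):
    prev, out = iv, []
    for c in blocks:
        out.append(xor(decrypt_block(c), prev))  # Pi = D(Ci) XOR C(i-1)
        prev = c
    return out

blocks = [b"ABCD", b"EFGH", b"ABCD"]  # note: block 1 and block 3 are identical
iv = bytes(BLOCK)                     # all-zero IV just for the demo
ct = cbc_encrypt(blocks, iv)
assert cbc_decrypt(ct, iv) == blocks
assert ct[0] != ct[2]  # equal plaintext blocks still encrypt differently
```

The last assertion shows the point of chaining: because each ciphertext block depends on everything before it, repeated plaintext blocks do not leak their repetition.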

4. Symmetric Key Ciphers

• In symmetric-key encryption, both sender and receiver share the same key.
• The same key is used for both encryption and decryption.

NIST Symmetric Standards:

• The U.S. National Institute of Standards and Technology (NIST) has developed
several symmetric-key ciphers:

i. DES (Data Encryption Standard):

o Uses a 56-bit key.


o No known attack better than brute-force.
o However, due to modern processing power, brute-force attacks have become
faster, making DES insecure by today’s standards.

ii. Triple DES (3DES):

o Designed to improve DES's security.


o Uses three different DES keys (DES-key1, DES-key2, DES-key3).

Encryption Process in 3DES:

1. Encrypt the plaintext using DES-key1.


2. Decrypt the output using DES-key2.
3. Encrypt again using DES-key3.

The result is the final ciphertext.

Decryption Process in 3DES:

1. Decrypt the ciphertext using DES-key3.


2. Encrypt the output using DES-key2.
3. Decrypt again using DES-key1.

The result is the original plaintext.
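The encrypt–decrypt–encrypt (EDE) structure can be sketched with stand-in functions. The XOR "DES" below is only a placeholder showing the key ordering and the inverse sequence on decryption; it is not real DES.

```python
def E(block: bytes, key: bytes) -> bytes:
    """Stand-in for DES encryption (toy XOR cipher)."""
    return bytes(a ^ b for a, b in zip(block, key))

D = E  # XOR is its own inverse, so "decrypt" equals "encrypt" for this toy

def triple_des_encrypt(p, k1, k2, k3):
    # Encrypt with key1, decrypt with key2, encrypt with key3
    return E(D(E(p, k1), k2), k3)

def triple_des_decrypt(c, k1, k2, k3):
    # Inverse order: decrypt with key3, encrypt with key2, decrypt with key1
    return D(E(D(c, k3), k2), k1)

k1, k2, k3 = b"\x11" * 8, b"\x22" * 8, b"\x33" * 8
p = b"PLAINTXT"
c = triple_des_encrypt(p, k1, k2, k3)
assert triple_des_decrypt(c, k1, k2, k3) == p
```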

5. Public Key Ciphers (Asymmetric Encryption)


• An alternative to symmetric-key ciphers is asymmetric, or public-key ciphers.
• Instead of a single key shared by two participants, a public-key cipher uses a pair of
related keys:
o One for encryption
o One for decryption
• The pair of keys is “owned” by just one participant:
o The decryption key is private (kept secret).
o The encryption key is public (shared openly).
• The owner keeps the decryption key secret so that only the owner can decrypt
messages.
o This key is called the private key.

• The owner makes the encryption key public, so anyone can encrypt messages for
the owner.
o This key is called the public key.
• For the scheme to work, it must not be possible to deduce the private key from the
public key.
• Any participant can:
o Get the public key
o Send an encrypted message to the owner
o Only the owner has the private key necessary to decrypt it

Authentication using Public Keys:

• The private key can also be used to encrypt a message.


• Anyone with the public key can decrypt it, verifying that it came from the private key
owner.
• This technique is useful for authentication, not confidentiality.
Notable Public-Key Algorithms:

• RSA (Rivest–Shamir–Adleman):
o Based on the computational difficulty of factoring large numbers.
• ElGamal:
o Based on the discrete logarithm problem.
o Requires keys of at least 1024 bits for security.

The concept of public-key cryptography was introduced in 1976 by Diffie and Hellman.
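A textbook RSA example with tiny primes makes the key pair concrete. These numbers are for illustration only: real RSA moduli are 2048 bits or more, and production code never uses raw (unpadded) RSA.

```python
# Textbook RSA with toy primes -- illustration only, not secure
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (modular inverse; Python 3.8+)

m = 65                    # message, encoded as a number < n
c = pow(m, e, n)          # encrypt with the PUBLIC key
assert pow(c, d, n) == m  # only the private key recovers m

# Authentication: transform with the PRIVATE key, verify with the public key
s = pow(m, d, n)
assert pow(s, e, n) == m  # anyone with the public key can check the origin
```

The last two lines mirror the authentication use described above: since only the owner could have produced `s`, a successful check with the public key proves where the message came from.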

6. Authenticator

• An authenticator is a value added to a message to:


o Verify authenticity (prove sender’s identity)
o Ensure data integrity (ensure message hasn’t been altered)

It is used to confirm that a message comes from a legitimate source and was not modified
during transmission.
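One widely used authenticator is a message authentication code (MAC). A minimal sketch using Python's standard hmac module follows; the key and message are made up for illustration.

```python
import hmac
import hashlib

key = b"shared-secret"                 # known only to sender and receiver
msg = b"transfer $100 to Bob"

# Sender attaches the authenticator (an HMAC tag) to the message
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, tag: str) -> bool:
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

assert verify(key, msg, tag)                         # authentic, unmodified
assert not verify(key, b"transfer $900 to Bob", tag) # tampering is detected
```

Any change to the message (or a tag produced without the key) fails verification, which is exactly the authenticity and integrity guarantee described above.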

Conclusion

Cryptographic building blocks like ciphers, encryption modes, keys, and authenticators
form the foundation of network security. They ensure that sensitive data remains
confidential, authentic, and untampered, even when transmitted over insecure channels.

b) Discuss any four application protocols.

1. HTTP (Hypertext Transfer Protocol)


• What it is: HTTP is an application-layer protocol used primarily for transferring web
pages, media, and data over the internet. It forms the foundation of data
communication on the World Wide Web.
• Why it is used: It allows web browsers (clients) to communicate with web servers,
enabling users to access websites, download content, or submit forms. It is a stateless
protocol, meaning each request is independent and does not store previous interaction
data.
• How it works: HTTP follows a request-response model. A client sends an HTTP
request to a server using methods like GET, POST, PUT, or DELETE. The server then
responds with an HTTP response, which includes a status code (like 200 OK or 404
Not Found) and the requested content (like HTML, JSON, or images).
• Other details:
o Port Number: 80 for HTTP, 443 for HTTPS (secure version using SSL/TLS).
o Versions: HTTP/1.1 (most widely used), HTTP/2 (supports multiplexing), and
HTTP/3 (based on QUIC protocol).
o Security: HTTPS (HTTP Secure) encrypts data for secure communication.
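The request–response exchange is plain text over the connection. A minimal sketch that builds an HTTP/1.1 request and parses a response status line (the helper names are invented for illustration):

```python
def build_request(method: str, path: str, host: str) -> str:
    """Assemble a minimal HTTP/1.1 request; header lines end with CRLF."""
    return (
        f"{method} {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"  # blank line terminates the header section
    )

def parse_status_line(line: str):
    """Split 'HTTP/1.1 200 OK' into version, numeric code, reason phrase."""
    version, code, reason = line.split(" ", 2)
    return version, int(code), reason

req = build_request("GET", "/index.html", "example.com")
assert req.startswith("GET /index.html HTTP/1.1")
assert parse_status_line("HTTP/1.1 404 Not Found") == ("HTTP/1.1", 404, "Not Found")
```

Because the format is this simple and line-oriented, HTTP is easy to implement and extend, which is one reason for its ubiquity.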

Advantages:

• Widely adopted and supported


• Easy to implement and extend
• Can carry various types of data: HTML, JSON, XML, multimedia

Limitations:

• Stateless by default
• Not secure without HTTPS
• Can be slow for large or interactive applications (HTTP/1.1)

Use Cases:

• Accessing web pages


• Interacting with REST APIs
• Web-based applications

2. FTP (File Transfer Protocol)

• What it is: FTP is a protocol used to transfer files between a client and a server on a
computer network. It enables uploading, downloading, and managing files remotely.
• Why it is used: FTP is helpful for website developers, system administrators, and
users who need to transfer large or multiple files across networks reliably and
efficiently.
• How it works: FTP uses two separate channels:
o Control channel (Port 21): Handles the commands and responses between
client and server.
o Data channel (Port 20): Transfers the actual file content.
o It can operate in active or passive mode, depending on how connections are
established for data transfer.
o Users authenticate using a username and password, or it can be used in
anonymous mode.
• Other details:
o File operations supported include upload, download, rename, delete, etc.
o Not secure by default. For secure transfers, variants like FTPS (FTP Secure)
or SFTP (SSH File Transfer Protocol) are used.

Advantages:

• Fast for large file transfers


• Simple to use
• Widely supported by tools like FileZilla

Limitations:

• Not secure (transfers data, including credentials, in plain text)


• Can be blocked by firewalls (active mode especially)
• No built-in encryption unless used with FTPS/SFTP

Use Cases:

• Website deployment
• Backup systems
• File sharing between organizations

3. SMTP (Simple Mail Transfer Protocol)

• What it is: SMTP is an email protocol used to send messages from an email client to
a mail server or between mail servers.
• Why it is used: It is the standard protocol for sending emails across the internet.
Without SMTP, email delivery from one person to another would not be possible.
• How it works: When a user sends an email, the SMTP client connects to the SMTP
server using port 25 or 587.
o The sender’s email client sends the email message to the server.
o The server then routes the email to the recipient’s server (using DNS to
resolve the domain).
o SMTP does not handle receiving or reading emails; for that, IMAP or
POP3 is used.
• Other details:
o Ports: 25 (default), 587 (submission), 465 (for SMTPS - secure).
o SMTP uses a push model, which means it actively sends data to the server.
o SMTP headers carry metadata like sender, recipient, subject, and date.
Advantages:

• Reliable and fast delivery


• Supports relaying and forwarding emails
• Compatible with most email clients

Limitations:

• Lacks built-in authentication and encryption


• Can be exploited for spam if not secured properly
• Requires complementary protocols to retrieve emails

Use Cases:

• Email services (Gmail, Outlook, etc.)


• Automated notification systems
• Sending marketing or transactional emails

4. DNS (Domain Name System)

• What it is: DNS is a protocol that translates human-readable domain names (like
www.google.com) into IP addresses (like 142.250.77.78) which computers use to
identify each other.
• Why it is used: Users find it easier to remember domain names than numerical IP
addresses. DNS makes internet usage more user-friendly and scalable.
• How it works:
o When a user types a URL in a browser, a DNS query is sent to a DNS
resolver.
o The resolver checks its cache or queries other DNS servers (root, TLD, and
authoritative servers) until it gets the IP address.
o The resolved IP is then returned to the browser to establish a connection to the
destination server.
o DNS can perform recursive or iterative queries depending on the type of
server interaction.
• Other details:
o Port: 53 (UDP for normal queries, TCP for large responses).
o DNS records include:
▪ A record (IPv4 address),
▪ AAAA record (IPv6 address),
▪ MX (mail exchange),
▪ CNAME (canonical name), etc.
▪ Security extension: DNSSEC adds integrity and authentication to
DNS responses to prevent spoofing

Advantages:

• Scalable and fast due to caching


• Essential for usability of the web
• Supports redundancy and fault tolerance

Limitations:

• Can be vulnerable to spoofing or poisoning attacks


• Privacy concerns due to unencrypted queries (solved by DNS over HTTPS/DNSSEC)
• Downtime of DNS servers can affect access

Use Cases:

• Web browsing
• Email delivery
• CDN redirection
• Load balancing


a) Discuss predistribution of public and symmetric keys.


Predistribution of Public and Symmetric Keys

Key predistribution refers to the process of establishing and distributing cryptographic keys
to entities before secure communication can take place. This is a fundamental step in
enabling secure systems, especially in environments where secure key exchange over a
network is not initially possible.

1. Predistribution of Public Keys

In public key cryptography, each user generates a public/private key pair. The private key is
kept secret, and the public key is shared. However, simply making the public key available is
not enough—others must be confident that the key truly belongs to the claimed owner.

Key Concepts:

• Anyone can generate a public/private key pair, but proving the identity linked to
the key is difficult.
• A user can publish their public key on a website or send it over email, but this can be
intercepted or replaced in a man-in-the-middle attack.
• The solution is to use a Public Key Infrastructure (PKI), which allows users to
trust public keys by verifying them with Certificate Authorities (CAs).
Challenges:

• Authentication of public keys: An adversary can forge a key and claim it belongs to
someone else.
• Secure distribution: Public keys must be distributed in a verifiable way to avoid
impersonation attacks.

How It Works:

• The user sends their public key to a Certificate Authority (CA) along with identity
verification.
• The CA verifies the user’s identity and creates a digital certificate.
• The digital certificate contains:
o User’s identity (e.g., name, domain)
o Public key
o Name of the issuing CA
o Digital signature of the CA
o Algorithm used
o Expiration date
• The certificate is in X.509 format and is digitally signed by the CA.
• Anyone who trusts the CA can now trust the public key in the certificate.

Solution: Public Key Infrastructure (PKI)

A PKI is a system that:

• Binds public keys to verified identities, often using certificates.


• Uses Certification Authorities (CAs) to sign certificates verifying the identity of the
key owner.

Components of a Certificate (e.g., X.509):

• Identity of the certificate holder


• Their public key
• Identity of the signer (CA)
• CA’s digital signature
• Signature algorithm details
• (Optional) Expiration time

Models of Trust:

• Hierarchy (CA-based): A root CA certifies subordinate CAs, which certify users.


Trust is inherited through chains.
• Web of Trust (PGP): Users sign each other’s keys. Trust is decentralized and
subjective.

Certificate Revocation:

• If a private key is compromised, the corresponding certificate must be revoked.


• Certificate Revocation List (CRL): A digitally signed list of revoked certificates.
• Certificates typically include expiration dates to limit how long they are valid.

2. Predistribution of Symmetric Keys

Symmetric key cryptography uses the same key for both encryption and decryption. The
challenge in this system is securely sharing the same key with the other party. This is
especially difficult when the number of users increases.

Challenges:

• Scalability: For N users, each pair requires a unique key → N(N-1)/2 keys.
• Confidentiality: Keys must be distributed securely and kept secret.
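The quadratic growth of pairwise keys is easy to check:

```python
def pairwise_keys(n: int) -> int:
    """Keys needed if every pair of n users shares its own symmetric key."""
    return n * (n - 1) // 2

assert pairwise_keys(2) == 1
assert pairwise_keys(10) == 45
assert pairwise_keys(1000) == 499500  # why pairwise keys do not scale
```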

Solution: Key Distribution Center (KDC)

A KDC is a trusted third party that:

• Shares a unique secret key with each user (one key per user, N keys in total).
• When Alice wants to communicate with Bob, the KDC:
o Authenticates both users
o Generates a temporary session key
o Sends the session key encrypted using the pre-shared keys
• Alice and Bob then communicate using the session key, without involving the KDC
further.

How It Works:

1. User A requests a session key from the KDC to communicate with User B.
2. KDC generates a session key and sends:
o The session key encrypted with A’s key.
o The same session key encrypted with B’s key (A forwards this to B).
3. A and B now both have the shared session key and can communicate securely.
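The KDC flow above can be simulated in a few lines. The XOR keystream "encryption" below is a stand-in for a real cipher, used only to show how one session key is wrapped under each party's pre-shared key; all keys are generated on the spot.

```python
import os
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a key (toy construction)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def enc(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

dec = enc  # XOR stream: decryption is the same operation

# Each user pre-shares one long-term key with the KDC
kdc_keys = {"alice": os.urandom(16), "bob": os.urandom(16)}

# KDC generates a fresh session key and wraps it for both parties
session_key = os.urandom(16)
for_alice = enc(kdc_keys["alice"], session_key)
for_bob = enc(kdc_keys["bob"], session_key)  # Alice forwards this blob to Bob

# Both sides recover the same session key without it ever crossing in the clear
assert dec(kdc_keys["alice"], for_alice) == session_key
assert dec(kdc_keys["bob"], for_bob) == session_key
```

After this exchange Alice and Bob communicate under `session_key` alone; the KDC is no longer involved, which is the pattern Kerberos builds on.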

Advantages:

• Reduces the number of keys in the system from N(N-1)/2 to just N (each user only
needs a single key shared with the KDC).
• Provides centralized control and better scalability.

Example:

• Kerberos is a real-world implementation of symmetric key predistribution using a


KDC. It uses session keys and timestamps to ensure secure authentication and
communication.

Summary Comparison

| Feature                    | Public Key Predistribution  | Symmetric Key Predistribution       |
| Key Type                   | Asymmetric (public/private) | Symmetric (same key for both users) |
| Number of Keys for N Users | N                           | N(N-1)/2                            |
| Trust Establishment        | PKI (CA, certificates)      | KDC (centralized key distribution)  |
| Scalability                | High (fewer keys)           | Low (many keys required)            |
| Security Concerns          | Authenticating public keys  | Securing the key exchange           |
| Revocation                 | Through CRL, expiration     | Not directly revocable              |
| Real-world Systems         | SSL/TLS, HTTPS (X.509, CA)  | Kerberos (KDC-based)                |

In conclusion, public key predistribution relies on a hierarchical or decentralized trust


system to authenticate and distribute keys securely, whereas symmetric key predistribution
uses a central authority (KDC) to manage keys and establish session-based
communications.

b) Discuss any two protocols which provide Infrastructure services.

Infrastructure service protocols support the internal operation of the Internet. They are not typically
used directly by end-users, but other applications and administrators depend heavily on them for
tasks such as name resolution, device monitoring, and network troubleshooting. Two of the most
important infrastructure protocols are the Domain Name System (DNS) and the Simple Network
Management Protocol (SNMP).

1. Domain Name System (DNS)


What is DNS?

The Domain Name System (DNS) is an essential infrastructure protocol that maps human-
readable domain names (such as www.google.com) to their corresponding IP addresses (such
as 142.250.182.132). This system allows users to access websites and services using easy-
to-remember names instead of numerical IP addresses required by computers.

How it Works

1. A user types a domain name into a web browser.


2. The system first checks the local DNS cache.
3. If not found, the query is sent to a DNS resolver, typically managed by the Internet
Service Provider (ISP).
4. The resolver contacts the root DNS server, then the Top-Level Domain (TLD) server
(such as .com), and finally the authoritative DNS server.
5. The authoritative server returns the IP address for the requested domain.
6. The resolver provides the IP address to the browser, which then connects to the server
hosting the website.

The DNS name resolution involves a series of queries passed from one server to another in a
hierarchical structure. Here’s the step-by-step process:

1. Initial Query:
The client first sends a query to its local DNS server. If it doesn’t have the answer
cached, the server begins querying other name servers.
2. Root Server:
The local DNS server sends the query to a root name server, which responds with a
referral to a Top-Level Domain (TLD) server (e.g., .edu for educational domains).
3. TLD Server:
The TLD server returns the address of the authoritative name server for the next
domain level (e.g., princeton.edu).
4. Authoritative Server:
This process continues down the hierarchy until the final authoritative name server for
the full domain (e.g., cs.princeton.edu) provides the IP address for
penguins.cs.princeton.edu.
5. Caching:
Intermediate responses are cached at each level (especially at the local DNS server),
improving efficiency for future queries.
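The referral chain can be modeled as a walk over a small dictionary of name servers: each "server" either knows the final answer or refers the resolver one level down. Server names and the IP address below are invented for illustration.

```python
# Toy model of iterative DNS resolution
SERVERS = {
    "root":            {"com": "tld-com-server"},             # root refers to TLD
    "tld-com-server":  {"amazon.com": "ns.amazon.com"},       # TLD refers to authoritative
    "ns.amazon.com":   {"www.amazon.com": "205.251.242.103"}, # authoritative answer (A record)
}

def resolve(name: str) -> str:
    server = "root"
    while True:
        zone = SERVERS[server]
        if name in zone:
            return zone[name]  # final answer reached
        # otherwise follow the referral for the matching domain suffix
        for suffix, next_server in zone.items():
            if name.endswith(suffix):
                server = next_server
                break
        else:
            raise KeyError(f"cannot resolve {name}")

assert resolve("www.amazon.com") == "205.251.242.103"
```

A real resolver also caches each referral and answer it sees, which is why most lookups never have to start again from the root.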

Example

Typing www.amazon.com in a browser results in DNS resolving it to the correct IP address,


enabling the browser to access Amazon’s servers.

Key Features

• Provides domain name to IP address translation


• Supports hierarchical name structure (root, TLD, authoritative servers)
• Uses caching to reduce lookup times and improve performance
• Offers load balancing through multiple IP responses
• Operates primarily over UDP port 53 for speed

Other Details

• Critical for almost all internet services to function


• Supports multiple record types such as A, AAAA, MX, CNAME, and TXT
• DNS security extensions (DNSSEC) can be used to prevent attacks like spoofing
2. Simple Network Management Protocol (SNMP)
What is SNMP?

Simple Network Management Protocol (SNMP) is used to monitor, manage, and configure
network devices such as routers, switches, servers, firewalls, and printers. It provides a
standardized framework for collecting information and managing network performance
remotely.

How it Works

1. Each network device runs an SNMP Agent, which collects and stores management
data.
2. A central SNMP Manager sends requests to retrieve or modify data from agents.
3. The agent stores data in a database called the Management Information Base (MIB).
4. The manager can issue GET, SET, or WALK commands to communicate with the
agent.
5. Devices may also send unsolicited alerts (called Traps) to the manager when specific
events occur.
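Conceptually, the agent's MIB is a mapping from object identifiers (OIDs) to values that the manager reads and writes. A toy sketch of that idea follows; the OIDs shown are the standard sysName and ifInOctets objects, while the values and helper names are invented.

```python
# Agent side: the MIB maps OIDs to managed values
mib = {
    "1.3.6.1.2.1.1.5.0": "router-1",       # sysName.0
    "1.3.6.1.2.1.2.2.1.10.1": 123456789,   # ifInOctets for interface 1
}

# Manager side: GET reads a value, SET changes one
def snmp_get(oid: str):
    return mib.get(oid)

def snmp_set(oid: str, value) -> None:
    mib[oid] = value

assert snmp_get("1.3.6.1.2.1.1.5.0") == "router-1"
snmp_set("1.3.6.1.2.1.1.5.0", "core-router")   # rename the device remotely
assert snmp_get("1.3.6.1.2.1.1.5.0") == "core-router"
```

In real SNMP these operations travel as UDP messages (port 161) between manager and agent, and Traps flow the other way (port 162) when the agent detects an event.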

Example

An administrator uses SNMP to monitor bandwidth usage on a router. If traffic exceeds a


defined limit, SNMP sends a Trap alert to the manager, helping in real-time response.

Key Features

• Enables centralized network monitoring and control


• Uses UDP port 161 for requests and 162 for Traps
• Lightweight and efficient communication protocol
• Traps allow real-time fault detection and alerts
• Supported by most enterprise-grade networking hardware and software

Other Details

• Three versions: SNMPv1 (basic), SNMPv2c (improved performance), SNMPv3 (adds


security with authentication and encryption)
• Helps in fault management, configuration management, and performance analysis
• Compatible with both IPv4 and IPv6 networks

Conclusion

Both DNS and SNMP are fundamental infrastructure protocols:

• DNS enables the internet to be user-friendly by resolving names to IPs.


• SNMP provides tools for network administrators to maintain, monitor, and optimize
network health.
These protocols are essential for scalable, reliable, and manageable networking
systems.
