Acn Old

The document covers fundamental concepts of web communication protocols, including HTTP, FTP, DNS, and transport layer protocols like TCP and UDP. It discusses the structure of HTTP messages, the role of cookies, web caching, and the differences between SMTP and HTTP. Additionally, it explains DNS resolution, P2P file distribution, multiplexing, demultiplexing, and the importance of routing algorithms in networking.

UNIT – 1

1. Define HTTP and explain its primary purpose in web communication.


• HTTP (Hypertext Transfer Protocol) is a stateless, application-layer protocol that enables
communication between clients and servers on the web.
• Primary Purpose: It defines how requests and responses are structured to transfer web resources
like HTML pages, images, and data, enabling seamless web browsing and data exchange.

2. What is the difference between persistent and non-persistent HTTP connections?


• In a persistent connection, a single connection is maintained between the client and server for
multiple HTTP requests and responses, reducing latency and improving efficiency. It is used in
HTTP/1.1 by default.
• In a non-persistent connection, a new connection is established for each HTTP request-response
pair. Once the server sends the response, the connection is closed, leading to higher overhead and
slower performance. Common in HTTP/1.0.

3. Explain the role of cookies in user-server interaction.


• A cookie is a small piece of data sent by the server and stored on the user's browser.
• It is included in subsequent requests, allowing the server to track users and maintain session state.
• Cookies help in user authentication, session management, and storing preferences for a better
browsing experience.

4. List any two commands used in FTP and their functions.


1. USER – Sends the username to the FTP server for authentication.
Example: USER anonymous
2. RETR (Retrieve) – Downloads a file from the server to the client.
Example: RETR file.txt
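
As a hedged illustration, the sketch below uses Python's standard ftplib module, which issues these FTP commands (USER/PASS during login, RETR for downloads) on the caller's behalf; the host name and file name are placeholders, not part of the original question.

# Minimal sketch using Python's standard ftplib; host and file name are placeholders.
from ftplib import FTP

ftp = FTP("ftp.example.com")                    # connect to a (hypothetical) FTP server
ftp.login("anonymous", "guest@example.com")     # sends the USER (and PASS) commands
with open("file.txt", "wb") as fh:
    ftp.retrbinary("RETR file.txt", fh.write)   # issues RETR and writes the file locally
ftp.quit()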

5. What are the main services provided by DNS in the Internet?


1. Name Resolution – Translates domain names into IP addresses.
2. Reverse DNS Lookup – Resolves IP addresses back to domain names.
3. Email Routing – Uses MX records to direct emails to the correct mail server.
4. Load Balancing – Distributes traffic among multiple servers for better performance.
5. Service Discovery – Helps locate services on a network, such as printers or file servers.

1. Describe the structure and components of an HTTP request and response message.

HTTP Request Structure


Request Line: Specifies the HTTP method (e.g., GET, POST), resource URL, and HTTP version.
Example: GET /index.html HTTP/1.1
Headers: Provide additional information like the client type, encoding, and cache preferences.
Example: User-Agent: Mozilla/5.0
Message Body (optional): Used in methods like POST to send data (e.g., form inputs).

HTTP Response Structure


The server responds with:
1) Status Line: Indicates the HTTP version, status code, and reason phrase.
Example: HTTP/1.1 200 OK.
2) Headers: Provide metadata like content type, content length, and caching information.
Example: Content-Type: text/html; charset=UTF-8.
3) Message Body: Contains the requested resource (e.g., HTML, JSON, or an image).

Client (Browser)                                      Server
      |                                                 |
      |  HTTP Request                                   |
      |  GET /index.html HTTP/1.1                       |
      |  Host: www.example.com                          |
      |  User-Agent: Mozilla/5.0                        |
      | ----------------------------------------------> |
      |                                                 |
      |  HTTP Response                                  |
      |  HTTP/1.1 200 OK                                |
      |  Content-Type: text/html                        |
      |  <html><body>...</body></html>                  |
      | <---------------------------------------------- |
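
A minimal sketch of the same exchange using Python's standard http.client is shown below; www.example.com is a placeholder host, and the printed fields correspond to the status line, a header, and the message body described above.

# Minimal sketch with Python's standard http.client; www.example.com is a placeholder host.
import http.client

conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("GET", "/index.html", headers={"User-Agent": "Mozilla/5.0"})
resp = conn.getresponse()

print(resp.version, resp.status, resp.reason)   # status line fields, e.g. 11 200 OK
print(resp.getheader("Content-Type"))           # one of the response headers
body = resp.read()                              # message body (the HTML document)
conn.close()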

2. Explain how web caching improves web performance and describe the concept of the Conditional GET.

Web caching is a mechanism used to store copies of web resources (e.g., HTML pages, images, JavaScript,
CSS) temporarily, enabling faster retrieval on subsequent requests.
How Web Caching Improves Performance:
1. Reduced Latency: Cached resources load faster, improving page speed.
2. Lower Bandwidth Usage: Fewer requests are sent to the origin server, saving network resources.
3. Decreased Server Load: Reduces the number of requests a web server needs to handle.
4. Improved User Experience: Faster page loads result in a better browsing experience.

A Conditional GET ensures the server sends a resource only if it has changed since the last request. This
avoids unnecessary data transfer.
Mechanism of Conditional GET:
• Last-Modified Header: The server sends the resource with a timestamp (Last-Modified: Wed, 12
Feb 2025 10:00:00 GMT).
• If-Modified-Since Header: The client sends a request with If-Modified-Since: Wed, 12 Feb 2025
10:00:00 GMT.
• If the resource has not changed, the server responds with 304 Not Modified, reducing bandwidth
usage.
• If the resource has changed, the server sends a fresh copy with a 200 OK response.
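
The sketch below shows one way to issue a Conditional GET from Python's standard http.client, assuming a placeholder host and the timestamp from the example above; a 304 response means the cached copy can be reused.

# Sketch of a Conditional GET; host, path, and date are placeholders.
import http.client

conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/index.html",
             headers={"If-Modified-Since": "Wed, 12 Feb 2025 10:00:00 GMT"})
resp = conn.getresponse()

if resp.status == 304:
    print("304 Not Modified - reuse the cached copy")   # no body sent, saving bandwidth
elif resp.status == 200:
    fresh_copy = resp.read()                            # server sent an updated resource
conn.close()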

3. Compare SMTP and HTTP, highlighting their similarities and differences in communication protocols.
Similarities:
1. Both are application-layer protocols used for communication over the internet.
2. Both follow a client-server model for exchanging data.
3. Both use TCP for reliable data transmission.
Differences:
• Purpose: SMTP sends emails between servers and clients; HTTP transfers web pages and data.
• Model: SMTP is push-based (emails are pushed to servers); HTTP is pull-based (clients request resources).
• Ports: SMTP uses 25, 465, 587; HTTP uses 80, 443.
• State: SMTP is stateful (maintains session info); HTTP is stateless (each request is independent).
Conclusion:
SMTP is used for email transfer, while HTTP is for web browsing. Both use TCP but differ in data exchange
models and purpose.

4. Explain the process of DNS resolution and the importance of DNS records in this process.
How DNS Works
DNS (Domain Name System) operates in a hierarchical, distributed manner to resolve domain names into
IP addresses. The process includes:
1. User Request – The browser first checks its local cache when a user enters a URL (e.g.,
www.example.com).
2. Query to DNS Resolver – If not found, the request is sent to the local DNS resolver (usually
provided by an ISP).
3. Recursive Query – If still unresolved, the resolver queries root DNS servers.
4. Root DNS Servers – These direct the query to the TLD (Top-Level Domain) servers (e.g., .com).
5. TLD Servers – These refer the resolver to the Authoritative DNS Server for the domain.
6. Authoritative DNS Server – This responds with the IP address of the requested domain (e.g.,
93.184.216.34).
7. Website Access – The browser uses the IP to connect to the web server, loading the website.

Importance of DNS Records:


DNS records store information that helps in domain resolution. Key records include:
• A Record – Maps a domain to an IPv4 address.
• AAAA Record – Maps a domain to an IPv6 address.
• CNAME Record – Aliases one domain to another (e.g., www.example.com → example.com).
• MX Record – Directs email to the correct mail server.
• TXT Record – Stores additional information for verification/security (e.g., SPF, DKIM).
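
As a small illustration, the sketch below performs A, AAAA, and reverse (PTR) lookups with Python's standard socket module; www.example.com is a placeholder, and MX/TXT records are not shown because the standard library does not expose them.

# Sketch of name resolution with Python's standard socket module (placeholder host).
# Only A/AAAA/PTR lookups are shown; MX and TXT records need an external resolver library.
import socket

ipv4 = socket.gethostbyname("www.example.com")          # A record lookup
print("A:", ipv4)

for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", None):
    if family == socket.AF_INET6:                       # AAAA record, if one is published
        print("AAAA:", sockaddr[0])

host, _, _ = socket.gethostbyaddr(ipv4)                 # reverse DNS (PTR) lookup
print("PTR:", host)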

5. Discuss the working and advantages of Peer-to-Peer (P2P) file distribution, including the role of
Distributed Hash Tables (DHTs).
P2P file sharing allows users to exchange files directly without a central server. Each peer acts as both a
downloader and an uploader, improving efficiency and scalability.
• File Splitting & Parallel Downloads – Large files are divided into chunks, enabling simultaneous
downloads from multiple peers, speeding up distribution.
• Swarming – Peers exchange file chunks, ensuring fast availability and reducing reliance on a single
source.
• Decentralization – No central server; files are shared across multiple peers, improving fault tolerance
and availability.
• Tracker or DHT Support – Some systems use trackers for peer discovery, while others use Distributed
Hash Tables (DHTs) for decentralized lookup.
• Reciprocal Sharing – Users upload while downloading, keeping the network efficient and balanced.
Role of Distributed Hash Tables (DHTs)

DHTs replace traditional trackers by distributing peer information across the network. Each peer stores
part of a lookup table, allowing decentralized and efficient peer discovery, improving scalability and fault
tolerance.
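
A toy sketch of the DHT idea is given below: hash a file key onto a ring and assign it to the first peer clockwise from that hash (consistent hashing). The peer and file names are made up, and real DHTs such as Chord or Kademlia add routing tables so each peer contacts only O(log N) others.

# Toy sketch of DHT-style key-to-peer mapping using consistent hashing.
# Peer and file names are placeholders; this is not a full Chord/Kademlia implementation.
import hashlib
from bisect import bisect_right

def ring_id(name, bits=16):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

peers = sorted(ring_id(p) for p in ["peerA", "peerB", "peerC", "peerD"])

def responsible_peer(file_key):
    k = ring_id(file_key)
    idx = bisect_right(peers, k) % len(peers)   # first peer clockwise from the key's hash
    return peers[idx]

print(responsible_peer("movie.chunk.17"))       # ID of the peer that indexes this chunk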
UNIT – 2

1. What is the purpose of multiplexing and demultiplexing in the transport layer?

Multiplexing combines multiple application data streams into a single stream for transmission, assigning
unique port numbers to distinguish them.
Demultiplexing separates the received data and delivers it to the correct application using port numbers.

2. Differentiate between TCP and UDP in terms of connection orientation.


TCP (Transmission Control Protocol) is connection-oriented, meaning it establishes a reliable connection
between sender and receiver before data transfer (via a handshake process).
UDP (User Datagram Protocol) is connectionless, meaning it sends data without establishing a prior
connection, making it faster but less reliable.

3. What is the role of a socket in transport layer communication?

• A socket is an endpoint for communication in the transport layer, enabling data exchange between
applications over a network.
• It consists of an IP address and port number, allowing processes to send and receive data reliably
(TCP) or quickly (UDP).
• Sockets help identify applications and manage multiple network connections on a device.

4. List any two fields in a UDP segment and their purpose.


Two fields in a UDP segment and their purposes are:
1. Source Port – Identifies the sending application, allowing the recipient to know where the
response should be sent.
2. Checksum – Used for error detection to ensure data integrity during transmission.

5. What is the function of the TCP three-way handshake?

The TCP three-way handshake establishes a reliable connection between a client and a server before data
transmission. It consists of three steps:
1. SYN – The client sends a synchronization (SYN) request to the server to initiate a connection.
2. SYN-ACK – The server responds with a synchronization acknowledgment (SYN-ACK) to confirm the
request.
3. ACK – The client sends an acknowledgment (ACK) back, completing the connection setup.

1. Explain the process of socket programming using TCP with an example.


TCP (Transmission Control Protocol) socket programming allows two devices to communicate reliably over
a network. A socket is an endpoint for sending and receiving data between a client and a server.
How TCP Socket Programming Works

1. Server Side:

• The server creates a socket, binds it to a specific IP address and port, and listens for incoming
client connections.
• When a client requests a connection, the server accepts it and establishes a dedicated socket for
communication.

2. Client Side:

• The client creates a socket and connects to the server using its IP address and port.
• Once connected, the client and server exchange data reliably.

Example: Chat App


When using a chat app, the client (your device) connects to a server, which ensures messages are sent and
received in the correct order. TCP ensures that no data is lost or duplicated, providing a seamless
communication experience.
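
A minimal sketch of these server and client steps using Python's standard socket module is shown below; the loopback address and port 50007 are arbitrary placeholders, and the server handles a single message rather than a full chat session.

# Minimal TCP socket sketch (single request/response); address and port are placeholders.
import socket

HOST, PORT = "127.0.0.1", 50007

def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))            # bind the socket to an IP address and port
        srv.listen()                      # listen for incoming client connections
        conn, addr = srv.accept()         # dedicated socket for this client
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"Server got: " + data)

def run_client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))         # TCP three-way handshake happens here
        cli.sendall(b"Hello over TCP")
        print(cli.recv(1024).decode())

Running run_server() in one process and run_client() in another demonstrates the connect-accept-exchange flow described above.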

2. Compare TCP and UDP based on reliability, connection orientation, and use cases.

• Reliability: TCP is reliable – it ensures data is delivered in order and without loss using error checking, acknowledgments, and retransmission. UDP is unreliable – there are no acknowledgments or retransmissions, so data may be lost or arrive out of order.
• Connection Orientation: TCP is connection-oriented – it establishes a connection before data transfer (three-way handshake). UDP is connectionless – it sends data without establishing a connection.
• Use Cases: TCP is used for applications requiring reliable data transfer, such as web browsing (HTTP/HTTPS), email (SMTP), and file transfer (FTP). UDP is used for applications needing fast, low-latency communication, such as video streaming, VoIP, and online gaming.
Conclusion
TCP is suitable for scenarios where accuracy matters, while UDP is ideal for real-time applications where
speed is more important than reliability.

3. Describe how TCP ensures reliable data transfer.


TCP (Transmission Control Protocol) provides reliable data transfer using the following mechanisms:
1. Connection Establishment (Three-Way Handshake)
o TCP establishes a connection between sender and receiver using a three-step process: SYN → SYN-ACK → ACK. This ensures both parties are ready for communication.
2. Data Segmentation and Sequencing
o TCP breaks data into smaller segments and assigns sequence numbers, ensuring the receiver can reassemble them in the correct order.
3. Acknowledgment (ACK) and Retransmission
o The receiver sends an ACK for each received segment. If an ACK is not received within a timeout, TCP retransmits the lost data.
4. Error Detection (Checksum)
o Each segment includes a checksum to detect errors. If corruption is found, the receiver requests retransmission.
5. Flow and Congestion Control
o TCP uses Sliding Window and Congestion Control mechanisms to prevent network congestion and ensure efficient data flow.

4. What is RTT, and how does it affect TCP performance?


RTT (Round-Trip Time) is the time taken for a data packet to travel from the sender to the receiver and
back. It includes transmission, propagation, and processing delays.
Impact on TCP Performance
1. Affects Acknowledgment Delays – Higher RTT increases the time for TCP to receive
acknowledgments, slowing down data transmission.
2. Reduces Throughput – TCP’s congestion control and window size adjustment depend on RTT. A
high RTT lowers the data transmission rate.
3. Impacts TCP Retransmissions – If RTT is too high, TCP may mistakenly assume packet loss and
retransmit unnecessarily, reducing efficiency.
4. Slows Initial Connection (Three-Way Handshake) – A higher RTT delays the establishment of a TCP
connection, impacting real-time applications.
5. Affects Adaptive Algorithms – TCP uses RTT estimation to adjust timeouts dynamically. Unstable
RTT can lead to inefficient timeout settings.
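
One rough way to observe RTT is to time a TCP connection setup, which takes about one round trip; the sketch below does this with the standard socket and time modules, using a placeholder host. (TCP itself maintains a smoothed RTT estimate internally for its timers.)

# Rough RTT estimate: time one TCP connection setup (about one round trip).
# Host and port are placeholders.
import socket, time

start = time.perf_counter()
with socket.create_connection(("www.example.com", 80), timeout=5):
    rtt = time.perf_counter() - start
print(f"Handshake RTT ~ {rtt * 1000:.1f} ms")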
5. Explain TCP congestion control in simple terms with an example.

TCP congestion control prevents network overload by adjusting the data transmission rate based on
network conditions. It ensures efficient use of bandwidth while avoiding packet loss due to congestion.
Example:
Imagine you are transferring a large file over TCP:

1. Slow Start Phase – The sender starts with a small congestion window (e.g., 1 MSS) and increases it by
one MSS for every acknowledgment (ACK) received, which roughly doubles the window every round trip (1 MSS → 2 MSS → 4 MSS, etc.).
2. Congestion Avoidance Phase – Once the congestion window reaches a threshold, TCP increases
the window size linearly instead of exponentially to prevent congestion.
3. Packet Loss & Recovery – If packet loss occurs (e.g., three duplicate ACKs are received), TCP:

o Retransmits the lost packet immediately (Fast Retransmit).


o Reduces the congestion window size by half (Fast Recovery) and re-enters congestion
avoidance mode.
4. Efficient Transfer – This process continues until the file transfer is complete, ensuring smooth data
flow without overloading the network.
This process ensures that large files are transferred efficiently while preventing excessive traffic from
overwhelming the network.
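
The phases above can be visualised with the small simulation below, which tracks the congestion window (cwnd, in MSS units) through slow start, congestion avoidance, and one simulated loss; the numbers and the loss point are illustrative, not a real TCP Reno implementation.

# Illustrative simulation of congestion-window growth (values in MSS units).
# Simplified Reno-style behaviour; the threshold and loss point are made up.
cwnd, ssthresh = 1, 16

for rtt in range(1, 13):
    if rtt == 9:                        # pretend three duplicate ACKs arrive here
        ssthresh = cwnd // 2            # halve the threshold (fast recovery)
        cwnd = ssthresh                 # continue in congestion avoidance
        print(f"RTT {rtt}: loss detected, cwnd -> {cwnd}")
        continue
    if cwnd < ssthresh:
        cwnd *= 2                       # slow start: exponential growth per RTT
    else:
        cwnd += 1                       # congestion avoidance: linear growth
    print(f"RTT {rtt}: cwnd = {cwnd} MSS")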
UNIT – 3
1. Differentiate between virtual-circuit networks and datagram networks.

• A virtual-circuit network sets up a connection (virtual circuit) before data transfer; all packets of a flow follow the same path, and routers maintain per-connection state (e.g., ATM, Frame Relay).
• A datagram network is connectionless; each packet is routed independently based on its destination address, and routers keep no per-connection state (e.g., the Internet's IP network).

2. What is the purpose of the Internet Protocol (IP) in networking?

• The Internet Protocol (IP) operates at the network layer of the TCP/IP model and is responsible for
addressing, routing, and delivering data packets across networks.
• It enables devices to communicate globally by ensuring data reaches the correct destination using
unique IP addresses.
3. What is the role of ICMP in the Internet Protocol suite?
The Internet Control Message Protocol (ICMP) plays a crucial role in the Internet Protocol (IP) suite by
handling error reporting and network diagnostics.
• Error Reporting – ICMP sends messages (e.g., "Destination Unreachable") to notify the sender
about network issues.
• Network Diagnostics – It is used in tools like ping and traceroute to check connectivity and
network performance.

4. List any two key differences between IPv4 and IPv6.


Two Key Differences Between IPv4 and IPv6:
1. Address Length – IPv4 uses a 32-bit address, whereas IPv6 uses a 128-bit address, providing a
significantly larger address space.
2. Address Notation – IPv4 addresses are written in dotted decimal format (e.g., 192.168.1.1), while
IPv6 addresses use colon-separated hexadecimal notation (e.g., 2001:db8::1).

5. What is the purpose of a routing algorithm in a network?

• A routing algorithm determines the best path for data transmission between devices in a network,
ensuring efficient communication.
• It helps routers decide how to forward packets based on network conditions.

1. Explain the format of an IP datagram, highlighting its key fields and their functions.

An IP datagram is a structured packet used for transmitting data over a network. It consists of a header
and a payload (data). The header contains essential information for routing, addressing, and ensuring
proper delivery.

Key Fields in an IP Datagram Header:


1. Version (4 bits) – Specifies the IP version (IPv4 or IPv6).
2. Header Length (IPv4: 4 bits, Not in IPv6) – Indicates the length of the header.
3. Total Length (16 bits, IPv4 only) – Defines the total datagram size (header + data).
4. Source & Destination IP Address – Identifies sender and receiver (IPv4: 32-bit, IPv6: 128-bit).
5. Time to Live (TTL, IPv4) / Hop Limit (IPv6) (8 bits) – Prevents infinite looping by limiting hops.
6. Protocol (IPv4) / Next Header (IPv6) (8 bits) – Identifies the transport layer protocol (e.g., TCP,
UDP).
7. Header Checksum (IPv4 only, 16 bits) – Ensures error detection in IPv4 headers (not needed in
IPv6).
8. Fragmentation Fields (IPv4 only) – Support packet fragmentation. In IPv6, routers never fragment
packets; only the source can, using a Fragment extension header.
9. Payload Length (IPv6 only, 16 bits) – Specifies the size of the data being sent.
10. Options (IPv4) / Extension Headers (IPv6) – Used for additional functionalities.

IPv4 vs. IPv6 Differences:

• IPv4 has a variable-length header (20 bytes minimum, plus options), while IPv6 has a fixed 40-byte header for efficiency.
• IPv6 removes the header checksum and in-header fragmentation fields, simplifying router processing.
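
To make the field layout concrete, the sketch below packs and unpacks a fixed 20-byte IPv4 header with Python's struct module; the addresses, identification value, and zeroed checksum are made-up sample values.

# Sketch: build and parse a 20-byte IPv4 header with struct (sample values are made up).
import struct, socket

header = struct.pack("!BBHHHBBH4s4s",
    (4 << 4) | 5,                        # Version = 4, IHL = 5 words (20-byte header)
    0,                                   # Type of Service
    40,                                  # Total Length (header + data)
    0x1C46,                              # Identification
    0x4000,                              # Flags (Don't Fragment) + Fragment Offset
    64,                                  # Time to Live
    6,                                   # Protocol = TCP
    0,                                   # Header Checksum (left as 0 in this sketch)
    socket.inet_aton("192.168.1.10"),    # Source IP address
    socket.inet_aton("93.184.216.34"))   # Destination IP address

ver_ihl, tos, tot_len, ident, flags, ttl, proto, csum, src, dst = \
    struct.unpack("!BBHHHBBH4s4s", header)
print("version:", ver_ihl >> 4, "TTL:", ttl, "protocol:", proto,
      "src:", socket.inet_ntoa(src), "dst:", socket.inet_ntoa(dst))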

2. Compare the Link-State (LS) and Distance-Vector (DV) routing algorithms in terms of their working
principles and use cases.
Working Principles
• Link-State (LS) Routing:
o Each router maintains a complete map of the network topology.
o It exchanges Link-State Advertisements (LSAs) with all routers, updating topology changes
dynamically.
o Uses Dijkstra’s algorithm to compute the shortest path (a minimal sketch follows this comparison).
• Distance-Vector (DV) Routing:
o Each router only knows about its directly connected neighbours.
o It shares entire routing tables periodically with neighbours.
o Uses Bellman-Ford algorithm to compute the best path.
Use Cases
• LS Routing (OSPF, IS-IS):
o Used in large, scalable networks like ISPs, enterprises, and cloud data centers.
o Preferred for networks requiring fast convergence and low latency.
• DV Routing (RIP, IGRP):
o Suitable for small to medium-sized networks where ease of configuration is a priority.
o Works well in low-bandwidth environments but struggles with large-scale networks.
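
A minimal Dijkstra sketch over a made-up link-state topology is given below to illustrate the LS computation; the adjacency map and link costs are arbitrary examples.

# Minimal Dijkstra sketch over a made-up link-state topology (costs are arbitrary).
import heapq

graph = {                          # adjacency map: node -> {neighbour: link cost}
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def shortest_paths(source):
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                              # stale queue entry, skip it
        for v, cost in graph[u].items():
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                heapq.heappush(pq, (dist[v], v))
    return dist

print(shortest_paths("A"))         # {'A': 0, 'B': 1, 'C': 3, 'D': 4}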

3. Describe the process of hierarchical routing and its advantages in large networks.

Hierarchical Routing is a method of dividing large networks into smaller, more manageable sub-networks
or regions. It is commonly used to reduce the complexity of routing in large-scale networks like the
internet.

Working of Hierarchical Routing


1. Region Division: The network is divided into multiple regions, each containing several routers.
2. Local Routing: Within each region, an internal routing algorithm (e.g., Link-State or Distance-
Vector) is used for communication.
3. Inter-Region Routing: Higher-level routers exchange summary routing information between
regions, reducing the amount of data that needs to be processed globally.

Advantages of Hierarchical Routing


• Scalability: Handles large networks efficiently.
• Reduced Overhead: Reduces routing table size and bandwidth usage.
• Improved Performance: Reduces processing and storage requirements

4. Explain the functioning of the OSPF (Open Shortest Path First) protocol in intra-AS routing.
OSPF (Open Shortest Path First) is a link-state routing protocol used within an Autonomous System (AS)
for efficient and scalable routing

Working of OSPF
1. LSA Exchange – Each router shares link-state advertisements (LSAs) with all other routers in the
AS.
2. Shortest Path Calculation – Routers use Dijkstra’s Algorithm (Shortest Path First) to determine the
best routes.
3. Routing Table Update – The best paths are stored in the routing table, and updates occur only
when topology changes.

Key Features
• Uses cost (based on bandwidth) as the routing metric.
• Supports hierarchical routing with areas for scalability.
• Converges faster than RIP, ensuring efficient network updates.

5. Discuss the role and working of BGP (Border Gateway Protocol) in inter-AS routing.
BGP (Border Gateway Protocol) is the primary inter-domain routing protocol used to exchange routing
information between different Autonomous Systems (ASes) on the internet. It is a Path-Vector Protocol
designed for scalability and policy-based routing.

Role of BGP
1. Inter-AS Routing – Connects ISPs and organizations, enabling global internet routing.
2. Policy-Based Routing – Allows route selection based on policies, not just shortest paths.
3. Scalability – Manages large-scale networks by optimizing route advertisements.

Working of BGP
1. BGP Peering – BGP routers establish sessions and exchange routing updates.
2. Route Advertisement – Each router shares available routes with AS path details.
3. Path Selection – Routes are chosen based on AS path, next-hop, local preference, and policies.
UNIT – 4

1. What is the purpose of error detection in the data link layer?


Error detection in the Data Link Layer ensures data integrity by identifying errors caused by noise,
interference, or signal distortions. It helps in:
1. Detecting Corrupt Data – Identifies transmission errors using CRC, parity check, or checksums.
2. Ensuring Reliable Communication – Prevents incorrect data processing and enables
retransmission if needed.

2. Define Cyclic Redundancy Check (CRC).


1. Cyclic Redundancy Check (CRC) is an error-detection technique used in networking and data
transmission to detect accidental changes in data.
2. It works by generating a checksum (CRC code) using binary division of the data by a
predetermined polynomial divisor. The receiver performs the same division; if the remainder is
zero, the data is error-free; otherwise, errors are detected.

3. What is the role of Address Resolution Protocol (ARP) in a network?


ARP is used to map an IP address to a MAC address in a local network. When a device wants to
communicate with another device, it uses ARP to find the corresponding MAC address of the destination
IP, enabling data transmission at the Data Link Layer.

4. List two key features of an Ethernet frame.

1. MAC Addressing – Contains source and destination MAC addresses for identifying devices in a
network.
2. Error Detection – Includes a Cyclic Redundancy Check (CRC) in the Frame Check Sequence (FCS)
field to detect transmission errors.

5. What is a Virtual Local Area Network (VLAN)?

A VLAN is a logical network within a physical LAN that groups devices into separate broadcast domains. It
improves network segmentation, security, and efficiency by allowing devices to communicate as if they
were on the same network, even if they are physically apart.

1. Explain the working of Cyclic Redundancy Check (CRC) with an example.


Cyclic Redundancy Check (CRC) is an error-detection technique used to verify data integrity in
transmission. It uses binary division of the data by a predefined polynomial divisor and appends the
remainder as a checksum. The receiver performs the same division to check for errors.

Steps in CRC Calculation:


1. Append Zeros to the Message – Add zeros (equal to the length of the divisor minus one) to the
original data.
2. Perform Binary Division – Divide the data (including zeros) by the generator polynomial using XOR
operation.
3. Append the Remainder – The remainder from the division is appended to the original data as the
CRC checksum.
4. Validation at the Receiver – The receiver divides the received data (including the CRC) by the same
polynomial. If the remainder is zero, the data is correct; otherwise, errors are detected.
Example:
Given:
• Data: 110101
• Divisor (Generator Polynomial): 1011
Step 1: Append three zeros (since divisor length = 4) → 110101000
Step 2: Perform binary division using XOR → Remainder 111
Step 3: Append remainder to data → 110101111 (Transmitted message)
Step 4: Receiver divides 110101111 by 1011. If remainder = 0, data is error-free; otherwise, an error is
detected.
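
The division above can be checked with the short sketch below, which performs the same XOR division on bit strings; it is written for this example rather than as a general-purpose CRC library.

# Sketch: CRC remainder via bitwise XOR division on bit strings (matches the example above).
def crc_remainder(data_bits, divisor):
    bits = list(data_bits + "0" * (len(divisor) - 1))    # Step 1: append zeros
    for i in range(len(data_bits)):
        if bits[i] == "1":                               # divide only where the leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-(len(divisor) - 1):])           # remainder = last (len(divisor)-1) bits

rem = crc_remainder("110101", "1011")
print(rem)                                     # -> 111
print(crc_remainder("110101" + rem, "1011"))   # receiver check -> 000 (error-free)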

2. Compare Channel Partitioning Protocols and Random-Access Protocols, providing examples of each.

Network access protocols manage how multiple devices share a communication channel.
• Definition: Channel partitioning protocols divide the channel into fixed or dynamic portions for multiple users; random-access protocols allow devices to transmit whenever needed, leading to potential collisions.
• Efficiency: Channel partitioning is efficient under high traffic load because resources are allocated; random access is efficient under low traffic since no fixed allocation is required.
• Collision Handling: No collisions occur with channel partitioning, as each device gets a dedicated portion; with random access, collisions can occur and require retransmission.
• Latency: Channel partitioning has higher latency in low traffic due to waiting for a dedicated slot; random access has lower latency in low traffic as devices transmit immediately.
• Example Protocols: Channel partitioning – FDMA (Frequency Division Multiple Access), TDMA (Time Division Multiple Access), CDMA (Code Division Multiple Access); random access – ALOHA, Slotted ALOHA, CSMA/CD (Carrier Sense Multiple Access with Collision Detection).
Examples:
• Channel Partitioning Example: TDMA in mobile networks assigns time slots to users.
• Random-Access Example: CSMA/CD in Ethernet allows devices to transmit when the channel is
free but handles collisions using retransmission.
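
A toy sketch of how CSMA/CD handles collisions is given below: after the k-th collision a station waits a random number of slot times chosen from [0, 2^k - 1] (binary exponential backoff). The slot time and the cap at 10 follow the usual textbook description of classic Ethernet; the printed values are illustrative.

# Toy sketch of CSMA/CD binary exponential backoff (classic 10 Mbps Ethernet figures).
import random

SLOT_TIME = 51.2e-6               # seconds; classic Ethernet slot time

def backoff_delay(collision_count):
    k = min(collision_count, 10)                  # exponent is capped at 10
    slots = random.randint(0, 2 ** k - 1)         # pick a random slot in [0, 2^k - 1]
    return slots * SLOT_TIME

for attempt in range(1, 5):
    print(f"after collision {attempt}: wait {backoff_delay(attempt) * 1e6:.1f} microseconds")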

3. Describe how self-learning is implemented in link-layer switches.

Link-layer switches operate at the Data Link Layer (Layer 2) of the OSI model and use MAC addresses to
forward frames efficiently within a LAN.
Self-Learning Process:
1. Learning: When a frame arrives at a switch port, the switch reads the source MAC address.
2. Table Update: The switch updates its MAC address table (CAM) by associating the source MAC
address with the incoming port.
3. Forwarding:
o If the destination MAC address is found in the table, the frame is sent to the correct port.
o If not, the switch floods the frame to all ports except the incoming one.
Example:
If a device with MAC address 00:1A:2B:3C:4D:5E sends data, the switch learns its MAC address and port.
Future frames to this address will be directly forwarded instead of being flooded.
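
The learn/forward/flood logic can be sketched with a dictionary acting as the MAC address table, as below; the MAC addresses and port numbers are made up for illustration.

# Toy sketch of a self-learning switch: learn the source MAC, then forward or flood.
mac_table = {}                      # MAC address -> outgoing port
ALL_PORTS = {1, 2, 3, 4}

def handle_frame(src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port                    # 1) learning: remember where src lives
    if dst_mac in mac_table:                        # 2) known destination: forward directly
        return {mac_table[dst_mac]}
    return ALL_PORTS - {in_port}                    # 3) unknown destination: flood other ports

print(handle_frame("00:1A:2B:3C:4D:5E", "FF:FF:FF:FF:FF:FF", 1))   # flood -> {2, 3, 4}
print(handle_frame("AA:BB:CC:DD:EE:01", "00:1A:2B:3C:4D:5E", 3))   # forward -> {1}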
4. Explain the Ethernet frame structure and its key components.

An Ethernet frame is a structured data packet used for communication over an Ethernet network.

An Ethernet frame typically consists of the following key components:


1. Preamble (7 bytes):
o A sequence of alternating 1s and 0s that helps receivers synchronize with the incoming
frame.
2. Start Frame Delimiter (SFD) (1 byte):
o Marks the start of the frame and indicates that the following bits contain actual data.
3. Destination MAC Address (6 bytes):
o The MAC address of the receiving device (unicast, multicast, or broadcast).
4. Source MAC Address (6 bytes):
o The MAC address of the sending device.
5. EtherType/Length (2 bytes):
o If the value is ≤ 1500, it represents the payload length.
o If it is ≥ 1536 (0x0600), it identifies the payload protocol type (e.g., IPv4, IPv6).
6. Payload/Data (46–1500 bytes):
o The actual data being transmitted.
o Must be at least 46 bytes (padding is added if necessary).
7. Frame Check Sequence (FCS) (4 bytes):
o Contains a Cyclic Redundancy Check (CRC) value for error detection.
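
The sketch below packs the addressable part of such a frame (destination MAC, source MAC, EtherType, padded payload) with Python's struct module; the preamble/SFD and the FCS are normally added by the network hardware, and the addresses and payload here are placeholders.

# Sketch: build an Ethernet II header + padded payload with struct (placeholder values).
import struct

def mac_bytes(mac):
    return bytes(int(part, 16) for part in mac.split(":"))

dst = mac_bytes("FF:FF:FF:FF:FF:FF")        # destination MAC (broadcast)
src = mac_bytes("00:1A:2B:3C:4D:5E")        # source MAC
ethertype = 0x0800                          # 0x0800 = IPv4 (>= 1536, so it is a type field)

payload = b"hello"
payload += b"\x00" * (46 - len(payload))    # pad payload up to the 46-byte minimum

frame = struct.pack("!6s6sH", dst, src, ethertype) + payload
print(len(frame), "bytes = 14-byte header + 46-byte padded payload")   # -> 60 bytes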

5. What is Multiprotocol Label Switching (MPLS), and how does it enhance network efficiency?

Multiprotocol Label Switching (MPLS) is a high-performance routing technique used in modern networks
to speed up and manage traffic flow.

How MPLS Works:


1. Label Assignment: When a packet enters the MPLS network, the ingress router assigns a label to the
packet.
2. Label Switching: As the packet moves through the network, each Label Switching Router (LSR) looks at
the label to forward the packet along the predetermined path.
3. Label Removal: When the packet reaches the egress router, the label is removed, and the packet is
forwarded based on its original IP address.

How MPLS Enhances Network Efficiency:


1. Faster Forwarding – Uses labels instead of IP lookups, speeding up routing.
2. Traffic Prioritization – Ensures QoS for critical applications.
3. Reduced Congestion – Optimizes load balancing for smoother traffic flow.
4. Scalability & VPN Support – Enables secure, multi-site connectivity.
5. High Reliability – Reroutes traffic instantly during failures.
UNIT – 5

1. What is the primary purpose of the 802.11 architecture in wireless LANs?


• Enables wireless communication in local area networks (WLANs).
• Defines standards for interoperability, security, and efficient data transmission.

2. Define mobility within the same IP subnet.


• Allows a device to move between access points without changing its IP address.
• Ensures seamless connectivity and minimal disruption to ongoing connections.

3. What is the role of Mobile IP in managing mobility?


• Maintains a constant IP address as a device moves across different networks.
• Uses home and foreign agents to forward data packets without interruption.

4. What is a handoff in GSM networks?


• Transfers an ongoing call or data session from one cell tower to another.
• Ensures seamless connectivity and prevents call drops during movement.

5. List any two features of cellular architecture.


• Frequency Reuse: Divides areas into small cells to optimize spectrum usage.
• Handoff Mechanism: Transfers active connections smoothly between cells.

1. Explain the key components of the 802.11 architecture and their roles in wireless networks.

The 802.11 architecture enables wireless communication within a Wireless Local Area Network (WLAN).
It consists of several key components that work together to provide seamless connectivity.
1. Station (STA):
o Any device with a wireless network interface, such as laptops, smartphones, and IoT
devices.
o Communicates using the 802.11 protocol and connects to an access point (AP) or another
station in an ad-hoc network.
2. Access Point (AP):
o Acts as a bridge between wireless stations and a wired network.
o Provides connectivity, authentication, and data forwarding in infrastructure mode.
3. Basic Service Set (BSS):
o A group of stations communicating with each other under a single AP.
o Forms the fundamental building block of 802.11 networks.
4. Extended Service Set (ESS):
o A collection of multiple BSSs interconnected through a wired network.
o Provides seamless roaming for users across multiple APs within the same network.
5. Distribution System (DS):
o Connects multiple access points to allow communication between different BSSs.
o Typically implemented using Ethernet or other wired backbone networks.
2. Describe how mobility management works in cellular networks.

Mobility management ensures seamless communication as users move between network areas. It
involves location management, handoff management, roaming management, QoS assurance, and
security measures.
1. Location Management
o Tracks a mobile device’s location for efficient call and data delivery.
o Includes registration (location updates) and paging (finding the device when needed).
2. Handoff (Handover) Management
o Ensures uninterrupted service when a user moves between cells.
o Hard handoff (GSM): Old connection is dropped before a new one is established.
o Soft handoff (CDMA): New connection is formed before the old one is released.
3. Roaming Management
o Allows users to stay connected when moving across different service providers or
geographic areas.
o Involves agreements between operators for inter-network communication.
4. Quality of Service (QoS) Assurance
o Ensures minimal delays, high-speed connectivity, and priority handling for critical
applications like voice and video calls.
5. Security and Privacy
o Protects user data and prevents unauthorized access through authentication, encryption,
and secure handoff mechanisms.

3. Discuss the process of routing a call to a mobile user in GSM.

Call Routing Process in GSM


1. Call Initiation
o The caller’s network MSC routes the call request to the Home Location Register (HLR) of
the mobile user’s home network.
2. Querying the Home Location Register (HLR)
o The HLR checks the user’s current location (Location Area) and directs the query to the
appropriate Visitor Location Register (VLR) if the user is roaming.
3. Querying the Visitor Location Register (VLR)
o The VLR identifies the nearest Mobile Switching Center (MSC) and provides routing details
to reach the mobile user.
4. Call Routing to the Mobile User
o The MSC routes the call to the appropriate Base Station Controller (BSC) and Base
Transceiver Station (BTS) covering the user’s current cell.
o If the user is in a foreign network, international gateways assist in routing.
5. Ringing & Call Establishment
o The BTS sends a paging signal to the mobile device. If answered, the call is established
through the MSC and BTS.
6. Call Termination & Handoff
o If the user moves during the call, a handoff transfers the call to the new cell without
disruption.
o If the user ends the call, the network releases resources.
4. Explain the principles of handoff in GSM, including the steps involved.

Handoff (handover) in GSM refers to the process of transferring an active call or data session from one cell
(Base Transceiver Station - BTS) to another without interruption. It ensures seamless communication
when a mobile user moves across coverage areas.
Types of Handoff in GSM
1. Intra-Cell Handoff – Within the same BTS, switching to a different frequency.
2. Inter-Cell Handoff – Between two BTSs under the same BSC.
3. Inter-BSC Handoff – Between two BSCs under the same MSC.
4. Inter-MSC Handoff – Between two MSCs, typically when moving between regions.
Steps Involved in Handoff
1. Signal Monitoring
o The mobile device and network continuously measure signal strength and quality of
nearby cells.
2. Handoff Decision
o If the current BTS signal weakens, the Base Station Controller (BSC) or Mobile Switching
Center (MSC) decides to switch to a stronger BTS.
3. Allocation of Resources
o The target BTS allocates a frequency channel for the incoming call before the switch
occurs.
4. Switching the Connection
o The call is transferred to the new BTS, and the mobile device tunes to the new frequency.
5. Release of Old Channel
o The connection with the previous BTS is terminated, and resources are freed for other
users.
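
A toy sketch of the handoff decision step is shown below: the network hands off only when a neighbouring cell's signal exceeds the serving cell's by a hysteresis margin, which avoids ping-pong handoffs. The dBm values and the 3 dB margin are illustrative, not the actual GSM measurement procedure.

# Toy sketch of a handoff decision with hysteresis (illustrative values only).
HYSTERESIS_DB = 3

def choose_cell(serving_cell, measurements):
    # measurements: dict of cell name -> received signal strength in dBm
    best = max(measurements, key=measurements.get)
    if best != serving_cell and \
       measurements[best] >= measurements[serving_cell] + HYSTERESIS_DB:
        return best                       # hand off to the clearly stronger neighbour
    return serving_cell                   # otherwise stay on the current cell

print(choose_cell("BTS-1", {"BTS-1": -95, "BTS-2": -88, "BTS-3": -101}))   # -> BTS-2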

5. What are the advanced features of 802.11, and how do they improve wireless communication?

Quality of Service (QoS)


• Prioritizes time-sensitive data (voice, video, gaming).
• Uses Wi-Fi Multimedia (WMM) to classify and optimize network traffic.
Advanced Power Management
• Devices enter low-power states to conserve battery.
• Crucial for IoT, mobile, and wearable devices.
Enhanced Security
• 802.11i introduced WPA/WPA2 with AES encryption for stronger protection.
• Supports 802.1X authentication to prevent unauthorized access.
Improved Network Efficiency
• MIMO & MU-MIMO use multiple antennas for better speed and reliability.
• Beamforming directs signals toward devices, improving connection quality.
• Channel Bonding increases data rates by combining adjacent channels.
Better Performance in High-Density Networks
• OFDMA (802.11ax) splits channels to serve multiple devices efficiently.
• Fast Roaming (802.11r) enables seamless handoff between access points.
• Wi-Fi HaLow (802.11ah) optimizes IoT connectivity over long distances
