SUMMER-2022
Q.5 (a) What is UDP? Define Remote Procedure Call in detail. 07
UDP stands for User Datagram Protocol, a connectionless communication protocol
used in computer networks. It's an alternative to TCP (Transmission Control
Protocol) and is often used for real-time applications where speed is more
important than reliability, such as video streaming, online gaming, and VoIP.
Definition:
UDP is a connectionless transport layer protocol in the TCP/IP model used to
send data without establishing a connection between the sender and receiver.
Key Features:
No connection setup needed.
Faster than TCP (no overhead for acknowledgments or handshakes).
No guarantee of delivery, order, or duplication protection.
Often used in real-time applications like video streaming, online games, or
VoIP (e.g., Skype, Zoom).
Example Applications:
DNS, DHCP, TFTP, online multiplayer games.
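To make the connectionless nature concrete, here is a minimal sketch of sending one datagram with the POSIX socket API; the loopback address 127.0.0.1 and port 9999 are placeholders chosen only for illustration.

/* Minimal UDP sender sketch (POSIX sockets). Address and port are hypothetical. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);    /* connectionless UDP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                 /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    const char *msg = "hello over UDP";
    /* No handshake: sendto() just hands the datagram to IP. Delivery,
       ordering and duplicate protection are NOT guaranteed. */
    sendto(fd, msg, strlen(msg), 0, (struct sockaddr *)&dst, sizeof(dst));

    close(fd);
    return 0;
}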
Remote Procedure Call (RPC)
Definition:
A Remote Procedure Call (RPC) allows a program to execute a function on another
computer (server) as if it were a local function call.
Remote Procedure Call (RPC) is a powerful technique for
constructing distributed, client-server based applications. It is based on
extending the conventional local procedure calling so that the called procedure
does not exist in the same address space as the calling procedure. The two
processes may be on the same system, or they may be on different systems with a
network connecting them.
What is Remote Procedure Call (RPC)?
Remote Procedure Call (RPC) is a type of technology used in computing to enable
a program to request a service from software located on another computer in a
network without needing to understand the network's details. RPC abstracts the
complexities of the network by allowing the developer to think in terms of function
calls rather than network details, facilitating the process of making a piece of
software distributed across different systems.
RPC works by allowing one program (a client) to directly call procedures
(functions) on another machine (the server). The client makes a procedure call that
appears to be local but is run on a remote machine. When an RPC is made, the
calling arguments are packaged and transmitted across the network to the server.
The server unpacks the arguments, performs the desired procedure, and sends the
results back to the client.
Working of RPC
1. A client invokes a client stub procedure, passing parameters in the usual way. The client stub resides within the client's own address space.
2. The client stub marshals (packs) the parameters into a message. Marshalling includes converting the representation of the parameters into a standard format and copying each parameter into the message.
3. The client stub passes the message to the transport layer, which sends it to the remote server machine. On the server, the transport layer passes the message to a server stub, which demarshals (unpacks) the parameters and calls the desired server routine using the regular procedure call mechanism.
4. When the server procedure completes, it returns to the server stub (e.g., via a normal procedure call return), which marshals the return values into a message.
5. The server stub then hands the message to the transport layer. The transport layer sends the result message back to the client transport layer, which hands the message back to the client stub.
6. The client stub demarshals the return parameters and execution returns to the caller (a toy marshalling sketch follows below).
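As a rough illustration of steps 2 to 4 (marshalling on the client stub, running the routine on the server, marshalling the result back), here is a toy sketch in C. It uses a plain buffer in place of the network message, and every name in it (add, marshal_request, server_stub) is invented for this example rather than taken from any real RPC framework.

/* Toy sketch of RPC marshalling/demarshalling (steps 2-4 above).
   No real transport is shown; the buffer stands in for the network message.
   All names (add, marshal_request, ...) are illustrative only. */
#include <arpa/inet.h>   /* htonl/ntohl: convert to a standard (network) byte order */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int add(int a, int b) { return a + b; }      /* the "remote" procedure */

/* Client stub: pack the arguments into a message in a standard format. */
static size_t marshal_request(uint8_t *buf, int a, int b)
{
    uint32_t na = htonl((uint32_t)a), nb = htonl((uint32_t)b);
    memcpy(buf, &na, 4);
    memcpy(buf + 4, &nb, 4);
    return 8;                                       /* message length */
}

/* Server stub: unpack the arguments, call the real routine, pack the result. */
static size_t server_stub(const uint8_t *req, uint8_t *reply)
{
    uint32_t na, nb;
    memcpy(&na, req, 4);
    memcpy(&nb, req + 4, 4);
    int result = add((int)ntohl(na), (int)ntohl(nb));
    uint32_t nr = htonl((uint32_t)result);
    memcpy(reply, &nr, 4);
    return 4;
}

int main(void)
{
    uint8_t request[8], reply[4];
    marshal_request(request, 2, 3);                 /* client stub packs 2 and 3 */
    server_stub(request, reply);                    /* "network" delivers it; server runs add() */
    uint32_t nr;
    memcpy(&nr, reply, 4);
    printf("result = %d\n", (int)ntohl(nr));        /* client stub unpacks the result: 5 */
    return 0;
}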
Key Features of Store-and-Forward Switching:
Reliable: Ensures only error-free packets are sent forward.
Slower (High Latency): Because the full packet must be received
before forwarding.
Used in: Telecommunication networks and high-integrity data
transfers.
Advantages:
Error detection before forwarding.
Prevents network congestion by discarding corrupted packets.
Disadvantages:
Increased latency due to full packet storage and checking.
📍 Example:
Imagine sending a letter from Ahmedabad to Delhi via courier hubs. The
letter stops at Mumbai, gets verified, and is then sent to Jaipur, and
finally to Delhi. Each stop is like a router using store-and-forward.
Switching Methods
Cut-through Switching: Forwards packets as soon as the
destination address is read, ensuring low latency but no error
checking.
Store-and-Forward Switching: Receives the entire frame, checks
for errors, and then forwards it, ensuring reliable transmission with
higher latency.
What is Store-and-Forward Switching?
Store-and-forward switching is a method of switching data packets by
the switching device that receives the data frame and then checks for
errors before forwarding the packets. It supports the efficient
transmission of non-corrupted frames. It is generally used in
telecommunication networks.
In store-and-forward switching, the switching device waits to receive the entire frame and then stores the frame in buffer memory. The frame is then checked for errors using a CRC (Cyclic Redundancy Check); if an error is found, the packet is discarded, otherwise it is forwarded to the next device.
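As a small illustrative sketch of this forward-or-discard decision, the code below computes a CRC over a buffered frame and only "forwards" it if the check passes. The CRC-32 polynomial 0xEDB88320 (the one Ethernet uses) is an assumption made just for the example; any error-detecting code would do.

/* Sketch of the store-and-forward decision: CRC the buffered frame, then
   forward only if the check passes. CRC-32 (reflected poly 0xEDB88320) is
   used here purely as an example of an error-detecting code. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32_calc(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 1 = forward the frame, 0 = discard it (error detected). */
static int store_and_forward_check(const uint8_t *frame, size_t len, uint32_t expected_crc)
{
    return crc32_calc(frame, len) == expected_crc;
}

int main(void)
{
    uint8_t frame[] = { 'd', 'a', 't', 'a' };
    uint32_t fcs = crc32_calc(frame, sizeof(frame));   /* sender appends this checksum */
    printf("forward? %d\n", store_and_forward_check(frame, sizeof(frame), fcs)); /* 1 */
    frame[0] ^= 0x01;                                   /* simulate a bit error in transit */
    printf("forward? %d\n", store_and_forward_check(frame, sizeof(frame), fcs)); /* 0 */
    return 0;
}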
In Short:
Term | Layer it connects to | Acts like
TSAP | Between Transport ↔ Session | Port number / endpoint
NSAP | Between Network ↔ Transport | Network address
🌐 Example Scenario:
You're using an app like WhatsApp on your phone to send a message.
📌 Real-Life Analogy:
TSAP = Like an apartment number in a building (specifies which app or
service on a device).
NSAP = Like the building's address (specifies which device in a network).
✅ Example Values:
Concept | Example
TSAP | Port number 443 (HTTPS)
NSAP | IP address 192.168.1.5
OR
Q.5 (a) Explain DNS in detail. 07
Domain Name System (DNS)
Types of DNS
Domain Name Server
The client machine sends a request to the local name server, which, if it does not find the address in its database, sends a request to the root name server. The root name server, in turn, routes the query to a top-level domain (TLD) or authoritative name server. The root name server can also contain some hostname-to-IP-address mappings. The top-level domain (TLD) server always knows who the authoritative name server is. So finally the IP address is returned to the local name server, which in turn returns the IP address to the host.
Example: linkedin.com → 108.174.10.10 (in binary: 01101100.10101110.00001010.00001010)
SUMMER-2023
Q.5(a) Explain HTTP in detail.
HTTP Full Form - Hypertext Transfer Protocol
HTTP is the primary method through which web browsers and servers
communicate to share information on the internet. It was invented by Tim Berners-
Lee. HyperText refers to text that is specially coded using a standard coding
language called HyperText Markup Language (HTML). HTTP/2 is the updated
version of HTTP, while HTTP/3 is the latest version, which was published in 2022.
What is the Full Form of HTTP?
HTTP stands for "Hypertext Transfer Protocol." It is a set of rules for sharing data
on the World Wide Web (WWW). When you visit a website, HTTP helps your
browser request and receive the data needed to display the web pages you see. It is
a fundamental part of how the internet works, making it possible for us to browse
and interact with websites.
Basic Structure: HTTP forms the foundation of the web, enabling data
communication and file sharing.
Web Browsing: Most websites use HTTP, so when you click on a link or
download a file, HTTP is at work.
Client-Server Model: HTTP works on a request-response system. Your
browser (client) asks for information, and the website's server responds with
the data.
Application Layer Protocol: HTTP operates within the Internet Protocol
Suite, managing how data is transmitted and received.
What is HyperText?
The protocol used to transfer hypertext between two computers is known as
HyperText Transfer Protocol. HTTP provides a standard between a web browser
and a web server to establish communication. It is a set of rules for transferring
data from one computer to another. Data such as text, images, and other
multimedia files are shared on the World Wide Web. Whenever a web user opens
their web browser, the user indirectly uses HTTP. It is an application protocol that
is used for distributed, collaborative, hypermedia information systems.
Working of HTTP [HyperText Transfer Protocol]
First of all, whenever we want to open a website, we open a web browser and type the URL of that website (e.g., www.facebook.com). This URL is sent to the Domain Name System (DNS) server. The DNS server first checks its records for this URL in its database and then returns to the web browser the IP address corresponding to this URL. Now the browser can send requests to the actual server.
After the server sends data to the client, the connection is closed. If we want something else from the server, we have to re-establish the connection between the client and the server.
IPv4 | IPv6
It supports manual and DHCP address configuration. | It supports auto-configuration and renumbering of addresses.
Fragmentation is performed by the sender and forwarding routers. | In IPv6, fragmentation is performed only by the sender.
IPv4 supports VLSM (Variable Length Subnet Mask). | IPv6 does not support VLSM.
Example address: 66.94.29.13 | Example address: 2001:0000:3238:DFE1:0063:0000:0000:FEFB
Domain Hierarchy:
SUMMER-2024
Q.5 (a) Explain bit-map protocol. 07
The bit-map protocol is a collision-free protocol. In the bit-map protocol, each contention period consists of exactly N slots. If a station has a frame to send, it transmits a 1 bit in its respective slot.
Almost all collisions can be avoided in CSMA/CD, but they can still occur during the contention period. Collisions during the contention period adversely affect system performance; this happens when the cable is long and the packets are short. This problem became more serious as fiber-optic networks came into use.
Here we shall discuss some protocols that resolve the collision during the
contention period.
Bit-map Protocol
Binary Countdown
Limited Contention Protocols
The Adaptive Tree Walk Protocol
Pure and slotted ALOHA, CSMA, and CSMA/CD are contention-based protocols:
Try - if a collision occurs - retry
No guarantee of performance
What happens if the network load is high?
Collision Free Protocols:
Pay constant overhead to achieve performance guarantee
Good when network load is high
1. Bit-map Protocol:
The bit-map protocol is a collision-free protocol. In the bit-map method, each contention period consists of exactly N slots. If a station has a frame to send, it transmits a 1 bit in the corresponding slot. For example, if station 2 has a frame to send, it transmits a 1 bit in slot 2.
In general, station i announces that it has a frame to send by inserting a 1 bit into slot i. In this way, each station has complete knowledge of which stations wish to transmit. There will never be any collisions because everyone agrees on who goes next. Protocols like this, in which the desire to transmit is broadcast before the actual transmission, are called reservation protocols.
Bit Map Protocol fig (1.1)
To analyze the performance of this protocol, we will measure time in units of the contention bit slot, with a data frame consisting of d time units. Under low-load conditions, the bitmap will simply be repeated over and over, for lack of data frames. Under high load, when all the stations have something to send all the time, the N-bit contention period is prorated over N frames, yielding an overhead of only 1 bit per frame.
On average, high-numbered stations have to wait only half a scan (N/2 bit slots) before starting to transmit, while low-numbered stations have to wait on average 1.5N slots; the mean for all stations is N slots.
2. Binary Countdown:
The binary countdown protocol is used to overcome the overhead of 1 bit per station in the bit-map protocol. In binary countdown, binary station addresses are used. A station wanting to use the channel broadcasts its address as a binary bit string, starting with the high-order bit. All addresses are assumed to be of the same length. Here, we will see an example to illustrate the working of binary countdown.
In this method, the address bits sent by the different stations in each bit position are ORed together, and the result decides the priority for transmitting. Suppose stations 0001, 1001, 1100, and 1011 are all trying to seize the channel. First, all the stations broadcast their most significant address bit, that is 0, 1, 1, and 1 respectively, and these bits are ORed together. Station 0001 sees a 1 in a higher-order bit of another station's address and knows that a higher-numbered station is competing for the channel, so it gives up for the current round. The other three stations, 1001, 1100, and 1011, continue. The only remaining station whose next bit is 1 is 1100, so stations 1011 and 1001 give up because their second bit is 0. Station 1100 then starts transmitting its frame, after which another bidding cycle starts.
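The bidding in this example can be sketched in a few lines of C; the wired-OR of the channel is simulated with a bitwise OR, and the station addresses are the same four used above.

/* Sketch of binary countdown arbitration for the example stations above.
   At each bit position the channel carries the OR of all bits sent; a station
   that sent 0 while the channel shows 1 withdraws for this round. */
#include <stdio.h>

#define BITS 4

int main(void)
{
    unsigned addr[] = { 0x1 /*0001*/, 0x9 /*1001*/, 0xB /*1011*/, 0xC /*1100*/ };
    int active[] = { 1, 1, 1, 1 };
    int n = 4;

    for (int bit = BITS - 1; bit >= 0; bit--) {         /* high-order bit first */
        unsigned channel = 0;
        for (int i = 0; i < n; i++)                      /* wired-OR of the bits sent */
            if (active[i])
                channel |= (addr[i] >> bit) & 1u;
        for (int i = 0; i < n; i++)                      /* stations that sent 0 but saw 1 give up */
            if (active[i] && channel == 1 && (((addr[i] >> bit) & 1u) == 0))
                active[i] = 0;
    }
    for (int i = 0; i < n; i++)
        if (active[i])
            printf("winner: station 0x%X\n", addr[i]);   /* prints 0xC, i.e. station 1100 */
    return 0;
}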
SMTP
Types of SMTP Protocol
The SMTP model supports two types of email delivery methods: end-to-
end and store-and-forward.
End-to-end delivery is used between organizations. In this method, the
email is sent directly from the sender's SMTP client to the recipient's SMTP
server without passing through intermediate servers.
Store-and-forward is used within organizations that have TCP/IP and
SMTP-based networks. In this method, the email may pass through several
intermediate servers (Message Transfer Agents, or MTAs) before reaching
the recipient.
With end-to-end delivery, the sending SMTP client keeps the email until it has been successfully copied to the recipient's SMTP server. This is different from the store-and-forward method, where the email might stop at multiple intermediate servers before reaching its destination. In store-and-forward systems, the sender is notified as soon as the email reaches the first server, not the final destination.
Model of SMTP System
SMTP Model
In the SMTP model, the user deals with a user agent (UA), for example Microsoft Outlook, Netscape, Mozilla, etc. To exchange mail using TCP, a Message Transfer Agent (MTA) is used. The user sending the mail does not have to deal with the MTA, as it is the responsibility of the system admin to set up a local MTA. The MTA maintains a small queue of mail so that it can schedule repeat delivery of mail in case the receiver is not available. The MTA delivers the mail to the mailboxes, and the information can later be downloaded by the user agents.
Components of SMTP
Mail User Agent (MUA): It is a computer application that helps you in
sending and retrieving mail. It is responsible for creating email messages for
transfer to the mail transfer agent(MTA).
Mail Submission Agent (MSA): It is a computer program that receives mail
from a Mail User Agent(MUA) and interacts with the Mail Transfer
Agent(MTA) for the transfer of the mail.
Mail Transfer Agent (MTA): It is software whose job is to transfer mail from one system to another with the help of SMTP.
Mail Delivery Agent (MDA): A mail Delivery agent or Local Delivery
Agent is basically a system that helps in the delivery of mail to the local
system.
How does SMTP Work?
1. Sending Email:
When a user wants to send an email, they use a User Agent (UA), like
Outlook or Gmail.
The email is handed over to the MTA, which is responsible for transferring
the email to the recipient’s mail server.
2. SMTP Client and Server:
Sender-SMTP (Client): The email sender’s MTA initiates the connection to
the recipient’s MTA (Receiver-SMTP).
Receiver-SMTP (Server): The receiving MTA listens for incoming
connections and receives the email from the sender-SMTP.
This communication happens over TCP port 25.
3. Relays and Gateways:
Relays: In some cases, the email may pass through several intermediate
MTAs before reaching the destination server. These MTAs act as relays.
Gateways: If the sending and receiving systems use different email
protocols (e.g., SMTP and non-SMTP), an email gateway can convert the
email to the appropriate format for delivery.
4. Email Delivery:
The sender’s MTA sends the email to the receiver’s MTA, either directly or
through relays.
The MTA uses the SMTP protocol to transfer the message. Once it’s
delivered to the destination MTA, the email is placed in the recipient’s
mailbox.
The recipient’s User Agent (UA) can then download the email.
SMTP Envelope
Purpose
The SMTP envelope contains information that guides email
delivery between servers.
It is distinct from the email headers and body and is not visible to the email
recipient.
Contents of the SMTP Envelope
Sender Address: Specifies where the email originates.
Recipient Addresses: Indicates where the email should be delivered.
Routing Information: Helps servers determine the path for email delivery.
Comparison to Regular Mail
Think of the SMTP envelope as the address on a physical envelope for
regular mail.
Just like an envelope guides postal delivery, the SMTP envelope directs
email servers on where to send the email.
SMTP Commands
S.No. | Keyword | Command form | Description | Usage
1 | HELO | HELO<SP><domain><CRLF> | Provides the identification of the sender, i.e. the host name. | Mandatory
2 | MAIL | MAIL<SP>FROM : <reverse-path><CRLF> | Specifies the originator of the mail. | Mandatory
3 | RCPT | RCPT<SP>TO : <forward-path><CRLF> | Specifies the recipient of the mail. | Mandatory
4 | DATA | DATA<CRLF> | Specifies the beginning of the mail. | Mandatory
5 | QUIT | QUIT<CRLF> | Closes the TCP connection. | Mandatory
6 | RSET | RSET<CRLF> | Aborts the current mail transaction, but the TCP connection remains open. | Highly recommended
7 | VRFY | VRFY<SP><string><CRLF> | Used to confirm or verify the user name. | Highly recommended
8 | NOOP | NOOP<CRLF> | No operation. | Highly recommended
9 | TURN | TURN<CRLF> | Reverses the role of sender and receiver. | Seldom used
10 | EXPN | EXPN<SP><string><CRLF> | Specifies the mailing list to be expanded. | Seldom used
11 | HELP | HELP<SP><string><CRLF> | Sends some specific documentation to the system. | Seldom used
12 | SEND | SEND<SP>FROM : <reverse-path><CRLF> | Sends mail to the terminal. | Seldom used
13 | SOML | SOML<SP>FROM : <reverse-path><CRLF> | Sends mail to the terminal if possible; otherwise to the mailbox. | Seldom used
14 | SAML | SAML<SP>FROM : <reverse-path><CRLF> | Sends mail to the terminal and the mailbox. | Seldom used
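A hedged sketch of how the mandatory commands above appear on the wire is given below: a tiny C client that opens a TCP connection and walks through HELO, MAIL, RCPT, DATA and QUIT. The host mail.example.com and the mailbox addresses are placeholders, and a real client would have to check every reply code instead of just printing it.

/* Minimal SMTP client sketch showing the mandatory command sequence
   (HELO, MAIL, RCPT, DATA, QUIT). Host and addresses are placeholders;
   real servers also require checking the reply code of every command. */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void chat(int fd, const char *cmd)
{
    char buf[512];
    if (cmd) {
        write(fd, cmd, strlen(cmd));            /* send one SMTP command line */
        printf("C: %s", cmd);
    }
    ssize_t n = read(fd, buf, sizeof(buf) - 1); /* read the server's reply line(s) */
    if (n > 0) { buf[n] = '\0'; printf("S: %s", buf); }
}

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("mail.example.com", "25", &hints, &res) != 0) return 1;  /* placeholder host, relay port 25 */

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) return 1;
    freeaddrinfo(res);

    chat(fd, NULL);                                        /* 220 greeting from the server */
    chat(fd, "HELO client.example.com\r\n");               /* identify the sending host */
    chat(fd, "MAIL FROM:<alice@example.com>\r\n");         /* originator of the mail */
    chat(fd, "RCPT TO:<bob@example.org>\r\n");             /* recipient of the mail */
    chat(fd, "DATA\r\n");                                  /* beginning of the mail */
    chat(fd, "Subject: test\r\n\r\nHello via SMTP.\r\n.\r\n"); /* body, terminated by a lone "." */
    chat(fd, "QUIT\r\n");                                  /* close the TCP connection */
    close(fd);
    return 0;
}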
SMTP Ports
Port 587: This is the most commonly used port for secure SMTP
submission using TLS (Transport Layer Security). It is recommended for
client-to-server communication, as it ensures the security of the email
transmission.
Port 465: Previously used for secure SMTP (SMTPS), this port is no
longer considered an official standard and is generally not recommended
anymore. Many email providers have moved away from port 465 in favor of
port 587.
Port 25: This port is traditionally used for SMTP relay between mail
servers, not for email submission from clients. It is often blocked by ISPs for
outgoing mail due to its frequent use for spam and malicious activities.
Port 2525: Although not an official SMTP port, it is sometimes used as an
alternative for SMTP submission, especially in cases where port 25 is
blocked or restricted. Many email providers support this port as an
alternative for secure communication.
Difference Between SMTP and Extended SMTP
We cannot reduce the size of the We can reduce the size of the email in
email in SMTP. Extended SMTP.
SMTP Extended SMTP
Advantages of SMTP
If necessary, the users can have a dedicated server.
It allows for bulk mailing.
Low cost and wide coverage area.
Offer choices for email tracking.
Reliable and prompt email delivery.
Disadvantages of SMTP
SMTP's common port can be blocked by several firewalls.
SMTP security is a bigger problem.
Its simplicity restricts how useful it can be.
Just 7-bit ASCII characters can be used.
If a message is longer than a certain length, SMTP servers may reject the
entire message.
Delivering your message will typically involve additional back-and-forth
processing between servers, which will delay sending and raise the
likelihood that it won't be sent.
SMTP vs POP vs IMAP
SMTP | POP | IMAP
It is an in-band protocol. | It is an in-band protocol. | It is an in-band protocol.
Not used at the receiver side. | Used at the receiver side. | Used at the receiver side.
OR
Q.5 (a) Explain bit stuffing with appropriate example. 07
Bit stuffing is a technique used in computer networks to ensure data is transmitted
correctly. The data link layer is responsible for framing, which is the division of a
stream of bits from the network layer into manageable units (called frames).
Frames could be of fixed size or variable size. In variable-size framing, we need a
way to define the end of one frame and the beginning of the next. Bit stuffing is the insertion of non-information bits into data. Note that stuffed bits
should not be confused with overhead bits. Overhead bits are non-data bits that
are necessary for transmission (usually as part of headers, checksums, etc.).
What is Bit Stuffing?
Bit stuffing is a method used in data communication to avoid confusion between
data and special control signals (like start or end markers). When a specific
sequence of bits appears in the data an extra bit is added to break the pattern. This
prevents the receiver from mistaking the data for control information. Once the
data is received the extra bit is removed restoring the original message. This
technique helps ensure accurate data transmission.
What is Byte Stuffing?
Byte stuffing is the same as bit stuffing; the only difference is that instead of a single bit, one byte of data is added to the message to avoid confusion between data and special control signals. This ensures accurate message transmission
without misinterpreting the data.
How Bit Stuffing Work?
Below are the working of bit stuffing in computer network:
Sender Side: While sending data, if the sender detects a sequence of bits that matches a special control pattern (such as five consecutive 1s), it inserts an extra bit, usually a 0, into the data stream to break the sequence.
Receiver Side: The receiver gets the data and removes the extra stuffed bit
whenever it detects a specific bit pattern. This restores the data to its original
form.
After 5 consecutive 1-bits, a 0-bit is stuffed. For example, the data 01111110 (the same pattern as the flag) is transmitted as 011111010, and the receiver removes the stuffed 0 to restore the original bits (see the sketch below).
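The following small C sketch performs this stuffing and de-stuffing on a string of '0'/'1' characters (the character representation is just to keep the example readable; real implementations work on raw bits).

/* Sketch of HDLC-style bit stuffing: after five consecutive 1s in the data,
   a 0 is inserted; the receiver removes it again. Bits are kept one per char
   ('0'/'1') purely for readability. */
#include <stdio.h>

static void stuff(const char *in, char *out)
{
    int ones = 0, j = 0;
    for (int i = 0; in[i]; i++) {
        out[j++] = in[i];
        ones = (in[i] == '1') ? ones + 1 : 0;
        if (ones == 5) {                  /* five 1s in a row: insert a 0 */
            out[j++] = '0';
            ones = 0;
        }
    }
    out[j] = '\0';
}

static void destuff(const char *in, char *out)
{
    int ones = 0, j = 0;
    for (int i = 0; in[i]; i++) {
        out[j++] = in[i];
        ones = (in[i] == '1') ? ones + 1 : 0;
        if (ones == 5) {                  /* the bit after five 1s is a stuffed 0: skip it */
            i++;
            ones = 0;
        }
    }
    out[j] = '\0';
}

int main(void)
{
    char stuffed[64], restored[64];
    const char *data = "01111110";        /* same pattern as the HDLC flag */
    stuff(data, stuffed);
    destuff(stuffed, restored);
    printf("data:     %s\n", data);       /* 01111110 */
    printf("stuffed:  %s\n", stuffed);    /* 011111010 */
    printf("restored: %s\n", restored);   /* 01111110 */
    return 0;
}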
Applications of Bit Stuffing
1. Synchronize several channels before multiplexing.
2. Rate-match two single channels to each other.
3. Run length limited coding.
Run length limited coding : To limit the number of consecutive bits of the same
value(i.e., binary value) in the data to be transmitted. A bit of the opposite value is
inserted after the maximum allowed number of consecutive bits.
Bit stuffing technique does not ensure that the sent data is intact at the receiver side
(i.e., not corrupted by transmission errors). It is merely a way to ensure that the
transmission starts and ends at the correct places.
Advantages of Bit Stuffing
Data Integrity: Bit stuffing helps ensure that the data is transmitted
accurately without confusion between data and control signals. This prevents
errors in message interpretation.
Error Prevention: By adding extra bits bit stuffing reduces the chances of
misidentifying special bit patterns which can lead to data loss or corruption.
Flexible Data Handling: Bit stuffing allows networks to handle various
data types without needing complex encoding schemes making the
transmission process simpler.
Compatibility: It works well with existing communication protocols
allowing for easy integration into systems that use bit-oriented protocols.
Easy Decoding: The technique is straightforward for receivers to implement
as they simply need to look for the stuffed bits and remove them during data
processing.
Disadvantages of Bit Stuffing
Increased Data Size: Bit stuffing adds extra bits to the data stream which
increases the overall size of the transmitted data leading to a slight increase
in bandwidth usage.
More Processing: Both the sender and receiver need to process the data for
bit stuffing and removal which can add complexity to the communication
system.
Reduced Efficiency: In situations where many bits are stuffed the overhead
can reduce the overall efficiency of the data transmission.
Limited to Specific Protocols: Bit stuffing is primarily used in bit-oriented
protocols and may not be suitable for all types of communication systems.
Latency: The added processing and extra bits might introduce
small delays in data transmission especially in systems where speed is
critical.
Conclusion
In conclusion, bit stuffing is an essential technique in computer networks that helps
maintain data integrity during transmission. By adding extra bits to avoid
confusion with special control sequences it ensures that the receiver accurately
interprets the message. This method enhances the reliability of data communication
making it easier to manage and understand the information being sent across
networks.
Consider router X. X will share its routing table with its neighbors, and the neighbors will share their routing tables with X. The distance from node X to each destination y is then calculated using the Bellman-Ford equation:
Dx(y) = min over all neighbors v of { c(x,v) + Dv(y) }, for each node y ∈ N
As we can see, the distance from X to Z is smaller when Y is used as an intermediate node (hop), so the routing table of X is updated accordingly (a small numeric sketch follows below).
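A minimal numeric sketch of one such update is shown below; the link costs (X-Y = 1, X-Z = 5, Y-Z = 2) are made up for illustration and are not taken from the original figure.

/* Sketch of one distance-vector update at node X using the Bellman-Ford
   equation Dx(y) = min over neighbours v of { c(x,v) + Dv(y) }.
   The costs below are illustrative only. */
#include <stdio.h>

#define N 3            /* nodes: 0 = X, 1 = Y, 2 = Z */
#define INF 9999

int main(void)
{
    int cost[N]  = { 0, 1, 5 };            /* link costs c(X,v) to each neighbour v */
    int Dv[N][N] = {                        /* distance vectors advertised by each node */
        { 0, 1, 5 },                        /* X's own current vector */
        { 1, 0, 2 },                        /* vector received from Y */
        { 5, 2, 0 }                         /* vector received from Z */
    };
    int Dx[N], via[N];

    for (int y = 0; y < N; y++) {           /* recompute Dx(y) for every destination y */
        Dx[y] = INF;
        via[y] = -1;
        for (int v = 1; v < N; v++) {       /* consider each neighbour v (Y and Z) */
            int d = cost[v] + Dv[v][y];
            if (d < Dx[y]) { Dx[y] = d; via[y] = v; }
        }
    }
    /* X reaches Z at cost 1 + 2 = 3 via Y, better than the direct cost 5. */
    for (int y = 1; y < N; y++)
        printf("Dx(%d) = %d via neighbour %d\n", y, Dx[y], via[y]);
    return 0;
}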
Applications:
Used in early Internet protocols (e.g., RIP), telephone networks, and military
routing systems.
Conclusion:
Distance Vector Routing is a foundational algorithm that enables routers to find the
shortest path by sharing route information with neighbors. Though simple and
widely used, it has limitations that have led to the development of more advanced
routing protocols.
WINTER-2022
Q.5 (a) Explain infrastructure mode and Ad-hoc mode
in 802.11 networks. 07
Infrastructure Mode
Definition:
Infrastructure mode is the most common Wi-Fi setup where wireless devices
communicate with each other through a central device called an Access
Point (AP) or wireless router.
How it works:
o Wireless clients (laptops, smartphones, etc.) connect to the AP.
o The AP manages network traffic, coordinates communication, and
connects wireless clients to the wired network (like the internet).
o The AP acts as a bridge between wireless devices and the wired LAN.
Use cases:
o Home Wi-Fi networks with a wireless router.
o Enterprise networks with multiple APs providing seamless coverage.
o When you want centralized management, security controls, and
internet access.
Advantages:
o Better scalability — many clients can connect through one AP.
o Supports roaming between APs.
o Easier to manage and secure.
o Provides internet or LAN connectivity.
Summary Table
Feature | Infrastructure Mode | Ad-Hoc Mode
Central device | Access Point (AP) | None (peer-to-peer)
Network structure | Star topology | Mesh (peer-to-peer)
Connectivity | Through AP to wired network | Direct device-to-device
Scalability | Supports many devices | Limited devices
Management | Centralized via AP | Decentralized
The basic model of how the Web works is shown in the figure below. Here the browser is displaying a web page on the client machine. When the user clicks on a line of text that is linked to a page on the abd.com server, the browser follows the hyperlink by sending a message to the abd.com server asking for that page.
Working of WWW
A Web browser is used to access web pages. Web browsers can be defined as
programs which display text, data, pictures, animation and video on the Internet.
Hyperlinked resources on the World Wide Web can be accessed using software
interfaces provided by Web browsers. Initially, Web browsers were used only for
surfing the Web but now they have become more universal.
The diagram below indicates how the Web operates using the client-server architecture of the Internet. When a user requests a web page or other information, the web browser sends a request to the server, the web server provides the requested service back to the browser, and finally the requested service is used by the user who made the request.
Web Browser Evolution and the Growth of the World Wide Web
In the early 1990s, Tim Berners-Lee and his team created a basic text web browser.
It was the release of the more user-friendly Mosaic browser in 1993 that really
sparked widespread interest in the World Wide Web (WWW). Mosaic had a
clickable interface similar to what people were already familiar with on personal
computers, which made it easier for everyone to use the internet.
Mosaic was developed by Marc Andreessen and others in the United States. They
later made Netscape Navigator, which became the most popular browser in 1994.
Microsoft's Internet Explorer took over in 1995 and held the top spot for many
years. Mozilla Firefox came out in 2004, followed by Google Chrome in 2008,
both challenging IE's dominance. In 2015, Microsoft replaced Internet Explorer
with Microsoft Edge.
Challenges of the World Wide Web
Privacy Concerns: Personal data is often collected and misused.
Security Risks: Vulnerable to hacking, phishing, and malware.
Digital Divide: Unequal access to the internet globally.
Misinformation: Spread of fake news and unreliable content.
Cyberbullying: Online harassment and abuse.
Addiction: Overuse of the web leading to reduced productivity.
Dependence: Heavy reliance on the web for daily activities.
Copyright Issues: Unauthorized sharing of copyrighted material.
Environmental Impact: High energy consumption of servers and data
centers.
Complexity of Regulation: Difficult to enforce laws across countries.
Explanation of HTTP Query (Request) and Response
HTTP Requests
HTTP Requests are the message sent by the client to request data from the server
or to perform some actions. Different HTTP requests are:
GET: GET request is used to read/retrieve data from a web server. GET
returns an HTTP status code of 200 (OK) if the data is successfully retrieved
from the server.
POST: POST request is used to send data (file, form data, etc.) to the server.
On successful creation, it returns an HTTP status code of 201.
PUT: A PUT request is used to modify the data on the server. It replaces the
entire content at a particular location with data that is passed in the body
payload. If there are no resources that match the request, it will generate one.
PATCH: PATCH is similar to PUT request, but the only difference is, it
modifies a part of the data. It will only replace the content that you want to
update.
DELETE: A DELETE request is used to delete the data on the server at a
specified location.
Now, let's understand all of these request methods by example. We have set up a
small application that includes a NodeJS server and MongoDB database. NodeJS
server will handle all the requests and return back an appropriate response.
Project Setup and Modules Installation
Step 1: To start a NodeJS application, create a folder called RestAPI and run the
following command.
npm init -y
Step 2: Using the following command, install the required npm packages.
npm install express body-parser mongoose
Step 3: In your project directory, create a file called index.js.
1. HTTP Request (Query)
When you enter a URL in your browser or click a link:
The browser sends an HTTP request to the web server hosting the website.
This request contains:
o Request line: Method (GET, POST, etc.), URL path, HTTP version
e.g., GET /index.html HTTP/1.1
o Headers: Metadata like User-Agent, Accept types, Host, Cookies, etc.
o Optional body: Data sent to server (usually with POST or PUT)
Example HTTP Request (GET):
GET /index.html HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Accept: text/html
2. HTTP Response
The web server processes the request and sends back an HTTP response:
This response contains:
o Status line: HTTP version, status code, and status message
e.g., HTTP/1.1 200 OK
o Headers: Metadata like Content-Type, Content-Length, Set-Cookie,
etc.
o Body: The actual content requested (HTML, JSON, image, etc.)
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1256
<html>
<head><title>Example</title></head>
<body>
<h1>Welcome to Example</h1>
</body>
</html>
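For completeness, here is a hedged sketch of the same exchange done from a program: a minimal C client that resolves the host, opens a TCP connection to port 80, writes the GET request and prints the raw response. www.example.com is a placeholder host.

/* Sketch of the request/response exchange above using POSIX sockets:
   resolve the host, connect over TCP to port 80, send a GET request and
   print whatever the server returns. */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("www.example.com", "80", &hints, &res) != 0) return 1;   /* DNS step */

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) return 1;  /* TCP setup */
    freeaddrinfo(res);

    const char *req =
        "GET /index.html HTTP/1.1\r\n"      /* request line: method, path, version */
        "Host: www.example.com\r\n"         /* headers */
        "Connection: close\r\n"
        "\r\n";                             /* blank line ends the request */
    write(fd, req, strlen(req));

    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)   /* status line, headers, then body */
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);                              /* connection is closed after the response */
    return 0;
}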
OR
(a) Describe Utopian Simplex Protocol. 07
We will consider a protocol that is as simple as possible because it does not worry about the possibility of anything going wrong. Data are transmitted in one direction only. Both the transmitting and receiving network layers are always ready. Processing time can be ignored. Infinite buffer space is available. This thoroughly unrealistic protocol, which we will nickname "Utopia", is simply to show the basic structure on which we will build. Its implementation is shown below.
Utopian simplex Protocol: There is one direction of data transmission only from
sender to receiver. Here we assume the communication channel to be error-free and
the receiver will infinitely quickly process the input. The sender pumps out the data
onto the line as fast as it can.
typedef enum {frame_arrival} event_type;
#include "protocol.h"

void sender1(void)
{
    frame s;                            /* buffer for an outbound frame */
    packet buffer;                      /* buffer for an outbound packet */

    while (true) {
        from_network_layer(&buffer);    /* go get something to send */
        s.info = buffer;                /* copy it into s for transmission */
        to_physical_layer(&s);          /* send it on its way */
    }
}

void receiver1(void)
{
    frame r;
    event_type event;                   /* filled in by wait, but not used here */

    while (true) {
        wait_for_event(&event);         /* only possibility is frame_arrival */
        from_physical_layer(&r);        /* go get the inbound frame */
        to_network_layer(&r.info);      /* pass the data to the network layer */
    }
}
This protocol has two different procedures, a sender and a receiver. MAX_SEQ is
not needed because no sequence numbers or acknowledgements are used. The only
event type possible is frame_arrival (i.e. the arrival of an undamaged frame). The
sender pumps out the data in an infinite while loop as fast as it can. The loop body
consists of three actions and they are -
Fetch a packet from the network layer,
Construct an outbound frame using the variable s,
Send the frame on its way.
Other fields have to do with error and flow control and there are no errors or flow
control restrictions here so only the info field is used here. The receiver is equally
simple. The procedure wait_for_event returns when the frame arrives and the event
is set to frame_arrival. The newly arrived frame from the hardware buffer is
removed by the call from_physical_layer and put in the variable r, so that receiver
can get it. The data link layer settles back to wait for the next frame when the data
portion is passed on to the network layer, suspending itself until the frame arrives.
It does not handle either flow control or error correction therefore it is unrealistic.
Properties of Utopian Simplex Protocol:
The design of Utopian Simplex Protocol is based on 2 procedures i.e. Sender
and Receiver.
Both Sender and Receiver run in the data link layer but the sender runs in the
data link layer of the source machine while Receiver runs in datalink layer of
the destination machine.
It is designed for Uni-directional data transmission.
Sender and receiver are always ready for data processing.
Both of them have infinite buffer space available.
The communication link never loses any data frames.
It is considered as unrealistic as it does not handle flow control or error
correction.
The protocol assumes that the communication channel is dedicated to the
sender and receiver, and no other device can interfere with the transmission.
The sender and receiver have the same clock rate, so there is no need for any
synchronization mechanism between them.
The protocol assumes that the data frames are of fixed size and known to
both sender and receiver.
There is no need for any addressing mechanism, as the receiver is assumed
to be the only device connected to the channel.
The protocol does not support retransmission of lost or corrupted frames, as
it assumes that the communication channel never loses any data.
There is no provision for multiplexing or demultiplexing of data streams, as
the channel is dedicated to a single sender-receiver pair.
The protocol does not provide any mechanism for detecting errors in the
received data frames, as it assumes that the communication channel is error-
free.
The protocol is designed for simple, one-way communication scenarios, and
is not suitable for complex, bi-directional data transfers.
(b) What is domain resource record? Explain DNS
resource record types. 07 (Repeat)
WINTER-2023
Q.5 (a) Describe Bluetooth architecture. 07
What is Bluetooth?
Bluetooth is used for short-range wireless voice and data communication. It is a
Wireless Personal Area Network (WPAN) technology and is used for data
communications over smaller distances. This generation changed into being
invented via Ericson in 1994. It operates within the unlicensed, business, scientific,
and clinical (ISM) bands from 2.4 GHz to 2.485 GHz.
Bluetooth stages up to 10 meters. Depending upon the version, it presents
information up to at least 1 Mbps or 3 Mbps. The spreading method that it uses is
FHSS (Frequency-hopping unfold spectrum). A Bluetooth network is called a
piconet and a group of interconnected piconets is called a scatter net.
Bluetooth
Bluetooth is a wireless technology that lets devices like phones, tablets, and
headphones connect to each other and share information without needing cables.
Bluetooth simply follows the principle of transmitting and receiving data using radio waves. A device can be paired with another Bluetooth device, but the two must be within communication range to connect. When two devices start to share data, they form a network called a piconet, which can accommodate up to eight active devices (one master and up to seven slaves).
Key Features of Bluetooth
The transmission capacity of Bluetooth is 720 kbps.
Bluetooth is a wireless technology.
Bluetooth is a Low-cost and short-distance radio communications standard.
Bluetooth is robust and flexible.
The basic architecture unit of Bluetooth is a piconet.
Architecture of Bluetooth
The architecture of Bluetooth defines two types of networks:
Piconet
Piconet is a type of Bluetooth network that contains one primary node called the
master node and seven active secondary nodes called slave nodes. Thus, we can
say that there is a total of 8 active nodes which are present at a distance of 10
meters. The communication between the primary and secondary nodes can be one-
to-one or one-to-many. Possible communication is only between the master and
slave; slave-to-slave communication is not possible. A piconet can also have up to 255 parked nodes; these are secondary nodes that cannot take part in communication unless they are converted to the active state.
Scatternet
It is formed by using various piconets. A slave that is present in one piconet can act
as master or we can say primary in another piconet. This kind of node can receive a
message from a master in one piconet and deliver the message to its slave in the
other piconet where it is acting as a master. This type of node is referred to as a
bridge node. A station cannot be a master in two piconets.
Open loop congestion control policies are applied to prevent congestion before it
happens. The congestion control is handled either by the source or the destination.
1. Retransmission Policy :
It is the policy in which retransmission of the packets is taken care of. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. This retransmission may increase the congestion in the network. To prevent congestion, retransmission timers must be designed so that they prevent congestion while still optimizing efficiency.
2. Window Policy :
The type of window at the sender's side may also affect the congestion.
Several packets in the Go-back-n window are re-sent, although some packets
may be received successfully at the receiver side. This duplication may
increase the congestion in the network and make it worse.
Therefore, Selective repeat window should be adopted as it sends the
specific packet that may have been lost.
3. Discarding Policy :
A good discarding policy adopted by the routers is one in which the routers may prevent congestion by partially discarding corrupted or less sensitive packets while still maintaining the quality of the message. In the case of audio file transmission, routers can discard less sensitive packets to prevent congestion and still maintain the quality of the audio file.
4. Acknowledgment Policy :
Since acknowledgements are also the part of the load in the network, the
acknowledgment policy imposed by the receiver may also affect congestion.
Several approaches can be used to prevent congestion related to
acknowledgment.
The receiver should send acknowledgement for N packets rather than
sending acknowledgement for a single packet. The receiver should send an
acknowledgment only if it has to send a packet or a timer expires.
5. Admission Policy :
In admission policy a mechanism should be used to prevent congestion.
Switches in a flow should first check the resource requirement of a network
flow before transmitting it further. If there is a chance of a congestion or
there is a congestion in the network, router should deny establishing a virtual
network connection to prevent further congestion.
Closed Loop Congestion Control
Closed loop congestion control techniques are used to treat or alleviate congestion
after it happens. Several techniques are used by different protocols; some of them
are:
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause the upstream node or nodes to become congested and to reject data from the nodes above them. Backpressure is a node-to-node congestion control technique that propagates in the opposite direction of the data flow. The backpressure technique can be applied only to virtual circuit networks, where each node has information about its upstream node.
In the diagram above, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may become congested because its output data flow slows down. Similarly, the 1st node may become congested and inform the source to slow down.
4. Explicit Signaling :
In explicit signaling, if a node experiences congestion it can explicitly send a
packet to the source or destination to inform about congestion. The difference
between choke packet and explicit signaling is that the signal is included in the
packets that carry data rather than creating a different packet as in case of choke
packet technique.
Explicit signaling can occur in either forward or backward direction.
Forward Signaling : In forward signaling, a signal is sent in the direction of
the congestion. The destination is warned about congestion. The receiver in
this case adopt policies to prevent further congestion.
Backward Signaling : In backward signaling, a signal is sent in the
opposite direction of the congestion. The source is warned about congestion
and it needs to slow down.
TCP Special Techniques
Slow Start – Start slow, then speed up.
AIMD – Add speed slowly, drop speed quickly when jammed.
Fast Retransmit/Recovery – Fix lost packets quickly.
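To make these three ideas concrete, here is a rough sketch of how the congestion window evolves under slow start and AIMD; the initial threshold and the round where a loss is "detected" are made-up values, not part of any real TCP trace.

/* Rough sketch of TCP's slow start + AIMD behaviour for the congestion
   window (in segments). The threshold and loss pattern are invented just to
   show the shape of the algorithm, not a real TCP implementation. */
#include <stdio.h>

int main(void)
{
    double cwnd = 1.0, ssthresh = 16.0;

    for (int rtt = 1; rtt <= 20; rtt++) {
        int loss = (rtt == 12);                 /* pretend congestion is detected here */
        if (loss) {
            ssthresh = cwnd / 2.0;              /* multiplicative decrease */
            cwnd = ssthresh;                    /* fast-recovery style: resume from ssthresh */
        } else if (cwnd < ssthresh) {
            cwnd *= 2.0;                        /* slow start: exponential growth per RTT */
        } else {
            cwnd += 1.0;                        /* congestion avoidance: additive increase */
        }
        printf("RTT %2d: cwnd = %5.1f  ssthresh = %5.1f\n", rtt, cwnd, ssthresh);
    }
    return 0;
}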
So in this example, the Bellman-Ford algorithm will converge for each router, they
will have entries for each other. B will know that it can get to C at a cost of 1, and
A will know that it can get to C via B at a cost of 2.
If the link between B and C is disconnected, then B will know that it can no longer
get to C via that link and will remove it from its table. Before it can send any
updates it's possible that it will receive an update from A which will be advertising
that it can get to C at a cost of 2. B can get to A at a cost of 1, so it will update a
route to C via A at a cost of 3. A will then receive updates from B later and update
its cost to 4. They will then go on feeding each other bad information, with the cost growing toward infinity; this is called the Count-to-Infinity problem.
Solution for Count to Infinity problem:-
Route Poisoning:
When a route fails, distance vector protocols spread the bad news about a route
failure by poisoning the route. Route poisoning refers to the practice of advertising
a route, but with a special metric value called Infinity. Routers consider routes
advertised with an infinite metric to have failed. Each distance vector routing
protocol uses the concept of an actual metric value that represents infinity. RIP
defines infinity as 16. The main disadvantage of poison reverse is that it can
significantly increase the size of routing announcements in certain fairly common
network topologies.
Split horizon:
If the link between B and C goes down, and B had received a route from A, B
could end up using that route via A. A would send the packet right back to B,
creating a loop. But according to the Split horizon Rule, Node A does not advertise
its route for C (namely A to B to C) back to B. On the surface, this seems
redundant since B will never route via node A because the route costs more than
the direct route from B to C.
Consider the following network topology showing Split horizon-
In addition to these, we can also use split horizon with route poisoning, where both of the above techniques are combined to achieve efficiency and to limit the increase in the size of routing announcements.
Split horizon with Poison reverse technique is used by Routing Information
Protocol (RIP) to reduce routing loops. Additionally, Holddown timers can
be used to avoid the formation of loops. The hold-down timer immediately
starts when the router is informed that the attached link is down. Till this
time, the router ignores all updates of the down route unless it receives an
update from the router of that downed link. During the timer, If the downlink
is reachable again, the routing table can be updated.
To send an email:
1. Compose a new message in your email client.
2. Enter the recipient's email address in the "To" field.
3. Add a subject line to summarize the content of the message.
4. Write the body of the message.
5. Attach any relevant files if needed.
6. Click "Send" to deliver the message to the recipient's email server.
7. Emails can also include features such as cc (carbon copy) and bcc (blind
carbon copy) to send copies of the message to multiple recipients, and reply,
reply all, and forward options to manage the conversation.
Electronic Mail (e-mail) is one of the most widely used services of the Internet. This service allows an Internet user to send a message in a formatted manner (mail) to other Internet users in any part of the world. A mail message may contain not only text but also images, audio, and video data. The person who sends the mail is called the sender, and the person who receives it is called the recipient. It is just like the postal mail service.
Components of an E-Mail System: The basic components of an email system are the User Agent (UA), Message Transfer Agent (MTA), Mailbox, and Spool file. These are explained below.
1. User Agent (UA): The UA is normally a program used to send and receive mail. It is sometimes called a mail reader. It accepts a variety of commands for composing, receiving, and replying to messages, as well as for manipulating mailboxes.
2. Message Transfer Agent (MTA): The MTA is actually responsible for the transfer of mail from one system to another. To send mail, a system must have a client MTA and a server MTA. It transfers mail to the mailboxes of recipients if they are on the same machine, and it delivers mail to a peer MTA if the destination mailbox is on another machine. The delivery from one MTA to another MTA is done by the Simple Mail Transfer Protocol.
3. Mailbox: It is a file on the local hard drive that collects mails. Delivered mails are present in this file. The user can read or delete them according to his/her requirements. To use the e-mail system, each user must have a mailbox. Access to the mailbox is restricted to the owner of the mailbox.
4. Spool file: This file contains mails that are to be sent. The user agent appends outgoing mails to this file using SMTP. The MTA extracts pending mail from the spool file for delivery. E-mail also allows one name, an alias, to represent several different e-mail addresses; this is known as a mailing list. Whenever a user has to send a message, the system checks the recipient's name against the alias database. If a mailing list is present for the defined alias, separate messages, one for each entry in the list, must be prepared and handed to the MTA. If there is no such mailing list for the defined alias, the name itself becomes the destination address and a single message is delivered to the mail transfer entity.
Services provided by E-mail system :
Composition - Composition refers to the process of creating messages and answers. Any kind of text editor can be used for composition.
Transfer - Transfer means the sending procedure of mail, i.e. from the sender to the recipient.
Reporting - Reporting refers to the confirmation of delivery of mail. It helps the user check whether their mail was delivered, lost, or rejected.
Displaying - It refers to presenting the mail in a form that is understandable by the user.
Disposition - This step concerns what the recipient will do after receiving the mail, i.e. save the mail, delete it before reading, or delete it after reading.
Advantages and Disadvantages:
Advantages of email:
1. Convenient and fast communication with individuals or groups globally.
2. Easy to store and search for past messages.
3. Ability to send and receive attachments such as documents, images, and
videos.
4. Cost-effective compared to traditional mail and fax.
5. Available 24/7.
Disadvantages of email:
1. Risk of spam and phishing attacks.
2. Overwhelming amount of emails can lead to information overload.
3. Can lead to decreased face-to-face communication and loss of personal
touch.
4. Potential for miscommunication due to lack of tone and body language in
written messages.
5. Technical issues, such as server outages, can disrupt email service.
6. It is important to use email responsibly and effectively, for example, by
keeping the subject line clear and concise, using proper etiquette, and
protecting against security threats.
Quick Analogy:
Concept | Analogy | OSI Layer
NSAP | Address of a house | Network Layer
TSAP | Room number in the house | Transport Layer
Transport Conn. | Phone call between two rooms | Logical connection (L4)
OR
Q.5 (a) Compare Virtual-Circuit and Datagram
Networks. 07
Difference Between Virtual Circuits and Datagram Networks
Criteria | Virtual Circuit Networks | Datagram Networks
Connection Establishment | Prior to data transmission, a connection is established between sender and receiver. | No connection setup is required.
Congestion Control | Uses network-assisted congestion control, where routers monitor network conditions and may drop packets or send congestion signals to the sender. | Uses end-to-end congestion control, where the sender adjusts its rate of transmission based on feedback from the network.
Error Control | Provides reliable delivery of packets by detecting and retransmitting lost or corrupted packets. | Provides unreliable delivery of packets and does not guarantee delivery or correctness.
Conclusion
Another term for virtual circuits is connection-oriented switching. Virtual
circuit switching establishes a predetermined path before a message is sent.
The path in virtual circuits is called a virtual circuit because it seems to the
user to be a dedicated physical circuit.
In datagram networks, sometimes referred to as packet-switching
technology, each packet—also known as a datagram—is regarded as an
autonomous entity. The switch uses the destination information included in
each packet to guide the packet to its intended location.
Reserving resources is not necessary in Datagram Networks since there isn't
a specific channel for connection sessions. Packets now have a header
containing all of the data intended for the destination.
Datagram networks use first-come, first-serve (FCFS) scheduling to manage
resource distribution.
✅ Detailed Explanation:
Computers communicate over the internet using IP addresses (e.g.,
192.168.1.1).
Humans use domain names (e.g., example.com) because they are easier to
remember.
The Name Server is a server that helps resolve (convert) a domain name
into its corresponding IP address.
✅ How It Works:
1. A user enters a domain name in the browser.
2. The request is sent to a recursive resolver (usually provided by the ISP).
3. The resolver queries:
o Root name servers → tell which TLD server to contact.
o TLD (Top-Level Domain) name servers → e.g., for .com, .org.
o Authoritative name server → stores the actual IP address for the
domain.
4. The IP address is returned to the user's browser.
5. The browser connects to the server using the IP address to load the website.
✅ Types of Name Servers:
Type | Role
Root Name Server | Points to the TLD name servers (.com, .org, etc.)
TLD Name Server | Points to the authoritative name servers of specific domain names
Authoritative Name Server | Holds actual DNS records like A, AAAA, MX for a domain
✅ Important Terms:
DNS – Domain Name System, the protocol that uses name servers.
IP Address – Unique address of a device on the internet.
Domain Name – Human-readable name mapped to an IP.
✅ Example:
For www.google.com:
1. The resolver asks the root name server → gets .com TLD server.
2. Asks .com TLD server → gets Google's authoritative name server.
3. Asks the authoritative server → gets IP address like 142.250.195.36.
4. Browser uses this IP to open the site.
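As a small sketch of this final step, the program below asks the local resolver library for the addresses of www.google.com and prints whatever A/AAAA records come back; the root/TLD/authoritative walk described above is carried out by the recursive resolver on the program's behalf.

/* Sketch of a DNS lookup from an application: getaddrinfo() hands the query
   to the stub/recursive resolver and returns the resolved addresses. */
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;            /* accept both IPv4 and IPv6 answers */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("www.google.com", NULL, &hints, &res) != 0) return 1;

    char ip[INET6_ADDRSTRLEN];
    for (p = res; p != NULL; p = p->ai_next) {
        void *addr = (p->ai_family == AF_INET)
            ? (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
            : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        inet_ntop(p->ai_family, addr, ip, sizeof(ip));
        printf("www.google.com -> %s\n", ip);  /* e.g. 142.250.195.36 */
    }
    freeaddrinfo(res);
    return 0;
}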