Updated CN Notes Unit 2+3+4+5
NOTES
Unit-2 : TRANSMISSION MEDIA
1. GUIDED MEDIA
Guided media is a type of transmission media otherwise known as wired transmission. It is also termed bounded transmission media, because the signal is bound to a specific physical path in the communication network. In guided media, the transmitted signal is confined and directed along a fixed, narrow channel implemented with physical wired connections. One notable property of guided media is its high transmission speed. Other reasons that make users choose guided media over unguided media are the security provided during transmission and the ability to restrict network coverage to a smaller geographical area.
The Guided Media transmission is further classified into three different types based on the
type of connecting material used for creating the network. They are as follows:
Twisted Pair Cables
Fiber Optics
Coaxial Cables
a. TWISTED PAIR CABLE: A twisted pair cable is formed by twisting two separately insulated conductors around each other to form a single cable. The insulation allows each conductor to carry its own signal, and the twisting reduces interference between the two. The twisted pair is then placed inside a protective outer jacket for ease of handling. Twisted pair cables are available in two different forms: shielded and unshielded.
Shielded Twisted Pair Cable: Shielded cables are transmission media with an additional metallic casing that blocks external interference during the transmission process. These cables are known for their high performance: the shielding limits signal crossings (crosstalk) and supports faster transmission rates. A typical application of the shielded twisted pair cable is the telephone line used in homes and offices. Like any other medium, shielded twisted pair cables have their own drawbacks: they are difficult to install, bulky, and more expensive than other cables.
Unshielded Twisted Pair Cable: This type of cable does not have the casing, as the name says, and its qualities are largely the opposite of the shielded type. These cables are less expensive, easier to install, and support fast transmission. However, they let in outside interference, which lowers their performance.
b. OPTICAL FIBRE CABLE: Optical fibre cables are made of glass (or plastic) and use light signals for transmission. An optical fibre is a flexible glass or plastic fibre that can carry light from one end to the other. The principle of reflection is used to carry the light signal through the cable. Optical fibre is known for carrying large volumes of data with higher bandwidth and far less electromagnetic interference during transmission. Since the material does not corrode and is very light, these cables are preferred over twisted pair cables in many cases. A few of the disadvantages are the complications in installation and maintenance and a cost higher than other types of transmission media.
c. COAXIAL CABLE: A coaxial cable has an outer plastic jacket and two conductors that share the same axis: an inner conductor surrounded by an outer conducting shield, each wrapped in its own insulating layer. It is used for transmitting data either over a single dedicated channel or over a single cable split into several frequency bands; these are referred to as baseband mode and broadband mode, respectively. A well-known application of this type of cable is cable television distribution to homes. A few of its advantages are a good bandwidth range, simple installation and maintenance, and a lower cost than some other cable types. A drawback is that a coaxial cable often forms a single-cable network, so if the cable fails, the whole network is disrupted.
2. UNGUIDED MEDIA
As the name says, Unguided Media is not a guided media, which simply means that the
network created using this type of transmission media cannot be bound to a certain physical
plan. It can be defined as a wireless transmission media with no physical medium to provide
the connection to the nodes or servers in the network. The electromagnetic signal waves are
transmitted in the air across a larger geographical area, and so it is less secure than the guided
media. This type of transmission media is further classified into three types with respect to the
signals used for the transmission.
a. RADIO WAVES: Radio waves are the simplest form of transmission signals and do not involve any complicated steps to create and transmit. These signals generally range between 3 kHz and 1 GHz in frequency, and they can be AM or FM signals. The main applications of this transmission media are cordless phones for domestic or official use and the radio devices used in mass media communication. Radio wave communication can use either terrestrial or satellite methods.
b. MICRO WAVES: Microwaves are a type of transmission media that uses antennas as the main element for sending and receiving data. The area covered by these signals is directly related to the height at which the antennas are placed. The signal range for this type of transmission is between 1 GHz and 300 GHz, and it is commonly used for mobile phone and television networks.
c. INFRARED: Infrared is another way of transmitting data within a small area; the waves cannot pass through obstacles and do not suffer much interference. These waves fall in the range of 300 GHz to 400 THz and can be used for wireless peripheral devices such as mice, remotes, keyboards, printers, etc.
TRANSMISSION IMPAIRMENT
Transmission impairment is the damage or degradation caused to a signal during transmission. Because of transmission impairment, the signal received at the receiver end may differ from the signal sent by the sender; this difference is called signal impairment. The main causes of impairment are:
1. Attenuation
2. Distortion
3. Noise
ATTENUATION
Attenuation can be defined as the loss of strength and energy of the signal. Whenever the signal travels through any transmission medium, it has to overcome the resistance of that medium, and in doing so it loses some of its energy.
You may have experienced that sometimes the wire (medium) carrying a signal gets a little warm. This is because some of the electrical energy in the signal is converted to heat while the signal overcomes the resistance of the medium.
To compensate for this loss of energy, amplifiers are installed at regular distances along the line to amplify the signal.
DISTORTION
Distortion can be defined as the change in the shape or form of the signal while it travels
through the transmission medium. Each signal component has its own propagation speed in
the transmission medium due to which it has its own delay in reaching the final destination.
If the delay is not exactly the same as it may also create a difference in the phase of the signal.
This means that the phase of the signal at the sender’s end is not the same as the phase of the
signal at the receiver’s end.
For example, observe the composite signal in the figure below, as you can see it has
components each of which is in a different phase. You can see that the composite signal at the
receiver end has a distorted shape.
NOISE
Noise can be defined as unwanted variation or fluctuation in the signal that may corrupt
the signal. Noise can be classified into various types such as impulse noise, crosstalk
thermal noise, induced noise.
Thermal noise can be defined as the impairment that is caused because of the random motion
of the electrons inside the wire when the signal travels through the wire. This creates an extra
signal inside the wire which is not originally sent by the sender. Crosstalk is the impairment
caused by one wire over another among which one is the sender wire and the other is the
5
receiver. The impulse noise is a sudden spike in the signal which means signals with high
energy which come from power lines lightning and so on.
NETWORK THROUGHPUT
Throughput is defined as “the amount of material or items passing through a system or
process.” Relating this to networking, the materials are referred to as “packets” while
the system they are passing through is a particular “link”, physical or virtual.
Furthermore, when discussing network throughput, the measurement is typically taken per unit
time, between two devices, and represented as Bits per second (bps), Kilobit per second
(Kbps), Megabits per second (Mbps), Gigabit per second (Gbps), and so on.
So for example, if a packet with a size of 100 bytes takes 1 second to flow from Computer_A
to Computer_B, we can say the throughput between the two devices is 800 bps.
Note: 1 byte is equal to 8 bits. Therefore, 100 bytes is 800 bits, resulting in the throughput
calculation of 800 bits per second.
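As a quick illustrative check of this arithmetic, a small Python snippet (hypothetical values, not part of the original notes) can compute throughput from the transfer size and the elapsed time:

# Throughput = bits transferred / time taken
def throughput_bps(num_bytes, seconds):
    return (num_bytes * 8) / seconds   # 1 byte = 8 bits

print(throughput_bps(100, 1))          # 800.0 bps, matching the example above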
BANDWIDTH VS. THROUGHPUT
Looking at the description above, one question that comes to mind is, “What is the difference
between bandwidth and throughput?” To explain this difference, let’s use an analogy of water
flowing through pipes.
Considering the two pipes shown above, which pipe do you think will pass more water through
it? The default answer is PIPE B because that is a fatter pipe. However, the real answer to the
question is that “it depends”.
If water is flowing at maximum capacity through both pipes, then PIPE B will carry more water at a particular time. But what if much more water is coming into PIPE A than PIPE B? Or what if there is debris in PIPE B that is restricting the flow of water inside the pipe?
In summary, we can conclude that in ideal conditions and at maximum capacity, PIPE B will
carry more water than PIPE A. However, any number of factors can cause more water to flow
through PIPE A per unit time.
Using the analogy above, Bandwidth can be compared to the fatness of the pipes (i.e. the
maximum and theoretical capacity of the pipe) while Throughput is the actual amount
of water that flows through per unit time. Therefore, even though bandwidth will set a limit
on throughput, throughput can be affected by a host of other factors.
FACTORS THAT AFFECT THROUGHPUT
THROUGHPUT refers to the rate at which data is successfully transmitted from one
point to another. Here are some basic factors that affect network throughput:
Bandwidth: The maximum rate at which data can be transmitted over the network link.
Network Congestion: High traffic loads can lead to congestion, causing delays and reducing
throughput.
Packet Loss: Loss of data packets due to errors or network issues can necessitate
retransmissions, affecting throughput.
Network Interface Card (NIC) Speed: The speed and efficiency of the network interface
hardware on each device can influence throughput.
Error Rates: Higher error rates can lead to retransmissions, which can decrease overall
throughput.
Now that we have seen the difference between bandwidth and throughput, let us take a detailed look at some of the factors that can affect the throughput on a network.
BCA: UNIT-3 TELEPHONY
Telephony is the field of technology involving the development, application, and deployment
of telecommunication services.
MULTIPLEXING
Multiplexing is a technique used in communications and data transmission to combine
multiple signals or data streams into one signal over a shared medium. This allows for more
efficient use of resources and improves overall system performance.
1. Time Division Multiplexing (TDM): In TDM, the shared channel is divided into time slots, and each signal is transmitted in its own slot in turn.
Example: TDM allows multiple telephone calls to share the same physical transmission line efficiently. Each call gets a dedicated time slot, ensuring that the data for each call is transmitted in its designated time frame without overlap.
2. Frequency Division Multiplexing (FDM): In FDM, the available bandwidth of a
communication channel is divided into separate frequency bands, each of which is used
to carry a different signal. This is often used in radio and television broadcasting, where
different channels are assigned different frequency bands.
Frequency Division Multiplexing (FDM) is a technique used in communications to
transmit multiple signals simultaneously over a single communication channel.
3. Wavelength Division Multiplexing (WDM): Similar to FDM, but used in optical fiber
communications. It involves multiplexing multiple optical signals on different
wavelengths (or channels) of laser light in a single fiber.
4. Code Division Multiplexing (CDM): This method uses unique codes to distinguish
between different signals that are transmitted simultaneously over the same channel.
CDM is used in some wireless communication systems, including some cellular
networks.
Multiplexing enhances the efficiency and capacity of communication systems by allowing
multiple signals to share the same transmission medium without interference.
SOME RELATED IMPORTANT CONCEPTS:
1. PROPAGATION SPEED AND PROPAGATION TIME
Propagation Speed: This refers to the speed at which a signal travels through a
medium. In fiber optic cables, this can be about 2/3 the speed of light (approximately
200,000 km/s), while in copper cables, it’s slower due to electrical resistance.
Propagation Time: This is the time it takes for a signal to travel from the sender to the
receiver. It can be calculated using the formula:
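Propagation Time = Distance / Propagation Speed
For example, assuming a 10,000 km fibre link with a propagation speed of about 200,000 km/s, the propagation time is 10,000 / 200,000 = 0.05 s, i.e. 50 ms (an illustrative calculation; the figures are not from the original notes).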
2. WAVELENGTH
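Wavelength is the distance a signal travels in the transmission medium during one period (one cycle) of the signal. It can be written as:
Wavelength = Propagation Speed x Period = Propagation Speed / Frequency
For example, assuming light propagates in a fibre at about 2 x 10^8 m/s and the signal frequency is 2 x 10^14 Hz, the wavelength is (2 x 10^8) / (2 x 10^14) = 10^-6 m, i.e. about 1 micrometre (an illustrative calculation; the values are not from the original notes).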
3. SHANNON CAPACITY ( C )
The Shannon Capacity is a measure of the maximum data rate that can be transmitted
over a communication channel without error, defined by the Shannon-Hartley theorem:
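C = B x log2(1 + S/N)
where C is the channel capacity in bits per second, B is the bandwidth of the channel in hertz, and S/N is the signal-to-noise ratio (as a power ratio, not in dB). For example, assuming a telephone line with a bandwidth of 3000 Hz and a signal-to-noise ratio of about 3162 (roughly 35 dB), the capacity is about 3000 x log2(3163) ≈ 34,880 bps, i.e. close to 35 kbps (an illustrative calculation, not from the original notes).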
4. COMPARISON OF MEDIA IN COMPUTER NETWORKS
Summary
In summary, understanding these concepts helps in designing and optimizing networks. The
choice of media affects the propagation speed, bandwidth, and overall network performance.
The Shannon Capacity offers a theoretical limit for data transmission, while propagation speed
and time are critical for assessing latency and response times in network communications.
ERROR is a condition when the receiver’s information does not match the sender’s. Digital
signals suffer from noise during transmission that can introduce errors in the binary bits
traveling from sender to receiver. That means a 0 bit may change to 1 or a 1 bit may change
to 0.
TYPES OF ERRORS:
SINGLE-BIT ERROR:
A single-bit error refers to a type of data transmission error that occurs when one bit (i.e., a
single binary digit) of a transmitted data unit is altered during transmission, resulting in an
incorrect or corrupted data unit.
(single bit error)
MULTIPLE-BIT ERROR:
A multiple-bit error is an error type that arises when more than one bit in a data transmission
is affected. Although multiple-bit errors are relatively rare when compared to single-bit errors,
they can still occur, particularly in high-noise or high-interference digital environments.
BURST ERROR:
When several consecutive bits are flipped mistakenly in digital transmission, it creates a burst
error. This error causes a sequence of consecutive incorrect values.
(Burst error)
ERROR DETECTION METHODS:
To detect errors, a common technique is to introduce redundancy bits that provide additional
information. Various techniques for error detection include:
Simple-bit parity is a simple error detection method that involves adding an extra bit to a data transmission. It works as follows: the sender counts the number of 1's in the data unit and appends a parity bit chosen so that the total number of 1's, including the parity bit, is even. The receiver counts the 1's in the received code word; if the count is odd, an error has been detected.
This scheme makes the total number of 1's even, that is why it is called even parity checking.
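A minimal Python sketch of even-parity generation and checking (illustrative only, not part of the original notes):

# Even parity: append a bit so the total number of 1s is even.
def add_even_parity(bits):                 # bits is a string such as "1100001"
    parity = bits.count("1") % 2           # 1 if the count of 1s is odd
    return bits + str(parity)

def check_even_parity(codeword):
    return codeword.count("1") % 2 == 0    # True means no error detected

word = add_even_parity("1100001")          # -> "11000011"
print(word, check_even_parity(word))       # 11000011 True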
Fast Error Detection: The process of calculating and checking the parity bit is quick,
which allows for rapid error detection without significant delay in data processing or
communication.
Single-Bit Error Detection: It can effectively detect single-bit errors within a data unit,
providing a basic level of error detection for relatively low-error environments.
A single parity check is not able to detect an even number of bit errors.
For example, suppose the data to be transmitted is 101010. The code word transmitted to the receiver is 1010101 (using even parity).
Let's assume that during transmission two bits of the code word flip, giving 1111101.
On receiving the code word, the receiver finds the number of ones to be even and therefore assumes there is no error, which is a wrong conclusion.
Checksum error detection is a method used to identify errors in transmitted data. The process involves dividing the data into equally sized segments and adding them using 1's complement arithmetic. The sum is then complemented to form the checksum, which is sent along with the data to the receiver. At the receiver's end, the same process is repeated over the data and the checksum; if the complemented result is all zeros, the data is assumed to be correct.
Example – If the data unit to be transmitted is 10101001 00111001, the following procedure
is used at Sender site and Receiver site.
Sender Site:
10101001   subunit 1
00111001   subunit 2
--------
11100010   sum (using 1's complement)
00011101   checksum (complement of sum)
Data transmitted to Receiver is: 10101001 00111001 00011101
Receiver Site:
10101001   subunit 1
00111001   subunit 2
00011101   checksum
--------
11111111   sum
00000000   sum's complement
Result is zero, it means no error.
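The same 1's complement arithmetic can be sketched in Python as follows (an illustrative implementation assuming 8-bit segments, not part of the original notes):

# 1's complement checksum over 8-bit segments (illustrative sketch)
def ones_complement_sum(segments, bits=8):
    total = 0
    for seg in segments:
        total += seg
        if total >= (1 << bits):                   # wrap the carry back into the sum
            total = (total & ((1 << bits) - 1)) + 1
    return total

def checksum(segments, bits=8):
    return (~ones_complement_sum(segments, bits)) & ((1 << bits) - 1)

data = [0b10101001, 0b00111001]
c = checksum(data)                                 # 0b00011101, as in the example
# Receiver: the sum of the data plus the checksum complements to zero if no error
print(bin(c), checksum(data + [c]) == 0)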
Unlike the checksum scheme, which is based on addition, CRC is based on binary division.
In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number.
At the destination, the incoming data unit is divided by the same number. If at this step there
is no remainder, the data unit is assumed to be correct and is therefore accepted.
A remainder indicates that the data unit has been damaged in transit and therefore must be
rejected.
CRC WORKING
Example: Let the data to be sent be 1010000 and let the divisor, written as a polynomial, be x^3 + 1 (binary 1001). The CRC method is illustrated below.
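The modulo-2 (XOR) division used by CRC can be sketched in Python as follows (an illustrative implementation, not part of the original notes); for the data 1010000 and divisor 1001 it yields the remainder 011, so the transmitted codeword is 1010000011:

# CRC via modulo-2 (XOR) division (illustrative sketch)
def crc_remainder(data, divisor):
    data = list(data + "0" * (len(divisor) - 1))     # append zero bits, one less than divisor length
    for i in range(len(data) - len(divisor) + 1):
        if data[i] == "1":                           # divide only when the leading bit is 1
            for j in range(len(divisor)):
                data[i + j] = str(int(data[i + j]) ^ int(divisor[j]))
    return "".join(data[-(len(divisor) - 1):])       # the last (n-1) bits are the remainder

crc = crc_remainder("1010000", "1001")               # divisor 1001 represents x^3 + 1
print(crc)                                           # 011 -> codeword 1010000011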
SWITCHING TECHNIQUES
In large networks, there can be multiple paths from sender to receiver. The switching technique
will decide the best route for data transmission.
Switching technique is used to connect the systems for making one-to-one communication.
CIRCUIT SWITCHING:
o Circuit switching is a switching technique in which a dedicated path is established between the sender and the receiver before any data is transferred.
o Once the dedicated path is established, the only delay occurs in the speed of data transmission.
o It takes a long time to establish a connection approx 10 seconds during which no data
can be transmitted.
o It is more expensive than other switching techniques as a dedicated path is required for
each connection.
o It is inefficient to use because once the path is established and no data is transferred,
then the capacity of the path is wasted.
o In this case, the connection is dedicated therefore no other data can be transferred even
if the channel is free.
MESSAGE SWITCHING:
o Message switching is a store-and-forward technique: the whole message is sent as a single unit, stored temporarily at each intermediate node, and then forwarded towards the destination.
o Data channels are shared among the communicating devices, which improves the efficiency of using the available bandwidth.
o Traffic congestion can be reduced because the message is temporarily stored in the
nodes.
o Message priority can be used to manage the network.
o The size of the message which is sent over the network can be varied. Therefore, it
supports the data of unlimited size.
Disadvantages Of Message Switching
o The message switches must be equipped with sufficient storage to enable them to store
the messages until the message is forwarded.
o The Long delay can occur due to the storing and forwarding facility provided by the
message switching technique.
PACKET SWITCHING:
o The packet switching is a switching technique in which the message is sent in one go,
but it is divided into smaller pieces, and they are sent individually.
o The message splits into smaller pieces known as packets and packets are given a unique
number to identify their order at the receiving end.
o Every packet contains some information in its headers such as source address,
destination address and sequence number.
o Packets will travel across the network, taking the shortest path as possible.
o All the packets are reassembled at the receiving end in correct order.
o If any packet is missing or corrupted, then the message will be sent to resend the
message.
o If the correct order of the packets is reached, then the acknowledgment message will be
sent.
DATA LINK CONTROL PROTOCOLS
Data Link Control Protocols are responsible for ensuring reliable communication over
a network by managing how data is sent, received, and acknowledged between devices.
Here’s an overview of the various components and protocols involved:
1. Line Discipline
Definition: Line discipline refers to the process of managing when devices can
communicate over a shared communication link. It defines which device can send and
receive data and ensures that there is no conflict or data loss.
Types:
o ENQ/ACK (Enquiry/Acknowledge): This method is used in half-duplex
communication to determine which device gets to send or receive data.
o Polling/Selecting: In this, a central controller (primary) sends a poll to each
device (secondary) to check if it has data to send (Polling), or selects a specific
device for communication (Selecting).
2. Flow Control
Definition: Flow control manages the rate at which data is transmitted between sender
and receiver to prevent overwhelming the receiver’s buffer.
Methods:
o Stop-and-Wait: The sender transmits a frame and waits for an acknowledgment
before sending the next one.
o Sliding Window: Multiple frames can be sent before needing an
acknowledgment, but a window size limits the number of unacknowledged
frames in transmission.
3. Error Control
Definition: Error control ensures that frames lost or damaged in transit are detected and retransmitted, typically through acknowledgements, timers and retransmission (the ARQ mechanisms described later in this unit).
Link Access Procedure (LAP): The data link layer protocol used for establishing communication over a point-to-point or multi-point link.
Types:
o LAPB (Link Access Procedure, Balanced): A protocol that uses balanced two-
way communication, suitable for point-to-point links.
o LAPD (Link Access Procedure for the D channel): Used in ISDN (Integrated
Services Digital Network) communication.
o LAPM (Link Access Procedure for Modems): Error control protocol for
modems.
These protocols and procedures work together to ensure the reliable transmission of data across
a network, managing communication, preventing data loss, and correcting errors when
necessary.
AUTOMATIC REPEAT REQUEST (ARQ) [ Error Control ]
ARQ protocols are used to provide reliable transmission to the upper layers over an unreliable underlying service. They are often used in Global System for Mobile (GSM) communication.
Working Principle
In these protocols, the receiver sends an acknowledgement message back to the sender if it
receives a frame correctly. If the sender does not receive the acknowledgement of a transmitted
frame before a specified period of time, i.e. a timeout occurs, the sender understands that the
frame has been corrupted or lost during transit. So, the sender retransmits the frame. This
process is repeated until the correct frame is transmitted.
(i) Stop – and – Wait ARQ − Stop-and-Wait ARQ provides unidirectional data transmission with flow control and error control mechanisms, appropriate for noisy channels. The sender keeps a copy of the sent frame and waits a finite time for a positive acknowledgement from the receiver. If the timer expires, the frame is retransmitted; if a positive acknowledgement is received, the next frame is sent. (A simplified sender-side sketch is given after this list.)
(ii) Go – Back – N ARQ − Go – Back – N ARQ provides for sending
multiple frames before receiving the acknowledgement for the first
frame. It uses the concept of sliding window, and so is also called
sliding window protocol. The frames are sequentially numbered and a
finite number of frames are sent. If the acknowledgement of a frame is
not received within the time period, all frames starting from that frame
are retransmitted.
(iii) Selective Repeat ARQ − This protocol also provides for sending
multiple frames before receiving the acknowledgement for the first
frame. However, here only the erroneous or lost frames are
retransmitted, while the good frames are received and buffered.
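Returning to Stop-and-Wait ARQ (item (i) above), a very simplified sender loop can be sketched as follows; send_frame, wait_for_ack and the timeout value are assumed helper names used only for illustration, not real library calls:

# Stop-and-Wait ARQ sender (illustrative pseudocode in Python form)
def stop_and_wait_send(frames, send_frame, wait_for_ack, timeout=2.0):
    seq = 0
    for frame in frames:
        while True:
            send_frame(frame, seq)            # keep a copy and transmit the frame
            ack = wait_for_ack(timeout)       # None means the timer expired
            if ack == seq:                    # positive acknowledgement received
                break                         # move on to the next frame
        seq = 1 - seq                         # alternate 0/1 sequence numbers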
POINT-TO-POINT PROTOCOL (PPP)
Point-to-Point Protocol (PPP) is a data link layer protocol used to establish a direct
connection between two networking devices. It’s commonly used for connecting to the
internet, like when your computer talks to an internet provider’s server.
PPP involves several elements to manage how data is transmitted, controlled, and secured:
1. TRANSMISSION STATES
These states define what happens when two devices are communicating:
o Establishing: Devices start the connection by exchanging settings, like how fast
data should travel.
o Authenticating: One or both devices may ask for a username and password.
o Networking: Once authenticated, devices send and receive actual data.
o Terminating: When communication is done, the connection is closed.
Example: When you dial up to the internet (in old-school dial-up connections), the connection
goes through these states from establishing to terminating.
2. PPP Layers
PPP is divided into three layers, each handling different aspects of the connection:
Link Control Protocol (LCP): Handles the setup, configuration, and testing of the
connection.
Authentication: Ensures the correct users are communicating. Protocols like PAP
(Password Authentication Protocol) and CHAP (Challenge Handshake Authentication
Protocol) are used.
Network Control Protocol (NCP): Configures the network layer protocols like IP
addresses.
3. LINK CONTROL PROTOCOL (LCP)
LCP establishes the basic connection between two devices, checks for errors, negotiates link settings (like maximum frame size), and handles any problems that occur during the connection.
Example: LCP sets up the "rules" for how data is transmitted, like agreeing on a speed for the
internet connection between your modem and the service provider.
4. AUTHENTICATION
PPP can verify the identity of the connecting peer using PAP (Password Authentication Protocol) or CHAP (Challenge Handshake Authentication Protocol).
Example: When you log in to your Wi-Fi network, PAP or CHAP might be used behind the scenes to verify your credentials.
5. NETWORK CONTROL PROTOCOL (NCP)
NCP configures and enables network layer protocols like IP (Internet Protocol) or IPX. It allows the devices to agree on the network protocol they will use to send data.
Example: NCP could configure your device's IP address so it can communicate on the internet.
INTEGRATED SERVICES DIGITAL NETWORK (ISDN)
ISDN is a set of communication standards for transmitting voice, video, and data over traditional telephone lines. It is a precursor to modern broadband internet.
1. ISDN SERVICES
o Basic Rate Interface (BRI): Designed for home or small business use. It
provides two data channels (64 kbps each) and one signaling channel (16 kbps).
o Primary Rate Interface (PRI): Used by larger businesses. It provides 23 data
channels (in the US) or 30 data channels (in Europe), each at 64 kbps, plus one
signaling channel.
Example: BRI could be used in small offices to allow simultaneous use of voice calls and
internet access.
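For reference, a BRI therefore provides 2 x 64 kbps + 16 kbps = 144 kbps in total, while a US PRI carries 23 x 64 kbps of user data plus a 64 kbps signalling channel, i.e. 24 x 64 kbps = 1.536 Mbps.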
2. HISTORICAL OUTLINE
ISDN was introduced in the 1980s as a way to improve telephone systems by allowing
digital transmission over existing copper phone lines.
While popular in the 1990s, ISDN has mostly been replaced by faster technologies like
DSL and fiber optic connections.
3. SUBSCRIBER’S ACCESS
BRI Access: For homes and small offices, ISDN BRI provides access to voice and data
services over the same phone line.
PRI Access: For large companies, ISDN PRI provides many channels for voice and
data, supporting multiple simultaneous users.
Example: ISDN allows you to make a phone call while surfing the internet over the same line,
unlike old dial-up, where you had to choose one or the other.
4. ISDN LAYERS
ISDN divides the communication process into layers, similar to how the internet works:
Physical Layer: This is where the physical connection happens, like the copper phone
line.
Data Link Layer: Manages the flow of data over the physical connection (like how to
package and send data).
Network Layer: Responsible for the routing and transmission of data between the
phone company and your device.
5. BROADBAND ISDN (B-ISDN)
Broadband ISDN was intended to provide faster digital communication, using fibre-optic cables to transmit huge amounts of data, including video and multimedia. However, it was never widely adopted and was overtaken by more modern technologies like DSL and cable broadband.
Summary:
PPP is a protocol used for direct communication between two devices over a point-to-
point connection, handling everything from connection setup to data transmission.
ISDN is a digital communication standard that allows voice, video, and data
transmission over phone lines, but it has been largely replaced by faster internet
technologies.
UNIT-IV CN Notes
NETWORK DEVICES AND THE NETWORK LAYER:
In the world of networking, several devices and protocols work together behind the scenes to
ensure data moves smoothly between computers, devices, and even across the globe. From
home Wi-Fi to complex corporate networks, these devices and protocols play crucial roles.
Let’s explore some of the most essential network devices and the network layer itself, using
real-world analogies and examples to make the concepts easier to understand.
1. REPEATERS
A repeater is a simple yet powerful device that amplifies a network signal. Imagine you're
talking to a friend across a long field. If you’re too far apart, your voice fades, and your friend
can’t hear you. A repeater acts like a person standing in the middle, hearing your message and
shouting it louder so it reaches the other end.
At home, you may notice that your Wi-Fi signal is strong in the living room but weak in the
kitchen or upstairs. A Wi-Fi repeater (or extender) can boost the signal to ensure you get a
strong connection in those far corners. It "repeats" the signal from the main router to cover
more ground.
2. BRIDGES
A bridge connects two separate network segments, allowing them to communicate as if they
are part of one larger network. Imagine two neighboring towns that are divided by a river. To
allow people to travel easily between the towns, you build a bridge. In networking, bridges
reduce network congestion by managing data traffic and filtering out unnecessary data.
Picture a company with two departments: marketing and sales. Each department has its own
network. A bridge can connect these networks so marketing and sales teams can share files
without overwhelming the company’s entire network. This keeps things efficient, allowing
only necessary data to pass between the two.
3. GATEWAYS: Translators of the Network
A gateway is like a translator between two people who speak different languages. If one
network uses a different set of communication rules (protocols) than another, a gateway
converts the data so both can understand each other. It’s the bridge between different
"languages" of networks.
When your computer sends data to the internet, it uses one set of rules, while the internet may
use another. A gateway on your home router translates these rules, allowing your computer to
communicate with websites, apps, and other services on the internet.
4. ROUTERS
A router is like a GPS system for your data. When you send information across a network
(whether it’s an email, a file, or a video stream), the router decides the best path for the data
to travel so it reaches its destination quickly and efficiently. Routers connect different
networks, and they’re responsible for directing traffic between them.
In your home, a router connects all your devices (phones, laptops, smart TVs) to the internet.
When you visit a website, the router figures out the fastest route for your data to travel across
the internet and back to you, ensuring a smooth browsing experience.
The network layer is one of the most important layers in the OSI model (which organizes
network communication into layers). It’s responsible for deciding how data moves from one
point to another across networks, handling addressing, routing, and ensuring that data is
delivered to the correct destination.
The network layer faces several challenges that need to be addressed for efficient
communication:
Addressing: Just like houses need addresses to receive mail, devices on a network need
unique addresses (such as IP addresses) to ensure data reaches the right place.
Routing: Deciding the best path for data to take across the network.
Packet Forwarding: Ensuring that data moves smoothly from one device (or router) to
the next.
Sending data across the network is similar to mailing a letter. The addressing system (like IP
addresses) ensures the letter is delivered to the right house. Routing is like deciding whether
the letter should go through the main post office or take a more direct route to reach its
destination.
Routing algorithms determine the optimal path for data to travel between devices on different
networks. These algorithms consider factors like network traffic, distance, and link quality to
find the best route.
Static Routing: Routes are manually set up by network administrators. These routes
don’t change unless manually reconfigured.
Dynamic Routing: Routes are automatically updated in real-time based on network
conditions.
Example: Navigation
Static routing is like using an old-fashioned paper map, where the path is set and doesn’t
change. Dynamic routing, on the other hand, is like using a GPS system that adjusts your route
based on traffic updates or road closures.
STATIC ROUTING                                                  DYNAMIC ROUTING
In static routing, routes are user-defined.                     In dynamic routing, routes are updated according to the topology.
Static routing does not use complex routing algorithms.         Dynamic routing uses complex routing algorithms.
In static routing, failure of the link disrupts the rerouting.  In dynamic routing, failure of the link does not interrupt the rerouting.
Just like roads can become congested with too many cars, networks can become overwhelmed
with too much data. Congestion control algorithms help prevent this by managing the flow
of data and avoiding bottlenecks.
Traffic Shaping: Controls the rate at which data is sent into the network to avoid
overwhelming the system.
Load Balancing: Distributes data traffic across multiple routes to prevent any one path
from becoming overloaded.
On a busy highway, traffic management systems use ramp meters to control how many cars
can enter the road at once. Similarly, traffic shaping controls how much data enters the
network. Load balancing is like directing cars to different lanes to ensure no single lane gets
too crowded.
The Leaky Bucket Algorithm is a simple, effective way to control the rate of data transmission
in a network and prevent congestion. It works by smoothing out bursts of data traffic and
ensuring a steady flow, regardless of how quickly data is produced by the sender.
How It Works:
Imagine you have a bucket with a small hole at the bottom (the "leak"). Data packets are like
water poured into the bucket. No matter how fast the water is poured in (data being
transmitted), it only drips out through the hole at a fixed, steady rate.
Input: Data packets (like water) enter the bucket at any rate, potentially in bursts.
Output: Data (water) leaves the bucket at a constant, controlled rate through the "leak,"
preventing overwhelming the network.
Overflow: If the bucket (buffer) gets too full, meaning more data is being sent than can be
handled, the excess data is discarded, similar to water spilling over.
Key Concepts:
Fixed Output Rate: The algorithm ensures data is transmitted at a constant rate, preventing
any sudden surge in traffic that could overwhelm the network.
Overflow and Data Loss: If data comes in faster than it can be processed (more than the
bucket can hold), some data is lost, simulating overflow. This loss prevents congestion from
building up.
Buffering: The "bucket" acts as a buffer, holding incoming data temporarily and releasing it
in a controlled manner, even if it arrives in bursts.
Example:
Consider a network server receiving multiple file uploads from users. Some users might upload
large files quickly (bursting traffic). Without control, this burst could overwhelm the network
and slow down other operations. Using the Leaky Bucket algorithm, the server can buffer the
incoming data and release it to the network at a constant, manageable rate. If the incoming
traffic exceeds what the buffer can handle, the excess data is discarded, preventing congestion.
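A minimal Python sketch of this behaviour (the bucket capacity and leak rate below are arbitrary illustrative values, not from the original notes):

# Leaky bucket: arrivals go into a finite buffer, output drains at a fixed rate.
class LeakyBucket:
    def __init__(self, capacity=10, leak_rate=2):    # packets held / packets sent per tick
        self.capacity, self.leak_rate, self.queue = capacity, leak_rate, 0

    def arrive(self, packets):
        accepted = min(packets, self.capacity - self.queue)
        self.queue += accepted
        return packets - accepted                     # packets discarded (overflow)

    def tick(self):
        sent = min(self.queue, self.leak_rate)        # constant output rate
        self.queue -= sent
        return sent

bucket = LeakyBucket()
print(bucket.arrive(15))   # 5 packets dropped: the burst exceeds the bucket capacity
print(bucket.tick())       # 2 packets leave this tick, regardless of the burst size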
Advantages:
Smooths Traffic: It converts bursty traffic into a smooth, steady stream, which is easier for the
network to handle.
Simple and Predictable: Easy to implement and understand, as it controls traffic flow with a
single, fixed rate.
Prevents Congestion: By controlling the outflow rate and discarding excess data, the
algorithm avoids overloading the network.
Real-Life Analogy:
Imagine filling a cup with water from a fast-flowing tap. If you fill it too quickly, the water
will overflow. But if there’s a small hole at the bottom of the cup, the water will flow out
slowly and steadily. Even if you pour more water in, the output rate stays constant. If you pour
too much at once, the cup will overflow, and excess water (data) will be lost.
In essence, the Leaky Bucket Algorithm ensures that data flows steadily and predictably,
preventing network congestion while efficiently managing bursts of traffic.
Quality of Service (QoS) ensures that important data gets priority on the network. For example, a video call requires real-time communication, so QoS makes sure it gets more bandwidth than non-urgent tasks like downloading a file.
Imagine a VIP at a concert. QoS is like giving the VIP special access to the front row, while
everyone else has to wait in line. The same concept applies to network traffic, where critical
data gets priority over less important traffic.
Traffic Prioritization: Certain types of data (e.g., voice, video) are given priority
over others (e.g., emails, file downloads).
Bandwidth Allocation: Critical services are allocated more bandwidth to ensure they
function smoothly even during network congestion.
Traffic Shaping: Network traffic is shaped to delay less important packets, so high-
priority data is transmitted first.
5. INTERNETWORKING: Connecting Different Networks
Internetworking refers to the process of connecting multiple different networks to create one
large, seamless network. This allows devices from different networks (like your home
network, the office network, and the internet) to communicate with each other.
Internetworking is like global trade between countries. Even though each country has its own
rules and systems, they work together through agreements to exchange goods. Similarly,
internetworking enables different networks to exchange data.
On the internet, the network layer's most critical job is managing IP addresses and routing data between them. Every device on the internet has a unique IP address, and routers use these addresses to direct traffic between devices. There are generally two types of IP addresses, which are given below:
IPv4: The current standard, using 32-bit addresses. It’s running out of unique addresses
because of the growing number of devices.
IPv6: The newer standard, using 128-bit addresses, allows for many more devices to be
connected.
Example: Phone Numbers
IP addresses are like phone numbers. When you want to call someone, you dial their
unique number. Similarly, when you send data across the internet, you use the device’s
unique IP address.
IP ADDRESSES are divided into five classes: A, B, C, D, and E. Each class serves different purposes in
networking, primarily depending on the number of hosts that can be accommodated in the network and
whether it's used for unicast, multicast, or reserved purposes.
Class A:
Range: 1.0.0.0 to 126.255.255.255
First Octet Range: 1 – 126
Network/Host Bits: 8 bits for the network, 24 bits for the host.
Default Subnet Mask: 255.0.0.0
Purpose: Very large networks, such as large organizations and internet service providers.
Class B:
Range: 128.0.0.0 to 191.255.255.255
First Octet Range: 128 – 191
Network/Host Bits: 16 bits for the network, 16 bits for the host.
Default Subnet Mask: 255.255.0.0
Purpose: Medium-sized networks, such as universities or organizations.
Class C:
Class D (Multicast):
Class E (Experimental):
Here is a table outlining the different classes of IP addresses, their ranges, and an example
for each class:
IP Class   Range                            Network Size            Default Subnet Mask   Example
Class A    1.0.0.0 to 126.255.255.255       Very large networks     255.0.0.0             10.0.0.1
Class B    128.0.0.0 to 191.255.255.255     Medium-sized networks   255.255.0.0           172.16.0.1
Class C    192.0.0.0 to 223.255.255.255     Small networks          255.255.255.0         192.168.1.1
Class D    224.0.0.0 to 239.255.255.255     Multicast addressing    N/A                   224.0.0.1
Class E    240.0.0.0 to 255.255.255.255     Reserved for research   N/A                   N/A
Key Points:
Class A is used for very large organizations with many devices.
Class B is for mid-sized organizations.
Class C is commonly used in small organizations or home networks.
Class D is reserved for multicast groups.
Class E is reserved for experimental use and is not used for public purposes.
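As a small illustrative helper (not part of the original notes), the class of an IPv4 address can be determined from its first octet:

# Classify an IPv4 address by its first octet (classful addressing, illustrative only)
def ip_class(address):
    first = int(address.split(".")[0])
    if first <= 127:   return "A"     # 0 and 127 are reserved, but fall in the Class A range
    if first <= 191:   return "B"
    if first <= 223:   return "C"
    if first <= 239:   return "D"     # multicast
    return "E"                        # experimental / reserved

print(ip_class("172.16.0.1"))   # B
print(ip_class("192.168.1.1"))  # C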
In Summary:
Together, these devices and the network layer make our digital world possible, enabling
everything from simple file sharing to complex global communication.
BCA: UNIT-5: TRANSPORT AND UPPER LAYER IN OSI
The Transport layer is the layer-4 of the OSI reference model. The transport layer is mainly
responsible for the process-to-process delivery of the entire message. A process is basically an
application program that is running on the host.
The basic function of the Transport layer is to accept data from the layer above, split it up into
smaller units, pass these data units to the Network layer, and ensure that all the pieces arrive
correctly at the other end.
Furthermore, all this must be done efficiently and in a way that isolates the upper layers from
the inevitable changes in the hardware technology.
The Transport layer also determines what type of service to provide to the Session layer, and,
ultimately, to the users of the network. The most popular type of transport connection is
an error-free point-to-point channel that delivers messages or bytes in the order in which
they were sent.
The Transport layer is a true end-to-end layer, all the way from the source to the destination.
In other words, a program on the source machine carries on a conversation with a similar
program on the destination machine, using the message headers and control messages.
The transport layer also identifies errors like damaged packets, lost packets, and duplication
of packets, and provides sufficient techniques for error correction.
The protocols of the transport layer are implemented only in the end systems, not in the network routers.
There are many services provided by the protocols of the transport layer, such as multiplexing, demultiplexing, reliable data transfer and bandwidth guarantees.
This layer mainly provides the transparent transfer of data between end-users, also provides
the reliable transfer of data to the upper layers.
FUNCTIONS OF TRANSPORT LAYER
1. Service Point Addressing: Transport Layer header includes service point address which is
the port address. This layer gets the message to the correct process on the computer unlike
Network Layer, which gets each packet to the correct computer.
2. Segmentation and Reassembling: A message is divided into transmittable segments; each segment contains a sequence number, which enables this layer to reassemble the message. The message is reassembled correctly upon arrival at the destination, and packets lost in transmission can be identified and retransmitted.
3. Connection Control: It includes 2 types:
Connectionless Transport Layer: Each segment is considered as an independent packet
and delivered to the transport layer at the destination machine.
Connection-Oriented Transport Layer: Before delivering packets, the connection is made
with the transport layer at the destination machine.
4. Flow Control: In this layer, flow control is performed end to end rather than across a single
link.
5. Error Control: Error Control is performed end to end in this layer to ensure that the complete
message arrives at the receiving transport layer without any error. Error Correction is done
through retransmission.
Let us now understand the process-to-process delivery by the transport layer.
Illustration of Process-to-Process Delivery
Thus the Transport layer is mainly responsible for the delivery of messages from one process
to another.
CONNECTION MANAGEMENT In Transport Layer
The connection is established in TCP using the three-way handshake, as discussed earlier. One side, say the server, passively waits for an incoming connection by executing the LISTEN and ACCEPT primitives, either specifying a particular peer or nobody in particular.
The other side, say the client, executes a CONNECT primitive, specifying the IP address and port to which it wants to connect, the maximum TCP segment size it is willing to accept, and optionally some user data (for example, a password).
The CONNECT primitive transmits a TCP segment with the SYN bit on and the ACK bit off
and waits for a response.
The sequence of TCP segments sent in the typical case is as shown in the figure below.
When the segment sent by Host-1 reaches the destination, i.e. Host-2, the receiving server checks to see whether there is a process that has done a LISTEN on the port given in the destination port field. If not, it sends a reply with the RST bit on to refuse the connection. Otherwise, it directs the TCP segment to the listening process, which can accept or reject the connection (for example, if it does not like the look of the client).
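These primitives correspond closely to the Berkeley socket calls; a minimal Python sketch (the address and port are hypothetical, and the blocking ACCEPT/CONNECT calls are left commented out so the fragment runs on its own):

# Server side: passively wait for an incoming connection (LISTEN and ACCEPT)
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5000))     # hypothetical local address and port
server.listen(1)                      # LISTEN primitive
# conn, addr = server.accept()        # ACCEPT blocks until a SYN arrives

# Client side: actively open the connection (CONNECT sends the SYN segment)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# client.connect(("127.0.0.1", 5000)) # the three-way handshake happens here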
CALL COLLISION
If two hosts try to establish a connection simultaneously between the same two sockets, the sequence of events is as demonstrated in the figure for such circumstances. Only one connection is established, because connections are identified by their endpoints. If the first setup results in a connection identified by (x, y) and the second setup does too, only one table entry is made, i.e. for (x, y).
For the initial sequence number, a clock-based scheme is used, with a clock pulse coming every 4 microseconds. For additional safety, when a host crashes it may not reboot for a time equal to the maximum packet lifetime. This is to make sure that no packets from previous connections are still roaming around.
WHAT IS SESSION LAYER?
The services provided by the lower layers are not enough for some processes. The session layer, also known as the network dialog controller, establishes, maintains and synchronizes the interaction between communicating applications.
The session layer tracks the dialogs between systems, which are also called sessions. This
layer manages a session by initiating the opening and closing of sessions between end-user
application processes.
It also controls single or multiple connections for each end-user application and directly
communicates with both the presentation and the transport layers. The services provided by
the session layer are generally implemented in the application environment using remote
procedure calls (RPCs).
In the Session layer, streams of data are marked and are resynchronized properly, so that the
ends of the messages are not cut prematurely and data loss is avoided.
Protocols such as the Zone Information Protocol, the AppleTalk Protocol and the Session Control Protocol are used to implement sessions.
Through checkpointing and recovery, session management and restoration are possible using these protocols.
For example, sessions are used in live television programmes, where the audio and video streams coming from two different sources are merged. This avoids overlapping and silent broadcast time. The figure shows the relationship of the session layer to the transport layer and the presentation layer.
SESSION LAYER
Design Issues with Session Layer
Management of dialog control.
It allows machines to establish sessions between them seamlessly.
Token management and synchronization services are also provided by the session layer.
It provides enhanced, high-quality services to the user.
Functionalities of session layer
Specific functionalities of the session layer are as follows:
1. Dialog Control
The session layer behaves as a dialog controller.
It allows two communication machines to enter into a dialog.
It permits communication in either half-duplex (one way at a time) or full-duplex (two ways at a time) mode.
For example, a dialog between a terminal and a mainframe can be half-duplex.
2. Synchronization
This layer permits a process to add checkpoints, referred to as synchronization points, into the stream of data.
Example: If a system is sending a file of 2500 pages, it is advisable to add a checkpoint after every 100 pages to ensure that each 100-page unit is received and acknowledged independently.
In this case, if a crash happens during the transmission of page 824, retransmission begins from page 801; there is no need to retransmit pages 1 to 800.
3. Token Management
This layer is also responsible for managing tokens. Through this, it prevents two users from simultaneously attempting to access the same critical operation.
PRESENTATION LAYER
As we know, the OSI model has 7 layers, and the presentation layer is layer 6. It sits between the application layer and the session layer, and its function is to handle all the information coming from the session layer.
After checking its accuracy and formatting it, it presents the information to the application layer in a standardized format.
When a machine sends information to another machine, the presentation (syntax) layer makes sure that the receiving machine can understand and use that information.
In other words, the presentation layer plays the role of a translator in the OSI model.
The presentation layer is also called the syntax layer, as its main function is to take care of the syntax and semantics of the information exchanged between two communicating systems. Here, syntax refers to the format or structure of the data and semantics refers to its meaning.
How the Presentation Layer Works
This layer works by converting information into a variety of file formats and forms of encryption.
The syntax layer does this using built-in algorithms and can standardize the information whether it is represented as XML, C++ data structures, or TLV encodings.
In addition to passing all the information from the application layer to the session layer, the
presentation layer is also responsible for passing information from the session layer to
the application layer.
Functions of Presentation Layer
Character-Code Translation
Data Conversion
Data Compression
Data Encryption and Decryption
Data Translation
Graphic handling
Character-Code Translation –
Different computers represent the information given by the user in different ways: for example, some computers use the American Standard Code for Information Interchange (ASCII), while others use the Extended Binary Coded Decimal Interchange Code (EBCDIC).
The syntax layer can translate ASCII into EBCDIC (and vice versa) if needed.
Data Conversion –
It is responsible for converting data from one representation to another, for example converting integer numbers to floating-point numbers.
Data Compression –
It compresses data so that it can be transferred much faster.
Data Encryption and Decryption –
While transferring data across the network, encryption is required to keep the information safe from attackers. Similarly, once the data reaches the destination user, it needs to be decrypted so that it can be used.
The presentation layer provides encryption and decryption of data so that the user's information stays protected.
Data Translation –
We can connect different types of computers, laptops, mobiles, tablets, printers, scanners, servers, mainframes, etc. using a network. These computers may use different programming languages, operating systems and character sets.
In such a situation, when one computer sends information to another, it is very important that the data is presented in a form the destination machine can understand.
The presentation layer acts as a translator that presents information to machines in an understandable form.
Graphic handling –
It also presents various types of graphics, such as images and videos, in an understandable format.
Protocols Supported at Presentation
Independent Computing Architecture (ICA)
Apple Filing Protocol (AFP)
NetWare Core Protocol
Network Data Representation
Independent Computing Architecture (ICA) –
Independent Computing Architecture is a protocol designed for application server systems.
Network Data Representation –
It provides a common way of representing data so that information received through the network can be interpreted correctly.
Apple Filing Protocol (AFP) –
It is used to exchange data files between Apple computers.
NetWare Core Protocol –
It was created by a company named Novell Inc., which uses it in its client-server operating system.
Application Layer
The application layer sits at Layer 7, the top of the Open Systems Interconnection (OSI)
communications model. It ensures an application can effectively communicate with other
applications on different computer systems and networks.
The application layer is not an application. Instead, it is a component within an application
that controls the communication method to other devices. It is an abstraction layer service that
masks the rest of the application from the transmission process.
The application layer relies on all the layers below it to complete its process. At this stage, the
data or the application is presented in a visual form that the user can understand.
FUNCTIONS OF THE APPLICATION LAYER
The application layer handles the following functions:
ensures that the receiving device is identified, reachable and ready to accept data;
when appropriate, enables authentication between devices for an extra layer of network
security;
makes sure necessary communication interfaces exist, such as whether there is an Ethernet or
Wi-Fi interface in the sender’s computer;
ensures agreement at both ends on error recovery procedures, data integrity and privacy;
determines protocol and data syntax rules at the application level; and
presents the data on the receiving end to the user application.
Two types of software provide access to the network within the application layer:
1. network-aware applications, such as email; and
2. application-level services, such as file transfer or print spooling.
IMPORTANT PROTOCOLS USED IN COMPUTER NETWORKS
PROTOCOLS
In computer networks, protocols are sets of rules that dictate how devices communicate with
each other. Think of them like languages or traffic rules that ensure everything moves
smoothly and securely. There are many different types of protocols, each with a specific role
in managing various aspects of network communication. Here, we’ll explore some of the most
important basic protocols used in computer networks, explaining them with simple language
and examples.
1. TRANSMISSION CONTROL PROTOCOL (TCP)
TCP is one of the core protocols of the internet, used to ensure reliable data transmission between devices.
What it Does: TCP breaks down data into smaller packets, sends them over the network,
and makes sure they are delivered in the correct order, without errors.
How it Works: TCP establishes a connection between the sender and receiver before
transmitting data. It also checks for errors, and if any packet of data is lost or damaged,
it resends that specific packet.
Example:
Imagine you are mailing a book to a friend, but the book is too large to fit in one envelope.
You divide the book into smaller packages (packets) and send them one by one. TCP makes
sure each packet arrives at your friend's house in the correct order. If any packet gets lost along
the way, TCP will send a replacement.
2. INTERNET PROTOCOL (IP)
IP is responsible for addressing and routing data packets across networks. It works closely with TCP and is often referred to as TCP/IP.
What it Does: IP assigns a unique address (IP address) to each device on the network.
It helps data packets find their way from one device to another by hopping between
routers until they reach the destination.
How it Works: Data is sent in packets, and IP provides the addressing system. Just like
houses have street addresses, devices on a network have IP addresses, which are used
to route the data to the correct destination.
Example:
Sending an email is like mailing a letter. IP works like the postal service, ensuring that the
letter (your email) reaches the correct address (IP address of the recipient's device).
3. USER DATAGRAM PROTOCOL (UDP)
UDP is similar to TCP but does not guarantee reliable delivery of data. It's faster but less reliable than TCP.
What it Does: UDP sends data packets without waiting for acknowledgment or error
checking. It’s ideal for applications where speed is more important than accuracy.
How it Works: Unlike TCP, UDP does not establish a connection or check for lost
packets. It simply sends the data and moves on, making it faster but with the risk of lost
or out-of-order data.
Example:
UDP is like broadcasting a message over a loudspeaker. You send out the message (data), but
you don’t wait to see if everyone heard it clearly. This works well for things like live video
streaming or online gaming, where speed is more important than making sure every single
packet is received perfectly.
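A minimal sketch with Python's standard socket module shows this "fire and forget" behaviour.
The address 127.0.0.1 and port 6000 are arbitrary; if nothing is listening there, the datagram is
simply lost and UDP never notices.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # datagram socket: no connection
sock.sendto(b"live video frame #42", ("127.0.0.1", 6000)) # send and move on
sock.close()
# A receiver bound to port 6000 would read the datagram with recvfrom(); there is no
# acknowledgement or retransmission in either direction.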
4. HYPERTEXT TRANSFER PROTOCOL (HTTP)
HTTP is the protocol used for transferring web pages over the internet.
What it Does: HTTP allows a web browser (client) to request web pages from a web
server. It’s the foundation of data communication for the World Wide Web.
How it Works: When you type a website URL into your browser, your browser sends
an HTTP request to the web server hosting the website. The server then sends the
requested webpage back to your browser.
Example:
When you visit a website like www.example.com, HTTP works like a waiter in a restaurant.
You (the browser) request a dish (webpage) from the waiter (server), and the waiter brings the
dish back to you.
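The request/response exchange can be reproduced with Python's standard urllib module, which
here plays the role of the browser; www.example.com is used only because it is a public test
site.

from urllib.request import urlopen

with urlopen("http://www.example.com/") as response:   # send an HTTP GET request
    print(response.status)                             # e.g. 200 when the page is served
    print(response.read(200).decode())                 # first bytes of the returned HTML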
5. FILE TRANSFER PROTOCOL (FTP)
What it Does: FTP allows users to upload and download files between a client (your
computer) and a server.
How it Works: FTP clients connect to an FTP server to browse and transfer files. It can
be used for uploading files to a website or downloading files from a server.
Example:
Imagine you need to move a folder of photos from your computer to a website server. You use
FTP to "upload" the files to the server, like sending a box of photos to a storage facility.
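A small sketch using Python's standard ftplib module, assuming a reachable FTP server; the
host name, credentials and file name below are placeholders only.

from ftplib import FTP

ftp = FTP("ftp.example.com")                 # placeholder server address
ftp.login("user", "password")                # placeholder credentials
with open("photos.zip", "rb") as f:
    ftp.storbinary("STOR photos.zip", f)     # "upload" the file to the server
ftp.quit()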
6. SIMPLE MAIL TRANSFER PROTOCOL (SMTP)
What it Does: SMTP ensures that emails are sent from one server to another. It handles
the sending part of email communication.
How it Works: When you send an email, your email client (like Gmail or Outlook) uses
SMTP to send your message to the email server, which then delivers it to the recipient’s
server.
Example:
Think of SMTP like a postal worker. When you send an email, SMTP works like the postal
service, picking up the email from your device and delivering it to the recipient’s mailbox.
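A minimal sketch using Python's standard smtplib module. The server name, port, addresses
and password are placeholders; most real providers also require TLS encryption and login, as
sketched here.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "friend@example.com"
msg["Subject"] = "Party invitation"
msg.set_content("You're invited on Saturday!")

with smtplib.SMTP("smtp.example.com", 587) as server:   # placeholder outgoing mail server
    server.starttls()                                    # encrypt the session
    server.login("me@example.com", "app-password")       # placeholder credentials
    server.send_message(msg)                             # hand the message to SMTP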
7. POP AND IMAP
POP and IMAP are two protocols used to retrieve emails from a server.
POP (Post Office Protocol): Downloads emails from the server to your device and
typically deletes the email from the server after downloading.
IMAP (Internet Message Access Protocol): Syncs emails between the server and your
device, keeping the emails on the server even after you’ve read them.
Example:
POP: If you use POP, it’s like receiving physical mail at home. Once you’ve picked up
the letter from the mailbox (server), it’s no longer there.
IMAP: IMAP is more like accessing a shared online document. You can view, read, or
delete your emails on any device, but the emails remain on the server unless you
specifically delete them.
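A short sketch of the IMAP side using Python's standard imaplib module; the server name and
credentials are placeholders. Notice that the messages stay on the server and the client only
asks which ones are unread.

import imaplib

with imaplib.IMAP4_SSL("imap.example.com") as mail:      # placeholder mail server
    mail.login("me@example.com", "app-password")          # placeholder credentials
    mail.select("INBOX")                                  # open the mailbox on the server
    status, data = mail.search(None, "UNSEEN")            # ask for unread message ids
    print(status, data[0].split())                        # messages remain on the server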
8. DOMAIN NAME SYSTEM (DNS)
DNS is like the internet’s phone book. It translates human-readable domain names (like
www.google.com) into IP addresses that computers use to identify each other on the network.
What it Does: DNS converts domain names into IP addresses, allowing users to enter
readable website names instead of numerical addresses.
How it Works: When you type a website address, DNS looks up the corresponding IP
address and directs your browser to the correct server.
Example:
Imagine you want to call a friend but don’t remember their phone number. You look up their
name in your phone’s contact list. DNS works similarly, turning "www.example.com" into an
IP address so that your browser knows where to send your request.
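The lookup itself is a single call with Python's standard socket module; www.example.com is
just a sample name.

import socket

name = "www.example.com"
print(name, "->", socket.gethostbyname(name))   # prints the IPv4 address the name resolves to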
9. DYNAMIC HOST CONFIGURATION PROTOCOL (DHCP)
What it Does: DHCP assigns IP addresses dynamically, meaning each device gets an
IP address automatically when it connects to the network.
How it Works: When a device joins a network, the DHCP server assigns an available
IP address to the device, making the process seamless for users.
Example:
When you join a public Wi-Fi network at a coffee shop, your device is automatically given an
IP address. This happens because the DHCP server assigns it to you, just like getting a parking
spot number in a parking lot when you enter.
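A quick way to see the address your machine is currently using (on most home or office
networks this is the one handed out by DHCP) is the sketch below. Connecting a UDP socket
does not send any packets; it only makes the operating system choose the outgoing interface.

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))                             # no data is sent; the OS just picks a route
print("Local address in use:", s.getsockname()[0])     # typically the DHCP-assigned address
s.close()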
10. SECURE SHELL (SSH)
What it Does: SSH allows users to securely manage and communicate with remote
devices over an encrypted connection.
How it Works: SSH creates a secure connection between the client and the server,
ensuring that any data transmitted between them is encrypted and protected from
eavesdropping.
Example:
Imagine you need to log into your work computer from home. SSH works like a secure tunnel,
allowing you to access your work machine from anywhere while keeping your communication
safe and private from anyone trying to spy on you.
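An illustrative sketch using the third-party Paramiko library (one common choice, installable
with pip install paramiko); the host name, user name and password are placeholders.
Everything sent over the connection is encrypted.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # demo only; verify host keys in practice
client.connect("work-pc.example.com", username="alice", password="secret")  # placeholders
stdin, stdout, stderr = client.exec_command("uptime")           # run a command on the remote machine
print(stdout.read().decode())                                   # output travels back over the encrypted channel
client.close()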
11. SIMPLE NETWORK MANAGEMENT PROTOCOL (SNMP)
SNMP is used to monitor and manage devices on a network, such as routers, switches and
servers, from a central management system.
Imagine you’re managing a large office building. SNMP is like having sensors in every room
that report back to you. For example, you could monitor temperature (CPU usage), check door
statuses (network interfaces), and even receive alerts if something unusual happens (like a
device failure). Three basic SNMP operations are:
1. GET Request: The network management system (NMS) sends a request to retrieve
data, such as asking a router how much traffic it has handled today.
2. SET Request: You can configure devices remotely. For instance, if you need to reset a
device or change its settings, a SET request does the job.
3. Trap Alerts: Devices can send alerts without being asked. If a server is overheating or
a router goes down, SNMP sends a trap to the NMS to notify the administrator instantly.
Example:
Think of an office network with dozens of devices. If a critical switch fails, SNMP sends an
alert (trap) to the admin’s dashboard, allowing them to fix the issue before it disrupts the entire
network. It’s like having a 24/7 monitoring service for your digital infrastructure.
In short, SNMP makes network management easy, allowing real-time monitoring,
configuration, and troubleshooting from a central location; a small sketch of an SNMP GET
request follows below.
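As an illustration, a GET request for a device's uptime might look like the sketch below,
assuming the third-party pysnmp library (4.x synchronous API) and a device at the placeholder
address 192.0.2.1 that accepts the "public" community string.

from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                          UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity)

# Ask the device for sysUpTime (OID 1.3.6.1.2.1.1.3.0)
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public"),                     # placeholder community string
           UdpTransportTarget(("192.0.2.1", 161)),      # placeholder device, SNMP port 161
           ContextData(),
           ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0"))))

if error_indication:
    print("SNMP error:", error_indication)
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))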
HOW EMAIL SERVERS WORK
The process of sending and receiving emails might seem instant, but behind the scenes, there's
a fascinating journey that takes place on email servers. Here’s a quick breakdown:
PROCESS:
You start by writing an email and hitting Send. At this point, your email client (like Gmail,
Outlook, or Thunderbird) connects to an SMTP (Simple Mail Transfer Protocol) server. Think
of this as the post office that handles outgoing mail.
The SMTP server checks if the recipient’s domain is the same as yours (e.g., both are on
Gmail). If it is, the email is sent directly to the recipient’s mailbox. But if they’re on a different
service (like Yahoo), the SMTP server looks up the DNS (Domain Name System) to find the
recipient’s mail server.
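That DNS step can be sketched with the third-party dnspython library (pip install dnspython,
2.x API): the sending server asks for the MX (mail exchanger) records of the recipient's
domain to learn which server accepts its mail. gmail.com is used only as a familiar example.

import dns.resolver

for record in dns.resolver.resolve("gmail.com", "MX"):
    # each MX record names a server willing to accept mail for the domain,
    # with a preference value (lower values are tried first)
    print(record.preference, record.exchange)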
The email is then sent to the recipient’s Mail Transfer Agent (MTA), basically their incoming
mail server, which stores the message in the recipient’s mailbox until it is retrieved with a
protocol such as IMAP (Internet Message Access Protocol) or POP3 (Post Office Protocol).
Once the email reaches the recipient’s server, it sits there until they open their email client.
The email client connects to the server, retrieves the new message, and displays it in their
inbox. This is like the recipient picking up their letter from the mailbox.
Example in Action:
Imagine you’re sending an email invitation for a party. Once you hit send, your SMTP server
determines whether your friend’s address is hosted on the same server. If not, it finds the
correct path to the destination and delivers it through their server. All this happens in seconds,
ensuring your invitation arrives in their inbox, ready to be read.
In short, email servers act like digital post offices, ensuring your message is delivered
quickly and efficiently across the internet. It’s a seamless process that keeps
communication flowing!
Summary:
Each protocol plays a unique role in making the internet and networks work smoothly,
allowing us to send emails, browse the web, transfer files, and much more, often without us
even realizing they are at work behind the scenes.