
What is a network and what are the nodes present in a network?

A network is an interconnection of communication elements connected by communication links for information interchange. When two or more elements are connected in a limited geographical area to share their resources and information, they are said to be in a network. We can see examples of a network in almost every field around us.

But in this blog, we'll mainly focus on computer networks. We'll also
study the nodes present in a network, classifications, goals, and
applications of a network.

Computer Network
A computer network is a system in which multiple computers are connected to each other to share information and resources. In other words, it is a network of various communicating devices or elements connected by communication links. The communication elements can be a computer, mobile, router, switch, etc., and the communication links can be an optical fibre cable, a coaxial cable, a wireless LAN, etc.

In a computer network, one process in one device is able to send/receive data to/from at least one process residing in a remote device. The internet is a network of networks. It is not managed by a single organization.
Node
In a computer network, a node is either a connection point, a
redistribution point, or a communication point. In other words, a
node refers to a point or joint where a connection takes place.

It can be a computer or device that is part of a network. Generally, two or more nodes are needed in order to form a network connection. The definition of a node depends on the network and protocol layer referred to.

A node may be data communication equipment (used to establish communication, such as a modem, hub, bridge, switch, etc.) or data terminal equipment (an end device, such as a digital telephone handset, printer, host computer, etc.).

A physical network node is an active electronic device that is attached to a network. It is capable of sending, receiving, or forwarding information over a communication channel.

Each device on a network that has a unique logical or IP (Internet Protocol) address can also be termed a node. When connected in a network, every node must have a MAC address. A MAC address is a unique identifier assigned by device manufacturers to a network interface controller (NIC) for communications in a network. A NIC is a computer hardware component that connects a computer to a computer network. When connected to the internet or an intranet, the nodes are referred to as internet nodes. These nodes are identified by their IP addresses. Some data link layer devices (switches, bridges, WLAN access points, etc.) do not have an IP address. Thus, they are physical nodes but not internet nodes.
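As a small illustration of these identifiers, the Python sketch below reads the local machine's MAC address and IP address using only standard-library calls; the display format is just for readability, and on some systems uuid.getnode() may fall back to a random value if no MAC address can be determined.

```python
import socket
import uuid

def local_node_identifiers():
    # uuid.getnode() returns a 48-bit MAC address of a local NIC as an integer.
    mac_int = uuid.getnode()
    mac = ":".join(f"{(mac_int >> shift) & 0xff:02x}" for shift in range(40, -1, -8))

    # Resolve the host name to an IP address (the node's logical address).
    ip = socket.gethostbyname(socket.gethostname())
    return mac, ip

if __name__ == "__main__":
    mac, ip = local_node_identifiers()
    print("MAC address (physical):", mac)
    print("IP address (logical):  ", ip)
```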

In a distributed network, nodes can be clients, servers, or peers. It may also use some virtual nodes, so as to maintain transparency. In cloud computing, each user computer that is connected to a cloud can be treated as an end node.

The degree of connectivity of a node is the measure of the number of connections a node has with other nodes.
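To make this concrete, here is a minimal Python sketch (with a made-up adjacency list) that computes the degree of every node in a small network:

```python
# Hypothetical network described as an adjacency list: node -> set of neighbours.
network = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B"},
    "D": {"B"},
}

# The degree of a node is simply the number of links it has to other nodes.
degrees = {node: len(neighbours) for node, neighbours in network.items()}
print(degrees)  # {'A': 2, 'B': 3, 'C': 2, 'D': 1}
```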

Classification of Computer Network


A computer network can be classified on the basis of communication
media, functional relationships, topology, and scale of the network.

Now, let us have a look at these classifications one by one.

• Classification based on communication media:


Computer Networks can be broadly classified in the following two
categories based on communication media:

1. Wired Network: It can be implemented using coaxial cable, optical fibre cable, etc.
2. Wireless Network: It can be implemented using Terrestrial
Microwave, Communication Satellites, Wireless LANs.

• Classification according to scale:


Computer Networks can be broadly classified in the following three
categories according to scale or the area of a network:
1. LAN: It is the acronym for Local Area Network. It is confined to
a small geographical area such as a library, college building,
etc.
2. MAN: It is the acronym for Metropolitan Area Network. It is
confined to a large geographical area such as a city or town.
3. WAN: It is the acronym for Wide Area Network. It is confined
to a very large geographical area such as a country or even the
whole world.

• Classification based on Network topology:


Computer Networks can be broadly classified in the following five categories based on network topology, i.e., based on how the nodes are connected in a network:

1. Bus: In this network topology, every node is connected to a single cable, also called a bus.
2. Star: In this network topology, all the devices are connected to
a single hub through a cable. This hub is the central node. The
hub can be active or passive in nature.
3. Ring: In this network topology, a ring is formed between
various nodes that connect a device with its exactly two
neighbour devices.
4. Mesh: In this network topology, every node is connected to every other node via a dedicated channel.
5. Hybrid: This network topology is a combination of two or more
topologies.

• Classification based on Functional Relationship:


Computer Networks can be broadly classified in the following three
categories based on functional relationship:
1. Active Networking: It allows the packets flowing through a
telecommunications network to dynamically modify the
operations of a network.
2. Client-Server Networking: It is a network in which a client
runs the program and access data that are stored on the server.
3. Peer-to-Peer Networking: This network facilitates the flow of
information from one peer to another without any central
server.

Goals of a Computer Network


The main goals of a computer network are related to interconnection and information interchange. Some major goals of a computer network are as follows:

• Resource Sharing: All the devices present in a computer network can share their resources with each other.
• Reliability: A computer network makes the system reliable. If one node fails, the working of the other nodes will not be affected.
• Cost Efficiency: In a computer network, nodes can share hardware and software components with each other, thus making it cost efficient.
• Scalability: A computer network makes the system scalable. We can add or remove nodes from the network at our convenience.
Applications of Computer Network
A computer network can be applied to various fields. Some of the application areas are as follows:

• Business Applications: In business, a computer network can be used for B2B and B2C communication.
• Home Applications: Home applications of a computer network
can be accessed for remote information, person-to-person
communication, electronic commerce, etc.
• Mobile Applications: Mobile applications of computer
networks can be mobile banking, communication, etc.
• Social Applications: Social applications of computer networks can be various social networking applications like Facebook, Twitter, etc.

What are the different types of network?

A computer network is a system in which multiple computers are connected to share information and resources. Computer networks differ from one another based on their functionality, geography, ownership, and the communication media used.

So, in this blog, we are going to learn about various types of computer networks based on the geographical areas they cover, functionality, ownership, and communication media used.

A computer network can be divided into the following types based on the geographical area that they cover:

1. LAN(Local Area Network)
2. MAN(Metropolitan Area Network)
3. WAN(Wide Area Network)
Now, let us study these networks one by one:

LAN(Local Area Network)


A local area network is a network that is designed to operate over a very small geographical or physical area, such as an office, a building, a group of buildings, etc.

Generally, it is used to connect two or more personal computers through a communication medium such as coaxial or twisted-pair cables. A LAN can use either a wired or a wireless mode of communication. A LAN that entirely uses wireless media for communication is termed a WLAN(Wireless Local Area Network).

Local Area Networks came into existence around the 1970s. IEEE developed the specifications for LANs. The speed of this network varies from 10 Mbps (Ethernet) to 1 Gbps (FDDI or Gigabit Ethernet).

In other words, a LAN connects a relatively small number of machines in a relatively close geographical area. Bus, ring, and star topologies are generally used in a local area network. In a LAN using a star topology, one computer can become a server, serving all other computers, called clients. Two different buildings can be connected very easily in a LAN using a 'Bridge'.

Ethernet LAN is the most commonly used LAN. The speed of a Local Area Network also depends on the topology used. For example, a LAN using bus topology has a speed of 10 Mbps to 100 Mbps, while in ring topology it is around 4 Mbps to 16 Mbps. LANs are generally privately owned networks.

Following are the functionalities of a Local Area Network:

1. File Serving: In a LAN, a large storage disk acts as a central storage repository.
2. Print Serving: Printers can be shared very easily in a LAN by
various computers.
3. Academic Support: A LAN can be used in the classroom, labs,
etc. for educational purposes.
4. Manufacturing Support: LAN can support the manufacturing
and industrial environment.
5. High Reliability: Individual workstations might survive in case of network failures.
Following are the advantages of a LAN:

1. File transfer and file access
2. Resource or peripheral sharing
3. Personal computing
4. Document distribution
5. Easy to design and troubleshoot
6. Minimum propagation delay
7. High data rate transfer
8. Low error rate
9. Easily scalable(devices can be added or removed very easily)
Following are the disadvantages of a LAN:

1. Equipment and support may be costly
2. Some hardware devices may not interoperate properly

MAN(Metropolitan Area Network)


A Metropolitan Area Network is a bigger version of a LAN that uses technology similar to a LAN. It spans a larger geographical area such as a town or an entire city.

It can be connected using an optical fibre cable as a communication medium. Two or more LANs can also be connected using routers to create a MAN. When this type of network is created for a specific campus, it is termed a CAN(Campus Area Network).

A MAN spans a geographical area of about 50 km. The best example of a MAN is the cable television network that spans a whole city.

A MAN can be either a publicly or a privately owned network. Generally, a telephone exchange line is most commonly used as a communication medium in a MAN. The protocols used in MAN are RS-232, Frame Relay, ISDN, etc.

Uses of MAN are as follows:

1. MAN can be used for connecting the various offices of the same
organization, spread over the whole city.
2. It can be used for communication in various governmental
departments.
Following are the advantages of using MAN:

1. Covers a large geographical area as compared to a LAN
2. High-speed data connectivity
3. The propagation delay of a MAN is moderate
Following are the disadvantages of MAN:

1. It is hard to design and maintain a MAN
2. A MAN is less fault-tolerant
3. It is costlier to implement
4. Congestion is higher in a MAN

WAN(Wide Area Network)


A Wide Area Network is the most widely spread network. It spans very large distances, such as a country, a continent, or even the whole globe. Two widely separated computers can be connected very easily using a WAN. For Example, the Internet.

A WAN may include various Local and Metropolitan Area Networks. The mode of communication in a WAN can be either wired or wireless. Telephone lines for wired and satellite links for wireless communication can be used in a wide area network.

In other words, a WAN provides long-distance transmission of data, voice, images, and video over a large geographical area. A WAN may span beyond a 100 km range. It may be privately or publicly owned. The protocols used in WAN are ISDN(Integrated Services Digital Network), SMDS(Switched Multi-megabit Data Service), SONET(Synchronous Optical Network), HDLC(High-level Data Link Control), SDLC(Synchronous Data Link Control), etc.

The advantage of a WAN is that it spans a very large geographical area and connects a huge mass of people.

Following are the disadvantages of WAN:

1. The propagation delay is high in a WAN
2. The data rate is low
3. The error rate is high
4. It is very complex to design a WAN
These are the types of network according to geographical area.

Following are the types of network, based on functionality:

• Client-Server Network: A client-server network is a network in which a client runs programs and accesses data that are stored on the server. In this kind of network, one computer becomes the server, serving all other computers, called clients.
• Peer-to-Peer Network: A peer-to-peer network facilitates the flow of information from one peer to another without any central server. In other words, each node on the network acts as both a client and a server.
Following are the types of network, based on Ownership:

• Private Network: A private network is a network in which various restrictions are imposed to secure the network and restrict unauthorized access. This type of network is privately owned by a single person or a group of people for their personal use. A Local Area Network(LAN) can be used as a private network.
• Public Network: A public network is a network that has the
least or no restrictions on it. It can be freely accessed by
anyone, without any restrictions. This type of network is
publicly owned by the government or NGOs. Metropolitan Area
Network(MAN) and Wide Area Network(WAN) can be used as a
public network.
Following are the types of network, based on Transmission Media:

• Bound/Guided Media Network: Bounded/guided media can also be referred to as wired media. This kind of network provides a physical link between two nodes connected in a network. The physical links are directed in a particular direction in the network. Coaxial cable, twisted-pair cable, optical fibre cable, etc. can be used in such networks for connectivity. A Local Area Network(LAN) and a Metropolitan Area Network(MAN) can be used as a bound/guided media network.
• Unbound/Unguided Media Network: Unbounded/Unguided
media can also be referred to as wireless media. This kind of
network does not need any physical link for electromagnetic
transmission. Radio waves, Microwaves, Infrared, etc. can be
used in such networks for connectivity. Metropolitan Area
Network(MAN) and Wide Area Network(WAN) can be used as
an Unbound/Unguided media network.

What are Peer-to-Peer networks and Server-Based networks?

A network is an interconnection of communication elements connected by communication links for information interchange. A network can be classified on various bases, but one of the most important classifications is based on network design.

Based on network design, a computer network can be divided into the following two types:

1. Peer-to-Peer Network
2. Server-Based Network
Now, we will learn about these two types of networks in detail.

Peer-to-Peer Network
The Peer-to-Peer network is also called a P2P or computer-to-computer network. 'Peers' are the nodes or computer systems which are connected to each other. In this kind of network, each node is connected to every other node in the network.

The nodes can share printers or CD-ROM drives, and allow other devices to read from or write to their hard disks, allowing sharing of files, access to their internet connection, and other resources. Files or resources can be shared directly between the systems on the network, without the need for any central server. Such a network, where nodes can become servers and share things in this manner, is referred to as a peer-to-peer network.

In a peer-to-peer network, each node can work as both a server and a client. This network does not distinguish between client and server. Each node can act as either a client or a server depending on whether the node is requesting or providing a service. All the nodes are functionally equal and can send or receive data directly from one another.
Peer-to-Peer networks can be deployed very easily with most modern operating systems such as Windows, macOS, etc. Computers in the peer-to-peer network run the same network protocols and software. Once connected to the network, P2P software allows users to search for files and other resources on other nodes. The pattern of communication between peers depends entirely on the application requirements. Each object is replicated on several computers to further distribute the load and to provide resilience in the event of disconnection of an individual computer.

A peer-to-peer network can be configured as either a wired or a wireless network. It is most commonly used in Local Area Networks, especially in small offices or within a single department of a large organization. The nodes present in the network are situated very near to each other. Each node has access to devices and files on other computers and can independently store its own software and information.

For Example, BitTorrent is a widely used peer-to-peer network.
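The defining property described above, that the same node can act as both a server and a client, can be sketched with plain sockets. The Python snippet below is only an illustration under assumed host/port values; a real P2P application such as BitTorrent adds peer discovery and file exchange on top of this idea.

```python
import socket
import threading
import time

def serve(listen_port):
    # Server role: accept connections from other peers and send a greeting.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen()
    while True:
        conn, _addr = srv.accept()
        with conn:
            conn.sendall(b"hello from peer\n")

def connect(peer_host, peer_port):
    # Client role: connect to another peer and read its greeting.
    with socket.create_connection((peer_host, peer_port)) as s:
        print(s.recv(1024).decode().strip())

if __name__ == "__main__":
    # Each peer listens on its own port and may also dial out to any other peer.
    threading.Thread(target=serve, args=(9001,), daemon=True).start()
    time.sleep(0.5)               # give the listening socket a moment to start
    connect("127.0.0.1", 9001)    # for the demo, the peer simply talks to itself
```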

Following are the advantages of using a peer-to-peer network:

1. Easy to implement and manage.
2. Nodes or workstations are independent of one another. Also, no access permissions are needed.
3. The network is reliable in nature. If a peer fails, it will not affect the working of the others.
4. There is no need for any professional software in such networks.
5. The cost of implementing such networks is very low.
Following are the disadvantages of using a peer-to-peer network:

1. Storage is decentralized and not so efficiently managed.
2. No data backup options are available in peer-to-peer networks.
3. These kinds of networks are not so secure.

Server-Based Networks
A Server-Based network can also be termed a Client-Server network. A server is a node that acts as a service provider for clients. Servers wait for client requests and then respond to them. The server is located elsewhere on the network, usually on a more powerful machine. Here, the server is the central location where users share and access network resources, and it controls the level of access that users have to shared resources. In other words, a server provides functionality and serves other programs, called clients.

There are various kinds of servers depending upon their use: a web server (which serves HTTP requests), a database server (which runs a DBMS), a file server (which provides files to clients), a mail server, a print server, a game server, an application server, and so on. A server can contain web resources, host web applications, store user and program data, etc.

A client is a machine or program requesting services from a server. Clients are often situated at workstations or on personal computers. They can be a simple application or a whole system that accesses services provided by a server. A client program provides an interface to allow a computer user to request the services of the server and to display the results the server returns. Each client has to log on to the system or server to access the data and its resources.
A server-based network is centralized in nature. Also, the storage in
this kind of network is centralized. In other words, we can say that a
server-based network is based on a centralized structure and
provides a way to communicate via the web. The Internet is the most
widely used client-server network.

The server-based network can be applied to various uses and applications. Some of them are as follows:

1. Centralization: The server administers the whole set-up in the network. Access rights and resource allocation are also handled by the server.
2. Proper Management: Due to centralized storage, it becomes
easy to find a file or some other resource.
3. Backup and Recovery: A centralized server makes data backup
and recovery possible in a convenient manner.
4. Upgradation and Scalability: Changes in the network can be
made very easily by just upgrading the server. Also, the network
is easily scalable.
5. Accessibility: Servers can be accessed remotely from various
platforms in the network.
6. Security: Rules defining security and access rights can be
defined at the time of the set-up of the server.
Following are the advantages of using a server-based network:

1. It facilitates a centralized storage system.
2. Centralization makes administration easy.
3. Data can be easily backed up in such networks.
4. The network is easy to scale.
5. Data sharing speed is high.
6. Servers can serve multiple clients at a time.
Following are the disadvantages of using a server-based network:

1. There is a high dependency on the centralized server.
2. If the server's data is corrupted, all nodes will be affected.
3. A network administrator is required.
4. The cost of the server and network software is very high.

What are the Data Transmission Modes in a network?

Data transmission mode defines the direction of the flow of information between two communicating devices. It is also called Data Communication or Directional Mode. It specifies the direction of the flow of information from one place to another in a computer network.

In the Open System Interconnection(OSI) layer model, the Physical Layer is dedicated to data transmission in the network. It mainly decides the direction in which the data needs to travel to reach the receiver system or node.

So, in this blog, we will learn about the different data transmission modes based on the direction of exchange, the synchronization between the transmitter and receiver, and the number of bits sent simultaneously in a computer network.

The data transmission modes can be characterized into the following three types based on the direction of exchange of information:

1. Simplex
2. Half-Duplex
3. Full-Duplex

The data transmission modes can be characterized into the following two types based on the synchronization between the transmitter and the receiver:

1. Synchronous
2. Asynchronous

The data transmission modes can be characterized into the following two types based on the number of bits sent simultaneously in the network:

1. Serial
2. Parallel
Now, let us study these various data transmission modes in the
computer network one by one.

According to the Direction of Exchange of Information:

1. Simplex
Simplex is the data transmission mode in which the data can flow only in one direction, i.e., the communication is unidirectional. In this mode, a sender can only send data but cannot receive it. Similarly, a receiver can only receive data but cannot send it.
This transmission mode is not so popular because we cannot perform two-way communication between the sender and receiver in this mode. It is mainly used in business fields, such as sales, that do not require any corresponding reply. It is similar to a one-way street.

For Example, Radio and TV transmission, keyboard, mouse, etc.

Following are the advantages of using a Simplex transmission mode:

1. It utilizes the full capacity of the communication channel during data transmission.
2. It has the least or no data traffic issues as data flows only in one direction.
Following are the disadvantages of using a Simplex
transmission mode:
1. It is unidirectional in nature, with no inter-communication between devices.
2. There is no mechanism for information to be transmitted back to the sender (no mechanism for acknowledgement).

2. Half-Duplex
Half-Duplex is the data transmission mode in which the data can flow in both directions, but in only one direction at a time. It is also referred to as Semi-Duplex. In other words, each station can both transmit and receive data, but not at the same time. When one device is sending, the other can only receive, and vice versa.

In this type of transmission mode, the entire capacity of the channel can be utilized for each direction. Transmission lines can carry data in both directions, but the data can be sent only in one direction at a time.
This type of data transmission mode can be used in cases where there is no need for communication in both directions at the same time. It can be used for error detection when the sender does not send or the receiver does not receive the data properly. In such cases, the data needs to be transmitted again by the sender.

For Example, Walkie-Talkie, Internet Browsers, etc.

Following are the advantages of using a half-duplex transmission mode:

1. It facilitates the optimum use of the communication channel.
2. It provides two-way communication.
Following are the disadvantages of using a half-duplex transmission mode:

1. The two-way communication cannot be established simultaneously.
2. Delay in transmission may occur, as only one-way communication is possible at a time.

3. Full-Duplex
Full-Duplex is the data transmission mode in which the data
can flow in both directions at the same time. It is bi-directional
in nature. It is two-way communication in which both the stations
can transmit and receive the data simultaneously.
Full-Duplex mode has double the bandwidth as compared to half-duplex mode. The capacity of the channel is divided between the two directions of communication. This mode is used when communication in both directions is required simultaneously.

For Example, a telephone network, in which both persons can talk and listen to each other simultaneously.

Following are the advantages of using a full-duplex transmission mode:

1. Two-way communication can be carried out simultaneously in both directions.
2. It is the fastest mode of communication between devices.
Following are the disadvantages of using a full-duplex transmission mode:

1. The capacity of the communication channel is divided into two parts. Also, no dedicated path exists for data transfer.
2. It has improper channel bandwidth utilization, as there exist two separate paths for the two communicating devices.
According to the synchronization between the transmitter and the
receiver:

1. Synchronous
The Synchronous transmission mode is a mode of
communication in which the bits are sent one after another
without any start/stop bits or gaps between them. Actually, both
the sender and receiver are paced by the same system clock. In this
way, synchronization is achieved.

In a synchronous mode of data transmission, bytes are transmitted as blocks in a continuous stream of bits. Since there are no start and stop bits in the message block, it is the responsibility of the receiver to group the bits correctly. The receiver counts the bits as they arrive and groups them into eight-bit units. The receiver continuously receives the information at the same rate at which the transmitter has sent it. It also listens for messages even if no bits are transmitted.

In synchronous mode, the bits are sent successively with no separation between each character, so it becomes necessary to insert some synchronization elements into the message; this is called "Character-Level Synchronization".

For Example, if there are two bytes of data, say (10001101, 11001011), they will be transmitted in synchronous mode as one continuous stream of bits: 1000110111001011.

For Example, communication between the CPU and RAM.

Following are the advantages of using a Synchronous transmission mode:

1. Transmission speed is fast, as there is no gap between the data bits.
Following are the disadvantages of using a Synchronous
transmission mode:

1. It is very expensive.

2. Asynchronous
The Asynchronous transmission mode is a mode of communication in which a start bit and a stop bit are introduced into the message during transmission. The start and stop bits ensure that the data is transmitted correctly from the sender to the receiver.
Generally, the start bit is '0' and the end bit is '1'. Asynchronous here means 'asynchronous at the byte level', but the bits within each character are still synchronized and their time durations are the same.

In an asynchronous mode of communication, data bits can be sent at any point in time. The messages are sent at irregular intervals and only one data byte can be sent at a time. This type of transmission mode is best suited for short-distance data transfer.

For Example, if there are two bytes of data, say (10001101, 11001011), they will be transmitted in asynchronous mode with each byte framed by a start bit and a stop bit: 0 10001101 1 and 0 11001011 1, possibly with an idle gap between the two frames.

For Example, data input from a keyboard to the computer.
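As a rough sketch of this framing (not a real UART implementation, and ignoring the bit ordering used on the wire), the Python snippet below wraps each data byte with a start bit '0' and a stop bit '1':

```python
def frame_async(data_bytes, start_bit="0", stop_bit="1"):
    # Frame each byte with a start and a stop bit, as in asynchronous transmission.
    return [start_bit + format(byte, "08b") + stop_bit for byte in data_bytes]

# The two example bytes from the text: 10001101 and 11001011.
print(frame_async([0b10001101, 0b11001011]))
# ['0100011011', '0110010111']
```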

Following are the advantages of using an Asynchronous transmission mode:

1. It is a cheap and effective mode of transmission.
2. Data transmission accuracy is high due to the presence of start and stop bits.
Following are the disadvantages of using an Asynchronous
transmission mode:

1. The data transmission can be slower due to the gaps present between different blocks of data.
According to the number of bits sent simultaneously in the network:

1. Serial
The Serial data transmission mode is a mode in which the data bits are sent serially, one after the other, over the transmission channel.

It needs a single transmission line for communication. The data bits are received in synchronization with one another, so there is a challenge in synchronizing the transmitter and the receiver.
In serial data transmission, the system takes several clock cycles to
transmit the data stream. In this mode, the data integrity is
maintained, as it transmits the data bits in a specific order, one after
the other.

This type of transmission mode is best suited for long-distance data transfer, or when the amount of data being sent is relatively small.

For Example, data transmission between two computers using serial ports.

Following are the advantages of using a serial transmission mode:

1. It can be used for long-distance data transmission, as it is reliable.
2. The number of wires and the complexity are less.
3. It is cost-effective.
Following are the disadvantages of using a serial transmission mode:

1. The data transmission rate is slow due to the single transmission channel.

2. Parallel
The Parallel data transmission mode is a mode in which the data bits are sent in parallel, i.e., n bits are transmitted at the same time. Multiple transmission lines are used in this mode of transmission, so multiple data bits can be transmitted in a single clock cycle.
This mode of transmission is used when a large amount of data has
to be sent in a shorter duration of time. It is mostly used for short-
distance communication.

For n bits, we need n transmission lines. So, the complexity of the network increases, but the transmission speed is high. If two or more transmission lines are too close to each other, there may be a chance of interference in the data, degrading the signal quality.

For Example, data transmission between a computer and a printer.
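As a conceptual sketch (not hardware code), the Python snippet below contrasts the two modes for one byte: serial mode shifts the bits out one per clock cycle over a single line, while parallel mode places all eight bits on eight lines in a single cycle.

```python
def send_serial(byte):
    # Single line: one bit per clock cycle, so 8 cycles for one byte.
    return [[(byte >> i) & 1] for i in range(7, -1, -1)]

def send_parallel(byte):
    # Eight lines: the whole byte is placed on the lines in one clock cycle.
    return [[(byte >> i) & 1 for i in range(7, -1, -1)]]

byte = 0b10110010
print("serial cycles:  ", len(send_serial(byte)), send_serial(byte))
print("parallel cycles:", len(send_parallel(byte)), send_parallel(byte))
```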

Following are the advantages of using a parallel transmission mode:

1. It is easy to program or implement.
2. Data transmission speed is high due to the n transmission channels.
Following are the disadvantages of using a parallel
transmission mode:

1. It requires more transmission channels, and is hence cost-ineffective.
2. There may be interference in the data bits, for example in video conferencing.
Hence, after learning the various transmission modes, we can
conclude that some points need to be considered when selecting a
data transmission mode:

• Transmission Rate.
• The Distance that it covers.
• Cost and Ease of Installation.
• Resistance to environmental conditions.

What is network topology and types of network topology?

Topology is derived from two Greek words topo and logy, where
topo means 'place' and logy means 'study'. In computer
networks, a topology is used to explain how a network is
physically connected and the logical flow of information in the
network. A topology mainly describes how devices are connected
and interact with each other using communication links.
In computer networks, there are mainly two types of topologies,
they are:

1. Physical Topology: A physical topology describes the way in which the computers or nodes are connected with each other in a computer network. It is the arrangement of various elements (links, nodes, etc.), including the device locations and cable installation, of a computer network. In other words, we can say that it is the physical layout of nodes, workstations, and cables in the network.
2. Logical Topology: A logical topology describes the way data flows from one computer to another. It is bound to a network protocol and defines how data is moved throughout the network and which path it takes. In other words, it is the way in which the devices communicate internally.
Network topology defines the layout, virtual shape, or structure
of the network, not only physically but also logically. A network
can have one physical topology and multiple logical topologies
at the same time.

In this blog, we will mainly concentrate on physical topologies. We'll learn about the different types of physical topologies, their advantages, and their disadvantages.

In a computer network, there are mainly six types of physical topologies:

1. Bus Topology
2. Ring Topology
3. Star Topology
4. Mesh Topology
5. Tree Topology
6. Hybrid Topology
Now let us learn these topologies one by one:

Bus Topology
Bus topology is the simplest kind of topology in which a
common bus or channel is used for communication in the
network. The bus is connected to various taps and
droplines. Taps are the connectors, while droplines are the cables
connecting the bus with the computer. In other words, there is only a
single transmission line for all nodes.

When a sender sends a message, all other computers can hear it, but only the intended receiver accepts it (by verifying the MAC address attached to the data frame); the others reject it. Bus topology is mainly suited for small networks like LANs.
In this topology, the bus acts as the backbone of the network, which joins every computer and peripheral in the network.
the shared channel have line terminators. The data is sent only in
one direction and as soon as it reaches the end, the terminator
removes the data from the communication line(to prevent signal
bounce and data flow disruption).

In a bus topology, each computer communicates with the other computers on the network independently. Every computer can share the network's total bus capacity. The devices share the responsibility for the flow of data from one point to the other in the network.

For Example, Ethernet cable networks, etc.

Following are the advantages of Bus topology:

1. Simple to use and install.
2. If a node fails, it will not affect other nodes.
3. Less cabling is required.
4. Cost-efficient to implement.
Following are the disadvantages of Bus topology:

1. Efficiency decreases as more nodes are added (signal strength decreases).
2. If the bus fails, the whole network will fail.
3. Only a limited number of nodes can connect to the bus, due to the limited bus length.
4. Security issues and risks are greater, as messages are broadcast to all nodes.
5. There is congestion and traffic on the bus, as it is the only source of communication.

Ring Topology
Ring topology is a topology in which each computer is
connected to exactly two other computers to form the ring. The
message passing is unidirectional and circular in nature.

This network topology is deterministic in nature, i.e., each computer is given access for transmission at a fixed time interval. All the nodes are connected in a closed loop. This topology mainly works on a token-based system, and the token travels around the loop in one specific direction.

In a ring topology, if a token is free, then a node can capture the token, attach the data and destination address to the token, and then release the token for communication. When this token reaches the destination node, the data is removed by the receiver and the token is made free to carry the next data.

For Example, Token Ring, etc.

Following are the advantages of Ring topology:

1. Easy Installation.
2. Less Cabling Required.
3. Reduces chances of data collision(unidirectional).
4. Easy to troubleshoot(the faulty node does not pass the token).
5. Each node gets the same access time.
Following are the disadvantages of Ring topology:

1. If a node fails, the whole network will fail.
2. Slow data transmission speed (each message has to go through the ring path).
3. Difficult to reconfigure (we have to break the ring).

Star Topology
Star topology is a computer network topology in which all the
nodes are connected to a centralized hub. The hub or switch acts
as a middleware between the nodes. Any node requesting or providing a service first contacts the hub for communication.
The central device (hub or switch) has a point-to-point communication link (a dedicated link between the devices that cannot be accessed by any other computer) with each device. The central device then broadcasts or unicasts the message, depending on the device used. A hub broadcasts the message, while a switch unicasts the message by maintaining a switch table. Broadcasting increases unnecessary data traffic in the network.

In a star topology, the hub or switch acts as a server, and the other connected devices act as clients. Only one input-output port and one cable are required to connect a node to the central device. This topology is better in terms of security because the data does not pass through every node.

For Example, high-speed LANs, etc.

Following are the advantages of Star topology:

1. Centralized control.
2. Less Expensive.
3. Easy to troubleshoot(the faulty node does not give response).
4. Good fault tolerance due to centralized control on nodes.
5. Easy to scale(nodes can be added or removed to the network
easily).
6. If a node fails, it will not affect other nodes.
7. Easy to reconfigure and upgrade(configured using a central
device).
Following are the disadvantages of Star topology:

1. If the central device fails, the network will fail.
2. The number of devices in the network is limited (due to the limited input-output ports on the central device).

Mesh Topology
Mesh topology is a computer network topology in which nodes
are interconnected with each other. In other words, direct
communication takes place between the nodes in the network.
There are mainly two types of Mesh:

1. Full Mesh: Each node is connected to every other node in the network.
2. Partial Mesh: Some nodes are not connected to every node in the network.
In a fully connected mesh topology, each device has a point to point
link with every other device in the network. If there are 'n' devices in
the network, then each device has exactly '(n-1)' input-output ports
and communication links. These links are simplex links, i.e., the data
moves only in one direction. A duplex link(in which data can travel
in both the directions simultaneously) can replace two simplex links.

If we are using simplex links, then the number of communication links will be n(n-1) for n devices, while it is n(n-1)/2 if we are using duplex links in the mesh topology.
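A quick calculation makes these formulas concrete; the Python sketch below simply evaluates them for a few example values of n:

```python
def mesh_links(n, duplex=True):
    # Fully connected mesh of n devices: n(n-1)/2 duplex links or n(n-1) simplex links.
    return n * (n - 1) // 2 if duplex else n * (n - 1)

for n in (3, 5, 10):
    print(f"{n} devices: {mesh_links(n, duplex=False)} simplex links, "
          f"{mesh_links(n)} duplex links")
# e.g. 5 devices: 20 simplex links, 10 duplex links
```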
For Example, the Internet(WAN), etc.

Following are the advantages of Mesh topology:

1. Dedicated links facilitate direct communication.
2. No congestion or traffic problems on the channels.
3. Good fault tolerance due to the dedicated path for each node.
4. Very fast communication.
5. Maintains privacy and security due to a separate channel for communication.
6. If a node fails, other alternatives are present in the network.
Following are the disadvantages of Mesh topology:

1. A very large amount of cabling is required.
2. Cost-inefficient to implement.
3. Complex to implement and takes a large space to install the network.
4. Installation and maintenance are very difficult.

Tree Topology
Tree topology is a computer network topology in which all the
nodes are directly or indirectly connected to the main bus
cable. Tree topology is a combination of Bus and Star topology.
In a tree topology, the whole network is divided into segments,
which can be easily managed and maintained. There is a main hub
and all the other sub-hubs are connected to each other in this
topology.

Following are the advantages of Tree topology:

1. Large-distance network coverage.
2. Fault finding is easy by checking each hierarchy.
3. Least or no data loss.
4. A large number of nodes can be connected directly or indirectly.
5. Other hierarchical networks are not affected if one of them fails.
Following are the disadvantages of Tree topology:

1. Cabling and hardware costs are high.
2. Complex to implement.
3. Hub cabling is also required.
4. A large network using tree topology is hard to manage.
5. It requires very high maintenance.
6. If the main bus fails, the network will fail.

Hybrid Topology
A Hybrid topology is a computer topology which is a
combination of two or more topologies. In practical use, they are
the most widely used.
In this topology, all topologies are interconnected according to the
needs to form a hybrid. All the good features of each topology can be
used to make an efficient hybrid topology.

Following are the advantages of Hybrid topology:

1. It can handle a large volume of nodes.
2. It provides flexibility to modify the network according to our needs.
3. Very reliable (if one node fails, it will not affect the whole network).
Following are the disadvantages of Hybrid topology:

1. Complex design.
2. Expensive to implement.
3. A Multi-Station Access Unit (MSAU) may be required.
Hence, after learning the various computer network topologies, we
can conclude that some points need to be considered when selecting
a physical topology:

• Ease of Installation.
• Fault Tolerance.
• Implementation Cost.
• Cabling Required.
• Maintenance Required.
• Reliable Nature.
• Ease of Reconfiguration and upgradation.
What are Routers, Hubs, Switches, Bridges?

Routers, Hubs, Switches, and Bridges are all network connecting devices. A network connecting device is a device that connects two or more devices together, whether they are present in the same network or in different networks.

A network connecting device can be a repeater, hub, bridge, switch, router, or gateway. But in this blog, we'll focus on hubs, bridges, switches, and routers. We'll also learn about their features, advantages, and disadvantages in networking.

All these connecting devices operate at specific layers of the OSI (Open System Interconnection) model, as noted for each device below.

Now let us learn about these network connecting devices one by one.

1. Hub
Hub is a very simple network connecting device. In Star/hierarchical
topology, a Repeater is called Hub. It is also known as a Multiport
Repeater Device.

A Hub is a layer-1 device and operates only at the physical layer of the OSI model. Since it works in the physical layer, it mainly deals with the data in the form of bits or electrical signals. A Hub is mainly used to create a network and connect devices on the same network only.

A Hub is not an intelligent device; it forwards the incoming messages to other devices without checking for any errors or processing them. It does not maintain any address table for connected devices. It only knows that a device is connected to one of its ports.

When a data packet arrives at one of the ports of a Hub, it simply copies the data to every other port. In other words, a hub broadcasts the incoming data packets in the network. Due to this, there are various security issues with a hub. Broadcasting also leads to unnecessary data traffic on the channel.

A Hub uses a half-duplex mode of communication. It shares the bandwidth of its channel with the connected devices. It has only one collision domain, so there are more chances of collisions and traffic on the channel. A hub can be used only in networks of limited size; if the network size is increased, the speed of the network will slow down. Also, a hub can only connect devices in the same network, with the same data rates and formats.

There are mainly two types of Hub, they are:

1. Active Hub: An active hub is also known as a concentrator. It requires a power supply and can work as a repeater. Thus, it can analyze the data packets and can amplify the transmission signals, if needed.
2. Passive Hub: A passive hub does not need any power supply to operate. It only provides communication between the networking devices and does not amplify the transmission signals. In other words, it just forwards the data as it is.
Following are the advantages of using a Hub:

1. It is simple to implement.
2. The implementation cost is low.
3. It does not require any special system administration
configuration. We can just plug and play it.
Following are the disadvantages of using a Hub:

1. It can connect devices of the same network only.
2. It uses a half-duplex mode of communication.
3. It is less secure, as it broadcasts the data packets.
4. It can be used in a limited network size only.
5. Broadcasting induces unnecessary traffic on the channel.
2. Bridge
A bridge is a layer-2 network connecting device, i.e., it works on
the physical and data-link layer of the OSI model. It interprets
data in the form of data frames. In the physical layer, the bridge acts
as a Repeater which regenerates the weak signals, while in the data-
link layer, it checks the MAC(Media Access Control) address of the
data frames for its transmission.

A bridge connects devices which are present in the same network. It is mainly used to segment a network to allow a larger network size. It has two types of ports: incoming and outgoing. It uses the incoming port to receive data frames and the outgoing port to send data frames to other devices. It has two collision domains, so there is still a chance of collisions and traffic in the data transmission channel.

A Bridge has filtering capability. This means that it can discard faulty data frames and will allow only errorless data frames into the network. Also, it can check the destination MAC address of a frame and decide the port from which the frame should be sent out. For this, it maintains a table containing the physical (MAC) addresses of all the devices in the network. Whenever a data frame arrives at the incoming port of the bridge, it first checks the data frame for any kind of errors. If the frame is errorless, it directs the data frame to the specified MAC address (taking the entry from the address table) using its outgoing port. It does not change the physical (MAC) addresses of the frames during transmission. In other words, a Bridge is a Repeater with filtering capability.
There are mainly two types of Bridge, they are:

1. Transparent Bridge: A transparent bridge simply works as a transmission medium between two devices. They are actually transparent (they are present but are not functionally visible) to the networking devices.
2. Routing Bridge: Routing bridges have their own unique identity, so they can be easily identified by the network devices. The source station or sender can send the data packets through specific bridges (using the unique identity of the bridges).
Following are the advantages of using a Bridge:

1. It is not so complex to implement.
2. The implementation cost is medium.
3. It does not require any special system administration configuration. We can just plug and play it.
4. It improves security by limiting the scope of data frames.
5. It has filtering capability.
6. It can be used in a large network.
Following are the disadvantages of using a Bridge:

1. It can connect devices of the same network only.
2. There is a delay in forwarding the frames due to error checking.
3. There is a need to maintain an address table.

3. Switch
A switch is a layer-2 network connecting device, i.e., it works on
the physical and data-link layer of the OSI model. It interprets
data in the form of data frames. A switch acts as a multiport bridge
in the network. It provides the bridging functionality with greater
efficiency.

A switch maintains a switch table which has the MAC addresses of all the devices connected to it. It is preferred over the hub, as it reduces unnecessary traffic in the transmission channel. A switch can connect devices only in the same network. It uses the full-duplex mode of communication and saves bandwidth. The switch table keeps updating every few seconds for better processing.

A Switch is an intelligent device with filtering capabilities. It can discard faulty data frames and will allow only errorless data frames into the network. Also, it will forward a data frame to a specific node based on the MAC address (taken from the switch table). A Switch has multiple collision domains, so it has the least or no collisions in the transmission channel. In fact, every port of a switch has a separate collision domain.

When a data frame arrives at the Switch, it first checks the data frame for any kind of errors. If the frame is error-free, it searches for the destination MAC address in the switch table. If the address is available in the switch table, it forwards the data frame to that specific node; otherwise, it broadcasts the data frame to every other node in the network. The switch also registers the source MAC address of incoming frames in the switch table, which is how the table is built up.
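The forwarding behaviour described above can be sketched as a small simulation; this is a simplification (real switches also age out entries, handle VLANs, and more), and the frame format used here is just an assumption for illustration.

```python
class Switch:
    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2, 3, 4]
        self.mac_table = {}         # learned mapping: MAC address -> port

    def receive(self, frame, in_port):
        # Learn: remember which port the source MAC address was seen on.
        self.mac_table[frame["src"]] = in_port

        # Forward: unicast if the destination is known, otherwise flood.
        out_port = self.mac_table.get(frame["dst"])
        if out_port is not None:
            return [out_port]
        return [p for p in self.ports if p != in_port]   # broadcast to all other ports

sw = Switch(ports=[1, 2, 3, 4])
print(sw.receive({"src": "AA:AA", "dst": "BB:BB"}, in_port=1))  # [2, 3, 4]  (flood)
print(sw.receive({"src": "BB:BB", "dst": "AA:AA"}, in_port=2))  # [1]        (unicast)
```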

A Switch can have 8/16/24/48 ports. The data transmission speed in a switch is relatively slow (around 10-100 Mbps). Also, it has only one broadcast domain.
There are mainly four types of Switches, they are:

1. Store and Forward Switch: It is the most widely and commonly used switch. It does not forward data frames unless the frames are errorless and completely received in the switch buffer. It is reliable in nature.
2. Cut-through Switch: A cut-through switch performs no error checking. It starts sending a data frame to the destination node as soon as it starts receiving it. It is unreliable in nature.
3. Fragment-Free Switch: It is a combination of the store-and-forward and cut-through switches. It checks only the starting 64 bytes (header information) of a data frame before transmitting the frame.
4. Adaptive Switch: It is the most advanced kind of switch, which automatically chooses any of the above three modes as per the need.
Following are the advantages of using a Switch:

1. The implementation cost is medium.
2. It does not require any special system administration configuration. We can just plug and play it.
3. It improves security by limiting the scope of data frames.
4. It has filtering capability.
5. It can be used in a large network.
6. It uses the full-duplex mode of communication.
7. It has multiple collision domains, so there are the least or no collisions in the channel.
Following are the disadvantages of using a Switch:
1. It can connect devices of the same network only.
2. There is a delay in forwarding the frames due to error checking.
3. There is a need to maintain a Switch table.

4. Router
A Router is a layer-3 network connecting device, i.e., it works
on the physical, data-link and network layer of the OSI
model. It interprets data in the form of data packets. It is mainly an
internetworking device, which can connect devices of different
networks(implementing the same architecture and protocols). In
other words, it can connect two physically and logically different
network devices with each other. A Router is used to connect the
networks or it routes traffic between the networks. In other words,
a Router is the Gateway of a network.

Since it connects devices of different networks, the connecting device must implement an Internet Protocol (IP) address. So, the Router has a physical and a logical (Internet Protocol) address for each of its interfaces. It routes or forwards data packets from one network to another based on their IP addresses, and it changes the physical address of the data packet (both source and destination) when it forwards the data packets.

A router maintains a routing table using routing algorithms. When a data packet is received at the router, it first checks the destination IP address. If the IP address belongs to the router's own network, it accepts the data packet; otherwise, it forwards the data packet towards the destination IP address using the routing table.
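A very simplified sketch of such a lookup is shown below, using Python's standard ipaddress module; the table entries are invented for illustration, and real routers perform longest-prefix matching over far larger, dynamically updated tables.

```python
import ipaddress

# Hypothetical routing table: destination network -> outgoing interface / next hop.
routing_table = {
    ipaddress.ip_network("192.168.1.0/24"): "eth0 (directly connected LAN)",
    ipaddress.ip_network("10.0.0.0/8"):     "eth1 via 10.0.0.1",
    ipaddress.ip_network("0.0.0.0/0"):      "eth2 via ISP gateway (default route)",
}

def route(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    # Longest-prefix match: prefer the most specific network containing the address.
    matches = [net for net in routing_table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(route("192.168.1.42"))  # eth0 (directly connected LAN)
print(route("8.8.8.8"))       # eth2 via ISP gateway (default route)
```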
A router does not perform addressing. It can have 2/4/8 ports for
connecting the devices. It can control both the collision
domain(inside the network) and the broadcast domain(outside the
network). It has a fast data transmission speed(up to 1 Gbps). A
Router can be a Wireless Router, Core Routers, Edge Routers, Virtual
Routers, etc.

There are mainly two types of routing performed by Routers:

1. Static Routing: In Static Routing, the path for the data packets
is manually set. It is generally used for small networks.
2. Dynamic Routing: In Dynamic Routing, various routing
algorithms are used to find the best and shortest path for the
data packets.
Following are the advantages of using a Router:

1. It can connect devices and provide routing facilities over different networks implementing the same protocol and structure.
2. It improves security by limiting the scope of data packets.
3. It has filtering capability.
4. It can be used in a large network.
5. It uses the full-duplex mode of communication.
6. It has control over both the collision and the broadcast domain.
Following are the disadvantages of using a Router:

1. It is very complex to implement.
2. The implementation cost is quite high.
3. There is a need to maintain a routing table.
4. There is a delay in forwarding packets due to error checking.
5. It requires a special system administration configuration.

What is Flow-Control in networking?

In a network, the sender sends the data and the receiver receives it. But suppose a situation where the sender is sending the data at a speed higher than the receiver is able to receive and process it; then the data will get lost. Flow-control methods help in preventing this. A flow control method keeps a check that the sender sends data only at a speed that the receiver is able to receive and process. So, let's get started with the blog and learn more about flow control.

Flow Control
Flow control tells the sender how much data should be sent to the
receiver so that it is not lost. This mechanism makes the sender wait
for an acknowledgment before sending the next data. There are two
ways to control the flow of data:

1. Stop and Wait Protocol
2. Sliding Window Protocol

Stop and Wait Protocol

It is the simplest flow control method. In this, the sender will send one frame at a time to the receiver. Until then, the sender will stop and wait for the acknowledgment from the receiver. When the sender gets the acknowledgment, it will send the next data packet to the receiver and wait for the acknowledgment again, and this process will continue.

Suppose a frame that is sent is not received by the receiver and is lost. The receiver will not send any acknowledgment, as it has not received the frame. Also, the sender will not send the next frame, as it is waiting for the acknowledgment of the previous frame it had sent. So a deadlock situation can be created here. To avoid such a situation, there is a time-out timer. The sender will wait this fixed amount of time for the acknowledgment, and if the acknowledgment is not received, it will send the frame again.

There are two types of delays while sending these frames:

• Transmission Delay: The time taken by the sender to put all the bits of the frame onto the wire is called the transmission delay. It is calculated by dividing the data size (D) to be sent by the bandwidth (B) of the link.
Td = D / B

• Propagation Delay: The time taken by the last bit of the frame to travel from one side to the other is called the propagation delay. It is calculated by dividing the distance between the sender and receiver by the wave propagation speed.
Tp = d / s ; where d = distance between sender and receiver, s = wave propagation speed

The propagation delay for sending the data frame and the
acknowledgment frame is the same as distance and speed will
remain the same for both frames. Hence, the total time required to
send a frame is

Total time = Td (Transmission Delay) + Tp (Propagation Delay for the data frame) + Tp (Propagation Delay for the acknowledgment frame)
The sender is doing work only for Td time and for the rest 2Tp time
the sender is waiting for the acknowledgment.

Efficiency = Useful Time/ Total Time

η=Td / (Td+2Tp)
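To make these formulas concrete, here is a minimal Python sketch (the link parameters are made-up values chosen only for illustration) that computes Td, Tp, and the Stop and Wait efficiency:

# Stop and Wait efficiency for assumed (illustrative) link parameters
frame_size_bits = 8_000        # D: frame size in bits (assumed)
bandwidth_bps   = 1_000_000    # B: link bandwidth in bits per second (assumed)
distance_m      = 200_000      # d: sender-to-receiver distance in metres (assumed)
speed_mps       = 2 * 10**8    # s: propagation speed of the signal (assumed)

Td = frame_size_bits / bandwidth_bps   # transmission delay = D / B
Tp = distance_m / speed_mps            # propagation delay  = d / s

efficiency = Td / (Td + 2 * Tp)        # useful time / total time
print(f"Td = {Td*1000:.3f} ms, Tp = {Tp*1000:.3f} ms, efficiency = {efficiency:.2%}")

With these illustrative numbers, Td is 8 ms, Tp is 1 ms, and the efficiency works out to 80%.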

Advantages of Stop and Wait Protocol

1. It is very simple to implement.

Disadvantages of Stop and Wait Protocol

1. We can send only one packet at a time.


2. If the distance between the sender and the receiver is large
then the propagation delay would be more than the
transmission delay. Hence, efficiency would become very low.
3. After every transmission, the sender has to wait for the
acknowledgment and this time will increase the total
transmission time.
Sliding Window Protocol
As we saw, the disadvantage of the stop and wait protocol is that the sender waits for the acknowledgment and is idle during that time. In the sliding window protocol, we utilize this time: we convert the waiting time into transmission time.

A window is a buffer where we store the frames. Each frame in a window is numbered. If the window size is n, then the frames are numbered from 0 to n-1. A sender can send n frames at a time. When the receiver sends the acknowledgment of a frame, we no longer need to store that frame in our window, as it has already been received by the receiver. So, the window on the sender side slides to the next frame, and this window now contains a new frame along with all the previously unacknowledged frames of the window. At any instant of time, the window contains only the unacknowledged frames. This can be understood with the example below:

1. Suppose the size of the window is 4. So, the frames would be numbered as 0, 1, 2, 3, 0, 1, 2, 3, 0, … and so on.
2. Initially, the frames in the window are 0, 1, 2, 3. Now, the sender starts transmitting the frames. The first frame is sent, then the second, and so on.
3. When the receiver receives the first frame, i.e. frame 0, it sends an acknowledgment.
4. When the acknowledgment is received by the sender, it knows that the first frame has been received by the receiver and it need not keep its record. So, the window slides to the next frame.
5. The new window contains the frames 1, 2, 3, 0. In this way, the window slides, hence the name sliding window protocol.

Using the sliding window protocol, the efficiency can be made maximum, i.e. 1. In the sliding window protocol, we use the propagation delay time for transmission as well. For this, the sender should be sending data frames all the time, i.e. for the whole Td + 2Tp interval. So, what should be the number of packets such that the efficiency is maximum?

We will apply a simple unitary method to find this. In Td units of time, we can send one packet. So in one unit of time, we can send 1/Td packets. We have a total time of Td + 2Tp. Therefore, in Td + 2Tp time we can send (Td + 2Tp)/Td packets. Let a = Tp/Td. So, if we send 1 + 2a packets, then the efficiency is 1.

Td units of time → 1 packet transmitted
1 unit of time → (1 / Td) packets transmitted
Td + 2Tp units of time → (Td + 2Tp) / Td packets transmitted
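Here is the same unitary-method calculation as a tiny Python sketch, reusing the illustrative Td and Tp values from the earlier sketch (these are assumptions, not measurements):

import math

Td = 0.008                 # transmission delay in seconds (assumed)
Tp = 0.001                 # propagation delay in seconds (assumed)

a = Tp / Td                # the ratio 'a' used in the text
window = 1 + 2 * a         # frames needed to keep the link busy for Td + 2Tp
print(f"a = {a:.3f}, 1 + 2a = {window:.2f}, so use a window of {math.ceil(window)} frames")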

What is Error-Control in Networking?

While sending the data from the sender to the receiver there is a
high possibility that the data may get lost or corrupted. Error is a
situation when the sender's data does not match the data at the
receiver's end. When an error is detected then we need to retransmit
the data. So, there are various techniques of error control in
computer networks. In this blog, we will see all these techniques. So
let's get started.
Error Control
Error Control in the data link layer is a process of detecting and
retransmitting the data which has been lost or corrupted during the
transmission of data. Any reliable system must have a mechanism
for detecting and correcting such errors. Error detection and
correction occur at both the transport layer and the data link layer.
Here we will talk about the data link layer and check, bit by bit, whether there is any error or not.

TYPES OF ERROR
Single bit Error: When there is a change in only one bit of the
sender's data then it is called a single bit error.

Example: If the sender sends 101(5) to the receiver but the receiver
receives 100(4) then it is a single bit error.

101(sent bits) → 100(received bits)


Burst Error: When there is a change in two or more bits of the
sender’s data then it is called a burst error.

Example: If the sender sends 1011(11) to the receiver but the receiver receives 1000(8), then it is a burst error.

1011(sent bits) → 1000(received bits)

Phases in Error Control


• Error Detection: Firstly, we need to detect at the receiver's end whether the received data has an error or not.
• Acknowledgement: If any error is detected, the receiver sends a negative acknowledgement (NACK) to the sender.
• Retransmission: When the sender receives a negative acknowledgement, or if no acknowledgement is received from the receiver, the sender retransmits the data.

Error Detection
1. Vertical Redundancy Check
2. Longitudinal Redundancy Check
3. Cyclic Redundancy Check
4. CheckSum

Vertical Redundancy Check(VRC)


In this method, a redundant bit called parity bit is added to the
data. This parity bit is added such that the number of 1’s in the data
is even. This is called even parity checking. If the number of 1’s is
even then the bit to be added is 0. If the number of 1's is odd then
the bit to be added is 1.

Some systems may also check for the odd number of 1’s. This is
called odd parity checking. If the number of 1’s is odd then the bit to be
added is 0. If the number of 1's is even then the bit to be added is 1.
Example: We have the data 1100001. Now, this data is sent to the even parity generator, which adds a redundant bit to it by checking the number of 1's. The even parity generator will add a 1, as the data has an odd number of 1's. So, the data that is going to be transmitted is the original data along with the parity bit, i.e. 11000011. At the receiver's side, we have a checking function that checks whether the number of 1's is even or not.
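The even-parity generator and checker described above can be sketched in a few lines of Python (the data word 1100001 is the one from the example; the second check also shows the limitation discussed next):

def even_parity_bit(bits: str) -> str:
    # Parity bit that makes the total number of 1's even.
    return '1' if bits.count('1') % 2 else '0'

def vrc_check(codeword: str) -> bool:
    # Valid under even parity when the codeword has an even number of 1's.
    return codeword.count('1') % 2 == 0

data = "1100001"
codeword = data + even_parity_bit(data)     # -> "11000011"
print(codeword, vrc_check(codeword))        # 11000011 True
print(vrc_check("10000001"))                # two flipped bits still pass -> True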

Limitation of VRC
Suppose in the above example, at transmission time two bits are altered such that the receiver receives the data as 10000001. The
receiver will successfully accept this data. This is because the
checking function will check for an even number of 1’s and the
received data will satisfy this condition.

So, VRC fails when there is an even number of changes in the data.

Longitudinal Redundancy Check(LRC)


In LRC we use a block of codes as the parity bit. We will take each
block of data and calculate the parity bit longitudinally instead of
vertically. The sender will send the original data along with the block
of parity bit generated. This can be understood by the example
below.
Example: Suppose we have to send the data, 11100111 11011101
00111001 10101001. So we will calculate the parity bit by using even
parity checking. We will start from the zeroth position of each block
of bits and gradually move towards the higher positions. We take all
the bits present in the zeroth position i.e 1, 1, 1, 1. Now, the output
is decided according to the following two rules:

1. If there is an even number of 1's, then the output will be 0.
2. If there is an odd number of 1's, then the output will be 1.
As four 1's are present, which is an even number, the output at the zeroth position will be 0, and so on for the higher positions.

The transmitted data will be 11100111 11011101 00111001 10101001 10101010.
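Since LRC is just a column-wise (bitwise) even parity over the blocks, which is the same as XOR-ing all the blocks together, it can be sketched in a few lines of Python using the blocks from the example:

from functools import reduce

def lrc(blocks):
    # Column-wise even parity == XOR of all blocks.
    width = len(blocks[0])
    value = reduce(lambda a, b: a ^ b, (int(block, 2) for block in blocks))
    return format(value, f'0{width}b')

blocks = ["11100111", "11011101", "00111001", "10101001"]
print(lrc(blocks))   # -> 10101010, matching the parity block in the example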

Limitation of LRC
Suppose in the above example, during transmission some of the bits
get changed and the received data is 11100111 11011101 00110011
10100011. If we calculate the parity bit for this received data then it
will again come out to be 10101010 at the receiver's end.
So even if the data bits have changed this error won't be detected at
the receiver's end.

So, if two bits in one data unit are damaged and the two bits at the exact
same position in another data unit are damaged the LRC will not be able
to detect it.

Checksum
The checksum method involves processing at both ends. The sender generates the checksum and sends the original data along with it. The receiver also generates a checksum from the received data (including the sender's checksum). Only if the sum generated at the receiver's side is all zeroes is the data accepted.

The sender follows these steps:

1. Data is divided into k sections or blocks of given ‘n’ bits.


2. All the sections are added using 1’s complement(data).
3. The final sum is bitwise complemented(convert 0 to 1 and 1 to
0) to get the checksum.
4. The sender sends the original data along with the checksum.
The receiver follows the following steps:

1. Data is divided into k sections or blocks of given 'n' bits.


2. All the sections are added using 1’s
complement(data+checksum).
3. The final sum is bitwise complemented(convert 0 to 1 and 1 to
0).
4. If the result is all zeros, then the data is accepted; else it is rejected.
Example: We have to send the data where data is divided into four
sections each of 8 bits. Suppose we have to send 10110011
10101011 01011010 11010101. So, there are 32 bits and we will
divide the whole 32 bit data into a group of 8 bits i.e. 4 groups.

In the sender's side

1. Take any two blocks of data and perform 1's complement arithmetic.
2. Take the next block of data and add it to the result of the last
addition. Perform the addition until all the blocks are added.
3. Take 1's complement of the last sum obtained. It is a checksum.
4. Transmit the original bits along with the checksum to the
receiver's side.
In the receiver's side

1. Take any two blocks of data and perform 1's complement arithmetic.
2. Take the next block of data and add it to the result of the last
addition. Perform the addition until all the blocks are added.
3. Take the 1's complement of the last sum obtained. Only if the complement is all zeros is the data accepted. Here, the result is 00000000, i.e. all zeros, so the receiver accepts the data and the received data is correct.
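Here is a minimal Python sketch of the 1's complement checksum described above, using the four 8-bit blocks from the example (the 8-bit width is used only because the example uses 8-bit sections):

def ones_complement_sum(blocks, width=8):
    # Add blocks with end-around carry (1's complement addition).
    mask = (1 << width) - 1
    total = 0
    for block in blocks:
        total += int(block, 2)
        total = (total & mask) + (total >> width)   # wrap the carry around
    return total

def make_checksum(blocks, width=8):
    # Bitwise complement of the 1's complement sum.
    return format(~ones_complement_sum(blocks, width) & ((1 << width) - 1), f'0{width}b')

data = ["10110011", "10101011", "01011010", "11010101"]
checksum = make_checksum(data)
# Receiver adds data + checksum and complements; all zeros means "accept".
verify = make_checksum(data + [checksum])
print(checksum, verify)      # verify prints 00000000, so the data is accepted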

Cyclic Redundancy Check(CRC)


The cyclic redundancy check is based on binary division. The sender and the receiver both agree upon a generator polynomial, which can be any polynomial expression. This generator polynomial gives the CRC generator (the divisor). The CRC bits (one bit fewer than the CRC generator) are generated at the sender's side by dividing the bit sequence of the data by the CRC generator. Before dividing, we append n-1 zero bits to the original data, assuming that the CRC generator has n bits. The sender then appends the CRC bits to the data and sends it to the receiver.

Example: Given any generator polynomial, we can find the CRC generator by taking the coefficient of each term.

At the receiver end, the received bits are divided again by the CRC
generator. If the remainder of the division is zero then the data is
accepted else rejected.

Example: We have our data as 100100. The generator polynomial is the one from the explanation above, i.e. 1.x^3 + 1.x^2 + 0.x^1 + 1.x^0. So, our CRC generator will be 1101 (4 bits). Now, we have to append the original data with n-1 zeros, where n is the number of bits in the CRC generator. Here n is 4. So, the dividend is 100100000 (i.e. 100100 followed by 000) and the divisor is 1101. While dividing, we perform the XOR operation at each step. This division is performed on both the sender and receiver sides. On the receiver side, the divisor is the same and the dividend is the original data along with the CRC bits. Only if the remainder at the receiver's side is zero is the data accepted.
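The modulo-2 (XOR) division can be sketched in Python as below. With the data 100100 and the generator 1101 from the example, this sketch computes a 3-bit CRC of 001, so the transmitted frame would be 100100001; note that this worked result is ours, derived from the stated inputs, since the original division steps were shown in a figure.

def mod2_div(dividend: str, divisor: str) -> str:
    # Bitwise XOR long division; returns the remainder (len(divisor) - 1 bits).
    remainder = list(dividend[:len(divisor)])
    for i in range(len(divisor), len(dividend) + 1):
        if remainder[0] == '1':
            remainder = [str(int(a) ^ int(b)) for a, b in zip(remainder, divisor)]
        remainder.pop(0)
        if i < len(dividend):
            remainder.append(dividend[i])
    return ''.join(remainder)

data, generator = "100100", "1101"
crc = mod2_div(data + '0' * (len(generator) - 1), generator)   # sender side -> 001
codeword = data + crc                                          # 100100001
check = mod2_div(codeword, generator)                          # receiver side -> 000
print(crc, check)   # an all-zero remainder at the receiver means the frame is accepted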
Retransmission
When an error is detected, the specified frames are sent again; this process is called Automatic Repeat Request (ARQ). Error control in the data link layer is based on ARQ.

The following error control techniques can be used once the error
is detected.

1. Stop and wait ARQ


2. Sliding Window ARQ

Stop and Wait ARQ


A time out counter is maintained on the sender's side. Firstly, if the
sender does not receive the acknowledgement of the sent data within
the given time then the sender assumes that the sent data has been
lost or the acknowledgement of the data has been lost. So, the
sender retransmits the data to the receiver. Secondly, if the receiver
detects an error in the data frame indicating that it has been
corrupted during the transmission the receiver sends a
NACK(negative acknowledgement). If the sender receives a negative
acknowledgement of the data then it retransmits the data.

Sliding Window ARQ


In sliding window ARQ, a sender can send multiple data frames at
the same time. The sender keeps a record of all the sent frames until
they have been acknowledged. The receiver can send an ACK (acknowledgement) or a NACK (negative acknowledgement), depending on whether the data frame is received correctly, contains an error, or has been lost.

The sliding ARQ is of two types:

1. Go-Back-N ARQ
2. Selective Repeat ARQ

Go-Back-N ARQ
In this protocol, if any frame is lost or corrupted, then all the frames sent since the last acknowledged frame are sent once again. The sender's window size is N, but the receiver's window size is only one.

Example: Suppose we have a window size of 4 for the data frames that we are going to send. Now, suppose that while sending data frame 2, some error occurred and it got corrupted. So the receiver will send a negative acknowledgement (NACK) for that frame. All the data frames after the last acknowledged (ACK) frame, i.e. after frame 1, will now be sent again.

Limitations of Go-Back-N ARQ


In this protocol, we have to send all the subsequent frames once again even though they have no errors. In the above example, we had to send all the frames, i.e. 2, 3, 4, 5, once again, though the error was only in frame 2. How can we overcome this?

Selective Repeat ARQ


In this ARQ, if any frame is lost or corrupted, then only the frame that received a negative acknowledgement is sent again. The sender's window size and the receiver's window size are the same here. It removes the problem of Go-Back-N ARQ: error-free frames can be accepted because the receiver's window size is now equal to the sender's, unlike in Go-Back-N ARQ where the receiver's window size was only 1. The retransmission method is modified so that only the individual frames are retransmitted.

Example: In the above example, if there was an error in frame 2, we would send only frame number 2 again.
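The difference between the two ARQ schemes can be illustrated with a tiny Python sketch; it is a deliberate simplification that only shows which frames get retransmitted when frame 2 is corrupted, ignoring timers and window management:

def retransmit_go_back_n(sent_frames, error_frame):
    # Go-Back-N: resend the damaged frame and everything sent after it.
    return [f for f in sent_frames if f >= error_frame]

def retransmit_selective_repeat(sent_frames, error_frame):
    # Selective Repeat: resend only the damaged frame.
    return [error_frame]

sent = [0, 1, 2, 3, 4, 5]
print(retransmit_go_back_n(sent, 2))        # [2, 3, 4, 5]
print(retransmit_selective_repeat(sent, 2)) # [2]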

What are various Message switching techniques?

Have you ever wondered what mechanism is behind video calls on Skype and WhatsApp? How was our voice sent over earlier telephone calls? These facilities are provided to us by various message switching techniques. Different switching techniques are used according to our requirements; for example, earlier telephone calls used circuit switching. In this blog, we will get the answers to all the above questions. So, let's get started.
Message Switching Techniques
For transferring a message from the sender to the receiver, we use various message switching techniques. The technique we use depends upon factors like the kind of message we want to transfer, the required quality of the message, etc. In this blog, we will see all these techniques in detail.

There are three types of message switching techniques used:

1. Circuit Switching
2. Message Switching
3. Packet Switching

Circuit Switching
Circuit switching is a switching technique that establishes a dedicated path between the sender and the receiver to send the data. An example of a circuit-switched network is the telephone network.

A communication through circuit switching has 3 phases:

1. Circuit Establishment: In the first phase, a dedicated link is established between the sender and the receiver through a number of switching centers or nodes.
2. Data Transmission: Once the connection is established, data is transferred from the sender to the receiver over the established connection. This connection remains as long as the data transfer takes place.
3. Circuit Disconnect: Once the data transfer is completed, the circuit disconnects. The disconnection is initiated by one of the users, i.e. the sender or the receiver.
Example: Suppose we want to send data from A to B. We will first have to establish the connection through the switching nodes. These nodes establish a link such that there is a dedicated path from A to B. When the link is established, the data is transferred. Once the data sharing has taken place, the links are broken and are free to be used by other systems.

There are a number of paths available from a sender to a receiver. The sender can select any one of the available paths.

Advantages of Circuit Switching Network


1. Once the circuit is established there is no delay in data transfer.
So it is useful in real-time data transfer like voice calls.
2. The communication channel is dedicated which ensures a
steady rate of data transfer.
3. There is no need for packet sorting at the receiver's side, because all the data follows the same path in the same order.

Disadvantages of Circuit Switching Network


1. As the channel is dedicated, it cannot be used to transfer any other data even if the connected systems are not using it. Hence, circuit switching is inefficient in terms of resource utilization.
2. It is expensive as compared to other techniques because of
dedicated path requirements.
3. The time required to establish the connection between the
source and destination is high. This means that there is no
communication until the connection is established and the
resources are available.
4. It is mainly used for voice traffic, so it is not well suited for data transmission.

Message Switching
In circuit switching, when the source does not have enough data to transmit, the resources are unnecessarily kept idle. To avoid such a situation, message switching is used. Message switching is a
connectionless network in which the data from the source to
destination is sent in the form of message units. A message is a
logical unit of the information that can be of any length. The
sender and the receiver are not directly connected. There are many
intermediate nodes which ensure the delivery of the message to the
destination. The message switching was used in sending telegrams.
It has two main characteristics:

1. Store and Forward: The sender sends the messages to the nodes. Each intermediate node stores the complete message temporarily, inspects it for errors, and then transfers it to the next node on the basis of an available free channel. The message is forwarded to the next node only if that node has sufficient resources. The actual path taken by the message is dynamic, as the path is established as it travels.
2. Message Delivery: Each message must have a header that
contains the routing information like source and destination
address. All these pieces of information are wrapped in the
message and then the message is sent from the source to
destination.
Example: Suppose we have to send two messages i.e Message
1 and Message 2 from the sender to the receiver. We will directly
send the message without establishing any connection.
Advantage of Message Switching
1. The efficiency is improved as a single channel can now be used
for transferring many messages.
2. The source and the destination don't need to be ready at the
same time. Even if the receiver is not ready the sender can send
the message and it can be stored by the nodes temporarily.
3. The transfer of messages is possible even when the transfer rates of the sender and receiver are different.
4. It reduces congestion due to its store and forward property. Any
node can be used to store the message until the next resources
are available to transfer the data.

Disadvantages of Message Switching


1. The store and forward method causes a delay at each node. So, the primary disadvantage of message switching is that it cannot be used for real-time applications like voice or video calls.
2. As the message can be of any length so each node must have
sufficient buffer to store the message.

Packet Switching
Packet switching is a message switching technique in which the data
is divided into packets. These packets contain a header that contains
the information of the destination. The packets find the route with
the help of this information.

The biggest packet-switched network is the internet.


A packet contains header and payload. The header contains the
routing information and the payload contains the data to be
transferred. The packet switching is also based on the Store and
Forward method. Each packet contains the source and destination
information, so they can travel independently in the network. The packets belonging to the same file may take different paths depending upon the availability of paths. At the destination, these packets are re-assembled. It is the responsibility of the receiver to re-arrange the received packets into the original data.

Example: Suppose the data to be sent is divided into three packets, i.e. 1, 2, 3. Now, these packets travel independently in the network. The intermediate nodes forward the packets according to the availability of the channel. At the receiver's side, the order of the packets can be different. It is the duty of the receiver to re-arrange the received packets.
The path taken by the packet 1 is S → F → A → D → E →R.
Similarly, the path taken by the packet 2 is S → F → A → B → E →
R. Similarly, the path taken by the packet 3 is S → F → C → D → E
→ R.

Advantages of Packet Switching


1. Packets have a fixed maximum size, so the switching devices do not require large secondary storage. The storage problem of message switching is removed here.
2. It is more efficient for data transmission as it doesn't require
any dedicated paths.
3. If the link is busy or not available, the packets can be re-routed.
This ensures a reliable connection.
4. The same channel can be used by many users simultaneously.
5. With improved protocols, packet switching is used for
applications like Skype, WhatsApp, etc.
Disadvantages of Packet Switching
1. They cannot be used for applications that cannot afford delays
like high-quality voice calls.
2. Protocols used in packet-switching are complex and require
higher implementation costs.
3. If the network is overloaded then the packets may be lost or
delayed. This may lead to loss of critical information.
4. Sorting of received packets is required at the receiver's side.
Packet switching was originally designed to overcome the weaknesses of circuit switching. Circuit switching is inefficient for sending small messages. Also, the analog (continuous) circuit makes it prone to noise and errors.

Can you connect two computers for file sharing without using a hub or router?

Suppose you have to share files between the computers at your home. So we want to connect these computers and share our files easily. This can be done in many ways. The first way to connect them is with a hub or router. But why spend money on new hardware and do complex network settings when this can easily be done in an inexpensive way with minimal configuration? Yes, you heard it right. We can connect two computers for file sharing without using a hub or router. But how can this be done? In this blog, we will learn this. So, let's get started.

We can connect two computers for file sharing with the help of only one cable. This can be done using a commonly available Ethernet crossover cable. All you need to do is assign the two computers as default gateways for each other. A default gateway is the path used by a computer to send data when it does not know a specific route to the destination. We will see how this is done in Windows as we go through this blog.


So, we will now see step by step how this is done.

1. Connect one end of the cable to the network adapter of the first
computer and the other end of the cable to the network adapter
of the second computer.
2. We need to perform steps 2 to 12 on the first computer. Open the Control Panel on the first computer.
3. Click on Network Sharing Center.
4. Click on Change Advanced Sharing Settings.
5. Now you will see an All Networks Option. Expand this by
clicking the side arrow.
6. When the All Networks option expands, find the Public Folder Sharing option and tick the option: Turn on sharing so anyone with network access can read and write files in the Public folders.

7. In the Password protected sharing option, tick the option: Turn off password protected sharing.
8. Click on Save changes after this. You will go back to the Network and Sharing Center. Here you can now view your active connections. In the section named View your Active Networks, you will see a network whose connection is shown as Ethernet.

9. Double-click on this connection, i.e. Ethernet. Now, click on Properties in this pop-up box.
10. A pop-up box will open again. Go to the Networking tab of this pop-up box. Tick the option: Internet Protocol Version 4 (TCP/IPv4).

11. Double-click on Internet Protocol Version 4 (TCP/IPv4). A pop-up box will open again. Now we will assign an IP address to the first computer. Select the option: Use the following IP address.

12. Fill in the IP address as 192.168.1.1, the Subnet Mask as 255.255.255.0, and the default gateway as 192.168.1.2. Choose the option: Use the following DNS server. In the preferred DNS server, fill in 8.8.8.8.
13. Now, open the second computer and repeat steps 3 to 12 as above. The only change is in the IP addresses filled in for step 12, as given in the next step.

14. Fill in the IP address as 192.168.1.2, the Subnet Mask as 255.255.255.0, and the default gateway as 192.168.1.1. Choose the option: Use the following DNS server. In the preferred DNS server, fill in 8.8.8.8. Here we are making the first computer the default gateway for the second computer, just as in step 12 we made the second computer act as the default gateway for the first computer.
15. Click on Ok and Close. Your systems are ready for file sharing.

16. You can now see the connected computer. Go to My Computer and click on Network. Refresh, and you will see the names of the first and the second computer, which are connected. Now, you can share any file, audio, or video that you want.
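Once both machines have their static addresses, any file-sharing mechanism will work over the crossover cable. As a quick optional test (this sketch is ours and is not part of the Windows procedure above), a file can even be pushed with a few lines of Python sockets, assuming the IP addresses configured in steps 12 and 14 and an arbitrarily chosen free port:

# --- receiver.py : run on the computer configured as 192.168.1.2 ---
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("192.168.1.2", 5001))           # IP from step 14; port 5001 is an arbitrary choice
    srv.listen(1)
    conn, _ = srv.accept()
    with conn, open("received_file.bin", "wb") as out:
        while chunk := conn.recv(4096):        # read until the sender closes the connection
            out.write(chunk)

# --- sender.py : run on the computer configured as 192.168.1.1 ---
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("192.168.1.2", 5001))
    with open("my_file.bin", "rb") as f:
        cli.sendfile(f)                        # streams the whole file over TCP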

So we can connect two computers without using any hubs or routers. Hope you learned something new today.
WHAT ARE GATEWAYS?
In our day-to-day life, we use the Internet on our computers, which may be connected to a LAN. The Internet is an example of a WAN; it may implement other protocols or have a different architecture from a LAN. Have you ever wondered how this is possible?

In such cases, there is a need for a network connecting device known as a Gateway. Gateways are the most intelligent and highly configurable network connecting devices. A gateway operates in all the layers of the OSI (Open System Interconnection) model. So in this blog, we'll learn about Gateways in detail. We'll also see the working, features, advantages, and disadvantages of using a Gateway.
Gateways
Gateway is a network connecting device that can be used to
connect two devices in two different networks implementing
different networking protocols and overall network
architecture. In other words, a gateway is a node on a network that
serves as an entrance to another network.

A Gateway is the most intelligent device among the network connecting devices. Intelligent in terms of its working, error control, data packet routing, transmission speed, etc. It is a combination of both hardware and software components.

One of the main features of using a gateway is that we can have routing controls for different networks through gateways. This way, the traffic flow in the transmission channels of different networks can be easily controlled by gateways.

A gateway operates on all the layers of the OSI model, so it can be used as a one-stop solution for all kinds of network device connectivity. But the major disadvantage of using a gateway is its implementation cost. So, it will not be very effective for small networks or for a single network. Also, the implementation of gateways is very complex. These things constrain the wide use of gateways for small purposes.

A Gateway is also called a 'Protocol Converter' because it can convert the data packets as per the destination network's protocol requirements. It can also translate the data format as per the destination's needs or architecture. A gateway is used either at the starting point or the endpoint of a network. It is an intelligent device that can be used to connect a local node with an external node having a completely different structure (protocols/architecture/languages/data formatting structures). In other words, a gateway acts as a 'gate' between two networks and enables traffic to flow in and out of the network.

Gateways are often associated with both Routers and Switches. A Router routes the data packets (arriving at the gateway) to the correct node in the destination network, while a switch specifies the actual path of the data in and out of the gateway. In general, a gateway expands the router's functionality by performing data translation and protocol conversion. For example, a router can act as a gateway in a single network. In similar ways, Switches, Servers, Firewalls, etc. can also be used as a gateway at different places, as per the needs.

A default gateway passes traffic from the local subnet to devices on other subnets. In other words, a default gateway connects a local network to the internet or to some other network. Actually, each network has an internal default gateway in order to connect its devices to a dissimilar network. A gateway can also have multiple NICs (a chip that allows a node to communicate with another computer on a network) connected to it. Unlike routers, it does not support dynamic routing. It mostly uses a packet switching technique to transmit data from one network to another. A gateway mainly works on IP (Internet Protocol) addresses for dissimilar network communication. It has control over both the collision (inside a network) and the broadcast (outside the networks) domain. It can also encapsulate and decapsulate the data packets when sending and receiving them, respectively.

For Example, Electronic Mail Gateway(X.400), etc.


Need: To establish an intelligent communication between two dissimilar
networks.
Working: When a data packet arrives at the gateway, it first checks the header information. After checking the destination IP address and the data packet for any kind of errors, it performs data translation and protocol conversion of the data packet as per the destination network's needs. Finally, it forwards the data packet to the destination IP address by setting up a specific transmission path for the packet.
Following are the advantages of using a Gateway:

1. It can connect the devices of two different networks having dissimilar structures.
2. It is an intelligent device with filtering capabilities.
3. It has control over both collisions as well as a broadcast
domain.
4. It uses a full-duplex mode of communication.
5. It has the fastest data transmission speed amongst all network
connecting devices.
6. It can perform data translation and protocol conversion of the
data packet as per the destination network's need.
7. It can encapsulate and decapsulate the data packets.
8. It provides better security than any other network connecting device.
Following are the disadvantages of using a Gateway:

1. It is complex to design and implement.


2. The implementation cost is very high.
3. It requires a special system administration configuration.
What is the difference between Unicasting, Anycasting, Multicasting, and
Broadcasting?

In computer networks, when we have to send a message to other nodes, we first think of the audience who will be receiving this message. The message is intended either for a single node, a group of nodes, or all nodes, as per the needs. Due to this, we use various network traffic or transmission types. These types are classified according to the receiver. The four network transmission types are as follows:

1. Unicasting
2. Anycasting
3. Multicasting
4. Broadcasting
Now, we'll learn these transmission types one by one in detail.

1. Unicasting
Unicasting is the most commonly used data transmission type on the
internet. In Unicasting, the data traffic flows from a single
source node to a single destination node on the network. It is a
'one-to-one' type of data transmission between the sender and
receiver. In other words, we can say that a single station is sending
information to another station on the network.
It can be best implemented in computer-to-computer or server-to-
server or client-to-server kind of communications. SMTP(Simple
Mail Transfer Protocol) protocol can be used for unicasting an email
on the internet. Similarly, FTP(File Transfer Protocol) can be used
for unicasting a particular file from one computer to another on the
network. Some other protocols like HTTP (HyperText Transfer Protocol), Telnet, etc. can also be used for unicasting on the network.

The scope of unicasting is within the whole network. One-to-one communication maintains the privacy of the information between the two devices. The major tasks that can be performed using unicasting are web surfing and file transfer.
Following are the advantages of using Unicasting:

1. It provides dedicated point-to-point communication between devices.
2. It maintains data privacy, as it is shared only with a single
destination.
Following are the disadvantages of using Unicasting:

1. It is not efficient if we have to send the same message to multiple devices.
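As a small illustration of one-to-one delivery (a sketch of ours, not tied to any particular protocol mentioned above), sending a UDP datagram to a single destination address is a unicast; the address and port below are documentation/example values:

import socket

# Unicast: one sender, exactly one destination address.
DEST = ("192.0.2.10", 9999)       # example/documentation address, assumed for illustration

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello, single receiver", DEST)
sock.close()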

2. Anycasting
Anycast is a one-to-nearest kind of transmission in which a single source sends a message to the nearest destination (among multiple possible destinations). It has explicit support in IPv6 addressing; IPv4 has no dedicated anycast address type. In anycasting, a single IPv6 address is assigned to multiple devices in the network.
Anycasting is mainly used by Routers. The anycast address is an address that can be assigned to a group of devices on the network (mostly routers). All the devices in the group share the same anycast address, but the data is received by only one device, the one nearest to the source.

In anycasting, any data sent to the anycast address is forwarded to the nearest device having the destination anycast address. The router decides the nearest device with the help of the routing table. The nearest device is determined based on the number of hops, distance, efficiency, latency, and cost.

For example, if we search for something on the internet, the request is fulfilled by the nearest source using anycasting. In anycast, the data is delivered to only one destination, chosen based on its distance from the source. In other words, the traffic is received by the nearest receiver amongst multiple receivers having the same IP and anycast address. Protocols like '6to4', etc. can be used for anycasting data packets in the network. The scope of anycasting is within the whole network.

Following are the advantages of using Anycasting:

1. It provides efficient communication to the nearest device on the network.
2. It maintains data privacy, as it is shared only with a single
destination.
Following are the disadvantages of using Anycasting:

1. It creates ambiguity in the network.


2. There is an extra overhead of finding the nearest device for
anycasting.

3. Multicasting
Multicast is a kind of transmission type in which a single source
communicates a message to a group of devices. It is a kind of
one-to-multiple transmission. All the devices which are interested in
receiving the messages will have to first join the multicast group.

Multicasting is used in an IP multicast group in the network. The IP multicast group consists of all the devices which are interested in receiving the multicast traffic. The source need not be a member of that group. Multicasting is always done using a single source. Also, a multicast address can never be the source address.
Multicasting uses a class-D type of address(to connect multiple
destination nodes for multicasting). If a sender multicasts some data
on a destination address, all the devices that are connected to that
destination IP Multicast group will receive that data. The IPv6
address uses a prefix 'FF00::/8' for multicasting the messages.

Multicasting acts as a middle ground between unicasting and broadcasting. A common frame is shared with a group of interested devices. Due to this, the communication channel is used efficiently in multicasting. It is very specialized but complex to implement. IGMP (Internet Group Management Protocol) is mainly used in multicasting. It is widely used for multimedia delivery and stock exchanges.

There are mainly two types of multicast addresses; they are as follows:

1. Well-Known: These are predefined multicast addresses used for groups such as all nodes and all routers. 'FF02::1' is used for multicasting messages to all the nodes, while 'FF02::2' is used for multicasting messages to all routers in the group.
2. Solicited Node: A solicited-node address is valid within the network. Generally, each IPv6 interface has one such address. It is mainly used in the Neighbour Discovery Protocol (which gathers the network's communication and configuration information).
Following are the advantages of using Multicasting:

1. Messages will be delivered only to the interested nodes in the multicast group.
2. There is an efficient use of the communication channel.
Following are the disadvantages of using Multicasting:

1. There is an extra overhead of forming and managing multicast groups.
2. It is complex to implement.
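Here is a minimal Python sketch of IPv4 multicasting: a receiver joins a multicast group with IP_ADD_MEMBERSHIP (the OS handles the IGMP membership report), and a sender transmits to the group without being a member. The group address 224.1.1.1 and the port are arbitrary illustrative choices:

import socket
import struct

GROUP, PORT = "224.1.1.1", 5007     # class-D group address and port (assumed for illustration)

# Receiver: join the multicast group, then read datagrams sent to it.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(2.0)                  # avoid blocking forever if loopback is disabled

# Sender: any host can transmit to the group without being a member.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep it on the local network
tx.sendto(b"hello, multicast group", (GROUP, PORT))

print(rx.recvfrom(1024))            # the joined receiver sees the datagram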

4. Broadcasting
Broadcasting is a transmission type in which the data traffic
flows from a single source to all the devices on the network. It
sends the information to every device at once. The same data is
received by everyone, making it efficient for spreading a message to all nodes. Broadcasting is an IPv4-specific data transmission type.

In broadcasting, every node has a look at the sent data and information in the network. DHCP (Dynamic Host Configuration Protocol) is a common use of broadcasting: for example, when a device joins a network, it broadcasts a request to all devices asking for the DHCP server. Broadcasting has a limited domain; messages can be broadcast only within the broadcast domain, which is a local subnet.

A hub generally performs broadcasting in a network. Broadcast is an IPv4 type of communication, where the message from a single sender can be heard by all other devices on the same broadcast network. It is mainly used when we do not have any specific destination address and we want to spread the message widely. For this, a special code in the address field is used to broadcast the messages. It cannot be implemented using IPv6 addressing. It mostly induces unnecessary traffic on the communication channel. It is mainly used for broadcasting router updates and ARP (Address Resolution Protocol) requests.

There are mainly two types of broadcasting; they are:

1. Limited Broadcast: It is used to send or broadcast messages to all nodes in the same network. '255.255.255.255' is the destination address used for a limited broadcast.
2. Directed Broadcast: It is used to broadcast messages to all nodes of another network. For a directed broadcast, the host portion of the destination address is set to all 1's (for example, the suffix '255.255.255' in a network with an 8-bit network prefix).
Following are the advantages of using Broadcasting:

1. Messages will be delivered to all the nodes in the broadcast domain at the same time.
2. It is efficient for the situation, where we want to share the
messages with all other nodes.
3. It is simple to implement.
Following are the disadvantages of using Broadcasting:
1. It can not be implemented using IPv6 addressing.
2. In most of the cases, there is unnecessary traffic in the
communication channel.
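Before moving on, here is a short Python sketch of a limited broadcast on the local subnet; SO_BROADCAST must be enabled before the socket is allowed to send to 255.255.255.255, and the port number is an arbitrary illustrative choice:

import socket

PORT = 37020                                     # arbitrary illustrative port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)   # allow broadcast sends
sock.sendto(b"hello, everyone on this subnet", ("255.255.255.255", PORT))
sock.close()

# Any host on the same broadcast domain can receive it with, for example:
#   rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   rx.bind(("", 37020)); print(rx.recvfrom(1024))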

What is the OSI model and how it works?

Whenever you connect two devices, either on the same or on different networks, a question may arise in your mind regarding the connectivity of the devices. The two devices may have different architectures and implement different protocols, so how can they connect and share information with each other?

Actually, they need a standardized model that can be implemented by both to establish a connection between them. There are also some standardized protocols that they can implement to connect.

So, in this blog, we will learn about a widely accepted standardized model, i.e., the OSI (Open System Interconnection) model. We will also learn the mechanism by which two devices are connected using this model. We'll also focus on the different layers of this model along with their functionalities.

OSI Model
The OSI model is a layered framework that allows communication between all types of computer systems. It has seven layers. The OSI model was introduced by ISO (International Organization for Standardization) in 1984. Each layer has its own functionalities and calls upon the services of the layer just below it. These layers are a package of protocols that are implemented by computers to connect in the network. In other words, the OSI model defines, and is used to understand, how two computers connect with each other in a computer network.

The seven layers of the OSI model, are as follows:

1. Physical Layer
2. Data-Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
We will learn about these layers and their functionalities one by one.

1. Physical Layer
The Physical Layer is the lowest layer of the OSI model and it
deals with data in the form of bits or signals. The type of signal
being generated depends upon the transmission medium. For
example, if we are using copper wire or LAN cable, the output signal
will be an electrical signal. Likewise, the output signal will be a light
signal for optical fibre cable, and radio signal for air as a
transmission medium.

At the sender's side, the physical layer gets the data from the upper layer, converts it into bitstreams (0's and 1's), and sends it through a physical channel. At the receiver's side, it converts the received signals back into bitstreams, which are passed to the data-link layer to be assembled into frames.

Following are the functionalities of a physical layer:

1. It defines the transmission media between two connecting devices.
2. It also specifies the data rate (number of bits sent each second) over the defined media.
3. It defines the topology of the network. The topology may be Bus, Ring, Star, Mesh, Tree, or Hybrid.
4. It defines the data transmission mode. It can be Simplex, Half-Duplex, or Full-Duplex.
5. It defines the type of data encoding used in the transmission.
6. It defines the line configuration of the network. It can be point-to-point or multipoint.
2. Data-Link Layer
The Data-Link Layer is the second layer of the OSI model. It
performs the physical addressing of data. Physical addressing is
the process of adding the physical(MAC) address to the data.
MAC(Media Access Control) Address is a 48-bit alpha-numeric
number that is embedded in NIC(Network Interface Card) by the
manufacturer. In other words, the data-link layer is embedded as
software in the NIC which provides a means for data transfer from
one computer to another via a local media. Thus, the data-link layer
facilitates the transmission of data within the same network only.

The source and destination MAC addresses are included in the data
header file by the data-link layer. At the sender's side, it receives the
data in the form of packets from the network layer and converts it
into smaller forms, called the data frame. At the receiver's side, it
converts the data frame into packets for the network layer.

Following are the main functionalities of a data-link layer:

1. Allows media access using framing: It allows the upper layers to access the media using framing, as it performs physical addressing of the data.
2. Controls data: It performs flow, error, and access control of
the data. It controls the data rate of the transmission to control
the data flow. It uses the header information or checksum bits
to control the error. Most importantly, it performs access
control of the data using the MAC address.
3. Network Layer
The Network layer is the third layer of the OSI model. It mainly
performs the transmission of data from one computer to
another in different networks. This layer may not be so beneficial
if we are transmitting the data in the same network. The network
layer performs logical addressing(IP addressing) of the
data. The source and destination IP addresses are included in the
data header file by the network layer. The data is in the form of
packets in this layer.

At the sender side, the network layer breaks the data segments
received from the upper layer into smaller units, called data packets.
Similarly, at the receiver's side, it reassembles the data packets into
segments for the upper layer, i.e., the transport layer. Routers are
mainly used in the network layer for routing purposes. Some of the
protocols that are mostly used in this layer are OSPF(Open Shortest
Path First), BGP(Border Gateway Protocol), IS-IS(Intermediate
System to Intermediate System), etc.

Following are the main functionalities of a network layer:

1. Logical Addressing: Every computer in a network has a unique IP (Internet Protocol) address. The network layer attaches the
source and destination IP address to the data so that it can be
transmitted even in different networks. Internet Protocol
Version 4(IPv4) and Internet Protocol Version 6(IPv6)
addressing are used by the network layer for logical addressing.
2. Routing: Routing is a process through which the data packets
can travel from one node to another in a computer network. In
the network layer, the routing decisions are mainly based on IP
addresses or logical addressing.
3. Path Determination: Path determination is the process of
selecting a path from various available paths based on the
routing information. Path determination is done by the
network layer for finding the most optimum path for data
transmission.

4. Transport Layer
The Transport layer is the fourth layer of the OSI model. It is
mainly responsible for the process-to-process delivery of the
data. It performs flow and error control in the data for its
proper transmission. The transport layer controls the reliability of
communication through various functionalities.

At the sender's side, the transport layer receives the data from the
upper layer and performs segmentation. The source and destination
port numbers are also included in the header file of the data before
forwarding it to the network layer. At the receiver's side, the
transport layer performs the reassembly and sequencing of data. It
reads the port number of the data from the header file, and then
direct it towards the proper application.

Following are the main functionalities of a transport layer:

1. Segmentation: Dividing the received data into multiple data segments is termed segmentation. The transport layer performs segmentation at the sender's side and reassembly at the receiver's side. Each segment has a source and destination 'port' number and a 'sequence' number. The port number helps to direct each data segment to the correct application, while the sequence number keeps the segments in the correct order when the segmented data is received at the receiver's side.
2. Flow Control: The transport layer controls the flow of the data
being transmitted. It is mainly done to avoid any data loss and
enhance data transmission efficiency.
3. Error Control: The transport layer checks for any kind of
errors in the data using the checksum bits that are present in
the data header. It can also request for retransmission of some
data if it is not received at the receiver's end.
4. Connection Control: The transport layer also maintains the connection between the devices in a proper way. For connection-oriented transmission, TCP (Transmission Control Protocol) is used. TCP is slower but reliable, and it is preferred when the data must arrive complete and in order. For connection-less transmission, UDP (User Datagram Protocol) is used. UDP is fast but not reliable, and it is mainly preferred for latency-sensitive, short transmissions. A minimal socket sketch of this difference follows this list.
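The connection-oriented versus connection-less distinction shows up directly in the socket API. The following is a minimal, self-contained Python sketch (the loopback address and port are illustrative assumptions): TCP performs a handshake before any data flows, while UDP just sends the datagram.

import socket
import threading

ADDR = ("127.0.0.1", 6000)        # illustrative loopback address and port (assumed free)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(ADDR)
srv.listen(1)                     # server is ready before the client connects

def accept_one():
    conn, _ = srv.accept()        # completes the TCP three-way handshake
    with conn:
        print("TCP received:", conn.recv(1024))
    srv.close()

threading.Thread(target=accept_one).start()

# TCP (connection-oriented): a connection is established before data flows.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect(ADDR)
    tcp.sendall(b"reliable, ordered bytes")

# UDP (connection-less): the datagram is sent with no handshake at all.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"fast, best-effort datagram", ADDR)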

5. Session Layer
The Session layer is the fifth layer of the OSI model. It mainly
helps in setting up, closing and managing the connection in the
network. Actually, whenever two devices get connected, a session is
created, which is terminated as soon as the connection is no longer
required. The termination of the session is important to avoid the
unnecessary wastage of resources. In other words, the session layer
performs session management.

The session layer enables the devices to send and receive the data by
establishing connections and also terminates the connection after
the data transfer. It mainly performs authentication and
authorization for establishing a secure connection in the network.

Following are the main functionalities of a session layer:

1. Authentication: Authentication is the process of verifying the user. The session layer may ask the devices to enter valid login
credentials, so as to maintain a secure data connection.
2. Authorization: Authorization is the process of determining
the user's authority to access the data. The session layer
determines whether the device has permission to access those
data elements or not.
3. Synchronization: The session layer synchronizes the sender
and receiver. It adds various checkpoints with the data to
synchronize data at the sender's and receiver's side. In case of
any crash or transfer failure, the data transmission can be
resumed from the last checkpoint. There is no need to
retransfer the whole data.

6. Presentation Layer
The Presentation layer is the sixth layer of the OSI model. It
mainly performs data translation, encryption & decryption, and
compression in the network. The presentation layer deals with the
syntax and semantics of the information exchanged between two
systems.

At the sender's side, it receives the data from the application layer and performs data encryption and compression on it. At the receiver's side, it receives the data from the session layer and performs data translation, decryption, and decompression.
Following are the main functionalities of a presentation layer:

1. Data Translation: Data translation refers to transforming data from one form to the other. The presentation layer transforms
the high-level user language data to the equivalent low-level
machine-level language, and vice versa. Some of the standards
used by this layer for translation are ASCII, EBCDIC, etc.
2. Data Encryption and Decryption: Data encryption is the
process of converting a plain text into cypher text for security.
Encryption is applied to the data at the sender's side. Data
decryption is the process of converting a ciphertext into plain
text. It is applied to the data at the receiver's side. The
presentation layer uses the SSL(Secure Socket Layer) for data
encryption and decryption.
3. Data Compression: Data compression is the process of
reducing the number of bits in the data. It can either be lossy or
lossless in nature. Lossless compression is mostly preferred for
some important data items.

7. Application Layer
The Application layer is the topmost layer of the OSI model. This layer is used by network applications. It mainly acts as an interface between the user and the
network services. The Application layer provides services for
network applications with the help of protocols. Some of the most
widely used application layer protocols are HTTP, HTTPS, FTP, NFS,
DHCP, FMTP, SNMP, SMTP, Telnet, etc.

Following are the main functionalities of an application layer:


1. File Transfer: The Application layer mainly facilitates the file
transfer between two network devices with the help of FTP(File
Transfer Protocol).
2. Web Surfing: Web surfing is possible only in the application
layer. Some protocols like HTTP(Hypertext Transfer Protocol),
HTTPs(Hypertext Transfer Protocol Secure), etc. enables web
surfing.
3. Emails: Electronic-mails can be sent from one device to
another on the network only through the application layer.
Some protocols like SMTP(Simple Mail Transfer Protocol), etc.
are used for sending emails over the network.
4. Network Virtual Terminal: The Application layer facilitates
the remote host login in the network with the help of protocols
like Telnet, etc. It can also be referred to as the software
version of the physical terminal in the network.

What is the TCP/IP model and how it works?

The TCP/IP reference model is a layered model developed by the Advanced Research Projects Agency (ARPA, later DARPA) of the United States as part of its research projects in the late 1960s and 1970s. Initially, it was developed to be used by defense only, but later on it became widely accepted. The main purpose of this model is to connect two remote machines for the exchange of information. These machines can be operating in different networks or have different architectures.

In the early days, the TCP/IP reference model had four layers: Application, Transport, Internet, and Network Access. These layers are quite similar to the layers of the OSI model. The Application layer in the TCP/IP model has approximately the same functionality as the upper three layers (Application, Presentation, and Session layers) of the OSI model. Also, the Internet layer acts as the Network layer, and the Network Access layer acts as the lower two layers (Physical and Data-Link layers) of the OSI model. The TCP/IP network model is named after its two main protocols (TCP and IP) and is widely used in the current internet architecture. But nowadays, we generally use a five-layer TCP/IP model.
In the five-layer model, the Physical and Data-Link layers together act as the Network Access layer of the earlier four-layer TCP/IP model. This five-layer TCP/IP model is the one currently in use. So, in this blog, we'll learn about the five-layer TCP/IP reference model. We'll also see the key features of this model and the functionalities of its five layers.

The key features of the TCP/IP model are as follows:

1. Supports flexible architecture: We can connect two devices with totally different architectures using the TCP/IP model.
2. End-node verification: The end-nodes(source and
destination) can be verified, and connection can be made for
the safe and successful transmission of data.
3. Dynamic Routing: The TCP/IP model facilitates dynamic
routing of data packets through the shortest and safest available
path. Due to dynamic routing, the path taken by a data packet
cannot be predicted in advance, and this adds to data security.
There are also some demerits of using the TCP/IP model, these are
as follows:

1. Replacing a protocol is not easy.


2. The roles and functionalities of each layer are not documented
and specified as clearly as they are in the OSI model.
Following are the five layers of the TCP/IP model:

1. Physical Layer
2. Data-Link Layer
3. Internet Layer
4. Transport Layer
5. Application Layer
Now, we will learn about the functionalities of these layers one-by-
one in detail.

1. Physical Layer
The Physical Layer is the lowest layer of the TCP/IP model. It deals
with data in the form of bits. This layer mainly handles the node-to-node
transmission of raw bits over the physical link. It defines the transmission
medium and mode of communication between two devices. The medium can be wired
or wireless, and the mode can be simplex, half-duplex, or full-duplex.
It also specifies the line configuration(point-to-point or multipoint),
data rate(number of bits sent each second), and topology of the network.
There are no specific protocols that are used in this layer. The functionality
of the physical layer varies from network to network.

2. Data-Link Layer
The Data-Link Layer is the second layer of the TCP/IP model. It deals
with data in the form of data frames. It mainly performs data framing,
in which it adds header information to the data packets for their
successful delivery to the correct destinations. For this, it performs
physical addressing of the data packets by adding the source and the
destination address to them.

The data-link layer facilitates the delivery of frames within the same
network. It also facilitates the flow and error control of the data
frames. The flow of the data can be controlled through the data rate.
Also, the errors in the data transmission and faulty data frames can
be detected and retransmitted using the checksum bits in the header
information.

3. Internet Layer
The Internet layer of the TCP/IP model is approximately the same as
the Network layer of the OSI model. It deals with data in the form of
datagrams or data packets. This layer mainly performs the logical
addressing of the data packets by adding the IP(Internet Protocol)
address to it. The IP addressing can be done either by using the
Internet Protocol Version 4(IPv4) or Internet Protocol Version
6(IPv6).
The Internet layer also performs routing of data packets using the IP
addresses. The data packets can be sent from one network to another
using the routers in this layer. This layer also performs the
sequencing of the data packets at the receiver's end. In other words,
it defines the various protocols for logical transmission of data
within the same or different network. The protocols that are used in
the Internet layer are IP(Internet Protocol), ICMP(Internet Control
Message Protocol), IGMP(Internet Group Management Protocol),
ARP(Address Resolution Protocol), RARP(Reverse Address
Resolution Protocol), etc.

4. Transport Layer
The Transport layer is the fourth layer of the TCP/IP model. It deals
with data in the form of data segments. It mainly performs
segmentation of the data received from the upper layers. It is
responsible for transporting data and setting up communication
between the application layer and the lower layers. This layer
facilitates the end-to-end communication and error-free delivery of
the data. It also facilitates flow control by specifying data rates. The
transport layer is used for process-to-process communication with
the help of the port number of the source and the destination.

The Transport layer provides these services using the following two protocols:

1. TCP: TCP stands for Transmission Control Protocol. It is a


connection-oriented protocol. It performs sequencing and
segmentation of data. It also performs flow and error control in
data transmission. There is an acknowledgement feature in TCP
for the received data. It is a slow but reliable protocol. It is
suitable for important and non-real time data items.
2. UDP: UDP stands for User Datagram Protocol. It is a
connection-less protocol. It does not perform flow and error
control in data transmission. There is no acknowledgement
feature in UDP for the received data. It is a fast but unreliable
protocol. It is suitable for real-time data items.
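As a small illustration, here is a minimal Python sketch (port 8080 is an arbitrary choice): an application chooses between these two protocols when it creates its socket, with SOCK_STREAM selecting TCP and SOCK_DGRAM selecting UDP, while the port number identifies the process on each host for process-to-process communication.

import socket

# TCP socket: connection-oriented, reliable, ordered delivery.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP socket: connection-less, no acknowledgements, faster but unreliable.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# The port number identifies the sending/receiving process on a host.
# TCP and UDP port spaces are independent, so both can bind to 8080.
tcp_sock.bind(("0.0.0.0", 8080))
udp_sock.bind(("0.0.0.0", 8080))

tcp_sock.close()
udp_sock.close()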

5. Application Layer
The Application layer in the TCP/IP model is equivalent to the upper
three layers(Application, Presentation, and Session Layer) of the OSI
model. It deals with the communication of the whole data message.
The Application layer provides an interface between the network
services and the application programs. It mainly provides services to
the end-users to work over the network. For Example, file transfer,
web browsing, etc. This layer uses all the higher-level protocols like
HTTP, HTTPS, FTP, NFS, DHCP, FMTP, SNMP, SMTP, Telnet, etc.

The application layer helps in setting up and managing the network


connections. It also handles user authentication and checks the program's
authorization to access the data. It also performs some complex operations
like data translation, encryption and decryption, and data
compression. The application layer synchronizes the data at the
sender's and the receiver's end. In other words, it is the topmost
layer and defines the interface for application programs with
transport layer services.

What is a TCP 3-way handshake process?

Nowadays we mainly use TCP(Transmission Control Protocol) for data


transmission in a connection-oriented network. But have you ever
wondered, why we prefer TCP over any other protocol for this
purpose?
Actually, TCP provides us with a secure and reliable connection link
between two devices. And, this is possible only due to the 3-way
handshake process that takes place in the TCP during establishing
and closing connections between two devices. As the name suggests,
there are three steps for both establishing and closing the
connection. So in this blog, we'll learn about the TCP 3-way
handshake process and the different steps involved in it.

TCP 3-Way Handshake Process


The 3-Way Handshake process is the defined set of steps that
takes place in the TCP for creating a secure and reliable
communication link and also closing it. Actually, TCP uses the 3-
way handshake process to establish a connection between two
devices before transmitting the data. After the establishment of the
connection, the data transfer takes place between the devices. After
which the connection needs to be terminated, which is also done by
using the 3-way handshake process. The secure and reliable
connection is established to reserve the CPU, buffer, and bandwidth
of the devices to communicate properly. Thus, it is a must to free
these resources by terminating the connection after data
transmission. Hence, the TCP 3-way handshake process can be used
to establish and terminate connections in the network in a secure
way.

Below is the pictorial representation of the TCP header.


There are a few fields in the TCP header that are used in
the 3-way handshake process; they are:

1. Sequence Number: The sequence number is a 32-bit number (in
the range of 0 to 2^32 - 1) that identifies the position of the data in
the stream. The initial sequence number is chosen at random for each
connection; subsequent segments in the same connection carry
increasing sequence numbers.
2. Acknowledgement Number: It is the next sequence number
that the acknowledgement sending device expects from the
sender. It is generally, 1 greater than the sequence number
received from the sender.
3. Window Size: Window size is the buffer size. It is the capacity
up to which data can be received in the buffer.
4. Maximum Segment Size: It is the maximum acceptable size of
each data segment by the connected device. Above this size,
the device will not be able to receive the data segments.
5. SYN Flag: SYN stands for synchronization. It can be described
as a request for establishing a connection. If SYN is 1, it means
that the device wants to establish a secure connection, else not.
6. ACK Flag: ACK stands for acknowledgement. It can be
described as the response of SYN. If ACK is 1, the device has
received the SYN message and acknowledges it, else not.
7. FIN Flag: FIN stands for Finished. After the data transmission
has been completed, devices have to terminate the connection
using the FIN flag. If FIN is 1, the device wants to terminate the
connection, else not.
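To get a feel for how these fields sit inside the header, here is a rough Python sketch (purely illustrative, with made-up values) that packs the fixed 20-byte part of a TCP header: ports, sequence number, acknowledgement number, flags, and window size. The maximum segment size is carried separately as a TCP option and is not included in this sketch.

import struct

SYN, ACK, FIN = 0x02, 0x10, 0x01       # bit positions of the flags

seq_num = 1000                          # example initial sequence number
ack_num = 0                             # nothing acknowledged yet
data_offset = 5 << 4                    # header length: 5 x 32-bit words, no options
flags = SYN                             # request to establish a connection
window = 2000                           # receiver buffer (window) size in bytes

# Fixed TCP header: src port, dst port, seq, ack, offset, flags,
# window, checksum, urgent pointer (checksum left as 0 in this sketch).
header = struct.pack("!HHIIBBHHH",
                     54321, 80,         # example source and destination ports
                     seq_num, ack_num,
                     data_offset, flags,
                     window, 0, 0)

print(len(header), "bytes:", header.hex())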
Below is the pictorial representation of the connection
establishment using the 3-way handshake process.
Following are the three steps involved in establishing the
connection using the 3-way handshake process in TCP:

1. The client sends the SYN to the server: When the client
wants to connect to the server, it sets the 'SYN' flag to 1 and
sends the message to the server. The message also carries
additional information like the sequence number(a random
32-bit number), the ACK flag (set here to 0), the window size, and
the maximum segment size. For Example, if the window size is
2000 bits, and the maximum segment size is 200 bits then a
maximum of 10 data segments (2000/200 = 10) can be
transmitted in the connection.
2. The server replies with the SYN and the ACK to the
client: After receiving the client's synchronization request, the
server sends an acknowledgement to the client by setting the ACK
flag to '1'. The acknowledgement number of the ACK is one
more than the received sequence number. For example, if the
client has sent the SYN with sequence number = 1000, then the
server will send the ACK with acknowledgement number =
1001. Also, the server sets the SYN flag to '1' and sends it to
the client, if the server also wants to establish the connection.
The sequence number used here for the SYN will be different
from the client's SYN. The server also advertises its window
size and maximum segment size to the client. After completion
of this step, the connection is established from the client to the
server-side.
3. The client sends the ACK to the server: After receiving the
SYN from the server, the client sets the ACK flag to '1' and
sends it to the server with an acknowledgement number 1 greater
than the server's SYN sequence number. Here, the SYN flag
is kept '0'. After completion of this step, the connection is now
established from the server to the client-side also. Once the
connection is established, the smaller of the sender's and
receiver's maximum segment sizes is used for data transmission.
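In everyday programming, the operating system's TCP stack performs this handshake for us. The minimal Python sketch below (illustrative only; 127.0.0.1 and port 9000 are arbitrary choices) shows that the SYN, SYN-ACK, and ACK are exchanged inside connect() and accept(), while closing the sockets later triggers the FIN exchange described next.

import socket, threading, time

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 9000))
    srv.listen(1)
    conn, addr = srv.accept()           # completes the 3-way handshake on the server side
    conn.sendall(b"hello")
    conn.close()                        # begins connection termination (FIN)
    srv.close()

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                         # give the server a moment to start listening

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9000))     # SYN, SYN-ACK, ACK happen here
print(client.recv(1024))                # b'hello'
client.close()                          # client side of the FIN exchange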
Below is the pictorial representation of the connection termination
using the 3-way handshake process.
Following are the three steps involved in terminating the connection
using the 3-way handshake process in TCP:

1. The client sends the FIN to the server: When the client
wants to terminate the connection, it sets the FIN flag to '1'
and sends the message to the server with a random sequence
number. Here, the ACK is set to 0.
2. The server replies with the FIN and the ACK to the
client: After receiving the client's termination request, the
server sends an acknowledgement to the client by setting the ACK
flag to '1'. The acknowledgement number of the ACK is one
more than the received sequence number. For example, if the
client has sent the FIN with sequence number = 1000, then the
server will send the ACK with acknowledgement number =
1001. Also, the server sets the FIN flag to '1' and sends it to
the client, if the server also wants to terminate the connection.
The sequence number used here for the FIN will be different
from the client's FIN. After completion of this step, the
connection is terminated from the client to the server-side.
3. The client sends the ACK to the server: After receiving the
FIN from the server, the client sets the ACK flag to '1' and sends
it to the server with an acknowledgement number 1 greater than the
server's FIN sequence number. Here, the FIN flag is kept
'0'. After completion of this step, the connection is now
terminated from the server to the client-side also.

Which model is better, OSI or TCP/IP?

Whenever we implement a network and try to connect and communicate different
devices over it, we use either the OSI or the TCP/IP reference model. But a
question always arises in our mind: which model is better for specification,
implementation, connection, communication, etc.?

It has rightly been said that each coin has two faces. Likewise, we
can not say that one model is the best and the other is the worst.
Both of them have some advantages and disadvantages as well. One
model can work fine for one case and poorly for another. So in
this blog, we'll take some major points which are essential in a
network connection and communication, and then evaluate which
model is better for which case.

First, we will see the various similarities between the OSI and
TCP/IP models. The similarities between them are as follows:

1. Reference Model: Both the OSI as well as the TCP/IP are


reference models. This means that we can take reference or help from the
specifications of these two models while implementing a network.
2. Layered Architecture: Both the OSI and TCP/IP model have a
layered architecture. Each layer provides different
functionalities in the network. The OSI model has generally 7
layers, while the TCP/IP has 5 layers.
3. Protocols: Both the OSI and the TCP/IP model make use
of different protocols in different layers for the proper
implementation of the model over the network.
4. Functionalities: The layers of the OSI and the TCP/IP model
provide approximately the same functionality. The
Application layer of the TCP/IP model acts as the upper three
layers(Application, Presentation, and Session layer) of the OSI
model, while the Internet layer in the TCP/IP model acts as the
Network layer of the OSI model. The rest of the layers in both
models work the same.
Now, we will see which model is better in which case, and we'll also
see the dissimilarities between the models.

Following are the dissimilarities between the OSI and the TCP/IP
model:

1. Evolution: The OSI model evolved as a logical and conceptual


model. It was documented first, with the functionalities of each
layer specified, and afterwards the protocols for each layer were
identified. On the other hand, the TCP/IP model was
implemented first with the specified protocols and then
documented. Hence, the OSI model evolved as a theoretical
model, while the TCP/IP as a practical model. So, if someone
just needs the theoretical aspects of the model, they should go
with the OSI model. But if someone wants to practically
implement the model, they should go with the TCP/IP model.
2. Objective: The objective of the OSI model is to come up with a
generic standard model for specifying the connection
procedures, layered architecture, services, interfaces, and
protocols. On the other hand, the TCP/IP model aims to
provide a reliable and end-to-end transmission model. So, if
someone needs a generic and standard model, they should go
for the OSI model. But if someone needs reliability and security
over the network, they should go for the TCP/IP model.
3. Area Focused: The OSI model is a generic model, and hence
universal in nature. It can be used accordingly in different
types of networks as per the specifications. On the other hand,
the TCP/IP model is dependent on protocols and is compatible
with the current Internet architecture. Thus, the TCP/IP model
is able to solve only a specific set of problems. So, if someone
needs a universal model that can be applied to different
networks, they should choose the OSI model. But if they have
to perform some network functionalities on the Internet, they
should choose the TCP/IP reference model.
4. Documentation: The OSI model is documented properly. The
three major concepts, i.e., services, interfaces, and protocols
are clearly specified in this model. On the other hand, the
TCP/IP model is not properly documented. The specifications
and functionalities of each layer are not so clear in the TCP/IP
model. So, if someone needs proper documentation and
guidance during implementing the network, they should refer
to the OSI model.
5. Set-up and Configuration: The OSI model is easy and
standardized to set-up and configure, as it is a generic model.
On the other hand, the TCP/IP model is complex to set-up and
configure, as it is compatible with only specific domains of
networks. So, the OSI model is better if we consider the
network set-up and configuration functionality.
6. Modularity: Both models are modular in nature. But the OSI
model has more layers(7) as compared to the TCP/IP model(5
layers). Hence, the OSI model is more modular than the TCP/IP
model, and the functionalities of each module are clearly
specified in the OSI model. So, if someone is focusing on a
more modular network with properly specified functionalities,
they should go for the OSI model.
7. Replacing Protocols: The OSI model is a protocol-
independent model. We can implement our own protocols as
per our needs. On the other hand, the TCP/IP model is protocol
dependent. It defines a specific set of protocols for
implementing the model. It is very complex to make any
changes or replace some protocols in the TCP/IP model. So, if
someone just needs the specified set of protocols, they should
go with the TCP/IP model, else the OSI model is better for
implementing our own protocols.
8. Data Delivery: Data delivery is the functionality of the
Transport layer in both models. In the OSI model, the transport
layer facilitates the connection-oriented transfer and hence it
guarantees the delivery of packets. On the other hand, in the
TCP/IP model, the transport layer facilitates both connection-
oriented as well as connectionless transfer, and hence it does
not always guarantee the delivery of data packets. So, we can use the
OSI model if we want to guarantee proper data delivery
over the network.
9. Reliable and Secure Connection: The OSI model does not
have any special mechanism for providing a reliable and secure
connection for data transmission. On the other hand, the
TCP/IP model has a 3-way handshake mechanism for providing
a reliable and secure connection link over the network. So, we
can opt for the TCP/IP model if we want a reliable and secure
network connection.
Thus, we can conclude that both models have their own advantages
and disadvantages. If someone is focusing on the proper
documentation, specification, and modularization, they should
prefer the OSI model over the TCP/IP model. But if someone is
focusing more on the implementation, reliability, and security of the
network, they should prefer the TCP/IP model over the OSI model.

What is the difference between TCP and UDP?

The Transport Layer in the OSI model and the TCP/IP


model implements two protocols for transmitting data: TCP(Transmission
Control Protocol) and UDP(User Datagram Protocol). Either of these
protocols can be used in the transport layer as per the needs.

In this blog, we will briefly learn about the TCP and UDP protocols,
and the dissimilarities between these protocols. We'll also see which
protocol can be opted to implement in which case.

At first, we'll introduce these two protocols in brief.

TCP(Transmission Control Protocol)


TCP is a transport layer connection-oriented protocol. It ensures
reliable connection and secure data transmission between the
connecting devices over the network. It first establishes a secure
connection and then transmits the data. TCP transmits the data from
one device to the other as a byte stream, split into segments. It is
comparatively slow in data transmission but has better functionalities like
flow control, error control, and congestion control in the network. The TCP
header is 20-60 bytes long and thus carries various pieces of information
that enhance reliability, but the overhead is increased. Due to its
reliability, protocols like HTTP, FTP, etc. use TCP for proper data
transmission over the network.

UDP(User Datagram Protocol)


UDP is a transport layer connection-less protocol. It ensures the fast
transmission of data between the connecting devices over the
network. There is no overhead of establishing, maintaining, and
terminating a connection in UDP. It is mainly used to transmit real-
time data where we can not afford any transmission delays. UDP
transmits the data from one device to the other in the form of
continuous data streams. The UDP header is of fixed size, i.e., 8
bytes. It is unreliable in nature but faster in speed. Due to its
transmission speed, protocols like DNS, DHCP, RIP, etc. use UDP for
proper data transmission over the network.

Now, let us learn about the dissimilarities between them.

Following are dissimilarities between the TCP and UDP protocol:

1. Connection setup: TCP is a connection-oriented protocol
while UDP is a connection-less protocol. Devices using TCP
first establish a logical (virtual) connection between themselves,
and the data transmission takes place over that established
connection only. In UDP, no such connection is set up between
the communicating devices, and datagrams are simply sent
without any prior agreement.
2. Data Unit: TCP is stream-oriented. It delivers data as a
continuous stream of bytes with no message boundaries,
splitting it into segments as needed. UDP, on the other hand,
is message-oriented: it transmits data as distinct, self-contained
datagrams, and each datagram is delivered (or lost) as a whole.
3. Data Delivery and Retransmission: TCP guarantees the
successful delivery of data as the receiver sends the
acknowledgement after receiving the data. If the
acknowledgement is negative(when data is lost), TCP
retransmits the data. On the other hand, UDP does not ensure
the successful delivery of the data, as there is no
acknowledgement mechanism in UDP. Thus, UDP does not
retransmit the lost data packets.
4. Transmission Speed and Efficiency: TCP spends time on
connection setup, acknowledgements, and retransmissions, while
UDP simply sends datagrams without any of this overhead. Thus,
UDP is faster than TCP. Also, TCP is heavy-weight, while
UDP is light-weight in nature. Hence, UDP can transmit
data faster and more efficiently than TCP.
5. Reliability: TCP is reliable in nature. It ensures reliability by
providing a secure and reliable data transmission link with the
help of the 3-way handshake process. The data transmission
takes place in TCP only after the establishment of a reliable
link. On the other hand, there is no such mechanism in UDP,
and thus it is unreliable in nature.
6. Flow Control: TCP takes care of the receiver's window size to
check how much data it can accept and with what speed. It uses
the sliding window protocols to do so. Thus, the TCP
implements the flow control mechanism. On the other hand,
UDP lacks a flow control mechanism, which sometimes leads
to loss of data.
7. Error Control: Both TCP and UDP make use of the checksum
bits(present in the header information of data) for the error
control mechanism. The error control mechanism ensures the
errorless transmission of the data. There is a major
dissimilarity in the error control mechanism of TCP and UDP.
In TCP, error checking is mandatory and uses a 16-bit checksum
field. On the other hand, error checking is optional in UDP
(over IPv4): the checksum field can be set to all zeros to
skip it.
8. Congestion Control: TCP takes care of the capacity of the
network through which the data is sent. It does so by using
algorithms like AIMD(Additive Increase/Multiplicative
Decrease), etc. There may be a delay in data transmission when
the network is congested. Thus, TCP implements the
congestion control mechanism ensuring no loss of data. On the
other hand, there is no congestion control mechanism in UDP.
9. Overhead: TCP is heavy-weight in nature. There is an
overhead of establishing a secure and reliable connection
before data transmission. There are also overheads of sending
and receiving acknowledgements, and retransmissions of data
in case of losses in TCP. On the other hand, UDP is light-weight
in nature. There is no overhead of establishing, maintaining,
and terminating a connection in UDP.
10. Transmission Types: TCP is a point-to-point protocol, so it
supports only unicast (one-to-one) transmission between the two
connected devices. On the other hand, UDP is not tied to a
single connection, so in addition to unicasting, UDP is mainly
used for Multicasting and Broadcasting.
11. Protocols Implementing TCP/UDP: Due to reliability,
protocols like HTTP, FTP, SMTP, etc. use TCP for proper data
transmission over the network. On the other hand, due to the
transmission speed, protocols like DNS, DHCP, RIP, etc. use
UDP for proper data transmission over the network.
12. Application: TCP is applied when we need reliability and
security in data transmission. Also, when data loss cannot be
tolerated. TCP is also used for long-distance transmission of
non-real-time data. On the other hand, UDP is applied where
we need a good transmission speed, and we cannot afford delay
in transmission. It is mainly used for transmission of real-time
data over short distances.
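As a quick contrast with the TCP sketch shown earlier, here is a minimal UDP exchange in Python (illustrative only; localhost and port 9999 are arbitrary). There is no connection setup and no acknowledgement, so if a datagram is lost, it is simply lost.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"real-time sample", ("127.0.0.1", 9999))   # no handshake, no ACK

data, addr = receiver.recvfrom(1024)    # each datagram arrives as one message
print(data, "received from", addr)

sender.close()
receiver.close()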

What happens when you type a URL in the web browser?

Ever wondered what happens when you type a URL in the browser? It
is a commonly asked question in technical interviews. In this blog,
we will see what happens in the background, step by step when we
type any URL. So, let's get started.

1. You enter the URL in the browser.


Suppose you want to visit the website of AfterAcademy. So you
type afteracademy.com in the address bar of your browser. When you
type any URL you basically want to reach the server where the
website is hosted.

2. The browser looks for the IP address of the


domain name in the DNS(Domain Name System).
DNS is a directory of domain names and their corresponding IP addresses, just
like a telephone book has phone numbers corresponding to people's names. We
could access a website directly by typing its IP address, but imagine
remembering a group of numbers to visit every website. So, we only remember
the name of the website, and the mapping of the name to the IP address is
done by the DNS.

The DNS checks at the following places for the IP address.

1. Check Browser Cache: The browser maintains a cache of the


DNS records for some fixed amount of time. It is the first place
to run a DNS query.
2. Check OS Cache: If the browser cache doesn't contain the record,
then the browser requests it from the underlying Operating System,
as the OS also maintains a cache of the DNS records.
3. Router Cache: If your computer doesn't have the cache, then it
searches the routers as routers also have the cache of the DNS
records.
4. ISP(Internet Service Provider) Cache: If the IP address is not
found at the above three places then it is searched at the cache
that ISP maintains of the DNS records. If not found here also,
then ISP’s DNS recursive search is done. In "DNS recursive
search", a DNS server initiates a DNS query that communicates
with several other DNS servers to find the IP address.
So, the domain name which you entered gets converted into an IP address.
Suppose the above-entered domain name afteracademy.com has an IP address
100.95.224.1. So, if we type https://100.95.224.1 in the browser, we can
reach the website.
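This name-to-address lookup is also available directly to programs; here is a tiny Python sketch (the actual address returned depends on the live DNS records):

import socket

# Ask the resolver (browser/OS/router/ISP caches, then DNS servers)
# for the IP address behind a domain name.
ip = socket.gethostbyname("afteracademy.com")
print(ip)   # the address returned by the live DNS records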

3. The Browser initiates a TCP connection with


the server.
When the browser receives the IP address, it builds a connection
between the browser and the server using a transport protocol. The
most commonly used protocol is TCP. The connection is
established using a three-way handshake. It is a three-step process.

1. Step 1 (SYN): As the client wants to establish a connection, it
sends a SYN(Synchronize Sequence Number) packet to the server,
which informs the server that the client wants to start
communication.
2. Step 2 (SYN + ACK): If the server is ready to accept
connections and has open ports, then it acknowledges the
packet sent by the client with a SYN-ACK packet.
3. Step 3 (ACK): In the last step, the client acknowledges the
response of the server by sending an ACK packet. Hence, a
reliable connection is established and data transmission can
start now.

4. The browser sends an HTTP request to the


server.
The browser sends a GET request to the server asking
for afteracademy.com webpage. It will also send the cookies that the
browser has for this domain. Cookies are designed for websites to
remember stateful information (items in the shopping cart or
wishlist for a website like Amazon) or to record the user’s browsing
history, etc. The request also has additional information like the request
header fields(e.g., User-Agent) that allow the client to pass information
about the request, and about the client itself, to the server. Other
header fields, like the Accept-Language header, tell the server
which language the client is able to understand. All these header
fields are added together to form an HTTP request.
Sample Example of HTTP Request: Now let’s put it all together to
form an HTTP request. The HTTP request below will
fetch the abc.htm page from the web server running on
afteracademy.com.

GET /abc.htm HTTP/1.1


User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: www.afteracademy.com
Accept-Language: en-us
Accept-Encoding: gzip, deflate
Connection: Keep-Alive

5. The server handles the incoming request and


sends an HTTP response.
The server handles the HTTP request and sends a response. The first
line is called the status line. A status line consists of the protocol
version(e.g. HTTP/1.1) followed by a numeric status code(e.g. 200) and
its associated textual phrase(e.g. OK). The status code is important as
it contains the status of the response.

1. 1xx: Informational: It means the request was received and the


process is continuing.
2. 2xx: Success: It means the action was successful.
3. 3xx: Redirection: It means further action must be taken in
order to complete the request. It may redirect the client to
some other URL.
4. 4xx: Client Error: It means there is some sort of error on the
client's part.
5. 5xx: Server Error: It means there is some error on the server-
side.
It also contains response header fields like Server, Location, etc.
These header fields give information about the server. A Content-
Length header is a number denoting the exact byte length of the
HTTP body. All these headers along with some additional
information are added to form an HTTP response.

Sample Example of HTTP Response: Now let’s put it all together


to form an HTTP response for a request to fetch the abc.htm page
from the web server running on afteracademy.com.

HTTP/1.1 200 OK
Date: Tue, 28 Jan 2020 12:28:53 GMT
Server: Apache/2.2.14 (Win32)
Last-Modified: Wed, 22 Jul 2019 19:15:56 GMT
Content-Length: 88
Content-Type: text/html
Connection: Closed

6. The browser displays the HTML content.


Now the browser gets the response and the HTML web page is
rendered in phases. First, it gets the HTML structure and then it
sends multiple GET requests to get the embedded resources: images, CSS,
JavaScript files, and so on. The web page is then rendered,
and in this case, the afteracademy web page will be displayed.

All these steps happen each time we type a URL in the browser.

What are Protocols and what are the key elements of protocols?

In the era of Computer and Mobile technologies, computer network


technology is growing at a very fast pace. Billions of electronic devices and
gadgets are operating to make this happen. These devices are designed and
manufactured by different manufacturers and may have been developed using
different hardware and software resources. Due to this, on their own they
would be unable to establish a connection and communicate with each other for
sharing data and other information. Hence, to resolve this problem, we need
protocols. Protocols provide us with a medium and set of rules
to establish communication between different devices for the
exchange of data and other services.
Protocols are needed in every field like society, science &
technology, Data Communication, media, etc. But in this blog, we’ll
mainly concentrate on the protocols used in computer networks and
data communication. We'll further focus on the types, key elements,
and functionalities of protocols. So, let's get started with the basics
of protocols.

Protocols
Protocols are a fundamental aspect of digital communication as they
dictate how to format, transmit and receive data. They are a set of
rules that determines how the data will be transmitted over the
network.

It can also be defined as a communication standard followed by the


two key parties(sender and receiver) in a computer network to
communicate with each other.

It specifies what type of data can be transmitted, what commands


are used to send and receive data, and how data transfers are
confirmed.

In simple terms, a protocol is similar to a language. Every language


has its own rules and vocabulary. Protocols have their own rules,
specifications, and implementations. If two people share the same
language, they can communicate very easily and effectively.
Similarly, two hosts implementing the same protocol can connect
and communicate easily with each other. Hence, protocols provide a
common language for network devices participating in data
communication.
Protocols are developed by industry-wide organizations. The ARPA
(Advanced Research Projects Agency), part of the US Defense program,
was one of the first organizations to introduce the concept of a standardized
protocol. Support for network protocols can be built into the
software, hardware, or both. All network end-users rely on network
protocols for connectivity.

Protocols use a specific model for their implementation, like the OSI
(Open Systems Interconnection) Model, the TCP/IP (Transmission Control
Protocol / Internet Protocol) Model, etc. There are different layers
(for instance, the data-link, network, transport, and application layers)
in these models, where these protocols are implemented.

Combining all these, we can say that protocol is an agreement


between a sender and a receiver, which states how communication
will be established, maintained, and released. It governs the
communication between entities in different systems, where an entity
can be a user application program, a file transfer package, a DBMS, etc.,
and a system can be a remote computer, a sensor, etc.

Levels of a Protocol
There are mainly three levels of a protocol, they are as follows:

1. Hardware Level: In this level, the protocol enables the


hardware devices to connect and communicate with each other
for various purposes.
2. Software Level: In the software level, the protocol enables
different software to connect and communicate with each other
to work collaboratively.
3. Application Level: In this level, the protocol enables the
application programs to connect and communicate with each
other for various purposes.
Hence protocols can be implemented at the hardware, software, and
application levels.

Types of Protocols
Protocols can be broadly divided into the following two types:

1. Standard Protocols
2. Proprietary Protocols
Let's learn one by one:

Standard Protocols
A standard protocol is a mandated protocol for all devices. It
supports multiple devices and acts as a standard.

Standard protocols are not vendor-specific i.e. they are not specific
to a particular company or organization. They are developed by a
group of experts from different organizations.

These protocols are publicly available, and we need not pay for them.

Some of the examples of Standard Protocols are FTP, DNS, DHCP,


SMTP, TELNET, TFTP, etc.

Proprietary Protocols
Proprietary protocols are developed by an individual organization for
their specific devices. We have to take permission from the
organization if we want to use their protocols.

It is not a standard protocol and it supports only specific devices. We


may have to pay for these protocols.

Some of the examples of Proprietary Protocols are iMessage, AppleTalk, etc.

Key Elements of protocols


The key elements of a protocol determine what is to be
communicated, how it is communicated, and when it is
communicated.

There are mainly three key elements of a protocol, they are as


follows:

1. Syntax
2. Semantics
3. Timing
Let's learn these elements in detail.

Syntax
Syntax refers to the structure or format of data and signal levels. It
indicates how to read the data in the form of bits or fields. It also
decides the order in which the data is presented to the receiver.
Example: A protocol might expect the size of a data packet to be 16 bits,
in which the first 4 bits are the sender's address, the next 4 bits are the
receiver's address, the next 4 bits are the checksum bits, and the last
4 bits contain the message. So, every communication that follows that
protocol should send 16-bit data.
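A tiny Python sketch of this hypothetical 16-bit format (purely illustrative, following the field sizes given in the example above). Both sender and receiver must agree on this layout, which is exactly what the syntax element of a protocol specifies.

import struct

def pack_packet(sender, receiver, checksum, message):
    # Each field is 4 bits wide, packed together into one 16-bit value.
    value = (sender << 12) | (receiver << 8) | (checksum << 4) | message
    return struct.pack("!H", value)

def unpack_packet(data):
    (value,) = struct.unpack("!H", data)
    return (value >> 12) & 0xF, (value >> 8) & 0xF, (value >> 4) & 0xF, value & 0xF

pkt = pack_packet(sender=3, receiver=7, checksum=0xA, message=0x5)
print(pkt.hex(), unpack_packet(pkt))    # 37a5 (3, 7, 10, 5)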

Semantics
Semantics refers to the interpretation or meaning of each section of
bits or fields. It specifies which field defines what action. It defines
how a particular section of bits or pattern can be interpreted, and
what action needs to be taken. It includes control information for
coordination and error handling.

Example: It interprets whether the bits of address identify the route


to be taken or the final destination of the message or something else.

Timing
Timing refers to two characteristics:

1. When should the data be sent?
2. At what speed should the data be sent and received?
It performs speed matching, sequencing and flow control of the data
items.

Example: If a sender sends data at a speed of 100 Mbps but the
receiver can consume it only at a speed of 20 Mbps, then there may
be data losses or the packets might get dropped. So, proper
synchronization must be there between the sender and the receiver.
Functions of protocols
Following are the main functionalities of a protocol:

• Data Sequencing: It mainly refers to dividing the data into packets,
i.e., the whole data is divided into a number of packets.
• Data Flow: It mainly deals with sending data to the correct
destination, i.e., checking whether the flow of the data is correct or not.
• Data Routing: It refers to selecting the best path for data
transmission between a sender and a receiver, because there can
be many routes from sender to receiver and the best possible route
should be selected.
• Encapsulation: It refers to the process of taking the data unit of one
protocol and carrying it inside the data unit of another protocol.
• Segmentation & Reassembly: It deals with segmenting the
data message, i.e., dividing the data into packets when data flows
from an upper protocol layer to a lower one. Reassembly is the reverse
of segmentation, i.e., all the segmented packets are recollected in the
correct order at the receiver's side.
• Connection Control: It ensures connection-oriented data
transfer for lengthy data items.
• Multiplexing: It allows combining multiple transmission unit
signals or channels of higher-level protocols in one
transmission unit of a lower-level protocol. Multiplexing can be
upward or downward.
• Ordered Delivery: Protocol facilitates ordered delivery of
data, by providing a unique sequence number to each data
packet. It is the function of the sender to maintain ordered
delivery. By doing so, the receiver will receive the data in the
same order as sent by the sender.
• Transmission Services: It mainly deals with priority, Quality
of Service (QoS), and security of data packets.
• Addressing: It mainly deals with addressing levels, addressing
scope, communication identifiers, and addressing modes.
• Flow Control: It limits the rate at which data flows. It is the
function of the receiver's end to maintain flow control of data.
• Error Control: It deals with error detection (using the
checksum bits) and its control. If any error is detected during
the transmission of the data, a request for retransmission of
data is sent to the sender by the receiver, and the corrupt data
packet is discarded.

What is Stop and Wait protocol?

While sending data from the sender to the receiver, the flow of data needs
to be controlled. In a situation where the sender is sending data at a rate
higher than the receiver is able to receive and process it, the data will
get lost. Flow-control methods help in ensuring that the data doesn't get
lost: they keep a check that the sender sends data only at a rate the
receiver is able to receive and process. There are mainly two ways in which
this can be achieved, i.e.,
using Stop-and-wait protocol or sliding window protocol. In this
blog, we are going to learn about the Stop-and-wait protocol. So,
let’s get started.

Stop and Wait Protocol


It is the simplest flow control method. In this, the sender will send
one frame at a time to the receiver. The sender will stop and
wait for the acknowledgment from the receiver. This time(i.e. the
time between message sending and acknowledgement receiving) is
the waiting time for the sender and the sender is totally idle during
this time. When the sender gets the acknowledgment(ACK), then it
will send the next data packet to the receiver and wait for the
acknowledgment again and this process will continue as long as the
sender has the data to send. This can be understood by the diagram
below:

The above diagram explains the normal operation in a stop-and-wait


protocol. Now, we will see some situations where the data or
acknowledgment is lost and how the stop-and-wait protocol
responds to it.

Situation 1
Suppose if any frame sent is not received by the receiver and is lost.
So the receiver will not send any acknowledgment as it has not
received any frame. Also, the sender will not send the next frame as
it will wait for the acknowledgment for the previous frame which it
had sent. So a deadlock situation arises here. To avoid any such
situation there is a time-out timer. The sender waits for this fixed
amount of time for the acknowledgment and if the acknowledgment
is not received then it will send the frame again.

Situation 2
Consider a situation where the receiver has received the data and
sent the acknowledgment but the ACK is lost. So, again the sender
might wait till infinite time if there is no system of time-out timer.
So, in this case also, the time-out timer will be used and the sender
will wait for a fixed amount of time for the acknowledgment and
then send the frame again if the acknowledgement is not received.
There are two types of delays while sending these frames:

• Transmission Delay: Time taken by the sender to send all the


bits of the frame onto the wire is called transmission delay.
This is calculated by dividing the data size(D) which has to be
sent by the bandwidth(B) of the link.
Td = D / B

• Propagation Delay: Time taken by the last bit of the frame to


reach from one side to the other side is called propagation
delay. It is calculated by dividing the distance between the
sender and receiver by the wave propagation speed.
Tp = d / s ; where d = distance between sender and receiver, s = wave
propagation speed
The propagation delay for sending the data frame and the
acknowledgment frame is the same as distance and speed will
remain the same for both frames. Hence, the total time required to
send a frame is:

Total time= Td(Transmission Delay) + Tp(Propagation Delay for data


frame) + Tp(Propagation Delay for acknowledgment frame)
Total time=Td+ 2Tp
The sender is doing useful work only for the Td time, and for the remaining
2Tp time the sender is waiting for the acknowledgment.

Efficiency
Efficiency = Useful Time/ Total Time
η = Td / (Td+2*Tp)
η = 1/(1+2a) →(1)
where a=Tp / Td

Throughput
Throughput is the number of bits that the receiver accepts over the total
time duration (i.e., transmission time(Td) + 2 * propagation delay(Tp)). It
is also called the effective bandwidth or bandwidth utilization.

In Stop and Wait, in the total duration, the receiver can accept only
one frame. One frame is of data size D i.e. D bits in one frame.

Therefore, Throughput= D / (Td + 2Tp)

Throughput = D / Td(1+2a) →(2)

where a= Tp / Td

From the definition of transmission delay,

Td = D/B

Rearranging, we get

B = D/Td → (3)

Now putting the value of equation 3 in equation 2, we get,

Throughput= B /(1+2a) → (4)


Now, putting the value of equation 1 in equation 4, we get,

Throughput= η * B
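Here is a small worked example in Python (all the numbers are assumed, chosen only to show the formulas in action): a 1000-bit frame on a 1 Mbps link, with the sender and receiver 20 km apart and a propagation speed of 2 x 10^8 m/s.

D = 1000          # frame size in bits (assumed)
B = 1_000_000     # bandwidth in bits per second (assumed)
d = 20_000        # distance in metres (assumed)
s = 2 * 10**8     # propagation speed in metres per second (assumed)

Td = D / B        # transmission delay = 0.001 s
Tp = d / s        # propagation delay  = 0.0001 s
a = Tp / Td       # = 0.1

efficiency = 1 / (1 + 2 * a)    # = Td / (Td + 2*Tp), about 0.833
throughput = efficiency * B     # about 833,333 bits per second

print(f"Td={Td}s Tp={Tp}s efficiency={efficiency:.3f} throughput={throughput:.0f} bps")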

Advantages of Stop and Wait Protocol


1. It is very simple to implement.
2. The main advantage of this protocol is its accuracy. The next
frame is sent only after the previous frame is acknowledged, so no
frame goes undelivered without the sender detecting it and
retransmitting.

Disadvantages of Stop and Wait Protocol


1. We can send only one packet at a time.
2. If the distance between the sender and the receiver is large
then the propagation delay would be more than the
transmission delay. Hence, efficiency would become very low.
3. After every transmission, the sender has to wait for the
acknowledgment and this time will increase the total
transmission time. This makes the transmission process slow.

What is FTP and how does an FTP work?

Have you ever downloaded a new version of Firefox or any other


application? If so, then you have probably used FTP(File Transfer
Protocol) without even knowing it. Through today's browsers, we can
download files via FTP from within the browser window, which is very easy
and convenient. But there is not much flexibility when downloading files
from the browser, and we cannot upload files at all. So, we can use FTP
clients for transferring files. In this blog, we will learn how FTP works.

FTP
File Transfer Protocol is a standard network protocol that networked
computers use to transfer files over the internet. In simpler terms, it is a
way to connect two computers and move files between them. FTP allows new web
pages created by an individual to show up on the internet: it allows the web
page files to be transferred to the server so that others can access them.

Using an FTP client, we can upload, download, delete, move, rename, and copy
files on a server. When you send a file through FTP, the file is either
uploaded to or downloaded from the FTP server. When you upload files, you
transfer them from your personal computer to the server, and when you
download a file, you transfer it from the server to your personal computer.

How does File Transfer Protocol work?


FTP is a client-server protocol and it relies on two communication
channels between the client and the server.

1. Control Connection: The FTP client, for example, FileZilla or


FileZilla Pro sends a connection request usually to server port
number 21. This is the control connection. It is used for
sending and receiving commands and responses. Typically a
user needs to log on to the FTP server for establishing the
connection but there are some servers that make all their
content available without login. These servers are known
as anonymous FTP.
2. Data Connection: For transferring the files and folder we use a
separate connection called data connection.

This connection can be established in two ways:

• Active Mode: In this mode, the client connects from a random
port(random port 1) on the FTP client to port 21 of the server. It sends
the PORT command, which tells the server what port of the client it
should connect to, i.e., random port 2. The server then connects from its
port 20 to the port the client has designated, i.e., random port 2. Once
the connection is established, the data transfer takes place through these
client and server ports.
• Passive Mode: In situations where the client cannot accept incoming
connections, such as when it is blocked by a firewall, passive mode has to
be used. This is the most common mode nowadays because the client is usually
behind a firewall(e.g., the built-in Windows Firewall). In this mode, the
client connects from a random port(random port 1) on the FTP client to port
21 of the server. It sends the PASV command, and the server's reply tells
the client what port of the server it should connect to, i.e., random port 3,
for establishing the data connection. The client then connects from another
random port(random port 2) to the port the server has designated, i.e.,
random port 3. Once the connection is established, the data transfers take
place through these client and server ports.
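A hedged sketch of driving an FTP client from Python's standard ftplib module (the host ftp.example.com, the anonymous login, and the file name readme.txt are placeholders; a real server address and credentials would be needed). Passive mode, shown here and the default in ftplib, corresponds to the passive data connection described above.

from ftplib import FTP

ftp = FTP("ftp.example.com")       # control connection to port 21 (placeholder host)
ftp.login()                        # anonymous login; pass user/password otherwise
ftp.set_pasv(True)                 # passive mode: the client opens the data connection

ftp.retrlines("LIST")              # list the current directory over a data connection

# Download a file (the remote file name is hypothetical).
with open("readme.txt", "wb") as f:
    ftp.retrbinary("RETR readme.txt", f.write)

ftp.quit()                         # politely close the control connection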

Advantages of using FTP


1. It allows you to transfer multiple files and folders.
2. When the connection is lost then it has the ability to resume
the transfer.
3. There is no limitation on the size of the file to be transferred.
The browsers allow a transfer of only up to 2 GB.
4. Many FTP clients like FileZilla have the ability to schedule the
transfers.
5. The data transfer is faster than HTTP.
6. The items that are to be uploaded or downloaded can be added to a
transfer 'queue' by the FTP client.

Disadvantages of using FTP


1. FTP doesn't encrypt the traffic, so usernames, passwords, and
other data can easily be read by capturing the data packets, because
they are sent in cleartext during transfer. FTP is therefore
vulnerable to packet capture and other attacks.

What is DHCP and how does it work?

The host in any network can be assigned the IP address manually or


dynamically. In a small home network having 2 or 3 computers, we can assign
the IP addresses manually, but imagine a network having hundreds of computers
where you have to assign IP addresses to all of them. It can be a nightmare
for network administrators! No two hosts can have the same IP address, and
assigning IP addresses manually can lead to errors and confusion. So, to
resolve this
problem, DHCP is needed: it simplifies the assignment of IP addresses on a
network. So, let's learn more about
DHCP as we go through this blog.

DHCP
Dynamic Host Configuration Protocol is a network management
protocol that is used to dynamically assign the IP address and other
information to each host on the network so that they can
communicate efficiently. DHCP automates and centrally manages the assignment
of IP addresses, easing the work of the network administrator. In addition to
the IP address, DHCP also assigns the subnet mask, default gateway, domain
name server(DNS) address, and other configuration parameters to the host,
making the network administrator's task much easier.

Components of DHCP
1. DHCP Server: It is typically a server or a router that holds the
network configuration information.
2. DHCP Client: It is the endpoint that gets the configuration
information from the server like any computer or mobile.
3. DHCP Relay Agent: If you have only one DHCP server for
multiple LANs, then the DHCP relay agent present in each network
will forward the DHCP requests to the server. This is because DHCP
broadcast packets cannot travel across a router by themselves.
Hence, the relay agent is required so that the DHCP server can
handle requests from all the networks.
4. IP address pool: It contains the list of IP addresses that are
available for assignment to clients.
5. Subnet Mask: It tells the host which network it is currently in.
6. Lease Time: It is the amount of time for which the IP address
is available to the client. After this time the client must renew
the IP address.
7. Gateway Address: The gateway address lets the host know
where the gateway is to connect to the internet.
How does DHCP work?
DHCP works at the application layer to dynamically assign the IP address to
the client, and this happens through the exchange of a series of messages
called DHCP transactions or the DHCP conversation.

• DHCP Discovery: The DHCP client broadcasts messages to discover the DHCP
servers. The client computer sends a packet with the default broadcast
destination of 255.255.255.255, or the specific subnet broadcast address if
one is configured. 255.255.255.255 is a special broadcast address meaning
"this network": it lets you send a broadcast packet to the network you're
connected to.

• DHCP Offer: When the DHCP server receives the DHCP Discover message, it
suggests or offers an IP address(from the IP address pool) to the client by
sending a DHCP Offer message to the client. This DHCP Offer message contains
the proposed IP address for the DHCP client, the IP address of the server,
the MAC address of the client, the subnet mask, the default gateway, the DNS
address, and the lease information. For example, an offer might contain:
1. The proposed IP address for the DHCP client (here 192.168.1.11)
2. The subnet mask to identify the network (here 255.255.255.0)
3. The IP of the default gateway for the subnet (here 192.168.1.1)
4. The IP of the DNS server for name translations (here 8.8.8.8)

• DHCP Request: In most cases, the client can receive multiple DHCP Offers,
because a network may have several DHCP servers (which provides fault
tolerance): if the IP addressing of one server fails, the other servers can
provide backup. But the client will accept only one DHCP Offer. In response,
the client sends a DHCP Request, requesting the offered address from one of
the DHCP servers. All the other offered IP addresses from the remaining DHCP
servers are withdrawn and returned to the pool of available IP addresses.
• DHCP Acknowledgment: The server then sends
Acknowledgment to the client confirming the DHCP lease to
the client. The server may also send any other configuration parameters that
the client has asked for. At this step, the IP configuration is completed and
the client can use the new IP settings.
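For a feel of what the Discover message looks like on the wire, here is a rough Python sketch that builds a minimal DHCPDISCOVER packet and broadcasts it. It is an illustration only: the MAC address is made up, sending may require administrator privileges, and a real client would also bind UDP port 68 and parse the Offer that comes back.

import os, socket, struct

mac = bytes.fromhex("001122334455")          # made-up client MAC address
xid = os.urandom(4)                          # random transaction ID

packet = struct.pack("!BBBB", 1, 1, 6, 0)    # op=BOOTREQUEST, htype=Ethernet, hlen=6, hops=0
packet += xid
packet += struct.pack("!HH", 0, 0x8000)      # secs=0, broadcast flag set
packet += b"\x00" * 16                       # ciaddr, yiaddr, siaddr, giaddr (all 0.0.0.0)
packet += mac + b"\x00" * 10                 # chaddr padded to 16 bytes
packet += b"\x00" * 192                      # sname (64) + file (128), unused
packet += b"\x63\x82\x53\x63"                # DHCP magic cookie
packet += b"\x35\x01\x01"                    # option 53: DHCP message type = DISCOVER
packet += b"\xff"                            # end option
packet += b"\x00" * (300 - len(packet))      # pad to the traditional 300-byte minimum

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(packet, ("255.255.255.255", 67)) # broadcast so any DHCP server can answer
sock.close()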

Advantages of DHCP
1. It is easy to implement and automatic assignment of an IP
address means an accurate IP address.
2. The manual configuration of the IP address is not required.
Hence, it saves time and workload for the network
administrators.
3. Duplicate or invalid IP assignments do not occur, which means
there are no IP address conflicts.
4. It is a great benefit for mobile users as the new valid
configurations are automatically obtained when they change
their network.

Disadvantages of DHCP
1. As DHCP servers have no secure mechanism for authenticating clients,
any new client can join the network. This poses security risks, like
unauthorized clients being given IP addresses and IP address depletion
caused by unauthorized clients.
2. The DHCP server can be a single point of failure if the network
has only one DHCP server.

What is ARP and how does it work?


Suppose you want to access any website like google.com. The browser behind the scene will use
the application layer services such as HTTP for establishing the connection between two
systems. Now, the HTTP will get help from the transport layer including TCP (Transmission
Control Protocol) and add the information like Port number and the details regarding transport
layer protocol. Now, the network layer will add IP information. Network Layer will add the
source IP address and the destination IP address. How will the source computer know about the
destination IP address? The DNS will resolve the URL or name to the IP address. Now, this data
packet is handed down to layer 2 i.e. data link layer. In layer 2, the communication happens
mostly over the MAC address or physical address(MAC address is the permanent physical
address of the computer). So how in the world would the source computer know the destination
MAC address associated with that IP address? This is where ARP comes into the picture.
ARP helps in knowing the MAC address of the destination given the IP address. So, let's dive
deep into ARP and start the blog.

ARP
Address Resolution Protocol is one of the most important protocols of the network layer in the
OSI model. It helps in finding the MAC(Media Access Control) address given the IP address of a
system, i.e., the main duty of ARP is to convert a 32-bit IP address(for IPv4) into the
corresponding 48-bit MAC address.
How does ARP work?
• At the network layer when the source wants to find out the MAC address of the
destination device it first looks for the MAC address(Physical Address) in the ARP
cache or ARP table. If present there then it will use the MAC address from there for
communication. If you want to view your ARP cache(in Windows Operating System)
then open Command Prompt and type command —‘arp -a’ (without quotes). An ARP
table looks something like this.

• If the MAC address is not present in the ARP table then the source device will generate
an ARP Request message. In the request message the source puts its own MAC address,
its IP address, destination IP address and the destination MAC address is left blank since
the source is trying to find this.

Sender's MAC Address 00-11-0a-78-45-AD


Sender's IP Address 192.16.10.104

Target's MAC Address 00-00-00-00-00-00


Target's IP Address 192.16.20.204

• The source device will broadcast the ARP request message to the local network.
• The broadcast message is received by all the other devices in the LAN network. Now
each device will compare the IP address of the destination with its own IP address. If the
IP address of destination matches with the device's IP address then the device will send
an ARP Reply message. If the IP addresses do not match then the device will simply
drop the packet.
• The device whose IP address matches the destination IP address in the packet
replies by sending the ARP Reply message. This ARP Reply message contains
the MAC address of that device. The replying device also updates its own ARP table and stores
the MAC address of the source, as it will need to contact the source soon. For this
reply, the original source becomes the destination (target), and the ARP Reply message is sent.

Sender's MAC Address: 00-11-0a-78-45-AA
Sender's IP Address: 192.16.20.204
Target's MAC Address: 00-11-0a-78-45-AD
Target's IP Address: 192.16.10.104

• The ARP Reply message is unicast, not broadcast, because the device sending the reply
already knows the MAC address of the original sender (it was included in the ARP request).
• When the source receives the ARP Reply, it learns the destination MAC address and
updates its ARP cache. Now the packets can be sent, as the source knows the
destination MAC address. A minimal sketch of this request/reply exchange is shown below.
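The following is a minimal sketch (not a full implementation) of generating an ARP
request and reading the reply, using the third-party Scapy library; the target IP
address is just an example value, and root privileges are needed to send raw frames.

# A minimal sketch of the ARP request/reply exchange described above (Scapy).
from scapy.all import ARP, Ether, srp

target_ip = "192.16.20.204"                       # example destination IP

# Broadcast a "who-has" ARP request for target_ip on the local network.
request = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=target_ip)
answered, _ = srp(request, timeout=2, verbose=False)

for _, reply in answered:
    # reply.hwsrc is the MAC address filled in by the replying device.
    print(target_ip, "is at", reply.hwsrc)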

Sample Example
The ARP Request and Reply messages can be captured. The example below shows a
captured ARP Request message. You can see that the destination MAC
address is blank, i.e. 00:00:00:00:00:00.

The request message contains various other fields like

1. Hardware type: It specifies the type of hardware used while transmitting the ARP
message. Mostly the hardware type is Ethernet.
2. Protocol type: A number is assigned to each protocol; here IPv4 is used. IPv4 is 2048
(0x0800 in hexadecimal).
3. Protocol size: The length of an IPv4 address (here 4 bytes).
4. Opcode: It specifies the nature of the ARP message; 1 for ARP request and 2 for ARP
reply.
5. Source IP Address: here 10.10.10.2
6. Destination (Target) IP Address: here 10.10.10.1
7. Source MAC Address: here 00:1a:6b:6c:0c:cc
Below is a sample captured ARP Reply message. The reply contains the MAC
address which was asked for by the source. The MAC address 00:1d:09:f0:92:ab is sent in the
ARP Reply message.

Advantages of using ARP


1. MAC addresses can easily be known if we know the IP address.
2. End nodes do not need to be configured to “know” MAC
addresses. They can be found when required.

Disadvantages of using ARP


1. There may be ARP attacks like ARP Spoofing and ARP Denial
of Service. ARP Spoofing is a technique that allows an
attacker to attack an Ethernet network; it may lead to
sniffing of data frames on a switched Local Area Network, or
the attacker may stop the traffic altogether, which is known
as ARP Denial of Service.

What is RIP(Routing Information Protocol)?

Have you ever imagined how you can access servers in America
from India? How are they connected? Our systems are connected to
routers, which in turn are connected to many other routers, which
eventually are connected to the servers. So whenever we want to
access any server, the link between our computer and the server is
established through these routers. But how are the routers
selected so that the distance between our computer and the server is
minimum? This is what RIP does. It selects the shortest path
between the computer and the remote server. Now, let's get down to
the nitty-gritty of RIP and discuss it in more detail.

RIP
It is a distance-vector routing protocol that uses hop count as the
routing metric for finding the most suitable path between the source
and the destination. Now, let us understand the meaning of the terms
used in the definition of RIP.

Distance-Vector Routing Protocol

In a distance-vector routing protocol, the routers exchange network
reachability information with their nearest neighbours. They
exchange information about the set of destinations that they can
reach and the next-hop address to which the data packet should be
sent so that the data reaches the destination.

Hop Count
Hop count is the number of routers between the source and
the destination in a network. RIP considers the path with the
fewest hops as the best path to a remote network, and that path is
placed in the routing table. RIP allows at most 15 hops to reach any
network. If a destination is more than 15 hops away, it is
considered unreachable.

Routing Table
Every RIP router maintains a routing table. This table stores
information about all the destinations that the router knows it can
reach. Each router exchanges its routing table with its nearest
neighbours, broadcasting the routing table information every 30
seconds.

Example: Suppose you are the user and you want to reach google.com.
There can be many paths through which you can reach the Google
server. In the example below, the user has three paths. RIP
counts the number of routers required to reach the destination server
via each route and then selects the route that has the minimum
number of hops.
Route 1 has 2 hops, Route 2 has 3 hops and Route 3 has 4 hops to
reach the destination server. So, RIP will choose Route 1.
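As a toy illustration only (RIP itself exchanges routing tables between routers),
the hop-count comparison from this example can be written in a few lines of Python:

# A toy illustration of the hop-count comparison described above:
# among the candidate routes, the one with the fewest hops wins.
routes = {
    "Route 1": 2,   # hop counts taken from the example
    "Route 2": 3,
    "Route 3": 4,
}

best = min(routes, key=routes.get)
print("RIP selects", best, "with", routes[best], "hops")

# A hop count above 15 would mean the destination is unreachable in RIP.
assert routes[best] <= 15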

We can trace the route of a data packet and see the routers
that come in its path before it reaches the destination. Open the
Command Prompt and type "tracert google.com" (without double
quotes) to see the path the data packets take, i.e. the routers that
are between your computer and Google's destination server.
In this example, there is a total of 14 hops, i.e. the data packet has to go
through these routers in order to reach google.com.

"Request Timed Out" means that a hop along the path did not respond to
the request for information.

RIP timers
1. Update Timers: All the routers configured with RIP send their
update(a complete copy of their routing table) every 30 seconds
to the neighbouring routers.
2. Invalid Timers: If any router gets disconnected from the
network, the neighbouring routers wait 180 seconds for an
update from it. If no update is heard within 180 seconds, the
route is marked invalid and put into hold-down.
3. Hold-Down Timer: Hold-downs ensure that regular update
messages do not inappropriately cause a routing loop( A
routing loop is a serious network problem in which the
data packets continue to be routed within the network in
an endless circle). The router doesn’t act on new
information(of routing table which it receives after every 30
seconds) for a certain period of time. It is 180 seconds by
default.
4. Flush Timer: RIP will wait for an additional 60
seconds(total=180+60 =240 seconds) after the route has been
declared invalid. Even now if it doesn't hear any update then it
removes the route from the routing table.

Versions of RIP
1. RIPv1(Routing Information Protocol version 1): It is also
called a classful routing protocol because it does not send the
information of the subnet mask in the routing update. The
routing update is sent as a broadcast( at 255.255.255.255) to
every station on the attached network.
2. RIPv2 (Routing Information Protocol version 2): It is
a classless routing protocol because it does send the subnet
mask information in its routing updates. RIPv2
sends the routing table as multicast (at 224.0.0.9) to reduce
network traffic.
3. RIPng (Routing Information Protocol next generation): It is
an extended version of RIPv2 that was made to support IPv6.
RIPng sends the routing table as multicast (at FF02::9).

Advantages of RIP
• It is easy to configure.
• It does not require an update every time the topology of the
network changes.
• It is supported by almost all routers.
Disadvantages of RIP
• It is only based on hop count. So, if there is a better route
available with better bandwidth then it will not select that
route.
Example: Suppose we have two routes. The first route has fewer
hops but a bandwidth of only 100 Kbps (kilobits per second) and
carries heavy traffic, whereas the second route has more hops but a
bandwidth of 100 Mbps (megabits per second) and is free. RIP
will still select route 1, even though it has heavy traffic and its
bandwidth is much less than that of route 2, because it only counts
hops. This is one of the biggest disadvantages of RIP.
• Bandwidth utilization in RIP is very high as it broadcasts its
updates every 30 seconds.
• RIP supports only 15 hops, so a path can pass through at most 15
routers; a hop count of 16 means the destination is unreachable.
• Here the convergence rate is slow. It means that when any link
goes down it takes a lot of time to choose alternate routes.

What is a NIC(Network Interface Card)?

One of the ways to access the internet is to connect a LAN
cable (Ethernet cable) to our computer. What happens when we
connect this LAN cable to our computer? This LAN cable connects to
a hardware device already present in our computer called Network
Interface card. So, any computer in order to connect to the internet
needs a Network Interface Card(NIC). These days almost all
computers have built-in NIC. So let's learn more about NIC.

NIC
Network Interface Card is a hardware device that is installed on the
computer so that it can be connected to the internet. It is also
called Ethernet Card or Network Adapter. Every NIC has a 48-bit
unique serial number called a MAC address which is stored in ROM
carried on the card. Every computer must have at least one NIC if it
wants to connect to the internet.

NIC is not the only component that is required to connect to the internet.
If your device is a part of a large network and you want it to connect to
the internet then a router is also required. The NIC will connect to the
router then this router will connect to the internet.
Types of Network Interface Card:
• Wired: These NIC have input jacks made of cables(Ethernet
Cable). The motherboard has a slot for the network cards where
they are inserted. The most widely used LAN technology is
Ethernet. Ethernet-based NIC is available in hardware shops.
The speed of Ethernet-based NIC can be 10/100/1000 Mbps.
Example: TP-LINK TG-3468 Gigabit PCI Express Network Adapter

• Wireless: Wireless network cards are inserted into the
motherboard, but no network cables are required to connect the
computer to the internet. These NICs are designed for Wi-Fi
connections.
• Example: Intel 3160 Dual-Band Wireless Adapter

• USB: These are NICs that provide a network connection through a
device plugged into the USB port. For example, if you are a
gamer and you are tired of helplessly watching your gaming
character die due to Wi-Fi-induced lag, a USB-to-Ethernet
adapter can be a solution to your problem.
Example: TP-Link TL-UE300 USB 3.0 to RJ45 Gigabit Ethernet
Network Adapter

How fast are the Network interface Cards?


Every NIC comes with a speed rating, such as 11 Mbps, 100 Mbps,
etc., that suggests the performance of the NIC. The effective speed
depends on two other factors: first, the available bandwidth, and
second, the speed that you are paying for.

Example: If you are paying for a 10 Mbps download speed but using
a 54 Mbps NIC, then the NIC will not increase your speed. Now imagine
you are paying for 15 Mbps but using an 11 Mbps NIC; then your
download speed will be limited to 11 Mbps and you will not get the
speed you are paying for.

Now, imagine another situation where your download speed is 54
Mbps and your NIC also supports it. But, you have two computers
connected to the network downloading simultaneously. So, the
downloading speed is split into two halves and each computer will
get a bandwidth of only 27 Mbps.
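As a tiny sketch of the arithmetic in these examples, the usable speed is the
smaller of the plan speed and the NIC speed, divided among the hosts sharing it:

# A small sketch of the speed arithmetic from the examples above.
def effective_speed_mbps(plan_mbps, nic_mbps, hosts=1):
    return min(plan_mbps, nic_mbps) / hosts

print(effective_speed_mbps(10, 54))      # 10.0 -> the plan limits the speed
print(effective_speed_mbps(15, 11))      # 11.0 -> the NIC limits the speed
print(effective_speed_mbps(54, 54, 2))   # 27.0 -> two hosts downloading at once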

Advantages of NIC
1. Network Interface Cards help to connect the system to the
internet and enable data flow.
2. It also helps to connect a remote computer.
Disadvantages of NIC
1. The data is not inherently secure in a NIC connection and can be
intercepted. However, security can be added through external
software and encryption, which encrypts the data before it is
sent to other computers.

What are ICMP and IGMP protocols?

When you are not connected to the internet and search for any
website then you get an error message like destination unreachable
or time limit exceeded etc. These messages are displayed through
the ICMP protocol. The IP protocol does not have any mechanism
for error reporting and sending query messages. This problem is
resolved by the ICMP protocol.

Online streaming of videos and games generally uses the IGMP
protocol for more efficient use of resources. In this blog, we will
learn about these two protocols of the TCP/IP. So, let's get started.

ICMP
ICMP or Internet Control Message Protocol is one of the major
protocols of the TCP/IP. ICMP is a mechanism used by the host,
routers, and gateways to send error messages back to the sender. As
the IP does not provide any mechanism for error reporting and
control, ICMP has been designed to compensate for these
deficiencies of the IP. However, it only reports the error and
doesn't correct the error.

The ICMP messages are divided into two categories:


1. Error Message
2. Query Message

Error Message
The error messages report the problems which may be faced by the
hosts or routers when they process the IP packet.

1. Destination Unreachable: When any router or gateway
determines that the packet cannot be delivered (due to link failure,
congestion, etc.) to the final destination, it sends an ICMP
Destination Unreachable message to the source. Not only
routers but also the destination host can send this ICMP error
message if there is a failure at the destination, such as a
hardware failure, port failure, etc.
2. Source Quench: A source quench is a request by the receiver
to the sending host or sender to reduce the rate at which the
sender is sending the data. This message is sent by the receiver
when it has congestion and there are chances that the packet
may get lost if the sender keeps on sending the packets at the
same rate.
3. Parameter Problem: When the packet is received by the router
then the calculated checksum should be equal to the received
checksum. If there is any ambiguity then the packet is dropped
by the router and the parameter problem message is sent.
4. Time Exceeded: Whenever the TTL(Time to Live) field of the
datagram reduces to zero then the router discards the datagram
and sends the time exceeded message to the source.
5. Route Redirect: If a router determines that the host has
incorrectly sent the packet to the wrong router, it
uses the Route Redirect message to tell the host to update its
routing information. This helps in improving the efficiency of
the routing process.

Query Message
The ICMP protocol can diagnose some network problems also. Query
messages help the hosts to get some specific information from a
router or another host.

1. Timestamp Request/Reply: Hosts and routers use this to determine
the round-trip time required for an IP datagram to travel
between two hosts or routers. It can also be used to synchronize
the clocks of two systems.
2. Router Solicitation and Advertisement: If the host wants to
send the data to a host on another network then it needs to
know the address of the routers connected. The host also needs
to know if routers are alive and operational. All these
functions are provided by the router solicitation and
advertisement message.
3. Address Mask Request/Reply: A host broadcasts an address
mask request if it does not know its subnet mask.
The router receiving the address mask request replies with the
necessary mask for the host.
4. Echo Request/Echo Reply: It is designed for checking
the connectivity between two hosts. Example: the ping command.
Let's say you want to check the connectivity between your computer
and the Google server. You can do this by writing the
command “ping www.google.com” in the command line.
When the ping command is invoked then the ICMP echo
request message is sent to the target host(google, here). If the target
is connected to the network and operational then it sends an echo
reply message as an acknowledgement.
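As a small sketch of driving this echo exchange from code, the standard ping
utility (which sends the ICMP Echo Request) can be invoked from Python; note that
the count flag is -n on Windows and -c on other systems:

# A minimal sketch of invoking ping (ICMP Echo Request/Reply) from Python.
import platform
import subprocess

host = "www.google.com"
count_flag = "-n" if platform.system() == "Windows" else "-c"

result = subprocess.run(["ping", count_flag, "1", host])
print("reachable" if result.returncode == 0 else "no echo reply")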

IGMP
IGMP is also a protocol of the TCP/IP. Internet Group Message
Protocol is an Internet protocol that manages multicast group
membership on IP networks. Multicast routers are used to send
the packets to all the hosts that are having the membership of a
particular group. These routers receive many packets that are to be
transmitted to various groups and they just can't broadcast it as it
will increase the load on the network.

So, to overcome this problem, a list of groups and their members is
maintained, and IGMP helps the multicast router in doing so. The
multicast router has a list of the multicast address for which there
are any members in the network. There is a multicast router for each
group that distributes the multicast traffic of the group to the
members of that group.
Major goals of the IGMP protocol.
1. To inform the local multicast router that the host wants to
receive the multicast traffic of a particular group.
2. To inform the local multicast router that the host wants to
leave a particular group.

Versions of IGMP
• IGMPv1: It was the first version where the host announced
that it wants to receive the traffic of a particular multicast
group. 0.0.0.0 is defined as the group address and
the 224.0.0.1 as the destination address for the general IGMP
requests. The default interval for these requests which is sent
automatically by the routers is 60 seconds. There was no
mechanism for leaving a multicast group; only a timeout (a delay
timer of 180 seconds) removes the respective host from the groups
it is in. Suppose a host which is in a particular group
shuts down. This results in a situation where traffic is
still sent to the host even though it is no longer accepting it. When
the router discovers after some time that the host is no longer
accepting the traffic, the multicast traffic is stopped. This
problem was resolved in the next version.
• IGMPv2: The group address (0.0.0.0) and destination
address (224.0.0.1) remain unchanged, but the default interval
for these requests, which are sent automatically by the routers,
is increased to 125 seconds. The most important feature added in
this version is the "leave message", which a host can send if it
wants to leave a group. This allows the router to stop
unnecessary multicast traffic.
• IGMPv3: The group address (0.0.0.0) and destination
address (224.0.0.1) remain unchanged, and the default interval
for these requests sent automatically by the routers is
125 seconds. The most important feature added in this version was
the option to select the source of the multicast stream. This
reduces the demands on the network and ensures greater
security during transmission. A minimal sketch of joining a
multicast group from an application is shown below.
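The sketch below shows how an application can join a multicast group with a
standard socket option; the operating system then sends the IGMP membership
report on the application's behalf. The group address 239.1.1.1 and port 5007
are arbitrary example values.

# A minimal sketch of joining a multicast group; IP_ADD_MEMBERSHIP makes the
# operating system send an IGMP membership report for the group.
import socket
import struct

group = "239.1.1.1"          # example multicast group address
port = 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", port))

# ip_mreq: 4-byte group address + 4-byte local interface (0.0.0.0 = any).
mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Blocks until traffic sent to the group reaches this host.
data, addr = sock.recvfrom(1024)
print("received", len(data), "bytes from", addr)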

What are Proxy Servers and how do they protect computer networks?

The IP address which is given to your system by the ISP (Internet
Service Provider) is used to uniquely identify your system. But there
are many risks involved with the IP addresses like they might find
personal information about you, spam you with “Personalized Ads”,
etc. So, Proxy Servers or VPNs can be used to overcome this
problem. In this blog, we will learn how the proxy server will protect
your computer network. So, let's get started.

Proxy Server
The word proxy literally means a substitute. A proxy server
substitutes the IP address of your computer with some substitute IP
address. If you can't access a website from your computer or you
want to access that website anonymously because you want your
identity to be hidden or you don't trust that website then you can use
a proxy. These proxy servers are dedicated computer systems or
software running on a computer system that acts as
an intermediary separating the end-users from the server. These
proxy servers have special popularity among countries like China
where the government has banned connection to some specific
websites.
How does a proxy server work?
Every computer on the network has a unique IP address. This IP
address is analogous to your street address which must be known by
the post office in order to deliver your parcel to your home. A proxy
server is a computer on the internet with its own IP address and the
client which is going to use this proxy server knows this IP address.
Whenever the client makes any request to any web server then its
request first goes to this proxy server. This proxy server then makes
a request to the destination server on behalf of the client. The proxy
server actually changes the IP address of the client so that the
actual IP address of the client is not revealed to the webserver. The
proxy server then collects the response from the webserver and
forwards the result to the client and the client can see the result in
its web browser.
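As a minimal sketch of using a proxy from client code, the third-party requests
library accepts a proxies mapping; the proxy address below is a placeholder from
the documentation IP range, not a real server.

# A minimal sketch of sending a request through a proxy with "requests".
import requests

proxies = {
    "http": "http://203.0.113.10:8080",   # placeholder proxy address
    "https": "http://203.0.113.10:8080",
}

# The proxy forwards this request to the web server on the client's behalf,
# so the server sees the proxy's IP address instead of the client's.
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.text)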

Types of Proxy servers


1. Anonymous Proxy: An anonymous proxy is the most familiar
type of proxy server. It hides the original IP address of the
client and passes a substitute IP address to the web server
while making the request. By doing this, there is no way that
the end server receiving the request can find out the location
from where the request was made. That's why most people use
a proxy. This helps in preventing identity theft and keeps
your browsing habits private.
2. High Anonymity Proxy: These types of servers change the IP
address periodically. This makes it very difficult for the
web server to keep track of which IP address belongs to
whom. The TOR network is an example of a high anonymity proxy.
A high anonymity server has an advantage over an
anonymous proxy in terms of privacy and security.
3. Transparent Proxy Server: As the name suggests this proxy
server will pass your IP address to the webserver. This is not
used for the same purpose as the above two rather it is used for
resource sharing. They are mainly used in public libraries and
schools for content filtering. Example: If the students of a
school are viewing the same article again and again via their
school network, then it would be more efficient to cache the
content and serve the next request from the cache. A
transparent proxy can do this for the organizations.
4. Reverse Proxy: Here the goal of the proxy server is not to
protect you while you access web pages, but to stop
others on the internet from freely accessing your network. The
most basic application of a reverse proxy is that it protects the
company's resources and data on individual computers by
stopping third-party access to these computers.

Advantages of Using Proxy Servers


1. Privacy: Individuals and organizations use the proxy server so
that they can browse the internet more privately. The use of
proxy protects them from identity theft and keeps
their browsing habits safe. Many ISPs also collect the data of
your browsing history and sell it to the retailers and
government.
2. Access to Blocked Resources: Several governments around
the world restrict access to citizens to many websites and proxy
servers provide access to the uncensored internet. Example:
The Chinese government has blocked access to many websites
for its citizens but a proxy server is all they need.
3. Speed up Internet Surfing: A proxy server also caches data.
So, if you ask for the website afteracademy.com, your
request will first reach the proxy server. The proxy server checks
if it has cached this website. If it has, then you will get
the response from the proxy server's cache, which is faster than
directly accessing the website.
4. To control Internet usage of Employees and
Children: Organisations can use proxy servers to stop
employees from accessing certain websites (like Facebook) during
office hours. Parents can also use a proxy server to monitor
how their children use the internet.

Risk of Using a Proxy Server


1. The most common risk is spyware that gets downloaded as free
software. This is where the quality of your proxy server
becomes important. The proxy provider should have
advanced security protocols so that the spyware remains passive
and can't do any harm to your system even if installed. Proxy
servers which don't have such security allow the
spyware to send out your computer information and other personal
data, leaving the proxy servers useless.
2. The proxy server has your IP address and web requests saved,
mostly unencrypted. You must know whether they save and log your
data and what policies they follow. It is
possible that they might sell your data to vendors.
3. This risk is out of your control: hackers can take control of
the proxy server and monitor or change the data that passes
over the proxy server.

What do you mean by a Firewall?

In today's world, most organizations work on the internet.
However, the benefits of the internet come along with risks.
When the internet interacts with the organization, it
can be a threat to the organization itself. A firewall reduces the
exposure to external networks and hosts which pose a threat to the
organization. It is the foundation on which current network
security technologies are built. So, let's get started and learn more
about the firewall.

Firewall
A firewall is a software program or a hardware device that acts as
a filter for the data entering and leaving the network. The firewall
can be analogous to the security guards who have control over who
can enter or leave a building. A firewall reduces the risk and threat
from the malicious packets that are travelling over the public
network and can hamper the security of a private network.

How does a Firewall work?


A firewall acts as a border between your computer and the connected
network (like a LAN or the internet). It inspects all the incoming and
outgoing packets of the network on the basis of
programmed rules which are created by humans. These rules may
depend on the demand, necessity and security policies defined by
the organization, and they decide whether a packet will be
allowed past the network barrier or not. If any packet is identified
as a danger or threat according to the defined rules, then it will not
be allowed through the network. Besides these configured rules and
policies, the firewall also defines some default policies, which
consist of three actions.

1. Accept: Allows the traffic to pass through.
2. Drop: The network packets are simply dropped, without any reply.
3. Reject: It rejects or blocks the traffic and, additionally, replies
with an error message.
The firewall establishes a secured network and protects the internal
network from the outside network or internet. In the above diagram,
the firewall allows some traffic while it rejects malicious traffic.
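As a toy sketch of this rule-plus-default-policy idea (the rules, addresses and
ports below are made-up examples, not any particular firewall's configuration):

# A toy sketch of how a packet filter applies ordered rules and then
# falls back to a default policy.
import ipaddress

RULES = [
    {"src": "203.0.113.0/24", "port": None, "action": "drop"},    # block a bad range
    {"src": None,             "port": 22,   "action": "reject"},  # refuse inbound SSH
    {"src": None,             "port": 443,  "action": "accept"},  # allow HTTPS
]
DEFAULT_ACTION = "drop"

def decide(packet):
    for rule in RULES:
        src_ok = rule["src"] is None or ipaddress.ip_address(packet["src"]) in ipaddress.ip_network(rule["src"])
        port_ok = rule["port"] is None or rule["port"] == packet["dst_port"]
        if src_ok and port_ok:
            return rule["action"]
    return DEFAULT_ACTION

print(decide({"src": "198.51.100.7", "dst_port": 443}))  # accept
print(decide({"src": "203.0.113.9",  "dst_port": 80}))   # drop (first rule)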

Types of Firewall
1. Packet-Filtering Firewalls: This is the oldest type of firewall
architecture. When a packet passes through this firewall, the
firewall checks its source address, destination address, port
numbers, and protocol without opening the packet. If the
packet does not conform to the rules, it is dropped.
These firewalls are low-cost and are best suited for small
networks. However, they work only at the network
layer and are vulnerable to spoofing.
2. Circuit-Level Gateway Firewalls: They work at the session
layer of the OSI model and check whether the TCP 3-way
handshake is legitimate (according to the rules) or not. While
they are extremely efficient, the firewall doesn't inspect the
packet itself. So if a packet contains malware but passes
the TCP handshake checks, it will pass through the
firewall and the system will be at risk.
3. Stateful Inspection Firewalls: They are also called dynamic
packet-filtering firewalls. They keep track of all the open
connections of the network. When a new packet arrives, the
firewall checks whether it belongs to one of the open connections;
if so, it simply allows the packet to pass. If the new packet
does not belong to one of those open connections, it is checked
against the rules set for new connections.
4. Application Firewall: It is also called a proxy-based firewall.
This firewall operates at the application layer and filters the
incoming traffic. When there is a proxy firewall then both the
client and the server connect through an intermediary
i.e. proxy servers. So, now when any external client wants to
connect to any internal server or vice versa, then the client will
have to open a connection with proxy instead. The proxy
firewall first establishes a connection to the source of the
traffic and then inspects the incoming data packet. These
firewalls may check the actual content of the packet, so that if
the packet contains any malware it can be rejected. The
advantage of using a proxy firewall is that it makes it hard for an
attacker to discover where the network actually is, and hence it
provides security.
5. Next-Generation Firewall: Most newly released
firewalls are advertised as next-generation architectures. Some
of the common features of these firewalls are Deep Packet
Inspection (checking the actual content of the data), SSL/SSH
inspection (which protects you from attacks carried over HTTPS),
and IPS (Intrusion Prevention System, which works to automatically
stop attacks against your network). There is no precise
definition of a next-generation firewall, so one should check
the capabilities of the firewall before buying one.
6. Software Firewall: A software firewall may implement any of
the above firewall types but is installed on the local computer
rather than being a separate piece of hardware. It provides
security as each individual network endpoint is isolated from
the others. Example: Windows Firewall is a software program that
comes included in Microsoft Windows.
7. Hardware Firewall: Hardware Firewalls are the hardware
devices which are found mostly on the routers. The hardware
Firewall provides security from the malicious traffic from the
outside networks as they are intercepted and blocked before
they reach the internal network. Example: Cisco ASA 5540
series firewall

So, a firewall is important when


1. You are surfing on the internet where you are using an ‘always
on’ connection.
2. You connect to an open Wi-Fi in a cafe, park, railway station,
etc.
3. Your organization needs to be isolated from the outside
network.
4. You want to know if any program on your system wants to
connect to the internet.

Difference between a Firewall and Antivirus

Although a firewall protects your system from unauthorized
access by external networks, you also need to protect your system
from threats that are already present inside it. Antivirus software
detects, identifies, and removes the malicious programs that
come in from the internet. Before we go further into the
differences between the firewall and antivirus, we will first discuss
in detail what an antivirus is. If you haven't read about the
firewall, read it from here. So, let's get started.

Antivirus
Antivirus software is a cybersecurity mechanism that detects and
eliminates threats that are a risk to system security. Antivirus
usually deals with more established threats like viruses, worms, and
trojans. It was originally designed to detect, protect and eliminate
the viruses from the system, hence, the name antivirus. Some
common examples of antivirus software are Norton, McAfee,
BullGuard, etc.

A common path through which viruses enter our system is
email. The attachments of an email may contain viruses. If the
antivirus detects any such program which can be a risk to your
system security, it can block, fix or completely remove that
program from the system.

How does an antivirus work?


An antivirus follows the approach in which it performs the detection,
identification, and removal of threats.
1. Detection: The antivirus first detects the infected file or
program.
2. Identification: After detection, it identifies whether the threat is
a virus, worm, trojan, etc.
3. Removal: Depending upon the detected problem, antivirus
takes action for removing the infected file. It can block, fix, or
completely remove the program from the system and restore
the original backup program(if there is any backup present).
An analogy can help in understanding the firewall and
antivirus. A firewall can be considered as an army whereas the
antivirus can be considered as the police. An antivirus, like the
police, fights the threats that have already entered your computer,
or are about to be installed, and that may slow the system down or
cause failures. In contrast, the firewall is like the army at the
border which blocks the attack from any external network in the
first place.

Difference between Firewall and Antivirus


1. Implementation: A firewall can be employed using both
software and hardware whereas the antivirus is employed using
software only.
2. Security Type: A firewall provides network-level security like
IP blocking, Packet filtering, etc. whereas antivirus
provides application-level security like detection and removal
of viruses, worms, etc.
3. Operation: Antivirus works by scanning the system to remove
the infected file and programs whereas the firewall is a network
security system that monitors and filters the incoming and
outgoing packets based on the predetermined security rules.
4. Threats: Antivirus deals with both external and internal
threats whereas firewall deals only with external threats. An
antivirus can scan the storage devices like flash drives which
firewall cannot do.
5. Counter-Attack: There is no possibility of counter-attacks
once the viruses, worms, etc. are removed using the antivirus.
In contrast, firewall mainly deals with the external network so
there is a possibility of external threats like IP Spoofing and
routing attacks.

What is an IP address?

Every machine in the network has a unique identifier to identify it.


Just as your house has an address so that you receive all the parcels,
similarly, the computer uses a unique identifier over the internet to
send the data. Without a specific address, information cannot be
received. Most of the computers on the internet today
use TCP/IP protocol for communication. This unique identifier in the
TCP/IP protocol is called IP address. So, let's dive deep into this
blog to know more about IP addresses.

IP address
The Internet Protocol(IP) address is a unique identifying number
that helps in connecting your device with the devices over the
internet or in the same network. This is a unique number for all
devices like printer, router, modems, laptop, mobile, etc. An IP
address is made of characters or
numbers. Example: 203.90.105.206. There are two versions of IP
standards that co-exist in the global world.

1. IPv4
2. IPv6

IPv4
IP version 4 is the older version of IP. It uses 32 bits to create a
single unique address on the internet. IPv4 is limited to
4,294,967,296 addresses, i.e. 2³² addresses. It consists of four
numbers, each of which can contain one to three digits ranging from
0 to 255, separated by a single dot (.). Here, each number is the
decimal representation (base-10) of an 8-bit binary number (base-2).
These IP addresses pretty much guarantee that our emails will
come and go as expected and that our Google searches will take us to
the website which we want.

Example of an IPv4 address: 63.171.234.171

Currently, most of the devices use IPv4 but these IP addresses are
running out quickly. IPv6 solved this problem. It can accommodate
up to trillions of users.

IPv6
It is the replacement for IPv4. It uses 128 bits to create a unique
address. This means that there can theoretically be 2¹²⁸ unique
addresses, i.e. 340,282,366,920,938,463,463,374,607,431,768,211,456,
and this number will not run out. It consists of eight groups of
hexadecimal numbers separated by colons. The IPv4 version used
decimal values, so IPv6 adopted hexadecimal numbers to avoid any
confusion. If consecutive groups contain all zeros, the notation can
be shortened by using a double colon (::) to replace the zeroes.

Example of an IPv6: adba:1925:0000:0000:0000:0000:8a2e:7334


In the above IP address, four consecutive groups contain only zeros.
These groups can be replaced by a double colon (::), and the address
can be re-written as adba:1925::8a2e:7334.
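This compression can be checked with Python's standard ipaddress module, which
prints the shortened form automatically:

# Zero compression of the example IPv6 address above.
import ipaddress

addr = ipaddress.ip_address("adba:1925:0000:0000:0000:0000:8a2e:7334")
print(addr)              # adba:1925::8a2e:7334
print(addr.exploded)     # adba:1925:0000:0000:0000:0000:8a2e:7334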

How does your computer get an IP address?


The IP address is assigned to your computer by the ISP. It can be
static or dynamic. In most of the cases, it is a dynamic IP.

Static IP address
This type of IP address is one that is assigned to you by the ISP.
It is fixed and doesn't change automatically. It is generally used
by servers hosting websites, providing mail, databases, etc.
ISPs charge an extra amount for static IPs.

Advantages of Static IP address:


1. Better DNS Support: Static IP addresses are easy to manage
with DNS Servers as the IP address is fixed for the domain
name.
2. Server Hosting: It is good for hosting web servers, email
servers, and internet servers. Having a static IP address means
it's easier for the clients to reach you via DNS more quickly.

Disadvantages of Static IP address


1. Security Concern: As the IP address is static it is easier for
hackers to attack.
2. Higher Cost: The ISP generally charges extra for the static IP’s.
Dynamic IP address
This type of IP address is dynamically assigned to you by the ISP. The
ISP assigns this IP address by using DHCP (Dynamic Host
Configuration Protocol), which typically runs on routers or dedicated
DHCP servers. This dynamic IP address is assigned using a leasing
system, which means that the IP address is assigned only for
a fixed amount of time. When the lease time is over, the IP
address has to be renewed. Mostly, DHCP reassigns the same IP
address to the same machine, but it is possible that DHCP is
not able to give the same IP address to you again.

Advantages of Dynamic IP address


1. More secure: The changing IP address provides more security.
2. Lower fees: It is cheaper than static IP.
3. Prevents IP Conflicts: As the IP address is assigned
automatically it prevents the IP conflicts.

Disadvantages of Dynamic IP address


1. Not suitable for Hosted Services: It is not suitable for hosting
web services, as the changing IP address can be troublesome for
DNS. DNS does not work well with dynamic IP addresses.
However, Dynamic DNS can manage this problem, though it
comes with additional complexity and cost.
2. Less-accurate Geo-location: As your IP address keeps on
changing, it may not reflect your accurate real-time location,
so sometimes geo-location services may fail.
A static IP address is good for business which hosts their own
websites whereas a dynamic IP address is good enough for the end
customers.

No matter whether you are using a static IP address or a dynamic IP
address, it is always vulnerable to hackers. So you should use a
proxy server or a VPN to hide your IP address.

How to find your IP address?


You might not know but you have two IP addresses. First, Public or
external IP address and second, Private or internal IP address.

• Public Address: The public address is provided by the ISP and
it is how the internet recognizes your network. It is unique for
each user. You can find your public IP address by typing
something like "what is my ip address" in Google Chrome.

Google shows an IP address:203.90.104.203

This is your external IP address. We can also understand that it is
an IP version 4 address.
• Private Address: Every device on a local network has a unique
local IP address that is assigned to you by the router of the
internal network. You can find your private IP address by going
to command prompt and typing "ipconfig"(without double
quotes) in it.

This time your IP address is 192.168.0.113.

This is your Internal IP address.

The internal IP address is used in your local internal network,
whereas the external IP address is used when trying to communicate
with systems on the Internet.
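A small sketch of finding the private (internal) address from Python is shown
below; it opens a UDP socket towards a public address and reads the local end of
the socket, without actually sending any packet. The public IP still has to be
asked of an outside service, such as the search above.

# A minimal sketch of discovering the private IP address of this machine.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))        # any routable address works; nothing is sent
print("private IP:", s.getsockname()[0])
s.close()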

What is a Subnet mask?

Given an IP address, how will the router identify what is the network
ID of the network to which this IP address belongs? The router has a
routing table for this. The subnet mask helps the router in doing so.
In this blog, we will start from the basics and see how this is done by
the router. You should know about what are the various classes of
IP and how it is divided before reading this blog. So, let's get started.

Subnet(SubNetwork)
A subnet is a logical partition of an IP network into smaller
networks.
Subnetting
Dividing the network into smaller networks or subnets is called
subnetting.

Why are we dividing?


Suppose we take a network of class A. So, in class A, we have 2²⁴
hosts. So to manage such a large number of hosts is a tedious job. So
if we divide this large network into the smaller network then
maintaining each network would be easy.

How we do the subnetting?


Suppose we have a class C network having network ID as
201.10.1.0(range of class C 192–223). So the total number of hosts
is 256(for class C host is defined by last octet i.e. 2⁸). But, the total
usable host is 254. This is because the first IP address is for
the network ID and the last IP address is Direct Broadcast
Address(for sending any packet from one network to all other hosts
of another network).

So, in subnetting we will divide these 254 hosts logically into two
networks. A class C network has 24 bits for the Network ID and the
last 8 bits for the Host ID. We are going to borrow the left-most bit
of the host ID and use it for identifying the subnet. If the leftmost
bit of the host address is 0, then it is in the 1st subnet network,
and if the leftmost bit is 1, then it is in the 2nd subnet network.
Using one bit we can divide it into 2 networks, i.e. 2¹. If we want
to divide it into four subnet networks, then we need 2 bits (2² = 4
networks).
The range of IP address which is in 1st subnet network is
from 201.10.1.0 to 201.10.1.127. The range of IP address that lies in
the 2nd subnet network is from 201.10.1.128 to 201.10.1.255.

In 1st subnet network(S1), we have a total of 128 hosts. But, the first
IP address (201.10.1.0)is the network ID of the first subnet and the
last IP address(201.10.1.127) is the Direct Broadcast Address of the
first subnet. So, there are actually 126 usable hosts in the first
subnet.
Similarly, in the 2nd subnet network(S2), we have a total of 128
hosts. But, the first IP address (201.10.1.128)is the network ID of
the first subnet and the last IP address(201.10.1.255) is the Direct
Broadcast Address of the first subnet. So, there are actually 126
usable hosts in the second subnet.

Overall, there are 252 usable hosts after subnetting. So, because of
subnetting, there is a loss in the number of IP addresses.
The Network ID of the whole network is 201.10.1.0. Also, the
network ID of S1 is 201.10.1.0. Which network are we referring to
when the IP address is 201.10.1.0? It depends on where you are in
the network. If we are inside the network we are referring to the
subnet (S1)and if we are outside the network we are referring to the
entire network.
We have an internal router which is connected to the two subnet
networks. Suppose a packet arrives at the internal router with a
destination IP address of 201.10.1.130. How will the router
identify which subnet network this IP address belongs to?
In other words, given an IP address, how will the router identify
the network ID of the network to which this IP address belongs?
Here, by seeing the range of each subnet, we can easily tell that it
belongs to subnet S2. But how will the router find it? For this, we
have the Subnet Mask.

Subnet Mask
A subnet mask is 32 bits numbers in which the series
of 1’s represents the Network ID part and the Subnet ID part
whereas the series of 0’s represents the Host ID part.

So, in the above example of the Class C IP address, we represent all
the network ID bits by 1. We have reserved 1 bit of the host ID to
represent the Subnet ID, so this leftmost bit of the last octet is
also represented by 1. All the remaining bits, which represent
the host, are represented by 0.

Combining all these bits, the subnet mask is represented as
11111111.11111111.11111111.10000000, i.e. 255.255.255.128.

If we know the subnet mask of the network, then we can find the
network ID of an IP address by bitwise ANDing the address with the mask.
Example: Suppose a packet arrives at the router with the IP address
201.10.1.130. The router knows the subnet mask (255.255.255.128)
of the network. First, convert both values into their binary
equivalents. The network to which this IP address belongs can
easily be found by bitwise ANDing the subnet mask and the
incoming IP address.
Using the subnet mask, we have found the network ID of the IP
address and hence found that this IP address belongs to the subnet
S2 network.
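A short sketch of this AND operation in Python, using the same example address
and mask:

# Bitwise AND of an IPv4 address with its subnet mask, octet by octet.
def network_id(ip, mask):
    ip_octets = [int(x) for x in ip.split(".")]
    mask_octets = [int(x) for x in mask.split(".")]
    return ".".join(str(i & m) for i, m in zip(ip_octets, mask_octets))

print(network_id("201.10.1.130", "255.255.255.128"))   # 201.10.1.128 -> subnet S2
print(network_id("201.10.1.64",  "255.255.255.128"))   # 201.10.1.0   -> subnet S1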

The router has the subnet mask stored in the routing table. The
routing table contains the network ID, the subnet mask and the
corresponding interface to which the packet has to be forwarded if
the computed network ID matches the table entry. In this case, the
size of all the subnets is the same; this is called Fixed Length
Subnet Masking. If the result matches the network ID, the router
sends the packet to the corresponding interface. If it doesn't match
the first entry, it is matched with the next entry. If it doesn't
match any of the entries, then the packet has to be sent out of the
network, i.e. the default entry. The subnet mask for the default
entry is 0.0.0.0. The significance of all zeroes is that ANDing any
address with 0 produces the result zero.

This is all about subnet masks. Hope you learned something new
today.
What is the concept of Subnetting and Supernetting?

Computer networks can be broken into many smaller networks, or small
networks can be combined to form large networks, depending upon
our needs. This is done by IP subnetting and supernetting. In this
blog, we will learn about these concepts in detail. So, let's get
started.

Subnetting
Dividing the network into smaller contiguous networks or subnets is
called subnetting.

Why subnetting?
Suppose we take a network of class A. So, in class A, we have 2²⁴
hosts. So to manage such a large number of hosts is tedious. So if we
divide this large network into the smaller network then maintaining
each network would be easy.

How does subnetting work?


Suppose we have a class C network having network ID as
201.10.1.0(range of class C 192–223). So the total number of hosts
is 256(for class C host is defined by last octet i.e. 2⁸). But, the total
usable host is 254. This is because the first IP address is for
the network ID and the last IP address is Direct Broadcast
Address(for sending any packet from one network to all other hosts
of another network).

So, in subnetting we will divide these 254 hosts logically into two
networks. In the above class C network, we have 24 bits for the
Network ID and the last 8 bits for the Host ID. We are going to
borrow the left-most bit of the host address and use it for
identifying the subnet. If the leftmost bit of the host address is 0,
then it is in the 1st subnet network, and if the leftmost bit is 1,
then it is in the 2nd subnet network. Using 1 bit we can divide it
into 2 networks, i.e. 2¹. If we want to divide it into four networks,
then we need 2 bits (2² = 4 networks). The range of IP addresses in
the 1st subnet network is from 201.10.1.0 to 201.10.1.127. The range
of IP addresses in the 2nd subnet network is from 201.10.1.128 to
201.10.1.255.

In the 1st subnet network (S1), we have a total of only 126 usable
hosts, because the first and last IP addresses are reserved for the
network ID and the Direct Broadcast Address respectively. Similarly,
in the 2nd subnet network, we have 126 hosts.

Overall, there are 252 usable hosts after subnetting. So, because of
subnetting, there is a loss in the number of IP addresses.
This network will have two subnets as in the diagram below:

The subnet mask for the above network is represented as
11111111.11111111.11111111.10000000, i.e. 255.255.255.128.
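The same division can be reproduced with Python's standard ipaddress module,
which confirms the two /25 subnets and the 255.255.255.128 mask:

# Splitting the class C network by one extra prefix bit gives the two subnets.
import ipaddress

network = ipaddress.ip_network("201.10.1.0/24")
for subnet in network.subnets(prefixlen_diff=1):
    print(subnet, "netmask", subnet.netmask, "usable hosts", subnet.num_addresses - 2)
# 201.10.1.0/25   netmask 255.255.255.128  usable hosts 126
# 201.10.1.128/25 netmask 255.255.255.128  usable hosts 126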

The router inside the network will have the routing table which
will be as follows:
Supernetting or Aggregation
It is the opposite of subnetting. In this, multiple smaller networks
are combined to form a larger network.

Why supernetting?
The routing table contains the entry of a subnet mask for every
network. If there are lots of small networks then the size of the
routing table increases. When the router has a big routing table then
it takes a lot of time for the router to process the routing table.
Supernetting is used to reduce the size of the IP routing table to
improve network routing efficiency.

How does supernetting work?


All the networks are not suitable for aggregation. There are some
rules according to which the network can be aggregated. For any
network to be aggregated it should follow three rules.

1. Contiguous: All the networks should be contiguous.


2. Same size: All the networks should be of the same size and
also a power of 2 i.e. 2^n.
3. Divisibility: The first network ID should be divisible by the size
of the block.
Note: If a binary number is divided by 2^n, then the last n bits are
the remainder.
Example: Suppose we have four small networks with network IDs
201.1.0.0, 201.1.1.0, 201.1.2.0 and 201.1.3.0.

Now, let's check if this can be aggregated or not.


1. Contiguous: As we can see, all four networks are Class
C networks. The range of the first network is from 201.1.0.0 to
201.1.0.255, and the range of the second network starts from
201.1.1.0. If we add 1 to the last IP address of the first network,
we get the starting IP address of the second network. Similarly,
we can check that all the networks are contiguous.
2. Same Size: All the networks are Class C, so each network has
2⁸, i.e. 256, hosts.
3. Divisibility: The first IP address should be divisible by the
total size of the networks. The total size of the network is
4 * 2⁸, i.e. 2¹⁰. The last 10 bits are the remainder if we divide
the first IP address by 2¹⁰, so for it to be divisible, the last
ten bits should be 0.
First IP address binary representation:
11001001.00000001.00000000.00000000
The last 10 bits are zero. Hence it is divisible by the size of the
network, and all three conditions are satisfied.

These four networks can be combined to form a supernet.


The supernet ID or the network ID for all the four networks will
be 201.1.0.0.
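This aggregation can be checked with Python's standard ipaddress module:
collapsing the four contiguous /24 networks yields a single /22 supernet whose
network ID is 201.1.0.0:

# Combining the four contiguous class C networks into one supernet.
import ipaddress

networks = [
    ipaddress.ip_network("201.1.0.0/24"),
    ipaddress.ip_network("201.1.1.0/24"),
    ipaddress.ip_network("201.1.2.0/24"),
    ipaddress.ip_network("201.1.3.0/24"),
]

supernet = list(ipaddress.collapse_addresses(networks))
print(supernet)                 # [IPv4Network('201.1.0.0/22')]
print(supernet[0].netmask)      # 255.255.252.0 (the supernet mask)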

Supernet Mask
Supernet Mask is a 32-bit number where all the fixed bits of the
network are represented by 1 and the variable part is represented by
0.

The bits to the left of the red line are fixed bits and the bits right of it
represent the variable bits.
The routing table at router 2 is now reduced and contains only
one entry for all four networks. But router 1 needs a routing
table which contains all four entries, because it has to know where
to forward the packet next.

The routing table at router 2:

What are the classes of IPV4? How to identify an IP class from a given IP
address?

IP addressing is the most popular way to identify a device on a
network. The address has 32 bits, which can be broken into four
octets (1 octet = 8 bits). These octets provide an addressing method
through which we can accommodate large and small networks.
Accordingly, there are 5 classes of networks, which we will study in
this blog. So, let's get started.

IPv4
IP version 4 addresses are 32 bits long. The number of distinct
values that can be formed using 32 bits is 2³². So, the maximum
number of IPv4 addresses is 4,294,967,296, i.e. 2³² addresses. An
address consists of four octets, each of which can contain one to
three digits ranging from 0 to 255, separated by a single dot (.).
Here, each number is the decimal representation (base-10) of an
8-bit binary number (base-2).

Example of an IPv4 address: 63.171.234.171

Classes of IPv4
1. Class A
2. Class B
3. Class C
4. Class D
5. Class E
The order of the bits in the first octet of the IP address decides the class
of the IP address.
Some bits of the IP address represent the network and the
remaining bits represent the host. The IP address can thus be
divided into two parts:

Network ID: It identifies which network you are on. The number of
networks in any class is given by the formula:
Number of Networks = 2^networkBits

Host ID: It identifies your machine on the network. The number of
hosts in any class is given by the formula:

Number of Hosts = 2^hostBits - 2


Here, 2 IP addresses are subtracted because:

1. The Host ID in which all the bits are set to 0 is not assigned,
because it represents the network ID.
2. The Host ID in which all the bits are set to 1 is reserved as the
Direct Broadcast Address (for sending data from one network to
all the other hosts in another network).

Class A
The IP address belonging to Class A uses only the first octet to
identify the network and the last three octets are used to identify the
host.

1. The Network ID has 8 bits.


2. The Host ID has 24 bits.
The first bit of the first octet is always set to 0.
The default subnet mask for Class A IP address is 255.0.0.0. Subnet
masks are used to tell hosts on the network which part is the
network address and which part is the host address of an IP address.

How does the subnet mask do this?

Suppose you have an IP address as

10.20.15.3 = 00001010.00010100.00001111.00000011

and the mask as,

255.0.0.0 = 11111111.00000000.00000000.00000000

The IP address bits whose corresponding mask bits
are 1 represent the network ID, and the address bits whose
corresponding mask bits are 0 represent the host ID.

10.20.15.3 = 00001010.00010100.00001111.00000011

255.0.0.0 = 11111111.00000000.00000000.00000000

By comparing the corresponding bits of the address and the mask, we
get:

netid = 00001010 = 10

hostid = 00010100.00001111.00000011 = 20.15.3

Class A has:
• Network IDs = 2⁷ - 2 = 126 (here 2 network IDs are subtracted
because the 0.x.x.x and 127.x.x.x networks are special; 127.x.x.x
is reserved for the localhost)
• Host IDs = 2²⁴ - 2 = 16,777,214
The IP address belonging to Class A range from 1.a.a.a to
126.a.a.a.(where a ranges from 0 to 255)

Class B
The IP address belonging to Class B uses the first two octets to
identify the network and the last two octets are used to identify the
host.

1. The Network ID has 14 bits.
2. The Host ID has 16 bits.
The first two bits of the first octet are always set to 10.

The subnet mask for class B is 255.255.0.0.

So, class B has:

• Network IDs = 2¹⁴ = 16,384
• Host IDs = 2¹⁶ - 2 = 65,534 host addresses
The IP addresses belonging to Class B range from 128.0.a.a to
191.255.a.a (where a ranges from 0 to 255).

Class C
The IP address belonging to Class C uses the first three octets to
identify the network and the last octet is used to identify the host.

1. The Network ID has 21 bits.
2. The Host ID has 8 bits.
The first three bits of the first octet are always set to 110.

The subnet mask for class C is 255.255.255.0.

So, class C has:

• Network IDs = 2²¹ = 2,097,152
• Host IDs = 2⁸ - 2 = 254
The IP addresses belonging to Class C range from 192.0.0.a to
223.255.255.a (where a ranges from 0 to 255).

Class D
The IP addresses belonging to Class D have the first four bits of the
first octet set to 1110. The remaining bits identify the multicast
group.

The IP addresses belonging to Class D range from 224.0.0.0 to
239.255.255.255.
Class D is reserved for multicasting. Also, this class doesn't have
any subnet mask.

Class E
The IP addresses belonging to Class E have the first four bits of the
first octet set to 1111. The remaining bits are the host bits.

The IP addresses belonging to Class E range from 240.0.0.0 to
255.255.255.254.
This class is reserved for future use and for research and
development purposes. It also doesn't have any subnet mask.
How to identify the IP class from a given IP
address?
So, using the above knowledge given an IP address you can identify
the class of the IP address.

You can do it by looking at the first octet of the IP address. Convert
the dotted-decimal IP address to its binary equivalent.

• If it begins with 0, then it’s a Class A network.
• If it begins with 10, then it’s a Class B network.
• If it begins with 110, then it’s a Class C network.
• If it begins with 1110, then it’s a Class D network.
• If it begins with 1111, then it’s a Class E network.
Alternatively, you can learn the range of IP addresses of each class.
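A short sketch of this classification in Python, working from the value of the
first octet (which is equivalent to checking its leading bits):

# Classifying an IPv4 address by the value of its first octet.
def ip_class(address):
    first_octet = int(address.split(".")[0])
    if first_octet < 128:
        return "A"      # leading bit 0
    if first_octet < 192:
        return "B"      # leading bits 10
    if first_octet < 224:
        return "C"      # leading bits 110
    if first_octet < 240:
        return "D"      # leading bits 1110
    return "E"          # leading bits 1111

for ip in ("10.20.15.3", "172.16.4.2", "201.10.1.130", "224.0.0.9", "245.1.1.1"):
    print(ip, "-> Class", ip_class(ip))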

Difference between IPv4 and IPv6


In the early days of the Internet, IPv4 was used almost
everywhere, but with the enormous growth in Internet use its
address space has been exhausted. IPv6 was therefore
introduced; it offers a practically inexhaustible address space
along with advanced features like auto-configuration of
IP addresses and mobility, and this address space will not be
exhausted in the near future. In this blog, we will see the differences
between the IPv4 and IPv6 versions of IP addresses.

IPv4
IP version 4 is the older version. It uses 32 bits to create a single
unique address on the internet, so IPv4 is limited to 2³² =
4,294,967,296 addresses. An IPv4 address consists of four numbers,
each ranging from 0 to 255 and separated by a single dot (.).
Here, each number is the decimal (base-10) representation of an
8-bit binary (base-2) number. These IP addresses are what make
sure our emails come and go as expected, our searches take us to
the website we want, and so many other things that we do on the
internet work correctly.

Example of an IPv4 address: 63.171.234.171

IPv4 Packet Format


An IPv4 datagram is a variable-length packet composed of a header (20
to 60 bytes) and data, with a total length of up to 65,535 bytes. (A
short parsing sketch follows the field list below.)

• Version: It defines the version number of IP which is 4 for this


version. Its length is 4 bits.
• Header length(HLEN): It shows the size of the header. Its
length is 4 bits.
• DSCP: It stands for Differentiated Services Code Point. Together
with the 2-bit ECN field, it occupies the 8-bit service byte and
determines how the datagram should be handled.
• Total length: It tells the entire length of the IP datagram
(header plus data). Its length is 16 bits.
• Identification: During transmission, if the data packet is
fragmented, this field is used to give the same number
to each fragment so that it can be used for reconstructing
the original packet. Its length is 16 bits.
• Flags: It is used to handle fragmentation and it identifies the
first, middle or last fragment. Its length is 3 bits.
• Fragment offset: It represents the offset of data in the original
data stream. Its length is 13 bits.
• Time to Live (TTL): It tells the number of hops a datagram
can travel before it is abandoned. At each hop, the value of TTL
is decreased by 1 and when it reaches 0, the packet is
abandoned. Its length is 8 bits.
• Protocol: It tells which protocol carries the data,
i.e. TCP, UDP, etc. TCP has protocol number 6 and UDP has
protocol number 17. Its length is 8 bits.
• Header Checksum: This is used for error-detection. Its length
is 16 bits.
• Source IP address: It has the IP address of the source. The
length is 32 bits.
• Destination IP address: It has the address of the destination.
The length is 32 bits.
• Options: It provides more functionality to the IP datagram. It
contains information like routing, timing, management, etc.
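
To make the field layout concrete, here is a hedged sketch that unpacks the fixed 20-byte part of an IPv4 header with Python's struct module; the sample bytes are hand-crafted for illustration, not captured traffic.

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (options, if any, follow it)."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "header_len_bytes": (ver_ihl & 0x0F) * 4,
        "total_length": total_len,
        "identification": ident,
        "flags": flags_frag >> 13,
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
        "protocol": proto,          # 6 = TCP, 17 = UDP
        "header_checksum": checksum,
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
    }

# A hand-crafted sample header used only for demonstration:
sample = bytes.fromhex("4500003c1c4640004006b1e6c0a80001c0a800c7")
print(parse_ipv4_header(sample))
```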
IPv6
It is the replacement for IPv4. It uses 128 bits to create a unique
address. This means that there can theoretically be 2¹²⁸ unique
addresses, i.e. 340,282,366,920,938,463,463,374,607,431,768,211,456,
and this number will never run out (at least in the near future). An
IPv6 address consists of eight groups of hexadecimal digits separated
by colons (:). Since IPv4 already uses dotted-decimal notation, IPv6
adopted hexadecimal notation to avoid any confusion. If consecutive
groups contain all zeros, the notation can be shortened by
replacing them with a double colon (::).

Example of an IPv6: adba:1925:0000:0000:0000:0000:8a2e:7334

In the above IP address, four consecutive groups contain only zeros.
These zero groups can be replaced by a double colon, and the address
can be re-written as adba:1925::8a2e:7334.
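
This zero-compression rule can be verified with Python's standard ipaddress module, which can print both the compressed and the fully expanded forms of the example address above:

```python
import ipaddress

addr = ipaddress.IPv6Address("adba:1925:0000:0000:0000:0000:8a2e:7334")
print(addr.compressed)   # adba:1925::8a2e:7334
print(addr.exploded)     # adba:1925:0000:0000:0000:0000:8a2e:7334
```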

IPv6 Packet Format


An IPv6 datagram is a packet composed of a base header (40 bytes)
and a payload (up to 65,535 bytes). The payload consists of optional
extension headers followed by the data.

The base header consists of the following fields:

• Version: It defines the version number of IP which is 6 here. Its


length is 4 bits.
• Traffic class (priority): It defines the priority or class of service
of the packet. Its length is 8 bits.
• Flow label: It helps in controlling the flow of data. The source
device labels the data packets so that routers can handle packets
of the same flow consistently. Its length is 20 bits.
• Payload length: It tells the entire length of the IP datagram
except for the base header. Its length is 16 bits.
• Next header: It identifies the next extension header or, if none
is present, the upper-layer protocol such as TCP or
UDP. Its length is 8 bits.
• Hop limit: This works like the TTL in IPv4. It is used
to prevent the data from travelling in an infinite loop. At
each hop, the hop limit is decreased by 1, and when it
reaches 0, the packet is abandoned. Its length is 8 bits.
• Source address: It has the IP address of the source. The length
is 128 bits.
• Destination address: It has the IP address of the destination.
The length is 128 bits.
Difference between IPv4 and IPv6
1. Size of IP address: IPv4 is a 32 bits address and IPv6 is a 128
bits address.
2. Addressing Method: IPv4 uses dotted-decimal notation: the 32
bits are grouped into four octets, written in decimal and
separated by dots (.). IPv6 uses hexadecimal notation: the 128
bits are grouped into eight blocks, written in hexadecimal
(digits and letters) and separated by colons (:).
3. Address Space: IPv4 provides 2³² addresses. IPv6 provides
2¹²⁸ addresses.
4. Address Configuration: IPv4 uses the DHCP server to allocate
IP addresses to the host or it is done manually. In IPv6 this is
done by IPv6 Stateless Address Autoconfiguration. The general
idea is to have a device generate a temporary address until it
can determine the characteristics of the network it is on, and
then create a permanent address it can use based on that
information.
5. Mapping: IPv4 uses ARP to map IPv4 addresses to MAC
addresses. IPv6 uses NDP (Neighbour Discovery Protocol) to map
IPv6 addresses to MAC addresses.
6. Security: IPv4 security is dependent on the
application. IPSEC(Internet Protocol Security) is an inbuilt
security feature of IPV6 protocol.
7. Encryption: In IPv4 encryption and authentication are not
provided. In IPv6 encryption and authentication are provided.
8. Packet Fragmentation: In IPv4, fragmentation can be done
by the sender and by forwarding routers. In IPv6, fragmentation is
done only by the sender. In other words, IPv6 uses
sender-only (end-to-end) fragmentation, whereas in IPv4
intermediate routers can also fragment a packet if
it is too large for the next link.
9. Header Length: The header length is 20 bytes in IPv4, whereas
the header length is 40 bytes in IPv6.
10. Checksum Field: IPv4 uses the checksum field in the
header format for handling errors, whereas IPv6 doesn't
have this field.
11. Transmission of Packets: IPv4 uses broadcasting for
transferring packets from source to destination whereas IPv6
uses multicasting and anycasting.

What is DNS?
Whenever we make a search on the internet, we generally request
some services of the server. We make use of user-friendly words and
keywords to request the services, or make the search. But, we know
that the computer understands only the low-level binary data, not
the user-friendly data. We also know that each device on the
network has an IP(Internet Protocol) address through which we can
reach that device. The IP address is written in decimal (IPv4) or
hexadecimal (IPv6) format, which is very tough for a user to remember.
So, users rely on friendly keywords to search for devices over
the network. Thus, we need to map the user-friendly keywords to
the IP addresses to make use of them.

Previously, when the number of server machines or websites was


small, there was a centralized file containing key-value pairs. Here,
the user-friendly name acted as a key, while the IP address acted as a
value, and this file simply performed the mapping of keys and values.
But with the advancement of technology and the enormous
increase in the number of server machines and websites, a
centralized file can no longer provide the mapping. Thus, we need a
system that structures these mappings to meet present-day
requirements, and hence we use the Domain Name System (DNS).

So, in this blog, we'll mainly learn about the Domain Name
System(DNS) in detail. We'll also see the working of the DNS, and its
two types, i.e., Authoritative, and Recursive DNS.

Now, let us study these things one by one.

DNS(Domain Name System)


The Domain Name System(DNS) is a system that is used to map
an IP address to an alias name. It is much like a telephone
book, where we store the names corresponding to the phone
numbers. DNS mainly translates the user-friendly domain names to
IP addresses, so that the user can access the contents stored at that
IP address. For example, www.demo.com can be the domain name
for an IP address, say, 198.115.212.1.

DNS records are distributed across the globe. All the information of
the DNS system is decentralized, so as to reduce the dependency on
a centralized source. Hence, the host computers can access the
nearest computer holding the domain system information.

Following are the key terms associated with DNS:

• Namespace: A namespace assigns a unique name to each


address. This is so because each device has a unique IP address
on the network. A namespace is of the following two types:

1. Flat: A flat namespace has no defined structure for assigning


the alias name to the IP address. The name assigned using a flat
namespace is just a sequence of characters. It is mainly
preferred for small systems, where the number of IP addresses
is small.
2. Hierarchical: A hierarchical namespace has a defined structure
for assigning an alias name to an IP address. The alias name
consists of several parts. This kind of namespace is good for
large systems having numerous devices with numerous IP
addresses. The main advantage of using a hierarchical
namespace is that it can be decentralized, removing the
dependency from a central location.

• Domain Name Space: Domain namespace uses hierarchical


namespace. The names can be derived using an inverted tree-
like structure with the root at the top. The maximum number of
levels in a tree can be 128(that ranges from 0 to 127).
• Label: A label is assigned to each node in the inverted tree(that
is used for Domain Name Space). It specifies the name of a
particular node in the system and can be depicted using a string
of 63 characters maximum. A label must be unique in order to
reduce the ambiguity and enhance uniqueness in the Domain
name.
• Domain Name: A Domain name is a sequence of labels
separated by dots(.). It is always read from the child to the root
node in the inverted tree. For example, if the root node's label
is 'com', the intermediate node's label is 'demo', and the child
node's label is 'abc', then the domain name is written as
'abc.demo.com'.
• TLD: TLD stands for Top Level Domains. These are the
domains that are mostly used. Also, each TLD is capable of
holding various domains. They are mainly 2-3 characters in
length. For Example, com, edu, org, in, us, etc.
Following are the two classifications of Domain Name System:

1. Authoritative DNS: An Authoritative DNS is also known as a


domain provider or registrar. It answers the DNS queries by
translating the domain names into the equivalent IP addresses.
Authoritative DNS has the final authority over a domain name
and is responsible for fulfilling the queries of the Recursive
DNS. For Example, Amazon Route 53, etc.
2. Recursive DNS: A Recursive DNS is also referred to as the
Resolver DNS. It acts as a mediator between the application
program at the host computer and the Authoritative DNS. It
forwards the domain name queries to the Authoritative DNS to
get the equivalent IP addresses. So that the application
program can access the contents at that IP address. When it
fetches the IP address for a specific domain name for the first
time, it stores it temporarily in its cache memory so as to avoid
the unnecessary overhead of looking up the same IP address next
time.
Working of DNS:
The host computer uses some application programs to access the
domain name. The application program communicates to the
Recursive DNS, which in turn communicates to the Authoritative
DNS to translate the domain name to the equivalent IP address.
Hence, the contents of that IP address can be fetched and displayed
on the application program.
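
From the application's point of view, this whole chain is hidden behind a single resolver call. A minimal sketch using Python's standard library (the hostname is only an example):

```python
import socket

# Ask the resolver (the Recursive DNS configured on this machine)
# to translate a hostname into IP addresses.
hostname = "www.example.com"   # example hostname, not a recommendation
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```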

What is the difference between MAC address and IP address?


In computer networks, all the network devices can connect and
communicate with each other. But a question always arises in our
minds: how does one device uniquely identify another device in the
network? This is possible only with the help of the MAC and
IP addresses. These two are often confused with each other. So,
before defining them, let us first take an example to explain them.

For example, suppose someone has to send a courier to another person.


The sender has to specify two things about the receiver in order to
successfully send the courier: the receiver's
address (which may contain a house number, street, city, state, and pin
code), and the receiver's name (in order to specifically identify the
right person to deliver the courier to). If we correlate this example with
networking, then the IP address is the address of the network
connection on which multiple devices can be present, and the MAC
address is the address of the specific node to which we want to
deliver the data.

We'll learn about these two addresses in detail. We'll also learn the
dissimilarities between them. Now let us see about them one by one.

IP Address(Internet Protocol Address)


An IP address is an address that uniquely identifies a network
connection. It is termed as the 'Logical Address' which is
provided to a connection in a network.

IP addresses are generally provided by the administrator of the


network or by Internet Service Providers (ISPs). It can be static or
dynamic in nature: it may be temporary and change each
time a device connects to a different network. IP addresses
are binary numbers underneath, though they are usually written in
dotted-decimal or hexadecimal notation. The IP address is mainly used
in routing operations, as it specifically identifies a network connection.
It is used at the network layer of the OSI or TCP/IP reference models.

There are mainly two types of IP addresses:

1. IPv4(Internet Protocol Version 4): IPv4 is a 32-bits address.


This address is available in decimal form along with dots(.) in
between. For Example - 192.168.0.11. The header field of the
IPv4 is 20 bytes, and the checksum bits are present in the
header for error control. The IPsec support(for security feature)
is optional in IPv4. The optional fields are also available in the
IPv4 addressing. It only guarantees that every host can handle
packets of at least 576 bytes. The IPv4 addressing can be used for
multicasting and broadcasting the data packets.
2. IPv6(Internet Protocol Version 6): IPv6 is a 128-bits address.
This address is written in hexadecimal form with colons (:) in
between. For example:
2FFE:F300:0213:AB01:0132:7289:2134:ABDC. The header field
of IPv6 is 40 bytes, but no checksum bits are present in
the header. The IPsec support (for the security feature) is
mandatory in IPv6. Optional fields are not part of the fixed IPv6
header; they are carried in extension headers. Every IPv6 link
must support a packet size (MTU) of at least 1280 bytes.
IPv6 addressing cannot be used for broadcasting.
MAC Address(Media Access Control Address)
MAC address is the address that uniquely identifies a node on
the network. It is also called the physical address, the hardware
address, or the burnt-in address. The MAC address is provided by the
manufacturer of the NIC(Network Interface Card). It is embedded
into the hardware and remains constant for that device.

A MAC address is a 48-bit address which contains either 6 groups of 2


hexadecimal digits or 3 groups of 4 hexadecimal digits. These
hexadecimal digits can be separated either by hyphens (-) or
colons (:). For example: 23-AB-CD-EF-56-78, or 23AB:CDEF:5678.
The 48-bit MAC address has two parts of 24 bits each. The first 24
bits represent the OUI (Organizationally Unique Identifier), and the
next 24 bits are the vendor-assigned, device-specific value. The MAC
address works at the Data-Link layer of the OSI or TCP/IP reference
models.
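
As a small illustration of the 24/24 split described above (a sketch only, using the made-up example address from this section), the OUI can be separated from the device-specific half like this:

```python
def split_mac(mac: str):
    """Split a MAC address into its OUI and device-specific halves."""
    # Accept either hyphen- or colon-separated notation.
    digits = mac.replace("-", "").replace(":", "").upper()
    oui, device = digits[:6], digits[6:]
    return oui, device

print(split_mac("23-AB-CD-EF-56-78"))   # ('23ABCD', 'EF5678')
```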

Following are the dissimilarities between the MAC and IP address:

1. Purpose: IP address is mainly used to identify the connection


of a node on the network, while the MAC address is used to
identify the unique address of that node.
2. Address Type: The IP address is a software-based or logical
address, while the MAC address is a hardware-based, burnt-in,
or physical address.
3. Address Provider: The IP address is provided by the
administrator of the network, DHCP(Dynamic Host
Configuration Protocol), or the ISP(Internet Service Provider).
On the other hand, the MAC address is provided by the device
manufacturer and is embedded in the NIC(Network Interface
Card).
4. Address Length and Representation: In IP address, Ipv4 has
an address length of 32-bits, while IPv6 has an address length
of 128-bits. On the other hand, the MAC address is a 48-bits
address. Also, the IP address is written in dotted-decimal (IPv4) or
hexadecimal (IPv6) format, while the MAC address is represented
in hexadecimal format with hyphens (-) or colons (:) in
between.
5. Network Classes: The IP addresses uses all kinds of network
classes, i.e., A, B, C, D, and E for addressing a connection. On
the other hand, no such network classes are used for addressing
the MAC address for a specific device.
6. Subnetting: Subnetting is the process of dividing a network
into two or more small networks. IP address uses subnetting,
while the MAC address does not use it.
7. Flexibility: The IP address is flexible in nature, it gets changed
whenever a device connects to some other network. On the
other hand, the MAC addresses are not flexible and remain
constant for a device.
8. Network Traffic Used: The IP address can be used for
Multicasting or Broadcasting, while the MAC address can be
used for broadcasting.
9. Implementation Layer: The IP address or logical addressing is
implemented in the Network layer of the OSI or TCP/IP model.
On the other hand, the MAC address or physical addressing is
implemented in the Data-Link layer of the OSI or TCP/IP
reference model.

What is a Network Operating System?

An Operating System(O.S.) is a System software that manages


the hardware resources and provides services to the Application
software. There are many types of operating systems depending
upon its features and functionalities. They can be Batch O.S.,
Multitasking O.S., Multiprocessing O.S., Network O.S., Hybrid O.S.,
etc.

In this blog, we'll focus on the Network Operating System. We'll


learn about the two types of Network O.S., their advantages, and
disadvantages. At last, we'll see some common features of the
Network O.S.

Network Operating System


A Network Operating System is a computer operating system that
allows various autonomous computers to connect and communicate
over a network. An autonomous computer is an
independent computer that has its own local memory, hardware, and
O.S. It is capable of performing operations and processing for a
single user on its own. The autonomous computers can run the same
or different operating systems.

The Network O.S. mainly runs on a powerful computer that runs the
server program. It provides security and the capability of managing
data, users, groups, applications, and other network functionalities.
The main advantage of using a Network O.S. is that it facilitates the
sharing of resources and memory amongst the autonomous
computers in the network. It also allows the client computers
to access the shared memory and resources administered by the
Server computer. In other words, the Network O.S. is mainly
designed to allow multiple users to share files and resources over the
network.

The Network O.S. is not transparent in nature. The workstations


connected in the network are aware of the multiplicity of the
network devices. A Network Operating System can distribute
its tasks and functions amongst the connected nodes in the network,
which enhances the overall system performance. It can allow
multiple nodes to access shared resources concurrently, which improves
efficiency. One of the major benefits of using a Network O.S. is
remote access: it allows one workstation to connect and
communicate with another workstation in a secure manner. To
provide security, it has authentication and access control
functionality. The Network O.S. implements many protocols over
the network, which provides a proper implementation of the network
functionalities. One drawback of a Network O.S. is its tightly coupled
nature in the network.

Some examples of Network O.S. are Novell NetWare, Microsoft


Windows Server (2000, 2003, 2008), UNIX, Linux, etc.

There are mainly two types of Network O.S., they are:

1. Peer-to-Peer
2. Client-Server
Now let us learn them one by one, along with their advantages and
disadvantages.

Peer-to-Peer
Peer-to-Peer Network Operating System is an operating system
in which all the nodes are functionally and operationally equal
to each other. No node is superior or inferior; they are all capable of
performing similar kinds of tasks. All the nodes have their own local
memory and resources. Using the Network O.S., they can connect
and communicate with each other. They can also share data and
resources with one another. One node can also communicate and
share data and resources with a remote node in the network by using
the authentication feature of the Network O.S. The nodes are
directly connected with each other in the network with the help of a
switch or a hub.

Following are the advantages of the Peer-to-Peer Network


Operating System:

1. Easy to install and setup.


2. The setup cost is low.
3. There is no requirement for any specialized software.
4. The sharing of information and resources is fast and easy.
Following are the disadvantages of the Peer-to-Peer Network
Operating System:

1. The performance of autonomous computers may not be so good


when sharing some resources.
2. There is no centralized management.
3. It is less secure.
4. It does not have backup functionalities.
5. There is no centralized storage system.

Client-Server
The Client-Server Networking Operating System operates with a
single server and multiple client computers in the network. The
Client O.S. runs on the client machine, while the Network Operating
System is installed on the server machine. The server machine is a
centralized hub for all the client machines. The client machines
generate a request for information or some resource and forward it
to the server machine. The server machine, in turn, replies to the
client machine by providing appropriate services to it in a secure
manner. The server machine is a very powerful computer that is
capable of handling large calculations and operations. It also has
the ability to administer the whole network and its resources. It
can be multiprocessing in nature, processing multiple client
requests at the same time. The Network O.S. enhances the reach of
client machines by providing remote access to other nodes and
resources of the network in a secure manner.
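
As a minimal sketch of this request/reply pattern (just the client-server idea in a few lines, not a real network operating system; the loopback address and port number are arbitrary choices):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary loopback address and port

srv = socket.create_server((HOST, PORT))   # the "server machine"

def handle_one_client():
    """Accept one client request and send back a reply."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"server reply to: " + request)

threading.Thread(target=handle_one_client, daemon=True).start()

# The client forwards a request to the centralized server and reads the reply.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"list shared files")
    print(cli.recv(1024).decode())

srv.close()
```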

Following are the advantages of the Client-Server Network


Operating System:

1. It has centralized control and administration.


2. It has a backup facility for lost data.
3. The shared data and resources can be accessed concurrently by
multiple clients.
4. It has better reliability and performance.
Following are the disadvantages of the Client-Server Network
Operating System:

1. The setup cost is very high.


2. There is a requirement of specialized software for client and
server machines to function properly.
3. There is a need for an administrator to administer the network.
4. There may be network failure, in case of central server failure.
5. A huge amount of client requests may overload the server.
Following are the common functionalities of the Network Operating
System:

1. Data and Resource sharing


2. Performance
3. Security
4. Robustness
5. Scalability
6. Memory management

Difference between a Domain and a Workgroup


In the present-day scenario, computer networks are spreading
rapidly. In various situations, we need to sub-divide our network into
multiple small networks to easily administer and manage the
network and its resources. These goals can be easily achieved by
creating a network domain or workgroup in computer networks.

Both domain and workgroup can be used to group the computers to


sub-divide the network. But, they have different functionalities and
applications. So in this blog, we’ll learn about domains and
workgroups in a computer network. We will also see the applications
and differences between these two in detail.

Now, let us first learn about these two terms one by one.

Domain
A domain can be seen as a logical grouping of computers or
devices on the same or different kinds of networks. Each
computer on a domain is administered by a centralized server that
manages each computer within a domain. These network domains
are uniquely identified using unique domain names that are assigned
by a domain controller. A domain controller acts as a server within a
domain for the domain hosts and provides the authentication
services, domain names, and various functionalities to them. One of
the major functionalities of using a domain is secure access, in which
no computer outside the domain can access the domain
computers. The domain controller can also be used as a centralized
database for storage which can be shared by all the devices over a
particular domain.

A host computer can be added to a domain via LAN, WAN, or VPN. A


computer connected in a domain can access any computer on that
domain to access their files and resources. On the Internet, network
domains can be identified using the IP address. If two devices share
some common part of the IP address, they are said to be in a
common network domain. For example, if two devices have IP
addresses - 192.168.10.2 and 192.168.10.3, then they are said to be in
the same network domain. A domain can have multiple sub-
domains. A router can be used to connect different networks and
sub-domains.
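
Assuming a 255.255.255.0 (/24) mask for the example above, this "same network" check can be expressed with Python's standard ipaddress module:

```python
import ipaddress

# Assumed /24 mask for the example addresses above.
network = ipaddress.ip_network("192.168.10.0/24")

print(ipaddress.ip_address("192.168.10.2") in network)   # True
print(ipaddress.ip_address("192.168.10.3") in network)   # True
print(ipaddress.ip_address("192.168.11.5") in network)   # False
```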

A domain has centralized control over the devices in the domain. It


also has some features like reliability, scalability, etc. There is a need
for some specialized software for creating and managing domains.
Most of the O.S. provides some inbuilt software for these purposes.
Domains are mainly used in a ‘Client-Server’ model and are
beneficial when the number of computers is very large. A network
domain can be mainly used where we want to sub-divide the
network, and also want to join multiple networks having different
architecture.
For example, in an organization, if we want all the computers to
share one another's resources with full access, and we also
need some centralized control over the devices, then we can
achieve these goals by creating a network domain and adding all the
intended devices to it.

Workgroup
A workgroup is a collection of autonomous computers that are
connected over a network and can share common files,
resources, and responsibilities with one another. It serves
approximately the same purpose as a domain, i.e., it can be used to sub-
divide or categorize a network. But the main difference is that there is
no centralized control over the devices in the workgroup. It can be
implemented to sub-divide a large network into workgroups for
better management.

A workgroup name is not provided by any server. Also, there is no


dependency on any hardware components for assigning workgroup
names. In general, we provide some workgroup names to some
devices, and they start working as a workgroup.

A workgroup mainly implements a peer-to-peer networking model,


where each computer is autonomous, has its own user accounts,
permissions, and memory, and is equally important. Also, these
computers are not as secure: they have local security, i.e., each
device maintains its own security. It may also happen that one
computer in the workgroup may not have access permissions to all
the computers in that particular workgroup. Every computer has to
maintain its own user accounts and access permissions.
A workgroup can have the computers of the same network only.
These computers can be connected using a hub or a switch. It is very
easy to install and configure and is beneficial for fewer computers
only. Most of the O.S. provides some inbuilt software for creating
and managing the workgroups. A workgroup is beneficial to be used
in small local area networks like schools, colleges, buildings, etc.

Following are the differences between domain


and workgroup:
• Installation and Configuration: A domain is more complex than a
workgroup to install and configure. On the other hand, a
workgroup is easy to install and configure, but it is very hard to
maintain.
• Networking Model: Domain is based on a client-server model,
where multiple clients rely on a single server for various
services. On the other hand, a workgroup is based on a peer-to-
peer model where each computer is equally important.
• Administration and Management: A domain has centralized
control over the device. On the other hand, the administration
and management of a workgroup are non-centralized in nature.
• Database: The computers in a domain have a centralized
database. On the other hand, each computer in a workgroup
mainly has its own local database.
• Autonomous: The devices connected in a domain are not
autonomous, they are governed by centralized servers. On the
other hand, the devices connected in a workgroup are mainly
autonomous in nature.
• Naming: In the case of a domain, the domain names are
provided by the domain controllers on the basis of IP address.
On the other hand, there are no dependencies on any hardware
components and server for assigning the workgroup names.
• User account and groups: The user accounts and groups are
managed and maintained at the domain level. On the other
hand, in a workgroup, it is managed and maintained by every
computer of the workgroup individually.
• Location: A domain can be formed using the devices of one or
more different networks. On the other hand, the devices of the
same network can only be added to a workgroup.
• The number of computers: A domain can work better when
there is a large number of devices connected to it. On the other
hand, a workgroup works better with fewer computers.
• Scalability: A domain has a centralized control and is easy to
scale. On the other hand, a workgroup is very hard to scale due
to no centralized control. The complexity enhances when we
increase the number of workgroup computers.
• Security: A domain has very advanced security due to
centralized control. On the other hand, a workgroup is much less
secure due to no centralized access control.
• Data Recovery: Data can be recovered in a domain from the
centralized storage. On the other hand, data recovery is not
possible in a workgroup due to the local storage of each
computer.
• Type of data: A domain is mainly used to transfer and share
sensitive and important data due to security. On the other
hand, a workgroup is used to share less secure and personal
data only due to less security.
• Application: A domain is mainly preferred for large public and
business networks. On the other hand, a workgroup is mainly
preferred for small local area networks like schools, colleges,
buildings, etc.

What is a VPN? Explain its working


Have you ever tried to visit a website and ended up getting a message
like “Your requested URL has been blocked as per the …”? Why does
this happen and how can we overcome it? Also, have you ever
wondered whether your identity is safe while you are using free wifi or a
public network? If not, then how can that be achieved? Think :)

Yes, you got it right. We can achieve this by using VPN i.e. Virtual
Private Network. In this blog, we will learn about the VPN.

The following are the topics that are going to be discussed in this
blog:

1. What is a Private Network?


2. What is a VPN?
3. How VPN actually works?
4. What are the benefits of using a VPN?
5. Disadvantages of VPN

Private Network

A private network is a network which is configured such that
devices outside the private network cannot access it and
cannot communicate with the systems present in that private
network. Such a network has restrictions on access. It is mostly
used in businesses and private organizations because they have
confidential information and they don't want to share it outside
the organization.

For example, if you have a fully automated house where


every device is connected to every other device, then that network of
your house is your private network, and no one else from
outside can use it, i.e. only you have access to
that network. But when you are accessing your private network
from other places, i.e. outside the reach of your private network,
you might encounter some problems. Let's see some of
those problems.

Problems

Suppose we have our own private network and we want to
access that network over the internet from a remote location
and send some sensitive information. How can we do that? You
can access the data on your private network using the internet,
but there is one problem with this approach: the information
can be intercepted by hackers even if we use some encryption
techniques. Also, your ISP (Internet Service Provider) will have
all the information about this data transfer.

Another problem is that whenever we use an internet


service through an ISP (Internet Service Provider), the
ISP can keep a record of all the activities that we do over the
internet, i.e. your browsing history and other sensitive
information can be accessed by the ISP. Why share your
information with your ISP?

Both these problems can be solved with the help of a VPN,
i.e. a Virtual Private Network. VPN servers create a secure
connection between the remote client and our private network, and
we can easily send the information.

What is a VPN?


A Virtual Private Network, or simply VPN, is a service in
which we extend our local network so that we can securely
access it from any remote location over the internet. When we
make a request using a VPN, our internet service provider
redirects us to the VPN server, and all the websites which
we access are then reached through that server. Here, our ISP does not
know which sites we are accessing; it only sees that we have made a
connection to the VPN server, so our identity is shielded. In this way,
the online identity of the individual can also easily be hidden from the
ISP.

How does VPN actually work?


A VPN is used when we have to send messages from a remote
computer to a local network over the internet. Our message has a
header and contents according to which it is routed to the
destination. But here we have to send the message over the internet,
and the header of the message contains a local address of the
destination. So, when we send the message, it is received by the VPN
server. This server wraps the message and gives the packet a new
header which is used for routing it over the internet. It also encrypts
the message and makes a secure connection. This connection between
the VPN server and the VPN client appears as a tunnel, and the packet
is sent through this tunnel. The connection is made using
a tunnelling protocol, so a VPN uses an additional protocol along
with the TCP/IP protocols for tunnelling. Finally, the VPN client at
the other end decrypts the data and delivers it to the destination
address.
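
As a purely conceptual sketch of the wrapping step described above (real VPNs use protocols such as IPsec or WireGuard with proper cryptography; the classes and the XOR "encryption" below are made up only to illustrate the idea of a packet inside a packet):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes

def fake_encrypt(data: bytes) -> bytes:
    """Stand-in for real encryption; only illustrates that the inner packet is hidden."""
    return bytes(b ^ 0x5A for b in data)

def tunnel(inner: Packet, vpn_client_ip: str, vpn_server_ip: str) -> Packet:
    """Wrap the original (inner) packet inside an outer packet addressed to the VPN server."""
    inner_bytes = f"{inner.src}|{inner.dst}|".encode() + inner.payload
    return Packet(src=vpn_client_ip, dst=vpn_server_ip,
                  payload=fake_encrypt(inner_bytes))

original = Packet(src="10.0.0.5", dst="10.0.0.9", payload=b"hello private network")
wrapped = tunnel(original, vpn_client_ip="203.0.113.7", vpn_server_ip="198.51.100.1")
print(wrapped.dst)   # only the VPN server's address is visible on the internet
```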

Benefits of using a VPN?


So, it’s time to answer the question which was asked at the start
of the blog. Why do we get messages like “Your requested URL…
”? This may happen due to many reasons:

• Firstly, the site which you want to access is restricted by the


government (torrent sites, for example).
• Secondly, the site which you want to access is not available
in your region. All these problems can be resolved by
using a VPN.

1. We can securely connect from a remote system to a local


network.
2. We can hide our online identity.
3. We can have access to all the sites irrespective of our location.
The network connectivity becomes slow when we use a VPN but that
is overshadowed by the benefits. Let's see some more disadvantages
of using VPN.

Disadvantages of using VPN


VPN has some disadvantages too. The following are some of the
disadvantages of a VPN:

1. Internet connectivity becomes slow. This is because a VPN works


by connecting the network first to a private server and then to
the website. Some VPNs might take a long time to connect
to the VPN server, which may result in longer loading times.
2. The VPN service provider might monitor our activity, use
our data, and sell it to third parties.
3. Although the VPN encrypts our data, we don’t know
how strong the encryption is, and it varies from one service
provider to another. So, it can be a problem.

What is Data Encapsulation and de-encapsulation in networking?

Whenever we send data from one node to another in a computer


network, the data is encapsulated at the sender's side and de-
encapsulated at the receiver's end. Encapsulating the data at
the various layers of the model in use (OSI or TCP/IP)
adds various functionalities and features to the data transmission.
The most important features it adds are the security and reliability
of data transmission between two nodes in a network.

In this blog, we will mainly learn what is encapsulation. We will also


learn the encapsulation and de-encapsulation process in
the OSI and TCP/IP models in detail. So, now let us learn these
things one by one.

Data Encapsulation
Data Encapsulation is the process in which some extra
information is added to the data item to add some features to
it. We use either the OSI or the TCP/IP model in our network, and
the data transmission takes place through various layers in these
models. Data encapsulation adds the protocol information to the
data so that data transmission can take place in a proper way. This
information can either be added in the header or the footer of the
data.

The data is encapsulated on the sender’s side, starting from the


application layer to the physical layer. Each layer takes the
encapsulated data from the previous layer and adds some more
information to encapsulate it and some more functionalities with the
data. These functionalities may include proper data sequencing,
error detection and control, flow control, congestion control, routing
information, etc.

Data De-encapsulation
Data De-encapsulation is the reverse process of data
encapsulation. The encapsulated information is removed from
the received data to obtain the original data. This process takes
place at the receiver’s end. The data is de-encapsulated at the
receiver’s end at the same layer at which it was encapsulated at the
sender’s end. The added header and trailer information are removed
from the data in this process.
The below diagram shows how header and footer are added and
removed from the data in the process of encapsulation and de-
encapsulation respectively.

The data is encapsulated in every layer at the sender’s side and also
de-encapsulated in the same layer at the receiver’s end of the OSI or
TCP/IP model. Actually, we use different terms for the encapsulated
form of the data that is described in the below-mentioned diagram.
Now, we will learn the whole process of encapsulation and de-
encapsulation in the OSI and TCP/IP model step-by-step as
mentioned in the below picture.
Encapsulation Process (At sender’s side)
1. Step 1: The Application, Presentation, and Session layer in
the OSI model, or the Application layer in the TCP/IP
model takes the user data in the form of data streams,
encapsulates it and forwards the data to the Transport layer. It
does not necessarily add any header or footer to the data. But it
is application-specific and can add the header if needed.
2. Step 2: The Transport layer (in the OSI or TCP/IP model) takes
the data stream from the upper layers and divides it into
multiple pieces. The Transport layer encapsulates the data by
adding the appropriate header to each piece. These data pieces
are now called data segments. The header contains the
sequencing information so that the data segments can be
reassembled at the receiver’s end.
3. Step 3: The Network layer (in the OSI model) or the Internet
layer (in the TCP/IP model) takes the data segments from the
Transport layer and encapsulates each one by adding an
additional header. This header contains all the
routing information for the proper delivery of the data. Here,
the encapsulated data is termed a data packet or datagram.
4. Step 4: The Data-Link layer (in the OSI or TCP/IP model) takes
the data packet or datagram from the Network layer and
encapsulates it by adding an additional header and footer to the
data packet or datagram. The header contains all the switching
information for the proper delivery of the data to the
appropriate hardware components, and the trailer contains all
the information related to error detection and control. Here,
the encapsulated data is termed a data frame.
5. Step 5: The Physical layer (in the OSI or TCP/IP model) takes
the data frames from the Data-Link layer and encapsulates them by
converting them into the appropriate data signals or bits
(corresponding to the physical medium).

De-Encapsulation Process (At receiver’s side)


1. Step 1: The Physical layer (in the OSI or TCP/IP model) takes
the encapsulated data signals or bits from the sender and de-
encapsulates them into data frames, which are forwarded to
the upper layer, i.e., the Data-Link layer.
2. Step 2: The Data-Link layer (in the OSI or TCP/IP model) takes
the data frames from the Physical layer. It de-encapsulates the
data frames and checks the frame header whether the data
frame is switched to the correct hardware or not. If the frame is
switched to the incorrect destination, it is discarded, else it
checks the trailer information. If there is any error in the data,
data retransmission is requested, else it is de-encapsulated and
the data packet is forwarded to the upper layer.
3. Step 3: The Network layer (in the OSI model) or the Internet
layer (in the TCP/IP model) takes the data packet or datagram
from the Data-Link layer. It de-encapsulates the data packets
and checks the packet header whether the packet is routed to
the correct destination or not. If the packet is routed to the
incorrect destination, the packet is discarded, else it is de-
encapsulated and the data segment is forwarded to the upper
layer.
4. Step 4: The Transport layer (in the OSI or TCP/IP model) takes
the data segments from the Network layer and de-encapsulates
them. It first checks the segment header and then reassembles the
data segments to form data streams, and these data streams are
then forwarded to the upper layers.
5. Step 5: The Application, Presentation, and Session layer in
the OSI model, or the Application layer in the TCP/IP
model takes the encapsulated data from the Transport layer, de-
encapsulates it, and forwards the application-specific data to
the applications. (A compact code sketch of both processes follows.)
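
A toy sketch of the whole round trip (the layer names and bracketed headers are simplified placeholders, not real protocol headers): each layer prepends its own header on the way down, and the receiver strips them off in reverse order.

```python
LAYERS = ["TRANSPORT", "NETWORK", "DATALINK"]   # simplified stack, top to bottom

def encapsulate(data: str) -> str:
    """Sender side: each layer adds its own header in front of the data."""
    for layer in LAYERS:
        data = f"[{layer}-HDR]{data}"
    return data

def de_encapsulate(frame: str) -> str:
    """Receiver side: each layer removes the header added by its peer."""
    for layer in reversed(LAYERS):
        header = f"[{layer}-HDR]"
        assert frame.startswith(header), f"unexpected header at {layer} layer"
        frame = frame[len(header):]
    return frame

frame = encapsulate("user data stream")
print(frame)                 # [DATALINK-HDR][NETWORK-HDR][TRANSPORT-HDR]user data stream
print(de_encapsulate(frame)) # user data stream
```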
What happens when you use cables longer than the prescribed length in
a network?

In computer networks, we use different networking devices that may


be connection-oriented or connection-less in nature. If we are using
the connection-oriented devices, then there is a need for some
physical medium that will ensure the connection between the
devices. Most commonly, we use cables to connect these devices
physically in order to provide a path for data transmission.

But the length of these cables is prescribed for the optimum use of
the network and its resources. The prescribed length of the cables
depends upon the type of cable used. For example, if we are using a
co-axial 10BASE5 cable then its prescribed length is around 500
meters, while the prescribed length for co-axial 10BASE2 cable is
around 185 meters.

So in this blog, we will see what happens when we use the cables
above the prescribed length in computer networks. We will also see
how using a longer length cable than prescribed length can affect the
network and its functionality.

Actually, the standards or the prescribed length of any cable is set by


some global organization based on all the parameters that will affect
the network or its components. In the below-mentioned points, we
will see the adverse effect of using the longer length cables than the
standard length on the network and its functionalities.

Adverse Effects of using a longer length cable than the


prescribed length are as follows:
1. Signal Quality: When we use cables of unnecessarily long
lengths, the signal quality gets degraded due to the resistance
in the cable.
2. Transmission Speed: The transmission speed of the data is
inversely proportional to the length of the cable. We should
not use too short a cable either, as that can create chances of
interference in the network, but longer cables reduce the
data transmission speed greatly.
3. Data Loss: Due to the longer length of the cables there may be
a loss in the data. If some header information is lost, the
transmission will be affected.
4. Signal Attenuation: Signal attenuation refers to the reduction
in the amplitude of the signal. Due to the increasing length,
there may be a significant drop in the amplitude of the sent
data.
5. Latency: The longer cables induce latency in the network that
adversely affects the whole network. The whole network will
work very slowly.
6. Protocols: The undesirable length of cables also affects the
working of some protocols. Actually, some protocols are greatly
dependent on timing; any latency in the network will affect their
implementation.
7. Noise: As we increase the cable length, the chances of noise
(unwanted signals) in the data also increase. Long cables
are more prone to noise.
8. Troubleshooting: In case of any network failures or any issues
in the network, we troubleshoot the network to find the issue,
and then solve the issue. But in a long cable, it becomes very
difficult to troubleshoot these issues.
9. Installation Cost: When we unnecessarily increase the length
of the cables, the cost of installation of that network also
increases.
10. Reliability and Maintainability: In the case of very long
cables, the reliability of the network decreases. Such a long
cable is more prone to physical cracks and damage. Also, it
is very difficult to regularly maintain such long cables.

What can be done to fix signal attenuation problems?

While making a call, such as a WhatsApp call, you might have
encountered connection errors like a weak or poor signal. Why does the
signal get distorted? What can be done to fix the signal attenuation
problem? In this blog, we will try to find the answer to this problem.
So, let's get started.

What is Attenuation?
In terms of your Internet connection, attenuation means
a reduction or loss in the strength of a signal. It is a natural process
that happens when we transmit the signal over distances. It is
measured in decibels(dB) per unit distance.

The lower the attenuation per unit distance, the higher the efficiency
of the cable.
If the rate of attenuation increases, then the mail which we are
sending, or a WhatsApp call or a normal call we are making to a
friend, becomes more distorted.
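
Since attenuation is measured in dB per unit distance, a quick back-of-the-envelope calculation shows how fast a signal fades with length; the cable figures below are illustrative assumptions, not datasheet values.

```python
def received_power(p_in_mw: float, db_per_km: float, distance_km: float) -> float:
    """Power remaining after attenuation: P_out = P_in * 10^(-total_dB / 10)."""
    total_loss_db = db_per_km * distance_km
    return p_in_mw * 10 ** (-total_loss_db / 10)

# Illustrative comparison: a low-attenuation fibre run vs a lossier copper run.
print(received_power(1.0, db_per_km=0.35, distance_km=10))   # ~0.45 mW left (fibre-like)
print(received_power(1.0, db_per_km=10.0, distance_km=10))   # ~1e-10 mW left (lossy copper)
```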

What causes Attenuation?


1. Noise: Noise such as radio-frequency and electrical interference
may interfere with the signal, weaken the signal strength, and
cause attenuation. The higher the interference from noise, the
higher the attenuation you experience.
2. Travel Distance: If the signal has to travel over a longer
distance then the signal strength decreases with the distance.
3. Physical Surroundings: Factors like temperature and improper
installation of the cable may decrease the signal strength and
cause attenuation.
4. Wire Size: Wires with a larger diameter suffer less attenuation
than wires with a smaller diameter. Fibre-optic cable has
a lower attenuation rate than copper cable: it carries
light over long distances with low attenuation and little
distortion of the signal. In copper wires, by contrast,
there is significant attenuation and distortion, because
copper carries electrical signals, which are much more
prone to noise.

What can be done?


1. The most common way of dealing with this problem is to
use repeaters (devices used to regenerate or replicate a
signal) and hubs that boost the signal strength and hence
counter attenuation of the signals. This will also increase the
maximum range that the signal can travel.
2. The connection should be checked if the installation of cables
is done properly or not.
3. The strength of the signal can also be increased by
amplification. A repeater regenerates the
original signal when the received signal is weak, whereas an
amplifier only increases the amplitude (strength) of the
received signal.

What is the maximum segment length of a 100Base-FX network?

The Institute of Electrical and Electronics Engineers(IEEE) uses


shorthand identifiers like 10Base5, 10BaseF, 100BaseFX, etc., which
encode information about the transfer speed, the physical
medium used, the segment length (the length of
wire that can be used before attenuation becomes significant),
etc. In this blog, we will see how the nomenclature is done and then
find out the maximum segment length of 100BaseFX. So, let's get
started.

What pieces of information are hidden in the


name itself, or how is the nomenclature done?
1. Starting number: Suppose you have a network of 10BaseT,
the number 10, which appears at the start of the identifier,
represents the standard transmission speed of 10 megabits per
second i.e. 10Mbps.
2. The word Base: It refers to Baseband digital transmission.
This tells that the network uses only one carrier frequency for
signaling and requires all the network stations to share its use.
3. Segment Type Or Segment length: This last part of the
identifier can be a digit or a letter.

• Digit: This tells the segment length (in meters), i.e. how long a


cable segment can be before attenuation becomes
significant. In 10Base5, the maximum segment length is 500
meters.
• Letter: This letter identifies the segment type or physical type
of cable. In 10BaseT, the segment type is Twisted-pair cable.
In 10BaseF, the segment type is Fiber.
The last character (‘X’, etc) refers to the line code method used. Line
code is a pattern of voltage, current or photons used to represent the
digital data transmitted on the transmission line. Fast Ethernet cable is
sometimes referred to as 100BaseX where X can be replaced by two
variants i.e. FX and TX.
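
As a rough sketch of reading these identifiers programmatically (the meaning table is a simplified assumption covering only the examples mentioned here):

```python
import re

# Simplified meanings for the last part of the identifier (examples from this blog only).
SEGMENT_MEANING = {
    "5":  "coaxial, max segment 500 m",
    "2":  "coaxial, max segment ~185 m",
    "T":  "twisted-pair copper",
    "TX": "twisted-pair copper (Fast Ethernet, max segment 100 m)",
    "F":  "optical fibre",
    "FX": "optical fibre (Fast Ethernet, max segment 2000 m)",
}

def parse_identifier(name: str) -> dict:
    """Break an IEEE shorthand identifier into speed, signalling, and segment info."""
    speed, _, segment = re.fullmatch(r"(\d+)(Base)(\w+)", name).groups()
    return {
        "speed_mbps": int(speed),
        "signalling": "baseband",   # 'Base' means baseband transmission
        "segment": SEGMENT_MEANING.get(segment, segment),
    }

print(parse_identifier("100BaseFX"))
print(parse_identifier("10Base5"))
```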

100BaseFX
100BaseFX is Fast Ethernet over optical fibre. The 100 in
100BaseFX indicates that the data transfer rate is 100 megabits per
second i.e. 100Mbps. The word Base refers to Baseband digital
transmission. The letter F signifies that the segment type is Optical
Fiber.

The maximum segment length is 2000 meters.

1. It has two pairs of optical fibres: the first transmits frames from
the hub to the device, and the second transmits from the device to the hub.
2. In most Fast Ethernet applications, the individual
devices are connected by twisted-pair copper
wires, i.e. 100BaseTX (maximum segment length only 100
meters), and optical fibres are used for transmission over
longer distances (as the maximum segment length of 100BaseFX
is 2000 meters). So, a 100BaseTX-to-100BaseFX converter is
required for sending the signal from the sender end over the
optical fibre. Similarly, at the receiver end, a 100BaseFX-to-
100BaseTX converter is required.
