
UNIT 3

PROTOCOLS AND TECHNOLOGIES BEHIND IOT

1. IOT PROTOCOLS

IPV6
IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the
problem of IPv4 address exhaustion. An IPv6 address is 128 bits long, giving an address
space of 2^128, which is vastly larger than IPv4's. IPv6 addresses are written in
hexadecimal, with groups separated by colons (:).

Need for IPv6:


The main reason for IPv6 was address depletion, as the need for connected electronic
devices rose quickly once the Internet of Things (IoT) came into the picture after the
1980s. Other reasons relate to the slowness of IPv4 processing caused by unnecessary
header handling, the need for new options, support for multimedia, and the pressing need
for security. IPv6 responds to these issues with the following main changes in the
protocol:
1. Large address space
An IPv6 address is 128 bits long. Compared with the 32-bit address of IPv4, this is a
huge increase in the address space (a factor of 2^96).
2. Better header format
IPv6 uses a new header format in which options are separated from the base header and
inserted, when needed, between the base header and the upper-layer data. This
simplifies and speeds up the routing process because most of the options do not need to
be checked by routers.
3. New options
IPv6 has new options to allow for additional functionalities.
4. Allowance for extension
IPv6 is designed to allow the extension of the protocol if required by new technologies or
applications.
5. Support for resource allocation
In IPv6, the type-of-service field has been removed, but two new fields, traffic class and
flow label, have been added to enable the source to request special handling of the
packet. This mechanism can be used to support traffic such as real-time audio and video.
6. Support for more security
The encryption and authentication options in IPv6 provide confidentiality and integrity of
the packet.
In IPv6 representation, we have three addressing methods:
 Unicast
 Multicast
 Anycast
Addressing methods
1. Unicast Address
Unicast Address identifies a single network interface. A packet sent to a unicast address
is delivered to the interface identified by that address.
2. Multicast Address
A multicast address is used by a group of hosts, which together acquire a multicast
destination address. These hosts need not be geographically together. Any packet sent
to the multicast address is distributed to all interfaces corresponding to that address, and
every node in the group is configured the same way. In simple words, one data packet is
sent to multiple destinations simultaneously.
3. Anycast Address
An anycast address is assigned to a group of interfaces. Any packet sent to an anycast
address is delivered to only one member interface (usually the nearest host).
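The short Python sketch below, using only the standard-library ipaddress module, illustrates how IPv6 addresses are written and how an address can be recognised as multicast. The sample addresses are documentation addresses used purely for illustration.

    import ipaddress

    # An IPv6 address written in full and in compressed (::) notation
    addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
    print(addr.compressed)   # 2001:db8::1 (runs of zero groups collapse to ::)
    print(addr.exploded)     # the full eight-group hexadecimal form

    # Multicast IPv6 addresses fall in ff00::/8
    print(ipaddress.IPv6Address("ff02::1").is_multicast)  # True
    print(addr.is_multicast)                              # False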
Advantages of IPv6:
1. Real-time Data Transmission: Real-time data transmission refers to transmitting data
very quickly or immediately. Example: live streaming services such as cricket matches or
other tournaments, which are streamed on the web almost as soon as they happen, with a
maximum delay of a few seconds.
2. IPv6 supports authentication: the receiver can verify that the data received is exactly
what the sender sent and that it came from the sender and not from any third party.
Example: matching the hash values of both messages for verification can be done by IPv6.
3. IPv6 performs encryption: IPv6 can encrypt a message at the network layer even if the
application-layer protocol did not encrypt it, which is a major advantage because it takes
care of encryption itself.
4. Faster processing at routers: routers can process IPv6 packets much faster because
the base header has a fixed size of 40 bytes, which decreases processing time and results
in more efficient packet transmission. In IPv4, by contrast, the header length varies
between 20 and 60 bytes and has to be calculated for every packet.

What is 6LoWPAN?



6LoWPAN is an IPv6-based protocol; the name stands for IPv6 over Low-Power Wireless
Personal Area Networks. As the name explains, the protocol works on a Wireless Personal
Area Network, i.e., WPAN.
A WPAN is a Personal Area Network (PAN) in which the interconnected devices are centered
around a person's workspace and connected through a wireless medium. 6LoWPAN allows such
devices to communicate using IPv6, Internet Protocol version 6, the network-layer protocol
that carries communication over the network; it is fast and reliable and provides a very
large number of addresses.
6LoWPAN came into existence to overcome the limitations of the conventional, proprietary
methods previously used to transmit information. It targets small devices with very limited
processing ability, allowing them to establish communication using a standard Internet
protocol, i.e., IPv6. It has very low cost, short range, low memory usage, and a low bit rate.
A 6LoWPAN network comprises an edge router and sensor nodes. Even the smallest IoT devices
can now be part of the network, and their information can be transmitted to the outside
world as well; LED streetlights are one example.
 It is a technology that makes individual nodes IP-enabled.
 6LoWPAN can interact with 802.15.4 devices and also with other types of devices on an IP
network, for example Wi-Fi.
 It uses AES-128 link-layer security. AES is a block cipher with key sizes of 128/192/256
bits that encrypts data in blocks of 128 bits each; the link-layer security defined in IEEE
802.15.4 provides authentication and encryption (see the sketch after this list).
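As a rough illustration of AES-based authenticated encryption of the kind used at the 802.15.4 link layer, here is a minimal Python sketch built on the third-party cryptography package. It uses AES-CCM with a 128-bit key; note that 802.15.4 actually specifies a CCM* variant with its own nonce construction, so this is an approximation rather than the exact link-layer procedure.

    from cryptography.hazmat.primitives.ciphers.aead import AESCCM
    import os

    key = AESCCM.generate_key(bit_length=128)  # 128-bit AES key
    aesccm = AESCCM(key)
    nonce = os.urandom(13)                     # 13-byte nonce, as 802.15.4 frames use

    payload = b"temperature=21.5"
    header = b"frame-header"                   # authenticated but not encrypted

    ciphertext = aesccm.encrypt(nonce, payload, header)    # encrypt + authenticate
    plaintext = aesccm.decrypt(nonce, ciphertext, header)  # also verifies the tag
    assert plaintext == payload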
Basic Requirements of 6LoWPAN:
1. Devices should have a sleep mode in order to support battery saving.
2. Minimal memory requirement.
3. Routing overhead should be lowered.
Features of 6LoWPAN:
1. It is used with IEEE 802.15.4 in the 2.4 GHz band.
2. Outdoor range: ~200 m (maximum)
3. Data rate: 250 kbps (maximum)
4. Maximum number of nodes: ~100
Advantages of 6LoWPAN:
1. 6LoWPAN is a mesh network that is robust, scalable, and can heal on its own.
2. It delivers low-cost and secure communication in IoT devices.
3. It uses IPv6 protocol and so it can be directly routed to cloud platforms.
4. It offers one-to-many and many-to-one routing.
5. In the network, leaf nodes can be in sleep mode for a longer duration of time.
Disadvantages of 6LoWPAN:
1. It is comparatively less secure than Zigbee.
2. It has less immunity to interference than Wi-Fi and Bluetooth.
3. Without the mesh topology, it supports a short range.
Applications of 6LoWPAN:
1. It is used in wireless sensor networks.
2. It is used in home automation.
3. It is used in smart agriculture and industrial monitoring.
4. It makes IPv6 packet transmission possible on networks with constrained power and
reliability.
Security and Interoperability with 6LoWPAN:
 Security: 6LoWPAN security is ensured by the AES algorithm, which is a link layer
security, and the transport layer security mechanisms are included as well.
 Interoperability: 6LoWPAN is able to operate with other wireless devices as well,
which makes it interoperable within a network.
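Because 6LoWPAN presents itself to applications as ordinary IPv6, a node's application code can be plain UDP/IPv6 socket code; the 6LoWPAN adaptation layer (header compression and fragmentation into 802.15.4 frames) happens below the network layer. A minimal Python sketch follows, assuming the operating system has already configured a 6LoWPAN interface and that the hypothetical address 2001:db8::1 belongs to the edge router:

    import socket

    # A UDP socket over IPv6 -- on a 6LoWPAN node, this datagram is compressed
    # and fragmented into 802.15.4 frames transparently by the adaptation layer.
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.sendto(b"temp=21.5", ("2001:db8::1", 5683))  # hypothetical edge-router address
    sock.close()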
Introduction of Message Queue Telemetry Transport Protocol
(MQTT)



Message Queuing Telemetry Transport, or MQTT, is a communications protocol designed for
Internet of Things devices that operate over high-latency, low-bandwidth links. Because it
is built specifically for such constrained settings, it is a perfect protocol for
machine-to-machine (M2M) communication.
What is Message Queue Telemetry Transport Protocol(MQTT)?
MQTT is a simple, lightweight messaging protocol used to establish communication between
multiple devices. It is a TCP-based protocol relying on the publish-subscribe model. This
communication protocol is suitable for transmitting data between resource-constrained devices
having low bandwidth and low power requirements. Hence this messaging protocol is widely used
for communication in the IoT Framework.
Publish-Subscribe Model
This model involves multiple clients interacting with each other, without having any direct
connection established between them. All clients communicate with other clients only via a third
party known as a Broker.
MQTT Client and Broker
Clients publish messages on different topics to the broker. The broker is the central server
that receives these messages and filters them based on their topics, then delivers them to
the clients that have subscribed to those topics. The broker is the heart of any
publish/subscribe protocol; depending on how it is implemented, a broker can handle up to
thousands of concurrently connected MQTT clients. All messages must be received by the
broker, which sorts them, determines which clients subscribed to each one, and delivers the
messages to those subscribers. The broker also keeps the sessions of all persistent clients,
including missed messages and subscriptions.

Working of MQTT
MQTT's publish/subscribe (pub/sub) communication style, which aims to make the most of
available bandwidth, is an alternative to conventional client-server architecture, in which
a client communicates directly with an endpoint. In the pub/sub paradigm, the client that
transmits a message (the publisher) and the client or clients that receive it (the
subscribers) are not connected to each other. A third party, the broker, manages the
relationship between publishers and subscribers, because they do not communicate with one
another directly.
Publisher and subscriber are roles that denote whether a client is publishing messages or
has subscribed to receive messages; the same MQTT client can do both. A publish occurs when
a client or device wants to submit data to a server or broker.
Subscribing is the reverse of the procedure. Under the pub/sub paradigm, several clients
can connect to a broker and subscribe to the topics that interest them. When a broker and a
subscribing client lose contact, the broker buffers messages and sends them to the
subscriber once the connection is back up. If a publishing client abruptly disconnects from
the broker, the broker can deliver a previously cached message on the publisher's behalf
(its "last will") to the relevant subscribers.
"Publishers send the messages, subscribers receive the messages they are interested in,
and brokers pass the messages from the publishers to the subscribers," reads an IBM
write-up describing the pub/sub paradigm. MQTT clients, whether publishers or subscribers,
speak only with MQTT brokers. Any device or program that runs an MQTT library can be an
MQTT client, from microcontrollers like the Arduino to entire application servers housed
in the cloud.
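To make the flow concrete, here is a minimal subscriber sketch using the widely used Eclipse paho-mqtt Python library (pip install paho-mqtt), written against its 1.x callback API; the broker host and topic name are placeholders:

    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        print("Connected with result code", rc)
        client.subscribe("sensors/temperature")  # placeholder topic

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload.decode())

    # paho-mqtt 1.x style; 2.x needs mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)
    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("test.mosquitto.org", 1883)  # public test broker, default port
    client.loop_forever()  # blocks, dispatching callbacks for incoming messages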
Characteristics of MQTT
 Lightweight: MQTT is designed to be lightweight, making it suitable for use in
resource-constrained environments such as embedded systems and low-power devices. The
protocol minimizes bandwidth and processing overhead, enabling efficient communication
even on constrained networks.
 Publish-Subscribe Model: In the publish-subscribe model, clients (publishers) send
messages to topics, and other clients (subscribers) receive messages from topics of
interest. This decoupling of producers and consumers allows for flexible and dynamic
communication patterns.
 Quality of Service (QoS) Levels: MQTT supports different levels of message delivery
assurance, referred to as Quality of Service (QoS). QoS levels range from 0 to 2,
providing varying degrees of reliability and delivery guarantees, depending on
application requirements.
 Retained Messages: MQTT allows brokers to store retained messages on topics, ensuring
that new subscribers receive the most recent message published on a topic immediately
after subscribing. This feature is useful for status updates and configuration settings.
 Last Will and Testament (LWT): MQTT clients can specify a Last Will and Testament
message to be published by the broker in the event of an unexpected client disconnect.
This feature provides a mechanism for detecting client failures and handling them
gracefully.
 Security: MQTT supports various security mechanisms, including Transport Layer
Security (TLS) encryption and authentication mechanisms such as username/password and
client certificates. These features ensure the confidentiality, integrity, and
authenticity of messages exchanged over MQTT connections.
Advantages of MQTT
This model is not restricted to one-to-one communication between clients. Although a
publisher client sends a single message on a specific topic, the broker delivers copies to
all the clients subscribed to that topic. Similarly, messages sent by multiple publisher
clients on multiple topics reach all the clients subscribed to those topics. Hence
one-to-many, many-to-one, and many-to-many communication are all possible with this model.
Clients can also publish data and receive data at the same time thanks to this two-way
communication, so MQTT is considered a bi-directional protocol. The default unencrypted
MQTT port used for data transmission is 1883; the encrypted port for secure transmission
is 8883. (A publisher sketch using the default port follows the list below.)
 Lightweight protocol that is quick to implement and allows for efficient data transport
 Minimal data packet overhead, resulting in low network usage
 Effective data distribution
 Effective use of remote sensing and control
 Prompt and efficient message delivery
 Minimises power consumption, which is beneficial for the connected devices, and
maximises network capacity
 Data transmission is quick, efficient, and lightweight because MQTT messages have a
small code footprint. Control messages have a fixed header of 2 bytes and can carry a
payload of up to 256 megabytes.
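A matching publisher sketch, using the same paho-mqtt library and 1.x API as the subscriber above, publishing one message with QoS 1 on the default unencrypted port 1883:

    import paho.mqtt.client as mqtt

    # paho-mqtt 1.x style; 2.x needs mqtt.Client(mqtt.CallbackAPIVersion.VERSION1)
    client = mqtt.Client()
    client.connect("test.mosquitto.org", 1883)  # default unencrypted MQTT port
    client.loop_start()                         # run the network loop in a background thread

    # QoS 1: delivery is acknowledged, so the message arrives at least once
    info = client.publish("sensors/temperature", payload="21.5", qos=1)
    info.wait_for_publish()                     # block until the publish completes

    client.loop_stop()
    client.disconnect()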
Disadvantages of MQTT
 When compared to Constrained Application Protocol (CoAP), MQTT has slower send
cycles.
 Resource discovery in MQTT is based on flexible topic subscription, while resource
discovery in CoAP is based on a reliable system.
 MQTT itself does not provide encryption; instead, encryption is achieved by running
MQTT over TLS/SSL (Transport Layer Security/Secure Sockets Layer).
 Building an internationally scalable MQTT network is challenging.

Constrained Application Protocol (CoAP)





There are several protocols in the application layer of the Internet protocol suite. One such useful
protocol is the CoAP or Constrained Application Protocol. This protocol has a wide range of
advantages and applications in the field of the Internet of Things (IoT) and cloud computing. CoAP
also has a powerful contribution in providing versatile solutions to IoT applications.
This article delves into a set of key topics and fundamental concepts in the CoAP protocol,
along with its applications in the real world.
What is CoAP?
CoAP, or Constrained Application Protocol, as the name suggests, is an application-layer
protocol that was introduced by the Internet Engineering Task Force (IETF) in 2014. CoAP is
designed specifically for constrained environments.
It is a web-based protocol that resembles HTTP. It is also based on the request-response model.
Based on the REST-style architecture, this protocol considers the various objects in the network as
resources. These resources are uniquely assigned a URI or Uniform Resource Identifier. The data
from one resource to another resource is transferred in the form of CoAP message packets whose
format is briefly described later.
The client requests a resource, the server sends a response, and the client returns an
acknowledgement. However, some types of CoAP exchange do not involve the receiver sending
an acknowledgement for the information received. Such messages are called NON, or
non-confirmable, messages, whereas messages for which the receiver sends a response to the
sender are known as CON, or confirmable, messages.
Similar to HTTP, a CoAP request is sent by a client using a method code to request an
action on a URI identifiable object.
The server replies with a response code which may include a resource representation.
The CoAP model is essentially a client/server model: the client requests a service from
the server as needed, and the server responds to the client's request.
However, CoAP messages are asynchronous because CoAP uses UDP. The message layer
interfaces with the UDP layer, which formats the data into a datagram and sends it down
the lower layers of the OSI or TCP/IP model.
Methods in CoAP
CoAP is a web-based protocol. This means CoAP resembles the HTTP protocol and is
able to use the HTTP methods.
These methods are-
 GET – The GET method retrieves the resource information identified by the request URI.
On success, a 200 (OK) response is sent.
 POST – The POST method creates a new subordinate resource, under the parent URI it
requests, on the server. On successful resource creation a 201 (Created) response is
sent, while a 200 (OK) response code is sent if an existing resource was updated instead.
 DELETE – The DELETE method deletes the resource identified by the requested URI; a
200 (OK) response code is sent on successful operation.
 PUT – The PUT method updates or creates the resource identified by the request URI
with the enclosed message body. The message body is treated as a modified version of the
resource if one already exists at the specified URI; otherwise a new resource with that
URI is created. A 200 (OK) response is received in the former case and a 201 (Created)
response in the latter. If the resource is neither created nor modified, an error
response code is sent.
The most fundamental difference between CoAP and HTTP is that CoAP defines a method that
is not present in HTTP, called the Observe method.
The Observe method is very similar to the GET method but carries an additional observe
option. This tells the server to send the client every update about the resource: upon any
change in the resource, the server sends a response to the client.
These responses can either be sent individually or be piggy-backed on other messages.
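For illustration, here is a minimal CoAP GET request using the third-party aiocoap Python library (pip install aiocoap); the URI is a placeholder for any CoAP resource you have access to:

    import asyncio
    from aiocoap import Context, Message, GET

    async def main():
        context = await Context.create_client_context()  # client-side CoAP context
        request = Message(code=GET, uri="coap://example.com/sensors/temp")  # placeholder URI
        response = await context.request(request).response
        print("Response code:", response.code)
        print("Payload:", response.payload.decode())

    asyncio.run(main())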
Message Format of CoAP
CoAP messages are encoded in a compact binary format. Like other message formats, a CoAP
message has a header and a payload section, along with an optional section.
The CoAP header is 4 bytes (32 bits), and this size is fixed for every CoAP message. The
rest of the message is optional: it includes tokens of variable size ranging from 0 to 8
bytes, options, and the payload.
The message format of CoAP contains the following fields:
 Version – The size of version field is 2 bits. It represents the version of the CoAP
protocol.
 Type Code – The size of type field is 2 bits. There are four types of messages namely
confirmable, non-confirmable, acknowledgement and reset represented by the bit
patterns 00, 01, 10, 11 respectively.
 Option Count – The size of the option count field is 4 bits, which means there can be
up to 16 options in the header.
 Code – The size of the code field is 8 bits. It indicates whether the message is empty,
a request message, or a response message.
 Message ID – The size of the message ID field is 16 bits. It is used to detect message
duplication and to match acknowledgement and reset messages to the messages they refer to.
 Tokens [Optional] – The size of the tokens field is variable, ranging from 0 to 8
bytes. It is used to match a response with its request.
 Options [Optional] – The options field in CoAP message has a variable size. It defines
the type of payload message.
 Payload [Optional] – Similar to options field, the payload field has a variable size. The
payload of requests or of responses is typically a representation of the requested
resource or the result of the requested action.
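A tiny Python sketch that packs the fixed 4-byte header exactly as the fields above describe it (version, type, option count, code, message ID); real CoAP stacks do this for you, so this is purely illustrative:

    import struct

    def coap_header(version, msg_type, option_count, code, message_id):
        # First byte: 2-bit version | 2-bit type | 4-bit option count
        first = (version << 6) | (msg_type << 4) | option_count
        # ">BBH" = big-endian: one byte, one byte, then the 16-bit message ID
        return struct.pack(">BBH", first, code, message_id)

    # A confirmable (type 00) GET request (code 1) with message ID 0x1234
    header = coap_header(version=1, msg_type=0, option_count=0, code=1, message_id=0x1234)
    print(header.hex())  # 40011234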
Applications of CoAP
 Real Time Monitoring in Grid – Smart cities can monitor the distribution and
generation of power remotely. The CoAP sensors could be embedded inside the
transformers and the data could be transferred over GPRS or 6LowPAN.
 Defense utilities – Armouries and tanks are nowadays fitted with sensors so that
information can be communicated remotely without interference. CoAP sensors can detect
any intrusion, and they are capable of transferring data even over low-bandwidth
networks.
 Aircraft utilities – The Aircraft sensors and actuators could be connected with other
sensors and communication can take place using smart CoAP based sensors and
actuators.

Introduction of Radio Frequency Identification (RFID)





Radio Frequency Identification (RFID) is a form of wireless communication that
incorporates the use of electromagnetic or electrostatic coupling in the radio frequency
portion of the electromagnetic spectrum to uniquely identify an object, animal, or person.
It uses radio frequency to search for, identify, track, and communicate with items and
people. It is a method used to track or identify an object by radio transmission. Data is
digitally encoded in an RFID tag, which can be read by a reader. The device works as a tag
or label, and the data read from the tag is stored in a database through the reader, much
as with traditional barcodes and QR codes; unlike those, however, an RFID tag, whether
passive or active, can often be read without line of sight.
Kinds of RFID
There are many kinds of RFID, each with different properties, but perhaps the most
fascinating aspect of RFID technology is that most RFID tags have neither an electric plug
nor a battery. Instead, all of the energy needed to operate them is supplied in the form of
radio waves by RFID readers. This technology is called passive RFID, to distinguish it from
the (less common) active RFID, in which there is a power source on the tag.
1. UHF RFID (Ultra-High-Frequency RFID). It is used on shipping pallets and some
driver’s licenses. Readers send signals in the 902-928 MHz band. Tags communicate
at distances of several meters by changing the way they reflect the reader signals; the
reader is able to pick up these reflections. This way of operating is called backscatter.
2. HF RFID (High-Frequency RFID ). It operates at 13.56 MHz and is likely to be in your
passport, credit cards, books, and noncontact payment systems. HF RFID has a short-
range, typically a meter or less because the physical mechanism is based on induction
rather than backscatter.
There are also other forms of RFID that use other frequencies, such as LF RFID
(Low-Frequency RFID), which was developed before HF RFID and is used for tracking.

Types of RFID
Passive RFID: Passive RFID tags do not have their own power source; they use power from
the reader. The tag is not attached to a power supply and instead stores the energy
emitted by the reader's active antenna. Passive tags operate at specific frequencies:
125-134 kHz (low frequency), 13.56 MHz (high frequency), and 856-960 MHz (ultra-high
frequency).
 No embedded power needed
 Used for tracking inventory
 Has a unique identification number
 Sensitive to interference
 Semi-passive (battery-assisted) variants also exist

Active RFID: In this device, the RF tag is attached to a power supply that emits a signal,
and there is an antenna that receives the data. In other words, an active tag uses a power
source such as a battery; it has its own power source and does not require power from the
reader.
 Embedded power: communication over large distances
 Has a unique identifier/identification number
 Can use other devices such as sensors
 Better than passive tags in the presence of metal

Working Principle of RFID


Generally, RFID uses radio waves to perform an AIDC function. AIDC stands for Automatic
Identification and Data Capture, a technology that performs object identification and the
collection and mapping of data. An antenna is a device that converts power into the radio
waves used for communication between reader and tag. An RFID reader retrieves the
information from an RFID tag: it detects the tag and reads data from, or writes data into,
the tag. A tag may include a processor, packaging, storage, and a transmitter/receiver
unit.
Working of RFID System
Every RFID system consists of three components: a scanning antenna, a transceiver and
a transponder. When the scanning antenna and transceiver are combined, they are
referred to as an RFID reader or interrogator. There are two types of RFID readers —
fixed readers and mobile readers. The RFID reader is a network-connected device that
can be portable or permanently attached. It uses radio waves to transmit signals that
activate the tag. Once activated, the tag sends a wave back to the antenna, where it is
translated into data.
The transponder is in the RFID tag itself. The read range for RFID tags varies based on
factors including the type of tag, type of reader, RFID frequency and interference in the
surrounding environment or from other RFID tags and readers. Tags that have a stronger
power source also have a longer read range.
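As a hedged, concrete example of reading a passive HF tag, here is a sketch using the community mfrc522 Python package (pip install mfrc522) on a Raspberry Pi wired over SPI to an MFRC522 13.56 MHz reader module; the hardware setup and the package API are assumptions of this sketch:

    # Assumes a Raspberry Pi with an MFRC522 reader module attached over SPI
    # and the community 'mfrc522' package installed.
    from mfrc522 import SimpleMFRC522
    import RPi.GPIO as GPIO

    reader = SimpleMFRC522()
    try:
        print("Hold a tag near the reader...")
        tag_id, text = reader.read()  # blocks until a tag enters the RF field
        print("Tag ID:", tag_id)
        print("Stored text:", text)
    finally:
        GPIO.cleanup()  # release the Pi's GPIO pins when done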
Features of RFID
 An RFID tag consists of two parts: a microcircuit and an antenna.
 The tag is covered by protective material that acts as a shield against outside
environmental effects.
 A tag may be active or passive; passive RFID tags are the most widely used.
Application of RFID
 Tracking shipping containers, trucks, railroad cars, and automobiles.
 Asset tracking.
 Credit-card-shaped tags for access applications.
 Personnel tracking.
 Controlling access to restricted areas.
 ID badging.
 Supply chain management.
 Counterfeit prevention (e.g., in the pharmaceutical industry).
RFID Standards
 ISO 14443
 Components operating at 13.56 MHz
 Power consumption of 10 mW
 Data throughput of 100 kbps
 Working distance of up to 10 cm
 ISO 15693
 Components operating at 13.56 MHz
 Working distances as high as 1 m
 Data throughput of a few kbps
Advantages of RFID
 It provides data access and real-time information without significant delay.
 RFID tags can store a large amount of information and follow instructions.
 The RFID system is non-line-of-sight in nature.
 It improves the efficiency and traceability of production.
 Hundreds of tags can be read in a short time.
Disadvantages of RFID
 It takes longer to program RFID devices.
 RFID signals can be intercepted fairly easily, even when encrypted.
 In an RFID system, two or three layers of ordinary household foil are enough to block
the radio waves.
 There are privacy concerns about RFID devices: in a poorly secured system, anybody may
be able to access information.
 Active RFID can be costlier due to the battery.
Application Area of RFID
 Warehouses, retail, automotive
 Grocery chains, transportation
 Distribution centers, asset management
 Manufacturing
 Inventory management
 Warehousing and distribution
 Shop floor (Production)
 Document tracking and asset management
 Industrial application (e.g. time and attendances, shipping document tracking, receiving
fixed assets)
 Retail applications.

Wireless Sensor Network (WSN)




Wireless Sensor Network (WSN) is an infrastructure-less wireless network in which a large
number of wireless sensors are deployed in an ad-hoc manner to monitor system, physical,
or environmental conditions.
The sensor nodes used in a WSN have an onboard processor that manages and monitors the
environment in a particular area. They are connected to a Base Station, which acts as the
processing unit of the WSN system.
The Base Station in a WSN system is connected through the Internet to share data.
A WSN can be used for processing, analysis, storage, and mining of the data.
Applications of WSN:

1. Internet of Things (IoT)
2. Surveillance and monitoring for security and threat detection
3. Monitoring environmental temperature, humidity, and air pressure
4. Monitoring the noise level of the surroundings
5. Medical applications such as patient monitoring
6. Agriculture
7. Landslide detection
Challenges of WSN:

1. Quality of Service
2. Security Issue
3. Energy Efficiency
4. Network Throughput
5. Performance
6. Ability to cope with node failure
7. Cross-layer optimisation
8. Scalability to large-scale deployments
A modern Wireless Sensor Network (WSN) faces several challenges, including:
 Limited power and energy: WSNs are typically composed of battery-powered sensors that
have limited energy resources. This makes it challenging to ensure that the network can
function for long periods of time without the need for frequent battery replacements.
 Limited processing and storage capabilities: Sensor nodes in a WSN are typically
small and have limited processing and storage capabilities. This makes it difficult to
perform complex tasks or store large amounts of data.
 Heterogeneity: WSNs often consist of a variety of different sensor types and nodes
with different capabilities. This makes it challenging to ensure that the network can
function effectively and efficiently.
 Security: WSNs are vulnerable to various types of attacks, such as eavesdropping,
jamming, and spoofing. Ensuring the security of the network and the data it collects is a
major challenge.
 Scalability: WSNs often need to be able to support a large number of sensor nodes
and handle large amounts of data. Ensuring that the network can scale to meet these
demands is a significant challenge.
 Interference: WSNs are often deployed in environments where there is a lot of
interference from other wireless devices. This can make it difficult to ensure reliable
communication between sensor nodes.
 Reliability: WSNs are often used in critical applications, such as monitoring the
environment or controlling industrial processes. Ensuring that the network is reliable
and able to function correctly in all conditions is a major challenge.
Components of WSN:
1. Sensors:
Sensors in a WSN capture environmental variables and are used for data acquisition; the
sensed signals are converted into electrical signals.
2. Radio Nodes:
A radio node receives the data produced by the sensors and sends it to the WLAN access
point. It consists of a microcontroller, transceiver, external memory, and power source.
3. WLAN Access Point:
It receives the data sent wirelessly by the radio nodes, generally forwarding it onwards
through the internet.
4. Evaluation Software:
The data received by the WLAN access point is processed by software called evaluation
software, which presents reports to the users for further processing of the data,
including analysis, storage, and mining. (A minimal node-to-base-station sketch follows
below.)
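To make the data path concrete, here is a self-contained Python sketch that simulates a sensor node sending periodic readings to a base station over UDP; the address, port, and the random "sensor" are illustrative stand-ins for real radio hardware and transducers:

    import json, random, socket, time

    BASE_STATION = ("127.0.0.1", 9999)  # stand-in for the access point / base station

    def read_sensor():
        # Placeholder for a real transducer: returns a fake temperature reading
        return round(20 + random.random() * 5, 2)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(5):
        reading = {"node": "node-1", "temperature": read_sensor(), "ts": time.time()}
        sock.sendto(json.dumps(reading).encode(), BASE_STATION)
        time.sleep(1)  # a real leaf node would sleep (low power) between readings
    sock.close()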
Advantages of Wireless Sensor Networks (WSN):
Low cost: WSNs consist of small, low-cost sensors that are easy to deploy, making them
a cost-effective solution for many applications.
Wireless communication: WSNs eliminate the need for wired connections, which can be
costly and difficult to install. Wireless communication also enables flexible deployment and
reconfiguration of the network.
Energy efficiency: WSNs use low-power devices and protocols to conserve energy,
enabling long-term operation without the need for frequent battery replacements.
Scalability: WSNs can be scaled up or down easily by adding or removing sensors,
making them suitable for a range of applications and environments.
Real-time monitoring: WSNs enable real-time monitoring of physical phenomena in the
environment, providing timely information for decision making and control.
Disadvantages of Wireless Sensor Networks (WSN):
Limited range: The range of wireless communication in WSNs is limited, which can be a
challenge for large-scale deployments or in environments with obstacles that obstruct
radio signals.
Limited processing power: WSNs use low-power devices, which may have limited
processing power and memory, making it difficult to perform complex computations or
support advanced applications.
Data security: WSNs are vulnerable to security threats, such as eavesdropping,
tampering, and denial of service attacks, which can compromise the confidentiality,
integrity, and availability of data.
Interference: Wireless communication in WSNs can be susceptible to interference from
other wireless devices or radio signals, which can degrade the quality of data
transmission.
Deployment challenges: Deploying WSNs can be challenging due to the need for proper
sensor placement, power management, and network configuration, which can require
significant time and resources.
While WSNs offer many benefits, they also have limitations and challenges that must be
considered when deploying and using them in real-world applications.

What is Big Data Analytics? – Definition, Working, Benefits



Big data analysis uses advanced analytical methods that can extract important business insights from
bulk datasets. Within these datasets lies both structured (organized) and unstructured (unorganized)
data. Its applications cover different industries such as healthcare, education, insurance, AI, retail,
and manufacturing. By analyzing this data, organizations gain better insight into what is
working and what is not, so they can make the necessary improvements, develop their
production systems, and increase profitability.
What is Big-Data Analytics?
Big data analytics is all about crunching massive amounts of information to uncover
hidden trends, patterns, and relationships. It’s like sifting through a giant mountain of data
to find the gold nuggets of insight.
Here’s a breakdown of what it involves:
 Collecting Data: Data comes from various sources such as social media, web traffic,
sensors, and customer reviews.
 Cleaning the Data: Imagine having to assess a pile of rocks that included some gold
pieces in it. You would have to clean the dirt and the debris first. When data is being
cleaned, mistakes must be fixed, duplicates must be removed and the data must be
formatted properly.
 Analyzing the Data: It is here that the wizardry takes place. Data analysts employ
powerful tools and techniques to discover patterns and trends. It is the same thing as
looking for a specific pattern in all those rocks that you sorted through.
The multi-industrial utilization of big data analytics spans from healthcare to finance to
retail. Through their data, companies can make better decisions, become more efficient,
and get a competitive advantage.
How does big data analytics work?
Big Data Analytics is a powerful tool that helps unlock the potential of large and complex
datasets. To get a better understanding, let's break it down into key steps:
 Data Collection: Data is the core of Big Data Analytics. It is the gathering of data from
different sources such as the customers’ comments, surveys, sensors, social media,
and so on. The primary aim of data collection is to compile as much accurate data as
possible. The more data, the more insights.
 Data Cleaning (Data Preprocessing): The next step is to process this information. It
often requires some cleaning. This entails the replacement of missing data, the
correction of inaccuracies, and the removal of duplicates. It is like sifting through a
treasure trove, separating the rocks and debris and leaving only the valuable gems
behind.
 Data Processing: Next comes data processing. This involves organizing, structuring,
and formatting the data so that it is usable for analysis. It is like a chef gathering
the ingredients before cooking: data processing turns the data into a format that
analytics tools can work with.
 Data Analysis: Data analysis applies statistical, mathematical, and machine learning
methods to extract the most important findings from the processed data. For example, it
can uncover customer preferences, market trends, or patterns in healthcare data.
 Data Visualization: The results of data analysis are usually presented in visual form,
for example charts, graphs, and interactive dashboards. Visualizations simplify large
amounts of data and allow decision makers to quickly spot patterns and trends.
 Data Storage and Management: Storing and managing the analyzed data is of utmost
importance. It is like digital scrapbooking: you may want to go back to those lessons
later, so how you store them matters greatly. Moreover, data protection and adherence to
regulations are key issues to be addressed during this crucial stage.
 Continuous Learning and Improvement: Big data analytics is a continuous process
of collecting, cleaning, and analyzing data to uncover hidden insights. It helps
businesses make better decisions and gain a competitive edge.
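As a small, hedged illustration of the processing and analysis steps above, here is a PySpark sketch (pip install pyspark) that loads a hypothetical CSV of customer reviews, cleans it, and aggregates it; the file name and column names are assumptions:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("review-analytics").getOrCreate()

    # Data collection: load a hypothetical CSV of reviews (columns: product, rating)
    df = spark.read.csv("reviews.csv", header=True, inferSchema=True)

    # Data cleaning: drop duplicates and rows with missing values
    clean = df.dropDuplicates().dropna()

    # Data analysis: average rating and review count per product
    summary = clean.groupBy("product").agg(
        F.avg("rating").alias("avg_rating"),
        F.count("*").alias("n_reviews"),
    )
    summary.orderBy(F.desc("avg_rating")).show()  # quick console 'visualization'

    spark.stop()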
Types of Big Data Analytics
Big Data Analytics comes in many different types, each serving a different purpose:
1. Descriptive Analytics: This type helps us understand past events. In social media, it
shows performance metrics, like the number of likes on a post.
2. Diagnostic Analytics: Diagnostic analytics delves deeper to uncover the reasons
behind past events. In healthcare, it identifies the causes of high patient re-admissions.
3. Predictive Analytics: Predictive analytics forecasts future events based on past data.
Weather forecasting, for example, predicts tomorrow's weather by analyzing historical
patterns.
4. Prescriptive Analytics: This category not only predicts results but also offers
recommendations for action to achieve the best results. In e-commerce, it may suggest
the best price for a product to achieve the highest possible profit.
5. Real-time Analytics: The key function of real-time analytics is processing data as it
arrives. It allows traders, for example, to make decisions based on real-time market
events.
6. Spatial Analytics: Spatial analytics is about location data. In urban management, it
optimizes traffic flow using data from sensors and cameras to minimize traffic jams.
7. Text Analytics: Text analytics delves into unstructured text data. In the hotel
business, it can use guest reviews to improve services and guest satisfaction.
These types of analytics serve different purposes, making data understandable and
actionable. Whether it’s for business, healthcare, or everyday life, Big Data
Analytics provides a range of tools to turn data into valuable insights, supporting better
decision-making.
Big Data Analytics Technologies and Tools
Big Data Analytics relies on various technologies and tools that might sound complex;
let's simplify them:
 Hadoop: Imagine Hadoop as an enormous digital warehouse. It’s used by companies
like Amazon to store tons of data efficiently. For instance, when Amazon suggests
products you might like, it’s because Hadoop helps manage your shopping history.
 Spark: Think of Spark as the super-fast data chef. Netflix uses it to quickly analyze
what you watch and recommend your next binge-worthy show.
 NoSQL Databases: NoSQL databases, like MongoDB, are like digital filing cabinets
that Airbnb uses to store booking details and user data. These databases are popular
because they are quick and flexible, so the platform can provide you with the right
information when you need it.
 Tableau: Tableau is like an artist that turns data into beautiful pictures. The World
Bank uses it to create interactive charts and graphs that help people understand
complex economic data.
 Python and R: Python and R are like magic tools for data scientists. They use these
languages to solve tricky problems. For example, Kaggle uses them to predict things
like house prices based on past data.
 Machine Learning Frameworks (e.g., TensorFlow): Machine learning frameworks are the
tools that make predictions. Airbnb uses TensorFlow to predict which properties are most
likely to be booked in certain areas, helping hosts make smart decisions about pricing
and availability.
These tools and technologies are the building blocks of Big Data Analytics. They help
organizations gather, process, understand, and visualize data, making it easier for them
to make decisions based on information.
Benefits of Big Data Analytics
Big Data Analytics offers a host of real-world advantages, and let’s understand with
examples:
1. Informed Decisions: Imagine a store like Walmart. Big Data Analytics helps them
make smart choices about what products to stock. This not only reduces waste but also
keeps customers happy and profits high.
2. Enhanced Customer Experiences: Think about Amazon. Big Data Analytics is what
makes those product suggestions so accurate. It’s like having a personal shopper who
knows your taste and helps you find what you want.
3. Fraud Detection: Credit card companies, like MasterCard, use Big Data Analytics to
catch and stop fraudulent transactions. It’s like having a guardian that watches over
your money and keeps it safe.
4. Optimized Logistics: FedEx, for example, uses Big Data Analytics to deliver your
packages faster and with less impact on the environment. It’s like taking the fastest
route to your destination while also being kind to the planet.
Challenges of Big Data Analytics
While Big Data Analytics offers incredible benefits, it also comes with its set of challenges:
 Data Overload: Consider Twitter, where approximately 6,000 tweets are posted every
second. The challenge is sifting through this avalanche of data to find valuable insights.
 Data Quality: If the input data is inaccurate or incomplete, the insights generated by
Big Data Analytics can be flawed. For example, incorrect sensor readings could lead to
wrong conclusions in weather forecasting.
 Privacy Concerns: With the vast amount of personal data used, like in Facebook’s ad
targeting, there’s a fine line between providing personalized experiences and infringing
on privacy.
 Security Risks: With cyber threats increasing, safeguarding sensitive data becomes
crucial. For instance, banks use Big Data Analytics to detect fraudulent activities, but
they must also protect this information from breaches.
 Costs: Implementing and maintaining Big Data Analytics systems can be expensive.
Airlines like Delta use analytics to optimize flight schedules, but they need to ensure
that the benefits outweigh the costs.
Overcoming these challenges is essential to fully harness the power of Big Data Analytics.
Businesses and organizations must tread carefully, ensuring they make the most of the
insights while addressing these obstacles effectively.
Usage of Big Data Analytics
Big Data Analytics has a significant impact in various sectors:
 Healthcare: It aids in precise diagnoses and disease prediction, elevating patient care.
 Retail: Amazon’s use of Big Data Analytics offers personalized product
recommendations based on your shopping history, creating a more tailored and
enjoyable shopping experience.
 Finance: Credit card companies such as Visa rely on Big Data Analytics to swiftly
identify and prevent fraudulent transactions, ensuring the safety of your financial
assets.
 Transportation: Companies like Uber use Big Data Analytics to optimize drivers’
routes and predict demand, reducing wait times and improving overall transportation
experiences.
 Agriculture: Farmers make informed decisions, boosting crop yields while conserving
resources.
 Manufacturing: Companies like General Electric (GE) use Big Data Analytics to
predict machinery maintenance needs, reducing downtime and enhancing operational
efficiency.
Conclusion
Big Data Analytics is a game-changer that’s shaping a smarter future. From improving
healthcare and personalizing shopping to securing finances and predicting demand, it’s
transforming various aspects of our lives. However, challenges like managing
overwhelming data and safeguarding privacy are real concerns. In our world flooded with
data, Big Data Analytics acts as a guiding light. It helps us make smarter choices, offers
personalized experiences, and uncovers valuable insights. It’s a powerful and stable tool
that promises a better and more efficient future for everyone.

What Is Cloud Computing?





Nowadays, cloud computing is adopted by every kind of company, whether an MNC or a startup;
many are still migrating to it because of the cost savings, lower maintenance, and
increased data capacity provided by servers maintained by the cloud providers.
Another reason for this drastic shift from companies' on-premises servers to cloud
providers is the 'pay as you go' principle on which their services are based: you only
have to pay for the services you are using. The disadvantage of an on-premises server is
that the company still has to pay for it even when it is not in use.
What Is Cloud Computing?
Cloud Computing means storing and accessing the data and programs on remote
servers that are hosted on the internet instead of the computer’s hard drive or local server.
Cloud computing is also referred to as Internet-based computing, it is a technology where
the resource is provided as a service through the Internet to the user. The data that is
stored can be files, images, documents, or any other storable document.
The following are some of the Operations that can be performed with Cloud Computing
 Storage, backup, and recovery of data
 Delivery of software on demand
 Development of new applications and services
 Streaming videos and audio
Understanding How Cloud Computing Works
Cloud computing helps users easily access computing resources like storage and processing
over the internet rather than on local hardware. In a nutshell, it works as follows:
 Infrastructure: Cloud computing depends on remote network servers hosted on the
internet to store, manage, and process data.
 On-Demand Access: Users can access cloud services and resources on demand, scaling up
or down without having to invest in physical hardware.
 Benefits: Cloud computing offers various benefits such as cost savings, scalability,
reliability, and accessibility; it reduces capital expenditure and improves efficiency.
Origins Of Cloud Computing
Mainframe computing in the 1950s and the internet explosion in the 1990s came together
to give rise to cloud computing. The term "cloud computing" has gained popularity since
businesses like Amazon, Google, and Salesforce started providing web-based services in
the early 2000s. The concept's on-demand, internet-based access to computational
resources facilitates scalability, adaptability, and cost-effectiveness.
These days, cloud computing is pervasive, driving a wide range of services across
markets and transforming the processing, storage, and retrieval of data.
What is Virtualization In Cloud Computing?
Virtualization is the software technology that provides logical isolation of physical
resources. Creating logical isolation of physical resources such as RAM, CPU, and storage
over the cloud is known as virtualization in cloud computing; in simple terms, it means
creating virtual instances of computing resources over the cloud. It provides better
management and utilization of hardware resources, with logical isolation making
applications independent of one another. It streamlines resource allocation and enhances
scalability by running multiple virtual computers on a single physical machine, offering
cost-effectiveness and better optimization of resources.
To know more, refer to this article – Virtualization in Cloud Computing and Types
Architecture Of Cloud Computing
Cloud computing architecture refers to the components and sub-components required for
cloud computing. These components typically refer to:
1. Front end (fat clients, thin clients)
2. Back-end platforms (servers, storage)
3. Cloud-based delivery and a network (Internet, Intranet, Intercloud)
1. Front End (User Interaction Enhancement)
The user interface of cloud computing consists of two kinds of clients: thin clients,
which use web browsers and provide portable, lightweight access, and fat clients, which
offer richer functionality for a stronger user experience.
2. Back-end Platforms (Cloud Computing Engine)
The core of cloud computing is built on back-end platforms with several servers for
storage and processing. Application logic is managed by the servers, and effective data
handling is provided by the storage. Together, these back-end platforms supply the
processing power and the capacity to manage and store data behind the cloud.
3. Cloud-Based Delivery and Network
On-demand access to computing resources is provided over the Internet, Intranet, and
Intercloud. The Internet offers global accessibility, the Intranet supports internal
communication of services within an organization, and the Intercloud enables
interoperability across various cloud services. This dynamic network connectivity is an
essential component of cloud computing architecture, guaranteeing easy access and data
transfer.
What Are The Types of Cloud Computing Services?
The following are the types of Cloud Computing:
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
4. Function as a Service (FaaS)
1. Infrastructure as a Service ( IaaS )
 Flexibility and Control: IaaS provides virtualized computing resources such as VMs,
storage, and networks, giving users control over the operating system and applications
(see the boto3 sketch after this list).
 Reducing Hardware Expenses: IaaS saves businesses money by eliminating investment in
physical infrastructure, making it cost-effective.
 Scalability of Resources: The cloud allows hardware resources to be scaled up or down
on demand, facilitating optimal performance with cost efficiency.
2. Platform as a Service ( PaaS )
 Simplifying Development: Platform as a Service supports application development by
keeping the underlying infrastructure as an abstraction. It lets developers focus
completely on application logic (code) while background operations are managed entirely
by the platform provider.
 Enhancing Efficiency and Productivity: PaaS reduces the complexity of managing
infrastructure, speeding up execution time and bringing updates to market quickly by
streamlining the development process.
 Automation of Scaling: PaaS manages resource scaling, guaranteeing that the program's
workload is handled efficiently.
3. SaaS (Software as a Service)
 Collaboration and Accessibility: Software as a Service (SaaS) helps users easily
access applications without requiring local installations. The software is fully managed
by the provider and delivered as a service over the internet, encouraging effortless
cooperation and ease of access.
 Automation of Updates: SaaS providers handle software maintenance with automatic
updates, ensuring users always experience the latest features and security patches.
 Cost Efficiency: SaaS acts as a cost-effective solution by reducing IT support
overhead and eliminating the need for individual software licenses.
4. Function as a Service (FaaS)
 Event-Driven Execution: With FaaS, the provider maintains the servers and
infrastructure so that users need not worry about them; FaaS lets developers run code
simply as a response to events.
 Cost Efficiency: FaaS achieves cost efficiency through a 'pay as you run' principle,
charging only for the computing resources actually used.
 Scalability and Agility: Serverless architectures scale effortlessly to handle
workloads, promoting agility in development and deployment.
To know more about the differences between the types of cloud computing, please read this
article – IaaS vs PaaS vs SaaS
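To ground the IaaS idea, here is a hedged sketch using the official AWS boto3 Python SDK (pip install boto3) to rent a virtual machine; it assumes AWS credentials and a region are already configured, and the AMI ID is a placeholder you would replace with a real image for your region:

    import boto3

    ec2 = boto3.client("ec2")  # assumes credentials configured, e.g. via `aws configure`

    # IaaS in one call: rent a small virtual machine on Amazon's infrastructure
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched instance:", instance_id)

    # Pay-as-you-go: terminate the instance when it is no longer needed
    ec2.terminate_instances(InstanceIds=[instance_id])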
What Are Cloud Deployment Models?
The following are the Cloud Deployment Models:
1. Private Deployment Model
 It provides enhanced protection and customization, with cloud resources used according
to an organization's particular specified requirements. It is ideal for companies looking
to meet strict security and compliance needs.
2. Public Deployment Model
 It offers a pay-as-you-go principle for scalability and accessibility of cloud
resources shared by numerous users, ensuring cost-effectiveness for enterprise-grade
services.
3. Hybrid Deployment Model
It combines elements of both private and public clouds, providing seamless data and
application processing between the environments. It offers flexibility in optimizing
resources, for example keeping sensitive data in the private cloud and important scalable
applications in the public cloud.
To know more about the cloud deployment models, read these articles:
 Cloud Deployment Models
 Differences of Cloud Deployment Models
What Is Cloud Hosting?
Infrastructure is the layer where people start and build from scratch, and it is the layer
where cloud hosting lives. Say you have a company with a website on which members exchange
a lot of communications. You start with a few members talking with each other, and then
the number of members gradually increases. As time passes and membership grows, there is
more traffic on the network, and your server slows down, causing a problem.
A few years ago, websites were put on a server somewhere, and you had to buy and set up a
number of servers yourself. That cost a lot of money and took a lot of time, and you paid
for those servers whether or not you were using them. This is called hosting. Cloud
hosting overcomes this problem: with cloud computing, you have access to computing power
when you need it. Your website is put on a cloud server just as you would put it on a
dedicated server, and if you suddenly need more computing power as visitors arrive, you
simply scale up according to the need.
Characteristics Of Cloud Computing
The following are the characteristics of Cloud Computing:
1. Scalability: With Cloud hosting, it is easy to grow and shrink the number and size of
servers based on the need. This is done by either increasing or decreasing the
resources in the cloud. This ability to alter plans due to fluctuations in business size
and needs is a superb benefit of cloud computing, especially when experiencing a
sudden growth in demand.
2. Save Money: An advantage of cloud computing is the reduction in hardware costs.
Instead of purchasing in-house equipment, hardware needs are left to the vendor. For
companies that are growing rapidly, new hardware can be large, expensive, and
inconvenient. Cloud computing alleviates these issues because resources can be
acquired quickly and easily. Even better, the cost of repairing or replacing equipment is
passed to the vendors. Along with purchase costs, off-site hardware cuts internal
power costs and saves space. Large data centers can take up precious office space
and produce a large amount of heat. Moving to cloud applications or storage can help
maximize space and significantly cut energy expenditures.
3. Reliability: Rather than being hosted on one single instance of a physical server,
hosting is delivered on a virtual partition that draws its resource, such as disk space,
from an extensive network of underlying physical servers. If one server goes offline it
will have no effect on availability, as the virtual servers will continue to pull resources
from the remaining network of servers.
4. Physical Security: The underlying physical servers are still housed within data centers
and so benefit from the security measures that those facilities implement to prevent
people from accessing or disrupting them on-site.
5. Outsource Management: When you are managing the business, Someone else
manages your computing infrastructure. You do not need to worry about management
as well as degradation.
Top Reasons to Switch from On-premise to Cloud Computing
The following are the Top reasons to switch from on-premise to cloud computing:
1. Reduces cost: The ability to cut costs over time is one of the main advantages for
businesses that adopt cloud computing. On average, companies can save about 15%
of their total cost by migrating to the cloud. By using cloud servers, businesses save
and reduce costs because there is no need to employ a staff of technical support
personnel to address server issues. There are well-known case studies of the cost-
cutting benefits of cloud servers, such as those of Coca-Cola and Pinterest.
2. More storage: The cloud provides more servers, storage space, and computing power
so that software and applications execute as quickly and efficiently as possible. Many
tools are available for cloud storage, such as Dropbox, OneDrive, Google Drive, iCloud
Drive, etc.
3. Employees' Better Work-Life Balance: Both the work and personal lives of an
enterprise's employees can improve because of cloud computing. With an on-premise
server, employees may have to tend to it even on holidays for its security,
maintenance, and proper functionality. With cloud hosting this is not the case:
employees get ample time for their personal lives and the workload is comparatively
lighter.
Top leading Cloud Computing companies
1. Amazon Web Services(AWS)
One of the most successful cloud businesses is Amazon Web Services (AWS), an
Infrastructure as a Service (IaaS) offering in which customers pay rent for virtual
computers on Amazon's infrastructure.
2. Microsoft Azure Cloud Platform
Microsoft created the Azure platform, which enables .NET Framework applications to
run over the internet as an alternative platform for Microsoft developers. This is the classic
Platform as a Service (PaaS).
3. Google Cloud Platform ( GCP )
Google has built a worldwide network of data centers to service its search engine,
through which it has captured much of the world's advertising revenue. Using that
revenue, Google offers free software to users running on that infrastructure. This is called
Software as a Service (SaaS).
Advantages of Cloud Computing
The following are the main advantages of Cloud Computing:
1. Cost Efficiency: Cloud computing provides flexible pricing to users under the
pay-as-you-go model. It helps lessen capital expenditure on infrastructure, particularly
for small and medium-sized businesses (a toy cost comparison is sketched after this list).
2. Flexibility and Scalability: Cloud services facilitate the scaling of resources based on
demand, ensuring that businesses can handle varying workloads efficiently without
large investments in hardware that would sit idle during periods of low demand.
3. Collaboration and Accessibility: Cloud computing provides easy access to data and
applications from anywhere over the internet. This encourages collaborative team
participation from different locations through shared documents and projects in real-
time resulting in quality and productive outputs.
4. Automatic Maintenance and Updates: Cloud providers such as AWS take care of
infrastructure management and automatically apply updates as new software versions
are released. This guarantees that companies always have access to the newest
technologies and can focus completely on business operations and innovation.
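As a toy worked example of the pay-as-you-go principle, the Python sketch below
compares the fixed monthly cost of owning a server with paying per hour of actual use.
All rates are invented assumptions for illustration, not real provider prices.

# Toy cost comparison: owning a server vs. paying per hour of actual use.
# Both rates below are invented for illustration; real cloud pricing varies.
OWNED_SERVER_MONTHLY_COST = 400.00  # amortized hardware, power, and upkeep
CLOUD_RATE_PER_HOUR = 0.25          # hypothetical on-demand hourly rate

def monthly_cloud_cost(hours_used: float) -> float:
    """Pay only for the hours that instances actually run."""
    return hours_used * CLOUD_RATE_PER_HOUR

# Hours can exceed one machine-month (about 720 h) when several instances run.
for hours in (100, 720, 2000):
    cloud = monthly_cloud_cost(hours)
    cheaper = "cloud" if cloud < OWNED_SERVER_MONTHLY_COST else "owned server"
    print(f"{hours:>5} h/month: cloud ${cloud:7.2f} vs owned $400.00 -> {cheaper}")

The crossover at heavy, steady usage is also why the cost-management caveat among
the disadvantages below matters.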
Disadvantages Of Cloud Computing
The following are the main disadvantages of Cloud Computing:
1. Security Concerns: Storing sensitive data on external servers raises security
concerns, which is one of the main drawbacks of cloud computing.
2. Downtime and Reliability: Even though cloud services are usually dependable, they
may also suffer unexpected interruptions and downtime. These can be caused by
server problems, network issues, or maintenance disruptions at the cloud provider,
which negatively affect business operations and create issues for users accessing
their applications.
3. Dependency on Internet Connectivity: Cloud computing services rely heavily on
internet connectivity. Users need a stable, high-speed internet connection to access
and use cloud resources. In regions with limited connectivity, users may face
challenges in reaching their data and applications.
4. Cost Management Complexity: The pay-as-you-go pricing model is one of the main
benefits of cloud services, but it also leads to cost-management complexity. Without
careful monitoring and resource-usage optimization, organizations may end up with
unexpected costs as their usage scales. Understanding and controlling the use of
cloud services requires ongoing attention.
Cloud Sustainability
The following are some of the key points of cloud sustainability:
 Energy Efficiency: Cloud providers optimize data center operations to minimize
energy consumption and improve efficiency.
 Renewable Energy: Providers are increasingly adopting renewable energy sources,
such as solar and wind power, for their data centers to reduce carbon emissions.
 Virtualization: Server virtualization facilitates better utilization of hardware resources,
reducing the need for physical servers and lowering energy consumption.
Cloud Security
Cloud security refers to the measures and practices designed to protect data,
applications, and infrastructure in cloud computing environments. The following are some
of the best practices of cloud security:
 Data Encryption: Encryption is essential for securing data stored in the cloud. It
ensures that data remains unreadable to unauthorized users even if it is intercepted
(see the sketch after this list).
 Access Control: Implementing strict access controls and authentication mechanisms
helps ensure that only authorized users can access sensitive data and resources in the
cloud.
 Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring
users to provide multiple forms of verification, such as passwords, biometrics, or
security tokens, before gaining access to cloud services.
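As a minimal sketch of the data-encryption practice above, the Python example below
encrypts data on the client before it is uploaded. It assumes the third-party cryptography
package is installed (pip install cryptography); the sample record is made up.

# Encrypt data locally before it ever reaches cloud storage, so that an
# intercepted or leaked copy is unreadable without the key.
from cryptography.fernet import Fernet

# In production the key would come from a key-management service rather than
# being generated ad hoc next to the data; this is only an illustration.
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"customer record: alice@example.com, plan=premium"
ciphertext = fernet.encrypt(plaintext)      # this is what gets uploaded

# Only a holder of the key can recover the original data.
assert fernet.decrypt(ciphertext) == plaintext
print("ciphertext sample:", ciphertext[:32], "...")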
Use Cases Of Cloud Computing
Cloud computing has many use cases across industries and applications:
1. Scalable Infrastructure: Infrastructure as a Service (IaaS) enables organizations to
scale computing resources based on demand without investing in physical hardware.
2. Efficient Application Development: Platform as a Service (PaaS) simplifies
application development, offering tools and environments for building, deploying, and
managing applications.
3. Streamlined Software Access: Software as a Service (SaaS) provides subscription-
based access to software applications over the internet, reducing the need for local
installation and maintenance.
4. Data Analytics: Cloud-based platforms facilitate big data analytics, allowing
organizations to process and derive insights from large datasets efficiently.
5. Disaster Recovery: Cloud-based disaster recovery solutions offer cost-effective data
replication and backup, ensuring quick recovery in case of system failures or disasters.
Introduction of Embedded Systems
Before going into the overview of embedded systems, let's first understand the two basic
terms, embedded and system, and what they actually mean.
A system is a set of interrelated parts or components designed and developed to perform
a common task or the specific work for which it was created.
Embedded means included within something for a purpose, or simply something that is
integrated into or attached to another thing. Knowing what system and embedded actually
mean, we can easily understand what embedded systems are.
An embedded system is an integrated system that combines computer hardware and
software for a specific function. It can be described as a dedicated computer system
developed for a particular purpose. Unlike traditional general-purpose computers,
embedded systems may work independently or be attached to a larger system to handle a
few specific functions, and they can operate with little or no human intervention.
Three main components of Embedded systems are:
1. Hardware
2. Software
3. Firmware
Some examples of embedded systems:
 Digital watches
 Washing Machine
 Toys
 Televisions
 Digital phones
 Laser Printer
 Cameras
 Industrial machines
 Electronic Calculators
 Automobiles
 Medical Equipment
Application areas of Embedded Systems:
Embedded systems are present almost everywhere. We use them in everyday life, often
unknowingly, because in most cases they are integrated into larger systems. Here are
some of the application areas of embedded systems:
 Home appliances
 Transportation
 Health care
 Business sector & offices
 Defense sector
 Aerospace
 Agricultural Sector
Important Characteristics of an Embedded System:
1. Performs a specific task: Embedded systems perform some specific function or task.
2. Low Cost: Embedded systems are comparatively inexpensive.
3. Time Specific: They perform their tasks within a certain time frame.
4. Low Power: Embedded systems don't require much power to operate.
5. High Efficiency: Embedded systems operate at a very high level of efficiency.
6. Minimal User Interface: These systems need only a minimal user interface and are
easy to use.
7. Less Human Intervention: Embedded systems require little or no human intervention.
8. Highly Stable: Embedded systems do not change frequently; they are mostly fixed,
which maintains stability.
9. High Reliability: Embedded systems are reliable; they perform tasks consistently well.
10. Use Microprocessors or Microcontrollers: Embedded systems are designed around
microprocessors or microcontrollers and use limited memory.
11. Manufacturable: The majority of embedded systems are compact and affordable to
manufacture, owing to the small size and low complexity of their hardware.
Block Diagram of Embedded System: (figure omitted)
Advantages of Embedded System:
 Small size.
 Enhanced real-time performance.
 Easily customizable for a specific application.
Disadvantages of Embedded System:
 High development cost.
 Time-consuming design process.
 As it is application-specific, the available market is smaller.
Top Embedded Programming Languages: Embedded systems can be programmed
in different programming languages such as Embedded C, Embedded C++, Embedded
Java, and Embedded Python. Which language to use for developing an embedded
system, however, is entirely up to the developer. A minimal Python-flavored sketch is
shown below.
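As a small illustration, here is the classic "blink an LED" task written in MicroPython-style
Python, since Embedded Python appears in the list above. It assumes a microcontroller
board whose firmware provides the machine module, with an LED wired to pin 2; the pin
number is an assumption that varies by board.

# Minimal MicroPython-style blink sketch: runs on a microcontroller board,
# not on a desktop interpreter. Pin 2 is an assumed LED pin; it varies by board.
import time
from machine import Pin

led = Pin(2, Pin.OUT)    # configure the LED pin as a digital output

while True:
    led.value(1)         # drive the pin high: LED on
    time.sleep(0.5)
    led.value(0)         # drive the pin low: LED off
    time.sleep(0.5)

Tiny dedicated loops like this reflect the characteristics listed above: one specific task,
minimal user interface, and little human intervention.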