IoT Unit 3
1. IOT PROTOCOLS
IPv6
IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the
problem of IPv4 address exhaustion. IPv6 uses 128-bit addresses, giving an address
space of 2^128, which is far larger than that of IPv4. IPv6 addresses are written in
hexadecimal, with groups separated by colons (:).
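As a quick illustration, Python's standard ipaddress module can parse and expand an IPv6 address; the address below is taken from the reserved documentation range 2001:db8::/32:

    import ipaddress

    # Parse an IPv6 address written in colon-separated hexadecimal
    addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:ff00:0042:8329")

    print(addr)                # compressed form: 2001:db8::ff00:42:8329
    print(addr.exploded)       # all eight 16-bit groups written out in full
    print(addr.max_prefixlen)  # 128 -> an address space of 2**128 addresses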
What is 6LoWPAN?
6LoWPAN is an IPv6 protocol whose name expands to IPv6 over Low-Power Wireless Personal Area
Networks. As the name itself explains, this protocol works over a Wireless Personal Area
Network, i.e., WPAN.
WPAN is a Personal Area Network (PAN) in which the interconnected devices are centered around a
person's workspace and connected through a wireless medium. 6LoWPAN allows such devices to
communicate using the IPv6 protocol. IPv6 (Internet Protocol Version 6) is a network layer
protocol that allows communication to take place over the network. It is faster, more reliable,
and provides a very large number of addresses.
6LoWPAN came into existence to overcome the limitations of the conventional methods used to
transmit information: it allows even small devices with very limited processing ability to
establish communication using an Internet protocol, IPv6. It has very low cost, short range,
low memory usage, and a low bit rate.
It comprises an Edge Router and Sensor Nodes. Even the smallest of the IoT devices can now be part
of the network, and the information can be transmitted to the outside world as well. For example,
LED Streetlights.
It is a technology that makes the individual nodes IP enabled.
6LoWPAN can interact with 802.15.4 devices and also other types of devices on an IP
Network. For example, Wi-Fi.
It uses AES-128 link-layer security. AES is a block cipher with key sizes of
128/192/256 bits that encrypts data in blocks of 128 bits each. This link-layer
authentication and encryption is defined in IEEE 802.15.4 (a minimal sketch follows).
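The 802.15.4 link layer applies AES in CCM mode, combining encryption with authentication. Below is a minimal sketch of AES-128 authenticated encryption using the third-party cryptography package; the 13-byte nonce length matches the CCM nonce used by 802.15.4, while the key, header, and payload are illustrative placeholders:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    key = AESCCM.generate_key(bit_length=128)  # 128-bit AES key
    aesccm = AESCCM(key)                       # AES-CCM: encryption + authentication

    nonce = os.urandom(13)                     # 802.15.4 uses a 13-byte CCM nonce
    frame_payload = b"sensor reading: 21.5 C"  # placeholder payload
    frame_header = b"frame header"             # authenticated but not encrypted

    ciphertext = aesccm.encrypt(nonce, frame_payload, frame_header)
    plaintext = aesccm.decrypt(nonce, ciphertext, frame_header)
    assert plaintext == frame_payload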
Basic Requirements of 6LoWPAN:
1. The device should have a sleep mode in order to support battery saving.
2. Minimal memory requirement.
3. Routing overhead should be lowered.
Features of 6LoWPAN:
1. It is used with IEEE 802.15.4 in the 2.4 GHz band.
2. Outdoor range: ~200 m (maximum)
3. Data rate: 200 kbps (maximum)
4. Maximum number of nodes: ~100
Advantages of 6LoWPAN:
1. 6LoWPAN is a mesh network that is robust, scalable, and can heal on its own.
2. It delivers low-cost and secure communication in IoT devices.
3. It uses IPv6 protocol and so it can be directly routed to cloud platforms.
4. It offers one-to-many and many-to-one routing.
5. In the network, leaf nodes can be in sleep mode for a longer duration of time.
Disadvantages of 6LoWPAN:
1. It is comparatively less secure than Zigbee.
2. It has less immunity to interference than Wi-Fi and Bluetooth.
3. Without mesh topology, it supports only a short range.
Applications of 6LoWPAN:
1. It is used in wireless sensor networks.
2. It is used in home automation.
3. It is used in smart agricultural techniques and industrial monitoring.
4. It is used to make IPv6 packet transmission possible on networks with constrained
power and reliability.
Security and Interoperability with 6LoWPAN:
Security: 6LoWPAN security is ensured by the AES algorithm at the link layer, and
transport layer security mechanisms are included as well.
Interoperability: 6LoWPAN is able to operate with other wireless devices as well,
which makes it interoperable within a network.
Introduction to Message Queue Telemetry Transport Protocol
(MQTT)
Working of MQTT
MQTT's publish/subscribe (pub/sub) communication style, which aims to make the best use of
available bandwidth, is an alternative to the conventional client-server architecture in which
a client communicates directly with an endpoint. In the pub/sub paradigm, the client that
transmits a message (the publisher) and the client or clients that receive it (the subscribers)
are decoupled: they do not communicate with one another directly, and third parties, the
brokers, manage the relationships between publishers and subscribers.
Publishers and subscribers are both MQTT clients; the terms denote whether a client is
publishing messages or has subscribed to receive messages, and the same MQTT client can
fulfil both roles. A publish occurs when a client or device
wants to submit data to a server or broker.
The term "subscribe" refers to the reverse of that procedure. Under the pub/sub paradigm,
several clients can connect to a broker and subscribe to the topics that interest them.
When a broker and a subscribing client lose contact, the broker will store messages in a
buffer and send them to the subscriber once the connection is restored. If the publishing
client abruptly disconnects from the broker, the broker can close that connection and send
subscribers a cached message containing instructions from the publisher (its last will).
"Publishers send the messages, subscribers receive the messages they are interested in,
and brokers pass the messages from the publishers to the subscribers," reads an IBM
write-up describing the pub/sub paradigm. MQTT clients, such as publishers and
subscribers, only ever speak with MQTT brokers. Any device or program that runs an
MQTT library can be an MQTT client, ranging from microcontrollers like the Arduino to
entire application servers hosted in the cloud.
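To make this pub/sub flow concrete, below is a minimal sketch using the third-party paho-mqtt Python library (2.x API assumed); the broker host and topic are placeholders, test.mosquitto.org being a public test broker:

    import paho.mqtt.client as mqtt

    BROKER = "test.mosquitto.org"   # public test broker (placeholder)
    TOPIC = "demo/iot/temperature"  # placeholder topic

    # Called when the client has connected to the broker
    def on_connect(client, userdata, flags, reason_code, properties):
        client.subscribe(TOPIC, qos=1)  # QoS 1: at-least-once delivery

    # Called whenever the broker forwards a message on a subscribed topic
    def on_message(client, userdata, msg):
        print(f"{msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    # Last Will: published by the broker if this client disconnects unexpectedly
    client.will_set(TOPIC, payload="client offline", qos=1, retain=True)
    client.on_connect = on_connect
    client.on_message = on_message

    client.connect(BROKER, 1883)          # 1883 = default unencrypted MQTT port
    client.publish(TOPIC, "21.5", qos=1)  # the same client publishes and subscribes
    client.loop_forever()                 # process network traffic and callbacks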
Characteristics of MQTT
Lightweight: MQTT is designed to be lightweight, making it suitable for use in
resource-constrained environments such as embedded systems and low-power devices. The
protocol minimizes bandwidth and processing overhead, enabling efficient
communication even on constrained networks.
Publish-Subscribe Model: In the publish-subscribe model, clients (publishers) send
messages to topics, and other clients (subscribers) receive messages from
topics of interest. This decoupling of producers and consumers allows for flexible
and dynamic communication patterns.
Quality of Service (QoS) Levels: MQTT supports different levels of message
delivery assurance, referred to as Quality of Service (QoS). QoS levels range from 0 to
2, providing varying degrees of reliability and message delivery guarantees, depending on
the application requirements.
Retained Messages: MQTT allows brokers to store retained messages on topics,
ensuring that new subscribers receive the most recent message published on a
topic immediately after subscribing. This characteristic is useful for status updates
and configuration settings.
Last Will and Testament (LWT): MQTT clients can specify a Last Will and Testament
message to be published by the broker in the event of an unexpected client
disconnect. This feature provides a mechanism for detecting client failures and dealing
with them gracefully.
Security: MQTT supports various security mechanisms, including Transport Layer
Security (TLS) encryption and authentication mechanisms such as
username/password and client certificates. These capabilities ensure the
confidentiality, integrity, and authenticity of messages exchanged over MQTT
connections.
Advantages of MQTT
This model is not restricted to one-to-one communication between clients. Although the
publisher client sends a single message on a specific topic, the broker sends copies
to all the clients subscribed to that topic. Similarly, messages sent by
multiple publisher clients on multiple different topics will be delivered to all the clients
subscribed to those topics. Hence one-to-many, many-to-one, as well as many-to-many
communication is possible using this model. Also, clients can publish data and at the
same time receive data through this two-way communication, so MQTT is
considered a bi-directional protocol. The default unencrypted MQTT port used for data
transmission is 1883; the encrypted port for secure transmission is 8883.
Lightweight protocol that is quick to create and allows for efficient data transport
Minimal data packet usage, resulting in low network usage
Effective data dispersion
The effective use of remote sensing and control
Prompt and effective message delivery
Minimises power consumption, which is beneficial for the linked devices, and
maximises network capacity.
Data transmission is quick, efficient, and lightweight because MQTT messages have a
small code footprint. Control messages have a fixed header of 2 bytes and a
payload of up to 256 megabytes.
Disadvantages of MQTT
When compared to Constrained Application Protocol (CoAP), MQTT has slower send
cycles.
Resource discovery in MQTT is based on flexible topic subscription, while resource
discovery in CoAP is based on a reliable system.
MQTT lacks built-in encryption; security is instead provided by TLS/SSL
(Transport Layer Security/Secure Sockets Layer).
Building an internationally scalable MQTT network is challenging.
There are several protocols in the application layer of the Internet protocol suite. One such useful
protocol is the CoAP or Constrained Application Protocol. This protocol has a wide range of
advantages and applications in the field of the Internet of Things (IoT) and cloud computing. CoAP
also makes a powerful contribution by providing versatile solutions for IoT applications.
This article delves into a set of key topics and fundamental concepts in CoAP protocol along with its
applications in the real world.
What is CoAP?
CoAP or Constrained Application Protocol, as the name suggests, is an application layer protocol
that was introduced by the Internet Engineering Task Force in the year 2014. CoAP is basically
designed for constrained environments.
It is a web-based protocol that resembles HTTP. It is also based on the request-response model.
Based on the REST-style architecture, this protocol considers the various objects in the network as
resources. These resources are uniquely assigned a URI or Uniform Resource Identifier. The data
from one resource to another resource is transferred in the form of CoAP message packets whose
format is briefly described later.
The client requests some resource and, in response, the server sends a response, over
which the client sends an acknowledgement. However, some types of CoAP messages do not
involve the receiver sending acknowledgements for the information received. These are
called NON or Non-confirmable messages, whereas messages for which the receiver sends a
response back to the sender are known as CON or Confirmable messages.
Similar to HTTP, a CoAP request is sent by a client using a method code to request an
action on a URI identifiable object.
The server replies with a response code which may include a resource representation.
The CoAP model is essentially a client/server model enabling the client to request service
from the server as needed, with the server responding to the client's request.
However, CoAP messages are asynchronous since CoAP uses UDP. The message layer
interfaces with the UDP layer, which formats the data received into a datagram and sends it
to the lower layer of the OSI or TCP/IP model.
Methods in CoAP
CoAP is a web-based protocol. This means CoAP resembles the HTTP protocol and is
capable of using the HTTP methods.
These methods are:
GET – The GET method is used to retrieve the resource information identified by the request
URI. On success, a 200 (OK) response is sent.
POST – The POST method creates a new subordinate resource under the parent URI
requested from the server. On successful resource creation, a 201
(Created) response is sent, while if the request only modifies an existing resource, a
200 (OK) response code is sent.
DELETE – The delete method deletes the resource identified by the requested URI and
a 200 (OK) response code is sent on successful operation.
PUT – The PUT method updates or creates the resource identified by the request URI
with the enclosed message body. The message body is treated as a modified version
of the resource if it already exists at the specified URI; otherwise a new resource with that
URI is created. A 200 (OK) response is received in the former case, whereas a 201
(Created) response is received in the latter. If the resource is neither created nor
modified, an error response code is sent.
The most fundamental difference between CoAP and HTTP is that CoAP defines a
capability which is not present in HTTP: Observe.
Observe is very similar to the GET method but with an added observe option.
This tells the server to send the client every update about the resource. Therefore,
upon any change in the resource, the server sends a response to the client.
These responses can either be sent individually or be piggy-backed onto other messages.
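As an illustration of this request/response model, the following minimal sketch issues a GET request using the third-party aiocoap library; the URI points at coap.me, a public CoAP test server, and is only a placeholder:

    import asyncio
    from aiocoap import Context, Message, GET

    async def main():
        # Create a client context (manages the underlying UDP transport)
        context = await Context.create_client_context()

        # Build a GET request for a resource identified by its URI
        request = Message(code=GET, uri="coap://coap.me/test")  # placeholder URI

        response = await context.request(request).response
        print(f"Response code: {response.code}")        # e.g. 2.05 Content
        print(f"Payload: {response.payload.decode()}")

    asyncio.run(main())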
Message Format of CoAP
CoAP messages are encoded in a simple binary format. Like other message
formats, a CoAP message has a header and a payload section, along with an optional
section.
The CoAP header is 4 bytes (32 bits), and this size is fixed for every CoAP message,
whereas the rest of the message is optional, including the payload and tokens
of variable size ranging from 0 to 8 bytes.
The message format of CoAP contains the following fields:
Version – The size of version field is 2 bits. It represents the version of the CoAP
protocol.
Type Code – The size of type field is 2 bits. There are four types of messages namely
confirmable, non-confirmable, acknowledgement and reset represented by the bit
patterns 00, 01, 10, 11 respectively.
Option Count – The size of the option count field is 4 bits, which means there
can be up to 16 options in the header.
Code – The size of the code field is 8 bits. It indicates whether the message is empty,
a request message, or a response message.
Message ID – The size of the message ID field is 16 bits. It is used to detect message
duplication and to match acknowledgement and reset messages to the messages they answer.
Tokens [Optional] – The size of the token field is variable, ranging from 0 to 8 bytes.
It is used to match a response with a request.
Options [Optional] – The options field in CoAP message has a variable size. It defines
the type of payload message.
Payload [Optional] – Similar to options field, the payload field has a variable size. The
payload of requests or of responses is typically a representation of the requested
resource or the result of the requested action.
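The fixed 4-byte header can be assembled in a few lines of Python; this sketch simply packs the fields listed above (version, type, option count, code, and message ID) and is illustrative only:

    import struct

    def build_coap_header(version, msg_type, option_count, code, message_id):
        # First byte: version (2 bits) | type (2 bits) | option count (4 bits)
        first = (version & 0x3) << 6 | (msg_type & 0x3) << 4 | (option_count & 0xF)
        # Then the 8-bit code and the 16-bit message ID in network byte order
        return struct.pack("!BBH", first, code & 0xFF, message_id & 0xFFFF)

    # Version 1, confirmable (type 00), no options, GET (code 1), message ID 0x1234
    header = build_coap_header(1, 0, 0, 1, 0x1234)
    print(header.hex())  # -> 40011234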
Applications of CoAP
Real Time Monitoring in Grid – Smart cities can monitor the distribution and
generation of power remotely. CoAP sensors can be embedded inside the
transformers, and the data can be transferred over GPRS or 6LoWPAN.
Defense utilities – Armories and tanks are nowadays fitted with sensors so that
information can be communicated remotely without any interference. CoAP
sensors can detect intrusions, and they are capable of transferring more data
even over low-bandwidth networks.
Aircraft utilities – Aircraft sensors and actuators can be connected with other
sensors, with communication taking place through smart CoAP-based sensors and
actuators.
Types of RFID
Passive RFID: Passive RFID tags do not have their own power source; they draw power
from the reader. The tag is not attached to a power supply; instead it stores the
energy emitted by the reader's antenna. Passive tags operate at specific frequencies:
125–134 kHz (low frequency), 13.56 MHz (high frequency), and 856–960 MHz
(ultra-high frequency).
No embedded power needed
Used for tracking inventory
Has a unique identification number
Sensitive to interference
Semi-passive RFID: Semi-passive (battery-assisted passive) tags carry a small battery
that powers the tag's circuitry, but they still rely on the reader's signal for communication.
Active RFID: In these devices, the RF tag is attached to a power supply that emits a signal,
and an antenna receives the data. In other words, an active tag uses its own power source,
such as a battery, and does not require power from the reader.
Embedded power allows communication over large distances
Has a unique identifier/identification number
Can drive other devices such as sensors
Performs better than passive tags in the presence of metal
Design issues in a Wireless Sensor Network (WSN) include:
1. Quality of Service
2. Security Issue
3. Energy Efficiency
4. Network Throughput
5. Performance
6. Ability to cope with node failure
7. Cross layer optimisation
8. Scalability to large-scale deployments
A modern Wireless Sensor Network (WSN) faces several challenges, including:
Limited power and energy: WSNs are typically composed of battery-powered sensors
that have limited energy resources. This makes it challenging to ensure that the
network can function for
long periods of time without the need for frequent battery replacements.
Limited processing and storage capabilities: Sensor nodes in a WSN are typically
small and have limited processing and storage capabilities. This makes it difficult to
perform complex tasks or store large amounts of data.
Heterogeneity: WSNs often consist of a variety of different sensor types and nodes
with different capabilities. This makes it challenging to ensure that the network can
function effectively and
efficiently.
Security: WSNs are vulnerable to various types of attacks, such as eavesdropping,
jamming, and spoofing. Ensuring the security of the network and the data it collects is a
major challenge.
Scalability: WSNs often need to be able to support a large number of sensor nodes
and handle large amounts of data. Ensuring that the network can scale to meet these
demands is a significant
challenge.
Interference: WSNs are often deployed in environments where there is a lot of
interference from other wireless devices. This can make it difficult to ensure reliable
communication between sensor nodes.
Reliability: WSNs are often used in critical applications, such as monitoring the
environment or controlling industrial processes. Ensuring that the network is reliable
and able to function correctly
in all conditions is a major challenge.
Components of WSN:
1. Sensors:
Sensors in a WSN capture environmental variables and are used for
data acquisition. The sensed signals are converted into electrical signals.
2. Radio Nodes:
It is used to receive the data produced by the Sensors and sends it to the WLAN
access point. It consists of a microcontroller, transceiver, external memory, and power
source.
3. WLAN Access Point:
It receives the data sent wirelessly by the radio nodes, generally over the
internet.
4. Evaluation Software:
The data received by the WLAN access point is processed by software called
Evaluation Software, which presents reports to the users for further processing,
analysis, storage, and mining of the data.
Advantages of Wireless Sensor Networks (WSN):
Low cost: WSNs consist of small, low-cost sensors that are easy to deploy, making them
a cost-effective solution for many applications.
Wireless communication: WSNs eliminate the need for wired connections, which can be
costly and difficult to install. Wireless communication also enables flexible deployment and
reconfiguration of the network.
Energy efficiency: WSNs use low-power devices and protocols to conserve energy,
enabling long-term operation without the need for frequent battery replacements.
Scalability: WSNs can be scaled up or down easily by adding or removing sensors,
making them suitable for a range of applications and environments.
Real-time monitoring: WSNs enable real-time monitoring of physical phenomena in the
environment, providing timely information for decision making and control.
Disadvantages of Wireless Sensor Networks (WSN):
Limited range: The range of wireless communication in WSNs is limited, which can be a
challenge for large-scale deployments or in environments with obstacles that obstruct
radio signals.
Limited processing power: WSNs use low-power devices, which may have limited
processing power and memory, making it difficult to perform complex computations or
support advanced applications.
Data security: WSNs are vulnerable to security threats, such as eavesdropping,
tampering, and denial of service attacks, which can compromise the confidentiality,
integrity, and availability of data.
Interference: Wireless communication in WSNs can be susceptible to interference from
other wireless devices or radio signals, which can degrade the quality of data
transmission.
Deployment challenges: Deploying WSNs can be challenging due to the need for proper
sensor placement, power management, and network configuration, which can require
significant time and resources.
While WSNs offer many benefits, they also have limitations and challenges that must be
considered when deploying and using them in real-world applications.
Big data analysis uses advanced analytical methods that can extract important business insights from
bulk datasets. Within these datasets lie both structured (organized) and unstructured (unorganized)
data. Its applications cover different industries such as healthcare, education, insurance, AI, retail,
and manufacturing. By analyzing this data, organizations gain better insight into what is
working and what is not, so they can make the necessary improvements, develop their
production systems, and increase profitability.
What is Big-Data Analytics?
Big data analytics is all about crunching massive amounts of information to uncover
hidden trends, patterns, and relationships. It’s like sifting through a giant mountain of data
to find the gold nuggets of insight.
Here’s a breakdown of what it involves:
Collecting Data: Data comes from various sources such as social media, web
traffic, sensors, and customer reviews.
Cleaning the Data: Imagine having to assess a pile of rocks that included some gold
pieces in it. You would have to clean the dirt and the debris first. When data is being
cleaned, mistakes must be fixed, duplicates must be removed and the data must be
formatted properly.
Analyzing the Data: It is here that the wizardry takes place. Data analysts employ
powerful tools and techniques to discover patterns and trends. It is the same thing as
looking for a specific pattern in all those rocks that you sorted through.
The multi-industrial utilization of big data analytics spans from healthcare to finance to
retail. Through their data, companies can make better decisions, become more efficient,
and get a competitive advantage.
How does big data analytics work?
Big Data Analytics is a powerful tool that helps unlock the potential of large and complex
datasets. To get a better understanding, let's break it down into key steps:
Data Collection: Data is the core of Big Data Analytics. It is the gathering of data from
different sources such as the customers’ comments, surveys, sensors, social media,
and so on. The primary aim of data collection is to compile as much accurate data as
possible. The more data, the more insights.
Data Cleaning (Data Preprocessing): The next step is to process this information. It
often requires some cleaning. This entails the replacement of missing data, the
correction of inaccuracies, and the removal of duplicates. It is like sifting through a
treasure trove, separating the rocks and debris and leaving only the valuable gems
behind.
Data Processing: Next comes data processing. This stage involves organizing,
structuring, and formatting the data so that it is usable for analysis, like a chef
gathering and preparing the ingredients before cooking. Data processing turns raw data
into a format suited for analytics tools to process.
Data Analysis: Data analysis uses statistical, mathematical, and machine learning
methods to extract the most important findings from the processed data. For example,
it can uncover customer preferences, market trends, or patterns in healthcare data.
Data Visualization: The results of data analysis are usually presented in visual form,
for example charts, graphs, and interactive dashboards. These visualizations simplify
large amounts of data and allow decision makers to quickly detect patterns and trends.
Data Storage and Management: Storing and managing the analyzed data is of utmost
importance. It is like digital scrapbooking: you may want to return to these insights
later, so how you store them matters greatly. Moreover, data protection and adherence
to regulations are key issues to be addressed at this crucial stage.
Continuous Learning and Improvement: Big data analytics is a continuous process
of collecting, cleaning, and analyzing data to uncover hidden insights. It helps
businesses make better decisions and gain a competitive edge.
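As a minimal sketch of the cleaning and analysis steps above, the pandas snippet below works on a small fabricated dataset; every column name and value is made up for illustration:

    import pandas as pd

    # Hypothetical raw data collected from customer records (illustrative only)
    raw = pd.DataFrame({
        "customer": ["a", "b", "b", "c", None],
        "region":   ["north", "south", "south", "north", "south"],
        "spend":    [120.0, 80.0, 80.0, None, 95.0],
    })

    # Cleaning: remove duplicates, drop rows missing the key field,
    # and fill missing numeric values with the column mean
    clean = (raw.drop_duplicates()
                .dropna(subset=["customer"])
                .fillna({"spend": raw["spend"].mean()}))

    # Analysis: aggregate spend per region to surface a simple trend
    summary = clean.groupby("region")["spend"].agg(["count", "mean"])
    print(summary)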
Types of Big Data Analytics
Big Data Analytics comes in many different types, each serving a different purpose:
1. Descriptive Analytics: This type helps us understand past events. In social media, it
shows performance metrics, like the number of likes on a post.
2. Diagnostic Analytics: Diagnostic analytics delves deeper to uncover the reasons
behind past events. In healthcare, it identifies the causes of high patient re-admissions.
3. Predictive Analytics: Predictive analytics forecasts future events based on past data.
Weather forecasting, for example, predicts tomorrow’s weather by analyzing historical
patterns.
4. Prescriptive Analytics: This category not only predicts results but also offers
recommendations for action to achieve the best outcomes. In e-commerce, it may suggest
the best price for a product to achieve the highest possible profit.
5. Real-time Analytics: The key function of real-time analytics is data processing in real
time. It swiftly allows traders to make decisions based on real-time market events.
6. Spatial Analytics: Spatial analytics is about location data. In urban management, it
optimizes traffic flow using data from sensors and cameras to minimize
traffic jams.
7. Text Analytics: Text analytics delves into the unstructured data of text. In the hotel
business, it can use the guest reviews to enhance services and guest satisfaction.
These types of analytics serve different purposes, making data understandable and
actionable. Whether it’s for business, healthcare, or everyday life, Big Data
Analytics provides a range of tools to turn data into valuable insights, supporting better
decision-making.
Big Data Analytics Technologies and Tools
Big Data Analytics relies on various technologies and tools that might sound complex, so
let's simplify them:
Hadoop: Imagine Hadoop as an enormous digital warehouse. It’s used by companies
like Amazon to store tons of data efficiently. For instance, when Amazon suggests
products you might like, it’s because Hadoop helps manage your shopping history.
Spark: Think of Spark as the super-fast data chef. Netflix uses it to quickly analyze
what you watch and recommend your next binge-worthy show.
NoSQL Databases: NoSQL databases, like MongoDB, are like digital filing cabinets
that Airbnb uses to store your booking details and user data. These databases are
popular because they are quick and flexible, so the platform can provide you with the
right information when you need it.
Tableau: Tableau is like an artist that turns data into beautiful pictures. The World
Bank uses it to create interactive charts and graphs that help people understand
complex economic data.
Python and R: Python and R are like magic tools for data scientists. They use these
languages to solve tricky problems. For example, Kaggle uses them to predict things
like house prices based on past data.
Machine Learning Frameworks (e.g., TensorFlow): Machine learning frameworks
are the tools that make predictions. Airbnb uses TensorFlow to predict which
properties are most likely to be booked in certain areas. It helps hosts make smart
decisions about pricing and availability.
These tools and technologies are the building blocks of Big Data Analytics. They help
organizations gather, process, understand, and visualize data, making it easier for them to
make decisions based on information.
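As a toy version of the predictive use case mentioned above (predicting house prices from past data), here is a minimal scikit-learn sketch; the training data is fabricated purely for illustration:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Fabricated training data: house area in square metres -> sale price
    area = np.array([[50], [80], [120], [150], [200]])
    price = np.array([150_000, 230_000, 330_000, 400_000, 520_000])

    model = LinearRegression()
    model.fit(area, price)  # learn price as a linear function of area

    # Predictive analytics: estimate the price of an unseen 100 m^2 house
    predicted = model.predict([[100]])
    print(f"Predicted price: {predicted[0]:,.0f}")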
Benefits of Big Data Analytics
Big Data Analytics offers a host of real-world advantages, and let’s understand with
examples:
1. Informed Decisions: Imagine a store like Walmart. Big Data Analytics helps them
make smart choices about what products to stock. This not only reduces waste but also
keeps customers happy and profits high.
2. Enhanced Customer Experiences: Think about Amazon. Big Data Analytics is what
makes those product suggestions so accurate. It’s like having a personal shopper who
knows your taste and helps you find what you want.
3. Fraud Detection: Credit card companies, like MasterCard, use Big Data Analytics to
catch and stop fraudulent transactions. It’s like having a guardian that watches over
your money and keeps it safe.
4. Optimized Logistics: FedEx, for example, uses Big Data Analytics to deliver your
packages faster and with less impact on the environment. It’s like taking the fastest
route to your destination while also being kind to the planet.
Challenges of Big data analytics
While Big Data Analytics offers incredible benefits, it also comes with its set of challenges:
Data Overload: Consider Twitter, where approximately 6,000 tweets are posted every
second. The challenge is sifting through this avalanche of data to find valuable insights.
Data Quality: If the input data is inaccurate or incomplete, the insights generated by
Big Data Analytics can be flawed. For example, incorrect sensor readings could lead to
wrong conclusions in weather forecasting.
Privacy Concerns: With the vast amount of personal data used, like in Facebook’s ad
targeting, there’s a fine line between providing personalized experiences and infringing
on privacy.
Security Risks: With cyber threats increasing, safeguarding sensitive data becomes
crucial. For instance, banks use Big Data Analytics to detect fraudulent activities, but
they must also protect this information from breaches.
Costs: Implementing and maintaining Big Data Analytics systems can be expensive.
Airlines like Delta use analytics to optimize flight schedules, but they need to ensure
that the benefits outweigh the costs.
Overcoming these challenges is essential to fully harness the power of Big Data Analytics.
Businesses and organizations must tread carefully, ensuring they make the most of the
insights while addressing these obstacles effectively.
Usage of Big Data Analytics
Big Data Analytics has a significant impact in various sectors:
Healthcare: It aids in precise diagnoses and disease prediction, elevating patient care.
Retail: Amazon’s use of Big Data Analytics offers personalized product
recommendations based on your shopping history, creating a more tailored and
enjoyable shopping experience.
Finance: Credit card companies such as Visa rely on Big Data Analytics to swiftly
identify and prevent fraudulent transactions, ensuring the safety of your financial
assets.
Transportation: Companies like Uber use Big Data Analytics to optimize drivers’
routes and predict demand, reducing wait times and improving overall transportation
experiences.
Agriculture: Farmers make informed decisions, boosting crop yields while conserving
resources.
Manufacturing: Companies like General Electric (GE) use Big Data Analytics to
predict machinery maintenance needs, reducing downtime and enhancing operational
efficiency.
Conclusion
Big Data Analytics is a game-changer that’s shaping a smarter future. From improving
healthcare and personalizing shopping to securing finances and predicting demand, it’s
transforming various aspects of our lives. However, challenges like managing
overwhelming data and safeguarding privacy are real concerns. In our world flooded with
data, Big Data Analytics acts as a guiding light. It helps us make smarter choices, offers
personalized experiences, and uncovers valuable insights. It’s a powerful and stable tool
that promises a better and more efficient future for everyone.
Nowadays, cloud computing is adopted by almost every company, whether it is an MNC or a
startup, and many more are still migrating to it because of the cost savings, lower maintenance,
and increased data capacity offered by servers maintained by cloud providers.
One more reason for this drastic shift from companies' on-premises servers to cloud
providers is the 'pay as you go' principle on which their services are based, i.e., you only
pay for the services you actually use. The disadvantage of an on-premises server is that the
company still has to pay for it even when it is not in use.
What Is Cloud Computing?
Cloud Computing means storing and accessing the data and programs on remote
servers that are hosted on the internet instead of the computer’s hard drive or local server.
Cloud computing is also referred to as Internet-based computing, it is a technology where
the resource is provided as a service through the Internet to the user. The data that is
stored can be files, images, documents, or any other storable document.
The following are some of the Operations that can be performed with Cloud Computing
Storage, backup, and recovery of data
Delivery of software on demand
Development of new applications and services
Streaming videos and audio
Understanding How Cloud Computing Works
Cloud computing lets users easily access computing resources like storage and
processing over the internet rather than on local hardware. In a nutshell, it works as
follows:
Infrastructure: Cloud computing relies on remote servers hosted on the
internet to store, manage, and process data.
On-Demand Access: Users can access cloud services and resources on
demand, scaling up or down without having to invest in physical hardware.
Benefits: Cloud computing offers cost savings, scalability, reliability, and
accessibility; it reduces capital expenditure and improves
efficiency.
Origins Of Cloud Computing
Mainframe computing in the 1950s and the internet explosion in the 1990s came together
to give rise to cloud computing. Since businesses like Amazon, Google, and Salesforce
started providing web-based services in the early 2000s, the term "cloud computing" has
gained popularity. The concept promises scalability, adaptability, and cost-effectiveness
through on-demand, internet-based access to computational resources.
These days, cloud computing is pervasive, driving a wide range of services across
markets and transforming the processing, storage, and retrieval of data.
What is Virtualization In Cloud Computing?
Virtualization is the software technology that provides logical isolation of
physical resources. Creating logically isolated instances of physical resources such as RAM,
CPU, and storage over the cloud is known as virtualization in cloud computing. In simple
terms, it means creating virtual instances of computing resources over the cloud. It
provides better management and utilization of hardware resources, with logical isolation
making applications independent of one another. It streamlines resource
allocation and enhances scalability by running multiple virtual computers within a single
physical machine, offering cost-effectiveness and better optimization of resources.
Architecture Of Cloud Computing
Cloud computing architecture refers to the components and sub-components required for
cloud computing. These components typically refer to:
1. Front end ( Fat client, Thin client)
2. Back-end platforms ( Servers, Storage )
3. Cloud-based delivery and a network ( Internet, Intranet, Intercloud )
1. Front End ( User Interaction Enhancement )
The user interface of cloud computing consists of two kinds of clients. Thin clients
use web browsers, providing portable and lightweight access, while fat clients
offer richer functionality for a stronger user experience.
2. Back-end Platforms ( Cloud Computing Engine )
The core of cloud computing lies in the back-end platforms, with numerous servers for
storage and processing. Application logic is managed by the servers, while effective data
handling is provided by the storage. Together, these back-end
platforms offer the processing power and the capacity to manage and store
data behind the cloud.
3. Cloud-Based Delivery and Network
On-demand access to computing resources is provided over the Internet, an intranet,
or an intercloud. The Internet offers global accessibility, an intranet supports internal
communication of services within an organization, and the intercloud enables
interoperability across various cloud services. This dynamic network connectivity is
an essential component of cloud computing architecture, guaranteeing easy access and
data transfer.
What Are The Types of Cloud Computing Services?
The following are the types of Cloud Computing:
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
4. Function as a Service (FaaS)
1. Infrastructure as a Service ( IaaS )
Flexibility and Control: IaaS provides virtualized computing resources
such as VMs, storage, and networks, giving users control over the operating
system and applications (see the sketch after this list).
Reducing Expenses of Hardware: IaaS saves businesses money by
eliminating investment in physical infrastructure, making it cost-effective.
Scalability of Resources: The cloud scales hardware resources up or
down on demand, providing optimal performance with cost efficiency.
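To make the IaaS model concrete, the hypothetical sketch below provisions and then releases a virtual machine with the boto3 AWS SDK; it assumes AWS credentials are already configured, and the AMI ID is a placeholder:

    import boto3

    # Connect to the EC2 (IaaS) service in a chosen region
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Provision a small virtual machine on demand -- no physical hardware needed
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )

    instance = instances[0]
    instance.wait_until_running()
    print(f"Launched {instance.id}")

    # Pay as you go: releasing the resource stops the charges
    instance.terminate()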
2. Platform as a Service ( PaaS )
Simplifying the Development: Platform as a Service supports application development
by keeping the underlying infrastructure as an abstraction. It lets developers
focus entirely on application logic ( code ) while background operations are
managed completely by the platform provider.
Enhancing Efficiency and Productivity: PaaS lowers the complexity of
infrastructure management, speeding up execution time and bringing updates
to market quickly by streamlining the development process.
Automation of Scaling: PaaS manages resource scaling, guaranteeing that the
program's workload runs efficiently.
3. Software as a Service ( SaaS )
Collaboration And Accessibility: Software as a Service (SaaS) lets users easily
access applications without requiring local installation. The software is fully
managed by the provider and works as a service over the internet, encouraging
effortless cooperation and ease of access.
Automation of Updates: SaaS providers handle software maintenance
with automatic updates, ensuring users always experience the
latest features and security patches.
Cost Efficiency: SaaS acts as a cost-effective solution by reducing the overhead of IT
support and eliminating the need for individual software licenses.
4. Function as a Service (FaaS)
Event-Driven Execution: FaaS relieves users of maintaining servers and infrastructure.
It lets developers run code in response to events.
Cost Efficiency: FaaS provides cost efficiency through a 'pay as you run' principle:
you pay only for the computing resources actually used.
Scalability and Agility: Serverless architectures scale effortlessly in handling
workloads, promoting agility in development and deployment.
What Are Cloud Deployment Models?
The following are the Cloud Deployment Models:
1. Private Deployment Model
It provides enhanced protection and customization, with cloud resources used
according to an organization's particular requirements. It is ideal for companies
with strict security and compliance needs.
2. Public Deployment Model
It offers a pay-as-you-go principle for the scalability and accessibility of cloud
resources among numerous users, and it ensures cost-effectiveness while providing
the services enterprises need.
3. Hybrid Deployment Model
It combines elements of both private and public clouds, providing
seamless data and application processing between the two environments. It offers the
flexibility to optimize resources, for example keeping sensitive data in the private
cloud and important scalable applications in the public cloud.
What Is Cloud Hosting?
Infrastructure is the layer where people start and build from scratch, and it is the
layer where cloud hosting lives. Say you have a company and a website, and the
website handles a lot of communication exchanged between members. You start
with a few members talking with each other, and gradually the number of members
increases. As time passes and membership grows, there is more
traffic on the network and your server slows down, causing problems.
A few years ago, websites were put on servers somewhere, and you had to
buy and set up a number of servers yourself. This cost a lot of money and took a lot of
time, and you paid for those servers both when you were using them and when you were
not. This is traditional hosting. Cloud hosting overcomes this problem. With cloud
computing, you have access to computing power when you need it. Your website is
put on a cloud server much as you would put it on a dedicated server. People start visiting
your website, and if you suddenly need more computing power, you scale up according
to the need.
Characteristics Of Cloud Computing
The following are the characteristics of Cloud Computing:
1. Scalability: With Cloud hosting, it is easy to grow and shrink the number and size of
servers based on the need. This is done by either increasing or decreasing the
resources in the cloud. This ability to alter plans due to fluctuations in business size
and needs is a superb benefit of cloud computing, especially when experiencing a
sudden growth in demand.
2. Save Money: An advantage of cloud computing is the reduction in hardware costs.
Instead of purchasing in-house equipment, hardware needs are left to the vendor. For
companies that are growing rapidly, new hardware can be large, expensive, and
inconvenient. Cloud computing alleviates these issues because resources can be
acquired quickly and easily. Even better, the cost of repairing or replacing equipment is
passed to the vendors. Along with purchase costs, off-site hardware cuts internal
power costs and saves space. Large data centers can take up precious office space
and produce a large amount of heat. Moving to cloud applications or storage can help
maximize space and significantly cut energy expenditures.
3. Reliability: Rather than being hosted on one single instance of a physical server,
hosting is delivered on a virtual partition that draws its resources, such as disk space,
from an extensive network of underlying physical servers. If one server goes offline, it
has no effect on availability, as the virtual servers continue to pull resources
from the remaining network of servers.
4. Physical Security: The underlying physical servers are still housed within data centers
and so benefit from the security measures that those facilities implement to prevent
people from accessing or disrupting them on-site.
5. Outsource Management: While you manage the business, someone else
manages your computing infrastructure. You do not need to worry about its management
or its degradation.
Top Reasons to Switch from On-premise to Cloud Computing
The following are the Top reasons to switch from on-premise to cloud computing:
1. Reduces cost: The cost-cutting that businesses achieve over time by utilizing cloud
computing is one of the main advantages of this technology. On average, companies can
save 15% of their total costs by migrating to the cloud. By using cloud
servers, businesses save further because they need not employ a staff of
technical support personnel to address server issues. There are many great business
case studies on the cost-cutting benefits of cloud servers, such as the Coca-
Cola and Pinterest case studies.
2. More storage: The cloud provides more servers, storage space, and computing power
so that software and applications can execute as quickly and efficiently as possible.
Many tools are available for cloud storage, such as Dropbox, OneDrive, Google Drive,
iCloud Drive, etc.
3. Employees Better Work Life Balance: Cloud computing can directly improve both
the work and personal lives of an enterprise's workers. With on-premises servers,
employees may have to attend to the server even on holidays for its security,
maintenance, and proper functioning. With cloud storage this is not the case:
employees get ample time for their personal lives, and the workload is
comparatively lighter.
Top leading Cloud Computing companies
1. Amazon Web Services(AWS)
One of the most successful cloud-based businesses is Amazon Web Services (AWS),
an Infrastructure as a Service (IaaS) offering through which customers rent virtual
computers on Amazon's infrastructure.
2. Microsoft Azure Cloud Platform
Microsoft created the Azure platform, which enables .NET Framework applications to
run over the internet as an alternative platform for Microsoft developers. This is the classic
Platform as a Service (PaaS).
3. Google Cloud Platform ( GCP )
Google has built a worldwide network of data centers to service its search engine,
through which it captures much of the world's advertising revenue. Using that
revenue, Google offers free software to users based on this infrastructure. This is called
Software as a Service (SaaS).
Advantages of Cloud Computing
The following are main advantages of Cloud Computing:
1. Cost Efficiency: Cloud computing provides flexible pricing to users with the
pay-as-you-go model. It helps lessen capital expenditure on
infrastructure, particularly for small and medium-sized businesses.
2. Flexibility and Scalability: Cloud services facilitate the scaling of resources based on
demand. It ensures the efficiency of businesses in handling various workloads without
the need for large amounts of investments in hardware during the periods of low
demand.
3. Collaboration and Accessibility: Cloud computing provides easy access to data and
applications from anywhere over the internet. This encourages collaborative team
participation from different locations through shared documents and projects in real-
time resulting in quality and productive outputs.
4. Automatic Maintenance and Updates: The cloud provider takes care of infrastructure
management and automatically applies software updates as new versions arrive.
This guarantees that companies always have access to the newest technologies
and can focus completely on business operations and innovation.
Disadvantages Of Cloud Computing
The following are the main disadvantages of Cloud Computing:
1. Security Concerns: Storing sensitive data on external servers raises security
concerns, which is one of the main drawbacks of cloud computing.
2. Downtime and Reliability: Even though cloud services are usually dependable, they
may also suffer unexpected interruptions and downtime. These can be caused
by server problems, network issues, or maintenance disruptions at the cloud
provider, which negatively affect business operations and create issues for users
accessing their apps.
3. Dependency on Internet Connectivity: Cloud computing services rely heavily on
internet connectivity. Users need a stable, high-speed internet connection to access
and use cloud resources. In regions with limited internet connectivity, users may
face challenges in accessing their data and applications.
4. Cost Management Complexity: The pay-as-you-go pricing model is one of the main
benefits of cloud services, but it also leads to cost management complexity. Without
careful monitoring and resource optimization, organizations may end up with
unexpected costs as their usage scales. Keeping cloud service usage understood and
controlled requires ongoing attention.
Cloud Sustainability
The following are some of the key points of cloud sustainability:
Energy Efficiency: Cloud providers optimize data center operations
to minimize energy consumption and improve efficiency.
Renewable Energy: Providers are increasingly adopting renewable energy sources like
solar and wind power for their data centers to reduce carbon emissions.
Virtualization: Server virtualization facilitates better utilization of hardware resources,
reducing the need for physical servers and lowering energy consumption.
Cloud Security
Cloud security refers to the measures and practices designed to protect data,
applications, and infrastructure in cloud computing environments. The following are some
of the best practices of cloud security:
Data Encryption: Encryption is essential for securing data stored in the cloud. It
ensures that data remains unreadable to unauthorized users even if it is intercepted.
Access Control: Implementing strict access controls and authentication mechanisms
helps ensure that only authorized users can access sensitive data and resources in the
cloud.
Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring
users to provide multiple forms of verification, such as passwords, biometrics, or
security tokens, before gaining access to cloud services.
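As a small illustration of the data-encryption practice above, the sketch below uses the third-party cryptography package's Fernet recipe for symmetric authenticated encryption; the record being protected is a placeholder:

    from cryptography.fernet import Fernet

    # Generate a symmetric key; in practice this would live in a key manager
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt before uploading: the ciphertext is unreadable without the key
    token = fernet.encrypt(b"customer record: illustrative data")

    # Only a holder of the key can recover the plaintext
    plaintext = fernet.decrypt(token)
    assert plaintext == b"customer record: illustrative data"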
Use Cases Of Cloud Computing
Cloud computing provides many use cases across industries and various applications:
1. Scalable Infrastructure: Infrastructure as a Service (IaaS) enables organizations to
scale computing resources based on demand without investing in physical hardware.
2. Efficient Application Development: Platform as a Service (PaaS) simplifies
application development, offering tools and environments for building, deploying, and
managing applications.
3. Streamlined Software Access: Software as a Service (SaaS) provides subscription-
based access to software applications over the internet, reducing the need for local
installation and maintenance.
4. Data Analytics: Cloud-based platforms facilitate big data analytics, allowing
organizations to process and derive insights from large datasets efficiently.
5. Disaster Recovery: Cloud-based disaster recovery solutions offer cost-effective data
replication and backup, ensuring quick recovery in case of system failures or disasters.
Before going into the overview of embedded systems, let's first understand the two basic
terms, embedded and system, and what they actually mean.
A system is a set of interrelated parts/components designed and developed to perform common
tasks or some specific work for which it has been created.
Embedded means including something within something else for a purpose; simply put, something
which is integrated into or attached to another thing. Having understood what system and
embedded mean, we can easily understand what embedded systems are.
An embedded system is an integrated system formed as a combination of computer hardware
and software for a specific function. It can be described as a dedicated computer system
developed for some particular purpose. It is not a traditional general-purpose
computer; embedded systems may work independently or be attached to a
larger system to carry out a few specific functions. They can work without human
intervention or with little human intervention.
Three main components of Embedded systems are:
1. Hardware
2. Software
3. Firmware
Examples of embedded systems include:
Digital watches
Washing Machine
Toys
Televisions
Digital phones
Laser Printer
Cameras
Industrial machines
Electronic Calculators
Automobiles
Medical Equipment
Embedded systems are present almost everywhere. We use them in everyday life,
often unknowingly, as in most cases they are integrated into larger systems. Some
of the application areas of embedded systems are:
Home appliances
Transportation
Health care
Business sector & offices
Defense sector
Aerospace
Agricultural Sector
Characteristics of embedded systems:
1. Performs specific task: Embedded systems perform some specific function or tasks.
2. Low Cost: The price of an embedded system is not so expensive.
3. Time Specific: It performs the tasks within a certain time frame.
4. Low Power: Embedded Systems don’t require much power to operate.
5. High Efficiency: The efficiency level of embedded systems is so high.
6. Minimal User interface: These systems require less user interface and are easy to
use.
7. Less Human intervention: Embedded systems require no human intervention or very
little human intervention.
8. Highly Stable: Embedded systems do not change frequently; they are mostly fixed,
which maintains stability.
9. High Reliability: Embedded systems are reliable; they perform tasks consistently well.
10. Use microprocessors or microcontrollers: Embedded systems are designed around
microprocessors or microcontrollers and use limited memory.
11. Manufacturable: The majority of embedded systems are compact and affordable to
manufacture, owing to their small size and low hardware complexity.
BLOCK DIAGRAM OF EMBEDDED SYSTEM
Advantages of embedded systems include:
Small size.
Enhanced real-time performance.
Easily customizable for a specific application.