IoT BETCK105H Notes
NOTES
By:
Dr. LEVINA T
Associate Professor
Course Outcomes:
CO4: Analyze IoT technologies and the design of sensor-based IoT nodes.
Course Objectives:
Understand the fundamentals of the Internet of Things and its building blocks along with their characteristics.
Understand the recent application domains of IoT in everyday life.
Gain insights into the current trends of associated IoT technologies and IoT analytics.
Teaching-Learning Process
These are sample Strategies, which teachers can use to accelerate the attainment of the various Course
Outcomes.
1. The lecture method (L) need not be only a traditional lecture method; alternative effective teaching methods could be adopted to attain the outcomes.
2. Use of Video/Animation to explain functioning of various concepts.
3. Encourage collaborative (Group Learning) Learning in the class.
4. Ask at least three HOT (Higher Order Thinking) questions in the class, which promote critical thinking.
5. Adopt Problem Based Learning (PBL), which fosters students' analytical skills and develops design thinking skills such as the ability to design, evaluate, generalize, and analyze information rather than simply recall it.
6. Introduce Topics in manifold representations.
7. Show the different ways to solve the same problem with different circuits/logic and encourage the students to
come up with their own creative ways to solve them.
8. Discuss how every concept can be applied to the real world; where possible, this helps improve the students' understanding.
9. Use any of these methods: Chalk and board, Active Learning, Case Studies
Module-2
IoT Sensing and Actuation 8 hours
IoT Sensing and Actuation: Introduction, Sensors, Sensor Characteristics, Sensorial Deviations, Sensing Types,
Sensing Considerations, Actuators, Actuator Types, Actuator Characteristics.
Module-3
IoT Processing Topologies and Types 8 hours
IoT Processing Topologies and Types: Data Format, Importance of Processing in IoT, Processing Topologies, IoT Device Design and Selection Considerations, Processing Offloading.
Module-4
Associated IoT Technologies 8 hours
Associated IoT Technologies: Cloud Computing: Introduction, Virtualization, Cloud Models, Service-Level Agreement in Cloud Computing, Cloud Implementation, Sensor-Cloud: Sensors-as-a-Service.
IoT Case Studies: Agricultural IoT: Introduction and Case Studies
Textbook 1: Chapter 10 (10.1 to 10.6); Chapter 12 (12.1 to 12.2)
Module-5
The weightage of Continuous Internal Evaluation (CIE) is 50% and that of the Semester End Examination (SEE) is 50%.
The minimum passing mark for the CIE is 40% of the maximum marks (20 marks out of 50).
The minimum passing mark for the SEE is 35% of the maximum marks (18 marks out of 50).
A student shall be deemed to have satisfied the academic requirements and earned the credits allotted to
each subject/ course if the student secures not less than 35% (18 Marks out of 50) in the semester-end
examination (SEE), and a minimum of 40% (40 marks out of 100) in the sum total of the CIE (Continuous
Internal Evaluation) and SEE (Semester End Examination) taken together.
One improvement test may be conducted before the close of the academic term if necessary. However, the best two tests out of three shall be taken into consideration.
The teacher has to plan the assignments and get them completed by the students well before the closing of the term so that the marks entry in the examination portal can be done in time. Formative (successive) assessments include assignments, quizzes, seminars, course projects, field surveys, case studies, hands-on practice (experiments), group discussions, and others. The teachers shall choose the types of assignments depending on the requirement of the course and plan to attain the COs and POs. (To keep the CIE less stressful, the portion of the syllabus should not be common/repeated across the methods of CIE; each method of CIE should cover a different portion of the course syllabus.) CIE methods and test question papers are designed to attain the different levels of Bloom's taxonomy as per the outcomes defined for the course.
The sum of two tests and two assignments will be out of 100 marks and will be scaled down to 50 marks.
The theory SEE will be conducted by the University as per the scheduled timetable, with a common question paper for the subject (duration: 03 hours).
The question paper shall be set for 100 marks. The medium of the question paper shall be English/Kannada. The duration of the SEE is 03 hours.
The question paper will have 10 questions, two questions per module. Each question is set for 20 marks. The students have to answer 5 full questions, selecting one full question from each module. The student has to answer for 100 marks, and the marks scored out of 100 shall be proportionally reduced to 50 marks.
There will be 2 questions from each module. Each of the two questions under a module (with a maximum of 3 sub-questions) should have a mix of topics under that module.
Books (Title of the Book/Name of the author/Name of the publisher/Edition and Year)
1. Sudip Misra, Anandarup Mukherjee, Arijit Roy, "Introduction to IoT", Cambridge University Press, 2021.
CO POs Mapping
PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12
CO1 3 3 3 2
CO2 3 3 3 3
CO3 3 3 3
CO4 3 3 3 3 3
CO5 3 3 3 3 3
Module-wise notes follow.
Module-1
Basics of Networking
Introduction
The Internet of Things (IoT) means a network of physical things sending, receiving, or communicating information using the Internet or other communication technologies and networks, just as computers, tablets, and mobiles do, thus enabling the monitoring, coordination, or control of processes across the Internet or another data network. (Raj Kamal)
The purpose of IoT can be visualized using the following examples.
An umbrella can be made to function like a living entity using IoT. By installing a tiny embedded device, which can interact with a web-based weather service and the device's owner through the Internet, the following communication can take place. The umbrella, embedded with a circuit for computing and communication, connects to the Internet. Websites regularly publish weather reports. The umbrella receives these reports each morning, analyses the data, and issues reminders to the owner at intermittent intervals
around his/her office-going time. The reminders can be distinguished using differently
coloured LED flashes such as red LED flashes for hot and sunny days, yellow flashes for rainy
days. A reminder can be sent to the owner's mobile at a pre-set time before leaving for office
using NFC, Bluetooth or SMS technologies. The message can be—(i) protect yourself from
rain. It is going to rain. Don't forget to carry the umbrella; (ii) Protect yourself from the sun. It
is going to be hot and sunny. Don't forget to carry the umbrella. The owner can decide to carry
or not to carry the umbrella using the Internet connected umbrella.
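As a minimal sketch of the umbrella's decision logic (an illustrative Python example; the simulated forecast source, field names, and LED colours are hypothetical assumptions, not from the textbook):

def get_forecast():
    # A real umbrella's embedded device would fetch this from a web-based
    # weather service over the Internet; here the reply is simulated.
    return {"condition": "rain"}

def flash_led(colour):
    print(f"[LED] flashing {colour}")  # stand-in for driving the LED circuit

def notify_owner(message):
    print(f"[NFC/Bluetooth/SMS] {message}")  # stand-in for the reminder channel

def morning_reminder():
    forecast = get_forecast()
    if forecast["condition"] == "rain":
        flash_led("yellow")
        notify_owner("It is going to rain. Don't forget to carry the umbrella.")
    elif forecast["condition"] == "sunny":
        flash_led("red")
        notify_owner("It is going to be hot and sunny. Don't forget to carry the umbrella.")

morning_reminder()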
Streetlights in a City can be made to function like living entities through sensing and
computing using tiny embedded devices that communicate and interact with a central control-
and-command station through the Internet. Assume that each light in a group of 32 streetlights
comprises a sensing, computing and communication circuit. Each group connects to a group-
controller (or coordinator) through Bluetooth or ZigBee. Each controller further connects to
the central command-and-control station through the Internet. The station receives information
about each streetlight in each group in the city at periodic intervals. The information received relates to the functioning of the 32 lights, the faulty lights, the presence or absence of traffic in the group's vicinity, and the ambient conditions, whether cloudy, dark, or normal daylight. The station remotely programs the group controllers, which automatically take appropriate action as per the conditions of traffic and light levels. It also directs remedial actions in case a fault develops in a light at a specific location. Thus, each group in the city is controlled by the 'Internet of streetlights'.
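The periodic report a group controller might send to the command-and-control station can be sketched as follows; the JSON field names and the simulated readings are hypothetical assumptions for illustration only.

import json, random, time

GROUP_SIZE = 32  # lights per group, as in the example above

def read_light_status(light_id):
    # Each light's embedded circuit would report over Bluetooth/ZigBee;
    # here the readings are simulated.
    return {
        "light_id": light_id,
        "working": random.random() > 0.05,  # ~5% simulated fault rate
        "traffic_nearby": random.choice([True, False]),
        "ambient": random.choice(["dark", "cloudy", "daylight"]),
    }

def build_group_report(group_id):
    # Aggregate the group's status for the central station.
    return json.dumps({
        "group": group_id,
        "timestamp": time.time(),
        "lights": [read_light_status(i) for i in range(GROUP_SIZE)],
    })

print(build_group_report("group-7"))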
The current era is one of data- and information-centric operations. Everything from agriculture to military operations depends on information. The quality of any particular piece of information, and the speed at which data is updated to all members of a team (which may be a group of individuals, an organization, or a country), dictates the advantage that the team has over others in generating useful information from the gathered data. In the present-day global scale of operations of various organizations or militaries of various countries, the speed and genuineness of information are critical in maintaining an edge over others in the same area. To sum up, today's world relies heavily on data and networking, which allows for the instant availability of information from anywhere on Earth at any moment.
Networking implies the linking of computers and communication network devices (also referred to as hosts), which are interconnected through the Internet or an intranet. These devices are distinguished by unique device identifiers (Internet Protocol (IP) addresses and media access control (MAC) addresses). The hosts may be connected by a single path or through multiple paths for sending and receiving data. The data transferred between hosts may be text, images, or videos, typically in the form of binary bit streams.
Network Types
Computer networks are classified based on:
1) Type of connection
2) Physical topology
3) Reach of the network.
Types of Connection
Depending on the way a host communicates with other hosts, computer networks are of two
types, (i) Point-to-point and
(ii) Point-to-multipoint.
(i) Point-to-point: Point-to-point connections are used to establish direct connections between two hosts. Everyday systems such as a remote control for an air conditioner or a television use a point-to-point connection, where the whole channel is dedicated to that connection alone. These networks were designed to work over duplex links and are functional for both synchronous and asynchronous systems. In computer networks, point-to-point connections find usage for specific purposes, such as in optical networks.
(ii) Point-to-multipoint: In a point-to-multipoint connection, more than two hosts share the
same link. This type of configuration is similar to the one-to-many connection type. Point-to-
multipoint connections find popular use in wireless networks and IP telephony. The channel is shared between the various hosts, either spectrally (frequency based) or temporally (time based). One common scheme of spectral sharing of the channel is frequency division multiple access (FDMA). Temporal sharing of channels includes approaches such as time division multiple access (TDMA). Each of the spectral and temporal sharing approaches has various schemes and protocols for channel sharing in point-to-multipoint networks. Point-to-multipoint connections find popular use in present-day networks, especially while enabling communication between a massive number of connected devices. Fig. 1 illustrates the network types based on types of connection.
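The idea of temporal sharing can be illustrated with a toy TDMA slot assignment (a simplified Python sketch; real TDMA protocols also handle guard intervals, synchronization, and dynamic slot requests):

def tdma_schedule(hosts):
    # Each host gets one dedicated time slot per frame, so no two hosts
    # transmit at the same instant; the frame then repeats.
    return {slot: host for slot, host in enumerate(hosts)}

print(tdma_schedule(["host-A", "host-B", "host-C", "host-D"]))
# {0: 'host-A', 1: 'host-B', 2: 'host-C', 3: 'host-D'}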
Physical Topology
Based on the physical manner in which communication paths between the hosts are connected, computer networks are classified into four broad topologies: star, mesh, bus, and ring.
(ii) Mesh: In this topology, every host is connected to every other host using a dedicated link (in a point-to-point manner). For n hosts in a mesh, there are a total of n(n−1)/2 dedicated full-duplex links between the hosts; for example, a mesh of 10 hosts needs 10 × 9/2 = 45 links. This massive number of links makes the mesh topology expensive. However, it offers certain specific advantages over other topologies. The first
significant advantage is the robustness and resilience of the system. Even if a link is down or
broken, the network is still fully functional as there remain other pathways for the traffic to
flow through. The second advantage is the security and privacy of the traffic as the data is
only seen by the intended recipients and not by all members of the network. The third advantage
is the reduced data load on a single host, as every host in this network takes care of its traffic
load. However, owing to the complexities in forming physical connections between devices
and the cost of establishing these links, mesh networks are used very selectively, such as in
backbone networks.
Network Reachability
Based on network reachability computer networks are divided into four categories: personal
area networks, local area networks, wide area networks, and metropolitan area networks.
(i) Personal Area Networks (PAN): PANs are restricted to individual usage. Wireless headphones, wireless speakers, laptops, smartphones, wireless keyboards, wireless mice, and printers within a house are a few examples of PANs. Generally, PANs are wireless networks, which make use of low-range and low-power technologies such as Bluetooth. The reachability of PANs is limited to a range of a few centimetres to a few metres.
(ii) Local Area Networks (LAN): A LAN is a group of hosts connected to a single network through wired or wireless connections. LANs are normally restricted to buildings, organizations, or campuses. A LAN typically has a few leased lines connected to the Internet, which provide web access to the whole organization or campus. These lines are further redistributed to multiple hosts within the LAN, enabling many more hosts to access the web from within the organization than the actual number of direct Internet lines. This enables the organization to define control policies for web access within its hierarchy. The data access rates within LANs lie in the range of 100 Mbps to 1000 Mbps, with very high fault-tolerance levels. The network components commonly used in a LAN are servers, hubs, routers, switches, terminals, and computers.
(iii) Wide Area Networks (WAN): WANs usually connect diverse geographic locations, but they are restricted within the boundaries of a state or country. The data rate of WANs is typically a fraction of a LAN's data rate. WANs connecting two LANs or MANs may use public switched telephone networks (PSTNs) or satellite-based links. WANs tend to have more errors and noise during transmission due to the long transmission ranges, and they are very costly to maintain. The fault tolerance of WANs is also generally low.
(iv) Metropolitan Area Networks (MAN): The reachability of a MAN lies between that of a
LAN and a WAN. Typically, MANs connect various organizations or buildings within a given
geographic location or city. An excellent example of a MAN is an Internet service provider
(ISP) supplying Internet connectivity to various organizations within a city. As MANs are
costly, they may not be owned by individuals or even single organizations. Typical networking
devices/components in MANs are modems and cables. MANs tend to have moderate fault
tolerance levels.
The seven layers of the OSI model, from the bottom up, are as follows: 1) physical layer, 2) data link layer, 3) network layer, 4) transport layer, 5) session layer, 6) presentation layer, and 7) application layer.
The major highlights of each of these layers are explained in this section.
(i) Physical Layer: This is layer 1 of the OSI model and is a media layer. The electrical and mechanical operations of the host are performed by the physical layer. These
operations include or deal with issues relating to signal generation, signal transfer, voltages,
the layout of cables, physical port layout, line impedances, and signal loss. This layer is
responsible for the topological layout of the network (star, mesh, bus, or ring), communication
mode (simplex, duplex, full duplex), and bit rate control operations. The protocol data unit
associated with this layer is referred to as a symbol.
(ii) Data Link Layer: This is layer 2 of the OSI model and is a media layer. The data
link layer is mainly concerned with the establishment and termination of the connection
between two hosts, and the detection and correction of errors during communication between
two or more connected hosts. IEEE 802 divides the OSI layer 2 further into two sub-layers [2]:
Medium access control (MAC) and logical link control (LLC). MAC is responsible for access
control and permissions for connecting networked devices; whereas LLC is mainly tasked with
error checking, flow control, and frame synchronization. The protocol data unit associated with
this layer is referred to as a frame.
(iii) Network Layer: This layer is a media layer and layer 3 of the OSI model. It provides a
means of routing data to various hosts connected to different networks through logical paths
called virtual circuits. These logical paths may pass through other intermediate hosts (nodes)
before reaching the actual destination host. The primary tasks of this layer include addressing,
sequencing of packets, congestion control, error handling, and Internetworking. The protocol
data unit associated with this layer is referred to as a packet.
(iv) Transport Layer: This is layer 4 of the OSI model and is a host layer. The transport layer
is tasked with end-to-end error recovery and flow control to achieve a transparent transfer of
data between hosts. This layer is responsible for keeping track of acknowledgments during
variable-length data transfer between hosts. In case of loss of data, or when no acknowledgment
is received, the transport layer ensures that the particular erroneous data segment is re-sent to
the receiving host. The protocol data unit associated with this layer is referred to as a segment
or datagram.
(v) Session Layer: This is the OSI model's layer 5 and is a host layer. It is responsible for establishing, controlling, and terminating communication between networked hosts. The
session layer sees full utilization during operations such as remote procedure calls and remote
sessions. The protocol data unit associated with this layer is referred to as data.
(vi) Presentation Layer: This layer is a host layer and layer 6 of the OSI model. It is mainly
responsible for data format conversions and encryption tasks such that the syntactic
compatibility of the data is maintained across the network, for which it is also referred to as the
syntax layer. The protocol data unit associated with this layer is referred to as data.
(vii) Application Layer: This is layer 7 of the OSI model and is a host layer. It is directly
accessible by an end-user through software APIs (application program interfaces) and
terminals. Applications such as file transfers, FTP (file transfer protocol), e-mails, and other
such operations are initiated from this layer. The application layer deals with user
authentication, identification of communication hosts, quality of service, and privacy. The
protocol data unit associated with this layer is referred to as data.
A networked communication between two hosts following the OSI model is shown in Figure
6. Table 2 summarizes the OSI layers and their features, where PDU stands for protocol data
unit.
Fig. 6 Networked communication between two hosts following the OSI model
Table 2 Summary of the OSI layers and their features
Fig. 7 Networked communication between two hosts following the TCP/IP suite
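The layer boundaries can be made concrete with a short, self-contained Python sketch (illustrative, not from the textbook): the program below works entirely at the application layer, while the operating system's protocol stack supplies the transport-layer segments, network-layer packets, and data-link frames beneath it.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9090  # loopback address keeps the demo self-contained

def echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # echo the received bytes back

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))   # transport layer: TCP connection set-up
    cli.sendall(b"hello, OSI")  # application 'data' handed down the stack
    print(cli.recv(1024))       # b'hello, OSI'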
Emergence of IoT
Introduction
The modern-day advent of network-connected devices has given rise to the popular paradigm
of the Internet of Things (IoT). Each second, the present-day Internet allows massively
heterogeneous traffic through it. This network traffic consists of images, videos, music, speech,
text, numbers, binary codes, machine status, banking messages, data from sensors and
actuators, healthcare data, data from vehicles, home automation system status and control
messages, military communications, and many more. This huge variety of data is generated
from a massive number of connected devices, which may be directly connected to the Internet
or connected through gateway devices. According to statistics from the Information Handling
Services, the total number of connected devices globally is estimated to be around 25 billion.
This figure is projected to triple within a short span of 5 years by the year 2025. Figure 8 shows
the global trend and projection for connected devices worldwide.
Fig. 8 10-year global trend and projection of connected devices (statistics sourced from the
Information Handling Services [7])
The traffic flowing through the Internet can be attributed to legacy systems as well as modern-day systems. The miniaturization of electronics and the increasing affordability of technology are
resulting in a surge of connected devices, which in turn is leading to an explosion of traffic
flowing through the Internet.
One of the best examples of this explosion is the evolution of smartphones. In the late 1990s, cellular technology was still expensive and could be afforded only by a select few. Moreover, these devices had only the basic features of voice calling, text messaging, and sharing of low-quality multimedia. Within the next 10 years, cellular technology had become common and easily affordable. With time, the features of these devices evolved, and the dependence of various applications and services on packet-based Internet access rapidly increased. Present-day mobile phones (commonly referred to as smartphones) are more or less Internet-based. The range of applications on these gadgets, such as messaging, video calling, e-mails, games, music streaming, video streaming, and others, is solely dependent on network-provider-allocated Internet access or WiFi. Most present-day consumers of smartphone technology tend to carry more than one of these units. In line with this trend, other connected devices have rapidly increased in number, resulting in the number of devices exceeding the number of humans on Earth multiple times over. Now consider that, as all technologies and domains move toward smart management of systems, the number of sensor/actuator-based systems is rapidly increasing. With time, the need for location-independent access to monitored and controlled systems keeps on rising. This rise leads to a further rise in the number of Internet-connected devices.
The original Internet intended for sending simple messages is now connected with all sorts of
“Things”. These things can be legacy devices, modern-day computers, sensors, actuators,
household appliances, toys, clothes, shoes, vehicles, cameras, and anything which may benefit
a product by increasing its scientific value, accuracy, or even its cosmetic value.
IoT is an anytime, anywhere, and anything (as shown in Figure 9) network of Internet-
connected physical devices or systems capable of sensing an environment and affecting the
sensed environment intelligently. This is generally achieved using low-power and low-form-
factor embedded processors on-board the "things" connected to the Internet. In other words, IoT may be considered to be made up of connected devices, machines, and tools; these things are made up of sensors/actuators and processors, which connect to the Internet through wireless technologies. Another school of thought also considers wired Internet access to be inherent to the IoT paradigm. For the sake of harmony, in this book, we will consider any technology enabling access to the Internet, be it wired or wireless, to be an IoT enabling technology.
However, most of the focus on the discussion of various IoT enablers will be restricted to
wireless IoT systems due to the much more severe operating constraints and challenges faced
by wireless devices as compared to wired systems. Typically, IoT systems can be characterized
by the following features:
• Associated architectures, which are also efficient and scalable.
• No ambiguity in naming and addressing.
• Massive number of constrained devices, sleeping nodes, mobile devices, and non-IP
devices.
Fig. 9 The three characteristic features (anytime, anywhere, and anything) highlight the robustness and dynamic nature of IoT
IoT is speculated to have achieved faster and higher technology acceptance as compared
to electricity and telephony. These speculations are not ill placed as evident from the
various statistics shown in Figures 10, 11, and 12.
Fig. 10 The global IoT spending across various organizations and industries and its subsequent projection until the year 2021 (sourced from International Data Corporation)
Fig. 11 The compound annual growth rate (CAGR) of the IoT market
Fig. 12 The IoT market share across various industries (statistics sourced from International Data Corporation [8])
• ATM
• Web
• Smart Meters
• Digital Locks
• Connected Healthcare
• Connected Vehicles
• Smart Cities
• Smart Dust
• Smart Factories
• UAVs
• ATM: ATMs or automated teller machines are cash distribution machines, which are
linked to a user’s bank account. ATMs dispense cash upon verification of the identity of
a user and their account through a specially coded card. The central concept behind ATMs
was the availability of financial transactions even when banks were closed beyond their regular work hours. These ATMs were ubiquitous money dispensers. The first ATM became operational and was connected online in 1974.
• Web: World Wide Web is a global information sharing and communication platform.
The Web became operational for the first time in 1991. Since then, it has been massively
responsible for the many revolutions in the field of computing and communication.
• Smart Meters: The earliest smart meter was a power meter, which became operational
in early 2000. These power meters were capable of communicating remotely with the
power grid. They enabled remote monitoring of subscribers’ power usage and eased the
process of billing and power allocation from grids.
• Digital Locks: Digital locks can be considered as one of the earlier attempts at
connected home-automation systems. Present-day digital locks are so robust that
smartphones can be used to control them. Operations such as locking and unlocking doors,
changing key codes, including new members in the access lists, can be easily performed,
and that too remotely using smartphones.
• Connected Healthcare: Here, healthcare devices connect to hospitals, doctors, and
relatives to alert them of medical emergencies and take preventive measures. The devices
may be simple wearable appliances, monitoring just the heart rate and pulse of the wearer,
as well as regular medical devices and monitors in hospitals. The connected nature of these systems makes the availability of medical records and test results much faster, cheaper, and more convenient for both patients and hospital authorities.
• Connected Vehicles: Connected vehicles may communicate with the Internet, with other vehicles, or even with sensors and actuators contained within them. These vehicles can self-diagnose and alert owners about system failures.
• Smart Cities: This is a city-wide implementation of smart sensing, monitoring, and actuation systems. The city-wide infrastructure communicates amongst itself, enabling unified and synchronized operations and information dissemination. Some of the facilities which may benefit are parking, transportation, and others.
• Smart Dust: These are microscopic computers. Smaller than a grain of sand each, they can be used in numerous beneficial ways where regular computers cannot operate. For example, smart dust can be sprayed to measure chemicals in the soil or even to diagnose problems in the human body.
• Smart Factories: These factories can monitor plant processes, assembly lines, and distribution lines, and manage factory floors all on their own. Mishaps due to human errors in judgment or unoptimized processes are drastically reduced.
• UAVs: UAVs, or unmanned aerial vehicles, have emerged as robust public-domain solutions for applications ranging from agriculture, surveys, surveillance, and deliveries to stock maintenance and asset management.
The present-day IoT spans across various domains and applications. The major highlight
of this paradigm is its ability to function as a cross-domain technology enabler. Multiple
domains can be supported and operated upon simultaneously over IoT-based platforms.
Support for legacy technologies and standalone paradigms, along with modern developments, makes IoT quite robust and economical for commercial, industrial, as well as consumer applications. IoT is being used in vivid and diverse areas such as smart
parking, smartphone detection, traffic congestion, smart lighting, waste management,
smart roads, structural health, urban noise maps, river floods, water flow, silos stock
calculation, water leakages, radiation levels, explosive and hazardous gases, perimeter
access control, snow level monitoring, liquid presence, forest fire detection, air pollution,
smart grid, tank level, photovoltaic installations, NFC (near-field communications)
payments, intelligent shopping applications, landslide and avalanche prevention, early
detection of earthquakes, supply chain control, smart product management, and others.
Figure 14 shows the various technological interdependencies of IoT with other domains
and networking paradigms such as
M2M,
CPS,
The Internet of environment (IoE),
The Internet of people (IoP), and
Industry 4.0.
Each of these networking paradigms is a massive domain on its own, but the omnipresent nature of IoT implies that these domains act as subsets of IoT. The paradigms are briefly discussed here:
Figure 14 The interdependence and reach of IoT over various application domains and networking paradigms
(i) M2M: The M2M or the machine-to-machine paradigm signifies a system of connected
machines and devices, which can talk amongst themselves without human intervention.
The communication between the machines can be for updates on machine status (stocks,
health, power status, and others), collaborative task completion, overall knowledge of the
systems and the environment, and others.
(ii) CPS: The CPS or the cyber physical system paradigm insinuates a closed control loop, from sensing and processing to actuation, using a feedback mechanism. CPS helps in maintaining the state of an environment through the feedback control loop, which ensures that the system keeps on actuating and sensing until the desired state is attained. Humans have a simple supervisory role in CPS-based systems; most of the ground-level operations are automated.
(iii) IoE: The IoE paradigm is mainly concerned with minimizing and even reversing the ill effects of the permeation of Internet-based technologies on the environment. The major focus areas of this paradigm include smart and sustainable farming, sustainable and energy-efficient habitats, enhancing the energy efficiency of systems and processes, and others. In brief, we can safely assume that any aspect of IoT that concerns and affects the environment falls under the purview of IoE.
(iv) Industry 4.0: Industry 4.0 is commonly referred to as the fourth industrial revolution
pertaining to digitization in the manufacturing industry. The previous revolutions chronologically dealt with mechanization, mass production, and automation, respectively. This paradigm strongly puts forward the concept of smart factories, where
machines talk to one another without much human involvement based on a framework
of CPS and IoT. The digitization and connectedness in Industry 4.0 translate to better
resource and workforce management, optimization of production time and resources, and
better upkeep and lifetimes of industrial systems.
(v) IoP: IoP is a new technological movement on the Internet which aims to decentralize
online social interactions, payments, transactions, and other tasks while maintaining
confidentiality and privacy of its users' data. A famous site for IoP states that, just as the introduction of Bitcoin has severely limited the power of banks and governments, the acceptance of IoP will limit the power of corporations, governments, and their spy agencies.
IoT versus M2M
IoT enables interactions among devices/things, things and people, things and applications, and people with applications; M2M enables the amalgamation of workflows comprising such interactions within IoT. Internet connectivity is central to the IoT theme, but it is not necessarily focused on the use of telecom networks.
IoT versus CPS
Cyber physical systems (CPS) encompass sensing, control, actuation, and feedback as a
complete package. In other words, a digital twin is attached to a CPS-based system. As
mentioned earlier, a digital twin is a virtual system–model relation, in which the system
signifies a physical system or equipment or a piece of machinery, while the model
represents the mathematical model or representation of the physical system’s behaviour
or operation. Many a time, a digital twin is used parallel to a physical system, especially
in CPS as it allows for the comparison of the physical system’s output, performance, and
health. Based on feedback from the digital twin, a physical system can be easily given corrective directions/commands to obtain desirable outputs. In contrast, the IoT paradigm does not compulsorily need feedback or a digital twin system. IoT is more focused on networking than controls. Some of the constituent sub-systems in an IoT environment (such as those formed by CPS-based instruments and networks) may include feedback and controls too. In this light, CPS may be considered as one of the sub-domains of IoT, as shown in Figure 14.
IoT versus WoT
From a developer’s perspective, the Web of Things (WoT) paradigm enables access and
control over IoT resources and applications. These resources and applications are
generally built using technologies such as HTML 5.0, JavaScript, Ajax, PHP, and others.
REST (representational state transfer) is one of the key enablers of WoT. The use of
RESTful principles and RESTful APIs (application program interface) enables both
developers and deployers to benefit from the recognition, acceptance, and maturity of
existing web technologies without having to redesign and redeploy solutions from scratch.
Still, designing and building the WoT paradigm has various adaptability and security challenges, especially when trying to build a globally uniform WoT. As IoT is focused on creating networks comprising objects, things, people, systems, and applications, which often do not consider the unification aspect and the limitations of the Internet, the need for WoT, which aims to integrate the various focus areas of IoT into the existing Web, is invaluable. Technically, WoT can be thought of as an application layer added over the network layer. However, the scope of IoT applications is much broader; IoT also includes non-IP-based systems that are not accessible through the Web.
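As a concrete illustration of RESTful access to a "thing", the sketch below reads and writes a thing's property over HTTP using only the Python standard library. The gateway URL, URI layout, and JSON field names are hypothetical assumptions, not a standard API.

import json
import urllib.request

# Hypothetical WoT endpoint exposing one property of one thing.
THING_URL = "http://gateway.local/things/temp-sensor-1/properties/temperature"

def read_property(url=THING_URL):
    # GET the current value of the thing's property as JSON.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def write_property(value, url=THING_URL):
    # PUT a new value to an actuator-backed property.
    body = json.dumps({"value": value}).encode()
    req = urllib.request.Request(url, data=body, method="PUT",
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status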
Fig. 15 The IoT planes, various enablers of IoT, and the complex interdependencies among them
Typically, the services offered in this plane are a combination of things and low-power connectivity. For example, any IoT application requires the basic setup of sensing, followed by rudimentary processing (often), and a low-power, low-range network, which is mainly built upon the IEEE 802.15.4 protocol. The things may be wearables, computers,
smartphones, household appliances, smart glasses, factory machinery, vending machines,
vehicles, UAVs, robots, and other such contraptions (which may even be just a sensor).
The immediate low-power connectivity, which is responsible for connecting the things in
local implementation, may be legacy protocols such as WiFi, Ethernet, or cellular. In
contrast, modern-day technologies are mainly wireless and often programmable, such as Zigbee, RFID, Bluetooth, 6LoWPAN, LoRa, DASH, Insteon, and others. The range of
these connectivity technologies is severely restricted; they are responsible for the
connectivity between the things of the IoT and the nearest hub or gateway to access the
Internet.
The local connectivity is responsible for distributing Internet access to multiple local IoT
deployments. This distribution may be on the basis of the physical placement of the
things, on the basis of the application domains, or even on the basis of providers of
services. Services such as address management, device management, security, sleep
scheduling, and others fall within the scope of this plane. For example, in a smart home
environment, the first floor and the ground floor may have local IoT implementations,
which have various things connected to the network via low-power, low-range
connectivity technologies. The traffic from these two floors merges into a single router or a gateway. The total traffic intended for the Internet from a smart home leaves through a single gateway or router, which may be assigned a single global IP address (for the whole house). This helps in the significant conservation of already limited global IP addresses.
The local connectivity plane falls under the purview of IoT management as it directly
deals with strategies to use/reuse addresses based on things and applications. The modern-
day “edge computing” paradigm is deployed in conjunction with these first two planes:
services and local connectivity.
In continuation, the penultimate plane of global connectivity plays a significant role in
enabling IoT in the real sense by allowing for worldwide implementations and
connectivity between things, users, controllers, and applications. This plane also falls
under the purview of IoT management as it decides how and when to store data, when to process it, when to forward it, and in which form to forward it. The Web, data centers,
remote servers, Cloud, and others make up this plane. The paradigm of “fog computing”
lies between the planes of local connectivity and global connectivity. It often serves to
manage the load of global connectivity infrastructure by offloading the computation
nearer to the source of the data itself, which reduces the traffic load on the global Internet.
The final plane of processing can be considered as a top-up of the basic IoT networking
framework. The continuous rise in the usefulness and penetration of IoT in various
application areas such as industries, transportation, healthcare, and others is the result of
this plane. The members of this plane may be termed IoT tools, simply because they wring out useful and human-readable information from all the raw data that flows from
various IoT devices and deployments. The various sub-domains of this plane include
intelligence, conversion (data and format conversion, and data cleaning), learning
(making sense of temporal and spatial data patterns), cognition (recognizing patterns and
mapping it to already known patterns), algorithms (various control and monitoring
algorithms), visualization (rendering numbers and strings in the form of collective
trends, graphs, charts, and
projections), and analysis (estimating the usefulness of the generated information, making
sense of the information with respect to the application and place of data generation, and
estimating future trends based on past and present patterns of information obtained).
Various computing paradigms such as “big data”, “machine learning”, and others, fall
within the scope of this domain.
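A tiny example of the conversion and cleaning sub-domain mentioned above: discarding out-of-range glitches from a raw temperature stream and smoothing the remainder. The range limits and window size are illustrative assumptions.

def clean_and_smooth(readings, lo=-40.0, hi=125.0, window=3):
    # Drop physically impossible samples, then apply a moving average.
    valid = [r for r in readings if lo <= r <= hi]
    smoothed = []
    for i in range(len(valid)):
        chunk = valid[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

raw = [21.4, 21.6, 999.0, 21.5, 21.7, -80.0, 21.9]  # 999.0 and -80.0 are glitches
print(clean_and_smooth(raw))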
The broad components that come into play during the establishment of any IoT network can be outlined as six types: 1) IoT node, 2) IoT router, 3) IoT LAN, 4) IoT WAN, 5) IoT gateway, and 6) IoT proxy.
A typical IoT implementation from a networking perspective is shown in Figure 16. The individual components are briefly described here:
(v) IoT Gateway: Gateways connect the local IoT networks to a WAN or the Internet. Gateways can implement several LANs and WANs. Their primary task is to forward packets between LANs and WANs, and the IP layer, using only layer 3.
(vi) IoT Proxy: Proxies actively lie on the application layer and perform application layer functions between IoT nodes and other entities. Typically, application layer proxies are a means of providing security to the network entities under them; they also help extend the addressing range of their network.
In Figure 16, various IoT nodes within an IoT LAN are configured to talk to one another as well as to the IoT router whenever they are within its range. The devices have locally unique (LU-x) device identifiers. These identifiers are unique only within a LAN. There is a high chance that these identifiers may be repeated in a new LAN. Each IoT LAN has its own unique identifier, which is denoted by IoT LAN-x in Figure 16. A router acts as a connecting link between various LANs by forwarding messages from the LANs to the IoT gateway or the IoT proxy. As the proxy is an application layer device, it is additionally possible to include features such as firewalls, packet filters, and other security measures besides the regular routing operations. Various gateways connect to an IoT WAN, which links these devices to the Internet. There may be cases where the gateway or the proxy connects directly to the Internet. This network may be wired or wireless; however, IoT deployments rely heavily on wireless solutions. This is mainly attributed to the large number of devices that are integrated into the network; wireless technology is the only feasible and neat enough solution to avoid the hassles of laying wires and dealing with the restricted mobility arising out of wired connections.
Introduction to Internet of Things (IoT) BETCK105H
MODULE-2
IoT SENSING AND ACTUATION
INTRODUCTION
A major chunk of IoT applications involves sensing in one form or the other. In almost all IoT applications, be it consumer IoT, industrial IoT, or a plain hobby-based deployment of an IoT solution, sensing forms the first step. Incidentally, actuation forms the final step in the whole operation of IoT application deployment in a majority of scenarios.
The basic science of sensing and actuation is based on the process of transduction. Transduction is the process of energy conversion from one form to another. A transducer is a physical means of enabling transduction. Transducers take energy in any form (for which they are designed), be it electrical, mechanical, chemical, light, sound, or another form, and convert it into a different form, which again may be electrical, mechanical, chemical, light, sound, or another form. Sensors and actuators are deemed transducers. For example, in a public announcement (PA) system, a microphone (input device) converts sound waves into electrical signals, which are amplified by an amplifier system (a process). Finally, a loudspeaker (output device) converts the amplified electrical signals back into audible sound waves. Table 5.1 outlines the basic terminological differences between transducers, sensors, and actuators.
Table 5.1 Differences between transducers, sensors, and actuators
Transducer: Function: can work as a sensor or an actuator, but not simultaneously. Examples: any sensor or actuator.
Sensor: Function: used for quantifying environmental stimuli into signals. Examples: humidity sensors, temperature sensors, anemometers (measure flow velocity), manometers (measure fluid pressure), accelerometers (measure the acceleration of a body), gas sensors (measure the concentration of a specific gas or gases), and others.
Actuator: Function: used for converting signals into proportional mechanical or electrical outputs. Examples: motors (convert electrical energy to rotary motion), force heads (impose a force), pumps (convert rotary motion of shafts into either a pressure or a fluid velocity).
SENSORS
Sensors are devices that can measure, or quantify, or respond to the ambient changes in their
environment or within the intended zone of their deployment. They generate responses to
external stimuli or physical phenomena through characterization of the input functions (which are these external stimuli) and their conversion into typically electrical signals. For example, heat is converted to electrical signals in a temperature sensor, and atmospheric pressure is converted to electrical signals in a barometer. A sensor is only sensitive to the measured property (e.g., a temperature sensor only senses the ambient temperature of a room). It is insensitive to any other property besides what it is designed to detect (e.g., a temperature sensor does not bother about light or pressure while sensing the temperature). Finally, a sensor does not influence the measured property (e.g., measuring the
temperature). Finally, a sensor does not influence the measured property (e.g., measuring the
temperature does not reduce or increase the temperature). Figure 2.1 shows the simple outline
of a sensing task. Here, a temperature sensor keeps on checking an environment for changes.
In the event of a fire, the temperature of the environment goes up. The temperature sensor
notices this change in the temperature of the room and promptly communicates this
information to a remote monitor via the processor.
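This sensing task can be sketched as a simple polling loop in Python; the alarm threshold, sampling interval, and the simulated sensor driver are illustrative assumptions, not from the textbook.

import random
import time

THRESHOLD_C = 57.0  # hypothetical fire-alarm threshold

def read_temperature():
    # Stand-in for an actual temperature sensor driver (degrees Celsius).
    return random.uniform(20.0, 80.0)

def monitor(samples=5, interval_s=0.5):
    for _ in range(samples):
        t = read_temperature()
        if t > THRESHOLD_C:
            # The processor forwards the event to the remote monitor.
            print(f"ALERT: temperature {t:.1f} C exceeds {THRESHOLD_C} C")
        time.sleep(interval_s)

monitor()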
Power Requirements: The way sensors operate decides the power requirements that
must be provided for an IoT implementation. Some sensors need to be provided with
separate power sources for them to function, whereas some sensors do not require any
power sources. Depending on the requirements of power, sensors can be of two types.
(i) Active: Active sensors do not require external circuitry or a mechanism to provide them with power. They directly respond to external stimuli from the ambient environment and convert them into an output signal. For example, a photodiode converts light into electrical impulses.
(ii) Passive: Passive sensors require an external mechanism to power them up. The sensed properties are modulated with the sensor's inherent characteristics to generate patterns in the output of the sensor. For example, a thermistor's resistance can be detected by applying a voltage difference across it or passing a current through it.
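To make the thermistor example concrete, the sketch below converts the output voltage of a thermistor voltage divider into temperature using the simplified beta equation. The component values (a 10 kΩ divider, beta = 3950) are typical hobby-grade assumptions, not values from the textbook.

import math

def thermistor_temperature(v_out, v_in=3.3, r_fixed=10_000.0,
                           r0=10_000.0, t0=298.15, beta=3950.0):
    # Voltage divider: v_out = v_in * r_therm / (r_fixed + r_therm)
    r_therm = r_fixed * v_out / (v_in - v_out)
    # Beta equation: 1/T = 1/T0 + (1/beta) * ln(R/R0), with T in kelvin
    return 1.0 / (1.0 / t0 + math.log(r_therm / r0) / beta)

# At the divider midpoint (1.65 V) the thermistor equals R0, giving ~25 C.
print(thermistor_temperature(1.65) - 273.15)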
Sensors are broadly divided into two types, depending on the type of output generated from
these sensors, as follows.
(i) Analog: Analog sensors generate an output signal or voltage, which is proportional
(linearly or non-linearly) to the quantity being measured and is continuous in time and
amplitude. Physical quantities such as temperature, speed, pressure, displacement, strain, and
others are all continuous and categorized as analog quantities. For example, a thermometer or
a thermocouple can be used for measuring the temperature of a liquid (e.g., in household
water heaters). These sensors continuously respond to changes in the temperature of the
liquid.
(ii) Digital: These sensors generate output that is a discrete-time digital representation (in time, or amplitude, or both) of the quantity being measured, in the form of output signals or voltages. Typically, binary output signals in the form of a logic 1 or a logic 0 for ON or OFF, respectively, are associated with digital sensors. The generated discrete (non-continuous) values may be output as a single "bit" (serial transmission), eight of which combine to produce a single "byte" output (parallel transmission) in digital sensors.
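The analog/digital distinction can be illustrated numerically: an ADC quantizes an analog sensor's continuous voltage into discrete counts, whereas a digital sensor delivers logic levels directly. The 10-bit ADC, 5 V reference, and LM35-style 10 mV per degree scaling below are illustrative assumptions.

def adc_to_voltage(raw, bits=10, v_ref=5.0):
    # The ADC maps 0..v_ref onto 2**bits discrete levels; convert a count back.
    return raw * v_ref / (2 ** bits - 1)

def lm35_celsius(v):
    # Assumed LM35-style scaling: 10 mV per degree Celsius.
    return v / 0.010

raw = 205  # example 10-bit ADC count
print(f"{lm35_celsius(adc_to_voltage(raw)):.1f} C")  # ~100.2 C

door_open = 1  # a digital sensor, by contrast, outputs a logic 1/0 directly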
Measured Property: The property of the environment being measured by the sensors
can be crucial in deciding the number of sensors in an IoT implementation. Some
properties to be measured do not show high spatial variations and can be quantified
only based on temporal variations in the measured property, such as ambient
temperature, and atmospheric pressure. In contrast, some properties to be measured show high spatial as well as temporal variations, such as sound, images, and others. Depending on the properties to be measured, sensors can be of two types.
(i) Scalar: Scalar sensors produce an output proportional to the magnitude of the quantity being measured. The output is in the form of a signal or voltage. Scalar physical quantities are those where only the magnitude of the signal is sufficient for describing or characterizing the phenomenon and generating information. Examples of such measurable physical quantities include color, pressure, temperature, strain, and others. A thermometer or thermocouple is an example of a scalar sensor that has the ability to detect changes in ambient or object temperatures (depending on the sensor's configuration). Factors such as changes in sensor orientation or direction do not (typically) affect these sensors.
(ii) Vector: Vector sensors are affected by the magnitude as well as the direction
and/or orientation of the property they are measuring. Physical quantities such as
velocity and images that require additional information besides their magnitude for
completely categorizing a physical phenomenon are categorized as vector quantities.
Measuring such quantities is undertaken using vector sensors. For example, an
electronic gyroscope, which is commonly found in all modern aircraft, is used for
detecting the changes in orientation of the gyroscope with respect to the Earth’s
orientation along all three axes.
Most of the sensing in IoT is non-critical, where minor deviations in sensorial outputs seldom change the nature of the undertaken tasks. However, some critical applications of IoT, such as healthcare, industrial process monitoring, and others, do require sensors with high-quality measurement capabilities. As the quality of the measurement obtained from a sensor depends on a large number of factors, there are a few primary considerations that must be incorporated during the sensing of critical systems. In the event of a sensor's output signal going beyond its designed maximum or minimum capacity for measurement, the sensor output is truncated to the corresponding maximum or minimum value, which is also one of the sensor's limits. The measurement range between a sensor's characterized minimum and maximum values is also referred to as the full-scale range of that sensor.
Physical changes in the sensor or its material may result in long-term drift, which can span months or years. Noise is a temporally varying random deviation of signals. In contrast, if a sensor's output varies/deviates due to deviations in the sensor's previous input values, this is referred to as hysteresis error. The present output of the sensor depends on the past input values provided to the sensor. Typically, the phenomenon of hysteresis can be observed in analog sensors, magnetic sensors, and during the heating of metal strips. One way to check for hysteresis error is to check how the sensor's output changes when we first increase, then decrease, the input values to the sensor over its full range. It is generally denoted as a positive and negative percentage variation of the full range of that sensor.
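The up/down sweep check described above can be written out as a short procedure; the "laggy" sensor model used to exercise it is purely synthetic.

def hysteresis_percent(inputs, sensor):
    # Sweep the input upward, then downward, over the full range; report the
    # worst-case output difference as a percentage of full scale.
    up = [sensor(x) for x in inputs]
    down = [sensor(x) for x in reversed(inputs)]
    down.reverse()  # align downward readings with the same input values
    full_scale = max(up + down) - min(up + down)
    worst = max(abs(u - d) for u, d in zip(up, down))
    return 100.0 * worst / full_scale

class LaggySensor:
    # Synthetic sensor whose output is pulled slightly toward its previous
    # input, producing a history-dependent (hysteretic) response.
    def __init__(self):
        self.prev = 0.0
    def __call__(self, x):
        out = x + 0.05 * (self.prev - x)
        self.prev = x
        return out

print(f"{hysteresis_percent([i / 10 for i in range(11)], LaggySensor()):.2f} %")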
Multimedia sensing
Multimedia sensing encompasses the sensing of features that have a spatial variance property associated with the property of temporal variance. Unlike scalar sensors, multimedia sensors (such as cameras and microphones) capture information that varies over both space and time.
Hybrid sensing
The act of using scalar as well as multimedia sensing at the same time is referred to as hybrid
sensing. Many a time, there is a need to measure certain vector as well as scalar properties of
an environment at the same time. Under these conditions, a range of various sensors is employed (from the collection of scalar as well as multimedia sensors) to measure the various
properties of that environment at any instant of time, and temporally map the collected
information to generate new information. For example, in an agricultural field, it is required
to measure the soil conditions at regular intervals of time to determine plant health. Sensors
such as soil moisture and soil temperature are deployed underground to estimate the soil’s
water retention capacity and the moisture being held by the soil at any instant of time.
However, this setup only determines whether the plant is getting enough water or not. There
may be a host of other factors besides water availability, which may affect a plant’s health.
The additional inclusion of a camera sensor with the plant may be able to determine the
actual condition of a plant by additionally determining the color of leaves. The aggregate
information from soil moisture, soil temperature, and the camera sensor will be able to
collectively determine a plant’s health at any instant of time. Other common examples of
hybrid sensing include smart parking systems, traffic management systems, and others.
Figure 2.4(c) shows an example of hybrid sensing, where a camera and a temperature sensor
are collectively used to detect and confirm forest fires during wildlife monitoring.
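That camera-plus-temperature confirmation amounts to a simple decision fusion, sketched below; both thresholds are illustrative assumptions.

def confirm_forest_fire(temp_c, smoke_score):
    # Scalar evidence (temperature) and multimedia evidence (a camera-derived
    # smoke score in [0, 1]) must both agree before raising the alarm.
    hot = temp_c > 50.0
    smoky = smoke_score > 0.7
    return hot and smoky

print(confirm_forest_fire(63.0, 0.85))  # True: both modalities agree
print(confirm_forest_fire(63.0, 0.10))  # False: heat alone is not enough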
Virtual sensing
Many a time, there is a need for a very dense and large-scale deployment of sensor nodes spread over a large area for the monitoring of parameters. One such domain is agriculture. Here, often, the parameters being measured, such as soil moisture, soil temperature, and water level, do not show significant spatial variations. Hence, if sensors are deployed sparsely in the fields, the values at locations between two sensor nodes can be estimated, or virtually sensed, from the readings of the neighboring physical sensors.
SENSING CONSIDERATIONS
The choice of sensors in an IoT sensor node is critical and can either make or break the
feasibility of an IoT deployment. The following major factors influence the choice of sensors
in IoT-based sensing solutions: 1) sensing range, 2) accuracy and precision, 3) energy, and 4)
device size. These factors are discussed as follows:
Sensing Range: The sensing range of a sensor node defines the detection fidelity of
that node. Typical approaches to optimize the sensing range in deployments include
fixed k-coverage and dynamic k-coverage. A lifelong fixed k-coverage tends to usher
in redundancy as it requires a large number of sensor nodes, the sensing range of
some of which may also overlap. In contrast, dynamic k-coverage incorporates mobile
sensor nodes post detection of an event, which, however, is a costly solution and may
not be deployable in all operational areas and terrains. Additionally, the sensing range
of a sensor may also be used to signify the upper and lower bounds of a sensor’s
measurement range. For example, a proximity sensor has a typical sensing range of a
couple of meters. In contrast, a camera has a sensing range varying between tens of
meters to hundreds of meters. As the complexity of the sensor and its sensing range
goes up, its cost significantly increases.
Accuracy and Precision: The accuracy and precision of the measurements provided by a sensor are critical in deciding the operations of specific functional processes. Typically, off-the-shelf consumer sensors are low on requirements and often very cheap. However, their performance is limited to regular application domains. For example, a standard off-the-shelf temperature sensor is adequate for everyday consumer use, but applications demanding highly precise and reliable measurements cannot be facilitated by these sensors. Industrial sensors are typically very sophisticated, and as a result, very costly. However, these industrial sensors have very high accuracy and precision scores, even under harsh operating conditions.
Energy: The energy consumed by a sensing solution is crucial in determining the lifetime of that solution and the estimated cost of its deployment. If the sensor or the sensor node is so energy inefficient that it requires replenishment of its energy sources quite frequently, the effort and cost of maintaining the solution go up, whereas its deployment feasibility goes down. Consider a scenario where sensor nodes are deployed on top of glaciers. Once deployed, access to these nodes is not possible. If the energy requirements of the sensor nodes are too high, such a deployment will not last long, and the solution will be highly infeasible, as charging or changing the energy sources of these sensor nodes is not an option.
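A back-of-the-envelope sketch of why this matters: the lifetime of a duty-cycled node can be estimated from its battery capacity and its average current draw. All numbers below are illustrative assumptions.

def node_lifetime_days(battery_mah, active_ma, sleep_ma, duty_cycle):
    # Average current weights the active and sleep draws by the duty cycle.
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return battery_mah / avg_ma / 24.0

# A 2000 mAh cell, 20 mA while sensing/transmitting, 0.01 mA asleep, and a
# 1% duty cycle yield roughly 397 days of unattended operation.
print(f"{node_lifetime_days(2000, 20.0, 0.01, 0.01):.0f} days")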
Device Size: Modern-day IoT applications have wide penetration into all domains of life. Most applications of IoT require sensing solutions which are so small that they do not hinder any of the regular activities that were possible before the sensor node deployment was carried out. The larger the size of a sensor node, the larger the obstruction caused by it, the higher its cost and energy requirements, and the lower its demand for the bulk of IoT applications. Consider a simple human activity detector. If the detection unit is too large to be carried or too bulky and hinders regular normal movements, the demand for this solution will be low. It is because of this that the onset of wearables took off so strongly. Wearable sensors are highly energy-efficient, small in size, and almost part of the wearer's regular wardrobe.
ACTUATORS
An actuator can be considered as a machine or system component that can affect the movement or control of the said mechanism or system. Control systems affect changes to the
environment or property they are controlling through actuators. The system activates the
actuator through a control signal, which may be digital or analog. This elicits a response from the actuator in the form of some mechanical motion. The control system thus acts upon its environment through the actuator.
ACTUATOR TYPES
Broadly, actuators can be divided into seven classes:
1) hydraulic, 2) pneumatic, 3) electrical, 4) thermal/magnetic, 5) mechanical, 6) soft, and 7) shape memory polymers. Figure 2.6 shows some of the commonly used actuators in IoT applications.
Hydraulic actuators
A hydraulic actuator works on the principle of compression and decompression of fluids.
These actuators facilitate mechanical tasks such as lifting loads through the use of hydraulic
power derived from fluids in cylinders or fluid motors. The mechanical motion applied to a
hydraulic actuator is converted to either linear, rotary, or oscillatory motion. The almost
incompressible property of liquids is used in hydraulic actuators for exerting significant force.
These hydraulic actuators are also considered as stiff systems. The actuator’s limited
acceleration restricts its usage.
Pneumatic actuators
A pneumatic actuator works on the principle of compression and decompression of gases.
These actuators use a vacuum or compressed air at high pressure and convert it into either
linear or rotary motion. Pneumatic rack and pinion actuators are commonly used for valve controls.
Electric actuators
Typically, electric motors are used to power an electric actuator by generating mechanical
torque. This generated torque is translated into the motion of a motor’s shaft or for switching
(as in relays). For example, actuating equipment such as solenoid valves controls the flow of water in pipes in response to electrical signals. This class of actuators is considered one of the cheapest, cleanest, and fastest actuator types available.
Figure 2.6 Some common commercially available actuators used for IoT-based control applications
Mechanical actuators
In mechanical actuation, the rotary motion of the actuator is converted into linear motion to
execute some movement. The use of gears, rails, pulleys, chains, and other devices are
necessary for these actuators to operate. These actuators can be easily used in conjunction
with pneumatic, hydraulic, or electrical actuators. They can also work in a standalone mode.
The best example of a mechanical actuator is a rack and pinion mechanism.
Soft actuators
Soft actuators (e.g., polymer-based) consist of elastomeric polymers that are used as embedded fixtures in flexible materials such as cloth, paper, fiber, particles, and others. The conversion of molecular-level microscopic changes into tangible macroscopic deformations is the primary working principle of this class of actuators. These actuators have a high stake in modern-day robotics. They are designed for tasks involving fragile objects, such as fruit harvesting in agriculture, or for precise operations like manipulating internal organs during robot-assisted surgeries.
Shape memory polymers
Shape memory polymers (SMP) are considered as smart materials that respond to some
external stimulus by changing their shape, and then revert to their original shape once the
affecting stimulus is removed. Features such as high strain recovery, biocompatibility, low
density, and biodegradability characterize these materials. SMP-based actuators function similarly to our muscles. Modern-day SMPs have been designed to respond to a wide range of
stimuli such as pH changes, heat differentials, light intensity, and frequency changes,
magnetic changes, and others. Photopolymer/light-activated polymers (LAP) are a particular
type of SMP, which require light as a stimulus to operate. LAP-based actuators are
characterized by their rapid response times. Using only the variation of light frequency or its
intensity, LAPs can be controlled remotely without any physical contact.
ACTUATOR CHARACTERISTICS
The choice or selection of actuators is crucial in an IoT deployment, where a control
mechanism is required after sensing and processing of the information obtained from the
sensed environment. Actuators perform the physically heavier tasks in an IoT deployment;
tasks which require moving or changing the orientation of physical objects, changing the state
of objects, and other such activities. The correct choice of actuators is necessary for the long-
term sustenance and continuity of operations, as well as for increasing the lifetime of the
actuators themselves. A set of four characteristics can define all actuators:
Weight: The physical weight of an actuator limits its application scope. For example,
the use of heavier actuators is generally preferred for industrial applications and
applications requiring no mobility of the IoT deployment. In contrast, lightweight
actuators typically find common usage in portable systems in vehicles, drones, and
home IoT applications. It is to be noted that this is not always true. Heavier actuators
also have selective usage in mobile systems, for example, landing gears and engine
motors in aircraft.
Power Rating: This helps in deciding the nature of the application with which an
actuator can be associated. The power rating defines the minimum and maximum
operating power an actuator can safely withstand without damage to itself. Generally,
it is indicated as the power-to-weight ratio for actuators. For example, smaller servo
motors used in hobby projects typically have a maximum rating of 5 VDC, 500 mA,
which is suitable for operations driven by a battery-based power source. Exceeding this limit might be detrimental to the performance of the actuator and may cause burnout of the motor. In contrast, servo motors in larger applications have a rating of 460 VAC, 2.5 A, which requires a standalone power supply system for operation. It is to
be noted that actuators with still higher ratings are available and vary according to
application requirements.
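As a quick worked example, the two ratings quoted above can be compared by treating power as a simple V x I product (which, for the AC rating, gives apparent power and ignores the power factor):

small_servo_w = 5.0 * 0.5          # 5 VDC x 500 mA = 2.5 W
industrial_servo_w = 460.0 * 2.5   # 460 VAC x 2.5 A = 1150 W (apparent power)

print(f"Hobby servo: {small_servo_w} W; industrial servo: {industrial_servo_w} W")
print(f"Ratio: {industrial_servo_w / small_servo_w:.0f}x")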
Torque to Weight Ratio: The ratio of torque to the weight of the moving part of an instrument/device is referred to as its torque/weight ratio. This indicates the sensitivity of the actuator.
MODULE-III
IOT PROCESSING TOPOLOGIES AND TYPES
DATA FORMAT
The Internet is a vast space where huge quantities and varieties of data are generated regularly
and flow freely. As of January 2018, there were a reported 4.021 billion Internet users worldwide.
The massive volume of data generated by this huge number of users is further enhanced by the
multiple devices utilized by most users. In addition to these data-generating sources, non-
human data generation sources such as sensor nodes and automated monitoring systems further
add to the data load on the Internet. This huge data volume is composed of a variety of data
such as e-mails, text documents (Word docs, PDFs, and others), social media posts, videos,
audio files, and images, as shown in Figure 3.1. However, these data can be broadly grouped
into two types based on how they can be accessed and stored: 1) Structured data and 2)
unstructured data.
Figure 3.1 The various data-generating and storage sources connected to the Internet and the plethora of data types contained within it
Structured data
These are typically text data that have a pre-defined structure. Structured data are associated
with relational database management systems (RDBMS). These are primarily created using length-limited data fields such as phone numbers, social security numbers, and other such information. Whether human or machine generated, these data are easily searchable by querying algorithms as well as by human-generated queries. Common usage of this type of data
is associated with flight or train reservation systems, banking systems, inventory controls, and
other similar systems.
39
Dr..LEVINA Assoc Professor ,Dept Of CSE ,KNSIT Bangalore.-64
39
Introduction to Internet of Things (IOT) BETCK105H
Established languages such as Structured Query Language (SQL) are used for accessing these
data in RDBMS. However, in the context of IoT, structured data holds a minor share of the
total generated data over the Internet.
Unstructured data
In simple words, all the data on the Internet, which is not structured, is categorized as
unstructured. These data types have no pre-defined structure and can vary according to
applications and data-generating sources. Some of the common examples of human-generated
unstructured data include text, e-mails, videos, images, phone recordings, chats, and others.
Some common examples of machine-generated unstructured data include sensor data from
traffic, buildings, industries, satellite imagery, surveillance videos, and others. As already
evident from its examples, this data type does not have fixed formats associated with it, which
makes it very difficult for querying algorithms to perform a look-up. Querying languages such as NoSQL are generally used for this data type.
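The contrast in searchability can be sketched in a few lines of Python; the table, fields, and e-mail text below are hypothetical. The structured record answers an exact SQL query, while the unstructured blob only admits approximate substring scanning:

import sqlite3

# Structured data: fixed fields make exact queries trivial.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE bookings (name TEXT, phone TEXT, seat TEXT)")
db.execute("INSERT INTO bookings VALUES ('Asha', '9876543210', '12A')")
row = db.execute("SELECT seat FROM bookings WHERE phone = '9876543210'").fetchone()
print("Structured lookup:", row)

# Unstructured data: no schema, so a look-up degrades to substring scanning.
email_blob = "Hi, please move my booking to seat 14C if possible. Thanks, Asha."
print("Unstructured lookup:", "seat" in email_blob)   # only approximate matching is possible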
IMPORTANCE OF PROCESSING IN IOT
Based on the urgency of processing, IoT data sources can be broadly graded into three categories: 1) very time-critical, 2) time-critical, and 3) normal. For very time-critical data sources, the need for processing the data in place, or as near to the source as possible, is crucial to the deployment success of such domains. For time-critical data sources (category 2), the processing requirements allow for the data to be transmitted to remote locations/processors such as clouds, or to be handled through collaborative processing. Finally, the last category of data sources (normal) typically has no urgent processing requirements, and processing can be pursued leisurely.
PROCESSING TOPOLOGIES
The identification and intelligent selection of the processing requirements of an IoT application is one of the crucial steps in deciding the architecture of the deployment. A properly designed IoT architecture results in massive savings in network bandwidth and conserves significant amounts of energy overall, while providing the proper and allowable processing latencies for the solutions associated with the architecture. Based on the importance of processing in IoT as outlined in Section 3.2, we can divide the various processing solutions into two large topologies: 1) on-site and 2) off-site. The off-site processing topology can be further divided into: 1) remote processing and 2) collaborative processing.
On-site Processing
As evident from the name, the on-site processing topology signifies that the data is processed
at the source itself. This is crucial in applications that have a very low tolerance for latencies.
These latencies may result from the processing hardware or from the network (during transmission of the data for processing away from the source). Applications such as those associated with healthcare and flight control systems (real-time systems) have a breakneck data generation rate. These additionally show rapid temporal changes that can be missed (leading to catastrophic damage) unless the processing infrastructure is fast and robust enough to handle such data.
such data. Figure 3.2 shows the on-site processing topology, where an event (here, fire) is
detected utilizing a temperature sensor connected to a sensor node. The sensor node processes
the information from the sensed event and generates an alert. The node additionally has the
option of forwarding the data to a remote infrastructure for further analysis and storage.
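A minimal sketch of this topology is given below; the sensor read-out is simulated and the threshold is an assumed value. The point is that the detection logic stays on the node, so the alert latency does not depend on the network:

import random, time

FIRE_THRESHOLD_C = 60.0   # assumed alert threshold, illustrative only

def read_temperature():
    # Placeholder for a real sensor driver; returns a simulated reading.
    return random.uniform(20.0, 80.0)

for _ in range(10):        # a real node would loop indefinitely
    temp = read_temperature()
    if temp > FIRE_THRESHOLD_C:
        # The alert is raised locally, with no network round trip involved.
        print(f"ALERT: possible fire, {temp:.1f} deg C")
    time.sleep(1)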
Off-Site Processing
The off-site processing paradigm, as opposed to the on-site processing paradigm, allows for latencies (due to processing or the network); however, it is significantly cheaper than on-site processing topologies. This difference in cost is mainly due to the low demands and
requirements of processing at the source itself. Often, the sensor nodes are not required to
process data on an urgent basis, so having a dedicated and expensive on-site processing
infrastructure is not sustainable for large-scale deployments typical of IoT deployments. In the
off-site processing topology, the sensor node is responsible for the collection and framing of
data that is eventually to be transmitted to another location for processing. Unlike the on-site
processing topology, the off-site topology has a few dedicated high-processing enabled
devices, which can be borrowed by multiple simpler sensor nodes to accomplish their tasks. At
the same time, this arrangement keeps the costs of large-scale deployments extremely
manageable. In the off-site topology, the data from these sensor nodes (data generating sources)
is transmitted either to a remote location (which can either be a server or a cloud) or to multiple
processing nodes. Multiple nodes can come together to share their processing power in order
to collaboratively process the data (which is important in case a feasible communication
pathway or connection to a remote location cannot be established by a single node).
Remote Processing
This is one of the most common processing topologies prevalent in present-day IoT solutions.
It encompasses the sensing of data by various sensor nodes; the data is then forwarded to a remote server or a cloud-based infrastructure for further processing and analytics. The processing of data from hundreds or thousands of sensor nodes can be simultaneously offloaded to a single,
powerful computing platform; this results in massive cost and energy savings by enabling the
reuse and reallocation of the same processing resource while also enabling the deployment of
smaller and simpler processing nodes at the site of deployment. This setup also ensures massive
scalability of solutions, without significantly affecting the cost of the deployment. Figure 3.3
shows the outline of one such paradigm, where the sensing of an event is performed locally,
and the decision making is outsourced to a remote processor (here, cloud). However, this
paradigm tends to use up a lot of network bandwidth and relies heavily on the presence of
network connectivity between the sensor nodes and the remote processing infrastructure.
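One common minimal realization is an HTTP push from the node to a cloud endpoint. The sketch below uses the third-party requests library; the endpoint URL and the payload fields are purely hypothetical:

import requests  # third-party package: pip install requests

reading = {"node_id": "node-17", "temperature_c": 41.2}   # hypothetical sensed sample

# The node only frames and forwards data; all analytics run on the remote side.
resp = requests.post(
    "https://example.com/api/v1/readings",   # hypothetical cloud endpoint
    json=reading,
    timeout=5,
)
print("Server responded with status", resp.status_code)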
Collaborative Processing
This processing topology typically finds use in scenarios with limited or no network
connectivity, especially systems lacking a backbone network. Additionally, this topology can
be quite economical for large-scale deployments spread over vast areas, where providing
networked access to a remote infrastructure is not viable. In such scenarios, the simplest
solution is to club together the processing power of nearby processing nodes and
collaboratively process the data in the vicinity of the data source itself. This approach also
reduces latencies due to the transfer of data over the network. Additionally, it conserves
bandwidth of the network, especially ones connecting to the Internet.
Figure 3.4 shows the collaborative processing topology for processing data locally. This topology can be quite beneficial for applications such as agriculture, where an intense and temporally high frequency of data processing is not required, as agricultural data is generally logged after significantly long intervals (in the range of hours). One important point to mention about this topology is the preference for mesh networks for its easy implementation.
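The idea can be imitated in a few lines by splitting a batch of readings across worker threads standing in for neighboring nodes; everything in the sketch below (samples, chunk size, number of neighbors) is illustrative:

from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for the per-node computation (e.g., averaging soil readings).
    return sum(chunk) / len(chunk)

readings = list(range(1, 25))                          # 24 hourly samples (illustrative)
chunks = [readings[i:i + 8] for i in range(0, 24, 8)]  # one equal chunk per neighbor

with ThreadPoolExecutor(max_workers=3) as neighbors:   # three simulated neighbor nodes
    partials = list(neighbors.map(process_chunk, chunks))

print("Per-node partial averages:", partials)
print("Combined average:", sum(partials) / len(partials))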
IOT DEVICE DESIGN AND SELECTION CONSIDERATIONS
The main consideration in minutely defining an IoT solution is the selection of the processor for developing the sensing solution (i.e., the sensor node). This selection is governed by many parameters that affect the usability, design, and affordability of the designed IoT sensing and processing solution. Here, we mainly focus on the deciding factors for selecting a processor for the design of a sensor node. The main factor governing IoT device design and selection for various applications is the processor. However, the other important considerations are as follows.
Size: This is one of the crucial factors for deciding the form factor and the energy consumption
of a sensor node. It has been observed that the larger the form factor, the larger is the energy consumption of the hardware. Additionally, large form factors are not suitable for a significant
bulk of IoT applications, which rely on minimal form factor solutions (e.g., wearables).
Energy: The energy requirements of a processor are the most important deciding factor in designing IoT-based sensing solutions. The higher the energy requirements, the higher is the energy source (battery) replacement frequency. This automatically lowers the long-term sustainability of the sensing hardware, especially for IoT-based applications.
Cost: The cost of a processor, besides the cost of sensors, is the driving force in deciding the density of deployment of sensor nodes for IoT-based solutions. Cheaper hardware enables a much higher density of hardware deployment by the users of an IoT solution. For example, cheaper gas and fire detection solutions would enable users to deploy much more sensing hardware for a lower cost.
Memory: The memory requirements (both volatile and non-volatile memory) of IoT devices
determine the capabilities the device can be armed with. Features such as local data processing,
data storage, data filtering, data formatting, and a host of other features rely heavily on the
memory capabilities of devices. However, devices with higher memory tend to be costlier for
obvious reasons.
Processing power: As covered in earlier sections, processing power is vital (comparable to memory) in deciding what type of sensors can be accommodated with the IoT device/node, and what processing features can be integrated on-site with the IoT device. The processing power also
decides the type of applications the device can be associated with. Typically, applications that
handle video and image data require IoT devices with higher processing power as compared to
applications requiring simple sensing of the environment.
I/O Rating: The input-output (I/O) rating of an IoT device, primarily of its processor, is the deciding factor in determining the circuit complexity, energy usage, and requirements for supporting various sensing solutions and sensor types. Newer processors have a meager I/O voltage rating of 3.3 V, as compared to 5 V for somewhat older processors. This translates to requiring additional voltage and logic conversion circuitry to interface legacy technologies and sensors with the newer processors. Despite the lower power consumption afforded by reduced I/O voltage levels, this additional conversion circuitry not only affects the complexity of the circuits but also adds to the costs.
Add-ons: The support for various add-ons that a processor, or for that matter an IoT device, provides, such as analog-to-digital conversion (ADC) units, in-built clock circuits, connections to USB and Ethernet, and in-built wireless access capabilities, helps in defining the robustness and usability of a processor or IoT device in various application scenarios. Additionally, the provision of these add-ons also decides how fast a solution can be developed, especially the hardware part of the whole IoT application. As interfacing and integration of systems at the circuit level can be daunting to the uninitiated, the presence of these options with the processor makes it highly attractive to users and developers.
PROCESSING OFFLOADING
The processing offloading paradigm is important for the development of densely deployable, energy-conserving, miniaturized, and cheap IoT-based solutions for sensing tasks. Building upon the basics of the off-site processing topology covered in the previous sections, we delve a bit further into the various nuances of processing offloading in IoT. Figure 3.5 shows the typical outline of an IoT deployment with the various layers of processing that
are encountered, spanning vastly different application domains, from as near as the sensed environment to as far as cloud-based infrastructure. Starting from the primary layer of sensing, we can have multiple sensing types tasked with detecting an environment (fire, surveillance, and others). The sensors enabling these sensing types are integrated with a processor using wired or wireless connections (mostly wired). In the event that certain applications require immediate processing of the sensed data, an on-site processing topology is followed, similar to the one in Figure 3.2. However, for the majority of IoT applications, the bulk of the processing is carried out remotely in order to keep the on-site devices simple, small, and economical. Typically, for off-site processing, data from the sensing layer can be forwarded to the fog or the cloud, or can be contained within the edge layer. The edge layer makes use of devices within the local network to process the data, which is similar to the collaborative processing topology shown in Figure 3.4.
Figure 3.5 The various layers of processing in a typical IoT deployment, from the sensed environment at the edge to the cloud
The devices within the local network, up to the fog, generally communicate using short-range wireless connections. In case the data needs to be sent further up the chain to the cloud, a long-range wireless connection enabling access to a backbone network is essential.
Fog-based processing is still considered local because the fog nodes are typically localized
within a geographic area and serve the IoT nodes within a much smaller coverage area as
compared to the cloud. Fog nodes, which are at the level of gateways, may or may not be
accessed by the IoT devices through the Internet.
Finally, the approach of forwarding data to a cloud or a remote server, as shown in the topology
in Figure 3.3, requires the devices to be connected to the Internet through long-range
wireless/wired networks, which eventually connect to a backbone network. This approach is generally costly in terms of network bandwidth and latency, as well as the complexity of the devices and the network infrastructure involved.
This section on data offloading is divided into three parts: 1) offload location (which outlines where the processing can be offloaded in the IoT architecture), 2) offload decision making (how to choose where to offload the processing to, and by how much), and 3) offloading considerations (deciding when to offload).
Offload location
The choice of offload location decides the applicability, cost, and sustainability of the IoT
application and deployment. We distinguish the offload location into four types:
Edge: Offloading processing to the edge implies that the data processing is facilitated at a location at or near the source of data generation itself. Offloading to the edge is done to achieve aggregation, manipulation, bandwidth reduction, and other data operations directly on an IoT device.
Fog: Fog computing is a decentralized computing infrastructure that is utilized to conserve
network bandwidth, reduce latencies, restrict the amount of data unnecessarily flowing through
the Internet, and enable rapid mobility support for IoT devices. The data, computing, storage
and applications are shifted to a place between the data source and the cloud resulting in
significantly reduced latencies and network bandwidth usage.
Remote Server: A simple remote server with good processing power may be used with IoT-based applications to offload the processing from resource-constrained IoT devices. Rapid scalability may be an issue with remote servers, and they may be costlier and harder to maintain in comparison to solutions such as the cloud.
Cloud: Cloud computing is a configurable computer system, which can provide access to configurable resources, platforms, and high-level services through a shared pool hosted remotely. A cloud is provisioned for processing offloading so that processing resources can be rapidly provisioned with minimal effort over the Internet and accessed globally. The cloud enables massive scalability of solutions, as the resources allocated to a user or solution can be enhanced on demand, without the user having to go through the pains of acquiring and configuring new and costly hardware.
Offload decision making
Naive Approach: This approach is typically a hard approach, without too much decision making. It can be considered a rule-based approach in which the data from IoT devices is offloaded to the nearest location based on the fulfillment of certain offload criteria. Although easy to implement, this approach is never recommended, especially for dense deployments, or deployments where the data generation rate is high or the data being offloaded is complex to handle (multimedia or hybrid data types). Generally, statistical measures are consulted for generating the rules for offload decision making.
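A hedged sketch of such a rule table is shown below; the thresholds are invented for illustration, and a real deployment would derive them from the statistical measures mentioned above:

def naive_offload(data_size_kb, deadline_ms):
    # Hard-coded, rule-based decision (illustrative thresholds only).
    if deadline_ms < 50:
        return "edge"        # too urgent to leave the local network
    if data_size_kb < 100:
        return "fog"         # small payloads stay near the source
    return "cloud"           # everything else goes to the remote cloud

print(naive_offload(data_size_kb=10, deadline_ms=20))     # -> edge
print(naive_offload(data_size_kb=50, deadline_ms=500))    # -> fog
print(naive_offload(data_size_kb=4000, deadline_ms=500))  # -> cloud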
Bargaining based approach: This approach, although a bit processing-intensive during the decision-making stage, enables the alleviation of network traffic congestion and enhances service QoS (quality of service) parameters such as bandwidth, latency, and others. At times, while trying to maximize multiple parameters for the whole IoT implementation in order to provide the most optimal solution or QoS, not all parameters can be treated with equal importance.
Bargaining based solutions try to maximize the QoS by trying to reach a point where the
qualities of certain parameters are reduced, while the others are enhanced. This measure is
undertaken so that the achieved QoS is collaboratively better for the full implementation rather
than a select few devices enjoying very high QoS. Game theory is a common example of the
bargaining based approach. This approach does not need to depend on historical data for
decision making purposes.
Learning based approach: Unlike the bargaining based approaches, learning based approaches generally rely on the past behavior and trends of data flow through the IoT architecture. The optimization of QoS parameters is pursued by learning from historical trends, trying to further optimize previous solutions, and enhancing the collective behavior of the IoT
implementation. The memory requirements and processing requirements are high during the
decision making stages. The most common example of a learning based approach is machine
learning.
Offloading considerations
There are a few offloading parameters which need to be considered while deciding upon the
offloading type to choose. These considerations typically arise from the nature of the IoT
application and the hardware being used to interact with the application. Some of these
parameters are as follows.
Bandwidth: The maximum amount of data that can be simultaneously transmitted over the
network between two points is the bandwidth of that network. The bandwidth of a wired or
wireless network is also considered to be its data-carrying capacity and often used to describe
the data rate of that network.
Latency: It is the time delay incurred between the start and completion of an operation. In the present context, latency can be due to the network (network latency) or the processor (processing latency). In either case, latency arises due to the physical limitations of the infrastructure associated with an operation. The operation can be a data transfer over a network or the processing of data at a processor.
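The two parameters combine into a simple first-order estimate of offload time: serialization time (payload divided by bandwidth) plus latency. The link figures below are assumed purely for illustration:

def transfer_time_s(payload_mb, bandwidth_mbps, latency_ms):
    # First-order model: propagation/processing delay + serialization time.
    return latency_ms / 1000.0 + (payload_mb * 8) / bandwidth_mbps

payload = 5.0  # MB of sensed data (illustrative)
print(f"Edge link : {transfer_time_s(payload, bandwidth_mbps=100, latency_ms=5):.2f} s")
print(f"Cloud link: {transfer_time_s(payload, bandwidth_mbps=20, latency_ms=80):.2f} s")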
Criticality: It defines the importance of a task being pursued by an IoT application. The more critical a task is, the lower is the latency expected from the IoT solution. For example, the detection of fires using an IoT solution has higher criticality than the detection of agricultural field parameters. The former requires a response time to the tune of milliseconds, whereas the latter can be addressed within hours or even days.
Resources: It signifies the actual capabilities of an offload location. These capabilities may be
the processing power, the suite of analytical algorithms, and others. For example, it is futile
and wasteful to allocate processing resources reserved for real-time multimedia processing
(which are highly energy-intensive and can process and analyze huge volumes of data in a short
duration) to scalar data (which can be addressed using nominal resources without wasting much energy).
Data volume: The amount of data generated by a source or sources that can be simultaneously
handled by the offload location is referred to as its data volume handling capacity. Typically,
for large and dense IoT deployments, the offload location should be robust enough to address
the processing issues related to massive data volumes.
Module-IV
CLOUD COMPUTING
INTRODUCTION
Sensor nodes are the key components of Internet of Things (IoT). These nodes are resource-
constrained in terms of storage, processing, and energy. Moreover, in IoT, the devices are
connected and communicate with one another by sharing the sensed and processed data. Handling
the enormous data generated by this large number of heterogeneous devices is a non-trivial task.
Consequently, cloud computing becomes an essential building block of the IoT architecture. This section aims at providing an extensive overview of cloud computing and of the different concepts related to it. Cloud computing is
more than traditional network computing. Unlike network computing, cloud computing
comprises a pool of multiple resources such as servers, storage, and network from single/multiple
organizations. These resources are allocated to the end users as per requirement, on a payment
basis. In cloud computing architecture, an end user can request for customized resources such as
storage space, RAM, operating systems, and other software to a cloud service provider (CSP) as
shown in Figure 4.1. For example, a user can request for a Linux operating system for running an
application from a CSP; another end user can request for Windows 10 operating system from the
same CSP for executing some application. The cloud services are accessible from anywhere and
at any time by an authorized user through Internet connectivity.
VIRTUALIZATION
The key concept of cloud computing is virtualization. The technique of sharing a single resource
among multiple end user organizations or end users is known as virtualization. In the
virtualization process, a physical resource is logically distributed among multiple users. However,
a user perceives that the resource is unlimited and is dedicatedly provided to him/her. Figure 4.2(a) represents a traditional desktop, where an application (App) is running on top of an OS, and resources are utilized only for that particular application. On the other hand, multiple resources can be used by different end users through virtualization software, as shown in Figure 4.2(b). Virtualization software separates the resources logically so that there is no conflict among the users during resource utilization.
Figure 4.2 Traditional desktop versus virtualization
Advantages of Virtualization
With the increasing number of interconnected heterogeneous devices in IoT, the importance of
virtualization also increases. In IoT, a user is least bothered about where the data from different
heterogeneous devices are stored or processed for a particular application. Users are mainly
concerned for their services. Typically, there are different software such as VMware, which
enable the concept of virtualization. With the increasing importance of cloud computing, different
organizations and individuals are using it extensively. Moreover, there is always a risk of system
crash at any instant of time. In such a scenario, cloud computing plays a vital role by keeping
backups through virtualization. Primarily, there are two entities in a cloud computing architecture: end users and the CSP. Both end users and the CSP benefit in several ways through the process of virtualization.
The major advantages, from the perspective of the end user and the CSP, are as follows:
Advantages for End Users
Variety: The process of virtualization in cloud computing enables an end user organization to
use various types of applications based on the requirements. As an example, suppose John takes
up still photography as a hobby. His resource-limited PC can barely handle the requirements of a photo editing software, say, X-photo editor.
In order to augment his PC’s regular performance, he uninstalls the X-photo editor software and
purchases a cloud service, which lets him access a virtual machine(VM). In his VM, he installs
the X-photo editor software, by which he can edit photos efficiently and, most importantly,
without worrying about burdening his PC or running out of processing resources. After six
months, John’s interest in his hobby grows and he moves on to video-editing too. For editing his
captured videos, he installs a video editing software, Y-video editor, in his VM and can edit
videos efficiently. Additionally, he has the option of installing and using a variety of software for
different purposes.
Availability: Virtualization creates a logical separation of the resources of multiple entities
without any intervention of end users. Consequently, the concept of virtualization makes
available a considerable amount of resources as per user requirements. The end users feel that
there are unlimited resources present dedicatedly for him/her. Let us suppose that Jane uses a particular email service. Her account has been active for over ten years now; however, it offers limited storage of 2 GB. Due to the ever-accumulating file attachments in different emails, her 2
GB complimentary space is exhausted. However, there is a provision that if she pays $100
annually, she can attach additional space to her mail service. This upgrade allows her to have
more storage at her disposal for a considerable time in the future.
Portability: Portability signifies the availability of cloud computing services from anywhere in
the world, at any instant of time. For example, a person flying from the US to the UK still has
access to their documents, although they cannot physically access the devices on which the data
is stored. This has been made possible by platforms such as Google Drive.
Elasticity: Through the concept of virtualization, an end user can scale-up or scale- down
resource utilization as per requirements. We have already explained that cloud computing is based
on a pay-per-use model. The end user needs to pay the amount based on their usage. For example,
Jack rents two VMs in a cloud computing infrastructure from a CSP. VM1 has the Ubuntu
operating system (OS), on which Jack is simulating a network scenario using Network Simulator-2 (NS2). VM2 has a Windows 10 OS, on which he is running a MATLAB simulation.
However, after a few days, Jack feels that his VM2 has served its purpose and is no longer
required. Consequently, he releases VM2 and, after that, he is only billed for VM1. Thus, Jack
can scale-up or scale-down his resources in cloud computing, which employs the concept of
virtualization.
Advantages for CSP
Resource Utilization: Typically, a CSP in a cloud computing architecture procures resources on its own or gets them from third parties. These resources are distributed among different users dynamically, as per their requirements. A segment of a particular resource provided to a user at one time instant can be provided to another user at a different time instant. Thus, in the cloud computing architecture, resources can be re-utilized for multiple users.
Effective Revenue Generation: A CSP generates revenue from the end users based on resource
utilization. As an example, today, a user A is utilizing a storage facility from a particular CSP. The user will release the storage after a few days, when his/her requirement is complete. The CSP
earns some revenue from user A for the utilization of the allocated storage facility. In the future,
the CSP can provide the same storage facility to a different user, B. Again, the CSP can generate
revenue from user B for his/her storage utilization.
Types of Virtualization
Based on the requirements of the users, virtualization can be categorized as shown in Figure 4.3.
Hardware Virtualization: This type of virtualization indicates the sharing of hardware resources among multiple users. For example, a single processor appears as many different processors in a cloud computing architecture. Different operating systems can be installed on these processors, and each of them can work as a stand-alone machine.
Application Virtualization: A single application is stored at the cloud end. However, as per
requirement, a user can use the application in his/her local computer without ever actually
installing the application. Similar to storage virtualization, in application virtualization, the users
get the impression that applications are stored and executed in their local computer.
Desktop Virtualization: This type of virtualization allows a user to access and utilize the
services of a desktop that resides at the cloud. The users can use the desktop from their local
desktop.
CLOUD MODELS
As per the National Institute of Standards and Technology (NIST) Cloud Computing Standards Roadmap Working Group, the cloud model can be divided into two parts: (1) the service model and (2) the deployment model, as shown in Figure 4.4. The service model is further categorized into Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). On the other hand, the deployment model is further categorized into private cloud, community cloud, public cloud, and hybrid cloud.
Deployment Model
Private Cloud: This type of cloud is owned explicitly by an end user organization. The internal
resources of the organization maintain the private cloud.
Community Cloud: This cloud forms with the collaboration of a set of organizations for a
specific community. For a community cloud, each organization has some shared interests.
Public Cloud: The public cloud is owned by a third party organization, which provides services
to the common public. The service of this cloud is available for any user, on a payment basis.
Hybrid Cloud: This type of cloud comprises two or more clouds (private, public, or community).
SERVICE-LEVEL AGREEMENT IN CLOUD COMPUTING
A CSP provides different services to its customers and generates revenue from them as per their usage. Therefore, for a clear understanding between the CSP and the customer about the services, an agreement is required to be made, which is known as a service-level agreement (SLA). An SLA provides a detailed description of the services that will be received by the customer. Based on the SLA, a customer can be aware of each and every term and condition of the services before availing them. An SLA may include multiple organizations for making the legal contract with the customers.
Importance of SLA
An SLA is essential in cloud computing architecture for both CSP and customers. It is important
because of the following reasons:
Customer Point of View: Each CSP has its SLA, which contains a detailed description of the
services. If a customer wants to use a cloud service, he/she can compare the SLAs of different
organizations. Therefore, a customer can choose a preferred CSP based on the SLAs.
CSP Point of View: In many cases, certain performance issues may occur for a particular service,
because of which a CSP may not be able to provide the services efficiently. Thus, in such a
situation, a CSP can explicitly mention in the SLA that they are not responsible for inefficient
service.
CLOUD IMPLEMENTATION
Cloud Simulation
With the rapid deployment of IoT infrastructure for different applications, the requirement for
cloud computing is also increasing. It is challenging to estimate the performance of an IoT system
with the cloud before real implementation. On the other hand, real deployment of the cloud is a
complex and costly procedure. Thus, there is a requirement for simulating the system through a
cloud simulator before real implementation. There are many cloud simulators that provide pre-
deployment test services for repeatable performance evaluation of a system. Typically, a cloud
simulator provides the following advantages to a customer:
Pre-deployment test before real implementation
System testing at no cost
Repeatable evaluation of the system
Pre-detection of issues that may affect the system performance
Flexibility to control the environment
Currently, different types of cloud simulators are available. A few cloud simulators are
listed here:
CloudSim
Description: CloudSim is a popular cloud simulator that was developed at the University of Melbourne. This simulator is written in a Java-based environment. In CloudSim, a user is allowed to add or remove resources dynamically during the simulation and evaluate the performance of the scenario.
Features: CloudSim has different features, which are listed as follows:
The CloudSim simulator provides various cloud computing data centers along with different
data center network topologies in a simulation environment.
Using CloudSim, virtualization of server hosts can be done in a simulation.
A user is able to allocate virtual machines (VMs) dynamically.
It allows users to define their own policies for the allocation of host resources to VMs.
It provides flexibility to add or remove simulation components dynamically.
A user can stop and resume the simulation at any instant of time.
Cloud Analyst
Description: Cloud Analyst is based on CloudSim. This simulator provides a graphical user interface (GUI) for easily simulating a cloud environment. Cloud Analyst is used for simulating large-scale cloud applications.
Features:
The Cloud Analyst simulator is easy to use due to the presence of the GUI.
It allows a user to add components and provides a flexible, high level of configuration.
A user can perform repeated experiments, considering different parameter values.
It can provide a graphical output, including a chart and table.
Green Cloud
Description: Green Cloud was developed as an extension of a packet-level network simulator, NS2. This simulator can monitor the energy consumption of different network components such
as servers and switches.
Features:
Green Cloud is an open-source simulator with a user-friendly GUI.
It provides the facility for monitoring the energy consumption of the network and its various components.
It supports the simulations of cloud network components.
It enables improved power management schemes.
It allows a user to manage and configure devices dynamically in a simulation.
An open-source cloud: OpenStack
For the real implementation of the cloud, there are various open-source cloud platforms available such as OpenStack, CloudStack, and Eucalyptus. Here, we will discuss the OpenStack platform briefly. OpenStack is free software, which provides a cloud IaaS to users. A user can easily use this cloud with the help of a GUI-based web interface or through the command line. OpenStack supports a vastly scalable cloud system, in which different pre-configured software suites are available. The service components of OpenStack along with their functions are depicted in Table 4.1.
Features of OpenStack
OpenStack allows a user to create and deploy virtual machines.
It provides the flexibility of setting up a cloud management environment.
OpenStack supports easy horizontal scaling: the dynamic addition or removal of instances for providing services to a large number of users.
This cloud platform allows the users to access the source code and share their code to the
community.
Table 4.1 Components in OpenStack
A commercial cloud: Amazon Web Services (AWS)
AWS provides excellent management tools, which help a user to monitor and automate different components of the cloud.
The cloud provides machine learning facilities, which are very useful for data scientists and developers.
For extracting meaning from data, analytics play an important role; AWS provides a data analytics platform.
SENSOR-CLOUD: SENSORS-AS-A-SERVICE
This section explores the concept of Sensors-as-a-Service (Se-aaS) in the sensor-cloud architecture. Virtualization of resources is the backbone of cloud computing. Similarly, in a sensor-cloud, virtualization of sensors plays an essential role in providing services to multiple users. Typically, in a sensor-cloud architecture, multiple users receive services from different sensor nodes simultaneously. However, the users remain oblivious to the fact that a set of sensor nodes
is not dedicated solely to them for their application requirements. In reality, a particular sensor
may be used for serving multiple user applications, simultaneously. The main aim of sensor-cloud
infrastructure is to provide an opportunity for the common mass to use Wireless Sensor Networks
(WSNs) on a payment basis. Similar to cloud computing, sensor- cloud architecture also follows
the pay-per-use model.
Importance of Sensor-Cloud
The sensor-cloud infrastructure is based on the concept of cloud computing, in which a user
application is served by a set of homogeneous or heterogeneous sensor nodes. These sensor nodes
are selected from a common pool of sensor nodes, as per the requirement of user applications.
Using the sensor-cloud infrastructure, a user receives data for an application from multiple sensor
nodes without owning them. Unlike sensor-cloud, if a user wants to use traditional WSN for a
certain application, he/she has to go through different pre-deployment and post-deployment
hurdles.
Figure 4.6 depicts the usage of sensor nodes in a traditional WSN and in the sensor-cloud infrastructure. With the help of a case study, we will discuss the advantages of the sensor-cloud over a traditional WSN.
Case Study: John is a farmer, and he has a significantly vast farmable area with him. As manual
supervision of the entire field is very difficult, he has planned to deploy a WSN in his farming
field. Before purchasing the WSN, he has to decide which sensors should be used in his fields for
sensing the different agricultural parameters. Additionally, he has to decide the type and number
of other components such as an electronics circuit board and communication module required
along with the sensors. As there are numerous vendors, it is challenging for him to choose the
correct (in terms of quality and cost) vendor, as well as the sensor owner from whom the WSN
will be procured. He finally decides the type of sensors along with the other components that are
required for monitoring his agricultural field. Now, John faces the difficulty of optimally planning
the sensor node deployment in his fields. After going through these hurdles, he decides on the
number of sensor nodes that are required for monitoring his field. Finally, John procures the
WSNs from a vendor. After procurement, he deploys the sensor nodes and connects the different components. As a WSN consists of different electronic components, he has to maintain the WSN
after its deployment. After three months, as his requirement of agricultural field monitoring is
completed, he removes the WSN from the agricultural field. Six months later, John plans to use
the WSN that was deployed in the agricultural field for home surveillance.
As the agriculture application is different from the home surveillance application, the sensor
required for the system also changes. Thus, John has to go through all the steps again, including
maintenance, deployment, and hardware management, for the surveillance system. Thus, we
observe that the users face different responsibilities for using a WSN for an application.
In such a situation, if sensor-cloud architecture is present, John can easily use WSNs for his
application on a rental basis. Moreover, through the use of sensor cloud, John can easily switch
the application without any manual intervention.
On the other end, service providers of the sensor-cloud infrastructure may serve multiple users with the same sensors and earn a profit.
End User: End users request services through a Web portal with which they are registered. Finally, through the Web portal, the end user receives the services, as shown in Figure 4.7. Based on the type and usage duration of the service, the end user pays the charges to the SCSP.
Sensor Owner: We have already discussed that the sensor-cloud architecture is based on the
concept of Se-aaS. Therefore, the deployment of the sensors is essential in order to provide
services to the end users. These sensors in a sensor-cloud architecture are owned and deployed by the sensor owners, as depicted in Figure 4.7. A particular sensor owner can own multiple homogeneous or heterogeneous sensor nodes. Based on the requirements of the users, these sensor nodes are virtualized and assigned to serve multiple applications at the same time. On
the other hand, a sensor owner receives rent depending upon the duration and usage of his/her
sensor node(s).
Sensor-Cloud Service Provider (SCSP): An SCSP is responsible for centrally managing the entire sensor-cloud infrastructure (including the management of sensor owners and end users, resource handling, database management, cloud handling, etc.). The SCSP receives rent from end users with the help of a pre-defined pricing model. The pricing scheme may include the infrastructure cost, the sensor owners' rent, and the revenue of the SCSP. Typically, different algorithms are used for managing the entire infrastructure. The SCSP receives the rent from the
end users and shares a partial amount with the sensor owners. The remaining amount is used for
maintaining the infrastructure. In the process, the SCSP earns a certain amount of revenue from
the payment of the end users.
Sensor-Cloud Architecture from Different Viewpoints
We explore the sensor-cloud architecture from two viewpoints: (i) the user organizational view and (ii) the real architectural view [5]. The different views of the sensor-cloud architecture are shown in Figure 4.8.
User Organizational View: This view of sensor-cloud architecture is simple. In a sensor-cloud,
end users interact with a Web interface for selecting templates of the services. Thereafter, the
services are received by the end users through the Web interface. In this architecture, an end user
is unaware of the complex processes that are running at the back end.
Real Architectural View: The complex processing of sensor-cloud architecture is visualized
through this view. The processes include sensor allocation, data extraction from the sensors,
virtualization of sensor nodes, maintenance of the infrastructure, data center management, data
caching, and others. For each process, there is a specific algorithm or scheme.
AGRICULTURAL IOT
Figure 4.11 depicts a typical agricultural food chain with the different operations that are involved in it. Additionally, the figure depicts the applications of the different IoT components required for performing these agricultural operations. In the agri-chain, we consider farming as the first stage. In farming, various operations, such as seeding, irrigation, fertilizer spreading, and pesticide spraying, are involved. For performing these operations, different IoT components are used. As an example, for monitoring soil health, soil moisture and temperature sensors are used; drones are used for spraying pesticides; and through wireless connectivity, a report on on-field soil conditions is sent directly to a user's handheld device or to the cloud. After farming, the next stage in the agri-chain is transport. Transport indicates the transfer of crops from the field to local storage, and after that, to long-term storage locations. In transport, smart vehicles can automatically load and unload crops. The global positioning system (GPS) plays an important role in tracking these smart vehicles, and radio frequency identification (RFID) is used to collect information regarding the presence of a particular container of a crop at a warehouse.
The next stage is packaging, where the harvested crop is packed before being sent into the market. Thus, it is essential to track every package and store all the details related to the crops in the cloud. Logistics enables the transfer of the packed crops to the market with the help of smart vehicles. These smart vehicles are equipped with different sensors that help in loading and unloading the packed crop autonomously. Additionally, GPS is used in these smart vehicles for locating the position of the packed crops at any instant and tracking their whereabouts.
All the logistical information gets logged in the cloud with the help of wireless connectivity.
Finally, the packed items reach the market using logistical channels. From the market, these items
are accessible to consumers. The details of the sale and purchase of the items are stored in the
form of records in the cloud.
Advantages of IoT in Agriculture
Modern technological advancements and the rapid developments in IoT components have
gradually increased agricultural productivity. Agricultural IoT enables the autonomous execution
of different agricultural operations. The specific advantages of the agricultural IoT are as follows:
Automatic seeding: IoT-based agricultural systems are capable of autonomous seeding and planting over agricultural fields. These systems significantly reduce manual effort, error probability, and delays in seeding and planting.
Efficient fertilizer and pesticide distribution: Agricultural IoT has been used to develop solutions that are capable of applying and controlling the amount of fertilizers and pesticides efficiently. These solutions are based on the analysis of crop health.
Water management: The excess distribution of water in the agricultural fields may affect the
growth of crops. On the other hand, the availability of global water resources is finite. The
constraint of limited and often scarce usable water resources is an influential driving factor for
the judicious and efficient distribution of agricultural water resources. Using the various solutions available for agricultural IoT, water can be distributed efficiently, all the while increasing field productivity and yields. IoT-enabled agricultural systems are capable of monitoring the water level and moisture in the soil and, accordingly, distributing water to the agricultural fields.
Real-time and remote monitoring: Unlike traditional agriculture, in IoT-based farming, a stakeholder can remotely monitor different agricultural parameters, such as crop and soil conditions, plant health, and weather conditions. Moreover, using a smart handheld device (e.g., a cell phone), a farmer can actuate on-field farming machinery such as water pumps, valves, and other pieces of machinery.
Easy yield estimation: Agricultural IoT solutions can be used to record and aggregate data,
which may be spatially or temporally diverse, over long periods. These records can be used to
come up with various estimates related to farming and farm management. The most prominent
among these estimates is crop yield, which is done based on established crop models and
historical trends.
Production overview: The detailed analysis of crop production, market rates, and market demand are essential factors for a farmer to estimate optimized crop yields and decide upon the essential steps for future cropping practices. Unlike traditional practices, IoT-based agriculture
acts as a force multiplier for farmers by enabling them to have a stronger hold on their farming
as well as crop management practices, and that too mostly autonomously. Agricultural IoT
provides a detailed product overview on the farmers’ handheld devices.
CASE STUDIES
In this section, we discuss a few case studies that will provide an overview of real implementation
of IoT infrastructure for agriculture.
Communication
The LAI system consists of multiple components, such as a WSN, an IoT gateway, and an IoT-based network. All of these components are connected through wired or wireless links. The public land mobile network (PLMN) is used to establish connectivity between external IoT networks and the gateway. The data are analyzed and visualized with the help of a farm management information system (FMIS), which resides in the IoT-based infrastructure. Further, a prevalent data transport protocol, MQTT, is used in the system. MQTT is a very lightweight publish/subscribe messaging protocol, which is widely used for different IoT applications. A wireless LAN is used for connecting the cluster head with the gateway. The TelosB motes are based on the IEEE 802.15.4 wireless protocol.
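A minimal MQTT publish, using the commonly available paho-mqtt client library, might look as follows; the broker address and topic hierarchy are placeholders, not part of the deployment described above:

import paho.mqtt.publish as publish  # third-party package: pip install paho-mqtt

# Publish one soil-moisture sample; an FMIS could subscribe to the same topic.
publish.single(
    topic="farm/field1/soil_moisture",   # hypothetical topic hierarchy
    payload="37.5",
    hostname="broker.example.com",       # hypothetical MQTT broker
    port=1883,                           # default unencrypted MQTT port
)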
Software
Software is an essential part of the system by which different operations of the system are
executed. In order to operate the Telos B motes, TinyOS, an open-source, low-power operating
system, is used. This OS is widely used for different WSN applications. Typically, in this system,
the data acquired from the sensor node is stored with a time stamp and sequence number (SN).
For wired deployments (the first generation deployment), the sampling rate used is 30
samples/hour. However, in the wireless deployment (the second generation), the sampling rate is
significantly reduced to 6 samples/hour. TinyOS is capable of activating the low-power listening
mode of a mote, which is used for switching a mote into low-power mode during its idle state.
In the ground sensor network, Telos B motes broadcast the data frame, and the cluster head (Raspberry-
Pi) receives it. This received data is transmitted to the gateway. Besides acquiring ground sensor
data, the Raspberry-Pi works as a cluster head. In this system, the cluster head can reboot any
affected ground sensor node automatically.
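As an illustration of storing readings with a time stamp and sequence number, a frame on the cluster head could be composed as in the sketch below; the field names are hypothetical and not taken from the actual deployment.

```python
# Hypothetical sketch of a sensed-data frame carrying a time stamp and sequence number (SN).
import time
from dataclasses import asdict, dataclass

@dataclass
class SensorFrame:
    node_id: int      # ID of the Telos B mote
    seq_no: int       # sequence number (SN), incremented per frame
    timestamp: float  # acquisition time (seconds since epoch)
    moisture_pct: float

_seq = 0

def next_frame(node_id: int, moisture_pct: float) -> SensorFrame:
    """Build the next frame; at 6 samples/hour this runs every 600 s."""
    global _seq
    _seq += 1
    return SensorFrame(node_id, _seq, time.time(), moisture_pct)

print(asdict(next_frame(3, 37.8)))
```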
The MQTT broker runs in the Internet server of the system. This broker is responsible for
receiving the data from the WSN. In the system, the graphical user interface (GUI) is built using
an Apache server. The visualization of the data is performed at the server itself. Further, when a
sensor fails, the server informs the users. The server can provide different system-related
information to the smartphone of the registered user.
The excess water supply in the agricultural field can damage the crops. On the other hand, insufficient water supply
in the agricultural field also affects the healthy growth of crops. Thus, efficient and optimized
water supply in the agricultural field is essential. This case study
highlights a prototype of an irrigation management system developed at the Indian Institute of
Technology Kharagpur, funded by the Government of India. The primary objective of this system
is to provide a Web-based platform to the farmer for managing the water supply of an irrigated
agricultural field. The system is capable of providing a farmer-friendly interface by which the
field condition can be monitored. With the help of this system, a farmer can take the necessary
decision for the agricultural field based on the analysis of the data. However, the farmer need not
worry about the complex background architecture of the system. It is an affordable solution for
the farmers to access the agricultural field data easily and remotely.
Architecture
The architecture of this system consists of three layers: Sensing and actuating layer, remote
processing and service layer, and application layer. These layers perform dedicated tasks
depending on the requirements of the system. Figure 4.13 depicts the architecture of the system.
The detailed functionalities of different layers of this system are as follows:
Sensing and Actuating layer: This layer deals with different physical devices, such as
sensor nodes, actuators, and communication modules. In the system, a specially designated sensor
node works as a cluster head to collect data from other sensor nodes, which are deployed on the
field for sensing the value of soil moisture and water level. A cluster head is equipped with two
communication modules: ZigBee (IEEE 802.15.4) and General Packet Radio Service (GPRS).
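As a rough illustration of how this layer could turn soil-moisture and water-level readings into an actuation decision, consider the sketch below; the thresholds and function name are hypothetical, not taken from the prototype.

```python
# Hypothetical threshold-based irrigation decision for the sensing and actuating layer.
MOISTURE_MIN_PCT = 30.0   # illustrative soil-moisture threshold
WATER_LEVEL_MAX_CM = 5.0  # illustrative standing-water limit

def should_irrigate(soil_moisture_pct: float, water_level_cm: float) -> bool:
    """Open the valve only when the soil is dry and the field is not flooded."""
    return soil_moisture_pct < MOISTURE_MIN_PCT and water_level_cm < WATER_LEVEL_MAX_CM

print(should_irrigate(22.5, 1.2))  # True  -> actuate pump/valve
print(should_irrigate(45.0, 0.5))  # False -> keep valve closed
```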
MODULE – V
IoT Case Studies and Future Trends

Vehicular IoT
5.1. Introduction
The use of connected vehicles is increasing rapidly across the globe. The number of on-road
accidents and mismanagement of traffic is also increasing. The increasing number of vehicles
gives rise to the problem of parking. The evolution of IoT helps to form a connected vehicular
environment to manage the transportation systems efficiently. Vehicular IoT systems have
penetrated different aspects of the transportation ecosystem, including on-road to off-road
traffic management, driver safety for heavy to small vehicles, and security in public
transportation.
The architecture of the vehicular IoT is divided into three sublayers: device, fog, and cloud.
Device: The device layer is the bottom-most layer, which consists of the basic infrastructure
of the connected-vehicle scenario. This layer includes the vehicles and
road side units (RSU). These vehicles contain certain sensors which gather the internal
information of the vehicles. On the other hand, the RSU works as a local centralized unit that
manages the data from the vehicles.
Fog: In vehicular IoT systems, fast decision making is pertinent to avoid accidents and traffic
mismanagement. In such situations, fog computing plays a crucial role by providing decisions
in real time, much nearer to the devices. Consequently, the fog layer helps to minimize data
transmission time in a vehicular IoT system.
Cloud: Fog computing handles the data processing near the devices to take decisions
instantaneously. However, for the processing of huge volumes of data, fog computing is not
sufficient. Therefore, in such a situation, cloud computing is used. In a vehicular IoT system,
cloud computing helps to handle processes that involve a huge amount of data. Further, for
long-term storage, cloud computing is used as a scalable resource in vehicular IoT systems.
Modern cars come equipped with different types of sensors and electronic components. These
sensors sense the internal environment of the car and transmit the sensed data to a processor.
The on-road deployed sensors sense the external environment and transmit the sensed data to
the centralized processor. Thereafter, based on requirements, the processor delivers these
sensed data to fog or cloud to perform necessary functions. These processes seem to be
simple, but practically, several components, along with their challenges, are involved in a
vehicular IoT system. Figure 5.2 depicts the components required for vehicular IoT systems.
Sensors: In vehicular IoT, sensors monitor different environmental conditions and help to
make the system more economical, efficient, and robust. Traditionally, two types of sensors,
internal and external, are used in vehicular IoT systems.
a. Internal: These types of sensors are placed within the vehicle. The sensors are typically
used to sense parameters that are directly associated with the vehicle. Along with the
sensors, the vehicles are equipped with different electronic components such as
processing boards and actuators. The internal sensors in a vehicle are connected with the
processor board, to which they transmit the sensed data. Further, the sensed data are
processed by the board to take certain predefined actions. A few examples of internal
sensors are GPS, fuel gauge, ultrasonic sensors, proximity sensors, accelerometer,
pressure sensors, and temperature sensors.
b. External: External sensors quantify information of the environment outside the vehicle.
For example, there are sensors used in the smart traffic system that are capable of sensing
vacant parking lots in a designated parking area. The still images and videos from cameras
are important inputs to generate decisions in a vehicular IoT system. Therefore, on-road
cameras are widely used as external sensors to capture still images and videos. The
captured images and videos are processed further, either in the fog or in the cloud layer,
to take certain pre-programmed actions. As an example, a camera sensor can capture the
image of the license plate of an overspeeding vehicle at a traffic signal; the image can be
processed to identify the owner of the vehicle and charge a fine.
Similarly, temperature, rainfall, and light sensors are also used in the vehicular IoT
infrastructure.
Satellites: In vehicular IoT systems, automatic vehicle tracking and crash detection are
among the important available features. Satellites help the system to track vehicles and detect
on-road crashes. Satellite images are also useful for detecting on-road congestion and
roadblocks.
Road Side Unit (RSU): The RSU is a static entity that works collaboratively with internal
and external sensors. Typically, the RSUs are equipped with sensors, communication units,
and fog devices. Vehicular IoT systems deal with time-critical applications, which need to
take decisions in real time. In such a situation, the fog devices attached to the RSUs process
the sensed data and take necessary action promptly. If a vehicular system involves heavy
computation, the RSU transmits the sensed data to the cloud end. Sometimes, these RSUs
also work as an intermediate communication agent between two vehicles.
Cloud and fog computing: In vehicular IoT systems, fog computing handles the light-weight
processes geographically closer to the vehicles than the cloud. Consequently, for
faster decision making, fog computing is used in vehicular IoT systems. However, for a
heavy-weight process, fog computing may not be a suitable option. In such a situation, cloud
computing is better suited to vehicular IoT systems. Cloud computing provides more
scalability of resources as compared to fog computing. Therefore, the choice of the
application of fog and cloud computing depends on the situation. For example, the location
and extent of short on-road congestion from a certain location can be determined by fog
computing with the help of sensed data.
The congestion information can be shared by the RSU among other on-road vehicles, thereby
suggesting that they avoid the congested road. On the other hand, predictions of regular
on-road congestion are typically handled with the help of cloud computing. For regular
congestion prediction, the cloud end needs to process a huge amount of instantaneous
data, as well as historical data for that stretch of road spanning back a few months to years.
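The fog-versus-cloud choice can be viewed as a routing decision over payload size and deadline. The sketch below is a simplified illustration with made-up thresholds, not a real vehicular IoT scheduler.

```python
# Hypothetical tier-selection rule: light, latency-critical jobs go to fog; the rest to cloud.
FOG_MAX_BYTES = 256_000   # illustrative fog capacity limit
CLOUD_LATENCY_MS = 150.0  # illustrative round-trip latency to the cloud

def choose_tier(payload_bytes: int, deadline_ms: float) -> str:
    if deadline_ms < CLOUD_LATENCY_MS and payload_bytes <= FOG_MAX_BYTES:
        return "fog"    # deadline too tight for the cloud round trip
    return "cloud"      # heavy or non-urgent: scalable cloud resources

print(choose_tier(4_096, 50.0))           # -> fog  (short congestion query)
print(choose_tier(50_000_000, 60_000.0))  # -> cloud (months of historical data)
```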
Key Points:
The sensors attached to the different parts of a vehicle, such as the battery and fuel
pump, transmit the data to the cloud for analyzing the requirements for the
maintenance of those parts.
The evolution of IoT enables a user to lock, unlock, and locate their car, even from a
remote location.
The evolution of IoT resulted in the development of a connected vehicular environment. The
typical advantages of IoT architectures directly impact the domain of connected vehicular
systems. Therefore, the advantages of IoT are inherently included in vehicular IoT
environments. A few selected advantages of vehicular IoT are depicted in Figure 5.3.
(i) Easy tracking: The tracking of vehicles is an essential part of vehicular IoT. The system
must know from which location and from which vehicle it is receiving information. In a
vehicular IoT system, the tracking of vehicles is straightforward; the system can collect this
information even at a remote location.
(ii) Fast decision making: Most of the decisions in the connected vehicle environment are
time-critical. Therefore, for such applications, fast and active decision making is
pertinent for avoiding accidents. In the vehicular IoT environment, cloud and fog
computing help to make fast decisions with the data received from the sensor-based
devices.
(iii) Connected vehicles: A vehicular IoT system provides an opportunity to remain connected
and share information among different vehicles.
(iv) Easy management: Since vehicular IoT systems consist of different types of sensors, a
communication unit, processing devices, and GPS, the management of the vehicle
becomes easy. The connectivity among different components in a vehicular IoT enables
systems to track every activity in and around the vehicle. Further, the IoT infrastructure
helps in managing the huge number of users located at different geographical coordinates.
(v) Safety: Safety is one of the most important advantages of a vehicular IoT system. With
easy management of the system, both the internal and external sensors placed at different
locations play an important role in providing safety to the vehicle, its occupants, as well
as the people around it.
(vi) Record: Storing different data related to the transportation system is an essential
component of a vehicular IoT. The record may be of any form, such as video footage, still
images, and documentation. By taking advantage of the cloud and fog computing architecture,
vehicular IoT systems keep all the required records in their databases.
In this section, we discuss a case study on smart safety in a vehicular IoT infrastructure. The
system highlights a fog framework for intelligent public safety in vehicular environments
(fog-FISVER). The primary aim of this system is to ensure smart
transportation safety (STS) in public bus services. The system works through the following
three steps:
(i) The vehicle is equipped with a smart surveillance system, which is capable of
executing video processing and detecting criminal activity in real time.
(ii) A fog computing architecture works as the mediator between a vehicle and a
police vehicle.
(iii) A mobile application is used to report the crime to a nearby police agent.
Architecture
The architecture of fog-FISVER consists of different IoT components. The developers
utilized the advantages of the low-latency fog computing architecture for designing their
system. Fog-FISVER is based on a three-tiered architecture, as shown in Figure 5.4. The
tiers are as follows:
(i) Tier 1—In-vehicle FISVER STS Fog: In this system component, a fog node is placed
for detecting criminal activities. This tier accumulates the sensed data from within the
vehicle and processes it to detect possible criminal activities inside the vehicle. Further, this
tier is responsible for creating crime-level metadata and transferring the required
information to the next tier. For performing all these activities, Tier 1 consists of two
subsystems: the image processor and the event dispatcher.
Image Processor: The image processor inside Tier 1 is a potent component, which has a
capability similar to the human eye for detecting criminal activities. Developers of the system
used a deep-learning-based approach for enabling image processing techniques in the
processor. To implement the fog computing architecture in the vehicle, a Raspberry-Pi-3
processor board is used, which is equipped with a high-quality camera. Further, this
architecture uses template matching and correlation to detect the presence of dangerous
articles (such as a pistol or a knife) in the sub-image of a video frame.
Typically, the image processor stores a set of crime object templates in the fog-FISVER STS
fog infrastructure, which is present in Tier 2 of the system. The image processor is divided
into the following three parts:
(a) Crime definition downloader: This component periodically checks for the presence of
new crime object template definitions in fog-FISVER STS fog infrastructure. If a new crime
object template is available, it is stored locally.
(b) Crime definition storage: In order to use template matching, the crime object template
definition is required to be stored in the system. The crime definition storage is used to store
all the possible crime object template definitions.
(c) Algorithm launcher: This component initiates the instances of the registered algorithm
in order to match the template with the video captured by the camera attached in the vehicles.
If a crime object is matched with the video, criminal activity is confirmed.
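As a rough sketch of the template-matching step, the snippet below uses OpenCV's normalized cross-correlation. The file names and detection threshold are hypothetical, and the actual fog-FISVER pipeline combines such matching with deep-learning-based processing.

```python
# Hypothetical template-matching pass over one video frame (OpenCV).
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)              # captured video frame
template = cv2.imread("knife_template.png", cv2.IMREAD_GRAYSCALE)  # stored crime object template

# Normalized cross-correlation of the template against the frame.
result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

THRESHOLD = 0.8  # illustrative confidence threshold
if max_val >= THRESHOLD:
    print(f"possible crime object at {max_loc}, score {max_val:.2f}")
```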
Event dispatcher: This is another key component of Tier 1. The event dispatcher is
responsible for accumulating the data sensed from vehicles and the image processor. After
the successful detection of criminal activity, the information is sent to the fog-FISVER STS
fog infrastructure. The components of the event dispatcher are as follows:
(a) Event notifier: It transfers the data to the fog-FISVER STS fog infrastructure, after
receiving it from the attached sensor nodes in the vehicle.
(b) Data gatherer: This is an intermediate component between the event notifier and the
physical sensor; it helps to gather sensed data.
(c) Virtual sensor interface: Multiple sensors that sense data from different locations of the
vehicle are present in the system. The virtual sensor interface helps to maintain a particular
procedure to gather data. This component also cooperates to register the sensors in the system.
(ii) Tier 2—FISVER STS Fog Infrastructure: Tier 2 works on top of the fog architecture.
Primarily, this tier has three responsibilities: keeping the crime object template definitions
updated, classifying events, and finding the most suitable police vehicle to notify about an
event. The FISVER STS fog infrastructure is divided into two sub-components:
Target Object Training: Practically, there are different types of crime objects. The system
needs to be kept up to date regarding all crime objects. This subcomponent of Tier 2 is
responsible for creating, updating, and storing the crime object definitions. The algorithm
launcher in Tier 1 uses these definitions for the template matching process. The template
definition includes different features of the crime object, such as color gradient and shape
format.
A new object definition is stored in the definition database. The database needs to be
updated whenever new template definitions become available.
(iii) Tier 3: This tier consists of mobile applications that are executed on the users’ devices.
The application helps a user who witnesses a crime to notify the police.
Healthcare IoT
The Internet of Things (IoT) has resulted in the development and emergence of a variety of
technologies that have had a huge impact on the medical field, especially wearable healthcare.
The salient features of IoT encourage researchers and industries to develop new IoT-based
technologies for healthcare. These technologies have given rise to small, power-efficient
health monitoring and diagnostic systems. Consequently, the development of numerous
healthcare technologies and systems has rapidly increased over the last few years. Currently,
various IoT-enabled healthcare devices are in wide use around the globe for diagnosing
human diseases, monitoring human health conditions, caring/monitoring for elders, children,
and even infants. Moreover, IoT-based healthcare systems and services help to increase the
quality of life for common human beings; in fact, they have a promising scope of revolutionizing
healthcare in developing nations.
IoT-based healthcare devices provide access to and knowledge about human physiological
conditions through handheld devices. With this development, users can be aware of the risks
of acquiring various diseases and take necessary precautions to avoid preventable diseases.
The basic skeleton of an IoT-based healthcare system is very similar to the conventional IoT
architectures. However, for IoT-based healthcare services, the sensors are specifically
designed to measure and quantify different physiological conditions of its users/patients. A
typical architecture for healthcare IoT is shown in Figure 5.5. We divide the architecture into
four layers.
The detailed description of these layers is as follows:
Layer 1: Layer 1 contains different physiological sensors that are placed on the human body.
These sensors collect the values of various physiological parameters. The physiological data
are analyzed to extract meaningful information.
Layer 2: Layer 1 delivers data to Layer 2 for short-term storage and low-level processing.
The devices that belong to Layer 2 are commonly known as local processing units (LPU) or
centralized hubs. These units collect the sensed data from the physiological sensors attached
to the body and process it based on the architecture’s requirement. Further, LPUs or the
centralized hubs forward the data to Layer 3.
Layer 3: This layer receives the data from Layer 2 and performs application-specific high-
level analytics. Typically, this layer consists of cloud architecture or high-end servers. The
data from multiple patients, which may be from the same or different locations, are
accumulated in this layer. Post analysis of data, some inferences or results are provided to the
application in Layer 4.
Layer 4: The end-users directly interact with Layer 4 through receiver-side applications. The
modes of accessibility of these services by an end user are typically through cellphones,
computers, and tablets.
Sensors: Layer 1 consists of physiological sensors that collect the physiological parameters
of the patient. A few commonly used physiological sensors and their uses are depicted in Table
5.1.
Wireless Connectivity: Without proper connectivity and communication, the data sensed by
the physiological sensors are of no use in an IoT-based healthcare system. Typically, the
communication between the wearable sensors and the LPU is through either wired or wireless
connectivity. The wireless communication between the physiological sensors and LPU
occurs with the help of Bluetooth and ZigBee. On the other hand, the communication between
the LPU and the cloud or server takes place with Internet connectivity such as WiFi and
WLAN.
In Layer 4 of the healthcare IoT architecture, the healthcare data are received by the end users
with different devices such as laptops, desktops, and cellphones. These communication
protocols vary depending on the type of device in use. For example, when a service is received
by a cellphone, it uses GSM (global system for mobile communications). On the other hand,
if the same service is received on a desktop, it can be through Ethernet or Wi-Fi.
Communication and connectivity are thus essential components of healthcare IoT.
Privacy and Security: The privacy and security of health data is a major concern in
healthcare IoT services. In a healthcare IoT architecture, several devices connect with the
external world. Moreover, between LPU and the server/cloud, different networking devices
work via network hops (from one networked device to another) to transmit the data. If any of
these devices are compromised, it may result in the theft of health data of a patient, leading
to serious security breaches and ensuing lawsuits. In order to increase the security of the
healthcare data, different healthcare service providers and organizations are implementing
healthcare data encryption and protection schemes.
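As a simple illustration of one such protection scheme, the sketch below encrypts a health record with symmetric encryption using the Python cryptography library's Fernet recipe. This is an assumed example, not the scheme of any particular provider; real deployments also need key management and transport security.

```python
# Illustrative symmetric encryption of a health record (cryptography library, Fernet).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, the key is provisioned and managed securely
cipher = Fernet(key)

record = b'{"patient": "anon-7f3a", "hr": 82, "spo2": 97}'
token = cipher.encrypt(record)  # ciphertext travels across the network hops
plain = cipher.decrypt(token)   # only key holders can recover the data
assert plain == record
```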
Analytics: For converting the raw data into information, analytics plays an important role in
healthcare IoT. Several actors, such as doctors, nurses, and patients, access the healthcare
information in a different customized format. This customization allows each actor in the
system to access only the information pertinent to their job/role. In such a scenario, analytics
plays a vital role in providing different actors in the system access to meaningful information
extracted from the raw healthcare data. Analytics is also used for diagnosing a disease from
the raw physiological data available.
Cloud and Fog Computing: In a healthcare IoT system, several physiological sensors are
attached to a patient’s body. These sensors continuously produce a huge amount of
heterogeneous data. For storing these huge amounts of heterogeneous health data, efficient
storage space is essential. These data are used for checking the patient’s history and current
health status, and for diagnosing different diseases from the patient’s symptoms.
Typically, the cloud storage space is scalable, where payment is made as per the usage of
space. Consequently, to store health data in a healthcare IoT system, cloud storage space is
used. Analytics on the stored data in cloud storage space is used for drawing various
inferences. The major challenges in storage are security and delay in accessing the data.
Therefore, cloud and fog computing play a pivotal role in the storage of these massive
volumes of heterogeneous data.
Interface: The interface is the most important component for users in a healthcare IoT
system. Among IoT applications, healthcare IoT is a very crucial and sensitive application.
Thus, the user interface must be designed in such a way that it can depict all the required
information clearly and, if necessary, reformat or represent it such that it is easy to
understand. Moreover, an interface must also contain all the useful information related to the
services.
Key points:
As healthcare data is private, a popular US legislation, the Health Insurance Portability
and Accountability Act (HIPAA), protects it through data privacy and security provisions.
Drones are used to deliver medicines in disaster rescue and management scenarios.
IoT has already started to penetrate the domain of medical science. In healthcare, IoT has
become significantly popular due to its various features, which have been covered previously
in this book. Healthcare IoT helps in managing different healthcare subsystems efficiently.
Although it has many advantages, healthcare IoT has some risks too, which may be crucial
in real-life applications. In this section, we discuss the different advantages and risks of
healthcare IoT as depicted in Figure 5.7.
Advantages of healthcare IoT The major advantages of healthcare IoT can be listed as
follows:
Real-time: In healthcare sectors, different components, such as the condition of the patients,
availability of doctors and beds in a hospital, medical facilities with their monetary charges,
can vary dynamically with time. In such a dynamic scenario, one of the important
characteristics of an IoT-based healthcare system is real-timeliness.
A healthcare IoT system enables users, such as doctors, end users at the patient-side, and staff
in a healthcare unit, to receive real-time updates about the healthcare IoT components, as
mentioned earlier. Moreover, a healthcare IoT system can enable a doctor to observe a
patient’s health condition in real-time even from a remote location, and can suggest the type
of care to be provided to the patient.
On the other hand, users at the patient-end can easily take different decisions, such as where
to take a patient during critical situations. Moreover, the staff in a healthcare unit are better
aware of the current situation of their unit, which includes the number of patients admitted,
availability of the doctors and bed, total revenue of the unit, and other such information.
Low cost: Healthcare IoT systems facilitate users with different services at low cost. For
example, an authorized user can easily find the availability of the beds in a hospital with
simple Internet connectivity and a web-browser-based portal. The user need not visit the
hospital physically to check the availability of beds and facilities. Moreover, multiple
registered users can retrieve the same information simultaneously.
Easy management: Healthcare IoT is an infrastructure that brings all its end users under the
same umbrella to provide healthcare services. On the other hand, in such an infrastructure,
the management of numerous tangible and intangible entities (such as users, medical devices,
facilities, costs, and security) is a challenging task. However, healthcare IoT facilitates easy
and robust management of all the entities.
Automatic processing: A healthcare unit consists of multiple subsystems, for which manual
interventions are required. For example, to register a patient with a hospital, the user may be
required to enter his/her details manually. However, automatic processing features can
remove such manual intervention with a fingerprint sensor/device. Healthcare IoT enables
end-to-end automatic processing in different units and also consolidates the information
across the whole chain: from a patient’s registration to discharge.
Easy record-keeping: The healthcare IoT system includes a huge number of patients,
doctors, and other staff. Different patients suffer from different types of diseases. A particular
disease requires particular treatment, which requires knowledge of a patient’s health history,
along with other details about them. Therefore, the timely delivery of health data of the
patient to the doctor is important. In such a situation, the permanent storage of the patients’
health data along with their respective details is essential.
Similarly, for the smooth execution of the healthcare unit, details of the staff with their daily
activity in a healthcare unit are also required for storage. A healthcare unit must also track its
condition and financial transactions for further development of the unit. A healthcare IoT
enables the user to keep these records in a safe environment and deliver them to the
authorized user as per requirement. Moreover, these recorded data are accessible from any
part of the globe.
Easy diagnosis: The healthcare IoT system stores the data of the patient in a secure manner.
Sometimes, for diagnosing a disease, a huge chunk of prior data is required. In a healthcare
IoT system, the diagnosis of the disease becomes easier with the help of certain learning
mechanisms along with the availability of prior datasets.
In a healthcare IoT system, there are multiple risks as well. Here, we discuss the various risks
associated with a healthcare IoT system.
Security: A healthcare IoT system contains the health data of different patients associated
with the system. The healthcare system must keep the data confidential. This data should not
be accessible to any unauthorized person. On the other hand, different persons and devices
are associated with a healthcare IoT system. In such a system, the risk of data tampering and
unauthorized access is quite high.
Error: Data analytics helps a healthcare IoT system to predict the patients’ condition and
diagnosis of diseases. A huge amount of data needs to be fed into the system in order to
perform accurate analytics. Moreover, the management of a huge amount of data is a crucial
task in any IoT-based system. Particularly, in the healthcare system, errors in data may lead
to misinterpretation of symptoms and lead to the wrong diagnosis of the patient. It is a
challenging task to construct an error-free healthcare IoT architecture.
To overcome these shortcomings, the Smart Wireless Applications and Networking (SWAN)
laboratory at the Indian Institute of Technology Kharagpur developed a system: AmbuSens.
The system was primarily funded by the Ministry of Human Resource Development
(MHRD) of the Government of India. AmbuSens is a crucial example of a healthcare IoT
system; its salient features are as follows:
• Digitization and standardization of the healthcare data, which can be easily accessed by
the registered hospital authorities.
• Real-time monitoring of the patients who are in transit from one hospital to another. At
both hospitals, doctors can access the patients’ health conditions.
• Accessibility by which multiple doctors can access the patient’s health data at the same
time.
• In the AmbuSens system, wireless physiological sensor nodes are used. These sensor
nodes make the system flexible and easy to use.
Architecture
The AmbuSens system is equipped with different physiological sensors along with a local
hub. These sensors sense the physiological parameters from the patient’s body and transmit
those to a local data processing unit (LDPU). The physiological sensors and LDPU form a
wireless body area network (WBAN). Further, this local hub forwards the physiological data
to the cloud for storing and analyzing the health parameters. Finally, the data are accessed by
different users. The detailed layered architecture of the AmbuSens system is depicted in
Figure 5.8.
Layer 1: This layer consists of multiple WBANs attached to a patient’s body. These WBANs
acquire the physiological data from the patient and transmit them to the upper layer. The
physiological sensors are heterogeneous, that is, each of these sensors senses different
parameters of the body. Moreover, the physiological sensors require calibration for acquiring
the correct data from a patient’s body. Layer 1 takes care of the calibration of the
physiological sensor nodes. Further, in order to deliver the patient’s physiological data from
the sensor node to the LDPU, it is essential to form a proper WBAN. The formation of WBAN
takes place by connecting multiple physiological sensor nodes to the LDPU so that the
sensors can transmit the data to the LDPU, simultaneously.
Layer 2: In the AmbuSens system, cloud computing has an important role. Layer 2 is
responsible for handling the cloud-related functions. From Layer 1, WBANs attached to the
different patients deliver data to the cloud end. The cloud is used for the long-term analysis
and storage of data in the AmbuSens system. Moreover, the previous health records of the
patients are stored in the cloud in order to perform patient-specific analysis. A huge volume
of health data is produced by the WBANs, which are handled by the cloud with the help of
big data analytics for providing real-time analysis.
Layer 3: In the AmbuSens system, the identity of the patients remains anonymous. An
algorithm is designed to generate a dynamic hash value for each patient in order to keep the
patient’s identity anonymous. Moreover, in the AmbuSens system, at different time instants,
a new hash value is generated for the patients. The entire hashing mechanism of the
AmbuSens is performed in this layer.
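The exact hashing algorithm of AmbuSens is not described here, but the idea of a time-varying pseudonym can be sketched as follows, assuming SHA-256 over the patient ID, a timestamp, and a fresh salt; the construction is illustrative only.

```python
# Illustrative dynamic pseudonym: a new salt and timestamp give a new hash each call.
import hashlib
import secrets
import time

def dynamic_hash(patient_id: str) -> str:
    salt = secrets.token_hex(8)
    payload = f"{patient_id}|{time.time_ns()}|{salt}".encode()
    return hashlib.sha256(payload).hexdigest()

print(dynamic_hash("patient-042"))
print(dynamic_hash("patient-042"))  # differs from the previous call
```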
Layer 4: The users simply register into the system and use it as per requirement.
Hardware
In the AmbuSens system, a variety of hardware components are used such as sensors,
communication units, and other computing devices.
Sensors: The sensors used in the AmbuSens system are non-invasive. The sensors used for
forming the WBAN in the AmbuSens system are described as follows:
Optical Pulse Sensing Probe: It senses the photoplethysmogram (PPG) signal and transmits
it to a GSR expansion module. Typically, PPG signals are sensed from the ear lobe, fingers,
or other locations on the human body. Further, the GSR expansion module transfers the sensed
data to a device in real time.
Electrocardiogram (ECG) unit and sensor: The ECG module used in AmbuSens is in the form
of a kit, which contains ECG electrodes, biophysical 9” leads, biophysical 18” leads, alcohol
swabs, and wrist strap. Typically, the ECG sensor measures the pathway of electrical
impulses through the heart to sense the heart’s responses to physical exertion and other factors
affecting cardiac health.
Electromyogram (EMG) sensor: This sensor is used to analyze and measure the biomechanics
of the human body. Particularly, the EMG sensor is used to measure the electrical
activity related to muscle contractions; it also assesses nerve conduction and muscle response
in injured tissue.
Temperature sensor: The body temperature of patients changes with the condition of the
body. Therefore, a temperature sensor is included in the AmbuSens system, which can easily
be placed on the body of the patient.
Galvanic Skin Response (GSR) sensor: The GSR sensor is used for measuring the change in
electrical characteristics of the skin.
Local Data Processing Unit (LDPU): In AmbuSens, all the sensors attached to the human
body sense and transmit the sensed data to a centralized device, which is called an LDPU. An
LDPU is a small processing board with limited computation capabilities. The connectivity
between the sensors and the LDPU follows a single-hop star topology. The LDPU is
programmed in such a way that it can receive the physiological data from multiple sensor
nodes, simultaneously. Further, it transmits the data to the cloud for long-term storage and
heavy processing.
Front End
The front end delivers customized views to different actors. For example, a doctor can access
a patient’s detailed physiological data; these data may not be required by a nurse, and
therefore a nurse is unable to access the same set of data a doctor can access. The system
provides the flexibility to a patient to log in to his/her account and download the details of
his/her previous medical/treatment history.
Therefore, in AmbuSens, the database is designed in an efficient way such that it can deliver
the customized data to the respective actor. Each of the users has to register with the system
to avail themselves of the services of AmbuSens.
Therefore, in this system, the registration process is also designed in a customized fashion;
that is, the details to be entered into the registration form are different for different
actors. For example, a doctor must enter his/her registration number in the registration form.
The sensors collect data from the environment and serve different IoT-based applications.
The raw data from a sensor require processing to draw inferences. An IoT based system
generates data with complex structures; therefore, conventional data processing on these data
is not sufficient. Sophisticated data analytics are necessary to identify hidden patterns. In this
chapter, we discuss a few traditional data analytics tools that are popular in the context of IoT
applications. These tools include k-means, decision tree (DT), random forest (RF), k-nearest
neighbor (KNN), and density-based spatial clustering of applications with noise (DBSCAN)
algorithms.
The term “machine learning” was coined by Arthur Lee Samuel, in 1959. He defined machine
learning as a “field of study that gives computers the ability to learn without being explicitly
programmed”. ML is a powerful tool that allows a computer to learn from past experiences
and its mistakes and improve itself without user intervention. Typically, researchers envision
IoT-based systems to be autonomous and self-adaptive, which enhances services and
user experience. To this end, different ML models play a
crucial role in designing intelligent systems in IoT by leveraging the massive amount of
generated data and increasing the accuracy in their operations. The main components of ML
are statistics, mathematics, and computer science, which are used for drawing inferences,
constructing ML models, and implementation, respectively.
5.4.2 Advantages of ML
Self-learner: An ML-empowered system is capable of learning from its prior and run-time
experiences, which helps in improving its performance continuously. For example, an ML-
assisted weather monitoring system predicts the weather report of the next seven days with
high accuracy from data collected in the last six months. The system offers even better
accuracy when it analyzes weather data that extends back to three more months.
ML is beneficial in predicting the weather with less delay and higher accuracy as compared
to humans.
Self-guided: An ML tool uses a huge amount of data for producing its results. These tools
have the capability of analyzing the huge amount of data for identifying trends autonomously.
As an example, when we search for a particular item on an online e-commerce website, an
ML tool analyzes our search trends. As a result, it shows a range of products similar to the
original item that we searched for initially.
Minimum Human Interaction Required: In an ML algorithm, the human does not need to
participate in every step of its execution. The ML algorithm trains itself automatically, based
on available data inputs. For instance, let us consider a healthcare system that predicts
diseases. In traditional systems, humans need to determine the disease by analyzing different
symptoms using standard “if– else” observations. However, the ML algorithm determines the
same disease, based on the health data available in the system and matching the same with
the symptoms of the patient.
Diverse Data Handling: Typically, IoT systems consist of different sensors and produce
diverse and multi-dimensional data, which are easily analyzed by ML algorithms. For
example, consider the profit of an industry in a financial year. Profits in such industries
depend on the attendance of laborers, consumption of raw materials, and performance of
heavy machinery. The attendance of laborers is associated with an RFID (radio frequency
identification)-based system. On the other hand, industrial sensors help in the detection of
machinery failures, and a scanner helps in tracking the consumption of raw materials. ML
algorithms use these diverse and multi-dimensional data to determine the profit of the
industry in the financial year.
5.4.3 Challenges in ML
An ML algorithm utilizes a model and its corresponding input data to produce an output. A
few major challenges in ML are listed as follows:
Data Description: The data acquired from different sensors are required to be informative
and meaningful. Description of data is a challenging part of ML.
Amount of Data: In order to provide an accurate output, a model must have a sufficient
amount of data. The availability of a huge amount of data is a challenge in ML.
Erroneous Data: A dataset may contain noisy or erroneous data. On the other hand, the
learning of a model is heavily dependent on the quality of data. Since erroneous data misleads
the ML model, its identification is crucial.
Selection of Model: Multiple models may be suitable for serving a particular purpose.
However, one model may perform better than others. In such cases, the proper selection of
the model is pertinent for ML.
Quality of Model: After the selection of a model, it is difficult to determine the quality of
the selected model. However, the quality of the model is essential in an ML-based system.
5.4.4 Types of ML
ML approaches are categorized by whether they use labeled or unlabeled data. As the name suggests, labeled
data contain certain meaningful tags, known as labels. Typically, the labels correspond to the
characteristics or properties of the objects. For example, in a dataset containing the images
of two birds, a particular sample is tagged as a crow or a pigeon. On the other hand, an
unlabeled dataset does not have any tags associated with it; for example, a dataset may
contain the images of birds without mentioning their names.
Supervised Learning: This type of learning supervises or directs a machine to learn certain
activities using labeled datasets. The labeled data are used as a supervisor to make the
machine understand the relation of the labels with the properties of the corresponding input
data. Consider an example of a student who tries to learn to solve equations using a set of
labeled formulas. The labels indicate the formulae necessary for solving an equation. The
student learns to solve the equation using suitable formulae from the set. In the case of a new
equation, the student tries to identify the set of formulae necessary for solving it. Similarly,
ML algorithms train themselves for selecting efficient formulae for solving equations.
The selection of these formulae depends primarily on the nature of the equations to be solved.
Supervised ML algorithms are popular in solving classification and regression problems.
Typically, classification deals with predictive models that are capable of approximating a
mapping function from input data to categorical outputs. On the other hand, regression
provides a mapping function from input data to numerical outputs. There are different
classification algorithms in ML. However, in this chapter, we discuss three popular
classification algorithms: (i) k-nearest neighbor (KNN), (ii) decision tree (DT), and
(iii) random forest (RF).
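As a concrete supervised-learning example, the sketch below trains a KNN classifier with scikit-learn on the classic Iris dataset; the dataset, split, and k = 3 are illustrative choices.

```python
# Illustrative KNN classification with scikit-learn on the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=3)  # each sample votes with its 3 nearest neighbours
knn.fit(X_tr, y_tr)
print("test accuracy:", knn.score(X_te, y_te))
```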
We use regression to estimate the relationship between a set of dependent variables and
independent variables, as shown in Figure 5.11. The dependent variables are the primary
factors that we want to predict; they are affected by the independent variables. Let x and y
be the independent and dependent variables, respectively.
Mathematically, a simple regression model is represented as:
y = β0x0 + βx + ε

where β represents the amount of impact of variable x on y, and ε denotes an error term. Here, x0 is a constant input (taken as 1), so β0 contributes a fixed offset to y; this indicates that y need not be zero even when x is zero. Similarly, for multiple variables, say n, the regression model is represented as:

y = Σ (i = 0 to n) βi xi + ε
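A worked example of the model above: fitting β0 and β1 by least squares with NumPy on made-up data (x0 is the constant column of ones).

```python
# Least-squares fit of y = b0*x0 + b1*x on toy data (x0 = 1 for every sample).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.2, 5.9, 8.1, 9.8])

A = np.column_stack([np.ones_like(x), x])     # columns: x0, x
beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # [b0, b1]
print("beta0, beta1 =", beta)
print("prediction at x = 6:", beta @ [1.0, 6.0])
```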
Unsupervised Learning: Unsupervised learning algorithms use unlabeled datasets to find hidden
trends. Let us consider an example of the student similar to that described in the case of supervised
learning, and illustrate how it differs in case of unsupervised learning. As already mentioned,
unsupervised learning does not use any labels in its operations. Instead, the ML algorithms in this
category try to identify the nature and properties of the input equation and the nature of the formulae
responsible for solving it. Unsupervised learning algorithms try to create different clusters based on the
features of the formulae and relate it with the input equations. Unsupervised learning is usually applied
to solve two types of problems: clustering and association. Clustering divides the data into multiple
groups. In contrast, association discovers the relationship or association among the data in a dataset.
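As a clustering illustration, the sketch below groups fabricated two-feature readings into two clusters with scikit-learn's k-means; the data and cluster count are made up for demonstration.

```python
# Illustrative k-means clustering of unlabeled two-feature samples.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]], dtype=float)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("labels:", km.labels_)            # cluster assignment per sample
print("centres:", km.cluster_centers_)  # learned cluster centres
```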
Semi-Supervised Learning: Semi-supervised learning belongs to a category between supervised and
unsupervised learning. Algorithms under this category use a combination of both labeled and unlabeled
datasets for training. Labeled data are typically expensive and are relatively difficult to label correctly.
Unlabeled data is less expensive than labeled data. Therefore, semi-supervised learning includes both
labeled and unlabeled datasets to design the learning model. Traditionally, semi-supervised learning uses
mostly unlabeled data, which makes it cost-efficient and capable of handling samples with missing
labels.
Reinforcement Learning: Reinforcement learning establishes a pattern with the help of experiences
gathered by interacting with the environment. The agent performs a crucial role in reinforcement
learning models: it aims to achieve a particular goal in an uncertain environment. Typically, the model
starts with an initial state of a problem, for which different solutions are available. Based on the output,
the model receives either a reward or a penalty from the environment. The output and reward act as
inputs for proceeding to the next state. Thus, reinforcement learning models continue learning iteratively
from their experiences while improving the correctness of their outputs.
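A minimal sketch of this reward-driven loop is tabular Q-learning on a toy five-state corridor, where the agent is rewarded for reaching the goal state; the environment and hyperparameters are illustrative.

```python
# Toy tabular Q-learning: states 0..4 on a line, reward +1 for reaching state 4.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

for _ in range(500):                # episodes
    s = 0
    while s != 4:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s2 == 4 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # Q-learning update
        s = s2

print(Q.round(2))  # learned values favour moving right toward the goal
```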
Introduction To IoT Question Bank 2024-25
QUESTION BANK
MODULE 5

Sl. No.  Question                                Marks  CO   Level
8        Why do we use ML?                       4      CO4  L2
9        What are the major challenges in ML?    4      CO4  L2
10       What are the types of ML?               5      CO4  L2