Unit 1

The document provides an overview of business processes in IoT, emphasizing their complexity and reliance on real-world interactions for efficiency and decision-making. It discusses the integration of IoT with enterprise systems, highlighting the shift towards cloud computing and various service models like SaaS, PaaS, and IaaS. Additionally, it covers analytics in M2M and IoT, focusing on the importance of big data technologies for managing and analyzing vast amounts of data generated by connected devices.


Chapter 1 [Part 3] IoT: An Overview - M2M and IoT Technology Fundamentals

Q. Explain Business Process in IoT.


 Business process refers to a series of activities, often a collection of interrelated processes in a logical
sequence, within an enterprise, leading to
a specific result.
 There are several types of business
processes such as management,
operational, and supporting, all of which
aim at achieving a specific mission
objective.
 As business processes usually span
several systems and may get very complex,
several methods and techniques have
been developed for their modeling, such as
the Business Process Model and Notation
(BPMN), which graphically represents
business processes in a business process
model.
 Managers and business analysts model an
enterprise’s processes in an effort to depict
the real way an enterprise operates and
subsequently to improve efficiency and
quality.
 Several key business processes in modern enterprise systems heavily rely on interaction with real-world
processes, largely for monitoring, but also for some control (management), in order to take business-
critical decisions and optimize actions across the enterprise.
 The introduction of modern ICT has significantly changed the way enterprises (and therefore business
processes) interact with the real world.
 As depicted in Figure, we have witnessed a paradigm change with a dramatic reduction in the cost
and effort of data acquisition from the real world; this is attributed mostly to the automation offered
by machines embedded in the real world.
 Initially all these interactions were human-based (e.g. via a keyboard) or human-assisted (e.g. via a
barcode scanner); however, with the prevalence of RFID, WSNs, and advanced networked embedded
devices, all information exchange between the real-world and enterprise systems can be done
automatically without any human intervention and at blazing speeds.
Q. Explain IoT integration with
enterprise system. (Business Process
in IoT)
 M2M communication and the vision of
the IoT pose a new era where billions of
devices will need to interact with each
other and exchange information in order
to fulfill their purpose.
 Much of this communication is expected
to happen over Internet technologies and
tap into the extensive experience
acquired with architectures and
experiences in the Internet/Web over the
last several decades.
 More sophisticated, though still
overwhelmingly experimental, approaches
go beyond simple integration and target
more complex interactions where
collaboration of devices and systems is
taking place.
 As shown in Figure, cross-layer
interaction and cooperation can be pursued:
 At the M2M level, where the machines cooperate with each other (machine-focused interactions), as
well as at the machine-to-business (M2B) layer, where machines cooperate also with network-based
services, business systems (business service focus), and applications.
 As depicted in Figure, we can see several devices in the lowest layer.
 These can communicate with each other over short-range protocols (e.g. over ZigBee, Bluetooth), or
even longer distances (e.g. over Wi-Fi, etc.).
 Promising real-world integration is done using a service-oriented approach by interacting directly with the
respective physical elements, for example, via web services running on devices (if supported) or via more
lightweight approaches such as REST.
 Many of the services that will interact with the devices are expected to be available as network
services, for example, in the cloud.
 The main motivation for enterprise services is to take advantage of the cloud characteristics such as
virtualization, scalability, multi-tenancy, performance, lifecycle management, etc.
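As a rough illustration of the lightweight REST-style device interaction mentioned above, the sketch below parses a JSON reading such as a device endpoint might return. The field names, payload shape, and device id are hypothetical assumptions, not any real device API.

```python
import json

def parse_reading(payload: str) -> dict:
    """Parse a JSON reading as a device's REST endpoint might return it."""
    doc = json.loads(payload)
    # Normalize to a small canonical record for the enterprise service layer.
    return {"device": doc["id"], "temperature_c": float(doc["temp"])}

# Hypothetical payload; in practice this would be the body of an HTTP GET
# against the device's resource URL.
sample = '{"id": "sensor-42", "temp": "21.5"}'
reading = parse_reading(sample)
```

In a real deployment the payload would arrive over HTTP from the device or a gateway; the parsing and normalization step stays essentially the same.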
Q. Explain XaaS. (Everything as a Service).
 There is a general trend away from locally managing dedicated hardware toward cloud infrastructures that
drives down the overall cost for computational capacity and storage.
 This is commonly referred to as “cloud
computing.” Cloud computing is a model for
enabling ubiquitous, on-demand network
access to a shared pool of configurable
computing resources (e.g. networks, servers,
storage, applications, and services) that can
be provisioned, configured, and made
available with minimal management effort or
service provider interaction.
 Cloud computing, however, does not change
the fundamentals of software engineering.
 All applications need access to three things:
compute, storage, and data processing
capacities.
 With cloud computing, a fourth element is
added: distribution services, i.e. the manner
in which the data and computational
capacity are linked together and
coordinated.
 A cloud-computing platform may therefore be viewed conceptually as combining these four elements.
 Several essential characteristics of cloud computing have been defined by NIST as follows.
 On-Demand Self-Service
 A consumer can unilaterally provision computing capabilities, such as server time and network
storage, as needed, or automatically, without requiring human interaction with each service
provider.
 Broad Network Access
 Capabilities are available over the network and accessed through standard mechanisms that
promote use by heterogeneous thin or thick client platforms (e.g. mobile phones, tablets, laptops,
and workstations).
 Resource Pooling
 The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant
model, with different physical and virtual resources dynamically assigned and reassigned according
to consumer demand.
 Rapid Elasticity
 Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly
outward and inward commensurate with demand.
 To the consumer, the capabilities available for provisioning often appear to be unlimited, and can be
appropriated in any quantity at any time.
 Measured Service
 Cloud systems automatically control and optimize resource use by leveraging a metering capability,
at some level of abstraction, appropriate to the type of service (e.g. storage, processing, bandwidth,
and active user accounts).
 Resource usage can be monitored, controlled, and reported, providing transparency for both the
provider and consumer of the utilized service.
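The metering capability described above can be sketched in a few lines: record per-tenant resource consumption and report it transparently. The tenant names and the GB-hour unit here are illustrative assumptions.

```python
from collections import defaultdict

class Meter:
    """Toy metering facility: records per-tenant resource usage so that
    consumption can be monitored, controlled, and reported."""
    def __init__(self):
        self.usage = defaultdict(float)   # tenant -> accumulated GB-hours

    def record(self, tenant: str, gb_hours: float) -> None:
        self.usage[tenant] += gb_hours

    def report(self, tenant: str) -> float:
        return self.usage[tenant]

m = Meter()
m.record("acme", 2.5)
m.record("acme", 1.5)
```

A production metering service would add time windows, resource types (storage, processing, bandwidth), and durable storage, but the abstraction is the same.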
 Once such infrastructures are available, applications become easier to deploy.
 For M2M and IoT, these infrastructures provide the following:
 Storage of the massive amounts of data that sensors, tags, and other “things” will produce.
 Computational capacity in order to analyze data rapidly and cheaply.
 Over time, cloud infrastructure will allow enterprises and developers to share datasets, allowing for
rapid creation of information value chains.
Q. Explain XaaS different service models. (Everything as a Service).
 Cloud computing comes in several different service models and deployment options for enterprises wishing
to use it. The three main service models may be defined as
 Software as a Service (SaaS)
 Refers to software that is provided to consumers on demand, typically via a thin client. The end-users do
not manage the cloud infrastructure in any way.
 This is handled by an Application Service Provider (ASP) or Independent Software Vendor (ISV).
 Examples include office and messaging software, email, or CRM tools housed in the cloud. The end-user
has limited ability to change anything beyond user-specific application configuration settings.
 Platform as a Service (PaaS)
 Refers to cloud solutions that provide both a computing platform and a solution stack as a service via the
Internet.
 The customers themselves develop the necessary software using tools provided by the provider, who also
provides the networks, the storage, and the other distribution services required.
 Infrastructure as a Service (IaaS):
 In this model, the provider offers virtual machines and other resources such as hypervisors (e.g. Xen,
KVM) to customers.
 Pools of hypervisors support the virtual machines and allow users to scale resource usage up and down in
accordance with their computational requirements.
 Users install an OS image and application software on the cloud infrastructure.
 The provider manages the underlying cloud infrastructure, while the customer has control over OS,
storage, deployed applications, and possibly some networking components.
Q. Explain different deployment models of everything as a service (Xaas).
Q. Explain different deployment model of cloud or XaaS.
 Private Cloud
 The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple
consumers (e.g. business units).
 It may be owned, managed, and operated by the organization, a third party, or some combination of them,
and it may exist on or off premises.
 Community Cloud
 The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from
organizations that have shared concerns (e.g. mission, security requirements, policy, and compliance
considerations).
 It may be owned, managed, and operated by one or more of the organizations in the community, a third
party, or some combination of them, and it may exist on or off premises.
 Public Cloud
 The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed,
and operated by a business, academic, or government organization, or some combination thereof. It exists
on the premises of the cloud provider.
 Hybrid Cloud
 The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private,
community, or public) that remain unique entities, but are bound together by standardized or proprietary
technology that enables data and application portability.
Q. Explain analytics in M2M and IoT.
 Traditionally, M2M data has been sent from specific
devices to specific services, which store the data of
interest.
 This approach uses semantically well-defined data for
specific purposes, and only requires storing the data that
is needed for the explicit use cases, and only for as long
as it’s required.
 For the most part, the applications have been monitoring,
reporting, and rule-based actions.
 To further increase the speed of M2M deployments, it’s
important to look at methods to extract additional value
from these devices.
 Given the enormous amounts of data that will be
generated by the IoT and the advancements within the
area of Big Data, new opportunities arise from the possibility to reuse data from devices for multiple
purposes, many of which will not even be imagined at the time of deployment.
 The opportunities of using M2M data for advanced
analytics and business intelligence are very promising.
 By applying technologies from the Big Data domain, it is
possible to store more data, such as contextual and
situational information, and given a more open
approach to data, such as the open-data government
initiatives (e.g. Data.gov and Data.gov.uk), even more
understanding can be derived, which can be used to
improve everything from Demand/Response in a power
grid to wastewater treatment in a city.
 For M2M data, traditional data warehousing and
analytics will in many cases not be up to the task.
 Big Data technologies such as MapReduce for massively parallel analytics, as well as analytics on online
streaming data where the individual data item is not necessarily stored, will play an important role in the
management and analysis of large-scale M2M data.
 To handle the analytical needs related to M2M and IoT, it’s expected that in the near term, vendors of Big
Data solutions will provide for the needs of in-house analytics.
 Purposes and considerations
 Regardless of whether you call it statistics, data mining, or machine learning, there exist a multitude
of methods to extract different types of information from data.
 The information can be used in everything from static reports to interactive decision support
systems, or even fully automated real-time systems.
 Descriptive Analytics:
 Use of means, variances, maxima, minima, aggregates, and frequencies, optionally grouped by
selected characteristics.
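A minimal sketch of such descriptive statistics, using Python's standard statistics module over hypothetical temperature readings grouped by device location (the data is invented):

```python
from statistics import mean, pvariance

# Hypothetical hourly temperature readings grouped by location.
readings = {
    "warehouse": [18.2, 18.9, 19.1, 18.5],
    "office":    [21.0, 21.4, 20.8, 21.2],
}

# Mean, variance, minimum, and maximum per group.
summary = {
    loc: {"mean": round(mean(vals), 2),
          "variance": round(pvariance(vals), 3),
          "min": min(vals),
          "max": max(vals)}
    for loc, vals in readings.items()
}
```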
 Predictive Analytics:
 Use current and historical facts to predict what will happen next.
 For example, forecast demand and supply in a power grid, and train a model to predict how price
affects electricity usage in order to optimize performance and minimize peaks in consumption.
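As a toy version of such a predictive model, the sketch below fits an ordinary least-squares line to hypothetical price/usage pairs and extrapolates demand at a new price point; the numbers are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single feature: y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical data: electricity price (cents/kWh) vs. aggregate usage (MWh).
prices = [10, 12, 14, 16, 18]
usage  = [50, 46, 42, 38, 34]

slope, intercept = fit_line(prices, usage)
predicted = slope * 20 + intercept   # expected demand at 20 cents/kWh
```

A grid operator would of course use far richer models (seasonality, weather, historical load), but the train-then-predict pattern is the same.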
 Clustering:
 Identification of groups with similar characteristics. Perform customer segmentation or find
behavioral patterns in a large set of M2M devices.
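A minimal one-dimensional k-means sketch illustrating the idea of grouping devices by a single behavioural feature; the message counts below are invented, and real segmentation would use multi-dimensional features and a library implementation.

```python
def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means: cluster devices by one behavioural feature."""
    # Spread the initial centroids across the sorted value range.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest centroid.
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute centroids as cluster means.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Hypothetical daily message counts from two behavioural groups of devices.
counts = [4, 5, 6, 95, 100, 105]
centers = sorted(kmeans_1d(counts))
```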
 Anomaly Detection:
 Detect fraud for smart meters by checking for anomalous electricity consumption compared to
similar customers, or historic consumption for the subscriber.
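A simple z-score rule of the kind described, flagging a meter reading that deviates strongly from similar customers; the consumption figures and the 3-sigma threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def is_anomalous(value, peer_values, threshold=3.0):
    """Flag a reading far from its peer group (simple z-score rule)."""
    mu, sigma = mean(peer_values), stdev(peer_values)
    return sigma > 0 and abs(value - mu) / sigma > threshold

# Hypothetical monthly kWh consumption of similar customers.
peers = [310, 295, 305, 300, 290]
```

A reading of 60 kWh against this peer group would be flagged (possible fraud or a broken meter), while 305 kWh would not.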
 M2M data fulfills all the characteristics of Big Data, which is usually described by the four “Vs”.
 Volume:
 To be able to create good analytical models it’s no longer enough to analyze the data once and then
discard it. Creating a valid model often requires a longer period of historic data.
 This means that the amount of historic data for M2M devices is expected to grow rapidly.
 Velocity:
 Even though individual M2M devices usually report quite seldom, the sheer number of devices means that the
systems will have to handle a huge number of transactions per second.
 Also, the value of M2M data is often strongly related to how fresh it is; providing the best
actionable intelligence therefore puts real-time requirements on the analytical platform.
 Variation:
 Given the multitude of device types used in M2M, it’s apparent that the variation will be very high. This is
further complicated by the use of different data formats as well as different configurations for devices of
the same type (e.g. where one device measures temperature in Celsius every minute, another device
measures it in Fahrenheit every hour).
 The upside is that the data is expected to be semantically well-defined, which allows for simple
transformation rules.
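A sketch of such a transformation rule, normalizing hypothetical readings reported in different units onto one canonical schema (Celsius); the record format is an invented assumption.

```python
def normalize(reading: dict) -> dict:
    """Apply a simple transformation rule so heterogeneous device formats
    converge on one canonical schema (temperature in Celsius)."""
    value, unit = reading["value"], reading["unit"]
    if unit == "F":
        value = (value - 32) * 5 / 9   # Fahrenheit -> Celsius
    return {"device": reading["device"], "temp_c": round(value, 2)}

r1 = normalize({"device": "a", "value": 20.0, "unit": "C"})
r2 = normalize({"device": "b", "value": 212.0, "unit": "F"})
```

Because the data is semantically well-defined, rules like this can be written once per device type and applied automatically at ingestion time.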
 Veracity:
 It’s imperative that we can trust the data that is analyzed.
 There are many pitfalls along the way, such as erroneous timestamps, non-adherence to standards,
proprietary formats with missing semantics, wrongly calibrated sensors, as well as missing data.
 This requires rules that can handle these cases, as well as fault-tolerant algorithms that, for example, can
detect outliers (anomalies).
Q. Explain M2M and Analytics architecture.
 The architecture for analytics needs to take a few basic requirements into account.
 One of these is to serve as a platform for data exploration and modeling by data scientists and other
advanced information consumers
performing business analytics and
intelligence.
 As much time is spent on data
preparation before any analytics can
take place, this is also an integral
part of the architecture to facilitate.
 Finally, efficient means of building
and viewing reports, as well as
integrating with back-end systems
and business processes, is of
importance.
 These requirements concern batch
analytics, but should also be
considered for stream analytics.
 A sandbox for Big Data analytics
can be realized in a number of ways,
of which the Hadoop ecosystem is
probably the best known.
 Other alternatives include:
 Columnar databases such as HP Vertica, Actian ParAccel MPP, SAP Sybase IQ, and Infobright.
 Massively Parallel Processing (MPP) architectures such as Pivotal Greenplum and Teradata Aster.
 In-memory databases such as SAP Hana and QlikView.
 An analytical architecture should preferably also provide:
 Authentication and authorization to access data.
 Failover and redundancy features.
 Management facilities.
 Efficient batch loading of data and support for self-service.
 Scheduling of batch jobs, such as data import and model training.
 Connectors to import data from external sources.
 Although it’s not unusual for developers to use MapReduce directly, there exist a number of technologies
that provide further abstraction levels, such as:
 HBase: A column-oriented data store that provides real-time read/write access to very large tables
distributed over HDFS.
 Mahout: A distributed and scalable library of machine learning algorithms that can make use of
MapReduce.
 Pig: A tool for converting relational algebra scripts into MapReduce jobs that can read data from
HDFS and HBase.
 Hive: Similar to Pig, but offers an SQL-like scripting language called HiveQL instead.
 Impala: Offers low-latency queries using HiveQL for interactive exploratory analytics, as compared to
Hive, which is better suited for long running batch-oriented tasks.
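To make the MapReduce programming model concrete, the sketch below emulates a map phase and a reduce phase in plain single-process Python, averaging readings per meter. In a real Hadoop deployment these phases would run distributed over HDFS; the record values here are invented.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    """Emit (key, value) pairs, as a mapper would."""
    for device, reading in records:
        yield device, reading

def reduce_phase(pairs):
    """Group pairs by key and aggregate, as a reducer would."""
    out = {}
    ordered = sorted(pairs, key=itemgetter(0))   # shuffle/sort step
    for key, group in groupby(ordered, key=itemgetter(0)):
        vals = [v for _, v in group]
        out[key] = sum(vals) / len(vals)
    return out

records = [("m1", 10), ("m2", 30), ("m1", 20), ("m2", 50)]
averages = reduce_phase(map_phase(records))
```

Tools such as Pig and Hive generate jobs of exactly this shape from higher-level scripts, which is why they are convenient abstractions over raw MapReduce.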
Q. Explain M2M and Analytics Methodology.
 Knowledge discovery and analytics can be described as a project methodology, following certain steps in a
process model.
 To perform efficient analytics and find answers to important questions, it’s paramount to involve the right
people with the necessary business understanding at the beginning of a project.
 Business understanding
 The first phase in the process is to
understand the business objectives and
requirements, as well as success criteria.
 This forms the basis for formulating the
goals and plan for the data mining process.
 In these cases, it’s not unusual to bring in
the help of an analytics team to identify
potential business cases that can benefit
from the data.
 Data understanding
 The next phase consists of collecting data
and gaining an understanding of the data
properties, such as amount of data and
quality in terms of inconsistencies, missing
data, and measurement errors.
 The tasks in this phase also include gaining
some understanding of actionable insights
contained in the data, as well as to form
some basic hypotheses.
 Data preparation
 Before it’s possible to start modeling the
data to achieve our goals, it’s necessary to prepare the data in terms of selection, transformation, and
cleaning.
 In this phase, it’s frequently the case that new data is necessary to construct, both in terms of entirely
new attributes as well as imputing new data into records where data is missing.
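Imputing missing values, as mentioned above, can be as simple as filling gaps with the column mean. A minimal sketch over invented rows:

```python
from statistics import mean

def impute(rows, field):
    """Fill missing values (None) in one field with the column mean,
    a common data-preparation step before modeling."""
    present = [r[field] for r in rows if r[field] is not None]
    fill = mean(present)
    return [dict(r, **{field: r[field] if r[field] is not None else fill})
            for r in rows]

rows = [{"t": 10.0}, {"t": None}, {"t": 14.0}]
cleaned = impute(rows, "t")
```

More careful pipelines impute per group or per time window, but the idea of constructing new data to repair incomplete records is the same.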
 Data modeling
 At the modeling phase, it's finally time to use the data to gain an understanding of the actual
business problems that were stated in the beginning of the project.
 Various modeling techniques are usually applied and evaluated before selecting which ones are best
suited for the particular problem at hand.
 As some modeling techniques require data in a specific form, it’s quite common to go back to the data
preparation phase at this stage
 Data Evaluation
 After evaluating a number of models, it’s time to select a set of candidate models to be methodically
assessed.
 The assessment should estimate the effectiveness of the results in terms of accuracy, as well as ease of
use in terms of interpretation of the results.
 Deployment
 At this last phase in the project, the models are deployed and integrated into the organization.
 This can mean several things, such as writing a report to disseminate the results, or integrating the model
into an automated system.
 This part of the project involves the customer directly, who has to provide the resources needed for an
effective deployment.
 The deployment phase also includes planning for how to monitor the models and evaluate when they have
played out their role or need to be maintained.
Q. Explain knowledge management in M2M and IoT.
Q. Explain knowledge management reference architecture in M2M and IoT.
 Figure outlines a high-level knowledge management reference architecture that illustrates how data
sources from M2M and IoT may be combined with other types of data, for example, from databases or
even OSS/BSS data from MNOs.


 There are three levels to the diagram: (1) data sources, (2) data integration, and (3) knowledge discovery
and information access.
 Data sources
 Data sources refer to the broad variety of sources that may now be available to build enterprise solutions.
 Data integration
 The data integration layer allows data from different formats to be put together in a manner that can be
used by the information access and knowledge discovery tools.
 Staged Data
 Staged data is data that has been abstracted to manage the rate at which it is received by the
analysis platform. Essentially, staging ensures that the correct flow of data reaches the information
access and knowledge discovery tools at the correct time.
 Strong Type Data
 Strong type data refers to data that is stored in traditional database formats, i.e. it can be
extracted into tabular format and can be subjected to traditional database analysis techniques.
 Weak Type Data:
 Weak type data is data that is not well structured according to traditional database techniques.
Examples are streaming data or data from sensors.
 Often, this sort of data requires a different analysis technique compared to strong type data.
 Processed data
 Processed data is combined data from both strong and weak typed data that has been combined within an
IoT context to create maximum value for the enterprise in question.
 There are various means by which to do this processing, from stripping data separately and creating
relational tables from it, to pooling relevant data together in one combined database for structured queries.
 Retrieval layer
 Once data has been collated and processed, it is time to develop insights from the data via retrieval. This
can be of two main forms: Information Access and Knowledge Discovery.
 Information access tools
 Information access relates to more traditional access techniques involving the creation of
standardized reports from the collation of strong and weak typed data.
 Information access essentially involves displaying the data in a form that is easily understandable
and readable by end users.
 Knowledge discovery tools
 Knowledge Discovery, meanwhile, involves the more detailed use of ICT in order to create knowledge,
rather than just information, from the data in question.
 Knowledge Discovery means that decisions may be taken on such outputs; for example, where
actuators (rather than just sensors) are involved, Knowledge Discovery systems may be able to raise
an alert that a bridge or flood control system needs to be activated.
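The flood-control example amounts to a decision rule derived over sensor data. A deliberately simplified sketch, where the water-level threshold is an invented assumption standing in for a rule a Knowledge Discovery system would have learned:

```python
def should_activate_flood_control(water_level_cm: float,
                                  threshold_cm: float = 250.0) -> bool:
    """Toy decision rule: turn a sensor observation into an actionable
    alert that the flood control system may need activating."""
    return water_level_cm >= threshold_cm

alert = should_activate_flood_control(310.0)
```

In practice the rule would be backed by models over historic levels, rainfall forecasts, and upstream sensors, and the output would feed an actuator or an operator dashboard rather than a single boolean.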
