
Current and Future IoT Trends


Syllabus
This module will cover the following

• Current state of IoT landscape
• Machine learning
• Edge computing
• Platform Security Architecture
• Research topics

2 © 2020 Arm Limited


Current state of IoT landscape
Faster growth than previously anticipated

• Growth driven by expansion of the connected consumer market, fast penetration of voice-controlled personal assistants (e.g., Alexa), and an increase in mobile machine-to-machine connections
• Most businesses start with systems that serve a single application → rapid return on investment, but not maximizing the full potential of IoT
• More investment in medical IoT, industrial IoT, and intelligent transportation
• Evidence of security vulnerabilities continues to affect consumer trust and limit adoption

9.5bn Internet-connected devices at the end of 2019

3 © 2020 Arm Limited


More value created from data

• Data still largely used for process monitoring and optimization
• Additional value created from forecasts based on the IoT data collected, e.g., in predictive maintenance (automotive IoT)
• Feeding data into the operation of other systems, e.g., interconnecting distribution and manufacturing (value chain)
• Increasing appetite to use machine learning (ML) and artificial intelligence (AI) to extract hidden knowledge from the massive amounts of data collected by sensors
• Hardware is mature → new edge computing methods needed

Essential to build AI capabilities to increase the value of the data gathered

4 © 2020 Arm Limited


AI and ML
Teaching computers to solve problems and learn like humans

• The AI paradigm aims to develop computing capabilities that mimic human cognitive functions
• Machine learning (ML) is a subset of AI by which artificial processes learn from data and make decisions without having been explicitly programmed
• Supervised learning: relies on labeled training examples
• Unsupervised learning: discovers new patterns without human supervision
• Reinforcement learning: agents learn actions in an environment to maximize some reward
• Deep learning is a family of ML methods that seek to mimic biological nervous systems
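
A minimal sketch of the difference between supervised and unsupervised learning (assuming scikit-learn and NumPy are available; the toy data and labels below are purely illustrative):

# Contrast supervised and unsupervised learning on the same toy 2-D points.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])                   # labels available -> supervised

clf = LogisticRegression().fit(X, y)         # learns from labeled training examples
print(clf.predict([[0.15, 0.15], [0.85, 0.85]]))

km = KMeans(n_clusters=2, n_init=10).fit(X)  # no labels -> unsupervised
print(km.labels_)                            # grouping discovered from the data alone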

5 © 2020 Arm Limited


Deep learning
Neural network architecture resembles the perception process in the brain, with specific neurons activated depending on the input and leading to some inference

Goal: Approximate complex functions through simple operations performed by layers of “neurons” (or units)

Examples: Classification (assigning labels to different inputs), regression (computing future time series values based on historical data), and control (board game moves)

Operations: Weighted combinations of groups of hidden units with a non-linear activation function

Model weights learned by minimizing a loss function, through back-propagation of its gradient
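
As a minimal sketch of that training loop (a single linear unit with a squared loss; the numbers are illustrative):

# One gradient-descent step for a single unit y = w*x + b under squared loss.
x, target = 2.0, 5.0
w, b, lr = 0.5, 0.0, 0.1        # initial weight, bias, and learning rate

y = w * x + b                   # forward pass
loss = (y - target) ** 2        # loss function to be minimized
grad_y = 2 * (y - target)       # back-propagated gradient of the loss w.r.t. y
grad_w, grad_b = grad_y * x, grad_y

w -= lr * grad_w                # weight update moves the loss downhill
b -= lr * grad_b
print(w, b)                     # repeated over many examples, this is training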

6 © 2020 Arm Limited


Multi-layer Perceptron (MLP)
Densely connected layers

• Large number of weights


• Given an input x, a layer computes the output y = f(Wx + b), where W is a set of weights and b are biases; f is an activation function, e.g.,
• Sigmoid: f(z) = 1 / (1 + e^(-z))
• Rectified linear unit: f(z) = max(0, z)
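
A minimal NumPy sketch of one such layer (the shapes and random weights are illustrative):

# Forward pass of one densely connected layer: y = f(W x + b).
import numpy as np

def relu(z):
    return np.maximum(0.0, z)           # rectified linear unit

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))     # sigmoid activation

x = np.array([0.5, -1.0, 2.0])          # input vector (3 features)
W = np.random.randn(4, 3)               # 4 units x 3 inputs -> 12 weights
b = np.zeros(4)                         # one bias per unit

print(relu(W @ x + b))                  # layer output (4 activations)
print(sigmoid(W @ x + b))               # same layer with a sigmoid activation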

7 © 2020 Arm Limited


Convolutional Neural Network (CNN)
Powerful in (image) classification tasks

• Replace dense connections with filters (kernels) that share weights across small receptive fields
• Pooling layers reduce the feature dimensions (avg/max of all values)
• For each location (i, j), convolution computes y(i, j) = Σ over positions (m, n) of w(m, n) · x(i + m, j + n), where (m, n) are positions in the receptive field and w(m, n) are the filter weights
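
A minimal NumPy sketch of this convolution plus 2x2 max pooling (single channel, no padding, stride 1; shapes are illustrative):

# Single-channel 2-D convolution followed by 2x2 max pooling.
import numpy as np

def conv2d(x, w):
    kh, kw = w.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # weighted sum over the receptive field anchored at (i, j)
            out[i, j] = np.sum(w * x[i:i + kh, j:j + kw])
    return out

def max_pool_2x2(x):
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

img = np.random.rand(6, 6)                    # toy input "image"
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])  # shared 2x2 filter weights
fmap = conv2d(img, kernel)                    # 5x5 feature map
print(max_pool_2x2(fmap).shape)               # pooling reduces it to (2, 2)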

8 © 2020 Arm Limited


The rise of edge computing
Deploying advanced computing, storage, and applications on devices

• Goals: Reduce application latency, conserve bandwidth, offload cloud computation, and improve privacy

• Advances in hardware:
• High-performance and power-efficient processors, e.g., Arm Cortex-A55
• Embedded Graphics Processing Units (GPUs), e.g. Arm Mali-G77
• Micro Neural Processing Units (NPUs), e.g., Arm Ethos-U55
• Nano solid-state drives (gigabytes of storage per mm2)

• Software libraries:
• Neural network kernels optimized for constrained CPUs, e.g., the Cortex Microcontroller Software Interface Standard Neural Network library (CMSIS-NN)
• Lightweight ML inference frameworks, e.g., TensorFlow Lite Micro

9 © 2020 Arm Limited


Arm Cortex-A55

• Delivers the best combination of power efficiency and performance in its class
• Part of the first generation of application CPUs
based on DynamIQ technology (single cluster
design with a mix of up to eight big.LITTLE
processors)
• Armv8-A architecture extensions, with dedicated
machine learning instructions
• Cryptography extensions
• Target applications: Mobile augmented/virtual
reality (AR/VR), automotive
10 © 2020 Arm Limited
Ethos-U55 microNPU
32x ML performance boost

• Integrates fully with a single Cortex-M toolchain


• Allows acceleration of neural networks in an extremely small silicon area, with 90% lower power consumption
• Ideal for AI applications in cost-sensitive devices
• In combination with the Cortex-M55, delivers up to 480x higher performance on ML workloads compared to previous Cortex-M generations
• MAC engine handles 16-bit Multiply-and-
Accumulate instructions relevant to neural
networks (e.g., convolution); up to 512 GOP/s
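
As a rough sanity check on that throughput figure (assuming the largest configuration of 256 MACs per cycle, clocked at 1 GHz, and counting each multiply-accumulate as two operations): 256 MACs/cycle × 2 ops/MAC × 10^9 cycles/s = 512 GOP/s.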
11 © 2020 Arm Limited
CMSIS NN software library
Neural network kernels optimized for Cortex-M cores

• Utility functions can be used to construct more complex neural structures, e.g., Long Short-Term Memory (LSTM)
• Fixed-point quantization used to reduce memory footprint
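
A minimal NumPy sketch of the fixed-point quantization idea (a symmetric Q7-style scheme, not the CMSIS-NN implementation itself; the scaling is illustrative):

# Quantize float weights to int8 fixed point (Q7: 7 fractional bits).
import numpy as np

def quantize_q7(w, frac_bits=7):
    scale = 2 ** frac_bits                       # one LSB represents 1/scale
    q = np.round(w * scale).astype(np.int32)
    return np.clip(q, -128, 127).astype(np.int8) # 1 byte per weight instead of 4

def dequantize_q7(q, frac_bits=7):
    return q.astype(np.float32) / (2 ** frac_bits)

w = np.array([0.50, -0.25, 0.999, -1.0], dtype=np.float32)
q = quantize_q7(w)
print(q)                    # e.g. [  64  -32  127 -128]
print(dequantize_q7(q))     # close to the original weights, at 1/4 the footprint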

12 © 2020 Arm Limited


CMSIS NN software library
Convolution

• In NNs, a convolution layer extracts a feature map by computing a dot product between filter weights and a small receptive field in the input
• im2col transforms image-like input into the data columns required by each filter
• Partial im2col is used to expand only a limited number of columns (e.g., 2) to reduce memory footprint, while still ensuring a performance boost
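
A minimal NumPy sketch of the im2col idea (single channel, stride 1, no padding; a full implementation would also handle channels, strides, and the partial expansion described above):

# im2col: unroll each receptive field into a column so convolution becomes
# a single matrix multiply with the flattened filter weights.
import numpy as np

def im2col(x, kh, kw):
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    cols = np.empty((kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

x = np.arange(16, dtype=np.float32).reshape(4, 4)  # toy 4x4 input
w = np.ones((3, 3), dtype=np.float32)              # 3x3 filter weights
cols = im2col(x, 3, 3)                             # shape (9, 4): one column per output location
print((w.ravel() @ cols).reshape(2, 2))            # same result as direct convolution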

13 © 2020 Arm Limited


Tiny Machine Learning (TinyML)
Machine learning at the edge
• Running ML on low-power (mW range) embedded devices
• Requires only a few kilobytes of memory to run the ML model on microcontrollers
• Focus on inference
• Processing raw sensor data at the edge
• Useful in battery-powered IoT applications
• Applications: Recognizing speech commands, detecting vibration patterns, gesture recognition

14 © 2020 Arm Limited


TensorFlow Lite Micro
Machine learning at the edge
• Resulted from a merger of TensorFlow Lite and the uTensor project
• uTensor – One of the first open-source frameworks for converting ML models to self-contained C++ source files, enabling deployment on embedded devices
• TensorFlow Lite – Open-source deep learning framework for on-device inference
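
A minimal sketch of the usual workflow for producing a model that TensorFlow Lite Micro can run (assuming TensorFlow 2.x; the tiny untrained Keras model here is purely illustrative, and a real deployment would additionally convert the .tflite file into a C array for the microcontroller):

# Build (or load) a small Keras model and convert it to a TensorFlow Lite
# flatbuffer with default optimizations (weight quantization).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # shrink weights for the edge
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)                              # self-contained flat file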

15 © 2020 Arm Limited


NN optimizer

• Tool to format a neural network for the Ethos-U55 microNPU
• Takes a trained NN in a TensorFlow Lite flat file as input and formats it to output a modified flat file deployable on the target
• Identifies subgraphs that can execute on the Ethos-U55 microNPU and optimizes the scheduling of these subgraphs
• Compresses weights to reduce SRAM and Flash footprint

16 © 2020 Arm Limited


Platform Security Architecture (PSA)
Security challenges

Success of the IoT depends on the trust and security built into connected devices

Security can be expensive to implement and there is a shortage of experts

Difficult to manage secure devices at scale

Lack of confidence in data to/from sensors/actuators

New vulnerabilities appear all the time

17 © 2020 Arm Limited


Platform Security Architecture (PSA)
Framework for securing connected devices

1. Analyze: Threat Models and Security Analyses, derived from IoT use cases
2. Architect: Specifications for FW and HW
3. Implement: Open-source reference implementation of the FW architecture
4. Certify: PSA Certified scheme – independent evaluation
18 © 2020 Arm Limited
Platform Security Architecture (PSA)
Phase 1: Analyze with threat models and security analyses

Example: smart meter


• Three threat models created using an English Language Protection Profile-style approach, to establish a set of Security Functional Requirements (SFR) for a Target of Evaluation (TOE)
• Each profile considers the functional description, the TOE, and the security requirements
• Documentation makes threat modeling more usable by engineers, regardless of prior security expertise

19 © 2020 Arm Limited


Platform Security Architecture (PSA)
Phase 2: Architect with architecture specifications

PSA Security Model (PSA-SM) – Foundational trust models and patterns

Factory Initialization (PSA-FI) – Requirements for initial secure device programming and configuration

Trusted Base System Architecture (TBSA-M) – Hardware platform requirements

Trusted Boot and Firmware Update (TBFU) – System and FW technical requirements for ensuring MCU boot integrity

Firmware Framework (PSA-FF) – Firmware interface definition of a Secure Processing Environment (SPE) for
constrained IoT platforms, including PSA Root of Trust APIs

Developer APIs – Interfaces to security services for application developers

20 © 2020 Arm Limited


Platform Security Architecture (PSA)
Phase 3: Implement with Trusted Firmware-M (TF-M)
TF-M is an open source,
open governance project,
providing:
• Bootloader for
authenticated boot
• Implementation of PSA
Firmware Framework
• Secure Services for
Storage, Crypto,
Attestation, etc.
• Multiple OS support
• Integration guide

21 © 2020 Arm Limited


Platform Security Architecture (PSA)
Phase 4: Certify with PSA Certified and PSA Functional API Certification

• Enables IoT chipsets and devices to be tested in laboratory conditions, to evaluate their level of security
• Multi-level assurance for devices, depending on
the security requirements established through
analysis of threats for a specific use case.
• Three progressive levels of security certification: foundation, lab-based evaluation, and extensive attack testing (under development)

22 © 2020 Arm Limited


Research topics: Federated learning
Training neural models across decentralized edge devices

• Training a global model on local data samples
• Edge nodes exchange parameter updates with a central server
• Server aggregates the updates and refines the model
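
A minimal NumPy sketch of the server-side aggregation step (federated averaging weighted by local sample counts; the parameter vectors and counts are illustrative):

# Combine parameter updates from edge nodes into a refined global model,
# weighting each client by how many local samples it trained on.
import numpy as np

client_updates = [                          # (local model weights, n_local_samples)
    (np.array([0.2, 0.1, 0.0, 0.3]), 100),
    (np.array([0.4, 0.0, 0.1, 0.1]),  50),
    (np.array([0.1, 0.2, 0.2, 0.2]), 150),
]

total = sum(n for _, n in client_updates)
global_weights = sum(w * (n / total) for w, n in client_updates)
print(global_weights)                       # sent back to the edge for the next round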

23 © 2020 Arm Limited


Research topics: Federated learning
Challenges

How to reduce communication overhead between central server and edge devices?

How to handle conflicting parameter updates?

How to ensure model updating is robust to link failures?

How to ensure global model updates are not influenced by malicious updates?

How to personalize model for location-specific inference?

24 © 2020 Arm Limited


Neural model compression and acceleration
• Inference performance tends to grow with neural model “depth” (number of layers)
• This also increases memory requirements and poses challenges to deployment on IoT
devices → model compression becomes necessary
• Parameter pruning: Remove model parameters that are not critical to performance
• Knowledge distillation: Train a compact model to behave like a large network (teacher)
• Convolution operations are computationally expensive → certain applications (e.g.,
autonomous vehicles, robotics) require real-time recognition
• Factorization of convolutional kernels: Decomposition of matrices into products of
smaller ones to speed up inference
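
A minimal NumPy sketch of the factorization idea, shown on a dense weight matrix via truncated SVD (the rank and sizes are illustrative; the same principle underlies decomposing convolutional kernels):

# Low-rank factorization: replace one large matrix multiply W @ x with two
# smaller ones, reducing parameters and multiply-adds at inference time.
import numpy as np

W = np.random.randn(256, 256)          # original layer weights (65,536 parameters)
U, s, Vt = np.linalg.svd(W, full_matrices=False)

r = 32                                 # keep only the top-r singular values
A = U[:, :r] * s[:r]                   # 256 x 32
B = Vt[:r, :]                          # 32 x 256 -> 16,384 parameters in total

x = np.random.randn(256)
y_approx = A @ (B @ x)                 # ~4x fewer multiply-adds than W @ x
print(np.linalg.norm(W @ x - y_approx) / np.linalg.norm(W @ x))  # approximation error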

25 © 2020 Arm Limited


Model pruning
Supported in hardware to improve efficiency

• Many weights have small values after training (low importance), hence can be removed (synapse pruning)
• Hidden units with no input connections can be removed (neuron pruning)
• Retraining necessary to preserve accuracy
• Pruning usually an iterative process
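
A minimal NumPy sketch of magnitude-based pruning (the 60% threshold is an illustrative choice; the retraining step mentioned above is omitted):

# Zero out low-magnitude weights (synapse pruning), then look for hidden
# units left with no incoming connections (candidates for neuron pruning).
import numpy as np

W = np.random.randn(8, 8) * 0.1                  # trained layer weights (units x inputs)
threshold = np.percentile(np.abs(W), 60)         # prune the smallest 60% of weights

mask = np.abs(W) >= threshold                    # keep only the "important" synapses
W_pruned = W * mask
print("sparsity:", 1.0 - mask.mean())            # fraction of weights removed

dead_units = ~mask.any(axis=1)                   # rows with no remaining inputs
print("prunable units:", int(dead_units.sum()))  # would be removed, then retrain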

26 © 2020 Arm Limited


The Arm trademarks featured in this presentation are registered
trademarks or trademarks of Arm Limited (or its subsidiaries) in
the US and/or elsewhere. All rights reserved. All other marks
featured may be trademarks of their respective owners.

www.arm.com/company/policies/trademarks
