
AI Virtual Tech Talks Series

Machine learning for embedded systems at the edge
Arm and NXP

Kobus Marneweck, Product Manager, Arm
Anthony Huereca, Systems Engineer, NXP
June 16, 2020

Confidential © 2020 Arm Limited

AI VIRTUAL TECH TALKS SERIES

Date        Title                                                                              Host
Today       Machine learning for embedded systems at the edge                                  Arm and NXP
June 30     tinyML development with TensorFlow Lite for Microcontrollers and CMSIS-NN          Arm
July 14     Demystify artificial intelligence on Arm MCUs                                      Cartesiam.ai
July 28     Speech recognition on Arm Cortex-M                                                 Fluent.ai
August 11   Getting started with Arm Cortex-M software development and Arm Development Studio  Arm
August 25   Efficient ML across Arm from Cortex-M to Web Assembly                              Edge Impulse

Visit: developer.arm.com/solutions/machine-learning-on-arm/ai-virtual-tech-talks
SPEAKERS

Kobus Marneweck, Senior Product Manager, Arm
Anthony Huereca, Embedded Systems Engineer, NXP Semiconductors
AGENDA

• ML on the edge
• eIQ deployment
− Arm support for TFLµ
− TensorFlow
− Glow
− Getting started
• The future
• Wrap-up

Machine Learning on the Edge
EXAMPLE EMBEDDED AI APPLICATIONS

Image Classification
• Identify what the camera is looking at
  − Coffee pods
  − Empty vs. full trucks
  − Factory defects on the manufacturing line
  − Produce on a supermarket scale
• Personalization based on facial recognition
  − Appliances, home, toys, auto
• Security video analysis

Audio Analysis
− Keyword actions ("Alexa" / "Hey Google")
− Voice commands
− Alarm analytics (breaking glass, crying baby)

Anomaly Detection
− Identify factory issues before they become catastrophic
− Smartwatch health monitoring
− Motor performance monitoring
− Sensor analysis
MACHINE LEARNING PROCESS

1. Training Phase: Collect and Prepare Data → Train Model → Test Model
   (iterate on parameters and algorithm to get the best model)
2. Inference Phase: Input → Deployed Model → Prediction
INFERENCE ON THE EDGE

• Inference is using a model to make a prediction on new data
• Data can come from an embedded camera, microphone, or sensors

Two possibilities:

Inference on the Cloud
• Requires network bandwidth
• Latency issues
• Cloud compute costs

Inference on the Edge
• Increased privacy and security
• Faster response time and throughput
• Lower power
• No internet connectivity needed
NXP Enablement for Machine Learning
NXP BROAD-BASED MACHINE LEARNING SOLUTIONS AND SUPPORT

eIQ™ ML Enablement
• eIQ (edge intelligence) for edge AI/ML inference enablement
• Based on open-source technologies (TensorFlow Lite, Arm NN, Glow, ONNX, OpenCV)
• Support for the i.MX 8 family and i.MX RT1050/1060/600
• Fully integrated into NXP development environments (MCUXpresso, Yocto/Linux)
• BYOM: Bring Your Own Model

Third-Party SW and HW (DIY)
• Coral Dev Board
• i.MX 8M Development Kit for Amazon® Alexa Voice Service w/ DSP Concepts
• Au-Zone Network Development Tools
• Arcturus video applications
• SensiML tools for sensor analysis
… and more

Turnkey Solutions (Fully Tested)
• SLN-ALEXA-IOT: Alexa Voice Services (AVS) solution, i.MX RT106A
• SLN-LOCAL-IOT: local voice control solution, i.MX RT106L
• SLN-VIZN-IOT: face & emotion recognition solution, i.MX RT106F
ARM CORTEX-M PORTFOLIO

Armv6-M
• Cortex-M0: lowest cost, low power
• Cortex-M0+: highest energy efficiency

Armv7-M
• Cortex-M3: performance efficiency
• Cortex-M4: mainstream control and DSP
• Cortex-M7: maximum performance, control and DSP

Armv8-M (TrustZone)
• Cortex-M23: smallest area, lowest power
• Cortex-M33: flexibility, control and DSP
• Cortex-M55: Helium vector extensions, optimized for DSP & ML

Well suited for ML & DSP applications
CORTEX-M7: HIGHEST PERFORMANCE CORTEX-M

High performance: dual-issue processor
− Achieves 2.14 DMIPS/MHz, 5.01 CoreMark/MHz
− Achieves 1.4GHz in 16FFC (typical configuration with caches and FPU)

Retains all of the Cortex-M benefits
− Ease of use, low interrupt latency

Flexible memory interfaces
− Up to 16MB TCM for critical data and code
− Up to 64KB I-cache and D-cache
− AXI master interface

Performance
− Floating-point unit (FPU): single precision (SP) and double precision (DP); sustained 2x 32-bit or 2x 16-bit MACs per cycle
− Digital signal processing (DSP) extension
CORTEX-M33: NEXT-GENERATION CORTEX-M WITH TRUSTZONE SECURITY

Industry-standard 32-bit processor
− 3-stage pipeline, Harvard architecture
− Extremely flexible design configurations

Wide choice of options for differentiated products
− TrustZone security foundation with up to two memory protection units (MPUs)
− Digital signal processing (DSP) extension with SIMD, single-cycle MAC, saturating arithmetic
− Floating-point unit (FPU)
− Coprocessor interface
− Arm Custom Instructions
− Powerful debug and non-intrusive real-time trace (ETM, MTB)
eIQ
MACHINE LEARNING PROCESS

Note: there is no unified method for converting neural networks from different frameworks to run on Arm Cortex-M products.

1. Definition (model frameworks): TensorFlow, Keras, Caffe, PyTorch, other
2. Training: Collect and Prepare Data → Train Model → Test Model
3. Optimize (optional): pruning, quantization
4. Convert (framework dependent): tflite_convert.py → TensorFlow Lite; code_gen.py → CMSIS-NN; model_compiler → Glow; custom script
5. Inference (eIQ inference engines & ML examples): TensorFlow Lite, CMSIS-NN, Glow

i.MX RT eIQ inference engine options:
• CMSIS-NN: can be used with several different model frameworks
• TensorFlow Lite: used for TensorFlow models
• Glow: machine learning compiler for several different model frameworks (coming in July)
eIQ: EDGE INTELLIGENCE

Collection of libraries and development tools for building machine learning apps targeting NXP MCUs and application processors.

Deploying open-source inference engines:
• Integration and optimization of neural net (NN) inference engines (Arm NN, Arm CMSIS-NN, OpenCV, TFLite, ONNX, etc.)
• End-to-end examples demonstrating customer use cases (e.g. camera → inference engine)
• Support for emerging neural net compilers (e.g. Glow)
• Suite of classical ML algorithms, such as support vector machines (SVM) and random forests
• BYOM: Bring Your Own Model

Integrated into the Yocto Linux BSP and MCUXpresso SDK:
• No separate SDK or release to download
• i.MX: new meta-imx-machinelearning layer in Yocto
• MCU: integrated into MCUXpresso SDK middleware

Supporting materials for ease of use:
• Documentation: eIQ white paper, release notes, eIQ user's guide, demo user's guide
• Guidelines for importing pretrained models based on popular NN frameworks (e.g. TensorFlow, Caffe)
• Training collateral for CAS, DFAEs, and customers (e.g. lectures, hands-on, video)
eIQ DEMO

• Retrained a Mobilenet model written in TensorFlow to identify 5 different flower types
• Used eIQ to run the model on the i.MX RT1060 EVK
  − Lab at https://community.nxp.com/docs/DOC-343827
  − The lab steps can be used for any type of images you're interested in
eIQ Deployment Overview
ADDITIONAL FLAVORS OF NXP eIQ™ MACHINE LEARNING DEVELOPMENT ENVIRONMENT

An untrained ML model is trained in the cloud (Microsoft Azure, Amazon, Google web services); the resulting trained, optimized, and quantized model is deployed in the user application through the NXP eIQ inference engines and libraries (e.g. NN models with CMSIS-NN) onto one of the available compute engines:

• Cortex-M: i.MX RT600, i.MX RT1050, i.MX RT1060, i.MX RT1170
• DSP: i.MX RT600
• Cortex-A: i.MX 8M Plus, i.MX 8QM, i.MX 8QXP, i.MX 8M Quad/Nano, i.MX 8M Mini
• GPU: i.MX 8M Plus, i.MX 8QM, i.MX 8QXP, i.MX 8M Quad/Nano
• NPU (ML accelerator): i.MX 8M Plus
• µNPU: future MCU
eIQ ADVANTAGES

• eIQ implements performance enhancements with CMSIS-NN for Cortex-M cores and DSPs
  − Up to 2.4x improvement in TensorFlow Lite inference time over the original code
• eIQ inference engines work out of the box and are already tested and optimized
  − Get up and running in minutes instead of weeks

NXP eIQ enablement:
Import eIQ project → click the compile button → click the program button → use model output.

Roll your own:
Download the inference engine source from GitHub → figure out which files are needed for the target embedded inference engine → set up a cross-compiler and create a MAKE file → create camera input code → create LCD display code → integrate the camera/LCD code with the inference engine → successfully compile the project, working through any known bugs → configure J-Link programming commands → download to the board using J-Link → debug the inference, camera, LCD, and integration code → check output → use model output.
eIQ FOR i.MX RT

On a PC: a customer's or third-party pre-trained model, trained on a CPU, GPU, or in the cloud, is optionally optimized (quantization/pruning) and then converted.
On the i.MX RT device: the eIQ inference engine runs the converted model on input from a camera, microphone, sensor, or other source and produces a prediction.

Inference engines available with eIQ for i.MX RT:
• CMSIS-NN: can be used with several different model frameworks
• TensorFlow Lite: used for TensorFlow models
• Glow: machine learning compiler for several different model frameworks (coming in July)
Arm support for TFLµ
CMSIS-NN INFERENCE

• Developed by Arm
• API that implements common model layers (convolution, fully connected, pooling, activation, etc.) efficiently at a low level
• Conversion scripts (provided by Arm) convert models into CMSIS-NN API calls
• CMSIS-NN also optimizes the implementation of inference engines like TensorFlow Lite for Microcontrollers (https://www.tensorflow.org/lite/microcontrollers)
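To make the layer-level API concrete, here is a minimal sketch (not taken from the deck) of one convolution layer followed by an in-place ReLU using the legacy CMSIS-NN q7 functions; the layer dimensions, shift values, and weight symbols are illustrative assumptions, and in a real project they come from the conversion scripts.

#include "arm_nnfunctions.h"

/* Illustrative CIFAR-10-style first layer: 32x32 RGB in, 32 feature maps out. */
#define IM_DIM   32
#define IM_CH     3
#define OUT_CH   32
#define KER_DIM   5
#define PADDING   2
#define STRIDE    1
#define OUT_DIM  32

/* Quantized weights and biases, generated by the conversion scripts. */
extern const q7_t conv1_wt[IM_CH * KER_DIM * KER_DIM * OUT_CH];
extern const q7_t conv1_bias[OUT_CH];

static q7_t  img_buffer[IM_DIM * IM_DIM * IM_CH];        /* input image (HWC) */
static q7_t  conv1_out[OUT_DIM * OUT_DIM * OUT_CH];      /* layer output      */
static q15_t col_buffer[2 * IM_CH * KER_DIM * KER_DIM];  /* im2col scratch    */

void run_conv1(void)
{
    /* q7 convolution on HWC data; the _RGB variant requires 3 input channels. */
    arm_convolve_HWC_q7_RGB(img_buffer, IM_DIM, IM_CH,
                            conv1_wt, OUT_CH, KER_DIM, PADDING, STRIDE,
                            conv1_bias, 0 /* bias shift */, 9 /* out shift */,
                            conv1_out, OUT_DIM, col_buffer, NULL);

    /* The activation runs in place on the convolution output. */
    arm_relu_q7(conv1_out, OUT_DIM * OUT_DIM * OUT_CH);
}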
CMSIS-NN OPTIMIZED FOR PERFORMANCE

• Key ML function support
  − Aiming for best-in-class performance for Cortex-M CPUs (compared to other libraries)
  − Available now through an open-source license
• Consistent interface across all Cortex-M CPUs
  − Extending to Armv8-M
• Open source, via the Apache 2.0 license
  − https://github.com/ARM-software/CMSIS_5

[Charts: CMSIS-NN optimized for Cortex-M CPUs (Armv7-M and Armv8.1-M) shows up to 4.6x CNN runtime improvement in relative throughput and up to 4.9x energy-efficiency improvement in relative ops per joule.]
TOOLS & TFLµ OPERATOR SUPPORT: CMSIS-NN AND ETHOS microNPU

Starting from a TensorFlow Lite input file, there are three execution paths on Cortex-M systems:
• Reference kernels: the TensorFlow Lite Micro runtime runs the model as-is on Cortex-M
• CMSIS-NN: offline-optimized operators accelerate the model on Cortex-M (Armv6-M, Armv7-M, Armv8-M, Armv8.1-M with MVE)
• Ethos-U microNPU: an offline optimization step produces a modified .tflite file whose custom operators are dispatched through the Ethos-U driver to the microNPU
eIQ TensorFlow
TENSORFLOW LITE INFERENCE ENGINE

• Developed by Google
  − TensorFlow → training and inference
  − eIQ TensorFlow Lite → NXP's implementation of TF Lite for MCUs
  − TensorFlow Lite Micro → TensorFlow's implementation of TF Lite for MCUs
• Can only be used with TensorFlow models
• Use the tflite_convert utility (provided by TensorFlow) to convert a TensorFlow model to a .tflite binary
• The .tflite flatbuffer binary is read from memory by the TFLite inference engine running on the i.MX RT
TENSORFLOW LITE CONVERSION PROCESS

1. Transform a TensorFlow .pb model into a TFLite flatbuffer (.tflite) file with tflite_convert.
2. Convert the .tflite file into a C array in a .h header with xxd.
3. Copy the .h header file into an eIQ TensorFlow Lite SDK example.

.pb → (tflite_convert) → .tflite → (xxd) → .h → (import) → eIQ on i.MX RT
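For reference, the xxd-generated header is simply the .tflite flatbuffer serialized as a C array. The sketch below shows its shape only: the identifier follows xxd's convention of deriving the name from the input filename, and the array contents here are shortened placeholders rather than a real model (a real TFLite flatbuffer carries the "TFL3" file identifier at byte offset 4).

unsigned char mobilenet_model[] = {
    0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33  /* ..., "TFL3", ... */
};
unsigned int mobilenet_model_len = 8;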
TENSORFLOW LITE CODE FLOW

• Import model
#include "mobilenet_model.h"
model = tflite::FlatBufferModel::BuildFromBuffer(mobilenet_model, mobilenet_model_len);

• Get input
/* Extract image from camera to data buffer. */
CSI2Image(data, Rec_w, Rec_h, pExtract, true);
/* Resize image to input tensor size. */
ResizeImage(interpreter->tensor(input), data, Rec_h, Rec_w, image_height, image_width, image_channels, &s);

• Run inference
interpreter->Invoke();

• Get results
std::vector<std::pair<float, int>> top_results;
GetTopN<float>(interpreter->typed_output_tensor<float>(0), output_size, s->number_of_results, threshold, &top_results, true);
auto result = top_results.front();        // best result
const float confidence = result.first;    // confidence level
const int index = result.second;          // highest-scoring class
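The fragments above come from the eIQ example application (CSI2Image, ResizeImage, and GetTopN are example-app helpers, not TensorFlow Lite APIs). A minimal self-contained sketch of the same flow, using only the TensorFlow Lite C++ API and assuming the xxd-generated model header from the previous slide, looks roughly like this:

#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "mobilenet_model.h"  /* xxd-generated model array (assumed name) */

void RunOnce(const float* input_pixels, int input_len)
{
    /* Wrap the in-memory flatbuffer; no file system is needed on an MCU. */
    std::unique_ptr<tflite::FlatBufferModel> model =
        tflite::FlatBufferModel::BuildFromBuffer(
            reinterpret_cast<const char*>(mobilenet_model), mobilenet_model_len);

    /* Resolve the built-in operators and build the interpreter. */
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    interpreter->AllocateTensors();

    /* Copy the (already resized) image into the input tensor. */
    float* in = interpreter->typed_input_tensor<float>(0);
    for (int i = 0; i < input_len; ++i) in[i] = input_pixels[i];

    /* Run inference; the output tensor then holds the class scores. */
    interpreter->Invoke();
    const float* scores = interpreter->typed_output_tensor<float>(0);
    (void)scores;  /* pick the max element to get the predicted class */
}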
GEMMLOWP ASSEMBLY-CODED DSP OPTIMIZATION BENEFITS FOR TENSORFLOW LITE

GCC Arm® 8-2018-q4 (-O2):
             DSP Optimized   Reference Kernel
Label Image  186 ms          370 ms
CIFAR-10     61 ms           229 ms

IAR EW 8.32.3:
             DSP Optimized   Original
Label Image  217 ms          307 ms
CIFAR-10     67 ms           159 ms

Keil MDK 5.27:
             DSP Optimized   Original
Label Image  178 ms          198 ms
CIFAR-10     64 ms           87 ms
eIQ Glow
GLOW (COMING IN JULY)

• Developed by Facebook
• Glow is a compiler that turns a model into a machine-executable binary for the target device
  − Both the model and the inference engine are compiled into the generated binary
  − The generated binary is integrated into an SDK software project
  − Can take advantage of compiler optimizations
  − Supports ONNX (a universal model format) and Caffe2 models
• Cutting-edge inference technology
PERFORMANCE COMPARISON USING A CIFAR-10 MODEL ON RT1050

[Bar chart: CIFAR-10 inference time on i.MX RT1050 (0-70 ms scale), comparing Glow with CMSIS-NN against optimized TensorFlow Lite.]
OPTIMIZATIONS FOR GLOW

• NXP developed optimizations for Glow on i.MX RT devices
• Operations can be dispatched to the HiFi4 DSP on the RT685
  − The HiFi4 DSP increases performance by up to 34x
• Operations can also use CMSIS-NN library optimizations on all Glow-supported devices

Glow inference time on RT685 (in milliseconds):

                                           MNIST Model   CIFAR-10 Model
Floating-point model                       104.63        213.78
Floating-point model using HiFi4 DSP       3.02          13.36
Quantized model                            59.77         165.37
Quantized model using CMSIS-NN             28.52         89.95
Quantized model using CMSIS-NN + HiFi4     2.50          6.70
GLOW

1. Transform the model to the universal ONNX format with tf2onnx.
2. Optimize the model with the profiler (image_classifier) to create a profile.yml file for quantization.
3. Compile with the Glow model_compiler to generate the compiled files and weights.
4. Copy the binary files into an eIQ Glow SDK example.

.pb → (tf2onnx) → .onnx → (image_classifier) → profile.yml → (model_compiler) → .o / .inc / .weights → (import) → eIQ on i.MX RT
ADD COMPILED CODE TO PROJECT

• Add the compiled <network_name>.o file to the project settings
• Include the <network_name>.h file
• Set the input data
• Run the model
• Get the result (a sketch of these steps follows below)
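A minimal sketch of those steps for the CIFAR-10 bundle: the entry point, placeholder offsets, and macros (cifar10, CIFAR10_input, CIFAR10_output, CIFAR10_MEM_ALIGN) are derived by model_compiler from the network name, so the exact identifiers in a real project come from the auto-generated header, and the weights-include filename here is an assumption.

#include <stdint.h>
#include <string.h>
#include "CIFAR10.h"  /* auto-generated bundle header (assumed name) */

/* Constant weights are read-only, so they can live in flash. */
__attribute__((aligned(CIFAR10_MEM_ALIGN)))
static const uint8_t constantWeight[CIFAR10_CONSTANT_MEM_SIZE] = {
#include "CIFAR10.weights.inc"  /* generated weights include (assumed name) */
};

/* Inputs/outputs and scratch memory must be writable, i.e. in RAM. */
__attribute__((aligned(CIFAR10_MEM_ALIGN)))
static uint8_t mutableWeight[CIFAR10_MUTABLE_MEM_SIZE];
__attribute__((aligned(CIFAR10_MEM_ALIGN)))
static uint8_t activations[CIFAR10_ACTIVATIONS_MEM_SIZE];

int classify(const float *image)  /* preprocessed 32x32x3 input */
{
    /* Set input: the generated header gives each placeholder's byte offset
       within the mutable region. */
    float *input = (float *)(mutableWeight + CIFAR10_input);
    memcpy(input, image, 32 * 32 * 3 * sizeof(float));

    /* Run model: the single entry point generated by model_compiler
       (its generated signature takes non-const pointers). */
    cifar10((uint8_t *)constantWeight, mutableWeight, activations);

    /* Get result: scan the 10 output scores for the best class. */
    const float *scores = (const float *)(mutableWeight + CIFAR10_output);
    int best = 0;
    for (int i = 1; i < 10; i++)
        if (scores[i] > scores[best]) best = i;
    return best;
}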
GLOW MEMORY USAGE

• Glow does not use dynamically allocated memory (heap)
• All the memory requirements of a compiled model can be found in the auto-generated header file:

// Memory sizes (bytes).
#define CIFAR10_CONSTANT_MEM_SIZE 34176     // Model weights. Can be stored in flash or RAM.
#define CIFAR10_MUTABLE_MEM_SIZE 12352      // Model inputs/outputs. Must be in RAM.
#define CIFAR10_ACTIVATIONS_MEM_SIZE 71680  // Scratch memory for intermediate computations. Must be in RAM.
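Taken together, the defines above give this bundle's full footprint: the writable requirement is MUTABLE + ACTIVATIONS = 12,352 + 71,680 = 84,032 bytes (about 82KB of RAM), while the 34,176-byte (about 33KB) constant region can be placed in flash.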
Getting eIQ
eIQ IN MCUXPRESSO SDK

• eIQ for the i.MX RT family is included as part of the MCUXpresso SDK (https://mcuxpresso.nxp.com/en/welcome)
• Make sure eIQ is selected in the MCUXpresso SDK builder
eIQ EXAMPLES

eIQ RT1060 SDK examples available (provided for the TensorFlow Lite and CMSIS-NN inference engines):

• CIFAR-10: classifies a 32x32 image from camera input into one of 10 categories
• Label Image: classifies a 128x128 image from camera input into one of 1000 categories using a Mobilenet model
• Keyword Spotting (KWS): detects specific keywords from microphone input
• Anomaly Detection: uses the FRDM-STBC-AGM01 sensor board for accelerometer anomaly analysis (select the "agm01" board)
eIQ FOLDER STRUCTURE

• Project files for the examples
• Project files for the TensorFlow Lite library
• Source code for the CMSIS-NN examples
• CMSIS-NN source code
• Source code for the TensorFlow Lite examples
• TensorFlow Lite source code
eIQ APP NOTES

• Anomaly Detection with eIQ using K-Means Clustering in TensorFlow Lite (AN12766)
• Handwritten Digit Recognition using TensorFlow Lite (AN12603)

Coming soon:
• Transfer Learning and Datasets
INFERENCE TIMES

• Benchmarking is ongoing and optimizations are still under development; numbers are subject to change
• Inference time is heavily dependent on the particular model
  − Different images (of the same size) will not affect inference time
• Each eIQ example reports its inference time

Image classification (ms)   CIFAR-10 (32x32 input)   Mobilenet (128x128 input)
RT685 w/ HiFi4, Glow        6.7                      61
RT1060, Glow                24                       74
RT1060, TensorFlow Lite     64                       178
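The examples print this measurement themselves; to instrument your own code, one common approach on Cortex-M (an assumption here, not necessarily how the eIQ examples measure) is the DWT cycle counter:

#include "fsl_device_registers.h"  /* MCUXpresso SDK; pulls in the CMSIS core header */

/* Time one run of an inference wrapper (e.g. a function that calls
   interpreter->Invoke()) and return milliseconds. */
static uint32_t measure_ms(void (*run_inference)(void))
{
    /* Enable the cycle counter (available on Cortex-M3 and above). */
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;
    DWT->CYCCNT = 0U;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;

    uint32_t start = DWT->CYCCNT;
    run_inference();
    uint32_t cycles = DWT->CYCCNT - start;

    /* SystemCoreClock is the core frequency in Hz (CMSIS global). */
    return cycles / (SystemCoreClock / 1000U);
}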
MEMORY REQUIREMENTS

• Flash: model, inference engine code, and input data
• RAM: intermediate products of the model layers
  − Size depends on the amount of data and the size and type of the layers, and is very model dependent

Benchmarking and optimizations are ongoing; numbers are subject to change:

Model          Inference Engine   Flash                               RAM
CIFAR-10       CMSIS-NN           110KB                               50KB
CIFAR-10       TensorFlow Lite    600KB (92KB model, 450KB engine)    320KB
CIFAR-10       Glow               69KB                                131KB
Mobilenet v1   TensorFlow Lite    1.5MB (450KB model, 450KB engine)   2.5MB
Mobilenet v1   Glow               507KB                               1MB
The future
PUSHING THE BOUNDARIES FOR REAL-TIME ON-DEVICE PROCESSING

[Chart: relative control-code performance vs. relative ML and DSP performance.]
• Cortex-M today: Cortex-M0, M0+, M1, M23 (signal conditioning and ML foundation) through Cortex-M3, M4, M33, M35P, and M7
• New Cortex-M CPU enabled by Helium: Cortex-M55
• Arm microNPUs: Cortex-M55 + Ethos-U55, with multiple performance points available, for ML performance and efficiency

Well suited for ML & DSP applications
CORTEX-M55 & ETHOS-U55: TRANSFORMING CAPABILITIES OF THE SMALLEST DEVICES

Boosting signal processing and ML performance for millions of developers, across the whole pipeline of signal conditioning, feature extraction, and decision algorithm:

• Cortex-M55: up to 5x higher signal-processing performance (CFFT in int32)
• Cortex-M55: up to 15x higher ML performance* (matrix multiplication in int8)
• Cortex-M55 & Ethos-U55: up to 480x higher ML performance* (matrix multiplication in int8)

*Compared to existing Armv8-M implementations
Summary
FURTHER READING

• NXP eIQ
• TensorFlow Lite
• Glow
• CMSIS-NN

Machine Learning Courses:


• Video series on Neural Network basics
• Arm Embedded Machine Learning for Dummies
• Google TensorFlow Lab
• Google Machine Learning Crash Course
• Google Image Classification Practical
• YouTube series on the basics of ML and TensorFlow (ML Zero to Hero Series)

Book:
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a
Weirder Place

GIT REPOS

• TensorFlow Lite
− https://github.com/tensorflow/tensorflow/tree/v1.13.1/tensorflow/lite
• TensorFlow Lite for Microcontrollers
− https://www.tensorflow.org/lite/microcontrollers

• CMSIS-NN
− https://github.com/ARM-software/CMSIS_5/tree/master/CMSIS/NN
− CIFAR-10: https://github.com/ARM-software/ML-examples/tree/master/cmsisnn-cifar10
− KWS: https://github.com/ARM-software/ML-KWS-for-MCU

• Glow
− https://github.com/pytorch/glow

NXP eIQ RESOURCES

• eIQ for i.MX RT is included in MCUXpresso SDK


− https://mcuxpresso.nxp.com
− TF-Lite and CMSIS-NN eIQ User Guides in SDK documents

• eIQ available for i.MX RT1050 and i.MX RT1060 today


− Can also run on i.MX RT1064: https://community.nxp.com/docs/DOC-344225
• eIQ available for i.MX RT685 in July

• Transfer Learning Lab: https://community.nxp.com/docs/DOC-343827


• Anomaly Detection App Note: https://www.nxp.com/docs/en/application-note/AN12766.pdf
• Handwritten Digit Recognition: https://www.nxp.com/docs/en/application-note/AN12603.pdf

Virtual Tech Talks Series
Thank You
Danke
Merci
谢谢
ありがとう
Gracias
Kiitos
감사합니다
धन्यवाद
‫ﺷﻛًرا‬
‫תודה‬
Join our next virtual tech talk:
AI Virtual Tech Talks Series
tinyML development with TensorFlow Lite for Microcontrollers and CMSIS-NN

Tuesday, 30 June

Register here:
developer.arm.com/solutions/machine-learning-on-arm/ai-virtual-tech-talks
Confidential © 2020 Arm Limited
The Arm trademarks featured in this presentation are registered
trademarks or trademarks of Arm Limited (or its subsidiaries) in
the US and/or elsewhere. All rights reserved. All other marks
featured may be trademarks of their respective owners.

www.arm.com/company/policies/trademarks