
Embedded Machine Learning & formulating the right approach to Testing

By Jason Chandralal on 30 Sep 2022



Machine learning (ML) enables embedded systems to learn automatically from existing data and to use this captured knowledge to independently make predictions and draw conclusions.

Embedded devices used for machine learning applications can fulfil many tasks, especially at the edge, and it is expected that AI-enabled IoT shipments will grow at a rapid rate in the coming years. Several new applications, such as voice assistants, have been made possible by progress in the field of neural networks. This in turn is increasing the demand on processing capabilities at the endpoint.
What is embedded ML?

Significant recent advances in microprocessor architecture and algorithm design have made it possible to run complex machine learning workloads on even the smallest of microcontrollers. TinyML is a foundation that supports embedded ML on devices through an open framework. Expanding upon the definition by TinyML.org, tiny machine learning is broadly defined as a fast-growing field of machine learning technologies and applications, including hardware (dedicated integrated circuits), algorithms and software capable of performing on-device sensor (vision, audio, IMU, biomedical, etc.) data analysis at extremely low power consumption, typically in the range of mW and below, thereby enabling various always-on use cases and targeting battery-operated devices. The key advantages are bandwidth, latency, cost savings, reliability and privacy.
There are primarily three types of learning –

Supervised Learning
Unsupervised Learning
Reinforcement Learning

Furthermore, there are different types of classification models to be used depending on the data you are going to process, such as images, sound, text, and numerical values.

Some of them are –


Linear Regression
Random Forest
Kernel SVM
K-means
Naive Bayes
Decision Tree

If you are planning to explore or evaluate using Embedded ML for your devices, there are some key points that you need to consider before you begin deployment.

Pick your Machine Learning (ML) language with an eye to deployment
Find a run-time inference engine (see the conversion sketch after this list)
Train/Tune your model with focused training data
Calibrate your training data on consistent newer parameters (new datasets)
Prediction/Inference
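As an illustration of the "run-time inference engine" step, here is a minimal sketch, assuming TensorFlow/Keras as the ML framework, of converting a trained model into a TensorFlow Lite flatbuffer that a microcontroller-class runtime such as TensorFlow Lite for Microcontrollers can execute. The model architecture and file name are hypothetical.

```python
import tensorflow as tf

# Hypothetical, already-trained Keras model standing in for your own network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to a TensorFlow Lite flatbuffer with default post-training quantization,
# shrinking the model so it fits microcontroller flash/RAM budgets.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting .tflite file can then be embedded in firmware (e.g. as a C array)
# and executed by an on-device inference engine.
with open("gear_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```

The same flatbuffer is also what later testing stages (inference and deployment checks) operate on.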

An approach to Testing ML Systems: Categories


Testing is a technique used to make sure something functions as planned. In order to prevent poor development, rising downstream expenses, and lost time, we are encouraged to implement tests and identify sources of mistakes or failures as early in the development phase as possible.

Code:

The foundation for testing any software is a framework that adheres to the Arrange-Act-Assert-Cleanup methodology. In Python, there are many tools such as Unittest, Pytest, etc. that allow for any implementation and have a lot of built-in functionality, such as parameterization, filters and markers, and align with coverage tools to validate many conditions at scale.
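As a minimal sketch (the function under test and its values are hypothetical), a parameterized Pytest test following the Arrange-Act-Assert pattern might look like this:

```python
import pytest


def scale_sensor_reading(raw, gain=0.5):
    """Hypothetical function under test: converts a raw ADC value to a scaled reading."""
    return raw * gain


@pytest.mark.parametrize(
    "raw, gain, expected",
    [
        (0, 0.5, 0.0),
        (100, 0.5, 50.0),
        (100, 1.0, 100.0),
    ],
)
def test_scale_sensor_reading(raw, gain, expected):
    # Arrange: inputs are supplied by the parameterization above.
    # Act: run the unit under test.
    result = scale_sensor_reading(raw, gain)
    # Assert: verify the expected behavior.
    assert result == pytest.approx(expected)
```

Parameterization lets one test body validate many conditions, and the same structure scales up when combined with markers, filters and coverage tooling.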
AI/ML/RPA
Data:

A machine learning model is trained using the data that is collected and provided to it, which makes data collection a critical factor in the machine learning process. The accuracy of a machine learning model is totally dependent on the data that is provided. Effective machine learning (ML) algorithms require accurate data to make accurate predictions and decisions in the real world.

Data can be collected from various resources or databases. Data that is collected for a particular problem in the right format is termed a dataset. As far as embedded devices are concerned, data can be collected from various sensors and actuators. Different sensor data can serve as different attributes, combined to form different datasets. A dataset is analyzed to find patterns and make further predictions.
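To make the idea of sensor readings becoming dataset attributes concrete, here is a small sketch using pandas; the sensor names, values and labels are hypothetical.

```python
import pandas as pd

# Hypothetical readings from two sensors on the same device, sampled together.
samples = [
    {"vibration_rms": 0.12, "temperature_c": 41.5, "label": "functional"},
    {"vibration_rms": 0.95, "temperature_c": 58.2, "label": "faulty"},
    {"vibration_rms": 0.10, "temperature_c": 40.9, "label": "functional"},
]

# Each sensor becomes an attribute (column); each sample becomes a row of the dataset.
dataset = pd.DataFrame(samples)
print(dataset.describe())
```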
The data that is collected is further segregated into training and test data. To validate your data, you need to first understand the use of training data and test data.

Training data: This part of the data develops the machine learning model. In other words, the machine learning model trains itself by analyzing the training data and understanding the data patterns to make further predictions. The data that is provided to the algorithm is paired with the expected output, so it's fair to say that the model takes the training data as a reference point for making predictions. Ultimately, if the model is trained with varied data patterns, it turns out to be more efficient.
Test data: Once the model is built using the training data, the model is fed the test data, and this time the model takes the data as input and makes predictions based on its experience from the training phase. In the training phase, the data that is fed is sometimes labelled, which helps the model understand the data much better and build more efficient models. But the test data is unlabelled, which corresponds to real-world data. Test data provides a final, real-world check on an unseen dataset to confirm that the ML algorithm was trained effectively and the model is efficient. A minimal split sketch follows this list.
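Here is that sketch of segregating a collected dataset into training and test portions, assuming scikit-learn and the hypothetical gear dataset used above:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical gear dataset: sensor readings plus a label column.
dataset = pd.DataFrame({
    "vibration_rms": [0.12, 0.95, 0.10, 0.88, 0.15, 0.91, 0.09, 0.97],
    "temperature_c": [41.5, 58.2, 40.9, 57.1, 42.0, 59.3, 40.2, 60.1],
    "label": ["functional", "faulty"] * 4,
})

features = dataset[["vibration_rms", "temperature_c"]]
labels = dataset["label"]

# Hold out 25% of the samples as an unseen test set; stratify to keep the class balance.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=42, stratify=labels
)
```

The held-out portion is never shown to the model during training, which is what makes it a meaningful real-world check later.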

Once we define what our data should look like, we can use, expand and adapt these expectations as our dataset grows. The key expectations to validate in data are rows/columns, individual values, and aggregate values. This data needs to be wrapped in a dataset module.

To test the accuracy or correctness of the data, the Great Expectations library can be used, which is quite efficient. The Great Expectations library covers most of the common issues faced and gives automation an upper hand due to its flexibility in testing the data. For instance, if a machine learning model is being implemented to detect faulty gears, the expected dataset will contain details of both faulty and functional gears, and if we look at the data collection closely, there will be certain data patterns. But while the data is being collected, a faulty gear's data could be stored as functional gear data, which will in turn affect the machine learning model's development. So, it becomes important to check whether the data being collected is correct. The expected dataset collection can be tested by applying certain rules or conditions that check whether the collected data (faulty, functional) follows the pattern of faulty or functional gear data.
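A minimal sketch of such a check, assuming the classic pandas-style Great Expectations API (the exact entry points differ across library versions) and the hypothetical gear dataset:

```python
import great_expectations as ge
import pandas as pd

# Hypothetical collected gear data.
df = pd.DataFrame({
    "gear_id": [1, 2, 3],
    "vibration_rms": [0.12, 0.95, 0.10],
    "label": ["functional", "faulty", "functional"],
})

# Wrap the DataFrame so expectation methods are available
# (classic pandas-dataset API; newer releases use a different entry point).
gdf = ge.from_pandas(df)

# Rule: labels may only be 'faulty' or 'functional'.
label_check = gdf.expect_column_values_to_be_in_set("label", ["faulty", "functional"])

# Rule: vibration readings must lie within a plausible physical range.
range_check = gdf.expect_column_values_to_be_between(
    "vibration_rms", min_value=0.0, max_value=5.0
)

assert label_check.success and range_check.success
```

Expectations like these are exactly the "rules or conditions" mentioned above, expressed in a form that can be re-run automatically every time new data arrives.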

To implement automated code to test the data, GE libraries can be used to check certain conditions, and assert functions can be applied at the required checkpoints. Once the expected checkpoints are created, one can test them on incoming data or data that has already been collected. If the data, i.e. a row, column, or individual value, doesn't satisfy the conditions created, it can be discarded. These tests can be run in pipelines via a Makefile, or a workflow orchestrator like Airflow, KubeFlow Pipelines, etc. One can also use the Great Expectations GitHub Actions to automate validation of data pipeline code. Lastly, create data documentation for the validation tests and runs. For example, if you use the Great Expectations open-source library, the CLI supports this automatically.
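As a simple illustration of the "discard what fails" step (a plain pandas filter here; in a real pipeline the same rules would live in a Great Expectations checkpoint triggered by a Makefile target, Airflow, or a GitHub Action):

```python
import pandas as pd

# Hypothetical incoming batch of gear data.
incoming = pd.DataFrame({
    "vibration_rms": [0.12, -1.0, 0.95],
    "label": ["functional", "functional", "broken"],  # 'broken' is not an allowed label
})

# The same validation rules expressed as boolean conditions.
valid_label = incoming["label"].isin(["faulty", "functional"])
valid_range = incoming["vibration_rms"].between(0.0, 5.0)

# Keep only rows that satisfy every condition; the rest are discarded (or quarantined for review).
clean_batch = incoming[valid_label & valid_range]
rejected = incoming[~(valid_label & valid_range)]
```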

Models:

The final phase of testing ML systems would be to test the models that are selected across various phases such as
training, evaluation, inference, and deployment.

Training – Here we write tests while we're developing our training pipelines so that we can spot errors quickly. For instance, we can write tests to check whether the right data is being fed to train the model, or what percentage of the data is used for training, etc.
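A minimal sketch of such a training-pipeline test; the split helper and the 80/20 expectation are hypothetical:

```python
from sklearn.model_selection import train_test_split


def make_splits(features, labels, train_fraction=0.8):
    """Hypothetical pipeline helper that produces the train/test split."""
    return train_test_split(features, labels, train_size=train_fraction, random_state=42)


def test_training_split_proportion():
    # Arrange: a small synthetic dataset.
    features = [[i] for i in range(100)]
    labels = [i % 2 for i in range(100)]
    # Act: run the pipeline's split step.
    X_train, X_test, y_train, y_test = make_splits(features, labels)
    # Assert: roughly 80% of the data is used for training and nothing is lost.
    assert len(X_train) == 80
    assert len(X_train) + len(X_test) == len(features)
    assert len(y_train) == len(X_train)
```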

Evaluation/Behavior Testing – Behavioral testing can be described as testing of input data against expected outputs. During these tests the model is treated as a black box, i.e. the main focus is on the data that is fed in and what kind of output the model predicts. This type of test is run on different data patterns, so these tests act as a sanity check on the model's behavior.
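A minimal sketch of such a black-box check, assuming a hypothetical classifier exposing a `predict` method that returns "faulty" or "functional" for a feature vector:

```python
import pytest


class FakeGearModel:
    """Hypothetical stand-in for the trained classifier's predict interface."""

    def predict(self, features):
        vibration_rms, temperature_c = features
        return "faulty" if vibration_rms > 0.5 else "functional"


@pytest.mark.parametrize(
    "features, expected",
    [
        ((0.10, 41.0), "functional"),  # clearly healthy pattern
        ((0.95, 58.0), "faulty"),      # clearly faulty pattern
    ],
)
def test_model_behavior_on_known_patterns(features, expected):
    model = FakeGearModel()  # in practice, load the trained model here
    # The model is treated as a black box: only inputs and predicted outputs matter.
    assert model.predict(features) == expected
```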

Calculated Inference – Ultimately once the model is built and deployed, it is the end users who will be using the
model for deriving conclusions. It is a best practice to test all scenarios before deployment.

Deployment – Once we are certain that the model works in a customer equivalent or accepted environment, we can
run system level tests on the model to determine the quality and efficiency.
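As one example of a deployment-level check, a system test might load the converted model into the TensorFlow Lite interpreter (used here as a host-side proxy; true on-device timing would run on the target hardware) and verify that inference latency stays within the endpoint's budget. The model path and the 50 ms budget below are hypothetical.

```python
import time
import numpy as np
import tensorflow as tf


def test_inference_latency_budget():
    # Load the converted model (hypothetical path from the earlier conversion step).
    interpreter = tf.lite.Interpreter(model_path="gear_classifier.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]

    # Drive the model with a representative input tensor.
    sample = np.zeros(input_details["shape"], dtype=input_details["dtype"])
    interpreter.set_tensor(input_details["index"], sample)

    start = time.perf_counter()
    interpreter.invoke()
    elapsed_ms = (time.perf_counter() - start) * 1000.0

    # Hypothetical budget: the endpoint use case needs a prediction within 50 ms.
    assert elapsed_ms < 50.0
    assert interpreter.get_tensor(output_details["index"]) is not None
```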

The greatest potential for the next computing revolution lies in scaling ML to the billions of smaller, power-
constrained endpoint devices. Multiple companies in the embedded space are coming out with innovative solutions
to accelerate ML workloads on existing controllers/devices and open communities like TinyML are bringing together
industry and academia to provide the right platform to come out with the best technologies.

Jason Chandralal is General Manager of Product Engineering Services at Happiest Minds Technologies. He is responsible for defining and leading Test Engineering solutions in the area of datacenter technologies, especially focused on SDN and NFV, and for embedded systems and device-related technologies associated with the Internet of Things and Industrial Automation. He has over 24 years of experience in Telecom, Datacom, Networking and IoT product development and testing, working across Network Equipment Provider, Telecom Service Provider, Hi-Tech and Manufacturing customers, with specialization in Networking, Testing and QA. Before Happiest Minds, Jason held multiple senior roles in large PES organizations based in India and did work stints in the US and Europe.

