Minor Project Sem5
Data collection:
The first step is to gather a relevant and representative dataset that
contains examples of the problem you want to solve or the patterns
you want to discover. The quality and size of the dataset play a
crucial role in the performance of the machine learning model.
Data pre-processing:
Raw data often requires cleaning and pre-processing before it can be
used effectively. This step involves tasks such as removing outliers,
handling missing values, normalizing or scaling data, and feature
engineering (creating new features from existing ones).
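As a sketch of the scaling step, min-max normalization rescales each feature into the range [0, 1]; the function below is illustrative only and not part of the project code:

```cpp
#include <vector>
#include <algorithm>

// Rescale a feature column into [0, 1] using min-max normalization.
// If all values are equal, the column is mapped to 0 to avoid dividing by zero.
std::vector<float> minMaxNormalize(const std::vector<float>& column) {
    float lo = *std::min_element(column.begin(), column.end());
    float hi = *std::max_element(column.begin(), column.end());
    std::vector<float> scaled;
    scaled.reserve(column.size());
    for (float v : column) {
        scaled.push_back(hi == lo ? 0.0f : (v - lo) / (hi - lo));
    }
    return scaled;
}
```

Scaling like this keeps features with large numeric ranges (e.g. raw accelerometer counts) from dominating features with small ranges during training.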
Model selection:
There are various machine learning algorithms available, each with
its strengths and weaknesses. The choice of algorithm depends on
the type of problem you are trying to solve (e.g., classification,
regression, clustering) and the characteristics of your dataset.
Model training:
In this step, the selected algorithm is trained on the preprocessed
dataset. The model learns the underlying patterns and relationships
in the data by adjusting its internal parameters iteratively. The goal is
to minimize the difference between the model's predictions and the
actual values or labels in the training data.
Model evaluation:
Once the model is trained, it needs to be evaluated to assess its
performance. This is done using evaluation metrics appropriate for
the specific problem, such as accuracy, precision, recall, or mean
squared error. The evaluation helps identify potential issues like
over-fitting (when the model performs well on the training data but
poorly on new data) or under-fitting (when the model fails to capture
the underlying patterns).
Model optimization:
If the model's performance is not satisfactory, optimization
techniques like hyper-parameter tuning can be applied. Hyper-
parameters are parameters that are not learned during training but
control the learning process, such as learning rate, regularization
strength, or the number of hidden layers in a neural network.
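Hyper-parameter tuning is often done by grid search: try each candidate value, score it on a validation set, and keep the best. A generic sketch (the `evaluate` callback stands in for any train-and-validate routine):

```cpp
#include <vector>
#include <functional>
#include <limits>

// Grid search: evaluate each candidate hyper-parameter value and
// return the one with the lowest validation loss.
float gridSearch(const std::vector<float>& candidates,
                 const std::function<float(float)>& evaluate) {
    float best = candidates.front();
    float bestLoss = std::numeric_limits<float>::max();
    for (float c : candidates) {
        float loss = evaluate(c);
        if (loss < bestLoss) {
            bestLoss = loss;
            best = c;
        }
    }
    return best;
}
```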
Prediction or decision-making:
After the model is trained and evaluated, it can be used to make
predictions or decisions on new, unseen data. The model takes the
input data, processes it through its learned parameters, and
produces the desired output, such as predicting the class of an
image, estimating the price of a house, or recommending a movie.
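For classification, the final decision step often reduces to picking the class with the highest model score, an argmax over the outputs; a minimal sketch:

```cpp
#include <vector>
#include <cstddef>

// Return the predicted class: the index of the highest score
// the model produced for a given input.
std::size_t predictClass(const std::vector<float>& classScores) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < classScores.size(); i++) {
        if (classScores[i] > classScores[best]) best = i;
    }
    return best;
}
```

This is exactly what the luggage-security sketch later in this report does when it compares each gesture label's score against a confidence threshold.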
Machine learning has numerous applications in various domains,
including image and speech recognition, natural language processing,
recommendation systems, fraud detection, autonomous vehicles,
healthcare, finance, and more. It continues to advance and reshape
many industries, making automated analysis and decision-making
more efficient and accurate.
Machine learning has been recognized as central to the success of
Artificial Intelligence, and it has applications in various areas of
science, engineering and society.
Keywords:
Luggage Security, Machine Learning, Arduino Nano 33 BLE Sense,
Real-time Monitoring, Notifications.
Table of Contents:
Introduction
Objectives
Methodology
Hardware and Software Setup
Data Acquisition and Preprocessing
Machine Learning Model Development
Real-Time Monitoring and Alerting
Mobile Application Integration
Testing and Evaluation
Expected Outcomes
Conclusion
Objectives:
The objectives section outlines the specific goals of the project. It
aims to detect unauthorized access, tampering, or mishandling of
luggage by leveraging the sensor data collected from the Arduino
Nano 33 BLE Sense. The project also focuses on real-time monitoring
and providing instant notifications to the user.
Methodology:
The methodology section describes the overall approach adopted to
achieve the project's objectives. It includes data acquisition from the
onboard sensors, preprocessing techniques to extract relevant
features, machine learning model development, real-time
monitoring, and integration with a mobile application.
Hardware Setup:
The Arduino Nano 33 BLE Sense microcontroller is chosen for its
small form factor, making it ideal for integration into luggage.
Additionally, the microcontroller comes equipped with a range of
onboard sensors, including an accelerometer, gyroscope, and
magnetometer. These sensors will be utilized to capture data about
the movements and orientation of the luggage.
Expected Outcomes:
This section outlines the expected outcomes of the project, including
improved luggage security, real-time monitoring capabilities, and
enhanced user experience. It also discusses potential future
enhancements and applications.
LINES OF CODE FOR LUGGAGE SECURITY
Here is the code for luggage security using the Arduino Nano 33 BLE
Sense. As output, the onboard RGB LED lights up in a different colour
for each motion the chip has been trained to recognize. Similarly, the
board can be connected to a buzzer and trained so that whenever an
unwanted motion is detected, the buzzer sounds an alarm.
----------------------------------- ACCELEROMETER.INO -----------------------------------
/* Edge Impulse ingestion SDK
* Copyright (c) 2022 EdgeImpulse Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include <gestures_inferencing.h>
#include <Arduino_LSM9DS1.h>
#define CONVERT_G_TO_MS2 9.80665f
#define MAX_ACCEPTED_RANGE 2.0f
void setup()
{
    // Configure the RGB LED pins (active-low) and the buzzer pin D2 as outputs.
    pinMode(LEDR, OUTPUT);
    pinMode(LEDG, OUTPUT);
    pinMode(LEDB, OUTPUT);
    pinMode(D2, OUTPUT);

    Serial.begin(115200);
    while (!Serial);
    Serial.println("Edge Impulse Inferencing Demo");

    if (!IMU.begin()) {
        ei_printf("Failed to initialize IMU!\r\n");
    }
    else {
        ei_printf("IMU initialized\r\n");
    }
    if (EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME != 3) {
        ei_printf("ERR: EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME should be equal to 3 (the 3 sensor axes)\n");
        return;
    }
}
/**
 * @brief  Return the sign of a number
 * @param  number  Value to test
 * @return 1.0 if the number is non-negative, -1.0 otherwise
 */
float ei_get_sign(float number) {
    return (number >= 0.0) ? 1.0 : -1.0;
}
/**
 * @brief  Sample the accelerometer, run inferencing, and act on the result
 */
void loop()
{
ei_printf("\nStarting inferencing in 2 seconds...\n");
delay(2000);
digitalWrite(LEDR, LOW);
digitalWrite(LEDG, LOW);
digitalWrite(LEDB, LOW);
ei_printf("Sampling...\n");
    // Allocate a buffer for one full window of accelerometer readings.
    float buffer[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { 0 };

    for (size_t ix = 0; ix < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; ix += 3) {
        // Schedule the next sample so we match the model's sampling rate.
        uint64_t next_tick = micros() + (EI_CLASSIFIER_INTERVAL_MS * 1000);

        IMU.readAcceleration(buffer[ix], buffer[ix + 1], buffer[ix + 2]);

        // Clip each axis to the accepted range, then convert from g to m/s^2.
        for (int i = 0; i < 3; i++) {
            if (fabs(buffer[ix + i]) > MAX_ACCEPTED_RANGE) {
                buffer[ix + i] = ei_get_sign(buffer[ix + i]) * MAX_ACCEPTED_RANGE;
            }
        }
        buffer[ix + 0] *= CONVERT_G_TO_MS2;
        buffer[ix + 1] *= CONVERT_G_TO_MS2;
        buffer[ix + 2] *= CONVERT_G_TO_MS2;

        delayMicroseconds(next_tick - micros());
    }
    // Wrap the raw buffer in a signal_t structure for the classifier.
    signal_t signal;
    int err = numpy::signal_from_buffer(buffer, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
    if (err != 0) {
        ei_printf("Failed to create signal from buffer (%d)\n", err);
        return;
    }

    // Run the impulse: DSP feature extraction + classification.
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
    if (res != EI_IMPULSE_OK) {
        ei_printf("ERR: Failed to run classifier (%d)\n", res);
        return;
    }

    // LEDs are active-low on the Nano 33 BLE Sense: HIGH turns them off.
    digitalWrite(LEDR, HIGH);
    digitalWrite(LEDG, HIGH);
    digitalWrite(LEDB, HIGH);

    // Print the timing and the score of each label.
    ei_printf("Predictions ");
    ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
              result.timing.dsp, result.timing.classification, result.timing.anomaly);
    ei_printf(": \n");
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("    %s: %.5f\n", result.classification[ix].label,
                  result.classification[ix].value);

        // Only act on a label when the classifier is at least 80% confident.
        if (result.classification[ix].value < 0.80) {
            continue;
        }

        // Labels are C strings, so compare with strcmp(), not ==.
        if (strcmp(result.classification[ix].label, "idle") == 0) {
            // Idle: flash the green LED.
            digitalWrite(LEDG, LOW);
            delay(5000);
            digitalWrite(LEDG, HIGH);
        }
        else if (strcmp(result.classification[ix].label, "updown") == 0) {
            // Up-down motion: flash the red LED and sound the buzzer on D2.
            digitalWrite(LEDR, LOW);
            digitalWrite(D2, HIGH);
            delay(5000);
            digitalWrite(LEDR, HIGH);
            digitalWrite(D2, LOW);
        }
        else if (strcmp(result.classification[ix].label, "wave") == 0) {
            // Wave motion: flash the blue LED.
            digitalWrite(LEDB, LOW);
            delay(5000);
            digitalWrite(LEDB, HIGH);
        }
        else if (strcmp(result.classification[ix].label, "snake") == 0) {
            // Snake motion: flash red + blue together (magenta).
            digitalWrite(LEDR, LOW);
            digitalWrite(LEDB, LOW);
            delay(5000);
            digitalWrite(LEDR, HIGH);
            digitalWrite(LEDB, HIGH);
        }
    }
    // Make sure all LEDs are off before the next inference cycle.
    digitalWrite(LEDR, HIGH);
    digitalWrite(LEDB, HIGH);
    digitalWrite(LEDG, HIGH);

#if EI_CLASSIFIER_HAS_ANOMALY == 1
    ei_printf("    anomaly score: %.3f\n", result.anomaly);
#endif
}
“Thank you”