EXPERIMENT - 9
Student Name: Diksha Pandey UID: 22BCS10359
Branch: BE-CSE Section/Group: FL_IOT_602 'A'
Semester: 6th Date of Performance: 13/04/25
Subject Name: Foundation of Cloud IOT Edge ML Subject Code: 22CSP-367
1. Aim: Automate quality inspection of products using cameras and edge computing.
2. Objective: To design and implement an automated quality inspection system for products using
cameras and edge computing.
3. Prerequisites:
Software Requirements:
1. Python (version 3.8 or above)
2. TensorFlow/Keras or PyTorch
3. OpenCV for image processing
4. Flask/Django for backend integration
5. MQTT or HTTP protocols for IoT data transfer
4. Procedure:
Step 1: Data Collection (Image Acquisition)
1. Cameras: Use high-quality cameras (e.g., industrial cameras or machine vision cameras) to
capture product images. Depending on the application, cameras can be positioned at various
points along the production line.
2. Lighting: Proper lighting is essential to ensure clear and consistent image capture. Lighting can
be adjusted to reduce shadows and enhance defect visibility.
3. Trigger Mechanism: Use sensors (such as proximity sensors or conveyor-mounted switches) to trigger
the camera when a product passes through; a minimal capture sketch follows this step.
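A minimal sketch of this triggered-capture step is shown below, assuming an OpenCV-readable camera on index 0 and a placeholder sensor_triggered() function standing in for the proximity-sensor input (both are illustrative assumptions, not details from this experiment):

import cv2

def sensor_triggered() -> bool:
    # Placeholder for the real trigger input (e.g. a GPIO read from a
    # proximity sensor on a Raspberry Pi); returns True when a product
    # is in front of the camera.
    return True

camera = cv2.VideoCapture(0)  # first attached camera (index 0 assumed)

def capture_product_image(save_path="product.jpg"):
    # Grab and save a single frame each time the trigger fires.
    if not sensor_triggered():
        return None
    ok, frame = camera.read()
    if not ok:
        raise RuntimeError("Camera frame could not be read")
    cv2.imwrite(save_path, frame)
    return frame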
9. Pre-trained Models for Feature Extraction: Use pre-trained models such as ResNet or VGG16 for
feature extraction, fine-tuning them on a labeled dataset of defective vs. non-defective products;
a loading sketch appears after this list.
10. Inference on Edge Devices: The trained model is deployed to an edge computing device such as a
Raspberry Pi, NVIDIA Jetson, or an industrial-grade embedded system for inference; a minimal
inference sketch appears after the code below.
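Before the classification head below, the pre-trained VGG16 base mentioned in step 9 might be loaded as a frozen feature extractor roughly as follows (the ImageNet weights and 224x224 input size are illustrative assumptions, not values taken from this experiment):

import tensorflow as tf
from tensorflow.keras.applications import VGG16

# Pre-trained VGG16 convolutional base used purely for feature extraction;
# its weights are frozen so only the new classification head is trained.
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False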
model = tf.keras.Sequential([
    base_model,                                     # frozen VGG16 feature extractor
    tf.keras.layers.Flatten(),                      # flattening layer assumed; not in the original fragment
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax')  # 2 classes: defect or no defect
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Load the trained model for inference on the edge device
model = tf.keras.models.load_model("defect_detection_model.h5")
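As a hedged sketch of the edge-side inference described in step 10, a captured frame can be preprocessed, classified with the loaded model, and the result published over MQTT so that alerts or actuators can react. The broker address, topic name, class-index mapping, and 224x224 input size are assumptions for illustration only:

import cv2
import numpy as np
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)  # assumed local MQTT broker

def inspect_frame(frame):
    # Resize and scale the frame to match the assumed training input.
    img = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    probs = model.predict(np.expand_dims(img, axis=0))[0]
    # Class index 0 is assumed to mean "defect"; this depends on how the
    # training labels were encoded.
    label = "defect" if np.argmax(probs) == 0 else "no_defect"
    client.publish("factory/inspection/result", label)  # hypothetical topic
    return label

On a Raspberry Pi or Jetson, this function would typically be called for every frame produced by the triggered-capture step from the data collection stage.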
6. Screenshot:
Using transfer learning with a pre-trained VGG16 model, we extracted meaningful features and
customized the network to differentiate between defective and non-defective products, improving
classification accuracy. Finally, we successfully deployed the trained TensorFlow model, demonstrating
its capability to make real-time predictions and trigger automated responses, such as alerts or actuator
controls. This workflow highlights the practical implementation of deep learning for industrial
automation, enhancing defect detection efficiency and reliability.