Artificial Intelligence and Image Processing Based Part Feeding Control in a Robot Cell
Abstract: In this study, an artificial intelligence-assisted image processing system was developed to prevent errors in part
feeding processes within an industrial robot cell. Using the YOLOv7-tiny model, accurate detection of parts was ensured,
enabling effective quality control. PLC communication was established via the ModBus protocol, and the system hardware
comprised an NVIDIA JETSON AGX ORIN, a BASLER acA2500-60uc camera, and a Raspberry Pi WaveShare monitor. A
total of 2400 data samples were used for model training, achieving an accuracy rate of 98.07%. The developed system
minimized human errors by preventing incorrect part feeding issues and significantly improved efficiency in production
processes. Notably, the system's superior accuracy and processing speed demonstrated its suitability for real-time
applications. In conclusion, this study highlights the effective implementation of artificial intelligence and image processing
techniques in industrial manufacturing processes.
Keywords: Artificial Intelligence, Image Processing, YOLOv7-tiny, Industrial Automation, Part Inspection.
How to Cite: Enesalp ÖZ; Muhammed Kürşad UÇAR (2025). Artificial Intelligence and Image Processing Based Part Feeding
Control in a Robot Cell. International Journal of Innovative Science and Research Technology, 10(3), 455-465.
https://doi.org/10.38124/ijisrt/25mar609
I. INTRODUCTION

The industrial sector has increasingly turned to industrial robots to optimize production processes and enhance efficiency during the Fourth and Fifth Industrial Revolutions. In this period, robots have undertaken various tasks in production environments, including part handling, process monitoring, and collaboration with operators. As a result, many manufacturing facilities have improved efficiency and ensured production continuity [1]. However, in factories and workshops where human labor still plays a crucial role, issues such as quality defects, missing parts, and insufficient production speed persist. In this context, integrating the advantages of automation with human flexibility and sensitivity in environments where industrial robots collaborate with humans is essential. This approach enhances interaction between industrial robots and humans, enabling more efficient and effective management of production processes.

In the automotive industry, various issues arise in processes involving human workers. In welding factories, vehicle bodies are assembled and welded by robots. To form the body, multiple subcomponents are welded together. The part assembly process is divided into two main production lines. The first type consists of fully automated lines where humans are not involved. In these lines, parts and body structures are transferred using automated equipment, positioned by robots and fixtures, and welded through intercommunicating robotic stations. The second type consists of side processes where humans are actively involved in assembling fundamental vehicle components, transferring parts, and positioning them correctly. Various errors, such as part damage, missing or excessive part assembly, and incorrect part feeding, frequently occur in these side processes.

Accurately detecting and identifying objects in production processes is critical for the efficiency and quality of industrial manufacturing facilities [2]. Correctly determining object characteristics such as color, shape, orientation, and texture enables various improvements in production processes. This detection and identification process ensures the selection of correct parts and contributes to the early detection of potential defects. Consequently, overall efficiency increases, and product quality improves in industrial production facilities. Additionally, accurate object detection helps reduce human errors, minimizing production defects and enhancing workplace safety. Therefore, object detection and identification play a fundamental role in improving manufacturing efficiency and quality [2].
After the data collection process is complete, data diversity must be ensured and data augmentation performed to enhance the accuracy and sensitivity of the AI model. The data augmentation methods are categorized as shown in Fig.6.
Geometric transformations refer to operations such as rotation and cropping. However, since the camera is fixed in this system, geometric transformations were not necessary. Photometric data augmentation, on the other hand, involves altering the pixel colors of existing images to generate additional data. In this project, the photometric data augmentation method was applied. During image collection, the camera was first adjusted to an optimal exposure setting, as shown in Fig.7. Subsequently, to simulate environmental effects and improve model training, the exposure settings were varied to collect images under different conditions, as shown in Fig.8. This approach helps simulate real-world factors such as shadows and lighting variations, allowing the AI model to function more accurately. The random occlusion technique involves modifying collected images by cutting or reducing certain parts. The final method, deep learning-based data augmentation, generates additional data by recreating existing objects using a trained AI model.

Image Processing - Labeling

As mentioned earlier in the image collection process, the exposure time was adjusted to simulate environmental effects in the production line. In addition to exposure adjustments, shadows were intentionally created by positioning objects near the structures where the parts are placed, further enhancing model training. Furthermore, with advancements in technology and evolving needs, various algorithms categorized under image preprocessing are used to both augment and diversify the data. Filters such as median filtering were applied to achieve data augmentation and diversification. The increased data variety obtained through preprocessing significantly strengthens the model's accuracy [19].
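The photometric augmentation and median filtering steps described above can be sketched in plain NumPy. This is an illustrative sketch, not the study's implementation: the gain values simulate different exposure levels and are assumptions, not the settings used on the production line.

```python
import numpy as np

def photometric_variants(img, gains=(0.6, 1.0, 1.4)):
    """Photometric augmentation: simulate exposure changes by scaling
    pixel intensities (illustrative gains, not the study's values)."""
    return [
        np.clip(np.rint(img.astype(np.float32) * g), 0, 255).astype(np.uint8)
        for g in gains
    ]

def median_filter3(img):
    """3x3 median filter for noise suppression; border pixels kept as-is."""
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Median over the 3x3 neighborhood (per channel for color images).
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2], axis=(0, 1))
    return out
```

In practice a library routine (e.g. OpenCV's median blur) would be used for speed; the loop form above only makes the operation explicit.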
With its advancements, YOLOv7 has started to become an industry standard. The primary reason for this is embedded in its name, "You Only Look Once." As the name suggests, YOLO analyzes an image in a single pass, making it highly efficient and well-suited for real-time applications and environments with limited computational resources. When comparing different YOLO versions, each iteration, including v2, v3, v4, v5, v6, and v7, has introduced key improvements while maintaining the fundamental steps of the YOLO framework. YOLOv2 integrated anchor boxes and introduced
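Because YOLO emits dense box predictions in a single forward pass, the raw output is typically reduced by a confidence threshold followed by non-maximum suppression (NMS). The sketch below illustrates that generic post-processing step in NumPy; the thresholds are common defaults, not values taken from this study, and the code is not the authors' implementation.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def filter_detections(boxes, scores, conf_thres=0.25, iou_thres=0.45):
    """Confidence filtering followed by greedy non-maximum suppression."""
    keep_conf = scores >= conf_thres
    boxes, scores = boxes[keep_conf], scores[keep_conf]
    order = np.argsort(-scores)  # highest-scoring boxes first
    kept = []
    for i in order:
        # Keep a box only if it does not heavily overlap an already-kept box.
        if all(iou(boxes[i], boxes[j]) < iou_thres for j in kept):
            kept.append(i)
    return boxes[kept], scores[kept]
```

Frameworks ship optimized versions of this step; the greedy loop above is the standard reference formulation.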
System Installation, Interface Design, and Model Integration

For the system installation, a step-by-step approach must be followed. The first step involves conducting a site inspection to analyze the characteristics of the object to be detected, review the conditions of the process where the system will be installed, and assess the site requirements. In the second step, the camera installation area is determined, ensuring that the camera can capture the required angle with an appropriate lens. Additionally, if the camera is installed in an environment affected by external factors, protective equipment must be considered. After setting up the camera, data collection, preprocessing, model selection, and model training are performed. These steps complete the artificial intelligence-related components of the system. To integrate the system into production and inform operators, a control and visualization design is required. First, the process workflow must be reviewed to determine when data should be collected and which signals will trigger the system. In this project, at the FSM RR Tack 2 station, after the main part and sub-part are positioned on the fixture, a "go to robot" (start welding) signal is sent by pressing a button, which then transmits a signal to the PLC. The system's objective is to inspect the part placement before the "go to robot" signal is sent.

The signal from the button to the PLC must also be transmitted to the edge device to capture an image. Various methods, including Ethernet TCP/IP, GPIO, and ModBus, can be used to transfer the PLC signal. In this project, the ModBus communication protocol and Adam IO were used to transfer the PLC signal to the edge device. Adam IO is a device that collects dry contacts from the PLC or any other device and transmits them via ModBus. Using Adam IO, the "go to robot" signal from the PLC was sent to the edge device. This setup ensures that the system is triggered by the signal, captures an image, and performs part feeding verification using the AI model. To implement the "go to robot" signal control, modifications were required in the PLC software. The AI model's control signal was added as a condition before executing the "go to robot" command. The AI model's verification signal is transmitted to the PLC via Adam IO, and the robot proceeds only if the verification is successful. Once the inspection is completed, the result should be displayed to the operator via the user interface. The user interface can be designed using various Python libraries such as Tkinter, Kivy, wxPython, and PyQt. In this project, the PyQt5 library was used for UI design. The interface dynamically updates based on the AI object detection
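The trigger-and-verify flow described above (PLC signal in via Adam IO, image capture, AI check, verification bit back to the PLC) can be sketched as a small control loop. The callables below are hypothetical stand-ins for the real Modbus, camera, and model calls (in practice libraries such as pymodbus and the camera vendor SDK would sit behind them); this is an assumed structure for illustration, not the authors' code.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class PartFeedingGate:
    # Hypothetical I/O hooks: in the real cell these would wrap the Adam IO
    # Modbus registers, the Basler camera grab, and YOLOv7-tiny inference.
    read_go_signal: Callable[[], bool]          # "go to robot" dry contact
    capture_image: Callable[[], Any]            # one camera frame
    run_inference: Callable[[Any], bool]        # part-feeding check result
    write_verification: Callable[[bool], None]  # verification bit to the PLC

    def step(self) -> bool:
        """One control cycle: the robot may proceed only if the check passes."""
        if not self.read_go_signal():
            return False                  # no start request yet; do nothing
        frame = self.capture_image()
        ok = bool(self.run_inference(frame))
        self.write_verification(ok)       # PLC gates "go to robot" on this bit
        return ok
```

On the PLC side, this verification bit is ANDed into the condition that releases the "go to robot" command, so an incorrect part feed blocks the weld cycle until the operator corrects it.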