Application and Optimization of Robot Systems in Industrial Production Line
2024 Second International Conference on Inventive Computing and Informatics (ICICI) | 979-8-3503-7329-5/24/$31.00 ©2024 IEEE | DOI: 10.1109/ICICI62254.2024.00119
Abstract— This study focuses on the application and optimization of robotic systems in industrial production lines, with the overall goal of increasing manufacturing efficiency and streamlining process automation. A pioneering vision-based methodology for industrial robot sensing is presented, which enables precise control mechanisms through improved environmental data acquisition. In addition, a comprehensive methodology is presented for optimizing robot control at the physical level, thereby improving overall system efficiency and performance. By combining insights from an exhaustive review of related literature with innovative methodologies, this study makes a significant contribution to advancing the integration of robotic systems in industrial production lines. The experimental platform established for the visual target recognition and grasping system of industrial robots has undergone rigorous testing involving various objects commonly encountered in industrial environments. The results showed remarkable success rates in grasping various objects; in particular, the bottle and the small box showed exceptional performance, achieving a perfect 100% success rate across all trials. Conversely, some objects had comparatively lower success rates due to factors such as shape complexity or surface texture. These empirical results highlight the effectiveness of the proposed methods in facilitating reliable object recognition and gripping in industrial production lines.

I. INTRODUCTION

Digital factories [1]-[3] represent the forefront of contemporary manufacturing. They entail not merely digitizing traditional manufacturing processes but constitute a comprehensive revolution. Digital transformation has fundamentally altered all facets of production, manufacturing, and product design. This transformation enhances not only the efficiency of the production process but also endows companies with heightened flexibility and innovation capabilities. IoT technology plays a pivotal role in the establishment of digital factories. Extensively deploying sensors and smart devices facilitates real-time monitoring and data collection across all factory domains, thereby automating and optimizing production processes. Intelligent manufacturing technology endows machines with heightened intelligence, enabling production equipment to respond flexibly to diverse production requirements and achieve customization. Cloud computing technology furnishes digital factories with potent data storage and processing capabilities, facilitating centralized data management and real-time data sharing. The integration of artificial intelligence technology enables factories to conduct more intelligent production planning and forecasting, thereby enhancing production efficiency and product quality. The establishment of digital factories not only enhances production efficiency but also presents new challenges and opportunities to traditional human resource management approaches. Employees require enhanced digital skills and interdisciplinary capabilities to adjust to the production paradigm of the digital factory. Simultaneously, digital factories provide modern enterprises with expanded development opportunities, enabling them to respond effectively to market dynamics and customer demands, and to accomplish industrial upgrading and transformation. Within the realm of the digital factory, industrial production lines stand out as the cornerstone, central to driving efficiency and innovation. The integration of robot systems into these production lines is emerging as a paramount concern and an essential focus for further advancement.

As shown in Figure 1, the illustration depicts the intricate interplay of robotic systems within an industrial production line. This visual representation serves as a reference point to illustrate the integration and importance of robotic technologies in modern manufacturing processes.

Fig. 1. The Robot Systems in Industrial Production Line (Original Image Source: https://www.assemblymag.com/articles/97926-global-automotive-industry-employs-one-million-robots)

In this study, the application and optimization of robot systems in industrial production lines are discussed, and the primary contributions are:

1. An innovative vision-based methodology for industrial robot sensing is presented that aims to efficiently collect environmental data critical for precise control of robotic systems in production lines. This approach improves the collection of environmental information, thereby optimizing the control
and-rescue missions [14]. Rouček et al. describe the multi-robot exploration system of the CTU-CRAS team, which achieved significant success in the challenge. The study discusses the platforms, algorithms, and strategies employed, offering insights into the design and operation of effective exploration systems.

III. PROPOSED METHODOLOGY

A. The Vision Optimization of Robot Systems

Optimizing the vision modeling of robotic systems for industrial production lines requires a holistic approach that includes both hardware and software elements.

In order to achieve target recognition and accurately guide robot grasping, it is crucial to select appropriate hardware equipment. Key hardware includes cameras, lenses, light sources, and the robots that perform the grasping. Optimizing these devices improves the performance and reliability of the system to effectively support the automation tasks of industrial production lines. In machine vision systems, image capture is the foundation for subsequent work. Choosing the right camera, lens, and light source is critical to reducing the impact of the environment on image capture, improving image quality, and providing reliable data for subsequent processing and target recognition. There are two main types of cameras available on the market: CCD (Charge Coupled Device) [15]-[17] and CMOS (Complementary Metal Oxide Semiconductor) [18]-[20]. CCD cameras have a consistent and stable signal, while CMOS cameras have faster capture speeds but less stability. Considering the requirements of high-precision shooting and real-time image transmission of plastic membranes, it is more appropriate to choose a CMOS camera. CMOS cameras are also less expensive; in terms of the balance between performance, cost, and real-time capability, a CMOS camera is the better choice to meet the requirements of the system. In Figure 2, the camera specifications are presented.

Fig. 2. The Camera Specifications

When choosing a lens, considerations go beyond camera position, field of view, and focal length. Factors such as lens distortion and the working environment are also important. A lens typically consists of an aperture and lens elements. Adjusting the aperture and focal length based on the images being captured allows clear images to be obtained, mitigates the effects of distortion, and optimizes performance in a variety of conditions. The working distance W is given by:

W = L \cdot D / f   (1)

where L is the camera (sensor) size, D is the viewing angle, and f is the focal length. LED lights offer a wide range of colors, high brightness with low heat, long service life, and low prices. Combined with the needs of shooting, LED ring lights are chosen as the lighting for this system.

Then, at the software level, camera coordinate system calibration [21] is considered first. During the manufacturing process, cameras are exposed to various adverse factors such as manufacturing techniques, processing equipment, and temperature fluctuations, which create inherent distortion problems. Distortion can significantly alter captured images, causing straight lines to appear curved and hindering subsequent processes. Therefore, camera calibration is essential to determine the true intrinsic and extrinsic parameters, eliminate distortion effects, and provide a solid foundation for a seamless workflow. Typically, cameras are shipped from the factory with initial intrinsic parameters set. However, the actual parameters can vary significantly from these defaults for certain cameras. The degree of distortion in an image is largely influenced by the distance of a pixel from the center: the farther from the center, the more distorted the image. This non-linear distortion can be described by Equation 2:

x = \bar{x} + Q_x(\bar{x}, \bar{y}),  y = \bar{y} + Q_y(\bar{x}, \bar{y})   (2)

where (x, y) are the real image point coordinates and (\bar{x}, \bar{y}) are the distortion-free ideal pixel coordinates consistent with the linear imaging model. Q_x and Q_y are the nonlinear distortion values, which are related to the position of the image point in the image and can be defined as:

Q_x(\bar{x}, \bar{y}) = k_1 \bar{x} (\bar{x}^2 + \bar{y}^2)   (3)

Q_y(\bar{x}, \bar{y}) = k_2 \bar{y} (\bar{x}^2 + \bar{y}^2)   (4)

In general, the radial distortion in this model already expresses the nonlinear distortion well. However, according to research results, adding too many nonlinear parameters sometimes not only fails to improve the accuracy but may also lead to instability of the solution. Therefore, during the distortion correction process, the number of nonlinear parameters must be weighed to ensure a balance between accuracy and stability. Equation 2 can then be expressed as the updated Equation 5:

x = \bar{x} (1 + k_1 t^2),  y = \bar{y} (1 + k_2 t^2)   (5)

where t is defined by Equation 6:

t = \sqrt{\bar{x}^2 + \bar{y}^2}   (6)
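The radial model of Equations 5 and 6 can be sketched in a few lines of Python. This is an illustrative sketch only: the coefficient values and the fixed-point inversion scheme are assumptions for demonstration, not prescribed by the text.

```python
def distort(x_bar, y_bar, k1, k2):
    """Forward radial distortion model of Eq. (5): maps ideal
    (distortion-free) coordinates to the observed distorted ones."""
    t2 = x_bar ** 2 + y_bar ** 2  # t^2, from Eq. (6)
    return x_bar * (1.0 + k1 * t2), y_bar * (1.0 + k2 * t2)

def undistort(x, y, k1, k2, iters=20):
    """Invert Eq. (5) by fixed-point iteration (a common, simple choice;
    the text does not prescribe an inversion method)."""
    x_bar, y_bar = x, y  # initial guess: ideal == observed
    for _ in range(iters):
        t2 = x_bar ** 2 + y_bar ** 2
        x_bar = x / (1.0 + k1 * t2)
        y_bar = y / (1.0 + k2 * t2)
    return x_bar, y_bar
```

For small normalized coordinates the iteration converges quickly; in a real calibration pipeline k_1 and k_2 would be estimated from checkerboard images rather than assumed.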
As the radial radius increases, the distortion becomes more obvious; that is, the distortion value becomes larger as the distance from the center of the image increases. As an illustration, Figure 3 shows the sample noise checkerboard.

Fig. 3. The Sample Noise Checkerboard

Then, complex surface point cloud collection is designed for efficient visual sensing by the robots. In this model, a binocular stereo vision system is used to obtain point cloud information on the work-piece surface based on the work-piece surface characteristics and scene conditions. Traditional binocular systems can usually only obtain two-dimensional information of the work-piece, and the depth information must be derived by trigonometric principles, but its accuracy often cannot meet industrial standards. Therefore, structured light is often combined with a binocular vision system to form a binocular stereo vision measurement system. The structured light solution allows the camera to directly obtain the three-dimensional coordinate information of the target work-piece surface by projecting coded light strips or gratings, thereby achieving accurate depth measurement of the work-piece surface. This system can overcome the limitations of traditional binocular vision systems in depth perception, improve measurement accuracy, and meet the requirements of industrial applications. Let the normal vector of the plane formed by three non-collinear feature points be n. In order to establish the structured light plane equation, define the vector T_{i,j}:

T_{i,j} = [m_{ij}, a_{ij}, n_{ij}] = [x_j - x_i, y_j - y_i, z_j - z_i]   (7)

The collected point cloud is then pre-processed by removing noise points, fitting surfaces, and identifying regions of interest (ROI). By calculating and fitting the normal vector of the work-piece plane, more characteristic information about the curved work-piece can be obtained. These pre-processing steps help to improve the quality and accuracy of the point cloud data. Finally, using the proposed optimization model, the problem of extracting and further analyzing the work-piece surface features can be solved reliably, providing a sound basis for subsequent work-piece processing and analysis.

B. The Vision-based Control Optimization of Robot Systems

The robot-side program realizes two main functions: real-time communication with the host computer and motion control of the robot. The program adopts a multi-process, foreground-and-background mode. The background program is responsible for communication establishment, message parsing, instruction identification, and execution of non-motion instructions; the foreground program executes motion instructions. The background program starts automatically when the robot is powered on. After the robot starts, it initializes and creates a TCP server to wait for a connection from the host computer. Once connected, the background program blocks on receiving instructions and polls the robot for status information, such as posture and running status, so that the host computer can see the robot's status in real time. In addition, the background program controls the opening and closing of the EGM mode in the foreground program and controls the power on/off of the robot. When the EGM mode is off, the robot can be moved by message-based commands; when it is on, the robot can only be controlled in real time through UDP. After the robot is powered on and enabled, it can execute motion control commands; after it is powered off and disabled, it can only communicate to query the status and refuses to execute motion control commands. This type of programming makes communication and motion control between the robot and the host computer more flexible and secure. In Figure 4, the host computer control block diagram for robot systems is presented.

Fig. 4. The Host Computer Control Block Diagram for Robot Systems
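As a small illustration of Equation 7, the difference vectors between feature points can be crossed to recover the plane normal n. The helper names below are hypothetical, introduced only for this sketch:

```python
def t_vec(p_i, p_j):
    # T_{i,j} = [x_j - x_i, y_j - y_i, z_j - z_i], as in Eq. (7)
    return tuple(p_j[k] - p_i[k] for k in range(3))

def plane_normal(p0, p1, p2):
    # Normal of the plane through three non-collinear feature points:
    # n = T_{0,1} x T_{0,2} (cross product of two in-plane vectors)
    a, b = t_vec(p0, p1), t_vec(p0, p2)
    return (
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    )
```

With the normal in hand, the structured light plane equation follows as n · (p − p0) = 0 for any point p on the plane.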
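The background/foreground split described for the robot-side program can be sketched as a minimal instruction handler behind a TCP server. The text protocol here (GET_STATUS, EGM_ON, EGM_OFF, MOVE) is invented for illustration, since the text does not specify a wire format:

```python
import socket

# Hypothetical robot state; a real controller would query the drives.
STATUS = {"pose": [0.0] * 6, "running": False, "egm_on": False}

def handle_instruction(line):
    """Background task: parse one instruction and execute non-motion commands;
    motion commands are refused while the real-time EGM mode is active."""
    cmd = line.strip()
    if cmd == "GET_STATUS":
        return repr(STATUS)
    if cmd == "EGM_ON":
        STATUS["egm_on"] = True   # foreground takes over via real-time UDP
        return "OK"
    if cmd == "EGM_OFF":
        STATUS["egm_on"] = False  # message-based motion commands allowed again
        return "OK"
    if cmd.startswith("MOVE"):
        if STATUS["egm_on"]:
            return "ERR motion only via UDP while EGM is on"
        return "OK"
    return "ERR unknown instruction"

def background_server(host="127.0.0.1", port=5000):
    """Background program: TCP server waiting for the host computer,
    answering newline-delimited instructions one at a time."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:
                stream.write(handle_instruction(line) + "\n")
                stream.flush()
```

A host computer would connect with a plain TCP client and exchange newline-delimited instructions; the real-time EGM path over UDP is omitted from this sketch.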
This method establishes a model while disregarding joint force and mass, and derives mathematical expressions of joint motion by determining the geometric parameters and motion rules between each joint of the robot. This analysis offers an essential theoretical underpinning for a thorough comprehension of robot motion characteristics and for the optimization of motion control and grasping processes. Figure 5 shows the typical robot link coordinate system.
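A purely geometric (kinematic) link model of this kind is commonly written with Denavit-Hartenberg parameters. The sketch below assumes the standard DH convention, which the text does not explicitly name:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive link frames under the
    standard Denavit-Hartenberg convention (4x4, row-major lists)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(m, n):
    # Chain two 4x4 transforms; the base-to-tool pose is the
    # product of the per-link transforms over all joints.
    return [[sum(m[i][k] * n[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]
```

Chaining dh_transform over all joints of, for example, a six-axis arm yields the base-to-tool pose used when planning motion and grasping.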
In the experimental results, the success rates of grasping various objects are evaluated with the proposed robotic system. The success rates were determined over 100 grasp attempts for each object. The objects tested included a watch, hand cream, a bottle, a small box, toothpaste, tape, a toothbrush, a comb, glasses, and a razor. Among these, the bottle and the small box showed the highest success rates, with a perfect 100% success rate in all trials. This indicates the effectiveness of the robotic system in reliably picking up objects of different shapes and sizes. On the other hand, some objects had lower success rates. For example, the glasses and the razor had success rates of 50% and 75%, respectively. These lower success rates can be attributed to factors such as the object's shape, surface texture, or the complexity of the grasping requirements.

V. CONCLUSION

The integration of robotic systems into industrial production lines is a key step in improving manufacturing efficiency and advancing process automation. The proposed vision-based methodology for industrial robot sensing, coupled with optimization techniques for robot control, highlights the potential for significant advances in this area. Future research should delve deeper into refining these methodologies, exploring novel applications of robotic systems in various industrial sectors, and addressing pertinent challenges such as human-robot collaboration and safety concerns. By continuously innovating and optimizing robotic systems, this line of work can have a transformative impact on manufacturing efficiency, thereby driving industrial progress toward increased automation, productivity, and competitiveness in the global market landscape.

VI. ACKNOWLEDGEMENT

The study is funded by No. 2023-XJKJ-07, "Optimization design of industrial robot manipulator" (school-level scientific research project of Chongqing Aerospace Polytechnic).

REFERENCES

[1] Wang, Han, Wencai Du, and Shaobin Li. "Key Issues for Digital Factory Designing and Planning: A Survey." Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing 22 (2022): 18-29.
[2] Shamsuzzoha, Ahm, Rayko Toshev, Viet Vu Tuan, Timo Kankaanpaa, and Petri Helo. "Digital factory–virtual reality environments for industrial training and maintenance." Interactive Learning Environments 29, no. 8 (2021): 1339-1362.
[3] Chandra Sekaran, Sivadas, Hwa Jen Yap, Siti Nurmaya Musa, Kan Ern Liew, Chee Hau Tan, and Atikah Aman. "The implementation of virtual reality in digital factory—a comprehensive review." The International Journal of Advanced Manufacturing Technology 115, no. 5 (2021): 1349-1366.
[4] Wang, Jue, and Alex Chortos. "Control strategies for soft robot systems." Advanced Intelligent Systems 4, no. 5 (2022): 2100165.
[5] Sakurama, Kazunori, and Toshiharu Sugie. "Generalized coordination of multi-robot systems." Foundations and Trends® in Systems and Control 9, no. 1 (2021): 1-170.
[6] Molnar, Tamas G., Ryan K. Cosner, Andrew W. Singletary, Wyatt Ubellacker, and Aaron D. Ames. "Model-free safety-critical control for robotic systems." IEEE Robotics and Automation Letters 7, no. 2 (2021): 944-951.
[7] Yang, Zhen, Junli Li, Liwei Yang, Qian Wang, Ping Li, and Guofeng Xia. "Path planning and collision avoidance methods for distributed multi-robot systems in complex dynamic environments." Mathematical Biosciences and Engineering 20, no. 1 (2023): 145-178.
[8] Roveda, Loris, Jeyhoon Maskani, Paolo Franceschi, Arash Abdi, Francesco Braghin, Lorenzo Molinari Tosatti, and Nicola Pedrocchi. "Model-based reinforcement learning variable impedance control for human-robot collaboration." Journal of Intelligent & Robotic Systems 100, no. 2 (2020): 417-433.
[9] Mathew, Anup Teejo, Ikhlas Ben Hmida, Costanza Armanini, Frederic Boyer, and Federico Renda. "SoRoSim: A MATLAB toolbox for hybrid rigid–soft robots based on the geometric variable-strain approach." IEEE Robotics & Automation Magazine 30, no. 3 (2022): 106-122.
[10] Yu, Xinbo, Wei He, Hongyi Li, and Jian Sun. "Adaptive fuzzy full-state and output-feedback control for uncertain robots with output constraint." IEEE Transactions on Systems, Man, and Cybernetics: Systems 51, no. 11 (2020): 6994-7007.
[11] Liu, Hongyi, and Lihui Wang. "Remote human–robot collaboration: A cyber–physical system application for hazard manufacturing environment." Journal of Manufacturing Systems 54 (2020): 24-34.
[12] Simoni, Michele D., Erhan Kutanoglu, and Christian G. Claudel. "Optimization and analysis of a robot-assisted last mile delivery system." Transportation Research Part E: Logistics and Transportation Review 142 (2020): 102049.
[13] Yang, Yang, Li Juntao, and Peng Lingling. "Multi-robot path planning based on a deep reinforcement learning DQN algorithm." CAAI Transactions on Intelligence Technology 5, no. 3 (2020): 177-183.
[14] Rouček, Tomáš, Martin Pecka, Petr Čížek, Tomáš Petříček, Jan Bayer, Vojtěch Šalanský, Daniel Heřt et al. "DARPA Subterranean Challenge: Multi-robotic exploration of underground environments." In Modelling and Simulation for Autonomous Systems: 6th International Conference, MESAS 2019, Palermo, Italy, October 29–31, 2019, Revised Selected Papers 6, pp. 274-290. Springer International Publishing, 2020.
[15] Barak, Liron, Itay M. Bloch, Ana Botti, Mariano Cababie, Gustavo Cancelo, Luke Chaplinsky, Fernando Chierchie et al. "SENSEI: Characterization of single-electron events using a skipper charge-coupled device." Physical Review Applied 17, no. 1 (2022): 014022.
[16] Fonsêca, Hugo, Diego Rativa, and Ricardo Lima. "In-loco optical spectroscopy through a multiple digital lock-in on a linear charge-coupled device (CCD) array." Sensors 23, no. 16 (2023): 7195.
[17] Cancelo, Gustavo, Claudio Chavez, Fernando Chierchie, Juan Estrada, Guillermo Fernandez Moroni, Eduardo Paolini, Miguel Sofo Haro et al. "Low threshold acquisition controller for Skipper charge-coupled devices." Journal of Astronomical Telescopes, Instruments, and Systems 7, no. 1 (2021): 015001.
[18] Rahimi Azghadi, Mostafa, Ying-Chen Chen, Jason K. Eshraghian, Jia Chen, Chih-Yang Lin, Amirali Amirsoleimani, Adnan Mehonic et al. "Complementary metal-oxide semiconductor and memristive hardware for neuromorphic computing." Advanced Intelligent Systems 2, no. 5 (2020): 1900189.
[19] Turchetta, R. "Complementary metal-oxide-semiconductor (CMOS) sensors for high-performance scientific imaging." In High Performance Silicon Imaging, pp. 289-317. Woodhead Publishing, 2020.
[20] Zhao, Chenyi, Donglai Zhong, Lijun Liu, Yingjun Yang, Huiwen Shi, Lian-Mao Peng, and Zhiyong Zhang. "Strengthened complementary metal–oxide–semiconductor logic for small-band-gap semiconductor-based high-performance and low-power application." ACS Nano 14, no. 11 (2020): 15267-15275.
[21] Huang, Bingyao, Ying Tang, Samed Ozdemir, and Haibin Ling. "A fast and flexible projector-camera calibration system." IEEE Transactions on Automation Science and Engineering 18, no. 3 (2020): 1049-1063.