Deep Learning-Enabled Visual Inspection of Gap Spacing in High-Precision Equipment: A Comparative Study
Abstract
1. Introduction
- (1) We propose an Inspection System that utilizes Machine Vision Techniques (ISMVT), built on deep learning-based line detection.
- (2) We investigate the usability of the ISMVT in an industrial inspection task. Moreover, we present one of the first case studies to examine how the ISMVT evaluates product quality with respect to the gap spacing between parts, compared with the feeler gauge-based inspection method.
- (3) Additionally, we discuss the benefits and implications of using deep learning-based line detection to improve product quality inspection.
2. Related Works
2.1. Quality Inspection Using Machine Vision and Deep Learning
2.2. Line Feature Detection
2.3. Summary
3. Prototype System
3.1. The Proposed Approach
- (1) Prior to measurement, it is essential to clean both the surface to be assessed and the feeler gauge with a lint-free cloth to remove any contaminants, such as dirt or metal shavings, that could compromise measurement accuracy.
- (2) Carefully insert the feeler gauge into the designated gap and move it back and forth. A slight resistance felt during this process suggests that the gap is approximately equal to the value marked on the gauge. Conversely, excessive or insufficient resistance implies that the actual gap is smaller or larger, respectively, than the marked value.
- (3) When measuring and adjusting the gap, first select a feeler gauge that corresponds to the specified gap size and insert it into the gap under consideration. Then adjust the gauge while applying a gentle pull until a slight resistance is felt. At this point, secure the lock nut; the value marked on the feeler gauge represents the measured gap.
- (1) System setup and camera calibration. First, the camera is focused and the LED lighting is adjusted. The system then detects circular points on the calibration plate for camera calibration using OpenCV (https://opencv.org/, accessed on 10 November 2024), from which the actual length of a single pixel is obtained (a minimal calibration sketch is given after this list). Because the calibration plate measures only 20 mm × 20 mm, we designed a 3D-printed fixture in CATIA P3 V5-6R 2020 to hold it; the detailed parameters are shown in Figure 4.
- (2) Random sampling inspection. The five positions on the product are distinct and unique; to improve efficiency during the inspection process, only the regions surrounding these five locations need to be examined.
- (3) Gap measuring. Data collection for the inspection operation begins in this step, in which straight-line detection is the most important part. Gap measuring is the core of the ISMVT method and is discussed in Section 3.3.
3.2. The ISMVT Environment
3.3. The LSD Algorithm
- (1) Setting and camera calibration
- (2) Image preprocessing and DeepLSD
- (3) Removing redundant lines and calculating the distance (a post-processing sketch follows this list)
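To illustrate step (3), the sketch below takes detected line segments, removes near-duplicate (redundant) segments, and converts the pixel distance between the two dominant gap edges into millimetres. The paper feeds the preprocessed image to DeepLSD; here OpenCV's built-in line segment detector is used only as a stand-in, and the thresholds and mm-per-pixel value are illustrative assumptions.

```python
import cv2
import numpy as np

MM_PER_PIXEL = 0.012      # assumed scale from the calibration step
ANGLE_TOL_DEG = 3.0       # segments within this angle are treated as the same orientation
MERGE_DIST_PX = 4.0       # near-duplicate segments closer than this are dropped

def angle_deg(seg):
    x1, y1, x2, y2 = seg
    return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0

def parallel(a, b):
    d = abs(angle_deg(a) - angle_deg(b))
    return min(d, 180.0 - d) < ANGLE_TOL_DEG

def point_to_line_px(pt, seg):
    """Perpendicular distance from pt to the infinite line through seg, in pixels."""
    p1 = np.array(seg[:2], float)
    p2 = np.array(seg[2:], float)
    d = p2 - p1
    v = np.asarray(pt, float) - p1
    return abs(d[0] * v[1] - d[1] * v[0]) / (np.linalg.norm(d) + 1e-9)

def measure_gap_mm(gray):
    # Stand-in detector; the paper uses DeepLSD for this step instead.
    segments = cv2.createLineSegmentDetector().detect(gray)[0]
    segments = [s.ravel() for s in segments]          # each segment is (x1, y1, x2, y2)
    segments.sort(key=lambda s: -np.hypot(s[2] - s[0], s[3] - s[1]))   # longest first

    # Remove redundant lines: keep a segment only if no kept segment is
    # nearly parallel and nearly collinear with it.
    kept = []
    for s in segments:
        redundant = any(
            parallel(s, k) and point_to_line_px(s[:2], k) < MERGE_DIST_PX for k in kept
        )
        if not redundant:
            kept.append(s)

    # Gap width: perpendicular distance between the two longest remaining edges.
    edge_a, edge_b = kept[0], kept[1]
    mid_b = ((edge_b[0] + edge_b[2]) / 2.0, (edge_b[1] + edge_b[3]) / 2.0)
    return point_to_line_px(mid_b, edge_a) * MM_PER_PIXEL
```

Because only the regions around the five sampled positions are processed, the number of candidate segments stays small and this simple duplicate removal is usually sufficient.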
4. Case Study
4.1. Experimental Details
4.1.1. Conditions
- (1) FGAI: in this case, the operator uses a feeler gauge to inspect the gap spacing, as shown in Figure 9a.
- (2) ISMVT: the inspection is performed with the machine vision method based on DeepLSD, as shown in Figure 9b. The prototype supports inspection tasks in the working environment through machine vision, allowing workers to remain focused on their tasks.
4.1.2. Task and Hypotheses
- (1) H1: Time. The ISMVT method will reduce inspection time compared with the FGAI method.
- (2) H2: Accuracy. The ISMVT method will achieve higher inspection accuracy than the FGAI method.
- (3) H3: Workload. The ISMVT method will impose a lower workload on users than the FGAI method.
- (4) H4: User experience. The ISMVT method will provide a better user experience than the FGAI method.
4.1.3. Participants and Procedure
4.2. Results
4.2.1. Inspection Time
4.2.2. Accuracy
4.2.3. Workload
4.2.4. User Experience
4.2.5. Preferences
5. Discussion and Limitations
6. Conclusions and Future Works
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Huang, H.L.; Jywe, W.Y.; Chen, G.R. Application of 3D Vision and Networking Technology for Measuring the Positional Accuracy of Industrial Robots. Int. J. Adv. Manuf. Technol. 2024, 135, 2051–2064. [Google Scholar] [CrossRef]
- de Assis Dornelles, J.; Ayala, N.F.; Frank, A.G. Smart Working in Industry 4.0: How Digital Technologies Enhance Manufacturing Workers’ Activities. Comput. Ind. Eng. 2022, 163, 107804. [Google Scholar] [CrossRef]
- Runji, J.M.; Lin, C.-Y. Markerless Cooperative Augmented Reality-Based Smart Manufacturing Double-Check System: Case of Safe PCBA Inspection Following Automatic Optical Inspection. Robot. Comput. Integr. Manuf. 2020, 64, 101957. [Google Scholar] [CrossRef]
- Natsagdorj, S.; Chiang, J.Y.; Su, C.-H.; Lin, C.; Chen, C. Vision-Based Assembly and Inspection System for Golf Club Heads. Robot. Comput. Integr. Manuf. 2015, 32, 83–92. [Google Scholar] [CrossRef]
- Lv, C.; Liu, B.; Wu, D.; Lv, J.; Li, J.; Bao, J. AR-Assisted Assembly Method Based on Instance Segmentation. Int. J. Comput. Integr. Manuf. 2024, 1–17. [Google Scholar] [CrossRef]
- Azamfirei, V.; Granlund, A.; Lagrosen, Y. Multi-Layer Quality Inspection System Framework for Industry 4.0. Int. J. Autom. Technol. 2021, 15, 641–650. [Google Scholar] [CrossRef]
- Hu, J.; Zhao, G.; Xiao, W.; Li, R. AR-Based Deep Learning for Real-Time Inspection of Cable Brackets in Aircraft. Robot. Comput. Integr. Manuf. 2023, 83, 102574. [Google Scholar] [CrossRef]
- Lončarević, Z.; Gams, A.; Reberšek, S.; Nemec, B.; Ude, A. Specifying and Optimizing Robotic Motion for Visual Quality Inspection. Robot. Comput. Integr. Manuf. 2021, 72, 102200. [Google Scholar] [CrossRef]
- Chen, C.; Wang, H.; Pan, Y.; Li, D. Vision-Based Robotic Peg-in-Hole Research: Integrating Object Recognition, Positioning, and Reinforcement Learning. Int. J. Adv. Manuf. Technol. 2024, 135, 1119–1129. [Google Scholar] [CrossRef]
- Zhao, Z.; Ma, X.; Shi, Y.; Yang, X. Multi-Scale Defect Detection for Plaid Fabrics Using Scale Sequence Feature Fusion and Triple Encoding. Vis. Comput. 2024, 1–17. [Google Scholar] [CrossRef]
- Yang, D.; Chen, N.; Liu, Z.J. Research on Defect Detection of Toy Sets Based on an Improved U-Net. Vis. Comput. 2024, 40, 1095–1109. [Google Scholar] [CrossRef]
- Azamfirei, V.; Psarommatis, F.; Lagrosen, Y. Application of Automation for In-Line Quality Inspection, a Zero-Defect Manufacturing Approach. J. Manuf. Syst. 2023, 67, 1–22. [Google Scholar] [CrossRef]
- Tarallo, A.; Mozzillo, R.; Di Gironimo, G.; De Amicis, R. A Cyber-Physical System for Production Monitoring of Manual Manufacturing Processes. Int. J. Interact. Des. Manuf. 2018, 12, 1235–1241. [Google Scholar] [CrossRef]
- Segura, Á.; Diez, H.V.; Barandiaran, I.; Arbelaiz, A.; Álvarez, H.; Simões, B.; Posada, J.; García-Alonso, A.; Ugarte, R. Visual Computing Technologies to Support the Operator 4.0. Comput. Ind. Eng. 2020, 139, 105550. [Google Scholar] [CrossRef]
- Montironi, M.A.; Castellini, P.; Stroppa, L.; Paone, N. Adaptive Autonomous Positioning of a Robot Vision System: Application to Quality Control on Production Lines. Robot. Comput. Integr. Manuf. 2014, 30, 489–498. [Google Scholar] [CrossRef]
- da Silva Santos, K.R.; Villani, E.; de Oliveira, W.R.; Dttman, A. Comparison of Visual Servoing Technologies for Robotized Aerospace Structural Assembly and Inspection. Robot. Comput. Integr. Manuf. 2022, 73, 102237. [Google Scholar] [CrossRef]
- Chiu, Y.C.; Tsai, C.Y.; Chang, P.H. Development and Validation of a Real-Time Vision-Based Automatic HDMI Wire-Split Inspection System. Vis. Comput. 2024, 40, 7349–7367. [Google Scholar] [CrossRef]
- Feng, Q.; Li, F.; Li, H.; Liu, X.; Fei, J.; Xu, S.; Lu, C.; Yang, Q. Feature Reused Network: A Fast Segmentation Network Model for Strip Steel Surfaces Defects Based on Feature Reused. Vis. Comput. 2023, 40, 3633–3648. [Google Scholar] [CrossRef]
- Leco, M.; Kadirkamanathan, V. A Perturbation Signal Based Data-Driven Gaussian Process Regression Model for in-Process Part Quality Prediction in Robotic Countersinking Operations. Robot. Comput. Integr. Manuf. 2021, 71, 102105. [Google Scholar] [CrossRef]
- Mei, B.; Liang, Z.; Zhu, W.; Ke, Y. Positioning Variation Synthesis for an Automated Drilling System in Wing Assembly. Robot. Comput. Integr. Manuf. 2021, 67, 102044. [Google Scholar] [CrossRef]
- Ferraguti, F.; Pini, F.; Gale, T.; Messmer, F.; Storchi, C.; Leali, F.; Fantuzzi, C. Augmented Reality Based Approach for On-Line Quality Assessment of Polished Surfaces. Robot. Comput. Integr. Manuf. 2019, 59, 158–167. [Google Scholar] [CrossRef]
- Kozamernik, N.; Braun, D. A Novel FuseDecode Autoencoder for Industrial Visual Inspection: Incremental Anomaly Detection Improvement with Gradual Transition from Unsupervised to Mixed-Supervision Learning with Reduced Human Effort. Comput. Ind. 2025, 164, 104198. [Google Scholar] [CrossRef]
- Orabi, M.; Tran, K.P.; Egger, P.; Thomassey, S. Anomaly Detection in Smart Manufacturing: An Adaptive Adversarial Transformer-Based Model. J. Manuf. Syst. 2024, 77, 591–611. [Google Scholar] [CrossRef]
- Kohler, M.; Mitsios, D.; Endisch, C. Reconstruction-Based Visual Anomaly Detection in Wound Rotor Synchronous Machine Production Using Convolutional Autoencoders and Structural Similarity. J. Manuf. Syst. 2025, 78, 410–432. [Google Scholar] [CrossRef]
- Nahar, L.; Islam, M.S.; Awrangjeb, M.; Verhoeve, R. Automated Corner Grading of Trading Cards: Defect Identification and Confidence Calibration through Deep Learning. Comput. Ind. 2025, 164, 104187. [Google Scholar] [CrossRef]
- Gu, G.; Ko, B.; Go, S.; Lee, S.-H.; Lee, J.; Shin, M. Towards Light-Weight and Real-Time Line Segment Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 726–734. [Google Scholar]
- Pautrat, R.; Barath, D.; Larsson, V.; Oswald, M.R.; Pollefeys, M. DeepLSD: Line Segment Detection and Refinement with Deep Image Gradients. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–27 June 2023; pp. 17327–17336. [Google Scholar]
- Von Gioi, R.G.; Jakubowicz, J.; Morel, J.-M.; Randall, G. LSD: A Fast Line Segment Detector with a False Detection Control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732. [Google Scholar] [CrossRef]
- Jafari-Marandi, R.; Khanzadeh, M.; Tian, W.; Smith, B.; Bian, L. From In-Situ Monitoring toward High-Throughput Process Control: Cost-Driven Decision-Making Framework for Laser-Based Additive Manufacturing. J. Manuf. Syst. 2019, 51, 29–41. [Google Scholar] [CrossRef]
- Altice, B.; Nazario, E.; Davis, M.; Shekaramiz, M.; Moon, T.K.; Masoum, M.A.S. Anomaly Detection on Small Wind Turbine Blades Using Deep Learning Algorithms. Energies 2024, 17, 982. [Google Scholar] [CrossRef]
- Memari, M.; Shekaramiz, M.; Masoum, M.A.S.; Seibi, A.C. Data Fusion and Ensemble Learning for Advanced Anomaly Detection Using Multi-Spectral RGB and Thermal Imaging of Small Wind Turbine Blades. Energies 2024, 17, 673. [Google Scholar] [CrossRef]
- Niyongabo, J.; Zhang, Y.; Ndikumagenge, J. Bearing Fault Detection and Diagnosis Based on Densely Connected Convolutional Networks. Acta Mech. Autom. 2022, 16, 130–135. [Google Scholar] [CrossRef]
- Franceschini, F.; Galetto, M.; Maisano, D.; Mastrogiacomo, L. Towards the Use of Augmented Reality Techniques for Assisted Acceptance Sampling. Proc. Inst. Mech. Eng. B J. Eng. Manuf. 2016, 230, 1870–1884. [Google Scholar] [CrossRef]
- Cao, A.; Chintamani, K.K.; Pandya, A.K.; Ellis, R.D. NASA TLX: Software for Assessing Subjective Mental Workload. Behav. Res. Methods 2009, 41, 113–117. [Google Scholar] [CrossRef] [PubMed]
- Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. Adv. Psychol. 1988, 52, 139–183. [Google Scholar]
- Wang, Z.; Zhang, X.; Li, L.; Zhou, Y.; Lu, Z.; Dai, Y.; Liu, C.; Su, Z.; Bai, X.; Billinghurst, M. Evaluating Visual Encoding Quality of a Mixed Reality User Interface for Human-Machine Co-Assembly in Complex Operational Terrain. Adv. Eng. Inform. 2023, 58, 102171. [Google Scholar] [CrossRef]
- Zhang, X.; Bai, X.; Zhang, S.; He, W.; Wang, S.; Yan, Y.; Yu, Q.; Liu, L. A Novel MR Remote Collaboration System Using 3D Spatial Area Cue and Visual Notification. J. Manuf. Syst. 2023, 67, 389–409. [Google Scholar] [CrossRef]
- Zhang, X.; Bai, X.; Zhang, S.; He, W.; Wang, S.; Yan, Y.; Wang, P.; Liu, L. A Novel Mixed Reality Remote Collaboration System with Adaptive Generation of Instructions. Comput. Ind. Eng. 2024, 194, 110353. [Google Scholar] [CrossRef]
- Wang, P.; Wang, Y.; Billinghurst, M.; Yang, H.; Xu, P.; Li, Y. BeHere: A VR/SAR Remote Collaboration System Based on Virtual Replicas Sharing Gesture and Avatar in a Procedural Task. Virtual Real. 2023, 27, 1409–1430. [Google Scholar] [CrossRef] [PubMed]
- Yan, Y.X.; Bai, X.; He, W.; Wang, S.; Zhang, X.Y.; Wang, P.; Liu, L.; Zhang, B. Can You Notice My Attention? A Novel Information Vision Enhancement Method in MR Remote Collaborative Assembly. Int. J. Adv. Manuf. Technol. 2023, 127, 1835–1857. [Google Scholar] [CrossRef] [PubMed]
- Wang, P.; Bai, X.; Billinghurst, M.; Zhang, S.; Zhang, J. 3DGAM: Using 3D Gesture and CAD Models for Training on Mixed Reality Remote Collaboration. Multimed. Tools Appl. 2020, 80, 31059–31084. [Google Scholar] [CrossRef]
- Wang, P.; Bai, X.; Billinghurst, M.; Zhang, S.; He, W.; Han, D.; Wang, Y.; Min, H.; Lan, W.; Han, S. Using a Head Pointer or Eye Gaze: The Effect of Gaze on Spatial AR Remote Collaboration for Physical Tasks. Interact. Comput. 2020, 32, 153–169. [Google Scholar] [CrossRef]
- Gurevich, P.; Lanir, J.; Cohen, B. Design and Implementation of TeleAdvisor: A Projection-Based Augmented Reality System for Remote Collaboration. Comput. Support. Coop. Work 2015, 24, 527–562. [Google Scholar] [CrossRef]
- Zhai, S.; Zhao, X.; Zu, G.; Lu, L.; Cheng, C. An Algorithm for Lane Detection Based on RIME Optimization and Optimal Threshold. Sci. Rep. 2024, 14, 27244. [Google Scholar] [CrossRef] [PubMed]
Gap i (mm) | 0.01 < i ≤ 0.40 | 0.40 < i < 0.90 | i ≥ 0.90 |
---|---|---|---|
Welding Process | WEP-A | WEP-B | Rejection |
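As a minimal illustration of the decision rule in the table above, the function below maps a measured gap value to the downstream welding process. The function name and fallback label are assumptions; only the thresholds come from the table.

```python
def select_welding_process(gap_mm: float) -> str:
    """Map a measured gap (mm) to the welding process defined in the table."""
    if 0.01 < gap_mm <= 0.40:
        return "WEP-A"
    if 0.40 < gap_mm < 0.90:
        return "WEP-B"
    if gap_mm >= 0.90:
        return "Rejection"
    return "Out of specified range"  # gaps at or below 0.01 mm are not covered by the table

print(select_welding_process(0.35))  # -> WEP-A
```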
QU# | Questions: 1 (I strongly disagree) to 5 (I strongly agree) |
---|---|
1 | The interaction operation was natural and intuitive. |
2 | I was able to stay actively focused on the task. |
3 | I can quickly adjust myself to inspection tasks when the interaction fails (controllable). |
4 | I enjoyed the inspection process. |
5 | I can perform the tasks using the status information provided in this condition. |
6 | This interface brings me a sense of amazement. |
7 | I can receive timely notifications during inspection tasks. |
8 | I can easily learn how to use this interface. |
9 | I was able to receive timely feedback during inspection activities. |
10 | The interactive operation can help me forget my daily troubles (immersion). |
11 | This interface is relatively noticeable. |
12 | I felt confident that I completed the task correctly. |
NO. | Questions | Z | p |
---|---|---|---|
QU1 | Natural and intuitive | −2.517 | 0.012 |
QU2 | Focus | −0.397 | 0.691 |
QU3 | Controllable | −2.285 | 0.022 |
QU4 | Enjoyment | −2.697 | 0.007 |
QU5 | Providing information | −2.581 | 0.010 |
QU6 | Amazement | −2.523 | 0.012 |
QU7 | Receiving timely notifications | −2.810 | 0.005 |
QU8 | Easy to learn | −2.404 | 0.016 |
QU9 | Timely feedback | −3.391 | 0.001 |
QU10 | Immersion | −1.234 | 0.217 |
QU11 | Noticeable | −2.228 | 0.026 |
QU12 | Confidence | −2.092 | 0.036 |
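For reference, the Z and p values in the table above are the kind of output produced by a Wilcoxon signed-rank test on paired ratings. The sketch below uses the normal approximation without tie correction; the sample ratings are invented and do not come from the study.

```python
import numpy as np
from scipy.stats import norm

def wilcoxon_z(x, y):
    """Wilcoxon signed-rank Z statistic and two-sided p-value (normal approximation,
    zero differences dropped, no tie correction)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]
    n = d.size
    ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks of |d|, no averaging of ties
    w_plus = ranks[d > 0].sum()
    mean = n * (n + 1) / 4.0
    sd = np.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mean) / sd
    p = 2 * norm.sf(abs(z))
    return z, p

# Invented paired ratings for one questionnaire item (FGAI vs. ISMVT).
fgai  = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 2, 3]
ismvt = [4, 4, 5, 4, 3, 5, 4, 4, 4, 5, 3, 4]
z, p = wilcoxon_z(fgai, ismvt)
print(f"Z = {z:.3f}, p = {p:.3f}")
```

In practice, scipy.stats.wilcoxon handles zero differences and tied ranks more carefully and would be the usual choice for reporting these statistics.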