Article

Deep Learning-Enabled Visual Inspection of Gap Spacing in High-Precision Equipment: A Comparative Study

1 School of Mechanical and Electrical Engineering, Zheng Zhou Railway Vocational & Technical College, Zhengzhou 450052, China
2 College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
3 Big Data Institute, Baoshan University, Baoshan 678000, China
* Author to whom correspondence should be addressed.
Submission received: 24 December 2024 / Revised: 16 January 2025 / Accepted: 17 January 2025 / Published: 21 January 2025
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

Abstract

In the realm of industrial quality control, visual inspection plays a pivotal role in ensuring product precision and consistency. It also enables non-contact inspection, protecting products from potential damage, and its timely monitoring capabilities support quick decision making. However, traditional methods, such as manual inspection using feeler gauges, are time-consuming, labor-intensive, and prone to human error. To address these limitations, this study proposes a deep learning-based visual inspection system for measuring gap spacing in high-precision equipment. Built on the DeepLSD algorithm, the system integrates traditional and deep learning techniques to enhance line segment detection, yielding more robust and accurate inspection outcomes. The result is a deep learning-enabled, high-precision mobile system for inspecting gap spacing in real time. Through a comparative analysis with the traditional feeler gauge method, the proposed system demonstrated significant improvements in inspection time, accuracy, and user experience, while reducing workload. Experimental results validate the effectiveness and efficiency of the proposed approach, highlighting its potential for widespread application in industrial quality inspection activities.

1. Introduction

This paper focuses on inspecting gap spacing based on machine vision using the deep learning technique. The inspection activities can be enhanced through the implementation of digital technologies, such as machine vision [1], particularly in the context of quality control during assembly [2,3]. In practice, the assembly of high-precision aerospace equipment typically involves substantial manual labor and high costs and requires human-operated machinery during most processes [4]. The economic benefits derived from enhancing inspection productivity and reliability significantly outweigh those obtained simply by reducing production costs. Traditionally, the process for aerospace equipment involves manual completion through various stages [5]. At the same time, the increasing inclination towards product diversification and small batches has considerably hindered the high-precision and intelligence capabilities of assembly quality inspection. This situation has, in turn, resulted in a heightened dependence on manual inspection methods. After initial assembly, the size of the fit gap space is examined by applying a feeler gauge. Subsequently, human visual inspection is conducted to verify if the products comply with quality standards. This process entails extensive manual checking, which is not only laborious and time-intensive but also susceptible to errors.
Therefore, the inspection of critical steps is highly important during the assembly process, while the inspection of assembly quality in manual assembly has become a bottleneck, limiting both product quality improvement and the enhancement of batch production capacity [6]. The rapid advancement of machine vision technologies utilizing deep learning methodologies has made the benefits of machine vision-based inspection increasingly apparent, such as high accuracy and precision, improving inspection efficiency, reducing the need for manual labor, and cutting costs. In light of the advantages of inspection based on machine vision, the integration of inspection based on machine vision and visualization can play an important role in improving workers’ productivity.
Indeed, some researchers have pioneered the use of machine vision for assessing product quality in industry. This innovative approach has the potential to revolutionize the sector by significantly enhancing product quality standards and customer satisfaction. Machine vision systems, which combine high-resolution cameras and advanced image processing algorithms, offer several key advantages over traditional human inspection methods. They can inspect products with greater efficiency, accuracy, and consistency. In Industry 4.0, cyber–physical integration aims to improve efficiency and quality in manufacturing inspection; for example, Hu et al. [7] developed a vision-based platform that combined AR with deep learning for inspecting cable brackets in aircraft assembly. However, such methods are designed for a specific product. In particular, this method is incapable of performing quality inspections that depend on the spacing between components in contact.
Taking these factors into account, our research is designed to close the existing gap in inspection methods that rely on machine vision. We aim to enhance the understanding of how inspection performance and user experience are influenced by our proposed approach. Specifically, this study was largely inspired by previous research [4,7], and our research expands upon that work by exploiting the advantages of vision-based non-destructive inspection and intuitive visual representations, concentrating on performance, precision, and user experience. Our methodology is characterized by its exceptional flexibility and capability to identify manufacturing defects in real time, thereby ensuring high-precision inspections without the necessity for highly skilled technicians. As a result, this proposed approach enables swift quality evaluations and markedly improves the overall productivity and efficiency of the inspection process. It effectively addresses the inefficiencies inherent in traditional inspection methods that depend on feeler gauges, which are frequently time-consuming and labor-intensive and often involve random sampling checks. Ultimately, we have developed a machine vision inspection system aimed at assisting technicians in industrial quality inspections by enhancing efficiency, improving user experience, and minimizing errors in comparison to conventional methods. This research makes significant contributions by addressing the limitations of prior studies and providing compelling evidence of the considerable advantages of machine vision-based inspection utilizing deep learning techniques for quality assessment in practical aeronautical product assembly contexts. Therefore, compared to previous work, the novelty of our research is threefold:
(1)
We propose an Inspection System based on Machine Vision Techniques (ISMVT) that uses deep learning-based line detection.
(2)
An investigation of the usability of the ISMVT is performed in an industrial inspection task. Moreover, we present one of the first case studies to investigate how the ISMVT can be used to assess product quality with respect to the gap spacing between parts, compared to the feeler gauge-based inspection method.
(3)
Additionally, we discuss the benefits and implications of using line detection based on deep learning to improve product quality inspection.
The rest of this paper is structured as follows. Section 2 reviews related work. Section 3 describes the proposed approach in detail. Section 4 presents a case study that evaluates the prototype system and reports the results. Section 5 discusses the findings and limitations, and Section 6 concludes with future work.

2. Related Works

2.1. Quality Inspection Using Machine Vision and Deep Learning

Machine vision plays a crucial role in executing various tasks within industrial production. It facilitates the acquisition of information regarding the quality inspection and quality control processes. Additionally, quality inspection is recognized as one of the standard tasks associated with machine vision in industrial settings. Prior research has explored how machine vision and deep learning can be integrated and used in the industrial sector, uncovering significant benefits such as increased productivity, better training and maintenance procedures, reduction of costs, and improved cost-effectiveness and operational interaction experience [8,9,10,11].
In a constantly evolving business landscape, there is a growing need for high-quality products [12]. For visual quality inspection, Lončarević et al. [8] present and experimentally evaluate a novel methodology for defining visual inspection trajectories using CAD models of the workpieces designated for inspection. In their experimental findings, the algorithm based on machine vision and reinforcement learning demonstrated a cycle time reduction of up to 53% compared to the manual operation. In addition, a machine vision technique is utilized in the quality inspection procedure to concurrently detect product flaws through the camera system, thereby enhancing the efficiency of operators’ inspection tasks [3,13] and improving technicians’ cognitive processes in understanding and gaining inspection results [14]. In this trend, the machine vision-based system has been significantly influential, particularly in the realm of process control and online quality evaluation across various industries. Montironi et al. [15] developed a vision-based adaptive strategy for positioning a camera within a 3D space to automatically avoid obstacles when obstructing the camera’s line of sight during inspection by applying a Fast Fourier Transform algorithm. The method was used for the online quality control of washing machine parts and it was found that it could improve inspection efficiency. In Industry 4.0, cyber–physical integration aims to improve efficiency and quality in manufacturing inspection. Runji et al. [3] introduced a new markerless AR inspection system for PCBA following an automatic optical inspection. Yang et al. [11] proposed an optimized U-Net by introducing a Squeeze-and-Excitation block to improve the efficiency and accuracy of the defect detection of toy sets using machine vision. A new visual servoing method for automated aerospace manufacturing is introduced by Santos et al. [16]. The proposed method uses a camera and beam sensors on a collaborative robot to detect boundaries on aircraft parts. This enables the robot to automatically generate and follow precise trajectories during assembly and inspection tasks. The approach achieves an accuracy of 0.04 mm and a repeatability of 0.59 mm. Chiu et al. [17] presented a real-time visual inspection system for HDMI cables based on YOLOv4, and the results showed that the proposed method, which uses deep learning, can reduce the execution time compared to the traditional manual inspection. Zhao et al. [18] introduced an improved YOLOv8 to detect fabric defects by integrating scale sequence feature fusion and triple encoding. Leco and Kadirkamanathan [19] presented a method using Gaussian process regression. It combines real-time data with inspection in robotic countersinking on aircraft parts. Experiments showed it can predict part quality, making inspection more efficient during manufacturing. Feng et al. [10] developed a fast segmentation network, known as the feature reused network (FR-Net), aimed at optimizing the identification of minor imperfections and improving real-time detection capabilities during the inspection of strip steel surface quality. Mei et al. [20] investigated the application of an industrial camera in the assessment and enhancement of the precision in the assembly of aircraft wings. AR is a key technology in Industry 4.0, enhancing quality and productivity. Ferraguti et al. [21] developed a novel AR online quality assessment method to assist workers in evaluating industrial polishing. 
AR-assisted assembly overlays virtual models and instructions on real scenes to guide workers but lacks scene awareness and requires frequent interactions. To overcome this drawback, Lv et al. [5] proposed the assembly status perception using an instance segmentation method and RGB-D information, enabling step determination and AR instruction triggering to minimize user interaction. Kozamernik et al. [22] presented the FuseDecode Autoencoder (FuseDecode AE), an innovative anomaly detection model designed for automated visual inspection. The model employs incremental learning, initially operating in an unsupervised manner on predominantly normal images, then progressively adapting to weakly labeled data and further refining its performance with real anomalous samples. Orabi et al. [23] introduced the “Adaptive Adversarial Transformer,” an innovative deep learning technique that integrates Transformer architecture with anomaly attention and adversarial learning. This method is designed to detect complex temporal patterns and differentiate between normal and anomalous behaviors in smart manufacturing data. Kohler et al. [24] presented a novel unsupervised visual anomaly detection method for wound rotor synchronous machines using convolutional autoencoders and the structural similarity index measure, introducing the Winding Anomaly Dataset and masked mean structural dissimilarity index measure for improved detection. Nahar et al. [25] developed a deep neural network for automated defect detection in trading cards, with a focus on corner grading. Utilizing DenseNet enhanced by regularization techniques, the network achieves a mean accuracy of 83%. The study further investigates model calibration and integrates a rule-based approach along with a human-in-the-loop system to boost the reliability and overall performance of the defect detection process.
The above-mentioned research showed that the vision inspection approach based on the deep learning technique is very useful for improving product/assembly quality. Nevertheless, for machine vision-based inspection tasks in industry, the combination of vision inspection using deep learning and gap detection has not been well explored.

2.2. Lines Feature Detection

Numerous studies have shown that employing machine vision for inspection in assembly processes can boost the efficiency and precision of products. Mechanical products typically exhibit minimal surface texture but possess regular shapes with abundant shape features, such as straight lines. Consequently, we are investigating a machine vision technique for extracting line features to assess product quality. Research has established that line segments and their intersections are fundamental in vision, offering essential information for advanced visual tasks [26,27]. Due to their broad presence and detectability even in texture-less areas, like those found on mechanical products, they constitute a valuable feature for machine vision inspection tasks. For example, they are useful for checking the gap spacing between mating parts. Such applications require a robust and accurate detector capable of identifying line features in images. Moreover, von Gioi et al. [28] introduced a linear-time line segment detector (LSD) that is capable of reliably identifying local straight lines with subpixel precision. Nevertheless, these types of techniques may encounter obstacles in demanding environments such as low lighting, where the image gradient is laden with noise. Furthermore, they might disregard the overall scene and incorrectly identify any cluster of pixels exhibiting a similar gradient orientation.
Golf club head production involves creating metallic films, forming blank casts, and undergoing complex assembly, inspection, and testing procedures. These processes not only expose operators to harmful chemicals but also require substantial manual labor for assembly and welding. To address these issues, Natsagdorj et al. [4] introduced a vision-based inspection system designed for golf club heads that utilized two cameras, thereby mitigating the time-consuming and labor-intensive nature of the process. This system can monitor the penetration depth of the striking plate and verify the loft angle to guarantee compliance with specifications, while the synchronization of the two cameras enhances the precision and productivity of the assembly and inspection procedure. Recently, Azamfirei et al. [12] carried out a thorough examination of the existing scholarly works pertaining to the utilization of real-time quality inspection and zero-defect manufacturing methodologies. They described four types of quality inspection conducted along the production line: in situ quality inspection, in-line quality inspection, on-machine quality inspection, and in-process quality inspection. Specifically, in situ quality inspection refers to evaluating the workpiece or sub-assembly within the same operational context and manufacturing surroundings. It can be flexibly utilized interchangeably between at-line quality inspection and different phases of in-line quality inspection [12,29]. Particularly, our research is centered on the in situ quality inspection stage for the assembly of aeronautical products, specifically examining the gap spacing between contacting parts.
Due to the rapid advancements in deep learning, deep neural networks have been effectively utilized to overcome these limitations, enhancing accuracy in tasks such as line segment detection through diverse geometric cues, inspection of product features, and fault identification [26,30]. Pautrat et al. [27] proposed DeepLSD, which combines traditional and learned approaches for accurate and robust line detection. It uses a deep network to create a line attraction field, which is converted to a surrogate gradient for existing detectors. A new optimization tool refines line segments using attraction fields and vanishing points, significantly improving accuracy. Additionally, Gu et al. [26] introduced a real-time and lightweight line segment detector named Mobile-LSD (MLSD). This detector achieves its efficiency and reduces computational load by streamlining the network architecture. Memari et al. [31] presented an innovative approach for the inspection of wind turbine blades, leveraging the advantages of RGB and thermal imaging in conjunction with a deep learning methodology that integrates a vision transformer with DenseNet161 [32]. The results highlighted the effectiveness of utilizing global feature analysis through the vision transformer, in tandem with the dense connectivity offered by DenseNet161, to identify fault characteristics in wind turbines.
In summary, line feature detection is highly valuable for facilitating real-time instructions and streamlining complex procedural tasks in inspection activities. For product quality inspection tasks concerning gap spacing, however, the potential of vision inspection through deep learning remains largely unexplored in current research, and these existing approaches leave room for improvement through line feature detection techniques. Moreover, it is essential to establish appropriate principles for the design of the inspection system to effectively minimize cognitive workload and enhance decision making.

2.3. Summary

From the related work on machine vision- and deep learning-based inspection, we can draw two conclusions. (1) Much research is focused on inspection activities in industrial sectors, particularly in the field of in situ quality inspection. The vision-based method of detecting lines plays an important role in extracting parts’ features. Furthermore, inspection activities based on deep learning are receiving increasing attention from researchers. (2) Although a lot of research has explored the effects of quality inspection in assembly and maintenance, relatively few studies have investigated how detecting line features affects quality inspection. Critically, this research opens up new opportunities for improving quality inspection activities in industrial settings. More significantly, there is a lack of comprehensive exploration into the integration of machine vision through deep learning techniques for inspection purposes within the industrial sector. Therefore, we propose a deep learning-based approach for inspecting the gap spacing of high-precision equipment. In summary, this paper focuses on line detection based on deep learning for the inspection of product quality. Although some parts have few texture features, they have regular shapes with rich shape characteristics (e.g., straight lines and circles). Therefore, we explore the use of vision-based methods using deep learning, such as line detection, to extract the main features of aeronautical parts and then judge the quality of the product based on the value of the gap.

3. Prototype System

3.1. The Proposed Approach

The proposed quality inspection approach is a multi-stage process built around successive measurements of the gap spacing, which primarily determine the appropriate welding process (e.g., WEP-A, WEP-B), as shown in Table 1. The process aims to classify products based on the gap spacing.
Figure 1 shows the main stages of the method, which returns a distance value for quality assessment as a crucial part of the inspection process. The inspection can adopt either the traditional Feeler Gauge-assisted Inspection (FGAI) or the ISMVT. Indeed, the quality assessment process mainly follows the result of sampling inspection, a statistical procedure for accepting or rejecting a production lot before the next process. Specifically, quality inspection procedures are typically employed to verify that incoming product lots meet specified standards. When the lot size is significantly large and inspections are time-consuming or inefficient, acceptance sampling is usually favored over 100% inspection [33].
Acceptance sampling typically involves randomly selecting a set of locations on a product that is representative for the purpose of quality inspection (see Figure 2). The product is then evaluated based on the inspection results at the sampled locations. If the gap spacing values are within tolerance, the product is accepted and welding process A or B is adopted; otherwise, the product is rejected.
Research within the field of enterprise indicates that there are multiple criteria pertaining to the appropriate utilization of feeler gauges, which will have a substantial and direct impact on the outcomes of inspections. Standard operational precautions encompass:
(1)
Prior to measurement, it is essential to clean both the surface to be assessed and the feeler gauge using a lint-free cloth to eliminate any potential contaminants, such as dirt or metal shavings, which could compromise measurement accuracy.
(2)
Carefully insert the feeler gauge into the designated gap and maneuver it back and forth. A slight resistance felt during this process suggests that the gap is approximately equal to the value indicated on the feeler gauge. Conversely, if the resistance encountered is excessive or insufficient, it implies that the actual gap measurement is either smaller or larger than the value indicated on the gauge.
(3)
In the process of measuring and adjusting the gap, it is advisable to first select a feeler gauge that corresponds to the specified gap size and insert it into the gap under consideration. Subsequently, adjust the gauge while applying a gentle pull until a slight resistance is felt. At this juncture, secure the lock nut, as the value indicated on the feeler gauge will represent the measured gap value.
In this research, we utilize a feeler gauge with blade thicknesses ranging from 0.01 mm to 1.00 mm. Consequently, human factors (e.g., fatigue, emotional state, superficiality) have a direct impact on the inspection results, and the FGAI method relies heavily on the experience and skill proficiency of the operators.
The operator inherently performs an assessment of the gap spacing along the inspection process and determines when it meets the expected specifications. Gap qualification is possible via the proposed ISMVT, which returns objective data that is primarily classified into two categories: acceptance and rejection. The former includes the WEP-A and WEP-B conditions, while the latter applies when the gap value at any of the five positions exceeds 0.900 mm, in which case the product is considered to be of inferior quality. The ISMVT-based tool returns reliable inspection results about the gap spacing and allows us to identify the connection between the welding process and the product quality. Consequently, the ISMVT facilitates the objective assessment of gap spacing, thereby aiding in the determination of the subsequent processes that should be implemented.
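To make the classification rule above concrete, the following is a minimal sketch of the accept/reject decision. The 0.900 mm rejection limit is taken from the text; the WEP-A/WEP-B boundary (WEP_A_MAX_MM) is a hypothetical placeholder for the value specified in Table 1.

```python
from typing import List

REJECT_LIMIT_MM = 0.900   # any sampled gap above this value rejects the product
WEP_A_MAX_MM = 0.500      # hypothetical WEP-A/WEP-B boundary; see Table 1 for the real value

def classify_product(gaps_mm: List[float]) -> str:
    """Classify a product from the gap values measured at the five sampled locations."""
    if any(g > REJECT_LIMIT_MM for g in gaps_mm):
        return "reject"                      # requires rework and re-inspection
    if all(g <= WEP_A_MAX_MM for g in gaps_mm):
        return "accept: WEP-A"
    return "accept: WEP-B"

print(classify_product([0.12, 0.20, 0.18, 0.25, 0.22]))   # -> accept: WEP-A
```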
The present research focuses on the crucial quality inspection step and provides a new approach to help the operator in the inspection stages by enhancing the real components with high-quality data. The approach involves several steps and relies on the ISMVT/FGAI system, as depicted in Figure 3.
In the following, a detailed explanation of each step is given.
(1)
System setup and camera calibration. First, the camera focus and the LED lighting are adjusted. Then, the system detects circular points for camera calibration using OpenCV (https://opencv.org/, accessed on 10 November 2024), from which we obtain the actual length of a single pixel (a minimal calibration sketch is given after this list). Because the calibration plate measures 20 mm × 20 mm, we designed a 3D-printed fixture in CATIA P3 V5-6R 2020 to hold it; the detailed parameters are shown in Figure 4.
(2)
Random sampling inspection. The five positions on the product are distinct and unique. However, to improve efficiency during the inspection process, we only need to concentrate on the regions surrounding these five locations.
(3)
Gap measuring. The data collection process for the inspection operation begins. In this step, straight-line detection is most important. Moreover, gap measuring is the key part of the ISMVT method and it will be discussed in Section 3.3.
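As referenced in step (1), the following minimal sketch estimates the length of a single pixel from the calibration plate, assuming the 20 mm × 20 mm plate carries a symmetric circle grid; the grid size and the centre spacing below are illustrative assumptions rather than the plate's actual layout.

```python
import cv2
import numpy as np

GRID = (7, 7)        # assumed number of circles per row and column
SPACING_MM = 2.5     # assumed centre-to-centre distance between adjacent circles

img = cv2.imread("calibration_plate.png", cv2.IMREAD_GRAYSCALE)
found, centers = cv2.findCirclesGrid(img, GRID, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
if not found:
    raise RuntimeError("calibration plate not detected")

# Average pixel distance between horizontally adjacent circle centres.
centers = centers.reshape(GRID[1], GRID[0], 2)
px_dist = np.linalg.norm(np.diff(centers, axis=1), axis=2).mean()
mm_per_pixel = SPACING_MM / px_dist          # the paper reports 0.0021 mm per pixel
print(f"mm per pixel: {mm_per_pixel:.4f}")
```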

3.2. The ISMVT Environment

The ISMVT was developed on Windows 10 using PyCharm Community Edition 2024.2.1, Python 3.12.6, opencv-python 4.8.0, PySide 5.10, torch 1.12, and torchvision 0.13. The hardware comprises a StarMICRO camera (Suzhou Dexinshun Trading Co., Ltd., Suzhou, China) and a computer (Intel Core i7-10700K CPU @ 3.80 GHz, 32 GB RAM). Additionally, the environment provides a screen on which operators can read the gap spacing value while inspecting the product.

3.3. The LSD Algorithm

In recent years, deep learning-based approaches to LSD have garnered increasing attention. In this study, we utilized DeepLSD (Code is available: https://github.com/moonlightpeng/ISMVT, accessed on 17 January 2025) [27], drawing inspiration from prior research pertaining to LSD [26,27,28].
The DeepLSD approach involves utilizing a deep neural network to process images in order to create a line attraction field, as shown in Figure 5. This field is subsequently transformed into a surrogate representation of the image’s gradient magnitude and angle, which can then be utilized by any pre-existing handcrafted line detection algorithm. Furthermore, this approach introduces a novel optimization tool designed to enhance line segments by utilizing the attraction field and vanishing points. This refinement significantly increases the accuracy of existing deep detection models. This new line segment detector of DeepLSD can overcome the limitations of traditional line detectors, such as insufficient resilience in the presence of noisy images and under challenging conditions. For more details about DeepLSD, please refer to Pautrat et al. [27].
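As an illustration, the following is a minimal sketch of running DeepLSD on a grayscale image. The import path, configuration keys, checkpoint format, and output keys follow the authors' public DeepLSD repository and are assumptions here; the ISMVT fork linked above may expose a slightly different interface, and the weight and image file names are placeholders.

```python
import cv2
import torch
from deeplsd.models.deeplsd_inference import DeepLSD   # assumed module path

device = "cuda" if torch.cuda.is_available() else "cpu"
conf = {"detect_lines": True, "line_detection_params": {"merge": False}}  # assumed keys

net = DeepLSD(conf).to(device).eval()
ckpt = torch.load("deeplsd_md.tar", map_location=device)    # placeholder pretrained weights
net.load_state_dict(ckpt["model"])

gray = cv2.imread("gap_roi.png", cv2.IMREAD_GRAYSCALE)
inp = {"image": torch.tensor(gray, dtype=torch.float, device=device)[None, None] / 255.0}
with torch.no_grad():
    lines = net(inp)["lines"][0]    # assumed output: (N, 2, 2) array of segment endpoints
print(f"{len(lines)} line segments detected")
```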
The overview of our deep learning-based DeepLSD approach for assessing the quality of gap spacing is described in Figure 6 and mainly includes three steps:
(1) 
Setting and camera calibration
Firstly, we set up the ISMVT. Then, we calculate the actual length of a single pixel through camera calibration. In the current research, the value is 0.0021 mm. Furthermore, we define the configuration files (CamConfig.xml), including the camera calibration value and parameters of image preprocessing.
(2) 
Image preprocessing and DeepLSD
Although the core of straight-line detection is a deep learning algorithm, image preprocessing is necessary because artifacts appear near the gap, as illustrated in Figure 7. The zoomed-in images (see Figure 7b,d) of the region of interest show that artifacts along that part of the boundary disrupt the detection of the straight line. This can be demonstrated through the results of the DeepLSD algorithm detection, as shown in Figure 7b.
To address this problem, image preprocessing consists of binarization followed by a closing operation that first dilates and then erodes. More specifically, the binarization threshold is set to 27, the median value statistically determined from the artifacts found in 80 images of six different parts. Figure 8(*, 3) shows the result of image preprocessing, while Figure 8(*, 4) shows the result of applying DeepLSD to the preprocessed image.
In our tests, we found that the DeepLSD algorithm performs robustly in the situation shown in Figure 8(1, 1), that is, when the region of interest lies between the straight lines la and le, as depicted in Figure 2. However, line detection sometimes failed when the region of interest lay between the straight line la and position a, or between le and e, as depicted in Figure 2. In this case, our approach divides the image into four equal slices (see Figure 8(2, 1),(2, 3)), uses approximate straight-line segments in each slice to represent the arcs, and then detects lines with DeepLSD in a loop.
(3) 
Remove redundant lines and calculate the distance
When detecting the straight lines, we must remove the redundant lines; otherwise, it is impossible to accurately calculate the gap spacing, as shown in Figure 8 (4th column). Thus, we calculate the slope of each line and group lines that share the same slope. Next, we compute the distance between the parallel lines. Based on the distance value, we remove any lines that are not within the threshold range. Subsequently, if multiple lines satisfy the established criteria, the final gap value is determined as their average, as shown in Figure 8 (5th column). A minimal sketch of this step, together with the preprocessing of step (2), is given below.
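The following minimal sketch illustrates steps (2) and (3) under stated assumptions: the binarization threshold of 27 and the 0.0021 mm pixel length are taken from the text, while the closing kernel size, the angle tolerance, and the plausible gap range used for filtering are illustrative values. The detected segments are assumed to be given as an (N, 2, 2) array of endpoint coordinates in pixels (e.g., from DeepLSD).

```python
import cv2
import numpy as np

MM_PER_PIXEL = 0.0021      # from camera calibration (step 1)
BIN_THRESHOLD = 27         # median artifact level over 80 images of six parts
GAP_RANGE_PX = (10, 450)   # assumed plausible gap width range, used to discard spurious pairs

def preprocess(gray: np.ndarray) -> np.ndarray:
    """Binarization followed by a closing operation (dilate, then erode) to suppress artifacts."""
    _, binary = cv2.threshold(gray, BIN_THRESHOLD, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

def gap_width_mm(segments: np.ndarray, angle_tol_deg: float = 2.0) -> float:
    """Keep near-parallel segment pairs with a plausible spacing and return their mean width in mm."""
    angles = np.degrees(np.arctan2(segments[:, 1, 1] - segments[:, 0, 1],
                                   segments[:, 1, 0] - segments[:, 0, 0])) % 180.0
    widths = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            diff = abs(angles[i] - angles[j])
            if min(diff, 180.0 - diff) > angle_tol_deg:
                continue                     # not parallel: redundant or spurious line
            # Perpendicular distance from the midpoint of segment j to the line through segment i.
            (x0, y0), (x1, y1) = segments[i]
            mx, my = segments[j].mean(axis=0)
            d = abs((x1 - x0) * (my - y0) - (y1 - y0) * (mx - x0)) / np.hypot(x1 - x0, y1 - y0)
            if GAP_RANGE_PX[0] <= d <= GAP_RANGE_PX[1]:
                widths.append(d)
    if not widths:
        raise RuntimeError("no valid pair of parallel lines found")
    return float(np.mean(widths)) * MM_PER_PIXEL

# Example with two roughly horizontal segments about 100 px apart (~0.21 mm).
segs = np.array([[[0, 100], [500, 102]], [[0, 200], [500, 201]]], dtype=float)
print(f"gap = {gap_width_mm(segs):.3f} mm")
```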

4. Case Study

4.1. Experimental Details

4.1.1. Conditions

In the present research, the primary independent variable under investigation was the inspection method. Consequently, our user study comprised two distinct conditions, as outlined below:
(1)
FGAI: In this case, the operator uses a feeler gauge to inspect the gap spacing, as shown in Figure 9a.
(2)
ISMVT: The inspection method is based on machine vision using DeepLSD, as shown in Figure 9b. The prototype can facilitate the inspection tasks in the working settings by machine vision, allowing the workers to remain focused on their tasks.

4.1.2. Task and Hypotheses

In the context of the inspection tasks, we selected standard procedural inspection activities, as depicted in Figure 9, which involve examining five specific positions of a product. To mitigate the potential impact of individual operational habits on the experimental results, participants were instructed to follow a prescribed sequence from a to e (see Figure 2) rigorously.
(1)
H1: Time. The ISMVT method will improve performance compared with the FGAI method.
(2)
H2: Accuracy. The ISMVT method is expected to have superior inspection accuracy compared to the FGAI method.
(3)
H3: Workload. The workload of using the ISMVT method for users is lower than that for FGAI.
(4)
H4: User experience. The ISMVT method can provide a better user experience than FGAI.

4.1.3. Participants and Procedure

In the study, we recruited 18 participants (12 males and 6 females) aged between 18 and 35 years old (mean 26.17). We gathered participants’ background experience through a structured questionnaire. The questionnaire included five topics: visual inspection, deep learning, using the feeler gauge, quality inspection tasks, and machine vision. For each topic, participants were presented with four answer options: None (Never used it), Novice (Just recently got into this), Occasionally (Used it several times a month), and Frequently (Used it several times a week). The detailed results are depicted in Figure 10. Most participants have no experience in using the feeler gauge (61%, 11/18), but they all have a certain basic knowledge of deep learning, visual inspection, and quality inspection. Specifically, the percentages of those without relevant experience are 17%, 28%, 61%, 22%, and 11% for the categories visual inspection, deep learning, using the feeler gauge, quality inspection, and machine vision, respectively. Notably, the proportions of novices in this area exceed 44% for all categories except using the feeler gauge, which takes up 28%. This will facilitate participants’ better understanding of the ISMVT system and at the same time reduce the impact on the experiment caused by different knowledge of visual inspection.
We carried out a user study with a within-subjects design counterbalanced by a Latin square, and the procedure consists of seven main steps, as shown in Figure 11. For the workload questionnaires, we employed the NASA Task Load Index (NASA-TLX [34,35]), a standardized and validated subjective questionnaire designed to quantitatively measure task workload across six subscales.
In order to evaluate user experience, a questionnaire (see Table 2) was developed based on the relevant literature [36,37,38], with minor modifications made to align it with the objectives of this study. For reliability and validity, the questionnaire was reviewed by experts in the field of human–computer interaction and engineering information visualization. Additionally, we conducted a pilot study with a small sample (eight participants, five males, three females, aged between 18 and 28 years old, mean 24.38) to gather preliminary data, which allowed us to assess the internal consistency and improve the prototype system.

4.2. Results

In this section, we report the results for inspection time (H1), accuracy (H2), workload (H3), user experience (H4), and preferences. We analyzed the results of each measurement across the two interfaces and then summarize the findings.

4.2.1. Inspection Time

Prior to analyzing the time data, we assessed the normality of the data using the Shapiro–Wilk test. The results indicated that the data in both the FGAI and ISMVT conditions exhibited no significant deviation from normality (p > 0.05; FGAI: W(18) = 0.975, p = 0.890 and ISMVT: W(18) = 0.953, p = 0.474). We then conducted a paired t-test (α = 0.05) on the inspection time results for each condition; the results revealed a significant difference (t(17) = 10.482, p < 0.001) between the FGAI interface (mean 136.17 s, SD 5.02) and the ISMVT interface (mean 74.11 s, SD 4.84), as shown in Figure 12. The results show that the ISMVT can significantly reduce the inspection time compared with the FGAI method. In our specific case, the reduction in inspection time allows for a smoother workflow, as technicians can inspect and release parts more quickly. This, in turn, reduces the waiting time for subsequent processes, leading to a more efficient production line. The ability to process more parts in less time also means that operations can be scaled more effectively.
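For reference, a minimal sketch of this analysis with SciPy is shown below; the arrays hold placeholder values, not the recorded times of the 18 participants.

```python
import numpy as np
from scipy import stats

# Placeholder per-participant inspection times in seconds (paired by participant).
fgai_time = np.array([137.2, 129.8, 141.0, 133.5, 138.9, 135.1])
ismvt_time = np.array([75.4, 70.2, 78.8, 72.1, 76.5, 71.9])

# Shapiro-Wilk normality check: p > 0.05 means no significant deviation from normality.
print(stats.shapiro(fgai_time), stats.shapiro(ismvt_time))

# Paired t-test on the same participants under both conditions (alpha = 0.05).
t, p = stats.ttest_rel(fgai_time, ismvt_time)
print(f"t = {t:.3f}, p = {p:.4f}")
```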

4.2.2. Accuracy

Regarding accuracy, the ISMVT method depends on the actual length of a single pixel, which in this study is 0.0021 mm, as well as the quality of the hardware utilized. The accuracy of the FGAI method is contingent upon the expertise of the operators involved, and the minimum blade thickness of the feeler gauge is 0.01 mm. Accounting for all relevant factors, the theoretical accuracy of the ISMVT inspection is estimated to be finer than 0.01 mm, so the inspection precision of the ISMVT method is higher than that of the FGAI method. This is confirmed by the average results at the five inspection locations on the product, as shown in Figure 13. Moreover, from the graph we can conclude that the values measured via FGAI are, on the whole, lower than those measured using the ISMVT.

4.2.3. Workload

We conducted a comparative analysis of the participants’ workload utilizing the NASA-TLX survey, in alignment with numerous preceding studies [38,39,40] in this domain. Participants were instructed to evaluate six subscales upon the completion of the inspection tasks within a specific condition. We performed Wilcoxon signed-rank tests (α = 0.05) to evaluate the significant effects. Overall, the workload results show that the ISMVT condition enabled participants to perform tasks at a considerably lower workload level than the FGAI condition (Z = −3.724, p < 0.001), as shown in Figure 14. Specifically, the average NASA-TLX scores for the FGAI and ISMVT conditions were 50.22 and 33.56 (SE = 6.12 and 4.57), respectively.
We conducted the Wilcoxon signed-rank tests (α = 0.05) for each subscale. Specifically, the subscale workload scores have significant differences regarding performance, effort, mental demand, frustration level, and temporal demand (p < 0.001) but not physical demand, as shown in Figure 15.

4.2.4. User Experience

User experience played a pivotal role in the accessibility of the developed system. As illustrated in Table 2, we designed a five-point Likert scale questionnaire to evaluate the impact of the FGAI and ISMVT methods on user experience across 12 aspects. We analyzed whether there was a difference in user experience between the FGAI and ISMVT methods via Wilcoxon signed-rank tests (α = 0.05). The results are shown in Figure 16. Specifically, there were statistically significant differences in terms of QU1 (natural and intuitive), QU3 (controllable), QU4 (enjoyment), QU5 (providing information), QU6 (amazement), QU7 (receiving timely notifications), QU8 (easy to learn), QU9 (timely feedback), QU11 (noticeable), and QU12 (confidence); QU2 (focus) and QU10 (immersion) did not show significant differences, as shown in Table 3.

4.2.5. Preferences

Regarding users’ preferences, the analysis of the statistical data makes it possible to ascertain which method users favor. As illustrated in Figure 17, participants were instructed to complete a preference questionnaire. The results showed that most participants preferred the ISMVT method (78%), while the remaining 22% preferred the FGAI method.

5. Discussion and Limitations

The aim of our research is to explore the effect of the proposed machine vision method based on deep learning in industrial inspection activities, in particular quality inspection tasks requiring complex processes and engineering knowledge. We conducted a comparative analysis of the efficacy of the two different interfaces (i.e., FGAI and ISMVT) in performing inspection tasks on the gap spacing in terms of four hypotheses related to inspection time (H1), accuracy (H2), workload (H3), and user experience (H4).
As illustrated in Figure 12, machine vision support for the gap spacing measurement contributed to a faster inspection time. Thus, the results support hypothesis H1. The ISMVT method can provide timely inspection data, whereas the FGAI method is a relatively complex method to inspect the gap spacing. More specifically, the FGAI method needs the operators to utilize a single-blade feeler gauge or a combination of multiple blades stacked together, heavily depending on the operational techniques and users’ expertise and experience.
It should be noted that different individuals may have slightly varying understandings of how to correctly use a feeler gauge. For instance, some might insert the gauge at a slightly different angle or apply varying amounts of pressure, which can lead to inconsistencies in measurements. Additionally, changes in the measurement position can also have a significant impact on the measurement results. If the gauge is not placed consistently at the same point or along the same axis each time, it may yield different readings. This variability can introduce errors and affect the reliability of the measurements, especially in precision engineering applications where high accuracy is crucial. Therefore, it is essential to standardize the use of feeler gauges and ensure that measurements are taken in a consistent and controlled manner to obtain accurate and repeatable results. In contrast, visual inspection methods provide a notable advantage in terms of both stability and precision. Unlike the manual use of feeler gauges, which can be subject to human error and variability in technique, visual inspection systems rely on advanced imaging technologies and algorithms. This level of accuracy and consistency makes visual inspection an invaluable tool in quality control and manufacturing processes, where maintaining strict tolerances is essential for product quality and performance.
The survey data from NASA-TLX show that the ISMVT method can significantly reduce the workload compared to the FGAI method, as illustrated in Figure 14. This may stem from the fact that the ISMVT method is simple to operate and produces intuitive results, which significantly reduces the effort (i.e., the level of working), the mental demand of the activity (i.e., deciding, thinking, calculating, etc.), the frustration level (i.e., the feeling of being discouraged, stressed, relaxed, etc.), and the temporal demand (i.e., the time pressure of performance), and markedly improves the performance (i.e., the sense of accomplishment of performing), but not the physical demand (i.e., the level of physical activity such as pushing, turning, pulling), as shown in Figure 15. Regarding physical demand, the inspection devices and products are compact in size and lightweight. This allows users to effortlessly adjust positions and move freely, resulting in no significant difference between the two methods. Overall, the survey data support hypothesis H3.
We used the user experience questionnaire covering twelve aspects (see Table 2) to evaluate the users’ experiences. As shown in Figure 16, the user experience results showed that the FGAI and ISMVT methods differed significantly in the aspects of natural and intuitive interaction (QU1), controllability (QU3), sense of enjoyment (QU4), amazement (QU6), confidence (QU12), providing timely information (QU5), notifications (QU7), ease of learning (QU8), feedback (QU9), and noticeable interface (QU11). However, we discovered that the ISMVT method did not offer significant advantages in terms of focus (QU2) and immersion (QU10). This may be because, when using the ISMVT interface, there is a fractured ecology in which participants need to split attention between the physical task and the external display, increasing the workload and hindering performance to some extent. This idea has been confirmed by Wang et al. [41,42] and Gurevich et al. [43]. In summary, user experience can be improved by the visual inspection method, which supports hypothesis H4 and is consistent with the results for performance efficiency and workload.
Regarding user preference, as anticipated, the overwhelming majority of participants favored the ISMVT method over the FGAI method. This clear preference likely arises from a multitude of factors that resonate deeply with users. Firstly, the ISMVT method boasts superior inspection efficiency, enabling users to accomplish tasks faster and more accurately than when using the FGAI method. Secondly, ISMVT’s intuitive design and streamlined workflow contribute to its ease of use, minimizing the learning curve and enhancing overall user experience. Furthermore, the ISMVT method showcases innovative features that cater to the evolving needs of users, making it a more attractive option in today’s fast-paced, technology-driven world. These factors, among others, contribute to the widespread adoption and preference for the ISMVT method.
Finally, this research has some limitations. First, during the experiment, we observed that the line detection algorithm exhibited occasional instability when inspecting curved regions. To address this, we could adopt a more robust algorithm specifically designed to handle curved structures, such as the one from Zhai et al. [44]. Second, our current objective was focused on detecting three specific types of products, but we aim to expand this scope by collaborating with enterprises to incorporate a wider variety of products into our inspection process. Third, the detection threshold for image preprocessing needs to be set manually. To automate the setting of this threshold, we could implement an adaptive thresholding technique; adaptive methods, such as Otsu’s method, automatically determine the optimal threshold value based on the image’s histogram. Finally, in the current study, the number of participants was limited. In the future, we plan to recruit a larger number of participants, particularly workers who use the feeler gauge for inspection, to conduct experiments. Based on their feedback, we aim to improve the proposed method, thereby enhancing the system’s usability and stability.
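As a minimal illustration of this possible improvement, the sketch below replaces the fixed threshold with Otsu’s method in OpenCV; the input file name is a placeholder.

```python
import cv2

gray = cv2.imread("gap_roi.png", cv2.IMREAD_GRAYSCALE)
# Otsu's method ignores the supplied threshold (0) and computes one from the image histogram.
otsu_value, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(f"Otsu threshold: {otsu_value}")
```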

6. Conclusions and Future Works

In this research, we developed an inspection system for gap spacing based on machine vision and deep learning techniques. We then conducted a case study using the designed inspection system to explore the effect of supporting visual inspection in industrial inspection tasks that require complex operations with the traditional method. We compared two different methods: one was a traditional solution that utilizes a typical inspection tool (i.e., a feeler gauge) to inspect the gap spacing based on engineering knowledge and experience; the other was our proposed method that takes advantage of machine vision based on DeepLSD. Specifically, the results, on the whole, confirm our initial hypothesis that machine vision using deep learning can support higher inspection performance and a lower workload and can provide a better user experience. This suggests that collecting and analyzing contextual information about the user and the inspection activities using machine vision based on deep learning techniques is a feasible approach to improving the efficacy of the inspection system.
In the current study, the experimental task cases show that the proposed ISMVT method using deep learning performs well in industrial inspection activities that usually involve complex process operations using the traditional method. Thus, our method shows positive potential in industrial applications such as inspecting gap spacing and may have the ability to reshape the landscape of quality inspection.
In the near future, we intend to investigate three main avenues to improve the prototype system. First, there is significant potential for enhancing the display by leveraging Mixed Reality (MR) to improve usability and user experience. Second, there is a need to improve the line detection method based on DeepLSD, in order to enhance the quality and robustness of inspecting gap spacing. Third, to make the experimental results more convincing, it is imperative that we involve the businesses directly by inviting select groups of workers to participate in the study. Their insights and feedback will be invaluable in identifying and addressing any shortcomings of the system, ultimately refining it to better suit their needs and enhancing its overall effectiveness.

Author Contributions

X.L.: conceptualization, original draft; F.L.: methodology, formal analysis, supervision; H.Y.: editing, project administration; P.W.: software, investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This research was sponsored by the 2022 annual key research and development and promotion projects in Henan Province (222102220065).

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Huang, H.L.; Jywe, W.Y.; Chen, G.R. Application of 3D Vision and Networking Technology for Measuring the Positional Accuracy of Industrial Robots. Int. J. Adv. Manuf. Technol. 2024, 135, 2051–2064. [Google Scholar] [CrossRef]
  2. de Assis Dornelles, J.; Ayala, N.F.; Frank, A.G. Smart Working in Industry 4.0: How Digital Technologies Enhance Manufacturing Workers’ Activities. Comput. Ind. Eng. 2022, 163, 107804. [Google Scholar] [CrossRef]
  3. Runji, J.M.; Lin, C.-Y. Markerless Cooperative Augmented Reality-Based Smart Manufacturing Double-Check System: Case of Safe PCBA Inspection Following Automatic Optical Inspection. Robot. Comput. Integr. Manuf. 2020, 64, 101957. [Google Scholar] [CrossRef]
  4. Natsagdorj, S.; Chiang, J.Y.; Su, C.-H.; Lin, C.; Chen, C. Vision-Based Assembly and Inspection System for Golf Club Heads. Robot. Comput. Integr. Manuf. 2015, 32, 83–92. [Google Scholar] [CrossRef]
  5. Lv, C.; Liu, B.; Wu, D.; Lv, J.; Li, J.; Bao, J. AR-Assisted Assembly Method Based on Instance Segmentation. Int. J. Comput. Integr. Manuf. 2024, 1–17. [Google Scholar] [CrossRef]
  6. Azamfirei, V.; Granlund, A.; Lagrosen, Y. Multi-Layer Quality Inspection System Framework for Industry 4.0. Int. J. Autom. Technol. 2021, 15, 641–650. [Google Scholar] [CrossRef]
  7. Hu, J.; Zhao, G.; Xiao, W.; Li, R. AR-Based Deep Learning for Real-Time Inspection of Cable Brackets in Aircraft. Robot. Comput. Integr. Manuf. 2023, 83, 102574. [Google Scholar] [CrossRef]
  8. Lončarević, Z.; Gams, A.; Reberšek, S.; Nemec, B.; Ude, A. Specifying and Optimizing Robotic Motion for Visual Quality Inspection. Robot. Comput. Integr. Manuf. 2021, 72, 102200. [Google Scholar] [CrossRef]
  9. Chen, C.; Wang, H.; Pan, Y.; Li, D. Vision-Based Robotic Peg-in-Hole Research: Integrating Object Recognition, Positioning, and Reinforcement Learning. Int. J. Adv. Manuf. Technol. 2024, 135, 1119–1129. [Google Scholar] [CrossRef]
  10. Feng, Q.; Li, F.; Li, H.; Liu, X.; Fei, J.; Xu, S.; Lu, C.; Yang, Q. Feature Reused Network: A Fast Segmentation Network Model for Strip Steel Surfaces Defects Based on Feature Reused. Vis. Comput. 2023, 40, 3633–3648. [Google Scholar] [CrossRef]
  11. Yang, D.; Chen, N.; Liu, Z.J. Research on Defect Detection of Toy Sets Based on an Improved U-Net. Vis. Comput. 2024, 40, 1095–1109. [Google Scholar] [CrossRef]
  12. Azamfirei, V.; Psarommatis, F.; Lagrosen, Y. Application of Automation for In-Line Quality Inspection, a Zero-Defect Manufacturing Approach. J. Manuf. Syst. 2023, 67, 1–22. [Google Scholar] [CrossRef]
  13. Tarallo, A.; Mozzillo, R.; Di Gironimo, G.; De Amicis, R. A Cyber-Physical System for Production Monitoring of Manual Manufacturing Processes. Int. J. Interact. Des. Manuf. 2018, 12, 1235–1241. [Google Scholar] [CrossRef]
  14. Segura, Á.; Diez, H.V.; Barandiaran, I.; Arbelaiz, A.; Álvarez, H.; Simões, B.; Posada, J.; García-Alonso, A.; Ugarte, R. Visual Computing Technologies to Support the Operator 4.0. Comput. Ind. Eng. 2020, 139, 105550. [Google Scholar] [CrossRef]
  15. Montironi, M.A.; Castellini, P.; Stroppa, L.; Paone, N. Adaptive Autonomous Positioning of a Robot Vision System: Application to Quality Control on Production Lines. Robot. Comput. Integr. Manuf. 2014, 30, 489–498. [Google Scholar] [CrossRef]
  16. da Silva Santos, K.R.; Villani, E.; de Oliveira, W.R.; Dttman, A. Comparison of Visual Servoing Technologies for Robotized Aerospace Structural Assembly and Inspection. Robot. Comput. Integr. Manuf. 2022, 73, 102237. [Google Scholar] [CrossRef]
  17. Chiu, Y.C.; Tsai, C.Y.; Chang, P.H. Development and Validation of a Real-Time Vision-Based Automatic HDMI Wire-Split Inspection System. Vis. Comput. 2024, 40, 7349–7367. [Google Scholar] [CrossRef]
  18. Zhao, Z.; Ma, X.; Shi, Y.; Yang, X. Multi-Scale Defect Detection for Plaid Fabrics Using Scale Sequence Feature Fusion and Triple Encoding. Vis. Comput. 2024, 1–17. [Google Scholar] [CrossRef]
  19. Leco, M.; Kadirkamanathan, V. A Perturbation Signal Based Data-Driven Gaussian Process Regression Model for in-Process Part Quality Prediction in Robotic Countersinking Operations. Robot. Comput. Integr. Manuf. 2021, 71, 102105.
  20. Mei, B.; Liang, Z.; Zhu, W.; Ke, Y. Positioning Variation Synthesis for an Automated Drilling System in Wing Assembly. Robot. Comput. Integr. Manuf. 2021, 67, 102044.
  21. Ferraguti, F.; Pini, F.; Gale, T.; Messmer, F.; Storchi, C.; Leali, F.; Fantuzzi, C. Augmented Reality Based Approach for On-Line Quality Assessment of Polished Surfaces. Robot. Comput. Integr. Manuf. 2019, 59, 158–167.
  22. Kozamernik, N.; Braun, D. A Novel FuseDecode Autoencoder for Industrial Visual Inspection: Incremental Anomaly Detection Improvement with Gradual Transition from Unsupervised to Mixed-Supervision Learning with Reduced Human Effort. Comput. Ind. 2025, 164, 104198.
  23. Orabi, M.; Tran, K.P.; Egger, P.; Thomassey, S. Anomaly Detection in Smart Manufacturing: An Adaptive Adversarial Transformer-Based Model. J. Manuf. Syst. 2024, 77, 591–611.
  24. Kohler, M.; Mitsios, D.; Endisch, C. Reconstruction-Based Visual Anomaly Detection in Wound Rotor Synchronous Machine Production Using Convolutional Autoencoders and Structural Similarity. J. Manuf. Syst. 2025, 78, 410–432.
  25. Nahar, L.; Islam, M.S.; Awrangjeb, M.; Verhoeve, R. Automated Corner Grading of Trading Cards: Defect Identification and Confidence Calibration through Deep Learning. Comput. Ind. 2025, 164, 104187.
  26. Gu, G.; Ko, B.; Go, S.; Lee, S.-H.; Lee, J.; Shin, M. Towards Light-Weight and Real-Time Line Segment Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 726–734.
  27. Pautrat, R.; Barath, D.; Larsson, V.; Oswald, M.R.; Pollefeys, M. DeepLSD: Line Segment Detection and Refinement with Deep Image Gradients. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–27 June 2023; pp. 17327–17336.
  28. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.-M.; Randall, G. LSD: A Fast Line Segment Detector with a False Detection Control. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 32, 722–732.
  29. Jafari-Marandi, R.; Khanzadeh, M.; Tian, W.; Smith, B.; Bian, L. From In-Situ Monitoring toward High-Throughput Process Control: Cost-Driven Decision-Making Framework for Laser-Based Additive Manufacturing. J. Manuf. Syst. 2019, 51, 29–41.
  30. Altice, B.; Nazario, E.; Davis, M.; Shekaramiz, M.; Moon, T.K.; Masoum, M.A.S. Anomaly Detection on Small Wind Turbine Blades Using Deep Learning Algorithms. Energies 2024, 17, 982.
  31. Memari, M.; Shekaramiz, M.; Masoum, M.A.S.; Seibi, A.C. Data Fusion and Ensemble Learning for Advanced Anomaly Detection Using Multi-Spectral RGB and Thermal Imaging of Small Wind Turbine Blades. Energies 2024, 17, 673.
  32. Niyongabo, J.; Zhang, Y.; Ndikumagenge, J. Bearing Fault Detection and Diagnosis Based on Densely Connected Convolutional Networks. Acta Mech. Autom. 2022, 16, 130–135.
  33. Franceschini, F.; Galetto, M.; Maisano, D.; Mastrogiacomo, L. Towards the Use of Augmented Reality Techniques for Assisted Acceptance Sampling. Proc. Inst. Mech. Eng. B J. Eng. Manuf. 2016, 230, 1870–1884.
  34. Cao, A.; Chintamani, K.K.; Pandya, A.K.; Ellis, R.D. NASA TLX: Software for Assessing Subjective Mental Workload. Behav. Res. Methods 2009, 41, 113–117.
  35. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research. Adv. Psychol. 1988, 52, 139–183.
  36. Wang, Z.; Zhang, X.; Li, L.; Zhou, Y.; Lu, Z.; Dai, Y.; Liu, C.; Su, Z.; Bai, X.; Billinghurst, M. Evaluating Visual Encoding Quality of a Mixed Reality User Interface for Human-Machine Co-Assembly in Complex Operational Terrain. Adv. Eng. Inform. 2023, 58, 102171.
  37. Zhang, X.; Bai, X.; Zhang, S.; He, W.; Wang, S.; Yan, Y.; Yu, Q.; Liu, L. A Novel MR Remote Collaboration System Using 3D Spatial Area Cue and Visual Notification. J. Manuf. Syst. 2023, 67, 389–409.
  38. Zhang, X.; Bai, X.; Zhang, S.; He, W.; Wang, S.; Yan, Y.; Wang, P.; Liu, L. A Novel Mixed Reality Remote Collaboration System with Adaptive Generation of Instructions. Comput. Ind. Eng. 2024, 194, 110353.
  39. Wang, P.; Wang, Y.; Billinghurst, M.; Yang, H.; Xu, P.; Li, Y. BeHere: A VR/SAR Remote Collaboration System Based on Virtual Replicas Sharing Gesture and Avatar in a Procedural Task. Virtual Real. 2023, 27, 1409–1430.
  40. Yan, Y.X.; Bai, X.; He, W.; Wang, S.; Zhang, X.Y.; Wang, P.; Liu, L.; Zhang, B. Can You Notice My Attention? A Novel Information Vision Enhancement Method in MR Remote Collaborative Assembly. Int. J. Adv. Manuf. Technol. 2023, 127, 1835–1857.
  41. Wang, P.; Bai, X.; Billinghurst, M.; Zhang, S.; Zhang, J. 3DGAM: Using 3D Gesture and CAD Models for Training on Mixed Reality Remote Collaboration. Multimed. Tools Appl. 2020, 80, 31059–31084.
  42. Wang, P.; Bai, X.; Billinghurst, M.; Zhang, S.; He, W.; Han, D.; Wang, Y.; Min, H.; Lan, W.; Han, S. Using a Head Pointer or Eye Gaze: The Effect of Gaze on Spatial AR Remote Collaboration for Physical Tasks. Interact. Comput. 2020, 32, 153–169.
  43. Gurevich, P.; Lanir, J.; Cohen, B. Design and Implementation of TeleAdvisor: A Projection-Based Augmented Reality System for Remote Collaboration. Comput. Support. Coop. Work 2015, 24, 527–562.
  44. Zhai, S.; Zhao, X.; Zu, G.; Lu, L.; Cheng, C. An Algorithm for Lane Detection Based on RIME Optimization and Optimal Threshold. Sci. Rep. 2024, 14, 27244.
Figure 1. The proposed inspection approach. Quality assessment is based on the gap-spacing value required for the subsequent welding processes. Gap i denotes the i-th inspection location on a product; the inspector should inspect all locations following the steps illustrated in the schematic diagrams. The gap inspection can be performed with two methods: the proposed ISMVT and the traditional method using a feeler gauge. ✕: requires rework and re-inspection; ✓: passes inspection.
Figure 2. A schematic depiction of the sampling inspection. The inspection points were randomly chosen in proximity to five designated locations (a, b, c, d, e) for measuring the gap spacing. Specifically, a and e represent the left and right endpoints of the gap, and m denotes the distance between a and e.
Figure 3. Scheme of the designed approach for visualizing gap spacing based on the ISMVT. The proposed method comprises four main parts: running the ISMVT, random sampling inspection, gap measurement, and visualization.
Figure 4. The calibration board. (a) The CAD model of the fixture, (b) the 3D-printed fixture with a glass calibration board, (c) a close-up view of the calibration board showing its detailed structure and dimensions.
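The calibration board in Figure 4 provides the physical reference needed to convert pixel distances into millimetres. The snippet below is a minimal sketch of how such a scale factor could be estimated, assuming a checkerboard-style target and OpenCV; the pattern size, square size, and function name are illustrative assumptions and do not describe the board actually used in the paper.

```python
import cv2
import numpy as np

def estimate_mm_per_pixel(image_path, pattern_size=(9, 6), square_mm=2.0):
    """Estimate the image scale (mm per pixel) from a checkerboard-style
    calibration target with known square size.

    pattern_size (inner corners per row, per column) and square_mm are
    illustrative assumptions, not the dimensions of the board in the paper.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        raise RuntimeError("Calibration pattern not found")

    # Corners are returned row by row; arrange them on a (rows, cols) grid.
    cols, rows = pattern_size
    grid = corners.reshape(rows, cols, 2)

    # Average pixel distance between horizontally adjacent corners.
    step_px = np.linalg.norm(np.diff(grid, axis=1), axis=2).mean()

    return square_mm / step_px  # millimetres per pixel
```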
Figure 5. Overview of DeepLSD. (a) Ground-truth line distance and angle fields (DF/AF) are obtained. (b) A deep network is trained to predict the DF/AF, which are subsequently converted into a proxy image gradient. (c) Line segments are extracted with LSD. (d) Line refinement.
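To make Figure 5a concrete, the sketch below shows a naïve way to rasterise ground-truth line segments into the distance field (DF) and angle field (AF) that the DeepLSD network is trained to predict. It is written for illustration only; the variable names and the per-pixel formulation are not taken from the DeepLSD implementation [27].

```python
import numpy as np

def line_fields(segments, height, width):
    """Compute a per-pixel distance field (DF) and angle field (AF)
    from ground-truth line segments.

    segments: array of shape (N, 4) holding (x1, y1, x2, y2) endpoints.
    Returns DF (distance to the closest segment) and AF (orientation of
    that segment, in [0, pi)).
    """
    ys, xs = np.mgrid[0:height, 0:width]
    pts = np.stack([xs, ys], axis=-1).astype(np.float64)   # (H, W, 2)

    df = np.full((height, width), np.inf)
    af = np.zeros((height, width))

    for x1, y1, x2, y2 in segments:
        p1 = np.array([x1, y1], dtype=np.float64)
        d = np.array([x2 - x1, y2 - y1], dtype=np.float64)
        length2 = max(d @ d, 1e-12)

        # Project every pixel onto the segment and clamp to its extent.
        t = np.clip(((pts - p1) @ d) / length2, 0.0, 1.0)
        closest = p1 + t[..., None] * d
        dist = np.linalg.norm(pts - closest, axis=-1)

        # Keep the distance and orientation of the nearest segment so far.
        mask = dist < df
        df[mask] = dist[mask]
        af[mask] = np.arctan2(d[1], d[0]) % np.pi

    return df, af
```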
Figure 6. The flowchart of the algorithm for detecting lines and calculating the gap spacing using DeepLSD.
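Figure 6 condenses the measurement logic: detect the boundary lines of the gap, compute their separation in pixels, and convert the result to millimetres with the calibrated scale. The following sketch mimics that flow using OpenCV's classical LSD detector as a stand-in for DeepLSD (whose own API is not reproduced here); keeping the two longest segments as the gap boundaries is an illustrative heuristic, and createLineSegmentDetector is only available in OpenCV builds that include the restored LSD module.

```python
import cv2
import numpy as np

def measure_gap_mm(gray, mm_per_pixel):
    """Roughly measure a gap width: detect line segments, keep the two
    longest as boundary candidates, and return the mean perpendicular
    distance between them converted to millimetres."""
    lsd = cv2.createLineSegmentDetector()
    lines = lsd.detect(gray)[0]
    if lines is None or len(lines) < 2:
        raise RuntimeError("Fewer than two line segments detected")
    lines = lines.reshape(-1, 4)                 # (N, 4): x1, y1, x2, y2

    # Take the two longest segments as the gap boundaries.
    lengths = np.hypot(lines[:, 2] - lines[:, 0], lines[:, 3] - lines[:, 1])
    a, b = lines[np.argsort(lengths)[-2:]]

    # Perpendicular distance from both endpoints of segment b to line a.
    p1, p2 = a[:2], a[2:]
    n = np.array([-(p2 - p1)[1], (p2 - p1)[0]])
    n /= np.linalg.norm(n)
    d1 = abs((b[:2] - p1) @ n)
    d2 = abs((b[2:] - p1) @ n)

    return 0.5 * (d1 + d2) * mm_per_pixel
```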
Figure 7. The gap spacings and their artifacts: (a,c) show two randomly selected gaps, and (b,d) show the red-marked areas at 3× magnification for gaps 01 and 02, respectively. The two red lines indicate the boundaries of the gap in (b,d).
Figure 8. The gap spacing and the results of detecting straight lines using DeepLSD. The rows correspond to two types of gap spacing, and the columns show the results obtained at different stages. The blue lines are those detected by the DeepLSD algorithm, and the green lines mark the boundaries of the gap.
Figure 9. The two inspection methods. (a) The FGAI condition, in which the user inspects the gap spacing with a feeler gauge. (b) The ISMVT condition, based on the DeepLSD algorithm, which presents the results on the screen.
Figure 10. The participants’ backgrounds.
Figure 11. The procedure.
Figure 12. The inspection time.
Figure 13. The inspection results.
Figure 14. Overall NASA-TLX scores in each condition (* indicates that the two conditions differ significantly).
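Figure 14 reports the overall NASA-TLX score for each condition. Assuming the standard weighted scoring procedure of Hart and Staveland [35], in which the six subscale ratings are combined with weights obtained from 15 pairwise comparisons, the overall score can be computed as in the short sketch below; the numbers shown are placeholders, not data from this study.

```python
# Standard weighted NASA-TLX: each of the six subscales is rated 0-100,
# and each weight counts how often that subscale was chosen in the
# 15 pairwise comparisons (so the weights sum to 15).
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx_overall(ratings, weights):
    assert sum(weights.values()) == 15, "weights must come from 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

# Placeholder example (not data from the study):
ratings = {"mental": 55, "physical": 30, "temporal": 45,
           "performance": 25, "effort": 50, "frustration": 35}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 2}
print(nasa_tlx_overall(ratings, weights))   # weighted workload on a 0-100 scale
```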
Figure 15. The workload on the six NASA-TLX items (* indicates that the two conditions differ significantly).
Figure 16. The user experience (mean ± SE; * indicates that two interfaces differ significantly).
Figure 17. Users’ preferences.
Table 1. The relationship between the gap spacing and the subsequent welding process.

Gap spacing i (mm)     Welding process
0.01 < i ≤ 0.40        WEP-A
0.40 < i < 0.90        WEP-B
i ≥ 0.90               Rejection
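The decision rule in Table 1 can also be written as a small lookup; the sketch below is a direct transcription of the thresholds, with the function name chosen here for illustration.

```python
def welding_process(gap_mm: float) -> str:
    """Map a measured gap spacing (mm) to the subsequent welding process
    according to the thresholds in Table 1."""
    if 0.01 < gap_mm <= 0.40:
        return "WEP-A"
    if 0.40 < gap_mm < 0.90:
        return "WEP-B"
    if gap_mm >= 0.90:
        return "Rejection"
    raise ValueError("gap spacing outside the ranges covered by Table 1")
```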
Table 2. Likert scale rating items of user experience.

QU#    Question: 1 (I strongly disagree) to 5 (I strongly agree)
QU1    The interaction operation was natural and intuitive.
QU2    I was able to stay focused on the task actively.
QU3    I can quickly adjust myself to inspection tasks when the interaction fails (controllable).
QU4    I enjoyed the inspection process.
QU5    I can perform the tasks by the status information provided by this condition.
QU6    This interface brings me a sense of amazement.
QU7    I can receive timely notifications during inspecting tasks.
QU8    I can easily learn how to use this interface.
QU9    I was able to receive timely feedback during inspection activities.
QU10   The interactive operation can help me forget my daily troubles (immersion).
QU11   This interface is relatively noticeable.
QU12   I felt confident that we completed the task correctly.
Table 3. The results of user experience (rows with p < 0.05 indicate that the two interfaces differ significantly).

NO.    Question                          Z         p
QU1    Natural and intuitive             −2.517    0.012
QU2    Focus                             −0.397    0.691
QU3    Controllable                      −2.285    0.022
QU4    Enjoyment                         −2.697    0.007
QU5    Providing information             −2.581    0.010
QU6    Amazement                         −2.523    0.012
QU7    Receiving timely notifications    −2.810    0.005
QU8    Easy to learn                     −2.404    0.016
QU9    Timely feedback                   −3.391    0.001
QU10   Immersion                         −1.234    0.217
QU11   Noticeable                        −2.228    0.026
QU12   Confidence                        −2.092    0.036
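Table 3 lists a Z statistic and a p value for each Likert item. If these stem from a Wilcoxon signed-rank test on the paired ratings of the two conditions, which is a common choice for ordinal data but is assumed here rather than restated from the paper, comparable values could be obtained as in the sketch below (SciPy ≥ 1.10 exposes the normal-approximation Z statistic).

```python
from scipy import stats

def compare_item(fgai_ratings, ismvt_ratings):
    """Paired comparison of one Likert item between the FGAI and ISMVT
    conditions; returns the normal-approximation Z statistic and the
    two-sided p value (assumes a Wilcoxon signed-rank test)."""
    res = stats.wilcoxon(fgai_ratings, ismvt_ratings, method="approx")
    return res.zstatistic, res.pvalue

# Placeholder 1-5 ratings for illustration only (not study data):
z, p = compare_item([3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 3],
                    [4, 4, 5, 4, 3, 4, 5, 4, 4, 5, 3, 5, 4, 4])
print(z, p)
```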
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
