Article
Integrating Computer Vision and CAD for Precise Dimension
Extraction and 3D Solid Model Regeneration for Enhanced
Quality Assurance
Binayak Bhandari 1,2,* and Prakash Manandhar 3,4,5
Abstract: This paper focuses on the development of an integrated system that can rapidly and accurately extract the geometrical dimensions of a physical object assisted by a robotic hand and generate a 3D model of the object in a popular commercial Computer-Aided Design (CAD) software package using computer vision. Two sets of experiments were performed: one with a simple cubical object and the other with a more complex geometry that needed photogrammetry to redraw it in the CAD system. For the accurate positioning of the object, a robotic hand was used. An Internet of Things (IoT) based camera unit was used for capturing the image and wirelessly transmitting it over the network. Computer vision algorithms such as GrabCut, the Canny edge detector, and morphological operations were used for extracting the border points of the input. The coordinates of the vertices of the solids were then transferred to the CAD software via a macro to clean and generate the border curve. Finally, a 3D solid model was generated by linear extrusion based on the curve generated in CATIA. The results showed excellent regeneration of the object. This research makes two significant contributions. Firstly, it introduces an integrated system designed to achieve precise dimension extraction from solid objects. Secondly, it presents a method for regenerating intricate 3D solids with consistent cross-sections. The proposed system holds promise for a wide range of applications, including automatic 3D object reconstruction and quality assurance of 3D-printed objects, addressing potential defects arising from factors such as shrinkage and calibration, all with minimal user intervention.

Keywords: vision-based measurement; automation; computer vision; 3D model regeneration; Computer-Aided Design; Internet of Things

Citation: Bhandari, B.; Manandhar, P. Integrating Computer Vision and CAD for Precise Dimension Extraction and 3D Solid Model Regeneration for Enhanced Quality Assurance. Machines 2023, 11, 1083. https://doi.org/10.3390/machines11121083
Academic Editor: Panagiotis Kyratsis
tasks rest, from object recognition to tracking and, notably, the reconstruction of three-
dimensional (3D) models from image and video data [3].
Central to the success of these applications is the ability to identify and analyze the
shapes and contours of segmented regions. Contour detection, a cornerstone in computer
vision, provides crucial information about the object’s shape, which, in turn, serves as
the basis for the recognition system. The precision and accuracy of contour detection
are pivotal in ensuring the retrieval of high-quality information from the visual data. In
particular, robots require 3D information about the object to be grasped in an unstructured industrial environment [4].
The present study is focused on the integration of computer vision and Computer-Aided Design (CAD) to achieve precise dimension extraction and 3D solid model regeneration. These aspects are critical components in the field of quality assurance, enabling the thorough analysis of products and structures in three dimensions. The paper further explores the significance of this approach with case studies addressing existing challenges, thereby contributing to the evolving field of computer vision for enhanced quality assurance.
engineering method based on a CAD model and point cloud model because of the difficulty in dealing with Non-Uniform Rational B-Splines (NURBS) surfaces.

To address the inherent limitations of computer vision for 3D model generation and the time-consuming nature of manual CAD modeling, the integration of a computer vision algorithm and a CAD system has emerged as a promising solution. This integration leverages the strengths of each method to compensate for their respective weaknesses. The central focus of this research is to devise a cost-effective solution to overcome these limitations by harnessing the power of computer vision algorithms, the Internet of Things (IoT), and a commercial CAD package to seamlessly regenerate 3D solid models within a CAD environment. To make this happen, an integrated system was developed that combines computer vision with CAD capabilities, efficiently extracting essential information from images and automating the 3D solid model generation process within the CAD software CATIA V5.
3. Methodology

In the course of these efforts, several intricate tasks were addressed, such as the precise control of objects via robotic manipulation to optimize camera angles, image segmentation, image transformation and realignment, contour detection, virtual measurements, exporting contour coordinates, and the automatic generation of a computer program based on these coordinates. These processes are designed to seamlessly interface with a specialized commercial CAD system to produce comprehensive three-dimensional models.
Two sets of experiments were performed: one with a simple cubical object, and the other with a more complex geometry that needed various measurements to redraw it in the CAD system. The dimensional accuracy of the real and virtual objects was compared, and the surface roughness of the regenerated object was analyzed after 3D printing. The schematic diagram showing the processes for the auto-generation of objects is shown in Figure 1.
One of the main contributions of the research is the development of an integrated system for regenerating a 3D solid with uniform thickness. Though the developed system has specific applications, as it is limited to the regeneration of 3D objects with uniform cross-sections, it has huge potential in automatic 3D object reconstruction and in the quality assurance of 3D-printed objects with minimal user intervention.
4. Model Description
4.1. Robotic Hand Design
The Hackberry open-source 3D-printable bionic arm was used in this study [13], which has gathered huge attention from developers, designers, and artificial arm users in recent days. The Hackberry bionic hand is an active prosthesis with an electrically powered hand controlled via Arduino Micro technology. The assembled robotic hand has variable movements suitable for grabbing objects and changing their orientation. The hand was printed on an Ultimaker 2+ 3D printer using an ABS filament, with a circuit board and a rechargeable battery embedded in the hand, making it versatile and appealing to researchers.
4.2. Control

A small single-board computer developed by the Raspberry Pi Foundation, called the Raspberry Pi 3 Model B [14], and MotionEyeOS were used for this study. MotionEyeOS is a Linux distribution that turns a Raspberry Pi into a video surveillance system [15]. Among its many features, MotionEyeOS is easy to set up, connects to a local network using Ethernet or Wi-Fi, and is compatible with USB cameras. Figure 2 shows the implementation of the Raspberry Pi with MotionEyeOS for image acquisition.
Figure 2. Schematic working principle.
4.3. Cameras

Three cameras were mounted on three axes (X, Y, Z) to capture the front, top, and side views of the object. General-purpose webcams were used as cameras for capturing images of the object. Logitech C270 webcams were selected as they are inexpensive and produce high-definition images. However, because of the lack of an autofocus function in the webcams, focusing was performed manually. While only two cameras suffice for capturing the two views needed for extracting the surface and thickness data, the system was designed with additional cameras for future advanced research work.

4.4. System Integration

The experiment test frame was designed to fix all the hardware in the proper position to conduct the experiment. The robotic hand was used to hold the object at the required position and angle. In addition to fine-tuning the angle and position of the object, it was possible to adjust the gripping force of the robotic hand. The robotic hand position was adjusted by fine-tuning the wrist rotation button. Three webcams were positioned on the three perpendicular axes, X, Y, and Z, as shown in Figure 3. The cameras were interfaced with the Raspberry Pi 3 to take images of the object held by the robotic hand. This helped to capture images of the object in three perpendicular directions simultaneously. This method was efficient and less time-consuming than acquiring images one at a time.

Figure 3. Experimental set-up with three cameras and a robotic hand holding the object.
5. Methodology of Extracting Coordinate Details

5.1. Image Segmentation

Conventional 3D reconstruction uses scanned data and 2D images taken by rotating the object through 360 degrees. This approach has no background to remove; however, it is time-consuming and often expensive. Unlike the above approach, this research focuses on images taken by an ordinary webcam (or camera) with the background present. The problem of separating the object from the background (image segmentation) in a complex environment was addressed using the powerful GrabCut [16,17] segmentation algorithm. This algorithm is popular for efficient and interactive foreground/background segmentation of images, where the user is required to draw a rectangle around the object to be cut. GrabCut is built on the foundation of GraphCut [18], a powerful optimization technique for achieving robust image segmentation. GrabCut uses a Gaussian Mixture Model (GMM) to model the foreground and background.

To help with segmentation, a trimap T = {T_B, T_U, T_F} is used, where T_B (alpha value α_n = 0), T_U, and T_F (alpha value α_n = 1) store the background, unknown, and foreground pixels, respectively. GrabCut performs iterations to refine the labeling of the foreground and background pixels. All of the pixels in T_U are assigned a cluster based on the minimum unary weighing function D(n) [19]:

$$D(n) = -\log \sum_{i=1}^{K} \pi(\alpha_n, i)\, \frac{1}{\sqrt{\det \Sigma(\alpha_n, i)}}\, \exp\!\left(-\tfrac{1}{2}\left[Z_n - \mu(\alpha_n, i)\right]^{T} \Sigma(\alpha_n, i)^{-1}\left[Z_n - \mu(\alpha_n, i)\right]\right) \qquad (1)$$

where K is the number of clusters, µ is the mean RGB value, π is the weighing coefficient, and Z_n is the RGB row vector of pixel n.

Figure 4 shows the process of how the object was segmented from the 2D image: Figure 4a shows the image of the object held by the robotic hand, Figure 4b shows a user-created rectangle that is used to create a trimap for the GrabCut algorithm, and Figure 4c shows the segmented object extracted from the image. The blue box in the second image is a region of interest (ROI) and the black patch is a manually marked mask denoting the background color. Detailed operations and finer touch-ups of the GrabCut algorithm can be found in [11].
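A minimal sketch of this segmentation step, following the standard OpenCV GrabCut pattern [17], is shown below; the input file name, the number of iterations, and the ROI rectangle are illustrative assumptions, not values taken from the experiment.

import cv2
import numpy as np

image = cv2.imread("object_view.png")        # frame from the webcam (assumed file name)
mask = np.zeros(image.shape[:2], np.uint8)   # trimap labels, initialised to background
bgd_model = np.zeros((1, 65), np.float64)    # internal GMM state for the background
fgd_model = np.zeros((1, 65), np.float64)    # internal GMM state for the foreground
roi = (50, 50, 400, 300)                     # user-drawn rectangle (x, y, w, h), assumed

# Five refinement iterations, initialised from the user-drawn rectangle
cv2.grabCut(image, mask, roi, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground pixels; zero out the background
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
segmented = image * fg[:, :, np.newaxis]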
Figure 4. The object before and after the GrabCut algorithm: (a) an object held by the robotic arm, (b) the region of interest marked by the blue rectangle, and (c) the segmented image after it was processed by the GrabCut algorithm.

5.2. Image Transformation

Segmented images could be inclined and/or skewed, as shown in Figure 5 (left). In order to align the segmented image to the perpendicular axis, the four edge points were used to apply a perspective transformation to obtain a top-down "bird's eye view" [20] of the image, as shown in Figure 5 (right).

Figure 5. The skewed image (left) and perspective-transformed image (right).
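The realignment step can be sketched with OpenCV's homography utilities as below. This is a generic four-point transform in the spirit of [20]; the helper name and the corner ordering (top-left, top-right, bottom-right, bottom-left) are assumptions for illustration.

import cv2
import numpy as np

def top_down_view(image, corners, width, height):
    # 'corners' are the four detected edge points of the segmented object
    src = np.array(corners, dtype="float32")
    dst = np.array([[0, 0], [width - 1, 0],
                    [width - 1, height - 1], [0, height - 1]], dtype="float32")
    M = cv2.getPerspectiveTransform(src, dst)  # 3x3 perspective matrix from 4 point pairs
    return cv2.warpPerspective(image, M, (width, height))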
5.3. Contours and Photogrammetry

Circular red stickers with a known dimension were used as reference objects in the top and front views. A Canny edge detector [21] was employed to detect edges in the images, and contour gaps were filled using dilation and erosion functions. For hysteresis thresholding in the Canny edge detection algorithm, two threshold values were utilized: minVal and maxVal. Any edges with an intensity gradient greater than maxVal are considered edges, while those below minVal are deemed non-edges and are therefore discarded. The segmented image had at least two contours: (a) the object and (b) the reference shape (red circle). It is clear that the largest contour represents the object, while the smaller contour represents the reference shape. The sample code snippet is provided in Table 1.
Table 1. Code snippet of the contour detection in the image.

import cv2

# 'gray' is the grayscale input image
# Perform edge detection, then dilation + erosion to close contour gaps
edged = cv2.Canny(gray, 50, 100)  # image, minVal, maxVal
edged = cv2.dilate(edged, None, iterations=2)
edged = cv2.erode(edged, None, iterations=2)
# Find contours (OpenCV 4.x returns the tuple (contours, hierarchy))
cnts, hierarchy = cv2.findContours(edged.copy(), cv2.RETR_TREE,
                                   cv2.CHAIN_APPROX_SIMPLE)
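Continuing the snippet, the object and the reference sticker can then be told apart by contour area, as described above; this is a hedged sketch rather than the authors' exact code.

# The largest contour is the object; the next largest is the red reference sticker
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
object_contour, reference_contour = cnts[0], cnts[1]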
Figure 6 shows the original object with a marker of known dimension and the complex cross-sectional object.

Figure 6. (a) A complex cross-sectional object with the reference marker and (b) a bounding box enclosing the object and the contour points.
6. Results and Discussion

The three-dimensional views or pictorial drawings are not able to show all the details of the object, such as holes and grooves. In order to obtain more complete information about the object, an orthographic projection (or multi-view drawing) is generally used in engineering design, as shown in Figure 7. Orthographic views are two-dimensional views of three-dimensional objects created by projecting a view of an object onto a plane parallel to one of the planes of the object. Technical drawings usually include the front, top, and side orthographic views. However, two views are sufficient for uniform cross-sectional and cylindrical bodies. In the case of complicated bodies with slots and holes, an additional sectional view might be needed.
Figure 7. Orthographic views of a 3D object showing front, top, and side views.
Integrating all three views within a CAD environment to generate a 3D model without adequate training is not a straightforward task. In this study, we employed the Computer-Aided Three-dimensional Interactive Application (CATIA V5) for 3D model generation. Although CATIA V5 [22] is widely recognized as one of the most popular engineering and design software programs, it still demands the expertise of a skilled engineer to create intricate 3D shapes. To streamline the process of generating a 3D model from contour information (x, y, and z coordinates), we developed a Visual Basic Script macro (CATScript). Figure 8 illustrates the detailed algorithm for integrating the previous steps with a CAD system in this context.
CAD Thesystem in this context.
flowchart illustrates the comprehensive system architecture that outlines the
process of generating a contour of an object from input images captured by the cameras.
The workflow commences on the input of the object’s image, followed by meticulous
background removal using the GrabCut algorithm. Subsequently, edge detection is applied
to the processed image, resulting in the extraction of key object edges. These detected edges
are utilized to construct the object’s contour.
Machines 2023,11,
Machines2023, 11,1083
x FOR PEER REVIEW 88 of 12
12
CATIA macro
Figure8.8. An
Figure Anintegrated
integratedflowchart
flowchartfor
forextracting
extractingobject
objectdimensions
dimensionsinin OpenCV
OpenCV and
and creating
creating geome-
geometry
try in CATIA
in CATIA V5. V5.
Moreover, the contour's accuracy is ensured by referencing marker dimensions, facilitating precise measurement of both its x and y coordinates. The coordinate data are then preserved in a *.txt file, concluding the OpenCV stage of the operation. A custom Python program is employed to read the x and y dimensions from the text file, subsequently generating a macro (*.CATScript) in the VBScript programming language. This generated macro can be directly imported into the CAD software, where it aids in sketching a 2D surface that can be extruded to the user's specified dimensions, ultimately producing a 3D solid model.
Case Studies
Two sets of experiments were performed to validate the proposed process of 3D
model generation using (a) a relatively simple cubical object and (b) a more complicated
irregular-shaped cross-section object. Both experiments followed a similar procedure as
shown in Figure 9a.
Figure 9. (a) Object coordinate extraction to 3D model generation steps and (b) physical and virtual measurement comparisons.
In order to measure the size of an object in an image, the program first performs calibration using information about the reference object. For calculating the 'pixels-per-metric' ratio, the ratio of a known reference object's width in pixels to its known width was used:

$$\text{Pixels per metric} = \frac{\text{object width in pixels}}{\text{known width}}$$

In the case of a simple cubical object, the coordinates of two end points of a rectangular surface suffice for the generation of a rectangular shape; similar processes can be followed for the remaining views. Comparing the virtual measurement (29.8) against the physical dimension (29.9) of the object:

$$\text{Percentage error} = \frac{29.9 - 29.8}{29.9} \times 100\% = 0.3\%$$

It can be seen that the discrepancy between the two is less than 1%, which is acceptable in many practical scenarios. This discrepancy can be further reduced by taking images with the camera at a perfect 90-degree angle, using higher-resolution cameras and sharply focused images, and properly selecting the ROI for the GrabCut algorithm operation.
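A short sketch of this calibration is given below, reusing the illustrative object_contour and reference_contour from the contour step; the 20 mm sticker width is an assumed value, not one reported in the paper.

import cv2

KNOWN_WIDTH = 20.0  # assumed real-world width of the reference sticker

# Width in pixels of the known reference object (the red circular sticker)
(_, _), radius = cv2.minEnclosingCircle(reference_contour)
reference_width_px = 2.0 * radius

pixels_per_metric = reference_width_px / KNOWN_WIDTH

# Any pixel measurement can now be converted to real-world units
x, y, w, h = cv2.boundingRect(object_contour)
object_width = w / pixels_per_metric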
The complex-shaped object in the second case produced a large number of continuous
points along the boundary of the contour. The contour pixel coordinates (p(x), p(y)) were
transformed to the (x, y) coordinate using the reference dimension (φ) and the correspond-
ing pixel width (∆) using the following relation:
$$(x, y) = \left(\frac{\varphi}{\Delta} \times p(x),\ \frac{\varphi}{\Delta} \times p(y)\right)$$
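The scaling and export of the contour points might look like the following sketch; the output file name and the reuse of the calibration variables are assumptions rather than details given in the paper.

phi = KNOWN_WIDTH           # reference dimension (from the calibration step)
delta = reference_width_px  # corresponding width in pixels
scale = phi / delta

with open("contour_points.txt", "w") as f:
    for px, py in object_contour.reshape(-1, 2):
        # Transform pixel coordinates to real-world (x, y) coordinates
        f.write(f"{scale * px:.3f},{scale * py:.3f}\n")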
The dimensions of the object were exported to Computer-Aided Design (CAD) soft-
ware via a macro [23] to generate a 3D model. A separate Python program was developed to
automatically generate a segment of the macro program (*.CATScript) using the coordinate
data. In the final step, the macro was executed within the CAD program to produce a
sketch. This sketch had a few open contours requiring minor geometry adjustments. The
resulting CAD model displayed an almost imperceptible level of roughness in its contour
profile, measuring in the order of micrometers (~µm).
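A simplified sketch of this macro-generation step is shown below; the emitted CATScript body is a stand-in template, since the paper does not reproduce the authors' actual macro text, and factory2D stands for the 2D sketch factory object in CATIA's automation model [23].

def write_catscript(points_file, macro_file):
    # Read the (x, y) coordinates exported by the OpenCV stage
    with open(points_file) as f:
        points = [line.strip().split(",") for line in f if line.strip()]
    with open(macro_file, "w") as m:
        m.write("Sub CATMain()\n")
        m.write("' ... open a sketch on the chosen plane of the active part ...\n")
        for i, (x, y) in enumerate(points):
            # One 2D point per contour coordinate; the border curve is then
            # fitted through these points and cleaned up inside CATIA
            m.write(f"Set point{i} = factory2D.CreatePoint({x}, {y})\n")
        m.write("End Sub\n")

write_catscript("contour_points.txt", "contour_sketch.CATScript")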
Figure 10a presents a plot of points within the CAD environment, which, upon extrusion, resulted in the creation of a 3D solid model depicted in Figure 10b.

Figure 10. Comparison of a real object and steps of the CAD generation process. (a) The sketch of the contour points in the CAD environment; (b) isometric view of the object generated by the CAD program.
The generated CAD file is saved on the disk as a stereolithography CAD file (*.stl), which is imported into the open-source slicing software Ultimaker Cura for 3D printing, as shown in Figure 11a. This slicing software helps to set various printing parameters, such as model position and orientation, infill density, layer deposition thickness, and the selection of materials, and gives information on the estimated realization time and the weight and length of the filament. Finally, Figure 11b shows the direct comparison between the 3D-printed object and the original object.
Figure 11. (a) G-code file generation in Ultimaker Cura (cyan is support, red is the object shell, and yellow shows the layer infill) and (b) comparison of the 3D-printed object with the original object.
7. Conclusions

This research successfully accomplished the generation of a 3D model from images within a commercial CAD program, demonstrating its efficacy for both simple cubic shapes and complex objects. This achievement was made possible through the meticulous separation of objects from their backgrounds, a process executed by selecting the region of interest and leveraging the potent GrabCut algorithm in the OpenCV and Python environments.

To attain precise object dimensions, image pixel calibration was meticulously carried out with reference objects. Furthermore, sophisticated computer vision algorithms were applied for in-depth shape analysis.

For automating the 3D solid model generation within commercial CAD software, a macro was developed. The macro efficiently plotted and connected coordinate points, resulting in the creation of high-quality closed curves.

The overall time required for the entire process ranged from 5 to 15 min. This time frame encapsulated not only the set-up and warm-up duration for the camera and Raspberry Pi but also accounted for variations arising from the complexity of the CAD shapes involved. The intricate nature of certain shapes influenced the number of contour points generated and subsequently impacted the overall processing time.

This research exhibits significant potential applications across diverse domains, including 3D design, shape analysis, and 3D printing, due to its ability to automate 3D model generation within CAD packages. Notably, this approach mitigates the demand for extensive CAD proficiency and streamlines the laborious, time-intensive process of measuring multiple object dimensions. Furthermore, it offers valuable implications for quality assurance in 3D printing, where uneven shrinkage can significantly impact the final product.
Additionally, the incorporation of a robotic hand in our methodology provides a means to precisely rotate the object held by the hand and capture finer details of the object. This innovation opens new avenues for object analysis and modeling that hold promise for a wide range of applications.
References
1. Esteva, A.; Chou, K.; Yeung, S.; Naik, N.; Madani, A.; Mottaghi, A.; Liu, Y.; Topol, E.; Dean, J.; Socher, R. Deep learning-enabled
medical computer vision. NPJ Digit. Med. 2021, 4, 5. [CrossRef] [PubMed]
2. Bhandari, B.; Lee, M. Haptic identification of objects using tactile sensing and computer vision. Adv. Mech. Eng. 2019, 11,
1687814019840468. [CrossRef]
3. Maire, M.R. Contour Detection and Image Segmentation. Ph.D. Thesis, University of California at Berkeley, Berkeley, CA,
USA, 2009.
4. Sileo, M.; Bloisi, D.D.; Pierri, F. Grasping of Solid Industrial Objects Using 3D Registration. Machines 2023, 11, 396. [CrossRef]
5. Sculpteo. 3D Model and CAD Model. Available online: https://www.sculpteo.com/en/glossary/3d-model-definition/ (accessed on 26 July 2019).
6. Intwala, A. Image to CAD: Feature Extraction and Translation of Raster Image of CAD Drawing to DXF CAD Format. In Computer
Vision and Image Processing; Nain, N., Vipparthi, S.K., Raman, B., Eds.; Springer: Singapore, 2020; pp. 205–215.
7. Fan, N. Application of CAD Combined with Computer Image Processing Technology in Mechanical Drawing. In Application of
Intelligent Systems in Multi-modal Information Analytics; Sugumaran, V., Xu, Z., Zhou, H., Eds.; Springer International Publishing:
Cham, Switzerland, 2021; pp. 319–324.
8. Zou, C.; Guo, R.; Li, Z.; Hoiem, D. Complete 3D Scene Parsing from an RGBD Image. Int. J. Comput. Vis. 2019, 127, 143–162.
[CrossRef]
9. Jackson, A.S.; Bulat, A.; Argyriou, V.; Tzimiropoulos, G. Large Pose 3D Face Reconstruction from a Single Image via Direct
Volumetric CNN Regression. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy,
22–29 October 2017.
10. Patil, D.B.; Nigam, A.; Mohapatra, S. Image processing approach to automate feature measuring and process parameter optimizing
of laser additive manufacturing process. J. Manuf. Process. 2021, 69, 630–647. [CrossRef]
11. Peddireddy, D.; Fu, X.; Shankar, A.; Wang, H.; Joung, B.G.; Aggarwal, V.; Sutherland, J.W.; Jun, M.B.-G. Identifying manufactura-
bility and machining processes using deep 3D convolutional networks. J. Manuf. Process. 2021, 64, 1336–1348. [CrossRef]
12. Fan, L.; Wang, J.; Xu, Z.; Yang, X. A Reverse Modeling Method Based on CAD Model Prior and Surface Modeling. Machines 2022,
10, 905. [CrossRef]
13. Hackberry Open Source Community. Available online: http://exiii-hackberry.com/ (accessed on 10 February 2019).
14. Raspberry Pi 2 Model B. Available online: https://www.raspberrypi.org/products/raspberry-pi-2-model-b/ (accessed on 1 September 2020).
15. A Video Surveillance OS for Single-Board Computers. Available online: https://github.com/motioneye-project/motioneyeos/releases (accessed on 1 September 2020).
16. Rother, C.; Kolmogorov, V.; Blake, A. “GrabCut”—Interactive Foreground Extraction using Iterated Graph Cuts. ACM Trans.
Graph. 2004, 23, 309–314. [CrossRef]
17. OpenCV GitHub Repository. Available online: https://github.com/opencv/opencv/blob/master/samples/python/grabcut.py (accessed on 1 September 2020).
18. Boykov, Y.Y.; Jolly, M.P. Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In
Proceedings of the Eighth IEEE International Conference on Computer Vision, ICCV 2001, Vancouver, BC, Canada, 7–14 July 2001;
Volume 101, pp. 105–112. [CrossRef]
19. Talbot, J.; Young, B.; Xu, X. Implementing GrabCut; Brigham Young University: Provo, UT, USA, 2004; Volume 3. Available online: https://api.semanticscholar.org/CorpusID:15150449 (accessed on 1 September 2020).
20. Rosebrock, A. How to Build a Kick-Ass Mobile Document Scanner in Just 5 Minutes. Available online: https://pyimagesearch.com/2014/09/01/build-kick-ass-mobile-document-scanner-just-5-minutes/ (accessed on 1 September 2020).
21. Canny Edge Detection. OpenCV. Available online: https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html (accessed on 1 July 2023).
22. Plantenburg, K. Introduction to CATIA V5 Release 19; SDC Publications: Mission, KS, USA, 2009.
23. Ziethen, D.R. CATIA V5: Macro Programming with Visual Basic Script; McGraw-Hill Education: Singapore, 2013.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.