Motion Induced Phase Error Reduction Using A Hilbert Transform
Abstract: Object motion can introduce phase error and thus measurement error for phase-
shifting profilometry. This paper proposes a generic motion error compensation method based
on our finding that the dominant motion-introduced phase error doubles the frequency of the
projected fringes, and that the Hilbert transform shifts the phase of a fringe pattern by
π/2. We apply a Hilbert transform to phase-shifted fringe patterns to generate another set of
fringe patterns, calculate one phase map using the original fringe patterns and another phase
map using Hilbert transformed fringe patterns, and then use the average of these two phase maps
for three-dimensional reconstruction. Both simulation and experiments demonstrated that the
proposed method can substantially reduce motion-introduced measurement error.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Three-dimensional (3D) shape measurement using digital fringe projection (DFP) technique
has been exhaustively studied and widely applied due to its simple setup, high-speed and
high-resolution measurement capabilities [1–3].
Conventional DFP technique typically projects 8-bit sinusoidal fringe patterns, and its
measurement speed is limited by the maximum refresh rate of the projector, typically 120 Hz.
Therefore, it is challenging for the conventional DFP technique to measure rapidly moving objects
if the object moves too much while acquiring the required phase-shifted fringe patterns for
phase retrieval. To overcome this limitation, the digital binary defocusing techniques have been
developed to generate quasi-sinusoidal fringes with 1-bit binary patterns through projector lens
defocusing [4, 5], and the advanced digital-light-processing (DLP) projection platform allows
researchers to achieve speed breakthroughs [6–8]. However, the binary defocusing method still
has measurement error if the object moves too quickly. Phase-shifting profilometry works well
with the assumption that the object stays quasi-static when the required phase-shifted fringe
patterns are captured. Therefore, any motion of the object could introduce phase error and thus
measurement error; we refer to this type of error as motion-introduced phase error.
The motion-introduced phase error is similar to the phase-shift error in interferometry
systems when the phase shift cannot be precisely generated. However, the motion-introduced
phase error is more complex because it might not be homogeneous, whereas the phase-shift
error in interferometry is homogeneous.
This is because for the interferometry system, the major phase-shift error is introduced by the
displacement error of the mirror driven by a piezoelectric device, and the phase-shift error is
overall homogeneous (e.g., remains the same across the entire measurement surface). However,
the phase error introduced by motion could be nonhomogeneous because the motion of a
dynamically deformable object varies from one point to another.
To eliminate the motion-introduced error, Lu et al. [9] proposed a method to reduce the error
caused by the planar motion parallel to the imaging plane. By placing a few markers on the object
and analyzing the movement of markers, the rigid-body motion of an object can be estimated.
#349667 https://doi.org/10.1364/OE.26.034224
Journal © 2018 Received 2 Nov 2018; revised 30 Nov 2018; accepted 6 Dec 2018; published 17 Dec 2018
Vol. 26, No. 26 | 24 Dec 2018 | OPTICS EXPRESS 34225
Later, they improved their method to handle error induced by translation in the direction of object
height [10]. The motion is estimated using the arbitrary phase shift estimation method developed
by Wang and Han [11]. However, this method is limited to homogeneous background intensity
and modulation amplitude. Feng et al. [12] proposed to solve the nonhomogeneous motion
artifact problem by segmenting objects with different rigid shifts. However, this method still
assumes that the phase-shift error within a single segmented object is homogeneous, thus it may
not work well for dynamically deformable objects where the phase-shift error varies from one
pixel to another. Cong et al. [13] proposed a Fourier-assisted approach to correct the phase-shift
error by differentiating the phase maps of two successive fringe images. However, the accuracy is
limited due to the use of Fourier Transform Profilometry (FTP) for phase shift estimation. Liu et
al. [14] developed a method to find the phase shift more precisely with an iterative process based
on the assumption that the motion is uniform between two adjacent 3D frames. However, such a
method requires the acquisition of another set of fringe patterns for motion estimation, and thus
such a method may not work well if the object moves so rapidly that the motion cannot be
precisely estimated.
We propose a motion-induced phase error compensation method based on our finding that the
dominant motion-introduced phase error doubles the frequency of the projected fringes,
and that the Hilbert transform shifts the phase of a fringe pattern by π/2. Our proposed method includes
four major steps: 1) apply the Hilbert transform to the phase-shifted fringe patterns to generate
another set of fringe patterns; 2) calculate one phase map φ using the original fringe patterns and
another phase map φH using the Hilbert transformed fringe patterns; 3) generate the final phase map
φf by averaging φ and φH, i.e., φf = (φ + φH)/2; and 4) reconstruct the 3D shape using the averaged
phase map φf. This method can substantially reduce motion-introduced phase error for objects
with rigid uniform motion as well as for dynamically deformable objects with non-uniform
motion.
Section 2 explains the principle of the proposed method. Section 3 shows experimental results
to verify the performance of the proposed method; and Sec. 4 summarizes the paper.
2. Principle
2.1. Multi-step phase-shifting algorithm
Phase-shifting methods are widely used in optical metrology because of their speed and
accuracy [15]. The intensity distribution of the n-th fringe pattern for an N-step phase-shifting
algorithm with a phase shift of δn can be described as,

In(x, y) = A(x, y) + B(x, y) cos[Φ(x, y) + δn], (1)

where A(x, y) is the average intensity, B(x, y) the intensity modulation, and Φ(x, y) the phase to
be solved for. If N ≥ 3, the wrapped phase can be calculated by
"Í #
n=0 In (x, y) sin δn
N −1
φ(x, y) = − tan Í N −1 (2)
−1
,
n=0 In (x, y) cos δn
where the arctangent function results in a value ranging [−π, +π) with 2π discontinuities. The
continuous phase map Φ(x, y) can be obtained by applying a phase unwrapping algorithm to
determine the fringe order k(x, y),

Φ(x, y) = φ(x, y) + 2π × k(x, y). (3)
The unwrapped phase can be used for 3D reconstruction once the system is calibrated.
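As a concrete (non-paper) illustration of Eqs. (2) and (3), a minimal NumPy sketch of the N-step wrapped-phase computation and fringe-order unwrapping might look as follows; the fringe parameters, and the use of the ground-truth phase to pick the fringe order, are our own assumptions for this demo:

```python
import numpy as np

def wrapped_phase(patterns, deltas):
    """Eq. (2): least-squares wrapped phase from N >= 3 phase-shifted patterns."""
    num = sum(I * np.sin(d) for I, d in zip(patterns, deltas))
    den = sum(I * np.cos(d) for I, d in zip(patterns, deltas))
    return -np.arctan2(num, den)  # value in (-pi, +pi]

# Synthetic example: three-step algorithm with arbitrary A, B and true phase Phi
x = np.linspace(0, 1, 512)
Phi = 6 * np.pi * x                      # three fringe periods across the image line
A, B = 0.5, 0.4
deltas = [2 * np.pi * n / 3 for n in range(3)]
patterns = [A + B * np.cos(Phi + d) for d in deltas]

phi = wrapped_phase(patterns, deltas)
# Eq. (3): recover the continuous phase with the fringe order k; here k comes
# from the known ground truth, purely for illustration
k = np.round((Phi - phi) / (2 * np.pi))
Phi_unwrapped = phi + 2 * np.pi * k
assert np.allclose(Phi_unwrapped, Phi, atol=1e-8)
```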
2.2. Motion-introduced phase error analysis

The phase-shifting algorithm could determine accurate phase information φ(x, y) if and only
if the phase shift δn is precisely known. For DFP systems, the phase shift can be accurately
controlled with digital projectors. However, due to the existence of motion, the actual phase shift
δn′ could be different from the projected value δn,

δn′(x, y) = δn + εn(x, y), (4)

where εn(x, y) is caused by motion and could vary from one pixel to another.
Phase-shifting profilometry works well if φ(uc, vc) can be accurately determined using Eq. (2),
which requires the phase shift δk to be precisely known. However, if the object moves between
different frames, the actual phase shift δk′ differs from the ideal phase shift δk with an error εk, i.e.,

δk′(uc, vc) = δk + εk(uc, vc), (5)

where εk(uc, vc) is motion dependent and thus can vary from point to point.
Figure 1 illustrates that the motion of a deformable object can introduce phase shift error.
The camera image point C1 corresponds to point S1 on the object surface without motion, but
actually corresponds to S̄1 if the object moves. The corresponding projected fringe pattern points
are P1 and P̄1, respectively. These two points on the projector correspond to two different phase
values Φ1 and Φ̄1; and we define the difference between these phase values as the phase shift error,

ε1 = Φ̄1 − Φ1. (6)
Fig. 1. Schematic of motion-introduced phase shift error in a DFP system: the DMD projects
the fringe pattern and the CMOS camera captures it; object surface points S1, S2 move to S̄1,
S̄2, so camera pixels C1, C2 correspond to shifted projector points P1, P̄1, P2, P̄2.
Therefore, we can obtain the motion-introduced phase error by

Δφ(x, y) = φ′(x, y) − φ(x, y) (7)

= tan⁻¹[ (cos φ Σ_{n=0}^{N−1} In′ sin δn + sin φ Σ_{n=0}^{N−1} In′ cos δn) / (sin φ Σ_{n=0}^{N−1} In′ sin δn − cos φ Σ_{n=0}^{N−1} In′ cos δn) ] (8)

= tan⁻¹[ (cos φ Σ_{n=0}^{N−1} cos(φ + δn′) sin δn + sin φ Σ_{n=0}^{N−1} cos(φ + δn′) cos δn) / (sin φ Σ_{n=0}^{N−1} cos(φ + δn′) sin δn − cos φ Σ_{n=0}^{N−1} cos(φ + δn′) cos δn) ], (9)

= tan⁻¹[ (cos 2φ Σ_{n=0}^{N−1} sin αn + sin 2φ Σ_{n=0}^{N−1} cos αn − Σ_{n=0}^{N−1} sin εn) / (sin 2φ Σ_{n=0}^{N−1} sin αn − cos 2φ Σ_{n=0}^{N−1} cos αn − Σ_{n=0}^{N−1} cos εn) ], (10)
where αn = 2δn + εn.
From Eq. 10, it can be seen that the motion-introduced phase error Δφ is highly correlated with
the real phase φ. Since three-step and four-step phase-shifting algorithms are more extensively used
for high-speed applications, we provide detailed analyses of the phase error for these two special
cases.
For a three-step phase-shifting algorithm, the fringe patterns, wrapped phase, and phase error
can be respectively described as,

In′(x, y) = A + B cos[φ(x, y) + (n − 1) × (2π/3 + ε)], n = 0, 1, 2, (11)

φ′(x, y) = tan⁻¹[ √3(I0′ − I2′) / (2I1′ − I0′ − I2′) ], (12)

Δφ(x, y) = tan⁻¹{ sin 2φ [cos(ε + π/3) − 1/2] / ((cos ε + 1/2) − cos 2φ [cos(ε + π/3) − 1/2]) }. (13)
Given that ε is very small, we will have

cos(ε + π/3) − 1/2 = cos ε cos(π/3) − sin ε sin(π/3) − 1/2 ≈ −(√3/2)ε, (14)

and

cos ε + 1/2 ≈ 3/2. (15)

Therefore, Eq. 13 can be further approximated as,

Δφ(x, y) ≈ tan⁻¹[ −√3 ε sin 2φ / (3 + √3 ε cos 2φ) ], (16)

≈ tan⁻¹[ −(√3/3) ε sin 2φ ], (17)

≈ −(√3/3) ε sin 2φ. (18)
This equation indicates that the phase error approximately doubles the frequency of the projected
fringe pattern.
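As a quick numerical check (our own, not from the paper), one can simulate a uniform per-frame phase-shift drift ε and compare the resulting three-step phase error against the approximation of Eq. (18); the drift value and fringe parameters below are arbitrary:

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return np.angle(np.exp(1j * a))

eps = 0.05                                   # uniform phase-shift drift per step (rad)
phi = np.linspace(0, 2 * np.pi, 1000, endpoint=False)

# Eq. (11): three-step patterns with erroneous shifts (n - 1) * (2*pi/3 + eps)
I = [1.0 + 0.5 * np.cos(phi + (n - 1) * (2 * np.pi / 3 + eps)) for n in range(3)]

# Eq. (12): wrapped phase retrieved from the erroneous patterns
phi_err = np.arctan2(np.sqrt(3) * (I[0] - I[2]), 2 * I[1] - I[0] - I[2])

dphi = wrap(phi_err - phi)                   # actual motion-introduced phase error
approx = -(np.sqrt(3) / 3) * eps * np.sin(2 * phi)   # Eq. (18)

assert np.max(np.abs(dphi)) > 0.01           # the error is noticeable...
assert np.max(np.abs(dphi - approx)) < 0.01  # ...and well modeled by Eq. (18)
```

The residual mismatch is of order ε², consistent with the small-ε approximation.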
Similarly, we deduced the phase error model for the four-step phase-shifting algorithm as
follows. The fringe patterns, wrapped phase, and phase error can be respectively described as,

In′(x, y) = A + B cos[φ(x, y) + (2n − 3) × (π/4 + ε)], n = 0, 1, 2, 3, (19)

φ′(x, y) = tan⁻¹[ (I3′ − I1′) / (I0′ − I2′) ] + 3π/4, (20)

Δφ(x, y) = tan⁻¹{ [(I3′ − I1′) cos(φ − 3π/4) − (I0′ − I2′) sin(φ − 3π/4)] / [(I3′ − I1′) sin(φ − 3π/4) + (I0′ − I2′) cos(φ − 3π/4)] }, (21)

= tan⁻¹{ γ0 cos 2(φ − 3π/4) / [γ0 sin 2(φ − 3π/4) + γ1] }, (22)

where γ0 = 2[sin(3ε) − sin ε], and γ1 = 2[cos(3ε) + cos ε].
Once again, because ε is very small, we will have,

γ0 = 2[sin(3ε) − sin ε] ≈ 2[3ε − ε] = 4ε, (23)

γ1 = 2[cos(3ε) + cos ε] ≈ 4. (24)
Substituting Eqs. (23) and (24) into Eq. (22) for small ε gives Δφ(x, y) ≈ ε cos 2(φ − 3π/4) =
−ε sin 2φ. The above analysis thus reveals that for the four-step phase-shifting algorithm, the
motion-induced phase error also approximately doubles the frequency of the projected fringe pattern.
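The same kind of numerical check (again ours, not the paper's) confirms that the four-step error collapses to the doubled-frequency term −ε sin 2φ in the small-ε limit; the drift value is arbitrary:

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return np.angle(np.exp(1j * a))

eps = 0.05                                   # uniform phase-shift drift (rad)
phi = np.linspace(0, 2 * np.pi, 1000, endpoint=False)

# Eq. (19): four-step patterns with erroneous shifts (2n - 3) * (pi/4 + eps)
I = [1.0 + 0.5 * np.cos(phi + (2 * n - 3) * (np.pi / 4 + eps)) for n in range(4)]

# Eq. (20): wrapped phase retrieved from the erroneous patterns
phi_err = np.arctan2(I[3] - I[1], I[0] - I[2]) + 3 * np.pi / 4

dphi = wrap(phi_err - phi)                   # actual motion-introduced phase error
approx = -eps * np.sin(2 * phi)              # small-eps limit of Eq. (22)

assert np.max(np.abs(dphi - approx)) < 0.01  # doubled-frequency model holds
```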
Fig. 2. Simulation results of the motion-introduced phase error for three-step phase-shifting
algorithm. (a) Uniform motion; (b) non-uniform motion.
Fig. 3. Simulation results of the motion-introduced phase error for four-step phase-shifting
algorithm. (a) Uniform motion; (b) non-uniform motion.
Based on this finding, we came to the idea that if we shift the original phase map by a quarter
period (i.e., π/2), we can then average the shifted phase map with the original phase map to
compensate for the motion-induced error. This method can simultaneously tackle errors caused
by uniform and non-uniform motions. Meanwhile, it requires neither additional pattern acquisition
nor motion estimation, which is especially important for high-speed applications.
Applying the Hilbert transform H to the captured fringe patterns shifts the phase of the
sinusoidal carrier by π/2, converting the cosine fringes into sine fringes,

InH(x, y) = H[In(x, y)] = A′(x, y) + B(x, y) sin[φ(x, y) + δn′], (29)

where A′(x, y) is the average intensity that might be different from the original one. Then another
phase map can be calculated using the Hilbert transformed fringe patterns as,

φH(x, y) = tan⁻¹[ Σ_{n=0}^{N−1} InH(x, y) cos δn / Σ_{n=0}^{N−1} InH(x, y) sin δn ]. (30)
Since the phase error of the original phase map φ(x, y) and that of the Hilbert phase map φH(x, y)
have opposite tendencies, we can generate another phase map by

φf(x, y) = [φ(x, y) + φH(x, y)]/2. (31)

The averaged phase map φf(x, y) could significantly reduce the periodic motion-introduced phase error.
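To make the compensation concrete, here is a small end-to-end sketch (our own illustration, using a generic FFT-based Hilbert transform rather than any particular implementation from the paper): three fringe patterns with simulated motion-induced shift errors are transformed, the two phase maps are computed per Eqs. (2) and (30), and their average per Eq. (31) suppresses the doubled-frequency error:

```python
import numpy as np

def hilbert_transform(signal):
    """FFT-based Hilbert transform of a 1D fringe cross-section: every
    positive-frequency component is phase-shifted by -pi/2, so
    B*cos(phi) becomes B*sin(phi)."""
    n = signal.size
    spectrum = np.fft.fft(signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0                    # n is even in this sketch
    analytic = np.fft.ifft(spectrum * h)
    return analytic.imag               # imaginary part = Hilbert transform

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return np.angle(np.exp(1j * a))

# Three-step fringes with motion-induced phase-shift errors (Eq. 4)
x = np.linspace(0, 1, 1024, endpoint=False)
phi_true = 2 * np.pi * 16 * x                    # 16 fringe periods
deltas = [2 * np.pi * n / 3 for n in range(3)]
eps = [0.0, 0.1, 0.2]                            # uniform motion: 0.1 rad drift/frame
I = [0.6 + 0.4 * np.cos(phi_true + d + e) for d, e in zip(deltas, eps)]
IH = [hilbert_transform(In) for In in I]         # Eq. (29)

# Eq. (2): phase from the original patterns
phi = -np.arctan2(sum(In * np.sin(d) for In, d in zip(I, deltas)),
                  sum(In * np.cos(d) for In, d in zip(I, deltas)))
# Eq. (30): phase from the Hilbert transformed patterns
phiH = np.arctan2(sum(In * np.cos(d) for In, d in zip(IH, deltas)),
                  sum(In * np.sin(d) for In, d in zip(IH, deltas)))
# Eq. (31): average the two phase maps on the circle to avoid 2*pi jumps
phif = wrap(phi + wrap(phiH - phi) / 2)

def rms(err):
    err = wrap(err)
    err = err - err.mean()   # a constant offset only shifts depth, not shape
    return np.sqrt(np.mean(err ** 2))

assert rms(phi - phi_true) > 0.02                        # motion error is noticeable
assert rms(phif - phi_true) < 0.3 * rms(phi - phi_true)  # averaging suppresses it
```

Note that averaging cancels the doubled-frequency component because the two error curves have opposite tendencies; the residual constant offset caused by the mean phase-shift error is removed here before computing the rms.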
To test the performance of the proposed method, we first evaluated the phase error for both
uniform and non-uniform motion cases for the three-step phase-shifting algorithm. In the simulations,
we set the uniform motion induced phase-shift errors as ε0 = 0 rad, ε1 = 0.1 rad, ε2 = 0.2 rad.
The phase error plots of one period are shown in Fig. 4(a). It can be seen that the phase error
φe obtained from the original phase-shifted patterns and the phase error φeH obtained from the
Hilbert transformed fringe patterns indeed have different tendencies. The averaged phase error
φef is significantly reduced: from a root-mean-square (rms) value of 0.042 rad to 0.0012 rad. Figure
4(c) shows the phase rms errors when the phase-shift error ε1 varies from −0.1 rad to 0.1 rad
and ε2 = 2ε1. It can be seen that the phase rms error increases approximately linearly with the
Fig. 4. Simulation results of phase error compensation for three-step phase-shifting algorithm.
(a) Phase error plots when ε1 = 0.1 rad and ε2 = 0.2 rad; (b) phase error plots when ε1 = 0.1
rad and ε2 = 0.3 rad; (c) phase rms error with uniform motion; (d) phase rms error with
nonuniform motion.
Fig. 5. Phase error compensation for four-step phase-shifting algorithm. (a) Phase error
plots when ε1 = 0.1 rad, ε2 = 0.2 rad, and ε3 = 0.3 rad; (b) phase error plots when ε1 = 0.1
rad, ε2 = 0.3 rad, and ε3 = 0.6 rad; (c) phase rms error with uniform motion; (d) phase rms
error with nonuniform motion.
phase-shift error, and the proposed method can effectively reduce the phase error caused by
uniform motion. Figure 4(b) shows simulation results for non-uniform motion when setting the
phase-shift errors as ε0 = 0 rad, ε1 = 0.1 rad, ε2 = 3ε1. It can be found that the proposed method
effectively reduces the phase rms error from 0.066 rad to 0.003 rad. Figure 4(d) plots the results
when ε1 varies from −0.1 rad to 0.1 rad and ε2 = 3ε1. Once again, the proposed method can
drastically reduce the phase error caused by non-uniform motion.
Similar simulations were also conducted to test the proposed method for the four-step
phase-shifting algorithm, with the results shown in Fig. 5. As shown in Fig. 5(a), the proposed
method reduces the phase rms error from 0.035 rad to 0.0009 rad for uniform motion. For
nonuniform motion, as shown in Fig. 5(b), the phase rms error decreases from 0.072 rad to
0.0036 rad. Figures 5(c) and 5(d) show the compensation results with different phase-shift
errors for uniform and nonuniform motions, which clearly indicate the effectiveness of the
proposed compensation method.
3. Experiments
We further evaluated the performance of our proposed method through experiments. The system
includes a CMOS camera (model: PointGrey Grasshopper3 GS3-U3-23S6M) attached with
an 8 mm focal length lens (model: Computar M0814-MP2), and a DLP projection development
kit (model: DLP LightCrafter 4500). The camera resolution was set as 640 × 480 pixels while
the projector resolution was 912 × 1140 pixels. The system was calibrated using the method
described by Li et al. [16]. The projector and the camera were synchronized at 120
Hz, and a three-frequency temporal phase-unwrapping algorithm was adopted for absolute phase
retrieval.
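The temporal unwrapping step is not detailed in this paper; as background, here is a minimal two-frequency illustration (our own simplification of the three-frequency scheme, with arbitrary frequencies and symbols) of how a continuous low-frequency phase determines the fringe order k(x, y) of Eq. (3) for a high-frequency phase:

```python
import numpy as np

def unwrap_with_reference(phi_high, Phi_low, freq_ratio):
    """Temporal phase unwrapping: the already-continuous low-frequency phase,
    scaled by the frequency ratio, predicts the continuous high-frequency
    phase; the fringe order k is the nearest integer number of 2*pi jumps."""
    k = np.round((freq_ratio * Phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

x = np.linspace(0, 1, 1024)
Phi_low = 2 * np.pi * 1 * x            # single-fringe (unit-frequency) phase
Phi_high = 2 * np.pi * 16 * x          # high-frequency phase used for measurement
wrap = lambda a: np.angle(np.exp(1j * a))
phi_high = wrap(Phi_high)              # wrapped high-frequency phase, as from Eq. (2)

Phi_rec = unwrap_with_reference(phi_high, Phi_low, 16)
assert np.allclose(Phi_rec, Phi_high, atol=1e-8)
```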
Fig. 6. Measurement result of a moving sphere for three-step phase-shifting algorithm. (a)
3D result from original phase-shifted fringe patterns; (b) 3D result from Hilbert transformed
fringe patterns; (c) 3D result using our proposed method; (d) error map of the result in (a)
(mean: 0.172 mm, standard deviation: 0.124 mm); (e) error map of (b) (mean: 0.160 mm,
standard deviation: 0.116 mm); (f) error map of (c) (mean: 0.038 mm, standard deviation:
0.032 mm).
First, a moving sphere with a diameter of 79.2 mm, moving at a speed of approximately 80 mm/s,
was used to quantitatively evaluate the performance of the proposed method for both three-step
and four-step phase-shifting algorithms. Figure 6 shows the results for three-step phase-shifting
algorithm. Figure 6(a) shows the 3D result obtained from original fringe patterns where the
motion-introduced measurement error (i.e. vertical stripes) is obvious. Figure 6(b) shows the 3D
result obtained from Hilbert-transformed fringe patterns, and the motion-introduced measurement
error is also obvious. Figure 6(c) shows the 3D result obtained from our proposed method,
clearly demonstrating that measurement error is greatly reduced (i.e., smoother surface). To
quantitatively evaluate the improvements, we compared the measurement results with an ideal
sphere, and the corresponding error maps are obtained as shown in Figs. 6(d)-6(f). For the result
obtained from the original fringe patterns, the mean measurement error is 0.172 mm and the
standard deviation is 0.124 mm. Similarly, the Hilbert transformed fringe patterns give nearly the
same error: a mean error of 0.160 mm and a standard deviation of 0.116 mm. In contrast, the
result from our proposed method reduces the mean error to 0.038 mm and the standard deviation
to 0.032 mm.
Fig. 7. Measurement result of a moving sphere for four-step phase-shifting algorithm. (a)
3D result from original phase-shifted fringe patterns; (b) 3D result from Hilbert transformed
fringe patterns; (c) 3D result using our proposed method; (d) error map of the result in (a)
(mean: 0.118 mm, standard deviation: 0.099 mm); (e) error map of (b) (mean: 0.104 mm,
standard deviation: 0.090 mm); (f) error map of (c) (mean: 0.031 mm, standard deviation:
0.027 mm).
We then performed the same experiments using the four-step phase-shifting algorithm, with the
results shown in Fig. 7. Figures 7(a)-7(c) respectively show the 3D reconstructed results for the
original fringe patterns, the Hilbert transformed patterns, and the proposed method, while Figs.
7(d)-7(f) show the corresponding error maps. For the result obtained from the original fringe
patterns, the mean measurement error is 0.118 mm and the standard deviation is 0.099 mm. The
Hilbert transformed fringe patterns give a similar error: a mean error of 0.104 mm and a standard
deviation of 0.090 mm. In contrast, the result from our proposed method reduces the mean error
to 0.031 mm and the standard deviation to 0.027 mm.
Fig. 8. Measurement result of a moving vase with complex surface structures. (a) Photograph;
(b) raw 3D result; (c) 3D result using our proposed method.
Furthermore, a moving vase with more complex surface structures was also measured using the
three-step phase-shifting algorithm; a photograph is shown in Fig. 8(a). Figure 8(b) shows the 3D
result obtained from the original phase-shifted fringe patterns, depicting clear motion-introduced
error. Figure 8(c) shows the 3D result obtained from our proposed method, where most vertical
stripes caused by motion are no longer obvious. Even though the Hilbert transform could
theoretically smooth out complex structures due to its filtering effect, our experiments demonstrate
that this filtering effect is not obvious even for complex surface geometry like the one shown here.
Lastly, we evaluated our proposed method by measuring dynamically deformable objects such
as human facial expressions. Figure 9(a) shows the 3D result from the original fringe patterns,
while Fig. 9(b) shows the 3D result obtained from Hilbert transformed fringe patterns. Again,
both 3D results show severe motion-introduced error. Figure 9(c) shows the result obtained
from our proposed method, demonstrating that our proposed method can effectively alleviate the
motion introduced error and greatly improve measurement quality.
Fig. 10. Experimental results of texture image generation. (a) Texture image from original
fringe patterns; (b) texture image using corrected phase.
In addition, we noticed that our proposed method can also significantly improve the quality of the
texture (e.g., A(x, y) in Eq. 1). Figure 10(a) shows the recovered texture using the original fringe
patterns; due to motion, obvious vertical stripes are present on the image. After correcting the
phase with the proposed method, the texture information is also greatly improved, as shown
in Fig. 10(b). These experimental data clearly demonstrate the effectiveness of our proposed
method even for dynamically deformable objects with complex surface geometry.
In our research, we assume that the camera speed is not significantly slower than the object
motion, so the mismatch between adjacent frames is not severe. Our proposed method also
works for high-speed motions if we adopt the binary defocusing technique with a high-speed
structured light system.
4. Summary
This paper has presented a motion-induced error compensation algorithm based on Hilbert
transform, without requiring the acquisition of additional images or estimation of unknown
phase shifts. We successfully demonstrated that the proposed method can drastically reduce
measurement error introduced by homogeneous and non-homogeneous motions. Since this
method requires no extra fringe pattern, it is suitable for high-speed applications.
Funding
National Institute of Justice (NIJ) (2016-DN-BX-0189); National Science Foundation (NSF)
(IIS-1637961); National Natural Science Foundation of China (NSFC) (61603360).
5. Acknowledgements
We thank the National Institute of Justice (NIJ), the National Science Foundation (NSF), and
the National Natural Science Foundation of China (NSFC). Views expressed in this paper are
those of the authors and not necessarily those of the NIJ, NSF, or NSFC.
References
1. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Laser Eng. 48, 133–140 (2010).
2. S. Zhang, “Recent progresses on real-time 3-d shape measurement using digital fringe projection techniques,” Opt.
Laser Eng. 48, 149–158 (2010).
3. X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: A review,” Opt. Laser Eng. 48, 191–204 (2010).
4. S. Lei and S. Zhang, “Flexible 3-d shape measurement using projector defocusing,” Opt. Lett. 34, 3080–3082 (2009).
5. C. Zuo, Q. Chen, S. Feng, F. Feng, G. Gu, and X. Sui, “Optimized pulse width modulation pattern strategy for
three-dimensional profilometry with projector defocusing,” Appl. Opt. 51, 4477–4490 (2012).
6. B. Li, Y. Wang, J. Dai, W. Lohry, and S. Zhang, “Some recent advances on superfast 3d shape measurement with
digital binary defocusing techniques,” Opt. Laser Eng. 54, 236–246 (2014).
7. Y. Wang and S. Zhang, “Superfast multifrequency phase-shifting technique with optimal pulse width modulation,”
Opt. Express 19, 5143–5148 (2011).
8. J. Zhu, P. Zhou, X. Su, and Z. You, “Accurate and fast 3d surface measurement with temporal-spatial binary encoding
structured illumination,” Opt. Express 24, 28549–28560 (2016).
9. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-d shape measurement of moving object
using phase shifting profilometry,” Opt. Express 21, 30610–30622 (2013).
10. L. Lu, J. Xi, Y. Yu, and Q. Guo, “Improving the accuracy performance of phase-shifting profilometry for the
measurement of objects in motion,” Opt. Lett. 39, 6715–6718 (2014).
11. Z. Wang and B. Han, “Advanced iterative algorithm for phase extraction of randomly phase-shifted interferograms,”
Opt. Lett. 29, 1671–1673 (2004).
12. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-d measurements with
motion-compensated phase-shifting profilometry,” Opt. Laser Eng. 103, 127–138 (2018).
13. P. Cong, Z. Xiong, Y. Zhang, S. Zhao, and F. Wu, “Accurate dynamic 3d sensing with fourier-assisted phase shifting,”
IEEE J. Sel. Top. Signal Process. 9, 396–408 (2015).
14. Z. Liu, P. C. Zibley, and S. Zhang, “Motion-induced error compensation for phase shifting profilometry,” Opt. Express
26, 12632–12637 (2018).
15. D. Malacara, ed., Optical Shop Testing, 3rd ed. (John Wiley and Sons, New York, 2007).
16. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured light system with an out-of-focus
projector,” Appl. Opt. 53, 3415–3426 (2014).