Mirror and Camera Synch
∗ Tomohiko [email protected]
#247651 Received 11 Aug 2015; revised 10 Oct 2015; accepted 16 Nov 2015; published 30 Nov 2015
© 2015 OSA 14 Dec 2015 | Vol. 23, No. 25 | DOI:10.1364/OE.23.031648 | OPTICS EXPRESS 31648
11. K. Okumura, H. Oku, and M. Ishikawa, “High-speed gaze controller for millisecond-order pan/tilt camera,” in
Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2011), pp. 6186–6191.
12. M. Ito, “Cerebellar control of the vestibulo-ocular reflex - around the flocculus hypothesis,” Annu. Rev. Neurosci.
5, 275–296 (1982).
13. M. Davis and P. Green, “Head-bobbing during walking, running and flying: relative motion perception in the
pigeon,” J. Exp. Biol. 138(1), 71–91 (1988).
14. J. Heo, J. Kim, and D. Lee, “Real-time digital image stabilization using motion sensors for search range reduc-
tion,” in SoC Design Conference (ISOCC, 2012), pp. 363–366.
15. I. V. Romanenko, E. A. Edirisinghe, and D. Larkin, “Block matching noise reduction method for photographic
images applied in Bayer RAW domain and optimized for real-time implementation,” Proc. SPIE 8437, 84370F
(2012).
16. O. Yang and B. Choi, “Laser speckle imaging using a consumer-grade color camera,” Opt. Lett. 37(19), 3957–
3959 (2012).
17. V.-F. Duma and J. P. Rolland, “Advancements on galvanometer scanners for high-end applications,” Proc. SPIE 8936, 893612 (2014).
1. Introduction
To perform visual inspection of extremely large targets, such as walls, surfaces of structures,
roads, assembly lines, and so on, in an efficient manner, in terms of both time and cost, real-
time inspection systems must have a simple construction and be capable of operating at high
speed. However, high-speed motion degrades image quality owing to motion blur, and some-
times results in lost frames. For example, tunnels on highways have a comparatively high risk
of deteriorating owing to their structures, and it is difficult to enforce the frequent traffic restric-
tions that are needed for their inspection. Therefore, there is an increasing demand for systems
that can monitor tunnel surfaces from a moving vehicle. In particular, as a substitute for human
visual inspection, high-quality images of tunnel surfaces are necessary for accurately judging
faults such as cracks and stains in the structures. However, there is a trade-off between efficiency and precision: high-resolution pictures are especially susceptible to motion blur, and high-speed motion therefore degrades image quality. In vehicle-mounted inspection systems for infrastructure, intense illumination is used so that the exposure can be shortened to suppress motion blur while maintaining fine spatial resolution; however, such illumination might cause other drivers to have accidents. Additionally, intense light may damage the surface of some targets, so lower illumination is required. For example, inspection of products on a conveyor belt needs to be efficient, yet some products might be damaged by intense illumination. Hence, a method that compensates for motion blur without using intense illumination is required.
Many methods have been proposed to compensate for such motion blur. They can be categorized into those that prevent motion blur at capture time [1–3, 9–11], in which the sensor or system is made to follow the moving object, and those that restore the captured image in post-processing [4–8]. Although considerable research effort in the computational imaging community has been focused on the latter category, the method proposed in this paper belongs to the former. The two categories can be compared in many ways, but the former is generally more powerful: blur that is avoided at capture time never destroys high-frequency information, whereas post-capture restoration cannot fully recover it.
As a method of the former category, time delayed integration (TDI) virtually extends the exposure time [1]; however, this extension is limited: as the relative speed between the camera and the target increases, the exposure time at each TDI stage drops and more stages are required. TDI sensor costs rise steeply with the number of stages, which limits the applicability of such systems in cost-efficient practical settings. In addition, the TDI method requires precise encoder information. Another method in the former category is optical image stabilization (OIS). This method is also effective
for compensating for motion blur caused by hand shake [2, 3]; however, OIS has low accuracy,
and a built-in gyro sensor or acceleration sensor is needed to control the actuator.
Although additional sensors can help to reduce degradation, the latter category also includes motion blur rectification methods that require no additional sensors, such as blind deconvolution [4, 5]. However, the usual blind deconvolution methods estimate the point spread function off-line, and they are therefore not suitable for real-time applications. Additionally, blind deconvolution is known to be an NP-hard problem, so without additional information both accuracy and speed are poor. Unlike blind deconvolution, deconvolution guided by known motion vectors simplifies the software processing [6–8]. Levin et al.'s method copes with a motion vector that varies within one exposure by moving the camera itself during capture [6]. Their method is very general for arbitrary one-dimensional motion; however, their hardware is not designed for high-speed motion, and deconvolution is performed off-line. Raskar et al.'s method makes it easy to recover the motion vector by using a flutter shutter [7]. However, the coded exposure limits the exposure time, and because the processing is off-line, real-time application is not supported. In contrast, Qian et al. developed a real-time deconvolution method [8]; however, its operating speed of 1 Hz is too slow to capture all the views necessary for continuous capture under high-speed motion. Moreover, since deconvolution is a rectification performed after motion blur has occurred, high-spatial-frequency information is lost. Finally, all of these deconvolution methods need additional hardware: although their software processing is simpler than blind deconvolution, that simplicity is traded for a more complex hardware setup.
To satisfy the requirements for speed and simplicity, we considered adopting the concept of active vision [9]. This concept broadly belongs to the former category, although its purpose is not to compensate for motion blur. With this concept, dynamic image acquisition becomes possible if
gaze control can be performed so that a subject is always captured at the center of the acquired
image. However, conventional active vision systems have a limitation in terms of the speed at
which the optical gaze direction can be moved, since the weight of the camera prevents rapid
motion when the camera itself is moved by an actuator [10]. To solve this problem, Okumura
et al. proposed using a two-axis galvanometer mirror to control the optical gaze of a camera at
high speed [11] and achieved high-speed gaze control for general target tracking. In their sys-
tem, however, the optical gaze of the camera follows the center point of the target, and hence
the response time causes motion blur when the target is moving at high speed. The active vision concept is well suited to tracking a single target continuously; in one-dimensional motion (e.g., on roads, rails, and conveyors), however, the target is updated as the relative position between the camera and the targets changes. Hence, the camera must capture these images one after another so as not to miss frames, and we apply the active vision concept to successively updated local targets within a large target. Active vision systems move in a manner similar to the active motion of the human eye when tracking moving objects, whereas our system is based on a model of the human vestibulo-ocular reflex [12] and the head-bobbing of pigeons during walking [13]. These body mechanisms compensate for motion blur, and since we found that they can be effectively adopted in a vision system, we developed a motion blur compensation system that prolongs the exposure time. Additionally, unlike active vision, in which the motion vector varies within one image, here we can assume that the motion blur is invariant because the target is large; thus, the real-time capability is high enough to keep up with the relative speed between the camera and the target. We call this novel concept background tracking.
In this paper, we propose a real-time motion blur compensation system with optical gaze
control using a galvanometer mirror. In our system, we use a lightweight galvanometer mirror
for gaze control [11], allowing fast mirror rotation for capturing the next target. Additionally, we
employ a back-and-forth oscillating motion of the galvanometer mirror to achieve quick motion
that can rapidly respond to changes in the speed of the target. This back-and-forth motion is
realized by applying a sinusoidal driving pattern, and the exposure timing is synchronized with
a particular angle so that the rotation is considered to be linear.
To compensate for motion blur, we estimate a one-dimensional motion vector that is used to set the angular speed of the galvanometer mirror. The relative angular speed between the camera and the galvanometer mirror is determined by a Bayer raw block matching method, which reduces computational costs and makes the system suitable for real-time applications.
[Figure residue: geometry of the setup — sensor width sw, displacement xd, distance l, angle of view α, target positions at times t1–t4, and relative speed vr.]

\[ \frac{x_d}{2l} = \tan\frac{\omega_r}{2}. \tag{2} \]
Without any additional sensors, l is an unknown parameter; however, by rearranging Eqs. (1) and (2) and solving for ωr, we obtain
\[ \omega_r = 2\tan^{-1}\!\left(\frac{x_d}{s_w}\tan\frac{\alpha}{2}\right). \tag{3} \]
Thus, if the target is planar, ωr can be computed from two successive images, without using l or
any additional sensors (e.g., distance sensors). This contributes to the simplicity of the system.
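As an illustration, Eq. (3) can be implemented in a few lines. The function below is a sketch, not the authors' code; in particular, the `frame_interval` parameter is our assumption about how the per-frame angle is converted into an angular speed.

```python
import math

def relative_angular_speed(x_d, s_w, alpha_deg, frame_interval):
    """Estimate the relative angular speed omega_r from Eq. (3).

    x_d            -- horizontal displacement between two successive frames [pixels]
    s_w            -- sensor (image) width [pixels]
    alpha_deg      -- full horizontal angle of view alpha [degrees]
    frame_interval -- time between the two frames [s] (assumed conversion)
    """
    alpha = math.radians(alpha_deg)
    # Eq. (3): angle subtended by the inter-frame displacement x_d
    theta = 2.0 * math.atan((x_d / s_w) * math.tan(alpha / 2.0))
    return theta / frame_interval  # [rad/s]
```

As a sanity check with the experimental values used later (sw = 2336 pixels, α = 4.5°), a displacement equal to the full image width corresponds to exactly α per frame interval, as expected.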
Finally, ωr is substituted for ωm depending on the current time t, yielding

\[ \omega_m = \begin{cases} \phantom{-}\omega_r & (t_1 \le t \le t_3), \\ -\omega_r & (\text{e.g., } t = t_4). \end{cases} \tag{4} \]
2.3. Background tracking using rapid block matching method in the Bayer raw domain
2.3.1. Background tracking for the rapid block matching method
To calculate xd , we adopt the concept of background tracking. In a conventional active vision
system, particular feature information of the target (e.g., color or shape) is used to calculate
the target position. Since the part of the target at which the optical gaze is directed is updated
at each successive image acquisition in high-speed motion, we use a block matching method
for detecting an arbitrary part of the target as a search window. In fact, we do not need the target position, only its speed; therefore, block matching can be performed at any position. In
addition, since we assume vr is one-dimensional, we need only assign at least one row or one column as a search window at any part of the target (depending on the direction of motion). Heo et al. demonstrated that restricting the search range effectively reduces the computational cost [14]. If the original height is 100 pixels and the direction of motion is horizontal, then the computational cost can theoretically be reduced by a factor of 100. This concept is illustrated in Fig. 2(a).
Search window
Search every two pixels
horizontally
Imgp
Search range
Block matching
Imgc
Search range
(a) (b)
Fig. 2. Rapid block matching method in the Bayer raw domain (BGR). (a) Background
tracking for the rapid block matching method. (b) Block matching between two Bayer raw
images.
\[ R_{SSD} = \sum_{j=0}^{1} \sum_{i=0}^{W_w-1} \bigl(\mathrm{Img}_p(i,j) - \mathrm{Img}_c(i+2,j)\bigr)^2. \tag{5} \]
Here, Ww represents the width of the window for block matching. This calculation is iterated from one end of the image to the other horizontally. The x position of the window at which RSSD between the previous image Imgp and the current image Imgc is smallest is taken as xd.
2.4. Temporal control of the real-time high-speed motion blur compensation system
2.4.1. Control flow
Figure 3 illustrates the control flow. In the initial state P1, we can set an initial value of ωm ;
then ωm is set automatically in successive processes. After setting an arbitrary value of ωm ,
at P2, the system itself checks whether or not the current angle of the galvanometer mirror
is appropriate for exposure. Then, at P3, the camera exposes an image until a fixed exposure
time has elapsed. After the exposure, the mirror starts to rotate in the opposite direction until it
reaches the original angle. At the same time, the latest ωm is calculated by using the acquired
images at P4 and P5, and the value is set at P1 again. These processes from P1 to P5 are
repeated. The frequency of this flow, f , is set before P1 and is governed by the acceleration of
the galvanometer mirror and the computational speed. We will discuss f in more detail in Sec.
3.2.1.
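The P1–P5 cycle can be sketched as follows. The two callbacks stand in for image acquisition plus block matching (P4) and the ωm update (P5), since the real hardware interfaces are not described here; the relation A = ωm/(2πf) for the sine amplitude is our reading of Sec. 2.4.2 (the drive slope at the zero crossing equals ωm).

```python
import math

def run_cycle(f, t_ex, omega_m, get_displacement, omega_from_displacement, n_cycles):
    """Simulate the P1-P5 control flow (no real hardware involved).

    f        -- oscillation frequency of the mirror [Hz]
    t_ex     -- fixed exposure time [s]; exposure spans -t_ex/2 to +t_ex/2
                around the zero crossing, where the sine is nearly linear
    omega_m  -- initial mirror angular speed set at P1 [rad/s]
    """
    history = []
    for _ in range(n_cycles):
        # P1: amplitude such that d(theta)/dt = omega_m at theta = 0
        A = omega_m / (2.0 * math.pi * f)
        # P2/P3: wait for the appropriate mirror angle, then expose for t_ex
        history.append((A, omega_m))
        x_d = get_displacement()                  # P4: block matching result
        omega_m = omega_from_displacement(x_d)    # P5: update for next cycle
    return history
```

Each pass through the loop corresponds to one oscillation period 1/f, so the update rate of ωm is bounded by f and by the computation time of P4/P5.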
[Fig. 3 residue: the flow begins at START, proceeds to P1 (configuration of the mirror angular speed ωm), and then to a waiting state.]
2.4.2. Method of synchronization between the camera exposure timing and the galvanometer mirror
angle
The frequency f and the amplitude of the oscillating galvanometer mirror are limited by its weight; thus, mirror size and achievable acceleration are in a trade-off relationship. A constant angular speed is the most appropriate condition for making ωr and ωm agree with each other, namely, for compensating motion blur, and in principle we could generate triangular waves with alternating positive and negative constant angular speeds for the back-and-forth motion:
because of the control delay. To avoid this problem, we adopt sine waves for control, which approximate triangular waves having a common amplitude A:
\[ \theta = A\sin(2\pi f t). \]

[Fig. 4 residue: the sinusoidal drive θ = A sin(2πft) approximates the linear segment θ = ωm t of a triangular wave near θ = 0, with the exposure time tex centred on the zero crossing.]
After configuration, the camera exposes an image from −tex /2 to +tex /2 to synchronize with
the mirror rotation. These processes are repeated every 1/ f .
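To see why the sine can be treated as linear over the exposure window, the worst-case deviation at the window edges can be computed directly; this is our own check, not a figure from the paper.

```python
import math

def linearity_error(A, f, t_ex):
    """Worst-case deviation of theta = A*sin(2*pi*f*t) from the linear
    approximation theta ~ 2*pi*f*A*t over [-t_ex/2, +t_ex/2]."""
    w = 2.0 * math.pi * f
    t = t_ex / 2.0  # the deviation is largest at the edges of the window
    return abs(A * math.sin(w * t) - A * w * t)
```

For f = 100 Hz and tex = 1 ms, the edge deviation is about 1.6% of the linear value, which supports treating the rotation as linear during exposure.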
3. Experimental evaluation
3.1. Experimental setup
To demonstrate our proposed method, we compensated for the motion of a rapidly moving
conveyor belt. Figure 5 illustrates the experimental system. To evaluate the performance of our
system, we prepared a resolution chart and detailed images to paste onto the surface of the
conveyor belt. The still image of the resolution chart had a steep slope on a horizontal profile;
therefore, we checked peak-to-peak values of black-and-white pairs at each vr .
We used a CMOS high-speed color camera (Mikrotron Eosens MC4083). This camera can
acquire full HD images at almost 900 Hz. The galvanometer mirror was an M3 series device
Fig. 6. Optical components of the prototype real-time high-speed motion blur compensation
system.
At the beginning of the experiments, we set the parameters as follows: tex = 1 ms; vr = 0 to 30 km/h; α = 4.5°; sw = 2336 pixels; and l = 3.0 m.
a function generator to generate sine waves with frequencies from 100 to 500 Hz. The M3 is officially rated for operation at frequencies up to 300 Hz; however, we experimented beyond this rating to characterize its limits. Additionally, we set the amplitude from 0 to 500 mV. An input amplitude of ±3 V is converted into a rotation angle of ±30°. When vr is 30 km/h, the target moves forward by 4.2 cm within 5 ms (half of the period corresponding to a frequency of 100 Hz), and, therefore, we can derive the theoretical maximum input amplitude of ±1.39 mV from arctan(0.042/3)/30 × 3000. However, since a low input amplitude is easily corrupted by noise components, we checked the response up to 500 mV to determine the tendency of the response characteristics.
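The quoted ±1.39 mV follows directly from the stated expression; a quick check of the arithmetic, reproducing the formula as written:

```python
import math

# Target travel within half a period at f = 100 Hz: 4.2 cm at distance l = 3.0 m.
# An input of +/-3 V (3000 mV) corresponds to a rotation of +/-30 degrees.
angle = math.atan(0.042 / 3.0)        # angle subtended by the travel [rad]
amplitude_mv = angle / 30.0 * 3000.0  # the paper's expression, as written
print(round(amplitude_mv, 2))         # 1.4
```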
As a result, we obtained the characteristics shown in Figs. 7(a) and (b). In the figures, the
input voltage corresponds to A, and the input frequency corresponds to f . In Fig. 7(a), we found
that the plots were linear when f was 100 and 200 Hz, up to an input of 500 mV, and the plots
at 100 Hz corresponded to y = x. Moreover, Fig. 7(b) shows that the gain at 100 Hz was 0 dB,
whereas the others were below zero. Hence, we set f to 100 Hz in the main experiment.
Fig. 7. Response characteristics of the galvanometer mirror. (a) Input signal [mV] and out-
put signal [mV] (with noise removed to smooth the averaging). (b) Input signal [mV] and
gain [dB].
[Figure residue: (a) search window of our method; (b) search range of our method, with the search direction indicated.]
a high computational cost; the other processes are very simple and computationally light, so we can exclude them from consideration. Thus, we demonstrated that our method is appropriate for implementing a 100-Hz real-time system, with an algorithm almost 1000 times faster than the straightforward one. We also used the same red pattern in the main experiment.
(a) (b) (c)
Fig. 9. Fundamental result obtained with our system when vr was 30 km/h vertically, and
vertical profiles at the position of the blue lines (with images trimmed for aligned display).
(a) Still image. (b) Image during vr = 30 km/h with motion blur compensation off. (c) Image during vr = 30 km/h with motion blur compensation on.
Fig. 10. Peak-to-peak intensity of the initial vertical black-and-white pair at each vr .
captured from a helicopter. After image acquisition, the precision of image searching can be
improved because motion blur is compensated for. In each of these real-world situations, the
operating frequency of 100 Hz makes it possible to capture images without temporal gaps. Thus,
we demonstrated that our method is simple and can be performed in real time to compensate
for motion blur.
4. Discussion
4.1. Improved method of motion blur compensation
The sharpness in Fig. 9(c) is degraded compared with that in Fig. 9(a). Figure 10 also shows
that, when motion compensation was turned on, the peak-to-peak intensity at a speed of 30
km/h is around one-half that in the still condition. As possible reasons for this, first we consider
the imperfect synchronization between the camera exposure timing and θ of the galvanometer
mirror. Since we controlled the galvanometer mirror with open-loop control from the PC, con-
trol delay may have caused the imperfect synchronization. In Fig. 4, if the phase is delayed,
the effect of motion blur compensation will decrease. To avoid this, we must use closed-loop
control or a real-time operating system. Next, another waveform could be considered (e.g., a triangular wave, a sawtooth wave, etc.); however, as we explained in Sec. 2.4.2, sharp edges in the waveform require the galvanometer to have extremely high acceleration and, therefore,
(a) (b) (c)
Fig. 11. Applications of our system in practical situations (with images trimmed for aligned
display). The first row ((a), (b), and (c)) shows cracked roads, the second row ((d), (e),
and (f)) shows printed boards, and the third row ((g), (h), and (i)) shows helicopter shots.
The first column ((a), (d), and (g)) shows still images, the second column ((b), (e), and
(h)) shows images during vr = 30 km/h with motion blur compensation off, and the third
column ((c), (f), and (i)) shows images during vr = 30 km/h with motion blur compensation
on.
we also need to consider how to increase the acceleration. This is also discussed in Sec. 4.2.
Finally, we can consider the inconsistency between ωr and ωm . This can also be avoided by
using a closed loop to check the parameters and to modify the theoretical model to one that is
suitable for practical use.
current for achieving higher acceleration, by using a lighter mirror (with the same surface area,
only thinner), adopting other control methods, or using other types of galvanometer mirrors.
5. Conclusions
To compensate for motion blur in real time without additional sensors, we developed a system
that captures successive images with a high-speed color camera using motion blur compensa-
tion. Motion blur compensation was achieved by back-and-forth motion of a galvanometer mir-
ror. To achieve real-time performance, we proposed the concept of background tracking. With
this method, we demonstrated that our rapid block matching takes 4.3 ms. We also demon-
strated that a frequency of 100 Hz is suitable for controlling the galvanometer mirror, and we
demonstrated that our system reduced motion blur at this frequency compared with the conven-
tional approach. We envisage that our system can be applied to various fields (e.g., searching
for defective parts on conveyor lines, inspection of road conditions, precise image searching,
and so on). We will continue to investigate higher performance systems and methods that can
compensate for motion blur more effectively than our current system.