
Real-time high-speed motion blur compensation system based on back-and-forth motion control of galvanometer mirror

Tomohiko Hayakawa,1,∗ Takanoshin Watanabe,1,2 and Masatoshi Ishikawa1
1 Department of Creative Informatics, University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo
113-8656, Japan
2 Hitachi Industry & Control Solutions, Ltd., 6-1, Akihabara, Taito-ku, Tokyo 110-0006, Japan

∗ Tomohiko [email protected]

Abstract: We developed a novel real-time motion blur compensation system for the blur caused by high-speed one-dimensional motion between a camera and a target. The system consists of a galvanometer mirror and a high-speed color camera, without the need for any additional sensors. We control the galvanometer mirror with a continuous back-and-forth oscillating motion synchronized to the high-speed camera. The angular speed of the mirror is updated in real time within 10 ms based on the concepts of background tracking and rapid raw Bayer block matching. Experiments demonstrated that our system captures motion-invariant images of objects moving at speeds up to 30 km/h.

© 2015 Optical Society of America


OCIS codes: (170.0110) Imaging systems; (110.1080) Active or adaptive optics; (110.4153)
Motion estimation and optical flow; (120.4630) Optical inspection.

References and links


1. E. Bodenstorfer, J. Furtler, J. Brodersen, K. J. Mayer, C. Eckel, K. Gravogl, and H. Nachtnebel, “High-speed
line-scan camera with digital time delay integration,” Proc. SPIE 6496, 64960I (2007).
2. B. Golik and D. Wueller, “Measurement method for image stabilizing systems,” Proc. SPIE 6502, 65020O
(2007).
3. C. W. Chiu, P. C. P. Chao, and D. Y. Wu, “Optimal design of magnetically actuated optical image stabilizer
mechanism for cameras in mobile phones via genetic algorithm,” IEEE Trans. Magn. 43(6), 2582–2584 (2007).
4. Y. Yitzhaky, R. Milberg, S. Yohaev, and N. S. Kopeika, “Comparison of direct blind deconvolution methods for
motion-blurred images,” Appl. Opt. 38(20), 4325–4332 (1999).
5. J. Zhang, Q. Zhang, and G. He, “Blind deconvolution of a noisy degraded image,” Appl. Opt. 48(12), 2350–2355
(2009).
6. A. Levin, P. Sand, T. S. Cho, F. Durand, and W. T. Freeman, “Motion-invariant photography,” ACM Trans. Graph.
27(3), 71 (2008).
7. R. Raskar, A. Agrawal, and J. Tumblin, “Coded exposure photography: motion deblurring using fluttered shutter,”
ACM Trans. Graph. 25(3), 795–804 (2006).
8. Y. Qian, Y. Li, J. Shao, and H. Miao, “Real-time image stabilization for arbitrary motion blurred image based on
opto-electronic hybrid joint transform correlator,” Opt. Express 19(11), 10762–10768 (2011).
9. K. Daniilidis, C. Krauss, M. Hansen, and G. Sommer, “Real-time tracking of moving objects with an active
camera,” Real-Time Imag. 4(1), 3–20 (1998).
10. H. Oike, H. Wu, C. Hua, and T. Wada, “Clear image capture-active cameras system for tracking a high-speed
moving object,” in Proceedings of the Fourth International Conference on Informatics in Control (2007), pp.
94–102.

#247651 Received 11 Aug 2015; revised 10 Oct 2015; accepted 16 Nov 2015; published 30 Nov 2015
© 2015 OSA 14 Dec 2015 | Vol. 23, No. 25 | DOI:10.1364/OE.23.031648 | OPTICS EXPRESS 31648
11. K. Okumura, H. Oku, and M. Ishikawa, “High-speed gaze controller for millisecond-order pan/tilt camera,” in
Proceedings of IEEE International Conference on Robotics and Automation (IEEE, 2011), pp. 6186–6191.
12. M. Ito, “Cerebellar control of the vestibulo-ocular reflex - around the flocculus hypothesis,” Annu. Rev. Neurosci.
1(5), 275–296 (1982).
13. M. Davis and P. Green, “Head-bobbing during walking, running and flying: relative motion perception in the
pigeon,” J. Exp. Biol. 138(1), 71–91 (1988).
14. J. Heo, J. Kim, and D. Lee, “Real-time digital image stabilization using motion sensors for search range reduc-
tion,” in SoC Design Conference (ISOCC, 2012), pp. 363–366.
15. I. V. Romanenko, E. A. Edirisinghe, and D. Larkin, “Block matching noise reduction method for photographic
images applied in Bayer RAW domain and optimized for real-time implementation,” Proc. SPIE 8437, 84370F
(2012).
16. O. Yang and B. Choi, “Laser speckle imaging using a consumer-grade color camera,” Opt. Lett. 37(19), 3957–
3959 (2012).
17. V. Duma, J. P. Rolland, O. Group, A. Vlaicu, and R. Ave, “Advancements on galvanometer scanners for high-end
applications,” Proc. SPIE 8936, 893612 (2014).

1. Introduction
To perform visual inspection of extremely large targets, such as walls, surfaces of structures, roads, and assembly lines, efficiently in terms of both time and cost, real-time inspection systems must have a simple construction and be capable of operating at high speed. However, high-speed motion degrades image quality owing to motion blur and sometimes results in lost frames. For example, tunnels on highways have a comparatively high risk of deterioration owing to their structure, and it is difficult to enforce the frequent traffic restrictions needed for their inspection. Therefore, there is an increasing demand for systems that can monitor tunnel surfaces from a moving vehicle. In particular, as a substitute for human visual inspection, high-quality images of tunnel surfaces are necessary for accurately judging faults such as cracks and stains in the structures. However, there is a trade-off between efficiency and precision: high-resolution pictures suffer easily from motion blur, because high-speed motion deteriorates image quality. In vehicle inspection systems for infrastructure, intense illumination is used to permit short exposures that suppress motion blur and achieve fine spatial resolution; however, such illumination might cause accidents by dazzling other drivers. Additionally, intense light may in general damage the surface of targets, and hence lower illumination is required. For example, inspection of products on a conveyor belt needs to be efficient, yet some products might be damaged by intense illumination. Hence, a method that compensates for motion blur without intense illumination is required.
Many methods have been proposed to compensate for such motion blur. They can be categorized into those that avoid motion blur at capture time [1–3, 9–11], in which the sensor and system are made to follow the moving object, and those that restore the captured image in post-processing [4–8]. Although considerable research effort in the computational imaging community has focused on the latter category, the method proposed in this paper belongs to the former. There are numerous ways to compare these categories; in general, however, the former is more powerful, because it is always better to avoid blur in the first place than to remove it after capture.
In the former category, time delayed integration (TDI) virtually extends the exposure time [1]; however, the extension is limited because, as the relative speed between the camera and the target increases, the exposure time at each TDI stage drops and more stages are required. TDI sensor costs become very high as the number of stages increases, so this approach has limitations in efficient practical systems. In addition, the TDI method requires precise encoder information. Another method in the former category is optical image stabilization (OIS). This method is also effective

for compensating for motion blur caused by hand shake [2, 3]; however, OIS has low accuracy,
and a built-in gyro sensor or acceleration sensor is needed to control the actuator.
Although additional sensors can help to reduce degradation, the latter category includes motion blur rectification methods that require no additional sensors, for example, blind deconvolution [4, 5]. However, the usual blind deconvolution methods operate off-line to estimate the point spread function and are therefore not suitable for real-time applications. Additionally, blind deconvolution is known to be an NP-hard problem, so the accuracy and speed are poor without additional information. Unlike the usual blind deconvolution, deconvolution with known motion vectors simplifies the software processing [6–8]. Levin et al.'s method copes with a motion vector that varies within one exposure by moving the camera itself [6]. Their method comprehensively covers arbitrary one-dimensional motion; however, their hardware is not designed for high-speed motion, and the deconvolution is performed off-line. Raskar et al.'s method makes it easy to obtain the motion vector by using a flutter shutter [7]. However, in their method, the exposure time is limited by the coded exposure, and, because the processing is off-line, real-time application is not supported. In contrast, Qian et al. developed a real-time deconvolution method [8]; however, its operating speed of 1 Hz is too slow to capture all the views necessary for continuous capture of high-speed motion. Moreover, since deconvolution is a rectification performed after motion blur occurs, high-spatial-frequency information is lost. Finally, all of these deconvolution methods need additional hardware: the software becomes simpler than blind deconvolution, but simplicity is lost in the hardware setup.
To satisfy the requirements for speed and simplicity, we considered adopting the concept of active vision [9]. This concept roughly belongs to the former category, although its purpose is not to compensate for motion blur. With this concept, dynamic image acquisition becomes possible if gaze control can be performed so that a subject is always captured at the center of the acquired image. However, conventional active vision systems are limited in the speed at which the optical gaze direction can be moved, since the weight of the camera prevents rapid motion when the camera itself is moved by an actuator [10]. To solve this problem, Okumura et al. proposed using a two-axis galvanometer mirror to control the optical gaze of a camera at high speed [11] and achieved high-speed gaze control for general target tracking. In their system, however, the optical gaze of the camera follows the center point of the target, and hence the response time causes motion blur when the target is moving at high speed. The active vision concept is suitable for tracking a single target continuously; in one-dimensional motion (e.g., on roads, rails, and conveyors), however, the target is updated as the relative position between the camera and the target changes. The camera must capture these images one after another so as not to miss frames, and we therefore apply the active vision concept to successively updated local targets within a large target. Active vision systems move in a manner similar to the active motion of the human eye when tracking moving objects, whereas our system is based on a model of the human eye's vestibulo-ocular reflex [12] and pigeon head-bobbing during walking [13]. These biological mechanisms compensate for motion blur, and since we found that they can be effectively adopted in a vision system, we developed a motion blur compensation system that prolongs the exposure time. Additionally, unlike active vision, in which the motion vector varies within one image, here we can assume that the motion blur is invariant, since the target is large, and thus the real-time capability is high enough to sustain the relative speed between the camera and the target. We call this novel concept background tracking.
In this paper, we propose a real-time motion blur compensation system with optical gaze
control using a galvanometer mirror. In our system, we use a lightweight galvanometer mirror
for gaze control [11], allowing fast mirror rotation for capturing the next target. Additionally, we

employ a back-and-forth oscillating motion of the galvanometer mirror to achieve quick motion
that can rapidly respond to changes in the speed of the target. This back-and-forth motion is
realized by applying a sinusoidal driving pattern, and the exposure timing is synchronized with
a particular angle so that the rotation is considered to be linear.
To compensate for motion blur, we estimate a one-dimensional motion vector that is used to set the angular speed of the galvanometer mirror. The relative angular speed between the camera and the target is determined by a raw Bayer block matching method, which reduces computational costs and makes the system suitable for real-time applications.

2. Principle of the real-time motion blur compensation system


2.1. Simplifying the problem
To realize a real-time high-speed motion blur compensation system without additional sensors, we need to simplify the problem. First, since we use a high-speed camera as the imaging device, the motion vector can be assumed to be invariant within one exposure. The speed between the target and the camera may still change; in that case, the short update cycle of the system updates the angular speed of the galvanometer mirror. A high-speed camera has the advantage that its high frame rate allows us to update the angular speed of the galvanometer mirror quickly after capturing an image. In high-speed motion especially, the relative speed vr between the camera and the target does not change greatly within a short time because of inertia; industrial inspection systems in particular are heavy and move fast for efficiency, so their inertia is high. For inspection applications, we assume motion along a one-dimensional path (e.g., on roads, rails, or conveyors). In addition, if the surface of the target in practical high-precision inspection is considered to be planar (e.g., walls in construction, long vehicles, etc.), then we can assume that the distance l between the target and the camera is also invariant. The surface can even have three-dimensional texture, as long as the depth differences do not cause motion blur within one exposure; in particular, when l is long compared with the depth variations on the surface, the surface can be regarded as planar.

2.2. Back-and-forth motion control of the galvanometer mirror


2.2.1. Concept
Figure 1 illustrates the concept of back-and-forth control of the galvanometer mirror. To compensate for motion blur, we control the galvanometer mirror in front of the camera so that it rotates in the direction of vr with angular speed ωm. From the viewpoint of the camera, vr is expressed as a relative angular speed ωr between the camera and the target during an extremely short time, since we assume that l is long compared with the width of the camera's viewing field, sw, which is determined by the camera's angle of view α. In Fig. 1, l is shown shorter than the actual distance to save space; in practice, l is long in actual visual inspection situations, especially in remote sensing applications. Hence, we can substitute ωr for ωm. The mirror follows a moving target with constant ωm from t1 to t3. When ωr and ωm are equal during the exposure time tex, the optical gaze stays at the same position, and hence the acquired image will not include motion blur. If the galvanometer mirror is comparatively light, this system allows a high degree of controllability of the gaze direction, which makes it simple to compensate for motion blur; namely, we can extend the exposure time by using this method. After image acquisition, the mirror returns to its original angle for the next shot (e.g., at t4). This back-and-forth motion is performed repeatedly as ωm is updated. ωr is calculated by a block matching method from the displacement xd between the previously acquired image and the current one, which have an overlapping area. We will explain the block matching method in more detail in Sec. 2.3.

Fig. 1. Concept of back-and-forth motion control of the galvanometer mirror.

2.2.2. Method of acquiring the relative angular speed ωr


To control the galvanometer mirror, we need to compute ωr from xd. From Fig. 1, we can write
\[ \frac{s_w}{2l} = \tan\frac{\alpha}{2}, \qquad (1) \]
\[ \frac{x_d}{2l} = \tan\frac{\omega_r}{2}. \qquad (2) \]
Without any additional sensors, l is an unknown parameter; however, by rearranging Eqs. (1) and (2) and solving for ωr, we obtain
\[ \omega_r = 2\tan^{-1}\!\left(\frac{x_d}{s_w}\tan\frac{\alpha}{2}\right). \qquad (3) \]
Thus, if the target is planar, ωr can be computed from two successive images, without using l or
any additional sensors (e.g., distance sensors). This contributes to the simplicity of the system.
Finally, ωr is substituted for ωm up to the current time t, yielding
\[ \omega_m = \begin{cases} \omega_r & (t_1 \le t \le t_3), \\ -\omega_r & \text{(e.g., } t = t_4\text{)}. \end{cases} \qquad (4) \]
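As a minimal sketch in C++ (the language of the authors' implementation, although this listing is our own illustration and not the authors' code), Eqs. (3) and (4) reduce to a few lines; here x_d and s_w are in pixels, alpha is in radians, and the returned value is the gaze rotation over one frame interval:

```cpp
#include <cassert>
#include <cmath>

// Eq. (3): relative angular displacement per frame from the pixel shift
// x_d, the image width s_w [pixels], and the angle of view alpha [rad].
double relative_angle(double x_d, double s_w, double alpha) {
    return 2.0 * std::atan((x_d / s_w) * std::tan(alpha / 2.0));
}

// Eq. (4): the mirror tracks with +omega_r between t1 and t3 and
// returns with -omega_r on the back stroke (e.g., at t4).
double mirror_speed(double omega_r, bool tracking) {
    return tracking ? omega_r : -omega_r;
}
```

Dividing the returned angle by the frame interval yields the angular speed ωr; note that when xd equals sw the angle equals α, as Eq. (3) requires.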

2.3. Background tracking using rapid block matching method in the Bayer raw domain
2.3.1. Background tracking for the rapid block matching method
To calculate xd, we adopt the concept of background tracking. In a conventional active vision system, particular feature information of the target (e.g., color or shape) is used to calculate the target position. Since the part of the target at which the optical gaze is directed is updated at each successive image acquisition in high-speed motion, we use a block matching method that detects an arbitrary part of the target as a search window. In fact, we do not need the target position, only its speed; therefore, we can perform block matching at any position. In addition, since we assume vr is one-dimensional, we only need to assign at least one row or one column at any part of the target as the search window (depending on the direction of motion). Heo et al. demonstrated that modeling of the search range is valid for reducing the computational cost [14]. If the original height is 100 pixels and the direction of motion is horizontal, then the computational cost can theoretically be reduced by a factor of 100. This concept is illustrated in Fig. 2(a).

2.3.2. Rapid block matching method in the Bayer raw domain


Here we introduce the rapid block matching method used to acquire xd for color cameras. Although not many inspection systems adopt color sensors, color sensors provide more information than monochrome sensors, and more information raises the possibility of detecting abnormal points; hence color sensors are necessary for more comprehensive inspection systems. In general, however, the amount of information and the processing speed have a trade-off relationship, and so conventional systems mainly use monochrome sensors.
We therefore propose a method that is compatible with both high-speed image processing and color sensors. Software conversion from Bayer raw images into RGB images takes additional time, complicating the implementation of real-time systems. Hence, to realize a real-time system, we propose a block matching method that does not require the conversion. There has been some research on processing Bayer raw images [15, 16]. Romanenko et al. implemented block matching between a Bayer raw image and a noise model for de-noising [15]. Yang et al. used only the red pixels of a Bayer raw image for filtering [16]. In contrast, we implement block matching between two Bayer raw images. To achieve higher computational speed, we evaluate the block matching every two pixels, since even before conversion the array of pixels repeats RGrRGrRGr. . . or GbBGbBGbB. . . horizontally (Fig. 2(b)).
The following equation illustrates this calculation:

Fig. 2. Rapid block matching method in the Bayer raw domain (BGR). (a) Background tracking for the rapid block matching method. (b) Block matching between two Bayer raw images.

\[ R_{SSD} = \sum_{j=0}^{1} \sum_{i=0}^{W_w - 1} \left( \mathrm{Img}_p(i, j) - \mathrm{Img}_c(i+2, j) \right)^2. \qquad (5) \]

Here Ww represents the width of the block matching window. This calculation is iterated from one end of the search range to the other horizontally. When RSSD between the previous image Imgp and the current image Imgc is smallest, we set xd to the x position of the window.
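The computation above can be sketched as follows; this is a self-contained C++ illustration under our own simplifying assumptions (8-bit raw data, a two-row window starting at x = 0 in the previous image, and a shift stepped by two pixels so that the Bayer phase is preserved), not the authors' code:

```cpp
#include <cassert>
#include <cstdint>
#include <limits>
#include <vector>

// Slide a Ww-wide, two-row window across the current Bayer raw frame in
// steps of 2 pixels (the horizontal period of the RGrRGr.../GbBGbB...
// pattern) and return the shift x_d minimizing the sum of squared
// differences R_SSD of Eq. (5).  `stride` is the row length in pixels.
int bayer_block_match(const std::vector<std::uint8_t>& img_p,
                      const std::vector<std::uint8_t>& img_c,
                      int stride, int Ww) {
    int best_x = 0;
    long long best = std::numeric_limits<long long>::max();
    for (int x = 0; x + Ww <= stride; x += 2) {     // even shifts only
        long long rssd = 0;
        for (int j = 0; j <= 1; ++j)                // two rows: RGr and GbB
            for (int i = 0; i < Ww; ++i) {
                long long d =
                    static_cast<long long>(img_p[j * stride + i]) -
                    static_cast<long long>(img_c[j * stride + x + i]);
                rssd += d * d;
            }
        if (rssd < best) { best = rssd; best_x = x; }
    }
    return best_x;
}
```

Because only one two-row strip is matched and only even shifts are tested, the cost drops by roughly the image height, and by a further factor of two, relative to exhaustive matching, in line with the speed-up reported in Sec. 3.2.2.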

2.4. Temporal control of the real-time high-speed motion blur compensation system
2.4.1. Control flow
Figure 3 illustrates the control flow. In the initial state P1, we can set an initial value of ωm; ωm is then set automatically in successive iterations. After setting ωm, at P2 the system checks whether the current angle of the galvanometer mirror is appropriate for exposure. Then, at P3, the camera exposes an image until a fixed exposure time has elapsed. After the exposure, the mirror rotates in the opposite direction until it reaches its original angle. At the same time, the latest ωm is calculated from the acquired images at P4 and P5, and the value is set at P1 again. The processes from P1 to P5 are repeated. The frequency of this flow, f, is set before P1 and is governed by the acceleration of the galvanometer mirror and the computational speed. We discuss f in more detail in Sec. 3.2.1.
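A hypothetical C++ skeleton of the P1–P5 loop follows; this is our own illustration, with the hardware interface replaced by stubs (the constants mirror the experimental values quoted later, in Sec. 3.1):

```cpp
#include <cassert>
#include <cmath>

// One iteration corresponds to one 1/f cycle of the loop in Fig. 3.
// The stubs stand in for the galvanometer/camera interface.
struct BlurCompensator {
    double omega_m = 0.0;   // P1: current mirror angular speed setting
    int frames = 0;

    bool angle_ok() const { return true; }        // P2: exposure angle?
    void expose() { ++frames; }                   // P3: camera exposure
    double block_match() const { return 100.0; }  // P4: stub x_d [pixels]

    // P5: Eq. (3) with the paper's s_w = 2336 px and alpha = 4.5 deg.
    double relative_angle(double x_d) const {
        const double s_w = 2336.0;
        const double alpha = 4.5 * std::acos(-1.0) / 180.0;
        return 2.0 * std::atan(x_d / s_w * std::tan(alpha / 2.0));
    }

    void run(int cycles) {
        for (int c = 0; c < cycles; ++c) {
            while (!angle_ok()) { /* P2: wait for the exposure angle */ }
            expose();                                 // P3
            omega_m = relative_angle(block_match());  // P4, P5 -> back to P1
        }
    }
};
```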

Fig. 3. Control flow of the real-time motion blur compensation system.

2.4.2. Method of synchronization between the camera exposure timing and the galvanometer mirror
angle
The frequency f and the amplitude of the oscillating galvanometer mirror are limited by its weight; thus, the mirror size and acceleration have a trade-off relationship. A constant angular speed is the most appropriate condition for making ωr and ωm agree with each other, namely, for compensation of motion blur, and we could generate triangular waves with positive and negative constant angular speeds for the back-and-forth motion:
\[ \theta = \omega_m t \quad (t_1 \le t \le t_3). \qquad (6) \]


However, since triangular waves have sharp instantaneous turns, the galvanometer mirror would need an extremely high acceleration, and the amplitude would also be small because of the control delay. To avoid this problem, we adopt sine waves that approximate triangular waves of the same amplitude A:
\[ \theta = A \sin(2\pi f t). \qquad (7) \]
Here, the parameter A is given by
\[ A = \frac{\omega_m}{4f}. \qquad (8) \]
In Eq. (8), 1/f is the period of the sine wave, and the mirror angle reaches A at a quarter of the period when starting from 0.
Sine waves are smooth at all points, and hence the required acceleration is lower than that of triangular waves [17]. This helps to prevent saturation of the control and performance degradation of the galvanometer mirror. At the same time, sine waves are approximately linear away from the turning points. Thus, we use sine waves to achieve high-speed control of the galvanometer mirror motion. Figure 4 illustrates the difference between a triangular wave and a sine wave; t1 and t3 represent the turning points of the rotational direction (see also Fig. 1).
Although this sine wave seems to deviate from the triangular wave, in high-speed applications A becomes quite small in order to realize a high frame rate, and hence the difference becomes small. The slope of this sine curve around 0° approximates a straight line of gradient ωm given by Eq. (4), and this sine curve allows us to compensate for motion blur.
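To make the synchronization concrete, here is a short C++ sketch of the drive waveform of Eqs. (7) and (8) together with an exposure gate centered on the zero crossing, where the sine is most nearly linear (this is our own illustration, not the authors' control code):

```cpp
#include <cassert>
#include <cmath>

// Eq. (8): amplitude chosen so the sine reaches the triangular-wave
// amplitude omega_m / (4 f) at a quarter period.
double amplitude(double omega_m, double f) { return omega_m / (4.0 * f); }

// Eq. (7): mirror angle at time t for drive frequency f [Hz].
double mirror_angle(double A, double f, double t) {
    const double pi = std::acos(-1.0);
    return A * std::sin(2.0 * pi * f * t);
}

// Exposure gate: true while t (taken modulo the period 1/f) lies within
// the window [-tex/2, +tex/2] around the zero crossing of the sine.
bool exposing(double t, double f, double tex) {
    const double T = 1.0 / f;
    double phase = std::fmod(std::fmod(t, T) + T, T);  // fold into [0, T)
    return phase <= tex / 2.0 || phase >= T - tex / 2.0;
}
```

With f = 100 Hz and tex = 1 ms, as in the experiments, the gate is open for 10% of each cycle.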
Fig. 4. Mirror angle waveform and exposure timing.

After configuration, the camera exposes an image from −tex /2 to +tex /2 to synchronize with
the mirror rotation. These processes are repeated every 1/ f .

3. Experimental evaluation
3.1. Experimental setup
To demonstrate our proposed method, we compensated for the motion of a rapidly moving
conveyor belt. Figure 5 illustrates the experimental system. To evaluate the performance of our
system, we prepared a resolution chart and detailed images to paste onto the surface of the
conveyor belt. The still image of the resolution chart has steep transitions in its horizontal intensity profile; therefore, we checked the peak-to-peak values of black-and-white pairs at each vr.
Fig. 5. Schematic diagram of the experimental system.

We used a CMOS high-speed color camera (Mikrotron EoSens MC4083), which can acquire full HD images at almost 900 Hz. The galvanometer mirror was an M3 series device manufactured by Cambridge Technology; driven by a Mini Sax II analogue servo driver, it is capable of oscillating at a few hundred hertz and has an effective diameter of 3 cm, making it suitable for laser projection and camera sensing. We also prepared an AD/DA interface board (LPC-361216) having 16-bit resolution. The PC had an Intel Xeon E5-1620 CPU and ran Windows 7 Professional. Software was written in C/C++ with OpenCV 2.4.6. The system also included a lamp (Mintage M Power Light PMX-120) and a lens (Nikon AF-S NIKKOR 200mm f/2G ED VR II). A photograph of the prototype motion blur compensation system is shown in Fig. 6.

Fig. 6. Optical components of the prototype real-time high-speed motion blur compensation system.

At the beginning of the experiments, we set the parameters as follows: tex = 1 ms; vr = 0 to 30 km/h; α = 4.5◦; sw = 2336 pixels; and l = 3.0 m.

3.2. Preliminary experiment


3.2.1. Response characteristics of the galvanometer mirror
In a first preliminary experiment, we tested the response characteristics of the galvanometer mirror with respect to f to analyze the relationship between the input and output amplitudes. As Duma et al. have shown, the response characteristics help in the design of appropriate optical applications [17], and these data define a software requirement. We used a function generator to generate sine waves with frequencies from 100 to 500 Hz. The M3 is officially rated for operation at frequencies up to 300 Hz; however, we experimented beyond this rating to find its limiting characteristics. Additionally, we set the input amplitude from 0 to 500 mV; an input amplitude of ±3 V is converted into a rotation angle of ±30◦. When vr is 30 km/h, the target moves forward by 4.2 cm within 5 ms (half of the period corresponding to a frequency of 100 Hz), and, therefore, we can derive the theoretical maximum input amplitude to be ±1.39 mV from arctan(0.042/3)/30 × 3000. However, since low input amplitudes are affected by noise components, we checked the response up to 500 mV to determine the tendency of the response characteristics.
As a result, we obtained the characteristics shown in Figs. 7(a) and 7(b). In the figures, the input voltage corresponds to A, and the input frequency corresponds to f. In Fig. 7(a), we found that the plots were linear for f = 100 and 200 Hz up to an input of 500 mV, and that the plots at 100 Hz followed y = x. Moreover, Fig. 7(b) shows that the gain at 100 Hz was 0 dB, whereas the others were below zero. Hence, we set f to 100 Hz in the main experiment.
Fig. 7. Response characteristics of the galvanometer mirror. (a) Input signal [mV] and output signal [mV] (with noise removed to smooth the averaging). (b) Input signal [mV] and gain [dB].

3.2.2. Performance of the rapid block matching method


In a second preliminary experiment, we evaluated the performance of our rapid block matching method. From the result in Sec. 3.2.1, we set f to 100 Hz, and hence the duration of each cycle is 10 ms. If the total time for block matching and calculation of the angular speed is less than 10 ms, then the latest ωr is set to ωm. Equations (3) and (4) are computationally light, whereas Eq. (5) is heavy. We therefore checked whether our proposed block matching method takes less than 10 ms.
To do so, we prepared two horizontally separated Bayer-array still images (see Fig. 8) showing part of the conveyor belt, onto which we pasted a noticeable red seal to check the results in a simple manner. The width of the images was 1500 pixels, and the height was 848 pixels. The displacement between the images in Figs. 8(a) and 8(b) was 346 pixels.
Table 1 shows that it took 8.9 ms for Bayer conversion, 4351 ms for the straightforward block matching method with a full-size search range, and 4.3 ms for our proposed method with a reduced-size search range on a raw Bayer image (see Fig. 8). The two block matching methods had the same precision, as shown by their identical result of 346 pixels. Figure 3 includes some other processes; however, only P4 entails two-dimensional image processing, which requires a high computational cost. The other processes are very simple and computationally light, so we can exclude them from consideration. Thus, we demonstrated that our method is appropriate for implementing a 100-Hz real-time system, and the algorithm was almost 1000 times faster than the straightforward one. We used the same red pattern in the main experiment as well.

Fig. 8. Horizontally separated still Bayer-array images to be processed by the straightforward block matching method (green) and the rapid block matching method (red). (a) Previous image. (b) Current image.

Table 1. Performance Comparison of Block Matching Methods for Color Images

                      Straightforward method [ms]    Our method [ms]
    Bayer conversion               8.9               Not necessary
    Block matching              4351                 4.3
    Total                       4359.9               4.3

3.3. Experimental results


Figures 9(a)–(c) show the fundamental results of our system. Although the image in
Fig. 9(c) has degraded sharpness compared with that in Fig. 9(a), it is
significantly sharper than the image in Fig. 9(b). Figure 9 also shows the profiles used to
analyze the performance of our motion blur compensation system quantitatively. The profile
in Fig. 9(b) is almost entirely flat, whereas that in Fig. 9(c) is bumpy because the contrast of the
black-and-white stripes is recovered.
To discuss the results more quantitatively, Fig. 10 shows the peak-to-peak intensity of the initial
black-and-white pair at each vr . When motion compensation was turned off, it was
difficult to distinguish between black and white, whereas when motion compensation was
turned on, the peak-to-peak value was maintained even at vr = 30 km/h. In all trials, we
acquired images at 100 Hz with motion blur compensation, achieved by synchronizing
with the galvanometer mirror.
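The peak-to-peak metric itself is straightforward to compute from a vertical intensity profile. A small sketch follows; the column and row bounds are placeholders, since the paper does not specify the exact sampling positions:

```python
import numpy as np

def stripe_peak_to_peak(image, col, row_start, row_end):
    """Peak-to-peak intensity along a vertical line segment covering one
    black-and-white stripe pair. `col`, `row_start`, and `row_end` are
    hypothetical sampling bounds chosen for illustration."""
    profile = np.asarray(image, dtype=float)[row_start:row_end, col]
    return float(profile.max() - profile.min())
```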
Finally, we show example applications of our system in Fig. 11. Figures 11(a)–(c) show im-
ages of cracks in asphalt; the cracks are markedly clearer in (c) than in (b), demonstrat-
ing that this system is effective for inspecting the condition of roads, especially under limited
illumination and during high-speed motion. This real-time compensation system will help to
warn of dangerous road damage that must be urgently repaired. Figures 11(d)–(f) show that
this system is also effective for detecting defective parts when inspecting
objects on a conveyor line. Figures 11(g)–(i) show that it is likewise effective for images

Fig. 9. Fundamental results obtained with our system when vr was 30 km/h vertically, and
vertical profiles at the position of the blue lines (with images trimmed for aligned display).
(a) Still image. (b) Image during vr = 30 km/h with motion blur compensation off. (c)
Image during vr = 30 km/h with motion blur compensation on.

[Figure 10 plot: vertical axis, peak-to-peak intensity of black and white (0–90); horizontal axis, speed of belt conveyor vr (0–30 km/h); series: still condition, compensation ON, compensation OFF.]

Fig. 10. Peak-to-peak intensity of the initial vertical black-and-white pair at each vr .

captured from a helicopter. After image acquisition, the precision of image searching improves
because motion blur has been compensated for. In each of these real-world situations, the
operating frequency of 100 Hz makes it possible to capture images without temporal gaps. Thus,
we demonstrated that our method is simple and compensates for motion blur in real time.

4. Discussion
4.1. Improved method of motion blur compensation
The sharpness in Fig. 9(c) is degraded compared with that in Fig. 9(a). Figure 10 also shows
that, when motion compensation was turned on, the peak-to-peak intensity at a speed of 30
km/h was around one-half that in the still condition. As a first possible reason, we consider
imperfect synchronization between the camera exposure timing and θ of the galvanometer
mirror. Since we controlled the galvanometer mirror with open-loop control from the PC, con-
trol delay may have caused this imperfect synchronization. In Fig. 4, if the phase is delayed,
the effect of motion blur compensation decreases. To avoid this, we must use closed-loop
control or a real-time operating system. Second, we can consider using another waveform (e.g.,
a triangular wave or a sawtooth wave); however, as explained in Sec. 2.4.2, sharp edges
in the waveform require the galvanometer to have extremely high acceleration and, therefore,


Fig. 11. Applications of our system in practical situations (with images trimmed for aligned
display). The first row ((a), (b), and (c)) shows cracked roads, the second row ((d), (e),
and (f)) shows printed boards, and the third row ((g), (h), and (i)) shows helicopter shots.
The first column ((a), (d), and (g)) shows still images, the second column ((b), (e), and
(h)) shows images during vr = 30 km/h with motion blur compensation off, and the third
column ((c), (f), and (i)) shows images during vr = 30 km/h with motion blur compensation
on.

we also need to consider how to increase the acceleration; this is discussed further in Sec. 4.2.
Finally, we can consider the inconsistency between ωr and ωm . This too can be avoided by
using a closed loop to check the parameters and by modifying the theoretical model into one
suitable for practical use.
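The waveform trade-off discussed above can be made concrete with standard kinematics (here Θ denotes the oscillation amplitude, a notational assumption): a sinusoidal drive has a finite peak angular acceleration that grows as f², whereas an ideal triangular wave has discontinuous velocity at its vertices and hence formally unbounded acceleration.

```latex
\theta(t) = \Theta \sin(2\pi f t), \qquad
\ddot{\theta}(t) = -\Theta (2\pi f)^{2} \sin(2\pi f t), \qquad
\lvert \ddot{\theta} \rvert_{\max} = \Theta (2\pi f)^{2}
```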

4.2. Gain compensation for applications requiring faster performance


As mentioned in Sec. 3.2.1, our system was not compatible with real-time applications requir-
ing f > 100 Hz because of the limited responsiveness of the galvanometer mirror. When f
increases, the output gain of the galvanometer mirror decreases. We attribute this to the
performance of PID control being limited by the input frequency; if the input frequency
is high, PID control does not work well unless we retune the parameters, especially the propor-
tional coefficient. The galvanometer mirror M3 uses a servo driver based on PID
control. In general, retuning the PID parameters can solve the problem of
gain decrease; however, the performance of the system will then degrade when f is decreased. If the
mirror is made smaller, the responsiveness will improve, allowing f to be set higher; however,
the amount of illumination received by the camera will decrease because of the reduced mirror
area. For higher-speed applications with the same amount of illumination, we will need to im-
prove the galvanometer mirror or the control method. This could be achieved by using a higher

current for achieving higher acceleration, by using a lighter mirror (with the same surface area,
only thinner), by adopting other control methods, or by using other types of galvanometer mirrors.
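One simple open-loop workaround, offered only as a sketch and not as the method used in this work, is to pre-scale the commanded amplitude by the inverse of the measured output gain at the drive frequency; the gain table values below are hypothetical measurements:

```python
def compensated_amplitude(target_amp, f, gain_table):
    """Pre-scale the commanded mirror amplitude by the inverse of the
    measured output gain at drive frequency f. `gain_table` maps a
    measured drive frequency [Hz] to the observed output/input gain;
    a nearest-frequency lookup keeps this sketch minimal."""
    f_nearest = min(gain_table, key=lambda fk: abs(fk - f))
    return target_amp / gain_table[f_nearest]  # larger command offsets gain roll-off
```

This compensates amplitude only; it cannot fix phase delay, which is why the closed-loop approaches mentioned above remain preferable.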

5. Conclusions
To compensate for motion blur in real time without additional sensors, we developed a system
that captures successive images with a high-speed color camera using motion blur compensa-
tion. Motion blur compensation was achieved by back-and-forth motion of a galvanometer mir-
ror. To achieve real-time performance, we proposed the concept of background tracking. With
this method, we demonstrated that our rapid block matching takes 4.3 ms. We also demon-
strated that a frequency of 100 Hz is suitable for controlling the galvanometer mirror, and we
demonstrated that our system reduced motion blur at this frequency compared with the conven-
tional approach. We envisage that our system can be applied to various fields (e.g., searching
for defective parts on conveyor lines, inspection of road conditions, precise image searching,
and so on). We will continue to investigate higher performance systems and methods that can
compensate for motion blur more effectively than our current system.
