
Research Article Vol. 28, No. 4 / 17 February 2020 / Optics Express 4475
https://doi.org/10.1364/OE.383442
Received 18 Nov 2019; revised 17 Jan 2020; accepted 21 Jan 2020; published 3 Feb 2020

Real-time audio detection and regeneration of moving sound source based on optical flow algorithm of laser speckle images

Nan Wu* and S. Haruyama

Graduate School of System Design and Management, Keio University, 4-1-1 Hiyoshi, Kohoku-ku, Yokohama, Kanagawa 223-8521, Japan
* [email protected]

Abstract: Sound detection by optical means is an appealing research topic. In this manuscript, we propose a laser microphone system that allows simultaneous detection and regeneration of the audio signal by observing the movement of secondary speckle patterns. In the proposed system, an optical flow method, along with several denoising algorithms, is employed to obtain the motion information of the speckle sequence at high speed. Owing to this, the audio signal can be regenerated in real time with a simple optical setup even when the sound source is moving. Experiments have been conducted, and the results show that the proposed system can restore a high-quality audio signal in real time under various conditions.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction
Sound detection with optical means is an appealing research topic due to its broad application prospects, such as remote monitoring, rescue, and so on [1,2]. One of the approaches is detecting sound with laser speckle images. The principle of the laser speckle method is simple: when coherent light is reflected by an optically rough surface, a high-contrast grainy speckle pattern can be observed with an imaging device due to the interference of the multiple reflected light waves [3]. A major property of the speckle pattern is that the speckle motion is very sensitive to the motion of the object [4,5]. The captured speckle pattern shows significant displacement even when the object moves only slightly. Based on this property, sound vibrations can be detected with speckle images, and the audio signal can be recovered by extracting information from the movement of the captured speckle image sequence. Compared with other methods, such as interferometric or holographic measurement methods [6,7], the laser speckle detection method has a simple structure and low hardware cost and can achieve remote sound detection. Several previous studies have recovered sound with laser speckle, mainly focusing on remote-monitoring applications. In [8], the authors proposed a remote sound extraction system based on laser speckle. Their results show that speech or heartbeats can be recorded at distances of up to 100 meters. In [9], the authors proposed an intensity-variance-based method for sound recovery via the gray-value variations of appropriately selected pixels of the laser speckle patterns. In these studies, a short video is usually recorded and then analyzed afterwards to restore the audio signal. Although these works successfully achieved sound regeneration with laser speckle images, real-time sound detection and regeneration have not been considered, nor has detection of a moving sound source, which greatly limits the potential applications of this technology.
In this manuscript, a real-time sound detection and regeneration system based on laser speckle images is proposed. Unlike previous studies, the proposed system takes into consideration, for the first time, the real-time processing and regeneration of the audio signal from a moving sound source. In our system, after capturing speckle images, a high-speed calculation is conducted immediately to obtain the displacement of the captured speckle images instead of storing the patterns in the computer. Thus, the system can output audio signals in real time while sampling. To achieve this, only a small part of the imaging sensor is used to capture the speckle patterns. In this way, not only can high camera sampling rates be achieved even with a common industrial camera, but the computation time can also be reduced because of the small image size. Moreover, an optical flow algorithm is adopted to obtain the displacement between two frames in a short time. These two points enable a real-time processing speed and sub-pixel accuracy. In addition, several denoising algorithms are proposed to correct the calculation noise in real time. This not only improves the accuracy of the results, but also enables the system to regenerate the audio signal from moving sound sources. Compared with previous systems, our system works more like a microphone than a recorder, which gives our system a wider range of potential applications, such as a meeting scenario.
The structure of this paper is as follows: the flowchart of our system is initially introduced in Section 2, where the optical flow method, along with the denoising algorithms for the sampled signal, is explained. The experimental results of our system are then shown in Section 3, including the results under different signal amplitudes and camera defocusing, and the results of moving sound source detection. Finally, the conclusion of the paper is given in Section 4.

2. Methodologies
2.1. Farneback optical flow algorithm
According to the results of previous research, when a vibrating object is illuminated with a coherent laser source, the captured speckle pattern vibrates periodically in one direction [10]. Several past studies recovered the audio signal from the gray-value variation of selected pixels [11]. The advantage of this approach is its small computational workload, which makes a real-time calculation speed possible. However, the gray-value method requires a linear distribution of the gray value within a certain pixel range in the direction of vibration, so the quality of the result cannot be guaranteed when the amplitude of the audio signal changes. Therefore, we decided to regenerate the audio signal from the motion information of the speckle sequence. In the past, cross-correlation between images was widely used to calculate speckle motion [12,13]. However, it is difficult for the cross-correlation method to achieve high-speed calculation and sub-pixel accuracy at the same time. Besides, in our system the speckle image size is set to be very small, which reduces the available image information and makes most feature-point methods [14] unsuitable for our situation. For these reasons, the Farneback optical flow algorithm, proposed by Gunnar Farnebäck in 2003, is employed to analyze the speckle motion [15]. In the algorithm, each image is regarded as a 2D function f (x, y). Specifically, by fitting the gray value of each pixel and its neighbors, a quadratic polynomial expansion based on the coordinate (x, y) of the pixel of interest can be expressed as:

$$ f(\mathbf{x}) = \mathbf{x}^{T} A \mathbf{x} + \mathbf{b}^{T} \mathbf{x} + c, \qquad (1) $$

where $\mathbf{x}$ represents the coordinate $(x, y)$, $A = \begin{pmatrix} r_4 & r_6/2 \\ r_6/2 & r_5 \end{pmatrix}$, $\mathbf{b} = \begin{pmatrix} r_2 \\ r_3 \end{pmatrix}$, $c = r_1$, and $r_1 \sim r_6$ are the coefficients of the quadratic polynomial fitting. When the image undergoes a global shift $\mathbf{d}$, the new signal can be expressed as:

$$ \begin{aligned} f'(\mathbf{x}) &= f(\mathbf{x} - \mathbf{d}) \\ &= (\mathbf{x} - \mathbf{d})^{T} A (\mathbf{x} - \mathbf{d}) + \mathbf{b}^{T}(\mathbf{x} - \mathbf{d}) + c \\ &= \mathbf{x}^{T} A \mathbf{x} + (\mathbf{b} - 2A\mathbf{d})^{T} \mathbf{x} + \mathbf{d}^{T} A \mathbf{d} - \mathbf{b}^{T} \mathbf{d} + c \\ &= \mathbf{x}^{T} A' \mathbf{x} + \mathbf{b}'^{T} \mathbf{x} + c'. \end{aligned} \qquad (2) $$

The optical flow method assumes that the brightness of the same pixel in the two images does not change; thus we have:

$$ A' = A, \qquad (3) $$
$$ \mathbf{b}' = \mathbf{b} - 2A\mathbf{d}, \qquad (4) $$
$$ c' = \mathbf{d}^{T} A \mathbf{d} - \mathbf{b}^{T} \mathbf{d} + c. \qquad (5) $$

According to Eq. (4), the displacement $\mathbf{d}$ can be solved as:

$$ \mathbf{d} = -\frac{1}{2} A^{-1} (\mathbf{b}' - \mathbf{b}). \qquad (6) $$
The above description is the basic idea of the Farneback optical flow algorithm. In practice, a weighted estimation over a neighborhood of the pixel of interest is performed to reduce noise and obtain a reliable calculation result. Figure 1 shows two speckle images and the optical flow result between them. Since the algorithm calculates the displacement pixelwise, a dense optical flow that represents the displacement between two frames can be obtained even when the image size is very small, as shown in Fig. 1(c).

Fig. 1. Two speckle images and the optical flow field between them. (a) Former frame. (b)
Later frame. (c) Optical flow.
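As a quick numerical check of Eqs. (1)-(6), the short Python sketch below (illustrative only, with arbitrarily chosen coefficients; not the authors' code) shifts a quadratic polynomial by a known sub-pixel displacement and recovers it with Eq. (6).

```python
# Illustrative check of Eqs. (1)-(6) with arbitrary coefficients (not the
# authors' code): shift a quadratic polynomial by a known d, form b' = b - 2Ad,
# and recover d from Eq. (6).
import numpy as np

A = np.array([[2.0, 0.3],
              [0.3, 1.5]])          # symmetric matrix built from r4, r5, r6
b = np.array([0.7, -0.4])           # linear coefficients (r2, r3)
d_true = np.array([0.25, -0.10])    # known sub-pixel shift

b_prime = b - 2.0 * A @ d_true                 # Eq. (4)
d = -0.5 * np.linalg.solve(A, b_prime - b)     # Eq. (6)
print(d)                                       # -> [ 0.25 -0.1 ]
```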

2.2. Real-time signal processing


The flowchart of the proposed system's algorithm is shown in Fig. 2. A detailed description of the whole process is as follows.

Fig. 2. Flowchart of the real-time signal processing algorithm of our system.

Capture Images: The frame rate of the camera directly determines the detectable frequency
range of the laser microphone system. In order to reduce computational costs and improve
transmission speed, we use a common USB3.0 camera to capture the speckle images. The
window size is set to 32 × 32 pixels. At this resolution, the camera can reach a frame rate of 2300 fps.
Calculate displacement vector of frames: After obtaining the frame sequence, the Farneback optical flow algorithm is adopted to obtain the motion between frames. As shown in Fig. 1(c), the dense optical flow algorithm computes the motion vector for every pixel between two images. Here the parameter "Window Size" in the algorithm is set to 32, so that the results for each pixel are approximately the same. The average of all vectors is taken as the global shift d between two frames.
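A minimal sketch of this step is given below, assuming an OpenCV-based implementation (the paper does not state which optical flow library was used); all parameter values other than the window size of 32 are assumptions.

```python
# Hedged sketch of the displacement-vector step (assumed OpenCV implementation):
# compute the dense Farneback flow between two consecutive 32x32 speckle frames
# and average the per-pixel vectors into one global shift d.
import cv2
import numpy as np

def global_shift(prev_gray: np.ndarray, next_gray: np.ndarray) -> np.ndarray:
    """Return the mean (dx, dy) displacement between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5,   # pyramid scale (assumed)
        levels=1,        # a single level suffices for 32x32 images (assumed)
        winsize=32,      # "Window Size" = 32, as stated in the text
        iterations=3,    # iteration count (assumed)
        poly_n=5,        # neighborhood size of the polynomial expansion (assumed)
        poly_sigma=1.1,  # Gaussian sigma of the expansion (assumed)
        flags=0)
    return flow.reshape(-1, 2).mean(axis=0)   # average of all per-pixel vectors
```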
Calculate sampling value: After obtaining the displacement vector d between adjacent frames, the magnitude of the displacement |d| between the two frames can be easily obtained as:

$$ |\mathbf{d}| = \sqrt{x^2 + y^2}. \qquad (7) $$

The angle $\alpha$ of the global vector is expressed as:

$$ \alpha = \tan^{-1}\frac{y}{x}. \qquad (8) $$

According to the speckle motion model, the speckle sequence exhibits nearly linear vibration when the object of interest vibrates. Figure 3 is a statistical histogram of the displacement angle between every two frames over 10,000 images. From Fig. 3 we can see that the angles are clearly distributed in two different intervals when the speckle sequence linearly reciprocates. Therefore, the displacement |d| is superimposed (added or subtracted) according to the direction angle of the displacement vector to obtain a sinusoidal waveform representing the original signal.

Fig. 3. Angle statistical histogram of 10 thousand vectors.
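This superposition can be sketched as follows; the sign rule is an assumed implementation detail (the paper gives no code): |d| is added when the vector falls in one of the two angle intervals of Fig. 3 and subtracted when it falls in the other.

```python
# Hedged sketch of the sampling-value step: accumulate |d| with a sign chosen
# from the direction angle, giving the raw (still drifting) waveform s_i.
import numpy as np

def accumulate(shifts: np.ndarray, axis_angle: float) -> np.ndarray:
    """shifts: (N, 2) array of per-frame global shifts (dx, dy).
    axis_angle: direction of the vibration axis in radians, e.g. taken from
    one peak of the angle histogram in Fig. 3.
    Returns the cumulative signed displacement."""
    mag = np.hypot(shifts[:, 0], shifts[:, 1])    # |d| per frame, Eq. (7)
    ang = np.arctan2(shifts[:, 1], shifts[:, 0])  # alpha per frame, Eq. (8)
    # + when the vector points along the vibration axis, - when it points back
    sign = np.where(np.cos(ang - axis_angle) >= 0.0, 1.0, -1.0)
    return np.cumsum(sign * mag)
```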

Fix accumulated deviation: Because of the noise in the image sequence, each displacement calculation introduces a tiny deviation. In particular, the high sampling rate of the system causes the deviation to accumulate very quickly. Let the deviation be denoted $e_j$, the displacement between frame $j$ and frame $j+1$ be $|\mathbf{d}|_j$, and the accumulated displacement be $s_i$, which can be expressed as:

$$ s_i = \sum_{j=1}^{i-1} |\mathbf{d}|_j + \sum_{j=1}^{i-1} e_j. \qquad (9) $$

Take a single-frequency audio signal as an example. Ideally, the accumulated result $\sum_{j=1}^{i-1} |\mathbf{d}|_j$ shows a sinusoidal waveform. However, due to the accumulated deviation $\sum_{j=1}^{i-1} e_j$, the regenerated waveform will constantly drift. Figure 4 shows a waveform of a regenerated 50 Hz audio signal. As shown by the black line, the regenerated sinusoidal waveform drifts drastically within only 10 seconds.

Fig. 4. Waveform before and after fixing of accumulated drift.

Fixing the drift problem is crucial to the system. In our system, we always take the latest 100 points, i.e., the sampling data of the latest 0.1 seconds, as the sample used to estimate the real-time drift slope k. Each time $s_i$ is obtained, the data from $s_{i-100}$ to $s_{i-1}$ are used to calculate the real-time drift slope k, which is used to correct the drift and obtain the fixed data $S_i$, expressed as:

$$ S_i = s_i - k \times i. \qquad (10) $$

The fixed waveform is shown with the red line in Fig. 4. From the result we can see that although the accumulated deviation causes drift, it can be fixed, and a flat waveform can be obtained with the denoising algorithm. Besides, real-time drift fixing makes it possible to detect the audio signal from a moving sound source, which will be illustrated in a later section.
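The drift fixing of Eq. (10) can be sketched as below; the least-squares line fit over the last 100 samples is an assumption, since the paper does not specify how the slope k is estimated.

```python
# Hedged sketch of the drift-fixing step: estimate the drift slope k from the
# latest 100 samples (about 0.1 s at the ~1 kHz effective rate), then apply
# Eq. (10) to the newest sample.
import numpy as np

def fix_drift(s: np.ndarray, window: int = 100) -> float:
    """s: all accumulated samples s_0 ... s_i so far (1D array, len >= window+1).
    Returns the drift-corrected newest value S_i = s_i - k * i."""
    recent = s[-window - 1:-1]            # s_{i-100} ... s_{i-1}
    idx = np.arange(len(recent), dtype=float)
    k, _ = np.polyfit(idx, recent, 1)     # slope of the local trend (assumed least-squares fit)
    i = len(s) - 1
    return s[-1] - k * i                  # Eq. (10)
```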
Uniform sampling estimation: With real-time fixing, the drift problem can be solved. However, because the time consumed in capturing an image and calculating the optical flow is not exactly equal for each sample, the sampling rate is not uniform. This will lead to noise if the non-uniform sample data are replayed.

Fig. 5. Part of the waveform of the uniform sampling data estimation.
To deal with this problem, an estimation algorithm is proposed here to obtain uniform sampling values. Since the sampling rate is high, the variation between every two sample points can be approximated as a linear function. As shown in Fig. 5, each time two sample values $S_i$ and $S_{i+1}$ are obtained at times $t_i$ and $t_{i+1}$, we estimate the sample values $S_{i\_u}$ corresponding to all times $T_i$ in the $[t_i, t_{i+1}]$ interval, where $T_i$ is a uniform time grid with an interval of 1 ms. In this way, the uniform sampling values $S_{i\_u}$ can be obtained.
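A sketch of this uniform-sampling estimation follows (not the authors' code); np.interp performs exactly the piecewise-linear estimation between consecutive samples described above.

```python
# Hedged sketch of the uniform-sampling step: resample the non-uniformly timed
# values S_i onto a uniform 1 ms grid by linear interpolation.
import numpy as np

def resample_uniform(t: np.ndarray, S: np.ndarray, dt: float = 1e-3):
    """t: non-uniform sample timestamps in seconds; S: drift-corrected values.
    Returns (T, S_u): the uniform 1 ms time grid and the estimated values."""
    T = np.arange(t[0], t[-1], dt)
    S_u = np.interp(T, t, S)
    return T, S_u
```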
The above is the processing performed for each sample of the system. For our system, it takes about 1 ms from step I to step VI to obtain one sample. According to the Nyquist sampling theorem, this means that audio signals below 500 Hz can be regenerated in real time.

3. Experiment result
3.1. Single frequency test
In the first experiment we tried to regenerate a single-frequency audio signal with our system. The setup of our laser microphone system is shown in Fig. 6, and the schematic of the whole system is shown in Fig. 7. An expanded laser beam with an output power of 100 mW at a wavelength of 650 nm illuminates the membrane of the speaker. A Point Grey GS3-U3-32S4C-C camera with a f = −25mm lens is used to capture the speckle images. As mentioned above, the image resolution is set to 32 × 32 pixels, and the frame rate of the camera is 2300 fps. The camera is connected to a desktop computer controlled by Python code. Both the laser and the camera are positioned around 1 m away from the speaker.

Fig. 6. Experiment scenario of the proposed laser microphone system.

The audio signals with the frequencies of 50Hz, 100Hz, 150Hz and 200Hz are tested
respectively. The waveforms of the regenerated audio signal are shown in Fig. 8, and Fig. 9
shows the spectrum after the Fourier transform of each result. From the results we can see that
the information extracted from the speckle motion can correctly represent the frequency of the
signal source. Especially in the low-frequency region, the regenerated waveforms are very clear.
As the frequency increases, the quality of the regenerated audio signal becomes worse. This is mainly caused by the limited sampling rate of the camera. It is foreseeable that if a higher-speed camera is employed, the system will be able to regenerate audio signals at higher frequencies.

Fig. 7. Schematic diagram of the proposed laser microphone system.

Fig. 8. Regenerated waveform of different frequency audio signal. (a) 50 Hz audio signal.
(b) 100 Hz audio signal. (c) 150 Hz audio signal. (d) 200 Hz audio signal.

In the next experiment we show that music can also be regenerated in real time with the proposed system. We used MATLAB to edit and generate the first forty seconds of the music “Moonlight Sonata No. 14” and played it. Figure 10 shows the spectrogram of the created music, and Fig. 11 shows the spectrogram of the regenerated music. The experiment shows that the music can be regenerated with high quality in real time by the laser speckle and the proposed algorithm. For reference, the audio files of both the original and the regenerated music are provided as results of this experiment.

Fig. 9. Frequency domain diagram of each result. (a) 50 Hz audio signal. (b) 100 Hz audio
signal. (c) 150 Hz audio signal. (d) 200 Hz audio signal.

Fig. 10. Spectrogram of the original audio signal (see also Visualization 1).

3.2. Effect of amplitude and defocusing on the result


For the proposed laser speckle detection system, it was found that the amplitude of the sound source brings challenges to signal regeneration. An increasing amplitude of the object vibration causes the displacement of the speckle to become larger, which in turn decreases the correlation between adjacent frames. When the displacement between adjacent frames is large enough, it cannot be correctly calculated because the correlation between the two frames is too small. As mentioned above, the image size is set to 32 × 32 pixels in our system. The small window size certainly increases the sampling speed of the system; on the other hand, however, it also weakens the ability to observe large speckle motion.
Fig. 11. Spectrogram of the regenerated audio signal (see also Visualization 2).

Here the amplitude of the audio signal is gradually increased, and the performance of the system is investigated. The 50 Hz audio signal is played by the speaker, and the amplitude of the audio signal is adjusted through the computer volume to 48.3 dB, 54.3 dB, 57.9 dB and 61.0 dB, respectively. Correspondingly, the signal-to-noise ratio (SNR) of the result is 28.76 dB, 31.01 dB, 16.83 dB, and 1.38 dB. Figure 12 shows part of the regenerated waveforms under the different amplitudes. The result shows that it is easier to recover a clear sinusoidal waveform when the speckle motion is small. However, as the speckle motion becomes larger, the recovered waveform is gradually distorted and the SNR of the regenerated signal decreases.

Fig. 12. Regenerated audio signal with different amplitude. (a) Sound volume is 48.3 dB.
(b) Sound volume is 54.3 dB. (c) Sound volume is 57.9 dB. (d) Sound volume is 61.0 dB.
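The paper does not state how the SNR values were computed; a common choice, assumed in the sketch below, is to treat the power in the fundamental (50 Hz) spectral bin as signal and the remaining non-DC power as noise.

```python
# Hedged SNR estimate for a regenerated single-frequency signal (assumed
# definition, not necessarily the one used by the authors).
import numpy as np

def tone_snr_db(x: np.ndarray, fs: float, f0: float) -> float:
    """x: regenerated waveform, fs: sampling rate [Hz], f0: tone frequency [Hz]."""
    X = np.fft.rfft(x * np.hanning(len(x)))
    power = np.abs(X) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    k0 = int(np.argmin(np.abs(freqs - f0)))       # bin closest to the tone
    sig = power[max(k0 - 1, 1):k0 + 2].sum()      # tone power incl. +/- 1 bin leakage
    noise = power[1:].sum() - sig                 # everything else except DC
    return 10.0 * np.log10(sig / noise)
```

For the 50 Hz tests above, x would be a stretch of the regenerated waveform and fs the roughly 1 kHz effective sampling rate.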

One way to deal with this problem is to use a higher-speed camera to obtain a denser sequence of sampled images, so that the speckle displacement between every two frames is not too large. On the other hand, adjusting the amount of defocusing of the imaging system can also solve this problem. When the camera focuses on the speckle field near the investigated object (i.e., location ① in Fig. 6), the amount of defocusing is small. In this situation, the speckles overlap on the image, and the speckle motion is not sensitive to object motion. Conversely, when the camera focuses on the speckle field in the far field (i.e., location ② in Fig. 6), the amount of defocus is large. In this situation the bright and dark speckles are distributed clearly on the image, and the speckle motion is sensitive to object motion.
In the next experiment, the distance between camera and object is set to be 1m, and the
amount of camera defocusing is set to be 0 (focus), 0.3m, 0.5m, 0.6m, 0.7m, 0.75m and 0.85m
respectively. Figure 13 shows the images captured under the different amounts of camera defocusing. In our system, the wavelength of the laser source is 650 nm, and a color image sensor is used to capture the speckle pattern. Thus, the captured image shows a red grainy pattern, as shown in Fig. 13(g). When the imaging system is focused, the light intensity received by a single pixel becomes stronger. Because each color pixel group has four channels (red, green, green, blue), some pixels of the image become colorful. The speaker plays a 50 Hz sine wave with different amplitudes (48.3 dB, 54.3 dB, 57.9 dB and 61.0 dB, respectively). The SNR of the results under the different conditions is shown in Fig. 14.

Fig. 13. Speckle image captured under different amount of camera defocusing L. (a) L = 0.
(b) L = 0.3m. (c) L = 0.5m. (d) L = 0.6m. (e) L = 0.7m. (f) L = 0.75m. (g) L = 0.85m.

Fig. 14. SNR of the result with different amount of defocusing under different amplitude of
audio signal.

First, when the camera focuses on the object (amount of defocus equals zero), speckles overlap
together and form a featureless bright spot, as shown in Fig. 13(a). In this situation the SNR of
the result is meaningless because object vibration cannot be observed through speckle motion.
In the defocused situations, the SNR of the results always remains at a high level (over 20 dB) when the amplitude of the audio signal is small. In the case of a large-amplitude audio signal, however, reducing the amount of defocusing makes the speckle motion smaller, which in turn improves the result. For instance, at the large amplitude of 61.0 dB, if the camera is focused on the speckle field 0.5 m away from the object, the SNR of the result reaches the optimal value (30.30 dB). Figure 15 shows the regenerated waveform in this situation. Compared with Fig. 12(d), the distortion of the result is fixed owing to the adjustment of the camera defocusing.

Fig. 15. Regenerated waveform with 0.5 m camera defocusing under the amplitude of
61.0 dB.

3.3. Detection of moving sound source


In practical situations, the sound source usually cannot maintain absolute stillness. For example, when a person is talking, the body shows slight movement. Therefore, the detection of a moving sound source is investigated. First, we explain the speckle motion model. The six-degree-of-freedom spatial motion of an object can be divided into three categories: transverse, axial, and tilt. According to previous research [16], transverse and tilt motion cause a two-dimensional displacement of the captured speckle pattern, while axial motion causes a scaling variation of the captured speckle pattern, as shown in Fig. 16.

Fig. 16. Corresponding speckle motion caused by object motion.

Therefore, when the sound source undergoes transverse or tilt motion, the motion of the captured speckle consists of two parts, sinusoidal vibration and translational motion, and the calculated waveform will be a sine wave with drift. The drift caused by object motion can be fixed in real time by the algorithm described in Section 2.

In the next experiment, the speaker playing the 50 Hz audio signal is placed on a linear motor, and the motor translates by 30 mm at a speed of 5 mm/s in the transverse direction and then returns to the original position. The black line in Fig. 17 shows the obtained waveform. The waveform clearly reflects the superposition of the sinusoidal vibration and the horizontal movement corresponding to the object motion. Meanwhile, the red line shows the waveform fixed with our proposed algorithms, which proves that our algorithm can output a clear sinusoidal waveform in real time without being affected by the motion of the object.

Fig. 17. Test result of transverse moving sound source.

Next, the speaker is placed on a rotation motor, and the motor rotates by 5° at a speed of 0.5°/s and then returns to the original position. The black line in Fig. 18 shows the obtained waveform, and the red line shows the fixed waveform. The result also proves that our system can continuously output a clear audio signal during the tilt motion of the sound source.

Fig. 18. Test result of tilt moving sound source.

Fig. 19. Test result of axial moving sound source.



Finally, the axial motion is investigated. The motor translates by 30 mm at a speed of 5 mm/s in the axial (z) direction and then returns to the original position. The black line in Fig. 19 shows the obtained waveform. Different from the other two motions, axial motion has little effect on the speckle motion under the defocused condition. It can be seen from Fig. 19 that the waveform drifts only slightly. The drift can also be fixed with our algorithm, and the system can output a clear waveform continuously.

4. Conclusion
In this paper, a laser-speckle-based sound detection system has been proposed. In the proposed system, a laser speckle imaging approach is adopted to detect the vibration of the sound source, and an optical flow method, along with several denoising algorithms proposed by the authors, is employed to achieve high-accuracy, real-time signal processing. The main advantages of the proposed system are real-time, high-quality audio signal regeneration and the ability to regenerate the audio signal of a moving sound source. These contributions broaden the potential applications of this technology, such as its use as a laser microphone. The experimental results prove that the proposed method is an efficient way to regenerate audio signals under different conditions. Currently, the effective real-time sampling rate of the system is around 1 kHz, and the results show that it performs well in the low-frequency region. In the future, there is still room for further improvement of the system sampling rate. Faster imaging sensors and optimization of the algorithm can raise the sampling rate of the system so that human speech can be regenerated with this system.

Funding
Keio University.

Disclosures
The authors declare that there are no conflicts of interest related to this article.

References
1. M. Campbell, J. A. Cosgrove, C. A. Greated, S. Jack, and D. Rockliff, “Review of LDA and PIV applied to the
measurement of sound and acoustic streaming,” Opt. Laser Technol. 32(7-8), 629–639 (2000).
2. Z. Christian, A. Brutti, and P. Svaizer, “Acoustic based surveillance system for intrusion detection,” in Proceedings
of International Conference on Advanced Video and Signal Based Surveillance (IEEE, 2009), pp. 314–319.
3. J. W. Goodman, Speckle phenomena in optics: theory and applications (Roberts and Company, 2007).
4. B. M. Smith, P. Desai, V. Agarwal, and M. Gupta, “CoLux: Multi-object 3d micro-motion analysis using speckle
imaging,” ACM Trans. Graph. (TOG) 36(4), 1–12 (2017).
5. B. M. Smith, M. O’Toole, and M. Gupta, “ Tracking multiple objects outside the line of sight using speckle imaging,”
in Proceedings of Conference on Computer Vision and Pattern Recognition (IEEE, 2018), pp. 6258–6266.
6. O. Matoba, H. Inokuchi, K. Nitta, and Y. Awatsuji, “Optical voice recorder by off-axis digital holography,” Opt. Lett.
39(22), 6549–6552 (2014).
7. K. Ishikawa, R. Tanigawa, K. Yatabe, Y. Oikawa, T. Onuma, and H. Niwa, “Simultaneous imaging of flow and sound
using high-speed parallel phase-shifting interferometry,” Opt. Lett. 43(5), 991–994 (2018).
8. Z. Zalevsky, Y. Beiderman, I. Margalit, S. Gingold, M. Teicher, V. Mico, and J. Garcia, “Simultaneous remote
extraction of multiple speech sources and heart beats from secondary speckles pattern,” Opt. Express 17(24),
21566–21580 (2009).
9. G. Zhu, X. Yao, P. Qiu, W. Mahmood, W. Yu, Z. Sun, G. Zhai, and Q. Zhao, “Sound recovery via intensity variations
of speckle pattern pixels selected with variance-based method,” Opt. Eng. 57(2), 1 (2018).
10. L. Li, F. A. Gubarev, M. S. Klenovskii, and A. I. Bloshkina, “Vibration measurement by means of digital speckle
correlation,” in Proceedings of International Siberian Conference on Control and Communications (IEEE, 2016), pp.
1–5.
11. Z. Chen, C. Wang, C. Huang, H. Fu, H. Luo, and H. Wang, “Audio signal reconstruction based on adaptively selected
seed points from laser speckle images,” Opt. Commun. 331, 6–13 (2014).
12. E. Archbold, J. M. Burch, and A. E. Ennos, “Recording of in-plane surface displacement by double-exposure speckle
photography,” Opt. Acta 17(12), 883–898 (1970).

13. D. Amodio, G. B. Broggiato, F. Campana, and G. M. Newaz, “Digital speckle correlation for strain measurement by
image analysis,” Exp. Mech. 43(4), 396–402 (2003).
14. T. O. H. Charrett, K. Kotowski, and R. P. Tatam, “Speckle tracking approaches in speckle sensing,” Proc. SPIE
10231, 102310L (2017).
15. G. Farnebäck, “Two-frame motion estimation based on polynomial expansion,” in Proceedings of Scandinavian
conference on Image analysis. (Springer, 2003), pp 363–370.
16. J. Kensei, M. Gupta, and S. K. Nayar, “Spedo: 6 dof ego-motion sensor using speckle defocus imaging,” in
Proceedings of the International Conference on Computer Vision (IEEE, 2015), pp. 4319–4327.
