
11th AIAA/CEAS Aeroacoustics Conference (26th AIAA Aeroacoustics Conference) AIAA 2005-2961

23 - 25 May 2005, Monterey, California

Extensions of DAMAS and Benefits and Limitations of Deconvolution in Beamforming

Robert P. Dougherty*
OptiNav, Inc., Bellevue, WA, 98004

The DAMAS deconvolution algorithm represents a breakthrough in phased array imaging for aeroacoustics, potentially eliminating sidelobes and array resolution effects from beamform maps. DAMAS is an iterative non-negative least squares solver. The original algorithm is too slow and lacks an explicit regularization method to prevent noise amplification. Two extensions are proposed, DAMAS2 and DAMAS3. DAMAS2 provides a dramatic speedup of each iteration and adds regularization by a low pass filter. DAMAS3 also provides fast iterations and, additionally, reduces the required number of iterations. It uses a different regularization technique from DAMAS2, and is partially based on the Wiener filter. Both DAMAS2 and DAMAS3 restrict the point spread function to a translationally-invariant, convolutional, form. This is a common assumption in optics and radio astronomy, but may be a serious limitation in aeroacoustic beamforming. This limitation is addressed with a change of variables from (x, y, z) to a new set, (u, v, w). The concepts taken together, along with appropriate array design, may permit practical 3D beamforming in aeroacoustics.

Nomenclature
$\vec{x}$ = 3D source locations
$\langle\,\rangle$ = Time average, for both time domain processing and sums of STFT signals.
STFT = Short Time Fourier Transform.
$s(\vec{x}', j)$ = Narrowband source strength at location $\vec{x}'$ and time block $j$.
$\vec{C}(\vec{x}')$ = Narrowband array response vector for a source at $\vec{x}'$.
$\vec{w}(\vec{x})$ = Narrowband array weighting vector to steer to $\vec{x}$.
$b(\vec{x})$ = Beamform map value for the grid point $\vec{x}$.
$\dagger$ = Hermitian conjugate; complex conjugate transpose.
$q(\vec{x})$ = Power-type acoustic source strength at $\vec{x}$.
$\mathrm{psf}(\vec{x}, \vec{x}')$ = Point spread function connecting a source at $\vec{x}'$ to an image point $\vec{x}$.
$\mathrm{psf}(\vec{x} - \vec{x}')$ = Shift-invariant or "convolutional" psf.
$\vec{k}$ = Spatial frequency in FFT-based image processing.
$\tilde{p}(\vec{k}),\ \tilde{b}(\vec{k}),\ \tilde{q}(\vec{k})$ = psf, beamform map, and source strength in the spatial frequency domain.
$\vec{X}$ = The values of the source strengths over a grid, stacked on a vector.
$\vec{Y}$ = The values of a beamform map, stacked on a vector.
$A$ = A matrix form of the psf, as expressed in $\vec{Y} = A\vec{X}$.
$\gamma$ = Regularization parameter for the Wiener filter.
$\vec{\mu} = (a, b, 0)$ = The location of a microphone in the array.
$(u, v, w)$ = Beamforming coordinates transformed to make the psf approximately convolutional.

*President, 10914 NE 18th St, Senior Member, AIAA


Copyright © 2005 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
I. Introduction

Aeroacoustic measurements are often required to cover a frequency range of several octaves. This drives the
design of imaging phased arrays to sparse patterns that can avoid aliases, but tend to contaminate the results
with sidelobes. The sidelobes, together with the frequency dependence of the array resolution, can complicate
interpretation of the images, and make it difficult to establish quantitative results. Integration of the beamform maps
can sometimes produce useful component spectra, but care is required to compensate for the beamwidth, and
sidelobes can cause unexpected results.1
Phased arrays are usually used for 2D imaging, although severe errors can occur if the source is not in the
beamforming plane. These include parallax errors in source location and the possibility that sources can be missed
entirely or that sidelobes in the form of out of plane sources can be confused with real sources in the plane.
Expanded use of 3D imaging could improve the results.
The beamwidth and sidelobe characteristics of an imager are expressed by the point spread function (psf). The function $\mathrm{psf}(\vec{x}, \vec{x}')$ represents the image, as a function of $\vec{x}$, that would result from a point source at $\vec{x}'$. If the actual source distribution is $q(\vec{x})$, then the resulting beamform map, subject to an incoherence assumption discussed below, is

$$ b(\vec{x}) = \int \mathrm{psf}(\vec{x}, \vec{x}')\, q(\vec{x}')\, d\vec{x}' . \qquad (1) $$

Since it is usually simple to predict the psf, it may be possible to invert Eq. (1) to determine $q(\vec{x})$ from the beamform map. This process has a long history in radio astronomy2 and optical microscopy.3 Efforts in aeroacoustics have focused on the CLEAN algorithm.4 In 2004, Brooks and Humphreys introduced the DAMAS algorithm.5 DAMAS solves Eq. (1) by an iterative, non-negative least squares technique.6,7 It offers the possibility of completely removing the sidelobes and resolution effects from beamform maps. Its primary drawback is that it is too slow. Two-dimensional runs have been reported to require hours or days, and 3D calculations may be out of reach. Another problem is that DAMAS lacks an explicit regularization technique to control the amplification of high frequency noise in the reconstruction.

II. Convolutional form of narrowband beamform maps


Suppose there are N microphones in a phased array, and the data from each microphone is processed in the
standard way by a short time Fourier transform (STFT). In this approach, each pressure time history record is
divided into T short time duration blocks, and each block is processed by a Fast Fourier Transform, or similar
technique, to give complex narrowband Fourier coefficients. After selecting a single frequency bin, each
microphone signal has been replaced by a series of T complex coefficients, one for each time block. For microphone
$i$ and time block $j$, let the corresponding Fourier amplitude be denoted $p_i(j)$. Arranging these $N$ functions of $j$ on a vector gives the narrowband array data, $\vec{p}(j)$, $j = 1,\dots,T$. (Arrow notation is used here for both $N$-vectors of array quantities and 3D positions such as $\vec{x}$.) Suppose there is a distribution of acoustic sources, and for each source location, $\vec{x}'$, imagine that the STFT is applied to the source to give a source Fourier amplitude $s(\vec{x}', j)$, $j = 1,\dots,T$. Let the frequency domain Green's function for microphone location $\vec{x}_i$ and source point $\vec{x}'$ be denoted $C_i(\vec{x}')$. Arranging these complex numbers on an $N$-vector gives the array response vector $\vec{C}(\vec{x}')$. The source-receiver model can then be written

$$ \vec{p}(j) = \int \vec{C}(\vec{x}')\, s(\vec{x}', j)\, d\vec{x}' \qquad (2) $$

In a simple formulation, the beamform map for grid point $\vec{x}$ can be computed by

$$ b(\vec{x}) = \left\langle \left| \vec{w}^\dagger(\vec{x})\, \vec{p} \right|^2 \right\rangle \qquad (3) $$

where the array weighting vector $\vec{w}(\vec{x})$ is selected to isolate sounds from $\vec{x}$. It is frequently appropriate to choose $\vec{w}(\vec{x})$ to be parallel to $\vec{C}(\vec{x})$, but other options are available, including adaptive beamforming and array shading. The bracket notation now refers to an average over the STFT blocks:

$$ \langle f \rangle \equiv \frac{1}{T} \sum_{j=1}^{T} f(j) \qquad (4) $$

As an aside, the beamforming expression (3) is equivalent to the more common form $b(\vec{x}) = \vec{w}^\dagger A \vec{w}$, where $A$ is the cross spectral matrix, $\langle \vec{p}\,\vec{p}^\dagger \rangle$.
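As a minimal illustration of this formulation, the following numpy sketch evaluates $b(\vec{x}) = \vec{w}^\dagger A \vec{w}$ over a grid with $\vec{w}$ parallel to a free-space steering vector; the function name, arguments, and the steering-vector normalization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def beamform_map(csm, mic_xyz, grid_xyz, freq, c=343.0):
    """Conventional beamforming sketch: b(x) = w^H A w with w parallel to C(x).

    csm      : (N, N) cross spectral matrix <p p^H> for one frequency bin
    mic_xyz  : (N, 3) microphone positions
    grid_xyz : (M, 3) beamforming grid points
    """
    k = 2.0 * np.pi * freq / c
    b = np.empty(len(grid_xyz))
    for m, x in enumerate(grid_xyz):
        r = np.linalg.norm(mic_xyz - x, axis=1)      # mic-to-grid-point distances
        C = np.exp(-1j * k * r) / r                  # assumed free-space steering (Green's) vector
        w = C / np.linalg.norm(C) ** 2               # one common normalization choice
        b[m] = np.real(w.conj() @ csm @ w)           # beamform map value at this grid point
    return b
```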
The contents of the beamform map can be found by substituting Eq. (2) into Eq. (3):

$$ b(\vec{x}) = \int\!\!\int \vec{w}^\dagger(\vec{x})\, \vec{C}(\vec{x}')\, \vec{C}^\dagger(\vec{x}'')\, \vec{w}(\vec{x})\, \left\langle s(\vec{x}')\, s^*(\vec{x}'') \right\rangle d\vec{x}'\, d\vec{x}'' \qquad (5) $$

Up to this point, nothing has been assumed about the source distribution, except that it can be viewed as
stationary. (It is believed that moving sources in jet noise can be treated as stationary, extended, sources in this
formulation.) In order to interpret the phased array as a standard imaging system, it is necessary to assume an
incoherent source distribution:

$$ \left\langle s(\vec{x}')\, s^*(\vec{x}'') \right\rangle = q(\vec{x}')\, \delta(\vec{x}' - \vec{x}'') \qquad (6) $$

With this assumption, Eq. (5) gives Eq. (1) with

$$ \mathrm{psf}(\vec{x}, \vec{x}') = \left| \vec{w}^\dagger(\vec{x})\, \vec{C}(\vec{x}') \right|^2 . \qquad (7) $$

Ideally, the point spread function $\mathrm{psf}(\vec{x}, \vec{x}')$ would be the delta function $\delta(\vec{x} - \vec{x}')$. In that case, the beamform map, $b(\vec{x})$, would reproduce the source strength distribution, $q(\vec{x})$. The more realistic situation that the psf is not a delta function creates the need for deconvolution.

Knowledge of $q(\vec{x})$ in the case of an incoherent source distribution is particularly useful because it makes it possible to predict the sound pressure level that would result at a far field microphone from a portion of the source distribution by simply integrating that source distribution over that region. In general, this integral would need to include the magnitude-squared of the Green's function. It is often convenient to normalize the Green's function to unity in the definition of $\vec{C}(\vec{x}')$. In this case, a microphone SPL can be predicted by simply integrating $q(\vec{x})$. It is also possible to obtain estimates of microphone SPL by integrating $b(\vec{x})$, but this procedure is difficult because it requires special steps to try to prevent the psf from distorting the results.1

It is often the case that the form of the psf is such that translating a point source causes the entire beamform map to shift the same amount without significantly changing in any other way. Such a shift-invariant psf depends on $\vec{x}$ and $\vec{x}'$ only through the difference $\vec{x} - \vec{x}'$. This convolutional form of the psf can be written in shorthand as

$$ \mathrm{psf}(\vec{x}, \vec{x}') = \mathrm{psf}(\vec{x} - \vec{x}') . \qquad (8) $$

In this case, Eq. (1) takes on the special form


$$ b(\vec{x}) = \int \mathrm{psf}(\vec{x} - \vec{x}')\, q(\vec{x}')\, d\vec{x}' . \qquad (9) $$

Equation (8) is generally a good approximation if the source region is small compared with the distance between
the array and the source. This causes the apparent size of the array, as viewed from the source region, to be nearly

constant over the source region. In the case of a planar array, Eq. (9) would be expected to hold if the beamform map were limited to a small angular sector and a small region of distances, as suggested in Fig. 1. The applicability of the
convolutional form can be extended to larger sectors by the use of special coordinates. For example, if the source
can be taken to be infinitely far from the array, then plane wave beamforming using the wavenumber components kx
and k y gives a formulation in which the convolutional model is valid over the entire hemisphere. A coordinate
transformation that permits the use of the model in 3D over a substantial range of distances is given in section VII.

III. Deconvolution and DAMAS


Equation (1) is a Fredholm integral equation of the first kind for $q(\vec{x})$, and is known to be an ill-posed problem. One way to approach it is to measure $b(\vec{x})$ on a grid of points and consider the values of $q(\vec{x})$ on the same grid to be the unknowns. The deconvolution problem then becomes a linear algebra problem,

$$ \vec{Y} = A\vec{X} \qquad (10) $$

where $\vec{Y}$ represents all of the values of the beamform map stacked on a vector, and $\vec{X}$ is the vector of unknown source strengths at the grid points. The matrix $A$ (not to be confused with the array cross spectral matrix, also usually denoted $A$) is determined from the psf using Eq. (9).

In 2D beamforming, a typical size for the map is 100×100 grid points. In this case, $A$ would be a 10,000×10,000 matrix. In the 3D case, $A$ might have $100^3 \times 100^3 = 10^{12}$ elements.

Explicit solution of Eq. (10) in the sense of a least squares solution of minimum norm is feasible with current computer technology, but not recommended. The difficulty is that a least squares solution is likely to include non-physical, negative values of $q(\vec{x})$. Adding a non-negativity constraint to the problem makes it computationally much more difficult, and requires an iterative solution.

Figure 1. A small source region that might be compatible with a convolutional psf without special coordinates. (Sketch of a compact source region at a distance from the array.)

IV. Spectral techniques and regularization


In the spatial frequency domain, Eq. (9) becomes

$$ \tilde{b}(\vec{k}) = \tilde{p}(\vec{k})\, \tilde{q}(\vec{k}) \qquad (11) $$

where $\tilde{b}(\vec{k})$, $\tilde{p}(\vec{k})$, and $\tilde{q}(\vec{k})$ represent the FFT of $b(\vec{x})$, $\mathrm{psf}(\vec{x})$, and $q(\vec{x})$, respectively.
This suggests the Wiener filter algorithm for deconvolution.8,9 In general terms this is:

Algorithm 1. Wiener filter deconvolution.

1. Compute the forward FFT of $b(\vec{x})$ and $\mathrm{psf}(\vec{x})$.
2. For each frequency $\vec{k}$, compute $\tilde{q}(\vec{k}) = \dfrac{\tilde{p}^*(\vec{k})\, \tilde{b}(\vec{k})}{\tilde{p}^*(\vec{k})\, \tilde{p}(\vec{k}) + \gamma}$.
3. Compute the inverse FFT of $\tilde{q}(\vec{k})$ to obtain $q(\vec{x})$.
4. Replace any negative values of $q(\vec{x})$ by 0.

The regularization parameter $\gamma$ is used in step 2 to avoid dividing by zero at spatial frequencies for which $|\tilde{p}(\vec{k})|^2$ is small. In practice, its value is chosen by trial and error. If $\gamma$ is too small, then the reconstruction of $q(\vec{x})$ will

appear noisy. If $\gamma$ is too large, then the deconvolution will not be accurate. The Wiener filter is commonly used in image processing, but numerical experiments have shown that step 4 destroys the quantitative information in the reconstruction. In other words, it does not appear to be possible to estimate microphone spectra by integrating the source map resulting from Wiener filter deconvolution.
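For concreteness, the following is a minimal numpy sketch of Algorithm 1, assuming the beamform map and psf are supplied as same-size 2D arrays with the psf stored in FFT (wrap-around) convention, i.e., with its peak at index (0, 0); the function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def wiener_deconvolve(bmap, psf, gamma=1e-3):
    """Algorithm 1 sketch: Wiener-filter deconvolution followed by clipping negatives."""
    B = np.fft.fft2(bmap)                            # step 1: forward FFT of the beamform map
    P = np.fft.fft2(psf)                             # step 1: forward FFT of the psf
    Q = np.conj(P) * B / (np.conj(P) * P + gamma)    # step 2: regularized spectral division
    q = np.real(np.fft.ifft2(Q))                     # step 3: back to the spatial domain
    return np.maximum(q, 0.0)                        # step 4: enforce non-negativity
```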

V. DAMAS2
The iteration in the DAMAS algorithm calls for a large number of matrix multiplications to evaluate Eq. (10) with iterates for $\vec{X}$. Since $A$ is quite a large matrix, the execution times for DAMAS for 2D cases have been
reported in terms of days. Equation (10) is intended to be equivalent to Eq. (1). If the convolutional form of the psf
is appropriate, then Eq. (1) can be replaced by Eq. (9), and this can be implemented in the spatial frequency domain
using Eq. (11). The DAMAS2 algorithm is similar to DAMAS, except each evaluation analogous to Eq. (10) is
replaced by a forward FFT, frequency-by-frequency multiplication according to Eq. (11), and an inverse FFT. The
resulting algorithm runs in seconds, rather than days. DAMAS2 also includes a regularization feature to prevent the
reconstructions from becoming noisy if the grid is too fine for the array resolution to support. This regularization
consists of a Gaussian low-pass filter that is applied during the iteration, taking advantage of the frequency-domain
representation of the image. The cutoff frequency of the filter is chosen to optimize the visual appearance of the
reconstruction. Its value does not affect the quantitative integrated results.
The DAMAS2 algorithm for 2D deconvolution is given in Algorithm 2, below. The notation is the same as in
Algorithm 1. The spatial cutoff frequency for the Gaussian regularization is $k_c$. A 2D Java implementation in the
form of a plugin for the open source image processing package ImageJ has been posted at Ref. 10.

Algorithm 2. DAMAS2 deconvolution.

1. Compute $\tilde{p}(\vec{k})$ = forward FFT[$\mathrm{psf}(\vec{x})$].
2. Set $a = \sum_{x,y,z} \mathrm{psf}$.
3. Set solution $q(\vec{x}) = 0$ for each $\vec{x}$ in the beamforming grid.
4. Iterate:
   a. $\tilde{q}(\vec{k})$ = forward FFT[$q$].
   b. For each $\vec{k}$, scale $\tilde{q}(\vec{k})$ by $\exp\!\left(-k^2 / (2 k_c^2)\right)$.
   c. Let $\tilde{r}(\vec{k}) = \tilde{p}(\vec{k})\, \tilde{q}(\vec{k})$ for each $\vec{k}$.
   d. $r(\vec{x})$ = inverse FFT[$\tilde{r}(\vec{k})$].
   e. $q(\vec{x}) \leftarrow q(\vec{x}) + \left[ b(\vec{x}) - r(\vec{x}) \right] / a$ for each $\vec{x}$.
   f. Replace each negative value of $q(\vec{x})$ by 0.
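A minimal numpy sketch of Algorithm 2 in 2D follows, under the same array conventions as the Wiener-filter sketch above (psf stored with its peak at index (0, 0) in FFT convention); the iteration count default mirrors the 500-iteration example below, and the filter handling is an assumption of this sketch.

```python
import numpy as np

def damas2(bmap, psf, k_c=None, n_iter=500):
    """Algorithm 2 sketch: DAMAS2 iteration with optional Gaussian low-pass regularization."""
    P = np.fft.fft2(psf)                              # step 1: spectrum of the psf
    a = psf.sum()                                     # step 2: normalization constant
    q = np.zeros_like(bmap)                           # step 3: initial solution
    # Spatial-frequency magnitude grid for the Gaussian filter of step 4b.
    # Note: k_c here is in cycles per grid sample, not the pixel units quoted in the text.
    kx = np.fft.fftfreq(bmap.shape[1])
    ky = np.fft.fftfreq(bmap.shape[0])
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    for _ in range(n_iter):
        Q = np.fft.fft2(q)                            # step 4a
        if k_c is not None:
            Q *= np.exp(-k2 / (2.0 * k_c ** 2))       # step 4b: Gaussian low-pass filter
        r = np.real(np.fft.ifft2(P * Q))              # steps 4c-4d: convolution via FFT
        q = np.maximum(q + (bmap - r) / a, 0.0)       # steps 4e-4f: update and clip
    return q
```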

A. Quantitative example using synthetic data


This example extends one of the cases presented in Ref. 11. A representative non-redundant phased array is
shown in Fig. 4. There are 63 microphones arranged in a disk with a 34.4 inch diameter. The beamforming plane is
located parallel to the array at a distance of 100 inches. The point spread function for a broadband source analyzed
over a 1/3 octave band centered at 4000 Hz is shown in Fig. 5.

Figure 4. Array for synthetic data example. (The plot spans -20 to 20 inches in x and y.)

Figure 5. Point spread function for the array in Fig. 4 for a 1/3 octave band centered at 4000 Hz. The source is 100 inches from the array. The beamforming grid is 127 inches on a side.

A source distribution is postulated in Fig. 6. Synthetic data were produced by generating 127 independent
random source functions, propagating the sound waves from the source points on the V-shape to the microphones,
and coherently summing the results. The sampling rate is 32000 samples per second and the duration is four
seconds. The array data were low pass filtered and then band-pass filtered to the 4000 Hz, 1/3 octave, analysis band.
The average sound pressure level at the array microphones was a factor of 95, or 19.77 dB, higher than the average
level from a single source. This is less than the factor of 127, or 21.03 dB, that might be expected from the 127
independent sources because the source geometry increased the lengths of the propagation paths in some cases.
Figure 6. Source distribution $q(\vec{x})$. The pattern is composed of 127 source points in the plane z = 100 inches. The points are separated by 0.5 inches in the horizontal and vertical directions.

Figure 7. Beamforming result, $b(\vec{x})$, for the source distribution in Fig. 6 at 4000 Hz. The color scale is 20 dB relative to the single source.
Delay and sum beamforming with time-domain diagonal deletion (Ref. 1) was applied with the result shown in
Fig. 7. As expected, the beamforming plot indicates the source location, but also has effects of blurring and
sidelobes.

The peak level in Fig. 7, 10.99 dB, is not equal to the SPL at the microphones, 19.77 dB. Equality would only be
expected for a single source. To illustrate the technique of estimating the SPL by integrating the beamform map as
described in Ref. 1, the functions in Figs. 5 and 7 were integrated, subject to a threshold of 10 dB down from the
peak level in each case. The results were 20.14 dB for the integral of the psf and 40.24 dB for the integral of the
beamform map. Subtracting, the estimate for the integrated SPL is 20.1 dB, which is close to the expected value of
19.77 dB.
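A brief sketch of this integration procedure follows, assuming the beamform map and psf are stored as 2D arrays of levels in dB; the names are illustrative, and the commented numbers repeat the example above.

```python
import numpy as np

def integrate_db(map_db, threshold_db=10.0):
    """Sum a map in linear power units over the region within threshold_db of its peak; return dB."""
    linear = 10.0 ** (map_db / 10.0)
    mask = map_db >= map_db.max() - threshold_db
    return 10.0 * np.log10(linear[mask].sum())

# Estimated SPL = integral of beamform map minus integral of psf (both in dB),
# e.g. 40.24 dB - 20.14 dB = 20.1 dB in the example above.
# spl_estimate = integrate_db(bmap_db) - integrate_db(psf_db)
```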
Deconvolution using a Wiener filter is given in Fig. 8 for γ = 0.1, 0.001, and 0.00001. Reducing the
regularization parameter sharpens the result, but introduces significant artifacts. The integrated values of 43.5,
48.93, and 52.42 dB for the three source reconstructions have no apparent relationship to the microphone SPL. This
suggests that the Wiener filter is not suitable for this quantitative application.

Figure 8. Deconvolution of Fig. 7 with the psf in Fig. 5 using a Wiener filter with regularization parameter γ = a) 0.1, b) 0.001, and c) 0.00001. The color scale is 20 dB in each case.
Results from DAMAS2 are shown in Figure 9. The first case, Fig. 9a), does not use a low pass filter. This computation is similar to the one described by Brooks and Humphreys in Ref. 5, except for the assumed convolutional form of the psf and the FFT implementation to increase the speed. The calculation required 12 seconds on a laptop computer for 500 iterations. The reconstruction is quite satisfactory except for the proliferation of small spots in the source region. The integral is 20.32 dB, which is consistent with expectations. Figure 9b) adds regularization in the form of a low pass filter at each step in the iteration with a cutoff of 0.5 pixels. The spots are removed at the expense of a small increase in blurring. The computation time and the integral are the same as Fig. 9a).

Figure 9. Deconvolution using the modified DAMAS algorithm with no explicit regularization, a), and with a low pass filter cutoff of 0.5 pixels (0.5 inches), b). The color scale is 20 dB in both cases. The areas shown in black contain no energy and are represented by -90 dB.

In order to study the effect of the regularization in more detail, the grid was refined by a factor of four in both directions, giving a spacing of 0.25 inches. Details of the reconstruction with several settings of the low pass filter are shown in Fig. 10. A low pass filter cutoff of 2 pixels for the refined grid, as shown in Fig. 10 f), is equivalent to the cutoff of 0.5 pixels for the original grid, since the cutoff represents 0.5 inches in both cases. Figure 10 shows that the low pass filter smoothly controls the appearance of the reconstruction. The quantitative result, the integral of the deconvolved map, is not affected by the choice of the filter setting.

Figure 10. Effect of regularization low pass filter cutoff using a 0.25 inch grid. a) key to inset size, b) beamforming map before deconvolution, c) modified DAMAS with no regularization, d)-h): low pass filter cutoff = 0.5, 1, 2, 4, and 8 pixels. The color scale is 10 dB in all cases, with differing upper levels. The integrals of c)-h) are all equal.

B. Wake vortex data


A flyover phased array test to image aircraft wake vortices was conducted at the Denver airport in 2003.12 The
array of 252 electret microphones (Fig. 11) was located on the ground two miles north of the threshold of runway
16L.

Figure 11. Array pattern for wake vortex measurements at the Denver airport. (The plot spans x = -70 to 70 m and y = -20 to 20 m; an arrow indicates the direction of flight.)

After landing airplanes passed over the array at about 700 feet (213 m), the pair of trailing vortices remained in the sky and slowly convected with the wind and sank toward the ground. One use of the array data is to form an image in a vertical plane that is oriented perpendicular to the flight path and the vortices and perpendicular to the array. This would ideally show two spots at the points where the vortices penetrate the beamforming plane. The height and lateral locations of the vortices would then be evident. Unfortunately, the array's resolution in the vertical direction is poor, as shown in Fig. 12. In this plane, a compact source appears in the beamform map as an elongated shape that is aligned with a ray through the center of the array. Blurred shapes like these can be confusing to untrained users of the results.

Figure 12. Polar beamforming grid. Part a) shows a vertical beamforming grid for the Denver wake vortex array (r from 50.8 to 406.4 m, θ from -22.5° to 22.5°; the horizontal "H-plane" discussed later is also indicated). The direction of flight is perpendicular to the figure. A simulated source is located at (x, y, z) = (50, 0, 250) m as shown. Beamforming with the simulated source (125 Hz, 1/3 octave band) on the rectangular grid in part a) gives the map in part b). The peak is extended vertically due to poor vertical resolution. The elongated peak points downward toward the center of the array. Part c) is the result of beamforming on the polar grid outlined in part a). The peak is aligned with the polar grid.

Deconvolution is applied to simplify the images. As a first step, the beamforming grid is changed to polar
coordinates. This causes the psf to be aligned with the grid so that it does not rotate as the source point is moved.
This is advantageous for traditional deconvolution algorithms, including the modified version of DAMAS, because
they assume a shift-invariant relationship between the source and the blurred image. The original DAMAS would
not require polar coordinates because it uses a different psf for each source location.
Beamforming results for a representative landing are given in Fig. 13. The airplane was a 737-300 that evidently
passed over the array at somewhat higher altitude than most of the arriving traffic. The figure shows a psf that was
computed by simulating a line source along the y-direction, a), beamforming results using two seconds of array data
measured approximately 30 seconds after the airplane flyover, b), and the deconvolution output, c). The low pass
filter cutoff for regularization was set to 0.1 pixels. Part c) is simpler to interpret as a cross section through the two vortices than the direct beamforming output, b). The deconvolved image shows some vertical smearing that is due in part to the fact that the vortices sank about six meters during the two second integration time.12

Figure 13. Wake vortex beamforming example in the polar vertical plane. One-third octave band results at 125 Hz center frequency. The psf, a), was produced by simulating a line source parallel to the direction of flight (the y direction, perpendicular to the image). Beamforming results from the test data are given in b). The two shapes represent the two wakes from the 737-300 imaged in cross-section. The deconvolution result using b) as the image and a) as the psf is shown in c). The color scales are in dB.

From Fig. 13c), it can be inferred that the height of the vortices during the analysis time was about 254 m. A horizontal beamforming plane, parallel to the ground and to the array, was created at that height. Its position and lateral extent (194 m) are indicated by the "H-plane" in Fig. 12a). The psf for this plane was created by simulating a point source at a location 254 m over the center of the array. This psf is shown in Fig. 14a). The beamforming result and the deconvolved image are given in Fig. 14b) and 14c), respectively. The deconvolved image suggests that the wake vortices are very thin line sources. The slight angle of the vortices relative to the y-axis (the flight direction, vertical in the figure) is caused by a crosswind blowing from east to west (in the +x direction, to the right in the figure). The lateral spacing between the vortices is consistent with theoretical predictions.

Figure 14. Horizontal-plane beamforming results for the wake vortex data. The direction of flight is downward in the plots. PSF, a), conventional beamforming, b), and deconvolution results, c). The beamforming plane is 254 m above the array. The 1/3 octave band analysis frequency is 125 Hz. The low pass filter cutoff for regularization is 4 pixels, or 6 m. The lateral spacing between the vortices is 24 m, or 83% of the wingspan of the 737-300. Elliptical lift distribution theory predicts an initial vortex spacing of 79% of the wingspan (Ref. 17).

VI. DAMAS3
A hybrid technique that combines a Wiener filter with iteration for a non-negative solution is given in Algorithm 3. It solves $\mathrm{psf} \circ q = b$ ($\circ$ denotes convolution) by first doing a regularized division of the FFTs of $b$ and the psf by the FFT of the psf in the spectral domain. This gives the modified deconvolution problem $\mathrm{psf}_w \circ q = b_w$. A non-negative solution to the modified problem is sought by iteration, as in DAMAS and DAMAS2, where the convolution is performed in the spectral domain, like DAMAS2. As in the Wiener filter algorithm, $\tilde{b}(\vec{k})$, $\tilde{p}(\vec{k})$, and $\tilde{q}(\vec{k})$ represent the FFT of $b(\vec{x})$, $\mathrm{psf}(\vec{x})$, and $q(\vec{x})$, respectively. The new functions $\tilde{b}_w(\vec{k})$ and $\tilde{p}_w(\vec{k})$ are the FFT of $b_w$ and $\mathrm{psf}_w$, respectively.

Algorithm 3. DAMAS3 deconvolution.

1. Compute the forward FFT of $b(\vec{x})$ and $\mathrm{psf}(\vec{x})$.
2. For each frequency $\vec{k}$, compute $\tilde{b}_w(\vec{k}) = \dfrac{\tilde{p}^*(\vec{k})\, \tilde{b}(\vec{k})}{\tilde{p}^*(\vec{k})\, \tilde{p}(\vec{k}) + \gamma}$ and $\tilde{p}_w(\vec{k}) = \dfrac{\tilde{p}^*(\vec{k})\, \tilde{p}(\vec{k})}{\tilde{p}^*(\vec{k})\, \tilde{p}(\vec{k}) + \gamma}$.
3. Compute the inverse FFT of $\tilde{p}_w(\vec{k})$ to obtain $\mathrm{psf}_w$ (and of $\tilde{b}_w(\vec{k})$ to obtain $b_w$).
4. Set $a = \sum_{x,y,z} \mathrm{psf}_w$.
5. Set solution $q(\vec{x}) = 0$ for each $\vec{x}$ in the beamforming grid.
6. Iterate:
   a. $\tilde{q}(\vec{k})$ = forward FFT[$q$].
   b. Let $\tilde{r}(\vec{k}) = \tilde{p}_w(\vec{k})\, \tilde{q}(\vec{k})$ for each $\vec{k}$.
   c. $r(\vec{x})$ = inverse FFT[$\tilde{r}(\vec{k})$].
   d. $q(\vec{x}) \leftarrow q(\vec{x}) + \left[ b_w(\vec{x}) - r(\vec{x}) \right] / a$ for each $\vec{x}$.
   e. Replace each negative value of $q(\vec{x})$ by 0.

Numerical experiments indicate the filtering step in DAMAS2 is not needed in DAMAS3, since the application of $\gamma$ reduces noise amplification. The Wiener filter processing (step 2) greatly reduces the number of iterations needed for convergence, in comparison with DAMAS2. A 3D implementation of DAMAS3 has been posted in Ref. 13.
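A minimal numpy sketch of Algorithm 3 in 2D is shown below, reusing the conventions of the earlier sketches (psf stored with its peak at index (0, 0) in FFT convention); the parameter defaults are illustrative assumptions.

```python
import numpy as np

def damas3(bmap, psf, gamma=1e-3, n_iter=50):
    """Algorithm 3 sketch: Wiener pre-conditioning followed by a DAMAS2-style iteration."""
    B = np.fft.fft2(bmap)                              # step 1: spectra of the map and the psf
    P = np.fft.fft2(psf)
    denom = np.conj(P) * P + gamma                     # step 2: regularized spectral division
    Bw = np.conj(P) * B / denom
    Pw = np.conj(P) * P / denom
    psf_w = np.real(np.fft.ifft2(Pw))                  # step 3: modified psf
    b_w = np.real(np.fft.ifft2(Bw))                    # modified beamform map (used in step 6d)
    a = psf_w.sum()                                    # step 4: normalization constant
    q = np.zeros_like(bmap)                            # step 5: initial solution
    for _ in range(n_iter):                            # step 6
        r = np.real(np.fft.ifft2(Pw * np.fft.fft2(q))) # 6a-6c: convolution with psf_w via FFT
        q = np.maximum(q + (b_w - r) / a, 0.0)         # 6d-6e: update and clip
    return q
```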

VII. Beamforming coordinates to make the convolutional form more accurate


A special set of coordinates can be used to improve the accuracy of the convolutional model of the psf in the case of a planar array. Suppose the array is in the x-y plane. The beamforming coordinates are $(u, v, w)$, defined by the transformation

$$ u = \frac{x}{r}, \quad v = \frac{y}{r}, \quad w = \frac{z_{\min}\, z^2}{r^3}, \quad r \equiv \sqrt{x^2 + y^2 + z^2} \qquad (12) $$

where $z_{\min}$ is the smallest value of $z$ in the beamforming grid. These coordinates lie in the ranges $-1 \le u \le 1$, $-1 \le v \le 1$, $0 < w \le 1$. The inverse transformation is

$$ x = ru, \quad y = rv, \quad z = r\sqrt{1 - u^2 - v^2}, \quad r = \frac{z_{\min}}{w}\left(1 - u^2 - v^2\right) \qquad (13) $$
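A small sketch of this change of variables follows, as a direct transcription of Eqs. (12) and (13) as reconstructed above; the function names are illustrative.

```python
import numpy as np

def xyz_to_uvw(x, y, z, z_min):
    """Forward transformation, Eq. (12)."""
    r = np.sqrt(x**2 + y**2 + z**2)
    return x / r, y / r, z_min * z**2 / r**3

def uvw_to_xyz(u, v, w, z_min):
    """Inverse transformation, Eq. (13)."""
    r = (z_min / w) * (1.0 - u**2 - v**2)
    return r * u, r * v, r * np.sqrt(1.0 - u**2 - v**2)
```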

To see the benefit of these coordinates, suppose that one microphone is at the origin, and consider a second microphone at $\vec{\mu} = (x_\mu, y_\mu, z_\mu) = (a, b, 0)$. Let the psf for a point source at $\vec{x}'$ be evaluated at $\vec{x}$: $\mathrm{psf}(\vec{x}, \vec{x}')$. Assuming free space propagation, the difference in travel times between the microphone at $\vec{\mu}$ and the one at the origin is $\left( \left| \vec{x}' - \vec{\mu} \right| - \left| \vec{x}' \right| \right)/c$. In beamforming to the incorrect source location, $\vec{x}$, the beamforming algorithm will assume a travel time difference of $\left( \left| \vec{x} - \vec{\mu} \right| - \left| \vec{x} \right| \right)/c$. The phase that enters into the computation of $\mathrm{psf}(\vec{x}, \vec{x}')$ is proportional to the difference between these travel time differences:

$$ c\tau = \left( \left| \vec{x} - \vec{\mu} \right| - \left| \vec{x} \right| \right) - \left( \left| \vec{x}' - \vec{\mu} \right| - \left| \vec{x}' \right| \right). \qquad (14) $$

The convolutional form of the psf would hold if this expression, and the similar ones for the other microphone pairs, were functions of $\vec{x} - \vec{x}'$. As discussed previously, this is only a good approximation in limited circumstances. Expressing Eq. (14) in terms of the new coordinates, it can be shown that

$$ c\tau = -a(u - u') - b(v - v') + \frac{a^2 + b^2}{2\, z_{\min}}\,(w - w') + \Delta \qquad (15) $$

where

$$ \Delta = \frac{1}{2} b^2 \left( \frac{u^2}{r^2} - \frac{u'^2}{r'^2} \right) + \frac{1}{2} a^2 \left( \frac{v^2}{r^2} - \frac{v'^2}{r'^2} \right) + O\!\left( \left(\frac{a}{r}\right)^3 + \left(\frac{b}{r}\right)^3 + \left(\frac{a}{r'}\right)^3 + \left(\frac{b}{r'}\right)^3 \right) . $$

This has the required functional form to second order in the microphone spacing relative to the source distance, and the second order terms are likely to be negligible in many cases.

Figures 15 and 16 give the psf of the array in Fig. 4 at 6 kHz using Euclidean coordinates and the new coordinates. It can be seen that the psf is much better described as translationally invariant in the new coordinates. For example, the diameter of the first ring of sidelobes changes with height in the Euclidean coordinates, but not in the new coordinates. Also, the central peak and the sidelobe ring become elliptical for an off-axis source point in the old coordinates, but not the new ones.
Figure 15. Point spread functions for the array in Fig. 4 at 6000 Hz. The grids are 400×400 inches. Horizontal planes, a)-c), and vertical planes, d)-f), are shown. In the vertical planes, z runs from 4 to 404 inches. The source location in inches is (x, y, z) = (0, 0, 100) in a) and d), (0, 0, 200) in b) and e), and (60, 0, 60) in c) and f).

Figure 16. Point spread functions for the array in Fig. 4 at 6000 Hz, plotted using the (u, v, w) grid. In a)-c), u and v run from -1 to 1. In d)-f), u goes from -1 to 1 and w is shown from 0 to 0.5. As in Fig. 15, the source location in inches is (x, y, z) = (0, 0, 100) in a) and d), (0, 0, 200) in b) and e), and (60, 0, 60) in c) and f).

VIII. Conclusion
Deconvolution using DAMAS and its variants can significantly improve the usefulness of phased array results.
DAMAS2 provides a needed speed improvement relative to DAMAS by using FFT processing to reduce the time
required for each iteration. DAMAS3 provides a further speed improvement by reducing the required number of
iterations. Both DAMAS2 and DAMAS3 are restricted to cases where the convolutional form of the psf is
appropriate. A coordinate transformation was given that expands the range of 3D cases in which the convolutional
model applies.

IX. Acknowledgments
This work was supported by the Ohio Aerospace Institute as part of the Aeroacoustics Research Consortium.
Thanks to Leon Brusniak for proofreading the manuscript.

X. References
1. Dougherty, R. P., “Beamforming in Acoustic Testing,” Aeroacoustic Measurements, edited by T. J. Mueller, Springer, Berlin, 2002, pp. 93-96.
2. Briggs, D. S., High Fidelity Deconvolution of Moderately Resolved Radio Sources, Ph.D. Thesis, New Mexico Inst. of Mining & Technology, 1995.
3. McNally, J. G., Karpova, T., Cooper, J., and Conchello, J. A., “Three-dimensional imaging by deconvolution microscopy,” Methods, Vol. 19, No. 3, Nov. 1999, pp. 373-385.
4. Dougherty, R. P., and Stoker, R. W., “Sidelobe suppression for phased array aeroacoustic measurements,” AIAA Paper 1998-2242, June 1998.
5. Brooks, T. F., and Humphreys, W. M., Jr., “A deconvolution approach for the mapping of acoustic sources (DAMAS) determined from phased microphone arrays,” AIAA Paper 2004-2954, May 2004.
6. Lawson, C. L., and Hanson, R. J., Solving Least Squares Problems, Prentice-Hall, Englewood Cliffs, NJ, 1974.
7. Lagendijk, R. L., and Biemond, J., Iterative Restoration of Images, Academic Press, Boston, 1991.
8. Gonzalez, R. C., and Woods, R. E., Digital Image Processing, 2nd ed., Addison-Wesley, Reading, MA, 1992.
9. Linnenbrügger, N., “FFTJ and DeconvolutionJ,” ImageJ plugin, URL: http://rsb.info.nih.gov/ij/plugins/fftj.html [cited 7 May 2004].
10. Dougherty, R. P., “Iterative Deconvolution,” ImageJ plugin, URL: http://www.optinav.com/ImageJplugins/Iterative-Deconvolution.htm [cited May 2005].
11. Dougherty, R. P., “Advanced time-domain beamforming techniques,” AIAA Paper 2004-2955, May 2004.
12. Dougherty, R. P., Wang, F. W., Booth, E. R., Watts, M. E., Fenichel, N., and D’Errico, R. E., “Aircraft wake vortex measurements at Denver International Airport,” AIAA Paper 2004-2880, May 2004.
13. Dougherty, R. P., “Iterative Deconvolve 3D,” ImageJ plugin, URL: http://www.optinav.com/ImageJplugins/Iterative-Deconvolve-3D.htm [cited May 2005].
