
Mathematical Programming 39 (1987) 131-155

North-Holland

OPTIMAL DESIGN OF EFFICIENT ACOUSTIC ANTENNA ARRAYS

L.S. LASDON, John PLUMMER


Department of Management Science and Information Systems, College of Business Administration,
University of Texas, Austin, TX 78712, USA

B. BUEHLER
Naval Underwater Systems Center, Code 3292, New London, CT 06320, USA

A.D. WAREN
Department of Computer and Information Science, Cleveland State University, Cleveland, OH 44115,
USA

Received 12 December 1986


Revised manuscript received 17 March 1987

Minimax optimal design of sonar transducer arrays can be formulated as a nonlinear program
with many convex quadratic constraints and a nonconvex quadratic efficiency constraint. The
variables of this problem are a scaling and phase shift applied to the output of each sensor.
This problem is solved by applying Lagrangian relaxation to the convex quadratic constraints.
Extensive computational experience shows that this approach can efficiently find near-optimal
solutions of problems with up to 391 variables and 579 constraints.

Key words: Acoustic arrays, optimal design, Lagrangian relaxation, nonlinear programming,
computer implementation.

1. Introduction

A sonar transducer or sensor is a device which converts the energy of a sound


wave traveling in water (an acoustic signal) into an electrical signal. The design of
arrays of sonar transducers is an important problem, having both military and
nonmilitary applications. Oil companies use towed arrays to aid in undersea explor-
ation. Submarines use sonar arrays to detect other ships, while larger, stationary
arrays play an important role in monitoring surface and undersea activity. The
importance of sonar technology to US submarine and antisubmarine efforts is
outlined in [5]. Some technical background in sonar arrays may be found in [2].
This paper describes research on the optimal design of acoustic antenna arrays.
Much larger arrays are studied here than have been dealt with previously--up to
500 sensors for transmit arrays using large, high-powered elements, and an order

This work was supported by O N R Contracts N00014-83-C-0437 and N00014-82-C-0824.



of magnitude more for arrays of receiving transducers (hydrophones). Arrays considered in other optimization studies are much smaller--84 sensors in [3] and 12
in [10]. The sensor positions in the current study are assumed fixed, whereas in [3]
and [10] these are decision variables. This greatly simplifies the array response
function, rendering it quadratic in the remaining design variables, and permits the
solution of much larger problems. These problems, however, are complicated by
the presence of a nonconvex quadratic efficiency constraint, not present in previous
optimization applications. Such constraints are easily incorporated using the
approach described here but are difficult to include if the method in [9] is used.
There, the minimax design problem is approximated by a linear program. This LP
is not sparse, so this method is not suitable for large problems.
In Section 2, we formulate array design problems as nonlinearly constrained
minimax optimization problems. Section 3 applies Lagrangian duality theory to this
problem, transforming it into a sequence of problems with a convex quadratic
objective, 2 linear constraints, and one nonconvex quadratic constraint. These
problems are parameterized by a vector of "weights" or Lagrange multipliers.
Strategies for solving these problems and for adjusting the weights are described in
Section 4. Section 5 contains an approach for extending these methods to very large
arrays by decomposing them into subarrays.
These algorithms form the core of a computer-aided design system. System
capabilities, the user interface, program organization, and data structures are dis-
cussed in Section 6. Changes in data structures have led to dramatic reductions in
paging activity and run time for large problems. Section 7 contains computational
results on problems with up to 391 sensors, showing convergence to within a few
decibels of minimax optimality. However, run times sometimes exceed several hours
on a VAX 11/780. Algorithmic improvements, plus the use of faster computers with
more memory, promise to reduce these times by an order of magnitude. Conclusions
and suggestions for further research are found in Section 8.

2. Problem formulation

The beam pattern

Consider the sensor shown in Fig. 1, whose position is specified by the vector
r = (x, y, z). Acoustic plane waves of wavelength λ, incident in a direction specified
by the unit vector u = (α, β, γ), impinge upon this sensor. The quantities α, β, γ are
direction cosines, specified by

cos α = sin φ cos θ,

cos β = sin φ sin θ,

cos γ = cos φ,


Fig. 1. Single transducer (plane wave arriving from propagation direction u at frequency f).

where θ and φ are the spherical coordinate angles. It is also convenient to define
the "wave number"

k = (2π/λ)u.
Let the incoming signal have unit amplitude and zero phase at the origin. The
sensor's output, os, can be represented by the complex number

os = B(k) exp(i k^T r)

where B is a (possibly complex) response function, characteristic of the sensor,
giving its response to waves of different wavelengths incident from different
directions. The phase term k^T r arises because the plane wave reaches the sensor τ seconds
before it reaches the origin, where

τ = d/v = d/(fλ).

The variable d is the additional distance traveled, shown in Fig. 1, and f is the
wave's frequency. From the figure it is evident that

d = r^T u

so the added phase (in radians) is

2πfτ = (2π/λ) r^T u = r^T k.
Associated with each sensor are two adjustable parameters, a (complex) shading
coefficient, w, and a steering phase r^T k_s. The shading w represents the effect of
applying an amplitude scaling and a phase shift to the sensor's output, prior to
summing over all outputs to obtain the array output. This is illustrated in Fig. 2.
The steering phase arises from a time delay, used to ensure that signals from all
sensors are in phase when the incoming wave arrives from a specified steering
direction. After these operations are applied to its output, os, the sensor response is

rs = w · B(k) · exp(i r^T (k - k_s)).     (2.1)

Fig. 2. Time delay beamformer.

Let j be the sensor index. The farfield array response, also called the beam pattern,
is obtained by summing (2.1) over all sensors:

a(k, k_s, w) = Σ_{j=1}^n w_j B_j(k) exp(i r_j^T (k - k_s)).     (2.2)

In the above, we redefine w to be the vector of all shadings (w_1, ..., w_n). These are
the design variables in the remainder of this paper. The sensor positions r_j are
assumed fixed.
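As a quick illustration of (2.2), here is a minimal NumPy sketch. The function and array names are ours, not the paper's, and omnidirectional responses B_j = 1 are assumed when no B vector is supplied:

```python
import numpy as np

def beam_pattern(k, ks, w, r, B=None):
    """Farfield array response (2.2): sum_j w_j B_j(k) exp(i r_j^T (k - ks)).

    k, ks : wave-number vectors, shape (3,)
    w     : complex shadings, shape (n,)
    r     : fixed sensor positions, shape (n, 3)
    B     : per-sensor responses at k, shape (n,); omnidirectional if None
    """
    n = len(w)
    B = np.ones(n) if B is None else np.asarray(B)
    phase = r @ (k - ks)                 # r_j^T (k - ks) for every sensor j
    return np.sum(w * B * np.exp(1j * phase))

# Example: 5-element line array on the x-axis, half-wavelength spacing.
# At k = ks every phase term is 1, so a reduces to sum(w).
lam = 1.0
r = np.stack([np.arange(5) * lam / 2, np.zeros(5), np.zeros(5)], axis=1)
ks = (2 * np.pi / lam) * np.array([0.0, 0.0, 1.0])   # broadside steering
w = np.full(5, 1 / 5 + 0j)                           # uniform shading
a0 = beam_pattern(ks, ks, w, r)
print(abs(a0))   # ≈ 1 by construction, matching the MRA normalization (2.5)
```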

Minimax formulation

If k_s and w are fixed, and a(k, k_s, w) is evaluated in some plane of interest, then
10 log_10 |a|^2, the magnitude of the pattern in decibels, may be plotted as a function
of an angle within the plane. Such a plot is shown in Fig. 3, where the steering
angle is 90°. The shadings w have been adjusted so that |a| = 1 at the steering angle,
90°. The region about the steering angle, out to the first null, is called the main lobe,
while the remainder of the pattern on either side of the main lobe is called the
sidelobe region. A frequent objective of array design is to achieve sidelobe levels
which are as low as possible, given a maximum acceptable width for the main lobe.
Of course, in the general case, we deal with three-dimensional array responses, so
the main and sidelobe regions must be specified in 3-space.
This statement lends itself naturally to a minimax formulation. First we choose
a finite set of sidelobe points at which the pattern is to be evaluated. The mth such
point is specified by direction cosines u_m = (α_m, β_m, γ_m), m = 1, ..., M. It is
convenient to let the steering direction be the zero'th point, i.e.

u_s = (α_s, β_s, γ_s) ≡ u_0.


Let

k_m = (2π/λ) u_m

and define

a_m(w) = a(k_m, k_s, w).     (2.3)


Fig. 3. Typical beam pattern.

We drop the steering direction k_s as an argument of a_m because it is fixed for the
purposes of optimization (although parametric studies may be done on it later).
Then the problem of obtaining sidelobe levels which are as low as possible in a
minimax sense is

minimize_w  max_{1≤m≤M} |a_m(w)|^2     (2.4)

subject to the normalization or MRA (maximum response angle) constraint

a_0(w) = 1.     (2.5)

It is convenient to introduce an additional variable, z, and work with the equivalent
problem

minimize z     (2.6)

subject to

|a_m(w)|^2 ≤ z,  m = 1, ..., M,     (2.7)

and

a_0(w) = 1.     (2.8)

Effective number of elements

Unfortunately, solving (2.6)-(2.8) usually leads to beam patterns which are both
extremely sensitive to random errors and have low signal to noise gain for spatially
uncorrelated noise. Random errors can occur either in the amplitude or phase
response of the sensors, or may be due to deviation of the wave front from a plane
wave [1]. As the sensitivity and signal to noise properties of an array improve, its
ability to achieve uniformly low sidelobes decreases. In [1], Cox and Thurston show
that, by imposing the constraint

N_e = |Σ_{j=1}^n w_j|^2 / Σ_{j=1}^n |w_j|^2 ≥ N̲_e     (2.9)

and choosing the level N̲_e appropriately, these problems can be resolved. The
constraint is derived assuming omnidirectional elements, but is an acceptable
approximation for the problem considered here. The quantity N_e is called the
effective number of elements, and (2.9) is called the efficiency constraint. It is shown
in [1] that

N_e ≤ n

and typical values for N̲_e may range from 0.4n to 0.8n.
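A hypothetical numerical illustration of (2.9), in our own code and assuming omnidirectional elements as in [1]: uniform shading attains the maximum N_e = n, while an amplitude taper (typical of low-sidelobe designs) reduces it.

```python
import numpy as np

def effective_elements(w):
    """Effective number of elements (2.9): N_e = |sum_j w_j|^2 / sum_j |w_j|^2."""
    w = np.asarray(w, dtype=complex)
    return abs(w.sum())**2 / np.sum(np.abs(w)**2)

w_uniform = np.ones(8)                                       # equal shadings
w_tapered = np.array([0.2, 0.5, 0.9, 1.0, 1.0, 0.9, 0.5, 0.2])  # illustrative taper

print(effective_elements(w_uniform))   # → 8.0, the maximum N_e = n
print(effective_elements(w_tapered))   # ≈ 6.44, below n as (2.9) predicts
```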

Formulation using real variables


Thus far, the discussion has dealt with complex quantities. To express the problem
(2.6)-(2.9) in terms of real-valued functions and variables, let

w_j = rw_j + i Iw_j,     (2.10)

B_j(k_m) exp(i r_j^T (k_m - k_s)) = rs_jm + i Is_jm.     (2.11)

Then, by (2.2) and (2.3),

a_m(w) = Σ_{j=1}^n (rw_j + i Iw_j)(rs_jm + i Is_jm)

       = Σ_{j=1}^n (rw_j rs_jm - Iw_j Is_jm) + i Σ_{j=1}^n (rw_j Is_jm + Iw_j rs_jm).     (2.12)

The above may be expressed more compactly by defining

x = (rw_1, ..., rw_n, Iw_1, ..., Iw_n),     (2.13)

RS_m = (rs_1m, ..., rs_nm),     (2.14)

IS_m = (Is_1m, ..., Is_nm),     (2.15)

a_m = (RS_m, -IS_m),     (2.16)

b_m = (IS_m, RS_m).     (2.17)

Then, by (2.12),

a_m(w) = a_m^T x + i b_m^T x     (2.18)

and

|a_m(w)|^2 = (a_m^T x)^2 + (b_m^T x)^2.     (2.19)

Hence, the pattern magnitude squared at each farfield point (α_m, β_m, γ_m) is a convex
quadratic function of the design variables x. This is an important structural feature
of this problem, and is exploited by the algorithms used.
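The change of variables (2.13)-(2.17) can be checked numerically. The sketch below, with our own names and random data, verifies that (2.18)-(2.19) reproduce the complex pattern value and its squared magnitude for a single farfield point:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Complex sensor data for one farfield point m: s_jm = rs_jm + i*Is_jm  (2.11)
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # complex shadings

# Real design vector (2.13) and the vectors a_m, b_m of (2.16)-(2.17)
x = np.concatenate([w.real, w.imag])
a_m = np.concatenate([s.real, -s.imag])
b_m = np.concatenate([s.imag, s.real])

# (2.18): a_m(w) = a_m^T x + i b_m^T x equals the complex sum over sensors
direct = np.sum(w * s)
assert np.isclose(a_m @ x, direct.real) and np.isclose(b_m @ x, direct.imag)

# (2.19): the squared pattern magnitude is a convex quadratic in x
assert np.isclose((a_m @ x)**2 + (b_m @ x)**2, abs(direct)**2)
```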
The number of effective elements, N_e in (2.9), can be expressed in terms of x by

N_e = [(Σ_{k=1}^n x_k)^2 + (Σ_{k=1}^n x_{n+k})^2] / Σ_{k=1}^n (x_k^2 + x_{n+k}^2).     (2.20)

Then our final formulation of problem (2.6)-(2.9) is

Problem SD (Sonar Design)

minimize z     (2.21)

subject to the sidelobe constraints

(a_m^T x)^2 + (b_m^T x)^2 ≤ z,  m = 1, ..., M,     (2.22)

and the MRA and efficiency constraints

a_0^T x = 1,     (2.23)

b_0^T x = 0,     (2.24)

(Σ_{k=1}^n x_k)^2 + (Σ_{k=1}^n x_{n+k})^2 - N̲_e Σ_{k=1}^n (x_k^2 + x_{n+k}^2) ≥ 0.     (2.25)

This problem has 2n variables x and M + 3 constraints. It would be a convex
program if the nonconvex efficiency constraint (2.25) were absent.

3. Solution by Lagrangian relaxation

An important feature of SD is the multiplicity of sidelobe constraints (2.22) (from
several dozen to several hundred) and the fact that these constraints are quadratic.
This structure can be exploited by dualizing with respect to (2.22). Let F be the set
of vectors x which satisfy the MRA and efficiency constraints (2.23)-(2.25), and let
λ_i be the Lagrange multiplier associated with the ith sidelobe constraint. It is also
convenient to define

g_m(x) = (a_m^T x)^2 + (b_m^T x)^2.

The Lagrangian function is

L(x, z, λ) = z + Σ_{m=1}^M λ_m (g_m(x) - z).

The dual objective is

h(λ) = inf_{x,z} {L(x, z, λ) | x ∈ F}

and the dual feasible region is the set of nonnegative λ for which this infimum is
finite. Since z is unconstrained, the infimum is finite if and only if the coefficient
of z vanishes. Hence, the Lagrangian dual problem (LD) is

Problem LD

maximize h(λ)     (3.1)

subject to

λ ≥ 0,  Σ_{m=1}^M λ_m = 1,     (3.2)

where

h(λ) = min_x {Σ_{m=1}^M λ_m g_m(x) | x ∈ F}.     (3.3)

Since the objective of the above minimization problem is the weighted L1 norm of
the vector of squared pattern magnitudes, we call the problem L1(λ) and rewrite
it here:

Problem L1(λ)

minimize Σ_{m=1}^M λ_m g_m(x) = f(x, λ)     (3.4)

subject to

a_0^T x = 1,     (3.5)

b_0^T x = 0,     (3.6)

(Σ_{k=1}^n x_k)^2 + (Σ_{k=1}^n x_{n+k})^2 - N̲_e Σ_{k=1}^n (x_k^2 + x_{n+k}^2) ≥ 0.     (3.7)

The algorithm used to solve SD is Lagrangian relaxation. A sequence of problems
L1(λ) is solved, with the weights λ adjusted by a steepest ascent step on the dual
objective h. This is attractive because L1(λ) is much easier to solve than the original
problem SD. It has a convex quadratic objective and three constraints. Two are
simple linear equalities, while the third is the (nonconvex) quadratic efficiency
constraint. Of course, this problem has 2n variables (recall that n is the number of
sensors), and n may be in the hundreds or even thousands. However, a generalized
reduced gradient algorithm, using a conjugate gradient method to vary the nonbasic
variables, can solve L1(λ) efficiently. We discuss this in more detail in the next
section.
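Assuming the sidelobe data are stored as matrices A and B whose rows are a_m^T and b_m^T (names ours, not the paper's), the subproblem objective (3.4) and its gradient, which Section 4 gives as (4.1), can be sketched and finite-difference checked in a few lines:

```python
import numpy as np

def l1_objective_and_grad(x, lam, A, B):
    """Weighted L1 objective (3.4) and its gradient (4.1).

    A, B : arrays of shape (M, 2n) whose rows are a_m^T and b_m^T
    lam  : nonnegative weights lambda_m
    """
    ax, bx = A @ x, B @ x                 # the scalar products a_m^T x, b_m^T x
    g = ax**2 + bx**2                     # g_m(x), the squared pattern magnitudes
    f = lam @ g
    grad = 2 * ((lam * ax) @ A + (lam * bx) @ B)
    return f, g, grad

# Finite-difference check of (4.1) on random data
rng = np.random.default_rng(1)
M, n2 = 12, 8
A = rng.standard_normal((M, n2))
B = rng.standard_normal((M, n2))
x = rng.standard_normal(n2)
lam = np.full(M, 1 / M)
f, g, grad = l1_objective_and_grad(x, lam, A, B)
h = 1e-6
num = np.empty(n2)
for i in range(n2):
    e = np.zeros(n2); e[i] = h
    num[i] = (l1_objective_and_grad(x + e, lam, A, B)[0] - f) / h
assert np.allclose(num, grad, atol=1e-4)
```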
The following are well-known properties of the Lagrangian dual:
1. The dual objective h(λ) is concave, so the dual problem LD has no local
maxima which are not global.
2. If the dual subproblem L1(λ) has a unique solution x(λ), then h is differentiable
at λ and its gradient is

∇h(λ) = g(x(λ))

where g is the vector whose mth component is g_m(x) (g is a subgradient of h for
any optimal solution x(λ)). Hence, ∇h is available "for free," since g must be
evaluated when solving L1(λ). We have no proof that L1(λ) must have a unique
solution, but computational experience and the nature of the problem suggest that
this is almost always true. In most instances, the set of all a_m and b_m vectors will
span E^2n, so the objective of L1(λ) will be strictly convex. Then, if the efficiency
constraint (3.7) is ignored, L1(λ) has a unique solution. For these reasons, we treat
LD by methods appropriate for differentiable problems.
3. The dual objective h(λ) is a lower bound on the optimal value of the original
problem, SD, for any λ satisfying the dual constraints (3.2). Unfortunately, due to
the nonconvex efficiency constraint (3.7), this bound cannot be guaranteed to be
tight. However, it is quite satisfactory to solve practical problems to within a few
decibels of optimality. The success of Lagrangian relaxation in other contexts implies
that we can expect to obtain solutions of such accuracy in relatively few dual ascent
steps. This has been true for the vast majority of problems solved thus far.
4. If weights λ^0 exist such that primal and dual optima are equal, they satisfy
the complementary slackness condition

λ_m^0 (g_m(x^0) - z̄) = 0

where

z̄ = max_{1≤m≤M} g_m(x^0)

is the peak sidelobe level and x^0 is an optimal solution to SD. This implies that
λ_m^0 = 0 for most sidelobe points, since most are below the peak level. This fact is
used in the dual ascent algorithm, where we attempt to set several λ_m's to zero
during each linesearch.

4. Details of the algorithm

Our Lagrangian relaxation algorithm for SD has the following major steps:
0. Set λ = (1/M, ..., 1/M), iter = 0, and tol and itmax to user provided values
(default: tol = 2, itmax = 4).
1. Solve L1(λ), obtaining an optimal solution x(λ). During this iterative process,
each time a new feasible point x is obtained compute the peak sidelobe level in
decibels as

peak = 10 log_10 max_{1≤m≤M} g_m(x).


Let ubd be the smallest value of peak obtained thus far. ubd is an upper bound on
the solution of SD.
2. Compute

lbd = 10 log_10 h(λ).

lbd is a lower bound on the solution of SD.
3. If

ubd - lbd < tol

stop. The shadings x(λ) are within tol decibels of optimality.
4. If iter > itmax, stop.
5. For j = 1, ..., p, choose search directions d^j and step sizes α^j, which determine
a sequence of multiplier vectors λ^j = λ^{j-1} + α^j d^j, where λ^0 = λ. Details of this
procedure are described below. Replace λ by λ^p.
6. iter ← iter + 1 and go to step 1.
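To make the loop above concrete, here is a self-contained toy sketch, entirely our own construction and not the paper's code. The efficiency constraint (3.7) is dropped so that L1(λ) reduces to an equality-constrained quadratic program solvable from its KKT system, and a fixed small ascent step stands in for the linesearch of step 5. With those simplifications, weak duality still guarantees lbd ≤ ubd, and the gap cannot widen as iterations proceed:

```python
import numpy as np

rng = np.random.default_rng(2)
M, n2 = 20, 6
A = rng.standard_normal((M, n2))   # rows a_m^T (synthetic sidelobe data)
B = rng.standard_normal((M, n2))   # rows b_m^T
a0 = rng.standard_normal(n2)       # MRA constraint vectors (2.23)-(2.24)
b0 = rng.standard_normal(n2)

def solve_l1(lam):
    """L1(lambda) without (3.7): min x^T Q x s.t. a0^T x = 1, b0^T x = 0,
    with Q = sum_m lam_m (a_m a_m^T + b_m b_m^T), via the KKT linear system."""
    Q = (A * lam[:, None]).T @ A + (B * lam[:, None]).T @ B
    C = np.stack([a0, b0])
    K = np.block([[2 * Q, C.T], [C, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n2), [1.0, 0.0]])
    x = np.linalg.solve(K, rhs)[:n2]
    g = (A @ x)**2 + (B @ x)**2        # pattern values g_m(x(lambda))
    return x, g

lam = np.full(M, 1 / M)                # step 0: equal weights
ubd, lbd = np.inf, -np.inf
gaps = []
for it in range(15):                   # steps 1-6
    x, g = solve_l1(lam)
    ubd = min(ubd, 10 * np.log10(g.max()))   # best upper bound so far
    lbd = max(lbd, 10 * np.log10(lam @ g))   # best lower bound h(lambda) so far
    gaps.append(ubd - lbd)
    d = g - g.mean()                         # ascent direction (4.2), S = all weights
    lam = np.clip(lam + 0.5 / (M * np.abs(d).max()) * d, 1e-7, None)
    lam /= lam.sum()                         # keep sum(lam) = 1, per (3.2)
print(f"duality gap: {gaps[0]:.2f} db -> {gaps[-1]:.2f} db")
```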
In step 1, the weighted L1 subproblem is solved using the generalized reduced
gradient code GRG2 [4]. GRG2 can deal with problems with hundreds or thousands
of variables using one of its conjugate gradient options. The memoryless BFGS
algorithm [8] appears to be the most effective of these, and has been used exclusively.
GRG2 also has the advantage of providing a feasible point quickly (phase 1 never
requires more than a few linesearches in problems solved thus far), and maintains
feasibility thereafter. This permits computation of peak at each feasible point
encountered during phase 2.
The gradient of the objective of L1(λ) is

∇f = Σ_{m=1}^M λ_m ∇g_m(x) = 2 Σ_{m=1}^M λ_m ((a_m^T x) a_m + (b_m^T x) b_m).     (4.1)

This expression, along with expressions for the constraint gradients, are coded for
GRG2 in a subroutine PARSH. The objective and constraints are evaluated in a
separate subroutine called GCOMP. Since the terms (a_m^T x) and (b_m^T x) are common
to the objective and its gradient, they are computed only once, in GCOMP, and
are transferred to PARSH in a COMMON block. This makes the objective gradient
much faster to compute than the objective itself.
Since the efficiency constraint (2.25) is nonlinear and is almost always active,
some Newton iterations are needed by GRG2 to restore feasibility each time the
nonbasic variables are changed. Since there are at most three basic variables, and
two of the three active constraints are linear, the average number of Newton iterations
is small--less than one in all cases observed thus far. Further, the Newton algorithm
does not require the value of the objective of L1(λ). Hence, in GCOMP, the objective
computation is skipped in phase 1, or if the nonlinear constraint is not satisfied to
within the required tolerance. The elimination of such superfluous objective evaluations
has yielded time savings of from 17 to 36 percent on test problems ranging
from 15 to 31 sensors, using complex shadings.

In step 5 (the dual ascent step), the conditions λ_i ≥ 0 have been replaced by

λ_i ≥ λ_min

where λ_min = 10^{-7} currently. Early tests indicated that, if weights were set to zero,
pattern values corresponding to such weights sometimes took on peak or near peak
values in subsequent subproblems. At the start of step 5, we evaluate the sets S and
N of superbasic and nonbasic weights respectively (see [7]) as

S = {i | λ_i > λ_min},

N = {i | λ_i = λ_min}.

The objective gradient, ∇h, is projected into the intersection of the active constraints,
i.e. into the set

{d | Σ_{m=1}^M d_m = 0, d_i = 0 for i ∈ N}.

It is easy to show that this projection, d, has components

d_i = g_i(x(λ)) - ḡ_S,  i ∈ S,
d_i = 0,  i ∈ N,     (4.2)

where ḡ_S is the average pattern value over the set S:

ḡ_S = (1/|S|) Σ_{i∈S} g_i(x(λ)).     (4.3)

Lagrange multipliers for the active inequality constraints are then computed. These
are given by

μ_i = g_i(x(λ)) - ḡ_S,  i ∈ N.     (4.4)

The largest such multiplier, μ_k, is computed. If μ_k > 0, h(λ) can be increased by
adding λ_k to the superbasic set:

S ← S ∪ {k},

N ← N - {k}.

The direction in (4.2) corresponding to these new sets is recomputed (which requires
only recalculation of ḡ_S in (4.3)), and the process is repeated. When

μ_i ≤ 0,  i ∈ N,

a search direction to initiate the linesearch has been determined.
This process has intuitive appeal. Weights for "high" pattern points should be
increased, while those for "low" points should decrease. Taking a step of the form
λ ← λ + αd, α > 0, causes weights λ_i whose pattern value g_i(x(λ)) is above (below)
the average value ḡ_S to increase (decrease). Weights at lower bound with g_i(x(λ)) > ḡ_S
are also allowed to increase. Each time such a weight is freed from its bound, the
average ḡ_S increases. The process ceases when the current average ḡ_S exceeds the
g_i values for i ∈ N, or when N becomes empty.
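The direction-building loop of (4.2)-(4.4) can be sketched directly. In this illustration (our code and our variable names), g holds the pattern values g_m(x(λ)) and at_bound marks the weights currently at λ_min; note that index 4 is at its bound but has a high pattern value, so it is freed and receives a positive direction component:

```python
import numpy as np

def ascent_direction(g, at_bound):
    """Projected dual ascent direction per (4.2)-(4.4): free nonbasic weights
    whose multiplier mu_i = g_i - gbar_S is positive, one at a time (largest
    first), recomputing the superbasic average gbar_S each time."""
    S = ~at_bound                            # superbasic set, as a boolean mask
    while True:
        gbar = g[S].mean()                   # gbar_S of (4.3)
        mu = np.where(S, -np.inf, g - gbar)  # multipliers (4.4), only for i in N
        k = int(np.argmax(mu))
        if mu[k] <= 0:
            break
        S[k] = True                          # S <- S + {k}, N <- N - {k}
    d = np.where(S, g - g[S].mean(), 0.0)    # projection (4.2)
    return d, S

g = np.array([5.0, 1.0, 3.0, 9.0, 7.0])
at_bound = np.array([False, True, False, False, True])
d, S = ascent_direction(g, at_bound)
print(d)   # → [-1.  0. -3.  3.  1.]: high points rise, low points fall, sum over S is 0
```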
In the dual linesearch of step 5, it is too expensive to compute the objective
h(λ + αd) for any α > 0, since this would require solving the subproblem L1(λ + αd).
Hence, linesearches commonly used in unconstrained minimization cannot be used.
Since an optimal weight vector has many components equal to zero, our linesearch
strategy is to take a step which sets several components of λ to the lower bound
λ_min. The largest step which maintains λ_i + αd_i ≥ λ_min for i ∈ S is computed, and
this step is taken. If λ_i is a weight which has been set to λ_min, λ_i is deleted from
S and added to N. The search direction d in (4.2) is recomputed, and the process
is repeated. When k = 0.1M weights have been set to λ_min, the linesearch terminates.
As shown in the computational results of Section 7, this procedure has been
successful. It leads to large increases in the dual objective in earlier iterations,
although the rate of improvement diminishes as iterations progress. Early tests
showed that allowing more than 10% of the weights to achieve lower limits in one
search sometimes caused h to decrease, so k = 0.1M was adopted as an effective
compromise.

5. Optimal combination of subarrays

One way to reduce the size of these optimization problems is to decompose the
sonar array into subarrays. Let subarray elements be indexed consecutively, with i_p
and f_p the indices of the initial and final elements in subarray p. A decomposition
occurs if the shading coefficient for sensor j, w_j, is factored as

w_j = w_p^a w_jp,  i_p ≤ j ≤ f_p,     (5.1)

where w_jp is known and w_p^a is the shading for the subarray p, to be determined. The
w_jp's can be found by applying any design procedure to subarray p.
If there are P subarrays, substituting (5.1) into the pattern expression (2.2) yields

a(k, k_s, w^a) = Σ_{p=1}^P w_p^a Σ_{j=i_p}^{f_p} w_jp B_j(k) exp(i r_j^T (k - k_s)).     (5.2)

Defining the subarray response as

r_p(k, k_s) = Σ_{j=i_p}^{f_p} w_jp B_j(k) exp(i r_j^T (k - k_s)),     (5.3)

(5.2) becomes

a(k, k_s, w^a) = Σ_{p=1}^P w_p^a r_p(k, k_s).     (5.4)

Since the complex number r_p can be computed in advance for all specified values
of k and k_s, (5.4) has the same form as the original array response expression (2.2),
with r_p playing the role of the sensor response. Hence, the problem can be expressed
in terms of real quantities as in (2.21)-(2.25), where x now contains the real and
imaginary parts of the subarray weights w_p^a, and RS and IS in (2.14)-(2.15) are
replaced by the real and imaginary parts of r_p.
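As a check on (5.3)-(5.4), the following sketch (our code, assuming omnidirectional sensors B_j = 1 and two contiguous subarrays) confirms that unit outer weights w_p^a = 1 recover the original full-array response (2.2):

```python
import numpy as np

def subarray_responses(w_inner, r, k, ks, bounds):
    """Subarray responses (5.3), with B_j = 1.
    bounds[p] = (i, f) gives the half-open sensor index range of subarray p."""
    phase = np.exp(1j * (r @ (k - ks)))
    return np.array([np.sum(w_inner[i:f] * phase[i:f]) for i, f in bounds])

rng = np.random.default_rng(3)
n, wavelength = 8, 1.0
r = np.stack([np.arange(n) * wavelength / 2, np.zeros(n), np.zeros(n)], axis=1)
k = (2 * np.pi / wavelength) * np.array([0.6, 0.0, 0.8])   # unit direction u
ks = (2 * np.pi / wavelength) * np.array([0.0, 0.0, 1.0])
w_inner = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # the known w_jp

rp = subarray_responses(w_inner, r, k, ks, bounds=[(0, 4), (4, 8)])
full = np.sum(w_inner * np.exp(1j * (r @ (k - ks))))        # direct (2.2)
assert np.isclose(rp.sum(), full)   # (5.4) with all outer weights w_p^a = 1
```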
A subarray capability based on these ideas has been included in our computer
implementation. Some modifications to the expression for the effective number of
elements (2.9) are required. Substituting (5.1) into (2.9) yields

N_e = |Σ_{p=1}^P α_p w_p^a|^2 / Σ_{p=1}^P β_p |w_p^a|^2 = Num/Denom,     (5.5)

where

α_p = Σ_{j=i_p}^{f_p} w_jp,     (5.6)

β_p = Σ_{j=i_p}^{f_p} |w_jp|^2.     (5.7)

Defining

α_p = rα_p + i Iα_p,

w_p^a = rw_p^a + i Iw_p^a,

x^a = (rw_1^a, ..., rw_P^a, Iw_1^a, ..., Iw_P^a),

the numerator and denominator of (5.5) may be expressed in terms of real-valued
quantities as

Num = [Σ_{p=1}^P (rα_p x_p - Iα_p x_{P+p})]^2 + [Σ_{p=1}^P (Iα_p x_p + rα_p x_{P+p})]^2,

Denom = Σ_{p=1}^P β_p (x_p^2 + x_{P+p}^2).

The subarray counterpart of the efficiency constraint (2.25) is

Num - N̲_e · Denom ≥ 0.     (5.8)

6. Computer implementation

This section describes features of a FORTRAN 77 implementation of the algorithm
of Section 4.

Data structures

Most of the time required to solve the weighted L1 subproblem L1(λ) (over 90%
in large problems) is used to evaluate its objective, rewritten here as

f(x, λ) = Σ_{m=1}^M λ_m ((a_m^T x)^2 + (b_m^T x)^2).     (6.1)

This requires evaluation of 2M scalar products a_m^T x and b_m^T x, defined by (2.12)-
(2.19), rewritten here as

a_m^T x = Σ_{j=1}^n rs_jm x_j - Σ_{j=1}^n Is_jm x_{n+j}     (6.2)

        = ax1(m) - ax2(m),     (6.3)

b_m^T x = Σ_{j=1}^n Is_jm x_j + Σ_{j=1}^n rs_jm x_{n+j}     (6.4)

        = bx1(m) + bx2(m).     (6.5)

The data rs_jm and Is_jm are stored in two linear arrays RS and IS organized by sensors
within sidelobe points:

RS = (rs_11, ..., rs_n1, rs_12, ..., rs_n2, ..., rs_1M, ..., rs_nM),

IS = (Is_11, ..., Is_n1, Is_12, ..., Is_n2, ..., Is_1M, ..., Is_nM).

Storage is reserved for each scalar product a_m^T x, b_m^T x, m = 1, ..., M, since they are
used in the computation of ∇f. Denoting a_m^T x by atx(m) and b_m^T x by btx(m), these
quantities are computed via the following logic:

for m = 1, M do
    ax1 = 0, bx2 = 0
    for j = 1, n do
        k = (m-1)*n + j
        ax1 = ax1 + RS(k)*x(j)
        bx2 = bx2 + RS(k)*x(n+j)
    end
    atx(m) = ax1
    btx(m) = bx2
end
for m = 1, M do
    ax2 = 0, bx1 = 0
    for j = 1, n do
        k = (m-1)*n + j
        ax2 = ax2 + IS(k)*x(n+j)
        bx1 = bx1 + IS(k)*x(j)
    end
    atx(m) = atx(m) - ax2
    btx(m) = btx(m) + bx1
end

This scheme computes two of the partial sums in a single pass through the RS
array, followed by a pass through the IS array to compute the other two. This has
three advantages:

(1) Sequential access of array elements reduces paging dramatically on computers
utilizing virtual memory. It ensures that a large portion of the data elements retrieved
by a disk access are utilized in computations before another disk access is required.
Our initial implementation used 2-dimensional arrays for storing the RS and IS
data, and computed the scalar products a_m^T x and b_m^T x in an order which required
jumping back and forth in these arrays. Changing to the scheme described above
achieved a 48-fold reduction in paging activity and a 4-fold reduction in computation
time on a VAX 11/780 computer.
(2) These data structures lend themselves well to pipelining on machines capable
of vector processing.
(3) Using the one-dimensional arrays RS and IS eliminates the additional address
computations required with multi-dimensional arrays.
Computation of ∇f in (4.1) also has been organized so that single passes of RS
and IS are required.
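The two-pass logic can be mirrored in Python and checked against the vectorized definitions (6.2) and (6.4). In the sensors-within-sidelobe-points layout, rs_jm sits at 1-based position (m-1)*n + j, i.e. at m*n + j with 0-based indices; variable names below are ours:

```python
import numpy as np

rng = np.random.default_rng(4)
n, M = 5, 7
rs = rng.standard_normal((M, n))   # rs_jm: sensor j within sidelobe point m
Is = rng.standard_normal((M, n))
x = rng.standard_normal(2 * n)

# One-dimensional layout: sensors within sidelobe points
RS, IS = rs.ravel(), Is.ravel()

atx, btx = np.zeros(M), np.zeros(M)
for m in range(M):                 # first pass: sequential through RS only
    ax1 = bx2 = 0.0
    for j in range(n):
        k = m * n + j
        ax1 += RS[k] * x[j]
        bx2 += RS[k] * x[n + j]
    atx[m], btx[m] = ax1, bx2
for m in range(M):                 # second pass: sequential through IS only
    ax2 = bx1 = 0.0
    for j in range(n):
        k = m * n + j
        ax2 += IS[k] * x[n + j]
        bx1 += IS[k] * x[j]
    atx[m] -= ax2                  # a_m^T x = ax1 - ax2, per (6.3)
    btx[m] += bx1                  # b_m^T x = bx1 + bx2, per (6.5)

# Agreement with the direct vectorized forms of (6.2) and (6.4)
assert np.allclose(atx, rs @ x[:n] - Is @ x[n:])
assert np.allclose(btx, Is @ x[:n] + rs @ x[n:])
```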

Storage allocation

Most of the arrays used in the optimization program have dimensions which
depend on problem size (number of sensors, number of farfield points, or number
of subarrays). A pseudo-dynamic storage allocation scheme is employed, which
partitions all working array space out of blank COMMON. This reduces the need
to redimension and recompile when problem size changes, and allows storage used
to be more easily matched to what is required. It also assures that all arrays used
in solving the subproblems L1(λ) are contiguous, which helps reduce paging activity.

Algorithm modifications
The objective gradient ∇f in (4.1) is a sum of M terms. In problems with M = 284
and n = 42, the GRG2 optimizer encountered difficulties which stem from inaccuracies
in ∇f. Several subproblems terminated when no progress was made in a
search along the negative reduced gradient direction. At that point, the norm of
this gradient was not small, and only a few iterations had been done. We hypothesized
that this was due to roundoff error resulting from cancellation of terms in ∇f of
approximately equal magnitudes and opposite signs. These problems were eliminated
by accumulating separately the positive and negative terms (for each component
of ∇f) in the sum (4.1), adding them together only after all M terms had been
processed.

System design

The system was designed with three objectives in mind:


(1) It should be able to efficiently solve, via the algorithm above, problems of
any size (subject to computer memory limitations) and be flexible enough to permit
specification of a wide range of problems.
(2) It should be easy to use, with compact and understandable input formats.

(3) It should be sufficiently modular to facilitate modifications and extensions.


The final system consists of 6 modules: a main driver which controls the dual
maximization, multiplier adjustments, and storage allocations; an input module
which reads and checks the input problem specifications and produces a log of the
specifications for each problem; a module which reads in the array and farfield
point geometries and generates the RS and IS arrays; a module which evaluates
the L1 objective, the L1 problem constraints, and their derivatives; and the GRG2
optimizer which solves the L1 problems.
The main features of the system are:
(1) Input is free format and keyword-driven, with problem specifications entered
in any order.
(2) Any number of problems may be solved in a single run. Problems following
the first need only specify those parameters differing from the previous problem.
Several combinations of cold and warm starts are available.
(3) Array elements may be aggregated into subarrays (see Section 5).
(4) Sensor shadings may be constrained to be real. This is done by skipping all
computations involving the imaginary parts of sensor shadings.
(5) Two stopping criteria are available: absolute distance in db from the minimax
optimum (based on the distance from lower bound to upper bound), and the number
of L1 problems solved. Both of these may be specified by the user.
(6) Several levels of detail are available for system output.
Computational results achieved with this system are given in the next section.

7. Computational results

Initial implementations of this approach did not include the efficiency constraint
(2.9). Characteristics of six problems solved without this constraint are shown in
Table 1. All have omnidirectional sensors, i.e. B(k) = 1 for all k, and all but HA4
are line arrays. HA4 has a partial hoop geometry. All were solved allowing the

Table 1
Problem characteristics

                                           Size of L1 problem
Problem    Sidelobe points    Sensors    Variables    Constraints

ST1              30               7          14            2
ST2              60               7          14            2
ST3              30              15          30            2
ST4              60              15          30            2
HA4             182              31          62            2
LINE            172              49          98            2

shading coefficients to be complex, so the number of decision variables ranges from
14 to 98 (twice the number of sensors). Results are shown in Table 2. The column
headed "Iteration" gives the major iteration count, i.e. the value of iter in the
algorithmic description of Section 4, while "Subproblem iterations" is the number
of linesearches required by GRG2 to solve the subproblem L1(λ). The GRG2
stopping tolerance EPSTOP (see [4]) was set to 10⁻⁴, and most subproblems
terminated when the norm of the reduced gradient fell below 10⁻⁷. All runs were
made on an IBM 370/158 computer at the University of Texas at Austin, using the
VM/CMS operating system and the VS FORTRAN compiler.

As seen from Table 2, the difference between upper and lower bounds decreases
rapidly, becoming less than the tolerance of 2 decibels in all but two problems, and
coming close to that in those two. The number of subproblem iterations is always
significantly less than the theoretical maximum of 2n − 2 (the number of superbasic
variables). This limit applies because the objective is quadratic, the constraints

Table 2
Results for the problems of Table 1

Problem  Iteration  Pattern max (db)  Lower bound (db)  Difference  Subproblem iterations  Total time (sec)

ST1      0          -4.0              -12.5             8.5         5
         1          -5.1              -11.0             5.9         3
         2          -6.5              -9.6              3.1         3
         3          -7.7              -8.7              1.0         3                      4.5

ST2      0          -4.3              -13.3             9.0         5
         1          -4.6              -12.0             7.4         3
         2          -5.6              -10.6             5.0         3
         3          -6.8              -9.5              2.7         3
         4          -7.0              -9.2              2.2         3                      6.8

ST3      0          -32.9             -40.3             7.4         9
         1          -35.1             -39.1             4.0         7
         2          -36.2             -37.9             1.7         8                      6.6

ST4      0          -31.6             -41.2             9.6         9
         1          -34.0             -40.1             6.1         7
         2          -35.4             -38.8             3.4         8
         3          -36.3             -37.8             1.5         8                      12.2

HA4      0          -30.0             -44.2             14.2        16
         1          -32.9             -43.1             10.2        10
         2          -34.4             -41.6             7.2         10
         3          -35.9             -39.9             4.0         11
         4          -37.0             -38.7             1.7         14                     84

LINE     0          -66.2             -75.3             9.1         43
         1          -67.7             -74.0             6.3         14
         2          -68.7             -72.6             3.9         20
         3          -69.4             -71.5             2.1         10
         4          -69.4             -71.5             2.1         2                      176

linear, and a conjugate gradient method with exact linesearch is employed. Computa-
tion times increase rapidly with problem size, reflecting the increased time required
to evaluate the pattern at all sidelobe points.
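Most of the work per iteration is this pattern evaluation, which reduces to dense matrix-vector products with the RS and IS arrays. The following sketch is not from the paper: the function name and the exact way the shadings combine with RS and IS are assumptions, since the defining equations of Sections 2-3 are not reproduced here.

```python
import numpy as np

def pattern_levels_db(RS, IS, a, b):
    """Illustrative pattern magnitude (in db) at each farfield point.

    RS, IS : (m, n) arrays of phase terms for m farfield points and
             n sensors, as generated from the array geometry.
    a, b   : real and imaginary parts of the n sensor shadings.
    """
    # Real and imaginary parts of the complex pattern; the dense
    # matrix-vector products dominate the work (O(m*n) flops each).
    re = RS @ a - IS @ b
    im = IS @ a + RS @ b
    power = re**2 + im**2
    # Guard against log of zero at exact pattern nulls.
    return 10.0 * np.log10(np.maximum(power, 1e-300))

# Example with random stand-in data: 182 sidelobe points, 31 sensors
# (the dimensions of problem HA4 in Table 1).
rng = np.random.default_rng(0)
RS, IS = rng.standard_normal((2, 182, 31))
a, b = rng.standard_normal((2, 31))
peak = pattern_levels_db(RS, IS, a, b).max()  # peak sidelobe level in db
```

Evaluating this at every sidelobe point on every linesearch is what makes run time grow with both the number of sensors and the number of farfield points.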
The second set of results, given in Table 3, shows the effects of imposing the
efficiency constraint. All problems have omnidirectional sensors and a "stacked
hoop" geometry, with the sensors mounted on one, two, or four identical hoops,
each containing 42 sensors, spaced along the x axis. The "Eff. (%)" column gives
the required percent efficiency, N_e/n. All problems were run on a VAX 11/780 at
the Naval Underwater Systems Center in New London, Connecticut. Note the rapid
increase of the peak sidelobe level as the percent efficiency requirement is increased.
When no efficiency constraint is imposed, the final number of effective elements is
unacceptably small, and tends to decrease with problem size. There is obviously a
strong tradeoff between efficiency and peak sidelobe level. The final distance between
upper and lower bounds is small, and tends to decrease as the efficiency specification
increases.

Table 3
Runs of stacked hoop problems with different percent efficiency levels

Sensors/       Eff.  Final no. of        Max sidelobe  Max distance    Time           Subproblems
sidelobe pts.  (%)   effective elements  level (db)    from opt. (db)  (VAX)          solved

42/284         0     NA                  -35.1         2.9             9 min.         4
               30    12.6                -23.3         1.9             17 min.        4
               50    21                  -17.8         1.9             15 min.        1
               75    31.5                -11.6         0.9             22 min.        1

84/290         0     6.2                 -28           2.4             12 min.        4
               30    25.2                failed
               50    41                  -14.7         2.3             36 min.        4
               75    63                  -10.1         1.3             47 min.        1

168/290        0     0.02                -27.4         5.2             1 hr. 5 min.   4
               30    50.4                -20.6         2.4             1 hr. 27 min.  4
               50    84                  -15.2         1.9             1 hr. 14 min.  1
               75    126                 -10.3         1.3             1 hr. 58 min.  1

The effects of varying the width of the main lobe are shown in Fig. 4. These
patterns, all within approximately 2 db of minimax optimality, are for the 31 sensor
problem HA4 of Table 1. The sensors are arranged on a partial hoop in the half of
the yz plane with y positive, and both the farfield points and the patterns shown lie in
that half plane. Farfield point spacing is 1 degree, all sensors are omnidirectional,
the frequency is 300 Hz, required efficiency is 50%, and shadings are constrained to

[Figure 4: optimized patterns for 40°, 30°, and 20° beamwidths, plotted in db, with the interval for optimization marked.]

Fig. 4. Effect of varying width of main lobe.

be real. Hence each problem L1(λ) has 31 variables and 3 constraints. The steering
direction (as, bs, gs) is the positive y axis.
Four patterns are shown, with beamwidths of 40°, 30°, 20°, and 10°. Only the
points from -90° to +90°, those in front of the array, are included in the optimization.
As beamwidth decreases, the peak sidelobe level increases sharply, from about -45 db

[Figure 4 (continued): optimized pattern for 10° beamwidth, with the interval for optimization marked.]

Fig. 4 (continued).

at 40° to -28 db at 20° and -12 db at 10°. All patterns have several nearly equal
sidelobe peaks, a characteristic of minimax optimality. Each run took less than 1
minute on a VAX 11/780, using fairly bad initial estimates of the variables.
An optimal pattern for 40° beamwidth and 75% efficiency is shown in Fig. 5. The
peak sidelobe level rises to about -33 db, compared to -45 db at 50% efficiency.
For 40° beamwidth and 50% efficiency, the efficiency constraint is inactive at
optimality, so further decreases in the required efficiency level leave the pattern
unchanged.
The largest problems solved to date have 391 sensors mounted on 22 rings
on a cylindrically shaped surface, with long axis along the negative x axis. The
sensors are on half of this cylinder, and the 576 farfield points are on a hemisphere,
both in the half space with positive y values. These problems have RS and IS data
arrays of size 391 × 576 = 225,216, with either 391 or 782 variables (for real and

[Figure 5: optimized pattern in db, with the interval for optimization marked.]

Fig. 5. Problem of Fig. 4 with 40° beamwidth and 75% efficiency.



complex shadings respectively). Runs of such problems required 8 to 10 hours on
the VAX 11/780, using a cold start.
Data for two 391 element cases is given in Table 4. Case 1 has omnidirectional
sensors, while the sensors in case 2 have the "cardioid-opaque" response function

    B(k) = (1 + cos θ)²/4,  θ ≤ 90°,
    B(k) = 0,               θ > 90°.

In the above, θ is the angle between the direction of the incoming wave, k, and the
normal, n, to the surface on which the sensor is mounted. B(k) is unity in this
direction, falls to 1/4 on the plane normal to n, and is zero behind this plane.
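As a quick check on the formula, the cardioid-opaque response can be coded directly. This small function is not from the paper; it simply evaluates the two-branch expression above with θ given in degrees.

```python
import math

def cardioid_opaque(theta_deg):
    """'Cardioid-opaque' sensor response: (1 + cos(theta))^2 / 4 for
    theta <= 90 degrees, and zero behind the mounting plane."""
    if theta_deg > 90.0:
        return 0.0  # opaque: no response behind the plane normal to n
    c = math.cos(math.radians(theta_deg))
    return (1.0 + c) ** 2 / 4.0

# Unity toward the normal (theta = 0), about 1/4 in the mounting
# plane (theta = 90 degrees), and zero behind it.
response_front = cardioid_opaque(0.0)
response_side = cardioid_opaque(90.0)
response_back = cardioid_opaque(120.0)
```

The three sample evaluations reproduce the behavior stated in the text: unity toward the normal, one quarter on the plane normal to n, and zero behind that plane.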

Table 4
Data for two 391 sensor problems

                          Case 1        Case 2

Sensors                   391           391
Sidelobe points           576           576
Efficiency                40%           50%
Steering direction        +y axis       +y axis
Sensor type               omni          cardioid-opaque
Vertical beamwidth        35°           35°
Horizontal beamwidth      25°           30°
Shading type              real          real
Run time, VAX 11/780      8 hr 11 min   9 hr 34 min
Peak sidelobe level (db)  -29.4         -34.8
Total subproblem itns     381           501

Figures 6 and 7 contain plots of optimized patterns for each case, in the horizontal
(xy) and vertical (yz) planes. Both cases show significant sidelobe suppression
(about -30 db), which the lower bounds show to be within about 2 db of minimax
optimality. Directional elements clearly yield greater sidelobe suppression outside
of the range of optimization (from -90° to +90° in each plane).
Table 5 shows the results of each major iteration for these two cases. In case 1, 381
total subproblem iterations narrow the difference between upper and lower bounds
to 5.09 db, while case 2 requires 468 iterations to achieve a difference of 4.00 db.
Cases 1 and 2 take 8.18 and 9.55 hours respectively on the VAX. These times include
problem set-up and multiplier adjustment times, and are slightly longer than the
sum of subproblem solution times. Subproblems 1 and 2 combined require 73%
and 84% of the total time for cases 1 and 2 respectively, and about 80% of total
time for both cases is spent evaluating the subproblem objective f(x, λ) in (3.4).
In Table 5, the column headed "Subprob. itn when best peak found"
gives the subproblem iteration when the pattern with peak value listed in the "Pattern
max" column is found. This is the best pattern discovered up to that stage of the

[Figure 6: optimized patterns in the horizontal and vertical planes, with the interval for optimization marked.]

Fig. 6. Optimized patterns in horizontal and vertical planes for Case 1.

algorithm. In case 1, subproblem 1 (see the row with Itn = 0) yields a pattern with a
peak of -27.98 db in the 28th of its 138 iterations. Subproblem 2 improves this to
-29.39 db in the sixth of 120 iterations. The remaining 3 subproblems improve the
lower bound, but not the pattern peak. This behavior is even more pronounced in
case 2. Subproblem 1 finds a pattern with a peak of -34.86 db in the 23rd of 113
iterations, and further subproblems fail to improve on this value. Clearly, early
termination of each subproblem, based on behavior of the pattern peak, could
produce dramatic time savings. This would invalidate the lower bound computation,
but a final run with normal GRG2 termination criteria would re-establish this bound.
Such a strategy would allow more subproblems to be solved, and would permit the
weights λ to be adjusted by the dual ascent step more often. Experiments with these
ideas are currently under way.
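The control logic of such an early-termination strategy can be sketched abstractly. This is not the paper's implementation: the function names are hypothetical, with `line_search_step` standing in for one GRG2 linesearch and `pattern_peak_db` for the pattern evaluation; a subproblem is cut off once the incumbent peak stops improving.

```python
def solve_subproblem(start, max_itns, stall_limit,
                     line_search_step, pattern_peak_db):
    """Run linesearch iterations, stopping early when the best pattern
    peak found so far has not improved for `stall_limit` iterations.

    `line_search_step` and `pattern_peak_db` are hypothetical hooks:
    one optimizer linesearch, and the peak sidelobe level (in db) of
    the pattern at the current point.
    """
    x = start
    best_peak = pattern_peak_db(x)
    best_x, stall = x, 0
    for _ in range(max_itns):
        x = line_search_step(x)
        peak = pattern_peak_db(x)
        if peak < best_peak - 1e-6:   # a lower peak sidelobe is better
            best_peak, best_x, stall = peak, x, 0
        else:
            stall += 1
            if stall >= stall_limit:  # terminate on stagnation
                break
    return best_x, best_peak
```

Because a subproblem stopped this way is not solved to optimality, its objective value no longer yields a valid lower bound, which is why the text proposes one final run with normal termination criteria to restore it.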

[Figure 7: optimized patterns in the horizontal and vertical planes, with the interval for optimization marked.]

Fig. 7. Optimized patterns in horizontal and vertical planes for Case 2.

8. Conclusions

We have considered a class of sonar array design problems with a minimax
objective, a nonlinear efficiency constraint, and the shading coefficients of the sensors
as design variables. These problems are formulated as nonlinear programs with a
linear objective, many convex quadratic constraints, two linear constraints, and one
nonconvex quadratic constraint. A Lagrangian relaxation solution procedure which
relaxes the convex quadratic constraints leads to a Lagrangian subproblem with a
convex quadratic objective and the remaining three constraints. This subproblem
is efficiently solved by a reduced gradient algorithm. Extensive computational tests
have shown that this approach can solve problems with up to 391 variables and 579
constraints to within a few decibels of optimality. However, run times for problems
of this size on a VAX 11/780 computer are from 8 to 10 hours.

Table 5
Iteration summary for 391 element problems

                                                            Subprob. itn     Subprob.
         Pattern      Lower                     Subprob.    when best        solution
Problem  Itn  max (db)  bound    Diff.   itns   peak found  time (hrs.)

Case 1   0    -27.98    -42.49   14.51   138    28          2.93
         1    -29.39    -34.92   5.52    120    6           3.03
         2    -29.39    -34.72   5.33    49     --          0.95
         3    -29.39    -34.59   5.20    41     --          0.68
         4    -29.39    -34.48   5.09    33     --          0.56
Total, Case 1                            381                8.18

Case 2   0    -34.86    -51.76   16.89   113    23          2.40
         1    -34.86    -39.05   4.19    273    --          5.61
         2    -34.86    -38.96   4.09    64     --          1.12
         3    -34.86    -38.91   4.00    18     --          0.04
         4    -34.86    -38.87   4.00    1      --          0.04
Total, Case 2                            468                9.55

In a companion paper [7] we consider the implementation of the algorithm
proposed here on a vector processing computer, a Cray 1M. This machine allows
us to exploit the fact that over 99% of computation time is spent evaluating a
dense matrix-vector product. By appropriately reordering this computation, run
times for 391 variable problems are reduced from 8 to 10 hours to less than one
minute. Early termination of the Lagrangian subproblems, as discussed in Section
7, leads to further significant reductions.
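The spirit of that reordering can be suggested without the Cray-specific details (which are in [7]): the same dense matrix-vector product can be traversed as scalar inner products or as column-wise vector updates, and only the latter streams over long contiguous vectors of the kind vector hardware rewards. The sketch below is illustrative only, using NumPy vector operations as a stand-in for hardware vectorization.

```python
import numpy as np

def matvec_loop(A, x):
    """Row-by-row scalar inner products: no long vector operations."""
    m, n = A.shape
    y = np.zeros(m)
    for i in range(m):
        s = 0.0
        for j in range(n):
            s += A[i, j] * x[j]
        y[i] = s
    return y

def matvec_saxpy(A, x):
    """Column-sweep (SAXPY-style) ordering: each update is one long
    vector operation over a whole column of A."""
    m, n = A.shape
    y = np.zeros(m)
    for j in range(n):
        y += A[:, j] * x[j]
    return y

# Same product, two orderings, at the dimensions of the 391 sensor
# problems (576 farfield points by 391 sensors).
A = np.random.default_rng(1).standard_normal((576, 391))
x = np.random.default_rng(2).standard_normal(391)
assert np.allclose(matvec_loop(A, x), matvec_saxpy(A, x))
```

Both orderings perform the same O(m·n) arithmetic; the speedup on a vector machine comes entirely from how the memory accesses are arranged.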

Acknowledgement

The authors wish to thank Ms. Pat Maciejewski and Mr. Mohammed Ahmed of
the Naval Underwater Systems Center for their aid in the computational phases of
this work, and Dr. Neal Glassman of the Office of Naval Research for helping to
initiate the research.

References

[1] H. Cox and J. Thurston, "Advanced conformal submarine acoustic sensor beamforming engineering," Report No. 5258, Bolt Beranek and Newman, Inc., Suite 400, 1300 N. 17th St., Arlington, VA 22209 (March 1983).
[2] D.J. Edelblute, J.M. Fisk and G.L. Kinnison, "Criteria for optimum signal detection theory for arrays," Journal of the Acoustical Society of America 41 (1967) 199-205.
[3] L.S. Lasdon, A.D. Waren and D. Suchman, "Optimal design of acoustic sonar transducer arrays," Proceedings of IEEE International Conference on Engineering in the Ocean Environment 1 (1974) 3-10.
[4] L.S. Lasdon and A.D. Waren, "Generalized reduced gradient software for linearly and nonlinearly constrained problems," in: H. Greenberg, ed., Design and Implementation of Optimization Software (Sijthoff and Noordhoff, 1978) 363-397.
[5] W. Mossberg, "Quiet defender," The Wall Street Journal 1 (May 19, 1983) 22.
[6] B.A. Murtagh and M.A. Saunders, "Large-scale linearly constrained optimization," Mathematical Programming 14 (1978) 41-72.
[7] J. Plummer, L.S. Lasdon and M. Ahmed, "Solving a large nonlinear programming problem on a vector processing computer," Working Paper 85/86-3-1, Department of Management Science and Information Systems, School of Business Administration, University of Texas, Austin, TX 78712 (1985).
[8] D.F. Shanno, "Conjugate gradient methods with inexact searches," Mathematics of Operations Research 3 (1977) 241-254.
[9] R.L. Streit and A.H. Nuttall, "A general Chebyshev complex function approximation procedure and an application to beamforming," Journal of the Acoustical Society of America 72 (1982) 181-190.
[10] A.D. Waren, L.S. Lasdon and D. Suchman, "Optimization in engineering design," Proceedings of the IEEE 55 (1967) 1185-1196.
